From xen-devel-bounces@lists.xenproject.org Tue Jun 01 01:16:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 01:16:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134591.250328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnt0S-0005eP-W6; Tue, 01 Jun 2021 01:15:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134591.250328; Tue, 01 Jun 2021 01:15:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnt0S-0005eI-S4; Tue, 01 Jun 2021 01:15:44 +0000
Received: by outflank-mailman (input) for mailman id 134591;
 Tue, 01 Jun 2021 01:15:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnt0R-0005e8-B9; Tue, 01 Jun 2021 01:15:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnt0R-0003Pv-45; Tue, 01 Jun 2021 01:15:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnt0Q-0001ye-SH; Tue, 01 Jun 2021 01:15:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnt0Q-0001gY-Rk; Tue, 01 Jun 2021 01:15:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FwF2i25iIXK9QqKf7+2EJZPcxJrlyTs0RuQ68O4VDQM=; b=nbbVBEXUDDP+P/uoYd70zH0zIG
	MXWj1i65FPt/q9MTj5hAp9IFMyJwtmkOpjIusxtYaGyKafjzHNVyRpsiNKwM9GvunN7fOyRPo3sw2
	HnmKvO8xsEFZNj9RUdiW8JtlqWZFd7yCEx0hKzzo7TBfhFTyoIPoDOHCReYaUSxSkgkQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162276: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 01:15:42 +0000

flight 162276 xen-unstable real [real]
flight 162280 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162276/
http://logs.test-lab.xenproject.org/osstest/logs/162280/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 162269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162269
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162269
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162269
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162269
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162269
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162269
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162269
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162269
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162269
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162269
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162269
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162269
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162269  2021-05-31 01:52:43 Z    0 days
Testing same since   162276  2021-05-31 14:09:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 57f68dfd2d111a2ad381df740543c901b41f2299
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Mon May 31 12:47:12 2021 +0200

    x86/mtrr: remove stale function prototype
    
    Fixes: 1c84d04673 ('VMX: remove the problematic set_uc_mode logic')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit e95c243f67a95bca8b4be62b4e024c64ab082e56
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 31 12:46:48 2021 +0200

    x86/tboot: adjust UUID check
    
    Replace a bogus cast, move the static variable into the only function
    using it, and add __initconst. While there, also remove a pointless NULL
    check.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>

commit 8701f68d26afc641527c775d2e56a8709f535ffd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 31 12:45:51 2021 +0200

    x86/tboot: include all valid frame table entries in S3 integrity check
    
    The difference of two pdx_to_page() return values is a number of pages,
    not the number of bytes covered by the corresponding frame table entries.
    
    Fixes: 3cb68d2b59ab ("tboot: fix S3 issue for Intel Trusted Execution Technology.")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 02:26:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 02:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134610.250366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnu6Z-0004Iv-2Z; Tue, 01 Jun 2021 02:26:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134610.250366; Tue, 01 Jun 2021 02:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnu6Y-0004IZ-Rk; Tue, 01 Jun 2021 02:26:06 +0000
Received: by outflank-mailman (input) for mailman id 134610;
 Tue, 01 Jun 2021 02:26:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ur3e=K3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lnu6W-0004IQ-Q4
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 02:26:04 +0000
Received: from mail-oo1-xc32.google.com (unknown [2607:f8b0:4864:20::c32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41882eed-3583-42ce-b399-fdeb54c6accd;
 Tue, 01 Jun 2021 02:26:01 +0000 (UTC)
Received: by mail-oo1-xc32.google.com with SMTP id
 s24-20020a4aead80000b02901fec6deb28aso3151920ooh.11
 for <xen-devel@lists.xenproject.org>; Mon, 31 May 2021 19:26:01 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id q19sm2305346oov.18.2021.05.31.19.26.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 31 May 2021 19:26:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41882eed-3583-42ce-b399-fdeb54c6accd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=PHTJSZWgt9tLZnJJOY4xFyu77OyX5M5HRVhis1msNs0=;
        b=l1YdSZ9kLlNAQiAK8Za8dgUharrK+ruAwZmH0L+0y5EQDBTopHdkI/5cu/69nojmT0
         JNUaCisppDfdUzvvJjIw4aDR7e4DtX04BTpPA6XllsHpSUd9HAOMKZgtw1ae3MdkxFs5
         fy6cCTE4f/mWwYHpbEyQ4bKHvzDfvulXHS1zIux91o7yJxFMQBNN3CoPbCrKrTnS6ExX
         TPFm4TDZXkku0+qfHxTpgn4MOkzm0gXJ5Wsr25m0VWm3RtrHFVwgoKDW/5MLqZ0jKL8O
         /WP9U2uh5npu2y/KemO7G9v4TvSZ9xVrAsmjtz75g05otSpASEHbBWO2R7fNatPue/32
         CUMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=PHTJSZWgt9tLZnJJOY4xFyu77OyX5M5HRVhis1msNs0=;
        b=t5cO8xXQhZ1ETDEP6n+5VWe2NRjZxU6w8z4lJOlfHIsvJQM+2zyRijvSZdu8SOZpWm
         QCt5SarJ+9qhFxW3O6pRYq3o7Ch045QM0NrTtiNVbg2P+f++VTBe4BoNZkYVrewhUsaj
         oVWSWP+i+lAQ9V+TuX35Cagen2MV40onIAg6dCVA/0VJcGswk2EJqAorCU8gTedfbuCM
         WZJokJB7aoyzjcvR/iVOTmnkuZhbIu18aTgZOTa6nJ0r2FSa2O+DzyoIaF5Rf0aSTjvJ
         pYMcejcL0mbnTfYfTWICdfo5h/2dB/RdRMQhTqbvKso3MDtp8/iJcJyF71BuoukLYydS
         yV5A==
X-Gm-Message-State: AOAM533pVdL7xQlAXS5lL1JYVz/sQa6B2Oa/TcjrJgqk7bYGOulNR+YK
	TkWnxvNmO7cLmyW233vvmcD+5E7FE1r7EA==
X-Google-Smtp-Source: ABdhPJwq7U/6JNK233hBfcRXDY5zdhbtVgJeraGAPr5DLzlDXIvUzgTF/jomONwXqpbTrX/XP/p1uA==
X-Received: by 2002:a4a:e416:: with SMTP id t22mr18086340oov.39.1622514360774;
        Mon, 31 May 2021 19:26:00 -0700 (PDT)
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Bob Eshleman <bobbyeshleman@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
 <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
Date: Mon, 31 May 2021 20:26:08 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US



On 5/25/21 12:13 PM, Bob Eshleman wrote:
> On 5/25/21 1:48 AM, Jan Beulich wrote:
>> On 24.05.2021 16:34, Connor Davis wrote:
>>> Add arch-specific makefiles and configs needed to build for
>>> riscv. Also add a minimal head.S that is a simple infinite loop.
>>> head.o can be built with
>>>
>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>>
>>> No other TARGET is supported at the moment.
>>>
>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>> ---
>>>   config/riscv.mk                         |  4 +++
>>>   xen/Makefile                            |  8 +++--
>>>   xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>>>   xen/arch/riscv/Kconfig.debug            |  0
>>>   xen/arch/riscv/Makefile                 |  0
>>>   xen/arch/riscv/Rules.mk                 |  0
>>>   xen/arch/riscv/arch.mk                  | 14 ++++++++
>>>   xen/arch/riscv/asm-offsets.c            |  0
>>>   xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>>   xen/arch/riscv/head.S                   |  6 ++++
>>>   xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>>>   11 files changed, 137 insertions(+), 2 deletions(-)
>>>   create mode 100644 config/riscv.mk
>>>   create mode 100644 xen/arch/riscv/Kconfig
>>>   create mode 100644 xen/arch/riscv/Kconfig.debug
>>>   create mode 100644 xen/arch/riscv/Makefile
>>>   create mode 100644 xen/arch/riscv/Rules.mk
>>>   create mode 100644 xen/arch/riscv/arch.mk
>>>   create mode 100644 xen/arch/riscv/asm-offsets.c
>>>   create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>>   create mode 100644 xen/arch/riscv/head.S
>>>   create mode 100644 xen/include/asm-riscv/config.h
>> I think this wants to be accompanied by an addition to ./MAINTAINERS
>> right away, such that future RISC-V patches can be acked by the
>> respective designated maintainers, rather than falling under "THE REST".
>> Question is whether you / we have settled yet who's to become arch
>> maintainer there.
>>
>> Jan
>>
> I'd like to volunteer myself for this, as I'm slated to continue
> with the port indefinitely and would at least like to review
> patches.  If Connor has the time, I think it makes sense for him
> to be listed there too.
>
> Until we have others (it's just us two right now), it'll be a
> lot of us reviewing each other's arch-specific work (in addition to
> reviewers elsewhere in the Xen project, of course).
>
> -Bobby
>
Jan: can you list Bobby as the maintainer on commit? You can list me as
well, but unlike Bobby I can't guarantee a time commitment.

Thanks,
Connor



	d/aaRlfb1YCoAliL3pmyvAE53fX+FtKlcAWu9CDpQiYGp3rd3gRpi2rPrvmg7DxJVqDa9Il2syoRS
	VhJqxki2wAs01xwSnHssX7BGLoiAhxmCv//WhW6z/wZZ3mdXSpn8NynbKEtky2niA7j0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162277: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=52848929b70dcf92a68aedcfd90207be81ba3274
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 04:52:50 +0000

flight 162277 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162277/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                52848929b70dcf92a68aedcfd90207be81ba3274
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  284 days
Failing since        152659  2020-08-21 14:07:39 Z  283 days  524 attempts
Testing same since   162270  2021-05-31 03:30:37 Z    1 days    2 attempts

------------------------------------------------------------
519 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164273 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 06:01:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 06:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134659.250478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxSV-00022Y-6w; Tue, 01 Jun 2021 06:00:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134659.250478; Tue, 01 Jun 2021 06:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxSV-00022R-3O; Tue, 01 Jun 2021 06:00:59 +0000
Received: by outflank-mailman (input) for mailman id 134659;
 Tue, 01 Jun 2021 06:00:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnxST-00022G-M9; Tue, 01 Jun 2021 06:00:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnxST-0005Nk-Hh; Tue, 01 Jun 2021 06:00:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnxST-0006J3-AL; Tue, 01 Jun 2021 06:00:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnxST-00044x-9s; Tue, 01 Jun 2021 06:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DHeW43TS6s8r1i5PTQRL2FWIyJEXy6pNlxra6zmnDEM=; b=l+IuQSg6/fF2mPH++gprPkjz68
	C8C9U1T1aa50ilyvvdFtFVUo9oz3NkFEm8qBBv7UNyKoy8z7m1RxhSqd8bZuREEJUFXHW3o7piLGI
	s3wRHZa6A17VWArW6zejT4YIQOTZy+jdmeOBL4IwiQuvRbQpCmwS9UWhlkoCpy6AUupE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162295: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=70f53b1c04cfed8529c87c7be8ca4c76d1123a30
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 06:00:57 +0000

flight 162295 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162295/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              70f53b1c04cfed8529c87c7be8ca4c76d1123a30
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  326 days
Failing since        151818  2020-07-11 04:18:52 Z  325 days  318 attempts
Testing same since   162243  2021-05-28 04:20:06 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59708 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 06:03:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 06:03:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134667.250492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxUa-0002ll-ON; Tue, 01 Jun 2021 06:03:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134667.250492; Tue, 01 Jun 2021 06:03:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxUa-0002le-Kl; Tue, 01 Jun 2021 06:03:08 +0000
Received: by outflank-mailman (input) for mailman id 134667;
 Tue, 01 Jun 2021 06:03:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lnxUZ-0002ix-AD
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 06:03:07 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1692972c-a222-4f89-a4f6-a3a376e90a6a;
 Tue, 01 Jun 2021 06:03:06 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 76A3A1FD2D;
 Tue,  1 Jun 2021 06:03:05 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2D066118DD;
 Tue,  1 Jun 2021 06:03:05 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 8L6pCZnNtWBPeAAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Tue, 01 Jun 2021 06:03:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1692972c-a222-4f89-a4f6-a3a376e90a6a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622527385; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yMU2JPgK0qmrP32rl8NH4eVDq1jiSBJi8A+N3bYsJck=;
	b=ELdXrf8UX1LeQoLnrsYffBU7wi3GWrbd82YxwCFGdDMVWw6mLp3Ql5fOr+CEEzyw4QxZWl
	UOlEwDoNL9zfNgIo974Rsfn1kOd0b218WBLKR9t+Fw/g9AWzJwJULAtQC26BmuqN4Zv1Ts
	4yunzzirAiDtHnn6olNGsFtuRYMdcms=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622527385; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yMU2JPgK0qmrP32rl8NH4eVDq1jiSBJi8A+N3bYsJck=;
	b=ELdXrf8UX1LeQoLnrsYffBU7wi3GWrbd82YxwCFGdDMVWw6mLp3Ql5fOr+CEEzyw4QxZWl
	UOlEwDoNL9zfNgIo974Rsfn1kOd0b218WBLKR9t+Fw/g9AWzJwJULAtQC26BmuqN4Zv1Ts
	4yunzzirAiDtHnn6olNGsFtuRYMdcms=
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
 <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
 <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0eab888a-4c48-497d-6c43-b749389e52dd@suse.com>
Date: Tue, 1 Jun 2021 08:03:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: -1.00
X-Spamd-Result: default: False [-1.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 NEURAL_HAM_SHORT(-1.00)[-1.000];
	 RCPT_COUNT_SEVEN(0.00)[10];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[];
	 FREEMAIL_CC(0.00)[gmail.com,citrix.com,xenproject.org,xen.org,kernel.org,lists.xenproject.org]
X-Spam-Flag: NO

On 01.06.2021 04:26, Connor Davis wrote:
> 
> 
> On 5/25/21 12:13 PM, Bob Eshleman wrote:
>> On 5/25/21 1:48 AM, Jan Beulich wrote:
>>> On 24.05.2021 16:34, Connor Davis wrote:
>>>> Add arch-specific makefiles and configs needed to build for
>>>> riscv. Also add a minimal head.S that is a simple infinite loop.
>>>> head.o can be built with
>>>>
>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>>>
>>>> No other TARGET is supported at the moment.
>>>>
>>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>>> ---
>>>>   config/riscv.mk                         |  4 +++
>>>>   xen/Makefile                            |  8 +++--
>>>>   xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>>>>   xen/arch/riscv/Kconfig.debug            |  0
>>>>   xen/arch/riscv/Makefile                 |  0
>>>>   xen/arch/riscv/Rules.mk                 |  0
>>>>   xen/arch/riscv/arch.mk                  | 14 ++++++++
>>>>   xen/arch/riscv/asm-offsets.c            |  0
>>>>   xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>>>   xen/arch/riscv/head.S                   |  6 ++++
>>>>   xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>>>>   11 files changed, 137 insertions(+), 2 deletions(-)
>>>>   create mode 100644 config/riscv.mk
>>>>   create mode 100644 xen/arch/riscv/Kconfig
>>>>   create mode 100644 xen/arch/riscv/Kconfig.debug
>>>>   create mode 100644 xen/arch/riscv/Makefile
>>>>   create mode 100644 xen/arch/riscv/Rules.mk
>>>>   create mode 100644 xen/arch/riscv/arch.mk
>>>>   create mode 100644 xen/arch/riscv/asm-offsets.c
>>>>   create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>>>   create mode 100644 xen/arch/riscv/head.S
>>>>   create mode 100644 xen/include/asm-riscv/config.h
>>> I think this wants to be accompanied by an addition to ./MAINTAINERS
>>> right away, such that future RISC-V patches can be acked by the
>>> respective designated maintainers, rather than falling under "THE REST".
>>> Question is whether you / we have settled yet who's to become arch
>>> maintainer there.
>>>
>>> Jan
>>>
>> I'd like to volunteer myself for this, as I'm slated to continue
>> with the port indefinitely and would at least like to review
>> patches.  If Connor has the time, I think it makes sense for him
>> to be listed there too.
>>
>> Until we have others (it's just us two right now), it'll be a
>> lot of us reviewing each other's arch-specific work (in addition to
>> reviewers elsewhere in the Xen project, of course).
>>
>> -Bobby
>>
> Jan: can you list Bobby as the maintainer on commit? You can list me as
> well, but unlike Bobby I can't guarantee a time commitment.

Well, actually I did expect a v5 submission with whatever entry you
both deem suitable. On the other hand, I now realize that patch 1 had
my extra request addressed (by myself), and hence, with this one
needing just that entry, I could indeed add it myself while committing.
Let me move the mails back to the to-be-committed folder...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 06:11:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 06:11:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134678.250509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxcN-0004J9-NJ; Tue, 01 Jun 2021 06:11:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134678.250509; Tue, 01 Jun 2021 06:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnxcN-0004J2-Jv; Tue, 01 Jun 2021 06:11:11 +0000
Received: by outflank-mailman (input) for mailman id 134678;
 Tue, 01 Jun 2021 06:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lnxcN-0004Iv-0c
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 06:11:11 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e291bc1-0aae-472c-b083-d8194cba2e18;
 Tue, 01 Jun 2021 06:11:08 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BCB5F218B4;
 Tue,  1 Jun 2021 06:11:07 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 61B1F118DD;
 Tue,  1 Jun 2021 06:11:07 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 2EAdFnvPtWAvewAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Tue, 01 Jun 2021 06:11:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e291bc1-0aae-472c-b083-d8194cba2e18
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622527867; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MOQrSaE9kmycw5mZMZYdB+oQsrhglo+Ox4YYu40YY/E=;
	b=CWDXCRhi59PqIDbiSmC69MZc8rhLCGVdc7aJwyotoy+5b8uN1dViSyAJDdscCLyrABOaGX
	9R0pHOVVRMizVsKZSgOp1aDyQPKJ9baIG3/aPv4fjsnEngEpP01C1gNYjDP64C04ZlDtv4
	k2CQlzyDPRjFStsDJSzAJwOQ4sudcVA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622527867; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MOQrSaE9kmycw5mZMZYdB+oQsrhglo+Ox4YYu40YY/E=;
	b=CWDXCRhi59PqIDbiSmC69MZc8rhLCGVdc7aJwyotoy+5b8uN1dViSyAJDdscCLyrABOaGX
	9R0pHOVVRMizVsKZSgOp1aDyQPKJ9baIG3/aPv4fjsnEngEpP01C1gNYjDP64C04ZlDtv4
	k2CQlzyDPRjFStsDJSzAJwOQ4sudcVA=
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0f7192ec-5e50-c4b9-774c-febcade90288@suse.com>
Date: Tue, 1 Jun 2021 08:11:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: -1.00
X-Spamd-Result: default: False [-1.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 NEURAL_HAM_SHORT(-1.00)[-1.000];
	 RCPT_COUNT_SEVEN(0.00)[10];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[];
	 FREEMAIL_CC(0.00)[gmail.com,citrix.com,xenproject.org,xen.org,kernel.org,lists.xenproject.org]
X-Spam-Flag: NO

On 24.05.2021 16:34, Connor Davis wrote:
> Add arch-specific makefiles and configs needed to build for
> riscv. Also add a minimal head.S that is a simple infinite loop.
> head.o can be built with
> 
> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
> 
> No other TARGET is supported at the moment.

Just to confirm - an actual (even if not functioning) xen binary can't
be linked yet? I ask because that would be the prereq for me to set up
routine (cross) build testing, just like I do for Arm.

Jan
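
[Archive note: the quoted commit message describes head.S as "a simple
infinite loop". The patch itself is not included in this thread; a
hypothetical sketch of what such a minimal head.S could look like is
below. The section name, entry symbol, and directives are illustrative
assumptions, not taken from the actual patch.]

```asm
/* Hypothetical minimal head.S sketch: an entry point that spins
 * forever. Symbol and section names are assumptions for illustration. */
        .section .text.header, "ax", %progbits
        .globl  start
start:
        j       start          /* jump to self: infinite loop */
```

Built per the quoted commit message with
`make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o`,
this would produce only head.o; as the question above probes, linking a
full xen binary is a separate step.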


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 07:34:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 07:34:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134698.250552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnyup-0004dD-Mt; Tue, 01 Jun 2021 07:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134698.250552; Tue, 01 Jun 2021 07:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnyup-0004d6-Jt; Tue, 01 Jun 2021 07:34:19 +0000
Received: by outflank-mailman (input) for mailman id 134698;
 Tue, 01 Jun 2021 07:34:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnyuo-0004cw-AF; Tue, 01 Jun 2021 07:34:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnyuo-0006wk-2y; Tue, 01 Jun 2021 07:34:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnyun-000491-QR; Tue, 01 Jun 2021 07:34:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnyun-0005s2-Px; Tue, 01 Jun 2021 07:34:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CJ7t90S6hQybaJy6g9fDgRqeLc31JLvPcHIRchDQO1Y=; b=LNaBPG8gbeD80nkSEit6z57Gxf
	BMI3yXs/LLgQ54luMkp9h4xOwhq+rBPMh2NeKbvHGCSJtZM8seNJkDTFHHLiPe05wYbN0T5gU2Z/E
	j4tNRt6f4jLsjoKbHLHd7FAv2mklnQu4/E1LZNXMxd2BZIr0g1r3RNrnjdfi2r5jUnqc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162279-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162279: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2131f7e73c9e9365613e323d65c7b9e5b910f56
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 07:34:17 +0000

flight 162279 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162279/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 20 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c2131f7e73c9e9365613e323d65c7b9e5b910f56
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  304 days
Failing since        152366  2020-08-01 20:49:34 Z  303 days  517 attempts
Testing same since   162279  2021-05-31 20:42:18 Z    0 days    1 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665618 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 07:34:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 07:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134700.250567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnyv7-0004zZ-3q; Tue, 01 Jun 2021 07:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134700.250567; Tue, 01 Jun 2021 07:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnyv7-0004zS-0z; Tue, 01 Jun 2021 07:34:37 +0000
Received: by outflank-mailman (input) for mailman id 134700;
 Tue, 01 Jun 2021 07:34:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lnyv5-0004yD-Oc
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 07:34:35 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fde5eead-cb44-45fe-b7b8-7bf0977246d6;
 Tue, 01 Jun 2021 07:34:33 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id 739C5B1133A
 for <xen-devel@lists.xenproject.org>; Tue,  1 Jun 2021 09:34:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde5eead-cb44-45fe-b7b8-7bf0977246d6
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 09:34:33 +0200
From: AL13N <alien@rmail.be>
To: xen-devel@lists.xenproject.org
Subject: pci passthrough issue introduced between 4.14.1 and 4.15.0
Message-ID: <9562f5c0911567f12ed9fef8830f3018@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

Not 100% sure whether it's a bug or something I did wrong, but:

with xl create I start a PV guest with 3 PCI passthrough devices.

Afterwards, xl pci-list shows all 3 nicely.

Looking at the domU boot logs, pcifront is only creating one PCI device, 
and lspci in the guest shows only 1 PCI entry.

In at least 4.14.1 it still works.

Looking deeper, if you start with only 1 entry (or 0) and then do 
pci-attach for the other PCI entries, all of this works fine.

this is the pci section in question:

pci=['0000:04:00.0,permissive=1', 
'0000:00:1a.0,permissive=1,rdm_policy=relaxed,seize=1', 
'0000:00:1d.0,permissive=1,rdm_policy=relaxed,seize=1']

This works fine in 4.12.1 and 4.14.1, but fails in 4.15.0. However, 
booting with pci=['0000:04:00.0,permissive=1'] and then doing

[]# xl pci-attach <domain> 
'0000:00:1a.0,permissive=1,rdm_policy=relaxed,seize=1'
[]# xl pci-attach <domain> 
'0000:00:1d.0,permissive=1,rdm_policy=relaxed,seize=1'

also works.
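For reference, the manual fallback above can be scripted. This is only a sketch: "guest1" is a hypothetical domain name, and since xl only exists on a real Xen dom0, the snippet just prints the commands it would run rather than executing them.

```shell
#!/bin/sh
# Sketch of the working fallback: attach the extra devices after boot
# instead of listing all three in the domain config.
# "guest1" is a hypothetical domain name (an assumption, not from the
# original report); we print the xl invocations instead of running them.
domain=guest1
for bdf in '0000:00:1a.0,permissive=1,rdm_policy=relaxed,seize=1' \
           '0000:00:1d.0,permissive=1,rdm_policy=relaxed,seize=1'; do
    printf 'xl pci-attach %s %s\n' "$domain" "$bdf"
done
```

On a real dom0 the printf line would simply be replaced by the xl call itself.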

Regards,

Maarten (alien on Libera and OFTC)




From xen-devel-bounces@lists.xenproject.org Tue Jun 01 08:40:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 08:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134740.250619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnzwg-0005Np-6L; Tue, 01 Jun 2021 08:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134740.250619; Tue, 01 Jun 2021 08:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnzwg-0005Ni-3O; Tue, 01 Jun 2021 08:40:18 +0000
Received: by outflank-mailman (input) for mailman id 134740;
 Tue, 01 Jun 2021 08:40:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lnzwe-0005Nb-Qq
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 08:40:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lnzwd-00006a-N6; Tue, 01 Jun 2021 08:40:15 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lnzwd-0007fi-GR; Tue, 01 Jun 2021 08:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IJWbL+e1S2nPPWMdMcq0auZeSSq+qjk/7s/F7aq5Mr4=; b=XjWL/DGesC9jswEhrQKGIOVd49
	7wCpZZ5gFFsxHDWDDo585D5e51+WEVf7FBqAljdD2HM6T+gG34XwzfWzBKRoocKYI2tIW3GGNoTPu
	/6llLmiH9B2Bjgzmyu//EVm1c59HYfR3t+xPuHGDXiEc/lf6NxZeh8tVlvdykOgHQnf8=;
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
 <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
 <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
 <0eab888a-4c48-497d-6c43-b749389e52dd@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e95a2ae5-16a7-aa2d-67c9-b8a0d6f0c2e5@xen.org>
Date: Tue, 1 Jun 2021 09:40:13 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <0eab888a-4c48-497d-6c43-b749389e52dd@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 01/06/2021 07:03, Jan Beulich wrote:
> On 01.06.2021 04:26, Connor Davis wrote:
>>
>>
>> On 5/25/21 12:13 PM, Bob Eshleman wrote:
>>> On 5/25/21 1:48 AM, Jan Beulich wrote:
>>>> On 24.05.2021 16:34, Connor Davis wrote:
>>>>> Add arch-specific makefiles and configs needed to build for
>>>>> riscv. Also add a minimal head.S that is a simple infinite loop.
>>>>> head.o can be built with
>>>>>
>>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>>>>
>>>>> No other TARGET is supported at the moment.
>>>>>
>>>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>>>> ---
>>>>>    config/riscv.mk                         |  4 +++
>>>>>    xen/Makefile                            |  8 +++--
>>>>>    xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>>>>>    xen/arch/riscv/Kconfig.debug            |  0
>>>>>    xen/arch/riscv/Makefile                 |  0
>>>>>    xen/arch/riscv/Rules.mk                 |  0
>>>>>    xen/arch/riscv/arch.mk                  | 14 ++++++++
>>>>>    xen/arch/riscv/asm-offsets.c            |  0
>>>>>    xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>>>>    xen/arch/riscv/head.S                   |  6 ++++
>>>>>    xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>>>>>    11 files changed, 137 insertions(+), 2 deletions(-)
>>>>>    create mode 100644 config/riscv.mk
>>>>>    create mode 100644 xen/arch/riscv/Kconfig
>>>>>    create mode 100644 xen/arch/riscv/Kconfig.debug
>>>>>    create mode 100644 xen/arch/riscv/Makefile
>>>>>    create mode 100644 xen/arch/riscv/Rules.mk
>>>>>    create mode 100644 xen/arch/riscv/arch.mk
>>>>>    create mode 100644 xen/arch/riscv/asm-offsets.c
>>>>>    create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>>>>    create mode 100644 xen/arch/riscv/head.S
>>>>>    create mode 100644 xen/include/asm-riscv/config.h
>>>> I think this wants to be accompanied by an addition to ./MAINTAINERS
>>>> right away, such that future RISC-V patches can be acked by the
>>>> respective designated maintainers, rather than falling under "THE REST".
>>>> Question is whether you / we have settled yet who's to become arch
>>>> maintainer there.
>>>>
>>>> Jan
>>>>
>>> I'd like to volunteer myself for this, as I'm slated to continue
>>> with the port indefinitely and would at least like to review
>>> patches.  If Connor has the time, I think it makes sense for him
>>> to be listed there too.
>>>
>>> Until we have others (it's just us two right now), it'll be a
>>> lot of us reviewing each other's arch-specific work (in addition to
>>> reviewers elsewhere in the Xen project, of course).
>>>
>>> -Bobby
>>>
>> Jan: can you list Bobby as the maintainer on commit? You can list me as
>> well,
>> but I can't guarantee a time commitment unlike Bobby.
> 
> Well, actually I did expect a v5 submission with whatever entry you
> both deem suitable. Otoh I now realize that patch 1 had my extra
> request addressed (by myself) and hence with this one needing just
> that entry, I could indeed add that addition myself while committing.
> Let me move the mails back to the to-be-committed folder...

At the risk of being picky, I don't think this should be done on commit. 
Instead, a formal patch should be sent with both maintainers listed, plus 
one of the committers acking (or signing off on) the proposal.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 10:04:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 10:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134761.250663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1Fk-0005by-AF; Tue, 01 Jun 2021 10:04:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134761.250663; Tue, 01 Jun 2021 10:04:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1Fk-0005bi-7F; Tue, 01 Jun 2021 10:04:04 +0000
Received: by outflank-mailman (input) for mailman id 134761;
 Tue, 01 Jun 2021 10:04:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v8JE=K3=gmail.com=dunlapg@srs-us1.protection.inumbo.net>)
 id 1lo1Fj-0005bc-Cj
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 10:04:03 +0000
Received: from mail-ej1-x632.google.com (unknown [2a00:1450:4864:20::632])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77026a2f-d279-4d9b-95a7-8a1790c11c2e;
 Tue, 01 Jun 2021 10:04:02 +0000 (UTC)
Received: by mail-ej1-x632.google.com with SMTP id b9so20714962ejc.13
 for <xen-devel@lists.xenproject.org>; Tue, 01 Jun 2021 03:04:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77026a2f-d279-4d9b-95a7-8a1790c11c2e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=umich.edu; s=google-2016-06-03;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=NguXytoAzNDpHJ9iVaOA6ZF/SSQbqCjlr5BrWtTX8Hw=;
        b=Ive3Q/KNhC813q24syndBkjVnehTGM3ui91Kt9a+rF/7Bk7PJMLgdELeWPbCiwOOLm
         pkqNKDqnBu9c+2ki3c6skBVvm3eVNoWPZBeMHGyMH1ejgFU4YhYiLpbvwrMUVG2PKbmZ
         8D0Bm9C6vrIGq1pn+q7sICkWZWzcz9SS9UnGcZkhjhXAAhGDRnocAPkDWQpW7UX+SVzT
         8Bzrc2KA83SZSPQTh2CxkQP/wHNplYkI5PBtGGIxmnmcd0V+eFGV9MW1ofvnM40DL+bc
         5QMsJYwF6Ezy0xLySzg3zhWO7AxsAhjcdoB0vu4kLwKXdNgUMPRLa8sdKwFf0aQHyegM
         DV/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=NguXytoAzNDpHJ9iVaOA6ZF/SSQbqCjlr5BrWtTX8Hw=;
        b=t+qod+BVOG+Nl5qfPcoM157ekCfBiy5sJmLkRjcGWkbdTpgV/2C3F4lsWuHqIdUuW1
         Ifz1GatIZJFy1YwMB85nyVOgKMNWWRBRCD1xhjArS8vUwoJpXGEh/9zeE1ZP+USorEZ0
         EmfAW+OyN0hZvDp2y2FQgZp+oxi0h/eBHvW3EYRTXMJN16CVtJsvOy1c1xMVw8jnCkm1
         oIwwrb+Jt2+DeHOQ9D9ZkbfwiU9WzUqf7Cct3e8bCTrLREX7kvFGo0RX4oWpU1hPimGA
         ZXX4GwqwpOwammvzfhgTLqHiKdPJJWE7XMPg+GTmXtySqqA8Q4VMO0jg8kaQPlwnSk/9
         1esw==
X-Gm-Message-State: AOAM533C5w/sElpq+KSYkKdWvnzmlLmwXQLZ1jzsQyW2ydsFSlZZXMnF
	NLkHUS5c67UyqBGnszX6TDyUh5TV/A4ZfCj2LEL8oWnD
X-Google-Smtp-Source: ABdhPJxp9o6N7j96dsbOaH9JpXUniB66j4tTybQqUDB5cE5krtieoD9RKNHDJaxna3FskSM3IuJRwo9/fVj9NjBPXHs=
X-Received: by 2002:a17:906:51c7:: with SMTP id v7mr28934642ejk.86.1622541841528;
 Tue, 01 Jun 2021 03:04:01 -0700 (PDT)
MIME-Version: 1.0
References: <9562f5c0911567f12ed9fef8830f3018@mail.rmail.be>
In-Reply-To: <9562f5c0911567f12ed9fef8830f3018@mail.rmail.be>
From: George Dunlap <dunlapg@umich.edu>
Date: Tue, 1 Jun 2021 11:03:50 +0100
Message-ID: <CAFLBxZZqvdmGmSXS9P7XzsCahpyTcnrwQr+OLw-x_jqHwceWsg@mail.gmail.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
To: AL13N <alien@rmail.be>, Ian Jackson <Ian.Jackson@citrix.com>, 
	Anthony Perard <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000018feaa05c3b1769a"

--00000000000018feaa05c3b1769a
Content-Type: text/plain; charset="UTF-8"

On Tue, Jun 1, 2021 at 8:34 AM AL13N <alien@rmail.be> wrote:

> Not 100% it's a bug or something i did wrong, but,
>
> with xl create i start a PV with 3 pci passthroughs
>
> after wards, xl pci-list shows all 3 nicely
>
> looking at the domU boot logs, pcifront is only creating one pci device
> and lspci in the guest shows only 1 pci entry
>
> in at least 4.14.1 it still works.
>
> looking deeper, if you start with only 1 entry or 0 and you do
> pci-attach for the other pci entries, all of this works fine.
>
> this is the pci section in question:
>
> pci=['0000:04:00.0,permissive=1',
> '0000:00:1a.0,permissive=1,rdm_policy=relaxed,seize=1',
> '0000:00:1d.0,permissive=1,rdm_policy=relaxed,seize=1']
>
> this works fine in 4.12.1 and 4.14.1, but fails in 4.15.0, however
>
> booting with pci=['0000:04:00.0,permissive=1'] and then doing
>
> []# xl pci-attach <domain>
> '0000:00:1a.0,permissive=1,rdm_policy=relaxed,seize=1'
> []# xl pci-attach <domain>
> '0000:00:1d.0,permissive=1,rdm_policy=relaxed,seize=1'
>
> also works,
>

Sounds like it might be a tools issue?

 -George



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 10:08:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 10:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134767.250674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1K4-0006Iu-Ro; Tue, 01 Jun 2021 10:08:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134767.250674; Tue, 01 Jun 2021 10:08:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1K4-0006In-P2; Tue, 01 Jun 2021 10:08:32 +0000
Received: by outflank-mailman (input) for mailman id 134767;
 Tue, 01 Jun 2021 10:08:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lo1K3-0006Ig-6B
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 10:08:31 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e69cafdb-9885-4632-a491-1f0fc3ee296b;
 Tue, 01 Jun 2021 10:08:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 552611FD30;
 Tue,  1 Jun 2021 10:08:29 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 347FC118DD;
 Tue,  1 Jun 2021 10:08:29 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id GMJ/Cx0HtmBrAwAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Tue, 01 Jun 2021 10:08:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e69cafdb-9885-4632-a491-1f0fc3ee296b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622542109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1z4ijang45Vhplh+/3psU16FFTJ8psvdcXAbjGXAiK0=;
	b=m+6BBpDym4XTglt7cntwLFLqOWdtqgw/W0sNMjRfBb7Z1IXfTx4Z1O6K8/vvTGwBfLqxKd
	6dR++eXckXZsrobfF0LcwfBL/8w+QQ+iBfsL28eE/k6zDNXrhlCzu+dKMmzrBjHqzfH1n6
	nIhOZLWgmyVN8qkNJH3tqoK4lkiK0Lc=
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
To: AL13N <alien@rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
Date: Tue, 1 Jun 2021 12:08:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.06.2021 09:36, AL13N wrote:
> Not 100% sure it's a bug or something I did wrong, but:
> 
> with xl create I start a PV guest with 3 PCI passthroughs
> 
> afterwards, xl pci-list shows all 3 nicely
> 
> looking at the domU boot logs, pcifront is only creating one PCI device
> and lspci in the guest shows only 1 PCI entry
> 
> in at least 4.14.1 it still works.

This reminds me of my report at
https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html

Meanwhile the proposed pciback change has gone in upstream:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87

I wasn't, however, aware that this may have been an issue going
from 4.14.1 to 4.15.0, i.e. something that was presumably (as
George has also just said) a regression in the tools. Otherwise I
probably wouldn't have suggested taking care of this in Linux.
Nevertheless you may want to give that change a try.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 10:28:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 10:28:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134780.250701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1dL-0000Wi-Rp; Tue, 01 Jun 2021 10:28:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134780.250701; Tue, 01 Jun 2021 10:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo1dL-0000Wb-Op; Tue, 01 Jun 2021 10:28:27 +0000
Received: by outflank-mailman (input) for mailman id 134780;
 Tue, 01 Jun 2021 10:28:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QqUo=K3=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lo1dK-0000WU-Tz
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 10:28:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c22c3517-5919-43d2-8038-3e57be83f8f4;
 Tue, 01 Jun 2021 10:28:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c22c3517-5919-43d2-8038-3e57be83f8f4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622543305;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=qxOxkpgf3vvxIUnsnqn7ZffiZYbW75rYqnqUF1GTIo4=;
  b=Qv954ht0UIMpq1gsQNbcxrECl8Xn2DdaHSaG0y6+xsfpihulyJBCXR49
   oZ3fv1YAqhl216LERkXmA+/DqJB1btt+0BjYagt8HglsFqaRiFMmujJjE
   5m53NS1ufJpf2uYZvn8EPNdGKA93SJ1Gwa5oQBcIs+57lT1x6X8NFEgGI
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ms+u5OLTjbZmZ2Kr81fTzr2L6em8ROUEGvvBQM2O2ZiyQnOEBPJaTIiIpKXLJ/ZJsQKpoane7p
 0MRSmdBo8LTlybd90QLi6CJjohqewZnWwfguKZ6tNdSVC3caXneNcM9kh4f7Mz7n96sjA/R9h5
 Vkl9P+TofE3x5lnwtatIJEJfAQEag+nXGzejCzXMr9kx99QWVojKZHW8pTJ+ek3Z2kXStiqPxg
 LQhlyFKgHzmNJ9Vs2umBQ9ez+psUEV+sFvC2k6d0uhyiwWJiPcczYmStVCGjUU2T7zLb1Afl4/
 PwU=
X-SBRS: 5.1
X-MesageID: 44770327
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:7iypqaD5sj00z7TlHemg55DYdb4zR+YMi2TC1yhKJyC9Ffbo8/
 xG/c5rsyMc5wxwZJhNo7y90cq7MBbhHPxOkOos1N6ZNWGM0gaVxelZnOzfKlbbehEWmNQz6U
 4ZSdkdNOHN
X-IronPort-AV: E=Sophos;i="5.83,239,1616472000"; 
   d="scan'208";a="44770327"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] tools/firmware/ovmf: Use OvmfXen platform file if it exists
Date: Tue, 1 Jun 2021 11:28:03 +0100
Message-ID: <20210601102804.698364-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The OvmfXen platform introduced in EDK II is now the one to use for
Xen, instead of OvmfX64. It comes with PVH support.

Also, the Xen support in OvmfX64 is deprecated:
    "deprecation notice: *dynamic* multi-VMM (QEMU vs. Xen) support in OvmfPkg"
    https://edk2.groups.io/g/devel/message/75498

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

PVH support isn't working at the moment, but that's just a detail :-)
---
 tools/firmware/ovmf-makefile | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/firmware/ovmf-makefile b/tools/firmware/ovmf-makefile
index 55f999214545..637ee509c366 100644
--- a/tools/firmware/ovmf-makefile
+++ b/tools/firmware/ovmf-makefile
@@ -17,8 +17,14 @@ all: build
 .PHONY: build
 build:
 	if test -e .git ; then $(GIT) submodule update --init --recursive ; fi
-	OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4
-	cp Build/OvmfX64/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin
+	set -ex; \
+	if test -e OvmfPkg/OvmfXen.dsc; then \
+	  OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4 -p OvmfPkg/OvmfXen.dsc; \
+	  cp Build/OvmfXen/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin; \
+	else \
+	  OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4; \
+	  cp Build/OvmfX64/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin; \
+	fi
 
 .PHONY: clean
 clean:
-- 
Anthony PERARD
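
The detection logic the patch adds to the makefile can be sketched
standalone (this assumes the current directory is an edk2 checkout;
TARGET is illustrative, and the build/copy commands are echoed rather
than executed):

```shell
#!/bin/sh
# Sketch of the makefile change above: prefer the OvmfXen platform
# description when the edk2 tree provides it, otherwise fall back to
# the old OvmfX64 build. Commands are printed, not run.
TARGET=${TARGET:-RELEASE}   # example build type

if test -e OvmfPkg/OvmfXen.dsc; then
    platform=OvmfXen
    extra="-p OvmfPkg/OvmfXen.dsc"
else
    platform=OvmfX64
    extra=""
fi

echo "OvmfPkg/build.sh -a X64 -b $TARGET -n 4 $extra"
echo "cp Build/$platform/${TARGET}_GCC*/FV/OVMF.fd ovmf.bin"
```

Keying the branch on the presence of OvmfPkg/OvmfXen.dsc lets the same
recipe build both older edk2 trees (no OvmfXen platform) and newer ones.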



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 10:58:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 10:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134793.250734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo26A-0004Db-JG; Tue, 01 Jun 2021 10:58:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134793.250734; Tue, 01 Jun 2021 10:58:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo26A-0004DU-FG; Tue, 01 Jun 2021 10:58:14 +0000
Received: by outflank-mailman (input) for mailman id 134793;
 Tue, 01 Jun 2021 10:58:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo268-0004DI-Oq; Tue, 01 Jun 2021 10:58:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo268-0002mc-Ka; Tue, 01 Jun 2021 10:58:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo268-0005Gi-Ai; Tue, 01 Jun 2021 10:58:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lo268-0005MG-AE; Tue, 01 Jun 2021 10:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Kn4jmhNiIKMNnqDZrYgSp/sFohk8LTzACEb3BWv5nuY=; b=moFrMqkxMCPviJ8p2wbvBcdL6U
	+sPaqHWFJZb8TuJLl75JYnNUOt8MUBHCoXVleAamWyWE6jSc56YtV2H4IkIOQmFhG5CTYiqGBR8sI
	YGq0ijW1Y8fvPIOqYri3o2ydjlIxQdT4ADwiHDfQhhLuoQb6eys23imiWQXTz/jmxOCE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162282: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 10:58:12 +0000

flight 162282 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162282/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162269
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162269
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162269
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162269
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162269
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162269
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162269
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162269
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162269
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162269
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162269
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162269
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162269  2021-05-31 01:52:43 Z    1 days
Testing same since   162276  2021-05-31 14:09:01 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   683d899e4b..57f68dfd2d  57f68dfd2d111a2ad381df740543c901b41f2299 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 11:22:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 11:22:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134804.250750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo2TZ-0007fI-RH; Tue, 01 Jun 2021 11:22:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134804.250750; Tue, 01 Jun 2021 11:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo2TZ-0007fB-OJ; Tue, 01 Jun 2021 11:22:25 +0000
Received: by outflank-mailman (input) for mailman id 134804;
 Tue, 01 Jun 2021 11:22:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo2TY-0007f0-KT; Tue, 01 Jun 2021 11:22:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo2TY-0003Hk-EC; Tue, 01 Jun 2021 11:22:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo2TY-0006oV-7d; Tue, 01 Jun 2021 11:22:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lo2TY-00007D-7A; Tue, 01 Jun 2021 11:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0HO20aljrrvrBoSj6jxCyIFiovMWibnK24AJTZgdx1s=; b=ADO+QhFhZ5r4oQY/RGk51+4I14
	93PuHyMuM/5x9HlP7L36na/igpcU6CD6IX9VApUgyCBzUhPXtK0TmXkkyhhEgnfocXiPizJ7BZ0Y1
	+w3Lodxne9ZcDMVJiB2+WM0pgtsqBh3zwIIJXxw3rnrvXTeur/bBJtjPvB5u3WBQ5QZQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162288: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d3ff5dbe1dfc3420e5254d290500c0b6f6282d17
X-Osstest-Versions-That:
    ovmf=fe5da0927aad98f3c005088197fa30c1b8f9d3e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 11:22:24 +0000

flight 162288 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162288/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17
baseline version:
 ovmf                 fe5da0927aad98f3c005088197fa30c1b8f9d3e8

Last test of basis   162271  2021-05-31 03:40:13 Z    1 days
Testing same since   162288  2021-06-01 02:43:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kun Qin <kuqin12@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   fe5da0927a..d3ff5dbe1d  d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 11:54:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 11:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134812.250765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo2yq-0002tq-FQ; Tue, 01 Jun 2021 11:54:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134812.250765; Tue, 01 Jun 2021 11:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo2yq-0002tj-C7; Tue, 01 Jun 2021 11:54:44 +0000
Received: by outflank-mailman (input) for mailman id 134812;
 Tue, 01 Jun 2021 11:54:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lo2yo-0002tc-Ip
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 11:54:42 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d55c1ba9-0e74-4586-afef-48d0351a6c85;
 Tue, 01 Jun 2021 11:54:40 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 043AC2191F;
 Tue,  1 Jun 2021 11:54:40 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id C1FBB118DD;
 Tue,  1 Jun 2021 11:54:39 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id qlwZLv8ftmCxRAAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Tue, 01 Jun 2021 11:54:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d55c1ba9-0e74-4586-afef-48d0351a6c85
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622548480; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nB/B+v2Z9jxJ+dqPH9aL68ZYBi20yAHgPl9ohZJGzY0=;
	b=Pu+k8SPK+3Ji93GRnjIUTOPBAoOzzJI5Oe6fheR4CNak+VknEs3ZAs36vqfaq/fQY6yQ8y
	4co+cU+BzLJIC2pfHbYjJF8EdaZ//sNJ44IO/ong6Br9wgf1zrrztgE/a4/uzco/vB1TLm
	4vQoA1RVscyD7yixGbpTNKpTBulp0pQ=
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: Julien Grall <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e5e6ab32-815c-49d8-94f3-a75d975465b3@suse.com>
Date: Tue, 1 Jun 2021 13:54:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 20:48, Julien Grall wrote:
> On 27/05/2021 12:28, Jan Beulich wrote:
>> port_is_valid() and evtchn_from_port() are fine to use without holding
>> any locks. Accordingly acquire the per-domain lock slightly later in
>> evtchn_close() and evtchn_bind_vcpu().
> 
> So I agree that port_is_valid() and evtchn_from_port() are fine to use
> without holding any locks in evtchn_bind_vcpu(). However, it is
> misleading to say there is no problem with evtchn_close().
> 
> evtchn_close() can be called with current != d, and therefore there is a
> risk that a port may first be valid and then become invalid, because
> d->valid_evtchns is decremented in evtchn_destroy().

While this is the case for other functions as well (and hence a
comment along the lines of what you ask for below should have
been in place already), I've added

/*
 * While calling the function is okay without holding a suitable lock yet
 * (see the comment ahead of struct evtchn_port_ops for which ones those
 * are), for a dying domain it may start returning false at any point - see
 * evtchn_destroy(). This is not a fundamental problem though, as the
 * struct evtchn instance won't disappear (and will continue to hold valid
 * data) until final cleanup of the domain, at which point the domain itself
 * cannot be looked up anymore and hence calls here can't occur anymore in
 * the first place.
 */

...

> Thankfully the memory is still there, so the current code is okayish and
> I could reluctantly accept this behavior being spread. However, I don't
> think this should be left uncommented in either the code (maybe on top of
> port_is_valid()?) or the commit message.

... ahead of port_is_valid() (and not, as I originally intended, in
evtchn_close()). As far as the commit message goes, I'll have it
refer to the comment only.

I hope this satisfies both of your requests. I'll take the liberty
of retaining your ack, Roger.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:06:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134875.250830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo52O-0001qh-Vf; Tue, 01 Jun 2021 14:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134875.250830; Tue, 01 Jun 2021 14:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo52O-0001qa-Sp; Tue, 01 Jun 2021 14:06:32 +0000
Received: by outflank-mailman (input) for mailman id 134875;
 Tue, 01 Jun 2021 14:06:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lo52N-0001qU-AH
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:06:31 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ffe33ae8-98d4-44b0-8f6c-393e4e1857a9;
 Tue, 01 Jun 2021 14:06:29 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id 6832BB11C2F;
 Tue,  1 Jun 2021 16:06:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffe33ae8-98d4-44b0-8f6c-393e4e1857a9
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 16:06:28 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
Message-ID: <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

Jan Beulich schreef op 2021-06-01 12:08:
> On 01.06.2021 09:36, AL13N wrote:
>> Not 100% sure whether it's a bug or something I did wrong, but:
>> 
>> with xl create I start a PV guest with 3 PCI passthroughs.
>> 
>> Afterwards, xl pci-list shows all 3 nicely.
>> 
>> Looking at the domU boot logs, pcifront only creates one PCI device,
>> and lspci in the guest shows only 1 PCI entry.
>> 
>> In 4.14.1 at least, it still works.
> 
> This reminds me of my report at
> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
> 
> Meanwhile the proposed pciback change has gone in upstream:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
> 
> I wasn't, however, aware that this may have been an issue going
> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
> George also has just said) a regression in the tools. Or else I
> probably wouldn't have suggested taking care of this in Linux.
> Nevertheless you may want to give that change a try.

Well, the two tests differ only in the tools and hypervisor; no kernel
was changed between them, neither in dom0 nor in domU. So it might not
be pciback?


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:19:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134882.250841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Ex-0003OB-5x; Tue, 01 Jun 2021 14:19:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134882.250841; Tue, 01 Jun 2021 14:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Ex-0003O4-1l; Tue, 01 Jun 2021 14:19:31 +0000
Received: by outflank-mailman (input) for mailman id 134882;
 Tue, 01 Jun 2021 14:19:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4phv=K3=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lo5Ev-0003Ny-Gk
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 14:19:29 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f671a40-d242-4796-aae6-09857f4aec61;
 Tue, 01 Jun 2021 14:19:28 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 151E9nPV070385;
 Tue, 1 Jun 2021 14:18:47 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 38ue8pdn0b-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 01 Jun 2021 14:18:47 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 151EBFZJ194638;
 Tue, 1 Jun 2021 14:18:46 GMT
Received: from nam10-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam10lp2107.outbound.protection.outlook.com [104.47.58.107])
 by aserp3020.oracle.com with ESMTP id 38ude91kh0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 01 Jun 2021 14:18:46 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4271.namprd10.prod.outlook.com (2603:10b6:208:199::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.24; Tue, 1 Jun
 2021 14:18:42 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4173.030; Tue, 1 Jun 2021
 14:18:42 +0000
Received: from [10.74.100.249] (138.3.201.57) by
 SN1PR12CA0112.namprd12.prod.outlook.com (2603:10b6:802:21::47) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Tue, 1 Jun 2021 14:18:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f671a40-d242-4796-aae6-09857f4aec61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=iq2IoC+rAI5TZaWS2GJdPAbjYc8YtZgDLuwhYcoU5ZM=;
 b=Q27+2miE3VP5CE3bsn3MVjrzS13oYM38oxdXA7ZP4yHcnd4pviDQLZ777Q1WhJ/xtAl3
 n/dbQIAzxoy0EpMHvw72guPAFNtOaMhjOfY5QI6D8Al4M0tu2NSaNWCiJzIe1K/Oyg0B
 hIAJFoR/zc0s2Yzi1pXa0EJXzOjZmnPLjFcJKPwXDLOdvvdfKu7NBvBLalZMAWPhez7j
 EJzuLDOcQgjT92QjVHex3ye/3Kz0NaUgtrHQshNXZq7MYQ4c2d9zjsp3zFbknliip1xq
 lXAHYDN3bDdGN/YYbsr3GFdAvljlN9sV/gqQBpffHMsPjFPMzdDVqKspkdKBZZxP1U1v aQ== 
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
        "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
        "hpa@zytor.com" <hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
        "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
        "linux-mm@kvack.org" <linux-mm@kvack.org>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "axboe@kernel.dk" <axboe@kernel.dk>,
        "davem@davemloft.net" <davem@davemloft.net>,
        "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
        "len.brown@intel.com" <len.brown@intel.com>,
        "pavel@ucw.cz" <pavel@ucw.cz>,
        "peterz@infradead.org" <peterz@infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>,
        "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        dwmw@amazon.co.uk
References: <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
 <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
Date: Tue, 1 Jun 2021 10:18:36 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.57]
X-ClientProxiedBy: SN1PR12CA0112.namprd12.prod.outlook.com
 (2603:10b6:802:21::47) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0


On 5/28/21 5:50 PM, Anchal Agarwal wrote:

> That only fails during boot, not after control jumps into the image. The
> non-boot CPUs are brought offline (freeze_secondary_cpus) and then back
> online via the CPU hotplug path. In that case xen_vcpu_setup doesn't
> invoke the hypercall again.


OK, that makes sense --- by that time VCPUs have already been registered. What I don't understand though is why resume doesn't fail every time --- xen_vcpu and xen_vcpu_info should be different practically always, shouldn't they? Do you observe successful resumes when the hypercall fails?


>
> Another line of thought is to do what kexec does to get around this
> problem: abuse soft_reset and issue it during syscore_resume, or maybe
> before the image gets loaded. I haven't experimented with that yet, as I
> am assuming there has to be a way to re-register vCPUs during resume.


Right, that sounds like it should work.


-boris




From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:28:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134889.250852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Nn-0004td-44; Tue, 01 Jun 2021 14:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134889.250852; Tue, 01 Jun 2021 14:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Nm-0004tW-WA; Tue, 01 Jun 2021 14:28:38 +0000
Received: by outflank-mailman (input) for mailman id 134889;
 Tue, 01 Jun 2021 14:28:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lo5Nm-0004tN-0H
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:28:38 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c49875bc-3693-4736-979a-6d7cee828a54;
 Tue, 01 Jun 2021 14:28:36 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id AE896B11C68;
 Tue,  1 Jun 2021 16:28:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c49875bc-3693-4736-979a-6d7cee828a54
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 16:28:35 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
Message-ID: <9d4840391d9f99bd5b3f346a7782a1f9@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

AL13N wrote on 2021-06-01 16:06:
> Jan Beulich wrote on 2021-06-01 12:08:
>> On 01.06.2021 09:36, AL13N wrote:
>>> Not 100% sure it's a bug or something I did wrong, but:
>>> 
>>> with xl create I start a PV with 3 PCI passthroughs
>>> 
>>> afterwards, xl pci-list shows all 3 nicely
>>> 
>>> looking at the domU boot logs, pcifront is only creating one PCI
>>> device, and lspci in the guest shows only 1 PCI entry
>>> 
>>> in at least 4.14.1 it still works.
>> 
>> This reminds me of my report at
>> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
>> 
>> Meanwhile the proposed pciback change has gone in upstream:
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
>> 
>> I wasn't, however, aware that this may have been an issue going
>> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
>> George also has just said) a regression in the tools. Or else I
>> probably wouldn't have suggested taking care of this in Linux.
>> Nevertheless you may want to give that change a try.
> 
> Well, both tests have only different tools and hypervisor; no kernel
> was changed between both tests, neither in dom0 nor in domU. So it
> might not be pciback?

Forgot to mention: dom0 is 5.7.19 and domU is 5.10.27 for all tests.


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:33:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:33:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134896.250863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Sp-0006Ks-Mt; Tue, 01 Jun 2021 14:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134896.250863; Tue, 01 Jun 2021 14:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5Sp-0006Kl-JS; Tue, 01 Jun 2021 14:33:51 +0000
Received: by outflank-mailman (input) for mailman id 134896;
 Tue, 01 Jun 2021 14:33:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lo5So-0006Ke-0X
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:33:50 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [62.140.7.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id adde564b-427a-4b6e-ae3f-0c35ab59406e;
 Tue, 01 Jun 2021 14:33:49 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2057.outbound.protection.outlook.com [104.47.13.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-vEedeJqRNZuSiJn43XEciQ-1; Tue, 01 Jun 2021 16:33:46 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7039.eurprd04.prod.outlook.com (2603:10a6:800:12b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.21; Tue, 1 Jun
 2021 14:33:45 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Tue, 1 Jun 2021
 14:33:45 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0007.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.11 via Frontend Transport; Tue, 1 Jun 2021 14:33:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adde564b-427a-4b6e-ae3f-0c35ab59406e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622558028;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WPcPwxps5kyMy7X8TpWBhD/PnI8UMa6lr20ghPnD918=;
	b=fJCDwD8PlHstsEBBKmG0Yp7F4j6y9TKLrMMZMYsk2quEjsX16cji5zgGDI8sW2LsLVIF7l
	bSQZeJT97r+cvZysb2/IW60clJhrvOnEb4V64S3m/xDgDcoICjZ2xX7FeUPX8nKx87f+oh
	tHU09aV/W+8P2eBILDBTU7BAsghq3k8=
X-MC-Unique: vEedeJqRNZuSiJn43XEciQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I02Qou4eiuomiTz+N8vP9AQcCrXJme7566hUmkDcxlxgxI6I4YAyHqAFFM7EmX5xUYRPli4csDaVu+BxTcDLZT3dsf1N7UsT2XFvxBgkZWHP4ilA8ExiIoPxtU3o4Nbu6Vg+ep4w6XtXJftEFwiTk4koNm2Kk1eOrtefGi4+blUxd9etuOBlZzdEWD3Wr0F32tsJlhLtVkdok2okPvqJR2XyMBE68TTdWQMAYNrOmJexRB9ooUIdA+Kt+9cvHzIP6t7cjSx/VlaifEkEPFT8PcnIRfI5PS1JSa+ZWWGVkKAH9XyU0lfnchVgehETa1+FwBwJzbi0kej5NYlRuH856w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WPcPwxps5kyMy7X8TpWBhD/PnI8UMa6lr20ghPnD918=;
 b=Wf3wq6h2uGFgTB7YBHCtDcBRrkbIGxs9Bef1eH7JVaHwYUVsSa9IL7++2t8nt710nLmXj5HODL63oLrPV1QUMioAhEGAFh48lTTzmXGK+Ugi1apzQC3DTmRP6IwNCgznHnlc0xSizfFZEw5JhaqHORkCad1gRHLfBLkdb9qyuOcWJ5j4Qt6rAvkuYBV4ZzhqQ4G3FaUOUiDDcX/CJXAhY/fy/HTLxoFmrZFSWQpso0Rj5poGIE/OuB2umfGIkA/fuZfGrADGf3EsRezoYNURQQXzAq22GvP4aU29cPJiN4TeDbuhxcdAJa+totp/Z4+BFIUURnjopKCBytWiu+3i7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xen.org; dkim=none (message not signed)
 header.d=none;lists.xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
To: AL13N <alien@rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
Date: Tue, 1 Jun 2021 16:33:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0007.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::6) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 00cd749e-a20d-4832-ffe5-08d9250a42e2
X-MS-TrafficTypeDiagnostic: VI1PR04MB7039:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7039692DEBEA3B7BBD4017DFB33E9@VI1PR04MB7039.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Qk1cXsbPA3TFb9Ab/5KjyoKHURdeYuElCIqlv0FP2piRAL4VE33CAEVC0zCfSU3TJrxt8NVMgHNXtOwz5eyQ0gN+yTnW1xK/R5DFM/qHejMwQbdV/FmSuaennvCFV67RZsK6Uq5U42H6q11aQ+3tObB9YLB3mAB+Vst5EdgXVHa6mzRVvhvjo06d6j22UL8+11If1dCkxLRoArhB8oiiWUy5/tjErw7fX05u1CcR1EXRPxL7nO4fYabgGpoKto2groPAd6Z14XreoAov8KrEbyaakAQC9nv7fD4A1/6tdB1504z108DxVhGDnKm4tV3mvC4dVQ62em7ZbqRwM8Y4kfZPb3/Z6Mto8+qI5D4fDU2jRSYpLOVand5Jz89TnHDJj+a6hzeNvLIIXtGN4i/bxVpvvvldtQ8q0mvVjh61thj3hXgNRtrmWSdwJS9A6X62Ydn8gu3amLfQfF20iiR4Omrm+5drEmG2D0PpxxQfxcZFQeVYOfWmWj2AkTXxzl5DRSQCMt7w14B+Bky1paGXp7j19Qz+XKebb8wYBcO98UN9B97HNrDk/PvbABYd8848VQioWN8d0oi3W0O9RzMKNm1rN2fqlfaQapVyq7V8FPBS6M3nedgiwPvdbcw0YXJ0tiuBliKZkLJFXbuAvDNN6OQp2RUq/puttd8ibXr/VaP1/IpOFuqJcPiKY8xZ+oS7DKYGQ3doeaEjYnnpp2lL787vuJn96y9HDLlo4vFFdP+s7tgH0kRGSwEms9UnSRsQaC2wmQkQ0dnztsJjFfZZzxYgvoolGdVmVjNrrEMQP0Q=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(16576012)(86362001)(66476007)(66556008)(31696002)(38100700002)(8676002)(2906002)(26005)(66946007)(4326008)(186003)(16526019)(53546011)(966005)(5660300002)(31686004)(2616005)(36756003)(498600001)(8936002)(6486002)(6916009)(956004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?enlwTGVFNzFWbldwR2pPdy9xYVBLNEl6Qk1VZExVc2VRcGtZQkhJUkQ0b3A1?=
 =?utf-8?B?eWQ5Y1RTSWpGNWNZanNBbkxvVEZFbTErcU81R2dHS3d3ek9TdjcvdnJmTldR?=
 =?utf-8?B?Qm9Zd1RNaTVRMG15MmVQZHdiUzU1MjhVYThFQ1hPRDB5YlFtSm8rWjgrWTFl?=
 =?utf-8?B?ZFIxTCtqdmRRQzFlN2NneW1lVlNCU1VQNFQxMXdvTDJxamFVYnUrUERUWTA1?=
 =?utf-8?B?Zk9mQUJtYllkTmNCRzg0SnBEdUwrUDlWZ1NpaTljSjNtK0FveUNVWVhTc1hS?=
 =?utf-8?B?SVY0ZlJxRS9Nb0hUTlNxbEwxV2dIT3RFMmxvZVh2QWxXcGtzdjR5MElKZTVX?=
 =?utf-8?B?VnMrRzR4bkI3ZS9EMmcxdFF5a1pvTzJIdHQ3U3BHY0R3VkluQXpkanhGb05x?=
 =?utf-8?B?WHFJb0JZV3kxVEU4UGlOQSt6UHZHdDl2NlhwZ1FQd0NuTnA4ekJYVzU5QVJ0?=
 =?utf-8?B?eUVlQXFtc3M4dnRqemo1NmVrczRpUVQ3bGoxZW53akN0RHJ2TVdhOStPL20y?=
 =?utf-8?B?MElCM0Y4SUNldSt6eXlyM3k5UlRBN01FQU5TY0w3bi9SMWZxV20vZHBMKzMy?=
 =?utf-8?B?MlEzVGlEa2hScFRVWjRDbHA0bUF3NjNHQWpqSGRaZ3NrYmxaUWZTemdIVE8v?=
 =?utf-8?B?a1padTNudUhCcWwxYU5JTy83UmZlOExHeHp3cTV1ZzZnUUt2bGhmSVQ4VUUr?=
 =?utf-8?B?aWhtVUVXTmxKSEtBRDkwbkR3dUl2S01iRjFSbWxMSjB5WkZVWFVNdTdGSmlh?=
 =?utf-8?B?SGNDQTUzNDNhTTZTTFJEYWdZV0F1anNPaXdYZVdDQmQvSmdLcjJ1dnNWa1JM?=
 =?utf-8?B?VndVS3Bhc3VGakxoVG5WcTVYb042NjNSRWEybFNNNk1hVXJaaWZybnN4aUdz?=
 =?utf-8?B?MDJLMm55UlA4NUF4bnNDa2htalRQOGZBVmkwWUYzSDVBdnd1L3JEM1FROVdO?=
 =?utf-8?B?cUVNaVIyUWZnNndGcjlrQ1R5akRTM1VKeUVBQ0NnbzRQdkVBV1RVS01aRDli?=
 =?utf-8?B?N0FhQzYrRFdHbTFZd0tnMEZUQ0pSVlpESkdvRE9uTWxIaElDeXY5aldQWWdW?=
 =?utf-8?B?S3pVS3Y5T0pSdS80VVdad0ZzdCtnTHFpSm9VY1Z6ZVR5UEgyY3lQeXY2cFo2?=
 =?utf-8?B?SWRNUXNqbTVJcUVxUkJYNjI2OFlHaUdQZGszeU5jdHFrcyt1cDlNR0ZpSU9B?=
 =?utf-8?B?V2JHbVJJQWFOZnlwZTNYSHdGQ1h2eWVGMGE1akFPeTJaQld5TWlRNVZqOGc5?=
 =?utf-8?B?WGErVVBqT1ZtRmdJdEVHaGl3NlZITFNJT0ZYOEJjakNNL3MwN0xsSjFEQUo5?=
 =?utf-8?B?TFZTTEp4UExpZVJzdzc5NHk4NVkvN2xhTnpUMFFyREpCR3MxdUZzeEpDK1hm?=
 =?utf-8?B?bFBVMVdkdGdHZ1JBVjhIM2srdm9XalkzVzgyMkkvcUQwbWkrU3NFSlh3dnlI?=
 =?utf-8?B?SHBtYUJ1b0FObGQxV1YrRGtibklxdU9IOFAvRFJXMXRGZUkyaEY3c3dDTFpi?=
 =?utf-8?B?TWRkL3RBL0lYQ01WWlZjZ2NyREY5KzQ4WVhIVGZaN1MvZWxmL09hZTUxc2xK?=
 =?utf-8?B?ckNpczhlODVnc3FUVDFWdlBROVhobE9OcmZybElWUVV6V2ZEcHlraU5uVFBa?=
 =?utf-8?B?L3JjQnRKS2ZNTEllYzBPc0lBWFFwVkV2dWhnYlgvM0JDanNlKzZRTVZEQXQ2?=
 =?utf-8?B?VXpEdkN3d3U0a1l1bnRCRUExQmhFUWVwWXNIM2E5TFM5aDZtVUNlNkdFejZF?=
 =?utf-8?Q?jJiMd50F9rv4y2ycZArAqZxXkz+AKzsHe397LWb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 00cd749e-a20d-4832-ffe5-08d9250a42e2
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2021 14:33:45.2958
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yLZTEBwpnzyOLXAK+7Axox02lH/m4dyofCwxFsT05bfUmMSEPNGbcdLE2Hy+5hHXOGaHdIjIH3EfqhTx1zrGZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7039

On 01.06.2021 16:06, AL13N wrote:
> Jan Beulich wrote on 2021-06-01 12:08:
>> On 01.06.2021 09:36, AL13N wrote:
>>> Not 100% sure it's a bug or something I did wrong, but:
>>>
>>> with xl create I start a PV with 3 PCI passthroughs
>>>
>>> afterwards, xl pci-list shows all 3 nicely
>>>
>>> looking at the domU boot logs, pcifront is only creating one PCI
>>> device, and lspci in the guest shows only 1 PCI entry
>>>
>>> in at least 4.14.1 it still works.
>>
>> This reminds me of my report at
>> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
>>
>> Meanwhile the proposed pciback change has gone in upstream:
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
>>
>> I wasn't, however, aware that this may have been an issue going
>> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
>> George also has just said) a regression in the tools. Or else I
>> probably wouldn't have suggested taking care of this in Linux.
>> Nevertheless you may want to give that change a try.
> 
> Well, both tests have only different tools and hypervisor; no kernel
> was changed between both tests, neither in dom0 nor in domU. So it
> might not be pciback?

Well, if the problem was introduced in the tools, and that was the
reason I ran into the same or a similar issue, the patch may still
address it, even if - in case it's a regression in the tools - it
would have been better to also fix the issue there. As said, when
analyzing the issue I didn't see any indication of changed tool stack
behavior, i.e. I assumed the problem had always been there.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:44:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134905.250874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5dA-0007w6-Nj; Tue, 01 Jun 2021 14:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134905.250874; Tue, 01 Jun 2021 14:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5dA-0007vz-Kg; Tue, 01 Jun 2021 14:44:32 +0000
Received: by outflank-mailman (input) for mailman id 134905;
 Tue, 01 Jun 2021 14:44:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lo5d8-0007vt-W1
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:44:31 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d6351342-6f8c-47cc-a737-8a4e9d9686bc;
 Tue, 01 Jun 2021 14:44:29 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id B5F7BB11C91;
 Tue,  1 Jun 2021 16:44:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6351342-6f8c-47cc-a737-8a4e9d9686bc
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 16:44:28 +0200
From: AL13N <alien@rmail.be>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
Message-ID: <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

Jan Beulich wrote on 2021-06-01 16:33:
> On 01.06.2021 16:06, AL13N wrote:
>> Jan Beulich wrote on 2021-06-01 12:08:
>>> On 01.06.2021 09:36, AL13N wrote:
>>>> Not 100% sure it's a bug or something I did wrong, but:
>>>> 
>>>> with xl create I start a PV with 3 PCI passthroughs
>>>> 
>>>> afterwards, xl pci-list shows all 3 nicely
>>>> 
>>>> looking at the domU boot logs, pcifront is only creating one PCI
>>>> device, and lspci in the guest shows only 1 PCI entry
>>>> 
>>>> in at least 4.14.1 it still works.
>>> 
>>> This reminds me of my report at
>>> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
>>> 
>>> Meanwhile the proposed pciback change has gone in upstream:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
>>> 
>>> I wasn't, however, aware that this may have been an issue going
>>> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
>>> George also has just said) a regression in the tools. Or else I
>>> probably wouldn't have suggested taking care of this in Linux.
>>> Nevertheless you may want to give that change a try.
>> 
>> Well, both tests have only different tools and hypervisor; no kernel
>> was changed between both tests, neither in dom0 nor in domU. So it
>> might not be pciback?
> 
> Well, if the problem was introduced in the tools and this having been
> the reason for me running into the same or a similar issue, the patch
> may still address the issue, even if - in case it's a regression in
> the tools - it would have been better to also address the issue there.
> As said, when analyzing the issue I didn't have indications of changed
> tool stack behavior, i.e. I assumed the problem would have always been
> there.

Yeah, after rereading the thread, I got that impression.

Though after a quick grep:

[alien@localhost xen]$ git log RELEASE-4.14.1..RELEASE-4.15.0 --oneline --decorate -- tools/xl/ | grep -i pci
bdc0799fe2 libxlu: introduce xlu_pci_parse_spec_string()
96ed6ff297 libxlu: introduce xlu_pci_parse_spec_string()
929f231140 libxl: introduce 'libxl_pci_bdf' in the idl...
c00da82355 libxl: add libxl_device_pci_assignable_list_free()...
7499b22ba1 libxl: make sure callers of libxl_device_pci_list() free the list after use
6c2590967f xl: s/pcidev/pci where possible


It doesn't seem like one of these? (Well, I'm not familiar with any of
the Xen code.)

This mailing list is the correct place for the toolstack too, right?

Regards,

Maarten


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:48:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134911.250885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5h2-00008C-7U; Tue, 01 Jun 2021 14:48:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134911.250885; Tue, 01 Jun 2021 14:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5h2-000085-4a; Tue, 01 Jun 2021 14:48:32 +0000
Received: by outflank-mailman (input) for mailman id 134911;
 Tue, 01 Jun 2021 14:48:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lo5h1-00007z-B4
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:48:31 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8c8fc680-83e5-422b-ab21-d62bd6a5cf0a;
 Tue, 01 Jun 2021 14:48:30 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id D9069B11C9B;
 Tue,  1 Jun 2021 16:48:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c8fc680-83e5-422b-ab21-d62bd6a5cf0a
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 16:48:29 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
Message-ID: <c216536c95b4febf9f5565c290f6ecb9@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

AL13N wrote on 2021-06-01 16:44:
> Jan Beulich wrote on 2021-06-01 16:33:
>> On 01.06.2021 16:06, AL13N wrote:
>>> Jan Beulich wrote on 2021-06-01 12:08:
>>>> On 01.06.2021 09:36, AL13N wrote:
>>>>> Not 100% sure it's a bug or something I did wrong, but:
>>>>> 
>>>>> with xl create I start a PV with 3 PCI passthroughs
>>>>> 
>>>>> afterwards, xl pci-list shows all 3 nicely
>>>>> 
>>>>> looking at the domU boot logs, pcifront is only creating one PCI
>>>>> device, and lspci in the guest shows only 1 PCI entry
>>>>> 
>>>>> in at least 4.14.1 it still works.
>>>> 
>>>> This reminds me of my report at
>>>> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
>>>> 
>>>> Meanwhile the proposed pciback change has gone in upstream:
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
>>>> 
>>>> I wasn't, however, aware that this may have been an issue going
>>>> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
>>>> George also has just said) a regression in the tools. Or else I
>>>> probably wouldn't have suggested taking care of this in Linux.
>>>> Nevertheless you may want to give that change a try.
>>> 
>>> Well, both tests have only different tools and hypervisor; no kernel
>>> was changed between both tests, neither in dom0 nor in domU. So it
>>> might not be pciback?
>> 
>> Well, if the problem was introduced in the tools and this having been
>> the reason for me running into the same or a similar issue, the patch
>> may still address the issue, even if - in case it's a regression in
>> the tools - it would have been better to also address the issue there.
>> As said, when analyzing the issue I didn't have indications of changed
>> tool stack behavior, i.e. I assumed the problem would have always been
>> there.
> 
> Yeah, after rereading the thread, I got that impression.
> 
> though after looking at a quick grep:
> 
> [alien@localhost xen]$ git log RELEASE-4.14.1..RELEASE-4.15.0
> --oneline --decorate -- tools/xl/ | grep -i pci
> bdc0799fe2 libxlu: introduce xlu_pci_parse_spec_string()
> 96ed6ff297 libxlu: introduce xlu_pci_parse_spec_string()
> 929f231140 libxl: introduce 'libxl_pci_bdf' in the idl...
> c00da82355 libxl: add libxl_device_pci_assignable_list_free()...
> 7499b22ba1 libxl: make sure callers of libxl_device_pci_list() free
> the list after use
> 6c2590967f xl: s/pcidev/pci where possible
> 
> 
> It doesn't seem like one of these? (Well, I'm not familiar with any
> of the Xen code.)
> 
> This mailing list is the correct place for the toolstack too, right?

I forgot the libs...

[alien@localhost xen]$ git log RELEASE-4.14.1..RELEASE-4.15.0 --oneline --decorate -- tools/libs/light/ | grep -i pci
9cd5bbf536 libxl / libxlu: support 'xl pci-attach/detach' by name
57bff091f4 libxl: add 'name' field to 'libxl_device_pci' in the IDL...
d473d74af3 libxl: stop setting 'vdevfn' in pci_struct_fill()
8bf0fab142 libxl / libxlu: support 'xl pci-attach/detach' by name
5ab684cb3e libxl: introduce libxl_pci_bdf_assignable_add/remove/list/list_free(), ...
66c2fbc6e8 libxl: convert internal functions in libxl_pci.c...
929f231140 libxl: introduce 'libxl_pci_bdf' in the idl...
413fd4e4e9 libxl: use COMPARE_PCI() macro is_pci_in_array()...
c00da82355 libxl: add libxl_device_pci_assignable_list_free()...
7499b22ba1 libxl: make sure callers of libxl_device_pci_list() free the list after use
f8cfb85719 libxl: remove get_all_assigned_devices() from libxl_pci.c
4951b9ea80 libxl: remove unnecessary check from libxl__device_pci_add()
fe91a3aadc libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
b5429d65e1 libxl: stop using aodev->device_config in libxl__device_pci_add()...
a825ab3a6b libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
33e1c5a5a8 libxl: s/detatched/detached in libxl_pci.c
d8cba539f2 libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
0fdb48ffe7 libxl: Make sure devices added by pci-attach are reflected in the config
fce69998ed libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
e43780f15f libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X


What about this "f8cfb85719 libxl: remove get_all_assigned_devices() from 
libxl_pci.c"? Doesn't it seem suspicious?


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 14:53:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 14:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134919.250896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5mD-0001bD-Rx; Tue, 01 Jun 2021 14:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134919.250896; Tue, 01 Jun 2021 14:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo5mD-0001b6-Ox; Tue, 01 Jun 2021 14:53:53 +0000
Received: by outflank-mailman (input) for mailman id 134919;
 Tue, 01 Jun 2021 14:53:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ILtd=K3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lo5mC-0001b0-M3
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 14:53:52 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 790a32e5-4e1f-4cd2-9fa1-e11f5c8d103d;
 Tue, 01 Jun 2021 14:53:51 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2051.outbound.protection.outlook.com [104.47.10.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-34-eegdxSP-OS-JSRID5pLCJw-1; Tue, 01 Jun 2021 16:53:50 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3775.eurprd04.prod.outlook.com (2603:10a6:803:1a::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.24; Tue, 1 Jun
 2021 14:53:49 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Tue, 1 Jun 2021
 14:53:49 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0240.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1e::36) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Tue, 1 Jun 2021 14:53:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 790a32e5-4e1f-4cd2-9fa1-e11f5c8d103d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622559230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=piOMuwkBshYty62f8FsiT+C4NIZhl+ittycigmmlPn8=;
	b=jMqFlkI0zTWlJjLApZ9jQGwKV8L1VJUINFI6x44WbdQMnteyb9q37le9aQXOnjE3EtFw+3
	AnN+zi68BbyeOTuZNyPRDNU96PRdwt7iuug1IZfaqWfMqTXj07BfWclL45H+W4ewiFjUZo
	PaWCFOs4aDFXNyAWKG8hyq1QLk000m0=
X-MC-Unique: eegdxSP-OS-JSRID5pLCJw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lmVxEAj7ajvFRPqYGWEEfEejC3Bt3C6zh+30oaqZOyjvbq/d6S/zndDsGy3LDe19WnMtShagIhKUt9kzk2uEhpxLq0kaDiKviH4ZrwaGxpA/WgMPe00eqE7t0McoVSnAAUHOYqfPSWCOL9poi0BeIFP8oM2p9024WBz5qwcrtIreDCZHfigvs1JFswZno36ArJnGJi3erwuHK3qy5JEjyi+5VZRovqR1RNpIUWVmjPurzq1BNnLK4qWJMFo+ire2Mu0nkELadAXS3TzWfOrg8KWpoXUuy42HwJZVUv1JU+PMJi96KaNlnyf/pK7JqoF9TlYFC8rtmzoioiSoIMDUNg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=piOMuwkBshYty62f8FsiT+C4NIZhl+ittycigmmlPn8=;
 b=N5W+fK7RQXRrkaacV8eH213cNo5Koqhdk6hpQ3ttoCQlCKENr74fB5mpD+6Pubgm8Qoic8xQb7xqjDJERNQaK+oN5Ea6Qt0GN6cLDrhgrQGBsLqFjd0HblMb6hmRQZvjFeLprC02M8G1PoEDeLyipusWGaFyf+RWvdIxGZAbfT/JBD5XCf6SC9J8J3ncaebIF7M3SV18z4XxDFk6DWZT61BaRRreOHHhLnYA65S1bStD/OaUnbL/Bqw5vfbSiKKqe5zTKXqfPzCRE/SefHBfbx2MASLdLSEMbOTU7omMlWJJBw7qB+3X7DrVgTXdotjBl9XJUkR2zEoX8q94F2pRHA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xen.org; dkim=none (message not signed)
 header.d=none;lists.xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
To: AL13N <alien@rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
Date: Tue, 1 Jun 2021 16:53:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0240.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::36) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5517fe34-7895-46b0-3472-08d9250d1062
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3775:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3775124DEA826C9A997B53CBB33E9@VI1PR0402MB3775.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lKgjVnqiuGhnCrzHlgddX9w2x84Zf3llh/XSVU9kEeeXuU6fZIWPhXf67hq+ABUvSPnHG2OJkW3Zj+jp1E3odcLjGCTc8FO2njQdHFNPBQTx7JWDIVgqoQZoWu47GiTsoM3f/I8zDK2STR0kMmAiUt6poVhTuqTHA447ggZU2UxwRRQMtUCKh1Stwly+j2epfZvETjN2yv22EH6vXrQGydZuEIPii7zVR1Lshdjh4yIBWui+jfiZCe9/p78HdjxCf4Z5cnKaTapgmA3uaokMLnjaVU+GqUke9VVAv0aXYjfMrH07b/X7LpAW0Xn9i6n9FC/FCEhMDCUkvfwnatzCWZBDuWmYvqo34stlmn+ooK0nYyjVv522WDVSdiphYOAQxT692J5Mlf/QzR3VsJ+iRTf1TYVa8WBhAJARxTUmzY9G4Pu96WrWjnz1RD8Zm3zPtaIutBa5XQiB1Ib6NfF1Rkhk4mwMdEt0nvEcqYVTw4sbQgS6S2KJu6GjOpTifA8o+KmyAHf/+PF/kzpegaYLf/sOwMySDZb0/Im1z2zOQSrXdwxMjKpGfigIFE/nRtjuJgrZ0QmXnL6//3bWOomI0IyPwGv+6b1zvWJcwgQvMEIxN3FV5iQClqdX8Fk1TEW3R8q9kBty2ldNVNdXxzxP+ovoemRtiOJ+oGPi6yr/c1V/03Qm/PS/ZadmAo0KG3xa
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(4326008)(16526019)(36756003)(558084003)(956004)(186003)(2616005)(8936002)(66946007)(31696002)(86362001)(2906002)(8676002)(66476007)(66556008)(6486002)(16576012)(38100700002)(6916009)(498600001)(26005)(31686004)(5660300002)(53546011)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?OTY4MGZBY3Zza0xTSkI1a3d1ZTJuU3JpYzc2dEFXOGdpU2xQUjlqTUMxVXhu?=
 =?utf-8?B?OFJ1NTJtZnJYZ1oxTTBoZlV4VkNyOVRraDB3eXFwdkJEMjdWQ1ZkQmg2ZlFI?=
 =?utf-8?B?VGV4Mm5keTBSaUtudE9XajhubWptOTRGSXRmckY2NnFkUDlzVVpiYkRSSG1u?=
 =?utf-8?B?UHlnbXFyUTlNOVZmMVhNbzVDWkpXNUlZYUtVU1FXTHM2dUlVR0RyVDc4RUs0?=
 =?utf-8?B?cUVlOGZ0cnMzaEVWZk1rVkFZdnlVU1d0TmFBYnRNbCttK3IwM044MjZzTTcx?=
 =?utf-8?B?WUpBUitZZ25oWTNBT1RmN3M0V0xjRzRXekZQK0xxUmVDaFpEazh0WlA4eita?=
 =?utf-8?B?dzB2TXY4ZG9wMnhNY1U2SGp0RHNrZ2tDWjNCc3pld21nWFlkZGhhMXJhT3Br?=
 =?utf-8?B?ZVJnN1d2YmtxWmxnUlluSFp0MXRXdStHdkI1dHdxSHlLLzBoaElWN0RZM3h0?=
 =?utf-8?B?c3J4TkV2eWNMV2xiQnFOVVQrZnNmM0thTHI4Yk1nSjU4YkQvL216Z2VmNnBk?=
 =?utf-8?B?c0wrRmhZTWZ0Mzg5cThXNTFhSm8zTDFicVBoUjlLaGxidXpCbi8wR0FvaHZS?=
 =?utf-8?B?bEsvdSt6V1huV2hIN2tCMjBJdkpyazFpTGQrR25WT1huRFYySTRaUlJETVJs?=
 =?utf-8?B?OVhpU3FGTEEyZTFBV01OeEpnNFk5c2RVSExhODdlemhTdGZ6SXJYVEN1Zndw?=
 =?utf-8?B?Wmx2MnU1czhEZUxReWx0cnNHdWxaRjQ5VEZpWWpPaUUwQXRVR1FQdUJVemd4?=
 =?utf-8?B?ajlaSDhxUlpTS2lTdEVWWU9vZGlidFZrYWF4cTRYR2ZIRW5Va2lxamY2QVBU?=
 =?utf-8?B?YitxVzdZWFM0R21YYnZFQzFFcUdZM1djM1AvNFlqNk9FRGdDVklIcnVPSUIr?=
 =?utf-8?B?bXJDTVVRaWowSlpPeEFkMW4yYUc1NjlzcTVHbWxkeVhiajh2aWhlVVNheDdD?=
 =?utf-8?B?K3dmOUZFWGVPWGFPU1JTTTBjeCtuS25hYk5rTjdWaHN6OC9BZkRqbnVBY2p6?=
 =?utf-8?B?YlZhTzFiK01rT21xc2ZKa1prMmN5aVh4aTM2dG9PVXJ6TkRxOWpZUDQ3T1RV?=
 =?utf-8?B?UUVPL1FhaG9aVFFwaGFlQW0vYTdYVTlzV1RSbjdFQ1NTaXdUa09Gai8xNy9T?=
 =?utf-8?B?SldYcjlOajg1TWt1MFFqRmVvWkpwSEZMWGVmdWNXTlAza1REekxXYUpTUlRG?=
 =?utf-8?B?dmtoV3FLZVQxWXhTdWZ6Z1RrUDNVWWhLUWFnVUNNM00waXFXK0dvazFHRFlU?=
 =?utf-8?B?RFJrUEs0SWlobmdBS2pOb2tjckpNNm55c29ldThpUEtqMjNSeFF5SlRFZFB2?=
 =?utf-8?B?YmtGQlprWld2NkpaR3Y2ZGRFcGYzMUM0ZkpLODhldkdtRWRFZU5mN0phU0k0?=
 =?utf-8?B?MGRuZkFmRlVRY09MZDl4dHJIOGRqT0dsVVRiTndYcytyQUwwUS9SVjg3VE83?=
 =?utf-8?B?aTU3b0doN1FTcGVRK2FiUUhDT0ZBMmFiMUkvWm5Qdm1JdXNqSzk5S2RuYldG?=
 =?utf-8?B?c2oxK3UzNUtyVk9QbnV4QUNKSnJDWCtrNWYvWE9UbldVTXZsdys5cG9uY3Fq?=
 =?utf-8?B?dWZxam1HamwxdTZ0TEZzTVUyTjlDU1VSc2c5Y1dNOC81Rlplb2kyWm90aGNB?=
 =?utf-8?B?cmpJQlRNd1NHZVlNaDNrSFBkdTdiNUpqQ0wzaWs5RSt5aGZSOHpMMGR0M2ZN?=
 =?utf-8?B?M2hhQkhEOGZFMUFhQkdGYWV6VFVKYXM1dmJ6OE1Lc3hUSlBsdzYxMmVOYi93?=
 =?utf-8?Q?jogKB3YjlHb76YygSm4SsZj//UI6XLwCLAGeyj5?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5517fe34-7895-46b0-3472-08d9250d1062
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2021 14:53:49.0539
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KZyFc6bkVY7k9PYrLYBO8yLy3GmxuhYu+ujRF/ZsLmM6k/BEWM3xYDoEoadSV4+dnrybGB0YgTNUTpR7E9cxzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3775

On 01.06.2021 16:44, AL13N wrote:
> This mailing list is the correct place for the toolstack too? right?

Yes.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 15:36:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 15:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134926.250906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6Qs-0005sd-4G; Tue, 01 Jun 2021 15:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134926.250906; Tue, 01 Jun 2021 15:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6Qs-0005sW-1D; Tue, 01 Jun 2021 15:35:54 +0000
Received: by outflank-mailman (input) for mailman id 134926;
 Tue, 01 Jun 2021 15:35:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lo6Qq-0005sP-Ak
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 15:35:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lo6Qp-0008Iz-2k; Tue, 01 Jun 2021 15:35:51 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lo6Qo-0006Vg-S2; Tue, 01 Jun 2021 15:35:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/ihpCauTQfpifvAgMLpv/b08tZHMMzbgGergYeq+Oek=; b=j6rA5m4vcJVgXEZgDrhrcIaZPF
	UFylu8WNmz4EiereD2ZEfrIwovISBBMSph2M0zCRFRcUsH4ZVKS9zuqqx1QL6B8tlFL8bN9qSf3/1
	9p4mDvIqelH4ttHkspkXwgCV5wy7FMc06tVmjquU7ykskYyF3xbD06+ugXN6WpHP4F14=;
Subject: Re: [PATCH v2] xen/page_alloc: Remove dead code in
 alloc_domheap_pages()
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210526161129.28572-1-julien@xen.org>
 <b449be48-cded-b1a2-5086-d4d6856ed5dc@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dc3777a7-e28e-5323-bfa8-95c6bfe2f8f4@xen.org>
Date: Tue, 1 Jun 2021 16:35:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <b449be48-cded-b1a2-5086-d4d6856ed5dc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 08:15, Jan Beulich wrote:
> On 26.05.2021 18:11, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Since commit 1aac966e24e9 "xen: support RAM at addresses 0 and 4096",
>> bits_to_zone() will never return 0 and it is expected that we have
>> minimum 2 zones.
>>
>> Therefore the check in alloc_domheap_pages() is unnecessary and can
>> be removed. However, for sanity, it is replaced with an ASSERT().
>>
>> Also take the opportunity to switch from min_t() to min() as
>> bits_to_zone() cannot return a negative value. The macro is tweaked
>> to make it clearer.
>>
>> This bug was discovered and resolved using Coverity Static Analysis
>> Security Testing (SAST) by Synopsys, Inc.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>      Changes in v2:
>>          - Remove BUILD_BUG_ON()
>>          - Switch from min_t() to min()
> 
> Since this fulfills the dependencies put in place at the time, the
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> continues to apply here. Thanks for making the adjustments.

Thanks for the review. It is now committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 15:42:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 15:42:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134933.250918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6Wj-0007Gq-QO; Tue, 01 Jun 2021 15:41:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134933.250918; Tue, 01 Jun 2021 15:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6Wj-0007Gj-Mq; Tue, 01 Jun 2021 15:41:57 +0000
Received: by outflank-mailman (input) for mailman id 134933;
 Tue, 01 Jun 2021 15:41:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QqUo=K3=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lo6Wi-0007Gd-Q4
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 15:41:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 170f7a4f-90b0-43e8-84cc-37ab07740112;
 Tue, 01 Jun 2021 15:41:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 170f7a4f-90b0-43e8-84cc-37ab07740112
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622562115;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=uGDUR96hUGFk1aFgdFn9czWUWD+ZG6iEEz3pTgysQ1E=;
  b=bt6G+er2sFla8Udp5gep3ZtlDeBFmBzXVGr+AuIdM5HkFvSZckqr1Oh9
   WE/QI8xLSNf6z5UvyzN7cGH7U48EnykUFVN4UZjwqer10HysvqybB5ocu
   Ng5zAREYqGvkq51iBZzghR/9GhbvtSPktUOYGj31RqLdTC1Qv4yR6WJ22
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7+OO+9ynfigWsQ/frsj5JP43RgOC+SumdwtP6gmlJXQ8DYKI8oNrsqP/eROxyCgW1AbucYXRtI
 uTDM+xbsGDu6C2tW+Pgq228LuAWciuUbZYs+pw1Q/A3HKQl29RQULK/Oh2FhFfNlp42P9rP/qE
 Z1BawY8gYyygo0KNxa0czREV09ryxHoJ2GnOL7GQakZ5TvYR0Q8cZ7akvtFUrzJIV6owPcaDUq
 +sSzOGAYBdH5z5cLjLnEHN2cOeJg4ltER+gYIs2wvU1hO514ltG0TPiEJUTXzIDPAxIRLFZhIb
 xk0=
X-SBRS: 5.1
X-MesageID: 45055839
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:HEw44q1m0v9eYvNQKlxD0gqjBEgkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7Ar5PUtLpTnuAsa9qB/nm6KdgrNhWItKPjOW21dARbsKheffKlXbcBEWndQtt5
 uIHZIeNDXxZ2IK8PoT4mODYqodKA/sytHWuQ/cpU0dMz2Dc8tbnmBE4p7wKDwMeOFBb6BJcq
 a01458iBeLX28YVci/DmltZZm4mzWa/KiWGCLvHnQcmXGzsQ8=
X-IronPort-AV: E=Sophos;i="5.83,240,1616472000"; 
   d="scan'208";a="45055839"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] libs/foreignmemory: Fix osdep_xenforeignmemory_map prototype
Date: Tue, 1 Jun 2021 16:41:47 +0100
Message-ID: <20210601154147.55799-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Commit cf8c4d3d13b8 made some preparations for one day having a
variable-length-array argument, but didn't declare the array in the
function prototype the same way as in the function definition. GCC 11
now complains about the mismatch.

Fixes: cf8c4d3d13b8 ("tools/libs/foreignmemory: pull array length argument to map forward")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/foreignmemory/private.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 1ee3626dd278..5bb0cefb0987 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -32,7 +32,7 @@ int osdep_xenforeignmemory_close(xenforeignmemory_handle *fmem);
 void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
                                  uint32_t dom, void *addr,
                                  int prot, int flags, size_t num,
-                                 const xen_pfn_t arr[num], int err[num]);
+                                 const xen_pfn_t arr[/*num*/], int err[/*num*/]);
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num);
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 15:50:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 15:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134941.250932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6fJ-0000Od-Na; Tue, 01 Jun 2021 15:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134941.250932; Tue, 01 Jun 2021 15:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6fJ-0000OW-KG; Tue, 01 Jun 2021 15:50:49 +0000
Received: by outflank-mailman (input) for mailman id 134941;
 Tue, 01 Jun 2021 15:50:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tu9B=K3=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lo6fI-0000OP-I6
 for xen-devel@lists.xen.org; Tue, 01 Jun 2021 15:50:48 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d2235fb3-d95a-4ff2-88dd-1f840fd92154;
 Tue, 01 Jun 2021 15:50:47 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id CF63BB11CF7;
 Tue,  1 Jun 2021 17:50:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2235fb3-d95a-4ff2-88dd-1f840fd92154
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 01 Jun 2021 17:50:46 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <c216536c95b4febf9f5565c290f6ecb9@mail.rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <c216536c95b4febf9f5565c290f6ecb9@mail.rmail.be>
Message-ID: <adcaf02b52d1968dc8da8d164cc63005@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

AL13N wrote on 2021-06-01 16:48:
> AL13N wrote on 2021-06-01 16:44:
>> Jan Beulich wrote on 2021-06-01 16:33:
>>> On 01.06.2021 16:06, AL13N wrote:
>>>> Jan Beulich wrote on 2021-06-01 12:08:
>>>>> On 01.06.2021 09:36, AL13N wrote:
>>>>>> Not 100% sure it's a bug or something i did wrong, but,
>>>>>> 
>>>>>> with xl create i start a PV with 3 pci passthroughs
>>>>>> 
>>>>>> after wards, xl pci-list shows all 3 nicely
>>>>>> 
>>>>>> looking at the domU boot logs, pcifront is only creating one pci
>>>>>> device
>>>>>> and lspci in the guest shows only 1 pci entry
>>>>>> 
>>>>>> in at least 4.14.1 it still works.
>>>>> 
>>>>> This reminds me of my report at
>>>>> https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html
>>>>> 
>>>>> Meanwhile the proposed pciback change has gone in upstream:
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/xen/xen-pciback?id=c81d3d24602540f65256f98831d0a25599ea6b87
>>>>> 
>>>>> I wasn't, however, aware that this may have been an issue going
>>>>> from 4.14.1 to 4.15.0, i.e. something that was presumably (as
>>>>> George also has just said) a regression in the tools. Or else I
>>>>> probably wouldn't have suggested taking care of this in Linux.
>>>>> Nevertheless you may want to give that change a try.
>>>> 
>>>> Well, both tests differ only in tools and hypervisor; no kernel was
>>>> changed between the two tests, in neither dom0 nor domU, so it might
>>>> not be pciback?
>>> 
>>> Well, if the problem was introduced in the tools and this having been
>>> the reason for me running into the same or a similar issue, the patch
>>> may still address the issue, even if - in case it's a regression in
>>> the tools - it would have been better to also address the issue 
>>> there.
>>> As said, when analyzing the issue I didn't have indications of 
>>> changed
>>> tool stack behavior, i.e. I assumed the problem would have always 
>>> been
>>> there.
>> 
>> Yeah after rereading the thread, i got this impression.
>> 
>> though after looking at a quick grep:
>> 
>> [alien@localhost xen]$ git log RELEASE-4.14.1..RELEASE-4.15.0
>> --oneline --decorate -- tools/xl/ | grep -i pci
>> bdc0799fe2 libxlu: introduce xlu_pci_parse_spec_string()
>> 96ed6ff297 libxlu: introduce xlu_pci_parse_spec_string()
>> 929f231140 libxl: introduce 'libxl_pci_bdf' in the idl...
>> c00da82355 libxl: add libxl_device_pci_assignable_list_free()...
>> 7499b22ba1 libxl: make sure callers of libxl_device_pci_list() free
>> the list after use
>> 6c2590967f xl: s/pcidev/pci where possible
>> 
>> 
>> it doesn't seem like one of these? (well, i'm not familiar with any
>> of the xen code)
>> 
>> This mailing list is the correct place for the toolstack too? right?
> 
> i forgot the libs....
> 
> [alien@localhost xen]$ git log RELEASE-4.14.1..RELEASE-4.15.0
> --oneline --decorate -- tools/libs/light/ | grep -i pci
> 9cd5bbf536 libxl / libxlu: support 'xl pci-attach/detach' by name
> 57bff091f4 libxl: add 'name' field to 'libxl_device_pci' in the IDL...
> d473d74af3 libxl: stop setting 'vdevfn' in pci_struct_fill()
> 8bf0fab142 libxl / libxlu: support 'xl pci-attach/detach' by name
> 5ab684cb3e libxl: introduce
> libxl_pci_bdf_assignable_add/remove/list/list_free(), ...
> 66c2fbc6e8 libxl: convert internal functions in libxl_pci.c...
> 929f231140 libxl: introduce 'libxl_pci_bdf' in the idl...
> 413fd4e4e9 libxl: use COMPARE_PCI() macro is_pci_in_array()...
> c00da82355 libxl: add libxl_device_pci_assignable_list_free()...
> 7499b22ba1 libxl: make sure callers of libxl_device_pci_list() free
> the list after use
> f8cfb85719 libxl: remove get_all_assigned_devices() from libxl_pci.c
> 4951b9ea80 libxl: remove unnecessary check from libxl__device_pci_add()
> fe91a3aadc libxl: generalise 'driver_path' xenstore access functions
> in libxl_pci.c
> b5429d65e1 libxl: stop using aodev->device_config in 
> libxl__device_pci_add()...
> a825ab3a6b libxl: remove extraneous arguments to do_pci_remove() in 
> libxl_pci.c
> 33e1c5a5a8 libxl: s/detatched/detached in libxl_pci.c
> d8cba539f2 libxl: add/recover 'rdm_policy' to/from PCI backend in 
> xenstore
> 0fdb48ffe7 libxl: Make sure devices added by pci-attach are reflected
> in the config
> fce69998ed libxl: make libxl__device_list() work correctly for
> LIBXL__DEVICE_KIND_PCI...
> e43780f15f libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> 
> 
> what about this one: "f8cfb85719 libxl: remove get_all_assigned_devices() from
> libxl_pci.c" - doesn't it seem sus?

or maybe "7499b22ba1 libxl: make sure callers of libxl_device_pci_list() 
free the list after use": if, between adding each pci device, it also 
lists them and the list then gets freed, maybe that stops things after 
the first one?

sounds like a longshot, tbh...

anyone have an idea which it might be? or should i look in other places?


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:09:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134968.250961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6xf-0003o0-6p; Tue, 01 Jun 2021 16:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134968.250961; Tue, 01 Jun 2021 16:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6xf-0003nt-3m; Tue, 01 Jun 2021 16:09:47 +0000
Received: by outflank-mailman (input) for mailman id 134968;
 Tue, 01 Jun 2021 16:09:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo6xe-0003nb-0L; Tue, 01 Jun 2021 16:09:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo6xd-0000xy-RZ; Tue, 01 Jun 2021 16:09:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo6xd-0003q5-Go; Tue, 01 Jun 2021 16:09:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lo6xd-00040S-G2; Tue, 01 Jun 2021 16:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rRy+dKPRKWlPGsOwerKy/5okeIQLqnO3Vk4SXP+vOyc=; b=W5Zck4BQAIY41IxaXZsJP/f62c
	67fH2JI7l2XyQtnMHTeVFDGUsIUxXih0V9QmOlX5oSjy8K1il8smDrltLEeB2pJeBU+DgTYB8pIav
	hlpUTgfqZBQbq78wzYHP6JdpIbu1L/AMYV03hm46eBKj+UB+jvKVzR6GvM3419jryCGQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162299-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162299: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=52848929b70dcf92a68aedcfd90207be81ba3274
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 16:09:45 +0000

flight 162299 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162299/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                52848929b70dcf92a68aedcfd90207be81ba3274
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  285 days
Failing since        152659  2020-08-21 14:07:39 Z  284 days  525 attempts
Testing same since   162270  2021-05-31 03:30:37 Z    1 days    3 attempts

------------------------------------------------------------
519 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164273 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134978.250975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zJ-0005Et-JV; Tue, 01 Jun 2021 16:11:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134978.250975; Tue, 01 Jun 2021 16:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zJ-0005Em-GD; Tue, 01 Jun 2021 16:11:29 +0000
Received: by outflank-mailman (input) for mailman id 134978;
 Tue, 01 Jun 2021 16:11:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zI-0005Ec-1S
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:28 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 032e56eb-b77e-45fe-b95a-fb2950e57e76;
 Tue, 01 Jun 2021 16:11:26 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBO1B5
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 032e56eb-b77e-45fe-b95a-fb2950e57e76
ARC-Seal: i=1; a=rsa-sha256; t=1622563885; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=o8goBTfDenihvrljGvM2v005zlEHG9m3nfjPFkZi+A8YYUoOXANIc/PMTs2jKJtIXc
    lILFicKM6EtuEJN30gTYYvTISdXa+nfEoqRn4ziuUEhFbkDAMX5qiTCfrf78WSaMadyo
    UIITWgrFrEkP+0gz0ewjWxnQT8tiZgd007T8RwW86VPpUPON8TvVc/gGV2s0c476A6TV
    agR0h9HREPQHHMnx3iNnxkgZuEtH5oXGae6ws1KtP4jgwISbIlFATM+5WZswQAZp5TXz
    XlM3uQrxI/Fvg44UAFyjH4PM9otFaCeyhTTQZRHSS/f7K8yTUNys0xWe8zGq+mCgtGfZ
    xZ9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563885;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=bw53oX3KFyhVByZkDFnW7+r8yCMylKYkPgRao8lX0jQ=;
    b=nTyRSGG038jERdFo72r9AEpau6SvPmKojB17028m0nlQ+9NAyVZc5InMFvUJdGqrVH
    AAtg0kjxm4erZWPvaWodgDMyReMmuW6qktjr0hpBLrPYZarwtXd2w7JsFjAVLTf9MPmg
    jLS9JoMZJTb88LJPBlsskVWLkmotIryv5cNEYA5Q3p8///cGNe0093qjJHGfVZ5rTpx0
    4tT7qnXbjubedi0B4jfB4IYc0GlyR+/CJHtfd+8cbQ6+u/gSzWyH86ainV+8F8OvTDD3
    1tdONo4S750Lf7hA2Yh34j4ct4bZM0Sofzepsklx0AHBXppMIoX36/Jh4eRZkvrB5pJW
    XUzA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563885;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=bw53oX3KFyhVByZkDFnW7+r8yCMylKYkPgRao8lX0jQ=;
    b=QoPbrLy6f99MRAPXmhdUHvgE6u28C46qd/vJ7g7LIN5QXuvMlVh35HbD90EKiemLye
    DHvXQ0msSeCiM6hCz9ZLjCd6gklpI9i696MtU0u0m6bBhWyUF81y3ekubgYX2uTpP/Mg
    hkdIrsegqZl733o0flIemXNYKBs5BGoDRigoGtejw/Cgl/+RCXGGOqYps0fshEo7GtIQ
    bqAIbspLY/cbCBjAizcyHGf6XZckpWcqTZGz2Z1rOpHf/Pw/xcFBQr0psiMNFJ+Pggu8
    No64kSjvhOPmewQh7RyiSayDHcOPl7hjaS+XSq/GwNzuThFOo0x74ajTvSLU8Gm7XADt
    EY/Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [PATCH v20210601 00/38] leftover from 2020
Date: Tue,  1 Jun 2021 18:10:40 +0200
Message-Id: <20210601161118.18986-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Various unreviewed changes, rebased to 57f68dfd2d.

Olaf Hering (38):
  tools: add API to work with several bits at once
  xl: fix description of migrate --debug
  tools: create libxensaverestore
  tools: add readv_exact to libxenctrl
  tools: add xc_is_known_page_type to libxenctrl
  tools: use xc_is_known_page_type
  tools: unify type checking for data pfns in migration stream
  tools: show migration transfer rate in send_dirty_pages
  tools/guest: prepare to allocate arrays once
  tools/guest: save: move batch_pfns
  tools/guest: save: move mfns array
  tools/guest: save: move types array
  tools/guest: save: move errors array
  tools/guest: save: move iov array
  tools/guest: save: move rec_pfns array
  tools/guest: save: move guest_data array
  tools/guest: save: move local_pages array
  tools/guest: restore: move pfns array
  tools/guest: restore: move types array
  tools/guest: restore: move mfns array
  tools/guest: restore: move map_errs array
  tools/guest: restore: move mfns array in populate_pfns
  tools/guest: restore: move pfns array in populate_pfns
  tools/guest: restore: split record processing
  tools/guest: restore: split handle_page_data
  tools/guest: restore: write data directly into guest
  tools: recognize LIBXL_API_VERSION for 4.16
  tools: adjust libxl_domain_suspend to receive a struct props
  tools: change struct precopy_stats to precopy_stats_t
  tools: add callback to libxl for precopy_policy and precopy_stats_t
  tools: add --max_iters to libxl_domain_suspend
  tools: add --min_remaining to libxl_domain_suspend
  tools: add --abort_if_busy to libxl_domain_suspend
  tools: add API for expandable bitmaps
  tools: use xg_sr_bitmap for populated_pfns
  tools: use superpages during restore of HVM guest
  tools: remove migration stream verify code
  hotplug/Linux: fix starting of xenstored with restarting systemd

 .gitignore                                    |   2 +
 docs/man/xl.1.pod.in                          |  24 +-
 tools/hotplug/Linux/init.d/xencommons.in      |   2 +-
 tools/hotplug/Linux/launch-xenstore.in        |  40 +-
 .../Linux/systemd/xenstored.service.in        |   2 +-
 tools/include/libxl.h                         |  32 +-
 tools/include/xenguest.h                      | 186 -----
 tools/include/xensaverestore.h                | 207 ++++++
 tools/libs/Makefile                           |   1 +
 tools/libs/ctrl/xc_bitops.h                   |  25 +
 tools/libs/ctrl/xc_private.c                  |  55 +-
 tools/libs/ctrl/xc_private.h                  |  34 +
 tools/libs/guest/Makefile                     |  11 -
 tools/libs/guest/xg_dom_x86.c                 |   5 -
 tools/libs/guest/xg_offline_page.c            |   1 -
 tools/libs/guest/xg_private.h                 |   5 +
 tools/libs/guest/xg_sr_restore_x86_hvm.c      | 274 --------
 tools/libs/light/Makefile                     |   4 +-
 tools/libs/light/libxl_dom_save.c             |  24 +
 tools/libs/light/libxl_domain.c               |  10 +-
 tools/libs/light/libxl_internal.h             |   7 +
 tools/libs/light/libxl_save_helper.c          |   1 +
 tools/libs/light/libxl_save_msgs_gen.pl       |   5 +-
 tools/libs/light/libxl_stream_write.c         |   9 +-
 tools/libs/light/libxl_types.idl              |   1 +
 tools/libs/saverestore/Makefile               |  38 ++
 .../xg_sr_common.c => saverestore/common.c}   |  77 ++-
 .../xg_sr_common.h => saverestore/common.h}   | 224 +++++-
 .../common_x86.c}                             |   2 +-
 .../common_x86.h}                             |   2 +-
 .../common_x86_pv.c}                          |   2 +-
 .../common_x86_pv.h}                          |   2 +-
 .../nomigrate.c}                              |   0
 .../xg_sr_restore.c => saverestore/restore.c} | 598 ++++++++--------
 tools/libs/saverestore/restore_x86_hvm.c      | 644 ++++++++++++++++++
 .../restore_x86_pv.c}                         |  70 +-
 .../xg_sr_save.c => saverestore/save.c}       | 209 +++---
 .../save_restore.h}                           |   2 -
 .../save_x86_hvm.c}                           |   7 +-
 .../save_x86_pv.c}                            |  33 +-
 .../stream_format.h}                          |   0
 tools/libs/uselibs.mk                         |   4 +-
 tools/ocaml/libs/xl/xenlight_stubs.c          |   3 +-
 tools/xl/xl_cmdtable.c                        |  26 +-
 tools/xl/xl_migrate.c                         |  54 +-
 tools/xl/xl_saverestore.c                     |   3 +-
 46 files changed, 1970 insertions(+), 997 deletions(-)
 create mode 100644 tools/include/xensaverestore.h
 delete mode 100644 tools/libs/guest/xg_sr_restore_x86_hvm.c
 create mode 100644 tools/libs/saverestore/Makefile
 rename tools/libs/{guest/xg_sr_common.c => saverestore/common.c} (71%)
 rename tools/libs/{guest/xg_sr_common.h => saverestore/common.h} (70%)
 rename tools/libs/{guest/xg_sr_common_x86.c => saverestore/common_x86.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86.h => saverestore/common_x86.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.c => saverestore/common_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.h => saverestore/common_x86_pv.h} (98%)
 rename tools/libs/{guest/xg_nomigrate.c => saverestore/nomigrate.c} (100%)
 rename tools/libs/{guest/xg_sr_restore.c => saverestore/restore.c} (67%)
 create mode 100644 tools/libs/saverestore/restore_x86_hvm.c
 rename tools/libs/{guest/xg_sr_restore_x86_pv.c => saverestore/restore_x86_pv.c} (94%)
 rename tools/libs/{guest/xg_sr_save.c => saverestore/save.c} (85%)
 rename tools/libs/{guest/xg_save_restore.h => saverestore/save_restore.h} (98%)
 rename tools/libs/{guest/xg_sr_save_x86_hvm.c => saverestore/save_x86_hvm.c} (96%)
 rename tools/libs/{guest/xg_sr_save_x86_pv.c => saverestore/save_x86_pv.c} (97%)
 rename tools/libs/{guest/xg_sr_stream_format.h => saverestore/stream_format.h} (100%)



From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134979.250987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zO-0005XX-Rk; Tue, 01 Jun 2021 16:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134979.250987; Tue, 01 Jun 2021 16:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zO-0005XO-OE; Tue, 01 Jun 2021 16:11:34 +0000
Received: by outflank-mailman (input) for mailman id 134979;
 Tue, 01 Jun 2021 16:11:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zN-0005Ec-0g
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:33 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13ab766b-3fdb-4683-ae05-9169682260c7;
 Tue, 01 Jun 2021 16:11:31 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBP1B6
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13ab766b-3fdb-4683-ae05-9169682260c7
ARC-Seal: i=1; a=rsa-sha256; t=1622563885; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Q8tN60+RqOha/MCXIQ5n8Okf/eCE4XN4QKhcqJNEH4fZXpfnhdq1Mhn5RraxGGhURH
    bR2V3bg8mIRVxZhQd9j1M3EBFN4HTux1HLo56QpEB/Gxbo6JOG1f0QcftuzC+MeDRv7i
    wU77+TyThcRMJ1EdIUcxG4X3nP+Sq8mODjxlYfaeKbwmClPa7woA6uNk3tI6v1hrf4O5
    IXwfnpqnCWi6HP6WdFWLjThkI5sNIypXP2ec7UnoBCaAc3Ifuh15qusQ4JMHU/p25Vef
    dCUcnFHg/OdvfpwLvwXct7TJtQjpbWaoZMQEHXea/r6wj6xn/I1WCc1gmhtDhRybPImn
    XJPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563885;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ti7wpap7OKPtBty/ilhF07cZXxS6hyTWcz0abirc9Nk=;
    b=NlPTluv9dqlH2y81U97VooLLFW933ZhYmDeUIoPU04RBuEqy11hEOu/YqD9RUud8AG
    4EmoFn+72H4J0QwM2rqMQ+EB9PxXbwBFfMbqKNQgMh4QkT4SrDhgMoyrDy2bBOTJevK9
    8FryrKOg/HxOAfYTCtBtIu+5HzYTMf7tuak3wWU2Lqb3wQ+461DDPpdfr2IyZcllc6hf
    oN8FNN5l7cEBd3zVMyhJ8yCoUdKYh7HsLXRpiArQbdvtPE+Jqv9M2Y5n2luWdb4VkwOn
    Wk2S1SfZ2N3FsRTDqx0vi25FVmC6vCvuaZogANWP4q6kjCjw+Ft10E7yupTRpZizgIbb
    5zPQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563885;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ti7wpap7OKPtBty/ilhF07cZXxS6hyTWcz0abirc9Nk=;
    b=elofRk4c6+a0/b7PP9QcqvXDvSWVEb5gDa8oAMVpm4vMk6jdtWIkhVRdaQG6anPHjs
    rze8P0ywimL3CGv4VwC2MCHyx5V03dXo6+yaNiNPsEstbtFafrFTBl8btXejoOGbNWQw
    4QWm/GibrNHC3k0EwzwNOOl+asl/XaG+4lZkBlOmCD9izIGAth3l2WKT4/3T6k4fMsJT
    CKDyLDJoVQccGr6oXvtIjFRouK+g7tijGkc2NSnCIOxG2T4PAyQYtjTf1PkxF+z4VDir
    0nRHFlteuEjT/O09+DlcuzGLtnr+QcO0sEpeCQhByhMSKQk1JtMsSzqVrN14Wok3nZRq
    eVxQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 01/38] tools: add API to work with several bits at once
Date: Tue,  1 Jun 2021 18:10:41 +0200
Message-Id: <20210601161118.18986-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new API to test whether a fixed number of bits is clear or
set, and to clear or set them all at once.

The caller has to make sure the input bit number is a multiple of
BITS_PER_LONG.

This API avoids looping over each bit in a known range just to check
whether all of them are either clear or set.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_bitops.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index f0bac4a071..92f38872fb 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -77,4 +77,29 @@ static inline void bitmap_or(void *_dst, const void *_other,
         dst[i] |= other[i];
 }
 
+static inline int test_bit_long_set(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+    return val == ~0;
+}
+
+static inline int test_bit_long_clear(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+    return val == 0;
+}
+
+static inline void clear_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = 0;
+}
+
+static inline void set_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = ~0;
+}
 #endif  /* XC_BITOPS_H */


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134980.250992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zP-0005eM-AZ; Tue, 01 Jun 2021 16:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134980.250992; Tue, 01 Jun 2021 16:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zP-0005c3-68; Tue, 01 Jun 2021 16:11:35 +0000
Received: by outflank-mailman (input) for mailman id 134980;
 Tue, 01 Jun 2021 16:11:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zO-0005X1-0o
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:34 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4413065c-d112-4774-a33b-349495fd5293;
 Tue, 01 Jun 2021 16:11:32 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBR1BB
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4413065c-d112-4774-a33b-349495fd5293
ARC-Seal: i=1; a=rsa-sha256; t=1622563888; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=DgU87WsRh140kAE9TWo8r3FrWteINAYMMfUBdn8sMf+WLhlYtnUV+9gTzJ2ZhipOFk
    +o7WvTY1KoYyCYNg/mvF09j44xxF034sxcTh8r/Coc51YIcmmJVF8i8/CuOarsNGiPvH
    Ux1To3IUtDs5VAXJ/zrj8S11JEiiJk7Fpe/KNRelscm/lQwXnIwDd2Plpg14qyuW8O6i
    ewxJT4xHE287ykPD8o09R7p4oJ7IDu+okvS3lTrOXNJuBjhM/XKlPckvaf3cwSJW8a0A
    sUvK+gQ/5AA3DgjcIJUkVDpus2wG18Uc9U0XqiK44gjdxs+V0ulLGY+EHXOtkbNOQPaf
    0QSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563888;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=RFOd7y+e4U6vNTZpXSTmtkhWhMZi9HHw/wrHDQUGpSw=;
    b=KV23dBncIbEZihTRJbO4NExxqIwqU+zmmb2U04PlPeViIOHVkHmAXuPMjMetbsB+sA
    50bEcZHMKh+JzeyPG26RPoFoYAzqfLJYCV2uTAyqBwjQ58ohOl4WXvh6gFsYiUrQvM93
    KhDlexcoE06Osg5GCFfnKvqr+8NHWh7ufZU0/E5LvIrEWYlozQGUYu5nU5W82e/DZnum
    tjDNzGlFdkvbrL7hM0G3YaPSWH/UMPReJQ4e+eRP8ozXqOQinhjXeYaVNjJxfpW+Lybn
    BujRGqhEU6cxPfqql9XXrPlqPrutr/2zgdK6Uq7/g/LnhGMqMuvI8QlhIL/VCxENTSmt
    a//w==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563888;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=RFOd7y+e4U6vNTZpXSTmtkhWhMZi9HHw/wrHDQUGpSw=;
    b=PFT2fniDwBrbJmUSN5yaULUvwdA1DzSIoDeSvoQP27u7KyVOPm000P/xpuaoFuNV5X
    fVTeojkZrGthq694NBeSn6WM8g4YR6O6fsyZqkScWHepV4egq/E0Tn8gssGa/kKG9Vgv
    UAh5lL0QbRfWScmkVv75iWzBzhJEosoINZ1CQPOVr+yAYMDfngx8RdcDpmFIIvRAMck/
    qrQ1/6A670/1toUPB8t7xu94HbcYqHB9PDK/qCmajSP7a3NK2u4eQjIGpThdAq70OeaD
    +eb0ZzP8nSnOk/9dIx3DztwVkpptW3jgEEiDjNtsMjhqr7Bv51xjnaEtQ1TraJuJELtf
    WQjg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 06/38] tools: use xc_is_known_page_type
Date: Tue,  1 Jun 2021 18:10:46 +0200
Message-Id: <20210601161118.18986-7-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Verify the pfn type on the sending side, and also verify the incoming batch of pfns on the restore side.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/restore.c | 3 +--
 tools/libs/saverestore/save.c    | 6 ++++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index be259a1c6b..cccb0dcb71 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -406,8 +406,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         }
 
         type = (pages->pfn[i] & PAGE_DATA_TYPE_MASK) >> 32;
-        if ( ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) >= 5) &&
-             ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) <= 8) )
+        if ( xc_is_known_page_type(type) == false )
         {
             ERROR("Invalid type %#"PRIx32" for pfn %#"PRIpfn" (index %u)",
                   type, pfn, i);
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 92d96b0533..8d449ee0ae 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -147,6 +147,12 @@ static int write_batch(struct xc_sr_context *ctx)
 
     for ( i = 0; i < nr_pfns; ++i )
     {
+        if ( xc_is_known_page_type(types[i]) == false )
+        {
+            ERROR("Wrong type %#"PRIpfn" for pfn %#"PRIpfn, types[i], mfns[i]);
+            goto err;
+        }
+
         switch ( types[i] )
         {
         case XEN_DOMCTL_PFINFO_BROKEN:


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134981.251009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zT-0006Ai-Nf; Tue, 01 Jun 2021 16:11:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134981.251009; Tue, 01 Jun 2021 16:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zT-0006AW-JG; Tue, 01 Jun 2021 16:11:39 +0000
Received: by outflank-mailman (input) for mailman id 134981;
 Tue, 01 Jun 2021 16:11:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zS-0005Ec-0h
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:38 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.50])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7821914-ffaa-48dd-966b-98d1e9215484;
 Tue, 01 Jun 2021 16:11:31 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBR1B9
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7821914-ffaa-48dd-966b-98d1e9215484
ARC-Seal: i=1; a=rsa-sha256; t=1622563887; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=YjmQDA4/eh3J54LS6IB/jTUMgrucmiljDftMd5EfbbEYqO7565Xc5Z0QceBEhFaguu
    FA6KyT2U+N//foL6O6GZjxkepQgFvqy94h9RaRNOJKE8hllGOVRcYiJ/1gZgpMJN0j1u
    VgQUUjwce13nf6EZgmqJK6KcwnfvUHAlH4H4Y3N5qz3wi4OPo51IWmYR46EzUCv9D2Zb
    J0Xno6qVUBuAcLkG9cqy2w3qDxUkP0/xo3r/9Fb2fiJtxMOXY9X/csSR6c/3dX+sLqfe
    PzU6L13ebt4VXJrWg9q+EJ23yU8zHk3mMkzf3xoUiUSZLy2VHBDKN0lxxe2MNicJqjgF
    C9ig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563887;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5m93ctia/WnTzxirdrXBAXbAAhmoz3AnrSBGz7krrzI=;
    b=hvRDmt8H5FRof7n8T1KREYMwr4EyVRPtnWlsjqHTb/OoQ16P6xmPM9ZnRLnH2d+p7o
    r9ddcdQUyFSU8Baptt58b+ojz0976r30YMAzDEnUN9I5HIrjnQs2foYO+bpTSWB/kIIT
    Vc09t2YPC5uRl7zPnBFVp4cSkFq20jDGAAvLT6SjvykWKYLB+tlpeGrzJ/VG1jfxguDe
    lbNdHU3vJYme5kU8Etm/GNgEpw0KgUNIRPpgwt7zqrXaXThz173ZxZU3JGnAtgazvD4b
    kqcb/a3P5km9vBa76oDN8tIdori3xicH63GzUqeELh52ycHVBat4pQsFj0/jpU727CDD
    D91A==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563887;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5m93ctia/WnTzxirdrXBAXbAAhmoz3AnrSBGz7krrzI=;
    b=CVYEABEVitHSu0luEGCRyaL9r4GPwpYcPYWWs4a6AYZW1RfpRK/sGrVPWJesz/R8sF
    9QnWK/4XTsWBD/kAFUhbW3MXZeOWRzsaKD5HNTr7ieXdVu3jV75j0d4eRrS7rR5W0Af2
    1Kb2Wo0QWRv6fTPSgO+bcwnUuG2xXS3vdjokRQZhCOvBKMpKE7SfhKQwwDBaVKROrS+p
    6Sk/SI93Px8GL6YfSp+dNq/mQAuj4PdzXqrAWYDOxEYAZVUn8Q+CKdj0aBZ+tLEmqJ9/
    Gn8sNYy8+hkoHiYDpH/6RFDPriv0F9CXWOb/OvON1lgjsl+CtRpsv/WXOSLapuRij71v
    PgdA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
Date: Tue,  1 Jun 2021 18:10:44 +0200
Message-Id: <20210601161118.18986-5-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read a batch of iovecs.

In the common case of short reads, finish the individual iovs with read_exact.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
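As a self-contained sketch of the semantics: the fallback below mirrors only the simple per-iov loop (the MiniOS-style path of this patch); the real readv_exact additionally retries EINTR and uses readv() to consume as many iovs as possible per syscall. The name readv_exact_sketch is invented for this illustration:

```c
#include <sys/uio.h>
#include <unistd.h>

/* Read every iov completely, or fail. EOF before the batch is fully
 * consumed counts as an error, matching the *_exact convention. */
static int readv_exact_sketch(int fd, const struct iovec *iov, int iovcnt)
{
    for ( int i = 0; i < iovcnt; i++ )
    {
        char *p = iov[i].iov_base;
        size_t left = iov[i].iov_len;

        while ( left )
        {
            ssize_t r = read(fd, p, left);

            if ( r <= 0 )
                return -1;
            p += r;
            left -= (size_t)r;
        }
    }

    return 0;
}
```

Typical use is scattering one stream read into a fixed header and a payload buffer with a two-entry iovec array.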
 tools/libs/ctrl/xc_private.c | 55 +++++++++++++++++++++++++++++++++++-
 tools/libs/ctrl/xc_private.h |  1 +
 2 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index d94f846686..ea420b9ba8 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -659,8 +659,23 @@ int write_exact(int fd, const void *data, size_t size)
 
 #if defined(__MINIOS__)
 /*
- * MiniOS's libc doesn't know about writev(). Implement it as multiple write()s.
+ * MiniOS's libc doesn't know about readv/writev().
+ * Implement it as multiple read/write()s.
  */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc, i;
+
+    for ( i = 0; i < iovcnt; ++i )
+    {
+        rc = read_exact(fd, iov[i].iov_base, iov[i].iov_len);
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     int rc, i;
@@ -675,6 +690,44 @@ int writev_exact(int fd, const struct iovec *iov, int iovcnt)
     return 0;
 }
 #else
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc = 0, idx = 0;
+    ssize_t len;
+
+    while ( idx < iovcnt )
+    {
+        len = readv(fd, &iov[idx], min(iovcnt - idx, IOV_MAX));
+        if ( len == -1 && errno == EINTR )
+            continue;
+        if ( len <= 0 )
+        {
+            rc = -1;
+            goto out;
+        }
+        while ( len > 0 && idx < iovcnt )
+        {
+            if ( len >= iov[idx].iov_len )
+            {
+                len -= iov[idx].iov_len;
+            }
+            else
+            {
+                void *p = iov[idx].iov_base + len;
+                size_t l = iov[idx].iov_len - len;
+
+                rc = read_exact(fd, p, l);
+                if ( rc )
+                    goto out;
+                len = 0;
+            }
+            idx++;
+        }
+    }
+out:
+    return rc;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     struct iovec *local_iov = NULL;
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..5d2c7274fb 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -441,6 +441,7 @@ int xc_flush_mmu_updates(xc_interface *xch, struct xc_mmu *mmu);
 
 /* Return 0 on success; -1 on error setting errno. */
 int read_exact(int fd, void *data, size_t size); /* EOF => -1, errno=0 */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt);
 int write_exact(int fd, const void *data, size_t size);
 int writev_exact(int fd, const struct iovec *iov, int iovcnt);
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134982.251020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zV-0006TF-32; Tue, 01 Jun 2021 16:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134982.251020; Tue, 01 Jun 2021 16:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zU-0006Rp-V4; Tue, 01 Jun 2021 16:11:40 +0000
Received: by outflank-mailman (input) for mailman id 134982;
 Tue, 01 Jun 2021 16:11:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zS-0005X1-VV
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:39 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 634c5233-a18d-4ebb-9836-a0592a04f72c;
 Tue, 01 Jun 2021 16:11:33 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBS1BC
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 634c5233-a18d-4ebb-9836-a0592a04f72c
ARC-Seal: i=1; a=rsa-sha256; t=1622563888; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=H8lJPq/K477YKdql0qIATRzz7uyTOc6B28Ae8wq/QXQHwmHhuuI+ZHR1jvElGTWjYV
    eM7B8be88v9B8z8grOxNUJf2YuyvD4bmjf+XRmHwaEUmMTU1zlbOwz5YLshzxDIcGPLy
    wIQ1F0eupzLwDA+IhtyK78bixDxiEeAEnmXjV6Y+S/OctdupEY2ejYaf+vYHR3A9RUyq
    LD8AHSVjS+/veeZV1BbSOS5fr5DfRCTDIkZFr/lSie14xSuTSbX5kRsicP/w5yjQFPRs
    tS/GWzShpP34YCik9znfhQ2D9QLH81nfeuqpmjfXYhHWRk0nvhbyapxMz1gnqO3Sz1Ov
    PPBA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563888;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=sTwQgwZm9rWRjmRsqDvzAs56CeWxNcqTtl2Vld2f57Q=;
    b=C6+utwmof9OOi4xjLK2eK3ZNkf7VrIebRVjGqdisVPh9ofzjJpJC56r0cNLUdQp6bP
    vZNS3sevpo+5eQWncU7OHHrWfY1xGdAIyaaH/PzieZlf1GiVNaB7hGRTr0fcIEyFG2SZ
    OHXcc2ARFfviE5zwtNi2lY1xmaajinr/MBYZe4kRsOhG+e79zFatl0dcgKZWGmJ6FWYc
    kdXaqIOa36qjmQkBfBZY0jVHdmCJovzjaHVKRvQIU8CMJFhMzXR2iVirEtFlZb8WTlLr
    tz3+fQZtM+LcEzagdGuqBxnFmYQgYq6M6CaDJ66VxT7qo5y8SY35vDeXLQJNXuJgS2VW
    p6lg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563888;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=sTwQgwZm9rWRjmRsqDvzAs56CeWxNcqTtl2Vld2f57Q=;
    b=KB6pKtR4ZTfiqdk49MNfSbkAa7AWVIrl1ZMbYq/HM7amcx5uEK6E4YI5YovRWFau6m
    TdnCuuwI7/4YtBtUdZT0XB8pehAwaoY0EPrO7lXtutGIg4aVt653oEtqZbUsjDNgrQhk
    N2N2aIo7VTgxjbOPX5TMWnLEkVcjv7OeGPNn0pscYMxqk0Vgv5yVkMdxsaIuJui8U/dd
    ZLaaKBW1Di1H17/05oTLG6M3cD39YDrqz8D0jFnATmqazbBa/v77JIowK1KK4XL9wXUD
    FkhZaeiLfvvnBtNoV2CFPO8zYPGPWx71Q+Pe8byGRdNYyM+1W8SNxMFyHdqexOTCfw1C
    YrFQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 07/38] tools: unify type checking for data pfns in migration stream
Date: Tue,  1 Jun 2021 18:10:47 +0200
Message-Id: <20210601161118.18986-8-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a helper which decides whether a given pfn type carries data
in the migration stream.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
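For illustration, a standalone sketch of the helper (not part of the patch; the XEN_DOMCTL_PFINFO_* values are reproduced from xen/include/public/domctl.h so the sketch compiles on its own, and the switch is collapsed to a direct return rather than the ret variable used below):

```c
#include <stdbool.h>
#include <stdint.h>

/* Values as encoded in xen/include/public/domctl.h (top nibble of the
 * pfn type field). */
#define XEN_DOMCTL_PFINFO_NOTAB  (0x0U << 28)
#define XEN_DOMCTL_PFINFO_BROKEN (0xdU << 28)
#define XEN_DOMCTL_PFINFO_XALLOC (0xeU << 28)
#define XEN_DOMCTL_PFINFO_XTAB   (0xfU << 28)

/* Only XTAB (unpopulated), XALLOC (allocate-only) and BROKEN pages
 * carry no page data in the stream; every other type does. */
static bool page_type_has_stream_data(uint32_t type)
{
    switch ( type )
    {
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return false;
    default:
        return true;
    }
}
```

This single predicate replaces the three open-coded switch statements removed from restore.c and save.c below.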
 tools/libs/saverestore/common.h  | 17 ++++++++++++++++
 tools/libs/saverestore/restore.c | 34 +++++---------------------------
 tools/libs/saverestore/save.c    | 14 ++-----------
 3 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 36946e5d48..50a8479d39 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -467,6 +467,23 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
+static inline bool page_type_has_stream_data(uint32_t type)
+{
+    bool ret;
+
+    switch (type)
+    {
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = false;
+        break;
+    default:
+        ret = true;
+        break;
+    }
+    return ret;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index cccb0dcb71..700f9e74b5 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0; i < count; ++i )
     {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
+        if ( (!types ||
+              (types && page_type_has_stream_data(types[i]) == true)) &&
              !pfn_is_populated(ctx, original_pfns[i]) )
         {
             rc = pfn_set_populated(ctx, original_pfns[i]);
@@ -233,25 +232,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     {
         ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_NOTAB:
-
-        case XEN_DOMCTL_PFINFO_L1TAB:
-        case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L2TAB:
-        case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L3TAB:
-        case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L4TAB:
-        case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
+        if ( page_type_has_stream_data(types[i]) == true )
             mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-            break;
-        }
     }
 
     /* Nothing to do? */
@@ -271,14 +253,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0, j = 0; i < count; ++i )
     {
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_XTAB:
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-            /* No page data to deal with. */
+        if ( page_type_has_stream_data(types[i]) == false )
             continue;
-        }
 
         if ( map_errs[j] )
         {
@@ -413,7 +389,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
             goto err;
         }
 
-        if ( type < XEN_DOMCTL_PFINFO_BROKEN )
+        if ( page_type_has_stream_data(type) == true )
             /* NOTAB and all L1 through L4 tables (including pinned) should
              * have a page worth of data in the record. */
             pages_of_data++;
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 8d449ee0ae..bcff2d28f5 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -153,13 +153,8 @@ static int write_batch(struct xc_sr_context *ctx)
             goto err;
         }
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-        case XEN_DOMCTL_PFINFO_XTAB:
+        if ( page_type_has_stream_data(types[i]) == false )
             continue;
-        }
 
         mfns[nr_pages++] = mfns[i];
     }
@@ -177,13 +172,8 @@ static int write_batch(struct xc_sr_context *ctx)
 
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
-            switch ( types[i] )
-            {
-            case XEN_DOMCTL_PFINFO_BROKEN:
-            case XEN_DOMCTL_PFINFO_XALLOC:
-            case XEN_DOMCTL_PFINFO_XTAB:
+            if ( page_type_has_stream_data(types[i]) == false )
                 continue;
-            }
 
             if ( errors[p] )
             {


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134983.251031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zZ-0006vt-CI; Tue, 01 Jun 2021 16:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134983.251031; Tue, 01 Jun 2021 16:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zZ-0006vi-8T; Tue, 01 Jun 2021 16:11:45 +0000
Received: by outflank-mailman (input) for mailman id 134983;
 Tue, 01 Jun 2021 16:11:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zX-0005Ec-0s
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:43 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e280699-7760-4a0f-adfb-a44356807a37;
 Tue, 01 Jun 2021 16:11:32 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBR1BA
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e280699-7760-4a0f-adfb-a44356807a37
ARC-Seal: i=1; a=rsa-sha256; t=1622563887; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gISW+Gu/ZVcbfMZyUxvVAXMjU8BS7D4pv8ZvtGeCsbYtzWqCA8j3ayIU4wTvSrvgP3
    G4bitN66zGnxrgW8IA+Jwhtao7w/EJbsyGyoD7ZuCTKbKuc0SScyaLIYKI/ZJNsqasaD
    nMg4BSvuJzosmK80Y6psVd6yRFxEIX0xio6kPTuB/Kr3+m7mhwlOCTPG1+HSKpW+3/KI
    1yxeDyy8d9vLXeeiJU3WtpqyxVUFPTE42CefxTWO7yo3QvuiH0HpIeY3GlXqNj3dmuxs
    aR0cmj7G2aabMfEPmyxz2HuerX+b8IBbuPOnefG3/3wnyO59Jg/1MwAuFoqgEtQz9uh6
    GrGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563887;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=8EOjWvQk239Wp/aWNifZAKC+Sjf+9ZihUrmLiKkJAAc=;
    b=jQtdKSlb9FH79dzWoVHsq+ujCn1n/8KRNjKaVo8a5JhPySvEmEgswqcmPRAhLxmZnN
    rPIrQgwycooC1awY+AEii3QeQOGCQaoAgjlLPa0igEwBv9il2BiI/L0vTkxKTFiud2ed
    Pt+l/YLilb0RC0KipK0/QP7mbBgDd6aVawfYDz7hUUZBXeA8KWdcxjwvix2IuJViwL7O
    wvfvpHBeJUCbfvEMqAWFAEAiNUQyOikhs95pEN3ZEXOZR3fpBURQphfQy1usoZu22LeI
    3jG9+TzBJLiP7K0phGwU8FqYuVIx/RicY4Tlp/3PU77+JXE7EpQmAILFHJd866+JSQg5
    NKYg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563887;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=8EOjWvQk239Wp/aWNifZAKC+Sjf+9ZihUrmLiKkJAAc=;
    b=eO8dXLOMjMQzUxkYPzl55FUozhW0zM/NyzUQ5Bsb7bzJrJxU2+ycgTzHyD53pMVHkg
    CtnaH/07//v16yl0yAZ0oDD9Mru5KiGvB8jRPtZXXbqaR97ySCpMFXIz66Agm0fPozmr
    M6mzR8HMMWFpOexfNKUiJvVJSlgBxui1QDv56AKqJlBV5CAPG7k38+j/M4fwHt/2KfnK
    zhMD+IKME9f9fgm1oQQYEdADxcZ5P53VoGUz/XKvv3RjIX6x80yKdU7NmJLpRRW5tCyv
    01DhK8f0YTK7x/5V0qOBbBit+J/eUSm/TzLLpqFMaFrSg/+fp57Lu7KwzDBWRMzleVrv
    phFQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to libxenctrl
Date: Tue,  1 Jun 2021 18:10:45 +0200
Message-Id: <20210601161118.18986-6-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Users of xc_get_pfn_type_batch may want to sanity-check the data
returned by Xen. Add a simple helper for this purpose.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
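For illustration, a standalone sketch of the helper (not part of the patch; the XEN_DOMCTL_PFINFO_* values are reproduced from xen/include/public/domctl.h, and uint64_t stands in for xen_pfn_t, so the sketch compiles on its own):

```c
#include <stdbool.h>
#include <stdint.h>

/* Page type encodings from xen/include/public/domctl.h: the top nibble
 * of the pfn holds the type. */
#define XEN_DOMCTL_PFINFO_NOTAB   (0x0U << 28)
#define XEN_DOMCTL_PFINFO_L1TAB   (0x1U << 28)
#define XEN_DOMCTL_PFINFO_L2TAB   (0x2U << 28)
#define XEN_DOMCTL_PFINFO_L3TAB   (0x3U << 28)
#define XEN_DOMCTL_PFINFO_L4TAB   (0x4U << 28)
#define XEN_DOMCTL_PFINFO_LPINTAB (0x8U << 28)
#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU << 28)
#define XEN_DOMCTL_PFINFO_XALLOC  (0xeU << 28)
#define XEN_DOMCTL_PFINFO_XTAB    (0xfU << 28)

/* Accept exactly the type values Xen may legitimately return: normal
 * pages, L1..L4 pagetables (optionally pinned), and the three
 * data-less types. Anything else indicates a corrupt reply. */
static bool xc_is_known_page_type(uint64_t type)
{
    switch ( type )
    {
    case XEN_DOMCTL_PFINFO_NOTAB:
    case XEN_DOMCTL_PFINFO_L1TAB:
    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L2TAB:
    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L3TAB:
    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L4TAB:
    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return true;
    default:
        return false;
    }
}
```

A caller would apply this to every entry of the array filled in by xc_get_pfn_type_batch before acting on it.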
 tools/libs/ctrl/xc_private.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 5d2c7274fb..afb08aafe1 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -421,6 +421,39 @@ void *xc_map_foreign_ranges(xc_interface *xch, uint32_t dom,
 int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom,
                           unsigned int num, xen_pfn_t *);
 
+/* Sanity check for types returned by Xen */
+static inline bool xc_is_known_page_type(xen_pfn_t type)
+{
+    bool ret;
+
+    switch (type)
+    {
+    case XEN_DOMCTL_PFINFO_NOTAB:
+
+    case XEN_DOMCTL_PFINFO_L1TAB:
+    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L2TAB:
+    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L3TAB:
+    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L4TAB:
+    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = true;
+        break;
+    default:
+        ret = false;
+        break;
+    }
+    return ret;
+}
+
 void bitmap_64_to_byte(uint8_t *bp, const uint64_t *lp, int nbits);
 void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134984.251035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zZ-0006zH-TP; Tue, 01 Jun 2021 16:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134984.251035; Tue, 01 Jun 2021 16:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zZ-0006xx-Io; Tue, 01 Jun 2021 16:11:45 +0000
Received: by outflank-mailman (input) for mailman id 134984;
 Tue, 01 Jun 2021 16:11:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zX-0005X1-Vi
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:44 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.169])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81fbc3a2-15f3-4487-9727-c00067a00375;
 Tue, 01 Jun 2021 16:11:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBT1BE
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81fbc3a2-15f3-4487-9727-c00067a00375
ARC-Seal: i=1; a=rsa-sha256; t=1622563889; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=T9QrHmjHticlOV5jT7XpL4jjdUnulJExOzssvTbvpX60wpSvZyO9ov7fUxpNbmF8BL
    g07dtudMcsazpJVJFxnAvJ0CbMB9BwB15OtZKb1X8uciw2h0QuJ83AQPlANkVmLb2g4v
    G33hNxTSFxEdhnjfj02XcSWzxIEUDZ4nGpkAA9rcCFhzkhDZFpwT3WcmiFuoQmuLw3IG
    6cWPG3Reu162mWKvFoJxEAa+mkaFdJbX2F5fsdMSfWsKoAZWdQOFH3qJ24V4ZqpdPeE1
    7SN4X6IkVg7VThgmhP2BbycjHjK9pACxpySLHLuIgD7ZKeOc4s0i3ESTGfS/Hl1kAdNE
    evMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563889;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5ichTSe1XRD84BJzTM12jbhc1IFgz0uXqluaqXpByoA=;
    b=ZPQOqOMWFGKx8XkLHojU2CYmvcs4NIy2WrFgZEttDQmjmZ4LtbvrlEZ2gqa+pswchB
    EDzF8134fStSRlndZTWwQGiYJ6V1AmvEPQKY7CkD12JbhcfC1JmGLSTqqFzbVThcwus3
    wSu/ety/n3pgDCTEoECb1PysUN94j0nG7++0jomdTJ0IaKCLQKZ6+e1HCdCYNlbZIMVt
    W+FJRjCSekYJWON/lcwSIxN509pB8KmgkAxhqcrFrqpRnu98S4X0jl7hgM4+3l1HqSks
    ORk8UFcL16IBOnWz03HzTJw2M4beoz58f/sAx+4h5GmIEe4Y+lN+OouKs6NNp/tgPhwn
    S4/Q==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563889;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5ichTSe1XRD84BJzTM12jbhc1IFgz0uXqluaqXpByoA=;
    b=bEZ+OWbk1tg7oZ5SRTFU6lDhU0LescHmqXSi48yjAbmsag1gAtNFCZ0h6RNNgTIQFy
    Q3FuZO0dlhzO1Cayc4FQ/UECK8TL/8t88uUSoFtFWFthuQqBYjw3a/v0Oco8p/RFd79y
    BEP0tptNLxxaHJh4bHYzyXzwjG6xWXS0RX/Y7PaPGdqedIX493aOaSBYlIwfUDilNKww
    VNFWIGH8zwZT3pJ9Gr68GxZYh3cYncjKM/Vh7k4yndwB57NWvbwsq6D6XG709aAajm7e
    +otmb88g75ABukbnR7RMSWKCfis0KexEwArRWpWgMLo99Rw+c3spOa5vRzRWouLuiVud
    IeFQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays once
Date: Tue,  1 Jun 2021 18:10:49 +0200
Message-Id: <20210601161118.18986-10-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The hot path 'send_dirty_pages' is supposed to do just one thing: sending.
The other end, 'handle_page_data', is supposed to do just receiving.

But instead, both do additional costly work such as memory allocations and
data moving. Do the allocations once; the array sizes are a compile-time
constant. Avoid unneeded copying of data by receiving it directly into
mapped guest memory.

This patch is just preparation; subsequent changes will populate the arrays.

Once all changes are applied, migration of a busy HVM domU changes as follows:

Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error

With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  | 8 ++++++++
 tools/libs/saverestore/restore.c | 8 ++++++++
 tools/libs/saverestore/save.c    | 4 +++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index f5fe23caad..80b2e878aa 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -223,6 +223,12 @@ static inline int update_blob(struct xc_sr_blob *blob,
     return 0;
 }
 
+struct xc_sr_save_arrays {
+};
+
+struct xc_sr_restore_arrays {
+};
+
 struct xc_sr_context
 {
     xc_interface *xch;
@@ -260,6 +266,7 @@ struct xc_sr_context
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
+            struct xc_sr_save_arrays *m;
         } save;
 
         struct /* Restore data. */
@@ -311,6 +318,7 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            struct xc_sr_restore_arrays *m;
         } restore;
     };
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 700f9e74b5..a6cf9ee41c 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -739,6 +739,13 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    if ( !ctx->restore.m ) {
+        ERROR("Unable to allocate memory for arrays");
+        rc = -1;
+        goto err;
+    }
+
  err:
     return rc;
 }
@@ -757,6 +764,7 @@ static void cleanup(struct xc_sr_context *ctx)
         xc_hypercall_buffer_free_pages(
             xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->restore.p2m_size)));
 
+    free(ctx->restore.m);
     free(ctx->restore.buffered_records);
     free(ctx->restore.populated_pfns);
 
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 760ca04a84..1662e3ee50 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -853,8 +853,9 @@ static int setup(struct xc_sr_context *ctx)
     ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
                                   sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
+    ctx->save.m = malloc(sizeof(*ctx->save.m));
 
-    if ( !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.m || !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
     {
         ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
               " deferred pages");
@@ -886,6 +887,7 @@ static void cleanup(struct xc_sr_context *ctx)
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));
     free(ctx->save.deferred_pages);
     free(ctx->save.batch_pfns);
+    free(ctx->save.m);
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134986.251053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zd-0007em-C6; Tue, 01 Jun 2021 16:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134986.251053; Tue, 01 Jun 2021 16:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zd-0007eD-5s; Tue, 01 Jun 2021 16:11:49 +0000
Received: by outflank-mailman (input) for mailman id 134986;
 Tue, 01 Jun 2021 16:11:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zc-0005Ec-17
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:48 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f02d3d3-25dd-40bb-a90b-7a56cafa0d5f;
 Tue, 01 Jun 2021 16:11:32 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBP1B7
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f02d3d3-25dd-40bb-a90b-7a56cafa0d5f
ARC-Seal: i=1; a=rsa-sha256; t=1622563886; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=rTKxOAkKqYFJL5L7GOducmz8OOO1xHg+Am9DHOVEIThR2y71qgA8qMEqUn3HXCS82l
    6D6ABoIHBvhSlR9CZMALQWBO+AfyINY4HR6duXnGenx9CtJEfzShCwFm6yeAQTZvQn2c
    g+kU9Gd2Yvx4s6lt3K0Cg2PceL4O3nnXhARYRiW0JakF/zr3sGvZONAbwr4IWo9INVep
    UBs6HD/ro0SeILfjKpBfJdGDqMLRUtJoJ0qXaQOhK4HxVL+VqROjRRzZ7lDuIB3lg6Ba
    tidqqlXoVMIF1oBi+ICHtlm7C+mDBVv66gCBcaFLpiqMiVZYtAJ1hI22grtrmkBy/UVv
    EVdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563886;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=q97eEfHeTtV0cDWNiuGMRdoBkuW+It3JWF9KWqcO/JA=;
    b=SSyySTzshn8kEOHxegJbOF9NyOmyCNKBOlv8OwbvTSmrL+/cA5XWYwmuo5fGxKGULN
    rfFzd/kXuhwqoHjpEkAEd2d1VG4g5Rhp4lZx7EtSRfe6kvzXwKmSZd4km4fv7KfaN7DD
    sEWrakTta9mRArnZMZ1GKZHfIzwzYi9sUhfK3dQiJOUfrmlXrpkcKquTDEsuFiYvOwEl
    muPbLDjBxJtEco8io+KVCxTni03X2HTkMCrRrZokvuuVuVskoIQiVULyxTKIsWDO/a38
    ft7okdV922U0qqWng/27ihvkB/cIgRqrX9VZScwkYFft4FXxAKwo+COLidPZkCOesZ/w
    lSQQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563886;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=q97eEfHeTtV0cDWNiuGMRdoBkuW+It3JWF9KWqcO/JA=;
    b=lM4O7DGKUfc+l8/uC42qfSOM6h4+DtCfrOtiBAKTdo5MgUSCOy26LhszcdSjhS7kjF
    0jCv7eJ3E4IitkJzIJWSv7EjpM29Be1TfA3BPehj8QsuqYXOWS0w+gEYRdHDpNiWQb3P
    AUTFySO+n0cEo7ORuucXWnkfv9BUSGHmcG2RqcGdrkF2wjcZWcthPVTL5PXEQnjLq6c+
    lvu3OXIJ3UIVXLO3Y6Jyrd/xqhW5JSFDGYHZq7xTZ+zUbZa7StjvIqbtr6rCcSWxmqB3
    QsEzp7laZa4pu9KbOkU9YcE/ck6oVtv7MGHGf4VNQaNFIesv5b8opTH+l9wjnsCFAOfI
    finA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 02/38] xl: fix description of migrate --debug
Date: Tue,  1 Jun 2021 18:10:42 +0200
Message-Id: <20210601161118.18986-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xl migrate --debug used to track every pfn in every batch of pages, but
those days are gone. Adjust the help text to describe what --debug does
today.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in          | 4 +++-
 tools/libs/guest/xg_sr_save.c | 2 +-
 tools/xl/xl_cmdtable.c        | 2 +-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index e2176bd696..ed3f4dee1e 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -481,7 +481,9 @@ domain.
 
 =item B<--debug>
 
-Display huge (!) amount of debug information during the migration process.
+Verify transferred domU page data. All memory will be transferred one more
+time to the destination host while the domU is paused, and compared with
+the result of the initial transfer while the domU was still running.
 
 =item B<-p>
 
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 2ba7c3200c..51542a98c8 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -752,7 +752,7 @@ static int send_domain_memory_live(struct xc_sr_context *ctx)
     if ( rc )
         goto out;
 
-    if ( ctx->save.debug && ctx->stream_type != XC_STREAM_PLAIN )
+    if ( ctx->save.debug )
     {
         rc = verify_frames(ctx);
         if ( rc )
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 661323d488..6fd18856c0 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -172,7 +172,7 @@ const struct cmd_spec cmd_table[] = {
       "                migrate-receive [-d -e]\n"
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
-      "--debug         Print huge (!) amount of debug during the migration process.\n"
+      "--debug         Verify transferred domU page data.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
     },


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134987.251058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6ze-0007kS-1E; Tue, 01 Jun 2021 16:11:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134987.251058; Tue, 01 Jun 2021 16:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zd-0007jB-ML; Tue, 01 Jun 2021 16:11:49 +0000
Received: by outflank-mailman (input) for mailman id 134987;
 Tue, 01 Jun 2021 16:11:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zc-0005X1-Vp
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:49 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.169])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d95e56a8-658a-4556-a121-5a0a161d398a;
 Tue, 01 Jun 2021 16:11:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBU1BH
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d95e56a8-658a-4556-a121-5a0a161d398a
ARC-Seal: i=1; a=rsa-sha256; t=1622563890; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=YMR9cnE/7YTMLEsOAkqAaGI/s9hiUMB9XThQFl0NtmytdriDfkpznVZNJ5H9zOIuiq
    SB8U1oZikyG0GXcnyYvjvaZRxAuL3+gCuMuEmkQaZ1gEb/uQD0mOeMp4hp5vQzR5g4qo
    M0KlcQlZC7TOpXMC7c4KbAqv78NQVVKI8tPgcuEaQhleMUPT9fGdNWoqvTElzXAtGVhv
    RjXsCkmW1+AsTMos6fUsYcGOC4Kxc6ej5G806z2FT0lnR2IALlSb3llwuWSBBNR1FAeJ
    vBFExwNHFwD3xwMx7kxBWkXxq1x/he6LIftLl9Auo+UVXlCrsIBVTQsdthOhqVnAWRPs
    s5EA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563890;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yKL38YLQb6Sra5wh9hHuxAm/2IobWvLNV1y2lL6e8b0=;
    b=p4YmyBC9+RbKz2QbG/UltqPdFVcJhVy/agk4dz6I7fPe7w3g6b6vqEe8wxX4sy14pj
    J4vyp4A1cZLShGuGGjICkoGxWxRFfh4oHKSHJkKY2jbETn+dZof6QF/c6BkAlJcOU2hG
    XCiORVT+sP7B52ZB4dtFUuf7zk130chphDl/xshi/hocRl9hidJhfzxpUr556VozCdas
    3CEOdVaBPMQoIAAZXMbOOwO6ki+kqTeuioRDUJ23Wbq2iqSvrgD/eI05OH/2Q2AHqrel
    Mx1BHxjZ/I049hSpuB7QSpaZj/1UML+CqyA4WNUIc9CiDNH5LfcjWs3Dvbe6SqEk0CQB
    DRWQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563890;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yKL38YLQb6Sra5wh9hHuxAm/2IobWvLNV1y2lL6e8b0=;
    b=S/4senCq7s40tD0AErAmsIDsxFNOWSmL/5lyhGIVEG0hbn0AEHz82+rBKNT56SRWk0
    FJgKATfvzGEbtKO1w5ojKjm0dFnC1PRzjpm3QDphUcTB6qR7Um+HIZIqNex2mCLOCJiB
    IRC8nSEJNg644oKCgOk6RDlO9QTkYvmkUKodUsUR8xVVzFafNXDBXCNB3aTuKUyoaefD
    0Xczd9EvaQbhMSzQ9aBhouWjurTZ2dS/C6ZIOks6z3acGTFR4TqS+kQm6D+25FpGDgrM
    GsfxPSk+56+oKsmQEaPGLMBAEU9sFT1Nw1xxHVeLCFG0YZ0PrhHsrG+cXSdYlBy7ESdD
    JM2Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 12/38] tools/guest: save: move types array
Date: Tue,  1 Jun 2021 18:10:52 +0200
Message-Id: <20210601161118.18986-13-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving the types array into
preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 2335e6d27b..81d4a79b13 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -227,6 +227,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
     /* write_batch: Mfns of the batch pfns. */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    /* write_batch: Types of the batch pfns. */
+    xen_pfn_t types[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 1719ff08ba..be65286570 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Types of the batch pfns. */
-    types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
     errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
@@ -116,7 +114,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -274,7 +272,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(local_pages);
     free(guest_data);
     free(errors);
-    free(types);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134989.251075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zi-00009s-DQ; Tue, 01 Jun 2021 16:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134989.251075; Tue, 01 Jun 2021 16:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zi-00009U-8h; Tue, 01 Jun 2021 16:11:54 +0000
Received: by outflank-mailman (input) for mailman id 134989;
 Tue, 01 Jun 2021 16:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zh-0005Ec-1H
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:53 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2cc1909-2698-480b-8f54-aba8ac30e54e;
 Tue, 01 Jun 2021 16:11:34 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBS1BD
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2cc1909-2698-480b-8f54-aba8ac30e54e
ARC-Seal: i=1; a=rsa-sha256; t=1622563889; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=rnJXdhgrXacgywk38aKsQE7O0+NhpB+PxH/eZKPkEcLT/M19R5KSVeg8kCZYBLfUUA
    AdK8ByK7g8edyjpdC7wtaQdGV+riv8mzN0QZrHHVT5y415uX87SFAHYOCO2DpwLRybAt
    NKdF1K64VpqL/FYm0MwUhnwW3XpTyYDIxhk5LwO67BI3a1GlxlmTD3UyVn/JVhku7nq4
    yZz1zI+OXSZOuaQghShoAUHIL2xfbTZLYn6RsoahVmr9zDdW7DcEZkUg/J1ghXqmdWf7
    VSHUg0yFXt2NyvLmiNrYyw9lrF25GKrq6yJpQ55Svxti2YNMGqLcm/bFJ7ViY0af9RY4
    bVZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563889;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=cDil57jxI4Z5Z9nbXjVlKKlvOaQMfsb66RRaO8m9ED8=;
    b=eUW5+lj09D+EqeYYhWwyDOnODC1rQFKFUnFVTvraE2R2lM4Frg5cOvunGY6tW5XPMf
    9WNVbsiXm/xvd2/LkVdhmd0GgV2Zqk+k+RZmn81G+4wqIUriS+nslfVg0ZXXRtxtEwm6
    NInQxJoQLUvRCR1kxHcrv/otELiU7bcy18W56VhYV/S+dFLXRRWglzVgHUIVYlS3jvfy
    6vEmdDhlC6rDuRcC2KiOtsWG02mipc+rj1LX0FjNnoS2mP9I9j1Y0lyYCAR7uZX0r8ij
    p0WgLa8Alv0NiBcWfDihUQcZUn/kj3PzktUykj5jDeUS8YjDxV9EamF4SZWum1zZCN7K
    rKDg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563889;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=cDil57jxI4Z5Z9nbXjVlKKlvOaQMfsb66RRaO8m9ED8=;
    b=hNSJVlEhOJlKLSnCwsYffA1aYwF0tB02Ok2uJ21W3vA/ExnRX0/9xUPLYs1JpLlotg
    m4vphErILK7zWQwilFZ2DvEgeebVRQ8nouo91ktzb9yHleGt+srZkLPC//+yOXLJN8Rk
    DkZ1ezBLEGpahNxBOIV5rbPrzAM3hk059Z/0681Th/S+0HoqJvW8Th/S5WtTP2gr1YaU
    nQ5M+vGShR0Ey+L00f5Q6O2JtU9q5oM2RpJ/MyEXMhM5hbNvU70gf9iVGZ3TYREq26uM
    KUO2ql8E6ybCmzCisCzTDtByTrYo7v50wxzaCrVHm61HZnag1J95lamGSp6dGIv2e1Pi
    3G9Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 08/38] tools: show migration transfer rate in send_dirty_pages
Date: Tue,  1 Jun 2021 18:10:48 +0200
Message-Id: <20210601161118.18986-9-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Show how fast domU pages are transferred in each iteration.

The relevant number is how fast the pfns travel, not how much protocol
overhead exists, so the reported MiB/sec covers just the pfns.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 47 +++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 50a8479d39..f5fe23caad 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -250,6 +250,8 @@ struct xc_sr_context
             bool debug;
 
             unsigned long p2m_size;
+            size_t pages_sent;
+            size_t overhead_sent;
 
             struct precopy_stats stats;
 
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index bcff2d28f5..760ca04a84 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -1,5 +1,6 @@
 #include <assert.h>
 #include <arpa/inet.h>
+#include <time.h>
 
 #include "common.h"
 
@@ -238,6 +239,8 @@ static int write_batch(struct xc_sr_context *ctx)
     iov[3].iov_len = nr_pfns * sizeof(*rec_pfns);
 
     iovcnt = 4;
+    ctx->save.pages_sent += nr_pages;
+    ctx->save.overhead_sent += sizeof(rec) + sizeof(hdr) + nr_pfns * sizeof(*rec_pfns);
 
     if ( nr_pages )
     {
@@ -357,6 +360,43 @@ static int suspend_domain(struct xc_sr_context *ctx)
     return 0;
 }
 
+static void show_transfer_rate(struct xc_sr_context *ctx, struct timespec *start)
+{
+    xc_interface *xch = ctx->xch;
+    struct timespec end = {}, diff = {};
+    size_t ms, MiB_sec = ctx->save.pages_sent * PAGE_SIZE;
+
+    if (!MiB_sec)
+        return;
+
+    if ( clock_gettime(CLOCK_MONOTONIC, &end) )
+        PERROR("clock_gettime");
+
+    if ( (end.tv_nsec - start->tv_nsec) < 0 )
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec - 1;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec + (1000U*1000U*1000U);
+    }
+    else
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec;
+    }
+
+    ms = (diff.tv_nsec / (1000U*1000U));
+    if (!ms)
+        ms = 1;
+    ms += (diff.tv_sec * 1000U);
+
+    MiB_sec *= 1000U;
+    MiB_sec /= ms;
+    MiB_sec /= 1024U*1024U;
+
+    errno = 0;
+    IPRINTF("%s: %zu bytes + %zu pages in %ld.%09ld sec, %zu MiB/sec", __func__,
+            ctx->save.overhead_sent, ctx->save.pages_sent, diff.tv_sec, diff.tv_nsec, MiB_sec);
+}
+
 /*
  * Send a subset of pages in the guests p2m, according to the dirty bitmap.
  * Used for each subsequent iteration of the live migration loop.
@@ -370,9 +410,15 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     xen_pfn_t p;
     unsigned long written;
     int rc;
+    struct timespec start = {};
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
+    ctx->save.pages_sent = 0;
+    ctx->save.overhead_sent = 0;
+    if ( clock_gettime(CLOCK_MONOTONIC, &start) )
+        PERROR("clock_gettime");
+
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
         if ( !test_bit(p, dirty_bitmap) )
@@ -396,6 +442,7 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     if ( written > entries )
         DPRINTF("Bitmap contained more entries than expected...");
 
+    show_transfer_rate(ctx, &start);
     xc_report_progress_step(xch, entries, entries);
 
     return ctx->save.ops.check_vm_state(ctx);


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134990.251079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zj-0000GC-5D; Tue, 01 Jun 2021 16:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134990.251079; Tue, 01 Jun 2021 16:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zi-0000ES-Qm; Tue, 01 Jun 2021 16:11:54 +0000
Received: by outflank-mailman (input) for mailman id 134990;
 Tue, 01 Jun 2021 16:11:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zh-0005X1-Vz
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:54 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 083036e1-0830-4d8b-ad5b-37494827089f;
 Tue, 01 Jun 2021 16:11:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBT1BF
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 083036e1-0830-4d8b-ad5b-37494827089f
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 10/38] tools/guest: save: move batch_pfns
Date: Tue,  1 Jun 2021 18:10:50 +0200
Message-Id: <20210601161118.18986-11-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The batch_pfns array has a fixed size of MAX_BATCH_SIZE elements and is
already allocated up front in setup(). Move it into the preallocated
xc_sr_save_arrays area so it no longer needs its own allocation.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h |  2 +-
 tools/libs/saverestore/save.c   | 25 +++++++++++--------------
 2 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 80b2e878aa..0d94a4c01e 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -224,6 +224,7 @@ static inline int update_blob(struct xc_sr_blob *blob,
 }
 
 struct xc_sr_save_arrays {
+    xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
@@ -261,7 +262,6 @@ struct xc_sr_context
 
             struct precopy_stats stats;
 
-            xen_pfn_t *batch_pfns;
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 1662e3ee50..b11ce70a11 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -77,7 +77,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 
 /*
  * Writes a batch of memory as a PAGE_DATA record into the stream.  The batch
- * is constructed in ctx->save.batch_pfns.
+ * is constructed in ctx->save.m->batch_pfns.
  *
  * This function:
  * - gets the types for each pfn in the batch.
@@ -128,12 +128,12 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
-                                                      ctx->save.batch_pfns[i]);
+                                                      ctx->save.m->batch_pfns[i]);
 
         /* Likely a ballooned page. */
         if ( mfns[i] == INVALID_MFN )
         {
-            set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+            set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
             ++ctx->save.nr_deferred_pages;
         }
     }
@@ -179,7 +179,7 @@ static int write_batch(struct xc_sr_context *ctx)
             if ( errors[p] )
             {
                 ERROR("Mapping of pfn %#"PRIpfn" (mfn %#"PRIpfn") failed %d",
-                      ctx->save.batch_pfns[i], mfns[p], errors[p]);
+                      ctx->save.m->batch_pfns[i], mfns[p], errors[p]);
                 goto err;
             }
 
@@ -193,7 +193,7 @@ static int write_batch(struct xc_sr_context *ctx)
             {
                 if ( rc == -1 && errno == EAGAIN )
                 {
-                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+                    set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
                     ++ctx->save.nr_deferred_pages;
                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
                     --nr_pages;
@@ -224,7 +224,7 @@ static int write_batch(struct xc_sr_context *ctx)
     rec.length += nr_pages * PAGE_SIZE;
 
     for ( i = 0; i < nr_pfns; ++i )
-        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.batch_pfns[i];
+        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.m->batch_pfns[i];
 
     iov[0].iov_base = &rec.type;
     iov[0].iov_len = sizeof(rec.type);
@@ -296,9 +296,9 @@ static int flush_batch(struct xc_sr_context *ctx)
 
     if ( !rc )
     {
-        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.batch_pfns,
+        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.m->batch_pfns,
                                     MAX_BATCH_SIZE *
-                                    sizeof(*ctx->save.batch_pfns));
+                                    sizeof(*ctx->save.m->batch_pfns));
     }
 
     return rc;
@@ -315,7 +315,7 @@ static int add_to_batch(struct xc_sr_context *ctx, xen_pfn_t pfn)
         rc = flush_batch(ctx);
 
     if ( rc == 0 )
-        ctx->save.batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
+        ctx->save.m->batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
 
     return rc;
 }
@@ -850,14 +850,12 @@ static int setup(struct xc_sr_context *ctx)
 
     dirty_bitmap = xc_hypercall_buffer_alloc_pages(
         xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
-    ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
-                                  sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
     ctx->save.m = malloc(sizeof(*ctx->save.m));
 
-    if ( !ctx->save.m || !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.m || !dirty_bitmap || !ctx->save.deferred_pages )
     {
-        ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
+        ERROR("Unable to allocate memory for dirty bitmaps and"
               " deferred pages");
         rc = -1;
         errno = ENOMEM;
@@ -886,7 +884,6 @@ static void cleanup(struct xc_sr_context *ctx)
     xc_hypercall_buffer_free_pages(xch, dirty_bitmap,
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));
     free(ctx->save.deferred_pages);
-    free(ctx->save.batch_pfns);
     free(ctx->save.m);
 }
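
The pattern this patch introduces, and which the following patches in the series repeat for the other arrays, can be sketched in isolation: fixed-size working arrays are grouped into one struct that setup() allocates once, so the batch hot path performs no allocation at all. Names and sizes below are illustrative, not Xen's actual definitions:

```c
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024

/* All per-batch scratch arrays live in one preallocated block. */
struct save_arrays {
    unsigned long batch_pfns[MAX_BATCH_SIZE];
    unsigned long mfns[MAX_BATCH_SIZE];
    int errors[MAX_BATCH_SIZE];
};

struct ctx {
    struct save_arrays *m;      /* allocated once in setup() */
    unsigned int nr_batch_pfns;
};

static int setup(struct ctx *ctx)
{
    /* One allocation up front instead of one malloc() per array. */
    ctx->m = malloc(sizeof(*ctx->m));
    return ctx->m ? 0 : -1;
}

static void add_to_batch(struct ctx *ctx, unsigned long pfn)
{
    /* Hot path: a plain array store, no malloc()/free() per batch. */
    ctx->m->batch_pfns[ctx->nr_batch_pfns++] = pfn;
}

static void cleanup(struct ctx *ctx)
{
    free(ctx->m);
}
```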
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:11:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:11:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134991.251097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zn-0001EJ-O0; Tue, 01 Jun 2021 16:11:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134991.251097; Tue, 01 Jun 2021 16:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zn-0001D1-FS; Tue, 01 Jun 2021 16:11:59 +0000
Received: by outflank-mailman (input) for mailman id 134991;
 Tue, 01 Jun 2021 16:11:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zm-0005Ec-25
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:58 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc1433c2-08b3-4c9d-af2f-27f9947fd7e2;
 Tue, 01 Jun 2021 16:11:34 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBU1BG
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc1433c2-08b3-4c9d-af2f-27f9947fd7e2
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 11/38] tools/guest: save: move mfns array
Date: Tue,  1 Jun 2021 18:10:51 +0200
Message-Id: <20210601161118.18986-12-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove allocation from hotpath, move mfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 0d94a4c01e..2335e6d27b 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -225,6 +225,8 @@ static inline int update_blob(struct xc_sr_blob *blob,
 
 struct xc_sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Mfns of the batch pfns. */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index b11ce70a11..1719ff08ba 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = NULL, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Mfns of the batch pfns. */
-    mfns = malloc(nr_pfns * sizeof(*mfns));
     /* Types of the batch pfns. */
     types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
@@ -118,7 +116,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !mfns || !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !types || !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -277,7 +275,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(guest_data);
     free(errors);
     free(types);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134994.251102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zo-0001Nx-KR; Tue, 01 Jun 2021 16:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134994.251102; Tue, 01 Jun 2021 16:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zo-0001LY-85; Tue, 01 Jun 2021 16:12:00 +0000
Received: by outflank-mailman (input) for mailman id 134994;
 Tue, 01 Jun 2021 16:11:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zm-0005X1-Vx
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:11:59 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.170])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6934594f-8962-4454-8048-10c9b500185c;
 Tue, 01 Jun 2021 16:11:38 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBY1BP
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6934594f-8962-4454-8048-10c9b500185c
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 20/38] tools/guest: restore: move mfns array
Date: Tue,  1 Jun 2021 18:11:00 +0200
Message-Id: <20210601161118.18986-21-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove allocation from hotpath, move mfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  | 2 ++
 tools/libs/saverestore/restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 96cd60e0d6..c7291bb5ca 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -239,6 +239,8 @@ struct xc_sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
     uint32_t types[MAX_BATCH_SIZE];
+    /* process_page_data */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 815a2d5a12..aadf322428 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -205,7 +205,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
                              xen_pfn_t *pfns, uint32_t *types, void *page_data)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns));
+    xen_pfn_t *mfns = ctx->restore.m->mfns;
     int *map_errs = malloc(count * sizeof(*map_errs));
     int rc;
     void *mapping = NULL, *guest_page = NULL;
@@ -213,7 +213,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !mfns || !map_errs )
+    if ( !map_errs )
     {
         rc = -1;
         ERROR("Failed to allocate %zu bytes to process page data",
@@ -299,7 +299,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
     free(map_errs);
-    free(mfns);
 
     return rc;
 }
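
The cost of preallocating these arrays is small and fixed. Assuming MAX_BATCH_SIZE is 1024 and a 64-bit xen_pfn_t (both assumptions made for this sketch; the real values live in the Xen headers), each array occupies 8 KiB:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_BATCH_SIZE 1024
typedef uint64_t xen_pfn_t;     /* assumed width for this sketch */

/* Bytes consumed by one preallocated batch array:
 * 1024 entries * 8 bytes = 8192 bytes (8 KiB). */
static size_t array_bytes(void)
{
    return MAX_BATCH_SIZE * sizeof(xen_pfn_t);
}
```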


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134996.251120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zs-0002Gi-C9; Tue, 01 Jun 2021 16:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134996.251120; Tue, 01 Jun 2021 16:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zs-0002Fn-0k; Tue, 01 Jun 2021 16:12:04 +0000
Received: by outflank-mailman (input) for mailman id 134996;
 Tue, 01 Jun 2021 16:12:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zr-0005Ec-1g
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:03 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c456e689-f8e7-4608-a313-5ae1a4025123;
 Tue, 01 Jun 2021 16:11:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBV1BI
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c456e689-f8e7-4608-a313-5ae1a4025123
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 13/38] tools/guest: save: move errors array
Date: Tue,  1 Jun 2021 18:10:53 +0200
Message-Id: <20210601161118.18986-14-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove allocation from hotpath, move errors array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 81d4a79b13..07dfa7d57d 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -229,6 +229,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     /* write_batch: Types of the batch pfns. */
     xen_pfn_t types[MAX_BATCH_SIZE];
+    /* write_batch: Errors from attempting to map the gfns. */
+    int errors[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index be65286570..5033f18bef 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -92,7 +92,7 @@ static int write_batch(struct xc_sr_context *ctx)
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
-    int *errors = NULL, rc = -1;
+    int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Errors from attempting to map the gfns. */
-    errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
@@ -114,7 +112,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !errors || !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -271,7 +269,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(iov);
     free(local_pages);
     free(guest_data);
-    free(errors);
 
     return rc;
 }
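
After this patch, write_batch() still allocates a few genuinely batch-sized arrays. The allocate-several/check-once/free-on-one-exit-path idiom those allocations follow can be sketched as below (illustrative names, not the actual Xen code):

```c
#include <stdlib.h>

static int do_batch(unsigned int nr)
{
    /* Allocate everything first, then test all pointers at once. */
    void **guest_data = calloc(nr, sizeof(*guest_data));
    void **local_pages = calloc(nr, sizeof(*local_pages));
    int rc = -1;

    if (!guest_data || !local_pages)
        goto out;   /* even partial failure takes the one exit path */

    /* ... use the arrays ... */
    rc = 0;

 out:
    /* free(NULL) is a no-op, so the error path needs no special cases. */
    free(local_pages);
    free(guest_data);
    return rc;
}
```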


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134998.251128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zt-0002bQ-Qm; Tue, 01 Jun 2021 16:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134998.251128; Tue, 01 Jun 2021 16:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zt-0002Zn-IE; Tue, 01 Jun 2021 16:12:05 +0000
Received: by outflank-mailman (input) for mailman id 134998;
 Tue, 01 Jun 2021 16:12:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zr-0005X1-WB
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:04 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1163992d-2102-4ff6-ab5b-775defa88b61;
 Tue, 01 Jun 2021 16:11:38 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBW1BM
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1163992d-2102-4ff6-ab5b-775defa88b61
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 17/38] tools/guest: save: move local_pages array
Date: Tue,  1 Jun 2021 18:10:57 +0200
Message-Id: <20210601161118.18986-18-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the local_pages array into preallocated space.

Adjust the code to use the src page as-is in the HVM case.
In the PV case the page may need to be normalised; use a private memory
area for this purpose.
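The shape of the new callback can be sketched outside the patch context. This is a simplified, hypothetical model (struct and function names are illustrative, not the libxc ones): the caller hands the callee a read-only source page plus its batch index, and the callee either returns the source pointer unchanged (HVM) or fills the preallocated slot for that index (PV), so the hot path never calls malloc() or free().

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE      4096
#define MAX_BATCH_SIZE 1024

/* Hypothetical, simplified context: one buffer allocated at setup time
 * provides a private slot per batch index. */
struct ctx {
    unsigned char normalised_pages[MAX_BATCH_SIZE * PAGE_SIZE];
};

/* HVM flavour: the source page is already stream-ready; return it as-is. */
static int normalise_identity(struct ctx *c, void *src, unsigned int idx,
                              void **ptr)
{
    (void)c; (void)idx;
    *ptr = src;
    return 0;
}

/* PV flavour: transform into the preallocated slot for this index.
 * memcpy() stands in for the real normalise_pagetable() transformation. */
static int normalise_copy(struct ctx *c, void *src, unsigned int idx,
                          void **ptr)
{
    if (idx >= MAX_BATCH_SIZE)
        return -1;                 /* out of range, as in the patch */
    void *dst = c->normalised_pages + (size_t)idx * PAGE_SIZE;
    memcpy(dst, src, PAGE_SIZE);
    *ptr = dst;
    return 0;
}
```

Because the slot is owned by the context, the caller no longer needs the "free() in all cases, except on callee error" contract that the old comment had to spell out.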

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h       | 22 ++++++++++---------
 tools/libs/saverestore/save.c         | 25 +++------------------
 tools/libs/saverestore/save_x86_hvm.c |  5 +++--
 tools/libs/saverestore/save_x86_pv.c  | 31 ++++++++++++++++++---------
 4 files changed, 39 insertions(+), 44 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 098aa39667..0e03b0731c 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -45,16 +45,12 @@ struct xc_sr_save_ops
      * Optionally transform the contents of a page from being specific to the
      * sending environment, to being generic for the stream.
      *
-     * The page of data at the end of 'page' may be a read-only mapping of a
-     * running guest; it must not be modified.  If no transformation is
-     * required, the callee should leave '*pages' untouched.
+     * The page of data '*src' may be a read-only mapping of a running guest;
+     * it must not be modified. If no transformation is required, the callee
+     * should leave '*src' untouched, and return it via '**ptr'.
      *
-     * If a transformation is required, the callee should allocate themselves
-     * a local page using malloc() and return it via '*page'.
-     *
-     * The caller shall free() '*page' in all cases.  In the case that the
-     * callee encounters an error, it should *NOT* free() the memory it
-     * allocated for '*page'.
+     * If a transformation is required, the callee should provide the
+     * transformed page in a private buffer and return it via '**ptr'.
      *
      * It is valid to fail with EAGAIN if the transformation is not able to be
      * completed at this point.  The page shall be retried later.
@@ -62,7 +58,7 @@ struct xc_sr_save_ops
      * @returns 0 for success, -1 for failure, with errno appropriately set.
      */
     int (*normalise_page)(struct xc_sr_context *ctx, xen_pfn_t type,
-                          void **page);
+                          void *src, unsigned int idx, void **ptr);
 
     /**
      * Set up local environment to save a domain. (Typically querying
@@ -383,6 +379,12 @@ struct xc_sr_context
 
                 union
                 {
+                    struct
+                    {
+                        /* Used by write_batch for modified pages. */
+                        void *normalised_pages;
+                    } save;
+
                     struct
                     {
                         /* State machine for the order of received records. */
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index c4fd9a15f0..c4876ba24c 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -91,11 +91,10 @@ static int write_batch(struct xc_sr_context *ctx)
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = ctx->save.m->guest_data;
-    void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
-    void *page, *orig_page;
+    void *src;
     uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
@@ -105,16 +104,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to locally allocated pages.  Need freeing. */
-    local_pages = calloc(nr_pfns, sizeof(*local_pages));
-
-    if ( !local_pages )
-    {
-        ERROR("Unable to allocate arrays for a batch of %u pages",
-              nr_pfns);
-        goto err;
-    }
-
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
@@ -176,11 +165,8 @@ static int write_batch(struct xc_sr_context *ctx)
                 goto err;
             }
 
-            orig_page = page = guest_mapping + (p * PAGE_SIZE);
-            rc = ctx->save.ops.normalise_page(ctx, types[i], &page);
-
-            if ( orig_page != page )
-                local_pages[i] = page;
+            src = guest_mapping + (p * PAGE_SIZE);
+            rc = ctx->save.ops.normalise_page(ctx, types[i], src, i, &guest_data[i]);
 
             if ( rc )
             {
@@ -195,8 +181,6 @@ static int write_batch(struct xc_sr_context *ctx)
                 else
                     goto err;
             }
-            else
-                guest_data[i] = page;
 
             rc = -1;
             ++p;
@@ -255,9 +239,6 @@ static int write_batch(struct xc_sr_context *ctx)
  err:
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
-    for ( i = 0; local_pages && i < nr_pfns; ++i )
-        free(local_pages[i]);
-    free(local_pages);
 
     return rc;
 }
diff --git a/tools/libs/saverestore/save_x86_hvm.c b/tools/libs/saverestore/save_x86_hvm.c
index 91c2cb99ab..26f49ee267 100644
--- a/tools/libs/saverestore/save_x86_hvm.c
+++ b/tools/libs/saverestore/save_x86_hvm.c
@@ -129,9 +129,10 @@ static xen_pfn_t x86_hvm_pfn_to_gfn(const struct xc_sr_context *ctx,
     return pfn;
 }
 
-static int x86_hvm_normalise_page(struct xc_sr_context *ctx,
-                                  xen_pfn_t type, void **page)
+static int x86_hvm_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
+                                  void *src, unsigned int idx, void **ptr)
 {
+    *ptr = src;
     return 0;
 }
 
diff --git a/tools/libs/saverestore/save_x86_pv.c b/tools/libs/saverestore/save_x86_pv.c
index 92f77fad0f..159ff59480 100644
--- a/tools/libs/saverestore/save_x86_pv.c
+++ b/tools/libs/saverestore/save_x86_pv.c
@@ -999,29 +999,31 @@ static xen_pfn_t x86_pv_pfn_to_gfn(const struct xc_sr_context *ctx,
  * save_ops function.  Performs pagetable normalisation on appropriate pages.
  */
 static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
-                                 void **page)
+                                  void *src, unsigned int idx, void **ptr)
 {
     xc_interface *xch = ctx->xch;
-    void *local_page;
     int rc;
+    void *dst;
 
     type &= XEN_DOMCTL_PFINFO_LTABTYPE_MASK;
 
     if ( type < XEN_DOMCTL_PFINFO_L1TAB || type > XEN_DOMCTL_PFINFO_L4TAB )
+    {
+        *ptr = src;
         return 0;
+    }
 
-    local_page = malloc(PAGE_SIZE);
-    if ( !local_page )
+    if ( idx >= MAX_BATCH_SIZE )
     {
-        ERROR("Unable to allocate scratch page");
-        rc = -1;
-        goto out;
+        ERROR("idx %u out of range", idx);
+        errno = ERANGE;
+        return -1;
     }
 
-    rc = normalise_pagetable(ctx, *page, local_page, type);
-    *page = local_page;
+    dst = ctx->x86.pv.save.normalised_pages + idx * PAGE_SIZE;
+    rc = normalise_pagetable(ctx, src, dst, type);
+    *ptr = dst;
 
- out:
     return rc;
 }
 
@@ -1031,8 +1033,16 @@ static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
  */
 static int x86_pv_setup(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     int rc;
 
+    ctx->x86.pv.save.normalised_pages = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+    if ( !ctx->x86.pv.save.normalised_pages )
+    {
+        PERROR("Failed to allocate normalised_pages");
+        return -1;
+    }
+
     rc = x86_pv_domain_info(ctx);
     if ( rc )
         return rc;
@@ -1118,6 +1128,7 @@ static int x86_pv_check_vm_state(struct xc_sr_context *ctx)
 
 static int x86_pv_cleanup(struct xc_sr_context *ctx)
 {
+    free(ctx->x86.pv.save.normalised_pages);
     free(ctx->x86.pv.p2m_pfns);
 
     if ( ctx->x86.pv.p2m )


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135000.251141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zx-0003YT-CL; Tue, 01 Jun 2021 16:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135000.251141; Tue, 01 Jun 2021 16:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zx-0003XJ-4O; Tue, 01 Jun 2021 16:12:09 +0000
Received: by outflank-mailman (input) for mailman id 135000;
 Tue, 01 Jun 2021 16:12:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zw-0005Ec-1n
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:08 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72e4c8e8-9b99-499b-92c8-dc20ae7426e6;
 Tue, 01 Jun 2021 16:11:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBV1BJ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72e4c8e8-9b99-499b-92c8-dc20ae7426e6
ARC-Seal: i=1; a=rsa-sha256; t=1622563891; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Xd7pOeVIiHYXGHD5gRxZOQBgSiSriTJtjfkH69uEPdEq+Nd1KZVpseWUuk97WuPxI1
    qMRegzQl14wS6ijxza1EykXP/NXS714hp2STDA6QpEc6rD5hJuwbl3R7UKHQWH7Azusa
    C+wtWvtN1St4gDsTIg8KTa9Pi3vrpxL8lH1eSryoVSyDKldVJxGtIG9f78eZdTqOKuEI
    8tYccyi/9BvOKkijUOB0irNLd3P/pz8VnqzTMkDNjuVpr7TPAmjtPi89Iknum9aROedD
    3QEYjzmx7IZuMlJ9Pzt3ynOhYKKW6k9aNTGAULanPz+gfGYQnqET+TWastQ2rJ/0PRoA
    q6tw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563891;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=kIq3yzkr1orG0VWk81XmidxQ5pZTMt5qbdvFQWlb7zs=;
    b=UtyPia57fROHf4uKw/IvvaBpuizNrkgb0xR9uBnPu8azM4Y9z+LMDd2yPY9Rv1DqlP
    dVvfQAv1P37lX8P/lugVi81UjUGredLIj1wrEKzmDrlJnFFjpzfPOovx5KfGHuJf6/Og
    1DVdu56tVECjja2o7b/0HusynbosHL5R8qzR+0IfOjbFje+/6E7fMrZhf9SXbNife8pE
    Kzy7fEh07yg65khvuG+LPfPGZAGeJLj19622yk7b3szhI++W3uFZPcoJS2zqb4vFTL+B
    HWsaFbVbIKcMMtUyvWlXqUMD9YLFz7scioMxWwZA8B3Kmta3oPwkYeQZtm7sOqcN+OP2
    tJPw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563891;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=kIq3yzkr1orG0VWk81XmidxQ5pZTMt5qbdvFQWlb7zs=;
    b=gDsYGPfPJiR94MM2OVSIHEQY6FxEfWz1Yq9YOXSQ9rDhfFqhTXykv6A1tSFfDcIDVl
    J2g/szHtGlFD1FTmbZjNTOqwsOpkgwysc8x314SwZDDgF3+B3oYTRbVqE3NYtFU/nnQ0
    Ob+gb0DfEk525bhm/NXrkpXo8tpNk4YMjJEempP7HIw5jeC6RqURk1TRCMxqHG6NkINd
    QQUW5YZUsEIJD/tqdby4ixmx+bwBcMCSSA0V9LbiZX+D7oBpO9KTUMHGVbJjkgM1XNO/
    uRMqm6KQ+ymlB5nUrUGDo4DSl/DAxcZZ56QESw/G999enAO9SDxGS7qwyHv/IQ5jvdcv
    BGsg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 14/38] tools/guest: save: move iov array
Date: Tue,  1 Jun 2021 18:10:54 +0200
Message-Id: <20210601161118.18986-15-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the iov array into preallocated space.
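The pattern shared by this and the following patches can be illustrated with a small standalone sketch (names are illustrative, not the libxc ones): the iovec[] lives in an arrays struct that is allocated once, so filling it per batch involves no allocation and no failure path.

```c
#include <stddef.h>
#include <sys/uio.h>

#define MAX_BATCH_SIZE 1024

/* Hypothetical arrays struct, allocated once at setup time instead of
 * malloc()/free() on every write_batch() call. */
struct save_arrays {
    struct iovec iov[MAX_BATCH_SIZE + 4];  /* +4 for header entries */
};

/* Fill the preallocated iovec[] for one batch; returns the entry count,
 * ready for writev(fd, m->iov, iovcnt). */
static int fill_iov(struct save_arrays *m, void *hdr, size_t hdr_len,
                    void **pages, size_t nr, size_t page_size)
{
    int iovcnt = 0;

    m->iov[iovcnt].iov_base = hdr;
    m->iov[iovcnt++].iov_len = hdr_len;

    for ( size_t i = 0; i < nr; i++ )
    {
        m->iov[iovcnt].iov_base = pages[i];
        m->iov[iovcnt++].iov_len = page_size;
    }

    return iovcnt;
}
```

The trade-off is a fixed MAX_BATCH_SIZE-sized footprint held for the whole migration instead of a per-batch heap allocation sized to the actual batch.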

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 07dfa7d57d..b3294e79af 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -231,6 +231,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t types[MAX_BATCH_SIZE];
     /* write_batch: Errors from attempting to map the gfns. */
     int errors[MAX_BATCH_SIZE];
+    /* write_batch: iovec[] for writev(). */
+    struct iovec iov[MAX_BATCH_SIZE + 4];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 5033f18bef..1d9e55afe8 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -97,7 +97,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
     uint64_t *rec_pfns = NULL;
-    struct iovec *iov = NULL; int iovcnt = 0;
+    struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
         .type = REC_TYPE_PAGE_DATA,
@@ -109,10 +109,8 @@ static int write_batch(struct xc_sr_context *ctx)
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
-    /* iovec[] for writev(). */
-    iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -266,7 +264,6 @@ static int write_batch(struct xc_sr_context *ctx)
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
-    free(iov);
     free(local_pages);
     free(guest_data);
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135001.251150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zz-0003vH-5w; Tue, 01 Jun 2021 16:12:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135001.251150; Tue, 01 Jun 2021 16:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo6zy-0003ul-Nb; Tue, 01 Jun 2021 16:12:10 +0000
Received: by outflank-mailman (input) for mailman id 135001;
 Tue, 01 Jun 2021 16:12:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo6zw-0005X1-WE
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:09 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f81fd6b4-bf1c-44f2-8abb-fda67d4f1df2;
 Tue, 01 Jun 2021 16:11:39 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBY1BR
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f81fd6b4-bf1c-44f2-8abb-fda67d4f1df2
ARC-Seal: i=1; a=rsa-sha256; t=1622563895; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=GO3QMTQt+1PQdk26evIOQmJRuGWZ+32SI2lYkoj8qbRmKkk1NrTUy+wzUpOONFVLnw
    KkEKVAAInwnepCRZ+sQj8bKk+QyOXZTTXLokE12qjz7M40M38ZRV1uWcYp9QoquE+Agi
    Qn0sUYrTXR49exLojAW154D6BKGwWFlmNPqBLtelADD+sySIVaUgZjU0lBzYW9HEMUdB
    OtNZcmogfjxC0gM3+oLY/B1cj+wL+k+L3FBWHcS22PFL3s0BdcjIUvqrSlq+PN8kB2NY
    ccdHYc5uFS5JOta58Oqwgi8Wa0+54oCzqdZheQXLZz/bi4rtt6Ue7DpUkZPbsvaGOUye
    Qrsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563895;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=MHTDuRs4KJXGAe1IwzaUM3cPcSsfMCXWpWBD7RHfqcQ=;
    b=oUujeh3ij71G841X2CbWGKoy44B3T2WnsQeioHkz3PBg3S0PdxQH611aIVujvzrMG0
    Jjm1b+Yv6TEPWvjNriLKk1FP1Q0HPpCXu87dXbH3h6DvAEbv9np12AdX/XTWx86KN8Wy
    SNzZvwOcedVzVQ3G5vSWSl8CEIc+SZcMV3AQ9Z1PqzJXFtU0yYRUGGwO0nxiHiVIr2MT
    9cK/1TfRvEtmijw0Cr/9agRs/oa1dPUqPEftRQ8gQ5Wp532nBFF53Iyv0fJur1R5OwB9
    +v9KhZCNQFigNBS7kCzLwT9hbobwKZt5EQ2rPzq46AoYfyZtKaNSo+e7hl0bokvuMkoj
    M9MQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563895;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=MHTDuRs4KJXGAe1IwzaUM3cPcSsfMCXWpWBD7RHfqcQ=;
    b=d1oxsrIGWxW0vEUGvkWomIQ48mfGtUoieX+YZL23ervAaEs2u+g3opolyICVx+cvs5
    jVKNq80IyqWfgXE/ZxU1aYty3h9h0GvJewGqnzwSPTnfu0rkHK14R2frBmNwVgR3vuHk
    Kj/q/LvmKN/YzJckZhATGyEjzOlsVzGQkrzBthtqdUoeVeJl3JJr1kqhI49wEYvXf/Nm
    hKva6W1YFgOL6p8Ikbq00Pp+PdUzT3dxopRat/m3yWKh0RJ/o9HgDem8EJIpFbvS1jwA
    n1/kPRGufXzY3ArMcHxNgYAqLIjMdZj4QKp1OaOnTke5rLXNJ+StHBf6JL6KSaaqj37n
    tZ4Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 22/38] tools/guest: restore: move mfns array in populate_pfns
Date: Tue,  1 Jun 2021 18:11:02 +0200
Message-Id: <20210601161118.18986-23-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the populate_pfns mfns array into preallocated space.
Use a pp_ prefix to avoid a conflict with the mfns array used in handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  | 2 ++
 tools/libs/saverestore/restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index cea549d129..6ed213e14f 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -242,6 +242,8 @@ struct xc_sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    /* populate_pfns */
+    xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index b534d80cbc..2ab9f792ef 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -138,12 +138,12 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
+    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
         *pfns = malloc(count * sizeof(*pfns));
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !mfns || !pfns )
+    if ( !pfns )
     {
         ERROR("Failed to allocate %zu bytes for populating the physmap",
               2 * count * sizeof(*mfns));
@@ -191,7 +191,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
  err:
     free(pfns);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:12:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135004.251163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo702-0004kR-NR; Tue, 01 Jun 2021 16:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135004.251163; Tue, 01 Jun 2021 16:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo702-0004jJ-HH; Tue, 01 Jun 2021 16:12:14 +0000
Received: by outflank-mailman (input) for mailman id 135004;
 Tue, 01 Jun 2021 16:12:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo701-0005Ec-1y
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:13 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.84])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c478b9f-60b0-4b75-9873-757c55e8a7d0;
 Tue, 01 Jun 2021 16:11:36 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBV1BK
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c478b9f-60b0-4b75-9873-757c55e8a7d0
ARC-Seal: i=1; a=rsa-sha256; t=1622563892; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=PfsEkPMGS+5tPzW4MpLS4b5cQT/mvpFrBoxLm5VNSVORaH7Jw7/RTMsrI2WRYVOtiF
    y2l8e9TSu1ykKVsd8PLopUW8TJm1oRPVaEIb4Y98rxmjmVHWmN3+YDLPf6mBVmr+gzrM
    aug2asGY0xM/8T3tEflKxKeKLeICjFCs+gGOU3naJsFTJs7RwvxANiR1pltU2F9hYoXs
    t+U67JjJ4Djq7ZjUvsYlQPCibe69wnN6nq1jq/oSoT4uXAiKLf3XpxekLghXdwNbB8lA
    IWyhxl+qrK0LjeVK/fbBOoc+u0EWtO9S8yP9jd8tz3wCPVDmW5n6SxOulL5iBi9t/vP1
    qILg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563892;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=3eJ8lfkM75YdRBwLz3IvZoNgnQXSN5j9vpY7FzwJfSQ=;
    b=PBFVi+mK0hxstPCEJ4VhRAjU21GlGzuN+SL++kGQ/i2BDcF/NfOW2Zsy3wsKFevozm
    o4kY2J/Avt2CWLPjOaLSDmdm17jyFDcBdbKGaD9WAwcDma/4ho4XGQyMuS6jsvRyaI3N
    Ta/4iLKMo0NAGLSyEpckb/ZVS1/NcJR0fQwyucnYiAFJL+uAWo063RuwiuV7fCDl9ZwD
    uKn5K9niilZV6BIkjqvujNtL389HLKKmVH4LyqQ6u6pGqn/MjPKgxIyekIIcd/tYs1WA
    q7HOMX2DHXg3YZ73w8EIU6oW3VFZkWWgjCKYiOq4or/zygr4dI2BNtWFBdivBL972paw
    Outw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563892;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=3eJ8lfkM75YdRBwLz3IvZoNgnQXSN5j9vpY7FzwJfSQ=;
    b=BVfMlPG3S1pF1bPhA5sQHrRPoE/6HsSYf91H0qpFO6TenKzmlcmNlWQZxgoaNC8j0m
    mcGTbchP62wspXpYSY6OtY9HgOUbsK7KAFsIqy0G7oN6XtECFCwlJ8nF/LxW/mkOg9SL
    DTqsAlvt7C/Nw3H2fbilYxL9ajdwTKEbKMlJa+esA/8xW492zYZ88HD1eOTOlKtQGGbJ
    KnRPw1KGLTLI9t2M6BdLUUBUyAvxP7UhmRvHTW2LU5ir+QGfzxt7KwKhWvyUUxwkGMKo
    Qx8088QqIPSHN96IQBSL7PQgUK04uBGFS7MJHTFftmTa0Z2tWjcQf+LeJzROH1eG/vNp
    yV0w==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 15/38] tools/guest: save: move rec_pfns array
Date: Tue,  1 Jun 2021 18:10:55 +0200
Message-Id: <20210601161118.18986-16-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the rec_pfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 11 +----------
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index b3294e79af..6a2f266469 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -233,6 +233,8 @@ struct xc_sr_save_arrays {
     int errors[MAX_BATCH_SIZE];
     /* write_batch: iovec[] for writev(). */
     struct iovec iov[MAX_BATCH_SIZE + 4];
+    /* write_batch */
+    uint64_t rec_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 1d9e55afe8..ba8046b530 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -96,7 +96,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
-    uint64_t *rec_pfns = NULL;
+    uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
@@ -201,14 +201,6 @@ static int write_batch(struct xc_sr_context *ctx)
         }
     }
 
-    rec_pfns = malloc(nr_pfns * sizeof(*rec_pfns));
-    if ( !rec_pfns )
-    {
-        ERROR("Unable to allocate %zu bytes of memory for page data pfn list",
-              nr_pfns * sizeof(*rec_pfns));
-        goto err;
-    }
-
     hdr.count = nr_pfns;
 
     rec.length = sizeof(hdr);
@@ -259,7 +251,6 @@ static int write_batch(struct xc_sr_context *ctx)
     rc = ctx->save.nr_batch_pfns = 0;
 
  err:
-    free(rec_pfns);
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:16 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 27/38] tools: recognize LIBXL_API_VERSION for 4.16
Date: Tue,  1 Jun 2021 18:11:07 +0200
Message-Id: <20210601161118.18986-28-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is required by upcoming API changes.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/include/libxl.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ae7fe27c1f..29931626a2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -729,7 +729,8 @@ typedef struct libxl__ctx libxl_ctx;
 #if LIBXL_API_VERSION != 0x040200 && LIBXL_API_VERSION != 0x040300 && \
     LIBXL_API_VERSION != 0x040400 && LIBXL_API_VERSION != 0x040500 && \
     LIBXL_API_VERSION != 0x040700 && LIBXL_API_VERSION != 0x040800 && \
-    LIBXL_API_VERSION != 0x041300 && LIBXL_API_VERSION != 0x041400
+    LIBXL_API_VERSION != 0x041300 && LIBXL_API_VERSION != 0x041400 && \
+    LIBXL_API_VERSION != 0x041600
 #error Unknown LIBXL_API_VERSION
 #endif
 #endif


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:12:21 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 29/38] tools: change struct precopy_stats to precopy_stats_t
Date: Tue,  1 Jun 2021 18:11:09 +0200
Message-Id: <20210601161118.18986-30-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This will help libxl_save_msgs_gen.pl to copy the struct as a region of memory.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/include/xensaverestore.h  | 7 +++----
 tools/libs/saverestore/common.h | 2 +-
 tools/libs/saverestore/save.c   | 6 +++---
 3 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/tools/include/xensaverestore.h b/tools/include/xensaverestore.h
index 0410f0469e..dca0134605 100644
--- a/tools/include/xensaverestore.h
+++ b/tools/include/xensaverestore.h
@@ -23,18 +23,17 @@
 #define XCFLAGS_DEBUG     (1 << 1)
 
 /* For save's precopy_policy(). */
-struct precopy_stats
-{
+typedef struct {
     unsigned int iteration;
     unsigned long total_written;
     long dirty_count; /* -1 if unknown */
-};
+} precopy_stats_t;
 
 /*
  * A precopy_policy callback may not be running in the same address
  * space as libxc an so precopy_stats is passed by value.
  */
-typedef int (*precopy_policy_t)(struct precopy_stats, void *);
+typedef int (*precopy_policy_t)(precopy_stats_t, void *);
 
 /* callbacks provided by xc_domain_save */
 struct save_callbacks {
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 4218a8ecd6..cf8d6545e2 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -283,7 +283,7 @@ struct xc_sr_context
             size_t pages_sent;
             size_t overhead_sent;
 
-            struct precopy_stats stats;
+            precopy_stats_t stats;
 
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index b59cb069ed..523457eaba 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -489,7 +489,7 @@ static int update_progress_string(struct xc_sr_context *ctx, char **str)
 #define SPP_MAX_ITERATIONS      5
 #define SPP_TARGET_DIRTY_COUNT 50
 
-static int simple_precopy_policy(struct precopy_stats stats, void *user)
+static int simple_precopy_policy(precopy_stats_t stats, void *user)
 {
     return ((stats.dirty_count >= 0 &&
              stats.dirty_count < SPP_TARGET_DIRTY_COUNT) ||
@@ -516,13 +516,13 @@ static int send_memory_live(struct xc_sr_context *ctx)
     precopy_policy_t precopy_policy = ctx->save.callbacks->precopy_policy;
     void *data = ctx->save.callbacks->data;
 
-    struct precopy_stats *policy_stats;
+    precopy_stats_t *policy_stats;
 
     rc = update_progress_string(ctx, &progress_str);
     if ( rc )
         goto out;
 
-    ctx->save.stats = (struct precopy_stats){
+    ctx->save.stats = (precopy_stats_t){
         .dirty_count = ctx->save.p2m_size,
     };
     policy_stats = &ctx->save.stats;


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:11 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 38/38] hotplug/Linux: fix starting of xenstored with restarting systemd
Date: Tue,  1 Jun 2021 18:11:18 +0200
Message-Id: <20210601161118.18986-39-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A hard-to-trigger race between another, unrelated systemd service and
xenstored.service revealed a bug in the way xenstored is launched
with systemd.

launch-xenstore may start either a daemon or a domain. In case a domain
is used, systemd-notify was called. If another service triggered a
restart of systemd while xenstored.service was executing, systemd could
temporarily lose track of services with Type=notify. As a result,
xenstored.service would be marked as failed and units that depend on it
would not be started. This breaks the entire Xen toolstack.

The chain of events is basically: xenstored.service sends the
notification to systemd; this is a one-way event. Then systemd may be
restarted by the other unit. During this time, xenstored.service finishes
and exits. Once systemd is done with its restart, it collects the pending
notifications and child processes. If it does not find the unit which sent
a notification, it declares that unit failed.

A workaround for this scenario is to leave the child processes running
for a short time after sending the "READY=1" notification. If systemd
happens to restart, it will still find the unit it launched.
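The workaround pattern can be sketched as follows (a hedged sketch, not the patch itself: the linger time is shortened, and a systemd-notify failure is ignored so the snippet also runs outside a systemd unit; the patch lingers for 9 seconds):

```shell
#!/bin/sh
# Sketch of the notify-then-linger pattern described above.
notify_and_linger() {
    # Send the one-way readiness notification.  Outside a systemd unit
    # (or when systemd-notify is absent) this fails harmlessly.
    systemd-notify --ready 2>/dev/null || true
    # Keep this process alive briefly so a concurrently restarting
    # systemd can still match the notification to the unit that sent it.
    sleep 1
}

notify_and_linger && echo lingered
```

The key point is that the process which sent "READY=1" must still exist when systemd finishes its restart and reaps pending notifications.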

Adjust the callers of launch-xenstore to specify the init system:
Do not fork xenstored with systemd; preserve the pid. This will also avoid
the need for a sleep, because the process which sent the "READY=1" (the
previously forked child) is still alive.

Remove the --pid-file in the systemd case because the pid of the child
is known, and the file probably had little effect anyway due to the lack
of PidFile= and Type=forking in the unit file.

Be verbose about xenstored startup only with sysv, to avoid interleaved
output in the systemd journal. Do the same for the domain case, even if it
is not strictly needed because init-xenstore-domain has no output.

The upstream systemd commit which is supposed to fix the underlying issue:
575b300b795b6 ("pid1: rework how we dispatch SIGCHLD and other signals")

Signed-off-by: Olaf Hering <olaf@aepfle.de>

--
v04:
- do mkdir unconditionally because init-xenstore-domain writes the domid to
  xenstored.pid
v03:
- remove run_xenstored function, follow style of shell built-in test function
v02:
- preserve Type=notify
---
 tools/hotplug/Linux/init.d/xencommons.in      |  2 +-
 tools/hotplug/Linux/launch-xenstore.in        | 40 ++++++++++++++-----
 .../Linux/systemd/xenstored.service.in        |  2 +-
 3 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/tools/hotplug/Linux/init.d/xencommons.in b/tools/hotplug/Linux/init.d/xencommons.in
index 7fd6903b98..dcb0ce4b73 100644
--- a/tools/hotplug/Linux/init.d/xencommons.in
+++ b/tools/hotplug/Linux/init.d/xencommons.in
@@ -60,7 +60,7 @@ do_start () {
 	mkdir -m700 -p ${XEN_LOCK_DIR}
 	mkdir -p ${XEN_LOG_DIR}
 
-	@XEN_SCRIPT_DIR@/launch-xenstore || exit 1
+	@XEN_SCRIPT_DIR@/launch-xenstore 'sysv' || exit 1
 
 	echo Setting domain 0 name, domid and JSON config...
 	${LIBEXEC_BIN}/xen-init-dom0 ${XEN_DOM0_UUID}
diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index 019f9d6f4d..d40c66482a 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -15,6 +15,17 @@
 # License along with this library; If not, see <http://www.gnu.org/licenses/>.
 #
 
+initd=$1
+
+case "$initd" in
+	sysv) nonl='-n' ;;
+	systemd) nonl= ;;
+	*)
+	echo "first argument must be 'sysv' or 'systemd'"
+	exit 1
+	;;
+esac
+
 XENSTORED=@XENSTORED@
 
 . @XEN_SCRIPT_DIR@/hotplugpath.sh
@@ -44,14 +55,16 @@ timeout_xenstore () {
 	return 0
 }
 
-test_xenstore && exit 0
+mkdir -p @XEN_RUN_DIR@
+
+if test "$initd" = 'sysv' ; then
+	test_xenstore && exit 0
+fi
 
 test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 
 [ "$XENSTORETYPE" = "" ] && XENSTORETYPE=daemon
 
-/bin/mkdir -p @XEN_RUN_DIR@
-
 [ "$XENSTORETYPE" = "daemon" ] && {
 	[ -z "$XENSTORED_TRACE" ] || XENSTORED_ARGS="$XENSTORED_ARGS -T @XEN_LOG_DIR@/xenstored-trace.log"
 	[ -z "$XENSTORED" ] && XENSTORED=@XENSTORED@
@@ -59,13 +72,15 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 		echo "No xenstored found"
 		exit 1
 	}
+	[ "$initd" = 'sysv' ] && {
+		echo $nonl Starting $XENSTORED...
+		$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
+		timeout_xenstore $XENSTORED || exit 1
+		exit 0
+	}
 
-	echo -n Starting $XENSTORED...
-	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
-
-	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
-
-	exit 0
+	exec $XENSTORED -N $XENSTORED_ARGS
+	exit 1
 }
 
 [ "$XENSTORETYPE" = "domain" ] && {
@@ -75,9 +90,12 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 	XENSTORE_DOMAIN_ARGS="$XENSTORE_DOMAIN_ARGS --memory $XENSTORE_DOMAIN_SIZE"
 	[ -z "$XENSTORE_MAX_DOMAIN_SIZE" ] || XENSTORE_DOMAIN_ARGS="$XENSTORE_DOMAIN_ARGS --maxmem $XENSTORE_MAX_DOMAIN_SIZE"
 
-	echo -n Starting $XENSTORE_DOMAIN_KERNEL...
+	echo $nonl Starting $XENSTORE_DOMAIN_KERNEL...
 	${LIBEXEC_BIN}/init-xenstore-domain $XENSTORE_DOMAIN_ARGS || exit 1
-	systemd-notify --ready 2>/dev/null
+	[ "$initd" = 'systemd' ] && {
+		systemd-notify --ready
+		sleep 9
+	}
 
 	exit 0
 }
diff --git a/tools/hotplug/Linux/systemd/xenstored.service.in b/tools/hotplug/Linux/systemd/xenstored.service.in
index 80c1d408a5..c226eb3635 100644
--- a/tools/hotplug/Linux/systemd/xenstored.service.in
+++ b/tools/hotplug/Linux/systemd/xenstored.service.in
@@ -11,7 +11,7 @@ Type=notify
 NotifyAccess=all
 RemainAfterExit=true
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
-ExecStart=@XEN_SCRIPT_DIR@/launch-xenstore
+ExecStart=@XEN_SCRIPT_DIR@/launch-xenstore 'systemd'
 
 [Install]
 WantedBy=multi-user.target


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:19 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 31/38] tools: add --max_iters to libxl_domain_suspend
Date: Tue,  1 Jun 2021 18:11:11 +0200
Message-Id: <20210601161118.18986-32-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Migrating a large, and potentially busy, domU will take more
time than necessary due to an excessive number of copy iterations.

Allow the host admin to control the number of iterations which
copy accumulated domU dirty pages to the target host.

The default remains 5, which means one initial iteration to copy the
entire domU memory, and up to 4 additional iterations to copy dirty
memory from the still running domU. After the given number of iterations
the domU is suspended, remaining dirty memory is copied and the domU is
finally moved to the target host.

This patch adjusts xl(1) and the libxl API.
External users check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the availability
of the new .max_iters property.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in              |  4 ++++
 tools/include/libxl.h             |  1 +
 tools/libs/light/libxl_dom_save.c |  2 +-
 tools/libs/light/libxl_domain.c   |  1 +
 tools/libs/light/libxl_internal.h |  1 +
 tools/xl/xl_cmdtable.c            |  3 ++-
 tools/xl/xl_migrate.c             | 10 +++++++++-
 7 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index ed3f4dee1e..b13e09c0ee 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -496,6 +496,10 @@ such that it will be identical on the destination host, unless that
 configuration is overridden using the B<-C> option. Note that it is not
 possible to use this option for a 'localhost' migration.
 
+=item B<--max_iters> I<iterations>
+
+Number of copy iterations before final suspend+move (default: 5)
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 9a4d7514ed..bf77da0524 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1714,6 +1714,7 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
 
 typedef struct {
     uint32_t flags; /* LIBXL_SUSPEND_* */
+    uint32_t max_iters;
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 3f3cff0342..938c0127f3 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -383,7 +383,7 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
          stats.iteration, stats.dirty_count, stats.total_written);
     if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
         goto stop_copy;
-    if (stats.iteration >= LIBXL_XGS_POLICY_MAX_ITERATIONS)
+    if (stats.iteration >= dss->max_iters)
         goto stop_copy;
     return XGS_POLICY_CONTINUE_PRECOPY;
 
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 45e0c57c3a..612d3dc4ea 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -527,6 +527,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->domid = domid;
     dss->fd = fd;
     dss->type = type;
+    dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 069c35452d..82b9dca5a0 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3641,6 +3641,7 @@ struct libxl__domain_save_state {
     int live;
     int debug;
     int checkpointed_stream;
+    uint32_t max_iters;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 6fd18856c0..8f8fa72760 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -174,7 +174,8 @@ const struct cmd_spec cmd_table[] = {
       "                of the domain.\n"
       "--debug         Verify transferred domU page data.\n"
       "-p              Do not unpause domain after migrating it.\n"
-      "-D              Preserve the domain id"
+      "-D              Preserve the domain id\n"
+      "--max_iters N   Number of copy iterations before final stop+move"
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 144890924f..af117d4d56 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -178,6 +178,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 
 static void migrate_domain(uint32_t domid, int preserve_domid,
                            const char *rune, int debug,
+                           uint32_t max_iters,
                            const char *override_config_file)
 {
     pid_t child = -1;
@@ -189,6 +190,7 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     int config_len;
     libxl_domain_suspend_props props = {
         .flags = LIBXL_SUSPEND_LIVE,
+        .max_iters = max_iters,
         };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
@@ -542,8 +544,10 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
     int preserve_domid = 0;
+    uint32_t max_iters = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
+        {"max_iters", 1, 0, 0x101},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -571,6 +575,9 @@ int main_migrate(int argc, char **argv)
     case 0x100: /* --debug */
         debug = 1;
         break;
+    case 0x101: /* --max_iters */
+        max_iters = atoi(optarg);
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -605,7 +612,8 @@ int main_migrate(int argc, char **argv)
                   pause_after_migration ? " -p" : "");
     }
 
-    migrate_domain(domid, preserve_domid, rune, debug, config_filename);
+    migrate_domain(domid, preserve_domid, rune, debug,
+                   max_iters, config_filename);
     return EXIT_SUCCESS;
 }
 

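The new --max_iters option above is parsed with atoi(), which silently turns malformed input into 0, at which point libxl falls back to LIBXL_XGS_POLICY_MAX_ITERATIONS. A stricter parse can be sketched as follows; parse_max_iters() is a hypothetical helper for illustration, not part of the patch:

```c
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper, not part of the patch: parse a --max_iters
 * argument strictly.  Returns 0 and stores the value on success,
 * -1 on malformed input.  Unlike atoi(), this rejects empty strings,
 * signs, trailing junk and out-of-range numbers. */
static int parse_max_iters(const char *arg, uint32_t *out)
{
    char *end;
    unsigned long val;

    if ( !arg || !isdigit((unsigned char)*arg) )
        return -1;

    errno = 0;
    val = strtoul(arg, &end, 10);
    if ( errno || end == arg || *end != '\0' || val > UINT32_MAX )
        return -1;

    *out = (uint32_t)val;
    return 0;
}
```

With such a helper, the `case 0x101` branch could report an error for `xl migrate --max_iters banana` instead of silently using the policy default.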

From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:31 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 37/38] tools: remove migration stream verify code
Date: Tue,  1 Jun 2021 18:11:17 +0200
Message-Id: <20210601161118.18986-38-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>

The verify code tries to make sure that the pages transferred from the paused
domU were received correctly in the previous, final iteration: every page is
transferred once more by the sender, and the receiver compares the content
with what is already mapped into the not-yet-started domU.

Unfortunately, this does not work because a domU also has a number of pages
in its grant table, shared so that frontend drivers can communicate with the
backend drivers. Since the backend drivers are unaware of the ongoing save
of the domU, they will continue to write into the shared pages. As a result,
the verification of the domU pages will fail.

This is not fixable, so remove the verification code.

The sending side will never send a REC_TYPE_VERIFY record again. It will
silently accept and ignore the debug flag.

The receiving side will abort an almost-complete migration if it sees a
REC_TYPE_VERIFY record from an old sender. Since that record is sent at the
very end of the stream, the receiver has no way to know in advance that
verification was requested.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |  7 ------
 tools/libs/saverestore/restore.c | 43 ++++----------------------------
 tools/libs/saverestore/save.c    | 43 --------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index b323c1b71a..b8ca24e667 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -354,9 +354,6 @@ struct xc_sr_context
             /* Live migrate vs non live suspend. */
             bool live;
 
-            /* Further debugging information in the stream. */
-            bool debug;
-
             unsigned long p2m_size;
             size_t pages_sent;
             size_t overhead_sent;
@@ -418,10 +415,6 @@ struct xc_sr_context
             /* Bitmap of currently populated PFNs during restore. */
             struct xg_sr_bitmap populated_pfns;
 
-            /* Sender has invoked verify mode on the stream. */
-            bool verify;
-            void *verify_buf;
-
             struct xc_sr_restore_arrays *m;
             void *guest_mapping;
             uint32_t nr_mapped_pages;
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index d4657e8e57..9a7253a972 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -337,10 +337,7 @@ static int handle_incoming_page_data(struct xc_sr_context *ctx,
             continue;
 
         m->iov[iov_idx].iov_len = PAGE_SIZE;
-        if ( ctx->restore.verify )
-            m->iov[iov_idx].iov_base = ctx->restore.verify_buf + i * PAGE_SIZE;
-        else
-            m->iov[iov_idx].iov_base = m->guest_data[i];
+        m->iov[iov_idx].iov_base = m->guest_data[i];
         iov_idx++;
     }
 
@@ -369,15 +366,6 @@ static int handle_incoming_page_data(struct xc_sr_context *ctx,
 
         }
 
-        if ( ctx->restore.verify )
-        {
-            if ( memcmp(m->guest_data[i], m->iov[iov_idx].iov_base, PAGE_SIZE) )
-            {
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-            }
-        }
-
         iov_idx++;
     }
 
@@ -447,20 +435,7 @@ static int handle_buffered_page_data(struct xc_sr_context *ctx,
 
         }
 
-        if ( ctx->restore.verify )
-        {
-            if ( memcmp(m->guest_data[i], p, PAGE_SIZE) )
-            {
-                errno = EIO;
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-                goto err;
-            }
-        }
-        else
-        {
-            memcpy(m->guest_data[i], p, PAGE_SIZE);
-        }
+        memcpy(m->guest_data[i], p, PAGE_SIZE);
 
         idx++;
     }
@@ -737,17 +712,9 @@ static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_recor
         break;
 
     case REC_TYPE_VERIFY:
-        DPRINTF("Verify mode enabled");
-        ctx->restore.verify = true;
-        if ( !ctx->restore.verify_buf )
-        {
-            ctx->restore.verify_buf = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
-            if ( !ctx->restore.verify_buf )
-            {
-                rc = -1;
-                PERROR("Unable to allocate verify_buf");
-            }
-        }
+        errno = EINVAL;
+        PERROR("Verify mode is obsolete");
+        rc = -1;
         break;
 
     case REC_TYPE_CHECKPOINT:
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 523457eaba..2d34822509 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -707,41 +707,6 @@ static int suspend_and_send_dirty(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int verify_frames(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
-    int rc;
-    struct xc_sr_record rec = { .type = REC_TYPE_VERIFY };
-
-    DPRINTF("Enabling verify mode");
-
-    rc = write_record(ctx, &rec);
-    if ( rc )
-        goto out;
-
-    xc_set_progress_prefix(xch, "Frames verify");
-    rc = send_all_pages(ctx);
-    if ( rc )
-        goto out;
-
-    if ( xc_shadow_control(
-             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_PEEK,
-             &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-             NULL, 0, &stats) != ctx->save.p2m_size )
-    {
-        PERROR("Failed to retrieve logdirty bitmap");
-        rc = -1;
-        goto out;
-    }
-
-    DPRINTF("  Further stats: faults %u, dirty %u",
-            stats.fault_count, stats.dirty_count);
-
- out:
-    return rc;
-}
-
 /*
  * Send all domain memory.  This is the heart of the live migration loop.
  */
@@ -761,13 +726,6 @@ static int send_domain_memory_live(struct xc_sr_context *ctx)
     if ( rc )
         goto out;
 
-    if ( ctx->save.debug )
-    {
-        rc = verify_frames(ctx);
-        if ( rc )
-            goto out;
-    }
-
  out:
     return rc;
 }
@@ -1005,7 +963,6 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     /* GCC 4.4 (of CentOS 6.x vintage) can' t initialise anonymous unions. */
     ctx.save.callbacks = callbacks;
     ctx.save.live  = !!(flags & XCFLAGS_LIVE);
-    ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
     ctx.save.recv_fd = recv_fd;
 
     if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )

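The receiver-side behaviour after this change can be modelled with a small self-contained sketch. The enum values and names here are illustrative stand-ins, not the actual constants from stream_format.h:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative record types; the real values live in stream_format.h. */
enum rec_type {
    REC_TYPE_PAGE_DATA,
    REC_TYPE_VERIFY,
    REC_TYPE_CHECKPOINT,
};

/* Simplified model of process_buffered_record() after this patch:
 * a REC_TYPE_VERIFY record from an old sender aborts the restore
 * instead of enabling verify mode. */
static int handle_record(enum rec_type type)
{
    int rc = 0;

    switch ( type )
    {
    case REC_TYPE_PAGE_DATA:
    case REC_TYPE_CHECKPOINT:
        /* Normal processing continues. */
        break;

    case REC_TYPE_VERIFY:
        errno = EINVAL;   /* "Verify mode is obsolete" */
        rc = -1;
        break;
    }

    return rc;
}
```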

From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:31 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 03/38] tools: create libxensaverestore
Date: Tue,  1 Jun 2021 18:10:43 +0200
Message-Id: <20210601161118.18986-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>

Move all save/restore related code from libxenguest.so into a separate
library, libxensaverestore.so. The only consumer is libxl-save-helper.
There is no need to keep the moved code mapped at all times in every
binary that uses libxenguest.so.

According to size(1) the change is:
   text	   data	    bss	    dec	    hex	filename
 187183	   4304	     48	 191535	  2ec2f	guest/libxenguest.so.4.15.0

 124106	   3376	     48	 127530	  1f22a	guest/libxenguest.so.4.15.0
  67841	   1872	      8	  69721	  11059	saverestore/libxensaverestore.so.4.15.0

While touching the files anyway, take the opportunity to drop the
redundant xg_sr_ filename prefix.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Wei Liu <wl@xen.org>

v4:
- drop xg_ prefix from filenames (jgross)
- drop sr_ prefix from filenames (jbeulich)
v3:
- repost in time for 4.16
v2:
- copy also license header
- move xg_nomigrate.c
- add size(1) output to commit msg
- remove change from libxl_create.c
---
 .gitignore                                    |   2 +
 tools/include/xenguest.h                      | 186 ----------------
 tools/include/xensaverestore.h                | 208 ++++++++++++++++++
 tools/libs/Makefile                           |   1 +
 tools/libs/guest/Makefile                     |  11 -
 tools/libs/guest/xg_offline_page.c            |   1 -
 tools/libs/light/Makefile                     |   4 +-
 tools/libs/light/libxl_internal.h             |   1 +
 tools/libs/light/libxl_save_helper.c          |   1 +
 tools/libs/light/libxl_save_msgs_gen.pl       |   2 +-
 tools/libs/saverestore/Makefile               |  38 ++++
 .../xg_sr_common.c => saverestore/common.c}   |   2 +-
 .../xg_sr_common.h => saverestore/common.h}   |  16 +-
 .../common_x86.c}                             |   2 +-
 .../common_x86.h}                             |   2 +-
 .../common_x86_pv.c}                          |   2 +-
 .../common_x86_pv.h}                          |   2 +-
 .../nomigrate.c}                              |   0
 .../xg_sr_restore.c => saverestore/restore.c} |   2 +-
 .../restore_x86_hvm.c}                        |   2 +-
 .../restore_x86_pv.c}                         |   2 +-
 .../xg_sr_save.c => saverestore/save.c}       |   2 +-
 .../save_restore.h}                           |   2 -
 .../save_x86_hvm.c}                           |   2 +-
 .../save_x86_pv.c}                            |   2 +-
 .../stream_format.h}                          |   0
 tools/libs/uselibs.mk                         |   4 +-
 27 files changed, 282 insertions(+), 217 deletions(-)
 create mode 100644 tools/include/xensaverestore.h
 create mode 100644 tools/libs/saverestore/Makefile
 rename tools/libs/{guest/xg_sr_common.c => saverestore/common.c} (99%)
 rename tools/libs/{guest/xg_sr_common.h => saverestore/common.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86.c => saverestore/common_x86.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86.h => saverestore/common_x86.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.c => saverestore/common_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.h => saverestore/common_x86_pv.h} (98%)
 rename tools/libs/{guest/xg_nomigrate.c => saverestore/nomigrate.c} (100%)
 rename tools/libs/{guest/xg_sr_restore.c => saverestore/restore.c} (99%)
 rename tools/libs/{guest/xg_sr_restore_x86_hvm.c => saverestore/restore_x86_hvm.c} (99%)
 rename tools/libs/{guest/xg_sr_restore_x86_pv.c => saverestore/restore_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_save.c => saverestore/save.c} (99%)
 rename tools/libs/{guest/xg_save_restore.h => saverestore/save_restore.h} (98%)
 rename tools/libs/{guest/xg_sr_save_x86_hvm.c => saverestore/save_x86_hvm.c} (99%)
 rename tools/libs/{guest/xg_sr_save_x86_pv.c => saverestore/save_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_stream_format.h => saverestore/stream_format.h} (100%)
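The precopy_policy contract that moves into the new header (see the save_callbacks excerpt below) can be exercised with a minimal sketch. sample_policy() is illustrative, not libxl's actual policy; the struct and XGS_POLICY_* values are copied from the header:

```c
#include <assert.h>

/* XGS_POLICY_* values and struct precopy_stats as in xensaverestore.h. */
#define XGS_POLICY_ABORT            (-1)
#define XGS_POLICY_CONTINUE_PRECOPY   0
#define XGS_POLICY_STOP_AND_COPY      1

struct precopy_stats {
    unsigned int iteration;
    unsigned long total_written;
    long dirty_count; /* -1 if unknown */
};

/* Illustrative policy, not libxl's: keep copying until nothing is
 * dirty or a fixed iteration budget is spent.  The stats struct is
 * passed by value, as the callback may not share an address space
 * with libxc. */
static int sample_policy(struct precopy_stats stats, void *user)
{
    unsigned int max_iters = *(unsigned int *)user;

    if ( stats.dirty_count == 0 || stats.iteration >= max_iters )
        return XGS_POLICY_STOP_AND_COPY;

    return XGS_POLICY_CONTINUE_PRECOPY;
}
```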

diff --git a/.gitignore b/.gitignore
index 38a085e398..08a321e995 100644
--- a/.gitignore
+++ b/.gitignore
@@ -147,6 +147,8 @@ tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
 tools/libs/light/xenlight.pc
+tools/libs/saverestore/libxensaverestore.map
+tools/libs/saverestore/xensaverestore.pc
 tools/libs/stat/_paths.h
 tools/libs/stat/headers.chk
 tools/libs/stat/libxenstat.map
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index a4492038cf..80adf8dc6f 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -24,9 +24,6 @@
 
 #define XC_NUMA_NO_NODE   (~0U)
 
-#define XCFLAGS_LIVE      (1 << 0)
-#define XCFLAGS_DEBUG     (1 << 1)
-
 #define X86_64_B_SIZE   64 
 #define X86_32_B_SIZE   32
 
@@ -433,189 +430,6 @@ static inline xen_pfn_t xc_dom_p2m(struct xc_dom_image *dom, xen_pfn_t pfn)
  */
 struct xenevtchn_handle;
 
-/* For save's precopy_policy(). */
-struct precopy_stats
-{
-    unsigned int iteration;
-    unsigned long total_written;
-    long dirty_count; /* -1 if unknown */
-};
-
-/*
- * A precopy_policy callback may not be running in the same address
- * space as libxc an so precopy_stats is passed by value.
- */
-typedef int (*precopy_policy_t)(struct precopy_stats, void *);
-
-/* callbacks provided by xc_domain_save */
-struct save_callbacks {
-    /*
-     * Called after expiration of checkpoint interval,
-     * to suspend the guest.
-     */
-    int (*suspend)(void *data);
-
-    /*
-     * Called before and after every batch of page data sent during
-     * the precopy phase of a live migration to ask the caller what
-     * to do next based on the current state of the precopy migration.
-     *
-     * Should return one of the values listed below:
-     */
-#define XGS_POLICY_ABORT          (-1) /* Abandon the migration entirely
-                                        * and tidy up. */
-#define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
-#define XGS_POLICY_STOP_AND_COPY    1  /* Immediately suspend and transmit the
-                                        * remaining dirty pages. */
-    precopy_policy_t precopy_policy;
-
-    /*
-     * Called after the guest's dirty pages have been
-     *  copied into an output buffer.
-     * Callback function resumes the guest & the device model,
-     *  returns to xc_domain_save.
-     * xc_domain_save then flushes the output buffer, while the
-     *  guest continues to run.
-     */
-    int (*postcopy)(void *data);
-
-    /*
-     * Called after the memory checkpoint has been flushed
-     * out into the network. Typical actions performed in this
-     * callback include:
-     *   (a) send the saved device model state (for HVM guests),
-     *   (b) wait for checkpoint ack
-     *   (c) release the network output buffer pertaining to the acked checkpoint.
-     *   (c) sleep for the checkpoint interval.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*checkpoint)(void *data);
-
-    /*
-     * Called after the checkpoint callback.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*wait_checkpoint)(void *data);
-
-    /* Enable qemu-dm logging dirty pages to xen */
-    int (*switch_qemu_logdirty)(uint32_t domid, unsigned enable, void *data); /* HVM only */
-
-    /* to be provided as the last argument to each callback function */
-    void *data;
-};
-
-/* Type of stream.  Plain, or using a continuous replication protocol? */
-typedef enum {
-    XC_STREAM_PLAIN,
-    XC_STREAM_REMUS,
-    XC_STREAM_COLO,
-} xc_stream_type_t;
-
-/**
- * This function will save a running domain.
- *
- * @param xch a handle to an open hypervisor interface
- * @param io_fd the file descriptor to save a domain to
- * @param dom the id of the domain
- * @param flags XCFLAGS_xxx
- * @param stream_type XC_STREAM_PLAIN if the far end of the stream
- *        doesn't use checkpointing
- * @param recv_fd Only used for XC_STREAM_COLO.  Contains backchannel from
- *        the destination side.
- * @return 0 on success, -1 on failure
- */
-int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
-                   uint32_t flags, struct save_callbacks *callbacks,
-                   xc_stream_type_t stream_type, int recv_fd);
-
-/* callbacks provided by xc_domain_restore */
-struct restore_callbacks {
-    /*
-     * Called once the STATIC_DATA_END record has been received/inferred.
-     *
-     * For compatibility with older streams, provides a list of static data
-     * expected to be found in the stream, which was missing.  A higher level
-     * toolstack is responsible for providing any necessary compatibiltiy.
-     */
-#define XGR_SDD_MISSING_CPUID (1 << 0)
-#define XGR_SDD_MISSING_MSR   (1 << 1)
-    int (*static_data_done)(unsigned int missing, void *data);
-
-    /* Called after a new checkpoint to suspend the guest. */
-    int (*suspend)(void *data);
-
-    /*
-     * Called after the secondary vm is ready to resume.
-     * Callback function resumes the guest & the device model,
-     * returns to xc_domain_restore.
-     */
-    int (*postcopy)(void *data);
-
-    /*
-     * A checkpoint record has been found in the stream.
-     * returns:
-     */
-#define XGR_CHECKPOINT_ERROR    0 /* Terminate processing */
-#define XGR_CHECKPOINT_SUCCESS  1 /* Continue reading more data from the stream */
-#define XGR_CHECKPOINT_FAILOVER 2 /* Failover and resume VM */
-    int (*checkpoint)(void *data);
-
-    /*
-     * Called after the checkpoint callback.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*wait_checkpoint)(void *data);
-
-    /*
-     * callback to send store gfn and console gfn to xl
-     * if we want to resume vm before xc_domain_save()
-     * exits.
-     */
-    void (*restore_results)(xen_pfn_t store_gfn, xen_pfn_t console_gfn,
-                            void *data);
-
-    /* to be provided as the last argument to each callback function */
-    void *data;
-};
-
-/**
- * This function will restore a saved domain.
- *
- * Domain is restored in a suspended state ready to be unpaused.
- *
- * @param xch a handle to an open hypervisor interface
- * @param io_fd the file descriptor to restore a domain from
- * @param dom the id of the domain
- * @param store_evtchn the xenstore event channel for this domain to use
- * @param store_mfn filled with the gfn of the store page
- * @param store_domid the backend domain for xenstore
- * @param console_evtchn the console event channel for this domain to use
- * @param console_mfn filled with the gfn of the console page
- * @param console_domid the backend domain for xenconsole
- * @param stream_type XC_STREAM_PLAIN if the far end of the stream is using
- *        checkpointing
- * @param callbacks non-NULL to receive a callback to restore toolstack
- *        specific data
- * @param send_back_fd Only used for XC_STREAM_COLO.  Contains backchannel to
- *        the source side.
- * @return 0 on success, -1 on failure
- */
-int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
-                      unsigned int store_evtchn, unsigned long *store_mfn,
-                      uint32_t store_domid, unsigned int console_evtchn,
-                      unsigned long *console_mfn, uint32_t console_domid,
-                      xc_stream_type_t stream_type,
-                      struct restore_callbacks *callbacks, int send_back_fd);
-
 /**
  * This function will create a domain for a paravirtualized Linux
  * using file names pointing to kernel and ramdisk
diff --git a/tools/include/xensaverestore.h b/tools/include/xensaverestore.h
new file mode 100644
index 0000000000..0410f0469e
--- /dev/null
+++ b/tools/include/xensaverestore.h
@@ -0,0 +1,208 @@
+/******************************************************************************
+ * A library for guest domain save/restore/migration in Xen.
+ *
+ * Copyright (c) 2003-2004, K A Fraser.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef XENSAVERESTORE_H
+#define XENSAVERESTORE_H
+
+#define XCFLAGS_LIVE      (1 << 0)
+#define XCFLAGS_DEBUG     (1 << 1)
+
+/* For save's precopy_policy(). */
+struct precopy_stats
+{
+    unsigned int iteration;
+    unsigned long total_written;
+    long dirty_count; /* -1 if unknown */
+};
+
+/*
+ * A precopy_policy callback may not be running in the same address
+ * space as libxc and so precopy_stats is passed by value.
+ */
+typedef int (*precopy_policy_t)(struct precopy_stats, void *);
+
+/* callbacks provided by xc_domain_save */
+struct save_callbacks {
+    /*
+     * Called after expiration of checkpoint interval,
+     * to suspend the guest.
+     */
+    int (*suspend)(void *data);
+
+    /*
+     * Called before and after every batch of page data sent during
+     * the precopy phase of a live migration to ask the caller what
+     * to do next based on the current state of the precopy migration.
+     *
+     * Should return one of the values listed below:
+     */
+#define XGS_POLICY_ABORT          (-1) /* Abandon the migration entirely
+                                        * and tidy up. */
+#define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
+#define XGS_POLICY_STOP_AND_COPY    1  /* Immediately suspend and transmit the
+                                        * remaining dirty pages. */
+    precopy_policy_t precopy_policy;
+
+    /*
+     * Called after the guest's dirty pages have been
+     *  copied into an output buffer.
+     * Callback function resumes the guest & the device model,
+     *  returns to xc_domain_save.
+     * xc_domain_save then flushes the output buffer, while the
+     *  guest continues to run.
+     */
+    int (*postcopy)(void *data);
+
+    /*
+     * Called after the memory checkpoint has been flushed
+     * out into the network. Typical actions performed in this
+     * callback include:
+     *   (a) send the saved device model state (for HVM guests),
+     *   (b) wait for the checkpoint ack,
+     *   (c) release the network output buffer pertaining to the acked checkpoint,
+     *   (d) sleep for the checkpoint interval.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*checkpoint)(void *data);
+
+    /*
+     * Called after the checkpoint callback.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*wait_checkpoint)(void *data);
+
+    /* Enable qemu-dm logging dirty pages to xen */
+    int (*switch_qemu_logdirty)(uint32_t domid, unsigned enable, void *data); /* HVM only */
+
+    /* to be provided as the last argument to each callback function */
+    void *data;
+};
+
+/* Type of stream.  Plain, or using a continuous replication protocol? */
+typedef enum {
+    XC_STREAM_PLAIN,
+    XC_STREAM_REMUS,
+    XC_STREAM_COLO,
+} xc_stream_type_t;
+
+/**
+ * This function will save a running domain.
+ *
+ * @param xch a handle to an open hypervisor interface
+ * @param io_fd the file descriptor to save a domain to
+ * @param dom the id of the domain
+ * @param flags XCFLAGS_xxx
+ * @param stream_type XC_STREAM_PLAIN if the far end of the stream
+ *        doesn't use checkpointing
+ * @param recv_fd Only used for XC_STREAM_COLO.  Contains backchannel from
+ *        the destination side.
+ * @return 0 on success, -1 on failure
+ */
+int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
+                   uint32_t flags, struct save_callbacks *callbacks,
+                   xc_stream_type_t stream_type, int recv_fd);
+
+/* callbacks provided by xc_domain_restore */
+struct restore_callbacks {
+    /*
+     * Called once the STATIC_DATA_END record has been received/inferred.
+     *
+     * For compatibility with older streams, provides a list of static data
+     * expected to be found in the stream, which was missing.  A higher level
+ * toolstack is responsible for providing any necessary compatibility.
+     */
+#define XGR_SDD_MISSING_CPUID (1 << 0)
+#define XGR_SDD_MISSING_MSR   (1 << 1)
+    int (*static_data_done)(unsigned int missing, void *data);
+
+    /* Called after a new checkpoint to suspend the guest. */
+    int (*suspend)(void *data);
+
+    /*
+     * Called after the secondary vm is ready to resume.  The callback
+     * resumes the guest and the device model, then returns to
+     * xc_domain_restore().
+     */
+    int (*postcopy)(void *data);
+
+    /*
+     * Called when a checkpoint record has been found in the stream.
+     * Should return one of the values listed below:
+     */
+#define XGR_CHECKPOINT_ERROR    0 /* Terminate processing */
+#define XGR_CHECKPOINT_SUCCESS  1 /* Continue reading more data from the stream */
+#define XGR_CHECKPOINT_FAILOVER 2 /* Failover and resume VM */
+    int (*checkpoint)(void *data);
+
+    /*
+     * Called after the checkpoint callback.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*wait_checkpoint)(void *data);
+
+    /*
+     * Callback to send the store gfn and console gfn to xl
+     * if we want to resume the vm before xc_domain_restore()
+     * exits.
+     */
+    void (*restore_results)(xen_pfn_t store_gfn, xen_pfn_t console_gfn,
+                            void *data);
+
+    /* to be provided as the last argument to each callback function */
+    void *data;
+};
+
+/**
+ * This function will restore a saved domain.
+ *
+ * Domain is restored in a suspended state ready to be unpaused.
+ *
+ * @param xch a handle to an open hypervisor interface
+ * @param io_fd the file descriptor to restore a domain from
+ * @param dom the id of the domain
+ * @param store_evtchn the xenstore event channel for this domain to use
+ * @param store_mfn filled with the gfn of the store page
+ * @param store_domid the backend domain for xenstore
+ * @param console_evtchn the console event channel for this domain to use
+ * @param console_mfn filled with the gfn of the console page
+ * @param console_domid the backend domain for xenconsole
+ * @param stream_type XC_STREAM_PLAIN if the far end of the stream
+ *        doesn't use checkpointing
+ * @param callbacks non-NULL to receive a callback to restore toolstack
+ *        specific data
+ * @param send_back_fd Only used for XC_STREAM_COLO.  Contains backchannel to
+ *        the source side.
+ * @return 0 on success, -1 on failure
+ */
+int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
+                      unsigned int store_evtchn, unsigned long *store_mfn,
+                      uint32_t store_domid, unsigned int console_evtchn,
+                      unsigned long *console_mfn, uint32_t console_domid,
+                      xc_stream_type_t stream_type,
+                      struct restore_callbacks *callbacks, int send_back_fd);
+
+#endif /* XENSAVERESTORE_H */
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 1afcd12e2b..ca43c66777 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -12,6 +12,7 @@ SUBDIRS-y += devicemodel
 SUBDIRS-y += ctrl
 SUBDIRS-y += guest
 SUBDIRS-y += hypfs
+SUBDIRS-y += saverestore
 SUBDIRS-y += store
 SUBDIRS-y += stat
 SUBDIRS-$(CONFIG_Linux) += vchan
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 6d2a1d5bbc..4d99ee1f41 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -10,18 +10,7 @@ SRCS-y += xg_private.c
 SRCS-y += xg_domain.c
 SRCS-y += xg_suspend.c
 ifeq ($(CONFIG_MIGRATE),y)
-SRCS-y += xg_sr_common.c
-SRCS-$(CONFIG_X86) += xg_sr_common_x86.c
-SRCS-$(CONFIG_X86) += xg_sr_common_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_restore_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_restore_x86_hvm.c
-SRCS-$(CONFIG_X86) += xg_sr_save_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_save_x86_hvm.c
-SRCS-y += xg_sr_restore.c
-SRCS-y += xg_sr_save.c
 SRCS-y += xg_offline_page.c
-else
-SRCS-y += xg_nomigrate.c
 endif
 
 CFLAGS += -I$(XEN_libxenctrl)
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index d4722f0e8c..e49ef55887 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -29,7 +29,6 @@
 
 #include "xc_private.h"
 #include "xg_private.h"
-#include "xg_save_restore.h"
 
 struct pte_backup_entry
 {
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 7d8c51d492..68e51dd13c 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -179,7 +179,7 @@ $(ACPI_OBJS) $(ACPI_PIC_OBJS): CFLAGS += -I. -DLIBACPI_STDUTILS=\"$(CURDIR)/libx
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
 libxl_dom.o libxl_dom.opic: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
 libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
-$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
+$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxensaverestore)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
 testidl.c: libxl_types.idl gentest.py $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
@@ -241,7 +241,7 @@ test_%: test_%.o test_common.o libxenlight_test.so
 	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) -lyajl $(APPEND_LDFLAGS)
 
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
-	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxensaverestore) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
 
 testidl: testidl.o libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ testidl.o $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 44a2f3c8fe..8af075291a 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -56,6 +56,7 @@
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenguest.h>
+#include <xensaverestore.h>
 #include <xenhypfs.h>
 
 #include <xen-tools/libs.h>
diff --git a/tools/libs/light/libxl_save_helper.c b/tools/libs/light/libxl_save_helper.c
index 65dff389bf..896e845a2f 100644
--- a/tools/libs/light/libxl_save_helper.c
+++ b/tools/libs/light/libxl_save_helper.c
@@ -48,6 +48,7 @@
 
 #include "xenctrl.h"
 #include "xenguest.h"
+#include "xensaverestore.h"
 #include "_libxl_save_msgs_helper.h"
 
 /*----- logger -----*/
diff --git a/tools/libs/light/libxl_save_msgs_gen.pl b/tools/libs/light/libxl_save_msgs_gen.pl
index 9d425b1dee..f263ee01bb 100755
--- a/tools/libs/light/libxl_save_msgs_gen.pl
+++ b/tools/libs/light/libxl_save_msgs_gen.pl
@@ -72,7 +72,7 @@ END_BOTH
 END_CALLOUT
 
 #include <xenctrl.h>
-#include <xenguest.h>
+#include <xensaverestore.h>
 #include "_libxl_save_msgs_${ah}.h"
 
 END_HELPER
diff --git a/tools/libs/saverestore/Makefile b/tools/libs/saverestore/Makefile
new file mode 100644
index 0000000000..48728b3be2
--- /dev/null
+++ b/tools/libs/saverestore/Makefile
@@ -0,0 +1,38 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+ifeq ($(CONFIG_MIGRATE),y)
+SRCS-y += common.c
+SRCS-$(CONFIG_X86) += common_x86.c
+SRCS-$(CONFIG_X86) += common_x86_pv.c
+SRCS-$(CONFIG_X86) += restore_x86_pv.c
+SRCS-$(CONFIG_X86) += restore_x86_hvm.c
+SRCS-$(CONFIG_X86) += save_x86_pv.c
+SRCS-$(CONFIG_X86) += save_x86_hvm.c
+SRCS-y += restore.c
+SRCS-y += save.c
+else
+SRCS-y += nomigrate.c
+endif
+
+CFLAGS += -I$(XEN_libxenctrl)
+CFLAGS += -I$(XEN_libxenguest)
+
+-include $(XEN_TARGET_ARCH)/Makefile
+
+CFLAGS   += -Werror -Wmissing-prototypes
+CFLAGS   += -I. -I./include $(CFLAGS_xeninclude)
+CFLAGS   += -D__XEN_TOOLS__
+CFLAGS   += -include $(XEN_ROOT)/tools/config.h
+# Needed for asprintf()
+CFLAGS-$(CONFIG_Linux) += -D_GNU_SOURCE
+
+LIBHEADER := xensaverestore.h
+
+NO_HEADERS_CHK := y
+
+include $(XEN_ROOT)/tools/libs/libs.mk
+
+.PHONY: cleanlocal
+cleanlocal:
+	rm -f libxensaverestore.map
diff --git a/tools/libs/guest/xg_sr_common.c b/tools/libs/saverestore/common.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common.c
rename to tools/libs/saverestore/common.c
index 17567ab133..77128bc747 100644
--- a/tools/libs/guest/xg_sr_common.c
+++ b/tools/libs/saverestore/common.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/saverestore/common.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common.h
rename to tools/libs/saverestore/common.h
index cc3ad1c394..36946e5d48 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/saverestore/common.h
@@ -1,13 +1,25 @@
 #ifndef __COMMON__H
 #define __COMMON__H
 
+#include <unistd.h>
+#include <errno.h>
 #include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+
+#include "xc_private.h"
+#include "xenguest.h"
+#include "xensaverestore.h"
 
 #include "xg_private.h"
-#include "xg_save_restore.h"
+#include "save_restore.h"
 #include "xc_bitops.h"
 
-#include "xg_sr_stream_format.h"
+#include "stream_format.h"
 
 /* String representation of Domain Header types. */
 const char *dhdr_type_to_str(uint32_t type);
diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/saverestore/common_x86.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common_x86.c
rename to tools/libs/saverestore/common_x86.c
index 563b4f0168..f1beb234ae 100644
--- a/tools/libs/guest/xg_sr_common_x86.c
+++ b/tools/libs/saverestore/common_x86.c
@@ -1,4 +1,4 @@
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 int write_x86_tsc_info(struct xc_sr_context *ctx)
 {
diff --git a/tools/libs/guest/xg_sr_common_x86.h b/tools/libs/saverestore/common_x86.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common_x86.h
rename to tools/libs/saverestore/common_x86.h
index b55758c96d..3a2d91dcb8 100644
--- a/tools/libs/guest/xg_sr_common_x86.h
+++ b/tools/libs/saverestore/common_x86.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86__H
 #define __COMMON_X86__H
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
 * Obtains a domain's TSC information from Xen and writes an X86_TSC_INFO record
diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/saverestore/common_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common_x86_pv.c
rename to tools/libs/saverestore/common_x86_pv.c
index cd33406aab..caf3cc588f 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.c
+++ b/tools/libs/saverestore/common_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 xen_pfn_t mfn_to_pfn(struct xc_sr_context *ctx, xen_pfn_t mfn)
 {
diff --git a/tools/libs/guest/xg_sr_common_x86_pv.h b/tools/libs/saverestore/common_x86_pv.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common_x86_pv.h
rename to tools/libs/saverestore/common_x86_pv.h
index 953b5bfb8d..a9f8c970e3 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.h
+++ b/tools/libs/saverestore/common_x86_pv.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86_PV_H
 #define __COMMON_X86_PV_H
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 /* Virtual address ranges reserved for hypervisor. */
 #define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
diff --git a/tools/libs/guest/xg_nomigrate.c b/tools/libs/saverestore/nomigrate.c
similarity index 100%
rename from tools/libs/guest/xg_nomigrate.c
rename to tools/libs/saverestore/nomigrate.c
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/saverestore/restore.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore.c
rename to tools/libs/saverestore/restore.c
index b57a787519..be259a1c6b 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -2,7 +2,7 @@
 
 #include <assert.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
  * Read and validate the Image and Domain headers.
diff --git a/tools/libs/guest/xg_sr_restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore_x86_hvm.c
rename to tools/libs/saverestore/restore_x86_hvm.c
index d6ea6f3012..bd63bd2818 100644
--- a/tools/libs/guest/xg_sr_restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 /*
  * Process an HVM_CONTEXT record from the stream.
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore_x86_pv.c
rename to tools/libs/saverestore/restore_x86_pv.c
index dc50b0f5a8..96608e5231 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 static xen_pfn_t pfn_to_mfn(const struct xc_sr_context *ctx, xen_pfn_t pfn)
 {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/saverestore/save.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save.c
rename to tools/libs/saverestore/save.c
index 51542a98c8..92d96b0533 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/saverestore/save.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
  * Writes an Image header and Domain header into the stream.
diff --git a/tools/libs/guest/xg_save_restore.h b/tools/libs/saverestore/save_restore.h
similarity index 98%
rename from tools/libs/guest/xg_save_restore.h
rename to tools/libs/saverestore/save_restore.h
index 3dbbc8dcd2..20bd3d30a5 100644
--- a/tools/libs/guest/xg_save_restore.h
+++ b/tools/libs/saverestore/save_restore.h
@@ -15,8 +15,6 @@
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include "xc_private.h"
-
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 
diff --git a/tools/libs/guest/xg_sr_save_x86_hvm.c b/tools/libs/saverestore/save_x86_hvm.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save_x86_hvm.c
rename to tools/libs/saverestore/save_x86_hvm.c
index 1634a7bc43..91c2cb99ab 100644
--- a/tools/libs/guest/xg_sr_save_x86_hvm.c
+++ b/tools/libs/saverestore/save_x86_hvm.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 #include <xen/hvm/params.h>
 
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/saverestore/save_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save_x86_pv.c
rename to tools/libs/saverestore/save_x86_pv.c
index 4964f1f7b8..92f77fad0f 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/saverestore/save_x86_pv.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <limits.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 /* Check a 64 bit virtual address for being canonical. */
 static inline bool is_canonical_address(xen_vaddr_t vaddr)
diff --git a/tools/libs/guest/xg_sr_stream_format.h b/tools/libs/saverestore/stream_format.h
similarity index 100%
rename from tools/libs/guest/xg_sr_stream_format.h
rename to tools/libs/saverestore/stream_format.h
diff --git a/tools/libs/uselibs.mk b/tools/libs/uselibs.mk
index efd7a475ba..62a2990b95 100644
--- a/tools/libs/uselibs.mk
+++ b/tools/libs/uselibs.mk
@@ -20,6 +20,8 @@ LIBS_LIBS += ctrl
 USELIBS_ctrl := toollog call evtchn gnttab foreignmemory devicemodel
 LIBS_LIBS += guest
 USELIBS_guest := evtchn ctrl
+LIBS_LIBS += saverestore
+USELIBS_saverestore := guest ctrl
 LIBS_LIBS += store
 USELIBS_store := toolcore
 LIBS_LIBS += vchan
@@ -27,7 +29,7 @@ USELIBS_vchan := toollog store gnttab evtchn
 LIBS_LIBS += stat
 USELIBS_stat := ctrl store
 LIBS_LIBS += light
-USELIBS_light := toollog evtchn toolcore ctrl store hypfs guest
+USELIBS_light := toollog evtchn toolcore ctrl store hypfs guest saverestore
 LIBS_LIBS += util
 USELIBS_util := light
 FILENAME_util := xlutil


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135026.251239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71J-0001rW-8Y; Tue, 01 Jun 2021 16:13:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135026.251239; Tue, 01 Jun 2021 16:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71J-0001rA-45; Tue, 01 Jun 2021 16:13:33 +0000
Received: by outflank-mailman (input) for mailman id 135026;
 Tue, 01 Jun 2021 16:13:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70G-0005Ec-2c
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:28 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.169])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b54d6a0b-382f-48b9-8b14-afc26fd82239;
 Tue, 01 Jun 2021 16:11:38 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBX1BO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b54d6a0b-382f-48b9-8b14-afc26fd82239
ARC-Seal: i=1; a=rsa-sha256; t=1622563893; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=UUM+exyc4Jrd1VjUci/3jYH7JekthyphWT5aBWkhbqXHdikeRhjJciVwYbT6pBKxnQ
    sCssb14D+0d2l0ieo61vaGv/+gXHHH0EIgc9VlWAxVVaC2JVUuXpooY0PbGOdAlWumq8
    RLyO6YzqPa4LgKySXRHvrReFKUokuRTDFPvPRMuV0dVCc9atUNRe3fKHl+WpkYSPVpYb
    /bXBKI80r82XVNiBFSE9nsa4s3yxbkV5mbnu3D+UmHk4ZYQzsXJdDNYGA6eEmDhRPI4o
    mohDQD0ZKRxBBq4hWYx24LWbRepKO9i7xRbsPrVxSLnYrDDtpmfVOg4hoSSvmpXU4foG
    vtcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563893;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aAGkFwdPcwm3sDiRcDYWvmNyL5GfWbZILOcBD1kChMw=;
    b=WuaAIM3CKjPaQpNFLkan3sxYrsUWe15VJwwOkRkJvJxFSNzjJkgTaoyqLco1NVrvik
    rDuEQMjXNbIACUimJfcEG69pNeM6uCtuaQ8Z0EcPFjrWRWCSrNinT7dG9UYinAeNWf+T
    BRt2KbTc/kkZkoBrjm3c1HZ47YFOhYDghdpchGOrTTPl9UGjY570bg0DwOTkK3qwuaoG
    SRr2SfNQAWE/7zZD93V39TnKrzPtr5CXXmJuGhizFzZSHqfgaBwO2qLfL2qHvVJGM4VY
    6E4qdAfjNY0gfHLFNI//uKFaw6X7WqJD6xWS1UjiDHqpzbst0LQKkDn1mIYkH6rGCAro
    Ysew==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563893;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aAGkFwdPcwm3sDiRcDYWvmNyL5GfWbZILOcBD1kChMw=;
    b=gAzpX16i4/ez6pQcYlrP7qEdVAQ80FBGQ2Dn4f1iV1B3P1h8/2DMCaiMjAm7dN7FBY
    K4uDxEOZ1lMUiga+eZ8g7zG8XAWxCD8uQQB+VJuPiKTt2RXYjN/Nn9VJSLvWtqnvWyM8
    74rtepR3Bi28C+yCKeSQTCT2cbTM3Jt+tP5dwrAi5RuzDyvHI2j+VE8M6NIF8veCY9Vd
    hEwIi+ZSsOg2+MXoDY/5/O+VF8eftF/xaXmYjhTyQENFIZ/kiMKegjykeiUmmzjEATHJ
    2NV34oiv3Aq/xv+TN6oynwFKJ/t0IcuutOZAfKyykRtRW6CKrwrhP4rBNOLCIBnRreE8
    qoJA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 19/38] tools/guest: restore: move types array
Date: Tue,  1 Jun 2021 18:10:59 +0200
Message-Id: <20210601161118.18986-20-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove allocation from hotpath, move types array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index be6f98af7f..96cd60e0d6 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -238,6 +238,7 @@ struct xc_sr_save_arrays {
 struct xc_sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
+    uint32_t types[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 4d8ee19adf..815a2d5a12 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -316,7 +316,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     int rc = -1;
 
     xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = NULL, type;
+    uint32_t *types = ctx->restore.m->types, type;
 
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
@@ -363,14 +363,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    types = malloc(pages->count * sizeof(*types));
-    if ( !types )
-    {
-        ERROR("Unable to allocate enough memory for %u pfns",
-              pages->count);
-        goto err;
-    }
-
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -410,8 +402,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     rc = process_page_data(ctx, pages->count, pfns, types,
                            &pages->pfn[pages->count]);
  err:
-    free(types);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135032.251250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71Y-00030E-LG; Tue, 01 Jun 2021 16:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135032.251250; Tue, 01 Jun 2021 16:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71Y-000301-H1; Tue, 01 Jun 2021 16:13:48 +0000
Received: by outflank-mailman (input) for mailman id 135032;
 Tue, 01 Jun 2021 16:13:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70B-0005Ec-2Z
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:23 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c93f827c-c556-4e0e-b6de-2562934ba459;
 Tue, 01 Jun 2021 16:11:37 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBX1BN
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c93f827c-c556-4e0e-b6de-2562934ba459
ARC-Seal: i=1; a=rsa-sha256; t=1622563893; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=NpnnLGqmKCVUQlrMkSAxwv++mpIPTiUMxYC7kJiZVaiJDEP3oT0dtrLAAfd6l4x6di
    yhfyFRysyH77HR5I0ndRZx0KCYWp97MSEzSl9OAfgCmU9NPbORVZjFWam4ePPSCdKhFx
    1KMKXhhrwK7yKcCIhdpn+qLGk01i9XzX5ItSczWj9VgOXGCeJknJMnKcwXpj0mZBj0Dq
    nj14BTT61TUgHER1aApkp/8UiOo7fyxDmn6hT3gW9YiRSmAMWIqtbFSfTmul/OlvszJE
    iZuSRbNLn3SRQVOWTrfIVF4TgylKQonKfsc24dk/Hmgcn0wKqpWu5ervz5Btb6CFYzfL
    4/qQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563893;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yMk7eA+P/3dbcIUts8vVxsnRdZMI0VGT4aHfrLA5Vm0=;
    b=pFtNbtmH4REr819pBxs6SzUv+AU/vAJ+tuUsrKPOlVkHmxuV7cwde7Lpc42PrRuqQF
    hIN/J18CK8/eQ5TyFS5VmoD/R8Si2F15MZJlH882EI+jHL2ZlpcSIYN8NjZwr65ra25l
    Hehz17AlrVFORkZC5fszCCMlOvgttk3SfMXbCjqBvUXO7tDpyx1iITXjXFqvqizxwJQt
    ZNSG9vuK3zmOWZ6Kgy6uPwD99y+8Wcsbp8V+2ea+MDTHjTkyHNtbGar3pyM4l1VfhEN/
    f2dAV845M6Wwtd5k/knWona6J7AbiKmZl/djct1nIDlTYwLlKb4GSz9tS2Yr5Sy+3i9N
    sW6A==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563893;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yMk7eA+P/3dbcIUts8vVxsnRdZMI0VGT4aHfrLA5Vm0=;
    b=ippzeAWbK2faCr3B2s6o05Bzhj8iWFbH8CU8ygiVTZFHevXTYqUJ/AEMROc72AB1hw
    lpgjaCdv0xJBRhrq7w020hzQRQ4NzJzsWKrePNmC52lwIlwTAb+9/Q4kcbY7T1rWsWyH
    uj/2T8xgi28RAm1UkNYKFCGsd47G45abVj8SoISCVbqxwBTBKLeILro5U/7TGl8RykhA
    mmaPWYes2Vby3pSibNYcjBOl66BwjJ3CCBFtsXuBIfyNpPEw6iRsQkHtjaBy61GWLtSv
    7nB1XEB6WmHY69KFTvSfsFsrdQuwAq+QOqXX60+vnSFaNkuUPBS4doabOZ9QxE96BRad
    BgEQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 18/38] tools/guest: restore: move pfns array
Date: Tue,  1 Jun 2021 18:10:58 +0200
Message-Id: <20210601161118.18986-19-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove allocation from hotpath, move pfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  | 2 ++
 tools/libs/saverestore/restore.c | 6 ++----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 0e03b0731c..be6f98af7f 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -236,6 +236,8 @@ struct xc_sr_save_arrays {
 };
 
 struct xc_sr_restore_arrays {
+    /* handle_page_data */
+    xen_pfn_t pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index a6cf9ee41c..4d8ee19adf 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -315,7 +315,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     unsigned int i, pages_of_data = 0;
     int rc = -1;
 
-    xen_pfn_t *pfns = NULL, pfn;
+    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
     uint32_t *types = NULL, type;
 
     /*
@@ -363,9 +363,8 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    pfns = malloc(pages->count * sizeof(*pfns));
     types = malloc(pages->count * sizeof(*types));
-    if ( !pfns || !types )
+    if ( !types )
     {
         ERROR("Unable to allocate enough memory for %u pfns",
               pages->count);
@@ -412,7 +411,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
                            &pages->pfn[pages->count]);
  err:
     free(types);
-    free(pfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:13:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135036.251262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71d-0003RP-Tk; Tue, 01 Jun 2021 16:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135036.251262; Tue, 01 Jun 2021 16:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71d-0003RG-Qd; Tue, 01 Jun 2021 16:13:53 +0000
Received: by outflank-mailman (input) for mailman id 135036;
 Tue, 01 Jun 2021 16:13:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo706-0005Ec-2L
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:18 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be339adb-dd1c-4964-8c19-30734b5a23c9;
 Tue, 01 Jun 2021 16:11:36 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBW1BL
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be339adb-dd1c-4964-8c19-30734b5a23c9
ARC-Seal: i=1; a=rsa-sha256; t=1622563892; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=nJcQYmO6JnmLStShMuTZQonJW5TWgz38m7lw0d/r/IqhqEV2gKDq9WZAJNkYtU/XSf
    eYyhmN6HJBIsX724ecE3F0P8Z7koT/VKw+jZQdB4H+n3jUS3o4niHB+jEaBdSbgoi8Yd
    XC3mC9XKLMMDTsphTsnOw8iA8cZC3upIcCjKn+sKFUs+LYk9nP1YugRuPRIbybwCu3mD
    nrgGWnwn63YGL0tWo774GMfJAI4C31sBbuRkLCv3FXbfIjouI+GsyeY2l7+T5oWiEBgS
    Uf39W9IzR88idjE1m2JX/nW1ki9Gd0CZVZqfqc69dnDCzRlGlr+ZXw4D26tYQzzEzUho
    rsoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563892;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=g1Kh05gVx9V+bsfHm/5UnEfyde3/5C+vW/1l/cFLlB4=;
    b=s3dv8RdGvOGeCiV5xxKydBo1IbVjzseWzGCzyT8acdVIbcoK7RNzB667vBy2yGDpfp
    89BmgTIJ9mPZWdetQoHaQl4SoPgp5u3DWstQkkz6T3m0nVYZkeiD2mm6XSpuZHcFU5SR
    iNs0CDjf8fv9DT/5u+gWQLILZa4q4DqWT5zSRQ4ua02JcbVZmoC20PlI6pCIyULi7Dvn
    zJElbnLCraXVAmoFCWhRsfxORfqjLSgS6e+7rxyqLy1mXVPYeeE40v/LbVOr0YSCSfO7
    hLmFFvA25HXZwfj7tqoD4ib5b+8m9zbHZ4TeR6OTdSKW+bG5c711hIQ+npu/pp7E7IBf
    pphQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563892;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=g1Kh05gVx9V+bsfHm/5UnEfyde3/5C+vW/1l/cFLlB4=;
    b=jhGd+n8lCtV8YEc926r5Ve7N+wO2Yxb2jBOXUezA4E7pFOGtvqMr7tHK0JrDnsGVPB
    fqGeClsc90b6TzarLUpFj2E6OMw+GKxHjTzSi7+LqGVD6dPmE/fyoAn8oFP3H5897b5f
    U/fmnS27fNRulyB5izkZH9atT66SUjMyXIp89ylsdI+wGAs5p+Ku4RA+1zESwl8Z1G0u
    g/vKk/yrmI/QHZYmWmsuG7xrKSFZA2a1AOjAmb370VcTTgO+F4I/MTYYbiYjfmhoITlN
    iTXrFOtlXuGIgfF1dC3vMscb0r+aS5azW/D7DDOjl0wVRLLyjDNjwGU4AIaCVojNQTGY
    k6CQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 16/38] tools/guest: save: move guest_data array
Date: Tue,  1 Jun 2021 18:10:56 +0200
Message-Id: <20210601161118.18986-17-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hotpath by moving the guest_data array into preallocated space.

Because the array was previously zeroed by calloc:
Adjust the loop to clear unused entries explicitly as needed.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
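Note: the pattern above, trading a per-call calloc for a long-lived array that the loop must now clear itself, can be sketched as below. All names (save_arrays, fill_batch, has_data) are hypothetical stand-ins for the real xc_sr structures, and MAX_BATCH_SIZE is shrunk for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_BATCH_SIZE 4  /* stand-in for the real batch size */

/* Hypothetical mirror of xc_sr_save_arrays: one allocation at setup time,
 * reused across every batch instead of calloc()/free() per write_batch(). */
struct save_arrays {
    void *guest_data[MAX_BATCH_SIZE];
};

/* With a reused array, stale pointers from the previous batch survive, so
 * entries that calloc() used to zero must now be cleared explicitly. */
static size_t fill_batch(struct save_arrays *m, const int *has_data,
                         size_t nr_pfns)
{
    size_t i, used = 0;

    for ( i = 0; i < nr_pfns; ++i )
    {
        if ( !has_data[i] )
        {
            m->guest_data[i] = NULL;  /* explicit clear replaces calloc() */
            continue;
        }
        m->guest_data[i] = (void *)&has_data[i]; /* placeholder for a mapping */
        used++;
    }
    return used;
}
```

The explicit `NULL` stores mirror the two `guest_data[i] = NULL;` assignments the patch adds to write_batch().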
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 11 ++++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 6a2f266469..098aa39667 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -235,6 +235,8 @@ struct xc_sr_save_arrays {
     struct iovec iov[MAX_BATCH_SIZE + 4];
     /* write_batch */
     uint64_t rec_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Pointers to page data to send. Mapped gfns or local allocations. */
+    void *guest_data[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index ba8046b530..c4fd9a15f0 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -90,7 +90,7 @@ static int write_batch(struct xc_sr_context *ctx)
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
-    void **guest_data = NULL;
+    void **guest_data = ctx->save.m->guest_data;
     void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
@@ -105,12 +105,10 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to page data to send.  Mapped gfns or local allocations. */
-    guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
 
-    if ( !guest_data || !local_pages )
+    if ( !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -166,7 +164,10 @@ static int write_batch(struct xc_sr_context *ctx)
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
             if ( page_type_has_stream_data(types[i]) == false )
+            {
+                guest_data[i] = NULL;
                 continue;
+            }
 
             if ( errors[p] )
             {
@@ -183,6 +184,7 @@ static int write_batch(struct xc_sr_context *ctx)
 
             if ( rc )
             {
+                guest_data[i] = NULL;
                 if ( rc == -1 && errno == EAGAIN )
                 {
                     set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
@@ -256,7 +258,6 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
     free(local_pages);
-    free(guest_data);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135038.251272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71f-0003o8-93; Tue, 01 Jun 2021 16:13:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135038.251272; Tue, 01 Jun 2021 16:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71f-0003nG-4B; Tue, 01 Jun 2021 16:13:55 +0000
Received: by outflank-mailman (input) for mailman id 135038;
 Tue, 01 Jun 2021 16:13:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70V-0005Ec-2v
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:43 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.173])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81bee282-d57c-4860-a341-5a7dd6617a8c;
 Tue, 01 Jun 2021 16:11:39 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBZ1BS
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81bee282-d57c-4860-a341-5a7dd6617a8c
ARC-Seal: i=1; a=rsa-sha256; t=1622563895; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=BhWx2Lzvkn8drrWiY9EL+zi8qGJasAKHBfLwAQ4xsCYXjaeUmkQLcSbxo0/q8HUN0o
    Em0Ad4PkM3dTa+nIhfXrFfxGvNvQJgOcpVgJMvqNGYUHoVQURkGI/KemvE/s/qb0XJVY
    8cjZoUQU7lHTAsarC20/kDl0upgArQ3953JQwn2J6U3v/PvyQ31tWbs8Or3BbCUygqxS
    YjLw+GSIWihXeRtKlzfb3sHsvBJPjrLPC7A02eaJ3kTovml5Ixp6shM+x/Xy78Yvmg7j
    JMnr38LrnyujJCWYm87psmkzhjRpwHNmzX7kD21HG36XBHozKhaZh+megSbSMweRR0u+
    xe5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563895;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=pHLUvMX7U6lUA9on8DVqvFH+okBXqcoC5FrFwr0VWLE=;
    b=Mol42r7yOgKyOSeNd6LYWFOCEO41h5/ExFJ3B9CD+Fj/3SFQt1Ml0Ig3yQ/emp1xyz
    qC/0qjvLV8QoI0WQAjHTCJmch+N3z4JY5BhpxnZcMjatuBZV+CdyAjQGothm9W6JShsB
    pKTC/8rEREm9FdtGb3YxbmPDAv7QIN3AGZor0jC8uFDmGOGsdehxszPXsxYIRxkhkcFr
    3PKsuW+XAQPRmt/Lyxfzn6sy8sm3yISsUGYcWe9Or17DtK6OCZal+kSqJERrrEv2duUY
    WXloG2LrddUjm0bUwe1TrPEM+ZgiLPBrk64BDNHY6B+As0Eo1FEI9dHgw8HanjKxnLeS
    WxeQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563895;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=pHLUvMX7U6lUA9on8DVqvFH+okBXqcoC5FrFwr0VWLE=;
    b=HTQbY5Y4vm6yKSuHhB6y3ow6W7nY8IpOM8WlqG+2UvpGk8r4GabQfKcGkpodR9HsVd
    7cL/9R4ayd5znMGMzFR/6ATu8j6CbHWA3RMCyVj+y+XuLANEW6tMBO1Eiu9hlik5vFkR
    GPI8oT0LODTI6NMYocy9jAcW9tqlqJbIsamBAkjFgkYmrFdhRF8KvGN3IiFJCvD8RNGr
    RcYtLxLN1nXsJqDSuUaCac+Wh0J646tdaPIEywDcuniaTda78uq77fBgoJwjcSnxZohn
    XNL5vCFQgIG+FnXCVHOdnsOqUAWgx2nNKxJcA5AtxC3Pr3yV+esMK7/hkFjR2v2Yag8v
    hYzg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 23/38] tools/guest: restore: move pfns array in populate_pfns
Date: Tue,  1 Jun 2021 18:11:03 +0200
Message-Id: <20210601161118.18986-24-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hotpath by moving populate_pfns' pfns array into preallocated space.
Use a 'pp_' prefix to avoid a conflict with the array of the same name used in handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 11 +----------
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 6ed213e14f..9de09ae64a 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -244,6 +244,7 @@ struct xc_sr_restore_arrays {
     int map_errs[MAX_BATCH_SIZE];
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
+    xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 2ab9f792ef..598a4aa06d 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -139,17 +139,10 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
-        *pfns = malloc(count * sizeof(*pfns));
+        *pfns = ctx->restore.m->pp_pfns;
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !pfns )
-    {
-        ERROR("Failed to allocate %zu bytes for populating the physmap",
-              2 * count * sizeof(*mfns));
-        goto err;
-    }
-
     for ( i = 0; i < count; ++i )
     {
         if ( (!types ||
@@ -190,8 +183,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     rc = 0;
 
  err:
-    free(pfns);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:13:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:13:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135041.251284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71j-0004N6-Kj; Tue, 01 Jun 2021 16:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135041.251284; Tue, 01 Jun 2021 16:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71j-0004Ml-GF; Tue, 01 Jun 2021 16:13:59 +0000
Received: by outflank-mailman (input) for mailman id 135041;
 Tue, 01 Jun 2021 16:13:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70f-0005Ec-3L
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:53 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7758098d-fefe-4003-bede-ed93932f6711;
 Tue, 01 Jun 2021 16:11:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBZ1BT
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7758098d-fefe-4003-bede-ed93932f6711
ARC-Seal: i=1; a=rsa-sha256; t=1622563896; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=kc7Yj/zHkuxSkuv91lZrKLFhxicbkhg2PrG0a22bcdRA4EYn0v6h6gKz34AdefHZl+
    EJyvTAopM/CfodrzWrb147NRFmFDJz/XfJrO6Jq8f/8Wbdt7UlIKnGzR+ASqGVGf8x+o
    Rz7bh5z49QsWnCQS4g0icfQs1hIRAxEswdZCD5OpCUhYm+ZXfIg816YZEcpPD4Nfy1V5
    zjydUXGqGR2Xj1v26rFjjfsUcSVTbVotBX488J+WaNOEL0GgopJJbfw3cRgLIB4kjtN7
    YrcfPsZcDHHS0DNoJzFRcQwLqcZXt7Hl8JOyfEOnr4Ml4duXdfmAVBqzndEfhjvBFOPl
    Ptkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563896;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=KiB5zmNJWHh3QN6pLulGVrCrsceJ5e3LZlNQkeTmQNs=;
    b=r7uWJ9iWXTHG6ECfj3bEZt5/Lo06UgF5YH3FHjvxUwYrQJPhrxCFqJHq5oHFNDo97C
    SOybKMZEGGtKfKZMz3cJsEZV8c3ctBTiklqKuuXHkpSDHUSwgFVW6kCLgA+vstWxE/+F
    QT2A0zxs/XKEOUwDxzVKMdPYXvjGTaywBIyAgdFXR3y+4/i8kycF+UwZAa1F4IhdwJ1i
    ujNFtHYD9wk7uVpYoywTP/4H2J3MegGylO+lGdemO9PdYrn/aOivc8An49yyOTuJmqQD
    ygv2ILdRbdwJYCRJO8K30tLH97nw8PPMCN5TdIjuydQqSc91dgTk2+KMIHi+d4p0pc1T
    B+HQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563896;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=KiB5zmNJWHh3QN6pLulGVrCrsceJ5e3LZlNQkeTmQNs=;
    b=naDXOxfkmm5wQsb35m1m/5F9QiDgv5TjkdmqtNiP/cHO0E/XhKL7kQ+OO/+eZQCN89
    GMxdCnfgXCAPoEFvvA8CLNwAr8eXorkxeI2UcQCmm1TNdnNqoPIpnYqBsmeSi/gBS9TT
    sxQAmMKxtGaMvAe2papjTwG9uhxBk8hp7zNQcjqUDEX9331uoZswbOf7DvJ2lmmtm/G9
    wTjQizEt0u5TTUcUFcsWjxNOs6/P+GjUkFYzA60hJLufZHtZ2GJcFWx3qw4GvXWdDmNM
    /jdcYxg0pVu2F14esr9R6Vw0uOMap4adylS5ifs+pwqOsAUjlO2ZZGyTGT4dubU8atdd
    a0sg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 24/38] tools/guest: restore: split record processing
Date: Tue,  1 Jun 2021 18:11:04 +0200
Message-Id: <20210601161118.18986-25-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data which can be consumed verbatim.

Rearrange the code to allow decisions based on the incoming record.

This change is preparation for future changes in handle_page_data;
no change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
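Note: the split of read_record() into a header stage and a data stage, so the caller can inspect the record type before deciding where the payload lands, can be sketched as below. The structs and function names are simplified stand-ins for xc_sr_rhdr / xc_sr_record and read from an in-memory buffer rather than a stream fd.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for xc_sr_rhdr and xc_sr_record. */
struct rhdr   { uint32_t type; uint32_t length; };
struct record { uint32_t type; uint32_t length; void *data; };

/* Stage one: read only the fixed-size header; returns bytes consumed.
 * After this the caller knows the record type and length. */
static size_t read_header(const uint8_t *stream, struct rhdr *rhdr)
{
    memcpy(rhdr, stream, sizeof(*rhdr));
    return sizeof(*rhdr);
}

/* Stage two: the caller has inspected rhdr->type and can now choose the
 * destination (a heap buffer here; mapped guest memory in the intended
 * handle_page_data case). */
static int read_data(const uint8_t *stream, const struct rhdr *rhdr,
                     struct record *rec)
{
    rec->type = rhdr->type;
    rec->length = rhdr->length;
    rec->data = rhdr->length ? malloc(rhdr->length) : NULL;
    if ( rhdr->length && !rec->data )
        return -1;
    if ( rec->data )
        memcpy(rec->data, stream, rhdr->length);
    return 0;
}
```

Buffering a record (buffer_record) and consuming one immediately (process_incoming_record_header) then become two different callers of the same stage-two helper.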
 tools/libs/saverestore/common.c  | 33 ++++++++++++---------
 tools/libs/saverestore/common.h  |  4 ++-
 tools/libs/saverestore/restore.c | 49 ++++++++++++++++++++++----------
 tools/libs/saverestore/save.c    |  7 ++++-
 4 files changed, 63 insertions(+), 30 deletions(-)

diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 77128bc747..7da7fa4e2c 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -91,26 +91,33 @@ int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
     return -1;
 }
 
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rhdr rhdr;
-    size_t datasz;
 
-    if ( read_exact(fd, &rhdr, sizeof(rhdr)) )
+    if ( read_exact(fd, rhdr, sizeof(*rhdr)) )
     {
         PERROR("Failed to read Record Header from stream");
         return -1;
     }
 
-    if ( rhdr.length > REC_LENGTH_MAX )
+    if ( rhdr->length > REC_LENGTH_MAX )
     {
-        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr.type,
-              rec_type_to_str(rhdr.type), rhdr.length, REC_LENGTH_MAX);
+        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr->type,
+              rec_type_to_str(rhdr->type), rhdr->length, REC_LENGTH_MAX);
         return -1;
     }
 
-    datasz = ROUNDUP(rhdr.length, REC_ALIGN_ORDER);
+    return 0;
+}
+
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    size_t datasz;
+
+    datasz = ROUNDUP(rhdr->length, REC_ALIGN_ORDER);
 
     if ( datasz )
     {
@@ -119,7 +126,7 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
         if ( !rec->data )
         {
             ERROR("Unable to allocate %zu bytes for record data (0x%08x, %s)",
-                  datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                  datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
 
@@ -128,18 +135,18 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
             free(rec->data);
             rec->data = NULL;
             PERROR("Failed to read %zu bytes of data for record (0x%08x, %s)",
-                   datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                   datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
     }
     else
         rec->data = NULL;
 
-    rec->type   = rhdr.type;
-    rec->length = rhdr.length;
+    rec->type   = rhdr->type;
+    rec->length = rhdr->length;
 
     return 0;
-};
+}
 
 static void __attribute__((unused)) build_assertions(void)
 {
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 9de09ae64a..73256f08ab 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -487,7 +487,9 @@ static inline int write_record(struct xc_sr_context *ctx,
  *
  * On failure, the contents of the record structure are undefined.
  */
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr);
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec);
 
 /*
  * This would ideally be private in restore.c, but is needed by
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 598a4aa06d..bb1574ed74 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -471,7 +471,7 @@ static int send_checkpoint_dirty_pfn_list(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
 static int handle_checkpoint(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -510,7 +510,7 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
 
         for ( i = 0; i < ctx->restore.buffered_rec_num; i++ )
         {
-            rc = process_record(ctx, &ctx->restore.buffered_records[i]);
+            rc = process_buffered_record(ctx, &ctx->restore.buffered_records[i]);
             if ( rc )
                 goto err;
         }
@@ -571,10 +571,11 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
     unsigned int new_alloc_num;
+    struct xc_sr_record rec;
     struct xc_sr_record *p;
 
     if ( ctx->restore.buffered_rec_num >= ctx->restore.allocated_rec_num )
@@ -592,8 +593,13 @@ static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ctx->restore.allocated_rec_num = new_alloc_num;
     }
 
+    if ( read_record_data(ctx, ctx->fd, rhdr, &rec) )
+    {
+        return -1;
+    }
+
     memcpy(&ctx->restore.buffered_records[ctx->restore.buffered_rec_num++],
-           rec, sizeof(*rec));
+           &rec, sizeof(rec));
 
     return 0;
 }
@@ -624,7 +630,7 @@ int handle_static_data_end(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
     int rc = 0;
@@ -662,6 +668,19 @@ static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     return rc;
 }
 
+static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
+{
+    struct xc_sr_record rec;
+    int rc;
+
+    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+    if ( rc )
+        return rc;
+
+    return process_buffered_record(ctx, &rec);
+}
+
+
 static int setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -745,7 +764,7 @@ static void cleanup(struct xc_sr_context *ctx)
 static int restore(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_record rec;
+    struct xc_sr_rhdr rhdr;
     int rc, saved_rc = 0, saved_errno = 0;
 
     IPRINTF("Restoring domain");
@@ -756,7 +775,7 @@ static int restore(struct xc_sr_context *ctx)
 
     do
     {
-        rc = read_record(ctx, ctx->fd, &rec);
+        rc = read_record_header(ctx, ctx->fd, &rhdr);
         if ( rc )
         {
             if ( ctx->restore.buffer_all_records )
@@ -766,25 +785,25 @@ static int restore(struct xc_sr_context *ctx)
         }
 
         if ( ctx->restore.buffer_all_records &&
-             rec.type != REC_TYPE_END &&
-             rec.type != REC_TYPE_CHECKPOINT )
+             rhdr.type != REC_TYPE_END &&
+             rhdr.type != REC_TYPE_CHECKPOINT )
         {
-            rc = buffer_record(ctx, &rec);
+            rc = buffer_record(ctx, &rhdr);
             if ( rc )
                 goto err;
         }
         else
         {
-            rc = process_record(ctx, &rec);
+            rc = process_incoming_record_header(ctx, &rhdr);
             if ( rc == RECORD_NOT_PROCESSED )
             {
-                if ( rec.type & REC_TYPE_OPTIONAL )
+                if ( rhdr.type & REC_TYPE_OPTIONAL )
                     DPRINTF("Ignoring optional record %#x (%s)",
-                            rec.type, rec_type_to_str(rec.type));
+                            rhdr.type, rec_type_to_str(rhdr.type));
                 else
                 {
                     ERROR("Mandatory record %#x (%s) not handled",
-                          rec.type, rec_type_to_str(rec.type));
+                          rhdr.type, rec_type_to_str(rhdr.type));
                     rc = -1;
                     goto err;
                 }
@@ -795,7 +814,7 @@ static int restore(struct xc_sr_context *ctx)
                 goto err;
         }
 
-    } while ( rec.type != REC_TYPE_END );
+    } while ( rhdr.type != REC_TYPE_END );
 
  remus_failover:
     if ( ctx->stream_type == XC_STREAM_COLO )
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index c4876ba24c..b59cb069ed 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -590,6 +590,7 @@ static int send_memory_live(struct xc_sr_context *ctx)
 static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    struct xc_sr_rhdr rhdr;
     struct xc_sr_record rec;
     uint64_t *pfns = NULL;
     uint64_t pfn;
@@ -598,7 +599,11 @@ static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
-    rc = read_record(ctx, ctx->save.recv_fd, &rec);
+    rc = read_record_header(ctx, ctx->save.recv_fd, &rhdr);
+    if ( rc )
+        goto err;
+
+    rc = read_record_data(ctx, ctx->save.recv_fd, &rhdr, &rec);
     if ( rc )
         goto err;
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:14:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135042.251294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71l-0004iC-5g; Tue, 01 Jun 2021 16:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135042.251294; Tue, 01 Jun 2021 16:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo71l-0004hX-1K; Tue, 01 Jun 2021 16:14:01 +0000
Received: by outflank-mailman (input) for mailman id 135042;
 Tue, 01 Jun 2021 16:14:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70b-0005X1-1Q
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:49 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1b21753-71d3-4abf-ae18-17f60e97b2ba;
 Tue, 01 Jun 2021 16:11:48 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBe1Bd
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1b21753-71d3-4abf-ae18-17f60e97b2ba
ARC-Seal: i=1; a=rsa-sha256; t=1622563900; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=i+C0lQTgDG0B7xQdUrq1nGARbLlZYl/nskb/IwQqsimOXTONDE4qYhKz0EIZRSeyGI
    kPOf2EHOxTlezAkGdwnYfP+CeKb91xfiO84YmOpM50zFgiHUGtV+hsATK882h0DXLwVL
    Lz5Tk+cmXopFFVsMgIm/Eo6fBX+Kl9TK3KIq8gJmN9UhMf/QbRx0ohkD/RPcZlktqw55
    AJKU8pNZb3iqoJ4kwXjT8uUMDbVgjQ96UlYBhVEJuCKQq3EjVPUV7JIOiM4sStSNYtbb
    FcBT8EoTt1xW+IwKuhYnyUEeFgWPhengL+5LXTJLKR+fHmps7fd0gw7B286Z/xm4Jqeb
    UoVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563900;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yHDLzQ8w0aK0NM0b+OJIaVk/C7sbn6MbRiUz15zozj8=;
    b=d1xKl9YT8gYggHF5Kk4WO0b/XgA5dzP7xLE7XBNmgBHFLQ/qTdP3P8ueLlMSgIlP83
    MfkfO/b+BqIBN6v9h3/jFtKF/ZEwez+4+Syo5Z12ehaWT2UGRCy09AAwceXPtSCbmQbE
    ZUbLDjmIOatdcAmz2PD5ytZgvPP4QyZgn3T+BmX4zwmmU25fdRyjfiovRiCH5BJsDNvh
    MZU6u3k0FiGQLC3OP4XhBqsGG5c+INKdQzfTPJtkEZYhYCY5fplppUeGT2K5/tq9cTvY
    CTn1l/ETDXRM96seNcMDcfwogR9kAJLfZ6xMvwG0+/tWmi59nP5JxDKhx3wK9bHarIXv
    ieuQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563900;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=yHDLzQ8w0aK0NM0b+OJIaVk/C7sbn6MbRiUz15zozj8=;
    b=r9kc6tUP5PACoqGznsVAKLelvJDqMmOq9/jpHmb6tX7eUOR1APn7wPVAzU2spubTo0
    d3H6+T7e1lFZP9IqvP+qmaVf0TNRjlImB2OeTyqPORuS0il+wXU/xHUgm6jVCBt9ScCV
    Q62bdvENTBdIZmNXQbtcD4Ra60HRrl8OaekaFXaL44WxvHOaKn9dWPDJtd230oaVMOTt
    wtAwWWmhzgM+wHK/0cLRckzTFPK5LOs6lnXuHCihdD0mEI3F/wG7ng8yJ5yhRZO5XeMo
    R9kR2m0N/Lw4T2cpJ7E2loAmfCBxu92g0MjTltSEHQeC0Gxh5reugUuTa2bwnIyoeGJe
    D2sA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 33/38] tools: add --abort_if_busy to libxl_domain_suspend
Date: Tue,  1 Jun 2021 18:11:13 +0200
Message-Id: <20210601161118.18986-34-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide a knob to the host admin to abort the live migration of a
running domU if the downtime during final transit will be too long
for the workload within domU.

Adjust error reporting. Add ERROR_MIGRATION_ABORTED to allow callers of
libxl_domain_suspend to distinguish between errors and the requested
constraint.

Adjust precopy_policy to simplify reporting of remaining dirty pages.
The loop in send_memory_live populates ->dirty_count in a different
place than ->iteration. Let it proceed one more time to provide the
desired information before leaving the loop.

This patch adjusts xl(1) and the libxl API.
External users should check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the
availability of the new .abort_if_busy property.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in                  |  8 +++++++
 tools/include/libxl.h                 |  1 +
 tools/libs/light/libxl_dom_save.c     |  7 ++++++-
 tools/libs/light/libxl_domain.c       |  1 +
 tools/libs/light/libxl_internal.h     |  2 ++
 tools/libs/light/libxl_stream_write.c |  9 +++++++-
 tools/libs/light/libxl_types.idl      |  1 +
 tools/xl/xl_cmdtable.c                |  6 +++++-
 tools/xl/xl_migrate.c                 | 30 ++++++++++++++++++++-------
 9 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 43609f6cdd..b258d56ab6 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -508,6 +508,14 @@ low, the guest is suspended and the domU will finally be moved to I<host>.
 This allows the host admin to control for how long the domU will likely
 be suspended during transit.
 
+=item B<--abort_if_busy>
+
+Abort migration instead of doing final suspend/move/resume if the
+guest produced more than I<min_remaining> dirty pages after the
+configured number of I<max_iters> copy iterations.
+This avoids long periods during which the guest is suspended, which
+may confuse the workload within the domU.
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 28d70b1078..cc056ed627 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1719,6 +1719,7 @@ typedef struct {
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
+#define LIBXL_SUSPEND_ABORT_IF_BUSY 4
 
 int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
                          libxl_domain_suspend_props *props,
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index ad5df89b2c..1999a8997f 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -383,11 +383,16 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
          stats.iteration, stats.dirty_count, stats.total_written);
     if (stats.dirty_count >= 0 && stats.dirty_count < dss->min_remaining)
         goto stop_copy;
-    if (stats.iteration >= dss->max_iters)
+    if (stats.dirty_count >= 0 && stats.iteration >= dss->max_iters)
         goto stop_copy;
     return XGS_POLICY_CONTINUE_PRECOPY;
 
 stop_copy:
+    if (dss->abort_if_busy)
+    {
+        dss->remaining_dirty_pages = stats.dirty_count;
+        return XGS_POLICY_ABORT;
+    }
     return XGS_POLICY_STOP_AND_COPY;
 }
 
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index ae4dc9ad01..913653bd76 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -529,6 +529,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->type = type;
     dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
     dss->min_remaining = props->min_remaining ?: LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT;
+    dss->abort_if_busy = props->flags & LIBXL_SUSPEND_ABORT_IF_BUSY;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 63028586fe..7453a3aa7b 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3640,9 +3640,11 @@ struct libxl__domain_save_state {
     libxl_domain_type type;
     int live;
     int debug;
+    int abort_if_busy;
     int checkpointed_stream;
     uint32_t max_iters;
     uint32_t min_remaining;
+    long remaining_dirty_pages;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/libs/light/libxl_stream_write.c b/tools/libs/light/libxl_stream_write.c
index 634f3240d1..1ab3943f3e 100644
--- a/tools/libs/light/libxl_stream_write.c
+++ b/tools/libs/light/libxl_stream_write.c
@@ -344,11 +344,18 @@ void libxl__xc_domain_save_done(libxl__egc *egc, void *dss_void,
         goto err;
 
     if (retval) {
+        if (dss->remaining_dirty_pages) {
+            LOGD(NOTICE, dss->domid, "saving domain: aborted,"
+                 " %ld remaining dirty pages.", dss->remaining_dirty_pages);
+        } else {
         LOGEVD(ERROR, errnoval, dss->domid, "saving domain: %s",
               dss->dsps.guest_responded ?
               "domain responded to suspend request" :
               "domain did not respond to suspend request");
-        if (!dss->dsps.guest_responded)
+        }
+        if (dss->remaining_dirty_pages)
+            rc = ERROR_MIGRATION_ABORTED;
+        else if (!dss->dsps.guest_responded)
             rc = ERROR_GUEST_TIMEDOUT;
         else if (dss->rc)
             rc = dss->rc;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index f45adddab0..b91769ee10 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -76,6 +76,7 @@ libxl_error = Enumeration("error", [
     (-30, "QMP_DEVICE_NOT_ACTIVE"), # a device has failed to be become active
     (-31, "QMP_DEVICE_NOT_FOUND"), # the requested device has not been found
     (-32, "QEMU_API"), # QEMU's replies don't contains expected members
+    (-33, "MIGRATION_ABORTED"),
     ], value_namespace = "")
 
 libxl_domain_type = Enumeration("domain_type", [
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index acb84e3486..6c9de3bdec 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -176,7 +176,11 @@ const struct cmd_spec cmd_table[] = {
       "-p                Do not unpause domain after migrating it.\n"
       "-D                Preserve the domain id\n"
       "--max_iters N     Number of copy iterations before final stop+move\n"
-      "--min_remaining N Number of remaining dirty pages before final stop+move"
+      "--min_remaining N Number of remaining dirty pages before final stop+move\n"
+      "--abort_if_busy   Abort migration instead of doing final stop+move,\n"
+      "                  if the number of dirty pages is higher than <min_remaining>\n"
+      "                  after <max_iters> iterations. Otherwise the amount of memory\n"
+      "                  to be transferred would exceed the maximum allowed domU downtime."
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 14feb2b7ec..f523746e5b 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -177,7 +177,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 }
 
 static void migrate_domain(uint32_t domid, int preserve_domid,
-                           const char *rune, int debug,
+                           const char *rune, int debug, int abort_if_busy,
                            uint32_t max_iters,
                            uint32_t min_remaining,
                            const char *override_config_file)
@@ -213,14 +213,20 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
 
     if (debug)
         props.flags |= LIBXL_SUSPEND_DEBUG;
+    if (abort_if_busy)
+        props.flags |= LIBXL_SUSPEND_ABORT_IF_BUSY;
     rc = libxl_domain_suspend(ctx, domid, send_fd, &props, NULL);
     if (rc) {
         fprintf(stderr, "migration sender: libxl_domain_suspend failed"
                 " (rc=%d)\n", rc);
-        if (rc == ERROR_GUEST_TIMEDOUT)
-            goto failed_suspend;
-        else
-            goto failed_resume;
+        switch (rc) {
+            case ERROR_GUEST_TIMEDOUT:
+                goto failed_suspend;
+            case ERROR_MIGRATION_ABORTED:
+                goto failed_busy;
+            default:
+                goto failed_resume;
+        }
     }
 
     //fprintf(stderr, "migration sender: Transfer complete.\n");
@@ -302,6 +308,12 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     fprintf(stderr, "Migration failed, failed to suspend at sender.\n");
     exit(EXIT_FAILURE);
 
+ failed_busy:
+    close(send_fd);
+    migration_child_report(recv_fd);
+    fprintf(stderr, "Migration aborted as requested, domain is too busy.\n");
+    exit(EXIT_FAILURE);
+
  failed_resume:
     close(send_fd);
     migration_child_report(recv_fd);
@@ -545,13 +557,14 @@ int main_migrate(int argc, char **argv)
     char *rune = NULL;
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
-    int preserve_domid = 0;
+    int preserve_domid = 0, abort_if_busy = 0;
     uint32_t max_iters = 0;
     uint32_t min_remaining = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
         {"max_iters", 1, 0, 0x101},
         {"min_remaining", 1, 0, 0x102},
+        {"abort_if_busy", 0, 0, 0x103},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -585,6 +598,9 @@ int main_migrate(int argc, char **argv)
     case 0x102: /* --min_remaining */
         min_remaining = atoi(optarg);
         break;
+    case 0x103: /* --abort_if_busy */
+        abort_if_busy = 1;
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -619,7 +635,7 @@ int main_migrate(int argc, char **argv)
                   pause_after_migration ? " -p" : "");
     }
 
-    migrate_domain(domid, preserve_domid, rune, debug,
+    migrate_domain(domid, preserve_domid, rune, debug, abort_if_busy,
                    max_iters, min_remaining, config_filename);
     return EXIT_SUCCESS;
 }


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:16:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:16:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135060.251306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo74O-0007Cn-O4; Tue, 01 Jun 2021 16:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135060.251306; Tue, 01 Jun 2021 16:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo74O-0007Cg-KJ; Tue, 01 Jun 2021 16:16:44 +0000
Received: by outflank-mailman (input) for mailman id 135060;
 Tue, 01 Jun 2021 16:16:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70k-0005Ec-3S
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:12:58 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.174])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c6e3a17-5e01-451b-b7b0-3bf1394c1a90;
 Tue, 01 Jun 2021 16:11:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBa1BU
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c6e3a17-5e01-451b-b7b0-3bf1394c1a90
ARC-Seal: i=1; a=rsa-sha256; t=1622563896; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=oq8uQSOCopC2wqt4KXP/PJIMSsJMeRfwAtBCnP3SPlaTbYuA1whhMGU1X8xDREzu9e
    im7cg2t8gff2Kv4a6g7CtYM84w/eqeVnXN+Yd9ibLjoT4xtnz1J0vlIo/RdHy3cqyMJh
    yvSSYNKU2nK133nA9+g/AXOMKXpONKxLZK/jwywNtHYsLsX5Uu/yXP8nOwaYYccTcQnN
    9eF4Ezw5eiX6lAR6tnK9IbVLEQ5SLBrn2JINwDFUuQDiEoqkUzluZ4PRFxiHPINO+Oc2
    qIdIntXAw7DT948EDP4d7Z9S+zBfvZO6fhgtN2JPekin7o2hK00bCNnLCvIfMZELVYQ2
    m4Ow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563896;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=OehxVYQzQRfrdjC5ERJeiffQghSA59T3Cae/kEPq6Gc=;
    b=kU3HGacCm+PbjhdqOIrzws0AzFGx9mcS4kLVDRX74SATNRbGwiwV3VTEq/s3OoRzlM
    I0mN+DvSJhEg8pf7QNfpCnaDUkbAAvzj6SAchEqm5NUsy5rUK4zb7xXLO+awNKUCw+2I
    r/nxOgZBsZ6ugKbo9JsSOt2srZb/2N2tBaMbRi9h9rBZRZOyUPbY69TRttkrrgUc1Z5t
    r/BwFiNEN8EB5f50UJK6sm/LX9CcWSpGvlVVujAYZMLGk+dX+0SasXdKguV3XeP7i18O
    ObcW937mDHj99DT9KumElASHwzWM/H9AoPVGTFZoVKA/7enGluyvr0PpFBxxcQ96y3yu
    6gQQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563896;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=OehxVYQzQRfrdjC5ERJeiffQghSA59T3Cae/kEPq6Gc=;
    b=p7cPHCBZPWH95nUnid7843WoRQ4AbwjwOfNRRCW4BG3VnoV+S24IXBfAtGep5EjypU
    P+WRTZI0bUUwuMNIFoQ/Nqwkx1tTn7nFjfm7QVX8wDNb9CCIiIEXAo0/uwlTgvJzEygV
    YaqUm8LFqFh6wzO54oGsgUd6XAfsH+7v01z/tlrqwr5yxaUGz4Dr+7X1ow25TArgpGaS
    1LmR+e+HyzwWiBdzj3lZe4Evpwf8JDIVSuG0+c8BeZLarUjnDjK1goBopeNOUeboaviE
    rjNRsp5r9a6F4+xHh6hykcfMlbQkaKxSZcXkDhfaXabJw34uUIYZnh/zA8HfjEnzQ+o7
    If6Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 25/38] tools/guest: restore: split handle_page_data
Date: Tue,  1 Jun 2021 18:11:05 +0200
Message-Id: <20210601161118.18986-26-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data that can be consumed verbatim.

Split the various steps of record processing:
- move processing to handle_buffered_page_data
- adjust xenforeignmemory_map to set errno in case of failure
- adjust verify mode to set errno in case of failure

This change is preparation for future changes in handle_page_data;
no functional change is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |   9 +
 tools/libs/saverestore/restore.c | 343 ++++++++++++++++++++-----------
 2 files changed, 231 insertions(+), 121 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 73256f08ab..21fdcf234e 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -242,9 +242,14 @@ struct xc_sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    void *guest_data[MAX_BATCH_SIZE];
+
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
     xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
+
+    /* Must be the last member */
+    struct xc_sr_rec_page_data_header pages;
 };
 
 struct xc_sr_context
@@ -335,7 +340,11 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            void *verify_buf;
+
             struct xc_sr_restore_arrays *m;
+            void *guest_mapping;
+            uint32_t nr_mapped_pages;
         } restore;
     };
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index bb1574ed74..f7b4a7124b 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -186,123 +186,18 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     return rc;
 }
 
-/*
- * Given a list of pfns, their types, and a block of page data from the
- * stream, populate and record their types, map the relevant subset and copy
- * the data into the guest.
- */
-static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
-                             xen_pfn_t *pfns, uint32_t *types, void *page_data)
+static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = ctx->restore.m->map_errs;
-    int rc;
-    void *mapping = NULL, *guest_page = NULL;
-    unsigned int i, /* i indexes the pfns from the record. */
-        j,          /* j indexes the subset of pfns we decide to map. */
-        nr_pages = 0;
-
-    rc = populate_pfns(ctx, count, pfns, types);
-    if ( rc )
-    {
-        ERROR("Failed to populate pfns for batch of %u pages", count);
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
-
-        if ( page_type_has_stream_data(types[i]) == true )
-            mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-    }
-
-    /* Nothing to do? */
-    if ( nr_pages == 0 )
-        goto done;
-
-    mapping = guest_page = xenforeignmemory_map(
-        xch->fmem, ctx->domid, PROT_READ | PROT_WRITE,
-        nr_pages, mfns, map_errs);
-    if ( !mapping )
-    {
-        rc = -1;
-        PERROR("Unable to map %u mfns for %u pages of data",
-               nr_pages, count);
-        goto err;
-    }
-
-    for ( i = 0, j = 0; i < count; ++i )
-    {
-        if ( page_type_has_stream_data(types[i]) == false )
-            continue;
-
-        if ( map_errs[j] )
-        {
-            rc = -1;
-            ERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed with %d",
-                  pfns[i], mfns[j], types[i], map_errs[j]);
-            goto err;
-        }
-
-        /* Undo page normalisation done by the saver. */
-        rc = ctx->restore.ops.localise_page(ctx, types[i], page_data);
-        if ( rc )
-        {
-            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
-                  pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-            goto err;
-        }
-
-        if ( ctx->restore.verify )
-        {
-            /* Verify mode - compare incoming data to what we already have. */
-            if ( memcmp(guest_page, page_data, PAGE_SIZE) )
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-        }
-        else
-        {
-            /* Regular mode - copy incoming data into place. */
-            memcpy(guest_page, page_data, PAGE_SIZE);
-        }
-
-        ++j;
-        guest_page += PAGE_SIZE;
-        page_data += PAGE_SIZE;
-    }
-
- done:
-    rc = 0;
-
- err:
-    if ( mapping )
-        xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
-
-    return rc;
-}
+    int rc = 0;
 
-/*
- * Validate a PAGE_DATA record from the stream, and pass the results to
- * process_page_data() to actually perform the legwork.
- */
-static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
-{
+#if defined(__i386__) || defined(__x86_64__)
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rec_page_data_header *pages = rec->data;
-    unsigned int i, pages_of_data = 0;
-    int rc = -1;
-
-    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = ctx->restore.m->types, type;
-
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
      * bodge, but it is less bad than duplicating handle_page_data() between
      * different architectures.
      */
-#if defined(__i386__) || defined(__x86_64__)
+
     /* v2 compat.  Infer the position of STATIC_DATA_END. */
     if ( ctx->restore.format_version < 3 && !ctx->restore.seen_static_data_end )
     {
@@ -320,12 +215,26 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ERROR("No STATIC_DATA_END seen");
         goto err;
     }
+
+    rc = 0;
+err:
 #endif
 
-    if ( rec->length < sizeof(*pages) )
+    return rc;
+}
+
+static bool verify_rec_page_hdr(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    bool ret = false;
+
+    errno = EINVAL;
+
+    if ( rec_length < sizeof(*pages) )
     {
         ERROR("PAGE_DATA record truncated: length %u, min %zu",
-              rec->length, sizeof(*pages));
+              rec_length, sizeof(*pages));
         goto err;
     }
 
@@ -335,13 +244,35 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
+    if ( pages->count > MAX_BATCH_SIZE )
+    {
+        ERROR("pfn count %u in PAGE_DATA record too large", pages->count);
+        errno = E2BIG;
+        goto err;
+    }
+
+    if ( rec_length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
     {
         ERROR("PAGE_DATA record (length %u) too short to contain %u"
-              " pfns worth of information", rec->length, pages->count);
+              " pfns worth of information", rec_length, pages->count);
         goto err;
     }
 
+    ret = true;
+
+err:
+    return ret;
+}
+
+static bool verify_rec_page_pfns(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    uint32_t i, pages_of_data = 0;
+    xen_pfn_t pfn;
+    uint32_t type;
+    bool ret = false;
+
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -364,23 +295,183 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
              * have a page worth of data in the record. */
             pages_of_data++;
 
-        pfns[i] = pfn;
-        types[i] = type;
+        ctx->restore.m->pfns[i] = pfn;
+        ctx->restore.m->types[i] = type;
     }
 
-    if ( rec->length != (sizeof(*pages) +
+    if ( rec_length != (sizeof(*pages) +
                          (sizeof(uint64_t) * pages->count) +
                          (PAGE_SIZE * pages_of_data)) )
     {
         ERROR("PAGE_DATA record wrong size: length %u, expected "
-              "%zu + %zu + %lu", rec->length, sizeof(*pages),
+              "%zu + %zu + %lu", rec_length, sizeof(*pages),
               (sizeof(uint64_t) * pages->count), (PAGE_SIZE * pages_of_data));
         goto err;
     }
 
-    rc = process_page_data(ctx, pages->count, pfns, types,
-                           &pages->pfn[pages->count]);
+    ret = true;
+
+err:
+    return ret;
+}
+
+/*
+ * Populate pfns, if required
+ * Fill m->guest_data with either mapped address or NULL
+ * The caller must unmap guest_mapping
+ */
+static int map_guest_pages(struct xc_sr_context *ctx,
+                           struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    uint32_t i, p;
+    int rc;
+
+    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    if ( rc )
+    {
+        ERROR("Failed to populate pfns for batch of %u pages", pages->count);
+        goto err;
+    }
+
+    ctx->restore.nr_mapped_pages = 0;
+
+    for ( i = 0; i < pages->count; i++ )
+    {
+        ctx->restore.ops.set_page_type(ctx, m->pfns[i], m->types[i]);
+
+        if ( page_type_has_stream_data(m->types[i]) == false )
+        {
+            m->guest_data[i] = NULL;
+            continue;
+        }
+
+        m->mfns[ctx->restore.nr_mapped_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, m->pfns[i]);
+    }
+
+    /* Nothing to do? */
+    if ( ctx->restore.nr_mapped_pages == 0 )
+        goto done;
+
+    ctx->restore.guest_mapping = xenforeignmemory_map(xch->fmem, ctx->domid,
+            PROT_READ | PROT_WRITE, ctx->restore.nr_mapped_pages,
+            m->mfns, m->map_errs);
+    if ( !ctx->restore.guest_mapping )
+    {
+        rc = -1;
+        PERROR("Unable to map %u mfns for %u pages of data",
+               ctx->restore.nr_mapped_pages, pages->count);
+        goto err;
+    }
+
+    /* Verify mapping, and assign address to pfn data */
+    for ( i = 0, p = 0; i < pages->count; i++ )
+    {
+        if ( page_type_has_stream_data(m->types[i]) == false )
+            continue;
+
+        if ( m->map_errs[p] == 0 )
+        {
+            m->guest_data[i] = ctx->restore.guest_mapping + (p * PAGE_SIZE);
+            p++;
+            continue;
+        }
+
+        errno = m->map_errs[p];
+        rc = -1;
+        PERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed",
+              m->pfns[i], m->mfns[p], m->types[i]);
+        goto err;
+    }
+
+done:
+    rc = 0;
+
+err:
+    return rc;
+}
+
+/*
+ * Handle PAGE_DATA record from an existing buffer
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_buffered_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_rec_page_data_header *pages = rec->data;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    void *p;
+    uint32_t i;
+    int rc = -1, idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    if ( verify_rec_page_hdr(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the pfn numbers */
+    if ( verify_rec_page_pfns(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Map the target pfn */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    for ( i = 0, idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        p = &pages->pfn[pages->count] + (idx * PAGE_SIZE);
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], p);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], p, PAGE_SIZE) )
+            {
+                errno = EIO;
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+                goto err;
+            }
+        }
+        else
+        {
+            memcpy(m->guest_data[i], p, PAGE_SIZE);
+        }
+
+        idx++;
+    }
+
+    rc = 0;
+
  err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
     return rc;
 }
 
@@ -641,12 +732,21 @@ static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_recor
         break;
 
     case REC_TYPE_PAGE_DATA:
-        rc = handle_page_data(ctx, rec);
+        rc = handle_buffered_page_data(ctx, rec);
         break;
 
     case REC_TYPE_VERIFY:
         DPRINTF("Verify mode enabled");
         ctx->restore.verify = true;
+        if ( !ctx->restore.verify_buf )
+        {
+            ctx->restore.verify_buf = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+            if ( !ctx->restore.verify_buf )
+            {
+                rc = -1;
+                PERROR("Unable to allocate verify_buf");
+            }
+        }
         break;
 
     case REC_TYPE_CHECKPOINT:
@@ -725,7 +825,8 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
-    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m) +
+            (sizeof(*ctx->restore.m->pages.pfn) * MAX_BATCH_SIZE));
     if ( !ctx->restore.m ) {
         ERROR("Unable to allocate memory for arrays");
         rc = -1;


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:16:53 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 21/38] tools/guest: restore: move map_errs array
Date: Tue,  1 Jun 2021 18:11:01 +0200
Message-Id: <20210601161118.18986-22-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving the map_errs array into
preallocated space.
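The pattern described above can be sketched as follows. This is an illustrative stand-in, not the Xen code itself: the struct, field and function names mirror the patch but are assumptions, and the arrays are sized for the worst-case batch so the per-batch hot path never calls malloc()/free().

```c
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024

/* All worst-case-sized scratch arrays live in one struct ... */
struct restore_arrays {
    unsigned long mfns[MAX_BATCH_SIZE];
    int map_errs[MAX_BATCH_SIZE];
};

struct restore_ctx {
    struct restore_arrays *m;
};

/* ... allocated once at setup ... */
static int setup(struct restore_ctx *ctx)
{
    ctx->m = malloc(sizeof(*ctx->m));
    return ctx->m ? 0 : -1;
}

/* ... and reused for every batch: no allocation on the hot path. */
static int process_batch(struct restore_ctx *ctx, unsigned int count)
{
    int *map_errs = ctx->m->map_errs;
    unsigned int i;

    for ( i = 0; i < count; i++ )
        map_errs[i] = 0;
    return 0;
}
```

The trade-off is a fixed, slightly larger allocation held for the whole restore, in exchange for removing a malloc()/free() pair per processed batch.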

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index c7291bb5ca..cea549d129 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -241,6 +241,7 @@ struct xc_sr_restore_arrays {
     uint32_t types[MAX_BATCH_SIZE];
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    int map_errs[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index aadf322428..b534d80cbc 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -206,21 +206,13 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = malloc(count * sizeof(*map_errs));
+    int *map_errs = ctx->restore.m->map_errs;
     int rc;
     void *mapping = NULL, *guest_page = NULL;
     unsigned int i, /* i indexes the pfns from the record. */
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !map_errs )
-    {
-        rc = -1;
-        ERROR("Failed to allocate %zu bytes to process page data",
-              count * (sizeof(*mfns) + sizeof(*map_errs)));
-        goto err;
-    }
-
     rc = populate_pfns(ctx, count, pfns, types);
     if ( rc )
     {
@@ -298,8 +290,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     if ( mapping )
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
-    free(map_errs);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:17:15 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 26/38] tools/guest: restore: write data directly into guest
Date: Tue,  1 Jun 2021 18:11:06 +0200
Message-Id: <20210601161118.18986-27-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read the incoming migration stream directly into guest memory.
This avoids the intermediate allocation and copying, and the
resulting performance penalty.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |   1 +
 tools/libs/saverestore/restore.c | 132 ++++++++++++++++++++++++++++++-
 2 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 21fdcf234e..4218a8ecd6 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -243,6 +243,7 @@ struct xc_sr_restore_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
     void *guest_data[MAX_BATCH_SIZE];
+    struct iovec iov[MAX_BATCH_SIZE];
 
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index f7b4a7124b..722f77b46a 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -392,6 +392,122 @@ err:
     return rc;
 }
 
+/*
+ * Handle PAGE_DATA record from the stream.
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate them, record their types, map the relevant subset and
+ * read the data directly into the guest.
+ */
+static int handle_incoming_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_rhdr *rhdr)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    struct xc_sr_rec_page_data_header *pages = &m->pages;
+    uint64_t *pfn_nums = m->pages.pfn;
+    uint32_t i;
+    int rc, iov_idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    rc = read_exact(ctx->fd, pages, sizeof(*pages));
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn header");
+        goto err;
+    }
+
+    if ( verify_rec_page_hdr(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the incoming pfn numbers */
+    rc = read_exact(ctx->fd, pfn_nums, sizeof(*pfn_nums) * pages->count);
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn data");
+        goto err;
+    }
+
+    if ( verify_rec_page_pfns(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Finally read and verify the incoming pfn data */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    /* Prepare read buffers, pointing at guest memory or the verify buffer */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        m->iov[iov_idx].iov_len = PAGE_SIZE;
+        if ( ctx->restore.verify )
+            m->iov[iov_idx].iov_base = ctx->restore.verify_buf + i * PAGE_SIZE;
+        else
+            m->iov[iov_idx].iov_base = m->guest_data[i];
+        iov_idx++;
+    }
+
+    if ( !iov_idx )
+        goto done;
+
+    rc = readv_exact(ctx->fd, m->iov, iov_idx);
+    if ( rc )
+    {
+        PERROR("read of %d pages failed", iov_idx);
+        goto err;
+    }
+
+    /* Post-processing of pfn data */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], m->iov[iov_idx].iov_base);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], m->iov[iov_idx].iov_base, PAGE_SIZE) )
+            {
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            }
+        }
+
+        iov_idx++;
+    }
+
+done:
+    rc = 0;
+
+err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
+    return rc;
+}
+
 /*
  * Handle PAGE_DATA record from an existing buffer
  * Given a list of pfns, their types, and a block of page data from the
@@ -773,11 +889,19 @@ static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_s
     struct xc_sr_record rec;
     int rc;
 
-    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
-    if ( rc )
-        return rc;
+    switch ( rhdr->type )
+    {
+    case REC_TYPE_PAGE_DATA:
+        rc = handle_incoming_page_data(ctx, rhdr);
+        break;
+    default:
+        rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+        if ( rc == 0 )
+            rc = process_buffered_record(ctx, &rec);
+        break;
+    }
 
-    return process_buffered_record(ctx, &rec);
+    return rc;
 }
 
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:17:18 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 30/38] tools: add callback to libxl for precopy_policy and precopy_stats_t
Date: Tue,  1 Jun 2021 18:11:10 +0200
Message-Id: <20210601161118.18986-31-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This duplicates simple_precopy_policy. To recap its purpose:
- do up to 5 iterations of copying dirty domU memory to the target,
  including the initial copy of all domU memory, but excluding
  the final copy done while the domU is suspended
- do fewer iterations in case the domU dirtied fewer than 50 pages

Take the opportunity to also move xen_pfn_t into qw().
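The policy recapped above boils down to a small pure function. The sketch below uses illustrative constant and return-value names standing in for the LIBXL_XGS_POLICY_* and XGS_POLICY_* ones:

```c
enum { POLICY_CONTINUE_PRECOPY = 1, POLICY_STOP_AND_COPY = 2 };

#define POLICY_MAX_ITERATIONS 5
#define POLICY_TARGET_DIRTY_COUNT 50

/* Decide after each precopy iteration whether to keep copying or to
 * suspend the domU and do the final copy. A negative dirty_count means
 * the count is unknown for this iteration. */
static int precopy_policy(unsigned int iteration, long dirty_count)
{
    /* Few enough dirty pages remain: final suspend+copy is cheap now. */
    if ( dirty_count >= 0 && dirty_count < POLICY_TARGET_DIRTY_COUNT )
        return POLICY_STOP_AND_COPY;

    /* Not converging: give up after a bounded number of rounds. */
    if ( iteration >= POLICY_MAX_ITERATIONS )
        return POLICY_STOP_AND_COPY;

    return POLICY_CONTINUE_PRECOPY;
}
```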

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/light/libxl_dom_save.c       | 19 +++++++++++++++++++
 tools/libs/light/libxl_internal.h       |  2 ++
 tools/libs/light/libxl_save_msgs_gen.pl |  3 ++-
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 32e3cb5a13..3f3cff0342 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -373,6 +373,24 @@ int libxl__save_emulator_xenstore_data(libxl__domain_save_state *dss,
     return rc;
 }
 
+static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
+{
+    libxl__save_helper_state *shs = user;
+    libxl__domain_save_state *dss = shs->caller_state;
+    STATE_AO_GC(dss->ao);
+
+    LOGD(DEBUG, shs->domid, "iteration %u dirty_count %ld total_written %lu",
+         stats.iteration, stats.dirty_count, stats.total_written);
+    if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
+        goto stop_copy;
+    if (stats.iteration >= LIBXL_XGS_POLICY_MAX_ITERATIONS)
+        goto stop_copy;
+    return XGS_POLICY_CONTINUE_PRECOPY;
+
+stop_copy:
+    return XGS_POLICY_STOP_AND_COPY;
+}
+
 /*----- main code for saving, in order of execution -----*/
 
 void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
@@ -430,6 +448,7 @@ void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
         callbacks->suspend = libxl__domain_suspend_callback;
 
     callbacks->switch_qemu_logdirty = libxl__domain_suspend_common_switch_qemu_logdirty;
+    callbacks->precopy_policy = libxl__domain_save_precopy_policy;
 
     dss->sws.ao  = dss->ao;
     dss->sws.dss = dss;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 8af075291a..069c35452d 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -125,6 +125,8 @@
 #define DOMID_XS_PATH "domid"
 #define PVSHIM_BASENAME "xen-shim"
 #define PVSHIM_CMDLINE "pv-shim console=xen,pv"
+#define LIBXL_XGS_POLICY_MAX_ITERATIONS 5
+#define LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT 50
 
 /* Size macros. */
 #define __AC(X,Y)   (X##Y)
diff --git a/tools/libs/light/libxl_save_msgs_gen.pl b/tools/libs/light/libxl_save_msgs_gen.pl
index f263ee01bb..ab55c81644 100755
--- a/tools/libs/light/libxl_save_msgs_gen.pl
+++ b/tools/libs/light/libxl_save_msgs_gen.pl
@@ -23,6 +23,7 @@ our @msgs = (
                                              STRING doing_what),
                                             'unsigned long', 'done',
                                             'unsigned long', 'total'] ],
+    [ 'scxW',   "precopy_policy", ['precopy_stats_t', 'stats'] ],
     [ 'srcxA',  "suspend", [] ],
     [ 'srcxA',  "postcopy", [] ],
     [ 'srcxA',  "checkpoint", [] ],
@@ -142,7 +143,7 @@ static void bytes_put(unsigned char *const buf, int *len,
 
 END
 
-foreach my $simpletype (qw(int uint16_t uint32_t unsigned), 'unsigned long', 'xen_pfn_t') {
+foreach my $simpletype (qw(int uint16_t uint32_t unsigned precopy_stats_t xen_pfn_t), 'unsigned long') {
     my $typeid = typeid($simpletype);
     $out_body{'callout'} .= <<END;
 static int ${typeid}_get(const unsigned char **msg,


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:24:37 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210601 32/38] tools: add --min_remaining to libxl_domain_suspend
Date: Tue,  1 Jun 2021 18:11:12 +0200
Message-Id: <20210601161118.18986-33-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The decision to stop+move a domU to the new host must be based on two factors:
- the available network bandwidth for the migration stream
- the maximum time a workload within a domU can be safely suspended

Both values define how many dirty pages a workload may produce prior to the
final stop+move.

The default value of 50 pages is much too low with today's network bandwidths.
On an idle 1Gbit/s link these 200KiB will be transferred within ~2ms.

Give the admin a knob to adjust the point at which the final stop+move is
done, so they can base this decision on their own needs.

This patch adjusts xl(1) and the libxl API.
External users check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the availability
of the new .min_remaining property.
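The ~2ms figure follows from simple arithmetic: the final stop+copy moves min_remaining pages of 4KiB each over the link. A back-of-the-envelope helper for choosing a value (illustrative, not part of the patch):

```c
#define PAGE_SIZE 4096

/* Rough expected suspension time for the final stop+copy:
 * pages * bytes-per-page over the usable link rate, in milliseconds.
 * Ignores protocol overhead and the suspend/resume cost itself. */
static double expected_downtime_ms(unsigned int min_remaining,
                                   double link_bytes_per_sec)
{
    return (double)min_remaining * PAGE_SIZE / link_bytes_per_sec * 1000.0;
}
```

For the default of 50 pages on a 1Gbit/s link (~125MB/s) this gives roughly 1.6ms, matching the estimate above; an admin who can tolerate, say, 100ms of suspension could raise min_remaining accordingly.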

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in              |  8 ++++++++
 tools/include/libxl.h             |  1 +
 tools/libs/light/libxl_dom_save.c |  2 +-
 tools/libs/light/libxl_domain.c   |  1 +
 tools/libs/light/libxl_internal.h |  1 +
 tools/xl/xl_cmdtable.c            | 23 ++++++++++++-----------
 tools/xl/xl_migrate.c             |  9 ++++++++-
 7 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index b13e09c0ee..43609f6cdd 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -500,6 +500,14 @@ possible to use this option for a 'localhost' migration.
 
 Number of copy iterations before final suspend+move (default: 5)
 
+=item B<--min_remaining> I<pages>
+
+Number of remaining dirty pages. If the number of dirty pages drops below
+this threshold, the guest is suspended and the domU is finally moved to I<host>.
+
+This allows the host admin to control how long the domU will likely
+be suspended during transit.
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index bf77da0524..28d70b1078 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1715,6 +1715,7 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
 typedef struct {
     uint32_t flags; /* LIBXL_SUSPEND_* */
     uint32_t max_iters;
+    uint32_t min_remaining;
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 938c0127f3..ad5df89b2c 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -381,7 +381,7 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
 
     LOGD(DEBUG, shs->domid, "iteration %u dirty_count %ld total_written %lu",
          stats.iteration, stats.dirty_count, stats.total_written);
-    if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
+    if (stats.dirty_count >= 0 && stats.dirty_count < dss->min_remaining)
         goto stop_copy;
     if (stats.iteration >= dss->max_iters)
         goto stop_copy;
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 612d3dc4ea..ae4dc9ad01 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -528,6 +528,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->fd = fd;
     dss->type = type;
     dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
+    dss->min_remaining = props->min_remaining ?: LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 82b9dca5a0..63028586fe 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3642,6 +3642,7 @@ struct libxl__domain_save_state {
     int debug;
     int checkpointed_stream;
     uint32_t max_iters;
+    uint32_t min_remaining;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 8f8fa72760..acb84e3486 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -165,17 +165,18 @@ const struct cmd_spec cmd_table[] = {
       &main_migrate, 0, 1,
       "Migrate a domain to another host",
       "[options] <Domain> <host>",
-      "-h              Print this help.\n"
-      "-C <config>     Send <config> instead of config file from creation.\n"
-      "-s <sshcommand> Use <sshcommand> instead of ssh.  String will be passed\n"
-      "                to sh. If empty, run <host> instead of ssh <host> xl\n"
-      "                migrate-receive [-d -e]\n"
-      "-e              Do not wait in the background (on <host>) for the death\n"
-      "                of the domain.\n"
-      "--debug         Verify transferred domU page data.\n"
-      "-p              Do not unpause domain after migrating it.\n"
-      "-D              Preserve the domain id\n"
-      "--max_iters N   Number of copy iterations before final stop+move"
+      "-h                Print this help.\n"
+      "-C <config>       Send <config> instead of config file from creation.\n"
+      "-s <sshcommand>   Use <sshcommand> instead of ssh.  String will be passed\n"
+      "                  to sh. If empty, run <host> instead of ssh <host> xl\n"
+      "                  migrate-receive [-d -e]\n"
+      "-e                Do not wait in the background (on <host>) for the death\n"
+      "                  of the domain.\n"
+      "--debug           Verify transferred domU page data.\n"
+      "-p                Do not unpause domain after migrating it.\n"
+      "-D                Preserve the domain id\n"
+      "--max_iters N     Number of copy iterations before final stop+move\n"
+      "--min_remaining N Number of remaining dirty pages before final stop+move"
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index af117d4d56..14feb2b7ec 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -179,6 +179,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 static void migrate_domain(uint32_t domid, int preserve_domid,
                            const char *rune, int debug,
                            uint32_t max_iters,
+                           uint32_t min_remaining,
                            const char *override_config_file)
 {
     pid_t child = -1;
@@ -191,6 +192,7 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     libxl_domain_suspend_props props = {
         .flags = LIBXL_SUSPEND_LIVE,
         .max_iters = max_iters,
+        .min_remaining = min_remaining,
         };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
@@ -545,9 +547,11 @@ int main_migrate(int argc, char **argv)
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
     int preserve_domid = 0;
     uint32_t max_iters = 0;
+    uint32_t min_remaining = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
         {"max_iters", 1, 0, 0x101},
+        {"min_remaining", 1, 0, 0x102},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -578,6 +582,9 @@ int main_migrate(int argc, char **argv)
     case 0x101: /* --max_iters */
         max_iters = atoi(optarg);
         break;
+    case 0x102: /* --min_remaining */
+        min_remaining = atoi(optarg);
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -613,7 +620,7 @@ int main_migrate(int argc, char **argv)
     }
 
     migrate_domain(domid, preserve_domid, rune, debug,
-                   max_iters, config_filename);
+                   max_iters, min_remaining, config_filename);
     return EXIT_SUCCESS;
 }
 

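For reference, the stop-copy decision this patch parameterizes can be modeled as a small pure function. This is an illustrative sketch, not the libxl code; the names are made up, and only the comparison logic mirrors libxl__domain_save_precopy_policy above.

```c
#include <stdint.h>

/* Sketch of the precopy stop condition: keep copying until the number of
 * remaining dirty pages drops below min_remaining, or until max_iters
 * copy iterations have been performed (illustrative names). */
enum precopy_decision { PRECOPY_CONTINUE, PRECOPY_STOP };

static enum precopy_decision
precopy_policy(long dirty_count, unsigned int iteration,
               uint32_t max_iters, uint32_t min_remaining)
{
    if (dirty_count >= 0 && (uint64_t)dirty_count < min_remaining)
        return PRECOPY_STOP;    /* few enough dirty pages left: stop+move */
    if (iteration >= max_iters)
        return PRECOPY_STOP;    /* iteration budget exhausted */
    return PRECOPY_CONTINUE;
}
```

A larger --min_remaining shortens the live phase but lengthens the final pause, since more pages remain dirty at stop+move time; passing 0 keeps the built-in default.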

From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:24:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135159.251361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7C3-0002k3-T9; Tue, 01 Jun 2021 16:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135159.251361; Tue, 01 Jun 2021 16:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7C3-0002jv-PI; Tue, 01 Jun 2021 16:24:39 +0000
Received: by outflank-mailman (input) for mailman id 135159;
 Tue, 01 Jun 2021 16:24:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo719-0005Ec-4B
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:13:23 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60b9d1c2-9421-440a-ab4c-3ccfa7d0390c;
 Tue, 01 Jun 2021 16:11:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBf1Bf
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60b9d1c2-9421-440a-ab4c-3ccfa7d0390c
ARC-Seal: i=1; a=rsa-sha256; t=1622563901; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=XQvi/hWgKaEyTgjWvMcvqZqOox101nbYQgaaiAp5SBLHR29ICUoKym8AxgMDY6ieLe
    sMsvEkhKmXz+9TtBGRkSCrJEXBHt59b3L9NIZkTJDO/tNga/bbXm4Y4tPTOzLRECpfHM
    1ouopBa8M/jdhzRnh44V+zVPaqfQXSndaFMdwd15DZ+CPyNg/vezkbvM6OmVA3vHoLXD
    /lxReH7+2UR881rX4r3oEQzFmsmj+qEarC6ZPQ+BWv6oecW85Fp227gR8vmRs65/wgIR
    ssbAVSHRyfo8vKWo/Hkzo08UGCOZYUzLUPopuhgLbDKhWZ4POEfLr8kd5j3gwLbuWAYa
    orIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563901;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=w5x8dKYtaakiRZ5BSlh/86KLwxZ1yLf8VEe49hX7ZHA=;
    b=ffy6cFrg7UEFmCg3EIpdeyDVdI8S3WfoWVJxTXXMXZCV3ToFqrMv443V8bcOFyFLiM
    DFzY4kHwziBDuE2AaVmIBMowevy2hp3VHuWUQeq5YqIoPOsjwFh400JrB40XysFAAr7w
    SlfNejl62jBRZ5TmAo4aRLUSefpljy+VbzBmzH6s71NQlizP4TD/JlFZLD2sJgDtPNkx
    eJo+4JgrA0jhuFHk31kO+ynk9um5gLJmHIanY23e/O3mhiFu9HINmoTfX5I8+DPTZwqx
    dLftyE5qV33pmQE4RSHlIgReGD7BUv+OcI2p2zUD7bkOaa/Z+kNeP0J83JN7RIzLh4+6
    nNZg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563901;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=w5x8dKYtaakiRZ5BSlh/86KLwxZ1yLf8VEe49hX7ZHA=;
    b=SseXCMQ9Cv8cqJPYDhT1fW17p7wimEzxvMfFOPxB7C052+05DL6x3oBP4pqUCl0U8B
    Ky1zKQsRub4kbeyC7vDa9R+OMCdzBBTDhhG8wSYjb/AyfRzvzMo+HDwYZTyZ3FP5JL0B
    q83sFWdYpOKAhLVsFLW6st0ByOmbDvWGtDaTXPE/ZUVl+TOIxcEuAaI3KYI+wC9MmkxI
    jDnENx2p9At7sgk4P9YEfPBHNEl+pXrmnFWtQeXmhDQAvnbNypDscIQkCDPa2Xnsf6xN
    M2n5cqoUR4KJ1cV0nRQmJrXZ/xHyg7NdLlUmlwgNA5IspnndP8cn4QpcDoC7hslJEUjA
    rOpA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 35/38] tools: use xg_sr_bitmap for populated_pfns
Date: Tue,  1 Jun 2021 18:11:15 +0200
Message-Id: <20210601161118.18986-36-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h          | 21 +++++++-
 tools/libs/saverestore/restore.c         | 69 ------------------------
 tools/libs/saverestore/restore_x86_hvm.c |  9 ++++
 tools/libs/saverestore/restore_x86_pv.c  |  7 +++
 4 files changed, 35 insertions(+), 71 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 5241e50f5e..f3ee619844 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -403,8 +403,7 @@ struct xc_sr_context
             uint32_t     xenstore_domid,  console_domid;
 
             /* Bitmap of currently populated PFNs during restore. */
-            unsigned long *populated_pfns;
-            xen_pfn_t max_populated_pfn;
+            struct xg_sr_bitmap populated_pfns;
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
@@ -596,6 +595,24 @@ static inline bool page_type_has_stream_data(uint32_t type)
     }
     return ret;
 }
+
+static inline bool pfn_is_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    return xg_sr_test_bit(pfn, &ctx->restore.populated_pfns);
+}
+
+static inline int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( xg_sr_set_bit(pfn, &ctx->restore.populated_pfns) == false )
+    {
+        PERROR("Failed to realloc populated_pfns bitmap");
+        errno = ENOMEM;
+        return -1;
+    }
+    return 0;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 722f77b46a..0682616f16 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -71,64 +71,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Is a pfn populated?
- */
-static bool pfn_is_populated(const struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    if ( pfn > ctx->restore.max_populated_pfn )
-        return false;
-    return test_bit(pfn, ctx->restore.populated_pfns);
-}
-
-/*
- * Set a pfn as populated, expanding the tracking structures if needed. To
- * avoid realloc()ing too excessively, the size increased to the nearest power
- * of two large enough to contain the required pfn.
- */
-static int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    xc_interface *xch = ctx->xch;
-
-    if ( pfn > ctx->restore.max_populated_pfn )
-    {
-        xen_pfn_t new_max;
-        size_t old_sz, new_sz;
-        unsigned long *p;
-
-        /* Round up to the nearest power of two larger than pfn, less 1. */
-        new_max = pfn;
-        new_max |= new_max >> 1;
-        new_max |= new_max >> 2;
-        new_max |= new_max >> 4;
-        new_max |= new_max >> 8;
-        new_max |= new_max >> 16;
-#ifdef __x86_64__
-        new_max |= new_max >> 32;
-#endif
-
-        old_sz = bitmap_size(ctx->restore.max_populated_pfn + 1);
-        new_sz = bitmap_size(new_max + 1);
-        p = realloc(ctx->restore.populated_pfns, new_sz);
-        if ( !p )
-        {
-            ERROR("Failed to realloc populated bitmap");
-            errno = ENOMEM;
-            return -1;
-        }
-
-        memset((uint8_t *)p + old_sz, 0x00, new_sz - old_sz);
-
-        ctx->restore.populated_pfns    = p;
-        ctx->restore.max_populated_pfn = new_max;
-    }
-
-    assert(!test_bit(pfn, ctx->restore.populated_pfns));
-    set_bit(pfn, ctx->restore.populated_pfns);
-
-    return 0;
-}
-
 /*
  * Given a set of pfns, obtain memory from Xen to fill the physmap for the
  * unpopulated subset.  If types is NULL, no page type checking is performed
@@ -929,16 +871,6 @@ static int setup(struct xc_sr_context *ctx)
     if ( rc )
         goto err;
 
-    ctx->restore.max_populated_pfn = (32 * 1024 / 4) - 1;
-    ctx->restore.populated_pfns = bitmap_alloc(
-        ctx->restore.max_populated_pfn + 1);
-    if ( !ctx->restore.populated_pfns )
-    {
-        ERROR("Unable to allocate memory for populated_pfns bitmap");
-        rc = -1;
-        goto err;
-    }
-
     ctx->restore.buffered_records = malloc(
         DEFAULT_BUF_RECORDS * sizeof(struct xc_sr_record));
     if ( !ctx->restore.buffered_records )
@@ -977,7 +909,6 @@ static void cleanup(struct xc_sr_context *ctx)
 
     free(ctx->restore.m);
     free(ctx->restore.buffered_records);
-    free(ctx->restore.populated_pfns);
 
     if ( ctx->restore.ops.cleanup(ctx) )
         PERROR("Failed to clean up");
diff --git a/tools/libs/saverestore/restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
index bd63bd2818..73f17b7fcb 100644
--- a/tools/libs/saverestore/restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -136,6 +136,7 @@ static int x86_hvm_localise_page(struct xc_sr_context *ctx,
 static int x86_hvm_setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    unsigned long max_pfn;
 
     if ( ctx->restore.guest_type != DHDR_TYPE_X86_HVM )
     {
@@ -161,6 +162,13 @@ static int x86_hvm_setup(struct xc_sr_context *ctx)
     }
 #endif
 
+    max_pfn = max(ctx->restore.p2m_size, ctx->dominfo.max_memkb >> (PAGE_SHIFT-10));
+    if ( !xg_sr_bitmap_expand(&ctx->restore.populated_pfns, max_pfn) )
+    {
+        PERROR("Unable to allocate memory for populated_pfns bitmap");
+        return -1;
+    }
+
     return 0;
 }
 
@@ -241,6 +249,7 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
 
 static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
+    xg_sr_bitmap_free(&ctx->restore.populated_pfns);
     free(ctx->x86.hvm.restore.context.ptr);
 
     free(ctx->x86.restore.cpuid.ptr);
diff --git a/tools/libs/saverestore/restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
index 96608e5231..bdaa0c0e76 100644
--- a/tools/libs/saverestore/restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -1060,6 +1060,12 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
+    if ( !xg_sr_bitmap_expand(&ctx->restore.populated_pfns, 32 * 1024 / 4) )
+    {
+        PERROR("Unable to allocate memory for populated_pfns bitmap");
+        return -1;
+    }
+
     ctx->x86.pv.restore.nr_vcpus = ctx->dominfo.max_vcpu_id + 1;
     ctx->x86.pv.restore.vcpus = calloc(sizeof(struct xc_sr_x86_pv_restore_vcpu),
                                        ctx->x86.pv.restore.nr_vcpus);
@@ -1153,6 +1159,7 @@ static int x86_pv_stream_complete(struct xc_sr_context *ctx)
  */
 static int x86_pv_cleanup(struct xc_sr_context *ctx)
 {
+    xg_sr_bitmap_free(&ctx->restore.populated_pfns);
     free(ctx->x86.pv.p2m);
     free(ctx->x86.pv.p2m_pfns);
 
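For context, the bitmap-backed populated-pfn tracking introduced above behaves like a self-growing bit set: setting a bit expands the backing store on demand, so callers never have to size it upfront. The following is a sketch with illustrative names, not the xg_sr_bitmap API itself.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Minimal self-growing bit set (illustrative): growbits_set() reallocs
 * and zero-fills the backing store when the bit is out of range. */
struct growbits { unsigned char *p; size_t nbytes; };

static bool growbits_set(struct growbits *bm, size_t bit)
{
    size_t need = bit / 8 + 1;
    if (need > bm->nbytes) {
        size_t newsz = bm->nbytes ? bm->nbytes : 64;
        while (newsz < need)
            newsz *= 2;               /* double to amortize reallocs */
        unsigned char *q = realloc(bm->p, newsz);
        if (!q)
            return false;             /* caller maps this to ENOMEM */
        memset(q + bm->nbytes, 0, newsz - bm->nbytes);
        bm->p = q;
        bm->nbytes = newsz;
    }
    bm->p[bit / 8] |= (unsigned char)(1u << (bit % 8));
    return true;
}

static bool growbits_test(const struct growbits *bm, size_t bit)
{
    if (bit / 8 >= bm->nbytes)
        return false;                 /* beyond current size: not set */
    return (bm->p[bit / 8] >> (bit % 8)) & 1;
}
```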


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:25:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135185.251372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7CV-0003Yq-59; Tue, 01 Jun 2021 16:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135185.251372; Tue, 01 Jun 2021 16:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7CV-0003Yi-23; Tue, 01 Jun 2021 16:25:07 +0000
Received: by outflank-mailman (input) for mailman id 135185;
 Tue, 01 Jun 2021 16:25:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo714-0005Ec-47
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:13:18 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.103])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c75630c-d80a-48a9-99c5-350bb6fde6fb;
 Tue, 01 Jun 2021 16:11:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBb1BX
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c75630c-d80a-48a9-99c5-350bb6fde6fb
ARC-Seal: i=1; a=rsa-sha256; t=1622563898; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=mLSCEVWIFwr02TVbN1MLeBsQfsvHUlomH4hp7tBnrC/a1zOboNFjxGPvuv0nyJ3Uuk
    xvl1JVOLX3SUyaesWJJrQcEIyZtT5iUykTLdwsYVgkjEDnJo1pgknKJHt30/NSkxzb2D
    JjlMMnPGKIECbpLepyJTTCf2u/QCsd+DMDbZ0kxjBiovQ0QvN3vB50mPvZNqYTPsmXe4
    AEZRBKZ/aM1q59MH4MgqGu5Ol6UYdOdaHnhFwq5OpwM9WeWT/aBFH3XOH+qrm7ox5xz/
    BLxoauPJk/fnQoIOsw91SUv3r0qfknFEl/YRAyARtDcSmh1m03lh5pD+3nkT2MsZdvqb
    W6Eg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563898;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=MTSqerw0vG4VSeyAPsATEiIANrLKHzhkpCbX69ppdjs=;
    b=cDP8SqOs6PbiHTLtkZHJMJO3LZgPhQDqfsDb6YGUlSOpGNWeIcyzOmQt+yB8jr3GCs
    Pxo9cQa6EVHzmKQteeIRimlg6eTxcV3fjHVqsDfQPYVv6zMvYUQKL4xMtkq477r02SGX
    rn6gs6q7l9Q1l/3YBy5lcOJOsoI244Drqrw9PPL0xE7+RhKRWsn0v+O+ebI6muqiQTLr
    1NemoUBsFyy8Sv4nZUmcC9XIreuZJ6TzUjSwcJHvHFuRVeSKwYap8BN4zbrfzUQqOowN
    7CTmqbCWD5IcLg6Mwk9olqqIfLQbSiDIptyGOzE179TbeuN8ChcETaf9RiuCp2AdgQnT
    uIaQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563898;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=MTSqerw0vG4VSeyAPsATEiIANrLKHzhkpCbX69ppdjs=;
    b=HIfYIkehMLE0pDA3u3B/kjs+Yufrb7XkJu/LvIJowk/lHA9LfbTCISGFdXYvPh/5xc
    iyRBvdOctkN5Ml7Oxhb9vNNTWoi64gwJyWZrLIlDHxgGv045Jln9X3cBy7U4JnGNvxKq
    7ZpVirnVLOlvrzyvF8xFKwDgH81z+ph+EE6zilrHqwJO0Pbxzv1C+JtoiOZOXcTqT5iY
    IBIhrmRb763eM2W7rCRv1L0MoGoFO2+h4JoRiyYmV1YGdQ3amma8tIATUmo5GDTqKzTW
    Nf8lOgxURBOqqS/0akEFZE3/0FC/bz+5Qd1Ny7MnJ0yRtUhJyoUmvDnm4bjU5U1TX5j8
    KySw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v20210601 28/38] tools: adjust libxl_domain_suspend to receive a struct props
Date: Tue,  1 Jun 2021 18:11:08 +0200
Message-Id: <20210601161118.18986-29-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Upcoming changes will pass more knobs down to xc_domain_save.
Adjust the libxl_domain_suspend API to allow additional knobs to be
added easily.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
 tools/include/libxl.h                | 26 +++++++++++++++++++++++---
 tools/libs/light/libxl_domain.c      |  7 ++++---
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 ++-
 tools/xl/xl_migrate.c                |  9 ++++++---
 tools/xl/xl_saverestore.c            |  3 ++-
 5 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 29931626a2..9a4d7514ed 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1706,12 +1706,32 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
     libxl_retrieve_domain_configuration_0x041200
 #endif
 
+/*
+ * LIBXL_HAVE_DOMAIN_SUSPEND_PROPS indicates that
+ * libxl_domain_suspend() takes a libxl_domain_suspend_props struct.
+ */
+#define LIBXL_HAVE_DOMAIN_SUSPEND_PROPS 1
+
+typedef struct {
+    uint32_t flags; /* LIBXL_SUSPEND_* */
+} libxl_domain_suspend_props;
+#define LIBXL_SUSPEND_DEBUG 1
+#define LIBXL_SUSPEND_LIVE 2
+
 int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
-                         int flags, /* LIBXL_SUSPEND_* */
+                         libxl_domain_suspend_props *props,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
-#define LIBXL_SUSPEND_DEBUG 1
-#define LIBXL_SUSPEND_LIVE 2
+#if defined(LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x041600
+static inline int libxl_domain_suspend_0x041500(libxl_ctx *ctx, uint32_t domid,
+                         int fd, int flags, /* LIBXL_SUSPEND_* */
+                         const libxl_asyncop_how *ao_how)
+{
+    libxl_domain_suspend_props props = { .flags = flags, };
+    return libxl_domain_suspend(ctx, domid, fd, &props, ao_how);
+}
+#define libxl_domain_suspend libxl_domain_suspend_0x041500
+#endif
 
 /*
  * Only suspend domain, do not save its state to file, do not destroy it.
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 5d4ec90711..45e0c57c3a 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -505,7 +505,8 @@ static void domain_suspend_cb(libxl__egc *egc,
 
 }
 
-int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
+int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
+                         libxl_domain_suspend_props *props,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -526,8 +527,8 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
     dss->domid = domid;
     dss->fd = fd;
     dss->type = type;
-    dss->live = flags & LIBXL_SUSPEND_LIVE;
-    dss->debug = flags & LIBXL_SUSPEND_DEBUG;
+    dss->live = props->flags & LIBXL_SUSPEND_LIVE;
+    dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
 
     rc = libxl__fd_flags_modify_save(gc, dss->fd,
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..eaf7bce35a 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -614,10 +614,11 @@ value stub_libxl_domain_suspend(value ctx, value domid, value fd, value async, v
 	int ret;
 	uint32_t c_domid = Int_val(domid);
 	int c_fd = Int_val(fd);
+	libxl_domain_suspend_props props = {};
 	libxl_asyncop_how *ao_how = aohow_val(async);
 
 	caml_enter_blocking_section();
-	ret = libxl_domain_suspend(CTX, c_domid, c_fd, 0, ao_how);
+	ret = libxl_domain_suspend(CTX, c_domid, c_fd, &props, ao_how);
 	caml_leave_blocking_section();
 
 	free(ao_how);
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index b8594f44a5..144890924f 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -186,7 +186,10 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     char *away_domname;
     char rc_buf;
     uint8_t *config_data;
-    int config_len, flags = LIBXL_SUSPEND_LIVE;
+    int config_len;
+    libxl_domain_suspend_props props = {
+        .flags = LIBXL_SUSPEND_LIVE,
+        };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
                            &config_data, &config_len);
@@ -205,8 +208,8 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     xtl_stdiostream_adjust_flags(logger, XTL_STDIOSTREAM_HIDE_PROGRESS, 0);
 
     if (debug)
-        flags |= LIBXL_SUSPEND_DEBUG;
-    rc = libxl_domain_suspend(ctx, domid, send_fd, flags, NULL);
+        props.flags |= LIBXL_SUSPEND_DEBUG;
+    rc = libxl_domain_suspend(ctx, domid, send_fd, &props, NULL);
     if (rc) {
         fprintf(stderr, "migration sender: libxl_domain_suspend failed"
                 " (rc=%d)\n", rc);
diff --git a/tools/xl/xl_saverestore.c b/tools/xl/xl_saverestore.c
index 953d791d1a..476d4d9a6a 100644
--- a/tools/xl/xl_saverestore.c
+++ b/tools/xl/xl_saverestore.c
@@ -130,6 +130,7 @@ static int save_domain(uint32_t domid, int preserve_domid,
     int fd;
     uint8_t *config_data;
     int config_len;
+    libxl_domain_suspend_props props = {};
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
                            &config_data, &config_len);
@@ -146,7 +147,7 @@ static int save_domain(uint32_t domid, int preserve_domid,
 
     save_domain_core_writeconfig(fd, filename, config_data, config_len);
 
-    int rc = libxl_domain_suspend(ctx, domid, fd, 0, NULL);
+    int rc = libxl_domain_suspend(ctx, domid, fd, &props, NULL);
     close(fd);
 
     if (rc < 0) {

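The compatibility-shim pattern used above (old int-flags signature preserved behind a LIBXL_API_VERSION check, redirected to the new struct-taking function via a macro) can be reduced to this standalone sketch. do_suspend, do_suspend_compat and suspend_props are illustrative stand-ins, not libxl symbols.

```c
#include <stdint.h>

typedef struct { uint32_t flags; } suspend_props;

/* Stands in for the new struct-taking API entry point. */
static int do_suspend(const suspend_props *props)
{
    return (int)props->flags;
}

/* Old-style entry point kept as an inline wrapper for source
 * compatibility: it converts the legacy int-flags argument into the
 * new props struct and forwards the call. */
static inline int do_suspend_compat(int flags)
{
    suspend_props props = { .flags = (uint32_t)flags };
    return do_suspend(&props);
}
```

Old callers compiled against the previous API version are transparently rebound to the wrapper, so they need no source changes.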

From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:25:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:25:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135225.251383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7D0-0004KE-Fd; Tue, 01 Jun 2021 16:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135225.251383; Tue, 01 Jun 2021 16:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo7D0-0004K7-CI; Tue, 01 Jun 2021 16:25:38 +0000
Received: by outflank-mailman (input) for mailman id 135225;
 Tue, 01 Jun 2021 16:25:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HQ7/=K3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lo70z-0005Ec-3p
 for xen-devel@lists.xenproject.org; Tue, 01 Jun 2021 16:13:13 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.103])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8196cfa2-eb0f-448e-8912-37d6a2321964;
 Tue, 01 Jun 2021 16:11:45 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx51GBe1Be
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 1 Jun 2021 18:11:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8196cfa2-eb0f-448e-8912-37d6a2321964
ARC-Seal: i=1; a=rsa-sha256; t=1622563900; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=K0Zx3LsjNuvmX6bb/7E8krJY0h3wiRmnd1Iea4Mn7/bvNh47ZNrEJFxkHe2izUzrXe
    +rwp5gHhr76tMqvWqJqJd0y99aZgsM1/xnRbTcSuzWUJ5v7wOW6hkW1DPvyBbnRHohOj
    Gytiyp7Ext008MkAWiOs9qNNV1ggeTwyfZfJ4AIIUU2Ui53tNvRSbSXvck/jX6UxB821
    DFrU/IziJg6ICyDO8Mzq8XYVk654MmPq0I4es24ug7WoGeG4MHRfCSQLesL9fvbOR064
    UyNcAiZhCOwxrVL8iNU5DyrRU6I0eIOBZq2uhCsR7jbJdT+SB1uzngSKYIrNM3zTjI4z
    kjSg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563900;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aRD7fh9RvdtKI1U0FLi3WzkFwBE/k6ib8HdQNOs79bo=;
    b=CIv1T1c91pfhV264VbVU4AxCLfH/LvZWqdJQcBFiJCZHW5urMgZuWY1CLjYTyF1NQl
    O91kPoTN2t3D7GXTRt7aHLpCJ8odBdAuxINJfPGHJ/VlrinYWy6ySdGY9XhN2K0mZGBM
    dRlYR5Ag5RF1sBEVOSPEfvoXfU9EvRTIO84ARVie5QFWDVZpMhNBJMLsuob7IDFnvMMQ
    /zaM8SuQ9GjIPS8Cnl1ID47+N4tm3vG+ry3dy9lcW0rDGPkYUzqnGJRx7A1y5fOzlH3j
    rz56GRF2gcTq2u16zS2IftFX2rjYBhmXbxJpJ5atRMNchqD9OHGOPxoR2FByjaQlBkNY
    zRyQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622563900;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aRD7fh9RvdtKI1U0FLi3WzkFwBE/k6ib8HdQNOs79bo=;
    b=byQaCNEsLu+pjNRt4QtSeXJBkLjIgjSdqTqKhGGXeKKSvKGK2NJNo3fxUdWoBxNzqJ
    1TnNPfO6aE3yRpak2777FoyZJtiRArnz4o65jk+MyqoONsgTAkbQ97sPeKLXxXu1EugD
    J2Kj//lp0+bIG5No1GIxZrjLRTCOCsGrhE6nx+FqzykSJIESfmxnk5g3t6kp9cDmHfsJ
    KMuMdSvjOnX98h5HyvvPrbQArAkL65xJnGByHLNtXBY3PmYYgAJMWRfsgtsrlrGVxGzy
    1O606WYYHKHlYmtbViFvg+FQOVPJwVnRX5kkEEyG+UfqS4YZWlqAHmObCo66gobFXATs
    umsw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgs09ValFcstyKtnZMLOo4jr88Zf5nXI1mYJUK+h"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 34/38] tools: add API for expandable bitmaps
Date: Tue,  1 Jun 2021 18:11:14 +0200
Message-Id: <20210601161118.18986-35-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Since the incoming migration stream lacks information about what the
highest pfn will be, data structures cannot be allocated upfront.

Add an API for expandable bitmaps, loosely based on pfn_set_populated.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.c | 40 ++++++++++++++++++++
 tools/libs/saverestore/common.h | 67 +++++++++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)
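The growth policy in this patch rounds a requested bit count up to a power of two (less 1) by smearing the top set bit rightward, so repeated expansions cost only O(log n) reallocs. Below is a standalone sketch of just that rounding step; the function name is illustrative, and the real code additionally rounds the result up to a multiple of BITS_PER_LONG.

```c
/* Round v up to the nearest power of two larger than v, less 1,
 * by OR-ing the value with progressively wider right-shifts of
 * itself until all bits below the top set bit are set. */
static unsigned long round_up_pow2_minus1(unsigned long v)
{
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    if (sizeof(unsigned long) > 4)
        v |= v >> 32;   /* only reached when unsigned long is 64-bit */
    return v;
}
```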

diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 7da7fa4e2c..9f1af4e671 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -163,6 +163,46 @@ static void __attribute__((unused)) build_assertions(void)
     BUILD_BUG_ON(sizeof(struct xc_sr_rec_hvm_params)        != 8);
 }
 
+/*
+ * Expand the tracking structures as needed.
+ * To avoid excessive realloc() calls, the size is increased to the nearest
+ * power of two large enough to contain the required number of bits.
+ */
+bool _xg_sr_bitmap_expand(struct xg_sr_bitmap *bm, unsigned long bits)
+{
+    size_t new_max;
+    size_t old_sz, new_sz;
+    void *p;
+
+    if (bits <= bm->bits)
+        return true;
+
+    /* Round up to the nearest power of two larger than bits, less 1. */
+    new_max = bits;
+    new_max |= new_max >> 1;
+    new_max |= new_max >> 2;
+    new_max |= new_max >> 4;
+    new_max |= new_max >> 8;
+    new_max |= new_max >> 16;
+    if ( sizeof(unsigned long) > 4 )
+        new_max |= new_max >> 32;
+
+    /* Round up to a whole number of unsigned longs. */
+    new_max = (new_max + BITS_PER_LONG - 1) & ~(BITS_PER_LONG - 1);
+
+    old_sz = bitmap_size(bm->bits);
+    new_sz = bitmap_size(new_max);
+    p = realloc(bm->p, new_sz);
+    if (!p)
+        return false;
+
+    memset(p + old_sz, 0, new_sz - old_sz);
+    bm->p = p;
+    bm->bits = new_max;
+
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index cf8d6545e2..5241e50f5e 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -30,6 +30,73 @@ const char *rec_type_to_str(uint32_t type);
 struct xc_sr_context;
 struct xc_sr_record;
 
+struct xg_sr_bitmap
+{
+    void *p;
+    unsigned long bits;
+};
+
+extern bool _xg_sr_bitmap_expand(struct xg_sr_bitmap *bm, unsigned long bits);
+
+static inline bool xg_sr_bitmap_expand(struct xg_sr_bitmap *bm, unsigned long bits)
+{
+    if (bits > bm->bits)
+        return _xg_sr_bitmap_expand(bm, bits);
+    return true;
+}
+
+static inline void xg_sr_bitmap_free(struct xg_sr_bitmap *bm)
+{
+    free(bm->p);
+    bm->p = NULL;
+}
+
+static inline bool xg_sr_set_bit(unsigned long bit, struct xg_sr_bitmap *bm)
+{
+    if (xg_sr_bitmap_expand(bm, bit + 1) == false)
+        return false;
+
+    set_bit(bit, bm->p);
+    return true;
+}
+
+static inline bool xg_sr_test_bit(unsigned long bit, struct xg_sr_bitmap *bm)
+{
+    if (bit >= bm->bits)
+        return false;
+    return !!test_bit(bit, bm->p);
+}
+
+static inline void xg_sr_clear_bit(unsigned long bit, struct xg_sr_bitmap *bm)
+{
+    if (bit < bm->bits)
+        clear_bit(bit, bm->p);
+}
+
+static inline bool xg_sr_test_and_clear_bit(unsigned long bit, struct xg_sr_bitmap *bm)
+{
+    if (bit >= bm->bits)
+        return false;
+    return !!test_and_clear_bit(bit, bm->p);
+}
+
+/* No way to report a potential allocation error; the bitmap must be expanded prior to use. */
+static inline bool xg_sr_test_and_set_bit(unsigned long bit, struct xg_sr_bitmap *bm)
+{
+    if (bit >= bm->bits)
+        return false;
+    return !!test_and_set_bit(bit, bm->p);
+}
+
+static inline bool xg_sr_set_long_bit(unsigned long base_bit, struct xg_sr_bitmap *bm)
+{
+    if (xg_sr_bitmap_expand(bm, base_bit + BITS_PER_LONG) == false)
+        return false;
+
+    set_bit_long(base_bit, bm->p);
+    return true;
+}
+
 /**
  * Save operations.  To be implemented for each type of guest, for use by the
  * common save algorithm.


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 16:25:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 16:25:48 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210601 36/38] tools: use superpages during restore of HVM guest
Date: Tue,  1 Jun 2021 18:11:16 +0200
Message-Id: <20210601161118.18986-37-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

During creation of an HVM domU, meminit_hvm() tries to map superpages.
After save/restore or migration this mapping is lost, and everything is
allocated as individual pages. This causes a performance degradation after
migration.

Add the necessary code to preallocate a superpage for an incoming chunk of
pfns. In case a pfn was not populated on the sending side, it must be
freed on the receiving side to avoid over-allocation.

The existing code for x86_pv is moved unmodified into its own file.
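
The allocation strategy can be illustrated with a standalone sketch. All names
here (`struct budget`, `alloc_with_fallback`, `accept_all`) are hypothetical;
`try_alloc` stands in for `xc_domain_populate_physmap()`, and the per-order
"attempted" bitmaps of the real patch are omitted. The idea: try a 1GB extent
first, fall back to 2MB, then to a single 4k page, skipping any order whose
extent would exceed the domain's page budget:

```c
#include <assert.h>
#include <stdbool.h>

#define SUPERPAGE_2MB_SHIFT 9    /* a 2MB superpage covers 2^9 4k pages */
#define SUPERPAGE_1GB_SHIFT 18   /* a 1GB superpage covers 2^18 4k pages */

struct budget {
    unsigned long tot_pages;     /* pages allocated so far */
    unsigned long max_pages;     /* hard limit for the domain */
};

/* Base pfn of the order-aligned extent containing 'pfn'. */
static unsigned long extent_base(unsigned long pfn, unsigned int order)
{
    return (pfn >> order) << order;
}

/* Stub standing in for xc_domain_populate_physmap(): accept any request. */
static bool accept_all(unsigned long base, unsigned int order)
{
    (void)base;
    (void)order;
    return true;
}

/*
 * Try orders from largest to smallest, skipping any order whose extent
 * would exceed the remaining page budget.  Returns the order allocated,
 * or -1 if nothing fits or all attempts fail.
 */
static int alloc_with_fallback(unsigned long pfn, struct budget *b,
                               bool (*try_alloc)(unsigned long base,
                                                 unsigned int order))
{
    static const unsigned int orders[] = {
        SUPERPAGE_1GB_SHIFT, SUPERPAGE_2MB_SHIFT, 0
    };
    unsigned int i;

    for ( i = 0; i < sizeof(orders) / sizeof(orders[0]); i++ )
    {
        unsigned long count = 1UL << orders[i];

        if ( b->tot_pages + count > b->max_pages )
            continue;            /* would over-allocate; try a smaller order */
        if ( try_alloc(extent_base(pfn, orders[i]), orders[i]) )
        {
            b->tot_pages += count;
            return (int)orders[i];
        }
    }
    return -1;
}
```

The budget check is what keeps an optimistic superpage allocation from pushing
a (possibly ballooned) domain past its max_pages limit; pages allocated
speculatively but never populated are later released by the hole-punching path.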

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_dom_x86.c            |   5 -
 tools/libs/guest/xg_private.h            |   5 +
 tools/libs/saverestore/common.c          |   2 -
 tools/libs/saverestore/common.h          |  29 +-
 tools/libs/saverestore/restore.c         |  62 +---
 tools/libs/saverestore/restore_x86_hvm.c | 369 ++++++++++++++++++++++-
 tools/libs/saverestore/restore_x86_pv.c  |  61 +++-
 7 files changed, 455 insertions(+), 78 deletions(-)

diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
index d2eb89ce01..ec0d18fd60 100644
--- a/tools/libs/guest/xg_dom_x86.c
+++ b/tools/libs/guest/xg_dom_x86.c
@@ -44,11 +44,6 @@
 
 #define SUPERPAGE_BATCH_SIZE 512
 
-#define SUPERPAGE_2MB_SHIFT   9
-#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
-#define SUPERPAGE_1GB_SHIFT   18
-#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
-
 #define X86_CR0_PE 0x01
 #define X86_CR0_ET 0x10
 
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 25e46d7ce1..ad3d108c14 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -154,4 +154,9 @@ int pin_table(xc_interface *xch, unsigned int type, unsigned long mfn,
 #define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT)
 #define M2P_CHUNKS(_m)  (M2P_SIZE((_m)) >> M2P_SHIFT)
 
+#define SUPERPAGE_2MB_SHIFT   9
+#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
+#define SUPERPAGE_1GB_SHIFT   18
+#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
+
 #endif /* XG_PRIVATE_H */
diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 9f1af4e671..09b2983898 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -1,5 +1,3 @@
-#include <assert.h>
-
 #include "common.h"
 
 #include <xen-tools/libs.h>
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index f3ee619844..b323c1b71a 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -3,6 +3,7 @@
 
 #include <unistd.h>
 #include <errno.h>
+#include <assert.h>
 #include <stdbool.h>
 #include <stdio.h>
 #include <stdlib.h>
@@ -219,6 +220,16 @@ struct xc_sr_restore_ops
      */
     int (*setup)(struct xc_sr_context *ctx);
 
+    /**
+     * Populate PFNs
+     *
+     * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+     * unpopulated subset.
+     */
+    int (*populate_pfns)(struct xc_sr_context *ctx, unsigned count,
+                         const xen_pfn_t *original_pfns, const uint32_t *types);
+
+
     /**
      * Process an individual record from the stream.  The caller shall take
      * care of processing common records (e.g. END, PAGE_DATA).
@@ -366,6 +377,8 @@ struct xc_sr_context
 
             int send_back_fd;
             unsigned long p2m_size;
+            unsigned long max_pages;
+            unsigned long tot_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
 
             /* From Image Header. */
@@ -503,6 +516,14 @@ struct xc_sr_context
                     {
                         /* HVM context blob. */
                         struct xc_sr_blob context;
+
+                        /* Bitmap of currently allocated PFNs during restore. */
+                        struct xg_sr_bitmap attempted_1g;
+                        struct xg_sr_bitmap attempted_2m;
+                        struct xg_sr_bitmap allocated_pfns;
+                        xen_pfn_t prev_populated_pfn;
+                        xen_pfn_t iteration_tracker_pfn;
+                        unsigned long iteration;
                     } restore;
                 };
             } hvm;
@@ -567,14 +588,6 @@ int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhd
 int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
                      struct xc_sr_record *rec);
 
-/*
- * This would ideally be private in restore.c, but is needed by
- * x86_pv_localise_page() if we receive pagetables frames ahead of the
- * contents of the frames they point at.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types);
-
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 0682616f16..d4657e8e57 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -71,63 +71,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Given a set of pfns, obtain memory from Xen to fill the physmap for the
- * unpopulated subset.  If types is NULL, no page type checking is performed
- * and all unpopulated pfns are populated.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types)
-{
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
-        *pfns = ctx->restore.m->pp_pfns;
-    unsigned int i, nr_pfns = 0;
-    int rc = -1;
-
-    for ( i = 0; i < count; ++i )
-    {
-        if ( (!types ||
-              (types && page_type_has_stream_data(types[i]) == true)) &&
-             !pfn_is_populated(ctx, original_pfns[i]) )
-        {
-            rc = pfn_set_populated(ctx, original_pfns[i]);
-            if ( rc )
-                goto err;
-            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
-            ++nr_pfns;
-        }
-    }
-
-    if ( nr_pfns )
-    {
-        rc = xc_domain_populate_physmap_exact(
-            xch, ctx->domid, nr_pfns, 0, 0, mfns);
-        if ( rc )
-        {
-            PERROR("Failed to populate physmap");
-            goto err;
-        }
-
-        for ( i = 0; i < nr_pfns; ++i )
-        {
-            if ( mfns[i] == INVALID_MFN )
-            {
-                ERROR("Populate physmap failed for pfn %u", i);
-                rc = -1;
-                goto err;
-            }
-
-            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
-        }
-    }
-
-    rc = 0;
-
- err:
-    return rc;
-}
-
 static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
     int rc = 0;
@@ -270,7 +213,7 @@ static int map_guest_pages(struct xc_sr_context *ctx,
     uint32_t i, p;
     int rc;
 
-    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    rc = ctx->restore.ops.populate_pfns(ctx, pages->count, m->pfns, m->types);
     if ( rc )
     {
         ERROR("Failed to populate pfns for batch of %u pages", pages->count);
@@ -1077,6 +1020,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         return -1;
     }
 
+    /* See xc_domain_getinfo */
+    ctx.restore.max_pages = ctx.dominfo.max_memkb >> (PAGE_SHIFT-10);
+    ctx.restore.tot_pages = ctx.dominfo.nr_pages;
     ctx.restore.p2m_size = nr_pfns;
     ctx.restore.ops = ctx.dominfo.hvm
         ? restore_ops_x86_hvm : restore_ops_x86_pv;
diff --git a/tools/libs/saverestore/restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
index 73f17b7fcb..79bcf07e04 100644
--- a/tools/libs/saverestore/restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -130,6 +130,25 @@ static int x86_hvm_localise_page(struct xc_sr_context *ctx,
     return 0;
 }
 
+static bool x86_hvm_expand_sp_bitmaps(struct xc_sr_context *ctx, unsigned long max_pfn)
+{
+    struct xg_sr_bitmap *bm;
+
+    bm = &ctx->x86.hvm.restore.attempted_1g;
+    if ( !xg_sr_bitmap_expand(bm, max_pfn >> SUPERPAGE_1GB_SHIFT) )
+        return false;
+
+    bm = &ctx->x86.hvm.restore.attempted_2m;
+    if ( !xg_sr_bitmap_expand(bm, max_pfn >> SUPERPAGE_2MB_SHIFT) )
+        return false;
+
+    bm = &ctx->x86.hvm.restore.allocated_pfns;
+    if ( !xg_sr_bitmap_expand(bm, max_pfn) )
+        return false;
+
+    return true;
+}
+
 /*
  * restore_ops function. Confirms the stream matches the domain.
  */
@@ -164,12 +183,20 @@ static int x86_hvm_setup(struct xc_sr_context *ctx)
 
     max_pfn = max(ctx->restore.p2m_size, ctx->dominfo.max_memkb >> (PAGE_SHIFT-10));
     if ( !xg_sr_bitmap_expand(&ctx->restore.populated_pfns, max_pfn) )
-    {
-        PERROR("Unable to allocate memory for populated_pfns bitmap");
-        return -1;
-    }
+        goto out;
+
+    if ( !x86_hvm_expand_sp_bitmaps(ctx, max_pfn) )
+        goto out;
+
+    /* No superpages in the first 2MB due to the VGA hole */
+    xg_sr_set_bit(0, &ctx->x86.hvm.restore.attempted_1g);
+    xg_sr_set_bit(0, &ctx->x86.hvm.restore.attempted_2m);
 
     return 0;
+
+out:
+    PERROR("Unable to allocate memory for pfn bitmaps");
+    return -1;
 }
 
 /*
@@ -250,6 +277,9 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
 static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
     xg_sr_bitmap_free(&ctx->restore.populated_pfns);
+    xg_sr_bitmap_free(&ctx->x86.hvm.restore.attempted_1g);
+    xg_sr_bitmap_free(&ctx->x86.hvm.restore.attempted_2m);
+    xg_sr_bitmap_free(&ctx->x86.hvm.restore.allocated_pfns);
     free(ctx->x86.hvm.restore.context.ptr);
 
     free(ctx->x86.restore.cpuid.ptr);
@@ -258,6 +288,336 @@ static int x86_hvm_cleanup(struct xc_sr_context *ctx)
     return 0;
 }
 
+/*
+ * Set a range of pfns as allocated
+ */
+static void pfn_set_long_allocated(struct xc_sr_context *ctx, xen_pfn_t base_pfn)
+{
+    xg_sr_set_long_bit(base_pfn, &ctx->x86.hvm.restore.allocated_pfns);
+}
+
+static void pfn_set_allocated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xg_sr_set_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns);
+}
+
+struct x86_hvm_sp {
+    xen_pfn_t pfn;
+    xen_pfn_t base_pfn;
+    unsigned long index;
+    unsigned long count;
+};
+
+/*
+ * Try to allocate a 1GB page for this pfn, avoiding over-allocation.
+ * If this succeeds, mark the contained 2MB ranges as already attempted.
+ */
+static bool x86_hvm_alloc_1g(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int i, done;
+    xen_pfn_t extent;
+
+    /* Only one attempt to avoid overlapping allocation */
+    if ( xg_sr_test_and_set_bit(sp->index, &ctx->x86.hvm.restore.attempted_1g) )
+        return false;
+
+    order = SUPERPAGE_1GB_SHIFT;
+    sp->count = SUPERPAGE_1GB_NR_PFNS;
+
+    /* Allocate only if there is room for another superpage */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages )
+        return false;
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 )
+        return false;
+
+    DPRINTF("1G %" PRI_xen_pfn "\n", sp->base_pfn);
+
+    /* Mark all 2MB pages as done to avoid overlapping allocation */
+    for ( i = 0; i < (SUPERPAGE_1GB_NR_PFNS/SUPERPAGE_2MB_NR_PFNS); i++ )
+        xg_sr_set_bit((sp->base_pfn >> SUPERPAGE_2MB_SHIFT) + i, &ctx->x86.hvm.restore.attempted_2m);
+
+    return true;
+}
+
+/* Allocate a 2MB page if x86_hvm_alloc_1g failed, avoiding over-allocation. */
+static bool x86_hvm_alloc_2m(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int done;
+    xen_pfn_t extent;
+
+    /* Only one attempt to avoid overlapping allocation */
+    if ( xg_sr_test_and_set_bit(sp->index, &ctx->x86.hvm.restore.attempted_2m) )
+        return false;
+
+    order = SUPERPAGE_2MB_SHIFT;
+    sp->count = SUPERPAGE_2MB_NR_PFNS;
+
+    /* Allocate only if there is room for another superpage */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages )
+        return false;
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 )
+        return false;
+
+    DPRINTF("2M %" PRI_xen_pfn "\n", sp->base_pfn);
+    return true;
+}
+
+/* Allocate a single page if x86_hvm_alloc_2m failed. */
+static bool x86_hvm_alloc_4k(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int done;
+    xen_pfn_t extent;
+
+    order = 0;
+    sp->count = 1UL;
+
+    /* Allocate only if there is room for another page */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages ) {
+        errno = E2BIG;
+        return false;
+    }
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 ) {
+        errno = ENOMEM;
+        return false;
+    }
+
+    DPRINTF("4K %" PRI_xen_pfn "\n", sp->base_pfn);
+    return true;
+}
+/*
+ * Attempt to allocate a superpage where the pfn resides.
+ */
+static int x86_hvm_allocate_pfn(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    bool success;
+    unsigned long idx_1g, idx_2m;
+    struct x86_hvm_sp sp = {
+        .pfn = pfn
+    };
+
+    if ( xg_sr_test_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns) )
+        return 0;
+
+    idx_1g = pfn >> SUPERPAGE_1GB_SHIFT;
+    idx_2m = pfn >> SUPERPAGE_2MB_SHIFT;
+
+    sp.index = idx_1g;
+    success = x86_hvm_alloc_1g(ctx, &sp);
+
+    if ( success == false ) {
+        sp.index = idx_2m;
+        success = x86_hvm_alloc_2m(ctx, &sp);
+    }
+
+    if ( success == false ) {
+        sp.index = 0;
+        success = x86_hvm_alloc_4k(ctx, &sp);
+    }
+
+    if ( success == false )
+        return -1;
+
+    do {
+        if ( sp.count >= BITS_PER_LONG ) {
+            sp.count -= BITS_PER_LONG;
+            ctx->restore.tot_pages += BITS_PER_LONG;
+            pfn_set_long_allocated(ctx, sp.base_pfn + sp.count);
+        } else {
+            sp.count--;
+            ctx->restore.tot_pages++;
+            pfn_set_allocated(ctx, sp.base_pfn + sp.count);
+        }
+    } while ( sp.count );
+
+    return 0;
+}
+
+/*
+ * Deallocate memory.
+ * There was likely an optimistic superpage allocation.
+ * This means more pages may have been allocated past gap_end.
+ * This range is not freed now. Incoming higher pfns will release it.
+ */
+static int x86_hvm_punch_hole(struct xc_sr_context *ctx,
+                               xen_pfn_t gap_start, xen_pfn_t gap_end)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t _pfn, pfn;
+    uint32_t domid, freed = 0;
+    int rc;
+
+    pfn = gap_start >> SUPERPAGE_1GB_SHIFT;
+    do
+    {
+        xg_sr_set_bit(pfn, &ctx->x86.hvm.restore.attempted_1g);
+    } while (++pfn <= gap_end >> SUPERPAGE_1GB_SHIFT);
+
+    pfn = gap_start >> SUPERPAGE_2MB_SHIFT;
+    do
+    {
+        xg_sr_set_bit(pfn, &ctx->x86.hvm.restore.attempted_2m);
+    } while (++pfn <= gap_end >> SUPERPAGE_2MB_SHIFT);
+
+    pfn = gap_start;
+
+    while ( pfn <= gap_end )
+    {
+        if ( xg_sr_test_and_clear_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns) )
+        {
+            domid = ctx->domid;
+            _pfn = pfn;
+            rc = xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &_pfn);
+            if ( rc )
+            {
+                PERROR("Failed to release pfn %" PRI_xen_pfn, pfn);
+                return -1;
+            }
+            ctx->restore.tot_pages--;
+            freed++;
+        }
+        pfn++;
+    }
+    if ( freed )
+        DPRINTF("freed %u between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
+                freed, gap_start, gap_end);
+    return 0;
+}
+
+static int x86_hvm_unpopulate_page(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xg_sr_clear_bit(pfn, &ctx->restore.populated_pfns);
+    return x86_hvm_punch_hole(ctx, pfn, pfn);
+}
+
+static int x86_hvm_populate_page(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xen_pfn_t gap_start, gap_end;
+    bool has_gap, first_iteration;
+    int rc;
+
+    /*
+     * Check for a gap between the previous populated pfn and this pfn.
+     * In case a gap exists, it is required to punch a hole to release memory,
+     * starting after the previous pfn and before this pfn.
+     *
+     * But: this can be done only during the first iteration, which is the
+     * only place where superpage allocations are attempted. All following
+     * iterations lack the info to properly maintain prev_populated_pfn.
+     */
+    has_gap = ctx->x86.hvm.restore.prev_populated_pfn + 1 < pfn;
+    first_iteration = ctx->x86.hvm.restore.iteration == 0;
+    if ( has_gap && first_iteration )
+    {
+        gap_start = ctx->x86.hvm.restore.prev_populated_pfn + 1;
+        gap_end = pfn - 1;
+
+        rc = x86_hvm_punch_hole(ctx, gap_start, gap_end);
+        if ( rc )
+            goto err;
+    }
+
+    rc = x86_hvm_allocate_pfn(ctx, pfn);
+    if ( rc )
+        goto err;
+    pfn_set_populated(ctx, pfn);
+    ctx->x86.hvm.restore.prev_populated_pfn = pfn;
+
+    rc = 0;
+err:
+    return rc;
+}
+
+/*
+ * Try to allocate superpages.
+ * This works without a memory map because the pfns arrive in increasing order.
+ * All pfn numbers and their types are submitted;
+ * only pfns with data will also have their page contents transmitted.
+ */
+static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                 const xen_pfn_t *original_pfns,
+                                 const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t pfn, min_pfn, max_pfn;
+    bool has_data, populated;
+    unsigned i;
+    int rc = 0;
+
+    min_pfn = count ? original_pfns[0] : 0;
+    max_pfn = count ? original_pfns[count - 1] : 0;
+    DPRINTF("batch of %u pfns between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
+            count, min_pfn, max_pfn);
+
+    if ( !x86_hvm_expand_sp_bitmaps(ctx, max_pfn) )
+    {
+        ERROR("Unable to allocate memory for pfn bitmaps");
+        return -1;
+    }
+
+    /*
+     * There is no indicator for a new iteration.
+     * Simulate it by checking if a lower pfn is coming in.
+     * In the end it matters only to know if this iteration is the first one.
+     */
+    if ( min_pfn < ctx->x86.hvm.restore.iteration_tracker_pfn )
+        ctx->x86.hvm.restore.iteration++;
+    ctx->x86.hvm.restore.iteration_tracker_pfn = min_pfn;
+
+    for ( i = 0; i < count; ++i )
+    {
+        pfn = original_pfns[i];
+
+        has_data = page_type_has_stream_data(types[i]);
+        populated = pfn_is_populated(ctx, pfn);
+
+        /*
+         * page has data, pfn populated: nothing to do
+         * page has data, pfn not populated: likely never seen before
+         * page has no data, pfn populated: likely ballooned out during migration
+         * page has no data, pfn not populated: nothing to do
+         */
+        if ( has_data && !populated )
+        {
+            rc = x86_hvm_populate_page(ctx, pfn);
+        } else if ( !has_data && populated )
+        {
+            rc = x86_hvm_unpopulate_page(ctx, pfn);
+        }
+        if ( rc )
+            break;
+    }
+
+    return rc;
+}
+
+
 struct xc_sr_restore_ops restore_ops_x86_hvm =
 {
     .pfn_is_valid    = x86_hvm_pfn_is_valid,
@@ -266,6 +626,7 @@ struct xc_sr_restore_ops restore_ops_x86_hvm =
     .set_page_type   = x86_hvm_set_page_type,
     .localise_page   = x86_hvm_localise_page,
     .setup           = x86_hvm_setup,
+    .populate_pfns   = x86_hvm_populate_pfns,
     .process_record  = x86_hvm_process_record,
     .static_data_complete = x86_static_data_complete,
     .stream_complete = x86_hvm_stream_complete,
diff --git a/tools/libs/saverestore/restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
index bdaa0c0e76..f4f0f49dee 100644
--- a/tools/libs/saverestore/restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -959,6 +959,64 @@ static void x86_pv_set_gfn(struct xc_sr_context *ctx, xen_pfn_t pfn,
         ((uint32_t *)ctx->x86.pv.p2m)[pfn] = mfn;
 }
 
+/*
+ * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+ * unpopulated subset.  If types is NULL, no page type checking is performed
+ * and all unpopulated pfns are populated.
+ */
+static int x86_pv_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                const xen_pfn_t *original_pfns,
+                                const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
+        *pfns = ctx->restore.m->pp_pfns;
+    unsigned int i, nr_pfns = 0;
+    int rc = -1;
+
+    for ( i = 0; i < count; ++i )
+    {
+        if ( (!types ||
+              (types && page_type_has_stream_data(types[i]) == true)) &&
+             !pfn_is_populated(ctx, original_pfns[i]) )
+        {
+            rc = pfn_set_populated(ctx, original_pfns[i]);
+            if ( rc )
+                goto err;
+            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
+            ++nr_pfns;
+        }
+    }
+
+    if ( nr_pfns )
+    {
+        rc = xc_domain_populate_physmap_exact(
+            xch, ctx->domid, nr_pfns, 0, 0, mfns);
+        if ( rc )
+        {
+            PERROR("Failed to populate physmap");
+            goto err;
+        }
+
+        for ( i = 0; i < nr_pfns; ++i )
+        {
+            if ( mfns[i] == INVALID_MFN )
+            {
+                ERROR("Populate physmap failed for pfn %u", i);
+                rc = -1;
+                goto err;
+            }
+
+            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
+        }
+    }
+
+    rc = 0;
+
+ err:
+    return rc;
+}
+
 /*
  * restore_ops function.  Convert pfns back to mfns in pagetables.  Possibly
  * needs to populate new frames if a PTE is found referring to a frame which
@@ -1003,7 +1061,7 @@ static int x86_pv_localise_page(struct xc_sr_context *ctx,
         }
     }
 
-    if ( to_populate && populate_pfns(ctx, to_populate, pfns, NULL) )
+    if ( to_populate && x86_pv_populate_pfns(ctx, to_populate, pfns, NULL) )
         return -1;
 
     for ( i = 0; i < (PAGE_SIZE / sizeof(uint64_t)); ++i )
@@ -1200,6 +1258,7 @@ struct xc_sr_restore_ops restore_ops_x86_pv =
     .set_gfn         = x86_pv_set_gfn,
     .localise_page   = x86_pv_localise_page,
     .setup           = x86_pv_setup,
+    .populate_pfns   = x86_pv_populate_pfns,
     .process_record  = x86_pv_process_record,
     .static_data_complete = x86_static_data_complete,
     .stream_complete = x86_pv_stream_complete,


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 18:50:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 18:50:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162310: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2131f7e73c9e9365613e323d65c7b9e5b910f56
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 18:50:32 +0000

flight 162310 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162310/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-xsm 20 guest-start/debian.repeat fail in 162279 pass in 162310
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail in 162279 pass in 162310
 test-amd64-amd64-xl-credit1  22 guest-start/debian.repeat  fail pass in 162279

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c2131f7e73c9e9365613e323d65c7b9e5b910f56
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  304 days
Failing since        152366  2020-08-01 20:49:34 Z  303 days  518 attempts
Testing same since   162279  2021-05-31 20:42:18 Z    0 days    2 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665618 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 18:54:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 18:54:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135340.251455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo9X1-0005nY-JB; Tue, 01 Jun 2021 18:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135340.251455; Tue, 01 Jun 2021 18:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lo9X1-0005nR-FK; Tue, 01 Jun 2021 18:54:27 +0000
Received: by outflank-mailman (input) for mailman id 135340;
 Tue, 01 Jun 2021 18:54:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo9X0-0005nH-BC; Tue, 01 Jun 2021 18:54:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo9X0-000417-7q; Tue, 01 Jun 2021 18:54:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lo9X0-0004S0-13; Tue, 01 Jun 2021 18:54:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lo9X0-0008VZ-0V; Tue, 01 Jun 2021 18:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w3wzP+tzU0RiF0XwpF/KZ3mKEIXDoalvEs1XReVyW+0=; b=KlUJe9zEgpvdnRQaPYt50DK/E8
	TFp+Pzj9if/lTEKoVXWmuvW2oJPrNM09wKVivGZhAkQV5v42cB3eGr+sD+hwbcXyNKggfg358bmAH
	1L3A8/LcW4Qf5UrE1lEoR079xR2YTJgIWqTH5nMDZfeX4NjzLZkaeqpZlFwpNWyLObt4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162327-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162327: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 18:54:26 +0000

flight 162327 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162327/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299

Last test of basis   162275  2021-05-31 11:01:38 Z    1 days
Testing same since   162327  2021-06-01 16:01:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   57f68dfd2d..5268b2dcf7  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 21:05:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 21:05:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135357.251469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loBZs-0001fc-Tm; Tue, 01 Jun 2021 21:05:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135357.251469; Tue, 01 Jun 2021 21:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loBZs-0001fU-Oe; Tue, 01 Jun 2021 21:05:32 +0000
Received: by outflank-mailman (input) for mailman id 135357;
 Tue, 01 Jun 2021 21:05:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loBZr-0001fJ-Kh; Tue, 01 Jun 2021 21:05:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loBZr-0006G9-Fn; Tue, 01 Jun 2021 21:05:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loBZr-0002DF-8p; Tue, 01 Jun 2021 21:05:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loBZr-0006Xz-8G; Tue, 01 Jun 2021 21:05:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QI7h0j+07m2aWQOpBLAhiXvfJu3fCGHdACATQmfrVXk=; b=ql+zY/ll6crjvjysUJLNylwepH
	KaUMuC0WhDxiBdm59R9GGea8CB2/Fb4OPoXx0KGUF7ePvJy3uLVAyybIfLtm6AsGuTa+AhVTatoy6
	CkEEQI5MLZFzK0qIVeRfwSi5GsCX3kVuiwucpjNVdvrHUkBf2bzwrjgImWtueZUKreXQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162326-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162326: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=fdf3666f01a2dd02d83a808f609b9c744a74c652
X-Osstest-Versions-That:
    ovmf=d3ff5dbe1dfc3420e5254d290500c0b6f6282d17
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 21:05:31 +0000

flight 162326 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162326/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 fdf3666f01a2dd02d83a808f609b9c744a74c652
baseline version:
 ovmf                 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17

Last test of basis   162288  2021-06-01 02:43:22 Z    0 days
Testing same since   162326  2021-06-01 11:41:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ashish Singhal <ashishsingha@nvidia.com>
  Marcin Wojtas <mw@semihalf.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d3ff5dbe1d..fdf3666f01  fdf3666f01a2dd02d83a808f609b9c744a74c652 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 01 21:44:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Jun 2021 21:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135368.251483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loCBG-0005za-TH; Tue, 01 Jun 2021 21:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135368.251483; Tue, 01 Jun 2021 21:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loCBG-0005zT-QF; Tue, 01 Jun 2021 21:44:10 +0000
Received: by outflank-mailman (input) for mailman id 135368;
 Tue, 01 Jun 2021 21:44:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loCBF-0005zJ-FV; Tue, 01 Jun 2021 21:44:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loCBF-0006sk-Ai; Tue, 01 Jun 2021 21:44:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loCBE-00041Y-W9; Tue, 01 Jun 2021 21:44:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loCBE-0006q8-Vd; Tue, 01 Jun 2021 21:44:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3/meIHFbKEt/D5yMfATEr0n+Wz2ySqqqn48snvjNXNE=; b=ImsWqvZ7HtVMUzVbM1kNpLi18l
	1bMoYQOJk6D2sgj3Ia4x9Ms1jwrYIwEZHaViDoVTbfsjZeCB2uBZOC6pEVOKyZnQseC3qS3RZPPVZ
	BMo27EKqla/KcZ0QhLr8pqxFB+qnSPRq0gDOqxGPTEuwOPEk6woYa5G1jhvpOsYzRClI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162325-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162325: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
X-Osstest-Versions-That:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Jun 2021 21:44:08 +0000

flight 162325 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162325/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162282
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162282
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162282
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162282
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162282
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162282
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162282
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162282
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299
baseline version:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299

Last test of basis   162325  2021-06-01 10:58:56 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 01:18:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 01:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135379.251497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loFW5-0000r4-BX; Wed, 02 Jun 2021 01:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135379.251497; Wed, 02 Jun 2021 01:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loFW5-0000qx-8F; Wed, 02 Jun 2021 01:17:53 +0000
Received: by outflank-mailman (input) for mailman id 135379;
 Wed, 02 Jun 2021 01:17:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Ii/=K4=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1loFW3-0000qr-RL
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 01:17:52 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51e712c3-3c63-412a-bd53-275cf04da908;
 Wed, 02 Jun 2021 01:17:50 +0000 (UTC)
Received: from pps.filterd (m0246632.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 1521CclW001584; Wed, 2 Jun 2021 01:17:11 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 38wu57r33j-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 02 Jun 2021 01:17:11 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 1521HAbR134930;
 Wed, 2 Jun 2021 01:17:10 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2101.outbound.protection.outlook.com [104.47.70.101])
 by userp3020.oracle.com with ESMTP id 38uycru71c-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 02 Jun 2021 01:17:10 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4398.namprd10.prod.outlook.com (2603:10b6:208:1dc::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Wed, 2 Jun
 2021 01:17:08 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 01:17:08 +0000
Received: from [10.74.101.162] (160.34.89.162) by
 SA0PR12CA0020.namprd12.prod.outlook.com (2603:10b6:806:6f::25) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Wed, 2 Jun 2021 01:17:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51e712c3-3c63-412a-bd53-275cf04da908
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=kOpUSB9gbHIsvVsRPdwLbfNGUezEr64tyQ3KnDDoIGQ=;
 b=S146tfUEUcrf/q00V1AAkylznrj2SZZOzHxkXqEey+SNG52EGRCk3hgka86LPlhAtsz3
 FVZseRP4hjvotpdIv5k+lWRWBJSQInjya398v/GGOKiOc2PjRT2Ozf9uZE4XcIGXmv1X
 /GrS5qR0FvoR2fm1ZmmnEgORSytKCTmd/P1pTkwFwleOar8zN0K5TsfisbRw578zWr7R
 7wdwM2BzIomrvE5cwMw0ZnSVifh5SWGa58bazHgx2GENb84cW84vPWITcr4rOhigN1fy
 DCIQWCNJlSgGhm3x7tmDn+4ob7IT4yJW8Z8kp4sET0qSShCYpNaaPT47EfqyJYib2WoK eg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KQg5yZ8o35qamd2VdcT95IT8P5aUghju7uPUzK0Q2iVKwhj98hlsPN2TynwSyi+bmrw/vu9AumO8IEpwlk76/wYW8DIvR9Dpac2lWasPktIUg7DkD+7u2aDA91djupKCmiSfiSzeVomAs+hWvH5+eOx346RQ+s6Owt5TVxab86f5UmI2cd40L3p5Zj+O82mJbkC8Hpu0yH4VQ5c0EV0ZlscKORNtwybyO81C4YnK26hekDyopYyPgrMWNcCd1/3yZNt4FOpoI3bChBCpZAA96yv0scOZZjOYGRpi/Xe8att9TvSOiY+uFmbgBuhQUhM6FK8B0ZjPKdKXqi4re71IcQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kOpUSB9gbHIsvVsRPdwLbfNGUezEr64tyQ3KnDDoIGQ=;
 b=WtwZWKCIOXO8z+RorrbrjBf8zydMIIxmLNfc2dfLd8RwAG8CFsinje0oymExulsWcYY3ZBnLEiQJf4+4gDFk9221ULoTARRoY3OQ6LjPaiUOyTemZxMXGpZSa3F8BqtNwZV/7K+Dek+nu2c/+chJGBDyM96WducAPcy+JN4ajz+e9yWVqoscOOuCXHKTz/kSV7Y+fcTLU4d7FGUz8ZBIMpTq7VwlI7KyEi3AowOYuHWKOEGpleHMRpjjCraxklf2n8jhOIsVvY/ZArw/rInyXMyDm8uSAoKwrVZcgEoC+9CWptu37Xvh2AQqqlzD0hRSpkta+2r0HjdSn4DLtmEDgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kOpUSB9gbHIsvVsRPdwLbfNGUezEr64tyQ3KnDDoIGQ=;
 b=cnqlTSwgx8vmeA/xPZfwHhGC6S6q3o/w4/Owx9P0lpDQtDBu0jBgiDUkivG9/+451FlPrYmpuzWC7CJ0kA/clJak+32a9hjlBjqPrJMkcB5dRmrJhl00h8kGpUAFPGXMjCwnvqa7b8hqmfmgtScdvkSmgCh8ojJSDQvGQRXEDpE=
Authentication-Results: microsoft.com; dkim=none (message not signed)
 header.d=none;microsoft.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for
 Isolation VM
To: Tianyu Lan <ltykernel@gmail.com>, kys@microsoft.com,
        haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
        decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
        bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
        dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
        akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
        rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
        krish.sadhukhan@oracle.com, saravanand@fb.com,
        Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
        m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
        sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
        xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
        jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
        linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
        linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
        vkuznets@redhat.com, thomas.lendacky@amd.com, brijesh.singh@amd.com,
        sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-10-ltykernel@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
Date: Tue, 1 Jun 2021 21:16:59 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210530150628.2063957-10-ltykernel@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.89.162]
X-ClientProxiedBy: SA0PR12CA0020.namprd12.prod.outlook.com
 (2603:10b6:806:6f::25) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 68c80a36-c37e-4ff1-e9f8-08d9256423d2
X-MS-TrafficTypeDiagnostic: MN2PR10MB4398:
X-MS-Exchange-Transport-Forked: True
X-MS-Oob-TLC-OOBClassifiers: OLM:317;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(376002)(396003)(346002)(366004)(39850400004)(31686004)(6666004)(66476007)(2616005)(4326008)(26005)(66556008)(53546011)(16526019)(44832011)(186003)(7416002)(83380400001)(5660300002)(956004)(8676002)(6486002)(921005)(558084003)(478600001)(66946007)(16576012)(2906002)(86362001)(36756003)(316002)(31696002)(6636002)(7406005)(8936002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68c80a36-c37e-4ff1-e9f8-08d9256423d2
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 01:17:08.1521
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nz/6C1w+Vw9+pJ+4DpRQMclwPbiPzUU8BN0PTxVGTBS/2YSczBYCPFddCeQaRGBQj5M1sqZiq47Pue6LPS2ms+9X9ln9G+20lX843bMVn5w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4398
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10002 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 spamscore=0 malwarescore=0
 phishscore=0 mlxlogscore=999 suspectscore=0 bulkscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106020007
X-Proofpoint-GUID: li0xUnJntxljGM1WLz_p6SOzZ7ywgiLv
X-Proofpoint-ORIG-GUID: li0xUnJntxljGM1WLz_p6SOzZ7ywgiLv


On 5/30/21 11:06 AM, Tianyu Lan wrote:
> @@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
>  EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
>  
>  IOMMU_INIT_FINISH(2,
> -		  NULL,
> +		  hyperv_swiotlb_detect,
>  		  pci_xen_swiotlb_init,
>  		  NULL);


Could you explain this change?


-boris





From xen-devel-bounces@lists.xenproject.org Wed Jun 02 02:12:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 02:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135389.251508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loGN4-0007Dc-6k; Wed, 02 Jun 2021 02:12:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135389.251508; Wed, 02 Jun 2021 02:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loGN4-0007DV-2Y; Wed, 02 Jun 2021 02:12:38 +0000
Received: by outflank-mailman (input) for mailman id 135389;
 Wed, 02 Jun 2021 02:12:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loGN3-0007DP-Oh
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 02:12:37 +0000
Received: from mail-oi1-x236.google.com (unknown [2607:f8b0:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af84e1cf-2d66-4811-909f-af108e70e65e;
 Wed, 02 Jun 2021 02:12:36 +0000 (UTC)
Received: by mail-oi1-x236.google.com with SMTP id b25so1354076oic.0
 for <xen-devel@lists.xenproject.org>; Tue, 01 Jun 2021 19:12:36 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id r124sm3862947oig.38.2021.06.01.19.12.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 01 Jun 2021 19:12:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af84e1cf-2d66-4811-909f-af108e70e65e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=qiax4dzC13hXpfyVrKthMe6RnI1vPDDz06KGbQaGmCo=;
        b=M6GLrjh2DRluOqmkGcz1SJvxYUo8AtrRlwfh3tnr6cV6RZY5eZV39OSj1d6HxIKIn9
         +ewZw7TcwWop0f7tdz1XhqWk60DEfPu/h7Nk4JwVuHGQtXleHbjDcuNX2bN4Yo/wBb4v
         Jcc55rtjwIqX4T1n+ph2AM7+mBzwZZuwW2Zx1GT3w5SfHF8mnT5FoKMEFHIbwRV7xCcf
         kW/7zh+xcdPD8NxyBq7RsYsSmznVX8FARZ2x6z1JVVYjcVIBaKMzvu75dQvM/S67J4qP
         EpkH3kmIwzyG4CY/WmEwCxOfqTwNr2aMB2V0OyBFyQOgxjhEZqSbVw8Lps5O0pCfqqA7
         aM/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=qiax4dzC13hXpfyVrKthMe6RnI1vPDDz06KGbQaGmCo=;
        b=aE/TnscQ6mziWC+Bz+Ap29EM2WG1IC7GOCjd6RmUlXOSE4GG2Ehcjqzj7YhYZk27eU
         yrxoEw/PrLtOpLCeges0aG21bnG1T7RiMD3GbNMQPJiwbdDVg1sYNdfaj1TYuPqjQB7/
         uCoBzhUhzbwutvbYJJVFP1wImDcJEe/u2TQ00K4DvLKf70kIZrDOw2mOmxUuwpmwMyxH
         WMU+Ze5IGH52nH6VzFXefkekcIf8iSlG7NEhhNRWVtyKOQfUSzT2Qqc6jbkSOTcA9Y/D
         6QKcSOPWV2kDyKFdQhWH5huRqgP8ZkURSA11H7SDotNRX5XG2WCRIuXkrbRwuaOR6odP
         GG0Q==
X-Gm-Message-State: AOAM531/DPN7W0SUB6FLs5Mu3jAlSIkoRJCdDolbgt+wKDsBrGLE+SAh
	LD/e2YlD3VrhZsGjjkvP9i/gziE6WUrqdQ==
X-Google-Smtp-Source: ABdhPJzV8uhvTXDUsZMazleK7Leik3kBYYOwD1DuzHPuBKdmx4SHtccMtSrPc8GiLys1IjNKRREIRQ==
X-Received: by 2002:a54:438e:: with SMTP id u14mr714911oiv.126.1622599956120;
        Tue, 01 Jun 2021 19:12:36 -0700 (PDT)
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <0f7192ec-5e50-c4b9-774c-febcade90288@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <2ba8a930-2795-0fc5-eaab-15ddc5a3c67f@gmail.com>
Date: Tue, 1 Jun 2021 20:12:42 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <0f7192ec-5e50-c4b9-774c-febcade90288@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US



On 6/1/21 12:11 AM, Jan Beulich wrote:
> On 24.05.2021 16:34, Connor Davis wrote:
>> Add arch-specific makefiles and configs needed to build for
>> riscv. Also add a minimal head.S that is a simple infinite loop.
>> head.o can be built with
>>
>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>
>> No other TARGET is supported at the moment.
> Just to confirm - an actual (even if not functioning) xen binary can't
> be linked yet? I ask because that would be the prereq for me to set up
> routine (cross) build testing, just like I do for Arm.

That's right. The consensus was that targets and files should be added
incrementally, so I stopped at head.o for now.

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 02:13:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 02:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135394.251519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loGNv-0007s5-GL; Wed, 02 Jun 2021 02:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135394.251519; Wed, 02 Jun 2021 02:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loGNv-0007ry-DK; Wed, 02 Jun 2021 02:13:31 +0000
Received: by outflank-mailman (input) for mailman id 135394;
 Wed, 02 Jun 2021 02:13:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loGNt-0007rm-UZ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 02:13:29 +0000
Received: from mail-ot1-x32e.google.com (unknown [2607:f8b0:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d5fc445-9341-4c48-aa70-13bc70078d96;
 Wed, 02 Jun 2021 02:13:29 +0000 (UTC)
Received: by mail-ot1-x32e.google.com with SMTP id
 h19-20020a9d6f930000b02903c0c4560e99so1184506otq.1
 for <xen-devel@lists.xenproject.org>; Tue, 01 Jun 2021 19:13:29 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id c13sm4172605oto.18.2021.06.01.19.13.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 01 Jun 2021 19:13:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d5fc445-9341-4c48-aa70-13bc70078d96
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=TqB9mFSMTqI1XSKPJCVgfXa0Mx+awWAIK9TXEgCzqhI=;
        b=bhy0P3vlljDZxlrsBqKA/r2209pTsAkEuGpLS/LTF1ljOzPDqIn08cH5SksdOJzGO2
         gQSE2B/ymQGvpgKRYMObs4uXkY9XrFfDX1gNh8sCwMZbwZ0k5CHovSYSp4cZISfkSp2M
         NELj/WBCTdZ398F91bQgRhLV9pBnaOh43jApNwNoszGPH6bsh0u3KPaYM5HFYuta0LP2
         SbeCSPF3d0/wkk66YB5Y1qs7t1Cq6INPPxlKSAuizBJzDuxadO4gfHKOQxA/EeDL5aWJ
         rM5eK6MijGugoRZnIf2xfjVawf5V0JC+zwXdcUOlCGbDLhcvgZWl16DyeiaZwKxF09RU
         wzAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=TqB9mFSMTqI1XSKPJCVgfXa0Mx+awWAIK9TXEgCzqhI=;
        b=NGX9CSgle5fzCpNDor3TYBkmLMNctGQgmvU2vgCQSGhFOUMm9liF97yJGLcZTqSUFw
         Kj/ZU4IMHchpTbMBGrM8vYcxKBtIuNIAMgBKFY4YheLuqFS/XUAHk5bDGPpiGf6xiyTZ
         5VfzwVvdys3cqrt00N5dG2c+X9VS3DWZnTTKM5Il0C7CjoXUHrTyghwT7pq+86grODAX
         yapSlHOdIH2JywoW8Olz7sphDc7W/FpE9odIqaWYWx/S9pnPzlgbPCx14fX9Oq62I9hQ
         2+uVjPb+hc9cJG1Ga8nPJ70/TXS3DO+L/4KsKtTGdfZ4YOeEr5kmjRmELATTgNmiY8y+
         8sSw==
X-Gm-Message-State: AOAM531c3VYSMqQoDpSNHmYSZIcZakKPVUY9rUmPYjzgMBXZI9tPHgau
	Lc5Y0uXl0zPrsUaxS+Vgjx6Jexgz25/eFQ==
X-Google-Smtp-Source: ABdhPJzcL6mcBvH0fViT7+12H1U2mbRfhpqFqgufbxQRBk63YFe34gSsHfeYtFQ7ehRIaEf35ndbRw==
X-Received: by 2002:a05:6830:4110:: with SMTP id w16mr23309299ott.372.1622600008303;
        Tue, 01 Jun 2021 19:13:28 -0700 (PDT)
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
 <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
 <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
 <0eab888a-4c48-497d-6c43-b749389e52dd@suse.com>
 <e95a2ae5-16a7-aa2d-67c9-b8a0d6f0c2e5@xen.org>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <1a73a6d0-af96-b009-9c74-651eecc61109@gmail.com>
Date: Tue, 1 Jun 2021 20:13:35 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <e95a2ae5-16a7-aa2d-67c9-b8a0d6f0c2e5@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US



On 6/1/21 2:40 AM, Julien Grall wrote:
> Hi,
>
> On 01/06/2021 07:03, Jan Beulich wrote:
>> On 01.06.2021 04:26, Connor Davis wrote:
>>>
>>>
>>> On 5/25/21 12:13 PM, Bob Eshleman wrote:
>>>> On 5/25/21 1:48 AM, Jan Beulich wrote:
>>>>> On 24.05.2021 16:34, Connor Davis wrote:
>>>>>> Add arch-specific makefiles and configs needed to build for
>>>>>> riscv. Also add a minimal head.S that is a simple infinite loop.
>>>>>> head.o can be built with
>>>>>>
>>>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>>>>>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>>>>>
>>>>>> No other TARGET is supported at the moment.
>>>>>>
>>>>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>>>>> ---
>>>>>>    config/riscv.mk                         |  4 +++
>>>>>>    xen/Makefile                            |  8 +++--
>>>>>>    xen/arch/riscv/Kconfig                  | 47 
>>>>>> +++++++++++++++++++++++++
>>>>>>    xen/arch/riscv/Kconfig.debug            |  0
>>>>>>    xen/arch/riscv/Makefile                 |  0
>>>>>>    xen/arch/riscv/Rules.mk                 |  0
>>>>>>    xen/arch/riscv/arch.mk                  | 14 ++++++++
>>>>>>    xen/arch/riscv/asm-offsets.c            |  0
>>>>>>    xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>>>>>    xen/arch/riscv/head.S                   |  6 ++++
>>>>>>    xen/include/asm-riscv/config.h          | 47 
>>>>>> +++++++++++++++++++++++++
>>>>>>    11 files changed, 137 insertions(+), 2 deletions(-)
>>>>>>    create mode 100644 config/riscv.mk
>>>>>>    create mode 100644 xen/arch/riscv/Kconfig
>>>>>>    create mode 100644 xen/arch/riscv/Kconfig.debug
>>>>>>    create mode 100644 xen/arch/riscv/Makefile
>>>>>>    create mode 100644 xen/arch/riscv/Rules.mk
>>>>>>    create mode 100644 xen/arch/riscv/arch.mk
>>>>>>    create mode 100644 xen/arch/riscv/asm-offsets.c
>>>>>>    create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>>>>>    create mode 100644 xen/arch/riscv/head.S
>>>>>>    create mode 100644 xen/include/asm-riscv/config.h
>>>>> I think this wants to be accompanied by an addition to ./MAINTAINERS
>>>>> right away, such that future RISC-V patches can be acked by the
>>>>> respective designated maintainers, rather than falling under "THE 
>>>>> REST".
>>>>> Question is whether you / we have settled yet who's to become arch
>>>>> maintainer there.
>>>>>
>>>>> Jan
>>>>>
>>>> I'd like to volunteer myself for this, as I'm slated to continue
>>>> with the port indefinitely and would at least like to review
>>>> patches.  If Connor has the time, I think it makes sense for him
>>>> to be listed there too.
>>>>
>>>> Until we have others (it's just us two right now), it'll be a
>>>> lot of us reviewing each other's arch-specific work (in addition to
>>>> reviewers elsewhere in the Xen project, of course).
>>>>
>>>> -Bobby
>>>>
>>> Jan: can you list Bobby as the maintainer on commit? You can list me as
>>> well,
>>> but I can't guarantee a time commitment unlike Bobby.
>>
>> Well, actually I did expect a v5 submission with whatever entry you
>> both deem suitable. Otoh I now realize that patch 1 had my extra
>> request addressed (by myself) and hence with this one needing just
>> that entry, I could indeed add that addition myself while committing.
>> Let me move the mails back to the to-be-committed folder...
>
> At the risk of being picky, I don't think this should be done on 
> commit. Instead, a formal patch should be sent with both maintainers + 
> one of the committers acking (or signed-off-by) the proposal.

No problem, I will send a v5.

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 03:14:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 03:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135403.251530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loHKR-0005bQ-5P; Wed, 02 Jun 2021 03:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135403.251530; Wed, 02 Jun 2021 03:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loHKR-0005bJ-2L; Wed, 02 Jun 2021 03:13:59 +0000
Received: by outflank-mailman (input) for mailman id 135403;
 Wed, 02 Jun 2021 03:13:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loHKP-0005b9-QG; Wed, 02 Jun 2021 03:13:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loHKP-0004A2-GA; Wed, 02 Jun 2021 03:13:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loHKP-0005Cw-3s; Wed, 02 Jun 2021 03:13:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loHKP-0005Xk-38; Wed, 02 Jun 2021 03:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I+eXzf5owJgjM5bOHQTikfNHt2Zay5GuXFszJYBDL8Q=; b=gNy75U/IE0eyX1VoQcMzBW//bc
	6LkjJTesgG2mw/3+jaJI/jOU3ScoiUHXJY6IIInQtj45XVlw51PEGH7WIXm2VwlxUdYx4Qg2DtLMA
	RxAQkEeFn3kemKP7ajcWXRY/FmMBBxae3+sGWAId15aMXq2QkyFRKQfWCqGXROPbSnLc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162328: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=52848929b70dcf92a68aedcfd90207be81ba3274
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 03:13:57 +0000

flight 162328 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162328/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                52848929b70dcf92a68aedcfd90207be81ba3274
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  285 days
Failing since        152659  2020-08-21 14:07:39 Z  284 days  526 attempts
Testing same since   162270  2021-05-31 03:30:37 Z    1 days    4 attempts

------------------------------------------------------------
519 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164273 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 03:32:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 03:32:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135413.251544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loHcC-0007zW-TV; Wed, 02 Jun 2021 03:32:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135413.251544; Wed, 02 Jun 2021 03:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loHcC-0007zP-PP; Wed, 02 Jun 2021 03:32:20 +0000
Received: by outflank-mailman (input) for mailman id 135413;
 Wed, 02 Jun 2021 03:32:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMiO=K4=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1loHcB-0007zJ-5e
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 03:32:19 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17626843-21af-4aeb-8a31-52243b95928c;
 Wed, 02 Jun 2021 03:32:17 +0000 (UTC)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id C5FD65C0182;
 Tue,  1 Jun 2021 23:32:16 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Tue, 01 Jun 2021 23:32:16 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 1 Jun 2021 23:32:15 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17626843-21af-4aeb-8a31-52243b95928c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-transfer-encoding:content-type
	:date:from:message-id:mime-version:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=JAel1Y
	62j7V265HvG19jOnr5m/PUQDTrK9g934SuVcg=; b=QkXfu7vMoI4hol3jr7cBzk
	+c5KS4BFTB6D+9JN2V1cxVcz3bOnKy/giPr6e3wWwtqLTQ+e+VS0JKy91X/lNZy+
	FObXdAMxl8ahldESq3MEcpaxrajjXmAMXqXz3vJ73rKQvmWnR5dTiT52hnXd9OCa
	TfPajsfmeFEvU4xCakhJoehi8ECsV28EtFZmN11F5ZZgI3p3adGXTp8niAX7mQGD
	2Rr4tgksioZK/dNi2s13wXF7r/SkQ4b9Thf8NDAnhUCKeJJNJrEF8gPplhTfQsne
	kjk5Sl0babZ4KhtOGu83yjRRc4nAn4w/1+ISa+5Nhr24PYcyaPpnoTULZfXDWH2w
	==
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] autoconf: fix handling absolute $PYTHON path
Date: Wed,  2 Jun 2021 05:32:06 +0200
Message-Id: <20210602033206.720860-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.26.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Organization: Invisible Things Lab
Content-Transfer-Encoding: 8bit

Don't strip the full path from the $PYTHON variable. This is especially
relevant if it points outside of $PATH, which is the case for RPM builds
on CentOS 8 (the %{python3} macro points at
/usr/libexec/platform-python).

For the same reason, also adjust the python-config handling:
AC_PATH_PROG doesn't work on an already absolute path, so make that call
conditional.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 m4/python_devel.m4 | 6 +++++-
 tools/configure.ac | 1 -
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/m4/python_devel.m4 b/m4/python_devel.m4
index bbf1e0354b2b..676489b8e978 100644
--- a/m4/python_devel.m4
+++ b/m4/python_devel.m4
@@ -2,7 +2,11 @@ AC_DEFUN([AX_CHECK_PYTHON_DEVEL], [
 ac_previous_cppflags=$CPPFLAGS
 ac_previous_ldflags=$LDFLAGS
 ac_previous_libs=$LIBS
-AC_PATH_PROG([pyconfig], [$PYTHON-config], [no])
+AS_IF([echo "$PYTHON" | grep -q "^/"], [
+    pyconfig="$PYTHON-config"
+], [
+    AC_PATH_PROG([pyconfig], [$PYTHON-config], [no])
+])
 AS_IF([test x"$pyconfig" = x"no"], [
     dnl For those that don't have python-config
     CPPFLAGS="$CFLAGS `$PYTHON -c 'import distutils.sysconfig; \
diff --git a/tools/configure.ac b/tools/configure.ac
index 6414fcbb445e..ebf1265643b3 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -368,7 +368,6 @@ AS_IF([test -z "$PYTHON"], [AC_CHECK_PROGS([PYTHON], [python3 python python2], e
 AS_IF([test "$PYTHON" = "err"], [AC_MSG_ERROR([No python interpreter found])])
 AS_IF([echo "$PYTHON" | grep -q "^/"], [], [AC_PATH_PROG([PYTHON], [$PYTHON])])
 PYTHONPATH=$PYTHON
-PYTHON=`basename $PYTHONPATH`
 
 AX_PATH_PROG_OR_FAIL([PYTHONPATH], [$PYTHON])
 AX_CHECK_PYTHON_VERSION([2], [6])
-- 
2.26.3
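The core of the change above is a plain-shell absolute-path test (`echo "$PYTHON" | grep -q "^/"`) that decides whether a PATH search is still meaningful. A minimal sketch of that logic outside of m4, using a hypothetical helper name `is_absolute` (not part of the patch), might look like:

```shell
# Sketch only: the same "is $PYTHON an absolute path?" decision the
# AS_IF in the patch makes, expressed as plain POSIX shell.
is_absolute() {
    # An absolute path starts with "/"; mirrors the grep -q "^/" test.
    echo "$1" | grep -q "^/"
}

PYTHON=/usr/libexec/platform-python

if is_absolute "$PYTHON"; then
    # Absolute interpreter path: derive python-config directly; a PATH
    # search (AC_PATH_PROG) would fail to find it outside $PATH.
    pyconfig="$PYTHON-config"
else
    # Bare command name: a PATH search for "$PYTHON-config" is still
    # the right thing to do (what AC_PATH_PROG does in the patch).
    pyconfig=$(command -v "$PYTHON-config" || echo no)
fi

echo "$pyconfig"   # prints /usr/libexec/platform-python-config
```

This is why the patch also drops the `PYTHON=\`basename $PYTHONPATH\`` line in tools/configure.ac: once an absolute path is allowed to survive, reducing it to a basename would reintroduce the very PATH dependence the change avoids.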



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 05:56:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 05:56:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135421.251555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loJrE-0005AC-Vl; Wed, 02 Jun 2021 05:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135421.251555; Wed, 02 Jun 2021 05:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loJrE-0005A5-SD; Wed, 02 Jun 2021 05:56:00 +0000
Received: by outflank-mailman (input) for mailman id 135421;
 Wed, 02 Jun 2021 05:55:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loJrD-00059v-Pi; Wed, 02 Jun 2021 05:55:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loJrD-0007Jo-J6; Wed, 02 Jun 2021 05:55:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loJrD-0004pb-8u; Wed, 02 Jun 2021 05:55:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loJrD-00021o-89; Wed, 02 Jun 2021 05:55:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R5s9AHfS+ijT+KTrzwIFAZgogotEzYPC7sckxxMRuwc=; b=ZOEtqps8pXV4l6XIITEXLxFk5/
	zmO5/47wsCqmsneGP9zXw/Ju0Imn1KNUdKrIbegMugQpgruKF+uKUDkBPXvFpk+4f1cxieI2kUpvB
	zj01y+mHj8o0mIIjv/V9fniwmKC2WmrvOFnqUo4u2mk8GgSEnqRqnhAozOX3TemzeYc4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162329-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162329: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2131f7e73c9e9365613e323d65c7b9e5b910f56
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 05:55:59 +0000

flight 162329 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162329/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit1 22 guest-start/debian.repeat fail in 162310 pass in 162329
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install    fail pass in 162310

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop  fail in 162310 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c2131f7e73c9e9365613e323d65c7b9e5b910f56
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  305 days
Failing since        152366  2020-08-01 20:49:34 Z  304 days  519 attempts
Testing same since   162279  2021-05-31 20:42:18 Z    1 days    3 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665618 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:09:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135429.251569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loK3s-0006nR-7Y; Wed, 02 Jun 2021 06:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135429.251569; Wed, 02 Jun 2021 06:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loK3s-0006nK-2a; Wed, 02 Jun 2021 06:09:04 +0000
Received: by outflank-mailman (input) for mailman id 135429;
 Wed, 02 Jun 2021 06:09:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loK3r-0006nE-Bs
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:09:03 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88582303-67da-4b78-849e-d6c881a4b1a9;
 Wed, 02 Jun 2021 06:09:02 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1A0891FD2D;
 Wed,  2 Jun 2021 06:09:01 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id DC6F4118DD;
 Wed,  2 Jun 2021 06:09:00 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 1SkRNHwgt2CIOgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:09:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88582303-67da-4b78-849e-d6c881a4b1a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614141; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Rx8GaGfrnRnz9Xrxr+CQIosD80qzF1Kj58PFdM0v+e0=;
	b=FRb9Bt4qjxCXSJ3QpXeredqDG3mSEYy174iu5VY8Br4sAJw4sXxgEXOlPAvmeqgtzT/S46
	OMNGbe+YDx3LhD9hEOmU30dkachoLQ12emZN6X1W6yH1y0GL1Iggf4r37ybLs5JL3f1RI3
	UufOotwWxkc7dEsT1/9epUX/eHL/zR0=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614141; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Rx8GaGfrnRnz9Xrxr+CQIosD80qzF1Kj58PFdM0v+e0=;
	b=FRb9Bt4qjxCXSJ3QpXeredqDG3mSEYy174iu5VY8Br4sAJw4sXxgEXOlPAvmeqgtzT/S46
	OMNGbe+YDx3LhD9hEOmU30dkachoLQ12emZN6X1W6yH1y0GL1Iggf4r37ybLs5JL3f1RI3
	UufOotwWxkc7dEsT1/9epUX/eHL/zR0=
Subject: Re: [PATCH v20210601 02/38] xl: fix description of migrate --debug
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
Date: Wed, 2 Jun 2021 08:09:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-3-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="inhqTRmAz0fy8YyVm73IPKk4lPg2Davhu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--inhqTRmAz0fy8YyVm73IPKk4lPg2Davhu
Content-Type: multipart/mixed; boundary="h9tSNc98vmWAGLeG1r05yqVfivLDZnJpl";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
Subject: Re: [PATCH v20210601 02/38] xl: fix description of migrate --debug
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-3-olaf@aepfle.de>

--h9tSNc98vmWAGLeG1r05yqVfivLDZnJpl
Content-Type: multipart/mixed;
 boundary="------------723CCE4B6876ED99D20ABC80"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------723CCE4B6876ED99D20ABC80
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> xl migrate --debug used to track every pfn in every batch of pages.
> But these times are gone. Adjust the help text to tell what --debug
> is supposed to do today.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   docs/man/xl.1.pod.in          | 4 +++-
>   tools/libs/guest/xg_sr_save.c | 2 +-
>   tools/xl/xl_cmdtable.c        | 2 +-
>   3 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
> index e2176bd696..ed3f4dee1e 100644
> --- a/docs/man/xl.1.pod.in
> +++ b/docs/man/xl.1.pod.in
> @@ -481,7 +481,9 @@ domain.
>
>   =item B<--debug>
>
> -Display huge (!) amount of debug information during the migration process.
> +Verify transferred domU page data. All memory will be transferred one more
> +time to the destination host while the domU is paused, and compared with
> +the result of the initial transfer while the domU was still running.

Shouldn't you adapt (or remove?) this paragraph with patch 37?

>
>   =item B<-p>
>
> diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_sav=
e.c
> index 2ba7c3200c..51542a98c8 100644
> --- a/tools/libs/guest/xg_sr_save.c
> +++ b/tools/libs/guest/xg_sr_save.c
> @@ -752,7 +752,7 @@ static int send_domain_memory_live(struct xc_sr_con=
text *ctx)
>       if ( rc )
>           goto out;
>  =20
> -    if ( ctx->save.debug && ctx->stream_type !=3D XC_STREAM_PLAIN )
> +    if ( ctx->save.debug )

This is not a documentation change, IMO. You should either mention this
modification in the commit message, or put it into a separate patch.


Juergen

--------------723CCE4B6876ED99D20ABC80
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------723CCE4B6876ED99D20ABC80--

--h9tSNc98vmWAGLeG1r05yqVfivLDZnJpl--

--inhqTRmAz0fy8YyVm73IPKk4lPg2Davhu
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3IHwFAwAAAAAACgkQsN6d1ii/Ey9k
3wf+NtKUD/x+I/jnzlxyBboO2sTl5ciy5bIdHwIrAN/q715ydtrVXVphFp2TNr8kuWgbyY++V04l
LjNp5NvcOOR6htU9OLGJsdCcnJIlJw/BlPRGM3Tw9npLdU5MFCNktNXmXryxLlVGUlWLjccNNeku
uZvpCpjgIDt/lpoPT6VsPlL4081Z/U81Lg9LDLipj3HB8vpCFHdyP4JvNoFlXNCdblq4pavYTQB/
WJ25FwaCocZBxUCmtPjN9HxZt5sPJmCKB9GAVALRMBlwYKIcO3m6qQWL5/J4M+QWhQKAc4sRWKbo
Z4992xMVRBFxTe1SeDkUTFxu24f4agmm3E1BwOIeKg==
=kEXt
-----END PGP SIGNATURE-----

--inhqTRmAz0fy8YyVm73IPKk4lPg2Davhu--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:10:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:10:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135435.251580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loK5B-0008Ae-N5; Wed, 02 Jun 2021 06:10:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135435.251580; Wed, 02 Jun 2021 06:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loK5B-0008AX-JJ; Wed, 02 Jun 2021 06:10:25 +0000
Received: by outflank-mailman (input) for mailman id 135435;
 Wed, 02 Jun 2021 06:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loK5A-0008AI-40
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:10:24 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39da23e5-102d-470a-a163-0189e836ba19;
 Wed, 02 Jun 2021 06:10:23 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A5D391FD2D;
 Wed,  2 Jun 2021 06:10:22 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 858E8118DD;
 Wed,  2 Jun 2021 06:10:22 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 48X8Hs4gt2BwOwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:10:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39da23e5-102d-470a-a163-0189e836ba19
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NmJWzqQMnAH5+c0/R9dSsDGOgLMfVoccXuDHqR6fGOU=;
	b=sf9yTyxcNU7CVwaqNoMzECRxjwXPX3rRxEGUvZ+7eu8WnoxHMntklp8jRawGB1P+3PeCmy
	1BirPbOo1JgkoqBBvhFKv+qSrCXesr2khptmPuPRQZj4jepLKLiFM2j1z1V3SKhw+8OlVb
	FLFguqpmt28mWbl1NvkuL0xVJIMGybA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NmJWzqQMnAH5+c0/R9dSsDGOgLMfVoccXuDHqR6fGOU=;
	b=sf9yTyxcNU7CVwaqNoMzECRxjwXPX3rRxEGUvZ+7eu8WnoxHMntklp8jRawGB1P+3PeCmy
	1BirPbOo1JgkoqBBvhFKv+qSrCXesr2khptmPuPRQZj4jepLKLiFM2j1z1V3SKhw+8OlVb
	FLFguqpmt28mWbl1NvkuL0xVJIMGybA=
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
References: <20210601161118.18986-1-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
Date: Wed, 2 Jun 2021 08:10:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="z1YSA1QcpIdGcEckdImNMXIAb0Eyekwpc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--z1YSA1QcpIdGcEckdImNMXIAb0Eyekwpc
Content-Type: multipart/mixed; boundary="OqTZr5gVGcWUmsdpsgz7Qz9mR9esOrDWa";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Message-ID: <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
References: <20210601161118.18986-1-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-1-olaf@aepfle.de>

--OqTZr5gVGcWUmsdpsgz7Qz9mR9esOrDWa
Content-Type: multipart/mixed;
 boundary="------------1CB4EEF0A830B4109F0742C9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1CB4EEF0A830B4109F0742C9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Various unreviewed changes, rebase to 57f68dfd2d.

Would it be possible to split this into multiple independent
patches/series?


Juergen

--------------1CB4EEF0A830B4109F0742C9--

--OqTZr5gVGcWUmsdpsgz7Qz9mR9esOrDWa--

--z1YSA1QcpIdGcEckdImNMXIAb0Eyekwpc
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3IM4FAwAAAAAACgkQsN6d1ii/Ey8m
fwf9Ft+E5O8p0xJi6+p5zWqm8JRNKGCvvP+qP34r1eCbbeTDreMCWC06ttPJB6S0HA3137V/qWph
ajk/atpQEFZON6mA41LFrooziumHf7UJ5SdDFSviNs3kh9YoBN3P+KLDGSmR1Z9t80lCXV2n3siE
vI+9WfG5nevcNpkX5dIZDpaxX3AN82+MkBcYgpNfad3nS6HN7pNi9EebFXyf1CAHUBgSwmMiw2c3
xYuSZcYvN9b5o4xJpUZi0hkVjfPifaCsuThXkVKoh+ajwfn7+TPS5oA9K6jASERXYbpu6nj1/hqg
9NQVEqL2QQSA4BMRjOMeAmA9U8+exrnFt3ygSJcA0A==
=VtS9
-----END PGP SIGNATURE-----

--z1YSA1QcpIdGcEckdImNMXIAb0Eyekwpc--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:20:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135445.251591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKET-0000ak-LO; Wed, 02 Jun 2021 06:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135445.251591; Wed, 02 Jun 2021 06:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKET-0000ad-Hu; Wed, 02 Jun 2021 06:20:01 +0000
Received: by outflank-mailman (input) for mailman id 135445;
 Wed, 02 Jun 2021 06:20:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKES-0000aX-0r
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:20:00 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a07cae91-12df-49d9-b1e1-9af4394b61ba;
 Wed, 02 Jun 2021 06:19:59 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3B2C821919;
 Wed,  2 Jun 2021 06:19:58 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 10827118DD;
 Wed,  2 Jun 2021 06:19:58 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id jprSAg4jt2BrPwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a07cae91-12df-49d9-b1e1-9af4394b61ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614798; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UOKzZZM9iU38AIsLSlj43HUq4PROSnlYG9my9vpYe3o=;
	b=dF8TjfRLF4prMmht92elvtsVIS/gVXocArH+2I/aKSDpZl3WAZpsewduNb0Lrq661cO3B/
	+WDP4mS4MezxwI/co2pAZ3juoMhCP/hMvc8cYxvVcLdo7nluqRNWq1dKRsJCrkCQ8eMhlQ
	NADqdG16C5W2LJ9iImSwuZxHOXCm1sQ=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622614798; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UOKzZZM9iU38AIsLSlj43HUq4PROSnlYG9my9vpYe3o=;
	b=dF8TjfRLF4prMmht92elvtsVIS/gVXocArH+2I/aKSDpZl3WAZpsewduNb0Lrq661cO3B/
	+WDP4mS4MezxwI/co2pAZ3juoMhCP/hMvc8cYxvVcLdo7nluqRNWq1dKRsJCrkCQ8eMhlQ
	NADqdG16C5W2LJ9iImSwuZxHOXCm1sQ=
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-2-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v20210601 01/38] tools: add API to work with several bits
 at once
Message-ID: <33fa0a7f-360e-b5fc-ee0f-ad2ff98496a1@suse.com>
Date: Wed, 2 Jun 2021 08:19:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-2-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xcmw2V1XBphYYyPiNu60stHLrOUtOC2lC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xcmw2V1XBphYYyPiNu60stHLrOUtOC2lC
Content-Type: multipart/mixed; boundary="9TlvDplAWW3rMpk1DgyctGLmUyzou1tnA";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <33fa0a7f-360e-b5fc-ee0f-ad2ff98496a1@suse.com>
Subject: Re: [PATCH v20210601 01/38] tools: add API to work with several bits
 at once
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-2-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-2-olaf@aepfle.de>

--9TlvDplAWW3rMpk1DgyctGLmUyzou1tnA
Content-Type: multipart/mixed;
 boundary="------------F254D6538E081A658275154C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F254D6538E081A658275154C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Introduce new API to test if a fixed number of bits is clear or set,
> and clear or set them all at once.

More precisely: the new functions check whether BITS_PER_LONG bits are
set or clear.

> The caller has to make sure the input bitnumber is a multiply of BITS_PER_LONG.

s/multiply/multiple/

>
> This API avoids the loop over each bit in a known range just to see
> if all of them are either clear or set.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/ctrl/xc_bitops.h | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
>
> diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
> index f0bac4a071..92f38872fb 100644
> --- a/tools/libs/ctrl/xc_bitops.h
> +++ b/tools/libs/ctrl/xc_bitops.h
> @@ -77,4 +77,29 @@ static inline void bitmap_or(void *_dst, const void *_other,
>           dst[i] |= other[i];
>   }
>
> +static inline int test_bit_long_set(unsigned long nr_base, const void *_addr)

Make return type bool (here and below)?

> +{
> +    const unsigned long *addr = _addr;
> +    unsigned long val = addr[nr_base / BITS_PER_LONG];

Add a blank line here (same below).

> +    return val == ~0;
> +}
> +
> +static inline int test_bit_long_clear(unsigned long nr_base, const void *_addr)
> +{
> +    const unsigned long *addr = _addr;
> +    unsigned long val = addr[nr_base / BITS_PER_LONG];
> +    return val == 0;
> +}
> +
> +static inline void clear_bit_long(unsigned long nr_base, void *_addr)
> +{
> +    unsigned long *addr = _addr;
> +    addr[nr_base / BITS_PER_LONG] = 0;
> +}
> +
> +static inline void set_bit_long(unsigned long nr_base, void *_addr)
> +{
> +    unsigned long *addr = _addr;
> +    addr[nr_base / BITS_PER_LONG] = ~0;
> +}
>   #endif  /* XC_BITOPS_H */
>


Juergen
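
[Editorial note: for illustration only, a minimal stand-alone sketch of the first helper with both review suggestions applied (bool return type, blank line after the declarations). BITS_PER_LONG is redefined locally here so the sketch compiles on its own; this code is not part of the patch.]

```c
#include <limits.h>
#include <stdbool.h>

/* Normally provided by xc_bitops.h; defined here so the sketch is
 * self-contained. */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Reworked helper per the comments above: bool return type and a blank
 * line separating the declarations from the return. */
static inline bool test_bit_long_set(unsigned long nr_base, const void *_addr)
{
    const unsigned long *addr = _addr;
    unsigned long val = addr[nr_base / BITS_PER_LONG];

    return val == ~0UL;
}
```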

--------------F254D6538E081A658275154C
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F254D6538E081A658275154C--

--9TlvDplAWW3rMpk1DgyctGLmUyzou1tnA--

--xcmw2V1XBphYYyPiNu60stHLrOUtOC2lC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3Iw0FAwAAAAAACgkQsN6d1ii/Ey+J
iAf+LdJFaUf852kkkPbmVwA173TnZYwiLW8646/Rtcmx0JPAUiBqCVKSu9C1D7PY6C3AZZ3JJ51V
BlqhDj0uhqqX2pULWkwlBxaxTUJA5EOREh8my0EVJt3WN8a0vBre3BFq9ccPjz1tGxjUmQgMn9n+
2+6ls7jkq++eM9Bl38iD3BbYvsSYHy7zdFhfgL6Bo7xgYHeQOZMgaG/EQLW8DNQVL2Z9R1zSVBkf
Z5FdIHME1rXu2GP6DkcT7TCMm6GWwFnMWqvNRULUcKVmHRO+/xr0OVWQup2erxNQy9netLNJz8hf
tQv+HC3PTfoMxUgU0TxXe3EvOpGZx08/BBPMHITiSg==
=K8OR
-----END PGP SIGNATURE-----

--xcmw2V1XBphYYyPiNu60stHLrOUtOC2lC--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:30:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:30:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135453.251601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKOK-0002pk-Lo; Wed, 02 Jun 2021 06:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135453.251601; Wed, 02 Jun 2021 06:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKOK-0002pd-I8; Wed, 02 Jun 2021 06:30:12 +0000
Received: by outflank-mailman (input) for mailman id 135453;
 Wed, 02 Jun 2021 06:30:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKOJ-0002pX-OS
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:30:11 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dfc82b3-2b42-4850-8430-29b7bfda74b5;
 Wed, 02 Jun 2021 06:30:10 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1E66B1FD32;
 Wed,  2 Jun 2021 06:30:10 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E6695118DD;
 Wed,  2 Jun 2021 06:30:09 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id FkpJNnElt2DIQwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dfc82b3-2b42-4850-8430-29b7bfda74b5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622615410; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Zn2FBJyBSmD2wvdoETJjGC8BW04cSEbwAHqYhfPKguQ=;
	b=iZMYPyFPQr1wGqjNOORU6JHk1UdTUL8Dp+YAEvJ8wtaWPp4rDGSJ3K22iZoGAnCoLhd2mF
	KypJWz5QSlWy3cWE7vFh3CZ8cHV7OzZmOdvCt3a5wm5LTosq5QBCXEwHpbZ8tiwykUoPo6
	gGobJnVIJAfAsYfyQWbxaTpnTLQ4LlY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622615410; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Zn2FBJyBSmD2wvdoETJjGC8BW04cSEbwAHqYhfPKguQ=;
	b=iZMYPyFPQr1wGqjNOORU6JHk1UdTUL8Dp+YAEvJ8wtaWPp4rDGSJ3K22iZoGAnCoLhd2mF
	KypJWz5QSlWy3cWE7vFh3CZ8cHV7OzZmOdvCt3a5wm5LTosq5QBCXEwHpbZ8tiwykUoPo6
	gGobJnVIJAfAsYfyQWbxaTpnTLQ4LlY=
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-5-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
Date: Wed, 2 Jun 2021 08:30:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-5-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bRlk6HqKhqVKkKdAurDgUasSvORqDaYRD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bRlk6HqKhqVKkKdAurDgUasSvORqDaYRD
Content-Type: multipart/mixed; boundary="gNGgc9DCOFQqgRTPe7WSYKnM9mYUHWE4z";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-5-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-5-olaf@aepfle.de>

--gNGgc9DCOFQqgRTPe7WSYKnM9mYUHWE4z
Content-Type: multipart/mixed;
 boundary="------------D42367AB7853177860FBC717"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D42367AB7853177860FBC717
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Read a batch of iovec's.
>
> In the common case of short reads, finish individual iov's with read_exact.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/ctrl/xc_private.c | 55 +++++++++++++++++++++++++++++++++-
>   tools/libs/ctrl/xc_private.h |  1 +
>   2 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
> index d94f846686..ea420b9ba8 100644
> --- a/tools/libs/ctrl/xc_private.c
> +++ b/tools/libs/ctrl/xc_private.c
> @@ -659,8 +659,23 @@ int write_exact(int fd, const void *data, size_t size)
>
>   #if defined(__MINIOS__)
>   /*
> - * MiniOS's libc doesn't know about writev(). Implement it as multiple write()s.
> + * MiniOS's libc doesn't know about readv/writev().
> + * Implement it as multiple read/write()s.
>    */
> +int readv_exact(int fd, const struct iovec *iov, int iovcnt)
> +{
> +    int rc, i;
> +
> +    for ( i = 0; i < iovcnt; ++i )
> +    {
> +        rc = read_exact(fd, iov[i].iov_base, iov[i].iov_len);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    return 0;
> +}
> +
>   int writev_exact(int fd, const struct iovec *iov, int iovcnt)
>   {
>       int rc, i;
> @@ -675,6 +690,44 @@ int writev_exact(int fd, const struct iovec *iov, int iovcnt)
>       return 0;
>   }
>   #else
> +int readv_exact(int fd, const struct iovec *iov, int iovcnt)
> +{
> +    int rc = 0, idx = 0;
> +    ssize_t len;
> +
> +    while ( idx < iovcnt )
> +    {
> +        len = readv(fd, &iov[idx], min(iovcnt - idx, IOV_MAX));
> +        if ( len == -1 && errno == EINTR )
> +            continue;
> +        if ( len <= 0 )
> +        {
> +            rc = -1;

Is EOF really an error?

> +            goto out;
> +        }
> +        while ( len > 0 && idx < iovcnt )
> +        {
> +            if ( len >= iov[idx].iov_len )
> +            {
> +                len -= iov[idx].iov_len;
> +            }
> +            else
> +            {
> +                void *p = iov[idx].iov_base + len;
> +                size_t l = iov[idx].iov_len - len;
> +
> +                rc = read_exact(fd, p, l);
> +                if ( rc )
> +                    goto out;
> +                len = 0;

This will stop the loop, even if idx hasn't reached iovcnt.

> +            }
> +            idx++;
> +        }
> +    }
> +out:
> +    return rc;
> +}
> +


Juergen
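
[Editorial note: to illustrate the EOF question raised above: readv() returns 0 at end-of-file and -1 on a real error, so a caller may want to keep those cases apart instead of folding both into rc = -1. The readv_retry wrapper below is a hypothetical sketch for illustration, not code from the patch.]

```c
#include <errno.h>
#include <sys/uio.h>

/* Hypothetical wrapper: retry readv() on EINTR and pass the raw result
 * through, so the caller can distinguish EOF (0) from an error (-1)
 * rather than treating len <= 0 uniformly as failure. */
static ssize_t readv_retry(int fd, const struct iovec *iov, int iovcnt)
{
    ssize_t len;

    do {
        len = readv(fd, iov, iovcnt);
    } while ( len == -1 && errno == EINTR );

    return len;
}
```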

--------------D42367AB7853177860FBC717--

--gNGgc9DCOFQqgRTPe7WSYKnM9mYUHWE4z--

--bRlk6HqKhqVKkKdAurDgUasSvORqDaYRD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3JXAFAwAAAAAACgkQsN6d1ii/Ey88
2gf/eBa3ZquvgaqWEDqBTF/z5Kuc2fwHhEX0bjtYwxDPG9jHUqqAtO+53i9A4aAN0XOg0CgtDqgz
XwTm0IM/dKgzs31oRGGLYW0DimzJxbVpRbExrriaf/5RHrRNWZJOaDVuNLT+ruZE7kRJ1g+cCejw
+4qzbKCxnNGiFvQsKrQ5JXhxYbVsvD8D2iRao0xSMIFqgM9/LS8vUZ12zW+RWHKKlKlj3tgAJoCY
27q8ndwLSnDQcswNDEY9x7hxBMVRFcaryuLAfkZlRvxahkxlH7M27oE3eDY+vrSMnyjgWJtr3U79
Rv13IX4oIETPNTgL9fs0R9Lhlj0BPST6kbkLOc/oCw==
=oHkY
-----END PGP SIGNATURE-----

--bRlk6HqKhqVKkKdAurDgUasSvORqDaYRD--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:51:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:51:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135462.251613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKjF-0005LL-M6; Wed, 02 Jun 2021 06:51:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135462.251613; Wed, 02 Jun 2021 06:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKjF-0005LE-I3; Wed, 02 Jun 2021 06:51:49 +0000
Received: by outflank-mailman (input) for mailman id 135462;
 Wed, 02 Jun 2021 06:51:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKjE-0005L8-1R
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:51:48 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 980cd92d-25f3-4c19-87e3-c11a33c3fd31;
 Wed, 02 Jun 2021 06:51:47 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2FEF821928;
 Wed,  2 Jun 2021 06:51:46 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 03119118DD;
 Wed,  2 Jun 2021 06:51:45 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id fSI7O4Eqt2DzTQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:51:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 980cd92d-25f3-4c19-87e3-c11a33c3fd31
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622616706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7h38BL584hO19wtfBX6H3SP7X8t4ul+fWeygrPyzZ2k=;
	b=lzj+mk2/j1ICe9zy/O6OWaOWpdOJi815t60TpERBRfmnsgF8HPNO48oj6CGPs36va2AduO
	Qh5h7uG7K9+pntzPz/53YZAQPZ/4XVAZlFLMePTcax4ZsTJsOX+ylItHkmtAPZ+jm23I8b
	+NNh1SuYSYjNNNTSSSJY+WkdKRoUXbY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622616706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7h38BL584hO19wtfBX6H3SP7X8t4ul+fWeygrPyzZ2k=;
	b=lzj+mk2/j1ICe9zy/O6OWaOWpdOJi815t60TpERBRfmnsgF8HPNO48oj6CGPs36va2AduO
	Qh5h7uG7K9+pntzPz/53YZAQPZ/4XVAZlFLMePTcax4ZsTJsOX+ylItHkmtAPZ+jm23I8b
	+NNh1SuYSYjNNNTSSSJY+WkdKRoUXbY=
Subject: Re: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to
 libxenctrl
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-6-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
Date: Wed, 2 Jun 2021 08:51:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-6-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="g7SepDU1tVx8HBQBwS79hijoJmtCBxivC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--g7SepDU1tVx8HBQBwS79hijoJmtCBxivC
Content-Type: multipart/mixed; boundary="V5MxSTN6LxHhXJhOPB4nJMASIm9LOR58B";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
Subject: Re: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to
 libxenctrl
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-6-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-6-olaf@aepfle.de>

--V5MxSTN6LxHhXJhOPB4nJMASIm9LOR58B
Content-Type: multipart/mixed;
 boundary="------------DE2FE20E396435AB51020ACE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------DE2FE20E396435AB51020ACE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Users of xc_get_pfn_type_batch may want to sanity check the data
> returned by Xen. Add a simple helper for this purpose.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/ctrl/xc_private.h | 33 +++++++++++++++++++++++++++++++++
>   1 file changed, 33 insertions(+)
>
> diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
> index 5d2c7274fb..afb08aafe1 100644
> --- a/tools/libs/ctrl/xc_private.h
> +++ b/tools/libs/ctrl/xc_private.h
> @@ -421,6 +421,39 @@ void *xc_map_foreign_ranges(xc_interface *xch, uint32_t dom,
>   int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom,
>                             unsigned int num, xen_pfn_t *);
>
> +/* Sanitiy check for types returned by Xen */
> +static inline bool xc_is_known_page_type(xen_pfn_t type)
> +{
> +    bool ret;
> +
> +    switch (type)

I think you should not imply the planned use case here. It would be
better to use "switch (type & XEN_DOMCTL_PFINFO_LTAB_MASK)".

I'm on the fence about putting the new function into xc_private.h.
In the end your use case is _not_ to call the new function from
libxenctrl.


Juergen
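
[Editorial note: a minimal sketch of the masking suggested above. The constants here are illustrative stand-ins, not the real XEN_DOMCTL_PFINFO_* values from Xen's public domctl.h; the point is only that masking first makes the check independent of any flag bits callers may set.]

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t xen_pfn_t; /* stand-in for the Xen typedef */

/* Illustrative placeholders, NOT the real Xen values. */
#define PFINFO_LTAB_MASK  (0xfULL << 60)
#define PFINFO_NOTAB      (0x0ULL << 60)
#define PFINFO_XTAB       (0xfULL << 60)

static inline bool is_known_page_type(xen_pfn_t type)
{
    /* Mask off any flag bits first, as the review suggests via
     * "type & XEN_DOMCTL_PFINFO_LTAB_MASK", then switch on the
     * type field alone. */
    switch ( type & PFINFO_LTAB_MASK )
    {
    case PFINFO_NOTAB:
    case PFINFO_XTAB:
        return true;
    default:
        return false;
    }
}
```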

--------------DE2FE20E396435AB51020ACE--

--V5MxSTN6LxHhXJhOPB4nJMASIm9LOR58B--

--g7SepDU1tVx8HBQBwS79hijoJmtCBxivC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3KoEFAwAAAAAACgkQsN6d1ii/Ey/C
lAf/TaysGhgMVz+LFg9v/AUW+h4263+CqOc9H+o8dO58f66W8bDAWDiW6trTviR1yrWMYXon2v93
3fhDYikoOqwPXADxEPOJZyGpgl40o3MUAMGeAG8CHSGSWqISkeE+MaWZvf7PK3yebT6kRTGX35Jw
EdoVxo2l5XPy6XO0Cvxxh1wtDkBbRzwzP65GNPQgiaumP7lMZz/w9h8WiHHtkl2TQKNwUORu0gZK
BtzLNnf5HCNyIYgvG2WExsUWfmVbayFRuEw523ukJkQtMns7kpUCxeiMwEPEByfiZkPxYyhhTSio
Ij+8MbaVjAKA48xu1OaA+Js6Ht8qgh7SlMS9WnQpDQ==
=7GkR
-----END PGP SIGNATURE-----

--g7SepDU1tVx8HBQBwS79hijoJmtCBxivC--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:53:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135467.251623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKkV-0005vn-08; Wed, 02 Jun 2021 06:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135467.251623; Wed, 02 Jun 2021 06:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKkU-0005vg-TO; Wed, 02 Jun 2021 06:53:06 +0000
Received: by outflank-mailman (input) for mailman id 135467;
 Wed, 02 Jun 2021 06:53:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKkT-0005vY-CF
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:53:05 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a6789a3-52df-499a-b528-90da0cb01efc;
 Wed, 02 Jun 2021 06:53:04 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3DBDF1FD2D;
 Wed,  2 Jun 2021 06:53:03 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 12FF5118DD;
 Wed,  2 Jun 2021 06:53:03 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id /ZCDA88qt2B9TgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a6789a3-52df-499a-b528-90da0cb01efc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622616783; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L7RObIhuSbekciFyRYt0oigFWyXeIX7fnD2buYf2AQU=;
	b=X3M3V9oJ+ODpmwTPPyQcHbpSM3rbs1RoU8dAMwG3itfUIGLvNQ2oPMhreh1L06SQUUVcvE
	AYbhiNprxFTH9kqp9Hu6JKvSBGN4Ux1jPzxEHsbWLxurFtIXE74giQc6hmkZjQ4Hze6i5u
	6sHFNNOds6FX3B1WNK6oZainZ8KnS84=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622616783; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L7RObIhuSbekciFyRYt0oigFWyXeIX7fnD2buYf2AQU=;
	b=X3M3V9oJ+ODpmwTPPyQcHbpSM3rbs1RoU8dAMwG3itfUIGLvNQ2oPMhreh1L06SQUUVcvE
	AYbhiNprxFTH9kqp9Hu6JKvSBGN4Ux1jPzxEHsbWLxurFtIXE74giQc6hmkZjQ4Hze6i5u
	6sHFNNOds6FX3B1WNK6oZainZ8KnS84=
Subject: Re: [PATCH v20210601 06/38] tools: use xc_is_known_page_type
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-7-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <4df4dfd6-a270-46b4-3a6e-42ab32b14ca4@suse.com>
Date: Wed, 2 Jun 2021 08:53:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-7-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TsZMO2gJE87Q78Q70T7i7RPBpGWENFnN7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TsZMO2gJE87Q78Q70T7i7RPBpGWENFnN7
Content-Type: multipart/mixed; boundary="FB2xdZEmuDqRZF9ZVgw45Lk45zWNLyvnp";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <4df4dfd6-a270-46b4-3a6e-42ab32b14ca4@suse.com>
Subject: Re: [PATCH v20210601 06/38] tools: use xc_is_known_page_type
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-7-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-7-olaf@aepfle.de>

--FB2xdZEmuDqRZF9ZVgw45Lk45zWNLyvnp
Content-Type: multipart/mixed;
 boundary="------------AFC82CC61F153E6A93288C36"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AFC82CC61F153E6A93288C36
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Verify pfn type on sending side, also verify incoming batch of pfns.
>=20
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------AFC82CC61F153E6A93288C36
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------AFC82CC61F153E6A93288C36--

--FB2xdZEmuDqRZF9ZVgw45Lk45zWNLyvnp--

--TsZMO2gJE87Q78Q70T7i7RPBpGWENFnN7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3Ks4FAwAAAAAACgkQsN6d1ii/Ey9c
MQf/TkMLi0sFVWAMisbNnQrEMulU28YOnbqfOG56VH8NctOz7M5NuR3GULv5G6tyimhM08oJJIiY
IJ98jmaXls0UpRuDZGkWMdbTdRb+uH6ZPFDzC5ylPoKCIoP7urWNKXyHBGxCsEOc3ncSiAEYsC7m
DYRBjq9rzk0+INVxvVGpzY9uKjThCiPuiNgGFtooLTMICfbRjGFsoU3XAFtf6tJYwOzzZUjRCP+U
PIpcC9GLMLpf/eG6rhV8Rbqf4egKXf8B6VMHNCOsDaMeBo+/y7zcXPQUBflEKIETZn77bnIjI2bR
TN43ByRykH8Rg1ndto892pTahpjwZ/0zSzYiOxIosA==
=3O91
-----END PGP SIGNATURE-----

--TsZMO2gJE87Q78Q70T7i7RPBpGWENFnN7--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135474.251635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKlb-0006gd-BD; Wed, 02 Jun 2021 06:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135474.251635; Wed, 02 Jun 2021 06:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKlb-0006gW-84; Wed, 02 Jun 2021 06:54:15 +0000
Received: by outflank-mailman (input) for mailman id 135474;
 Wed, 02 Jun 2021 06:54:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loKlZ-0006gQ-SY
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:13 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5333d106-1e67-4d36-b760-56d5b5ea03cf;
 Wed, 02 Jun 2021 06:54:13 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx526sB2iS
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 08:54:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5333d106-1e67-4d36-b760-56d5b5ea03cf
ARC-Seal: i=1; a=rsa-sha256; t=1622616851; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=GYgEK4BjILMs4T3k6AItl94teUt7xghrRo0VT9ZzejfYnSphkc/000QdnBQ2sfkO5r
    /WBtGdRMhoSpiC4vNStR+HFLiZUu/wQcxD05pQ75egvVCgcnK9uJI4O1FX2P36+Lks/j
    0iKFU9lQkiuR1GUCjGPuulkjI6zEQEXPb4bhMHGVznMP0a13Z8U7RQoOcAWaU8z5Rwfj
    1ldI/El9Ll/TFERAaYlYIhDgQ3k7h4HAqMcs9o5OPTImCAzN08Y0Id0LTOgbl/Iv/riN
    jRl6nfrDxBsOl0pV/kMc1EGePykOEhZUDMcTkVXhmJWfLIMm7W65g7UA6XfJWPhXJGZV
    sWKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622616851;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=p+1jTP3Fbi/pzfCLcmpYURwnECSrVl4NZeWhW9VjnTA=;
    b=Lw8qGXueUngHCg9Z/tSRZ4XLEv02CZVm4hpvoz2hRgl8M6FyC/JBNXFLefyH9mKQU8
    GeNMC3CE5mzck9Q7RzPgqB+9u0CIC2Lfvl4r6rYWvzW5TE/vExodKl4OCG0jL9HZSD4B
    eVhdkoy0USkaymAYgkLYbMSRBDoiV9uMNW0I9Lwa8vIeNj14lv0AXFIwyYfS4n3D77Dg
    Ukg7/IxJY3sHwCeBUPRDF1cJbgcm7UpeWMyTbqt5IzH1y3GJtvT/hNYAhQ7B15yaeQV4
    hqaE2asGiU9fG5RSUXPh/Ncw7Prj+v4Dk7rtJ3MCc9QUYiNkF+NR/OVUz5Xz0zivOsBt
    X63g==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622616851;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=p+1jTP3Fbi/pzfCLcmpYURwnECSrVl4NZeWhW9VjnTA=;
    b=br+7EQ/4zCkRe2f06RHEHTj+EBN+QujDEBVEsWx3BsLP1ZiPqWVhK/pOUCrwkeUKfU
    +B78I7AxKUEtX20RbbjHhIH54Pum2GtH78I0M5fy/7QHirq9HN2+W87OLpTdak8DN8E2
    fJCp+cmG3nONh0lno5LSigbJ0Sxhp6G5JGVV4Dzq460YaZyrkW/3zexH5m86rxzPVx97
    ug7c4efvoSmKO9qrchSy8O78QObf8ITsI0G+uYsb2Zpbwn/W0bjnhh6s8GS6Sm3R8idq
    GklB4Hz5j4AfIML050PCdblBF72uFx9oMuNj/9pLPjYVAyet+dYr97/2Zwwv8Vf1z/oS
    xgyQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 08:54:03 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
Message-ID: <20210602085403.40064aed.olaf@aepfle.de>
In-Reply-To: <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/FO=8H/H/=4J2GRdLzOdXjkb";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/FO=8H/H/=4J2GRdLzOdXjkb
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 2 Jun 2021 08:10:21 +0200
schrieb Juergen Gross <jgross@suse.com>:

> Would it be possible to split this into multiple independent
> patches/series?

Sure, I can send each individual part that was already sent over and over a=
gain.


Olaf

--Sig_/FO=8H/H/=4J2GRdLzOdXjkb
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3KwwACgkQ86SN7mm1
DoAoww/8CoWvgrgmVmYBlkyXBcE5y/g6tsZEExaRIOgPlRYXUA9BouNQbejFlJi7
qpRQayZDoTdXLg/q4/DVtioOG+pXR1im83fcc5N0d4JA1wlyaieBX/rwD8bZSZKm
+WUByhWcp8QMpL8IG6PqmUYCYK0+W99XToF1wgT/amz2FFcMHuejEEx+DCDdshpL
Fyj8JSQebiiCgZID+vpJouxfFHjqpFnBL+2X/xUmNY/XhssmJMGwxS1loY9hhPRO
qOyC8+5UWbIoE99ylDcCgagsUR/saOUnGWSAMJYY4x6llCgjEYs+1CyPSZPdqgkZ
1Tqs28oefLrHSjC6siMvXbB8gfftTNsHOyy8OFfofO3GOm3NmECQY0tXGehSHVVB
yhXSQBB1aTpneU6oKqVet11As+V7zzxIS+AUVA/K8wvhx9Qg1Ck0GkUOYxAGL3v1
ChZcpZpLFvIXlJJej4o5ol4FyO3NiPAnz3fXFwFLU/1nq4g3sOFLI2V/oIeO1sn9
VMB7Ytdw7fZdlAqtqaufoQffod7DzHuFQDQt88Bol70lrjGUluniQeHFkyXkC706
RVSHJpqdVGSRJoiHSyx5h/FJ68O8ECtZEB1gxAFE6HbTK14X6dh8U1o1I88VHos4
gwaGMChEGJG9mUwwMw7L0X7RDz9naaKJpS8bUFQ78Nm+2MwvxG8=
=kg1n
-----END PGP SIGNATURE-----

--Sig_/FO=8H/H/=4J2GRdLzOdXjkb--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135476.251646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKly-0007Bo-Jz; Wed, 02 Jun 2021 06:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135476.251646; Wed, 02 Jun 2021 06:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKly-0007Bd-Go; Wed, 02 Jun 2021 06:54:38 +0000
Received: by outflank-mailman (input) for mailman id 135476;
 Wed, 02 Jun 2021 06:54:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKlw-0007A2-42
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:37 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d107c9f5-7dc1-4bb4-b0a7-d3bc98fc6501;
 Wed, 02 Jun 2021 06:54:33 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlL-0025I2-Rr; Wed, 02 Jun 2021 06:54:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d107c9f5-7dc1-4bb4-b0a7-d3bc98fc6501
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=myJzdG0UY+YhduciAvl62vV0+eYIWXuf0viapKKEim4=; b=iGVLX51PspxKKCnzMpC4EouJj5
	B2zrb4+pSV9mnMrSg79gkg0MGmfd8cZ1WB50YixRgjFbx4kOwp4juWx6b16UoCxx1LpymNY6Yk0x8
	1nzXicL+0dUmecEKqHWZqBXMyeC2V5zboJkZLT8Z5vqTYJk/8WfYA9rvi3yfvAsJVwtzqljPiu8AS
	Za/PcTal9v5fdMjqkHRd71J1Y7Beckc40vm4RzsUt68TBw5T15mRNRvzTDicRFuks4v90WQ7UfdEb
	6GRv3x0E21udpoMdOjy7ckRjA7wLy9iC+PnSFHrTfG3ucQpfezI0IJkNiWgyB2fdem2B9y0l01OTl
	3C4E0UGA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 02/30] blk-mq: improve the blk_mq_init_allocated_queue interface
Date: Wed,  2 Jun 2021 09:53:17 +0300
Message-Id: <20210602065345.355274-3-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Don't return the passed in request_queue but a normal error code, and
drop the elevator_init argument in favor of just calling elevator_init_mq
directly from dm-rq.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c           | 36 ++++++++++++++----------------------
 block/blk.h              |  1 -
 block/elevator.c         |  2 +-
 drivers/md/dm-rq.c       |  9 +++------
 include/linux/blk-mq.h   |  5 ++---
 include/linux/elevator.h |  1 +
 6 files changed, 21 insertions(+), 33 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index eaacfa963a73..6112741e1ff9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3115,21 +3115,18 @@ void blk_mq_release(struct request_queue *q)
 struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 		void *queuedata)
 {
-	struct request_queue *uninit_q, *q;
+	struct request_queue *q;
+	int ret;
 
-	uninit_q = blk_alloc_queue(set->numa_node);
-	if (!uninit_q)
+	q = blk_alloc_queue(set->numa_node);
+	if (!q)
 		return ERR_PTR(-ENOMEM);
-	uninit_q->queuedata = queuedata;
-
-	/*
-	 * Initialize the queue without an elevator. device_add_disk() will do
-	 * the initialization.
-	 */
-	q = blk_mq_init_allocated_queue(set, uninit_q, false);
-	if (IS_ERR(q))
-		blk_cleanup_queue(uninit_q);
-
+	q->queuedata = queuedata;
+	ret = blk_mq_init_allocated_queue(set, q);
+	if (ret) {
+		blk_cleanup_queue(q);
+		return ERR_PTR(ret);
+	}
 	return q;
 }
 EXPORT_SYMBOL_GPL(blk_mq_init_queue_data);
@@ -3273,9 +3270,8 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 	mutex_unlock(&q->sysfs_lock);
 }
 
-struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
-						  struct request_queue *q,
-						  bool elevator_init)
+int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+		struct request_queue *q)
 {
 	/* mark the queue as mq asap */
 	q->mq_ops = set->ops;
@@ -3325,11 +3321,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	blk_mq_init_cpu_queues(q, set->nr_hw_queues);
 	blk_mq_add_queue_tag_set(set, q);
 	blk_mq_map_swqueue(q);
-
-	if (elevator_init)
-		elevator_init_mq(q);
-
-	return q;
+	return 0;
 
 err_hctxs:
 	kfree(q->queue_hw_ctx);
@@ -3340,7 +3332,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->poll_cb = NULL;
 err_exit:
 	q->mq_ops = NULL;
-	return ERR_PTR(-ENOMEM);
+	return -ENOMEM;
 }
 EXPORT_SYMBOL(blk_mq_init_allocated_queue);
 
diff --git a/block/blk.h b/block/blk.h
index 3440142f029b..d3fa47af3607 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -192,7 +192,6 @@ void blk_account_io_done(struct request *req, u64 now);
 
 void blk_insert_flush(struct request *rq);
 
-void elevator_init_mq(struct request_queue *q);
 int elevator_switch_mq(struct request_queue *q,
 			      struct elevator_type *new_e);
 void __elevator_exit(struct request_queue *, struct elevator_queue *);
diff --git a/block/elevator.c b/block/elevator.c
index 440699c28119..06e203426410 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -693,7 +693,7 @@ void elevator_init_mq(struct request_queue *q)
 		elevator_put(e);
 	}
 }
-
+EXPORT_SYMBOL_GPL(elevator_init_mq); /* only for dm-rq */
 
 /*
  * switch to new_e io scheduler. be careful not to introduce deadlocks -
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 9c3bc3711b33..0dbd48cbdff9 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -530,7 +530,6 @@ static const struct blk_mq_ops dm_mq_ops = {
 
 int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 {
-	struct request_queue *q;
 	struct dm_target *immutable_tgt;
 	int err;
 
@@ -557,12 +556,10 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	if (err)
 		goto out_kfree_tag_set;
 
-	q = blk_mq_init_allocated_queue(md->tag_set, md->queue, true);
-	if (IS_ERR(q)) {
-		err = PTR_ERR(q);
+	err = blk_mq_init_allocated_queue(md->tag_set, md->queue);
+	if (err)
 		goto out_tag_set;
-	}
-
+	elevator_init_mq(md->queue);
 	return 0;
 
 out_tag_set:
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index bb950fc669ef..73750b2838d2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -429,9 +429,8 @@ enum {
 struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
 struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 		void *queuedata);
-struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
-						  struct request_queue *q,
-						  bool elevator_init);
+int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+		struct request_queue *q);
 struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
 						const struct blk_mq_ops *ops,
 						unsigned int queue_depth,
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index dcb2f9022c1d..783ecb3cb77a 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -120,6 +120,7 @@ extern void elv_merged_request(struct request_queue *, struct request *,
 extern bool elv_attempt_insert_merge(struct request_queue *, struct request *);
 extern struct request *elv_former_request(struct request_queue *, struct request *);
 extern struct request *elv_latter_request(struct request_queue *, struct request *);
+void elevator_init_mq(struct request_queue *q);
 
 /*
  * io scheduler registration
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135477.251657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKm2-0007UT-0w; Wed, 02 Jun 2021 06:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135477.251657; Wed, 02 Jun 2021 06:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKm1-0007UK-TW; Wed, 02 Jun 2021 06:54:41 +0000
Received: by outflank-mailman (input) for mailman id 135477;
 Wed, 02 Jun 2021 06:54:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKm1-0007A2-2b
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:41 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6138d0e7-d37e-44e1-965f-4e0dc4867887;
 Wed, 02 Jun 2021 06:54:32 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlB-0025F3-IY; Wed, 02 Jun 2021 06:53:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6138d0e7-d37e-44e1-965f-4e0dc4867887
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=GNz/Hu0V5gogSB4co5eOK6xP1gnr1qh2/gA3KL3Dtxg=; b=BI35MypsV5ujI5j9LOM8ffKO3O
	PPR2xJ+smyVf1yBqrB22os7cElpqLd6BGSF+LAFux1FWghPPtYUh71lumcyGTaH+GySHaNY4TdhS/
	AV4vc/JIVMat63n8MdcbvA03bNT/GrAfqDiF7ptTNcphdfH4Gmp4s374edZksMq4302Y5G3rKEHTv
	zkjAvx0fvVtUm2B/6XY5TSPrkqQmnos5aAqRHBWaQFtoEeGLkMCCmRvVNoipNM5EQcjvqqNXgrxv+
	Uv5yjt1xCW0uwaWbb4vvYLNz6VSHAG5NmTU3avE4Jh6rqZRN77aCtw5TtcxU3fh9zr5QCwFWGydth
	c/21yB+g==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: simplify gendisk and request_queue allocation for blk-mq based drivers
Date: Wed,  2 Jun 2021 09:53:15 +0300
Message-Id: <20210602065345.355274-1-hch@lst.de>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Hi all,

this series is the second part of cleaning up the lifetimes and allocation of
the gendisk and request_queue structures.  It adds a new interface to
allocate the disk and queue together for blk-mq based drivers, and uses that
in all drivers that do not have any caveats in their gendisk and
request_queue lifetime rules.

Diffstat:
 block/blk-mq.c                      |   91 +++++++++++++++-------------------
 block/blk.h                         |    1 
 block/elevator.c                    |    2 
 drivers/block/amiflop.c             |   16 +-----
 drivers/block/aoe/aoeblk.c          |   33 ++++--------
 drivers/block/aoe/aoedev.c          |    3 -
 drivers/block/ataflop.c             |   16 +-----
 drivers/block/floppy.c              |   20 +------
 drivers/block/loop.c                |   19 ++-----
 drivers/block/nbd.c                 |   53 +++++++------------
 drivers/block/null_blk/main.c       |   11 +---
 drivers/block/paride/pcd.c          |   19 +++----
 drivers/block/paride/pd.c           |   30 ++++-------
 drivers/block/paride/pf.c           |   18 ++----
 drivers/block/ps3disk.c             |   36 +++++--------
 drivers/block/rbd.c                 |   52 ++++++-------------
 drivers/block/rnbd/rnbd-clt.c       |   35 +++----------
 drivers/block/sunvdc.c              |   47 ++++-------------
 drivers/block/swim.c                |   34 +++++-------
 drivers/block/swim3.c               |   33 +++++-------
 drivers/block/sx8.c                 |   23 ++------
 drivers/block/virtio_blk.c          |   26 ++-------
 drivers/block/xen-blkfront.c        |   96 ++++++++++++++----------------------
 drivers/block/z2ram.c               |   15 +----
 drivers/cdrom/gdrom.c               |   45 +++++++---------
 drivers/md/dm-rq.c                  |    9 +--
 drivers/memstick/core/ms_block.c    |   25 +++------
 drivers/memstick/core/mspro_block.c |   26 ++++-----
 drivers/mtd/mtd_blkdevs.c           |   48 ++++++++----------
 drivers/mtd/ubi/block.c             |   68 ++++++++++---------------
 drivers/s390/block/scm_blk.c        |   21 ++-----
 include/linux/blk-mq.h              |   24 ++++++---
 include/linux/elevator.h            |    1 
 33 files changed, 386 insertions(+), 610 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135478.251668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKm7-0007rC-As; Wed, 02 Jun 2021 06:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135478.251668; Wed, 02 Jun 2021 06:54:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKm7-0007qz-65; Wed, 02 Jun 2021 06:54:47 +0000
Received: by outflank-mailman (input) for mailman id 135478;
 Wed, 02 Jun 2021 06:54:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKm6-0007A2-2o
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:46 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14b895fd-ad3b-4db4-8bd1-36518816df4c;
 Wed, 02 Jun 2021 06:54:33 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlG-0025Fb-J7; Wed, 02 Jun 2021 06:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14b895fd-ad3b-4db4-8bd1-36518816df4c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=SnRF2RasuO3TzWzCcWD7psRmahwH4O2DjzNGyzgJCxM=; b=RfiXqx1buYxjqGM6Ru6s2uXbtK
	szP8Lp35KdwoBLpRViL7iCBQHBOTXZ4Qu5LJOvN43OaaHCTJwbYuBSl2SqXNEe3Ty441iVm62zV/q
	FK7bqtRTYnR/sNzxWhxA2AWxTTSYR0R47li/OgrbNqJ/z/C247XNi+vyVCBUJz2HWLNRhHDg9jROf
	W37wVhanz0woIFa4DI1Hiypd6xGuMEzMDW+t6oSvgQDur5ioE4KQTAUXWM04CshX1rW4l4k5bAsYt
	fMWsLaqDTkxSQegnpPyUc9dgQ8DJelIKHSJsddjUREuKapJzcCJM+AP8gSr/w6bRFlEnBsoDVPXvd
	WWIbvGOQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 01/30] blk-mq: factor out a blk_mq_alloc_sq_tag_set helper
Date: Wed,  2 Jun 2021 09:53:16 +0300
Message-Id: <20210602065345.355274-2-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Factor out a helper to initialize a simple single hw queue tag_set from
blk_mq_init_sq_queue.  This will allow phasing out blk_mq_init_sq_queue
in favor of a more symmetric and general API.
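For a driver with one hardware queue, the helper collapses the open-coded
tag_set setup into a single call.  A minimal sketch of a caller (the
mydev structure, mydev_mq_ops, and the chosen queue depth are
illustrative, not taken from this series):

```c
/* Hypothetical single-queue driver setup using the new helper. */
static int mydev_setup_queue(struct mydev *dev)
{
	struct request_queue *q;
	int err;

	/* Replaces the memset + field-by-field tag_set initialization:
	 * one hw queue, depth 16, request merging enabled.
	 */
	err = blk_mq_alloc_sq_tag_set(&dev->tag_set, &mydev_mq_ops, 16,
				      BLK_MQ_F_SHOULD_MERGE);
	if (err)
		return err;

	q = blk_mq_init_queue(&dev->tag_set);
	if (IS_ERR(q)) {
		/* The helper allocated the tag set, so unwind it here. */
		blk_mq_free_tag_set(&dev->tag_set);
		return PTR_ERR(q);
	}
	dev->queue = q;
	return 0;
}
```

Note that, as in blk_mq_init_sq_queue before it, the caller still owns
the tag_set and must free it on the error path and at teardown.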

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 32 ++++++++++++++++++--------------
 include/linux/blk-mq.h |  3 +++
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f11d4018ce2e..eaacfa963a73 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3152,24 +3152,12 @@ struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
 	struct request_queue *q;
 	int ret;
 
-	memset(set, 0, sizeof(*set));
-	set->ops = ops;
-	set->nr_hw_queues = 1;
-	set->nr_maps = 1;
-	set->queue_depth = queue_depth;
-	set->numa_node = NUMA_NO_NODE;
-	set->flags = set_flags;
-
-	ret = blk_mq_alloc_tag_set(set);
+	ret = blk_mq_alloc_sq_tag_set(set, ops, queue_depth, set_flags);
 	if (ret)
 		return ERR_PTR(ret);
-
 	q = blk_mq_init_queue(set);
-	if (IS_ERR(q)) {
+	if (IS_ERR(q))
 		blk_mq_free_tag_set(set);
-		return q;
-	}
-
 	return q;
 }
 EXPORT_SYMBOL(blk_mq_init_sq_queue);
@@ -3589,6 +3577,22 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 }
 EXPORT_SYMBOL(blk_mq_alloc_tag_set);
 
+/* allocate and initialize a tagset for a simple single-queue device */
+int blk_mq_alloc_sq_tag_set(struct blk_mq_tag_set *set,
+		const struct blk_mq_ops *ops, unsigned int queue_depth,
+		unsigned int set_flags)
+{
+	memset(set, 0, sizeof(*set));
+	set->ops = ops;
+	set->nr_hw_queues = 1;
+	set->nr_maps = 1;
+	set->queue_depth = queue_depth;
+	set->numa_node = NUMA_NO_NODE;
+	set->flags = set_flags;
+	return blk_mq_alloc_tag_set(set);
+}
+EXPORT_SYMBOL_GPL(blk_mq_alloc_sq_tag_set);
+
 void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
 {
 	int i, j;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 359486940fa0..bb950fc669ef 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -439,6 +439,9 @@ struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
 void blk_mq_unregister_dev(struct device *, struct request_queue *);
 
 int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set);
+int blk_mq_alloc_sq_tag_set(struct blk_mq_tag_set *set,
+		const struct blk_mq_ops *ops, unsigned int queue_depth,
+		unsigned int set_flags);
 void blk_mq_free_tag_set(struct blk_mq_tag_set *set);
 
 void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135481.251679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmB-0008Ea-KB; Wed, 02 Jun 2021 06:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135481.251679; Wed, 02 Jun 2021 06:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmB-0008EP-Gk; Wed, 02 Jun 2021 06:54:51 +0000
Received: by outflank-mailman (input) for mailman id 135481;
 Wed, 02 Jun 2021 06:54:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmB-0007A2-31
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:51 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e58446d6-9e03-4f7a-a64f-71458a14aaab;
 Wed, 02 Jun 2021 06:54:36 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlW-0025KS-9M; Wed, 02 Jun 2021 06:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e58446d6-9e03-4f7a-a64f-71458a14aaab
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=9pvQ9WWixoN79V+IFrwZ0yGtfnJdYnRvQElOajEUx4E=; b=gnk1MFf6oC2iDvM0lQExGF55IT
	Fu4WEOCNNJ9mVEtVcIDrtrwvLS92XEx/Y9jJNPlMZzP3mxVH717a4VCLxOLSslRLKspq6OAA9D5yw
	EQDkVL09aPuAvCsSUDM3ZFzSxHOeCU6gBkEfhYQCCehrU9uW68j9GkCyo7Ds0V43zhcXAh05TNWGS
	LhXZW/q9HT26Y7QKqOeqywIxRBtkOI49AYNNeXwG3akyX15CxDIe0h/fq/1eJkZhMao+G/fiRrulm
	dyrb+u0WyEaL8vr4wMg0Ys8BKcowBJnF+Zn2V2osWkSUWM27u+JhH6YkPr+2yh9LuvxLx7wgsnTfO
	l9Bduvjw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 04/30] virtio-blk: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:19 +0300
Message-Id: <20210602065345.355274-5-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/virtio_blk.c | 26 +++++++-------------------
 1 file changed, 7 insertions(+), 19 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b9fa3ef5b57c..e4bd3b1fc3c2 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -749,13 +749,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vblk;
 
-	/* FIXME: How many partitions?  How long is a piece of string? */
-	vblk->disk = alloc_disk(1 << PART_BITS);
-	if (!vblk->disk) {
-		err = -ENOMEM;
-		goto out_free_vq;
-	}
-
 	/* Default queue sizing is to fill the ring. */
 	if (likely(!virtblk_queue_depth)) {
 		queue_depth = vblk->vqs[0].vq->num_free;
@@ -779,21 +772,20 @@ static int virtblk_probe(struct virtio_device *vdev)
 
 	err = blk_mq_alloc_tag_set(&vblk->tag_set);
 	if (err)
-		goto out_put_disk;
+		goto out_free_vq;
 
-	q = blk_mq_init_queue(&vblk->tag_set);
-	if (IS_ERR(q)) {
-		err = -ENOMEM;
+	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, vblk);
+	if (IS_ERR(vblk->disk)) {
+		err = PTR_ERR(vblk->disk);
 		goto out_free_tags;
 	}
-	vblk->disk->queue = q;
-
-	q->queuedata = vblk;
+	q = vblk->disk->queue;
 
 	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
 
 	vblk->disk->major = major;
 	vblk->disk->first_minor = index_to_minor(index);
+	vblk->disk->minors = 1 << PART_BITS;
 	vblk->disk->private_data = vblk;
 	vblk->disk->fops = &virtblk_fops;
 	vblk->disk->flags |= GENHD_FL_EXT_DEVT;
@@ -892,8 +884,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 
 out_free_tags:
 	blk_mq_free_tag_set(&vblk->tag_set);
-out_put_disk:
-	put_disk(vblk->disk);
 out_free_vq:
 	vdev->config->del_vqs(vdev);
 	kfree(vblk->vqs);
@@ -913,8 +903,7 @@ static void virtblk_remove(struct virtio_device *vdev)
 	flush_work(&vblk->config_work);
 
 	del_gendisk(vblk->disk);
-	blk_cleanup_queue(vblk->disk->queue);
-
+	blk_cleanup_disk(vblk->disk);
 	blk_mq_free_tag_set(&vblk->tag_set);
 
 	mutex_lock(&vblk->vdev_mutex);
@@ -925,7 +914,6 @@ static void virtblk_remove(struct virtio_device *vdev)
 	/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
 	vblk->vdev = NULL;
 
-	put_disk(vblk->disk);
 	vdev->config->del_vqs(vdev);
 	kfree(vblk->vqs);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:54:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135484.251690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmH-0000Mh-00; Wed, 02 Jun 2021 06:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135484.251690; Wed, 02 Jun 2021 06:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmG-0000MQ-Q8; Wed, 02 Jun 2021 06:54:56 +0000
Received: by outflank-mailman (input) for mailman id 135484;
 Wed, 02 Jun 2021 06:54:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmG-0007A2-3B
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:54:56 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05503f4d-4956-49b5-9394-c00d370a3ba3;
 Wed, 02 Jun 2021 06:54:36 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlS-0025JJ-Ew; Wed, 02 Jun 2021 06:54:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05503f4d-4956-49b5-9394-c00d370a3ba3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Yi0gWgny8DlRPvb2CPLj2qpffFrZ3FaoaK+2w2W5Ero=; b=Ht4009yUCSzO+7+WkZSpPv8/pg
	DF4TuO/V/1LnprDJA3jlEhqFaXKBH7nFqyPlLMER9C4xGoA29+zO97djThq/H5WGD+qvf9L+NXUS+
	NF+wjIhh/npg/q4yZwzJ2XU8Rh/TBSfn+u/dM7E/U9YdPLNk9qDr3C//ieS4lZI5LRisQPRidcs0K
	1KxMYlyEV+r/yTnMeO4E9D5sjO0i2VDyXsAV0BqYgFAku1jAbz09LqX7rvILnxpnxFg/tBckuBSOC
	E+KSpkcLJ/UGUgdvZ89eU6BxLq+pC8HlqDRtF4O6vNTYENtcydGYMepxD+bizsH8Il7O6KAzLt54A
	NMGPi+1w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 03/30] blk-mq: add the blk_mq_alloc_disk APIs
Date: Wed,  2 Jun 2021 09:53:18 +0300
Message-Id: <20210602065345.355274-4-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Add a new API to allocate a gendisk including the request_queue for use
with blk-mq based drivers.  This is to avoid boilerplate code in drivers.
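A sketch of the probe/remove pattern the new API enables (mydev and its
fields are hypothetical; only blk_mq_alloc_disk, blk_cleanup_disk, and
blk_mq_free_tag_set come from this series and the kernel proper):

```c
/* Hypothetical driver using the combined disk + queue allocation. */
static int mydev_probe(struct mydev *dev)
{
	struct gendisk *disk;

	/* Allocates the request_queue and the gendisk together;
	 * the second argument becomes queue->queuedata.
	 */
	disk = blk_mq_alloc_disk(&dev->tag_set, dev);
	if (IS_ERR(disk))
		return PTR_ERR(disk);

	disk->private_data = dev;
	dev->disk = disk;
	return 0;
}

static void mydev_remove(struct mydev *dev)
{
	del_gendisk(dev->disk);
	/* Tears down the queue and drops the disk reference in one call,
	 * replacing the separate blk_cleanup_queue() + put_disk() pair.
	 */
	blk_cleanup_disk(dev->disk);
	blk_mq_free_tag_set(&dev->tag_set);
}
```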

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 19 +++++++++++++++++++
 include/linux/blk-mq.h | 12 ++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6112741e1ff9..1e6036e6fd66 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3137,6 +3137,25 @@ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
 }
 EXPORT_SYMBOL(blk_mq_init_queue);
 
+struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata)
+{
+	struct request_queue *q;
+	struct gendisk *disk;
+
+	q = blk_mq_init_queue_data(set, queuedata);
+	if (IS_ERR(q))
+		return ERR_CAST(q);
+
+	disk = __alloc_disk_node(0, set->numa_node);
+	if (!disk) {
+		blk_cleanup_queue(q);
+		return ERR_PTR(-ENOMEM);
+	}
+	disk->queue = q;
+	return disk;
+}
+EXPORT_SYMBOL(__blk_mq_alloc_disk);
+
 /*
  * Helper for setting up a queue with mq ops, given queue depth, and
  * the passed in mq ops flags.
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 73750b2838d2..f496c6c5b5d2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -426,6 +426,18 @@ enum {
 	((policy & ((1 << BLK_MQ_F_ALLOC_POLICY_BITS) - 1)) \
 		<< BLK_MQ_F_ALLOC_POLICY_START_BIT)
 
+#define blk_mq_alloc_disk(set, queuedata)				\
+({									\
+	static struct lock_class_key __key;				\
+	struct gendisk *__disk = __blk_mq_alloc_disk(set, queuedata);	\
+									\
+	if (__disk)							\
+		lockdep_init_map(&__disk->lockdep_map,			\
+			"(bio completion)", &__key, 0);			\
+	__disk;								\
+})
+struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set,
+		void *queuedata);
 struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
 struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 		void *queuedata);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:55:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135487.251701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmM-0000vy-7Y; Wed, 02 Jun 2021 06:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135487.251701; Wed, 02 Jun 2021 06:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmM-0000vp-2v; Wed, 02 Jun 2021 06:55:02 +0000
Received: by outflank-mailman (input) for mailman id 135487;
 Wed, 02 Jun 2021 06:55:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmL-0007A2-3P
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:55:01 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73a52a5f-e36d-438f-89af-9a5fae7ac46b;
 Wed, 02 Jun 2021 06:54:39 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlc-0025MH-2X; Wed, 02 Jun 2021 06:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73a52a5f-e36d-438f-89af-9a5fae7ac46b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=itoIfl0lOn3PjkK+CaoYHstG5XDmIyS+X3zd0fvSPIM=; b=bhWE2wpHQl41sfsjCXEfpYjcLR
	nmreDYw+q1yPr6HX9B86hwxNp6nZ2Tnhh1KaILqMkYW4Y4qsc6sKDqxr3JfzzlCVpsVZXiDriGbCZ
	Y1kvrYb+ETid8liK1mxwid55k+FW8rcCy2J/MzUAKhHFJvh0OOcWM1xAYYJAYTp672FlIPmWSCfDL
	9pyoh9N5A4twK5q+oFZXZitHcqLsuWTjWUamSh8+v0dco1JdyeSUI/QOzHpsSeMo4cK+uUWipczTX
	1iuWGBvi4yeyTjSdFkWeQcxOVqoNke9pcbsxonDqFmJouTDXz2rxwn7JoK4aW4Gp3nNoPkisDeRe8
	L9x0B1Og==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 05/30] pcd: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:20 +0300
Message-Id: <20210602065345.355274-6-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/paride/pcd.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/block/paride/pcd.c b/drivers/block/paride/pcd.c
index 70da8b86ce58..f9cdd11f02f5 100644
--- a/drivers/block/paride/pcd.c
+++ b/drivers/block/paride/pcd.c
@@ -309,21 +309,19 @@ static void pcd_init_units(void)
 
 	pcd_drive_count = 0;
 	for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
-		struct gendisk *disk = alloc_disk(1);
+		struct gendisk *disk;
 
-		if (!disk)
+		if (blk_mq_alloc_sq_tag_set(&cd->tag_set, &pcd_mq_ops, 1,
+				BLK_MQ_F_SHOULD_MERGE))
 			continue;
 
-		disk->queue = blk_mq_init_sq_queue(&cd->tag_set, &pcd_mq_ops,
-						   1, BLK_MQ_F_SHOULD_MERGE);
-		if (IS_ERR(disk->queue)) {
-			disk->queue = NULL;
-			put_disk(disk);
+		disk = blk_mq_alloc_disk(&cd->tag_set, cd);
+		if (IS_ERR(disk)) {
+			blk_mq_free_tag_set(&cd->tag_set);
 			continue;
 		}
 
 		INIT_LIST_HEAD(&cd->rq_list);
-		disk->queue->queuedata = cd;
 		blk_queue_bounce_limit(disk->queue, BLK_BOUNCE_HIGH);
 		cd->disk = disk;
 		cd->pi = &cd->pia;
@@ -343,6 +341,7 @@ static void pcd_init_units(void)
 		cd->info.mask = 0;
 		disk->major = major;
 		disk->first_minor = unit;
+		disk->minors = 1;
 		strcpy(disk->disk_name, cd->name);	/* umm... */
 		disk->fops = &pcd_bdops;
 		disk->flags = GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE;
@@ -759,10 +758,8 @@ static int pcd_detect(void)
 	for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
 		if (!cd->disk)
 			continue;
-		blk_cleanup_queue(cd->disk->queue);
-		cd->disk->queue = NULL;
+		blk_cleanup_disk(cd->disk);
 		blk_mq_free_tag_set(&cd->tag_set);
-		put_disk(cd->disk);
 	}
 	pi_unregister_driver(par_drv);
 	return -1;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:55:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:55:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135490.251712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmR-0001c5-Fc; Wed, 02 Jun 2021 06:55:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135490.251712; Wed, 02 Jun 2021 06:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmR-0001bw-B2; Wed, 02 Jun 2021 06:55:07 +0000
Received: by outflank-mailman (input) for mailman id 135490;
 Wed, 02 Jun 2021 06:55:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmQ-0007A2-3g
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:55:06 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 067e6fa5-e7bd-4578-b8b9-77274bf19698;
 Wed, 02 Jun 2021 06:54:43 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKlg-0025NX-RE; Wed, 02 Jun 2021 06:54:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 067e6fa5-e7bd-4578-b8b9-77274bf19698
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=BVgbkrgI5w8hw6EfzYARLfGWi6Kh9lFpzsXZ419HK1w=; b=U/MbBupmxlqb75y1PHkCyebynQ
	Rt4fL6fa+WzCejYcSifBRXJN9ka135vcRGgG8l7nbiWMw0RO1zAmvQeHG1EB4hkdZrMO1+8e6kT24
	CCFvZVn7hwrTPmSO8fpiW0WjG8kLUcuJo4czgl1Ug2/VWcJs1QWpkGaOQJlwRiQFMiTkkEzvrxwXC
	/fDjwhFDYriH71KkRIgFNetz9g9Hzp0uZkvRppj4VHGBAA/bVFhYgYPuSMVLVDyrW+QDFj6Zc5Hbr
	orlbN/emDladETPZOn+hbjRC0uB/M6eGxSnx/UmBiIxbqX/2NW/f466ciE4Zata7/KUdQ0LnF4eAy
	LoIgT7+A==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 06/30] pf: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:21 +0300
Message-Id: <20210602065345.355274-7-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/paride/pf.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/block/paride/pf.c b/drivers/block/paride/pf.c
index bb09f21ce21a..d5b9c88ba76f 100644
--- a/drivers/block/paride/pf.c
+++ b/drivers/block/paride/pf.c
@@ -294,20 +294,17 @@ static void __init pf_init_units(void)
 	for (unit = 0, pf = units; unit < PF_UNITS; unit++, pf++) {
 		struct gendisk *disk;
 
-		disk = alloc_disk(1);
-		if (!disk)
+		if (blk_mq_alloc_sq_tag_set(&pf->tag_set, &pf_mq_ops, 1,
+				BLK_MQ_F_SHOULD_MERGE))
 			continue;
 
-		disk->queue = blk_mq_init_sq_queue(&pf->tag_set, &pf_mq_ops,
-							1, BLK_MQ_F_SHOULD_MERGE);
-		if (IS_ERR(disk->queue)) {
-			disk->queue = NULL;
-			put_disk(disk);
+		disk = blk_mq_alloc_disk(&pf->tag_set, pf);
+		if (IS_ERR(disk)) {
+			blk_mq_free_tag_set(&pf->tag_set);
 			continue;
 		}
 
 		INIT_LIST_HEAD(&pf->rq_list);
-		disk->queue->queuedata = pf;
 		blk_queue_max_segments(disk->queue, cluster);
 		blk_queue_bounce_limit(disk->queue, BLK_BOUNCE_HIGH);
 		pf->disk = disk;
@@ -318,6 +315,7 @@ static void __init pf_init_units(void)
 		snprintf(pf->name, PF_NAMELEN, "%s%d", name, unit);
 		disk->major = major;
 		disk->first_minor = unit;
+		disk->minors = 1;
 		strcpy(disk->disk_name, pf->name);
 		disk->fops = &pf_fops;
 		disk->events = DISK_EVENT_MEDIA_CHANGE;
@@ -766,10 +764,8 @@ static int pf_detect(void)
 	for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++) {
 		if (!pf->disk)
 			continue;
-		blk_cleanup_queue(pf->disk->queue);
-		pf->disk->queue = NULL;
+		blk_cleanup_disk(pf->disk);
 		blk_mq_free_tag_set(&pf->tag_set);
-		put_disk(pf->disk);
 	}
 	pi_unregister_driver(par_drv);
 	return -1;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:55:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:55:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135493.251722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmV-0002Ar-Tp; Wed, 02 Jun 2021 06:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135493.251722; Wed, 02 Jun 2021 06:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmV-0002Aa-Qc; Wed, 02 Jun 2021 06:55:11 +0000
Received: by outflank-mailman (input) for mailman id 135493;
 Wed, 02 Jun 2021 06:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmV-0007A2-3u
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:55:11 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81d54a99-7c05-4dc5-b2d0-4eabac53016e;
 Wed, 02 Jun 2021 06:54:49 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKln-0025P8-FW; Wed, 02 Jun 2021 06:54:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81d54a99-7c05-4dc5-b2d0-4eabac53016e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=KlBRFDn8CKBuz4Jtw5oxvJwFPJCrn3leZL7f/TB5v/I=; b=JAbKOuEt/ZwlI1Cb8jU8PIVTi0
	rWrP2Q0oyjZ1qHxdn5YfZWmZGCE1OX7t/Z5qHClzdMNkXAAV2MvEk5MtF+zy9GIwLRfySkORmJitv
	hBUNmjxZRc5StQgvgrScxkKXoprJTIDdzdZyS8U/8FI6Ub7l742n47gp+1BOs+WlJvaHsKwHFkDWB
	xBui56bbOzKQ99aHIVAIpMOWuKdRjPxfD/L6mOHl825ZMYqNbqTRQR2bdCKOIAofpqO2q95xIdQCk
	qchgx4WVGIXqbErN5FqxXen4FaqChWEu053VwmNbr8IDVXg22x1cC2laSwL86K80z+bhdU0MJo1y9
	XRk5+uaA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 07/30] ms_block: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:22 +0300
Message-Id: <20210602065345.355274-8-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/memstick/core/ms_block.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index 0bacf4268f83..dac258d12aca 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -2110,21 +2110,17 @@ static int msb_init_disk(struct memstick_dev *card)
 	if (msb->disk_id  < 0)
 		return msb->disk_id;
 
-	msb->disk = alloc_disk(0);
-	if (!msb->disk) {
-		rc = -ENOMEM;
+	rc = blk_mq_alloc_sq_tag_set(&msb->tag_set, &msb_mq_ops, 2,
+				     BLK_MQ_F_SHOULD_MERGE);
+	if (rc)
 		goto out_release_id;
-	}
 
-	msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &msb_mq_ops, 2,
-						BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(msb->queue)) {
-		rc = PTR_ERR(msb->queue);
-		msb->queue = NULL;
-		goto out_put_disk;
+	msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
+	if (IS_ERR(msb->disk)) {
+		rc = PTR_ERR(msb->disk);
+		goto out_free_tag_set;
 	}
-
-	msb->queue->queuedata = card;
+	msb->queue = msb->disk->queue;
 
 	blk_queue_max_hw_sectors(msb->queue, MS_BLOCK_MAX_PAGES);
 	blk_queue_max_segments(msb->queue, MS_BLOCK_MAX_SEGS);
@@ -2135,7 +2131,6 @@ static int msb_init_disk(struct memstick_dev *card)
 	sprintf(msb->disk->disk_name, "msblk%d", msb->disk_id);
 	msb->disk->fops = &msb_bdops;
 	msb->disk->private_data = msb;
-	msb->disk->queue = msb->queue;
 
 	capacity = msb->pages_in_block * msb->logical_block_count;
 	capacity *= (msb->page_size / 512);
@@ -2155,8 +2150,8 @@ static int msb_init_disk(struct memstick_dev *card)
 	dbg("Disk added");
 	return 0;
 
-out_put_disk:
-	put_disk(msb->disk);
+out_free_tag_set:
+	blk_mq_free_tag_set(&msb->tag_set);
 out_release_id:
 	mutex_lock(&msb_disk_lock);
 	idr_remove(&msb_disk_idr, msb->disk_id);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 06:55:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 06:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135498.251734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmg-0002z3-9w; Wed, 02 Jun 2021 06:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135498.251734; Wed, 02 Jun 2021 06:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKmg-0002ys-6J; Wed, 02 Jun 2021 06:55:22 +0000
Received: by outflank-mailman (input) for mailman id 135498;
 Wed, 02 Jun 2021 06:55:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKmf-0007A2-4E
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:55:21 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91c486af-9bea-47be-bfe9-22043f6813e4;
 Wed, 02 Jun 2021 06:54:58 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKls-0025R4-W9; Wed, 02 Jun 2021 06:54:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91c486af-9bea-47be-bfe9-22043f6813e4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=VrhBOXpqbc1/w6Vz4TvYYyGic8czOTYvar2owBC0Nao=; b=M0oF4rS9JAR1tnscAYGs3Ny4Aw
	R0M734kT6ibhdEiE624YEKsyPPj2YIu9zEGz0wkFfjwV4WFPnIh+1qyKDWdZ3z8EzlkVFGAb8Dcf7
	3V8VbExj8uNId3jTIdf48776+H/+fLIW8cknoNSzbxVEsD1AHyQD0KcKh763iNg2evMPCysdTxB14
	ax5A/5pAgqFYu/FRgKYessRM0Gxr+BcwAdKdzvnS3J8oKCUIv74U/fN2dbR6gw8G1XrC9oJZoC90l
	B8UJw9Kz7LkEpdXWVakEj3I01Qm/fH4nuPfSRHY69/phvcTpV74SAlSbAP7u3gkCl0yYJXtqgKgJ3
	bYIwaHtg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 08/30] mspro: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:23 +0300
Message-Id: <20210602065345.355274-9-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/memstick/core/mspro_block.c | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index cf7fe0d58ee7..22778d0e24f5 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -1205,21 +1205,17 @@ static int mspro_block_init_disk(struct memstick_dev *card)
 	if (disk_id < 0)
 		return disk_id;
 
-	msb->disk = alloc_disk(1 << MSPRO_BLOCK_PART_SHIFT);
-	if (!msb->disk) {
-		rc = -ENOMEM;
+	rc = blk_mq_alloc_sq_tag_set(&msb->tag_set, &mspro_mq_ops, 2,
+				     BLK_MQ_F_SHOULD_MERGE);
+	if (rc)
 		goto out_release_id;
-	}
 
-	msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &mspro_mq_ops, 2,
-						BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(msb->queue)) {
-		rc = PTR_ERR(msb->queue);
-		msb->queue = NULL;
-		goto out_put_disk;
+	msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
+	if (IS_ERR(msb->disk)) {
+		rc = PTR_ERR(msb->disk);
+		goto out_free_tag_set;
 	}
-
-	msb->queue->queuedata = card;
+	msb->queue = msb->disk->queue;
 
 	blk_queue_max_hw_sectors(msb->queue, MSPRO_BLOCK_MAX_PAGES);
 	blk_queue_max_segments(msb->queue, MSPRO_BLOCK_MAX_SEGS);
@@ -1228,10 +1224,10 @@ static int mspro_block_init_disk(struct memstick_dev *card)
 
 	msb->disk->major = major;
 	msb->disk->first_minor = disk_id << MSPRO_BLOCK_PART_SHIFT;
+	msb->disk->minors = 1 << MSPRO_BLOCK_PART_SHIFT;
 	msb->disk->fops = &ms_block_bdops;
 	msb->usage_count = 1;
 	msb->disk->private_data = msb;
-	msb->disk->queue = msb->queue;
 
 	sprintf(msb->disk->disk_name, "mspblk%d", disk_id);
 
@@ -1247,8 +1243,8 @@ static int mspro_block_init_disk(struct memstick_dev *card)
 	msb->active = 1;
 	return 0;
 
-out_put_disk:
-	put_disk(msb->disk);
+out_free_tag_set:
+	blk_mq_free_tag_set(&msb->tag_set);
 out_release_id:
 	mutex_lock(&mspro_block_disk_lock);
 	idr_remove(&mspro_block_disk_idr, disk_id);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:01:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:01:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135517.251744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKs1-0005Ib-Up; Wed, 02 Jun 2021 07:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135517.251744; Wed, 02 Jun 2021 07:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKs1-0005IU-Rm; Wed, 02 Jun 2021 07:00:53 +0000
Received: by outflank-mailman (input) for mailman id 135517;
 Wed, 02 Jun 2021 07:00:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKs0-0005IM-0V
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:00:52 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 244085fd-4871-4959-9ff6-5ab815bfa2b1;
 Wed, 02 Jun 2021 07:00:51 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 993EE21937;
 Wed,  2 Jun 2021 07:00:50 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 7809D118DD;
 Wed,  2 Jun 2021 07:00:50 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id YKk1HKIst2A0UgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 244085fd-4871-4959-9ff6-5ab815bfa2b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622617250; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Fa3RUNw+SKpVsbmLK+o0PKvms/4JNeoONXmILzCMOfE=;
	b=gTxVpRZHxGHR83iKqdcL5xgF9im7Dirgal0eAkUfbvMOIMuJz80Z2bzFLcJFObGooIJ70l
	N0PAB1vnC5JnceOAuJtGo/54yY0h8bxiMvzTyHAFafskgUOQicHlfRVoT3qO8PBI7BQm+h
	CXwhRybmIwzkvkkgb6+AaaWt4H/DF2A=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622617250; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Fa3RUNw+SKpVsbmLK+o0PKvms/4JNeoONXmILzCMOfE=;
	b=gTxVpRZHxGHR83iKqdcL5xgF9im7Dirgal0eAkUfbvMOIMuJz80Z2bzFLcJFObGooIJ70l
	N0PAB1vnC5JnceOAuJtGo/54yY0h8bxiMvzTyHAFafskgUOQicHlfRVoT3qO8PBI7BQm+h
	CXwhRybmIwzkvkkgb6+AaaWt4H/DF2A=
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20210601161118.18986-1-olaf@aepfle.de>
 <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
 <20210602085403.40064aed.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
Date: Wed, 2 Jun 2021 09:00:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602085403.40064aed.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="0R5yR0kmJDthhk6p7sd2UWkPl6r3fhbtk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--0R5yR0kmJDthhk6p7sd2UWkPl6r3fhbtk
Content-Type: multipart/mixed; boundary="B8fhApnKctGNEhD5FJdK2EN2rNrsZclQK";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
Message-ID: <51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
References: <20210601161118.18986-1-olaf@aepfle.de>
 <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
 <20210602085403.40064aed.olaf@aepfle.de>
In-Reply-To: <20210602085403.40064aed.olaf@aepfle.de>

--B8fhApnKctGNEhD5FJdK2EN2rNrsZclQK
Content-Type: multipart/mixed;
 boundary="------------E7EB69D824C6B123C0AD1ADA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E7EB69D824C6B123C0AD1ADA
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 08:54, Olaf Hering wrote:
> Am Wed, 2 Jun 2021 08:10:21 +0200
> schrieb Juergen Gross <jgross@suse.com>:
>
>> Would it be possible to split this into multiple independent
>> patches/series?
>
> Sure, I can send each individual part that was already sent over and over again.

IMO this will make it more likely that at least parts get committed.


Juergen

--------------E7EB69D824C6B123C0AD1ADA
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E7EB69D824C6B123C0AD1ADA--

--B8fhApnKctGNEhD5FJdK2EN2rNrsZclQK--

--0R5yR0kmJDthhk6p7sd2UWkPl6r3fhbtk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3LKEFAwAAAAAACgkQsN6d1ii/Ey+S
PAf9Ff/vwymgCqWXpq71uKpWOGKKwso8Mn9TJ7UTOFUSuPWg3FqQWvqj0ozl+2H+ZGXSaKZSaVOh
WYkCzmQ+T53zTpTjZXNw1fouBu2hC0oyDQKZiwxhDiVxp8EijSVdimO9CtIjkHqIGaPbY2QEIvSR
rgI3g64ZsYk9aYZxavm1oRGOvvEe7rqllp0Xf+jNo2WhhyS1yb1Q0H7dEQFHZdC/qizTQiDonp1D
/O9DkOhBY+AHxJdF4285ZDNtZETa9KZPe2bLqPvfkpt5z3dQ7OtPHGb0akAL7JDrrFno2E8GpHSX
krcAKEfitMMWFC5IC6nkwu2NGSdbwpXchRAGoCJ6fw==
=F354
-----END PGP SIGNATURE-----

--0R5yR0kmJDthhk6p7sd2UWkPl6r3fhbtk--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135523.251763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuE-00060A-Oq; Wed, 02 Jun 2021 07:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135523.251763; Wed, 02 Jun 2021 07:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuE-0005zL-H8; Wed, 02 Jun 2021 07:03:10 +0000
Received: by outflank-mailman (input) for mailman id 135523;
 Wed, 02 Jun 2021 07:03:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKoR-0007A2-8F
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:57:11 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b06eb1f-aec6-49d4-92ae-0308a6a0f748;
 Wed, 02 Jun 2021 06:55:35 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmW-0025pH-8N; Wed, 02 Jun 2021 06:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b06eb1f-aec6-49d4-92ae-0308a6a0f748
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=whkiEYkd/RZIFiccnNyVDQBt5GosCsybR61i2OE/TAs=; b=sGyQUwx4y9mBLuKjtVUp+c2wE9
	QKndoQI8+lVj5QubiuPxijfodshDbI28HmJC7XZxoK1I3O/xDGWqqWfWHPGGuhE/xYEpYPaljJGG0
	Fr4gr1lXEhUJVsLAp4MvKyynYGbSyA4OVlnSrYZiKU1gb2XC/QCckw8whfLsI+GKqp2G0C8FEKneP
	do3Abzx9O1Apmvm5lGswbnuiBjA19TYT3MinqZLYbcCGK2VKfmLyeYtMwiBKm8hatOP3dxWqLa5yK
	W7GKvYBzqn1hQiPqfJxptmiWvUrY4I54If9aEuYVsVn2eDAvQ6qCfhLBxQm3PYYJuSGatkU1tnAjt
	EHt0T+HQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 17/30] floppy: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:32 +0300
Message-Id: <20210602065345.355274-18-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/floppy.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 8a9d22207c59..cbed9776f285 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4491,23 +4491,15 @@ static bool floppy_available(int drive)
 static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct gendisk *disk;
-	int err;
-
-	disk = alloc_disk(1);
-	if (!disk)
-		return -ENOMEM;
 
-	disk->queue = blk_mq_init_queue(&tag_sets[drive]);
-	if (IS_ERR(disk->queue)) {
-		err = PTR_ERR(disk->queue);
-		disk->queue = NULL;
-		put_disk(disk);
-		return err;
-	}
+	disk = blk_mq_alloc_disk(&tag_sets[drive], NULL);
+	if (IS_ERR(disk))
+		return PTR_ERR(disk);
 
 	blk_queue_max_hw_sectors(disk->queue, 64);
 	disk->major = FLOPPY_MAJOR;
 	disk->first_minor = TOMINOR(drive) | (type << 2);
+	disk->minors = 1;
 	disk->fops = &floppy_fops;
 	disk->events = DISK_EVENT_MEDIA_CHANGE;
 	if (type)
@@ -4727,10 +4719,8 @@ static int __init do_floppy_init(void)
 		if (!disks[drive][0])
 			break;
 		del_timer_sync(&motor_off_timer[drive]);
-		blk_cleanup_queue(disks[drive][0]->queue);
-		disks[drive][0]->queue = NULL;
+		blk_cleanup_disk(disks[drive][0]);
 		blk_mq_free_tag_set(&tag_sets[drive]);
-		put_disk(disks[drive][0]);
 	}
 	return err;
 }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:11 2021
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 11/30] swim3: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:26 +0300
Message-Id: <20210602065345.355274-12-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/swim3.c | 33 ++++++++++++++-------------------
 1 file changed, 14 insertions(+), 19 deletions(-)

diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index a515d0c1d2cb..965af0a3e95b 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -1202,30 +1202,27 @@ static int swim3_attach(struct macio_dev *mdev,
 			return rc;
 	}
 
-	disk = alloc_disk(1);
-	if (disk == NULL) {
-		rc = -ENOMEM;
-		goto out_unregister;
-	}
-
 	fs = &floppy_states[floppy_count];
 	memset(fs, 0, sizeof(*fs));
 
-	disk->queue = blk_mq_init_sq_queue(&fs->tag_set, &swim3_mq_ops, 2,
-						BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(disk->queue)) {
-		rc = PTR_ERR(disk->queue);
-		disk->queue = NULL;
-		goto out_put_disk;
+	rc = blk_mq_alloc_sq_tag_set(&fs->tag_set, &swim3_mq_ops, 2,
+			BLK_MQ_F_SHOULD_MERGE);
+	if (rc)
+		goto out_unregister;
+
+	disk = blk_mq_alloc_disk(&fs->tag_set, fs);
+	if (IS_ERR(disk)) {
+		rc = PTR_ERR(disk);
+		goto out_free_tag_set;
 	}
-	disk->queue->queuedata = fs;
 
 	rc = swim3_add_device(mdev, floppy_count);
 	if (rc)
-		goto out_cleanup_queue;
+		goto out_cleanup_disk;
 
 	disk->major = FLOPPY_MAJOR;
 	disk->first_minor = floppy_count;
+	disk->minors = 1;
 	disk->fops = &floppy_fops;
 	disk->private_data = fs;
 	disk->events = DISK_EVENT_MEDIA_CHANGE;
@@ -1237,12 +1234,10 @@ static int swim3_attach(struct macio_dev *mdev,
 	disks[floppy_count++] = disk;
 	return 0;
 
-out_cleanup_queue:
-	blk_cleanup_queue(disk->queue);
-	disk->queue = NULL;
+out_cleanup_disk:
+	blk_cleanup_disk(disk);
+out_free_tag_set:
 	blk_mq_free_tag_set(&fs->tag_set);
-out_put_disk:
-	put_disk(disk);
 out_unregister:
 	if (floppy_count == 0)
 		unregister_blkdev(FLOPPY_MAJOR, "fd");
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:13 2021
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 09/30] mtd_blkdevs: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:24 +0300
Message-Id: <20210602065345.355274-10-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/mtd/mtd_blkdevs.c | 48 ++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 26 deletions(-)

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index fb8e12d590a1..5dc4c966ea73 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -30,11 +30,9 @@ static void blktrans_dev_release(struct kref *kref)
 	struct mtd_blktrans_dev *dev =
 		container_of(kref, struct mtd_blktrans_dev, ref);
 
-	dev->disk->private_data = NULL;
-	blk_cleanup_queue(dev->rq);
+	blk_cleanup_disk(dev->disk);
 	blk_mq_free_tag_set(dev->tag_set);
 	kfree(dev->tag_set);
-	put_disk(dev->disk);
 	list_del(&dev->list);
 	kfree(dev);
 }
@@ -354,7 +352,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	if (new->devnum > (MINORMASK >> tr->part_bits) ||
 	    (tr->part_bits && new->devnum >= 27 * 26)) {
 		mutex_unlock(&blktrans_ref_mutex);
-		goto error1;
+		return ret;
 	}
 
 	list_add_tail(&new->list, &tr->devs);
@@ -366,17 +364,28 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	if (!tr->writesect)
 		new->readonly = 1;
 
-	/* Create gendisk */
 	ret = -ENOMEM;
-	gd = alloc_disk(1 << tr->part_bits);
+	new->tag_set = kzalloc(sizeof(*new->tag_set), GFP_KERNEL);
+	if (!new->tag_set)
+		goto out_list_del;
 
-	if (!gd)
-		goto error2;
+	ret = blk_mq_alloc_sq_tag_set(new->tag_set, &mtd_mq_ops, 2,
+			BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
+	if (ret)
+		goto out_kfree_tag_set;
+
+	/* Create gendisk */
+	gd = blk_mq_alloc_disk(new->tag_set, new);
+	if (IS_ERR(gd)) {
+		ret = PTR_ERR(gd);
+		goto out_free_tag_set;
+	}
 
 	new->disk = gd;
 	gd->private_data = new;
 	gd->major = tr->major;
 	gd->first_minor = (new->devnum) << tr->part_bits;
+	gd->minors = 1 << tr->part_bits;
 	gd->fops = &mtd_block_ops;
 
 	if (tr->part_bits)
@@ -398,22 +407,9 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
 
-	new->tag_set = kzalloc(sizeof(*new->tag_set), GFP_KERNEL);
-	if (!new->tag_set)
-		goto error3;
-
-	new->rq = blk_mq_init_sq_queue(new->tag_set, &mtd_mq_ops, 2,
-				BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
-	if (IS_ERR(new->rq)) {
-		ret = PTR_ERR(new->rq);
-		new->rq = NULL;
-		goto error4;
-	}
-
 	if (tr->flush)
 		blk_queue_write_cache(new->rq, true, false);
 
-	new->rq->queuedata = new;
 	blk_queue_logical_block_size(new->rq, tr->blksize);
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
@@ -437,13 +433,13 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 		WARN_ON(ret);
 	}
 	return 0;
-error4:
+
+out_free_tag_set:
+	blk_mq_free_tag_set(new->tag_set);
+out_kfree_tag_set:
 	kfree(new->tag_set);
-error3:
-	put_disk(new->disk);
-error2:
+out_list_del:
 	list_del(&new->list);
-error1:
 	return ret;
 }
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:14 2021
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 12/30] swim: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:27 +0300
Message-Id: <20210602065345.355274-13-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/swim.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/drivers/block/swim.c b/drivers/block/swim.c
index 2917b21f48ff..7ccc8d2a41bc 100644
--- a/drivers/block/swim.c
+++ b/drivers/block/swim.c
@@ -800,23 +800,20 @@ static int swim_floppy_init(struct swim_priv *swd)
 	spin_lock_init(&swd->lock);
 
 	for (drive = 0; drive < swd->floppy_count; drive++) {
-		struct request_queue *q;
-
-		swd->unit[drive].disk = alloc_disk(1);
-		if (swd->unit[drive].disk == NULL) {
-			err = -ENOMEM;
+		err = blk_mq_alloc_sq_tag_set(&swd->unit[drive].tag_set,
+				&swim_mq_ops, 2, BLK_MQ_F_SHOULD_MERGE);
+		if (err)
 			goto exit_put_disks;
-		}
 
-		q = blk_mq_init_sq_queue(&swd->unit[drive].tag_set, &swim_mq_ops,
-						2, BLK_MQ_F_SHOULD_MERGE);
-		if (IS_ERR(q)) {
-			err = PTR_ERR(q);
+		swd->unit[drive].disk =
+			blk_mq_alloc_disk(&swd->unit[drive].tag_set,
+					  &swd->unit[drive]);
+		if (IS_ERR(swd->unit[drive].disk)) {
+			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
+			err = PTR_ERR(swd->unit[drive].disk);
 			goto exit_put_disks;
 		}
 
-		swd->unit[drive].disk->queue = q;
-		swd->unit[drive].disk->queue->queuedata = &swd->unit[drive];
 		swd->unit[drive].swd = swd;
 	}
 
@@ -824,6 +821,7 @@ static int swim_floppy_init(struct swim_priv *swd)
 		swd->unit[drive].disk->flags = GENHD_FL_REMOVABLE;
 		swd->unit[drive].disk->major = FLOPPY_MAJOR;
 		swd->unit[drive].disk->first_minor = drive;
+		swd->unit[drive].disk->minors = 1;
 		sprintf(swd->unit[drive].disk->disk_name, "fd%d", drive);
 		swd->unit[drive].disk->fops = &floppy_fops;
 		swd->unit[drive].disk->events = DISK_EVENT_MEDIA_CHANGE;
@@ -839,14 +837,10 @@ static int swim_floppy_init(struct swim_priv *swd)
 	do {
 		struct gendisk *disk = swd->unit[drive].disk;
 
-		if (disk) {
-			if (disk->queue) {
-				blk_cleanup_queue(disk->queue);
-				disk->queue = NULL;
-			}
-			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
-			put_disk(disk);
-		}
+		if (!disk)
+			continue;
+		blk_cleanup_disk(disk);
+		blk_mq_free_tag_set(&swd->unit[drive].tag_set);
 	} while (drive--);
 	return err;
 }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:15 2021
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 24/30] sx8: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:39 +0300
Message-Id: <20210602065345.355274-25-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/sx8.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index 2cdf2771f8e8..f01f860b0e62 100644
--- a/drivers/block/sx8.c
+++ b/drivers/block/sx8.c
@@ -1343,32 +1343,25 @@ static int carm_init_disk(struct carm_host *host, unsigned int port_no)
 {
 	struct carm_port *port = &host->port[port_no];
 	struct gendisk *disk;
-	struct request_queue *q;
 
 	port->host = host;
 	port->port_no = port_no;
 
-	disk = alloc_disk(CARM_MINORS_PER_MAJOR);
-	if (!disk)
-		return -ENOMEM;
+	disk = blk_mq_alloc_disk(&host->tag_set, port);
+	if (IS_ERR(disk))
+		return PTR_ERR(disk);
 
 	port->disk = disk;
 	sprintf(disk->disk_name, DRV_NAME "/%u",
 		(unsigned int)host->id * CARM_MAX_PORTS + port_no);
 	disk->major = host->major;
 	disk->first_minor = port_no * CARM_MINORS_PER_MAJOR;
+	disk->minors = CARM_MINORS_PER_MAJOR;
 	disk->fops = &carm_bd_ops;
 	disk->private_data = port;
 
-	q = blk_mq_init_queue(&host->tag_set);
-	if (IS_ERR(q))
-		return PTR_ERR(q);
-
-	blk_queue_max_segments(q, CARM_MAX_REQ_SG);
-	blk_queue_segment_boundary(q, CARM_SG_BOUNDARY);
-
-	q->queuedata = port;
-	disk->queue = q;
+	blk_queue_max_segments(disk->queue, CARM_MAX_REQ_SG);
+	blk_queue_segment_boundary(disk->queue, CARM_SG_BOUNDARY);
 	return 0;
 }
 
@@ -1382,9 +1375,7 @@ static void carm_free_disk(struct carm_host *host, unsigned int port_no)
 
 	if (disk->flags & GENHD_FL_UP)
 		del_gendisk(disk);
-	if (disk->queue)
-		blk_cleanup_queue(disk->queue);
-	put_disk(disk);
+	blk_cleanup_disk(disk);
 }
 
 static int carm_init_shm(struct carm_host *host)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:15 2021
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 21/30] pd: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:36 +0300
Message-Id: <20210602065345.355274-22-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/paride/pd.c | 30 ++++++++++++------------------
 1 file changed, 12 insertions(+), 18 deletions(-)

diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c
index 828a45ffe0e7..3b2b8e872beb 100644
--- a/drivers/block/paride/pd.c
+++ b/drivers/block/paride/pd.c
@@ -879,18 +879,6 @@ static void pd_probe_drive(struct pd_unit *disk)
 {
 	struct gendisk *p;
 
-	p = alloc_disk(1 << PD_BITS);
-	if (!p)
-		return;
-
-	strcpy(p->disk_name, disk->name);
-	p->fops = &pd_fops;
-	p->major = major;
-	p->first_minor = (disk - pd) << PD_BITS;
-	p->events = DISK_EVENT_MEDIA_CHANGE;
-	disk->gd = p;
-	p->private_data = disk;
-
 	memset(&disk->tag_set, 0, sizeof(disk->tag_set));
 	disk->tag_set.ops = &pd_mq_ops;
 	disk->tag_set.cmd_size = sizeof(struct pd_req);
@@ -903,14 +891,21 @@ static void pd_probe_drive(struct pd_unit *disk)
 	if (blk_mq_alloc_tag_set(&disk->tag_set))
 		return;
 
-	p->queue = blk_mq_init_queue(&disk->tag_set);
-	if (IS_ERR(p->queue)) {
+	p = blk_mq_alloc_disk(&disk->tag_set, disk);
+	if (IS_ERR(p)) {
 		blk_mq_free_tag_set(&disk->tag_set);
-		p->queue = NULL;
 		return;
 	}
+	disk->gd = p;
+
+	strcpy(p->disk_name, disk->name);
+	p->fops = &pd_fops;
+	p->major = major;
+	p->first_minor = (disk - pd) << PD_BITS;
+	p->minors = 1 << PD_BITS;
+	p->events = DISK_EVENT_MEDIA_CHANGE;
+	p->private_data = disk;
 
-	p->queue->queuedata = disk;
 	blk_queue_max_hw_sectors(p->queue, cluster);
 	blk_queue_bounce_limit(p->queue, BLK_BOUNCE_HIGH);
 
@@ -1019,9 +1014,8 @@ static void __exit pd_exit(void)
 		if (p) {
 			disk->gd = NULL;
 			del_gendisk(p);
-			blk_cleanup_queue(p->queue);
 			blk_mq_free_tag_set(&disk->tag_set);
-			put_disk(p);
+			blk_cleanup_disk(p);
 			pi_release(disk->pi);
 		}
 	}
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135533.251807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuK-0007AT-1g; Wed, 02 Jun 2021 07:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135533.251807; Wed, 02 Jun 2021 07:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuJ-00078F-Mz; Wed, 02 Jun 2021 07:03:15 +0000
Received: by outflank-mailman (input) for mailman id 135533;
 Wed, 02 Jun 2021 07:03:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpo-0007A2-BA
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:36 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bcd4f9d-4ff9-4974-b0a8-629787498a7d;
 Wed, 02 Jun 2021 06:56:20 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKnE-0026TF-Pp; Wed, 02 Jun 2021 06:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bcd4f9d-4ff9-4974-b0a8-629787498a7d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=t0ZfpA7/b1tuSB9Qdmm+6Pkt4GtuM/+pFYP7xY239G0=; b=uyZhEjeEW990KkF+Rai7xjxEUV
	5bWN+uUzqHopzatrtAIE4foiriDdqCyVhGFR6vrU02JeY+SQh7ATWFUZXA3UpyDJcgDYSDLhQXP9V
	RbuXa4k+DVNm7KNjqg4cbdIBM1Va/6aZbjJtQTnqImkNPJ0qKqmBir0MJFXK6gK5PvPfEwCxj1xmD
	dJgu840EtdLj1HHThtV37MaOh3u42wXScgCL7y9GVONAhUdX9SrS6vntka8Pn6bww45YDmPKVYARg
	M/UkenAPSpkgf5JK1kNXiSwoq6fWJ8kR9bHE+sA5X31QgA+XlZSlp6YoKzXnGG50xif3YpnMjuZQG
	qCj5Hh3A==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 27/30] scm_blk: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:42 +0300
Message-Id: <20210602065345.355274-28-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/s390/block/scm_blk.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index a4f6f2e62b1d..88cba6212ee2 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -462,12 +462,12 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 	if (ret)
 		goto out;
 
-	rq = blk_mq_init_queue(&bdev->tag_set);
-	if (IS_ERR(rq)) {
-		ret = PTR_ERR(rq);
+	bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, scmdev);
+	if (IS_ERR(bdev->gendisk)) {
+		ret = PTR_ERR(bdev->gendisk);
 		goto out_tag;
 	}
-	bdev->rq = rq;
+	rq = bdev->rq = bdev->gendisk->queue;
 	nr_max_blk = min(scmdev->nr_max_block,
 			 (unsigned int) (PAGE_SIZE / sizeof(struct aidaw)));
 
@@ -477,17 +477,11 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
 
-	bdev->gendisk = alloc_disk(SCM_NR_PARTS);
-	if (!bdev->gendisk) {
-		ret = -ENOMEM;
-		goto out_queue;
-	}
-	rq->queuedata = scmdev;
 	bdev->gendisk->private_data = scmdev;
 	bdev->gendisk->fops = &scm_blk_devops;
-	bdev->gendisk->queue = rq;
 	bdev->gendisk->major = scm_major;
 	bdev->gendisk->first_minor = devindex * SCM_NR_PARTS;
+	bdev->gendisk->minors = SCM_NR_PARTS;
 
 	len = snprintf(bdev->gendisk->disk_name, DISK_NAME_LEN, "scm");
 	if (devindex > 25) {
@@ -504,8 +498,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 	device_add_disk(&scmdev->dev, bdev->gendisk, NULL);
 	return 0;
 
-out_queue:
-	blk_cleanup_queue(rq);
 out_tag:
 	blk_mq_free_tag_set(&bdev->tag_set);
 out:
@@ -516,9 +508,8 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 void scm_blk_dev_cleanup(struct scm_blk_dev *bdev)
 {
 	del_gendisk(bdev->gendisk);
-	blk_cleanup_queue(bdev->gendisk->queue);
+	blk_cleanup_disk(bdev->gendisk);
 	blk_mq_free_tag_set(&bdev->tag_set);
-	put_disk(bdev->gendisk);
 }
 
 void scm_blk_set_available(struct scm_blk_dev *bdev)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135539.251833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuR-0008O8-Rf; Wed, 02 Jun 2021 07:03:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135539.251833; Wed, 02 Jun 2021 07:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuR-0008Nn-Ky; Wed, 02 Jun 2021 07:03:23 +0000
Received: by outflank-mailman (input) for mailman id 135539;
 Wed, 02 Jun 2021 07:03:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loKqX-0007A2-DI
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:59:21 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e37098d-3c97-4c7c-a59d-1bc383dcfcf9;
 Wed, 02 Jun 2021 06:59:15 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 578F01FD2D;
 Wed,  2 Jun 2021 06:59:14 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2A94E118DD;
 Wed,  2 Jun 2021 06:59:14 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 4HqCCEIst2AxUQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 06:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e37098d-3c97-4c7c-a59d-1bc383dcfcf9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622617154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rdBvG+0FsUAmReTO/XftE6HTFmGjRf0uXjCMPsQH1nM=;
	b=S/tQZEvbXcTqFoE3ou9CnoeJagiLipzTe9C+1VjHRVvajWC/ApbYROkH0XgIOgV+c5KtjZ
	2MhJP/Budy1TeXOv/tbghfG8b5S9R6U00I/UtUsr9HIomY2QZdAVC6MMLiEGtjvc/Iaq1B
	dBsrFoV3/2F6Af/wdN9U2AVMRdVRlNA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622617154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rdBvG+0FsUAmReTO/XftE6HTFmGjRf0uXjCMPsQH1nM=;
	b=S/tQZEvbXcTqFoE3ou9CnoeJagiLipzTe9C+1VjHRVvajWC/ApbYROkH0XgIOgV+c5KtjZ
	2MhJP/Budy1TeXOv/tbghfG8b5S9R6U00I/UtUsr9HIomY2QZdAVC6MMLiEGtjvc/Iaq1B
	dBsrFoV3/2F6Af/wdN9U2AVMRdVRlNA=
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
Date: Wed, 2 Jun 2021 08:59:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-8-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Qbsz8S2ZeH31Pb7jaHGGo8aAeWhDzmbty"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Qbsz8S2ZeH31Pb7jaHGGo8aAeWhDzmbty
Content-Type: multipart/mixed; boundary="D7PqIiafqrscaQG9vey6UULGmaCe3LuDX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-8-olaf@aepfle.de>

--D7PqIiafqrscaQG9vey6UULGmaCe3LuDX
Content-Type: multipart/mixed;
 boundary="------------599F303F92C22A3F17171267"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------599F303F92C22A3F17171267
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 01.06.21 18:10, Olaf Hering wrote:
> Introduce a helper which decides if a given pfn type has data
> for the migration stream.
> 
> No change in behavior intended.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/saverestore/common.h  | 17 ++++++++++++++++
>   tools/libs/saverestore/restore.c | 34 +++++---------------------------
>   tools/libs/saverestore/save.c    | 14 ++-----------
>   3 files changed, 24 insertions(+), 41 deletions(-)
> 
> diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
> index 36946e5d48..50a8479d39 100644
> --- a/tools/libs/saverestore/common.h
> +++ b/tools/libs/saverestore/common.h
> @@ -467,6 +467,23 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
>   /* Handle a STATIC_DATA_END record. */
>   int handle_static_data_end(struct xc_sr_context *ctx);
>   
> +static inline bool page_type_has_stream_data(uint32_t type)
> +{
> +    bool ret;
> +
> +    switch (type)
> +    {
> +    case XEN_DOMCTL_PFINFO_XTAB:
> +    case XEN_DOMCTL_PFINFO_XALLOC:
> +    case XEN_DOMCTL_PFINFO_BROKEN:
> +        ret = false;
> +        break;
> +    default:
> +        ret = true;
> +        break;
> +    }
> +    return ret;
> +}
>   #endif
>   /*
>    * Local variables:
> diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
> index cccb0dcb71..700f9e74b5 100644
> --- a/tools/libs/saverestore/restore.c
> +++ b/tools/libs/saverestore/restore.c
> @@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
>   
>       for ( i = 0; i < count; ++i )
>       {
> -        if ( (!types || (types &&
> -                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
> -                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
> +        if ( (!types ||
> +              (types && page_type_has_stream_data(types[i]) == true)) &&

What about XEN_DOMCTL_PFINFO_XALLOC? Is this case impossible here, or
are you changing behavior?


Juergen

--------------599F303F92C22A3F17171267--

--D7PqIiafqrscaQG9vey6UULGmaCe3LuDX--


--Qbsz8S2ZeH31Pb7jaHGGo8aAeWhDzmbty--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135547.251844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuY-0000u9-4O; Wed, 02 Jun 2021 07:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135547.251844; Wed, 02 Jun 2021 07:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuX-0000tx-W1; Wed, 02 Jun 2021 07:03:29 +0000
Received: by outflank-mailman (input) for mailman id 135547;
 Wed, 02 Jun 2021 07:03:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKni-0007A2-6Y
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:56:26 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35fb7b1f-44c7-4b27-ad45-0306383e6a1a;
 Wed, 02 Jun 2021 06:55:16 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmD-0025eq-Jk; Wed, 02 Jun 2021 06:54:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35fb7b1f-44c7-4b27-ad45-0306383e6a1a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=pV41u4G0u0xaNGxoxDb/eyNeH9eLbF/Aei3AXX+HWdY=; b=xynVkbApPKJ1V6SCFnDhqzTRhk
	HG0wUR2lljnrg8jjaBi3fB5wReJl4j8YP38RFTuxGHRPiXc3MvRjlT3pljHnX0XyyZXySOVGvD36r
	Yn+kDsgqGIGCgFW7UzE2sDUrTXFZpZSvReujYSqg/GDnk+jMrGw/l7eQr8EmpYPtyi2dlKoDQZniQ
	A295rgd6IcGuhT/qNWGyPQm803gvicBX9TIVBVODY+wwREKWrX67HS+VcQ/s9Gl/7xUWi6bQoGvGi
	xaTI/gKDuismLjMiDlvwEAfqkBTMh3FXwJk3/FqJCq/s8UQ5jthfbUwng9lkO26oujqiNf9SJQcbE
	JqveapQg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 13/30] sunvdc: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:28 +0300
Message-Id: <20210602065345.355274-14-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/sunvdc.c | 47 ++++++++++++------------------------------
 1 file changed, 13 insertions(+), 34 deletions(-)

diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 39aeebc6837d..c53b38578bb7 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -780,27 +780,6 @@ static const struct blk_mq_ops vdc_mq_ops = {
 	.queue_rq	= vdc_queue_rq,
 };
 
-static void cleanup_queue(struct request_queue *q)
-{
-	struct vdc_port *port = q->queuedata;
-
-	blk_cleanup_queue(q);
-	blk_mq_free_tag_set(&port->tag_set);
-}
-
-static struct request_queue *init_queue(struct vdc_port *port)
-{
-	struct request_queue *q;
-
-	q = blk_mq_init_sq_queue(&port->tag_set, &vdc_mq_ops, VDC_TX_RING_SIZE,
-					BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(q))
-		return q;
-
-	q->queuedata = port;
-	return q;
-}
-
 static int probe_disk(struct vdc_port *port)
 {
 	struct request_queue *q;
@@ -838,21 +817,21 @@ static int probe_disk(struct vdc_port *port)
 				    (u64)geom.num_sec);
 	}
 
-	q = init_queue(port);
-	if (IS_ERR(q)) {
-		printk(KERN_ERR PFX "%s: Could not allocate queue.\n",
-		       port->vio.name);
-		return PTR_ERR(q);
-	}
-	g = alloc_disk(1 << PARTITION_SHIFT);
-	if (!g) {
+	err = blk_mq_alloc_sq_tag_set(&port->tag_set, &vdc_mq_ops,
+			VDC_TX_RING_SIZE, BLK_MQ_F_SHOULD_MERGE);
+	if (err)
+		return err;
+
+	g = blk_mq_alloc_disk(&port->tag_set, port);
+	if (IS_ERR(g)) {
 		printk(KERN_ERR PFX "%s: Could not allocate gendisk.\n",
 		       port->vio.name);
-		cleanup_queue(q);
-		return -ENOMEM;
+		blk_mq_free_tag_set(&port->tag_set);
+		return PTR_ERR(g);
 	}
 
 	port->disk = g;
+	q = g->queue;
 
 	/* Each segment in a request is up to an aligned page in size. */
 	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
@@ -862,6 +841,7 @@ static int probe_disk(struct vdc_port *port)
 	blk_queue_max_hw_sectors(q, port->max_xfer_size);
 	g->major = vdc_major;
 	g->first_minor = port->vio.vdev->dev_no << PARTITION_SHIFT;
+	g->minors = 1 << PARTITION_SHIFT;
 	strcpy(g->disk_name, port->disk_name);
 
 	g->fops = &vdc_fops;
@@ -1083,9 +1063,8 @@ static int vdc_port_remove(struct vio_dev *vdev)
 		del_timer_sync(&port->vio.timer);
 
 		del_gendisk(port->disk);
-		cleanup_queue(port->disk->queue);
-		put_disk(port->disk);
-		port->disk = NULL;
+		blk_cleanup_disk(port->disk);
+		blk_mq_free_tag_set(&port->tag_set);
 
 		vdc_free_tx_ring(port);
 		vio_ldc_free(&port->vio);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135550.251852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuZ-0001Ec-G7; Wed, 02 Jun 2021 07:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135550.251852; Wed, 02 Jun 2021 07:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuZ-0001D2-Ar; Wed, 02 Jun 2021 07:03:31 +0000
Received: by outflank-mailman (input) for mailman id 135550;
 Wed, 02 Jun 2021 07:03:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKog-0007A2-8l
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:57:26 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 831013cb-db5d-4f6d-83af-e0e71db48b75;
 Wed, 02 Jun 2021 06:55:41 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKma-0025sw-BW; Wed, 02 Jun 2021 06:55:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 831013cb-db5d-4f6d-83af-e0e71db48b75
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Ay3+BGPlRqdc0EFLNjvSyNAlCXYv7F/E4ZKoFdkW+qs=; b=re781XvQTLqNz+tW0K+eeLN126
	ZHdsuYJ2Awc6hKt5NXrxWVcF1gLtmPmqrRoYM1HI43jkn7Qf75wWP01UlBu2V2sY61J+OM/Etw42Y
	+/Y9bV0NZSR7XSiaIB3hoA5FplO+Knn8VPNUx84Za0n+grjUeqHa7Ia2Y6hM5luzozyZJ8Is3hs5t
	0f1bIF+gL/yS4tMtIJDr7RZZ0gaHP+N8QcEaL1x8gVuo08fsq6poeekNRuls+ivPNI9U2B65ypj48
	yQjKdXwi7x17Mz91bhzCLHhe29vJAUjBIJ8vYze0d6zCgmUka+cbdrK9DUBsKkhmarGXEPYZxI+5z
	n/v3EyXw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 18/30] loop: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:33 +0300
Message-Id: <20210602065345.355274-19-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 95c570f5923f..3f40e673a101 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2117,12 +2117,12 @@ static int loop_add(struct loop_device **l, int i)
 	if (err)
 		goto out_free_idr;
 
-	lo->lo_queue = blk_mq_init_queue(&lo->tag_set);
-	if (IS_ERR(lo->lo_queue)) {
-		err = PTR_ERR(lo->lo_queue);
+	disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, lo);
+	if (IS_ERR(disk)) {
+		err = PTR_ERR(disk);
 		goto out_cleanup_tags;
 	}
-	lo->lo_queue->queuedata = lo;
+	lo->lo_queue = lo->lo_disk->queue;
 
 	blk_queue_max_hw_sectors(lo->lo_queue, BLK_DEF_MAX_SECTORS);
 
@@ -2134,11 +2134,6 @@ static int loop_add(struct loop_device **l, int i)
 	 */
 	blk_queue_flag_set(QUEUE_FLAG_NOMERGES, lo->lo_queue);
 
-	err = -ENOMEM;
-	disk = lo->lo_disk = alloc_disk(1 << part_shift);
-	if (!disk)
-		goto out_free_queue;
-
 	/*
 	 * Disable partition scanning by default. The in-kernel partition
 	 * scanning can be requested individually per-device during its
@@ -2166,6 +2161,7 @@ static int loop_add(struct loop_device **l, int i)
 	spin_lock_init(&lo->lo_lock);
 	disk->major		= LOOP_MAJOR;
 	disk->first_minor	= i << part_shift;
+	disk->minors		= 1 << part_shift;
 	disk->fops		= &lo_fops;
 	disk->private_data	= lo;
 	disk->queue		= lo->lo_queue;
@@ -2174,8 +2170,6 @@ static int loop_add(struct loop_device **l, int i)
 	*l = lo;
 	return lo->lo_number;
 
-out_free_queue:
-	blk_cleanup_queue(lo->lo_queue);
 out_cleanup_tags:
 	blk_mq_free_tag_set(&lo->tag_set);
 out_free_idr:
@@ -2189,9 +2183,8 @@ static int loop_add(struct loop_device **l, int i)
 static void loop_remove(struct loop_device *lo)
 {
 	del_gendisk(lo->lo_disk);
-	blk_cleanup_queue(lo->lo_queue);
 	blk_mq_free_tag_set(&lo->tag_set);
-	put_disk(lo->lo_disk);
+	blk_cleanup_disk(lo->lo_disk);
 	mutex_destroy(&lo->lo_mutex);
 	kfree(lo);
 }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135551.251860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKua-0001Op-Ny; Wed, 02 Jun 2021 07:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135551.251860; Wed, 02 Jun 2021 07:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKua-0001Mx-2o; Wed, 02 Jun 2021 07:03:32 +0000
Received: by outflank-mailman (input) for mailman id 135551;
 Wed, 02 Jun 2021 07:03:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpe-0007A2-Ar
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:26 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d53a957c-8725-41b8-8647-9d52c1a3d3a3;
 Wed, 02 Jun 2021 06:56:13 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKn5-0026La-Sd; Wed, 02 Jun 2021 06:55:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d53a957c-8725-41b8-8647-9d52c1a3d3a3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Nt6wErfubiqnYVzweXP3z8/rIFCiMtQahAHnZtmIiWQ=; b=hKOfYl8YkpHQsdsCJlgWZugO4N
	8fA3EUPkn9+TcsNuxwOAiqb3mNj8IytqoaliRlOJGLTV2vKpHMO6rQ1hlUlt0+Mkm+c5R/1hkmkn7
	oTHREX9ZW78opJHrkSMxB5aCt3NEqpb+8QhNRYh4/wvPcfO1Rb2/ZQR+g2Eo/Q/BlHFp+u+PN1zzV
	pUp3mo54MUYKztGaP64TzrsyiDi6S2vcUaJtmVAkviIbIqZRFaDYaSgCXtB5+UEr/Ho31XLAOGGlI
	c+6ImRaQZ98uKF5v5kiT9CbtO1B6knRIsPn1hDSDTVU+kF579t+oYPcuUhzdK5Z+yGE/Xb+qFY58k
	TDv/4cKQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 25/30] xen-blkfront: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:40 +0300
Message-Id: <20210602065345.355274-26-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/xen-blkfront.c | 96 +++++++++++++++---------------------
 1 file changed, 39 insertions(+), 57 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index f2c1aedcdf5a..8d49f8fa98bb 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -968,48 +968,6 @@ static void blkif_set_queue_limits(struct blkfront_info *info)
 	blk_queue_dma_alignment(rq, 511);
 }
 
-static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
-				unsigned int physical_sector_size)
-{
-	struct request_queue *rq;
-	struct blkfront_info *info = gd->private_data;
-
-	memset(&info->tag_set, 0, sizeof(info->tag_set));
-	info->tag_set.ops = &blkfront_mq_ops;
-	info->tag_set.nr_hw_queues = info->nr_rings;
-	if (HAS_EXTRA_REQ && info->max_indirect_segments == 0) {
-		/*
-		 * When indirect descriptior is not supported, the I/O request
-		 * will be split between multiple request in the ring.
-		 * To avoid problems when sending the request, divide by
-		 * 2 the depth of the queue.
-		 */
-		info->tag_set.queue_depth =  BLK_RING_SIZE(info) / 2;
-	} else
-		info->tag_set.queue_depth = BLK_RING_SIZE(info);
-	info->tag_set.numa_node = NUMA_NO_NODE;
-	info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	info->tag_set.cmd_size = sizeof(struct blkif_req);
-	info->tag_set.driver_data = info;
-
-	if (blk_mq_alloc_tag_set(&info->tag_set))
-		return -EINVAL;
-	rq = blk_mq_init_queue(&info->tag_set);
-	if (IS_ERR(rq)) {
-		blk_mq_free_tag_set(&info->tag_set);
-		return PTR_ERR(rq);
-	}
-
-	rq->queuedata = info;
-	info->rq = gd->queue = rq;
-	info->gd = gd;
-	info->sector_size = sector_size;
-	info->physical_sector_size = physical_sector_size;
-	blkif_set_queue_limits(info);
-
-	return 0;
-}
-
 static const char *flush_info(struct blkfront_info *info)
 {
 	if (info->feature_flush && info->feature_fua)
@@ -1146,12 +1104,36 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 	err = xlbd_reserve_minors(minor, nr_minors);
 	if (err)
-		goto out;
+		return err;
 	err = -ENODEV;
 
-	gd = alloc_disk(nr_minors);
-	if (gd == NULL)
-		goto release;
+	memset(&info->tag_set, 0, sizeof(info->tag_set));
+	info->tag_set.ops = &blkfront_mq_ops;
+	info->tag_set.nr_hw_queues = info->nr_rings;
+	if (HAS_EXTRA_REQ && info->max_indirect_segments == 0) {
+		/*
+		 * When indirect descriptor is not supported, the I/O request
+		 * will be split between multiple requests in the ring.
+		 * To avoid problems when sending the request, divide by
+		 * 2 the depth of the queue.
+		 */
+		info->tag_set.queue_depth =  BLK_RING_SIZE(info) / 2;
+	} else
+		info->tag_set.queue_depth = BLK_RING_SIZE(info);
+	info->tag_set.numa_node = NUMA_NO_NODE;
+	info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	info->tag_set.cmd_size = sizeof(struct blkif_req);
+	info->tag_set.driver_data = info;
+
+	err = blk_mq_alloc_tag_set(&info->tag_set);
+	if (err)
+		goto out_release_minors;
+
+	gd = blk_mq_alloc_disk(&info->tag_set, info);
+	if (IS_ERR(gd)) {
+		err = PTR_ERR(gd);
+		goto out_free_tag_set;
+	}
 
 	strcpy(gd->disk_name, DEV_NAME);
 	ptr = encode_disk_name(gd->disk_name + sizeof(DEV_NAME) - 1, offset);
@@ -1164,14 +1146,16 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 	gd->major = XENVBD_MAJOR;
 	gd->first_minor = minor;
+	gd->minors = nr_minors;
 	gd->fops = &xlvbd_block_fops;
 	gd->private_data = info;
 	set_capacity(gd, capacity);
 
-	if (xlvbd_init_blk_queue(gd, sector_size, physical_sector_size)) {
-		del_gendisk(gd);
-		goto release;
-	}
+	info->rq = gd->queue;
+	info->gd = gd;
+	info->sector_size = sector_size;
+	info->physical_sector_size = physical_sector_size;
+	blkif_set_queue_limits(info);
 
 	xlvbd_flush(info);
 
@@ -1186,9 +1170,10 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 	return 0;
 
- release:
+out_free_tag_set:
+	blk_mq_free_tag_set(&info->tag_set);
+out_release_minors:
 	xlbd_release_minors(minor, nr_minors);
- out:
 	return err;
 }
 
@@ -1217,12 +1202,9 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 	nr_minors = info->gd->minors;
 	xlbd_release_minors(minor, nr_minors);
 
-	blk_cleanup_queue(info->rq);
-	blk_mq_free_tag_set(&info->tag_set);
-	info->rq = NULL;
-
-	put_disk(info->gd);
+	blk_cleanup_disk(info->gd);
 	info->gd = NULL;
+	blk_mq_free_tag_set(&info->tag_set);
 }
 
 /* Already hold rinfo->ring_lock. */
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135552.251867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKub-0001UM-FH; Wed, 02 Jun 2021 07:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135552.251867; Wed, 02 Jun 2021 07:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKua-0001SB-I4; Wed, 02 Jun 2021 07:03:32 +0000
Received: by outflank-mailman (input) for mailman id 135552;
 Wed, 02 Jun 2021 07:03:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKov-0007A2-9Q
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:57:41 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 753a3bf8-eb67-40a9-9ab9-e22be25b8ddd;
 Wed, 02 Jun 2021 06:55:51 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmj-00260g-93; Wed, 02 Jun 2021 06:55:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 753a3bf8-eb67-40a9-9ab9-e22be25b8ddd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=dPXgPagX0YUjFpHkmb7+BJvE2rgQGNh10xmKRHSEoBw=; b=BmfFZoZgZc3og4CmFwOmdLx19i
	QO/EHgrku+yvgdhYC17jfB1aTEH3npIjzEWzSTOP+FtfRG47SoCYsd6XDZdDvG8QFc30baslGnBO5
	603qv1a5lD3zj6piDqc7/7HezJHswlQmBy2hQzdsqi+Pq2AqXMvtz6CZelFZP4OuEsOQ481G/g0oe
	2D5d45xL9+Y8IhSZ0SJ7QgmfDd59y+PvwieZLqYIaXHXwCMZQBwPlFGdwrK57V7ShUrkG8V37WJ3E
	EOpP42tJMZNBze5kjBlU3735zuT501c+A9ju4eeM4xto97GnaI9ogfZqwLOqhLmVBPmt6XU0UdEOR
	G7vAYXaA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 20/30] nullb: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:35 +0300
Message-Id: <20210602065345.355274-21-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/null_blk/main.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d8e098f1e5b5..74fb2ec63219 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1851,13 +1851,12 @@ static int null_add_dev(struct nullb_device *dev)
 
 		rv = -ENOMEM;
 		nullb->tag_set->timeout = 5 * HZ;
-		nullb->q = blk_mq_init_queue_data(nullb->tag_set, nullb);
-		if (IS_ERR(nullb->q))
-			goto out_cleanup_tags;
-		nullb->disk = alloc_disk_node(1, nullb->dev->home_node);
-		if (!nullb->disk)
+		nullb->disk = blk_mq_alloc_disk(nullb->tag_set, nullb);
+		if (IS_ERR(nullb->disk)) {
+			rv = PTR_ERR(nullb->disk);
 			goto out_cleanup_disk;
-		nullb->disk->queue = nullb->q;
+		}
+		nullb->q = nullb->disk->queue;
 	} else if (dev->queue_mode == NULL_Q_BIO) {
 		rv = -ENOMEM;
 		nullb->disk = blk_alloc_disk(nullb->dev->home_node);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135553.251873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuc-0001p3-En; Wed, 02 Jun 2021 07:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135553.251873; Wed, 02 Jun 2021 07:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKub-0001k4-Q4; Wed, 02 Jun 2021 07:03:33 +0000
Received: by outflank-mailman (input) for mailman id 135553;
 Wed, 02 Jun 2021 07:03:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKo7-0007A2-7X
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:56:51 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccbb73db-3b46-42b2-b0b6-505b9c0b6592;
 Wed, 02 Jun 2021 06:55:30 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmR-0025lm-Ax; Wed, 02 Jun 2021 06:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccbb73db-3b46-42b2-b0b6-505b9c0b6592
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=JLPBr5a4Ln5tpVtARufaz5Rx9DFRvnAc8JDCRXj5CBM=; b=FBI391AmGpCGer8nPkf12dCy+o
	eIEdbU42pvJv3rPD9dFUDNG20907L9ASE2fIQkUHxjUoIi8PgAItnVYsfQZ6hIztO2P1/v5yAnQCY
	AEIEL8vqjQuYY/yLZNb4GnSP5PvUwFzpizHw+fRWs4gY52j0aZwRyeKYcxL1PkeJAHD+Jt16XML9Y
	6k/bwePGimCyBX6ZWlmiXus1eyEpeartBYIhWxPwfdF5ffOhB85DbtrRd+aBJwu9lstkJ64aPi30u
	wP4A8hbv0iVNdk/AGNcCWJHaTuex2I57tAI1/KazjM7mVhow4NChmgEaGqXE5WnMhD7z0Peh8k1rH
	SZflGEZA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 16/30] aoe: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:31 +0300
Message-Id: <20210602065345.355274-17-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/aoe/aoeblk.c | 33 ++++++++++++---------------------
 drivers/block/aoe/aoedev.c |  3 +--
 2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index c34e71b0c4a9..06b360f7123a 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -338,14 +338,13 @@ static const struct blk_mq_ops aoeblk_mq_ops = {
 	.queue_rq	= aoeblk_queue_rq,
 };
 
-/* alloc_disk and add_disk can sleep */
+/* blk_mq_alloc_disk and add_disk can sleep */
 void
 aoeblk_gdalloc(void *vp)
 {
 	struct aoedev *d = vp;
 	struct gendisk *gd;
 	mempool_t *mp;
-	struct request_queue *q;
 	struct blk_mq_tag_set *set;
 	ulong flags;
 	int late = 0;
@@ -362,19 +361,12 @@ aoeblk_gdalloc(void *vp)
 	if (late)
 		return;
 
-	gd = alloc_disk(AOE_PARTITIONS);
-	if (gd == NULL) {
-		pr_err("aoe: cannot allocate disk structure for %ld.%d\n",
-			d->aoemajor, d->aoeminor);
-		goto err;
-	}
-
 	mp = mempool_create(MIN_BUFS, mempool_alloc_slab, mempool_free_slab,
 		buf_pool_cache);
 	if (mp == NULL) {
 		printk(KERN_ERR "aoe: cannot allocate bufpool for %ld.%d\n",
 			d->aoemajor, d->aoeminor);
-		goto err_disk;
+		goto err;
 	}
 
 	set = &d->tag_set;
@@ -391,12 +383,11 @@ aoeblk_gdalloc(void *vp)
 		goto err_mempool;
 	}
 
-	q = blk_mq_init_queue(set);
-	if (IS_ERR(q)) {
+	gd = blk_mq_alloc_disk(set, d);
+	if (IS_ERR(gd)) {
 		pr_err("aoe: cannot allocate block queue for %ld.%d\n",
 			d->aoemajor, d->aoeminor);
-		blk_mq_free_tag_set(set);
-		goto err_mempool;
+		goto err_tagset;
 	}
 
 	spin_lock_irqsave(&d->lock, flags);
@@ -405,16 +396,16 @@ aoeblk_gdalloc(void *vp)
 	WARN_ON(d->flags & DEVFL_TKILL);
 	WARN_ON(d->gd);
 	WARN_ON(d->flags & DEVFL_UP);
-	blk_queue_max_hw_sectors(q, BLK_DEF_MAX_SECTORS);
-	blk_queue_io_opt(q, SZ_2M);
+	blk_queue_max_hw_sectors(gd->queue, BLK_DEF_MAX_SECTORS);
+	blk_queue_io_opt(gd->queue, SZ_2M);
 	d->bufpool = mp;
-	d->blkq = gd->queue = q;
-	q->queuedata = d;
+	d->blkq = gd->queue;
 	d->gd = gd;
 	if (aoe_maxsectors)
-		blk_queue_max_hw_sectors(q, aoe_maxsectors);
+		blk_queue_max_hw_sectors(gd->queue, aoe_maxsectors);
 	gd->major = AOE_MAJOR;
 	gd->first_minor = d->sysminor;
+	gd->minors = AOE_PARTITIONS;
 	gd->fops = &aoe_bdops;
 	gd->private_data = d;
 	set_capacity(gd, d->ssize);
@@ -435,10 +426,10 @@ aoeblk_gdalloc(void *vp)
 	spin_unlock_irqrestore(&d->lock, flags);
 	return;
 
+err_tagset:
+	blk_mq_free_tag_set(set);
 err_mempool:
 	mempool_destroy(mp);
-err_disk:
-	put_disk(gd);
 err:
 	spin_lock_irqsave(&d->lock, flags);
 	d->flags &= ~DEVFL_GD_NOW;
diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
index e2ea2356da06..c5753c6bfe80 100644
--- a/drivers/block/aoe/aoedev.c
+++ b/drivers/block/aoe/aoedev.c
@@ -277,9 +277,8 @@ freedev(struct aoedev *d)
 	if (d->gd) {
 		aoedisk_rm_debugfs(d);
 		del_gendisk(d->gd);
-		put_disk(d->gd);
+		blk_cleanup_disk(d->gd);
 		blk_mq_free_tag_set(&d->tag_set);
-		blk_cleanup_queue(d->blkq);
 	}
 	t = d->targets;
 	e = t + d->ntargets;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135555.251883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKue-0002HO-36; Wed, 02 Jun 2021 07:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135555.251883; Wed, 02 Jun 2021 07:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKud-00027n-6v; Wed, 02 Jun 2021 07:03:35 +0000
Received: by outflank-mailman (input) for mailman id 135555;
 Wed, 02 Jun 2021 07:03:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpy-0007A2-BZ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:46 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fb205f2-e459-4e31-b892-046aaed03897;
 Wed, 02 Jun 2021 06:56:32 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKnR-0026ec-A8; Wed, 02 Jun 2021 06:56:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fb205f2-e459-4e31-b892-046aaed03897
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=ZrMEf0Ls9F+Osx12fthB6jYFp7YA2RuOfcMRudkR5eo=; b=RmhbmeEgAua7D4yc0CN6m+daMw
	ggu55W3yOtqsYbnSz5c1dfLFQJ9ncvb9hc3nNvkXY6f0gcSGMOhLStwUwb0UQrJ1aG7h0ZSUng/WX
	es/ejunr+apwEHWPUWRQDfPQvgeX1hrHlUl4NOb5oItwm7czJQwirbV+FChfRP7dM0qxLnCRqVmgI
	bA1/+LYF3tWUOd/53KL9zns/VStS5lpZkIiq55Ah7xS6m/8ryTxd0+2JtSdNyv1ovCOilXcKoWuJr
	8KfLwtubqhZXD6/OkB2mdwrFmz8NTFBmkmZB+9B1+iGvFIgntGiUljL/AChzeEhq1iCnrHX8J/lkd
	xKgRqjOg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 30/30] z2ram: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:45 +0300
Message-Id: <20210602065345.355274-31-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/z2ram.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index c1d20818e649..a8968d9e759b 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -323,27 +323,20 @@ static const struct blk_mq_ops z2_mq_ops = {
 
 static int z2ram_register_disk(int minor)
 {
-	struct request_queue *q;
 	struct gendisk *disk;
 
-	disk = alloc_disk(1);
-	if (!disk)
-		return -ENOMEM;
-
-	q = blk_mq_init_queue(&tag_set);
-	if (IS_ERR(q)) {
-		put_disk(disk);
-		return PTR_ERR(q);
-	}
+	disk = blk_mq_alloc_disk(&tag_set, NULL);
+	if (IS_ERR(disk))
+		return PTR_ERR(disk);
 
 	disk->major = Z2RAM_MAJOR;
 	disk->first_minor = minor;
+	disk->minors = 1;
 	disk->fops = &z2_fops;
 	if (minor)
 		sprintf(disk->disk_name, "z2ram%d", minor);
 	else
 		sprintf(disk->disk_name, "z2ram");
-	disk->queue = q;
 
 	z2ram_gendisk[minor] = disk;
 	add_disk(disk);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135558.251891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuf-0002Ub-5i; Wed, 02 Jun 2021 07:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135558.251891; Wed, 02 Jun 2021 07:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKue-0002Pv-Ay; Wed, 02 Jun 2021 07:03:36 +0000
Received: by outflank-mailman (input) for mailman id 135558;
 Wed, 02 Jun 2021 07:03:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpj-0007A2-Az
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:31 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b141df7-66ac-474d-b20d-54e51929abcb;
 Wed, 02 Jun 2021 06:56:17 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKnA-0026PT-6J; Wed, 02 Jun 2021 06:55:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b141df7-66ac-474d-b20d-54e51929abcb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=3eCT5ADUsi1wXGQC5ROk8G/iPsdy5HF5nCZ/CzuEH9g=; b=CHpdvSdDQx2S7VBfw36yFJIOKW
	u3raTq1dP6Gu10/PJ+IzNfdJsTGDfodiKpWnXSrAPsiiXAil4FixRR4LHQx/r2/2WajLehYqseurF
	+SXOVenwMIEvShMpPI3TV0igiPSTDc1jus2MS134mkHJE4fSKWXJgTMpWUETOWUQR+sQvpcZNrvGr
	ImONWbqeM3vG4waS2LyqljTVK2xihrO+4vFy94tUR5AZHm4zAb5VJXdTbeKyJ5b1ZyVapnUITJgc0
	rAK1o1WbLZZ2lIkx0tqU86DqYL3xaSap2ZyjXquU1Hz/pJyGXKGtlKYDaGKwU57VNJwF/m1NojREe
	Jk3wMbUQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 26/30] ubi: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:41 +0300
Message-Id: <20210602065345.355274-27-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/mtd/ubi/block.c | 68 ++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 39 deletions(-)

diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index e1a2ae21dfd3..e003b4b44ffa 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -394,53 +394,46 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	dev->vol_id = vi->vol_id;
 	dev->leb_size = vi->usable_leb_size;
 
+	dev->tag_set.ops = &ubiblock_mq_ops;
+	dev->tag_set.queue_depth = 64;
+	dev->tag_set.numa_node = NUMA_NO_NODE;
+	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	dev->tag_set.cmd_size = sizeof(struct ubiblock_pdu);
+	dev->tag_set.driver_data = dev;
+	dev->tag_set.nr_hw_queues = 1;
+
+	ret = blk_mq_alloc_tag_set(&dev->tag_set);
+	if (ret) {
+		pr_err("ubiblock: blk_mq_alloc_tag_set failed\n");
+		goto out_free_dev;
+	}
+
+
 	/* Initialize the gendisk of this ubiblock device */
-	gd = alloc_disk(1);
-	if (!gd) {
-		pr_err("UBI: block: alloc_disk failed\n");
-		ret = -ENODEV;
-		goto out_free_dev;
+	gd = blk_mq_alloc_disk(&dev->tag_set, dev);
+	if (IS_ERR(gd)) {
+		ret = PTR_ERR(gd);
+		goto out_free_tags;
 	}
 
 	gd->fops = &ubiblock_ops;
 	gd->major = ubiblock_major;
+	gd->minors = 1;
 	gd->first_minor = idr_alloc(&ubiblock_minor_idr, dev, 0, 0, GFP_KERNEL);
 	if (gd->first_minor < 0) {
 		dev_err(disk_to_dev(gd),
 			"block: dynamic minor allocation failed");
 		ret = -ENODEV;
-		goto out_put_disk;
+		goto out_cleanup_disk;
 	}
 	gd->private_data = dev;
 	sprintf(gd->disk_name, "ubiblock%d_%d", dev->ubi_num, dev->vol_id);
 	set_capacity(gd, disk_capacity);
 	dev->gd = gd;
 
-	dev->tag_set.ops = &ubiblock_mq_ops;
-	dev->tag_set.queue_depth = 64;
-	dev->tag_set.numa_node = NUMA_NO_NODE;
-	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	dev->tag_set.cmd_size = sizeof(struct ubiblock_pdu);
-	dev->tag_set.driver_data = dev;
-	dev->tag_set.nr_hw_queues = 1;
-
-	ret = blk_mq_alloc_tag_set(&dev->tag_set);
-	if (ret) {
-		dev_err(disk_to_dev(dev->gd), "blk_mq_alloc_tag_set failed");
-		goto out_remove_minor;
-	}
-
-	dev->rq = blk_mq_init_queue(&dev->tag_set);
-	if (IS_ERR(dev->rq)) {
-		dev_err(disk_to_dev(gd), "blk_mq_init_queue failed");
-		ret = PTR_ERR(dev->rq);
-		goto out_free_tags;
-	}
+	dev->rq = gd->queue;
 	blk_queue_max_segments(dev->rq, UBI_MAX_SG_COUNT);
 
-	dev->rq->queuedata = dev;
-	dev->gd->queue = dev->rq;
-
 	/*
 	 * Create one workqueue per volume (per registered block device).
 	 * Rembember workqueues are cheap, they're not threads.
@@ -448,7 +441,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	dev->wq = alloc_workqueue("%s", 0, 0, gd->disk_name);
 	if (!dev->wq) {
 		ret = -ENOMEM;
-		goto out_free_queue;
+		goto out_remove_minor;
 	}
 
 	list_add_tail(&dev->list, &ubiblock_devices);
@@ -460,14 +453,12 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	mutex_unlock(&devices_mutex);
 	return 0;
 
-out_free_queue:
-	blk_cleanup_queue(dev->rq);
-out_free_tags:
-	blk_mq_free_tag_set(&dev->tag_set);
 out_remove_minor:
 	idr_remove(&ubiblock_minor_idr, gd->first_minor);
-out_put_disk:
-	put_disk(dev->gd);
+out_cleanup_disk:
+	blk_cleanup_disk(dev->gd);
+out_free_tags:
+	blk_mq_free_tag_set(&dev->tag_set);
 out_free_dev:
 	kfree(dev);
 out_unlock:
@@ -483,11 +474,10 @@ static void ubiblock_cleanup(struct ubiblock *dev)
 	/* Flush pending work */
 	destroy_workqueue(dev->wq);
 	/* Finally destroy the blk queue */
-	blk_cleanup_queue(dev->rq);
-	blk_mq_free_tag_set(&dev->tag_set);
 	dev_info(disk_to_dev(dev->gd), "released");
+	blk_cleanup_disk(dev->gd);
+	blk_mq_free_tag_set(&dev->tag_set);
 	idr_remove(&ubiblock_minor_idr, dev->gd->first_minor);
-	put_disk(dev->gd);
 }
 
 int ubiblock_remove(struct ubi_volume_info *vi)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135567.251916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuj-0003c0-3h; Wed, 02 Jun 2021 07:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135567.251916; Wed, 02 Jun 2021 07:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKui-0003ZG-If; Wed, 02 Jun 2021 07:03:40 +0000
Received: by outflank-mailman (input) for mailman id 135567;
 Wed, 02 Jun 2021 07:03:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKn9-0007A2-5X
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:55:51 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9185394-a990-477c-9918-92a8993a35e9;
 Wed, 02 Jun 2021 06:55:06 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKm2-0025WN-Lj; Wed, 02 Jun 2021 06:54:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9185394-a990-477c-9918-92a8993a35e9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=FS+70uyYe+4tLFvg24eJMtfqOnrP5LJ1R+LEkZqTKlU=; b=XaazqAYB8sweEFqM2vMqm2BF1a
	1A6gyISfZs5MiI+w7Ee7x8edliJofVEmh9+Mx86DNUmvx47GaLfDmZ091KW9cpX7wVUeqEGqiRsSr
	gYH9yuynmotrC/Z84Q1dT3M3+dCa+hte+sWLIalSchRf3P3NlYSXLH8M9Vnytw0SUBcvQc2lsA9Q+
	Z+GDx2MIWDM0rBH85AuZdxZFOvz66B5vXQu6l2EaDGdQvxiHVJCR81wkPj1Uh3HkiiJVkksNGRl30
	79AcKSm0yUrgNvUfyA9x1cG9jsfkPUkL2zxUXnaTSgmkRfc75N+g558tteVCrwudBz7Eg5HX5gdRp
	O/xhSQ1g==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 10/30] ps3disk: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:25 +0300
Message-Id: <20210602065345.355274-11-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/ps3disk.c | 36 ++++++++++++++----------------------
 1 file changed, 14 insertions(+), 22 deletions(-)

diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index ba3ece56cbb3..f374ea2c67ce 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -29,7 +29,6 @@
 
 struct ps3disk_private {
 	spinlock_t lock;		/* Request queue spinlock */
-	struct request_queue *queue;
 	struct blk_mq_tag_set tag_set;
 	struct gendisk *gendisk;
 	unsigned int blocking_factor;
@@ -267,7 +266,7 @@ static irqreturn_t ps3disk_interrupt(int irq, void *data)
 	blk_mq_end_request(req, error);
 	spin_unlock(&priv->lock);
 
-	blk_mq_run_hw_queues(priv->queue, true);
+	blk_mq_run_hw_queues(priv->gendisk->queue, true);
 	return IRQ_HANDLED;
 }
 
@@ -441,17 +440,20 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 
 	ps3disk_identify(dev);
 
-	queue = blk_mq_init_sq_queue(&priv->tag_set, &ps3disk_mq_ops, 1,
+	error = blk_mq_alloc_sq_tag_set(&priv->tag_set, &ps3disk_mq_ops, 1,
 					BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(queue)) {
-		dev_err(&dev->sbd.core, "%s:%u: blk_mq_init_queue failed\n",
-			__func__, __LINE__);
-		error = PTR_ERR(queue);
+	if (error)
 		goto fail_teardown;
+
+	gendisk = blk_mq_alloc_disk(&priv->tag_set, dev);
+	if (IS_ERR(gendisk)) {
+		dev_err(&dev->sbd.core, "%s:%u: blk_mq_alloc_disk failed\n",
+			__func__, __LINE__);
+		error = PTR_ERR(gendisk);
+		goto fail_free_tag_set;
 	}
 
-	priv->queue = queue;
-	queue->queuedata = dev;
+	queue = gendisk->queue;
 
 	blk_queue_max_hw_sectors(queue, dev->bounce_size >> 9);
 	blk_queue_dma_alignment(queue, dev->blk_size-1);
@@ -462,19 +464,11 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 	blk_queue_max_segments(queue, -1);
 	blk_queue_max_segment_size(queue, dev->bounce_size);
 
-	gendisk = alloc_disk(PS3DISK_MINORS);
-	if (!gendisk) {
-		dev_err(&dev->sbd.core, "%s:%u: alloc_disk failed\n", __func__,
-			__LINE__);
-		error = -ENOMEM;
-		goto fail_cleanup_queue;
-	}
-
 	priv->gendisk = gendisk;
 	gendisk->major = ps3disk_major;
 	gendisk->first_minor = devidx * PS3DISK_MINORS;
+	gendisk->minors = PS3DISK_MINORS;
 	gendisk->fops = &ps3disk_fops;
-	gendisk->queue = queue;
 	gendisk->private_data = dev;
 	snprintf(gendisk->disk_name, sizeof(gendisk->disk_name), PS3DISK_NAME,
 		 devidx+'a');
@@ -490,8 +484,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 	device_add_disk(&dev->sbd.core, gendisk, NULL);
 	return 0;
 
-fail_cleanup_queue:
-	blk_cleanup_queue(queue);
+fail_free_tag_set:
 	blk_mq_free_tag_set(&priv->tag_set);
 fail_teardown:
 	ps3stor_teardown(dev);
@@ -517,9 +510,8 @@ static void ps3disk_remove(struct ps3_system_bus_device *_dev)
 		    &ps3disk_mask);
 	mutex_unlock(&ps3disk_mask_mutex);
 	del_gendisk(priv->gendisk);
-	blk_cleanup_queue(priv->queue);
+	blk_cleanup_disk(priv->gendisk);
 	blk_mq_free_tag_set(&priv->tag_set);
-	put_disk(priv->gendisk);
 	dev_notice(&dev->sbd.core, "Synchronizing disk cache\n");
 	ps3disk_sync_cache(dev);
 	ps3stor_teardown(dev);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135570.251925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKul-0003yr-BW; Wed, 02 Jun 2021 07:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135570.251925; Wed, 02 Jun 2021 07:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuk-0003rw-8V; Wed, 02 Jun 2021 07:03:42 +0000
Received: by outflank-mailman (input) for mailman id 135570;
 Wed, 02 Jun 2021 07:03:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpA-0007A2-AE
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:57:56 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ab929fe-3993-416b-be05-4c950b1845cf;
 Wed, 02 Jun 2021 06:56:00 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKms-00269K-Qr; Wed, 02 Jun 2021 06:55:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ab929fe-3993-416b-be05-4c950b1845cf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=KHLITrhMPjuz4XYQlvLuyMQ6jhJjT1OoZPVERAs4yN8=; b=njrsDfxSfC2RA5qwJt2Gy1XMn7
	HMViANxKgA+mZa9oqtDL6Zln6j6l+rAT4dPLT9bnUbXsnsTaUQ0Zo+tgVqDNvAGXnfDkphiwKMj5x
	BrmkF9vTV9DOM9uH60akL46TTYkw9d/OlGWdsEyS5dZeuUHVkR/+D5uY8OBMz1zYKkYBE5WpfvDgG
	ycoy1/m6Ju0fMNbnbjt00eVfjn6eWdVzfPHvduRb/7SIu+tbYbfcP19f/+8wnLJPAKrRAtTgm/q9k
	vpFwkdSuotCuwgIKk3p5rYt00HK0RgsfLoLRKB1ml/qrcqSSpKjUq4ZUl1lZoAhTbkjqikESCazL2
	RLfCWW5Q==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 22/30] rbd: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:37 +0300
Message-Id: <20210602065345.355274-23-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/rbd.c | 52 ++++++++++++++++-----------------------------
 1 file changed, 18 insertions(+), 34 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index bbb88eb009e0..531d390902dd 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4750,9 +4750,8 @@ static blk_status_t rbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 static void rbd_free_disk(struct rbd_device *rbd_dev)
 {
-	blk_cleanup_queue(rbd_dev->disk->queue);
+	blk_cleanup_disk(rbd_dev->disk);
 	blk_mq_free_tag_set(&rbd_dev->tag_set);
-	put_disk(rbd_dev->disk);
 	rbd_dev->disk = NULL;
 }
 
@@ -4922,22 +4921,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	    rbd_dev->layout.object_size * rbd_dev->layout.stripe_count;
 	int err;
 
-	/* create gendisk info */
-	disk = alloc_disk(single_major ?
-			  (1 << RBD_SINGLE_MAJOR_PART_SHIFT) :
-			  RBD_MINORS_PER_MAJOR);
-	if (!disk)
-		return -ENOMEM;
-
-	snprintf(disk->disk_name, sizeof(disk->disk_name), RBD_DRV_NAME "%d",
-		 rbd_dev->dev_id);
-	disk->major = rbd_dev->major;
-	disk->first_minor = rbd_dev->minor;
-	if (single_major)
-		disk->flags |= GENHD_FL_EXT_DEVT;
-	disk->fops = &rbd_bd_ops;
-	disk->private_data = rbd_dev;
-
 	memset(&rbd_dev->tag_set, 0, sizeof(rbd_dev->tag_set));
 	rbd_dev->tag_set.ops = &rbd_mq_ops;
 	rbd_dev->tag_set.queue_depth = rbd_dev->opts->queue_depth;
@@ -4948,13 +4931,26 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 
 	err = blk_mq_alloc_tag_set(&rbd_dev->tag_set);
 	if (err)
-		goto out_disk;
+		return err;
 
-	q = blk_mq_init_queue(&rbd_dev->tag_set);
-	if (IS_ERR(q)) {
-		err = PTR_ERR(q);
+	disk = blk_mq_alloc_disk(&rbd_dev->tag_set, rbd_dev);
+	if (IS_ERR(disk)) {
+		err = PTR_ERR(disk);
 		goto out_tag_set;
 	}
+	q = disk->queue;
+
+	snprintf(disk->disk_name, sizeof(disk->disk_name), RBD_DRV_NAME "%d",
+		 rbd_dev->dev_id);
+	disk->major = rbd_dev->major;
+	disk->first_minor = rbd_dev->minor;
+	if (single_major) {
+		disk->minors = (1 << RBD_SINGLE_MAJOR_PART_SHIFT);
+		disk->flags |= GENHD_FL_EXT_DEVT;
+	} else {
+		disk->minors = RBD_MINORS_PER_MAJOR;
+	}
+	disk->fops = &rbd_bd_ops;
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	/* QUEUE_FLAG_ADD_RANDOM is off by default for blk-mq */
@@ -4976,21 +4972,11 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
 
-	/*
-	 * disk_release() expects a queue ref from add_disk() and will
-	 * put it.  Hold an extra ref until add_disk() is called.
-	 */
-	WARN_ON(!blk_get_queue(q));
-	disk->queue = q;
-	q->queuedata = rbd_dev;
-
 	rbd_dev->disk = disk;
 
 	return 0;
 out_tag_set:
 	blk_mq_free_tag_set(&rbd_dev->tag_set);
-out_disk:
-	put_disk(disk);
 	return err;
 }
 
@@ -7088,8 +7074,6 @@ static ssize_t do_rbd_add(struct bus_type *bus,
 		goto err_out_image_lock;
 
 	device_add_disk(&rbd_dev->dev, rbd_dev->disk, NULL);
-	/* see rbd_init_disk() */
-	blk_put_queue(rbd_dev->disk->queue);
 
 	spin_lock(&rbd_dev_list_lock);
 	list_add_tail(&rbd_dev->node, &rbd_dev_list);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135576.251937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKup-0004sf-1V; Wed, 02 Jun 2021 07:03:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135576.251937; Wed, 02 Jun 2021 07:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuo-0004nB-A6; Wed, 02 Jun 2021 07:03:46 +0000
Received: by outflank-mailman (input) for mailman id 135576;
 Wed, 02 Jun 2021 07:03:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKnx-0007A2-72
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:56:41 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebf16284-33f8-491e-b768-0337c5832b7c;
 Wed, 02 Jun 2021 06:55:24 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmM-0025j3-HR; Wed, 02 Jun 2021 06:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebf16284-33f8-491e-b768-0337c5832b7c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=yxB3CToEvB1AGII3Fhuj66pLyS4oXdc/Ax4ufYJhDmU=; b=2Ev1PQ3iAXTcAu7Nm/w91IlhBi
	XlySTC28St0acrPx9XgYMlbiWgHgwnXpAO7fssa+3RO6s6w1gNt/L0bRJxPAncGQ3wVJ/bZBHhsGL
	uVoo++YnZjLmVwV7/fA+A/a3WTVeZ7Iox3GoTmd7i9BvdmfF+QzgThe8cp3OM151RtbpSjuGYv1qk
	Z49auwfLnjtdi4znRE+V/2lHNnnW9JmtU9bmCTyqdsOZLGP/YzUu+pzLr8O4OJgs++QcKu78dpSPH
	xBu4qqzNSfKG6LxxIT8EjfGs2NvFuwBi/8BGzrjd39Z06zSSm1tbNqLwSjr7IrytRt6ttKelR32cD
	obDFH8eg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 15/30] blk-mq: remove blk_mq_init_sq_queue
Date: Wed,  2 Jun 2021 09:53:30 +0300
Message-Id: <20210602065345.355274-16-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

All users of blk_mq_init_sq_queue are gone now, so the helper can be removed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 22 ----------------------
 include/linux/blk-mq.h |  4 ----
 2 files changed, 26 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1e6036e6fd66..25e25177c2b1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3156,28 +3156,6 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata)
 }
 EXPORT_SYMBOL(__blk_mq_alloc_disk);
 
-/*
- * Helper for setting up a queue with mq ops, given queue depth, and
- * the passed in mq ops flags.
- */
-struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
-					   const struct blk_mq_ops *ops,
-					   unsigned int queue_depth,
-					   unsigned int set_flags)
-{
-	struct request_queue *q;
-	int ret;
-
-	ret = blk_mq_alloc_sq_tag_set(set, ops, queue_depth, set_flags);
-	if (ret)
-		return ERR_PTR(ret);
-	q = blk_mq_init_queue(set);
-	if (IS_ERR(q))
-		blk_mq_free_tag_set(set);
-	return q;
-}
-EXPORT_SYMBOL(blk_mq_init_sq_queue);
-
 static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
 		struct blk_mq_tag_set *set, struct request_queue *q,
 		int hctx_idx, int node)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index f496c6c5b5d2..02a4aab0aeac 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -443,10 +443,6 @@ struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 		void *queuedata);
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q);
-struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
-						const struct blk_mq_ops *ops,
-						unsigned int queue_depth,
-						unsigned int set_flags);
 void blk_mq_unregister_dev(struct device *, struct request_queue *);
 
 int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135588.251953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuu-000667-Bd; Wed, 02 Jun 2021 07:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135588.251953; Wed, 02 Jun 2021 07:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuu-00065Q-6L; Wed, 02 Jun 2021 07:03:52 +0000
Received: by outflank-mailman (input) for mailman id 135588;
 Wed, 02 Jun 2021 07:03:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKq3-0007A2-Bi
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:51 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abb111d9-170d-4a31-ae08-83f4e7b0419c;
 Wed, 02 Jun 2021 06:56:33 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKnN-0026aZ-Dz; Wed, 02 Jun 2021 06:56:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abb111d9-170d-4a31-ae08-83f4e7b0419c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=7GHnP6DNZGlKFwjbxdBpad0uFlCxBKJH9Xsnx8ec7ac=; b=nF5H5Xm7fe4zGXbHdq72l4Rnzy
	kg+w8MA3+xaOntMSOadwazIKALvfRRzCTRy9MFxfEVrh2sKEgQoBbgUOGRaieuAuAxc4ABeXBS8HN
	SQtDG1OQjVRLervqQcLcyPmZ3DOCzf6Iza9QUOxa7eEES0n40jma6ts5XDhrZorWgE/EAulN1GwbV
	Z5MnbQ9Pt4ZuIesxtmUE7CYUNs9IpOi40S5W3ULs7jy5knAcpmh8yHUS3I5I9+QDcpjjk0bMn7Cxn
	tMuGXllg4mbf6SO7+IH+aPv1V5kGTRswN64kZiIdU1FDVemfSZMUYCMTkt4iBJ9J6ovY+/RiKuAOr
	KKtQYNhA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 29/30] ataflop: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:44 +0300
Message-Id: <20210602065345.355274-30-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.
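
The simplification rests on the kernel's ERR_PTR convention: blk_mq_alloc_disk returns either a valid gendisk or an errno encoded into the pointer, so the caller needs a single IS_ERR check instead of separate NULL and IS_ERR paths for the disk and its queue. A minimal userspace sketch of that convention (the macros are re-implemented here for illustration; in the kernel they live in <linux/err.h>, and the allocator below is a stand-in, not the real blk_mq_alloc_disk):

```c
#include <stdlib.h>

/* Userspace re-implementation of the kernel's ERR_PTR scheme: small
 * negative errno values are encoded at the very top of the pointer
 * range, so one return value carries either a pointer or an error. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Combined allocation in the style of blk_mq_alloc_disk: either the
 * object is fully set up, or an encoded error comes back and the
 * caller has nothing half-initialized to unwind. */
struct disk { int major; };

static struct disk *alloc_disk_combined(int fail)
{
	struct disk *d;

	if (fail)
		return ERR_PTR(-12);	/* -ENOMEM, illustrative */
	d = malloc(sizeof(*d));
	if (!d)
		return ERR_PTR(-12);
	d->major = 0;
	return d;
}
```

With this shape, every converted driver collapses to the two-line `if (IS_ERR(disk)) return PTR_ERR(disk);` pattern seen in the hunk below.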

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/ataflop.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index d601e49f80e0..a093644ac39f 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -1968,22 +1968,14 @@ static const struct blk_mq_ops ataflop_mq_ops = {
 static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct gendisk *disk;
-	int ret;
-
-	disk = alloc_disk(1);
-	if (!disk)
-		return -ENOMEM;
 
-	disk->queue = blk_mq_init_queue(&unit[drive].tag_set);
-	if (IS_ERR(disk->queue)) {
-		ret = PTR_ERR(disk->queue);
-		disk->queue = NULL;
-		put_disk(disk);
-		return ret;
-	}
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL);
+	if (IS_ERR(disk))
+		return PTR_ERR(disk);
 
 	disk->major = FLOPPY_MAJOR;
 	disk->first_minor = drive + (type << 2);
+	disk->minors = 1;
 	sprintf(disk->disk_name, "fd%d", drive);
 	disk->fops = &floppy_fops;
 	disk->events = DISK_EVENT_MEDIA_CHANGE;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:03:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135595.251965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKuy-0006xk-6C; Wed, 02 Jun 2021 07:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135595.251965; Wed, 02 Jun 2021 07:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKux-0006wU-SC; Wed, 02 Jun 2021 07:03:55 +0000
Received: by outflank-mailman (input) for mailman id 135595;
 Wed, 02 Jun 2021 07:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpt-0007A2-BJ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:41 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7da5cd36-d946-4933-9b91-21633317f508;
 Wed, 02 Jun 2021 06:56:24 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKnJ-0026XD-03; Wed, 02 Jun 2021 06:56:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7da5cd36-d946-4933-9b91-21633317f508
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=e/ctmVcBKo8/4TfSYThwuDhurHZGsQmtrWLVmP8rYA8=; b=0A/jGm6O4Kj2j/Kx7dvigliFHC
	rITdmfIP63MHFDu3WSFM5d7dMu7+beFsioI4GrDpda176/3PUgnVrUbqyRWbSjWdcuAiJqiob1gTK
	AvRYkCVCw9w56bk5jObu58cLFiy6YDR/IPuribWTRblB+Qy/UxnevwGEDkt/f/knnTFZf15Y8vlCM
	5ocB4AhVAL+iOilDAuQ8GJXKzc8ZFMa3OISnsDLeHTuB1dBage0GQubfEEJs2dxaTysO0wL6PENPG
	HsubMhTFqnBjdPbq87DlkaqfHQIsjrwpF/ip3GpPNqk0nBPScEi1aAKtroJznPZbqZYyddYJMUb2K
	Z/qjGhrw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 28/30] amiflop: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:43 +0300
Message-Id: <20210602065345.355274-29-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/amiflop.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 9e2d0c6a3877..8b1714021498 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1781,15 +1781,13 @@ static int fd_alloc_disk(int drive, int system)
 {
 	struct gendisk *disk;
 
-	disk = alloc_disk(1);
-	if (!disk)
-		goto out;
-	disk->queue = blk_mq_init_queue(&unit[drive].tag_set);
-	if (IS_ERR(disk->queue))
-		goto out_put_disk;
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL);
+	if (IS_ERR(disk))
+		return PTR_ERR(disk);
 
 	disk->major = FLOPPY_MAJOR;
 	disk->first_minor = drive + system;
+	disk->minors = 1;
 	disk->fops = &floppy_fops;
 	disk->events = DISK_EVENT_MEDIA_CHANGE;
 	if (system)
@@ -1802,12 +1800,6 @@ static int fd_alloc_disk(int drive, int system)
 	unit[drive].gendisk[system] = disk;
 	add_disk(disk);
 	return 0;
-
-out_put_disk:
-	disk->queue = NULL;
-	put_disk(disk);
-out:
-	return -ENOMEM;
 }
 
 static int fd_alloc_drive(int drive)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:04:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135600.251976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv1-0007jW-LE; Wed, 02 Jun 2021 07:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135600.251976; Wed, 02 Jun 2021 07:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv1-0007io-9c; Wed, 02 Jun 2021 07:03:59 +0000
Received: by outflank-mailman (input) for mailman id 135600;
 Wed, 02 Jun 2021 07:03:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKns-0007A2-70
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:56:36 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9809242-a8e4-4f1a-9400-4e7f8bf7d610;
 Wed, 02 Jun 2021 06:55:23 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmI-0025hI-AR; Wed, 02 Jun 2021 06:54:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9809242-a8e4-4f1a-9400-4e7f8bf7d610
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=uZ7id5h0Y8eTodjolim75Nh4xqN5GmxeKJFIibiB+SQ=; b=Y+ZIEpYOA2iHCs2T6tZeqlhwn4
	2PZNnbXaohXgLYFyDB1Km2Bu2wyT0psxnFom5uMCa1WL63p08XIPFssnnwdwOWXz6S+f1NQhfrJgw
	DT/OnKnomp1fr4hlxHzl1uRlbcM58luxBJNmSQR1ISWEmb6j+xDelW/EPSueMH5qVJqvrMzGOnsJB
	1yLmbxBIjZuZM4gaGqkWw/56tktEZ+pXt7zHVQEX0324IpvUKx/kGJ0wdmf5QkCuJQDan3LvZi1gh
	v7Tu3lvr/4iTt31gmsoCM89vROF9iZm4xBhs773hy5MvLL57mXmlU4k/Fy9BEGPPXaP9XwV6nM+1f
	cFvfrZKA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 14/30] gdrom: use blk_mq_alloc_disk
Date: Wed,  2 Jun 2021 09:53:29 +0300
Message-Id: <20210602065345.355274-15-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
allocation.
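
Most of this patch is reshuffling probe()'s error unwinding so the goto labels release resources in exactly the reverse order they were acquired (tag set, then disk, then IRQs). A runnable sketch of that cascading-label pattern, with hypothetical step names rather than the gdrom driver's actual resources:

```c
#include <stdlib.h>

static int steps_done;	/* records how far setup got, for testing */

/* Stand-in for an allocation step: fails when step == fail_at. */
static void *acquire(int step, int fail_at)
{
	if (step == fail_at)
		return NULL;
	return malloc(16);
}

/* probe()-style setup: acquire in order, and on failure fall through
 * goto labels that free precisely what was already set up, newest
 * resource first. */
static int probe_sketch(int fail_at)
{
	void *tag_set, *disk, *toc;

	tag_set = acquire(1, fail_at);		/* step 1: tag set */
	if (!tag_set)
		goto fail_nothing;
	disk = acquire(2, fail_at);		/* step 2: disk */
	if (!disk)
		goto fail_free_tag_set;
	toc = acquire(3, fail_at);		/* step 3: toc */
	if (!toc)
		goto fail_free_disk;

	steps_done = 3;
	free(toc);
	free(disk);
	free(tag_set);
	return 0;

fail_free_disk:
	free(disk);
fail_free_tag_set:
	free(tag_set);
fail_nothing:
	return -1;	/* illustrative error code */
}
```

The renamed labels in the hunk below (probe_fail_cleanup_disk, probe_fail_free_tag_set, probe_fail_free_cd_info) follow this same newest-first discipline.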

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/cdrom/gdrom.c | 45 ++++++++++++++++++++-----------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index c6d8c0f59722..8e1fe75af93f 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -772,53 +772,50 @@ static int probe_gdrom(struct platform_device *devptr)
 		goto probe_fail_no_mem;
 	}
 	probe_gdrom_setupcd();
-	gd.disk = alloc_disk(1);
-	if (!gd.disk) {
-		err = -ENODEV;
-		goto probe_fail_no_disk;
+
+	err = blk_mq_alloc_sq_tag_set(&gd.tag_set, &gdrom_mq_ops, 1,
+				BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
+	if (err)
+		goto probe_fail_free_cd_info;
+
+	gd.disk = blk_mq_alloc_disk(&gd.tag_set, NULL);
+	if (IS_ERR(gd.disk)) {
+		err = PTR_ERR(gd.disk);
+		goto probe_fail_free_tag_set;
 	}
+	gd.gdrom_rq = gd.disk->queue;
 	probe_gdrom_setupdisk();
 	if (register_cdrom(gd.disk, gd.cd_info)) {
 		err = -ENODEV;
-		goto probe_fail_cdrom_register;
+		goto probe_fail_cleanup_disk;
 	}
 	gd.disk->fops = &gdrom_bdops;
 	gd.disk->events = DISK_EVENT_MEDIA_CHANGE;
 	/* latch on to the interrupt */
 	err = gdrom_set_interrupt_handlers();
 	if (err)
-		goto probe_fail_cmdirq_register;
-
-	gd.gdrom_rq = blk_mq_init_sq_queue(&gd.tag_set, &gdrom_mq_ops, 1,
-				BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
-	if (IS_ERR(gd.gdrom_rq)) {
-		err = PTR_ERR(gd.gdrom_rq);
-		gd.gdrom_rq = NULL;
-		goto probe_fail_requestq;
-	}
+		goto probe_fail_cleanup_disk;
 
 	err = probe_gdrom_setupqueue();
 	if (err)
-		goto probe_fail_toc;
+		goto probe_fail_free_irqs;
 
 	gd.toc = kzalloc(sizeof(struct gdromtoc), GFP_KERNEL);
 	if (!gd.toc) {
 		err = -ENOMEM;
-		goto probe_fail_toc;
+		goto probe_fail_free_irqs;
 	}
 	add_disk(gd.disk);
 	return 0;
 
-probe_fail_toc:
-	blk_cleanup_queue(gd.gdrom_rq);
-	blk_mq_free_tag_set(&gd.tag_set);
-probe_fail_requestq:
+probe_fail_free_irqs:
 	free_irq(HW_EVENT_GDROM_DMA, &gd);
 	free_irq(HW_EVENT_GDROM_CMD, &gd);
-probe_fail_cmdirq_register:
-probe_fail_cdrom_register:
-	del_gendisk(gd.disk);
-probe_fail_no_disk:
+probe_fail_cleanup_disk:
+	blk_cleanup_disk(gd.disk);
+probe_fail_free_tag_set:
+	blk_mq_free_tag_set(&gd.tag_set);
+probe_fail_free_cd_info:
 	kfree(gd.cd_info);
 probe_fail_no_mem:
 	unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:04:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135605.251981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv2-0007sf-9t; Wed, 02 Jun 2021 07:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135605.251981; Wed, 02 Jun 2021 07:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv1-0007pR-UA; Wed, 02 Jun 2021 07:03:59 +0000
Received: by outflank-mailman (input) for mailman id 135605;
 Wed, 02 Jun 2021 07:03:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKoq-0007A2-9L
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:57:36 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 413141d6-a989-469e-92fd-fbc443d5b624;
 Wed, 02 Jun 2021 06:55:45 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKme-0025w6-S0; Wed, 02 Jun 2021 06:55:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 413141d6-a989-469e-92fd-fbc443d5b624
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=+VW3u/9XCdDTbtqEZvujMXwGJbXBwfM20yjCH8e5d3E=; b=GwQP6IERGYi9h/w11aM5Gv7Bak
	KwC96m3PPXGkoyZYwi/kNvLQSakOJo8KyQC+PaSGnzJZtxZtj+EAf1YdIWxyGh1h3/ItGALtuAyht
	Zb0UCagxGAtcH3yWBInWzJjMPdgOtlr0aVxDJradU0LlW4CU0UoZ9uFlxXZ9MaFs9QpR6rJJqNKpM
	Lp+TbCyqXP0eixAsJNjUJ38MR0l0b3AJnHt9CihDHSmpFaOitPKX0HSMzdaVcLJGEB+8v4t2ImMr4
	1mz3v9UKvRtkqRqwAqZX0MDPtIqhunVVCv1zjkY4f5TbIaxRb9R62pOguo/krhQJ2ETMPxpLnqyY0
	E1fIMYVA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 19/30] nbd: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:34 +0300
Message-Id: <20210602065345.355274-20-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.
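
One detail worth noting in the hunk below: with blk_mq_alloc_disk the driver now sets disk->minors itself, preserving nbd's layout where each device owns a contiguous block of 1 << part_shift minor numbers starting at index << part_shift. A trivial sketch of that arithmetic (the shift value here is illustrative, not nbd's runtime-configured one):

```c
/* Minor-number layout in the nbd style: device `index` gets minors
 * [index << PART_SHIFT, (index + 1) << PART_SHIFT), so partitions of
 * one device never collide with the next device's minors. */
#define PART_SHIFT 5	/* hypothetical; nbd derives this from a module param */

static int first_minor(int index) { return index << PART_SHIFT; }
static int minor_count(void)      { return 1 << PART_SHIFT; }
```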

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 53 ++++++++++++++++++---------------------------
 1 file changed, 21 insertions(+), 32 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 45d2c28c8fc8..614d82e7fae4 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -219,15 +219,11 @@ static const struct device_attribute pid_attr = {
 static void nbd_dev_remove(struct nbd_device *nbd)
 {
 	struct gendisk *disk = nbd->disk;
-	struct request_queue *q;
 
 	if (disk) {
-		q = disk->queue;
 		del_gendisk(disk);
-		blk_cleanup_queue(q);
 		blk_mq_free_tag_set(&nbd->tag_set);
-		disk->private_data = NULL;
-		put_disk(disk);
+		blk_cleanup_disk(disk);
 	}
 
 	/*
@@ -1646,15 +1642,24 @@ static int nbd_dev_add(int index)
 {
 	struct nbd_device *nbd;
 	struct gendisk *disk;
-	struct request_queue *q;
 	int err = -ENOMEM;
 
 	nbd = kzalloc(sizeof(struct nbd_device), GFP_KERNEL);
 	if (!nbd)
 		goto out;
 
-	disk = alloc_disk(1 << part_shift);
-	if (!disk)
+	nbd->tag_set.ops = &nbd_mq_ops;
+	nbd->tag_set.nr_hw_queues = 1;
+	nbd->tag_set.queue_depth = 128;
+	nbd->tag_set.numa_node = NUMA_NO_NODE;
+	nbd->tag_set.cmd_size = sizeof(struct nbd_cmd);
+	nbd->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
+		BLK_MQ_F_BLOCKING;
+	nbd->tag_set.driver_data = nbd;
+	nbd->destroy_complete = NULL;
+
+	err = blk_mq_alloc_tag_set(&nbd->tag_set);
+	if (err)
 		goto out_free_nbd;
 
 	if (index >= 0) {
@@ -1668,30 +1673,15 @@ static int nbd_dev_add(int index)
 			index = err;
 	}
 	if (err < 0)
-		goto out_free_disk;
-
+		goto out_free_tags;
 	nbd->index = index;
-	nbd->disk = disk;
-	nbd->tag_set.ops = &nbd_mq_ops;
-	nbd->tag_set.nr_hw_queues = 1;
-	nbd->tag_set.queue_depth = 128;
-	nbd->tag_set.numa_node = NUMA_NO_NODE;
-	nbd->tag_set.cmd_size = sizeof(struct nbd_cmd);
-	nbd->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
-		BLK_MQ_F_BLOCKING;
-	nbd->tag_set.driver_data = nbd;
-	nbd->destroy_complete = NULL;
 
-	err = blk_mq_alloc_tag_set(&nbd->tag_set);
-	if (err)
+	disk = blk_mq_alloc_disk(&nbd->tag_set, NULL);
+	if (IS_ERR(disk)) {
+		err = PTR_ERR(disk);
 		goto out_free_idr;
-
-	q = blk_mq_init_queue(&nbd->tag_set);
-	if (IS_ERR(q)) {
-		err = PTR_ERR(q);
-		goto out_free_tags;
 	}
-	disk->queue = q;
+	nbd->disk = disk;
 
 	/*
 	 * Tell the block layer that we are not a rotational device
@@ -1712,6 +1702,7 @@ static int nbd_dev_add(int index)
 	INIT_LIST_HEAD(&nbd->list);
 	disk->major = NBD_MAJOR;
 	disk->first_minor = index << part_shift;
+	disk->minors = 1 << part_shift;
 	disk->fops = &nbd_fops;
 	disk->private_data = nbd;
 	sprintf(disk->disk_name, "nbd%d", index);
@@ -1719,12 +1710,10 @@ static int nbd_dev_add(int index)
 	nbd_total_devices++;
 	return index;
 
-out_free_tags:
-	blk_mq_free_tag_set(&nbd->tag_set);
 out_free_idr:
 	idr_remove(&nbd_index_idr, index);
-out_free_disk:
-	put_disk(disk);
+out_free_tags:
+	blk_mq_free_tag_set(&nbd->tag_set);
 out_free_nbd:
 	kfree(nbd);
 out:
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:04:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135609.251990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv3-0008Bz-VI; Wed, 02 Jun 2021 07:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135609.251990; Wed, 02 Jun 2021 07:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loKv3-00087F-5D; Wed, 02 Jun 2021 07:04:01 +0000
Received: by outflank-mailman (input) for mailman id 135609;
 Wed, 02 Jun 2021 07:04:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PwRf=K4=bombadil.srs.infradead.org=batv+e38fb55258da4e18a096+6492+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1loKpP-0007A2-AR
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 06:58:11 +0000
Received: from bombadil.infradead.org (unknown [2607:7c80:54:e::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12923a92-1323-40b7-8618-44db0cf76b6d;
 Wed, 02 Jun 2021 06:56:06 +0000 (UTC)
Received: from shol69.static.otenet.gr ([83.235.170.67] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1loKmx-0026Co-6G; Wed, 02 Jun 2021 06:55:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12923a92-1323-40b7-8618-44db0cf76b6d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=5mJAZZeO1CtG3mn2Yuq3t9OP/z6HiPXlonifU+Xv8uk=; b=tt3BlzEvyR33jB2X0kDiZ5TSxB
	csELHSnmlT8T6KUWOPDXdz45J2xQyFQU19xaezbdVZgeUVrTi9u9ZPTPO6fqT1nm0qE2v7SVLycE9
	/MOWrjDcC/xYy6qBU13Znm79Ch9jRwtBbUtbfzUmR+I9pRspaNFEvyOXmYvNfXgWZsh0TdBvJLp4s
	/Fr3RVYTz9uY0FffL4ddTLGXqxR4LHnzHncqok0pz2PtYl2ZKi5VfjzMdqvx8SDrYpymOwhWqr6QX
	Z9xSFfPHVSbsRQooQMBqPIpILAv/Gx+wWjK7rLtRMYAMdC9wtpz/a7ziP04g4HcPFPbwAdib7nnbG
	Ptr9BkmA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: [PATCH 23/30] rnbd: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed,  2 Jun 2021 09:53:38 +0300
Message-Id: <20210602065345.355274-24-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/rnbd/rnbd-clt.c | 35 ++++++++---------------------------
 1 file changed, 8 insertions(+), 27 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index c604a402cd5c..f4fa45d24c0b 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1353,18 +1353,6 @@ static void rnbd_init_mq_hw_queues(struct rnbd_clt_dev *dev)
 	}
 }
 
-static int setup_mq_dev(struct rnbd_clt_dev *dev)
-{
-	dev->queue = blk_mq_init_queue(&dev->sess->tag_set);
-	if (IS_ERR(dev->queue)) {
-		rnbd_clt_err(dev, "Initializing multiqueue queue failed, err: %ld\n",
-			      PTR_ERR(dev->queue));
-		return PTR_ERR(dev->queue);
-	}
-	rnbd_init_mq_hw_queues(dev);
-	return 0;
-}
-
 static void setup_request_queue(struct rnbd_clt_dev *dev)
 {
 	blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
@@ -1393,13 +1381,13 @@ static void setup_request_queue(struct rnbd_clt_dev *dev)
 	blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
 	blk_queue_virt_boundary(dev->queue, SZ_4K - 1);
 	blk_queue_write_cache(dev->queue, dev->wc, dev->fua);
-	dev->queue->queuedata = dev;
 }
 
 static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
 {
 	dev->gd->major		= rnbd_client_major;
 	dev->gd->first_minor	= idx << RNBD_PART_BITS;
+	dev->gd->minors		= 1 << RNBD_PART_BITS;
 	dev->gd->fops		= &rnbd_client_ops;
 	dev->gd->queue		= dev->queue;
 	dev->gd->private_data	= dev;
@@ -1426,24 +1414,18 @@ static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
 
 static int rnbd_client_setup_device(struct rnbd_clt_dev *dev)
 {
-	int err, idx = dev->clt_device_id;
+	int idx = dev->clt_device_id;
 
 	dev->size = dev->nsectors * dev->logical_block_size;
 
-	err = setup_mq_dev(dev);
-	if (err)
-		return err;
+	dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, dev);
+	if (IS_ERR(dev->gd))
+		return PTR_ERR(dev->gd);
+	dev->queue = dev->gd->queue;
+	rnbd_init_mq_hw_queues(dev);
 
 	setup_request_queue(dev);
-
-	dev->gd = alloc_disk_node(1 << RNBD_PART_BITS,	NUMA_NO_NODE);
-	if (!dev->gd) {
-		blk_cleanup_queue(dev->queue);
-		return -ENOMEM;
-	}
-
 	rnbd_clt_setup_gen_disk(dev, idx);
-
 	return 0;
 }
 
@@ -1650,8 +1632,7 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 static void destroy_gen_disk(struct rnbd_clt_dev *dev)
 {
 	del_gendisk(dev->gd);
-	blk_cleanup_queue(dev->queue);
-	put_disk(dev->gd);
+	blk_cleanup_disk(dev->gd);
 }
 
 static void destroy_sysfs(struct rnbd_clt_dev *dev,
-- 
2.30.2
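The change above folds two allocations (request queue, then gendisk) with separate unwind paths into a single blk_mq_alloc_disk() call paired with a single blk_cleanup_disk(). A rough userspace sketch of that ownership shape, with invented types standing in for the kernel structures (this is not the block-layer API):

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-ins for the kernel's request_queue/gendisk pair;
 * the point is the ownership shape, not the real block-layer API. */
struct queue { int depth; };
struct disk  { struct queue queue; void *private_data; };

/* One constructor: the disk owns its queue, so a single NULL check
 * replaces the old alloc-queue, alloc-disk, unwind-queue dance. */
static struct disk *disk_alloc(int depth, void *private_data)
{
    struct disk *d = malloc(sizeof(*d));

    if (!d)
        return NULL;
    d->queue.depth = depth;
    d->private_data = private_data;
    return d;
}

/* One destructor tears down both objects, mirroring how
 * blk_cleanup_disk() replaces blk_cleanup_queue() + put_disk(). */
static void disk_cleanup(struct disk *d)
{
    free(d);
}
```

Because allocation and teardown each become one call, the error path in rnbd_client_setup_device() shrinks to a single IS_ERR() check, as the diff shows.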



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:10:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135648.252009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loL1c-0004Jq-7W; Wed, 02 Jun 2021 07:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135648.252009; Wed, 02 Jun 2021 07:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loL1c-0004Jj-46; Wed, 02 Jun 2021 07:10:48 +0000
Received: by outflank-mailman (input) for mailman id 135648;
 Wed, 02 Jun 2021 07:10:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loL1a-0004Jb-LL
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:10:46 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38b574fb-e25a-4ffc-8cc7-bb2fe9e2a211;
 Wed, 02 Jun 2021 07:10:45 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1BB372193C;
 Wed,  2 Jun 2021 07:10:45 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E7BCA118DD;
 Wed,  2 Jun 2021 07:10:44 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ihBuN/Qut2AcVwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:10:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38b574fb-e25a-4ffc-8cc7-bb2fe9e2a211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622617845; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TjiqP1jUYF7noSIyoNCVoo2cMfZMos2Hqwbp8cguD/A=;
	b=ov647Dg8ErFWpOxffjDEcWpuWN/sXYRYrS+6U+Kg245m61WkwZ9AeVa6Ms1eU3aOIAPp7d
	9wIGrMqeK7jZtNvM95i2cjlmqQDF95QDHaKy+4Hkqn/y11thKwK8mmRhH0LZpxTSQ2v6gY
	0exXJWccNepoCdT/xqjcPltQimBQUSo=
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-9-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v20210601 08/38] tools: show migration transfer rate in
 send_dirty_pages
Message-ID: <42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
Date: Wed, 2 Jun 2021 09:10:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-9-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="BrqMiSmemi6ydmvK1ilhFR5iHXrP0A4Bg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--BrqMiSmemi6ydmvK1ilhFR5iHXrP0A4Bg
Content-Type: multipart/mixed; boundary="qCTocIMJRmox1xkeEyqFvBeyqPkP4wbVr";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
Subject: Re: [PATCH v20210601 08/38] tools: show migration transfer rate in
 send_dirty_pages
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-9-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-9-olaf@aepfle.de>

--qCTocIMJRmox1xkeEyqFvBeyqPkP4wbVr
Content-Type: multipart/mixed;
 boundary="------------C0C7F94365622C72118C79DD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C0C7F94365622C72118C79DD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Show how fast domU pages are transferred in each iteration.
>
> The relevant data is how fast the pfns travel, not so much how much
> protocol overhead exists. So the reported MiB/sec is just for pfns.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/saverestore/common.h |  2 ++
>   tools/libs/saverestore/save.c   | 47 +++++++++++++++++++++++++++++++++
>   2 files changed, 49 insertions(+)
>
> diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
> index 50a8479d39..f5fe23caad 100644
> --- a/tools/libs/saverestore/common.h
> +++ b/tools/libs/saverestore/common.h
> @@ -250,6 +250,8 @@ struct xc_sr_context
>               bool debug;
>
>               unsigned long p2m_size;
> +            size_t pages_sent;
> +            size_t overhead_sent;
>
>               struct precopy_stats stats;
>
> diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
> index bcff2d28f5..760ca04a84 100644
> --- a/tools/libs/saverestore/save.c
> +++ b/tools/libs/saverestore/save.c
> @@ -1,5 +1,6 @@
>   #include <assert.h>
>   #include <arpa/inet.h>
> +#include <time.h>
>
>   #include "common.h"
>
> @@ -238,6 +239,8 @@ static int write_batch(struct xc_sr_context *ctx)
>       iov[3].iov_len = nr_pfns * sizeof(*rec_pfns);
>
>       iovcnt = 4;
> +    ctx->save.pages_sent += nr_pages;
> +    ctx->save.overhead_sent += sizeof(rec) + sizeof(hdr) + nr_pfns * sizeof(*rec_pfns);
>
>       if ( nr_pages )
>       {
> @@ -357,6 +360,43 @@ static int suspend_domain(struct xc_sr_context *ctx)
>       return 0;
>   }
>
> +static void show_transfer_rate(struct xc_sr_context *ctx, struct timespec *start)
> +{
> +    xc_interface *xch = ctx->xch;
> +    struct timespec end = {}, diff = {};
> +    size_t ms, MiB_sec = ctx->save.pages_sent * PAGE_SIZE;

I'd rather not initialize MiB_sec here ...

> +
> +    if (!MiB_sec)

... and test for ctx->save.pages_sent to be non-zero here.

> +        return;
> +
> +    if ( clock_gettime(CLOCK_MONOTONIC, &end) )
> +        PERROR("clock_gettime");
> +
> +    if ( (end.tv_nsec - start->tv_nsec) < 0 )
> +    {
> +        diff.tv_sec = end.tv_sec - start->tv_sec - 1;
> +        diff.tv_nsec = end.tv_nsec - start->tv_nsec + (1000U*1000U*1000U);
> +    }
> +    else
> +    {
> +        diff.tv_sec = end.tv_sec - start->tv_sec;
> +        diff.tv_nsec = end.tv_nsec - start->tv_nsec;
> +    }
> +
> +    ms = (diff.tv_nsec / (1000U*1000U));
> +    if (!ms)
> +        ms = 1;

I'd move this ...

> +    ms += (diff.tv_sec * 1000U);

... below this.

> +
> +    MiB_sec *= 1000U;
> +    MiB_sec /= ms;
> +    MiB_sec /= 1024U*1024U;

Avoid MiB_sec holding bytes per second for some time and use:

MiB_sec = ((ctx->save.pages_sent * PAGE_SIZE * 1000) / ms) /
          (1024U * 1024U);


Juergen
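The restructuring suggested above (derive the elapsed milliseconds first, clamp the total to a minimum of 1, then compute MiB/sec in one expression) can be sketched as standalone userspace C. The PAGE_SIZE value, helper names, and sample figures are assumptions for illustration rather than the libxc code, and a 64-bit size_t is assumed so the byte count does not overflow:

```c
#include <assert.h>
#include <stddef.h>
#include <time.h>

#define PAGE_SIZE 4096  /* assumed page size for this sketch */

/* Elapsed wall-clock milliseconds between two timespec samples,
 * clamped to at least 1 so callers can divide by it safely. */
static size_t elapsed_ms(const struct timespec *start, const struct timespec *end)
{
    struct timespec diff;
    size_t ms;

    if ( (end->tv_nsec - start->tv_nsec) < 0 )
    {
        diff.tv_sec = end->tv_sec - start->tv_sec - 1;
        diff.tv_nsec = end->tv_nsec - start->tv_nsec + (1000U * 1000U * 1000U);
    }
    else
    {
        diff.tv_sec = end->tv_sec - start->tv_sec;
        diff.tv_nsec = end->tv_nsec - start->tv_nsec;
    }

    /* Clamp the *total*, not just the sub-second part, as suggested. */
    ms = diff.tv_sec * 1000U + diff.tv_nsec / (1000U * 1000U);
    return ms ? ms : 1;
}

/* Transfer rate in MiB/s, computed in one expression so no variable
 * ever transiently holds plain bytes per second. */
static size_t rate_mib_per_sec(size_t pages_sent, size_t ms)
{
    return ((pages_sent * PAGE_SIZE * 1000) / ms) / (1024U * 1024U);
}
```

With the figures quoted later in the thread (2879563 pages in 55.324905335 sec), this arithmetic yields 203 MiB/s, matching the logged rate.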

--------------C0C7F94365622C72118C79DD
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C0C7F94365622C72118C79DD--

--qCTocIMJRmox1xkeEyqFvBeyqPkP4wbVr--

--BrqMiSmemi6ydmvK1ilhFR5iHXrP0A4Bg
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3LvQFAwAAAAAACgkQsN6d1ii/Ey9M
7Af/WmegSfoqj9LzE0QPJyxfObkyDUezBeB962Fub1lU8+Ra1K/mlMUhKkm5+fYJ17GhFdgvmP+R
pzJ9bhFUZQCyzQD70vlcSJm9AuLC9cIYFTFQe9ym2Q0cn2nFX/tF3i+YugC7cJBiR65lgkPNVTQs
IzD8tqrWZxCGKmekJdJ9bDeWzYOJJt2W5KvX1WABi+VgR+4918O08VvVugfLucRVrjv9TY0uc6ww
3N2+V1ncSV1OeIdgicjQIsuBqRFLCU2+dRC9YGr8INfj6K+JjpqUUiOhCwt7ooeWwSJYb/KJWGLM
qB4pVIUU6sDwGMsHBofYxTHlvevwdWFKvhPLgEesdg==
=mb6b
-----END PGP SIGNATURE-----

--BrqMiSmemi6ydmvK1ilhFR5iHXrP0A4Bg--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:29:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135741.252019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLJP-0006PF-Pv; Wed, 02 Jun 2021 07:29:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135741.252019; Wed, 02 Jun 2021 07:29:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLJP-0006P8-Mx; Wed, 02 Jun 2021 07:29:11 +0000
Received: by outflank-mailman (input) for mailman id 135741;
 Wed, 02 Jun 2021 07:29:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLJO-0006P2-FD
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:29:10 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f531261-5557-4ba5-8ef4-ff52e3267927;
 Wed, 02 Jun 2021 07:29:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CE18E2193F;
 Wed,  2 Jun 2021 07:29:08 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id A0EB7118DD;
 Wed,  2 Jun 2021 07:29:08 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id oMt2JUQzt2CWXwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:29:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f531261-5557-4ba5-8ef4-ff52e3267927
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622618948; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FRD2XSEoIRBPaIpNSoD14mHiI1NyAveNGfuhrUEYhx4=;
	b=ug8i7Yrwy5OO1w6bBX3V2SEnJ41Lhf3Prl1Kd1opPh5UVq9XFmqvOrq1kYSjmzICEgdR+6
	B3g6eODzk9nH1qvgdbi/5HXfCTnkkRPil+L+JXGo9LgRoet/V9B2gUEOLgSMEScTQDZqA/
	5cQt/1bOByE9T7xyTP+ocspaWvH+lzc=
Subject: Re: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays
 once
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-10-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
Date: Wed, 2 Jun 2021 09:29:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-10-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JqP2WOeWoCsDjk8z2fkuxqrRLUO30Vxw6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JqP2WOeWoCsDjk8z2fkuxqrRLUO30Vxw6
Content-Type: multipart/mixed; boundary="qLZB84n6oGtax1FIWe3B9RAaR711kfo6I";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
Subject: Re: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays
 once
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-10-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-10-olaf@aepfle.de>

--qLZB84n6oGtax1FIWe3B9RAaR711kfo6I
Content-Type: multipart/mixed;
 boundary="------------9D97CCAD5EAE646C653D930B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9D97CCAD5EAE646C653D930B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> The hotpath 'send_dirty_pages' is supposed to do just one thing: sending.
> The other end 'handle_page_data' is supposed to do just receiving.
>
> But instead both do other costly work like memory allocations and data moving.
> Do the allocations once, the array sizes are a compiletime constant.
> Avoid unneeded copying of data by receiving data directly into mapped guest memory.
>
> This patch is just prepartion, subsequent changes will populate the arrays.
>
> Once all changes are applied, migration of a busy HVM domU changes like that:
>
> Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
> 2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
> 2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
> 2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error
>
> With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
> 2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
> 2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
> 2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
> 2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
> 2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
> 2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/libs/saverestore/common.h  | 8 ++++++++
>   tools/libs/saverestore/restore.c | 8 ++++++++
>   tools/libs/saverestore/save.c    | 4 +++-
>   3 files changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
> index f5fe23caad..80b2e878aa 100644
> --- a/tools/libs/saverestore/common.h
> +++ b/tools/libs/saverestore/common.h
> @@ -223,6 +223,12 @@ static inline int update_blob(struct xc_sr_blob *blob,
>       return 0;
>   }
>
> +struct xc_sr_save_arrays {
> +};
> +
> +struct xc_sr_restore_arrays {
> +};

Can you please add the mfns/pfns arrays to above types, as ...

> +
>   struct xc_sr_context
>   {
>       xc_interface *xch;
> @@ -260,6 +266,7 @@ struct xc_sr_context
>               unsigned long *deferred_pages;
>               unsigned long nr_deferred_pages;
>               xc_hypercall_buffer_t dirty_bitmap_hbuf;
> +            struct xc_sr_save_arrays *m;
>           } save;
>
>           struct /* Restore data. */
> @@ -311,6 +318,7 @@ struct xc_sr_context
>
>               /* Sender has invoked verify mode on the stream. */
>               bool verify;
> +            struct xc_sr_restore_arrays *m;
>           } restore;
>       };
>
> diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
> index 700f9e74b5..a6cf9ee41c 100644
> --- a/tools/libs/saverestore/restore.c
> +++ b/tools/libs/saverestore/restore.c
> @@ -739,6 +739,13 @@ static int setup(struct xc_sr_context *ctx)
>       }
>       ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
>
> +    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
> +    if ( !ctx->restore.m ) {

... this case might trigger without the full series applied, due to
allocating zero bytes (same for the save side below).


Juergen
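The hazard described above comes from C's malloc(0) semantics: a zero-byte request may legitimately return NULL without any allocation failure, so "!ptr means out of memory" misfires while the structs are still empty. A defensive userspace sketch of the workaround; `xmalloc_at_least_one` is an invented name, not part of the patch:

```c
#include <assert.h>
#include <stdlib.h>

/* malloc(0) may return either NULL or a unique freeable pointer, so a
 * plain "ptr == NULL means out of memory" check breaks for a zero-size
 * struct.  Rounding the request up to one byte keeps the check valid.
 * Illustrative helper with an invented name. */
static void *xmalloc_at_least_one(size_t size)
{
    return malloc(size ? size : 1);
}
```

Once the mfns/pfns arrays are added to the structs as requested, sizeof(*ctx->restore.m) is non-zero and plain malloc suffices.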


--------------9D97CCAD5EAE646C653D930B--

--qLZB84n6oGtax1FIWe3B9RAaR711kfo6I--

--JqP2WOeWoCsDjk8z2fkuxqrRLUO30Vxw6
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3M0QFAwAAAAAACgkQsN6d1ii/Ey+c
FQgAiPxvE4C15MXruAvM6L9L9ZB04tKiJN5vtS7oKmVFcstcM6VE9zZsuodiwqkvTeAnTGbzhWcs
PTryaG6cJ5eJHRBy5m0qC3/bPAfRZJK2PWpGdsdZYde7IvRnilNi2h067LuSdX5sw9KUizdS452b
eEgNf2UiGe/muhop7kE2VEzbJAWMoGdPJ8ctTdKZ3Tm9nQZSlZ9pdlH9H+e0AtVyGZlCmEy6vvaK
7p4W59wTq+PNSB9qQWQA1ioltRpvaRHuhmM2OfPBuqfsOBNv6N2TMRrPgMJUjQLt4xiAD0gYv9TE
n5IGSr0D2zgqnteJOgiAwQpFbO9R1G3+yQfN0q9u+Q==
=K0gl
-----END PGP SIGNATURE-----

--JqP2WOeWoCsDjk8z2fkuxqrRLUO30Vxw6--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:31:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135749.252031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLLO-0007mZ-B0; Wed, 02 Jun 2021 07:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135749.252031; Wed, 02 Jun 2021 07:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLLO-0007mS-7N; Wed, 02 Jun 2021 07:31:14 +0000
Received: by outflank-mailman (input) for mailman id 135749;
 Wed, 02 Jun 2021 07:31:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLLN-0007mK-FJ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:31:13 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aedb96b9-3bed-4d2e-babc-f894ca9df54f;
 Wed, 02 Jun 2021 07:31:12 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 198E92193F;
 Wed,  2 Jun 2021 07:31:12 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E0701118DD;
 Wed,  2 Jun 2021 07:31:11 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id xq3cNL8zt2DhYAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aedb96b9-3bed-4d2e-babc-f894ca9df54f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619072; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uFqeCk6U4CpeDuhkTtjiYlPFQ1D028hwHa2JG54tCeM=;
	b=ug/MSGo6TyugeeEk+J6J5XefmnThzPYZZ05/BX6DroUIpHVjRRhMRYHPdelu7v5WASfbSQ
	D57s8sRDQ+IjdgqpK+rFMpUMNrs2Vi+BngKIDFlbJDu92zcml0yHWcwrgoPrbKxFc20rPn
	/hejRe5ZP0mutl+/DsO5smnpj8Daz6M=
Subject: Re: [PATCH v20210601 10/38] tools/guest: save: move batch_pfns
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-11-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <8966b599-c4c3-9bca-1c8c-380ad92a7cb0@suse.com>
Date: Wed, 2 Jun 2021 09:31:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-11-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="OF5rktH7W5i3VBeqZs5mztqOmwvFUer6m"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--OF5rktH7W5i3VBeqZs5mztqOmwvFUer6m
Content-Type: multipart/mixed; boundary="FDcDl8jh94anROYxnET080tQKOtvIKY3l";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <8966b599-c4c3-9bca-1c8c-380ad92a7cb0@suse.com>
Subject: Re: [PATCH v20210601 10/38] tools/guest: save: move batch_pfns
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-11-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-11-olaf@aepfle.de>

--FDcDl8jh94anROYxnET080tQKOtvIKY3l
Content-Type: multipart/mixed;
 boundary="------------C3899046DF924C9B5B9AEE0A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C3899046DF924C9B5B9AEE0A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> The batch_pfns array is already allocated in advance.
> Move it into the preallocated area.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------C3899046DF924C9B5B9AEE0A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C3899046DF924C9B5B9AEE0A--

--FDcDl8jh94anROYxnET080tQKOtvIKY3l--

--OF5rktH7W5i3VBeqZs5mztqOmwvFUer6m
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3M78FAwAAAAAACgkQsN6d1ii/Ey9G
HAgAl4ul0iwSWFsK63dCX82gv/uFyD+T0KkJMR0TKLjS5UBWUvxWpBTVtivw2UtVr+g1k4YUs8j7
z8hI7Rdvas7clD88bD/qNhSFknS8ROH+oU3lOmLWZ+3khU+wuUwcI6Qdbu7e5xQHzcIKHg8HIk8s
Ixb8Fw75UHynt8WfskgYP1MTnLIVd4CznlCNg4naf8ZuKnTAl+exHMvaSyCNzl0j1z9EBL6ulEHp
c83ZQILf+2miRvhJ+OFo3ntqiHt9RvDeJ+XYMSbN51/VvjY58yOSa31D24NUQEQ4ymxlS+nsraN7
yYzWry43ugjtUHnvXsP7sCoDmWpbaoJdRK4clqXPkQ==
=cLpg
-----END PGP SIGNATURE-----

--OF5rktH7W5i3VBeqZs5mztqOmwvFUer6m--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:32:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135754.252041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLMM-0008NA-KR; Wed, 02 Jun 2021 07:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135754.252041; Wed, 02 Jun 2021 07:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLMM-0008N3-HI; Wed, 02 Jun 2021 07:32:14 +0000
Received: by outflank-mailman (input) for mailman id 135754;
 Wed, 02 Jun 2021 07:32:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLML-0008Mt-8M
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:32:13 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ca7dc07-ac35-4288-a0e1-1529c80e6bb5;
 Wed, 02 Jun 2021 07:32:12 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BE85C2193D;
 Wed,  2 Jun 2021 07:32:11 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 909F9118DD;
 Wed,  2 Jun 2021 07:32:11 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id q0tqIfszt2B9YQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:32:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ca7dc07-ac35-4288-a0e1-1529c80e6bb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619131; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XdqI/unly9u6Ya7g5aB/VvN65eSWT+aRglBy8iYJeA8=;
	b=KLaLkP8cCj5ppKNpGR0Tah7qfoJvgjSyaX87JZNQx7q6LOn+tzma+g/aoztqXIbOgbyLJs
	qXmPRSK2yPAAZBSPrrYUy6vt/scXZCsLhFm/4QGdP/rwIusCertCr3VLYkVcHZ1rfHMrXj
	5W/tEw9POYdowMasUBqOhtmqZNU5tUU=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619131; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XdqI/unly9u6Ya7g5aB/VvN65eSWT+aRglBy8iYJeA8=;
	b=KLaLkP8cCj5ppKNpGR0Tah7qfoJvgjSyaX87JZNQx7q6LOn+tzma+g/aoztqXIbOgbyLJs
	qXmPRSK2yPAAZBSPrrYUy6vt/scXZCsLhFm/4QGdP/rwIusCertCr3VLYkVcHZ1rfHMrXj
	5W/tEw9POYdowMasUBqOhtmqZNU5tUU=
Subject: Re: [PATCH v20210601 11/38] tools/guest: save: move mfns array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-12-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <436c86ae-a83e-23b4-3873-a8268708782f@suse.com>
Date: Wed, 2 Jun 2021 09:32:10 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-12-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="mjxMTylHUAWSg9b4QG9SzoMT2FvWDdgUF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--mjxMTylHUAWSg9b4QG9SzoMT2FvWDdgUF
Content-Type: multipart/mixed; boundary="MObr5Q8wMjeip9oPYshrDo2faroZ1ogVC";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <436c86ae-a83e-23b4-3873-a8268708782f@suse.com>
Subject: Re: [PATCH v20210601 11/38] tools/guest: save: move mfns array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-12-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-12-olaf@aepfle.de>

--MObr5Q8wMjeip9oPYshrDo2faroZ1ogVC
Content-Type: multipart/mixed;
 boundary="------------359E83907EAB218368172C52"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------359E83907EAB218368172C52
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move mfns array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------359E83907EAB218368172C52
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------359E83907EAB218368172C52--

--MObr5Q8wMjeip9oPYshrDo2faroZ1ogVC--

--mjxMTylHUAWSg9b4QG9SzoMT2FvWDdgUF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3M/sFAwAAAAAACgkQsN6d1ii/Ey8z
ggf/Ttk1r/gpXfVRznwf4s9HbbFNMI6ussCan+f6Y9KPRrx6Z5y0Itw57SUry47Jm4VVc9GfT8lL
zObQqsySzJcbrh2yX8nizRmrbtusxlYp8e9q7wWWOYfjqBIRzYjmPRYtclY/Q0HQDgOMST1XgaOm
bH+Y5FsYxxCrc/Yu1G7/sf0mE0/+A+Dq5PhvC2aAF4iLn+907E0jQJuRvYVschs8YXIxnFMLKAp8
oo3CDhc45TC+7kHbe0gopZcqv+RJUNbMp/lKGuPyF0JCs27c8qdpSTzLM+Dy5KQtM+3mfxFnV5cX
ortKEWzXx+i1afvBoB0s1DA16AY840l7FZ4ry7vDTA==
=wpeb
-----END PGP SIGNATURE-----

--mjxMTylHUAWSg9b4QG9SzoMT2FvWDdgUF--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:33:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135760.252053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLN6-0000Xc-Tr; Wed, 02 Jun 2021 07:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135760.252053; Wed, 02 Jun 2021 07:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLN6-0000XV-Q7; Wed, 02 Jun 2021 07:33:00 +0000
Received: by outflank-mailman (input) for mailman id 135760;
 Wed, 02 Jun 2021 07:32:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLN5-0000XJ-Rp
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:32:59 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01d7e13f-efcb-4ec9-80db-7b70175176c3;
 Wed, 02 Jun 2021 07:32:59 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6E9C42193F;
 Wed,  2 Jun 2021 07:32:58 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 44C4C118DD;
 Wed,  2 Jun 2021 07:32:58 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id McykDyo0t2DWYQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:32:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01d7e13f-efcb-4ec9-80db-7b70175176c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619178; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nt+elkNpUiv3T3plgQLyzvWSl66fWkVH1n9KUTKj4PY=;
	b=AQGatwrVGstFLBvTjSKqgvGr9uAS+dHYEdew4cicjaj+0fr9NSExjAxrgIRjNALz60fe9R
	H2xh47kcOkIXCEatQIc1j14WB5k3EUFkBVZwhB6102cmnvhEdrID+zCedKSFP458DOetJm
	eYdFMoFeogkZHdlKTgrAlgY9xUdfWx0=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619178; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nt+elkNpUiv3T3plgQLyzvWSl66fWkVH1n9KUTKj4PY=;
	b=AQGatwrVGstFLBvTjSKqgvGr9uAS+dHYEdew4cicjaj+0fr9NSExjAxrgIRjNALz60fe9R
	H2xh47kcOkIXCEatQIc1j14WB5k3EUFkBVZwhB6102cmnvhEdrID+zCedKSFP458DOetJm
	eYdFMoFeogkZHdlKTgrAlgY9xUdfWx0=
Subject: Re: [PATCH v20210601 12/38] tools/guest: save: move types array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-13-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <e6e60e80-285e-9242-34ef-a60cb900a17b@suse.com>
Date: Wed, 2 Jun 2021 09:32:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-13-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="HaPmJPPEjoq53iD3TBG70tO8kCCO0VchU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--HaPmJPPEjoq53iD3TBG70tO8kCCO0VchU
Content-Type: multipart/mixed; boundary="TBZPVvz7CoAXHK0gNQmAL8INRmGa4aIJV";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <e6e60e80-285e-9242-34ef-a60cb900a17b@suse.com>
Subject: Re: [PATCH v20210601 12/38] tools/guest: save: move types array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-13-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-13-olaf@aepfle.de>

--TBZPVvz7CoAXHK0gNQmAL8INRmGa4aIJV
Content-Type: multipart/mixed;
 boundary="------------34374FEF24CB0F20D5BB03A6"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------34374FEF24CB0F20D5BB03A6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move types array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------34374FEF24CB0F20D5BB03A6
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------34374FEF24CB0F20D5BB03A6--

--TBZPVvz7CoAXHK0gNQmAL8INRmGa4aIJV--

--HaPmJPPEjoq53iD3TBG70tO8kCCO0VchU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3NCkFAwAAAAAACgkQsN6d1ii/Ey/6
ewf/fm1LuYyxsvXvwENQySZYi09t1ZaLlzk/KCiEFRCxwVr/XjjHSBd05IBUHyhthz4apTr/vFlB
0YpiChUKXEeTr+7MwpQHDK/4o9bfpQlM3GxyNfkdUfcE9HCIb0lWSHQlZfIDKVWsbeItXaI3qoSs
lwewscvwFvu3zET6XN0TB+TKiIQbICYh/9WSL2xKjAPR30ZBnXTk2VzQvh22uQK7Qu4V0u/tyKI0
KZtgCf5wYb2hE8RY8ONNF1TIWkCd8rD/rE/ddzmD+6FaEknoR+05LxKSoY/2Nu/kzHkaVxGYA1hn
e8lguQb71wMSxvuYjAErk7FpX+Nj/o19OLvZFvRcqg==
=IxEl
-----END PGP SIGNATURE-----

--HaPmJPPEjoq53iD3TBG70tO8kCCO0VchU--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:33:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:33:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135766.252063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLNq-0001H8-6b; Wed, 02 Jun 2021 07:33:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135766.252063; Wed, 02 Jun 2021 07:33:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLNq-0001H1-3P; Wed, 02 Jun 2021 07:33:46 +0000
Received: by outflank-mailman (input) for mailman id 135766;
 Wed, 02 Jun 2021 07:33:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLNo-0001GU-4J
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:33:44 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4f1d14c-41c2-47cb-be32-1560e31d0e9c;
 Wed, 02 Jun 2021 07:33:43 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9F9402193D;
 Wed,  2 Jun 2021 07:33:42 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 76CF8118DD;
 Wed,  2 Jun 2021 07:33:42 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id E7zpG1Y0t2BMYgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:33:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4f1d14c-41c2-47cb-be32-1560e31d0e9c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=u9MygJOAW7Pkxhmg/4CwkTowQSJN/+MFYfuU3uVopAY=;
	b=f/ZSHHfRme0gUik2sjcx3L90F34B1Y4HoetKe38BLsYy/GSjYukCrtJ12V/d7iKbQMjWmA
	U9rEQjiavKE1hJls7DfEuUah5Ht5rv80+lEXwuS7Z2KLGGTj4UTVNwMy9ZblYJ1r6RMpTx
	LADhRL7NhBIk0tkcrGNRwEpcSmnO4dw=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=u9MygJOAW7Pkxhmg/4CwkTowQSJN/+MFYfuU3uVopAY=;
	b=f/ZSHHfRme0gUik2sjcx3L90F34B1Y4HoetKe38BLsYy/GSjYukCrtJ12V/d7iKbQMjWmA
	U9rEQjiavKE1hJls7DfEuUah5Ht5rv80+lEXwuS7Z2KLGGTj4UTVNwMy9ZblYJ1r6RMpTx
	LADhRL7NhBIk0tkcrGNRwEpcSmnO4dw=
Subject: Re: [PATCH v20210601 13/38] tools/guest: save: move errors array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-14-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <6e6a58d6-a31e-6291-904d-c603168c87d6@suse.com>
Date: Wed, 2 Jun 2021 09:33:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-14-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pZjMPdRZEfpgLUXJX5b8zLRYdv3rzN8Jk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pZjMPdRZEfpgLUXJX5b8zLRYdv3rzN8Jk
Content-Type: multipart/mixed; boundary="1Bo2tIVaudSBWN0PVzob5mQPHwryCBpO3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <6e6a58d6-a31e-6291-904d-c603168c87d6@suse.com>
Subject: Re: [PATCH v20210601 13/38] tools/guest: save: move errors array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-14-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-14-olaf@aepfle.de>

--1Bo2tIVaudSBWN0PVzob5mQPHwryCBpO3
Content-Type: multipart/mixed;
 boundary="------------79398960E4A1610D9C5B918F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------79398960E4A1610D9C5B918F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move errors array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------79398960E4A1610D9C5B918F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------79398960E4A1610D9C5B918F--

--1Bo2tIVaudSBWN0PVzob5mQPHwryCBpO3--

--pZjMPdRZEfpgLUXJX5b8zLRYdv3rzN8Jk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3NFUFAwAAAAAACgkQsN6d1ii/Ey9P
VwgAg6sZW53nGhg5iOcq3kzxHPcQa8tX54ckm4e/kuHl+sYQsnonfXzBxhWlrmADL4sZjIh8NUL/
AWLXbH50HsgiyxnPGoV8vGK3s2U4SvTgms1Q77QrARUjg49clxojlsvPvxw9pJcQ2j1qDpdW837R
9wrzbZ2WSkAXxRo/p63esBE1rXMnedMrM5gTVgyVcweKUPy1OtdcsSA1Wq/rKvgPe+q+A1qnnbIA
7L+HsURJT7ktvMZGygPmTrPSj5pqvFXS5zdbJjOtZJp49Jlc9flFX7ahmdor+etN5jek6MSVHLMN
5YUTN8p1vlTaHoICXoAdPRCo3DOUFpUqPwX6dh0onQ==
=49M/
-----END PGP SIGNATURE-----

--pZjMPdRZEfpgLUXJX5b8zLRYdv3rzN8Jk--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:34:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:34:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135776.252075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLOa-00020A-JU; Wed, 02 Jun 2021 07:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135776.252075; Wed, 02 Jun 2021 07:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLOa-000203-GT; Wed, 02 Jun 2021 07:34:32 +0000
Received: by outflank-mailman (input) for mailman id 135776;
 Wed, 02 Jun 2021 07:34:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLOZ-0001zx-V4
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:34:31 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3949222-1d74-49ed-bed7-132ce89c81a3;
 Wed, 02 Jun 2021 07:34:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 16FFE2193F;
 Wed,  2 Jun 2021 07:34:30 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id D98A2118DD;
 Wed,  2 Jun 2021 07:34:29 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id /DYBNIU0t2C3YgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:34:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3949222-1d74-49ed-bed7-132ce89c81a3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=plDgEIEyJDVJJKWVlWnlTb4/kyDO2LdSUEkr/RMnDqw=;
	b=gGHnPGOMfDMrAgPIVeIAhHHH9TYkUhO7K1CXJ2A3+i+gdXFBa162FFyJggR7k5qD+suhQN
	CDXxU37KHDaK1YYJLODV/pq31LmeMk9L4T2hvjFC2gPgTEo9RGVLyYU3MMsScEjY1lW1V8
	29ollBv/VaQqHbNWKX2OcJKFtkRwRFY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=plDgEIEyJDVJJKWVlWnlTb4/kyDO2LdSUEkr/RMnDqw=;
	b=gGHnPGOMfDMrAgPIVeIAhHHH9TYkUhO7K1CXJ2A3+i+gdXFBa162FFyJggR7k5qD+suhQN
	CDXxU37KHDaK1YYJLODV/pq31LmeMk9L4T2hvjFC2gPgTEo9RGVLyYU3MMsScEjY1lW1V8
	29ollBv/VaQqHbNWKX2OcJKFtkRwRFY=
Subject: Re: [PATCH v20210601 14/38] tools/guest: save: move iov array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-15-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <36cfcad2-911b-f3f2-e7a5-858a63cc811e@suse.com>
Date: Wed, 2 Jun 2021 09:34:29 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-15-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="R148r7t3CpJFOcyGjWoHFpEsr1GI582c5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--R148r7t3CpJFOcyGjWoHFpEsr1GI582c5
Content-Type: multipart/mixed; boundary="55QsapiVKKlj9K0nGApfRjyJrPsKmIKKr";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <36cfcad2-911b-f3f2-e7a5-858a63cc811e@suse.com>
Subject: Re: [PATCH v20210601 14/38] tools/guest: save: move iov array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-15-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-15-olaf@aepfle.de>

--55QsapiVKKlj9K0nGApfRjyJrPsKmIKKr
Content-Type: multipart/mixed;
 boundary="------------51A96DC743D4BB826B590670"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------51A96DC743D4BB826B590670
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move iov array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------51A96DC743D4BB826B590670
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------51A96DC743D4BB826B590670--

--55QsapiVKKlj9K0nGApfRjyJrPsKmIKKr--

--R148r7t3CpJFOcyGjWoHFpEsr1GI582c5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3NIUFAwAAAAAACgkQsN6d1ii/Ey+b
uwf+MpbBzop1drmx79BzTXt+WHAd87DlwQcK0cr5oXFaOWW0HY5RiXpeN559qvIju6SXtiqa8wEv
t7KuCNZYJalRo0gZ1YWhjjEiutd09gIpV4Oe2+jfk9LCCwGXDN4Jjjaja1emjbcsD+lQ5yS2vB1P
g2o378AEl7iECYUO0+Qo9RURISTrHu6D99LSf2xPYKWi3LX0/0UR+sMw3eZ0tuwMejJboaNWiZCv
5/PRA1R8sPqpznqjGjd5K9BqV94b48J0hO4dAqFsR3EU5Ja5nPlMHzFEZzJAKnOBkM1roJiba1o5
Jr8RbTHTIbM9bqEBMym1OUa1l8LUv11qdMUOaMY8HQ==
=dQZ/
-----END PGP SIGNATURE-----

--R148r7t3CpJFOcyGjWoHFpEsr1GI582c5--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:35:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135782.252086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLPK-0002ct-Sc; Wed, 02 Jun 2021 07:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135782.252086; Wed, 02 Jun 2021 07:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLPK-0002cm-Pc; Wed, 02 Jun 2021 07:35:18 +0000
Received: by outflank-mailman (input) for mailman id 135782;
 Wed, 02 Jun 2021 07:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLPK-0002cc-19
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:35:18 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9da59258-1e73-43dc-b3d3-6c4fcd2dd33f;
 Wed, 02 Jun 2021 07:35:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8A0451FD2D;
 Wed,  2 Jun 2021 07:35:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 62EE0118DD;
 Wed,  2 Jun 2021 07:35:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id z5eXFrQ0t2BTYwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:35:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9da59258-1e73-43dc-b3d3-6c4fcd2dd33f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619316; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=G1tapD+rwuwU68G/npUwwSpAll87l3UF7obykLLv+TI=;
	b=J/ZRRME/QHXCgdJnrqZhb1SHJcWu0aTctDkh8epkww4HWq0uSCaye4Tew+AIOPcg/QKD7Y
	YbcyzjxZmGD03BycFpov3OFuIR4pCLJPl9cvR5jm6L/snWrmlYSOV9rTZtHCCG99emYCn1
	r58jiYE4n8hU6/K5CKF0csBqZrhJV3s=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619316; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=G1tapD+rwuwU68G/npUwwSpAll87l3UF7obykLLv+TI=;
	b=J/ZRRME/QHXCgdJnrqZhb1SHJcWu0aTctDkh8epkww4HWq0uSCaye4Tew+AIOPcg/QKD7Y
	YbcyzjxZmGD03BycFpov3OFuIR4pCLJPl9cvR5jm6L/snWrmlYSOV9rTZtHCCG99emYCn1
	r58jiYE4n8hU6/K5CKF0csBqZrhJV3s=
Subject: Re: [PATCH v20210601 15/38] tools/guest: save: move rec_pfns array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-16-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <0c8d8695-9dbf-73ce-a20c-be2550323812@suse.com>
Date: Wed, 2 Jun 2021 09:35:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-16-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KZzvqOXYGR8daBUbMZ2IXWFvL10SyO5UD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KZzvqOXYGR8daBUbMZ2IXWFvL10SyO5UD
Content-Type: multipart/mixed; boundary="CYXkfJV7bDN2TLz0ZWkqUlf2ygZVaSvJe";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <0c8d8695-9dbf-73ce-a20c-be2550323812@suse.com>
Subject: Re: [PATCH v20210601 15/38] tools/guest: save: move rec_pfns array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-16-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-16-olaf@aepfle.de>

--CYXkfJV7bDN2TLz0ZWkqUlf2ygZVaSvJe
Content-Type: multipart/mixed;
 boundary="------------29811488CD2CBEE334E28AE6"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------29811488CD2CBEE334E28AE6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move rec_pfns array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------29811488CD2CBEE334E28AE6
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------29811488CD2CBEE334E28AE6--

--CYXkfJV7bDN2TLz0ZWkqUlf2ygZVaSvJe--

--KZzvqOXYGR8daBUbMZ2IXWFvL10SyO5UD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3NLMFAwAAAAAACgkQsN6d1ii/Ey/S
xAf/VRs/IuwdyRQduViKPANCvIrnckmM+LOrAmK69brI5OQXJtrm404sAv9oIgz92s/Va1ZMCrUP
2MyQiEFXAJ9Hp9+AUpHNDlNYjaS3+ZjKCXc/1svh3U2LSJAwcjiI4GILVky9DmWRzECON36zENnm
4bXxPWJl8T7BL/TZTKiojTQemeiSt22ofV8UvZkwhLlKsAt/neBRN08xkhbYuXrZeC6d7REEmvB2
4OR8R5J91fm8IRtgRc6DBFLMKIFtqI6/H9zaYVfijWDMfDt2zjtlid6nXCJwyNU3Pq+8TEu8Mmm1
9y3fiTkXo84JhptuoHnIlE93v1FrJ2s9WDMdFBB5NA==
=d5wm
-----END PGP SIGNATURE-----

--KZzvqOXYGR8daBUbMZ2IXWFvL10SyO5UD--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:39:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135794.252096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLTP-0003No-DJ; Wed, 02 Jun 2021 07:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135794.252096; Wed, 02 Jun 2021 07:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLTP-0003Nh-AJ; Wed, 02 Jun 2021 07:39:31 +0000
Received: by outflank-mailman (input) for mailman id 135794;
 Wed, 02 Jun 2021 07:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLTO-0003Nb-En
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:39:30 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2bc7dbe3-03e7-4f1b-b14e-8e7a00630679;
 Wed, 02 Jun 2021 07:39:29 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CABEF2193D;
 Wed,  2 Jun 2021 07:39:28 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 9A239118DD;
 Wed,  2 Jun 2021 07:39:28 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id NLVuJLA1t2AsZQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:39:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bc7dbe3-03e7-4f1b-b14e-8e7a00630679
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619568; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5YyjPpECNHKWR4pMbjUJXgAsKPlFc821WEgTccPosYI=;
	b=Fa3zdvFrDoO/QVSuWjRtpHXvsKFj7huF1e9pFkV6VcPzSljmmUHGX/TCWfe3nMu7xLjYg9
	kFh/mTn5H2TjEGnZIf4/aE37ziPGYBSc/7r8RtVmrwG2nCLJPPBTGv6Wh6cLemUzOkIJfV
	wvqa5AnMjsnpC//Y3GN3QMERilekJgs=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622619568; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5YyjPpECNHKWR4pMbjUJXgAsKPlFc821WEgTccPosYI=;
	b=Fa3zdvFrDoO/QVSuWjRtpHXvsKFj7huF1e9pFkV6VcPzSljmmUHGX/TCWfe3nMu7xLjYg9
	kFh/mTn5H2TjEGnZIf4/aE37ziPGYBSc/7r8RtVmrwG2nCLJPPBTGv6Wh6cLemUzOkIJfV
	wvqa5AnMjsnpC//Y3GN3QMERilekJgs=
Subject: Re: [PATCH v20210601 16/38] tools/guest: save: move guest_data array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-17-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <81ee4f02-8b22-3ed9-55e3-7af3a6ed795c@suse.com>
Date: Wed, 2 Jun 2021 09:39:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-17-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Wphx9M9rz0GXn6OccVLJ2si4D6894xn4J"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Wphx9M9rz0GXn6OccVLJ2si4D6894xn4J
Content-Type: multipart/mixed; boundary="Cycj0iVtsWnLMG0RxgfljiKjZKR0x4aZq";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <81ee4f02-8b22-3ed9-55e3-7af3a6ed795c@suse.com>
Subject: Re: [PATCH v20210601 16/38] tools/guest: save: move guest_data array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-17-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-17-olaf@aepfle.de>

--Cycj0iVtsWnLMG0RxgfljiKjZKR0x4aZq
Content-Type: multipart/mixed;
 boundary="------------E583A82856C18751EFE1717B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E583A82856C18751EFE1717B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move guest_data array into preallocated space.
>
> Because this was allocated with calloc:
> Adjust the loop to clear unused entries as needed.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------E583A82856C18751EFE1717B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E583A82856C18751EFE1717B--

--Cycj0iVtsWnLMG0RxgfljiKjZKR0x4aZq--

--Wphx9M9rz0GXn6OccVLJ2si4D6894xn4J
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3NbAFAwAAAAAACgkQsN6d1ii/Ey8V
mQf/YgKpPKnh96BRE/I2jDYrlcC82minDiNl2Oarg3D1QM7Ytiq2pDlQ3d9ANDntJhhJEisoCpgB
7gOnQLhQEHD0wDOALNj9jYiVQrv49+leNBHWn5k+2YyFFjeeRrb+b0I1P3AIQcsMLkzHFstXJ0XG
POlAF6KLfRgj6jSnThp+g78ABfmhrXr2W4SZz0evLUWvInUlfX8yGH3Q/wBL/oVgo0U3KTmIKsb2
K4fxb0+ueEmr7VtfKlFxon9ncIyBzdABP+2PnBSpl6isZQs6GNczNRddBgul1AqV9r8zHJizgJ4J
jgfBs+SiDbpmfRLiBJ2erhqDo9YosHqVrNyefxAaTQ==
=PY+1
-----END PGP SIGNATURE-----

--Wphx9M9rz0GXn6OccVLJ2si4D6894xn4J--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:47:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135802.252107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLbP-0004wy-8y; Wed, 02 Jun 2021 07:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135802.252107; Wed, 02 Jun 2021 07:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLbP-0004wr-64; Wed, 02 Jun 2021 07:47:47 +0000
Received: by outflank-mailman (input) for mailman id 135802;
 Wed, 02 Jun 2021 07:47:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLbN-0004wj-B4
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:47:45 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id febcdaa1-d9ab-463a-852b-cca54997528e;
 Wed, 02 Jun 2021 07:47:44 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C33181FD32;
 Wed,  2 Jun 2021 07:47:43 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 880FA118DD;
 Wed,  2 Jun 2021 07:47:43 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id snPpH583t2AAagAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:47:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: febcdaa1-d9ab-463a-852b-cca54997528e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=W/hZveXRTQSaVIUZ+OtQzoJHKfGtem4yQGKFo/FbG+o=;
	b=AmU8Eol/SYR9Bq6eu1WrY+mxIXngTW7ezV5L++NIbsPiJqIkDgym2DAZT1gEPkY7Ibk7ae
	lTsjtpgEITeAhMVZwNp+tMR5Do3rI34DEepWWfyL38fVkAncYnFZEPTrbyRB+A4O1yfkse
	SzKAfU6e+60bdhucFWHrjF4BW/3wkno=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=W/hZveXRTQSaVIUZ+OtQzoJHKfGtem4yQGKFo/FbG+o=;
	b=AmU8Eol/SYR9Bq6eu1WrY+mxIXngTW7ezV5L++NIbsPiJqIkDgym2DAZT1gEPkY7Ibk7ae
	lTsjtpgEITeAhMVZwNp+tMR5Do3rI34DEepWWfyL38fVkAncYnFZEPTrbyRB+A4O1yfkse
	SzKAfU6e+60bdhucFWHrjF4BW/3wkno=
Subject: Re: [PATCH v20210601 17/38] tools/guest: save: move local_pages array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-18-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <3f3db866-3d91-093a-afad-509bc9299436@suse.com>
Date: Wed, 2 Jun 2021 09:47:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-18-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CW55JAK8xu8dbaIJhIC44GiPMozkQ0u6F"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CW55JAK8xu8dbaIJhIC44GiPMozkQ0u6F
Content-Type: multipart/mixed; boundary="uG5lSYoGiII1zxLQSvRMGEDNZYlZ9b1NN";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <3f3db866-3d91-093a-afad-509bc9299436@suse.com>
Subject: Re: [PATCH v20210601 17/38] tools/guest: save: move local_pages array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-18-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-18-olaf@aepfle.de>

--uG5lSYoGiII1zxLQSvRMGEDNZYlZ9b1NN
Content-Type: multipart/mixed;
 boundary="------------C509216F741D834FB445714B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C509216F741D834FB445714B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move local_pages array into preallocated space.
>
> Adjust the code to use the src page as is in case of HVM.
> In case of PV the page may need to be normalised, use a private memory
> area for this purpose.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------C509216F741D834FB445714B--

--uG5lSYoGiII1zxLQSvRMGEDNZYlZ9b1NN--

--CW55JAK8xu8dbaIJhIC44GiPMozkQ0u6F
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3N58FAwAAAAAACgkQsN6d1ii/Ey/o
7gf/aNS8QgU8qH9L7beks2Ie6kpMKWQhoaM5mHsI0mY7Hw5BpXn2PKUuxlLgwuRyUYNF4IwoRKY7
ryFytFgHgK2swgyuKGcHL/BCJBvf5gZXl3LUQTE61kPiKC+dEpFXH3tIe/2e2LMxBiXv6ZYtz9i3
DkguaD2y+300ENMogzTe19kxRANAExrYPOyi4popPSgLi8qJmlg4j2hJ4hbF2nrWgGIEUPFVLj4b
bhyEQS7QCtciM/6n2Ub9avrw2VLClUKF4XYXYO98VJ9ZD37Pxs0TIr32Gw+EU0HTwXOWYTb52Rvy
gOYaBCSJtL80/hCLAveOhNHTSWkg/uX4gAd30rRpww==
=38DQ
-----END PGP SIGNATURE-----

--CW55JAK8xu8dbaIJhIC44GiPMozkQ0u6F--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:50:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:50:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135809.252119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLe4-0006NR-T6; Wed, 02 Jun 2021 07:50:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135809.252119; Wed, 02 Jun 2021 07:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLe4-0006NK-Q5; Wed, 02 Jun 2021 07:50:32 +0000
Received: by outflank-mailman (input) for mailman id 135809;
 Wed, 02 Jun 2021 07:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VmpI=K4=ionos.com=jinpu.wang@srs-us1.protection.inumbo.net>)
 id 1loLdE-0005e5-Ni
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:49:40 +0000
Received: from mail-ed1-x52b.google.com (unknown [2a00:1450:4864:20::52b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d7431d4-17ab-4f56-a0fa-2a96f54c793a;
 Wed, 02 Jun 2021 07:49:38 +0000 (UTC)
Received: by mail-ed1-x52b.google.com with SMTP id b17so1849503ede.0
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 00:49:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d7431d4-17ab-4f56-a0fa-2a96f54c793a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ionos.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=IviQGHXbohDYlY4Kaxm/cxlbY44V3tAurEYksJ8dtbE=;
        b=jCNYXrovW1sUHaOT/+lw4PCDhP0eyRFf+ELw7YXWBCDBnLin+ptT9Qqbj9mRkWWaUg
         wGBYm/DlmYSjFuDFdCiAFeYUhwBx3bN1zWIfEq4U/R6NzUaRvMrhsPRnKTgM5MVFJVxB
         CmVxljvKTCN3ZyT1LwzUVWKZgE4x9cRuQaj0TjoJvUnlN68E3Mt+4zAfSoenpZB5Ofxb
         0XLwmhqFLm1lOHsISGA/CxlwAihZibid3kenEAmMklVKKOHOy1nSq3KrSTTjFuaKWHai
         u516kaTGPG4uhGM5WuZ/+lvOuzezIo89jko54Ak3T4r5ol1iQsDze8x0V5mTappbPtSr
         aaWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=IviQGHXbohDYlY4Kaxm/cxlbY44V3tAurEYksJ8dtbE=;
        b=qBuXwlGxZNwWnVUzcerQDHisE1WetQkxcHtGE0Jjlp1WGRoZbUTY+/tApYJFX/waA3
         d2YMbwV+dwQTkfgYRm8Y8/FsnnEKOzW8CNwZaswJG+7FS+wOHsow2nRAdCpGUerf3kSm
         E0kVEPWVBuv4CHOl3Cv4mwd/oY7Cb+i9jRnJzJEt6RNYT4MRtxAi/Sr1p2vWWwSzg/hn
         QZIbPcPXvJM0sryKVDURvJCiX1BMt7nmsm0UHsQ89ymTmEyXpOjlQ+l3TFQgov6xXmDg
         Aja5ZKo/Zx2WPtAuuYYZVMyC7jUmArnOoWqe6Jo99A4dCxq45S5FD25HzL9qeA4aqF05
         OcLg==
X-Gm-Message-State: AOAM530Hz8jzmazHt4QU1V3/O3FZj9BEQwAVVI8VTzUFJW21kIPTEVPI
	XhZYkNxOUganVdUCsviWTc5HdyzAxiY3/dWGoZi4nA==
X-Google-Smtp-Source: ABdhPJzRCVGimPnGT3qQlEAMsZsY3IxcZZF186UcWgTyI7ay8iWe4IlRIEG5WowCojeDoPju71hCmgBQ+xR7PgbbWL0=
X-Received: by 2002:aa7:c693:: with SMTP id n19mr33130385edq.35.1622620178044;
 Wed, 02 Jun 2021 00:49:38 -0700 (PDT)
MIME-Version: 1.0
References: <20210602065345.355274-1-hch@lst.de> <20210602065345.355274-24-hch@lst.de>
In-Reply-To: <20210602065345.355274-24-hch@lst.de>
From: Jinpu Wang <jinpu.wang@ionos.com>
Date: Wed, 2 Jun 2021 09:49:27 +0200
Message-ID: <CAMGffEn7aCmTOTsuzbSr=DwomFKfizkNhzsZnAONHBq1neW2Og@mail.gmail.com>
Subject: Re: [PATCH 23/30] rnbd: use blk_mq_alloc_disk and blk_cleanup_disk
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Denis Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, 
	Geoff Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, 
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>, 
	Alex Dubov <oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>, 
	Richard Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, 
	Vasily Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, 
	device-mapper development <dm-devel@redhat.com>, linux-block <linux-block@vger.kernel.org>, nbd@other.debian.org, 
	linuxppc-dev@lists.ozlabs.org, Ceph Development <ceph-devel@vger.kernel.org>, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org, 
	linux-s390@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 2, 2021 at 8:55 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
> request_queue allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/rnbd/rnbd-clt.c | 35 ++++++++---------------------------
>  1 file changed, 8 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
> index c604a402cd5c..f4fa45d24c0b 100644
> --- a/drivers/block/rnbd/rnbd-clt.c
> +++ b/drivers/block/rnbd/rnbd-clt.c
> @@ -1353,18 +1353,6 @@ static void rnbd_init_mq_hw_queues(struct rnbd_clt_dev *dev)
>         }
>  }
>
> -static int setup_mq_dev(struct rnbd_clt_dev *dev)
> -{
> -       dev->queue = blk_mq_init_queue(&dev->sess->tag_set);
> -       if (IS_ERR(dev->queue)) {
> -               rnbd_clt_err(dev, "Initializing multiqueue queue failed, err: %ld\n",
> -                             PTR_ERR(dev->queue));
> -               return PTR_ERR(dev->queue);
> -       }
> -       rnbd_init_mq_hw_queues(dev);
> -       return 0;
> -}
> -
>  static void setup_request_queue(struct rnbd_clt_dev *dev)
>  {
>         blk_queue_logical_block_size(dev->queue, dev->logical_block_size);
> @@ -1393,13 +1381,13 @@ static void setup_request_queue(struct rnbd_clt_dev *dev)
>         blk_queue_io_opt(dev->queue, dev->sess->max_io_size);
>         blk_queue_virt_boundary(dev->queue, SZ_4K - 1);
>         blk_queue_write_cache(dev->queue, dev->wc, dev->fua);
> -       dev->queue->queuedata = dev;
>  }
>
>  static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
>  {
>         dev->gd->major          = rnbd_client_major;
>         dev->gd->first_minor    = idx << RNBD_PART_BITS;
> +       dev->gd->minors         = 1 << RNBD_PART_BITS;
>         dev->gd->fops           = &rnbd_client_ops;
>         dev->gd->queue          = dev->queue;
>         dev->gd->private_data   = dev;
> @@ -1426,24 +1414,18 @@ static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)
>
>  static int rnbd_client_setup_device(struct rnbd_clt_dev *dev)
>  {
> -       int err, idx = dev->clt_device_id;
> +       int idx = dev->clt_device_id;
>
>         dev->size = dev->nsectors * dev->logical_block_size;
>
> -       err = setup_mq_dev(dev);
> -       if (err)
> -               return err;
> +       dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, dev);
> +       if (IS_ERR(dev->gd))
> +               return PTR_ERR(dev->gd);
> +       dev->queue = dev->gd->queue;
> +       rnbd_init_mq_hw_queues(dev);
>
>         setup_request_queue(dev);
> -
> -       dev->gd = alloc_disk_node(1 << RNBD_PART_BITS,  NUMA_NO_NODE);
> -       if (!dev->gd) {
> -               blk_cleanup_queue(dev->queue);
> -               return -ENOMEM;
> -       }
> -
>         rnbd_clt_setup_gen_disk(dev, idx);
> -
>         return 0;
>  }
>
> @@ -1650,8 +1632,7 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
>  static void destroy_gen_disk(struct rnbd_clt_dev *dev)
>  {
>         del_gendisk(dev->gd);
> -       blk_cleanup_queue(dev->queue);
> -       put_disk(dev->gd);
> +       blk_cleanup_disk(dev->gd);
>  }
>
>  static void destroy_sysfs(struct rnbd_clt_dev *dev,
> --
> 2.30.2

Looks good to me, thx!
Reviewed-by: Jack Wang <jinpu.wang@ionos.com>
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:56:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135819.252129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLjN-0007DP-HU; Wed, 02 Jun 2021 07:56:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135819.252129; Wed, 02 Jun 2021 07:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLjN-0007DI-Eb; Wed, 02 Jun 2021 07:56:01 +0000
Received: by outflank-mailman (input) for mailman id 135819;
 Wed, 02 Jun 2021 07:56:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLjM-0007DC-P2
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:56:00 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1195f5bc-f4ec-4072-83b1-a6e652f0291f;
 Wed, 02 Jun 2021 07:56:00 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5A6EB1FD32;
 Wed,  2 Jun 2021 07:55:59 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 305E6118DD;
 Wed,  2 Jun 2021 07:55:59 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 4daiCo85t2A4bwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:55:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1195f5bc-f4ec-4072-83b1-a6e652f0291f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620559; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lDA9TUNht5hmiBOixkTP9FfS0PJlhzhZ5L9zCSlSxI8=;
	b=il4JHD0uPCDQxpQ7RTxO+oxz4D+WyPjO0GWU51WU0LsspTm19/vrBvNtdx4SWzjB3SjU8Y
	KKGCcbsY1M4RgXTedMy1smub8TjwLAvrnkK/Ljvdm68mj/dwWuflJdmsL80OZaAGQ16l7T
	z1TiWYlNtYpGDsvTEd43Ao2lEg4Q2oE=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620559; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lDA9TUNht5hmiBOixkTP9FfS0PJlhzhZ5L9zCSlSxI8=;
	b=il4JHD0uPCDQxpQ7RTxO+oxz4D+WyPjO0GWU51WU0LsspTm19/vrBvNtdx4SWzjB3SjU8Y
	KKGCcbsY1M4RgXTedMy1smub8TjwLAvrnkK/Ljvdm68mj/dwWuflJdmsL80OZaAGQ16l7T
	z1TiWYlNtYpGDsvTEd43Ao2lEg4Q2oE=
Subject: Re: [PATCH v20210601 18/38] tools/guest: restore: move pfns array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-19-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <9674d45a-a157-2e30-5ddf-3ff3b1e714df@suse.com>
Date: Wed, 2 Jun 2021 09:55:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-19-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="BgJq67yKiNgNIafB3hD5bimHBwpLCSN0P"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--BgJq67yKiNgNIafB3hD5bimHBwpLCSN0P
Content-Type: multipart/mixed; boundary="ZRBLfFKUFWommsCIzGghRztptxQHfcleR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <9674d45a-a157-2e30-5ddf-3ff3b1e714df@suse.com>
Subject: Re: [PATCH v20210601 18/38] tools/guest: restore: move pfns array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-19-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-19-olaf@aepfle.de>

--ZRBLfFKUFWommsCIzGghRztptxQHfcleR
Content-Type: multipart/mixed;
 boundary="------------E9BCE60B769B01B30D26741D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E9BCE60B769B01B30D26741D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move pfns array into preallocated space.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------E9BCE60B769B01B30D26741D--

--ZRBLfFKUFWommsCIzGghRztptxQHfcleR--

--BgJq67yKiNgNIafB3hD5bimHBwpLCSN0P
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3OY4FAwAAAAAACgkQsN6d1ii/Ey8R
Wgf+NbYjoBGvVFSCtN8qHhfAHXBeT3PivNQPqYq8hULP+gpmdnsNDEEG951zDxpgbnQBrJ5dyZu1
p6MWSBG11xiQLpHkClp5R2FHbej+j9DE6Yt81cVMCGtTjXfxelDdrL/QIItQH2POkhRgBv4HHH2l
Q0Y/NoOZ1u+fynj2/nn6j63Vt753XLyat0cATZXRaCcVpyp0XtsaMGUu6kLaf/VXgnGN6a+rcNhy
oKiGUnnFUMRh09oYbLlNxgoHfdpvWC4KiNgzMInGndKX9qy21L4uJuDRXqC5v+rLgcuxXCDmTndC
oNTqtgj30cJrhzLGdETzW3A5Wg1hDDt3+ECiiHUoDw==
=QDHg
-----END PGP SIGNATURE-----

--BgJq67yKiNgNIafB3hD5bimHBwpLCSN0P--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:56:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:56:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135824.252141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLjz-0007ly-Sf; Wed, 02 Jun 2021 07:56:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135824.252141; Wed, 02 Jun 2021 07:56:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLjz-0007lr-Og; Wed, 02 Jun 2021 07:56:39 +0000
Received: by outflank-mailman (input) for mailman id 135824;
 Wed, 02 Jun 2021 07:56:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLjy-0007lh-3Q
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:56:38 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 273a4012-a9eb-4047-8ca3-87c4d7db80b1;
 Wed, 02 Jun 2021 07:56:37 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 93CF621919;
 Wed,  2 Jun 2021 07:56:36 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 6B5C5118DD;
 Wed,  2 Jun 2021 07:56:36 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id IuAiGbQ5t2CUbwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 273a4012-a9eb-4047-8ca3-87c4d7db80b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620596; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2vp0TvE81gOAjRc01OAyTPeCwSJLALPq/ZZ2JdcWy8Q=;
	b=IYmP4OeYvj6vvjNbBf2CcP+B48NgTMPpaX8d4EJXoO1EP16wrNN/y8WYz0JtdKDlN0HFd+
	HKluStMwmz8hc5S6GgLJjCGgMyi4OEMGpJ/6UZrqkocNJDx63kgKbVPvKYKdLyEIiiMzAg
	hF7P+irxbz0kj7UszGS2KwUDnCps2fM=
Subject: Re: [PATCH v20210601 19/38] tools/guest: restore: move types array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-20-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <67fa1586-d9c6-27dd-4a8d-63180e75e988@suse.com>
Date: Wed, 2 Jun 2021 09:56:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-20-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ockjvEqNNyEzc2aZVT68kJ6jHtDMkiNYk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ockjvEqNNyEzc2aZVT68kJ6jHtDMkiNYk
Content-Type: multipart/mixed; boundary="1pAsy5dsErpIbKMm3jPbvUEMWyngj371D";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <67fa1586-d9c6-27dd-4a8d-63180e75e988@suse.com>
Subject: Re: [PATCH v20210601 19/38] tools/guest: restore: move types array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-20-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-20-olaf@aepfle.de>

--1pAsy5dsErpIbKMm3jPbvUEMWyngj371D
Content-Type: multipart/mixed;
 boundary="------------DA979EBDA5B9B497697F172E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------DA979EBDA5B9B497697F172E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:10, Olaf Hering wrote:
> Remove allocation from hotpath, move types array into preallocated space.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------DA979EBDA5B9B497697F172E
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------DA979EBDA5B9B497697F172E--

--1pAsy5dsErpIbKMm3jPbvUEMWyngj371D--

--ockjvEqNNyEzc2aZVT68kJ6jHtDMkiNYk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3ObMFAwAAAAAACgkQsN6d1ii/Ey/A
dwf/fd/bkEEq+0rDTbFC90f2H0w6xAH2PZE9gigDN/a5li/auQx+HfGuorBWzIxMmVV7qbGEhb9W
/DhM1B2hpf2IEkkLmzPZfbJsFD/anybBGxSa0SqEUnlyAZ0OoGfLc/lLqUQE4mi/HYaV44FokfwO
sUGVs6SDpGeaqBSnW+SWCDakKNYAcJQb0RnUN23OLHT5TyPDJp5wa8Ddnxc2tXomQOrYS8bA39IJ
3msKrFv9iDF+xwrvR+VxUoJO9g5pnzEcveKz2y92WTVjP6XoAAzhgGy9zECckwvL4Z60cGwA/iid
esNhCBhiLeEkPqmXZ85A75SGfSBK7hHKe9d3xWU/vg==
=E69M
-----END PGP SIGNATURE-----

--ockjvEqNNyEzc2aZVT68kJ6jHtDMkiNYk--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:57:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135830.252151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLkX-0008Lk-5D; Wed, 02 Jun 2021 07:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135830.252151; Wed, 02 Jun 2021 07:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLkX-0008Ld-2E; Wed, 02 Jun 2021 07:57:13 +0000
Received: by outflank-mailman (input) for mailman id 135830;
 Wed, 02 Jun 2021 07:57:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLkV-0008LN-Ox
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:57:11 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1daef9c-eaf5-4431-a42f-48f6360ad5fe;
 Wed, 02 Jun 2021 07:57:11 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 729B31FD46;
 Wed,  2 Jun 2021 07:57:10 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 49CEF118DD;
 Wed,  2 Jun 2021 07:57:10 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 0/ToENY5t2D8bwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:57:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1daef9c-eaf5-4431-a42f-48f6360ad5fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620630; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6ef/3Ok9icZBbk2XmdzO6UCbU5knbRnAldb5dm9VeKE=;
	b=uBSNSudElmYUe4+N9c2LBoa9u2n5z4dfSVDxvf/ko4WsEhYLPUsDArLB9UobSYoN1H/f59
	flScW4erO8KE855k/AFE6P/QEwoNivScwCYW6zd3spxOs43fF0lJbwAPTzGzyI15sgirnH
	MNzUW9SeMRjcqIFOvCYQU6SMEOK8cAs=
Subject: Re: [PATCH v20210601 20/38] tools/guest: restore: move mfns array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-21-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <056e5d22-5623-885a-306f-de2562ab8be5@suse.com>
Date: Wed, 2 Jun 2021 09:57:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-21-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="StoZGRPYdzogeEthgpyDodLcUvYFRQE1O"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--StoZGRPYdzogeEthgpyDodLcUvYFRQE1O
Content-Type: multipart/mixed; boundary="mXSL8ZjDDOj2BVqb9bqL88NMHVS8yIg1G";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <056e5d22-5623-885a-306f-de2562ab8be5@suse.com>
Subject: Re: [PATCH v20210601 20/38] tools/guest: restore: move mfns array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-21-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-21-olaf@aepfle.de>

--mXSL8ZjDDOj2BVqb9bqL88NMHVS8yIg1G
Content-Type: multipart/mixed;
 boundary="------------AF82042F449A286D0DF0E44A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AF82042F449A286D0DF0E44A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:11, Olaf Hering wrote:
> Remove allocation from hotpath, move mfns array into preallocated space.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------AF82042F449A286D0DF0E44A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------AF82042F449A286D0DF0E44A--

--mXSL8ZjDDOj2BVqb9bqL88NMHVS8yIg1G--

--StoZGRPYdzogeEthgpyDodLcUvYFRQE1O
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3OdUFAwAAAAAACgkQsN6d1ii/Ey+o
6Qf/enrjTf1V9KQ0eBqXuMDfDqZ3mP2HqUDaSPGgxYCK/68kP15AFROpIF1+Bl9cW7uSv7eHdOu4
ilPHhqUuFIPYzqyr9cu1gIf8sq+vC+RN4sH+VlKIf+OQu586VUOecmrhBreEBCnwPxQSo4VIygwD
ETgtMP8RYlgkJGC8cE9DojKXqZYtgII9fsSZvMtBUFmMU9FHLRlVva4iAp3zXCBJeH5qLTb5QH8c
3z7Ll9yOTF/gPXhIArJvlyDNYbQVUue9IEjF8IyZk0H+eWOTDgbfP1sWYaoLaGgITQvEj+TzW6uM
Me8/CYf5pkjsKhVddyi4uUBSgj6yMCTzRL3ythbmvA==
=LiSf
-----END PGP SIGNATURE-----

--StoZGRPYdzogeEthgpyDodLcUvYFRQE1O--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:58:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:58:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135839.252163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLlM-0000bo-It; Wed, 02 Jun 2021 07:58:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135839.252163; Wed, 02 Jun 2021 07:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLlM-0000bh-Fb; Wed, 02 Jun 2021 07:58:04 +0000
Received: by outflank-mailman (input) for mailman id 135839;
 Wed, 02 Jun 2021 07:58:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLlL-0000bV-1d
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:58:03 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a179d282-c200-4147-a857-1ba1df211a1c;
 Wed, 02 Jun 2021 07:58:02 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B29D71FD2D;
 Wed,  2 Jun 2021 07:58:01 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8B53C118DD;
 Wed,  2 Jun 2021 07:58:01 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id o3DjIAk6t2CAcAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:58:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a179d282-c200-4147-a857-1ba1df211a1c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620681; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ojrUKuhnxrlQEJ/AEQ1Xlo9MM67NLDUzKcRoyiHlFO8=;
	b=YmYITsgrTdOGa1niAMR85MTZTAtIx1srQVbBHw43fbg9CQrUhrPCwYmFHa8+KuOBdToYgW
	oguizvmi18E7aNiL5cxm6SfjJWtTlkGsj+uSqLQQVYITTn3NjE2nneOxCHux7kvspc2b/6
	Y43BQCc1KlGv5XaFDphCWumGed7QXbs=
Subject: Re: [PATCH v20210601 21/38] tools/guest: restore: move map_errs array
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-22-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <e53b6ab7-40f4-fb1c-799c-82c9978bda46@suse.com>
Date: Wed, 2 Jun 2021 09:58:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-22-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="gGWSUh17DVsqxlCFgsjptrdvJpCis2GgG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--gGWSUh17DVsqxlCFgsjptrdvJpCis2GgG
Content-Type: multipart/mixed; boundary="s4yOUcnvp17XmETgfSdhu1P3DZXv55PaZ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <e53b6ab7-40f4-fb1c-799c-82c9978bda46@suse.com>
Subject: Re: [PATCH v20210601 21/38] tools/guest: restore: move map_errs array
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-22-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-22-olaf@aepfle.de>

--s4yOUcnvp17XmETgfSdhu1P3DZXv55PaZ
Content-Type: multipart/mixed;
 boundary="------------124FC2E3BB061604031EAB26"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------124FC2E3BB061604031EAB26
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:11, Olaf Hering wrote:
> Remove allocation from hotpath, move map_errs array into preallocated space.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------124FC2E3BB061604031EAB26
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------124FC2E3BB061604031EAB26--

--s4yOUcnvp17XmETgfSdhu1P3DZXv55PaZ--

--gGWSUh17DVsqxlCFgsjptrdvJpCis2GgG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3OgkFAwAAAAAACgkQsN6d1ii/Ey/T
rAf9FdoSY/tYAIQAl6rSTXYgDFqWeVP90qBaLDPz/FikFuI75Eu9hcIvLWSoX3Xo4EyNMMTQmI/n
qq4qQVbO4xYTOs4wBcmEJLggn8J0+BPCnK2B04YiKsTQQRv+PvotQwOiXsX+1Rq9a3reWBCGcQ2F
RB2qYbdnjdCqUyPHFm5wOir1EQLiqnCW3xailwehwI4Wq8d0ZF5vpLU4vhiPuSh5NjIrZ3iZnZiS
sag39tSdirDGhoBFaofZaknbBjge/79ue4u3eDEeedHn3qwaiiA8AA4ptmVWFDie9uK7nhpVg9VG
QAeQKdDZPWIVmZHvIicqF7xp+7folkccYIQldAr0lQ==
=b9yF
-----END PGP SIGNATURE-----

--gGWSUh17DVsqxlCFgsjptrdvJpCis2GgG--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:59:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135846.252174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLmO-0001H8-Ta; Wed, 02 Jun 2021 07:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135846.252174; Wed, 02 Jun 2021 07:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLmO-0001H1-QT; Wed, 02 Jun 2021 07:59:08 +0000
Received: by outflank-mailman (input) for mailman id 135846;
 Wed, 02 Jun 2021 07:59:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLmN-0001Ey-Dl
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:59:07 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0fb84dc-8304-41ab-8e75-76ddb35b40c1;
 Wed, 02 Jun 2021 07:59:02 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DACEF21919;
 Wed,  2 Jun 2021 07:59:01 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id AAEF0118DD;
 Wed,  2 Jun 2021 07:59:01 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id yGkXKEU6t2D1cAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:59:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0fb84dc-8304-41ab-8e75-76ddb35b40c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620741; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=S1uOjlXIWOlfst2rCx5Hn2SGC5Zdn/Y2Zo1Uyfkt1iU=;
	b=Ss/jveTdd3nSw73nFyLaAjluC1SRZ3v4D+kQ2LNbxS3q/nadQ8MoYSZqeZHKjDJyWbsvdq
	wSo4z0ZGpwGAdJTF+gepqKBf52IzrwcflmhV3MsvaBC5r16wCXtfod0FyJzAsYhKSwEUtz
	jbvwxserDLEZgtdMg5NQTduAgblq0gY=
Subject: Re: [PATCH v20210601 22/38] tools/guest: restore: move mfns array in
 populate_pfns
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-23-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <8eea341c-0db0-1a86-15b9-b3af03fd6086@suse.com>
Date: Wed, 2 Jun 2021 09:59:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-23-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="FUPeu6iY2Int7Qs1OdXaxaMSzv36TQR3L"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--FUPeu6iY2Int7Qs1OdXaxaMSzv36TQR3L
Content-Type: multipart/mixed; boundary="4pyit4uZb3DlyFeJketi0Zz3MmbRq3fR7";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <8eea341c-0db0-1a86-15b9-b3af03fd6086@suse.com>
Subject: Re: [PATCH v20210601 22/38] tools/guest: restore: move mfns array in
 populate_pfns
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-23-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-23-olaf@aepfle.de>

--4pyit4uZb3DlyFeJketi0Zz3MmbRq3fR7
Content-Type: multipart/mixed;
 boundary="------------E6C661E30F1348A3FABACE62"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E6C661E30F1348A3FABACE62
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:11, Olaf Hering wrote:
> Remove allocation from hotpath, move populate_pfns mfns array into preallocated space.
> Use some prefix to avoid conflict with an array used in handle_page_data.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------E6C661E30F1348A3FABACE62
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E6C661E30F1348A3FABACE62--

--4pyit4uZb3DlyFeJketi0Zz3MmbRq3fR7--

--FUPeu6iY2Int7Qs1OdXaxaMSzv36TQR3L
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3OkUFAwAAAAAACgkQsN6d1ii/Ey9d
oggAk8uPTEjqvARtChve62PrB/qO0l30O09C9teNWMXQuwc7A3f8P+Y3/IqJBnr8JHLP9o/XrGB7
/PXcbiPCBufd3+jcSWro8HcD4S/PyQTVGvqR7JMxORC8W0Ho6EYh11066n8czgqN9DEb6WNTzLE6
GgJOvwDCLEQZ94qtWllvogpEWdvV3iFcod6FBvOdUHNxDRbAVGrkunuvnR6G42ONSRKUErPLMMoo
7zA33mPydHjZ8MEyBMG9mJO3oFI8aj02FaptcQk43zeHzJimfVud9kTwuEaLGLeuK8Enm4Nd91N0
uD5hfK/YbwS/vpbDpFbJzAfjDsuaYYQD/JlBlrqTlg==
=u2Yv
-----END PGP SIGNATURE-----

--FUPeu6iY2Int7Qs1OdXaxaMSzv36TQR3L--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 07:59:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 07:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135851.252184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLn3-0001rj-7h; Wed, 02 Jun 2021 07:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135851.252184; Wed, 02 Jun 2021 07:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLn3-0001rc-4R; Wed, 02 Jun 2021 07:59:49 +0000
Received: by outflank-mailman (input) for mailman id 135851;
 Wed, 02 Jun 2021 07:59:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loLn1-0001rF-KF
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 07:59:47 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af1d68a2-81de-489b-bf42-e079756338b7;
 Wed, 02 Jun 2021 07:59:46 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 13C0A2193F;
 Wed,  2 Jun 2021 07:59:46 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id DE80C118DD;
 Wed,  2 Jun 2021 07:59:45 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id bbs3NXE6t2BecQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 07:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af1d68a2-81de-489b-bf42-e079756338b7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622620786; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=q2vjWC567ViUkHEbPyqtz67qw/nAn8nFvoDVEk6j9m0=;
	b=kCua2psuqFny4BewwcHgApjyjQhYRFDud0nhLB1ZTYiwKJcO8CgHXpFN1XHMKy5U6e750j
	yh0Tagf0MffEIjVKiMlUS45+LzGWsqCtEvGZr/CvJY6WR+tCRVCSl1n63AAe86qfkPhYKn
	rqyGbTdzTZcYsrVk8BLsDnII8nWlqrk=
Subject: Re: [PATCH v20210601 23/38] tools/guest: restore: move pfns array in
 populate_pfns
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-24-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <23552944-b161-1f3f-2321-3a27a4232e34@suse.com>
Date: Wed, 2 Jun 2021 09:59:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-24-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="31LYLF7fkPhli6Nm1ZMHPx8xkAdTcaikI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--31LYLF7fkPhli6Nm1ZMHPx8xkAdTcaikI
Content-Type: multipart/mixed; boundary="HK0rMEKkhi8z7oSIImu8SButUCofM3BvD";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <23552944-b161-1f3f-2321-3a27a4232e34@suse.com>
Subject: Re: [PATCH v20210601 23/38] tools/guest: restore: move pfns array in
 populate_pfns
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-24-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-24-olaf@aepfle.de>

--HK0rMEKkhi8z7oSIImu8SButUCofM3BvD
Content-Type: multipart/mixed;
 boundary="------------B7E9CAB57A45CC76B1A53ED0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B7E9CAB57A45CC76B1A53ED0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:11, Olaf Hering wrote:
> Remove allocation from hotpath, move populate_pfns' pfns array into preallocated space.
> Use some prefix to avoid conflict with an array used in handle_page_data.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------B7E9CAB57A45CC76B1A53ED0
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------B7E9CAB57A45CC76B1A53ED0--

--HK0rMEKkhi8z7oSIImu8SButUCofM3BvD--

--31LYLF7fkPhli6Nm1ZMHPx8xkAdTcaikI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3OnEFAwAAAAAACgkQsN6d1ii/Ey8r
eQf/d+8BgxAa5wmWvqpFqsm2EIj/PlsAs2gLgkCVDHXsF1pbUYD8f4X7q5FvjThcyPni/rRjsKP8
qQnWVY/qIuLcC9xocfMqvLub7m40iVl9zKnrPzNoxxSedTI2vLBcz1RqL/GNkj5KBxcvvAPH7umX
7qPY0M6/QfUzcgZCgpdMuVhdVxtZhX9bzHDaPwK/x7CwK0VS0gP1RY+YDftqhFHN9YtuXWklklQv
oJ1SFmWEEbPlRBfllcKy44TR3QF3iZCSEI68In1yqgJCw3ugbkB0d77h09ORHBlIiu1XxSfXAB+D
1pCQeQe4C7oDWDIP9R3IDZ1nGcfZcbTc29/jkZiQvA==
=SK1i
-----END PGP SIGNATURE-----

--31LYLF7fkPhli6Nm1ZMHPx8xkAdTcaikI--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 08:04:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 08:04:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135867.252196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLrN-000418-1Y; Wed, 02 Jun 2021 08:04:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135867.252196; Wed, 02 Jun 2021 08:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loLrM-000411-UY; Wed, 02 Jun 2021 08:04:16 +0000
Received: by outflank-mailman (input) for mailman id 135867;
 Wed, 02 Jun 2021 08:04:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loLrL-00040r-Mi; Wed, 02 Jun 2021 08:04:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loLrL-00026S-CS; Wed, 02 Jun 2021 08:04:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loLrL-00029E-0t; Wed, 02 Jun 2021 08:04:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loLrL-0001XC-0N; Wed, 02 Jun 2021 08:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m5JwRJHGH0PlztS+ro/4UScAbAyLtumJDHR7E6LtohE=; b=I+hLY0iKtJ7+aNAmcjakAQ9Ode
	OvTWhFXLF1MHsmX+Xnej/glyOINoGbPNj8s29m1d2aQeAIbUsO5Cy1Kfy9U5BF9I+zDv+egnYqp2i
	ihfsq6naCjDOdLrqhnG06FXPWZz8kyFeR8O7piUiPQQHloverPXuLKn2qmnQ5zjp8omo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162332-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162332: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=48b9932352ed38b8d62a3a1c6cab217ad6f07ee1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 08:04:15 +0000

flight 162332 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162332/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              48b9932352ed38b8d62a3a1c6cab217ad6f07ee1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  327 days
Failing since        151818  2020-07-11 04:18:52 Z  326 days  319 attempts
Testing same since   162332  2021-06-02 04:18:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60098 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 08:25:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 08:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135882.252210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loMBy-0006dY-3E; Wed, 02 Jun 2021 08:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135882.252210; Wed, 02 Jun 2021 08:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loMBy-0006dR-00; Wed, 02 Jun 2021 08:25:34 +0000
Received: by outflank-mailman (input) for mailman id 135882;
 Wed, 02 Jun 2021 08:25:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bHXv=K4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1loMBw-0006dL-TZ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 08:25:32 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 744ac108-af33-4bfd-855e-ddc459c8f8ed;
 Wed, 02 Jun 2021 08:25:32 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2169.outbound.protection.outlook.com [104.47.17.169])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-O1tkgl9sPIm4OQOQMC2ClQ-1; Wed, 02 Jun 2021 10:25:30 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2336.eurprd04.prod.outlook.com (2603:10a6:800:27::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.29; Wed, 2 Jun
 2021 08:25:28 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 08:25:28 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0007.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.15 via Frontend Transport; Wed, 2 Jun 2021 08:25:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 744ac108-af33-4bfd-855e-ddc459c8f8ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622622331;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MGEewtoTLdysqJ/eZQzK7sQhSOLg8c4FGiaByxsHPfQ=;
	b=fJ5cPFfMl+w3KQvH1766HrLcE3ZSlLdFejOV8tti+p1zq2HfUZJ4tbgAJBComomtedfO5X
	RIof2X0uU9KcPp1qIu6+k7HkMLTDYykm62tX5HH7emUDSTcrrrfYKxLLQhZQxNcbW8UY0P
	EHhaJmIsJL7Gmyw+2qpMqvTvOcTxdGw=
X-MC-Unique: O1tkgl9sPIm4OQOQMC2ClQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [XEN PATCH] libs/foreignmemory: Fix osdep_xenforeignmemory_map
 prototype
To: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210601154147.55799-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a5d4f4ae-21b9-9798-5501-2c288a70e7b4@suse.com>
Date: Wed, 2 Jun 2021 10:25:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <20210601154147.55799-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0007.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::12) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bba8751a-af78-4722-8274-08d9259ffa67
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2336:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2336336719B261E6E7856BAAB33D9@VI1PR0401MB2336.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bba8751a-af78-4722-8274-08d9259ffa67
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 08:25:28.2490
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VkJCd0eeB1ElxjWB2NQzhq/KT7xJUD0K7HUQxstBo/6GNiHDDyTQ1TOvzOckOAT2CkFM1Ng98HuBWU2u6KdLCw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2336

On 01.06.2021 17:41, Anthony PERARD wrote:
> Commit cf8c4d3d13b8 made some preparation toward one day having a
> variable-length-array argument, but didn't declare the array in the
> function prototype the same way as in the function definition. GCC 11
> now complains about this mismatch.
> 
> Fixes: cf8c4d3d13b8 ("tools/libs/foreignmemory: pull array length argument to map forward")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Ian - this (or whichever alternative might be chosen to address GCC 11's
valid complaint) will also want backporting.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 08:37:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 08:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135889.252220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loMN3-0008Ct-4k; Wed, 02 Jun 2021 08:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135889.252220; Wed, 02 Jun 2021 08:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loMN3-0008Cm-1b; Wed, 02 Jun 2021 08:37:01 +0000
Received: by outflank-mailman (input) for mailman id 135889;
 Wed, 02 Jun 2021 08:36:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Fm/u=K4=redhat.com=lersek@srs-us1.protection.inumbo.net>)
 id 1loMN1-0008Cf-Ap
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 08:36:59 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [170.10.133.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d7f00a0a-6fb3-41bf-b986-732cc9e4e95c;
 Wed, 02 Jun 2021 08:36:57 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-568-7ydU6i1XMsy7U_gprRY6fg-1; Wed, 02 Jun 2021 04:36:54 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 07D788015F5;
 Wed,  2 Jun 2021 08:36:53 +0000 (UTC)
Received: from lacos-laptop-7.usersys.redhat.com (ovpn-114-42.ams2.redhat.com
 [10.36.114.42])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 587016A03A;
 Wed,  2 Jun 2021 08:36:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7f00a0a-6fb3-41bf-b986-732cc9e4e95c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1622623017;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MONz85R9Mj+QEfj5djA2pIXnRHcnv5ZAisbRFvQPcUI=;
	b=fTnXmDeHzf1Vg5YfZLz54N+lI8zLPoppR4E0Wg7h+CAtaCkRK3AwB4kOk73ra41jpf/UIu
	t2wzSkWYQ8X0Ja9T1Dn5gyfJgGuj4OkUstkyI+N/gpbsqlOBGOEs3Q8mgupIP4TgTeZTB0
	u5KOsAZMXZmt0AYoOD10fr7FAY3rocg=
X-MC-Unique: 7ydU6i1XMsy7U_gprRY6fg-1
Subject: Re: [edk2-devel] [PATCH 00/43] OvmfPkg: remove Xen support from
 OvmfPkg*.dsc, in favor of OvmfXen.dsc
From: Laszlo Ersek <lersek@redhat.com>
Cc: devel@edk2.groups.io, Ard Biesheuvel <ardb+tianocore@kernel.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Reply-To: devel@edk2.groups.io, lersek@redhat.com
References: <20210526201446.12554-1-lersek@redhat.com>
To: Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
Message-ID: <71da2a3b-aab1-4ecf-7e01-16b537d841a2@redhat.com>
Date: Wed, 2 Jun 2021 10:36:49 +0200
MIME-Version: 1.0
In-Reply-To: <20210526201446.12554-1-lersek@redhat.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=lersek@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Anthony, Julien,

(or anyone else subscribed to xen-devel -- CC'd now),

On 05/26/21 22:14, Laszlo Ersek wrote:
> Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
> Repo:     https://pagure.io/lersek/edk2.git
> Branch:   xen_split_bz_2122

can you please build the OvmfXen platform on this branch, and check if
there are any regressions?

Thanks,
Laszlo

> 
> This patch set removes dynamic Xen enlightenment from the following
> platforms:
> 
>   OvmfPkg/OvmfPkgIa32.dsc
>   OvmfPkg/OvmfPkgIa32X64.dsc
>   OvmfPkg/OvmfPkgX64.dsc
> 
> In Xen guests, the following platform should be used:
> 
>   OvmfPkg/OvmfXen.dsc
> 
> Please see more details / references in the bugzilla ticket.
> 
> NOOPT build savings:
> 
> - Ia32:    PEIFV 1536 bytes, DXEFV 130288 bytes
> - Ia32X64: PEIFV 1536 bytes, DXEFV 140912 bytes
> - X64:     PEIFV 1664 bytes, DXEFV 140912 bytes
> - Xen:     PEIFV  256 bytes, DXEFV  69504 bytes
> 
> Functional testing:
> 
> - Booted a Fedora guest on OvmfPkgIa32X64 on QEMU/KVM, compared verbose
>   logs before-after. Memory allocations were satisfied at different
>   addresses, as expected, plus the Xen drivers were absent. No
>   differences otherwise.
> 
> - Booted a RHEL guest on ArmVirtQemu on AARCH64. Memory allocations were
>   satisfied at different addresses, as expected.
> 
> - Xen regression-testing was not done; I'm requesting feedback.
> 
> Build testing / bisectability: at every stage, the series builds with
> the following script:
> 
>> #!/bin/bash
>> set -e -u -C
>>
>> build -b DEBUG -t GCC5 -p ArmVirtPkg/ArmVirtKvmTool.dsc            -a AARCH64
>> build -b DEBUG -t GCC5 -p ArmVirtPkg/ArmVirtKvmTool.dsc    -a ARM
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc               -a AARCH64
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc       -a ARM
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc         -a AARCH64
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc -a ARM
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtXen.dsc                -a AARCH64
>> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtXen.dsc        -a ARM
>> build -b NOOPT -t GCC5 -p OvmfPkg/AmdSev/AmdSevX64.dsc             -a X64
>> build -b NOOPT -t GCC5 -p OvmfPkg/Bhyve/BhyveX64.dsc               -a X64
>> build -b NOOPT -t GCC5 -p OvmfPkg/OvmfPkgIa32.dsc          -a IA32
>> build -b NOOPT -t GCC5 -p OvmfPkg/OvmfPkgIa32X64.dsc       -a IA32 -a X64
>> build -b NOOPT -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc                   -a X64
>> build -b NOOPT -t GCC5 -p OvmfPkg/OvmfXen.dsc                      -a X64
> 
> The patches in the series were formatted with the following options, for
> posting:
> 
>   --stat=1000 --stat-graph-width=20 --find-copies-harder -U6
> 
> (The option "--find-copies-harder" is not the best for presenting every
> single patch in the series, in isolation, but taken globally for the
> entire series, it is the most helpful option.)
> 
> Some patches advance with really small steps, in order to cut down on a
> subsequent "meaty" patch. Personally I don't like reviewing code
> movement patches, so I did my best to (a) keep that to a minimum, and
> (b) present it as unintrusively as possible.
> 
> The CC list is a bit long; the reason is that I kept touching up
> "Maintainers.txt", and the "OvmfPkg/Bhyve" and "OvmfPkg/AmdSev"
> platforms as well (whenever it made sense).
> 
> Cc: Andrew Fish <afish@apple.com>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Erdem Aktas <erdemaktas@google.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Leif Lindholm <leif@nuviainc.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Min Xu <min.m.xu@intel.com>
> Cc: Peter Grehan <grehan@freebsd.org>
> Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
> Cc: Rebecca Cran <rebecca@bsdio.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Thanks,
> Laszlo
> 
> Laszlo Ersek (43):
>   OvmfPkg: remove the Xen drivers from the IA32, IA32X64, and X64
>     platforms
>   OvmfPkg: remove the Xen drivers from the AmdSev platform
>   OvmfPkg: switch IA32, IA32X64, X64 to the fw_cfg-only ACPI platform
>     driver
>   OvmfPkg: switch the AmdSev platform to the fw_cfg-only ACPI platform
>     driver
>   OvmfPkg/README: bump minimum QEMU version to 1.7.1, machine types to
>     1.7
>   OvmfPkg/AcpiPlatformDxe: fix header file warts
>   OvmfPkg/AcpiPlatformDxe: sort #includes and [LibraryClasses]
>   OvmfPkg/AcpiPlatformDxe/QemuLoader.h: remove QemuFwCfgLib class
>     dependency
>   OvmfPkg/AcpiPlatformDxe: move "QemuLoader.h" to IndustryStandard
>   OvmfPkg/AcpiPlatformDxe: consolidate #includes and [LibraryClasses]
>   OvmfPkg/XenAcpiPlatformDxe: create from AcpiPlatformDxe
>   OvmfPkg/AcpiPlatformDxe: remove the "AcpiPlatformDxe.inf" driver
>   OvmfPkg/XenAcpiPlatformDxe: remove the QEMU ACPI linker/loader client
>   OvmfPkg/XenAcpiPlatformDxe: remove QEMU fw_cfg dependency
>   OvmfPkg/XenAcpiPlatformDxe: remove the InstallAcpiTable() helper
>     function
>   OvmfPkg/XenAcpiPlatformDxe: remove OVMF's built-in ACPI tables
>   OvmfPkg/Bhyve/AcpiPlatformDxe: fix file path typo in comment
>   OvmfPkg/AcpiTables: remove unused module
>   OvmfPkg/OvmfXen: make "PcdPciDisableBusEnumeration" Fixed-at-Build
>   OvmfPkg/XenAcpiPlatformDxe: remove delayed ACPI table installation
>   OvmfPkg/PlatformPei: remove Xen support
>   OvmfPkg: drop PcdPciDisableBusEnumeration from the IA32, IA32X64, X64
>     DSCs
>   OvmfPkg: drop PcdPciDisableBusEnumeration from the AmdSev platform
>   OvmfPkg/Bhyve: make "PcdPciDisableBusEnumeration" Fixed-at-Build
>   OvmfPkg/OvmfXen: remove IncompatiblePciDeviceSupport DXE driver
>   OvmfPkg/Bhyve: remove IncompatiblePciDeviceSupport DXE driver
>   OvmfPkg/IncompatiblePciDeviceSupportDxe: remove
>     PcdPciDisableBusEnumeration
>   OvmfPkg/PciHostBridgeLib: consolidate #includes and INF file sections
>   OvmfPkg/PciHostBridgeLibScan: create from PciHostBridgeLib
>   OvmfPkg/Bhyve: consume PciHostBridgeLibScan
>   OvmfPkg/OvmfXen: consume PciHostBridgeLibScan
>   OvmfPkg/PciHostBridgeLib: remove Bhyve and Xen support
>   OvmfPkg/PciHostBridgeLibScan: remove QEMU (fw_cfg) support
>   OvmfPkg/PciHostBridgeLibScan: remove PcdOvmfHostBridgePciDevId
>   OvmfPkg/PciHostBridgeLibScan: clean up file names and file-top
>     comments
>   OvmfPkg/SmbiosPlatformDxe: clean up #includes and INF
>   OvmfPkg/SmbiosPlatformDxe: return EFI_NOT_FOUND if there is no SMBIOS
>     data
>   OvmfPkg/SmbiosPlatformDxe: locate SMBIOS protocol in
>     InstallAllStructures()
>   OvmfPkg/SmbiosPlatformDxe: split GetXenSmbiosTables() decl. to new
>     header
>   OvmfPkg/SmbiosPlatformDxe: declare InstallAllStructures() in header
>     file
>   OvmfPkg/SmbiosPlatformDxe: create Xen-specific module INF file
>   OvmfPkg/SmbiosPlatformDxe: split Xen entry point from QEMU entry point
>   OvmfPkg: restrict XenPlatformLib to BdsDxe in the IA32, IA32X64, X64
>     DSCs
> 
>  Maintainers.txt                                                                                          |  10 +-
>  OvmfPkg/AcpiPlatformDxe/AcpiPlatform.c                                                                   | 262 --------
>  OvmfPkg/AcpiPlatformDxe/AcpiPlatform.h                                                                   |  50 +-
>  OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe.inf                                                              |  71 --
>  OvmfPkg/AcpiPlatformDxe/BootScript.c                                                                     |   7 +-
>  OvmfPkg/AcpiPlatformDxe/EntryPoint.c                                                                     |   7 +-
>  OvmfPkg/AcpiPlatformDxe/PciDecoding.c                                                                    |   4 +-
>  OvmfPkg/AcpiPlatformDxe/Qemu.c                                                                           | 511 ---------------
>  OvmfPkg/AcpiPlatformDxe/QemuFwCfgAcpi.c                                                                  |  21 +-
>  OvmfPkg/AcpiPlatformDxe/QemuFwCfgAcpiPlatformDxe.inf                                                     |   5 +-
>  OvmfPkg/AcpiTables/AcpiTables.inf                                                                        |  38 --
>  OvmfPkg/AcpiTables/Dsdt.asl                                                                              | 692 --------------------
>  OvmfPkg/AcpiTables/Facp.aslc                                                                             |  89 ---
>  OvmfPkg/AcpiTables/Facs.aslc                                                                             |  78 ---
>  OvmfPkg/AcpiTables/Madt.aslc                                                                             | 153 -----
>  OvmfPkg/AcpiTables/Platform.h                                                                            |  68 --
>  OvmfPkg/AcpiTables/Ssdt.asl                                                                              |  13 -
>  OvmfPkg/AmdSev/AmdSevX64.dsc                                                                             |   9 +-
>  OvmfPkg/AmdSev/AmdSevX64.fdf                                                                             |  12 +-
>  OvmfPkg/Bhyve/AcpiPlatformDxe/AcpiPlatform.c                                                             |   2 +-
>  OvmfPkg/Bhyve/BhyveX64.dsc                                                                               |   5 +-
>  OvmfPkg/Bhyve/BhyveX64.fdf                                                                               |   1 -
>  OvmfPkg/Bhyve/PlatformPei/PlatformPei.inf                                                                |   1 -
>  OvmfPkg/{AcpiPlatformDxe => Include/IndustryStandard}/QemuLoader.h                                       |   8 +-
>  OvmfPkg/IncompatiblePciDeviceSupportDxe/IncompatiblePciDeviceSupport.c                                   |  10 +-
>  OvmfPkg/IncompatiblePciDeviceSupportDxe/IncompatiblePciDeviceSupport.inf                                 |   2 -
>  OvmfPkg/Library/PciHostBridgeLib/PciHostBridgeLib.c                                                      |  28 +-
>  OvmfPkg/Library/PciHostBridgeLib/PciHostBridgeLib.inf                                                    |   8 +-
>  OvmfPkg/Library/{PciHostBridgeLib => PciHostBridgeLibScan}/PciHostBridge.h                               |   4 +-
>  OvmfPkg/Library/PciHostBridgeLibScan/PciHostBridgeLib.c                                                  |  74 +++
>  OvmfPkg/Library/{PciHostBridgeLib/PciHostBridgeLib.inf => PciHostBridgeLibScan/PciHostBridgeLibScan.inf} |  24 +-
>  OvmfPkg/Library/{PciHostBridgeLib/XenSupport.c => PciHostBridgeLibScan/ScanForRootBridges.c}             |  27 +-
>  OvmfPkg/OvmfPkgIa32.dsc                                                                                  |  10 +-
>  OvmfPkg/OvmfPkgIa32.fdf                                                                                  |  12 +-
>  OvmfPkg/OvmfPkgIa32X64.dsc                                                                               |  10 +-
>  OvmfPkg/OvmfPkgIa32X64.fdf                                                                               |  12 +-
>  OvmfPkg/OvmfPkgX64.dsc                                                                                   |  10 +-
>  OvmfPkg/OvmfPkgX64.fdf                                                                                   |  12 +-
>  OvmfPkg/OvmfXen.dsc                                                                                      |  10 +-
>  OvmfPkg/OvmfXen.fdf                                                                                      |  12 +-
>  OvmfPkg/PlatformPei/MemDetect.c                                                                          |  10 +-
>  OvmfPkg/PlatformPei/Platform.c                                                                           | 162 +++--
>  OvmfPkg/PlatformPei/Platform.h                                                                           |  17 -
>  OvmfPkg/PlatformPei/PlatformPei.inf                                                                      |   4 -
>  OvmfPkg/PlatformPei/Xen.c                                                                                | 222 -------
>  OvmfPkg/PlatformPei/Xen.h                                                                                |  39 --
>  OvmfPkg/README                                                                                           |  43 +-
>  OvmfPkg/SmbiosPlatformDxe/ArmXen.c                                                                       |   2 +-
>  OvmfPkg/SmbiosPlatformDxe/Qemu.c                                                                         |  41 +-
>  OvmfPkg/SmbiosPlatformDxe/SmbiosPlatformDxe.c                                                            |  79 +--
>  OvmfPkg/SmbiosPlatformDxe/SmbiosPlatformDxe.h                                                            |  37 +-
>  OvmfPkg/SmbiosPlatformDxe/SmbiosPlatformDxe.inf                                                          |  23 +-
>  OvmfPkg/SmbiosPlatformDxe/X86Xen.c                                                                       |   8 +-
>  OvmfPkg/SmbiosPlatformDxe/Xen.c                                                                          |  49 ++
>  OvmfPkg/SmbiosPlatformDxe/{ArmXen.c => XenSmbiosPlatformDxe.h}                                           |  20 +-
>  OvmfPkg/SmbiosPlatformDxe/{SmbiosPlatformDxe.inf => XenSmbiosPlatformDxe.inf}                            |  32 +-
>  OvmfPkg/XenAcpiPlatformDxe/AcpiPlatform.c                                                                |  41 ++
>  OvmfPkg/XenAcpiPlatformDxe/AcpiPlatform.h                                                                |  28 +
>  OvmfPkg/XenAcpiPlatformDxe/EntryPoint.c                                                                  |  43 ++
>  OvmfPkg/{AcpiPlatformDxe => XenAcpiPlatformDxe}/Xen.c                                                    |  66 +-
>  OvmfPkg/XenAcpiPlatformDxe/XenAcpiPlatformDxe.inf                                                        |  45 ++
>  OvmfPkg/XenPlatformPei/Platform.c                                                                        |   1 -
>  OvmfPkg/XenPlatformPei/Platform.h                                                                        |   5 -
>  OvmfPkg/XenPlatformPei/Xen.c                                                                             |  20 -
>  OvmfPkg/XenPlatformPei/XenPlatformPei.inf                                                                |   1 -
>  65 files changed, 593 insertions(+), 2827 deletions(-)
>  delete mode 100644 OvmfPkg/AcpiPlatformDxe/AcpiPlatform.c
>  delete mode 100644 OvmfPkg/AcpiPlatformDxe/AcpiPlatformDxe.inf
>  delete mode 100644 OvmfPkg/AcpiPlatformDxe/Qemu.c
>  delete mode 100644 OvmfPkg/AcpiTables/AcpiTables.inf
>  delete mode 100644 OvmfPkg/AcpiTables/Dsdt.asl
>  delete mode 100644 OvmfPkg/AcpiTables/Facp.aslc
>  delete mode 100644 OvmfPkg/AcpiTables/Facs.aslc
>  delete mode 100644 OvmfPkg/AcpiTables/Madt.aslc
>  delete mode 100644 OvmfPkg/AcpiTables/Platform.h
>  delete mode 100644 OvmfPkg/AcpiTables/Ssdt.asl
>  rename OvmfPkg/{AcpiPlatformDxe => Include/IndustryStandard}/QemuLoader.h (94%)
>  rename OvmfPkg/Library/{PciHostBridgeLib => PciHostBridgeLibScan}/PciHostBridge.h (57%)
>  create mode 100644 OvmfPkg/Library/PciHostBridgeLibScan/PciHostBridgeLib.c
>  copy OvmfPkg/Library/{PciHostBridgeLib/PciHostBridgeLib.inf => PciHostBridgeLibScan/PciHostBridgeLibScan.inf} (51%)
>  rename OvmfPkg/Library/{PciHostBridgeLib/XenSupport.c => PciHostBridgeLibScan/ScanForRootBridges.c} (91%)
>  delete mode 100644 OvmfPkg/PlatformPei/Xen.c
>  delete mode 100644 OvmfPkg/PlatformPei/Xen.h
>  create mode 100644 OvmfPkg/SmbiosPlatformDxe/Xen.c
>  copy OvmfPkg/SmbiosPlatformDxe/{ArmXen.c => XenSmbiosPlatformDxe.h} (56%)
>  copy OvmfPkg/SmbiosPlatformDxe/{SmbiosPlatformDxe.inf => XenSmbiosPlatformDxe.inf} (65%)
>  create mode 100644 OvmfPkg/XenAcpiPlatformDxe/AcpiPlatform.c
>  create mode 100644 OvmfPkg/XenAcpiPlatformDxe/AcpiPlatform.h
>  create mode 100644 OvmfPkg/XenAcpiPlatformDxe/EntryPoint.c
>  rename OvmfPkg/{AcpiPlatformDxe => XenAcpiPlatformDxe}/Xen.c (82%)
>  create mode 100644 OvmfPkg/XenAcpiPlatformDxe/XenAcpiPlatformDxe.inf
> 
> 
> base-commit: cfa6ffb113f2c0d922034cc77c0d6c52eea05497
> 



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 09:37:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 09:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135902.252231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNJG-0006IV-Mr; Wed, 02 Jun 2021 09:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135902.252231; Wed, 02 Jun 2021 09:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNJG-0006IO-Ju; Wed, 02 Jun 2021 09:37:10 +0000
Received: by outflank-mailman (input) for mailman id 135902;
 Wed, 02 Jun 2021 09:37:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O3Rk=K4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1loNJE-0006II-VO
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 09:37:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7af7b89-20e5-48da-8bf1-fe6616d94a49;
 Wed, 02 Jun 2021 09:37:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7af7b89-20e5-48da-8bf1-fe6616d94a49
Date: Wed, 2 Jun 2021 11:36:58 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "George
 Dunlap" <george.dunlap@citrix.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/3] x86/ept: force WB cache attributes for grant and
 foreign maps
Message-ID: <YLdROqDpiUY0eGUI@Air-de-Roger>
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-4-roger.pau@citrix.com>
 <c3aeb303-760b-fe6a-d51e-6271eaf37d80@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c3aeb303-760b-fe6a-d51e-6271eaf37d80@suse.com>
MIME-Version: 1.0

On Mon, May 31, 2021 at 09:21:25AM +0200, Jan Beulich wrote:
> On 28.05.2021 19:39, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -487,11 +487,12 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
> >  }
> >  
> >  int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> > -                       unsigned int order, bool *ipat, bool direct_mmio)
> > +                       unsigned int order, bool *ipat, p2m_type_t type)
> >  {
> >      int gmtrr_mtype, hmtrr_mtype;
> >      struct vcpu *v = current;
> >      unsigned long i;
> > +    bool direct_mmio = type == p2m_mmio_direct;
> 
> I don't think this variable is worthwhile to retain/introduce:
> 
> > @@ -535,9 +536,33 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> >          }
> >      }
> >  
> > -    if ( direct_mmio )
> 
> With this gone, there's exactly one further use left. Preferably
> with this adjustment (which I'd be fine to make while committing, as
> long as you and/or the maintainers agree):
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks. That's fine; I was about to drop it, but didn't want to
introduce more changes than necessary.

> 
> > +    switch ( type )
> > +    {
> > +    case p2m_mmio_direct:
> >          return MTRR_TYPE_UNCACHABLE;
> 
> As a largely unrelated note: We really want to find a way to return
> WC here for e.g. the frame buffer of graphics cards, the more that
> hvm_get_mem_pinned_cacheattr() gets invoked only below from here
> (unlike at initial introduction of the function, where it was called
> ahead of the direct_mmio check, but still after the mfn_valid(), so
> the results were inconsistent anyway). Perhaps we should obtain the
> host MTRR setting for the page (or range) in question.
> 
> As to hvm_get_mem_pinned_cacheattr(), XEN_DOMCTL_pin_mem_cacheattr
> is documented to be intended to be used on RAM only anyway ...

I also think we should make epte_get_entry_emt available to all p2m
code so it can partially replace the logic in p2m_type_to_flags that
accounts for cache attributes; I don't see much point in keeping two
different methods for this. I know AMD lacks an 'ignore PAT'
equivalent, but there's no reason why the p2m cache-attribute
calculation should be done differently for AMD and Intel AFAICT.

Roger.
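For readers following the thread, the reworked dispatch can be sketched as a small self-contained model. The p2m type names and MTRR constants mirror Xen's, but the function itself is an illustrative simplification, not the actual epte_get_entry_emt() from xen/arch/x86/mm/p2m-ept.c; the guest/host MTRR combination path is elided.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model: names mirror Xen's p2m types and MTRR constants. */
typedef enum {
    p2m_ram_rw,
    p2m_grant_map_rw,
    p2m_map_foreign,
    p2m_mmio_direct,
} p2m_type_t;

enum {
    MTRR_TYPE_UNCACHABLE = 0,
    MTRR_TYPE_WRBACK = 6,
};

/* Direct MMIO stays uncachable; grant and foreign maps are forced to WB
 * with the "ignore PAT" (ipat) bit set, per the patch subject; all other
 * types would fall through to the usual MTRR-based calculation, which is
 * replaced by a placeholder here. */
static int emt_for_type(p2m_type_t type, bool *ipat)
{
    *ipat = false;

    switch ( type )
    {
    case p2m_mmio_direct:
        return MTRR_TYPE_UNCACHABLE;

    case p2m_grant_map_rw:
    case p2m_map_foreign:
        *ipat = true;
        return MTRR_TYPE_WRBACK;

    default:
        return MTRR_TYPE_WRBACK; /* placeholder for the MTRR-based path */
    }
}
```

Passing the p2m type instead of a direct_mmio boolean is what lets grant and foreign maps be distinguished from plain RAM at this point in the function.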


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 09:50:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 09:50:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135911.252243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNWC-0000Eh-3x; Wed, 02 Jun 2021 09:50:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135911.252243; Wed, 02 Jun 2021 09:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNWC-0000Ea-0M; Wed, 02 Jun 2021 09:50:32 +0000
Received: by outflank-mailman (input) for mailman id 135911;
 Wed, 02 Jun 2021 09:50:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loNWA-0000EM-St; Wed, 02 Jun 2021 09:50:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loNWA-0003qG-L5; Wed, 02 Jun 2021 09:50:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loNWA-0000RY-7j; Wed, 02 Jun 2021 09:50:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loNWA-0004xQ-7E; Wed, 02 Jun 2021 09:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162335: all pass - PUSHED
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 09:50:30 +0000

flight 162335 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162335/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162265  2021-05-30 09:19:35 Z    3 days
Testing same since   162335  2021-06-02 09:20:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   683d899e4b..5268b2dcf7  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 10:10:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 10:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135919.252256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNpP-0002rn-Om; Wed, 02 Jun 2021 10:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135919.252256; Wed, 02 Jun 2021 10:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loNpP-0002rg-Lh; Wed, 02 Jun 2021 10:10:23 +0000
Received: by outflank-mailman (input) for mailman id 135919;
 Wed, 02 Jun 2021 10:10:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aFgD=K4=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1loNpO-0002ra-DN
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 10:10:22 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.76]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37e8c34b-a31a-4e41-a8e8-27e77be7bc85;
 Wed, 02 Jun 2021 10:10:19 +0000 (UTC)
Received: from PR0P264CA0246.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::18) by
 DBAPR08MB5623.eurprd08.prod.outlook.com (2603:10a6:10:1a7::5) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.24; Wed, 2 Jun 2021 10:10:10 +0000
Received: from VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:100:0:cafe::b8) by PR0P264CA0246.outlook.office365.com
 (2603:10a6:100::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Wed, 2 Jun 2021 10:10:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT006.mail.protection.outlook.com (10.152.18.116) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4150.30 via Frontend Transport; Wed, 2 Jun 2021 10:10:10 +0000
Received: ("Tessian outbound f02dc08cb398:v93");
 Wed, 02 Jun 2021 10:10:09 +0000
Received: from 6aa864ea2a9c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2A96CBAA-7BA0-4A5A-9BDB-D4696536E9E4.1; 
 Wed, 02 Jun 2021 10:09:58 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6aa864ea2a9c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 02 Jun 2021 10:09:58 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB5581.eurprd08.prod.outlook.com (2603:10a6:800:1a0::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.24; Wed, 2 Jun
 2021 10:09:57 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918%7]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 10:09:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37e8c34b-a31a-4e41-a8e8-27e77be7bc85
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Topic: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Index:
 AQHXS6WvvFnJG9tHGECWPMZXor2qraro8JyAgAAPtYCAAiGdAIAAuwwggAA2KACAFH+QgA==
Date: Wed, 2 Jun 2021 10:09:57 +0000
Message-ID:
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
In-Reply-To: <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5581
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4d6b09c3-c2f8-406e-6271-08d925ae933c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3xbUXllRkSngvTNq39BfKIOPzqQmQxtk3rax6lX33c+SzqvIuN1r1i/rw9Mo5P8JmYrqSphgCqT502sRZ707j1pCWZE3FSL9LUIMXOxTF7aARupXuT+u7y4s6hfBQhDoYhTiYFQMBVO0RynwIo24AES3R1eHv0GHMzGf1nx2SlGCaQexS1ZfKRAvNaM8SWekIel1LDWEFHM//sUoK2t9He/gogBBXT13sZXSWUUh5cI1Q8Qc9fs3G/n7oram6qNP5rg9LDQmBs6mhN7V4bFDhG+dBCfSt9Spcaksgv0QyxOafAOeZe8zd7fT5EUn4BpjauS3HI7POB8EE3DIwmkSHEt9MjmvA4Hy37GR7DaOQlhu73gAGa96/dYZrYwCrOak2x2qMzySOcYJLzo3m/23kwh2isMJxEqedkcQgSGmiArbXLm7gpncjGE0rL5HJQ80txA7DnZfne6/xTPNzRcNJARn8Gj1eHHnjnWOqH0+OUkz6JTh8IxesasmJ0dcZDRHAQF6xTbTiJ3QV7XMIo80tE0E7IhcHOGAc+My0bL9LbX0D9NNsMJkiaqJm0vLevCQUrs0NanV34sDnDtcmATJkPWtfewmEsNh2dUdzSYYTGxVCX/1Gi7tPXF8E9c8wF/LzOyRQYecP5x1pO7MbLhw+YB1/MAW5gc/Xxv6nvwoERs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(39860400002)(346002)(36840700001)(46966006)(336012)(82310400003)(5660300002)(55016002)(2906002)(82740400003)(8936002)(7696005)(186003)(47076005)(54906003)(70206006)(81166007)(33656002)(53546011)(83380400001)(36860700001)(86362001)(70586007)(356005)(8676002)(316002)(9686003)(478600001)(26005)(110136005)(4326008)(6506007)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 10:10:10.0145
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 82d38a6f-465a-428e-5145-08d925ae9ad5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5623

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 4:51 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> Hi,
> 
> On 20/05/2021 07:07, Penny Zheng wrote:
> >>> It will be consistent with the ones defined in the parent node, domUx.
> >> Hmmm... To take the example you provided, the parent would be chosen.
> >> However, from the example, I would expect the property #{address,
> >> size}-cells in domU1 to be used. What did I miss?
> >>
> >
> > Yeah, the property #{address, size}-cells in domU1 will be used. And
> > the parent node will be domU1.
> 
> You may have misunderstood what I meant. "domU1" is the node that
> contains the property "xen,static-mem". The parent node would be the one
> above (in our case "chosen").
> 

I have reconsidered what you meant here; I hope this time I get it right. Correct me if I'm wrong.
Here is an example:

    / {
        reserved-memory {
            #address-cells = <2>;
            #size-cells = <2>;

            staticmemdomU1: static-memory@0x30000000 {
                compatible = "xen,static-memory-domain";
                reg = <0x0 0x30000000 0x0 0x20000000>;
            };
        };

        chosen {
            domU1 {
                compatible = "xen,domain";
                #address-cells = <0x1>;
                #size-cells = <0x1>;
                cpus = <2>;
                xen,static-mem = <&staticmemdomU1>;

                ...
            };
        };
    };

If the user gives two different #address-cells and #size-cells in reserved-memory and domU1, then parsing
through `xen,static-mem` may yield unexpected results.
I could not think of a way to fix this properly in code; do you have any suggestions? Or should we just put a warning in the docs/commit message?

> >
> > The dtb property should look like as follows:
> >
> >          chosen {
> >              domU1 {
> >                  compatible = "xen,domain";
> >                  #address-cells = <0x2>;
> >                  #size-cells = <0x2>;
> >                  cpus = <2>;
> >                  xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> >
> >                  ...
> >              };
> >          };
> >
> >>> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of
> >>> +512MB size
> >
> >>>>> +Static Allocation is only supported on AArch64 for now.
> >>>>
> >>>> The code doesn't seem to be AArch64 specific. So why can't this be
> >>>> used for 32-bit Arm?
> >>>>
> >>>
> >>> True, we have plans to make it also workable in AArch32 in the future.
> >>> Because we considered XEN on cortex-R52.
> >>
> >> All the code seems to be implemented in arm generic code. So isn't it
> >> already working?
> >>
> >>>>>     static int __init early_scan_node(const void *fdt,
> >>>>>                                       int node, const char *name, int depth,
> >>>>>                                       u32 address_cells, u32
> >>>>> size_cells, @@ -345,6 +394,9 @@ static int __init
> >>>>> early_scan_node(const
> >> void *fdt,
> >>>>>             process_multiboot_node(fdt, node, name, address_cells,
> size_cells);
> >>>>>         else if ( depth == 1 && device_tree_node_matches(fdt,
> >>>>> node,
> >> "chosen") )
> >>>>>             process_chosen_node(fdt, node, name, address_cells,
> >>>>> size_cells);
> >>>>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
> >>>>> + "xen,static-mem",
> >>>> NULL) )
> >>>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> >>>>> +                              size_cells, &bootinfo.static_mem);
> >>>>
> >>>> I am a bit concerned to add yet another method to parse the DT and
> >>>> all the extra code it will add like in patch #2.
> >>>>
> >>>>    From the host PoV, they are memory reserved for a specific purpose.
> >>>> Would it be possible to consider the reserve-memory binding for
> >>>> that purpose? This will happen outside of chosen, but we could use
> >>>> a phandle to refer the region.
> >>>>
> >>>
> >>> Correct me if I understand wrongly, do you mean what this device
> >>> tree
> >> snippet looks like:
> >>
> >> Yes, this is what I had in mind. Although I have one small remark below.
> >>
> >>
> >>> reserved-memory {
> >>>      #address-cells = <2>;
> >>>      #size-cells = <2>;
> >>>      ranges;
> >>>
> >>>      static-mem-domU1: static-mem@0x30000000{
> >>
> >> I think the node would need to contain a compatible (name to be defined).
> >>
> >
> > Ok, maybe, hmmm, how about "xen,static-memory"?
> 
> I would possibly add "domain" in the name to make clear this is domain
> memory. Stefano, what do you think?
> 
> Cheers,
> 
> 
> Julien Grall

Cheers,

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 10:40:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 10:40:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162330: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 10:39:57 +0000

flight 162330 xen-unstable real [real]
flight 162336 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162330/
http://logs.test-lab.xenproject.org/osstest/logs/162336/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 20 guest-start/debianhvm.repeat fail pass in 162336-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162325
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162325
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162325
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162325
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162325
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162325
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162325
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162325
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162325
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162325
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162325
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162325
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299

Last test of basis   162325  2021-06-01 10:58:56 Z    0 days
Testing same since   162330  2021-06-01 22:08:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   57f68dfd2d..5268b2dcf7  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 10:43:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 10:43:45 +0000
Date: Wed, 2 Jun 2021 12:43:24 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210601 02/38] xl: fix description of migrate --debug
Message-ID: <20210602124324.1bd02cd7.olaf@aepfle.de>
In-Reply-To: <58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-3-olaf@aepfle.de>
	<58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Xna6f/lcOQlsfx6uda2TY58";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/Xna6f/lcOQlsfx6uda2TY58
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 08:09:00 +0200, Juergen Gross <jgross@suse.com> wrote:

> > -    if ( ctx->save.debug && ctx->stream_type != XC_STREAM_PLAIN )
> > +    if ( ctx->save.debug )
> This is no documentation change IMO. You should either mention this
> modification in the commit message, or put it into a separate patch.

I think the conclusion was that this branch is dead code because
only the stream_type==XC_STREAM_PLAIN code branch sets debug.


A decision still needs to be made about this code branch:
- document it
- remove it
- fix it

The latter might be impossible due to lack of APIs to query the usage of a
given page.

But perhaps remus and/or colo do not suffer from the apparent corruption.
One day I should create a remus and a colo setup to exercise these code paths.


Olaf

--Sig_/Xna6f/lcOQlsfx6uda2TY58
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3YMwACgkQ86SN7mm1
DoCQBg/+MAMxtAjytiGCOC+8y7leqe0HZ/OWy9mJg59TJ6S4V7odZN2HH/9trjef
4JDLvXS0IZLvOL4oU8uPrk3ex1qz6r/KSAT/pGyQysDbcnXpFfWB5eAmNPSbnN4t
rfA0NG/d7A6xQco5/wKiv9u46njI9eputceqxenNKTl27+BBzyThLYags44TVEej
54G9eq+2A1dkSgqWodp5Pdiamz0XvVKTzgSMVHDtU25eykSkD6HGmD8Hz1zD9jhU
ua1BKm0HUsTEfYQh5mTsEqWnKgT7qkQdpOoTuXsbosQPI3jMGvjW4oSY+XnJJTK2
4bGabwTSkyZ6O7jHi8S1bBFnDvSh6oIYaUhay6JQrHW8qdM0sL7UoDTWi0FMeVcd
yfbZzTJByOa8zzyCNdJbVCrq1j88TFNMu4V/53MFLjWSNqV+mp+icReU5lhLIpiy
dVPCn2x7Q01TkjoNMhunUsT8QJmympsKtptkhooRLNG7USdoEKiVsyy6zMIXCwv6
ao0bR4NKTm596CbS6c4ZomHrrArN5uS+xmyKiwtVbkrbBYQ7diFB/fhitCcC8OKE
e2MmGcDg4nyJ/AganFB2jRd+f5U4nFhM2On3w0e9mZg8MnL4W3eoKucbOvu9DO9g
n2EVPimc4yxCxPlO0Mpat6/sk8Emq9O1nbRW962LnHnbe8J3X40=
=v4eQ
-----END PGP SIGNATURE-----

--Sig_/Xna6f/lcOQlsfx6uda2TY58--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 10:47:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 10:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135944.252297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOP8-0007ve-Dg; Wed, 02 Jun 2021 10:47:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135944.252297; Wed, 02 Jun 2021 10:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOP8-0007vX-A2; Wed, 02 Jun 2021 10:47:18 +0000
Received: by outflank-mailman (input) for mailman id 135944;
 Wed, 02 Jun 2021 10:47:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yNrp=K4=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1loOP6-0007vR-L6
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 10:47:16 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0eaa2fa-ae48-4cf8-bcb7-e511378e2057;
 Wed, 02 Jun 2021 10:47:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0eaa2fa-ae48-4cf8-bcb7-e511378e2057
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622630835;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=cobCU152ENXwKaRO4mvTns1IADdxV/Nmx5grEVZNQzo=;
  b=FDhgLsprJJCtRL2Pd4KrrMcy8ACEBJrrUa3qkmxeXMj12LxJmq0b19UK
   red/cKsA1Rt+Mba+2kg92HIkUPjHbrS9qP9Nz+wM8/kSULlA1MGGrNOcd
   UgIy90B09sYw7x3zC+R54gWA4ierONP2tbXZ3KfBahVoGUnN4OgjVXN15
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: pbVPiNFu6oGUuTXmt683ycVyo4Y6zbD4W+NcG+ugA2lkC+/VdK88ClE+jFkrssJVMHwpcCDmfZ
 PXHmoHt+CGIREXBO7NLLHXlvvB4LJYCVXGpSiaxF90drCwjU9KXGweGvKm49vsmWQG02NecIvq
 cjS+z7KH2R+yUralqyqQTH+HSKZqfKQhQuxUEjTvSLm2tKEfYepXh1g9yUZgR+lLFUB92mfKfs
 M+UGpYnw9X6svo48fT9HIAm+lSTIkMbHSCZbAxisLTGrL4IbnH+dK/iaVIy8Of6CurQYhZzPWd
 mRA=
X-SBRS: 5.1
X-MesageID: 46688963
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:PawqB6B370ZoNoPlHeh/sceALOsnbusQ8zAXPh9KJyC9I/b2qy
 nxppgmPEfP+UwssHFJo6HlBEEZKUmsu6KdkrNhQItKOzOW+VdATbsSorcKpgeAJ8SQzJ8k6U
 4NSdkdNDSSNyk7sS+Z2njCLz9I+rDum8rE5Za8854Ed3AxV0gK1XYfNu/vKDwOeOAwP+teKH
 Pz3LsjmxOQPVAsKuirDHgMWObO4/fRkoj9XBIADxk7rCGTkDKB8tfBYlul9yZbdwkK7aYp8G
 DDnQC8zL6kqeuHxhjV0HKWx4hKmeHm1sBICKW3+4oow3TX+0OVjbZaKvq/VQMO0aeSAZER4Y
 DxSiIbToBOArXqDzmISFXWqlLdOX0VmgPfIBej8ATeSIrCNW8H4oN69PNkWwqc5Ew6sN5m1q
 VXm2qfqppMFBvF2D/w/t7SSnhR5weJSNUZ4JouZ+w2a/phVFZ9l/1VwKpuKuZLIMs60vFQLA
 BkNrCR2B+XSyLTU5n9hBgZ/DWBZAVAIv62eDlKhiXO6UkkoJlQ9Tps+CUwpAZxyK4A
X-IronPort-AV: E=Sophos;i="5.83,242,1616472000"; 
   d="scan'208";a="46688963"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e8iQRV81/aTTyFZeoNv0AhiQqWAJtkbP+TXTVxQHrXb/xok92SOvI1auEnkqjp28VhYxtOUrX4T/bRjCSRjdvpovynh5y3Nn8TaCck4KalaS2r5iuEDwDCtn+Iusif1gy/q9UdFFSeOSXK731gba1rFoA0cbSFLwdhPal/wWCfZFNAOBTCEHqOdZOsAiQBr7RTb2ltBvNhYNHFJigbIA6IEveE/YzMZWsTVUuzbwlFLZBsLKghTRQs/ylNMUxAjaSFIJHQ3cAcJbixViQwJoa/Ua4AN+QvxcKkVMiGOs4RiL9e+pjFxE1GKnaBOKHA2gFiPh6hTPDMxGtyPYzbHfSg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cobCU152ENXwKaRO4mvTns1IADdxV/Nmx5grEVZNQzo=;
 b=V1Eua3sc4wwuGAuDHhm4neyJlwezGOJhnbQ2poJzh+PAdq8Gi9Q+70f2CZMEsa5ATEDd9ylafr+hXxV8XGLY9983eemFbIEDpjNV44w66M1Y3bz9fK8u1Y/3Zb35pR9Ec9zhLNQLuuNXRPV+RjHKXv+Sd/O6CY1UJpJrb9nIBllR8/oKNHZOMQmo2h9N10fHANuNXXM76UWRa1EWHSLrTdg5ZQiHtgKEeXYg/DWiYvqem0YmYAyzPe+kTilg5DZttcBh69EdxveyytsBpSBH7bXzxp/cAFBcgJ0pxtGvX1Psi2ubWIJHSfCtcgra5Y9ip0kuv2eJBvEtU2pfimDoCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cobCU152ENXwKaRO4mvTns1IADdxV/Nmx5grEVZNQzo=;
 b=ZBGa440S2BCXjmeB5Npm6waJT0pLEMRTAQ6SJ2F6QwSvRna+SuniSMCxIg8Ryi3gRgtZNgDb4b8keUH7Su2JLZht5AHD580p00pyB8Os71GPKZaKRecfyS1GA5GdxLrYr3Gkv4s3AIHYrpLB0rOnP6hLsxfIBo5OWXVNdaLdytM=
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com"
	<intel-xen@intel.com>, "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Sergey Dyasli
	<sergey.dyasli@citrix.com>, Christopher Clark
	<christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, "Kevin
 Pearson" <kevin.pearson@ortmanconsulting.com>, Juergen Gross
	<jgross@suse.com>, =?utf-8?B?UGF1bCBEdXJyYW50wqA=?= <pdurrant@amazon.com>,
	"Ji, John" <john.ji@intel.com>, "edgar.iglesias@xilinx.com"
	<edgar.iglesias@xilinx.com>, "robin.randhawa@arm.com"
	<robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>, "Matt
 Spencer" <Matt.Spencer@arm.com>, Stewart Hildebrand
	<Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, "mirela.simonovic@aggios.com"
	<mirela.simonovic@aggios.com>, Jarvis Roach <Jarvis.Roach@dornerworks.com>,
	Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?=
	<cardoe@cardoe.com>, George Dunlap <George.Dunlap@citrix.com>, "David
 Woodhouse" <dwmw@amazon.co.uk>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?=
	<varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>, Robert Townley
	<rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQ29yZXkgTWlueWFyZA==?=
	<cminyard@mvista.com>, Olivier Lambert <olivier.lambert@vates.fr>, "Andrew
 Cooper" <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ash Wilding
	<ash.j.wilding@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>,
	=?utf-8?B?UGlvdHIgS3LDs2w=?= <piotr.krol@3mdeb.com>, Brendan Kerrigan
	<brendank310@gmail.com>, "Thierry Laurion (Insurgo)" <insurgo@riseup.net>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, "Oleksandr
 Tyshchenko" <oleksandr_tyshchenko@epam.com>, Deepthi <deepthi.m@ltts.com>,
	Scott Davis <scottwd@gmail.com>
Subject: [ANNOUNCE] Call for agenda items for 3 June Community Call @ 1500 UTC
Thread-Topic: [ANNOUNCE] Call for agenda items for 3 June Community Call @
 1500 UTC
Thread-Index: AQHXV5yhXZmHrohNzUKCYOYcvw+h8g==
Date: Wed, 2 Jun 2021 10:47:07 +0000
Message-ID: <A78C6A90-7F44-4968-8F0E-E044B0F80C1C@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ad4a5ba7-f659-4d9e-c943-08d925b3c472
x-ms-traffictypediagnostic: PH0PR03MB5734:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB5734F2B8E89E1823F231045F993D9@PH0PR03MB5734.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0D86D25B801A8042BD23AA38C8D24525@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad4a5ba7-f659-4d9e-c943-08d925b3c472
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Jun 2021 10:47:07.2038
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: G+SxYwQrPNu1ZXLBa0+e7ar7EQjGoclVQ3osgEEtacr72e2bZo5OpCw8lnPW0QfeLni0x8dUGIsFJhMSd+gGl4VCIKsBuGr0oq9SEYLOpr0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5734
X-OriginatorOrg: citrix.com

Hi all,

I know we just had the XenSummit last week, but it was suggested there that
the primary agenda item be making sure that all of the responsibilities for
my current role have been delegated appropriately, in preparation for my
upcoming parental leave (which will start 26 July and go through the end of
the year).  I'll put some of the things on the agenda, and we can brainstorm
other things I may have missed.  Feel free to add other things to the agenda
if needed as well.

The proposed agenda is in
https://cryptpad.fr/pad/#/2/pad/edit/3Zh+UorD-6vUrwhjhRVYyHe8/ and you can
edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items appreciated a few days before the call: please put your name
beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st
  Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a provisional
  agenda
* To allow time to switch between meetings, we'll plan on starting the agenda
  at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort
  out technical difficulties &c
* If you want to be CC'ed please add or remove yourself from the
  sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George


== Dial-in Information ==
## Meeting time
15:00 - 16:00 UTC
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2021&month=06&day=3&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179

## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 10:57:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 10:57:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135952.252308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOZ0-00017D-Bh; Wed, 02 Jun 2021 10:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135952.252308; Wed, 02 Jun 2021 10:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOZ0-000176-8h; Wed, 02 Jun 2021 10:57:30 +0000
Received: by outflank-mailman (input) for mailman id 135952;
 Wed, 02 Jun 2021 10:57:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loOYy-00016y-JP
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 10:57:28 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 833b7922-9d5e-40c7-a9db-ddc6ab934193;
 Wed, 02 Jun 2021 10:57:27 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52AvM4Mj
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 12:57:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 833b7922-9d5e-40c7-a9db-ddc6ab934193
ARC-Seal: i=1; a=rsa-sha256; t=1622631442; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=byYJhIr1Zrx2IYXFJo2Kd+qNouxpnra3+21TAVrii5d0SP1lZS+H3AGjmSGk80rHpv
    a3Cmga07VPtLTii+VUoqwhGoH3i4NB+oRIAm+g2WGIznkAqk/vprUsOMxDHypjQ27pTk
    M4ey/0GusjHTQIW+JXXfvSEQBZVc6uz6CvaJKayBq5MqpNBcXISAUcOrW52hB7yAJxlb
    uRGxT4xoq3F2DKsFZdOaaFVQ0a1/y/W4iyeuT0lIXE/4Fdz+F7/C0wIBuk+wg5kWz23Q
    3pS4pd+26AmVCmnGkaMQQii3V6Bq4VjBWoAGwhknJa08+HnqDGF54tZLvGLnQpUOKcXw
    8Jpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622631442;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=F8geIRzygIdHjqWInOe5Mba3yEzQCyuFZhoHbRoHsQc=;
    b=mtDXc14+ir8G325gupV6sIF9UzxFyfDzPpCeJxw5qhViRCPCgt9DmvPZ4wUWqedgRI
    xYSiC/aYJ0466kVToBpVU/tX0xevvqiTAfS/UL0KdFVELQ0L+/ON1YGPD/wZNwInT0er
    UazSA6qZPYhHIzAGHyc7qXwZrlRAGXGq34FW9DDI1pKeao/wXVJWRX1w4ROjkidkJ8LA
    XAnLXOzL+SHi8n6OCeBKq/NGrfdQ87bL3gjIIT7VNxGU4apCvssC3NE9z4/N6LDgpmKY
    zBP0UwYKoZdZM7gHmwTofRs68n4Iw9Y8Jum6d8RHpqaWBoxi1U1ycKr+x2t0z76zBSIW
    xVLw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622631442;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=F8geIRzygIdHjqWInOe5Mba3yEzQCyuFZhoHbRoHsQc=;
    b=CWUx9dnkAgh3IV8vCZQwpqUNpJol9XCG1XF1UeRhH9zbEdVbzy3KyV67xObRv7vA73
    jRhWtH8D61fksYAdV/Cx3XYyJyXvnwU4jWQq+X7Ul4LhgCtVB/MQyeXxTNertIz/KmDX
    IR9r8mtWK9rxxdXs0vQtBPtJgok3dsHiky3YSeekrCCN0DXQqIgZ5bUsCAnqNR0mFV3c
    iqR/klzqa0m634NBTA1/4NdMG4RgQ/sY3aqyG/UnvM1Ee3NhBpowMDHHsSd2OAeTFj4X
    pDQLdh6RpViK7lQK4w2gG/bB/eaxj0//j0YMkeaWAenePjxjQ6UStbeUEWI1N4KjVmrV
    s0wg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 12:57:10 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
Message-ID: <20210602125710.0607a985.olaf@aepfle.de>
In-Reply-To: <23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-5-olaf@aepfle.de>
	<23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/_4=byZjwD0_1WiL6z/pNxSS";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/_4=byZjwD0_1WiL6z/pNxSS
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 08:30:08 +0200, Juergen Gross <jgross@suse.com> wrote:

> On 01.06.21 18:10, Olaf Hering wrote:
> > +int readv_exact(int fd, const struct iovec *iov, int iovcnt)

> > +        if ( len <= 0 )
> > +        {
> > +            rc = -1;
> Is EOF really an error?

I think yes, that's what "exact" implies IMO.


> This will stop the loop, even if idx hasn't reached iovcnt.

Yes, it will trigger yet another readv().

The "while" might be a leftover from a variant which used repeated
read_exact to finish the function. It should become an "if", and the
"len = 0" can be removed.

Olaf

--Sig_/_4=byZjwD0_1WiL6z/pNxSS
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3ZAYACgkQ86SN7mm1
DoASbA/+KpZVU9KuuhQOE+wLxOx7ORkOo0/oKFj42YoUnKZOwHFE6B7/Cg3XAvYd
0dMVONch7rIdwTOEvC+/3tikLV0W795H5hF61H3D0K2dbnVTWbdPWe5TQ6mImav4
v3fXN+F/wvsPIduEVf70v+hEVmYp0682/D35GpCk6RoTsjzCpj0TVst7dWIdj7j7
wQ2fJNU2e9R4CZpZA/zj48DLuKC2j4QPZBEWlz0BZSNq/59mKTy8dNpyQ8WLhpG4
SCTPv2g3QEuiBYWyQKjNZUmr7H0YCoXBLQILKACpCuSc/VSgoTg6iCGSeKbCMasr
xwmumnh9Drn0t73aIJ16WmFURP3+nD/uVaa0kKD5F1LWq8W8oXFiyFmoEizIS9Hc
rxb1pMZMTD/JXRJ8hDp6EIDNbiDFetNi5l25lFYv5Q0wG4vHzJENhh4h0uc6tpmt
NMtxjjCJWAi5I6VUIE5Gb89Y7LHlALq14V71eh7fn3r366cxhIjiRzupNnKP6UdL
GePfTK7hZ6MQCKV7CI88Dl9FA5tbIsAzcrmCvZcpQNwBwoe09FQJyLaRqcyZrTnJ
6nvnuPga9xM6GXonj4BdvG+mZMtdg8JoL4NmIMBC7UySdyHl3E8fGblk5KRh2zuD
aPZCaKm0z5/LDUtYIsFmAXJHWLg1wAAxluX+kGhIkDrDEWo4niE=
=0iip
-----END PGP SIGNATURE-----

--Sig_/_4=byZjwD0_1WiL6z/pNxSS--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 11:05:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 11:05:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135959.252318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOgj-0002mX-4X; Wed, 02 Jun 2021 11:05:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135959.252318; Wed, 02 Jun 2021 11:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOgj-0002mQ-1Q; Wed, 02 Jun 2021 11:05:29 +0000
Received: by outflank-mailman (input) for mailman id 135959;
 Wed, 02 Jun 2021 11:05:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loOgh-0002mK-97
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:05:27 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c982436f-a875-4d01-81b8-d2dfc6ae6edc;
 Wed, 02 Jun 2021 11:05:26 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52B5K4P7
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 13:05:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c982436f-a875-4d01-81b8-d2dfc6ae6edc
ARC-Seal: i=1; a=rsa-sha256; t=1622631920; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=oBwNBlC4YklEwQyOooA0XQdZdLPTkql78IQaU+Z5BpHmb46oXZxLJrPwFMXuZf6Q1s
    iDereLW2aBbtfmjz7NixaZPr837ersCZAbndvmJ5IixFGxt9rfGZUbe4J1f6WRXD6vY/
    0e37QqHgBuYMAirUhtVfLRz5J4cFr7R8hc1i2kMTsmpvI1yPtMpytBApH1naUppRka3Y
    JgEOnN9SmjqWjgkFvDL8sy3h0KIIz7gmGaBcuG60Bfir9mlnTO25vDQODYXTSqZaSjEZ
    pOCOGC+6lbJukUDSHl4OAzm0glgn4EAsKMC+YyNY9ZqcpPKJ7WAu3XBFDJLltb63+0O6
    GxYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622631920;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=mmwAcczl400Nz90EdzHL45etZJkSIEL2HdzsRVjHS/o=;
    b=bcxGcFYlC+ShPevvdADUhieQGevYTDPSercDItCAapvcu07LfQYYPBTEY4oGSa7PsB
    Kqc3dGka2UaVml9JP5Jyod8wY+GWoxy//VNrnjEGPwXMRPuF3mwrdqtVPRR6aGU1q7/+
    Zow06NU779h+L3347dji+c9d9vcJ2I0yU1+jyCRe4oY/u/VyMs1sjCOdCuDu3l5FReqO
    dX00Aq+jeAhmZcUzoeARHbsN0bfU8S0ZHKXL+s7WeXfdbSIz8F+poR2VFOPKgES76pZX
    3R7+PpvbIkZjrKmklXpwKuHE1YBMzuopTQWxUdSmSRjApuKkVxqqlYVQeCZPHOtArGqm
    1GNw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622631920;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=mmwAcczl400Nz90EdzHL45etZJkSIEL2HdzsRVjHS/o=;
    b=PanaoPCr+7hjucM0F4O2Ds9KgaWZflyr1ZUDs1drZHm0VizBodZG1zLXhCRcoiGPlT
    7NfdDYZgDzP5RhdKGWtC/GxieNBGi1nhGNoEzpgbSvAj3IcmFq8y7Ez3CPnbJ9/6kBck
    vUaITkiwJECSu0AeBsCSMdRJPzmI3KHK21A/rZvh0sDA/kCg6bH7V1m7MUNtITINF3Wc
    ZGWR0UJMfH9+ACWXZMrmqFIVL/MKippuFZWR2CmiQPYyIB4KhMrg0dGMXz/nlp1meXF3
    HpPAGkKjRrHUdJgqSxRE5Xm2/LkPdpltILpvRVhCmn2BAvXdQU8wlALNn2ACuUJFbeIl
    gxAw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 13:05:11 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
Message-ID: <20210602130511.79c4eb15.olaf@aepfle.de>
In-Reply-To: <20210602125710.0607a985.olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-5-olaf@aepfle.de>
	<23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
	<20210602125710.0607a985.olaf@aepfle.de>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/7t8xgqfuvRz/yde5nnr0v1d";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/7t8xgqfuvRz/yde5nnr0v1d
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 12:57:10 +0200, Olaf Hering <olaf@aepfle.de> wrote:

> > This will stop the loop, even if idx hasn't reached iovcnt.
>
> Yes, it will trigger yet another readv().
>
> The "while" might be a leftover from a variant which used repeated
> read_exact to finish the function. It should become an "if", and the
> "len = 0" can be removed.

I think the code is correct.

The while() loop checks if the last iov had a short read, and finishes that
iov with read_exact.

I will add a comment, if that helps.

Olaf

--Sig_/7t8xgqfuvRz/yde5nnr0v1d
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3ZecACgkQ86SN7mm1
DoAycQ//TEQ88cyhy0SnmTPYQT6HV2VnDMSoLRnznJfN7S2Up7OJKQjZHPsoCIrE
ENwPfTky6N6QrCYWccFxY3zpv6v9aXWgLZBksqmmP4eGm1KMTgEUHCrFX5BI9Hir
DRJqzugpz1gzDns2qgKhVYfiiHlDonF3TOT4zal7V3sCU7gTP/Qv58Kxdw6O0pHD
fuYB6m4MGQYC84GC3YHK2/ikXBWfn4MZjrCBCjVdrOe1yUmIxTCVAYnPOrpZQtg+
8aSNNj2qpi7MsJaPHyzrSfNvs2t3BdcYySkzvvGjsJACtXOoLWw+XCC6uGt+IU/U
xJQwe/aW3M839IKhSf9FUCXQrgRcwpa6MpJlfef+0LzZTytFm3gvWVt8S5wtHwmn
LKUbRcqzbiIrAuzX3uU4bfIEczvRbFlKlwDDALDeZupJEa/C3OrKoH7SnV4Pxeac
ZXqLbafR3iRShgzdai3aHz9qm/cWoZCPzsAzfxrHUnFH6JRhGp41HRG0N8IU1gW0
yb+VDtBuemginwDdWwclSi1CcUn9TiAsWZENXDoznXDd+HPU3vwUss5RBnGJ3ET+
KC6pI+ajA86yq5qCwMAkKwoxP1GC4LMNFmC7Sp7OOCkPz+qKRdspfkDNcriwqXcm
g4NEcP5RQ090smnOv+wlcmZkfxgRpmSPCng8qBH3A47V2l4l/E0=
=XUOE
-----END PGP SIGNATURE-----

--Sig_/7t8xgqfuvRz/yde5nnr0v1d--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 11:10:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 11:10:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135968.252330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOlh-0004EY-SX; Wed, 02 Jun 2021 11:10:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135968.252330; Wed, 02 Jun 2021 11:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOlh-0004ER-Np; Wed, 02 Jun 2021 11:10:37 +0000
Received: by outflank-mailman (input) for mailman id 135968;
 Wed, 02 Jun 2021 11:10:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loOlf-0004EL-NS
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:10:36 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.167])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b5a017f-fd75-4627-a9f7-dce388d41b7a;
 Wed, 02 Jun 2021 11:10:34 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52BAU4Qq
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 13:10:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b5a017f-fd75-4627-a9f7-dce388d41b7a
ARC-Seal: i=1; a=rsa-sha256; t=1622632230; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=QJ67uMjAzwlm1Pqb1gbynRY1AdHiAo45tsRhYVIqdSVjyoVyxe12A5rlTe5CwzNiZm
    z0yzetu72A76ixPhyd0scJvrLCbigKCSIhzep37NJopvbOKTC2DXNpuwd+6IkDXaGKOc
    +EZSFDOqrkJ/xE8yCc9xPRIe3LDNdNHFsLRGQ96RoAKGrjWRURljO29E7EciVMSpx8Jf
    5rQWgOm0/s4YUMdelEPbG/Oainejfd/4gu0i68h+Cw+52xPkOA6Ogs6eMFlwsQjBhwQE
    1S/oGEfMh/nl6W6rIGCXb/Lr+geEvq+HVVw+cpQph05WO+jRLwaIROz0uuCOHcy16rdl
    fWHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632230;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=X0g4fFJHobHi8KaDsVgMN9D1Pgxg0oNLVP3TjEglokQ=;
    b=nO/URN2sLpCisbd3jjEiwtPIksi7fb7B6NriSSWwTgY+fpfdwlRsqCtp7RRX5v9b9E
    w3OLexpVMigl7rhNe90RFBAsz9tP1Db5dPpp2IybM52ZkASDbIYPmDh4Y9Sgruwr6nt8
    WlV+oeT5sGK221/2te/pPoJT8vWz4GtUsYShBtDE0mchFnikei4qdRvRQJP844mKfcGv
    o/ROtIP94DsAdTtNZFLNZVjDWrA2KI9N75oS38GmwMAlUAOOT+NCH5aiI0V1cLt01Hw9
    bBBEZc7UGEJHqSixCbrf/V/wVwUHveRTJ/0uWYDiv9ecwWAIzqq0DGEg5RBozdTzBLRU
    6ppw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632230;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=X0g4fFJHobHi8KaDsVgMN9D1Pgxg0oNLVP3TjEglokQ=;
    b=Hcp37kwZ+TJygH3vqkc2fqidi3MbFS8CDmx4QaNgOIs2A3CYWWTb0nhNs3rp3V0iIW
    d/idXyArAEiuNobVIORuvo31hvykSZoXbjXYnna+443vr6bysHSIlIfJ0iPGl3bGv/v3
    kKLA5TCy8P+F5gQVdB1NKul511djlArlRg+fKNsHuj4L8CFAnJLyL21t0ach/QBQ5MOi
    MzdMxDD92woyUQ4JdAIED7KbG+nVFtHWXws43CzZpXY0lfK84lgh1bZYRytX3fomEuSK
    4eWTLR/y0kBaI/QsIbYhO0I5TL/aOt8MPODq0b99y9XiEDhSQSWHR7+xU6kfaevBBHgY
    /tsw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 13:10:21 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to
 libxenctrl
Message-ID: <20210602131021.094d40f1.olaf@aepfle.de>
In-Reply-To: <f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-6-olaf@aepfle.de>
	<f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/l9_w.aGPh=gUYNJCIIDT/VQ";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/l9_w.aGPh=gUYNJCIIDT/VQ
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 08:51:45 +0200,
Juergen Gross <jgross@suse.com> wrote:

> I think you should not imply the planned use case here. It would be
> better to use "switch (type & XEN_DOMCTL_PFINFO_LTAB_MASK)".
>
> I'm on the edge regarding putting the new function into xc_private.h.
> In the end your use case is _not_ to call the new function from
> libxenctrl.


I'm not sure what that means.
One or the other has to be rebased to the new state.
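Juergen's suggested shape for the helper would look roughly like the following. The XEN_DOMCTL_PFINFO_* values are copied from Xen's public domctl.h; the function name and the exact set of accepted types are assumptions for illustration, not the patch itself.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Page-type constants from xen/include/public/domctl.h, inlined so the
 * sketch is self-contained.  The type lives in the top nibble of the
 * pfn/type word. */
#define XEN_DOMCTL_PFINFO_NOTAB     (0x0U << 28)
#define XEN_DOMCTL_PFINFO_L1TAB     (0x1U << 28)
#define XEN_DOMCTL_PFINFO_L2TAB     (0x2U << 28)
#define XEN_DOMCTL_PFINFO_L3TAB     (0x3U << 28)
#define XEN_DOMCTL_PFINFO_L4TAB     (0x4U << 28)
#define XEN_DOMCTL_PFINFO_LPINTAB   (0x8U << 28)
#define XEN_DOMCTL_PFINFO_BROKEN    (0xdU << 28)
#define XEN_DOMCTL_PFINFO_XALLOC    (0xeU << 28)
#define XEN_DOMCTL_PFINFO_XTAB      (0xfU << 28)
#define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU << 28)

/* Sketch of the helper, switching on the masked type as suggested
 * rather than implying any particular caller's use case. */
static bool is_known_page_type(uint32_t type)
{
    switch (type & XEN_DOMCTL_PFINFO_LTAB_MASK) {
    case XEN_DOMCTL_PFINFO_NOTAB:
    case XEN_DOMCTL_PFINFO_L1TAB:
    case XEN_DOMCTL_PFINFO_L2TAB:
    case XEN_DOMCTL_PFINFO_L3TAB:
    case XEN_DOMCTL_PFINFO_L4TAB:
    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_BROKEN:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_XTAB:
        return true;
    default:
        return false;
    }
}
```

Masking with XEN_DOMCTL_PFINFO_LTAB_MASK keeps the check independent of the pfn bits in the low part of the word, which is the point of Juergen's suggestion.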


Olaf

--Sig_/l9_w.aGPh=gUYNJCIIDT/VQ
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3Zx0ACgkQ86SN7mm1
DoA7Ig/9EDbv+Xux+TnI2uHADfI0w5AwM5g3/PA57ihWmiXWQIFjTa1mM8QToBLJ
Ct/xm+vX9tLPQwddiid6R96W3fsNezsXYU+KxLhYCo8NH2IZFsrFICGu3+KpTa08
6jJGMxE8hbLPKEcy0YCiZKYmfihUr1J71dO1eAzz93J0eIULGfHh8GskZyBlGp32
IPdrPPxM73lwg4FIxtkYYGU52ChY4JvFYQ9BY062iQ0IhNCRbSiZ3BStrE8aMBVq
2QNSkKidKQ+54cvm26HDpMHc3E+NAuZu6IAPtKtkIzbEQx8U3zh+NOToQJh4a2NZ
gXfYxczeyd0WyRRtBkiV6MNtA6n+lBdFJa6X0SDFLHaIJMBF4ND/z1NIBbVihW9Q
ttsJv7CkFyvFgLqkisiVDnUU2hZd98PPAh48MPBlodZg97f/956Eq3n21qpCJeoW
eB7TrUrb9rrwhTlvOwJ/f0Bq0ZBEb8TTP9QvwWV2GSPW7VZKrdOwaJWzBg/S6WgC
TClrpig32izedEockZ6ba33+Vy/sS83M1jW1G4ye7LOfcG7kkHJIN+d6SQpDSDUO
RS2KsxFHupH34IaH8rJEZSdJFC2t1oMbqbOUodSNnCOaZNB14SiXa/lbgDEkRDzK
ufT2MSpQ3GgpivzZAJt0xK8RgFoquogQ+2RkRL6kXmQBXW43bNM=
=6RVi
-----END PGP SIGNATURE-----

--Sig_/l9_w.aGPh=gUYNJCIIDT/VQ--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 11:20:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 11:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135976.252341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOvd-0005nl-RH; Wed, 02 Jun 2021 11:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135976.252341; Wed, 02 Jun 2021 11:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOvd-0005ne-N4; Wed, 02 Jun 2021 11:20:53 +0000
Received: by outflank-mailman (input) for mailman id 135976;
 Wed, 02 Jun 2021 11:20:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loOvc-0005nY-JL
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:20:52 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffb40af4-4180-4ef1-b201-e70d7205610f;
 Wed, 02 Jun 2021 11:20:51 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52BKi4Tj
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 13:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffb40af4-4180-4ef1-b201-e70d7205610f
ARC-Seal: i=1; a=rsa-sha256; t=1622632844; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=n0buuWcaiDcnOfO5XQDQ1bYiEV2zWbGEiwEyfMkk+9Zy3VT2H8uKE5L+4RqnC9GtQv
    UTq8ZLtsOuqscDuel50WuERbebjUggieahXv85Hv/hUzUGq/TmONslpLgDKOqFXakEDD
    UP2tunA/vuvHItJbKD2npAIdEGsIDA8l/G6XLfVIqxqUI9iklJSQbYCijNNswEYH5zRH
    sX/HjrG2XCrVOYa53AeLUOAogkann7BDsyEaOtPB23AZiSHMLWMivX+kJGNdW98tUAFZ
    VRoqhhmQRqwIQu8XXDH4j8oTmfNZ1CiTlO1jOZ2Ceb5fHymx6TG4txrh3UeFNfDe8qCe
    Q7gA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632844;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=KfK4MvOZX1Mw3hMEr+laTlUUuHLc0mlK9XtSeXAWDB4=;
    b=n1b3GPXkN3IRNg1uPiyBEMYbZJZAamJBfSVcg7Ipo4cg7W4pzOFyr0gAQTYwjp9sf4
    J0jMN82rQOSKVvBqzAlhDIimMdXDpDfnvMxj3k3dBGk/WpgnrMdnwndkYDV3wvEsLCNA
    QdCkAgd9HvAEKwTTugNu+dHVv5d7BTaKMWeLkVJGqeVJzFfc5HjzdqxLCTgzowzSqwiT
    vJc74TMAMVkNgByhQjGQabUMZf8Y7F3xjUoU0DYfGENxNP17XbwZm2qBxy1gY7+U+7aL
    B1n2CsrqQs5D/MuobRYSPo5Azc1t+mNjlmqWNePKTNiQxLDpIvLW3XnUhA1BsfcuIG/O
    RaLA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632844;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=KfK4MvOZX1Mw3hMEr+laTlUUuHLc0mlK9XtSeXAWDB4=;
    b=pw16oXufDDazmJ/GZ0vOyCzFPGqW7C8ltUh/3bvi5A/hSxuF59YREs7CE8DeKYWJen
    /L6rTxFlPGJCiT9nmWw2O6NsfJBq2dDFouzMfE0QWD0SRCQjrFk5FonbkJ8zW81kyo0b
    VH1x2+5rTPjkLy7lSrRaf/F+HYLx3ld6ouckkLEo40Ml0q50wvElOyX8b/iumZ49/vOy
    yk7ilJo1zC8WHTQ0KsjNPOjHQKxM25bSTPY1LsnBbcdVUfLzKvLVvkie4B4vrmwNdcbq
    OBR2Loo9cuZqadP7xnESe+zS07BdWbJjwfWpnLmnQ5d4Slj+OPfMTcsFdPVJ1XuVo8sf
    dffQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 13:20:36 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Joshua Otto <joshua.t.otto@gmail.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 wei.liu2@citrix.com, Joshua Otto <jtotto@uwaterloo.ca>
Subject: Re: [Xen-devel] [PATCH RFC v2 00/23] Design document and
 performance evaluation for post-copy live migration
Message-ID: <20210602132036.68e60184.olaf@aepfle.de>
In-Reply-To: <1529230714-30455-1-git-send-email-joshua.t.otto@gmail.com>
References: <1529230714-30455-1-git-send-email-joshua.t.otto@gmail.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/3V.L3j.n+kYMS2GGUuC+xFn";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/3V.L3j.n+kYMS2GGUuC+xFn
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Sun, 17 Jun 2018 03:18:11 -0700,
Joshua Otto <joshua.t.otto@gmail.com> wrote:

> A little over a year ago, I posted a patch series implementing support for
> post-copy live migration via xenpaging [1].

Did that ever go anywhere, or was it a wasted effort?


Olaf

--Sig_/3V.L3j.n+kYMS2GGUuC+xFn
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3aYQACgkQ86SN7mm1
DoBCfg//aCC24yeBB5+IAoUFQU3V26iJrGaigxPPd92cfKfS9+tlTeFfCnVoGy6c
8MJB2SL2pqMPh/UnhxO2Zi3TEWBeZ0Y+eUghW2+0BN56e7WbsbkoXyiyLdRW5q3B
Ow2N9W81Dbe3eyHinDzdXO/LUpozgc+xdDbTcEACUHclUhOeXvb6GDBQKbDsT4w0
RHYaTll2w5DxT3YqTFlH42Bd/P1AaD3g0nor1xtexbRr9dWATHH4uG2mZnGDt4tD
nVMXqVuVXIE/dC878CtmwHsU2fyOEonA3bs4aeWZQR2qeBDz1ViU/uLfY7CYeYrQ
jTceMkCQY+rRy+hfZ5gslsb4xCPTaGJkfxizdFAaqaL7tO0ChhmGZUz5mVfI8v0h
0P76juHlORG1m5/YsXIyFquM2Er8WVqwHtZtclG63Y4b1Q9Grl2y9YzPpMUPal+C
ywOKwis6daAToo1e2MnafT5+gC/QeZsY8LAvlb0r76zex17PC6cL/4ueDn5Lk7xx
jQgh0imV96lmwX7uUTf+gLvzyoczIZiEFbcCz7AidWEpgixc+73pRHjFhVMUz0eT
+n1yrcjD/RMaPxowAFsdi8TVZi2jare2td9o5RyZkA9j6EsqZVNilyMg8DjzRoYR
7XbbipaAoRwqKmwmRwwq6xVV34DmKrzRpnortfZBj7duOiZ190g=
=kfsJ
-----END PGP SIGNATURE-----

--Sig_/3V.L3j.n+kYMS2GGUuC+xFn--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 11:21:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 11:21:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135979.252352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOw7-0006IV-3a; Wed, 02 Jun 2021 11:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135979.252352; Wed, 02 Jun 2021 11:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loOw7-0006IO-02; Wed, 02 Jun 2021 11:21:23 +0000
Received: by outflank-mailman (input) for mailman id 135979;
 Wed, 02 Jun 2021 11:21:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loOw5-0006Gx-U8
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:21:21 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.84])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c5e3d0a-d4a5-488c-b578-356c9c204982;
 Wed, 02 Jun 2021 11:21:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52BLG4Tz
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 13:21:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c5e3d0a-d4a5-488c-b578-356c9c204982
ARC-Seal: i=1; a=rsa-sha256; t=1622632876; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=CtQ2P4CHqlLZR/bhH+ffeYRjSnSAIQ2RRrkKtKB5FwKxr0wKjum743DuXrWWk8/A+3
    0tPh5GnGuJrvElAlD/1Wzi7w2Tw8d8bfk2eUGxXZMDzAyALY6cJqgksp0/VWyahOqbnR
    OMGwsx109EBDD1nG6uNr5LDb745qHugebg376wMwqtN3TkDG5XlgJMIkOzAueiVIZAvA
    oeynLJUgW0IL1lwse8E9f9xtyviLP5BnxtJneyDfb0DVwfDkLeubpZuIJxi5xcJBNkZn
    0Rl6gogJ8YoSec5e9Rz6084sMkT6RSUQSNKqBWEO1STer70KAPd3jyj1rjnUR+3fM1b/
    dZSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632876;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=9vWpdbDZWYsZRwpTxs4xYixb0KM4n1SJg9aWcgBfw6g=;
    b=gkJQ5rl3aL9A+75Q6dRUBfkMC1GT9PA72eX1VaJ1TfXH4InhXJx50Nh1jdmsACxLfX
    VuDFkJ54TRg0zWKTwC1y+F9U/CYmsA03dESkFZ/0jCaMtPeuJKkdtnUrZYpPK36QoXiz
    E76ezHOY0gqX0tSmEOx8aDm90vPEEVaewAOtY7kb4DcIJVlLV2QNMDlZlqP2oZq8RYXC
    d7pjKwU9fbztsJO4JMVlMJmVnlOTWJpG8gg3uyCOJREjOkxoZ28vfdbz8tR6xIThx62v
    iPhJi0+3hM9Y+O1duw6IG8Q08vVeq0JdMojc2djpNdVxAV6sf3bOh873pzWYNruCeHzm
    uPpA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622632876;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=9vWpdbDZWYsZRwpTxs4xYixb0KM4n1SJg9aWcgBfw6g=;
    b=DC8qV7jHcFHU4fkaBimJZJofUxurfryEUg2mda2P2rIC1WsA8CmzEAcaTB3XjKdOet
    uwVpKKx+ehy3Xi8zykdj01GgG0unteojOBE7Eb3fGQxN53hIC3Veaks50Gf+uL61mmJD
    vr5u18Yy1yAg0ynW6YxDmQ6h9X193cRNGUZeV5MnSyKPaVCQlXb33BSGG1nsdRlcNoEk
    YfcvykPy8UGpYKR+jCDtUPyAapRw9kjgxhbmDbFWG+Tj3QjfLPJ2GvY55b6C6RixqGEQ
    UR65/P6Yt5xZ74zk6C6bU3RUyPw9GAd+DWRAb+f5j01tgNxfdJ+A12+BdVb9DldLddo0
    TnMA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 13:21:14 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data
 pfns in migration stream
Message-ID: <20210602132114.6fd9ee87.olaf@aepfle.de>
In-Reply-To: <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-8-olaf@aepfle.de>
	<9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/.0kF5D_cin=WVl+P6KXPNQi";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/.0kF5D_cin=WVl+P6KXPNQi
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 08:59:13 +0200,
Juergen Gross <jgross@suse.com> wrote:

> What about XEN_DOMCTL_PFINFO_XALLOC? Is this case impossible here, or
> are you changing behavior?

I think XEN_DOMCTL_PFINFO_XALLOC is a type that does not occur in practice.

So, no change in behavior.

Olaf

--Sig_/.0kF5D_cin=WVl+P6KXPNQi
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3aaoACgkQ86SN7mm1
DoAq0xAAiSG1PWwm6py2Ekb5GUxT7/RgVIFgknbPs/vMA9rQLKwIRO/lrgfK1oqQ
CdDyBFDVEQmTMIyMqW2XndvHwGC7GpxUriziloEkFg3EzMQMJtGm1k+CxnmVJqL8
9VLyS72gk489vOro9a2xZmDfcWqaw4mHOeCPZpQmFtLhj5LmKN696qtsljG3qLNF
/YowswwH1DTq20xusPC2laCkbEhqVTOYKx3t3gtFYaLIqGc1B8IauSmvAJXvSw95
ljBMLy/vVyxXMebuWQ8hCgT1PNBvx5qAoion/pM2MuxFhZKHto3w7Fw3gzIigFTa
1bCZxgh46YuocULkeCCmzes5dGHrIdziL0tM5pUA7rSgPAnFqRLK4d+984RGAvix
eaEzvdUIXr8Knlk84lG6gOSpA3a4gRRJaP3wP/wUQej9U+qnXMqCkt6O3yinEH6V
QH2TH9H5kPhOMPbb2fBOP5J0CsE4UG33jX0KLdsWAPtmHNjFoo+kJtz0zd7C2W+e
jtQLC2kkXA8xQeA7VwR7F/648MYJshNUH3q5D6EcEKuXd8ConCt/cnKuNZOGd20m
1WfEki7/+fSbR56q9CxB1cW5xlxHau7ZU86mS4tPKR1QwVAYBiNIrUTLzEdnIipj
/Ub0DsBouQvzgSkz7Hzpoa6qyrjJm/94DJjOgqCk7X74HGh6LgY=
=5vXS
-----END PGP SIGNATURE-----

--Sig_/.0kF5D_cin=WVl+P6KXPNQi--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 11:57:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 11:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.135991.252363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPUn-0001ht-Ty; Wed, 02 Jun 2021 11:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 135991.252363; Wed, 02 Jun 2021 11:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPUn-0001hm-Qv; Wed, 02 Jun 2021 11:57:13 +0000
Received: by outflank-mailman (input) for mailman id 135991;
 Wed, 02 Jun 2021 11:57:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loPUm-0001hg-Jy
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:57:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loPUm-00062x-EK
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:57:12 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loPUm-0004pH-DQ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 11:57:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1loPUh-00030K-Ll; Wed, 02 Jun 2021 12:57:07 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=fkSfa9D8tccyNlc1tEhK8IFgEtqWMbktXu92k+Mn81w=; b=CX/TCcFbqxZetncnzEko10eUtk
	GURr+3jajdAFA59YmN7RJmzqx+4V4qMvZLGboZwea6LCo+oc2ArhOy8Pao9qFaPu15uRi41u0Hww3
	HnxYw8EfFtRI7qMmZZ+j1ROyqXct0DuIlJDKx8QbPKXSwiuQNvqtZAWt6FYyu/zaxSEo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24759.29203.405999.764567@mariner.uk.xensource.com>
Date: Wed, 2 Jun 2021 12:57:07 +0100
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Jan Beulich <jbeulich@suse.com>,
    "xen-devel\@lists.xenproject.org"  <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?=  <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2] firmware/shim: UNSUPPORTED=n
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <b1f53cd19ed65eec756d20fdec45c2c5cf79d0d8.camel@suse.com>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
	<72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
	<b1f53cd19ed65eec756d20fdec45c2c5cf79d0d8.camel@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Dario Faggioli writes ("Re: [PATCH v2] firmware/shim: UNSUPPORTED=n"):
> I can try to put something together, but I don't currently have an
> OSSTest development environment up and running any longer, so it may
> take a couple of iterations...

(bit late with this but)

You could use your account on the Xen Project colo.  Let me know if
you need pointers or anything resetting.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 12:03:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 12:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136005.252374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPar-0003XO-0m; Wed, 02 Jun 2021 12:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136005.252374; Wed, 02 Jun 2021 12:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPaq-0003XH-TK; Wed, 02 Jun 2021 12:03:28 +0000
Received: by outflank-mailman (input) for mailman id 136005;
 Wed, 02 Jun 2021 12:03:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loPap-0003XB-5S
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 12:03:27 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9465871b-b965-4b2d-a28d-a1f5d97931bc;
 Wed, 02 Jun 2021 12:03:26 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52C3K4kh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 14:03:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9465871b-b965-4b2d-a28d-a1f5d97931bc
ARC-Seal: i=1; a=rsa-sha256; t=1622635400; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=o/k9BmSuh566IxKB34lOOia0hX76LEF/QGxZeQItKEFMoMnyk3JpSGJEKfR8ZF/qT1
    e3Nndi7nraZG6tiYTYd3u2BLbgyYleLYioXD2IeNcTTRK+JCkOORGsfi8bE3HBNU7RX9
    hsLR7SG4TQzqQvREe9uLRxlF1Re0Pn8Ms8wuCuPhxGxuyQDNyyXL1Bqj9H6++oCrQE80
    qIvbIfycH2L5Zuu5I+WJ/eaOkJ+bEbwJ9iF27mFPjycAKm9HjhTsE3fuwgS6Uy70GQk1
    6q6cftGkCl5PR/czGRjDWwP+pFjK9viMaUmtb39CYqzVcoLOoo9t7zNeiH7EJ1w0cJAr
    iNNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622635400;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=FOduCrl90ZiaFx9X3ycOUkHkthPsCWuIsPfGUyiJQyk=;
    b=j7RzIrqEF1T/QZctBrmETPwg0K3qERUB1Krng1RCINS7gGQyfH4jjMb4XhejZSbzsU
    pLMv9S2N/ZxZb4NK09VkMFKnMoVURifmP71X6FHl7gxmn8pJC4VpiuEgwV6l3C8YwNy0
    1c7GW2T8i9SgpsluZvQHVYwqlP9XlNJ6dHhxewVKBdPbSS/yUKGn3z/ZFUF5DZ2p3/O1
    yjWkC9oTKmBXP9yOrY/kGYKYdCe0qP3tnVWIuylOCY/vFHZnOGAxq+ZvqJ/rjlGGov1v
    yLLpQsM+Pxa8K3eZTWXIgQnJXjAaBtT+OXzTjlzDZiKZwsdlrLzJuTDXkbOwefvuwqAJ
    2HTA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622635400;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=FOduCrl90ZiaFx9X3ycOUkHkthPsCWuIsPfGUyiJQyk=;
    b=ftAzWh0wULzVYaEote6eofUzJNg6pg+ZKWALlsNsck5N6oZzPvMcOv9+d1SUd+jkmV
    /oyXoMiIfhnpALA22nj3xpGSqxsQ/rABeuCe9TKDOHx3B1Wm50TR5TzFpf7n2hQ6gJdO
    ynJEC08IgxFnSJtykzS4k3nqT5i0P/XhsN3uUtcjXH6Ns4mlt+vBXTOkhaxc9pWjmg7b
    ++2/lC/ean9vMhTkxLo6u+X0bQEjZtiDa3miwKfWCy6UcjwrC79nJ0bb+8F1qDR/xE8L
    U94X9FQg3A0L6JnHKNlhmbFlwpXqEHNCIKb1UFLrTikQh+XKwPv61mT9PTuTml/8TVC5
    aCjA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 14:03:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays
 once
Message-ID: <20210602140305.39eb417a.olaf@aepfle.de>
In-Reply-To: <531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-10-olaf@aepfle.de>
	<531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Wdckj5UqH3WRIyOeSurKggp";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/Wdckj5UqH3WRIyOeSurKggp
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 09:29:08 +0200,
Juergen Gross <jgross@suse.com> wrote:

> > +    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
> > +    if ( !ctx->restore.m ) {
>
> ... this case might trigger without the full series applied, due to
> allocating zero bytes (same for the save side below).

Such a bisection point, with a libc that returns NULL for malloc(0), would just be bad luck.

See "Avoiding testing a commit" in git-bisect(1) for how to deal with it, in the unlikely case it actually triggers.
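The hazard Juergen points at can be avoided entirely with a tiny guard. This is a sketch under stated assumptions: the wrapper name alloc_nonnull is hypothetical, not from the patch. ISO C allows malloc(0) to return either NULL or a unique pointer, so a bare NULL check misreports out-of-memory on such a libc while the struct being sized is still empty at this bisection point.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical guard: normalize zero-sized requests so a conforming
 * libc that returns NULL for malloc(0) cannot make the allocation
 * look like a failure. */
static void *alloc_nonnull(size_t size)
{
    return malloc(size ? size : 1);
}
```

With this, the intermediate commits of the series would behave the same on every libc, at the cost of one extra byte for the degenerate case.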


Olaf

--Sig_/Wdckj5UqH3WRIyOeSurKggp
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3c3kACgkQ86SN7mm1
DoC7uw//SAZCXD0/4nuEPd2BKYrUe2N+Wlb4IpKJWH1jYcMuk2LBI+Ek8SYcv9AI
afH3rvwFispA6HWAFGzJL3h8/8dOltnPAYqFR/ee7yibQ6DoMViPKVrBcoTkGWCb
UUj0gzD2J9ThufTuvPGVsttQY9YpZPr1BI9pqSm/PIlGCNsOoc6zgFui0MYf+IdL
zDjvttw8fAWy726bYfaOMW7fg6H1nIFvzrGRQHNtSWSAZOJ2cyLDxKRQaLgS57uo
HViC+PPxBpRbeqJonY/fpVQ4Vr1h/ttguv+vGsQFsA9+b//DNxfvTHS8FcCxPrGW
sjtP43MVWAqgvSt6yJukXhw3tVPpjh/z2Ehpy833lx0BNUQEICqGDGzm0IMEGjwI
kNigftVUE1I8PG4YMgF6oWZnwB+PGYpHXwvSiLs8ssrYeoEa+BdVfneC5iG+p6/P
05KwdARU4P5fEPAfz6oDZuFRLvzN0ijVV7Dd7U7AX2lda0KpBLpeob2TP97AmxVd
9jEcBhy2woNnEAkK4j7jgqXoWRQRnAwlSwCf4o29PcY2X4Vy7C18FqgLO8QYIfx0
greM1JLgAPyOTYmMG/NSzxkdn1pRJ9rG27iVhQi4znF3KjEwdpH/oUOXG3XQNzHQ
lRODO5AMcWNuD/pRA/blZDuUB9+/wxmNZbWhnqd0nR6RBd48bDs=
=cSKW
-----END PGP SIGNATURE-----

--Sig_/Wdckj5UqH3WRIyOeSurKggp--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 12:08:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 12:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136015.252385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPfF-0004FF-Ks; Wed, 02 Jun 2021 12:08:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136015.252385; Wed, 02 Jun 2021 12:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPfF-0004F8-HN; Wed, 02 Jun 2021 12:08:01 +0000
Received: by outflank-mailman (input) for mailman id 136015;
 Wed, 02 Jun 2021 12:08:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loPfE-0004F2-2B
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 12:08:00 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1819bee-937b-4368-b4cf-46861176c10b;
 Wed, 02 Jun 2021 12:07:58 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52C7v4mO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 14:07:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1819bee-937b-4368-b4cf-46861176c10b
ARC-Seal: i=1; a=rsa-sha256; t=1622635678; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=pK4Sa+B4ZTK741NZeYuYc9tJkSqR8PmaD9xtNKcF7e6aF3wxkJe/xOMa3gxSE5XOf/
    tC5ShSUY4q9mGA29JASMnjpnGshhspYp2ugR8b/757K2xZsF6pVtwAa4H/XN1EoopfNL
    5UKBIuu+Pmop8bfA130e5JWU55WBaUGUsX2NVb8TOrZG2R8w6J3QkOxzLfJk8+X+Ji8X
    wMjDMT8svei+Sw4fTidGj3d2GFvpqIsOeoFzgu8YTfiBds8fPhWkhzo/IlEQMKTzC0SO
    y3ZgHP51J2JurbJ3mpwtYxwsS7BKWfO7Uj3Oe3XuUq8v7uqzQqjh6LVG2vEEZmdaK+cN
    Rstw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622635678;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=i2Qo/cH+7ij6lWILG3Gu/FmvWJfTvI+hDMyzpKgGD2Y=;
    b=nspK6d9WSIR8fcDDGOGxOYsgDDXFrH4CGeSuimQF1khiBhMkvJGs5wB/2eTEeYaqcY
    8QglgJ+2Hwu1F9A7ZY7z1Reeo6VzGEwMgfE7/mBHkK1zqFMfqUYm05zrSUgjULf6BAGS
    JIpsmd7PmKds6Nf1Qdb60RTtSsUzzsJqyzzmV3KU++zxGB14noYDDP6E7fX6SJkzuSvw
    RMDYIrXZlB6BJpoE5rgmF27RwJ2Jn56MjpiXASiOiANDFY3jX/li9uqDmhyXobX/VyWC
    Hheb9XMSNzm2jtH7Y6dU4ecQtpLuAKjNg8P7ulvEqdqnQ/G2bOdQ3Ztn24ThMrFuh+ki
    CILQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622635678;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=i2Qo/cH+7ij6lWILG3Gu/FmvWJfTvI+hDMyzpKgGD2Y=;
    b=Q/NhDuvXjjaVDlrWAyp7FQCoAKOawpapPw6/BDnjXZRcLUYG792c/soYm3P/KbZe9J
    4Inm53uCmDbUUkLrrIY9BBwwdrwAJmI6fxL9swQdjcLvgVDAhM23bDX/4WCyfNUH8sTG
    jANBRaBc9SsIiqEJyIRw4zRm7CBsUfri/3zaWLO3TMsBiWHdP9pqibdBTldOCVBIlQqo
    P6V49uVlG6Q9oL5jfzi+yswskEJicqaYFPFuLDlOpEIb9a08mWle3IBw1OupqNR0uCnP
    M9lf+wono9k0c/zWfAKmepSflcLoqVcIE0rvEC1pgF4l1LNzLwh76CRmW/UvtC3M2nkC
    7ZUg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 14:07:48 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
Message-ID: <20210602140748.462fb0fa.olaf@aepfle.de>
In-Reply-To: <51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
	<20210602085403.40064aed.olaf@aepfle.de>
	<51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/1=jcedWTaZ0s2Hre36Dc4wT";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/1=jcedWTaZ0s2Hre36Dc4wT
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 2 Jun 2021 09:00:49 +0200
schrieb Juergen Gross <jgross@suse.com>:

> IMO this will make it more probable to get at least parts committed.

It all depends on patch #3, so a further split is not possible at this point.


Olaf

--Sig_/1=jcedWTaZ0s2Hre36Dc4wT
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC3dJQACgkQ86SN7mm1
DoDyPhAAgoeJZQmPRRZj6IrygVH+zTOJ+Sl5IdHk9T01gRhc/POZ6bCN2tlXwIuw
WXIdmXf4UReYX4NmvHcIz9f5FUYzR5+DVFQaAN08mohn/j7rn3Fvn1FWN73FWPYM
OX5zNriJ/l7GLmV+xldfnhmVAPIvneNm3vgx1y2DsEJvfUTR6K3Lz6NcRQiH4PJN
hZpGNBQJMoaM6HPwO3f6r+xDk64AhmIJKx2tMKYfbfe4jfqnwzwS3SwpadjQsDxF
N72A6lobQqlfJQofa4dFnRvZFgLsC21eJxXnq2RVR35xL4dfPO95y0v6OPtQ69X2
1+ZhNI6/8A6hqE+tlMN/JziWuh/+agWxitOFAdcy9bjarPECLixAyh5VfvFXQS+t
29Y29rzayvASJJ0EtrSdeCwTuca56LKPjTyl0CGeeBhBhI/H94AMh4G68ic6w7DD
wE6cMrR/Nhl705Po9S+0eBVMn/pwLeZHZQYw6lzmZXyBBAVmOGfr9C5ZOEEkHntN
vLnY5ZK5oUHByrwDJap/Dnm3hUHweqANerIMFa2IQKinmlpZ8n9gJ9Nu+5MrVHQC
Pi6D9qWvQeujIbtJZwJ0oBVnDyC5x4Y8nWrRzk90/21zk+pZHGLBOE4C0xtEgMJj
D5uozMdLH8lgK+xlORSRGFyjmYdCmHVTlGDAnQ36bUrchFm82aE=
=nM1w
-----END PGP SIGNATURE-----

--Sig_/1=jcedWTaZ0s2Hre36Dc4wT--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 12:15:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 12:15:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136006.252396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPmV-0005p8-HE; Wed, 02 Jun 2021 12:15:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136006.252396; Wed, 02 Jun 2021 12:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loPmV-0005p1-E1; Wed, 02 Jun 2021 12:15:31 +0000
Received: by outflank-mailman (input) for mailman id 136006;
 Wed, 02 Jun 2021 12:03:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK3V=K4=linux.ibm.com=schnelle@srs-us1.protection.inumbo.net>)
 id 1loPb2-0003rE-KK
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 12:03:40 +0000
Received: from mx0b-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1c8c13e-2a43-48e3-be4d-962896700a9a;
 Wed, 02 Jun 2021 12:03:39 +0000 (UTC)
Received: from pps.filterd (m0098417.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 152C2cPO189409; Wed, 2 Jun 2021 08:03:00 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 38x7kr3h5n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 02 Jun 2021 08:03:00 -0400
Received: from m0098417.ppops.net (m0098417.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 152C2cNZ189527;
 Wed, 2 Jun 2021 08:02:59 -0400
Received: from ppma05fra.de.ibm.com (6c.4a.5195.ip4.static.sl-reverse.com
 [149.81.74.108])
 by mx0a-001b2d01.pphosted.com with ESMTP id 38x7kr3h4k-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 02 Jun 2021 08:02:59 -0400
Received: from pps.filterd (ppma05fra.de.ibm.com [127.0.0.1])
 by ppma05fra.de.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 152BwHfr023191;
 Wed, 2 Jun 2021 12:02:57 GMT
Received: from b06cxnps4076.portsmouth.uk.ibm.com
 (d06relay13.portsmouth.uk.ibm.com [9.149.109.198])
 by ppma05fra.de.ibm.com with ESMTP id 38ud87s9cx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 02 Jun 2021 12:02:56 +0000
Received: from d06av23.portsmouth.uk.ibm.com (d06av23.portsmouth.uk.ibm.com
 [9.149.105.59])
 by b06cxnps4076.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 152C2r4H26280248
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 2 Jun 2021 12:02:53 GMT
Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id A0364A405F;
 Wed,  2 Jun 2021 12:02:53 +0000 (GMT)
Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 479C3A4040;
 Wed,  2 Jun 2021 12:02:52 +0000 (GMT)
Received: from sig-9-145-17-43.uk.ibm.com (unknown [9.145.17.43])
 by d06av23.portsmouth.uk.ibm.com (Postfix) with ESMTP;
 Wed,  2 Jun 2021 12:02:52 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1c8c13e-2a43-48e3-be4d-962896700a9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=message-id : subject :
 from : to : cc : date : in-reply-to : references : content-type :
 mime-version : content-transfer-encoding; s=pp1;
 bh=7wC8Y59HiMIthB9xfz5dL+I0HD65zOwRPB4cvu5JQYY=;
 b=QRLGyPounpLaRIZgsSUK1LazhFSMWWWVFIpjfvUyvXqdSXI7p94FVQMtSFG3bMtHNL86
 rLVrJ3SvpT994x2pe7o0yL3iXXDeNBst8Q4A8KEa89vx85ZQSTDkkYsEjLQaMo+eb8VZ
 +GFr8IfErHwDLNlYl+L0v9/qNIinOxFcyNlMJhpRMjSZcskv/o9bM0Y0jNVwjZETIKG0
 IXpmjsLB5CkqOhxt84ImzKiTyn/t0QZlZGbbzTco4+M0nPgqtd7mDQbl31t9d+7DOxJc
 iYhombbX7dZq4eRMaB6F1ITduKcVH4H0X0f5lSV3QvlOH2CDPoe61GoP3QIOjHTOkYh/ DA== 
Message-ID: <e4891689c7651611020bdf3b4db9895819da345a.camel@linux.ibm.com>
Subject: Re: [PATCH 27/30] scm_blk: use blk_mq_alloc_disk and
 blk_cleanup_disk
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
        Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
        Geoff
 Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>,
        "Md.
 Haris Iqbal" <haris.iqbal@ionos.com>,
        Jack Wang <jinpu.wang@ionos.com>,
        "Michael S. Tsirkin" <mst@redhat.com>,
        Jason Wang <jasowang@redhat.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        Mike Snitzer
 <snitzer@redhat.com>,
        Maxim Levitsky <maximlevitsky@gmail.com>,
        Alex Dubov
 <oakad@yahoo.com>,
        Miquel Raynal <miquel.raynal@bootlin.com>,
        Richard
 Weinberger <richard@nod.at>,
        Vignesh Raghavendra <vigneshr@ti.com>,
        Heiko
 Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>,
        Christian
 Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
        linux-block@vger.kernel.org, nbd@other.debian.org,
        linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
        virtualization@lists.linux-foundation.org,
        xen-devel@lists.xenproject.org, linux-mmc@vger.kernel.org,
        linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org
Date: Wed, 02 Jun 2021 14:02:51 +0200
In-Reply-To: <20210602065345.355274-28-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>
	 <20210602065345.355274-28-hch@lst.de>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5 (3.28.5-16.el8) 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-TM-AS-GCONF: 00
X-Proofpoint-GUID: 9Pi9MTQ_S8CJ1ewvOnGa4xYgtn5KH1ib
X-Proofpoint-ORIG-GUID: -LAzjZA1VGIxMM3aVX5RPubiT3GLOjUz
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391,18.0.761
 definitions=2021-06-02_06:2021-06-02,2021-06-02 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 malwarescore=0
 priorityscore=1501 lowpriorityscore=0 adultscore=0 bulkscore=0
 mlxlogscore=999 spamscore=0 clxscore=1011 suspectscore=0 phishscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106020078

On Wed, 2021-06-02 at 09:53 +0300, Christoph Hellwig wrote:
> Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
> request_queue allocation.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/s390/block/scm_blk.c | 21 ++++++---------------
>  1 file changed, 6 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
> index a4f6f2e62b1d..88cba6212ee2 100644
> --- a/drivers/s390/block/scm_blk.c
> +++ b/drivers/s390/block/scm_blk.c
> @@ -462,12 +462,12 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
>  	if (ret)
>  		goto out;
>  
> -	rq = blk_mq_init_queue(&bdev->tag_set);
> -	if (IS_ERR(rq)) {
> -		ret = PTR_ERR(rq);
> +	bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, scmdev);
> +	if (IS_ERR(bdev->gendisk)) {
> +		ret = PTR_ERR(bdev->gendisk);
>  		goto out_tag;
>  	}
> -	bdev->rq = rq;
> +	rq = bdev->rq = bdev->gendisk->queue;
>  	nr_max_blk = min(scmdev->nr_max_block,
>  			 (unsigned int) (PAGE_SIZE / sizeof(struct aidaw)));
>  
> @@ -477,17 +477,11 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
>  	blk_queue_flag_set(QUEUE_FLAG_NONROT, rq);
>  	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
>  
> -	bdev->gendisk = alloc_disk(SCM_NR_PARTS);
> -	if (!bdev->gendisk) {
> -		ret = -ENOMEM;
> -		goto out_queue;
> -	}
> -	rq->queuedata = scmdev;
>  	bdev->gendisk->private_data = scmdev;
>  	bdev->gendisk->fops = &scm_blk_devops;
> -	bdev->gendisk->queue = rq;
>  	bdev->gendisk->major = scm_major;
>  	bdev->gendisk->first_minor = devindex * SCM_NR_PARTS;
> +	bdev->gendisk->minors = SCM_NR_PARTS;
>  
>  	len = snprintf(bdev->gendisk->disk_name, DISK_NAME_LEN, "scm");
>  	if (devindex > 25) {
> @@ -504,8 +498,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
>  	device_add_disk(&scmdev->dev, bdev->gendisk, NULL);
>  	return 0;
>  
> -out_queue:
> -	blk_cleanup_queue(rq);
>  out_tag:
>  	blk_mq_free_tag_set(&bdev->tag_set);
>  out:
> @@ -516,9 +508,8 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
>  void scm_blk_dev_cleanup(struct scm_blk_dev *bdev)
>  {
>  	del_gendisk(bdev->gendisk);
> -	blk_cleanup_queue(bdev->gendisk->queue);
> +	blk_cleanup_disk(bdev->gendisk);
>  	blk_mq_free_tag_set(&bdev->tag_set);
> -	put_disk(bdev->gendisk);
>  }
>  
>  void scm_blk_set_available(struct scm_blk_dev *bdev)

Not an expert on SCM or this code, but I gave this a quick test and it
seems to work fine.

Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>





From xen-devel-bounces@lists.xenproject.org Wed Jun 02 12:32:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 12:32:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136030.252407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loQ3F-0008BO-0S; Wed, 02 Jun 2021 12:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136030.252407; Wed, 02 Jun 2021 12:32:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loQ3E-0008BH-Sz; Wed, 02 Jun 2021 12:32:48 +0000
Received: by outflank-mailman (input) for mailman id 136030;
 Wed, 02 Jun 2021 12:32:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSCb=K4=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1loQ3C-0008BB-Qp
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 12:32:47 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.164])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d11ce78-ac99-496d-8ff1-e18014dd2db0;
 Wed, 02 Jun 2021 12:32:45 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx52CWb4ww
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Jun 2021 14:32:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d11ce78-ac99-496d-8ff1-e18014dd2db0
ARC-Seal: i=1; a=rsa-sha256; t=1622637157; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ISWKGepvFv99Q9VNMOvqDUhkJliYEJDAJyAdjWpfxRi1ao6v27utIfPEpQyaMJzg5S
    k3IzuZrxTKBoLZMHkTcnCGrM0VlVu943Rtv4MpU5RrSMercfrY085NnGWhftcQ911AIm
    VeIr0vFSDKgmiTS9yeFa89do9TkfFSgL5SfrIeKwSvK0BqkPt6Udj+ku9xCVsZVcH4W/
    itaAjgcDSZL3es7yJsnMRSKDPLhDarh4w9V37SEDP0NMf6aaM+UbLDkzACbGi/v5hpuC
    aUA52mbIyWXLsSA+FjR20CcDRcPC0JXG5urd2PZs/P783mhBH6Jw+zon6kApY6n1/qfz
    hxMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1622637157;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=JWdlTkXOkgDN7Clt1j9YLt2cHeC41eQucSJbB7W4054=;
    b=licgU8qa2ZoQswWGYaLxd7Y7KK25vSBFTcAt/L+IzClHxFbTHUNceyBPLiesKlwm15
    PF8ZIQax02sPq8HHMxMGRyOt3xmVwBeepL6TNl2QnRwcEqrfSIASDHkI3kNo5ryWd4Qp
    CRoodtCAOVtR+wQpZZyfnh3al1r60tIFpI2xgrA9Vl7E9hramkEjbdor95zGz8zJ0otb
    yRWq0/JzdygFsZijffbh60OvbCQIJBx5vgVHtFFHxkfzIjC9GuiBYqibWYrNhE9Y5jWk
    Hou4l5wmBecSiQKbQ2yzieqxmpx8Iunl1QDIDwTwWFbOiJp4Y6U1JWWkWjA9DhRaeRLv
    6pKA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1622637157;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=JWdlTkXOkgDN7Clt1j9YLt2cHeC41eQucSJbB7W4054=;
    b=CV3xHSSoGQG6Vl29O/yp3BMzE0AhXJJrjMWncWJPVhK6ndh03HrjS3VvyLln8vSewn
    smL8OtXjl9OIv/v+TemXQnDh3vbnHb7Nn0ejnVoQ3CA46Zxc2aF/tR/obU4wjs87pGDq
    /VlxUDZEDeq//YOrKayLYHHOrYcc/07/FjaJGhyBVFEpwVEUo8rfMRmHGomftPTuu1V1
    TOR+SfvXMeD9pOSh3gGJ0k7+8ScTgxC18ekSRh1Z/ctsneD17Tj3kIgjK+uRGfV2kWai
    IZFXwdOsljzZWMBTM0KID7HB+52jnrWl0DXQ5kcNpbOrMXKEyusVenGXFXakTZRPTAwx
    2WAA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 2 Jun 2021 14:32:36 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: [PATCH v20210602 02/38] xl: fix description of migrate --debug
Message-ID: <20210602143215.3a0cb971.olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-3-olaf@aepfle.de>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-3-olaf@aepfle.de>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

xl migrate --debug used to track every pfn in every batch of pages,
but that is no longer the case. The code in xc_domain_save is the
consumer of this knob, but it considers it only for the Remus and
COLO cases.

Adjust the help text to state what --debug does today: nothing.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- the option has no effect anymore
---
 docs/man/xl.1.pod.in   | 2 +-
 tools/xl/xl_cmdtable.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index e2176bd696..70a6ebf438 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -481,7 +481,7 @@ domain.
 
 =item B<--debug>
 
-Display huge (!) amount of debug information during the migration process.
+This option has no effect. It is preserved for compatibility reasons.
 
 =item B<-p>
 
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 661323d488..ca1dfa3525 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -172,7 +172,7 @@ const struct cmd_spec cmd_table[] = {
       "                migrate-receive [-d -e]\n"
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
-      "--debug         Print huge (!) amount of debug during the migration process.\n"
+      "--debug         Ignored.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
     },


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 12:33:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 12:33:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136031.252417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loQ3V-000050-7V; Wed, 02 Jun 2021 12:33:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136031.252417; Wed, 02 Jun 2021 12:33:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loQ3V-00004s-4P; Wed, 02 Jun 2021 12:33:05 +0000
Received: by outflank-mailman (input) for mailman id 136031;
 Wed, 02 Jun 2021 12:33:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loQ3U-0008W5-55; Wed, 02 Jun 2021 12:33:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loQ3U-0006f5-0J; Wed, 02 Jun 2021 12:33:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loQ3T-0006wx-NN; Wed, 02 Jun 2021 12:33:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loQ3T-0003X2-Mt; Wed, 02 Jun 2021 12:33:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CYahFAxuSyCqoeGUef+KXV5VJNRn4eFtFqsq/s1rcVk=; b=raMGMRiD8HAXV+tqvt6GoHf7r9
	mD5BvSioFCzu5RAj017StO5dB41H1tl2Aw+s+t31GV3RIeeTZ91ioIRTdpDgHRgn6reEYNoCnAdav
	kiUh6admdoIB8lLzL3luhKFFhfklpZm3k5vneGeq9mpVE2hv+D2gnLGXKgL1fhF+0v04=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162334-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162334: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b233eb1849ac01bdd5b24ea84460a2e481a4c5a9
X-Osstest-Versions-That:
    ovmf=fdf3666f01a2dd02d83a808f609b9c744a74c652
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 12:33:03 +0000

flight 162334 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162334/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9
baseline version:
 ovmf                 fdf3666f01a2dd02d83a808f609b9c744a74c652

Last test of basis   162326  2021-06-01 11:41:13 Z    1 days
Testing same since   162334  2021-06-02 07:41:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marcin Wojtas <mw@semihalf.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   fdf3666f01..b233eb1849  b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 13:51:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 13:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136045.252432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loRHA-0000Um-1r; Wed, 02 Jun 2021 13:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136045.252432; Wed, 02 Jun 2021 13:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loRH9-0000Uf-Uw; Wed, 02 Jun 2021 13:51:15 +0000
Received: by outflank-mailman (input) for mailman id 136045;
 Wed, 02 Jun 2021 13:51:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loRH8-0000UV-Tg; Wed, 02 Jun 2021 13:51:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loRH8-00085x-OF; Wed, 02 Jun 2021 13:51:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loRH8-00027s-DZ; Wed, 02 Jun 2021 13:51:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loRH8-0005l5-D2; Wed, 02 Jun 2021 13:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DfhAuV38iy89kVJGAcPPhhY9Bq4EVFmTPl6HosJBPAs=; b=Bhvpf2eFlj9P6HYjZTA3kHG30k
	ynz21IJongVH/v1gfNKF+rTnSLrFHrTM+XGIwz8909RhTA+MLYxp6g2FeGp+YAOUyYF2iVUfI8uP5
	EDlTY3UmnZT2gNCQcWX3w2mIIHWkQCE/qtW8j01xdxA0fmg2BSa+Q6xGPnBut7T+brXA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162331: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=52848929b70dcf92a68aedcfd90207be81ba3274
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 13:51:14 +0000

flight 162331 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162331/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 162328 pass in 162331
 test-armhf-armhf-libvirt-raw  8 xen-boot                   fail pass in 162328

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2 fail in 162328 blocked in 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 162328 like 152631
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 162328 never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                52848929b70dcf92a68aedcfd90207be81ba3274
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  286 days
Failing since        152659  2020-08-21 14:07:39 Z  284 days  527 attempts
Testing same since   162270  2021-05-31 03:30:37 Z    2 days    5 attempts

------------------------------------------------------------
519 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164273 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 14:37:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 14:37:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136055.252446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loS0A-0005Qs-KM; Wed, 02 Jun 2021 14:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136055.252446; Wed, 02 Jun 2021 14:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loS0A-0005Ql-HF; Wed, 02 Jun 2021 14:37:46 +0000
Received: by outflank-mailman (input) for mailman id 136055;
 Wed, 02 Jun 2021 14:37:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bHXv=K4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1loS09-0005Qf-84
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 14:37:45 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06467280-956e-4f16-8cbf-a24b7b021e5b;
 Wed, 02 Jun 2021 14:37:44 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2051.outbound.protection.outlook.com [104.47.5.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-33-cC4GcNaMMY2gmWxJIxX9bw-1; Wed, 02 Jun 2021 16:37:41 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB2701.eurprd04.prod.outlook.com (2603:10a6:800:af::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.27; Wed, 2 Jun
 2021 14:37:39 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 14:37:39 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3PR09CA0019.eurprd09.prod.outlook.com (2603:10a6:102:b7::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Wed, 2 Jun 2021 14:37:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06467280-956e-4f16-8cbf-a24b7b021e5b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622644663;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Op0EG6H/pxKguEBh5YGLdaL85pglCTLCDsjf1WPZfrM=;
	b=SyRck841TFi/g5Y9WVTgkrr95YD98mwZUzRbQFZIgNfyAR/Q/aFMtGFzzhw92hS4K0PJAW
	Aomu2t2NCtm0Kh/CksNI/tnMFZhNO/DQRIej4O0Ea1DUSf1Eimpi4bpb+uXwrF5VJttRYC
	gw3O9iTFwYZXvxJTRWP4bkuJSlsE3wI=
X-MC-Unique: cC4GcNaMMY2gmWxJIxX9bw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CTBBdDpkZrS2flw96OEK4mFacDj/zLTCvdz+nt9j4x5cQVJHim3Ua8Zz9nQi5RdHzmiGHmglolySjRIH9K4dShB7rCpg+Fg9Etyr/3hiRJkWdDSgQop5xhs3FvEGsUTd0xFwNrOGJMJSeUejV5HXmd1eNkkN84AWiMOdzKMgOLgHVmVGdLxc5agSkffbaTJei0zscli0/HsNdLjk7CkOBsqVo7a4X8z/eWYS1Fyct28rsYy3xH/6tHhq2KYc4bo4rFj01VlP5SGtzxsc2rz/Oaxzij4iDNgPHp7gT4quWHiRtYOqtAkPazdalHevDO5ZYVOh/aeNDwWn6yyfZ0E21A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Op0EG6H/pxKguEBh5YGLdaL85pglCTLCDsjf1WPZfrM=;
 b=MPQt4hjwkmgPcc/6CZPI+TrpzGB6SZY21pXCKZEDn+sub6J8BhLu3lh6Bd7Q2LCbzVMIv37Y4OeodxhfetbVBux77yNRCd2HlZQ5psoUIAK5Ox4vjG/ZYPpO9GFW7P9yti1m6vrQdg47L8MneSqyDivZJ0ECGQU2/EG4CC+ozc9gScndsyS2ra+FSXo//XmFm5Ev21SE4fPsavOwPP4Tm5+S3FIoLndHO+8e7kmfs+UcZwd26DhVJwgn8LtVtcQiFMSSTohkJ3Qnoz1ZkZAFMoWoAZzhjEE1fda2moHIiB1MRX/rBBNnTfc3ie96gN1oKVmH02VRSR1PK4KxzYNxuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: avoid using _PRE_EFLAGS() in a few cases
Message-ID: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
Date: Wed, 2 Jun 2021 16:37:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3PR09CA0019.eurprd09.prod.outlook.com
 (2603:10a6:102:b7::24) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d2f47e68-70bc-4d66-68a9-08d925d3f8e5
X-MS-TrafficTypeDiagnostic: VI1PR0402MB2701:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB270132CB2EDCD2F60B38997EB33D9@VI1PR0402MB2701.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+4qNND/S+0XErO6cIM67XmqGRQrrQQ0jkde87iYd8OzOmi6/uyBu7s+Wfq5HdOiJoyYflDy1KRRsVjAjWzq21881l9fBZDPvjsSMpGJ4hXIHgCpIq8eJTNNE0qOvAHrDcenm6rqjNvChlRPSrnGzrK8DR66YJY/H6Dh5+LgXgr9oK+NPKpD9aIoI2HPOg6rPZUmd9r+7IYliJ65vsz+UFq3K7mzJuMbzucapoR6XE/BFRxLbst4Is5P/lNKAziTXq0WoeiYJsKVvgg5QOh7DHFiaFY0pRAAKNmh+KuEiBTrHDKgJZAu0WW7u8O53kmXoM4FV7GzvemQ8683EdYVWP0r0iD1906l5+a1lA/1LDxdDYMOOWsWRstBvHe+iMdOsENX6iOF4XExJCz1IMaGL3Mp6/bSRKSpDbvIrRVrKnA5/c1w3Sf1IXOpIGCDlJn+yFrrK7Kd+28yUDK5P3zxkHAP0a4aIU+4pR4hKlOUCQEGlaQnkUpkbPHRWMP0A45ukWIq0AFnP+EwltxF5gHidska11opkRCcXP9jRzAvM9fuNvf3OtNeHs5PLz9ErFt6RuhW5agS21UGgrCWvQk6Tcv4rVTeA357LpL4/gsEyJ1g/bDzRXWY+DbRrOGxPj0o5GeqeMkTDRUDg+LKSZZtvp4NB6pLXJilovnW0ip8LseyPS0K9NzLGId5+Vw6B1/MU
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(136003)(376002)(396003)(39860400002)(346002)(26005)(36756003)(66476007)(5660300002)(31686004)(8676002)(86362001)(38100700002)(83380400001)(66556008)(316002)(16576012)(8936002)(31696002)(478600001)(6916009)(66946007)(4326008)(6486002)(186003)(16526019)(2906002)(2616005)(54906003)(956004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?QURpNWlvWS8xYlNYR2tFMnBycWVHVTAvYm1MT3d3ZWlwM2JwZnNSN1Q2VzhH?=
 =?utf-8?B?enl3dHI3WEVPY2NWWHhlSFZqUzBGY1BUTVgrdGFobkE0NTNyQy9MclB5M2hP?=
 =?utf-8?B?OVE0aURVVlM4UWRpZ3JkR2wrMEhsc2hOS01YZm81T3V1cHUyTElSLzg4R29G?=
 =?utf-8?B?TDdxUDd1c0JsOXpveFYrb0g2NTZFUXNZelBQdXVON1cvZmdXUEdzNkhYc2VH?=
 =?utf-8?B?M3NpZmdydHM0c0Q2ZFZkNjg0dS9HYnhKTjFUMDhSeE4zLzFOMXo4SWtFVHkx?=
 =?utf-8?B?WmtRS3FUUFk0cjUvOG9DQ25FcGM1ckg4MEs5T2szZFY2TnpaTi9MS00zZkZK?=
 =?utf-8?B?WlVoYkE5NGNFWTFBcEdCak81bjUyQ2YzTEZaQzh5YWZwVzVzT08xcUxRMW9v?=
 =?utf-8?B?QkFOOWNYVm9LYUppdTFJU1cxV2wvZzZYSmtIWkVsQ0Y2alZHbXJCM0hFME40?=
 =?utf-8?B?dDhFNERJM3E2NUxLS1VIL2V0QXVGWlJVZmtyZFBLUmlMSVVXQUFWTGc4OCt4?=
 =?utf-8?B?V2svV0lZdHB2eDFjMS81UkJDM2cya2IvVUpCUENadC9DT1g0TDJ2L090djhF?=
 =?utf-8?B?Uk1hVm1BYUllamhsSWpZcEp0T3pJcTBYVEFXUTZ1V2lVUWVvV1FodlV4aUxl?=
 =?utf-8?B?NWhwRENCMGJVVUxjS3IySTBOOENMd1lBNUprT2dQN2QrMnozMVkySmhUVGl6?=
 =?utf-8?B?TXZTK3grQnVGZnkrMlVMR0NPOFpoRHRSS0pwb0Vwb1NKcmZOc0RoSmlQMnZv?=
 =?utf-8?B?M1VtOVVLV3QyQm1VRXZhM1htTU9WdGJnU2VDMlRjbmVzOGVIc1BLelJ5MmFJ?=
 =?utf-8?B?WTBZOU9vb1FSU2R5NEhKV2cwRnJ4THhpMTI3N1REOXE0RE9qelBRNVgxbm50?=
 =?utf-8?B?QnBEZ2xFVFdTcDV2ek5yS2lzcUxFd3E5SkxGRlJIcTRHN0pyMFFnVURaaHU0?=
 =?utf-8?B?RGRjNitINVhMU1NhaklFT0RoM1kzMkxVS3ZTbUxaNmpoc21kZi9iYi8xaW0z?=
 =?utf-8?B?M01Yc1pjaXJhSnN6dkFTalFNN3g3cWp6MTU5cnZDN0dVL0JzOWZyZEhXaUdV?=
 =?utf-8?B?dmFjVmZkK1F0UG5saGhPRCtoZWZhc2pVTFFaUFMvcmQ1UGkvY1BFR1B5U3o1?=
 =?utf-8?B?QkdJcnZlMGNmcjZHemVQYUhEbzY3NHVza0dySEJGL25VcFpPNTIrdG56R2Jj?=
 =?utf-8?B?L01CaTI5RWRwc0JETmp6R3lmUkRiSUlZaWZVRVAwRVgya3BwTlc2VzE0L0Y3?=
 =?utf-8?B?UDM5UzdEQzQ5cVhWdWZUU1FMbkQvS0JlUC94Y2cvSTRIS1k2eGhWcDg0YkYv?=
 =?utf-8?B?OG9VUWorR1VxQXI2cFRpSDc2ZkFWRWp1VXZ1OUwzSmREcWdzSE1iQ25pWkVt?=
 =?utf-8?B?ZkVtRllnTktPWGV5TmoxakZ3QlNlTGpDaURmcUZHcGRGRTE1Wk02WmZ3NDJT?=
 =?utf-8?B?Y05NZ3MzQis0V3VZWEFyZCs5Y0ZWMG5QczZ6ZXFsWHNnUHJuOHpZQ0E3cU1y?=
 =?utf-8?B?Q2JyWjB1SFNNWFdNVmE1Nk94NEh6R1lHY2lHL0RsOWt0VTZaeGZiMUtjdmww?=
 =?utf-8?B?ZFJaRVk5ZmNJRmpJYzRuQk9pWk1ZcEVGSFA0NkxkbHEybDkzMWl6UGF2eTNj?=
 =?utf-8?B?VjJCU2E1eVJuRzlpbm81Uy9ycHA3UmdBMC8va05VRXhSYlY1UFljWnMvQzJl?=
 =?utf-8?B?K3pCOVJuM2Nnb1Q0RWE2ZWV4cjlhbDBKRHoyWnFNSmFoTnMydXc0eTRZL3FO?=
 =?utf-8?Q?ZQs9eqLU8SkzLn+ynqXcV/N7WX8xLnSbBM8Ozvo?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d2f47e68-70bc-4d66-68a9-08d925d3f8e5
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 14:37:39.5805
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EFviBlCCctwev3AJdKit4Yj1+uQ2sA7ELIEbTuVht9RTYZBP9ADZRpecepiVO9TidFmnMgr54WsE++5GsSFAGA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB2701

As the macro expands to quite a few insns, replace its use by simply
clearing the status flags when the to-be-executed insn doesn't depend
on their initial state, in cases where this is easily possible. (There
are more cases where the uses are hidden inside macros, and where some
of the users of those macros want guest flags put in place before
running the insn, i.e. those macros can't be updated as easily.)

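Conceptually, when the stub insn fully (re)defines the arithmetic status
flags, loading the guest's prior flag values first (_PRE_EFLAGS) is
redundant: pre-clearing the status bits and OR-ing in the insn's results
afterwards (_POST_EFLAGS) yields the same final guest flags. A minimal
sketch of that equivalence; merge_flags is a hypothetical helper, and the
0x8d5 mask value is one reading of EFLAGS_MASK (OF|SF|ZF|AF|PF|CF), not
quoted from the patch itself:

```c
#include <stdint.h>

/* Assumed arithmetic status flag mask: OF|SF|ZF|AF|PF|CF. */
#define EFLAGS_MASK 0x8d5u

/*
 * Hypothetical illustration only: since the emulated insn overwrites all
 * status flags, their prior values are irrelevant. Clearing them up front
 * (as the patch's "_regs.eflags &= ~EFLAGS_MASK" does) and merging in the
 * flags produced by the stub gives the same result as first installing
 * the guest flags via _PRE_EFLAGS.
 */
static uint32_t merge_flags(uint32_t guest_eflags, uint32_t stub_flags)
{
    guest_eflags &= ~EFLAGS_MASK;                 /* pre-clear status bits */
    return guest_eflags | (stub_flags & EFLAGS_MASK); /* _POST_EFLAGS merge */
}
```

Non-status bits (e.g. IF) pass through untouched, which is why only the
status-flag portion of EFLAGS needs clearing.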
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6863,7 +6863,8 @@ x86_emulate(
         }
         opc[2] = 0xc3;
 
-        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
+        _regs.eflags &= ~EFLAGS_MASK;
+        invoke_stub("",
                     _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
                     [eflags] "+g" (_regs.eflags),
                     [tmp] "=&r" (dummy), "+m" (*mmvalp)
@@ -8111,7 +8112,8 @@ x86_emulate(
         opc[2] = 0xc3;
 
         copy_VEX(opc, vex);
-        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
+        _regs.eflags &= ~EFLAGS_MASK;
+        invoke_stub("",
                     _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
                     [eflags] "+g" (_regs.eflags),
                     "=a" (dst.val), [tmp] "=&r" (dummy)
@@ -11698,13 +11700,14 @@ int x86_emul_rmw(
         break;
 
     case rmw_xadd:
+        *eflags &= ~EFLAGS_MASK;
         switch ( state->op_bytes )
         {
             unsigned long dummy;
 
 #define XADD(sz, cst, mod) \
         case sz: \
-            asm ( _PRE_EFLAGS("[efl]", "[msk]", "[tmp]") \
+            asm ( "" \
                   COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
                   _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
                   : [reg] "+" #cst (state->ea.val), \



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 14:38:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 14:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136058.252457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loS0e-0005tH-VU; Wed, 02 Jun 2021 14:38:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136058.252457; Wed, 02 Jun 2021 14:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loS0e-0005tA-RR; Wed, 02 Jun 2021 14:38:16 +0000
Received: by outflank-mailman (input) for mailman id 136058;
 Wed, 02 Jun 2021 14:38:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bHXv=K4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1loS0e-0005sy-1a
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 14:38:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71181ff6-5a02-4ecc-8ddb-64c302e781e0;
 Wed, 02 Jun 2021 14:38:14 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-14-fj4UwSf2Mm-DcYEsuE4KTw-1; Wed, 02 Jun 2021 16:38:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3389.eurprd04.prod.outlook.com (2603:10a6:803:b::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.21; Wed, 2 Jun
 2021 14:38:10 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 14:38:10 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3PR09CA0027.eurprd09.prod.outlook.com (2603:10a6:102:b7::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.26 via Frontend Transport; Wed, 2 Jun 2021 14:38:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71181ff6-5a02-4ecc-8ddb-64c302e781e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622644693;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NwFfkvN5qesznExZCGsK7iQb4IOiNi91GWuoqxon3zs=;
	b=gT4qi+QA0NZDOeZoN/hT1L9GVEq7Oc5Q4kNQ49xKAeUszTKA8TZEkDS9HQye/GkkFBQ6CS
	qZ5mQS8Qeo+R3I1eFZg5q0MlD1fNwEQ8LQHTJr/FhiBsDLpDwRHjnT44o4ZDJj9hTSSx5x
	XIKI4ZSsn1nDhmX40FDOu1wxZP/U03k=
X-MC-Unique: fj4UwSf2Mm-DcYEsuE4KTw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YSDGrNyZP2yRCRNUCaB/sV11E3j2ANDj7pv529Lmn9WSEATdpDWrogMbScch3fCElReMwBpVlLK7wwGYhVrbVJKVem5tvE6ZgwpqBPCqAYra8S47GSW20l+xRwsa9cVjeagY5926ZOUwSAWbfil4uUO0f5WpoiCfGTZz540dIcyMNs/7563fznh9HtVPGzy8vPa5mIQ5LKBsBgwXsT00M4Qsb2XDKPyHyKRNfMOS6qN49dDuy92wIOExEnvEcXRMoIh4rxoCE7tRWMhtLDH53qbTvcK7oUjw7v3aae6ZikdNu7j/HEw+ZYyNfqHGpM5HJggyiPl2ztsM5WkRwfOOTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NwFfkvN5qesznExZCGsK7iQb4IOiNi91GWuoqxon3zs=;
 b=GXJr5kgLroSZ5wGR1x0wIfKMNnH7aod3+77Ot/CKHFi8YMcb+vHuW39l7xyC6JCwJl13B15rpQb3PZs+lYvV7d9N0lLXuisqcuTQY4uz+qBXhuvveJHbdZR+X3D/Yk9rI2fd750mPLFjKBtiHS3uBif3Kph6KUu/JAX/RKf67JIqU7sMT/dh45LpP6FH7QFX5qLlBcTLWuecQJ2KHDhhrK80P7uZO1gZnUOAGIZAqGfJP1+L53rei5Yq5st6zS8rRzFPVAv0/bUfZGOREn5Ypxj5cZGUDhVWEx0pYtWoSuWGV152pwlYu6zJIV0ctSG2zwuCIrG506caa5MBHqwZvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: pad blob-execution "okay" messages
Message-ID: <3250a871-e49d-d3c4-333a-eff435e092c2@suse.com>
Date: Wed, 2 Jun 2021 16:38:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3PR09CA0027.eurprd09.prod.outlook.com
 (2603:10a6:102:b7::32) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5f1df024-eb5c-41e8-9f64-08d925d40b11
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3389:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB338977F05DADC47292EEDFC6B33D9@VI1PR0402MB3389.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:331;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f1df024-eb5c-41e8-9f64-08d925d40b11
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 14:38:10.7389
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3ohapvO7SF/xZdV4WVrYjNIe5U3ucFkyB4Ex0KvgyJnvWpdHr/iiTdiuk1phcEQ1KiGh9DiMXewppcE6dvTlEw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3389

We already do so in the native execution case, and a few descriptions (I
noticed this with the SHA ones) are short enough for the unpadded output
to look slightly odd.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Many descriptions are longer than 37 characters, so I wonder whether we
wouldn't want to bump the padding to 50, 60, or even 70, perhaps even at
the expense of going out of sync with the individual insn tests earlier
on (which I wouldn't want to touch).

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -5181,6 +5181,8 @@ int main(int argc, char **argv)
 
     for ( j = 0; j < ARRAY_SIZE(blobs); j++ )
     {
+        unsigned int nr;
+
         if ( blobs[j].check_cpu && !blobs[j].check_cpu() )
             continue;
 
@@ -5196,7 +5198,8 @@ int main(int argc, char **argv)
 
         if ( ctxt.addr_size == sizeof(void *) * CHAR_BIT )
         {
-            i = printf("Testing %s native execution...", blobs[j].name);
+            nr = printf("Testing %s native execution...", blobs[j].name);
+
             if ( blobs[j].set_regs )
                 blobs[j].set_regs(&regs);
             asm volatile (
@@ -5212,11 +5215,13 @@ int main(int argc, char **argv)
             );
             if ( !blobs[j].check_regs(&regs) )
                 goto fail;
-            printf("%*sokay\n", i < 40 ? 40 - i : 0, "");
+
+            printf("%*sokay\n", nr < 40 ? 40 - nr : 0, "");
         }
 
-        printf("Testing %s %u-bit code sequence",
-               blobs[j].name, ctxt.addr_size);
+        nr = printf("Testing %s %u-bit code sequence",
+                    blobs[j].name, ctxt.addr_size);
+
         if ( blobs[j].set_regs )
             blobs[j].set_regs(&regs);
         regs.eip = (unsigned long)res;
@@ -5233,7 +5238,10 @@ int main(int argc, char **argv)
                 regs.eip < (unsigned long)res + blobs[j].size )
         {
             if ( (i++ & 8191) == 0 )
+            {
                 printf(".");
+                ++nr;
+            }
             rc = x86_emulate(&ctxt, &emulops);
             if ( rc != X86EMUL_OKAY )
             {
@@ -5242,13 +5250,18 @@ int main(int argc, char **argv)
                 return 1;
             }
         }
-        for ( ; i < 2 * 8192; i += 8192 )
+        for ( ; i < 2 * 8192; i += 8192 )
+        {
             printf(".");
+            ++nr;
+        }
+
         if ( (regs.eip != 0x12345678) ||
              (regs.esp != ((unsigned long)res + MMAP_SZ)) ||
              !blobs[j].check_regs(&regs) )
             goto fail;
-        printf("okay\n");
+
+        printf("%*sokay\n", nr < 40 ? 40 - nr : 0, "");
     }
 
     return 0;



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:02:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136069.252467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSNX-0000yC-UY; Wed, 02 Jun 2021 15:01:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136069.252467; Wed, 02 Jun 2021 15:01:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSNX-0000y5-RT; Wed, 02 Jun 2021 15:01:55 +0000
Received: by outflank-mailman (input) for mailman id 136069;
 Wed, 02 Jun 2021 15:01:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YErV=K4=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1loSNV-0000xu-PQ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:01:53 +0000
Received: from mail-pf1-x42d.google.com (unknown [2607:f8b0:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ddd2621-d8e2-4a24-a7e6-b5585cf0f7e6;
 Wed, 02 Jun 2021 15:01:53 +0000 (UTC)
Received: by mail-pf1-x42d.google.com with SMTP id t28so2382960pfg.10
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 08:01:53 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 y13sm97917pgp.16.2021.06.02.08.01.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 02 Jun 2021 08:01:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ddd2621-d8e2-4a24-a7e6-b5585cf0f7e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=+az3hl/4w22b3I/rRlTcQ1/sSF9KD2GATy/NbYn2SjU=;
        b=LFh9HsM5N4yAOGzJ85RpIVWyAjUPznHirCf8/j7G4yx59JLH20/Uh4u7wbdu1FOv1R
         BMS/VcyQyPaNyIBLyV79xaZ6CaUNzIjmBxRj8Fe+1khO8KcH3zpgVSaCPqyrS/mPSxUA
         SxH1O0RaYnJym4R1ouc54zGTJI9ko785LbI0cE/6JrbOl++1vJMUiWvQ+YxS/x02SX1A
         GtQ9eZiWdQ81n3fEqhZUgWSVZqHZymyGyts8taigNA7xByYUg2QhaEC1Sqw034dlDQ+n
         EaQjVI9wbITemQxMjxw+B5xkVM7JxSTIPhVvW43gJtVIEw3fB0lKjLuGjHRyUK5Iy7k4
         pXzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=+az3hl/4w22b3I/rRlTcQ1/sSF9KD2GATy/NbYn2SjU=;
        b=CANYxXmLwiZDJ3CvK+FL06c+P1fQ38CmAtBU++onn/lGdP2cgz95BeI+zyxXrQBOis
         3ZGnJz7Aby8HmoNDbhDP23znSabB2qh0p4RMGfDp6Z6h8iEtTkCyxDCpxo6EUaYBvfsk
         ZGhqv47doC3EeIdrvh4X1dunluazeZCexdaY8TbkKGb78KpjM2hDfp43f8wHZjmIDGxS
         8TzDeSCcv+uOxY6VRaDhC8yj8Lm8+ga39VlEwXntZ0+RmxXlM9Bne64OMDKzCTNuGipz
         aK0XJPpl3ZgCAXHo9KiUNIxjMnkzopgR0x653luPmXaFgk7GBrJNCH6eqsfoI9iXVP6e
         gM3Q==
X-Gm-Message-State: AOAM531hMKJuwTWw36qdvCmgqam5dKjbOHOqWzwDBRE8e4S0Hr8tz7Cn
	dXBiDrtzsJE5Vuo5r2unNUc=
X-Google-Smtp-Source: ABdhPJztpeXS5x/Q7Ocwttmq+13ab9HvNAwDx8hEcPtnl/lWI6mAldnzn4lX5IPnQcCsuOUMA5JO9w==
X-Received: by 2002:a62:4e96:0:b029:2ea:2244:5e31 with SMTP id c144-20020a624e960000b02902ea22445e31mr3354423pfb.43.1622646112263;
        Wed, 02 Jun 2021 08:01:52 -0700 (PDT)
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for
 Isolation VM
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, kys@microsoft.com,
 haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
 decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 x86@kernel.org, hpa@zytor.com, arnd@arndb.de, dave.hansen@linux.intel.com,
 luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-10-ltykernel@gmail.com>
 <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com>
Date: Wed, 2 Jun 2021 23:01:36 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Boris:
	Thanks for your review.

On 6/2/2021 9:16 AM, Boris Ostrovsky wrote:
> 
> On 5/30/21 11:06 AM, Tianyu Lan wrote:
>> @@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
>>   EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
>>   
>>   IOMMU_INIT_FINISH(2,
>> -		  NULL,
>> +		  hyperv_swiotlb_detect,
>>   		  pci_xen_swiotlb_init,
>>   		  NULL);
> 
> 
> Could you explain this change?

Hyper-V allocates its own swiotlb bounce buffer, so the default swiotlb
buffer should not be allocated (swiotlb_init(), called from
pci_swiotlb_init(), is what allocates the default buffer). To achieve
this, hyperv_swiotlb_detect() is put as the first entry in the
iommu_table_entry list. The detect loop in pci_iommu_alloc() exits once
hyperv_swiotlb_detect() claims the platform in a Hyper-V VM, so the
other iommu_table_entry callbacks are not called.


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:09:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:09:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136076.252479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUQ-0001oU-N9; Wed, 02 Jun 2021 15:09:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136076.252479; Wed, 02 Jun 2021 15:09:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUQ-0001oN-Jq; Wed, 02 Jun 2021 15:09:02 +0000
Received: by outflank-mailman (input) for mailman id 136076;
 Wed, 02 Jun 2021 15:09:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loSUO-0001oH-NK
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:09:00 +0000
Received: from mail-ot1-x32f.google.com (unknown [2607:f8b0:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 805fb504-e35b-422f-be1e-aac25c62ca78;
 Wed, 02 Jun 2021 15:08:59 +0000 (UTC)
Received: by mail-ot1-x32f.google.com with SMTP id
 i14-20020a9d624e0000b029033683c71999so2704407otk.5
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 08:08:59 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id e29sm25287oiy.53.2021.06.02.08.08.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 08:08:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 805fb504-e35b-422f-be1e-aac25c62ca78
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=gs+XbRVRzJKHyIBh7bExJpJrDXDyPX5HL5ZlmfaOeUU=;
        b=R8Kb8U2IGsyeRYCVCjI4378u+B88OtvkcxLbL4UkwddYYiyiOLMsKAcpHhvHWygmYa
         MM0W7DZ0IcHkiPjXO38TFYFYwZzzi3CKPT+Jj2H7Uop5EleK3NKNklinn40EqNo/4CxM
         81CYvcCU76R6cy5ldimQm32x7FFd0dYejQrU4VOqrbDt5sX7MoSYNAbakbIO0dQiYx1D
         kyXbNP9kIeY+dyAbFmykEtIqiSJwlwII3CuZKnp71fcfHPDV3X19COX7N5OunMHyRriD
         Wgf++g0HUgjs7GnY/fhQWq40WA370Ypl7IIE+sd4ta5otSlIakfb8W9UunmKRa4s+HwN
         pk+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=gs+XbRVRzJKHyIBh7bExJpJrDXDyPX5HL5ZlmfaOeUU=;
        b=YzoYdwNOC66WfiQflnNbzsjm/VzqCnKrdWqUYSZEANGbz6SdxubKfpQeUj2Tzjv3zD
         3RfIIoY5Gtl5FOEFHjsDs1PBB8XY5eEszcX7MofgI0Kvf5lHvNDj0IJ1mSb4RSVmN9Jx
         i6/znAn0Bev8sX8HyrFrqOAgXcjxFpGxuK3aFSXt2rgVY+ITF8e5tpuTZl/4XwcGiIsd
         CA/COC9XEfs+T64bitpy00wTTC02JkA0nWGRrQgRBlIGvzbdd4ENtXcbkNJUGzfsT3gi
         /JvdqRXR9vrDbjAKjKYm7IFyPNCspEzNgnDC1T679RHQRu7NWWUEglZGICxjS4fvSUu3
         ikGA==
X-Gm-Message-State: AOAM530TXYCqREXk5RWfATfkQeTBzwVaQcY73mVWnaQRnX0k9Ey/T3FB
	15IHXyFGoq9P8kLnXE0ptrq16+rnpk9g/Q==
X-Google-Smtp-Source: ABdhPJzxWCZ/lvb95Q3G8tRhCwxdkn2xfo2HB1obTY/+bDmrn7K9WmIkhs/xE3pIbyc5XYvGTyMG2A==
X-Received: by 2002:a05:6830:10b:: with SMTP id i11mr2860169otp.240.1622646539028;
        Wed, 02 Jun 2021 08:08:59 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/2] Minimal build for RISCV
Date: Wed,  2 Jun 2021 09:08:26 -0600
Message-Id: <cover.1622645816.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0] rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it only supports
building TARGET=riscv64/head.o. arch/riscv/riscv64/head.S itself is
just a simple while(1) loop.

The first patch modifies non-RISCV bits to enable building a config
with !CONFIG_HAS_NS16550.

The second patch adds the make/Kconfig boilerplate alongside head.S and
asm-riscv/config.h (head.S references ENTRY, which is defined in
asm-riscv/config.h).

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v4:
  - Dropped patches 2 and 4 as these have been applied
  - Moved arch/riscv/head.S to arch/riscv/riscv64/head.S for consistency
    with ARM.
  - Added Bob and myself to MAINTAINERS

Changes since v3:
  - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
    applied by Jan
  - Adjusted Kconfig condition for building NS16550
  - Use bool rather than bool_t
  - Removed riscv memory map, as this should probably be done later once
    the frametable size is figured out
  - Consolidated 64-bit #defines in asm-riscv/config.h
  - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
    and CONFIG_DEBUG_INFO
  - Fixed logic/alignment/whitespace issues in Kconfig files
  - Use upstream archlinux riscv64 cross-compiler packages instead of
    custom built toolchain in docker container

Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE

--
Connor Davis (2):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen: Add files needed for minimal riscv build

 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/drivers/char/Kconfig                |  1 +
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 13 files changed, 149 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:09:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:09:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136077.252490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUV-00028N-3U; Wed, 02 Jun 2021 15:09:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136077.252490; Wed, 02 Jun 2021 15:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUV-00028G-0O; Wed, 02 Jun 2021 15:09:07 +0000
Received: by outflank-mailman (input) for mailman id 136077;
 Wed, 02 Jun 2021 15:09:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loSUT-0001oH-MZ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:09:05 +0000
Received: from mail-ot1-x329.google.com (unknown [2607:f8b0:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02ad2af5-521b-4054-bc34-f33394e27e80;
 Wed, 02 Jun 2021 15:09:02 +0000 (UTC)
Received: by mail-ot1-x329.google.com with SMTP id
 5-20020a9d01050000b02903c700c45721so1642238otu.6
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 08:09:02 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id e29sm25287oiy.53.2021.06.02.08.09.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 08:09:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02ad2af5-521b-4054-bc34-f33394e27e80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=UpOeaMEF9ZtjJDEC63CsaCASXXXpK+CjdZ0eTjBgKHU=;
        b=WluB/sS6QKDAdVmEC8aMbwfgJHfOnvp+lHftJ84SREIvF8lL4iu030ifMdB4f/lEoE
         dJ/8KHPWiTnVJTk7oPqFzRTXlrqhp9zJ0SPtkuzxSdOj4/2dOOkXqsCRlfzaftoVR+BL
         4Wn7ifq6m1d+hldFgMHMvmlHO3Ph01sc+CF11TqYqCicxsK6OqlMk3G0IrbZFAGsun7y
         CchjplKJYLefzisuOlJFb4eWTBAsWssxC+/jdJurq8c2gv+wH54pIOg/TBoDlu+cG5Zn
         N1tj9NBdSDlMvDPO3ImJpb2FlABGSJl/jd2/59in3SO3kmtVT168x/PIHpRBAfsV36qh
         aYPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=UpOeaMEF9ZtjJDEC63CsaCASXXXpK+CjdZ0eTjBgKHU=;
        b=ip36EpCn4S1T/V4CZ4KSa0ItvhgSPYg51Uq4x3dVjovPHAOnAyBkvFUwV385emAnNN
         PvBL6k/bN0TDtb+haAr1EUfS6rRqZ06YpmG6UviZEtwj20e4E475h6ZaDzV//TCIE4yP
         vlb2oNb2WuZh+RsAEraCPNKRxCEBEHHwrf5LCeQxdnNeotS5gy3vhLBHZFjpO35erNyq
         scqwa+yb5hLXQnhBxRUSi//rICLwDs4uyq9K0P4TMyiEF057BvLzOgBkQGaLrk5ug4dB
         TdAzpk7tDq3mrD/jLowQbXmaBfHnnRcXRVkswwMSrC/sTnPwdihFFk+LXcAgx2DFROS3
         JcLg==
X-Gm-Message-State: AOAM530sp/23NekOeYykU7bnfixRiTV/tlzaEfc78M1Jk9U+I5xoLHcq
	9u2ODmcl3e5X73P0PCMX+HhC2ExtDjaHxA==
X-Google-Smtp-Source: ABdhPJwQ+WXaQFw7ZYh1xXrbG0Cfz68tGyEPXixc1hcXDRbGyg+YPQFZpit6zSv+QZpYBNL16XrjhQ==
X-Received: by 2002:a05:6830:192:: with SMTP id q18mr27810801ota.79.1622646541846;
        Wed, 02 Jun 2021 08:09:01 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 1/2] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Wed,  2 Jun 2021 09:08:27 -0600
Message-Id: <2c24cadace47e51e9e3fce6614e0f5e83db6c3af.1622645816.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622645816.git.connojdavis@gmail.com>
References: <cover.1622645816.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/drivers/char/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..2ff5b288e2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,5 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
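
For context, a quick note on the Kconfig semantics the change relies on
(a sketch, not part of the patch): Kconfig honors the first `default`
whose condition evaluates to true, so on RISCV the new `default n if
RISCV` line takes effect and the unconditional `default y` only applies
to the remaining architectures:

```
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	default n if RISCV	# first matching default wins
	default y		# all other architectures
```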
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:09:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136078.252501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUa-0002Sw-CK; Wed, 02 Jun 2021 15:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136078.252501; Wed, 02 Jun 2021 15:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSUa-0002Sl-8t; Wed, 02 Jun 2021 15:09:12 +0000
Received: by outflank-mailman (input) for mailman id 136078;
 Wed, 02 Jun 2021 15:09:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loSUY-0001oH-Mt
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:09:10 +0000
Received: from mail-ot1-x32c.google.com (unknown [2607:f8b0:4864:20::32c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebce3ce2-2b4a-473e-9997-11237b2b5212;
 Wed, 02 Jun 2021 15:09:04 +0000 (UTC)
Received: by mail-ot1-x32c.google.com with SMTP id
 h19-20020a9d6f930000b02903c0c4560e99so2729671otq.1
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 08:09:04 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id e29sm25287oiy.53.2021.06.02.08.09.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 08:09:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebce3ce2-2b4a-473e-9997-11237b2b5212
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=dFj4sTASo4fYEOIPGgyNyZzFb60dHRKTCym915IFPJc7+6zqJF8n3CWo3rb4FVdZYd
         x7vut7ldrlhsi/e/BBlPObmdUfoI17UvEcAil+i/VqulcmPuP2LNRSZ4JFAv4Asbqp1k
         pmfCenfqHaQduIS2MmVPytOJhTha9EmBDtFxv49gvGKQjCqRhdIlxqK6u6hw2eh5lfdd
         zzmPd0//+oPcb6TnolQn3PZw+kBH91RojWoo2omF1XGtCEfuJ3PeZ3Fzc98ev8j9ec1T
         I65oyujx7vmPQNSYqkyvROXL84kTdgnS0oHejFeutIJEldZyX4gDpwH71xs1UB7oEe7n
         urGg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=T4WuUgSnXrRdgHB3SEw2ps3xIgmtPDB9KFttrbT8aGFMXIU4N0PwStO3p50ZXXDvyZ
         m/e1bycJbbf2b5e3ufoNUQwyubMb33O/LYzVL+PnMYvrUqRG2RkKML9QutfCeSq8rA9U
         Xe8JHuFBV5S2iEPiEtXAiTP2GsHM0LwilQVQ0+xMyIPSSDeQVfV0ZHcp9TolXdyDMp4E
         /1RlsUAsC9rkZkY0isamzcDQJjbC3kEwjyfUTz28DdLzCDfQFmd0/GAFYbljpJai2vuD
         xDt/O5TLPtGT8E/cTDZfOGbfEkZUYbGIKIYXWqAB1nZCcuRfqEZgdeS2NIgGQqDBo7ST
         gHsg==
X-Gm-Message-State: AOAM533JQzgokL9nuSQoPc8C00lC89KCyqBIVpsOHSpQiy11i+ya8t6x
	RkNa15ketgKsuYjOVPHQNq8cLAWX8Kovow==
X-Google-Smtp-Source: ABdhPJyzas22Hcmg5hLVbfL0rGAEcBh7pHsLWfn/TD//YITzbDc5sZbEfOvQW1B1p12JeK4xaRHG2w==
X-Received: by 2002:a9d:6756:: with SMTP id w22mr14209050otm.369.1622646544071;
        Wed, 02 Jun 2021 08:09:04 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 2/2] xen: Add files needed for minimal riscv build
Date: Wed,  2 Jun 2021 09:08:28 -0600
Message-Id: <97ae93649f0a4366d6957f3b56d43a522ac21685.1622645816.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622645816.git.connojdavis@gmail.com>
References: <cover.1622645816.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add arch-specific makefiles and configs needed to build for
riscv. Also add a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o

No other TARGET is supported at the moment.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
Bob: I moved back to XEN_TARGET_ARCH=riscv64 because supplying
just XEN_TARGET_ARCH=riscv makes TARGET_ARCH == TARGET_SUBARCH, and
that broke the build after the recent commit b6ecd5c8bc
("build: centralize / unify asm-offsets generation"). A bare riscv
also deviates from how x86 and arm work now, so I think this change
is best for now. That commit is also why the PHONY include target is
added in riscv/Makefile.
---
 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 12 files changed, 148 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..956e71220d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -456,6 +456,14 @@ F:	tools/libs/light/libxl_nonetbuffer.c
 F:	tools/hotplug/Linux/remus-netbuf-setup
 F:	tools/hotplug/Linux/block-drbd-probe
 
+RISCV
+M:	Bob Eshleman <bobbyeshleman@gmail.com>
+R:	Connor Davis <connojdavis@gmail.com>
+S:	Supported
+F:	config/riscv64.mk
+F:	xen/arch/riscv/
+F:	xen/include/asm-riscv/
+
 RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..a5a21e5fa2
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,5 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 7ce7692354..89879fad4c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+              -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+                                -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..bd8381c5e0
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,47 @@
+config RISCV
+	def_bool y
+
+config RISCV_64
+	def_bool y
+	select 64BIT
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/tiny64_defconfig"
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA if RISCV_64
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed to
+	  emit when building Xen, which results in compressed instructions
+	  in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..942e4ffbc1
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,2 @@
+.PHONY: include
+include:
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..53dadb8975
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,14 @@
+########################################
+# RISCV-specific definitions
+
+CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
new file mode 100644
index 0000000000..3c9a2ff941
--- /dev/null
+++ b/xen/arch/riscv/configs/tiny64_defconfig
@@ -0,0 +1,13 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_RISCV_64=y
+CONFIG_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/riscv64/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..e2ae21de61
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,47 @@
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+# define MAX_VIRT_CPUS 128u
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+#define CONFIG_PAGEALLOC_MAX_ORDER  18
+#define CONFIG_DOMU_MAX_ORDER       9
+#define CONFIG_HWDOM_MAX_ORDER      10
+
+#define OPT_CONSOLE_STR "dtuart"
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136100.252512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSda-0004jq-BR; Wed, 02 Jun 2021 15:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136100.252512; Wed, 02 Jun 2021 15:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSda-0004jj-7q; Wed, 02 Jun 2021 15:18:30 +0000
Received: by outflank-mailman (input) for mailman id 136100;
 Wed, 02 Jun 2021 15:18:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdY-0004jV-OD
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:28 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd62d5c4-a253-498e-8931-a0c4ea511d2a;
 Wed, 02 Jun 2021 15:18:27 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DBF7D1FD7C;
 Wed,  2 Jun 2021 15:18:26 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id D75C611CFF; Wed,  2 Jun 2021 15:28:32 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id D3B5711E04;
 Wed,  2 Jun 2021 12:09:05 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id fb+eMeF0t2CLDwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 12:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd62d5c4-a253-498e-8931-a0c4ea511d2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647106; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CkKo+yzVGk8lM2cESfddW3qeWJPQwIy6aUzSH9Y2z7U=;
	b=gs/UPcvlE9iqCShMwnhP3HOqy2w39Ncv+ztuvTT03hkmVfLtUyOf2E29mNRxsYosULtcvP
	BCcL0xOpiKZ/eIoALz29vWBB5vnPtD1eFSUV/3m21U+9+lHsI/jJ+RIEGtKgEjyUBkvVff
	fz+fQRabkEMh5ZGd8CjtF2oDqFPEbbk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622635746; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CkKo+yzVGk8lM2cESfddW3qeWJPQwIy6aUzSH9Y2z7U=;
	b=JXtMKXRsAJ0k3fJE/rumTkKd7hzNtbf5WBNYLEgt31PVX0mW31YkYzX5dTGys/kA8MSCXf
	MTbnQeoKme4/4EOegTIH26a/fNnM5e6P5sZ7KPGaHjapghWQkZoEpPj/CTf4j6DchCTi+t
	hS+6qTVroYHwJnVI195gh2Ak29vTKN0=
Subject: Re: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays
 once
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-10-olaf@aepfle.de>
 <531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
 <20210602140305.39eb417a.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <b13fd3d4-452a-85fe-a5f2-d1b356240368@suse.com>
Date: Wed, 2 Jun 2021 14:09:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602140305.39eb417a.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xQnFj3m4T8DHdlYxNoMyCYPGj5P711fT7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xQnFj3m4T8DHdlYxNoMyCYPGj5P711fT7
Content-Type: multipart/mixed; boundary="wfXRgoU485kj4IBWXcwXSIEQd0GPrusRg";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <b13fd3d4-452a-85fe-a5f2-d1b356240368@suse.com>
Subject: Re: [PATCH v20210601 09/38] tools/guest: prepare to allocate arrays
 once
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-10-olaf@aepfle.de>
 <531fe9c5-aa7f-be99-5d78-85d817139740@suse.com>
 <20210602140305.39eb417a.olaf@aepfle.de>
In-Reply-To: <20210602140305.39eb417a.olaf@aepfle.de>

--wfXRgoU485kj4IBWXcwXSIEQd0GPrusRg
Content-Type: multipart/mixed;
 boundary="------------4865E6143B74DF93E1BDA534"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4865E6143B74DF93E1BDA534
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 14:03, Olaf Hering wrote:
> Am Wed, 2 Jun 2021 09:29:08 +0200
> schrieb Juergen Gross <jgross@suse.com>:
> 
>>> +    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
>>> +    if ( !ctx->restore.m ) {
>>
>> ... this case might trigger without the full series applied, due to
>> allocating zero bytes (same for the save side below).
> 
> Such a bisection point with a libc that returns NULL would just be bad luck.

Sure, but sending a patch which is known to break bisecting is bad
behavior. You could even add a dummy element (with a comment indicating
its purpose) which could be removed when the first "real" structure
element is being added.

> See git-bisect(1), "Avoiding testing a commit", for how to deal with it, in the unlikely case it actually triggers.

It can be avoided, yes, but you first need to search for the reason the
failure occurred, and this debugging effort should be avoided if
possible.


Juergen

--------------4865E6143B74DF93E1BDA534
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4865E6143B74DF93E1BDA534--

--wfXRgoU485kj4IBWXcwXSIEQd0GPrusRg--

--xQnFj3m4T8DHdlYxNoMyCYPGj5P711fT7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3dOEFAwAAAAAACgkQsN6d1ii/Ey8P
WAf+MYi5eNafVyQ3TQEy/2iop4vZ6g4uLM9FVJCg4DmXfgPtRGihTNiBY3MjPuTxUBp4QR1dXE+O
V+gCMhdIiBSty9h216i935comXJwjOuOkvVQK5kukDQplC7MqwhTP9AmVFejMIHVfkJF1mpWHmAq
v6oVLalPOrymBCdp1aiRiRuAmefET/OwpIXm41SRg1JMtdasMapA3O4rEGYx3JAZsYlp1UexJX34
0UR2QJhamPHGEbyop7iT4k3JQjspSU9V/L1czxKGNe2MXJbTU/kYYryNzX8CwADjvsy4wrksQEig
EnaUma2VYpMHF4VJHSNODbcLpOajXnaJ4LrV6pr7Lw==
=HA4b
-----END PGP SIGNATURE-----

--xQnFj3m4T8DHdlYxNoMyCYPGj5P711fT7--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136102.252534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSde-0005IK-Uz; Wed, 02 Jun 2021 15:18:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136102.252534; Wed, 02 Jun 2021 15:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSde-0005ID-Rp; Wed, 02 Jun 2021 15:18:34 +0000
Received: by outflank-mailman (input) for mailman id 136102;
 Wed, 02 Jun 2021 15:18:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdd-0004jV-H3
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:33 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39ac46a7-d29a-4f70-bd33-be027cf46735;
 Wed, 02 Jun 2021 15:18:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C01C221C5F;
 Wed,  2 Jun 2021 15:18:29 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id B2F5111CD7; Wed,  2 Jun 2021 16:20:03 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 07E5611DEB;
 Wed,  2 Jun 2021 11:43:58 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id EbcnAP5ut2CmfQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 11:43:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39ac46a7-d29a-4f70-bd33-be027cf46735
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=87Kt1Qwerb8lFGsxIc5QBItlNhNqATSfokRkRwiOpRM=;
	b=VP96IW46WFnIQkZJLaZgVBzvMEf/AjlAYHJ+E3ku26f1kOCH8uPuDOZmxvi5pYAIxdcbOz
	PQ1dHwNii8GJ33jpXG/lY2k0zMIELPxHV3HlAxOgiJmb63XWWah9MMGiKAL+tAMpwbnwer
	a3++Pa/Luq4ov8rJRORYK2qcZ4cJZDw=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622634238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=87Kt1Qwerb8lFGsxIc5QBItlNhNqATSfokRkRwiOpRM=;
	b=C5upJA/vhU1vlJTcNQM7obcmeDvAuxojfCww+wcax2tvvxQ8dY6heRM0FBA+Soy+RdSquY
	jiBu1ajZiLYciXzXFaMxzX1Na8nFhYzwFE20mYC4SmChisUbbMNkvT4frDIWysdUbvo16J
	ZRq1+aG8yYXywiU7tuzu75DzePWyKXE=
Subject: Re: [PATCH v20210601 02/38] xl: fix description of migrate --debug
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
 <58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
 <20210602124324.1bd02cd7.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <158d6c27-7540-2b67-1104-4c0d99151dfa@suse.com>
Date: Wed, 2 Jun 2021 13:43:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602124324.1bd02cd7.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="1N5wn9JjeVBn5ZBswJ7uuhN1gl1estidX"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--1N5wn9JjeVBn5ZBswJ7uuhN1gl1estidX
Content-Type: multipart/mixed; boundary="FU1vaBaj48XrKc1mbYQxZiM3TzDtnPCy9";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <158d6c27-7540-2b67-1104-4c0d99151dfa@suse.com>
Subject: Re: [PATCH v20210601 02/38] xl: fix description of migrate --debug
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
 <58453bfc-d932-3b46-7ec8-cd883b4c7440@suse.com>
 <20210602124324.1bd02cd7.olaf@aepfle.de>
In-Reply-To: <20210602124324.1bd02cd7.olaf@aepfle.de>

--FU1vaBaj48XrKc1mbYQxZiM3TzDtnPCy9
Content-Type: multipart/mixed;
 boundary="------------A8ACBBBB03892156F3864B29"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------A8ACBBBB03892156F3864B29
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 12:43, Olaf Hering wrote:
> Am Wed, 2 Jun 2021 08:09:00 +0200
> schrieb Juergen Gross <jgross@suse.com>:
> 
>>> -    if ( ctx->save.debug && ctx->stream_type != XC_STREAM_PLAIN )
>>> +    if ( ctx->save.debug )
>> This is no documentation change IMO. You should either mention this
>> modification in the commit message, or put it into a separate patch.
> 
> I think the conclusion was that this branch is dead code because
> only the stream_type == XC_STREAM_PLAIN code branch sets debug.
> 
> 
> So far there is a decision to be made about this code branch:
> - document it
> - remove it
> - fix it

I don't mind either way. I just pointed out an inconsistency between
patch title and content. When searching for a patch causing a regression
I usually skip documentation patches.


Juergen


--------------A8ACBBBB03892156F3864B29--

--FU1vaBaj48XrKc1mbYQxZiM3TzDtnPCy9--

--1N5wn9JjeVBn5ZBswJ7uuhN1gl1estidX
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3bv0FAwAAAAAACgkQsN6d1ii/Ey9k
NAf+Iaf2MqF3zPsQYcHPdtBOQNzftjLqQKTWs47h2kz+rWh0heykOcb/MJSo4n3Sm93o/C0q7IHi
hCgPxKfCBrf86rkSfRz9mBwDzr7qUsCbxaOInuAkpGJD9Rbp/eWyUxx54mvLYG9r48mWS7K59mnp
CvKfSCeCyajlDdah7kx+V+hgFCzWDesYF8dNUwUmJjkQ3rCM6vCFbK03IYkpqbTPAxZr3r9X+FLV
dBNaqbpXzYRJ0Gnfqk/cdt8/ldZ6z/DLvy0PepVYt1EniaeuS669KTgDEixgnL8Fz0E3/v4KvB18
V5tlEL12IoMxSG5q4nvIMUh09zNHXp24IgWwzQxLnQ==
=J/Jx
-----END PGP SIGNATURE-----

--1N5wn9JjeVBn5ZBswJ7uuhN1gl1estidX--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136101.252518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSda-0004nP-Mn; Wed, 02 Jun 2021 15:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136101.252518; Wed, 02 Jun 2021 15:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSda-0004m3-Fw; Wed, 02 Jun 2021 15:18:30 +0000
Received: by outflank-mailman (input) for mailman id 136101;
 Wed, 02 Jun 2021 15:18:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdZ-0004jb-GL
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:29 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87ef2c64-4598-4334-bfc3-f44c95073228;
 Wed, 02 Jun 2021 15:18:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4BFDE22C1A;
 Wed,  2 Jun 2021 15:18:27 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id 460F611CF5; Wed,  2 Jun 2021 16:20:03 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 4015F11DE8;
 Wed,  2 Jun 2021 11:41:03 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id d7SUDU9ut2DaewAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 11:41:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87ef2c64-4598-4334-bfc3-f44c95073228
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647107; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r3zuSfwQN7zBaKL5TqfD/RmuyIeMWyu+WlTPlVT5+LI=;
	b=gnF4r6twZWNm2+YO0qBVvYstL4BDPKwOlx73GsNcF+REtmJoJlfC08Q1BU/3kE8st6a2Z6
	xAxCp0QxGVYTg1YfBEyCJLJyT78jihmVrt2XQsDOS1ktGlLmwj4ZTdzF3hZ/WsYgDrNXWC
	yWKkyG5z+DnORwAVcKeL1iKYP2qDArk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622634063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r3zuSfwQN7zBaKL5TqfD/RmuyIeMWyu+WlTPlVT5+LI=;
	b=MaY69p2lUfnyP8/qh2nc8Y7yFCzhM1SWOjgyx+sgmjlP4Fy0lx/YdFWHpwPhxSuVxlRZr4
	HGVKpF+PU9nnwvZHvewUzRiQaYN5Zq97MGrwrJySZ4M9P2vb2vwNtQtUVCRg2M62jQNcYN
	W24sN7v6VbgK67LFM4lXGNLvF/nwuJM=
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-5-olaf@aepfle.de>
 <23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
 <20210602125710.0607a985.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
Date: Wed, 2 Jun 2021 13:41:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602125710.0607a985.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="jfUCIWCISRqnAjGkfDYj1kuV2A0dM3GoT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--jfUCIWCISRqnAjGkfDYj1kuV2A0dM3GoT
Content-Type: multipart/mixed; boundary="JYSncPRfdKTqTZiOSRHaHouY2ECVN8EpO";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-5-olaf@aepfle.de>
 <23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
 <20210602125710.0607a985.olaf@aepfle.de>
In-Reply-To: <20210602125710.0607a985.olaf@aepfle.de>

--JYSncPRfdKTqTZiOSRHaHouY2ECVN8EpO
Content-Type: multipart/mixed;
 boundary="------------291C54DF7E40402F047AA85B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------291C54DF7E40402F047AA85B
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 12:57, Olaf Hering wrote:
> On Wed, 2 Jun 2021 08:30:08 +0200, Juergen Gross <jgross@suse.com> wrote:
>
>> On 01.06.21 18:10, Olaf Hering wrote:
>>> +int readv_exact(int fd, const struct iovec *iov, int iovcnt)
>
>>> +        if ( len <= 0 )
>>> +        {
>>> +            rc = -1;
>> Is EOF really an error?
>
> I think yes, that's what "exact" implies IMO.

Shouldn't you check for zero-length iovec elements, as in the
writev_exact() case, then?

>> This will stop the loop, even if idx hasn't reached iovcnt.
>
> Yes, it will trigger yet another readv().

Ah, right.


Juergen
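
The loop semantics being discussed (EOF as an error, partial reads
retriggering readv()) can be sketched roughly as follows. This is not the
actual libxenctrl code, only a minimal illustration under POSIX
assumptions; the name readv_exact_sketch and the ENODATA errno choice for
EOF are mine:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Sketch of an "exact" readv loop: keep calling readv() until every
 * iovec element is filled.  EOF (readv() returning 0) before all bytes
 * have arrived is treated as an error, which is what "exact" implies.
 * Note: the caller's iovec array is modified while consuming data. */
static int readv_exact_sketch(int fd, struct iovec *iov, int iovcnt)
{
    int idx = 0;

    while ( idx < iovcnt )
    {
        ssize_t len;

        /* Skip zero-length elements so they cannot stall the loop. */
        if ( iov[idx].iov_len == 0 )
        {
            idx++;
            continue;
        }

        len = readv(fd, &iov[idx], iovcnt - idx);
        if ( len == -1 && errno == EINTR )
            continue;
        if ( len <= 0 )
        {
            if ( len == 0 )
                errno = ENODATA;  /* EOF before all data arrived */
            return -1;
        }

        /* Consume len bytes: advance over fully filled elements and
         * adjust a partially filled one, then readv() again if needed. */
        while ( len > 0 )
        {
            if ( (size_t)len >= iov[idx].iov_len )
            {
                len -= iov[idx].iov_len;
                idx++;
            }
            else
            {
                iov[idx].iov_base = (char *)iov[idx].iov_base + len;
                iov[idx].iov_len -= len;
                len = 0;
            }
        }
    }

    return 0;
}
```

A short read here, as noted above, simply triggers another readv() with
the remaining elements rather than being treated as a failure.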


--------------291C54DF7E40402F047AA85B--

--JYSncPRfdKTqTZiOSRHaHouY2ECVN8EpO--

--jfUCIWCISRqnAjGkfDYj1kuV2A0dM3GoT
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3bk4FAwAAAAAACgkQsN6d1ii/Ey+J
5ggAlh8D0FwQ31aM3YUUrYpW56c6kcDQUCjbvU3xBZ0Y/jia5E9TkcqsKJ28TRu7SEezSkPMRmIa
LpRvAJkExWVzllMz5uPa53N5MItJWsMsiKup9bnhBg0Nberbdu4NPy43KYvjy7Cm1rSL6FIaOZE8
cRkddGLTnApC4xuaSqY3pOnAjtQJW/+EMXOzDodROReiQadzW5326i5nM408eCZaV+0PNQSvJ+rb
EBadVRt/5UvKQAiwn0XN0j/lNdjICqXKe2hzeuuQgp9D1SUJd7Ng5eMGz6zw9LfBVVadCIoPLim4
34lGtbE49gRD8OmQT3faMToykVH/aD3TrLZ7jQvozw==
=Wgsp
-----END PGP SIGNATURE-----

--jfUCIWCISRqnAjGkfDYj1kuV2A0dM3GoT--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136103.252545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdg-0005a8-6t; Wed, 02 Jun 2021 15:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136103.252545; Wed, 02 Jun 2021 15:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdg-0005a1-3U; Wed, 02 Jun 2021 15:18:36 +0000
Received: by outflank-mailman (input) for mailman id 136103;
 Wed, 02 Jun 2021 15:18:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSde-0004jb-CH
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:34 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f88407cb-805f-4f54-afa5-a7828f6609d9;
 Wed, 02 Jun 2021 15:18:29 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7CF5F1FDE3;
 Wed,  2 Jun 2021 15:18:28 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id 74C6B118DD; Wed,  2 Jun 2021 16:20:03 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 9005E11D5C;
 Wed,  2 Jun 2021 09:57:23 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id NifpIQNWt2AXOQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 09:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f88407cb-805f-4f54-afa5-a7828f6609d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647108; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kB9AZG1SeSKHgUCTw/5q6Tcv6K2TuGCPc5PPmES8YJk=;
	b=r7HOUiNrKq9M+5BfRnw2TLIGETXodzaHiRIo+ewHK7Zyf8t+QhWsagmX0oU4DXaLGQGG6Q
	RnwQQbHzt2etvk8U7T06VouMrOA6cCyrTOsrU3xYLI+blrsjWbd4N4LS2GEKLzzjm1QwIj
	pnSK5lULfZZZgfzNRIJk2IRnz3e/kDQ=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622627843; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kB9AZG1SeSKHgUCTw/5q6Tcv6K2TuGCPc5PPmES8YJk=;
	b=okh57yor//4O15DjW9xjIFGXWnkuc/I73OkWKgO1JyEJfzx6znEwNwJjc24L0oggfDKiOH
	oVsXj4ZSNnrJDZH0nPQ64xkAdK5w1jaocvca3zB6UMcNP9OnvQCL8bl1oc5wFjvBtzeP5q
	TSsbZt5RSqUmEmFR+nz78aVk0t6XbZo=
Subject: Re: [PATCH v20210601 24/38] tools/guest: restore: split record
 processing
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-25-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <0db572f9-384e-13ee-514b-0f7107843b2d@suse.com>
Date: Wed, 2 Jun 2021 11:57:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210601161118.18986-25-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="oLOO42K66DjbeY1ukQNQ6ALX4nI5Iy6uz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--oLOO42K66DjbeY1ukQNQ6ALX4nI5Iy6uz
Content-Type: multipart/mixed; boundary="3yb7NjrNnIa5sRHhorvaZ7kxTcUiPhWct";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <0db572f9-384e-13ee-514b-0f7107843b2d@suse.com>
Subject: Re: [PATCH v20210601 24/38] tools/guest: restore: split record
 processing
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-25-olaf@aepfle.de>
In-Reply-To: <20210601161118.18986-25-olaf@aepfle.de>

--3yb7NjrNnIa5sRHhorvaZ7kxTcUiPhWct
Content-Type: multipart/mixed;
 boundary="------------1BE6534ADF3AC92CAC31D80A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1BE6534ADF3AC92CAC31D80A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.06.21 18:11, Olaf Hering wrote:
> handle_page_data must be able to read directly into mapped guest memory.
> This will avoid unnecessary memcpy calls for data which can be consumed
> verbatim.
>
> Rearrange the code to allow decisions based on the incoming record.
>
> This change is preparation for future changes in handle_page_data;
> no change in behavior is intended.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


--------------1BE6534ADF3AC92CAC31D80A--

--3yb7NjrNnIa5sRHhorvaZ7kxTcUiPhWct--

--oLOO42K66DjbeY1ukQNQ6ALX4nI5Iy6uz
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3VgMFAwAAAAAACgkQsN6d1ii/Ey/G
Dgf+JGXLs77150dnm6WRZ2D99YYBwSJLl+11lkgRDXtv020xz+E74Li7jb8ogskIZiT+a+RX/Xlk
W6Rgjjry46TZIRYzccOCZdjDnjwdyiYCysrIFF7GHkkge1FeiiIT7I1dGMpofEvh5Dc82YAydYcE
9FMCDlcc9csEHOPx0vwZanjjgHT6IuIRZMdtMcAvtm+QZOszln2L/RXrS81CXqC4j9woC8XVgQ1y
jrEAk3r6jXkFObVENgUR53ntKx4xYyHX8u4ZBEBdiEd0Mb3kohXIONCsWghon4kZt43acVmTPX9i
Cieq6fUUj5TYXw14YBWfe3V5SiiANftWmmMcR7st2Q==
=gYTZ
-----END PGP SIGNATURE-----

--oLOO42K66DjbeY1ukQNQ6ALX4nI5Iy6uz--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136104.252556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdk-0005wk-HI; Wed, 02 Jun 2021 15:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136104.252556; Wed, 02 Jun 2021 15:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdk-0005wb-CE; Wed, 02 Jun 2021 15:18:40 +0000
Received: by outflank-mailman (input) for mailman id 136104;
 Wed, 02 Jun 2021 15:18:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdi-0004jV-H5
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:38 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57e77543-a98e-4765-ae1d-92787b69ab0b;
 Wed, 02 Jun 2021 15:18:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0266022B80;
 Wed,  2 Jun 2021 15:18:30 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id EFD4111CD4; Wed,  2 Jun 2021 15:28:32 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 05EA211E0C;
 Wed,  2 Jun 2021 12:15:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id YyJWAFR2t2C6EwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 12:15:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57e77543-a98e-4765-ae1d-92787b69ab0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647110; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6fuFG+pb/SOklZdlfuricFlYrFJLmDyOywD8VqgK1JY=;
	b=cfx7czZ2e5LYQ7WNTMzf7DFnIHiGVWzUUE6Eh5Q67C21IyF/T7pbAHCwzqt1iNf1tDYFrj
	8iMdk8KaekMujvy2L3QamsfDIa/CS5GwMe7+0rZ6/9+H7Y1OlfuV1wDTNedj1i56BbbiN0
	0OptgdBJ6vGWRSnFXBBYxGuwMf90eHE=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622636116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6fuFG+pb/SOklZdlfuricFlYrFJLmDyOywD8VqgK1JY=;
	b=doUoHipPAYvaz6kmqpRC2ebCq4dp81ntAU4Q2MOs75uZffeWn9kS13V0Q382/jHCeGd7t9
	J344o33js2hxeZD2m/UVa6LhhcFzndzIhciaDQS0Uf7pSo5OBiUCktlCT83S0XsrVfqDmC
	QM+2TqIRKlFinkDWGTzy3S3VksqBPT0=
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20210601161118.18986-1-olaf@aepfle.de>
 <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
 <20210602085403.40064aed.olaf@aepfle.de>
 <51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
 <20210602140748.462fb0fa.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <3cc8c472-8a94-337c-e7da-7bf663dcb1c2@suse.com>
Date: Wed, 2 Jun 2021 14:15:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602140748.462fb0fa.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="be2L0jh8gcXpQuf70r62M3fjB4y0dprYv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--be2L0jh8gcXpQuf70r62M3fjB4y0dprYv
Content-Type: multipart/mixed; boundary="tHhshKZGOrFHqurAQGbBFp7xP1KX4iY9z";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
Message-ID: <3cc8c472-8a94-337c-e7da-7bf663dcb1c2@suse.com>
Subject: Re: [PATCH v20210601 00/38] leftover from 2020
References: <20210601161118.18986-1-olaf@aepfle.de>
 <24670339-c080-7e47-c2a8-22c22f7a719e@suse.com>
 <20210602085403.40064aed.olaf@aepfle.de>
 <51055fa2-b57e-c66b-78d1-6f07e0164b5b@suse.com>
 <20210602140748.462fb0fa.olaf@aepfle.de>
In-Reply-To: <20210602140748.462fb0fa.olaf@aepfle.de>

--tHhshKZGOrFHqurAQGbBFp7xP1KX4iY9z
Content-Type: multipart/mixed;
 boundary="------------473E70176D4C9C093E2F5F98"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------473E70176D4C9C093E2F5F98
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 14:07, Olaf Hering wrote:
> On Wed, 2 Jun 2021 09:00:49 +0200, Juergen Gross <jgross@suse.com> wrote:
>
>> IMO this will make it more probable to get at least parts committed.
>
> It all depends on patch #3, so no further split can be done at this
> point.

I don't see the dependency in, e.g., patches 27, 28 and 38.


Juergen


--------------473E70176D4C9C093E2F5F98--

--tHhshKZGOrFHqurAQGbBFp7xP1KX4iY9z--

--be2L0jh8gcXpQuf70r62M3fjB4y0dprYv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3dlMFAwAAAAAACgkQsN6d1ii/Ey+S
sgf/c9FoCBrkL/YXloVYmem69GF2Skq3RTerJUG5FQWEbU9kxZVyuFOrEVZ04PGvuZQ7BE4ZIBHR
E8AfCiSw4TshOEujuB+JKBtCZsNoucuwRSkmEDC0blDmQ/dea1mvOpoBP/+96n2cQM93MLWmR7l6
fWfYA3azqfAmpuu33kmQMZh8Dvk3o8TXI9xbhqoXts44KC4NXlljgdMVDQKQhjuXWbRaxe0kE8m9
iCEXIGQpsyQ3yPLa/zo/101XTFIqBMGFYmX1CmrcsJjJnF/sZP6DoRTAYYsSpUGUAsRGPau9Mpaq
aQz5mQh1fZSqDd5ureNIWT6IYZRPaswQ1IAExlDTew==
=4eGC
-----END PGP SIGNATURE-----

--be2L0jh8gcXpQuf70r62M3fjB4y0dprYv--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136105.252561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdk-00061B-Ub; Wed, 02 Jun 2021 15:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136105.252561; Wed, 02 Jun 2021 15:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdk-0005zG-Nz; Wed, 02 Jun 2021 15:18:40 +0000
Received: by outflank-mailman (input) for mailman id 136105;
 Wed, 02 Jun 2021 15:18:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdj-0004jb-CJ
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:39 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f02616ee-02f2-49f9-ad3d-03b94e827c96;
 Wed, 02 Jun 2021 15:18:33 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0086F1FE65;
 Wed,  2 Jun 2021 15:18:33 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id EDF1F11CD4; Wed,  2 Jun 2021 15:50:55 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 4EC4011DEE;
 Wed,  2 Jun 2021 11:49:00 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id EZ/wESxwt2BVAgAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 11:49:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f02616ee-02f2-49f9-ad3d-03b94e827c96
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647113; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N7Eg7BQO9m2m5iOfOE050aZc6dSdn93AtDzPOkwqu/U=;
	b=O6ISxBoNwRYiqR6AOgZ4oehTBSth8xsbIEq3W9m2hb4TnvV3Pym8NWh2OSNWB3ZwhnYfMU
	p+nbSsuEt/gnA97hFvxYpLeKnbmZC2cCfKg219NaeEbTMCuOsvDVqr+gj8M5M2Welknb0g
	i5y8qMLrj4s7OuH4etkGmyvnzVgsDzU=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622634540; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N7Eg7BQO9m2m5iOfOE050aZc6dSdn93AtDzPOkwqu/U=;
	b=Fnefv3ZVxM9zwkzlhwQuu4WUggqcuugKLrcloXMLrB1fWnSpWLxDt2xPAhRqmvyEeXhPXF
	I9D5yIuZxb5wLMMLvRBVrlI8bQCiE2A5qeD6YCWoubGZ4FDDdKFqbxnwCY4rVMOpkYGP5W
	QS9IXvl7dh8mGhWr6/IWgRZtCyjsGvE=
Subject: Re: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to
 libxenctrl
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-6-olaf@aepfle.de>
 <f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
 <20210602131021.094d40f1.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <cec20a4c-ab32-9b35-1c69-a6c4cda5d23a@suse.com>
Date: Wed, 2 Jun 2021 13:48:59 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602131021.094d40f1.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bI1Twa27AtiKtXCD2NFAKUlK7wsCFuZrp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bI1Twa27AtiKtXCD2NFAKUlK7wsCFuZrp
Content-Type: multipart/mixed; boundary="tu4D7Bw2DHl2vYtHYg9sDNjEXSg0YlqcT";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <cec20a4c-ab32-9b35-1c69-a6c4cda5d23a@suse.com>
Subject: Re: [PATCH v20210601 05/38] tools: add xc_is_known_page_type to
 libxenctrl
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-6-olaf@aepfle.de>
 <f1f00500-f74f-3515-cd46-7a12abdf18c3@suse.com>
 <20210602131021.094d40f1.olaf@aepfle.de>
In-Reply-To: <20210602131021.094d40f1.olaf@aepfle.de>

--tu4D7Bw2DHl2vYtHYg9sDNjEXSg0YlqcT
Content-Type: multipart/mixed;
 boundary="------------0F789B75290E8040C50288E8"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0F789B75290E8040C50288E8
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 13:10, Olaf Hering wrote:
> On Wed, 2 Jun 2021 08:51:45 +0200,
> Juergen Gross <jgross@suse.com> wrote:
>
>> I think you should not imply the planned use case here. It would be
>> better to use "switch (type & XEN_DOMCTL_PFINFO_LTAB_MASK)".
>>
>> I'm on the fence regarding putting the new function into xc_private.h.
>> In the end your use case is _not_ to call the new function from
>> libxenctrl.
>
>
> I'm not sure what that means.

The name xc_private.h indicates that this header file is supposed to be
used only inside of libxenctrl. I know that this isn't true today, but
I don't think new misuses should be added to this file.

> One or the other has to be rebased to the new state.

You can add the same functions to some libxensaverestore private header
instead.


Juergen


--------------0F789B75290E8040C50288E8
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0F789B75290E8040C50288E8--

--tu4D7Bw2DHl2vYtHYg9sDNjEXSg0YlqcT--

--bI1Twa27AtiKtXCD2NFAKUlK7wsCFuZrp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3cCsFAwAAAAAACgkQsN6d1ii/Ey/1
kAf+ORfg4swYGP3cOSV3K0VNTUlmOEhSFE0/K8Lb3dDc2k7oWLIFbMvnqd44B2e8jQjMkuq9s4NS
5zjNm9eB93WRVeypJ1GMg6DuQHClRTdsC8uosHSezMwNTCZxHtDR7zAiZIpFeMK3xzn8UemiZ111
tjrZ7WqSVcauf9+NFgTtikMVJWkgAnbZSEBL57dYJsGScRotgFvQC7ARqDkve+vagxkyWbx7H4QV
q7oAGLUPT34c6FUO/5SLKPrPvs3xSo3WQP+LiAFYx3B1KIM8a1O6iw2R6pFVfbzsAEig5Ns6rkmU
i1umBGpl1SASZkm6dhDdUaEzXycjd3P8h38/iR6N9Q==
=5L6x
-----END PGP SIGNATURE-----

--bI1Twa27AtiKtXCD2NFAKUlK7wsCFuZrp--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136106.252578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdp-0006gH-C6; Wed, 02 Jun 2021 15:18:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136106.252578; Wed, 02 Jun 2021 15:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdp-0006g2-6D; Wed, 02 Jun 2021 15:18:45 +0000
Received: by outflank-mailman (input) for mailman id 136106;
 Wed, 02 Jun 2021 15:18:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdn-0004jV-HC
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:43 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f77073a-eee0-4f31-be98-e6cf6f5ecfb3;
 Wed, 02 Jun 2021 15:18:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 07B691FE20;
 Wed,  2 Jun 2021 15:18:30 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id 035F511CD5; Wed,  2 Jun 2021 15:50:54 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E032511DFC;
 Wed,  2 Jun 2021 12:03:40 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id SWiENZxzt2AHDAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 12:03:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f77073a-eee0-4f31-be98-e6cf6f5ecfb3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647110; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fsA+CO0pdTWaXD1IPZxF+m2NvvG9/Y7f4eYmrV8WLn8=;
	b=ROtsREE9vWj1kDTfua572Z36z86epfZB0wzcy0mYB9fbTRD3MCDvWb7gooaiIhDuCzM3Qk
	JYx4kNroGXfTWDiyXdM2N6Y77FM6oBekmFBM3g5NoJtiO9mVZwcXLLxXbh+2q2ffYD5fOI
	rBHFGeeio5PHgQH26Jsyplp5zirUKyA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622635421; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fsA+CO0pdTWaXD1IPZxF+m2NvvG9/Y7f4eYmrV8WLn8=;
	b=tRam7bIMl1s+xsozbAJuVzToFPcbujZe7IxLL3l2Fusa7itQVfvWt0k1xCRmbxFmJAuTYr
	bCBK+anLYv9l+ubWSQJGTPpqDm+hB6M04F6WrqNBCVXQrQ9nm+NBOp5IhALEqGRLR8vOUP
	9uebn9/ozPqAKHjDiRBX7k9ZYIpcZhE=
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
 <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
 <20210602132114.6fd9ee87.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
Date: Wed, 2 Jun 2021 14:03:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602132114.6fd9ee87.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="dFjCT96WT3eFC4HBJXSwkKe0yvyhrGiUC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--dFjCT96WT3eFC4HBJXSwkKe0yvyhrGiUC
Content-Type: multipart/mixed; boundary="oMJhaH9V4TKl9jeoLJdBElxCX0Iff4vuM";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
 <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
 <20210602132114.6fd9ee87.olaf@aepfle.de>
In-Reply-To: <20210602132114.6fd9ee87.olaf@aepfle.de>

--oMJhaH9V4TKl9jeoLJdBElxCX0Iff4vuM
Content-Type: multipart/mixed;
 boundary="------------68A319A14F286A3273C4DF56"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------68A319A14F286A3273C4DF56
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 13:21, Olaf Hering wrote:
> On Wed, 2 Jun 2021 08:59:13 +0200,
> Juergen Gross <jgross@suse.com> wrote:
>
>> What about XEN_DOMCTL_PFINFO_XALLOC? Is this case impossible here, or
>> are you changing behavior?
>
> I think XEN_DOMCTL_PFINFO_XALLOC is a type that does not exist in practice.

Oh, indeed. It was used ages ago by the migration code (I could find it
being used e.g. in Xen 4.2, but not in 4.7). BTW, the use case was
to avoid shattering superpages on migration...

> So, no change in behavior.

Maybe XEN_DOMCTL_PFINFO_XALLOC should no longer be supported, i.e.
a stream containing it should be rejected? This might be a
worthwhile modification of patch 5.


Juergen


--------------68A319A14F286A3273C4DF56
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------68A319A14F286A3273C4DF56--

--oMJhaH9V4TKl9jeoLJdBElxCX0Iff4vuM--

--dFjCT96WT3eFC4HBJXSwkKe0yvyhrGiUC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3c5wFAwAAAAAACgkQsN6d1ii/Ey/4
8Qf+ISXyh2Swo5akl50r7qdrr+YIZFyPWZP7BL/FAqTWWG1OV487Z0hBV8iHYdCrnwpRaUIGOnUG
3tzQRNxZmZrKNUgXUmif3QsERSJmW/2R46JeTkHritVLKkbU3FLgjIhMegcpgyuSyy9P8/GOa6u1
RsoZ8glymEcZTvTldKerXWoq7jinAYDRNXaCuyggoIZKsiEL1ygyzYO6+dCFVHWVDQE7eK3iN7uK
O4eVmYbodZRrkKO6/MhOXxIq9CF/yBdOdFK3NQZ1uz1+4STfB8SKxiWhgKecw9owiEmIFem0MTrq
VRcxnM88jIsYqFQDIbBfyUFwwWfWcmb617HmsYpSfw==
=p/jP
-----END PGP SIGNATURE-----

--dFjCT96WT3eFC4HBJXSwkKe0yvyhrGiUC--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136109.252589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdu-0007IH-OU; Wed, 02 Jun 2021 15:18:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136109.252589; Wed, 02 Jun 2021 15:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdu-0007Hy-Ji; Wed, 02 Jun 2021 15:18:50 +0000
Received: by outflank-mailman (input) for mailman id 136109;
 Wed, 02 Jun 2021 15:18:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSds-0004jV-HM
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:48 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8e6fa52-9500-49dc-af38-aa66c03b5315;
 Wed, 02 Jun 2021 15:18:31 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 39C8A1FE26;
 Wed,  2 Jun 2021 15:18:30 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id 3019A11CD5; Wed,  2 Jun 2021 16:06:26 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 43A3A11EA9;
 Wed,  2 Jun 2021 13:48:36 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 1948DzSMt2CHUAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 13:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8e6fa52-9500-49dc-af38-aa66c03b5315
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647110; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=D65dpeEgcvshZPY6JSFiSgrHsN2KCrew/EttgjaOezE=;
	b=PKZ547hI4tyzNwF5/ioezn+9zgFJ3mwfGKzjRVDZEP9dAuhhSz3DHo8e6g7sUxzsBdBZ6M
	J5CsPwYNWG9IiGEJww1idvRA7meKbDUMIhNGzK0Fs9t/kqDdUkhozk9K8A4Tn70XKvdOUO
	iF9LhmcqmQYlNsEa+NRJ80zt0IKG4Pg=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622641716; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=D65dpeEgcvshZPY6JSFiSgrHsN2KCrew/EttgjaOezE=;
	b=Xf1y7LK2U3aNSSkECbjoWl4JwYVFqUOMgq4ms1i7f9s1ny6pqrIlAuD3xLx61gIpsGlyEk
	52B1smfa05IIFqygydKsM0O93/QUyQmGID1lgOfgLsFhJI/piPLx0R1GZp/KwQVa8CaFX3
	PlESGLFAhKlmDSYRtIaiSZkR4zgG7g4=
Subject: Re: [PATCH v20210602 02/38] xl: fix description of migrate --debug
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
 <20210602143215.3a0cb971.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <91cf7800-76f9-245d-b0a0-9c0fe3bf1a02@suse.com>
Date: Wed, 2 Jun 2021 15:48:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210602143215.3a0cb971.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wke2dD5zmEQ8GMehUaOe8Sn7Ts71OzDSo"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wke2dD5zmEQ8GMehUaOe8Sn7Ts71OzDSo
Content-Type: multipart/mixed; boundary="xPy1LWsPMLA4hsPlCxw6siZ9rz1mrLxHm";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <91cf7800-76f9-245d-b0a0-9c0fe3bf1a02@suse.com>
Subject: Re: [PATCH v20210602 02/38] xl: fix description of migrate --debug
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-3-olaf@aepfle.de>
 <20210602143215.3a0cb971.olaf@aepfle.de>
In-Reply-To: <20210602143215.3a0cb971.olaf@aepfle.de>

--xPy1LWsPMLA4hsPlCxw6siZ9rz1mrLxHm
Content-Type: multipart/mixed;
 boundary="------------7C7FED5B6629F9435C906E8B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7C7FED5B6629F9435C906E8B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.06.21 14:32, Olaf Hering wrote:
> xl migrate --debug used to track every pfn in every batch of pages.
> That is no longer the case. The code in xc_domain_save is the consumer
> of this knob, but it considers it only for the remus and colo case.
>
> Adjust the help text to say what --debug does today: nothing.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>
> v02:
> - the option has no effect anymore

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------7C7FED5B6629F9435C906E8B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7C7FED5B6629F9435C906E8B--

--xPy1LWsPMLA4hsPlCxw6siZ9rz1mrLxHm--

--wke2dD5zmEQ8GMehUaOe8Sn7Ts71OzDSo
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3jDMFAwAAAAAACgkQsN6d1ii/Ey98
jggAkwsh9HvGSm/pxkKUd2AvYJwv8ig0kFSnNpZMlfQDanEQQuPr4i3rXOgvxQ048FFVJuhRNOFs
ZldUGM1DDEAbZeYPLKdqVcVsa9pI1EG5Nd3hFVwMMon0RYEu3FpLoCb8ZxivLDYcH2C7rx3kw7I6
8ZxK4sYmT2/ERm84v2jGtYK+Z5FgFksf2NS6soqbFIw0xl4Q1rSf/wbPWSTU0R9AGorh2TsF5tCD
s4PLtZdV6YbWRgoUuF/DS3K+cxmNf3i5QXOiQBdPPzZdtUYd5w2uJ3GsrGoF71fIq6t5qEtG+a42
ZIIh+jcoDB0GeQGqdGJiCdp36xI4CBCjsAb7gSqCQg==
=97dm
-----END PGP SIGNATURE-----

--wke2dD5zmEQ8GMehUaOe8Sn7Ts71OzDSo--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:18:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136112.252600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdz-0007vf-3a; Wed, 02 Jun 2021 15:18:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136112.252600; Wed, 02 Jun 2021 15:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loSdy-0007vK-V0; Wed, 02 Jun 2021 15:18:54 +0000
Received: by outflank-mailman (input) for mailman id 136112;
 Wed, 02 Jun 2021 15:18:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q/fn=K4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1loSdx-0004jV-HS
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:18:53 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9afb638b-a057-403f-b890-232ab2c4305f;
 Wed, 02 Jun 2021 15:18:34 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7C60921C9F;
 Wed,  2 Jun 2021 15:18:32 +0000 (UTC)
Received: by imap.suse.de (Postfix, from userid 51)
 id 7712811CC0; Wed,  2 Jun 2021 15:18:23 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2E5C611F32;
 Wed,  2 Jun 2021 15:10:22 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id EG3BCF6ft2DJBAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 02 Jun 2021 15:10:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9afb638b-a057-403f-b890-232ab2c4305f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622647112; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ogV5wNk51FdvQJ93nInVRa11ugw7H3eLjSN1UBRHUMg=;
	b=VZ/sEqCnTKdL3yQUmmuB4fZ+u/h/ufbvpY6nuZiTfUTSbuOef9eHnvN3oSB945kQxwpOMY
	/oez4nwSDzW0xlpCKpQTK2nZjqb1YwqMRlSKKfjm+kcs/sQADpVjPdw8uZb1wsQl2bByyk
	2vxbgFXb+Kfw+gA+MYZ3BwbHpLsiqq0=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622646622; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ogV5wNk51FdvQJ93nInVRa11ugw7H3eLjSN1UBRHUMg=;
	b=QyoJOgo3H2v0w10KYk5gk5pgnq3YUMCC531gS0c+W68PmTGGWEKsUpH2yGYpQ0rwuTPBCT
	g+YqqQRg5IboZBfzcS6DCwHxe+X5PTfxJeAF7aiLcKAyWnrs3ErHlB+4VFXHPxPruXoLcz
	Gt3tlA5l6mA/aH7acocE9GubSrzHyng=
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20210412152236.1975-1-jgross@suse.com>
 <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
 <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>
Message-ID: <b4135462-2eba-f236-679a-c37617808656@suse.com>
Date: Wed, 2 Jun 2021 17:10:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="G73A7fA0HIQOCHlizcXAVyGpJQXGHgTwT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--G73A7fA0HIQOCHlizcXAVyGpJQXGHgTwT
Content-Type: multipart/mixed; boundary="4bT5Jfg0Lb5jWuFlmGD0LHlYKz9hMgUx3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Message-ID: <b4135462-2eba-f236-679a-c37617808656@suse.com>
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
References: <20210412152236.1975-1-jgross@suse.com>
 <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
 <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>
In-Reply-To: <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>

--4bT5Jfg0Lb5jWuFlmGD0LHlYKz9hMgUx3
Content-Type: multipart/mixed;
 boundary="------------74EC891718F504DE99235ABB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------74EC891718F504DE99235ABB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.05.21 09:32, Juergen Gross wrote:
> On 12.05.21 08:58, Juergen Gross wrote:
>> Ping?
> 
> Now each patch has an Ack by Wei. Could the series be either applied or
> get more comments, please?

And another PING.

Do I need to set up a cron job pinging each day to get a reaction?


Juergen

> 
> 
> Juergen
> 
>>
>> On 12.04.21 17:22, Juergen Gross wrote:
>>> There are some corners left which don't support the not so very new
>>> linear p2m list of pv guests, which has been introduced in Linux kernel
>>> 3.19 and which is mandatory for non-legacy versions of Xen since kernel
>>> 4.14.
>>>
>>> This series adds support for the linear p2m list where it is missing
>>> (colo support and "xl dump-core").
>>>
>>> In theory it should be possible to merge the p2m list mapping code
>>> from migration handling and core dump handling, but this needs quite
>>> some cleanup before this is possible.
>>>
>>> The first three patches of this series are fixing real problems, so
>>> I've put them at the start of this series, especially in order to make
>>> backports easier.
>>>
>>> The other three patches are only the first steps of cleanup. The main
>>> work done here is to concentrate all p2m mapping in libxenguest instead
>>> of having one implementation in each of libxenguest and libxenctrl.
>>>
>>> Merging the two implementations should be rather easy, but this will
>>> require to touch many lines of code, as the migration handling variant
>>> seems to be more mature, but it is using the migration stream specific
>>> structures heavily. So I'd like to have some confirmation that my way
>>> to clean this up is the right one.
>>>
>>> My idea would be to add the data needed for p2m mapping to struct
>>> domain_info_context and replace the related fields in struct
>>> xc_sr_context with a struct domain_info_context. Modifying the
>>> interface of xc_core_arch_map_p2m() to take most current parameters
>>> via struct domain_info_context would then enable migration coding to
>>> use xc_core_arch_map_p2m() for mapping the p2m. xc_core_arch_map_p2m()
>>> should look basically like the current migration p2m mapping code
>>> afterwards.
>>>
>>> Any comments to that plan?
>>>
>>> Changes in V2:
>>> - added missing #include in ocaml stub
>>>
>>> Juergen Gross (6):
>>>   tools/libs/guest: fix max_pfn setting in map_p2m()
>>>   tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m
>>>     table
>>>   tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
>>>   tools/libs: move xc_resume.c to libxenguest
>>>   tools/libs: move xc_core* from libxenctrl to libxenguest
>>>   tools/libs/guest: make some definitions private to libxenguest
>>>
>>>  tools/include/xenctrl.h                      |  63 ---
>>>  tools/include/xenguest.h                     |  63 +++
>>>  tools/libs/ctrl/Makefile                     |   4 -
>>>  tools/libs/ctrl/xc_core_x86.c                | 223 ----------
>>>  tools/libs/ctrl/xc_domain.c                  |   2 -
>>>  tools/libs/ctrl/xc_private.h                 |  43 +-
>>>  tools/libs/guest/Makefile                    |   4 +
>>>  .../libs/{ctrl/xc_core.c => guest/xg_core.c} |   7 +-
>>>  .../libs/{ctrl/xc_core.h => guest/xg_core.h} |  15 +-
>>>  .../xc_core_arm.c => guest/xg_core_arm.c}    |  31 +-
>>>  .../xc_core_arm.h => guest/xg_core_arm.h}    |   0
>>>  tools/libs/guest/xg_core_x86.c               | 399 ++++++++++++++++++
>>>  .../xc_core_x86.h => guest/xg_core_x86.h}    |   0
>>>  tools/libs/guest/xg_dom_boot.c               |   2 +-
>>>  tools/libs/guest/xg_domain.c                 |  19 +-
>>>  tools/libs/guest/xg_offline_page.c           |   2 +-
>>>  tools/libs/guest/xg_private.h                |  16 +-
>>>  .../{ctrl/xc_resume.c => guest/xg_resume.c}  |  69 +--
>>>  tools/libs/guest/xg_sr_save_x86_pv.c         |   2 +-
>>>  tools/ocaml/libs/xc/xenctrl_stubs.c          |   1 +
>>>  20 files changed, 545 insertions(+), 420 deletions(-)
>>>  delete mode 100644 tools/libs/ctrl/xc_core_x86.c
>>>  rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
>>>  rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (92%)
>>>  rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (72%)
>>>  rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
>>>  create mode 100644 tools/libs/guest/xg_core_x86.c
>>>  rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)
>>>  rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (80%)
>>>
>>
> 
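[Editor's sketch] The refactoring plan quoted above can be illustrated roughly as follows. This is purely illustrative C, not the actual Xen tools code: the struct members, the simplified signature, and the allocation stand-in are all assumptions; the real xc_core_arch_map_p2m() takes an xc_interface handle and maps guest frames rather than allocating.

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t xen_pfn_t;

/* Sketch: struct domain_info_context gains the state the p2m mapping
 * needs, so migration handling can drop its private copies of these
 * fields (member names here are illustrative, not the real layout). */
struct domain_info_context {
    unsigned int guest_width;  /* size of a guest pointer in bytes */
    xen_pfn_t max_pfn;         /* number of entries in the p2m */
    xen_pfn_t *p2m;            /* the mapped linear p2m list */
    size_t p2m_frames;         /* frames backing the p2m list */
};

/* Sketch of the single shared mapper both core-dump and migration
 * handling would call; here it only fakes the mapping with calloc()
 * to show the intended calling convention. */
int xc_core_arch_map_p2m(struct domain_info_context *dinfo)
{
    if (dinfo->max_pfn == 0)
        return -1;

    /* With 8-byte entries, one 4k page holds 512 p2m entries. */
    dinfo->p2m_frames = (dinfo->max_pfn + 511) / 512;
    dinfo->p2m = calloc(dinfo->max_pfn, sizeof(*dinfo->p2m));

    return dinfo->p2m != NULL ? 0 : -1;
}
```

With such a layout, both callers would pass the same struct, and the migration stream specific fields would stay out of the shared interface.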


--------------74EC891718F504DE99235ABB
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------74EC891718F504DE99235ABB--

--4bT5Jfg0Lb5jWuFlmGD0LHlYKz9hMgUx3--

--G73A7fA0HIQOCHlizcXAVyGpJQXGHgTwT
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC3n10FAwAAAAAACgkQsN6d1ii/Ey8G
GAf+JKjxfRkaYjnbbDtpCOwB1g2krduWx/+wnikrCQahwtLdRvQpqgSUwpYDKYoVB41ReEihtEDC
bW66vSgqQ8jGWr6fug97EeGBnpd771jJAenfPwk7452DWzDkEXPVuBGHbPbbopf+L1vTzdxZ3U30
LguVhe4jk20HogUKhG+/396h9GdWTkf+we1SvgQ8AAG8YEwJcGS4RS5z3ocGowNHwuPhqoJkZ5nT
DBcNnkqmccwUQ2uDow6GQAlFaWtG/+2u0e8F5ed2SScdyvhoFYhMRz6xXoxou9+Lmw3obyT4Uglg
AeBWDia2igTxAUBGpac10TD3VVfF722ueefGcEMPPA==
=P5rU
-----END PGP SIGNATURE-----

--G73A7fA0HIQOCHlizcXAVyGpJQXGHgTwT--


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 15:52:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 15:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136178.252611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loTAW-0005tF-4F; Wed, 02 Jun 2021 15:52:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136178.252611; Wed, 02 Jun 2021 15:52:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loTAW-0005t8-0h; Wed, 02 Jun 2021 15:52:32 +0000
Received: by outflank-mailman (input) for mailman id 136178;
 Wed, 02 Jun 2021 15:52:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bHXv=K4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1loTAU-0005t2-H0
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 15:52:30 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df92dbd5-2a62-4361-88b6-ca44d9f47eed;
 Wed, 02 Jun 2021 15:52:29 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2055.outbound.protection.outlook.com [104.47.14.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-khlrCXLfPYSrjX5TxgAJUA-1; Wed, 02 Jun 2021 17:52:27 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5167.eurprd04.prod.outlook.com (2603:10a6:803:5b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.22; Wed, 2 Jun
 2021 15:52:25 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4173.030; Wed, 2 Jun 2021
 15:52:25 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0044.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:51::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.20 via Frontend Transport; Wed, 2 Jun 2021 15:52:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df92dbd5-2a62-4361-88b6-ca44d9f47eed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622649148;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XAo43sTZN+czhhlSbiUM8uJcXERUVp7hKL+uOVdVHxI=;
	b=W2V5ifBo32o+2huR8+4aFQWh3UFl9IJcz4nMjtNkKk5rOmadOWc5B9rbiy2KCgDhKqBmM+
	UCruAgpM4AuW/e0mLJ6cSa7p2e3K2s/3qGk3XTch/2QPtUdABP5iKnfum4mehsImyrl0bw
	lnnj9xojuX7fIzKrUCrjrpSBWeq51fI=
X-MC-Unique: khlrCXLfPYSrjX5TxgAJUA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lLu3KgsHamD94iDYemvEmGDmmLJ7BidLt2clbrOxgUFhu71adpjdKT18Uc/7CRnbPdsESi4BFaQXr0xzHfOyrlq6jrOQciOMnDmXSj9Fw/K0BEqBPGJQMkYpPv1VUgae1OVaVXEZ7om7frhI48MTRydpaVCAxyZSdgVihhmTD6oh0rjpRMuShM8yLDgWCFwWztnmQzsefdMXYouMnHMZD+ucipBl+QVthtYSUv56txWCxTNawQ/5gujPQ55jC2OrIOrilY44kyF5HPO1v7ZCGUp6ss/d12Gm6pKDqrFa4e3zZyLCPCnXVxtrRyA4fw9TU+dOMfDhALmVMr2/A4CZIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XAo43sTZN+czhhlSbiUM8uJcXERUVp7hKL+uOVdVHxI=;
 b=a1AmHsnqK1oYY8G9+6VZ4lByMJ/XpbQbHzEBkXpwU3m980GbtFK7vHXXERoa4cwTnHLRtmGeSZnhTur7GybGXxOn64y2PxHfwVM4kpuV+2BL4k/0Sdo3NPRQ+Vudi/iHng582uMAOQiBaUJ/EN0aC2KAoeASHG6xbG6ef0QRWvibTCI9RGbKO9eA+QETA2bPDFY0/P8uJxuXspoYd1JZetOKgvsUGMDJO9PPK0B6aBqszFxDT17I09RA3cfIesi3QuPkhZ1GPz3xprjq+JlC5106D5vjRlicdnEWzAFT90iR/lWXcd4kk6SLfe02IGWMLTs7Gvi+N/e02qjQhk/hVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v5 1/2] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1622645816.git.connojdavis@gmail.com>
 <2c24cadace47e51e9e3fce6614e0f5e83db6c3af.1622645816.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <161181af-3e92-d5db-9775-048a512499de@suse.com>
Date: Wed, 2 Jun 2021 17:52:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <2c24cadace47e51e9e3fce6614e0f5e83db6c3af.1622645816.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0044.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:51::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 63af4686-3f80-4134-6e8e-08d925de6a5c
X-MS-TrafficTypeDiagnostic: VI1PR04MB5167:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5167B8E4D66F85C65F17DF97B33D9@VI1PR04MB5167.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 63af4686-3f80-4134-6e8e-08d925de6a5c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 15:52:24.8544
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QYnxwInMLe2rNq7gKgiT5qfYgnEqkEDm/AatDMFue6quGr3SJN60meZtKzAgfZW1f9lnCIQV1ZMWreA79JCk0w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5167

On 02.06.2021 17:08, Connor Davis wrote:
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
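
[Editor's sketch] The change under discussion amounts to a Kconfig default along these lines (a sketch inferred from the patch subject; the prompt string is illustrative and the actual hunk is in the patch itself):

```
config HAS_NS16550
	bool "NS16550 UART driver"
	default y if X86 || ARM
```

Architectures other than x86 and Arm would then get the driver only when explicitly selected, shrinking a minimal build for new ports.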

Please can you accumulate tags you've already got into re-submissions,
so the status of patches is clear and committers will know what is
ready to go in without having to hunt down anything? In the case here
you've lost my A-b.

Jan
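
[Editor's note] One low-friction way to carry such tags forward between revisions is git's trailer tooling; a sketch, where msg.txt and msg-v5.txt are hypothetical commit-message files:

```shell
# Append a tag received on the previous revision to a commit message
# before sending the reroll (file names and address are illustrative).
git interpret-trailers \
    --trailer "Acked-by: Jan Beulich <jbeulich@suse.com>" \
    msg.txt > msg-v5.txt
```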



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 16:03:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 16:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136185.252622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loTLF-0008Hk-3f; Wed, 02 Jun 2021 16:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136185.252622; Wed, 02 Jun 2021 16:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loTLE-0008Hd-WA; Wed, 02 Jun 2021 16:03:36 +0000
Received: by outflank-mailman (input) for mailman id 136185;
 Wed, 02 Jun 2021 16:03:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Ii/=K4=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1loTLD-0008HX-Ps
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 16:03:35 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b7b604b-60ca-489b-a204-6481b95969d0;
 Wed, 02 Jun 2021 16:03:34 +0000 (UTC)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 152G2fGe024614; Wed, 2 Jun 2021 16:02:41 GMT
Received: from oracle.com (aserp3020.oracle.com [141.146.126.70])
 by mx0b-00069f02.pphosted.com with ESMTP id 38wx9fra8e-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 02 Jun 2021 16:02:40 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 152G2duL074339;
 Wed, 2 Jun 2021 16:02:39 GMT
Received: from nam04-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam08lp2171.outbound.protection.outlook.com [104.47.73.171])
 by aserp3020.oracle.com with ESMTP id 38udecap7t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 02 Jun 2021 16:02:39 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB2948.namprd10.prod.outlook.com (2603:10b6:208:7b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Wed, 2 Jun
 2021 16:02:17 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4195.020; Wed, 2 Jun 2021
 16:02:17 +0000
Received: from [10.74.103.155] (160.34.89.155) by
 SJ0PR03CA0275.namprd03.prod.outlook.com (2603:10b6:a03:39e::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20 via Frontend
 Transport; Wed, 2 Jun 2021 16:02:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b7b604b-60ca-489b-a204-6481b95969d0
Authentication-Results: microsoft.com; dkim=none (message not signed)
 header.d=none;microsoft.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for
 Isolation VM
To: Tianyu Lan <ltykernel@gmail.com>, kys@microsoft.com,
        haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
        decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
        bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
        dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
        akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
        rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
        krish.sadhukhan@oracle.com, saravanand@fb.com,
        Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
        m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
        sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
        xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
        jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
        linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
        linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
        vkuznets@redhat.com, thomas.lendacky@amd.com, brijesh.singh@amd.com,
        sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-10-ltykernel@gmail.com>
 <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
 <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <d6714e8b-dcb6-798b-59a4-5bb68f789564@oracle.com>
Date: Wed, 2 Jun 2021 12:02:03 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [160.34.89.155]
X-ClientProxiedBy: SJ0PR03CA0275.namprd03.prod.outlook.com
 (2603:10b6:a03:39e::10) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e880f785-c80a-4594-9961-08d925dfcb81
X-MS-TrafficTypeDiagnostic: BL0PR10MB2948:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BL0PR10MB2948E5F3684F7BB4EF7F01718A3D9@BL0PR10MB2948.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e880f785-c80a-4594-9961-08d925dfcb81
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2021 16:02:17.5711
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR10MB2948
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10003 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 mlxscore=0 phishscore=0
 suspectscore=0 spamscore=0 mlxlogscore=999 adultscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106020101
X-Proofpoint-GUID: D2nKtgwgUzB4I8NO5gn7GykhtJc5Pd7z
X-Proofpoint-ORIG-GUID: D2nKtgwgUzB4I8NO5gn7GykhtJc5Pd7z


On 6/2/21 11:01 AM, Tianyu Lan wrote:
> Hi Boris:
>     Thanks for your review.
>
> On 6/2/2021 9:16 AM, Boris Ostrovsky wrote:
>>
>> On 5/30/21 11:06 AM, Tianyu Lan wrote:
>>> @@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
>>>   EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
>>>     IOMMU_INIT_FINISH(2,
>>> -          NULL,
>>> +          hyperv_swiotlb_detect,
>>>             pci_xen_swiotlb_init,
>>>             NULL);
>>
>>
>> Could you explain this change?
>
> Hyper-V allocates its own swiotlb bounce buffer, so the default
> swiotlb buffer should not be allocated; swiotlb_init(), called from
> pci_swiotlb_init(), is what allocates that default buffer.
> To prevent this, hyperv_swiotlb_detect() is placed as the first entry
> in the iommu_table_entry list. The detect loop in pci_iommu_alloc()
> exits once hyperv_swiotlb_detect() succeeds in a Hyper-V VM, so the
> other iommu_table_entry detect callbacks are not called.



Right. But pci_xen_swiotlb_detect() will only do something for Xen PV guests, and those guests don't run on Hyper-V. It's either xen_pv_domain() (i.e. hypervisor_is_type(X86_HYPER_XEN_PV)) or hypervisor_is_type(X86_HYPER_MS_HYPERV), but never both. So I don't think there needs to be a dependency between the two callbacks.



-boris



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 16:46:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 16:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136195.252633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loU0m-0004aa-FR; Wed, 02 Jun 2021 16:46:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136195.252633; Wed, 02 Jun 2021 16:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loU0m-0004aT-Bb; Wed, 02 Jun 2021 16:46:32 +0000
Received: by outflank-mailman (input) for mailman id 136195;
 Wed, 02 Jun 2021 16:46:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loU0l-0004aJ-Eo; Wed, 02 Jun 2021 16:46:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loU0l-0003Md-6w; Wed, 02 Jun 2021 16:46:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loU0k-0004jU-Ph; Wed, 02 Jun 2021 16:46:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loU0k-0002qI-PE; Wed, 02 Jun 2021 16:46:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162333-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162333: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=231bc539066760aaa44d46818c85b14ca2f56d9f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 16:46:30 +0000

flight 162333 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162333/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                231bc539066760aaa44d46818c85b14ca2f56d9f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  305 days
Failing since        152366  2020-08-01 20:49:34 Z  304 days  520 attempts
Testing same since   162333  2021-06-02 06:00:09 Z    0 days    1 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1666320 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 19:19:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 19:19:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136209.252647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loWOQ-0003dF-Lv; Wed, 02 Jun 2021 19:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136209.252647; Wed, 02 Jun 2021 19:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loWOQ-0003d8-GU; Wed, 02 Jun 2021 19:19:06 +0000
Received: by outflank-mailman (input) for mailman id 136209;
 Wed, 02 Jun 2021 19:19:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loWOO-0003bf-Na; Wed, 02 Jun 2021 19:19:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loWOO-0005yW-Iw; Wed, 02 Jun 2021 19:19:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loWOO-0002y1-Bh; Wed, 02 Jun 2021 19:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loWOO-0004aO-BA; Wed, 02 Jun 2021 19:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tUPLEHDpHGdP13f47BAxgChqZrewP7sAOIasFE0SuQw=; b=LURlfTna7j8/bkS9+cW/Pld9fk
	yCmJNIN2r+PI2YzTrclUNX1efCu+U6W1g45N68/p+404y/zbYUhJvEJ9zKddMRUJgTtFK66EE/eve
	G1i57mFxDUUp895P9N5TLXFF29bWMowrkl+6C1oBgSh47WXKQGMPnXL0CxnDOcfkoOag=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162338: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1f515342d8d83ef0fff0c3f4ac67232dd8c97565
X-Osstest-Versions-That:
    ovmf=b233eb1849ac01bdd5b24ea84460a2e481a4c5a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 19:19:04 +0000

flight 162338 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162338/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1f515342d8d83ef0fff0c3f4ac67232dd8c97565
baseline version:
 ovmf                 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9

Last test of basis   162334  2021-06-02 07:41:13 Z    0 days
Testing same since   162338  2021-06-02 12:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>
  Wenyi Xie <xiewenyi2@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b233eb1849..1f515342d8  1f515342d8d83ef0fff0c3f4ac67232dd8c97565 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 19:38:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 19:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136217.252661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loWgf-0006A6-8m; Wed, 02 Jun 2021 19:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136217.252661; Wed, 02 Jun 2021 19:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loWgf-00069z-5m; Wed, 02 Jun 2021 19:37:57 +0000
Received: by outflank-mailman (input) for mailman id 136217;
 Wed, 02 Jun 2021 19:37:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PIZ+=K4=amazon.com=prvs=780e27244=anchalag@srs-us1.protection.inumbo.net>)
 id 1loWge-00069t-DP
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 19:37:56 +0000
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 714b386a-7765-494f-a648-d2acdcad1849;
 Wed, 02 Jun 2021 19:37:55 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-1d-474bcd9f.us-east-1.amazon.com) ([10.25.36.210])
 by smtp-border-fw-33001.sea14.amazon.com with ESMTP; 02 Jun 2021 19:37:53 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-474bcd9f.us-east-1.amazon.com (Postfix) with ESMTPS
 id 977FCA1C5C; Wed,  2 Jun 2021 19:37:46 +0000 (UTC)
Received: from EX13D07UWA002.ant.amazon.com (10.43.160.77) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.58) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Wed, 2 Jun 2021 19:37:44 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D07UWA002.ant.amazon.com (10.43.160.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Wed, 2 Jun 2021 19:37:44 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.18 via Frontend Transport; Wed, 2 Jun 2021 19:37:44 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 5C62340124; Wed,  2 Jun 2021 19:37:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 714b386a-7765-494f-a648-d2acdcad1849
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1622662676; x=1654198676;
  h=date:from:to:cc:message-id:references:mime-version:
   in-reply-to:subject;
  bh=Oo1yVpiLpXAxmPuTnGP3FETyige1IcwNMzEh/RjZTOE=;
  b=aouba3SOXtl1D/ZhN1YlRNmTPqd4Jb/lGSvGlqjfwoNpIm6kN4L2UIec
   AE7DLR8Yf8XruumrolSmfaV/qgLDIx5TLOce5lRD4H37exBHgc5LJRKw9
   1kffRxBn4rWF8uITp+lsT3tmf7MknrA2rwm7TJh+gtDaq+sw7ne2B05zW
   s=;
X-IronPort-AV: E=Sophos;i="5.83,242,1616457600"; 
   d="scan'208";a="128919057"
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode
Date: Wed, 2 Jun 2021 19:37:43 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
	<mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
	<hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "linux-mm@kvack.org"
	<linux-mm@kvack.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
	<rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
	"pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "anchalag@amazon.com" <anchalag@amazon.com>,
	"dwmw@amazon.co.uk" <dwmw@amazon.co.uk>
Message-ID: <20210602193743.GA28861@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
 <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
> 
> 
> 
> 
> On 5/28/21 5:50 PM, Anchal Agarwal wrote:
> 
> > That only fails during boot but not after the control jumps into the image. The
> > non boot cpus are brought offline(freeze_secondary_cpus) and then online via cpu hotplug path. In that case xen_vcpu_setup doesn't invokes the hypercall again.
> 
> 
> OK, that makes sense --- by that time VCPUs have already been registered. What I don't understand though is why resume doesn't fail every time --- xen_vcpu and xen_vcpu_info should be different practically always, shouldn't they? Do you observe successful resumes when the hypercall fails?
> 
> 
The resume won't fail because, within the image, xen_vcpu and xen_vcpu_info are
the same: they hold the values that were captured when the hibernation image
was saved. So whatever value xen_vcpu received during boot-time registration on
resume is essentially lost once the jump into the saved kernel image happens.
The interesting part is that when KASLR is disabled, the boot-time vcpu_info
MFN is the same as the one in the image. Once KASLR is enabled, that value
sometimes changes, and whenever it does, the resume gets stuck. Does that make
sense?

No, it does not resume successfully if the hypercall fails; I verified this by
explicitly resetting the vcpu and invoking the hypercall. I am just wondering
why the restore logic fails to work here, or perhaps I am missing a critical
piece.
> >
> > Another line of thought is something what kexec does to come around this problem
> > is to abuse soft_reset and issue it during syscore_resume or may be before the image get loaded.
> > I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
> 
> 
> Right, that sounds like it should work.
> 
Do you mean the soft reset, or re-registering the vcpus?

-Anchal
> 
> -boris
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 20:39:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 20:39:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136228.252672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loXeJ-0003e3-2T; Wed, 02 Jun 2021 20:39:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136228.252672; Wed, 02 Jun 2021 20:39:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loXeI-0003dw-Vd; Wed, 02 Jun 2021 20:39:34 +0000
Received: by outflank-mailman (input) for mailman id 136228;
 Wed, 02 Jun 2021 20:39:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loXeH-0003dm-KW; Wed, 02 Jun 2021 20:39:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loXeH-0007Lw-C6; Wed, 02 Jun 2021 20:39:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loXeH-0007KU-14; Wed, 02 Jun 2021 20:39:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loXeH-00079l-0W; Wed, 02 Jun 2021 20:39:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j11oJkvtR6ov15B4zWv2Q01wCSwkhtwyPE9XoW6JHbk=; b=FT+HGb97Tfpy7bHlNpMFj43kf5
	xl02bNn4FusS0661Q37xfpFwFLrdmy9yCFquDDSAle6wSOsyRleZlxODPcC6+98fM8rm9+jQdh0D1
	ap5mJbbHxrOy9l/MiNPJc0a4rvahQzSmo8oNmnWh+QgmFGvti72nAkBsXDCww2Yc00gY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162337-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162337: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Jun 2021 20:39:33 +0000

flight 162337 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162337/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 20 guest-start/debianhvm.repeat fail in 162330 pass in 162337
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 162330 pass in 162337
 test-armhf-armhf-xl-vhd      20 leak-check/check           fail pass in 162330

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162330
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162330
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162330
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162330
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162330
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162330
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162330
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162330
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162330
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162330
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162330
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162337  2021-06-02 10:41:53 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:03:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:03:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136242.252686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loZtE-0008Hm-LX; Wed, 02 Jun 2021 23:03:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136242.252686; Wed, 02 Jun 2021 23:03:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loZtE-0008HJ-I9; Wed, 02 Jun 2021 23:03:08 +0000
Received: by outflank-mailman (input) for mailman id 136242;
 Wed, 02 Jun 2021 23:03:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loZtD-0008HD-7Z
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:03:07 +0000
Received: from mail-ot1-x32b.google.com (unknown [2607:f8b0:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c14716ad-c91b-48dc-91d8-509b9b18b747;
 Wed, 02 Jun 2021 23:03:06 +0000 (UTC)
Received: by mail-ot1-x32b.google.com with SMTP id
 i12-20020a05683033ecb02903346fa0f74dso3979966otu.10
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:03:06 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id q63sm305301oic.15.2021.06.02.16.03.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 02 Jun 2021 16:03:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c14716ad-c91b-48dc-91d8-509b9b18b747
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=yMlhmCz+AUe3RoYBIDIG9UWgM5k+DTp0DzTeCajd2K4=;
        b=ZGtfjnNtBOociojIqhH0YEFOQcFx+RgzyN/iQV8hY3IFnPS5XllNv1a3Zkdv41aX3Z
         ynHGbNH0sePXxgNMhorqhzqNp0I62gCT4EkFuUOfZ5vIdQoNYYXhK1tRZrm4uSm7m6lI
         o2RDy9ZBsHHkX4vbBefYlyELzSBRf5hv3wnEHlnkYmFwSUP0GTe89a2wmSKHZLtw8P5L
         tB0yfYsw6s2DJSRvMBMSJ1uzOAafCWMAMAQKN71x17SnN05QhLxdMwB8mUxa6Sl/6yVD
         swTB/ETBsRwwrm/ybJadVDlpqs4p2ZQuQfd4jsbF9hcHCqH+/YG+g78xVf2F6eunY3yQ
         GEqg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=yMlhmCz+AUe3RoYBIDIG9UWgM5k+DTp0DzTeCajd2K4=;
        b=PPjL6oPij2LfBZH2SV3T8loSqNLA/vCTZ9gM7w4yO4E41CJbeTkYsa8LgyhmfjTJ3h
         /2t8NCXSQH2M6SHaIzgow2qIB8W36HZDWvcapldJMj9hD1CTCrrp0KOK1/73cqNyK41i
         a3nJ3uKa5LZZ2LJnNSLiSHa+Z2M4OPD//XHpM6WeZBMRJ42Kz8eYwKCdRIMUq6Q2kgZj
         QLNZCLr438tJADxnJD4Det0VIH/klL02IItbWcB96Cbb/vVRSagvlSOxnl4jNDoHeHnm
         doTHS1OBLprwtP6lQqN4zhVx7oyKsHYqBkDkHuSHKsgTeic2HWkgxO7U4kkTk9wCRixi
         vbJA==
X-Gm-Message-State: AOAM532nW1+dTjgW3I5MoR54+1cDx8R7kSFpPlICbXUW9lu6hDaz/aAu
	Ead9xfhbvw0f1JyvR3/buLN2q6r6tTHylg==
X-Google-Smtp-Source: ABdhPJwmbbu9XwDQXn6VKH8pidlgSNzOT96qN/zc28stBJz+kFdl1TAHIeRCYVEyUHBesmgku68ZnA==
X-Received: by 2002:a9d:7750:: with SMTP id t16mr27989784otl.135.1622674985524;
        Wed, 02 Jun 2021 16:03:05 -0700 (PDT)
Subject: Re: [PATCH v5 1/2] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1622645816.git.connojdavis@gmail.com>
 <2c24cadace47e51e9e3fce6614e0f5e83db6c3af.1622645816.git.connojdavis@gmail.com>
 <161181af-3e92-d5db-9775-048a512499de@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <b132ebe3-f197-7432-d8d7-04d49d4de115@gmail.com>
Date: Wed, 2 Jun 2021 17:03:12 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <161181af-3e92-d5db-9775-048a512499de@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US



On 6/2/21 9:52 AM, Jan Beulich wrote:
> On 02.06.2021 17:08, Connor Davis wrote:
>> Defaulting to yes only for X86 and ARM reduces the requirements
>> for a minimal build when porting new architectures.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> Please can you accumulate tags you've already got into re-submissions,
> so the status of patches is clear and committers will know what is
> ready to go in without having to hunt down anything? In the case here
> you've lost my A-b.
>
> Jan
Oops, sorry about that. Will resend shortly.

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:20:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136249.252696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAQ-00029u-5T; Wed, 02 Jun 2021 23:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136249.252696; Wed, 02 Jun 2021 23:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAQ-00029n-2N; Wed, 02 Jun 2021 23:20:54 +0000
Received: by outflank-mailman (input) for mailman id 136249;
 Wed, 02 Jun 2021 23:20:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaAO-00029h-7N
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:20:52 +0000
Received: from mail-oi1-x22c.google.com (unknown [2607:f8b0:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebae24f0-1fd1-4bb1-9410-f3a5064d362c;
 Wed, 02 Jun 2021 23:20:51 +0000 (UTC)
Received: by mail-oi1-x22c.google.com with SMTP id z3so4399670oib.5
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:20:51 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id h7sm306175oib.49.2021.06.02.16.20.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:20:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebae24f0-1fd1-4bb1-9410-f3a5064d362c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=a6gIdnq4s9ju4i8wDfgtjonCew0MtQBOOecfSAaR/Y8=;
        b=RDux5V6jT9p/Hdz3xcl1dmxCpdw5uqfFdfRYZx45QRkMXGmqoaW8V2mhwnKojSE2YS
         bPfl3XmhOF5UG+CVo99IsxSqZxZxWH5hKkBtLtwTX4oJKRUAtxkzCA0jzPg4I4xqS7q9
         FxLPT/J3damlNVOxT9Kv/R+yiccHgBeOVx5Q6+zj5iy1Z9gQ+29kOHGDiPVPVxywb9TA
         ECq51qyLqK07TcRISD1UnCyxk4DJTgy7MBoEYzQoOIn0TIhQDrqd6HKGymmtwlKdRFff
         /iOLrDFUhY8JcvxScMGxPpHKa5w9u9Dlkm6TvevqgIy7EQIe7EJ/eBgHJrfVIwHrmn2H
         HyVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=a6gIdnq4s9ju4i8wDfgtjonCew0MtQBOOecfSAaR/Y8=;
        b=c7Y7r00jP+C/GNxeeWMeLiu4qKndd1wfolHyQmahx2f3WI0Jwm7JrU0ElrO2WJpdzR
         eSsKMJIWStL149o5AuhrwPvIyho4S4/sLrg3xDOnm6Cx3ZMiZGBEFLydJ4ifMdKOXmcq
         g4Aa9Fn8GHxxPiIx4EJ+SAvRvOh33GtmcFRRjKs7mBXZjds4nxQe1N1DhMMu0qT0Vig0
         i18O7H3siVpSFLCAcf+vLYVc0Y2tYY8FP/hN3pYmus3gvggcWLZx3w9aldr/I0XZJNsN
         ZHIxgPCkeBv3a7CkZwgrj8r9nZnHtZBH0IEj3uxPME7mfyRv8loVcEKrsqhn3DiUv5JE
         +z0w==
X-Gm-Message-State: AOAM532V2pOtCf5/K2ptOTVCIg3nf2fA7WhHZjWne6S475HbAnwwl0IR
	6zVgfvaD2e7e9R39bDatHTDpAE3mxUgrIw==
X-Google-Smtp-Source: ABdhPJwgTETcQMos9JHgGva4urfC/93gH77w5U6X2lN6Sfnb2Ii1cOhr9ExkECAkCLudW77bkgaAAQ==
X-Received: by 2002:aca:5dc6:: with SMTP id r189mr5575738oib.164.1622676050576;
        Wed, 02 Jun 2021 16:20:50 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 0/2] Minimal build for RISCV
Date: Wed,  2 Jun 2021 17:20:43 -0600
Message-Id: <cover.1622675569.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0], rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it only supports
building TARGET=riscv64/head.o. The arch/riscv/riscv64/head.S is just
a simple while(1).

The first patch modifies non-RISCV bits to enable building a
config with !CONFIG_HAS_NS16550.
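
Concretely, the HAS_NS16550 change amounts to an arch-conditional
Kconfig default (this mirrors the patch posted later in the thread; in
Kconfig, the first `default` whose condition holds takes effect, so the
RISCV line must come before the unconditional one):

```
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	default n if RISCV
	default y
	help
	  This selects the 16550-series UART support. For most systems, say Y.
```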

The second patch adds the make/Kconfig boilerplate alongside head.S and
asm-riscv/config.h (head.S references ENTRY, which is defined in
asm-riscv/config.h).
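
As a rough illustration, the "while(1)" in head.S could look something
like the following (a hypothetical sketch only, not the exact file from
patch 2; ENTRY is assumed here to come from asm-riscv/config.h):

```asm
/* Sketch of a minimal riscv64 head.S: park the hart in an idle loop. */
#include <asm/config.h>         /* assumed to provide ENTRY() */

        .text

ENTRY(start)
1:      wfi                     /* wait for interrupt (low-power idle) */
        j       1b              /* jump back: the infinite loop */
```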

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v5:
  - Added missing A-by from Jan to patch 1

Changes since v4:
  - Dropped patches 2 and 4 as these have been applied
  - Moved arch/riscv/head.S to arch/riscv/riscv64/head.S for consistency
    with ARM.
  - Added Bob and myself to MAINTAINERS

Changes since v3:
  - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
    applied by Jan
  - Adjusted Kconfig condition for building NS16550
  - Use bool rather than bool_t
  - Removed riscv memory map, as this should probably be done later once
    the frametable size is figured out
  - Consolidated 64-bit #defines in asm-riscv/config.h
  - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
    and CONFIG_DEBUG_INFO
  - Fixed logic/alignment/whitespace issues in Kconfig files
  - Use upstream archlinux riscv64 cross-compiler packages instead of
    custom built toolchain in docker container

Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE
--
Connor Davis (2):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen: Add files needed for minimal riscv build

 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/drivers/char/Kconfig                |  1 +
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 13 files changed, 149 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:20:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136250.252708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAU-0002Qm-Dw; Wed, 02 Jun 2021 23:20:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136250.252708; Wed, 02 Jun 2021 23:20:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAU-0002Qf-9p; Wed, 02 Jun 2021 23:20:58 +0000
Received: by outflank-mailman (input) for mailman id 136250;
 Wed, 02 Jun 2021 23:20:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaAT-00029h-21
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:20:57 +0000
Received: from mail-ot1-x332.google.com (unknown [2607:f8b0:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de30167b-aaaf-48cc-8052-2ad81dfe9267;
 Wed, 02 Jun 2021 23:20:54 +0000 (UTC)
Received: by mail-ot1-x332.google.com with SMTP id
 i12-20020a05683033ecb02903346fa0f74dso4014343otu.10
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:20:54 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id h7sm306175oib.49.2021.06.02.16.20.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:20:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de30167b-aaaf-48cc-8052-2ad81dfe9267
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XGumMjGtg94IhtORkgOv43FTexceLATJry4J5HfIX+U=;
        b=GJUKuWulbkk8SiS+7y0Ky1aZjFcBJhMGZhbVX0fOShDv4f/+h8ysaXfiB2J6V6FK/5
         i8Rr74urSUQYipKjCAPone16N6EfHWJcTHx1bIWW8Q8jP8Ki2osR6fI+vPFHXpILT42V
         z9l3rHbAEYdPSvt0VC/TKvFA9Uq5kAgzxKO9mYZ+IGoaeqCLoHMes96tSD9/va3MN1iR
         LtO9FyrU2SozQvgzNLk55wr6cULy8QYbrBaoBUqKmWYqK91VVmQmeiNrPmBr173nKZiN
         pVlomKcGqqjx7ZfL+U6wa8G6b+jUeEqdyNiDONT24imy2TBMm8GrOvTUZ28IoRhVK4V2
         goog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XGumMjGtg94IhtORkgOv43FTexceLATJry4J5HfIX+U=;
        b=i8I+tUNyQsptqLBF/OnvTxZZEDIsJbZncgL7QgB8B1XMwvnTWC32UEg1DnKJppN+hy
         PbOXRftYF0zfJUkmCLjW2JTka8bx1GTqtSXH0Fb6fru2LbwJvobutVTUvtdMjPC/j25+
         Oyrkz7ubOTCVkFbqagChqHRVALe48xx54pUrCQ+lGz6Q/qwQOrNYw+2oWJdQSPxYg+jU
         OZY3kObl2xiW+a+VKNpIaiCXX2jUAnP2QNTXlKuWQqgwOpGvxYbfhPCqs/3c5hpqwwcC
         WkiTC3T+TpjiRkPABlY3DfnS/k7CIqgGP5T89XhtJs2j5LEIqL6GTPMTlz8amoD9MOSx
         xIkw==
X-Gm-Message-State: AOAM530+KT+T2JPPvGpqCox8YFvy/FZ5cu8iUBGu+xK0SyM+Z3rVgJE6
	4k7rlYRaU5+cACbxYcccxlYJXzMNHxAx+A==
X-Google-Smtp-Source: ABdhPJzkjPITBzNC90Mrf5dLPcyf/H6/h4Eur+atA8GPWeyJqE0uLWrj/FCfRNDEiW/2Noi/J9FIwg==
X-Received: by 2002:a9d:1ec6:: with SMTP id n64mr28337289otn.3.1622676053680;
        Wed, 02 Jun 2021 16:20:53 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 1/2] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Wed,  2 Jun 2021 17:20:44 -0600
Message-Id: <d2d19b62bd2a570db97f2940e6152bf93dc01632.1622675569.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622675569.git.connojdavis@gmail.com>
References: <cover.1622675569.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/char/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..2ff5b288e2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,5 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:21:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:21:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136251.252718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAZ-0002mV-LT; Wed, 02 Jun 2021 23:21:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136251.252718; Wed, 02 Jun 2021 23:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaAZ-0002mG-HQ; Wed, 02 Jun 2021 23:21:03 +0000
Received: by outflank-mailman (input) for mailman id 136251;
 Wed, 02 Jun 2021 23:21:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaAY-00029h-2H
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:21:02 +0000
Received: from mail-oi1-x22d.google.com (unknown [2607:f8b0:4864:20::22d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25796a31-9b0a-47e9-a6cc-203c639555f1;
 Wed, 02 Jun 2021 23:20:57 +0000 (UTC)
Received: by mail-oi1-x22d.google.com with SMTP id d21so4371888oic.11
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:20:57 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id h7sm306175oib.49.2021.06.02.16.20.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:20:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25796a31-9b0a-47e9-a6cc-203c639555f1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=qL6T+IYDMnlwaO6MHJMHo1K3ijLdkUlVWUNRLQ1Hx3tzI+2KdOKpsCnzfwEBsmPsXa
         zZk2BCSF9Ln6qIj/1q2nwHbgjGkXUG2GRXesrIas/ZSyY7atVraGlmtmQFGBBwrHXziN
         mk7aHJwQfcik+zsvd9EyKwZIwxYoHpH0TnFMD1rWdB4cHB7BSw3W2F2g7lBirT0QF0Wf
         5zmo1p1uQJhecYR4r1vrva4OG86jHlegM0IXRtGDD/e6gqKsIF9isQRKwd3GsiPcDUB8
         9YObQTfuYz7D4W1gP6f9GYSvDMRbe0afKEg7kTN4glgIMe44vGeFWep3nV+TlpUvtmMi
         Du8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=Q2zlAF4K7oKIEIPj9Xh2z5vGHcqs83lAt6EQFl6m0zVp/osPNaY41VHwawrPV7bqN7
         j6A40wtqxEKLuBkV02FIXRLPRQBqrYdxVZ6xJTC9iLfO+fdMhPkJnEg6tfHmPLPe+g0B
         yiu3lSyJtQ3dCBoY8+54joaOyUIsqdke/LwR6opmT1u0I9XQnhROpLA6XSoIwQb3YZ6W
         EoTMMoyHnE1/qULxUj7TyrEeb39YK9E1TfiTn8jM2DAYDiDXHPOuJOe0FLlvqArSMJOQ
         eXRmdEo420rvXzpU1DpRl0vy4HOFiuu6sAO8KTFvbiyd96G3QsUPaueYkpD4FzPfiaii
         LfYQ==
X-Gm-Message-State: AOAM531tiILK77KuefvUUJL5EsPsx77ctU9tHfc0IUtM6w2cYEJAUSl+
	SilBqTbf7+18IcSXmh8fPSwgn5iPS0kXQg==
X-Google-Smtp-Source: ABdhPJy7jQErJRgEkVXaKlWUxYobW+5eP2rkRPBWtrMLLLKUzN59zPAkpOiVmuQR+WnVUVjYAIJlMA==
X-Received: by 2002:aca:f455:: with SMTP id s82mr11834239oih.85.1622676055185;
        Wed, 02 Jun 2021 16:20:55 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 2/2] xen: Add files needed for minimal riscv build
Date: Wed,  2 Jun 2021 17:20:45 -0600
Message-Id: <d4670a35758df878565cf757bbc20e2815618eb5.1622675569.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622675569.git.connojdavis@gmail.com>
References: <cover.1622675569.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add arch-specific makefiles and configs needed to build for
riscv. Also add a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o

No other TARGET is supported at the moment.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
Bob: I moved back to XEN_TARGET_ARCH=riscv64 because supplying
just XEN_TARGET_ARCH=riscv causes TARGET_ARCH == TARGET_SUBARCH, and
that broke the build after the recent commit b6ecd5c8bc
"build: centralize / unify asm-offsets generation". Using plain riscv
also deviates from how x86 and arm work now, so I think this change is
for the best for now. That commit is also why the PHONY include target
is added in the riscv/Makefile.
---
 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 12 files changed, 148 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..956e71220d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -456,6 +456,14 @@ F:	tools/libs/light/libxl_nonetbuffer.c
 F:	tools/hotplug/Linux/remus-netbuf-setup
 F:	tools/hotplug/Linux/block-drbd-probe
 
+RISCV
+M:	Bob Eshleman <bobbyeshleman@gmail.com>
+R:	Connor Davis <connojdavis@gmail.com>
+S:	Supported
+F:	config/riscv64.mk
+F:	xen/arch/riscv/
+F:	xen/include/asm-riscv/
+
 RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..a5a21e5fa2
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,5 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 7ce7692354..89879fad4c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+              -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+                                -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..bd8381c5e0
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,47 @@
+config RISCV
+	def_bool y
+
+config RISCV_64
+	def_bool y
+	select 64BIT
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/tiny64_defconfig"
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA if RISCV_64
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed to
+	  emit when building Xen, which results in compressed instructions
+	  in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..942e4ffbc1
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,2 @@
+.PHONY: include
+include:
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..53dadb8975
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,14 @@
+########################################
+# RISCV-specific definitions
+
+CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
new file mode 100644
index 0000000000..3c9a2ff941
--- /dev/null
+++ b/xen/arch/riscv/configs/tiny64_defconfig
@@ -0,0 +1,13 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_RISCV_64=y
+CONFIG_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/riscv64/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..e2ae21de61
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,47 @@
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+# define MAX_VIRT_CPUS 128u
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+#define CONFIG_PAGEALLOC_MAX_ORDER  18
+#define CONFIG_DOMU_MAX_ORDER       9
+#define CONFIG_HWDOM_MAX_ORDER      10
+
+#define OPT_CONSOLE_STR "dtuart"
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:26:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:26:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136270.252729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaGF-00044D-Ag; Wed, 02 Jun 2021 23:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136270.252729; Wed, 02 Jun 2021 23:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaGF-000446-7e; Wed, 02 Jun 2021 23:26:55 +0000
Received: by outflank-mailman (input) for mailman id 136270;
 Wed, 02 Jun 2021 23:26:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaGD-00043x-Ul
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:26:54 +0000
Received: from mail-ot1-x32e.google.com (unknown [2607:f8b0:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcd289a3-4d84-419f-a791-9f768d72cf95;
 Wed, 02 Jun 2021 23:26:53 +0000 (UTC)
Received: by mail-ot1-x32e.google.com with SMTP id
 c31-20020a056830349fb02903a5bfa6138bso4041580otu.7
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:26:53 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id m189sm303869oif.45.2021.06.02.16.26.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 02 Jun 2021 16:26:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcd289a3-4d84-419f-a791-9f768d72cf95
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=if2k95Xcwhu7SiKaprmYQn758izSoEUYUBQKX0dMrn4=;
        b=K5ZQIEVzIevNmBeUHFWq2i6+8Gur5QNfCozrPUBPtGKQRKEkxOrWTqVN5JB9Bootgd
         LYaQj8TdCRBbBKKxNHmpCe5MZl1/XHu3237aVcvX4veynbX6znmYvPrSAz1uqu1ZseF1
         0sescp+evpmgj9AcUPsaD7YzJy2t8kJK0F+qG90lxThDKEMhJ9WHkihqbwL2twjJDAPV
         qObC6v5creOlXQ0zC1sU3QkTAjNTm6X0jOp4Z5tm+IsNKZz4r5i4Oon/NzZLEOuaJCET
         H83GnUY0/X/w5dWgAqrC/LOtZQp5qxvqSlTYrtLiLDnGVH830ZvKZWTDVlOjB0OvzGzA
         v+sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=if2k95Xcwhu7SiKaprmYQn758izSoEUYUBQKX0dMrn4=;
        b=oDw5teviwHzGot08d3hC5+nwqpa6NnyTXOrDPHq1eQUTAMvO5a4yb3CqQgxj/Ai4dm
         d2+rXT7wgSsq14qYRyetUPB8puhIVjznmm9UVV4OdmjIYFRrWwDiGGlUPlNPZaA9LRO/
         GiecMTCHJfhn/u2K+m7VmKl+JGmqta3EWCFsMYGl+KWaGn+PH3f/E2Wg9EJs4PtJBv+z
         vEL0QHForW4M41XiBgKwvcxnpTxVn1RKqaa5t3qMK4iNkgXl0rwq9s9vXaUg8re4DnL2
         CwYOg8QBbAD8AmhWC1LCDz+TkF7Q40WU06R5yd50KmIjL6XDzuOuGxbdiWiOnZZsVY4a
         ST6w==
X-Gm-Message-State: AOAM533CE7cX2Z1kGiObcq8pVQqSVRDBK1XFLRLB9y26jRdI2zTcYUfx
	P4nc4WYMN3zgFgTcNhAn/B4=
X-Google-Smtp-Source: ABdhPJwMknXsNXyiNZKY5veFoP+mF91rPo/KRGcv1QuTEpOfbFdRNtIuGVVlFo0Wv8NyFQREf2dv3w==
X-Received: by 2002:a9d:6a05:: with SMTP id g5mr27413704otn.354.1622676412591;
        Wed, 02 Jun 2021 16:26:52 -0700 (PDT)
Subject: Re: [PATCH v5 0/2] Minimal build for RISCV
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1622675569.git.connojdavis@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <6f73fc6a-1edc-695b-d224-43590fde4f0c@gmail.com>
Date: Wed, 2 Jun 2021 17:26:59 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <cover.1622675569.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US

Sigh... let's try this again with the version numbers in sync

On 6/2/21 5:20 PM, Connor Davis wrote:
> Hi all,
>
> This series introduces a minimal build for RISCV. It is based on Bobby's
> previous work from last year[0] rebased onto current Xen.
>
> This series provides the patches necessary to get a minimal build
> working. The build is "minimal" in the sense that it only supports
> building TARGET=riscv64/head.o. The arch/riscv/riscv64/head.S is just
> a simple while(1).
>
> The first patch is a modification to non-RISCV bits that enables building
> a config with !CONFIG_HAS_NS16550.
>
> The second patch adds the make/Kconfig boilerplate alongside head.S and
> asm-riscv/config.h (head.S references ENTRY that is defined in
> asm-riscv/config.h).
>
> [0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/
>
> Thanks,
> Connor
>
> --
> Changes since v5:
>    - Added missing A-by from Jan to patch 1
>
> Changes since v4:
>    - Dropped patches 2 and 4 as these have been applied
>    - Moved arch/riscv/head.S to arch/riscv/riscv64/head.S for consistency
>      with ARM.
>    - Added Bob and myself to MAINTAINERS
>
> Changes since v3:
>    - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
>      applied by Jan
>    - Adjusted Kconfig condition for building NS16550
>    - Use bool rather than bool_t
>    - Removed riscv memory map, as this should probably be done later once
>      the frametable size is figured out
>    - Consolidated 64-bit #defines in asm-riscv/config.h
>    - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
>      and CONFIG_DEBUG_INFO
>    - Fixed logic/alignment/whitespace issues in Kconfig files
>    - Use upstream archlinux riscv64 cross-compiler packages instead of
>      custom built toolchain in docker container
>
> Changes since v2:
>    - Reduced number of riscv files added to ease review
>
> Changes since v1:
>    - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
>      fixed for 4.15
>    - Moved #ifdef-ary around iommu_enabled to iommu.h
>    - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
>      instead of defining an empty struct when !CONFIG_GRANT_TABLE
> --
> Connor Davis (2):
>    xen/char: Default HAS_NS16550 to y only for X86 and ARM
>    xen: Add files needed for minimal riscv build
>
>   MAINTAINERS                             |  8 +++++
>   config/riscv64.mk                       |  5 +++
>   xen/Makefile                            |  8 +++--
>   xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>   xen/arch/riscv/Kconfig.debug            |  0
>   xen/arch/riscv/Makefile                 |  2 ++
>   xen/arch/riscv/Rules.mk                 |  0
>   xen/arch/riscv/arch.mk                  | 14 ++++++++
>   xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>   xen/arch/riscv/riscv64/asm-offsets.c    |  0
>   xen/arch/riscv/riscv64/head.S           |  6 ++++
>   xen/drivers/char/Kconfig                |  1 +
>   xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>   13 files changed, 149 insertions(+), 2 deletions(-)
>   create mode 100644 config/riscv64.mk
>   create mode 100644 xen/arch/riscv/Kconfig
>   create mode 100644 xen/arch/riscv/Kconfig.debug
>   create mode 100644 xen/arch/riscv/Makefile
>   create mode 100644 xen/arch/riscv/Rules.mk
>   create mode 100644 xen/arch/riscv/arch.mk
>   create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>   create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
>   create mode 100644 xen/arch/riscv/riscv64/head.S
>   create mode 100644 xen/include/asm-riscv/config.h
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:38:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136280.252751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRS-0005qZ-O3; Wed, 02 Jun 2021 23:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136280.252751; Wed, 02 Jun 2021 23:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRS-0005qS-L1; Wed, 02 Jun 2021 23:38:30 +0000
Received: by outflank-mailman (input) for mailman id 136280;
 Wed, 02 Jun 2021 23:38:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaRR-0005ZJ-ED
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:38:29 +0000
Received: from mail-ot1-x329.google.com (unknown [2607:f8b0:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19cfe518-f0cd-443d-b221-ca53f475c0aa;
 Wed, 02 Jun 2021 23:38:27 +0000 (UTC)
Received: by mail-ot1-x329.google.com with SMTP id
 x41-20020a05683040a9b02903b37841177eso4051703ott.9
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:38:27 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id z15sm301633otp.20.2021.06.02.16.38.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:38:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19cfe518-f0cd-443d-b221-ca53f475c0aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XGumMjGtg94IhtORkgOv43FTexceLATJry4J5HfIX+U=;
        b=l29U+8l1/SaCFexdatu2QaPv9YjzKWsUYJL2XWAweIgoXgcgB6hGAq/lv/7j/wo3td
         iss+1jpTy84C4b/+vV8aqhh5p+CGmV+9u7v5z0DOpvNhIdgenBu+Rkj3xonfeDjmEit7
         xQ4JqqYr3vf3CtxuMxYKQNaAFgz6QxEBARaxLLh67N8bWlE3mKHL0xHqvWD9asQpqaIu
         p/aP23SJlMm5IR7fDT8Eyt0boohscK1vGeMUrJYmqWjqDhbiMiaqHzMGN7N2LK/oyzMY
         mDAx1gI+i4lLNQXfdWiudmMr016EDVwO9MefLYog/Pz+rwqODBSvlmgjD/7poi1qb/Ig
         Z4XQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XGumMjGtg94IhtORkgOv43FTexceLATJry4J5HfIX+U=;
        b=b8lIn92hX05W61X+zapVKYtloh9Su+wHuCXsjhbGv6OEOPVqpn6nmcpSCCfU0NnWOx
         LBo53/IZgBH+8VbEFaPgYwgW7fUNxBQ7Upv0oY5HpRIE9gchZaKs8qR3FpIb+amGY1QP
         0SHNsVR+6CQ36bHMhOqn+61SfKMDOzm6zA/w0bQLb/mSP+s5EA+/g4l+vFgABkfCuuqx
         PK0KvCDEjEUSGgS2eTUhoOolpA5N9xPiZA/pUuqYg8fjI3XocZt/cJzFFsKie74rZC2E
         TmWC0b+qCoNVQBn/8q4c2UDvZL717yiL+yo5PsOrmOhAuACWTPf2Y7jPoaeKeb8wcue2
         nUIA==
X-Gm-Message-State: AOAM5319ZormXQxa6WL5pmeQHqFdlNNa71ijhqATo9gY03D/jkFpkyHW
	dyMQ47IPPiAa4ZiG1/R+SPcb9QBiUxSm8w==
X-Google-Smtp-Source: ABdhPJy8zR/jCqTi3XYkmniIn/mLPkEPqqLViFpBtjEcj7s7YhPNEVS6KKy5+WJrnoJkc+q91lyrWA==
X-Received: by 2002:a05:6830:803:: with SMTP id r3mr27984351ots.237.1622677106833;
        Wed, 02 Jun 2021 16:38:26 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 1/2] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Wed,  2 Jun 2021 17:38:09 -0600
Message-Id: <d2d19b62bd2a570db97f2940e6152bf93dc01632.1622676439.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622676439.git.connojdavis@gmail.com>
References: <cover.1622676439.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.
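Kconfig evaluates `default` lines in order and uses the first one whose condition holds, which is why the `default n if RISCV` line added here must precede the existing `default y`. A rough Python model of that selection rule (the helper and its inputs are illustrative, not Xen or kconfig code):

```python
def first_matching_default(defaults, config):
    """Kconfig-style default selection: the first `default` whose
    condition is satisfied provides the symbol's value."""
    for value, condition in defaults:
        if condition(config):
            return value
    return None  # no default applies

# HAS_NS16550 after this patch: `default n if RISCV`, then an
# unconditional `default y`.
has_ns16550_defaults = [
    ("n", lambda cfg: cfg.get("RISCV", False)),
    ("y", lambda cfg: True),
]
```

With a RISCV config this yields "n"; x86 and ARM configs fall through to the unconditional "y", so their behavior is unchanged.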

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/char/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..2ff5b288e2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,5 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:38:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136279.252741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRO-0005ZW-HJ; Wed, 02 Jun 2021 23:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136279.252741; Wed, 02 Jun 2021 23:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRO-0005ZP-Dg; Wed, 02 Jun 2021 23:38:26 +0000
Received: by outflank-mailman (input) for mailman id 136279;
 Wed, 02 Jun 2021 23:38:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaRM-0005ZJ-Mm
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:38:24 +0000
Received: from mail-oi1-x236.google.com (unknown [2607:f8b0:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dce48131-09a5-4356-be9d-2d4aa59caabb;
 Wed, 02 Jun 2021 23:38:23 +0000 (UTC)
Received: by mail-oi1-x236.google.com with SMTP id u11so4459092oiv.1
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:38:23 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id z15sm301633otp.20.2021.06.02.16.38.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:38:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dce48131-09a5-4356-be9d-2d4aa59caabb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=85KPCJRDket6J5WaevzvN9WntHNCxbXkPkfTLkLbIWw=;
        b=ILt2LJXPmDifrF/NE1OgmCB1n5Pbw/1VjGPNgycVf0ZSzAPKT9sjpExW3Ho+HjLRLE
         iuR7oLxUoUydCS+PVSmFxN8w8lo0hlAuzpAeYWS2WZ4INrB2lYU1nP+fkrseuJFxv6RV
         6lzBWkBGbAxUo/1bnkq7sZdciR/ZvSOf5ozgx/gvwKtUBJXfEutMPjd8puh0s2g+xK/I
         0+a/xEX1TKU8WC47fs/we4F+vosZWDqhuy4gnZ87tPbMzUyDCLD+AXKPqpKQUV57u9Jn
         w3MPcJI4Qrcexpqb6MSluMLXlkQkLzJKQ9/wAgVtj4ED9DWiK/dVK6JkPLWp7uOvefwz
         zo9A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=85KPCJRDket6J5WaevzvN9WntHNCxbXkPkfTLkLbIWw=;
        b=bI4BiolphmsuOT602TbxgYlAAbJcJNlQ5WFUrLOdVQ3KfeSkEwIsCW1FQiK18MiWi6
         iZoPaF68PI3fNWU7/IaJwxpFj3Iy2GauPIzadQDuC/poqvn/NL0HT+X2RuCPncX05OFM
         1i+/U6TzeV2nyhXpDqtdb/fXTZj8O5N7rW66OST247SHL+0ZUccjIIk5IQTr5PRUViQ4
         zrtZ57A+O5jcGi8Ea+kdnUTdGtsUsgN2dDf+zvBhmlzDzg2yBYKlGDiE6VMncT0T68Z1
         UO7J8maZgV1Bd0tUCW9a4rjpfKRJDdgbeh3h0l6Jf6cjYrC95ybwlXaK3nmBiOEIqKoS
         76zA==
X-Gm-Message-State: AOAM531qKCjLeDbzOqdL1lNHdRMQvTnHBm0LqZpGGWbrmqf9Ckv02XTE
	8Fj9v5Hvgl1vkG/k9aAHZIFrC4QzqmSg1A==
X-Google-Smtp-Source: ABdhPJzlF0vYZW61JokFIIoU9OrUEJ0HX2WAzsZzupHu8Bv2ozomKot9K2li+V3JLviBRjM8Ohy7WA==
X-Received: by 2002:aca:b506:: with SMTP id e6mr23465875oif.178.1622677102912;
        Wed, 02 Jun 2021 16:38:22 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 0/2] Minimal build for RISCV
Date: Wed,  2 Jun 2021 17:38:08 -0600
Message-Id: <cover.1622676439.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0] rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it only supports
building TARGET=riscv64/head.o. The arch/riscv/riscv64/head.S is just
a simple while(1).

The first patch is a modification to non-RISCV bits that enables building
a config with !CONFIG_HAS_NS16550.

The second patch adds the make/Kconfig boilerplate alongside head.S and
asm-riscv/config.h (head.S references ENTRY that is defined in
asm-riscv/config.h).

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v6:
  - Make sure patch versions are consistent

Changes since v5:
  - Added missing A-by from Jan to patch 1

Changes since v4:
  - Dropped patches 2 and 4 as these have been applied
  - Moved arch/riscv/head.S to arch/riscv/riscv64/head.S for consistency
    with ARM.
  - Added Bob and myself to MAINTAINERS

Changes since v3:
  - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
    applied by Jan
  - Adjusted Kconfig condition for building NS16550
  - Use bool rather than bool_t
  - Removed riscv memory map, as this should probably be done later once
    the frametable size is figured out
  - Consolidated 64-bit #defines in asm-riscv/config.h
  - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
    and CONFIG_DEBUG_INFO
  - Fixed logic/alignment/whitespace issues in Kconfig files
  - Use upstream archlinux riscv64 cross-compiler packages instead of
    custom built toolchain in docker container

Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE
--
Connor Davis (2):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen: Add files needed for minimal riscv build

 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/drivers/char/Kconfig                |  1 +
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 13 files changed, 149 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 02 23:38:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Jun 2021 23:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136281.252763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRX-0006Ap-30; Wed, 02 Jun 2021 23:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136281.252763; Wed, 02 Jun 2021 23:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaRW-0006Ae-Vg; Wed, 02 Jun 2021 23:38:34 +0000
Received: by outflank-mailman (input) for mailman id 136281;
 Wed, 02 Jun 2021 23:38:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNUT=K4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loaRW-0005ZJ-ET
 for xen-devel@lists.xenproject.org; Wed, 02 Jun 2021 23:38:34 +0000
Received: from mail-ot1-x330.google.com (unknown [2607:f8b0:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4f70d78-b910-461c-8c67-10fb2abedf12;
 Wed, 02 Jun 2021 23:38:32 +0000 (UTC)
Received: by mail-ot1-x330.google.com with SMTP id
 h24-20020a9d64180000b029036edcf8f9a6so4086187otl.3
 for <xen-devel@lists.xenproject.org>; Wed, 02 Jun 2021 16:38:32 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id z15sm301633otp.20.2021.06.02.16.38.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 02 Jun 2021 16:38:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4f70d78-b910-461c-8c67-10fb2abedf12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=HHb9+H0zxnpeplaSqRjyOOQKChR6tNN0/WrUvJjn7mLA2CchBDY/Ie+cmJUz0L/oMR
         l6+AJuSRY5u3nsRuMi7+x6htuP6cPP5BHIrqtErYStKWS9x91OPOKr6bgq7P4Mlbyr5d
         FNicvwjENO/+hs3/T1T9vvSHwF/yXQmEOL0HEWnaUC/HQxlCol0CXXA19z+Lc1NxBjnd
         bnXTKuYVCqH2DLjsII1ScTxBX1AsrBcVkiPsrpYKvok8buyPujmAkMnEF7++yM6mMAOU
         X4dDfmMw8MUFHQ4STTWuXKry6AOH5GvNL1Tsq79bfdrZFR6zXbOkS9YVC1lnrqwT1LA8
         3aXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=y7wXnoCWLDx7IQREahGv6BGDyqktjGf6yjBgrAWmplI=;
        b=JveWQvAQnGB3OYDwZppQfs1Bl6eEf2E5KbXuXyaObEK8T2IiS/zjVHN5BNKneaPxMr
         mP8UbWfbjjETo5jCwSsBG4jkNY99P5aUpFcIhTtjEaKf4OHQOF3Ea/dFLfK8ePyuEeaM
         iI8IsJMu9dSwGjcIyRYrA2ccaEqZKRzaDyrTNV8ZxqI4L2Q9jRYecxrAEBZ39wfoNBIq
         STsxWPIcdWRYlS+0pgPFeQLpR2JmOLwNivNpJ3vkJMFH8l4Vp6qdVcv4h7KqWNMMfRsy
         x0YYdHrpcvKi/1UxlJYBWzyYmIGI25dEeNSfYOPym0Omi2rXnOH+Xh2ab8frv8NgpAEH
         MJvQ==
X-Gm-Message-State: AOAM532pJepjH48w0ulCCPzlRgsBsOHAIrE/mhlIYYcwo5Zy3MKIc32C
	glaoMw7XJGJgrQ5EWTrSMwzZP6ciHzbyDg==
X-Google-Smtp-Source: ABdhPJwrc/OcT43RismaBMCLhAp85KV2VRhAKE4cPgbZCOESCygvJOXkBF5uUDB6oDqcpqlL7QZShQ==
X-Received: by 2002:a9d:5c16:: with SMTP id o22mr10265961otk.319.1622677111637;
        Wed, 02 Jun 2021 16:38:31 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 2/2] xen: Add files needed for minimal riscv build
Date: Wed,  2 Jun 2021 17:38:10 -0600
Message-Id: <d4670a35758df878565cf757bbc20e2815618eb5.1622676439.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622676439.git.connojdavis@gmail.com>
References: <cover.1622676439.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the arch-specific makefiles and configs needed to build Xen for
riscv. Also add a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o

No other TARGET is supported at the moment.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
Bob: I moved back to XEN_TARGET_ARCH=riscv64 because supplying
just XEN_TARGET_ARCH=riscv causes TARGET_ARCH == TARGET_SUBARCH, and
that broke the build after the recent commit b6ecd5c8bc
"build: centralize / unify asm-offsets generation". It also deviates
from how x86 and arm work now, so I think this change is for the best
for now. That commit is also why the PHONY include target is added
in the riscv/Makefile.
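[Editor's sketch, not part of the patch: the sed mapping this note refers to
(added to xen/Makefile below) can be exercised on its own to confirm that
riscv64 collapses to riscv the same way arm32/arm64 collapse to arm:]

```shell
# For each supported XEN_TARGET_ARCH value, show the TARGET_ARCH it
# maps to, using the same sed expressions as xen/Makefile.
for arch in x86_64 x86_32 arm32 arm64 riscv64; do
  echo "$arch -> $(echo "$arch" | \
    sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' -e 's/riscv.*/riscv/g')"
done
```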
---
 MAINTAINERS                             |  8 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 12 files changed, 148 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..956e71220d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -456,6 +456,14 @@ F:	tools/libs/light/libxl_nonetbuffer.c
 F:	tools/hotplug/Linux/remus-netbuf-setup
 F:	tools/hotplug/Linux/block-drbd-probe
 
+RISCV
+M:	Bob Eshleman <bobbyeshleman@gmail.com>
+R:	Connor Davis <connojdavis@gmail.com>
+S:	Supported
+F:	config/riscv64.mk
+F:	xen/arch/riscv/
+F:	xen/include/asm-riscv/
+
 RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..a5a21e5fa2
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,5 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 7ce7692354..89879fad4c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+              -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+                                -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..bd8381c5e0
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,47 @@
+config RISCV
+	def_bool y
+
+config RISCV_64
+	def_bool y
+	select 64BIT
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/tiny64_defconfig"
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA if RISCV_64
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed to
+	  emit when building Xen, which results in compressed instructions
+	  in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..942e4ffbc1
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,2 @@
+.PHONY: include
+include:
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..53dadb8975
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,14 @@
+########################################
+# RISCV-specific definitions
+
+CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
new file mode 100644
index 0000000000..3c9a2ff941
--- /dev/null
+++ b/xen/arch/riscv/configs/tiny64_defconfig
@@ -0,0 +1,13 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_RISCV_64=y
+CONFIG_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/riscv64/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..e2ae21de61
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,47 @@
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+# define MAX_VIRT_CPUS 128u
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+#define CONFIG_PAGEALLOC_MAX_ORDER  18
+#define CONFIG_DOMU_MAX_ORDER       9
+#define CONFIG_HWDOM_MAX_ORDER      10
+
+#define OPT_CONSOLE_STR "dtuart"
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 00:03:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 00:03:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136300.252774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loapY-0001zp-2k; Thu, 03 Jun 2021 00:03:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136300.252774; Thu, 03 Jun 2021 00:03:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loapX-0001zi-Va; Thu, 03 Jun 2021 00:03:23 +0000
Received: by outflank-mailman (input) for mailman id 136300;
 Thu, 03 Jun 2021 00:03:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iI76=K5=wdc.com=prvs=781f10532=chaitanya.kulkarni@srs-us1.protection.inumbo.net>)
 id 1loapU-0001zc-Q4
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 00:03:21 +0000
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 083744a9-cd19-409a-8dfc-5b57e842f885;
 Thu, 03 Jun 2021 00:03:17 +0000 (UTC)
Received: from mail-dm6nam08lp2041.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.41])
 by ob1.hgst.iphmx.com with ESMTP; 03 Jun 2021 08:03:11 +0800
Received: from BYAPR04MB4965.namprd04.prod.outlook.com (2603:10b6:a03:4d::25)
 by BYAPR04MB4327.namprd04.prod.outlook.com (2603:10b6:a02:ff::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.22; Thu, 3 Jun
 2021 00:03:11 +0000
Received: from BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0]) by BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0%7]) with mapi id 15.20.4173.034; Thu, 3 Jun 2021
 00:03:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 083744a9-cd19-409a-8dfc-5b57e842f885
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1622678598; x=1654214598;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=o+yVEy7H2DT98ziF3VPykTWncyxjepBwGZs7YTcqT/c=;
  b=CjRXZQwpcoemNvnLNXRjsTPe2PR6xmdGSF9fXzxNHZuMOpuDSRvbQJWP
   qTFNKIytYEEgAWvq3vOLUxd1ynaCIzQjFEFB0llYriTzSV7A0XQ0m9iLD
   9G1Ey7l8ZlSaMFN5dwn2x2trhJMr7KBGlGDcCFFqYIafXVC6zklOvayvY
   VaNRC21rDSr1dCYzpiXm1N779AZDPKyfhlFadHM3/sPr/bTSpxxsDcUdC
   m+VdhTkrFpeZyFojrCOStDYboPGwjoPRy3S8rxWplCjfkkKSjz0lUhrFk
   N9AvextaeU0Uem3SmDoZEQeJIFvYzRLKF8Kx9UC2GYyHG9s24Jlkdpq9C
   g==;
IronPort-SDR: LTb6lWTZrT6MpXhNb90CXDWwwbXdwgf7gnlp3B+5NpeFnl49QaTVO9d3+JGjekYbIqFJqNeky8
 juc4HJiGdxwkVx3B4wM1UHFgwF0LrHxxv7K53bcl6FoPHgidfYHEXOuQseFdIBq7sRXaG15g/x
 FFmf8sTaug1JzxVrcRlIVLwuuHqI/kB6FgX9fPyc/7N++MoFutfUXVVlY3W7g7WDVKoJQx2kVY
 t4qy2LB5jXTo5H1YFdL7hIAgUZUiEEaLdVwLeUV/thdwRmAxTRqyS7wH4MZ6tqVc1WqBmygTgV
 W0I=
X-IronPort-AV: E=Sophos;i="5.83,244,1616428800"; 
   d="scan'208";a="274374650"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YgDEBaXb7HbGmAbrhs9l/t6X5iZXwUbIc3vE9TT3omr41KxvJ8GfK33JFn12sOV3EAFFjssQjgYtFvAc7C8BTpwWbz3GiaP78ObZD/9UxqfeHy28jAFEAQDVl0rDy9tZ7m/BcnlSGR8WEpjPGUXUPdlYkyiXS9EL2wwplzsaKBhgKTQ+r2KaQLcPVyTgjKamYC3l6qfgb/Vhz8HxVF83PjQn5zrLufTyX8Fzkni4jgz1RmWhgTU6D4Z7twg4+4hKIUGXmOA6rP5hInxSi8kixHpC+bz9uXvbGKWgzVlfr3wb5trg+OPmmOg0vNeUgIuZ77RsWKSGly5vmtf6hW3iEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ujYjzUTvNk5lRoWbqXrQWBuQwFbs/0kVBkPupo0kRlU=;
 b=mxiCzU3U4ch5wdnvTYx0X1rthFhvYc5UtEkmv1kX/cem7n72ZWYlrGGFUPEdeoJUJ2ALPMlDaeJ5WNx6OGrW8L9iQz8ZgI5nWDI7huqbuWAKivFyDPsGk39exKYdQf8NoXNa54jigO3Se/mww86UgLrN41FhVLiTvjVUmSPYPMpA9BfGw8I59L39q6X4Untvg5t/plJ+V1zeLeFZqof/5Q4Wv/c6bJjWFSg7Vk7pSIUhES2U6WkcNDK8d6ca7ChheAfwWwdDbKShPR8JexEEHk4RWZI0I7i3oBZRC11Vrx330RrUmo3Jc0q0gbs7flstnLNwrhfTXPc/iWr9cNi8QA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ujYjzUTvNk5lRoWbqXrQWBuQwFbs/0kVBkPupo0kRlU=;
 b=WGZp/4FqdH6GKPOZNdLEvwYnG2t7pCklbI8rSzqu2aW+Du+B88rLNixQwKCogAX9CMHTpsIzzcBBpaikU9Aeea3dPIQWepJdvs61P8TYtTmnYpAFCdu/hvZaMpHXRtQHOfN1Y7MDupHHat4SJbRtY8DC5DMUQcF5qCTLBOfSfLs=
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, Geoff
 Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, "Md. Haris
 Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, "Michael S.
 Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky
	<maximlevitsky@gmail.com>, Alex Dubov <oakad@yahoo.com>, Miquel Raynal
	<miquel.raynal@bootlin.com>, Richard Weinberger <richard@nod.at>, Vignesh
 Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, Vasily
 Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-mmc@vger.kernel.org"
	<linux-mmc@vger.kernel.org>, "linux-mtd@lists.infradead.org"
	<linux-mtd@lists.infradead.org>, "linux-s390@vger.kernel.org"
	<linux-s390@vger.kernel.org>
Subject: Re: [PATCH 03/30] blk-mq: add the blk_mq_alloc_disk APIs
Thread-Topic: [PATCH 03/30] blk-mq: add the blk_mq_alloc_disk APIs
Thread-Index: AQHXV3wpg+GpGHxRB0OZy0BgL2WPKw==
Date: Thu, 3 Jun 2021 00:03:11 +0000
Message-ID:
 <BYAPR04MB49653D1B88ADA8EDEAA88F82863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
References: <20210602065345.355274-1-hch@lst.de>
 <20210602065345.355274-4-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [199.255.45.62]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: a35b10a6-7fc6-40a1-0ea7-08d92622f9fc
x-ms-traffictypediagnostic: BYAPR04MB4327:
x-microsoft-antispam-prvs:
 <BYAPR04MB43277DCC98ED185FAD050823863C9@BYAPR04MB4327.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 KUrktxVP+HJoyFo+Xhee5NaJZPcQWKKjnb74IsZAcSkgfCH6+h+NTgHgkMgXC+hwOuSG/i1IhPYyYKCsv77QSzV9Hhxn7pe0x68SgN+2AzX3rRjWTDe9DRIvSVanF8X2xHqtALAvdDy2ycNLQAQoEkxScFxhheG3qy2xYatcVn6IbXsDKKAIG5ch3PISks8XZ49m15svn5lR1L6k0UcaE0ycDI0tDr91fCMwL+cPaxDOeJN/vwOHF2ADWiJKkoY8c5CBW/PWmwsBeBB0VkYtMlLQQ1hmkLTPjrtDolAW6VfrTCTDF2yt4ozSc8pKBlk/zi/Rh8Duc7FPDk5ZnwFGOcWpoEY9n6EOaOUGel8w8s3sl6uLcHqUsE7U0x1awfu146GXHfAOXix9XA4Ve7vp3YoNmEOcFHKlnNutlWKIeC+QaMcQCbCeWWqsTdYr6gCst4hOWYrX5P6gHlnSOKMSL0IFN4Yhk4jyqWIJJQ5rCFeQsOY9CyxnpmXPC8mOPTbs5r38pDAUOey+TbzObDC/i5Tvq5iaasgO1B3U6i91LO49JWmhUV1c5QVDRHx9b7qQiQFSjbyDtMcJj4BWIjcfbnE8Chv2pWlZYEXBqq5W2tI=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR04MB4965.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(366004)(136003)(346002)(33656002)(5660300002)(4326008)(4744005)(76116006)(71200400001)(478600001)(26005)(54906003)(110136005)(7696005)(86362001)(83380400001)(6506007)(53546011)(316002)(66446008)(66946007)(8676002)(7416002)(2906002)(8936002)(55016002)(38100700002)(9686003)(52536014)(186003)(66476007)(64756008)(122000001)(7406005)(66556008);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 =?iso-8859-1?Q?ynVliGDQxRvrJcMGRyYPK/j4RBDB/47O8t3S1ZBmmqi+OJl5cSFj36BpMo?=
 =?iso-8859-1?Q?duRdXwb92TxF9zCA2J+P5WyYYWJRZ7m7yKAAdRJ3hNHdIUhx62prWbAllO?=
 =?iso-8859-1?Q?5RzOYlHsmxLJg+MygW/Kw2A/GTRNKk77DPVvAMNbZV8oZOyETasQ16Od6c?=
 =?iso-8859-1?Q?4ZCVwGDu8x2CQ3QzJNsJLqvcxCM0cPdTuqouzX4I1Pio3tR5HNBGqfPCNR?=
 =?iso-8859-1?Q?Z+IgWO+M5NixkHMYNj0aKB1Dc/wY/3iHguuMp3vnUD+EAHjKVDTyCfVw8l?=
 =?iso-8859-1?Q?wrJw0rM7eSMFx+/cl0Bkb3op9EgyVU8L7sB2nBNpzGNCPPbsQijJxXnh2q?=
 =?iso-8859-1?Q?xCV6e9qqdx9fqomwbQ7OALpxqS5/3BHu+fm9/8AtX5rxpM5ppAz6YMr6x9?=
 =?iso-8859-1?Q?YVJwy2YhgSW9FovzjveHt+0+Mlerhknec+EHSa+YIc592uuH55NgadlSg6?=
 =?iso-8859-1?Q?xwMh7eU5Ye4bQO6LgA29iZQcfT5UWTnWwhmwGcN2RQ4QMSsZP3lOWoE2Lw?=
 =?iso-8859-1?Q?/oa0c0LbzbVy+oZJZrQXC214a+1pSfA3J/oukxkCCVZy+ZJjIE24lzGvyl?=
 =?iso-8859-1?Q?NKMLX+J7hK0iRFLrla9gI3Y9pdma9MEQuOZjUGslMsZCMLvkeokcK5UexK?=
 =?iso-8859-1?Q?vhSvLlBfi0dzJ2fVNY7C9WtwNxgkrP9y4aEikHYXl/ex0027ccidERsazY?=
 =?iso-8859-1?Q?XmMV+SNP8XyGnSLmFS2Uoo1HZWXGa2Rp53DCkghI0cqmLT3ALHSlq/pWhr?=
 =?iso-8859-1?Q?edbuRICh9T840Rmq44qk0cphIGz6/CNJ0h5MGwL9m21dInIDnfM65GjhOG?=
 =?iso-8859-1?Q?hRprIHCj84jMb+2kEGnQnfyOJbRqCXxwVniBHSTsP+rcSl2hfOxYUupLcZ?=
 =?iso-8859-1?Q?yiZ3S2b91z8+tiP2Tx+o9UIeLNQdZuQyfl4Ki58ul5h8b4mWJ83xDPA8Dg?=
 =?iso-8859-1?Q?jgRQ8ivpRJWPx87Q54ZWRmVBanO8sRxhLtaWy/VAO4bP/1IEe3p2kjK8cI?=
 =?iso-8859-1?Q?N7eFSdRP+VZQsvFktJr/tEQhuHJZqIEy4hUZlMhxFBtj5rlHIGECM7zVi3?=
 =?iso-8859-1?Q?oYmOUS/rKmsGB53BS/h4yKaiTBh1hr/33nwKz88puhi8PIoJBHQwjgdbUY?=
 =?iso-8859-1?Q?jW8L/RcihUjfKP25UQVcyfUMWzVkZ5Wn6pd8QTiS4VyzI+V2/yRo8RSb+S?=
 =?iso-8859-1?Q?XEjy3TiONhDX9jH4u8CaJ37mbvIQsX+cnTOK5EgtJgH/gNotsknqrTZOtG?=
 =?iso-8859-1?Q?6vMjFlI+n66XU3uCAnAj+od+U/vfHAsUXH+K15iPWW8g+f9xg5MbrhK37J?=
 =?iso-8859-1?Q?OCpDLLKwXs4bpDw1i/lgQtxo5+UN2h7coueAuN0AcEhHfMP7bayfzpAEpr?=
 =?iso-8859-1?Q?XUuyds4AE6?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR04MB4965.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a35b10a6-7fc6-40a1-0ea7-08d92622f9fc
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 Jun 2021 00:03:11.2016
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: RDjWAyq6BnuphswWhaa7UCiHTzD/aNrldhcDCrmF+sKsMkcPX6G37kEVI5ch879ou5meUtnrLy6SEuld6NLqkNrrCFzCbDYRgIMVPMUYv2k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR04MB4327

On 6/1/21 23:54, Christoph Hellwig wrote:
> Add a new API to allocate a gendisk including the request_queue for use
> with blk-mq based drivers.  This is to avoid boilerplate code in drivers.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

This would be a nice API to get rid of the couple of initialization
calls and the respective error handling in each blk-mq based driver.

Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 00:04:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 00:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136307.252785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaqv-0002fa-Jq; Thu, 03 Jun 2021 00:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136307.252785; Thu, 03 Jun 2021 00:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loaqv-0002fT-GX; Thu, 03 Jun 2021 00:04:49 +0000
Received: by outflank-mailman (input) for mailman id 136307;
 Thu, 03 Jun 2021 00:04:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iI76=K5=wdc.com=prvs=781f10532=chaitanya.kulkarni@srs-us1.protection.inumbo.net>)
 id 1loaqu-0002fI-6F
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 00:04:48 +0000
Received: from esa4.hgst.iphmx.com (unknown [216.71.154.42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00218d20-561c-4b5a-83a6-6b6f4852c95e;
 Thu, 03 Jun 2021 00:04:44 +0000 (UTC)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hgst.iphmx.com with ESMTP; 03 Jun 2021 08:04:41 +0800
Received: from BYAPR04MB4965.namprd04.prod.outlook.com (2603:10b6:a03:4d::25)
 by BYAPR04MB4327.namprd04.prod.outlook.com (2603:10b6:a02:ff::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.22; Thu, 3 Jun
 2021 00:04:40 +0000
Received: from BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0]) by BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0%7]) with mapi id 15.20.4173.034; Thu, 3 Jun 2021
 00:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00218d20-561c-4b5a-83a6-6b6f4852c95e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1622678684; x=1654214684;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=JsHf/1Axy8NIVPTHTN8VhWztYXay/VLRowokpLJcqck=;
  b=IeuzuMtn6lfL12Y1oyTV7D9z8zsjbu3Y3VjFGQJ+xx/viSR2zu5G5MZK
   MuyxAGMW1S960jpgE7SnmI7gBuwLfmkTbqYMy5QjEtaoj00NPpRgWbm7N
   bGgHBkPh3BfjU6eOm1C4GzOswBDl2ppmRlFXqqwiYYKqfx7kNLOHd9WU1
   rjMkTM1pLniGsDGuFRNDXx0tEI+sn6WyFKGJRmaGwJl1oJt/F1IjZxXIu
   nBmgz7ntQAnyX+X11ai2zWnZSLCStIKvG3deHR3EDVGOYBWUVrD+qbbq4
   S2yDXRka9hmTgAD5/0scVlmDyZf5D4sC57ReMtfIKH3SoP9nxGFeiHKV6
   g==;
IronPort-SDR: ebw43mH2wu2FVzDe3CjkBzn4Ei7G+EjMxL1XQCiLGvLghtxtAihVBK60Ry4k6dJZoGvWftqxQx
 JOxIS3UCpRN2nS4HH4Wm62h5r/EXDr2IleQUWCOCR/JGpCBywEbCpx/GYC9uZQTvIzMAY/7JSH
 sPxA5KUAkpdsPbOLkpRJC2kmm7YhOtUaHvJAMReu8KGWbm59+VPjQl1w6dFeYukHysBBSKN2Ve
 8tYhz5vOkifIjUqABb5q55MhEGp3sZwGNZ6DgKlv6nCCuBkSJ9uc0nsucgyRpJuUIvp3lIHZjf
 jfg=
X-IronPort-AV: E=Sophos;i="5.83,244,1616428800"; 
   d="scan'208";a="169781069"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cJtQMdjSGlU9zr7fdXms/aNN47eZTdSoCBCApcEL0PtASUgfDfjCMYlhPRY2y/zV+Z3iq18wmyi5mg7g9hQBZtxY8a+JUDw6cltEpo0kPEsJXnXgMrkMfkxOMVG+nbm45hM0P/cSF67WJoB8NAmVNCZrP2V3w8dArnTEctQvjUHw4ztJFDnfHnWz0hu3t/rw2kFdhMPPw5c2UlAdm4B6eGVpkMNMseBHX456sIBh6q7gSvBKf/WrFFvXR1s+W5q9k5fKeQro308nYH0hS+wXct/3AKNtYWFIN7vg/XYnWUW9VJs/kfDOdqNy7kZjsSrDGCLhTzHly6rsiSn1Fc+ogQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JsHf/1Axy8NIVPTHTN8VhWztYXay/VLRowokpLJcqck=;
 b=aiP/54PxULlNoM6n6CQST4W7itbWct2dQDVGYay6LJjpl0DmNR2kDWKxRyITK4KYcXlhF6NwBKHn0M8FWqEcmKugUlRkjqzcA1OZNnrLyO/perxG9Y/8ACbvopDrm+c7T/biR6C+31kP0FC9DaWpyxrNOIJXTzWCxieo2RYdG5VmDFpMnzKKYOOdNj8I7eZTQWurbBwyz3KzSQohh8826lf2gpO5tidl6DtL9gRFxgOOf5QhVa2KZG4pcx0AQ1RU07kvUc6QtuLfcG9KuMV7CxmCSyQ5LZc28cf672Fh4xAwxN3Hs2qeN//ntn+13mMkaeOiA4JIQ+mMYSsAAZtEjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JsHf/1Axy8NIVPTHTN8VhWztYXay/VLRowokpLJcqck=;
 b=o4MEyHLHCXSgfcOLN7a4YzvzZiYc16wAMNH0jfPpwWdUV6uxnuENNm5NmTY/yTjM3r9ZaQhLqSs6XktKzW4AGIJUzaLmSgfRZVcSLGLMYuEM+1NfY+4xSVk4n5MpJZBirmaOkr0h1X1u0gaRZewleJMLc/NczpwD60eNGIJy7f0=
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, Geoff
 Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, "Md. Haris
 Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, "Michael S.
 Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky
	<maximlevitsky@gmail.com>, Alex Dubov <oakad@yahoo.com>, Miquel Raynal
	<miquel.raynal@bootlin.com>, Richard Weinberger <richard@nod.at>, Vignesh
 Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, Vasily
 Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-mmc@vger.kernel.org"
	<linux-mmc@vger.kernel.org>, "linux-mtd@lists.infradead.org"
	<linux-mtd@lists.infradead.org>, "linux-s390@vger.kernel.org"
	<linux-s390@vger.kernel.org>
Subject: Re: [PATCH 15/30] blk-mq: remove blk_mq_init_sq_queue
Thread-Topic: [PATCH 15/30] blk-mq: remove blk_mq_init_sq_queue
Thread-Index: AQHXV3xQMWTzcORnaU+ja3Jg+8O85A==
Date: Thu, 3 Jun 2021 00:04:40 +0000
Message-ID:
 <BYAPR04MB49655D481F62EC6B18851664863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
References: <20210602065345.355274-1-hch@lst.de>
 <20210602065345.355274-16-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On 6/1/21 23:55, Christoph Hellwig wrote:
> All users are gone now.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 00:06:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 00:06:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136314.252796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loas5-0003HX-UP; Thu, 03 Jun 2021 00:06:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136314.252796; Thu, 03 Jun 2021 00:06:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loas5-0003HO-RJ; Thu, 03 Jun 2021 00:06:01 +0000
Received: by outflank-mailman (input) for mailman id 136314;
 Thu, 03 Jun 2021 00:06:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iI76=K5=wdc.com=prvs=781f10532=chaitanya.kulkarni@srs-us1.protection.inumbo.net>)
 id 1loas4-0003H3-BB
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 00:06:00 +0000
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edbeb396-b890-4a14-b2c6-599446b55e16;
 Thu, 03 Jun 2021 00:05:56 +0000 (UTC)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hgst.iphmx.com with ESMTP; 03 Jun 2021 08:05:49 +0800
Received: from BYAPR04MB4965.namprd04.prod.outlook.com (2603:10b6:a03:4d::25)
 by BYAPR04MB5333.namprd04.prod.outlook.com (2603:10b6:a03:c2::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22; Thu, 3 Jun
 2021 00:05:45 +0000
Received: from BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0]) by BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0%7]) with mapi id 15.20.4173.034; Thu, 3 Jun 2021
 00:05:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: edbeb396-b890-4a14-b2c6-599446b55e16
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, Geoff
 Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, "Md. Haris
 Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, "Michael S.
 Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky
	<maximlevitsky@gmail.com>, Alex Dubov <oakad@yahoo.com>, Miquel Raynal
	<miquel.raynal@bootlin.com>, Richard Weinberger <richard@nod.at>, Vignesh
 Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, Vasily
 Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-mmc@vger.kernel.org"
	<linux-mmc@vger.kernel.org>, "linux-mtd@lists.infradead.org"
	<linux-mtd@lists.infradead.org>, "linux-s390@vger.kernel.org"
	<linux-s390@vger.kernel.org>
Subject: Re: [PATCH 18/30] loop: use blk_mq_alloc_disk and blk_cleanup_disk
Thread-Topic: [PATCH 18/30] loop: use blk_mq_alloc_disk and blk_cleanup_disk
Thread-Index: AQHXV3xaE0d6OBaU4E62oTiV05BSaQ==
Date: Thu, 3 Jun 2021 00:05:45 +0000
Message-ID:
 <BYAPR04MB49659E527FE3E41660BF8589863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
References: <20210602065345.355274-1-hch@lst.de>
 <20210602065345.355274-19-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On 6/1/21 23:56, Christoph Hellwig wrote:
> Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
> request_queue allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 00:10:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 00:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136323.252807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loawK-0004gM-GY; Thu, 03 Jun 2021 00:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136323.252807; Thu, 03 Jun 2021 00:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loawK-0004gF-DV; Thu, 03 Jun 2021 00:10:24 +0000
Received: by outflank-mailman (input) for mailman id 136323;
 Thu, 03 Jun 2021 00:10:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iI76=K5=wdc.com=prvs=781f10532=chaitanya.kulkarni@srs-us1.protection.inumbo.net>)
 id 1loawI-0004g9-Jp
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 00:10:22 +0000
Received: from esa5.hgst.iphmx.com (unknown [216.71.153.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63065a2a-1822-4784-a8ce-814f9914e375;
 Thu, 03 Jun 2021 00:10:20 +0000 (UTC)
Received: from mail-co1nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.176])
 by ob1.hgst.iphmx.com with ESMTP; 03 Jun 2021 08:10:11 +0800
Received: from BYAPR04MB4965.namprd04.prod.outlook.com (52.135.233.89) by
 BYAPR04MB3861.namprd04.prod.outlook.com (52.135.214.33) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.24; Thu, 3 Jun 2021 00:10:09 +0000
Received: from BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0]) by BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::6873:3d64:8f9f:faf0%7]) with mapi id 15.20.4173.034; Thu, 3 Jun 2021
 00:10:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63065a2a-1822-4784-a8ce-814f9914e375
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Christoph Hellwig <hch@lst.de>
CC: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, Denis
 Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>, Tim Waugh
	<tim@cyberelk.net>, Geoff Levand <geoff@infradead.org>, Ilya Dryomov
	<idryomov@gmail.com>, "Md. Haris Iqbal" <haris.iqbal@ionos.com>, Jack Wang
	<jinpu.wang@ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang
	<jasowang@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Mike Snitzer
	<snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>, Alex Dubov
	<oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>, Richard
 Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>, Heiko
 Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, Christian
 Borntraeger <borntraeger@de.ibm.com>, "dm-devel@redhat.com"
	<dm-devel@redhat.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-mmc@vger.kernel.org"
	<linux-mmc@vger.kernel.org>, "linux-mtd@lists.infradead.org"
	<linux-mtd@lists.infradead.org>, "linux-s390@vger.kernel.org"
	<linux-s390@vger.kernel.org>
Subject: Re: [PATCH 20/30] nullb: use blk_mq_alloc_disk
Thread-Topic: [PATCH 20/30] nullb: use blk_mq_alloc_disk
Thread-Index: AQHXV3xgq8Qh2KPeAkCUNVGDF6BX9g==
Date: Thu, 3 Jun 2021 00:10:09 +0000
Message-ID:
 <BYAPR04MB4965DDD0F4479F5B492A30D2863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
References: <20210602065345.355274-1-hch@lst.de>
 <20210602065345.355274-21-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On 6/1/21 23:56, Christoph Hellwig wrote:
> Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
> request_queue allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/null_blk/main.c | 11 +++++------
>  1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index d8e098f1e5b5..74fb2ec63219 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -1851,13 +1851,12 @@ static int null_add_dev(struct nullb_device *dev)
>  
>  		rv = -ENOMEM;

Is the above initialization needed?

>  		nullb->tag_set->timeout = 5 * HZ;
> -		nullb->q = blk_mq_init_queue_data(nullb->tag_set, nullb);
> -		if (IS_ERR(nullb->q))
> -			goto out_cleanup_tags;
> -		nullb->disk = alloc_disk_node(1, nullb->dev->home_node);
> -		if (!nullb->disk)
> +		nullb->disk = blk_mq_alloc_disk(nullb->tag_set, nullb);
> +		if (IS_ERR(nullb->disk)) {
> +			rv = PTR_ERR(nullb->disk);
>  			goto out_cleanup_disk;
> -		nullb->disk->queue = nullb->q;
> +		}
> +		nullb->q = nullb->disk->queue;
>  	} else if (dev->queue_mode == NULL_Q_BIO) {
>  		rv = -ENOMEM;
>  		nullb->disk = blk_alloc_disk(nullb->dev->home_node);


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 00:55:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 00:55:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136332.252818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lobdv-0000Ir-7E; Thu, 03 Jun 2021 00:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136332.252818; Thu, 03 Jun 2021 00:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lobdv-0000Ik-4F; Thu, 03 Jun 2021 00:55:27 +0000
Received: by outflank-mailman (input) for mailman id 136332;
 Thu, 03 Jun 2021 00:55:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lobdu-0000IN-Ft; Thu, 03 Jun 2021 00:55:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lobdu-0003om-6U; Thu, 03 Jun 2021 00:55:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lobdt-0000rk-QZ; Thu, 03 Jun 2021 00:55:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lobdt-0004nd-Q8; Thu, 03 Jun 2021 00:55:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xIfixx7F79HzaahftmJ5hH9xC5qksD5XEVm1F78y6ec=; b=nukSYKL3x+7pK7lwsbjVjpa43v
	gzcwoEwQC/Olrn9W+Ty8BlSwX6e+xuNTkei8mNAWwDQqUYH3xEbXmEjJPHqosLtSNYH9joEcJTxbZ
	c2xdwoVWCwEAToSDsrmJzVv+GW317ghUOH/XlipDBir7tPZ1XXwTUU+uQRKBHdx0J+r8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162339: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dd2db39d78431ab5a0b78777afaab3d61e94533e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 00:55:25 +0000

flight 162339 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                dd2db39d78431ab5a0b78777afaab3d61e94533e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  286 days
Failing since        152659  2020-08-21 14:07:39 Z  285 days  528 attempts
Testing same since   162339  2021-06-02 14:08:37 Z    0 days    1 attempts

------------------------------------------------------------
520 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164761 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 01:58:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 01:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136340.252831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1locd9-0003Hr-4i; Thu, 03 Jun 2021 01:58:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136340.252831; Thu, 03 Jun 2021 01:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1locd9-0003Hk-1L; Thu, 03 Jun 2021 01:58:43 +0000
Received: by outflank-mailman (input) for mailman id 136340;
 Thu, 03 Jun 2021 01:58:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1locd7-0003Ha-Di; Thu, 03 Jun 2021 01:58:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1locd7-0001jH-8f; Thu, 03 Jun 2021 01:58:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1locd6-0002W3-Vm; Thu, 03 Jun 2021 01:58:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1locd6-0001SM-VJ; Thu, 03 Jun 2021 01:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JcG++BQsGh3DtKzXFAkRXWMk1bAc7fY4K1zb+0EzUBE=; b=T6yqgdFi45RSTK14Fdd7Cyi57o
	+5OZFBb8sr3T1HbAEA5l2y7j2LAnBkoautuqrkHkDw3V7J5IkK7iwuCzNfyweXLWlMuucqxgNuQbL
	RnkCjwjbF5GK5tmtXJhnmf4svOhQC876hEJb77QcqrBnZsf8i6L/2xIzsD1Dra+ku6bA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162341: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=75e9154f818a58ffc3a28db9f8c97279e723f02d
X-Osstest-Versions-That:
    ovmf=1f515342d8d83ef0fff0c3f4ac67232dd8c97565
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 01:58:40 +0000

flight 162341 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162341/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 75e9154f818a58ffc3a28db9f8c97279e723f02d
baseline version:
 ovmf                 1f515342d8d83ef0fff0c3f4ac67232dd8c97565

Last test of basis   162338  2021-06-02 12:41:10 Z    0 days
Testing same since   162341  2021-06-02 19:41:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1f515342d8..75e9154f81  75e9154f818a58ffc3a28db9f8c97279e723f02d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 02:45:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 02:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136348.252846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lodMM-0008SM-Ir; Thu, 03 Jun 2021 02:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136348.252846; Thu, 03 Jun 2021 02:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lodMM-0008SF-Ep; Thu, 03 Jun 2021 02:45:26 +0000
Received: by outflank-mailman (input) for mailman id 136348;
 Thu, 03 Jun 2021 02:45:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mEXH=K5=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lodMK-0008S9-Uo
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 02:45:25 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4888d3bb-0bde-4d3b-abee-b0e7bd5f5cd0;
 Thu, 03 Jun 2021 02:45:23 +0000 (UTC)
Received: from DU2PR04CA0349.eurprd04.prod.outlook.com (2603:10a6:10:2b4::6)
 by AM0PR08MB3987.eurprd08.prod.outlook.com (2603:10a6:208:134::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.22; Thu, 3 Jun
 2021 02:45:20 +0000
Received: from DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b4:cafe::e8) by DU2PR04CA0349.outlook.office365.com
 (2603:10a6:10:2b4::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Thu, 3 Jun 2021 02:45:19 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT053.mail.protection.outlook.com (10.152.21.119) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Thu, 3 Jun 2021 02:45:19 +0000
Received: ("Tessian outbound a5ae8c02e74f:v93");
 Thu, 03 Jun 2021 02:45:19 +0000
Received: from 360ec6a86785.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 85CF783B-64B2-454F-A173-5CAECD0C341E.1; 
 Thu, 03 Jun 2021 02:45:13 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 360ec6a86785.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 03 Jun 2021 02:45:13 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB3149.eurprd08.prod.outlook.com (2603:10a6:803:41::29)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.22; Thu, 3 Jun
 2021 02:44:58 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918%7]) with mapi id 15.20.4173.030; Thu, 3 Jun 2021
 02:44:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4888d3bb-0bde-4d3b-abee-b0e7bd5f5cd0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5pCinxDI/FW/OY2j76PpngpyU91JFm3uGEUde7fyv80=;
 b=0R9ede7dUlNl36Y65SxSZ9PZ1ipMW3EQ/o5ZCwCjDaBUCkFlum28rvoYqUSUNucMscUnUJPNNKNZ6uNwHHS+doVUJ5oCBy2xTk1CdyOWuWdAQcpIzeREnidTonEfvf0zvXyje3+8iJYzd9kJRQec0LxonptMm5Ybh2ca3UEbsGU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gjQe3vOnUM0HDYi+f7D/Rm+AmxWDiuz4G4SPPlo0WMwpYtE2F7HesEy6bAsgmqL/49Ryesun94LIzTABbC6XOZGHbuOIqX4KgJS9zDkzsfS3HUAOt8wGlqmh8Kv26dssBrmO1TeeulaGBKJHWI3vWXmj9FTfSKwgIiUHIwu3jRkZV2USCXf+jq6HI6oRhtCGYTfK56Q7QZ3Sf9gv4VvvDgdkFYmIBqUlbe+OSEApYp3OTl8MkklJVlkR5hvtFNeB02lnz0Jbzr6YO8eOloxGqMbthEmiaNFY7Z/dXSbBzG36c+1pmWvo5gLD3yBk8zscfphX/xd3yg9WG1P41T9Oww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5pCinxDI/FW/OY2j76PpngpyU91JFm3uGEUde7fyv80=;
 b=j1VLpl22rTmbRtZfk0tyCRaMXMj6e1rNd/rBpKIqGvZU0DiBgR0xisFH1SKaTmD3w3aTxgIeWzHBfPJcT3JsA7hMnG7+R6A3Ubfduwx9TUOkQdQlXne95JIPHW5LCvswd7yQ+H6JcKGF1QeargQgI2aYOaeqMo4rad9cVEcBrvbaV8OoXpCW4tVxsXU5S6SRRfevbV7zMhL7G2cUQMzA1a+tNvwa1Y1uJECj3RkTAshoS9YRs+io9RbCsVt3HSQY8pt9a5ANE6YpQO+hb0XeOP+8mQUs6+I1PVOFuZoIJLC5ZfBv1Zqo42YdukpUJV+O4FJJPmXGPRc8bLIm8LDoFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5pCinxDI/FW/OY2j76PpngpyU91JFm3uGEUde7fyv80=;
 b=0R9ede7dUlNl36Y65SxSZ9PZ1ipMW3EQ/o5ZCwCjDaBUCkFlum28rvoYqUSUNucMscUnUJPNNKNZ6uNwHHS+doVUJ5oCBy2xTk1CdyOWuWdAQcpIzeREnidTonEfvf0zvXyje3+8iJYzd9kJRQec0LxonptMm5Ybh2ca3UEbsGU=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Topic: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Index:
 AQHXS6W/k/joZEkeaUiDjrfYH3jVEaro2ToAgAAR3sCAAC37gIAEZlqwgAAJrgCAFCGXIA==
Date: Thu, 3 Jun 2021 02:44:55 +0000
Message-ID:
 <VE1PR08MB52153C97BC35C39223897F92F73C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4389b5be-7d23-31d7-67e0-0068cba79934@suse.com>
 <VE1PR08MB521538B39E6290BBA0842F97F7299@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <cc03d9d9-227a-0788-1a88-b35a77f5f18d@suse.com>
In-Reply-To: <cc03d9d9-227a-0788-1a88-b35a77f5f18d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7B83CCE92746BE4AB485D11C59BDE16E.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.112]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 35558da4-c829-42d6-451f-08d92639a093
x-ms-traffictypediagnostic: VI1PR08MB3149:|AM0PR08MB3987:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3987694ED0FDFD1AA5581736F73C9@AM0PR08MB3987.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3149
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a6a73d5d-880c-412f-ae7b-08d926399377
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2021 02:45:19.8553
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 35558da4-c829-42d6-451f-08d92639a093
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3987

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Friday, May 21, 2021 3:09 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>
> On 21.05.2021 08:41, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 7:23 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> >>
> >> On 18.05.2021 10:57, Penny Zheng wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: Tuesday, May 18, 2021 3:35 PM
> >>>>
> >>>> On 18.05.2021 07:21, Penny Zheng wrote:
> >>>>> --- a/xen/common/page_alloc.c
> >>>>> +++ b/xen/common/page_alloc.c
> >>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>>>      {
> >>>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>>>          page_set_owner(&pg[i], d);
> >>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >>>> */
> >>>>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>>>> +            page_set_reserved_owner(&pg[i], d);
> >>>>
> >>>> Now this is puzzling: What's the point of setting two owner fields
> >>>> to the same value? I also don't recall you having introduced
> >>>> page_set_reserved_owner() for x86, so how is this going to build there?
> >>>>
> >>>
> >>> Thanks for pointing out that it will fail on x86.
> >>> As for the same value, sure, I shall change it to domid_t domid to
> >>> record its
> >> reserved owner.
> >>> Only domid is enough for differentiate.
> >>> And even when domain get rebooted, struct domain may be destroyed,
> >>> but domid will stays The same.
> >>
> >> Will it? Are you intending to put in place restrictions that make it
> >> impossible for the ID to get re-used by another domain?
> >>
> >>> Major user cases for domain on static allocation are referring to
> >>> the whole system are static, No runtime creation.
> >>
> >> Right, but that's not currently enforced afaics. If you would enforce
> >> it, it may simplify a number of things.
> >>
> >>>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +/*
> >>>>> + * Allocate nr_pfns contiguous pages, starting at #start, of
> >>>>> +static memory,
> >>>>> + * then assign them to one specific domain #d.
> >>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>>>> + */
> >>>>> +struct page_info *alloc_domstatic_pages(
> >>>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> >>>>> +        unsigned int memflags)
> >>>>> +{
> >>>>> +    struct page_info *pg = NULL;
> >>>>> +    unsigned long dma_size;
> >>>>> +
> >>>>> +    ASSERT(!in_irq());
> >>>>> +
> >>>>> +    if ( memflags & MEMF_no_owner )
> >>>>> +        memflags |= MEMF_no_refcount;
> >>>>> +
> >>>>> +    if ( !dma_bitsize )
> >>>>> +        memflags &= ~MEMF_no_dma;
> >>>>> +    else
> >>>>> +    {
> >>>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> >>>>> +        /* Starting address shall meet the DMA limitation. */
> >>>>> +        if ( dma_size && start < dma_size )
> >>>>> +            return NULL;
> >>>>
> >>>> It is the entire range (i.e. in particular the last byte) which
> >>>> needs to meet such a restriction. I'm not convinced though that DMA
> >>>> width restrictions and static allocation are sensible to coexist.
> >>>>
> >>>
> >>> FWIT, if starting address meets the limitation, the last byte,
> >>> greater than starting address shall meet it too.
> >>
> >> I'm afraid I don't know what you're meaning to tell me here.
> >>
> >
> > Referring to alloc_domheap_pages, if `dma_bitsize` is none-zero value,
> > it will use  alloc_heap_pages to allocate pages from [dma_zone + 1,
> > zone_hi], `dma_zone + 1` pointing to address larger than 2^(dma_zone + 1).
> > So I was setting address limitation for the starting address.
>
> But does this zone concept apply to static pages at all?
>

Oh, so sorry. I finally got what you were asking here. Hmm, I was using the logic in
bits_to_zone to do the address bits translation. But I got, it will bring confusion. I'll
fix it. Thx.

> Jan
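[Editor's note] The patch reviewed in this message ("[PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages") guards only the starting address of a static range against the DMA boundary, and the reviewer objects that the entire range, in particular its last byte, must satisfy such a restriction. One reading of that comment is a DMA address-width limit, where the end of the range is the binding constraint. A minimal, hypothetical C sketch of such a whole-range check follows; the function name, `PAGE_SHIFT` value, and the 2^bits interpretation are illustrative assumptions, not Xen's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;   /* stand-in for Xen's paddr_t */
#define PAGE_SHIFT 12       /* assumed 4 KiB pages */

/*
 * Hypothetical whole-range check: under a DMA width restriction of
 * `bits` address bits, every byte of [start, start + nr_pfns pages)
 * must be addressable, so the *end* of the range, not just `start`,
 * is what must be tested. Sketch only; Xen's real logic differs.
 */
static bool range_within_addr_width(paddr_t start, unsigned long nr_pfns,
                                    unsigned int bits)
{
    paddr_t limit = (paddr_t)1 << bits;                  /* first non-addressable byte */
    paddr_t end = start + ((paddr_t)nr_pfns << PAGE_SHIFT);

    /* end > start guards against wraparound; end <= limit checks the last byte. */
    return end > start && end <= limit;
}
```

Under these assumptions, a one-page range ending exactly at 2^32 passes a 32-bit width check, while a two-page range starting at the same address fails, which is exactly the case a start-only check would miss.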


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 03:37:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 03:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136357.252857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loeAE-0004vU-FO; Thu, 03 Jun 2021 03:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136357.252857; Thu, 03 Jun 2021 03:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loeAE-0004vN-CP; Thu, 03 Jun 2021 03:36:58 +0000
Received: by outflank-mailman (input) for mailman id 136357;
 Thu, 03 Jun 2021 03:36:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loeAD-0004vD-U6; Thu, 03 Jun 2021 03:36:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loeAD-0003tz-Kt; Thu, 03 Jun 2021 03:36:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loeAD-0005wI-BC; Thu, 03 Jun 2021 03:36:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loeAD-0004xz-AX; Thu, 03 Jun 2021 03:36:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gW6EWS87WeRPjydeLZ1Agd/6vcGXShrnPSD0g3UdJp0=; b=Rfc6lQjUj2jZN3WDuJeOUBzP5s
	1aF3cNFU6Jia4IBksHgWYduaTafmGgAVn373ZFKPzU9lUWi5JZaLsLI6mIoM8yMxLuYaJi38hOXn4
	evE2pOBsd2PhlhN7qmBFV6h8OfR7LNLWWnDftYbRdPGBVxpmXGC9cJxc36edgoBPbqMQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162340-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162340: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=231bc539066760aaa44d46818c85b14ca2f56d9f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 03:36:57 +0000

flight 162340 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162340/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 162333
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 162333
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162333

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162333 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162333 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                231bc539066760aaa44d46818c85b14ca2f56d9f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  306 days
Failing since        152366  2020-08-01 20:49:34 Z  305 days  521 attempts
Testing same since   162333  2021-06-02 06:00:09 Z    0 days    2 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1666320 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 06:57:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 06:57:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136367.252870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lohHz-00066P-CN; Thu, 03 Jun 2021 06:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136367.252870; Thu, 03 Jun 2021 06:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lohHz-00066I-9E; Thu, 03 Jun 2021 06:57:11 +0000
Received: by outflank-mailman (input) for mailman id 136367;
 Thu, 03 Jun 2021 06:57:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lohHx-000668-8u; Thu, 03 Jun 2021 06:57:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lohHw-0007lt-Sv; Thu, 03 Jun 2021 06:57:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lohHw-0007fJ-I4; Thu, 03 Jun 2021 06:57:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lohHw-0000HB-Hb; Thu, 03 Jun 2021 06:57:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n/WEoal3W/hHb6i+Ng0enTreo6olmmqoacByqN82lrs=; b=uiV8PA8zc16kkj7mDKVkOp11xm
	KAhBVpxcV0NegYB2HO2IpZVMvXvH+PkHq9MWmGCrTk4TuJzTLcy6noS956cp5MkeU21xI/+THCVXW
	ANZPfJdb3cKfRP15ohzAOUAiB1aAKfKbW4FUpg9Edx5NJ0Sj10baPKx4qjzdtWt8iB1s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162345: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=86e8f371399e8951ef915902b51809116f16c2db
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 06:57:08 +0000

flight 162345 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162345/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              86e8f371399e8951ef915902b51809116f16c2db
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  328 days
Failing since        151818  2020-07-11 04:18:52 Z  327 days  320 attempts
Testing same since   162345  2021-06-03 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60203 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 09:10:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 09:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136384.252885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojMW-0001Pq-VV; Thu, 03 Jun 2021 09:10:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136384.252885; Thu, 03 Jun 2021 09:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojMW-0001Pj-SQ; Thu, 03 Jun 2021 09:10:00 +0000
Received: by outflank-mailman (input) for mailman id 136384;
 Thu, 03 Jun 2021 09:09:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lojMU-0001Pd-Qs
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 09:09:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lojMU-00023b-Kz; Thu, 03 Jun 2021 09:09:58 +0000
Received: from home.octic.net ([81.187.162.82]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lojMU-00061A-Fu; Thu, 03 Jun 2021 09:09:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y8S49lXGsy9W6T8C2T08VS2kS485sSmT1TwgfDt1xlY=; b=fYaMa8W6T1cr0MQBNPjUSKpPDz
	6gfQ9DinRQqQhD+jlm0f2bFnhnoYAlqql5IV4nvHV3GSTChiIRN+EbQvaplvsiV69cQgeK2COyzyw
	a4CqTIhgAe3044sb1l1+IahMnSE0OXPTXvCJ7ygcSGaGykpuh1vcVBdSiLHrt2kX0dwE=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
Date: Thu, 3 Jun 2021 10:09:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/06/2021 11:09, Penny Zheng wrote:
> Hi Julien
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Thursday, May 20, 2021 4:51 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>
>> Hi,
>>
>> On 20/05/2021 07:07, Penny Zheng wrote:
>>>>> It will be consistent with the ones defined in the parent node, domUx.
>>>> Hmmm... To take the example you provided, the parent would be chosen.
>>>> However, from the example, I would expect the property #{address,
>>>> size}-cells in domU1 to be used. What did I miss?
>>>>
>>>
>>> Yeah, the property #{address, size}-cells in domU1 will be used. And
>>> the parent node will be domU1.
>>
>> You may have misunderstood what I meant. "domU1" is the node that
>> contains the property "xen,static-mem". The parent node would be the one
>> above (in our case "chosen").
>>
> 
> I have reconsidered what you meant here; hopefully this time I understand
> correctly (please correct me if I'm wrong). Here is an example:
> 
>      / {
>          reserved-memory {
>              #address-cells = <2>;
>              #size-cells = <2>;
> 
>              staticmemdomU1: static-memory@0x30000000 {
>                  compatible = "xen,static-memory-domain";
>                  reg = <0x0 0x30000000 0x0 0x20000000>;
>              };
>          };
> 
>          chosen {
>              domU1 {
>                  compatible = "xen,domain";
>                  #address-cells = <0x1>;
>                  #size-cells = <0x1>;
>                  cpus = <2>;
>                  xen,static-mem = <&staticmemdomU1>;
> 
>                 ...
>              };
>          };
>      };
> 
> If the user gives different #address-cells and #size-cells values in
> reserved-memory and domU1, then parsing the region through
> `xen,static-mem` may give unexpected results.

Why are you using the #address-cells and #size-cells from the node domU1 
to parse staticmemdomU1?

> I could not think of a way to fix this properly in code. Do you have any suggestions? Or should we just put a warning in the docs/commit message?

The correct approach is to find the parent of staticmemdomU1 (i.e. 
reserved-memory) and use the #address-cells and #size-cells from there.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 09:39:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 09:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136398.252905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojpH-0004mF-JF; Thu, 03 Jun 2021 09:39:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136398.252905; Thu, 03 Jun 2021 09:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojpH-0004m8-FI; Thu, 03 Jun 2021 09:39:43 +0000
Received: by outflank-mailman (input) for mailman id 136398;
 Thu, 03 Jun 2021 09:39:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lojpG-0004m2-D1
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 09:39:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lojpF-0002Xz-8N; Thu, 03 Jun 2021 09:39:41 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lojpF-0000Gj-1y; Thu, 03 Jun 2021 09:39:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=0viUw/h0LCd354Rk0oPsKwoOcbkJQj3qpXKTmML/fUw=; b=53O4zrRR9XPGLNvnLZUYdoIe5r
	FzYOnCB4kOnZ6y/8wu85hkcH2lKZs8DIWd3gjcuMItw2go40axrA50gllzDbL+TCiQZDmwr6gb3Uy
	ZrUdzDjDPH7J0mr8wScURS8qSH/dvo5FdG35hU62Dfdz2+dqBYrjScTkf52Hu0UrM+vg=;
Subject: Re: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
 <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
 <e805e525-f024-8b5f-3814-0c1346a227f8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ccdc7909-9ef2-470e-fefd-bc6523fcdf73@xen.org>
Date: Thu, 3 Jun 2021 10:39:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <e805e525-f024-8b5f-3814-0c1346a227f8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 14:58, Jan Beulich wrote:
> On 27.05.2021 15:06, Julien Grall wrote:
>> On 27/05/2021 13:33, Jan Beulich wrote:
>>> Especially when dealing with large amounts of memory, memset() may not
>>> be very efficient; this can be bad enough that even for debug builds a
>>> custom function is warranted. We additionally want to distinguish "hot"
>>> and "cold" cases.
>>
>> Do you have any benchmark showing the performance improvement?
> 
> This is based on the numbers provided at
> https://lists.xen.org/archives/html/xen-devel/2021-04/msg00716.html (???)
> with the thread with some of the prior discussion rooted at
> https://lists.xen.org/archives/html/xen-devel/2021-04/msg00425.html

Thanks for the pointer!

> I'm afraid I lack ideas on how to sensibly measure _all_ of the
> effects (i.e. including the amount of disturbing of caches).

I think it is quite important to provide some benchmark (or at least 
rationale) in the commit message.

We had a similar situation in the past (see the discussion [1]) where a 
commit message claimed it would improve performance but in reality 
it also introduced a regression. Unfortunately, there is no easy way forward as 
the rationale is now forgotten...

>>> ---
>>> The choice between hot and cold in scrub_one_page()'s callers is
>>> certainly up for discussion / improvement.
>>
>> To get the discussion started, can you explain how you made the decision
>> between hot/cold? This will also want to be written down in the commit
>> message.
> 
> Well, the initial trivial heuristic is "allocation for oneself" vs
> "allocation for someone else, or freeing, or scrubbing", i.e. whether
> it would be likely that the page will soon be accessed again (or for
> the first time).
> 
>>> --- /dev/null
>>> +++ b/xen/arch/x86/scrub_page.S
>>> @@ -0,0 +1,41 @@
>>> +        .file __FILE__
>>> +
>>> +#include <asm/asm_defns.h>
>>> +#include <xen/page-size.h>
>>> +#include <xen/scrub.h>
>>> +
>>> +ENTRY(scrub_page_cold)
>>> +        mov     $PAGE_SIZE/32, %ecx
>>> +        mov     $SCRUB_PATTERN, %rax
>>> +
>>> +0:      movnti  %rax,   (%rdi)
>>> +        movnti  %rax,  8(%rdi)
>>> +        movnti  %rax, 16(%rdi)
>>> +        movnti  %rax, 24(%rdi)
>>> +        add     $32, %rdi
>>> +        sub     $1, %ecx
>>> +        jnz     0b
>>> +
>>> +        sfence
>>> +        ret
>>> +        .type scrub_page_cold, @function
>>> +        .size scrub_page_cold, . - scrub_page_cold
>>> +
>>> +        .macro scrub_page_stosb
>>> +        mov     $PAGE_SIZE, %ecx
>>> +        mov     $SCRUB_BYTE_PATTERN, %eax
>>> +        rep stosb
>>> +        ret
>>> +        .endm
>>> +
>>> +        .macro scrub_page_stosq
>>> +        mov     $PAGE_SIZE/8, %ecx
>>> +        mov     $SCRUB_PATTERN, %rax
>>> +        rep stosq
>>> +        ret
>>> +        .endm
>>> +
>>> +ENTRY(scrub_page_hot)
>>> +        ALTERNATIVE scrub_page_stosq, scrub_page_stosb, X86_FEATURE_ERMS
>>> +        .type scrub_page_hot, @function
>>> +        .size scrub_page_hot, . - scrub_page_hot
>>
>>   From the commit message, it is not clear how the implementation for
>> hot/cold was chosen. Can you outline in the commit message what are the
>> assumption for each helper?
> 
> I've added 'The goal is for accesses of "cold" pages to not
> disturb caches (albeit finding a good balance between this
> and the higher latency looks to be difficult).'
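As an aside, for readers without the asm in front of them: a portable C model of the cold path might look like the below (the `_sk` names are hypothetical stand-ins; Xen's real routine uses movnti non-temporal stores plus an sfence so the writes bypass the cache, which plain C stores cannot express, and 0xc2 is the debug-build scrub pattern):

```c
#include <stdint.h>

#define PAGE_SIZE_SK     4096u
#define SCRUB_PATTERN_SK 0xc2c2c2c2c2c2c2c2ULL  /* debug-build pattern */

/*
 * Portable model of scrub_page_cold: fill a page with the scrub
 * pattern.  The real routine uses movnti (non-temporal stores)
 * followed by sfence so the scrub does not pull the page into the
 * cache; plain stores here only model the data written, not the
 * cache behaviour.
 */
static void scrub_page_cold_sk(void *page)
{
    uint64_t *p = page;
    unsigned int i;

    for ( i = 0; i < PAGE_SIZE_SK / 8; i++ )
        p[i] = SCRUB_PATTERN_SK;
}
```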
> 
>>> @@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
>>>        if ( first_dirty != INVALID_DIRTY_IDX ||
>>>             (scrub_debug && !(memflags & MEMF_no_scrub)) )
>>>        {
>>> +        bool cold = d && d != current->domain;
>>
>> So the assumption is if the domain is not running, then the content is
>> not in the cache. Is that correct?
> 
> Not exactly: For one, instead of "not running" it is "is not the current
> domain", i.e. there may still be vCPU-s of the domain running elsewhere.
> And for the cache the question isn't so much of "is in cache", but to
> avoid needlessly bringing contents into the cache when the data is
> unlikely to be used again soon.

Ok. Can this be clarified in the commit message?
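To make the distinction concrete, a minimal C model of the heuristic could be the following (the struct definitions and `current` macro are simplified stand-ins for Xen's real ones; the predicate itself mirrors the patch's expression):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's domain/vcpu structures. */
struct domain { int domain_id; };
struct vcpu   { struct domain *domain; };

static struct domain dom_self  = { 1 };
static struct domain dom_other = { 2 };
static struct vcpu   this_vcpu = { &dom_self };
#define current (&this_vcpu)

/*
 * Mirrors the patch's "bool cold = d && d != current->domain;":
 * scrub cold (cache-bypassing) only when allocating on behalf of a
 * domain other than the one currently running here, i.e. when the
 * page is unlikely to be accessed again soon on this CPU.
 */
static bool scrub_cold(const struct domain *d)
{
    return d && d != current->domain;
}
```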

As to the approach itself, I'd like an ack from one of the x86 
maintainers to confirm that distinguishing cold vs hot pages is worth it.

Cheers,

[1] 
<de46590ad566d9be55b26eaca0bc4dc7fbbada59.1585063311.git.hongyxia@amazon.com>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 09:45:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 09:45:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136405.252915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojud-0006Bi-75; Thu, 03 Jun 2021 09:45:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136405.252915; Thu, 03 Jun 2021 09:45:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lojud-0006Bb-40; Thu, 03 Jun 2021 09:45:15 +0000
Received: by outflank-mailman (input) for mailman id 136405;
 Thu, 03 Jun 2021 09:45:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lojub-0006BR-86; Thu, 03 Jun 2021 09:45:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lojua-0002e5-T9; Thu, 03 Jun 2021 09:45:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lojua-0002mL-Gm; Thu, 03 Jun 2021 09:45:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lojua-0005dr-Fp; Thu, 03 Jun 2021 09:45:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=E04LCRAREQZcGa93xxzYF5bKLMiT/BVBXW48pZyYs9E=; b=25HhQm7A/LdW6Q39euT+8edJw5
	bBK60EUD2M9tsRvEsBLkecLtqxTZBfxWCGx5+mSyK5KCuoDAhzWhNW5rWvdP6NJ0njv9ztgkuCzem
	x08Hb3448ValmBOF9sxf0xpNadqNyNHoUeFuFe11/kkoyN5HfSJyiHFm9PTfP2P49bms=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162342-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162342: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8c345b3e6a736d4985b2bca6f7f24b985900de63
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 09:45:12 +0000

flight 162342 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162342/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                8c345b3e6a736d4985b2bca6f7f24b985900de63
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  287 days
Failing since        152659  2020-08-21 14:07:39 Z  285 days  529 attempts
Testing same since   162342  2021-06-03 01:09:37 Z    0 days    1 attempts

------------------------------------------------------------
520 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 166261 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 11:55:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 11:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136419.252940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lolwS-00017M-1S; Thu, 03 Jun 2021 11:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136419.252940; Thu, 03 Jun 2021 11:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lolwR-00017F-Up; Thu, 03 Jun 2021 11:55:15 +0000
Received: by outflank-mailman (input) for mailman id 136419;
 Thu, 03 Jun 2021 11:55:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yeTI=K5=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lolwQ-000179-WC
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 11:55:15 +0000
Received: from mail-lf1-x130.google.com (unknown [2a00:1450:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d3d0e52-4435-4aca-9e4e-94d34855bb16;
 Thu, 03 Jun 2021 11:55:13 +0000 (UTC)
Received: by mail-lf1-x130.google.com with SMTP id p17so7621649lfc.6
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 04:55:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d3d0e52-4435-4aca-9e4e-94d34855bb16
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=EpM6Yqr00wTEWuXreXquvTVwpQ5MOp19mRj8EEUbqIk=;
        b=CwRJuPSeN0OvZ45C85AUDzswzI+YW4S9qQb6poFiff70a6/jZf/CCIy1nl4+D6vJjB
         98NGxOPP0BUt2U0HXsA4x0bQ79KCjDzeYMxGSXePt2ZgaZC77GRwgVqsdjk4wsbaxuiW
         Dbb0hYF2A7fiBpZsU66nh1vmyvmmOeN/TCtJwjAb/WbaGnP+i+9SMEjJuxg11PWjatZ2
         SVxSaEksVgFwzYMNfzP/hOo7NJRupMqB/KwApXMLew8y7lecKzrTcZPIVJW4429M45SR
         Bg6hOnKe69XZ0mw6VePdDJmkxH05xhG/XroUJwxypuJO+cwi6pbd/vhbxgb3SwtEA3An
         6VNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=EpM6Yqr00wTEWuXreXquvTVwpQ5MOp19mRj8EEUbqIk=;
        b=YxcTX/DJb9UYwvX+JtX8YLx0L8VfMJwBMw7FbBWl45mND5gokmXVZ7C2+W9K32YO0S
         YCWSVhDwRbJwpNx/gEnpDn3hfV4mVGPYbR0NCUGELziZUXHrCqdYvhZxOxxtYZKmUv5N
         HYki7Ezh1e9PMwg+zwHXHBSs6nF91WnEc+52y2wGZByDB6a2bMjcQIKvxOdJBg6DwPuf
         z9gjI3U4IlPGGU8tJ8eeKTN6OUCUsBnzFgFtJennt8R+xwm02IPZPJNoQ2/DIe5Tggv0
         DpulCWWzTy3KQVvQK2wNqrf1ZhpspEwOPvk5i72aOfSxZzsziuMGWIBgQ02Bn0WNtkRe
         qq/g==
X-Gm-Message-State: AOAM533cKSGJA7My74WhPA4sM7XcAPfPm6d3JsboRqUs0zdshrwLzHfF
	OsfPGSw74YDzjl3rMMhPQw8yN9K7bsF6utYDt+I=
X-Google-Smtp-Source: ABdhPJwyivv+crQ7rx0EYCzqBC39js/ujiICRam711RxBEblvFzPC1Jg/bNWYsjdg1iKUlMIqUoUzF2vPKnDaUGLpgE=
X-Received: by 2002:a19:ed04:: with SMTP id y4mr10699984lfy.562.1622721312122;
 Thu, 03 Jun 2021 04:55:12 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-5-jandryuk@gmail.com>
 <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com> <CAKf6xps4NuBxMpgCo_duWU1ZXB8x8B8uszb3uNyb6kABxUhNHA@mail.gmail.com>
 <2c3400e8-8236-8558-08e4-37c8b1494de7@suse.com>
In-Reply-To: <2c3400e8-8236-8558-08e4-37c8b1494de7@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 3 Jun 2021 07:55:00 -0400
Message-ID: <CAKf6xpvCkzHOZsBY2yMQSVxq844_muaAaKd-JZUQfd7UCrXLVg@mail.gmail.com>
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 28, 2021 at 2:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 27.05.2021 20:50, Jason Andryuk wrote:
> > On Wed, May 26, 2021 at 11:00 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 03.05.2021 21:28, Jason Andryuk wrote:
> >>> +    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
> >>> +                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
> >>> +    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
> >>> +    {
> >>> +        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
> >>> +            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
> >>> +
> >>> +        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);
> >>
> >> Missing "else" above here?
> >
> > Are unbalanced braces acceptable or must they be balanced?  Is this acceptable:
> > if ()
> >   one;
> > else {
> >   two;
> >   three;
> > }
>
> Yes, it is. But I don't see how the question relates to my comment.
> All that needs to go in the else's body is the hwp_verbose().

'val' shouldn't be used to set features when the rdmsr fails, so the
following code needs to be within the else.  Unless you want to rely
on a failed rdmsr returning 0.
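For illustration, the shape being asked for is roughly this (the stub and MSR name handling are hypothetical; only the control flow matters):

```c
#include <stdint.h>
#include <stdio.h>

/* Stub standing in for Xen's rdmsr_safe(): returns non-zero on a
 * faulting read, otherwise stores the MSR value through 'val'. */
static int rdmsr_safe_stub(uint64_t *val, int fault)
{
    if ( fault )
        return -1;
    *val = 0x1;   /* pretend the capability bit is set */
    return 0;
}

/* Consume 'val' only on the success path, rather than relying on a
 * failed read leaving it as 0. */
static uint64_t read_caps(int fault)
{
    uint64_t val;

    if ( rdmsr_safe_stub(&val, fault) )
    {
        printf("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
        return 0;
    }

    printf("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016llx\n",
           (unsigned long long)val);
    return val;
}
```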

> >>> +static void hdc_set_pkg_hdc_ctl(bool val)
> >>> +{
> >>> +    uint64_t msr;
> >>> +
> >>> +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> >>> +    {
> >>> +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
> >>
> >> I'm not convinced of the need of having such log messages after
> >> failed rdmsr/wrmsr, but I'm definitely against them getting logged
> >> unconditionally. In verbose mode, maybe.
> >
> > We should not fail the rdmsr if our earlier cpuid check passed.  So in
> > that respect these messages can be removed.  The benefit here is that
> > you can see which MSR failed.  If you relied on extable_fixup, you
> > would have to cross-reference manually.  How will administrators know if
> > hwp isn't working properly if there are no messages?
>
> This same question would go for various other MSR accesses which
> might fail, but which aren't accompanied by an explicit log message.
> I wouldn't mind a single summary message reporting if e.g. HWP
> setup failed. More detailed analysis of such would be more of a
> developer's than an administrator's job then anyway.
>
> > For HWP in general, the SDM says to check CPUID for the availability
> > of MSRs.  Therefore rdmsr/wrmsr shouldn't fail.  During development, I
> > saw wrmsr failures with "Valid Maximum" and other "Valid" bits
> > set.  I think that was because I hadn't set up the Package Request
> > MSR.  That has been fixed by not using those bits.  Anyway,
> > rdmsr/wrmsr shouldn't fail, so how much code should be put into
> > checking for those failures which shouldn't happen?
>
> If there's any risk of accesses causing a fault, using *msr_safe()
> is of course the right choice. Beyond that you will need to consider
> what the consequences are. Sadly this needs doing on every single
> case individually. See "Handling unexpected conditions" section of
> ./CODING_STYLE for guidelines (even if the specific constructs
> aren't in use here).

Using *msr_safe(), I think the worst case is that users can't configure
HWP the way they want.  Power/performance may then not match what they
asked for, but Xen won't crash.  The hardware will throttle itself if
needed for self-protection, so that should be okay as well.

> >>> +static void hdc_set_pm_ctl1(bool val)
> >>> +{
> >>> +    uint64_t msr;
> >>> +
> >>> +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
> >>> +    {
> >>> +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
> >>> +
> >>> +        return;
> >>> +    }
> >>> +
> >>> +    msr = val ? IA32_PM_CTL1_HDC_Allow_Block : 0;
> >>
> >> Same here then, and ...
> >>
> >>> +static void hwp_fast_uncore_msrs_ctl(bool val)
> >>> +{
> >>> +    uint64_t msr;
> >>> +
> >>> +    if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
> >>> +        hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL)\n");
> >>> +
> >>> +    msr = val;
> >>
> >> ... here (where you imply bit 0 instead of using a proper
> >> constant).
> >>
> >> Also for all three functions I'm not convinced their names are
> >> in good sync with their parameters being boolean.
> >
> > Would you prefer something named closer to the bit names like:
> > hdc_set_pkg_hdc_ctl -> hdc_set_pkg_enable
> > hdc_set_pm_ctl1 -> hdc_set_allow_block
> > hwp_fast_uncore_msrs_ctl -> hwp_fast_request_enable
>
> My primary request is for function name, purpose, and parameter(s)
> to be in line. So e.g. if you left the parameters as boolean, then
> I think your suggested name changes make sense. Alternatively you
> could make the functions e.g. be full register updating ones, with
> suitable parameters, which could be a mask-to-set and mask-to-clear.

I'm going to use the replacement names while keeping the boolean
argument.  Those MSRs only have single bits defined today, so
functions with boolean parameters are simpler.
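
For comparison, the mask-to-set/mask-to-clear alternative Jan mentions could look roughly like this (MSR access mocked so the fragment runs outside the hypervisor; the constant name follows the patch's bit-0 definition):

```c
#include <stdint.h>

#define IA32_PM_CTL1_HDC_ALLOW_BLOCK (1ULL << 0) /* bit 0, per the patch */

/* Mock MSR storage standing in for rdmsr_safe()/wrmsr_safe(). */
static uint64_t mock_msr;
static int rdmsr_safe(uint64_t *val) { *val = mock_msr; return 0; }
static int wrmsr_safe(uint64_t val)  { mock_msr = val; return 0; }

/*
 * One generic read-modify-write helper instead of a bool-taking
 * function per single-bit MSR.  Returns 0 on success.
 */
static int msr_update(uint64_t set, uint64_t clear)
{
    uint64_t val;

    if ( rdmsr_safe(&val) )
        return -1;

    val = (val & ~clear) | set;

    return wrmsr_safe(val);
}
```

Enabling HDC allow-block would then be msr_update(IA32_PM_CTL1_HDC_ALLOW_BLOCK, 0) and disabling it msr_update(0, IA32_PM_CTL1_HDC_ALLOW_BLOCK); with only single bits defined today, the boolean-parameter form stays simpler, as noted above.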

> >>> +static void hwp_read_capabilities(void *info)
> >>> +{
> >>> +    struct cpufreq_policy *policy = info;
> >>> +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
> >>> +
> >>> +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
> >>> +    {
> >>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
> >>> +                policy->cpu);
> >>> +
> >>> +        return;
> >>> +    }
> >>> +
> >>> +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
> >>> +    {
> >>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
> >>> +
> >>> +        return;
> >>> +    }
> >>> +}
> >>
> >> This function doesn't indicate failure to its caller(s), so am I
> >> to understand that failure to read either of the MSRs is actually
> >> benign to the driver?
> >
> > They really shouldn't fail since the CPUID check passed during
> > initialization.  If you can't read/write MSRs, something is broken and
> > the driver cannot function.  The machine would still run, but HWP
> > would be uncontrollable.  How much care should be given to
> > "impossible" situations?
>
> See above. The main point is to avoid crashing. In the specific
> case here I think you could simply drop both the log messages and
> the early return, assuming the caller can deal fine with the zero
> value(s) that rdmsr_safe() will substitute for the MSR value(s).
> Bailing early, otoh, calls for handing back an error indicator
> imo.

I changed it to have failure set curr_req.raw = -1.  Then
cpufreq_driver.init returns -ENODEV in that case.
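
The revised scheme could look roughly like this (structure and helper names approximated; the real callback takes a void *info / cpufreq_policy argument and rdmsr_safe() is mocked here so the sketch is runnable):

```c
#include <stdint.h>
#include <errno.h>

struct hwp_drv_data {
    uint64_t hwp_caps;
    union { uint64_t raw; } curr_req;
};

/* Mock: set to nonzero to simulate a faulting rdmsr_safe(). */
static int mock_rdmsr_fails;
static int rdmsr_safe(uint64_t *val)
{
    if ( mock_rdmsr_fails )
        return -1;
    *val = 0x7f7f7f00;
    return 0;
}

/* Failure is signalled in-band via curr_req.raw == -1. */
static void hwp_read_capabilities(struct hwp_drv_data *data)
{
    if ( rdmsr_safe(&data->hwp_caps) ||
         rdmsr_safe(&data->curr_req.raw) )
        data->curr_req.raw = (uint64_t)-1;
}

static int hwp_cpufreq_cpu_init(struct hwp_drv_data *data)
{
    hwp_read_capabilities(data);

    return data->curr_req.raw == (uint64_t)-1 ? -ENODEV : 0;
}
```

The in-band sentinel avoids plumbing a return value through the cross-CPU callback while still letting init report -ENODEV.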

> >>> +    policy->governor = &hwp_cpufreq_governor;
> >>> +
> >>> +    data = xzalloc(typeof(*data));
> >>
> >> Commonly we specify the type explicitly in such cases, rather than using
> >> typeof(). I will admit though that I'm not entirely certain which one's
> >> better. But consistency across the code base is perhaps preferable for
> >> the time being.
> >
> > I thought typeof(*data) is always preferable since it will always be
> > the matching type.  I can change it, but I feel like it's a step
> > backwards.
>
> There's exactly one similar use in the entire code base. The original
> idea with xmalloc() was that one would explicitly specify the intended
> type, such that without looking elsewhere one can see what exactly is
> to be allocated. One could further rely on the compiler warning if
> there was a disagreement between declaration and assignment.

Oh, okay.  Thanks for the explanation of xmalloc().
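
The convention being described, with Xen's xzalloc() approximated by calloc() so the fragment stands alone:

```c
#include <stdlib.h>

/* Approximation of Xen's xzalloc(): a zeroed allocation where the type
 * is spelled out at the call site, so the requested size is visible
 * without looking up the variable's declaration. */
#define xzalloc(type) ((type *)calloc(1, sizeof(type)))

struct hwp_drv_data {
    unsigned int energy_perf;
};

static struct hwp_drv_data *alloc_drv_data(void)
{
    /* If 'data' were declared with some other pointer type, this
     * assignment would draw a compiler warning -- the cross-check
     * between declaration and allocation Jan refers to. */
    struct hwp_drv_data *data = xzalloc(struct hwp_drv_data);

    return data;
}
```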

> >>> +    if ( feature_hwp_energy_perf )
> >>> +        data->energy_perf = 0x80;
> >>> +    else
> >>> +        data->energy_perf = 7;
> >>
> >> Where's this 7 coming from? (You do mention the 0x80 at least in the
> >> description.)
> >
> > When HWP energy performance preference is unavailable, it falls back
> > to IA32_ENERGY_PERF_BIAS with a 0-15 range.  Per the SDM Vol3 14.3.4,
> > "A value of 7 roughly translates into a hint to balance performance
> > with energy consumption."  I will add a comment.
>
> Actually, as per a comment on a later patch, I'm instead expecting
> you to add a #define, the name of which will serve as comment.

Yes, sounds good.
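
Something like the following, with hypothetical constant names standing in for whatever the later patch settles on:

```c
/*
 * Hypothetical names for the two "balanced" defaults discussed above.
 * IA32_ENERGY_PERF_BIAS uses a 0-15 scale where 7 roughly balances
 * performance against energy consumption (SDM Vol3 14.3.4); HWP's
 * energy/performance preference uses a 0-255 scale where 0x80 is the
 * balanced midpoint.
 */
#define ENERGY_PERF_BIAS_BALANCE 7
#define HWP_ENERGY_PERF_BALANCE  0x80

static unsigned int default_energy_perf(int feature_hwp_energy_perf)
{
    return feature_hwp_energy_perf ? HWP_ENERGY_PERF_BALANCE
                                   : ENERGY_PERF_BIAS_BALANCE;
}
```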

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 12:51:09 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162343: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 12:50:59 +0000

flight 162343 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162343/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162337
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162337
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162337
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162337
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162337
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162337
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162337
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162337
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162337
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162337
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162337
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162343  2021-06-03 01:52:42 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 13:08:19 2021
Date: Thu, 3 Jun 2021 14:08:03 +0100
From: Tim Deegan <tim@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH] MAINTAINERS: adjust x86/mm/shadow maintainers
Message-ID: <YLjUM0Dzqn0lWA0l@deinos.phlegethon.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline

Better reflect reality: Andrew and Jan are active maintainers
and I review patches.  Keep myself as a reviewer so I can help
with historical context &c.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 MAINTAINERS | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git MAINTAINERS MAINTAINERS
index d46b08a0d2..09a92534bf 100644
--- MAINTAINERS
+++ MAINTAINERS
@@ -591,7 +591,9 @@ F:	xen/arch/x86/mm/mem_sharing.c
 F:	tools/tests/mem-sharing/
 
 X86 SHADOW PAGETABLES
-M:	Tim Deegan <tim@xen.org>
+M:	Jan Beulich <jbeulich@suse.com>
+M:	Andrew Cooper <andrew.cooper3@citrix.com>
+R:	Tim Deegan <tim@xen.org>
 S:	Maintained
 F:	xen/arch/x86/mm/shadow/
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 13:08:24 2021
Subject: Re: [XEN PATCH v2] libxl/arm: provide guests with random seed
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210527085233.69917-1-Sergiy_Kibrik@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <78b26e15-a3ca-e218-9afa-9f443e234260@xen.org>
Date: Thu, 3 Jun 2021 14:08:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210527085233.69917-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 27/05/2021 09:52, Sergiy Kibrik wrote:
> Pass 128 bytes of random seed via FDT, so that guests' CRNGs are better
> seeded early at boot. This is larger than the ChaCha20 key size of 32
> bytes, so each byte of CRNG state will be mixed 4 times using this seed.
> There does not seem to be an advantage in a larger seed, though.
> 
> Depending on its configuration, Linux can use the seed as device
> randomness or just to quickly initialize the CRNG. In either case it
> provides extra randomness to further harden the CRNG.
> 
> CC: Julien Grall <julien@xen.org>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Ian, Wei, can we get an ack for this patch?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 14:44:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 14:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136468.252996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1looZe-0001CC-51; Thu, 03 Jun 2021 14:43:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136468.252996; Thu, 03 Jun 2021 14:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1looZe-0001C5-1d; Thu, 03 Jun 2021 14:43:54 +0000
Received: by outflank-mailman (input) for mailman id 136468;
 Thu, 03 Jun 2021 14:43:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z4WE=K5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1looZc-0001Bz-Td
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 14:43:53 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 151ba2f9-3a92-4677-bfa8-77ff2b1e15ce;
 Thu, 03 Jun 2021 14:43:49 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 394AF219FA;
 Thu,  3 Jun 2021 14:43:48 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 0C5EE118DD;
 Thu,  3 Jun 2021 14:43:48 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ORXNAaTquGAjUgAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 03 Jun 2021 14:43:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 151ba2f9-3a92-4677-bfa8-77ff2b1e15ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622731428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lGBbOnGU+NJtjn0Zub4m7iAkGrmgQwLPP/B7yW5z1oA=;
	b=IzCCZwOSvRkVbKenzalsVNSOZQqSK2uy3AJ3dWARilk65NSTTnqJDD/lOA18r9Pu1vcsFV
	WSLYl9p5jn3l5Xkce2rFuIT870jMXv33pFTP4Q0oAiMzLsAx1tVFgvGtQSB3R2PFALgiC6
	hX9A8G+pJrRvHydm9G6ONRiatwZJlq8=
Subject: Re: [PATCH v4] tools/libs/store: cleanup libxenstore interface
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210512144832.19026-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <d767389d-9fc1-76e8-3928-734b16a11819@suse.com>
Date: Thu, 3 Jun 2021 16:43:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210512144832.19026-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="FVSMMg1HeqT0CRTtZXq8wr0XtOMXlye9u"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--FVSMMg1HeqT0CRTtZXq8wr0XtOMXlye9u
Content-Type: multipart/mixed; boundary="86U1Q58838eTKE0PrpmthRR4TYati14QS";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <d767389d-9fc1-76e8-3928-734b16a11819@suse.com>
Subject: Re: [PATCH v4] tools/libs/store: cleanup libxenstore interface
References: <20210512144832.19026-1-jgross@suse.com>
In-Reply-To: <20210512144832.19026-1-jgross@suse.com>

--86U1Q58838eTKE0PrpmthRR4TYati14QS
Content-Type: multipart/mixed;
 boundary="------------45C0131E856EEBA87D5D6AAD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------45C0131E856EEBA87D5D6AAD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Ping?

On 12.05.21 16:48, Juergen Gross wrote:
> There are some internals in the libxenstore interface which should be
> removed.
>
> Move those functions into xs_lib.c and the related definitions into
> xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
> xenstore_client as some of the internal functions are needed there.
>
> Bump the libxenstore version to 4.0 as the change is incompatible.
> Note that the removed functions should not result in any problem as
> they ought to be used by xenstored or xenstore_client only.
>
> Avoid an enum as part of a structure as the size of an enum is
> compiler implementation dependent.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2: minimal variant (Ian Jackson)
> V3: replace enum with unsigned int (Andrew Cooper)
> V4: full variant again, this time with version bump (Ian Jackson)
> ---
>   tools/include/xenstore_lib.h       |  54 ++------------
>   tools/libs/store/Makefile          |   4 +-
>   tools/libs/store/libxenstore.map   |  10 +--
>   tools/libs/store/xs.c              | 112 +---------------------------
>   tools/xenstore/Makefile            |   4 +-
>   tools/xenstore/utils.h             |  11 +++
>   tools/xenstore/xenstore_client.c   |   2 +
>   tools/xenstore/xenstored_control.c |   1 +
>   tools/xenstore/xenstored_core.c    |  19 +++--
>   tools/xenstore/xenstored_core.h    |   6 +-
>   tools/xenstore/xenstored_watch.c   |   2 +-
>   tools/xenstore/xs_lib.c            | 114 ++++++++++++++++++++++++++++-
>   tools/xenstore/xs_tdb_dump.c       |   2 +-
>   14 files changed, 204 insertions(+), 187 deletions(-)
>   create mode 100644 tools/xenstore/xs_lib.h
>
> diff --git a/tools/include/xenstore_lib.h b/tools/include/xenstore_lib.=
h
> index 4c9b6d1685..2266009ec1 100644
> --- a/tools/include/xenstore_lib.h
> +++ b/tools/include/xenstore_lib.h
> @@ -26,42 +26,26 @@
>   #include <stdint.h>
>   #include <xen/io/xs_wire.h>
>  =20
> -/* Bitmask of permissions. */
> -enum xs_perm_type {
> -	XS_PERM_NONE =3D 0,
> -	XS_PERM_READ =3D 1,
> -	XS_PERM_WRITE =3D 2,
> -	/* Internal use. */
> -	XS_PERM_ENOENT_OK =3D 4,
> -	XS_PERM_OWNER =3D 8,
> -	XS_PERM_IGNORE =3D 16,
> -};
> -
>   struct xs_permissions
>   {
>   	unsigned int id;
> -	enum xs_perm_type perms;
> -};
> -
> -/* Header of the node record in tdb. */
> -struct xs_tdb_record_hdr {
> -	uint64_t generation;
> -	uint32_t num_perms;
> -	uint32_t datalen;
> -	uint32_t childlen;
> -	struct xs_permissions perms[0];
> +	unsigned int perms;	/* Bitmask of permissions. */
> +#define XS_PERM_NONE		0x00
> +#define XS_PERM_READ		0x01
> +#define XS_PERM_WRITE		0x02
> +	/* Internal use. */
> +#define XS_PERM_ENOENT_OK	0x04
> +#define XS_PERM_OWNER		0x08
> +#define XS_PERM_IGNORE		0x10
>   };
>  =20
>   /* Each 10 bits takes ~ 3 digits, plus one, plus one for nul terminat=
or. */
>   #define MAX_STRLEN(x) ((sizeof(x) * CHAR_BIT + CHAR_BIT-1) / 10 * 3 +=20
2)
>  =20
>   /* Path for various daemon things: env vars can override. */
> -const char *xs_daemon_rootdir(void);
>   const char *xs_daemon_rundir(void);
>   const char *xs_daemon_socket(void);
>   const char *xs_daemon_socket_ro(void);
> -const char *xs_domain_dev(void);
> -const char *xs_daemon_tdb(void);
>  =20
>   /* Simple write function: loops for you. */
>   bool xs_write_all(int fd, const void *data, unsigned int len);
> @@ -70,26 +54,4 @@ bool xs_write_all(int fd, const void *data, unsigned=20
int len);
>   bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int n=
um,
>   			 const char *strings);
>  =20
> -/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)=
+1). */
> -bool xs_perm_to_string(const struct xs_permissions *perm,
> -                       char *buffer, size_t buf_len);
> -
> -/* Given a string and a length, count how many strings (nul terms). */=

> -unsigned int xs_count_strings(const char *strings, unsigned int len);
> -
> -/* Sanitising (quoting) possibly-binary strings. */
> -struct expanding_buffer {
> -	char *buf;
> -	int avail;
> -};
> -
> -/* Ensure that given expanding buffer has at least min_avail character=
s. */
> -char *expanding_buffer_ensure(struct expanding_buffer *, int min_avail=
);
> -
> -/* sanitise_value() may return NULL if malloc fails. */
> -char *sanitise_value(struct expanding_buffer *, const char *val, unsig=
ned len);
> -
> -/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 b=
ytes. */
> -void unsanitise_value(char *out, unsigned *out_len_r, const char *in);=

> -
>   #endif /* XENSTORE_LIB_H */
> diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
> index bee57b5629..43b018aa8c 100644
> --- a/tools/libs/store/Makefile
> +++ b/tools/libs/store/Makefile
> @@ -1,8 +1,8 @@
>   XEN_ROOT=$(CURDIR)/../../..
>   include $(XEN_ROOT)/tools/Rules.mk
>
> -MAJOR = 3.0
> -MINOR = 3
> +MAJOR = 4
> +MINOR = 0
>
>   ifeq ($(CONFIG_Linux),y)
>   APPEND_LDFLAGS += -ldl
> diff --git a/tools/libs/store/libxenstore.map b/tools/libs/store/libxenstore.map
> index 9854305a2c..7e6c7bdd30 100644
> --- a/tools/libs/store/libxenstore.map
> +++ b/tools/libs/store/libxenstore.map
> @@ -1,4 +1,4 @@
> -VERS_3.0.3 {
> +VERS_4.0 {
>   	global:
>   		xs_open;
>   		xs_close;
> @@ -32,18 +32,10 @@ VERS_3.0.3 {
>   		xs_control_command;
>   		xs_debug_command;
>   		xs_suspend_evtchn_port;
> -		xs_daemon_rootdir;
>   		xs_daemon_rundir;
>   		xs_daemon_socket;
>   		xs_daemon_socket_ro;
> -		xs_domain_dev;
> -		xs_daemon_tdb;
>   		xs_write_all;
>   		xs_strings_to_perms;
> -		xs_perm_to_string;
> -		xs_count_strings;
> -		expanding_buffer_ensure;
> -		sanitise_value;
> -		unsanitise_value;
>   	local: *; /* Do not expose anything by default */
>   };
> diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
> index c91377c27f..7a9a8b1656 100644
> --- a/tools/libs/store/xs.c
> +++ b/tools/libs/store/xs.c
> @@ -34,6 +34,7 @@
>   #include <stdint.h>
>   #include <errno.h>
>   #include "xenstore.h"
> +#include "xs_lib.h"
>   #include "list.h"
>   #include "utils.h"
>
> @@ -1358,117 +1359,6 @@ static void *read_thread(void *arg)
>   }
>   #endif
>
> -char *expanding_buffer_ensure(struct expanding_buffer *ebuf, int min_avail)
> -{
> -	int want;
> -	char *got;
> -
> -	if (ebuf->avail >= min_avail)
> -		return ebuf->buf;
> -
> -	if (min_avail >= INT_MAX/3)
> -		return 0;
> -
> -	want = ebuf->avail + min_avail + 10;
> -	got = realloc(ebuf->buf, want);
> -	if (!got)
> -		return 0;
> -
> -	ebuf->buf = got;
> -	ebuf->avail = want;
> -	return ebuf->buf;
> -}
> -
> -char *sanitise_value(struct expanding_buffer *ebuf,
> -		     const char *val, unsigned len)
> -{
> -	int used, remain, c;
> -	unsigned char *ip;
> -
> -#define ADD(c) (ebuf->buf[used++] = (c))
> -#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
> -
> -	assert(len < INT_MAX/5);
> -
> -	ip = (unsigned char *)val;
> -	used = 0;
> -	remain = len;
> -
> -	if (!expanding_buffer_ensure(ebuf, remain + 1))
> -		return NULL;
> -
> -	while (remain-- > 0) {
> -		c= *ip++;
> -
> -		if (c >= ' ' && c <= '~' && c != '\\') {
> -			ADD(c);
> -			continue;
> -		}
> -
> -		if (!expanding_buffer_ensure(ebuf, used + remain + 5))
> -			/* for "<used>\\nnn<remain>\0" */
> -			return 0;
> -
> -		ADD('\\');
> -		switch (c) {
> -		case '\t':  ADD('t');   break;
> -		case '\n':  ADD('n');   break;
> -		case '\r':  ADD('r');   break;
> -		case '\\':  ADD('\\');  break;
> -		default:
> -			if (c < 010) ADDF("%03o", c);
> -			else         ADDF("x%02x", c);
> -		}
> -	}
> -
> -	ADD(0);
> -	assert(used <= ebuf->avail);
> -	return ebuf->buf;
> -
> -#undef ADD
> -#undef ADDF
> -}
> -
> -void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
> -{
> -	const char *ip;
> -	char *op;
> -	unsigned c;
> -	int n;
> -
> -	for (ip = in, op = out; (c = *ip++); *op++ = c) {
> -		if (c == '\\') {
> -			c = *ip++;
> -
> -#define GETF(f) do {					\
> -			n = 0;				\
> -			sscanf(ip, f "%n", &c, &n);	\
> -			ip += n;			\
> -		} while (0)
> -
> -			switch (c) {
> -			case 't':              c= '\t';            break;
> -			case 'n':              c= '\n';            break;
> -			case 'r':              c= '\r';            break;
> -			case '\\':             c= '\\';            break;
> -			case 'x':                    GETF("%2x");  break;
> -			case '0': case '4':
> -			case '1': case '5':
> -			case '2': case '6':
> -			case '3': case '7':    --ip; GETF("%3o");  break;
> -			case 0:                --ip;               break;
> -			default:;
> -			}
> -#undef GETF
> -		}
> -	}
> -
> -	*op = 0;
> -
> -	if (out_len_r)
> -		*out_len_r = op - out;
> -}
> -
>   /*
>    * Local variables:
>    *  mode: C
> diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
> index ab89e22d3a..01c9ccc70f 100644
> --- a/tools/xenstore/Makefile
> +++ b/tools/xenstore/Makefile
> @@ -78,8 +78,8 @@ xenstored.a: $(XENSTORED_OBJS)
>   $(CLIENTS): xenstore
>   	ln -f xenstore $@
>
> -xenstore: xenstore_client.o
> -	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
> +xenstore: xenstore_client.o xs_lib.o
> +	$(CC) $^ $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
>
>   xenstore-control: xenstore_control.o
>   	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
> diff --git a/tools/xenstore/utils.h b/tools/xenstore/utils.h
> index 87713a8e5d..9d012b97c1 100644
> --- a/tools/xenstore/utils.h
> +++ b/tools/xenstore/utils.h
> @@ -7,6 +7,17 @@
>
>   #include <xen-tools/libs.h>
>
> +#include "xenstore_lib.h"
> +
> +/* Header of the node record in tdb. */
> +struct xs_tdb_record_hdr {
> +	uint64_t generation;
> +	uint32_t num_perms;
> +	uint32_t datalen;
> +	uint32_t childlen;
> +	struct xs_permissions perms[0];
> +};
> +
>   /* Is A == B ? */
>   #define streq(a,b) (strcmp((a),(b)) == 0)
>
> diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
> index ddbafc5175..0628ba275e 100644
> --- a/tools/xenstore/xenstore_client.c
> +++ b/tools/xenstore/xenstore_client.c
> @@ -22,6 +22,8 @@
>
>   #include <sys/ioctl.h>
>
> +#include "xs_lib.h"
> +
>   #define PATH_SEP '/'
>   #define MAX_PATH_LEN 256
>
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xensto=
red_control.c
> index 52d4817679..995f671f35 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -34,6 +34,7 @@
>  =20
>   #include "utils.h"
>   #include "talloc.h"
> +#include "xs_lib.h"
>   #include "xenstored_core.h"
>   #include "xenstored_control.h"
>   #include "xenstored_domain.h"
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored=
_core.c
> index 02ae390e25..4b7b71cfb3 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -46,7 +46,7 @@
>   #include "utils.h"
>   #include "list.h"
>   #include "talloc.h"
> -#include "xenstore_lib.h"
> +#include "xs_lib.h"
>   #include "xenstored_core.h"
>   #include "xenstored_watch.h"
>   #include "xenstored_transaction.h"
> @@ -542,11 +542,11 @@ static int write_node(struct connection *conn, st=
ruct node *node,
>   	return write_node_raw(conn, &key, node, no_quota_check);
>   }
>  =20
> -enum xs_perm_type perm_for_conn(struct connection *conn,
> -				const struct node_perms *perms)
> +unsigned int perm_for_conn(struct connection *conn,
> +			   const struct node_perms *perms)
>   {
>   	unsigned int i;
> -	enum xs_perm_type mask =3D XS_PERM_READ|XS_PERM_WRITE|XS_PERM_OWNER;
> +	unsigned int mask =3D XS_PERM_READ|XS_PERM_WRITE|XS_PERM_OWNER;
>  =20
>   	/* Owners and tools get it all... */
>   	if (!domain_is_unprivileged(conn) || perms->p[0].id =3D=3D conn->id
> @@ -584,7 +584,7 @@ char *get_parent(const void *ctx, const char *node)=

>    * Temporary memory allocations are done with ctx.
>    */
>   static int ask_parents(struct connection *conn, const void *ctx,
> -		       const char *name, enum xs_perm_type *perm)
> +		       const char *name, unsigned int *perm)
>   {
>   	struct node *node;
>  =20
> @@ -618,10 +618,9 @@ static int ask_parents(struct connection *conn, co=
nst void *ctx,
>    * Temporary memory allocations are done with ctx.
>    */
>   static int errno_from_parents(struct connection *conn, const void *ct=
x,
> -			      const char *node, int errnum,
> -			      enum xs_perm_type perm)
> +			      const char *node, int errnum, unsigned int perm)
>   {
> -	enum xs_perm_type parent_perm =3D XS_PERM_NONE;
> +	unsigned int parent_perm =3D XS_PERM_NONE;
>  =20
>   	/* We always tell them about memory failures. */
>   	if (errnum =3D=3D ENOMEM)
> @@ -641,7 +640,7 @@ static int errno_from_parents(struct connection *co=
nn, const void *ctx,
>   static struct node *get_node(struct connection *conn,
>   			     const void *ctx,
>   			     const char *name,
> -			     enum xs_perm_type perm)
> +			     unsigned int perm)
>   {
>   	struct node *node;
>  =20
> @@ -873,7 +872,7 @@ static struct node *get_node_canonicalized(struct c=
onnection *conn,
>   					   const void *ctx,
>   					   const char *name,
>   					   char **canonical_name,
> -					   enum xs_perm_type perm)
> +					   unsigned int perm)
>   {
>   	char *tmp_name;
>  =20
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored=
_core.h
> index b50ea3f57d..6a6d0448e8 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -185,8 +185,8 @@ void send_ack(struct connection *conn, enum xsd_soc=
kmsg_type type);
>   char *canonicalize(struct connection *conn, const void *ctx, const ch=
ar *node);
>  =20
>   /* Get access permissions. */
> -enum xs_perm_type perm_for_conn(struct connection *conn,
> -				const struct node_perms *perms);
> +unsigned int perm_for_conn(struct connection *conn,
> +			   const struct node_perms *perms);
>  =20
>   /* Write a node to the tdb data base. */
>   int write_node_raw(struct connection *conn, TDB_DATA *key, struct nod=
e *node,
> @@ -200,8 +200,6 @@ struct connection *new_connection(connwritefn_t *wr=
ite, connreadfn_t *read);
>   struct connection *get_connection_by_id(unsigned int conn_id);
>   void check_store(void);
>   void corrupt(struct connection *conn, const char *fmt, ...);
> -enum xs_perm_type perm_for_conn(struct connection *conn,
> -				const struct node_perms *perms);
>  =20
>   /* Is this a valid node name? */
>   bool is_valid_nodename(const char *node);
> diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstore=
d_watch.c
> index db89e0141f..aca0a71bad 100644
> --- a/tools/xenstore/xenstored_watch.c
> +++ b/tools/xenstore/xenstored_watch.c
> @@ -124,7 +124,7 @@ static bool watch_permitted(struct connection *conn=
, const void *ctx,
>   			    const char *name, struct node *node,
>   			    struct node_perms *perms)
>   {
> -	enum xs_perm_type perm;
> +	unsigned int perm;
>   	struct node *parent;
>   	char *parent_name;
>  =20
> diff --git a/tools/xenstore/xs_lib.c b/tools/xenstore/xs_lib.c
> index 80c03acbea..10fa4c3ad0 100644
> --- a/tools/xenstore/xs_lib.c
> +++ b/tools/xenstore/xs_lib.c
> @@ -16,12 +16,13 @@
>       License along with this library; If not, see <http://www.gnu.org/licenses/>.
>   */
>
> +#include <assert.h>
>   #include <unistd.h>
>   #include <stdio.h>
>   #include <string.h>
>   #include <stdlib.h>
>   #include <errno.h>
> -#include "xenstore_lib.h"
> +#include "xs_lib.h"
>
>   /* Common routines for the Xen store daemon and client library. */
>
> @@ -179,3 +180,114 @@ unsigned int xs_count_strings(const char *strings, unsigned int len)
>
>   	return num;
>   }
> +
> +char *expanding_buffer_ensure(struct expanding_buffer *ebuf, int min_avail)
> +{
> +	int want;
> +	char *got;
> +
> +	if (ebuf->avail >= min_avail)
> +		return ebuf->buf;
> +
> +	if (min_avail >= INT_MAX/3)
> +		return 0;
> +
> +	want = ebuf->avail + min_avail + 10;
> +	got = realloc(ebuf->buf, want);
> +	if (!got)
> +		return 0;
> +
> +	ebuf->buf = got;
> +	ebuf->avail = want;
> +	return ebuf->buf;
> +}
> +
> +char *sanitise_value(struct expanding_buffer *ebuf,
> +		     const char *val, unsigned len)
> +{
> +	int used, remain, c;
> +	unsigned char *ip;
> +
> +#define ADD(c) (ebuf->buf[used++] = (c))
> +#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
> +
> +	assert(len < INT_MAX/5);
> +
> +	ip = (unsigned char *)val;
> +	used = 0;
> +	remain = len;
> +
> +	if (!expanding_buffer_ensure(ebuf, remain + 1))
> +		return NULL;
> +
> +	while (remain-- > 0) {
> +		c= *ip++;
> +
> +		if (c >= ' ' && c <= '~' && c != '\\') {
> +			ADD(c);
> +			continue;
> +		}
> +
> +		if (!expanding_buffer_ensure(ebuf, used + remain + 5))
> +			/* for "<used>\\nnn<remain>\0" */
> +			return 0;
> +
> +		ADD('\\');
> +		switch (c) {
> +		case '\t':  ADD('t');   break;
> +		case '\n':  ADD('n');   break;
> +		case '\r':  ADD('r');   break;
> +		case '\\':  ADD('\\');  break;
> +		default:
> +			if (c < 010) ADDF("%03o", c);
> +			else         ADDF("x%02x", c);
> +		}
> +	}
> +
> +	ADD(0);
> +	assert(used <= ebuf->avail);
> +	return ebuf->buf;
> +
> +#undef ADD
> +#undef ADDF
> +}
> +
> +void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
> +{
> +	const char *ip;
> +	char *op;
> +	unsigned c;
> +	int n;
> +
> +	for (ip = in, op = out; (c = *ip++); *op++ = c) {
> +		if (c == '\\') {
> +			c = *ip++;
> +
> +#define GETF(f) do {					\
> +			n = 0;				\
> +			sscanf(ip, f "%n", &c, &n);	\
> +			ip += n;			\
> +		} while (0)
> +
> +			switch (c) {
> +			case 't':		c= '\t';		break;
> +			case 'n':		c= '\n';		break;
> +			case 'r':		c= '\r';		break;
> +			case '\\':		c= '\\';		break;
> +			case 'x':		GETF("%2x");		break;
> +			case '0': case '4':
> +			case '1': case '5':
> +			case '2': case '6':
> +			case '3': case '7':	--ip; GETF("%3o");	break;
> +			case 0:			--ip;			break;
> +			default:;
> +			}
> +#undef GETF
> +		}
> +	}
> +
> +	*op = 0;
> +
> +	if (out_len_r)
> +		*out_len_r = op - out;
> +}
> diff --git a/tools/xenstore/xs_lib.h b/tools/xenstore/xs_lib.h
> new file mode 100644
> index 0000000000..efa05997d6
> --- /dev/null
> +++ b/tools/xenstore/xs_lib.h
> @@ -0,0 +1,50 @@
> +/*
> +    Common routines between Xen store user library and daemon.
> +    Copyright (C) 2005 Rusty Russell IBM Corporation
> +
> +    This library is free software; you can redistribute it and/or
> +    modify it under the terms of the GNU Lesser General Public
> +    License as published by the Free Software Foundation; either
> +    version 2.1 of the License, or (at your option) any later version.
> +
> +    This library is distributed in the hope that it will be useful,
> +    but WITHOUT ANY WARRANTY; without even the implied warranty of
> +    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +    Lesser General Public License for more details.
> +
> +    You should have received a copy of the GNU Lesser General Public
> +    License along with this library; If not, see <http://www.gnu.org/licenses/>.
> +*/
> +
> +#ifndef XS_LIB_H
> +#define XS_LIB_H
> +
> +#include "xenstore_lib.h"
> +
> +const char *xs_daemon_rootdir(void);
> +const char *xs_domain_dev(void);
> +const char *xs_daemon_tdb(void);
> +
> +/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
> +bool xs_perm_to_string(const struct xs_permissions *perm,
> +		       char *buffer, size_t buf_len);
> +
> +/* Given a string and a length, count how many strings (nul terms). */
> +unsigned int xs_count_strings(const char *strings, unsigned int len);
> +
> +/* Sanitising (quoting) possibly-binary strings. */
> +struct expanding_buffer {
> +	char *buf;
> +	int avail;
> +};
> +
> +/* Ensure that given expanding buffer has at least min_avail characters. */
> +char *expanding_buffer_ensure(struct expanding_buffer *, int min_avail);
> +
> +/* sanitise_value() may return NULL if malloc fails. */
> +char *sanitise_value(struct expanding_buffer *, const char *val, unsigned len);
> +
> +/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 bytes. */
> +void unsanitise_value(char *out, unsigned *out_len_r, const char *in);
> +
> +#endif /* XS_LIB_H */
> diff --git a/tools/xenstore/xs_tdb_dump.c b/tools/xenstore/xs_tdb_dump.c
> index f74676cf1c..5d2db392b4 100644
> --- a/tools/xenstore/xs_tdb_dump.c
> +++ b/tools/xenstore/xs_tdb_dump.c
> @@ -17,7 +17,7 @@ static uint32_t total_size(struct xs_tdb_record_hdr *hdr)
>   		+ hdr->datalen + hdr->childlen;
>   }
>
> -static char perm_to_char(enum xs_perm_type perm)
> +static char perm_to_char(unsigned int perm)
>   {
>   	return perm == XS_PERM_READ ? 'r' :
>   		perm == XS_PERM_WRITE ? 'w' :
>


--------------45C0131E856EEBA87D5D6AAD
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------45C0131E856EEBA87D5D6AAD--

--86U1Q58838eTKE0PrpmthRR4TYati14QS--

--FVSMMg1HeqT0CRTtZXq8wr0XtOMXlye9u
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC46qMFAwAAAAAACgkQsN6d1ii/Ey++
Hgf/XIHKkQSNgr4zyO/5WaY/PfA2KtLZoICpHac9QVj/J8cla6FuAkEkLUJtiuRZkPPRV4SP6k5/
anisMBLy4V82xPB7eZffTTGqCY6k2rmsXui/qx6FQ8ELHp9AsS7DXrJd/mB2e3v1mxrKSRozyjj9
zZ4uFFMlRFj56u4G3s5oHrbRDdZ+F2FLMkVNLxB1MCLNVFIuY0xSUI7fnxxpynkV//ZfzpDdNpBc
SPjPIKUTxZbl9GE0x7WQRhkL7v+SP9iex3WLsepTCkQTIknKT85+ZDFllXAQvYYeg27ASMFoQ1sY
aua1s6r2gXElYwoncHe1ZtF5dwP4lGqfZM+ogxXssQ==
=F8bT
-----END PGP SIGNATURE-----

--FVSMMg1HeqT0CRTtZXq8wr0XtOMXlye9u--


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 15:37:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 15:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136480.253017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPR-0006M6-Kf; Thu, 03 Jun 2021 15:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136480.253017; Thu, 03 Jun 2021 15:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPR-0006Lz-Hf; Thu, 03 Jun 2021 15:37:25 +0000
Received: by outflank-mailman (input) for mailman id 136480;
 Thu, 03 Jun 2021 15:37:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lPzP=K5=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lopPR-00065X-1k
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 15:37:25 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9c50a91-d304-48dd-becb-79553ea5dde0;
 Thu, 03 Jun 2021 15:37:21 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id h12so2320071pfe.2
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 08:37:21 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:18:efec::4b1])
 by smtp.gmail.com with ESMTPSA id
 r10sm3237979pga.48.2021.06.03.08.37.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 03 Jun 2021 08:37:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9c50a91-d304-48dd-becb-79553ea5dde0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=BVYx1tfLGUQhuXKxLmm7FvDthvCK68hj9EtKYIRIQHQ=;
        b=miB2ljSk1RA9tY/uYJGQ9mMwA4UDMc4OP5R2SPNH1XSMN+zppyZP3Jc++5YNxmzOby
         kALkMRui/3t9gSuqF33Vb3xRJqf9zGMvEy1ThQuUuoC56/3xsrdMaI/j4lWWgNSF2sTJ
         COY0x0gCbiM503+Xvvp9Nsgl9aAqJhwkWUkCuJddQ5kBPKpL6maIzV4Esr2iyVTK2nDK
         TGTOiLPti6ZFCSfL7YPGf1T/ifO0Td+1Cseob26JVYcCljKap3dtfQjqMVk0i9iDWAeR
         pRTkE5bQUe239zrqx8ZeVhOf2I48ijf3+Yv7Odtm6yW32eQYlYwHea3YOPPmHlNYafZU
         cDkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=BVYx1tfLGUQhuXKxLmm7FvDthvCK68hj9EtKYIRIQHQ=;
        b=FVQDDnQhoEcGS5jH7H+QogpCIu/N5fu5w7Qr4PjTGoWeejWhh5GcST2Q9aVcCBN10L
         cAco7dWZMjY6I3ay26u8Ii7wn3h7jsmu/N2h/Djf1zKsoky6ss1JIKAkbAub6AzOJqMm
         nqL3fm0xfiiFDsHw6XW5j/C9y6vC4W8KaVhSSC1EifZ7lqM98D/PwGUVVKN4bvymeZRS
         TiZ4MCqC+49SG9nELtdBi9Sgvc1lkfsBZvK3wLTww8YNneTvc9YKdnWYCwoHjP4Bkrn9
         U5wlbidOTbUHcAd6Vh06o46RxC8ibsVMd1sYDB8fOTcg3Z1QPCOoV0F0lOSw1WB/VbSt
         7pTw==
X-Gm-Message-State: AOAM532JGl4vTaMRlUAUUBmJPDlDlF7UM1Cu7JZF5sPPBL6qJZcKLKAl
	v4g5IQaqUcBnZqOD2143X7w=
X-Google-Smtp-Source: ABdhPJyuSSLvdEikeh+Nq/zhDPHcKBsCeWjVlIXCtWnktcvNT96pyk1AlIWziUacbt3mBXpwaDz9WA==
X-Received: by 2002:aa7:8755:0:b029:2eb:8c8f:d1f1 with SMTP id g21-20020aa787550000b02902eb8c8fd1f1mr271415pfo.11.1622734640422;
        Thu, 03 Jun 2021 08:37:20 -0700 (PDT)
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for
 Isolation VM
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, kys@microsoft.com,
 haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
 decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 x86@kernel.org, hpa@zytor.com, arnd@arndb.de, dave.hansen@linux.intel.com,
 luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-10-ltykernel@gmail.com>
 <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
 <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com>
 <d6714e8b-dcb6-798b-59a4-5bb68f789564@oracle.com>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <1cdf4e6e-6499-e209-d499-7ab82992040b@gmail.com>
Date: Thu, 3 Jun 2021 23:37:06 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <d6714e8b-dcb6-798b-59a4-5bb68f789564@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 6/3/2021 12:02 AM, Boris Ostrovsky wrote:
> 
> On 6/2/21 11:01 AM, Tianyu Lan wrote:
>> Hi Boris:
>>      Thanks for your review.
>>
>> On 6/2/2021 9:16 AM, Boris Ostrovsky wrote:
>>>
>>> On 5/30/21 11:06 AM, Tianyu Lan wrote:
>>>> @@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
>>>>    EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
>>>>      IOMMU_INIT_FINISH(2,
>>>> -          NULL,
>>>> +          hyperv_swiotlb_detect,
>>>>              pci_xen_swiotlb_init,
>>>>              NULL);
>>>
>>>
>>> Could you explain this change?
>>
>> Hyper-V allocates its own swiotlb bounce buffer, so the default
>> swiotlb buffer should not be allocated; swiotlb_init(), called from
>> pci_swiotlb_init(), is what allocates that default buffer. To avoid
>> this, hyperv_swiotlb_detect() is placed as the first entry in the
>> iommu_table_entry list. The detect loop in pci_iommu_alloc() exits
>> once hyperv_swiotlb_detect() succeeds in a Hyper-V VM, so the other
>> iommu_table_entry callbacks are never called.
> 
> 
> 
> Right. But pci_xen_swiotlb_detect() will only do something for Xen PV guests, and those guests don't run on hyperV. It's either xen_pv_domain() (i.e. hypervisor_is_type(X86_HYPER_XEN_PV)) or hypervisor_is_type(X86_HYPER_MS_HYPERV) but never both. So I don't think there needs to be a dependency between the two callbacks.

Yes, the real dependency is between hyperv_swiotlb_detect() and
pci_swiotlb_detect_override()/pci_swiotlb_detect_4gb(). Currently,
pci_swiotlb_detect_override() and pci_swiotlb_detect_4gb() depend on
pci_xen_swiotlb_detect(). To preserve the ordering between
hyperv_swiotlb_detect() and pci_swiotlb_detect_override/4gb(), I made
pci_xen_swiotlb_detect() depend on hyperv_swiotlb_detect(), purely to
keep the order in the IOMMU table. The current iommu_table_entry only
has a single "depend" callback, which is why the Xen detect function
now depends on the Hyper-V one.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 15:37:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 15:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136479.253007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPO-00065k-DA; Thu, 03 Jun 2021 15:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136479.253007; Thu, 03 Jun 2021 15:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPO-00065d-9P; Thu, 03 Jun 2021 15:37:22 +0000
Received: by outflank-mailman (input) for mailman id 136479;
 Thu, 03 Jun 2021 15:37:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=012H=K5=linaro.org=ulf.hansson@srs-us1.protection.inumbo.net>)
 id 1lopPM-00065X-5Y
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 15:37:20 +0000
Received: from mail-vs1-xe35.google.com (unknown [2607:f8b0:4864:20::e35])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ae02d19-7fa9-4830-b9b5-a1820ce5dbbc;
 Thu, 03 Jun 2021 15:37:19 +0000 (UTC)
Received: by mail-vs1-xe35.google.com with SMTP id u188so3214264vsu.8
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 08:37:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ae02d19-7fa9-4830-b9b5-a1820ce5dbbc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=+W0QahdA5jm5lLp4qGqziVkevJrOYYIsAeli4hbSoMY=;
        b=Sczm7Uhat9CAttKCKONsBvTvpdWX5VANaO58Cd0mJFcJ+A28Gj9YspGmhSWIpVtNS9
         gufvY+CGT5FXDIQfjOQ4ExwQIrmSjUqzpGxTG/ZkK/nXRMIOjYsW6gd0+0ljDlvbLbq6
         AyHgMcG1V7VrDrAancwHixbwlfPPPcOtQ/Us++pps4NpXbaz33l8+avCCapAkCFya/10
         FcaqevmvQJz8RurkVjboQlMdJ4KLmCFgJ9aWml7eHxoV4e7eWIKwWiCmbJjrhvsDE5K2
         PLLnqagHSU8ORkIhfE1C9Sx6ZNkln7FGogf+yyT7tKXsIRL7oAHDTXDC0cZ4GcZV2dc7
         4ZRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=+W0QahdA5jm5lLp4qGqziVkevJrOYYIsAeli4hbSoMY=;
        b=i60Y25ESG5hw+gh+qPIGtLA0x3jd9mdIbrShBa85y77VNfoyXpN00yQHTZ1TlB2x1s
         FUfYL5HXCb8wmGn++90wwebaF81bnDBK7YlxpfRsNz891CDksl7inOsrYv2W0bGXl9jw
         ZdyITimHinfUMH5IRUXkQcJ1tO126aZTrBCnQrt/j8SHzVJFW/gkGbBFeqXgi98iZ9jQ
         UVcSddGUKt1PMJxsDHzv3CegjED7QC92LyeahdsFfpF76P5ZiLD7n37bsKf8hvCl99Zj
         SeURqbRd+a6ktyhJnTJv9cTGawmonRpqOW1+xjL7UL9CvDmG3F3IIo7LGqGpOBY75pfh
         eb/w==
X-Gm-Message-State: AOAM5302cNBaQr/Ok/YOIFIyeVkLAU68IKLEGNXjzizHl/iyr2gg7NcR
	yco5A85CYfNkocYhVg6G0jZtmtqH7DTcmad47v5djg==
X-Google-Smtp-Source: ABdhPJzM3LSrPJI4FkB0laodTTeEei+m50SowhpqaGtsNgUdvieHcvbYUQloJUD/a79xzNmd816023E4d1PORKHhPJg=
X-Received: by 2002:a05:6102:3023:: with SMTP id v3mr756919vsa.19.1622734639015;
 Thu, 03 Jun 2021 08:37:19 -0700 (PDT)
MIME-Version: 1.0
References: <20210602065345.355274-1-hch@lst.de> <20210602065345.355274-8-hch@lst.de>
In-Reply-To: <20210602065345.355274-8-hch@lst.de>
From: Ulf Hansson <ulf.hansson@linaro.org>
Date: Thu, 3 Jun 2021 17:36:42 +0200
Message-ID: <CAPDyKFoJssCnHv5tmG4vJJ9m0Zj5HkMEVYvnsjamvyemusZaUg@mail.gmail.com>
Subject: Re: [PATCH 07/30] ms_block: use blk_mq_alloc_disk
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Denis Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, 
	Geoff Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, 
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>, 
	Alex Dubov <oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>, 
	Richard Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, 
	Vasily Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com, 
	linux-block <linux-block@vger.kernel.org>, nbd@other.debian.org, 
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-mmc <linux-mmc@vger.kernel.org>, linux-mtd@lists.infradead.org, 
	linux-s390@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, 2 Jun 2021 at 08:54, Christoph Hellwig <hch@lst.de> wrote:
>
> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
> allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Ulf Hansson <ulf.hansson@linaro.org>

Kind regards
Uffe


> ---
>  drivers/memstick/core/ms_block.c | 25 ++++++++++---------------
>  1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
> index 0bacf4268f83..dac258d12aca 100644
> --- a/drivers/memstick/core/ms_block.c
> +++ b/drivers/memstick/core/ms_block.c
> @@ -2110,21 +2110,17 @@ static int msb_init_disk(struct memstick_dev *card)
>         if (msb->disk_id  < 0)
>                 return msb->disk_id;
>
> -       msb->disk = alloc_disk(0);
> -       if (!msb->disk) {
> -               rc = -ENOMEM;
> +       rc = blk_mq_alloc_sq_tag_set(&msb->tag_set, &msb_mq_ops, 2,
> +                                    BLK_MQ_F_SHOULD_MERGE);
> +       if (rc)
>                 goto out_release_id;
> -       }
>
> -       msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &msb_mq_ops, 2,
> -                                               BLK_MQ_F_SHOULD_MERGE);
> -       if (IS_ERR(msb->queue)) {
> -               rc = PTR_ERR(msb->queue);
> -               msb->queue = NULL;
> -               goto out_put_disk;
> +       msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
> +       if (IS_ERR(msb->disk)) {
> +               rc = PTR_ERR(msb->disk);
> +               goto out_free_tag_set;
>         }
> -
> -       msb->queue->queuedata = card;
> +       msb->queue = msb->disk->queue;
>
>         blk_queue_max_hw_sectors(msb->queue, MS_BLOCK_MAX_PAGES);
>         blk_queue_max_segments(msb->queue, MS_BLOCK_MAX_SEGS);
> @@ -2135,7 +2131,6 @@ static int msb_init_disk(struct memstick_dev *card)
>         sprintf(msb->disk->disk_name, "msblk%d", msb->disk_id);
>         msb->disk->fops = &msb_bdops;
>         msb->disk->private_data = msb;
> -       msb->disk->queue = msb->queue;
>
>         capacity = msb->pages_in_block * msb->logical_block_count;
>         capacity *= (msb->page_size / 512);
> @@ -2155,8 +2150,8 @@ static int msb_init_disk(struct memstick_dev *card)
>         dbg("Disk added");
>         return 0;
>
> -out_put_disk:
> -       put_disk(msb->disk);
> +out_free_tag_set:
> +       blk_mq_free_tag_set(&msb->tag_set);
>  out_release_id:
>         mutex_lock(&msb_disk_lock);
>         idr_remove(&msb_disk_idr, msb->disk_id);
> --
> 2.30.2
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 15:37:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 15:37:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136481.253029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPW-0006gX-Tb; Thu, 03 Jun 2021 15:37:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136481.253029; Thu, 03 Jun 2021 15:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopPW-0006gM-Qc; Thu, 03 Jun 2021 15:37:30 +0000
Received: by outflank-mailman (input) for mailman id 136481;
 Thu, 03 Jun 2021 15:37:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=012H=K5=linaro.org=ulf.hansson@srs-us1.protection.inumbo.net>)
 id 1lopPW-00065X-21
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 15:37:30 +0000
Received: from mail-ua1-x933.google.com (unknown [2607:f8b0:4864:20::933])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12a9bbe6-fa12-4fa4-849d-5f39f80d9427;
 Thu, 03 Jun 2021 15:37:23 +0000 (UTC)
Received: by mail-ua1-x933.google.com with SMTP id l12so3566083uai.0
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 08:37:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12a9bbe6-fa12-4fa4-849d-5f39f80d9427
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=7DKVf+GiS9951bpd4cfEBh88PiRutTQ//wVEud26id8=;
        b=XGhrfqnzBbfnTVSNkFDv5GHmpnKX09udhfIYlIGvsHf5udWupM3hEeaCuAp1/iGcK2
         itxazCeDLeGnN0Ew9V7mjQKrXGASjKXLT43xef2dS4qB5N1wZq329ew5d1FZXKKcqiVw
         6FD/vr0CrXQkvA5Vx5CUZuiBjtgVFz1ZbiUU7kgR3MEWuPhMcO948V0XjU30zv2QSu9a
         tohz+jrjUWOtrbk55SyLep3Y8Ie2TFo0EIpc+U1g5ZdabSEhCf2ga/7pzz9IJem82LbC
         xz7dKg0/t3XaDdcBysnSaVg/DwjADlBkyj4crHUSmvNFlYRf/JXspkWoLbMUskTXooPD
         5/eQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=7DKVf+GiS9951bpd4cfEBh88PiRutTQ//wVEud26id8=;
        b=UJYmgyV+pUzWi9x60Lg0CW2CI0CPL1mQhsejmwHyePUEjvkLsMPfava57X9Ld1la4F
         MGkHJENRW17/iXiAsx6C2AnSGW/mHKIgopYarUnTTCOmrlkjf+EpSHl4okbsVHvqR1pW
         tb3wvPdaIOVPb7Cqa/f6MBAugDHle2wBHDPCH+25cBEj0v42vvTq5fEukcSoo+gbar0p
         7ZcvM7wPW9wQ+CqC/2YVqvj4VUGPQQrx+cpi1DHS7fxJIaKGJdnjurPKk2KyOowu9kaY
         2wdWb7S1dUKGnrDDrNbjEfyfuVxR6XWdtn3Se29cHr4AfWvBTASfvmM13ZSKBxtK3D+g
         wvxw==
X-Gm-Message-State: AOAM533F4JzJT89LO6R78ro5HUni8LDnQqXYmb9klIWHH5l8AYr2yE3n
	dBRaeNTyC+jS2UVENSaziTbOB2ONM+q/BvHnbWSINw==
X-Google-Smtp-Source: ABdhPJyc/MDglzj7ZqHuKLnfpO6vVPHNJJJjAhghObYaMjdvGQtxzeGy+jhwPjnJIpGZowWwyMtWi1MuD2fSyYUnfEo=
X-Received: by 2002:ab0:7c5b:: with SMTP id d27mr407242uaw.15.1622734642855;
 Thu, 03 Jun 2021 08:37:22 -0700 (PDT)
MIME-Version: 1.0
References: <20210602065345.355274-1-hch@lst.de> <20210602065345.355274-9-hch@lst.de>
In-Reply-To: <20210602065345.355274-9-hch@lst.de>
From: Ulf Hansson <ulf.hansson@linaro.org>
Date: Thu, 3 Jun 2021 17:36:45 +0200
Message-ID: <CAPDyKFoh6HKx2rHHRXvw--Ou53TR2wLFGrKCDuetigxQ8QbvfQ@mail.gmail.com>
Subject: Re: [PATCH 08/30] mspro: use blk_mq_alloc_disk
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Denis Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, 
	Geoff Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, 
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>, 
	Alex Dubov <oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>, 
	Richard Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, 
	Vasily Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com, 
	linux-block <linux-block@vger.kernel.org>, nbd@other.debian.org, 
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-mmc <linux-mmc@vger.kernel.org>, linux-mtd@lists.infradead.org, 
	linux-s390@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, 2 Jun 2021 at 08:54, Christoph Hellwig <hch@lst.de> wrote:
>
> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
> allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Ulf Hansson <ulf.hansson@linaro.org>

Kind regards
Uffe


> ---
>  drivers/memstick/core/mspro_block.c | 26 +++++++++++---------------
>  1 file changed, 11 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
> index cf7fe0d58ee7..22778d0e24f5 100644
> --- a/drivers/memstick/core/mspro_block.c
> +++ b/drivers/memstick/core/mspro_block.c
> @@ -1205,21 +1205,17 @@ static int mspro_block_init_disk(struct memstick_dev *card)
>         if (disk_id < 0)
>                 return disk_id;
>
> -       msb->disk = alloc_disk(1 << MSPRO_BLOCK_PART_SHIFT);
> -       if (!msb->disk) {
> -               rc = -ENOMEM;
> +       rc = blk_mq_alloc_sq_tag_set(&msb->tag_set, &mspro_mq_ops, 2,
> +                                    BLK_MQ_F_SHOULD_MERGE);
> +       if (rc)
>                 goto out_release_id;
> -       }
>
> -       msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &mspro_mq_ops, 2,
> -                                               BLK_MQ_F_SHOULD_MERGE);
> -       if (IS_ERR(msb->queue)) {
> -               rc = PTR_ERR(msb->queue);
> -               msb->queue = NULL;
> -               goto out_put_disk;
> +       msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
> +       if (IS_ERR(msb->disk)) {
> +               rc = PTR_ERR(msb->disk);
> +               goto out_free_tag_set;
>         }
> -
> -       msb->queue->queuedata = card;
> +       msb->queue = msb->disk->queue;
>
>         blk_queue_max_hw_sectors(msb->queue, MSPRO_BLOCK_MAX_PAGES);
>         blk_queue_max_segments(msb->queue, MSPRO_BLOCK_MAX_SEGS);
> @@ -1228,10 +1224,10 @@ static int mspro_block_init_disk(struct memstick_dev *card)
>
>         msb->disk->major = major;
>         msb->disk->first_minor = disk_id << MSPRO_BLOCK_PART_SHIFT;
> +       msb->disk->minors = 1 << MSPRO_BLOCK_PART_SHIFT;
>         msb->disk->fops = &ms_block_bdops;
>         msb->usage_count = 1;
>         msb->disk->private_data = msb;
> -       msb->disk->queue = msb->queue;
>
>         sprintf(msb->disk->disk_name, "mspblk%d", disk_id);
>
> @@ -1247,8 +1243,8 @@ static int mspro_block_init_disk(struct memstick_dev *card)
>         msb->active = 1;
>         return 0;
>
> -out_put_disk:
> -       put_disk(msb->disk);
> +out_free_tag_set:
> +       blk_mq_free_tag_set(&msb->tag_set);
>  out_release_id:
>         mutex_lock(&mspro_block_disk_lock);
>         idr_remove(&mspro_block_disk_idr, disk_id);
> --
> 2.30.2
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 16:01:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 16:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136500.253040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopmT-0002Rm-N6; Thu, 03 Jun 2021 16:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136500.253040; Thu, 03 Jun 2021 16:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lopmT-0002Rf-Jt; Thu, 03 Jun 2021 16:01:13 +0000
Received: by outflank-mailman (input) for mailman id 136500;
 Thu, 03 Jun 2021 16:01:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=inOg=K5=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lopmS-0002RZ-92
 for xen-devel@lists.xen.org; Thu, 03 Jun 2021 16:01:12 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8f0bdedf-8f13-4457-9848-3fca0a8d5009;
 Thu, 03 Jun 2021 16:01:10 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id 9CE5BB14F5A;
 Thu,  3 Jun 2021 18:01:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f0bdedf-8f13-4457-9848-3fca0a8d5009
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Thu, 03 Jun 2021 18:01:09 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
Message-ID: <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

Jan Beulich schreef op 2021-06-01 16:53:
> On 01.06.2021 16:44, AL13N wrote:
>> This mailing list is the correct place for the toolstack too? right?
> 
> Yes.

So, what's the plan to fix this? Is the plan to fix the toolstack, or
to put your patch in the kernel as a workaround?

Is there a way to add a regression test or unit test for this?

Does anyone have an idea which patch caused the regression? Could it be
the patch I pointed out, or should I look at a specific file/line? I
can't realistically test all of the patches and/or combinations. I'm
not really at home in the Xen code, and my single Xen server is in
production, so I can only test on weekends...

AL13N


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 16:20:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 16:20:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136507.253051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loq4x-0004iK-AX; Thu, 03 Jun 2021 16:20:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136507.253051; Thu, 03 Jun 2021 16:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loq4x-0004iD-7Z; Thu, 03 Jun 2021 16:20:19 +0000
Received: by outflank-mailman (input) for mailman id 136507;
 Thu, 03 Jun 2021 16:20:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loq4w-0004i7-Ag
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 16:20:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loq4w-0001Us-29
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 16:20:18 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1loq4w-0005es-0z
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 16:20:18 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1loq4s-0006Zd-L2; Thu, 03 Jun 2021 17:20:14 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=dj5r/43lJoGQdSDSc6b6HAMYHd+EFCRB/4PukVSBLZk=; b=eueJejZJQ/IJv00GxpXuhF11Yp
	paFHCJEduHEqyJm0Zan6G/xObj25QV59Hd8YwK82KIoPaDnUCSPugafcr8/eX0elcIeXwKT2WfEEx
	iygyEKYgQ7qNm9E97LEqusuqPtrzDD4pJ7+wpeD9js2Cee9306jwYc8SKrn1OH2tbmn8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24761.318.185554.93009@mariner.uk.xensource.com>
Date: Thu, 3 Jun 2021 17:20:14 +0100
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: [PATCH v4] tools/libs/store: cleanup libxenstore interface
In-Reply-To: <20210512144832.19026-1-jgross@suse.com>
References: <20210512144832.19026-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Juergen Gross writes ("[PATCH v4] tools/libs/store: cleanup libxenstore interface"):
> There are some internals in the libxenstore interface which should be
> removed.

Thanks.  And sorry for not reviewing this sooner.

> Move those functions into xs_lib.c and the related definitions into
> xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
> xenstore_client as some of the internal functions are needed there.
> 
> Bump the libxenstore version to 4.0 as the change is incompatible.
> Note that the removed functions should not result in any problem as
> they ought to be used by xenstored or xenstore_client only.

I am happy with the API and ABI changes.

The situation with the expanding buffer functions (which I think are
the only thing you are moving?) is unfortunate.  We have some
similar code in libxl, and probably elsewhere in the codebase.  It
would be nice to make them shared, but I think that is a task for
another day.

The sanitise value functions should perhaps be exposed with more
sensible names?

I think this was more difficult to review than it needed to be,
particularly because of the mixture of code motion with other
changes.

Anyway, thanks for cleaning this up.  All evidence to the contrary, it
*is* appreciated.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 16:41:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 16:41:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136515.253062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loqPf-00070R-3W; Thu, 03 Jun 2021 16:41:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136515.253062; Thu, 03 Jun 2021 16:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loqPf-00070K-0Y; Thu, 03 Jun 2021 16:41:43 +0000
Received: by outflank-mailman (input) for mailman id 136515;
 Thu, 03 Jun 2021 16:41:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loqPd-0006zr-K0; Thu, 03 Jun 2021 16:41:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loqPd-0001qm-BW; Thu, 03 Jun 2021 16:41:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loqPd-0005JV-2V; Thu, 03 Jun 2021 16:41:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loqPd-0000Yv-1w; Thu, 03 Jun 2021 16:41:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sM3RagdXdD61HLLpYDPuo+8eNzeetocXsJWt+EBq77U=; b=lN8KpfmVxyAa4Glr0wm01izmon
	i0v8F5ELktgvN4jqqb2Fs8G8+4vcOyHtt7HXgLf1+cvsa1TwbU8IM77zCwrvjRodU1OKXS1C6+O3w
	gtU5WtooTASt5nPzjXWiKTjhtxtkBeNhIfko46SC1+/omvEW4Ga/9HhpX+N4vx7Jx5/I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162344-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162344: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=324c92e5e0ee0e993bdb106fac407846ed677f6b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 16:41:41 +0000

flight 162344 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162344/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                324c92e5e0ee0e993bdb106fac407846ed677f6b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  306 days
Failing since        152366  2020-08-01 20:49:34 Z  305 days  522 attempts
Testing same since   162344  2021-06-03 03:41:50 Z    0 days    1 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1666542 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 17:07:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 17:07:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136525.253076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loqoD-00011q-8L; Thu, 03 Jun 2021 17:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136525.253076; Thu, 03 Jun 2021 17:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loqoD-00011j-4z; Thu, 03 Jun 2021 17:07:05 +0000
Received: by outflank-mailman (input) for mailman id 136525;
 Thu, 03 Jun 2021 17:07:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6GYq=K5=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1loqoB-00011d-UI
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 17:07:04 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46e5e7e0-bba6-4822-bd83-4e9117c15df5;
 Thu, 03 Jun 2021 17:07:02 +0000 (UTC)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 153H2vsu020650; Thu, 3 Jun 2021 17:05:16 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 38wx9frsup-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 03 Jun 2021 17:05:16 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 153H2aVP106745;
 Thu, 3 Jun 2021 17:05:15 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2109.outbound.protection.outlook.com [104.47.70.109])
 by aserp3030.oracle.com with ESMTP id 38ubnf1qh2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 03 Jun 2021 17:05:15 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4301.namprd10.prod.outlook.com (2603:10b6:208:1d9::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Thu, 3 Jun
 2021 17:05:12 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4195.023; Thu, 3 Jun 2021
 17:05:12 +0000
Received: from [10.74.96.237] (160.34.88.237) by
 BY3PR03CA0029.namprd03.prod.outlook.com (2603:10b6:a03:39a::34) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20 via Frontend
 Transport; Thu, 3 Jun 2021 17:05:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46e5e7e0-bba6-4822-bd83-4e9117c15df5
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for
 Isolation VM
To: Tianyu Lan <ltykernel@gmail.com>, kys@microsoft.com,
        haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
        decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
        bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
        dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
        akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
        rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
        krish.sadhukhan@oracle.com, saravanand@fb.com,
        Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
        m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
        sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
        xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
        jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
        linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
        linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
        vkuznets@redhat.com, thomas.lendacky@amd.com, brijesh.singh@amd.com,
        sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-10-ltykernel@gmail.com>
 <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com>
 <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com>
 <d6714e8b-dcb6-798b-59a4-5bb68f789564@oracle.com>
 <1cdf4e6e-6499-e209-d499-7ab82992040b@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <099f311b-9614-dac5-ce05-6dad988f8a62@oracle.com>
Date: Thu, 3 Jun 2021 13:04:57 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <1cdf4e6e-6499-e209-d499-7ab82992040b@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0


On 6/3/21 11:37 AM, Tianyu Lan wrote:
>
> Yes, the dependency is between hyperv_swiotlb_detect() and
> pci_swiotlb_detect_override()/pci_swiotlb_detect_4gb(). Currently,
> pci_swiotlb_detect_override() and pci_swiotlb_detect_4gb() depend on
> pci_xen_swiotlb_detect(). To preserve the ordering between
> hyperv_swiotlb_detect() and pci_swiotlb_detect_override/4gb(), I made
> pci_xen_swiotlb_detect() depend on hyperv_swiotlb_detect(), purely to
> keep the order in the IOMMU table. The current iommu_table_entry has
> only one depend callback, and that is why I made the Xen detect
> function depend on the Hyper-V detect function.
>

Ah, ok. Thanks.



-boris



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 19:09:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 19:09:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162346: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=70154d2f82a9058e8316b6e106071c72fcc58718
X-Osstest-Versions-That:
    linux=103f1dbea1ae44731edca02cd7fcfa4a33742cd2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 19:08:56 +0000

flight 162346 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162346/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162247
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162247
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162247
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162247
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162247
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162247
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162247
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162247
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162247
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162247
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162247
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                70154d2f82a9058e8316b6e106071c72fcc58718
baseline version:
 linux                103f1dbea1ae44731edca02cd7fcfa4a33742cd2

Last test of basis   162247  2021-05-28 11:41:53 Z    6 days
Testing same since   162346  2021-06-03 07:12:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alaa Emad <alaaemadhossney.ae@gmail.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <aardelean@deviqon.com>
  Andy Gospodarek <gospo@broadcom.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Atul Gopinathan <atulgopinathan@gmail.com>
  Aurelien Aptel <aaptel@suse.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Boris Burkov <boris@bur.io>
  Catherine Sullivan <csully@google.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chinmay Agarwal <chinagar@codeaurora.org>
  Chris Park <Chris.Park@amd.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian König <christian.koenig@amd.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Colin Ian King <colin.king@canonical.com>
  Cornelia Huck <cohuck@redhat.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniele Palmas <dnlplm@gmail.com>
  David Awogbemila <awogbemila@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  DENG Qingfang <dqfext@gmail.com>
  Dima Chumak <dchumak@nvidia.com>
  Dominik Andreas Schorpp <dominik.a.schorpp@ids.de>
  Dominik Brodowski <linux@dominikbrodowski.net>
  Dongliang Mu <mudongliangabcd@gmail.com>
  Du Cheng <ducheng2@gmail.com>
  Edward Cree <ecree@solarflare.com>
  Eric Farman <farman@linux.ibm.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Florian Fainelli <f.fainelli@gmail.com>
  Francesco Ruggeri <fruggeri@arista.com>
  Fugang Duan <fugang.duan@nxp.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Geoffrey D. Bennett <g@b4.vu>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  James Zhu <James.Zhu@amd.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason Self <jason@bluehome.net>
  Jean Delvare <jdelvare@suse.de>
  Jerry Hoemann <jerry.hoemann@hpe.com>
  Jesse Brandeburg <jesse.brandeburg@intel.com>
  Jim Ma <majinjing3@gmail.com>
  Jingwen Chen <Jingwen.Chen2@amd.com>
  Jiri Slaby <jirislaby@kernel.org>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni Malinen <jouni@codeaurora.org>
  Juergen Borleis <jbe@pengutronix.de>
  Juergen Gross <jgross@suse.com>
  Jussi Maki <joamaki@gmail.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  kernel test robot <lkp@intel.com>
  Khalid Aziz <khalid@gonehiking.org>
  Konrad Jankowski <konrad0.jankowski@intel.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
  Lang Yu <Lang.Yu@amd.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Lucas Stankus <lucas.p.stankus@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Manuel Lauss <manuel.lauss@gmail.com>
  Marcel Holtmann <marcel@holtmann.org>
  Mark Brown <broonie@kernel.org>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Mathy Vanhoef <Mathy.Vanhoef@kuleuven.be>
  Matt Wang <wwentao@vmware.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Grzeschik <m.grzeschik@pengutronix.de>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Ondrej Mosnacek <omosnace@redhat.com>
  Paolo Abeni <pabeni@redhat.com>
  Pavel Skripkin <paskripkin@gmail.com>
  Peter Zijlstra <peterz@infradead.org>
  Phillip Potter <phil@philpotter.co.uk>
  Piotr Skajewski <piotrx.skajewski@intel.com>
  Raju Rangoju <rajur@chelsio.com>
  Randy Dunlap <rdunlap@infradead.org>
  Randy Wright <rwright@hpe.com>
  Richard Fitzgerald <rf@opensource.cirrus.com>
  Rolf Eike Beer <eb@emlix.com>
  Rui Miguel Silva <rui.silva@linaro.org>
  Saeed Mahameed <saeedm@nvidia.com>
  Sargun Dhillon <sargun@sargun.me>
  Sasha Levin <sashal@kernel.org>
  Sean MacLennan <seanm@seanm.ca>
  Shuah Khan <skhan@linuxfoundation.org>
  Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
  Sinan Kaya <okaya@kernel.org>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Sriram R <srirrama@codeaurora.org>
  Stafford Horne <shorne@gmail.com>
  Stefan Roese <sr@denx.de>
  Steve French <stfrench@microsoft.com>
  Stylon Wang <stylon.wang@amd.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tao Liu <thomas.liu@ucloud.cn>
  Tariq Toukan <tariqt@nvidia.com>
  Teava Radu <rateava@gmail.com>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tom Seewald <tseewald@gmail.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tung Nguyen <tung.q.nguyen@dektech.com.au>
  Tycho Andersen <tycho@tycho.pizza>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vladyslav Tarasiuk <vladyslavt@nvidia.com>
  Wen Gong <wgong@codeaurora.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  xinhui pan <xinhui.pan@amd.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  YueHaibing <yuehaibing@huawei.com>
  Yunsheng Lin <linyunsheng@huawei.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
  Zhen Lei <thunder.leizhen@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>
  zhouchuangao <zhouchuangao@vivo.com>
  Zolton Jheng <s6668c2t@gmail.com>
  Zou Wei <zou_wei@huawei.com>
  Éric Piel <eric.piel@trempplin-utc.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
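The hook warnings above have two possible fixes: set the executable bit so the hook actually runs, or silence the advice as the hint itself suggests. A minimal sketch in a throwaway repository (paths here are illustrative, not the osstest server's):

```shell
tmp=$(mktemp -d)
git init -q "$tmp"

# A hook written without the executable bit -- git silently skips it and
# prints the 'advice.ignoredHook' warning seen above.
printf '#!/bin/sh\necho "hook ran"\n' > "$tmp/.git/hooks/post-update"

# Fix 1: mark the hook executable so git will run it.
chmod +x "$tmp/.git/hooks/post-update"

# Fix 2 (alternative): keep skipping the hook but suppress the warning,
# exactly as the hint proposes.
git -C "$tmp" config advice.ignoredHook false
```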
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   103f1dbea1ae..70154d2f82a9  70154d2f82a9058e8316b6e106071c72fcc58718 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 19:37:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 19:37:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136545.253107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lot9C-0006Y3-KF; Thu, 03 Jun 2021 19:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136545.253107; Thu, 03 Jun 2021 19:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lot9C-0006Xw-Fh; Thu, 03 Jun 2021 19:36:54 +0000
Received: by outflank-mailman (input) for mailman id 136545;
 Thu, 03 Jun 2021 19:36:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lot9A-0006Xj-TS; Thu, 03 Jun 2021 19:36:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lot9A-0004my-NV; Thu, 03 Jun 2021 19:36:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lot9A-0005M8-9E; Thu, 03 Jun 2021 19:36:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lot9A-00043X-8b; Thu, 03 Jun 2021 19:36:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=CvYiSE8dTE43q7cC0iHncTRPjYDqi10ARRimYk31he8=; b=sygxVBopAxiILyiJvRmZjv1zRp
	LCW5NYnpUgRRHN2qs5ozhRoj8TMfzWCjxuEFsdtQ4R4FKNBAYikrmx8qhHpx4yxJt3H6y0NL2SDAh
	kAid4SiDpJ1je/b1DHOROBxN4qRKRTR2MEc+z217LuivMNPrFfjn+7nhKIo+8v4gr6gg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-freebsd10-amd64
Message-Id: <E1lot9A-00043X-8b@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Jun 2021 19:36:52 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-freebsd10-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160694/
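The "Bug introduced / Bug not present" pair above is exactly what `git bisect` converges on. A self-contained sketch with a synthetic repository (the real reproduction used the qemuu tree and the guest-saverestore test; the toy predicate below merely stands in for a test script):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.email osstest@example && git config user.name osstest

# Five commits; the "regression" lands in c3 (f >= 3 makes the test fail).
for i in 1 2 3 4 5; do echo "$i" > f; git add f; git commit -qm "c$i"; done
bad=$(git rev-parse HEAD) good=$(git rev-parse HEAD~4)

git bisect start "$bad" "$good" >/dev/null
git bisect run sh -c 'test "$(cat f)" -lt 3' >/dev/null   # stand-in for the real test
first_bad=$(git rev-parse refs/bisect/bad)                # the "Bug introduced" commit
git bisect reset >/dev/null
```

Here `first_bad` resolves to the c3 commit, the analogue of 8af54b9172ff in the report.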


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-saverestore --summary-out=tmp/162355.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-freebsd10-amd64 guest-saverestore
Searching for failure / basis pass:
 162342 fail [host=fiano0] / 160125 [host=albana1] 160119 [host=huxelrebe1] 160113 [host=elbling0] 160104 [host=chardonnay0] 160097 [host=albana0] 160091 [host=chardonnay1] 160088 [host=albana1] 160082 [host=pinot0] 160079 [host=fiano1] 160070 [host=pinot1] 160066 ok.
Failure / basis pass flights: 162342 / 160066
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-1f515342d8d83ef0fff0c3f4ac67232dd8c97565 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#3f8d1885e48e4d72eab0688f604de62e0aea7a38-8c345b3e6a736d4985b2bca6f7f24b985900de63 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-81433aa8a19b36f9e3d50697608c93d8a28bf772 git://xenbits.xen.org/xen.git#9dc46386d89d83c73c41c2b19be83a73957c4393-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 63103 nodes in revision graph
Searching for test results:
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160070 [host=pinot1]
 160079 [host=fiano1]
 160082 [host=pinot0]
 160088 [host=albana1]
 160091 [host=chardonnay1]
 160097 [host=albana0]
 160104 [host=chardonnay0]
 160113 [host=elbling0]
 160119 [host=huxelrebe1]
 160125 [host=albana1]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160662 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160663 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160664 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160666 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160667 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160650 fail irrelevant
 160668 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160670 fail irrelevant
 160671 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160673 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9e7118023fda7c29016038e2292d4d14129b63da b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160674 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160675 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 affc55e761ea4c96b9b2de582d813787a317aeda b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160677 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160678 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160679 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2e8319d456724c3d8514d943dc4607e2f08e88a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160680 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160682 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81cbfd5088690c53541ffd0d74851c8ab055a829 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160683 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160684 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 445a5b4087567bf4d4ce76d394adf78d9d5c88a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160686 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160688 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160689 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160692 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160693 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160694 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162355 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162348 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
Searching for interesting versions
 Result found: flight 160066 (pass), for basis pass
 Result found: flight 162342 (fail), for basis failure
 Repro found: flight 162348 (pass), for basis pass
 Repro found: flight 162355 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160686 (pass), for last pass
 Result found: flight 160688 (fail), for first failure
 Repro found: flight 160689 (pass), for last pass
 Repro found: flight 160692 (fail), for first failure
 Repro found: flight 160693 (pass), for last pass
 Repro found: flight 160694 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160694/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.378534 to fit
pnmtopng: 239 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162355: tolerable FAIL

flight 162355 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162355/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore    fail baseline untested


jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 test-amd64-i386-freebsd10-amd64                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 20:12:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 20:12:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136555.253121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lothr-00029E-NN; Thu, 03 Jun 2021 20:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136555.253121; Thu, 03 Jun 2021 20:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lothr-000297-IQ; Thu, 03 Jun 2021 20:12:43 +0000
Received: by outflank-mailman (input) for mailman id 136555;
 Thu, 03 Jun 2021 20:12:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6GYq=K5=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lothp-00028u-Tc
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 20:12:42 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d86aa39-f469-462f-bfd4-53d20955729a;
 Thu, 03 Jun 2021 20:12:40 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 153K98OH141782;
 Thu, 3 Jun 2021 20:11:55 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2130.oracle.com with ESMTP id 38ub4cvd89-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 03 Jun 2021 20:11:55 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 153KA5o2020924;
 Thu, 3 Jun 2021 20:11:54 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2171.outbound.protection.outlook.com [104.47.57.171])
 by userp3030.oracle.com with ESMTP id 38uaqyky4r-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 03 Jun 2021 20:11:54 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4318.namprd10.prod.outlook.com (2603:10b6:208:1d8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Thu, 3 Jun
 2021 20:11:52 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4195.023; Thu, 3 Jun 2021
 20:11:52 +0000
Received: from [10.74.96.237] (160.34.88.237) by
 SN6PR01CA0001.prod.exchangelabs.com (2603:10b6:805:b6::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.15 via Frontend Transport; Thu, 3 Jun 2021 20:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d86aa39-f469-462f-bfd4-53d20955729a
Authentication-Results: amazon.co.uk; dkim=none (message not signed)
 header.d=none;amazon.co.uk; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
        "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
        "hpa@zytor.com" <hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
        "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
        "linux-mm@kvack.org" <linux-mm@kvack.org>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "axboe@kernel.dk" <axboe@kernel.dk>,
        "davem@davemloft.net" <davem@davemloft.net>,
        "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
        "len.brown@intel.com" <len.brown@intel.com>,
        "pavel@ucw.cz" <pavel@ucw.cz>,
        "peterz@infradead.org" <peterz@infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>,
        "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        "dwmw@amazon.co.uk" <dwmw@amazon.co.uk>
References: <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
 <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
 <20210602193743.GA28861@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <2cb71322-9d3d-395e-293b-24888f5be759@oracle.com>
Date: Thu, 3 Jun 2021 16:11:46 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210602193743.GA28861@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [160.34.88.237]
X-ClientProxiedBy: SN6PR01CA0001.prod.exchangelabs.com (2603:10b6:805:b6::14)
 To BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 60065c02-fa52-4d39-8512-08d926cbd37a
X-MS-TrafficTypeDiagnostic: MN2PR10MB4318:
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 60065c02-fa52-4d39-8512-08d926cbd37a
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2021 20:11:52.0453
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VmtzoBFR7zBoFoDClJFpffv91j11nHxxY7Puxp9Qxx7e9L40cQ8X47un1L1vMhq4cnZNWNWwxvIXyj2Ey78Lod6q29Y2FOMEfBeuN5MDStE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4318
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10004 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 bulkscore=0
 suspectscore=0 spamscore=0 adultscore=0 mlxscore=0 phishscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106030137
X-Proofpoint-GUID: XdOfLFfusTcYPM6RTxXcOxVnLMyrk-_F
X-Proofpoint-ORIG-GUID: XdOfLFfusTcYPM6RTxXcOxVnLMyrk-_F


On 6/2/21 3:37 PM, Anchal Agarwal wrote:
> On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
>>
> The resume won't fail because in the image the xen_vcpu and xen_vcpu_info are
> same. These are the same values that got in there during saving of the
> hibernation image. So whatever xen_vcpu got as a value during boot time registration on resume is
> essentially lost once the jump into the saved kernel image happens. Interesting
> part is if KASLR is not enabled boot time vcpup mfn is same as in the image.


Do you start your guest right after you've hibernated it? What happens if you create (and keep running) a few other guests in between? The mfn would likely be different then, I'd think.


> Once you enable KASLR this value changes sometimes and whenever that happens
> resume gets stuck. Does that make sense?
>
> No it does not resume successfully if hypercall fails because I was trying to
> explicitly reset vcpu and invoke hypercall.
> I am just wondering why does restore logic fails to work here or probably I am
> missing a critical piece here.


If you are not using KASLR then xen_vcpu_info is at the same address every time you boot. So whatever you registered before hibernating stays the same when you boot the second time and register again, and so the comparison in xen_vcpu_setup() succeeds. (Mostly by chance.)


But if KASLR is on then, even when this comparison does not fail, the xen_vcpu pointer in the loaded image becomes bogus, because xen_vcpu is now registered for a different xen_vcpu_info address during boot.
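As a toy illustration of the failure mode being described (hypothetical names and addresses, not the actual Xen or kernel code): the hypervisor side records whatever vcpu_info address the booting kernel supplied, so a pointer saved in a hibernation image only stays valid if the next boot happens to register the very same address.

```python
# Toy model of the described failure mode: not Xen code, just the pointer logic.

class Hypervisor:
    def __init__(self):
        self.registered = {}  # cpu -> guest address of its vcpu_info

    def register_vcpu_info(self, cpu, addr):
        self.registered[cpu] = addr


def boot(hv, kaslr_offset):
    # Each boot registers vcpu_info at wherever this kernel image placed it.
    addr = 0xffff8880_0010_0000 + kaslr_offset
    hv.register_vcpu_info(0, addr)
    return addr  # this is the kernel's xen_vcpu-style pointer


hv = Hypervisor()
saved_xen_vcpu = boot(hv, kaslr_offset=0)  # address baked into the saved image

# Resume without KASLR: the fresh boot registers the same address, so the
# stale pointer in the image still matches -- "mostly by chance".
boot(hv, kaslr_offset=0)
assert saved_xen_vcpu == hv.registered[0]

# Resume with KASLR: the fresh boot registers a shifted address, and the
# pointer restored from the image no longer matches what Xen knows about.
boot(hv, kaslr_offset=0x200000)
assert saved_xen_vcpu != hv.registered[0]
```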


>>> Another line of thought is something what kexec does to come around this problem
>>> is to abuse soft_reset and issue it during syscore_resume or may be before the image get loaded.
>>> I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
>>
>> Right, that sounds like it should work.
>>
> You mean soft reset or re-register vcpu?


Doing something along the lines of a soft reset. It should allow you to re-register. Not sure how you can use it without Xen changes though. 



-boris



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 21:33:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 21:33:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136562.253131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1louxa-0000yR-Lf; Thu, 03 Jun 2021 21:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136562.253131; Thu, 03 Jun 2021 21:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1louxa-0000yK-Ii; Thu, 03 Jun 2021 21:33:02 +0000
Received: by outflank-mailman (input) for mailman id 136562;
 Thu, 03 Jun 2021 21:33:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pgrk=K5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1louxZ-0000yE-3C
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 21:33:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6f904b6-5101-45e7-ad9a-d8ea53cf7d79;
 Thu, 03 Jun 2021 21:33:00 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1B7CC613F8;
 Thu,  3 Jun 2021 21:32:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6f904b6-5101-45e7-ad9a-d8ea53cf7d79
Date: Thu, 3 Jun 2021 14:32:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Penny Zheng <Penny.Zheng@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "sstabellini@kernel.org" <sstabellini@kernel.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
    nd <nd@arm.com>
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
In-Reply-To: <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
Message-ID: <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
References: <20210518052113.725808-1-penny.zheng@arm.com> <20210518052113.725808-2-penny.zheng@arm.com> <e1b90f06-92d2-11da-c556-4081907124b8@xen.org> <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com> <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com> <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org> <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com> <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

I have not read most emails in this thread (sorry!) but I spotted this
discussion about device tree and I would like to reply to that as we
have discussed something very similar in the context of system device
tree.


On Thu, 3 Jun 2021, Julien Grall wrote:
> On 02/06/2021 11:09, Penny Zheng wrote:
> > Hi Julien
> > 
> > > -----Original Message-----
> > > From: Julien Grall <julien@xen.org>
> > > Sent: Thursday, May 20, 2021 4:51 PM
> > > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > > sstabellini@kernel.org
> > > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > > Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> > > 
> > > Hi,
> > > 
> > > On 20/05/2021 07:07, Penny Zheng wrote:
> > > > > > It will be consistent with the ones defined in the parent node,
> > > > > > domUx.
> > > > > Hmmm... To take the example you provided, the parent would be chosen.
> > > > > However, from the example, I would expect the property #{address,
> > > > > size}-cells in domU1 to be used. What did I miss?
> > > > > 
> > > > 
> > > > Yeah, the property #{address, size}-cells in domU1 will be used. And
> > > > the parent node will be domU1.
> > > 
> > > You may have misunderstood what I meant. "domU1" is the node that
> > > contains the property "xen,static-mem". The parent node would be the one
> > > above (in our case "chosen").
> > > 
> > 
> > I re-re-reconsider what you meant here, hope this time I get what you mean,
> > correct me if I'm wrong,
> > List an example here:
> > 
> >      / {
> >          reserved-memory {
> >              #address-cells = <2>;
> >              #size-cells = <2>;
> > 
> >              staticmemdomU1: static-memory@0x30000000 {
> >                  compatible = "xen,static-memory-domain";
> >                  reg = <0x0 0x30000000 0x0 0x20000000>;
> >              };
> >          };
> > 
> >          chosen {
> >              domU1 {
> >                  compatible = "xen,domain";
> >                  #address-cells = <0x1>;
> >                  #size-cells = <0x1>;
> >                  cpus = <2>;
> >                  xen,static-mem = <&staticmemdomU1>;
> > 
> >                 ...
> >              };
> >          };
> >      };
> > 
> > If the user gives two different #address-cells and #size-cells in
> > reserved-memory and domU1, then parsing it
> > through `xen,static-mem` may give unexpected results.
> 
> Why are you using the #address-cells and #size-cells from the node domU1 to
> parse staticmemdomU1?
> 
> > I could not think of a way to fix this properly in code; do you have any
> > suggestion? Or should we just put a warning in the docs/commits?
> 
> The correct approach is to find the parent of staticmemdomU1 (i.e.
> reserved-memory) and use the #address-cells and #size-cells from there.

Julien is right about how to parse the static-memory.

But I have a suggestion on the new binding. The /reserved-memory node is
a weird node: it is one of the very few nodes (the only one aside from
/chosen) that is about software configuration rather than hardware
description.

For this reason, in a device tree with multiple domains /reserved-memory
doesn't make a lot of sense: for which domain is the memory reserved?

This was one of the first points raised by Rob Herring in reviewing
system device tree.

So the solution we went for is the following: if there is a default
domain, /reserved-memory applies to the default domain. Otherwise, each
domain gets its own reserved-memory node. Example:

        domU1 {
            compatible = "xen,domain";
            #address-cells = <0x1>;
            #size-cells = <0x1>;
            cpus = <2>;

            reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
   
                static-memory@0x30000000 {
                    compatible = "xen,static-memory-domain";
                    reg = <0x0 0x30000000 0x0 0x20000000>;
                };
            };
        };


So I don't think we want to use reserved-memory for this, neither
/reserved-memory nor /chosen/domU1/reserved-memory. Instead it would be
good to align with system device tree and define it as a new property
under domU1.

In system device tree we would use a property called "memory" to specify
one or more ranges, e.g.:

    domU1 {
        memory = <0x0 0x500000 0x0 0x7fb00000>;

Unfortunately for xen,domains we have already defined "memory" to
specify an amount, rather than a range. That's too bad because the most
natural way to do this would be:

    domU1 {
        size = <amount>;
        memory = <ranges>;

When we introduce native system device tree support in Xen, we will be
able to do that. For now, we need to come up with a different property.
For instance: "static-memory" (other names are welcome if you have a
better suggestion).

We would use a new property called "static-memory" together with
#static-memory-address-cells and #static-memory-size-cells to define how
many cells to use for address and size.

Example:

    domU1 {
        #static-memory-address-cells = <2>;
        #static-memory-size-cells = <2>;
        static-memory = <0x0 0x500000 0x0 0x7fb00000>;



Another alternative would be to extend the definition of the existing
"memory" property to handle not just sizes but also address and size
pairs. There are a couple of ways to do that, including using
#memory-address-cells = <0>; to specify when "memory" only carries a
size, or changing the compatible string to "xen,domain-v2". But I would
avoid that. I would keep it simple and just introduce a new property
like "static-memory" (we can come up with a better name).



From xen-devel-bounces@lists.xenproject.org Thu Jun 03 22:07:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 22:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136568.253143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lovUo-0004AP-V1; Thu, 03 Jun 2021 22:07:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136568.253143; Thu, 03 Jun 2021 22:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lovUo-0004AI-Rx; Thu, 03 Jun 2021 22:07:22 +0000
Received: by outflank-mailman (input) for mailman id 136568;
 Thu, 03 Jun 2021 22:07:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2bdf=K5=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1lovUn-0004AC-HU
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 22:07:21 +0000
Received: from mail-ej1-x629.google.com (unknown [2a00:1450:4864:20::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f75b3ef9-fe9b-41c5-b618-96d8faa1fb97;
 Thu, 03 Jun 2021 22:07:20 +0000 (UTC)
Received: by mail-ej1-x629.google.com with SMTP id k7so11417441ejv.12
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 15:07:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f75b3ef9-fe9b-41c5-b618-96d8faa1fb97
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=fTwcjmwhNEADbX/Ob8xBzd2SXmTk5gXlTEv1cMtlxa0=;
        b=pAjZ0ZoXVUivt/xg82DQQU2Xo+JPZqnyDiCwTeA2vIs87m+etMYvbgKdBfJSmWIWLT
         TtCr2oOpEflb6YL/P0NwNcwubka2GkHlTuPLIMxZ4RZQDmHKsk9BG+Sok//rifxrp0gF
         eksmHx0X5wIyK7KcAWBtYncR0/EnM7HiGb6HtUDcN7pUEV1IzmdtnfppHCvUhm3Ib95W
         1x1PIm28tfZHxV3jNS5aFmujnXuwvUcvD5l+oySrgxJk4WSiMDO7tcD73bGE1sXBFCi3
         S5EdPOJSXctbsoiKCyBGCaxldabFHcaN7PqYfPfYjpGtwKrDW3iGaEsyaKxV/B/BzThr
         7veg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=fTwcjmwhNEADbX/Ob8xBzd2SXmTk5gXlTEv1cMtlxa0=;
        b=Th+vkHRKu1ljJD5PIG6/YaJF4M5R65GvgeCQ0gQQ8ys48zjaApYNQjIzmg/u97s+4c
         GRod8IN0FvYW4DVEOFEcFtc8KZghZISl4lGvaPAjofaz03kUpv+CePvmvturp9jIkzsZ
         bSxFK1lGAI+6NTdZoeH+vK+MkicIWJouHwuq7L0xANdqrSvbiuRfil51YbBkLIDoi68K
         AP12eZwbFu+zGk4jTBnXWAmAG8uiyTVV1/LlTBv0W6Ycc23SYEgwJP5b+Z9+ZRPnbhNL
         tqV0oJRB61r8BnGNYgWp/e0XuxAZwO0IqUnw/fwcgHaR/4IsE0G25yVOsTGiS2BKOQAQ
         mW8w==
X-Gm-Message-State: AOAM5317fvZkGmVjSWNM8eWezQRtnPC971hRvwv0/mNl2frxEA1NiEHe
	r8A5kafTotC/9jx1b1Qfk0nkpdQZm1qVrbLE4gY=
X-Google-Smtp-Source: ABdhPJz8dlimJ5xx0btkf+kXPsCHFyqiefzOTAPlzlinf2CIqkvR5O2iysZrIMOV1oLUYyXJBfPfJtu9HdCVuazcllk=
X-Received: by 2002:a17:906:b10e:: with SMTP id u14mr1228007ejy.546.1622758039673;
 Thu, 03 Jun 2021 15:07:19 -0700 (PDT)
MIME-Version: 1.0
References: <20210518052113.725808-1-penny.zheng@arm.com> <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org> <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org> <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org> <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org> <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 3 Jun 2021 23:07:08 +0100
Message-ID: <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Penny Zheng <Penny.Zheng@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, nd <nd@arm.com>
Content-Type: text/plain; charset="UTF-8"

Hi,

On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> On Thu, 3 Jun 2021, Julien Grall wrote:
> > On 02/06/2021 11:09, Penny Zheng wrote:
> > > I could not think of a way to fix this properly in code; do you have any
> > > suggestion? Or should we just put a warning in the docs/commits?
> >
> > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > reserved-memory) and use the #address-cells and #size-cells from there.
>
> Julien is right about how to parse the static-memory.
>
> But I have a suggestion on the new binding. The /reserved-memory node is
> a weird node: it is one of the very few nodes (the only one aside from
> /chosen) that is about software configuration rather than hardware
> description.
>
> For this reason, in a device tree with multiple domains /reserved-memory
> doesn't make a lot of sense: for which domain is the memory reserved?

IMHO, /reserved-memory refers to memory that the hypervisor should
not touch. It is just a coincidence that most of the regions are then
passed through to dom0.

This also matches the fact that, like the GIC, /memory is consumed by
the hypervisor directly and not by the domain.

>
> This was one of the first points raised by Rob Herring in reviewing
> system device tree.
>
> So the solution we went for is the following: if there is a default
> domain /reserved-memory applies to the default domain. Otherwise, each
> domain is going to have its own reserved-memory. Example:
>
>         domU1 {
>             compatible = "xen,domain";
>             #address-cells = <0x1>;
>             #size-cells = <0x1>;
>             cpus = <2>;
>
>             reserved-memory {
>                 #address-cells = <2>;
>                 #size-cells = <2>;
>
>                 static-memory@0x30000000 {
>                     compatible = "xen,static-memory-domain";
>                     reg = <0x0 0x30000000 0x0 0x20000000>;
>                 };
>             };
>         };

This sounds wrong to me because the memory is reserved from the
hypervisor's PoV, not the domain's. IOW, when I read this, I think the
memory will be reserved in the domain.

>
> So I don't think we want to use reserved-memory for this, neither
> /reserved-memory nor /chosen/domU1/reserved-memory. Instead it would be
> good to align with system device tree and define it as a new property
> under domU1.

Do you have any formal documentation of the system device-tree?

>
> In system device tree we would use a property called "memory" to specify
> one or more ranges, e.g.:
>
>     domU1 {
>         memory = <0x0 0x500000 0x0 0x7fb00000>;
>
> Unfortunately for xen,domains we have already defined "memory" to
> specify an amount, rather than a range. That's too bad because the most
> natural way to do this would be:
>
>     domU1 {
>         size = <amount>;
>         memory = <ranges>;
>
> When we introduce native system device tree support in Xen, we will be
> able to do that. For now, we need to come up with a different property.
> For instance: "static-memory" (other names are welcome if you have a
> better suggestion).
>
> We use a new property called "static-memory" together with
> #static-memory-address-cells and #static-memory-size-cells to define how
> many cells to use for address and size.
>
> Example:
>
>     domU1 {
>         #static-memory-address-cells = <2>;
>         #static-memory-size-cells = <2>;
>         static-memory = <0x0 0x500000 0x0 0x7fb00000>;

This is pretty similar to what Penny suggested. But I dislike it
because of the amount of code that would need to be duplicated from the
reserved-memory handling.

Cheers,


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 23:27:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 23:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136575.253154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowk6-000362-2G; Thu, 03 Jun 2021 23:27:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136575.253154; Thu, 03 Jun 2021 23:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowk5-00035v-Ue; Thu, 03 Jun 2021 23:27:13 +0000
Received: by outflank-mailman (input) for mailman id 136575;
 Thu, 03 Jun 2021 23:27:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=alxE=K5=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lowk4-00035p-KA
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 23:27:12 +0000
Received: from mail-io1-xd33.google.com (unknown [2607:f8b0:4864:20::d33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fd170c8-750a-44d7-b6a3-0fa5737ec61d;
 Thu, 03 Jun 2021 23:27:11 +0000 (UTC)
Received: by mail-io1-xd33.google.com with SMTP id e17so8127756iol.7
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 16:27:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fd170c8-750a-44d7-b6a3-0fa5737ec61d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=JrxH2fg+CvzkjzCjAmjypku1/P6vAdjqgm5jiphMnUw=;
        b=pq+uFlzRymjZw+h04vKqfaJw0Q3/NUEXyc+K5x6IvYCUm5nPvya4Jt1rNc8MdiSYOA
         pmP6e7TzO1mWm13IsIHXfok/3zspsfUarenM6TsRVX864K4MLrstWPsInirw349+uFuL
         YT7ytyvh4kN8Fv4tVkp0eV/R6UDhInLXqooVXXakyAlDhHlWkpaDF3PX21fSyqQ2rw01
         vO1YEh9ZgHcYyitAI4WF0+/zmqV1LkJw16pvOrcvOt+zYt+G6o+yUplnvgjgXvcC5NjY
         oGDCbwR+PDeRClIBGyDw+Q9xK8Djo/RdotOD1c4ofLgTSltQmNN05SlDRCJ0sGDPHfrm
         aqIw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=JrxH2fg+CvzkjzCjAmjypku1/P6vAdjqgm5jiphMnUw=;
        b=JDFU87RPBzvsBY3mwOB6wpd+o9Z7l0e340tAlZTet4vux2X3nlQ1MJob+iGwwSrvNb
         Yg9QjRAK3DkkGnYqO37DmLfJBJ8+SYmd/FlBHbvliGBDrRm5PcumIA+ZdJe01uQZZcvz
         er04mY8h94U9xMcnru5opsSKvWQR0WuDFoAB9V1ghC+2avReI+auGpJenmRGZygZb5QY
         p8zIi/ZUzC3gW/7Na3jVIlHQU6JL4G9IOKtBVVT1LnQnKfK3kCMgBxzurNymsIvQQa86
         MZbIykQXFqt3Ic44OUu8BbzIOdEtlw6CdKd/BH8m4hZArFbDUg1cbrxo6AU5CuAWQV9T
         sp6Q==
X-Gm-Message-State: AOAM531DxrqxYLmUc7kbY4sBYtiJOwuE/lrkMJKDgePkEtVWhWAl9vWP
	gHP75oEC0TMxHVeSG8qptzEZBHs6344dPDUaoDo=
X-Google-Smtp-Source: ABdhPJzZ5SyHDTTE2gJosVtarG0qhTTw9OkAhbCqpig3D9a1pMmSegDh8QjKTmfIhMpRm+6+dtWlTG7JoonFWfR1fQY=
X-Received: by 2002:a05:6638:37a6:: with SMTP id w38mr1316745jal.106.1622762830521;
 Thu, 03 Jun 2021 16:27:10 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1621712830.git.connojdavis@gmail.com> <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com> <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
 <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
In-Reply-To: <b0699bc4-5e79-7ce0-c885-b4d8dfa8b74f@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 4 Jun 2021 09:26:44 +1000
Message-ID: <CAKmqyKP-PVgsu_pH5JuH2zrYv-E6ucpD_2DsuGfBFoC1nX9-Jw@mail.gmail.com>
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	"open list:X86" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jun 1, 2021 at 12:26 PM Connor Davis <connojdavis@gmail.com> wrote:
>
>
>
> On 5/25/21 12:13 PM, Bob Eshleman wrote:
> > On 5/25/21 1:48 AM, Jan Beulich wrote:
> >> On 24.05.2021 16:34, Connor Davis wrote:
> >>> Add arch-specific makefiles and configs needed to build for
> >>> riscv. Also add a minimal head.S that is a simple infinite loop.
> >>> head.o can be built with
> >>>
> >>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
> >>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
> >>>
> >>> No other TARGET is supported at the moment.
> >>>
> >>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> >>> ---
> >>>   config/riscv.mk                         |  4 +++
> >>>   xen/Makefile                            |  8 +++--
> >>>   xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
> >>>   xen/arch/riscv/Kconfig.debug            |  0
> >>>   xen/arch/riscv/Makefile                 |  0
> >>>   xen/arch/riscv/Rules.mk                 |  0
> >>>   xen/arch/riscv/arch.mk                  | 14 ++++++++
> >>>   xen/arch/riscv/asm-offsets.c            |  0
> >>>   xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
> >>>   xen/arch/riscv/head.S                   |  6 ++++
> >>>   xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
> >>>   11 files changed, 137 insertions(+), 2 deletions(-)
> >>>   create mode 100644 config/riscv.mk
> >>>   create mode 100644 xen/arch/riscv/Kconfig
> >>>   create mode 100644 xen/arch/riscv/Kconfig.debug
> >>>   create mode 100644 xen/arch/riscv/Makefile
> >>>   create mode 100644 xen/arch/riscv/Rules.mk
> >>>   create mode 100644 xen/arch/riscv/arch.mk
> >>>   create mode 100644 xen/arch/riscv/asm-offsets.c
> >>>   create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
> >>>   create mode 100644 xen/arch/riscv/head.S
> >>>   create mode 100644 xen/include/asm-riscv/config.h
> >> I think this wants to be accompanied by an addition to ./MAINTAINERS
> >> right away, such that future RISC-V patches can be acked by the
> >> respective designated maintainers, rather than falling under "THE REST".
> >> Question is whether you / we have settled yet who's to become arch
> >> maintainer there.
> >>
> >> Jan
> >>
> > I'd like to volunteer myself for this, as I'm slated to continue
> > with the port indefinitely and would at least like to review
> > patches.  If Connor has the time, I think it makes sense for him
> > to be listed there too.
> >
> > Until we have others (it's just us two right now), it'll be a
> > lot of us reviewing each other's arch-specific work (in addition to
> > reviewers elsewhere in the Xen project, of course).
> >
> > -Bobby
> >
> Jan: can you list Bobby as the maintainer on commit? You can list me as
> well, but I can't guarantee a time commitment, unlike Bobby.

If you need more people you can add me as well. I don't have too much
time to spend here, but I did start the initial Xen port and have a
good understanding of the RISC-V Hypervisor extension.

Alistair

>
> Thanks,
> Connor
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 23:27:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 23:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136576.253165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowkP-0003W5-Ay; Thu, 03 Jun 2021 23:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136576.253165; Thu, 03 Jun 2021 23:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowkP-0003Vy-7d; Thu, 03 Jun 2021 23:27:33 +0000
Received: by outflank-mailman (input) for mailman id 136576;
 Thu, 03 Jun 2021 23:27:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=alxE=K5=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lowkO-0003Vg-BF
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 23:27:32 +0000
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30ab5d5b-619e-4b66-8113-b7fabda593f5;
 Thu, 03 Jun 2021 23:27:31 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id 5so8161898ioe.1
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 16:27:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30ab5d5b-619e-4b66-8113-b7fabda593f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=2bLa2cn36ZEc9no+iBydwrLQbwFYoKkkb5hpP+GIeX0=;
        b=Xm1TsUUEvo/wZFLPb7jY0F2wP8tzUz0XnrAwbGb5hLgUXghkR5owIfVI1CuX22n6eo
         8cDZQ1dfGFPlWq579BrFGZ4GyzCS2uWRmCUbgbP8AnfGtzCtsaQ7ohpBj2Quuh9ZgoP0
         NgD/2+ahtgAqlOddfJeRf5tmcySwis2czISVEtzbD9hK7NNFdlZZ++rzYZHhGhtpAVHT
         e15Y482d0mz1k5ugaUqG8+hzXVW1FrS9eYe/nJeeRKtmTYQdVlwSUJpB7J6ZRZdfhin3
         Mdu7sWMLxnyxIB+HsE7UJAsd+8git5hkYSguKxvds9d8EJPHXtC8QYJFzrThvPd+M0rm
         jSXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=2bLa2cn36ZEc9no+iBydwrLQbwFYoKkkb5hpP+GIeX0=;
        b=N/XCRsD4Ia14OM4Omc22fvDOmSYS5BpyU48EPtAgUxdxzp1mSECPNM5cjHOFzpGZCm
         ySwgBr5otJIsf5eEhj274XHorDbqBSBhqw6UwysjkhoTiI8c9CWcLNhleck8NW89n9JZ
         A7bxRue+NYD13d3uq+8mt8N6jjBUI4SMh6XJacARzFjzOOzIvAM4NP6c6+P2b9zNMaW/
         a4bGRGiH3kkldqW2Xq0uLxsQPMwz3z9Hr0W1ldnUKkLVTzgupMkjSc4WGjRHcFHdFMNg
         +I9ssnxVIOJFjiH3AKg8Fcxwt/21kwuveZuOuKH42dODnaoOpl/q+oUPt0LNAgIvEcAy
         Twuw==
X-Gm-Message-State: AOAM532avacD0YznPn9i8/SdiPoMKOiFvLWtR4V8/TpIWYte/PkrisX1
	J+14caamz3ISKh9LcLtfSaqJKv/bl3jQCky4B4s=
X-Google-Smtp-Source: ABdhPJzpIZPGdmQqM7/dm+7loMErDRboUmfkJl4RDfGdCfWaKj6vb21ZXHJ7Vny0rdh/6F1830jCKrEN6fwE3WPMhSk=
X-Received: by 2002:a02:8784:: with SMTP id t4mr1300461jai.26.1622762851431;
 Thu, 03 Jun 2021 16:27:31 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1622676439.git.connojdavis@gmail.com> <d2d19b62bd2a570db97f2940e6152bf93dc01632.1622676439.git.connojdavis@gmail.com>
In-Reply-To: <d2d19b62bd2a570db97f2940e6152bf93dc01632.1622676439.git.connojdavis@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 4 Jun 2021 09:27:05 +1000
Message-ID: <CAKmqyKO0JQVOxfrO0jk_-4eBBZSkcQm1-pMhd2xzXaE55usXWQ@mail.gmail.com>
Subject: Re: [PATCH v7 1/2] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Connor Davis <connojdavis@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>, 
	Bobby Eshleman <bobbyeshleman@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 3, 2021 at 9:38 AM Connor Davis <connojdavis@gmail.com> wrote:
>
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/drivers/char/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..2ff5b288e2 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -1,5 +1,6 @@
>  config HAS_NS16550
>         bool "NS16550 UART driver" if ARM
> +       default n if RISCV
>         default y
>         help
>           This selects the 16550-series UART support. For most systems, say Y.
> --
> 2.31.1
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 23:27:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 23:27:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136579.253176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowkf-00044m-JB; Thu, 03 Jun 2021 23:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136579.253176; Thu, 03 Jun 2021 23:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowkf-00044Z-GE; Thu, 03 Jun 2021 23:27:49 +0000
Received: by outflank-mailman (input) for mailman id 136579;
 Thu, 03 Jun 2021 23:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OxO+=K5=amazon.com=prvs=781ef9b0c=anchalag@srs-us1.protection.inumbo.net>)
 id 1lowke-00044D-A0
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 23:27:48 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40ba190f-025e-46ce-83f6-337d41218bc7;
 Thu, 03 Jun 2021 23:27:47 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-6002.iad6.amazon.com with ESMTP; 03 Jun 2021 23:27:45 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com (Postfix) with ESMTPS
 id 4C241A1D18; Thu,  3 Jun 2021 23:27:43 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Thu, 3 Jun 2021 23:27:42 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Thu, 3 Jun 2021 23:27:42 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.18 via Frontend Transport; Thu, 3 Jun 2021 23:27:42 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 72A74409AC; Thu,  3 Jun 2021 23:27:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40ba190f-025e-46ce-83f6-337d41218bc7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1622762867; x=1654298867;
  h=date:from:to:cc:message-id:references:mime-version:
   in-reply-to:subject;
  bh=7ofG02ZB3R9ItNYAJX7AOjZDSEYR5bDQ9ZvKLyyiy6M=;
  b=s4NMXVO77pvXXOzL6ItqV0+U/phI6tQWM0ia443Om1cx37MmlxxJCtjr
   Yvf2Y4bVgAHVhXLPlGod7ZVD/Acyg8qWO9czjI7AHB7eA6daJEvo5nGtV
   ZfAVn/LWvNlPoskfE4NzdPz1PwT9DU1UddnFipRXO7RnHoYcOSjWxVPBu
   4=;
X-IronPort-AV: E=Sophos;i="5.83,246,1616457600"; 
   d="scan'208";a="116516910"
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode
Date: Thu, 3 Jun 2021 23:27:42 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
	<mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
	<hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "linux-mm@kvack.org"
	<linux-mm@kvack.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
	<rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
	"pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "dwmw@amazon.co.uk" <dwmw@amazon.co.uk>
Message-ID: <20210603232742.GB14368@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
 <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
 <20210602193743.GA28861@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <2cb71322-9d3d-395e-293b-24888f5be759@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <2cb71322-9d3d-395e-293b-24888f5be759@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Thu, Jun 03, 2021 at 04:11:46PM -0400, Boris Ostrovsky wrote:
> 
> On 6/2/21 3:37 PM, Anchal Agarwal wrote:
> > On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
> >>
> > The resume won't fail because in the image xen_vcpu and xen_vcpu_info are the
> > same. These are the same values that got in there when the hibernation image
> > was saved. So whatever value xen_vcpu got during boot-time registration on
> > resume is essentially lost once the jump into the saved kernel image happens.
> > The interesting part is that if KASLR is not enabled, the boot-time vcpu_info
> > mfn is the same as in the image.
> 
> 
> Do you start your guest right after you've hibernated it? What happens if you create (and keep running) a few other guests in between? The mfn would likely be different then, I'd think.
> 
>
Yes, I just run it in a loop on a single guest and I am able to see the issue in
20-40 iterations, sometimes sooner. Yeah, you could be right, and this could
definitely happen more often depending on what's happening on the dom0 side.
> > Once you enable KASLR this value changes sometimes and whenever that happens
> > resume gets stuck. Does that make sense?
> >
> > No, it does not resume successfully if the hypercall fails, because I was
> > trying to explicitly reset the vcpu and invoke the hypercall.
> > I am just wondering why the restore logic fails to work here, or whether I am
> > missing a critical piece.
> 
> 
> If you are not using KASLR then xen_vcpu_info is at the same address every time you boot. So whatever you registered before hibernating stays the same when you boot second time and register again, and so successful comparison in xen_vcpu_setup() works. (Mostly by chance.)
>
That's what I thought too.
> 
> But if KASLR is on then this comparison not failing should cause xen_vcpu pointer in the loaded image to become bogus because xen_vcpu is now registered for a different xen_vcpu_info address during boot.
> 
The reason for that, I think, is that once you jump into the image that
information gets lost. But there is some residue somewhere that's causing the
resume to fail. I haven't been able to pinpoint the exact field value that may
be causing the issue.
Correct me if I am wrong here, but even if I hypothetically put in a hack to
tell the kernel to somehow re-register the vcpu, it won't work because there is
no hypercall to unregister it in the first place? Could the resumed kernel use
the new values in that case? [Now this is me just throwing wild guesses!]
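
To make the one-shot registration constraint concrete, here is a toy model in plain C. It is purely illustrative: the function names and error value are assumptions for the sketch, not Xen's or Linux's actual interface. It captures why a vcpu_info registration cannot be redone and why resume only lines up when the image's mfn matches the boot-time one:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the VCPUOP_register_vcpu_info constraint discussed above.
 * Illustrative only: the hypervisor accepts the registration once per
 * vCPU, and there is no hypercall to undo it, so a second registration
 * at a different address is rejected.
 */
#define TOY_EINVAL 22

static unsigned long registered_mfn;   /* 0 => nothing registered yet */

/* First registration wins; any later attempt fails. */
int register_vcpu_info(unsigned long mfn)
{
    if (registered_mfn != 0)
        return -TOY_EINVAL;
    registered_mfn = mfn;
    return 0;
}

/*
 * After resume, the restored image still holds the mfn it registered
 * before hibernation. Things line up only if that mfn happens to equal
 * the one the boot kernel registered (e.g. with KASLR disabled).
 */
bool resume_consistent(unsigned long image_mfn)
{
    return image_mfn == registered_mfn;
}
```

Under this model, the only way for the resumed image to keep working is for the boot-time and in-image addresses to coincide, which matches the KASLR-off observation above.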

> 
> >>> Another line of thought, something like what kexec does to get around this problem,
> >>> is to abuse soft_reset and issue it during syscore_resume, or maybe before the image gets loaded.
> >>> I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
> >>
> >> Right, that sounds like it should work.
> >>
> > You mean soft reset or re-register vcpu?
> 
> 
> Doing something along the lines of a soft reset. It should allow you to re-register. Not sure how you can use it without Xen changes though.
> 
No, not without Xen changes; it won't work otherwise. I will have the Xen
changes in place to test that on our infrastructure.

--
Anchal
> 
> 
> -boris
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 23:33:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 23:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136594.253187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowqH-0005gs-7s; Thu, 03 Jun 2021 23:33:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136594.253187; Thu, 03 Jun 2021 23:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lowqH-0005gl-4m; Thu, 03 Jun 2021 23:33:37 +0000
Received: by outflank-mailman (input) for mailman id 136594;
 Thu, 03 Jun 2021 23:33:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=alxE=K5=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lowqF-0005gf-TC
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 23:33:35 +0000
Received: from mail-il1-x136.google.com (unknown [2607:f8b0:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58fe0324-ed44-4528-88e9-18cdc8f5dc4e;
 Thu, 03 Jun 2021 23:33:33 +0000 (UTC)
Received: by mail-il1-x136.google.com with SMTP id i13so1471650ilk.3
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 16:33:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58fe0324-ed44-4528-88e9-18cdc8f5dc4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=LzTmbLr1R+Utddp1aVDlm+2T7NkmZxeRM9ikLAhGO7M=;
        b=sYdd5i96yOUGzz6QFOfuFpPZYQ04fumcSMhdJaG55I4uIWIosqIden5FadbuD3bGtX
         wGVFJDPbEXzQ4FKW4gCzaPanhdeaZP0vJV6ySeC2RWsIqTpVTTQNgwtzIuGCo+6+OWwe
         G/6fZhmjB57VrqPqGgQxl4Zr4SPTEMpvu1Kao8AQOI8zZ7OG0jyeYVmh+R7ZvI9nrrzg
         WPr7T0sGdR/lUJEZTyXzh4E1yN2Ihf9EI7iJv7t1233L+Q2ZiYget0FXa3GMY4IwUoF/
         iO/pqB+hTbC9+7cEktqsxR4S46/Ftl65M6r6+6dihFvQHs9QvrWMbPBmsPadUQAkvsqX
         u1gw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=LzTmbLr1R+Utddp1aVDlm+2T7NkmZxeRM9ikLAhGO7M=;
        b=ie4nwLHnCMyZwPF4HQ7deAOflLUcRIWCLuuXbZFl5RKRHArMooPb+fl5pUTyexQvb9
         6Fmo57wSbHbsuwJtQdjFlTf29uVL0QTLpEtC2+a2Z2zLT06tRZPgouWKLup+MBeHJLSe
         +/qhGHJntViA50vjnJpz8V7GX/qqGRujxa57j0oPPq2Ru0JNG+w2pD77jUk8+pRsBeZ+
         O2PtY/zUP5B/QaAvg2cKtNbs1Mn4MilqFHczT7yAZ10qnV3BzEdwAkKGf/K2yGDAdEsN
         i2IF8iW7v/9i10NfkHZbsxodxWJl8wnqKiBIBSuaRv070YEdBMD4CiloocPXyIFb3gGA
         381g==
X-Gm-Message-State: AOAM533ks1lMq2yf/O+zTmRxbWg7hKl/wOUHqBRzbd8DAYx+nTxPk6Gk
	M71aBy/ded1sk6UXrNoLHaEFZoRSeLea+GqiI94=
X-Google-Smtp-Source: ABdhPJxULwAzKpZZFgf0OJV/4LZF4D8z+3BNELae9HyhiM4Tu50EN0T8TxZaCF034cLPQCWdE3Xjh0psbEP2odEQb50=
X-Received: by 2002:a05:6e02:d08:: with SMTP id g8mr1288789ilj.40.1622763213291;
 Thu, 03 Jun 2021 16:33:33 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1622676439.git.connojdavis@gmail.com> <d4670a35758df878565cf757bbc20e2815618eb5.1622676439.git.connojdavis@gmail.com>
In-Reply-To: <d4670a35758df878565cf757bbc20e2815618eb5.1622676439.git.connojdavis@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 4 Jun 2021 09:33:07 +1000
Message-ID: <CAKmqyKPSTmkufvpAj9A_jK1sRkK+J9DMNUgmKfWchrCB9Hm+oQ@mail.gmail.com>
Subject: Re: [PATCH v7 2/2] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>, 
	Bobby Eshleman <bobbyeshleman@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 3, 2021 at 9:38 AM Connor Davis <connojdavis@gmail.com> wrote:
>
> Add arch-specific makefiles and configs needed to build for
> riscv. Also add a minimal head.S that is a simple infinite loop.
> head.o can be built with
>
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o
>
> No other TARGET is supported at the moment.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
> Bob: I moved back to XEN_TARGET_ARCH=riscv64 because supplying
> just XEN_TARGET_ARCH=riscv causes TARGET_ARCH == TARGET_SUBARCH, and
> that broke the build after the recent commit b6ecd5c8bc
> "build: centralize / unify asm-offsets generation". It also deviates
> from how x86 and arm work now, so I think this change is for the best
> for now. That commit is also why the PHONY include target is added
> in the riscv/Makefile.
> ---
>  MAINTAINERS                             |  8 +++++
>  config/riscv64.mk                       |  5 +++
>  xen/Makefile                            |  8 +++--
>  xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>  xen/arch/riscv/Kconfig.debug            |  0
>  xen/arch/riscv/Makefile                 |  2 ++
>  xen/arch/riscv/Rules.mk                 |  0
>  xen/arch/riscv/arch.mk                  | 14 ++++++++
>  xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>  xen/arch/riscv/riscv64/asm-offsets.c    |  0
>  xen/arch/riscv/riscv64/head.S           |  6 ++++
>  xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>  12 files changed, 148 insertions(+), 2 deletions(-)
>  create mode 100644 config/riscv64.mk
>  create mode 100644 xen/arch/riscv/Kconfig
>  create mode 100644 xen/arch/riscv/Kconfig.debug
>  create mode 100644 xen/arch/riscv/Makefile
>  create mode 100644 xen/arch/riscv/Rules.mk
>  create mode 100644 xen/arch/riscv/arch.mk
>  create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>  create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
>  create mode 100644 xen/arch/riscv/riscv64/head.S
>  create mode 100644 xen/include/asm-riscv/config.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index d46b08a0d2..956e71220d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -456,6 +456,14 @@ F: tools/libs/light/libxl_nonetbuffer.c
>  F:     tools/hotplug/Linux/remus-netbuf-setup
>  F:     tools/hotplug/Linux/block-drbd-probe
>
> +RISCV
> +M:     Bob Eshleman <bobbyeshleman@gmail.com>
> +R:     Connor Davis <connojdavis@gmail.com>
> +S:     Supported
> +F:     config/riscv64.mk
> +F:     xen/arch/riscv/
> +F:     xen/include/asm-riscv/

I volunteer to be a maintainer as well, feel free to say no :)

I did the QEMU RISC-V H extension port and have a pretty good
understanding of the RISC-V Hypervisor extension.

> +
>  RTDS SCHEDULER
>  M:     Dario Faggioli <dfaggioli@suse.com>
>  M:     Meng Xu <mengxu@cis.upenn.edu>
> diff --git a/config/riscv64.mk b/config/riscv64.mk
> new file mode 100644
> index 0000000000..a5a21e5fa2
> --- /dev/null
> +++ b/config/riscv64.mk
> @@ -0,0 +1,5 @@
> +CONFIG_RISCV := y
> +CONFIG_RISCV_64 := y
> +CONFIG_RISCV_$(XEN_OS) := y
> +
> +CONFIG_XEN_INSTALL_SUFFIX :=
> diff --git a/xen/Makefile b/xen/Makefile
> index 7ce7692354..89879fad4c 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -26,7 +26,9 @@ MAKEFLAGS += -rR
>  EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
>
>  ARCH=$(XEN_TARGET_ARCH)
> -SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +SRCARCH=$(shell echo $(ARCH) | \
> +          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +              -e s'/riscv.*/riscv/g')
>
>  # Don't break if the build process wasn't called from the top level
>  # we need XEN_TARGET_ARCH to generate the proper config
> @@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
>  # Set ARCH/SUBARCH appropriately.
>  export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>  export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> -                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +                                -e s'/riscv.*/riscv/g')
>
>  # Allow someone to change their config file
>  export KCONFIG_CONFIG ?= .config
> @@ -335,6 +338,7 @@ _clean: delete-unfresh-files
>         $(MAKE) $(clean) xsm
>         $(MAKE) $(clean) crypto
>         $(MAKE) $(clean) arch/arm
> +       $(MAKE) $(clean) arch/riscv
>         $(MAKE) $(clean) arch/x86
>         $(MAKE) $(clean) test
>         $(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
> diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
> new file mode 100644
> index 0000000000..bd8381c5e0
> --- /dev/null
> +++ b/xen/arch/riscv/Kconfig
> @@ -0,0 +1,47 @@
> +config RISCV
> +       def_bool y
> +
> +config RISCV_64
> +       def_bool y
> +       select 64BIT
> +
> +config ARCH_DEFCONFIG
> +       string
> +       default "arch/riscv/configs/tiny64_defconfig"
> +
> +menu "Architecture Features"
> +
> +source "arch/Kconfig"
> +
> +endmenu
> +
> +menu "ISA Selection"
> +
> +choice
> +       prompt "Base ISA"
> +       default RISCV_ISA_RV64IMA if RISCV_64
> +       help
> +         This selects the base ISA extensions that Xen will target.
> +
> +config RISCV_ISA_RV64IMA
> +       bool "RV64IMA"
> +       help
> +         Use the RV64I base ISA, plus the "M" and "A" extensions
> +         for integer multiply/divide and atomic instructions, respectively.
> +
> +endchoice
> +
> +config RISCV_ISA_C
> +       bool "Compressed extension"
> +       help
> +         Add "C" to the ISA subsets that the toolchain is allowed to
> +         emit when building Xen, which results in compressed instructions
> +         in the Xen binary.
> +
> +         If unsure, say N.

I would change this to y if you are unsure. I don't expect any
hardware to have an MMU (let alone the H extension) but lack the
compressed instruction extension. Linux won't run without the C extension.
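
For reference, the march-string composition in the patch's arch.mk (rv64ima, with "c" appended only when CONFIG_RISCV_ISA_C is set) behaves like this small plain-C mirror of the Makefile logic; it is a sketch for illustration, not code from the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Plain-C mirror of the march-string composition in the patch's arch.mk:
 *   riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
 *   riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 * i.e. "c" is appended only when CONFIG_RISCV_ISA_C is set.
 */
void compose_march(bool isa_c, char *out, size_t outlen)
{
    snprintf(out, outlen, "rv64ima%s", isa_c ? "c" : "");
}
```

So with CONFIG_RISCV_ISA_C=y the compiler is invoked with -march=rv64imac, otherwise -march=rv64ima.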

Otherwise looks good:

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Now for the hard part: getting it to boot.

Alistair

> +
> +endmenu
> +
> +source "common/Kconfig"
> +
> +source "drivers/Kconfig"
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> new file mode 100644
> index 0000000000..942e4ffbc1
> --- /dev/null
> +++ b/xen/arch/riscv/Makefile
> @@ -0,0 +1,2 @@
> +.PHONY: include
> +include:
> diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> new file mode 100644
> index 0000000000..53dadb8975
> --- /dev/null
> +++ b/xen/arch/riscv/arch.mk
> @@ -0,0 +1,14 @@
> +########################################
> +# RISCV-specific definitions
> +
> +CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
> +
> +riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
> +riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
> +
> +# Note that -mcmodel=medany is used so that Xen can be mapped
> +# into the upper half _or_ the lower half of the address space.
> +# -mcmodel=medlow would force Xen into the lower half.
> +
> +CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> +CFLAGS += -I$(BASEDIR)/include
> diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
> new file mode 100644
> index 0000000000..3c9a2ff941
> --- /dev/null
> +++ b/xen/arch/riscv/configs/tiny64_defconfig
> @@ -0,0 +1,13 @@
> +# CONFIG_SCHED_CREDIT is not set
> +# CONFIG_SCHED_RTDS is not set
> +# CONFIG_SCHED_NULL is not set
> +# CONFIG_SCHED_ARINC653 is not set
> +# CONFIG_TRACEBUFFER is not set
> +# CONFIG_HYPFS is not set
> +# CONFIG_GRANT_TABLE is not set
> +# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
> +
> +CONFIG_RISCV_64=y
> +CONFIG_DEBUG=y
> +CONFIG_DEBUG_INFO=y
> +CONFIG_EXPERT=y
> diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> new file mode 100644
> index 0000000000..0dbc27ba75
> --- /dev/null
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -0,0 +1,6 @@
> +#include <asm/config.h>
> +
> +        .text
> +
> +ENTRY(start)
> +        j  start
> diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
> new file mode 100644
> index 0000000000..e2ae21de61
> --- /dev/null
> +++ b/xen/include/asm-riscv/config.h
> @@ -0,0 +1,47 @@
> +#ifndef __RISCV_CONFIG_H__
> +#define __RISCV_CONFIG_H__
> +
> +#if defined(CONFIG_RISCV_64)
> +# define LONG_BYTEORDER 3
> +# define ELFSIZE 64
> +# define MAX_VIRT_CPUS 128u
> +#else
> +# error "Unsupported RISCV variant"
> +#endif
> +
> +#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
> +#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
> +#define POINTER_ALIGN  BYTES_PER_LONG
> +
> +#define BITS_PER_LLONG 64
> +
> +/* xen_ulong_t is always 64 bits */
> +#define BITS_PER_XEN_ULONG 64
> +
> +#define CONFIG_RISCV_L1_CACHE_SHIFT 6
> +#define CONFIG_PAGEALLOC_MAX_ORDER  18
> +#define CONFIG_DOMU_MAX_ORDER       9
> +#define CONFIG_HWDOM_MAX_ORDER      10
> +
> +#define OPT_CONSOLE_STR "dtuart"
> +#define INVALID_VCPU_ID MAX_VIRT_CPUS
> +
> +/* Linkage for RISCV */
> +#ifdef __ASSEMBLY__
> +#define ALIGN .align 2
> +
> +#define ENTRY(name)                                \
> +  .globl name;                                     \
> +  ALIGN;                                           \
> +  name:
> +#endif
> +
> +#endif /* __RISCV_CONFIG_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 2.31.1
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 03 23:56:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Jun 2021 23:56:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136603.253198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loxBs-00083w-7D; Thu, 03 Jun 2021 23:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136603.253198; Thu, 03 Jun 2021 23:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loxBs-00083p-4C; Thu, 03 Jun 2021 23:55:56 +0000
Received: by outflank-mailman (input) for mailman id 136603;
 Thu, 03 Jun 2021 23:55:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pgrk=K5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1loxBq-00083j-4a
 for xen-devel@lists.xenproject.org; Thu, 03 Jun 2021 23:55:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e145c33d-1adf-48a2-8533-838c257a1e3a;
 Thu, 03 Jun 2021 23:55:53 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 520826135E;
 Thu,  3 Jun 2021 23:55:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e145c33d-1adf-48a2-8533-838c257a1e3a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622764552;
	bh=KsaJKBQwAcW5BnlUQPtVHaS0rfnoyn2RvUPmiM9cJ14=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kdEJuNesLUOUQle/0fRKVUdW/+mWmtygQSfa/OW61lcop/5CaVXM+Ye+RCSSfKJcl
	 4bw9NjDgdcxHm+/4v/v3iPFAAoFqlLnd6cYaBfrKrVXIQH1kTUHyHiuWBwrhTP3dXi
	 W4g76EZVcBoBoC3dL6DBSZRjQIL01C2WeEdsDA9Wgwu+zXnUVeGC/EaQBW4ISgr5eG
	 ENmm+sN6ochrutDyIeNf68LYpIVH+dgj3JlUvJUPfRAudOalqB7j/1mPvZkSGpA+KF
	 G+yMM/KNGRAPcXYhHhvqddb70RQoBhS6vRgVuecb+Aq8rrB0lta0E/y15TM+TMMrRf
	 RqJQVg9X2+f6Q==
Date: Thu, 3 Jun 2021 16:55:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Penny Zheng <Penny.Zheng@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
    nd <nd@arm.com>
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
In-Reply-To: <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
References: <20210518052113.725808-1-penny.zheng@arm.com> <20210518052113.725808-2-penny.zheng@arm.com> <e1b90f06-92d2-11da-c556-4081907124b8@xen.org> <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com> <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com> <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org> <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com> <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s> <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 3 Jun 2021, Julien Grall wrote:
> On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Thu, 3 Jun 2021, Julien Grall wrote:
> > > On 02/06/2021 11:09, Penny Zheng wrote:
> > > > I could not think of a way to fix it properly in code; do you have any
> > > > suggestions? Or should we just put a warning in the doc/commits?
> > >
> > > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > > reserved-memory) and use the #address-cells and #size-cells from there.
> >
> > Julien is right about how to parse the static-memory.
> >
> > But I have a suggestion on the new binding. The /reserved-memory node is
> > a weird node: it is one of the very few nodes (the only one aside from
> > /chosen) which is about software configuration rather than hardware
> > description.
> >
> > For this reason, in a device tree with multiple domains /reserved-memory
> > doesn't make a lot of sense: for which domain is the memory reserved?
> 
> IMHO, /reserved-memory refers to the memory that the hypervisor should
> not touch. It is just a coincidence that most of the regions are then
> passed through to dom0.
>
> This also matches the fact that, like the GIC, /memory is consumed by the
> hypervisor directly and not by the domain.

In system device tree, one of the key principles is to distinguish between
hardware description and domain configuration. The domain
configuration is under /domains (originally it was under /chosen, then
the DT maintainers asked to move it to its own top-level node), while
everything else is hardware description.

/chosen and /reserved-memory are exceptions: they are top-level nodes,
but they are for software configuration. In system device tree,
configurations go under the domain node. This makes sense: Xen, dom0 and
domU can all have different reserved-memory and chosen configurations.

/domains/domU1/reserved-memory gives us a clear way to express
reserved-memory configurations for domU1.

Which leaves us with /reserved-memory. Who is that for? It is for the
default domain.

The default domain is the one receiving all devices by default. In a Xen
setting, it is probably Dom0. In this case, we don't want to add
reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
list is for its own drivers. We could also make an argument that the
default domain is Xen itself. From a spec perspective, that would be
fine too. In this case, /reserved-memory is a list of memory regions
reserved for Xen drivers. Either way, I don't think it is a great fit for
domain memory allocations.


> > This was one of the first points raised by Rob Herring in reviewing
> > system device tree.
> >
> > So the solution we went for is the following: if there is a default
> > domain /reserved-memory applies to the default domain. Otherwise, each
> > domain is going to have its own reserved-memory. Example:
> >
> >         domU1 {
> >             compatible = "xen,domain";
> >             #address-cells = <0x1>;
> >             #size-cells = <0x1>;
> >             cpus = <2>;
> >
> >             reserved-memory {
> >                 #address-cells = <2>;
> >                 #size-cells = <2>;
> >
> >                 static-memory@0x30000000 {
> >                     compatible = "xen,static-memory-domain";
> >                     reg = <0x0 0x30000000 0x0 0x20000000>;
> >                 };
> >             };
> >         };
> 
> This sounds wrong to me because the memory is reserved from the
> hypervisor PoV not from the domain. IOW, when I read this, I think the
> memory will be reserved in the domain.

It is definitely very wrong to place the static-memory allocation under
/chosen/domU1/reserved-memory. Sorry if I caused confusion. I only meant
it as an example of how reserved-memory (an actual reserved-memory list
of driver-specific memory ranges) is used.


> >
> > So I don't think we want to use reserved-memory for this, either
> > /reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
> > good to align it with system device tree and define it as a new property
> > under domU1.
> 
> Do you have any formal documentation of the system device-tree?

It lives here:
https://github.com/devicetree-org/lopper/tree/master/specification

Start from specification.md. It is the oldest part of the spec, so it is
not yet written with a formal specification language.

FYI there are a number of things in flight in regard to domains that
we discussed in the last call, but they are not yet settled and thus
not yet committed (access flags definitions and hierarchical
domains). However, they don't affect domain memory allocations, so from
that perspective nothing has changed.


> > In system device tree we would use a property called "memory" to specify
> > one or more ranges, e.g.:
> >
> >     domU1 {
> >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> >
> > Unfortunately for xen,domains we have already defined "memory" to
> > specify an amount, rather than a range. That's too bad because the most
> > natural way to do this would be:
> >
> >     domU1 {
> >         size = <amount>;
> >         memory = <ranges>;
> >
> > When we'll introduce native system device tree support in Xen we'll be
> > able to do that. For now, we need to come up with a different property.
> > For instance: "static-memory" (other names are welcome if you have a
> > better suggestion).
> >
> > We use a new property called "static-memory" together with
> > #static-memory-address-cells and #static-memory-size-cells to define how
> > many cells to use for address and size.
> >
> > Example:
> >
> >     domU1 {
> >         #static-memory-address-cells = <2>;
> >         #static-memory-size-cells = <2>;
> >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> 
> This is pretty similar to what Penny suggested. But I dislike it
> because of the amount of code that needs to be duplicated with the
> reserved memory.

Where is the code duplication? In the parsing itself?

If there is code duplication, can we find a way to share some of the
code to avoid the duplication?
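
On the parsing side, the rule Julien pointed out earlier in the thread (decode a node's "reg" with the #address-cells / #size-cells of its parent) can be sketched roughly as below. The helper names are illustrative, not Xen's actual device-tree code, and endianness handling of raw fdt cells is omitted:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch: a node's "reg" property is decoded using the
 * #address-cells and #size-cells of its *parent* node.  Each cell is
 * a 32-bit value (endianness conversion of raw fdt data is omitted).
 */
static uint64_t read_cells(const uint32_t *cells, int ncells)
{
    uint64_t v = 0;
    for (int i = 0; i < ncells; i++)
        v = (v << 32) | cells[i];
    return v;
}

/* Decode one (address, size) pair from a reg property. */
void decode_reg(const uint32_t *reg, int addr_cells, int size_cells,
                uint64_t *addr, uint64_t *size)
{
    *addr = read_cells(reg, addr_cells);
    *size = read_cells(reg + addr_cells, size_cells);
}
```

With the quoted example (#address-cells = <2>, #size-cells = <2> on the enclosing reserved-memory node, reg = <0x0 0x30000000 0x0 0x20000000>), this yields address 0x30000000 and size 0x20000000, matching the static-memory@0x30000000 node.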


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 00:08:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 00:08:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136610.253209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loxO4-0001h3-Jr; Fri, 04 Jun 2021 00:08:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136610.253209; Fri, 04 Jun 2021 00:08:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loxO4-0001gw-Gu; Fri, 04 Jun 2021 00:08:32 +0000
Received: by outflank-mailman (input) for mailman id 136610;
 Fri, 04 Jun 2021 00:08:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loxO2-0001gm-G5; Fri, 04 Jun 2021 00:08:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loxO2-0001cu-9A; Fri, 04 Jun 2021 00:08:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1loxO1-0008Vn-Ua; Fri, 04 Jun 2021 00:08:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1loxO1-00011r-U1; Fri, 04 Jun 2021 00:08:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aEjwaYcB8moOHiCXgAVWSwzlrBl5tpWCZ2IZPNPiCTk=; b=F3E8/3QS63lCgWY3JaBsztUvmn
	KdnCmX71IGDw2MfvaoaK6oCPnqcr7hxWkJRcMejriqwAXNZ10EiZQRGblBpyk9ef79V9sAWaZiVV/
	RhW4C+Mx4XG8ETO4UCwlvOM1fcZoHvibPgUJU5jLYEOB/y+KYS8unz10mZxicdIeTcms=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162347-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162347: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 00:08:29 +0000

flight 162347 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162347/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  287 days
Failing since        152659  2020-08-21 14:07:39 Z  286 days  530 attempts
Testing same since   162347  2021-06-03 09:47:19 Z    0 days    1 attempts

------------------------------------------------------------
521 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 166828 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 01:49:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 01:49:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136618.253223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyx5-00084z-Sn; Fri, 04 Jun 2021 01:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136618.253223; Fri, 04 Jun 2021 01:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyx5-00084r-MP; Fri, 04 Jun 2021 01:48:47 +0000
Received: by outflank-mailman (input) for mailman id 136618;
 Fri, 04 Jun 2021 01:48:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sjsn=K6=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loyx4-00084l-Uq
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 01:48:47 +0000
Received: from mail-oi1-x22e.google.com (unknown [2607:f8b0:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ffd949a-3513-49ef-b43e-304d0b25fc09;
 Fri, 04 Jun 2021 01:48:45 +0000 (UTC)
Received: by mail-oi1-x22e.google.com with SMTP id v22so8288757oic.2
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 18:48:45 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id n17sm158772oij.57.2021.06.03.18.48.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 03 Jun 2021 18:48:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ffd949a-3513-49ef-b43e-304d0b25fc09
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=L26ql0Ro1PUVovU7RjnHg95qVz4JfRvv+2ImX6K34AU=;
        b=HtEHUiipfDLbL5I8D1O+xDDpbAiHrHV0UBlL/Mx430t41OMCkRLlHIAMnwo7Y34StD
         qpWPyHLDDpvs6ONSCXfExnNoViQA5sznM57+TzNMqw2qhLrPgNAkP/JgnFZAFneh6W0L
         ooBjMHI6aBvoAX32+aH//eg1/VFAn3i84m48FNkSSy2O5mIouBYdRjiNg7Ez/oCyBhzX
         9yq9KYUtydyUjn3TvflTNRRcLMUxOx+bs4G/A234MxP8kUEolBbFTpsVx23mVwCJYALL
         mBCgNV4yWxHUBezCoH9mnVSe+m2/6RrIomvAgbJXZ2YfirdCCZv27Mm2R+W2jZl4sX1h
         StKQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=L26ql0Ro1PUVovU7RjnHg95qVz4JfRvv+2ImX6K34AU=;
        b=I5xPu/fOTrU3NxB+uylGKjUelleOmVKZx0EtjLnj1H2IHaY1Ya5RafIQRsJVAQqOKx
         iJsOLi+jNQXF/SDUHBLb98qSOPtxYvbko8TBTY69Kd+cCkqPBZbns5FkRmfEwjKh3X8n
         9tiXVYCiONklTPAXshRDeP5SYmDCdxAerS28rZX53/u2wPi0elhj+h6xj34GIon5VD+I
         PbbSsJGPH4AnXi5fztKhx/+h6wFhbCf55/Gk4Br7UXDaiZB0w29bV+zhrCl0pRv+6GCo
         yLzaU9csAW0utHSX2vSOai02RgD3APxbIF9a8MGvM8RWFqM4Fe7A0iQDJMlUEHibTEDt
         uBsQ==
X-Gm-Message-State: AOAM532Kt6S6FyFWFV7YzFP9Vh4F27q7eeSgFJ5gxAKQGfzAgTv1eDCx
	47iLgR4BWBm1FBhq3F4BpPw=
X-Google-Smtp-Source: ABdhPJyMncjHAhuPLorOLl9+EJbJSHa93Bp/Wk42cuGVMrPOMz6MQLxoKFkktUNL3DCRgvbgL6CyqA==
X-Received: by 2002:a05:6808:15a0:: with SMTP id t32mr9068325oiw.91.1622771325007;
        Thu, 03 Jun 2021 18:48:45 -0700 (PDT)
Subject: Re: [PATCH v7 2/2] xen: Add files needed for minimal riscv build
To: Alistair Francis <alistair23@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1622676439.git.connojdavis@gmail.com>
 <d4670a35758df878565cf757bbc20e2815618eb5.1622676439.git.connojdavis@gmail.com>
 <CAKmqyKPSTmkufvpAj9A_jK1sRkK+J9DMNUgmKfWchrCB9Hm+oQ@mail.gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <0f1548ab-4b80-9dc4-ef16-98146394909b@gmail.com>
Date: Thu, 3 Jun 2021 19:48:56 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKmqyKPSTmkufvpAj9A_jK1sRkK+J9DMNUgmKfWchrCB9Hm+oQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US



On 6/3/21 5:33 PM, Alistair Francis wrote:
> On Thu, Jun 3, 2021 at 9:38 AM Connor Davis <connojdavis@gmail.com> wrote:
>> Add arch-specific makefiles and configs needed to build for
>> riscv. Also add a minimal head.S that is a simple infinite loop.
>> head.o can be built with
>>
>> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
>> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o
>>
>> No other TARGET is supported at the moment.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>> Bob: I moved back to XEN_TARGET_ARCH=riscv64 because supplying
>> just XEN_TARGET_ARCH=riscv causes TARGET_ARCH == TARGET_SUBARCH, and
>> that broke the build after the recent commit b6ecd5c8bc
>> "build: centralize / unify asm-offsets generation". It also deviates
>> from how x86 and arm work now, so I think this change is for the best
>> for now. That commit is also why the PHONY include target is added
>> in the riscv/Makefile.
>> ---
>>   MAINTAINERS                             |  8 +++++
>>   config/riscv64.mk                       |  5 +++
>>   xen/Makefile                            |  8 +++--
>>   xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>>   xen/arch/riscv/Kconfig.debug            |  0
>>   xen/arch/riscv/Makefile                 |  2 ++
>>   xen/arch/riscv/Rules.mk                 |  0
>>   xen/arch/riscv/arch.mk                  | 14 ++++++++
>>   xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>   xen/arch/riscv/riscv64/asm-offsets.c    |  0
>>   xen/arch/riscv/riscv64/head.S           |  6 ++++
>>   xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>>   12 files changed, 148 insertions(+), 2 deletions(-)
>>   create mode 100644 config/riscv64.mk
>>   create mode 100644 xen/arch/riscv/Kconfig
>>   create mode 100644 xen/arch/riscv/Kconfig.debug
>>   create mode 100644 xen/arch/riscv/Makefile
>>   create mode 100644 xen/arch/riscv/Rules.mk
>>   create mode 100644 xen/arch/riscv/arch.mk
>>   create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>   create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
>>   create mode 100644 xen/arch/riscv/riscv64/head.S
>>   create mode 100644 xen/include/asm-riscv/config.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index d46b08a0d2..956e71220d 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -456,6 +456,14 @@ F: tools/libs/light/libxl_nonetbuffer.c
>>   F:     tools/hotplug/Linux/remus-netbuf-setup
>>   F:     tools/hotplug/Linux/block-drbd-probe
>>
>> +RISCV
>> +M:     Bob Eshleman <bobbyeshleman@gmail.com>
>> +R:     Connor Davis <connojdavis@gmail.com>
>> +S:     Supported
>> +F:     config/riscv64.mk
>> +F:     xen/arch/riscv/
>> +F:     xen/include/asm-riscv/
> I volunteer to be a maintainer as well, feel free to say no :)
>
> I did the QEMU RISC-V H extension port and have a pretty good
> understanding of the RISC-V Hypervisor extension.
Great! I will add you.
>> +
>>   RTDS SCHEDULER
>>   M:     Dario Faggioli <dfaggioli@suse.com>
>>   M:     Meng Xu <mengxu@cis.upenn.edu>
>> diff --git a/config/riscv64.mk b/config/riscv64.mk
>> new file mode 100644
>> index 0000000000..a5a21e5fa2
>> --- /dev/null
>> +++ b/config/riscv64.mk
>> @@ -0,0 +1,5 @@
>> +CONFIG_RISCV := y
>> +CONFIG_RISCV_64 := y
>> +CONFIG_RISCV_$(XEN_OS) := y
>> +
>> +CONFIG_XEN_INSTALL_SUFFIX :=
>> diff --git a/xen/Makefile b/xen/Makefile
>> index 7ce7692354..89879fad4c 100644
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -26,7 +26,9 @@ MAKEFLAGS += -rR
>>   EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
>>
>>   ARCH=$(XEN_TARGET_ARCH)
>> -SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
>> +SRCARCH=$(shell echo $(ARCH) | \
>> +          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
>> +              -e s'/riscv.*/riscv/g')
>>
>>   # Don't break if the build process wasn't called from the top level
>>   # we need XEN_TARGET_ARCH to generate the proper config
>> @@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
>>   # Set ARCH/SUBARCH appropriately.
>>   export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>>   export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
>> -                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
>> +                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
>> +                                -e s'/riscv.*/riscv/g')
>>
>>   # Allow someone to change their config file
>>   export KCONFIG_CONFIG ?= .config
>> @@ -335,6 +338,7 @@ _clean: delete-unfresh-files
>>          $(MAKE) $(clean) xsm
>>          $(MAKE) $(clean) crypto
>>          $(MAKE) $(clean) arch/arm
>> +       $(MAKE) $(clean) arch/riscv
>>          $(MAKE) $(clean) arch/x86
>>          $(MAKE) $(clean) test
>>          $(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
>> diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
>> new file mode 100644
>> index 0000000000..bd8381c5e0
>> --- /dev/null
>> +++ b/xen/arch/riscv/Kconfig
>> @@ -0,0 +1,47 @@
>> +config RISCV
>> +       def_bool y
>> +
>> +config RISCV_64
>> +       def_bool y
>> +       select 64BIT
>> +
>> +config ARCH_DEFCONFIG
>> +       string
>> +       default "arch/riscv/configs/tiny64_defconfig"
>> +
>> +menu "Architecture Features"
>> +
>> +source "arch/Kconfig"
>> +
>> +endmenu
>> +
>> +menu "ISA Selection"
>> +
>> +choice
>> +       prompt "Base ISA"
>> +       default RISCV_ISA_RV64IMA if RISCV_64
>> +       help
>> +         This selects the base ISA extensions that Xen will target.
>> +
>> +config RISCV_ISA_RV64IMA
>> +       bool "RV64IMA"
>> +       help
>> +         Use the RV64I base ISA, plus the "M" and "A" extensions
>> +         for integer multiply/divide and atomic instructions, respectively.
>> +
>> +endchoice
>> +
>> +config RISCV_ISA_C
>> +       bool "Compressed extension"
>> +       help
>> +         Add "C" to the ISA subsets that the toolchain is allowed to
>> +         emit when building Xen, which results in compressed instructions
>> +         in the Xen binary.
>> +
>> +         If unsure, say N.
> I would change this to y if you are unsure. I don't expect any
> hardware to have an MMU (yet along the H extension) and no compressed
> instruction extension. Linux won't run without the C extension.
> Otherwise looks good:
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Thanks. I was thinking it might make bring-up easier (at least in the
assembly glue) if C was turned off, but in the end it will probably be
easiest to mimic Linux. I will change it to y.

Connor


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 01:49:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 01:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136623.253233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyxs-0000G6-7k; Fri, 04 Jun 2021 01:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136623.253233; Fri, 04 Jun 2021 01:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyxs-0000Fz-4M; Fri, 04 Jun 2021 01:49:36 +0000
Received: by outflank-mailman (input) for mailman id 136623;
 Fri, 04 Jun 2021 01:49:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sjsn=K6=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1loyxq-0000Ft-Ue
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 01:49:34 +0000
Received: from mail-ot1-x32f.google.com (unknown [2607:f8b0:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a52cb18b-cc8d-4560-a140-ab594ebd5ba6;
 Fri, 04 Jun 2021 01:49:34 +0000 (UTC)
Received: by mail-ot1-x32f.google.com with SMTP id
 i14-20020a9d624e0000b029033683c71999so7664916otk.5
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 18:49:34 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id i9sm151710oog.17.2021.06.03.18.49.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 03 Jun 2021 18:49:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a52cb18b-cc8d-4560-a140-ab594ebd5ba6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=cnn1WIVeIkKDuZit5//DlNFEUDGCm+dDJ+Nj7pdKMb4=;
        b=Gan7PgMouXZ0akfuXQ7Nq5K2gy+hIgs2Ks39ahncwUjCrzIvPEtwsr51Wjr0TkztP+
         Dd5xt243YLNIhxcp48StqLUak1Cr6rfOjBjbVVUZivi6FNLT/NBBtMH30arVn0ESwHad
         t9hQXDA1teNuVNg33JVS/6eQhZuKmVqj38EHM7E8nDBHpwmAoMIpGPVFSCt7MsiCydSg
         p6cuN1YjGOzEJKNZv7fn/AHgpZXucYpivP0bU4QdYtmqmGhoO9vo9qFYTld5/p8sLNya
         KsRxJoWGzAckzpc8rmPsrO3c+olV0/2m5cy+AfhBm4dMZh18A4r71PP/YN8hLnpdYvLh
         8vYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=cnn1WIVeIkKDuZit5//DlNFEUDGCm+dDJ+Nj7pdKMb4=;
        b=M4FX9FsP9HPA+5KGqQ7R/wmZ4ao3N9NfldlMFS5FjvVhDTg2AOLBFm4i9PlE9ovj4J
         MjRPPcjRHtSmiwO2HIaLGV9XfloY4so9GlpnS2h+FdZKQbAZMa7KlstrZbjXSawOqKOi
         uXRCHQR0yZQ4mDubRerPb8g1bayc5t7A9aT6vfbMqGwSARXLKy+FUTxT9w5ybujg+61U
         93p5bCCjzJsS0+kC67Vd326Ax0w9+tNG+hOj1v4OJiJqS1Nkm0MhgEkY27eqJmTDedIL
         C1db9KHH5PTjn71E+dycGiubjWfM4sE6iyfSLaQGaCwW6uP24lfLuSZRVVI3C1UdP2+D
         iPGw==
X-Gm-Message-State: AOAM533CW0BPODvCypwWYfRL6Ee53itLlp4EQMnIhR1SStGszXMA7W8+
	7pSsfw70TIWLGJIVkM7jHqY=
X-Google-Smtp-Source: ABdhPJygi+Mu0NIWldqRI0E1lCIvefrKFp/C4Mrm2lkkT8OQabChnw7Q/GWU3Vn3FwIWmrQryjKrdg==
X-Received: by 2002:a9d:7987:: with SMTP id h7mr1820146otm.70.1622771374010;
        Thu, 03 Jun 2021 18:49:34 -0700 (PDT)
Subject: Re: [PATCH v7 1/2] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Alistair Francis <alistair23@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1622676439.git.connojdavis@gmail.com>
 <d2d19b62bd2a570db97f2940e6152bf93dc01632.1622676439.git.connojdavis@gmail.com>
 <CAKmqyKO0JQVOxfrO0jk_-4eBBZSkcQm1-pMhd2xzXaE55usXWQ@mail.gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <4b674135-fb25-689c-dbac-8a8f006e3f58@gmail.com>
Date: Thu, 3 Jun 2021 19:49:46 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKmqyKO0JQVOxfrO0jk_-4eBBZSkcQm1-pMhd2xzXaE55usXWQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US



On 6/3/21 5:27 PM, Alistair Francis wrote:
> On Thu, Jun 3, 2021 at 9:38 AM Connor Davis <connojdavis@gmail.com> wrote:
>> Defaulting to yes only for X86 and ARM reduces the requirements
>> for a minimal build when porting new architectures.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>
> Alistair
Thanks, added

Connor


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 01:50:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 01:50:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136629.253245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyyT-0001Z5-GR; Fri, 04 Jun 2021 01:50:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136629.253245; Fri, 04 Jun 2021 01:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1loyyT-0001Yy-D0; Fri, 04 Jun 2021 01:50:13 +0000
Received: by outflank-mailman (input) for mailman id 136629;
 Fri, 04 Jun 2021 01:50:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI2w=K6=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1loyyR-0001Yi-Ua
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 01:50:12 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc976051-4f72-4b75-a0ab-1c96fb4f6ae6;
 Fri, 04 Jun 2021 01:50:09 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 1541m9MC051830;
 Fri, 4 Jun 2021 01:49:15 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 38ue8pmqx9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 04 Jun 2021 01:49:15 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 1541k4tc001373;
 Fri, 4 Jun 2021 01:49:15 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2177.outbound.protection.outlook.com [104.47.57.177])
 by userp3030.oracle.com with ESMTP id 38uaqyt6uq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 04 Jun 2021 01:49:14 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4384.namprd10.prod.outlook.com (2603:10b6:208:198::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Fri, 4 Jun
 2021 01:49:12 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4195.023; Fri, 4 Jun 2021
 01:49:12 +0000
Received: from [10.74.96.237] (160.34.88.237) by
 BY3PR10CA0016.namprd10.prod.outlook.com (2603:10b6:a03:255::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Fri, 4 Jun 2021 01:49:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc976051-4f72-4b75-a0ab-1c96fb4f6ae6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=NbRCNCPY8R1hSw7JhonvH/YFtcFYtqbJUiX8S0oc01Y=;
 b=pnhtjbUmsIEY4sG/zPhvG/TeCWwVGsLIUspF6X5lKBqIGYUEdZVnW2EdTSxDy/a4SLnm
 0I3nNsc7ccM0aT+JPjeapBbbODdTm/VWHNzwlC7+u/Sq8QpyWBtdGUJUibMXY2mn1GM0
 +JplUrKIMatC8S754ZgTx1NSsndACRDQLlo1jt/jAs9jtXE6JL8qBWn1R/FbUnKDSgTd
 nrxPEb7a/MbzktIHSUmiKzdKdrp5lnpuHa2WpL3hUyst4ob4PH5hLdKdFIzWLaSH9XtN
 bMlQWDPUYGpmiDb8fXsBf1z9U9T0MhTzmEyvq6P7tMxUrgtB96PcJZM/NfzmE5mDRfkn ww== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y9FY/OcteczFF+sqeu3i3cWk84bjrEe2Nyzg9yjw+p5WAjW8czPz9Y35uBgsCZRNqgiCE5CHFfQ0nq0dqS8wndNlPt8nuC2w+e+mBFY3QeYh/kpvuPgjWjHgN2mDtwu4iXc2Za9kPIjM5sC96lwSahvW+XKAch95SiQs6tTYj+D/EcmlfgmsRB2eJjH888UpuBeMrtEX7xlbsrW/DOd3Zhqvq1MCt4pTDLS8G+ivv86S1PZergHeDT174usqlt8z0O44HV5b972IDZTDdhQaKH8h5XkoEQRXtRaVxpMYI3teixlGBJKoFOQYAvB1aBWwgY0+dMh2DPwRWGGervOYrg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NbRCNCPY8R1hSw7JhonvH/YFtcFYtqbJUiX8S0oc01Y=;
 b=R29twU9k55JT/x8Dai0HMzG2ytvxffZuwKySk9aB1ds98kMRisHGfylf/f8m0e5j7CL4unowS997sZs99QACFETXE88r9oP6Oiom62JAafLZ2a+PfR15rf5a5kDo64/okht5qy2wfAs2VNJkxIbLtvWsr5MnNXlUn3lRJ52y2pAKwujdAnaFG+pgEEb7//J5qf8B96qjKCA1XhXz5ZTmbUyXeKO2rclj6IJdimhY4hevaukPlX+jXHDETIJIUt/b1cgdfj7YrrwHFxlyuay0cwtoOkSrYB6s8O53fuLRoBcXXQRu1VyP+/ao0xzYan9Cpc8vXTRaEx/OWHrBC10xRQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NbRCNCPY8R1hSw7JhonvH/YFtcFYtqbJUiX8S0oc01Y=;
 b=teD6CnAvT06EaZH1oHtuSiThk+JIeT0V6r7cXJVy1hwH0Ub/3igfjb2As9yflrh3pj0iEE0GP7QhDwp/otYd96kUvrMl+B+FX0202P/r/HI33+ELIIXgly5cfLm3GGPqsPKmEqI9yKo5AgD5XFGr70luXrVdMDdDcjQRwyFGaZ8=
Authentication-Results: amazon.co.uk; dkim=none (message not signed)
 header.d=none;amazon.co.uk; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
        "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
        "hpa@zytor.com" <hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
        "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
        "linux-mm@kvack.org" <linux-mm@kvack.org>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "axboe@kernel.dk" <axboe@kernel.dk>,
        "davem@davemloft.net" <davem@davemloft.net>,
        "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
        "len.brown@intel.com" <len.brown@intel.com>,
        "pavel@ucw.cz" <pavel@ucw.cz>,
        "peterz@infradead.org" <peterz@infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>,
        "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        "dwmw@amazon.co.uk" <dwmw@amazon.co.uk>
References: <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
 <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <1ff91b30-3963-728e-aefb-57944197bdde@oracle.com>
 <20210602193743.GA28861@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <2cb71322-9d3d-395e-293b-24888f5be759@oracle.com>
 <20210603232742.GB14368@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <07fca94f-86e1-3a0a-0078-0c0d6aa52363@oracle.com>
Date: Thu, 3 Jun 2021 21:49:04 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210603232742.GB14368@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [160.34.88.237]
X-ClientProxiedBy: BY3PR10CA0016.namprd10.prod.outlook.com
 (2603:10b6:a03:255::21) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4e2c7768-a512-4fcf-a69f-08d926faf37a
X-MS-TrafficTypeDiagnostic: MN2PR10MB4384:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<MN2PR10MB4384F0C9073711605DE313DB8A3B9@MN2PR10MB4384.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	xppyepoSXXUHZ74e6dRnIAODnvLMecFa4hYdTHhjtyH7vtaDCz88FCM8TVln6poQ8B8vbLzwthXYnK7aNbS01+/F60SLUFeqKdkSNk36SHLxcffjzOCAAzyG6gGAHs37BaMFj5h1l4HFI51afSAl/unGLgCKpIjLF1lYKWt89QQWbGg1F5iebuc14Q2YE3rpdzwLr04TbT5+qrSPIWr2ZqdGG35qBnELWH2Ci0MtwkOM+oRt/y7V5FT/VRvbpb1cyPRr+zXx9YEvyyUHTvAe1KhLznSRQBWTY0o97KwC+SZ6GPOXrh5jJxIGyJ8hRHvJc+GPc3bmRz5BC9ptvIDAg9fEkITtYZ4XyHCbUW6eB6w7b+ku0p4PvHt2bDqKIul+rupG3BTTIdl/9qAfZGvkyIhVevkYIcD3knF+zV1Ic2qh2M9FQaKcsBUNxhHhBnrV/GmL8fA60xgRCJ8szQuzWjp1B42h7cGJ+KAz8iXbOxKiUnJAbLjpvxYtxDIAFF41weifjOW31QRSoU9nPaG1CbwXenYk6sPelh1sGqDM96YetCub6TfNlzURXhT+S/svN4fE41EunUjjqyQfY3lHuY/PXE/5qXfLVK/aNCcoWJ8inZpLmcJlUp/uq/UHi8kLzyIME6/2jrXTB45SFRLesKcyEm3FdF98bJ9RFUgwaA3JQu4IIzMLogw6kyGC890P
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(136003)(346002)(396003)(366004)(376002)(2616005)(4326008)(186003)(956004)(8936002)(31686004)(44832011)(6666004)(66476007)(66556008)(66946007)(54906003)(86362001)(316002)(8676002)(6916009)(38100700002)(26005)(53546011)(31696002)(16576012)(16526019)(7416002)(36756003)(5660300002)(6486002)(2906002)(478600001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?N1VPb0hiejZuL2FsdWQrbHFQcUx6N3FYcy9lUEdZY1ZwSFVlcXFpeHZnS05z?=
 =?utf-8?B?b0lCcW5Ud3FRZ1JSeWE3eUtxM3AvUk1yYnVSdTVKSTlhQUI3eTB4YWNQdDJ0?=
 =?utf-8?B?YjNxZVM4elZydEl1Z2UvWG5yM0xCQlhFdW5yKzRKUWxsY0MwTlE4OEJLRFgw?=
 =?utf-8?B?UEY0QzRuM3BOSDY3UldyWWVjL0V2UUIvcHY2TDBIOGk4Q04yQTQvbUJDNHp3?=
 =?utf-8?B?anMwZHVINnI4T3JXQUh3SUVjNXNKcHBDT2s4c3l2MVord29zN0lZTHNGTG94?=
 =?utf-8?B?TUcvR3RTUlA0MS9hT0huVFdnQ0VWNWtVQkNLTHNSM2ZZOGFIdGNIOUxXSEVV?=
 =?utf-8?B?UVNIZGxvWElYaFdBT2hsdTE2dzM3ZldWY0luMVFhRlgzK3VBRTFYQzNUZHVx?=
 =?utf-8?B?WXBlU1BtY0pPOWRpazVGS0JselJDRkVLN2czVE96cXpEckxVQXIzUERmZHRR?=
 =?utf-8?B?WWxtR2ZaZ09EeUdtdEtXMGZudjJCR1FHOGlLa3orRGo1YXUvTGduaURSK3VY?=
 =?utf-8?B?WXp0SEgyTENTOHlaeTZOYWZ5SkpGQ2lZU2ZMYTJyNlQrbkRFTUlHaHVpTEV0?=
 =?utf-8?B?YmdmZWxMa05oTHl0U2V6OGp0d2xORDVjbDZlWk1aaEtjM0cyR0FFN1pGb1gw?=
 =?utf-8?B?R1J4amJsTmNDbGoweG0wY1J3Q3g5RjVOUXluMlZGTjNrODBGOGtHQ3pQSTFD?=
 =?utf-8?B?SEJUenpxbUlZKzl2RkdjbVR5bmQ3ckFQTEtKK3habjNkT1IxVWlwWGd3N1lF?=
 =?utf-8?B?dTZQOW5RSE5rekUyRHkxbFJYOW92S1U3aE96S0xaeVUvbnBEejZOY0thdm1H?=
 =?utf-8?B?dmZPdS85WnFwSUU4Ums5TkhuK0hJZHl3UHpaeFNKNDc4bmw0aWQxMTA0Wndk?=
 =?utf-8?B?SU83Njl1SjAyMjRHL0wvSUx4azh3NDFyUHhnTGtSdnFoSmJBTytWYmxIUHFU?=
 =?utf-8?B?dEpyNDk0dVlicnBmcnVVRURWL1NWNlZkOHJYSUl4NmdMQ2VBdGxYM3VrVVpt?=
 =?utf-8?B?V1FUdTVMQnM0OFltcVNrNDl2Y2tVeWlBM3ZhazV4QXIxZisrS0ZvcVV6d2s5?=
 =?utf-8?B?elVBMDRnK2U0RW5OU1ZpVE9RR0FDMUUwaWV3SEFOcDZqTmxvUmN5aW5hSTRn?=
 =?utf-8?B?NENaa0tIOG1tNFNaNWJ2U1RRdDhCSnlFd1Q0bkswcjVQZFBibEoxT2ZURWJv?=
 =?utf-8?B?RmFFUG1hRnV6S3owTkVqRmgvWWIrRmxrYkdrNU5iTHc0VDh1Ri9oYUExbFFB?=
 =?utf-8?B?RTNUcXJOOUN5QVNPRHZnUFJmS1RpOThaY1h6b09aeWpvbHQzblVTbWMrMVlB?=
 =?utf-8?B?OHNhNWo4RnVzak80c1FMUTJDbDQwcjRxVjZ3ZStDZC9UTDRmSWthME9ENFF0?=
 =?utf-8?B?Y1JPWVAremoySFFBMUxoenlFVmd0SWVoVGo4Wnp1MVU0R3U0VWh4RXJhaVly?=
 =?utf-8?B?NTY5b3Y1RzE0bHRXRi9PQUhwWG02TmdpTVdsVzJaVUdDR3IzcE1GUjQ0T2Mr?=
 =?utf-8?B?YWw4cDhtam5yTHNaTUtidHozTjZSQXQ1MkN4TXYzSUFVZGNlME5VQWZtL0dF?=
 =?utf-8?B?bEFkU2NLeFRBcDBzWXlTYXY0ZHFVbEhWS0JuQ2pXNUhGSEpwSkkwZzlrQmVU?=
 =?utf-8?B?WjcrSlFtSDEweGVrMEtaNDVFUlR4OUw4ZHRObkoyUzc1RzhHSml0SE5lcytl?=
 =?utf-8?B?cDFkRmJXM1ovakNweUNpNWtzK1lhQ205TjR5bHluYVJ1Vng3ellidVJPcU91?=
 =?utf-8?Q?DtZw17TWNzoWC5nEL5f8Bf0sRV6a7LRRMqE6IA/?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e2c7768-a512-4fcf-a69f-08d926faf37a
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jun 2021 01:49:12.0078
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: r1CwEogCqehyqD0GR7MtzrWrE8iuWyxyTvNicDZl0JshfgZWaeKkaK0uh634HhEXU4BX00PS3yqYIbe5np2J/n4g0fvGK4TQx1LBV4vnT5I=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4384
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10004 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 bulkscore=0
 suspectscore=0 spamscore=0 adultscore=0 mlxscore=0 phishscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106040011
X-Proofpoint-GUID: x5WmnGmBkAhpLuhBekWwk9rG35LzaNYM
X-Proofpoint-ORIG-GUID: x5WmnGmBkAhpLuhBekWwk9rG35LzaNYM
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10004 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 impostorscore=0
 malwarescore=0 adultscore=0 suspectscore=0 lowpriorityscore=0 spamscore=0
 bulkscore=0 phishscore=0 priorityscore=1501 clxscore=1015 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106040011


On 6/3/21 7:27 PM, Anchal Agarwal wrote:
> On Thu, Jun 03, 2021 at 04:11:46PM -0400, Boris Ostrovsky wrote:
>
>> But if KASLR is on then this comparison not failing should cause xen_vcpu pointer in the loaded image to become bogus because xen_vcpu is now registered for a different xen_vcpu_info address during boot.
>>
> The reason for that, I think, is that once you jump into the image, that
> information is lost. But there is some residue somewhere that's causing the
> resume to fail. I haven't been able to pinpoint the exact field value that
> may be causing that issue.


xen_vcpu now points to an address which is not where the hypervisor thinks vcpu_info should be.


> Correct me if I am wrong here, but even if I hypothetically put in a hack to
> tell the kernel to somehow re-register the vcpu, it won't pass because there
> is no hypercall to unregister it in the first place?


Right. You will be shown the door in map_vcpu_info():

       if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
           return -EINVAL;


> Can the resumed kernel use the new values in that
> case? [Now this is me just throwing wild guesses!!]


I don't think so --- the hypervisor is now pointing at a random location in your image.


-boris





From xen-devel-bounces@lists.xenproject.org Fri Jun 04 02:14:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 02:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136641.253257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLm-0004PA-He; Fri, 04 Jun 2021 02:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136641.253257; Fri, 04 Jun 2021 02:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLm-0004P3-C2; Fri, 04 Jun 2021 02:14:18 +0000
Received: by outflank-mailman (input) for mailman id 136641;
 Fri, 04 Jun 2021 02:14:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sjsn=K6=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lozLk-0004Ox-79
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 02:14:16 +0000
Received: from mail-ot1-x329.google.com (unknown [2607:f8b0:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bc51c65-9bc6-4fe2-9a10-3823334f21a0;
 Fri, 04 Jun 2021 02:14:15 +0000 (UTC)
Received: by mail-ot1-x329.google.com with SMTP id
 q9-20020a9d66490000b02903c741e5b703so6593905otm.0
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 19:14:15 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id a18sm179903oiy.24.2021.06.03.19.14.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 03 Jun 2021 19:14:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bc51c65-9bc6-4fe2-9a10-3823334f21a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hgunR1rqgJO50MLCN5BCKZ0MZvL9Gm6nWA4qLEzbTig=;
        b=tXvwRqgGq7/qNl2XJTkhnLW73fGpB7vrch+1f8BrjC2+P/45bAsmHLZGBmPIqNLpPQ
         ENa/l5GLuUaJy7RMwEu/14bU1JnK7njvfQNb3nx+fryHd6uRrQQC9wzkbtPvWywslS9G
         Rq04iFVmMJNdGhZ1YYLfCGzPDwi15q8SK8cfu1+YmQZwll2CmcdOGKBO1Q53s7o8i4OO
         /9WX/4siVGfyoPTXTNIiDCC7iXu3umCI+WsE1vuePp7EuOtBH/mhyjF7uxKkUoy0Yily
         74a8BhYW2Wiqe5yYaVX6TtBQFpFlh5XzVrvaq6MNHxeaLA5laj1zakgGtIyhu+IVxi7W
         Xlkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hgunR1rqgJO50MLCN5BCKZ0MZvL9Gm6nWA4qLEzbTig=;
        b=sj4/rLBZCV6wxMyZx381cvl9a4vse13MFLT5XEF7F9ChFPQWUgSSViSHIATUhZoRrd
         bF6K2Meb//pENpRXfJkw46B4eWkXdo5Y/XU4N4hAvmJ2mGcG4vLY0Aka0IqCYvsqWKJW
         KFVQXB0Aw8PT2q+txP23fss9zVAXGi1QloFQbXWuE9zQw0jEWimua9oKkYoBUYhskObD
         iCLmTG91bbiwDU1lI4umWEcei6lyoytgcX6NR75K2WacyByg0g2/AyHWmDhh5iz5jpkc
         jtQf0hBozeqVPkSnsURZRPXGqwY0KBQwgklm67KjIsY4sF+wUFLLTM1L6SSqAhScjfQ+
         39zQ==
X-Gm-Message-State: AOAM531tHy7HYcnRFJHEiwjToTYrHXGliG07E9Rl6nlC2FoJE8UhUvkJ
	Zp6UO8QfLhaqCPW8Wfo6vflilwsXoFP/9w==
X-Google-Smtp-Source: ABdhPJy92YomF3LySIk2y5WOFLBGG0Dyv4tVlJeYV4sCld5dOCqH6+/XqGqJ0e+jQolKLyQRNwKihA==
X-Received: by 2002:a05:6830:4110:: with SMTP id w16mr1810147ott.372.1622772854408;
        Thu, 03 Jun 2021 19:14:14 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v8 0/2] Minimal build for RISCV
Date: Thu,  3 Jun 2021 20:14:03 -0600
Message-Id: <cover.1622772299.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0], rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it only supports
building TARGET=riscv64/head.o. The arch/riscv/riscv64/head.S is just
a simple while(1) loop.

The first patch is a mod to non-RISCV bits that enables building a
config with !CONFIG_HAS_NS16550.

The second patch adds the make/Kconfig boilerplate alongside head.S and
asm-riscv/config.h (head.S references ENTRY, which is defined in
asm-riscv/config.h).

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v7:
  - Default to y for RISCV_ISA_C Kconfig option (i.e. build Xen with
    compressed instructions by default)
  - Added Alistair to MAINTAINERS

Changes since v6:
  - Make sure patch versions are consistent

Changes since v5:
  - Added missing A-by from Jan to patch 1

Changes since v4:
  - Dropped patches 2 and 4 as these have been applied
  - Moved arch/riscv/head.S to arch/riscv/riscv64/head.S for consistency
    with ARM.
  - Added Bob and myself to MAINTAINERS

Changes since v3:
  - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
    applied by Jan
  - Adjusted Kconfig condition for building NS16550
  - Use bool rather than bool_t
  - Removed riscv memory map, as this should probably be done later once
    the frametable size is figured out
  - Consolidated 64-bit #defines in asm-riscv/config.h
  - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
    and CONFIG_DEBUG_INFO
  - Fixed logic/alignment/whitespace issues in Kconfig files
  - Use upstream archlinux riscv64 cross-compiler packages instead of
    custom built toolchain in docker container

Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE

--
Connor Davis (2):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen: Add files needed for minimal riscv build

 MAINTAINERS                             |  9 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 48 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/drivers/char/Kconfig                |  1 +
 xen/include/asm-riscv/config.h          | 47 ++++++++++++++++++++++++
 13 files changed, 151 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 02:14:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 02:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136642.253267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLp-0004fj-Or; Fri, 04 Jun 2021 02:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136642.253267; Fri, 04 Jun 2021 02:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLp-0004fc-Ku; Fri, 04 Jun 2021 02:14:21 +0000
Received: by outflank-mailman (input) for mailman id 136642;
 Fri, 04 Jun 2021 02:14:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sjsn=K6=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lozLp-0004Ox-2S
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 02:14:21 +0000
Received: from mail-oi1-x22c.google.com (unknown [2607:f8b0:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b1117d1-b2ad-41e4-a745-4a9a6f0f6384;
 Fri, 04 Jun 2021 02:14:17 +0000 (UTC)
Received: by mail-oi1-x22c.google.com with SMTP id v22so8342820oic.2
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 19:14:16 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id a18sm179903oiy.24.2021.06.03.19.14.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 03 Jun 2021 19:14:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b1117d1-b2ad-41e4-a745-4a9a6f0f6384
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ZbDVJKRN1h+8ugXIRuULkaSTcePO99uJiv3nqLXkYks=;
        b=mzd0yWht+bBXQ7vxyzqrmz0JSyMelxK5Wto0CHXFj8OXkamJOfCe6oXUy+EwCVeoPT
         VXm/5JWiHGNbOdyeAKNerGspye0XF3fzYD31uaOVdEFKUay+yJMqSleDMl6kJKczJL04
         g8DJ8W4rQbF4qsT/rDwNVskkrRSZltY5Ok1LOXJIYkbR3o4EmFAJwXtcdYb6Xbx2O6tf
         On6/8sQpZqN9Inec651Ni/JLMtixsZDDqtOBi49sNty+Kc7CnYlfnec7unuYj5uAWNJZ
         cAVTJuZfRDlL6HUTiOjihJBtn4MBOJ+bOU9YFaeEIHWzhR9GWUWb6PRHOoiaOWhhrEeW
         cOog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ZbDVJKRN1h+8ugXIRuULkaSTcePO99uJiv3nqLXkYks=;
        b=SX2VlmcZyult84FDjI23tZHyoewAWb5ePvfckoiRj8jaYvyeqb8f7jFFZ/6rbEtpwd
         cT7jBY+dr+5uXrMjCM6y/5hVWQlgmK/hnOt6W1GN4ExtTaU6oUGx+ISQm2ohHrtuQfy7
         e/90RCFrRrvzP4vxflJpjf1+0K6S5shPDfXfHOVEkc5m5Nj1MsH3EYIxxjk+zbOu0P3m
         ulsiv4in4FyN1Hd9APHoWsRM85EuJKk+b1TlP8Lfa99KcxtVo/8fcXwcHsWfB8dlB3Jo
         8Ro7lGpobWA3KwF0WYhF0/KqqGAHV+VNULPJ/fLbtw3baKSybHiIBOjeEq0so4bBVZz2
         Y0Lg==
X-Gm-Message-State: AOAM530P280+t+UqeSHsSLacboAZ6BMNomSIIQp9MSYDU2cTbPQQpPGz
	WFPseLlDFSaCpGLrCt8HvVcTIHl07dNdng==
X-Google-Smtp-Source: ABdhPJz7PVplV54Nb4viAJbz2T54+ZJyAw/+03cBpvvFxv1RRXdQScjyLp6tEaVgze13BYednDvf+Q==
X-Received: by 2002:aca:220b:: with SMTP id b11mr9359038oic.89.1622772856520;
        Thu, 03 Jun 2021 19:14:16 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v8 1/2] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Thu,  3 Jun 2021 20:14:04 -0600
Message-Id: <3886e351e8333f68462b0d2f9ecb1e243cb8b9ec.1622772299.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622772299.git.connojdavis@gmail.com>
References: <cover.1622772299.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 xen/drivers/char/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..2ff5b288e2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,5 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 02:14:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 02:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136643.253278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLv-000511-5M; Fri, 04 Jun 2021 02:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136643.253278; Fri, 04 Jun 2021 02:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozLv-00050p-1i; Fri, 04 Jun 2021 02:14:27 +0000
Received: by outflank-mailman (input) for mailman id 136643;
 Fri, 04 Jun 2021 02:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sjsn=K6=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lozLu-0004Ox-2Z
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 02:14:26 +0000
Received: from mail-ot1-x334.google.com (unknown [2607:f8b0:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6465723d-4fd8-409f-929c-12a3680f4b0f;
 Fri, 04 Jun 2021 02:14:18 +0000 (UTC)
Received: by mail-ot1-x334.google.com with SMTP id
 66-20020a9d02c80000b02903615edf7c1aso7667299otl.13
 for <xen-devel@lists.xenproject.org>; Thu, 03 Jun 2021 19:14:18 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id a18sm179903oiy.24.2021.06.03.19.14.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 03 Jun 2021 19:14:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6465723d-4fd8-409f-929c-12a3680f4b0f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Fz5g9/C6TUTCVwzOqiNArfaZRG1hXAbJvG6pMsvIAko=;
        b=in3moGkktPRDAw4r2i1LfWozyIVjfbIwxNj+EJWupD6/KPAO3PjShVjqmFFboXnNwn
         VFB4ua7VRz9rLHqDQGvb9LJGJi9HrXSChCMjHf+DE5Eor7JzfVdWLtytYQ8jNQ4LBygn
         kCfMuhc7oNquajtERoVBOYKZjwwQ+HCPSPvQ4iYXTtPJcpcbQm2oCRiQduJIpQ5qKcet
         GKn+gzIq/1w6EDkxYjjpLaAGfNyVvaDeUknvPateVUHmIsuJEFZpBedbN0eqK0uDZhM3
         qDCWqwZl8cYKewWPZWIPJmjguKIEkKSAsMDfNaKVh/NOkwCfkMQMZJSDyXooRaSOw5nH
         bObQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Fz5g9/C6TUTCVwzOqiNArfaZRG1hXAbJvG6pMsvIAko=;
        b=F8gFZjaqBgYPtzUvH5+CgriUdKfm+f99OKtkplktkkF8ww2XNouudjIjS2lpLW/loI
         v41CSerKKM5Hes8Zp4EFFURzKJkgFM0syZNdeZjWYwMdtD3nUlk1dRYs9MgBRgAGTV+0
         lrnvrdfi3pRzmcn0vOo4sSuWtcnsA8294XZlKRyClMkAbG7gHpSITVc5Yp7iwNTsM5RC
         2JDGDskeoLsC1aZmeXMtAVSPx4qjlSaB6VxZgWYkU9dXkMki6PlGjOEsFKAEsKRa7l8E
         +mTn0SuVTqiMnOoTkcTGLomCJ18KSj5nu9eYM7C8O5on0rZvjGWM+1/LUIoyE30HGUrT
         O+vQ==
X-Gm-Message-State: AOAM530lBR2YtZe8jBqUKXVtAWp5WUEpcynu7hETsnS56MxaDlKu41XO
	I21wEL8kHONuJW/HDLgfHiZlDxwdivOIpA==
X-Google-Smtp-Source: ABdhPJwbrRHwbka2i9AWbd4QsHICHVo6yA0VuVzpBjs7g4Pu1kelhvt1sGVal3w5DzXyfZMXWtV9pg==
X-Received: by 2002:a9d:69c5:: with SMTP id v5mr1835711oto.108.1622772858182;
        Thu, 03 Jun 2021 19:14:18 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v8 2/2] xen: Add files needed for minimal riscv build
Date: Thu,  3 Jun 2021 20:14:05 -0600
Message-Id: <4337d3cd6891b34f534d85ca62712bd3b446edf8.1622772299.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1622772299.git.connojdavis@gmail.com>
References: <cover.1622772299.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add arch-specific makefiles and configs needed to build for
riscv. Also add a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o

No other TARGET is supported at the moment.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 MAINTAINERS                             |  9 +++++
 config/riscv64.mk                       |  5 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 48 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  2 ++
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/riscv64/asm-offsets.c    |  0
 xen/arch/riscv/riscv64/head.S           |  6 ++++
 xen/include/asm-riscv/config.h          | 47 ++++++++++++++++++++++++
 12 files changed, 150 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/riscv64/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..5a1f92422a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -456,6 +456,15 @@ F:	tools/libs/light/libxl_nonetbuffer.c
 F:	tools/hotplug/Linux/remus-netbuf-setup
 F:	tools/hotplug/Linux/block-drbd-probe
 
+RISCV
+M:	Bob Eshleman <bobbyeshleman@gmail.com>
+M:	Alistair Francis <alistair.francis@wdc.com>
+R:	Connor Davis <connojdavis@gmail.com>
+S:	Supported
+F:	config/riscv64.mk
+F:	xen/arch/riscv/
+F:	xen/include/asm-riscv/
+
 RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..a5a21e5fa2
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,5 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 7ce7692354..89879fad4c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+              -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+                                -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..468e250c86
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,48 @@
+config RISCV
+	def_bool y
+
+config RISCV_64
+	def_bool y
+	select 64BIT
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/tiny64_defconfig"
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA if RISCV_64
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	default y
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed to
+	  emit when building Xen, which results in compressed instructions
+	  in the Xen binary.
+
+	  If unsure, say Y.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..942e4ffbc1
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,2 @@
+.PHONY: include
+include:
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..53dadb8975
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,14 @@
+########################################
+# RISCV-specific definitions
+
+CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
new file mode 100644
index 0000000000..3c9a2ff941
--- /dev/null
+++ b/xen/arch/riscv/configs/tiny64_defconfig
@@ -0,0 +1,13 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_RISCV_64=y
+CONFIG_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/riscv64/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..e2ae21de61
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,47 @@
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+# define MAX_VIRT_CPUS 128u
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+#define CONFIG_PAGEALLOC_MAX_ORDER  18
+#define CONFIG_DOMU_MAX_ORDER       9
+#define CONFIG_HWDOM_MAX_ORDER      10
+
+#define OPT_CONSOLE_STR "dtuart"
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 02:47:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 02:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136666.253289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozrm-0000Nz-GU; Fri, 04 Jun 2021 02:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136666.253289; Fri, 04 Jun 2021 02:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lozrm-0000Ns-DE; Fri, 04 Jun 2021 02:47:22 +0000
Received: by outflank-mailman (input) for mailman id 136666;
 Fri, 04 Jun 2021 02:47:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lozrk-0000Ni-UQ; Fri, 04 Jun 2021 02:47:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lozrk-0001XA-Mp; Fri, 04 Jun 2021 02:47:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lozrk-00058x-AY; Fri, 04 Jun 2021 02:47:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lozrk-0007dQ-9d; Fri, 04 Jun 2021 02:47:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CAn1TUBXY62DELOKS8JEx/+R/mGYfHYnJi2bQzMjWzA=; b=3BZtv+X/ahrXerrEudmxw2hKI5
	WN2DJKN0xQi1NgDrQ2zn4hdCTbgaYWqogqS0DqL8od0s4y6GW1QwPlATOfYhQUXyp7eonVhz38G73
	xTybrYEMBkrcjqMPI9AJiO7wYGJ5l27LRGot4+jwzuSdoEnP3S4pCGkqew8lK5MkxRuQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162354: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=324c92e5e0ee0e993bdb106fac407846ed677f6b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 02:47:20 +0000

flight 162354 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162354/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 162344 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail in 162344 pass in 162354
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10    fail pass in 162344
 test-arm64-arm64-xl          13 debian-fixup               fail pass in 162344
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 162344

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl         15 migrate-support-check fail in 162344 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 162344 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                324c92e5e0ee0e993bdb106fac407846ed677f6b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  307 days
Failing since        152366  2020-08-01 20:49:34 Z  306 days  523 attempts
Testing same since   162344  2021-06-03 03:41:50 Z    0 days    2 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1666542 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 04:01:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 04:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136677.253302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp11A-0007s3-0y; Fri, 04 Jun 2021 04:01:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136677.253302; Fri, 04 Jun 2021 04:01:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp119-0007rw-UB; Fri, 04 Jun 2021 04:01:07 +0000
Received: by outflank-mailman (input) for mailman id 136677;
 Fri, 04 Jun 2021 04:01:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8CJS=K6=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lp118-0007rq-NF
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 04:01:06 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.44]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6494481e-744c-463d-a43e-1b7f34ddf92f;
 Fri, 04 Jun 2021 04:01:04 +0000 (UTC)
Received: from AM6PR05CA0004.eurprd05.prod.outlook.com (2603:10a6:20b:2e::17)
 by AM4PR08MB2883.eurprd08.prod.outlook.com (2603:10a6:205:9::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.29; Fri, 4 Jun
 2021 04:01:02 +0000
Received: from AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::e2) by AM6PR05CA0004.outlook.office365.com
 (2603:10a6:20b:2e::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Fri, 4 Jun 2021 04:01:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT013.mail.protection.outlook.com (10.152.16.140) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Fri, 4 Jun 2021 04:01:02 +0000
Received: ("Tessian outbound a5ae8c02e74f:v93");
 Fri, 04 Jun 2021 04:01:02 +0000
Received: from 889749a9838a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EE767B7C-5E8F-4308-A66A-B59D31796660.1; 
 Fri, 04 Jun 2021 04:00:55 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 889749a9838a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 04 Jun 2021 04:00:55 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB4752.eurprd08.prod.outlook.com (2603:10a6:802:a4::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 4 Jun
 2021 04:00:48 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918%7]) with mapi id 15.20.4195.023; Fri, 4 Jun 2021
 04:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6494481e-744c-463d-a43e-1b7f34ddf92f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pkqWtVYJ30BWp+L9Leb4awSw7mxYMDTgQ18YuCPzsxA=;
 b=DAzVQqpCorNTspy/JT+wdU2OZaEvCZFqiNBjcqFl1FwSeQDFzc3L+NMPlORz1xC9GB3uoGfyv+aJf+H7G/kQ6BENX3Yxv2UdOXhP+Ursw2VC37UteOLCePGZW0PpS4fIuNs0beZp7JbZ1G9TktfcBHAEtZ/SjvQiC5M6zyOz5jA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SRy3ZGUR3QoVgwGpp9ZBUw/2bud3qddgOO4tR/abwsrPbgQpW/ZIusU16Er6QkxJKfkieESpf1XV2v4FpygUP0UonL6lNfPTL9ErmLKYIFpgUJ5BQP80r5tFca8RjufaPmcM5SoBlW26LJUmqBTr4TJeplOlbqV6L0k9fZ+br6vpjcqyKk+CFjwfW+Son0IxOpBm9nvcmV3SLiRa2mv6YyIKSaENmAZKsYGT8W955zcA8+ecYG2GyuDeVtYB5z3aW7j2Jos1f8j6Io3DZtLVF/BFiRGn8Xdl1gnQpPVnwrDRoV1fKIGr5m1DrtKGNLNpE9BECinRIKo7rSkm2hD1ZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pkqWtVYJ30BWp+L9Leb4awSw7mxYMDTgQ18YuCPzsxA=;
 b=jcbtWaH5QLxiik80+U5P45ijb3nvjM5wIBthuQXMk0QsOiuJi0XrWuUzOmKtL5n+/QWCQxTK92uY4mgpfSjBpDlVvKZZ7gBVyaLjyUONDEUbmViflb3GoxmP0IqR9MhZ2KyxbVKLIJEJDewkhXCtQ+MODMkN4L7f0j9fOS3RwA5NhlatAqHmo5QAIdGza7Dzv1xtVVZaiBH+sOG8ZA7a6po0TWVYjp/Mut46T2XQK5jPf3e6SQArO38xJcRRkZpPfv5IjbUApCtIeZ02CMWjRRhEBY4HbnPZXlolMqbvV76tw/g4yK2QdpLuYhTypS1k7uj0xpLUvSNIFhX2gFJ6Tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien.grall.oss@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, nd
	<nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Topic: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Index:
 AQHXS6WvvFnJG9tHGECWPMZXor2qraro8JyAgAAPtYCAAiGdAIAAuwwggAA2KACAFH+QgIABhnKAgADPmoCAAAmNAIAAHmCAgAAx6vA=
Date: Fri, 4 Jun 2021 04:00:47 +0000
Message-ID:
 <VE1PR08MB52151410985CD8E5C577F797F73B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 701A0E4B7CD1864BABCED0B75633EBD3.0
x-checkrecipientchecked: true
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.113]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: faff3761-b166-4b1d-431c-08d9270d5e78
x-ms-traffictypediagnostic: VE1PR08MB4752:|AM4PR08MB2883:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM4PR08MB2883DB0B7E87D3D78CE28D12F73B9@AM4PR08MB2883.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4752
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
 5af8aa1d-e8cc-4d63-89ea-08d9270d55e2
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jun 2021 04:01:02.1924
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: faff3761-b166-4b1d-431c-08d9270d5e78
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
 AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2883

Hi stefano and julien

> -----Original Message-----
> From: Stefano Stabellini <sstabellini@kernel.org>
> Sent: Friday, June 4, 2021 7:56 AM
> To: Julien Grall <julien.grall.oss@gmail.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Penny Zheng
> <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> On Thu, 3 Jun 2021, Julien Grall wrote:
> > On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org>
> wrote:
> > > On Thu, 3 Jun 2021, Julien Grall wrote:
> > > > On 02/06/2021 11:09, Penny Zheng wrote:
> > > > > I could not think a way to fix it properly in codes, do you have
> > > > > any suggestion? Or we just put a warning in doc/commits.
> > > >
> > > > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > > > reserved-memory) and use the #address-cells and #size-cells from there.
> > >
> > > Julien is right about how to parse the static-memory.
> > >
> > > But I have a suggestion on the new binding. The /reserved-memory
> > > node is a weird node: it is one of the very few node (the only node
> > > aside from
> > > /chosen) which is about software configurations rather than hardware
> > > description.
> > >
> > > For this reason, in a device tree with multiple domains
> > > /reserved-memory doesn't make a lot of sense: for which domain is the
> memory reserved?
> >
> > IHMO, /reserved-memory refers to the memory that the hypervisor should
> > not touch. It is just a coincidence that most of the domains are then
> > passed through to dom0.
> >
> > This also matches the fact that the GIC, /memory is consumed by the
> > hypervisor directly and not the domain..
> 
> In system device tree one of the key principles is to distinguish between
> hardware description and domains configuration. The domains configuration
> is under /domains (originally it was under /chosen then the DT maintainers
> requested to move it to its own top-level node), while everything else is for
> hardware description.
> 
> /chosen and /reserved-memory are exceptions. They are top-level nodes but
> they are for software configurations. In system device tree configurations go
> under the domain node. This makes sense: Xen, dom0 and domU can all have
> different reserved-memory and chosen configurations.
> 
> /domains/domU1/reserved-memory gives us a clear way to express reserved-
> memory configurations for domU1.
> 
> Which leaves us with /reserved-memory. Who is that for? It is for the default
> domain.
> 
> The default domain is the one receiving all devices by default. In a Xen setting,
> it is probably Dom0. In this case, we don't want to add reserved-memory
> regions for DomUs to Dom0's list. Dom0's reserved-memory list is for its own
> drivers. We could also make an argument that the default domain is Xen itself.
> From a spec perspective, that would be fine too. In this case, /reserved-
> memory is a list of memory regions reserved for Xen drivers.  Either way, I don't
> think is a great fit for domains memory allocations.
> 
> 
> > > This was one of the first points raised by Rob Herring in reviewing
> > > system device tree.
> > >
> > > So the solution we went for is the following: if there is a default
> > > domain /reserved-memory applies to the default domain. Otherwise,
> > > each domain is going to have its own reserved-memory. Example:
> > >
> > >         domU1 {
> > >             compatible = "xen,domain";
> > >             #address-cells = <0x1>;
> > >             #size-cells = <0x1>;
> > >             cpus = <2>;
> > >
> > >             reserved-memory {
> > >                 #address-cells = <2>;
> > >                 #size-cells = <2>;
> > >
> > >                 static-memory@0x30000000 {
> > >                     compatible = "xen,static-memory-domain";
> > >                     reg = <0x0 0x30000000 0x0 0x20000000>;
> > >                 };
> > >             };
> > >         };
> >
> > This sounds wrong to me because the memory is reserved from the
> > hypervisor PoV not from the domain. IOW, when I read this, I think the
> > memory will be reserved in the domain.
> 
> It is definitely very wrong to place the static-memory allocation under
> /chosen/domU1/reserved-memory. Sorry if I caused confusion. I only meant it
> as an example of how reserved-memory (actual reserved-memory list driver-
> specific memory ranges) is used.
> 
> 
> > >
> > > So I don't think we want to use reserved-memory for this, either
> > > /reserved-memory or /chosen/domU1/reserved-memory. Instead it would
> > > be good to align it with system device tree and define it as a new
> > > property under domU1.
> >
> > Do you have any formal documentation of the system device-tree?
> 
> It lives here:
> https://github.com/devicetree-org/lopper/tree/master/specification
> 
> Start from specification.md. It is the oldest part of the spec, so it is not yet
> written with a formal specification language.
> 
> FYI there are a number of things in-flight in regards to domains that we
> discussed in the last call but they are not yet settled, thus, they are not yet
> committed (access flags definitions and hierarchical domains). However, they
> don't affect domains memory allocations so from that perspective nothing has
> changed.
> 
> 
> > > In system device tree we would use a property called "memory" to
> > > specify one or more ranges, e.g.:
> > >
> > >     domU1 {
> > >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> > >
> > > Unfortunately for xen,domains we have already defined "memory" to
> > > specify an amount, rather than a range. That's too bad because the
> > > most natural way to do this would be:
> > >
> > >     domU1 {
> > >         size = <amount>;
> > >         memory = <ranges>;
> > >
> > > When we'll introduce native system device tree support in Xen we'll
> > > be able to do that. For now, we need to come up with a different property.
> > > For instance: "static-memory" (other names are welcome if you have a
> > > better suggestion).
> > >
> > > We use a new property called "static-memory" together with
> > > #static-memory-address-cells and #static-memory-size-cells to define
> > > how many cells to use for address and size.
> > >
> > > Example:
> > >
> > >     domU1 {
> > >         #static-memory-address-cells = <2>;
> > >         #static-memory-size-cells = <2>;
> > >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> >
> > This is pretty similar to what Penny suggested. But I dislike it
> > because of the amount of code that needs to be duplicated with the
> > reserved memory.
> 
> Where is the code duplication? In the parsing itself?
> 
> If there is code duplication, can we find a way to share some of the code to
> avoid the duplication?

Both your opinions are so convincing... :/

Correct me if I am wrong:
I think the duplication which Julien means is here, see commit
https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-3-penny.zheng@arm.com/
I added another similar loop in dt_unreserved_regions to unreserve static memory.
For this part, I could try to extract common codes.

But another part I think is just this commit, where I added another check for static memory
in early_scan_node:

+    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+        process_static_memory(fdt, node, "xen,static-mem", address_cells,
+                              size_cells, &bootinfo.static_mem);

TBH, I don't know how to fix here....

I've already finished Patch v2, we could continue discussing on how to define it in
Device tree here, and it will be included in Patch v3~~~ 😉

Cheers
Penny Zheng



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136688.253317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uS-0002DF-OJ; Fri, 04 Jun 2021 06:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136688.253317; Fri, 04 Jun 2021 06:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uS-0002C4-KL; Fri, 04 Jun 2021 06:02:20 +0000
Received: by outflank-mailman (input) for mailman id 136688;
 Fri, 04 Jun 2021 06:02:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2uQ-00029d-Pn
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:18 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd639606-54a5-424a-ade5-a0bc748bb5f2;
 Fri, 04 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D2D141FD3F;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 9F923118DD;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id IOG/JejBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd639606-54a5-424a-ade5-a0bc748bb5f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786536; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OwgDOczTgEHvqLD+ix9Tn8HCQdRqPLvi5r+OvXHSj60=;
	b=jGNSg8htPDB6y00AVooNV8BpEsX2ZldYrK8DWFmwTrg/FvyDoalUMJ3MGdrB9Mm4lPh2Bg
	UhIClXNlrMniMUrqSKQf2ER1sNeQTKLAMx4io4ihHz4GWkoGQFGsIx5c6gXOQHuydkDBw/
	R+Sg/BUQsn8IvMhGVJGI98bmum1X02I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/6] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
Date: Fri,  4 Jun 2021 08:02:10 +0200
Message-Id: <20210604060214.14924-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The core of a pv linux guest produced via "xl dump-core" is not usable,
as since kernel 4.14 only the linear p2m table is kept if Xen indicates
it is supporting that. Unfortunately xc_core_arch_map_p2m() still
supports only the 3-level p2m tree.

Fix that by copying the functionality of map_p2m() from libxenguest to
libxenctrl.

Additionally the mapped p2m isn't of a fixed length now, so the
interface to the mapping functions needs to be adapted. In order not to
add even more parameters, expand struct domain_info_context and use a
pointer to that as a parameter.

Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
This is a backport candidate.
---
 tools/include/xenguest.h      |   1 +
 tools/libs/ctrl/xc_core.c     |   5 +-
 tools/libs/ctrl/xc_core.h     |   8 +-
 tools/libs/ctrl/xc_core_arm.c |  23 +--
 tools/libs/ctrl/xc_core_x86.c | 256 ++++++++++++++++++++++++++++------
 tools/libs/ctrl/xc_private.h  |   1 +
 tools/libs/guest/xg_domain.c  |  17 +--
 7 files changed, 233 insertions(+), 78 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index a4492038cf..f9fb0449ad 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -700,6 +700,7 @@ struct xc_domain_meminfo {
     xen_pfn_t *pfn_type;
     xen_pfn_t *p2m_table;
     unsigned long p2m_size;
+    unsigned int p2m_frames;
 };
 
 int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
index b47ab2f6d8..9576bec5a3 100644
--- a/tools/libs/ctrl/xc_core.c
+++ b/tools/libs/ctrl/xc_core.c
@@ -574,8 +574,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
             goto out;
         }
 
-        sts = xc_core_arch_map_p2m(xch, dinfo->guest_width, &info, live_shinfo,
-                                   &p2m, &dinfo->p2m_size);
+        sts = xc_core_arch_map_p2m(xch, dinfo, &info, live_shinfo, &p2m);
         if ( sts != 0 )
             goto out;
 
@@ -945,7 +944,7 @@ out:
     if ( memory_map != NULL )
         free(memory_map);
     if ( p2m != NULL )
-        munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES);
+        munmap(p2m, PAGE_SIZE * dinfo->p2m_frames);
     if ( p2m_array != NULL )
         free(p2m_array);
     if ( pfn_array != NULL )
diff --git a/tools/libs/ctrl/xc_core.h b/tools/libs/ctrl/xc_core.h
index 36fb755da2..8ea1f93a10 100644
--- a/tools/libs/ctrl/xc_core.h
+++ b/tools/libs/ctrl/xc_core.h
@@ -138,14 +138,14 @@ int xc_core_arch_memory_map_get(xc_interface *xch,
                                 xc_dominfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
-int xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width,
+int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
                          xc_dominfo_t *info, shared_info_any_t *live_shinfo,
-                         xen_pfn_t **live_p2m, unsigned long *pfnp);
+                         xen_pfn_t **live_p2m);
 
-int xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width,
+int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
                                   xc_dominfo_t *info,
                                   shared_info_any_t *live_shinfo,
-                                  xen_pfn_t **live_p2m, unsigned long *pfnp);
+                                  xen_pfn_t **live_p2m);
 
 int xc_core_arch_get_scratch_gpfn(xc_interface *xch, uint32_t domid,
                                   xen_pfn_t *gpfn);
diff --git a/tools/libs/ctrl/xc_core_arm.c b/tools/libs/ctrl/xc_core_arm.c
index 7b587b4cc5..93765a565f 100644
--- a/tools/libs/ctrl/xc_core_arm.c
+++ b/tools/libs/ctrl/xc_core_arm.c
@@ -66,33 +66,24 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 
 static int
 xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp, int rw)
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
     return -1;
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp)
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 0);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                              unsigned long *pfnp)
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 1);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
 }
 
 int
diff --git a/tools/libs/ctrl/xc_core_x86.c b/tools/libs/ctrl/xc_core_x86.c
index cb76e6207b..c8f71d4b75 100644
--- a/tools/libs/ctrl/xc_core_x86.c
+++ b/tools/libs/ctrl/xc_core_x86.c
@@ -17,6 +17,7 @@
  *
  */
 
+#include <inttypes.h>
 #include "xc_private.h"
 #include "xc_core.h"
 #include <xen/hvm/e820.h>
@@ -65,34 +66,169 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
     return 0;
 }
 
-static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp, int rw)
+static inline bool is_canonical_address(uint64_t vaddr)
 {
-    /* Double and single indirect references to the live P2M table */
-    xen_pfn_t *live_p2m_frame_list_list = NULL;
-    xen_pfn_t *live_p2m_frame_list = NULL;
-    /* Copies of the above. */
-    xen_pfn_t *p2m_frame_list_list = NULL;
-    xen_pfn_t *p2m_frame_list = NULL;
+    return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
+}
 
-    uint32_t dom = info->domid;
-    int ret = -1;
-    int err;
-    int i;
+/* Virtual address ranges reserved for hypervisor. */
+#define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
+#define HYPERVISOR_VIRT_END_X86_64   0xFFFF87FFFFFFFFFFULL
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+#define HYPERVISOR_VIRT_START_X86_32 0x00000000F5800000ULL
+#define HYPERVISOR_VIRT_END_X86_32   0x00000000FFFFFFFFULL
+
+static xen_pfn_t *
+xc_core_arch_map_p2m_list_rw(xc_interface *xch, struct domain_info_context *dinfo,
+                             uint32_t dom, shared_info_any_t *live_shinfo,
+                             uint64_t p2m_cr3)
+{
+    uint64_t p2m_vaddr, p2m_end, mask, off;
+    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
+    uint64_t *ptes = NULL;
+    xen_pfn_t *mfns = NULL;
+    unsigned int fpp, n_pages, level, n_levels, shift,
+                 idx_start, idx_end, idx, saved_idx;
+
+    p2m_vaddr = GET_FIELD(live_shinfo, arch.p2m_vaddr, dinfo->guest_width);
+    fpp = PAGE_SIZE / dinfo->guest_width;
+    dinfo->p2m_frames = (dinfo->p2m_size - 1) / fpp + 1;
+    p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
+
+    if ( dinfo->guest_width == 8 )
     {
-        ERROR("Could not get maximum GPFN!");
-        goto out;
+        mask = 0x0000ffffffffffffULL;
+        n_levels = 4;
+        p2m_mfn = p2m_cr3 >> 12;
+        if ( !is_canonical_address(p2m_vaddr) ||
+             !is_canonical_address(p2m_end) ||
+             p2m_end < p2m_vaddr ||
+             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_64 &&
+              p2m_end > HYPERVISOR_VIRT_START_X86_64) )
+        {
+            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" PRIx64,
+                  p2m_vaddr, p2m_end);
+            errno = ERANGE;
+            goto out;
+        }
+    }
+    else
+    {
+        mask = 0x00000000ffffffffULL;
+        n_levels = 3;
+        if ( p2m_cr3 & ~mask )
+            p2m_mfn = ~0UL;
+        else
+            p2m_mfn = (uint32_t)((p2m_cr3 >> 12) | (p2m_cr3 << 20));
+        if ( p2m_vaddr > mask || p2m_end > mask || p2m_end < p2m_vaddr ||
+             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_32 &&
+              p2m_end > HYPERVISOR_VIRT_START_X86_32) )
+        {
+            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" PRIx64,
+                  p2m_vaddr, p2m_end);
+            errno = ERANGE;
+            goto out;
+        }
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    mfns = malloc(sizeof(*mfns));
+    if ( !mfns )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("Cannot allocate memory for array of %u mfns", 1);
         goto out;
     }
+    mfns[0] = p2m_mfn;
+    off = 0;
+    saved_mfn = 0;
+    idx_start = idx_end = saved_idx = 0;
+
+    for ( level = n_levels; level > 0; level-- )
+    {
+        n_pages = idx_end - idx_start + 1;
+        ptes = xc_map_foreign_pages(xch, dom, PROT_READ, mfns, n_pages);
+        if ( !ptes )
+        {
+            PERROR("Failed to map %u page table pages for p2m list", n_pages);
+            goto out;
+        }
+        free(mfns);
+
+        shift = level * 9 + 3;
+        idx_start = ((p2m_vaddr - off) & mask) >> shift;
+        idx_end = ((p2m_end - off) & mask) >> shift;
+        idx = idx_end - idx_start + 1;
+        mfns = malloc(sizeof(*mfns) * idx);
+        if ( !mfns )
+        {
+            ERROR("Cannot allocate memory for array of %u mfns", idx);
+            goto out;
+        }
+
+        for ( idx = idx_start; idx <= idx_end; idx++ )
+        {
+            mfn = (ptes[idx] & 0x000ffffffffff000ULL) >> PAGE_SHIFT;
+            if ( mfn == 0 )
+            {
+                ERROR("Bad mfn %#lx during page table walk for vaddr %#" PRIx64 " at level %d of p2m list",
+                      mfn, off + ((uint64_t)idx << shift), level);
+                errno = ERANGE;
+                goto out;
+            }
+            mfns[idx - idx_start] = mfn;
+
+            /* Maximum pfn check at level 2. Same reasoning as for p2m tree. */
+            if ( level == 2 )
+            {
+                if ( mfn != saved_mfn )
+                {
+                    saved_mfn = mfn;
+                    saved_idx = idx - idx_start;
+                }
+            }
+        }
+
+        if ( level == 2 )
+        {
+            if ( saved_idx == idx_end )
+                saved_idx++;
+            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp;
+            if ( max_pfn < dinfo->p2m_size )
+            {
+                dinfo->p2m_size = max_pfn;
+                dinfo->p2m_frames = (dinfo->p2m_size + fpp - 1) / fpp;
+                p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
+                idx_end = idx_start + saved_idx;
+            }
+        }
+
+        munmap(ptes, n_pages * PAGE_SIZE);
+        ptes = NULL;
+        off = p2m_vaddr & ((mask >> shift) << shift);
+    }
+
+    return mfns;
+
+ out:
+    free(mfns);
+    if ( ptes )
+        munmap(ptes, n_pages * PAGE_SIZE);
+
+    return NULL;
+}
+
+static xen_pfn_t *
+xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinfo,
+                             uint32_t dom, shared_info_any_t *live_shinfo)
+{
+    /* Double and single indirect references to the live P2M table */
+    xen_pfn_t *live_p2m_frame_list_list;
+    xen_pfn_t *live_p2m_frame_list = NULL;
+    /* Copies of the above. */
+    xen_pfn_t *p2m_frame_list_list = NULL;
+    xen_pfn_t *p2m_frame_list;
+
+    int err;
+    int i;
 
     live_p2m_frame_list_list =
         xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_READ,
@@ -151,10 +287,60 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
         for ( i = P2M_FL_ENTRIES - 1; i >= 0; i-- )
             p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];
 
+    dinfo->p2m_frames = P2M_FL_ENTRIES;
+
+    return p2m_frame_list;
+
+ out:
+    err = errno;
+
+    if ( live_p2m_frame_list_list )
+        munmap(live_p2m_frame_list_list, PAGE_SIZE);
+
+    if ( live_p2m_frame_list )
+        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
+
+    free(p2m_frame_list_list);
+
+    errno = err;
+
+    return NULL;
+}
+
+static int
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
+{
+    xen_pfn_t *p2m_frame_list = NULL;
+    uint64_t p2m_cr3;
+    uint32_t dom = info->domid;
+    int ret = -1;
+    int err;
+
+    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    {
+        ERROR("Could not get maximum GPFN!");
+        goto out;
+    }
+
+    if ( dinfo->p2m_size < info->nr_pages  )
+    {
+        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        goto out;
+    }
+
+    p2m_cr3 = GET_FIELD(live_shinfo, arch.p2m_cr3, dinfo->guest_width);
+
+    p2m_frame_list = p2m_cr3 ? xc_core_arch_map_p2m_list_rw(xch, dinfo, dom, live_shinfo, p2m_cr3)
+                             : xc_core_arch_map_p2m_tree_rw(xch, dinfo, dom, live_shinfo);
+
+    if ( !p2m_frame_list )
+        goto out;
+
     *live_p2m = xc_map_foreign_pages(xch, dom,
                                     rw ? (PROT_READ | PROT_WRITE) : PROT_READ,
                                     p2m_frame_list,
-                                    P2M_FL_ENTRIES);
+                                    dinfo->p2m_frames);
 
     if ( !*live_p2m )
     {
@@ -162,21 +348,11 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
         goto out;
     }
 
-    *pfnp = dinfo->p2m_size;
-
     ret = 0;
 
 out:
     err = errno;
 
-    if ( live_p2m_frame_list_list )
-        munmap(live_p2m_frame_list_list, PAGE_SIZE);
-
-    if ( live_p2m_frame_list )
-        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
-
-    free(p2m_frame_list_list);
-
     free(p2m_frame_list);
 
     errno = err;
@@ -184,25 +360,17 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp)
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 0);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                              unsigned long *pfnp)
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 1);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
 }
 
 int
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..8ebc0b59da 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -79,6 +79,7 @@ struct iovec {
 
 struct domain_info_context {
     unsigned int guest_width;
+    unsigned int p2m_frames;
     unsigned long p2m_size;
 };
 
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index 5019c84e0e..dd7db2cbd8 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -24,13 +24,9 @@
 
 int xc_unmap_domain_meminfo(xc_interface *xch, struct xc_domain_meminfo *minfo)
 {
-    struct domain_info_context _di = { .guest_width = minfo->guest_width,
-                                       .p2m_size = minfo->p2m_size};
-    struct domain_info_context *dinfo = &_di;
-
     free(minfo->pfn_type);
     if ( minfo->p2m_table )
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
     minfo->p2m_table = NULL;
 
     return 0;
@@ -40,7 +36,6 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                           struct xc_domain_meminfo *minfo)
 {
     struct domain_info_context _di;
-    struct domain_info_context *dinfo = &_di;
 
     xc_dominfo_t info;
     shared_info_any_t *live_shinfo;
@@ -96,16 +91,16 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_core_arch_map_p2m_writable(xch, minfo->guest_width, &info,
-                                       live_shinfo, &minfo->p2m_table,
-                                       &minfo->p2m_size) )
+    if ( xc_core_arch_map_p2m_writable(xch, &_di, &info,
+                                       live_shinfo, &minfo->p2m_table) )
     {
         PERROR("Could not map the P2M table");
         munmap(live_shinfo, PAGE_SIZE);
         return -1;
     }
     munmap(live_shinfo, PAGE_SIZE);
-    _di.p2m_size = minfo->p2m_size;
+    minfo->p2m_size = _di.p2m_size;
+    minfo->p2m_frames = _di.p2m_frames;
 
     /* Make space and prepare for getting the PFN types */
     minfo->pfn_type = calloc(sizeof(*minfo->pfn_type), minfo->p2m_size);
@@ -141,7 +136,7 @@ failed:
     }
     if ( minfo->p2m_table )
     {
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
         minfo->p2m_table = NULL;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136691.253358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2ub-0003MI-UF; Fri, 04 Jun 2021 06:02:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136691.253358; Fri, 04 Jun 2021 06:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2ub-0003M7-Q7; Fri, 04 Jun 2021 06:02:29 +0000
Received: by outflank-mailman (input) for mailman id 136691;
 Fri, 04 Jun 2021 06:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2ua-00029c-CE
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:28 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfc2046a-8261-4160-b6d7-d3a1b537cd64;
 Fri, 04 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2375B21A09;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id DA946118DD;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 6Jg7NOjBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfc2046a-8261-4160-b6d7-d3a1b537cd64
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786537; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=isgrr2CDNqvffqwRlBKSl7qzhM9b8t1JhZ+3heZNMCc=;
	b=sUAmI//60kTaXOn6AY4IceYBFhGxX7lavmSv7qagAijQs3IOOcvv5Nfo+bfH7tUS8HzZSY
	2tcteC707wNGr67m4I6R2Dywco3AMlboOQZufig/4sI+R5cXm+wk6WMlyxhsAndifNKIyH
	gi9AXLn8Sn92r/uaqrcb02Y6bYmg+jE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v2 3/6] tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
Date: Fri,  4 Jun 2021 08:02:11 +0200
Message-Id: <20210604060214.14924-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of open-coding the mapping of the p2m list, use the already
existing xc_core_arch_map_p2m() call, especially as the current code
does not support guests with the linear p2m map. It should be noted
that this code is needed for colo/remus only.

Switching to xc_core_arch_map_p2m() also drops the need to bail out
when the bitness of the tool stack and the guest differ.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Acked-by: Wei Liu <wl@xen.org>
---
This might be a backport candidate
V2:
- add missing #include in ocaml stub (Andrew Cooper)
---
 tools/libs/ctrl/xc_resume.c         | 66 +++++++++--------------------
 tools/ocaml/libs/xc/xenctrl_stubs.c |  1 +
 2 files changed, 22 insertions(+), 45 deletions(-)

diff --git a/tools/libs/ctrl/xc_resume.c b/tools/libs/ctrl/xc_resume.c
index 94c6c9fb31..e3c8e83aa9 100644
--- a/tools/libs/ctrl/xc_resume.c
+++ b/tools/libs/ctrl/xc_resume.c
@@ -20,6 +20,7 @@
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 #include <xen/hvm/params.h>
+#include "xc_core.h"
 
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
@@ -137,12 +138,10 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
     struct domain_info_context _dinfo = { .guest_width = 0,
                                           .p2m_size = 0 };
     struct domain_info_context *dinfo = &_dinfo;
-    unsigned long mfn;
+    xen_pfn_t mfn, store_mfn, console_mfn;
     vcpu_guest_context_any_t ctxt;
-    start_info_t *start_info;
-    shared_info_t *shinfo = NULL;
-    xen_pfn_t *p2m_frame_list_list = NULL;
-    xen_pfn_t *p2m_frame_list = NULL;
+    start_info_any_t *start_info;
+    shared_info_any_t *shinfo = NULL;
     xen_pfn_t *p2m = NULL;
 #endif
 
@@ -164,11 +163,6 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
         PERROR("Could not get domain width");
         return rc;
     }
-    if ( dinfo->guest_width != sizeof(long) )
-    {
-        ERROR("Cannot resume uncooperative cross-address-size guests");
-        return rc;
-    }
 
     /* Map the shared info frame */
     shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE,
@@ -179,34 +173,8 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
         goto out;
     }
 
-    dinfo->p2m_size = shinfo->arch.max_pfn;
-
-    p2m_frame_list_list =
-        xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ,
-                             shinfo->arch.pfn_to_mfn_frame_list_list);
-    if ( p2m_frame_list_list == NULL )
-    {
-        ERROR("Couldn't map p2m_frame_list_list");
-        goto out;
-    }
-
-    p2m_frame_list = xc_map_foreign_pages(xch, domid, PROT_READ,
-                                          p2m_frame_list_list,
-                                          P2M_FLL_ENTRIES);
-    if ( p2m_frame_list == NULL )
-    {
-        ERROR("Couldn't map p2m_frame_list");
-        goto out;
-    }
-
-    /* Map all the frames of the pfn->mfn table. For migrate to succeed,
-       the guest must not change which frames are used for this purpose.
-       (its not clear why it would want to change them, and we'll be OK
-       from a safety POV anyhow. */
-    p2m = xc_map_foreign_pages(xch, domid, PROT_READ,
-                               p2m_frame_list,
-                               P2M_FL_ENTRIES);
-    if ( p2m == NULL )
+    /* Map the p2m list */
+    if ( xc_core_arch_map_p2m(xch, dinfo, &info, shinfo, &p2m) )
     {
         ERROR("Couldn't map p2m table");
         goto out;
@@ -228,8 +196,20 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
         goto out;
     }
 
-    start_info->store_mfn        = p2m[start_info->store_mfn];
-    start_info->console.domU.mfn = p2m[start_info->console.domU.mfn];
+    store_mfn = GET_FIELD(start_info, store_mfn, dinfo->guest_width);
+    console_mfn = GET_FIELD(start_info, console.domU.mfn, dinfo->guest_width);
+    if ( dinfo->guest_width == 4 )
+    {
+        store_mfn = ((uint32_t *)p2m)[store_mfn];
+        console_mfn = ((uint32_t *)p2m)[console_mfn];
+    }
+    else
+    {
+        store_mfn = ((uint64_t *)p2m)[store_mfn];
+        console_mfn = ((uint64_t *)p2m)[console_mfn];
+    }
+    SET_FIELD(start_info, store_mfn, store_mfn, dinfo->guest_width);
+    SET_FIELD(start_info, console.domU.mfn, console_mfn, dinfo->guest_width);
 
     munmap(start_info, PAGE_SIZE);
 #endif /* defined(__i386__) || defined(__x86_64__) */
@@ -250,11 +230,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
 out:
 #if defined(__i386__) || defined(__x86_64__)
     if (p2m)
-        munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE);
-    if (p2m_frame_list)
-        munmap(p2m_frame_list, P2M_FLL_ENTRIES*PAGE_SIZE);
-    if (p2m_frame_list_list)
-        munmap(p2m_frame_list_list, PAGE_SIZE);
+        munmap(p2m, dinfo->p2m_frames * PAGE_SIZE);
     if (shinfo)
         munmap(shinfo, PAGE_SIZE);
 #endif
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index d05d7bb30e..6e4bc567f5 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -32,6 +32,7 @@
 
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
+#include <xenguest.h>
 #include <xen-tools/libs.h>
 
 #include "mmap_stubs.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136692.253363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uc-0003QG-Cg; Fri, 04 Jun 2021 06:02:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136692.253363; Fri, 04 Jun 2021 06:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uc-0003Ov-4U; Fri, 04 Jun 2021 06:02:30 +0000
Received: by outflank-mailman (input) for mailman id 136692;
 Fri, 04 Jun 2021 06:02:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2ua-00029d-Kt
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:28 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cfd6595e-b914-4951-a782-26a003d51af0;
 Fri, 04 Jun 2021 06:02:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 939601FD47;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 63E28118DD;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ONwuF+nBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfd6595e-b914-4951-a782-26a003d51af0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786537; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4SWUo43YolLMrmlsJ+DCRZBL/Dp0uw3nhnsJOscj6IQ=;
	b=oWE0POBUbFMpkvHQHjCvq/oGFyH14joaNMlK3UIR6DoVacZWV0YsPMY9oqFDQkoxt31ggC
	BxxIszmrgJuMQNvTL9IPn6SYtbCP48RVWEOaf/cVoFDkp7G8EYzLM3XL+THpYwJPT2iaTX
	RG+sLduwpctq78ul69tXxKcg91xaMt8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 5/6] tools/libs: move xc_core* from libxenctrl to libxenguest
Date: Fri,  4 Jun 2021 08:02:13 +0200
Message-Id: <20210604060214.14924-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The functionality in xc_core* should be part of libxenguest instead
of libxenctrl, as its users are already either in libxenguest or in xl.
There is a single exception: xc_core_arch_auto_translated_physmap()
is used by xc_domain_memory_mapping(), which in turn is used by qemu.
So leave the xc_core_arch_auto_translated_physmap() functionality in
libxenctrl.

This will make it easier to merge common functionality of xc_core*
and xg_sr_save*.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
 tools/libs/ctrl/Makefile                             |  3 ---
 tools/libs/ctrl/xc_domain.c                          |  2 --
 tools/libs/ctrl/xc_private.h                         | 12 ++++++++++++
 tools/libs/guest/Makefile                            |  3 +++
 tools/libs/{ctrl/xc_core.c => guest/xg_core.c}       |  2 +-
 tools/libs/{ctrl/xc_core.h => guest/xg_core.h}       |  5 ++---
 .../libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} |  8 +-------
 .../libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} |  0
 .../libs/{ctrl/xc_core_x86.c => guest/xg_core_x86.c} |  8 +-------
 .../libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} |  0
 tools/libs/guest/xg_dom_boot.c                       |  2 +-
 tools/libs/guest/xg_domain.c                         |  2 +-
 tools/libs/guest/xg_offline_page.c                   |  2 +-
 tools/libs/guest/xg_resume.c                         |  2 +-
 14 files changed, 24 insertions(+), 27 deletions(-)
 rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
 rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (97%)
 rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (96%)
 rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
 rename tools/libs/{ctrl/xc_core_x86.c => guest/xg_core_x86.c} (99%)
 rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)

diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index fbeb3a3537..519246b0d6 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -2,9 +2,6 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 SRCS-y       += xc_altp2m.c
-SRCS-y       += xc_core.c
-SRCS-$(CONFIG_X86) += xc_core_x86.c
-SRCS-$(CONFIG_ARM) += xc_core_arm.c
 SRCS-y       += xc_cpupool.c
 SRCS-y       += xc_domain.c
 SRCS-y       += xc_evtchn.c
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e7cea4a17d..7d118848f1 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -19,8 +19,6 @@
  * Copyright (c) 2003, K A Fraser.
  */
 
-#include "xc_private.h"
-#include "xc_core.h"
 #include "xc_private.h"
 #include <xen/memory.h>
 #include <xen/hvm/hvm_op.h>
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 8ebc0b59da..dff0f0289b 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -467,6 +467,18 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
 
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
+#if defined (__i386__) || defined (__x86_64__)
+static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+{
+    return info->hvm;
+}
+#elif defined (__arm__) || defined(__aarch64__)
+static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+{
+    return 1;
+}
+#endif
+
 #endif /* __XC_PRIVATE_H__ */
 
 /*
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 2a2323ff09..2ce92d247e 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -24,6 +24,9 @@ SRCS-y += xg_offline_page.c
 else
 SRCS-y += xg_nomigrate.c
 endif
+SRCS-y       += xg_core.c
+SRCS-$(CONFIG_X86) += xg_core_x86.c
+SRCS-$(CONFIG_ARM) += xg_core_arm.c
 
 CFLAGS += -I$(XEN_libxenctrl)
 
diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/guest/xg_core.c
similarity index 99%
rename from tools/libs/ctrl/xc_core.c
rename to tools/libs/guest/xg_core.c
index 9576bec5a3..c52f1161c1 100644
--- a/tools/libs/ctrl/xc_core.c
+++ b/tools/libs/guest/xg_core.c
@@ -61,7 +61,7 @@
  */
 
 #include "xc_private.h"
-#include "xc_core.h"
+#include "xg_core.h"
 #include <stdlib.h>
 #include <unistd.h>
 
diff --git a/tools/libs/ctrl/xc_core.h b/tools/libs/guest/xg_core.h
similarity index 97%
rename from tools/libs/ctrl/xc_core.h
rename to tools/libs/guest/xg_core.h
index 8ea1f93a10..f07584aaa6 100644
--- a/tools/libs/ctrl/xc_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -131,7 +131,6 @@ struct xc_core_memory_map {
     uint64_t    size;
 };
 typedef struct xc_core_memory_map xc_core_memory_map_t;
-int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info);
 struct xc_core_arch_context;
 int xc_core_arch_memory_map_get(xc_interface *xch,
                                 struct xc_core_arch_context *arch_ctxt,
@@ -152,9 +151,9 @@ int xc_core_arch_get_scratch_gpfn(xc_interface *xch, uint32_t domid,
 
 
 #if defined (__i386__) || defined (__x86_64__)
-# include "xc_core_x86.h"
+# include "xg_core_x86.h"
 #elif defined (__arm__) || defined(__aarch64__)
-# include "xc_core_arm.h"
+# include "xg_core_arm.h"
 #else
 # error "unsupported architecture"
 #endif
diff --git a/tools/libs/ctrl/xc_core_arm.c b/tools/libs/guest/xg_core_arm.c
similarity index 96%
rename from tools/libs/ctrl/xc_core_arm.c
rename to tools/libs/guest/xg_core_arm.c
index 93765a565f..aaabd07558 100644
--- a/tools/libs/ctrl/xc_core_arm.c
+++ b/tools/libs/guest/xg_core_arm.c
@@ -17,7 +17,7 @@
  */
 
 #include "xc_private.h"
-#include "xc_core.h"
+#include "xg_core.h"
 
 #include <xen-tools/libs.h>
 
@@ -31,12 +31,6 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
     return 0;
 }
 
-int
-xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
-{
-    return 1;
-}
-
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
                             xc_dominfo_t *info, shared_info_any_t *live_shinfo,
diff --git a/tools/libs/ctrl/xc_core_arm.h b/tools/libs/guest/xg_core_arm.h
similarity index 100%
rename from tools/libs/ctrl/xc_core_arm.h
rename to tools/libs/guest/xg_core_arm.h
diff --git a/tools/libs/ctrl/xc_core_x86.c b/tools/libs/guest/xg_core_x86.c
similarity index 99%
rename from tools/libs/ctrl/xc_core_x86.c
rename to tools/libs/guest/xg_core_x86.c
index c8f71d4b75..09f5d696ce 100644
--- a/tools/libs/ctrl/xc_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -19,7 +19,7 @@
 
 #include <inttypes.h>
 #include "xc_private.h"
-#include "xc_core.h"
+#include "xg_core.h"
 #include <xen/hvm/e820.h>
 
 int
@@ -33,12 +33,6 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
     return 1;
 }
 
-int
-xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
-{
-    return info->hvm;
-}
-
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
                             xc_dominfo_t *info, shared_info_any_t *live_shinfo,
diff --git a/tools/libs/ctrl/xc_core_x86.h b/tools/libs/guest/xg_core_x86.h
similarity index 100%
rename from tools/libs/ctrl/xc_core_x86.h
rename to tools/libs/guest/xg_core_x86.h
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 2a002e7349..dac96b17a5 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -31,7 +31,7 @@
 #include <zlib.h>
 
 #include "xg_private.h"
-#include "xc_core.h"
+#include "xg_core.h"
 #include <xen/hvm/params.h>
 #include <xen/grant_table.h>
 
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index dd7db2cbd8..155e337427 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -20,7 +20,7 @@
  */
 
 #include "xg_private.h"
-#include "xc_core.h"
+#include "xg_core.h"
 
 int xc_unmap_domain_meminfo(xc_interface *xch, struct xc_domain_meminfo *minfo)
 {
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index d4722f0e8c..cfe0e2d537 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -25,7 +25,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 #include <sys/time.h>
-#include <xc_core.h>
+#include <xg_core.h>
 
 #include "xc_private.h"
 #include "xg_private.h"
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index 3bdefb2eef..d201c1488d 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -21,7 +21,7 @@
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 #include <xen/hvm/params.h>
-#include "xc_core.h"
+#include "xg_core.h"
 
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136690.253342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uW-0002kh-KF; Fri, 04 Jun 2021 06:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136690.253342; Fri, 04 Jun 2021 06:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uW-0002jS-Ah; Fri, 04 Jun 2021 06:02:24 +0000
Received: by outflank-mailman (input) for mailman id 136690;
 Fri, 04 Jun 2021 06:02:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2uV-00029d-Kq
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:23 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e75bed6-f25e-4c17-bf1d-2c16cc88ec47;
 Fri, 04 Jun 2021 06:02:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5BBDD21A0A;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2B50C118DD;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id OJZkCenBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e75bed6-f25e-4c17-bf1d-2c16cc88ec47
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786537; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eIBjeTC4n1sKa6paY8ti6Xavmo8jAw6kAIRZige1e6Q=;
	b=npwQj0bxc5WTZl2jMjTKIHJfkGlQuZssSGTpjmDOzqVoBRhh8q7l9R1XA95x3VjW/B9y5F
	iUS/Q9avZoWD9kk0/pJQopzAaTcs7tOHac8VMEYNWpTjjF1QUnUb6K5Cm7/6zt4KPdqBUP
	DCu53gUpKeUjsgoHmOQ482WfAM9c0SM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 4/6] tools/libs: move xc_resume.c to libxenguest
Date: Fri,  4 Jun 2021 08:02:12 +0200
Message-Id: <20210604060214.14924-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The guest suspend functionality is already part of libxenguest. Move
the resume functionality from libxenctrl to libxenguest, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
 tools/include/xenctrl.h                       | 63 -------------------
 tools/include/xenguest.h                      | 62 ++++++++++++++++++
 tools/libs/ctrl/Makefile                      |  1 -
 tools/libs/guest/Makefile                     |  1 +
 .../{ctrl/xc_resume.c => guest/xg_resume.c}   |  1 +
 5 files changed, 64 insertions(+), 64 deletions(-)
 rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (99%)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 58d3377d6a..2a7c836a02 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -576,69 +576,6 @@ int xc_domain_destroy(xc_interface *xch,
                       uint32_t domid);
 
 
-/**
- * This function resumes a suspended domain. The domain should have
- * been previously suspended.
- *
- * Note that there are 'xc_domain_suspend' as suspending a domain
- * is quite the endeavour.
- *
- * For the purpose of this explanation there are three guests:
- * PV (using hypercalls for privilgied operations), HVM
- * (fully hardware virtualized guests using emulated devices for everything),
- * and PVHVM (PV aware with hardware virtualisation).
- *
- * HVM guest are the simplest - they suspend via S3 / S4 and resume from
- * S3 / S4. Upon resume they have to re-negotiate with the emulated devices.
- *
- * PV and PVHVM communicate via hypercalls for suspend (and resume).
- * For suspend the toolstack initiates the process by writing an value
- * in XenBus "control/shutdown" with the string "suspend".
- *
- * The PV guest stashes anything it deems neccessary in 'struct
- * start_info' in case of failure (PVHVM may ignore this) and calls
- * the SCHEDOP_shutdown::SHUTDOWN_suspend hypercall (for PV as
- * argument it passes the MFN to 'struct start_info').
- *
- * And then the guest is suspended.
- *
- * The checkpointing or notifying a guest that the suspend failed or
- * cancelled (in case of checkpoint) is by having the
- * SCHEDOP_shutdown::SHUTDOWN_suspend hypercall return a non-zero
- * value.
- *
- * The PV and PVHVM resume path are similar. For PV it would be
- * similar to bootup - figure out where the 'struct start_info' is (or
- * if the suspend was cancelled aka checkpointed - reuse the saved
- * values).
- *
- * From here on they differ depending whether the guest is PV or PVHVM
- * in specifics but follow overall the same path:
- *  - PV: Bringing up the vCPUS,
- *  - PVHVM: Setup vector callback,
- *  - Bring up vCPU runstates,
- *  - Remap the grant tables if checkpointing or setup from scratch,
- *
- *
- * If the resume was not checkpointing (or if suspend was succesful) we would
- * setup the PV timers and the different PV events. Lastly the PV drivers
- * re-negotiate with the backend.
- *
- * This function would return before the guest started resuming. That is
- * the guest would be in non-running state and its vCPU context would be
- * in the the SCHEDOP_shutdown::SHUTDOWN_suspend hypercall return path
- * (for PV and PVHVM). For HVM it would be in would be in QEMU emulated
- * BIOS handling S3 suspend.
- *
- * @parm xch a handle to an open hypervisor interface
- * @parm domid the domain id to resume
- * @parm fast use cooperative resume (guest must support this)
- * return 0 on success, -1 on failure
- */
-int xc_domain_resume(xc_interface *xch,
-		     uint32_t domid,
-		     int fast);
-
 /**
  * This function will shutdown a domain. This is intended for use in
  * fully-virtualized domains where this operation is analogous to the
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index f9fb0449ad..61d0a82f48 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -689,6 +689,68 @@ int xc_query_page_offline_status(xc_interface *xch, unsigned long start,
 
 int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn);
 
+/**
+ * This function resumes a suspended domain. The domain should have
+ * been previously suspended.
+ *
+ * Note that there is also 'xc_domain_suspend', as suspending a domain
+ * is quite the endeavour.
+ *
+ * For the purpose of this explanation there are three guests:
+ * PV (using hypercalls for privileged operations), HVM
+ * (fully hardware virtualized guests using emulated devices for everything),
+ * and PVHVM (PV aware with hardware virtualisation).
+ *
+ * HVM guests are the simplest - they suspend via S3 / S4 and resume from
+ * S3 / S4. Upon resume they have to re-negotiate with the emulated devices.
+ *
+ * PV and PVHVM communicate via hypercalls for suspend (and resume).
+ * For suspend the toolstack initiates the process by writing the
+ * string "suspend" to the XenBus node "control/shutdown".
+ *
+ * The PV guest stashes anything it deems necessary in 'struct
+ * start_info' in case of failure (PVHVM may ignore this) and calls
+ * the SCHEDOP_shutdown::SHUTDOWN_suspend hypercall (for PV it
+ * passes the MFN of 'struct start_info' as argument).
+ *
+ * And then the guest is suspended.
+ *
+ * Checkpointing, or notifying the guest that the suspend failed or
+ * was cancelled (in the checkpoint case), is done by having the
+ * SCHEDOP_shutdown::SHUTDOWN_suspend hypercall return a non-zero
+ * value.
+ *
+ * The PV and PVHVM resume paths are similar. For PV it is much
+ * like bootup - figure out where the 'struct start_info' is (or,
+ * if the suspend was cancelled aka checkpointed, reuse the saved
+ * values).
+ *
+ * From here on the specifics differ depending on whether the guest
+ * is PV or PVHVM, but both follow overall the same path:
+ *  - PV: Bring up the vCPUs,
+ *  - PVHVM: Set up the vector callback,
+ *  - Bring up vCPU runstates,
+ *  - Remap the grant tables if checkpointing, or set them up from scratch,
+ *
+ *
+ * If the resume was not a checkpoint (or if the suspend was successful)
+ * we set up the PV timers and the different PV events. Lastly the PV
+ * drivers re-negotiate with the backend.
+ *
+ * This function returns before the guest has started resuming. That is,
+ * the guest is in a non-running state and its vCPU context is still
+ * in the SCHEDOP_shutdown::SHUTDOWN_suspend hypercall return path
+ * (for PV and PVHVM). For HVM it is in the QEMU emulated
+ * BIOS handling S3 suspend.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id to resume
+ * @parm fast use cooperative resume (guest must support this)
+ * return 0 on success, -1 on failure
+ */
+int xc_domain_resume(xc_interface *xch,
+                     uint32_t domid,
+                     int fast);
 
 /**
  * Memory related information, such as PFN types, the P2M table,
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index ce9ecae710..fbeb3a3537 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -20,7 +20,6 @@ SRCS-y       += xc_rt.c
 SRCS-y       += xc_tbuf.c
 SRCS-y       += xc_pm.c
 SRCS-y       += xc_cpu_hotplug.c
-SRCS-y       += xc_resume.c
 SRCS-y       += xc_vm_event.c
 SRCS-y       += xc_vmtrace.c
 SRCS-y       += xc_monitor.c
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 6d2a1d5bbc..2a2323ff09 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -9,6 +9,7 @@ endif
 SRCS-y += xg_private.c
 SRCS-y += xg_domain.c
 SRCS-y += xg_suspend.c
+SRCS-y += xg_resume.c
 ifeq ($(CONFIG_MIGRATE),y)
 SRCS-y += xg_sr_common.c
 SRCS-$(CONFIG_X86) += xg_sr_common_x86.c
diff --git a/tools/libs/ctrl/xc_resume.c b/tools/libs/guest/xg_resume.c
similarity index 99%
rename from tools/libs/ctrl/xc_resume.c
rename to tools/libs/guest/xg_resume.c
index e3c8e83aa9..3bdefb2eef 100644
--- a/tools/libs/ctrl/xc_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -14,6 +14,7 @@
  */
 
 #include "xc_private.h"
+#include "xenguest.h"
 
 #if defined(__i386__) || defined(__x86_64__)
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136689.253336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uW-0002hO-5R; Fri, 04 Jun 2021 06:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136689.253336; Fri, 04 Jun 2021 06:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uW-0002hH-1J; Fri, 04 Jun 2021 06:02:24 +0000
Received: by outflank-mailman (input) for mailman id 136689;
 Fri, 04 Jun 2021 06:02:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2uV-00029c-C6
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:23 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a99b291-3166-4068-be37-957b581354d1;
 Fri, 04 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9706C1FD37;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 66EB511A98;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id yJwFGOjBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a99b291-3166-4068-be37-957b581354d1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786536; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=52BTUe4jpk+WRPGPjfc9HUIe/qPIv/ZJBRtNKVKcEa4=;
	b=KCOLNc9rrbfAUNqoCLZgCTaMV9Hsq6OfO/Caxjo4dk5E5foJXWirfH3urL65K91EVUPsx+
	KMZHhfwC1cI/BLRxuzv7bvznYMjMG2tS/IK/TMzV1EvoUnw2icN8XSOnQS8vgbJ/Lv1aay
	GrGRyNhbbVYqyC02FrfRe5ugRdpuU/w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/6] tools/libs/guest: fix max_pfn setting in map_p2m()
Date: Fri,  4 Jun 2021 08:02:09 +0200
Message-Id: <20210604060214.14924-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When setting the highest pfn used in the guest, don't subtract 1 from
the value read from the shared_info data: the value read is already
the correct pfn.

Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
This is a backport candidate
---
 tools/libs/guest/xg_sr_save_x86_pv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..dae7f2817f 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -468,7 +468,7 @@ static int map_p2m(struct xc_sr_context *ctx)
 
     ctx->x86.pv.p2m_generation = ~0ULL;
     ctx->x86.pv.max_pfn = GET_FIELD(ctx->x86.pv.shinfo, arch.max_pfn,
-                                    ctx->x86.pv.width) - 1;
+                                    ctx->x86.pv.width);
     p2m_cr3 = GET_FIELD(ctx->x86.pv.shinfo, arch.p2m_cr3, ctx->x86.pv.width);
 
     return p2m_cr3 ? map_p2m_list(ctx, p2m_cr3) : map_p2m_tree(ctx);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136687.253313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uS-00029w-FU; Fri, 04 Jun 2021 06:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136687.253313; Fri, 04 Jun 2021 06:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2uS-00029o-CL; Fri, 04 Jun 2021 06:02:20 +0000
Received: by outflank-mailman (input) for mailman id 136687;
 Fri, 04 Jun 2021 06:02:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2uQ-00029c-HA
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:18 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddede7e5-73e1-449e-8e80-11e019626b44;
 Fri, 04 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6434221A07;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2CF15118DD;
 Fri,  4 Jun 2021 06:02:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ABwPCejBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddede7e5-73e1-449e-8e80-11e019626b44
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786536; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=FIMbOAmTIZD/8yxM+nY2bRVWI8IB3NGVj1EMJylNr+Y=;
	b=fuClDdf3Zt5CFCrW4QbLpfmMyw2zCJvImUw1OZRP/r679HrfWT50uyZM1B9aGu65o1xkFO
	1zKWr+xZRGEKUXbn3VNeqLAMtywWczee2+1LaUgx3DSTj6OhosOVnSm9/mKItK/TWlr9SO
	d0wEBgmJe3p+flYbLG4bhvcKicDxCyI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v2 0/6] tools/libs: add missing support of linear p2m_list, cleanup
Date: Fri,  4 Jun 2021 08:02:08 +0200
Message-Id: <20210604060214.14924-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a resend of V2 with the Acks folded in.

There are still some corners which don't support the linear p2m list
of PV guests, even though it was introduced back in Linux kernel 3.19
and has been mandatory for non-legacy versions of Xen since kernel
4.14.

This series adds support for the linear p2m list where it is missing
(colo support and "xl dump-core").

In theory it should be possible to merge the p2m list mapping code
from migration handling and core dump handling, but quite some cleanup
is needed before that is possible.

The first three patches fix real problems, so I've put them at the
start of the series, mainly in order to make backports easier.

The other three patches are only the first steps of cleanup. The main
work done here is to concentrate all p2m mapping in libxenguest instead
of having one implementation in each of libxenguest and libxenctrl.

Merging the two implementations should be rather easy, but it will
require touching many lines of code: the migration handling variant
seems to be the more mature one, but it heavily uses migration-stream
specific structures. So I'd like to have some confirmation that my way
of cleaning this up is the right one.

My idea would be to add the data needed for p2m mapping to struct
domain_info_context and replace the related fields in struct
xc_sr_context with a struct domain_info_context. Modifying the
interface of xc_core_arch_map_p2m() to take most current parameters
via struct domain_info_context would then enable the migration code to
use xc_core_arch_map_p2m() for mapping the p2m. Afterwards,
xc_core_arch_map_p2m() should look basically like the current
migration p2m mapping code.

Changes in V2:
- added missing #include in ocaml stub

Juergen Gross (6):
  tools/libs/guest: fix max_pfn setting in map_p2m()
  tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m
    table
  tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
  tools/libs: move xc_resume.c to libxenguest
  tools/libs: move xc_core* from libxenctrl to libxenguest
  tools/libs/guest: make some definitions private to libxenguest

 tools/include/xenctrl.h                       |  63 ---
 tools/include/xenguest.h                      |  63 +++
 tools/libs/ctrl/Makefile                      |   4 -
 tools/libs/ctrl/xc_core_x86.c                 | 223 ----------
 tools/libs/ctrl/xc_domain.c                   |   2 -
 tools/libs/ctrl/xc_private.h                  |  43 +-
 tools/libs/guest/Makefile                     |   4 +
 .../libs/{ctrl/xc_core.c => guest/xg_core.c}  |   7 +-
 .../libs/{ctrl/xc_core.h => guest/xg_core.h}  |  15 +-
 .../xc_core_arm.c => guest/xg_core_arm.c}     |  31 +-
 .../xc_core_arm.h => guest/xg_core_arm.h}     |   0
 tools/libs/guest/xg_core_x86.c                | 399 ++++++++++++++++++
 .../xc_core_x86.h => guest/xg_core_x86.h}     |   0
 tools/libs/guest/xg_dom_boot.c                |   2 +-
 tools/libs/guest/xg_domain.c                  |  19 +-
 tools/libs/guest/xg_offline_page.c            |   2 +-
 tools/libs/guest/xg_private.h                 |  16 +-
 .../{ctrl/xc_resume.c => guest/xg_resume.c}   |  69 +--
 tools/libs/guest/xg_sr_save_x86_pv.c          |   2 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   1 +
 20 files changed, 545 insertions(+), 420 deletions(-)
 delete mode 100644 tools/libs/ctrl/xc_core_x86.c
 rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
 rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (92%)
 rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (72%)
 rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
 create mode 100644 tools/libs/guest/xg_core_x86.c
 rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)
 rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (80%)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:02:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136693.253379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2ug-000452-M5; Fri, 04 Jun 2021 06:02:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136693.253379; Fri, 04 Jun 2021 06:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp2ug-00044r-Ig; Fri, 04 Jun 2021 06:02:34 +0000
Received: by outflank-mailman (input) for mailman id 136693;
 Fri, 04 Jun 2021 06:02:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp2uf-00029d-L9
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:02:33 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d3de1b3-3223-44a0-aaa8-1c7101803c80;
 Fri, 04 Jun 2021 06:02:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id CDB1A1FD4A;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 9BE5B118DD;
 Fri,  4 Jun 2021 06:02:17 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id QEfCJOnBuWCyGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 06:02:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d3de1b3-3223-44a0-aaa8-1c7101803c80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622786537; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8dbnJVTwfYU2q9IE6Ab5h4nWrKb9atrgQLfiOOr8PBc=;
	b=ODhxW/t+SwuqmSbFWx4OwmnrItQONZiqU56yh+iFlKM0V3PVS6fKnSOFL/79EThVBfUvnY
	ijBB65YG1ryNhGGzI2/4D17mMsG0WYq5UFHaDoZgrV1cKHWLsJApw/xR2M89iLZn7BDucM
	MHfWONi29pi3L5KPexMkppxLkhdxRJU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 6/6] tools/libs/guest: make some definitions private to libxenguest
Date: Fri,  4 Jun 2021 08:02:14 +0200
Message-Id: <20210604060214.14924-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
References: <20210604060214.14924-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some definitions are now used only in libxenguest. Move them from
libxenctrl over to libxenguest.

Remove an unused macro.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
 tools/libs/ctrl/xc_private.h   | 32 --------------------------------
 tools/libs/guest/xg_core.h     |  2 +-
 tools/libs/guest/xg_core_x86.c | 16 +++++++++++++++-
 tools/libs/guest/xg_private.h  | 16 +++++++++++++++-
 tools/libs/guest/xg_resume.c   |  2 +-
 5 files changed, 32 insertions(+), 36 deletions(-)

diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index dff0f0289b..3e299b943f 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -65,38 +65,6 @@ struct iovec {
 
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 
-#define GET_FIELD(_p, _f, _w) (((_w) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))
-
-#define SET_FIELD(_p, _f, _v, _w) do {          \
-    if ((_w) == 8)                              \
-        (_p)->x64._f = (_v);                    \
-    else                                        \
-        (_p)->x32._f = (_v);                    \
-} while (0)
-
-/* XXX SMH: following skanky macros rely on variable p2m_size being set */
-/* XXX TJD: also, "guest_width" should be the guest's sizeof(unsigned long) */
-
-struct domain_info_context {
-    unsigned int guest_width;
-    unsigned int p2m_frames;
-    unsigned long p2m_size;
-};
-
-/* Number of xen_pfn_t in a page */
-#define FPP             (PAGE_SIZE/(dinfo->guest_width))
-
-/* Number of entries in the pfn_to_mfn_frame_list_list */
-#define P2M_FLL_ENTRIES (((dinfo->p2m_size)+(FPP*FPP)-1)/(FPP*FPP))
-
-/* Number of entries in the pfn_to_mfn_frame_list */
-#define P2M_FL_ENTRIES  (((dinfo->p2m_size)+FPP-1)/FPP)
-
-/* Size in bytes of the pfn_to_mfn_frame_list     */
-#define P2M_GUEST_FL_SIZE ((P2M_FL_ENTRIES) * (dinfo->guest_width))
-#define P2M_TOOLS_FL_SIZE ((P2M_FL_ENTRIES) *                           \
-                           max_t(size_t, sizeof(xen_pfn_t), dinfo->guest_width))
-
 #define DECLARE_DOMCTL struct xen_domctl domctl
 #define DECLARE_SYSCTL struct xen_sysctl sysctl
 #define DECLARE_PHYSDEV_OP struct physdev_op physdev_op
diff --git a/tools/libs/guest/xg_core.h b/tools/libs/guest/xg_core.h
index f07584aaa6..aaca9e0a8b 100644
--- a/tools/libs/guest/xg_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -21,7 +21,7 @@
 #define XC_CORE_H
 
 #include "xen/version.h"
-#include "xc_private.h"
+#include "xg_private.h"
 #include "xen/libelf/elfstructs.h"
 
 /* section names */
diff --git a/tools/libs/guest/xg_core_x86.c b/tools/libs/guest/xg_core_x86.c
index 09f5d696ce..61106b98b8 100644
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -18,10 +18,24 @@
  */
 
 #include <inttypes.h>
-#include "xc_private.h"
+#include "xg_private.h"
 #include "xg_core.h"
 #include <xen/hvm/e820.h>
 
+/* Number of xen_pfn_t in a page */
+#define FPP             (PAGE_SIZE/(dinfo->guest_width))
+
+/* Number of entries in the pfn_to_mfn_frame_list_list */
+#define P2M_FLL_ENTRIES (((dinfo->p2m_size)+(FPP*FPP)-1)/(FPP*FPP))
+
+/* Number of entries in the pfn_to_mfn_frame_list */
+#define P2M_FL_ENTRIES  (((dinfo->p2m_size)+FPP-1)/FPP)
+
+/* Size in bytes of the pfn_to_mfn_frame_list     */
+#define P2M_GUEST_FL_SIZE ((P2M_FL_ENTRIES) * (dinfo->guest_width))
+#define P2M_TOOLS_FL_SIZE ((P2M_FL_ENTRIES) * \
+                           max_t(size_t, sizeof(xen_pfn_t), dinfo->guest_width))
+
 int
 xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
                               unsigned long pfn)
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 25e46d7ce1..03d765da21 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -42,6 +42,21 @@
 #endif
 #endif
 
+#define GET_FIELD(_p, _f, _w) (((_w) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))
+
+#define SET_FIELD(_p, _f, _v, _w) do {          \
+    if ((_w) == 8)                              \
+        (_p)->x64._f = (_v);                    \
+    else                                        \
+        (_p)->x32._f = (_v);                    \
+} while (0)
+
+struct domain_info_context {
+    unsigned int guest_width;
+    unsigned int p2m_frames;
+    unsigned long p2m_size;
+};
+
 struct xc_dom_loader {
     const char *name;
     /* Sadly the error returns from these functions are not consistent: */
@@ -139,7 +154,6 @@ static inline xen_pfn_t xc_pfn_to_mfn(xen_pfn_t pfn, xen_pfn_t *p2m,
 /* Masks for PTE<->PFN conversions */
 #define MADDR_BITS_X86  ((dinfo->guest_width == 8) ? 52 : 44)
 #define MFN_MASK_X86    ((1ULL << (MADDR_BITS_X86 - PAGE_SHIFT_X86)) - 1)
-#define MADDR_MASK_X86  (MFN_MASK_X86 << PAGE_SHIFT_X86)
 
 int pin_table(xc_interface *xch, unsigned int type, unsigned long mfn,
               uint32_t dom);
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index d201c1488d..77e2451a3c 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -13,7 +13,7 @@
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include "xc_private.h"
+#include "xg_private.h"
 #include "xenguest.h"
 
 #if defined(__i386__) || defined(__x86_64__)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:37:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136739.253391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3SZ-0000kZ-Gs; Fri, 04 Jun 2021 06:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136739.253391; Fri, 04 Jun 2021 06:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3SZ-0000kS-D4; Fri, 04 Jun 2021 06:37:35 +0000
Received: by outflank-mailman (input) for mailman id 136739;
 Fri, 04 Jun 2021 06:37:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EDnB=K6=epam.com=prvs=67890d1f51=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lp3SX-0000kM-Kf
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:37:33 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c64b7702-348d-4527-8fad-c4349ede254f;
 Fri, 04 Jun 2021 06:37:32 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 1546aAsU010423; Fri, 4 Jun 2021 06:37:31 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 38ydek88g3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 04 Jun 2021 06:37:30 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB7346.eurprd03.prod.outlook.com (2603:10a6:20b:26e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Fri, 4 Jun
 2021 06:37:27 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.023; Fri, 4 Jun 2021
 06:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c64b7702-348d-4527-8fad-c4349ede254f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YpBsH9cS9RUCglzB8fb2ZrCrUHiHongWMEfJ2mCgZ/OkWgieafxjynKMy2QxvyVlnjVsN9SXaVUTqff62hwb7pMjkb+wFz/mEwyeJP14lgkG5m8b6NfX4PNAedqnnOTwCfB0dvmEhikDe0W+R6Ajh0G/mUENcTzWAsepU0rWuLk/XyqFU8/lHGaHyifTOLTih3kujRTS0e+HFsDFMQf+fDf2Ae1unsxIyvnGFD4oMP+IccTxv7WnSuThyvQV99InQu98lAOJZ6s16b2eQs3b81SugNaUentZNZwoW/M38S10ccGaz/lo05M9kjDyTWd8MZHRsoDDsT8HJtY+S0cmvg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jMV+fFvjKlYiGufHj/nr+yshgJCwxjCERrqo4V6wD0k=;
 b=W6onknEYgr6wssnV8rtVym7ge9fKDgc0yWRfgloK01bsiZKlrxWUK69vneWX52O40GofFpgMJYNP9tsxuMI5A2va0gLYszLzIrgvmU5T8vmCePNkSXb1LehD2L2UbdrPf5eMOGdM2LZqLUbxjKe0N7fNMA9rQBNCyZttWM4ChReLwAT0nEcCtMF7skLflsHg+F8l6sdLw5j8xN2G3GoZXg9TVM+Ec+Pow6KRA5zf8FWj+Qzrb3SV0xXjJzTQS249HrnT/cEf5XoPE5JQVjX81WdITIIdiVDGlpOstlWe7dV/gMConbzPfYnBNQBeGhGjQhCD+kMJc7YyFhJnEwvK6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jMV+fFvjKlYiGufHj/nr+yshgJCwxjCERrqo4V6wD0k=;
 b=n0fcAJ+NRXWjDk0Zzjpb9n20bYyYzw3BfVJpa4lb4e4+Lb1GmYgEuqpZ5Kid+2NLVeN5W7MB4R18HAtR1bhLg0b5cTvqEuPXexENGQplDCfBmOZyBf6bru7MDZNUN25spggxLFJGWhis7ndMLJ1YXetD5VU3ZuKDu3O5EBt8bGge9fuSmeOpVPGqItmUm8L14Gq9bGgaZwZhQ96/xdfNleC238oUBM5kiqbgCSwxYXXDr88T+5TxWY7nZlzWU1XZrNTx2AwvhvFfWWpT6KSkJhsVemz4LVRmDPkwyyTE5ZEZpOuc+DaeHwVxp5QZt9pb24dxqmmqS6GWV+c4A+cRrg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: AQHXWQwW/O3obtZgqkmpGRvt7ZNDkQ==
Date: Fri, 4 Jun 2021 06:37:27 +0000
Message-ID: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: c4d49e6b-6d87-4f66-b7bb-08d9272338ac
x-ms-traffictypediagnostic: AM9PR03MB7346:
x-microsoft-antispam-prvs: 
 <AM9PR03MB73466C2C1DF01BE348B57161E73B9@AM9PR03MB7346.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F2E457ADED1D354A83D5281BC0EA4180@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c4d49e6b-6d87-4f66-b7bb-08d9272338ac
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jun 2021 06:37:27.6881
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: HQ1daDt9J+09kWzwvSKzyl7kP0bayHHtjI+s2XurzbKAyCeVXsT3wiQ6b4LzFFcRSM0b6wV4+FEFcVBB4HajvnzOtbqdq8oufeJMN5680B2/TPOn9V+Qh5cKHsWc5LfD
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7346
X-Proofpoint-GUID: YUW_1-TV_98wmwVQ9J6W-cdO_KCK-VH5
X-Proofpoint-ORIG-GUID: YUW_1-TV_98wmwVQ9J6W-cdO_KCK-VH5
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 priorityscore=1501
 suspectscore=0 bulkscore=0 spamscore=0 phishscore=0 adultscore=0
 mlxlogscore=863 clxscore=1011 malwarescore=0 lowpriorityscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106040051

SGksIGFsbCENCg0KV2hpbGUgd29ya2luZyBvbiBQQ0kgU1ItSU9WIHN1cHBvcnQgZm9yIEFSTSBJ
IHN0YXJ0ZWQgcG9ydGluZyBbMV0gb24gdG9wDQpvZiBjdXJyZW50IFBDSSBvbiBBUk0gc3VwcG9y
dCBbMl0uIFRoZSBxdWVzdGlvbiBJIGhhdmUgZm9yIHRoaXMgc2VyaWVzDQppcyBpZiB3ZSByZWFs
bHkgbmVlZCBlbXVsYXRpbmcgU1ItSU9WIGNvZGUgaW4gWGVuPw0KDQpJIGhhdmUgaW1wbGVtZW50
ZWQgYSBQb0MgZm9yIFNSLUlPViBvbiBBUk0gWzNdIChwbGVhc2Ugc2VlIHRoZSB0b3AgMiANCnBh
dGNoZXMpDQphbmQgaXQgIndvcmtzIGZvciBtZSI6IE1TSSBzdXBwb3J0IGlzIHN0aWxsIFdJUCwg
YnV0IEkgd2FzIGFibGUgdG8gc2VlIHRoYXQNClZGcyBhcmUgcHJvcGVybHkgc2VlbiBpbiB0aGUg
guest and BARs are properly programmed in p2m.

What I can't fully understand is whether we can live with this approach or
whether there are use-cases I can't see.

Previously I've been told that this approach might not work on FreeBSD
running as Domain-0, but it seems that "PCI Passthrough is not supported
(Xen/FreeBSD)" anyway [4].

I also see that the ACRN hypervisor [5] implements SR-IOV inside the
hypervisor itself, which makes me think I am missing some important
use-case on x86.

I would like to ask for any advice with respect to SR-IOV in the
hypervisor: any pointers to documentation or any other source which might
be handy in deciding whether we need the SR-IOV complexity in Xen.

And it does bring complexity, if you compare [1] and [3]...

A bit of technical detail on the approach implemented [3]:
1. We rely on PHYSDEVOP_pci_device_add.
2. We rely on Domain-0 SR-IOV drivers to instantiate VFs.
3. BARs are programmed in p2m, implementing the guest view of those (we
   have extended the vPCI code for that, and this path is used for both
   "normal" devices and VFs in the same way).
4. No need to trap PCI_SRIOV_CTRL.
5. No need to wait 100ms in Xen before attempting to access VF registers
   when enabling virtual functions on the PF - this is handled by
   Domain-0 itself.

Thank you in advance,
Oleksandr

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
[2] https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
[3] https://github.com/xen-troops/xen/commits/pci_phase2
[4] https://wiki.freebsd.org/Xen
[5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html
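For readers unfamiliar with PHYSDEVOP_pci_device_add: below is a minimal,
self-contained sketch of the hypercall argument Domain-0 fills in when it
reports a freshly created VF to Xen. The struct layout follows Xen's public
physdev.h; the BDF values, the describe_vf() helper, and the omitted
HYPERVISOR_physdev_op() call are illustrative assumptions, not code from [3].

```c
#include <stdint.h>

/* Sketch of the PHYSDEVOP_pci_device_add argument, reproduced from Xen's
 * public physdev.h so this example is self-contained. */
#define XEN_PCI_DEV_EXTFN  0x1
#define XEN_PCI_DEV_VIRTFN 0x2

struct physdev_pci_device_add {
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
    uint32_t flags;
    struct {
        uint8_t bus;
        uint8_t devfn;
    } physfn;            /* only meaningful with XEN_PCI_DEV_VIRTFN */
};

/* Hypothetical helper: describe a VF at 0000:03:10.0 whose parent PF is
 * 0000:03:00.0.  Dom0 would pass the result to
 * HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add) after the PF
 * driver instantiated the VF; the hypercall itself is omitted here. */
struct physdev_pci_device_add describe_vf(void)
{
    struct physdev_pci_device_add add = {
        .seg   = 0,
        .bus   = 0x03,
        .devfn = (0x10 << 3) | 0,        /* slot 0x10, function 0 */
        .flags = XEN_PCI_DEV_VIRTFN,     /* mark it as a VF ... */
        .physfn = {                      /* ... and name its PF */
            .bus   = 0x03,
            .devfn = (0x00 << 3) | 0,
        },
    };
    return add;
}
```

This is the Dom0 side of steps 1 and 2 above: the SR-IOV driver creates the
VF, and a notification with XEN_PCI_DEV_VIRTFN set tells Xen which PF the
new function belongs to.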


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:39:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:39:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136745.253401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3UG-0001Ru-W9; Fri, 04 Jun 2021 06:39:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136745.253401; Fri, 04 Jun 2021 06:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3UG-0001Rn-TA; Fri, 04 Jun 2021 06:39:20 +0000
Received: by outflank-mailman (input) for mailman id 136745;
 Fri, 04 Jun 2021 06:39:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e0/s=K6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lp3UG-0001Rh-23
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 06:39:20 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f45e93a6-59c6-4002-8277-f59c3858cf45;
 Fri, 04 Jun 2021 06:39:18 +0000 (UTC)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2057.outbound.protection.outlook.com [104.47.6.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-27-Yyu30a40MPygLc64FXWouQ-1; Fri, 04 Jun 2021 08:39:16 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4192.eurprd04.prod.outlook.com (2603:10a6:803:4c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Fri, 4 Jun
 2021 06:39:14 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.024; Fri, 4 Jun 2021
 06:39:14 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR02CA0194.eurprd02.prod.outlook.com (2603:10a6:20b:28e::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20 via Frontend
 Transport; Fri, 4 Jun 2021 06:39:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f45e93a6-59c6-4002-8277-f59c3858cf45
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622788757;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nsRO0AHjPmE+cPy7mRipsIibB3oWyLfHn/5+4yugeP8=;
	b=IrFunl8jimCfDNP1o3IAqTEMRPQJ8AFtQHoZD/lCem4Q03q+64PZ1rU3OGZcfrZZkiPZ6j
	yL1bAUohsUquN3KIWZgTBZGjc95HUQupZwhSiWwXdQhRGji/VDVMAb4D2+kbY5khyr048I
	aldB0xJYjYOOc1se9oDVBvBzwX4PYbY=
X-MC-Unique: Yyu30a40MPygLc64FXWouQ-1
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-5-jandryuk@gmail.com>
 <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
 <CAKf6xps4NuBxMpgCo_duWU1ZXB8x8B8uszb3uNyb6kABxUhNHA@mail.gmail.com>
 <2c3400e8-8236-8558-08e4-37c8b1494de7@suse.com>
 <CAKf6xpvCkzHOZsBY2yMQSVxq844_muaAaKd-JZUQfd7UCrXLVg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2d88b1db-3064-63e1-e197-9318624e6cc6@suse.com>
Date: Fri, 4 Jun 2021 08:39:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <CAKf6xpvCkzHOZsBY2yMQSVxq844_muaAaKd-JZUQfd7UCrXLVg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR02CA0194.eurprd02.prod.outlook.com
 (2603:10a6:20b:28e::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 03.06.2021 13:55, Jason Andryuk wrote:
> On Fri, May 28, 2021 at 2:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 27.05.2021 20:50, Jason Andryuk wrote:
>>> On Wed, May 26, 2021 at 11:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 03.05.2021 21:28, Jason Andryuk wrote:
>>>>> +    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
>>>>> +                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
>>>>> +    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
>>>>> +    {
>>>>> +        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
>>>>> +            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
>>>>> +
>>>>> +        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);
>>>>
>>>> Missing "else" above here?
>>>
>>> Are unbalanced braces acceptable or must they be balanced?  Is this acceptable:
>>> if ()
>>>   one;
>>> else {
>>>   two;
>>>   three;
>>> }
>>
>> Yes, it is. But I don't see how the question relates to my comment.
>> All that needs to go in the else's body is the hwp_verbose().
> 
> 'val' shouldn't be used to set features when the rdmsr fails, so the
> following code needs to be within the else.  Unless you want to rely
> on a failed rdmsr returning 0.

It is intentional for rdmsr_safe() to hand back a zero value when the
access faults, so I certainly think you may rely on this.
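To make that guarantee concrete, here is a minimal user-space sketch (not
hypervisor code): rdmsr_safe_mock() stands in for Xen's rdmsr_safe(),
simulating a faulting access that zeroes the value, and
probe_fast_uncore_caps() mirrors the patch's control flow without an else
branch. The function names and the MSR index are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for Xen's rdmsr_safe(): on a faulting access it returns
 * non-zero and deliberately zeroes the value, which is the behaviour
 * callers may rely on. */
static int rdmsr_safe_mock(uint32_t msr, uint64_t *val)
{
    (void)msr;
    *val = 0;    /* value is zeroed on fault */
    return -1;   /* simulate the faulting read */
}

/* Mirrors the structure under discussion: log the failure, then keep
 * using 'val' with no else branch.  Returns the value the subsequent
 * feature-setup code would consume. */
uint64_t probe_fast_uncore_caps(void)
{
    uint64_t val = 0xdeadbeefULL;  /* garbage before the call */

    if (rdmsr_safe_mock(0x65e /* illustrative MSR index */, &val))
        fprintf(stderr, "error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");

    return val;  /* 0 on failure, so later use is benign */
}
```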

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 06:56:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 06:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136755.253413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3ka-0003hj-DU; Fri, 04 Jun 2021 06:56:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136755.253413; Fri, 04 Jun 2021 06:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3ka-0003hc-A9; Fri, 04 Jun 2021 06:56:12 +0000
Received: by outflank-mailman (input) for mailman id 136755;
 Fri, 04 Jun 2021 06:56:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e0/s=K6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lp3kY-0003hW-MN
 for xen-devel@lists.xen.org; Fri, 04 Jun 2021 06:56:10 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 762ea18e-ad9d-4dc5-95a3-4da4f45f1027;
 Fri, 04 Jun 2021 06:56:08 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-29--lNYfQM3MoyKLYZm7iOsuA-1; Fri, 04 Jun 2021 08:56:06 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.29; Fri, 4 Jun
 2021 06:56:04 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.024; Fri, 4 Jun 2021
 06:56:04 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0007.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.12 via Frontend Transport; Fri, 4 Jun 2021 06:56:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 762ea18e-ad9d-4dc5-95a3-4da4f45f1027
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622789767;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KJiwepwRqxdph7NGCMGqmw1+pSn8NxwUwhAHY0K4Kvk=;
	b=K+9tbIJGmS+ezIiY7t0Bj1YYOsxRhXPemPRwO+Rxl2KZGwYXwpHxd3Hmx6vc7QVEmwV373
	Y5VAT1HACFcnEVTk7mxHFcLVhiPNOJAcqmkJ7M8e6HcMJpz+LxR27nmikPlw4x6rUWMYR7
	luF3obhymrEEOT6PSZx5igSIE8II7nQ=
X-MC-Unique: -lNYfQM3MoyKLYZm7iOsuA-1
Authentication-Results: lists.xen.org; dkim=none (message not signed)
 header.d=none;lists.xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
To: AL13N <alien@rmail.be>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
 <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>
Date: Fri, 4 Jun 2021 08:56:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
In-Reply-To: <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0007.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::6) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 03.06.2021 18:01, AL13N wrote:
> Jan Beulich wrote on 2021-06-01 16:53:
>> On 01.06.2021 16:44, AL13N wrote:
>>> This mailing list is the correct place for the toolstack too? right?
>>
>> Yes.
> 
> So, what's the plan to fix this? Is the plan to fix the toolstack, or
> to put your patch in the kernel as a workaround?

The patch has already been put in the kernel, as said. It would be good
to know whether it actually has helped your case as well.

> Is there a way to make a regression test or unit test or something?

Would be nice, but may be a little difficult to arrange for in, say,
osstest.

> Does anyone have an idea on what patch would cause the regression?

Not me, but I'm also not a tool stack maintainer (nor an expert in any
way).

Jan

> That patch that I pointed out - could it be that one, or should I look
> at a specific file/line? I can't really just test all of the patches
> and/or combinations. I'm not really at home in the Xen code, and my
> single Xen server is production, so I can really only test on
> weekends...
> 
> AL13N
> 



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 07:10:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 07:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136761.253424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3yS-0005wK-F2; Fri, 04 Jun 2021 07:10:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136761.253424; Fri, 04 Jun 2021 07:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp3yS-0005wD-Ak; Fri, 04 Jun 2021 07:10:32 +0000
Received: by outflank-mailman (input) for mailman id 136761;
 Fri, 04 Jun 2021 07:10:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x6aI=K6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lp3yR-0005w6-AK
 for xen-devel@lists.xen.org; Fri, 04 Jun 2021 07:10:31 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 378d2af0-c382-4096-9183-40cc292bc29f;
 Fri, 04 Jun 2021 07:10:30 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 29A2C21A10;
 Fri,  4 Jun 2021 07:10:29 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id F390A118DD;
 Fri,  4 Jun 2021 07:10:28 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id VvFCOuTRuWDUNAAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 04 Jun 2021 07:10:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 378d2af0-c382-4096-9183-40cc292bc29f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622790629; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=9EfjWUkQUOb60GN9NSBR3bPAh69zGtr9vLU1ZSX0+M8=;
	b=W4Y9Bz6mi61tKpdcwQr5RgmQ/PRTrSHhgDxDCpJhOn0aG4EX2lYDWsv5ACSlyqKvLXJegS
	EhiGj99sd6xaK6Zpe+4RL/58UBtVOrFj/oOMe2nSTon282/RpzUPbe9Fpjx5S4f0+9pcZU
	YWcmxjxCIsF/7r+4zL08ElJN1imrvYg=
To: Jan Beulich <jbeulich@suse.com>, AL13N <alien@rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>, "Durrant, Paul" <pdurrant@amazon.com>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
 <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
 <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
Message-ID: <23d90cf3-2bc8-f6f6-449d-1741ff4261ec@suse.com>
Date: Fri, 4 Jun 2021 09:10:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="2LnsbVJj3UtyLWLg86IVKQM5yc7IXNz6h"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--2LnsbVJj3UtyLWLg86IVKQM5yc7IXNz6h
Content-Type: multipart/mixed; boundary="yUIQOeA5GkctKYl9JEe1VlbStF28ZkT2H";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, AL13N <alien@rmail.be>
Cc: Xen-devel <xen-devel@lists.xen.org>, "Durrant, Paul" <pdurrant@amazon.com>
Message-ID: <23d90cf3-2bc8-f6f6-449d-1741ff4261ec@suse.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
 <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
 <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>
In-Reply-To: <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>

--yUIQOeA5GkctKYl9JEe1VlbStF28ZkT2H
Content-Type: multipart/mixed;
 boundary="------------F10E9242EBCD1C77246CF79B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F10E9242EBCD1C77246CF79B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 04.06.21 08:56, Jan Beulich wrote:
> On 03.06.2021 18:01, AL13N wrote:
>> Jan Beulich wrote on 2021-06-01 16:53:
>>> On 01.06.2021 16:44, AL13N wrote:
>>>> This mailing list is the correct place for the toolstack too? right?
>>>
>>> Yes.
>>
>> So, what's the plan to fix this? Is the plan to fix the toolstack, or
>> to put your patch in the kernel as a workaround?
>
> The patch has already been put in the kernel, as said. It would be good
> to know whether it actually has helped your case as well.
>
>> Is there a way to make a regression test or unit test or something?
>
> Would be nice, but may be a little difficult to arrange for in, say,
> osstest.
>
>> Does anyone have an idea on what patch would cause the regression?
>
> Not me, but I'm also not a tool stack maintainer (nor an expert in any
> way).

There has been a large series by Paul Durrant [1] making heavy
modifications in this area.

Juergen

[1]: https://lists.xen.org/archives/html/xen-devel/2020-11/msg01680.html


--------------F10E9242EBCD1C77246CF79B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F10E9242EBCD1C77246CF79B--

--yUIQOeA5GkctKYl9JEe1VlbStF28ZkT2H--

--2LnsbVJj3UtyLWLg86IVKQM5yc7IXNz6h
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC50eQFAwAAAAAACgkQsN6d1ii/Ey8N
AAf/eFWb6WX0BxK8n6rKeo9McCeOhNE3E7IlVXaNBKEyD+iA4tTZdjJIpnlEtkqG/ve2ZRqpmskh
l3wdTHuaWfQx1FH3HfKxRn+zmm6abEC6y+TJUO/5blGuybhtlCrUrdf1NjmbwWVqBfCONrTBVBkU
K7B7/aIxqup8EM63tK76nAXA0HSGixZnvFqZ/OioaXPFVNXXt3pJ+pMLZqDEVZc8eOIkZTwhwa55
CbzhebmWVSDmwNXhXpMrWqC8yZwmV/zBzDWKMmUpBfkGtZoNX4jNXJ1//GL3g2ugm5magujJHqdM
ikLPeTIyt7Fs05eXD9v2qzUerZBIogiWE77W2oaCKg==
=UFJR
-----END PGP SIGNATURE-----

--2LnsbVJj3UtyLWLg86IVKQM5yc7IXNz6h--


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 08:14:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 08:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136773.253435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp4yO-0003lc-5K; Fri, 04 Jun 2021 08:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136773.253435; Fri, 04 Jun 2021 08:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp4yO-0003lV-20; Fri, 04 Jun 2021 08:14:32 +0000
Received: by outflank-mailman (input) for mailman id 136773;
 Fri, 04 Jun 2021 08:14:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp4yM-0003lL-Ss; Fri, 04 Jun 2021 08:14:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp4yM-00081u-Lz; Fri, 04 Jun 2021 08:14:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp4yM-0008CB-AY; Fri, 04 Jun 2021 08:14:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lp4yM-0003li-A6; Fri, 04 Jun 2021 08:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2sqPxd+IiLvN1nat1f3rs5G+foHCSxSZ0deAwjZJ6aU=; b=x2/8RL9zmGkf6OeB0dQgmFfE6V
	U2gYZ03ov5AEm/v/gW+lC2uJ8/emNz/lknhe2XWBFisTFbg1EJKHO435I6fNVQMxZyn7sw+cmyZeN
	bmbmW/UV1QitgDz6VcJg3/fTZVmD/A0spxIZ5CrbekoheleY9D2yh1qJIvXnr/1Rtoq0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162356-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162356: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=453d9c61dd5681159051c6e4d07e7b2633de2e70
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 08:14:30 +0000

flight 162356 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162356/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                453d9c61dd5681159051c6e4d07e7b2633de2e70
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  287 days
Failing since        152659  2020-08-21 14:07:39 Z  286 days  531 attempts
Testing same since   162356  2021-06-04 00:39:31 Z    0 days    1 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 168469 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 08:37:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 08:37:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136780.253449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp5Kd-00062v-1J; Fri, 04 Jun 2021 08:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136780.253449; Fri, 04 Jun 2021 08:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp5Kc-00062o-UP; Fri, 04 Jun 2021 08:37:30 +0000
Received: by outflank-mailman (input) for mailman id 136780;
 Fri, 04 Jun 2021 08:37:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9zHp=K6=rmail.be=alien@srs-us1.protection.inumbo.net>)
 id 1lp5Ka-00062i-Sd
 for xen-devel@lists.xen.org; Fri, 04 Jun 2021 08:37:28 +0000
Received: from mail.rmail.be (unknown [85.234.218.189])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fffd5c55-f3f7-48d9-8bb5-850de34fc4cc;
 Fri, 04 Jun 2021 08:37:25 +0000 (UTC)
Received: from mail.rmail.be (localhost [127.0.0.1])
 by mail.rmail.be (Postfix) with ESMTP id D7B46B1534D;
 Fri,  4 Jun 2021 10:37:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fffd5c55-f3f7-48d9-8bb5-850de34fc4cc
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
Date: Fri, 04 Jun 2021 10:37:24 +0200
From: AL13N <alien@rmail.be>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, "Durrant, Paul" <pdurrant@amazon.com>
Subject: Re: pci passthrough issue introduced between 4.14.1 and 4.15.0
In-Reply-To: <23d90cf3-2bc8-f6f6-449d-1741ff4261ec@suse.com>
References: <6ccb04f2d93be6089b049df1f94a91dd@mail.rmail.be>
 <e9a3f7a8-7bf2-4f0a-cc25-d8ce015df1f2@suse.com>
 <a7c0e9b0cdd8f9e709abc329c7f6239c@mail.rmail.be>
 <b5ff15fc-ec3b-6b48-3d15-7de29fa5b2aa@suse.com>
 <175befe0e853565761e51f07b79c49cf@mail.rmail.be>
 <552b4348-1c52-ce6b-9001-a144c7147a7c@suse.com>
 <0100caba62175123c63f0df4749a8c88@mail.rmail.be>
 <8a572357-e743-80a6-fed6-3c4999b986ec@suse.com>
 <23d90cf3-2bc8-f6f6-449d-1741ff4261ec@suse.com>
Message-ID: <298871527ff81b658bf959551ae65235@mail.rmail.be>
X-Sender: alien@rmail.be
User-Agent: Roundcube Webmail/1.0.9-1.2.mga5

Juergen Gross wrote on 2021-06-04 09:10:
> On 04.06.21 08:56, Jan Beulich wrote:
>> On 03.06.2021 18:01, AL13N wrote:
>>> Jan Beulich wrote on 2021-06-01 16:53:
>>>> On 01.06.2021 16:44, AL13N wrote:
>>>>> This mailing list is the correct place for the toolstack too? 
>>>>> right?
>>>> 
>>>> Yes.
>>> 
>>> So, what's the plan to fix this? is the plan to fix the toolstack? or
>>> put your patch in kernel to kinda workaround it?
>> 
>> The patch has already been put in the kernel, as said. It would be 
>> good
>> to know whether it actually has helped your case as well.
>> 
>>> Is there a way to make a regression test or unit test or something?
>> 
>> Would be nice, but may be a little difficult to arrange for in, say,
>> osstest.
>> 
>>> Does anyone have an idea on what patch would cause the regression?
>> 
>> Not me, but I'm also not a tool stack maintainer (nor expert in any
>> way).
> 
> There has been a large series by Paul Durrant [1] making heavy
> modifications in this area.
> 
> Juergen
> 
> [1]: 
> https://lists.xen.org/archives/html/xen-devel/2020-11/msg01680.html

Hmm, after a quick look-through, nothing stands out to me; except maybe: 
if the PCI list gets freed after the first add, that would prevent the 
other PCI devices from being added.
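
To make the idea concrete, here is a purely hypothetical Python sketch of 
that failure mode (not the actual libxl code; the function and device 
names are made up): if the shared device list is freed/cleared after the 
first add, the loop only ever hands one device to the backend, even 
though all three were visible up front.

```python
def add_all_devices(devices):
    """Buggy variant: the shared list is cleared after the first add,
    so only the first device is ever handed to the backend."""
    added = []
    while devices:
        dev = devices.pop(0)
        added.append(dev)   # the device that actually gets added
        devices.clear()     # bug: list freed/cleared too early
    return added

# Three devices go in, but only the first one survives:
print(add_all_devices(["0000:03:00.0", "0000:04:00.0", "0000:05:00.1"]))
```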

Could anyone explain, or point me to, the place where the toolstack adds 
PCI devices during xl create vs xl pci-add?

I'm circling back to the logs of xl create, which show 3 entries of 
"Adding new pci device to xenstore" but only one "Creating pci backend". 
While that is normal of course, it leaves 2 possibilities I can see for 
ending up with only 1 device:

I'm looking at this function: 
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libs/light/libxl_pci.c;h=1a1c2630803b5b3b4e07f093b688e0fd5780e745;hb=e25aa9939ae0cd8317605be3d5c5611b76bc4ab4#l134 
.

possibility 1:
the xs transaction at line 209 does not get called, which I presume is 
what would add the device to xs.

possibility 2:
the xs transaction does get called, but by that time the other end has 
already finished and thus does not look at the other devices in xs?

Since xl pci-list actually does show all 3, and I see no errors, I would 
presume that for possibility 1 it can only really be line 201; but since 
this is an xl create, I'm assuming "starting" is true in this case, which 
means no lock and that line does not get called? (There is this weird 
thing where a transaction is committed and then aborted, though.) So I 
guess possibility 1 is a no-go?
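
For what it's worth, the commit-then-abort sequence can be modelled like 
this (a minimal Python sketch under assumed xenstore-style transaction 
semantics — optimistic commit with retry on conflict — not the real 
libxl/xenstore API; all class and function names here are invented for 
illustration). Aborting a transaction that was already committed is a 
harmless no-op, which would explain an unconditional abort on the exit 
path looking weird but being safe.

```python
class Store:
    """Toy key-value store with a version counter for conflict detection."""
    def __init__(self):
        self.data = {}
        self.version = 0

class Transaction:
    """Optimistic transaction: buffers writes, commits only if nobody
    else committed since it started."""
    def __init__(self, store):
        self.store = store
        self.snapshot_version = store.version
        self.writes = {}

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        if self.store.version != self.snapshot_version:
            return False  # conflict: caller must retry (like EAGAIN)
        self.store.data.update(self.writes)
        self.store.version += 1
        return True

    def abort(self):
        # No-op after a successful commit: this is why "commit, then
        # abort on the way out" is a safe (if odd-looking) pattern.
        self.writes = {}

def add_device(store, path, value):
    """Retry loop: start a transaction, write, commit; on conflict,
    abort and redo the whole sequence."""
    while True:
        t = Transaction(store)
        t.write(path, value)
        try:
            if t.commit():
                return
        finally:
            t.abort()  # safe even when commit already succeeded
```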

But possibility 2 would mean that, unless there's another layer, pcifront 
on the domU side would be faster than this function being called 3 
times... which seems odd (unless this all gets done before the domU is 
even started, which does not seem likely).

Of course this is all an amateur's point of view; I have no expertise 
with any of the Xen code at all...

Well, I hope someone can take a look at this and/or help me out, please.

Maarten


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 08:45:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 08:45:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136786.253460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp5Rr-0007TJ-QQ; Fri, 04 Jun 2021 08:44:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136786.253460; Fri, 04 Jun 2021 08:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp5Rr-0007TC-NJ; Fri, 04 Jun 2021 08:44:59 +0000
Received: by outflank-mailman (input) for mailman id 136786;
 Fri, 04 Jun 2021 08:44:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp5Rr-0007T2-7w; Fri, 04 Jun 2021 08:44:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp5Rr-00004s-1m; Fri, 04 Jun 2021 08:44:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp5Rq-0001Nn-Mw; Fri, 04 Jun 2021 08:44:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lp5Rq-0001wm-MW; Fri, 04 Jun 2021 08:44:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R2q5JpvTsfjGZh9arshjV/KTKnMLhSBaZn0XBqr93/Q=; b=JXJDFKMMJZDqYylKQmVnl1Rg+q
	MXUOPpZ+U97bErGgO28+bZRlKFvgae0Iahn5S4Qx84HjwpyaYRUuh6mL5598H4lR3x7dVVsDLklWc
	8djFWiRq+lFEOw44meDCDeX741AQD466jF9oPPiM8jQCg44REj5CihHgbGtCI0uNNz3M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162360-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162360: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=86e8f371399e8951ef915902b51809116f16c2db
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 08:44:58 +0000

flight 162360 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162360/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              86e8f371399e8951ef915902b51809116f16c2db
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  329 days
Failing since        151818  2020-07-11 04:18:52 Z  328 days  321 attempts
Testing same since   162345  2021-06-03 04:18:55 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60203 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 12:30:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 12:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136804.253474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp8y9-00026V-Rc; Fri, 04 Jun 2021 12:30:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136804.253474; Fri, 04 Jun 2021 12:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp8y9-00026N-Ma; Fri, 04 Jun 2021 12:30:33 +0000
Received: by outflank-mailman (input) for mailman id 136804;
 Fri, 04 Jun 2021 12:30:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp8y8-00026D-K5; Fri, 04 Jun 2021 12:30:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp8y8-0003sC-DE; Fri, 04 Jun 2021 12:30:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lp8y8-00048g-3O; Fri, 04 Jun 2021 12:30:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lp8y8-00015U-2s; Fri, 04 Jun 2021 12:30:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kiaKTKleSjq21ytk2wt2Ui5iFb/P74Q3oUFevCXoVxI=; b=RYGgMG2yYSHt0z31ofyAgNxEcr
	Dkv3mLSLnSAldgzSMcMHktFnb52SHBC+YQQ7feO99op53MY09rKMpe3hH5bOQdv0arTePINz11Hk/
	5+fGsA2NcFS2pIkIaynvbTcFg4wvuxRFYPWfqcLbDzQJYs25H1lftosso4dltNCC4xf4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162357-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162357: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 12:30:32 +0000

flight 162357 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162357/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           5 host-install(5)          broken pass in 162343

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162343 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162343 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162343
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162343
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162343
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162343
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162343
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162343
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162343
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162343
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162343
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162343
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162343
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162357  2021-06-04 01:51:39 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 13:15:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 13:15:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136837.253588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp9f6-0007Ln-CN; Fri, 04 Jun 2021 13:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136837.253588; Fri, 04 Jun 2021 13:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp9f6-0007Lg-8Y; Fri, 04 Jun 2021 13:14:56 +0000
Received: by outflank-mailman (input) for mailman id 136837;
 Fri, 04 Jun 2021 13:14:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=md+M=K6=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lp9f5-0007LK-Dh
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 13:14:55 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c41df7b8-db8c-4255-98f8-afb3425fe811;
 Fri, 04 Jun 2021 13:14:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c41df7b8-db8c-4255-98f8-afb3425fe811
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622812493;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=nZPLkxmv1q81OmzcdiJPjiQmdZprjLsJrdGSBk7iHtk=;
  b=WBx8uDmkbfIdrxZqF8b6+jfLUGcioTLJFOAGfE50HahoFTntaiyiyLFc
   rMh0v3oZWKQpFWjTYrszhSh/Ii3IU/mO4EeYmpt/AVmEr4rV2fKcKAzD9
   cH/N060+Jj2uJyoV5H1p6MqE8qy7/Na/LVGvq8CwN6HmhLYhyvrPbvoG+
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46957222
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,248,1616472000"; 
   d="scan'208";a="46957222"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nZPLkxmv1q81OmzcdiJPjiQmdZprjLsJrdGSBk7iHtk=;
 b=ZWIuy8Ozkh2khG8spyOQpXwEuo66uKswm4xMfottJWxH6lM4cspqd9npGCSZnrq7xVYp+H5WZse4zihPjaNmdeQzPyNkF49cvKyoMPd2ePvY66BavMpb02boEQTWkyhbVIVdh2zCCZMgyPiHOcpV/uR/c0BfaZ0GZyncfe6wbUU=
From: George Dunlap <George.Dunlap@citrix.com>
To: "security@xenproject" <security@xenproject.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Jann Horn <jannh@google.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Thread-Topic: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Thread-Index: AQHXQnYPkkGqZKBnHUKbkbouhpf7JKsEAkQA
Date: Fri, 4 Jun 2021 13:14:50 +0000
Message-ID: <91F55D14-00F6-4609-AEF6-2BA3D73408D2@citrix.com>
References: <20210506124752.65844-1-george.dunlap@citrix.com>
In-Reply-To: <20210506124752.65844-1-george.dunlap@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e83dd725-b2ff-48da-89ca-08d9275abbd9
x-ms-traffictypediagnostic: PH0PR03MB5816:
x-microsoft-antispam-prvs: <PH0PR03MB5816D632C56EF6E4BEAE9931993B9@PH0PR03MB5816.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <FA849F3B4B6FC3498D2B06EFE36B0292@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e83dd725-b2ff-48da-89ca-08d9275abbd9
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jun 2021 13:14:50.0815
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 9++swgrR0+4H1eN44PFquwx7wQfcVQ4Pxc6iRGWL8eFN9c0B5rKL9lOI7o2Rm6OZ5auNa4GXv+tQeCYi76pCHTpN/2s0mEo1/TgEAFG/zaA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5816
X-OriginatorOrg: citrix.com


> On May 6, 2021, at 1:47 PM, George Dunlap <george.dunlap@citrix.com> wrote:
> 
> The support status of 32-bit guests doesn't seem particularly useful.
> 
> With it changed to fully unsupported outside of PV-shim, adjust the PV32
> Kconfig default accordingly.
> 
> Reported-by: Jann Horn <jannh@google.com>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2:
> - add in Kconfig from advisory, ported over c/s d23d792478d

There haven't been any objections to this patch, only suggested additional actions taken.  Unless someone objects by EOD today I'm going to check it in.

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 13:23:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 13:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136846.253598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp9nB-0000Nc-B9; Fri, 04 Jun 2021 13:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136846.253598; Fri, 04 Jun 2021 13:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lp9nB-0000NV-8F; Fri, 04 Jun 2021 13:23:17 +0000
Received: by outflank-mailman (input) for mailman id 136846;
 Fri, 04 Jun 2021 13:23:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e0/s=K6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lp9n9-0000NP-E9
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 13:23:15 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e42a00f4-cb8a-4edc-bd3d-e65553c5e5c0;
 Fri, 04 Jun 2021 13:23:14 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2056.outbound.protection.outlook.com [104.47.4.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-_SaINpyQNyeQ9zaA2B6QTg-2; Fri, 04 Jun 2021 15:23:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 4 Jun
 2021 13:23:11 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.024; Fri, 4 Jun 2021
 13:23:11 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0030.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.22 via Frontend Transport; Fri, 4 Jun 2021 13:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e42a00f4-cb8a-4edc-bd3d-e65553c5e5c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1622812993;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s6eHqYCXOWXBryVLpB90OwwJCuDcpY5PXTH/92MJ7XA=;
	b=GM9pZUoQdiKgePPz/fmBFXakampMbxfvnPpqd5KORd0Qlivyugg6To6QFyFCw+bqZWQdii
	R9oWV8mqSj6cuRV69nskvvXcnZ/zi28PTwLolKSHmT+XlIW/s6W+ZZ4QxPmU3IXfbUqnBb
	uJic1VrgFpKB3aS9nLikUAxMcdTiJak=
X-MC-Unique: _SaINpyQNyeQ9zaA2B6QTg-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
 <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
 <e805e525-f024-8b5f-3814-0c1346a227f8@suse.com>
 <ccdc7909-9ef2-470e-fefd-bc6523fcdf73@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <403746c0-1b36-f782-3f71-2a1cd129aa6e@suse.com>
Date: Fri, 4 Jun 2021 15:23:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <ccdc7909-9ef2-470e-fefd-bc6523fcdf73@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0030.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 084429e7-1419-4380-60b9-08d9275be646
X-MS-TrafficTypeDiagnostic: VI1PR04MB6864:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6864B3D820009F46DA05AFE4B33B9@VI1PR04MB6864.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1303;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 084429e7-1419-4380-60b9-08d9275be646
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jun 2021 13:23:10.9894
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vEyZBO9fpgCdoAJ05o8Iw+Nf90VaJeE7PrpMjIHxnNmyjf2flXXovoVFVsUa6FvtJBYB4qfqDZqdhPx18B+ckA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6864

On 03.06.2021 11:39, Julien Grall wrote:
> On 27/05/2021 14:58, Jan Beulich wrote:
>> On 27.05.2021 15:06, Julien Grall wrote:
>>> On 27/05/2021 13:33, Jan Beulich wrote:
>>>> @@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
>>>>        if ( first_dirty != INVALID_DIRTY_IDX ||
>>>>             (scrub_debug && !(memflags & MEMF_no_scrub)) )
>>>>        {
>>>> +        bool cold = d && d != current->domain;
>>>
>>> So the assumption is if the domain is not running, then the content is
>>> not in the cache. Is that correct?
>>
>> Not exactly: For one, instead of "not running" it is "is not the current
>> domain", i.e. there may still be vCPU-s of the domain running elsewhere.
>> And for the cache the question isn't so much of "is in cache", but to
>> avoid needlessly bringing contents into the cache when the data is
>> unlikely to be used again soon.
> 
> Ok. Can this be clarified in the commit message?

I had updated it already the other day to

"Especially when dealing with large amounts of memory, memset() may not
 be very efficient; this can be bad enough that even for debug builds a
 custom function is warranted. We additionally want to distinguish "hot"
 and "cold" cases (with, as initial heuristic, "hot" being for any
 allocations a domain does for itself, assuming that in all other cases
 the page wouldn't be accessed [again] soon). The goal is for accesses
 of "cold" pages to not disturb caches (albeit finding a good balance
 between this and the higher latency looks to be difficult)."

Is this good enough?

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:03:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136856.253610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAPZ-0004PY-9m; Fri, 04 Jun 2021 14:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136856.253610; Fri, 04 Jun 2021 14:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAPZ-0004PR-64; Fri, 04 Jun 2021 14:02:57 +0000
Received: by outflank-mailman (input) for mailman id 136856;
 Fri, 04 Jun 2021 14:02:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IglV=K6=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lpAPX-0004P5-Ot
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:02:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd823290-e4eb-4ca0-9bdf-a19d4349493d;
 Fri, 04 Jun 2021 14:02:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd823290-e4eb-4ca0-9bdf-a19d4349493d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622815374;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=N4+fDvYI2amX88/8/iFILWCZCR6BXNTYicLBxZrRct0=;
  b=JA16YKQMkK/D9RW4tb6YiStK4mitj7XYyhAiw3vhwT8HDa4zzAToE4f8
   D/qt7jK3+aMUdPPk+x34vCj0etYvgBoYHMxOa3RHL4DtlRPIkNNPwGktI
   lsdqWfXeYo/6CmOG18ADtBwzX2Uw2uyTuBcA+F+Uxuaayu7PBoI7DgW/S
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 45757060
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,248,1616472000"; 
   d="scan'208";a="45757060"
Date: Fri, 4 Jun 2021 15:02:50 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: <devel@edk2.groups.io>, <lersek@redhat.com>
CC: Julien Grall <julien@xen.org>, Ard Biesheuvel <ardb+tianocore@kernel.org>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>, xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [edk2-devel] [PATCH 00/43] OvmfPkg: remove Xen support from
 OvmfPkg*.dsc, in favor of OvmfXen.dsc
Message-ID: <YLoyiqSYxPDJ7VRl@perard>
References: <20210526201446.12554-1-lersek@redhat.com>
 <71da2a3b-aab1-4ecf-7e01-16b537d841a2@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <71da2a3b-aab1-4ecf-7e01-16b537d841a2@redhat.com>

On Wed, Jun 02, 2021 at 10:36:49AM +0200, Laszlo Ersek wrote:
> Anthony, Julien,
> 
> (or anyone else subscribed to xen-devel -- CC'd now),
> 
> On 05/26/21 22:14, Laszlo Ersek wrote:
> > Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
> > Repo:     https://pagure.io/lersek/edk2.git
> > Branch:   xen_split_bz_2122
> 
> can you please build the OvmfXen platform on this branch, and check if
> there are any regressions?

Hi Laszlo,

OvmfXen seems to be working fine with that branch applied.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136860.253621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQe-0004yi-Ii; Fri, 04 Jun 2021 14:04:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136860.253621; Fri, 04 Jun 2021 14:04:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQe-0004yb-FZ; Fri, 04 Jun 2021 14:04:04 +0000
Received: by outflank-mailman (input) for mailman id 136860;
 Fri, 04 Jun 2021 14:04:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAQd-0004yR-1k
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:03 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c0fbb1d-a627-4eba-b3c9-1f4bba25d53d;
 Fri, 04 Jun 2021 14:04:02 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id
 t16-20020a05600c1990b02901a0d45ff03aso4446003wmq.2
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:02 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c0fbb1d-a627-4eba-b3c9-1f4bba25d53d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=CDPZN9rXn/2sWgCtIlnaMwucM0ZhhnFzjW38voB8XP0=;
        b=AZ9qPpOgKNZFV/4UmgwQQfpWul55LoMX8bUmhqdDHjMx3cIZPNpJxDzscteLozut+a
         TRECh9nqt+y+e7BEktoprseEdvjebbZeWlErt/36JD9ar5D2LfVnU3b5cY+eZjqMSxOG
         h0zTwomuOWWLGcnTSpiid/qRa4syjH2qZkpehTiUCv5S69acA9VcYR/wrD3zlXBYDZ2F
         uj0cFVBbdzdN/BlhpVzp8NaxNMjCYhruFSR//4y1j0bTd0m+U7y0tdYei2oswMt2M+sZ
         Ymn22uhPCdeCMGV0LDadFKDUiP5rHwjJ2HGNvlX3OhIEtKfVO+BZlK/1WELTC6YjMGcS
         Xq9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=CDPZN9rXn/2sWgCtIlnaMwucM0ZhhnFzjW38voB8XP0=;
        b=oo8aLMcTV8blu+WkXfItxH0Vmd53M0kkz9rmwwtYdd7DMusXJeCoW1xeBA4jRrNJP5
         2gBRt1QJPBkANw+mMgmFdBaE6BwKcqgLSVU3Hq6R8X6MfhHzErdA+Xfkby+ES+BJ88ZS
         1xDlHgOHgWCkfSiFehOdGo0U4w+5jfh5L43gQgnCKPrLvpd8ofzM8JN52u/qcxYIy0qY
         59llaCIN3kcM4hZm1w3iqjFqZole727xJ5tAlaje55CseQRftwRvYvJ5M5KJnq6guwj5
         Ngn2HZR6Bv+UtOSUw7lKNNeq12/Zr9aYreJ6Ye9XwG6G0Sog77jvf/Af9v38DpPBzOhp
         90HA==
X-Gm-Message-State: AOAM532IB/fQwTVdQ9WJI+Sp20+sD1GYkLi6ZKWYBQlOMRuC7+GnyWXq
	5xYF68T55g+/hxM4dV9njhqerg==
X-Google-Smtp-Source: ABdhPJz6XYD0ouvZ2spj1i+HskfJEYeUlvOtKtkkuV4v4hqyWbkwEVFPqErNBxKEqheZXGCDnJTvtQ==
X-Received: by 2002:a1c:2456:: with SMTP id k83mr3867850wmk.87.1622815441364;
        Fri, 04 Jun 2021 07:04:01 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk
Subject: [RESEND 0/5] ARM/arm64: arm_pm_restart removal
Date: Fri,  4 Jun 2021 15:03:52 +0100
Message-Id: <20210604140357.2602028-1-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a rebase/refresh of a set sent out, reviewed,
then forgotten about.  It's still considered useful.

Here is an excerpt from the previous attempt:

 "Hi Russell, ARM SoC maintainers,

 here's the full set of patches that remove arm_pm_restart as discussed
 earlier. There's some background on the series in this thread:

	https://lore.kernel.org/linux-arm-kernel/20170130110512.6943-1-thierry.reding@gmail.com/

 I also have a set of patches that build on top of this and try to add
 something slightly more formal by adding a power/reset framework that
 drivers can register with. If we can get this series merged, I'll find
 some time to refresh those patches and send out for review again.

 Thierry"

Guenter Roeck (5):
  ARM: xen: Register with kernel restart handler
  drivers: firmware: psci: Register with kernel restart handler
  ARM: Register with kernel restart handler
  ARM64: Remove arm_pm_restart()
  ARM: Remove arm_pm_restart()

 arch/arm/include/asm/system_misc.h   |  1 -
 arch/arm/kernel/reboot.c             |  6 +-----
 arch/arm/kernel/setup.c              | 20 ++++++++++++++++++--
 arch/arm/xen/enlighten.c             | 12 ++++++++++--
 arch/arm64/include/asm/system_misc.h |  2 --
 arch/arm64/kernel/process.c          |  7 +------
 drivers/firmware/psci/psci.c         | 12 ++++++++++--
 7 files changed, 40 insertions(+), 20 deletions(-)


-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136861.253632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQi-0005IG-T7; Fri, 04 Jun 2021 14:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136861.253632; Fri, 04 Jun 2021 14:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQi-0005I9-Ob; Fri, 04 Jun 2021 14:04:08 +0000
Received: by outflank-mailman (input) for mailman id 136861;
 Fri, 04 Jun 2021 14:04:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAQi-0004yR-09
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:08 +0000
Received: from mail-wr1-x434.google.com (unknown [2a00:1450:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 510db9ae-62bd-4175-9c5b-26835265215a;
 Fri, 04 Jun 2021 14:04:03 +0000 (UTC)
Received: by mail-wr1-x434.google.com with SMTP id f2so9408605wri.11
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:03 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 510db9ae-62bd-4175-9c5b-26835265215a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=9vD7NkGDkc+IVLjUtVKL1KzFVVpDAskIEMoj4foPPh0=;
        b=akY05cjVG2u0Mf/Ur1k9G+ijD85MCVBcTS5cQxoMRHReBqD1UUr5+rTusPSJUjBFcQ
         xJ5eN+t/caaW7VWfSKfSEp9tAOxtYnT7wP4gt0PBctHI71FMDBhIJElbtcRHq0RZXIqj
         NxjnAOMorjqeNIdAFe72j34/dxhsIEZWYFc8liIEyolGby0H9wUp9V8daZmdeVX1FZ0B
         5SeriQykexYzYAk2IbdrrdTOG9zKvdQJXFBJgXPKvzkZZrm/S0Ymv4tV6g+9IRQBLFvQ
         UPN+cQH55smlaQ153ZgRX0rZF83RKMv1yYNN5C6hN2iKku845JUpznvi7dzmQSIaEGCJ
         OzBA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=9vD7NkGDkc+IVLjUtVKL1KzFVVpDAskIEMoj4foPPh0=;
        b=EOQd5XlNE3Shcz5WfApF2JDw01RGvITNRDp1FAgBLCXhxkWZNc9qv79T8WUXw+x9s+
         ngSUl51hHweDOS7P4YGZawLU6Yw5T2ytHjI8BllrlXfI1wIjsqyL2CReXcdzKVfBy+A8
         b9eYo1ksgZIwf/XkFfb7lMrJjBIKVYcKrjqud5rUnjtAZC1ksrPWRJE/7ZGD5phbGEA/
         b+WicZ9XM1m5oi9z9IG1/GC5r0WLB3VRCL3uWoOEiHBfelIfxvAAhztF5BH22LKKK5zQ
         LUvq94BOAWfbLMW3Wu5j1nNc2W4Dd47Um9XXm38qqpmDa006LoZW+vlxcx8hlMypuI62
         ELNQ==
X-Gm-Message-State: AOAM532Y3UNqqEhtsKzHjfBuVEy1jjR47Po02fk5T5PmJJg/Aun/gcnw
	xBAHwDzxmUIWlOvyUqiCx6dKlA==
X-Google-Smtp-Source: ABdhPJyoc18GiAJJdLah52gbebIhTBv5cg91BFwDaX7X49x4F2VQ9HNUtagdypEOLgxeAr0vit0AsA==
X-Received: by 2002:a05:6000:6:: with SMTP id h6mr4149822wrx.24.1622815442389;
        Fri, 04 Jun 2021 07:04:02 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [PATCH 1/5] ARM: xen: Register with kernel restart handler
Date: Fri,  4 Jun 2021 15:03:53 +0100
Message-Id: <20210604140357.2602028-2-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210604140357.2602028-1-lee.jones@linaro.org>
References: <20210604140357.2602028-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guenter Roeck <linux@roeck-us.net>

Register with kernel restart handler instead of setting arm_pm_restart
directly.

Select a high priority of 192 to ensure that default restart handlers
are replaced if Xen is running.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm/xen/enlighten.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8ad576ecd0f1d..7f1c106b746f8 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -29,6 +29,7 @@
 #include <linux/cpu.h>
 #include <linux/console.h>
 #include <linux/pvclock_gtod.h>
+#include <linux/reboot.h>
 #include <linux/time64.h>
 #include <linux/timekeeping.h>
 #include <linux/timekeeper_internal.h>
@@ -181,11 +182,18 @@ void xen_reboot(int reason)
 	BUG_ON(rc);
 }
 
-static void xen_restart(enum reboot_mode reboot_mode, const char *cmd)
+static int xen_restart(struct notifier_block *nb, unsigned long action,
+		       void *data)
 {
 	xen_reboot(SHUTDOWN_reboot);
+
+	return NOTIFY_DONE;
 }
 
+static struct notifier_block xen_restart_nb = {
+	.notifier_call = xen_restart,
+	.priority = 192,
+};
 
 static void xen_power_off(void)
 {
@@ -404,7 +412,7 @@ static int __init xen_pm_init(void)
 		return -ENODEV;
 
 	pm_power_off = xen_power_off;
-	arm_pm_restart = xen_restart;
+	register_restart_handler(&xen_restart_nb);
 	if (!xen_initial_domain()) {
 		struct timespec64 ts;
 		xen_read_wallclock(&ts);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136862.253643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQo-0005cy-52; Fri, 04 Jun 2021 14:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136862.253643; Fri, 04 Jun 2021 14:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQo-0005cp-1D; Fri, 04 Jun 2021 14:04:14 +0000
Received: by outflank-mailman (input) for mailman id 136862;
 Fri, 04 Jun 2021 14:04:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAQn-0004yR-0S
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:13 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 508581bd-cdd7-44cf-a786-69867cd0155c;
 Fri, 04 Jun 2021 14:04:04 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id a11so7563099wrt.13
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:04 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 508581bd-cdd7-44cf-a786-69867cd0155c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TEh1khdRrbKOISh67J2riVpfOYD6OGYoV3WdgW9rcJ0=;
        b=nbeaNWdBnTcP7bz1VBo8ORkE7GXcmZ9WULwc9d53BO3FJsVDxfvMLco7Jdi8xDopSW
         rFy3A0cnyioI9Du7UwC11so/a/emXg9SmX89XwsXCX5PaimX++OUSF1pmOmzDjPSJ563
         7KFOmY8D3Adm6+/EaZXeJifRRsQf4TM1fcsM4mdfUuAE8wpGiEtPHYeK25qBDNx3FX1l
         FVS9pptxb5ACR0S1LvelYbp3SQvbYCD1/z9hRSFTg44QPcwlGsK3ziy22bSbbPIeLBwq
         2RB1U4q1ThxkuhSrx8E1G/ve/dmSXoAlyAyj1BJwgzWp2AZfBYGVio/AYt62J9+48+g7
         WH9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TEh1khdRrbKOISh67J2riVpfOYD6OGYoV3WdgW9rcJ0=;
        b=A8b5+y4h6zECFuB3MhGKNtGBw7Fps6K2Eb1/swJS2eO+lOD6jhVv5nPh663Dpy0h1r
         5HPohO4mDXP7aCrSYb3HpUBwBkEQTTosyCb08r36etlTJKIQgYss59HqHLiRGrFJOxxH
         39WnjAazWsoWzOtB3ve7G7gcCMwWAgcyfq2e0Z5EpfraIy5TmPZ2bf6YqInTVH49PnVa
         EFj3RLrr6mufmtcPTi+8OPT8jGskX2C/fE7jNJjD6IPer+6z6Hi4kV5o4pHLoP2v3rHT
         2RxcAm8le0BcsOOI3SQDQDq1fG6xcoEpl3yTxRN9LSso77BbweKnh6MZ/AUOyUwhL10B
         XEXw==
X-Gm-Message-State: AOAM532/oMuwOZBe8Rlf3wzWJxbQVKve2+uA6Ns1n0qitcP3JMMzgWrX
	WIXT+HzgyoMkQMb3H1g9KzdssA==
X-Google-Smtp-Source: ABdhPJw1kAi2sfOb54fRB7u21L2rZpgLmTnYQGXWunLdqkzDvlsLaxCRGrc38uw/MzOLdLP3jwPQtw==
X-Received: by 2002:a05:6000:18ac:: with SMTP id b12mr4005226wri.44.1622815443376;
        Fri, 04 Jun 2021 07:04:03 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk
Subject: [PATCH 2/5] drivers: firmware: psci: Register with kernel restart handler
Date: Fri,  4 Jun 2021 15:03:54 +0100
Message-Id: <20210604140357.2602028-3-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210604140357.2602028-1-lee.jones@linaro.org>
References: <20210604140357.2602028-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guenter Roeck <linux@roeck-us.net>

Register with kernel restart handler instead of setting arm_pm_restart
directly. This enables support for replacing the PSCI restart handler
with a different handler if necessary for a specific board.

Select a priority of 129 to indicate a higher than default priority, but
keep it as low as possible since PSCI reset is known to fail on some
boards.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/firmware/psci/psci.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
index 3c1c5daf6df2e..18a47c9d5b02b 100644
--- a/drivers/firmware/psci/psci.c
+++ b/drivers/firmware/psci/psci.c
@@ -296,7 +296,8 @@ static int get_set_conduit_method(struct device_node *np)
 	return 0;
 }
 
-static void psci_sys_reset(enum reboot_mode reboot_mode, const char *cmd)
+static int psci_sys_reset(struct notifier_block *nb, unsigned long action,
+			  void *data)
 {
 	if ((reboot_mode == REBOOT_WARM || reboot_mode == REBOOT_SOFT) &&
 	    psci_system_reset2_supported) {
@@ -309,8 +310,15 @@ static void psci_sys_reset(enum reboot_mode reboot_mode, const char *cmd)
 	} else {
 		invoke_psci_fn(PSCI_0_2_FN_SYSTEM_RESET, 0, 0, 0);
 	}
+
+	return NOTIFY_DONE;
 }
 
+static struct notifier_block psci_sys_reset_nb = {
+	.notifier_call = psci_sys_reset,
+	.priority = 129,
+};
+
 static void psci_sys_poweroff(void)
 {
 	invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0);
@@ -472,7 +480,7 @@ static void __init psci_0_2_set_functions(void)
 		.migrate_info_type = psci_migrate_info_type,
 	};
 
-	arm_pm_restart = psci_sys_reset;
+	register_restart_handler(&psci_sys_reset_nb);
 
 	pm_power_off = psci_sys_poweroff;
 }
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136863.253654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQt-00060w-Ju; Fri, 04 Jun 2021 14:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136863.253654; Fri, 04 Jun 2021 14:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQt-00060A-FC; Fri, 04 Jun 2021 14:04:19 +0000
Received: by outflank-mailman (input) for mailman id 136863;
 Fri, 04 Jun 2021 14:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAQs-0004yR-0e
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:18 +0000
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f622367-0396-4b05-8c3d-5a28572d58a6;
 Fri, 04 Jun 2021 14:04:05 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id z8so9414967wrp.12
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:05 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f622367-0396-4b05-8c3d-5a28572d58a6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=FXe7mUhRyV1V08UlHOUDRQa+cyJRl+BaTvqQGIK3jgo=;
        b=IaVfsBBg8NR4RGlKN4FA+nCoNqfGwLzQsRsbwt5aa1iwj3NRfw/OtO7jrJTDUGqpMN
         3nVAPuI71sUVJi45TyzxzM5TIPNalhlh3sfTDK715KHFO3Muc2MgflEbEV3Nafl6PD7R
         K+qgau2e3Q9PNbALT3x8rkGgs5+NTyXFUruUWqKYs8PKPFaUyYNz/HKev7bERNhBVieg
         WcZSeH3QkQgKxYNE6LmqUGyhqLYU6HKNSC5G/xK4cRvpuwTtkLy3tg7sD1fN+9iPma7r
         Ej4+LoVtLLElLl53YMXrWShGIRI39H4O94ZsmBzuWC7o70KQiESC9UArUgHuWBqvv5Pt
         e7lg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=FXe7mUhRyV1V08UlHOUDRQa+cyJRl+BaTvqQGIK3jgo=;
        b=bSe1JeLmvoOSR1os0RBjpqVA0SQnrvOBNS+AQH7oMSfM50GFRiWLcHdx8agOja+gGD
         O9bH9nDdpEUzGrhAwjTWOYGrLiEEA3cWI0+lW5+hIyikOv8I0bksggREXng3R8IAX9Vj
         PEW63hGp7GuAL43nDUSTNKUrMgq/CQ/mdtn7V13NInmVtx8lynWY9vI2dAPMnnmsE/4b
         aY/riymMpP/IgxaVL8z06zb80/WcxGqBAZ37Rw028EeXpcfTMzaFih7LvB1etvw0Fxr2
         51pIeF9Q9GFRvEHCGz+IiU1NgXmPlRJP+3uFRIbM8eeOEKMcn7+LZ8CT637kedJNK+Hz
         PU8A==
X-Gm-Message-State: AOAM532nD/wFvjF84CwagK5sKht1XMXjtWFFzirhiERVeMako+eo0D5C
	MWYSCodOvs0RgZsfz+MymnAm2g==
X-Google-Smtp-Source: ABdhPJxEWeGElTopDkHbCuLC8qXXv/5amcTjzy0KQBVFAIdSm17ZnCj3j7C4vGyObQxH79cOOQtMIA==
X-Received: by 2002:a5d:4610:: with SMTP id t16mr3942011wrq.324.1622815444460;
        Fri, 04 Jun 2021 07:04:04 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk
Subject: [PATCH 3/5] ARM: Register with kernel restart handler
Date: Fri,  4 Jun 2021 15:03:55 +0100
Message-Id: <20210604140357.2602028-4-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210604140357.2602028-1-lee.jones@linaro.org>
References: <20210604140357.2602028-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guenter Roeck <linux@roeck-us.net>

By making use of the kernel restart handler, board-specific restart
handlers can be prioritized amongst available mechanisms for a particular
board or system.

Select the default priority of 128 to indicate that the restart callback
in the machine description is the default restart mechanism.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm/kernel/setup.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 1a5edf562e85e..08c5676477d70 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -1081,6 +1081,20 @@ void __init hyp_mode_check(void)
 #endif
 }
 
+static void (*__arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
+
+static int arm_restart(struct notifier_block *nb, unsigned long action,
+		       void *data)
+{
+	__arm_pm_restart(action, data);
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block arm_restart_nb = {
+	.notifier_call = arm_restart,
+	.priority = 128,
+};
+
 void __init setup_arch(char **cmdline_p)
 {
 	const struct machine_desc *mdesc = NULL;
@@ -1149,8 +1163,10 @@ void __init setup_arch(char **cmdline_p)
 	kasan_init();
 	request_standard_resources(mdesc);
 
-	if (mdesc->restart)
-		arm_pm_restart = mdesc->restart;
+	if (mdesc->restart) {
+		__arm_pm_restart = mdesc->restart;
+		register_restart_handler(&arm_restart_nb);
+	}
 
 	unflatten_device_tree();
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136864.253665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQx-0006Pt-Tk; Fri, 04 Jun 2021 14:04:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136864.253665; Fri, 04 Jun 2021 14:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAQx-0006Pf-PB; Fri, 04 Jun 2021 14:04:23 +0000
Received: by outflank-mailman (input) for mailman id 136864;
 Fri, 04 Jun 2021 14:04:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAQx-0004yR-0q
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:23 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2187f623-5ad3-49c0-8329-fb79e8419f1a;
 Fri, 04 Jun 2021 14:04:06 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id
 l11-20020a05600c4f0bb029017a7cd488f5so5724173wmq.0
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:06 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2187f623-5ad3-49c0-8329-fb79e8419f1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=QaJICWaLVjdKyiczP8R7DVAA/JKj3/g1GzNXgU/ERjo=;
        b=OtKIdFz91o6zSGDVTQZvGzg/E3WbsLPBQt+zn0ndJ5AES765uw0NRLSuW7BmM1XIIb
         KSmT00ThuHMEyNqcf0APky2O0X+Fz0wCuAj+ufEoElBLQgTbz+i02lN3/Zf1im09zpDW
         V7M1w+GKAjj8vX7fR+3Ffa8zGHIP9mnEFe/wg7tdUMuHD2MA56Co5BpmNTswwnhnzJZ3
         9c7ENbHX2CdAaMwDB25ftfd7WDxFryEsXCCExLyCZAO6YcPG/2o+q800T9B4mz1KOZRv
         aTr3DUL+N94+eGQZxLu/MEdvv+3cWnjJfA40LNQyMNgeg0oJvNtkbZAD86ICk+f88Itg
         u7eA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=QaJICWaLVjdKyiczP8R7DVAA/JKj3/g1GzNXgU/ERjo=;
        b=HgC7ZIaLn6mpcVm80S8G69AOrROHDIvzdL4XOfucjymHlP9IpGo6lA9LWmiDIejg6b
         omb6WL2LQSyeTnz5YSYQ6fA4W/kAWdDjpCnRQqEhsBaK0eO4/aipEL8ny+2WagL52M9I
         f8SHjCORpuqp0qREZjBvZwxkU54wxDiNrl5cNcL9C581X/i4YKV4Z8Q2fsINwky9xYpf
         jUWYZM67A2yDidOEIeqAgTLf75OwLT/+TieJoNDAW5GrqHte13+W9o+amOcXivtSkzwP
         t9TbQYhYfckQwEcPWpUwfaK8YQW+aTVjx09Oge+Y+5Nocx9FTcwLIh25wKNlxllc4zx/
         ku2g==
X-Gm-Message-State: AOAM530dnuECTrYcnnaOXQssNjz5OjXc+YYeUUw36IaMG75WqrUkle9R
	8r25w1SEQCAtqRB7iU9QS7uXuQ==
X-Google-Smtp-Source: ABdhPJxkeeHzpVP/eH19on7AKsxKyxYqA03gG6EEjmBqwHMSrKqjg6lHl/nEqHDgTjEVI8pTSN1dXw==
X-Received: by 2002:a7b:c30f:: with SMTP id k15mr3940993wmj.128.1622815445465;
        Fri, 04 Jun 2021 07:04:05 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk
Subject: [PATCH 4/5] ARM64: Remove arm_pm_restart()
Date: Fri,  4 Jun 2021 15:03:56 +0100
Message-Id: <20210604140357.2602028-5-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210604140357.2602028-1-lee.jones@linaro.org>
References: <20210604140357.2602028-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guenter Roeck <linux@roeck-us.net>

All users of arm_pm_restart() have been converted to use the kernel
restart handler.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm64/include/asm/system_misc.h | 2 --
 arch/arm64/kernel/process.c          | 7 +------
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
index 673be2d1263c4..305a7157c6a6a 100644
--- a/arch/arm64/include/asm/system_misc.h
+++ b/arch/arm64/include/asm/system_misc.h
@@ -32,8 +32,6 @@ void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
 struct mm_struct;
 extern void __show_regs(struct pt_regs *);
 
-extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
-
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __ASM_SYSTEM_MISC_H */
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index b4bb67f17a2ca..5591725cebccc 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -72,8 +72,6 @@ EXPORT_SYMBOL(__stack_chk_guard);
 void (*pm_power_off)(void);
 EXPORT_SYMBOL_GPL(pm_power_off);
 
-void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
-
 static void noinstr __cpu_do_idle(void)
 {
 	dsb(sy);
@@ -201,10 +199,7 @@ void machine_restart(char *cmd)
 		efi_reboot(reboot_mode, NULL);
 
 	/* Now call the architecture specific reboot code. */
-	if (arm_pm_restart)
-		arm_pm_restart(reboot_mode, cmd);
-	else
-		do_kernel_restart(cmd);
+	do_kernel_restart(cmd);
 
 	/*
 	 * Whoops - the architecture was unable to reboot.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:04:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:04:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136866.253675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAR3-0006zm-CA; Fri, 04 Jun 2021 14:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136866.253675; Fri, 04 Jun 2021 14:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAR3-0006zS-7a; Fri, 04 Jun 2021 14:04:29 +0000
Received: by outflank-mailman (input) for mailman id 136866;
 Fri, 04 Jun 2021 14:04:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gm11=K6=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1lpAR2-0004yR-0z
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:04:28 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a30cf7f-335f-42dc-a0af-7fe29a653ac5;
 Fri, 04 Jun 2021 14:04:07 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id u7so3998858wrs.10
 for <xen-devel@lists.xenproject.org>; Fri, 04 Jun 2021 07:04:07 -0700 (PDT)
Received: from dell.default ([91.110.221.214])
 by smtp.gmail.com with ESMTPSA id m11sm5422559wmq.33.2021.06.04.07.04.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Jun 2021 07:04:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a30cf7f-335f-42dc-a0af-7fe29a653ac5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=D6QyFvCrmM9/eKIfxG4Pm00YDFikRvR7iKb/WRsmrRM=;
        b=rnXl1mp749C3drgrMS8qg9OrEpVG5gyjtC/beupAd1e0u7I2C0SA+eLYqiAZo4zOf1
         q05qwrZm55jTi+ifKRg8eXWpkJEyVRxtH/IgUYGbEZ70EVHfHrPyla7TPIzoc3bF3HUY
         38N+YHqp0zTbzFzoCmx/yOn7e0FImL+E1RzKDH7btVNuv4yY5751UVI1bLnrNl7scCoW
         LFc1Ih93JSdQok29RWgqRhCojCLYOg62P4bYX/nGU9EOEWtNNXBLPDIFNWoTmUP22sU3
         JvGp1+ARt/xYZhTOdzb03Rb1/7Tbwwe1vK1CD+EZtoPQYOnPg1dx3h+YFx/bg1nj1AhS
         +DAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=D6QyFvCrmM9/eKIfxG4Pm00YDFikRvR7iKb/WRsmrRM=;
        b=RJfAxYxCOiJ8a+O5eYoO/1acQrZ5xlUo+H4d2GmXTcbyOzjidBpKbCs3ciSyBLNBk7
         exV9mFxneljJrqEImXSD4J+LSDVHFRTLTzHL7acbTPIRK7OYnOhiPmh1rdU4JDZsSH5g
         CoXqnUF0VRllaeFbwljLN+5kDdvEqBUGaHZwneHFEzLI6smx1S0oRrKqvvw7I+pRH5hJ
         /p6ZKoRHoEPrAPiOdh9OH5Zr9fX5+JXRl0P/pC8g57r9BG3gAuS+KIxWw0vW4BJ9hMHE
         RAWai+xTCCbPJSUc62Y1lhyNorspIonlUV0b/znHQq1qU2Mk4fa0DqyKJvWm/CbTcE+d
         F4EA==
X-Gm-Message-State: AOAM531bTbnMUGkz3F2tG0BtCvc1f2KRrTqxwxf4tj+HS7U+e+Y+ZzgN
	4/ynJlgAGcUxQrqoox5fLPKqDQ==
X-Google-Smtp-Source: ABdhPJy6Bn0tgDEaDDg7jaA3vOqPTriXmMlbNfufQjcNecRNKMp6REhDzJ4KHPqHqkOmLTOy8DweYw==
X-Received: by 2002:adf:f7c3:: with SMTP id a3mr3659762wrq.253.1622815446460;
        Fri, 04 Jun 2021 07:04:06 -0700 (PDT)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org,
	linux@armlinux.org.uk,
	catalin.marinas@arm.com,
	will@kernel.org,
	mark.rutland@arm.com,
	lorenzo.pieralisi@arm.com,
	sstabellini@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	wsa+renesas@sang-engineering.com,
	linux@roeck-us.net,
	treding@nvidia.com,
	arnd@arndb.de,
	xen-devel@lists.xenproject.org,
	patches@armlinux.org.uk
Subject: [PATCH 5/5] ARM: Remove arm_pm_restart()
Date: Fri,  4 Jun 2021 15:03:57 +0100
Message-Id: <20210604140357.2602028-6-lee.jones@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210604140357.2602028-1-lee.jones@linaro.org>
References: <20210604140357.2602028-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guenter Roeck <linux@roeck-us.net>

All users of arm_pm_restart() have been converted to use the kernel
restart handler.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm/include/asm/system_misc.h | 1 -
 arch/arm/kernel/reboot.c           | 6 +-----
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 66f6a3ae68d27..98b37340376bc 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -13,7 +13,6 @@
 extern void cpu_init(void);
 
 void soft_restart(unsigned long);
-extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 extern void (*arm_pm_idle)(void);
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 0ce388f154226..3044fcb8d0736 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -18,7 +18,6 @@ typedef void (*phys_reset_t)(unsigned long, bool);
 /*
  * Function pointers to optional machine specific functions
  */
-void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 void (*pm_power_off)(void);
 EXPORT_SYMBOL(pm_power_off);
 
@@ -138,10 +137,7 @@ void machine_restart(char *cmd)
 	local_irq_disable();
 	smp_send_stop();
 
-	if (arm_pm_restart)
-		arm_pm_restart(reboot_mode, cmd);
-	else
-		do_kernel_restart(cmd);
+	do_kernel_restart(cmd);
 
 	/* Give a grace period for failure to restart of 1s */
 	mdelay(1000);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 14:19:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 14:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136905.253686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAfs-0001GR-Ig; Fri, 04 Jun 2021 14:19:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136905.253686; Fri, 04 Jun 2021 14:19:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpAfs-0001GK-FT; Fri, 04 Jun 2021 14:19:48 +0000
Received: by outflank-mailman (input) for mailman id 136905;
 Fri, 04 Jun 2021 14:19:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wstn=K6=redhat.com=lersek@srs-us1.protection.inumbo.net>)
 id 1lpAfq-0001GE-PS
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 14:19:47 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [170.10.133.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3758f57f-3505-486b-8a2d-fb9472de7be1;
 Fri, 04 Jun 2021 14:19:45 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-126-d0KNyPWbMnecfR-kXC7CQw-1; Fri, 04 Jun 2021 10:19:42 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CEF541034AF2;
 Fri,  4 Jun 2021 14:19:40 +0000 (UTC)
Received: from lacos-laptop-7.usersys.redhat.com (ovpn-112-217.ams2.redhat.com
 [10.36.112.217])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 75C1C10074EF;
 Fri,  4 Jun 2021 14:19:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3758f57f-3505-486b-8a2d-fb9472de7be1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1622816385;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7RaHb1L61pcCk/PK45uDzhk5w+i1nJDMw2mwl+TSD80=;
	b=hUs3uBNASaC9cKW8/2chsuBjb3ZjW26IwI2M/jws9k6VGnYV53iF9XAUBRBmJSLzJK56qR
	O7vSbYS+qoliqu/nHnZgGwtrP9QlGDKlL4/CPvGIXM7lJmSY0Qhl8/Y/hERDdGLa5sLPA0
	1ylPa73Egr/reNUT4NDyHAH4ufuHHhk=
X-MC-Unique: d0KNyPWbMnecfR-kXC7CQw-1
Subject: Re: [edk2-devel] [PATCH 00/43] OvmfPkg: remove Xen support from
 OvmfPkg*.dsc, in favor of OvmfXen.dsc
To: Anthony PERARD <anthony.perard@citrix.com>, devel@edk2.groups.io
Cc: Julien Grall <julien@xen.org>, Ard Biesheuvel
 <ardb+tianocore@kernel.org>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <20210526201446.12554-1-lersek@redhat.com>
 <71da2a3b-aab1-4ecf-7e01-16b537d841a2@redhat.com> <YLoyiqSYxPDJ7VRl@perard>
From: Laszlo Ersek <lersek@redhat.com>
Message-ID: <49414f37-0e20-a45d-cf24-f186c69fb1bc@redhat.com>
Date: Fri, 4 Jun 2021 16:19:37 +0200
MIME-Version: 1.0
In-Reply-To: <YLoyiqSYxPDJ7VRl@perard>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=lersek@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06/04/21 16:02, Anthony PERARD wrote:
> On Wed, Jun 02, 2021 at 10:36:49AM +0200, Laszlo Ersek wrote:
>> Anthony, Julien,
>>
>> (or anyone else subscribed to xen-devel -- CC'd now),
>>
>> On 05/26/21 22:14, Laszlo Ersek wrote:
>>> Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
>>> Repo:     https://pagure.io/lersek/edk2.git
>>> Branch:   xen_split_bz_2122
>>
>> can you please build the OvmfXen platform on this branch, and check if
>> there are any regressions?
> 
> Hi Laszlo,
> 
> OvmfXen seems to be working fine with that branch applied.

Thank you very much!!!
Laszlo



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 15:36:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 15:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136913.253697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpBsG-0000AS-KE; Fri, 04 Jun 2021 15:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136913.253697; Fri, 04 Jun 2021 15:36:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpBsG-0000AL-H8; Fri, 04 Jun 2021 15:36:40 +0000
Received: by outflank-mailman (input) for mailman id 136913;
 Fri, 04 Jun 2021 15:36:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpBsF-0000AB-7q; Fri, 04 Jun 2021 15:36:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpBsF-00077v-0N; Fri, 04 Jun 2021 15:36:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpBsE-0002La-Ne; Fri, 04 Jun 2021 15:36:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpBsE-0006Gx-My; Fri, 04 Jun 2021 15:36:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=isquRluctqnQufW1l/r3g+apgt4LzfV5K92tkCZDhRk=; b=jWS5vcV9O980m7zqL0WUwE3uCI
	0bTNpr0iZ2opMdoUbbPB8jtoo9eDUHHZqUcAiaKZwsKi30Z3atlVh0yABIXVkq/fIOJhiocpuFRG5
	/7HIjXkEBHdwrAWVHQVqtyQCvx7oCahxj3K0pxN+slj7ligGZKnR35b2p+ePNUterQew=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162359-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162359: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
X-Osstest-Versions-That:
    ovmf=75e9154f818a58ffc3a28db9f8c97279e723f02d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 15:36:38 +0000

flight 162359 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162359/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb
baseline version:
 ovmf                 75e9154f818a58ffc3a28db9f8c97279e723f02d

Last test of basis   162341  2021-06-02 19:41:07 Z    1 days
Testing same since   162359  2021-06-04 03:40:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Satoshi Tanda <tanda.sat@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   75e9154f81..c410ad4da4  c410ad4da4b7785170d3d42a3ba190c2caac6feb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 15:46:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 15:46:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136921.253712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpC1h-0001dO-Jm; Fri, 04 Jun 2021 15:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136921.253712; Fri, 04 Jun 2021 15:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpC1h-0001dH-Gg; Fri, 04 Jun 2021 15:46:25 +0000
Received: by outflank-mailman (input) for mailman id 136921;
 Fri, 04 Jun 2021 15:46:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpC1g-0001d7-Qb; Fri, 04 Jun 2021 15:46:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpC1g-0007I5-IY; Fri, 04 Jun 2021 15:46:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpC1g-0002dX-9z; Fri, 04 Jun 2021 15:46:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpC1g-0002UI-9T; Fri, 04 Jun 2021 15:46:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Oi7uegepWg3ZVrwFIoAx+VT1IOILfM4h2RjhxMwR1SU=; b=EITlc+KxRLSjHEibseA8jdqJct
	Ol5Wxfh1Mp3Scxo+JT4HEoR1D3LSuonQ3cLoGLLbTAJVbq2mT4j9mI1VEvzI6mgbHahbKLFXHouP+
	8MK9zakBIoDiaFFyH3bJS3MqHv/Cno+SGtfSGGds3Yyoxl/zE728mZtsDoB4PsdEb1mk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162358-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162358: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f88cd3fb9df228e5ce4e13ec3dbad671ddb2146e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 15:46:24 +0000

flight 162358 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162358/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                f88cd3fb9df228e5ce4e13ec3dbad671ddb2146e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  307 days
Failing since        152366  2020-08-01 20:49:34 Z  306 days  524 attempts
Testing same since   162358  2021-06-04 02:51:33 Z    0 days    1 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1667449 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 15:59:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 15:59:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136930.253725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpCEA-0003DI-0z; Fri, 04 Jun 2021 15:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136930.253725; Fri, 04 Jun 2021 15:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpCE9-0003DB-U5; Fri, 04 Jun 2021 15:59:17 +0000
Received: by outflank-mailman (input) for mailman id 136930;
 Fri, 04 Jun 2021 15:59:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q+cM=K6=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1lpCE7-0003D5-Ou
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 15:59:15 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d86147d-4962-42ff-83d1-0b86fce7ec75;
 Fri, 04 Jun 2021 15:59:14 +0000 (UTC)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 154Fwio4026278; Fri, 4 Jun 2021 15:58:44 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 38y0j7gf19-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 04 Jun 2021 15:58:44 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 154Fwg6h009450;
 Fri, 4 Jun 2021 15:58:42 GMT
Received: from nam04-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam08lp2044.outbound.protection.outlook.com [104.47.73.44])
 by aserp3030.oracle.com with ESMTP id 38ubnft1gb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 04 Jun 2021 15:58:42 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by SJ0PR10MB4573.namprd10.prod.outlook.com (2603:10b6:a03:2ac::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 4 Jun
 2021 15:58:40 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4173.027; Fri, 4 Jun 2021
 15:58:40 +0000
Received: from 0xbeefdead.lan (130.44.160.152) by
 MN2PR01CA0037.prod.exchangelabs.com (2603:10b6:208:23f::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.15 via Frontend Transport; Fri, 4 Jun 2021 15:58:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d86147d-4962-42ff-83d1-0b86fce7ec75
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=5Lwl0zlLLXOjFB6AJtrSZDs8oG/nvzkr5vkS69mpymw=;
 b=JBjkoq5B0+y4xx0mrBmM+YBsWx6GBZiuWTo0t68Rj+0kEx2I0TXAI37v5GAEQpZ7uDR5
 qHRBvZICcT5pPzOv+xMaZyqryXlc469/5dE6IZs17wbYslTmXFa8DF5+IeGgLNYOwpI1
 h7IO4lesHafZAg/rv7SbgkOODi02MpqyoDWSvxcqaD3zuQ+5cefrCWtBgHJWhVKHN/Z2
 +HrK67dMcQXeH3pG/pjCjzMNi8towHjelvS54bqZbTZggBfK8Wh+0mDQ1QJOlN1tEtYg
 3lJQORmNXavjXsqOxPI/PfW+lTBdyoWC8pGYMwLewkIz6fDPMcyJNcoRnk+2TI0P8eNl gg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c5535t2svwIRZCrsY0mAHIfQV+E8ZgO4f294j6NmP/t3r/Rtg8rZ15MuHEKajs4mEHp4IQ8HPAhj+K6rD0gFwCLyX4ehtA5XHP2v8aPdF63+cskd5NT4SxtKWI+IInwJzxbJR6t6PdlVcfuxmJtbsH73PVJE0n7luLIWDYd+tUQfIL7VBVrR9QFOt11+pVbYNEM3PTAHdbhQkqKAcM1OHB5CnRog7TPsTwzR0r0hX4/NzmtC/cOdpilleb6YJST38wWiUCgW3Z3aL6QGt1Mz0KQ7wjHLqdvHLgC2eV0EMI+kfVeFy2lSjh/ZiA7O1k7NkYcy6owMDCa8kXh7jGVwdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Lwl0zlLLXOjFB6AJtrSZDs8oG/nvzkr5vkS69mpymw=;
 b=GC1lmTOSH/gq572Yg/UfaoxMO6z7nu/6Iqb7W0WMpvLZFY7HHvRfC8IZ9FXGW9FiwMxfH5dszYc6sQcT5LoleHzDxo2FROWDuQne7ChvwaS5og9zlqrs7HIEtBfMBRrvRdVi4XGi5VhE/7yTjc7sl+euCQ3WewNhBzygAOY1mVNAKaq7tMfZ2Sip5TXH7Em8iuadwMonAmykk6140jiTktJspW/kLtti7QdfiqHJqmhuxV1Qcmv2U8In/ndNIt20iJazu8d49UVWVXiVtpTEKzoyZv0aOzW4nvz2/bOrFIqPJrZCeTIVxNuhPuMFdrVV0iMhZ3o394OsFNA8HVJDOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Lwl0zlLLXOjFB6AJtrSZDs8oG/nvzkr5vkS69mpymw=;
 b=pSOsq0S6c5cQezvKm5Vq28Gylj76mSmfPbHhvVd0wopwS+rBKUMyiMiVH5RGzkPjgNB3hvlzdo1sKKYjfSd9KwsJb6bWLFyzbxipUnvy5nNxuXUa2Uq84TGrm69di22qZFt+8eTOn21DXZaaRT00lU5uUNVhkHqhA+i9maPc2Rw=
Authentication-Results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=oracle.com;
Date: Fri, 4 Jun 2021 11:58:34 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
        Denis Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>,
        Tim Waugh <tim@cyberelk.net>, Geoff Levand <geoff@infradead.org>,
        Ilya Dryomov <idryomov@gmail.com>,
        "Md. Haris Iqbal" <haris.iqbal@ionos.com>,
        Jack Wang <jinpu.wang@ionos.com>,
        "Michael S. Tsirkin" <mst@redhat.com>,
        Jason Wang <jasowang@redhat.com>,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        Mike Snitzer <snitzer@redhat.com>,
        Maxim Levitsky <maximlevitsky@gmail.com>, Alex Dubov <oakad@yahoo.com>,
        Miquel Raynal <miquel.raynal@bootlin.com>,
        Richard Weinberger <richard@nod.at>,
        Vignesh Raghavendra <vigneshr@ti.com>,
        Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
        linux-block@vger.kernel.org, nbd@other.debian.org,
        linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
        virtualization@lists.linux-foundation.org,
        xen-devel@lists.xenproject.org, linux-mmc@vger.kernel.org,
        linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org
Subject: Re: simplify gendisk and request_queue allocation for blk-mq based
 drivers
Message-ID: <YLpNqiw0LMkYWUyN@0xbeefdead.lan>
References: <20210602065345.355274-1-hch@lst.de>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
X-Originating-IP: [130.44.160.152]
X-ClientProxiedBy: MN2PR01CA0037.prod.exchangelabs.com (2603:10b6:208:23f::6)
 To BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6dde9b11-53fa-45a5-e4e3-08d927719ed5
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4573:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<SJ0PR10MB457380B60E35ADDCFC95A572893B9@SJ0PR10MB4573.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2999.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(376002)(136003)(366004)(396003)(39860400002)(7416002)(7406005)(66946007)(83380400001)(8936002)(7696005)(52116002)(6506007)(8676002)(956004)(2906002)(86362001)(8886007)(66556008)(5660300002)(54906003)(110136005)(36756003)(55016002)(9686003)(66476007)(38100700002)(26005)(316002)(16526019)(186003)(4326008)(478600001)(38350700002);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dde9b11-53fa-45a5-e4e3-08d927719ed5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jun 2021 15:58:40.2289
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k8PHQ2bjx/5Yak8THiDXWmeuwC57H+V+YorWNg11QCDMPAuKEJyQpKJy+Bs2B5GcI9L15xw+he/zXw8ra7OaWg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4573
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10005 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 bulkscore=0 phishscore=0
 spamscore=0 malwarescore=0 mlxscore=0 mlxlogscore=999 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106040115
X-Proofpoint-ORIG-GUID: rc9IytYinERt57ZWbGC4hfNK-9DkWPUs
X-Proofpoint-GUID: rc9IytYinERt57ZWbGC4hfNK-9DkWPUs

On Wed, Jun 02, 2021 at 09:53:15AM +0300, Christoph Hellwig wrote:
> Hi all,

Hi!

You wouldn't happen to have a git repo one could pull from, so this can be tested easily?

Thank you!

Cc-ing Boris/Juergen - please see the Xen bits below.
> 
> this series is the second part of cleaning up the lifetimes and allocation
> of the gendisk and request_queue structures.  It adds a new interface to
> allocate the disk and queue together for blk-mq based drivers, and uses it
> in all drivers that do not have any caveats in their gendisk and
> request_queue lifetime rules.
> 
> Diffstat:
>  block/blk-mq.c                      |   91 +++++++++++++++-------------------
>  block/blk.h                         |    1 
>  block/elevator.c                    |    2 
>  drivers/block/amiflop.c             |   16 +-----
>  drivers/block/aoe/aoeblk.c          |   33 ++++--------
>  drivers/block/aoe/aoedev.c          |    3 -
>  drivers/block/ataflop.c             |   16 +-----
>  drivers/block/floppy.c              |   20 +------
>  drivers/block/loop.c                |   19 ++-----
>  drivers/block/nbd.c                 |   53 +++++++------------
>  drivers/block/null_blk/main.c       |   11 +---
>  drivers/block/paride/pcd.c          |   19 +++----
>  drivers/block/paride/pd.c           |   30 ++++-------
>  drivers/block/paride/pf.c           |   18 ++----
>  drivers/block/ps3disk.c             |   36 +++++--------
>  drivers/block/rbd.c                 |   52 ++++++-------------
>  drivers/block/rnbd/rnbd-clt.c       |   35 +++----------
>  drivers/block/sunvdc.c              |   47 ++++-------------
>  drivers/block/swim.c                |   34 +++++-------
>  drivers/block/swim3.c               |   33 +++++-------
>  drivers/block/sx8.c                 |   23 ++------
>  drivers/block/virtio_blk.c          |   26 ++-------
>  drivers/block/xen-blkfront.c        |   96 ++++++++++++++----------------------
>  drivers/block/z2ram.c               |   15 +----
>  drivers/cdrom/gdrom.c               |   45 +++++++---------
>  drivers/md/dm-rq.c                  |    9 +--
>  drivers/memstick/core/ms_block.c    |   25 +++------
>  drivers/memstick/core/mspro_block.c |   26 ++++-----
>  drivers/mtd/mtd_blkdevs.c           |   48 ++++++++----------
>  drivers/mtd/ubi/block.c             |   68 ++++++++++---------------
>  drivers/s390/block/scm_blk.c        |   21 ++-----
>  include/linux/blk-mq.h              |   24 ++++++---
>  include/linux/elevator.h            |    1 
>  33 files changed, 386 insertions(+), 610 deletions(-)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 16:05:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 16:05:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136938.253737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpCKR-00058R-Oo; Fri, 04 Jun 2021 16:05:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136938.253737; Fri, 04 Jun 2021 16:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpCKR-00058K-Lc; Fri, 04 Jun 2021 16:05:47 +0000
Received: by outflank-mailman (input) for mailman id 136938;
 Fri, 04 Jun 2021 16:05:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wstn=K6=redhat.com=lersek@srs-us1.protection.inumbo.net>)
 id 1lpCKQ-00058E-Ne
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 16:05:46 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8085d1aa-5e0b-4978-a89d-93ed77fd757e;
 Fri, 04 Jun 2021 16:05:46 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-547-MHKNc94CNJKZJd34kIge5A-1; Fri, 04 Jun 2021 12:05:39 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B0F88100855C;
 Fri,  4 Jun 2021 16:05:37 +0000 (UTC)
Received: from lacos-laptop-7.usersys.redhat.com (ovpn-112-217.ams2.redhat.com
 [10.36.112.217])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 63B1060C05;
 Fri,  4 Jun 2021 16:05:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8085d1aa-5e0b-4978-a89d-93ed77fd757e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1622822745;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vSvNMjrJSg2okctEAuNFak697PYcIBvfWIUhzGMZvR8=;
	b=hWPSDeodjWMbbrQYFsscrx3kXhLzKnackdHuDwbJh3H+Qa0U8Lj4+WcZ7FvJXhPnRxYyoB
	+hDItFBbxpyEhkSbZIkbjsuT4AZJ4p/Z9V6b/dJNwEUMTX2lRqLY7cM5uDja/4we0m3ZTT
	fXQHpTzbgiYect75HY6ww+G/ngyV5MI=
X-MC-Unique: MHKNc94CNJKZJd34kIge5A-1
Subject: Re: [edk2-devel] [PATCH 00/43] OvmfPkg: remove Xen support from
 OvmfPkg*.dsc, in favor of OvmfXen.dsc
To: devel@edk2.groups.io, anthony.perard@citrix.com
Cc: Julien Grall <julien@xen.org>, Ard Biesheuvel
 <ardb+tianocore@kernel.org>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <20210526201446.12554-1-lersek@redhat.com>
 <71da2a3b-aab1-4ecf-7e01-16b537d841a2@redhat.com> <YLoyiqSYxPDJ7VRl@perard>
From: Laszlo Ersek <lersek@redhat.com>
Message-ID: <88389926-5982-d867-164e-b12ddbd2383d@redhat.com>
Date: Fri, 4 Jun 2021 18:05:34 +0200
MIME-Version: 1.0
In-Reply-To: <YLoyiqSYxPDJ7VRl@perard>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=lersek@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06/04/21 16:02, Anthony PERARD via groups.io wrote:
> On Wed, Jun 02, 2021 at 10:36:49AM +0200, Laszlo Ersek wrote:
>> Anthony, Julien,
>>
>> (or anyone else subscribed to xen-devel -- CC'd now),
>>
>> On 05/26/21 22:14, Laszlo Ersek wrote:
>>> Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
>>> Repo:     https://pagure.io/lersek/edk2.git
>>> Branch:   xen_split_bz_2122
>>
>> can you please build the OvmfXen platform on this branch, and check if
>> there are any regressions?
> 
> Hi Laszlo,
> 
> OvmfXen seems to be working fine with that branch applied.

Series merged as commit range 924c2b847f0b..51adb689e1db, via
<https://github.com/tianocore/edk2/pull/1689>.

Thanks
Laszlo



From xen-devel-bounces@lists.xenproject.org Fri Jun 04 17:20:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 17:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136949.253752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDUT-0003zX-Ha; Fri, 04 Jun 2021 17:20:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136949.253752; Fri, 04 Jun 2021 17:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDUT-0003zQ-DL; Fri, 04 Jun 2021 17:20:13 +0000
Received: by outflank-mailman (input) for mailman id 136949;
 Fri, 04 Jun 2021 17:20:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDUS-0003zG-DC; Fri, 04 Jun 2021 17:20:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDUS-0000xN-67; Fri, 04 Jun 2021 17:20:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDUR-0000Bw-Rc; Fri, 04 Jun 2021 17:20:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDUR-0006rb-P5; Fri, 04 Jun 2021 17:20:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ojiWlJS7+qT2utj0qISfMpV7ETK1b+JLPWd0rAGQe4I=; b=5oKCk/h+HppZkz1Y7T67+FFwVu
	/uqdhUGwNx7/ffiFktAKdNvXRZhbw2BR10YLCLyZmMELT2Pi5jqVZsQEx9QS7MVQW4nTGeIWXUB2t
	mBRh1kppQFmRUDj040Oq7NUDGBUn7BtoMi+V6KGnb6pEoTjqbZL6pjcs9y75XK9an4G4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162361-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 162361: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=7292e4a0a8f58333ccbd2d0d47242f9865083c9c
X-Osstest-Versions-That:
    seabios=81433aa8a19b36f9e3d50697608c93d8a28bf772
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 17:20:11 +0000

flight 162361 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162361/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162273
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162273
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162273
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162273
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162273
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              7292e4a0a8f58333ccbd2d0d47242f9865083c9c
baseline version:
 seabios              81433aa8a19b36f9e3d50697608c93d8a28bf772

Last test of basis   162273  2021-05-31 05:41:03 Z    4 days
Testing same since   162361  2021-06-04 06:09:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   81433aa..7292e4a  7292e4a0a8f58333ccbd2d0d47242f9865083c9c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 17:35:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 17:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136960.253766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDjM-0005VL-Ql; Fri, 04 Jun 2021 17:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136960.253766; Fri, 04 Jun 2021 17:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDjM-0005VE-Nc; Fri, 04 Jun 2021 17:35:36 +0000
Received: by outflank-mailman (input) for mailman id 136960;
 Fri, 04 Jun 2021 17:35:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDjL-0005V4-9v; Fri, 04 Jun 2021 17:35:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDjL-0001Cx-58; Fri, 04 Jun 2021 17:35:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDjK-0000mU-Su; Fri, 04 Jun 2021 17:35:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpDjK-0004T9-SN; Fri, 04 Jun 2021 17:35:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=exAT1rBVPOSO6GA3tHCs9ko+p7SZZvz2k274nG+yFL4=; b=Q2u+hfDyKGJjl4Dg/cC+6muzDd
	HlkcuQJetKRt0QKhQ9QfNMpyN9zkCpeu4wONIM8sQi8k8eH5M7LWxlWLHZy28kWEioi0yUS8W0o5L
	C/5VtzcVyc4VHqrAlEWQPlZhd69OKJgqhpD1Prc7nkWRIEFoWcKQa4yJeEkKwO4XP2wE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162368-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162368: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    ovmf=924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 17:35:34 +0000

flight 162368 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162368/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate             starved n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 hosts-allocate              starved n/a

version targeted for testing:
 ovmf                 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    0 days
Testing same since   162368  2021-06-04 15:42:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         starved 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c410ad4da4..924c2b847f  924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 17:48:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 17:48:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136967.253780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDvs-0006yN-2I; Fri, 04 Jun 2021 17:48:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136967.253780; Fri, 04 Jun 2021 17:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpDvr-0006yG-UF; Fri, 04 Jun 2021 17:48:31 +0000
Received: by outflank-mailman (input) for mailman id 136967;
 Fri, 04 Jun 2021 17:48:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BvY7=K6=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lpDvq-0006yA-Me
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 17:48:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62332187-af2e-48a8-afbd-98b2f3991c10;
 Fri, 04 Jun 2021 17:48:30 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D1B72611CE;
 Fri,  4 Jun 2021 17:48:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62332187-af2e-48a8-afbd-98b2f3991c10
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622828909;
	bh=itHLj7KgFnb/FecADbuq7I4biuZGdUj1du28UW6T8SI=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Iqm0inp9VgNpzeAFuP421rSsgCv0Qhkv5QnQPUkLUJFiTIxp7TFY3QR4OyPttmT96
	 63UJ1NvLjEWUu6sIeGlZQM4Ted9nVFpDXdWeD/OOk+ONJBJL5Ar0gZ0w9EXx03CeNz
	 9hxeLBPU+4Vs1Iu13rVy6icMo7fyuPwvCQ+cWk5bEidGKBw04BNS3wfiw4K6ZGCyPX
	 6cUjTh3kCX/N4K95NY0l8Alu5nPdSl//4dOSMljRD2azPbGNPTGPvDRtwrHFnkFOCZ
	 c64/aftHtK6fT63h1C50fed28UcM2UZbSHegeAkgG/khU8kRyqVyIs14SrPvQtO8GW
	 fslW9h7sjgBEQ==
Date: Fri, 4 Jun 2021 18:48:18 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v8 00/15] Restricted DMA
Message-ID: <20210604174818.GC3703@willie-the-truck>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

Hi Claire,

On Thu, May 27, 2021 at 08:58:30PM +0800, Claire Chang wrote:
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
> 
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
> full chain of exploits; [2], [3]).
> 
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> 
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
> 
> v8:
> - Fix reserved-memory.txt and add the reg property in example.
> - Fix sizeof for of_property_count_elems_of_size in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Apply Will's suggestion to try the OF node having DMA configuration in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
> - Add error message for PageHighMem in
>   kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
>   rmem_swiotlb_setup.
> - Fix the message string in rmem_swiotlb_setup.

Thanks for the v8. It works for me out of the box on arm64 under KVM, so:

Tested-by: Will Deacon <will@kernel.org>

Note that something seems to have gone wrong with the mail threading, so
the last 5 patches ended up as a separate thread for me. Probably worth
posting again with all the patches in one place, if you can.

Cheers,

Will


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 18:05:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 18:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.136974.253791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpECS-0000wy-Kf; Fri, 04 Jun 2021 18:05:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 136974.253791; Fri, 04 Jun 2021 18:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpECS-0000wr-HQ; Fri, 04 Jun 2021 18:05:40 +0000
Received: by outflank-mailman (input) for mailman id 136974;
 Fri, 04 Jun 2021 18:05:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lpECR-0000wl-AM
 for xen-devel@lists.xenproject.org; Fri, 04 Jun 2021 18:05:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lpECO-0001nH-DV; Fri, 04 Jun 2021 18:05:36 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lpECO-0007V8-7f; Fri, 04 Jun 2021 18:05:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=y/27EYTGRUek+11HnrhLto5u7roRALSw57xQF75A8Og=; b=pzkOQ8vOrzk1iOmeFytIJdqx5Z
	d5myXSOSyEyNvlHOA2Vbct61dYGnXmRxPpluV7hObDvz7MkEkrHF88+QAGrlbsbNuTpHoErzx1nbo
	1Xtjqnqln0eY2D1tMDBwiv+ZuV+FJD7fFixgD7iHWNcUBviYVB/DRCeqLehXOCP6i8Og=;
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20210604060214.14924-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e765fa38-f2f6-ec0f-e6d5-a01d333488d9@xen.org>
Date: Fri, 4 Jun 2021 19:05:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210604060214.14924-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 04/06/2021 07:02, Juergen Gross wrote:
> This is a resend of V2 with the Acks folded in.

Thank you for resending with the Acks. It is now pushed!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 19:46:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 19:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137034.253880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpFm7-00054A-Dd; Fri, 04 Jun 2021 19:46:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137034.253880; Fri, 04 Jun 2021 19:46:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpFm7-000543-AZ; Fri, 04 Jun 2021 19:46:35 +0000
Received: by outflank-mailman (input) for mailman id 137034;
 Fri, 04 Jun 2021 19:46:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpFm6-00053t-GI; Fri, 04 Jun 2021 19:46:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpFm6-0003Rd-4m; Fri, 04 Jun 2021 19:46:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpFm5-0000En-Re; Fri, 04 Jun 2021 19:46:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpFm5-000096-RA; Fri, 04 Jun 2021 19:46:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M5R1oRDFlREW2+4cfGxEADm22gUTyezsUlXBIoF6CYQ=; b=52iDITfRdMpxxptj5FMNvVPxwI
	BerwN0CY1QowQLnMcR9gT/oXu8zRWpaX7ytgellUbX5uuERM0PQB2SkT/RdStJndL8VFRQncDwFYE
	oYLL8Q9DZKVAfJOT6+voCGRMH+q0gjNh2GtfF28TPpigk1MhHfoqaCO4TykeCgflLQjM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162370-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162370: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1a0f2fe2297d122a08fee2b26de5de995fdeca13
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 19:46:33 +0000

flight 162370 xen-unstable-smoke real [real]
flight 162372 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162370/
http://logs.test-lab.xenproject.org/osstest/logs/162372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1a0f2fe2297d122a08fee2b26de5de995fdeca13
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Testing same since   162370  2021-06-04 17:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 22:16:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 22:16:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137047.253908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpI7D-0001aD-OX; Fri, 04 Jun 2021 22:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137047.253908; Fri, 04 Jun 2021 22:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpI7D-0001a6-LX; Fri, 04 Jun 2021 22:16:31 +0000
Received: by outflank-mailman (input) for mailman id 137047;
 Fri, 04 Jun 2021 22:16:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpI7B-0001Zw-Im; Fri, 04 Jun 2021 22:16:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpI7B-0006DU-Ao; Fri, 04 Jun 2021 22:16:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpI7A-0007VQ-T7; Fri, 04 Jun 2021 22:16:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpI7A-0003Bn-Sh; Fri, 04 Jun 2021 22:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TqrcnI/IokOb9oCAlejZ/8uPDqLD6Uo8S89JPrSlR3Y=; b=gEtksjJlSk7zm3ZDXAAGt36aj6
	HsIt+LWAHG/xTJgiO6QduoSeyhvzPyLRSz/BoYwdt0OjCxk5Dr7h5wxg9YxEJKGHbMcESHBO1mGpp
	iEgh3cN8TYqxT+HtpPKf0I3gfCKp2g510F8IvQjeQe5y9PXnT8H55ElJZoHyV1IpGhac=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162362: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=453d9c61dd5681159051c6e4d07e7b2633de2e70
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 22:16:28 +0000

flight 162362 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162362/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                453d9c61dd5681159051c6e4d07e7b2633de2e70
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  288 days
Failing since        152659  2020-08-21 14:07:39 Z  287 days  532 attempts
Testing same since   162356  2021-06-04 00:39:31 Z    0 days    2 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 168469 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 04 22:38:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Jun 2021 22:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137056.253926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpISd-0003w2-LL; Fri, 04 Jun 2021 22:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137056.253926; Fri, 04 Jun 2021 22:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpISd-0003vv-IO; Fri, 04 Jun 2021 22:38:39 +0000
Received: by outflank-mailman (input) for mailman id 137056;
 Fri, 04 Jun 2021 22:38:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpISc-0003vf-2t; Fri, 04 Jun 2021 22:38:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpISb-0006Y9-To; Fri, 04 Jun 2021 22:38:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpISb-0001Go-Ko; Fri, 04 Jun 2021 22:38:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpISb-0004wo-KH; Fri, 04 Jun 2021 22:38:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fgOinzt9HCSQYyKLziGNobIBS5/wzvLKSEpXQqtrd5E=; b=BECHT4UwBDFoAwLLOvAsPdC/9y
	jKrKnnDv/D7VMI83YdMTTn4rh5Zbfb5WeapR4UgR7S8b3csR2+ojxoygEIIE8vpUPt8I0h7rCDW73
	wESpLCKzb9wsbOS7SkxJTEXh/Z1l6TWW127qQ2tl+hPTnjNH637EqLMq5T45LfI+e+YI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162374-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162374: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Jun 2021 22:38:37 +0000

flight 162374 xen-unstable-smoke real [real]
flight 162377 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162374/
http://logs.test-lab.xenproject.org/osstest/logs/162377/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Failing since        162370  2021-06-04 17:01:35 Z    0 days    2 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable
    since kernel 4.14, as only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 00:47:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 00:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137070.253952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpKSl-0007lt-Nk; Sat, 05 Jun 2021 00:46:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137070.253952; Sat, 05 Jun 2021 00:46:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpKSl-0007lm-KS; Sat, 05 Jun 2021 00:46:55 +0000
Received: by outflank-mailman (input) for mailman id 137070;
 Sat, 05 Jun 2021 00:46:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nsHC=K7=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
 id 1lpKSk-0007lg-Nq
 for xen-devel@lists.xenproject.org; Sat, 05 Jun 2021 00:46:55 +0000
Received: from mail-40133.protonmail.ch (unknown [185.70.40.133])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa2366c3-c3ec-4ff8-b342-fb619dd8327e;
 Sat, 05 Jun 2021 00:46:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa2366c3-c3ec-4ff8-b342-fb619dd8327e
Date: Sat, 05 Jun 2021 00:46:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=protonmail; t=1622854010;
	bh=LtS95j0B6WfWbjo7dU8GYscPr0kp6+CSg3pWNzHmaAs=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=fvoaA3HjluTyj64ty0hy2yIdg6TU3jgzuK9Ow/QUhY/aGNjPCUfT1dXvl32+QLIzE
	 6+WYlIFG6BRuwGSdTdwMwXjsZQ8Jam58kksJMV5ZX0sCJoALwiYSirRxLsU26vftEs
	 uKN/h5v92kBqSj2v+DbbY4buzVrVqpFnvQ0+wb9Q=
To: Jan Beulich <jbeulich@suse.com>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Reply-To: Dylanger Daly <dylangerdaly@protonmail.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
Message-ID: <10Q_2iTQLicCwiahWeAjyi3C7Db3tcwpkF4dwd8gezGByiwoJpfHNHwHW8gonbgyhQ7LWU8xv1TB_fd18e05-Cy8CYUfFhZbgLneMwbOzNk=@protonmail.com>
In-Reply-To: <0e963858-6834-96de-4bf2-956f905160b4@suse.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com> <815f3bc3a28a165e8fa41c6954a6d00db656e3c3.camel@suse.com> <Y-6A5xIyjtCDwG3tBoyQnWpypF_eebCmuCjyUovcwd-ZD6wgFvNmR8VAdscAiwKp41toxpDxsgeF10FsEBn2Xm14b8bl9cniO_-TRNwm9mI=@protonmail.com> <1fc0e850-8a08-760f-c8cb-ad73dda4a37b@suse.com> <PGn1fJFla-7vPl7QFdkkBX8ASy2cWw-f2HBW7rWE5KgeFEZ_kNUp8Yq5zMaGyS38wMWofVshR75o1jD1rXZeTWtE8XhKQvEq_Dmgsnu-Uy0=@protonmail.com> <4916dec1-1bb9-7e6f-2fe5-577bbab92861@suse.com> <d7aaa4e7fa3083ff5bb18e18c5cd8274194109ba.camel@suse.com> <qcVhNDiGu6deufXzsHKbjEr4n3JuLC2cFNc1ORb02vl1IaPjm-37uFkXANQ-i7v77zP1GFxbYoTEG713C4EyHYBrE5YPvA5bXdPc4Brxg5U=@protonmail.com> <0e963858-6834-96de-4bf2-956f905160b4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

Hi all,

Lenovo released a new UEFI update for the Lenovo X13/T14s; the changelog is
here: https://download.lenovo.com/pccbbs/mobiles/r1cuj63wd.txt

It lists "Fixed an issue that Fixed TSC synchronization failed under linux."
as a fix. I can confirm that after removing the tsc=unstable command line
option everything is functioning perfectly.

The root cause of this issue was indeed Lenovo's shoddy firmware.

Thank you to everyone.


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 02:00:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 02:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137079.253968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpLbb-0004Hx-VZ; Sat, 05 Jun 2021 02:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137079.253968; Sat, 05 Jun 2021 02:00:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpLbb-0004Hq-S2; Sat, 05 Jun 2021 02:00:07 +0000
Received: by outflank-mailman (input) for mailman id 137079;
 Sat, 05 Jun 2021 02:00:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0LGE=K7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lpLba-00048o-O9
 for xen-devel@lists.xenproject.org; Sat, 05 Jun 2021 02:00:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6db06b23-1e67-4b3f-80f1-27af43058998;
 Sat, 05 Jun 2021 02:00:04 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CA69C613C9;
 Sat,  5 Jun 2021 02:00:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6db06b23-1e67-4b3f-80f1-27af43058998
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622858404;
	bh=LQsSiJJo1OkBDgdCb85Krfo9Ty542hu8hhJJaHwiC1g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KFiB2IrIPHR/WHSeKRvQKNRso/H/JiOqJCZk0XwazaELAAdvgmWGdwL23EYU8XJMk
	 rdFEPzOD3pIlcGMhBEO8vOCQE3gzunYpGVQ/Mb90030rQmqtueia7m10BZ/v87mbkt
	 vYRTBrXSBnYnvfTKUstXd8ahaDK9lbzge9NfqKcYB3meldhbXFQDrktTM3adeXLbKI
	 yvjomaM0QZMoV1D//h7KHyIrkMZFtDlVzTG5HsiiB6D7VJ9z0eoJVaCXb6kT9S4O8j
	 dg/gDothKXuOMY1TVrDpSzlNhBix2CwM+f5Rc2uNMjgDlnRv3PH4+fztcVARER/zpl
	 3aoNIfNgAkWsg==
Date: Fri, 4 Jun 2021 19:00:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Penny Zheng <Penny.Zheng@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
    nd <nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
In-Reply-To: <VE1PR08MB52151410985CD8E5C577F797F73B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2106041416120.7272@sstabellini-ThinkPad-T480s>
References: <20210518052113.725808-1-penny.zheng@arm.com> <20210518052113.725808-2-penny.zheng@arm.com> <e1b90f06-92d2-11da-c556-4081907124b8@xen.org> <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com> <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com> <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org> <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com> <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s> <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com> <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
 <VE1PR08MB52151410985CD8E5C577F797F73B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 4 Jun 2021, Penny Zheng wrote:
> > > > In system device tree we would use a property called "memory" to
> > > > specify one or more ranges, e.g.:
> > > >
> > > >     domU1 {
> > > >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> > > >
> > > > Unfortunately for xen,domains we have already defined "memory" to
> > > > specify an amount, rather than a range. That's too bad because the
> > > > most natural way to do this would be:
> > > >
> > > >     domU1 {
> > > >         size = <amount>;
> > > >         memory = <ranges>;
> > > >
> > > > When we'll introduce native system device tree support in Xen we'll
> > > > be able to do that. For now, we need to come up with a different property.
> > > > For instance: "static-memory" (other names are welcome if you have a
> > > > better suggestion).
> > > >
> > > > We use a new property called "static-memory" together with
> > > > #static-memory-address-cells and #static-memory-size-cells to define
> > > > how many cells to use for address and size.
> > > >
> > > > Example:
> > > >
> > > >     domU1 {
> > > >         #static-memory-address-cells = <2>;
> > > >         #static-memory-size-cells = <2>;
> > > >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> > >
> > > This is pretty similar to what Penny suggested. But I dislike it
> > > because of the amount of code that needs to be duplicated with the
> > > reserved memory.
> > 
> > Where is the code duplication? In the parsing itself?
> > 
> > If there is code duplication, can we find a way to share some of the code to
> > avoid the duplication?
> 
> Both your opinions are so convincing... :/
> 
> Correct me if I am wrong:
> I think the duplication Julien means is here; see commit
> https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-3-penny.zheng@arm.com/
> I added another similar loop in dt_unreserved_regions to unreserve static memory.
> For this part, I could try to extract the common code.
> 
> But another part I think is just this commit, where I added another check for static memory
> in early_scan_node:
> 
> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> +                              size_cells, &bootinfo.static_mem);
> 
> TBH, I don't know how to fix this part...

Is it only one loop in dt_unreserved_regions and another call to
process_static_memory? If you can make the code common in
dt_unreserved_regions there wouldn't be much duplication left.


To explain my point of view a bit better, I think we have a lot more
freedom in the Xen implementation compared to the device tree
specification. For the sake of an example, let's say that we wanted Xen
to reuse bootinfo.reserved_mem for both reserved-memory and
static-memory. I don't know if it is even a good idea (I haven't looked
into it, it is just an example) but I think it would be OK, we could
decide to do that.

We have less room for flexibility in the device tree specification.
/reserved-memory is for special configurations of 1 domain only. I don't
think we could add domain static memory allocations to it.


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 02:17:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 02:17:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137086.253979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpLsN-0005p6-Dv; Sat, 05 Jun 2021 02:17:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137086.253979; Sat, 05 Jun 2021 02:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpLsN-0005oz-B0; Sat, 05 Jun 2021 02:17:27 +0000
Received: by outflank-mailman (input) for mailman id 137086;
 Sat, 05 Jun 2021 02:17:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpLsM-0005op-Bu; Sat, 05 Jun 2021 02:17:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpLsM-000802-4L; Sat, 05 Jun 2021 02:17:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpLsL-0004e5-Pr; Sat, 05 Jun 2021 02:17:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpLsL-0007eY-PK; Sat, 05 Jun 2021 02:17:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=28Vgpj6KTjXVb89lka7iwLeHj/XvnsCvgxn2g3MjKmk=; b=46zpolG1AoSNQMNLMccxWpamlo
	zRomOhoNWki1PvQCOhMDBnXTNwZE8AeWButyLWZczMylS/WW2NmJUfAWzYPbP3WUopqvbrTKFQZCc
	kWqpVhUNUiR/QDD7MdEouSS43lVzfA+/wDQARf3JPjZyI2LE/Cf83gFSlSG2zinRqGHs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162366-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 162366: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=eae0dfac891f521ceb6c4733e22a0cd718f336c0
X-Osstest-Versions-That:
    xen=280d472f4fca070a10377e318d90cabfc2540810
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 02:17:25 +0000

flight 162366 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162366/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161772
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161772
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161772
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161772
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161772
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161772
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161772
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161772
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161772
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161772
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161772
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  eae0dfac891f521ceb6c4733e22a0cd718f336c0
baseline version:
 xen                  280d472f4fca070a10377e318d90cabfc2540810

Last test of basis   161772  2021-05-04 13:07:50 Z   31 days
Testing same since   162366  2021-06-04 13:08:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chao Gao <chao.gao@intel.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   280d472f4f..eae0dfac89  eae0dfac891f521ceb6c4733e22a0cd718f336c0 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 02:33:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 02:33:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137108.254035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpM7S-00007x-Fh; Sat, 05 Jun 2021 02:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137108.254035; Sat, 05 Jun 2021 02:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpM7S-00007q-Cj; Sat, 05 Jun 2021 02:33:02 +0000
Received: by outflank-mailman (input) for mailman id 137108;
 Sat, 05 Jun 2021 02:33:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpM7R-00007g-GX; Sat, 05 Jun 2021 02:33:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpM7R-0008HJ-1i; Sat, 05 Jun 2021 02:33:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpM7Q-0005BX-Lf; Sat, 05 Jun 2021 02:33:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpM7Q-0000tF-LA; Sat, 05 Jun 2021 02:33:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KWvK5pNXdJCnS4dMn7aofsXegzN1+562+QqL3R+WCio=; b=hMo4YUBpYtzFeKKxR4kdVmL77k
	s5RigkTKECSuE6mpDOveek8cTS5S0/i9dnFWF3QYWh6MQjizbYUqVy63ltI53fB1QCsYL015in+sa
	EwRad0PpyTauI2X88vEQvQA4PmE5Ft0/+eFHODebC0OFsqcBLrZ32J6YWLgyFg2QyFn0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162381-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162381: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 02:33:00 +0000

flight 162381 xen-unstable-smoke real [real]
flight 162384 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162381/
http://logs.test-lab.xenproject.org/osstest/logs/162384/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Failing since        162370  2021-06-04 17:01:35 Z    0 days    3 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    




Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    Some definitions are now used only in libxenguest. Move them from
    libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest or in xl.
    There is a single exception: xc_core_arch_auto_translated_physmap()
    is used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open-coding the mapping of the p2m list, use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. Note that this code
    is needed only for colo/remus.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable
    since kernel 4.14, as only the linear p2m table is kept if Xen indicates
    support for it. Unfortunately xc_core_arch_map_p2m() still supports
    only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally, the mapped p2m no longer has a fixed length, so the
    interface of the mapping functions needs to be adapted. To avoid
    adding even more parameters, expand struct domain_info_context and
    pass a pointer to it as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read is already
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 03:34:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 03:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137128.254090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpN4Z-0006Fn-NJ; Sat, 05 Jun 2021 03:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137128.254090; Sat, 05 Jun 2021 03:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpN4Z-0006Fg-Ip; Sat, 05 Jun 2021 03:34:07 +0000
Received: by outflank-mailman (input) for mailman id 137128;
 Sat, 05 Jun 2021 03:34:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpN4Y-0006FS-WD; Sat, 05 Jun 2021 03:34:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpN4Y-0000qF-QG; Sat, 05 Jun 2021 03:34:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpN4Y-0000Sj-I9; Sat, 05 Jun 2021 03:34:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpN4Y-0005k7-He; Sat, 05 Jun 2021 03:34:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bn1XcKsjM2LK6LDC2a5YBu86hkRsGHtlpbqzB8P4w/4=; b=NTHH716gY1VhZkUUCVOeNAeRGx
	6XThv1jENpmSumPm/BUMLuErjLlBKKH10tV5DyA1xw/OoUwDJZMcglxUVe+CukK7DRzIbXow0l0gS
	4dwXyhugx4h+c03XpUYngRFFQB24E+Q7AEECwE0EtD4RQ/G1aiek0I+IKqc+vbb4jAZc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 162365: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b046e05736deecbd8254540c5e45444115fb1c98
X-Osstest-Versions-That:
    xen=10f0b2d49376865d49680f06c52b451fabce3bb5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 03:34:06 +0000

flight 162365 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162365/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161771
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161771
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161771
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161771
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161771
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161771
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161771
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161771
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161771
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161771
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161771
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b046e05736deecbd8254540c5e45444115fb1c98
baseline version:
 xen                  10f0b2d49376865d49680f06c52b451fabce3bb5

Last test of basis   161771  2021-05-04 13:07:00 Z   31 days
Testing same since   162365  2021-06-04 13:08:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chao Gao <chao.gao@intel.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   10f0b2d493..b046e05736  b046e05736deecbd8254540c5e45444115fb1c98 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 07:20:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 07:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137145.254128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpQbl-0001pF-TH; Sat, 05 Jun 2021 07:20:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137145.254128; Sat, 05 Jun 2021 07:20:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpQbl-0001p8-Ps; Sat, 05 Jun 2021 07:20:37 +0000
Received: by outflank-mailman (input) for mailman id 137145;
 Sat, 05 Jun 2021 07:20:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpQbj-0001oy-MG; Sat, 05 Jun 2021 07:20:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpQbj-00052b-CP; Sat, 05 Jun 2021 07:20:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpQbi-0003dL-V8; Sat, 05 Jun 2021 07:20:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpQbi-0007Pa-Uc; Sat, 05 Jun 2021 07:20:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gz7tdqC4MMkpZz0X94W+FUl0alvAvQzwbMhX2CGOyps=; b=CJXV557oqC+lJN1PnPjKCI64KC
	WycY/5D3DwDY7mnC25wIXSoRhUk00KYvHokk/bzFHdKLiR3Sk10HvjBNUV52bp75c1mxK/C2ZavSL
	mk3XhRyf6YFxndnDlvpEVjfO+sNMaKUn+d/O57eeXcvUl0VDV5iWK0e7qR3sJc3Km/bI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162388-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162388: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 07:20:34 +0000

flight 162388 xen-unstable-smoke real [real]
flight 162393 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162388/
http://logs.test-lab.xenproject.org/osstest/logs/162393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Failing since        162370  2021-06-04 17:01:35 Z    0 days    4 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable
    since kernel 4.14, as only the linear p2m table is kept if Xen indicates
    it supports that. Unfortunately xc_core_arch_map_p2m() still supports
    only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 09:23:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 09:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137164.254174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpSW5-00058N-FS; Sat, 05 Jun 2021 09:22:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137164.254174; Sat, 05 Jun 2021 09:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpSW5-00058G-C4; Sat, 05 Jun 2021 09:22:53 +0000
Received: by outflank-mailman (input) for mailman id 137164;
 Sat, 05 Jun 2021 09:22:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpSW3-00057g-MR; Sat, 05 Jun 2021 09:22:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpSW3-0007aA-GE; Sat, 05 Jun 2021 09:22:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpSW3-0001W3-6x; Sat, 05 Jun 2021 09:22:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpSW3-0001Cq-6Q; Sat, 05 Jun 2021 09:22:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8UfWv7DCbHHI0h1JuiDQOq1viP2grzu4FT766xicg5E=; b=ztgv+lwFcc2UwKvq7mTo1MZwIQ
	dOzGoG0yaHyyHhRU5FgRiXrjLP6tG8cDBYKTNp0IoynPQMr9vcUbF+Z50M6OQWXBMGSdkxcTbMPJ1
	gS+Gr7y3r0mG79uYGG9NfLbwLVFJYYEXphiWzVvGoz2GH+UXziSK7PrNKpPviYT1G+kI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 162367: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ef8b2357d83442b5d2e7607379a935d4f8b35416
X-Osstest-Versions-That:
    xen=284132938900ce8c3b11babf7255f5c6dbb21716
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 09:22:51 +0000

flight 162367 xen-4.13-testing real [real]
flight 162396 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162367/
http://logs.test-lab.xenproject.org/osstest/logs/162396/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162396-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 161770

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161770
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161770
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161770
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161770
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161770
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161770
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161770
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161770
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161770
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161770
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161770
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ef8b2357d83442b5d2e7607379a935d4f8b35416
baseline version:
 xen                  284132938900ce8c3b11babf7255f5c6dbb21716

Last test of basis   161770  2021-05-04 13:07:00 Z   31 days
Testing same since   162367  2021-06-04 13:36:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chao Gao <chao.gao@intel.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2841329389..ef8b2357d8  ef8b2357d83442b5d2e7607379a935d4f8b35416 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 10:41:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 10:41:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137179.254200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpTjy-0004Dx-Mx; Sat, 05 Jun 2021 10:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137179.254200; Sat, 05 Jun 2021 10:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpTjy-0004Dq-JT; Sat, 05 Jun 2021 10:41:18 +0000
Received: by outflank-mailman (input) for mailman id 137179;
 Sat, 05 Jun 2021 10:41:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpTjx-0004Dg-FB; Sat, 05 Jun 2021 10:41:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpTjx-0000Tu-58; Sat, 05 Jun 2021 10:41:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpTjw-0005IV-Su; Sat, 05 Jun 2021 10:41:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpTjw-0002E3-SQ; Sat, 05 Jun 2021 10:41:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1CXvJ2PiCTEwYys6bKz9t7drHhqUIX+TBh8beHgtfGo=; b=FG8RV1paGZbFiQ0CVcX4zHlNxW
	h/lqldB71cp5xYR4EwpoAnCOICKuqjxYJljtO9VGaaWHBcLv2IgkD8Uxbb+tA9EScvNQ6K31uQ1eu
	km2tu2OIdMyUqhLaEFN4Gl+4mMZwEKaNnPiFvAg0lRiRu76psqeLAKpLsysjrEnGN+TI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162395-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162395: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 10:41:16 +0000

flight 162395 xen-unstable-smoke real [real]
flight 162398 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162395/
http://logs.test-lab.xenproject.org/osstest/logs/162398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Failing since        162370  2021-06-04 17:01:35 Z    0 days    5 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when the
    bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 11:27:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 11:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137187.254215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpUSY-0008IG-4f; Sat, 05 Jun 2021 11:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137187.254215; Sat, 05 Jun 2021 11:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpUSY-0008I9-1O; Sat, 05 Jun 2021 11:27:22 +0000
Received: by outflank-mailman (input) for mailman id 137187;
 Sat, 05 Jun 2021 11:27:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpUSX-0008Hz-AU; Sat, 05 Jun 2021 11:27:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpUSX-0001DT-4w; Sat, 05 Jun 2021 11:27:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpUSW-0006nj-Th; Sat, 05 Jun 2021 11:27:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpUSW-0000BK-TC; Sat, 05 Jun 2021 11:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hnXcSurU5/QmhqP5qf7dmqC4noJQiDdJUOpaWmNEo+4=; b=4GMJK0AAEjmc5xKft07qJBJOYO
	rEnxTf2Z8vUYh2hSwfIxlc5sbhQaR/CbK5za8eCqhuSAmseZxlem771L27rzqvIsAnQTuN2+TVThV
	pIjOepmu8D9AjfLFRJUwjlJ17fIvJjlD/oWNDxr5UW1b+tdB3J1DfYwNyZsS6Ld3DxAY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162371-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162371: tolerable FAIL - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:nonblocking
X-Osstest-Versions-This:
    ovmf=51adb689e1db695cffdeeacafad218768fbc018c
X-Osstest-Versions-That:
    ovmf=924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 11:27:20 +0000

flight 162371 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162371/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail starved in 162368
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail starved in 162368

version targeted for testing:
 ovmf                 51adb689e1db695cffdeeacafad218768fbc018c
baseline version:
 ovmf                 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0

Last test of basis   162368  2021-06-04 15:42:59 Z    0 days
Testing same since   162371  2021-06-04 17:41:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   924c2b847f..51adb689e1  51adb689e1db695cffdeeacafad218768fbc018c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 12:49:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 12:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137206.254241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpVk7-0007Xw-Qk; Sat, 05 Jun 2021 12:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137206.254241; Sat, 05 Jun 2021 12:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpVk7-0007Xp-Ml; Sat, 05 Jun 2021 12:49:35 +0000
Received: by outflank-mailman (input) for mailman id 137206;
 Sat, 05 Jun 2021 12:49:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpVk6-0007Xf-Jq; Sat, 05 Jun 2021 12:49:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpVk6-0002Vo-BO; Sat, 05 Jun 2021 12:49:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpVk6-0001Du-3K; Sat, 05 Jun 2021 12:49:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpVk6-0003be-2p; Sat, 05 Jun 2021 12:49:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UBdZ/FrYS58gERH2liyfRVjumq8Nmt8XnrGxQrovhp8=; b=uXC2Zro/icnkuZc6NuhquykCtz
	slHWw9u8WFYQr0hKcx4fX4Sw1QcD2pyNDePD/aFYpGqCDyyr4IjDQDKHrpKnpX+YX/vRMf68NwrA/
	ByYlottvhybkRJ5yKml3qfvw394opsCJnl8vDrfF60FpqYotsky3abTiN82Xoz7NmJB4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162390: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=619968a6801491ece52cc6c20fe9a018e0d24d7a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 12:49:34 +0000

flight 162390 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162390/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              619968a6801491ece52cc6c20fe9a018e0d24d7a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  330 days
Failing since        151818  2020-07-11 04:18:52 Z  329 days  322 attempts
Testing same since   162390  2021-06-05 04:20:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60472 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 13:32:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 13:32:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137217.254260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWPq-0003rQ-6y; Sat, 05 Jun 2021 13:32:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137217.254260; Sat, 05 Jun 2021 13:32:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWPq-0003rJ-3x; Sat, 05 Jun 2021 13:32:42 +0000
Received: by outflank-mailman (input) for mailman id 137217;
 Sat, 05 Jun 2021 13:32:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWPo-0003r9-VH; Sat, 05 Jun 2021 13:32:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWPo-0003DL-NQ; Sat, 05 Jun 2021 13:32:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWPo-0003jR-DP; Sat, 05 Jun 2021 13:32:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWPo-0005gc-Cg; Sat, 05 Jun 2021 13:32:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ee+EXXGQJzjC8DjpC4cN1S0q59/LNkkKjWxPyjc3rQ4=; b=ug0Ii3zzgtFtYB73e7q05VJRUB
	vpixgZzNrhf5Rw2S+Yk4UcCAZwweDI/XZGddxGlFS5mhFs0ElWIDn6IZ756rt61Modg+QDDv105u/
	n17xr0wgf9YUMdUGEvEtUjNvUKdheUBrxZPArGX/RTJOdeAgEQWaGK30YmxfPhI9PWdU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162369: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f88cd3fb9df228e5ce4e13ec3dbad671ddb2146e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 13:32:40 +0000

flight 162369 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162369/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start    fail in 162358 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 14 guest-start     fail in 162358 pass in 162369
 test-arm64-arm64-xl-credit1  13 debian-fixup     fail in 162358 pass in 162369
 test-arm64-arm64-libvirt-xsm 13 debian-fixup               fail pass in 162358
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                f88cd3fb9df228e5ce4e13ec3dbad671ddb2146e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  308 days
Failing since        152366  2020-08-01 20:49:34 Z  307 days  525 attempts
Testing same since   162358  2021-06-04 02:51:33 Z    1 days    2 attempts

------------------------------------------------------------
6133 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1667449 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 14:03:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 14:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137227.254275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWtE-00078H-Rq; Sat, 05 Jun 2021 14:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137227.254275; Sat, 05 Jun 2021 14:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWtE-00078A-O5; Sat, 05 Jun 2021 14:03:04 +0000
Received: by outflank-mailman (input) for mailman id 137227;
 Sat, 05 Jun 2021 14:03:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6L7A=K7=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lpWtC-000784-Ql
 for xen-devel@lists.xenproject.org; Sat, 05 Jun 2021 14:03:02 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b401d8e-98e1-48b9-a154-17b52f48b0fe;
 Sat, 05 Jun 2021 14:03:01 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 8575C6736F; Sat,  5 Jun 2021 16:02:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b401d8e-98e1-48b9-a154-17b52f48b0fe
Date: Sat, 5 Jun 2021 16:02:57 +0200
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
	Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
	linux-block@vger.kernel.org, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org
Subject: Re: simplify gendisk and request_queue allocation for blk-mq based
 drivers
Message-ID: <20210605140257.GA13166@lst.de>
References: <20210602065345.355274-1-hch@lst.de> <YLpNqiw0LMkYWUyN@0xbeefdead.lan>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YLpNqiw0LMkYWUyN@0xbeefdead.lan>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 04, 2021 at 11:58:34AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jun 02, 2021 at 09:53:15AM +0300, Christoph Hellwig wrote:
> > Hi all,
> 
> Hi!
> 
> You wouldn't have a nice git repo to pull so one can test it easily?

git://git.infradead.org/users/hch/block.git alloc_disk-part2


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 14:08:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 14:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137235.254286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWy2-0007p8-FR; Sat, 05 Jun 2021 14:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137235.254286; Sat, 05 Jun 2021 14:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpWy2-0007p1-Bi; Sat, 05 Jun 2021 14:08:02 +0000
Received: by outflank-mailman (input) for mailman id 137235;
 Sat, 05 Jun 2021 14:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWy1-0007or-BW; Sat, 05 Jun 2021 14:08:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWy1-0003tt-4r; Sat, 05 Jun 2021 14:08:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWy0-0004to-Tl; Sat, 05 Jun 2021 14:08:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpWy0-0001aQ-TN; Sat, 05 Jun 2021 14:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XEA5m7USBvAghfbqaNNO0y/9JJFMWY+VE6ymvhEvIXc=; b=3X7FrT7gfpkHtMr18m5wFnweEb
	bTGsJpSmJ1ZtFmPqB3LmB+P/phz3XeNIlX7maKmua3+yPGFUCSDcC9J/X+Cgv7KMFa0T522oSWKHn
	JoYvKOTHptHGxav5R5MS9SkqQsmfyWkeMNLXrNvaabGIyCfMe4u+bRyGwhSeww2Rtujs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162400-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162400: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 14:08:00 +0000

flight 162400 xen-unstable-smoke real [real]
flight 162405 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162400/
http://logs.test-lab.xenproject.org/osstest/logs/162405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    3 days
Failing since        162370  2021-06-04 17:01:35 Z    0 days    6 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable
    since kernel 4.14, as only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 17:00:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 17:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137250.254312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpZeF-0006ez-5r; Sat, 05 Jun 2021 16:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137250.254312; Sat, 05 Jun 2021 16:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpZeF-0006es-2c; Sat, 05 Jun 2021 16:59:47 +0000
Received: by outflank-mailman (input) for mailman id 137250;
 Sat, 05 Jun 2021 16:59:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpZeD-0006ei-Rj; Sat, 05 Jun 2021 16:59:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpZeD-0007Jw-I8; Sat, 05 Jun 2021 16:59:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpZeD-0006fH-87; Sat, 05 Jun 2021 16:59:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpZeD-000622-7e; Sat, 05 Jun 2021 16:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cSdBspFXXIHlYsDjWd97Q6WwbTXHIYeg8gw7jXiGkk0=; b=1gWHA24mEBikXA82LQ3+PKZiMk
	MSyJBkvzmNywNWbMwI4Bc8fZNGsgMbo96A6p4H1btaRkAanxDUjBh0+tabu9wNXBUSBoLumMG4SxS
	kyrtp1MQ30Fb0IJ92UETpMELZnch2bklrGwiEiy6WxV4MfEa/4fOK82/rtAPvXufcEhc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162379-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162379: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1cbd2d914939ee6028e9688d4ba859a528c28405
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 16:59:45 +0000

flight 162379 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162379/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                1cbd2d914939ee6028e9688d4ba859a528c28405
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  289 days
Failing since        152659  2020-08-21 14:07:39 Z  288 days  533 attempts
Testing same since   162379  2021-06-04 22:37:14 Z    0 days    1 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 168995 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 17:28:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 17:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137262.254331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpa62-0001S7-O0; Sat, 05 Jun 2021 17:28:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137262.254331; Sat, 05 Jun 2021 17:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpa62-0001S0-KK; Sat, 05 Jun 2021 17:28:30 +0000
Received: by outflank-mailman (input) for mailman id 137262;
 Sat, 05 Jun 2021 17:28:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpa61-0001Rq-3i; Sat, 05 Jun 2021 17:28:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpa60-0007o8-TH; Sat, 05 Jun 2021 17:28:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpa60-0000Zx-Lo; Sat, 05 Jun 2021 17:28:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpa60-0003zf-LJ; Sat, 05 Jun 2021 17:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gHXhZdPkXa1erMLVp81bgDQDXIm3XjwMCpnZCBUfCFY=; b=wpbp4hnsbEnsa1vJjc4SZuolfX
	nbxi1w0wMiG0tLn161dojPpjGdyET1tg30JoTqKgUonb3H3/267MXHrgt54Xeq/ibxTp2SF2h4iVV
	7L4pZlxjwBX7RC1VLQNF/jWRtDN0QiGEAGYAMm75TRjm73ULB03P4FDs4rFM8NZmq0CQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162407-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162407: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:debian-fixup:fail:heisenbug
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 17:28:28 +0000

flight 162407 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162407/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          13 debian-fixup               fail pass in 162400

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162400 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162400 never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days    7 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are now used in libxenguest only.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is a single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    since from kernel 4.14 onwards only the linear p2m table is kept if Xen
    indicates it is supporting that. Unfortunately xc_core_arch_map_p2m()
    still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 05 21:24:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 21:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137277.254358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpdmL-0005s9-Lp; Sat, 05 Jun 2021 21:24:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137277.254358; Sat, 05 Jun 2021 21:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpdmL-0005s2-HD; Sat, 05 Jun 2021 21:24:25 +0000
Received: by outflank-mailman (input) for mailman id 137277;
 Sat, 05 Jun 2021 21:24:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpdmK-0005rs-Rn; Sat, 05 Jun 2021 21:24:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpdmK-0003K2-AO; Sat, 05 Jun 2021 21:24:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpdmJ-0002S8-UK; Sat, 05 Jun 2021 21:24:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpdmJ-0004s3-Tc; Sat, 05 Jun 2021 21:24:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RE4ijXH9sMXaad0cGaga87NJXoA6O0tjlzMcppqXyJU=; b=0KLYOeLuRH9qqI+F8FJZMd4jKs
	ia1m05ZStBWdTNczNaksnu2aDw3vS7Kzcah9qwMvKgCieyJoVF+kTkP/lBpEm4NOeOrGWNxK+79pL
	BPlnFylfpUBw5jZ7HfEQHxnbO1R101lL2qfRzey+noEHyG2gN7Hqi/p+QUF6Q+uqGN2k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162385-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162385: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 21:24:23 +0000

flight 162385 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162385/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162357
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162357
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162357
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162357
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162357
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162357
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162357
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162357
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162357
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162357
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162357
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162385  2021-06-05 01:51:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 05 22:12:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Jun 2021 22:12:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137287.254378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpeWM-0002HR-BA; Sat, 05 Jun 2021 22:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137287.254378; Sat, 05 Jun 2021 22:11:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpeWM-0002HK-60; Sat, 05 Jun 2021 22:11:58 +0000
Received: by outflank-mailman (input) for mailman id 137287;
 Sat, 05 Jun 2021 22:11:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpeWL-0002HA-1i; Sat, 05 Jun 2021 22:11:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpeWK-00046a-Rv; Sat, 05 Jun 2021 22:11:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpeWK-00045Q-K7; Sat, 05 Jun 2021 22:11:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpeWK-0007jq-Jd; Sat, 05 Jun 2021 22:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W7UAtcFkVhHAX2ZoZV8Yh0SaS1mR+1EJmW+ZSnnYREk=; b=HLIIsOBy+qlUBXDuLl07POt0Hm
	W1pci5vnbMaY9MUc9ow3mH+I1Bu9Ws75kYiXjXO1fL4iKbzMi1sjmwWXqKsEUkt3HE2KSwtuXYg5W
	ei56WKOp9GBMf3iMKYrmbcF0kaWrhB+YRDdQAKNABrzzNLRAVey+Dkdup/9pO9fWQPeI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162411-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162411: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Jun 2021 22:11:56 +0000

flight 162411 xen-unstable-smoke real [real]
flight 162415 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162411/
http://logs.test-lab.xenproject.org/osstest/logs/162415/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days    8 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is a single exception: xc_core_arch_auto_translated_physmap()
    is used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    since as of kernel 4.14 only the linear p2m table is kept if Xen
    indicates it is supporting that. Unfortunately xc_core_arch_map_p2m()
    still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 00:36:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 00:36:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137305.254410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpglz-0007AW-2l; Sun, 06 Jun 2021 00:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137305.254410; Sun, 06 Jun 2021 00:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpgly-0007AP-Vu; Sun, 06 Jun 2021 00:36:14 +0000
Received: by outflank-mailman (input) for mailman id 137305;
 Sun, 06 Jun 2021 00:36:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpglx-0007AF-Vm; Sun, 06 Jun 2021 00:36:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpglx-00071f-On; Sun, 06 Jun 2021 00:36:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpglx-0000fj-GZ; Sun, 06 Jun 2021 00:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpglx-00009H-G4; Sun, 06 Jun 2021 00:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WJkgT93XOBeEQ799DprZbg4SwjG3PFKBFhDeTkP7YMo=; b=UmiHhDuVPoWLZST/ZCqFr+f6Pe
	tC8YBNsGOvrKYo08rlZgMD3foihOWOEBwE8kkk8xfBm9sWq4s79Z0qsCtHIFNJZly/lHzw+2rKVyy
	xjHtoKHhnRImWS3O1k6GSJrSvfHKB09RZpcVxYbXDAlXHpwmpf3W7XgOAD07Vwg8b7cQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162404-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162404: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9d32fa5d74b148b1cba262c0c24b9a27a910909b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 00:36:13 +0000

flight 162404 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162404/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9d32fa5d74b148b1cba262c0c24b9a27a910909b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  309 days
Failing since        152366  2020-08-01 20:49:34 Z  308 days  526 attempts
Testing same since   162404  2021-06-05 13:34:40 Z    0 days    1 attempts

------------------------------------------------------------
6142 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1671637 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 02:37:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 02:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137317.254436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpifQ-0007BV-AM; Sun, 06 Jun 2021 02:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137317.254436; Sun, 06 Jun 2021 02:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpifQ-0007B3-3L; Sun, 06 Jun 2021 02:37:36 +0000
Received: by outflank-mailman (input) for mailman id 137317;
 Sun, 06 Jun 2021 02:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpifP-0007At-7t; Sun, 06 Jun 2021 02:37:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpifO-0006b3-Ts; Sun, 06 Jun 2021 02:37:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpifO-0006Qe-Bi; Sun, 06 Jun 2021 02:37:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpifO-0003r5-BF; Sun, 06 Jun 2021 02:37:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JuSalmlhN4Jf4nREbhSOh960aSd8FXiGPeBrb/UkItI=; b=XYOCdJFrQZrN+beNNRs+MOx3XB
	OJbqug85i7OIeLK+ri2lYvUpgcQ8J4PgDHQuiEB4+Hq1QweDK5daXP8bUIGasEphso0WDYmWpeouo
	So3CzmTwMl7kbtPhNpk/Z3LBwujAXz+2Tdoop65MpZ6xD+05wjfYmCJmDk/tdv/EQLSs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162417-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162417: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 02:37:34 +0000

flight 162417 xen-unstable-smoke real [real]
flight 162423 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162417/
http://logs.test-lab.xenproject.org/osstest/logs/162423/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days    9 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is a single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not
    usable, since as of kernel 4.14 only the linear p2m table is kept if
    Xen indicates it is supporting that. Unfortunately
    xc_core_arch_map_p2m() still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 04:37:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 04:37:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137329.254456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpkWb-0001Ph-3d; Sun, 06 Jun 2021 04:36:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137329.254456; Sun, 06 Jun 2021 04:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpkWa-0001PF-Rb; Sun, 06 Jun 2021 04:36:36 +0000
Received: by outflank-mailman (input) for mailman id 137329;
 Sun, 06 Jun 2021 04:36:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpkWZ-0001P5-Nu; Sun, 06 Jun 2021 04:36:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpkWZ-000086-Gn; Sun, 06 Jun 2021 04:36:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpkWZ-0004Nt-02; Sun, 06 Jun 2021 04:36:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpkWY-0003dy-Vg; Sun, 06 Jun 2021 04:36:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CXUlEslmXD/hTELebkLQ2qsTYh4JCsymJX9c1h+/Bns=; b=093oz1hpBwcTxja0LnCI6xwL0N
	9Dh+9MKdeSnuusKhLlG2qCngEcvEnhUYt8PQBE3x3B+0OewdaINyBcNxW6SNEW4PuYclzcuJYvTqB
	vCTv/Q+vMkD6tNDx3TE5kXx7XI5UMCYWAxrn7ITE7+YAqb/7pd2al3+0xb4Ioxb31fAc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162409-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162409: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6f398e533f5e259b4f937f4aa9de970f7201d166
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 04:36:34 +0000

flight 162409 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162409/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                6f398e533f5e259b4f937f4aa9de970f7201d166
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  289 days
Failing since        152659  2020-08-21 14:07:39 Z  288 days  534 attempts
Testing same since   162409  2021-06-05 17:01:49 Z    0 days    1 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 169498 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 06:54:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 06:54:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137341.254479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpmfl-00061b-RB; Sun, 06 Jun 2021 06:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137341.254479; Sun, 06 Jun 2021 06:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpmfl-00061U-NS; Sun, 06 Jun 2021 06:54:13 +0000
Received: by outflank-mailman (input) for mailman id 137341;
 Sun, 06 Jun 2021 06:54:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpmfl-00061K-78; Sun, 06 Jun 2021 06:54:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpmfk-0002pi-VJ; Sun, 06 Jun 2021 06:54:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpmfj-0002S2-Dy; Sun, 06 Jun 2021 06:54:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpmfj-0007zN-DQ; Sun, 06 Jun 2021 06:54:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SPfj93Mru3htUKVqnysFkK2WSacT+R5BarEnM0EOKRc=; b=pSlwHlkhO7WhNSTylqD0AisP3/
	xn5r+eSI9ecw7SDluflpwzyoButnyUI5DqdsR7h+lAJRU0zCLbh2LjhxyP5omY/N+A9B9TZeoOy2p
	7jdsw/k8DPWA6pjOr4sDxP+cqBOW+BvS6PaS3fFJgha8DXgUkWk5n07AuQj4cCOxEVLk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162425-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162425: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 06:54:11 +0000

flight 162425 xen-unstable-smoke real [real]
flight 162430 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162425/
http://logs.test-lab.xenproject.org/osstest/logs/162430/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days   10 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are now used only in libxenguest.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    since as of kernel 4.14 only the linear p2m table is kept if Xen
    indicates it supports that. Unfortunately xc_core_arch_map_p2m() still
    supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
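
    The off-by-one this commit describes can be modeled with a minimal,
    hypothetical C sketch (the function names and the sample value are
    illustrative only, not taken from the libxenguest sources):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Model of the bug: the value read from the shared_info data already
     * names the highest pfn in use, so subtracting 1 loses the last page. */
    static uint64_t max_pfn_buggy(uint64_t shared_info_value)
    {
        return shared_info_value - 1;   /* off by one */
    }

    static uint64_t max_pfn_fixed(uint64_t shared_info_value)
    {
        return shared_info_value;       /* value already is the correct pfn */
    }

    int main(void)
    {
        uint64_t reported = 0x1ffff;    /* example value, not a real guest's */

        assert(max_pfn_fixed(reported) == 0x1ffff);
        assert(max_pfn_buggy(reported) == 0x1fffe); /* one page short */
        return 0;
    }
    ```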

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 07:34:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 07:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137349.254493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpnIZ-0001fd-Vl; Sun, 06 Jun 2021 07:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137349.254493; Sun, 06 Jun 2021 07:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpnIZ-0001fW-Ra; Sun, 06 Jun 2021 07:34:19 +0000
Received: by outflank-mailman (input) for mailman id 137349;
 Sun, 06 Jun 2021 07:34:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpnIY-0001fM-J5; Sun, 06 Jun 2021 07:34:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpnIY-0003U5-7a; Sun, 06 Jun 2021 07:34:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpnIX-0005O0-Uj; Sun, 06 Jun 2021 07:34:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpnIX-0004Ku-UA; Sun, 06 Jun 2021 07:34:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TSlZKIoqlU40arKCg415t8vRG9ThBBbtbTUNW4F9uXQ=; b=mrAXs0wNNBd07VIrpxGYHarxxV
	TbVzR30TjOszkPsAnPaBpgUMLJ5s59SO05IEGRxp+nC75YHIoGoME12UtNBearwxh4ipmVbg8L1/u
	OL4UoRXvAU8qhSifP0TZ31PYswYmJjPliNO0aAlBRrw344UTETO3Q6sg99l/59iJTB3s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162420: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f5b6eb1e018203913dfefcf6fa988649ad11ad6e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 07:34:17 +0000

flight 162420 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162420/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f5b6eb1e018203913dfefcf6fa988649ad11ad6e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  309 days
Failing since        152366  2020-08-01 20:49:34 Z  308 days  527 attempts
Testing same since   162420  2021-06-06 00:41:06 Z    0 days    1 attempts

------------------------------------------------------------
6147 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1672382 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 09:56:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 09:56:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137367.254522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lppWB-0007w9-Qw; Sun, 06 Jun 2021 09:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137367.254522; Sun, 06 Jun 2021 09:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lppWB-0007w2-Nq; Sun, 06 Jun 2021 09:56:31 +0000
Received: by outflank-mailman (input) for mailman id 137367;
 Sun, 06 Jun 2021 09:56:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppWA-0007vs-A1; Sun, 06 Jun 2021 09:56:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppWA-0006KL-2B; Sun, 06 Jun 2021 09:56:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppW9-0003Op-Ly; Sun, 06 Jun 2021 09:56:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lppW9-0008Kd-LX; Sun, 06 Jun 2021 09:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zGYXVxfBJaFHiHv65y2iBvuLHObzV/54XEFjeX2Has8=; b=cOYRbBD9rK1QSm5Wnsh/e7sLEb
	t6brRAUQF1rb9MwfU6CMcO22VFJdU50TupJtxHgZVBy1qnY/Xo1pVwXXEOAgjkM78HxCdcidqa5LY
	bjZ/wQnxu/eRRf2yG/3mfgh3jsQrcwj8EsnsF3dGPs8F3t9DfdUVjGL9/E+DbHD5Dbw0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162427-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162427: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=619968a6801491ece52cc6c20fe9a018e0d24d7a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 09:56:29 +0000

flight 162427 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162427/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              619968a6801491ece52cc6c20fe9a018e0d24d7a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  331 days
Failing since        151818  2020-07-11 04:18:52 Z  330 days  323 attempts
Testing same since   162390  2021-06-05 04:20:07 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60472 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 10:13:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 10:13:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137378.254545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lppme-00029b-G0; Sun, 06 Jun 2021 10:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137378.254545; Sun, 06 Jun 2021 10:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lppme-00029U-BQ; Sun, 06 Jun 2021 10:13:32 +0000
Received: by outflank-mailman (input) for mailman id 137378;
 Sun, 06 Jun 2021 10:13:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppmd-00029J-3W; Sun, 06 Jun 2021 10:13:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppmc-0006h9-Th; Sun, 06 Jun 2021 10:13:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lppmc-0004Er-NS; Sun, 06 Jun 2021 10:13:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lppmc-0000NT-Mw; Sun, 06 Jun 2021 10:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dwvYZmRNwzUclLU0YmLamOXXUkZO0gUAwk08EkiNR60=; b=r84sjftbahcLr+yncE2vuL6Tdq
	Gs1brOFwlZlesaPeMrM9GDx5+aBqrQBOTNl4yphWTXCvlkXUrx/f9z7Ph+JCONGHjXWHXjfek7Z22
	i2sdE/ppAbkqYRhlO4beuzLUXZE6iORfky15eHJqalPCjkiunWSLE1E2mj+QQIHwb+PA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162432-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162432: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 10:13:30 +0000

flight 162432 xen-unstable-smoke real [real]
flight 162437 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162432/
http://logs.test-lab.xenproject.org/osstest/logs/162437/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days   11 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not
    usable since kernel 4.14, as only the linear p2m table is kept if
    Xen indicates it is supporting that. Unfortunately
    xc_core_arch_map_p2m() still supports the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
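
    [Editorial aside, not part of the commit: with a linear p2m list the
    p2m is a virtually contiguous array in the guest, so the number of
    guest pages holding it varies with guest memory size instead of
    following the fixed 3-level layout; this is why the mapping
    interface needed a variable-length result. A minimal sketch of that
    size calculation, with all names illustrative rather than taken
    from the actual libxenctrl code:]

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only: number of guest pages needed to hold a
     * linear p2m list of p2m_size entries, each entry_size bytes wide
     * (4 bytes for a 32-bit guest, 8 for a 64-bit one). */
    static size_t p2m_frames(uint64_t p2m_size, size_t entry_size,
                             size_t page_size)
    {
        return (size_t)((p2m_size * entry_size + page_size - 1) / page_size);
    }

    int main(void)
    {
        assert(p2m_frames(512, 8, 4096) == 1); /* exactly one page     */
        assert(p2m_frames(513, 8, 4096) == 2); /* spills into a second */
        return 0;
    }
    ```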

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read is already
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
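
    [Editorial aside, not part of the commit: the off-by-one can be
    sketched as below; the function name and signature are illustrative,
    not the actual libxenguest code.]

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Illustrative sketch: the value read from shared_info already is
     * the highest pfn in use, so it must be used as-is. */
    static uint64_t max_pfn_from_shinfo(uint64_t shinfo_value)
    {
        /* The buggy version subtracted 1: return shinfo_value - 1; */
        return shinfo_value;
    }

    int main(void)
    {
        /* Subtracting 1 would have dropped the guest's last page. */
        assert(max_pfn_from_shinfo(0x1000) == 0x1000);
        return 0;
    }
    ```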

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 13:47:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 13:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137400.254583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpt7q-0005ew-9P; Sun, 06 Jun 2021 13:47:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137400.254583; Sun, 06 Jun 2021 13:47:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpt7q-0005eo-3X; Sun, 06 Jun 2021 13:47:38 +0000
Received: by outflank-mailman (input) for mailman id 137400;
 Sun, 06 Jun 2021 13:47:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpt7o-0005ee-Sa; Sun, 06 Jun 2021 13:47:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpt7o-0001hA-KK; Sun, 06 Jun 2021 13:47:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpt7o-0006EA-D3; Sun, 06 Jun 2021 13:47:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpt7o-0001VF-CW; Sun, 06 Jun 2021 13:47:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FwXnoIwZwfdB+M0DrEQOoOn+6ulzJJ9gG+EPnn5ZtLA=; b=sJ/3sZPchWU/slvyoz5ot18hEA
	OJGIcU4HMrpDx0gR6RWou35AiMgIeek5hE18kt1zgjeWVDO0Vr+ZO0ytadDECDepBxPKKVddqJB5d
	gNG1A/XpmHj8gnW0f9Y6Gs7sxENv2nhZ8Kb2L4vuyx0/BBNxF9WOmTDE20GrhjKPonA0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162441-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162441: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 13:47:36 +0000

flight 162441 xen-unstable-smoke real [real]
flight 162444 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162441/
http://logs.test-lab.xenproject.org/osstest/logs/162444/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    4 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days   12 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open-coding the mapping of the p2m list, use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. Note that this code
    is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() removes the need to bail out
    when the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not
    usable, since kernel 4.14 only keeps the linear p2m table if Xen
    indicates it supports that. Unfortunately xc_core_arch_map_p2m()
    still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read is already
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 14:31:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 14:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137410.254603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lptoZ-000276-KJ; Sun, 06 Jun 2021 14:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137410.254603; Sun, 06 Jun 2021 14:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lptoZ-00026z-HH; Sun, 06 Jun 2021 14:31:47 +0000
Received: by outflank-mailman (input) for mailman id 137410;
 Sun, 06 Jun 2021 14:31:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lptoZ-00026p-5N; Sun, 06 Jun 2021 14:31:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lptoY-0002XK-TI; Sun, 06 Jun 2021 14:31:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lptoY-0007gW-J4; Sun, 06 Jun 2021 14:31:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lptoY-0003l7-Ib; Sun, 06 Jun 2021 14:31:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oXsG5eMr316BLvZchaz8ONJn2K9Q9DwEPr0UHrf6Pqk=; b=PblOIqggKA+4LIjYRkQuygsm/Q
	jen859xVq/IKEbC1/ohvAq9Jl71hqOeSJr257uvp3ovzLFPqzued/+tw+zGpcgKbBWCQL0msv1tqr
	FwNs2Ug4TWDTFqD3aQQZ7LdfcPlKFKIBGjx1X60+4pvhgsJD6V3T5cjBvNuKhG+Aw2qY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162422-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162422: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 14:31:46 +0000

flight 162422 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162422/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162330
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162385
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162385
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162385
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162385
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162385
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162385
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162385
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162385
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162385
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162385
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162385
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162422  2021-06-06 01:53:40 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 06 15:31:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 15:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137422.254629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpujt-0007sm-Tb; Sun, 06 Jun 2021 15:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137422.254629; Sun, 06 Jun 2021 15:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpujt-0007sf-QL; Sun, 06 Jun 2021 15:31:01 +0000
Received: by outflank-mailman (input) for mailman id 137422;
 Sun, 06 Jun 2021 15:31:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpujs-0007sV-MV; Sun, 06 Jun 2021 15:31:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpujs-0003VJ-GF; Sun, 06 Jun 2021 15:31:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpujs-000289-6h; Sun, 06 Jun 2021 15:31:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpujs-0007I4-6C; Sun, 06 Jun 2021 15:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BpQ/vjy5xbn89yaOqrFETvInQzcNgKmzsCBrqq2rQA4=; b=JpoFPUsFHh7HDwx4AI7NDmi/AK
	Zz539u9Z+DNDldsij+PGt4itKVHwD7Ia0KzUL7sM2mFa2a7FlF9+3xLhRB+8L2MHcjERPPAlIMIXb
	EUeY/QJx+GBqkOrkhQ6bsSZyaP1TzFkxdDpe3HKjenWeAxkdDoUZY0fJp8g7oJCmdyz0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162436-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162436: tolerable FAIL - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:nonblocking
X-Osstest-Versions-This:
    ovmf=ddb3fdbef30de5a2946f9bd51060e8d5b1987aef
X-Osstest-Versions-That:
    ovmf=51adb689e1db695cffdeeacafad218768fbc018c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 15:31:00 +0000

flight 162436 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162436/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install     fail like 162371
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install    fail like 162371

version targeted for testing:
 ovmf                 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef
baseline version:
 ovmf                 51adb689e1db695cffdeeacafad218768fbc018c

Last test of basis   162371  2021-06-04 17:41:31 Z    1 days
Testing same since   162436  2021-06-06 08:13:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    




Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   51adb689e1..ddb3fdbef3  ddb3fdbef30de5a2946f9bd51060e8d5b1987aef -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 16:40:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 16:40:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137438.254655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpvoy-0006fo-Bx; Sun, 06 Jun 2021 16:40:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137438.254655; Sun, 06 Jun 2021 16:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpvoy-0006fh-8A; Sun, 06 Jun 2021 16:40:20 +0000
Received: by outflank-mailman (input) for mailman id 137438;
 Sun, 06 Jun 2021 16:40:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvow-0006fX-6p; Sun, 06 Jun 2021 16:40:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvov-00059m-Ue; Sun, 06 Jun 2021 16:40:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvov-0005FM-Ly; Sun, 06 Jun 2021 16:40:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvov-0000S8-LV; Sun, 06 Jun 2021 16:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VarjmWdtXVRDPwno7PSjA5+G4tKwOVKiKIhXMv/uz1s=; b=1ByrfgJSKhwIPTduFIayEWAXdi
	rl0p6MHtN1vdY/P9g9rJGXFnS52KqLHYTP6TPTwE8Uao5JKKZRM/PHAxL9+jgd1ikUHod1aLhqoPr
	zkX1zso0SKiz45Y0f8vTVARcQsLx9fDWpF6xqOrLiWEHRDOPjfe1V93tztADlokzCEIA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162429-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162429: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6f398e533f5e259b4f937f4aa9de970f7201d166
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 16:40:17 +0000

flight 162429 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162429/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 162409

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6f398e533f5e259b4f937f4aa9de970f7201d166
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  290 days
Failing since        152659  2020-08-21 14:07:39 Z  289 days  535 attempts
Testing same since   162409  2021-06-05 17:01:49 Z    0 days    2 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    




Not pushing.

(No revision log; it would be 169498 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 16:41:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 16:41:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137442.254669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpvpc-0007Dz-NU; Sun, 06 Jun 2021 16:41:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137442.254669; Sun, 06 Jun 2021 16:41:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpvpc-0007Dq-KW; Sun, 06 Jun 2021 16:41:00 +0000
Received: by outflank-mailman (input) for mailman id 137442;
 Sun, 06 Jun 2021 16:40:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvpb-0007Dg-Rp; Sun, 06 Jun 2021 16:40:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvpb-0005BQ-NY; Sun, 06 Jun 2021 16:40:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvpb-0005Gv-GI; Sun, 06 Jun 2021 16:40:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpvpb-0000s0-Fp; Sun, 06 Jun 2021 16:40:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ronl+WN4upINcek0AhBxlTbbdDHX6kMwpkGikTLW2gA=; b=bDyzWXSY6Uy8KV69LIE1TXRLYE
	Kq2GLcdCDYMmGWHVJekR6PgFJ71ulARTB9YTxVmjiC0RiJ0KNz/8Ufew4ZhbYRYn/0wMYyHq0WUoj
	k5B1uDv37J2EmDTRKJ5+A0vTsc4n3YuViV3nJhaExSoxJfZLUY53p26Tr5a92pbiAuEo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162446-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162446: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 16:40:59 +0000

flight 162446 xen-unstable-smoke real [real]
flight 162450 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162446/
http://logs.test-lab.xenproject.org/osstest/logs/162450/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    1 days   13 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 19:51:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 19:51:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137465.254706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpyni-0007gb-0a; Sun, 06 Jun 2021 19:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137465.254706; Sun, 06 Jun 2021 19:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpynh-0007gU-Ta; Sun, 06 Jun 2021 19:51:13 +0000
Received: by outflank-mailman (input) for mailman id 137465;
 Sun, 06 Jun 2021 19:51:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpyng-0007gK-1f; Sun, 06 Jun 2021 19:51:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpynf-0008LM-NK; Sun, 06 Jun 2021 19:51:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpynf-0003Df-Ge; Sun, 06 Jun 2021 19:51:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpynf-0008HR-GA; Sun, 06 Jun 2021 19:51:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1UmsyGFZ0+6kip6z0s+dZ6C7Zwqhi9Vl6h7xkuxGkfE=; b=bDkg4jiKu2g3WFmumWRdmsDrWG
	tv1JYqY1KKWOwUdQ4fqhL31Hg59IhMrWvO+vz8maw68DDzX7IUksBr7mt06F/6gAvRoSA0CV9KoNx
	BE/MF2XtVuCxb8MM+zJpHmq4JuVbcoCR/3b5oUJN9cVKO1ZYNh/MDJY6cVtiBzH4yI+c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162453-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162453: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 19:51:11 +0000

flight 162453 xen-unstable-smoke real [real]
flight 162459 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162453/
http://logs.test-lab.xenproject.org/osstest/logs/162459/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   14 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 20:45:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 20:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137477.254727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpzeZ-0004Rc-FZ; Sun, 06 Jun 2021 20:45:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137477.254727; Sun, 06 Jun 2021 20:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lpzeZ-0004RV-C0; Sun, 06 Jun 2021 20:45:51 +0000
Received: by outflank-mailman (input) for mailman id 137477;
 Sun, 06 Jun 2021 20:45:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpzeX-0004RL-HP; Sun, 06 Jun 2021 20:45:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpzeX-0000tM-7k; Sun, 06 Jun 2021 20:45:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lpzeW-0005ay-Ud; Sun, 06 Jun 2021 20:45:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lpzeW-0004i0-UA; Sun, 06 Jun 2021 20:45:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e392htzPg9f3TT3xuRJIOwzJIpXuMhDFBCw9Qj4Wnkk=; b=q1M6O/ZEnv/7mD6Y3BJ9mfbuys
	xGo0KG/eTnjOYobVoNi4fO0uw/NabqCbE4TJ59/fAWnMTew9WjDbKdpnC1X2wXLsAwc2eci1h3uUV
	kwUULJi9hROEI+CC/hdf7IpHO27ha4hYK0DuDyojuStTyM03vB9SRYk46ynLKDz7Nyak=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162433-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162433: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f5b6eb1e018203913dfefcf6fa988649ad11ad6e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 20:45:48 +0000

flight 162433 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162433/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start    fail in 162420 REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 162420 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 162420 pass in 162433
 test-arm64-arm64-xl-xsm      13 debian-fixup     fail in 162420 pass in 162433
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10     fail pass in 162420
 test-arm64-arm64-libvirt-xsm 13 debian-fixup               fail pass in 162420

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f5b6eb1e018203913dfefcf6fa988649ad11ad6e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  310 days
Failing since        152366  2020-08-01 20:49:34 Z  308 days  528 attempts
Testing same since   162420  2021-06-06 00:41:06 Z    0 days    2 attempts

------------------------------------------------------------
6147 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1672382 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 06 23:15:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Jun 2021 23:15:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137495.254771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq1zT-0001JQ-OG; Sun, 06 Jun 2021 23:15:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137495.254771; Sun, 06 Jun 2021 23:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq1zT-0001JJ-K6; Sun, 06 Jun 2021 23:15:35 +0000
Received: by outflank-mailman (input) for mailman id 137495;
 Sun, 06 Jun 2021 23:15:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq1zS-0001It-La; Sun, 06 Jun 2021 23:15:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq1zS-0003Jd-GF; Sun, 06 Jun 2021 23:15:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq1zS-0003ns-9d; Sun, 06 Jun 2021 23:15:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq1zS-0007ZF-6g; Sun, 06 Jun 2021 23:15:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lQBrUdb0FwQdhcA5hSkMa1qMXHQ+ajiLizhjL9ZWOBk=; b=Zh8PRssO2Ny71cvbmRK6gTgkBF
	ZyCk7X/H3S72RU4k41tcnLV1v56P2sDse+/bXfs5orPGgQkpQxO49UovFQR1XSxxpS57CNdan49dm
	mACZuvoTVIPRqP/K77fnsfvQCRTg3Y5x2crs2HBGNY/cqD3St5e/l7ereOwSGxlbtHTk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162461-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162461: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Jun 2021 23:15:34 +0000

flight 162461 xen-unstable-smoke real [real]
flight 162467 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162461/
http://logs.test-lab.xenproject.org/osstest/logs/162467/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   15 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    2 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it supports that. Unfortunately xc_core_arch_map_p2m() still supports
    only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 01:35:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 01:35:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137513.254809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq4AE-0003j9-MY; Mon, 07 Jun 2021 01:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137513.254809; Mon, 07 Jun 2021 01:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq4AE-0003j1-J5; Mon, 07 Jun 2021 01:34:50 +0000
Received: by outflank-mailman (input) for mailman id 137513;
 Mon, 07 Jun 2021 01:34:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4AE-0003is-6G; Mon, 07 Jun 2021 01:34:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4AD-00030V-Rl; Mon, 07 Jun 2021 01:34:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4AD-0001oi-Cd; Mon, 07 Jun 2021 01:34:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4AD-0003a8-C6; Mon, 07 Jun 2021 01:34:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NYQ9oBAJ1fspKwz8MggHwD7fp6SWp7ewTirU6/sZXlk=; b=6OJWL9Eyqh61eDa/hsyskXflD0
	GFbeXLHq9/4/kgWzao0+xOg8XPEvMjG/mCBXrQLvvPExmVnAHy5XLe4mNnjf8ehC6yQpUbSMjdfZw
	u1KsPcUCuvfDO9GKyz4LTmmHLmU2q4zF7M6EGm8F4HFUv3wIVU3lUZfvGSG36IkTT/bc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162454-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162454: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6f398e533f5e259b4f937f4aa9de970f7201d166
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 01:34:49 +0000

flight 162454 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162454/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6f398e533f5e259b4f937f4aa9de970f7201d166
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  290 days
Failing since        152659  2020-08-21 14:07:39 Z  289 days  536 attempts
Testing same since   162409  2021-06-05 17:01:49 Z    1 days    3 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 169498 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:25:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137523.254828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq4x0-0000ck-Rk; Mon, 07 Jun 2021 02:25:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137523.254828; Mon, 07 Jun 2021 02:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq4x0-0000cd-Od; Mon, 07 Jun 2021 02:25:14 +0000
Received: by outflank-mailman (input) for mailman id 137523;
 Mon, 07 Jun 2021 02:25:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4wz-0000cT-8S; Mon, 07 Jun 2021 02:25:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4wy-0004HX-4L; Mon, 07 Jun 2021 02:25:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4wx-00030z-T0; Mon, 07 Jun 2021 02:25:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq4wx-00044v-SU; Mon, 07 Jun 2021 02:25:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=gLB/Xq99arg8EtmKesx8MDCoUNOTwi1u7v+KHVzvv/4=; b=cpEpCuIDzURLIzDuoE0uAbzlgy
	gIsWswF9pSiRYUe3AhT1HrVfq2Je1vxMHzVZOCk2K/k+MDUMkt3ssXVATz2BIi+0G5ZJkGmYE2TND
	rMTcaVwR1rJiFWatfMXKhm97od1D36zapwaCz8UieZvuAF17/wkAWjICD8ImrhoE7Iro=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-ovmf-amd64
Message-Id: <E1lq4wx-00044v-SU@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 02:25:11 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162473/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/162473.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 162454 fail [host=albana1] / 162379 ok.
Failure / basis pass flights: 162454 / 162379
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0-ddb3fdbef30de5a2946f9bd51060e8d5b1987aef git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#1cbd2d914939ee6028e9688d4ba859a528c28405-6f398e533f5e259b4f937f4aa9de970f7201d166 git://xenbits.xen.org/osstest/seabios.git#7292e4a0a8f58333ccbd2d0d47242f9865083c9c-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 15001 nodes in revision graph
Searching for test results:
 162409 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162379 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162428 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162434 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162439 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e120c962f50432199d4125f0b7066a78ccbad6f3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162443 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d6ba8aa6ef628f5d865865e0aba4a329ee0d0728 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162448 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162429 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162452 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162456 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e7641171b6c1f858f3d979c0e8f04d6c12870baa 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162458 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162462 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162465 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162468 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162471 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162454 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162473 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 162379 (pass), for basis pass
 Result found: flight 162409 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 162428 (pass), for basis pass
 Repro found: flight 162454 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
No revisions left to test, checking graph state.
 Result found: flight 162458 (pass), for last pass
 Result found: flight 162462 (fail), for first failure
 Repro found: flight 162465 (pass), for last pass
 Repro found: flight 162468 (fail), for first failure
 Repro found: flight 162471 (pass), for last pass
 Repro found: flight 162473 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162473/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
162473: tolerable ALL FAIL

flight 162473 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162473/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:43:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:43:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137534.254846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5Eu-0002zv-LV; Mon, 07 Jun 2021 02:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137534.254846; Mon, 07 Jun 2021 02:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5Eu-0002zo-Ha; Mon, 07 Jun 2021 02:43:44 +0000
Received: by outflank-mailman (input) for mailman id 137534;
 Mon, 07 Jun 2021 02:43:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5Et-0002zi-6v
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:43:43 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebaf0d1a-0a28-4a24-adc4-a3ffc5c9c8bd;
 Mon, 07 Jun 2021 02:43:40 +0000 (UTC)
Received: from DBBPR09CA0023.eurprd09.prod.outlook.com (2603:10a6:10:c0::35)
 by AM9PR08MB6228.eurprd08.prod.outlook.com (2603:10a6:20b:281::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 02:43:38 +0000
Received: from DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:c0:cafe::38) by DBBPR09CA0023.outlook.office365.com
 (2603:10a6:10:c0::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT049.mail.protection.outlook.com (10.152.20.191) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:38 +0000
Received: ("Tessian outbound bf434e582664:v93");
 Mon, 07 Jun 2021 02:43:38 +0000
Received: from 43e9bd8c5705.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 217E8781-6E41-40BB-84C4-EF698A40DA7A.1; 
 Mon, 07 Jun 2021 02:43:31 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 43e9bd8c5705.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:31 +0000
Received: from DU2PR04CA0209.eurprd04.prod.outlook.com (2603:10a6:10:28d::34)
 by DB7PR08MB3851.eurprd08.prod.outlook.com (2603:10a6:10:7a::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.27; Mon, 7 Jun
 2021 02:43:29 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::bf) by DU2PR04CA0209.outlook.office365.com
 (2603:10a6:10:28d::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:29 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:29 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:27 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebaf0d1a-0a28-4a24-adc4-a3ffc5c9c8bd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0n23ceT40gDkzpSs8YSi62lKBLmssTtyN4ZJ1rsXIaE=;
 b=64dp+n0zVNx71dEWRz4IPJCmvg2JJzj5bH/rC0JM901HIJ7MtBZcCqf1FrbrQ6m8P2/HglUiJJ0W0zXcvL4xOFGrHYVRF27hNPqcHJ9dpOu8H+0uhygbelu/N7b6wX6BZuwcnPEx//paMFIe5hxkYeu1xq08ICEPxtoKTTKRoTQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4c6bc1cacb9119fa
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=at3Br/HcUMBCYPgcTQZW1fw/tktcO8oHt/HFnRVJCm+QzkRVy/2NzxpCVeg1VrkXlfWUH75cbO3VLb3m+Vi5cpxv5fhMFO93LbCQ5857S0xOfK3IcMGZis7yI9S7loPhIq1EVzrbEXXvp3YjLHtgtxku6bBp/6hCGYt1xb2Eea4A2Xp0Yaei3WqEVNSP9AGnRpH7yyFl/CMIhOkWnWMoq2VlZm/bx5v5KlVhEm1x42Azeh30PHFRBTHaq3eXxDjfjHeIFSpe2gi3o0s0Lvk+sMhXgbVlaLFF4ZUC80SNjevSvHVbBXTzbVfKkK/ameoE5HLeV0z5khfoG3Uikl2baA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0n23ceT40gDkzpSs8YSi62lKBLmssTtyN4ZJ1rsXIaE=;
 b=VO2SjmiLccopDdI14GqX0GgjMce0Tf5IYQBtkEVNdqvTj3SFnMvo/5q0MznI9bAtOyuxE3VaIY7K3TlhCL/b98gMdUMb2/yl0mLpuXcqiMTXlqusubJ+/xcc+y8KEz9rbnaL5LAg0dLR0ivsRLmlZQqkfwxDR+YRvFygiHCdoi5PQMMjgcflHj/QKkyBOT+FxgYqit4NNRpVG+8XZD7aTfYxe6aMzJSbCgeFk5RIBNdyY1QXz5OeJ2sW5PI5my080HHXzNPZkE4EV64J87eQkCXB55hXmXQWkIXtlyEeDx3UPmipz2dnEu5JLOQqIyC/eqA24i+OXqr82aIehDyr9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0n23ceT40gDkzpSs8YSi62lKBLmssTtyN4ZJ1rsXIaE=;
 b=64dp+n0zVNx71dEWRz4IPJCmvg2JJzj5bH/rC0JM901HIJ7MtBZcCqf1FrbrQ6m8P2/HglUiJJ0W0zXcvL4xOFGrHYVRF27hNPqcHJ9dpOu8H+0uhygbelu/N7b6wX6BZuwcnPEx//paMFIe5hxkYeu1xq08ICEPxtoKTTKRoTQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH V2 0/9] Domain on Static Allocation
Date: Mon, 7 Jun 2021 02:43:09 +0000
Message-ID: <20210607024318.3988467-1-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 053aba55-4f35-4f83-c316-08d9295e0d9a
X-MS-TrafficTypeDiagnostic: DB7PR08MB3851:|AM9PR08MB6228:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB6228096A547F0D6652B59537F7389@AM9PR08MB6228.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 83ajIFQM5ggxLdmrrc7blmKlMjlOCEoozlkdxv1ZZ8wiQco118C8EkTc2qsnLN0JB5vj/OUphpAiUxL6HhZsX0x/f2ycZ1bVIVG0xJiN3v1pkpaXj528xh3Q+oRXLn0z1ebMWAe0cVcu9eOWiqsiNfrShhHakPaiiuS+9pdxw7B30VuyopjnBpwiZY/eyHQz07ELksySejEIvQmXyfGZwiLYinGoAQtAE7hZOJ8CIr857VMOqc4phL6+5gAFIvok3Sfr8MfSuyhWBWx/dfGm9OQHy7l2ddXpNqK4oUrT0dtpPAfhCx0f0ywtW+EUdR+WhHXMPaUNkLxRcWpEcBNrXpLUIJMvRicFF0PHZnNbOCjr864cwZNLQ1cbJ6CWAax1wBhwtOIreuALcyVbbiW5HX9iJA6Cr/CextCl+aZHTYHThu+xz+GPSsDKwhkv9zE1GMWLHYz71wGNOEaPY4dPkM+DXyReoaKvwuio8+yUPavjNN9Qe0l7JaGi03/NC+PRJ8B2Lz10hp3N1BhigMywN7t5VB3Hj4UFwVv+wuz+Yw9cB13U2+fDcabIOaa79sXL/KPeUTOejH/nq30vT7BHScOKl1j/iXjNgoyddfgO/xPmfxJFrquD95eBawPdy1lLzQdEhQvEiSzxhWvc8TwrZeIq8Ni/RHBV0R4tf3XcHdq6f089le2vFogEhCHrKAWvQBqdfXhcRZWt7d+sZBKavH4yPUh5UaKtlFDPAPnv2sR4VFvlBUXPlLCplX0oKnKX3ocZuZ9gmkWCsB04m/GoJQIVvCHmY9YNdBR6dNa5zi0+7ykXHV514FazfwSi7hY1dnSUUdCo4Loh7iP+UniuWQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(136003)(376002)(46966006)(36840700001)(47076005)(426003)(336012)(81166007)(36756003)(6666004)(70586007)(5660300002)(86362001)(70206006)(44832011)(8676002)(82740400003)(36860700001)(2616005)(8936002)(82310400003)(966005)(110136005)(54906003)(4326008)(316002)(26005)(2906002)(83380400001)(356005)(186003)(478600001)(1076003)(7696005)(2101003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3851
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8fdb5ebf-7c35-43ab-e441-08d9295e0842
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Q3nRjMLtj2YhR0g6GtJSLLUNg2dc5ryr64/hlJ0u4MwFrFeZ9XVYsZzZVPqHt5CTbzElmcEB/VeuXH5wfPyQzU9mLxs4vi04X4Y/6ThObeZraPi7gtbHIiC1fB062Gj7uIPODLJzKIFqBPjn1tHvkWy6KpSQINLNyIqXPeXHgQ/F8W6/sx1aj9hk0ulclD/cPHNv4pvFbsPn3cOcsOqo25UcVcEi/n0CNafTZEn7/tsHYUcx8x5JxPa+tuundBWP6/BwGkneYIX5StIGVqLJIU9jrgPN6FG4D7c+msPpR8Hryr2BaPSw+XG5iZ1iM8k4cJxEwe7mT4qxA0YPYq5SXQFsWoA6ay2U418Djm7Lsg4VlzcN9+Qxntd6FlZUBoXHpDHbanuYDZLesGDeVwvp3NQJm33se0DOVUdbNQfPV9Yx7NHOr+Ei+yLaQE0HPwUxu6Xq1ROHNBeHyzgjquZfcOkWeWoAqfypP6JYklG8FEGAXIqbtmeKYPN4VwALTzIFtLqrwJxn/9XO3JGQuXxEjLi3A1kvvDCq/RWIyi6JhJjwLKuggfLFGTPsmwHvwvYzJhkcCHlByJ/7x8qMKa07r1Vb8o0norGpqjLjzY0Kmh9r0lZFJjhpMKhFlLNsqBCOqZ2cwWqfz5d178CaMCv+oOIJ1fS9oQP9mMQ1SUdHPgD630CPltoWIZr6JfwnUSLb9OhThUDmgKP8c3JWhXN5XBnnRXWyVna/JoOwjn5yUUUFW8qDoTgIa4eZGBnQZHJ/9pMOFp53kiRrT3CAO22K1f7mcDhDa4u9jrigENAPUGg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(46966006)(36840700001)(8676002)(8936002)(4326008)(966005)(70586007)(70206006)(110136005)(54906003)(316002)(1076003)(2906002)(36860700001)(336012)(47076005)(2616005)(86362001)(82310400003)(186003)(44832011)(26005)(81166007)(83380400001)(5660300002)(6666004)(36756003)(426003)(82740400003)(7696005)(478600001)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:43:38.1486
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 053aba55-4f35-4f83-c316-08d9295e0d9a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6228

Static Allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration using physical address ranges.
This pre-defined memory -- Static Memory -- is reserved as part of RAM at
boot and shall never go to the heap allocator or boot allocator for any use.

This patch series only covers Domain on Static Allocation.

THE DEVICE TREE FORMAT OF STATIC ALLOCATION IS STILL UNDER DISCUSSION;
THE RELATED CHANGES WILL BE INCLUDED IN PATCH V3.

See the related design proposal for more details:
https://lists.xenproject.org/archives/html/xen-devel/2021-05/msg00882.html

The whole design covers both Static Allocation and 1:1 direct-map; this
patch series only covers one part of it, Domain on Static Allocation.
Other features will be delivered through separate patch series.
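
As an illustration of the binding under discussion (the device tree format
is still being settled, per the note above), a domain's static memory could
be described along these lines. This mirrors the example in this series'
documentation patch; the node name, addresses, and sizes are examples only:

```dts
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;

        /* 512MB of host RAM at 0x30000000, reserved for domU1 */
        staticmemdomU1: static-memory@0x30000000 {
            compatible = "xen,static-memory-domain";
            reg = <0x0 0x30000000 0x0 0x20000000>;
        };
    };

    chosen {
        domU1 {
            compatible = "xen,domain";
            cpus = <2>;
            /* refer to the reserved region by phandle */
            xen,static-mem = <&staticmemdomU1>;
        };
    };
};
```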
---
changes in v2:
 - consider reserved-memory binding and use a phandle to refer to it
   (still in discussion)
 - remove the unused reserved field in struct page_info, and the unused
   helpers page_get_reserved_owner and page_set_reserved_owner
 - introduce CONFIG_STATIC_ALLOCATION to avoid bringing dead code into
   other archs
 - extract common code from free_heap_pages and free_staticmem_pages
 - introduce new interface assign_pages_nr
 - rename pfn-named variables and helpers to mfn-named
 - remove meaningless MEMF_no_owner case
 - leave the zone concept out of the DMA limitation check
 - take care of concurrency issues on static memory allocation
 - ensure consistency when `xen,static-mem` and `memory` are both defined
 - fix the scalability issue in allocate_static_memory
 - allocate static memory at parse time

Penny Zheng (9):
  xen/arm: introduce domain on Static Allocation
  xen/arm: introduce PGC_reserved
  xen/arm: introduce CONFIG_STATIC_ALLOCATION
  xen/arm: static memory initialization
  xen: introduce assign_pages_nr
  xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages
  xen/arm: take care of concurrency on static memory allocation
  xen/arm: check `xen,static-mem` property during domain construction
  xen/arm: introduce allocate_static_memory

 docs/misc/arm/device-tree/booting.txt |  49 ++++++
 xen/arch/arm/Kconfig                  |   3 +
 xen/arch/arm/bootfdt.c                |  12 +-
 xen/arch/arm/domain_build.c           | 188 +++++++++++++++++++-
 xen/arch/arm/setup.c                  |  27 +++
 xen/common/page_alloc.c               | 238 +++++++++++++++++++++++---
 xen/include/asm-arm/mm.h              |   3 +
 xen/include/asm-arm/setup.h           |   2 +
 xen/include/xen/mm.h                  |  18 +-
 9 files changed, 507 insertions(+), 33 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:43:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137536.254868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5F9-0003av-6R; Mon, 07 Jun 2021 02:43:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137536.254868; Mon, 07 Jun 2021 02:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5F9-0003ak-2f; Mon, 07 Jun 2021 02:43:59 +0000
Received: by outflank-mailman (input) for mailman id 137536;
 Mon, 07 Jun 2021 02:43:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5F7-0003W9-KM
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:43:57 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::613])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e72c521d-edcb-45e3-a34c-0b80ce31f886;
 Mon, 07 Jun 2021 02:43:53 +0000 (UTC)
Received: from AM6PR04CA0004.eurprd04.prod.outlook.com (2603:10a6:20b:92::17)
 by AM9PR08MB5938.eurprd08.prod.outlook.com (2603:10a6:20b:2da::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 02:43:47 +0000
Received: from VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::86) by AM6PR04CA0004.outlook.office365.com
 (2603:10a6:20b:92::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT044.mail.protection.outlook.com (10.152.19.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:47 +0000
Received: ("Tessian outbound 836922dda4f1:v93");
 Mon, 07 Jun 2021 02:43:46 +0000
Received: from 5e7e6bb04b87.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B3FF24D9-2469-4FA8-BD57-4AD5C80A7B5F.1; 
 Mon, 07 Jun 2021 02:43:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5e7e6bb04b87.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:40 +0000
Received: from DBBPR09CA0037.eurprd09.prod.outlook.com (2603:10a6:10:d4::25)
 by DBBPR08MB4522.eurprd08.prod.outlook.com (2603:10a6:10:d2::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 02:43:37 +0000
Received: from DB5EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:d4:cafe::df) by DBBPR09CA0037.outlook.office365.com
 (2603:10a6:10:d4::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:37 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT045.mail.protection.outlook.com (10.152.21.164) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:37 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:35 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e72c521d-edcb-45e3-a34c-0b80ce31f886
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wFHlw8L+DYMmW1VL14RO+7hEsLY/c2HmSWKQXFt7g74=;
 b=1bsKOM63byfLzsKnBPezFiDgPYOCfArV7ULsjR5ZpnwtG7nslTQcymaS9+nCker/oSnBeKtuT0fxeeTp9Ot3s0ECDXgdq/D5aHxAB3tb0vNm6OrPAxtTR8No91VaZQ2jlAd1nTJRO4EznZvydi5WlsTMOc6Fp6FozwKWxlZGvjE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: bb25161b750b923f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b/uoTfvXvra6bWOonyZLbRBeNe3EFBTydk4OAwtkgqGN9fjJQVdclSUyg+6MvHffj2hTtrVWlKeBbrRdkMcFT+Ia1vQxGRM2MvcEOZx548O9gSFhDfTuQ+UBFdvVJ3oZNJxxeNXxALWX5vbFZvv8/ZWvEpjcd2RlJ3mun+CkzUj7jH0qhWl38gon+xncVE95sWCGRHPqBAwu7WgJv2E0rJLNNBIGpaQa9KPRwxPF7SXGDQ1X2cWnUpRj2fRazbU0FSPJ9Jo2OPTycJq/XrfYxvnv/HEl3xRkQ4Avj0eyxqDK+AnQ+l9G4ZCyWW9fSZi0GliiXK5ZzZX70zk7Ec0bVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wFHlw8L+DYMmW1VL14RO+7hEsLY/c2HmSWKQXFt7g74=;
 b=DbyzKIyIGXkuxFzL+jrIPBVrl+YPPgOwUnYNIyTOv5WEet4hb5ycGx8+mrQvaokVrWoe30zGj3zAIXx+hWTXGIAYpk1AWMTgVdXI138Qbu5X/Xehqi5nTfprcuaf1fYxZmO6FEnaWIg6y2mI7e2vwHFins8/YsqxCvqQLrfNPvO91ThuCCkjSXX9uOzMYZq1eBQ8bQxSrHYXi7ivBCxr76EFNObxAaVyFWgTtzuvZq+qIJvP76C/0EfkMF/vPaPJntqu7uAiYNm0XbGOjxviEKzWAm9rIIjOE5+mBy5ZFrjXlOnQNKvzzi5JVaIjedDmUgyvuzoz9+zOMm11l9of5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wFHlw8L+DYMmW1VL14RO+7hEsLY/c2HmSWKQXFt7g74=;
 b=1bsKOM63byfLzsKnBPezFiDgPYOCfArV7ULsjR5ZpnwtG7nslTQcymaS9+nCker/oSnBeKtuT0fxeeTp9Ot3s0ECDXgdq/D5aHxAB3tb0vNm6OrPAxtTR8No91VaZQ2jlAd1nTJRO4EznZvydi5WlsTMOc6Fp6FozwKWxlZGvjE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 1/9] xen/arm: introduce domain on Static Allocation
Date: Mon, 7 Jun 2021 02:43:10 +0000
Message-ID: <20210607024318.3988467-2-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e1afd8f2-ce5f-42fc-e02a-08d9295e12fd
X-MS-TrafficTypeDiagnostic: DBBPR08MB4522:|AM9PR08MB5938:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB59383FC85FED5F8EF3413AC0F7389@AM9PR08MB5938.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yN1QZl0eKLy6qmQjvRD+eNwBzWKoWU6aLms5DiUPiqTb1byCqsXdMnSPfhQWdcCy7F432l9WSFRy9m0JmouZRnk5KIwYjowhZjq1QHuUpreJ6VI2YGB6Ki8TxbeXHpxJc0eejgSNmGj2PK7+uY4ILMOAgRQGZO4DB4T6Fod5Xju2M+KlM3kRpm+1lg8u5Ovux6+ZgGYjwaJnoo7+YObfzf80UF0mFeSnyMWxeUj+IMopy5/kT1Yq5RUHPoWxUYQn0OGqKm3aITHFkA2z3AbKQFfdGkTCcXRujp11IhByZkSN/7cztkdTN0dmp4wpssiskHTg83XlowlJn1QTs08rdbKdMQ5csHoW2V4RKpl1lxs7UKM/A14+MWgW8BOuhMx6hxUS83DB27tC/icxgXNBfLrYd7ympaYniHPCwjC21D0ZKmxE3Z6A03va6+7At0jS3KdhYHdDmQrt/m+OXij14l3tSxI+f9F2v0DIf+DtZNS0O24lX8Ta3lzL1sDAVNwD6xwHOz9nOcCWrb+xs7rWpAn+GwIW4dE0ryX/RCj6sgtUeLbkFblo2PjGbfBtfEQZDA/58VpHfRgfJ8o6RsYofKvLiMDySH+owBDndUNTcXnPmyqybQST69zx8xs+RlTKEyiFSJ1azaLGyLm7iwZH29HMdjLZz2VFByIoZkUJ03dUFsO92USO0+HuBIVyyJLFCjxbtAl7Y3o3WC7F7hjGNg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(39860400002)(36840700001)(46966006)(356005)(186003)(82310400003)(70206006)(54906003)(1076003)(316002)(4326008)(426003)(6666004)(8936002)(110136005)(8676002)(26005)(83380400001)(47076005)(81166007)(36756003)(44832011)(86362001)(70586007)(82740400003)(36860700001)(5660300002)(7696005)(478600001)(2616005)(2906002)(336012)(2101003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4522
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d6a9e6ac-375a-410d-7b66-08d9295e0d1f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MLZezpl5jCBA2q6yA5SUm7rr62tlT4We+bqZLJlAPrkK759HkGh+Yrb+yDbxMRT3JOcE1FAghtOsDRuus2Z2e6twd8pn9NDZPgat1hp6R92TE1KBUceVcVw47R5Fwyu9yeihYkhyb/dcXh+aCkw7z5cqtuM9s+EZmxukVCigrZMaKL6Y5LehpGJ4XC9PFT1FWlM/7NS0laplSw4HtIy7EbvN5whrDnZ60q3w4CT/b1AW+pPHf2J7ur+Ehn/7UOANYQ2aC8F+7qHK+Kpww4bFva7V+S66B6FmrUnSskH01uTGz1tKW1NNViHdgRRcZsbrlQNwQNh6/M8VJChCzaCSqw/cxgbXgcTZt7fQODODiX/os7nLBGngU0qoalPB07JolfM2gk3FjP5OZNhGjZdDR8u4JQ2r9Ylyb4Jv6DSkUBCpBDE/64dXurZrhOgKffXM53l6gOzp4AQjHg66HWbEwN17GqgNs5hwOVMrXQ1WmrTUoe59PbSeewjaIkAXCCde/9q7WzDzfKXiiAT7La4VBF5EvPzS5tmvJMx+NYwbqTnTkhU6MiRWRGdMmoS2w8TKA0/NzFPy6iOTicpnsU0ukgoplGwANH9xchlXsfqkpYGVipOQcwevMzSFPzUoslfato0wvN8O6fLrZy3qSw800kOp/CHaC+GPOPZqA27bVqEWMAz3U1dG1LRKkPyTQcuL
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(39860400002)(36840700001)(46966006)(186003)(82310400003)(70206006)(54906003)(1076003)(316002)(4326008)(426003)(6666004)(8936002)(110136005)(8676002)(26005)(83380400001)(47076005)(81166007)(36756003)(44832011)(86362001)(70586007)(82740400003)(36860700001)(5660300002)(7696005)(478600001)(2616005)(2906002)(336012)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:43:47.0608
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e1afd8f2-ce5f-42fc-e02a-08d9295e12fd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5938

Static Allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration using physical address ranges.
This pre-defined memory -- Static Memory -- is reserved as part of RAM at
boot and shall never go to the heap allocator or boot allocator for any use.

Domains on Static Allocation are supported through static memory nodes,
defined under the reserved-memory node with the `xen,static-memory-domain`
compatible string, which specify physical RAM ranges as the domain's guest
RAM. The #{address, size}-cells values will be consistent with the ones
defined in the parent node, reserved-memory.

This patch introduces the new static memory device tree node, documents it,
parses the new property at boot time, and stores the related info in
static_mem for later initialization.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v2 changes:
- fix typos
- consider reserved-memory binding and use phandle to refer
---
 docs/misc/arm/device-tree/booting.txt | 49 +++++++++++++++++++++++++++
 xen/arch/arm/bootfdt.c                | 12 +++++--
 xen/include/asm-arm/setup.h           |  2 ++
 3 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..ba7854b2d3 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -268,3 +268,52 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
 follow the convention explained in docs/misc/arm/passthrough.txt. The
 DTB fragment will be added to the guest device tree, so that the guest
 kernel will be able to discover the device.
+
+
+Static Allocation
+=================
+
+Static Allocation refers to a system or sub-system (domains) for which memory
+areas are pre-defined by configuration using physical address ranges.
+This pre-defined memory -- Static Memory -- is reserved as part of RAM at
+boot and shall never go to the heap allocator or boot allocator for any use.
+
+Domains on Static Allocation are supported through static memory nodes,
+defined under the reserved-memory node with the `xen,static-memory-domain`
+compatible string, which specify physical RAM ranges as the domain's
+guest RAM. The #{address, size}-cells values will be consistent with the
+ones defined in the parent node, reserved-memory.
+
+On memory allocation, these pre-defined static memory ranges are first
+mapped to the fixed guest RAM address `GUEST_RAM0_BASE`. Once
+`GUEST_RAM0_SIZE` is exhausted, allocation continues at `GUEST_RAM1_BASE`.
+`GUEST_RAM0` may take up several pre-defined physical RAM regions.
+
+The DTB property should look as follows:
+
+    / {
+        reserved-memory {
+            #address-cells = <2>;
+            #size-cells = <2>;
+
+            staticmemdomU1: static-memory@0x30000000 {
+                compatible = "xen,static-memory-domain";
+                reg = <0x0 0x30000000 0x0 0x20000000>;
+            };
+        };
+
+        chosen {
+            domU1 {
+                compatible = "xen,domain";
+                #address-cells = <0x2>;
+                #size-cells = <0x2>;
+                cpus = <2>;
+                xen,static-mem = <&staticmemdomU1>;
+
+                ...
+            };
+        };
+    };
+
+DomU1 will have 512MB of static memory, reserved from physical address
+0x30000000 to 0x50000000.
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index dcff512648..5b3bb75b5f 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -187,9 +187,17 @@ static int __init process_reserved_memory_node(const void *fdt, int node,
 
     if ( rc == -ENOSPC )
         panic("Max number of supported reserved-memory regions reached.");
-    else if ( rc != -ENOENT )
+    else if ( rc == -ENOENT )
+        return 0;
+    else if ( rc )
         return rc;
-    return 0;
+
+    /* Static memory node with compatible string "xen,static-memory-domain". */
+    if ( fdt_node_check_compatible(fdt, node, "xen,static-memory-domain") == 0 )
+        rc = process_memory_node(fdt, node, name, depth, address_cells,
+                                 size_cells, &bootinfo.static_mem);
+
+    return rc;
 }
 
 static int __init process_reserved_memory(const void *fdt, int node,
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 5283244015..5e9f296760 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -74,6 +74,8 @@ struct bootinfo {
 #ifdef CONFIG_ACPI
     struct meminfo acpi;
 #endif
+    /* Static Memory */
+    struct meminfo static_mem;
 };
 
 extern struct bootinfo bootinfo;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137538.254879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FB-0003wC-Lf; Mon, 07 Jun 2021 02:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137538.254879; Mon, 07 Jun 2021 02:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FB-0003vz-IX; Mon, 07 Jun 2021 02:44:01 +0000
Received: by outflank-mailman (input) for mailman id 137538;
 Mon, 07 Jun 2021 02:44:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FB-0003IQ-1f
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.51]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e62f02ba-88a0-4b92-9ede-1c653ad2ce13;
 Mon, 07 Jun 2021 02:43:56 +0000 (UTC)
Received: from AM5PR0101CA0024.eurprd01.prod.exchangelabs.com
 (2603:10a6:206:16::37) by AM6PR08MB3878.eurprd08.prod.outlook.com
 (2603:10a6:20b:8b::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 02:43:54 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:16:cafe::ed) by AM5PR0101CA0024.outlook.office365.com
 (2603:10a6:206:16::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:53 +0000
Received: ("Tessian outbound bf434e582664:v93");
 Mon, 07 Jun 2021 02:43:53 +0000
Received: from a622a86e5caa.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2D36962D-F638-467F-B7D3-5FAE4E9318DC.1; 
 Mon, 07 Jun 2021 02:43:46 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a622a86e5caa.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:46 +0000
Received: from DB6PR0601CA0035.eurprd06.prod.outlook.com (2603:10a6:4:17::21)
 by DB9PR08MB6857.eurprd08.prod.outlook.com (2603:10a6:10:2a2::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 02:43:44 +0000
Received: from DB5EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:17:cafe::e5) by DB6PR0601CA0035.outlook.office365.com
 (2603:10a6:4:17::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:44 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT044.mail.protection.outlook.com (10.152.21.167) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:44 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:43 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e62f02ba-88a0-4b92-9ede-1c653ad2ce13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2oM5iiW6n6qyxMv8aSjmUSUvQgcdsVFFxcjsWYlu1N4=;
 b=a/yA+w7oQl3DWt4LH5LvZezJTRI88SIvm051IVTo4/epBsxsQgKkarXVCH0KFZGbHwalKXJbkeUuROKIlPZPxiOTeVhKzVHwK+4SweuZUF9ehV+k2m5sQR81/rZIKEcdp6tOo5IDxo/TpLsT5yjTOLuuGFK215upjxbXqoQSiTY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 17c4e3fb51055c7c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c0Z+rh36tSswXq3WjIP5LefuUHKZGA/8lGB4QyEtw4mjflMlSmv6sfwvcTBr/H12YBy5HEmR53vzq2ZUM3Z5LLCZ5KhnW52bekQD06wldFUiCJN1RsGwg508tYgLvJqrflghAPYEsJp+qKmVC6a70rmK07tfooZ7r3oTK3LJ4zeDxh5D7pGsOQbPM+YIwSFE7I6Vcs5a2ZXjv9VoC6ut/BczJY6En/sylPna4OneH55Ipt4fkaxE0U7q0I8wXgyWEF+XNlK7fjegpucq0aHUkK2xOxInSb7QNqdTnOp4rQXgWgZDDabEzPMU7yFllKyN8Bqf2MLxqFzj4HUKZghcCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2oM5iiW6n6qyxMv8aSjmUSUvQgcdsVFFxcjsWYlu1N4=;
 b=XPQFYD1b6BoJip5Q5CrYg95em3XmCzvSh4ZFL4gxZV5t45+VjHyY2nWlzvEQFdwZ4/Mz27xxk3aL10/gcj8YUojYSC5zpOKlz7ssIPk9TOcPR/iGIzJq3GzGMlO1XDN9z8HSsf7Gs2MhNluJBNj+ozrjTUYinnlE2ANjQQUE6/bBB0AN0imdQhzEzLsjSL1RgnuSnbVuHJqpnlpXll4U2uNnZ5oTpYkUHk7/yrUU0yejmFV4QEDRFSfNXQ/HtddxTz4E8cMSL9u0Dov/EEB5JO8ZbuZyFanq/zVA6CTJ2WZDiFVTnFqkFb5Xf/Sf1SjVg8dXNNn59JxaiBExMRlrkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2oM5iiW6n6qyxMv8aSjmUSUvQgcdsVFFxcjsWYlu1N4=;
 b=a/yA+w7oQl3DWt4LH5LvZezJTRI88SIvm051IVTo4/epBsxsQgKkarXVCH0KFZGbHwalKXJbkeUuROKIlPZPxiOTeVhKzVHwK+4SweuZUF9ehV+k2m5sQR81/rZIKEcdp6tOo5IDxo/TpLsT5yjTOLuuGFK215upjxbXqoQSiTY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 3/9] xen/arm: introduce CONFIG_STATIC_ALLOCATION
Date: Mon, 7 Jun 2021 02:43:12 +0000
Message-ID: <20210607024318.3988467-4-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 40525657-4722-41d8-d4c6-08d9295e16e6
X-MS-TrafficTypeDiagnostic: DB9PR08MB6857:|AM6PR08MB3878:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB38789D2AA6F6002AEEBF78F7F7389@AM6PR08MB3878.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:348;OLM:348;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 O7qhMxvRfGcbsbx5+7sohuD+Lx4RCcWRgE8G2euMwTo/QcLP/RkLdjIp5aJ+6BCEnwayHWUhtPXNXqLLF7DMFnNuhvf5XAJRCZEg4aMvvpIbzjBSRqy7cQY5I099P9NKDGVH4dFfBBoDKt6VhJJvxrLs7rRBPraKZPPzaMsL9XSKb+iG80PaYhcwIwXVq95W/3RDVCHMWp32FoEFlVbXEwbR10dp3w3YMAtLZKC74XBPgbgPxosOOt6ktT/6LyEtChZxhXMGqv1IRncfjqMyCu0IWCRyqxMC9FHbou2QGqBlHAzgVIjqGpot2qeMKWeA9GmdoF4c+scKQ5S8yCF+0eAKrTZNrDs5/PzpLo124V8Lj4zx2AYInhDL+wWbxM5PQvQYEM1wg2HnRz5kWnSlSlySLzD7Uc9xRglcpC1Rt1Nw/39fwJegl4Hc2N9O8XJ/xBm2mE6wjJlzAokxQFVoyhDOkGYUxjnkSTCNIe9zin1cp37H+PLBuF3vZtI7CJAfqq6W4/tge8Nvhx3tTs2eQwE07eiFJt+N+pGjg48u0Bl37iN8tkQ0bTk7M5QqLEE6usnYJoFLKL0LcuGM1X+YHANcDp5n2Ebtc0LH+mMeVbOrF97ARh6YYdHFkETCDT/tK+xHYDdXsYc0pq9i8xG3HH4ACf7HFWxVwVfqJaNDSaYPH0eXq9fe8/JjuOJuHbjeermHfUlrpAOik5rK971TRg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(376002)(39860400002)(46966006)(36840700001)(336012)(4744005)(478600001)(7696005)(86362001)(186003)(4326008)(44832011)(36860700001)(356005)(54906003)(1076003)(26005)(36756003)(2616005)(70586007)(70206006)(426003)(47076005)(82310400003)(82740400003)(6666004)(81166007)(8936002)(8676002)(110136005)(2906002)(316002)(5660300002)(36900700001)(2101003);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6857
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	be6afcbc-5bf2-476d-361c-08d9295e1188
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+f37c69d9C9HMxXkDUEO1AqdJBgnYekDM65yyvyv8ebEQydVsSkFZyo9W2zRVCJ1n9ExMP4qoE/grFSbNlzu4VN+XOpHoT+e+F+NqkXAGk2A67JRYE+YjPvIelUr5SD8mXz2u4rVHmzaB4Ei12Y4Ht1DSxGN0ni6NFQAfd1uOipBSx5RSuNnSQbTa9osURKclLXSXxVi+3T9gbsSWAKeMGwYAF1HKdvcQTPaETzqQkIuaHgW0FVpKMrPczrQ+SimSr0rd9cGp6BfDwxorA1C7gSL6zECiRd5OiUjtnp+o85QcQV0NfJ0WFSc6FrngwykIot0G/i7jX6VQQkF0T1ctkDFYA+aW+g6tMpcXfOGXg3jFdVTkv1mUu9K5iSo1961VpM2DYsR7FmnYIcwPZFJSQJet8i6VEQ5r/gT+bztjpEv/Hi+ryOy6eAUWXIa7SNRhshmPLdxU0bZQNNxTLjiTjBDxlR+lHfnjMCLpSuJU2DIiHiJjhmcxiQJkPDwA0EgyXIg5DIXCscpFpk3oGtO+Gyuk4o5zI+7L/XzfjSkNg9bTkMQLXpKEaxgJKAJN8i32uaami/GEfrEbvdZbZ2FyPDuvbdivjAZu605KV3h2vqkVFXIiZ2EvOum661EgZ6ExOBt157MNhdRATx4rB69i+SwkwlUkqlMJO2cVPdRRrCmJ8es+n/Nqop6X3IRCja2
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(396003)(136003)(376002)(39860400002)(46966006)(36840700001)(8676002)(82740400003)(36756003)(110136005)(478600001)(6666004)(5660300002)(2906002)(8936002)(316002)(4326008)(54906003)(70206006)(86362001)(36860700001)(70586007)(426003)(336012)(47076005)(2616005)(7696005)(82310400003)(44832011)(81166007)(1076003)(26005)(186003)(4744005)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:43:53.6967
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 40525657-4722-41d8-d4c6-08d9295e16e6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3878

Since the feature of Domain on Static Allocation is currently only supported
on the Arm architecture, this commit introduces a new CONFIG_STATIC_ALLOCATION
option to avoid bringing dead code into other architectures.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- new commit
---
 xen/arch/arm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4..f165db8ecd 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -278,6 +278,9 @@ config ARM64_ERRATUM_1286807
 
 	  If unsure, say Y.
 
+config STATIC_ALLOCATION
+	def_bool y
+
 endmenu
 
 config ARM64_HARDEN_BRANCH_PREDICTOR
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137535.254857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5F6-0003Ip-TN; Mon, 07 Jun 2021 02:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137535.254857; Mon, 07 Jun 2021 02:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5F6-0003Ig-Pm; Mon, 07 Jun 2021 02:43:56 +0000
Received: by outflank-mailman (input) for mailman id 137535;
 Mon, 07 Jun 2021 02:43:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5F6-0003IQ-5W
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:43:56 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.54]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1845ea1-542b-467f-92d7-6d1257dcd900;
 Mon, 07 Jun 2021 02:43:52 +0000 (UTC)
Received: from AM6PR04CA0024.eurprd04.prod.outlook.com (2603:10a6:20b:92::37)
 by DB7PR08MB3835.eurprd08.prod.outlook.com (2603:10a6:10:75::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 02:43:48 +0000
Received: from VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::e4) by AM6PR04CA0024.outlook.office365.com
 (2603:10a6:20b:92::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT044.mail.protection.outlook.com (10.152.19.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:48 +0000
Received: ("Tessian outbound 836922dda4f1:v93");
 Mon, 07 Jun 2021 02:43:48 +0000
Received: from ebf985aec23b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BA651BA8-78CC-4AA1-97E2-FC7EEFFB8E5B.1; 
 Mon, 07 Jun 2021 02:43:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ebf985aec23b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:42 +0000
Received: from DU2PR04CA0248.eurprd04.prod.outlook.com (2603:10a6:10:28e::13)
 by AM0PR08MB5363.eurprd08.prod.outlook.com (2603:10a6:208:188::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22; Mon, 7 Jun
 2021 02:43:41 +0000
Received: from DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28e:cafe::d) by DU2PR04CA0248.outlook.office365.com
 (2603:10a6:10:28e::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:41 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT062.mail.protection.outlook.com (10.152.20.197) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:40 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:40 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Mon, 7
 Jun 2021 02:43:39 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1845ea1-542b-467f-92d7-6d1257dcd900
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cC5QXJLTfZEmB3XeRfskTwUSAKX1zEyV98uM4QkI6go=;
 b=bQWQjoWa7x4Dy26UKrmGSwt6+XmWzDF7MQGjqnk+JYIdBpljmWjKjE9SrejpXmplCt7WSDDgWZV6VsdYNJyWuUgZzB7jdmmQzBjtxkmkF/P7llq47Xki7xWsnLqSxkPEPOPWcMVwH3Yxxg7noISegeR8eBNpxh07tCBZIieUqPI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9750348ef5da09c8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oMCWgHXTxOc0cNVXAY8QQxYXCBbsefASSAuvKuKKA3W1fuxJlc3YHRtBgTdtLtIV7aYLjd6QSqLjotRnXFzi8y9kj3TV/uw2n1uXOiuF9k/oYW57rB2s62cCdi05BzoW5AeGSCpio/rG/cc9uokRPYz2+Q2pe5W+jvcEC8U93Tbh/33T1VHmQ/Mw16neHrERzu80YMPml+H8gVIhF2cczIifD0S+w5/OK6jW6A9cdhtbL3KE9hqruovxrMPEsznr7PMVjMwahSEpaedOFGPqEM1J9f1gwx6HBB9HsMuQPE+mU3gHAFo59NvOmS24oFBnJZdDe75H24iBbVsN95wkjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cC5QXJLTfZEmB3XeRfskTwUSAKX1zEyV98uM4QkI6go=;
 b=O64j7kBeSfEmzG2oEZ6KmJhg+eR6kk3B48KNipA9NpmIv0zzht+RjZ6KQ5T/hNG0bMhQpZCvQzFW5qXRymW4CPEmb5dJcowNkRMUPDvp+wnjLwviGc3pxqQ6P8G80s6fRchjyVhsCvLSXeUfcCDnJXbnewuq+zlac2kJeH3AhuGU17vjMgIHxMRmkUo8nK07vGIpg/Fx+hdgxMzQwb14BlHXH/zo2XwoVweYXXHSRR3BJ8fifOafYSqgu/8eXbazsHY3OZkL9cAAzWY9GsSxaoJystrESPGBwAFsX4TGPMp837r9hqorffdECbsNjeACwkN+lUSMmtA0nljn9sX/Rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cC5QXJLTfZEmB3XeRfskTwUSAKX1zEyV98uM4QkI6go=;
 b=bQWQjoWa7x4Dy26UKrmGSwt6+XmWzDF7MQGjqnk+JYIdBpljmWjKjE9SrejpXmplCt7WSDDgWZV6VsdYNJyWuUgZzB7jdmmQzBjtxkmkF/P7llq47Xki7xWsnLqSxkPEPOPWcMVwH3Yxxg7noISegeR8eBNpxh07tCBZIieUqPI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 2/9] xen/arm: introduce PGC_reserved
Date: Mon, 7 Jun 2021 02:43:11 +0000
Message-ID: <20210607024318.3988467-3-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 05ef190f-59bc-4f7e-641f-08d9295e13ee
X-MS-TrafficTypeDiagnostic: AM0PR08MB5363:|DB7PR08MB3835:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB3835B189E317A6373D527F82F7389@DB7PR08MB3835.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2331;OLM:2331;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 2/4uV6FPm6UCjnOKker4xkzeHRaNboO9iP4x7G4+L/dVX7xF1EakLc9TXvOS4H8D9C90IS2+Kpb23IZlqCIi87E/wT1XVtJicddOGG0fT2ntYDWPEJyfr8H3UTJPJ3ZIgcf3HHGuY2wEwGgLfeoLRyJkezZjls7zFDXiVqKaB5GMjr+DIsH52aXEBbWxBGk9THXsesRQPOjWUd+92+WeL/LqLKWpWIwhZwT1V2t401s75G5d6yFq5x1TdeSfnvM3UCLQpRNZW+hgR2SE1T24gqB4QpdzgVou8f08Cbk7LdcjSwbBEQMo7U6xwF6XbmgJPyd80NeAfoPPLx//rMYd+5Ws5xsd/KnCUoL0Aaxl5RiRYpu2KXbfHT+DKb+BC2OTqgXaMEthBC69aLBxka+mIkIifKBs5hUSiq6SVCLxr7R8ivSZEzdksmjkxRrIDSrxSrMTodFB61W+tk4REFjPN+sL8pkfCuyEbx4KE2jk/uZPUKl1h6CozasM+fHffAL224uY+wPydbmbOH7MiuTJqSb5Sm4annHwPAvR2NGOOTQWjJmLS5ytlUYVxawlsfDu6xJzMjLi8jKLozVfzh+hH8iLsrEWUvREdtsJEAuK2TckOmyO2uGGfv/Ai2mKh44OgrsRnwxkOhOJRxfUi2d2mM7sSkZkIJw2n8Hm1548rVIWmGPuFuECdviojEV9W5HaCpY16f/Vkj8IYvz7qECvXg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(46966006)(36840700001)(6666004)(36860700001)(478600001)(82310400003)(70586007)(36756003)(316002)(4326008)(47076005)(4744005)(2906002)(7696005)(54906003)(26005)(86362001)(1076003)(5660300002)(110136005)(70206006)(8676002)(2616005)(186003)(336012)(426003)(44832011)(83380400001)(356005)(82740400003)(8936002)(81166007)(2101003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5363
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	01b2e2f6-f140-4562-e251-08d9295e0f4b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nlerk61MW5oPLFmXPREyYUinC8sxtRf9MD1dgHcbrHjnzimfVM7iX8HnqasiHILtwiWlOjkWTfjyHehaC9imIEJ+KFXMd81c+Z4kjnDaax60y/i6xmC8eIytrwKaG7jlMs9H3OdK2QdC8OBqRARlXYzwAkOZ4/we5t4SWgJryFJ9Tqgw5OemYfu1S7aZtt0nArZ128NZPj07LZQNjaJH3nGSeXTzBRNgZQBp6NsIQjbSkZyQC86bqdoEk0MAhA2p13l/TPaxuBWesHSqBZzBIEix2wxzqYwcbOVsHOQBNsKrcUwEBngH7YutRM7J4IITFisIv+rTBhj1tExiwxwDAPs/+jPr99VnAvgayhKaUgkopY9VluJQb4kLQVRkQrrbrZUc31cjIFLPe7x+Jk/xtcoHr0IfnEI75skPsSrc1fh1InQjj8zMyFYUhjWlHn07HoIHXqYBWbUbXCNYItyJj/w5EF23uIdPRbA6GnmYOjtymA7FLDsoFFc7LORDMfnXIiz09m9EnqH0aEOf2uAywrdKUAOARnnroDlLk/aTifE8XwHLkaYzc9AzorhUGpC2SOqLzD8+/viVPVTQoxnXCFgzA9K6nMcYL9SC61Ibqopk3Wgg+Pkm8tUoy1aecoQallPenv8EkMo5LKvOR1TxPzeG94IN6hpbMKOPaAJZPKAM5oXvQpOvqXORs2NrSduQ
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39860400002)(346002)(136003)(396003)(36840700001)(46966006)(2616005)(478600001)(8936002)(426003)(82310400003)(36756003)(4744005)(86362001)(70206006)(83380400001)(1076003)(5660300002)(44832011)(82740400003)(4326008)(54906003)(110136005)(316002)(2906002)(6666004)(36860700001)(47076005)(336012)(81166007)(26005)(7696005)(70586007)(186003)(8676002)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:43:48.6371
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 05ef190f-59bc-4f7e-641f-08d9295e13ee
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3835

In order to differentiate pages of static memory from those allocated from the
heap, this patch introduces a new page flag, PGC_reserved.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- remove unused reserved field in struct page_info
- remove unused helper page_get_reserved_owner and page_set_reserved_owner
---
 xen/include/asm-arm/mm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 0b7de3102e..7034fae1b6 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -108,6 +108,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+  /* Page is reserved */
+#define _PGC_reserved     PG_shift(3)
+#define PGC_reserved      PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137540.254890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FH-0004Nj-0q; Mon, 07 Jun 2021 02:44:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137540.254890; Mon, 07 Jun 2021 02:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FG-0004NV-S5; Mon, 07 Jun 2021 02:44:06 +0000
Received: by outflank-mailman (input) for mailman id 137540;
 Mon, 07 Jun 2021 02:44:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FG-0003IQ-1c
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:06 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.71]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b257ead-462c-4549-b0c5-e09e4d61eec7;
 Mon, 07 Jun 2021 02:44:02 +0000 (UTC)
Received: from PR2P264CA0045.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101:1::33) by
 AM9PR08MB6211.eurprd08.prod.outlook.com (2603:10a6:20b:2de::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22; Mon, 7 Jun
 2021 02:43:56 +0000
Received: from VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:101:1:cafe::69) by PR2P264CA0045.outlook.office365.com
 (2603:10a6:101:1::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT056.mail.protection.outlook.com (10.152.19.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:56 +0000
Received: ("Tessian outbound bf434e582664:v93");
 Mon, 07 Jun 2021 02:43:56 +0000
Received: from 99d96216c950.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 12EA9111-7FA4-470D-94E8-3C8E040EBF66.1; 
 Mon, 07 Jun 2021 02:43:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 99d96216c950.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:50 +0000
Received: from DB6PR07CA0157.eurprd07.prod.outlook.com (2603:10a6:6:43::11) by
 PA4PR08MB6255.eurprd08.prod.outlook.com (2603:10a6:102:f1::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.22; Mon, 7 Jun 2021 02:43:49 +0000
Received: from DB5EUR03FT042.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:43:cafe::9) by DB6PR07CA0157.outlook.office365.com
 (2603:10a6:6:43::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.11 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:49 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT042.mail.protection.outlook.com (10.152.21.123) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:49 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:47 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b257ead-462c-4549-b0c5-e09e4d61eec7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1pQ5ZYSIi8AeHiMoc+JqHBOk6Ajg45oD8n/vlgw9XyM=;
 b=wPpwC0p1EDoBhTPIPznNVFrBMR3CwEE1sy7kSnYsHMt6smxb4yxzHriPGaav0rjw+gIv8mKtkY9A9xSmjLWXd5wpGlReo8UgdhduW90LZAbNBUfVJjJKVascbgJfLktgDUVNcjasEneALaNSSuurjqyBMv2r0dGvuKMCIAUuV6o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: eca746ad0a4f5db5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=auQr5+Hw8bVa50IMqI76MdGxrxH7fBhq5+mqE6MP8hhFXIzs71vGdIm4yas3v40DEw1zdAdlca7lbTrTLpc6ZI/6DpNzcgiHBr2xfDRaLMoSfuh8jUBJXWrtCyyrsO9bOpeSQMGZFFL7R4KlriVeHMn6IWPDtAGIpo2z4rGPa1VILH9NOkxP3ykaRQZUtBwrQWQvBLOvdIObBmvBIktU6WNKgdlBbH7KBRiLG+QqBh6g0X/p0tirnP5pxC4M56kXJm46zi/qw4qVYgdisrR//lu9bG5Y1bwQturXJgYlzBvQl2apuKxjCGkrrWHobagWHB5E9Ypg/WhfJ51vWb40kg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1pQ5ZYSIi8AeHiMoc+JqHBOk6Ajg45oD8n/vlgw9XyM=;
 b=XUMNC1gDAdLNBmsvr6PjXFeKV4S1I9YBMl1oiHTN1B7BD8gBdzCUtR05cwb4J1IGHDxk6mt/xgFdxDIuVBNu8VlJk3gqYkWpHfJrfCOSL40FheaFEpGxnrUj74z4/lqHmdkQDiiKYy1i8hX3179azDJqKE6bD2CXmYlxkLKQLlGeHGnfNBhtD8BZqE13CRvdBS5DYPs+agGcmX3EhDFlKibZ/jlfOZSe9yYv3DHI7vy+NwPovjpICFbkGa9gzocYfoq2pZuIxeP8x8WpnnXJJh8AdmrKgheKMRm8+VZ1L6sCte9aLtogtARyoB4Rlwc9lJ6XY5Ue5WtPFpv0/T8OSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1pQ5ZYSIi8AeHiMoc+JqHBOk6Ajg45oD8n/vlgw9XyM=;
 b=wPpwC0p1EDoBhTPIPznNVFrBMR3CwEE1sy7kSnYsHMt6smxb4yxzHriPGaav0rjw+gIv8mKtkY9A9xSmjLWXd5wpGlReo8UgdhduW90LZAbNBUfVJjJKVascbgJfLktgDUVNcjasEneALaNSSuurjqyBMv2r0dGvuKMCIAUuV6o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 4/9] xen/arm: static memory initialization
Date: Mon, 7 Jun 2021 02:43:13 +0000
Message-ID: <20210607024318.3988467-5-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e7659ad6-78e9-4fb7-b9c9-08d9295e1896
X-MS-TrafficTypeDiagnostic: PA4PR08MB6255:|AM9PR08MB6211:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB621131DBC25F3B66867BFE29F7389@AM9PR08MB6211.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:409;OLM:409;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ConVpp4M6/fpTn6TFFXfhR5JCHasFD5yOr7RBESrZoNx/Hy/a7hW/HeDYzMWBkiBI3yZzWOHkPpD2noFvPnhnrAvM+t6u1VhuM6RXUhZJhx2NDqC7sd1yY29moWCtq7Z8KG+9mjwffmW9rUHoo25nv/QU5CUTEOrIx78NfX+MQxiNVo9hU25eHuc6VX9V4RhpLmewuUQMU7Z9uKC5m0MNHGllLAAkV7BQujAcgVhLNtJXfLdWTtOGFAlZnq0YjLxyTkHw/bUQUGBh6YxKCJNBmrEBVC9o0yLgYfMQPEwBIZnmfHuJSBAomiVHd9ay86uPayRVPU/UKEYKJEMeJvK+xsEg5i1I1swe7PnGRhQ8qGzdZ6bwkaEqAYOjQWWqsttNqNVU9be9/571OaXn3u4xEDRkkXJ+XQC6im+vKy+Sz4MivggClVIhUT4wmu+/2UTmhUtVBQhZob2ai/6nSm0UPaLp0OiUGt9N7/s8etEs4G1QDhtJbMtKFSwR8il5aa/hgiU9IGkmQLkkvVhW5TF3IAcY874vlp5nKg1uhDrwxyhBGwCxXYP52/WiyGkYd3lJu7bVF8MW6KXqmljthzfuY+KJpmsjz/6pFt2y18VcAgEJHgTzygM4AliVfgg2/QqhK81WVHarZ9n/JwZFiee/DJSyODJF7mtwJNubSC0hTgEGsD/hGqbS0TVhMfniBMRYkrclrMCSyAJfeuGJLwGnA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(39860400002)(136003)(46966006)(36840700001)(44832011)(7696005)(70206006)(5660300002)(83380400001)(426003)(82310400003)(81166007)(70586007)(26005)(356005)(36756003)(47076005)(2616005)(82740400003)(1076003)(110136005)(54906003)(8676002)(478600001)(6666004)(36860700001)(8936002)(2906002)(316002)(336012)(86362001)(4326008)(186003)(36900700001)(2101003);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6255
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d95f0018-13fa-43d6-8618-08d9295e1414
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zXeFbB4BmLfvcf67Q/bizRPdn0b38fuO5dKqg5v4BmNJonun0t+/kBILcG2yZkEz+WXvFqxVAXd5lYdlyzr7kW7UWGLACOuXxtqRNoKeDjsYxl8dNFH9IrSnJMvKzT2BaohNfSeQiG/tONUE8nyLG5soeozd5uwXsSzpVoORASvYcFlSr9suR1etmlpmmh8zjlqxwhkKbhPooRyjpN78ArUQfiQagtyWsGgwjhFXZDZPLexfiA5BLM0VctFMJxLuxKy6S8WXHuvJOUttz0gkQnHoMOT30mtGwQaufFeui3NZcnMAQUNPIpwGrhNn7v/sMJ2/PnhbzMq/7+rDYmzQGlBJ04o6L/tH72ruw2Ym7oIVXp1hMzDmyBK7THTwImk6Fh59csA6Pp5acj+RPUqycOuqqZYR8aMA0lYZCo9LKv99dT8wRxZKWgY/ZsQptGUOFYHXoaUMWphNlCXuV4HSF1KAfZRy2PLXomliIlkSYWGCz7A31woxFNuQ2svY9QNuDLjXCATwkFGv5D5ZxTZCdM3nqYUTCWIZktzkuKUmWHSNg8og7At88rhCKHBRYJiFWvjOzY0jUHKwxuaUR7Ra7SsL5UwJTDYYnFIYHgHeipmcQ8AQk0igGrGh0LS+LvWUDqpFz9xSQiXFjoY2Sf04T5OmVBan+pYAYDht2JK/DYcxwPXIgNs8Swjn87Z07ozY
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(39860400002)(376002)(136003)(36840700001)(46966006)(44832011)(7696005)(70206006)(5660300002)(83380400001)(426003)(1076003)(70586007)(26005)(36756003)(47076005)(186003)(2616005)(82310400003)(81166007)(82740400003)(110136005)(54906003)(316002)(336012)(8676002)(478600001)(6666004)(36860700001)(8936002)(2906002)(86362001)(4326008)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:43:56.4512
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e7659ad6-78e9-4fb7-b9c9-08d9295e1896
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6211

This patch introduces static memory initialization during system boot-up.

The new function init_staticmem_pages is responsible for static memory
initialization.

The helper free_staticmem_pages is the equivalent of free_heap_pages; it frees
nr_mfns pages of static memory.

This commit defines a new helper, free_page, to extract the code common to
free_heap_pages and free_staticmem_pages, such as following the same cache/TLB
coherency policy.

For each page, free_staticmem_pages performs the following extra
initialization steps:
1. Change the page state from inuse to free and set the PGC_reserved flag.
2. Scrub the page synchronously if needed.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
Changes in v2:
- rename the parameter to nr_mfns
- extract common code from free_heap_pages and free_staticmem_pages
- remove dead code on other architectures, moving some to an Arm-specific
  file and guarding some with CONFIG_ARM
- mark free_staticmem_pages __init
---
 xen/arch/arm/setup.c    | 27 ++++++++++++++
 xen/common/page_alloc.c | 78 +++++++++++++++++++++++++++++++++--------
 xen/include/xen/mm.h    |  6 ++++
 3 files changed, 97 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 00aad1c194..daafea0abb 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -611,6 +611,30 @@ static void __init init_pdx(void)
     }
 }
 
+/* Static memory initialization */
+static void __init init_staticmem_pages(void)
+{
+    int bank;
+
+    /*
+     * TODO: Consider the NUMA-support scenario.
+     */
+    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
+        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        bank_start = round_pgup(bank_start);
+        bank_end = round_pgdown(bank_end);
+        if ( bank_end <= bank_start )
+            continue;
+
+        free_staticmem_pages(maddr_to_page(bank_start),
+                            (bank_end - bank_start) >> PAGE_SHIFT, false);
+    }
+}
+
 #ifdef CONFIG_ARM_32
 static void __init setup_mm(void)
 {
@@ -872,6 +896,9 @@ void __init start_xen(unsigned long boot_phys_offset,
     cmdline_parse(cmdline);
 
     setup_mm();
+    /* Initialize static memory, if any is present. */
+    if ( bootinfo.static_mem.nr_banks > 0 )
+        init_staticmem_pages();
 
     /* Parse the ACPI tables for possible boot-time configuration */
     acpi_boot_table_init();
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 958ba0cd92..8c00262c04 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1376,6 +1376,37 @@ bool scrub_free_pages(void)
     return node_to_scrub(false) != NUMA_NO_NODE;
 }
 
+static void free_page(struct page_info *pg, bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+
+    /* If a page has no owner it will need no safety TLB flush. */
+    pg->u.free.need_tlbflush = (page_get_owner(pg) != NULL);
+    if ( pg->u.free.need_tlbflush )
+        page_set_tlbflush_timestamp(pg);
+
+    /* This page is not a guest frame any more. */
+    page_set_owner(pg, NULL); /* set_gpfn_from_mfn snoops pg owner */
+    set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
+
+#ifdef CONFIG_ARM
+    if ( pg->count_info & PGC_reserved )
+    {
+        /* TODO: asynchronous scrubbing. */
+        if ( need_scrub )
+            scrub_one_page(pg);
+        return;
+    }
+#endif
+    if ( need_scrub )
+    {
+        pg->count_info |= PGC_need_scrub;
+        poison_one_page(pg);
+    }
+
+    return;
+}
+
 /* Free 2^@order set of pages. */
 static void free_heap_pages(
     struct page_info *pg, unsigned int order, bool need_scrub)
@@ -1425,20 +1456,7 @@ static void free_heap_pages(
             BUG();
         }
 
-        /* If a page has no owner it will need no safety TLB flush. */
-        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
-        if ( pg[i].u.free.need_tlbflush )
-            page_set_tlbflush_timestamp(&pg[i]);
-
-        /* This page is not a guest frame any more. */
-        page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */
-        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
-
-        if ( need_scrub )
-        {
-            pg[i].count_info |= PGC_need_scrub;
-            poison_one_page(&pg[i]);
-        }
+        free_page(&pg[i], need_scrub);
     }
 
     avail[node][zone] += 1 << order;
@@ -1512,6 +1530,38 @@ static void free_heap_pages(
     spin_unlock(&heap_lock);
 }
 
+#ifdef CONFIG_STATIC_ALLOCATION
+/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
+void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                 bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+    unsigned long i;
+
+    for ( i = 0; i < nr_mfns; i++ )
+    {
+        switch ( pg[i].count_info & PGC_state )
+        {
+        case PGC_state_inuse:
+            BUG_ON(pg[i].count_info & PGC_broken);
+            /* Mark it free and reserved. */
+            pg[i].count_info = PGC_state_free | PGC_reserved;
+            break;
+
+        default:
+            printk(XENLOG_ERR
+                   "Page state must be PGC_state_inuse. "
+                   "pg[%lu] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
+                   i, mfn_x(mfn) + i,
+                   pg[i].count_info,
+                   pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        free_page(&pg[i], need_scrub);
+    }
+}
+#endif
 
 /*
  * Following rules applied for page offline:
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..df25e55966 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,6 +85,12 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+#ifdef CONFIG_ARM
+/* Static Allocation */
+void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                          bool need_scrub);
+#endif
+
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137541.254901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FJ-0004iu-Bk; Mon, 07 Jun 2021 02:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137541.254901; Mon, 07 Jun 2021 02:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FJ-0004if-72; Mon, 07 Jun 2021 02:44:09 +0000
Received: by outflank-mailman (input) for mailman id 137541;
 Mon, 07 Jun 2021 02:44:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FH-0003W9-Jo
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.65]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92589372-4352-47c5-8fff-33cf698ec7b6;
 Mon, 07 Jun 2021 02:44:04 +0000 (UTC)
Received: from AS8P251CA0018.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::25)
 by AM0PR08MB5091.eurprd08.prod.outlook.com (2603:10a6:208:15e::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 02:44:01 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2:cafe::ee) by AS8P251CA0018.outlook.office365.com
 (2603:10a6:20b:2f2::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:01 +0000
Received: ("Tessian outbound 6d1d235c0b46:v93");
 Mon, 07 Jun 2021 02:44:00 +0000
Received: from dd53acc8070d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 84A0B27A-975F-4E20-9B50-7054C63CE008.1; 
 Mon, 07 Jun 2021 02:43:54 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dd53acc8070d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:54 +0000
Received: from DB3PR08CA0009.eurprd08.prod.outlook.com (2603:10a6:8::22) by
 VI1PR0801MB1854.eurprd08.prod.outlook.com (2603:10a6:800:5c::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.25; Mon, 7 Jun
 2021 02:43:52 +0000
Received: from DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::9f) by DB3PR08CA0009.outlook.office365.com
 (2603:10a6:8::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:52 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT026.mail.protection.outlook.com (10.152.20.159) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:52 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:51 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92589372-4352-47c5-8fff-33cf698ec7b6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K0fOlzkmZo8NkZdZnpDOUw9db/Rylzfd7fdhb0LDRQo=;
 b=NcwTHjVfAJ+iZgFKXRSAKk/xplxOYrdho2V6xsfOfq7+/ZyClBdEew3BzvND3uKaXaW4BdCiHgK73P0hpIUEiFJYu+Vn31LB54rR2lQqPBB2dEfU9xRv7NvMu/r5kpD6hbc5ifEWEYFuyn6iIOHfuyac1Nyo/JW0EMePxnIodu8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 263d801f856ac61a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Rl1ylBfpsqgOxamCMH10uxipty3iprdODYqVDFeaTuDva+EXl+1Mf+nrReH3tewrA/KJ9pgGHzUaleIuf4yaI+roWiE4p2dXsXnxh6nXuC66skCcfWlamBm+sNoccULkWiKoZYH19uUlavnamr07yL78mNyl1sv2K1Shahrq22hoI4brd9l9+H1DRG3t6WpLlkU9IpzKGgqUsPuT/qUN1eZv0d/9zwUXWmpRk1iq4LzSCudQiQP4cQAr18YiL88S/fqxD1tuNKcFs+vZ4W54sgQz5eQPWyIwEZYxxiHGK2VEHUcqhmofXmdstu8/dHrkw/cdv9dFVuTqoZc/tzhhEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K0fOlzkmZo8NkZdZnpDOUw9db/Rylzfd7fdhb0LDRQo=;
 b=obXBO0BXvri9l0PxEjjghNQLeLBODpY65aLFiDBdza8iCruJ9fzUpy/NGSdNDDqY5s5M7H0/Z5eZ6cASWE/siP+Q1W37kTj7bpY5vTgyfV1WlVRvR0vFSHs0CNqrwlxztbhJKfQedLj2kmO5HbhaOtnlrxpsBaauPlWpur6OaApT6x3p4SlnOEU9oZrDCTvoIYpnc06efNtF6l2omjT9+KGPTEr+4BHZ0AGdCaSWdp7eu7uucUtVhddT2sIVAt9Lfwu8Leq2X8tS7EXSNwoQyMZYfYV/zFp1sIForzCzYlm8mK40UUexhFbsniSQR6aFtx7QzVjIylLHTCLaeZf4TA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K0fOlzkmZo8NkZdZnpDOUw9db/Rylzfd7fdhb0LDRQo=;
 b=NcwTHjVfAJ+iZgFKXRSAKk/xplxOYrdho2V6xsfOfq7+/ZyClBdEew3BzvND3uKaXaW4BdCiHgK73P0hpIUEiFJYu+Vn31LB54rR2lQqPBB2dEfU9xRv7NvMu/r5kpD6hbc5ifEWEYFuyn6iIOHfuyac1Nyo/JW0EMePxnIodu8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 5/9] xen: introduce assign_pages_nr
Date: Mon, 7 Jun 2021 02:43:14 +0000
Message-ID: <20210607024318.3988467-6-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2a9d48f0-e967-45bc-0d4f-08d9295e1b7c
X-MS-TrafficTypeDiagnostic: VI1PR0801MB1854:|AM0PR08MB5091:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB50910D29E84A16C99AB6BD77F7389@AM0PR08MB5091.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ip0DfnUXDQx+26sRcl8H5XDeYFELaRGl6JXPvJfYizw6RY2wJ/VX29yzwzAqUKncaKGKNt8hCq4kRb69x6Ldzxi44arB4ql7fJKf66WbScu1OnuLXianeFqAl/H5N1gk3/ijuwlV0DH42p7eHhZWbXqOUALZsmhZk1mkpIrkhA1E9ofk1o399Z8olCUClsBqY1aiHvr7EhV/3AwL8L9ivr1bpAeJAg4hLtVacRS5DNvl7tlGWnKA2uOEiOrJf3Rk1PMDcPs+Hhr/lI6llUPBIxw/PKxnQALQgciBQLDDGyx8Vbla2bxQZRE0jx9oN5bBPpgbbtWS/KnlM4a4gdiyGnFhVD9GhxbT8b6xEdkpNS8YHf4UKX7LVL7/bR0+nCFYfXozn45Yhj8t2v6PXigVccc3AR5x3Mz4YY/J5kPOPSbf4PboiKGrVrHr7ySRUJHfrTgoTyKYYGWIHDnWVKAdC/1y2zWIDfhtyAmm0dCFNUE8s1Oh5HClMV3gTdX3pa99OYkaA7TLt5M5Zj2wzZqJ4gZfX6GV5ArbngwGes5nedSXaZr9GqHlmWFxqnMhPnGZZoAj74Z3J6xIQHUFcjb4r4ps0PwYPW8TG/OctKqnzr/smt0bjLI8l9R6Roc/Y1m4athuCLDoXBMMxvmT56aKb1xT0XUOy1y1LwE3HuBSK/hzVTOstJO2pStZSL3RMn2WkRVr5JFvBvFvgVgy9I9BNA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(346002)(396003)(46966006)(36840700001)(36860700001)(2616005)(426003)(2906002)(83380400001)(36756003)(1076003)(44832011)(82310400003)(336012)(6666004)(81166007)(8936002)(82740400003)(54906003)(110136005)(478600001)(4326008)(316002)(70586007)(47076005)(356005)(7696005)(186003)(26005)(70206006)(86362001)(8676002)(5660300002)(2101003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1854
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	363fd8cf-9a61-4515-c7c1-08d9295e1633
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jhibhITHFGAJOMP+jWEUFXl33TGqj868lI1NFvSLhcCDHkcHXB2Dnd4zYUnGk+PPXLiBjX6MHs7JK9VsPzBeDPhIOAniGewiaTDNPplK+7iqeWTTsCPqlbGOli5fBZR2P8OO4nAt0H6ZPJWq0ioTRM83GQ2+U+34vhpA0J0vtt27fEmPQO9Seqw36WIkzwE3ZRHsnjaBori2aewqpuzvyjYnw5BxRY74/epnUBj7b2Ddl+4xE0ekjv/Dc8sPXXRkfM+dPLoezW0a0U8bCkQsCPTw4q/5et58iZcX+FgGTnlMXJyvMrZb+r1EfuaGkbM4RP1BDFvHw4XDFtlIfzI/uQc4eEsLQcdgOLrm+xLp3+wWDs0Rk3jHINlvmIjj0LmDcIW0m0J8bplbFlkCMpamUiiNwZZJrmINlrCJ9HSdONhwQrgkHxCEfaqGXRP+O4wSZBGWUUEWM8cBm/Seiz7IDtlgDw8X9AMi31CYAmsxD3l1Cg/THpTpZ2WseZaYhtxDepdI4ufzrZOwpYYQfnGQwyuI5yDyzxG1+Q9zK5wc8ozlW/V6jS5UPbpIm0fJvLK3BTTADaDJFa64ciw+4WiSp+Emqv2REsIUUfALZuZy0beVZFBltx4Nk3mVPflSS3GrRan6IAJljwF2sEZmit92HyRt80MVMK0iWqO5ZN6SWkLDj+xfsLrAOvUYkJB1oDGs
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(376002)(346002)(46966006)(36840700001)(186003)(82310400003)(86362001)(26005)(44832011)(47076005)(336012)(2616005)(82740400003)(426003)(478600001)(7696005)(81166007)(6666004)(5660300002)(83380400001)(36756003)(70586007)(70206006)(4326008)(8676002)(8936002)(316002)(1076003)(36860700001)(2906002)(54906003)(110136005)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:44:01.3845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2a9d48f0-e967-45bc-0d4f-08d9295e1b7c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5091

Introduce a new interface, assign_pages_nr, to handle cases where the number
of pages is not a power of two. This saves callers the trouble of splitting
the size into power-of-two chunks in order to use assign_pages.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
Changes in v2:
- introduce the new interface assign_pages_nr
---
 xen/common/page_alloc.c | 26 +++++++++++++++++---------
 xen/include/xen/mm.h    | 10 ++++++++--
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 8c00262c04..e244d2e52e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2301,14 +2301,14 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
 }
 
 
-int assign_pages(
+int assign_pages_nr(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned int nr_pfns,
     unsigned int memflags)
 {
     int rc = 0;
-    unsigned long i;
+    unsigned int i;
 
     spin_lock(&d->page_alloc_lock);
 
@@ -2324,7 +2324,7 @@ int assign_pages(
     {
         unsigned int extra_pages = 0;
 
-        for ( i = 0; i < (1ul << order); i++ )
+        for ( i = 0; i < nr_pfns; i++ )
         {
             ASSERT(!(pg[i].count_info & ~PGC_extra));
             if ( pg[i].count_info & PGC_extra )
@@ -2333,18 +2333,18 @@ int assign_pages(
 
         ASSERT(!extra_pages ||
                ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
+                extra_pages == nr_pfns));
     }
 #endif
 
     if ( pg[0].count_info & PGC_extra )
     {
-        d->extra_pages += 1u << order;
+        d->extra_pages += nr_pfns;
         memflags &= ~MEMF_no_refcount;
     }
     else if ( !(memflags & MEMF_no_refcount) )
     {
-        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+        unsigned int tot_pages = domain_tot_pages(d) + nr_pfns;
 
         if ( unlikely(tot_pages > d->max_pages) )
         {
@@ -2356,10 +2356,10 @@ int assign_pages(
     }
 
     if ( !(memflags & MEMF_no_refcount) &&
-         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+         unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
         get_knownalive_domain(d);
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < nr_pfns; i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
@@ -2374,6 +2374,14 @@ int assign_pages(
     return rc;
 }
 
+int assign_pages(
+    struct domain *d,
+    struct page_info *pg,
+    unsigned int order,
+    unsigned int memflags)
+{
+    return assign_pages_nr(d, pg, (1U << order), memflags);
+}
 
 struct page_info *alloc_domheap_pages(
     struct domain *d, unsigned int order, unsigned int memflags)
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index df25e55966..25d970e857 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -131,12 +131,18 @@ int query_page_offline(mfn_t mfn, uint32_t *status);
 
 void heap_init_late(void);
 
-int assign_pages(
+int assign_pages_nr(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned int nr_pfns,
     unsigned int memflags);
 
+int assign_pages(
+    struct domain *d,
+    struct page_info *pg,
+    unsigned int order,
+    unsigned int memflags);
+
 /* Dump info to serial console */
 void arch_dump_shared_mem_info(void);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137542.254912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FO-0005GW-3a; Mon, 07 Jun 2021 02:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137542.254912; Mon, 07 Jun 2021 02:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FN-0005GK-TR; Mon, 07 Jun 2021 02:44:13 +0000
Received: by outflank-mailman (input) for mailman id 137542;
 Mon, 07 Jun 2021 02:44:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FM-0003W9-Jn
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:12 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe02::623])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef5b5427-2c06-4ac0-87cc-8ddf888cac62;
 Mon, 07 Jun 2021 02:44:08 +0000 (UTC)
Received: from AS8PR04CA0054.eurprd04.prod.outlook.com (2603:10a6:20b:312::29)
 by DB6PR0801MB1719.eurprd08.prod.outlook.com (2603:10a6:4:3a::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Mon, 7 Jun
 2021 02:44:05 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::c9) by AS8PR04CA0054.outlook.office365.com
 (2603:10a6:20b:312::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:04 +0000
Received: ("Tessian outbound cce4cc55b7ee:v93");
 Mon, 07 Jun 2021 02:44:04 +0000
Received: from 2fe3bb3aaf7d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B6965954-0DF6-472B-A63E-164F0ADC7278.1; 
 Mon, 07 Jun 2021 02:43:57 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2fe3bb3aaf7d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:43:57 +0000
Received: from DBBPR09CA0005.eurprd09.prod.outlook.com (2603:10a6:10:c0::17)
 by AM9PR08MB7013.eurprd08.prod.outlook.com (2603:10a6:20b:419::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.26; Mon, 7 Jun
 2021 02:43:56 +0000
Received: from DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:c0:cafe::a) by DBBPR09CA0005.outlook.office365.com
 (2603:10a6:10:c0::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:56 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT049.mail.protection.outlook.com (10.152.20.191) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:43:56 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:56 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Mon, 7
 Jun 2021 02:43:55 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef5b5427-2c06-4ac0-87cc-8ddf888cac62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pGxfEEBgvfyzUcQmDtVFw6fyVeow+GfVG9WP8F/FLKo=;
 b=XfnJA6E0yAXuYVbO2Iefp05/6kw7vdmXK3Cd7+lZckqHN2o+sZqaf+wWikIKDT/esPHfbP0dQwnXUDuh8M5aisehijz/gBB+VcH3H5RzxPVfiALRmPKC1/zbMdASNHbbDqy4tUbyijqQa3vxZW7lqwVr6ydE4a3zXMADQbb/aMo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5ce1bddece8bc241
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gkKU40QEGQ7Y5k7LyIGVXnPPaJmLpF6vuP5RtGTrBIGdETEMF21StvwBdZfR52wdsjNgqnsSLcsXZ6X6uBDDk6AHgYEYCjE88XsqMHnnYG4Hw9o4FWQoeDuYSyOP0vuXWPU1ag5IjP6Q106zOn320tFeKDue7FuV9e9Do2XSK7P8UICADwqlidQm3JRsvNa+gvqjz98g/aR4D3TYl4hlrwFTVjHozN7+kT2f9CDzJ/F8JxWe55y39FtrRnfgdoA5jEXqlHoW+DTryEaGxaqEumSeOOnhUpkFpt+YxlfKViLreHDS0hLDzeI1YXO8shY+Z1q/BIkkZnfLg/ChIIZZWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pGxfEEBgvfyzUcQmDtVFw6fyVeow+GfVG9WP8F/FLKo=;
 b=Y1UOHINYILZP7T7FKZf7Npfd+eCRjeXHDXL9P0MEEAwL1jkifZu2CZclW/OIr8PcH7LvxD7PLd41gZLoJqV45RcR4XB1Z15PktEKOIATMAY7VLd+ud6xNw05iqz2YOBZpKqiFo5VOS1Am3bl7tgCoRyfSU1U2Zwoie3ZVeHBOng1NCA517seoWon2CBGj6OAhIfCY72GGd4CfhXNv+g5gFZc83TiowzMg4YWTptdHPxGjDwTVECleaQOxJoyP6ND2xj0oXBxOlaAOEKMdrmHiHGL4aUXppVeoEzZJ8RF8g4fAsnrYKcLza6fkvozUtAzaXLCd+7tFPm0g0cMBzU5gw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pGxfEEBgvfyzUcQmDtVFw6fyVeow+GfVG9WP8F/FLKo=;
 b=XfnJA6E0yAXuYVbO2Iefp05/6kw7vdmXK3Cd7+lZckqHN2o+sZqaf+wWikIKDT/esPHfbP0dQwnXUDuh8M5aisehijz/gBB+VcH3H5RzxPVfiALRmPKC1/zbMdASNHbbDqy4tUbyijqQa3vxZW7lqwVr6ydE4a3zXMADQbb/aMo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages
Date: Mon, 7 Jun 2021 02:43:15 +0000
Message-ID: <20210607024318.3988467-7-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cbaf51ef-202e-4085-a234-08d9295e1d6e
X-MS-TrafficTypeDiagnostic: AM9PR08MB7013:|DB6PR0801MB1719:
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB17198D064C1FB36F4F254FA8F7389@DB6PR0801MB1719.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:3631;OLM:3631;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dlXr1y4EkSnQOPH4QSg3nnPIJWK5k6gWcJfOUtcwtk7vCkKNLmAdQ2F0cjYGDSdYxazC2Pr7ZTwnVkSk1huAkAwo41HFm6kqsbUcXicMcMMUyrbb44PA4pZXc7MTCcUu67Jj0RoX389UEfWKlf5/zYMVKZW/3YS9xBpr0Nk+D486q7hAYxe8G9KHhWSrDJqBbRRJzl2x00zQ/AuZVqTq2jtReMuPZqIvokFqcmJwz8JErBWYGbLy0D0MWAQzc4tLibL04fg697GzQJltYoHxxqTfAffQBxAsxQgr/2qqjS2RLEE2jQgHYKjTx83mTF5IwgAnSFB1SMRSVU19oc3MFanPvYHDHFv4TCq04jJoou/qyCrNdRzB79sukW0IksEhET/OMq5HeXHywPo4kAo/+6ikz2Kw1JxFPJcWEQknPm8d4d5aCIc82oXkuewGQXrSTRDQqwKNWZjGx5jYuNXcjQzYqaEzZz7teukRHFRK9b/U/FsM4OYM7YybBME2SauEPAvTjremyWcDfV01/qGWoqXx7sThzLrq2KZon05fc7hciLW4qsDQ7wJIXUopsGOyTc9dsHiusv+WwXTljC6OCHPljgu8mw1j0dfFaolfJ4MT1sUregbcPgG4o432Yx1oqcWQDImy1ORFMPE4bcGoTApYyLK/jg7i1DIcI1JRD/d9Wz8gsnNoAi6fYERiev5e4pqaq3EIgrcSVulb1m9qwA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(376002)(346002)(36840700001)(46966006)(81166007)(336012)(2906002)(316002)(8676002)(110136005)(6666004)(47076005)(36860700001)(356005)(70586007)(186003)(26005)(7696005)(86362001)(70206006)(1076003)(83380400001)(8936002)(2616005)(478600001)(36756003)(426003)(82310400003)(4326008)(54906003)(44832011)(5660300002)(82740400003)(36900700001)(2101003);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB7013
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	61eb3d68-ee7b-466d-e165-08d9295e18bf
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/FJDH1/9p6/JnoTTJhW+EXULbyxxetbeiWvM/1IJwa6+axI+3xYdrgffIqkLmrJ8AUbaYpallReYoFQeId3xYHHtztoNPQXdUyjGL69LhvAIHuj8tXewmbsG4Djlpm9Vfz2YLO+DvNPQE+fhO0e+QWTUxVgjt33B4RcivYsX0PxqXq04X7Ew9rly/YPdoITJNeie8ADop+zLYI6miv2JXP7KGJrdpismee1lg0ZJ+F7Uyq626QLgI5qbSHKSa0pbyCZ9zeL9MU/2VnI7UaUBqGpifRkFwQhVPRzLzsO2Il+ucgErDIg5gYB1u2Iiep+W47oPglFBPt6oFL0IyHDohz/NK498x6VhqWA8H19SLdAYYNgo4EfS+uJtrqytM9SagP/Fgh31vMQBO0yCxJIxaV/ulG2MHRDaIXjl0sFOIZcV/94T8+tayXW3HzZRAdzsftZqXAfpjhXdyJ8kqknyTpSm0mr8llT8jI8XKodNYmU28ihIZXtzUtWpzcrDV6iRiDNQjC8JhwNNLRPvs65mMtMuyHhammYTx/RmLOH9rbsDEJsMRo8YOaNKP+6/N5EKC1HMsW5M2z9Y7h9yqq0gjW24KC2XRNGBmkvgmiJ359d0LtQWPk8zvsGPmSdBTTe4lmgcfF20tpKSFr3GAIy67E0wfldjKOkfCF1s6tx2EeBHFzPcRnkT6aFgjJKOmzuU
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(396003)(39860400002)(376002)(36840700001)(46966006)(7696005)(44832011)(8676002)(82310400003)(8936002)(47076005)(478600001)(82740400003)(2906002)(26005)(81166007)(186003)(4326008)(110136005)(1076003)(6666004)(316002)(83380400001)(70586007)(36860700001)(2616005)(426003)(54906003)(36756003)(70206006)(5660300002)(336012)(86362001)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:44:04.6510
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cbaf51ef-202e-4085-a234-08d9295e1d6e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1719

alloc_staticmem_pages aims to allocate nr_mfns contiguous pages of
static memory, and it is the equivalent of alloc_heap_pages for static
memory. For now, it only covers allocating at a specified starting address.

For each page, it shall check whether the page is reserved (PGC_reserved)
and free. It shall also perform the necessary initialization, which is
mostly the same as in alloc_heap_pages, e.g. following the same
cache-coherency policy and changing the page state to PGC_state_inuse.

alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
static memory: it allocates nr_mfns pages of static memory and assigns
them to one specific domain.

It uses alloc_staticmem_pages to get nr_mfns pages of static memory,
then, on success, uses assign_pages_nr to assign those pages to the
specified domain.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- use mfn_valid() to do validation
- change pfn-named to mfn-named
- guard with CONFIG_STATIC_ALLOCATION to remove dead code
- correct off-by-one indentation
- remove meaningless MEMF_no_owner case
- leave zone concept out of DMA limitation check
---
 xen/common/page_alloc.c | 129 ++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |   2 +
 2 files changed, 131 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index e244d2e52e..a0eea5f1a4 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
     return pg;
 }
 
+#ifdef CONFIG_STATIC_ALLOCATION
+/*
+ * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
+                                               mfn_t smfn,
+                                               unsigned int memflags)
+{
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+    unsigned long i;
+    struct page_info *pg;
+
+    /* For now, it only supports allocating at specified address. */
+    if ( !mfn_valid(smfn) || !nr_mfns )
+    {
+        printk(XENLOG_ERR
+               "Invalid %lu static memory starting at %"PRI_mfn"\n",
+               nr_mfns, mfn_x(smfn));
+        return NULL;
+    }
+    pg = mfn_to_page(smfn);
+
+    for ( i = 0; i < nr_mfns; i++ )
+    {
+        /*
+         * Reference count must continuously be zero for free pages
+         * of static memory (PGC_reserved).
+         */
+        ASSERT(pg[i].count_info & PGC_reserved);
+        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+        {
+            printk(XENLOG_ERR
+                   "Reference count must continuously be zero for free pages, "
+                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+                   i, mfn_x(page_to_mfn(pg + i)),
+                   pg[i].count_info, pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        if ( !(memflags & MEMF_no_tlbflush) )
+            accumulate_tlbflush(&need_tlbflush, &pg[i],
+                                &tlbflush_timestamp);
+
+        /*
+         * Preserve flag PGC_reserved and change page state
+         * to PGC_state_inuse.
+         */
+        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+        /* Initialise fields which have other uses for free pages. */
+        pg[i].u.inuse.type_info = 0;
+        page_set_owner(&pg[i], NULL);
+
+        /*
+         * Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+                            !(memflags & MEMF_no_icache_flush));
+    }
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    return pg;
+}
+#endif
+
 /* Remove any offlined page in the buddy pointed to by head. */
 static int reserve_offlined_page(struct page_info *head)
 {
@@ -2326,7 +2395,11 @@ int assign_pages_nr(
 
         for ( i = 0; i < nr_pfns; i++ )
         {
+#ifdef CONFIG_STATIC_ALLOCATION
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
+#else
             ASSERT(!(pg[i].count_info & ~PGC_extra));
+#endif
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2365,7 +2438,12 @@ int assign_pages_nr(
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
+#ifdef CONFIG_STATIC_ALLOCATION
+            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
+#else
             (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
+#endif
+
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
@@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
     return pg;
 }
 
+#ifdef CONFIG_STATIC_ALLOCATION
+/*
+ * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ * It is the equivalent of alloc_domheap_pages for static memory.
+ */
+struct page_info *alloc_domstatic_pages(
+        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
+        unsigned int memflags)
+{
+    struct page_info *pg = NULL;
+    unsigned long dma_size;
+
+    ASSERT(!in_irq());
+
+    if ( !dma_bitsize )
+        memflags &= ~MEMF_no_dma;
+    else
+    {
+        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
+        {
+            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
+            /* Starting address shall meet the DMA limitation. */
+            if ( mfn_x(smfn) < dma_size )
+                return NULL;
+        }
+    }
+
+    pg = alloc_staticmem_pages(nr_mfns, smfn, memflags);
+    if ( !pg )
+        return NULL;
+
+    /* Right now, MEMF_no_owner case is meaningless here. */
+    ASSERT(d);
+    if ( memflags & MEMF_no_refcount )
+    {
+        unsigned long i;
+
+        for ( i = 0; i < nr_mfns; i++ )
+            pg[i].count_info |= PGC_extra;
+    }
+    if ( assign_pages_nr(d, pg, nr_mfns, memflags) )
+    {
+        free_staticmem_pages(pg, nr_mfns, memflags & MEMF_no_scrub);
+        return NULL;
+    }
+
+    return pg;
+}
+#endif
+
 void free_domheap_pages(struct page_info *pg, unsigned int order)
 {
     struct domain *d = page_get_owner(pg);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 25d970e857..a07bd02923 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -89,6 +89,8 @@ bool scrub_free_pages(void);
 /* Static Allocation */
 void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
+struct page_info *alloc_domstatic_pages(struct domain *d,
+        unsigned long nr_mfns, mfn_t smfn, unsigned int memflags);
 #endif
 
 /* Map machine page range in Xen virtual address space. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137547.254923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FT-0005v3-DF; Mon, 07 Jun 2021 02:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137547.254923; Mon, 07 Jun 2021 02:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FT-0005uk-9U; Mon, 07 Jun 2021 02:44:19 +0000
Received: by outflank-mailman (input) for mailman id 137547;
 Mon, 07 Jun 2021 02:44:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FR-0003W9-Jz
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:17 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.40]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0516d7ff-8a69-439c-a0a7-07175f7fb513;
 Mon, 07 Jun 2021 02:44:15 +0000 (UTC)
Received: from AM7PR03CA0020.eurprd03.prod.outlook.com (2603:10a6:20b:130::30)
 by HE1PR0802MB2585.eurprd08.prod.outlook.com (2603:10a6:3:d4::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.27; Mon, 7 Jun
 2021 02:44:13 +0000
Received: from AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::fa) by AM7PR03CA0020.outlook.office365.com
 (2603:10a6:20b:130::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT035.mail.protection.outlook.com (10.152.16.119) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:12 +0000
Received: ("Tessian outbound 2977cc564e34:v93");
 Mon, 07 Jun 2021 02:44:11 +0000
Received: from c39aee2c5cfc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1A51EFF2-D5D9-4C77-8BFE-6E63319D2DFE.1; 
 Mon, 07 Jun 2021 02:44:06 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c39aee2c5cfc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:44:06 +0000
Received: from DB8PR03CA0022.eurprd03.prod.outlook.com (2603:10a6:10:be::35)
 by VI1PR0802MB2381.eurprd08.prod.outlook.com (2603:10a6:800:9b::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22; Mon, 7 Jun
 2021 02:44:04 +0000
Received: from DB5EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::ae) by DB8PR03CA0022.outlook.office365.com
 (2603:10a6:10:be::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:04 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT045.mail.protection.outlook.com (10.152.21.164) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:04 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:44:03 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Mon, 7
 Jun 2021 02:44:02 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0516d7ff-8a69-439c-a0a7-07175f7fb513
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hjHc/ZQydMUgz2CkElY2HaTa10pnGTpDgjZ8IA6R1sw=;
 b=lTG2md8QPpBmoccvIkX+y3xtOV7631r0My4UhrKOWX47j5MEPSmQSsAB+SadkAWqUae4ln1DGZ1hSwGAKHieEQ1G0X/ZGtLfRx2xZU002ArF6EG0vx2aQf4LU2wDn39vSRx4dPNF59BamCGUEqXsw4S7MerIOkyA6EQT19a2iiY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c833dfd982f25670
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kMty4Emx9M8u3E/deZ3l3zvUPz5NloJQs1I9f2SDVP6gMk7rwcI20jf1ccWAEkQYH7g3V/YrgMIh1yBCRaCrraTN3p3U5FiGUf3q0UymwYBIDrqxHwqe0wsKz78U0s/GRShkQbHB2OaYWeVAIwUFIeJdfTxH7Oh63pCkVZ/3l28PnywxIYfCHjP4cGHWMxKLL56K80LKpGNFiytzAWcTdM7eE8ovvZSsa2Xb1S/MdW0k4Ur8htj76MEbkAmyatVbV5Be7H22jYLLy/WYBb8CrlpwJUumRTo+2zK+wr6r8ZBZStgDTtgQR2nCVO+X8E//wRLno9ZVxc5bpA2XUSwfWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hjHc/ZQydMUgz2CkElY2HaTa10pnGTpDgjZ8IA6R1sw=;
 b=gDIi7T8V+34CPcOCZaK8US1Fze45LAIlFWmc5LLjEot3ZbLpP7BmxoOL23Jj0lyfFCVTNIN03MWWkej458O3pqNvdU00F+ZPs/VyK7pwJhKCg0UiTN8D9wn6CQmxGdKw+xBV3ToZUSOl+D4DdeWOfwwjrsBfrOqDkhLsYtdvW1BT2S8PhUQFTba7UIPDods6RZi+Ew3aKGql98vI3cwwvcsFmP1vLbkKSPffbNkmUcYcwZbp9TtaYuqhNq3abjWdrCtSH5crQmQq5g3VpWsiiO2l0mCXMJmdW5yvBKPJB43gcl3lkFHMFg5ZvyYrgd3SKgcnndkoeOxK27vAfRu3mA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hjHc/ZQydMUgz2CkElY2HaTa10pnGTpDgjZ8IA6R1sw=;
 b=lTG2md8QPpBmoccvIkX+y3xtOV7631r0My4UhrKOWX47j5MEPSmQSsAB+SadkAWqUae4ln1DGZ1hSwGAKHieEQ1G0X/ZGtLfRx2xZU002ArF6EG0vx2aQf4LU2wDn39vSRx4dPNF59BamCGUEqXsw4S7MerIOkyA6EQT19a2iiY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 8/9] xen/arm: check `xen,static-mem` property during domain construction
Date: Mon, 7 Jun 2021 02:43:17 +0000
Message-ID: <20210607024318.3988467-9-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f9e3e0da-1eba-4377-8046-08d9295e221a
X-MS-TrafficTypeDiagnostic: VI1PR0802MB2381:|HE1PR0802MB2585:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB258557BD9DC8A64D17347DD9F7389@HE1PR0802MB2585.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:747;OLM:747;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yKdNQzBmplE3N31U2qwrNZhMWtrFzULMSRIsiq6o259K83lqdL9DCq9XixEekAs3VDgxUnNeF/pATNH8jLzx5Jr8+5q+72aclbtOqs3JwzvWEUWgkDhko9rPa3rmwElTQnTJ2h4OX5FfuGFKR+kMrUeZIfNPAIz9bhzY3tNAs/gswjRz3p0Gb1ZnLgFgL1BMk7ybsnlM5Tv+WRczwykOPkUwCfln2CWnjiQEflgh2mwEox0covFNF+iTHgfWF0ld1lXKy2vi5ud8WDmCzyOCACAP9INxRz07gy0M1HXAqUyKpjbdNXYAIJs/opdy/C781X6aUf3XMXCY1ltf5iF719faB71JHNuXHs1ywtjewLJo1jWatwEUR4NXh5/QA8LAWWZWSQA4VGszS2gfljewOQAYX4R+ZWhbjAYF2UC14sBle47HItqyC+NX2aMgkErSl+hbSqVSThybpfUWf4ucFh0t6AM1qMFa+PowdexLz+KQjUvuYJRcpQjjmR/wevthMFVP96kfZSaU2h9O2XAXM+jyspYbMqYgx2N89c45zNxDOJLo+v345eP7CBqvbKTuRTZO4MNjuTmE+yFH660TX+qMWaXhywW9rN2XHTugCRz+VdGi8It78kZSSdLIJenlCzge0RZE0h5WMf4qBVBkJ90ocekpPmITvIq+XD6/CecroZ3Vn2i0br/JPwAWsBcbLguy1JgzwO34OSEEhPdsI0Lhv1NcMMwkrBtOr0y9LQ4=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(39860400002)(136003)(376002)(396003)(36840700001)(46966006)(6666004)(4326008)(478600001)(7696005)(186003)(1076003)(86362001)(81166007)(110136005)(356005)(8676002)(8936002)(336012)(70586007)(26005)(70206006)(5660300002)(426003)(316002)(2616005)(2906002)(44832011)(47076005)(82740400003)(82310400003)(54906003)(36860700001)(36756003)(83380400001)(81973001)(2101003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2381
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	832cdbb1-6b57-4076-c7aa-08d9295e1d2a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3jZBqvtTY6EGuiE98vEGTV/ZOArJka7PoepI/VbdtmU8/vL9GTsDar5xSsp9Fao1iOPUOahxAX11vHITWm5KSOCpaQc2X9HFp/uanA5F05meqcFyNoznim8TIHSxYRMz89fMIFOpd/K/9ocJzeAN4sh07oTVm0JylN/koi1ds9489xd2weFdL/6Kjy1+mPWj2d5wHCsloq1s0K4hV2xG4UwFiG58SWcS/YTPcGtP45C61RUOIi2ps/SDuoRSniuDN5RrRrBxUPTBT5szlJPEuvJrFmgYW6G9HhLqp5Yfvpj93jVn0v0Q5GM2U7lIBEqHExXkHN8nwY5QUuYwtarKIMGkak2LprU1F1omrmMLHXkECx/GTvQAkQeZDPb+sR0uabBctqdWDXO3+tC74flmKco+unXWKanMsFfriYuwao8QfrZy7VAZUQ19wU/QVbZzIMzlQZidm4p5FIwqjUDiLcQT0t5vL46EP2+JTwzRjwlbpCdYzXYToR7+4nWfX4cwOTlG43D/ZPmQ3cjZFsNkv5zPDx89ITD+M5JaswOE+rgec2FNcmNajPinVqGgBZDCd+vSyExtzkMUoSadZZTjVJdZPY0vJiLG/12rD0uiBQuRrqrKcKTswm0Tpu+GJCAdICuBYsm3gG2uRn2mMQHEr/1fUqwt468Id/z0qOt86dgPDrt0xLV8B665siLxoI9TFAgmu9X5LjubbfXHUzbXRQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39860400002)(136003)(346002)(396003)(36840700001)(46966006)(2906002)(70206006)(7696005)(5660300002)(426003)(70586007)(336012)(110136005)(54906003)(36860700001)(1076003)(6666004)(2616005)(44832011)(47076005)(26005)(83380400001)(82740400003)(81166007)(82310400003)(86362001)(8676002)(4326008)(186003)(8936002)(36756003)(478600001)(316002)(81973001)(2101003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:44:12.4672
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f9e3e0da-1eba-4377-8046-08d9295e221a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2585

This commit checks the `xen,static-mem` device tree property in the
/domUx node, to determine whether the domain uses Static Allocation,
when constructing the domain during boot-up.

Right now, the implementation of allocate_static_memory is missing and
will be introduced later; it just BUG()s out at the moment.

If the `memory` property and `xen,static-mem` are both set, it shall be
verified that the memory sizes defined by both are consistent.
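
As an illustration, a hypothetical /domUx node combining both properties
might look as follows. The exact cell layout of `xen,static-mem` (and the
`#xen,static-mem-*-cells` names) follows the binding introduced earlier in
this series; the addresses and sizes here are made-up example values:

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        cpus = <1>;
        /* "memory" is expressed in KB: 0x80000 KB = 512MB. */
        memory = <0x0 0x80000>;
        #xen,static-mem-address-cells = <0x1>;
        #xen,static-mem-size-cells = <0x1>;
        /* 512MB of static memory starting at 0x30000000: sizes match. */
        xen,static-mem = <0x30000000 0x20000000>;
        vpl011;
    };
};
```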

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- remove parsing procedure here
- check the consistency when `xen,static-mem` and `memory` are both defined
---
 xen/arch/arm/domain_build.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 282416e74d..4166d7993c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2424,23 +2424,47 @@ static int __init construct_domU(struct domain *d,
 {
     struct kernel_info kinfo = {};
     int rc;
-    u64 mem;
+    u64 mem, static_mem_size = 0;
+    const struct dt_property *prop;
+    bool static_mem = false;
+
+    d->max_pages = ~0U;
+    /*
+     * Guest RAM could be of static memory from static allocation,
+     * which will be specified through "xen,static-mem" phandle.
+     */
+    prop = dt_find_property(node, "xen,static-mem", NULL);
+    if ( prop )
+    {
+        static_mem = true;
+        /* static_mem_size = allocate_static_memory(...); */
+        BUG();
+    }
 
     rc = dt_property_read_u64(node, "memory", &mem);
-    if ( !rc )
+    if ( !static_mem && !rc )
     {
         printk("Error building DomU: cannot read \"memory\" property\n");
         return -EINVAL;
+    } else if ( rc && static_mem )
+    {
+        if ( static_mem_size != mem * SZ_1K )
+        {
+            printk("Memory size in \"memory\" property isn't consistent with "
+                   "the ones defined in \"xen,static-mem\".\n");
+            return -EINVAL;
+        }
     }
-    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
+    kinfo.unassigned_mem = static_mem ? 0 : (paddr_t)mem * SZ_1K;
 
-    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
+    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
+            d->max_vcpus,
+            static_mem ? static_mem_size >> 10 : kinfo.unassigned_mem >> 10);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
     if ( vcpu_create(d, 0) == NULL )
         return -ENOMEM;
-    d->max_pages = ~0U;
 
     kinfo.d = d;
 
@@ -2452,7 +2476,8 @@ static int __init construct_domU(struct domain *d,
     /* type must be set before allocate memory */
     d->arch.type = kinfo.type;
 #endif
-    allocate_memory(d, &kinfo);
+    if ( !static_mem )
+        allocate_memory(d, &kinfo);
 
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137551.254934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FY-0006h8-1k; Mon, 07 Jun 2021 02:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137551.254934; Mon, 07 Jun 2021 02:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5FX-0006gr-TG; Mon, 07 Jun 2021 02:44:23 +0000
Received: by outflank-mailman (input) for mailman id 137551;
 Mon, 07 Jun 2021 02:44:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5FW-0003W9-KG
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.44]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2355fa02-f776-46e9-8159-3fc5e00c58ce;
 Mon, 07 Jun 2021 02:44:16 +0000 (UTC)
Received: from AS8PR04CA0124.eurprd04.prod.outlook.com (2603:10a6:20b:127::9)
 by DBBPR08MB5882.eurprd08.prod.outlook.com (2603:10a6:10:200::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Mon, 7 Jun
 2021 02:44:10 +0000
Received: from VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:127:cafe::7a) by AS8PR04CA0124.outlook.office365.com
 (2603:10a6:20b:127::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT042.mail.protection.outlook.com (10.152.19.62) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:09 +0000
Received: ("Tessian outbound bf434e582664:v93");
 Mon, 07 Jun 2021 02:44:09 +0000
Received: from 42d751cfa30d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 27168AAA-11A1-4D08-8710-E696E1208AE5.1; 
 Mon, 07 Jun 2021 02:44:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 42d751cfa30d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:44:03 +0000
Received: from DB6PR0601CA0013.eurprd06.prod.outlook.com (2603:10a6:4:7b::23)
 by HE1PR0802MB2521.eurprd08.prod.outlook.com (2603:10a6:3:df::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 02:44:00 +0000
Received: from DB5EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::40) by DB6PR0601CA0013.outlook.office365.com
 (2603:10a6:4:7b::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:00 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT005.mail.protection.outlook.com (10.152.20.122) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:00 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:43:58 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:43:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2355fa02-f776-46e9-8159-3fc5e00c58ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/JHF5WaQ0fbVKChrSZ5i9VP+rOLKRk36xXg7oyU+VXY=;
 b=HWX+z9JGnwqGi3cpZzIMNBN/ppMighqzDDhgZronSd5B5eb+b1RrJz46/9NOBOe23GnF+EXnI5cQnGtnb6a1vw6SAbfwCH1ma7PWNZmvnQL7loeoBVGRk8m4N0Q8U6ZgJzykFrky96RyuOnQ5BdM8TRSBRW2ZCRWK0/K0y4Kvpo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 788d0609dd54f541
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T/D7veZtl/BSsuiNy6+mof86zzVKMNR35v03tl9zbDZmUwSDkRRIVLroKYVqMGBlwN/UpfxUvcGmo4PENcOZhpUxjamaR0mGpQX9n0S0o/K6PnKMbA0VJjw8FZrcPeq7GPAC7BgR2k5tPZN4dO604kWCIvvkUWFIgcv6rMq9HOYqtV/bb9WxgLJIt4Z7lhuYItKsvaLF7zIbI+1NFeKo7j3mfrd9ipg1vYkSFsxy9g+sL58gTaD0FkmLVrxTXmUGl+0lWXDEemJXcuH26TbVSRYohYSSNpILb3RQpTwnReisT8kpB/GI8zVPYJIM64zLJCxEAr0KJI7RktDkxs5HGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/JHF5WaQ0fbVKChrSZ5i9VP+rOLKRk36xXg7oyU+VXY=;
 b=ZXMvDk8VmcyqAKBFBaQJv46ex32NcwSViXlCMPjde3qlQrtUcAEvGSdpIyf+sFx15J45fZIVxL7zwriEjanDyxuh3OlZP61T+5AQ3DT5PKd4k629pqMeuR7DqIcfDjI/QQZ6e3y0ztHGF2KGZ1jv8FhJ0BWCLgoDWyIdweCtOehwTbDRpumusyPCd2EoewQlTCuBA9qD0QIyTgSyX77uCux8l3v/W3Vb0csY25Fg2fSWHc3oAx/x8BbPgzoKwUgi1VZAnLZhZ0rmXbXC58cBLtOLButEvUNcG6sT+WfD13NnmuLQX+SLu/FRo0RsrSFWFvAYGfCmPr52A/VH6uwcHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/JHF5WaQ0fbVKChrSZ5i9VP+rOLKRk36xXg7oyU+VXY=;
 b=HWX+z9JGnwqGi3cpZzIMNBN/ppMighqzDDhgZronSd5B5eb+b1RrJz46/9NOBOe23GnF+EXnI5cQnGtnb6a1vw6SAbfwCH1ma7PWNZmvnQL7loeoBVGRk8m4N0Q8U6ZgJzykFrky96RyuOnQ5BdM8TRSBRW2ZCRWK0/K0y4Kvpo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 7/9] xen/arm: take care of concurrency on static memory allocation
Date: Mon, 7 Jun 2021 02:43:16 +0000
Message-ID: <20210607024318.3988467-8-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d6b9e365-948f-4a72-7f9b-08d9295e209f
X-MS-TrafficTypeDiagnostic: HE1PR0802MB2521:|DBBPR08MB5882:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB588204EBEB30A8E2065B6B23F7389@DBBPR08MB5882.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:590;OLM:590;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2521
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	53a0d1e3-767e-4d74-b760-08d9295e1aff
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:44:09.8884
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d6b9e365-948f-4a72-7f9b-08d9295e209f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5882

In the future, a user may want to allocate static memory at runtime,
so it is important to protect that code from concurrent access.

Reuse the existing heap_lock to serialize concurrent accesses in
alloc_staticmem_pages.
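The locking pattern is simple enough to sketch outside of Xen. The toy code
below is illustrative only (all names and the page-bitmap state are invented,
and a pthread mutex stands in for Xen's heap_lock spinlock): claim a
contiguous run of page slots under one global lock, and keep any slow
follow-up work (in Xen, the TLB flush) outside the critical section.

```c
/*
 * Illustrative sketch only -- not Xen code. Mimics the locking pattern
 * the patch applies to alloc_staticmem_pages(): all state is claimed
 * atomically under one global lock.
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_STATIC_PAGES 64

static bool page_inuse[NR_STATIC_PAGES];          /* shared state */
static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Claim pages [first, first + nr); fail without side effects on overlap. */
static bool alloc_staticmem_pages(size_t first, size_t nr)
{
    bool ok = true;
    size_t i;

    if ( first + nr > NR_STATIC_PAGES )
        return false;

    pthread_mutex_lock(&heap_lock);

    /* Reject the whole request if any page in the run is already taken. */
    for ( i = 0; i < nr; i++ )
        if ( page_inuse[first + i] )
            ok = false;

    if ( ok )
        for ( i = 0; i < nr; i++ )
            page_inuse[first + i] = true;

    pthread_mutex_unlock(&heap_lock);

    /* Slow follow-up work (in Xen: TLB/cache flush) runs outside the lock. */
    return ok;
}
```

With the lock held across both the check and the marking, two racing callers
asking for overlapping ranges can never both succeed.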

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- new commit
---
 xen/common/page_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index a0eea5f1a4..c6ccfc3216 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1087,6 +1087,9 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
                nr_mfns, mfn_x(smfn));
         return NULL;
     }
+
+    spin_lock(&heap_lock);
+
     pg = mfn_to_page(smfn);
 
     for ( i = 0; i < nr_mfns; i++ )
@@ -1127,6 +1130,8 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
                             !(memflags & MEMF_no_icache_flush));
     }
 
+    spin_unlock(&heap_lock);
+
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 02:44:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 02:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137560.254945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5Fn-0007pM-Cq; Mon, 07 Jun 2021 02:44:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137560.254945; Mon, 07 Jun 2021 02:44:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5Fn-0007pB-98; Mon, 07 Jun 2021 02:44:39 +0000
Received: by outflank-mailman (input) for mailman id 137560;
 Mon, 07 Jun 2021 02:44:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAL7=LB=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lq5Fl-0003W9-Kw
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 02:44:37 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::61b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c8abd4d-335f-4e8a-919d-b0ed7e002ae2;
 Mon, 07 Jun 2021 02:44:22 +0000 (UTC)
Received: from AM6P194CA0004.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:90::17)
 by AM0PR08MB4100.eurprd08.prod.outlook.com (2603:10a6:208:130::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Mon, 7 Jun
 2021 02:44:15 +0000
Received: from VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:90:cafe::7a) by AM6P194CA0004.outlook.office365.com
 (2603:10a6:209:90::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:15 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT010.mail.protection.outlook.com (10.152.18.113) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:15 +0000
Received: ("Tessian outbound 836922dda4f1:v93");
 Mon, 07 Jun 2021 02:44:14 +0000
Received: from d51c310d07aa.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1679F000-86AC-4362-BAD0-20E79C3A7969.1; 
 Mon, 07 Jun 2021 02:44:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d51c310d07aa.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 02:44:09 +0000
Received: from DB6PR1001CA0019.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:b7::29)
 by DB8PR08MB5035.eurprd08.prod.outlook.com (2603:10a6:10:eb::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 02:44:08 +0000
Received: from DB5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b7:cafe::4) by DB6PR1001CA0019.outlook.office365.com
 (2603:10a6:4:b7::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:08 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT012.mail.protection.outlook.com (10.152.20.161) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 02:44:07 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Mon, 7 Jun
 2021 02:44:06 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Mon, 7 Jun 2021 02:44:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c8abd4d-335f-4e8a-919d-b0ed7e002ae2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kHSMQ1sGOPQDhU5EE4xS8cKC3DuRiiQSJWMUP5Duxbg=;
 b=CKyIXG3MK+C6ysHsALDE0Fb1C8sjrnLWyXRMlrlmGNahxqUy6My85Ms0I0PlYpTCnnbHhYjJA66eC/lCeZxh0OPjk5Q3rGlkAs/tNrV6Q15y7iyG6owQ8rw7B4FYeQIkoIA7HqU4gPhHITJG1JhXgao9tBLb/Tqc0FBdJKaaTOU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5f5a7362ab08dbbd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QZrRCX1v/4OBiq7aUEU4MxpdBKUEZP9YE16raGUxd9lqRF6ZnBk+ifaUz0VpHz0wQxp9jLykl/CojtZOx17lX7rJeIKrBnRZ7OTysvaJg+EboTbU1YzJMwVB2pluhJPNrmBNCkEnifzzWqhWcf/MUkQNdMjY0eq//3sLLlN54Ub0JpRzmPogGmY2QDdgHDVHTUcoB2jzkkPaEJzCVrWiT7koC1mJbuJBRGjEz2Cq06TCfEzuM1UtgfJXvNGr6jN13ypHuysiuXqyF75SQuBKJR+YAljm+nUKpM1H3RpDFufdO6i3E2XsZWNX74xQiKE4RhXnCnjM2otZiSXac9ppgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kHSMQ1sGOPQDhU5EE4xS8cKC3DuRiiQSJWMUP5Duxbg=;
 b=JdraQ7X7dlfVkARB04jF9OnNmEp2QG51BOo2BUv1dDxeSkpyvcT9ewIB+OwUgy1DYUW9BVohw0XvN2Lyuu39UbTSf1WQcqUm/Qj3PP4IYULRMDqcQT1dd8rPLmT4OP21DCkzxaJLUvC7n9HzmUDBObJf8UU4HXanX6uBHDVcv43H/qtlRZCsO7np7SJaZi+3/9A4BKq7/nXQqyyKsKNTqI88UfZcYG+dOUF8RSXITQFe5QUce91CtmeOFBvle+wxDynHXe1QQFEm5wUpe0VwFy84Kfnoa6FCwU9IfgclxKpZJ9Fj5lyWd+HUuIF1vY/PoJJeL/DP4lgeRPvXXOjFxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kHSMQ1sGOPQDhU5EE4xS8cKC3DuRiiQSJWMUP5Duxbg=;
 b=CKyIXG3MK+C6ysHsALDE0Fb1C8sjrnLWyXRMlrlmGNahxqUy6My85Ms0I0PlYpTCnnbHhYjJA66eC/lCeZxh0OPjk5Q3rGlkAs/tNrV6Q15y7iyG6owQ8rw7B4FYeQIkoIA7HqU4gPhHITJG1JhXgao9tBLb/Tqc0FBdJKaaTOU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>, <jbeulich@suse.com>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>
Subject: [PATCH 9/9] xen/arm: introduce allocate_static_memory
Date: Mon, 7 Jun 2021 02:43:18 +0000
Message-ID: <20210607024318.3988467-10-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210607024318.3988467-1-penny.zheng@arm.com>
References: <20210607024318.3988467-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fe858219-dd68-4cc9-0d49-08d9295e23d7
X-MS-TrafficTypeDiagnostic: DB8PR08MB5035:|AM0PR08MB4100:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB4100440690B2076943CE912BF7389@AM0PR08MB4100.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5035
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8065edcc-42e8-494d-cbee-08d9295e1f4d
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 02:44:15.3217
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fe858219-dd68-4cc9-0d49-08d9295e23d7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4100

This commit introduces allocate_static_memory to allocate static memory as
guest RAM for a domain on Static Allocation.

It uses alloc_domstatic_pages to allocate the pre-configured static memory
banks for the domain, and guest_physmap_add_page to set up the P2M table.
The pre-defined static memory ranges are first mapped at the fixed guest
RAM address `GUEST_RAM0_BASE`; once `GUEST_RAM0_SIZE` is exhausted, mapping
continues at `GUEST_RAM1_BASE`.
`GUEST_RAM0` may therefore cover several pre-defined physical RAM regions.
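The bank-filling rule above can be modelled independently of Xen. The sketch
below is not the patch's code: the function name, the flat size arrays, and
the concrete capacities in the test are all made up for illustration (real
values come from GUEST_RAM_BANK_BASES/GUEST_RAM_BANK_SIZES, and the real code
also installs P2M entries as it goes). It only shows the arithmetic: pour
physical banks into guest banks in order, spilling into the next guest bank
when the current one is full, and failing when even the last one overflows.

```c
/* Toy model of the guest-bank filling described above -- not Xen code. */
#include <stdint.h>

#define GUEST_RAM_BANKS 2

/*
 * out_used[i] receives how much of guest bank i ends up populated.
 * Returns the number of guest banks used, or -1 if the static memory
 * exceeds the total guest RAM capacity.
 */
static int fill_guest_banks(const uint64_t *phys_sizes, int nr_phys,
                            const uint64_t *guest_caps, uint64_t *out_used)
{
    int gbank = 0, b, i;
    uint64_t room = guest_caps[0];

    for ( i = 0; i < GUEST_RAM_BANKS; i++ )
        out_used[i] = 0;

    for ( b = 0; b < nr_phys; b++ )
    {
        uint64_t left = phys_sizes[b];

        while ( left > 0 )
        {
            uint64_t chunk;

            if ( room == 0 )
            {
                /* Current guest bank exhausted: move to the next, if any. */
                if ( ++gbank >= GUEST_RAM_BANKS )
                    return -1;
                room = guest_caps[gbank];
            }

            chunk = left < room ? left : room;
            out_used[gbank] += chunk;
            room -= chunk;
            left -= chunk;
        }
    }
    return gbank + 1;
}
```

A physical bank that straddles the boundary is split: its head fills the
remainder of GUEST_RAM0 and its tail starts GUEST_RAM1, matching the inner
while loop of allocate_static_memory below.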

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
changes v2:
- rename the values, like prefix it g/p
- fix the scalability issue
- allocate when parse
---
 xen/arch/arm/domain_build.c | 155 +++++++++++++++++++++++++++++++++++-
 1 file changed, 153 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4166d7993c..63b6a97b2c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -437,6 +437,48 @@ static bool __init allocate_bank_memory(struct domain *d,
     return true;
 }
 
+/*
+ * The static memory bank of size #gsize starting at #smfn is mapped to the
+ * guest address #sgfn. On return, #sgfn holds the next guest address to map.
+ */
+static bool __init allocate_static_bank_memory(struct domain *d,
+                                               struct kernel_info *kinfo,
+                                               int gbank,
+                                               gfn_t* sgfn,
+                                               mfn_t smfn,
+                                               paddr_t gsize)
+{
+    int res;
+    paddr_t tot_size = gsize;
+    const uint64_t rambase[] = GUEST_RAM_BANK_BASES;
+
+    while ( tot_size > 0 )
+    {
+        unsigned int order = get_allocation_size(tot_size);
+
+        res = guest_physmap_add_page(d, *sgfn, smfn, order);
+        if ( res )
+        {
+            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+            return false;
+        }
+
+        *sgfn = gfn_add(*sgfn, 1UL << order);
+        smfn = mfn_add(smfn, 1UL << order);
+        tot_size -= (1ULL << (PAGE_SHIFT + order));
+    }
+
+    /* The guest RAM bank in kinfo hasn't been initialised yet. */
+    if ( gbank == kinfo->mem.nr_banks )
+    {
+        kinfo->mem.bank[gbank].start = rambase[gbank];
+        kinfo->mem.nr_banks++;
+    }
+    kinfo->mem.bank[gbank].size += gsize;
+
+    return true;
+}
+
 static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -480,6 +522,116 @@ fail:
           (unsigned long)kinfo->unassigned_mem >> 10);
 }
 
+/* Allocate memory from static memory as RAM for one specific domain d. */
+static u64 __init allocate_static_memory(struct domain *d,
+                                          struct kernel_info *kinfo,
+                                          const struct dt_device_node *node)
+{
+    int nr_banks, bank = 0, gbank = 0;
+    const uint64_t rambase[] = GUEST_RAM_BANK_BASES;
+    const uint64_t ramsize[] = GUEST_RAM_BANK_SIZES;
+    const __be32 *cell;
+    const struct dt_property *prop;
+    struct dt_device_node *static_mem_node;
+    const struct dt_device_node *parent = dt_find_node_by_path("/reserved-memory");
+    u32 addr_cells = 2, size_cells = 2, reg_cells;
+    u64 tot_size = 0;
+
+    paddr_t pbase, psize, gsize;
+    gfn_t sgfn;
+    mfn_t smfn;
+
+    kinfo->mem.nr_banks = 0;
+    /* Start with GUEST_RAM0. */
+    gsize = ramsize[gbank];
+    sgfn = gaddr_to_gfn(rambase[gbank]);
+
+    /* Parse phandle in `xen,static-mem`. */
+    static_mem_node = dt_parse_phandle(node, "xen,static-mem", 0);
+    if ( !static_mem_node )
+        goto fail;
+
+    /*
+     * #address-cells and #size-cells must be consistent with the parent node,
+     * "reserved-memory".
+     */
+    dt_property_read_u32(parent, "#address-cells", &addr_cells);
+    dt_property_read_u32(parent, "#size-cells", &size_cells);
+    BUG_ON(size_cells > 2 || addr_cells > 2);
+    reg_cells = addr_cells + size_cells;
+
+    prop = dt_find_property(static_mem_node, "reg", NULL);
+    if ( !prop )
+        goto fail;
+    cell = (const __be32 *)prop->value;
+    nr_banks = (prop->length) / (reg_cells * sizeof (u32));
+    BUG_ON(nr_banks > NR_MEM_BANKS);
+
+    while ( bank < nr_banks )
+    {
+        device_tree_get_reg(&cell, addr_cells, size_cells, &pbase, &psize);
+        tot_size += (u64)psize;
+        smfn = maddr_to_mfn(pbase);
+
+        if ( !alloc_domstatic_pages(d, psize >> PAGE_SHIFT, smfn, 0) )
+        {
+            printk(XENLOG_ERR
+                   "%pd: cannot allocate static memory "
+                   "(0x%"PRIpaddr" - 0x%"PRIpaddr")\n",
+                   d, pbase, pbase + psize);
+            goto fail;
+        }
+
+        printk(XENLOG_INFO "%pd STATIC BANK[%d] %#"PRIpaddr"-%#"PRIpaddr"\n",
+               d, bank, pbase, pbase + psize);
+
+        /*
+         * The bank is mapped at the fixed guest RAM address rambase[i].
+         * Once ramsize[i] is exhausted, mapping continues at the next
+         * base, rambase[i+1].
+         */
+        while ( 1 )
+        {
+            if ( gsize >= psize )
+            {
+                if ( !allocate_static_bank_memory(d, kinfo, gbank,
+                                                  &sgfn, smfn, psize) )
+                    goto fail;
+
+                gsize = gsize - psize;
+                bank++;
+                break;
+            }
+            else
+            {
+                if ( !allocate_static_bank_memory(d, kinfo, gbank,
+                                                  &sgfn, smfn, gsize) )
+                    goto fail;
+
+                /*
+                 * The physical bank hasn't been fully mapped yet;
+                 * move on to the next guest RAM bank, if one exists.
+                 */
+                if ( ++gbank < GUEST_RAM_BANKS )
+                {
+                    psize = psize - gsize;
+                    smfn = mfn_add(smfn, gsize >> PAGE_SHIFT);
+                    gsize = ramsize[gbank];
+                    sgfn = gaddr_to_gfn(rambase[gbank]);
+                }
+                else
+                    goto fail;
+            }
+        }
+    }
+    return tot_size;
+
+fail:
+    panic("Failed to allocate requested static memory for %pd. "
+          "Fix the VM's configuration.\n",
+          d);
+}
+
 static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                                    const struct dt_device_node *node)
 {
@@ -2437,8 +2589,7 @@ static int __init construct_domU(struct domain *d,
     if ( prop )
     {
         static_mem = true;
-        /* static_mem_size = allocate_static_memory(...); */
-        BUG();
+        static_mem_size = allocate_static_memory(d, &kinfo, node);
     }
 
     rc = dt_property_read_u64(node, "memory", &mem);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 03:09:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 03:09:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137619.254955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5dK-0003Cz-Gl; Mon, 07 Jun 2021 03:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137619.254955; Mon, 07 Jun 2021 03:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5dK-0003Cs-Du; Mon, 07 Jun 2021 03:08:58 +0000
Received: by outflank-mailman (input) for mailman id 137619;
 Mon, 07 Jun 2021 03:08:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq5dJ-0003Ci-DY; Mon, 07 Jun 2021 03:08:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq5dJ-0005Wt-4y; Mon, 07 Jun 2021 03:08:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq5dI-0004lo-TZ; Mon, 07 Jun 2021 03:08:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq5dI-0003GA-T5; Mon, 07 Jun 2021 03:08:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c/dZOIsvlscMtzQZioBtQD0Vgqz+4khL8iKhaRxNW9s=; b=c2MNTAVo7ufNpCARgGEjC2Xr20
	BZ9q6vDWURhMJc/bphuz6w4z/dOn+w9B+WqKwA0nAyRaGKY/zhgiTQYQSCcJ5/TRUYACQPe4Gc9RU
	T17Ml/IV3qE5gEl/gDDPtH5FkNHwNuVUXqJjN5gn5aRpsh9lvN1SWTaD+J/0mr2aCIC0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162470-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162470: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 03:08:56 +0000

flight 162470 xen-unstable-smoke real [real]
flight 162478 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162470/
http://logs.test-lab.xenproject.org/osstest/logs/162478/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   16 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    2 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    because since kernel 4.14 only the linear p2m table is kept if Xen
    indicates it supports that. Unfortunately xc_core_arch_map_p2m() still
    supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 03:29:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 03:29:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137630.254973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5x2-0005c0-9j; Mon, 07 Jun 2021 03:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137630.254973; Mon, 07 Jun 2021 03:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq5x2-0005bt-6l; Mon, 07 Jun 2021 03:29:20 +0000
Received: by outflank-mailman (input) for mailman id 137630;
 Mon, 07 Jun 2021 03:29:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HJn5=LB=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lq5x0-0005bn-ST
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 03:29:18 +0000
Received: from mail-il1-x12e.google.com (unknown [2607:f8b0:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d181af2-8957-4de6-a9c3-29148ac01000;
 Mon, 07 Jun 2021 03:29:17 +0000 (UTC)
Received: by mail-il1-x12e.google.com with SMTP id w14so5184962ilv.1
 for <xen-devel@lists.xenproject.org>; Sun, 06 Jun 2021 20:29:17 -0700 (PDT)
Received: from mail-io1-f44.google.com (mail-io1-f44.google.com.
 [209.85.166.44])
 by smtp.gmail.com with ESMTPSA id h26sm5176843ioh.34.2021.06.06.20.29.15
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 06 Jun 2021 20:29:16 -0700 (PDT)
Received: by mail-io1-f44.google.com with SMTP id a6so16912383ioe.0
 for <xen-devel@lists.xenproject.org>; Sun, 06 Jun 2021 20:29:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d181af2-8957-4de6-a9c3-29148ac01000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=zXJrYShxgIyQHo1wYCS3F9rVjXIqXzIl9+OOmY4aJuc=;
        b=lXhSyF7uZrXQiHjN4kEdhtrYYO4eYXTtv+MmIVlkaYfrDj3gFF3i6zdq9V6t5fEQPA
         Q41l6xsOf6kbnxwhiHTDBcVDInxTuXTZ4PeJVzQkTroJzwWh+Feq2JO3lXBrruik9rJE
         v5bJs9UlwmvhYXAJ0FNY04iCN48TAqVGzZPFU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=zXJrYShxgIyQHo1wYCS3F9rVjXIqXzIl9+OOmY4aJuc=;
        b=flyfdm0PooQAC2KfFkErPJOFZJxPy8w2bdCPHNp7VWpX3OnunWpuSXttwZF56gLfbu
         q6wzxw3h+ezL1unzrSnLdtD5gDwndVjzUEBnV6x4an50xTv4ZarkI95LSARC9wldCwpp
         xMtVdTr0nO2rIoAST1EEmeL09N5bexooZs0KjUwPXj3aNc/naqIRsuf1dPTxK7YDP9r3
         ivADEhshsAYtYdAQ3BhVnJYUgSNNlDpMnJKvfzD0DZE5IDCWczqnyWs5va6/DVBcB2Y/
         tgWu2FmMGssS3m5CbCmZmqL2Um6iYuGVMtalNbnDcoXw6Z/tjrhFYbiH24XwxskjpunV
         tyrg==
X-Gm-Message-State: AOAM531EHMbxssDLbMEM/3OH+csdHP5YYacA32mXJv4N9wsaRi3fPdFJ
	H/6uMXvQAZqtB263LHs+yWIazeQuLvBnTw==
X-Google-Smtp-Source: ABdhPJwr0Y7cv12JhU2ElFP8gZIRJJMYEeeeTr7X9ykz2+chzjoRE2bIci0Z1a3COol6mi9uUiBCsg==
X-Received: by 2002:a05:6e02:13d1:: with SMTP id v17mr3975137ilj.214.1623036556995;
        Sun, 06 Jun 2021 20:29:16 -0700 (PDT)
X-Received: by 2002:a05:6e02:1a44:: with SMTP id u4mr1746214ilv.64.1623036543940;
 Sun, 06 Jun 2021 20:29:03 -0700 (PDT)
MIME-Version: 1.0
References: <20210527125845.1852284-1-tientzu@chromium.org> <20210604174818.GC3703@willie-the-truck>
In-Reply-To: <20210604174818.GC3703@willie-the-truck>
From: Claire Chang <tientzu@chromium.org>
Date: Mon, 7 Jun 2021 11:28:53 +0800
X-Gmail-Original-Message-ID: <CALiNf29=z2uBM1ZA_GTu04iFS2dJwH0npdGvid1PL5KQM_HrxA@mail.gmail.com>
Message-ID: <CALiNf29=z2uBM1ZA_GTu04iFS2dJwH0npdGvid1PL5KQM_HrxA@mail.gmail.com>
Subject: Re: [PATCH v8 00/15] Restricted DMA
To: Will Deacon <will@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Sat, Jun 5, 2021 at 1:48 AM Will Deacon <will@kernel.org> wrote:
>
> Hi Claire,
>
> On Thu, May 27, 2021 at 08:58:30PM +0800, Claire Chang wrote:
> > This series implements mitigations for lack of DMA access control on
> > systems without an IOMMU, which could result in the DMA accessing the
> > system memory at unexpected times and/or unexpected addresses, possibly
> > leading to data leakage or corruption.
> >
> > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> > not behind an IOMMU. As PCI-e, by design, gives the device full access to
> > system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> > to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
> > full chain of exploits; [2], [3]).
> >
> > To mitigate the security concerns, we introduce restricted DMA. Restricted
> > DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> > specially allocated region and does memory allocation from the same region.
> > The feature on its own provides a basic level of protection against the DMA
> > overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system needs
> > to provide a way to restrict the DMA to a predefined memory region (this is
> > usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> >
> > [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> > [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> > [2] https://blade.tencent.com/en/advisories/qualpwn/
> > [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> > [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
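> >
> > (For readers following along: the restricted pool described above is
> > wired up in the device tree via a reserved-memory node that the device
> > points at. A minimal sketch, with addresses, sizes, and node names
> > purely illustrative:
> >
> >     reserved-memory {
> >         #address-cells = <2>;
> >         #size-cells = <2>;
> >         ranges;
> >
> >         restricted_dma: restricted-dma@50000000 {
> >             compatible = "restricted-dma-pool";
> >             reg = <0x0 0x50000000 0x0 0x4000000>;
> >         };
> >     };
> >
> >     pcie_wifi: pcie@0 {
> >         memory-region = <&restricted_dma>;
> >     };
> >
> > All streaming DMA for the device is then bounced through, and coherent
> > allocations satisfied from, that region.)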
> >
> > v8:
> > - Fix reserved-memory.txt and add the reg property in example.
> > - Fix sizeof for of_property_count_elems_of_size in
> >   drivers/of/address.c#of_dma_set_restricted_buffer.
> > - Apply Will's suggestion to try the OF node having DMA configuration in
> >   drivers/of/address.c#of_dma_set_restricted_buffer.
> > - Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
> > - Add error message for PageHighMem in
> >   kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
> >   rmem_swiotlb_setup.
> > - Fix the message string in rmem_swiotlb_setup.
>
> Thanks for the v8. It works for me out of the box on arm64 under KVM, so:
>
> Tested-by: Will Deacon <will@kernel.org>
>
> Note that something seems to have gone wrong with the mail threading, so
> the last 5 patches ended up as a separate thread for me. Probably worth
> posting again with all the patches in one place, if you can.

Thanks for testing.

Christoph also added some comments in v7, so I'll prepare v9.

>
> Cheers,
>
> Will


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 04:11:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 04:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137432.254988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq6bu-00027Z-Jr; Mon, 07 Jun 2021 04:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137432.254988; Mon, 07 Jun 2021 04:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq6bu-00027S-Ea; Mon, 07 Jun 2021 04:11:34 +0000
Received: by outflank-mailman (input) for mailman id 137432;
 Sun, 06 Jun 2021 15:59:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r5un=LA=infradead.org=geoff@srs-us1.protection.inumbo.net>)
 id 1lpvB7-0001yb-Ic
 for xen-devel@lists.xenproject.org; Sun, 06 Jun 2021 15:59:10 +0000
Received: from desiato.infradead.org (unknown
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6bf7a5a-00e5-4bc6-9aa9-539c8a46ee35;
 Sun, 06 Jun 2021 15:59:07 +0000 (UTC)
Received: from [2602:306:c5a2:a380:d04c:9a1:1990:7d22]
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1lpvAk-0044Sh-0J; Sun, 06 Jun 2021 15:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6bf7a5a-00e5-4bc6-9aa9-539c8a46ee35
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:Content-Type
	:In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=xdRwc0+fblxjQWSVfAPrq01sy/g65/LWQdwS7diPzUY=; b=RvNcIfOg5he7n2s7JHDwQwMsUb
	CcsUaraj9QikvzUVMg5uaZVxCkfUMNQpaQoSlSwpFt8En2sRCM3NnRZMB+TpNhFzfHmMKG/4QmVWD
	gCRylCpd2RxMohedFycN/JmPwi64URVSbmxExLKam6OQFwHVi2N5sD9jwXDe8EtAAA/oB8PR9GaaH
	81bj3Hi0GApRdsWIMvWsT4imii1ALjOAkwG/eKwvQ5JhVh4A9McAwOX5gB0KW+xEHm5Q+zGPNdNaj
	3+RxDUZtyJrK0//PQYuM0gsnGyORNLu/D139Iz76bmVhQgjZJrrEjMdarouhAZYzFh+UMlCC+cVvz
	GBTLTYTQ==;
Subject: Re: [PATCH 10/30] ps3disk: use blk_mq_alloc_disk
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
 Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
 Ilya Dryomov <idryomov@gmail.com>, "Md. Haris Iqbal"
 <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>,
 Alex Dubov <oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>,
 Richard Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>,
 Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-s390@vger.kernel.org
References: <20210602065345.355274-1-hch@lst.de>
 <20210602065345.355274-11-hch@lst.de>
From: Geoff Levand <geoff@infradead.org>
Message-ID: <c9d63809-a7cc-8907-6065-f14add05a5dc@infradead.org>
Date: Sun, 6 Jun 2021 08:58:41 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
MIME-Version: 1.0
In-Reply-To: <20210602065345.355274-11-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Christoph,

On 6/1/21 11:53 PM, Christoph Hellwig wrote:
> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
> allocation.
> 
>  drivers/block/ps3disk.c | 36 ++++++++++++++----------------------
>  1 file changed, 14 insertions(+), 22 deletions(-)

I tested your alloc_disk-part2 branch on PS3, and it seemed to be working OK.

Tested-by: Geoff Levand <geoff@infradead.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 04:35:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 04:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137645.254998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq6zG-0004lc-GQ; Mon, 07 Jun 2021 04:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137645.254998; Mon, 07 Jun 2021 04:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq6zG-0004lV-CQ; Mon, 07 Jun 2021 04:35:42 +0000
Received: by outflank-mailman (input) for mailman id 137645;
 Mon, 07 Jun 2021 04:35:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq6zF-0004lL-QI; Mon, 07 Jun 2021 04:35:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq6zF-0007FI-E1; Mon, 07 Jun 2021 04:35:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq6zF-0007l7-5A; Mon, 07 Jun 2021 04:35:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq6zF-0007Xf-4d; Mon, 07 Jun 2021 04:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y1yiw47ELcM9HqKvT2AImyYzyIpVo1ubEg2nDrvGlpI=; b=uRaa386pf4KqsICF7GUJ+R1Oyg
	3JLM+115cWato2UD6Gx440dMqR0uzw8vf1sxwrwS/OIApf7cjeOVDQd0ogYRMlNKKaXKqn2G3yu8B
	9ik8L3C56DpQThb9w0wP67KMHKUuoIU3YgmRPeBPHwjVYZgSDwxc3NoXNfJenIxTvbkE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162463-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162463: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=decad3e1d1ed150588dd9d44beacf82295b9d5a5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 04:35:41 +0000

flight 162463 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162463/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                decad3e1d1ed150588dd9d44beacf82295b9d5a5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  310 days
Failing since        152366  2020-08-01 20:49:34 Z  309 days  529 attempts
Testing same since   162463  2021-06-06 21:16:28 Z    0 days    1 attempts

------------------------------------------------------------
6149 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1673686 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:18:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:18:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137661.255029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8aH-0006YU-D7; Mon, 07 Jun 2021 06:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137661.255029; Mon, 07 Jun 2021 06:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8aH-0006YN-9p; Mon, 07 Jun 2021 06:18:01 +0000
Received: by outflank-mailman (input) for mailman id 137661;
 Mon, 07 Jun 2021 06:17:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lq8aF-0006YG-EN
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:17:59 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d971cb5-a999-4a62-93ac-4c0a1334cbe0;
 Mon, 07 Jun 2021 06:17:57 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-38-Bdexnen8PdCDaMaDAxB7cw-1; Mon, 07 Jun 2021 08:17:55 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6382.eurprd04.prod.outlook.com (2603:10a6:803:122::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Mon, 7 Jun
 2021 06:17:52 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 06:17:52 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0042.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:53::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.22 via Frontend Transport; Mon, 7 Jun 2021 06:17:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d971cb5-a999-4a62-93ac-4c0a1334cbe0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623046676;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c9oC9unxu9tlwtPL6j1KQlPbv2ZU1cSjWlXybErNtYs=;
	b=ZVbN1URjl3nW12lwL1CtxC7ZlHME5yCz9ECS6S9F8ljwJjamK8FFws3H5wRWayAph4XY+M
	IJYS78M4vyWxahbVqXhr0RBbSe+8z2xKSTL16OpOEi3ywBbEsDZPJbrkwgCM1UhtYH4P6j
	b1BqmcYZj3MDNTzM1hHpB/hjxEtlSg4=
X-MC-Unique: Bdexnen8PdCDaMaDAxB7cw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PdB/YAiaulp9sKuBfj99XUk53pICCTJUQ5/PhBqv5a+vBprbBTHbK2BHwUktjYrD5/60hX9asL+ozLTjDO1f1aeUcffa0wciM2w7G+TUql4Ui2kVVZakPdBgOysY4VZiLSblBIx+9qvXx3qbiNf2Cv0altyUOr6qBJtWc6iLudn+SbYOlIaKTwRqftWdTU61SZ07vFpfU7aK2chHqbrlrfG4DtOPWonLS+vczozCl34NuvA8+f5dLzD6tf50Di0qwarQKasVicam+GqaiIEnAtRWYZ06xTaTdfO2LQMgusLOmQ/0fJNwfauOFUydXdgqtAna6PpeZkbtY61ZiKDFQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c9oC9unxu9tlwtPL6j1KQlPbv2ZU1cSjWlXybErNtYs=;
 b=gd/AFkjavJtPTPO73GMR9V39BzrvHNy5PPWgqNMwPsNcdLiIupqtwj103JOGiSWoLLZBiqnZ66MzyxWIwp9Hu7ogsaLhBwpVeygmHdXtm64LiRMNoW4ej06oSugVQ9m/jnHaiazOCF0KVOeXyRxGh1hk3fH3Cfs+gOxyMujphNUjgupY6kfiDSlbMrjMxu+0F7Bbwco/SYSpwtnVBEMlHUMmYMcJ+SZYmPlqhrNvsDdwePAzlhpMSriTG7UMyOn2ta/EizpcCbfFY6WIraYUtvPC1nQ1yWY0Xh2bC9OwJf5mlcNSkgylP2X3r86/a/4eaUm2u4OB0TRb29oV7dHsKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 3/9] xen/arm: introduce CONFIG_STATIC_ALLOCATION
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-4-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6c2e72d-e53d-db68-6e54-4c3a5f4d50ce@suse.com>
Date: Mon, 7 Jun 2021 08:17:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607024318.3988467-4-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0042.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cbca9617-7c90-447d-9ab0-08d9297bfb58
X-MS-TrafficTypeDiagnostic: VE1PR04MB6382:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6382C88C1660B4E8AE5A8D22B3389@VE1PR04MB6382.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cbca9617-7c90-447d-9ab0-08d9297bfb58
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 06:17:52.5219
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JeEe1mjnoL+MjFrZC22h7cuRRVNkG04bLeF1kBD5qxGk6HZL6OaZRw4mpP8t5fi9/YOqGAJWKqAaK7jlu08Afw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6382

On 07.06.2021 04:43, Penny Zheng wrote:
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -278,6 +278,9 @@ config ARM64_ERRATUM_1286807
>  
>  	  If unsure, say Y.
>  
> +config STATIC_ALLOCATION
> +    def_bool y
> +
>  endmenu
>  
>  config ARM64_HARDEN_BRANCH_PREDICTOR

Nit: While this and the following option exhibit bad indentation,
everything up from here looks to be fine, and that's what you want
to take as reference when adding a new option.
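For reference, the prevailing style in this file (and Kconfig generally) uses
a hard tab before each keyword, with help text indented by a tab plus two
spaces. A correctly indented version of the option added by this hunk would
be, as a sketch:

	config STATIC_ALLOCATION
		def_bool y

i.e. the body line starts with a tab rather than the four spaces used in the
patch.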

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:33:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137668.255041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8pD-0000P3-Mh; Mon, 07 Jun 2021 06:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137668.255041; Mon, 07 Jun 2021 06:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8pD-0000Ow-J1; Mon, 07 Jun 2021 06:33:27 +0000
Received: by outflank-mailman (input) for mailman id 137668;
 Mon, 07 Jun 2021 06:33:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lq8pB-0000Oq-FA
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:33:25 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69f0d7b9-4315-4bfb-9e11-9d27209969d2;
 Mon, 07 Jun 2021 06:33:24 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-o1CXCpKMNKCBaK9eHzcZ2g-1; Mon, 07 Jun 2021 08:33:22 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4351.eurprd04.prod.outlook.com (2603:10a6:803:49::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 06:33:21 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 06:33:21 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0261.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::33) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.22 via Frontend Transport; Mon, 7 Jun 2021 06:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69f0d7b9-4315-4bfb-9e11-9d27209969d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623047603;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O0JhOLHAau5riUtwzhZ0wKW/7b8/ajD73TmPQBB+jOU=;
	b=UCkBU+EZd3q8ySQuF0McHdWHYqcx767RAOn7a2Kpo/T3Kj/SBOSfkOtqLHglc+JG2QMf6G
	+8j3nFvPPIrUGkrZm+yAPQcO8t5zAoyh/yoeVAyQEDatqanTrUy+dBEMZQFchEnOk1q52s
	VoQah0TsOZ07wJgsTHfT39kUGufSeus=
X-MC-Unique: o1CXCpKMNKCBaK9eHzcZ2g-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nZV2FyUWd27rNXk8jbbldzRYvqAcAEYXe4ccS7U5mjNZdK7ldXvxxvE4t2dGMhPp8jcps5piDZjKB45iYJyEhstRDSQyRixhyejMiETBpcdasQ6oS/Zn693De+UWUIfthJddeV1tuso7G/scQtJtxqHRvSBdSpI7hp/21WeNM+p9akRLPkqPtrexgK8sa86OesgxhpCS4shrELAe/h+I/+EciXUfOEyBCqEQYRKlGbAt/JXb93j+ilP52n1kR0mPrWOS2wM5oGsV41421ss/mGhVJpzhxXhfyfDmNoBypSVQB30EuKBmtT5lG8kT/ehP8LfRRwtDLTdayBLj8koCTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=O0JhOLHAau5riUtwzhZ0wKW/7b8/ajD73TmPQBB+jOU=;
 b=dUlx5SqNRLKJFZ+aY0RXwJ8YbIpWdhjfrhYMqNB/+6iCdRNJ0PvY1Awt7u+GjloMajx4dQCP//viLE3iDjunvDprvZye7jb1Fli+rmqmc+xkFCsxl8/AJSZQSiehUnfE4r8MzeVv2d4Wqd0FwtxBGhqoZxeOS0mlaVgqi28tkznCjq9y0If0L4d070JrpasHxhyyHfNyeNx41J0T2GIxeTCFbuR6laxXJrVMT8g5UQ6uyiGfn5tDMdH20o6XZFaApmVvegWGE/Vsl4h/0pvdPxt24s64dAMxpgl9eYf4QsboavBtohvyKFTIgTHDyb03bkje11a4KDwuzebt23rnBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] MAINTAINERS: adjust x86/mm/shadow maintainers
To: Tim Deegan <tim@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
References: <YLjUM0Dzqn0lWA0l@deinos.phlegethon.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ce4b0cbc-2168-9320-d4df-ff9f27fb4559@suse.com>
Date: Mon, 7 Jun 2021 08:33:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YLjUM0Dzqn0lWA0l@deinos.phlegethon.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0261.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::33)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 61bbe7e9-07ad-4f89-4634-08d9297e24ea
X-MS-TrafficTypeDiagnostic: VI1PR04MB4351:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB43515B6FE15620BEC9DA9B28B3389@VI1PR04MB4351.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:962;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wMKMfa0prfq+oWAKN7kYyaIDfHNA3Z+o8SBVFZCrAn3EzG1hGu6RLI9UlV+8NHG/0Om5YnC2EFNWuxuuEs19QDnrgbeSqtvqTUVQgAUOPSQUk/k8JRIMQ/HKtDTvSezWcM53X6yUWWJN/mHCthrF6MPjqAVtGdZzm1qZryutOsjf+MCHRKFQOrdz+k1lWPuUjWHabMaNjVSywzvpSRe1o9Ok4kaXU173KU/D9JrxsaLzgRsmv6ICVRZomdVUkFV1euU1r6k49VCeFBjGcyNCxbXEya8XANc0PVXhtWCm2ZeJBu1zDGAyPKHwXfv3aKXxkqJ95WihgCLTr7o+q/70ePEBpWA8CGfZ1ZXYoR/vcalkPJeziEVJ/NO3KQYRSX4Q00500iICSQOz48PWfsXOBbZ1QeIcz+aEFnNEL00ilyJmz+pjQ+vIia0pQdwTEaChtibTh/i+0nn8QlFvy/QFbo27fyMnvteBIw5zCKzxUSsxbZc8+Oq2gHFpNb1nC8CCm6QtprqMXFxz0avMirqGu5hjsJVCjo32cokwZN5RUtd7xusPWBuwxqIhnMmk3ZN18j5sJIufsRwlYv0mrUYSYBYEeyw+WFooP822EnMWDz+Th8fGxMttz6aEEYyL66zL08cadPi7FOE/5lFvJ241DIlyiNpsygSJiV87fXVLuaHtSSL3ZIUTNgHdEOpLnG+z
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(396003)(346002)(366004)(39850400004)(136003)(2616005)(2906002)(956004)(478600001)(5660300002)(26005)(8676002)(86362001)(53546011)(4744005)(6916009)(6486002)(316002)(186003)(31686004)(36756003)(8936002)(38100700002)(31696002)(54906003)(4326008)(66476007)(66556008)(66946007)(16526019)(16576012)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?UngyU1hTTlpIdzU0dGw1SGt0ekJMWG41RkxlbmgzTmdIS3hFOCtub2I3bDFH?=
 =?utf-8?B?SVFoKzdqUlZoYnphQzVRWXpRU1EyTlJsWlJMS2lkUTRzalk2TWhKWTNMM3dG?=
 =?utf-8?B?NU94enFSTDVuN1oyVVBtTmNQL2xSN0U0WUJSN2FueWRXTFB6Zk1MV0hYYjgv?=
 =?utf-8?B?TEQzK3BWemJuL29KYm5ndWs5ZTd0RCtlckZSaHV3WnZGcmpZbjdhcVFkeSt2?=
 =?utf-8?B?L0EwZHkwRS9yVXRxQnBIVXFaMjI3cWVjcWw3S00yY0pNenBNeUJ2RjVvbjdG?=
 =?utf-8?B?QzRkV0V6bEhQNC9JNlJHRHN2TjkzNjAyZEc2WGg5b2ovYzZyYW05dVJkVEhT?=
 =?utf-8?B?VVZvLzJHeVFBUXpXMCt2VlpaZzN5NWo2UmJWUi9vK3dLcWQwYmtLekxOK1I1?=
 =?utf-8?B?Wkw3Y0JpUWxFOVprTFhRcWRmZmM2amgvcmh2N3BjNVJCLzJYMis4dDZQQytZ?=
 =?utf-8?B?cjRwQzVxVThGUVNzbldRL25ZaEJ6ZHpTM2xCN1BadEk4NmxBUCt6T0swcGZm?=
 =?utf-8?B?b01RK1pTK1VWV05yczZ6Zit4K1B4UGNFTnlxYldnZUJWM3RTa0dLckFrMmlJ?=
 =?utf-8?B?TUpnU3BmcC9uQVN3ZFN0dWxMeUNNUE1kd0x5RUFsUnFaUGRwRFE3MkNWaHRx?=
 =?utf-8?B?K1J0dWg0QVg0UHFvMVUyVmxBOHI1dWV5aUc4WmNYUVBQdjU4c05Ma0ZsQ2tH?=
 =?utf-8?B?a2oyQ2VCbzJJa2FtTFVuRUMwMkowMDhyZDFzZTRlUWVRVUVCazNYWW1nQTlx?=
 =?utf-8?B?c09iUDQ0TVRKNVRrWmExck5Fd2xxTXZKbWZlcXo1Y1FyNFpCZ0ZGNUZNSGFz?=
 =?utf-8?B?eFU2UmpkK3FSdzdXWUs5ZVFHUXRLUzJFTFBkN0lDTVBFbEpJSWx3a2dMQjJy?=
 =?utf-8?B?TG1HZ0NRTG0yYjV3Q1BsRDZBMUl4QTBpUHg2TjRaMERyQ09xKy9mbzh5MXdo?=
 =?utf-8?B?MXFtQzV6MllEdEZpSkhnTHF1ckVzODc0Z3pOWGRsMm9zM1U1OWxuUWFVOTJE?=
 =?utf-8?B?UE5HUi9UeGFqWUVlK3BIYjZOY3pjczZWWGN4RCtHY0pTbXpJeDcyUXliOFgv?=
 =?utf-8?B?aUticGE0alkwMHVUaVBPMS8rb1dXRUw2SnNodHhkM3BMakNzMVNSYjdhZ3ls?=
 =?utf-8?B?UjV5b2pPa1c4UHY2N3pSKzJGYjBNNmd3Y1FZbFBPdkROY2UrcFJEeU82VEtz?=
 =?utf-8?B?eUZJYWErQkx1VGRpeUkxRVl0cHorWlZLR1Brb2NXYmNGZm9tMWJUK2NBY2xX?=
 =?utf-8?B?UW5xQmt4QVB5VEVtZHdsWkpQQjZBUjlmOHRBWjZjOHRXMitMcUVjWFk0aWJE?=
 =?utf-8?B?SU5QM3g3djByVzlLSzFCUXd2VjU2VjZ3SjNOYndrLzNIdjhlVUQrYTdvb1ZI?=
 =?utf-8?B?dUtVTkFhT0kvUXpjKzNONWZSdkVjQnQ4TTRxeSt3K1RMRkZHeUUrTzBRNWVz?=
 =?utf-8?B?eUtpNEZqdENzZjNyTko4OTI2WFBmeGJiSk9jRzcxaE1IbncyQ3NqT1NEUjBn?=
 =?utf-8?B?WXlCd1FiVi92WUROQVY1eDFyWjRBbUQ4SXMvZ2VPQnArOVpmV3NKWkxzOUJ1?=
 =?utf-8?B?a09HYWlwWW9oY3YyTnhMZmthTUhWTk1UdEtvT1hGMTBTelJyWU96ajhLR2k5?=
 =?utf-8?B?ZURycUppaDJuY05tb2o4cEZtd3hKd3QyYk8yS0p2bGpRd2pNejJiQ1ZjSFF5?=
 =?utf-8?B?cFN6YXlEYWVOZGZtREtvWGdaa05DQkFSQ1RlczFudFhWbVkyNnBvd0FaK1RI?=
 =?utf-8?Q?rgMOH95f20IUIWbzZLMJfLx515tkDJR4YrHqTmF?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 61bbe7e9-07ad-4f89-4634-08d9297e24ea
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 06:33:21.2426
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YuuWTxTNMW/dVWdH4FzvtQu9EU/wSidvyQ2mhgIf9ICokIyOBqj5xWxhm0Wqc9+KIwuorU2oGh12cahwCsWzlw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4351

On 03.06.2021 15:08, Tim Deegan wrote:
> Better reflect reality: Andrew and Jan are active maintainers
> and I review patches.  Keep myself as a reviewer so I can help
> with historical context &c.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Largely for formal reasons:
Acked-by: Jan Beulich <jbeulich@suse.com>

> --- MAINTAINERS
> +++ MAINTAINERS
> @@ -591,7 +591,9 @@ F:	xen/arch/x86/mm/mem_sharing.c
>  F:	tools/tests/mem-sharing/
>  
>  X86 SHADOW PAGETABLES
> -M:	Tim Deegan <tim@xen.org>
> +M:	Jan Beulich <jbeulich@suse.com>
> +M:	Andrew Cooper <andrew.cooper3@citrix.com>
> +R:	Tim Deegan <tim@xen.org>
>  S:	Maintained
>  F:	xen/arch/x86/mm/shadow/
>  
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:36:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137674.255052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8rw-00012i-4G; Mon, 07 Jun 2021 06:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137674.255052; Mon, 07 Jun 2021 06:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8rw-00012b-17; Mon, 07 Jun 2021 06:36:16 +0000
Received: by outflank-mailman (input) for mailman id 137674;
 Mon, 07 Jun 2021 06:36:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I7O+=LB=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1lq8ru-00012V-Rh
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:36:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [170.10.133.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1441298f-3931-46d7-bf21-13792a4ed3c7;
 Mon, 07 Jun 2021 06:36:13 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-364-fxn78dn7OMiun1Xqh1ufiQ-1; Mon, 07 Jun 2021 02:36:10 -0400
Received: by mail-wr1-f71.google.com with SMTP id
 h104-20020adf90710000b029010de8455a3aso7433964wrh.12
 for <xen-devel@lists.xenproject.org>; Sun, 06 Jun 2021 23:36:10 -0700 (PDT)
Received: from thuth.remote.csb (pd957536e.dip0.t-ipconnect.de.
 [217.87.83.110])
 by smtp.gmail.com with ESMTPSA id i15sm4425627wmq.23.2021.06.06.23.36.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 06 Jun 2021 23:36:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1441298f-3931-46d7-bf21-13792a4ed3c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1623047773;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=thJrtqAEZWgLsa5T9uUPgNMHYTXTQwSguSbLhPSvi3M=;
	b=LZRtFo68RZw+3buDEpIpVRMmS6vgQ709xnixuj/HOSrtjGiuUbJ+K+iC0YKPJUT6pY/ouM
	w4/jdn0KJnOAp6RY6GCLg5ni2g0nOIUZgsC30cmZN2d86ZskmVgJ6JQMTmq9XoWuNhO3cw
	3ZA6B0IJEnpxi32fnZq1G5iUN/34ioU=
X-MC-Unique: fxn78dn7OMiun1Xqh1ufiQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=thJrtqAEZWgLsa5T9uUPgNMHYTXTQwSguSbLhPSvi3M=;
        b=NhOPuO/a6s4USKpAj5U5CUAQwYqCpoa7LwC+oT+UtFlnPXs1BrtlRAJz3ZGCGXy1Uo
         G4RzAFxZhGiFJUJxMwfKZSbcUuddMo/04ugtE5HnsyZyW0O+kyy15kD2hMExWtPKHaHr
         CQXq6YRPEOQFDDHOs2EFw8N90Do+vAg/kBwO7vv/M/nJHirFWDLb53hgwURe8Hs4YhVT
         T0okS8M+bLnIqef7rX7EG40SvhmMPdwoj/erMTmatKNKCvc38DWfMMw8N19kX2D8VWSd
         Zu0ofEBPVO0NRQM/a5e0PxBmMKw2ROxz11smYyMyLji7CtBtPWCxbxQvVqLsxqJjKakw
         tpKg==
X-Gm-Message-State: AOAM530/cwQaKfBNodIDnqiUCXpNQcruk6Ds62Drb64SlaJOD+KPsBdZ
	nMqNhC5pme6AblQCFO3aMRbUcGS1ccf6a/maK+tqig0aiz+g8I5TczrmoEcHuUhAJIX+6EaSxuI
	QioBJMlEvyOhkRRfkoCSnrhk1mE/NJevgXaorvWUKwJugyWcToh4XL6eTFTEfzqQVMNLSAGt26r
	A=
X-Received: by 2002:a1c:1f51:: with SMTP id f78mr15330464wmf.7.1623047769521;
        Sun, 06 Jun 2021 23:36:09 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJylIkmTDkwgmIQM4R2Pny1BX6EiFV+JC8gjufLOIUNCoJ8dJnFVzgajFrArTMLPjOqGd59ivw==
X-Received: by 2002:a1c:1f51:: with SMTP id f78mr15330437wmf.7.1623047769298;
        Sun, 06 Jun 2021 23:36:09 -0700 (PDT)
Subject: Re: [PATCH 2/2] Remove leading underscores from Xen defines
To: Ahmed Abouzied <email@aabouzied.com>, qemu-devel@nongnu.org
Cc: qemu-trivial@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20210605175001.13836-1-email@aabouzied.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <01ba2176-b559-1078-8a9f-39553989d9d3@redhat.com>
Date: Mon, 7 Jun 2021 08:36:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210605175001.13836-1-email@aabouzied.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=thuth@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05/06/2021 19.50, Ahmed Abouzied wrote:
> Identifiers with a leading underscore followed by a capital letter or
> another underscore are reserved by the C standard.
> 
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/369
> 
> Signed-off-by: Ahmed Abouzied <email@aabouzied.com>
> ---
>   include/hw/xen/interface/grant_table.h  | 4 ++--
>   include/hw/xen/interface/io/blkif.h     | 4 ++--
>   include/hw/xen/interface/io/console.h   | 4 ++--
>   include/hw/xen/interface/io/fbif.h      | 4 ++--
>   include/hw/xen/interface/io/kbdif.h     | 4 ++--
>   include/hw/xen/interface/io/netif.h     | 4 ++--
>   include/hw/xen/interface/io/protocols.h | 4 ++--
>   include/hw/xen/interface/io/ring.h      | 4 ++--
>   include/hw/xen/interface/io/usbif.h     | 4 ++--
>   9 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/include/hw/xen/interface/grant_table.h b/include/hw/xen/interface/grant_table.h
> index 2af0cbdde3..c0a09dadad 100644
> --- a/include/hw/xen/interface/grant_table.h
> +++ b/include/hw/xen/interface/grant_table.h
> @@ -25,8 +25,8 @@
>    * Copyright (c) 2004, K A Fraser
>    */
>   
> -#ifndef __XEN_PUBLIC_GRANT_TABLE_H__
> -#define __XEN_PUBLIC_GRANT_TABLE_H__
> +#ifndef XEN_PUBLIC_GRANT_TABLE_H
> +#define XEN_PUBLIC_GRANT_TABLE_H
>   
>   /*
>    * Reference to a grant entry in a specified domain's grant table.
> diff --git a/include/hw/xen/interface/io/blkif.h b/include/hw/xen/interface/io/blkif.h
> index d07fa1e078..680914571f 100644
> --- a/include/hw/xen/interface/io/blkif.h
> +++ b/include/hw/xen/interface/io/blkif.h
> @@ -25,8 +25,8 @@
>    * Copyright (c) 2012, Spectra Logic Corporation
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_BLKIF_H__
> -#define __XEN_PUBLIC_IO_BLKIF_H__
> +#ifndef XEN_PUBLIC_IO_BLKIF_H
> +#define XEN_PUBLIC_IO_BLKIF_H
>   
>   #include "ring.h"
>   #include "../grant_table.h"
> diff --git a/include/hw/xen/interface/io/console.h b/include/hw/xen/interface/io/console.h
> index e2155d1cf5..0d4a72456e 100644
> --- a/include/hw/xen/interface/io/console.h
> +++ b/include/hw/xen/interface/io/console.h
> @@ -24,8 +24,8 @@
>    * Copyright (c) 2005, Keir Fraser
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_CONSOLE_H__
> -#define __XEN_PUBLIC_IO_CONSOLE_H__
> +#ifndef XEN_PUBLIC_IO_CONSOLE_H
> +#define XEN_PUBLIC_IO_CONSOLE_H
>   
>   typedef uint32_t XENCONS_RING_IDX;
>   
> diff --git a/include/hw/xen/interface/io/fbif.h b/include/hw/xen/interface/io/fbif.h
> index ea87ebec0a..4e25423490 100644
> --- a/include/hw/xen/interface/io/fbif.h
> +++ b/include/hw/xen/interface/io/fbif.h
> @@ -23,8 +23,8 @@
>    * Copyright (C) 2006 Red Hat, Inc., Markus Armbruster <armbru@redhat.com>
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_FBIF_H__
> -#define __XEN_PUBLIC_IO_FBIF_H__
> +#ifndef XEN_PUBLIC_IO_FBIF_H
> +#define XEN_PUBLIC_IO_FBIF_H
>   
>   /* Out events (frontend -> backend) */
>   
> diff --git a/include/hw/xen/interface/io/kbdif.h b/include/hw/xen/interface/io/kbdif.h
> index 1d68cd458e..a952c77bf2 100644
> --- a/include/hw/xen/interface/io/kbdif.h
> +++ b/include/hw/xen/interface/io/kbdif.h
> @@ -23,8 +23,8 @@
>    * Copyright (C) 2006 Red Hat, Inc., Markus Armbruster <armbru@redhat.com>
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_KBDIF_H__
> -#define __XEN_PUBLIC_IO_KBDIF_H__
> +#ifndef XEN_PUBLIC_IO_KBDIF_H
> +#define XEN_PUBLIC_IO_KBDIF_H
>   
>   /*
>    *****************************************************************************
> diff --git a/include/hw/xen/interface/io/netif.h b/include/hw/xen/interface/io/netif.h
> index 48fa530950..f4a28a43b1 100644
> --- a/include/hw/xen/interface/io/netif.h
> +++ b/include/hw/xen/interface/io/netif.h
> @@ -24,8 +24,8 @@
>    * Copyright (c) 2003-2004, Keir Fraser
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_NETIF_H__
> -#define __XEN_PUBLIC_IO_NETIF_H__
> +#ifndef XEN_PUBLIC_IO_NETIF_H
> +#define XEN_PUBLIC_IO_NETIF_H
>   
>   #include "ring.h"
>   #include "../grant_table.h"
> diff --git a/include/hw/xen/interface/io/protocols.h b/include/hw/xen/interface/io/protocols.h
> index 52b4de0f81..3d1cac322b 100644
> --- a/include/hw/xen/interface/io/protocols.h
> +++ b/include/hw/xen/interface/io/protocols.h
> @@ -22,8 +22,8 @@
>    * Copyright (c) 2008, Keir Fraser
>    */
>   
> -#ifndef __XEN_PROTOCOLS_H__
> -#define __XEN_PROTOCOLS_H__
> +#ifndef XEN_PROTOCOLS_H
> +#define XEN_PROTOCOLS_H
>   
>   #define XEN_IO_PROTO_ABI_X86_32     "x86_32-abi"
>   #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
> diff --git a/include/hw/xen/interface/io/ring.h b/include/hw/xen/interface/io/ring.h
> index 115705f3f4..ea324c5a62 100644
> --- a/include/hw/xen/interface/io/ring.h
> +++ b/include/hw/xen/interface/io/ring.h
> @@ -24,8 +24,8 @@
>    * Tim Deegan and Andrew Warfield November 2004.
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_RING_H__
> -#define __XEN_PUBLIC_IO_RING_H__
> +#ifndef XEN_PUBLIC_IO_RING_H
> +#define XEN_PUBLIC_IO_RING_H
>   
>   /*
>    * When #include'ing this header, you need to provide the following
> diff --git a/include/hw/xen/interface/io/usbif.h b/include/hw/xen/interface/io/usbif.h
> index c6a58639d6..564c0115e8 100644
> --- a/include/hw/xen/interface/io/usbif.h
> +++ b/include/hw/xen/interface/io/usbif.h
> @@ -25,8 +25,8 @@
>    * DEALINGS IN THE SOFTWARE.
>    */
>   
> -#ifndef __XEN_PUBLIC_IO_USBIF_H__
> -#define __XEN_PUBLIC_IO_USBIF_H__
> +#ifndef XEN_PUBLIC_IO_USBIF_H
> +#define XEN_PUBLIC_IO_USBIF_H
>   
>   #include "ring.h"
>   #include "../grant_table.h"
> 

I hope the Xen people can comment on whether the underscores had a purpose 
here or whether it's ok to remove them, thus:

Cc: xen-devel@lists.xenproject.org

 From my QEMU developer's point of view:

Reviewed-by: Thomas Huth <thuth@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137682.255063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8xK-0002UN-T9; Mon, 07 Jun 2021 06:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137682.255063; Mon, 07 Jun 2021 06:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8xK-0002UG-Pk; Mon, 07 Jun 2021 06:41:50 +0000
Received: by outflank-mailman (input) for mailman id 137682;
 Mon, 07 Jun 2021 06:41:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZuZ8=LB=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lq8xK-0002UA-0O
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:41:50 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecd07ed7-0a40-488f-8cce-fa67029ed8f0;
 Mon, 07 Jun 2021 06:41:47 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C9F1667373; Mon,  7 Jun 2021 08:41:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecd07ed7-0a40-488f-8cce-fa67029ed8f0
Date: Mon, 7 Jun 2021 08:41:42 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 01/11] x86/HV: Initialize GHCB page in Isolation
 VM
Message-ID: <20210607064142.GA24478@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-2-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-2-ltykernel@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
> +	if (ms_hyperv.ghcb_base) {
> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +
> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
> +		if (!ghcb_va)
> +			return -ENOMEM;

Can you explain this a bit more?  We've very much deprecated
ioremap_cache in favor of memremap.  Why do you need a __iomem address
here?  Why do we need the remap here at all?

Does the data structure at this address not have any types that we
could use a struct for?

> +
> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
> +		if (!ghcb_va) {

This seems to duplicate the above code.

> +bool hv_isolation_type_snp(void)
> +{
> +	return static_branch_unlikely(&isolation_type_snp);
> +}
> +EXPORT_SYMBOL_GPL(hv_isolation_type_snp);

This probably wants a kerneldoc explaining when it should be used.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:43:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137688.255074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8yj-00038v-8V; Mon, 07 Jun 2021 06:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137688.255074; Mon, 07 Jun 2021 06:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq8yj-00038o-5Q; Mon, 07 Jun 2021 06:43:17 +0000
Received: by outflank-mailman (input) for mailman id 137688;
 Mon, 07 Jun 2021 06:43:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZuZ8=LB=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lq8yi-00037m-07
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:43:16 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5521208-f9ca-44d4-b8d7-7fb737526b70;
 Mon, 07 Jun 2021 06:43:14 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 19E8068AFE; Mon,  7 Jun 2021 08:43:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5521208-f9ca-44d4-b8d7-7fb737526b70
Date: Mon, 7 Jun 2021 08:43:12 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
Message-ID: <20210607064312.GB24478@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-9-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-9-ltykernel@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> 
> For a Hyper-V isolation VM with AMD SEV-SNP, the bounce buffer (shared
> memory) needs to be accessed via an extra address space (e.g. addresses
> above bit 39). Hyper-V code may remap the extra address space outside of
> swiotlb. swiotlb_bounce() needs to use the remapped virtual address to
> copy data from/to the bounce buffer. Add a new interface,
> swiotlb_set_bounce_remap(), to do that.

Why can't you use the bus_dma_region ranges to remap to your preferred
address?


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:45:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:45:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137695.255085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq90N-0003n2-Ky; Mon, 07 Jun 2021 06:44:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137695.255085; Mon, 07 Jun 2021 06:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq90N-0003mv-HS; Mon, 07 Jun 2021 06:44:59 +0000
Received: by outflank-mailman (input) for mailman id 137695;
 Mon, 07 Jun 2021 06:44:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZuZ8=LB=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lq90L-0003mm-Eh
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:44:57 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 667d5b18-68e7-4eb6-a928-9ed6ca22bfc5;
 Mon, 07 Jun 2021 06:44:55 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B14DA68AFE; Mon,  7 Jun 2021 08:44:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 667d5b18-68e7-4eb6-a928-9ed6ca22bfc5
Date: Mon, 7 Jun 2021 08:44:53 +0200
From: Christoph Hellwig <hch@lst.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Tianyu Lan <ltykernel@gmail.com>, kys@microsoft.com,
	haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
	krish.sadhukhan@oracle.com, saravanand@fb.com,
	Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
	m.szyprowski@samsung.com, robin.murphy@arm.com, jgross@suse.com,
	sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
	xen-devel@lists.xenproject.org, davem@davemloft.net,
	kuba@kernel.org, jejb@linux.ibm.com, martin.petersen@oracle.com,
	iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
	vkuznets@redhat.com, thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer
 for Isolation VM
Message-ID: <20210607064453.GC24478@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-10-ltykernel@gmail.com> <9488c114-81ad-eb67-79c0-5ed319703d3e@oracle.com> <a023ee3f-ce85-b54f-79c3-146926bf3279@gmail.com> <d6714e8b-dcb6-798b-59a4-5bb68f789564@oracle.com> <1cdf4e6e-6499-e209-d499-7ab82992040b@gmail.com> <099f311b-9614-dac5-ce05-6dad988f8a62@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <099f311b-9614-dac5-ce05-6dad988f8a62@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

Honestly, we really need to do away with the concept of hypervisor-
specific swiotlb allocations and just add a hypervisor hook to remap the
"main" buffer.  That should remove a lot of code and confusion, not just
for Xen but also for any future addition like Hyper-V.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:46:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137702.255096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq91S-0004OO-Vx; Mon, 07 Jun 2021 06:46:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137702.255096; Mon, 07 Jun 2021 06:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq91S-0004OH-Ri; Mon, 07 Jun 2021 06:46:06 +0000
Received: by outflank-mailman (input) for mailman id 137702;
 Mon, 07 Jun 2021 06:46:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZuZ8=LB=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lq91S-0004O3-0P
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:46:06 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 286305df-f0c1-405a-a4c8-dbe8279bcae8;
 Mon, 07 Jun 2021 06:46:04 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 619F368AFE; Mon,  7 Jun 2021 08:46:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 286305df-f0c1-405a-a4c8-dbe8279bcae8
Date: Mon, 7 Jun 2021 08:46:03 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 11/11] HV/Storvsc: Add Isolation VM support for
 storvsc driver
Message-ID: <20210607064603.GD24478@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-12-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-12-ltykernel@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sun, May 30, 2021 at 11:06:28AM -0400, Tianyu Lan wrote:
> +				for (i = 0; i < request->hvpg_count; i++)
> +					dma_unmap_page(&device->device,
> +							request->dma_range[i].dma,
> +							request->dma_range[i].mapping_size,
> +							request->vstor_packet.vm_srb.data_in
> +							     == READ_TYPE ?
> +							DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +				kfree(request->dma_range);

Unreadably long lines.  You probably want to factor the quoted code into
a little readable helper and do the same for the map side.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 06:50:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 06:50:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137712.255110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq95S-0005nw-HS; Mon, 07 Jun 2021 06:50:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137712.255110; Mon, 07 Jun 2021 06:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq95S-0005np-Dv; Mon, 07 Jun 2021 06:50:14 +0000
Received: by outflank-mailman (input) for mailman id 137712;
 Mon, 07 Jun 2021 06:50:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZuZ8=LB=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lq95R-0005nj-Dg
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 06:50:13 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfdb5c52-ebc5-4c3e-a33b-006801863ac4;
 Mon, 07 Jun 2021 06:50:12 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 6BD7A68AFE; Mon,  7 Jun 2021 08:50:07 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfdb5c52-ebc5-4c3e-a33b-006801863ac4
Date: Mon, 7 Jun 2021 08:50:07 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
Message-ID: <20210607065007.GE24478@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-11-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-11-ltykernel@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
> +	if (hv_isolation_type_snp()) {
> +		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
> +			       GFP_KERNEL);
> +		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
> +			pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
> +				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
> +
> +		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
> +		kfree(pfns);
> +		if (!vaddr)
> +			goto cleanup;
> +		net_device->recv_original_buf = net_device->recv_buf;
> +		net_device->recv_buf = vaddr;
> +	}

This probably wants a helper to make the thing more readable.  But who
came up with this fucked up communication protocol where the host needs
to map random pfns into a contiguous range?  Sometimes I really have to
wonder what crack the hyper-v people take when comparing this to the
relatively sane approach others take.

> +	for (i = 0; i < page_count; i++)
> +		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
> +				 packet->dma_range[i].mapping_size,
> +				 DMA_TO_DEVICE);
> +
> +	kfree(packet->dma_range);

Any reason this isn't simply using a struct scatterlist?

> +	for (i = 0; i < page_count; i++) {
> +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
> +					 + pb[i].offset);
> +		u32 len = pb[i].len;
> +
> +		dma = dma_map_single(&hv_dev->device, src, len,
> +				     DMA_TO_DEVICE);

dma_map_single can only be used on page-backed memory, and if this is
using page-backed memory you wouldn't need to do these phys_to_virt
tricks.  Can someone explain the mess here in more detail?

>  	struct rndis_device *dev = nvdev->extension;
>  	struct rndis_request *request = NULL;
> +	struct hv_device *hv_dev = ((struct net_device_context *)
> +			netdev_priv(ndev))->device_ctx;

Why not use a net_device_context local variable instead of this cast
galore?


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 07:08:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 07:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137723.255130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq9NB-0007XD-6H; Mon, 07 Jun 2021 07:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137723.255130; Mon, 07 Jun 2021 07:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lq9NB-0007X6-29; Mon, 07 Jun 2021 07:08:33 +0000
Received: by outflank-mailman (input) for mailman id 137723;
 Mon, 07 Jun 2021 07:08:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq9N9-0007Ww-JP; Mon, 07 Jun 2021 07:08:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq9N9-0001pQ-96; Mon, 07 Jun 2021 07:08:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lq9N9-0007oR-2T; Mon, 07 Jun 2021 07:08:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lq9N9-0002YY-20; Mon, 07 Jun 2021 07:08:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AtxJNPUZJmpV0vJl9Ulzbp5uggoMeIp4SENKkVrK790=; b=hfB8Blp16hGrrcfxXIqnpX1DpY
	u1dik6cI5S2SpuNUoeZFZqDExvKfWoNgshA4NZ3jwVX0rStw1SuSdXsj/Ayr+NdySoyEfYNPZ5sMw
	yP95p2kYGxTtCdMVyvBuZlr/f8gwF+Jo0YIu1t5FQyOnHVG0YtgqmkR4mU6LAdXS+WRg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162480-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162480: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 07:08:31 +0000

flight 162480 xen-unstable-smoke real [real]
flight 162486 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162480/
http://logs.test-lab.xenproject.org/osstest/logs/162486/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   17 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    2 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
    supporting the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 08:15:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 08:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137735.255150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqAPN-0006Eo-FQ; Mon, 07 Jun 2021 08:14:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137735.255150; Mon, 07 Jun 2021 08:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqAPN-0006Eh-CN; Mon, 07 Jun 2021 08:14:53 +0000
Received: by outflank-mailman (input) for mailman id 137735;
 Mon, 07 Jun 2021 08:14:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+N/=LB=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lqAPM-0006EZ-1C
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 08:14:52 +0000
Received: from mail-pl1-x634.google.com (unknown [2607:f8b0:4864:20::634])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b6dff58-edcb-461c-8dcd-a94c33ce1d6d;
 Mon, 07 Jun 2021 08:14:51 +0000 (UTC)
Received: by mail-pl1-x634.google.com with SMTP id x10so8209084plg.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Jun 2021 01:14:51 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 r11sm8236573pgl.34.2021.06.07.01.14.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Jun 2021 01:14:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b6dff58-edcb-461c-8dcd-a94c33ce1d6d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=c5tZw1PE5ja1kIfp0A9CThFtz7RTJJ96vn5JKsdn4zo=;
        b=XoQyLvz24OoCRIOShkQNcu9DiR7bBkokC0zH2y0VsTYjxJfKw7R0siA80dncjT3YUK
         wAQolPJ8LMP2QAHMUANfaqPmSCX0C5CGSEEQPtRRcQTl1EKUo8gRIPdRN/yizwXOvk7a
         MyxLl0xOVUFh4CzE2lkyyns1y8zU/3BfW8wnjMepLuBa14dDIuRlcgHvnJy9KhXaOr8X
         f1b1b3SWDcQzR8oj41QVFoJZ6XqUPNURZphda8kDAwgTk/UUkxjzRis3jA/H1Gt2YM49
         MOGgQu1k2QTfbhGmrfPyoKrM3mYFHKWoquCiDY9UKYa1vj3gMvLKx5aFnrcD/cKhHcp4
         J8+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=c5tZw1PE5ja1kIfp0A9CThFtz7RTJJ96vn5JKsdn4zo=;
        b=PTNP+e+BEbsZgKCbDdK430ywyYvwSCMR4oAVFA1z/bvOjz4A0CwzVrHYEVUTTWjyLP
         DTfQ6tI87T6cO4x4ExJJxR3SwlTBJuckyfzguFRWJujSsWH4XBFvfAO6RD6PVAxR9re4
         46Yb/Vj5X1EML1MQHUMbt+RbLNzVZxvbU9yY18v4Numaz9D+++czZfnWzx55ehJ/qJSt
         fSW3Xp9kbWpMew4FsGCgOW7NlAFazd5UKznHfFR9nbSh5sCXDsP6a2RCMMK0AWIzdCsS
         W1G3ebF1a0FhS6bSpNP1bCyWjsXuaZErx2HceCh0bike/tlmqeNxC+fEelOtGv0KZsyL
         Lnfw==
X-Gm-Message-State: AOAM530oG9LpEYKAiKawgeu8SIv02+ZbLUEqOV3iV56RP9OP9wGTs/Wf
	mUlXVym6WEt1ktpMy8P3KMo=
X-Google-Smtp-Source: ABdhPJz5DsUF3lCnv8yQA4dIwCqwn7lOEnqmDAyx9VQ48+J+hBL0/9GZGiwWD3mDm4r9zCHOflaTmA==
X-Received: by 2002:a17:902:9a42:b029:f5:1cf7:2e52 with SMTP id x2-20020a1709029a42b02900f51cf72e52mr16701852plv.25.1623053690127;
        Mon, 07 Jun 2021 01:14:50 -0700 (PDT)
Subject: Re: [RFC PATCH V3 01/11] x86/HV: Initialize GHCB page in Isolation VM
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-2-ltykernel@gmail.com>
 <20210607064142.GA24478@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <37260f47-bd32-08f7-b006-f75f4d3c408a@gmail.com>
Date: Mon, 7 Jun 2021 16:14:36 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607064142.GA24478@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Christoph:
	Thanks for your review.

On 6/7/2021 2:41 PM, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
>> +	if (ms_hyperv.ghcb_base) {
>> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
>> +
>> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
>> +		if (!ghcb_va)
>> +			return -ENOMEM;
> 
> Can you explain this a bit more?  We've very much deprecated
> ioremap_cache in favor of memremap.  Why do you need a __iomem address
> here?  Why do we need the remap here at all?

The GHCB physical address is an address in the extra address space, which
is above the shared GPA boundary reported by the Hyper-V CPUID. Addresses
below the shared GPA boundary are treated as encrypted and those above it
are treated as decrypted. System memory is remapped into the extra address
space, starting at the boundary. Memory shared with the host needs to be
accessed via its extra-address-space address (pa + shared_gpa_boundary) in
the Linux guest.

The mapping here is for the GHCB page used for communication with the
hypervisor (e.g. hypercalls and MSR reads/writes) via the GHCB page.

memremap() goes through the iomem_resource list, and addresses in the
extra address space will not be in that list, so I used ioremap_cache().
I will use memremap() instead of ioremap() here.

> Does the data structure at this address not have any types that we
> could use a struct for?

The struct will be added in the following patch. I will refresh the
following patch and use struct hv_ghcb for the mapped pointer.
> 
>> +
>> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
>> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
>> +		if (!ghcb_va) {
> 
> This seems to duplicate the above code.

The code above maps the GHCB for the BSP, and this does the same for the
APs. I will add a new function to avoid the duplication.

> 
>> +bool hv_isolation_type_snp(void)
>> +{
>> +	return static_branch_unlikely(&isolation_type_snp);
>> +}
>> +EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
> 
> This probably wants a kerneldoc explaining when it should be used.

OK. I will add.



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 09:04:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 09:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137751.255173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBBP-0002no-N7; Mon, 07 Jun 2021 09:04:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137751.255173; Mon, 07 Jun 2021 09:04:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBBP-0002nh-KD; Mon, 07 Jun 2021 09:04:31 +0000
Received: by outflank-mailman (input) for mailman id 137751;
 Mon, 07 Jun 2021 09:04:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqBBN-0002nb-LX
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 09:04:29 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c317017b-bc5c-4c8b-872c-03e730df006f;
 Mon, 07 Jun 2021 09:04:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BB84421A86;
 Mon,  7 Jun 2021 09:04:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8DB4A118DD;
 Mon,  7 Jun 2021 09:04:27 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id pTV1IRvhvWDdBAAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 09:04:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c317017b-bc5c-4c8b-872c-03e730df006f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623056667; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=yL97rnNZM/IuAgRq+UAn/CMZq82m0SlmJjq4Ku9Fwmk=;
	b=aGVb94R3An5t1FUxPWWKFZc2kO/7nc82f8/g7ECRWgSWFMYkCp8IJPF43XFK8/tSr6yHxr
	RIe4Q7ldQXOE+Z6tWJ2eqy+sPS4rYBot2vMPDHGkRj1RDGiJ5qj1XUdkzN1pYUVAjaSl4K
	jYVxiD/rzlz77q9HT3c+KV2S/AEriyI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
Date: Mon,  7 Jun 2021 11:04:25 +0200
Message-Id: <20210607090425.18277-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Since 32-bit PV guests have been security de-supported when not running
under the PV shim, the hypervisor is no longer configured to support
such domains by default unless built as the PV shim.

Unfortunately libxenguest will then fail to save or restore any PV
domain, as it tries to obtain the compat MFN list even for 64-bit
guests.

Fix that by obtaining the compat MFN list only for 32-bit PV guests.

Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/guest/xg_sr_common_x86_pv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/guest/xg_sr_common_x86_pv.c
index cd33406aab..ad20461e2e 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.c
+++ b/tools/libs/guest/xg_sr_common_x86_pv.c
@@ -154,6 +154,7 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
     ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
 #else
     /* 64 bit toolstacks need to ask Xen specially for it */
+    if ( ctx->x86.pv.levels == 3 )
     {
         struct xen_machphys_mfn_list xmml = {
             .max_extents = 1,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 09:12:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 09:12:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137758.255184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBIx-0004Es-GZ; Mon, 07 Jun 2021 09:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137758.255184; Mon, 07 Jun 2021 09:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBIx-0004El-DB; Mon, 07 Jun 2021 09:12:19 +0000
Received: by outflank-mailman (input) for mailman id 137758;
 Mon, 07 Jun 2021 09:12:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqBIw-0004Ef-Cf
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 09:12:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e023499-821b-4341-bce3-0a77fa965720;
 Mon, 07 Jun 2021 09:12:16 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2050.outbound.protection.outlook.com [104.47.13.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5--Aio0mhGPPmjhr91Zmh3dA-2;
 Mon, 07 Jun 2021 11:12:14 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7024.eurprd04.prod.outlook.com (2603:10a6:800:124::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.21; Mon, 7 Jun
 2021 09:12:12 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 09:12:11 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR0701CA0010.eurprd07.prod.outlook.com (2603:10a6:200:42::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.12 via Frontend
 Transport; Mon, 7 Jun 2021 09:12:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e023499-821b-4341-bce3-0a77fa965720
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623057135;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nd7r8vCjVBhTQMZz/7WnPMW+Svvzm6ytLVcpqPiKHmY=;
	b=AzgLqFUtJFKNh479w1hAjvGxS1jbV6pVlPewiLuewwYmrF6Xkm2EetspyGPXjbaVhPdbJ4
	qCWI4iUjVvxF1uR0VgMQesdccybMPGRTGoEaAxCrYzVwI5xoCoFEcSpZUXMANhGcmLE85w
	acaELp11FiOKOdzwxIiffCMorWUAxn8=
X-MC-Unique: -Aio0mhGPPmjhr91Zmh3dA-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aaI9vRbrVG5Xb6cXMf+metaT8QomDrDZYPIANDkUTF24FwI0qK3pOcBgr3HULcuAh8LeZ8PpNQbYfvozin0HO9RMrh0yU/zb/IYLvVPBhNjjNoTuixOIuvDeLlDFB/PvZ/HQoXFVsSIkTF+kM0zmCdlACHIheeYbEFAO31gAno5np7VQ5FY5rFHtqTcQZ2pBLyOa/J4jND813sIR73fqh6jgewpg7NtnjjcDhOqJvv4+d0ihgGVhh5pAJqYzawRWerw2Z35OrCPrHucokyS2AHW4D7br6rIncLESDNmSGhZKcuMdfBqiq4YU8nHpLK/AlhqZzVkMpLxm+dgO44lCNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nd7r8vCjVBhTQMZz/7WnPMW+Svvzm6ytLVcpqPiKHmY=;
 b=REbIull5AI/cEMc74PPbwXBUff/urlWsEgH5F1PeLzf3+3C4WMdSyBvP/7lrBJdKDA1EZYd7HG7nbw4iHkSlkV5Zf2IBP5clmig26Mu07k58BiKJp+IvMKJG2yYfBxUk18FSDeI+Gh1+6nL2FTZLJ5ATIaaNks7/RFHUZFG0E3AeHI2xBIgsYUTsPPg5UjTERO/7wQbB55Oe8SJZxJapRypMzhazMaTMsB5lAQlwsBsXp9ZpwbqfSalSeW9qIxKjOhbBrH7rkMt7Azll4eOKV7KlglFh8gV1+KycKSR6rEdZJhxMHQe2NXAJgqxrAwRsxyVxHBPaoq79X/bH+N3/iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
To: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210607090425.18277-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <672a88b3-353c-fdc4-d6e0-ee3a725b5788@suse.com>
Date: Mon, 7 Jun 2021 11:12:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607090425.18277-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR0701CA0010.eurprd07.prod.outlook.com
 (2603:10a6:200:42::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e5dfb7b1-0ba4-4a50-90ca-08d929945568
X-MS-TrafficTypeDiagnostic: VI1PR04MB7024:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB70243ABB86153B257C6CAF5FB3389@VI1PR04MB7024.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BhHirDzsFgGJ5duXy5ou/fTNFoBTz+QUPBP7/5886Z3Hd/EuHpA1qUMKcX4TDxNtczicQxzE1UQsU+5aD5lxWSa/8fQevknWtzDKzA5kPcQYh7vbPVEMfJAq6xKY4nwvGFOpvkiyVK1QTpV6FgDZY2QJ43BphZXCKQ0AAU/CiCVYjxBYXn6XVACw3XnMIsvM+ijBbYzafI318eYTjXCb0eyUKW3KvmKZ/pPCmY0zx9zTOXME9/vv8NOjrqVCJsg/4Vag0G4nOlKNv4NORACy5VCNPJmO3EeYLzS67eYGdsv+P1xmDdTyzDdIaKmkO76aKxop+E1qtZ9nqL0ouJSkK8x7WXYxHgDRd/yko8l2Rnr/e8Y+yPKweEq0eSaHzUAekkPTJWFR8LB/rOcZ32fKEWPEL9Oq3kIgKD0lW4C33bctVlseFV4v7iXcvKqkw0sUXprGJzHbm7OXiSkZVIeyPqj7E+nOJq7+VJkqstgLdsJubfPuj2sx5P+M39mgiAATjELwlod4dJA1fBQPpZAVBeMzOdKUPzZsqlp2zOi8F9t709XUzAwUUfmg97iGgSJT53pVhsGgH2IMPvJPfUD/rsVcLqKiqJQsaddhKixSZuo1TvpoQP3qAdYmBWg/QpqbpY4jEhPaB5aGbZkL5PYwQza+/oRpg5U7oMUx6hJXTmRPffAkbCMTdJH3OWVzH4Yt
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(39860400002)(136003)(366004)(396003)(376002)(53546011)(8676002)(6862004)(16526019)(956004)(83380400001)(316002)(26005)(16576012)(5660300002)(37006003)(2906002)(478600001)(36756003)(38100700002)(31686004)(86362001)(186003)(66476007)(31696002)(6636002)(6486002)(2616005)(66556008)(4326008)(54906003)(66946007)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?WGdzdzRzazV0L3VWSEhMOGtxSlN4VzNkOWVQalh0RHRid1Q5NTdMR3NEL1Q2?=
 =?utf-8?B?a3p3LysybGlSZ3VINmhOU0pPVERvUURQVXN3dUs0dEI3dzdtZUlkVTZaVlNP?=
 =?utf-8?B?UStUbjVhTHQvTDFQVHBHWm5pQk5oZUs5eUVjcnVNeE9qNXRDZXVyamFIMnk2?=
 =?utf-8?B?R01vdi92SU1hU1lhM1dCNUs3NVlPZGhrbFM2MFh6OU1uSk5vaVFBcjE3UHFt?=
 =?utf-8?B?RWRFL0U4UVNqNjJVbFhlMnp1Ni9zcmpPc2w0bHpkQisxelRydGMyQThQMm44?=
 =?utf-8?B?c0QvbjVCL3dGMFgwVXRvOXVWaWdvY09lZnlXWnA0R2tpVStjZFV6NGYvMy9s?=
 =?utf-8?B?Q3J5d3NhY05obmpFY1M2Q1hYS0IxRUVUeTdJV2ZXYmZ5VWcwNTRzL21RSFRT?=
 =?utf-8?B?bkMyMytrOHI3WjQwMUFWb2szVDlPNlVzUmxQZ1Bicm5FTmtuSytQeU53dkJp?=
 =?utf-8?B?TlpXVzlUNmtScEdZb0VBdnZyRXVDekZMMDdTbDB4Z1RxSHhOUWp2Rmtra3BN?=
 =?utf-8?B?Q3pOU2xyamtHcHFkMXFMbFUwVE5OSlVKOWFUaWZyWHU2US9kSHZPdWh0QkV5?=
 =?utf-8?B?YjZBRUJHaHJFN2JkSERGNllwRlVzWlFhT3A3WVZMbE9kYlNtREJLZmFwRWdN?=
 =?utf-8?B?SWwxdDVyMnhMWUltRzRvOWhXaFlPWFdEZERBWjJ5QzJsN21ObHp0aS9ZWHJo?=
 =?utf-8?B?TzByTmRFTklCWmJEWkp1MFplZy9wekZFV3FTaExoSlNHKzVOSWppbEd1RzRi?=
 =?utf-8?B?cGlRY0hucTE4VEVvQ2hDSFNGUVN1SE1Vbmd2NllNb1hWVzJ6cTg1WTFSS1py?=
 =?utf-8?B?YS9IelVtbllGM1R6RUtFOE1IR3NHYXFuVDZBZU1GVDVyVFN3NHBxNk0zVVpk?=
 =?utf-8?B?WXpkcmFWQmp2V0VubkQ5VmkxbVFwZFc0cXRJRSt2d2JIWXRsbmtkVExMQXFk?=
 =?utf-8?B?aURNV1I2OFBqQmFNRU8rWU5qdmhDNDJPSFVVWVFLMW1PbGM5WncwZU16T1pL?=
 =?utf-8?B?TmxYMnViRUVKQXh3MXB2WTZJdXlWbmZmajRFUnVwVlRhK0lib00xcGVKYnJY?=
 =?utf-8?B?YTFLNjZZVEplblRZNHRkV3JvTjhNcGlQSjZzdHN1cDNCSENEMkpheWh6Mmw1?=
 =?utf-8?B?anFHM2wrdFNtV1p2ZTM3am51NE9FT0cvUktvazBPRStsS1lZejUzZTVnSkhp?=
 =?utf-8?B?blJvbndBMWRIRk9oMHlDNkF0Q0dmQkREUC9nVy9NUXFGS3F5UFBiOUxyaVBk?=
 =?utf-8?B?bWRDRDlPR3ZGRlZyTFI4TDZFNmQxTTBsaGtZT3RkMXlzd0h3SU5kcDNBbHd1?=
 =?utf-8?B?VVovV3J5Zmo0Y1BITFhpQVlaVHIwRGkyQnJYa2p0UkdUa0pXZGN4WkllKy9T?=
 =?utf-8?B?cUVXYW9mRzc1OWdCOW81bndXL3hWY2pvS0JRbnlDelBMbHlnWWxCNU1XWVVu?=
 =?utf-8?B?aDFZbHk0NlFpTTFSazQrOCtwZzV2bFBmeU5LMGUrMVdxWkFSbllYcTAvQ3No?=
 =?utf-8?B?elR1WkFBRzdyTThpeWRldVVZQngrN0hZQnd5MGFpWjhOTnlsdXUvN2dYcDUv?=
 =?utf-8?B?K2dCcGNqUjF1SVRaZThiMUpQV3F1aUIwNFNXdGlCVHl2UjM4akZRY0pCdktG?=
 =?utf-8?B?VjBUTDk2SDhNQ3RxOS91SGZLSytXSWliN2RrL3R2R1cwRFozK0JLODNxMnBO?=
 =?utf-8?B?Tml3UHNXRDFKQnkvOFMwS1NzNnFvTVphZUtkbnVUa3VUNFYrbFREdkdxUDI3?=
 =?utf-8?Q?lYk6B5k8zvVm6k8A+AbLDcPimcaOip34EiB4aqT?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e5dfb7b1-0ba4-4a50-90ca-08d929945568
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 09:12:11.7299
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xiDkn/10wwEUswIgi+MXlqVwYkVpZCGLK6+JZORL98h3+vbvFEK3mbx+YRvWnohnIfXFAxyMuM3+jU14I2LYhw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7024

On 07.06.2021 11:04, Juergen Gross wrote:
> Since 32-bit PV guests have been security de-supported when not running
> under the PV shim, the hypervisor is no longer configured to support
> such domains by default unless built as the PV shim.
> 
> Unfortunately libxenguest will then fail to save or restore any PV
> domain, as it tries to obtain the compat MFN list even for 64-bit
> guests.
> 
> Fix that by obtaining the compat MFN list only for 32-bit PV guests.
> 
> Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
> Signed-off-by: Juergen Gross <jgross@suse.com>

As this will do for the single present consumer of the field:
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Nevertheless I wonder ...

> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
> @@ -154,6 +154,7 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>      ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>  #else
>      /* 64 bit toolstacks need to ask Xen specially for it */
> +    if ( ctx->x86.pv.levels == 3 )
>      {
>          struct xen_machphys_mfn_list xmml = {
>              .max_extents = 1,

... whether the field wouldn't better be set to an always-invalid
value, e.g. ~0, in all other cases (either by pre-setting it or by
adding an "else" here).

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 09:18:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 09:18:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137765.255195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBOa-0004xH-6e; Mon, 07 Jun 2021 09:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137765.255195; Mon, 07 Jun 2021 09:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBOa-0004xA-2K; Mon, 07 Jun 2021 09:18:08 +0000
Received: by outflank-mailman (input) for mailman id 137765;
 Mon, 07 Jun 2021 09:18:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqBOZ-0004x4-1d
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 09:18:07 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75df9bcb-a70e-4571-ae7c-fd4b61d292ce;
 Mon, 07 Jun 2021 09:18:06 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5-T6Eo7SO0P92nw3SxhaRuTg-1;
 Mon, 07 Jun 2021 11:18:04 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6861.eurprd04.prod.outlook.com (2603:10a6:803:13c::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 09:18:02 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 09:18:02 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1P264CA0035.FRAP264.PROD.OUTLOOK.COM (2603:10a6:102:19f::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 09:18:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75df9bcb-a70e-4571-ae7c-fd4b61d292ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623057485;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wRkRbn3AbloXa7G557ju067kqLTR0OyT6peJ3xbSZ5I=;
	b=OrEaUFCh4jxBn2zftwWmSke2QCzhbXY8UwEyyEGhnAPWKcIROLPtQozax4PZe5sP9x4iOe
	bczpjqpVWyYHnjdK//EQ+82pSy2rF1Mo7PnEMdSGNgaQPPVcIxw46+lVaXFXKAp1qidk1z
	s4KQrz99B9IiR4HfURRGUQ3Q0eBM4t0=
X-MC-Unique: T6Eo7SO0P92nw3SxhaRuTg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hMxhNPszEnH6riyrM8/uG+P2jRMqiHDaM6rBAdbk32HroHs+Bz2dv03NGaH1gUlpZ9kfvjDyjEYT6l48m72YSpbUX4CzD0LQ3w/agwCakhzQNjMVB5egDJJNGuxlFOoh4H9AzGuqK9jT/BpXs0pzxYE9oLLeuDnqXc4EuS2aYSHvCm7tCv2q0iLOmcEBxlimQJuqFpS/hUNVOd9gpqmpiVdtOX1Ns/h5tCl4SU7v+wtwiqyTLO1oRut6yNaGGSKi9AbT/+xRN+MfiKoNFH5I6LtRRNG4fN7uWP34cVRC0cBezXdLu6tw6WGqTogH7mFurpHf1TJvoEmqz3yqHm0tPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BBrx1DxEgaiU2peoYV2TlBUX1tr7Kr/OMfPtof5nPnk=;
 b=MCmglBEWDydHVWD0iBlENqwiFfNkCs+09dyQMoCF/Q3tPu49jahrW64HXcGoxq8q0ZcCi9eD3mj9wmZ22VKJq/HmzLbB1GYCnBSjQh6/QsrPpJluYhP2ZXM0zGXOPjcftlo8Tcn3GYpmdukkg3XKNdJ0Vqs/EJ25bHsIzNseT417L6QjHJqosLyVRTeM+s+9/rXAYFWleVcxf79T25BQr6cGijinYPFKF8wo5fp52hWC73GPHL025RkrCxAUiBiBYuFBrOW9ULnmJvY2z7xKuQPTfWGreSFA6EM4D8D/CrElzbP3wG1gvysZYI0iTXm4sMcDJqBHPtKGGOu4nKme+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
To: George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
CC: xen-devel@lists.xenproject.org
References: <20210506124752.65844-1-george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ce26e02d-e784-4c07-3061-36f64ed7d287@suse.com>
Date: Mon, 7 Jun 2021 11:18:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210506124752.65844-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0035.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19f::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6add63d6-44c9-4830-9721-08d9299526b3
X-MS-TrafficTypeDiagnostic: VI1PR04MB6861:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB686178D7AC1F3E119CCD9E0EB3389@VI1PR04MB6861.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JBME0OUxWKnJflw0iYgV/B7Twrv1PlgLttn7oeNje6Ak5RibtoqtQu13Z+7nsiLxVuurzWBXecf7KmGn1G0HVQ3/LJT+O8PJM/5UH/n+/qwglQ59fmxQhnEi5vfGNHD1jLKrfCHhAvR6bAAwDXPijv4ZB3Ti0jN/B1Yy14K0l0cg6s1xRQb44vjSXU+g3OYrl5QhBF+S4ht3zEW9bqA1KI/FE0vubmkrH1aIOVi0iLfM8CYMrZWov1MYGHOXehXW23+aCJZ0ZWeVxqInjWqZjiBhpkDqfweVYJrGi8YtPOIl8j3v1r/wfvJVHRfDGnFvQTQPqvxPw97oiSz/gML/hVQe5k4zHDhy+0R/ifD12dBQJursMbhV+xKc48FRR+gxIBTRf0lPCKCmK6UZP4xUjOHo5oYq1mu4XS4Fp8PW6JOXMAT+85LJTytjDXZYlvRQwMZyGgE/++cCG4xR880Lb2CgphPG+Tpdt0Uo4cEISmsYKMQGt3/s2OW+f6M6kWbmEOGmByK9PlGPoTa5RccCv1YAF9vrQheB6L96odmyWH7L8zAWqEy84G4S4zvSFPvE/kt+h8jC3l+3PnJf/sZygGyVZrfkoLvD2IUOTdk6hBdVNiq0FCCFQ4XU5t7AH4CPY0E0PlPeVT40MK2e9n3uEDe3DmFj0QxfRjadhfcpNRS+vAaPWFFoWzP0lNbpvUfs
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(396003)(376002)(366004)(136003)(346002)(5660300002)(26005)(186003)(8676002)(956004)(16526019)(31696002)(2616005)(86362001)(8936002)(31686004)(53546011)(16576012)(110136005)(36756003)(2906002)(6486002)(4326008)(478600001)(316002)(66476007)(66946007)(66556008)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?U3ORTMftSWGLKHO3ugqOFKAJ6Gy8FOwupkY+E4Y9Adb+INjxacpb1KO+dIcu?=
 =?us-ascii?Q?W/6Vo9eIaLhxs37RmrgPuVL5Vxp48pOWi0kI8qwZivOfwVe21INJw+DCKMfX?=
 =?us-ascii?Q?ADzbsHbMU9L2K1sowRw9ws3NCYlVQ2cp5KVET/nE5hCQppWqeYtMcybyorYH?=
 =?us-ascii?Q?RpeZbMkqU6DliI90921I+8LHij2z2FQ7P9Ofw4km35wE6FHSSnw0DKnNPqaK?=
 =?us-ascii?Q?T0s3VrnHGWGf7qVziYKfdl3BAme5lRy8u5JJkhqyj1IVMjHHvJBeqtJ/U5k3?=
 =?us-ascii?Q?J4NI7cowBHHJaAvTgjbgj4QMuioDQXYcgOGYjfZWHLCFjt2DGnxbMzF6favH?=
 =?us-ascii?Q?oYkjMhXTpY85b0mAaTrNS/lE/Z87fC/lQW851zLXCi2db89ZzavpvYXPL2gq?=
 =?us-ascii?Q?RXlxYU1D3eRypP21ASW1O58Uyqx/4kTNZG1MTwdpQLGmfYp5MNkRifJsTmjw?=
 =?us-ascii?Q?SzOMg+8GRLISupPNcz4WVR6lHgKe8/l9t0xcK3ydgmplbzso1b0vqyVNZi5L?=
 =?us-ascii?Q?3HZbFq2YWQX5kzAsz0a31Q62fDMVEW6F4dr84vyKCJjVjsqJE5lUWYnjcTSW?=
 =?us-ascii?Q?2r7OVmBwhu9SscFitHk+77NHrHfC/uZC6C2BfbR/GsMSTOwY9spjR4j2f5ah?=
 =?us-ascii?Q?HXrSl0cJLv5dJwF2FM51RgfmpvLpIjX7/WZgIEZLwgbnuPhqQdWFxLYI30s1?=
 =?us-ascii?Q?ZY6KtIxlCC6ACgO9rFiGWRfx6OloVVkPtcgx28C5i8q/Np4Hp6HZTEr5yKuB?=
 =?us-ascii?Q?f6S+CugA41zLmTWTWTmGe6pg/Dv0EqTTuL4o3FyynkuEtr/G1/QvrsZC8lOW?=
 =?us-ascii?Q?4wfmsgdO0LkmVmElSWzd1TlqhR918zqHIiA1TBKTxGu1pSles435YP/q54Ut?=
 =?us-ascii?Q?auAy8ijPbvEYj9WAQDvbOFlGQq5EgwwFedWKSugeiLhKuwGynMFGRLlCgkwg?=
 =?us-ascii?Q?NHyv3KTcz9cj5ji+ZW9Wrz99q9/UTJ281BD+80eb6utFulE3F/0vbzvySJGv?=
 =?us-ascii?Q?Dh6VFlCNjZMzYR1Nc+1QJBKTRIjBc3ZMDEAnJNXVb33em0iugxjajbW1ug4h?=
 =?us-ascii?Q?MBUY+C4OFaB9pv/N9c72oZlH5SADEWfy3JscwPCdKp0Ogc6blbDP6AGwo3eM?=
 =?us-ascii?Q?1HLkNJO8APxoZSRzV06DwKnes1WsrEv/yYPsn2H+D2KhsT8R76BL8q2Yhbsy?=
 =?us-ascii?Q?vNu5JQbnnPKzcU5Qh4eWAs8RHP2g0PN7xaZ9NmP/zpgxFPIBCBNOCfdalI93?=
 =?us-ascii?Q?dNlCAMXARbUb3meBkF6n+BH6EzhIpZqWIcHa9FZAXRNZ/lbp5FoZYyIPrkkR?=
 =?us-ascii?Q?ccD8Y83vmQDscISid3KRaBmb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6add63d6-44c9-4830-9721-08d9299526b3
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 09:18:02.7144
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: huoeaxHypID3LqwSM4DeWT2CPH59v/4QC5YOx2dp0cgwpX7vuBDoX7cA2RLf91m1ErTCn9mM5eKsBfp75feBSg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6861

On 06.05.2021 14:47, George Dunlap wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -55,7 +55,7 @@ config PV
>  config PV32
>  	bool "Support for 32bit PV guests"
>  	depends on PV
> -	default y
> +	default PV_SHIM
>  	select COMPAT
>  	---help---
>  	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which

The toolstack side change that Jürgen has just sent will only address
the smoke test side of the resulting fallout. Unless steps get taken to
continue building Xen with PV32=y in osstest (despite the changed
default), I expect many (all?) of the test-amd64-i386-* jobs to fail
in a full flight, as a 32-bit Dom0 will then no longer be usable
either. I do recall the question having been raised in the past whether
32-bit Dom0 testing shouldn't be dropped at some point. Yet even then I
suppose some 32-bit DomU testing would want keeping, which would still
require a PV32=y hypervisor.

I guess we want to postpone backporting until that is sorted.

Jan
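Concretely, keeping the osstest hypervisor builds capable of running 32-bit PV guests would mean selecting the option explicitly instead of relying on the new default. A sketch of the relevant xen/.config fragment (how osstest would actually carry such a setting is exactly the open question above):

```
# Keep 32-bit PV guest (and Dom0) support despite the new `default PV_SHIM`
CONFIG_PV=y
CONFIG_PV32=y
```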



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 09:46:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 09:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137776.255212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBqD-00087m-Ia; Mon, 07 Jun 2021 09:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137776.255212; Mon, 07 Jun 2021 09:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqBqD-00087f-ED; Mon, 07 Jun 2021 09:46:41 +0000
Received: by outflank-mailman (input) for mailman id 137776;
 Mon, 07 Jun 2021 09:46:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rwXN=LB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lqBqA-00087Z-TB
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 09:46:39 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28d7cab5-3554-4e8e-ad4d-9376360456ae;
 Mon, 07 Jun 2021 09:46:37 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx579kUJcY
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 7 Jun 2021 11:46:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28d7cab5-3554-4e8e-ad4d-9376360456ae
ARC-Seal: i=1; a=rsa-sha256; t=1623059191; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=OMEeYtL3Y61bqYu/Yq3aMH0KZwPlQd+zkkVc8SH1nOvyPVlxBdKvIRUvoKlcpypaSE
    lMWPBHJilGMrJcuyPjAX1lW9KTaQQr4xUgIjVFUFDE9rSCnD5Z9iyd2xd/k28bv/16aY
    IX4R/syriG/yiyEcXyf2UwkhThSEPK52aWXPvBivH1s9bXftrcl2+qA/tPMi0Si4v3tV
    iGOEPMqtA+J9x1vCTacsY7G0JHbam5zPstuS6oC2K8yNcSqjlsksm0UVTJte1q5rGCbR
    zkp7HxNaYEA93EW5m5KvEcpRIudMxEKgH/aCTzZA7bqiHBcwnf2d05xLSGmML9uWhENj
    oEJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623059191;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=x5TqZMYO2YaHiqlYG8OcTHXd7bXqox6fkCpzYX9i0gs=;
    b=Ax8fJ/IeOiiZ1M96nUsDUKwmfB00qXswXh8mgDJ1s27jy+Vcu3gt4+NoIoHYulzIaF
    hfkxB7kD55NJ1OKoycMot8sv+VbFgMjlJIss4Fjqw68s2NY5rxrkP8zyjnR173ErZJ+Y
    zAFr+WW7c6srBV0AZ+AtUEfstna26xGnaS7EFk2AGoZ80IHdbK/74P+O57WNqtLqqc01
    8ztHzbYKaFPu7OhhxoZ7ksP4SjJ+V6fTt8fsTiHhtle4yjBmNkDsKjEAZcgDmJHB0LP4
    FNC4I7oe20D1Aop0H8lVBxWAZOvo+OK6viDlKMHO2jbxyphKG2ZEe6zF8eESSP5qpI+u
    pJfA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623059191;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=x5TqZMYO2YaHiqlYG8OcTHXd7bXqox6fkCpzYX9i0gs=;
    b=bA8PPA1UQN0xkggZebSI8dqg5liIi/2gL8Vd7+wk04C+r7c0RWihSI076WZSJG+1iH
    FyQO7TpWgvDqWdDDFegghokLv99IhtAQ2xnPVyQqk6Dc/QAeDtcvUySa6OIwfh9UBbGr
    MVnYFhGf1hr6lwXQxWnuDdT66anzIMbvdGn6BEoHA/jT7Noc26NHgyjyluqwlzMiEn7r
    HM0jAnLIwkziOe9zCVLMYf7QfQ4DtcIQUGbkbQtFL/xYXWCG5sLejo+vNH3TqY/5Wi9w
    CW8vUAt0poLuMj9yZ7SiwWO5xDeyAiNlUPh15vz6tysH5i1al92b6TuwOe4lmK1sdsJB
    q+sg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Mon, 7 Jun 2021 11:46:13 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
Message-ID: <20210607114613.220ad144.olaf@aepfle.de>
In-Reply-To: <6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-5-olaf@aepfle.de>
	<23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
	<20210602125710.0607a985.olaf@aepfle.de>
	<6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/IfNm_o/I0oK5jv8+7edNemt";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/IfNm_o/I0oK5jv8+7edNemt
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 13:41:02 +0200
Juergen Gross <jgross@suse.com> wrote:

> Shouldn't you check for zero length iovec elements as in the
> writev_exact() case then?

I will double check if this is a hard requirement.

Olaf

--Sig_/IfNm_o/I0oK5jv8+7edNemt
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC96uUACgkQ86SN7mm1
DoAW9w/8DxBbPQXyrF5ZDekHCvSsyqWFceX4aIAaQNbWgnGigt23x8KYATL+Ak2i
6DKxgkPhC6FzpPy2o63ryvCq1GDIMeoLJuWPh1VcKqMgO+ns6wWId7W/HJobFrmI
SSw2/z6cq+b7SKWFcn824EmKGavv/x2jlVmPhYc5ZeVKe31Gv78Shs1J/b8yJoG8
+jo4SFhhg+g3rKp5pVE6eejbfILRo98RNKnTEOXmdy4P5lOv/1Bm4O/OTDoHjEAA
KH+Gq7R5A7bag9SxLLzbAYtiTUfy1MmsH6AfzorD8bvVLbICXX9bTcpLIvTh2hxc
iUUUjZCk9wW01a9WuP70rkLrPXm0vIYJizOfhu3jl1JBdkNMDq6B4gAKSvrreTJt
cIjE/01iITUkNMxwh2IyVZyEUg0NNCxH/rkSY+rU02S6+Y4GGy+n/1M8xIlEtJ1l
qH7w4BGu0rfsszdUefGPidmU8yfBxreG5ZV5SHY7HwKMlWFZeX87xQ3C98RGSMAt
PR4/bkfB8VeWANnNU/Zx9aHTq6sQUeSpyA+M2/dbbjJk3XJb5CuiUGjmM1hhck2t
C7vtHDE7x8ahOwy/QhNHNEE0O+Tcic1oAgYUq9GGbMaYVyS5gebUBKI+w2VSGWEI
ZSkiEp+EQB/Dxcq2uFMLrCNApYv+JY6Pv7YNdQuTHYz3cD3s/O4=
=JmoN
-----END PGP SIGNATURE-----

--Sig_/IfNm_o/I0oK5jv8+7edNemt--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 10:12:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 10:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137788.255229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCFB-0002sh-OB; Mon, 07 Jun 2021 10:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137788.255229; Mon, 07 Jun 2021 10:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCFB-0002sa-Kz; Mon, 07 Jun 2021 10:12:29 +0000
Received: by outflank-mailman (input) for mailman id 137788;
 Mon, 07 Jun 2021 10:12:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rwXN=LB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lqCFA-0002sU-8Z
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 10:12:28 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.52])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 30027ee2-e03c-4860-8728-2e4c2bf19c78;
 Mon, 07 Jun 2021 10:12:27 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx57ACLJmQ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 7 Jun 2021 12:12:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30027ee2-e03c-4860-8728-2e4c2bf19c78
ARC-Seal: i=1; a=rsa-sha256; t=1623060742; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=OLr/5ShlgeeBLvCozNg2MV9JHyptXYfUADxbxBV/v9itscU2iGIpMQX9g7XHJN/kCZ
    11LrEmJoQlMTjuOE/MxCFICxLP3aNQoMECQsad7nxt+w085Tq67O3m6HUkrB4gC77/iJ
    2nniuORfQ+bl9IWTYe6tX4Iaga2PllPQ1I9uUdj1F5Tx4680jGkViiRKc72q0shvegaf
    8D1XaYaRd/nu4NzPXUbLgy+XLfioperULUiXWUxGVVaEOFpBqZnEYqmWq3f00RdRvG4p
    8k6kfvgqedTYY0Z1BjB5mDkUS18W1u/awUCjrlR3yWZV3hn6pzaW+LxN1PdAzSZ6Eqom
    KvpQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623060742;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=nj/xaU8bzyMpc1TxtWyc6G3ndddmN0N8KicObbiFYYc=;
    b=A5vaMQnOGhfOhxc1c+24O5tTh/D6LeiErwgIhj0qAy0pwrWZPXn4Zr4gk4Z0rENjOo
    0Wl/+E9MxcR5Nkpo5s6t17lt5vOmGr/xRbOrTyKVp4PZT5R1B2nBzzeCohfuu3d4ag5B
    sg+jQtT7FrYCh1eeswYeFIujDFlNwjgzcqDRGKrThPvT838Us5PpSOL6YifW++JXaFSZ
    zfloxqdPbo5d3LSnK6Dwh/wM5bOElRjnTOEX7t8lZUj/bxMNAE2GQObosfwzqhhBt71m
    /OMqXQM6qGynsd06QES3PFgbicQdcKPb16jKjKmxlyQC1ZFRu3VH/+LP5cqEC40zKSTU
    ta0w==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623060742;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=nj/xaU8bzyMpc1TxtWyc6G3ndddmN0N8KicObbiFYYc=;
    b=PAQSTRR9sR/XFTEzIB9STLobwUHEA5ZGLowGj4Op1zXI9ob6J6za2Od8sbQbVTyKNf
    lKhrNQAHNNKN9BlUOzqgt8iJDcndYRNgYlsS0E/XVbigDLV/daD7w3blMSpS415TLI0q
    f/UswpOQLVFWYCOojDYh14if0GnJf1XRcPw9NqjAhwsSq9X0sKBt9tznd10hotp595Xh
    /x89fg3qak5YwrMWBIa8v13HxBIvSwL25vy5f2Z3pGTqDgjtJbKUTyhF87sRS2ZxLrBu
    CKrOAO9rZqXDhR4jIHICT2ENovSZ/7zq9cAVEJds13A6GCuj0lf36+yliwjSSdKJAscs
    BSMA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Mon, 7 Jun 2021 12:12:14 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data
 pfns in migration stream
Message-ID: <20210607121214.3119d315.olaf@aepfle.de>
In-Reply-To: <1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-8-olaf@aepfle.de>
	<9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
	<20210602132114.6fd9ee87.olaf@aepfle.de>
	<1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/a/28k5wF5whY5aFRaErkSBU";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/a/28k5wF5whY5aFRaErkSBU
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 14:03:40 +0200,
Juergen Gross <jgross@suse.com> wrote:

> Maybe XEN_DOMCTL_PFINFO_XALLOC should no longer be supported, i.e.
> a stream containing it should be rejected? This might be a
> worthwhile modification of patch 5.

If there is a desire to actively break incoming domUs from pre-4.6 hosts, this should probably be done in a separate change.

I have not checked if such migration works today with unmodified xen.git#staging.

Olaf

--Sig_/a/28k5wF5whY5aFRaErkSBU
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC98P4ACgkQ86SN7mm1
DoAaLQ//dDSVmXiWHoEiwlG0LDIC0ts3T1MEVnCjq13sJ1GTSgd5O18iMM4R2jcO
5nZ3ii2cMs/x942eMPOD2OXcr+fLQAY1wgwUrBoj1G1R/YetvJCwBsV3pYFvsGyZ
n+d/RWvIcg1JRd4cBXMHouLVCbpILp0nEmWVhTjZ88qpYL8VShYCTelCUAjpy47l
axpzf6rzzGwOJrJoSOL1+szD9c/46jLJBamrLxJ28izDk6OtZHH7wU5IVKr1o7gW
Ry5u3g2Rm62M3jXDSobYRRF+kmeus6p9WLxItSniErA17Fxz0ZOWwH0DZJT6EboA
hSNMLrk2lPS0uSCIQciEGGjBqIgoI+MYbLijx6RJkZU5krC6NAwgqtyjG2zPr4uc
SXg3yW6X2IUdAFHfF8f8u4cngnnuTfwBiTZAZhh97o7MzhanuZl7zZxwN4KCekV1
RhI/5SNg7VvIjQrD3/+v8WebkwiqeQ/Q89fA5jL7qdNEKKx8D2g4JEctHAQM4ZFa
TCss0CZq0BYtlmDj3wXi70d5pWeGikbSza7fO6YnxTt4Ftpf0HeZ+chO2k07pMmr
AyusAeSWstpaFTAJUHLu04NqAdQmV6KSwY2fA/6+K+3vfY/8kCW8FqHNOSGtnf2m
5dsVsEN4Oe2egcx/6NFHkp+aCSFcssE2i8e2M4hNIuqvnibXMpE=
=E/Rc
-----END PGP SIGNATURE-----

--Sig_/a/28k5wF5whY5aFRaErkSBU--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 10:22:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 10:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137795.255240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCOv-0004M5-Ml; Mon, 07 Jun 2021 10:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137795.255240; Mon, 07 Jun 2021 10:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCOv-0004Ly-Is; Mon, 07 Jun 2021 10:22:33 +0000
Received: by outflank-mailman (input) for mailman id 137795;
 Mon, 07 Jun 2021 10:22:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqCOt-0004Ls-Ub
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 10:22:31 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a94c5b9-18c8-40fe-848b-59b3e251b7ca;
 Mon, 07 Jun 2021 10:22:29 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D7FB121A82;
 Mon,  7 Jun 2021 10:22:28 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id ABDC1118DD;
 Mon,  7 Jun 2021 10:22:28 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id iLHWKGTzvWA3MgAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 10:22:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a94c5b9-18c8-40fe-848b-59b3e251b7ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623061348; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YA0TCfr7yyuDvKIkc0PlRKhwillcAmnJKdpen7tfIEE=;
	b=HgTLYRvvv1HqJXdRHovs8P4jqXqNZK2eOdY/WcBR7CibgT+c7Cp5S+GxewQdEZeSxHW/hS
	bC+rEGB3Gp3Tst9fPOASqGd4GePHdtnDS4XQVxMLkgx94AU5QzznFRqLmUbPdVUTfH6yFV
	x2iNDEAFu2zlOPbRvZ3le6RG+WoNjmM=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623061348; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YA0TCfr7yyuDvKIkc0PlRKhwillcAmnJKdpen7tfIEE=;
	b=HgTLYRvvv1HqJXdRHovs8P4jqXqNZK2eOdY/WcBR7CibgT+c7Cp5S+GxewQdEZeSxHW/hS
	bC+rEGB3Gp3Tst9fPOASqGd4GePHdtnDS4XQVxMLkgx94AU5QzznFRqLmUbPdVUTfH6yFV
	x2iNDEAFu2zlOPbRvZ3le6RG+WoNjmM=
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
 <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
 <20210602132114.6fd9ee87.olaf@aepfle.de>
 <1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
 <20210607121214.3119d315.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <f124c2a3-94a6-5a4c-cfce-ff1abdcd42b2@suse.com>
Date: Mon, 7 Jun 2021 12:22:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210607121214.3119d315.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="5xf7Uo7xbE3PZjPKp5VohteNPG8z1VGfC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--5xf7Uo7xbE3PZjPKp5VohteNPG8z1VGfC
Content-Type: multipart/mixed; boundary="pznvlZ8uE57IBHZnqF5MehJ16zOrLAIZG";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <f124c2a3-94a6-5a4c-cfce-ff1abdcd42b2@suse.com>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data pfns
 in migration stream
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-8-olaf@aepfle.de>
 <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
 <20210602132114.6fd9ee87.olaf@aepfle.de>
 <1e0a9a6d-b3b3-29e7-43dc-453711e666a7@suse.com>
 <20210607121214.3119d315.olaf@aepfle.de>
In-Reply-To: <20210607121214.3119d315.olaf@aepfle.de>

--pznvlZ8uE57IBHZnqF5MehJ16zOrLAIZG
Content-Type: multipart/mixed;
 boundary="------------E9775A97C2EC0A2CC59A2367"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E9775A97C2EC0A2CC59A2367
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 07.06.21 12:12, Olaf Hering wrote:
> On Wed, 2 Jun 2021 14:03:40 +0200,
> Juergen Gross <jgross@suse.com> wrote:
>=20
>> Maybe XEN_DOMCTL_PFINFO_XALLOC should no longer be supported, i.e.
>> a stream containing it should be rejected? This might be a
>> worthwhile modification of patch 5.
>=20
> If there is a desire to actively break incoming domUs from pre-4.6 hosts, this should probably be done in a separate change.
>=20
> I have not checked if such migration works today with unmodified xen.git#staging.

At least with your patch this might change now, as you are modifying
the behavior in case of XEN_DOMCTL_PFINFO_XALLOC.

Officially pre-4.6 hosts are not supported regarding live migration. So
your patch should be fine in this regard. OTOH I'd rather have a clear
failure than possibly weird behavior in case a stream contains a record
with XEN_DOMCTL_PFINFO_XALLOC.


Juergen
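The "clear failure" option could be sketched as a simple validity check on
the type nibble of each incoming pfn-info word. The sketch below uses the
domctl-style representation (type in the top nibble, as in Xen's public
domctl.h); the function name and the idea of running it per record are
hypothetical, not the actual migration-stream code:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Type field layout as in xen/include/public/domctl.h. */
#define XEN_DOMCTL_PFINFO_LTAB_SHIFT  28
#define XEN_DOMCTL_PFINFO_LTAB_MASK   (0xfU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_XALLOC      (0xeU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)

/*
 * Hypothetical sketch: reject a record outright if any pfn carries the
 * obsolete XALLOC type (only produced by pre-4.6 senders), instead of
 * letting it reach the page-handling code with unclear semantics.
 */
static int check_pfn_types(const uint32_t *pfns, unsigned int count)
{
    for (unsigned int i = 0; i < count; i++) {
        uint32_t type = pfns[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;

        if (type == XEN_DOMCTL_PFINFO_XALLOC)
            return -EINVAL;   /* clear failure, not weird behavior */
    }
    return 0;
}
```

Running such a check before any page data is consumed would turn a stream
from a pre-4.6 sender into an explicit restore error rather than
unpredictable behavior later on.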

--------------E9775A97C2EC0A2CC59A2367
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E9775A97C2EC0A2CC59A2367--

--pznvlZ8uE57IBHZnqF5MehJ16zOrLAIZG--

--5xf7Uo7xbE3PZjPKp5VohteNPG8z1VGfC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC982QFAwAAAAAACgkQsN6d1ii/Ey/s
7AgAhk3ZOl2ukINKIGZtNzg1au8X3nkyWelQLwz6YncAs7+VaN2Jf26DValGvpoZ7h9k4dhIqrhi
oSQ31MofjWq2Fg3O2d5bDMBgGD1ccKSzohwOg+QTbQEe0Yw+K18bIZWTrheBvGBclczfb35dxmJ3
dfgBgjXsPRixNa9KETq91TNpwPaTEY+ofb5FuU2tb7fxGQivtaitncksywG+etINHLJgazeKRPEc
qaWOsIXXVj5nQyRJe7ikL+hhSxwCPpShE2911G2hWIHge2vlPGGeEX6NoucmSk+Ux6Ukdn1828wN
mOE+OJYZX0QD/9JJqFCia14lP7SB4onyUIsSl+dAAw==
=gpdx
-----END PGP SIGNATURE-----

--5xf7Uo7xbE3PZjPKp5VohteNPG8z1VGfC--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 10:27:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 10:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137802.255251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCTz-00054Y-9p; Mon, 07 Jun 2021 10:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137802.255251; Mon, 07 Jun 2021 10:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCTz-00054R-6Y; Mon, 07 Jun 2021 10:27:47 +0000
Received: by outflank-mailman (input) for mailman id 137802;
 Mon, 07 Jun 2021 10:27:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqCTy-00054L-JD
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 10:27:46 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a915068-90d6-4975-9e1f-0da3c125cd7a;
 Mon, 07 Jun 2021 10:27:45 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2175.outbound.protection.outlook.com [104.47.17.175])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-3-1W1l-NOQOIalCK4o3rJGpw-1; Mon, 07 Jun 2021 12:27:43 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5743.eurprd04.prod.outlook.com (2603:10a6:803:e0::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.26; Mon, 7 Jun
 2021 10:27:42 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 10:27:42 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR10CA0016.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22 via Frontend
 Transport; Mon, 7 Jun 2021 10:27:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a915068-90d6-4975-9e1f-0da3c125cd7a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623061664;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dKwAgnJR23plKmVqZD3sWQzVp6oTZLsCnN8BZvndX7w=;
	b=b/zNU5z12BgNgKvPTppHXYXtq7LTgBeF+Q9LUizhbgUxUp0cC1lx6EfmY1EDd1d2xvX8fc
	ytQMh7Klg60YtkrVVk18eqhL4y6E0xMQbzbMGwR4TBymRIdpJdms12il8DbZXatiOZ7ALa
	bfS9BK+fq4rF0dMuoH2d9G7AvUEvrT0=
X-MC-Unique: 1W1l-NOQOIalCK4o3rJGpw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JXn0+5rdSDYCEinEqrVn10+s7O9Z9ktrLCqn6TBWI491yTyzRJt1xoAObAzeujTsoDqXJsUm2J7FMbu7osjij2Wy+ihFCg1XzN0Oo2mTNVa2VtdwE0xeaLOv4xuQ6IqF4kPRi6ePPCVFdXFdRHLabQefnPY3PTCOFadLbwMbBTD8MYIttFOU+Zrz69L8Ykj0P/nYB1s3p/TfFZKgH/e0Hu25phapZ/jX44UqLsQQSSh5GeCQnlJXrbLsaOff5G2A8CuJ599GUSsiZxLBizfgy7UmuAT7ffXXsVDiKLNtEz5dq9KuBWLkCMBc8L1UlbaHU0wjmqEgCsYpv3Pt8vpFTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dKwAgnJR23plKmVqZD3sWQzVp6oTZLsCnN8BZvndX7w=;
 b=dZjLxUIHtkA9FYUt79nhvdAu3SBvxH4rY+PuPLDEAfSAkL+CM9mMk5CZJsfrKzVbCTdYKbpAePRpixd/aNEh6MTvkIRkGH/RMlGka5zNLzlofSJQLKmxAn7ecYG/lnmkbo6Vd/l3nhGDoDZEeUhE4tExBbzMO0jKH6SPshlaMS7PJckNYsBdwhECcPtJ1DC7Us3rI+lO9Ix5XjpY8VJWHN4CH4UFnG1IgnHITAAvTJGKjJ8wjkL+TBc8G4g39SzovRpOzcKdqLWMkque/NLLjGWrxg8eGegjKePBzU2hg/Xktr13oRqYYkHzTY2vN7i/jZYp28NdT3p0koNB7LnENQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: Ping: [PATCH] docs: release-technician-checklist: update to leaf
 tree version pinning
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adc2ba4f-81b9-9841-a53c-8c52278ae62a@suse.com>
 <f60da828-ed27-abaf-0301-559cfe017201@suse.com>
Message-ID: <41f6ae32-4704-a580-f103-9e1e9e51330f@suse.com>
Date: Mon, 7 Jun 2021 12:27:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <f60da828-ed27-abaf-0301-559cfe017201@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR10CA0016.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::26) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c78ef54f-a585-4925-976a-08d9299ee22f
X-MS-TrafficTypeDiagnostic: VI1PR04MB5743:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB57434A41DB0157DC52416AC8B3389@VI1PR04MB5743.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RMqiCsU3jAKw3Qta92pHJ4ACJ3zK7A+X+dB03uidV1hpYnSl4ZRaPvn4uDwaOhX/ZAZy1PhQa13zFDpqILa0kmpvrdN8CsPg/5vh/EqxXEM4x4IRLbNkX4S9yk+Rbl8ln5URaO8JLOuMfpJPuxt9Rzfeg7SF2GMZzM7HjGDMYrUYaCALuFbdZ2oAnkx4VW8gErvY3O/ULy2D/P9tb5F5XsTnqlm5UDyU9WTth9VrALyqiPX3oau80oW69HTS059jCHX2EvrpHJFhTgdILfPnl4lE0QnnPlKiAFD76sWzkTPpzaiItRi7sIxz83dUgTyVP2ToGvcRNrGBdEnLsr4a9POCdkUp7OWFF3cZuDk5KZ8m4pHal/ywU/E+7qo4oM16tuL9iMfZUeuLfNidq8QQL2e6R0uyq9VOzUp7trMUxu/OernTEcu45QSTZFX5sVQaeG/pFD4dSG6DPFxE9OQYhXCmIInDSa005xrZfyERCErFW1mU3bORbHn7bM0eaZbPH5ukVeoMDMTKJrAiTTOI+OoXUAIfno4CBWRTjQTXoTwmy+PfiBTqpHxVEuPtR7MMOupwVd/ca7X3HC7wrXTWyYoRnF66X/6H1av3ozibjEjnrpOGnVRaaw5aSi1IvBiKimlvjcfIE5pT0j6h0NwibIe6O0bf3ejN7G1dxl2o6Q58wUuR9yKTzH2wIQjbJBAZ
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(376002)(396003)(39860400002)(366004)(16576012)(38100700002)(86362001)(956004)(186003)(8936002)(2906002)(6916009)(4744005)(83380400001)(478600001)(31696002)(54906003)(53546011)(36756003)(316002)(8676002)(26005)(4326008)(66556008)(66946007)(66476007)(31686004)(15650500001)(5660300002)(16526019)(6486002)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?ZjAwNlIvOFhoclc2QUM0Njl4MENmeEdrdjJXcXhEYkxjZ3JldkhMTXRNQ2lD?=
 =?utf-8?B?Rlk4SjYrQ2lQVUtlUkJONzdYRnpNcjZIckgrbmtTV1VnM25CdkN5Y21KSHBZ?=
 =?utf-8?B?anZ3YjRKaDFSRTdkaFlSaEc0SHJVY2tUbVA2djZlV3Jtc2tRN201ZlRKSWxB?=
 =?utf-8?B?dXlseS9WRVM5WjNadFdYNGVjVU1GQnZFb2NqcU9YMURDTTc2dW9oN0Y5MlB4?=
 =?utf-8?B?dzdDV081ZWk4WU5zbmlLQXpKZS91UnovV01XSW0wcDZOWkpvZ3lZY1FETFpi?=
 =?utf-8?B?b05RcmxOcEFsOWlOV2ptb3dBY09XZkFlbUliU0lwNlNBNjh0THhnMllhc0wv?=
 =?utf-8?B?NEhCZktTbFNUcUZWUDRud293SlZIaXFMd2l2cDlWc3pQNnlTNHphTWk2ci9F?=
 =?utf-8?B?VGV6Kzg0a3ZPdE5QT1YySm52a0NXdmVHZHJpQ1FEME42eG5KYnV0UnM3SzlX?=
 =?utf-8?B?eXdNV1RxSSsxRFoyRW9BdkJsZU1MYkJNYm02RC9hb3dQSGtxZHBlcy84YVJW?=
 =?utf-8?B?aHJCNVpRN0czQXZQdllpcmlHOVJDR2xQZnZBSzdZT29BVXp1VmM5aURsZ2pp?=
 =?utf-8?B?T2dhQzd3dmRuT1doT3ZMc1dhTmJZeU1wQ3NrY09uN3g4azNjeGk2WTZPMFZZ?=
 =?utf-8?B?NUxLVXZianNLYmdZTHhHempZSDk5K201TC94V0VuVk1hVTV3alp1T1JzdXMr?=
 =?utf-8?B?TFp4RnFFcmpCUjJSNExTMERTNyt1QythY0t6UWlYNyszaWp0S0sveFYxKzdC?=
 =?utf-8?B?WlZNV3RpSDhMQlFTY2hjSnZpTnhqVmMrSFhnVjVSSi9xQ2pvN1V3TGNNQnRh?=
 =?utf-8?B?bWZlSHVJMVpRZm82THRKa1FQZUtvT1JXU1ptSkRmS3hWUWxmSWJGUkxLVFQ2?=
 =?utf-8?B?OExrZWZnQ3dHMFdvcUZiVHNEWXdJNDBDZEJGOXZIL21TZUVyN0FYaTJXd2sz?=
 =?utf-8?B?eXdrSEU1MEQrdjV4OE9lR0QwS2RxTW8reHNZVTArQkZWR2I1b1poaHZLTEhY?=
 =?utf-8?B?WmRxTWxQdkFMaEphRnVtVEJzRHgxT2lIVkU5Mlg5NmlCSDhlSjFGcjhMV1dG?=
 =?utf-8?B?ODAyUG1Xc2pqSDJwbkZlcnp6eHNCQVpzMHZNMi9oSTl5UHdnaWhYeWFEMnll?=
 =?utf-8?B?VHU3MXdiNW9iNkNBOHR3KzhPNFZyTGY0aWdVcjZTZXNjS3FnMU1MaFQycU9J?=
 =?utf-8?B?Z0pybDRGQm1TMkhoZHA1aHhEOWhpTk83UGNyRHpXQXA0SHhpWmVwTzRKZ1pZ?=
 =?utf-8?B?ZzE3WVZsY2FYWlVyN3UxdHhRRFBabkVoYUFZckZlU1VxbXJlT1FhTlI3R3RF?=
 =?utf-8?B?VUhLeWxmbWhhZ3p6OUlEWlRiUnE0Zmhqc2pKWkpFaUgrWXFlMnk5dExidFpa?=
 =?utf-8?B?dHFDdVBaYXBGOGhsYnFzbi9yNGlwOURHbEYxUEJyeVhkS0JDcFlnR05iU2xE?=
 =?utf-8?B?alZyaEtqSlI4a2hOL0F5V0ZXQnpscE9jZWlxODY2OGpUdk0rQUEwMVZLMXZx?=
 =?utf-8?B?b1JwUHhJQkNTbzZQVTZZQy9Dd3psVndYS0NVUE1RRjZJL0laVWxWWmF2ckNs?=
 =?utf-8?B?N2VUdUJxOHArREh2MW05eW1hLzExWmg5bVFxL1NNVDZsb1FoNXY3YUQ0c0xS?=
 =?utf-8?B?aFNsYTNaR2QvdEZnY1hYbU1ZeGhXZmpMcDZxUDB1cUJBTlBsVGRmS2E2aFZr?=
 =?utf-8?B?L3U0UTF4SkwvVE5IcmVXYndOUDE0d0tTSU4xZ21HNEdXSXhTS24ra2RZZHNp?=
 =?utf-8?Q?tDF2zLzeN4QpChsUt6r71j/wVtMfeT2scF9BhYI?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c78ef54f-a585-4925-976a-08d9299ee22f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 10:27:42.7046
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CxcFr7iAiGghyK+/3UaJ671FaI/MSY2Y7dc+ps10U+yZc1BHDzXbzx3dFauU5X+NaXnrK+Sq2f7xrzYxH/n01w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5743

On 27.04.2021 18:01, Jan Beulich wrote:
> On 07.04.2021 11:56, Jan Beulich wrote:
>> Our releases look to flip-flop between keeping or discarding the date
>> and title of the referenced qemu-trad commit. I think with the hash
>> replaced by a tag, the commit's date and title would better also be
>> purged.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I've been noticing this inconsistency repeatedly because I simply re-use
>> version update patches for the stable trees - the context mismatch
>> results in a such updated patch to then not apply, due to said flip-
>> flopping. For 4.15 I'm inclined to drop the two offending lines perhaps
>> while updating the version to 4.15.1-pre.
> 
> Ian (and others), any thoughts here either way?

If I don't hear back by Wednesday, I guess I'll take the lazy consensus
route and put this in.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 10:44:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 10:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137809.255262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCkP-0007PS-PL; Mon, 07 Jun 2021 10:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137809.255262; Mon, 07 Jun 2021 10:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqCkP-0007PL-Lf; Mon, 07 Jun 2021 10:44:45 +0000
Received: by outflank-mailman (input) for mailman id 137809;
 Mon, 07 Jun 2021 10:44:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqCkO-0007PB-84; Mon, 07 Jun 2021 10:44:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqCkO-00069l-01; Mon, 07 Jun 2021 10:44:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqCkN-00041T-PQ; Mon, 07 Jun 2021 10:44:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqCkN-0000YY-Ou; Mon, 07 Jun 2021 10:44:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wRWLOogHU0Yx3ZrTH4T6qenU6FcnyMJUQn6gdGWbrNk=; b=Ffyh+z+itwSIvfR/Mk73Rn1ApI
	zVcVUkNJsPqq9VUddeP+1PeVDj+GXnX8EGZYUtVg+v7l71wUyhRUzJbzNl5jRaaajtWb+RsAyKoXb
	A6giDWfdNFwrzly3iQaaq+ZYRJUl+WTYGgrMJonTR84hpkfMcRhX85CNH6uXszuAHW1w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162481-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162481: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=619968a6801491ece52cc6c20fe9a018e0d24d7a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 10:44:43 +0000

flight 162481 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162481/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              619968a6801491ece52cc6c20fe9a018e0d24d7a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  332 days
Failing since        151818  2020-07-11 04:18:52 Z  331 days  324 attempts
Testing same since   162390  2021-06-05 04:20:07 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60472 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:02:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137820.255279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqD1I-0001IS-Fl; Mon, 07 Jun 2021 11:02:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137820.255279; Mon, 07 Jun 2021 11:02:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqD1I-0001IL-AL; Mon, 07 Jun 2021 11:02:12 +0000
Received: by outflank-mailman (input) for mailman id 137820;
 Mon, 07 Jun 2021 11:02:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Idvh=LB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqD1G-0001IF-53
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:02:10 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ced8439-481e-4ded-a2fe-3d12592aaf58;
 Mon, 07 Jun 2021 11:02:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ced8439-481e-4ded-a2fe-3d12592aaf58
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623063728;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=nbNCw5kmhJpUcZUioKqMrVEUFjE5kfV9TfbEsYxh9SE=;
  b=NWwT+WO5zgIRXTzkY0e5PwMNUjdF+gr8hZmmRPGgYErFGTpQJyKqZTKF
   R5pqTfdeLSsgSl1e/415thFjePTcfksmhG8JsOQgeg0msc+nnMDDAiSYa
   gLlMIPIwd7jZxordoVlgvrn+WvwjKCM/dcHg8/LmE0+quSVBHI9dqPkUp
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45588431
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,254,1616472000"; 
   d="scan'208";a="45588431"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NR59DpeaHrH7qazNbVThGpVMkOub7m0BcWERUucNxF37SRWkldr9KWNvJJ2TuTWgFEeh3NNAivgzKyCPBz40qv1pC9J9inz5nuSC1F/tv6E2m4bEuYcj0esdyuKrFP6Go0ms4gqqt67NqguRnK1Q2LUDc8+fu5Iu3ByMS9gWI+Au/ibEXR4pIb9paaOBkKumEdRjcO2gJ5F0XZeZVPpE1ZZVznJimT4jrdDt1zulwlaU6la0E3GJ6Tl94ewVGEzgJu8JN2TqaLDbX1Iw5eIPH7Phgoq0YO3tdtcwOo4vBgvuJ/0VKih8zvv4ua/O4LC6LL038kOx7Fc+v6h1E2U1hQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ksN3+G7bltMaVrUG7fsa6caqJWLsAeChIaYRABmDk8M=;
 b=F4yKkzNPXWuu7vr0MdsuFJidnSUuSemhs0g7WTakf59Zi+wGEmBCr484LrHzxKGp3pR4Oc3qgemGjJWuUB4wkLysPvwrD2EpHn8Qi9D4O8Uqbx9Mg0sZQP16DkOBafOX6ZHvUpdOQq7lkoolXN6gtyLVSt5LSd5/pLLM4ajgdYggXnqAZUOemcHy5VHLoa+qLZ5uUr69ACf558e+xgnKJadzuplUZKcRe9utZV9c1OZdw6aEX8DZBvi2i/ze2G6x31dXZMzauVRwDfVn69bjoKWQFY1zaveln/07RgHBlnRB0iGFV3D6femdpek+BgtDcTTsx8VfDFmpscNage3KcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ksN3+G7bltMaVrUG7fsa6caqJWLsAeChIaYRABmDk8M=;
 b=U0pHfWxM6F/0yEOHT0es00xcir2Jw2C75x/7iRY1Bggj3tp6aLeaVgRJ4q/1ov5Iqa1RrMUmQ8Y3L28gU8iWcnUuQxt7joXLSBTVwNL03wzJR9Lc+uSAnZ+T+LXoH+brofvmjz1YtDQSAar8RlKrQ6noMSXgD63yeDwfNmqhguY=
Subject: Re: [PATCH] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210607090425.18277-1-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cd954aa7-ee43-1126-f97a-21c213ef053b@citrix.com>
Date: Mon, 7 Jun 2021 12:01:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210607090425.18277-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0239.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a7::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9c87a2c6-f14a-4585-de43-08d929a3ae34
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5759:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB575978CFF04E7C9411D1E472BA389@SJ0PR03MB5759.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c87a2c6-f14a-4585-de43-08d929a3ae34
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 11:02:03.2432
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 88b9QOSBy5IHp3w/34zIDmU3JPQWtV9hHGcYCBQ8bre1tQWPQtsrK+8sBslk2bs3AY6If5d4AEAJNExVCcVzEm3VPuB7SWovkUx7T5e2Z+0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5759
X-OriginatorOrg: citrix.com

On 07/06/2021 10:04, Juergen Gross wrote:
> After 32-bit PV-guests have been security de-supported when not running
> under PV-shim, the hypervisor will no longer be configured to support
> those domains per default when not being built as PV-shim.
>
> Unfortunately libxenguest will fail saving or restoring a PV domain
> due to this restriction, as it is trying to get the compat MFN list
> even for 64 bit guests.
>
> Fix that by obtaining the compat MFN list only for 32-bit PV guests.
>
> Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  tools/libs/guest/xg_sr_common_x86_pv.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/guest/xg_sr_common_x86_pv.c
> index cd33406aab..ad20461e2e 100644
> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
> @@ -154,6 +154,7 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>      ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>  #else
>      /* 64 bit toolstacks need to ask Xen specially for it */
> +    if ( ctx->x86.pv.levels == 3 )
>      {
>          struct xen_machphys_mfn_list xmml = {
>              .max_extents = 1,

This check wants to encompass the whole #ifdef block, to avoid differing
behaviour between compile widths.

Also the comment next to compat_m2p_mfn0 wants adjusting to say "only
set for 32bit PV guests".

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:15:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:15:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137843.255314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDDe-0003rM-W1; Mon, 07 Jun 2021 11:14:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137843.255314; Mon, 07 Jun 2021 11:14:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDDe-0003rF-Sb; Mon, 07 Jun 2021 11:14:58 +0000
Received: by outflank-mailman (input) for mailman id 137843;
 Mon, 07 Jun 2021 11:14:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqDDd-0003r9-PH
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:14:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqDDd-0006gL-OT
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:14:57 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqDDd-0005zt-NZ
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lqDDS-000056-Ny; Mon, 07 Jun 2021 12:14:46 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=tiJK7SkXD1u03C4YnOzLegCd4pvTwz4c5x+r+gKdUNg=; b=Kcbn39XpnKDazy3wtMX5m+qV3U
	Dtm96I9hDD2d+xvPK4ZKgF1hl6bUAuGRZ4CyE6RYpq7s5ePbtGUASIfTy8UGyTua+ShqpTspXw3Bq
	RVBB+m2TMgqN6lK48liJJmAEIOhXJ/kUMJglxTSfJ8ShM4Ici9Wyiep8/1SDq/jpPF6g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24765.65446.517292.529623@mariner.uk.xensource.com>
Date: Mon, 7 Jun 2021 12:14:46 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH] docs: release-technician-checklist: update to leaf
 tree version pinning
In-Reply-To: <41f6ae32-4704-a580-f103-9e1e9e51330f@suse.com>
References: <adc2ba4f-81b9-9841-a53c-8c52278ae62a@suse.com>
	<f60da828-ed27-abaf-0301-559cfe017201@suse.com>
	<41f6ae32-4704-a580-f103-9e1e9e51330f@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: Ping: [PATCH] docs: release-technician-checklist: update to leaf tree version pinning"):
> On 27.04.2021 18:01, Jan Beulich wrote:
> > On 07.04.2021 11:56, Jan Beulich wrote:
> >> Our releases look to flip-flop between keeping or discarding the date
> >> and title of the referenced qemu-trad commit. I think with the hash
> >> replaced by a tag, the commit's date and title would better also be
> >> purged.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> I've been noticing this inconsistency repeatedly because I simply re-use
> >> version update patches for the stable trees - the context mismatch
> >> results in such an updated patch then not applying, due to said flip-
> >> flopping. For 4.15 I'm inclined to drop the two offending lines, perhaps
> >> while updating the version to 4.15.1-pre.
> > 
> > Ian (and others), any thoughts here either way?
> 
> If I don't hear back by Wednesday, I guess I'll take the lazy consensus
> route and put this in.

That would of course be correct of you :-).

But I should explicitly:

Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:20:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:20:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137851.255328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDIo-0005EY-KW; Mon, 07 Jun 2021 11:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137851.255328; Mon, 07 Jun 2021 11:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDIo-0005ER-HO; Mon, 07 Jun 2021 11:20:18 +0000
Received: by outflank-mailman (input) for mailman id 137851;
 Mon, 07 Jun 2021 11:20:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqDIm-0005EH-O1; Mon, 07 Jun 2021 11:20:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqDIm-0006mK-Ih; Mon, 07 Jun 2021 11:20:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqDIm-00050r-AK; Mon, 07 Jun 2021 11:20:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqDIm-00061N-9r; Mon, 07 Jun 2021 11:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gXc+NcH3dwYYPdLh9vGhYJIyNIelB57gBjcQICQjNB4=; b=Sfhm0EO/TcPSpoaJcPNB5uOkO+
	fX5cMCz1Q6SmIPuP8i0Oa5zbV49Sl4TPHJW2jEf0+ExZiWOaeQcoWhw+xeGhQxsDRoMqFRMGEIHPm
	IdaXJC8Bdkjc50JzBoat3R9hKos79DWIt6OXJVkRRcorNctB4OGqyYkTwO8vr6792zIg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162490-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162490: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 11:20:16 +0000

flight 162490 xen-unstable-smoke real [real]
flight 162495 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162490/
http://logs.test-lab.xenproject.org/osstest/logs/162495/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   18 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    2 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    since as of kernel 4.14 only the linear p2m table is kept if Xen
    indicates support for it. Unfortunately xc_core_arch_map_p2m() still
    supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:20:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:20:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137856.255342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDJH-0005iU-02; Mon, 07 Jun 2021 11:20:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137856.255342; Mon, 07 Jun 2021 11:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDJG-0005iB-Qx; Mon, 07 Jun 2021 11:20:46 +0000
Received: by outflank-mailman (input) for mailman id 137856;
 Mon, 07 Jun 2021 11:20:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqDJF-0005aa-I5
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:20:45 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04639098-c405-4c5e-a4e6-49718097229f;
 Mon, 07 Jun 2021 11:20:38 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C52801FDA1;
 Mon,  7 Jun 2021 11:20:37 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 9BDF0118DD;
 Mon,  7 Jun 2021 11:20:37 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ulviJAUBvmAcVAAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 11:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04639098-c405-4c5e-a4e6-49718097229f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623064837; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/9GCv56hn2ceBq+ibp+voDCt2yOAP9i23fZo/5cnTdE=;
	b=gmbL+CAAcYWZXnHSroyO+d83cSqBOsNgFhy6xhvqyCeZSULvMKJr8ENTdmxRePytgqc6xi
	tyi8UewcuqH+3HckLR24vWaj2kLo6FI2X00GGxw9E+VyGTN0pU5j9vr1HghFmbTappkITP
	EO3GU4NL/l2I1x8qVN+8MKRP4SRVpEE=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623064837; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/9GCv56hn2ceBq+ibp+voDCt2yOAP9i23fZo/5cnTdE=;
	b=gmbL+CAAcYWZXnHSroyO+d83cSqBOsNgFhy6xhvqyCeZSULvMKJr8ENTdmxRePytgqc6xi
	tyi8UewcuqH+3HckLR24vWaj2kLo6FI2X00GGxw9E+VyGTN0pU5j9vr1HghFmbTappkITP
	EO3GU4NL/l2I1x8qVN+8MKRP4SRVpEE=
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210607090425.18277-1-jgross@suse.com>
 <cd954aa7-ee43-1126-f97a-21c213ef053b@citrix.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
Message-ID: <747ef58e-3b39-89b5-c48d-37313e33c260@suse.com>
Date: Mon, 7 Jun 2021 13:20:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <cd954aa7-ee43-1126-f97a-21c213ef053b@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ry2uOR4k0zJd6JcJaWXfMLqEPBVgzEgMI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ry2uOR4k0zJd6JcJaWXfMLqEPBVgzEgMI
Content-Type: multipart/mixed; boundary="pIACGx8hhqOjynFkcG9HXxkDgr0NjzDMU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <747ef58e-3b39-89b5-c48d-37313e33c260@suse.com>
Subject: Re: [PATCH] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
References: <20210607090425.18277-1-jgross@suse.com>
 <cd954aa7-ee43-1126-f97a-21c213ef053b@citrix.com>
In-Reply-To: <cd954aa7-ee43-1126-f97a-21c213ef053b@citrix.com>

--pIACGx8hhqOjynFkcG9HXxkDgr0NjzDMU
Content-Type: multipart/mixed;
 boundary="------------6ACA3C7A8FB1812A69F8E498"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6ACA3C7A8FB1812A69F8E498
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 07.06.21 13:01, Andrew Cooper wrote:
> On 07/06/2021 10:04, Juergen Gross wrote:
>> After 32-bit PV-guests have been security de-supported when not running
>> under PV-shim, the hypervisor will no longer be configured to support
>> those domains per default when not being built as PV-shim.
>>
>> Unfortunately libxenguest will fail saving or restoring a PV domain
>> due to this restriction, as it is trying to get the compat MFN list
>> even for 64 bit guests.
>>
>> Fix that by obtaining the compat MFN list only for 32-bit PV guests.
>>
>> Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/libs/guest/xg_sr_common_x86_pv.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/guest/xg_sr_common_x86_pv.c
>> index cd33406aab..ad20461e2e 100644
>> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
>> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
>> @@ -154,6 +154,7 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>>       ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>>   #else
>>       /* 64 bit toolstacks need to ask Xen specially for it */
>> +    if ( ctx->x86.pv.levels == 3 )
>>       {
>>           struct xen_machphys_mfn_list xmml = {
>>               .max_extents = 1,
>
> This wants to encompass the whole ifdef block, to avoid having differing
> behaviour between compile widths.
>
> Also the comment next to compat_m2p_mfn0 wants adjusting to say "only
> set for 32bit PV guests".

Okay, together with Jan's suggestion this makes sense.


Juergen
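
To make the point about identical behaviour across compile widths concrete, here is a tiny standalone model of the guard being discussed. The helper name and shape are hypothetical, not the actual libxenguest code:

```c
#include <stdbool.h>

/*
 * Hypothetical model of the guard discussed above -- not the actual
 * libxenguest code.  32-bit PV guests use 3 paging levels and 64-bit PV
 * guests use 4; only the former have a compat M2P list to fetch.
 * Keeping this predicate outside any toolstack-bitness #ifdef ensures
 * 32-bit and 64-bit builds of the toolstack behave identically.
 */
static bool needs_compat_m2p(unsigned int pv_guest_levels)
{
    return pv_guest_levels == 3;
}
```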

--------------6ACA3C7A8FB1812A69F8E498--

--pIACGx8hhqOjynFkcG9HXxkDgr0NjzDMU--

--ry2uOR4k0zJd6JcJaWXfMLqEPBVgzEgMI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC+AQUFAwAAAAAACgkQsN6d1ii/Ey9K
eAf/SsrdRuYjPVO0O+lO8rM6vdmekTAAsUfJPEc9urgnr3im3liNJcxFddU23rDBwuVJtKXmaop3
aPnJUI6Dfz3ZsFd+gIwK/fHXI5QePMLlTl2n5XZOn9YjPjquq9ypYhWBVL6mwnPMwpnmGpdLGNwZ
cxyMwDFOKP7VEcvrEZ010aZTnH8EzKFW47dVr5+Gb2xJTKZVyvONIAnE3x0pbomiDGj5SGSxQV5v
huxsuvDPin2fk6DOgPMqen/e7a/2SDzFh+XLT5XwL/IiMbGIsvtmMaZsz/wBfennUdAxl6BI/Yy+
2wQM0cr6ePByd5Rfr6vh6kEE0L/LXS/1SjXnX7BYvA==
=jpkX
-----END PGP SIGNATURE-----

--ry2uOR4k0zJd6JcJaWXfMLqEPBVgzEgMI--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:23:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137866.255352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDLf-0006XY-Ft; Mon, 07 Jun 2021 11:23:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137866.255352; Mon, 07 Jun 2021 11:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDLf-0006XR-Cy; Mon, 07 Jun 2021 11:23:15 +0000
Received: by outflank-mailman (input) for mailman id 137866;
 Mon, 07 Jun 2021 11:23:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqDLe-0006XL-1a
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:23:14 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cf39ab3-c2cb-430a-ad5b-69dccfcfb67c;
 Mon, 07 Jun 2021 11:23:13 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 67E5A21A91;
 Mon,  7 Jun 2021 11:23:12 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2DEBD118DD;
 Mon,  7 Jun 2021 11:23:12 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id +cACCqABvmBlVQAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 11:23:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cf39ab3-c2cb-430a-ad5b-69dccfcfb67c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623064992; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=TVsG2bpF3vrLfIDDvhyK+FQrs0rLZxCGvWEeqGGq7Wk=;
	b=lLD6qrGJ3mHXxkjkjeaT6f/LSCjfb1hr9M7BvpaPvWE1p8XUqE7db//wrHeduj1hbxcp7i
	7JFwvvAlya4ykQxGwZgJouOt1SPIKEdklibNrDCJZS+riN80s6uCjTLchKmQypr1wWswwG
	ZbZDRnK/RtRH+E0V+uPUNaVp6pGavi4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623064992; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=TVsG2bpF3vrLfIDDvhyK+FQrs0rLZxCGvWEeqGGq7Wk=;
	b=lLD6qrGJ3mHXxkjkjeaT6f/LSCjfb1hr9M7BvpaPvWE1p8XUqE7db//wrHeduj1hbxcp7i
	7JFwvvAlya4ykQxGwZgJouOt1SPIKEdklibNrDCJZS+riN80s6uCjTLchKmQypr1wWswwG
	ZbZDRnK/RtRH+E0V+uPUNaVp6pGavi4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] MAINTAINERS: add myself as tools/libs reviewer
Date: Mon,  7 Jun 2021 13:23:10 +0200
Message-Id: <20210607112310.22180-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I have touched most of the Xen libraries in the past, and there is a
clear lack of reviewer bandwidth in the tools area.

Add myself as a tools/libs reviewer for that reason.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..bd80740cfe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -374,6 +374,13 @@ F:	xen/include/{kexec,kimage}.h
 F:	xen/arch/x86/machine_kexec.c
 F:	xen/arch/x86/x86_64/kexec_reloc.S
 
+LIBS:
+M:	Ian Jackson <iwj@xenproject.org>
+M:	Wei Liu <wl@xen.org>
+R:	Juergen Gross <jgross@suse.com>
+S:	Supported
+F:	tools/libs/
+
 LIBXENLIGHT
 M:	Ian Jackson <iwj@xenproject.org>
 M:	Wei Liu <wl@xen.org>
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:32:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137876.255364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDU4-0007yw-CR; Mon, 07 Jun 2021 11:31:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137876.255364; Mon, 07 Jun 2021 11:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDU4-0007yp-8v; Mon, 07 Jun 2021 11:31:56 +0000
Received: by outflank-mailman (input) for mailman id 137876;
 Mon, 07 Jun 2021 11:31:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rwXN=LB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lqDU2-0007yj-Js
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:31:54 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b37b0e00-aa96-4582-ad38-d79e3bdfa504;
 Mon, 07 Jun 2021 11:31:53 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx57BVmKDp
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 7 Jun 2021 13:31:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b37b0e00-aa96-4582-ad38-d79e3bdfa504
ARC-Seal: i=1; a=rsa-sha256; t=1623065508; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=OxbxY5O8QxF1r5z5FQ2va6VasBaxrZMIKLr6BfKeojnVXHcWK3hfko1KXEpF509C5e
    2RtlNfAyjiGP4r98jUnV1TfjKMFExHLGgtrE5pOAMZWxSGA0kVO2oD4WKqrZUEGH5kj+
    Q6HEH96r0hiiXH41iO0vzNQrRWPh4X3FfSQDjjgc6IA6oVG8cLcfzL6HrEhX+WMz2pZI
    3LeruEvVfBw/qpfn3v8MO169JMxO7nUZufEjCN29fFfBiWcEwoMHF3mbMFW5F0cphpyv
    9N+oIse76rlz20vn/3nF01nweqKlI7CMmsAE5zumovcMJ6fDugdblos8opfp2S6zRkJf
    T1sg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623065508;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tSUe+mhW0778pNVSSqM5Qlq2C/BzFqClSM00cyxyRzk=;
    b=NdNWynOoUorQUmF4+sypuCjJlvDOYuUOoNx5coBcjsUPzUxQQe88kORYKqEdHZ4KsG
    h2MHkFq3WkpjHitp3+EvVODbJwdOtW4l/iGmqkR+S9Q7R8dSaItB5JSqUC+eNxsrfVST
    7ahxRaHsEM7bP529/VdSKjMFQezR9qQmaNBeZ+aCVQ6rB04OvP3HbOgeQYHTka5RXSjI
    nMzFpu1DVlLSpmZsTg60F86H/fZyyKqtXK0SO1nLSQLynAJolzlaTcTHK5Q8dGlBMyUB
    wXoFuHR1eY3EiTGqfCly5tFaVe3alCjUJJJSBCO+h4AzUW2PiFuWe/469B3ccFU2D3QF
    vE2Q==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623065508;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tSUe+mhW0778pNVSSqM5Qlq2C/BzFqClSM00cyxyRzk=;
    b=nRQ1clVO3RktG6v8mc2d79hrpIqakI00ZZ+CI5hB9CUKe5Qv9UwD9c8IhlbTrBxcpC
    jWQxlActcP9N+M2sbfA6ntU1EsDwwxD9ojpMk6UK13ZR9uw0XfHyQ5hYAtqe/TRpNDTN
    mmarB/Zsn04gVtxevrDw/4qaF8Y/lz6bUv+mvtmGxShCRxSno7h6Z4Q2tTdCdRNn0WbM
    OxPkmLiUkw30sGd5lH015XGGHUIwJOEbvDU9MP3J6f9e+s68DW8NtKFl+RvNdpNJMo+5
    npM/Vl95vmVXM0BWkMReylzOGUUEQ+prFVg6IZ2UIfJ7sHW87RYaS6LU/mJOXGj63CBI
    e6vg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Mon, 7 Jun 2021 13:31:33 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 04/38] tools: add readv_exact to libxenctrl
Message-ID: <20210607133133.2d2c52b6.olaf@aepfle.de>
In-Reply-To: <6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-5-olaf@aepfle.de>
	<23783088-dc59-abd1-c66c-5fcd314d1f5c@suse.com>
	<20210602125710.0607a985.olaf@aepfle.de>
	<6e1aed4e-8d32-0711-609d-dfabe906c40e@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/sukKz6=fq0Pb4nEPkGJme0A";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/sukKz6=fq0Pb4nEPkGJme0A
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 2 Jun 2021 13:41:02 +0200
schrieb Juergen Gross <jgross@suse.com>:

> Shouldn't you check for zero length iovec elements as in the
> writev_exact() case then?

It is not clear to me what the purpose of skipping zero-length iovs at the
beginning of the iov array in writev_exact is.

I assume the syscall (or libc) will just skip them anyway. The code would
not handle empty iovs in the middle of the array. Perhaps writev_exact
should be done in the same way as my readv_exact, which would also remove
the need for malloc. I think in the past I had a patch to do just that.
Maybe just locally, maybe I sent it to this list.


In case of readv_exact, only the trailing incomplete iov will be filled
with read_exact. I think the length parameter for read_exact cannot become
zero, because in that case "len" would already be zero. Zero-length iovs
will be skipped by the loop, AFAICS.
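
The loop shape under discussion can be sketched as follows. This is a hypothetical illustration, not the actual libxenctrl code: it uses plain read() per iov rather than readv(), but it shows how zero-length iovs fall out of the loop naturally and how a short read is retried on the remainder.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/*
 * Hypothetical readv_exact-style helper (NOT the real libxenctrl
 * implementation): keep reading until every iov is completely filled.
 * A zero-length iov never enters the inner loop, so no special-casing
 * is needed, regardless of where it sits in the array.
 */
static int readv_exact(int fd, struct iovec *iov, int iovcnt)
{
    for (int i = 0; i < iovcnt; i++) {
        size_t done = 0;

        while (done < iov[i].iov_len) {   /* zero-length iovs skip this */
            ssize_t rc = read(fd, (char *)iov[i].iov_base + done,
                              iov[i].iov_len - done);
            if (rc == 0) {
                errno = EPIPE;            /* unexpected EOF */
                return -1;
            }
            if (rc < 0) {
                if (errno == EINTR)
                    continue;             /* retry after a signal */
                return -1;
            }
            done += rc;
        }
    }
    return 0;
}
```

Doing writev_exact the same way would, as noted above, also remove the need for the malloc used to copy and adjust the iov array.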


Olaf

--Sig_/sukKz6=fq0Pb4nEPkGJme0A
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC+A5UACgkQ86SN7mm1
DoCTPg/9EqBFJd9+SrqmRDsKa5bn2y37kgb0CSQhj50nP8cBBhrKT+u7ZFiAmw2S
WScBfospfvZPS0Yc++K8RitovLnPgr2ljQuw5tnLtyBzVBstoTl2x1AHw4jefFav
9MOba1rl2D47prxO5aEr8Dv5tbstijx5xoAbqI2GJa2AjRWABR9T9vmrC72F190Q
FKoBhPUZ4eB/udD3Dxsz8pFcS1j/2sHAN80JxGTPbdsKAqUNZcACiLaRKZMh3XGy
qjpnwW95mdwnU0GL/y1zQNVx5r+61kRnGFKVlsicP6KnkAkoioc9vFDNQGhPQ2Hr
HBxCUaL905Ita4bZNUz81gHc4q/IL6Mb4qgf4Gkw7qNPqKwlDLch/l69NXsSK4yy
66FVl3NVhsU8ejcdfOnsgm5b5ipXj4YrnNKOra0IUIHhuTQ0dRH+HxhwmTmofejL
xlG2OqJ1te+tH/lezQPJGNLXkajjBQjkapX8w/cooYog/0KzqVHfYz702qd/kvrC
7pZOVeCAfBrVd+fm27db+RCmxItsG8uffxwTUTaVDoWlB1DPaREj4bfEej5ioY2+
P2tQs4FAg0ct5rg+kbHUZy89Jcg7nEV9GvKG0++VEJymjDCkPamIFxNMr0CPPAi4
iQwE4zzlp8xMx+zHKYRRMqlIRw2GpQSjvgs1M70qMSHuB2kAlXk=
=nFGj
-----END PGP SIGNATURE-----

--Sig_/sukKz6=fq0Pb4nEPkGJme0A--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 11:44:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 11:44:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137883.255374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDgb-00016w-IZ; Mon, 07 Jun 2021 11:44:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137883.255374; Mon, 07 Jun 2021 11:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqDgb-00016p-Fd; Mon, 07 Jun 2021 11:44:53 +0000
Received: by outflank-mailman (input) for mailman id 137883;
 Mon, 07 Jun 2021 11:44:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v64b=LB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lqDgZ-00016j-Qo
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 11:44:52 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa5bb2aa-566c-400f-ba79-13b21d8a8dbe;
 Mon, 07 Jun 2021 11:44:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa5bb2aa-566c-400f-ba79-13b21d8a8dbe
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623066290;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=oU+7BIOQYIJ4vk9Hrs7NVdUlRzGukvCTg8swiWYEQpU=;
  b=Sw/4YmjfusuxieCKer7Pkjp2yBb5mCDfizL+kKxhMzbgYbOs5ear169M
   MPhxH+s+ZcHpiZBZ7oRSZPOuq8Cc5iUiMKUpvEju2W0jyA7UJotGc7Co+
   ZQja922KCxe+M+e2nP3KdOaSgiQHz/rfkGLHreX5N1OMkstaB6tWsb+RW
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: R2MvXMKq9US595xz8s0SMRwO4NcGsXvgS3ixhMM2M5f/VEZEj+9r8WsFOA6yDcP7t4r3SiHeex
 /B2WU0DkFAsEn/HAYBkiXupRF1rd2D/WvfpXXv/rgv72n4bEUNyQeZIhjes0zIaRNHmyJF1BBe
 gPlmXmL+IdoAPj/cOISp9wKj/rIH5/OMVK37Gf4OHylEXInb4ibXHcJS5Zai2yDq4/e75ktimJ
 AWu4uRwvflBsQTT+1wQuFuaQWEPZ7KD72g2L9eO+tk56/H7iO6chevOTUePEKjMiKEfj+CDzSc
 vs0=
X-SBRS: 5.1
X-MesageID: 45273656
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:ev37Paje4JtY9NPsXCyAA5wrLHBQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="45273656"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CRtb8aNlXu8LRcg0Wl+2Mfsgu+Q1WLYqECGST6VV34oknB/vCxWGWRpmKiuOAeG8o709FDzIo0eFSxRmRMClQu7iFzKLa5QsWi7+TgrYA7QZiaY3odMmwiHwGZncrBv5mdPfl55R7QJlyrl1bdu8ctHG9tZnM+bPVDBjNo40iam9qqH6Hkbp8hmh8Lp/h/dP6GnVFNNOygkymIc+M/r8kBa7Q8jftHcuOog3UCof8YraBYKFAlEWYYjdlFJTTx4LEEXZr9+ku3Ka/rLljPm+RNMVCcHtXxLSbhZoW/l4WMM63OTdA+FUkg+3TtlHujKzRGkcB6xSghKygmKy00/M9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bX+EwXKI2YExnugVtuinHTKeSOeX8gHOq0wQrNOVTGs=;
 b=QIgcyXNXK6BXswusBQIIuOET6LH70zgPvQn/ftZzr5s0I67RxS9QjNeSAB5rs+gvHP4hW7NdbpAwxA5YecO2c+DqqCj+LvUalmQN6jriKI6Ncnta7gog4T8FGFA4BfhVwHqrLSlSS7Ldc8nR5xWS5hmmwqGLhcKTcQCRsl7w+3Sv6JRrj/Xx7NkDcWTTJd7KenjXXeSpHt0yw4iVErGLWr6z910M4gj2R1i1nNYFr2TWd098nk5LvHpzVbUsv4ZgqxmRUJNqCe7rAy9fq18p+L/X+MOq8nTZp5Dt0dHE05ahfIa0StwqUEc2WbODpBDbOF7CpQalUtKbwk+QeEFbrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bX+EwXKI2YExnugVtuinHTKeSOeX8gHOq0wQrNOVTGs=;
 b=rnHtSu5dmKxxo1o0WWonhO1F+Xbkddzkw2NzGUYqOi4rrCIeg44+/Fqu9OuKU+Rw3ZxH42WPwvi7bgxJOhkHO43W5g6mYojwb95LUGrHN2BemeqPnDpsWRhHnE2q1Xs67zrTIZGUtFuHVSceinCZ3k94RO8Aq1YghZQL4vuPzZI=
From: George Dunlap <George.Dunlap@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Ian
 Jackson" <iwj@xenproject.org>
Subject: Re: [Bugfix PATCH for-4.15] xen: credit2: fix per-entity load
 tracking when continuing running
Thread-Topic: [Bugfix PATCH for-4.15] xen: credit2: fix per-entity load
 tracking when continuing running
Thread-Index: AQHXHLluQ+xtLrrTJUqwj2ZWRPlE/6sI65QA
Date: Mon, 7 Jun 2021 11:44:47 +0000
Message-ID: <A4388641-0B69-4C3A-84D5-10017B91DC74@citrix.com>
References: <161615605709.5036.4052641880659992679.stgit@Wayrath>
In-Reply-To: <161615605709.5036.4052641880659992679.stgit@Wayrath>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5b855648-e559-4e52-8b29-08d929a9a6e3
x-ms-traffictypediagnostic: PH0PR03MB5829:
x-microsoft-antispam-prvs: <PH0PR03MB5829EEAD66217E102EFBCC9699389@PH0PR03MB5829.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: th1g84hhf56ptlAiBuIeqMs/y6hTzaIBfjJHXuhTTZJV0N0K/TLuMkFgR/MEpXi0Xs1pIlxZTrcbGxg+05/DFACtz0dDqrUWrxCGbZIduOMwzOgte2QlvzYI/Zyno9Pehr1uHPEzBzZF6kt4O3M0eThCgyHKcLXWXSCmdjfJv2JEJa7/A7yDRNjd5YZz5odoHhVDgFEXeTaMNfXYjELTTU/s7W+9sEB9t0UbwTqD2C7sPXvMykpt6EugE8OqQlqVTDjaPrs+mKz0fLxO6sZFi7dffVvV2qxm+1lUmzJ7ogQ8BMSeBmTGHFRyJOxYAw10rQ9GqD3oeum9Q3OWRVjGMGSzjijYerT0mO7XVllY501RN/ZM2UtY6pAiVQrWwcLJBSWv2zNf9BGy5O/9Wx52Wv8RbLI19ncT0BcQBoNhIJaD5FU6KfNAFG3pj0aZEswUHU7YEoZffH2QCrowTToJ5SA0mqyzGZ5MgFLKZzCuxaH7AcHAJmV4yqLRnEeaGOedmFf7SLEoD59Pj5stwukrb0jN75Gr9xqIN1qpogKiJcWyRB6J6SPu4sJg+QOPeyFqnzkx8MWBj6++dKLT7kud4MjytVgnfxciuBWx1OQMbhYwvqneYfZSORIwX9oknrxh1ejD4ieQ9qlv3K4w6qCy/9iD7VS7DyKTJnMpoJFie+s=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(136003)(396003)(376002)(39860400002)(36756003)(26005)(6916009)(186003)(91956017)(76116006)(66556008)(66476007)(71200400001)(66446008)(122000001)(6506007)(53546011)(6512007)(64756008)(66946007)(478600001)(8676002)(2616005)(6486002)(33656002)(54906003)(2906002)(5660300002)(4326008)(316002)(38100700002)(83380400001)(86362001)(8936002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?us-ascii?Q?Bxg9V/Pq/X9ODfKWKm90NwEAK6nuSHohC/AU0BaNQUJtXfiSTn+WOMt6WJ35?=
 =?us-ascii?Q?x8jm1hk/cnR1LnB6ZrbstqPyLF6h9a7XT5uNBaAqdaXYrs50HJVHflAq0Lgc?=
 =?us-ascii?Q?THf5T43jM1elW1SP46xnqZ3BfQXbReB3eJBm6m48gpguQJt+MxbSCOSksbTh?=
 =?us-ascii?Q?OrclNM85dnmjOn14RHcNnShHsfGbdDRPa41bFmLlsK9CVIuYXGwWnIIBlKSd?=
 =?us-ascii?Q?c5CY+7DMeF6rM0LS+abOiZdFTK/miAPkxH5jh1pAbCH3N5a52HAeYEhC3Hjg?=
 =?us-ascii?Q?dW8ISTIGbipoTqwPGFIXsQ/7NCSa4Q+Ka7vE3xUVgfvVCXI/yJdOgqrCbZze?=
 =?us-ascii?Q?03alcHLFycXAUJk9NsijsQ6AthcbWp9KK5+uUzr1sn9/Yd8hxQ+eW25S5Pr5?=
 =?us-ascii?Q?IGtmWVhFIdMFPpxnTbsuEIvUoMJwMV3EocZXks/J1qrJ2CNPyREO3gCnxOi3?=
 =?us-ascii?Q?uM352+AbDvDG/KQRnVlDyqD2wcpUielh6xbYSO9fwZFzRS8sr4zbu1NluX+d?=
 =?us-ascii?Q?oMr3sdZPvS+jvMtGcRtNpuNSKWNm+hVgv3tgS+4fV6btNk8XCtz1Fuq7tOeY?=
 =?us-ascii?Q?KfY0NtcDh1oZLbr5IXm4nMgwKniqTDhBVfu7mDNkbTaSOVQ9FoYEt1ZGLSUZ?=
 =?us-ascii?Q?iXJ1c+sIIdX/F6Aj4lH3kft3SAZbA/d8l6obUDa/ibIVRpWXCoWQSO9nu0xK?=
 =?us-ascii?Q?VJblIZfWtELcg9h6y7oN71y0n2UXhWZEAQsKyJJMrCi6sxV4rara3TsbXczW?=
 =?us-ascii?Q?lHNV1WfFNUQKP/12aZhx4jTGCJZ5uBAGxgmFjehXNeWkVFFo8EfgU2/z7TNe?=
 =?us-ascii?Q?eEIl6P7TldiFPSIEhFwTulW6rYMdGSdrFcFNHzZrVgttQHYfIhI85sI4g6Kj?=
 =?us-ascii?Q?ee5xTWMkKoEVisjP4UFimVVsHxRSEstwU+vSgvT8/jqfJPN8o5fURKpi6N6o?=
 =?us-ascii?Q?XMyNW15Xq1ciITUsdTpG6gDfe/CJFLYF/rBNfX8dBXxj8sD4incOw21Y/+m4?=
 =?us-ascii?Q?k/B+CkCHEiocbF6MoCaM34G0/+e5pgngoBYPUxCjyD/VCQLBzx44a4fA1Jgy?=
 =?us-ascii?Q?3121byXClOLuFYCMmquUT3cnpJCqCC9yBK2BDbTdwq786/cZl7mAOCFlH4eF?=
 =?us-ascii?Q?5UhU9Rh7WJY4US93XP1nRyqcnGrAjcHgGvv7Hg4UGSjTA2zXFUQKPh1+aFsz?=
 =?us-ascii?Q?Iqe9Bh1IHSJHNOwRMpRIG9LDJi7I90jWjFQTGlI77H4HrIqhfAWm7JnvY5jS?=
 =?us-ascii?Q?3hXZQFfSK/9X8XFUt2yOKacyiLoeUFnWJrDSakshK9DXT9fiazR/riYtR62E?=
 =?us-ascii?Q?eY5psActR5EqzDKiqUQoEb2OpyuhBO5NlpSrD0QWRc8lgA=3D=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2196A24B13B8EB40B972CAA00F6BA3E8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b855648-e559-4e52-8b29-08d929a9a6e3
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jun 2021 11:44:47.4294
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Ch/Bsa5hvhMWTe5MVLpWc9dTe/ZVd1qES9pG1FFpX5ihJkaLoylSQmBhvhCvgkZT2DZkcSZ7x7uFiWSw4Qr/ajWc7cvtrEztEpvbdoNfawI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5829
X-OriginatorOrg: citrix.com



> On Mar 19, 2021, at 12:14 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
> 
> If we schedule, and the current vCPU continues to run, its statistical
> load is not properly updated, resulting in something like this, even if
> all the 8 vCPUs are 100% busy:
> 
> (XEN) Runqueue 0:
> (XEN) [...]
> (XEN)   aveload            = 2097152 (~800%)
> (XEN) [...]
> (XEN)   Domain: 0 w 256 c 0 v 8
> (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
> (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
> (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
> (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
> (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
> (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
> (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
> (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
> 
> As we can see, the average load of the runqueue as a whole is, instead,
> computed properly.
> 
> This issue would, in theory, potentially affect Credit2 load balancing
> logic. In practice, however, the problem only manifests (at least with
> these characteristics) when there is only 1 runqueue active in the
> cpupool, which also means there is no need to do any load-balancing.
> 
> Hence its real impact is pretty much limited to wrong per-vCPU load
> percentages, when looking at the output of the 'r' debug-key.
> 
> With this patch, the load is updated and displayed correctly:
> 
> (XEN) Runqueue 0:
> (XEN) [...]
> (XEN)   aveload            = 2097152 (~800%)
> (XEN) [...]
> (XEN) Domain info:
> (XEN)   Domain: 0 w 256 c 0 v 8
> (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
> (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
> (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
> (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
> (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
> (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
> (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
> (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <iwj@xenproject.org>
> ---
> Despite the limited effect, it's a bug. So:
> - it should be backported;
> - I think it should be included in 4.15. The risk is pretty low, for
>  the same reasons already explained when describing its limited impact.
> ---
> xen/common/sched/credit2.c |    2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index eb5e5a78c5..b3b5de94cf 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -3646,6 +3646,8 @@ static void csched2_schedule(
>             runq_remove(snext);
>             __set_bit(__CSFLAG_scheduled, &snext->flags);
>         }
> +        else
> +            update_load(ops, rqd, snext, 0, now);

I feel like there must be a better way to do this than just brute-force
remembering everywhere we could possibly need to update the load.  But at
any rate:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
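
The per-entity load Credit2 reports is a decaying average that only moves when update_load() is called, which is why a skipped call leaves the stale near-zero values shown in the "before" output. A toy fixed-point model of that behaviour (hypothetical shift and scale constants, not Xen's actual update_load() math):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy decaying-average model, for illustration only: 1 << 18
 * represents 100% load (matching the 262144 values in the debug-key
 * output above), and each update blends 1/2^DECAY_SHIFT of the new
 * sample into the running average.  The constants are hypothetical.
 */
#define LOAD_FULL   (1u << 18)
#define DECAY_SHIFT 4

static uint32_t decay_update(uint32_t avgload, uint32_t sample)
{
    return avgload - (avgload >> DECAY_SHIFT) + (sample >> DECAY_SHIFT);
}
```

If the scheduler never calls the update for the vCPU that keeps running, its avgload simply stays at whatever old value it had, no matter how busy the vCPU actually is, which is the ~0% symptom the patch fixes.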



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:10:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:10:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137899.255389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqE5W-0004dB-2Y; Mon, 07 Jun 2021 12:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137899.255389; Mon, 07 Jun 2021 12:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqE5V-0004d4-UP; Mon, 07 Jun 2021 12:10:37 +0000
Received: by outflank-mailman (input) for mailman id 137899;
 Mon, 07 Jun 2021 12:10:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v64b=LB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lqE5U-0004cy-QG
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 12:10:36 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b71311f-4588-4690-9b17-b966091e7132;
 Mon, 07 Jun 2021 12:10:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b71311f-4588-4690-9b17-b966091e7132
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623067835;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=lty+9wvpX5M+uoA0u8g2Kgx9pOB3SNCum4VbbJTJl3E=;
  b=NqTcRVLa3K8Lqyv/VRZmDL8Pk5w3swEI09ntVw2NepUKWe3SNRX4LyOS
   DyTzLbick5eozjnDMn4FxdKA4bDVoOZLSdtpUwHM5lBc8uoHYQ3ajfM5p
   o4+NJGG+s8r9/4DFViK+B6FByc3mMvutpBQZjWvrZ0SzDcwsYyCrod3o1
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Tn4tcp6KGSrHmthdplQFfZ8Jrd64hNGWDx2gM814EtlZ8bM6FH1RXGGWyAZq6zsfUoEL/eiCGZ
 VuojP5O9YSUtJLEmIZG529FESpycmqZ3g7vTe/OZ4Qta61Gm+hlVQwk9DgpUXsNZo6URKkL5MZ
 RqBQzYJhw4NhhLUcy/+gEC/Rv6jJlSN9BRXtCKQ6pAm45z8glgXpibBV7t+bb46/DALVbaN/KL
 CYjtjVvyg70QVi4PRBV0givcviF+tpP/6mN/aFPKzQIhj/rmfYwNMBcb9TjgRxvnb+N96Y4HG6
 h1Y=
X-SBRS: 5.1
X-MesageID: 45889658
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:OuCCI6q0I0ZEopa6o8zrB3gaV5obeYIsimQD101hICG9Ffb1qy
 nOppsmPHrP4wr5N0tPpTntAsi9qBHnhP1ICPgqXYtKNTOO0AHEEGgF1/qG/9SKIVydygcy79
 YHT4FOTPH2EFhmnYLbzWCDYq8dKQC8gcSVbDHlvhBQcT0=
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="45889658"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gnuRjpSHYO7iVRdfPmwLHtSTHUHAxUEcuWjiHRnEiySSvLE8ZELrnxCgOro1B35MgW3m/VjwcyKKJ7nta76ULsSc6FIIj5SuB8f+iUeDpl8NigtIZZd3LZQSTdm75x9CGUqoxZGJ6Ap/nPU+oaeXql1ljjY1HDdtew2uMY5L3JJf3RBJCH5a+pZIOWw9KTG0Y0Srjv9hA65MvRG7GXmhGIf19gxPyAqyL/2gxAlvOWyurRgIE8997rQVVEbd00chRy5WrofmkxUJJgZox6R0tZWJKszxBC87K/etI81hk0lo/TwprHT6EGR8iaWq3lOaxTbvyx2UxJE2ywnN/Uhr1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lty+9wvpX5M+uoA0u8g2Kgx9pOB3SNCum4VbbJTJl3E=;
 b=ELdNMEdOLBUfDC+cJOpO5kNLGqcfU8/8yaroV7D4no3zDnjEzwod9IUbh1b2TQRcnn6FW4Zs4Nicm4szOFlnbSiezMKShMTBMtDtHeBvDVP7vUUP5FXo0CBBVWrbpkphaHB5PxvdhR18UnW+PC2bAx5n3wqVbTXGFdfSclvwSOU6EI/bgurtcnVg0cMN2KxQuMDxU0iMv5CU1FNfNP3g9SCjHZC0hHWbauTUVL2O9RwJeP4CAa1Bb1tNrorj7NeLQMd8BQer/lQ+mUVNFLfVm+xf/IhoADNFh4eahScERodbUXPfkHo83BI5NyMx7/8+OthoXEjwOuElqsx45AlgaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lty+9wvpX5M+uoA0u8g2Kgx9pOB3SNCum4VbbJTJl3E=;
 b=tBHmzs07oNVufuKt3+SYB5349JBZPl5DTGMBfpbS+7pbat1Eir4edAWlvG4+vPTlE+2VAuep+72I3COrRyquYIjoA37aqh40rtuG73Ag9pnzk7VH3ALjxFNbvQfov1WocfXS1bG7vT9lUc46HwLXjcEKay8+DGbCYrX+Sgds0nc=
From: George Dunlap <George.Dunlap@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	=?utf-8?B?TWljaGHFgiBMZXN6Y3p5xYRza2k=?= <michal.leszczynski@cert.pl>, "Dion
 Kant" <g.w.kant@hunenet.nl>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] credit2: make sure we pick a runnable unit from the runq
 if there is one
Thread-Topic: [PATCH] credit2: make sure we pick a runnable unit from the runq
 if there is one
Thread-Index: AQHXU9QMSaNdW1us1k+tzXl6c+sgY6sIhIeA
Date: Mon, 7 Jun 2021 12:10:24 +0000
Message-ID: <5D80842F-4479-46CC-A391-28E4EF364C7E@citrix.com>
References: <162221476843.1378.16573083798333423966.stgit@Wayrath>
In-Reply-To: <162221476843.1378.16573083798333423966.stgit@Wayrath>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bc322a63-5c66-40c5-ddd0-08d929ad3b30
x-ms-traffictypediagnostic: PH0PR03MB5669:
x-microsoft-antispam-prvs: <PH0PR03MB566975BD8C43CDF9DA01DA5099389@PH0PR03MB5669.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: UEuSaiuYSa/MHs2sabIkD056QOcJqd5D2xTQHacxAqIew0WAFw7QKawMYpKKfvFe9S1u7Er2xsAu+Ddu++F8ECZhqBFiWbv7hhi09lMQTrDPwUdh0nI+vjhD39uJvBtdiJdaR+X3whsTKMB5xIAy87vt/oij1+s1fzYY1D4T+nrg3Vw4wuJwqYWKLBkoDgAMk3GKthFhMoiwovWgIF+OszhkJUuZRuzDLKdwAqX0YT+fdSj1huWZxdgp6q4z3YicKbEBTKgDZgWxEjoEjpuh81IYY1uuNO16uaEI9sx65BMXVzLnbFx6jh9vuQj94ZAIeN3LBH4A70+KgAGGQNlJZsm2LBHSkc4WF7jE9e4qSzMJk1v978m4GxI7POIZU5ujy68gKKdLKepHhh9MXKvMJYhvFWc8ELoXjuGpHm3jsDqx5kNGrLCzcWSaYmgvV5P4D/sBs/CrHgQ5uiNbsAOvShjifayWc5PvTezQeXnRQ30HwAClhEH7Dt+27YPY1lIvFSJhCkzoZKJZh3u05fbHv3gfaMvPTSF6BEvtQuHx/Ec7Y9DfBcTtDKCxFDspwXe8hY8hyVsxVBDxnVvq/KAxCealnRtg5JnRRoQ8F1RVHqrIAlvwEXJ/AP+Cmp7tqQLl3oNQYNntI4e0kKs2x6/T65Yr3YStfzbwl/dhZIY9ocE=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(396003)(366004)(346002)(6512007)(54906003)(86362001)(2616005)(2906002)(6486002)(36756003)(71200400001)(186003)(6916009)(4326008)(478600001)(66574015)(33656002)(64756008)(66476007)(66446008)(53546011)(76116006)(8936002)(91956017)(66946007)(8676002)(6506007)(26005)(316002)(5660300002)(38100700002)(66556008)(122000001)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?cWtiSEwzbFdLSTJMMWZacWgvVktvMlVwVWZRSlpiLzlMZkFjZUVJVC83NzlI?=
 =?utf-8?B?R2ZhdjltNkZhWEFGRFQxZTZFVmxMQlN0WGxYRHUyREx2Yk9oOGdLVi8xL3hQ?=
 =?utf-8?B?RkpkTXU2ZGVJMkg4bmRENmJWU2RSUGFzTVFSWmdnYjJuQ3d4WDMzMFF4MHFj?=
 =?utf-8?B?aUFkNCtLYUkxdFhnSDNpb3ZiMlpQZ21SSVRtMVN4TG9wTE9iT0MyblMvSk5F?=
 =?utf-8?B?bEZGYTIxZUFkMTZYcWJ5MVhiSEljMHc2a0tTV2FvODk5MzQ5L1RzWktQVUZV?=
 =?utf-8?B?ekNRSWxCanVrMTcrd3MyTTUyazh3RVMrUDFURENCd3d6T1dyWUNVaVh2RWtt?=
 =?utf-8?B?R1VUSHFndGQ2Uk9sT09DY1hUS0xRcmxmL2g4Ry9Rc0l0d0hLNlAvOU1URDJh?=
 =?utf-8?B?Nk9iTG5kelJCd2RZOThwZWdxY0JIQXhUNVk1OWxXeVYvaU9zdU1CN2ZkUVMr?=
 =?utf-8?B?OXdiY2F5RWZGNWxmR2wxV0lTaktUTkliSXlxYU83WkZ2NGcxNkpSOEZTaVlZ?=
 =?utf-8?B?VndKdkhuSXc1bFl5VlJZeldlbDgvbUdQaVRDQ1FJNStoOG5kTjRhYTF6MXM5?=
 =?utf-8?B?RU81NWFxT1hkaCtXdEQzQmdvYVdKcGhTNUZUK2tMYTg1c2dwVWZVdC9ScnRw?=
 =?utf-8?B?OVRUdndNY3VCQlMwS0pCWHdOWkY0d3hET3JBakFhVGVqTW9zamlESGl4UHpM?=
 =?utf-8?B?TFcwZWpwdDI0NkM4SksxT21aZVJPL3ZyNUxJV3owT3RxUDlUaXpHRnpzTkkv?=
 =?utf-8?B?ZktyNE5uWGx0cnVvdm1jU1EyT3ZlcVU3TExPMkRpaGJDYmdPVmhqRjNlcE5z?=
 =?utf-8?B?QW8vTzFTVU5jb2N5blpMZGZNd3p1QXZQRVFBSnZnYTlPeSsvNFJlWE80ZmRx?=
 =?utf-8?B?VTd2MktoQkc5cE4rcndIckFYcHN6SlpBdGIwaFF6dGxkc3RGdE5rRitNVUcv?=
 =?utf-8?B?OHduV0dURmpOVWlRR3phL0NuZG43NlZHUWd2MFFuOXdGNG5XS0dUU1FFOVJt?=
 =?utf-8?B?K0RYOGk1UkxzZXVZbmhTWGx3M3o2MzZQR3lmTWhYTnpORlNWLzBIOUp4aHFh?=
 =?utf-8?B?SmFjMGNXNS9mTC9EQkVqUWk2MUZJYjNxcWlhSnMxenR6S3hSNmxQRU5oblNu?=
 =?utf-8?B?SVJxQldmN0VqTkNIL3RnN1hXamw5QUcvRzhGeFVkN2VxTDZXVlMyb2hIaFhp?=
 =?utf-8?B?enFCUnYrNWVEQ0N6TFlXTWcrbGJITXNzM2VHQ3dFMDU2RXJkSndFZnVhcG9T?=
 =?utf-8?B?enZLMHFLOVYzRnJQRVdObktwM3NObzFTc1RKVEdTanRIbUM4MU1ranJ3ZkdP?=
 =?utf-8?B?Qng5ck9STXJieHF0eHJkR1dUK0xxZTN0L1VGQ003OTdqa0hLTlhUY2w4S3JB?=
 =?utf-8?B?ZG1BMm5wa28vM0R6dHg4YnFNb0J2NDZ5UTFoalplc3c5YTJkVXNoMnRKd1VH?=
 =?utf-8?B?eDJUL2RGckdzTTUzSVVWQXRHUnlaTlFTMmpOSlZsSkpzU3l2cGFlSlJBYXh2?=
 =?utf-8?B?OE80cVBET3hLaDRKa0dQRlVOOEtUWGx6WllWWTJRMTRXaUIvc0p0MzFKKzdB?=
 =?utf-8?B?cTcxbUk3R0QyUWtWZ1JjSWNJb2xuWVJlSysyMGJpRXVtQUh3alNKbkdpN1dP?=
 =?utf-8?B?Zk5MUGFFb3JnSUovWCt4b3ZuS2o2Z3hER2poZllXR0hDNVhld0pJb0Vka0JD?=
 =?utf-8?B?KzBTU1ZUdS9IZ2F5TWxodzVRb082RWp1WHkrZjh5bXlYY1plTmtQWDRBcHlo?=
 =?utf-8?B?WVoxNTN3ajNubVdtQWxHdGNDeXRtd2hrZFU2QkZ2M1JoQnRkMDBBVzU4N0J4?=
 =?utf-8?B?a21EeCtFNXNXdWdQQmkvdz09?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <EC7441EBD64C9E4FBC1F4EDE7E72EFB3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bc322a63-5c66-40c5-ddd0-08d929ad3b30
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jun 2021 12:10:24.7331
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: NtoWc1Vlc62W6ggp72FEwvvI1fwDR54bGPLYNrbRJYv02u136YTUVxbVEvBY4kziuMtDwKVMkCbsCE+aJlg5GgXN97gYEvDB9pqBWEplY3E=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5669
X-OriginatorOrg: citrix.com



> On May 28, 2021, at 4:12 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
> 
> A !runnable unit (temporarily) present in the runq may cause us to
> stop scanning the runq itself too early. Of course, we don't run any
> non-runnable vCPUs, but we end the scan and we fall back to picking
> the idle unit. In other words, this prevents us from finding and
> picking the actual unit that we're meant to start running (which
> might be further ahead in the runq).
> 
> Depending on the vCPU pinning configuration, this may lead to such a
> unit being stuck in the runq for a long time, causing malfunctioning
> inside the guest.
> 
> Fix this by checking runnable/non-runnable status up-front, in the runq
> scanning function.
> 
> Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
> Reported-by: Dion Kant <g.w.kant@hunenet.nl>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Thanks for tracking this down, Dario!

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Just one comment:
> @@ -3496,8 +3500,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
>          * some budget, then choose it.
>          */
>         if ( (yield || svc->credit > snext->credit) &&
> -             (!has_cap(svc) || unit_grab_budget(svc)) &&
> -             unit_runnable_state(svc->unit) )
> +             (!has_cap(svc) || unit_grab_budget(svc)) )
>             snext = svc;

By the same logic, shouldn't we also move the `(!has_cap() …)` clause into a separate `if(x) continue` clause?  There may be runnable units further down the queue which aren't capped / haven't exhausted their budget yet.

 -George
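
The pattern George is suggesting, skipping filtered-out entries with a `continue` so the scan keeps going instead of bailing out, can be sketched with a toy runqueue (plain array and hypothetical fields; the real runq_candidate() is considerably more involved):

```c
#include <assert.h>

/* Toy model of a runqueue entry, for illustration only. */
struct unit {
    int credit;
    int runnable;
};

/*
 * Scan the queue for the runnable entry with the most credit.  A
 * non-runnable entry is skipped up-front with `continue`, so entries
 * further down the queue are still considered, instead of the scan
 * ending early and falling back to idle.
 */
static int pick_next(const struct unit *q, int n)
{
    int best = -1;

    for (int i = 0; i < n; i++) {
        if (!q[i].runnable)
            continue;              /* keep scanning past this entry */
        if (best < 0 || q[i].credit > q[best].credit)
            best = i;
    }
    return best;                   /* -1 == fall back to the idle unit */
}
```

Extending the same `continue` treatment to the cap/budget test would, as suggested, let uncapped units further down the queue still be picked.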


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:41:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:41:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137911.255417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEZM-00082w-K6; Mon, 07 Jun 2021 12:41:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137911.255417; Mon, 07 Jun 2021 12:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEZM-00082p-Fr; Mon, 07 Jun 2021 12:41:28 +0000
Received: by outflank-mailman (input) for mailman id 137911;
 Mon, 07 Jun 2021 12:41:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Idvh=LB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqEZL-00082j-Bv
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 12:41:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb71e5c2-4264-4af1-a06d-f5e1b5f5b3ba;
 Mon, 07 Jun 2021 12:41:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb71e5c2-4264-4af1-a06d-f5e1b5f5b3ba
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623069684;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=e21/IGcWymo5MIqD1rnrm7qEJJUELfd/4foTt5tkdmY=;
  b=eXYPe5ZLS29F1kqJwUL41WIGflAx3bIY2WsUkECCGOX5xI24ukiKDgfY
   tRlHCLlA7IhxFiPI3ud4LdMk18XN8lkJdbLyQkccWtTkONQ1dTr+bcWk/
   qcs3EJ9yy6OAejwna+ya5HJwPpjPMdqK7X6hJWN6iaLTjiKg1Wu7E96yU
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fiVjXspTC+it6lcBUkjP2Oto3/hNkUH99pmvJs7QBmN9c3U0HLGyIMxwSVGXnQKf39T6r2RRCE
 uK/WCtExn2HTB9t8rfyS7m/Ni86TSMH1FuNjbj7tOoEtbTGA+31YIJ03CknguXtMN5PL5NK4JU
 TgvAxOQudR3x103kWk4oaDiTygLToZ+aWzxBIduxlGYiqzqRdHhZNr7ToNnPVPrtxNb92BbRYk
 mlZvbtRSwZN00ZwhTg4Z5zY1eYfRfXYwbP9svncVLjSuWo+Fh4CHv+l2BXCSGqfkIA42T35Mjn
 UdU=
X-SBRS: 5.1
X-MesageID: 45278750
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:pjRvOqA++2VDlGnlHemL55DYdb4zR+YMi2TDtnoBKiC9Hfb0qy
 nDppsmPHzP6Ar5OktLpTnoAsDpfZq7z/BICOEqVotKNzOLhILHFuBfxLqn7gSlPhbT2YdmpM
 VdWpk7JdHrD2FAq4LQ/Am8Hr8bsb262ZHtqOvFzU5Xa0VPZ7t75wl0MQqVe3cGITV7OQ==
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="45278750"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/cpuid: Fix HLE and RTM handling (again)
Date: Mon, 7 Jun 2021 13:41:16 +0100
Message-ID: <20210607124116.24250-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

For reasons which are my fault, but I don't recall why, the
FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array,
not the two relevant bits.

HLE and RTM were recently added to the list of special features, causing them
to be always set in guest view, irrespective of the toolstack's choice on the
matter.

Rewrite the logic to refer to the features specifically, rather than relying
on the contents of the special_features[] array.

Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reported-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpuid.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index f3c8950aa3..958caf35da 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -672,9 +672,11 @@ void recalculate_cpuid_policy(struct domain *d)
     sanitise_featureset(fs);
 
     /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
-    fs[FEATURESET_7b0] &= ~special_features[FEATURESET_7b0];
+    fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
     fs[FEATURESET_7b0] |= (host_cpuid_policy.feat._7b0 &
-                           special_features[FEATURESET_7b0]);
+                           (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
 
     cpuid_featureset_to_policy(fs, p);
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137912.255428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEZl-0008Sr-SG; Mon, 07 Jun 2021 12:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137912.255428; Mon, 07 Jun 2021 12:41:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEZl-0008Sk-P8; Mon, 07 Jun 2021 12:41:53 +0000
Received: by outflank-mailman (input) for mailman id 137912;
 Mon, 07 Jun 2021 12:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Idvh=LB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqEZk-0008S7-Dj
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 12:41:52 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9eec02b0-0cd6-421a-9f54-20416ddf6819;
 Mon, 07 Jun 2021 12:41:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eec02b0-0cd6-421a-9f54-20416ddf6819
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623069710;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=EUwbdajHIb7nM1V7Gjn97DWAT7BmqdvqQvlEjPG/Z2Y=;
  b=ZKP7x2uCOoyYTnlcJQ5pMu5u+vsPeSa+4nQGwNdacgMP5hVmCuU2Ip/3
   Y9HJVFYPG82NTPceCDCe2P6px/+kDm7pzbaucYt0Y8kuw1CRH4ShvEck0
   BWRHVEGw9wHOOFi5dKAfcyZ0LWPogmyFqw5iTesBMmowmuI9R/kX2JSP4
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 45600744
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="45600744"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/cpuid: Drop special_features[]
Date: Mon, 7 Jun 2021 13:41:41 +0100
Message-ID: <20210607124141.24767-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

While the ! annotation is useful to indicate that something special is
happening, an array of the annotated bits is not.  Drop the array, to prevent
further mistakes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpuid.c        | 2 --
 xen/include/asm-x86/cpuid.h | 1 -
 xen/tools/gen-cpuid.py      | 3 ---
 3 files changed, 6 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 958caf35da..2079a30ae4 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -14,7 +14,6 @@
 #include <asm/xstate.h>
 
 const uint32_t known_features[] = INIT_KNOWN_FEATURES;
-const uint32_t special_features[] = INIT_SPECIAL_FEATURES;
 
 static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
 static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
@@ -1132,7 +1131,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
 static void __init __maybe_unused build_assertions(void)
 {
     BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(special_features) != FSCAPINTS);
     BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
     BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
     BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
diff --git a/xen/include/asm-x86/cpuid.h b/xen/include/asm-x86/cpuid.h
index 7baf6c9628..46904061d0 100644
--- a/xen/include/asm-x86/cpuid.h
+++ b/xen/include/asm-x86/cpuid.h
@@ -14,7 +14,6 @@
 #include <public/sysctl.h>
 
 extern const uint32_t known_features[FSCAPINTS];
-extern const uint32_t special_features[FSCAPINTS];
 
 void init_guest_cpuid(void);
 
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index b953648b65..c6b5056a8d 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -362,8 +362,6 @@ def write_results(state):
 
 #define INIT_KNOWN_FEATURES { \\\n%s\n}
 
-#define INIT_SPECIAL_FEATURES { \\\n%s\n}
-
 #define INIT_PV_DEF_FEATURES { \\\n%s\n}
 
 #define INIT_PV_MAX_FEATURES { \\\n%s\n}
@@ -384,7 +382,6 @@ def write_results(state):
 """ % (state.nr_entries,
        next(featureset_to_uint32s(state.common_1d, 1)),
        format_uint32s(state, state.names.keys(), 4),
-       format_uint32s(state, state.raw['!'], 4),
        format_uint32s(state, state.pv_def, 4),
        format_uint32s(state, state.pv_max, 4),
        format_uint32s(state, state.hvm_shadow_def, 4),
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:52:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137925.255438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEkN-0001fq-Tm; Mon, 07 Jun 2021 12:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137925.255438; Mon, 07 Jun 2021 12:52:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEkN-0001fj-Qk; Mon, 07 Jun 2021 12:52:51 +0000
Received: by outflank-mailman (input) for mailman id 137925;
 Mon, 07 Jun 2021 12:52:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqEkN-0001fd-BM
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 12:52:51 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ac4d880-06a3-424e-a5ca-049b286b7c9e;
 Mon, 07 Jun 2021 12:52:50 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-11-Mb9h-BGbOYyIXg3_u2kmRw-1; Mon, 07 Jun 2021 14:52:48 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.27; Mon, 7 Jun
 2021 12:52:46 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 12:52:46 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1PR01CA0003.eurprd01.prod.exchangelabs.com (2603:10a6:102::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 12:52:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ac4d880-06a3-424e-a5ca-049b286b7c9e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623070369;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ADYzQMaVerczQu8ZmYTGHB9lt1qZzrDIOvpnewkdDC0=;
	b=RjCXorP8Kyi12yT46OEBzoXmASF5n0lFjy/3k8vemNSU1a2JqD6V/N7jPeKWqU/7rm2PxR
	eaFwUwnqwfSZUJ0KSortg5f8kNkVZ5YhfKPZq4qL73S/6ncO7DxpEH6VtMrmYVFwUwnl7r
	slP5f9VjatQmXK+WD3g0FXBUkfAYSyk=
X-MC-Unique: Mb9h-BGbOYyIXg3_u2kmRw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/cpuid: Fix HLE and RTM handling (again)
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210607124116.24250-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3c975a90-5222-5a20-e694-d107265ec8d2@suse.com>
Date: Mon, 7 Jun 2021 14:52:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607124116.24250-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1PR01CA0003.eurprd01.prod.exchangelabs.com
 (2603:10a6:102::16) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dfb7fb28-295f-4793-ea36-08d929b3260c
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dfb7fb28-295f-4793-ea36-08d929b3260c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 12:52:46.4650
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 51OlJ75mymWWGaocg081Rb7WXGB0DhpuxTnsMZUUWenlWMp9v10Q9IDLaZ+zYcvd0TEyA3KgB71Mxj4tetCUhA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

On 07.06.2021 14:41, Andrew Cooper wrote:
> For reasons which are my fault, but I don't recall why, the
> FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array,
> not the two relevant bits.
>
> HLE and RTM were recently added to the list of special features, causing them
> to be always set in guest view, irrespective of the toolstacks choice on the
> matter.
>
> Rewrite the logic to refer to the features specifically, rather than relying
> on the contents of the special_features[] array.
>
> Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reported-by: Edwin Török <edvin.torok@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:56:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137933.255450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEnx-0002PK-HP; Mon, 07 Jun 2021 12:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137933.255450; Mon, 07 Jun 2021 12:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEnx-0002PD-E7; Mon, 07 Jun 2021 12:56:33 +0000
Received: by outflank-mailman (input) for mailman id 137933;
 Mon, 07 Jun 2021 12:56:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqEnw-0002P3-1Z; Mon, 07 Jun 2021 12:56:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqEnv-0008VQ-PQ; Mon, 07 Jun 2021 12:56:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqEnv-0000GF-Gd; Mon, 07 Jun 2021 12:56:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqEnv-0006Ay-G7; Mon, 07 Jun 2021 12:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CY//EfQbvF+z8c0LIIQcgZijZPQyznxn8zMEDhOG5Ws=; b=29OHvWTrfY+PtX4GGBUxqcMuqm
	kkldLSvhGL4416k6NHYH2hWVpzmCgY1zBnOzCYvjce/L5hPLJQcnfNl3ghFW2Po3JG3/XzBF6+0cr
	2omWoBsF9n/mBzDixJpdrjY/V7fH/2clbxqNr72UEGS9hVe4dXx6ruxKgVmXldnxCU28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162474-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162474: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6f398e533f5e259b4f937f4aa9de970f7201d166
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 12:56:31 +0000

flight 162474 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162474/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl          20 guest-localmigrate/x10     fail pass in 162454

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6f398e533f5e259b4f937f4aa9de970f7201d166
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  291 days
Failing since        152659  2020-08-21 14:07:39 Z  289 days  537 attempts
Testing same since   162409  2021-06-05 17:01:49 Z    1 days    4 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 169498 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 12:56:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 12:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137936.255464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEo5-0002is-So; Mon, 07 Jun 2021 12:56:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137936.255464; Mon, 07 Jun 2021 12:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEo5-0002ij-PD; Mon, 07 Jun 2021 12:56:41 +0000
Received: by outflank-mailman (input) for mailman id 137936;
 Mon, 07 Jun 2021 12:56:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqEo3-0002i8-TX
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 12:56:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5789699c-f74d-43d0-ba0c-7f0121fea562;
 Mon, 07 Jun 2021 12:56:39 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2169.outbound.protection.outlook.com [104.47.17.169])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-aPV173zzNL-UNcya4oWrzA-1; Mon, 07 Jun 2021 14:56:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Mon, 7 Jun
 2021 12:56:35 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 12:56:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0211.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.15 via Frontend Transport; Mon, 7 Jun 2021 12:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5789699c-f74d-43d0-ba0c-7f0121fea562
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623070598;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LrafuEOJAiK/W6czJ4sYhZ31pWhQ97jOBAbw6w/awzg=;
	b=RMTLsEquTPv11BZDXh8sFCyg1TTn0WEkQCTJpIF/viExSc4MDbwv02k+6uEVBweyrt6v1z
	8Q9vZW7ZasjFpxOQrJnfqZROZ50bOimQJtdZDuz8ah23dli5SRCo5ie1E7ard9zH0P0KVr
	Xpfv6RWgoceAd3A+Cr4CV3GRSTDXoWE=
X-MC-Unique: aPV173zzNL-UNcya4oWrzA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/cpuid: Drop special_features[]
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210607124141.24767-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <407475db-08e6-0056-fe71-ca9a8b04d7cc@suse.com>
Date: Mon, 7 Jun 2021 14:56:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607124141.24767-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0211.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 11bd4efb-5684-4760-f837-08d929b3ae48
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3390:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3390A04797ABBBC9D7FA6A26B3389@VI1PR0402MB3390.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1443;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 11bd4efb-5684-4760-f837-08d929b3ae48
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 12:56:35.0507
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lHmLvsEiYWB3ElW/rgZPd82myD9auYhjwsGNkyvSfWqcyvIh+fYd/NeAB//UDXJlqJDRYIIVpsBo0M7mIpz09Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3390

On 07.06.2021 14:41, Andrew Cooper wrote:
> While the ! annotation is useful to indicate that something special is
> happening, an array of bits is not.  Drop it, to prevent mistakes.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 13:00:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 13:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137949.255474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqErT-0004Pj-D7; Mon, 07 Jun 2021 13:00:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137949.255474; Mon, 07 Jun 2021 13:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqErT-0004Pc-A7; Mon, 07 Jun 2021 13:00:11 +0000
Received: by outflank-mailman (input) for mailman id 137949;
 Mon, 07 Jun 2021 13:00:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqErR-0004PW-Ru
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 13:00:09 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7e6ad82-02b1-432a-9e63-a77c1d5abfdc;
 Mon, 07 Jun 2021 13:00:08 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E009321A8A;
 Mon,  7 Jun 2021 13:00:07 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id B1BE7118DD;
 Mon,  7 Jun 2021 13:00:07 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 6qUzKlcYvmBAEAAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 13:00:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7e6ad82-02b1-432a-9e63-a77c1d5abfdc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623070807; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=bjaYWxACxRGpqsDAVvv6IZjzEWNKF4fA46tBnzk+epw=;
	b=Msb4pOfV22zWWq5RK3WRR7MzSYe2mV9mDNliSiTu5O5TSLuw+mgnAOFa+PQs7+Xps3uiVB
	m2+H2k7+eG0aPS/vPHOdYQrBIT3UOHJmlr7h3xTEhRK24MzA/sfEdmc+7mR+AEwg7z3VMg
	BTqb06g6dC+cinEmGvNDA5TooFiAsWI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
Date: Mon,  7 Jun 2021 15:00:05 +0200
Message-Id: <20210607130005.5475-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Since 32-bit PV guests are security-supported only when running under the
PV shim, the hypervisor is no longer configured to support such domains
by default when not built as the PV shim.

Unfortunately, libxenguest then fails to save or restore a PV domain due
to this restriction, as it tries to obtain the compat MFN list even for
64-bit guests.

Fix that by obtaining the compat MFN list only for 32-bit PV guests.

Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- set compat MFN to "invalid" instead of not setting it at all (Jan Beulich)
- don't set compat MFN for 64-bit guests even if running as 32-bit
  domain (Andrew Cooper)
---
 tools/libs/guest/xg_sr_common.h        |  2 +-
 tools/libs/guest/xg_sr_common_x86_pv.c | 37 +++++++++++++++-----------
 2 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index cc3ad1c394..e2994e18ac 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -325,7 +325,7 @@ struct xc_sr_context
                 xen_pfn_t max_mfn;
                 /* Read-only machine to phys map */
                 xen_pfn_t *m2p;
-                /* first mfn of the compat m2p (Only needed for 32bit PV guests) */
+                /* first mfn of the compat m2p (Only set for 32bit PV guests) */
                 xen_pfn_t compat_m2p_mfn0;
                 /* Number of m2p frames mapped */
                 unsigned long nr_m2p_frames;
diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/guest/xg_sr_common_x86_pv.c
index cd33406aab..f339ea4a79 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.c
+++ b/tools/libs/guest/xg_sr_common_x86_pv.c
@@ -149,27 +149,32 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
 
     ctx->x86.pv.nr_m2p_frames = (M2P_CHUNK_SIZE >> PAGE_SHIFT) * m2p_chunks;
 
+    if ( ctx->x86.pv.levels == 3 )
+    {
 #ifdef __i386__
-    /* 32 bit toolstacks automatically get the compat m2p */
-    ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
+        /* 32 bit toolstacks automatically get the compat m2p */
+        ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
 #else
-    /* 64 bit toolstacks need to ask Xen specially for it */
-    {
-        struct xen_machphys_mfn_list xmml = {
-            .max_extents = 1,
-            .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
-        };
-
-        rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
-                          &xmml, sizeof(xmml));
-        if ( rc || xmml.nr_extents != 1 )
+        /* 64 bit toolstacks need to ask Xen specially for it */
         {
-            PERROR("Failed to get compat mfn list from Xen");
-            rc = -1;
-            goto err;
+            struct xen_machphys_mfn_list xmml = {
+                .max_extents = 1,
+                .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
+            };
+
+            rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
+                              &xmml, sizeof(xmml));
+            if ( rc || xmml.nr_extents != 1 )
+            {
+                PERROR("Failed to get compat mfn list from Xen");
+                rc = -1;
+                goto err;
+            }
         }
-    }
 #endif
+    }
+    else
+        ctx->x86.pv.compat_m2p_mfn0 = INVALID_MFN;
 
     /* All Done */
     rc = 0;
-- 
2.26.2
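For readers skimming the archive, the selection logic the patch introduces can be sketched in isolation. This is a purely illustrative sketch, not the actual libxenguest code: `pick_compat_m2p_mfn0`, its parameters, and the simplified `INVALID_MFN` definition are stand-ins invented here for the corresponding fields in `struct xc_sr_context`.

```c
#include <stdint.h>

/* Simplified stand-ins; the real code uses Xen's xen_pfn_t and INVALID_MFN. */
typedef uint64_t xen_pfn_t;
#define INVALID_MFN (~(xen_pfn_t)0)

/*
 * Sketch of the fixed behaviour: only a 32-bit PV guest (3 paging levels)
 * keeps the compat-M2P MFN obtained from Xen; for a 64-bit guest the field
 * is set to "invalid" instead, so Xen is never asked for a compat MFN list
 * it may no longer be configured to provide.
 */
xen_pfn_t pick_compat_m2p_mfn0(unsigned int pt_levels, xen_pfn_t mfn_from_xen)
{
    if (pt_levels == 3)          /* 32-bit PV guest */
        return mfn_from_xen;
    return INVALID_MFN;          /* 64-bit PV guest: no compat m2p */
}
```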



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 13:05:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 13:05:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137958.255486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEwC-00059p-5Y; Mon, 07 Jun 2021 13:05:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137958.255486; Mon, 07 Jun 2021 13:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqEwC-00059i-22; Mon, 07 Jun 2021 13:05:04 +0000
Received: by outflank-mailman (input) for mailman id 137958;
 Mon, 07 Jun 2021 13:05:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cRKJ=LB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqEwB-00059c-9j
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 13:05:03 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2d6d4ee-addf-4745-938a-61118efa6ee1;
 Mon, 07 Jun 2021 13:05:02 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2057.outbound.protection.outlook.com [104.47.13.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-36-FVgcdd6zOveW_GnFEucR8g-1; Mon, 07 Jun 2021 15:05:00 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7024.eurprd04.prod.outlook.com (2603:10a6:800:124::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.21; Mon, 7 Jun
 2021 13:04:58 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 13:04:58 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0059.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:49::7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.9 via Frontend Transport; Mon, 7 Jun 2021 13:04:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2d6d4ee-addf-4745-938a-61118efa6ee1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623071101;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D4N64f9gsaOrT+9l+Ln3X7OWG+Nl2vmf2fgCMHzvfmY=;
	b=DeNTnvxgTlPJp9kdU5MfYE1rcCfjMn+kEmEpSgQ8pCJ3ie/K1KnomDuQlh4ADhS/BpsyCg
	zaQEunZqih3U4XMG/lcosY2S9j5AmtAUkn/7Ivn9Uj/4y7KYOwy92ETsFMqWvZOeo6AAwJ
	bhe+fu1VMoRdvGgRZuIsi0loGOv5O8Q=
X-MC-Unique: FVgcdd6zOveW_GnFEucR8g-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
To: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210607130005.5475-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <07fad6f2-3ace-044e-72af-a470f6864c00@suse.com>
Date: Mon, 7 Jun 2021 15:04:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607130005.5475-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0059.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::7) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3e47e1f8-d3e5-40e0-5e8c-08d929b4da2b
X-MS-TrafficTypeDiagnostic: VI1PR04MB7024:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7024D203846EA8C8BC748AB0B3389@VI1PR04MB7024.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3e47e1f8-d3e5-40e0-5e8c-08d929b4da2b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 13:04:58.3172
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WWS0Pi1oWeWQ2MJDyFQKSl6p6KbBG8rBKIFutZmDG+I96jInRc4gHKmQkfHbxUv2SZM3DWkbh2/ejbiP8JIwpQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7024

On 07.06.2021 15:00, Juergen Gross wrote:
> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
> @@ -149,27 +149,32 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>  
>      ctx->x86.pv.nr_m2p_frames = (M2P_CHUNK_SIZE >> PAGE_SHIFT) * m2p_chunks;
>  
> +    if ( ctx->x86.pv.levels == 3 )
> +    {

With this opening brace you no longer need ...

>  #ifdef __i386__
> -    /* 32 bit toolstacks automatically get the compat m2p */
> -    ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
> +        /* 32 bit toolstacks automatically get the compat m2p */
> +        ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>  #else
> -    /* 64 bit toolstacks need to ask Xen specially for it */
> -    {

... this one, and hence you could avoid re-indenting ...

> -        struct xen_machphys_mfn_list xmml = {
> -            .max_extents = 1,
> -            .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
> -        };
> -
> -        rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
> -                          &xmml, sizeof(xmml));
> -        if ( rc || xmml.nr_extents != 1 )
> +        /* 64 bit toolstacks need to ask Xen specially for it */
>          {
> -            PERROR("Failed to get compat mfn list from Xen");
> -            rc = -1;
> -            goto err;
> +            struct xen_machphys_mfn_list xmml = {
> +                .max_extents = 1,
> +                .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
> +            };
> +
> +            rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
> +                              &xmml, sizeof(xmml));
> +            if ( rc || xmml.nr_extents != 1 )
> +            {
> +                PERROR("Failed to get compat mfn list from Xen");
> +                rc = -1;
> +                goto err;
> +            }

... all of this. Preferably with such reduced code churn,
still/again:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 13:16:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 13:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137965.255497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqF7V-0006cJ-AZ; Mon, 07 Jun 2021 13:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137965.255497; Mon, 07 Jun 2021 13:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqF7V-0006cC-6s; Mon, 07 Jun 2021 13:16:45 +0000
Received: by outflank-mailman (input) for mailman id 137965;
 Mon, 07 Jun 2021 13:16:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a/zi=LB=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1lqF7T-0006bn-G4
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 13:16:43 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fee34c0c-2eab-4bd5-9c98-248a601a231d;
 Mon, 07 Jun 2021 13:16:42 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 49139143D;
 Mon,  7 Jun 2021 06:16:42 -0700 (PDT)
Received: from [10.57.3.20] (unknown [10.57.3.20])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 358643F694;
 Mon,  7 Jun 2021 06:16:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fee34c0c-2eab-4bd5-9c98-248a601a231d
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
 <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
 <b8a14892-0290-3aff-c4b5-6d363b884db7@xen.org>
 <f65babea-bd4f-f1fa-07db-78d83727b155@arm.com>
 <c2d72d18-8266-2866-565a-f91ec4e22d84@suse.com>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <7a50d86c-0637-7c75-6e04-06bdbdd5b9d9@arm.com>
Date: Mon, 7 Jun 2021 15:16:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c2d72d18-8266-2866-565a-f91ec4e22d84@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 21.05.2021 09:07, Jan Beulich wrote:
> On 21.05.2021 08:33, Michal Orzel wrote:
>> On 17.05.2021 18:03, Julien Grall wrote:
>>> On 17/05/2021 08:01, Jan Beulich wrote:
>>>> On 12.05.2021 19:59, Julien Grall wrote:
>>>>> Hi,
>>>>>
>>>>> On 11/05/2021 07:37, Michal Orzel wrote:
>>>>>> On 05.05.2021 10:00, Jan Beulich wrote:
>>>>>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>>>>>> --- a/xen/include/public/arch-arm.h
>>>>>>>> +++ b/xen/include/public/arch-arm.h
>>>>>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>>>>>           /* Return address and mode */
>>>>>>>>        __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>>>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>>>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>>>>>           union {
>>>>>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>>>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>>>>>            uint32_t spsr_svc;       /* AArch32 */
>>>>>>>>        };
>>>>>>>
>>>>>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>>>>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>>>>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>>>>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>>>>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
>>>>>
>>>>> I think it is just an overlooked when reviewing the following commit:
>>>>>
>>>>> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
>>>>> Author: Roger Pau Monné <roger.pau@citrix.com>
>>>>> Date:   Tue Dec 15 14:12:32 2015 +0100
>>>>>
>>>>>       x86: allow HVM guests to use hypercalls to bring up vCPUs
>>>>>
>>>>>       Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>>>>>       VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
>>>>>       guests.
>>>>>
>>>>>       This patch introduces a new structure (vcpu_hvm_context) that
>>>>> should be used
>>>>>       in conjuction with the VCPUOP_initialise hypercall in order to
>>>>> initialize
>>>>>       vCPUs for HVM guests.
>>>>>
>>>>>       Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>>       Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>       Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>       Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>>>
>>>>> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
>>>>> and the tools. Interestingly vcpu_guest_core_regs is but it should only
>>>>> be used within vcpu_guest_context.
>>>>>
>>>>> So as this is not used by stable ABI, it is fine to break it.
>>>>>
>>>>>>>
>>>>>> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass and
>>>>>> arch_initialise_vcpu being called in case of VCPUOP_initialise
>>>>>> has no sense as VCPUOP_initialise is not supported on arm.
>>>>>> It makes sense that it should return -EOPNOTSUPP.
>>>>>> However do_arm_vcpu_op will not accept VCPUOP_initialise and will return
>>>>>> -EINVAL. So arch_initialise_vcpu for arm will not be called.
>>>>>> Do you think that changing this behaviour so that arch_initialise_vcpu returns
>>>>>> -EOPNOTSUPP should be part of this patch?
>>>>>
>>>>> I think this change is unrelated. So it should be handled in a follow-up
>>>>> patch.
>>>>
>>>> My only difference in viewing this is that I'd say the adjustment
>>>> would better be a prereq patch to this one, such that the one here
>>>> ends up being more obviously correct.
>>>
>>> The function is already not reachable so I felt it was unfair to require the clean-up for merging this code.
>>>
>>>> Also, if the function is
>>>> indeed not meant to be reachable, besides making it return
>>>> -EOPNOTSUPP (or alike) it should probably also have
>>>> ASSERT_UNREACHABLE() added.
>>>
>>> +1 on the idea.
>>>
>>> Cheers,
>>>
>> FWICS, all the discussion is about creating the next patch fixing the VCPUOP_initialise function.
>> Is there anything left to do in this patch or are you going to ack it?
> 
> Afaic I'd find it quite helpful if that other patch was a prereq to this
> one, making more obvious that the change here is not going to break
> anything. But it's Arm stuff, so Arm folks get the final say anyway.
This change is not going to break anything, as the new patch will mainly add ASSERT_UNREACHABLE() to VCPUOP_initialise handling, making it a clean-up patch.
Also, the problem was not introduced by this patch, so I think it can be merged.
> 
> Jan
> 

So what is the final say from Arm folks :) ?



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 13:32:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 13:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137974.255514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqFML-0000Oi-MQ; Mon, 07 Jun 2021 13:32:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137974.255514; Mon, 07 Jun 2021 13:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqFML-0000Ob-It; Mon, 07 Jun 2021 13:32:05 +0000
Received: by outflank-mailman (input) for mailman id 137974;
 Mon, 07 Jun 2021 13:32:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqFMK-0000OV-00
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 13:32:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqFM4-0000he-54; Mon, 07 Jun 2021 13:31:48 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqFM3-0001L6-Ua; Mon, 07 Jun 2021 13:31:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Sb5sMBoS0YKi0d11Me0ggEk2+k+yrY92QKm0UHWWpQo=; b=3uhf/I33YQMYlmWLdQYuq9XnOt
	tuyuG2WiBT5ZumOC5y9x52zQK08JqnUAX5QadeetuykCft9c5SSp1S7ufjkSsaM88twe2apkCGAqo
	ZC5FRgO9K4X13iExL9khEwypC1/9EJBxtXCMxoLMd1ex6HDjGCrykwz3gyxmlfMjnRsQ=;
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d3fa0269-3779-c893-8adb-4db0e22f28c1@xen.org>
Date: Mon, 7 Jun 2021 14:31:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-11-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 05/05/2021 08:43, Michal Orzel wrote:
> AArch64 registers are 64bit whereas AArch32 registers
> are 32bit or 64bit. MSR/MRS are expecting 64bit values thus
> we should get rid of helpers READ/WRITE_SYSREG32
> in favour of using READ/WRITE_SYSREG.
> We should also use register_t type when reading sysregs
> which can correspond to uint64_t or uint32_t.
> Even though many AArch64 registers have upper 32bit reserved
> it does not mean that they can't be widen in the future.
> 
> Modify type of hsr, cpsr, spsr_el1 to uint64_t.
> Previously we relied on the padding after SPSR_EL1.
> As we removed the padding, modify the union to be 64bit so we don't corrupt SPSR_FIQ.
> No need to modify the assembly code becuase the accesses were based on 64bit

s/becuase/because/

> registers as there was a 32bit padding after SPSR_EL1.
> 
> Remove 32bit padding in cpu_user_regs before spsr_fiq
> as it is no longer needed due to upper union being 64bit now.
> Add 64bit padding in cpu_user_regs before spsr_el1
> because offset of spsr_el1 must be a multiple of 8.
> 
> Change type of cpsr to uint64_t in the public outside interface
> "public/arch-arm.h" to allow ABI compatibility between 32bit and 64bit.
> Increment XEN_DOMCTL_INTERFACE_VERSION.
> 
> Change type of cpsr to uint64_t in the public outside interface
> "public/vm_event.h" to allow ABI compatibility between 32bit and 64bit.
[...]

> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index e7384381cc..c8f9773566 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -546,7 +546,7 @@ void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len)
>           PSR_IRQ_MASK | PSR_DBG_MASK;
>       regs->pc = handler;
>   
> -    WRITE_SYSREG32(esr.bits, ESR_EL1);
> +    WRITE_SYSREG(esr.bits, ESR_EL1);
>   }
>   
>   /* Inject an abort exception into a 64 bit guest */
> @@ -580,7 +580,7 @@ static void inject_abt64_exception(struct cpu_user_regs *regs,
>       regs->pc = handler;
>   
>       WRITE_SYSREG(addr, FAR_EL1);
> -    WRITE_SYSREG32(esr.bits, ESR_EL1);
> +    WRITE_SYSREG(esr.bits, ESR_EL1);
>   }
>   
>   static void inject_dabt64_exception(struct cpu_user_regs *regs,
> @@ -717,7 +717,7 @@ struct reg_ctxt {
>       uint64_t vttbr_el2;
>   };
>   
> -static const char *mode_string(uint32_t cpsr)
> +static const char *mode_string(register_t cpsr)
>   {
>       uint32_t mode;
>       static const char *mode_strings[] = {
> @@ -756,14 +756,16 @@ static void show_registers_32(const struct cpu_user_regs *regs,
>   #ifdef CONFIG_ARM_64
>       BUG_ON( ! (regs->cpsr & PSR_MODE_BIT) );
>       printk("PC:     %08"PRIx32"\n", regs->pc32);
> +    printk("CPSR:   %016"PRIx64" MODE:%s\n", regs->cpsr,
> +           mode_string(regs->cpsr));
Why do you now need to duplicate this line? Can't we use PRIregister, as
you did everywhere else a register is printed?

>   #else
>       printk("PC:     %08"PRIx32, regs->pc);
>       if ( !guest_mode )
>           printk(" %pS", _p(regs->pc));
>       printk("\n");
> -#endif
>       printk("CPSR:   %08"PRIx32" MODE:%s\n", regs->cpsr,
>              mode_string(regs->cpsr));
> +#endif

[...]

> diff --git a/xen/include/asm-arm/arm64/processor.h b/xen/include/asm-arm/arm64/processor.h
> index 81dfc5e615..0e86079cbb 100644
> --- a/xen/include/asm-arm/arm64/processor.h
> +++ b/xen/include/asm-arm/arm64/processor.h
> @@ -63,18 +63,19 @@ struct cpu_user_regs
>   
>       /* Return address and mode */
>       __DECL_REG(pc,           pc32);             /* ELR_EL2 */
> -    uint32_t cpsr;                              /* SPSR_EL2 */
> -    uint32_t hsr;                               /* ESR_EL2 */
> +    uint64_t cpsr;                              /* SPSR_EL2 */
> +    uint64_t hsr;                               /* ESR_EL2 */
> +
> +    /* Offset of spsr_el1 must be a multiple of 8 */

I am guessing you are saying it should be 8-byte aligned, right? If so, 
the field before is a 64-bit value, therefore the offset should already 
be a multiple of 8. Did I miss anything?

> +    uint64_t pad0;
>   
>       /* Outer guest frame only from here on... */
>   
>       union {
> -        uint32_t spsr_el1;       /* AArch64 */
> +        uint64_t spsr_el1;       /* AArch64 */
>           uint32_t spsr_svc;       /* AArch32 */
>       };
>   
> -    uint32_t pad1; /* Doubleword-align the user half of the frame */
> -
>       /* AArch32 guests only */
>       uint32_t spsr_fiq, spsr_irq, spsr_und, spsr_abt;
>   
> diff --git a/xen/include/asm-arm/hsr.h b/xen/include/asm-arm/hsr.h
> index 29d4531f40..9b91b28c48 100644
> --- a/xen/include/asm-arm/hsr.h
> +++ b/xen/include/asm-arm/hsr.h
> @@ -16,7 +16,7 @@ enum dabt_size {
>   };
>   
>   union hsr {
> -    uint32_t bits;
> +    register_t bits;
>       struct {
>           unsigned long iss:25;  /* Instruction Specific Syndrome */
>           unsigned long len:1;   /* Instruction length */
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 713fd65317..64a2ca30da 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>   
>       /* Return address and mode */
>       __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
> -    uint32_t cpsr;                              /* SPSR_EL2 */
> +    uint64_t cpsr;                              /* SPSR_EL2 */
>   
>       union {
> -        uint32_t spsr_el1;       /* AArch64 */
> +        uint64_t spsr_el1;       /* AArch64 */
>           uint32_t spsr_svc;       /* AArch32 */
>       };
>   
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 4dbf107785..d576bfabd6 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -38,7 +38,7 @@
>   #include "hvm/save.h"
>   #include "memory.h"
>   
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
>   
>   /*
>    * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
> diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
> index 36135ba4f1..bb003d21d0 100644
> --- a/xen/include/public/vm_event.h
> +++ b/xen/include/public/vm_event.h
> @@ -266,8 +266,7 @@ struct vm_event_regs_arm {
>       uint64_t ttbr1;
>       uint64_t ttbcr;
>       uint64_t pc;
> -    uint32_t cpsr;
> -    uint32_t _pad;
> +    uint64_t cpsr;
>   };
>   
>   /*
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 14:09:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 14:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137984.255535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqFwh-0003o1-OK; Mon, 07 Jun 2021 14:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137984.255535; Mon, 07 Jun 2021 14:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqFwh-0003nu-KZ; Mon, 07 Jun 2021 14:09:39 +0000
Received: by outflank-mailman (input) for mailman id 137984;
 Mon, 07 Jun 2021 14:09:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Idvh=LB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqFwg-0003no-DM
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:09:38 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea83ec53-30c4-4d5a-a7f4-897c878f5954;
 Mon, 07 Jun 2021 14:09:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea83ec53-30c4-4d5a-a7f4-897c878f5954
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623074976;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=pRcrHc4fsYkSgM57XNquEipXWbrxGeP1ZX49/+dIF5g=;
  b=A7F1ZA8uPWCF+Rt+J/jSB9Zalh1zmKhSOV9z4MwBqSFsGVq43NjXUcaf
   E/TFzy8bBch6PShRCDNiZpuH+8mMtCItDb+b6KusZ6K7nA9hDWBgaGxMj
   Y52g6buucc44Wr8m+c+coUthPrhlSnCFKAxHwIo1si1ROHDx2Sk2xhv2h
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: pUE7hOyVyVLIJHXPinwnv4Gzh78Lk6HMRf8Fb7HkAaOMjJx9urjXj0qYx5CGl/2X0QbNTBNhZR
 xgaaNbDUAXSLjJo3kPQnY5vzLU+3ma4sd6GHmEocgiMkAtr1jL3PZ6rsV47Yhwe3q4mXQ5eSGz
 4zG8jHgVHDS++khg2uDxywZWq/6cAGQl8Fx9PVY+gkn09jg2RrMXRxBh+4LrinoLyk7VSVy0mi
 EBsKoktXAczTiNNXqPrJ5o4DKWNBhyfuQA/EEycStojM3igZ4/XV3gonFJgnhJ93oVmlCvhlOw
 dfU=
X-SBRS: 5.1
X-MesageID: 47104490
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:lrMzkKB7sL6YijvlHenP55DYdb4zR+YMi2TDtnoQdfUxSKelfq
 +V8cjzuSWftN9zYhAdcK67V5VoKEm0naKdirN8AV7NZmfbhFc=
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="47104490"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EwTpZRYNNDuunSOcS+zMfdwHpud9TBwG333/RRHULCz9Q8p8++X7PDv5y2bxdCH7SRZmIoyaom/OtkRCqkzwQ1SEp67cnJLNdZm/oKu/Xi4yddeEzBx2gXmydcUaHGkUElzRpZEwL0azRs2p8jEFB0D2ICfLm7spCmKKQNH6+EnB1j5WY/UNP0kujGTGmhFNwiDnHpNtIw73EM3epKvavhhV+92b423l8kzGjO4R7OU/3mKwzXAM6TOPfDPGj2+6n8xdZWAqm+hIINJw+An5KvyLXNJvHm2k2Y0VFiudxQdNGGN7wa2ONgWK+m7QO9YRTWLKwirC1paZRPs0Cd5b5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nA6whvAbHknALXPo49Zej0ng4Nmw1TYfNVp8NHz3Ibs=;
 b=QerGqXrOQwKsp+ir/Ma+vgXJ9Krv6Rwc7eCGj7PV1jl5WuDEFZbkwtpQ+JXdTuPuWTkVEke3DhVqOuq93CGDAvXglLJX7ZghL+jbPHzVG632Z4w8a3GQJ+jwPC5qGBRlbbBMH6yDPEpmtO00R08M+AsB9l3ZzD5LeGGeftWOjXQqxbAGFJNcm9WTn72J5OuyaPN5VRJpPQ/SlXxIJDlErDM6kTHGVnoaqoNYbnejEoaWzO7XBvm/JtPbJrVks5firRcgj/rdEWUdONSmrQPqWGlvq2o7NYYFcrjBGtYVYWTzAOKQh3D8OLbzPADfZTMNq8sdMUHS1SwBAH/x0cGPrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nA6whvAbHknALXPo49Zej0ng4Nmw1TYfNVp8NHz3Ibs=;
 b=MsZjPQV6JME9C4OKE1xX94kd9t6UWWGRPpBkBcQcjv9wpwJuT4dKR/APTtru7ZjKi9MZVudsXRewae9JjNyBFzFbggkau7ZVnudRGfvpHq8GEaM3whxmvtQhVD0cVavxX6OLn0lyatxRcDE2F1G/hUyfOXHBFDwCXOCaQOc7+5s=
Subject: Re: [PATCH v2] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
To: Juergen Gross <jgross@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>
References: <20210607130005.5475-1-jgross@suse.com>
 <07fad6f2-3ace-044e-72af-a470f6864c00@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d9251ebe-67e0-2b9e-3031-202f5f27d3c7@citrix.com>
Date: Mon, 7 Jun 2021 14:59:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <07fad6f2-3ace-044e-72af-a470f6864c00@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0024.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 08a23083-f339-4929-4ebe-08d929bc7323
X-MS-TrafficTypeDiagnostic: BYAPR03MB4117:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4117559314DF1E638FE28458BA389@BYAPR03MB4117.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1284;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 08a23083-f339-4929-4ebe-08d929bc7323
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 13:59:21.5887
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P/1gwH+5mxDN045eJZgeOGv6U5FP48/5RP9Mb0A00URUaQ+v6BzOGVEoJzzJcgrWxRkjROhD8XD+N2yBexVLck5WLRLYD6NivkKSHTEqVFo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4117
X-OriginatorOrg: citrix.com

On 07/06/2021 14:04, Jan Beulich wrote:
> On 07.06.2021 15:00, Juergen Gross wrote:
>> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
>> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
>> @@ -149,27 +149,32 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>>  
>>      ctx->x86.pv.nr_m2p_frames = (M2P_CHUNK_SIZE >> PAGE_SHIFT) * m2p_chunks;
>>  
>> +    if ( ctx->x86.pv.levels == 3 )
>> +    {
> With this opening brace you no longer need ...
>
>>  #ifdef __i386__
>> -    /* 32 bit toolstacks automatically get the compat m2p */
>> -    ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>> +        /* 32 bit toolstacks automatically get the compat m2p */
>> +        ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>>  #else
>> -    /* 64 bit toolstacks need to ask Xen specially for it */
>> -    {
> ... this one, and hence you could avoid re-indenting ...
>
>> -        struct xen_machphys_mfn_list xmml = {
>> -            .max_extents = 1,
>> -            .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
>> -        };
>> -
>> -        rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
>> -                          &xmml, sizeof(xmml));
>> -        if ( rc || xmml.nr_extents != 1 )
>> +        /* 64 bit toolstacks need to ask Xen specially for it */
>>          {
>> -            PERROR("Failed to get compat mfn list from Xen");
>> -            rc = -1;
>> -            goto err;
>> +            struct xen_machphys_mfn_list xmml = {
>> +                .max_extents = 1,
>> +                .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
>> +            };
>> +
>> +            rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
>> +                              &xmml, sizeof(xmml));
>> +            if ( rc || xmml.nr_extents != 1 )
>> +            {
>> +                PERROR("Failed to get compat mfn list from Xen");
>> +                rc = -1;
>> +                goto err;
>> +            }
> ... all of this. Preferably with such reduced code churn,
> still/again:

I agree.  I can fix on commit, if you're happy with that.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
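
For the archives, a sketch of what Jan's reduced-churn suggestion amounts to (illustrative only, not necessarily the exact committed code): the new if-braces take over as the scope for xmml, so the inner bare-brace block can be dropped and the 64-bit code stays at its existing indentation:

```
    if ( ctx->x86.pv.levels == 3 )
    {
#ifdef __i386__
        /* 32 bit toolstacks automatically get the compat m2p */
        ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
#else
        /* 64 bit toolstacks need to ask Xen specially for it */
        struct xen_machphys_mfn_list xmml = {
            .max_extents = 1,
            .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
        };

        rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
                          &xmml, sizeof(xmml));
        if ( rc || xmml.nr_extents != 1 )
        {
            PERROR("Failed to get compat mfn list from Xen");
            rc = -1;
            goto err;
        }
#endif
    }
```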


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 14:25:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 14:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137993.255546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGBs-00064e-8Y; Mon, 07 Jun 2021 14:25:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137993.255546; Mon, 07 Jun 2021 14:25:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGBs-00064X-4h; Mon, 07 Jun 2021 14:25:20 +0000
Received: by outflank-mailman (input) for mailman id 137993;
 Mon, 07 Jun 2021 14:25:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcEO=LB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqGBr-00064R-6A
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:25:19 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4bcc4fc-546f-4bdb-ac99-3a7a4119c700;
 Mon, 07 Jun 2021 14:25:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4C07B21A90;
 Mon,  7 Jun 2021 14:25:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 1D890118DD;
 Mon,  7 Jun 2021 14:25:17 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id DM0XBk0svmCLRAAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 07 Jun 2021 14:25:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4bcc4fc-546f-4bdb-ac99-3a7a4119c700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623075917; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QrS2ZCPVVXB4Gk0dKMr4c0X6qluZr4NnpkXD2Y/Cljw=;
	b=fe1oz8Uh8E6GG33j9kN8O3U5nS+krWQQFQTIIm8HFysXqb3CkYwhBDOr4m2EKdXjSU+ASp
	PHujXktMMrIRR+M8LRCAjZmPtnf6YwJRMBWFR8soFf25on0Md2OViWoeuCwghImz7kA7c5
	4lkbJHlcBWzvVP0u0ey/Jh2Ou1xMFX8=
Subject: Re: [PATCH v2] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>
References: <20210607130005.5475-1-jgross@suse.com>
 <07fad6f2-3ace-044e-72af-a470f6864c00@suse.com>
 <d9251ebe-67e0-2b9e-3031-202f5f27d3c7@citrix.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <4d6f35fa-e3f6-fc06-8bfd-76dc2cb8fd91@suse.com>
Date: Mon, 7 Jun 2021 16:25:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <d9251ebe-67e0-2b9e-3031-202f5f27d3c7@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KM5nSRNTReuDe2qNVDJLH8pB5cskeF6qR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KM5nSRNTReuDe2qNVDJLH8pB5cskeF6qR
Content-Type: multipart/mixed; boundary="awEKbKwt5imb2WTawDh3IXnR0IsoiNF5c";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>
Message-ID: <4d6f35fa-e3f6-fc06-8bfd-76dc2cb8fd91@suse.com>
Subject: Re: [PATCH v2] tools/libs/guest: fix save and restore of pv domains
 after 32-bit de-support
References: <20210607130005.5475-1-jgross@suse.com>
 <07fad6f2-3ace-044e-72af-a470f6864c00@suse.com>
 <d9251ebe-67e0-2b9e-3031-202f5f27d3c7@citrix.com>
In-Reply-To: <d9251ebe-67e0-2b9e-3031-202f5f27d3c7@citrix.com>

--awEKbKwt5imb2WTawDh3IXnR0IsoiNF5c
Content-Type: multipart/mixed;
 boundary="------------F71D68ADF40ECAAAC383D54A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F71D68ADF40ECAAAC383D54A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 07.06.21 15:59, Andrew Cooper wrote:
> On 07/06/2021 14:04, Jan Beulich wrote:
>> On 07.06.2021 15:00, Juergen Gross wrote:
>>> --- a/tools/libs/guest/xg_sr_common_x86_pv.c
>>> +++ b/tools/libs/guest/xg_sr_common_x86_pv.c
>>> @@ -149,27 +149,32 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>>>
>>>       ctx->x86.pv.nr_m2p_frames = (M2P_CHUNK_SIZE >> PAGE_SHIFT) * m2p_chunks;
>>>
>>> +    if ( ctx->x86.pv.levels == 3 )
>>> +    {
>> With this opening brace you no longer need ...
>>
>>>   #ifdef __i386__
>>> -    /* 32 bit toolstacks automatically get the compat m2p */
>>> -    ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>>> +        /* 32 bit toolstacks automatically get the compat m2p */
>>> +        ctx->x86.pv.compat_m2p_mfn0 = entries[0].mfn;
>>>   #else
>>> -    /* 64 bit toolstacks need to ask Xen specially for it */
>>> -    {
>> ... this one, and hence you could avoid re-indenting ...
>>
>>> -        struct xen_machphys_mfn_list xmml = {
>>> -            .max_extents = 1,
>>> -            .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
>>> -        };
>>> -
>>> -        rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
>>> -                          &xmml, sizeof(xmml));
>>> -        if ( rc || xmml.nr_extents != 1 )
>>> +        /* 64 bit toolstacks need to ask Xen specially for it */
>>>           {
>>> -            PERROR("Failed to get compat mfn list from Xen");
>>> -            rc = -1;
>>> -            goto err;
>>> +            struct xen_machphys_mfn_list xmml = {
>>> +                .max_extents = 1,
>>> +                .extent_start = { &ctx->x86.pv.compat_m2p_mfn0 },
>>> +            };
>>> +
>>> +            rc = do_memory_op(xch, XENMEM_machphys_compat_mfn_list,
>>> +                              &xmml, sizeof(xmml));
>>> +            if ( rc || xmml.nr_extents != 1 )
>>> +            {
>>> +                PERROR("Failed to get compat mfn list from Xen");
>>> +                rc = -1;
>>> +                goto err;
>>> +            }
>> ... all of this. Preferably with such reduced code churn,
>> still/again:
>
> I agree.  I can fix on commit, if you're happy with that.

I'm fine with that.

>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Thanks,

Juergen

--------------F71D68ADF40ECAAAC383D54A--

--awEKbKwt5imb2WTawDh3IXnR0IsoiNF5c--


--KM5nSRNTReuDe2qNVDJLH8pB5cskeF6qR--


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 14:26:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 14:26:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.137997.255557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGCu-0006e6-Ia; Mon, 07 Jun 2021 14:26:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 137997.255557; Mon, 07 Jun 2021 14:26:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGCu-0006dz-FS; Mon, 07 Jun 2021 14:26:24 +0000
Received: by outflank-mailman (input) for mailman id 137997;
 Mon, 07 Jun 2021 14:26:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGCs-0006dr-Up
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:26:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGCs-0001hE-T1
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:26:22 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGCs-0006Ev-Rx
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:26:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lqGCp-0000Tn-Hx; Mon, 07 Jun 2021 15:26:19 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=yLWoO6vSBtIOqTBuNjpT0IriygDNMYMvZ3be+AzBg0M=; b=hZhXmXt4NDcnpeV5gOHbPeqKCp
	J6qU/+ZpgnF4YXDZMDC0lUlCsgH4e9tNhbOLoRhRncUhhxdmf4cqjTjfCUyzirtPJ1gJEoL+YoZR2
	yUvKqnXs3IYeH06AA8AVDlm9z3tuJOF386CaY6tXnNySl7ZyA3YLUaPwkFfwE+b/e/jw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24766.11403.340597.148842@mariner.uk.xensource.com>
Date: Mon, 7 Jun 2021 15:26:19 +0100
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] MAINTAINERS: add myself as tools/libs reviewer
In-Reply-To: <20210607112310.22180-1-jgross@suse.com>
References: <20210607112310.22180-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Juergen Gross writes ("[PATCH] MAINTAINERS: add myself as tools/libs reviewer"):
> I have touched most of the Xen libraries in the past, and there is a
> clear lack of reviewer bandwidth in the tools area.
> 
> Add myself as a tools/libs reviewer for that reason.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 14:49:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 14:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138012.255585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGZB-0000m7-MA; Mon, 07 Jun 2021 14:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138012.255585; Mon, 07 Jun 2021 14:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGZB-0000m0-JB; Mon, 07 Jun 2021 14:49:25 +0000
Received: by outflank-mailman (input) for mailman id 138012;
 Mon, 07 Jun 2021 14:49:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGZA-0000ls-12
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:49:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGZ9-00025w-Vj
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:49:23 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqGZ9-00081P-Ue
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:49:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lqGZ8-0000dE-91; Mon, 07 Jun 2021 15:49:22 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=J3UAMDh0d7kXRCpgxD6vYaEbDtmNtduzvuumbEcsPW4=; b=h8n+fHCYd2xVc8zZUKrN6Ikz6e
	zMi3Q25ETKCmI0psNcNg7TQ0ssRi3PFPODj1527SNT9zAioIYSlGNWoHeCLNlzFpVzaLh9V3WOA94
	CysHzmxVE4OwPzSlYg3Y+ITG2LjmVJY5Ab7BpoWZnbhpQl2WaHulGCom22awo0ZXwhk8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24766.12786.57771.145539@mariner.uk.xensource.com>
Date: Mon, 7 Jun 2021 15:49:22 +0100
To: xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [ovmf test] 162371: tolerable FAIL - PUSHED
Newsgroups: chiark.users.ijackson.xen.cron.osstest
In-Reply-To: <osstest-162371-mainreport@xen.org>
References: <osstest-162371-mainreport@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

osstest service owner writes ("[ovmf test] 162371: tolerable FAIL - PUSHED"):
> flight 162371 ovmf real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/162371/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed, but are not blocking:
>  test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail starved in 162368
>  test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail starved in 162368
> 
> version targeted for testing:
>  ovmf                 51adb689e1db695cffdeeacafad218768fbc018c
> baseline version:
>  ovmf                 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0

This was unfortunate.  I am working on fixing it so this won't happen
again.  Anthony tells me that this is due to a regression (from our
pov) in upstream ovmf which will probably break everything.  So I have
rewound the main xenbits ovmf.git "xen-tested-master" branch back to
c410ad4da4b7785170d3d42a3ba190c2caac6feb.

When I have fixed it so this won't happen again I will restart the
ovmf flights.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 14:57:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 14:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138019.255597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGga-0002Cf-FB; Mon, 07 Jun 2021 14:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138019.255597; Mon, 07 Jun 2021 14:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGga-0002CY-C4; Mon, 07 Jun 2021 14:57:04 +0000
Received: by outflank-mailman (input) for mailman id 138019;
 Mon, 07 Jun 2021 14:57:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+N/=LB=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lqGgY-0002CS-QZ
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 14:57:02 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bbdd5224-2642-4307-9b44-54896c0b2a61;
 Mon, 07 Jun 2021 14:57:01 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id r1so13986639pgk.8
 for <xen-devel@lists.xenproject.org>; Mon, 07 Jun 2021 07:57:01 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 p14sm9148073pgk.6.2021.06.07.07.56.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Jun 2021 07:57:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbdd5224-2642-4307-9b44-54896c0b2a61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=mkL4psSj+prMNO7AohBJyTx+Q189qdo7fgANRpB5e+4=;
        b=lhaKoBcNOuAsXD3pTAmPwsFop0hqZ7JU5MLd+d9L1no2Uoec0Qyp3qHl7oh02shWeI
         Jes/SZ8q3U+KfCNoq9c1Xrlmq8EXws4qXzfcpkEnMBLnTwTeOLVn57/5lNMv0IIQu936
         eZ7O1rSFr9fTp5zpfxPADKjK5NrHWx5QoMUdi6NsRSJlWsZAOZ1xE13fUHg60coHCGz9
         L8KTelBvvQsClf1NodL6YCelq4Ol69U2fD2Jwlp+Qg82jWdc6cKvPRG+8Dfwd10h1GXK
         eJIB1Vu38FzkxJlUAMfqjcOa76EjkPTv6FXCcrF7OHioGvvqCsPHt0hXWXzpcsKlznTI
         Ov6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=mkL4psSj+prMNO7AohBJyTx+Q189qdo7fgANRpB5e+4=;
        b=F7Pv0CiTY2DAHzUbMo0z8YaY/qESPjVDyhO8zJjd1kx1GZ4RnyPOLmBQESwymmBVBx
         UPGFfUnFiv5RY5Vxu58KU7EGb1ETtgFnsKzfg/ZCF5e4iM8vQypm5W4z/y7UIbCpII1d
         saymRCVVRLNw/z5SLHl/lerRzZR9X60tjvDJgWdQSaZhuT+gJcj4Pb8GlLCUtX9kH9ut
         /Wiu9WQrkrJkCfOSaHxMsTo6eEqagLAN9uELuM6zdhx9aDhrBUYPWDkeW7HUacg5vEhK
         JJ+mm8DtOSOZBBFpuixGDSAHn1wva8oZ+Zl54aCcgrhk7uZHmYI//KzJ97aTYPeZzsHr
         rydA==
X-Gm-Message-State: AOAM5310E5VfMVvflZfbENVW8F1XCWUgnrEt5TPTgQ7hA4v8C4bo9LVr
	M+tBf3EIIx+9/rPNGs8Gc7Q=
X-Google-Smtp-Source: ABdhPJyAa6AVd7SFvf1Ep4fkh59q91v76tY7vbke901CKM7Z5XY+dbd7TJGFiS7g5cKGjx+7iiHgJA==
X-Received: by 2002:a65:6a51:: with SMTP id o17mr18093661pgu.170.1623077821197;
        Mon, 07 Jun 2021 07:57:01 -0700 (PDT)
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
Date: Mon, 7 Jun 2021 22:56:47 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607064312.GB24478@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit


On 6/7/2021 2:43 PM, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> For Hyper-V isolation VM with AMD SEV SNP, the bounce buffer(shared memory)
>> needs to be accessed via extra address space(e.g address above bit39).
>> Hyper-V code may remap extra address space outside of swiotlb. swiotlb_
>> bounce() needs to use remap virtual address to copy data from/to bounce
>> buffer. Add new interface swiotlb_set_bounce_remap() to do that.
> 
> Why can't you use the bus_dma_region ranges to remap to your preferred
> address?
> 

Thanks for your suggestion.

The addresses in the extra address space work as a mirror of system 
memory. Memory shared with the host in an Isolation VM must be accessed 
via the extra address space, which is above the shared GPA boundary. 
When the swiotlb bounce buffer pool is initialized, only addresses below 
the shared GPA boundary are accepted by the swiotlb API, because they 
are treated as system memory and managed by the memory management 
subsystem. This is why the Hyper-V swiotlb bounce buffer pool needs to 
be allocated in Hyper-V code, with the associated physical addresses 
mapped in the extra address space. The goal of this patch is to add a 
new interface to set the start virtual address of the bounce buffer 
pool, so that the swiotlb bounce buffer copy function uses the right 
virtual address for the extra address space.

bus_dma_region translates a CPU physical address to a DMA address.
It can't change the virtual address of the bounce buffer pool or make
the swiotlb code copy data via the right address. If I have missed
something, please correct me.

Thanks.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:00:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138024.255608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGjT-0003I1-Uw; Mon, 07 Jun 2021 15:00:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138024.255608; Mon, 07 Jun 2021 15:00:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGjT-0003HQ-Qy; Mon, 07 Jun 2021 15:00:03 +0000
Received: by outflank-mailman (input) for mailman id 138024;
 Mon, 07 Jun 2021 15:00:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+N/=LB=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lqGjS-00033i-5I
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:00:02 +0000
Received: from mail-pg1-x531.google.com (unknown [2607:f8b0:4864:20::531])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99589562-ee7e-48d1-b6e6-a181daa0355b;
 Mon, 07 Jun 2021 15:00:01 +0000 (UTC)
Received: by mail-pg1-x531.google.com with SMTP id l1so14023997pgm.1
 for <xen-devel@lists.xenproject.org>; Mon, 07 Jun 2021 08:00:01 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 x6sm8711203pfd.173.2021.06.07.07.59.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Jun 2021 08:00:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99589562-ee7e-48d1-b6e6-a181daa0355b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=LBQFfB81E+P58yjRe+k2+cx97sDCbuPy/V/8bIJeXkc=;
        b=d1nEfDWwq7FWKqIFyB7wcvXzmP+qszgIuZNrjkwP3x1/yDu+sQSJ4TgOka8JMFwdpz
         FbG95ETwwZelyW9y9nYy/fBdQQ5EpoXh41Gbn4eQmQTaoxF5map34fNshbDgQ+GKj1eQ
         M2HNtozQY8mDF7hXNffuvHHSZK/H+HS3mDWozeDZ1wrNV7N6DSSxBit7yoz2Djn0vKsu
         PTtFJD9Ofb3TtKRa65oNYY7gKKMzGB1TuT8lxEnUuKWvE9MiqzfZDCghwXldP+Yap8sq
         BhmGax5uZd/Z3aYrXJPVjrW6XKPjEpwSZcuA8L3oguKgjEhjLQ9bi63V9oWBmfiZKAst
         VQoQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=LBQFfB81E+P58yjRe+k2+cx97sDCbuPy/V/8bIJeXkc=;
        b=lcU0AdlwlwKyjqXx91ragKAI6QETceAtlwolt1BXPooKOLK/27wgtqAv5rdn7h2AsT
         6PnjSq5+/7vbmi8wNBL4DL4s7+HDg8UbgmSG8F/qsECEfK1WIvBC/bphUlt3PSfOKKnv
         Ajp0NoKXUFpEjcr4++PaRnT6eOai32ycU3HO8u3VC4BoowgERmJpenFp0sFSdxNPVcrE
         O+6qCiUAIm+grJpMhajSB2NJvTmMjkKu2LmLD+GIA5KlFxlkJH8ajBb4T6OTq7CzpJfi
         seALLMOo5rndidFIb4YsTennoHsb3/0hWo3uhi8a1cauttvlU/gyDgv3Kp1RhzTPcD8E
         Lhfw==
X-Gm-Message-State: AOAM5325y6Xc0WzW2kqeXCYbPayDD7jpaVI4Qahw74u2TgRO8u6kG3Tt
	zq2HaMFXuMYFlpgAhmcm1JU=
X-Google-Smtp-Source: ABdhPJxA6AxF//BtD4CX3ss9LWnlUG4jgLj+0PRFw+NP8biHDKV1XDA0j2C9BbRScevume4H06cdTA==
X-Received: by 2002:a63:7d2:: with SMTP id 201mr18504206pgh.14.1623078000846;
        Mon, 07 Jun 2021 08:00:00 -0700 (PDT)
Subject: Re: [RFC PATCH V3 11/11] HV/Storvsc: Add Isolation VM support for
 storvsc driver
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-12-ltykernel@gmail.com>
 <20210607064603.GD24478@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <26c771e5-a64e-f2cc-e245-fa5f130f4b18@gmail.com>
Date: Mon, 7 Jun 2021 22:59:46 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607064603.GD24478@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 6/7/2021 2:46 PM, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:28AM -0400, Tianyu Lan wrote:
>> +				for (i = 0; i < request->hvpg_count; i++)
>> +					dma_unmap_page(&device->device,
>> +							request->dma_range[i].dma,
>> +							request->dma_range[i].mapping_size,
>> +							request->vstor_packet.vm_srb.data_in
>> +							     == READ_TYPE ?
>> +							DMA_FROM_DEVICE : DMA_TO_DEVICE);
>> +				kfree(request->dma_range);
> 
> Unreadably long lines.  You probably want to factor the quoted code into
> a little readable helper and do the same for the map side.

Good suggestion. Will update.

Thanks.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:16:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138035.255625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzA-0005A2-AS; Mon, 07 Jun 2021 15:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138035.255625; Mon, 07 Jun 2021 15:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzA-00059v-6w; Mon, 07 Jun 2021 15:16:16 +0000
Received: by outflank-mailman (input) for mailman id 138035;
 Mon, 07 Jun 2021 15:16:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGz8-00059p-H1
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGz8-0002aJ-Dx
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGz8-0001jO-Cq
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:14 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1lqGz6-0000hp-JN; Mon, 07 Jun 2021 16:16:12 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=q/shnmVfKFQNYWEaQ8lptlXmoYPWwShZ0Rr21VZgANM=; b=nTkQkHBkbNxvsoTNrhiZaH+jXd
	tmtkeFFbMOcX/xF7zvzUJ/dHCIoKFTLs8PfWz0mJmWj2I4dqjvof+djtWYgKCmX7eTfl4HBNlZf7r
	yYgVNoWS82tEbnHlItC7GvdSJVZXhOr3kC6lJp4sKqvBYgYHL+8+250exDq3UdvbJ1j4=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [OSSTEST PATCH] ts-xen-build-prep: Install figlet
Date: Mon,  7 Jun 2021 16:16:01 +0100
Message-Id: <20210607151601.14049-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-xen-build-prep | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 5ec70dd5..67b3eae6 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -197,7 +197,7 @@ END
 }
 
 sub prep () {
-    my @packages = qw(mercurial rsync
+    my @packages = qw(mercurial rsync figlet
                       build-essential bin86 bcc iasl bc
                       flex bison cmake ninja-build
                       libpci-dev libncurses5-dev libssl-dev python-dev
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:16:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138036.255635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzL-0005UL-Mj; Mon, 07 Jun 2021 15:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138036.255635; Mon, 07 Jun 2021 15:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzL-0005UC-JG; Mon, 07 Jun 2021 15:16:27 +0000
Received: by outflank-mailman (input) for mailman id 138036;
 Mon, 07 Jun 2021 15:16:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzK-0005TN-JD
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzK-0002az-IR
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:26 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzK-0001kT-HE
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:26 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1lqGzI-0000i8-Sc; Mon, 07 Jun 2021 16:16:24 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=GD3Qo7Umq5wiIJdpIbajB6oXYpu0bAtv9S8Selleh/I=; b=AhAWnSh297aJHkdksWgIWTAhq8
	WsbSlZ22pcx2XWjPI1csdR5bzRYhQbwtg/m+Wl3qSDG27qYzuPIf5UN7FZKAaNIvh2uXD8cTqJSZA
	uIi8zTkzLDDk+iT/Z3WkrbE17pd9MVTeWbrJMP68Ac0Oe2nHwmQtJUrqghEetM/R8cNU=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/3] host allocation: Prepare for further starvation check
Date: Mon,  7 Jun 2021 16:16:12 +0100
Message-Id: <20210607151614.14132-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

* Add a new job pattern parameter to $starvation_q
* Add a new $thisclass parameter to starving
* Pass 0 for now.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-hosts-allocate-Executive | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 459b9215..9722ce12 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -162,6 +162,7 @@ END
      LEFT JOIN steps
          USING (flight,job)
          WHERE flight= ?
+           AND job LIKE ?
       GROUP BY job, jobs.status
 END
 
@@ -822,11 +823,12 @@ sub most_optimistic ($$$) {
     return $optimist->{Got};
 }
 
-sub starving ($$) {
-    my ($best_start_abs, $now) = @_;
+sub starving ($$$) {
+    my ($best_start_abs, $now, $thisclass) = @_;
     return (0, 'runvar says never give up') unless %$starvation_p;
     return (0, 'no estimate') unless defined $best_start_abs;
-    $starvation_q->execute($flight);
+    my $joblike = $thisclass ? ($job =~ s/(?:(-)|\W).*/$1\%/r) : '%';
+    $starvation_q->execute($flight, $joblike);
     my $d=0;
     my $w=0;
     my $maxfin=0;
@@ -935,7 +937,7 @@ sub attempt_allocation {
 	    }
 	} elsif (%$starvation_p) {
 	    my $est_abs = most_optimistic($best, $now, $starvation_p->{I});
-	    my ($starving, $m) = starving($est_abs, $now);
+	    my ($starving, $m) = starving($est_abs, $now, 0);
 	    $starvation_q->finish();
 	    if (!$starving) {
 		print DEBUG "not starving: $m\n";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:16:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138037.255641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzM-0005Y4-2z; Mon, 07 Jun 2021 15:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138037.255641; Mon, 07 Jun 2021 15:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzL-0005Ww-SB; Mon, 07 Jun 2021 15:16:27 +0000
Received: by outflank-mailman (input) for mailman id 138037;
 Mon, 07 Jun 2021 15:16:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0005Tl-2l
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0002b3-23
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0001kl-0z
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1lqGzJ-0000i8-Bp; Mon, 07 Jun 2021 16:16:25 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=JHUIEH7NL3PT02iIBi/javiAsHyQsQsZlPsWHQfpTwc=; b=AfvlUeN9i4JTYE/cd5S6QYv3+5
	DGpFb5Ie4Zet/scfwVduvHCSe6Sw5STd2LtM2C8an4nHyUl8I97/+43pNWZc2zrzyhGHz2wigaldz
	lk6WWZEHSqM2iP6cuWTF0fTHDw/fbCFuh2sc8NNyW+4mdUcUbjKSJK7pCFRaUKvBtroU=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/3] host allocation: Avoid starving out failing tests
Date: Mon,  7 Jun 2021 16:16:14 +0100
Message-Id: <20210607151614.14132-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210607151614.14132-1-iwj@xenproject.org>
References: <20210607151614.14132-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This can result in bad pushes.  Better to wait.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-hosts-allocate-Executive | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 849bc97b..4dfcd0cd 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -704,6 +704,7 @@ sub hid_recurse ($$) {
 	    Selections => [ map { $_->{Selected} } @hids ],
 	    Start => $start_time,
 	    Duration => $duration,
+	    PrevFail => ($previously_failed || $previously_failed_equiv),
 	};
     }
 }
@@ -951,7 +952,11 @@ sub attempt_allocation {
 	    }
 	    $starvation_q->finish();
 	    if ($all_starving) {
-		return $alloc_starved_r;
+		if (!$best->{PrevFail}) {
+		    return $alloc_starved_r;
+		} else {
+		    logm "starving, but previously failed, so continue...";
+		}
 	    }
 	}
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:16:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138038.255658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzN-000609-CV; Mon, 07 Jun 2021 15:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138038.255658; Mon, 07 Jun 2021 15:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqGzN-0005yC-6m; Mon, 07 Jun 2021 15:16:29 +0000
Received: by outflank-mailman (input) for mailman id 138038;
 Mon, 07 Jun 2021 15:16:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0005U0-A6
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0002b7-9M
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lqGzL-0001kr-8V
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:16:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1lqGzJ-0000i8-4H; Mon, 07 Jun 2021 16:16:25 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=zQfvwktehqJEtR9IykWPHnEdGaQXPLkehhore2dQGUg=; b=h3OJG8RScyvr2Ej+8Ph3K00Nhv
	Ywq0jdXVe0QBJYXD61VflRwbSAu9bV07Nf/wMqpFdHNZENfDaKQJi83wP9lfYFwaWx74iNqiZ7FpI
	0KQmsJINwqVYAMy2ebBHWo2Laf/IzK2fAIaPl88Cqzd65TF2qz1Lgk33OccC4WzoI5Vc=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/3] host allocation: Check "job class" too
Date: Mon,  7 Jun 2021 16:16:13 +0100
Message-Id: <20210607151614.14132-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210607151614.14132-1-iwj@xenproject.org>
References: <20210607151614.14132-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

That is, all jobs which start with the same \w* prefix as this job.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-hosts-allocate-Executive | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 9722ce12..849bc97b 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -937,12 +937,20 @@ sub attempt_allocation {
 	    }
 	} elsif (%$starvation_p) {
 	    my $est_abs = most_optimistic($best, $now, $starvation_p->{I});
-	    my ($starving, $m) = starving($est_abs, $now, 0);
+	    my $all_starving = 1;
+	    foreach my $thisclass (qw(1 0)) {
+		my $tcdesc = $thisclass ? 'class' : 'flight';
+		my ($starving, $m) = starving($est_abs, $now, $thisclass);
+		if (!$starving) {
+		    print DEBUG "not starving ($tcdesc): $m\n";
+		    $all_starving = 0;
+		    last;
+		} else {
+		    logm "starving ($tcdesc) ($m)";
+		}
+	    }
 	    $starvation_q->finish();
-	    if (!$starving) {
-		print DEBUG "not starving: $m\n";
-	    } else {
-		logm "starving ($m)";
+	    if ($all_starving) {
 		return $alloc_starved_r;
 	    }
 	}
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:21:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:21:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138064.255674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqH4K-0008Rr-4Z; Mon, 07 Jun 2021 15:21:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138064.255674; Mon, 07 Jun 2021 15:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqH4K-0008Rk-1W; Mon, 07 Jun 2021 15:21:36 +0000
Received: by outflank-mailman (input) for mailman id 138064;
 Mon, 07 Jun 2021 15:21:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+N/=LB=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lqH4J-0008Re-Ev
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 15:21:35 +0000
Received: from mail-pl1-x633.google.com (unknown [2607:f8b0:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51ad0812-6465-4600-a1de-bcded7b6ab12;
 Mon, 07 Jun 2021 15:21:34 +0000 (UTC)
Received: by mail-pl1-x633.google.com with SMTP id u7so8875060plq.4
 for <xen-devel@lists.xenproject.org>; Mon, 07 Jun 2021 08:21:34 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 g63sm8507428pfb.55.2021.06.07.08.21.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Jun 2021 08:21:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ad0812-6465-4600-a1de-bcded7b6ab12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=AqJMsqQ4KNSlTkis4vnlFWwGNt3Bx7s+42JKf63F/jc=;
        b=WsBdcRsefCiECb9IqTSuqCUv8coeVKPfrhSnmSii/c3l1gmGod3/U/htpg5pceudXy
         aYMrb18s8dW7vOcfl+l3/JLyfgriswqTmUyfzOzd+MpR4PFDQBqNxGysTptsKMLLdnqS
         jvqIZCDUxhyEXhn3kFG8PvvaL25gkKk9fhLU2Y6GVkr75vdb0Ve9OxPhhCsLOSZbZ46o
         oReEyqo0bUD1A3JvfVwFOyoUdnbneVTJbIjly5SDJg+VKDoLMXo7YsQjaqCn4F5oQkI9
         W8T56TKPKa5QxGA65hUQQsSol/AsF5UgHxkBXTE4sP50uacN6R0GzZbY+xLSbeosBWte
         pywQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=AqJMsqQ4KNSlTkis4vnlFWwGNt3Bx7s+42JKf63F/jc=;
        b=c1mLk+krdpmkuw+EjWnWgo8SvBmOaKNIRjnlsrAnBmbgwMgU3UOOggS4i3G1H1QlB1
         8J1jZQLaYuk5tpPSJJifYkcHpUI7Y4xvoAb2xCpoeg9ThB9KtJIFKgcIlp6hrJdJPXYe
         HDy9uqfE2ZtqbgD2oSXFW+5I5UfWcdwuK9yDEgJ+ewqNAi4XGiwg4fW7hTEiGVb5pztw
         4h/sYMEmb8XrstIBRwSey/u7dvuGgEtb3RLAzwtiTdntTzvCqnZVkVXOLIXZ5uJcUHGc
         dIi6QsdNA7UeyBgPex112YBxK5V1VwPRW1tpzybFZy3PEkYZZuKT15KcEZyvExgO6usM
         v8UA==
X-Gm-Message-State: AOAM530DUnPKJN7Sas1VRRV7hLMp/O7iU0lX1pcY5LyDyQwUGUzvJ0Ha
	XCdBZHYcNUdbD8HO6jEimnM=
X-Google-Smtp-Source: ABdhPJwKT7FVRh43L9EMp25B1edDg0633yW6Rnccv/gD+CMCio9gORsvm6jnf+jqiJgT4Bx7SQrrGw==
X-Received: by 2002:a17:90a:c68a:: with SMTP id n10mr33294547pjt.32.1623079293798;
        Mon, 07 Jun 2021 08:21:33 -0700 (PDT)
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-11-ltykernel@gmail.com>
 <20210607065007.GE24478@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com>
Date: Mon, 7 Jun 2021 23:21:20 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607065007.GE24478@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 6/7/2021 2:50 PM, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
>> +	if (hv_isolation_type_snp()) {
>> +		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
>> +			       GFP_KERNEL);
>> +		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
>> +			pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
>> +				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
>> +
>> +		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
>> +		kfree(pfns);
>> +		if (!vaddr)
>> +			goto cleanup;
>> +		net_device->recv_original_buf = net_device->recv_buf;
>> +		net_device->recv_buf = vaddr;
>> +	}
> 
> This probably wants a helper to make the thing more readable.  But who
> came up with this fucked up communication protocol where the host needs
> to map random pfns into a contiguous range?  Sometimes I really have to
> wonder what crack the hyper-v people take when comparing this to the
> relatively sane approach others take.

Agree. Will add a helper function.
> 
>> +	for (i = 0; i < page_count; i++)
>> +		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
>> +				 packet->dma_range[i].mapping_size,
>> +				 DMA_TO_DEVICE);
>> +
>> +	kfree(packet->dma_range);
> 
> Any reason this isn't simply using a struct scatterlist?

I will have a look. Thanks for the reminder about scatterlist.

> 
>> +	for (i = 0; i < page_count; i++) {
>> +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
>> +					 + pb[i].offset);
>> +		u32 len = pb[i].len;
>> +
>> +		dma = dma_map_single(&hv_dev->device, src, len,
>> +				     DMA_TO_DEVICE);
> 
> dma_map_single can only be used on page backed memory, and if this is
> using page backed memory you wouldn't need to do these phys_to_virt
> tricks.  Can someone explain the mess here in more detail?

Sorry. Could you elaborate on the issue? The pages in the pb array are
not allocated by the DMA API; dma_map_single() is used here to map these
pages' addresses to bounce buffer physical addresses.

> 
>>   	struct rndis_device *dev = nvdev->extension;
>>   	struct rndis_request *request = NULL;
>> +	struct hv_device *hv_dev = ((struct net_device_context *)
>> +			netdev_priv(ndev))->device_ctx;
> 
> Why not use a net_device_context local variable instead of this cast
> galore?
> 

OK. I will update.


Thanks.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 15:48:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 15:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138074.255685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHUd-0002OG-6j; Mon, 07 Jun 2021 15:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138074.255685; Mon, 07 Jun 2021 15:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHUd-0002O9-3p; Mon, 07 Jun 2021 15:48:47 +0000
Received: by outflank-mailman (input) for mailman id 138074;
 Mon, 07 Jun 2021 15:48:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqHUb-0002Ng-Po; Mon, 07 Jun 2021 15:48:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqHUb-00038b-Jj; Mon, 07 Jun 2021 15:48:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqHUb-0006CF-Cw; Mon, 07 Jun 2021 15:48:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqHUb-0002h2-CR; Mon, 07 Jun 2021 15:48:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MpzqaB1ghZfvb7OMVCYSaSqoCQY9mwuS3zh6G7Rpp6w=; b=up172qTNeJw6y/i+Qji6dWdegX
	wjHJ6PgXlv6sY2DWv3Rq40idGdQFFHjhQnVsIRoyeSQrvXKT/irvztA+2svv3eKk9+kpMP7ayuWjO
	D0/JY3ARrIBuEXvuyaM5vcy68OsgP9FTR2cGjdsj+vuT8IntXcic/UrqZLb+cKMwSmDQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162498-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162498: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=75f13e9b221e2c8603f15ee1d53318526cf56113
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 15:48:45 +0000

flight 162498 xen-unstable-smoke real [real]
flight 162515 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162498/
http://logs.test-lab.xenproject.org/osstest/logs/162515/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  75f13e9b221e2c8603f15ee1d53318526cf56113
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    5 days
Failing since        162370  2021-06-04 17:01:35 Z    2 days   19 attempts
Testing same since   162374  2021-06-04 20:03:35 Z    2 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out for
    bitness of tool stack and guest differing.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable
    since kernel 4.14, as only the linear p2m table is kept if Xen indicates
    it is supporting that. Unfortunately xc_core_arch_map_p2m() still
    supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 16:00:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 16:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138082.255699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHgN-00056y-Bf; Mon, 07 Jun 2021 16:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138082.255699; Mon, 07 Jun 2021 16:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHgN-00056r-8f; Mon, 07 Jun 2021 16:00:55 +0000
Received: by outflank-mailman (input) for mailman id 138082;
 Mon, 07 Jun 2021 16:00:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gI4l=LB=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lqHgL-00056l-Eu
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 16:00:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f2f3275-a6a1-4fbe-96ed-e98c39d48087;
 Mon, 07 Jun 2021 16:00:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f2f3275-a6a1-4fbe-96ed-e98c39d48087
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623081651;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=+KkwsGrzndxOAS/E3lA6QFfueBllf0d4acF0r8yeGvU=;
  b=GoKXIMcsskXCFzYqRSBIzUmqZvQ4q/1n1hAO/8pRCkEDGv/0eFRQrYfL
   nVi/v/hz1JN6EmrjKYQEjNETx3re5gBjWrr2ccIshbDRSW8D5oEl8WseD
   gM2iQ1IzNjizsTnthjnPoY3GaDosrVBCaqTrvG19FjnZ0PNmF5rWq1mxT
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: AxTeQ2jRhwxGIWtJM6V00Zk8gorvYbI2SqDSbDwNKqConrtNd+1/KBZXQjs64TEKHK004RiJEK
 NtCf/JBQwXLY/iT/xgI7MakHx9Z41+dDBz1A5bi1GHyUxa/2NqQt2GaExKeDEu/PnG4RIM71xq
 Zu5dOaNtqf5k6z/uB956PuEjNN7NpEZmcf6sELLZCWZOJt7d2SMkwAfSxpacuRpLEtqrEsmEvB
 vG/cD38kYV47TbFhOlzj41i0b6xm4pcFtuTrWjPJ+BqxmfROJnl0OIVb44LeWc3zRJWFCAwa+X
 /DY=
X-SBRS: 5.1
X-MesageID: 45630884
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:0bFdh67JG9d+jI8tlQPXwY2BI+orL9Y04lQ7vn2ZFiY5TiXIra
 qTdaogviMc6AxxZJhAo6H4BEDkex/hHPFOkO4s1NuZLWzbUQiTXeNfBOnZskXd8kTFn4Yzu9
 YCT0VnMr3N5DBB/L3HCWKDYrAdKbe8gcSVbNPlvg1QpExRGtRdxjY8LjzePlx9RQFAC5Z8Po
 Gb/NB7qz2pfmlSRtinB1EeNtKz7OHjpdbDW1orFhQn4A6BgXeD87jhCSWV2R8YTndm3aoiy2
 7YiAb0j5/T+c1T8iWsmlM70q4m0ecJi+EzcvBks/JlXQkEXzzYLLiIWNW5zUAISa+UmRoXee
 L30msd1vJImgLsl1GO0GbQMjbboUkTAl/ZuCylaCjY0L7ErAxTMbs+uWseSGqs13Yd
X-IronPort-AV: E=Sophos;i="5.83,255,1616472000"; 
   d="scan'208";a="45630884"
Date: Mon, 7 Jun 2021 17:00:46 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Thomas Huth <thuth@redhat.com>
CC: Ahmed Abouzied <email@aabouzied.com>, <qemu-devel@nongnu.org>,
	<qemu-trivial@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, "Paul
 Durrant" <paul@xen.org>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/2] Remove leading underscores from Xen defines
Message-ID: <YL5Crh2VlyxGNUlI@perard>
References: <20210605175001.13836-1-email@aabouzied.com>
 <01ba2176-b559-1078-8a9f-39553989d9d3@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <01ba2176-b559-1078-8a9f-39553989d9d3@redhat.com>

On Mon, Jun 07, 2021 at 08:36:07AM +0200, Thomas Huth wrote:
> On 05/06/2021 19.50, Ahmed Abouzied wrote:
> > Identifiers with leading underscores followed by capital letters or
> > underscores are reserved by the C standard.
> > 
> > Resolves: https://gitlab.com/qemu-project/qemu/-/issues/369
> > 
> > Signed-off-by: Ahmed Abouzied <email@aabouzied.com>
> > ---
> >   include/hw/xen/interface/grant_table.h  | 4 ++--
> >   include/hw/xen/interface/io/blkif.h     | 4 ++--
> >   include/hw/xen/interface/io/console.h   | 4 ++--
> >   include/hw/xen/interface/io/fbif.h      | 4 ++--
> >   include/hw/xen/interface/io/kbdif.h     | 4 ++--
> >   include/hw/xen/interface/io/netif.h     | 4 ++--
> >   include/hw/xen/interface/io/protocols.h | 4 ++--
> >   include/hw/xen/interface/io/ring.h      | 4 ++--
> >   include/hw/xen/interface/io/usbif.h     | 4 ++--
> >   9 files changed, 18 insertions(+), 18 deletions(-)
> > 
> 
> I hope the Xen people can comment on whether the underscores had a purpose
> here or whether it's ok to remove them, thus:
> 
> Cc: xen-devel@lists.xenproject.org
> 
> From my QEMU-developer's side of view:
> 
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> 

Nacked-by: Anthony PERARD <anthony.perard@citrix.com>

Please don't change the header guards in include/hw/xen/interface/.
This has been attempted before and resulted in build failures; see
d1744bd3218daa820744c14572058491e4854399 (Revert xen/io/ring.h part of "Clean up a few header guard symbols")

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 16:21:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 16:21:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138091.255711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHzs-0007UD-5y; Mon, 07 Jun 2021 16:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138091.255711; Mon, 07 Jun 2021 16:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqHzs-0007U6-2W; Mon, 07 Jun 2021 16:21:04 +0000
Received: by outflank-mailman (input) for mailman id 138091;
 Mon, 07 Jun 2021 16:21:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=esLc=LB=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1lqHzq-0007U0-PL
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 16:21:02 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 189ab8c2-3e4d-498f-b838-fa119a9192c0;
 Mon, 07 Jun 2021 16:21:01 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.94.2 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1lqHzm-000JhA-4r; Mon, 07 Jun 2021 16:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 189ab8c2-3e4d-498f-b838-fa119a9192c0
Date: Mon, 7 Jun 2021 17:20:58 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roberto Bagnara <roberto.bagnara@bugseng.com>,
	xen-devel@lists.xenproject.org
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
Message-ID: <YL5Hah0rmLMpG/rs@deinos.phlegethon.org>
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
 <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
 <YLEP73On6EBjv3Ks@deinos.phlegethon.org>
 <11b5b29e-0baf-9f50-6d9e-985e791148b2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <11b5b29e-0baf-9f50-6d9e-985e791148b2@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

Hi,

At 08:45 +0200 on 31 May (1622450756), Jan Beulich wrote:
> On 28.05.2021 17:44, Tim Deegan wrote:
> > Hi,
> > 
> > At 10:58 +0200 on 25 May (1621940330), Jan Beulich wrote:
> >> On 24.05.2021 06:29, Roberto Bagnara wrote:
> >>> I stumbled upon parsing errors due to invalid uses of
> >>> _Static_assert expanded from HASH_CALLBACKS_CHECK where
> >>> the tested expression is not constant, as mandated by
> >>> the C standard.
> >>>
> >>> Judging from the following comment, there is partial awareness
> >>> of the fact this is an issue:
> >>>
> >>> #ifndef __clang__ /* At least some versions dislike some of the uses. */
> >>> #define HASH_CALLBACKS_CHECK(mask) \
> >>>      BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
> >>>
> >>> Indeed, this is not a fault of Clang: the point is that some
> >>> of the expansions of this macro are not C.  Moreover,
> >>> the fact that GCC sometimes accepts them is not
> >>> something we can rely upon:
> > 
> > Well, that is unfortunate - especially since the older ad-hoc
> > compile-time assertion macros handled this kind of thing pretty well.
> > Why when I were a lad &c &c. :)
> 
> So I have to admit I don't understand: The commit introducing
> HASH_CALLBACKS_CHECK() (90629587e16e "x86/shadow: replace stale
> literal numbers in hash_{vcpu,domain}_foreach()") did not replace
> any prior compile-time checking. Hence I wonder what you're
> referring to (and hence what alternative ways of dealing with the
> situation there might be that I'm presently not seeing).

Sorry, I wasn't clear.  Before there was compiler support for
compile-time assertions, people used horrible macros that expanded to
things like int x[(p)?0:-1].  (I don't remember which exact flavour we
had in Xen.)  Those worked fine with static consts because the
predicates only had to be compile-time constant in practice, but now
they have to be constant in principle too.

So I don't think there was a better way of adding these assertions in
90629587e16e, I'm just generally grumbling that the official
compile-time assertions are not quite as useful as the hacks they
replaced.

And I am definitely *not* suggesting that we go back to those kind of
hacks just to get around the compiler's insistence on the letter of
the law. :)

Cheers,

Tim.


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 16:25:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 16:25:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138097.255724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqI4A-00089o-R4; Mon, 07 Jun 2021 16:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138097.255724; Mon, 07 Jun 2021 16:25:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqI4A-00089h-Nk; Mon, 07 Jun 2021 16:25:30 +0000
Received: by outflank-mailman (input) for mailman id 138097;
 Mon, 07 Jun 2021 16:25:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=32wv=LB=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lqI49-00089b-JO
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 16:25:29 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.64]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f007ef2-faf5-4807-bb42-97f35c593b91;
 Mon, 07 Jun 2021 16:25:27 +0000 (UTC)
Received: from AM5PR0602CA0004.eurprd06.prod.outlook.com
 (2603:10a6:203:a3::14) by DB6PR0802MB2439.eurprd08.prod.outlook.com
 (2603:10a6:4:a0::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.29; Mon, 7 Jun
 2021 16:25:11 +0000
Received: from AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:a3:cafe::5e) by AM5PR0602CA0004.outlook.office365.com
 (2603:10a6:203:a3::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.15 via Frontend
 Transport; Mon, 7 Jun 2021 16:25:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT034.mail.protection.outlook.com (10.152.16.81) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.21 via Frontend Transport; Mon, 7 Jun 2021 16:25:10 +0000
Received: ("Tessian outbound f02dc08cb398:v93");
 Mon, 07 Jun 2021 16:25:10 +0000
Received: from 11fd2df6f8f8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D1679FFA-C26A-44FB-960D-3CAD266C36CD.1; 
 Mon, 07 Jun 2021 16:24:46 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 11fd2df6f8f8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Jun 2021 16:24:46 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PAXPR08MB6766.eurprd08.prod.outlook.com (2603:10a6:102:136::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Mon, 7 Jun
 2021 16:24:44 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7131:4f3f:625e:4b09]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7131:4f3f:625e:4b09%7]) with mapi id 15.20.4195.030; Mon, 7 Jun 2021
 16:24:44 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LNXP123CA0005.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:d2::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.15 via Frontend Transport; Mon, 7 Jun 2021 16:24:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f007ef2-faf5-4807-bb42-97f35c593b91
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yVELQqDgphU1RdD60IDap5BVEAlveswJ1hh4QFblv6Q=;
 b=mp/4983ucueSuErhL0Z5NaByPHrlJFyluWtZaK/jgSA6vDUwFh/YFh827jaFX9o0PvTow7cwD3oMqeHOnlIKIfe/klkZYtsyJnB9Dms3lLub1EsPAq3osWCRr+6j0LitxnkD7RvMloDbLJ5/smL+sh6Js0y2uYWMagJJ/xyLjzc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 37e4d0fea065f90c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jXcRJaprSa1vcfjOWX3VC2SMrNqZDWyAJt48CuhxyKDTW0WTrAfGs/vrVoWm0/8bbk3H0NvWORqVqAuXw5Gdx//+ORSspdTkK0pZfZSU0uFq5zr4LNuBduzqd131j+4OTPXpG66Pd/sucWvFFTE5eHlm4yq7eYHJx7YKQSFa13nNqpFHH1NKcsQTzGJHeVZPEataD/wiUopmBxeqSr0P3mKWwECzND5LSLeyizi19yq01QcnPONbGaKSKI3dX6fpJPIZiXnC9lgzwnRgPd6/ztinCXFnoLbwPkk08/82Pr6zmxaK0Lyh0cfclN3RQEgdEVLueixidtSoqSV6k0fOUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yVELQqDgphU1RdD60IDap5BVEAlveswJ1hh4QFblv6Q=;
 b=lhBU1dqJXv5HofwgoG8ZbREwXAqJ4fAp62hP5XdYkbUgvTuNEUShv3FqnK9VIbs4F0/yvjKKec3Hb9kCsE+qVD5oj7iL8YpOWoqOUNXFni42sV7K6obQqvN2Ao/isgSB8Ofh9YL4avMlQr+0vnPuhKZuEzYrM6BekO4qg8SWWw6JbaXH50AktmZ36Soi1HvW/qAVY6/H/FQmEQSVB6MT1PhhRbXvLqZJ3pX9rhm5AfiMNa+P+nBCsiInlkTTbD2EJufdHnu38nh+MdqCjUUi/tdk+rY+hZY76/3dqfvuHZpIn5g8MGm1TdWYIJRJdmZ/CGVyooK5/+0LGopqLWQPmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yVELQqDgphU1RdD60IDap5BVEAlveswJ1hh4QFblv6Q=;
 b=mp/4983ucueSuErhL0Z5NaByPHrlJFyluWtZaK/jgSA6vDUwFh/YFh827jaFX9o0PvTow7cwD3oMqeHOnlIKIfe/klkZYtsyJnB9Dms3lLub1EsPAq3osWCRr+6j0LitxnkD7RvMloDbLJ5/smL+sh6Js0y2uYWMagJJ/xyLjzc=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH v6 0/9] Use Doxygen and sphinx for html documentation
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
Date: Mon, 7 Jun 2021 17:24:37 +0100
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <7C830055-778A-4F1B-BCA2-6BDE40170BBA@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LNXP123CA0005.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::17) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 17115bbf-e062-4aee-06fc-08d929d0d273
X-MS-TrafficTypeDiagnostic: PAXPR08MB6766:|DB6PR0802MB2439:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB24396A72FA863AF0D2C21FACE4389@DB6PR0802MB2439.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xoqRNDY6yYSGElcND1rHX2wnzB6uPec70/zHMKYRE0X+UuOdOoHoxrweA99Ntvez44i3yHDkNPgnP7OpUK1OGnIS3Vp2nnnV86FhMFWXaaCM8Cf4rbG9jwmaUVinIN/W1UyMGH3ddDxxJzyrEEgcWFTgQ7ozfbVmtEkGQd8R+PxlX2AnXMnn9tw1t6bZ3ZKkeCISK7LunyAbP3LjpowJN9HtPk5PmQw7w38IqNw//bBE3QAeCN4qa4h7zO14NVRkgYFTBal3McEQoEYsKqxzgww32YFQrUAevuzKuLxModLiUtK40wN53gkHll+G/TwJRY0KjjS7qGecASjcAFY7X0B8TXC0sPZerbCJ75xy1zDl0xjzuQEBBJlzJXFLrVEKhC18qnOkAsocYY0fOuI1PUmWRsI7WwL9l1aKHImxH13+CG002oob6wJjVMMCCCR6NH7AY999OSXBOs16QmZR2GwNx0e48sr8Z9b6nCpT5EAqsjp8c6ObT9qVLn1h42jUDvx0YPRjuBUIhcKSFrP0Man9kDWiL2709iFS3Bdjk5y+WHquZBGD4dHek5ZF7Bdo7mty3FheUrMgssh0kJc6jHBvbo+uaBP85V0n4J6zOnVS7PJ/7/tzjYTm/y4PvaCg8vmM0bDif3CQyEtLsIw42F8BPse1GtB6Cf2W062NksJGVBNJ97PUwhRIEDcPMFUI
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB6816.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(376002)(396003)(366004)(136003)(33656002)(6512007)(6486002)(2906002)(316002)(8936002)(6666004)(8676002)(86362001)(186003)(16526019)(478600001)(83380400001)(53546011)(6506007)(52116002)(38100700002)(38350700002)(36756003)(2616005)(956004)(26005)(66946007)(66556008)(66476007)(54906003)(44832011)(6916009)(4326008)(5660300002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
 =?us-ascii?Q?EXN4Tie9ii4cFy4e13Vout9iEVcScr1pfL2cIhj71/yIBOIqLh5uwBO9Fbux?=
 =?us-ascii?Q?ly1srcRmQHHEb45n6r5L/Fi/KXRBpVg1eqM9racRCTsssmcJr6QP9F3F9CNT?=
 =?us-ascii?Q?vVcpMczZ+JGkNh1XaROWE7J8ET87Nyd/2y9pIvfygnjK3RTmFhvbDPCMrlHT?=
 =?us-ascii?Q?/RBxYz2gDzhf/1k8vllbawGZTn4M5xHkLtjHcnRDk7kDKnUtBRANApq27oMn?=
 =?us-ascii?Q?g1E/Aw/A7nhZbkHOSgj15HEAFD3VYujWs2B7+qP1Axne9dHVqTL8Rw4LqFWd?=
 =?us-ascii?Q?l6KXFpUYEXIMlQ5DSHg7qfEA6fmsMcYzvetT52FSCKaYVhA9Y0lA3Lanw7Wy?=
 =?us-ascii?Q?fqHmPf8QRmVs5z6RjQr9Ocv60MPIZoTydGJeQhxDE+n0Fj/QjBdztzFN2a+a?=
 =?us-ascii?Q?bS+TR9XYtfwaIjlmkijhRKe0lLFJs/Msefg2ZL33sj6Jz5wj8kL709yoxhFQ?=
 =?us-ascii?Q?lHmghiTVi/4XgdiPZcO0MZcTYXVScKPbyrgms/qqkyYZ830kBdBjtE5rLwWi?=
 =?us-ascii?Q?AliOmVADpMOymgbdm4VZhfs1OHFIbbTQzvkt9qX3HrK3Ze+I/aNeMKFmVB2X?=
 =?us-ascii?Q?oK36sOJP6ykM8KzElwNBdFLfaAP5NOyKG1mY4oQG33VLSJXThUiGeIFn1hGO?=
 =?us-ascii?Q?Qx0bEiQ6CRsY8SnzclwSwGT2z4nK15iUvfRQGF42IH59pnGtuQxSaJrTnGCp?=
 =?us-ascii?Q?JcAzX0wQnHpE7Odw2vg+acUlvRJFJATn++g3bfmVOMfAxD4M8I4PVFtEtfds?=
 =?us-ascii?Q?0FPKZoUlx8azoJrQHlrr8K9A511t0y78b0dQ6KEcXEaXCnGH4cKjaWYcLTQO?=
 =?us-ascii?Q?zMaIKx0B0cisfg5++m0G+RPmvVh1BFTlWsYoL6HYiP7G/Tk31qlSg0WqmvSH?=
 =?us-ascii?Q?a0QzZC3fsKqL3esgR45hVjrr4lc4ji5fAnMLsKF0dlKiEkxYmorhj5bQwrGl?=
 =?us-ascii?Q?lsmv4/5IwSGHr4uDZOeOqy0Fnt8+I5s6USnYjZCUk0j0KczqLKbqbaiPFE4R?=
 =?us-ascii?Q?7CO/rSlI9c+i/uPB+VTWiiqVqcZFSDIbY3LsZ2GICgvDbyekqHjyY80Z9gRD?=
 =?us-ascii?Q?o/bUjmm2iGBEkex3aLcX0oY7EoCB8Ag2BrReShRzWYLl2rmkW8xD8jF3fG6A?=
 =?us-ascii?Q?xUPOfqDwew32j5ovmjkM4pom1BrqgsKktU66+E0FicT3O2ia+PCz/DjnQUIZ?=
 =?us-ascii?Q?NoSUUzrV/ylG1pKVVRiKgdUBRsnuQYgv5a2MV2hDCCTtHuvwqFrz1Arc0YgB?=
 =?us-ascii?Q?o9w4hF66Clmq96ctuKFj73JlKt3c626TeZVw8Fq9OE5Iq64zJJV+e4fGsWUT?=
 =?us-ascii?Q?3gcA+wm4CcIO4XozTgnDIhyq?=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6766
Original-Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	eaea3326-76d3-4f65-3c91-08d929d0c266
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fb4Gp7vCmtOI6HesvxtPyZnhqLx7B0Rhe4T/gJlFZ/3FDlDetKKJRBJuVU8LEm6GX45kfV41HDbAZbbo4Hr6WS1FJFvESd0brqVFjBKtsUgm1blNIreox6alr3lNBFjJk7q9/5tJ0uIq+XmxEUxi2IsSrEST0baXpOsyuu0gm6bdevZjmhVE/HNCgz4fyMAXgjLn56buXHn50IKkLfR0KI1CtMCpmLYwUus0LCUX4wZPMIWzzrtuL+B13O6rX30lOc7iNomjrO5k7eiSPkjZvszEPYQ7PmkBxAgoD3HQokqW9INtq0X9lQ3MxHbez9cgox08KDrgRkOyionjzOpxkMdKX+Ri/ZnLqMR+y6eXEgG0TCykmuiFRvHhGDJUQFCXeOc0zuIhgOdsSlQx+hlmqCZ7eAcB8Ykk9FnP3dA6FzxIhO6/VAJOLuZts4QgZ8v/38psmmFKQctZwr8qADTD4j8Y0sR7PXA49ic7KJBaoxjy1wbraR8fqtw32CeD13yk3yfek3oXUO6kw+WnSZGcqi4YA8yZultgCEJKsEkSnuc9m4UQjnD2HV+61u8aaRc/h7+CZFSd9wDgdGfyyJRiK7Gg032z/2V74dcraWwUDuzUGe30DK8z8Aau21LUnaWaCThr4fUMyWzul7Qx6m1sJg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(346002)(396003)(46966006)(36840700001)(44832011)(2616005)(6666004)(956004)(6486002)(478600001)(2906002)(86362001)(83380400001)(186003)(16526019)(26005)(6506007)(82740400003)(53546011)(33656002)(81166007)(336012)(36756003)(8936002)(82310400003)(8676002)(356005)(6512007)(5660300002)(6916009)(4326008)(70586007)(70206006)(47076005)(36860700001)(54906003)(316002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2021 16:25:10.9776
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 17115bbf-e062-4aee-06fc-08d929d0d273
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2439

Hello Xen community,

Can someone have a look at these patches? Some of them are acked, but
others still need review.

Many thanks!

Cheers,
Luca

> On 10 May 2021, at 09:40, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> This series introduces Doxygen into the Sphinx html docs generation.
> One benefit is keeping most of the documentation in the Xen source
> files, where it is easier to maintain; on the other hand, there are
> some limitations of Doxygen that should be addressed by modifying
> the current codebase (for example, Doxygen can't parse anonymous
> structures/unions).
>
> To reproduce the documentation, Xen must be compiled first, because
> most of the headers are generated at compile time by the
> makefiles.
>
> The steps to generate the Sphinx html docs follow; some packages
> may be required on your machine, and everything needed is suggested
> by the autoconf script.
> Here I'm building the arm64 docs (the only ones introduced so far
> by this series):
>
> ./configure
> make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-" menuconfig
> make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-"
> make -C docs XEN_TARGET_ARCH="arm64" sphinx-html
>
> The generated docs are now in docs/sphinx/html/, starting from the
> index.html page.
>
> Luca Fancellu (9):
>  docs: add doxygen configuration file
>  docs: add Xen png logo for the doxygen documentation
>  docs: add doxygen templates
>  m4/python: add function to docs_tool.m4 and new m4 module
>  docs: add checks to configure for sphinx and doxygen
>  docs: add doxygen preprocessor and related files
>  docs: Change Makefile and sphinx configuration for doxygen
>  docs: hypercalls sphinx skeleton for generated html
>  docs/doxygen: doxygen documentation for grant_table.h
>
> .gitignore                                    |    7 +
> config/Docs.mk.in                             |    2 +
> docs/Makefile                                 |   46 +-
> docs/conf.py                                  |   48 +-
> docs/configure                                |  258 ++
> docs/configure.ac                             |   15 +
> docs/hypercall-interfaces/arm32.rst           |   32 +
> docs/hypercall-interfaces/arm64.rst           |   33 +
> .../common/grant_tables.rst                   |    9 +
> docs/hypercall-interfaces/index.rst.in        |    7 +
> docs/hypercall-interfaces/x86_64.rst          |   32 +
> docs/index.rst                                |    8 +
> docs/xen-doxygen/customdoxygen.css            |   36 +
> docs/xen-doxygen/doxy-preprocessor.py         |  110 +
> docs/xen-doxygen/doxy_input.list              |    1 +
> docs/xen-doxygen/doxygen_include.h.in         |   32 +
> docs/xen-doxygen/footer.html                  |   21 +
> docs/xen-doxygen/header.html                  |   56 +
> docs/xen-doxygen/mainpage.md                  |    5 +
> docs/xen-doxygen/xen_project_logo_165x67.png  |  Bin 0 -> 18223 bytes
> docs/xen.doxyfile.in                          | 2316 +++++++++++++++++
> m4/ax_python_module.m4                        |   56 +
> m4/docs_tool.m4                               |    9 +
> xen/include/public/grant_table.h              |  387 +--
> 24 files changed, 3367 insertions(+), 159 deletions(-)
> create mode 100644 docs/hypercall-interfaces/arm32.rst
> create mode 100644 docs/hypercall-interfaces/arm64.rst
> create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
> create mode 100644 docs/hypercall-interfaces/index.rst.in
> create mode 100644 docs/hypercall-interfaces/x86_64.rst
> create mode 100644 docs/xen-doxygen/customdoxygen.css
> create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
> create mode 100644 docs/xen-doxygen/doxy_input.list
> create mode 100644 docs/xen-doxygen/doxygen_include.h.in
> create mode 100644 docs/xen-doxygen/footer.html
> create mode 100644 docs/xen-doxygen/header.html
> create mode 100644 docs/xen-doxygen/mainpage.md
> create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
> create mode 100644 docs/xen.doxyfile.in
> create mode 100644 m4/ax_python_module.m4
>
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 17:31:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 17:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138111.255751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJ5u-00068a-0e; Mon, 07 Jun 2021 17:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138111.255751; Mon, 07 Jun 2021 17:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJ5t-00068T-Tq; Mon, 07 Jun 2021 17:31:21 +0000
Received: by outflank-mailman (input) for mailman id 138111;
 Mon, 07 Jun 2021 17:31:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqJ5t-00068J-5Z; Mon, 07 Jun 2021 17:31:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqJ5s-0005Pg-WF; Mon, 07 Jun 2021 17:31:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqJ5s-0001Hc-Nb; Mon, 07 Jun 2021 17:31:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqJ5s-0000Hj-N7; Mon, 07 Jun 2021 17:31:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tEr2elnUdC4D0onB9CVVyh3qYpEdROZ5T+qQgnmWZ0A=; b=JfcsXfn00ePvGAXktlVYpyY9kt
	a07Bd9ZZ9nVoBNWccKex8kS3cV4VmQ++Xike3DCt9G6qCvscc/f+HnwFulOzPb7UNv4e6E5YVgBE1
	yIXXaXtzi/BYHYnIwKdiHLOjpc2QbAegdT+dTee0WlOSBjeCNg9yPrlVdcXGvjk1v+TA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162475-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162475: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 17:31:20 +0000

flight 162475 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162475/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install  fail pass in 162422

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 162422 like 162330
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162422
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162422
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162422
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162422
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162422
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162422
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162422
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162422
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162422
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162422
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162422
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162475  2021-06-07 01:51:40 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 18:09:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 18:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138123.255770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJh0-000188-9b; Mon, 07 Jun 2021 18:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138123.255770; Mon, 07 Jun 2021 18:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJh0-000181-6W; Mon, 07 Jun 2021 18:09:42 +0000
Received: by outflank-mailman (input) for mailman id 138123;
 Mon, 07 Jun 2021 18:09:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqJgy-00017u-DV
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 18:09:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJgy-00067F-7m; Mon, 07 Jun 2021 18:09:40 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJgy-0002Ia-1W; Mon, 07 Jun 2021 18:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vgsRX0lG9uOSjZmvwi1ft0z5cXXen+jh/q5I+lX7JWI=; b=V4ddJ4CBT5bHEO7SwAp5KwKgup
	e7ptOacjwnjYpGiiaCPgwUZ79CxS5+VByHuVvXdfoamehwVf5U7GukGXgpUDIRSLOeI387BvcLe22
	ylty4RPT2Q4iQYhTaPFK634gaB0DcpHYwF90Lysx5qNwC8K0TA+qqCKkkuzn94lzrpck=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Cc: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>
Date: Mon, 7 Jun 2021 19:09:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 04/06/2021 00:55, Stefano Stabellini wrote:
> On Thu, 3 Jun 2021, Julien Grall wrote:
>> On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> On Thu, 3 Jun 2021, Julien Grall wrote:
>>>> On 02/06/2021 11:09, Penny Zheng wrote:
>>>>> I could not think of a way to fix it properly in code. Do you have any
>>>>> suggestion? Or should we just put a warning in doc/commits?
>>>>
>>>> The correct approach is to find the parent of staticmemdomU1 (i.e.
>>>> reserved-memory) and use the #address-cells and #size-cells from there.
>>>
>>> Julien is right about how to parse the static-memory.
>>>
>>> But I have a suggestion on the new binding. The /reserved-memory node is
>>> a weird node: it is one of the very few nodes (the only node aside from
>>> /chosen) which is about software configurations rather than hardware
>>> description.
>>>
>>> For this reason, in a device tree with multiple domains /reserved-memory
>>> doesn't make a lot of sense: for which domain is the memory reserved?
>>
>> IMHO, /reserved-memory refers to the memory that the hypervisor should
>> not touch. It is just a coincidence that most of the domains are then
>> passed through to dom0.
>>
>> This also matches the fact that the GIC and /memory are consumed by the
>> hypervisor directly and not the domain.
> 
> In system device tree one of the key principles is to distinguish between
> hardware description and domains configuration. The domains
> configuration is under /domains (originally it was under /chosen then
> the DT maintainers requested to move it to its own top-level node), while
> everything else is for hardware description.
> 
> /chosen and /reserved-memory are exceptions. They are top-level nodes
> but they are for software configurations. In system device tree
> configurations go under the domain node. This makes sense: Xen, dom0 and
> domU can all have different reserved-memory and chosen configurations.
> 
> /domains/domU1/reserved-memory gives us a clear way to express
> reserved-memory configurations for domU1.
> 
> Which leaves us with /reserved-memory. Who is that for? It is for the
> default domain.
> 
> The default domain is the one receiving all devices by default. In a Xen
> setting, it is probably Dom0. 

Let's take an example: say in the future someone wants to allocate a 
specific region for the memory used by the GICv3 ITS.

From what you said above, /reserved-memory would be used by dom0. So 
how would you be able to tell the hypervisor that the region is reserved 
for itself?
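
As a purely illustrative sketch (the node name, address and use of 
"no-map" are my assumptions, not an agreed binding), a region kept back 
for the hypervisor's own use might be described as:

```dts
/reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* Hypothetical: memory set aside for the hypervisor itself,
     * e.g. for GICv3 ITS tables, rather than for any domain. */
    its-tables@8f000000 {
        reg = <0x0 0x8f000000 0x0 0x100000>;
        no-map;
    };
};
```

The open question is how a consumer would distinguish such a 
hypervisor-owned entry from one destined for dom0.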

> In this case, we don't want to add
> reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
> list is for its own drivers. We could also make an argument that the
> default domain is Xen itself. From a spec perspective, that would be
> fine too. In this case, /reserved-memory is a list of memory regions
> reserved for Xen drivers. 

We seem to have a different way to read the binding description in [1]. 
For convenience, I will copy it here:

"Reserved memory is specified as a node under the /reserved-memory node.
The operating system shall exclude reserved memory from normal usage
one can create child nodes describing particular reserved (excluded from
normal use) memory regions. Such memory regions are usually designed for
the special usage by various device drivers.
"

I read it as: this can be used to exclude any memory from the allocator 
for a specific purpose. They give the example of device drivers, but 
they don't exclude other purposes. So...

> Either way, I don't think is a great fit for
> domains memory allocations.

... I don't really understand why this is not a great fit. The regions 
have been *reserved* for a purpose.

>>>
>>> So I don't think we want to use reserved-memory for this, either
>>> /reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
>>> good to align it with system device tree and define it as a new property
>>> under domU1.
>>
>> Do you have any formal documentation of the system device-tree?
> 
> It lives here:
> https://github.com/devicetree-org/lopper/tree/master/specification
> 
> Start from specification.md. It is the oldest part of the spec, so it is
> not yet written with a formal specification language.
> 
> FYI there are a number of things in-flight in regards to domains that
> we discussed in the last call but they are not yet settled, thus, they
> are not yet committed (access flags definitions and hierarchical
> domains). However, they don't affect domains memory allocations so from
> that perspective nothing has changed.

Thanks!

> 
> 
>>> In system device tree we would use a property called "memory" to specify
>>> one or more ranges, e.g.:
>>>
>>>      domU1 {
>>>          memory = <0x0 0x500000 0x0 0x7fb00000>;
>>>
>>> Unfortunately for xen,domains we have already defined "memory" to
>>> specify an amount, rather than a range. That's too bad because the most
>>> natural way to do this would be:
>>>
>>>      domU1 {
>>>          size = <amount>;
>>>          memory = <ranges>;
>>>
>>> When we'll introduce native system device tree support in Xen we'll be
>>> able to do that. For now, we need to come up with a different property.
>>> For instance: "static-memory" (other names are welcome if you have a
>>> better suggestion).
>>>
>>> We use a new property called "static-memory" together with
>>> #static-memory-address-cells and #static-memory-size-cells to define how
>>> many cells to use for address and size.
>>>
>>> Example:
>>>
>>>      domU1 {
>>>          #static-memory-address-cells = <2>;
>>>          #static-memory-size-cells = <2>;
>>>          static-memory = <0x0 0x500000 0x0 0x7fb00000>;
>>
>> This is pretty similar to what Penny suggested. But I dislike it
>> because of the amount of code that needs to be duplicated with the
>> reserved memory.
> 
> Where is the code duplication? In the parsing itself?

So the problem is that we need an entirely new way to parse and walk yet 
another binding describing memory excluded from the hypervisor's normal 
allocator.

The code is pretty much the same as the /reserved-memory parsing, except 
it will use different address cells, size cells and property.
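
To make the overlap concrete: the common part, walking <address size> 
pairs with given cell counts, could in principle be a shared helper. A 
rough C sketch (names invented for illustration, not actual Xen code; 
cell values are assumed to be already converted from the big-endian wire 
format, i.e. after fdt32_to_cpu()):

```c
#include <assert.h>
#include <stdint.h>

/* Combine 'cells' 32-bit device tree cells into one 64-bit value. */
static uint64_t dt_read_cells(const uint32_t *prop, unsigned int cells)
{
    uint64_t val = 0;

    while ( cells-- )
        val = (val << 32) | *prop++;

    return val;
}

/* Decode one <address size> pair using the parent's #address-cells and
 * #size-cells -- the part that would otherwise end up duplicated for
 * every new binding. */
static void dt_read_range(const uint32_t *prop,
                          unsigned int addr_cells, unsigned int size_cells,
                          uint64_t *addr, uint64_t *size)
{
    *addr = dt_read_cells(prop, addr_cells);
    *size = dt_read_cells(prop + addr_cells, size_cells);
}
```

With a helper of this shape, /reserved-memory and a hypothetical 
static-memory property would differ only in where the cell counts and 
the property come from.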

> 
> If there is code duplication, can we find a way to share some of the
> code to avoid the duplication?

Feel free to propose one. I suggested to use /reserved-memory because 
this is the approach that makes the most sense to me (see my reply above).

TBH, even after your explanation, I am still a bit puzzled as to why 
/reserved-memory cannot be leveraged to exclude domain regions from the 
hypervisor allocator.

Cheers,

[1] 
https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 18:12:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 18:12:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138128.255782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJk5-0002Ry-P9; Mon, 07 Jun 2021 18:12:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138128.255782; Mon, 07 Jun 2021 18:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJk5-0002Rr-KU; Mon, 07 Jun 2021 18:12:53 +0000
Received: by outflank-mailman (input) for mailman id 138128;
 Mon, 07 Jun 2021 18:12:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqJk4-0002Rl-2N
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 18:12:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJk3-0006BL-0H; Mon, 07 Jun 2021 18:12:51 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJk2-0002aI-QQ; Mon, 07 Jun 2021 18:12:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4kloaPrlNBIxK3w/O7f0xVZinn0chaMi3g+JATOnGic=; b=c5cTD50YUkLB81y533t8xyHfIf
	5ddPrVqWqJwhwhuYoRwTdKQeTBTzwTa7OSCOlCFowgpjcc4q24oMLOSogJAw3sK/TPuSv+uVz1odn
	SXo4HxJ7Y+3CLQfnI2w43oJNfbKc6/x1nqhoIKDevYJ83uEIC4l4qQmLorMSWTBMjpHc=;
Subject: Re: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
 <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
 <e805e525-f024-8b5f-3814-0c1346a227f8@suse.com>
 <ccdc7909-9ef2-470e-fefd-bc6523fcdf73@xen.org>
 <403746c0-1b36-f782-3f71-2a1cd129aa6e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7e5a09ba-3b47-4d8a-0bcf-a3933e049df1@xen.org>
Date: Mon, 7 Jun 2021 19:12:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <403746c0-1b36-f782-3f71-2a1cd129aa6e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 04/06/2021 14:23, Jan Beulich wrote:
> On 03.06.2021 11:39, Julien Grall wrote:
>> On 27/05/2021 14:58, Jan Beulich wrote:
>>> On 27.05.2021 15:06, Julien Grall wrote:
>>>> On 27/05/2021 13:33, Jan Beulich wrote:
>>>>> @@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
>>>>>         if ( first_dirty != INVALID_DIRTY_IDX ||
>>>>>              (scrub_debug && !(memflags & MEMF_no_scrub)) )
>>>>>         {
>>>>> +        bool cold = d && d != current->domain;
>>>>
>>>> So the assumption is if the domain is not running, then the content is
>>>> not in the cache. Is that correct?
>>>
>>> Not exactly: For one, instead of "not running" it is "is not the current
>>> domain", i.e. there may still be vCPU-s of the domain running elsewhere.
>>> And for the cache the question isn't so much of "is in cache", but to
>>> avoid needlessly bringing contents into the cache when the data is
>>> unlikely to be used again soon.
>>
>> Ok. Can this be clarified in the commit message?
> 
> I had updated it already the other day to
> 
> "Especially when dealing with large amounts of memory, memset() may not
>   be very efficient; this can be bad enough that even for debug builds a
>   custom function is warranted. We additionally want to distinguish "hot"
>   and "cold" cases (with, as initial heuristic, "hot" being for any
>   allocations a domain does for itself, assuming that in all other cases
>   the page wouldn't be accessed [again] soon). The goal is for accesses
>   of "cold" pages to not disturb caches (albeit finding a good balance
>   between this and the higher latency looks to be difficult)."
> 
> Is this good enough?

Yes. Thank you for proposing an update to the commit message!
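
For readers following along, the heuristic boils down to something like 
the following (a simplified sketch with invented names; the real patch 
wires this into the page allocator, and a real "cold" path would use 
non-temporal stores rather than memset()):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

struct domain { int id; };

/* "Hot" clear: the page is likely to be touched again soon, so a
 * plain memset() through the cache is fine. */
static void clear_page_hot(void *page)
{
    memset(page, 0, PAGE_SIZE);
}

/* "Cold" clear: ideally done with non-temporal stores so scrubbing
 * does not evict useful cache lines; memset() stands in here. */
static void clear_page_cold(void *page)
{
    memset(page, 0, PAGE_SIZE);
}

/* The initial heuristic: an allocation a domain makes for itself is
 * "hot"; an allocation made on behalf of another domain (or none at
 * all) is "cold". */
static void scrub_page(void *page, const struct domain *d,
                       const struct domain *curr)
{
    bool cold = d && d != curr;

    if ( cold )
        clear_page_cold(page);
    else
        clear_page_hot(page);
}
```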

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 18:15:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 18:15:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138135.255792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJmD-00036c-3z; Mon, 07 Jun 2021 18:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138135.255792; Mon, 07 Jun 2021 18:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqJmD-00036V-0t; Mon, 07 Jun 2021 18:15:05 +0000
Received: by outflank-mailman (input) for mailman id 138135;
 Mon, 07 Jun 2021 18:15:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqJmC-00036P-6N
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 18:15:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJmB-0006EC-A1; Mon, 07 Jun 2021 18:15:03 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqJmB-0002fS-2o; Mon, 07 Jun 2021 18:15:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4pZL2seXSV736s975wjnJUTLCYC0Y6k2UPK2Kwx12pM=; b=u0HI7TbNJZKrXFw/RhcZ29HBnu
	UKK6J943icGkgGWM6RZK4+nKtt7HXegDLJ8SMtjVcoGM8Exf66mlHLDFvwiMVbQZcWTwNHdaLQozM
	2AT2oype6/D0ugE4ICe7N2VK0fAJQUq39ANnywDpQv2cGEcQazON9CCnlPBEq01r0Cxo=;
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <e5e6ab32-815c-49d8-94f3-a75d975465b3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cd727c23-9433-440c-bbb2-f2b0af61567f@xen.org>
Date: Mon, 7 Jun 2021 19:15:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <e5e6ab32-815c-49d8-94f3-a75d975465b3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 01/06/2021 12:54, Jan Beulich wrote:
> On 27.05.2021 20:48, Julien Grall wrote:
>> On 27/05/2021 12:28, Jan Beulich wrote:
>>> port_is_valid() and evtchn_from_port() are fine to use without holding
>>> any locks. Accordingly acquire the per-domain lock slightly later in
>>> evtchn_close() and evtchn_bind_vcpu().
>>
>> So I agree that port_is_valid() and evtchn_from_port() are fine to use
>> without holding any locks in evtchn_bind_vcpu(). However, this is
>> misleading to say there is no problem with evtchn_close().
>>
>> evtchn_close() can be called with current != d and therefore, there is a
>> risk that port_is_valid() may be valid and then invalid because
>> d->valid_evtchns is decremented in evtchn_destroy().
> 
> While this is the case for other functions as well (and hence a
> comment along the lines of what you ask for below should have
> been in place already), I've added
> 
> /*
>   * While calling the function is okay without holding a suitable lock yet
>   * (see the comment ahead of struct evtchn_port_ops for which ones those
>   * are), for a dying domain it may start returning false at any point - see
>   * evtchn_destroy(). This is not a fundamental problem though, as the
>   * struct evtchn instance won't disappear (and will continue to hold valid
>   * data) until final cleanup of the domain, at which point the domain itself
>   * cannot be looked up anymore and hence calls here can't occur anymore in
>   * the first place.
>   */
> 
> ...
> 
>> Thankfully the memory is still there. So the current code is okayish and
>> I could reluctantly accept this behavior to be spread. However, I don't
>> think this should be left uncommented in both the code (maybe on top of
>> port_is_valid()?) and the commit message.
> 
> ... ahead of port_is_valid() (and not, as I did intend originally,
> in evtchn_close()). As far as the commit message goes, I'll have it
> refer to the comment only.
> 
> I hope this satisfies the requests of both of you. I'll take the
> liberty and retain your ack, Roger.

Yes, this satisfies my requests. Feel free to add my reviewed-by on the 
patch.
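
As an aside, the property being relied on is small enough to sketch 
(simplified, invented structure, not the actual Xen code, which uses 
read_atomic() rather than C11 atomics):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct domain {
    atomic_uint valid_evtchns;   /* shrinks in evtchn_destroy() */
};

/* Callable without locks: the read itself is atomic, but for a dying
 * domain the answer can become stale right after it is returned. That
 * is tolerable because the struct evtchn storage stays allocated (with
 * valid contents) until final domain cleanup, by which point the
 * domain can no longer be looked up at all. */
static bool port_is_valid(struct domain *d, unsigned int port)
{
    return port < atomic_load(&d->valid_evtchns);
}
```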

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 18:44:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 18:44:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138146.255810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqKEn-0006II-FB; Mon, 07 Jun 2021 18:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138146.255810; Mon, 07 Jun 2021 18:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqKEn-0006IB-C4; Mon, 07 Jun 2021 18:44:37 +0000
Received: by outflank-mailman (input) for mailman id 138146;
 Mon, 07 Jun 2021 18:44:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqKEm-0006I1-25; Mon, 07 Jun 2021 18:44:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqKEl-0006hW-PT; Mon, 07 Jun 2021 18:44:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqKEl-0005Qb-Hu; Mon, 07 Jun 2021 18:44:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqKEl-0002Kx-HQ; Mon, 07 Jun 2021 18:44:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p/o+rkrZh60MJ8EjlCQFQe1qWQ/jlpY4ICETBC7I2OQ=; b=buEz5GBsjQWnrIqSmr3IfILcx9
	aHZiS4RqvR6wMD5l91D1gsXZ5/vQRuLzL38q8uJ6EAtl10MExer6DnHidPFok6451wxAEYKUZ7Mzv
	wnWRDeOy+/gfMuB1oDiQFw2EYv7T1jPNTxBAIDeyRgXfeh7SKtI+Zu48uruuLT114PwY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162517-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162517: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 18:44:35 +0000

flight 162517 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162517/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   20 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    After 32-bit PV-guests have been security de-supported when not running
    under PV-shim, the hypervisor will no longer be configured to support
    those domains per default when not being built as PV-shim.
    
    Unfortunately libxenguest will fail saving or restoring a PV domain
    due to this restriction, as it is trying to get the compat MFN list
    even for 64 bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features, causing them
    to be always set in guest view, irrespective of the toolstacks choice on the
    matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases appear to flip-flop between keeping and discarding the
    date and title of the referenced qemu-trad commit. I think that, with
    the hash replaced by a tag, the commit's date and title should also
    be purged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue would, in theory, potentially affect Credit2 load balancing
    logic. In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, which also means there is no need to do any load-balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking
    the idle unit. In other words, this prevents us from finding and
    picking the actual unit that we're meant to start running (which
    might be further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may lead to such a
    unit being stuck in the runq for a long time, causing malfunctions
    inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are now used in libxenguest only.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open-coding the mapping of the p2m list, use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a PV Linux guest produced via "xl dump-core" is not
    usable, since as of kernel 4.14 only the linear p2m table is kept if
    Xen indicates support for it. Unfortunately xc_core_arch_map_p2m()
    still supports the 3-level p2m tree only.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 20:37:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 20:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138162.255842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqLzS-0007tE-8u; Mon, 07 Jun 2021 20:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138162.255842; Mon, 07 Jun 2021 20:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqLzS-0007t7-3x; Mon, 07 Jun 2021 20:36:54 +0000
Received: by outflank-mailman (input) for mailman id 138162;
 Mon, 07 Jun 2021 20:36:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqLzQ-0007su-Tn; Mon, 07 Jun 2021 20:36:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqLzQ-0000E3-NV; Mon, 07 Jun 2021 20:36:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqLzQ-00035h-Bp; Mon, 07 Jun 2021 20:36:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqLzQ-0001oc-BG; Mon, 07 Jun 2021 20:36:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M2i8dIvUP4w4vqz+Jpde/yfrJK0bmKIHZgslG8HNFOM=; b=Ox6IKZfggn98JIqXl12ynkC5nf
	U5UbBq6cg2Jleuhzk1V0Miidb2HtakT9OYBYvu7iY2A/mj/hzVamvVKqkJl678i5pvAc4P32oOyBG
	svtyR/WXLVOqgqWF9vPpzfQrKBWqPhLVPK+4t1U/5d2JBUxCdqi+kKpqeeJQMLJhq0Bc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162501-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162501: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6f398e533f5e259b4f937f4aa9de970f7201d166
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 20:36:52 +0000

flight 162501 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162501/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6f398e533f5e259b4f937f4aa9de970f7201d166
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  291 days
Failing since        152659  2020-08-21 14:07:39 Z  290 days  538 attempts
Testing same since   162409  2021-06-05 17:01:49 Z    2 days    5 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 169498 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 22:11:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 22:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138170.255856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqNSh-0000DJ-3o; Mon, 07 Jun 2021 22:11:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138170.255856; Mon, 07 Jun 2021 22:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqNSg-0000DC-WD; Mon, 07 Jun 2021 22:11:11 +0000
Received: by outflank-mailman (input) for mailman id 138170;
 Mon, 07 Jun 2021 22:11:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNSf-0000D2-Os; Mon, 07 Jun 2021 22:11:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNSf-0001lm-Js; Mon, 07 Jun 2021 22:11:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNSf-0006jV-D3; Mon, 07 Jun 2021 22:11:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNSf-0004mA-CY; Mon, 07 Jun 2021 22:11:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mIE7+5xU1zaGdBgFsb9LmN7eMf9u7IudBOKBQHZiywI=; b=ZRYe5Hiw1frfBon9NzSf6DQl70
	+Drti3R7ALdaS71g+Edwm+SGuP5VzsjOOdD9RgsFyd1YlXYTGIzZHvPks4DVGwRoHIh9gWKpSpOrW
	KS+xkqCy2FoEb8985cNkVEeQZmt7d6MGxkI2bQoxyFRI7NeXe+WMIijsutvXKpcpeljg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162523-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162523: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 22:11:09 +0000

flight 162523 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162523/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   21 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    Now that 32-bit PV guests are security de-supported when not running under
    PV-shim, the hypervisor is no longer configured to support those domains by
    default unless it is built as PV-shim.
    
    Unfortunately libxenguest will fail to save or restore a PV domain due to
    this restriction, as it tries to get the compat MFN list even for 64-bit
    guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
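
The guard described above can be sketched as follows; `struct ctx`, `guest_width` and `get_compat_m2p` are illustrative stand-ins, not libxenguest's actual identifiers.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for libxenguest per-domain state. */
struct ctx {
    unsigned int guest_width;   /* 4 for 32-bit PV guests, 8 for 64-bit */
    bool got_compat_m2p;
};

/* Hypothetical fetch of the compat MFN list; on hypervisors built
 * without 32-bit PV support this call would fail. */
static int get_compat_m2p(struct ctx *c)
{
    c->got_compat_m2p = true;
    return 0;
}

static int map_m2p(struct ctx *c)
{
    /* Only 32-bit PV guests need the compat MFN list. */
    if (c->guest_width == 4)
        return get_compat_m2p(c);
    return 0;   /* 64-bit guests: skip it, avoiding the spurious failure */
}
```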

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features, causing them
    to always be set in the guest's view, irrespective of the toolstack's choice
    on the matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
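
The before/after logic can be sketched like this; the bit positions and helper names are made up for illustration and do not match Xen's real feature encoding.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions, not Xen's real feature encoding. */
#define F_HLE  (1u << 4)
#define F_RTM  (1u << 11)

/* Buggy form: OR in the whole special_features[] element, which forces
 * HLE/RTM on in the guest's view regardless of the toolstack's choice. */
static uint32_t adjust_buggy(uint32_t guest, uint32_t special_mask)
{
    return guest | special_mask;
}

/* Fixed form: touch only the two relevant bits, honouring whether the
 * toolstack actually asked for each feature. */
static uint32_t adjust_fixed(uint32_t guest, bool want_hle, bool want_rtm)
{
    guest &= ~(F_HLE | F_RTM);
    if (want_hle)
        guest |= F_HLE;
    if (want_rtm)
        guest |= F_RTM;
    return guest;
}
```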

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases seem to flip-flop between keeping and discarding the date and
    title of the referenced qemu-trad commit. I think that with the hash
    replaced by a tag, the commit's date and title are better purged as well.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue would, in theory, potentially affect Credit2 load balancing
    logic. In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, which also means there is no need to do any load-balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to stop
    scanning the runq itself too early. Of course, we don't run any non-runnable
    vCPUs, but we end the scan and fall back to picking the idle unit. In other
    words, this prevents us from finding and picking the actual unit that we are
    meant to start running (which might be further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may leave such a unit
    stuck in the runq for a long time, causing malfunction inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
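
A minimal sketch of the fixed scan described above; the struct and function names are illustrative, not the credit2 code's real identifiers.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct unit { bool runnable; };

/* Fixed scan: check runnable status up-front and keep walking past
 * !runnable entries instead of ending the scan early, so a runnable
 * unit further down the runq is still found. */
static struct unit *runq_pick(struct unit *q, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (q[i].runnable)
            return &q[i];
    return NULL;   /* nothing runnable at all: fall back to idle */
}
```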

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when the
    bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a PV Linux guest produced via "xl dump-core" is not usable,
    since kernel 4.14 keeps only the linear p2m table if Xen indicates it
    supports that. Unfortunately xc_core_arch_map_p2m() still supports only
    the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
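
The selection the patch enables can be sketched as a simple dispatch; the enum and helper below are hypothetical, for illustration only (the real fix copies map_p2m() from libxenguest into libxenctrl).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical layout tags for the two p2m representations. */
enum p2m_layout { P2M_TREE_3LEVEL, P2M_LINEAR };

static enum p2m_layout choose_p2m_layout(bool guest_has_linear_p2m)
{
    /* Kernels >= 4.14 keep only the linear p2m list when Xen
     * advertises support for it, so prefer it when present;
     * otherwise fall back to the legacy 3-level tree. */
    return guest_has_linear_p2m ? P2M_LINEAR : P2M_TREE_3LEVEL;
}
```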

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 22:12:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 22:12:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138179.255870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqNU8-0000rZ-IJ; Mon, 07 Jun 2021 22:12:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138179.255870; Mon, 07 Jun 2021 22:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqNU8-0000rS-FL; Mon, 07 Jun 2021 22:12:40 +0000
Received: by outflank-mailman (input) for mailman id 138179;
 Mon, 07 Jun 2021 22:12:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNU7-0000rE-He; Mon, 07 Jun 2021 22:12:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNU7-0001nB-Dc; Mon, 07 Jun 2021 22:12:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNU7-0006n5-4G; Mon, 07 Jun 2021 22:12:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqNU7-0005z4-3o; Mon, 07 Jun 2021 22:12:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=O6H9/eGYWiZOX69a2x+xfLdmIi7fIfQUQ7/4PLb9Aa0=; b=hmpq1CH2pK78Smr5l1qycmpPvX
	e2lQTq3q/sO0nC6POM9c/ZRPUB1PxhZMKrQ9rVCxKFV7/bgNpH90sXbjWX5Sp5YICb8PMzD6iGAEp
	+8kdo3St1j+A2G+6Phlo8QgGaPgbPie9HWkYF2cG5H3CU9Cjfu14YN6lLlGQnG86BdD8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1lqNU7-0005z4-3o@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 22:12:39 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162526/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/162526.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 162474 fail [host=pinot0] / 162379 ok.
Failure / basis pass flights: 162474 / 162379
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0-ddb3fdbef30de5a2946f9bd51060e8d5b1987aef git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#1cbd2d914939ee6028e9688d4ba859a528c28405-6f398e533f5e259b4f937f4aa9de970f7201d166 git://xenbits.xen.org/osstest/seabios.git#7292e4a0a8f58333ccbd2d0d47242f9865083c9c-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 15001 nodes in revision graph
Searching for test results:
 162409 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162379 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162429 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162454 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162477 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 924c2b847f0bc4325f6d14e562e2fb2d8acbc4d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162484 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162487 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e120c962f50432199d4125f0b7066a78ccbad6f3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162491 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d6ba8aa6ef628f5d865865e0aba4a329ee0d0728 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162493 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162497 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162474 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f398e533f5e259b4f937f4aa9de970f7201d166 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162499 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e7641171b6c1f858f3d979c0e8f04d6c12870baa 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162502 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162507 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162519 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162521 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162525 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162501 []
 162526 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 162379 (pass), for basis pass
 Result found: flight 162409 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 162477 (pass), for basis pass
 Repro found: flight 162484 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cbd2d914939ee6028e9688d4ba859a528c28405 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
No revisions left to test, checking graph state.
 Result found: flight 162502 (pass), for last pass
 Result found: flight 162507 (fail), for first failure
 Repro found: flight 162519 (pass), for last pass
 Repro found: flight 162521 (fail), for first failure
 Repro found: flight 162525 (pass), for last pass
 Repro found: flight 162526 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162526/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
162526: tolerable ALL FAIL

flight 162526 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162526/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jun 07 23:05:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 23:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138191.255895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqOIn-0005n5-Qt; Mon, 07 Jun 2021 23:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138191.255895; Mon, 07 Jun 2021 23:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqOIn-0005my-NA; Mon, 07 Jun 2021 23:05:01 +0000
Received: by outflank-mailman (input) for mailman id 138191;
 Mon, 07 Jun 2021 23:05:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqOIm-0005mo-H3; Mon, 07 Jun 2021 23:05:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqOIm-0002eo-9L; Mon, 07 Jun 2021 23:05:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqOIl-0000A9-Tl; Mon, 07 Jun 2021 23:05:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqOIl-0001PP-TJ; Mon, 07 Jun 2021 23:04:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DakuHNyp3RHh1co1Uup0wIxCsWm5NMcIvQX/BXP5a94=; b=Gmp8vDF4HnqEG23H6Hf9jd1yc3
	0YvXxODGIFPzBpbcHBdCWcleVm1AHX0AOG3GNnIk36NJOMX2CgxeBrw5nzounpRTsotwAkZ4S+dIU
	pjdrnJPPSUqjbxw7+wQ/5ND3uuZPoK9REelLLSCDageMGCN7GCcKP5DUB/k4fIZPzSkU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162483-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162483: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=614124bea77e452aa6df7a8714e8bc820b489922
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Jun 2021 23:04:59 +0000

flight 162483 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162483/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                614124bea77e452aa6df7a8714e8bc820b489922
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  311 days
Failing since        152366  2020-08-01 20:49:34 Z  310 days  530 attempts
Testing same since   162483  2021-06-07 04:42:27 Z    0 days    1 attempts

------------------------------------------------------------
6149 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1674012 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 07 23:44:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Jun 2021 23:44:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138204.255910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqOue-0001OP-3V; Mon, 07 Jun 2021 23:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138204.255910; Mon, 07 Jun 2021 23:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqOud-0001OI-VD; Mon, 07 Jun 2021 23:44:07 +0000
Received: by outflank-mailman (input) for mailman id 138204;
 Mon, 07 Jun 2021 23:44:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AHWm=LB=onlineschubla.de=paul@srs-us1.protection.inumbo.net>)
 id 1lqOuc-0001Ny-KN
 for xen-devel@lists.xenproject.org; Mon, 07 Jun 2021 23:44:06 +0000
Received: from DEU01-FR2-obe.outbound.protection.outlook.com (unknown
 [40.107.135.122]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca772771-8e3d-4d0f-bad5-4fed6e4e2c1f;
 Mon, 07 Jun 2021 23:44:05 +0000 (UTC)
Received: from FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:45::10)
 by FRYP281MB0029.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:8::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.12; Mon, 7 Jun 2021 23:44:03 +0000
Received: from FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
 ([fe80::184f:c6ec:f202:bf2d]) by FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
 ([fe80::184f:c6ec:f202:bf2d%8]) with mapi id 15.20.4219.019; Mon, 7 Jun 2021
 23:44:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca772771-8e3d-4d0f-bad5-4fed6e4e2c1f
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=onlineschubla.de; dmarc=pass action=none
 header.from=onlineschubla.de; dkim=pass header.d=onlineschubla.de; arc=none
From: Paul Leiber <paul@onlineschubla.de>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: [BUG] Passed through PCI devices lost after Windows HVM DomU reboot
Thread-Topic: [BUG] Passed through PCI devices lost after Windows HVM DomU
 reboot
Thread-Index: Addb9FwKHMmb5HghTwunCUNmuZyBkw==
Date: Mon, 7 Jun 2021 23:44:03 +0000
Message-ID:
 <FRYP281MB05828EB0C49C963C7954578CB0389@FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM>
Accept-Language: en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=onlineschubla.de;
x-originating-ip: [2003:d2:1f0a:31f0:dbf:c2a2:7af5:e43e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 34cf2b31-0a79-4204-ce82-08d92a0e21f3
x-ms-traffictypediagnostic: FRYP281MB0029:
x-microsoft-antispam-prvs:
 <FRYP281MB00296CDB79EA95C351A7900CB0389@FRYP281MB0029.DEUP281.PROD.OUTLOOK.COM>
x-ms-oob-tlc-oobclassifiers: OLM:826;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: onlineschubla.de
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-Network-Message-Id: 34cf2b31-0a79-4204-ce82-08d92a0e21f3
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jun 2021 23:44:03.6045
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: bfc8b046-4d00-4e98-8679-43c06bdec9db
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 7REWdT1lZDwdTOHOND8kuT1VO5/Z7LiKgny/bSM1c7B0T7FTPHpZJCgqAbnvfETmcu5321gikph7qTd75CSPfw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: FRYP281MB0029

Dear developers,

I am a mostly very happy Xen beginner. My Debian PV DomUs work like a charm out of the box. The only remaining Windows instance is a MediaPortal TV Server backend on a Windows Server 2012 HVM DomU. But I have problems with reliably passing through PCIe cards to this Windows HVM DomU. Further testing has led me to the suspicion that there might be a bug where PCI passthrough does not work after a Windows DomU reboot.

Please be patient with me if I am not reporting this bug as is customary; this is my first official bug report ever. (If it is indeed a bug.)

Background: I am running a standard apt-get Xen installation based on Debian Buster. My hardware is a Fujitsu D3417-B1 with an Intel Xeon E3-1235L v5 CPU, 32 GB ECC RAM, and a Hauppauge HVR-2205 TV tuner card. To get PCI passthrough to work, I needed to set "permissive=1" and limit the Dom0 memory size. I could then pass through the PCIe TV tuner card to my Windows Server 2012 DomU without any problem. It got detected and worked very well in the Windows DomU. However, sometimes the card somehow got "lost" in the DomU, i.e. it disappeared from Device Manager and wasn't functional anymore. I could then reattach it to the DomU with "xl pci-attach". My TV software (MediaPortal) then seemed to recognize a new PCIe card instance (e.g. an internal ID number of the tuner card was incremented). I then needed to reapply some settings. Other than that, the card was fully functional.

After more testing, I have come to the following conclusion: it seems that every time I do a _reboot_ from within a Windows DomU, the PCI device does not get attached to the DomU. After the DomU reboot, the device is immediately available for attachment in the Dom0 when I check for it with "xl pci-assignable-list", and I can reattach it to the DomU with "xl pci-attach" without any major problems besides some annoying side effects (e.g. the need to reapply settings). If I _shut down_ the DomU from within the DomU (with the Windows shutdown mechanism) or from the Dom0 (with "xl shutdown") and restart the DomU with "xl create", the PCIe device gets attached automatically at DomU boot and the unwanted side effects do not occur.

What I would expect is that the passed-through PCIe device is available in my Windows DomU after each reboot (e.g. after Windows Update automatically installs patches and reboots).

Steps which I can take to provoke the unwanted behavior:
1. Install Xen on Debian Buster following mostly https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide
2. Set up PCI passthrough following mostly https://wiki.xenproject.org/wiki/Xen_PCI_Passthrough (see additional details below)
3. Set up a Windows Server 2012 HVM (cfg below)
4. Start the Windows Server 2012 HVM with "xl create /etc/xen/matrix.cfg", connect to the Windows HVM via VNC for installation and initial settings, then via Remote Desktop
5. Check for the PCIe device in Windows Device Manager: it is available
6. Initiate a reboot in Windows (go to Server Manager -> local server -> reboot)
7. Connect to the rebooted Windows via Remote Desktop
8. Check for the PCIe device in Windows Device Manager: it is not available
9. Check for the PCIe device in the Dom0 with "xl pci-assignable-list": it is available for passthrough
10. Attach the PCIe device to the Windows DomU, e.g. via "xl pci-attach 9 01:00.0"
11. Check for the PCIe device in Windows Device Manager: it is available again
12. Repetition is possible by skipping back to step 6

The xl log for a normal cold start (PCIe device attached normally) looks like this:
Waiting for domain matrix (domid 10) to die [pid 3910]

The log after a reboot (PCIe device not attached automatically) looks like this:
Waiting for domain matrix (domid 8) to die [pid 3113]
Domain 8 has shut down, reason code 1 0x1
Action for shutdown reason code 1 is restart
libxl: warning: libxl_domain.c:1739:libxl_retrieve_domain_configuration: Domain 8:Device present in JSON but not in xenstore, ignored
Domain 8 needs to be cleaned up: destroying the domain
Done. Rebooting now

Searching for this exact error message ("Device present in JSON but not in xenstore, ignored"), I found the following quite old bug report, which sounds suspiciously similar to my experience, only for PV DomUs:
https://bugzilla.redhat.com/show_bug.cgi?id=233801

Additional information which might be helpful:
- I could reproduce this behavior with two different TV tuner cards from different manufacturers (Hauppauge HVR-2205 or Digital Devices Max M4) and a network card (Intel 82574L).
- I tested the behavior with a fresh install of Windows 10, with the same results.
- I used the Hauppauge PCIe card in a Linux PV DomU (with VDR software), where the card was attached very reliably; as far as I can remember, there was only one occurrence of a non-working TV card, but I can't remember the details (i.e. whether there was a preceding reboot).
- The unwanted behavior did not occur with the bare-metal system before I switched to Xen, i.e. Windows Server 2012 running directly on the hardware with the Hauppauge PCIe card.

A description of my problem (which was a little less detailed) on the Xen Users mailing list did not get a reply, therefore I am turning to the developer mailing list. Could anybody on this list please give me advice on what I can do to solve this issue? Is there any more information you need to help me, or any more testing I could do?

Thanks in advance,

Paul



Additional information:


While trying to fix this, I changed kernel boot parameters. I figured out that the kernel boot option "xen-pciback.hide" is not necessary, as the driver is not built into the kernel; therefore I changed the parameters from
	dom0_mem=1024M,max:1024M xen-pciback.hide=(01:00.0)
to the currently used parameters:
	dom0_mem=1024M,max:1024M
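Whether the slot really ended up bound to xen-pciback can be verified from the Dom0. A minimal sketch follows; the "is_pciback" helper is my illustration (not an xl or lspci feature), and 01:00.0 is the slot from this report:

```shell
#!/bin/sh
# is_pciback: succeed if "lspci -k"-style input on stdin reports the
# device as claimed by the pciback driver (illustrative helper only).
is_pciback() {
    grep -q 'Kernel driver in use: pciback'
}

# On the Dom0, e.g.:
#   lspci -k -s 01:00.0 | is_pciback && echo "01:00.0 is bound to pciback"
```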


The Digital Devices PCIe device is assigned to xen-pciback via /etc/modprobe.d/xen-pciback.conf. There is no driver in the Dom0 for the tuner card, therefore no precautions against loading other drivers are necessary:
	options xen-pciback hide=(0000:01:00.0)


The Hauppauge card needs an additional line to prevent loading the driver in the Dom0:
	install saa7164 /sbin/modprobe xen-pciback ; /sbin/modprobe --first-time --ignore-i$
	options xen-pciback hide=(0000:01:00.0)


While doing trial and error, I changed the pci line in the Xen config file, but adding "power_mgmt=1" and "seize=1" didn't change the behavior:
	pci=['01:00.0,permissive=1,power_mgmt=1,seize=1']


Xen config file for the Windows DomU (besides the above-mentioned changes in the pci=[...] line, there were some probably minor changes between first installation and the current status, e.g. I started with VNC and later switched to SPICE):

# kernel = "/usr/lib/xen-4.0/boot/hvmloader"
type='hvm'
memory = 4096
vcpus=2
name = "matrix"
vif = ['bridge=xenbr0,mac=00:16:3E:54:A8:2B']
disk = ['phy:/dev/vg0/matrix,hda,w','phy:/dev/vg0/compudms-data,hdb,w']
device_model_version = 'qemu-xen'
boot="c"
hdtype = 'ahci'
acpi = 1
apic = 1
xen_platform_pci = 1
vendor_device = 'xenserver'
#  PCI Passthrough
pci=['01:00.0,permissive=1,power_mgmt=1']
viridian = 1
stdvga = 1
sdl = 0
serial = 'pty'
usb = 1
usbdevice = 'tablet'
keymap = 'de'
# SPICE
spice=1
spicehost='0.0.0.0'
spiceport=6000
# spicedisable_ticketing enabled means no SPICE password; use spicepasswd instead
spicedisable_ticketing=1
#spicepasswd="test"
spicevdagent=1
spice_clipboard_sharing=1
# this will automatically redirect up to 4 usb devices from spice client to domUs
#spiceusbredirection=4
# This adds intel hd audio emulated card used for spice audio
soundhw="hda"


xl info:

host                   : xxx
release                : 4.19.0-14-amd64
version                : #1 SMP Debian 4.19.171-2 (2021-01-30)
machine                : x86_64
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 1992.100
hw_caps                : bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 32542
free_memory            : 20836
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 11
xen_extra              : .4
xen_version            : 4.11.4
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=1024M,max:1024M
cc_compiler            : gcc (Debian 8.3.0-6) 8.3.0
cc_compile_by          : pkg-xen-devel
cc_compile_domain      : lists.alioth.debian.org
cc_compile_date        : Fri Dec 11 21:33:51 UTC 2020
build_id               : 6d8e0fa3ddb825695eb6c6832631b4fa2331fe41
xend_config_format     : 4


lspci -vvv (excerpt)

01:00.0 Multimedia controller: Digital Devices GmbH Device 000a
        Subsystem: Digital Devices GmbH Device 0050
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 18
        Region 0: Memory at f7200000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [50] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [70] MSI: Enable- Count=1/1 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [90] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L1, Exit Latency L0s unlimited, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range A, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v1] Vendor Specific Information: ID=0000 Rev=0 Len=00c <?>
        Kernel driver in use: pciback


xenstore-ls -fp (excerpt)

/libxl/10 = ""   (n0)
/libxl/10/device = ""   (n0)
/libxl/10/device/vbd = ""   (n0)
/libxl/10/device/vbd/768 = ""   (n0)
/libxl/10/device/vbd/768/frontend = "/local/domain/10/device/vbd/768"   (n0)
/libxl/10/device/vbd/768/backend = "/local/domain/0/backend/vbd/10/768"   (n0)
/libxl/10/device/vbd/768/params = "/dev/vg0/matrix"   (n0)
/libxl/10/device/vbd/768/script = "/etc/xen/scripts/block"   (n0)
/libxl/10/device/vbd/768/frontend-id = "10"   (n0)
/libxl/10/device/vbd/768/online = "1"   (n0)
/libxl/10/device/vbd/768/removable = "0"   (n0)
/libxl/10/device/vbd/768/bootable = "1"   (n0)
/libxl/10/device/vbd/768/state = "1"   (n0)
/libxl/10/device/vbd/768/dev = "hda"   (n0)
/libxl/10/device/vbd/768/type = "phy"   (n0)
/libxl/10/device/vbd/768/mode = "w"   (n0)
/libxl/10/device/vbd/768/device-type = "disk"   (n0)
/libxl/10/device/vbd/768/discard-enable = "1"   (n0)
/libxl/10/device/vbd/832 = ""   (n0)
/libxl/10/device/vbd/832/frontend = "/local/domain/10/device/vbd/832"   (n0)
/libxl/10/device/vbd/832/backend = "/local/domain/0/backend/vbd/10/832"   (n0)
/libxl/10/device/vbd/832/params = "/dev/vg0/compudms-data"   (n0)
/libxl/10/device/vbd/832/script = "/etc/xen/scripts/block"   (n0)
/libxl/10/device/vbd/832/frontend-id = "10"   (n0)
/libxl/10/device/vbd/832/online = "1"   (n0)
/libxl/10/device/vbd/832/removable = "0"   (n0)
/libxl/10/device/vbd/832/bootable = "1"   (n0)
/libxl/10/device/vbd/832/state = "1"   (n0)
/libxl/10/device/vbd/832/dev = "hdb"   (n0)
/libxl/10/device/vbd/832/type = "phy"   (n0)
/libxl/10/device/vbd/832/mode = "w"   (n0)
/libxl/10/device/vbd/832/device-type = "disk"   (n0)
/libxl/10/device/vbd/832/discard-enable = "1"   (n0)
/libxl/10/device/console = ""   (n0)
/libxl/10/device/console/0 = ""   (n0)
/libxl/10/device/console/0/frontend = "/local/domain/10/console"   (n0)
/libxl/10/device/console/0/backend = "/local/domain/0/backend/console/10/0"   (n0)
/libxl/10/device/console/0/frontend-id = "10"   (n0)
/libxl/10/device/console/0/online = "1"   (n0)
/libxl/10/device/console/0/state = "1"   (n0)
/libxl/10/device/console/0/protocol = "vt100"   (n0)
/libxl/10/device/vkbd = ""   (n0)
/libxl/10/device/vkbd/0 = ""   (n0)
/libxl/10/device/vkbd/0/frontend = "/local/domain/10/device/vkbd/0"   (n0)
/libxl/10/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/10/0"   (n0)
/libxl/10/device/vkbd/0/frontend-id = "10"   (n0)
/libxl/10/device/vkbd/0/online = "1"   (n0)
/libxl/10/device/vkbd/0/state = "1"   (n0)
/libxl/10/device/vif = ""   (n0)
/libxl/10/device/vif/0 = ""   (n0)
/libxl/10/device/vif/0/frontend = "/local/domain/10/device/vif/0"   (n0)
/libxl/10/device/vif/0/backend = "/local/domain/0/backend/vif/10/0"   (n0)
/libxl/10/device/vif/0/frontend-id = "10"   (n0)
/libxl/10/device/vif/0/online = "1"   (n0)
/libxl/10/device/vif/0/state = "1"   (n0)
/libxl/10/device/vif/0/script = "/etc/xen/scripts/vif-bridge"   (n0)
/libxl/10/device/vif/0/mac = "00:16:3e:54:a8:2b"   (n0)
/libxl/10/device/vif/0/bridge = "xenbr0"   (n0)
/libxl/10/device/vif/0/handle = "0"   (n0)
/libxl/10/device/vif/0/type = "vif_ioemu"   (n0)
/libxl/10/device/pci = ""   (n0)
/libxl/10/device/pci/0 = ""   (n0)
/libxl/10/device/pci/0/frontend = "/local/domain/10/device/pci/0"   (n0)
/libxl/10/device/pci/0/backend = "/local/domain/0/backend/pci/10/0"   (n0)
/libxl/10/device/pci/0/frontend-id = "10"   (n0)
/libxl/10/device/pci/0/online = "1"   (n0)
/libxl/10/device/pci/0/state = "1"   (n0)
/libxl/10/device/pci/0/domain = "matrix"   (n0)
/libxl/10/device/pci/0/key-0 = "0000:01:00.0"   (n0)
/libxl/10/device/pci/0/dev-0 = "0000:01:00.0"   (n0)
/libxl/10/device/pci/0/vdevfn-0 = "48"   (n0)
/libxl/10/device/pci/0/opts-0 = "msitranslate=0,power_mgmt=1,permissive=1"   (n0)
/libxl/10/device/pci/0/state-0 = "1"   (n0)
/libxl/10/device/pci/0/num_devs = "1"   (n0)
/libxl/10/type = "hvm"   (n0)
/libxl/10/dm-version = "qemu_xen"   (n0)



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 01:48:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 01:48:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138212.255925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqQr8-0001UW-6p; Tue, 08 Jun 2021 01:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138212.255925; Tue, 08 Jun 2021 01:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqQr8-0001UN-0V; Tue, 08 Jun 2021 01:48:38 +0000
Received: by outflank-mailman (input) for mailman id 138212;
 Tue, 08 Jun 2021 01:48:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqQr6-0001UD-Qx; Tue, 08 Jun 2021 01:48:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqQr6-0002q9-Jy; Tue, 08 Jun 2021 01:48:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqQr6-0007c1-7i; Tue, 08 Jun 2021 01:48:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqQr6-0008Ez-7C; Tue, 08 Jun 2021 01:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oA3PIG31XRrTtiaDjRIiX+ExfSBPvX3v9+hRGsg2Hm8=; b=M8OHkIAcNfc4TtWR8qwEujhUNz
	gQF3v33Wu/eFO8q5W3h7ZH+lC1vLE2mz/qk7Nc0AvWnMtS29nzeimRE8DZH5hCI17610G92B5l+nu
	ZGFEfcv72WV4G6J0E8CTfDlUNzwx42pQ8WzgpL04drqtgTm7cOwyhdQFx1JTJV5ehIM0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162531: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 01:48:36 +0000

flight 162531 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162531/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   22 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    After 32-bit PV-guests have been security de-supported when not running
    under PV-shim, the hypervisor will no longer be configured to support
    those domains per default when not being built as PV-shim.
    
    Unfortunately libxenguest will fail saving or restoring a PV domain
    due to this restriction, as it is trying to get the compat MFN list
    even for 64 bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features,
    causing them to always be set in the guest's view, irrespective of the
    toolstack's choice on the matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
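
    A minimal sketch of the bug and the fix. The bit positions below are
    made up for illustration; the real leaf/word layout lives in Xen's
    CPUID policy code.

```python
# Hypothetical bit positions within one feature word, for illustration only.
FDP_EXCP_ONLY = 1 << 6
NO_FPU_SEL = 1 << 13
HLE = 1 << 4
RTM = 1 << 11


def adjust_guest_word_buggy(guest_word, special_word):
    # Old logic: OR in the *whole* special_features[] word, so any bit
    # that later becomes "special" (e.g. HLE/RTM) is forced on in the
    # guest's view, regardless of the toolstack's choice.
    return guest_word | special_word


def adjust_guest_word_fixed(guest_word, special_word):
    # New logic: refer to the two relevant bits explicitly.
    return guest_word | (special_word & (FDP_EXCP_ONLY | NO_FPU_SEL))
```

    The fixed variant only ever sets the two FPU-related bits, leaving
    all other special-annotated features under toolstack control.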

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases seem to flip-flop between keeping and discarding the date
    and title of the referenced qemu-trad commit. With the hash replaced
    by a tag, I think the commit's date and title are better purged as
    well.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue could, in theory, affect Credit2's load-balancing logic.
    In practice, however, the problem only manifests (at least with these
    characteristics) when there is only 1 runqueue active in the cpupool,
    which also means there is no need to do any load balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking the
    idle unit. In other words, this prevents us from finding and picking
    the actual unit that we're meant to start running (which might be
    further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may leave such a
    unit stuck in the runq for a long time, causing malfunctions inside
    the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
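
    The scanning behaviour described above can be modelled like this.
    The Unit class and runq_pick name are invented stand-ins for the
    sketch, not Credit2's real data structures.

```python
class Unit:
    """A toy runqueue entry with just the field the sketch needs."""

    def __init__(self, name, runnable):
        self.name = name
        self.runnable = runnable


def runq_pick(runq, idle_unit):
    # Check runnable status up-front and keep scanning past any
    # (temporarily) non-runnable unit, instead of ending the scan early
    # and falling back to idle while a runnable unit is still waiting
    # further ahead in the queue.
    for unit in runq:
        if unit.runnable:
            return unit
    # Only when no runnable unit exists anywhere do we pick idle.
    return idle_unit
```

    With the old early-exit behaviour, a non-runnable unit at the head
    of the queue would have hidden the runnable one behind it.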

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    Some definitions are now used only in libxenguest. Move them from
    libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a PV Linux guest produced via "xl dump-core" is not
    usable, as since kernel 4.14 only the linear p2m table is kept if Xen
    indicates it supports that. Unfortunately xc_core_arch_map_p2m() still
    supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally, the mapped p2m is no longer of fixed length, so the
    interface to the mapping functions needs to be adapted. In order not
    to add even more parameters, expand struct domain_info_context and
    pass a pointer to it as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
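
    The "grow the context struct instead of the parameter list" approach
    from the commit above can be illustrated like this. The field names
    are invented for the sketch; the real struct domain_info_context
    lives in the libxenctrl sources.

```python
from dataclasses import dataclass, field


@dataclass
class DomainInfoContext:
    # Bundling the mapping state in one context means a newly
    # variable-length p2m only changes this struct, not every caller's
    # signature along the call chain.
    guest_width: int = 8
    p2m_size: int = 0  # no longer a compile-time constant
    p2m_frames: list = field(default_factory=list)


def map_p2m(ctx, frames):
    # Record the mapped frames and their (now variable) count in the
    # shared context rather than returning them through extra
    # out-parameters.
    ctx.p2m_frames = list(frames)
    ctx.p2m_size = len(ctx.p2m_frames)
    return ctx
```

    Callers then pass a single context pointer, and later additions to
    the mapping state need no further interface churn.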

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
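
    The off-by-one described above reduces to the following sketch. The
    shared_info dictionary and its "max_pfn" key are hypothetical
    stand-ins for the real shared_info layout.

```python
def highest_used_pfn(shared_info):
    # The value read from shared_info already *is* the highest used
    # pfn; the old code subtracted 1 as if it were a page count,
    # silently dropping the guest's final page from p2m handling.
    return shared_info["max_pfn"]  # fixed
    # return shared_info["max_pfn"] - 1  # old, off-by-one
```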

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:08:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138243.255958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqTxw-0003vd-60; Tue, 08 Jun 2021 05:07:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138243.255958; Tue, 08 Jun 2021 05:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqTxw-0003vW-12; Tue, 08 Jun 2021 05:07:52 +0000
Received: by outflank-mailman (input) for mailman id 138243;
 Tue, 08 Jun 2021 05:07:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqTxu-0003vM-1T; Tue, 08 Jun 2021 05:07:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqTxt-0007B8-PT; Tue, 08 Jun 2021 05:07:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqTxt-0000Q7-Cj; Tue, 08 Jun 2021 05:07:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqTxt-0004Ax-CF; Tue, 08 Jun 2021 05:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c/pXwW0s7BLcguxcIxLkOZ+Ru8j8rMgqRZkGOreBmuI=; b=sDlAjbz3calGW52D93EQ+uYVWW
	t9ugx3483z0Hdv0ExXNCZQvbBIY08m7YhECc7gKe5LNL7K6KJYueD2eRwt9szXz0n2VM8wnrQW7Qr
	GKQ9u6sTjC/8C8/kWN8dha5Pl+hom+Jp17YMB8pnTAOE3W5O2NvM45kkQ516WU+6tS44=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162527-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162527: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a35947f15c0ee695eba3c55248ec8ac3e4e23cca
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 05:07:49 +0000

flight 162527 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162527/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a35947f15c0ee695eba3c55248ec8ac3e4e23cca
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  291 days
Failing since        152659  2020-08-21 14:07:39 Z  290 days  539 attempts
Testing same since   162527  2021-06-07 21:06:59 Z    0 days    1 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 169653 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:16:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138252.255973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqU6B-0005N8-14; Tue, 08 Jun 2021 05:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138252.255973; Tue, 08 Jun 2021 05:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqU6A-0005N1-TZ; Tue, 08 Jun 2021 05:16:22 +0000
Received: by outflank-mailman (input) for mailman id 138252;
 Tue, 08 Jun 2021 05:16:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqU6A-0005Mr-3u; Tue, 08 Jun 2021 05:16:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqU69-0007KZ-VJ; Tue, 08 Jun 2021 05:16:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqU69-0000mo-Kf; Tue, 08 Jun 2021 05:16:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqU69-0003l5-KA; Tue, 08 Jun 2021 05:16:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fNR7zeYP3cX7YkkyqH060LkAMAVfwhZ5ChVsAvb7n88=; b=vqBSuRRbzsopGVPEB6UKZVaX5T
	xRbAbvqubOlF+hdx2ZvHu1Atf8J9CfTkalx9rjOddWE5vqxwX2cJSrKg4RoMBb/NYcJcUWJNS4X6W
	zwp2XmOJdaPohUi27NuPOr3m/MSYuC7ro81BZH870UVz1vMGPUUUXmquuCj0BRvFWJ3U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162534-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162534: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:debian-install:fail:heisenbug
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 05:16:21 +0000

flight 162534 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162534/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          12 debian-install             fail pass in 162531

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl         15 migrate-support-check fail in 162531 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162531 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   23 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    After 32-bit PV guests were security de-supported when not running
    under PV-shim, the hypervisor is no longer configured to support
    those domains by default when not being built as PV-shim.
    
    Unfortunately libxenguest will fail saving or restoring a PV domain
    due to this restriction, as it tries to get the compat MFN list
    even for 64-bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
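The shape of this fix can be sketched as a simple guard. The structure and names below are purely illustrative, not the real libxenguest code:

```c
#include <stdbool.h>

/* Hypothetical save/restore context; the real code lives in
 * libxenguest and uses different names. */
struct sr_ctx {
    unsigned int guest_width;  /* 4 for 32-bit PV guests, 8 for 64-bit */
};

/* Stand-in for fetching the compat MFN list; on a hypervisor built
 * without 32-bit PV support this call would fail. */
static bool get_compat_mfn_list(struct sr_ctx *ctx)
{
    (void)ctx;
    return true;
}

/* After the fix: fetch the compat MFN list only for 32-bit PV guests,
 * so 64-bit guests no longer trip over missing PV32 support. */
static bool setup_mfn_lists(struct sr_ctx *ctx)
{
    if (ctx->guest_width == 4)
        return get_compat_mfn_list(ctx);
    return true;
}
```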

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features, causing them
    to be always set in guest view, irrespective of the toolstack's choice on the
    matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
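A minimal sketch of the bug's shape, with made-up bit values rather than Xen's actual featureset encoding: ORing in the whole special-feature word forces HLE/RTM into the guest view, while referring to the two relevant bits does not.

```c
#include <stdint.h>

/* Illustrative bit assignments only. */
#define FDP_EXCP_ONLY  (1u << 0)
#define NO_FPU_SEL     (1u << 1)
#define HLE            (1u << 2)
#define RTM            (1u << 3)

/* All features carrying the '!' annotation, HLE/RTM included. */
static const uint32_t special_features =
    FDP_EXCP_ONLY | NO_FPU_SEL | HLE | RTM;

/* Buggy shape: uses the whole special-feature word, so HLE/RTM leak
 * into the guest view regardless of the toolstack's choice. */
static uint32_t adjust_buggy(uint32_t guest_fs)
{
    return guest_fs | special_features;
}

/* Fixed shape: refer to the two relevant bits specifically. */
static uint32_t adjust_fixed(uint32_t guest_fs)
{
    return guest_fs | FDP_EXCP_ONLY | NO_FPU_SEL;
}
```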

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases look to flip-flop between keeping or discarding the date
    and title of the referenced qemu-trad commit. I think with the hash
    replaced by a tag, the commit's date and title would better also be
    purged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    In theory, this issue could affect the Credit2 load balancing
    logic. In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, in which case there is no need to do any load balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking the
    idle unit. In other words, this prevents us from finding and picking
    the actual unit that we're meant to start running (which might be
    further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may lead to such a
    unit being stuck in the runq for a long time, causing malfunctions
    inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
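The scanning fix described above can be sketched as follows; the structures and names are illustrative, not the real Credit2 code:

```c
#include <stddef.h>
#include <stdbool.h>

struct unit { bool runnable; int id; struct unit *next; };

/* Buggy shape: the scan stops at the first !runnable unit and falls
 * back to idle, even if a runnable unit sits further ahead. */
static struct unit *pick_buggy(struct unit *runq)
{
    struct unit *u;
    for (u = runq; u; u = u->next) {
        if (!u->runnable)
            break;          /* scan ends too early */
        return u;
    }
    return NULL;            /* idle */
}

/* Fixed shape: check runnable status up-front and keep scanning. */
static struct unit *pick_fixed(struct unit *runq)
{
    struct unit *u;
    for (u = runq; u; u = u->next) {
        if (!u->runnable)
            continue;       /* skip it, keep looking down the runq */
        return u;
    }
    return NULL;
}
```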

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    Some definitions are now used only in libxenguest. Move them from
    libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest or in xl.
    There is a single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open-coding the mapping of the p2m list, use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a PV Linux guest produced via "xl dump-core" is not
    usable, because since kernel 4.14 only the linear p2m table is kept
    if Xen indicates it supports that. Unfortunately
    xc_core_arch_map_p2m() still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read is already
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
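The off-by-one fixed above can be sketched in two lines; the function names are illustrative only:

```c
/* The value read from the shared_info data is already the highest
 * used pfn, so subtracting 1 loses the last page. */
static unsigned long max_pfn_before_fix(unsigned long read_value)
{
    return read_value - 1;   /* wrong: drops the final pfn */
}

static unsigned long max_pfn_after_fix(unsigned long read_value)
{
    return read_value;       /* the value read is already correct */
}
```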

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:39:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138263.255991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUSI-0007lB-3D; Tue, 08 Jun 2021 05:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138263.255991; Tue, 08 Jun 2021 05:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUSH-0007l4-Vy; Tue, 08 Jun 2021 05:39:13 +0000
Received: by outflank-mailman (input) for mailman id 138263;
 Tue, 08 Jun 2021 05:39:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/91U=LC=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lqUSG-0007ky-85
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 05:39:12 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5668ddec-2d9e-43ad-a64e-da14c9cf1d62;
 Tue, 08 Jun 2021 05:39:09 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2EEDF67373; Tue,  8 Jun 2021 07:39:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5668ddec-2d9e-43ad-a64e-da14c9cf1d62
Date: Tue, 8 Jun 2021 07:39:05 +0200
From: Christoph Hellwig <hch@lst.de>
To: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>
Subject: Re: [PATCH 20/30] nullb: use blk_mq_alloc_disk
Message-ID: <20210608053905.GA14183@lst.de>
References: <20210602065345.355274-1-hch@lst.de> <20210602065345.355274-21-hch@lst.de> <BYAPR04MB4965DDD0F4479F5B492A30D2863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <BYAPR04MB4965DDD0F4479F5B492A30D2863C9@BYAPR04MB4965.namprd04.prod.outlook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Jun 03, 2021 at 12:10:09AM +0000, Chaitanya Kulkarni wrote:
> > diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> > index d8e098f1e5b5..74fb2ec63219 100644
> > --- a/drivers/block/null_blk/main.c
> > +++ b/drivers/block/null_blk/main.c
> > @@ -1851,13 +1851,12 @@ static int null_add_dev(struct nullb_device *dev)
> >  
> >  		rv = -ENOMEM;
> 
> Is above initialization needed ?

It isn't strictly required any more.


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:58:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:58:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138272.256008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlH-0001av-0f; Tue, 08 Jun 2021 05:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138272.256008; Tue, 08 Jun 2021 05:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlG-0001af-SG; Tue, 08 Jun 2021 05:58:50 +0000
Received: by outflank-mailman (input) for mailman id 138272;
 Tue, 08 Jun 2021 05:58:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q7uu=LC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqUlF-0001Yg-8N
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 05:58:49 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 637c64fe-7675-47b5-8291-cde4e509faf4;
 Tue, 08 Jun 2021 05:58:47 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id EB9F91FD4F;
 Tue,  8 Jun 2021 05:58:46 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id BF92A118DD;
 Tue,  8 Jun 2021 05:58:46 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id qBGWLRYHv2AQbAAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 08 Jun 2021 05:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 637c64fe-7675-47b5-8291-cde4e509faf4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623131926; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=oLhrbQjDOgrh2m0VWXxv7u2I2PbmH1A+bc+g88dARsk=;
	b=Nm5a9vOjlJmddT2kOVdE30SuJTCxVMDmklP+HCbTcMpU+uLMlO4UKFOu1CpI4EC/J5n2Hn
	OVmLtq999PjiwYI38qK+mhURy/7BxoZ878GkDFXMCymGZMIjdgW7I+VkLm5M6CxE/ojEb7
	OBph9jymXJZyhxgXjIlvJZyAddDM+iM=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623131926; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=oLhrbQjDOgrh2m0VWXxv7u2I2PbmH1A+bc+g88dARsk=;
	b=Nm5a9vOjlJmddT2kOVdE30SuJTCxVMDmklP+HCbTcMpU+uLMlO4UKFOu1CpI4EC/J5n2Hn
	OVmLtq999PjiwYI38qK+mhURy/7BxoZ878GkDFXMCymGZMIjdgW7I+VkLm5M6CxE/ojEb7
	OBph9jymXJZyhxgXjIlvJZyAddDM+iM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] tools/xenstore: set resource limits of xenstored
Date: Tue,  8 Jun 2021 07:58:37 +0200
Message-Id: <20210608055839.10313-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Set some limits for xenstored in order to avoid it being killed by
the OOM killer, or running out of file descriptors.

Changes in V2:
- split into 2 patches
- set limits from start script

Juergen Gross (2):
  tools/xenstore: set oom score for xenstore daemon on Linux
  tools/xenstore: set open file descriptor limit for xenstored

 tools/hotplug/Linux/init.d/sysconfig.xencommons.in | 7 +++++++
 tools/hotplug/Linux/launch-xenstore.in             | 6 ++++++
 2 files changed, 13 insertions(+)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:58:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:58:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138273.256027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlL-00027U-7V; Tue, 08 Jun 2021 05:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138273.256027; Tue, 08 Jun 2021 05:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlL-00027L-3J; Tue, 08 Jun 2021 05:58:55 +0000
Received: by outflank-mailman (input) for mailman id 138273;
 Tue, 08 Jun 2021 05:58:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q7uu=LC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqUlK-0001Yg-2p
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 05:58:54 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab4b8d68-4dbf-4443-ae63-7e5e7247d28f;
 Tue, 08 Jun 2021 05:58:48 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5D71F1FD51;
 Tue,  8 Jun 2021 05:58:47 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 307B5118DD;
 Tue,  8 Jun 2021 05:58:47 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id AMWyChcHv2AQbAAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 08 Jun 2021 05:58:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab4b8d68-4dbf-4443-ae63-7e5e7247d28f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623131927; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yUf0kgQYfvnmKxevqLhHiJSctn5eVI9NLzOeej8C5tg=;
	b=JKQDlEuS9yvNBnHdV89lkxA+VM4Er1ZUn6rc79KzTdyfBNRTAudmIwkCb9qz73GYaZ1VN2
	RxiYH4Fe9D0fPb+5C2SFjOR3nJf53T9izV87DbSCTzYPtZ+tGNdajcdBNUlcPkpzK2Tva7
	7av2EmdgFA/WVAvTy1ejrR+4h0gu1cY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623131927; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yUf0kgQYfvnmKxevqLhHiJSctn5eVI9NLzOeej8C5tg=;
	b=JKQDlEuS9yvNBnHdV89lkxA+VM4Er1ZUn6rc79KzTdyfBNRTAudmIwkCb9qz73GYaZ1VN2
	RxiYH4Fe9D0fPb+5C2SFjOR3nJf53T9izV87DbSCTzYPtZ+tGNdajcdBNUlcPkpzK2Tva7
	7av2EmdgFA/WVAvTy1ejrR+4h0gu1cY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit for xenstored
Date: Tue,  8 Jun 2021 07:58:39 +0200
Message-Id: <20210608055839.10313-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210608055839.10313-1-jgross@suse.com>
References: <20210608055839.10313-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a configuration item for the maximum number of domains xenstored
should support and set the limit of open file descriptors accordingly.

For HVM domains there are up to 5 socket connections per domain (2 by
the xl daemon process, and 3 by qemu). So set the ulimit for xenstored
to 5 * XENSTORED_MAX_DOMAINS + 100 (the "+ 100" is for some headroom,
like logging, event channel device, etc.).
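The descriptor budget above amounts to the following arithmetic (the function name is illustrative; the patch itself computes this in shell):

```c
/* 5 socket connections per HVM domain (2 from the xl daemon process,
 * 3 from qemu), plus ~100 descriptors of headroom for logging, the
 * event channel device, etc. */
static long xenstored_nofile_limit(long max_domains)
{
    return max_domains * 5 + 100;
}
```

For the default of 32768 domains this yields a limit of 163940 open files.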

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- set ulimit from launch script (Julien Grall)
- split off from original patch (Julien Grall)
---
 tools/hotplug/Linux/init.d/sysconfig.xencommons.in | 7 +++++++
 tools/hotplug/Linux/launch-xenstore.in             | 3 +++
 2 files changed, 10 insertions(+)

diff --git a/tools/hotplug/Linux/init.d/sysconfig.xencommons.in b/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
index 00cf7f91d4..516cd97092 100644
--- a/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
+++ b/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
@@ -32,6 +32,13 @@
 # Changing this requires a reboot to take effect.
 #XENSTORED=@XENSTORED@
 
+## Type: integer
+## Default: 32768
+#
+# Select maximum number of domains supported by xenstored.
+# Only evaluated if XENSTORETYPE is "daemon".
+#XENSTORED_MAX_N_DOMAINS=32768
+
 ## Type: string
 ## Default: ""
 #
diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index 3ad71e3d08..89149f98ee 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -54,12 +54,14 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 
 [ "$XENSTORETYPE" = "daemon" ] && {
 	[ -z "$XENSTORED_TRACE" ] || XENSTORED_ARGS="$XENSTORED_ARGS -T @XEN_LOG_DIR@/xenstored-trace.log"
+	[ -z "$XENSTORED_MAX_N_DOMAINS" ] && XENSTORED_MAX_N_DOMAINS=32768
 	[ -z "$XENSTORED" ] && XENSTORED=@XENSTORED@
 	[ -x "$XENSTORED" ] || {
 		echo "No xenstored found"
 		exit 1
 	}
 	rm -f @XEN_RUN_DIR@/xenstored.pid
+	N_FILES=$(($XENSTORED_MAX_N_DOMAINS * 5 + 100))
 
 	echo -n Starting $XENSTORED...
 	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
@@ -67,6 +69,7 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
 	XS_PID=`cat @XEN_RUN_DIR@/xenstored.pid`
 	echo -500 >/proc/$XS_PID/oom_score_adj
+	prlimit --pid $XS_PID --nofile=$N_FILES
 
 	exit 0
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 05:58:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 05:58:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138271.256003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlG-0001Yy-OL; Tue, 08 Jun 2021 05:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138271.256003; Tue, 08 Jun 2021 05:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqUlG-0001Yr-L0; Tue, 08 Jun 2021 05:58:50 +0000
Received: by outflank-mailman (input) for mailman id 138271;
 Tue, 08 Jun 2021 05:58:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q7uu=LC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqUlF-0001Yf-4T
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 05:58:49 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35bf5c20-3fea-4264-85fe-236ddbfd58f0;
 Tue, 08 Jun 2021 05:58:47 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2A0BE1FD50;
 Tue,  8 Jun 2021 05:58:47 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id F27D6118DD;
 Tue,  8 Jun 2021 05:58:46 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id OBsOOhYHv2AQbAAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 08 Jun 2021 05:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35bf5c20-3fea-4264-85fe-236ddbfd58f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623131927; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=i2oAt0R0s4NoNFYvZ9Mbyh4IYQkZZ3RW7YMkBQwQXlQ=;
	b=UT64tIpTEgxopaXUTiBBPwXIcr65Zj2CYspcux64iBAXY0DjPoW02shBGW6+9SLPTz8KTn
	fwLooxuo3KfUqEhqwBY0DjZu6YRk7QGDLvX4DpI2PIjOec2A7qe08xqY3j9apXDc+hXSgi
	/5KB8sGFhaT2orkadCZF7j8zvKNvVj0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] tools/xenstore: set oom score for xenstore daemon on Linux
Date: Tue,  8 Jun 2021 07:58:38 +0200
Message-Id: <20210608055839.10313-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210608055839.10313-1-jgross@suse.com>
References: <20210608055839.10313-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xenstored is absolutely mandatory for a Xen host and it can't be
restarted, so being killed by the OOM killer in case of memory shortage
must be avoided.

Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
xenstored to use large amounts of memory without being killed.
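
The oom_score_adj mechanism can be tried out in isolation. Note that
lowering the score below 0 (as the patch does with -500) requires
CAP_SYS_RESOURCE, so this hedged sketch raises the score of the current
shell instead, which any process may do for itself:

```shell
# Raising one's own oom_score_adj is always permitted; lowering it
# (e.g. the patch's -500) needs privilege. $$ is the shell's own pid.
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
```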

Make sure the pid file isn't a left-over from a previous run by
deleting it before starting xenstored.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- set oom score from launch script (Julien Grall)
- split off open file descriptor limit setting (Julien Grall)
---
 tools/hotplug/Linux/launch-xenstore.in | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index 019f9d6f4d..3ad71e3d08 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -59,11 +59,14 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 		echo "No xenstored found"
 		exit 1
 	}
+	rm -f @XEN_RUN_DIR@/xenstored.pid
 
 	echo -n Starting $XENSTORED...
 	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
 
 	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
+	XS_PID=`cat @XEN_RUN_DIR@/xenstored.pid`
+	echo -500 >/proc/$XS_PID/oom_score_adj
 
 	exit 0
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 06:19:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 06:19:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138292.256039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqV4q-00059u-Pc; Tue, 08 Jun 2021 06:19:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138292.256039; Tue, 08 Jun 2021 06:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqV4q-00059n-M4; Tue, 08 Jun 2021 06:19:04 +0000
Received: by outflank-mailman (input) for mailman id 138292;
 Tue, 08 Jun 2021 06:19:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JFXD=LC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqV4p-00059h-C3
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 06:19:03 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9cf0fac-7f82-4b78-9645-000feeea19fe;
 Tue, 08 Jun 2021 06:19:02 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2056.outbound.protection.outlook.com [104.47.5.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-33-qFf9yPbCOmOK4iQyhid2xw-1; Tue, 08 Jun 2021 08:19:00 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3389.eurprd04.prod.outlook.com (2603:10a6:803:b::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.21; Tue, 8 Jun
 2021 06:18:58 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 06:18:58 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0081.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1e::7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.10 via Frontend Transport; Tue, 8 Jun 2021 06:18:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9cf0fac-7f82-4b78-9645-000feeea19fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623133141;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X4jiwkTF3ndFYTqlQkRQhHjPPmACvFRX+fMLcraQPy0=;
	b=Fo3hLPDHQIkSsvHMGRlqPIG9gSSChBhx5aLWqWcU/kzUpsuGi0m0DEXxqWs+ISIXZjN/u1
	rN31Xno63JFvY5JOIVyi7Hb7KvywO+1COpUijYYkSFqp2Idt7Ld9uDeCm7jbMEVGsk171r
	wR2/4PsV3p4QoVWJt/hvD+RaEIZFAMY=
X-MC-Unique: qFf9yPbCOmOK4iQyhid2xw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ut5FNj2o/n8yVjUl6S7k2XLELUV6nTGlhnialy9F6w3ntmCYQWUWd7536FQaoSzZo1RWoYNa4KcvSPWVw38W7dxotNnKKBB+N9R7sqzYJgrPpFawHmsrEusNafsyBO96NvHxZ96kIpgTnrINw01sxONMxIzIgrhBokLZYHPR1teLODPAyYlXNnXV9yCM1wVSPauPmP8peS5WmYvCd7yUcGZe326wwp7G/uObV3/FIv7GVQoXJ9aO4JFY58Vr68vbPbtSUDwqfdHRr9o5nif6A5jO5laLHqneniiCdhAW+FtF4QMifyr0PqoT5Ao0HAy6O8XXHSG6vwJRH9yEd3d2Fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TkjRvjkQYKteAAZxSNLFcpc0YVSD/cAeIqlCB9bbeDo=;
 b=BjS8pzKJK+/Y8wRqPEoPCoS4I3XOfGmIvYbaYoRoppVe5ej7sAt3eGq0rBG2uq25Yb49z/PLdNv7vBQSfK0jh19wBRf2mX0buPhkxtp/zVtDuKf/WEnl6EoPBvze+i/C0vwCqBzZRUGCB0XU9oHf8YeWuXMy87XZT79hSzHFGqG8RdeI6zaANCVBm7Pr5VSrYAJwZoEoikmaeAK7P09fpE8RPJcx2WglfZjLZdqFX+pXLyILOAoYwyccKQ1oqAiUrZ3OlIQEB+quMjGK28sVRnF+XVhs3tNjywRYHFdA3nPxTXXfwoDsl/H2rcV5kqMOFI+Ufdyvl+RAw3n1kjsIsQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/cpuid: Drop special_features[]
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210607124141.24767-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d09c3a27-4b89-be3f-6dea-37f3759df570@suse.com>
Date: Tue, 8 Jun 2021 08:18:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607124141.24767-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0081.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::7) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 098d131e-38ea-40c1-97e9-08d92a454ce2
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3389:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3389052B295F66790F09630DB3379@VI1PR0402MB3389.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YYQutg7oKUEm/G5x6jEdxR2nFM/wzAxS7wbrf581g/gkP7UQjvSzo+TGHyYKYbBOJm+1JcEb3tL9Q3J730cokX9b6kVP8OGTjcgCzNrOadaga1Nfi4PklhfPCXVBNSY51qcQQFvic9M4NePUveXvASLBYQoMbcSrmYtCU7O9i7STJgSbYVbsV4hdCEPPNkc9dZUz5gFBlJ1FL2fmSOzqrbXNY7vU5xZuDNEHc20pL+mO5ExpeZ4E7fmTjp517GD6+6lgCgjEphBSOIltxiNHXk7wSZNAh1GcpAFWC4cPhzt3R65cqoXs9eIVM40rJZx0DphwiOv+ZiYbtWmrQkEaShBODF3tZ9LulOhulg7DVCW2VW/hCGHHez1aJKfc6OvtqMxgwI7At1vvJQItqGPK1bNFTyu4xvI3w0u8H0NuYEubzo0+ZEm+xBFME6FWad8EnBRajAZ6C5G0XCc4nbtYu0dHgsUGJ5eFF7WsWJVj30zfda10tP1TmqfOeJa+IGdFerWYk7Tch+UdbRsKlSKt8vRDx2v1HJcwqshozVgqXjXXDJGY3MJ+vAQjogeBUed/68t3kwXO1yxL0dLJPRIAinMJr/8GQ42p3df5peNvKip7yUUmI9IGOD9Dhu9qXUPqq6YFUlIpzDdsqE/Ajd7xeE2NNTTe3PZF5VLGyIBuOtuEsM8/LLjM4ZptEOiJlaPL
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(366004)(39860400002)(376002)(396003)(83380400001)(4744005)(6916009)(2616005)(956004)(53546011)(8936002)(2906002)(26005)(66556008)(66476007)(54906003)(66946007)(316002)(16576012)(6486002)(4326008)(31686004)(186003)(478600001)(86362001)(36756003)(16526019)(31696002)(8676002)(5660300002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?ZrAI+ydcAN8gMx1onqIU7rp1W0y/aS14vwvpJu6i/t719SJFGZJZA6CA+Cl9?=
 =?us-ascii?Q?mJw/yZIdCjBrVHpDVFn0TVgOVAnCsIQsztIRdMm4wf8FgJ6qcINcHZMA7l0O?=
 =?us-ascii?Q?eDTrRhhmNQGxVXPNiuDtRnMZmKrkdClGkaWl9jrtj0jeH1SCPzrfFamW/ckY?=
 =?us-ascii?Q?C9UClBmf2RPlH6klOYK5VLsIUWG4NwDfbjJuwXP67/rZr1aGoL9EE1hYUGd5?=
 =?us-ascii?Q?Y5raW/yadaFZYJ9zARBX30r9kAVEiLSgZXr+huDSH6okQdSW2B/MF8CRXNqN?=
 =?us-ascii?Q?8X7rfB4vZrILKlA1MuI+m4utMn3epmzBxKiLNhrIbm6roe42FH4HVyIFyN/3?=
 =?us-ascii?Q?WzVFDETS7RJ6bwAKLClolUq8R93LiHZstgRVNE8qsGDMbH8YLakrpju0N6We?=
 =?us-ascii?Q?KQyQhmmi639WgYz/ji0A6IUE5F0aQ2Ewsa8rE8AO2Q+dhe483FLPxFvdleTN?=
 =?us-ascii?Q?f2XzaxfRxl/WlbXJX8Hu9ZnAvYxocKgy1rT86vetKAJCMPC50v0pLsgvTaiW?=
 =?us-ascii?Q?xmg5mKcpY/UYZoZTbS/iMRz2WrYTKZffa19deRwSMuFSoJDNlXv5bpv5FT8v?=
 =?us-ascii?Q?tMo/PjGHKS+UA9/wMfbgHrbBwMtf1yudnv7GK/cQDPEZwxf/GdPOyulEgd0q?=
 =?us-ascii?Q?OjyV4RD//h10FZJSm6qhXYNgQpf4Vi2amqidi2kN8l/93E2TYgMRWKMC6q0/?=
 =?us-ascii?Q?EpSd9cd6YE5MoM7BHQmB1lbHpK+1WsSt+O5sPMP3gWfW2jVuMRPRAFkblco3?=
 =?us-ascii?Q?bL91oK9t8ez4JegpqURKDdJAkbnuJfPA3LJ/KhadV5HjHCNU4IS4nvK7S+bb?=
 =?us-ascii?Q?Aed3qLiehxxO9gFllLQlxChTa4ZfvpmXRxYj/6M3hiOFxOXDxeRQZ0FJZfRb?=
 =?us-ascii?Q?PUBgt8FZLJrzMk1pVnLzu/Q+eU9wIWYyIXhXQJny8Y8rABFSZJxvLFfqh8Z7?=
 =?us-ascii?Q?WBKN1pzNyjUPDy+cJ5gRG7+8FzhJs3FoHoUPuo99eHrDmbXDc20J6AGmJ2PU?=
 =?us-ascii?Q?0M0fqxjwbC3C+Jd92716QA5g/CQ+ndXUig7gE7VBfG9/eV/oq8cDia7iwck5?=
 =?us-ascii?Q?A/Z/nLp0huPbH5WrwK5lhRWmmTeVNzhdY1axDVyNL800/6O4xB2gtUAtaKjI?=
 =?us-ascii?Q?r3wc6imHZhPeRzSrn1nSgKPNAwTGQX0lHFJSKsLlmcOX3NtURcs/tJRiKVx9?=
 =?us-ascii?Q?S04tBK6rr5fLpgj305zfLv2kn4TUIMHT7tSeHlKZCxzTGTkMvnoU3P3GtAYL?=
 =?us-ascii?Q?idxggsMhABZFPbmR+Q8BFpJI4AXoH3wbsivvRJbpKmEcF5wRd+7yiBSpZUlu?=
 =?us-ascii?Q?shPFk2DSsSo2c/2YPUWWJkDA?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 098d131e-38ea-40c1-97e9-08d92a454ce2
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 06:18:58.1873
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: x6rCDTO+ArOl7Js6B21is8wLm5k4C8kxD3K+Pxh7RrJimPltWxYuij7gc2jRxkNxPq389qvc/oM1VGjhrB+kZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3389

On 07.06.2021 14:41, Andrew Cooper wrote:
> While the ! annotation is useful to indicate that something special is
> happening, an array of bits is not.  Drop it, to prevent mistakes.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  xen/arch/x86/cpuid.c        | 2 --
>  xen/include/asm-x86/cpuid.h | 1 -
>  xen/tools/gen-cpuid.py      | 3 ---
>  3 files changed, 6 deletions(-)

As osstest points out, this didn't go quite far enough, or else went
too far: either XC_FEATUREMASK_SPECIAL also needs dropping (including
its uses in libxenguest and xen-cpuid), or, considering that exposing
this information via xen-cpuid isn't entirely without purpose, the
script part of the original change needs undoing or making conditional,
e.g. upon __XEN__.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 06:24:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 06:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138298.256051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqVA0-0006Yd-Er; Tue, 08 Jun 2021 06:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138298.256051; Tue, 08 Jun 2021 06:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqVA0-0006YW-Ag; Tue, 08 Jun 2021 06:24:24 +0000
Received: by outflank-mailman (input) for mailman id 138298;
 Tue, 08 Jun 2021 06:24:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JFXD=LC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqV9z-0006YO-8c
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 06:24:23 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e59ea71c-6777-45d5-a5d7-6ba1c271d525;
 Tue, 08 Jun 2021 06:24:22 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-m8QeLUt5MbKmeNXnwYqtyA-1; Tue, 08 Jun 2021 08:24:20 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3118.eurprd04.prod.outlook.com (2603:10a6:802:a::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.30; Tue, 8 Jun
 2021 06:24:18 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 06:24:18 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FRYP281CA0009.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.10 via Frontend Transport; Tue, 8 Jun 2021 06:24:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e59ea71c-6777-45d5-a5d7-6ba1c271d525
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623133461;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6d7B/Jaq2T7t7N2Xwb5Ch47g331S3widrRMqH6FSvhs=;
	b=MR5SekSFJR+8s5oSIc1P9GtUt4NmvjPQF8BuWsT4A09Fi9OOwvD3IBESj5ZmWTUjazoEL8
	ROK0KLFYnH/jg7ithQvni0tPDQzLe8LBoFSVWxiMCPZyezNL0pdLbOne9CROYN1YNeTXIc
	b26o/Ppuk9VxDkU55rva9dnhNCi+p4k=
X-MC-Unique: m8QeLUt5MbKmeNXnwYqtyA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e64V1QIu2mdDzS1PfmK9d6t15JfSlSruJGRZOHNE71jvkE2G73IXj9YuV++P6FSbfPeZ4kTg4MOzvyQxnjmrQNquxva1BF/dfRy4cZVNq35J1rfTMRajegGTGBja/H++uyBIZyc1Bxzhi1n92Kg+WaZYgrXuQOurw1BfC8V4QSAokxhhCxX6RgIaKvoFBIr8pxaZ51+N9NikAQfR0ulaemrZtVpPK4R7QZcqx9DS6qZpHpRjMp6ZWotcrD8jlhl08cFBSSiPIdacB8qQPRJFMoFrjHrJaslSooU5+gRdlo2iIkUKJNHuz0zIxSc5Mw4aulK2VTTXASO0P4wKD7AETQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6d7B/Jaq2T7t7N2Xwb5Ch47g331S3widrRMqH6FSvhs=;
 b=Q99iMyKHA6OSEDc1dEyl9aZSFo7R/1hn0rdpaZoPn7YpHAub7YGHh8dUaMj0uZQWSh1D3Z9W77FWx17RD2mKbdEbfAziE1Cm1ubaa/pWshQaG0MKKVnQwjmhKGTiLcY+H6okfigu17zt0nCxMcQiFkdG3fiSRxmODmUNzACa2QLbWKdAmGJ50dRSXne0jTmupCSb0edYzLnN9jYcpjRWK60YORgrYFX/EpALldvuUr6K/9GfJOjL/Amoy/Mjp4aL4uCKDDrWwelWQCwnTeBkQVk8t7LjTUHZZoPh71gTowJc1AiMDYUYV6abVyA1Mrx3I70VbIAut0Ar1tfqtJZO+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [BUG] Passed through PCI devices lost after Windows HVM DomU
 reboot
To: Paul Leiber <paul@onlineschubla.de>
References: <FRYP281MB05828EB0C49C963C7954578CB0389@FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5f532ea4-4ff4-e163-9492-096d16a316e7@suse.com>
Date: Tue, 8 Jun 2021 08:24:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <FRYP281MB05828EB0C49C963C7954578CB0389@FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FRYP281CA0009.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::19)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: be43591f-f716-4a44-57b5-08d92a460bda
X-MS-TrafficTypeDiagnostic: VI1PR04MB3118:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB31184977D603E471D83FB86BB3379@VI1PR04MB3118.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pEO2NzhQuTk6XpGMX2gAglUS60c4UoqAixh4BdFex8HiL1lf1VEW8Mpux8Pgf6dnd6Hs2bJwoSrmoS00hRaXR4KkVPNIZTbf3gXjTX0eIXa2JUeBdc4a96BHEYKUOrBZmZ9RL5qNEQYpUn7sHjuH9hKf1A3DorBGvXTtJ9ArGZ6mWG4v3BRvxXUBSh077tMMmk+MOQim7+o1MuVKmSnIKZua33ODD7hY6dBQBH/E1FyoYoZjtIUKg/gnFbYy/gni5/RCwbY7rQCK/1RuON4yO8k5yFyymxKVyG7fdSvUmUqknXV7G9z1krsNDWnLL3xZuYKl/oD/M9uHvpiuUw5m6nX8VVTpqAVlNE0zYyoT3tHsDWn4ikBWoQCzFtd2Fe3VrAdiT0+AWAmUEI3u3SrUf2W7YgHzE6tSr/4TNxU0cqZLh16JY7EOUrX0a5T+LORQC8PT5n4chz0b/blxoP3naqPNxkN4lyzQ/OD7QE8mO48y9pSeJXCG4gwLJATISO4WN9W7zF3sfiyG+aEF2zdtSn0UZfgU/+BqkRs0RtwjK4uhp02pfQ+YCdRYNF1lEBCvPaAMItp/iiRl405Xj+VENhjM8R4jKGPaMVTrU6UcMKUr1XB8pX2a5DvdKq6qpugxqTzIS6fp45TnKJnnKep4WJZnv/ROEOp3BLo0F/MC1YEjgli7QwFKq8BdgqxBsN+szfvNoPHtHm2b9mEV68snRg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(366004)(396003)(346002)(376002)(136003)(26005)(16526019)(38100700002)(4326008)(8936002)(186003)(66476007)(66556008)(8676002)(66946007)(83380400001)(6916009)(956004)(2616005)(53546011)(5660300002)(86362001)(31686004)(316002)(478600001)(2906002)(16576012)(6486002)(31696002)(36756003)(21314003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?ZEFKOHVya1hJL0Y3TzJWb0lMdFJ5TUNXaVNPYlBCemlMa25ndGFhOC9LUEwr?=
 =?utf-8?B?UXZSaVlzUmR3Smc0MTRnc1JoSktuOUgxK3kvUjVscTdKc3R1WndVT3cyR2Fk?=
 =?utf-8?B?d2xPK29haHZyVHlGTzliODBtVDdpRWh2TitqQ0lVdkRFZUszaW8zNzRkRVdX?=
 =?utf-8?B?UXhtRCt2aUhnVlkvUkJuWUpTQUI3bkg4Zm9VcUkzRDVvdDhPK3VhUTUyQUFY?=
 =?utf-8?B?T1RmNEhuWTg1VVREMjQvOXdlL3J6M3VIUTUyNjE0SnF1OGtTTGhrYjZZNUor?=
 =?utf-8?B?Z1VmQ2hIRzBKcnRTTUoybkN2WEdBczhZVDBrNVFrSFlyR2RtelA4L3lRNHFj?=
 =?utf-8?B?NUh6Nk1pVHcxZlRSWndjenp4aUxFY1MyTVhlSWE5N1JmUVhLK09ZSjBYZ3N6?=
 =?utf-8?B?VjdmcWZRR1Fjcmg2bmM0WXVaL1JIdjUvUTVnWGNpYVE3R2lqeDZ1aGhwenZp?=
 =?utf-8?B?MkgwbUxNbXViVVl5bnlQUksvcHVqQUEwbUp2VnVuNE1ES0JOSzJZUFcybGNv?=
 =?utf-8?B?WERJMW9tOWJVRktObXVCZXRvSWRwaVA0U2FudGk5czJnNEpUREZaVUJlVU0z?=
 =?utf-8?B?cG5HMkYzV2prMGI5WVZlUjlGVWUybEQwYnZYYzRnajFMQkplS1RXTDJaV1Bu?=
 =?utf-8?B?WVJURzJqRE8vR3ErUWdkNUFiU2QyTlJwZS9RNEVKMU1CblF2VkIyOHkrSnVo?=
 =?utf-8?B?ZFdUWWxlbDZGeFI0N2EzU2Y1UGVDbWhVLzRPMi92WHdhTlFETFhnVlU0aHVq?=
 =?utf-8?B?UG9UcWpVTUdPNnJra29IbW5iV2F0OG5tS1J3WUg5NHgzM2g5cFZraUNqQUdU?=
 =?utf-8?B?Z1IzWW9xZ1g5YVNZSmQwNFA5bVlVYnVWWGYzTW14eEdpU3AwYU12cXhsM0tZ?=
 =?utf-8?B?ZlBYb2o2NEdEa2dPRXZKaDkrRC9wd0xhU2g4QlBUajh0dDQ3ZjhQVUdCTk5D?=
 =?utf-8?B?aDk0Nzg0NnZGbXhkR2ZaZG1ZRWZldC9sZnlXTFAzbzkvcWxadTBTUG9oRVpX?=
 =?utf-8?B?bGdKM3ZCeXpYenkwK2RHcVJFaldqL3JkYm9uUFRzS2hlOWRZdW01NmdLOXRZ?=
 =?utf-8?B?NnNkZTBteS9zSHNlaHp4RHRnem1USGZETi9KT3kxM3J1aTFCUGFEb3ZXS2sw?=
 =?utf-8?B?NmNra0x1MVdmN3RER0lZU2Y3ZWFMOGw5Tko2UWo3NEhLcTRFYStaT0FTMlB3?=
 =?utf-8?B?RHlvdkRMV0NHRDYxaXpBT2UvYUpCVnNBRkh2U1VUa3VZVURENlQrUkgwK0gz?=
 =?utf-8?B?NThFNUR1cnN0THp6V0RBMmdGK1NmeFVscXFOc3pTL0xnYWErWHVSallxUDdM?=
 =?utf-8?B?WktzNDVZZzRidk1vTHFiV1FuSG0yUEFXOVNEL2oxM3g1Smd2YUdFcEFpZUQ0?=
 =?utf-8?B?NEp1aUl3cmVxc01QbS9SZldQUk5TZllpQ0ZrdENtbHplelFSYy9tSmt6TzZJ?=
 =?utf-8?B?bzc1L1F5a2VDT1FBSjQ3aktpb0ZBV0tQMWM1QjQzNFlDdU1FeTJwWmRISUdP?=
 =?utf-8?B?OGZUM3RhRXd5OEJOeFE2NHJjSnlFV2dBbWQ5bXovUmoyUWVpYzVHU05TSHB3?=
 =?utf-8?B?TXA3RXJYeGxpQ3BaWmJIN3dKbXpnWlpWVzlvR3cvUmZia21EZFg1RW5Yck9Y?=
 =?utf-8?B?ejc4SVovTlpHS0dwc25DSlNDbjYzUS95bVFHRE5DbjRINzNoQmhWSmdqcVBK?=
 =?utf-8?B?cHM5K3doenpsS1U5b2pWSElPTG55SjBXNmV6TVk1UW85UXdmRmRiRDduMlk0?=
 =?utf-8?Q?T3gDBem9k1KwOjeRbS7BcA+J5jMbStHqXCW9l3D?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be43591f-f716-4a44-57b5-08d92a460bda
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 06:24:18.6650
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /dd/QjAP3qwXqYIild3iPSysgjUqQ+3e8W603EghWhExSjXDLaASMtgkS+itwiY3yCwen4H7Fr1yX+SKZ1umzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3118

On 08.06.2021 01:44, Paul Leiber wrote:
> After more testing, I have come to the following conclusion: It seems that every time I do a _reboot_ from within a Windows DomU, the PCI device does not get attached to the DomU. After DomU reboot, it is immediately available for attachment in the Dom0 when I check for it with "xl pci-assignable-list", and I can reattach it to the DomU with "xl pci-attach" without any major problems besides some annoying side effects (e.g. needing to reapply settings).

A well-known problem on ...

> xl info:
> 
> host                   : xxx
> release                : 4.19.0-14-amd64
> version                : #1 SMP Debian 4.19.171-2 (2021-01-30)
> machine                : x86_64
> nr_cpus                : 4
> max_cpu_id             : 3
> nr_nodes               : 1
> cores_per_socket       : 4
> threads_per_core       : 1
> cpu_mhz                : 1992.100
> hw_caps                : bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
> virt_caps              : hvm hvm_directio
> total_memory           : 32542
> free_memory            : 20836
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 11
> xen_extra              : .4
> xen_version            : 4.11.4

... this old Xen version, I believe. I don't recall when exactly it was
fixed (and I don't know at all whether the fix was backported), but
trying a recent version of Xen should get you past this. If a fully
maintained version is still affected, a backport could be requested.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 07:54:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 07:54:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138308.256063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqWZI-0006KG-UK; Tue, 08 Jun 2021 07:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138308.256063; Tue, 08 Jun 2021 07:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqWZI-0006K8-Py; Tue, 08 Jun 2021 07:54:36 +0000
Received: by outflank-mailman (input) for mailman id 138308;
 Tue, 08 Jun 2021 07:54:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rsk=LC=epam.com=prvs=679307a155=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lqWZG-0006K2-Qp
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 07:54:35 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 361bb396-ac1b-4b57-9e14-0a842cbd736b;
 Tue, 08 Jun 2021 07:54:32 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 1587bOlh028129; Tue, 8 Jun 2021 07:54:31 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2052.outbound.protection.outlook.com [104.47.13.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 39239w09vj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 08 Jun 2021 07:54:30 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR0302MB3282.eurprd03.prod.outlook.com (2603:10a6:208:8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.25; Tue, 8 Jun
 2021 07:54:28 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 07:54:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 361bb396-ac1b-4b57-9e14-0a842cbd736b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QVABjeNs7Wl+EM7YmLOY0jnw7fPAyLXEgCp/v2YFAKN+5VwIo/eTCjRQrGzG2CUtrkWA1xMHVtRYbh8lKY/dBpTv7qEJPmMwveNaovHpqIWKjp4r2+7A0zOQtM73hXnRVdy5jK9pODSLs/UZ0gutW1or4KFzb+H7EXZOKfT8eCwE14Ml/bxYLnslktqkYR2kj/IeGL7HV9Eox1StJ25DS3AxAVspOulszoArWZvzTwG5nHLnvd26pIsqFP6QqcU+G2Vhib8lTzXqhMuqXd3RmA71c0bVeUC9EBYT+w5nXRR7ii8IFPQ1Nrspp7FnmUD4IJ8RnosS0zHSzoZ4nAWTsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kR/jysLeRkeymkeB4J0TLElkAod7ZqYUHiN8fdPr6lk=;
 b=lmfRuyefytzhPCkAXuJh0wIvxzTjBG2fZ2A2T0KU54iocDpNJ9tfDLVoqJelmt06a+iOKHKBRsJze6/+fS3XjWqZ0/rAEepoTkQJnArVikZlxb4zPe4N5mkgzkMjWqB1QYvy3DHed4NB7ag9B9bMBJLZ76CwizYr/DdklBF86pTcm0eLC+CHCZLOCHjqfxz3yd6ee/FtPyaZDh57L6a717JyBtK0estuTT3W8hTsnT3HbSvJBNEso1nkERHjixixNivRj357YsnWWAK24rJp7OZH5YHlTi8P9c/RrIGiUsqHNT3tfPvAwewXd5Bs5w/lz9Nqi+1OWWNJ0von9h78Qg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kR/jysLeRkeymkeB4J0TLElkAod7ZqYUHiN8fdPr6lk=;
 b=QjEE3oVTnrmwknFpkbGvYTtEqQ7OcJYLSzxdWcKkOYyXseaJzJR7cZPaZdVopWBwbpLICeIeOcwUvAP92ukdRjnWMqejHKbuAc2uPjsQtM3oC50VNrMXsWdZOrr12NNysF7IMq/+6YOUcHjTVStxHVZHOM72pQ6eLoNO6F3GzllmHi3CxYy96tju0GutOokHjuO54V0qqBVNzrXG6EmvO8u0fnlccEKZKx8UNP5PDzNDXLRN+b+1LL26YzeqTSYn6jXZqLZfRoWFqr2+eCzJ5KnKyb85q23IcpZXSCMTkY58Kh68kYdQdr/mzwGlccblWS536/WK5rFmMY8mTTIl7Q==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "wei.liu2@citrix.com" <wei.liu2@citrix.com>,
        "konrad.wilk@oracle.com"
	<konrad.wilk@oracle.com>,
        "jgross@suse.com" <jgross@suse.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Topic: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Index: AQHWLoWvWdjq7ozlqkeaZNP0F/2zoKl++MQAgAQkQQCBiPzxgA==
Date: Tue, 8 Jun 2021 07:54:28 +0000
Message-ID: <5611ca93-815e-00b6-f958-e1149b27e0b8@epam.com>
References: <20200520090425.28558-1-andr2000@gmail.com>
 <20200520090425.28558-3-andr2000@gmail.com>
 <24433.65344.748102.591216@mariner.uk.xensource.com>
 <9e64a880-02ce-e04b-8e36-eb63fbfbd975@epam.com>
In-Reply-To: <9e64a880-02ce-e04b-8e36-eb63fbfbd975@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 7135f859-4d9e-45dd-f977-08d92a52a492
x-ms-traffictypediagnostic: AM0PR0302MB3282:
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3282CFDA2BD473CA5CAE1A04E7379@AM0PR0302MB3282.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <D35331738DBC4849BD322A8557D1D81E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7135f859-4d9e-45dd-f977-08d92a52a492
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 Jun 2021 07:54:28.5759
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yWvrMZkkUKc8BMnX6IEws8aGlWykQwvWMm/QLczy7VOA5vumTTstCeB63Ru0UuWNh8leSueGF9o6xhmouA/DHoSVfEx4l+VXRn4fT8IH8uPNsFjrwwN79qQuo3d57vqS
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3282
X-Proofpoint-ORIG-GUID: kN0F02AScaHWR0mxry0l1Dt5IbvlpEHV
X-Proofpoint-GUID: kN0F02AScaHWR0mxry0l1Dt5IbvlpEHV
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 clxscore=1011
 spamscore=0 bulkscore=0 malwarescore=0 mlxscore=0 adultscore=0
 suspectscore=0 mlxlogscore=999 phishscore=0 lowpriorityscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106080051

Hello, all!

I would like to bring back this old thread, as it seems it got stuck a long
time ago without a clear Ack or NAck. I didn't rebase the changes because the
change itself requires answers on the way we should go here: a new ioctl
(seems to be better) or an extension of the existing one (not so great).

Thank you in advance,

Oleksandr

On 01.10.20 09:35, Oleksandr Andrushchenko wrote:
> Hi,
>
> On 9/28/20 6:20 PM, Ian Jackson wrote:
>> Oleksandr Andrushchenko writes ("[PATCH 2/2] libgnttab: Add support for Linux dma-buf offset"):
>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>
>>> Add version 2 of the dma-buf ioctls which adds the data_ofs parameter.
>>>
>>> dma-buf is backed by a scatter-gather table and has an offset parameter
>>> which tells where the actual data starts. Relevant ioctls are extended
>>> to support that offset:
>>>     - when dma-buf is created (exported) from grant references then
>>>       data_ofs is used to set the offset field in the scatter list
>>>       of the new dma-buf
>>>     - when dma-buf is imported and grant references are provided then
>>>       data_ofs is used to report that offset to user-space
>> Thanks.  I'm not a DMA expert, but I think this is probably going in
>> roughly the right direction.  I will probably want a review from a DMA
>> expert too, but let me get on with my questions:
>>
>> When you say "the protocol changes are already accepted" I think you
>> mean the Linux ioctl changes ?  If not, what *do* you mean ?
> I mean that the relevant protocol changes are already part of both the
> Xen [1] and Linux [2] trees. What is missing is the ioctl implementation
> in the kernel and its support in Xen's tools. This is why I have marked
> the patch as RFC, in order to get some view on the matter from the Xen
> community. Once we agree on the naming, structure etc. I'll send patches
> for both Xen and Linux.
>
>>> +/*
>>> + * Version 2 of the ioctls adds @data_ofs parameter.
>>> + *
>>> + * dma-buf is backed by a scatter-gather table and has offset
>>> + * parameter which tells where the actual data starts.
>>> + * Relevant ioctls are extended to support that offset:
>>> + *   - when dma-buf is created (exported) from grant references then
>>> + *     @data_ofs is used to set the offset field in the scatter list
>>> + *     of the new dma-buf
>>> + *   - when dma-buf is imported and grant references are provided then
>>> + *     @data_ofs is used to report that offset to user-space
>>> + */
>>> +#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2 \
>>> +    _IOC(_IOC_NONE, 'G', 13, \
>> I think this was copied from a Linux header file ?  If so please quote
>> the precise file and revision in the commit message.
> This is not upstream yet, please see the explanation above.
>>     And be sure to
>> copy the copyright information appropriately.
>>
>>> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>>> +                                         uint32_t flags, uint32_t count,
>>> +                                         const uint32_t *refs,
>>> +                                         uint32_t *dmabuf_fd, uint32_t data_ofs)
>>> +{
>>> +    abort();
>> I'm pretty sure this is wrong.
> First of all, Linux dma-bufs are only supported on Linux, so neither
> FreeBSD nor Mini-OS will have that. If you are referring to "abort()"
> here, I am just aligning with what was there previously, e.g. all
> non-relevant dma-buf OS specifics were implemented like that.
>
>> This leads me to ask about compatibility, both across versions of the
>> various components, and API compatibility across different platforms.
>>
>> libxengnttab is supposed to have a stable API and ABI.  This means
>> that old programs should work with the new library - which I think you
>> have achieved.
> Yes
>> But I think it also means that it should work with new programs, and
>> the new library, on old kernels.  What is your compatibility story
>> here ?  What is the intended mode of use by an application ?
> Well, this is a tough story. If we have new software and a new library,
> but an old kernel, it means that the offset we are trying to get with
> the new ioctl will be unavailable to that new software. In most cases we
> can use an offset of 0, but some platforms (iMX8) use an offset of 64.
> So, we can work around that for most(?) platforms by reporting offset 0,
> but some platforms will fail. I am not sure if it is good to state that
> this combination of software (as described above) "will mostly work", or
> to just let the system fail at run-time, by letting Linux return
> ENOTSUPP for the new ioctl.
>
> By fail I mean that the display backend may decide whether to use the
> previous version of the ioctl without the offset field.
>
>> And the same application code should be useable, so far as possible,
>> across different platforms that support Xen.
>>
>> What fallback would be possible for an application to do if the v2
>> function is not available ?  I think that fallback action needs to be
>> selectable at runtime, to support new userspace on old kernels.
> Well, as I said before, for the platforms with offset 0 we are "fine"
> ignoring the offset and using v1 of the ioctl without the offset field.
> For the platforms with a non-zero offset it results at least in slight
> screen distortion, and they do need v2 of the ioctl.
>
>> What architectures is the new Linux ioctl available on ?
> x86/ARM
>>> diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/libs/gnttab/include/xengnttab.h
>>> index 111fc88caeb3..0956bd91e0df 100644
>>> --- a/tools/libs/gnttab/include/xengnttab.h
>>> +++ b/tools/libs/gnttab/include/xengnttab.h
>>> @@ -322,12 +322,19 @@ int xengnttab_grant_copy(xengnttab_handle *xgt,
>>>     * Returns 0 if dma-buf was successfully created and the corresponding
>>>     * dma-buf's file descriptor is returned in @fd.
>>>     *
>>> +
>>> + * Version 2 also accepts @data_ofs, the offset of the data in the buffer.
>>> + *
>>>     * [1] https://elixir.bootlin.com/linux/latest/source/Documentation/driver-api/dma-buf.rst
>>>     */
>>>    int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
>>>                                       uint32_t flags, uint32_t count,
>>>                                       const uint32_t *refs, uint32_t *fd);
>>>
>>> +int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>>> +                                      uint32_t flags, uint32_t count,
>>> +                                      const uint32_t *refs, uint32_t *fd,
>>> +                                      uint32_t data_ofs);
>> I think the information about the meaning of @data_ofs must be in the
>> doc comment.  Indeed, that should be the primary location.
> Sure
>> Conversely there is no need to duplicate information between the patch
>> contents, and the commit message.
> It's just me always wanting the doc at a handy location so I don't need
> to dig for the commit messages? But at the same time the commit message
> should allow one to quickly understand what's in there. So, I would
> prefer to have more description in the patch then.
>
>> Is _v2 really the best name for this ?  Are we likely to want to
>> extend this again in future ?  Perhaps it should be called ..._offset
>> or something ?  Please think about this and tell me your opinion.
> I don't actually like v2. Neither can I produce anything more cute ;)
>
> On the other hand it is easier to understand that v2 actually
> extends/removes/changes something that was here before. Say, if you have
> 2 ioctls yyy and ddd you need to compare the two to understand which is
> more relevant at the moment. Having an explicit version in the name
> leaves no doubt about what is newer.
>
>>> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>>> +                                         uint32_t flags, uint32_t count,
>>> +                                         const uint32_t *refs,
>>> +                                         uint32_t *dmabuf_fd,
>>> +                                         uint32_t data_ofs)
>>> +{
>>> +    struct ioctl_gntdev_dmabuf_exp_from_refs_v2 *from_refs_v2 = NULL;
>>> +    int rc = -1;
>>> +
>>> +    if ( !count )
>>> +    {
>>> +        errno = EINVAL;
>>> +        goto out;
>>> +    }
>>> +
>>> +    from_refs_v2 = malloc(sizeof(*from_refs_v2) +
>>> +                          (count - 1) * sizeof(from_refs_v2->refs[0]));
>>> +    if ( !from_refs_v2 )
>>> +    {
>>> +        errno = ENOMEM;
>>> +        goto out;
>>> +    }
>>> +
>>> +    from_refs_v2->flags = flags;
>>> +    from_refs_v2->count = count;
>>> +    from_refs_v2->domid = domid;
>>> +    from_refs_v2->data_ofs = data_ofs;
>>> +
>>> +    memcpy(from_refs_v2->refs, refs, count * sizeof(from_refs_v2->refs[0]));
>>> +
>>> +    if ( (rc = ioctl(xgt->fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2,
>>> +                     from_refs_v2)) )
>>> +    {
>>> +        GTERROR(xgt->logger, "ioctl DMABUF_EXP_FROM_REFS_V2 failed");
>>> +        goto out;
>>> +    }
>> This seems just a fairly obvious wrapper for this ioctl.  I think it
>> would be best for me to review this in detail with reference to the
>> ioctl documentation (which you helpfully refer to - thank you!) after
>> I see the answers to my other questions.
> Well, I have little to add, as the only change and the reason for it is
> that the scatter-gather table's offset must be honored, which was not a
> problem until we faced the iMX8 platform, which has that offset
> non-zero. Frankly, lots of software assumes it is zero...
>
>>> +int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>>> +                                       uint32_t fd, uint32_t count,
>>> +                                       uint32_t *refs,
>>> +                                       uint32_t *data_ofs)
>>> +{
>> This function is very similar to the previous one.  I'm uncomfortable
>> with the duplication, but I see that
>>      osdep_gnttab_dmabuf_{imp_to,exp_from}_refs
>> are very duplicative already, so I am also somewhat uncomfortable with
>> asking you to clean this up with refactoring.  But perhaps if you felt
>> like thinking about combining some of this, that might be nice.
> I hate having code duplication as well: less code, less maintenance. But
> in this case the common code makes that function full of "if"s, so
> finally I gave up and made a copy-paste.
>
> No strong opinion here: if you think "if"s are still better I'll rework
> that.
>
>> What do my co-maintainers think ?
>>
>>
>> Regards,
>> Ian.
> Thank you for the review and your time,
>
> Oleksandr
>
> [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=c27a184225eab54d20435c8cab5ad0ef384dc2c0
>
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f92337b6bffb3d9e509024d6ef5c3f2b112757d


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 08:05:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 08:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138316.256075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqWjb-0008Jj-5W; Tue, 08 Jun 2021 08:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138316.256075; Tue, 08 Jun 2021 08:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqWjb-0008Jc-0Y; Tue, 08 Jun 2021 08:05:15 +0000
Received: by outflank-mailman (input) for mailman id 138316;
 Tue, 08 Jun 2021 08:05:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqWjZ-0008JS-4e; Tue, 08 Jun 2021 08:05:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqWjY-0002xM-S1; Tue, 08 Jun 2021 08:05:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqWjY-0000LR-IB; Tue, 08 Jun 2021 08:05:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqWjY-000358-He; Tue, 08 Jun 2021 08:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ufNHRmHDSPwdL0ohbB9BhYnpnpRnftUfErcYdtBxaXU=; b=gOrkyIyPajczHmFABoZ0ToPbYK
	/K7X/5oVtKybKwO04UGB4JzAcQXGZ9xXHXXJ0lmicTmRxGugGhDWTeTQ3CDCecuwC2LreT6/tbjZb
	h0xBEL55o3u8+7RRHnJBXJkOuZkghyTrS1yqv/Xil0MWihWdO+G6oW+p+AWTQwHcMyYM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162535-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162535: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=1832c0a02b3c9ceb518a4338cb3609fd7d1233a2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 08:05:12 +0000

flight 162535 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162535/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              1832c0a02b3c9ceb518a4338cb3609fd7d1233a2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  333 days
Failing since        151818  2020-07-11 04:18:52 Z  332 days  325 attempts
Testing same since   162535  2021-06-08 04:20:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60642 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 08:38:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 08:38:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138327.256089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXFb-00034d-Sb; Tue, 08 Jun 2021 08:38:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138327.256089; Tue, 08 Jun 2021 08:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXFb-00034W-PV; Tue, 08 Jun 2021 08:38:19 +0000
Received: by outflank-mailman (input) for mailman id 138327;
 Tue, 08 Jun 2021 08:38:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFa-00034M-K5; Tue, 08 Jun 2021 08:38:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFa-0003VU-Bt; Tue, 08 Jun 2021 08:38:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFa-0002tw-1K; Tue, 08 Jun 2021 08:38:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFa-0004Nv-0s; Tue, 08 Jun 2021 08:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=34MyD+3DaKxuTEYFNMyOaPw9beJTQbsifmz+Ai1Oq/0=; b=aZnhF71tFZNTtAUt/3+PCZSwLP
	0ongO5/Bs1YY9mPN+/WVquSmvI5TZKQ6aUhQEj3p/kERTm50ttnjtAX6H9KjIObrRX4h4uwbNyM/9
	k2nojcosRK81ZDgJzaE2i7OaSkXlxGpSerq3UwLZ0twVCxil03Hn2XMi4CUC5JsMUYa8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162532: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=614124bea77e452aa6df7a8714e8bc820b489922
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 08:38:18 +0000

flight 162532 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162532/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-thunderx 13 debian-fixup     fail in 162483 pass in 162532
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail in 162483 pass in 162532
 test-amd64-amd64-xl-rtds     18 guest-localmigrate         fail pass in 162483
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 162483

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 162483 like 152332
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 162483 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                614124bea77e452aa6df7a8714e8bc820b489922
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  311 days
Failing since        152366  2020-08-01 20:49:34 Z  310 days  531 attempts
Testing same since   162483  2021-06-07 04:42:27 Z    1 days    2 attempts

------------------------------------------------------------
6149 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1674012 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 08:38:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 08:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138331.256104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXG1-0003Xb-7C; Tue, 08 Jun 2021 08:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138331.256104; Tue, 08 Jun 2021 08:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXG1-0003XU-3v; Tue, 08 Jun 2021 08:38:45 +0000
Received: by outflank-mailman (input) for mailman id 138331;
 Tue, 08 Jun 2021 08:38:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFz-0003XE-Sk; Tue, 08 Jun 2021 08:38:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFz-0003W3-OS; Tue, 08 Jun 2021 08:38:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFz-0002vE-Df; Tue, 08 Jun 2021 08:38:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqXFz-0005C4-D7; Tue, 08 Jun 2021 08:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Maba7QG5fzHmzS1YQtrw2lPfamiY80RAQpj6/qy9yQ8=; b=TPZt7AcJWuypb0mligCWwXa9rA
	lsGR/MXUSWF6oeH9OZtYwGrx7nFuVSPTDNbM378FrmWfKHY8ffXZgV0Gjw2dP6i7zbB4l91Ly8ni5
	REQjya9zFWJyK7sEa5VMCag0bJqEJsKpIyqzhWrPuxHSg4IzalMSF+wpdMTLdlyKdg3A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162538-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162538: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 08:38:43 +0000

flight 162538 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162538/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   24 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    Since 32-bit PV guests have been security de-supported when not
    running under PV-shim, the hypervisor is no longer configured to
    support those domains by default when not being built as PV-shim.
    
    Unfortunately libxenguest will fail saving or restoring a PV domain
    due to this restriction, as it is trying to get the compat MFN list
    even for 64-bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
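The gating the commit describes can be sketched as follows. This is an illustrative fragment with made-up names, not the real libxenguest interface: the point is only that the compat MFN list lookup must be reached for 32-bit PV guests alone, so a 64-bit guest never hits the path the de-support removed.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical width enum; libxenguest tracks guest width differently. */
enum guest_width { GUEST_32BIT, GUEST_64BIT };

/* The compat M2P/MFN list is only meaningful for 32-bit PV guests, so
 * only they may trigger fetching it during save/restore. */
static bool need_compat_mfn_list(enum guest_width width)
{
    return width == GUEST_32BIT;
}
```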

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features, causing them
    to be always set in guest view, irrespective of the toolstack's choice on the
    matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
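The bug class the commit describes can be illustrated with a small sketch. The bit positions and names below are made up, not Xen's real feature layout: ORing a whole bitmap word into the guest view leaks every bit set in that word, while the fix masks out just the two intended features.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit assignments for illustration only. */
#define F_FDP_EXCP_ONLY (1u << 0)
#define F_NO_FPU_SEL    (1u << 1)
#define F_HLE           (1u << 2)
#define F_RTM           (1u << 3)

/* Buggy form: the whole word is applied, so HLE/RTM leak through. */
static uint32_t adjust_buggy(uint32_t guest, uint32_t special_word)
{
    return guest | special_word;
}

/* Fixed form: refer to the two relevant features explicitly. */
static uint32_t adjust_fixed(uint32_t guest, uint32_t host_word)
{
    return guest | (host_word & (F_FDP_EXCP_ONLY | F_NO_FPU_SEL));
}
```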

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases look to flip-flop between keeping or discarding the date
    and title of the referenced qemu-trad commit. I think with the hash
    replaced by a tag, the commit's date and title would better also be
    purged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue could, in theory, affect the Credit2 load balancing
    logic. In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, which also means there is no need to do any load-balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
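The bookkeeping rule the commit enforces can be sketched with a toy fixed-point average. This is illustrative only, not the real Credit2 code: the essential point is that the per-vCPU load must be updated on every scheduling decision, including the path where the same vCPU keeps running; skipping that path leaves the per-vCPU figure stale while the runqueue aggregate stays correct.

```c
#include <assert.h>

/* Toy decaying average: full scale is (1 << shift); the 1/16 decay
 * factor is arbitrary here and not Credit2's actual constant. */
static unsigned long update_load(unsigned long load, int running,
                                 unsigned int shift)
{
    unsigned long target = running ? (1ul << shift) : 0;

    /* Must be called on every decision, including "keep running". */
    return load - (load >> 4) + (target >> 4);
}
```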

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking
    the idle unit. In other words, this prevents us from finding and
    picking the actual unit that we're meant to start running (which
    might be further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may lead to such a
    unit being stuck in the runq for a long time, causing malfunction
    inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
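The scan pattern of the fix can be sketched in a few lines. This is an illustrative stand-in, not the actual credit2 runq code: a non-runnable entry is skipped rather than ending the scan, so a runnable unit further ahead in queue order is still found.

```c
#include <assert.h>

/* runnable_mask: bit i set means unit i in queue order is runnable.
 * Returns the first runnable unit, or -1 to fall back to idle. */
static int pick_runnable(unsigned int runnable_mask, int nr_units)
{
    for (int i = 0; i < nr_units; i++)
        if (runnable_mask & (1u << i))
            return i;   /* non-runnable entries were skipped, not fatal */
    return -1;          /* nothing runnable: pick the idle unit */
}
```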

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are now used only in libxenguest.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not usable,
    as since kernel 4.14 only the linear p2m table is kept if Xen indicates
    it supports that. Unfortunately xc_core_arch_map_p2m() still supports
    only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
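The refactoring pattern in the commit can be sketched as follows. The field names here are hypothetical, not the real struct domain_info_context layout: when the mapped p2m is no longer a fixed length, the related state is collected in a context struct and a single pointer is passed, instead of growing every function's parameter list.

```c
#include <assert.h>

/* Hypothetical context struct standing in for domain_info_context. */
struct p2m_map_ctx {
    unsigned long max_pfn;       /* highest pfn in the guest */
    unsigned long p2m_frames;    /* computed, no longer a constant */
};

/* pfns 0 .. max_pfn need (max_pfn + 1) entries, rounded up to frames. */
static unsigned long frames_needed(const struct p2m_map_ctx *ctx,
                                   unsigned long entries_per_frame)
{
    return (ctx->max_pfn + entries_per_frame) / entries_per_frame;
}
```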

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
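The off-by-one the commit fixes comes down to an inclusive range. A minimal sketch, with an illustrative function name: the value read from shared_info is already the highest used pfn, so subtracting 1 from it would silently drop the guest's last page.

```c
#include <assert.h>

/* pfns run 0 .. max_pfn inclusive, so the p2m needs max_pfn + 1
 * entries; max_pfn itself must be used unmodified. */
static unsigned long nr_p2m_entries(unsigned long max_pfn)
{
    return max_pfn + 1;
}
```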

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 08:46:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 08:46:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138346.256120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXNV-0005Cj-71; Tue, 08 Jun 2021 08:46:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138346.256120; Tue, 08 Jun 2021 08:46:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXNV-0005Cc-1R; Tue, 08 Jun 2021 08:46:29 +0000
Received: by outflank-mailman (input) for mailman id 138346;
 Tue, 08 Jun 2021 08:46:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AOFJ=LC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqXNT-0005CW-4z
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 08:46:27 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a46dd14-7b19-4c99-8ab4-f87176db0d5a;
 Tue, 08 Jun 2021 08:46:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a46dd14-7b19-4c99-8ab4-f87176db0d5a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623141984;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=q4CnpP/8ScgdUckSIVKWIJFlvqHpnSa4umpufSvWRPA=;
  b=V4i7RXNzjbk8ekUNuyKylW7l9Snfs+w1rWF1gvNCfR6BhcKyE5ki/YOT
   fjT8GvrFOEzYmysiP0/Hfhn7ShWMEKan08f2YV0bxYchBau/EiZZqdpQP
   t0VoOMBlHk446ywNS7snWtWyIJuELbNjeHy0I54VBwJE0SS9kEeOvqA54
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45606218
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,257,1616472000"; 
   d="scan'208";a="45606218"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xoACm74cXK7DNwzQN0F3UHM9GF591XLrksoHjFW1OZs=;
 b=DxBcDq1KdASb/8UmfpdaV6QNjw4mFhc3gpt9Imvq8VozlBaA4ssHqu0RzYKtvof6xWsydYvkpvqqBy6RkOtKFY5BIGSv/GcD5s6UfSeNBPGZ7GEIDVhQC1pnn1gI8ckL/YkI1/Mpc15jtEQoSEBuzuR1N8kbQA7JFOTAAlAiBXY=
Subject: Re: [PATCH] x86/cpuid: Drop special_features[]
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210607124141.24767-1-andrew.cooper3@citrix.com>
 <d09c3a27-4b89-be3f-6dea-37f3759df570@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <22ffd522-89da-d9fd-3918-bf07eae1be1a@citrix.com>
Date: Tue, 8 Jun 2021 09:46:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <d09c3a27-4b89-be3f-6dea-37f3759df570@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0411.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 81563e72-d05b-416a-328b-08d92a59e3be
X-MS-TrafficTypeDiagnostic: BYAPR03MB3800:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3800D1E6412003400ABBF126BA379@BYAPR03MB3800.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 81563e72-d05b-416a-328b-08d92a59e3be
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 08:46:21.3113
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Kq1oaMke+vwo9lLGCTYWMij5hU+jI8s8N+9xIOeN7+J6C2De+a9XueR84kEhLSR/oBnDI5iZUULNjGFSKGwX708+RT20uiwCY3bof2sNH68=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3800
X-OriginatorOrg: citrix.com

On 08/06/2021 07:18, Jan Beulich wrote:
> On 07.06.2021 14:41, Andrew Cooper wrote:
>> While the ! annotation is useful to indicate that something special is
>> happening, an array of bits is not.  Drop it, to prevent mistakes.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> ---
>>  xen/arch/x86/cpuid.c        | 2 --
>>  xen/include/asm-x86/cpuid.h | 1 -
>>  xen/tools/gen-cpuid.py      | 3 ---
>>  3 files changed, 6 deletions(-)
> As osstest points out this didn't go quite far enough, or too far:
> Either XC_FEATUREMASK_SPECIAL also needs dropping (including its uses
> in libxenguest and xen-cpuid) or, considering exposing this
> information via xen-cpuid isn't entirely without purpose, the script
> part of the original change needs undoing or making conditional e.g.
> upon __XEN__.

Yes - Gitlab CI didn't spot it, because there is a different breakage
from PV32 blocking things.

I think I'll reinstate the gen-cpuid.py hunks because having xen-cpuid
print out the bits when asked is helpful.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 08:59:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 08:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138353.256135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXZQ-0006mY-Aj; Tue, 08 Jun 2021 08:58:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138353.256135; Tue, 08 Jun 2021 08:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXZQ-0006mR-7L; Tue, 08 Jun 2021 08:58:48 +0000
Received: by outflank-mailman (input) for mailman id 138353;
 Tue, 08 Jun 2021 08:58:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8i+T=LC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lqXZO-0006mL-VX
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 08:58:47 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b266c42-cc0c-4775-bb47-177f65848a84;
 Tue, 08 Jun 2021 08:58:45 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx588wcO3l
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 8 Jun 2021 10:58:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b266c42-cc0c-4775-bb47-177f65848a84
ARC-Seal: i=1; a=rsa-sha256; t=1623142718; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=tHrncYRAKScC/rS6AvJ8pgIkr7Dp5hrklbWx4Yk8N42dbND87/ppatwJn02aZgghKz
    RVoV/De+nL43uFjxpuSEhy2Szdgex3LCv91C+jymcjjPkFhxaNEtqDNQePvmX/8Pf9Ns
    4DsgZ7b36QZALHDnUuJLXf39Es56T1tcPrl4N+8ajdKkVEiVEkIu3Pdf+tw5coJR3j+e
    MQN+3Zj0Yw40NEyBmfaj6q3nES2DwA/NU7w5gkTdA0LjXJABzZm2SiWu/AcgR7QV6QXl
    CZG5XebefBtp5WaJJkZur/0NQ4DF2PvTWE90ZDAGljbIa+Xz9G5rYD3B6f8VU+Qordx5
    G7Ng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623142718;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=n2fK7SY8CYLcw3DiaNz+QWL9iXCZ0ET33d9Wu+xMRJI=;
    b=m0epb6C5p6JSTsecJxpHxc9zx5fVdt6aYAB9JNXM1EBO9htYx+kmHDuUem5vnDNK6V
    LDeG2EJlxTSEGPR1hnsNaT4LVzn1yGJlj98zLYMrbzLh0kVc4dJ0MnC0jazM9mvthfMN
    XmVRAFLxteoYhSXQYtJUbgtHMwAwBN1KAZcPCT9pWbUmfdJRsmtncnxQWlvDf8zedgkH
    mbA50cPPbUw0AjU9E6fY464NUvhWhZscSp5tuzzJUmZyt9SzkogxQL0GLrSND+s/Xp1r
    I103bjNCLHcJLo6b2qSRDtYd3fS4GdtJYfzB59UXQhBHKTqKL5sLSStKhMR6wUnt5ogT
    s5NQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623142718;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=n2fK7SY8CYLcw3DiaNz+QWL9iXCZ0ET33d9Wu+xMRJI=;
    b=sB3ZkFwvNpuEMUWETSLduRSF2mtbivBAWl/umR9f+0yF5O4Ox9WVDq+MiyDPWgk1v7
    /RKPjjE/K2ku8xBnaSMFa8WPHR3jEzD99TgI91Vo5dw3nqotAu+9khzhFPEFNfrV80lu
    jihZSCxGxUMFzUAM1Vgk2Afjqr2e5uVZWUKezp8i2tf/Mg+QMICF6GAoMNWpnxu8gYYR
    H8spXlKeWvoFV5SKKwh+EgynGcZADSIBQYCwCQE4wzlaI6teQHWSSVHSRUxzayK2rtHe
    3j/YAI54tu8dw2rAnp6yi3nq31oFJZw8usM2WkNK1tldFJphw4qkUR83wC4R4pxl8lrj
    5oSw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Tue, 8 Jun 2021 10:58:24 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 08/38] tools: show migration transfer rate in
 send_dirty_pages
Message-ID: <20210608105824.0b0071dd.olaf@aepfle.de>
In-Reply-To: <42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-9-olaf@aepfle.de>
	<42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/ZWolvf10tIqcRa+Btou92v2";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/ZWolvf10tIqcRa+Btou92v2
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 09:10:44 +0200, Juergen Gross <jgross@suse.com> wrote:

> MiB_sec = ((ctx->save.pages_sent * PAGE_SIZE * 1000) / ms) /
>            (1024U * 1024U);

I'm not sure: how does this improve the patch?


Olaf

--Sig_/ZWolvf10tIqcRa+Btou92v2
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC/MTAACgkQ86SN7mm1
DoCoiA/+OtYu0+esbUO31sQbhRIgpzkegtKjIHIygcfyt8dKu+DhLegfm//y/bvS
AJKK7x4HMfiYnOz7J+aE7+AF+wZOOLZG42jnIKOt8G2Y0UdcZVO6IYxpNk1TVOaY
oiVs+gOV8nWQasrLz1alnYiErKHmJZvaN5vmH5OxcntGOIrH35O1WnPkAEZXlJAU
xT6ct7l1N5rGfO8lrRI/OI8eTEka1wkWB0UH3o6jBES9Cl1H6rEFgGXGtwIUY84Q
I4uJSiscJwx9DatZUo13VkZf9RjCYWsUUKBdnqt8+SrANp9iM+YzdFV0XtP3PT93
1uArJ99731ZsfKUcMTi4oZAVqdgCOc5/mwNUHUb2oepb5AI8WAPqDczCVDAgdvOq
3jgIzsu9XAqc6rnZbMRgfbzCMq5jzbwnaZPdJ+f/Y0EYFN3CcQzlSTiVX8gSD/C/
WDY0R7oc+iTRRBoyTLh59OGevMGP5R+dy7hLdG+F1pDB/ooAasuoezUGk19toV1d
zfDbmhnBr/M/FceCDUV8I/mCOwKYFWxt8UJHvfFp+XtcCOBuj1F0in82/7vBQ+9P
dDYqoX8e8U0pFCTCOanPxMXKWouRMteuTlAqDjzxuUqgO2uSG420NWtrzA96LtxZ
iVlzUgr0SPCO7Vtp+AK9YJ/xoX+R7GPvWfgYpC9kXRPPORFsOw0=
=MKay
-----END PGP SIGNATURE-----

--Sig_/ZWolvf10tIqcRa+Btou92v2--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 09:01:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 09:01:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138358.256146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXbk-00088w-O3; Tue, 08 Jun 2021 09:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138358.256146; Tue, 08 Jun 2021 09:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXbk-00088p-L1; Tue, 08 Jun 2021 09:01:12 +0000
Received: by outflank-mailman (input) for mailman id 138358;
 Tue, 08 Jun 2021 09:01:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqXbj-00088j-Pp
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 09:01:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqXbi-0003wo-N8; Tue, 08 Jun 2021 09:01:10 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqXbi-0006AQ-Gt; Tue, 08 Jun 2021 09:01:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sYNKKRsF4QwM92yymEjl+Sx1Oz7F5yt3QzPcopckvPM=; b=lG6XAHhXaJFZ5hqppLRdVQsT4Y
	nYqoW7ikTcLvwt07Aw2uCmcO/kEaj2tXgkQS0Lxh4/GogH4/mmS/yuaJHQ+LTZK8l/qTJ3m0W0DFT
	8ZxcBY4LrJAnBUutcxB4I3OAgu/vp3h+3AInsPQBWs3llp1QLYFzpg0In9+GYv9zD9v0=;
Subject: Re: [PATCH] xen/grant-table: Simplify the update to the per-vCPU
 maptrack freelist
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210526152152.26251-1-julien@xen.org>
 <6748164b-ad38-d7d0-6abe-b5e393f7b9f3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e1acf0a1-81d1-5ed3-edb4-cf920cfbbc77@xen.org>
Date: Tue, 8 Jun 2021 10:01:08 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <6748164b-ad38-d7d0-6abe-b5e393f7b9f3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 28/05/2021 14:29, Jan Beulich wrote:
> On 26.05.2021 17:21, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Since XSA-288 (commit 02cbeeb62075 "gnttab: split maptrack lock to make
> 
> XSA-228 I suppose?

Yes, I will update in the next version.

>> it fulfill its purpose again"), v->maptrack_head and v->maptrack_tail
>> are with the lock v->maptrack_freelist_lock held.
> 
> Nit: missing "accessed" or alike?
I have added "accessed".

> 
>> Therefore it is not necessary to update the fields using cmpxchg()
>> and also read them atomically.
> 
> Ah yes, very good observation. Should have noticed this back at the
> time, for an immediate follow-up change.
> 
>> Note that there are two cases where v->maptrack_tail is accessed without
>> the lock. They both happen _get_maptrack_handle() when the current vCPU
>> list is empty. Therefore there is no possible race.
> 
> I think you mean the other function here, without a leading underscore
> in its name. 

Hmmm... Yes. I will update it.

> And if you want to explain the absence of a race, wouldn't
> you then better also mention that the list can get initially filled
> only on the local vCPU?

Sure. I will reword it.

> 
>> I am not sure whether we should try to protect the remaining unprotected
>> access with the lock or maybe add a comment?
> 
> As per above I don't view adding locking as sensible. If you feel like
> adding a helpful comment, perhaps. I will admit that it took me more
> than just a moment to recall that "local vCPU only" argument.

I will try to come up with a helpful comment.

>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -543,34 +543,26 @@ double_gt_unlock(struct grant_table *lgt, struct grant_table *rgt)
>>   static inline grant_handle_t
>>   _get_maptrack_handle(struct grant_table *t, struct vcpu *v)
>>   {
>> -    unsigned int head, next, prev_head;
>> +    unsigned int head, next;
>>   
>>       spin_lock(&v->maptrack_freelist_lock);
>>   
>> -    do {
>> -        /* No maptrack pages allocated for this VCPU yet? */
>> -        head = read_atomic(&v->maptrack_head);
>> -        if ( unlikely(head == MAPTRACK_TAIL) )
>> -        {
>> -            spin_unlock(&v->maptrack_freelist_lock);
>> -            return INVALID_MAPTRACK_HANDLE;
> 
> Where did this and ...
> 
>> -        }
>> -
>> -        /*
>> -         * Always keep one entry in the free list to make it easier to
>> -         * add free entries to the tail.
>> -         */
>> -        next = read_atomic(&maptrack_entry(t, head).ref);
>> -        if ( unlikely(next == MAPTRACK_TAIL) )
>> -        {
>> -            spin_unlock(&v->maptrack_freelist_lock);
>> -            return INVALID_MAPTRACK_HANDLE;
> 
> ... this use of INVALID_MAPTRACK_HANDLE go? It is at present merely
> coincidence that INVALID_MAPTRACK_HANDLE == MAPTRACK_TAIL. If you
> want to fold them, you will need to do so properly (by eliminating
> one of the two constants). But I think they're separate on purpose.

Hmmm... Somehow I thought one was an alias to the other. But they are 
clearly not. I will update it in the next version.


> 
>> -        }
>> +    /* No maptrack pages allocated for this VCPU yet? */
>> +    head = v->maptrack_head;
>> +    if ( unlikely(head == MAPTRACK_TAIL) )
>> +        goto out;
>>   
>> -        prev_head = head;
>> -        head = cmpxchg(&v->maptrack_head, prev_head, next);
>> -    } while ( head != prev_head );
>> +    /*
>> +     * Always keep one entry in the free list to make it easier to
>> +     * add free entries to the tail.
>> +     */
>> +    next = read_atomic(&maptrack_entry(t, head).ref);
> 
> Since the lock protects the entire free list, why do you need to
> keep read_atomic() here?

Because I wasn't sure whether dropping {write, read}_atomic() when 
accessing the freelist would be fine.

Anyway, I can drop it in the next version.

> 
>> +    if ( unlikely(next == MAPTRACK_TAIL) )
>> +        head = MAPTRACK_TAIL;
>> +    else
>> +        v->maptrack_head = next;
>>   
>> +out:
> 
> Please indent labels by at least one blank, to avoid issues with
> diff's -p option. In fact if you didn't introduce a goto here in
> the first place, there'd be less code churn overall, as you'd
> need to alter the indentation of fewer lines.

I will have a look.

> 
>> @@ -623,7 +615,7 @@ put_maptrack_handle(
>>   {
>>       struct domain *currd = current->domain;
>>       struct vcpu *v;
>> -    unsigned int prev_tail, cur_tail;
>> +    unsigned int prev_tail;
>>   
>>       /* 1. Set entry to be a tail. */
>>       maptrack_entry(t, handle).ref = MAPTRACK_TAIL;
>> @@ -633,11 +625,8 @@ put_maptrack_handle(
>>   
>>       spin_lock(&v->maptrack_freelist_lock);
>>   
>> -    cur_tail = read_atomic(&v->maptrack_tail);
>> -    do {
>> -        prev_tail = cur_tail;
>> -        cur_tail = cmpxchg(&v->maptrack_tail, prev_tail, handle);
>> -    } while ( cur_tail != prev_tail );
>> +    prev_tail = v->maptrack_tail;
>> +    v->maptrack_tail = handle;
>>   
>>       /* 3. Update the old tail entry to point to the new entry. */
>>       write_atomic(&maptrack_entry(t, prev_tail).ref, handle);
> 
> Since the write_atomic() here can then also be converted, may I
> ask that you then rename the local variable to just "tail" as
> well?

Sure.

> 
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -255,7 +255,10 @@ struct vcpu
>>       /* VCPU paused by system controller. */
>>       int              controller_pause_count;
>>   
>> -    /* Grant table map tracking. */
>> +    /*
>> +     * Grant table map tracking. The lock maptrack_freelist_lock protect
> 
> Nit: protects

I will fix it.

> 
>> +     * the access to maptrack_head and maptrack_tail.
>> +     */
> 
> I'm inclined to suggest this doesn't need spelling out, considering ...
> 
>>       spinlock_t       maptrack_freelist_lock;
>>       unsigned int     maptrack_head;
>>       unsigned int     maptrack_tail;
> 
> ... both the name of the lock and its placement next to the two
> fields it protects. Also as per the docs change of the XSA-228 change,
> the lock protects more than just these two fields, so the comment may
> be misleading the way you have it now.

So I think it would be good to document above the lock what it actually 
protects. I agree it is fairly clear that it protects maptrack_{head, 
tail}, but it wasn't very clear to me that it would also protect the 
content of the freelist (so read_atomic()/write_atomic() could be dropped).

I will try to come up with a better comment.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 09:02:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 09:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138363.256158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXcf-0000I6-1h; Tue, 08 Jun 2021 09:02:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138363.256158; Tue, 08 Jun 2021 09:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqXce-0000Hx-V1; Tue, 08 Jun 2021 09:02:08 +0000
Received: by outflank-mailman (input) for mailman id 138363;
 Tue, 08 Jun 2021 09:02:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JFXD=LC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqXcd-0000HI-JI
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 09:02:07 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 630a6917-be65-4ef8-978e-1ee70eca846d;
 Tue, 08 Jun 2021 09:02:02 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2111.outbound.protection.outlook.com [104.47.17.111])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-28-Hm2sbt-uOb6xTJX8nSrH-g-1; Tue, 08 Jun 2021 11:02:00 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4942.eurprd04.prod.outlook.com (2603:10a6:803:59::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Tue, 8 Jun
 2021 09:01:58 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 09:01:58 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0031.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1c::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.10 via Frontend Transport; Tue, 8 Jun 2021 09:01:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 630a6917-be65-4ef8-978e-1ee70eca846d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623142921;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cXUVAskYuBPTTkqKnLrSCrT+NGkTOFXscl97reCM+DA=;
	b=Kwb+7CO3Qn8Rd69sBkF4G9ycyGnBoVskuHdyThC5iHf2UFTkXXm7py1adrxhWtINwifk21
	2dqTWz730OGFBeZYksPczGVqdNJ72JhA20G96VuRDL/YnPkzxXXY3szHkLGFZ8MTHeQLn1
	Kg0LuKFR8x23MBZrc/fW/QKsr/fCr4E=
X-MC-Unique: Hm2sbt-uOb6xTJX8nSrH-g-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=It0uAqTEnmPZxtuyzpE96XJnQMi7eh+N3xa8w1DQ/7nMDEzkl39spWmB4Z5FZOI3eThPmUB/bfpzN+BGVSZCIXRRTAcaDN0fN5nF7GGUVXgExla2Er0MSVqzTTQIETqrjch0kHSTFicO8GFfeKW/6c7LuNegwV65Ho3pHIlUcK5i4A8cqKM6bvTovOKcqofK/b2OCLrYboklvphfljBFSYhEpxhN/HTCG8Cpo4CMM7M8t2ZVPA6Xadz/3PTlbBDb7LbNvuDy7zXzQDCjJtxhqKLRrAXxydjm7gaURVce2ndlhLIEliIrgnKjScl/xHxsi+l+xqupBPr8UjiI80xRwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZEy4ES23OSIRiVz8iq3E31vclCeAlR3mb/3fj1yt3Tk=;
 b=HLp/ybkDehpHW2/tsSR5V/wHoAMDytBilOd8OSJ089RS8ePtup8FTaVugpinj3TuPEvrMw+emX7zhqDhPOpzz4gXIlAroYcaZSkYuQ/2qoierMAr4oZYh+EnOkvBEb1gMfHLaZi8fJ3bp6vo+kAed/Wa6TgoJHPlQpUz6jO+Y7+4DWiwFb/M2774h76BbTQcoTx7XNKX5IOKCwR4CyG7LlRjECwa3xQZ8JXK8MAeWDy7IMMy8GXArN5waQYqZXf3V2VdTppFtdX5LpUY50jWYGK+cLHQI4dhYXKM1yo3njX5O7k/ch4uElAaHTbELr8apuCqxWeEnrqzPpysfvfHgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/cpuid: Drop special_features[]
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210607124141.24767-1-andrew.cooper3@citrix.com>
 <d09c3a27-4b89-be3f-6dea-37f3759df570@suse.com>
 <22ffd522-89da-d9fd-3918-bf07eae1be1a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <84ab1f8d-7ce5-56cd-231e-bed2da6ddd26@suse.com>
Date: Tue, 8 Jun 2021 11:01:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <22ffd522-89da-d9fd-3918-bf07eae1be1a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0031.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b3968b99-2829-4cb8-4a04-08d92a5c127a
X-MS-TrafficTypeDiagnostic: VI1PR04MB4942:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b3968b99-2829-4cb8-4a04-08d92a5c127a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 09:01:58.6277
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZNHhnW92ioDSxGHlw0prPEKk7x7zEhjwJQLl8yTwwdXBj+uaGs+w7p4EaHgilXDVhUKixBT+U4mvDXlO2Js23w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4942

On 08.06.2021 10:46, Andrew Cooper wrote:
> On 08/06/2021 07:18, Jan Beulich wrote:
>> On 07.06.2021 14:41, Andrew Cooper wrote:
>>> While the ! annotation is useful to indicate that something special is
>>> happening, an array of bits is not.  Drop it, to prevent mistakes.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>> ---
>>>  xen/arch/x86/cpuid.c        | 2 --
>>>  xen/include/asm-x86/cpuid.h | 1 -
>>>  xen/tools/gen-cpuid.py      | 3 ---
>>>  3 files changed, 6 deletions(-)
>> As osstest points out this didn't go quite far enough, or too far:
>> Either XC_FEATUREMASK_SPECIAL also needs dropping (including its uses
>> in libxenguest and xen-cpuid) or, considering exposing this
>> information via xen-cpuid isn't entirely without purpose, the script
>> part of the original change needs undoing or making conditional e.g.
>> upon __XEN__.
> 
> Yes - Gitlab CI didn't spot it, because there is a different breakage from
> PV32 blocking things.

Oh, what further problem do we have? What I can see there (picking a
random build log) is the failure from the change here ...

> I think I'll reinstate the gen-cpuid.py hunks because having xen-cpuid
> print out the bits when asked is helpful.

And maybe put the #define inside "#ifndef __XEN__" then?

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 10:08:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 10:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138382.256173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYeP-0006JC-2N; Tue, 08 Jun 2021 10:08:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138382.256173; Tue, 08 Jun 2021 10:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYeO-0006J5-Vh; Tue, 08 Jun 2021 10:08:00 +0000
Received: by outflank-mailman (input) for mailman id 138382;
 Tue, 08 Jun 2021 10:07:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q7uu=LC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqYeM-0006Ij-Up
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:07:58 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c717adf9-b2f8-497f-b085-84c805a41c70;
 Tue, 08 Jun 2021 10:07:57 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B972E219C1;
 Tue,  8 Jun 2021 10:07:56 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8EC2A118DD;
 Tue,  8 Jun 2021 10:07:56 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 72e7IXxBv2DocwAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 08 Jun 2021 10:07:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c717adf9-b2f8-497f-b085-84c805a41c70
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623146876; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=utlI0tXZZAIbFnmopjh1ne0sCSKVNwClx3jm+8aKLSs=;
	b=tXTsMLr5NyECZfAHS3mVqNmFeFw9Uc/uP8rnztE+C+TZu+XgHvf/ZcYY1T+7BSZ1yRM5Vs
	13Ryb9Lciao1y/FILAqacvOdiKuzq0LxwswCX/MvtM6iUMvE61bFQBYxb52eVTIHu9k0vP
	1jSnXj1tv4v5QgSoucZJ1KyxuvA1UT8=
Subject: Re: [PATCH v20210601 08/38] tools: show migration transfer rate in
 send_dirty_pages
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-9-olaf@aepfle.de>
 <42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
 <20210608105824.0b0071dd.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <1ada4c3b-90a6-3585-b4ab-6ff4b197cddf@suse.com>
Date: Tue, 8 Jun 2021 12:07:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210608105824.0b0071dd.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KkE2VY15i2WtqE8xxsLC0C8EdJyQwKRs9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KkE2VY15i2WtqE8xxsLC0C8EdJyQwKRs9
Content-Type: multipart/mixed; boundary="yzJjcQyaJibJkK5iP4hOFbK7PIyq1rJkS";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <1ada4c3b-90a6-3585-b4ab-6ff4b197cddf@suse.com>
Subject: Re: [PATCH v20210601 08/38] tools: show migration transfer rate in
 send_dirty_pages
References: <20210601161118.18986-1-olaf@aepfle.de>
 <20210601161118.18986-9-olaf@aepfle.de>
 <42844bc5-da7e-5f6d-1ce0-1ef9e0f9dea6@suse.com>
 <20210608105824.0b0071dd.olaf@aepfle.de>
In-Reply-To: <20210608105824.0b0071dd.olaf@aepfle.de>

--yzJjcQyaJibJkK5iP4hOFbK7PIyq1rJkS
Content-Type: multipart/mixed;
 boundary="------------57CE684910C9F378B3CD8C4D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------57CE684910C9F378B3CD8C4D
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.06.21 10:58, Olaf Hering wrote:
> On Wed, 2 Jun 2021 09:10:44 +0200, Juergen Gross <jgross@suse.com> wrote:
> 
>> MiB_sec = ((ctx->save.pages_sent * PAGE_SIZE * 1000) / ms) /
>>             (1024U * 1024U);
> 
> I'm not sure: how does this improve the patch?

The scattered calculation makes it much harder to verify (at least
for me).

And initializing a variable named "MiB_sec" with a value that is
clearly in bytes doesn't help.


Juergen

--------------57CE684910C9F378B3CD8C4D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------57CE684910C9F378B3CD8C4D--

--yzJjcQyaJibJkK5iP4hOFbK7PIyq1rJkS--

--KkE2VY15i2WtqE8xxsLC0C8EdJyQwKRs9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmC/QXsFAwAAAAAACgkQsN6d1ii/Ey/e
3Af/QOeOGZJV7aA/elkpgZMJc/CBpWhsHWbOtSayXobsZe9+djcQzMvwcowEsMTWns0wZsT3mvqt
yUrAqXNwQ5jy69IYCWuo3PYB+5wVPUyOvyb8AjgQbdmj+PS7cTx9CrraSo2pcCJ7npw+59NKJRpb
yFxodxirYqT1Q4yXpDNW5N6n6gncsrMf3MkIVXGvY7VuugtPOd4peXLnli7yEyWKB3vI/zvZUvpv
wmuIe9YHNOV75bpzxkT3rYEYiBWggAbuL3ljaQJcxYTi2i4GAkwxTeIGrV6zqb8nZMxpFWg+SzNm
wxgt4uofpoE9jgH6Ex1TC5TvYiyh46/mcrGM2/+xhA==
=crw0
-----END PGP SIGNATURE-----

--KkE2VY15i2WtqE8xxsLC0C8EdJyQwKRs9--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 10:08:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 10:08:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138387.256186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYes-0006oa-Bi; Tue, 08 Jun 2021 10:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138387.256186; Tue, 08 Jun 2021 10:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYes-0006oS-8K; Tue, 08 Jun 2021 10:08:30 +0000
Received: by outflank-mailman (input) for mailman id 138387;
 Tue, 08 Jun 2021 10:08:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqYer-0006oM-Nm
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:08:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqYeq-0005AG-GO; Tue, 08 Jun 2021 10:08:28 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqYeq-00027X-6a; Tue, 08 Jun 2021 10:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=WWx1c4xCo/kN7OvGQUgBOy+jKZqydcCPkw/e/l7Xqec=; b=6TXhwZyRyHMEsBiAemv6XBy1g5
	v/xZ+oR3a4SaNDkSLlSMYHVeDF+kBz9eezBhEOQakZsace3vmHiXNloc7Zk9BMhY6aEjdrfZfj8yr
	ImUr4HhBl7lgcw1h0dHAxxUqCgIYhIrdQ9DLqw03yDh+qEK1AAqq07pkv/S/sTBhu+t8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen/grant-table: Simplify the update to the per-vCPU maptrack freelist
Date: Tue,  8 Jun 2021 11:08:24 +0100
Message-Id: <20210608100824.25141-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Since XSA-228 (commit 02cbeeb62075 "gnttab: split maptrack lock
to make it fulfill its purpose again"), v->maptrack_head,
v->maptrack_tail and the content of the freelist are accessed with
the lock v->maptrack_freelist_lock held.

Therefore it is no longer necessary to update the fields using
cmpxchg(), nor to read them atomically.

Note that there are two cases where v->maptrack_tail is accessed without
the lock. They both happen in get_maptrack_handle() when the current
vCPU is initializing its own free list, so nothing can race with them.

The code is now reworked to remove any use of cmpxchg() and read_atomic()
when accessing the fields v->maptrack_{head, tail} as well as the
freelist.

Take the opportunity to add a comment on top of the lock definition
and explain what it protects.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

    Changes in v2:
        - Fix typos
        - Update the commit message
        - Don't use MAPTRACK_TAIL and INVALID_MAPTRACK_HANDLE
          interchangeably
---
 xen/common/grant_table.c | 66 ++++++++++++++++------------------------
 xen/include/xen/sched.h  |  8 ++++-
 2 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cfb6..fab77ab9ccb8 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -543,33 +543,27 @@ double_gt_unlock(struct grant_table *lgt, struct grant_table *rgt)
 static inline grant_handle_t
 _get_maptrack_handle(struct grant_table *t, struct vcpu *v)
 {
-    unsigned int head, next, prev_head;
+    unsigned int head, next;
 
     spin_lock(&v->maptrack_freelist_lock);
 
-    do {
-        /* No maptrack pages allocated for this VCPU yet? */
-        head = read_atomic(&v->maptrack_head);
-        if ( unlikely(head == MAPTRACK_TAIL) )
-        {
-            spin_unlock(&v->maptrack_freelist_lock);
-            return INVALID_MAPTRACK_HANDLE;
-        }
-
-        /*
-         * Always keep one entry in the free list to make it easier to
-         * add free entries to the tail.
-         */
-        next = read_atomic(&maptrack_entry(t, head).ref);
-        if ( unlikely(next == MAPTRACK_TAIL) )
-        {
-            spin_unlock(&v->maptrack_freelist_lock);
-            return INVALID_MAPTRACK_HANDLE;
-        }
+    /* No maptrack pages allocated for this VCPU yet? */
+    head = v->maptrack_head;
+    if ( unlikely(head == MAPTRACK_TAIL) )
+    {
+        spin_unlock(&v->maptrack_freelist_lock);
+        return INVALID_MAPTRACK_HANDLE;
+    }
 
-        prev_head = head;
-        head = cmpxchg(&v->maptrack_head, prev_head, next);
-    } while ( head != prev_head );
+    /*
+     * Always keep one entry in the free list to make it easier to
+     * add free entries to the tail.
+     */
+    next = maptrack_entry(t, head).ref;
+    if ( unlikely(next == MAPTRACK_TAIL) )
+        head = INVALID_MAPTRACK_HANDLE;
+    else
+        v->maptrack_head = next;
 
     spin_unlock(&v->maptrack_freelist_lock);
 
@@ -623,7 +617,7 @@ put_maptrack_handle(
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
-    unsigned int prev_tail, cur_tail;
+    unsigned int tail;
 
     /* 1. Set entry to be a tail. */
     maptrack_entry(t, handle).ref = MAPTRACK_TAIL;
@@ -633,14 +627,11 @@ put_maptrack_handle(
 
     spin_lock(&v->maptrack_freelist_lock);
 
-    cur_tail = read_atomic(&v->maptrack_tail);
-    do {
-        prev_tail = cur_tail;
-        cur_tail = cmpxchg(&v->maptrack_tail, prev_tail, handle);
-    } while ( cur_tail != prev_tail );
+    tail = v->maptrack_tail;
+    v->maptrack_tail = handle;
 
     /* 3. Update the old tail entry to point to the new entry. */
-    write_atomic(&maptrack_entry(t, prev_tail).ref, handle);
+    maptrack_entry(t, tail).ref = handle;
 
     spin_unlock(&v->maptrack_freelist_lock);
 }
@@ -650,7 +641,7 @@ get_maptrack_handle(
     struct grant_table *lgt)
 {
     struct vcpu          *curr = current;
-    unsigned int          i, head;
+    unsigned int          i;
     grant_handle_t        handle;
     struct grant_mapping *new_mt = NULL;
 
@@ -686,7 +677,7 @@ get_maptrack_handle(
             maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
             curr->maptrack_tail = handle;
             if ( curr->maptrack_head == MAPTRACK_TAIL )
-                write_atomic(&curr->maptrack_head, handle);
+                curr->maptrack_head = handle;
             spin_unlock(&curr->maptrack_freelist_lock);
         }
         return steal_maptrack_handle(lgt, curr);
@@ -707,7 +698,7 @@ get_maptrack_handle(
         new_mt[i].vcpu = curr->vcpu_id;
     }
 
-    /* Set tail directly if this is the first page for this VCPU. */
+    /* Set tail directly if this is the first page for the local vCPU. */
     if ( curr->maptrack_tail == MAPTRACK_TAIL )
         curr->maptrack_tail = handle + MAPTRACK_PER_PAGE - 1;
 
@@ -716,13 +707,10 @@ get_maptrack_handle(
     lgt->maptrack_limit += MAPTRACK_PER_PAGE;
 
     spin_unlock(&lgt->maptrack_lock);
-    spin_lock(&curr->maptrack_freelist_lock);
-
-    do {
-        new_mt[i - 1].ref = read_atomic(&curr->maptrack_head);
-        head = cmpxchg(&curr->maptrack_head, new_mt[i - 1].ref, handle + 1);
-    } while ( head != new_mt[i - 1].ref );
 
+    spin_lock(&curr->maptrack_freelist_lock);
+    new_mt[i - 1].ref = curr->maptrack_head;
+    curr->maptrack_head = handle + 1;
     spin_unlock(&curr->maptrack_freelist_lock);
 
     return handle;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3982167144c6..6c52ba2af019 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -255,7 +255,13 @@ struct vcpu
     /* VCPU paused by system controller. */
     int              controller_pause_count;
 
-    /* Grant table map tracking. */
+    /*
+     * Grant table map tracking. The lock maptrack_freelist_lock
+     * protects:
+     *  - The entries in the freelist
+     *  - maptrack_head
+     *  - maptrack_tail
+     */
     spinlock_t       maptrack_freelist_lock;
     unsigned int     maptrack_head;
     unsigned int     maptrack_tail;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 10:30:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 10:30:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138395.256198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYzk-0001G5-7l; Tue, 08 Jun 2021 10:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138395.256198; Tue, 08 Jun 2021 10:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqYzk-0001F9-1m; Tue, 08 Jun 2021 10:30:04 +0000
Received: by outflank-mailman (input) for mailman id 138395;
 Tue, 08 Jun 2021 10:30:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqYzi-0000zn-Ef
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:30:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqYzi-0005X0-BV
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:30:02 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lqYzi-0003Hp-Ae
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:30:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lqYze-00030d-RY; Tue, 08 Jun 2021 11:29:58 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=R2Vjv5tsfMCyKZbrnMioUAl3KkaQ91YACRcM4gaZd5A=; b=GrQQs1Jr5P+1RG5HSI9FQGzW3S
	dmL2tzwV3xCRvZYbEva8mzX2XnJKkmLk+C7S3Rb55VjZrVdMKKI1CAEyWvMVB/yjDfuD2O4GG4bpW
	OVxGrxGgh3y2prfxBPRtm11shQGKJgbOr5XO7MMOb6DbN9HGXUx4G6ALkb7OASK/LlVY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24767.18086.549942.159942@mariner.uk.xensource.com>
Date: Tue, 8 Jun 2021 11:29:58 +0100
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH] tools/firmware/ovmf: Use OvmfXen platform file if it exists
In-Reply-To: <20210601102804.698364-1-anthony.perard@citrix.com>
References: <20210601102804.698364-1-anthony.perard@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Anthony PERARD writes ("[XEN PATCH] tools/firmware/ovmf: Use OvmfXen platform file if it exists"):
> A platform introduced in EDK II named OvmfXen is now the one to use for
> Xen instead of OvmfX64. It comes with PVH support.
> 
> Also, the Xen support in OvmfX64 is deprecated,
>     "deprecation notice: *dynamic* multi-VMM (QEMU vs. Xen) support in OvmfPkg"
>     https://edk2.groups.io/g/devel/message/75498
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
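
The selection the quoted description implies can be sketched roughly as
follows (the .dsc paths and variable name are assumptions for
illustration, not taken from the patch itself):

```shell
# Prefer the dedicated OvmfXen platform description when the checked-out
# edk2 tree provides it; otherwise fall back to the older OvmfX64 one.
if test -e OvmfPkg/OvmfXen.dsc; then
    OVMF_PLATFORM=OvmfPkg/OvmfXen.dsc
else
    OVMF_PLATFORM=OvmfPkg/OvmfPkgX64.dsc
fi
echo "$OVMF_PLATFORM"
```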

Acked-by: Ian Jackson <iwj@xenproject.org>

I will commit this in a moment.

Do we need to backport this?

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 10:40:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 10:40:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138404.256214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqZ9W-0002zM-6X; Tue, 08 Jun 2021 10:40:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138404.256214; Tue, 08 Jun 2021 10:40:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqZ9W-0002zF-3F; Tue, 08 Jun 2021 10:40:10 +0000
Received: by outflank-mailman (input) for mailman id 138404;
 Tue, 08 Jun 2021 10:40:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exX0=LC=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lqZ9V-0002z9-7r
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:40:09 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67a3e3aa-6079-45a5-a9ff-2deac61a1652;
 Tue, 08 Jun 2021 10:40:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67a3e3aa-6079-45a5-a9ff-2deac61a1652
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623148808;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=7RGMDtvv5YlLqoFWHQvJ5ttRUT/lGNn6s4fM/Uca3g8=;
  b=FLtIrB/ozoeB3j18lUZcdofI4VmpEa5cP1PMZ14WPoRz/gzQMm3yoZ+r
   vL3JuvaIo14QhqluOshUq7Wzo1ZfkSCqn/k7AhRUxGQsnVXv3vLTBv0Er
   sehA20C8K3D1vJ551PSm940YOkKxcwYGgge2Jtcru7neFm6lTmucSuVRW
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OwXe7mCLfrL1GTaz0eGaXytpCatcfLJtIoT6N59rcKOFTWYP6MO59iAFTI/LKZI5wuJc3atmg0
 KLDBqb+W8KSsIg5uBC4wUdVQyt9/YSt7dpfAXXFik0k7x0KdrAxnORR0BmDmdybXQ3WYfA0rnT
 tHbj8PGbRr1c3fhuYtO9kKpnN/eFxGufoEMsyzwGn9goA3P8Z6iH6tpKIdR29lV66Sx5+dQm4I
 GNcQN5ERJvcFq155ziqsGhy+R/DLgwMSHvI99dBePqOVamwQ54MMmS3a4burBNpXkqWSdykrZa
 GiQ=
X-SBRS: 5.1
X-MesageID: 45608740
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:ciKPEaF7t9Vd4xI/pLqE6MeALOsnbusQ8zAXP0AYc3Jom+ij5q
 STdZUgpHrJYVkqNU3I9ertBEDEewK6yXcX2/hyAV7BZmnbUQKTRekIh7cKgQeQeBEWntQts5
 uIGJIeNDSfNzdHsfo=
X-IronPort-AV: E=Sophos;i="5.83,257,1616472000"; 
   d="scan'208";a="45608740"
Date: Tue, 8 Jun 2021 11:40:04 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Jackson <iwj@xenproject.org>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH] tools/firmware/ovmf: Use OvmfXen platform file if
 it exists
Message-ID: <YL9JBMnLxdUqLmY3@perard>
References: <20210601102804.698364-1-anthony.perard@citrix.com>
 <24767.18086.549942.159942@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <24767.18086.549942.159942@mariner.uk.xensource.com>

On Tue, Jun 08, 2021 at 11:29:58AM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("[XEN PATCH] tools/firmware/ovmf: Use OvmfXen platform file if it exists"):
> > A platform introduced in EDK II named OvmfXen is now the one to use for
> > Xen instead of OvmfX64. It comes with PVH support.
> > 
> > Also, the Xen support in OvmfX64 is deprecated,
> >     "deprecation notice: *dynamic* multi-VMM (QEMU vs. Xen) support in OvmfPkg"
> >     https://edk2.groups.io/g/devel/message/75498
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> I will commit this in a moment.
> 
> Do we need to backport this?

Yes, because osstest is wired to use the latest version of OVMF.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 10:44:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 10:44:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138410.256225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqZDY-0003fn-Lo; Tue, 08 Jun 2021 10:44:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138410.256225; Tue, 08 Jun 2021 10:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqZDY-0003fg-Iu; Tue, 08 Jun 2021 10:44:20 +0000
Received: by outflank-mailman (input) for mailman id 138410;
 Tue, 08 Jun 2021 10:44:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JFXD=LC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqZDY-0003fa-03
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 10:44:20 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b702d68b-3964-4a75-85ad-9f59d8a1fd7c;
 Tue, 08 Jun 2021 10:44:18 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-37-bde-LhGlOOuEDd9fjADOWw-1; Tue, 08 Jun 2021 12:44:16 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Tue, 8 Jun
 2021 10:44:13 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 10:44:13 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0106.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:19::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Tue, 8 Jun 2021 10:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b702d68b-3964-4a75-85ad-9f59d8a1fd7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623149057;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YkFPpmtDQcMA0edR+gopvuykDTD5zn/IQkctL28VU4w=;
	b=Y5r+w4/fvrxGkAdZ2LGP9ZkYKhlUXk4b8x/HLx4UMSxcyXXCjf8Cx56NHtke2F7nCCGOAr
	5lS6hpvVzsTS5YDc6xb/NVsuKfIgo+ZerqK92mkQCgQ0Jfp70j8YZ55D7hcrOf1oVAH5XE
	zqkFw5YR7IUXhmZXHEqj/x5kw5K7GCw=
X-MC-Unique: bde-LhGlOOuEDd9fjADOWw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yvy1jujUglMjO8zrj+j/Ly0VIZCb5UZh/li1ZXrOH3OvydvFxsogdG/UOgujVz1oWqpV8cwgpjlJ1qZ9u5tVDxTD4ub/1N9xPRvZdN26nPCQHJjxjO/qqs4yQMCXBvKwZT0mkMBO4bsoPRW39CKuGFgleBZIJuErpqpKqaMyhZ5VxY+RNKXigcYK6i4VXxwfXYQilX9i3jlCLj7WDsc1a9YCnzdNtOKEjPvbpDPWPQNjjOxtJwYm3mB1Rr+hySuLA+/juyOVgiaeNarHXoRxkwwu8ptppaVThvLIVHCXkcClJ/iGEuW03J/xrwVJgA36bPWxVNDx8U/1ZSMm/aTbmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YkFPpmtDQcMA0edR+gopvuykDTD5zn/IQkctL28VU4w=;
 b=RJb/HuLPwQle2RzGMMErY/hZ7bE1ilYGLUvWy+MH30S/eskbVEUW8i9ApUNxf4U4iRuvv3fnxC1wmLMrkO92H7gV0eBLqalYX6+JTZl6d2E3uuCNItoVrawqjktBwnImzOQZbkh7GOOxcJZrzG6z1OCl/nwrENVQ8aayEHs/IAkNujoPKjbTmonU+q0psso2w+zHHubUfg/I9G58HYhQhQwcOX8Xe9HlykPLWcWafrRHqw1gGpH3PWNrG7kaVe6FWtcgnGXKBsR2dp6VsjXWqomGakRuUnBEqNWwx89W7UKYIV5kIMgIe1Dt74d2Dti2RfmeWB8IEN07pXso57BVBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v8 2/2] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Alistair Francis <alistair.francis@wdc.com>,
 xen-devel@lists.xenproject.org
References: <cover.1622772299.git.connojdavis@gmail.com>
 <4337d3cd6891b34f534d85ca62712bd3b446edf8.1622772299.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d8e8eb9e-a80d-1202-9bbd-45af977e6e30@suse.com>
Date: Tue, 8 Jun 2021 12:44:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <4337d3cd6891b34f534d85ca62712bd3b446edf8.1622772299.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0106.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:19::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f2d92255-bd1a-48cd-fa35-08d92a6a5ac8
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2445DFB622CA533D87F5EC76B3379@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 04.06.2021 04:14, Connor Davis wrote:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -456,6 +456,15 @@ F:	tools/libs/light/libxl_nonetbuffer.c
>  F:	tools/hotplug/Linux/remus-netbuf-setup
>  F:	tools/hotplug/Linux/block-drbd-probe
>  
> +RISCV
> +M:	Bob Eshleman <bobbyeshleman@gmail.com>
> +M:	Alistair Francis <alistair.francis@wdc.com>
> +R:	Connor Davis <connojdavis@gmail.com>
> +S:	Supported
> +F:	config/riscv64.mk
> +F:	xen/arch/riscv/
> +F:	xen/include/asm-riscv/
> +
>  RTDS SCHEDULER
>  M:	Dario Faggioli <dfaggioli@suse.com>
>  M:	Meng Xu <mengxu@cis.upenn.edu>

FAOD, as per Julien's request for this to be formally complete, I'm waiting
for Bob's ack before putting in this one (together with patch 1, as I
indicated before).

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 11:49:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 11:49:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162541: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d21121685fac829c988e432407fb0e4ef9b19331
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 11:48:51 +0000

flight 162541 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162541/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d21121685fac829c988e432407fb0e4ef9b19331
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   25 attempts
Testing same since   162517  2021-06-07 16:01:28 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    Now that 32-bit PV guests have been security de-supported when not
    running under PV-shim, the hypervisor is no longer configured to
    support those domains by default when not being built as PV-shim.

    Unfortunately libxenguest will fail to save or restore a PV domain
    due to this restriction, as it tries to get the compat MFN list even
    for 64-bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features,
    causing them to always be set in guest view, irrespective of the
    toolstack's choice on the matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases look to flip-flop between keeping or discarding the date
    and title of the referenced qemu-trad commit. I think with the hash
    replaced by a tag, the commit's date and title would better also be
    purged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue could, in theory, affect Credit2's load-balancing logic.
    In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, which also means there is no need to do any load balancing.

    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking the
    idle unit. In other words, this prevents us from finding and picking
    the actual unit that we're meant to start running (which might be
    further ahead in the runq).

    Depending on the vCPU pinning configuration, this may lead to such a
    unit being stuck in the runq for a long time, causing malfunctioning
    inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when
    the bitness of the tool stack and the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a pv linux guest produced via "xl dump-core" is not
    usable, as since kernel 4.14 only the linear p2m table is kept if
    Xen indicates support for it. Unfortunately xc_core_arch_map_p2m()
    still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 11:58:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 11:58:38 +0000
To: Xen-devel <xen-devel@lists.xen.org>
From: Sander Eikelenboom <linux@eikelenboom.it>
Subject: xen-unstable build-failure: xg_cpuid_x86.c:99:42: error:
 ‘INIT_SPECIAL_FEATURES’ undeclared (first use in this function); did you
 mean ‘INIT_PV_MAX_FEATURES’?
Message-ID: <39b24aaf-a785-6dae-23fa-c9a787760565@eikelenboom.it>
Date: Tue, 8 Jun 2021 13:28:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: nl-NL
Content-Transfer-Encoding: 8bit

L.S.,

I seem to be running into a build error with current xen-unstable.

--
Sander

echo '#if 0' >>/usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new
echo '.endif' >>/usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new
cat asm-macros.i >>/usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new
echo '#endif' >>/usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new
if ! cmp -s /usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new /usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h; then mv -f /usr/src/new/xen-unstable/xen/include/asm-x86/asm-macros.h.new /usr/src/new/xen-unstable
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
make[3]: Leaving directory '/usr/src/new/xen-unstable/xen/arch/x86'
make -f /usr/src/new/xen-unstable/xen/Rules.mk include/asm-x86/asm-offsets.h
make[3]: Entering directory '/usr/src/new/xen-unstable/xen'
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc -MMD -MP -MF ./.asm-offsets.s.d -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs -O1 -fno-omit-frame-pointer -nostdinc
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
if ! cmp -s asm-offsets.s.new asm-offsets.s; then mv -f asm-offsets.s.new asm-offsets.s; else rm -f asm-offsets.s.new; fi
gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_L
make[3]: Leaving directory '/usr/src/new/xen-unstable/xen'
make -f /usr/src/new/xen-unstable/xen/Rules.mk -C arch/x86 /usr/src/new/xen-unstable/xen/xen
make[3]: Entering directory '/usr/src/new/xen-unstable/xen/arch/x86'
gcc  -DPIC -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=_
gcc  -DPIC -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=_
gcc  -DPIC -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Og -fno-omit-frame-pointer -D__XEN_INTERFACE_VERSION__=_
xg_cpuid_x86.c: In function ‘xc_get_static_cpu_featuremask’:
xg_cpuid_x86.c:99:42: error: ‘INIT_SPECIAL_FEATURES’ undeclared (first use in this function); did you mean ‘INIT_PV_MAX_FEATURES’?
  #define MASK(x) [XC_FEATUREMASK_ ## x] = INIT_ ## x ## _FEATURES
                                           ^~~~~
xg_cpuid_x86.c:102:9: note: in expansion of macro ‘MASK’
          MASK(SPECIAL),
          ^~~~
xg_cpuid_x86.c:99:42: note: each undeclared identifier is reported only once for each function it appears in
  #define MASK(x) [XC_FEATUREMASK_ ## x] = INIT_ ## x ## _FEATURES
                                           ^~~~~
xg_cpuid_x86.c:102:9: note: in expansion of macro ‘MASK’
          MASK(SPECIAL),
          ^~~~
make[6]: *** [/usr/src/new/xen-unstable/tools/libs/guest/../../../tools/Rules.mk:145: xg_cpuid_x86.o] Error 1
make[6]: *** Waiting for unfinished jobs....
sed "s!\(^\| \)$PWD/! !" .asm-macros.i.d >.asm-macros.i.d2.tmp && mv -f .asm-macros.i.d2.tmp .asm-macros.i.d2
make[6]: Leaving directory '/usr/src/new/xen-unstable/tools/libs/guest'
make[5]: *** [/usr/src/new/xen-unstable/tools/libs/../../tools/Rules.mk:161: subdir-install-guest] Error 2
make[5]: Leaving directory '/usr/src/new/xen-unstable/tools/libs'
make[4]: *** [/usr/src/new/xen-unstable/tools/libs/../../tools/Rules.mk:156: subdirs-install] Error 2
make[4]: Leaving directory '/usr/src/new/xen-unstable/tools/libs'
make[3]: *** [/usr/src/new/xen-unstable/tools/../tools/Rules.mk:161: subdir-install-libs] Error 2
make[3]: Leaving directory '/usr/src/new/xen-unstable/tools'
make[2]: *** [/usr/src/new/xen-unstable/tools/../tools/Rules.mk:156: subdirs-install] Error 2
make[2]: Leaving directory '/usr/src/new/xen-unstable/tools'
make[1]: *** [Makefile:66: install] Error 2
make[1]: Leaving directory '/usr/src/new/xen-unstable/tools'
make: *** [Makefile:140: install-tools] Error 2
make: *** Waiting for unfinished jobs....


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:22:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138446.256271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqakS-0005uU-9O; Tue, 08 Jun 2021 12:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138446.256271; Tue, 08 Jun 2021 12:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqakS-0005uN-5I; Tue, 08 Jun 2021 12:22:24 +0000
Received: by outflank-mailman (input) for mailman id 138446;
 Tue, 08 Jun 2021 12:22:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AOFJ=LC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqakQ-0005uH-Lu
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 12:22:22 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f69a0a0-6008-499d-b79c-e5d282e29cec;
 Tue, 08 Jun 2021 12:22:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f69a0a0-6008-499d-b79c-e5d282e29cec
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623154940;
  h=subject:to:references:from:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=Cw23AFEkckS3Dtu5zAnydFSgs2beAzNaetWSreUjhHo=;
  b=JOg6dRkeAyh9v66U1pULUZxqCqb8c71u8RjQC/zmBQORX6ctiDg16mPg
   4WpcDEEeNXv31TJ2ae78TleSrXBj4F/lpW7ztyQ+63kUOgQyarzON/xwx
   IF+s7wmtn4VT6CP7PBkQqKACivOSz3K7V3qMx3wvI+0usBGCEYUHLQn+R
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45374247
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,258,1616472000"; 
   d="scan'208";a="45374247"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Cw23AFEkckS3Dtu5zAnydFSgs2beAzNaetWSreUjhHo=;
 b=DuWJWfGZqJp15rPogLnN80g19zUAP8O2Pc+2IIW118qKDF+Ve+qAiSPzCdEauE5m1BBfWAgMeiqDU+NiMc9Oydsk6Muh7feyxv0GasKiSVhCLahNIj9wkII33OZW9bgEKdt9jgTHnHT80ZWJHhBsKF8MlPwxN/4snOs4PR951eI=
Subject: Re: xen-unstable build-failure: xg_cpuid_x86.c:99:42: error:
 ‘INIT_SPECIAL_FEATURES’ undeclared (first use in this function); did you
 mean ‘INIT_PV_MAX_FEATURES’?
To: Sander Eikelenboom <linux@eikelenboom.it>, Xen-devel
	<xen-devel@lists.xen.org>
References: <39b24aaf-a785-6dae-23fa-c9a787760565@eikelenboom.it>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e296e60a-6199-ab80-5730-6c7e0cc96620@citrix.com>
Date: Tue, 8 Jun 2021 13:22:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <39b24aaf-a785-6dae-23fa-c9a787760565@eikelenboom.it>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0388.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bcfbf13e-939f-4272-c8d3-08d92a780e3e
X-MS-TrafficTypeDiagnostic: BYAPR03MB4166:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4166A6B90B4F88A99CE5FDC5BA379@BYAPR03MB4166.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: bcfbf13e-939f-4272-c8d3-08d92a780e3e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 12:22:17.4103
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4166
X-OriginatorOrg: citrix.com

On 08/06/2021 12:28, Sander Eikelenboom wrote:
> L.S.,
>
> I seem to be running into a build error with current xen-unstable.

Yeah - that's my fault.
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=69e1472d21cf7e5cf0795ef38b99d00de78a910e
went a little too far, and shouldn't have dropped the changes in
gen-cpuid.py.

It went unnoticed because the code below is obfuscated from grep, and
because the gitlab CI is currently blocked on earlier build failures
from the PV32 changes.

I'll do a patch when I've got a moment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:35:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138454.256288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxM-0007SE-N9; Tue, 08 Jun 2021 12:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138454.256288; Tue, 08 Jun 2021 12:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxM-0007RP-IN; Tue, 08 Jun 2021 12:35:44 +0000
Received: by outflank-mailman (input) for mailman id 138454;
 Tue, 08 Jun 2021 12:35:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxM-0007Oq-0A
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:35:44 +0000
Received: from mx.upb.ro (unknown [141.85.13.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e435a3d-c40f-4b75-b4a5-51754ef705cd;
 Tue, 08 Jun 2021 12:35:41 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 10425B560199;
 Tue,  8 Jun 2021 15:35:40 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id ifZF8FZnpxHz; Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id EA895B560192;
 Tue,  8 Jun 2021 15:35:37 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id C75VuhEwhfYB; Tue,  8 Jun 2021 15:35:37 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 4345EB56018F;
 Tue,  8 Jun 2021 15:35:37 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e435a3d-c40f-4b75-b4a5-51754ef705cd
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Tim Deegan <tim@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 0/5] Fix redefinition errors for toolstack libs
Date: Tue,  8 Jun 2021 15:35:24 +0300
Message-Id: <cover.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

For replication I used gcc 10.3 on an Alpine system. In order to replicate the
redefinition error for PAGE_SIZE one should install the 'fortify-headers'
package which will change the chain of included headers by indirectly including
/usr/include/limits.h where PAGE_SIZE and PATH_MAX are defined.

Changes since v1:
- Use XC_PAGE_* macros instead of PAGE_* macros

Changes since v2:
- Define KDD_PAGE_* macros for changes in debugger/kdd/

Changes since v3:
- Use sysconf(_SC_PAGESIZE) instead of getpagesize()

Costin Lupu (5):
  tools/debugger: Fix PAGE_SIZE redefinition error
  tools/libfsimage: Fix PATH_MAX redefinition error
  tools/libs/foreignmemory: Fix PAGE_SIZE redefinition error
  tools/libs/gnttab: Fix PAGE_SIZE redefinition error
  tools/ocaml: Fix redefinition errors

 tools/debugger/kdd/kdd-xen.c                  | 15 ++++------
 tools/debugger/kdd/kdd.c                      | 19 ++++++-------
 tools/debugger/kdd/kdd.h                      |  7 +++++
 tools/libfsimage/ext2fs/fsys_ext2fs.c         |  2 ++
 tools/libfsimage/reiserfs/fsys_reiserfs.c     |  2 ++
 tools/libs/foreignmemory/core.c               |  2 +-
 tools/libs/foreignmemory/freebsd.c            | 10 +++----
 tools/libs/foreignmemory/linux.c              | 23 +++++++--------
 tools/libs/foreignmemory/minios.c             |  2 +-
 tools/libs/foreignmemory/netbsd.c             | 10 +++----
 tools/libs/foreignmemory/private.h            |  9 +-----
 tools/libs/gnttab/freebsd.c                   | 28 +++++++++----------
 tools/libs/gnttab/linux.c                     | 28 +++++++++----------
 tools/libs/gnttab/netbsd.c                    | 23 +++++++--------
 tools/ocaml/libs/xc/xenctrl_stubs.c           | 10 +++----
 .../ocaml/libs/xentoollog/xentoollog_stubs.c  |  4 +++
 tools/ocaml/libs/xl/xenlight_stubs.c          |  4 +++
 17 files changed, 98 insertions(+), 100 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:35:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138453.256282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxM-0007P3-E5; Tue, 08 Jun 2021 12:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138453.256282; Tue, 08 Jun 2021 12:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxM-0007Ow-B6; Tue, 08 Jun 2021 12:35:44 +0000
Received: by outflank-mailman (input) for mailman id 138453;
 Tue, 08 Jun 2021 12:35:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxL-0007Ok-Cf
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:35:43 +0000
Received: from mx.upb.ro (unknown [141.85.13.201])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 922b3d25-9387-4a10-bfbd-97bc20a93b20;
 Tue, 08 Jun 2021 12:35:42 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 1F056B560197;
 Tue,  8 Jun 2021 15:35:41 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id HBWGNFF5zl2C; Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 559ACB56018F;
 Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id An2SduKIoHOu; Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id EB1D7B560197;
 Tue,  8 Jun 2021 15:35:37 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 922b3d25-9387-4a10-bfbd-97bc20a93b20
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Tim Deegan <tim@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/5] tools/debugger: Fix PAGE_SIZE redefinition error
Date: Tue,  8 Jun 2021 15:35:25 +0300
Message-Id: <603eac57f53a2263baceb5ec5cd8e14aa46c213f.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror. This
patch replaces usage of PAGE_* macros with KDD_PAGE_* macros in order to avoid
confusion between control domain page granularity (PAGE_* definitions) and
guest domain page granularity (which is what we are dealing with here).

We chose to define the KDD_PAGE_* macros instead of using XC_PAGE_* macros
because (1) the code in kdd.c should not include any Xen headers and (2) to add
consistency for code in both kdd.c and kdd-xen.c.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
Reviewed-by: Tim Deegan <tim@xen.org>
---
 tools/debugger/kdd/kdd-xen.c | 15 ++++++---------
 tools/debugger/kdd/kdd.c     | 19 ++++++++-----------
 tools/debugger/kdd/kdd.h     |  7 +++++++
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index f3f9529f9f..e78c9311c4 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -48,9 +48,6 @@
 
 #define MAPSIZE 4093 /* Prime */
 
-#define PAGE_SHIFT 12
-#define PAGE_SIZE (1U << PAGE_SHIFT)
-
 struct kdd_guest {
     struct xentoollog_logger xc_log; /* Must be first for xc log callbacks */
     xc_interface *xc_handle;
@@ -72,7 +69,7 @@ static void flush_maps(kdd_guest *g)
     int i;
     for (i = 0; i < MAPSIZE; i++) {
         if (g->maps[i] != NULL)
-            munmap(g->maps[i], PAGE_SIZE);
+            munmap(g->maps[i], KDD_PAGE_SIZE);
         g->maps[i] = NULL;
     }
 }
@@ -490,13 +487,13 @@ static uint32_t kdd_access_physical_page(kdd_guest *g, uint64_t addr,
     uint32_t map_pfn, map_offset;
     uint8_t *map;
 
-    map_pfn = (addr >> PAGE_SHIFT);
-    map_offset = addr & (PAGE_SIZE - 1);
+    map_pfn = (addr >> KDD_PAGE_SHIFT);
+    map_offset = addr & (KDD_PAGE_SIZE - 1);
 
     /* Evict any mapping of the wrong frame from our slot */
     if (g->pfns[map_pfn % MAPSIZE] != map_pfn
         && g->maps[map_pfn % MAPSIZE] != NULL) {
-        munmap(g->maps[map_pfn % MAPSIZE], PAGE_SIZE);
+        munmap(g->maps[map_pfn % MAPSIZE], KDD_PAGE_SIZE);
         g->maps[map_pfn % MAPSIZE] = NULL;
     }
     g->pfns[map_pfn % MAPSIZE] = map_pfn;
@@ -507,7 +504,7 @@ static uint32_t kdd_access_physical_page(kdd_guest *g, uint64_t addr,
     else {
         map = xc_map_foreign_range(g->xc_handle,
                                    g->domid,
-                                   PAGE_SIZE,
+                                   KDD_PAGE_SIZE,
                                    PROT_READ|PROT_WRITE,
                                    map_pfn);
 
@@ -533,7 +530,7 @@ uint32_t kdd_access_physical(kdd_guest *g, uint64_t addr,
 {
     uint32_t chunk, rv, done = 0;
     while (len > 0) {
-        chunk = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
+        chunk = KDD_PAGE_SIZE - (addr & (KDD_PAGE_SIZE - 1));
         if (chunk > len)
             chunk = len;
         rv = kdd_access_physical_page(g, addr, chunk, buf, write);
diff --git a/tools/debugger/kdd/kdd.c b/tools/debugger/kdd/kdd.c
index 17513c2650..320c623eda 100644
--- a/tools/debugger/kdd/kdd.c
+++ b/tools/debugger/kdd/kdd.c
@@ -288,9 +288,6 @@ static void kdd_log_pkt(kdd_state *s, const char *name, kdd_pkt *p)
  *  Memory access: virtual addresses and syntactic sugar.
  */
 
-#define PAGE_SHIFT (12)
-#define PAGE_SIZE (1ULL << PAGE_SHIFT)
-
 static uint32_t kdd_read_physical(kdd_state *s, uint64_t addr,
                                   uint32_t len, void *buf)
 {
@@ -352,7 +349,7 @@ static uint64_t v2p(kdd_state *s, int cpuid, uint64_t va)
 
     /* Walk the appropriate number of levels */
     for (i = levels; i > 0; i--) {
-        shift = PAGE_SHIFT + bits * (i-1);
+        shift = KDD_PAGE_SHIFT + bits * (i-1);
         mask = ((1ULL << bits) - 1) << shift;
         offset = ((va & mask) >> shift) * width;
         KDD_DEBUG(s, "level %i: mask 0x%16.16"PRIx64" pa 0x%16.16"PRIx64
@@ -364,12 +361,12 @@ static uint64_t v2p(kdd_state *s, int cpuid, uint64_t va)
             return -1ULL; // Not present
         pa = entry & 0x000ffffffffff000ULL;
         if (pse && (i == 2) && (entry & 0x80)) { // Superpage
-            mask = ((1ULL << (PAGE_SHIFT + bits)) - 1);
+            mask = ((1ULL << (KDD_PAGE_SHIFT + bits)) - 1);
             return (pa & ~mask) + (va & mask);
         }
     }
 
-    return pa + (va & (PAGE_SIZE - 1));
+    return pa + (va & (KDD_PAGE_SIZE - 1));
 }
 
 static uint32_t kdd_access_virtual(kdd_state *s, int cpuid, uint64_t addr,
@@ -380,7 +377,7 @@ static uint32_t kdd_access_virtual(kdd_state *s, int cpuid, uint64_t addr,
 
     /* Process one page at a time */
     while (len > 0) {
-        chunk = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
+        chunk = KDD_PAGE_SIZE - (addr & (KDD_PAGE_SIZE - 1));
         if (chunk > len)
             chunk = len;
         pa = v2p(s, cpuid, addr);
@@ -591,7 +588,7 @@ static void get_os_info_64(kdd_state *s)
     uint64_t dbgkd_addr;
     DBGKD_GET_VERSION64 dbgkd_get_version64;
     /* Maybe 1GB is too big for the limit to search? */
-    uint32_t search_limit = (1024 * 1024 * 1024) / PAGE_SIZE; /*1GB/PageSize*/
+    uint32_t search_limit = (1024 * 1024 * 1024) / KDD_PAGE_SIZE; /*1GB/PageSize*/
     uint64_t efer;
 
     /* if we are not in 64-bit mode, fail */
@@ -620,7 +617,7 @@ static void get_os_info_64(kdd_state *s)
      * in 1GB range above the current page base address
      */
 
-    base = idt0_addr & ~(PAGE_SIZE - 1);
+    base = idt0_addr & ~(KDD_PAGE_SIZE - 1);
 
     while (search_limit) {
         uint16_t val;
@@ -633,7 +630,7 @@ static void get_os_info_64(kdd_state *s)
         if (val == MZ_HEADER) // MZ
             break;
 
-        base -= PAGE_SIZE;
+        base -= KDD_PAGE_SIZE;
         search_limit -= 1;
     }
 
@@ -720,7 +717,7 @@ static void find_os(kdd_state *s)
         /* Try each page in the potential range of kernel load addresses */
         for (limit = s->os.base + s->os.range;
              s->os.base <= limit;
-             s->os.base += PAGE_SIZE)
+             s->os.base += KDD_PAGE_SIZE)
             if (check_os(s))
                 return;
     }
diff --git a/tools/debugger/kdd/kdd.h b/tools/debugger/kdd/kdd.h
index b9a17440df..b476a76d93 100644
--- a/tools/debugger/kdd/kdd.h
+++ b/tools/debugger/kdd/kdd.h
@@ -39,6 +39,13 @@
 
 #define PACKED __attribute__((packed))
 
+/* We define our page related constants here in order to specifically
+ * avoid using the Xen page macros (this is a restriction for the code
+ * in kdd.c which should not include any Xen headers) and to add
+ * consistency for code in both kdd.c and kdd-xen.c. */
+#define KDD_PAGE_SHIFT 12
+#define KDD_PAGE_SIZE (1U << KDD_PAGE_SHIFT)
+
 /*****************************************************************************
  * Serial line protocol: Sender sends a 16-byte header with an optional
  * payload following it.  Receiver responds to each packet with an
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:35:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138455.256307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxR-0007yY-UO; Tue, 08 Jun 2021 12:35:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138455.256307; Tue, 08 Jun 2021 12:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxR-0007yN-QN; Tue, 08 Jun 2021 12:35:49 +0000
Received: by outflank-mailman (input) for mailman id 138455;
 Tue, 08 Jun 2021 12:35:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxQ-0007Ok-8S
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:35:48 +0000
Received: from mx.upb.ro (unknown [141.85.13.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 543521e3-c839-4bef-96ec-530f55d95727;
 Tue, 08 Jun 2021 12:35:42 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id C0F95B560196;
 Tue,  8 Jun 2021 15:35:40 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Pe8Jzz6oRYRU; Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id BE197B560197;
 Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id E4KKFI4Bk5Xd; Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 5A94EB560198;
 Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 543521e3-c839-4bef-96ec-530f55d95727
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 2/5] tools/libfsimage: Fix PATH_MAX redefinition error
Date: Tue,  8 Jun 2021 15:35:26 +0300
Message-Id: <0a80da2cefbef0349177b26facbdc8067e75371f.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PATH_MAX is already defined in the system (e.g. in /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/libfsimage/ext2fs/fsys_ext2fs.c     | 2 ++
 tools/libfsimage/reiserfs/fsys_reiserfs.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tools/libfsimage/ext2fs/fsys_ext2fs.c b/tools/libfsimage/ext2fs/fsys_ext2fs.c
index a4ed10419c..5ed8fce90e 100644
--- a/tools/libfsimage/ext2fs/fsys_ext2fs.c
+++ b/tools/libfsimage/ext2fs/fsys_ext2fs.c
@@ -278,7 +278,9 @@ struct ext4_extent_header {
 
 #define EXT2_SUPER_MAGIC      0xEF53	/* include/linux/ext2_fs.h */
 #define EXT2_ROOT_INO              2	/* include/linux/ext2_fs.h */
+#ifndef PATH_MAX
 #define PATH_MAX                1024	/* include/linux/limits.h */
+#endif
 #define MAX_LINK_COUNT             5	/* number of symbolic links to follow */
 
 /* made up, these are pointers into FSYS_BUF */
diff --git a/tools/libfsimage/reiserfs/fsys_reiserfs.c b/tools/libfsimage/reiserfs/fsys_reiserfs.c
index 916eb15292..10ca657476 100644
--- a/tools/libfsimage/reiserfs/fsys_reiserfs.c
+++ b/tools/libfsimage/reiserfs/fsys_reiserfs.c
@@ -284,7 +284,9 @@ struct reiserfs_de_head
 #define S_ISDIR(mode) (((mode) & 0170000) == 0040000)
 #define S_ISLNK(mode) (((mode) & 0170000) == 0120000)
 
+#ifndef PATH_MAX
 #define PATH_MAX       1024	/* include/linux/limits.h */
+#endif
 #define MAX_LINK_COUNT    5	/* number of symbolic links to follow */
 
 /* The size of the node cache */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:35:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138456.256319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxX-0008Lq-81; Tue, 08 Jun 2021 12:35:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138456.256319; Tue, 08 Jun 2021 12:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxX-0008Lf-4I; Tue, 08 Jun 2021 12:35:55 +0000
Received: by outflank-mailman (input) for mailman id 138456;
 Tue, 08 Jun 2021 12:35:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxV-0007Ok-8Z
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:35:53 +0000
Received: from mx.upb.ro (unknown [141.85.13.221])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d09f5da0-5a51-4942-b1c4-47e530ba074d;
 Tue, 08 Jun 2021 12:35:43 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 6BABEB56018F;
 Tue,  8 Jun 2021 15:35:42 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id QlRxf5MRgZX5; Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 19DA1B560198;
 Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Vp6o898uIw3W; Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id C5104B560199;
 Tue,  8 Jun 2021 15:35:38 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d09f5da0-5a51-4942-b1c4-47e530ba074d
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/5] tools/libs/foreignmemory: Fix PAGE_SIZE redefinition error
Date: Tue,  8 Jun 2021 15:35:27 +0300
Message-Id: <83beb95e3633b1aca7801fd8592406e2057f9bdc.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the
/usr/include/limits.h header) then gcc will trigger a redefinition error
because of -Werror. This patch replaces usage of PAGE_* macros with
XC_PAGE_* macros in order to avoid confusion between control domain page
granularity (PAGE_* definitions) and guest domain page granularity.

The exception is in osdep_xenforeignmemory_map() where we need the system
page size to check whether the PFN array should be allocated with mmap()
or with dynamic allocation.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/libs/foreignmemory/core.c    |  2 +-
 tools/libs/foreignmemory/freebsd.c | 10 +++++-----
 tools/libs/foreignmemory/linux.c   | 23 ++++++++++++-----------
 tools/libs/foreignmemory/minios.c  |  2 +-
 tools/libs/foreignmemory/netbsd.c  | 10 +++++-----
 tools/libs/foreignmemory/private.h |  9 +--------
 6 files changed, 25 insertions(+), 31 deletions(-)

diff --git a/tools/libs/foreignmemory/core.c b/tools/libs/foreignmemory/core.c
index 28ec311af1..7edc6f0dbf 100644
--- a/tools/libs/foreignmemory/core.c
+++ b/tools/libs/foreignmemory/core.c
@@ -202,7 +202,7 @@ int xenforeignmemory_resource_size(
     if ( rc )
         return rc;
 
-    *size = fres.nr_frames << PAGE_SHIFT;
+    *size = fres.nr_frames << XC_PAGE_SHIFT;
     return 0;
 }
 
diff --git a/tools/libs/foreignmemory/freebsd.c b/tools/libs/foreignmemory/freebsd.c
index d94ea07862..2cf0fa1c38 100644
--- a/tools/libs/foreignmemory/freebsd.c
+++ b/tools/libs/foreignmemory/freebsd.c
@@ -63,7 +63,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     privcmd_mmapbatch_t ioctlx;
     int rc;
 
-    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
+    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
     if ( addr == MAP_FAILED )
         return NULL;
 
@@ -78,7 +78,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
 
-        (void)munmap(addr, num << PAGE_SHIFT);
+        (void)munmap(addr, num << XC_PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
     }
@@ -89,7 +89,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -101,7 +101,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(xenforeignmemory_handle *fmem,
                                         xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
@@ -120,7 +120,7 @@ int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_SHARED, fmem->fd, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index c1f35e2db7..9062117407 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -134,7 +134,7 @@ static int retry_paged(int fd, uint32_t dom, void *addr,
         /* At least one gfn is still in paging state */
         ioctlx.num = 1;
         ioctlx.dom = dom;
-        ioctlx.addr = (unsigned long)addr + (i<<PAGE_SHIFT);
+        ioctlx.addr = (unsigned long)addr + (i<<XC_PAGE_SHIFT);
         ioctlx.arr = arr + i;
         ioctlx.err = err + i;
 
@@ -168,7 +168,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     size_t i;
     int rc;
 
-    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED,
+    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED,
                 fd, 0);
     if ( addr == MAP_FAILED )
         return NULL;
@@ -198,9 +198,10 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
          */
         privcmd_mmapbatch_t ioctlx;
         xen_pfn_t *pfn;
-        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), PAGE_SHIFT);
+        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT);
+        int os_page_size = sysconf(_SC_PAGESIZE);
 
-        if ( pfn_arr_size <= PAGE_SIZE )
+        if ( pfn_arr_size <= os_page_size )
             pfn = alloca(num * sizeof(*pfn));
         else
         {
@@ -209,7 +210,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
             if ( pfn == MAP_FAILED )
             {
                 PERROR("mmap of pfn array failed");
-                (void)munmap(addr, num << PAGE_SHIFT);
+                (void)munmap(addr, num << XC_PAGE_SHIFT);
                 return NULL;
             }
         }
@@ -242,7 +243,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
                     continue;
                 }
                 rc = map_foreign_batch_single(fd, dom, pfn + i,
-                        (unsigned long)addr + (i<<PAGE_SHIFT));
+                        (unsigned long)addr + (i<<XC_PAGE_SHIFT));
                 if ( rc < 0 )
                 {
                     rc = -errno;
@@ -254,7 +255,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
             break;
         }
 
-        if ( pfn_arr_size > PAGE_SIZE )
+        if ( pfn_arr_size > os_page_size )
             munmap(pfn, pfn_arr_size);
 
         if ( rc == -ENOENT && i == num )
@@ -270,7 +271,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
 
-        (void)munmap(addr, num << PAGE_SHIFT);
+        (void)munmap(addr, num << XC_PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
     }
@@ -281,7 +282,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -293,7 +294,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(
@@ -312,7 +313,7 @@ int osdep_xenforeignmemory_map_resource(
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_SHARED, fmem->fd, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/minios.c b/tools/libs/foreignmemory/minios.c
index 43341ca301..c5453736d5 100644
--- a/tools/libs/foreignmemory/minios.c
+++ b/tools/libs/foreignmemory/minios.c
@@ -55,7 +55,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 /*
diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
index c0b1b8f79d..597db775d7 100644
--- a/tools/libs/foreignmemory/netbsd.c
+++ b/tools/libs/foreignmemory/netbsd.c
@@ -76,7 +76,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 {
     int fd = fmem->fd;
     privcmd_mmapbatch_v2_t ioctlx;
-    addr = mmap(addr, num * PAGE_SIZE, prot,
+    addr = mmap(addr, num * XC_PAGE_SIZE, prot,
                 flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( addr == MAP_FAILED ) {
         PERROR("osdep_xenforeignmemory_map: mmap failed");
@@ -93,7 +93,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
         PERROR("osdep_xenforeignmemory_map: ioctl failed");
-        munmap(addr, num * PAGE_SIZE);
+        munmap(addr, num * XC_PAGE_SIZE);
         errno = saved_errno;
         return NULL;
     }
@@ -104,7 +104,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num * PAGE_SIZE);
+    return munmap(addr, num * XC_PAGE_SIZE);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -117,7 +117,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(
@@ -136,7 +136,7 @@ int osdep_xenforeignmemory_map_resource(
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 1ee3626dd2..65fe77aa5b 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -1,6 +1,7 @@
 #ifndef XENFOREIGNMEMORY_PRIVATE_H
 #define XENFOREIGNMEMORY_PRIVATE_H
 
+#include <xenctrl.h>
 #include <xentoollog.h>
 
 #include <xenforeignmemory.h>
@@ -10,14 +11,6 @@
 #include <xen/xen.h>
 #include <xen/sys/privcmd.h>
 
-#ifndef PAGE_SHIFT /* Mini-os, Yukk */
-#define PAGE_SHIFT           12
-#endif
-#ifndef __MINIOS__ /* Yukk */
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-#endif
-
 struct xenforeignmemory_handle {
     xentoollog_logger *logger, *logger_tofree;
     unsigned flags;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:35:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:35:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138458.256331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxb-0000MV-RV; Tue, 08 Jun 2021 12:35:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138458.256331; Tue, 08 Jun 2021 12:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxb-0000MK-MN; Tue, 08 Jun 2021 12:35:59 +0000
Received: by outflank-mailman (input) for mailman id 138458;
 Tue, 08 Jun 2021 12:35:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxa-0007Ok-8j
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:35:58 +0000
Received: from mx.upb.ro (unknown [141.85.13.231])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2fc4d4b7-96d5-4a35-8117-8c13f80885ce;
 Tue, 08 Jun 2021 12:35:43 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id E9E40B560198;
 Tue,  8 Jun 2021 15:35:42 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id BwKakGLi3CjN; Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 81FB0B56019C;
 Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id h32ba2wsXpzQ; Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 1F3A5B56019A;
 Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fc4d4b7-96d5-4a35-8117-8c13f80885ce
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 4/5] tools/libs/gnttab: Fix PAGE_SIZE redefinition error
Date: Tue,  8 Jun 2021 15:35:28 +0300
Message-Id: <84d03c4595428e4ff857dcacc72f6b9c04476849.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the
/usr/include/limits.h header) then gcc will trigger a redefinition error
because of -Werror. This patch replaces usage of PAGE_* macros with
XC_PAGE_* macros in order to avoid confusion between control domain page
granularity (PAGE_* definitions) and guest domain page granularity.

The exception is in osdep_xenforeignmemory_map() where we need the system
page size to check whether the PFN array should be allocated with mmap()
or with dynamic allocation.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/libs/gnttab/freebsd.c | 28 +++++++++++++---------------
 tools/libs/gnttab/linux.c   | 28 +++++++++++++---------------
 tools/libs/gnttab/netbsd.c  | 23 ++++++++++-------------
 3 files changed, 36 insertions(+), 43 deletions(-)

diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
index 768af701c6..e42ac3fbf3 100644
--- a/tools/libs/gnttab/freebsd.c
+++ b/tools/libs/gnttab/freebsd.c
@@ -30,14 +30,11 @@
 
 #include <xen/sys/gntdev.h>
 
+#include <xenctrl.h>
 #include <xen-tools/libs.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/dev/xen/gntdev"
 
 int osdep_gnttab_open(xengnttab_handle *xgt)
@@ -77,10 +74,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     int domids_stride;
     unsigned int refs_size = ROUNDUP(count *
                                      sizeof(struct ioctl_gntdev_grant_ref),
-                                     PAGE_SHIFT);
+                                     XC_PAGE_SHIFT);
+    int os_page_size = getpagesize();
 
     domids_stride = (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN) ? 0 : 1;
-    if ( refs_size <= PAGE_SIZE )
+    if ( refs_size <= os_page_size )
         map.refs = malloc(refs_size);
     else
     {
@@ -107,7 +105,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         goto out;
     }
 
-    addr = mmap(NULL, PAGE_SIZE * count, prot, MAP_SHARED, fd,
+    addr = mmap(NULL, XC_PAGE_SIZE * count, prot, MAP_SHARED, fd,
                 map.index);
     if ( addr != MAP_FAILED )
     {
@@ -116,7 +114,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
 
         notify.index = map.index;
         notify.action = 0;
-        if ( notify_offset < PAGE_SIZE * count )
+        if ( notify_offset < XC_PAGE_SIZE * count )
         {
             notify.index += notify_offset;
             notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -131,7 +129,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         if ( rv )
         {
             GTERROR(xgt->logger, "ioctl SET_UNMAP_NOTIFY failed");
-            munmap(addr, count * PAGE_SIZE);
+            munmap(addr, count * XC_PAGE_SIZE);
             addr = MAP_FAILED;
         }
     }
@@ -150,7 +148,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  out:
-    if ( refs_size > PAGE_SIZE )
+    if ( refs_size > os_page_size )
         munmap(map.refs, refs_size);
     else
         free(map.refs);
@@ -189,7 +187,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    if ( (rc = munmap(start_address, count * PAGE_SIZE)) )
+    if ( (rc = munmap(start_address, count * XC_PAGE_SIZE)) )
         return rc;
 
     /* Finally, unmap the driver slots used to store the grant information. */
@@ -256,7 +254,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         goto out;
     }
 
-    area = mmap(NULL, count * PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+    area = mmap(NULL, count * XC_PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
                 fd, gref_info.index);
 
     if ( area == MAP_FAILED )
@@ -268,7 +266,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     notify.index = gref_info.index;
     notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         notify.index += notify_offset;
         notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -283,7 +281,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     if ( err )
     {
         GSERROR(xgs->logger, "ioctl SET_UNMAP_NOTIFY failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = NULL;
     }
 
@@ -306,7 +304,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
diff --git a/tools/libs/gnttab/linux.c b/tools/libs/gnttab/linux.c
index 74331a4c7b..5628fd5719 100644
--- a/tools/libs/gnttab/linux.c
+++ b/tools/libs/gnttab/linux.c
@@ -32,14 +32,11 @@
 #include <xen/sys/gntdev.h>
 #include <xen/sys/gntalloc.h>
 
+#include <xenctrl.h>
 #include <xen-tools/libs.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/dev/xen/"
 
 #ifndef O_CLOEXEC
@@ -92,6 +89,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     int fd = xgt->fd;
     struct ioctl_gntdev_map_grant_ref *map;
     unsigned int map_size = sizeof(*map) + (count - 1) * sizeof(map->refs[0]);
+    int os_page_size = sysconf(_SC_PAGESIZE);
     void *addr = NULL;
     int domids_stride = 1;
     int i;
@@ -99,11 +97,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     if (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN)
         domids_stride = 0;
 
-    if ( map_size <= PAGE_SIZE )
+    if ( map_size <= os_page_size )
         map = alloca(map_size);
     else
     {
-        map_size = ROUNDUP(map_size, PAGE_SHIFT);
+        map_size = ROUNDUP(map_size, XC_PAGE_SHIFT);
         map = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0);
         if ( map == MAP_FAILED )
@@ -127,7 +125,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  retry:
-    addr = mmap(NULL, PAGE_SIZE * count, prot, MAP_SHARED, fd,
+    addr = mmap(NULL, XC_PAGE_SIZE * count, prot, MAP_SHARED, fd,
                 map->index);
 
     if (addr == MAP_FAILED && errno == EAGAIN)
@@ -152,7 +150,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         struct ioctl_gntdev_unmap_notify notify;
         notify.index = map->index;
         notify.action = 0;
-        if (notify_offset < PAGE_SIZE * count) {
+        if (notify_offset < XC_PAGE_SIZE * count) {
             notify.index += notify_offset;
             notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
         }
@@ -164,7 +162,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
             rv = ioctl(fd, IOCTL_GNTDEV_SET_UNMAP_NOTIFY, &notify);
         if (rv) {
             GTERROR(xgt->logger, "ioctl SET_UNMAP_NOTIFY failed");
-            munmap(addr, count * PAGE_SIZE);
+            munmap(addr, count * XC_PAGE_SIZE);
             addr = MAP_FAILED;
         }
     }
@@ -184,7 +182,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  out:
-    if ( map_size > PAGE_SIZE )
+    if ( map_size > os_page_size )
         munmap(map, map_size);
 
     return addr;
@@ -220,7 +218,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    if ( (rc = munmap(start_address, count * PAGE_SIZE)) )
+    if ( (rc = munmap(start_address, count * XC_PAGE_SIZE)) )
         return rc;
 
     /* Finally, unmap the driver slots used to store the grant information. */
@@ -466,7 +464,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         goto out;
     }
 
-    area = mmap(NULL, count * PAGE_SIZE, PROT_READ | PROT_WRITE,
+    area = mmap(NULL, count * XC_PAGE_SIZE, PROT_READ | PROT_WRITE,
         MAP_SHARED, fd, gref_info->index);
 
     if (area == MAP_FAILED) {
@@ -477,7 +475,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     notify.index = gref_info->index;
     notify.action = 0;
-    if (notify_offset < PAGE_SIZE * count) {
+    if (notify_offset < XC_PAGE_SIZE * count) {
         notify.index += notify_offset;
         notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
     }
@@ -489,7 +487,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         err = ioctl(fd, IOCTL_GNTALLOC_SET_UNMAP_NOTIFY, &notify);
     if (err) {
         GSERROR(xgs->logger, "ioctl SET_UNMAP_NOTIFY failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = NULL;
     }
 
@@ -510,7 +508,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
diff --git a/tools/libs/gnttab/netbsd.c b/tools/libs/gnttab/netbsd.c
index f8d7c356eb..a4ad624b54 100644
--- a/tools/libs/gnttab/netbsd.c
+++ b/tools/libs/gnttab/netbsd.c
@@ -28,15 +28,12 @@
 #include <sys/ioctl.h>
 #include <sys/mman.h>
 
+#include <xenctrl.h>
 #include <xen/xen.h>
 #include <xen/xenio.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/kern/xen/privcmd"
 
 int osdep_gnttab_open(xengnttab_handle *xgt)
@@ -87,19 +84,19 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
     map.count = count;
-    addr = mmap(NULL, count * PAGE_SIZE,
+    addr = mmap(NULL, count * XC_PAGE_SIZE,
                 prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( map.va == MAP_FAILED )
     {
         GTERROR(xgt->logger, "osdep_gnttab_grant_map: mmap failed");
-        munmap((void *)map.va, count * PAGE_SIZE);
+        munmap((void *)map.va, count * XC_PAGE_SIZE);
         addr = MAP_FAILED;
     }
     map.va = addr;
 
     map.notify.offset = 0;
     map.notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         map.notify.offset = notify_offset;
         map.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -115,7 +112,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     {
         GTERROR(xgt->logger,
             "ioctl IOCTL_GNTDEV_MMAP_GRANT_REF failed: %d", rv);
-        munmap(addr, count * PAGE_SIZE);
+        munmap(addr, count * XC_PAGE_SIZE);
         addr = MAP_FAILED;
     }
 
@@ -136,7 +133,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    rc = munmap(start_address, count * PAGE_SIZE);
+    rc = munmap(start_address, count * XC_PAGE_SIZE);
 
     return rc;
 }
@@ -187,7 +184,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     alloc.domid = domid;
     alloc.flags = writable ? GNTDEV_ALLOC_FLAG_WRITABLE : 0;
     alloc.count = count;
-    area = mmap(NULL, count * PAGE_SIZE,
+    area = mmap(NULL, count * XC_PAGE_SIZE,
                 PROT_READ | PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
 
     if ( area =3D=3D MAP_FAILED )
@@ -200,7 +197,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     alloc.notify.offset = 0;
     alloc.notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         alloc.notify.offset = notify_offset;
         alloc.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -215,7 +212,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     if ( err )
     {
         GSERROR(xgs->logger, "IOCTL_GNTDEV_ALLOC_GRANT_REF failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = MAP_FAILED;
         goto out;
     }
@@ -230,7 +227,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:36:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138462.256343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxh-00010L-4R; Tue, 08 Jun 2021 12:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138462.256343; Tue, 08 Jun 2021 12:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaxh-00010A-0N; Tue, 08 Jun 2021 12:36:05 +0000
Received: by outflank-mailman (input) for mailman id 138462;
 Tue, 08 Jun 2021 12:36:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaxf-0007Ok-8w
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:36:03 +0000
Received: from mx.upb.ro (unknown [141.85.13.201])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec443ab5-320a-4712-918f-847d522ef42d;
 Tue, 08 Jun 2021 12:35:44 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 144CDB56019A;
 Tue,  8 Jun 2021 15:35:43 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id C8chw8VKlSZH; Tue,  8 Jun 2021 15:35:40 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 56AC1B560192;
 Tue,  8 Jun 2021 15:35:40 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id hIA9l4veBTRc; Tue,  8 Jun 2021 15:35:40 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 819B0B56019B;
 Tue,  8 Jun 2021 15:35:39 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec443ab5-320a-4712-918f-847d522ef42d
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v4 5/5] tools/ocaml: Fix redefinition errors
Date: Tue,  8 Jun 2021 15:35:29 +0300
Message-Id: <fc242d15b700cd05e8fd0c6c68ce88f736baee90.1623155575.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror. This
patch replaces the usage of PAGE_* macros with XC_PAGE_* macros in order to avoid
confusion between control domain page granularity (PAGE_* definitions) and
guest domain page granularity (which is what we are dealing with here).

The same issue applies to redefinitions of the Val_none and Some_val macros,
which may already be defined in the OCaml system headers (e.g.
/usr/lib/ocaml/caml/mlvalues.h).

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Tested-by: Dario Faggioli <dfaggioli@suse.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c            | 10 ++++------
 tools/ocaml/libs/xentoollog/xentoollog_stubs.c |  4 ++++
 tools/ocaml/libs/xl/xenlight_stubs.c           |  4 ++++
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index d05d7bb30e..f9e33e599a 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -36,14 +36,12 @@
 
 #include "mmap_stubs.h"
 
-#define PAGE_SHIFT		12
-#define PAGE_SIZE               (1UL << PAGE_SHIFT)
-#define PAGE_MASK               (~(PAGE_SIZE-1))
-
 #define _H(__h) ((xc_interface *)(__h))
 #define _D(__d) ((uint32_t)Int_val(__d))
 
+#ifndef Val_none
 #define Val_none (Val_int(0))
+#endif
 
 #define string_of_option_array(array, index) \
 	((Field(array, index) == Val_none) ? NULL : String_val(Field(Field(array, index), 0)))
@@ -818,7 +816,7 @@ CAMLprim value stub_xc_domain_memory_increase_reservation(value xch,
 	CAMLparam3(xch, domid, mem_kb);
 	int retval;
 
-	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (PAGE_SHIFT - 10);
+	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (XC_PAGE_SHIFT - 10);
 
 	uint32_t c_domid = _D(domid);
 	caml_enter_blocking_section();
@@ -924,7 +922,7 @@ CAMLprim value stub_pages_to_kib(value pages)
 {
 	CAMLparam1(pages);
 
-	CAMLreturn(caml_copy_int64(Int64_val(pages) << (PAGE_SHIFT - 10)));
+	CAMLreturn(caml_copy_int64(Int64_val(pages) << (XC_PAGE_SHIFT - 10)));
 }
 
 
diff --git a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c b/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
index bf64b211c2..e4306a0c2f 100644
--- a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
+++ b/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
@@ -53,8 +53,12 @@ static char * dup_String_val(value s)
 #include "_xtl_levels.inc"
 
 /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
+#ifndef Val_none
 #define Val_none Val_int(0)
+#endif
+#ifndef Some_val
 #define Some_val(v) Field(v,0)
+#endif
 
 static value Val_some(value v)
 {
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..45b8af61c7 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -227,8 +227,12 @@ static value Val_string_list(libxl_string_list *c_val)
 }
 
 /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
+#ifndef Val_none
 #define Val_none Val_int(0)
+#endif
+#ifndef Some_val
 #define Some_val(v) Field(v,0)
+#endif
 
 static value Val_some(value v)
 {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:36:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:36:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138480.256355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqayN-0002S8-Hr; Tue, 08 Jun 2021 12:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138480.256355; Tue, 08 Jun 2021 12:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqayN-0002S1-Dc; Tue, 08 Jun 2021 12:36:47 +0000
Received: by outflank-mailman (input) for mailman id 138480;
 Tue, 08 Jun 2021 12:36:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqayL-0002RV-W3; Tue, 08 Jun 2021 12:36:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqayL-0007lF-Rl; Tue, 08 Jun 2021 12:36:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqayL-0006ql-Jl; Tue, 08 Jun 2021 12:36:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqayL-00058B-JD; Tue, 08 Jun 2021 12:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=lMQcct3BvhNlld+jmxVIoh71curYhSSe7Zjl2rLF5k0=; b=aIuGyZXrKSg569vmfUxn/AY7kC
	apzOawFoBPeBjshHa7M05G0vwoHxT3BAeprR8bXTaxcIIryL72pPBiUTCQ0XfQp3YiHZk/3ETMHDq
	zG+InOkRzeAuxiOxsDYu2+ylhLQvJK5Go3b6nCoYxOJenLGcSeoyWuQ2TJvDAINEqlpo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow
Message-Id: <E1lqayL-00058B-JD@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 12:36:45 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160811/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
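[Editor's note: on the QMP wire, the replacement command takes the same shape as the removed one; this exchange is a sketch in QEMU's documentation arrow notation, with the response list abbreviated and field values illustrative only.]

```
-> { "execute": "qmp_capabilities" }
<- { "return": {} }
-> { "execute": "query-cpus-fast" }
<- { "return": [ { "cpu-index": 0, "thread-id": 25627, "target": "x86_64", ... } ] }
```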


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore --summary-out=tmp/162543.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow guest-saverestore
Searching for failure / basis pass:
 162527 fail [host=albana1] / 160125 [host=pinot0] 160119 [host=fiano1] 160113 [host=fiano0] 160104 [host=chardonnay1] 160097 ok.
Failure / basis pass flights: 162527 / 160097
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#2615a5e433aeb812c300d3a48e1a88e1303e2339-a35947f15c0ee695eba3c55248ec8ac3e4e23cca git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#b4011741e6b39a8fd0ed5aded96c16c45ead5888-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 43527 nodes in revision graph
Searching for test results:
 160091 [host=albana0]
 160097 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160104 [host=chardonnay1]
 160113 [host=fiano0]
 160119 [host=fiano1]
 160125 [host=pinot0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160778 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160780 fail irrelevant
 160781 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160782 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160784 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1db136a29ce8594b693938ab8e788d8bcef54770 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160786 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 24e13a4dc1eb1630eceffc7ab334145d902e763d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160787 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160789 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160791 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160793 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160795 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160797 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160779 fail irrelevant
 160800 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160802 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160804 fail irrelevant
 160805 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160806 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160808 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160810 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160811 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162536 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 162543 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160097 (pass), for basis pass
 Result found: flight 162527 (fail), for basis failure
 Repro found: flight 162536 (pass), for basis pass
 Repro found: flight 162543 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160800 (pass), for last pass
 Result found: flight 160805 (fail), for first failure
 Repro found: flight 160806 (pass), for last pass
 Repro found: flight 160808 (fail), for first failure
 Repro found: flight 160810 (pass), for last pass
 Repro found: flight 160811 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160811/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.352496 to fit
pnmtopng: 197 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162543: tolerable ALL FAIL

flight 162543 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162543/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:37:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:37:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138487.256370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaz3-0003Cx-4w; Tue, 08 Jun 2021 12:37:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138487.256370; Tue, 08 Jun 2021 12:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqaz3-0003Co-1W; Tue, 08 Jun 2021 12:37:29 +0000
Received: by outflank-mailman (input) for mailman id 138487;
 Tue, 08 Jun 2021 12:37:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqaz2-0003Cd-4b
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:37:28 +0000
Received: from mx.upb.ro (unknown [141.85.13.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92a664cf-c692-4b73-9b62-066090222f5d;
 Tue, 08 Jun 2021 12:37:27 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 7F514B56018F;
 Tue,  8 Jun 2021 15:37:26 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 9WbonbmMFQbw; Tue,  8 Jun 2021 15:37:24 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 5F02EB560197;
 Tue,  8 Jun 2021 15:37:24 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id smpOXGdvXI_i; Tue,  8 Jun 2021 15:37:24 +0300 (EEST)
Received: from [192.168.1.35] (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 02BFAB560196;
 Tue,  8 Jun 2021 15:37:23 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92a664cf-c692-4b73-9b62-066090222f5d
X-Virus-Scanned: amavisd-new at upb.ro
Subject: Re: [PATCH v3 3/5] tools/libs/foreignmemory: Fix PAGE_SIZE
 redefinition error
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <2040286fc39b7e1d46376a8e75ac59d8d3be5aff.1620633386.git.costin.lupu@cs.pub.ro>
 <690806fb-e6e2-f61f-d7d6-a17efa6329d9@xen.org>
From: Costin Lupu <costin.lupu@cs.pub.ro>
Message-ID: <153d38e4-b5fe-3530-138e-ad116a7c4c4c@cs.pub.ro>
Date: Tue, 8 Jun 2021 15:37:23 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <690806fb-e6e2-f61f-d7d6-a17efa6329d9@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On 5/17/21 9:12 PM, Julien Grall wrote:
> Hi Costin,
>
> On 10/05/2021 09:35, Costin Lupu wrote:
>> @@ -168,7 +168,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
>>      size_t i;
>>      int rc;
>>
>> -    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED,
>> +    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED,
>>                 fd, 0);
>>      if ( addr == MAP_FAILED )
>>          return NULL;
>> @@ -198,9 +198,10 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
>>          */
>>         privcmd_mmapbatch_t ioctlx;
>>         xen_pfn_t *pfn;
>> -        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), PAGE_SHIFT);
>> +        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT);
>> +        int os_page_size = getpagesize();
>
> Hmmm... Sorry I only realized now that the manpage suggests that
> getpagesize() is legacy:
>
>     SVr4, 4.4BSD, SUSv2.  In SUSv2 the getpagesize() call is labeled
> LEGACY, and in POSIX.1-2001 it has been dropped; HP-UX does not have
> this call.
>
> And then:
>
>   Portable applications should employ sysconf(_SC_PAGESIZE) instead of
> getpagesize():
>
>         #include <unistd.h>
>         long sz = sysconf(_SC_PAGESIZE);
>
> As this is only used by Linux, it is not clear to me whether this is
> important. Ian, what do you think?
>

I think it would be safer to follow the man page indication. I've just
sent a v4.

>> -        if ( pfn_arr_size <= PAGE_SIZE )
>> +        if ( pfn_arr_size <= os_page_size )
>
> Your commit message suggests we are only s/PAGE_SHIFT/XC_PAGE_SHIFT/ but
> this is using getpagesize() instead. I agree it should be using the OS
> size. However, this should be clarified in the commit message.
>
>=20

Done.

> The rest of the patch looks fine to me.

Thanks,
Costin


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 12:37:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 12:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138490.256382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqazV-0003oi-DF; Tue, 08 Jun 2021 12:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138490.256382; Tue, 08 Jun 2021 12:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqazV-0003ob-A8; Tue, 08 Jun 2021 12:37:57 +0000
Received: by outflank-mailman (input) for mailman id 138490;
 Tue, 08 Jun 2021 12:37:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GltA=LC=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lqazT-0003oB-P2
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 12:37:55 +0000
Received: from mx.upb.ro (unknown [141.85.13.221])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff5b33d9-695c-4455-b6d7-e205e78257cc;
 Tue, 08 Jun 2021 12:37:55 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 44D03B560197;
 Tue,  8 Jun 2021 15:37:54 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id SbO3_Wf98SWC; Tue,  8 Jun 2021 15:37:52 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 197E5B560192;
 Tue,  8 Jun 2021 15:37:52 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id BoNKilAcrpRR; Tue,  8 Jun 2021 15:37:52 +0300 (EEST)
Received: from [192.168.1.35] (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id AB7A9B56018F;
 Tue,  8 Jun 2021 15:37:51 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff5b33d9-695c-4455-b6d7-e205e78257cc
X-Virus-Scanned: amavisd-new at upb.ro
Subject: Re: [PATCH v3 4/5] tools/libs/gnttab: Fix PAGE_SIZE redefinition
 error
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <b1e87eb24dfde3b1f11c5a14a4298531b4a308ad.1620633386.git.costin.lupu@cs.pub.ro>
 <5603464e-2ef5-9358-d039-cfb1f93340d3@xen.org>
From: Costin Lupu <costin.lupu@cs.pub.ro>
Message-ID: <61c4ad59-641a-f169-180b-7aeac1fc4a0b@cs.pub.ro>
Date: Tue, 8 Jun 2021 15:37:51 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <5603464e-2ef5-9358-d039-cfb1f93340d3@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On 5/17/21 9:16 PM, Julien Grall wrote:
> Hi Costin,
>
> On 10/05/2021 09:35, Costin Lupu wrote:
>> If PAGE_SIZE is already defined in the system (e.g. in
>> /usr/include/limits.h
>> header) then gcc will trigger a redefinition error because of -Werror.
>> This
>> patch replaces usage of PAGE_* macros with XC_PAGE_* macros in order
>> to avoid
>> confusion between control domain page granularity (PAGE_* definitions)
>> and
>> guest domain page granularity (which is what we are dealing with here).
>>
>> Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
>> ---
>>  tools/libs/gnttab/freebsd.c | 28 +++++++++++++---------------
>>  tools/libs/gnttab/linux.c   | 28 +++++++++++++---------------
>>  tools/libs/gnttab/netbsd.c  | 23 ++++++++++-------------
>>  3 files changed, 36 insertions(+), 43 deletions(-)
>>
>> diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
>> index 768af701c6..e42ac3fbf3 100644
>> --- a/tools/libs/gnttab/freebsd.c
>> +++ b/tools/libs/gnttab/freebsd.c
>> @@ -30,14 +30,11 @@
>>
>>  #include <xen/sys/gntdev.h>
>>
>> +#include <xenctrl.h>
>>  #include <xen-tools/libs.h>
>>
>>  #include "private.h"
>>
>> -#define PAGE_SHIFT           12
>> -#define PAGE_SIZE            (1UL << PAGE_SHIFT)
>> -#define PAGE_MASK            (~(PAGE_SIZE-1))
>> -
>>  #define DEVXEN "/dev/xen/gntdev"
>>
>>  int osdep_gnttab_open(xengnttab_handle *xgt)
>> @@ -77,10 +74,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
>>      int domids_stride;
>>      unsigned int refs_size = ROUNDUP(count *
>>                                       sizeof(struct ioctl_gntdev_grant_ref),
>> -                                     PAGE_SHIFT);
>> +                                     XC_PAGE_SHIFT);
>> +    int os_page_size = getpagesize();
>
> Same remark as for patch #4. This at least want to be explained in the
> commit message.

Done.


Thanks,
Costin


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 13:34:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 13:34:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138517.256405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqbsB-0001Mv-TM; Tue, 08 Jun 2021 13:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138517.256405; Tue, 08 Jun 2021 13:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqbsB-0001Mo-QS; Tue, 08 Jun 2021 13:34:27 +0000
Received: by outflank-mailman (input) for mailman id 138517;
 Tue, 08 Jun 2021 13:34:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JRSi=LC=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1lqbs9-0001Mi-KI
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 13:34:25 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d00a3cd0-2dd8-483a-989b-281d37f7492e;
 Tue, 08 Jun 2021 13:34:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d00a3cd0-2dd8-483a-989b-281d37f7492e
X-IronPort-AV: E=Sophos;i="5.83,258,1616472000"; 
   d="scan'208,217";a="45623121"
From: Christian Lindig <christian.lindig@citrix.com>
To: Costin Lupu <costin.lupu@cs.pub.ro>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 0/5] Fix redefinition errors for toolstack libs
Thread-Topic: [PATCH v4 0/5] Fix redefinition errors for toolstack libs
Thread-Index: AQHXXGLhcoHJqYzC7UuAEWyaYS0iZ6sKHS4A
Date: Tue, 8 Jun 2021 13:34:16 +0000
Message-ID: <09A8A6CD-281F-4D67-9228-3A7F3AAB6BC0@citrix.com>
References: <cover.1623155575.git.costin.lupu@cs.pub.ro>
In-Reply-To: <cover.1623155575.git.costin.lupu@cs.pub.ro>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 67ab263d-4ab9-42ee-b75a-08d92a821ce3
x-ms-traffictypediagnostic: CO1PR03MB5844:
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
	boundary="_000_09A8A6CD281F4D6792283A7F3AAB6BC0citrixcom_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MW4PR03MB6380.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 67ab263d-4ab9-42ee-b75a-08d92a821ce3
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 Jun 2021 13:34:16.6992
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 1Q9/xVWf6mxGySaAhaUljMJ6dvsaV+77gRGe4EaRu4Xt7tJUxcUZyvwrIHvt98v8TWh3h54UBACNzDICGr9lrvHaetC2yRd4z19/j71KTJs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5844
X-OriginatorOrg: citrix.com

--_000_09A8A6CD281F4D6792283A7F3AAB6BC0citrixcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable



On 8 Jun 2021, at 13:35, Costin Lupu <costin.lupu@cs.pub.ro> wrote:

For replication I used gcc 10.3 on an Alpine system. In order to replicate the
redefinition error for PAGE_SIZE one should install the 'fortify-headers'
package, which will change the chain of included headers by indirectly including
/usr/include/limits.h, where PAGE_SIZE and PATH_MAX are defined.
[..]
tools/ocaml/libs/xc/xenctrl_stubs.c           | 10 +++----
.../ocaml/libs/xentoollog/xentoollog_stubs.c  |  4 +++
tools/ocaml/libs/xl/xenlight_stubs.c          |  4 +++

Acked-by: Christian Lindig <christian.lindig@citrix.com>


--_000_09A8A6CD281F4D6792283A7F3AAB6BC0citrixcom_--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 14:24:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 14:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138545.256426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqcer-0006N6-VE; Tue, 08 Jun 2021 14:24:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138545.256426; Tue, 08 Jun 2021 14:24:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqcer-0006Mz-QE; Tue, 08 Jun 2021 14:24:45 +0000
Received: by outflank-mailman (input) for mailman id 138545;
 Tue, 08 Jun 2021 14:24:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqceq-0006Mp-H0; Tue, 08 Jun 2021 14:24:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqceq-0001MR-BH; Tue, 08 Jun 2021 14:24:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqcep-0003d4-Uw; Tue, 08 Jun 2021 14:24:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqcep-0001z2-UQ; Tue, 08 Jun 2021 14:24:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d/DgDbIxMsDuIAi6unMwzYdeg6rH1W8zJHcSjn0r1aQ=; b=D06cwI2LtP5OHu/uR5/tH7x3et
	55/YICkqPJkpFDD6k/hmVtOdOkPettTas8FzGYqksiKoZUYEOBjigfDxbnl/1Vtq1OdSNXMJdWOu+
	97ANALOU0/xzRzNxg3cKPFLCUByzV2UK22s6qHFlrYIQFuYR5cKFqlHMDFB6YH0E9VMI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162533-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162533: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 14:24:43 +0000

flight 162533 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162533/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162475
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162475
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162475
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162475
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 15:14:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 15:14:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138559.256441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqdQP-0002mg-0Y; Tue, 08 Jun 2021 15:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138559.256441; Tue, 08 Jun 2021 15:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqdQO-0002mZ-TY; Tue, 08 Jun 2021 15:13:52 +0000
Received: by outflank-mailman (input) for mailman id 138559;
 Tue, 08 Jun 2021 15:13:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Z8W=LC=onlineschubla.de=paul@srs-us1.protection.inumbo.net>)
 id 1lqdQN-0002mT-3W
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 15:13:51 +0000
Received: from DEU01-FR2-obe.outbound.protection.outlook.com (unknown
 [40.107.135.128]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bc6a98e-1411-40c3-83b9-b55ae9497e28;
 Tue, 08 Jun 2021 15:13:49 +0000 (UTC)
Received: from FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:45::10)
 by FRYP281MB0376.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:43::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.9; Tue, 8 Jun
 2021 15:13:47 +0000
Received: from FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
 ([fe80::184f:c6ec:f202:bf2d]) by FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
 ([fe80::184f:c6ec:f202:bf2d%8]) with mapi id 15.20.4219.020; Tue, 8 Jun 2021
 15:13:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bc6a98e-1411-40c3-83b9-b55ae9497e28
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=onlineschubla.de; dmarc=pass action=none
 header.from=onlineschubla.de; dkim=pass header.d=onlineschubla.de; arc=none
From: Paul Leiber <paul@onlineschubla.de>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: AW: [BUG] Passed through PCI devices lost after Windows HVM DomU
 reboot
Thread-Topic: [BUG] Passed through PCI devices lost after Windows HVM DomU
 reboot
Thread-Index: Addb9FwKHMmb5HghTwunCUNmuZyBkwAOovgAABFHM2A=
Date: Tue, 8 Jun 2021 15:13:47 +0000
Message-ID:
 <FRYP281MB0582BC1EAE564E396C3316A6B0379@FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM>
References:
 <FRYP281MB05828EB0C49C963C7954578CB0389@FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM>
 <5f532ea4-4ff4-e163-9492-096d16a316e7@suse.com>
In-Reply-To: <5f532ea4-4ff4-e163-9492-096d16a316e7@suse.com>
Accept-Language: en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=onlineschubla.de;
x-originating-ip: [2003:d2:1f26:12f0:194f:1b:f8b8:323e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b8c99002-bb3f-4ff5-1e82-08d92a9003a8
x-ms-traffictypediagnostic: FRYP281MB0376:
x-microsoft-antispam-prvs:
 <FRYP281MB03760A81E691438827E26249B0379@FRYP281MB0376.DEUP281.PROD.OUTLOOK.COM>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(396003)(136003)(346002)(376002)(39830400003)(366004)(8676002)(7696005)(86362001)(6916009)(33656002)(9686003)(66476007)(66556008)(66946007)(66446008)(64756008)(76116006)(8936002)(122000001)(316002)(38100700002)(55016002)(4326008)(6506007)(53546011)(2906002)(52536014)(5660300002)(83380400001)(186003)(71200400001)(478600001)(21314003);DIR:OUT;SFP:1102;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: onlineschubla.de
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: FRYP281MB0582.DEUP281.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-Network-Message-Id: b8c99002-bb3f-4ff5-1e82-08d92a9003a8
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 Jun 2021 15:13:47.3985
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: bfc8b046-4d00-4e98-8679-43c06bdec9db
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: cKXEX++N4pMU4AiqCGn7dgZ/H7AUcILZ0wGdSGkFdIUrpE32ta8IJaXyEHugEhB/LlJ8jeV8YO5olmVbNxrJNw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: FRYP281MB0376

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, 8 June 2021 08:24
> 
> On 08.06.2021 01:44, Paul Leiber wrote:
> > After more testing, I have come to the following conclusion: It seems that
> every time I do a _reboot_ from within a Windows DomU, the PCI device
> does not get attached to the DomU. After DomU reboot, it is immediately
> available for attachment in the Dom0 when I check for it with "xl pci-
> assignable-list", and I can reattach it to the DomU with "xl pci-attach" without
> any major problems beside some annoying side effects (e. g. need to reapply
> settings).
> 
> A well-known problem on ...
> 
> > xl info:
> >
> > host                   : xxx
> > release                : 4.19.0-14-amd64
> > version                : #1 SMP Debian 4.19.171-2 (2021-01-30)
> > machine                : x86_64
> > nr_cpus                : 4
> > max_cpu_id             : 3
> > nr_nodes               : 1
> > cores_per_socket       : 4
> > threads_per_core       : 1
> > cpu_mhz                : 1992.100
> > hw_caps                :
> bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
> > virt_caps              : hvm hvm_directio
> > total_memory           : 32542
> > free_memory            : 20836
> > sharing_freed_memory   : 0
> > sharing_used_memory    : 0
> > outstanding_claims     : 0
> > free_cpus              : 0
> > xen_major              : 4
> > xen_minor              : 11
> > xen_extra              : .4
> > xen_version            : 4.11.4
> 
> ... this old Xen version, I believe. I don't recall when exactly it was
> fixed (and I don't know at all whether the fix was backported), but
> trying a recent version of Xen should get you past this. If a fully
> maintained version is still affected, a backport could be requested.

I switched to xen_version 4.14.2-pre (the standard package from Debian bullseye), result: the issue is gone. The PCI device is attached reliably to the DomU, even after reboot.

I had searched for information on this issue on the web or in Xen mailing lists, but I only found one superold bug report, therefore I didn't try out a more recent Xen version myself. It now turns out I should have...

Anyway, thank you for your superfast help!

Paul


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 15:55:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 15:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138570.256453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqe4Q-0006kS-2W; Tue, 08 Jun 2021 15:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138570.256453; Tue, 08 Jun 2021 15:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqe4P-0006kL-V8; Tue, 08 Jun 2021 15:55:13 +0000
Received: by outflank-mailman (input) for mailman id 138570;
 Tue, 08 Jun 2021 15:55:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqe4O-0006kB-MJ; Tue, 08 Jun 2021 15:55:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqe4O-0002rr-D1; Tue, 08 Jun 2021 15:55:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqe4O-0000Jn-4Q; Tue, 08 Jun 2021 15:55:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqe4O-0001B9-3w; Tue, 08 Jun 2021 15:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zhdjx57sNDF0uBJWe7V+to8uj4eHe8GlnUQIwxczfgU=; b=LbJ8iY7dSrTZ/2bPJKEPm27tCK
	7Au+Twm6fCCs8MeVe0QG9WUpC29M2NqDgeEKJt6kbPYHz4eoMkoNXSPcbSMeF8Dn5gSAgzNkHZGko
	pv3v/hlzRuoR2YdEUP6B0C9JfNsGZP4tsZjKbbxDCCu2pMzsajMkj0kZuHvVNjlMwWWI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162544: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aad7b5c11d51d57659978e04702ac970906894e8
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 15:55:12 +0000

flight 162544 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162544/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  aad7b5c11d51d57659978e04702ac970906894e8
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    6 days
Failing since        162370  2021-06-04 17:01:35 Z    3 days   26 attempts
Testing same since   162544  2021-06-08 12:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit aad7b5c11d51d57659978e04702ac970906894e8
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Tue Jun 1 11:28:03 2021 +0100

    tools/firmware/ovmf: Use OvmfXen platform file if it exists
    
    A platform introduced in EDK II named OvmfXen is now the one to use for
    Xen instead of OvmfX64. It comes with PVH support.
    
    Also, the Xen support in OvmfX64 is deprecated,
        "deprecation notice: *dynamic* multi-VMM (QEMU vs. Xen) support in OvmfPkg"
        https://edk2.groups.io/g/devel/message/75498
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit d21121685fac829c988e432407fb0e4ef9b19331
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jun 7 15:00:05 2021 +0200

    tools/libs/guest: fix save and restore of pv domains after 32-bit de-support
    
    Since 32-bit PV guests have been security de-supported when not running
    under PV-shim, the hypervisor will no longer be configured to support
    those domains by default when not being built as the PV-shim.
    
    Unfortunately libxenguest will fail saving or restoring a PV domain
    due to this restriction, as it tries to get the compat MFN list
    even for 64-bit guests.
    
    Fix that by obtaining the compat MFN list only for 32-bit PV guests.
    
    Fixes: 1a0f2fe2297d122a08fe ("SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

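The guard described in the commit above amounts to a guest-width check; a minimal sketch follows (illustrative Python with a hypothetical helper name, not libxenguest's actual C code):

```python
# Illustrative model of the fix above: only 32-bit PV guests need the
# compat MFN list; requesting it for 64-bit guests fails on hypervisors
# built without 32-bit PV support. (Hypothetical helper name, not the
# real libxenguest API.)

def needs_compat_mfn_list(guest_width_bytes):
    # Guest width is 4 bytes for 32-bit PV guests, 8 bytes for 64-bit.
    return guest_width_bytes == 4

assert needs_compat_mfn_list(4) is True
assert needs_compat_mfn_list(8) is False
```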
commit 69e1472d21cf7e5cf0795ef38b99d00de78a910e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:38:53 2021 +0100

    x86/cpuid: Drop special_features[]
    
    While the ! annotation is useful to indicate that something special is
    happening, an array of bits is not.  Drop it, to prevent mistakes.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 60fa12dbf1d4d2c4ffe1ef34b495b24aa7e41aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jun 7 13:25:09 2021 +0100

    x86/cpuid: Fix HLE and RTM handling (again)
    
    For reasons which are my fault, but I don't recall why, the
    FDP_EXCP_ONLY/NO_FPU_SEL adjustment uses the whole special_features[] array
    element, not the two relevant bits.
    
    HLE and RTM were recently added to the list of special features, causing them
    to be always set in guest view, irrespective of the toolstack's choice on the
    matter.
    
    Rewrite the logic to refer to the features specifically, rather than relying
    on the contents of the special_features[] array.
    
    Fixes: 8fe24090d9 ("x86/cpuid: Rework HLE and RTM handling")
    Reported-by: Edwin Török <edvin.torok@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c4beefcada0a406681dcfb6e89f6cbe4aa368c2d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 7 15:40:55 2021 +0200

    docs: release-technician-checklist: update to leaf tree version pinning
    
    Our releases look to flip-flop between keeping or discarding the date
    and title of the referenced qemu-trad commit. I think with the hash
    replaced by a tag, the commit's date and title would better also be
    purged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 89052b9fa24bf976924e40918fc9fa3b1b940e17
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri Mar 19 12:14:17 2021 +0000

    xen: credit2: fix per-entity load tracking when continuing running
    
    If we schedule, and the current vCPU continues to run, its statistical
    load is not properly updated, resulting in something like this, even if
    all the 8 vCPUs are 100% busy:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9996885 [w=256] load=35 (~0%)
    (XEN)     2: [0.1] flags=2 cpu=2 credit=9993725 [w=256] load=796 (~0%)
    (XEN)     3: [0.2] flags=2 cpu=1 credit=9995885 [w=256] load=883 (~0%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9998833 [w=256] load=487 (~0%)
    (XEN)     5: [0.4] flags=2 cpu=6 credit=9998942 [w=256] load=1595 (~0%)
    (XEN)     6: [0.5] flags=2 cpu=0 credit=9994669 [w=256] load=22 (~0%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9997706 [w=256] load=0 (~0%)
    (XEN)     8: [0.7] flags=2 cpu=3 credit=9992440 [w=256] load=0 (~0%)
    
    As we can see, the average load of the runqueue as a whole is, instead,
    computed properly.
    
    This issue would, in theory, potentially affect Credit2 load balancing
    logic. In practice, however, the problem only manifests (at least with
    these characteristics) when there is only 1 runqueue active in the
    cpupool, which also means there is no need to do any load-balancing.
    
    Hence its real impact is pretty much limited to wrong per-vCPU load
    percentages, when looking at the output of the 'r' debug-key.
    
    With this patch, the load is updated and displayed correctly:
    
    (XEN) Runqueue 0:
    (XEN) [...]
    (XEN)   aveload            = 2097152 (~800%)
    (XEN) [...]
    (XEN) Domain info:
    (XEN)   Domain: 0 w 256 c 0 v 8
    (XEN)     1: [0.0] flags=2 cpu=4 credit=9995584 [w=256] load=262144 (~100%)
    (XEN)     2: [0.1] flags=2 cpu=6 credit=9992992 [w=256] load=262144 (~100%)
    (XEN)     3: [0.2] flags=2 cpu=3 credit=9998918 [w=256] load=262118 (~99%)
    (XEN)     4: [0.3] flags=2 cpu=5 credit=9996867 [w=256] load=262144 (~100%)
    (XEN)     5: [0.4] flags=2 cpu=1 credit=9998912 [w=256] load=262144 (~100%)
    (XEN)     6: [0.5] flags=2 cpu=2 credit=9997842 [w=256] load=262144 (~100%)
    (XEN)     7: [0.6] flags=2 cpu=7 credit=9994623 [w=256] load=262144 (~100%)
    (XEN)     8: [0.7] flags=2 cpu=0 credit=9991815 [w=256] load=262144 (~100%)
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

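As a toy model of the issue (illustrative Python, not Xen's Credit2 code): if the per-unit load average is only updated when the unit is descheduled, a unit that keeps running never accumulates load, whereas updating it on every scheduling decision converges to 100%. The exponential moving average below is an assumption standing in for Credit2's real load-tracking arithmetic.

```python
# Toy model of per-entity load tracking (assumption: a simple
# exponential moving average stands in for Credit2's real load update).

def update_load(load, running, alpha=0.25):
    # Move the tracked load toward 1.0 while running, toward 0.0 while idle.
    target = 1.0 if running else 0.0
    return load + alpha * (target - load)

# Buggy behaviour: the unit keeps running, so its load is never
# updated and reads ~0% even though it is 100% busy.
buggy = 0.0

# Fixed behaviour: update the load on every scheduling decision.
fixed = 0.0
for _ in range(50):
    fixed = update_load(fixed, running=True)

assert buggy == 0.0
assert fixed > 0.99
```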
commit 07b0eb5d0ef0be154606aa46b5b4c5c59b158505
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Fri May 28 17:12:48 2021 +0200

    credit2: make sure we pick a runnable unit from the runq if there is one
    
    A !runnable unit (temporarily) present in the runq may cause us to
    stop scanning the runq itself too early. Of course, we don't run any
    non-runnable vCPUs, but we end the scan and fall back to picking
    the idle unit. In other words, this prevents us from finding and
    picking the actual unit that we're meant to start running (which
    might be further ahead in the runq).
    
    Depending on the vCPU pinning configuration, this may lead to such a
    unit being stuck in the runq for a long time, causing malfunctioning
    inside the guest.
    
    Fix this by checking runnable/non-runnable status up-front, in the runq
    scanning function.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Reported-by: Dion Kant <g.w.kant@hunenet.nl>
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

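The fix can be sketched as follows (simplified Python model of the runq scan; not the actual sched/credit2.c code):

```python
# Simplified model of the runq scan fix above: skip non-runnable units
# instead of ending the scan early and falling back to the idle unit.

def pick_next_unit(runq):
    for unit in runq:
        if unit["runnable"]:
            return unit["name"]
    return "idle"  # nothing runnable: fall back to the idle unit

# A temporarily !runnable unit sits ahead of the unit we should run.
runq = [{"name": "v1", "runnable": False},
        {"name": "v2", "runnable": True}]
assert pick_next_unit(runq) == "v2"
assert pick_next_unit([]) == "idle"
```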
commit 75f13e9b221e2c8603f15ee1d53318526cf56113
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:14 2021 +0200

    tools/libs/guest: make some definitions private to libxenguest
    
    There are some definitions which are used in libxenguest only now.
    Move them from libxenctrl over to libxenguest.
    
    Remove an unused macro.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 455790573d3bbad6d5a1bb7e9d28b6dd71075693
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:13 2021 +0200

    tools/libs: move xc_core* from libxenctrl to libxenguest
    
    The functionality in xc_core* should be part of libxenguest instead
    of libxenctrl. Users are already either in libxenguest, or in xl.
    There is one single exception: xc_core_arch_auto_translated_physmap()
    is being used by xc_domain_memory_mapping(), which is used by qemu.
    So leave the xc_core_arch_auto_translated_physmap() functionality in
    libxenctrl.
    
    This will make it easier to merge common functionality of xc_core*
    and xg_sr_save*.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bf1fc18901dfea05a69f661493b934c0db7d3503
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:12 2021 +0200

    tools/libs: move xc_resume.c to libxenguest
    
    The guest suspend functionality is already part of libxenguest. Move
    the resume functionality from libxenctrl to libxenguest, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit f183854facad996fe891c086c024bca7cbcdc1e4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:11 2021 +0200

    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
    
    Instead of open coding the mapping of the p2m list use the already
    existing xc_core_arch_map_p2m() call, especially as the current code
    does not support guests with the linear p2m map. It should be noted
    that this code is needed for colo/remus only.
    
    Switching to xc_core_arch_map_p2m() drops the need to bail out when the
    bitness of the tool stack and of the guest differ.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bd7a29c3d0b937ab542abea06ff1b575abe7247a
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:10 2021 +0200

    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
    
    The core of a PV Linux guest produced via "xl dump-core" is not usable,
    since from kernel 4.14 onwards only the linear p2m table is kept if Xen
    indicates it is supporting that. Unfortunately xc_core_arch_map_p2m()
    still supports only the 3-level p2m tree.
    
    Fix that by copying the functionality of map_p2m() from libxenguest to
    libxenctrl.
    
    Additionally the mapped p2m isn't of a fixed length now, so the
    interface to the mapping functions needs to be adapted. In order not to
    add even more parameters, expand struct domain_info_context and use a
    pointer to that as a parameter.
    
    Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Jun 4 08:02:09 2021 +0200

    tools/libs/guest: fix max_pfn setting in map_p2m()
    
    When setting the highest pfn used in the guest, don't subtract 1 from
    the value read from the shared_info data. The value read already is
    the correct pfn.
    
    Fixes: 91e204d37f449 ("libxc: try to find last used pfn when migrating")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

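The off-by-one can be illustrated as follows (hypothetical sketch with an invented helper name, not the real map_p2m() code):

```python
# The value read from shared_info is already the highest used pfn, so
# the p2m covers pfns 0..max_pfn inclusive. Subtracting 1 first (the
# bug) drops the guest's last frame. (Hypothetical helper, not the
# real libxenguest code.)

def p2m_size(max_pfn):
    return max_pfn + 1  # entries for pfns 0..max_pfn

buggy_size = (0xfffff - 1) + 1   # the erroneous "- 1" before the "+ 1"
assert p2m_size(0xfffff) == 0x100000
assert buggy_size == 0xfffff     # one frame short
```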
commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Thu May 6 13:38:02 2021 +0100

    SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
    
    The support status of 32-bit guests doesn't seem particularly useful.
    
    With it changed to fully unsupported outside of PV-shim, adjust the PV32
    Kconfig default accordingly.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 16:21:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 16:21:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138578.256468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqeTL-0001sw-SW; Tue, 08 Jun 2021 16:20:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138578.256468; Tue, 08 Jun 2021 16:20:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqeTL-0001sp-PV; Tue, 08 Jun 2021 16:20:59 +0000
Received: by outflank-mailman (input) for mailman id 138578;
 Tue, 08 Jun 2021 16:20:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AOFJ=LC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqeTK-0001sj-78
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 16:20:58 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f32f43c-1787-4252-b3d6-31beea1c021c;
 Tue, 08 Jun 2021 16:20:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f32f43c-1787-4252-b3d6-31beea1c021c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623169256;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=D27eqjDhlfxlV8ZkEkb2PBFocv0/RKZBPYjWwSqnb1A=;
  b=ZBdk2Ze0OIySzZ8Xg5h5ncG1m0WoI3b8JF8i+yYfpue5ui6N+/aWVFTR
   nJ/CfJN+I5WTnIaP4Ivra+vrVyjXnVk1Ad5djisINcSlcojj1HhaJ18aD
   +N4+n8yc1LhpuQQBjBk+w1QhH0edtaCl9lyePMYvpCOp/HPfvQkIkutco
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 45401988
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,258,1616472000"; 
   d="scan'208";a="45401988"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Sander Eikelenboom
	<linux@eikelenboom.it>
Subject: [PATCH] x86/cpuid: Half revert "x86/cpuid: Drop special_features[]"
Date: Tue, 8 Jun 2021 17:19:01 +0100
Message-ID: <20210608161901.1894-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

xen-cpuid does print out the list of special features, and this is helpful to
keep.

Fixes: ba6950fb070 ("x86/cpuid: Drop special_features[]")
Reported-by: Jan Beulich <JBeulich@suse.com>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Sander Eikelenboom <linux@eikelenboom.it>

Note - this deliberately doesn't insert ifdefary, because it is pointless.  It
adds to constructing/parsing time, and nothing in Xen can gain access to this
without an explicit introduction of INIT_SPECIAL_FEATURES again, which will be
obvious during code review.
---
 xen/tools/gen-cpuid.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index c6b5056a8d..b953648b65 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -362,6 +362,8 @@ def write_results(state):
 
 #define INIT_KNOWN_FEATURES { \\\n%s\n}
 
+#define INIT_SPECIAL_FEATURES { \\\n%s\n}
+
 #define INIT_PV_DEF_FEATURES { \\\n%s\n}
 
 #define INIT_PV_MAX_FEATURES { \\\n%s\n}
@@ -382,6 +384,7 @@ def write_results(state):
 """ % (state.nr_entries,
        next(featureset_to_uint32s(state.common_1d, 1)),
        format_uint32s(state, state.names.keys(), 4),
+       format_uint32s(state, state.raw['!'], 4),
        format_uint32s(state, state.pv_def, 4),
        format_uint32s(state, state.pv_max, 4),
        format_uint32s(state, state.hvm_shadow_def, 4),
-- 
2.11.0

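For context, the INIT_* macros emitted by gen-cpuid.py are lists of uint32 words with one bit per feature. A simplified stand-in for the packing step is sketched below (assumption: this is illustrative, not the actual featureset_to_uint32s()/format_uint32s() implementation):

```python
# Simplified stand-in for gen-cpuid.py's packing of a featureset (a set
# of feature bit numbers, e.g. the '!' special features) into uint32
# words, the form the INIT_*_FEATURES macros take.

def pack_featureset(bits, nr_entries):
    words = [0] * nr_entries
    for bit in bits:
        words[bit // 32] |= 1 << (bit % 32)
    return ["0x%08xU" % w for w in words]

# Bits 0 and 33 -> bit 0 of word 0 and bit 1 of word 1.
assert pack_featureset({0, 33}, 2) == ["0x00000001U", "0x00000002U"]
```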


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 16:23:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 16:23:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138586.256480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqeVx-0002ZU-DW; Tue, 08 Jun 2021 16:23:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138586.256480; Tue, 08 Jun 2021 16:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqeVx-0002ZN-AV; Tue, 08 Jun 2021 16:23:41 +0000
Received: by outflank-mailman (input) for mailman id 138586;
 Tue, 08 Jun 2021 16:23:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JFXD=LC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqeVv-0002ZH-Tn
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 16:23:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2085973b-47cb-4d3e-a11c-e9e642bfea34;
 Tue, 08 Jun 2021 16:23:38 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2053.outbound.protection.outlook.com [104.47.13.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-cTD_VXCUNpm92xOPM_CiYw-1; Tue, 08 Jun 2021 18:23:36 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6175.eurprd04.prod.outlook.com (2603:10a6:803:fb::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Tue, 8 Jun
 2021 16:23:34 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4195.030; Tue, 8 Jun 2021
 16:23:33 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0185.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1c::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4195.22 via Frontend Transport; Tue, 8 Jun 2021 16:23:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2085973b-47cb-4d3e-a11c-e9e642bfea34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623169417;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bDazx47XW88ugO/Etgz7NGpwbC9VXwnbfyVEV3s33LQ=;
	b=IMBiA58ScnGpW6jg9EdzH2pZaoPBfraQP3wpoF+bX6ws8Q+sAEoPY36Bb1rg0rg87X0Hpf
	OmR1o3Xys1YED8aPMJRV4z+KLmoCb/HnBXe21pD6aZvcE6fBs1kYCiGOjGlwOKANdC6dQI
	1rLH7o6zFIHNp4I1mFeMtoknRN3fG1I=
X-MC-Unique: cTD_VXCUNpm92xOPM_CiYw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Hq3hz41+4SU+iXMHKSaYSOIMoVjjhGi7QRFBxEpqt9hGDfc5P1lZiFOC3gwVDA/g0Mgeethbe2A6ktmdCoMg2bRWzeFhY5A10Mx1mWog3BBMARa2XifGOwVsRAzsPXgVLQZSRQUTkq9kAD45DeVmc4NsQKpN6uAwV/28goZnoThZ2Zl+GCbRcPHnW1eE52coPT6bTkS3v8Uzh4L918ZqsxoesVDOOILvyetsvxvxi1xLFYvHoW0wCoHipfT2VOd97K7/baAk8IVvnqnGHmEusU5AFiKybv++D528UJjbTGEXIVtfCteo8pOHfo80j+Lcvm6X9BvQLHuw6cK6QGTeUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bDazx47XW88ugO/Etgz7NGpwbC9VXwnbfyVEV3s33LQ=;
 b=folCj1Ua3I0NeTdiYjWZD1+sAwoChqrOTFgrgkcgI1SfFKhCIADH6kFlOgSC+YPmqgCOvUN//JxjbA5GWuAyrI7jCS7YUNF7TDlKeZoO//FcJmtCxc1GaL2TwKuYO8qV8CDoSMolD+enAWPc344zc7mKL9+L9BpeFJXv3EoyzQmaTBEUbhrJpu8yrzYE0eAbXO72uLh6yTT9uzuVOQ3iA+YGiB5DQiPy8gNkUtaEOGerSVeovZcjXHEEpSLWtMLEtfftEICrwTK6X14iBBVwz9Hx0hp6nXkAh1oqVuSF90fr33QLc3HSXpfPWwR3+HXyUce+hjuhL+eCGkdnN9B0lg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/cpuid: Half revert "x86/cpuid: Drop
 special_features[]"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Sander Eikelenboom <linux@eikelenboom.it>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210608161901.1894-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <045ed5c7-b1e0-a081-18ac-22ced4753fe8@suse.com>
Date: Tue, 8 Jun 2021 18:23:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210608161901.1894-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0185.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1c::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5bbbde03-3e55-4758-4ef5-08d92a99c2e6
X-MS-TrafficTypeDiagnostic: VI1PR04MB6175:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6175EAA9CF230086F5482F07B3379@VI1PR04MB6175.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1201;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3mHEsF+jhnUmJ+MgQV621fHTx8IENaEPOq4oZAmWNUblDxVFybFI6b9McCNdwwh/ayd5AAo/DjcPrvGc5+ykMnlIVtSTpCDeOuCtrDLITjE4FU8ENf355Q1EA/yffaEEItOEwKCWN0RBwnPVgCSSrtXM2MGXOP44H0EpyWpmkD0c3dtP3Bvr11bqwZCrmnps4anPiEBR/3D5LkkxfTuR/HBM0KkfzEazNL91mMB25FWakERel3rjPnm/ytAidzfVqnsGQYdkzHrDyh9gNGqBxQScSeDZjLaDK1d7nJMPJB48UFtTUKPAGzDZya0eqXYwnsPWC2bi+I1yZYlJ8xaaUNvLhuWglATYPj81WBestVwDK/scGhsmDxW+4BOcqfkq+QsXrQmnukd8Az01EKN9EdlJcWlP0nN54A5MvunxLaf9dt5rVKL+nvlFRkjeN2vllQubz1SL5vK6rJ7wOdH5wsN3YJ1+l5EyTQISGGAAFYP/h/vdv1gEB9KotqQCpE2NihvG5sViORi9aUPsCYx3YUD3EKy+4H2hB+pFmWTY0yMtVX/4GxMKN/OF0tc5mnQkRN7pEvyAzOqiKSRd0N7xfJAujUJbG/AnsTSghU6JXsBO5BG5Kl+NTouGQRW1tTV5ZYOv2Ww+Ar1LXoBpYk1lZvBkstbTFV93QMIHE/raQavDX5sjEzkCTUYFZheI5Bh8
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(366004)(136003)(346002)(39850400004)(396003)(5660300002)(4744005)(478600001)(38100700002)(2906002)(53546011)(6486002)(956004)(31686004)(86362001)(66946007)(66476007)(66556008)(31696002)(2616005)(16526019)(186003)(26005)(8676002)(54906003)(36756003)(8936002)(6916009)(316002)(16576012)(4326008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?dUVZZ0tOMDRTUXRNbzRKY0s1VktJRkxLamZtekt2VjMwZFk3UjhIUkhKYnFj?=
 =?utf-8?B?SEs2NkdqdkVQcGFheGdYaDVVdFZUbWVpVDFnNHYxQllnQ3JpS1RUampzbERo?=
 =?utf-8?B?c2tnV0ZuUWZrNzR3MUtkYUpnRVlXaHlTWTVIQzk3dHhRZktlTjk4TmZmekFs?=
 =?utf-8?B?NUQvL1ZyaXUxTGxQTFErbWltSkRYdVA4ZUpzSUh6WE5BVUkrcU9PWlBudzVh?=
 =?utf-8?B?Y2lqOFJPdGI4SSt5NmEzblFieUFaQ0RPZ3R6aWlwMzBCZHpDN1VqOHBIMVlK?=
 =?utf-8?B?RkN0b3Y3ZWlndlUrUkpkT0gxakk3VXdDaSt4ZUhpWTV4RVlncmsrOVBaVyt5?=
 =?utf-8?B?aGdUOUc2WWFWNDdJSFl1MVorNXZOK0NoZk1iU0MzMFdSbERER2UwWEtlaHRE?=
 =?utf-8?B?NDN5WVh3YWh1ZXFhV1VLT25yZ2dkWDlDY1NycHBsNGFBNTYydXZrOFI3ekhM?=
 =?utf-8?B?clQ4Y09zWW1DOTNyU0M0M0N5QjJUR01KNTE2Z01pbkg1MmpWVlltajBacElw?=
 =?utf-8?B?Q20vVXpCVmg5NTJpRGxJS3VWcklCSHV4NWtpQXBhbUM3SDNvRGJqNDlmSmNv?=
 =?utf-8?B?cEpFMkczRHR4M3p2WHRwb21SaDN3R21wQ3RWVTBJOHdpSmlERk80QTlERXc0?=
 =?utf-8?B?Qk52cnRlVWNHMWRDQ0N5SFcrcktOZTRYNWxwMzkwMnFncFFjKzVnWVZscWF0?=
 =?utf-8?B?amd2T2tSemcvY0lzVDVjYjJRUkxqeWdNVGlkTHUvWE45Z0FscHN2ZTJjeUZU?=
 =?utf-8?B?NElLaGVRSnNZc3pjYzIwTU5FZnJBc2dKdzJJYzF2ODR6RURkWkhKSVpuanZH?=
 =?utf-8?B?L09LK05NalZUYXBsSnBSUDZ1MDFxRXZRTlNlUmp4eEJxUFdnVyt5Y2x4YmNo?=
 =?utf-8?B?TnNxZHZNeEpDVWRCVjVsUmI3MEVhWi9NTWhsdTNyS1RodWJZNUpBTCt6clM3?=
 =?utf-8?B?dUpCdFk0ajBQU3FXZ0czU3lDRzFwYXhNMGZkQTJjYitBWnNzMUR1WVVTL3FU?=
 =?utf-8?B?bDFxZnNXTTRNa1lVVjhMaHdjYWcyRTVpUGlmS0NGejNFNFdpOUNGR3BCNW9n?=
 =?utf-8?B?Uzd3RW1TUEEvQ3JiU0VYWWlLYzB3V055SlJaMUhXNGtya0FNNXArZzd3NnFw?=
 =?utf-8?B?WTRaSUFQVTA0dlBBMUZnV3dPZXMvbm5sdUVlUmxoaTRIcWZEMk9Ra201WWZC?=
 =?utf-8?B?cDJwN0QyQk8xSGJJcXBVNWYxUzBFS3o1ZERpakVSWmxQS3EvYkp3QTIvY2Qz?=
 =?utf-8?B?UVRXeGVPQmJUdXpaWWRDSEhNamNHNHJVakhhTHB5MGF6ZXM1RTNhalYyTWFD?=
 =?utf-8?B?bjF2SFRpdDdZbmRKRUN2R2tadEg2TEZMdi81T3BzWk5kTW1CRWpMVUp6cjVs?=
 =?utf-8?B?Q2tsTVIyMDJsUm82Tlovc0podDFQR1VqUWhSTHFZakl6a0xXK3Rnb3JDZThn?=
 =?utf-8?B?THlqNy9NcG1TSUNDQU4yUGcrUWp4bTh1UmU2aU51SnRqYUxrZjhiUkZZZ0ha?=
 =?utf-8?B?d1ptUmlOZHRHV2JlNFlQZSt4dm10R0JwdVZ5Q1Z5TUZyOXdkVmxIUVhvLzNn?=
 =?utf-8?B?cytsWlltbXBrMkFUSWEvVXlTYVFUSEpkMFpVRnlrUlBzRnh1TkZzMTZDS24y?=
 =?utf-8?B?YWs2SkRpd0Ixb2xwb3JqZ3pPdUM0MGJCZHFnd1orVlFOSjd3NUdCV3ZsZUY1?=
 =?utf-8?B?blhrTE1FTWR5ZHNQVktVQXRHdVRidVVpc1pHSEJaN3VhVjF3Ly9mbEpqZGE5?=
 =?utf-8?Q?CdO9CcHAGhoHpFYK055MDsLRali00CmSefJQuVh?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5bbbde03-3e55-4758-4ef5-08d92a99c2e6
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jun 2021 16:23:33.8921
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qlTCTx7yNDcJJgbf836gwaGHOtghZNHTm6pOldQTYQ3b9eKAQ8lj6bTmVQ21SK/bdcC4NA159cxzyE8hxnhF8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6175

On 08.06.2021 18:19, Andrew Cooper wrote:
> xen-cpuid does print out the list of special features, and this is helpful to
> keep.
> 
> Fixes: ba6950fb070 ("x86/cpuid: Drop special_features[]")
> Reported-by: Jan Beulich <JBeulich@suse.com>
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 16:39:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 16:39:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138595.256496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqel6-0004EI-WD; Tue, 08 Jun 2021 16:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138595.256496; Tue, 08 Jun 2021 16:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqel6-0004EB-Sm; Tue, 08 Jun 2021 16:39:20 +0000
Received: by outflank-mailman (input) for mailman id 138595;
 Tue, 08 Jun 2021 16:39:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8i+T=LC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lqel5-0004E5-04
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 16:39:19 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 383e3c8d-ab33-4155-8045-9b7386656d9f;
 Tue, 08 Jun 2021 16:39:17 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx58GdBQZC
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 8 Jun 2021 18:39:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 383e3c8d-ab33-4155-8045-9b7386656d9f
ARC-Seal: i=1; a=rsa-sha256; t=1623170352; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Y1TtzdCnZx0FlzedX4LkEZ3woLvunLh5A6zdz7QvJYkJzmU48dlEb113MBO7tPZlW3
    spAUB/1s9wAuuEiUi8FZY3yr9JlgUGyfnJu4p3XieRyXr3VjvCgzIZDRaut6aWdd1FKG
    RAcc9vG5L4rYX0bHdJpKdNearLJk/e9iVjHnLKGCXEQ0/SP2XBXDRMXh0CN6k30HlYVi
    o1s49pUVrmA0xZGBxNjEISbpxnSL6qK7crrjbpXNyJMw2AREFwVWh6jbUV+ghoOex2ge
    NhFB4yVPw2CqFlbOMEgErc+dTE0oyqBUAsmbXYzfkUZ8/BiMg7EmvwUWCmVBwWNazXjy
    n8qw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623170352;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=qdrbwADjv5TdW5BT8sId8g8o7uI2aZvmSbxxv5A56/Q=;
    b=dyiE+6ShKsAyQMULOTVIIy8DjfYRO7OJogTprySVCbWGtSdz/AsMVAN8an3OVNsenS
    kq6Jj9ZfZmbKzMkjLSOYygth0q9e14+Ob0bpfbearH5lkKtXNO5umvPiaYj29yDFkj+q
    wYKWzlIxoVayNqohj0xT8g4zA7NNFzkBFY/T+2SL3eZ+Lr+7/k7OTyxnaDI9KW8cU18u
    jlZKL+xe3jPpR9gvgQjgBUBItFSm50Z+k34skMJ1W4oyOTZODBgQwhO4NKIH4R7j+LN7
    50jM+eLOHRCTVTMsw7PxrO+5FFhnMX+Ofi5flJ/6YYgvWkjdzc+sd6H53AVZqivBR6Uz
    WMyQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623170352;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=qdrbwADjv5TdW5BT8sId8g8o7uI2aZvmSbxxv5A56/Q=;
    b=OC4t5fB5KZVJCTd96H1Ns/6oi3AUNRrBOffWydvZCSMO9ZuW+kStwYJVqQQ3YsVPo7
    zL83JwoXsmkuDSg93M+Y2eIrbZtwI4jYUVFeM/yfM0mwjeKqg4qfLyj2LpucK6v354V6
    4UtNZEbAx3UMgc3v+6m78kl1vXJXFss2eRjCTCx+mOtgYiMjDNJ2UkjWyVFPRWP7u7i7
    6oPwHonb9BHhljRqWmAzthfqOSfANQ/uFf2GdXZ2OPw87PSBDH968t23UmTj02Rbmgr6
    qLVyeI4bc0z2C6swM1aDQ+6cLDhgRNHb9tumAMpXSqFGhc4CN01I5eLfpYPon7+M+Qt7
    ASwQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Tue, 8 Jun 2021 18:39:04 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit
 for xenstored
Message-ID: <20210608183833.023551f4.olaf@aepfle.de>
In-Reply-To: <20210608055839.10313-3-jgross@suse.com>
References: <20210608055839.10313-1-jgross@suse.com>
	<20210608055839.10313-3-jgross@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/797QuXKQLRhCN2SUlh_nu37";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/797QuXKQLRhCN2SUlh_nu37
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue,  8 Jun 2021 07:58:39 +0200,
Juergen Gross <jgross@suse.com> wrote:

> +#XENSTORED_MAX_N_DOMAINS=3D32768

This will break fillup.
Provide an empty variable instead, as is done for a few other variables
in that file.
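For illustration, a sysconfig-style fragment in the shape fillup expects (a sketch only; the `## Type`/`## Default` comment header and description text are assumptions, not taken from the actual file):

```shell
## Type:        integer
## Default:     32768
#
# Maximum number of domains xenstored is expected to serve.
# Keep the variable itself present but empty, so fillup can
# merge template updates with local changes.
XENSTORED_MAX_N_DOMAINS=
```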

Olaf

--Sig_/797QuXKQLRhCN2SUlh_nu37
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmC/nSgACgkQ86SN7mm1
DoAYIw//WfjJmJ8ealvucE2mkyYkOzk8sQrj+Rx5OTfSRVCJs1Bd3oBfBeq08352
jgEpljkLNFar1nlEvx2ILtW2IXt8C2tdSFTkJSJv4JZhICsib2ouasl29kccq+Q/
HZAAFBJCPx5eBW7ERv0jORqi7rw46H6H2HW6f7mFfrrkztKAK5oAqVmMMQeKlPNp
H8YJISBhqu8qpGwFvW0yVTJ3O8gyyqdgYTFi45sDnLlGeJ4eqtc5rpYxtOajIs3Y
KtY0FsAKMInkGMWzvsopU+FNivBDuX7w4yrYBJXTOs08obxH42QNRRpT1RjUdnNR
n7XyzGbpNBc3nWhz+czodGr3lHNLPNzVP7KWZmNQ2uZt9e4LEV+1F2DUpDrns+Gy
ZrzLBMezR+amN23nHHBl8wU8oGClCszS7VXLOqEEUxk5ujpCrGkpNzUHvkQqJ5tB
CxY2OM2/sbykkR0x8VAI+mGv6lBD0SNkC3zwalWWHTU7/ZPHS8nsQs0riSXWnZf/
JnTHAg+zfcGnOeRdZksDkKE1mw9XOwp96dEueerCLkm/NHf75WHeTNdjSUmL+NLF
3ySZj9SBgO71Kkf+yeEtDiigJhcNE8yJdQCS55c7wC9EP4j6h2fHYJlMdm0pC6Ms
DpL59P8LT4mObGF47u/7LewY8U1F9GnjGEa4YpF+0rpP4MRLNGY=
=TU34
-----END PGP SIGNATURE-----

--Sig_/797QuXKQLRhCN2SUlh_nu37--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:03:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138603.256508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf7v-0007Fy-Tn; Tue, 08 Jun 2021 17:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138603.256508; Tue, 08 Jun 2021 17:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf7v-0007Fr-Qp; Tue, 08 Jun 2021 17:02:55 +0000
Received: by outflank-mailman (input) for mailman id 138603;
 Tue, 08 Jun 2021 17:02:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q7uu=LC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lqf7v-0007Fl-Br
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 17:02:55 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05285614-2425-41e4-b6f0-474e6699615c;
 Tue, 08 Jun 2021 17:02:54 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8A5CE1FD33;
 Tue,  8 Jun 2021 17:02:53 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 5C968118DD;
 Tue,  8 Jun 2021 17:02:53 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id voVoFb2iv2DsZQAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 08 Jun 2021 17:02:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05285614-2425-41e4-b6f0-474e6699615c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623171773; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=XE6BGDAlpoGT7vB16KKKSYud+7PMVavS6/QbZRzdDxs=;
	b=MV50QSfEK2ynwxGqYtXKmch4fg36xJzNnB8rvWddCqKd0xv/p3GxnAg805Tnhl2Tt0aFfG
	hk7tnD0pACjAyOfQExdShxTI2I4UKbXoucfEHqC56EmH1Uu7a1e20erV2yVJDvrjkGXnmv
	cyN2jmKRL0GdgAt7lbfH1fW0uPAnE1M=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623171773; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=XE6BGDAlpoGT7vB16KKKSYud+7PMVavS6/QbZRzdDxs=;
	b=MV50QSfEK2ynwxGqYtXKmch4fg36xJzNnB8rvWddCqKd0xv/p3GxnAg805Tnhl2Tt0aFfG
	hk7tnD0pACjAyOfQExdShxTI2I4UKbXoucfEHqC56EmH1Uu7a1e20erV2yVJDvrjkGXnmv
	cyN2jmKRL0GdgAt7lbfH1fW0uPAnE1M=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.13-rc6
Date: Tue,  8 Jun 2021 19:02:53 +0200
Message-Id: <20210608170253.13602-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc6-tag

xen: branch for v5.13-rc6

It contains a single patch fixing a Xen-related security bug: a malicious
guest might be able to trigger a use-after-free in the xen-netback
driver.

Thanks.

Juergen

 drivers/net/xen-netback/interface.c | 6 ++++++
 1 file changed, 6 insertions(+)

Roger Pau Monne (1):
      xen-netback: take a reference to the RX task thread


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:04:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138609.256523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9Z-0007vB-BJ; Tue, 08 Jun 2021 17:04:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138609.256523; Tue, 08 Jun 2021 17:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9Z-0007v4-7H; Tue, 08 Jun 2021 17:04:37 +0000
Received: by outflank-mailman (input) for mailman id 138609;
 Tue, 08 Jun 2021 17:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbVg=LC=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lqf9X-0007tU-PS
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 17:04:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05eccac5-9d57-4387-9fbd-260d206a909a;
 Tue, 08 Jun 2021 17:04:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9Q-0004gL-8a; Tue, 08 Jun 2021 17:04:28 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9Q-0004pg-7A; Tue, 08 Jun 2021 17:04:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05eccac5-9d57-4387-9fbd-260d206a909a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=RXy3Cr7MU80DA2L5jThtQDRerhPPf9p7E9ZKO/Rdv2E=; b=UYWD7GabKQoVeXpp3CKF1Byk23
	K98dheUja/qTOwlXL2ChCQ47g7TOGsLAE+qZGwtMN7/JxilluQ7W5/Rjb4dJbs+EMxOsHLQHv+lZ5
	PTeAkMDvzvIK1l6Frz/hw4QHOqnK5lmcRFT1a0OPJIaytMTWQz8UXIJys16rPinBWR7A=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 372 v3 (CVE-2021-28693) - xen/arm: Boot
 modules are not scrubbed
Message-Id: <E1lqf9Q-0004pg-7A@xenbits.xenproject.org>
Date: Tue, 08 Jun 2021 17:04:28 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28693 / XSA-372
                               version 3

                xen/arm: Boot modules are not scrubbed

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The bootloader will load boot modules (e.g. kernel, initramfs...) into a
temporary area before Xen copies them to each domain's memory.
To ensure sensitive data is not leaked from the modules, Xen must
"scrub" them before handing the pages over to the allocator.

Unfortunately, it was discovered that modules will not be scrubbed on
Arm.

IMPACT
======

Sensitive information from the boot modules might be visible to another
domain after boot.

VULNERABLE SYSTEMS
==================

Only Arm systems are vulnerable.  Systems running with "bootscrub=off"
(disabling boot scrubbing) are not vulnerable.

All versions of Xen since 4.12 are vulnerable.

MITIGATION
==========

There is no mitigation available.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa372/*.patch         xen-unstable
xsa372-4.15/*.patch    Xen 4.15.x
xsa372-4.14/*.patch    Xen 4.14.x - Xen 4.13.x
xsa372-4.12/*.patch    Xen 4.12.x

$ sha256sum xsa372* xsa372*/*
06e43684c2d8a3085d55b8b40f57e1b9f1ee47519fac844dcbc21b57fb039915  xsa372.meta
8f872c7abe6c795dbef2e401f2223fda0dbb9d7c57dfebd8047eef37e1caf952  xsa372-4.12/0001-xen-arm-Create-dom0less-domUs-earlier.patch
a43c6c11481cc3f13900908cee79cc6c5401921f6f4e8858c0796cf301cfe923  xsa372-4.12/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch
6d1fad53795ebd251520022b6be901215426ba78ccbbc075841698973b74d2a2  xsa372-4.14/0001-xen-arm-Create-dom0less-domUs-earlier.patch
2ceb5d4d8d4f8a18046721daa3bb29633a620c4794b54e1265f5d4d69a314c3b  xsa372-4.14/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch
7feae5f9f7f2df0ec38c0b9358dc32671a9955f966b3120e17bb3fd820ce33ff  xsa372-4.15/0001-xen-arm-Create-dom0less-domUs-earlier.patch
0cc73b4751fa49f68c6584b1c7882606c6e1f18561d8a6547017ab068de4eb4b  xsa372-4.15/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch
950672405c695ebf6ae59eebeb454bc0738b7afc3efa35ef9680d76eef4d4ec0  xsa372/0001-xen-arm-Create-dom0less-domUs-earlier.patch
9ceccd39c795e7756052a2f00256e043c8dda42e2c691df30e3f8b59190d6e8e  xsa372/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch
$
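The published hashes can also be machine-checked with `sha256sum -c`. A minimal self-contained demonstration of that workflow, using a throwaway file rather than the actual xsa372 downloads:

```shell
# Build a tiny manifest and verify against it, mirroring how the
# advisory's hash list would be checked after downloading the patches.
printf 'hello\n' > demo.txt
sha256sum demo.txt > SHA256SUMS
sha256sum -c SHA256SUMS
```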

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmC/oxIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZmdYIAMlZ2woM1hnb97BytpKkRM3v8AnyP4xhm29OoVI+
eaclrapZBPxi8qxv0+fxhe/2/t9gf98miEJftI8VRz5btiStmsgIjlEXUGpC6iwE
u7HmLzu7QBX7r2FzpSTFnVVdbFwXCU3scYuO4qM8frCpxH4kevSSxPrT5E/oFVvA
Y83ux8aKg041WTVQvK0gEVA7CgRVoxmbiYeag2JIaRGt8WnEKprbmGWQ5+DYq+pr
8tsLppHtyxppqSa7d6L67xdiNoRqAacfIezNFTpSIdyfS1m0QIIAJTr6Bg7Fd6zi
F2AYcoZiNO53OSnobH3c64axIc5iBINZeXisVMnTDzKU3XE=
=eQ/r
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa372.meta"
Content-Disposition: attachment; filename="xsa372.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzIsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwK
ICAiUmVjaXBlcyI6IHsKICAgICI0LjEyIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI1
OTg0OTA1YjI2MzhkZjg3YTAyNjJkMWVlOTFmMGE2ZTE0YTg2ZGY2IiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTM3Mi00LjEyLyoucGF0Y2giCiAgICAgICAgICBd
CiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogIjI4NDEzMjkzODkwMGNlOGMzYjExYmFiZjcyNTVmNWM2ZGJi
MjE3MTYiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzcyLTQuMTQvKi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4x
NCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAg
ICAgICAgICJTdGFibGVSZWYiOiAiMTBmMGIyZDQ5Mzc2ODY1ZDQ5NjgwZjA2
YzUyYjQ1MWZhYmNlM2JiNSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAog
ICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNzItNC4x
NC8qLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAg
fSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAi
eGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyODBkNDcyZjRmY2Ew
NzBhMTAzNzdlMzE4ZDkwY2FiZmMyNTQwODEwIiwKICAgICAgICAgICJQcmVy
ZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAg
InhzYTM3Mi00LjE1LyoucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQog
ICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMi
OiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAi
YWE3N2FjYzI4MDk4ZDA0OTQ1YWY5OThmM2ZjMGRiZDM3NTliNWI0MSIsCiAg
ICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBb
CiAgICAgICAgICAgICJ4c2EzNzIvKi5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.12/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Disposition: attachment;
 filename="xsa372-4.12/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Transfer-Encoding: base64

RnJvbSBlZWZiYWJkODVhYzUwNGY0YjQ3NGY3OWFiYzc2NWRhZTkxYjkxZTFi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBNb24sIDE3IE1heSAyMDIxIDE3
OjQ3OjEzICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHhlbi9hcm06IENy
ZWF0ZSBkb20wbGVzcyBkb21VcyBlYXJsaWVyCgpJbiBhIGZvbGxvdy11cCBw
YXRjaCB3ZSB3aWxsIG5lZWQgdG8gdW5hbGxvY2F0ZSB0aGUgYm9vdCBtb2R1
bGVzCmJlZm9yZSBoZWFwX2luaXRfbGF0ZSgpIGlzIGNhbGxlZC4KClRoZSBt
b2R1bGVzIHdpbGwgY29udGFpbiB0aGUgZG9tVXMga2VybmVsIGFuZCBpbml0
cmFtZnMuIFRoZXJlZm9yZSBYZW4Kd2lsbCBuZWVkIHRvIGNyZWF0ZSBleHRy
YSBkb21VcyAodXNlZCBieSBkb20wbGVzcykgYmVmb3JlIGhlYXBfaW5pdF9s
YXRlKCkuCgpUaGlzIGhhcyB0d28gY29uc2VxdWVuY2VzIG9uIGRvbTBsZXNz
OgogICAgMSkgRG9tYWlucyB3aWxsIG5vdCBiZSB1bnBhdXNlZCBhcyBzb29u
IGFzIHRoZXkgYXJlIGNyZWF0ZWQgYnV0CiAgICBvbmNlIGFsbCBoYXZlIGJl
ZW4gY3JlYXRlZC4gSG93ZXZlciwgWGVuIGRvZXNuJ3QgZ3VhcmFudGVlIGFu
IG9yZGVyCiAgICB0byB1bnBhdXNlLCBzbyB0aGlzIGlzIG5vdCBzb21ldGhp
bmcgb25lIGNvdWxkIHJlbHkgb24uCgogICAgMikgVGhlIG1lbW9yeSBhbGxv
Y2F0ZWQgZm9yIGEgZG9tVSB3aWxsIG5vdCBiZSBzY3J1YmJlZCBhbnltb3Jl
IHdoZW4gYW4KICAgIGFkbWluIHNlbGVjdCBib290c2NydWI9b24uIFRoaXMg
aXMgbm90IHNvbWV0aGluZyB3ZSBhZHZlcnRpc2VkLCBidXQgaWYKICAgIHRo
aXMgaXMgYSBjb25jZXJuIHdlIGNhbiBpbnRyb2R1Y2UgZWl0aGVyIGZvcmNl
IHNjcnViIGZvciBhbGwgZG9tVXMgb3IKICAgIGEgcGVyLWRvbWFpbiBmbGFn
IGluIHRoZSBEVC4gVGhlIGJlaGF2aW9yIGZvciBib290c2NydWI9b2ZmIGFu
ZAogICAgYm9vdHNjcnViPWlkbGUgKGRlZmF1bHQpIGhhcyBub3QgY2hhbmdl
ZC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzcyIC8gQ1ZFLTIwMjEtMjg2OTMu
CgpTaWduZWQtb2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24u
Y29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0v
ZG9tYWluX2J1aWxkLmMgfCAyIC0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyAg
ICAgICAgfCA5ICsrKysrLS0tLQogMiBmaWxlcyBjaGFuZ2VkLCA1IGluc2Vy
dGlvbnMoKyksIDYgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gvYXJtL2RvbWFpbl9idWlsZC5jIGIveGVuL2FyY2gvYXJtL2RvbWFpbl9i
dWlsZC5jCmluZGV4IGQ5ODM2Nzc5ZDE3Yy4uNmM1YTZkYjE0NDY2IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC9hcm0vZG9tYWluX2J1aWxkLmMKKysrIGIveGVu
L2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jCkBAIC0yMDkyLDggKzIwOTIsNiBA
QCB2b2lkIF9faW5pdCBjcmVhdGVfZG9tVXModm9pZCkKIAogICAgICAgICBp
ZiAoIGNvbnN0cnVjdF9kb21VKGQsIG5vZGUpICE9IDAgKQogICAgICAgICAg
ICAgcGFuaWMoIkNvdWxkIG5vdCBzZXQgdXAgZG9tYWluICVzXG4iLCBkdF9u
b2RlX25hbWUobm9kZSkpOwotCi0gICAgICAgIGRvbWFpbl91bnBhdXNlX2J5
X3N5c3RlbWNvbnRyb2xsZXIoZCk7CiAgICAgfQogfQogCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC9hcm0vc2V0dXAuYyBiL3hlbi9hcmNoL2FybS9zZXR1cC5j
CmluZGV4IGQ5ODZmODRmOGQyYS4uMGU1NGU5YzczZTA2IDEwMDY0NAotLS0g
YS94ZW4vYXJjaC9hcm0vc2V0dXAuYworKysgYi94ZW4vYXJjaC9hcm0vc2V0
dXAuYwpAQCAtNzM2LDcgKzczNiw3IEBAIHZvaWQgX19pbml0IHN0YXJ0X3hl
bih1bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsCiAgICAgaW50IGNw
dXMsIGk7CiAgICAgY29uc3QgY2hhciAqY21kbGluZTsKICAgICBzdHJ1Y3Qg
Ym9vdG1vZHVsZSAqeGVuX2Jvb3Rtb2R1bGU7Ci0gICAgc3RydWN0IGRvbWFp
biAqZG9tMDsKKyAgICBzdHJ1Y3QgZG9tYWluICpkb20wLCAqZDsKICAgICBz
dHJ1Y3QgeGVuX2RvbWN0bF9jcmVhdGVkb21haW4gZG9tMF9jZmcgPSB7CiAg
ICAgICAgIC5mbGFncyA9IFhFTl9ET01DVExfQ0RGX2h2bV9ndWVzdCB8IFhF
Tl9ET01DVExfQ0RGX2hhcCwKICAgICAgICAgLm1heF9ldnRjaG5fcG9ydCA9
IC0xLApAQCAtOTAyLDYgKzkwMiw4IEBAIHZvaWQgX19pbml0IHN0YXJ0X3hl
bih1bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsCiAgICAgaWYgKCBj
b25zdHJ1Y3RfZG9tMChkb20wKSAhPSAwKQogICAgICAgICBwYW5pYygiQ291
bGQgbm90IHNldCB1cCBET00wIGd1ZXN0IE9TXG4iKTsKIAorICAgIGNyZWF0
ZV9kb21VcygpOworCiAgICAgaGVhcF9pbml0X2xhdGUoKTsKIAogICAgIGlu
aXRfdHJhY2VfYnVmcygpOwpAQCAtOTE1LDkgKzkxNyw4IEBAIHZvaWQgX19p
bml0IHN0YXJ0X3hlbih1bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQs
CiAKICAgICBzeXN0ZW1fc3RhdGUgPSBTWVNfU1RBVEVfYWN0aXZlOwogCi0g
ICAgY3JlYXRlX2RvbVVzKCk7Ci0KLSAgICBkb21haW5fdW5wYXVzZV9ieV9z
eXN0ZW1jb250cm9sbGVyKGRvbTApOworICAgIGZvcl9lYWNoX2RvbWFpbigg
ZCApCisgICAgICAgIGRvbWFpbl91bnBhdXNlX2J5X3N5c3RlbWNvbnRyb2xs
ZXIoZCk7CiAKICAgICAvKiBTd2l0Y2ggb24gdG8gdGhlIGR5bmFtaWNhbGx5
IGFsbG9jYXRlZCBzdGFjayBmb3IgdGhlIGlkbGUgdmNwdQogICAgICAqIHNp
bmNlIHRoZSBzdGF0aWMgb25lIHdlJ3JlIHJ1bm5pbmcgb24gaXMgYWJvdXQg
dG8gYmUgZnJlZWQuICovCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.12/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Disposition: attachment;
 filename="xsa372-4.12/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Transfer-Encoding: base64

RnJvbSA3ZjEwYWI3MDE5YjFkYzU2ZWEzNTg3YzljNWNiYzA3MzQyN2UwMmNl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBTYXQsIDE3IEFwciAyMDIxIDE3
OjM4OjI4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHhlbi9hcm06IEJv
b3QgbW9kdWxlcyBzaG91bGQgYWx3YXlzIGJlIHNjcnViYmVkIGlmCiBib290
c2NydWI9e29uLCBpZGxlfQoKVGhlIGZ1bmN0aW9uIHRvIGluaXRpYWxpemUg
dGhlIHBhZ2VzIChzZWUgaW5pdF9oZWFwX3BhZ2VzKCkpIHdpbGwgcmVxdWVz
dApzY3J1YiB3aGVuIHRoZSBhZG1pbiByZXF1ZXN0IGlkbGUgYm9vdHNjcnVi
IChkZWZhdWx0KSBhbmQgc3RhdGUgPT0KU1lTX1NUQVRFX2FjdGl2ZS4gV2hl
biBib290c2NydWI9b24sIFhlbiB3aWxsIHNjcnViIGFueSBmcmVlIHBhZ2Vz
IGluCmhlYXBfaW5pdF9sYXRlKCkuCgpDdXJyZW50bHksIHRoZSBib290IG1v
ZHVsZXMgKGUuZy4ga2VybmVscywgaW5pdHJhbWZzKSB3aWxsIGJlIGRpc2Nh
cmRlZC8KZnJlZWQgYWZ0ZXIgaGVhcF9pbml0X2xhdGUoKSBpcyBjYWxsZWQg
YW5kIHN5c3RlbV9zdGF0ZSBzd2l0Y2hlZCB0bwpTWVNfU1RBVEVfYWN0aXZl
LiBUaGlzIG1lYW5zIHRoZSBwYWdlcyBhc3NvY2lhdGVkIHdpdGggdGhlIGJv
b3QgbW9kdWxlcwp3aWxsIG5vdCBnZXQgc2NydWJiZWQgYmVmb3JlIGdldHRp
bmcgcmUtcHVycG9zZWQuCgpJZiB0aGUgbWVtb3J5IGlzIGFzc2lnbmVkIHRv
IGFuIHVudHJ1c3RlZCBkb21VLCBpdCBtYXkgYmUgYWJsZSB0bwpyZXRyaWV2
ZSBzZWNyZXRzIGZyb20gdGhlIG1vZHVsZXMuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTM3MiAvIENWRS0yMDIxLTI4NjkzLgoKRml4ZXM6IDE3NzRlOWIxZGYy
NyAoInhlbi9hcm06IGludHJvZHVjZSBjcmVhdGVfZG9tVXMiKQpTaWduZWQt
b2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZp
ZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJu
ZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJl
bGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyB8
IDcgKysrKysrLQogMSBmaWxlIGNoYW5nZWQsIDYgaW5zZXJ0aW9ucygrKSwg
MSBkZWxldGlvbigtKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9zZXR1
cC5jIGIveGVuL2FyY2gvYXJtL3NldHVwLmMKaW5kZXggMGU1NGU5YzczZTA2
Li5iYTk1YzA2ZDg5ZjIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL2FybS9zZXR1
cC5jCisrKyBiL3hlbi9hcmNoL2FybS9zZXR1cC5jCkBAIC03Myw3ICs3Myw2
IEBAIHN0YXRpYyBfX3VzZWQgdm9pZCBpbml0X2RvbmUodm9pZCkKICAgICAv
KiBNdXN0IGJlIGRvbmUgcGFzdCBzZXR0aW5nIHN5c3RlbV9zdGF0ZS4gKi8K
ICAgICB1bnJlZ2lzdGVyX2luaXRfdmlydHVhbF9yZWdpb24oKTsKIAotICAg
IGRpc2NhcmRfaW5pdGlhbF9tb2R1bGVzKCk7CiAgICAgZnJlZV9pbml0X21l
bW9yeSgpOwogICAgIHN0YXJ0dXBfY3B1X2lkbGVfbG9vcCgpOwogfQpAQCAt
OTA0LDYgKzkwMywxMiBAQCB2b2lkIF9faW5pdCBzdGFydF94ZW4odW5zaWdu
ZWQgbG9uZyBib290X3BoeXNfb2Zmc2V0LAogCiAgICAgY3JlYXRlX2RvbVVz
KCk7CiAKKyAgICAvKgorICAgICAqIFRoaXMgbmVlZHMgdG8gYmUgY2FsbGVk
ICoqYmVmb3JlKiogaGVhcF9pbml0X2xhdGUoKSBzbyBtb2R1bGVzCisgICAg
ICogd2lsbCBiZSBzY3J1YmJlZCAodW5sZXNzIHN1cHByZXNzZWQpLgorICAg
ICAqLworICAgIGRpc2NhcmRfaW5pdGlhbF9tb2R1bGVzKCk7CisKICAgICBo
ZWFwX2luaXRfbGF0ZSgpOwogCiAgICAgaW5pdF90cmFjZV9idWZzKCk7Ci0t
IAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.14/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Disposition: attachment;
 filename="xsa372-4.14/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Transfer-Encoding: base64

RnJvbSBmOThjMjBhYWFmOTA5YmUwNGFkYTVjYjZjYjg4YzE0YjliYzc1ZTE1
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBNb24sIDE3IE1heSAyMDIxIDE3
OjQ3OjEzICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHhlbi9hcm06IENy
ZWF0ZSBkb20wbGVzcyBkb21VcyBlYXJsaWVyCgpJbiBhIGZvbGxvdy11cCBw
YXRjaCB3ZSB3aWxsIG5lZWQgdG8gdW5hbGxvY2F0ZSB0aGUgYm9vdCBtb2R1
bGVzCmJlZm9yZSBoZWFwX2luaXRfbGF0ZSgpIGlzIGNhbGxlZC4KClRoZSBt
b2R1bGVzIHdpbGwgY29udGFpbiB0aGUgZG9tVXMga2VybmVsIGFuZCBpbml0
cmFtZnMuIFRoZXJlZm9yZSBYZW4Kd2lsbCBuZWVkIHRvIGNyZWF0ZSBleHRy
YSBkb21VcyAodXNlZCBieSBkb20wbGVzcykgYmVmb3JlIGhlYXBfaW5pdF9s
YXRlKCkuCgpUaGlzIGhhcyB0d28gY29uc2VxdWVuY2VzIG9uIGRvbTBsZXNz
OgogICAgMSkgRG9tYWlucyB3aWxsIG5vdCBiZSB1bnBhdXNlZCBhcyBzb29u
IGFzIHRoZXkgYXJlIGNyZWF0ZWQgYnV0CiAgICBvbmNlIGFsbCBoYXZlIGJl
ZW4gY3JlYXRlZC4gSG93ZXZlciwgWGVuIGRvZXNuJ3QgZ3VhcmFudGVlIGFu
IG9yZGVyCiAgICB0byB1bnBhdXNlLCBzbyB0aGlzIGlzIG5vdCBzb21ldGhp
bmcgb25lIGNvdWxkIHJlbHkgb24uCgogICAgMikgVGhlIG1lbW9yeSBhbGxv
Y2F0ZWQgZm9yIGEgZG9tVSB3aWxsIG5vdCBiZSBzY3J1YmJlZCBhbnltb3Jl
IHdoZW4gYW4KICAgIGFkbWluIHNlbGVjdCBib290c2NydWI9b24uIFRoaXMg
aXMgbm90IHNvbWV0aGluZyB3ZSBhZHZlcnRpc2VkLCBidXQgaWYKICAgIHRo
aXMgaXMgYSBjb25jZXJuIHdlIGNhbiBpbnRyb2R1Y2UgZWl0aGVyIGZvcmNl
IHNjcnViIGZvciBhbGwgZG9tVXMgb3IKICAgIGEgcGVyLWRvbWFpbiBmbGFn
IGluIHRoZSBEVC4gVGhlIGJlaGF2aW9yIGZvciBib290c2NydWI9b2ZmIGFu
ZAogICAgYm9vdHNjcnViPWlkbGUgKGRlZmF1bHQpIGhhcyBub3QgY2hhbmdl
ZC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzcyIC8gQ1ZFLTIwMjEtMjg2OTMu
CgpTaWduZWQtb2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24u
Y29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0v
ZG9tYWluX2J1aWxkLmMgfCAyIC0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyAg
ICAgICAgfCA5ICsrKysrLS0tLQogMiBmaWxlcyBjaGFuZ2VkLCA1IGluc2Vy
dGlvbnMoKyksIDYgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gvYXJtL2RvbWFpbl9idWlsZC5jIGIveGVuL2FyY2gvYXJtL2RvbWFpbl9i
dWlsZC5jCmluZGV4IGU4MjRiYTM0YjAxMi4uYjA3NDYxZjVkMzc2IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC9hcm0vZG9tYWluX2J1aWxkLmMKKysrIGIveGVu
L2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jCkBAIC0yNTE1LDggKzI1MTUsNiBA
QCB2b2lkIF9faW5pdCBjcmVhdGVfZG9tVXModm9pZCkKIAogICAgICAgICBp
ZiAoIGNvbnN0cnVjdF9kb21VKGQsIG5vZGUpICE9IDAgKQogICAgICAgICAg
ICAgcGFuaWMoIkNvdWxkIG5vdCBzZXQgdXAgZG9tYWluICVzXG4iLCBkdF9u
b2RlX25hbWUobm9kZSkpOwotCi0gICAgICAgIGRvbWFpbl91bnBhdXNlX2J5
X3N5c3RlbWNvbnRyb2xsZXIoZCk7CiAgICAgfQogfQogCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC9hcm0vc2V0dXAuYyBiL3hlbi9hcmNoL2FybS9zZXR1cC5j
CmluZGV4IDc5NjhjZWU0N2QwNS4uMWYyNjA4MGIzMGJmIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC9hcm0vc2V0dXAuYworKysgYi94ZW4vYXJjaC9hcm0vc2V0
dXAuYwpAQCAtNzc5LDcgKzc3OSw3IEBAIHZvaWQgX19pbml0IHN0YXJ0X3hl
bih1bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsCiAgICAgaW50IGNw
dXMsIGk7CiAgICAgY29uc3QgY2hhciAqY21kbGluZTsKICAgICBzdHJ1Y3Qg
Ym9vdG1vZHVsZSAqeGVuX2Jvb3Rtb2R1bGU7Ci0gICAgc3RydWN0IGRvbWFp
biAqZG9tMDsKKyAgICBzdHJ1Y3QgZG9tYWluICpkb20wLCAqZDsKICAgICBz
dHJ1Y3QgeGVuX2RvbWN0bF9jcmVhdGVkb21haW4gZG9tMF9jZmcgPSB7CiAg
ICAgICAgIC5mbGFncyA9IFhFTl9ET01DVExfQ0RGX2h2bSB8IFhFTl9ET01D
VExfQ0RGX2hhcCwKICAgICAgICAgLm1heF9ldnRjaG5fcG9ydCA9IC0xLApA
QCAtOTYyLDYgKzk2Miw4IEBAIHZvaWQgX19pbml0IHN0YXJ0X3hlbih1bnNp
Z25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsCiAgICAgaWYgKCBjb25zdHJ1
Y3RfZG9tMChkb20wKSAhPSAwKQogICAgICAgICBwYW5pYygiQ291bGQgbm90
IHNldCB1cCBET00wIGd1ZXN0IE9TXG4iKTsKIAorICAgIGNyZWF0ZV9kb21V
cygpOworCiAgICAgaGVhcF9pbml0X2xhdGUoKTsKIAogICAgIGluaXRfdHJh
Y2VfYnVmcygpOwpAQCAtOTc1LDkgKzk3Nyw4IEBAIHZvaWQgX19pbml0IHN0
YXJ0X3hlbih1bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsCiAKICAg
ICBzeXN0ZW1fc3RhdGUgPSBTWVNfU1RBVEVfYWN0aXZlOwogCi0gICAgY3Jl
YXRlX2RvbVVzKCk7Ci0KLSAgICBkb21haW5fdW5wYXVzZV9ieV9zeXN0ZW1j
b250cm9sbGVyKGRvbTApOworICAgIGZvcl9lYWNoX2RvbWFpbiggZCApCisg
ICAgICAgIGRvbWFpbl91bnBhdXNlX2J5X3N5c3RlbWNvbnRyb2xsZXIoZCk7
CiAKICAgICAvKiBTd2l0Y2ggb24gdG8gdGhlIGR5bmFtaWNhbGx5IGFsbG9j
YXRlZCBzdGFjayBmb3IgdGhlIGlkbGUgdmNwdQogICAgICAqIHNpbmNlIHRo
ZSBzdGF0aWMgb25lIHdlJ3JlIHJ1bm5pbmcgb24gaXMgYWJvdXQgdG8gYmUg
ZnJlZWQuICovCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.14/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Disposition: attachment;
 filename="xsa372-4.14/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Transfer-Encoding: base64

RnJvbSBlN2U0NzVjMWEzZGM2YjE0OTI1MjQxMzU4OWVlYmFhNGFlMTM4ODI0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBTYXQsIDE3IEFwciAyMDIxIDE3
OjM4OjI4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHhlbi9hcm06IEJv
b3QgbW9kdWxlcyBzaG91bGQgYWx3YXlzIGJlIHNjcnViYmVkIGlmCiBib290
c2NydWI9e29uLCBpZGxlfQoKVGhlIGZ1bmN0aW9uIHRvIGluaXRpYWxpemUg
dGhlIHBhZ2VzIChzZWUgaW5pdF9oZWFwX3BhZ2VzKCkpIHdpbGwgcmVxdWVz
dApzY3J1YiB3aGVuIHRoZSBhZG1pbiByZXF1ZXN0IGlkbGUgYm9vdHNjcnVi
IChkZWZhdWx0KSBhbmQgc3RhdGUgPT0KU1lTX1NUQVRFX2FjdGl2ZS4gV2hl
biBib290c2NydWI9b24sIFhlbiB3aWxsIHNjcnViIGFueSBmcmVlIHBhZ2Vz
IGluCmhlYXBfaW5pdF9sYXRlKCkuCgpDdXJyZW50bHksIHRoZSBib290IG1v
ZHVsZXMgKGUuZy4ga2VybmVscywgaW5pdHJhbWZzKSB3aWxsIGJlIGRpc2Nh
cmRlZC8KZnJlZWQgYWZ0ZXIgaGVhcF9pbml0X2xhdGUoKSBpcyBjYWxsZWQg
YW5kIHN5c3RlbV9zdGF0ZSBzd2l0Y2hlZCB0bwpTWVNfU1RBVEVfYWN0aXZl
LiBUaGlzIG1lYW5zIHRoZSBwYWdlcyBhc3NvY2lhdGVkIHdpdGggdGhlIGJv
b3QgbW9kdWxlcwp3aWxsIG5vdCBnZXQgc2NydWJiZWQgYmVmb3JlIGdldHRp
bmcgcmUtcHVycG9zZWQuCgpJZiB0aGUgbWVtb3J5IGlzIGFzc2lnbmVkIHRv
IGFuIHVudHJ1c3RlZCBkb21VLCBpdCBtYXkgYmUgYWJsZSB0bwpyZXRyaWV2
ZSBzZWNyZXRzIGZyb20gdGhlIG1vZHVsZXMuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTM3MiAvIENWRS0yMDIxLTI4NjkzLgoKRml4ZXM6IDE3NzRlOWIxZGYy
NyAoInhlbi9hcm06IGludHJvZHVjZSBjcmVhdGVfZG9tVXMiKQpTaWduZWQt
b2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZp
ZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJu
ZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJl
bGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyB8
IDcgKysrKysrLQogMSBmaWxlIGNoYW5nZWQsIDYgaW5zZXJ0aW9ucygrKSwg
MSBkZWxldGlvbigtKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9zZXR1
cC5jIGIveGVuL2FyY2gvYXJtL3NldHVwLmMKaW5kZXggMWYyNjA4MGIzMGJm
Li4zNGIxYzFhMTFlZjYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL2FybS9zZXR1
cC5jCisrKyBiL3hlbi9hcmNoL2FybS9zZXR1cC5jCkBAIC03NSw3ICs3NSw2
IEBAIHN0YXRpYyBfX3VzZWQgdm9pZCBpbml0X2RvbmUodm9pZCkKICAgICAv
KiBNdXN0IGJlIGRvbmUgcGFzdCBzZXR0aW5nIHN5c3RlbV9zdGF0ZS4gKi8K
ICAgICB1bnJlZ2lzdGVyX2luaXRfdmlydHVhbF9yZWdpb24oKTsKIAotICAg
IGRpc2NhcmRfaW5pdGlhbF9tb2R1bGVzKCk7CiAgICAgZnJlZV9pbml0X21l
bW9yeSgpOwogICAgIHN0YXJ0dXBfY3B1X2lkbGVfbG9vcCgpOwogfQpAQCAt
OTY0LDYgKzk2MywxMiBAQCB2b2lkIF9faW5pdCBzdGFydF94ZW4odW5zaWdu
ZWQgbG9uZyBib290X3BoeXNfb2Zmc2V0LAogCiAgICAgY3JlYXRlX2RvbVVz
KCk7CiAKKyAgICAvKgorICAgICAqIFRoaXMgbmVlZHMgdG8gYmUgY2FsbGVk
ICoqYmVmb3JlKiogaGVhcF9pbml0X2xhdGUoKSBzbyBtb2R1bGVzCisgICAg
ICogd2lsbCBiZSBzY3J1YmJlZCAodW5sZXNzIHN1cHByZXNzZWQpLgorICAg
ICAqLworICAgIGRpc2NhcmRfaW5pdGlhbF9tb2R1bGVzKCk7CisKICAgICBo
ZWFwX2luaXRfbGF0ZSgpOwogCiAgICAgaW5pdF90cmFjZV9idWZzKCk7Ci0t
IAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.15/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Disposition: attachment;
 filename="xsa372-4.15/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Transfer-Encoding: base64

RnJvbSBiMWU1YTg5ZjE5ZDk5MTljM2VhZTE3YWI5YzZhNjYzYjA4MDFhZDlj
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBNb24sIDE3IE1heSAyMDIxIDE3
OjQ3OjEzICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHhlbi9hcm06IENy
ZWF0ZSBkb20wbGVzcyBkb21VcyBlYXJsaWVyCgpJbiBhIGZvbGxvdy11cCBw
YXRjaCB3ZSB3aWxsIG5lZWQgdG8gdW5hbGxvY2F0ZSB0aGUgYm9vdCBtb2R1
bGVzCmJlZm9yZSBoZWFwX2luaXRfbGF0ZSgpIGlzIGNhbGxlZC4KClRoZSBt
b2R1bGVzIHdpbGwgY29udGFpbiB0aGUgZG9tVXMga2VybmVsIGFuZCBpbml0
cmFtZnMuIFRoZXJlZm9yZSBYZW4Kd2lsbCBuZWVkIHRvIGNyZWF0ZSBleHRy
YSBkb21VcyAodXNlZCBieSBkb20wbGVzcykgYmVmb3JlIGhlYXBfaW5pdF9s
YXRlKCkuCgpUaGlzIGhhcyB0d28gY29uc2VxdWVuY2VzIG9uIGRvbTBsZXNz
OgogICAgMSkgRG9tYWlucyB3aWxsIG5vdCBiZSB1bnBhdXNlZCBhcyBzb29u
IGFzIHRoZXkgYXJlIGNyZWF0ZWQgYnV0CiAgICBvbmNlIGFsbCBoYXZlIGJl
ZW4gY3JlYXRlZC4gSG93ZXZlciwgWGVuIGRvZXNuJ3QgZ3VhcmFudGVlIGFu
IG9yZGVyCiAgICB0byB1bnBhdXNlLCBzbyB0aGlzIGlzIG5vdCBzb21ldGhp
bmcgb25lIGNvdWxkIHJlbHkgb24uCgogICAgMikgVGhlIG1lbW9yeSBhbGxv
Y2F0ZWQgZm9yIGEgZG9tVSB3aWxsIG5vdCBiZSBzY3J1YmJlZCBhbnltb3Jl
IHdoZW4gYW4KICAgIGFkbWluIHNlbGVjdCBib290c2NydWI9b24uIFRoaXMg
aXMgbm90IHNvbWV0aGluZyB3ZSBhZHZlcnRpc2VkLCBidXQgaWYKICAgIHRo
aXMgaXMgYSBjb25jZXJuIHdlIGNhbiBpbnRyb2R1Y2UgZWl0aGVyIGZvcmNl
IHNjcnViIGZvciBhbGwgZG9tVXMgb3IKICAgIGEgcGVyLWRvbWFpbiBmbGFn
IGluIHRoZSBEVC4gVGhlIGJlaGF2aW9yIGZvciBib290c2NydWI9b2ZmIGFu
ZAogICAgYm9vdHNjcnViPWlkbGUgKGRlZmF1bHQpIGhhcyBub3QgY2hhbmdl
ZC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzcyIC8gQ1ZFLTIwMjEtMjg2OTMu
CgpTaWduZWQtb2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24u
Y29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0v
ZG9tYWluX2J1aWxkLmMgfCAgMiAtLQogeGVuL2FyY2gvYXJtL3NldHVwLmMg
ICAgICAgIHwgMTEgKysrKysrLS0tLS0KIDIgZmlsZXMgY2hhbmdlZCwgNiBp
bnNlcnRpb25zKCspLCA3IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL2FybS9kb21haW5fYnVpbGQuYyBiL3hlbi9hcmNoL2FybS9kb21h
aW5fYnVpbGQuYwppbmRleCAzNzRiZjY1NWVlMzQuLjQyMDNkZGNjYTBlMyAx
MDA2NDQKLS0tIGEveGVuL2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jCisrKyBi
L3hlbi9hcmNoL2FybS9kb21haW5fYnVpbGQuYwpAQCAtMjUxNSw4ICsyNTE1
LDYgQEAgdm9pZCBfX2luaXQgY3JlYXRlX2RvbVVzKHZvaWQpCiAKICAgICAg
ICAgaWYgKCBjb25zdHJ1Y3RfZG9tVShkLCBub2RlKSAhPSAwICkKICAgICAg
ICAgICAgIHBhbmljKCJDb3VsZCBub3Qgc2V0IHVwIGRvbWFpbiAlc1xuIiwg
ZHRfbm9kZV9uYW1lKG5vZGUpKTsKLQotICAgICAgICBkb21haW5fdW5wYXVz
ZV9ieV9zeXN0ZW1jb250cm9sbGVyKGQpOwogICAgIH0KIH0KIApkaWZmIC0t
Z2l0IGEveGVuL2FyY2gvYXJtL3NldHVwLmMgYi94ZW4vYXJjaC9hcm0vc2V0
dXAuYwppbmRleCAyNTMyZWM5NzM5MTMuLjQ0MWUwZTE2ZTlmMCAxMDA2NDQK
LS0tIGEveGVuL2FyY2gvYXJtL3NldHVwLmMKKysrIGIveGVuL2FyY2gvYXJt
L3NldHVwLmMKQEAgLTgwNCw3ICs4MDQsNyBAQCB2b2lkIF9faW5pdCBzdGFy
dF94ZW4odW5zaWduZWQgbG9uZyBib290X3BoeXNfb2Zmc2V0LAogICAgIGlu
dCBjcHVzLCBpOwogICAgIGNvbnN0IGNoYXIgKmNtZGxpbmU7CiAgICAgc3Ry
dWN0IGJvb3Rtb2R1bGUgKnhlbl9ib290bW9kdWxlOwotICAgIHN0cnVjdCBk
b21haW4gKmRvbTA7CisgICAgc3RydWN0IGRvbWFpbiAqZG9tMCwgKmQ7CiAg
ICAgc3RydWN0IHhlbl9kb21jdGxfY3JlYXRlZG9tYWluIGRvbTBfY2ZnID0g
ewogICAgICAgICAuZmxhZ3MgPSBYRU5fRE9NQ1RMX0NERl9odm0gfCBYRU5f
RE9NQ1RMX0NERl9oYXAsCiAgICAgICAgIC5tYXhfZXZ0Y2huX3BvcnQgPSAt
MSwKQEAgLTk4Nyw2ICs5ODcsOSBAQCB2b2lkIF9faW5pdCBzdGFydF94ZW4o
dW5zaWduZWQgbG9uZyBib290X3BoeXNfb2Zmc2V0LAogICAgIGlmICggY29u
c3RydWN0X2RvbTAoZG9tMCkgIT0gMCkKICAgICAgICAgcGFuaWMoIkNvdWxk
IG5vdCBzZXQgdXAgRE9NMCBndWVzdCBPU1xuIik7CiAKKyAgICBpZiAoIGFj
cGlfZGlzYWJsZWQgKQorICAgICAgICBjcmVhdGVfZG9tVXMoKTsKKwogICAg
IGhlYXBfaW5pdF9sYXRlKCk7CiAKICAgICBpbml0X3RyYWNlX2J1ZnMoKTsK
QEAgLTEwMDAsMTAgKzEwMDMsOCBAQCB2b2lkIF9faW5pdCBzdGFydF94ZW4o
dW5zaWduZWQgbG9uZyBib290X3BoeXNfb2Zmc2V0LAogCiAgICAgc3lzdGVt
X3N0YXRlID0gU1lTX1NUQVRFX2FjdGl2ZTsKIAotICAgIGlmICggYWNwaV9k
aXNhYmxlZCApCi0gICAgICAgIGNyZWF0ZV9kb21VcygpOwotCi0gICAgZG9t
YWluX3VucGF1c2VfYnlfc3lzdGVtY29udHJvbGxlcihkb20wKTsKKyAgICBm
b3JfZWFjaF9kb21haW4oIGQgKQorICAgICAgICBkb21haW5fdW5wYXVzZV9i
eV9zeXN0ZW1jb250cm9sbGVyKGQpOwogCiAgICAgLyogU3dpdGNoIG9uIHRv
IHRoZSBkeW5hbWljYWxseSBhbGxvY2F0ZWQgc3RhY2sgZm9yIHRoZSBpZGxl
IHZjcHUKICAgICAgKiBzaW5jZSB0aGUgc3RhdGljIG9uZSB3ZSdyZSBydW5u
aW5nIG9uIGlzIGFib3V0IHRvIGJlIGZyZWVkLiAqLwotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa372-4.15/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Disposition: attachment;
 filename="xsa372-4.15/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Transfer-Encoding: base64

RnJvbSAwOWJiMjhiZGVmM2ZiNWU3ZDA4YmRkNjQxNjAxY2EwYzBkNGQ4MmI0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBTYXQsIDE3IEFwciAyMDIxIDE3
OjM4OjI4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHhlbi9hcm06IEJv
b3QgbW9kdWxlcyBzaG91bGQgYWx3YXlzIGJlIHNjcnViYmVkIGlmCiBib290
c2NydWI9e29uLCBpZGxlfQoKVGhlIGZ1bmN0aW9uIHRvIGluaXRpYWxpemUg
dGhlIHBhZ2VzIChzZWUgaW5pdF9oZWFwX3BhZ2VzKCkpIHdpbGwgcmVxdWVz
dApzY3J1YiB3aGVuIHRoZSBhZG1pbiByZXF1ZXN0IGlkbGUgYm9vdHNjcnVi
IChkZWZhdWx0KSBhbmQgc3RhdGUgPT0KU1lTX1NUQVRFX2FjdGl2ZS4gV2hl
biBib290c2NydWI9b24sIFhlbiB3aWxsIHNjcnViIGFueSBmcmVlIHBhZ2Vz
IGluCmhlYXBfaW5pdF9sYXRlKCkuCgpDdXJyZW50bHksIHRoZSBib290IG1v
ZHVsZXMgKGUuZy4ga2VybmVscywgaW5pdHJhbWZzKSB3aWxsIGJlIGRpc2Nh
cmRlZC8KZnJlZWQgYWZ0ZXIgaGVhcF9pbml0X2xhdGUoKSBpcyBjYWxsZWQg
YW5kIHN5c3RlbV9zdGF0ZSBzd2l0Y2hlZCB0bwpTWVNfU1RBVEVfYWN0aXZl
LiBUaGlzIG1lYW5zIHRoZSBwYWdlcyBhc3NvY2lhdGVkIHdpdGggdGhlIGJv
b3QgbW9kdWxlcwp3aWxsIG5vdCBnZXQgc2NydWJiZWQgYmVmb3JlIGdldHRp
bmcgcmUtcHVycG9zZWQuCgpJZiB0aGUgbWVtb3J5IGlzIGFzc2lnbmVkIHRv
IGFuIHVudHJ1c3RlZCBkb21VLCBpdCBtYXkgYmUgYWJsZSB0bwpyZXRyaWV2
ZSBzZWNyZXRzIGZyb20gdGhlIG1vZHVsZXMuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTM3MiAvIENWRS0yMDIxLTI4NjkzLgoKRml4ZXM6IDE3NzRlOWIxZGYy
NyAoInhlbi9hcm06IGludHJvZHVjZSBjcmVhdGVfZG9tVXMiKQpTaWduZWQt
b2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZp
ZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJu
ZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJl
bGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyB8
IDggKysrKysrLS0KIDEgZmlsZSBjaGFuZ2VkLCA2IGluc2VydGlvbnMoKyks
IDIgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3Nl
dHVwLmMgYi94ZW4vYXJjaC9hcm0vc2V0dXAuYwppbmRleCA0NDFlMGUxNmU5
ZjAuLjhhZmI3OGYyYzk4NSAxMDA2NDQKLS0tIGEveGVuL2FyY2gvYXJtL3Nl
dHVwLmMKKysrIGIveGVuL2FyY2gvYXJtL3NldHVwLmMKQEAgLTcyLDggKzcy
LDYgQEAgZG9taWRfdCBfX3JlYWRfbW9zdGx5IG1heF9pbml0X2RvbWlkOwog
CiBzdGF0aWMgX191c2VkIHZvaWQgaW5pdF9kb25lKHZvaWQpCiB7Ci0gICAg
ZGlzY2FyZF9pbml0aWFsX21vZHVsZXMoKTsKLQogICAgIC8qIE11c3QgYmUg
ZG9uZSBwYXN0IHNldHRpbmcgc3lzdGVtX3N0YXRlLiAqLwogICAgIHVucmVn
aXN0ZXJfaW5pdF92aXJ0dWFsX3JlZ2lvbigpOwogCkBAIC05OTAsNiArOTg4
LDEyIEBAIHZvaWQgX19pbml0IHN0YXJ0X3hlbih1bnNpZ25lZCBsb25nIGJv
b3RfcGh5c19vZmZzZXQsCiAgICAgaWYgKCBhY3BpX2Rpc2FibGVkICkKICAg
ICAgICAgY3JlYXRlX2RvbVVzKCk7CiAKKyAgICAvKgorICAgICAqIFRoaXMg
bmVlZHMgdG8gYmUgY2FsbGVkICoqYmVmb3JlKiogaGVhcF9pbml0X2xhdGUo
KSBzbyBtb2R1bGVzCisgICAgICogd2lsbCBiZSBzY3J1YmJlZCAodW5sZXNz
IHN1cHByZXNzZWQpLgorICAgICAqLworICAgIGRpc2NhcmRfaW5pdGlhbF9t
b2R1bGVzKCk7CisKICAgICBoZWFwX2luaXRfbGF0ZSgpOwogCiAgICAgaW5p
dF90cmFjZV9idWZzKCk7Ci0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa372/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Disposition: attachment;
 filename="xsa372/0001-xen-arm-Create-dom0less-domUs-earlier.patch"
Content-Transfer-Encoding: base64

RnJvbSBhMjRmZWY3M2VkYzc1YzdhZmY3ZGMyY2JjYzJlZTZkZjk4OTNhMmVi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBNb24sIDE3IE1heSAyMDIxIDE3
OjQ3OjEzICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHhlbi9hcm06IENy
ZWF0ZSBkb20wbGVzcyBkb21VcyBlYXJsaWVyCgpJbiBhIGZvbGxvdy11cCBw
YXRjaCB3ZSB3aWxsIG5lZWQgdG8gdW5hbGxvY2F0ZSB0aGUgYm9vdCBtb2R1
bGVzCmJlZm9yZSBoZWFwX2luaXRfbGF0ZSgpIGlzIGNhbGxlZC4KClRoZSBt
b2R1bGVzIHdpbGwgY29udGFpbiB0aGUgZG9tVXMga2VybmVsIGFuZCBpbml0
cmFtZnMuIFRoZXJlZm9yZSBYZW4Kd2lsbCBuZWVkIHRvIGNyZWF0ZSBleHRy
YSBkb21VcyAodXNlZCBieSBkb20wbGVzcykgYmVmb3JlIGhlYXBfaW5pdF9s
YXRlKCkuCgpUaGlzIGhhcyB0d28gY29uc2VxdWVuY2VzIG9uIGRvbTBsZXNz
OgogICAgMSkgRG9tYWlucyB3aWxsIG5vdCBiZSB1bnBhdXNlZCBhcyBzb29u
IGFzIHRoZXkgYXJlIGNyZWF0ZWQgYnV0CiAgICBvbmNlIGFsbCBoYXZlIGJl
ZW4gY3JlYXRlZC4gSG93ZXZlciwgWGVuIGRvZXNuJ3QgZ3VhcmFudGVlIGFu
IG9yZGVyCiAgICB0byB1bnBhdXNlLCBzbyB0aGlzIGlzIG5vdCBzb21ldGhp
bmcgb25lIGNvdWxkIHJlbHkgb24uCgogICAgMikgVGhlIG1lbW9yeSBhbGxv
Y2F0ZWQgZm9yIGEgZG9tVSB3aWxsIG5vdCBiZSBzY3J1YmJlZCBhbnltb3Jl
IHdoZW4gYW4KICAgIGFkbWluIHNlbGVjdCBib290c2NydWI9b24uIFRoaXMg
aXMgbm90IHNvbWV0aGluZyB3ZSBhZHZlcnRpc2VkLCBidXQgaWYKICAgIHRo
aXMgaXMgYSBjb25jZXJuIHdlIGNhbiBpbnRyb2R1Y2UgZWl0aGVyIGZvcmNl
IHNjcnViIGZvciBhbGwgZG9tVXMgb3IKICAgIGEgcGVyLWRvbWFpbiBmbGFn
IGluIHRoZSBEVC4gVGhlIGJlaGF2aW9yIGZvciBib290c2NydWI9b2ZmIGFu
ZAogICAgYm9vdHNjcnViPWlkbGUgKGRlZmF1bHQpIGhhcyBub3QgY2hhbmdl
ZC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzcyIC8gQ1ZFLTIwMjEtMjg2OTMu
CgpTaWduZWQtb2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24u
Y29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0v
ZG9tYWluX2J1aWxkLmMgfCAgNiArLS0tLS0KIHhlbi9hcmNoL2FybS9zZXR1
cC5jICAgICAgICB8IDE0ICsrKysrKystLS0tLS0tCiB4ZW4vaW5jbHVkZS9h
c20tYXJtL3NldHVwLmggfCAgMiArLQogMyBmaWxlcyBjaGFuZ2VkLCA5IGlu
c2VydGlvbnMoKyksIDEzIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL2FybS9kb21haW5fYnVpbGQuYyBiL3hlbi9hcmNoL2FybS9kb21h
aW5fYnVpbGQuYwppbmRleCAyODI0MTZlNzRkYTQuLjZjODZkNTI3ODEwZiAx
MDA2NDQKLS0tIGEveGVuL2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jCisrKyBi
L3hlbi9hcmNoL2FybS9kb21haW5fYnVpbGQuYwpAQCAtMjUyMSw4ICsyNTIx
LDYgQEAgdm9pZCBfX2luaXQgY3JlYXRlX2RvbVVzKHZvaWQpCiAKICAgICAg
ICAgaWYgKCBjb25zdHJ1Y3RfZG9tVShkLCBub2RlKSAhPSAwICkKICAgICAg
ICAgICAgIHBhbmljKCJDb3VsZCBub3Qgc2V0IHVwIGRvbWFpbiAlc1xuIiwg
ZHRfbm9kZV9uYW1lKG5vZGUpKTsKLQotICAgICAgICBkb21haW5fdW5wYXVz
ZV9ieV9zeXN0ZW1jb250cm9sbGVyKGQpOwogICAgIH0KIH0KIApAQCAtMjU4
NCw3ICsyNTgyLDcgQEAgc3RhdGljIGludCBfX2luaXQgY29uc3RydWN0X2Rv
bTAoc3RydWN0IGRvbWFpbiAqZCkKICAgICByZXR1cm4gY29uc3RydWN0X2Rv
bWFpbihkLCAma2luZm8pOwogfQogCi1zdHJ1Y3QgZG9tYWluKiBfX2luaXQg
Y3JlYXRlX2RvbTAodm9pZCkKK3ZvaWQgX19pbml0IGNyZWF0ZV9kb20wKHZv
aWQpCiB7CiAgICAgc3RydWN0IGRvbWFpbiAqZG9tMDsKICAgICBzdHJ1Y3Qg
eGVuX2RvbWN0bF9jcmVhdGVkb21haW4gZG9tMF9jZmcgPSB7CkBAIC0yNjE1
LDggKzI2MTMsNiBAQCBzdHJ1Y3QgZG9tYWluKiBfX2luaXQgY3JlYXRlX2Rv
bTAodm9pZCkKIAogICAgIGlmICggY29uc3RydWN0X2RvbTAoZG9tMCkgIT0g
MCkKICAgICAgICAgcGFuaWMoIkNvdWxkIG5vdCBzZXQgdXAgRE9NMCBndWVz
dCBPU1xuIik7Ci0KLSAgICByZXR1cm4gZG9tMDsKIH0KIAogLyoKZGlmZiAt
LWdpdCBhL3hlbi9hcmNoL2FybS9zZXR1cC5jIGIveGVuL2FyY2gvYXJtL3Nl
dHVwLmMKaW5kZXggMDBhYWQxYzE5NGI5Li5lMTc1MzJjMTMyY2YgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL2FybS9zZXR1cC5jCisrKyBiL3hlbi9hcmNoL2Fy
bS9zZXR1cC5jCkBAIC04MzYsNyArODM2LDcgQEAgdm9pZCBfX2luaXQgc3Rh
cnRfeGVuKHVuc2lnbmVkIGxvbmcgYm9vdF9waHlzX29mZnNldCwKICAgICBp
bnQgY3B1cywgaTsKICAgICBjb25zdCBjaGFyICpjbWRsaW5lOwogICAgIHN0
cnVjdCBib290bW9kdWxlICp4ZW5fYm9vdG1vZHVsZTsKLSAgICBzdHJ1Y3Qg
ZG9tYWluICpkb20wID0gTlVMTDsKKyAgICBzdHJ1Y3QgZG9tYWluICpkOwog
ICAgIGludCByYzsKIAogICAgIGRjYWNoZV9saW5lX2J5dGVzID0gcmVhZF9k
Y2FjaGVfbGluZV9ieXRlcygpOwpAQCAtOTkyLDEwICs5OTIsMTMgQEAgdm9p
ZCBfX2luaXQgc3RhcnRfeGVuKHVuc2lnbmVkIGxvbmcgYm9vdF9waHlzX29m
ZnNldCwKIAogICAgIC8qIENyZWF0ZSBpbml0aWFsIGRvbWFpbiAwLiAqLwog
ICAgIGlmICggIWlzX2RvbTBsZXNzX21vZGUoKSApCi0gICAgICAgIGRvbTAg
PSBjcmVhdGVfZG9tMCgpOworICAgICAgICBjcmVhdGVfZG9tMCgpOwogICAg
IGVsc2UKICAgICAgICAgcHJpbnRrKFhFTkxPR19JTkZPICJYZW4gZG9tMGxl
c3MgbW9kZSBkZXRlY3RlZFxuIik7CiAKKyAgICBpZiAoIGFjcGlfZGlzYWJs
ZWQgKQorICAgICAgICBjcmVhdGVfZG9tVXMoKTsKKwogICAgIGhlYXBfaW5p
dF9sYXRlKCk7CiAKICAgICBpbml0X3RyYWNlX2J1ZnMoKTsKQEAgLTEwMDks
MTEgKzEwMTIsOCBAQCB2b2lkIF9faW5pdCBzdGFydF94ZW4odW5zaWduZWQg
bG9uZyBib290X3BoeXNfb2Zmc2V0LAogCiAgICAgc3lzdGVtX3N0YXRlID0g
U1lTX1NUQVRFX2FjdGl2ZTsKIAotICAgIGlmICggYWNwaV9kaXNhYmxlZCAp
Ci0gICAgICAgIGNyZWF0ZV9kb21VcygpOwotCi0gICAgaWYgKCBkb20wICkK
LSAgICAgICAgZG9tYWluX3VucGF1c2VfYnlfc3lzdGVtY29udHJvbGxlcihk
b20wKTsKKyAgICBmb3JfZWFjaF9kb21haW4oIGQgKQorICAgICAgICBkb21h
aW5fdW5wYXVzZV9ieV9zeXN0ZW1jb250cm9sbGVyKGQpOwogCiAgICAgLyog
U3dpdGNoIG9uIHRvIHRoZSBkeW5hbWljYWxseSBhbGxvY2F0ZWQgc3RhY2sg
Zm9yIHRoZSBpZGxlIHZjcHUKICAgICAgKiBzaW5jZSB0aGUgc3RhdGljIG9u
ZSB3ZSdyZSBydW5uaW5nIG9uIGlzIGFib3V0IHRvIGJlIGZyZWVkLiAqLwpk
aWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLWFybS9zZXR1cC5oIGIveGVu
L2luY2x1ZGUvYXNtLWFybS9zZXR1cC5oCmluZGV4IDUyODMyNDQwMTUxMS4u
YzRiNmFmNjAyOTk1IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20tYXJt
L3NldHVwLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9zZXR1cC5oCkBA
IC05NCw3ICs5NCw3IEBAIHZvaWQgYWNwaV9jcmVhdGVfZWZpX21tYXBfdGFi
bGUoc3RydWN0IGRvbWFpbiAqZCwKIGludCBhY3BpX21ha2VfZWZpX25vZGVz
KHZvaWQgKmZkdCwgc3RydWN0IG1lbWJhbmsgdGJsX2FkZFtdKTsKIAogdm9p
ZCBjcmVhdGVfZG9tVXModm9pZCk7Ci1zdHJ1Y3QgZG9tYWluKiBjcmVhdGVf
ZG9tMCh2b2lkKTsKK3ZvaWQgY3JlYXRlX2RvbTAodm9pZCk7CiAKIHZvaWQg
ZGlzY2FyZF9pbml0aWFsX21vZHVsZXModm9pZCk7CiB2b2lkIGZ3X3VucmVz
ZXJ2ZWRfcmVnaW9ucyhwYWRkcl90IHMsIHBhZGRyX3QgZSwKLS0gCjIuMTcu
MQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa372/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Disposition: attachment;
 filename="xsa372/0002-xen-arm-Boot-modules-should-always-be-scrubbed-if-bo.patch"
Content-Transfer-Encoding: base64

RnJvbSA3NTdhNDYxZWQ5MDYyNDhhNTZjYzFiYjkyMmExYmEwN2E3ZWUxMTU0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBTYXQsIDE3IEFwciAyMDIxIDE3
OjM4OjI4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHhlbi9hcm06IEJv
b3QgbW9kdWxlcyBzaG91bGQgYWx3YXlzIGJlIHNjcnViYmVkIGlmCiBib290
c2NydWI9e29uLCBpZGxlfQoKVGhlIGZ1bmN0aW9uIHRvIGluaXRpYWxpemUg
dGhlIHBhZ2VzIChzZWUgaW5pdF9oZWFwX3BhZ2VzKCkpIHdpbGwgcmVxdWVz
dApzY3J1YiB3aGVuIHRoZSBhZG1pbiByZXF1ZXN0IGlkbGUgYm9vdHNjcnVi
IChkZWZhdWx0KSBhbmQgc3RhdGUgPT0KU1lTX1NUQVRFX2FjdGl2ZS4gV2hl
biBib290c2NydWI9b24sIFhlbiB3aWxsIHNjcnViIGFueSBmcmVlIHBhZ2Vz
IGluCmhlYXBfaW5pdF9sYXRlKCkuCgpDdXJyZW50bHksIHRoZSBib290IG1v
ZHVsZXMgKGUuZy4ga2VybmVscywgaW5pdHJhbWZzKSB3aWxsIGJlIGRpc2Nh
cmRlZC8KZnJlZWQgYWZ0ZXIgaGVhcF9pbml0X2xhdGUoKSBpcyBjYWxsZWQg
YW5kIHN5c3RlbV9zdGF0ZSBzd2l0Y2hlZCB0bwpTWVNfU1RBVEVfYWN0aXZl
LiBUaGlzIG1lYW5zIHRoZSBwYWdlcyBhc3NvY2lhdGVkIHdpdGggdGhlIGJv
b3QgbW9kdWxlcwp3aWxsIG5vdCBnZXQgc2NydWJiZWQgYmVmb3JlIGdldHRp
bmcgcmUtcHVycG9zZWQuCgpJZiB0aGUgbWVtb3J5IGlzIGFzc2lnbmVkIHRv
IGFuIHVudHJ1c3RlZCBkb21VLCBpdCBtYXkgYmUgYWJsZSB0bwpyZXRyaWV2
ZSBzZWNyZXRzIGZyb20gdGhlIG1vZHVsZXMuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTM3MiAvIENWRS0yMDIxLTI4NjkzLgoKRml4ZXM6IDE3NzRlOWIxZGYy
NyAoInhlbi9hcm06IGludHJvZHVjZSBjcmVhdGVfZG9tVXMiKQpTaWduZWQt
b2ZmLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZp
ZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJu
ZWwub3JnPgpUZXN0ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJl
bGxpbmlAa2VybmVsLm9yZz4KLS0tCiB4ZW4vYXJjaC9hcm0vc2V0dXAuYyB8
IDggKysrKysrLS0KIDEgZmlsZSBjaGFuZ2VkLCA2IGluc2VydGlvbnMoKyks
IDIgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3Nl
dHVwLmMgYi94ZW4vYXJjaC9hcm0vc2V0dXAuYwppbmRleCBlMTc1MzJjMTMy
Y2YuLjYzYTkwOGUzMjVlZSAxMDA2NDQKLS0tIGEveGVuL2FyY2gvYXJtL3Nl
dHVwLmMKKysrIGIveGVuL2FyY2gvYXJtL3NldHVwLmMKQEAgLTcxLDggKzcx
LDYgQEAgZG9taWRfdCBfX3JlYWRfbW9zdGx5IG1heF9pbml0X2RvbWlkOwog
CiBzdGF0aWMgX191c2VkIHZvaWQgaW5pdF9kb25lKHZvaWQpCiB7Ci0gICAg
ZGlzY2FyZF9pbml0aWFsX21vZHVsZXMoKTsKLQogICAgIC8qIE11c3QgYmUg
ZG9uZSBwYXN0IHNldHRpbmcgc3lzdGVtX3N0YXRlLiAqLwogICAgIHVucmVn
aXN0ZXJfaW5pdF92aXJ0dWFsX3JlZ2lvbigpOwogCkBAIC05OTksNiArOTk3
LDEyIEBAIHZvaWQgX19pbml0IHN0YXJ0X3hlbih1bnNpZ25lZCBsb25nIGJv
b3RfcGh5c19vZmZzZXQsCiAgICAgaWYgKCBhY3BpX2Rpc2FibGVkICkKICAg
ICAgICAgY3JlYXRlX2RvbVVzKCk7CiAKKyAgICAvKgorICAgICAqIFRoaXMg
bmVlZHMgdG8gYmUgY2FsbGVkICoqYmVmb3JlKiogaGVhcF9pbml0X2xhdGUo
KSBzbyBtb2R1bGVzCisgICAgICogd2lsbCBiZSBzY3J1YmJlZCAodW5sZXNz
IHN1cHByZXNzZWQpLgorICAgICAqLworICAgIGRpc2NhcmRfaW5pdGlhbF9t
b2R1bGVzKCk7CisKICAgICBoZWFwX2luaXRfbGF0ZSgpOwogCiAgICAgaW5p
dF90cmFjZV9idWZzKCk7Ci0tIAoyLjE3LjEKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:04:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:04:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138612.256556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9i-0000HB-7D; Tue, 08 Jun 2021 17:04:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138612.256556; Tue, 08 Jun 2021 17:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9i-0000H2-2k; Tue, 08 Jun 2021 17:04:46 +0000
Received: by outflank-mailman (input) for mailman id 138612;
 Tue, 08 Jun 2021 17:04:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbVg=LC=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lqf9h-0007tO-D5
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 17:04:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd84a956-bc60-4155-82fb-83c1a5a14b45;
 Tue, 08 Jun 2021 17:04:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9S-0004ge-29; Tue, 08 Jun 2021 17:04:30 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9S-0004tH-13; Tue, 08 Jun 2021 17:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd84a956-bc60-4155-82fb-83c1a5a14b45
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=Ii1NiQUN0sqcBtUpODfucz8MN9ngRgodsEM4sMsoxgo=; b=AYW4NCBzwch4WTdQyjRQjS5biA
	z3hiKC/BTFd1NDPrmKwEWP5GhyrkCATB8fpZ3GqoJvGjKbVT2xqaVKWkjxtIsMu564xCH8M/dGOs4
	V75qrgF2D+/cQm80M1ByYNpW2bwHE5dnPbpOnv00MmiND5r5QE4e9zXuo/4W7/0Kpf8M=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 374 v2 (CVE-2021-28691) - Guest triggered
 use-after-free in Linux xen-netback
Message-Id: <E1lqf9S-0004tH-13@xenbits.xenproject.org>
Date: Tue, 08 Jun 2021 17:04:30 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28691 / XSA-374
                               version 2

          Guest triggered use-after-free in Linux xen-netback

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

A malicious or buggy network PV frontend can force Linux netback to
disable the interface and terminate the receive kernel thread
associated with queue 0 in response to the frontend sending a
malformed packet.

Such kernel thread termination will lead to a use-after-free in Linux
netback when the backend is destroyed, as the kernel thread associated
with queue 0 will have already exited and thus the call to
kthread_stop will be performed against a stale pointer.

IMPACT
======

A malicious or buggy frontend driver can trigger a dom0 crash.
Privilege escalation and information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

Systems using Linux version 5.5 or newer are vulnerable.

MITIGATION
==========

On x86, running only HVM guests with emulated network cards avoids the
issue.  There is, however, no option in the upstream toolstack to offer
only emulated network cards to guests.

CREDITS
=======

This issue was discovered by Michael Brown of iPXE and diagnosed by
Olivier Benjamin, Michael Kurth and Martin Mazein of AWS.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa374-linux.patch     Linux 5.5.0 - 5.12.2

$ sha256sum xsa374*
156cee65022359a5901cce97714d2abb16fef786246b1c4bf509083d21e085d6  xsa374-linux.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Deployment of the mitigation to disable PV network interfaces is NOT
permitted (except where all the affected systems and VMs are
administered and used only by organisations which are members of the
Xen Project Security Issues Predisclosure List).  Specifically,
deployment on public cloud systems is NOT permitted.

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmC/oxIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZigoIAKNYimzTYl6VQYaqgwMdNzqXCF/PdlQF/tf8PSwm
5VP0ZPbLq6Zn4HOgMBtBzs/GCFtrIWsQGnZji611dkaAh21N1YErXW5jFYMnf1DI
rruCXE1GuL5B4sFvWw7CnMXax6vYe0q5KPoGmyZRV77aT5T+gNMONlGl6raw7/Ne
UAtAv4JDSR5Nc53X0HNK7tNU9tdr4VaLqEKWs+C0W+azOFNGvrTeNDVjBiLqDZbA
st62i3PIFTXu+XzbjZNdM/RMpVVxFSkfdWn53RDVJ2JaFBMxrcVs75aVo3Nfr34Z
Iho+eTPDywP9+4zl/FoModMYHg4rTMHf+jmbi3M/aCOal2U=
=1Dhy
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa374-linux.patch"
Content-Disposition: attachment; filename="xsa374-linux.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm5lIDxyb2dlci5wYXVAY2l0cml4LmNvbT4K
U3ViamVjdDogeGVuLW5ldGJhY2s6IHRha2UgYSByZWZlcmVuY2UgdG8gdGhl
IFJYIHRhc2sgdGhyZWFkCgpEbyB0aGlzIGluIG9yZGVyIHRvIHByZXZlbnQg
dGhlIHRhc2sgZnJvbSBiZWluZyBmcmVlZCBpZiB0aGUgdGhyZWFkCnJldHVy
bnMgKHdoaWNoIGNhbiBiZSB0cmlnZ2VyZWQgYnkgdGhlIGZyb250ZW5kKSBi
ZWZvcmUgdGhlIGNhbGwgdG8Ka3RocmVhZF9zdG9wIGRvbmUgYXMgcGFydCBv
ZiB0aGUgYmFja2VuZCB0ZWFyIGRvd24uIE5vdCB0YWtpbmcgdGhlCnJlZmVy
ZW5jZSB3aWxsIGxlYWQgdG8gYSB1c2UtYWZ0ZXItZnJlZSBpbiB0aGF0IHNj
ZW5hcmlvLiBTdWNoCnJlZmVyZW5jZSB3YXMgdGFrZW4gYmVmb3JlIGJ1dCBk
cm9wcGVkIGFzIHBhcnQgb2YgdGhlIHJld29yayBkb25lIGluCjJhYzA2MWNl
OTdmNC4KClJlaW50cm9kdWNlIHRoZSByZWZlcmVuY2UgdGFraW5nIGFuZCBh
ZGQgYSBjb21tZW50IHRoaXMgdGltZQpleHBsYWluaW5nIHdoeSBpdCdzIG5l
ZWRlZC4KClRoaXMgaXMgWFNBLTM3NCAvIENWRS0yMDIxLTI4NjkxLgoKUmVw
b3J0ZWQtYnk6IE1pY2hhZWwgQnJvd24gPG1jYjMwQGlweGUub3JnPgpGaXhl
czogMmFjMDYxY2U5N2Y0ICgneGVuL25ldGJhY2s6IGNsZWFudXAgaW5pdCBh
bmQgZGVpbml0IGNvZGUnKQpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9u
bsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KQ2M6IHN0YWJsZUB2Z2VyLmtl
cm5lbC5vcmcKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0Bz
dXNlLmNvbT4KCmRpZmYgLS1naXQgYS9kcml2ZXJzL25ldC94ZW4tbmV0YmFj
ay9pbnRlcmZhY2UuYyBiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL2ludGVy
ZmFjZS5jCmluZGV4IDE5M2I3MjNmZTNiZC4uYzU4OTk2YzFlMjMwIDEwMDY0
NAotLS0gYS9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9pbnRlcmZhY2UuYwor
KysgYi9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9pbnRlcmZhY2UuYwpAQCAt
Njg0LDYgKzY4NCw3IEBAIHN0YXRpYyB2b2lkIHhlbnZpZl9kaXNjb25uZWN0
X3F1ZXVlKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQogewogCWlmIChx
dWV1ZS0+dGFzaykgewogCQlrdGhyZWFkX3N0b3AocXVldWUtPnRhc2spOwor
CQlwdXRfdGFza19zdHJ1Y3QocXVldWUtPnRhc2spOwogCQlxdWV1ZS0+dGFz
ayA9IE5VTEw7CiAJfQogCkBAIC03NDUsNiArNzQ2LDExIEBAIGludCB4ZW52
aWZfY29ubmVjdF9kYXRhKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlLAog
CWlmIChJU19FUlIodGFzaykpCiAJCWdvdG8ga3RocmVhZF9lcnI7CiAJcXVl
dWUtPnRhc2sgPSB0YXNrOworCS8qCisJICogVGFrZSBhIHJlZmVyZW5jZSB0
byB0aGUgdGFzayBpbiBvcmRlciB0byBwcmV2ZW50IGl0IGZyb20gYmVpbmcg
ZnJlZWQKKwkgKiBpZiB0aGUgdGhyZWFkIGZ1bmN0aW9uIHJldHVybnMgYmVm
b3JlIGt0aHJlYWRfc3RvcCBpcyBjYWxsZWQuCisJICovCisJZ2V0X3Rhc2tf
c3RydWN0KHRhc2spOwogCiAJdGFzayA9IGt0aHJlYWRfcnVuKHhlbnZpZl9k
ZWFsbG9jX2t0aHJlYWQsIHF1ZXVlLAogCQkJICAgIiVzLWRlYWxsb2MiLCBx
dWV1ZS0+bmFtZSk7Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:05:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:05:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138621.256637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9x-0002dy-T6; Tue, 08 Jun 2021 17:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138621.256637; Tue, 08 Jun 2021 17:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqf9x-0002di-Mb; Tue, 08 Jun 2021 17:05:01 +0000
Received: by outflank-mailman (input) for mailman id 138621;
 Tue, 08 Jun 2021 17:05:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbVg=LC=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lqf9w-0007tU-Oo
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 17:05:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9afe90b8-9ae7-4e56-b6a9-e9a73ab8f493;
 Tue, 08 Jun 2021 17:04:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9S-0004gw-S9; Tue, 08 Jun 2021 17:04:30 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9S-0004uS-Qx; Tue, 08 Jun 2021 17:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9afe90b8-9ae7-4e56-b6a9-e9a73ab8f493
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=8Svh3GdkGDoE8YIWXwufr7G76Pf6eb1XITglGYjIIpc=; b=hbPJ0t7k6us6QP8PYVvAkM8aHN
	R6BJrks7gKLjeBBzIrKWZnATSOkXyL6+JbW7+UUl+QLJyli2WCpiwJ2K25iOzBdkWXvTyoxG9KOEE
	zLGAiTSWUMD3epyuxNfflvMy+1sZDOLpbFrKzXhohDTs/r9z0rJKZ4DgPUo+QbsFEg8w=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 375 v2 (CVE-2021-0089) - Speculative Code
 Store Bypass
Message-Id: <E1lqf9S-0004uS-Qx@xenbits.xenproject.org>
Date: Tue, 08 Jun 2021 17:04:30 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-0089 / XSA-375
                              version 2

                    Speculative Code Store Bypass

UPDATES IN VERSION 2
====================

New 4.12 backport (also targeting 4.11), addressing a build issue.

Discuss the need for SPECULATIVE_HARDEN_BRANCH in Resolution.

Provide Arm information links.

Public release.

ISSUE DESCRIPTION
=================

Modern superscalar processors may employ sophisticated decoding and
caching of the instruction stream to improve performance.  However, a
consequence is that self-modifying code updates may not take effect
instantly.

Whatever the architectural guarantees, some CPUs have microarchitectural
behaviour whereby the stale instruction stream may be speculatively
decoded and executed.

Speculation of this form can suffer from type confusion in registers,
and potentially leak data.

For more details, see:
  https://www.vusec.net/projects/fpvi-scsb
  https://www.amd.com/en/corporate-product-security-bulletin-amd-sb-1003
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/speculative-code-store-bypass.html
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/floating-point-value-injection.html
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#scsb
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#fvpi

IMPACT
======

An attacker might be able to infer the contents of arbitrary host
memory, including memory assigned to other guests.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Whether a CPU is potentially vulnerable depends on its
microarchitecture.  Consult your hardware vendor.

Xen running on ARM does not have runtime self-modifying code, so is
believed to be not vulnerable, irrespective of any hardware
susceptibility.

Xen running on x86 does have runtime self-modifying code as part of
emulation, and is believed to be potentially vulnerable.

Xen is not vulnerable if retpoline or lfence mitigations for Spectre v2
protection are active.  Protections depend on compiler support (as
indicated by INDIRECT_THUNK), and a runtime setting (BTI-Thunk):

  # xl dmesg | grep -e INDIRECT_THUNK -e BTI-Thunk
  (XEN)   Compiled-in support: INDIRECT_THUNK SHADOW_PAGING
  (XEN)   Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: IBRS+ SSBD-, Other: SRB_LOCK+ IBPB L1D_FLUSH VERW BRANCH_HARDEN

A BTI-Thunk setting of either RETPOLINE or LFENCE prevents the
vulnerability.

MITIGATION
==========

If Spectre v2 support is compiled in, but JMP is used by default,
RETPOLINE or LFENCE can be selected with `spec-ctrl=bti-thunk=retpoline`
or `spec-ctrl=bti-thunk=lfence`.

CREDITS
=======

This issue was discovered by Enrico Barberis, Hany Ragab, Herbert Bos,
and Cristiano Giuffrida from the VUSec group at VU Amsterdam.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.  Note that
in 4.13 and newer the patch will only take effect when the
SPECULATIVE_HARDEN_BRANCH hypervisor config option is enabled.  4.12 and
older do not have such an option, and the change will take effect
unconditionally.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa375.patch           xen-unstable - 4.14.x
xsa375-4.13.patch      Xen 4.13.x
xsa375-4.12.patch      Xen 4.12.x - 4.11.x

$ sha256sum xsa375*
367d5bb97c942b9f744a57645df87148772c0879de6f351f36f88147f3958e83  xsa375.meta
301ef80da837bc2af36a0958f35f42f4d267b20ec6e91ae5faf2616167ef49f8  xsa375.patch
dc024daf17242b6477a16a349754a94b2b25cbbfd8c14475741b778710a44c93  xsa375-4.12.patch
f70511d843c6617b932da11ffe857e2e3aa3834ccff07d4d0beba90d63a3dae2  xsa375-4.13.patch
$

NOTE CONCERNING CVE-2021-0086
=============================

Floating Point Value Injection (FPVI) was discovered and disclosed in
the same research as SCSB.  Xen on x86 does in some cases emulate
floating point operations with guest provided inputs, but does not have
subsequent control flow dependent on results, transient or otherwise, of
the operation.

Therefore, we believe Xen is not vulnerable to FPVI, irrespective of any
hardware susceptibility.

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmC/oxIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ0+QH/190a0VhQlorqC7eY2kt+l09S5chHL4AqfAxhBWT
pxbgNcNiuUXhGRQEfxEV/CRBGnUDy5TNwtyHlJqSYm89hqVv3Dh5IbVcRK0DGV7R
x9YLlESaKx97e/SaSDHZ3XtwSXa/es+O6Vmn4X67UZI7jpv8EU89fxa3Fv1fuNhv
Ud8BGW2WXJ1SEW3XIT7/gz/xza1fFtv/rIew+jpnlsu6qSrlE/3pZHLOqI5Wa2n9
LklxwoGmB9JyIV8Me0tOCqiLKEOTGnS1JZiug07N2TmlxjiHj76KrVysTDqRdkFD
R/C8wfmwlOSCddUPnj6uB81fH7C7I02yVTefpYwIBmI7ldc=
=dP+p
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa375.meta"
Content-Disposition: attachment; filename="xsa375.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIsCiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICJiMWU0NmJjMzY5YmI0OTBiNzIxYzc3ZjE1ZDI1ODNiYmY0
NjYxNTJkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIsCiAgICAgICAgICAgIDM3MwogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzc1LTQuMTMucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjU5ODQ5MDViMjYzOGRmODdhMDI2MmQxZWU5
MWYwYTZlMTRhODZkZjYiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAg
ICAgICAgIDM3MiwKICAgICAgICAgICAgMzczCiAgICAgICAgICBdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNzUtNC4xMy5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMjg0MTMyOTM4OTAwY2U4YzNi
MTFiYWJmNzI1NWY1YzZkYmIyMTcxNiIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM3
NS00LjEzLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGYwYjJkNDkz
NzY4NjVkNDk2ODBmMDZjNTJiNDUxZmFiY2UzYmI1IiwKICAgICAgICAgICJQ
cmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3Mwog
ICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAg
ICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyODBkNDcy
ZjRmY2EwNzBhMTAzNzdlMzE4ZDkwY2FiZmMyNTQwODEwIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3
MwogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImFh
NzdhY2MyODA5OGQwNDk0NWFmOTk4ZjNmYzBkYmQzNzU5YjViNDEiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM3MiwKICAgICAgICAg
ICAgMzczCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNzUucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa375.patch"
Content-Disposition: attachment; filename="xsa375.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDg4ODk1MDlkMmEuLjExNDY3YTFlM2EgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTEzOCw2ICsxMzgs
OCBAQCBzdGF0aWMgaW9fZW11bF9zdHViX3QgKmlvX2VtdWxfc3R1Yl9zZXR1
cChzdHJ1Y3QgcHJpdl9vcF9jdHh0ICpjdHh0LCB1OCBvcGNvZGUsCiAgICAg
LyogUnVudGltZSBjb25maXJtYXRpb24gdGhhdCB3ZSBoYXZlbid0IGNsb2Ji
ZXJlZCBhbiBhZGphY2VudCBzdHViLiAqLwogICAgIEJVR19PTihTVFVCX0JV
Rl9TSVpFIC8gMiA8IChwIC0gY3R4dC0+aW9fZW11bF9zdHViKSk7CiAKKyAg
ICBibG9ja19zcGVjdWxhdGlvbigpOyAvKiBTQ1NCICovCisKICAgICAvKiBI
YW5keSBmdW5jdGlvbi10eXBlZCBwb2ludGVyIHRvIHRoZSBzdHViLiAqLwog
ICAgIHJldHVybiAodm9pZCAqKXN0dWJfdmE7CiAKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIveGVuL2Fy
Y2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXggYzI1ZDg4
ZDBkOC4uZjQyZmYyYTgzNyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTEyNTcsNiArMTI1Nyw3IEBA
IHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQzMl90IGVj
LCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3QsIGNvbnN0
cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAgICAgc3R1
Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tlbikgeyAu
cmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGluZSA9IF9f
TElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hpbmcgY29z
dCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NTQiAqLyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgYXNt
IHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0dWJdXG5c
dCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5M
cmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlvbiAuZml4
dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa375-4.12.patch"
Content-Disposition: attachment; filename="xsa375-4.12.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGFzbSB2b2xhdGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiAp
OyAvKiBTQ1NCICovCisKICAgICAvKiBIYW5keSBmdW5jdGlvbi10eXBlZCBw
b2ludGVyIHRvIHRoZSBzdHViLiAqLwogICAgIHJldHVybiAodm9pZCAqKXN0
dWJfdmE7CiB9CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2X2VtdWxh
dGUveDg2X2VtdWxhdGUuYyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94
ODZfZW11bGF0ZS5jCmluZGV4IGJiYTZkZDAxODcuLmNkMTIzNDkyYTYgMTAw
NjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCkBAIC0xMDkzLDYgKzEwOTMsNyBAQCBzdGF0aWMgaW5saW5lIGludCBt
a2VjKHVpbnQ4X3QgZSwgaW50MzJfdCBlYywgLi4uKQogIyBkZWZpbmUgaW52
b2tlX3N0dWIocHJlLCBwb3N0LCBjb25zdHJhaW50cy4uLikgZG8geyAgICAg
ICAgICAgICAgICAgICAgXAogICAgIHN0dWJfZXhuLmluZm8gPSAodW5pb24g
c3R1Yl9leGNlcHRpb25fdG9rZW4pIHsgLnJhdyA9IH4wIH07ICAgICAgICAg
XAogICAgIHN0dWJfZXhuLmxpbmUgPSBfX0xJTkVfXzsgLyogVXRpbGl0eSBv
dXR3ZWlnaHMgbGl2ZXBhdGNoaW5nIGNvc3QgKi8gXAorICAgIGFzbSB2b2xh
dGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiApOyAvKiBTQ1NCICovICAg
ICAgICAgICAgICAgICAgXAogICAgIGFzbSB2b2xhdGlsZSAoIHByZSAiXG5c
dElORElSRUNUX0NBTEwgJVtzdHViXVxuXHQiIHBvc3QgIlxuIiAgICAgICAg
XAogICAgICAgICAgICAgICAgICAgICIuTHJldCU9OlxuXHQiICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAg
ICAgICAgICIucHVzaHNlY3Rpb24gLmZpeHVwLFwiYXhcIlxuIiAgICAgICAg
ICAgICAgICAgICAgICAgXAo=

--=separator
Content-Type: application/octet-stream; name="xsa375-4.13.patch"
Content-Disposition: attachment; filename="xsa375-4.13.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGJsb2NrX3NwZWN1bGF0aW9uKCk7IC8qIFNDU0IgKi8KKwogICAg
IC8qIEhhbmR5IGZ1bmN0aW9uLXR5cGVkIHBvaW50ZXIgdG8gdGhlIHN0dWIu
ICovCiAgICAgcmV0dXJuICh2b2lkICopc3R1Yl92YTsKIH0KZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIv
eGVuL2FyY2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXgg
YmJhNmRkMDE4Ny4uY2QxMjM0OTJhNiAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTExNzIsNiArMTE3
Miw3IEBAIHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQz
Ml90IGVjLCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3Qs
IGNvbnN0cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgc3R1Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tl
bikgeyAucmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGlu
ZSA9IF9fTElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hp
bmcgY29zdCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NT
QiAqLyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgYXNtIHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0
dWJdXG5cdCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAg
ICAgIi5McmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlv
biAuZml4dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:13:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:13:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138711.256747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfHt-0002Ni-Tl; Tue, 08 Jun 2021 17:13:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138711.256747; Tue, 08 Jun 2021 17:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfHt-0002NW-PX; Tue, 08 Jun 2021 17:13:13 +0000
Received: by outflank-mailman (input) for mailman id 138711;
 Tue, 08 Jun 2021 17:13:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbVg=LC=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lqfA6-0007tO-E9
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 17:05:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30c86ed4-349b-4cfd-81aa-9080b02df5f3;
 Tue, 08 Jun 2021 17:04:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9T-0004hB-KV; Tue, 08 Jun 2021 17:04:31 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9T-0004vc-JX; Tue, 08 Jun 2021 17:04:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30c86ed4-349b-4cfd-81aa-9080b02df5f3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=YQPfBSO74QvjUcDBr5ntUsPt+VI0GpRiGD3hrWKvTk0=; b=iQOQHvopS5dPP5RekZ3mQlKPNl
	uu1nhvctPYmkry7eCRLjsiHeB+k4O1jZ/Hh3DLEH19sTRbbm+dFCg3ujsKXGD5bjSqizrgVHwPzFO
	X52P3MGlYhlZySwrQLuv8IjYlz6RccNNzji1Th+24/3HF4q1HokQX5Eh97V774abJCwg=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 377 v2 (CVE-2021-28690) - x86: TSX Async
 Abort protections not restored after S3
Message-Id: <E1lqf9T-0004vc-JX@xenbits.xenproject.org>
Date: Tue, 08 Jun 2021 17:04:31 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28690 / XSA-377
                               version 2

        x86: TSX Async Abort protections not restored after S3

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

This issue relates to the TSX Async Abort speculative security vulnerability.
Please see https://xenbits.xen.org/xsa/advisory-305.html for details.

Mitigating TAA by disabling TSX (the default and preferred option) requires
selecting a non-default setting in MSR_TSX_CTRL.  This setting isn't restored
after S3 suspend.

IMPACT
======

After using S3 suspend at least once, CPU0 remains vulnerable to TAA.

This is an information leak.  For full details of the impact, see
XSA-305.

VULNERABLE SYSTEMS
==================

See XSA-305 for details of susceptibility to TAA.

Only systems which are susceptible to TAA and have the XSA-305 fix are
vulnerable.  Only systems which support S3 suspend/resume are vulnerable.

The vulnerability is only exposed if S3 suspend/resume is used.

MITIGATION
==========

Not using S3 suspend/resume avoids the vulnerability.

CREDITS
=======

This issue was discovered by Andrew Cooper of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa377.patch           xen-unstable - Xen 4.13.x
xsa377-4.12.patch      Xen 4.12.x
xsa377-4.11.patch      Xen 4.11.x

$ sha256sum xsa377*
532cb030f97d72e8e534ad97182cd5e3aa0efeef405e255bb49649b4f0dd9947  xsa377.meta
21a30dbf80f6e78057cc7e785c8fda475d5a8a0b6b9442af3bd8ca31dd69becf  xsa377.patch
3279317d56e7b8d0a2b0152b64b4c577381b8b01fa0a1a21ec6f855bb964278a  xsa377-4.11.patch
65f61f1cb7bb0e068fd32e40755b9a9aae464d15ccd42c94dae68e495c5a45e0  xsa377-4.12.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmC/oxIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZZ0wH/AyYmZO221SvMaSa1kGaV9+tATBWtxKEmUr2I+/Y
jOHJ4Ydw2RarJtZ6reYJ+J0qlTdgI65ceo87VEm1bm+LyvxhlLRmkBfavdTg66aX
VU6uPGqJ9HMUY4rwN7aUgsc/qhquMZQYSWd5A/QknhNHlOtXhX0bnaIqgXoAroi7
PRVs3sawkEizIn1Rqc8nLk+xkOrV3xvu+ollj/VNHgPDKU7SFKZiraBzUW7bErCZ
AjCsgM7SalHDKIMpUqco4hutVJ7ykPE/pbEdC7q93TQ+PWE4/QY3JXcjC7L6KN1/
v9rRTIFTR6fc5EcJfhH2zpWi69OWfE/vjM7k9XhpMoAdUZc=
=fqiA
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa377.meta"
Content-Disposition: attachment; filename="xsa377.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzcsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIsCiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICJiMWU0NmJjMzY5YmI0OTBiNzIxYzc3ZjE1ZDI1ODNiYmY0
NjYxNTJkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIsCiAgICAgICAgICAgIDM3MywKICAgICAgICAgICAgMzc1CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NzctNC4xMS5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0K
ICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAg
ICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNTk4NDkwNWIy
NjM4ZGY4N2EwMjYyZDFlZTkxZjBhNmUxNGE4NmRmNiIsCiAgICAgICAgICAi
UHJlcmVxcyI6IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMs
CiAgICAgICAgICAgIDM3NQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRj
aGVzIjogWwogICAgICAgICAgICAieHNhMzc3LTQuMTIucGF0Y2giCiAgICAg
ICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7
CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAg
ICAiU3RhYmxlUmVmIjogIjI4NDEzMjkzODkwMGNlOGMzYjExYmFiZjcyNTVm
NWM2ZGJiMjE3MTYiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAg
ICAgIDM3MiwKICAgICAgICAgICAgMzczLAogICAgICAgICAgICAzNzUKICAg
ICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAg
InhzYTM3Ny5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0K
ICAgIH0sCiAgICAiNC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAg
ICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMTBmMGIyZDQ5
Mzc2ODY1ZDQ5NjgwZjA2YzUyYjQ1MWZhYmNlM2JiNSIsCiAgICAgICAgICAi
UHJlcmVxcyI6IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMs
CiAgICAgICAgICAgIDM3NQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRj
aGVzIjogWwogICAgICAgICAgICAieHNhMzc3LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE1IjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIyODBkNDcyZjRmY2EwNzBhMTAzNzdlMzE4ZDkwY2FiZmMy
NTQwODEwIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIsCiAgICAgICAgICAgIDM3MywKICAgICAgICAgICAgMzc1CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NzcucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiYWE3N2FjYzI4MDk4
ZDA0OTQ1YWY5OThmM2ZjMGRiZDM3NTliNWI0MSIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMsCiAg
ICAgICAgICAgIDM3NQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVz
IjogWwogICAgICAgICAgICAieHNhMzc3LnBhdGNoIgogICAgICAgICAgXQog
ICAgICAgIH0KICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa377.patch"
Content-Disposition: attachment; filename="xsa377.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWl0aWdhdGUgVEFBIGFmdGVy
IFMzIHJlc3VtZQoKVGhlIHVzZXIgY2hvc2VuIHNldHRpbmcgZm9yIE1TUl9U
U1hfQ1RSTCBuZWVkcyByZXN0b3JpbmcgYWZ0ZXIgUzMuCgpBbGwgQVBzIGdl
dCB0aGUgY29ycmVjdCBzZXR0aW5nIHZpYSBzdGFydF9zZWNvbmRhcnkoKSwg
YnV0IHRoZSBCU1Agd2FzIG1pc3NlZApvdXQuCgpUaGlzIGlzIFhTQS0zNzcg
LyBDVkUtMjAyMS0yODY5MC4KCkZpeGVzOiA4YzQzMzA4MThmNiAoIng4Ni9z
cGVjLWN0cmw6IE1pdGlnYXRlIHRoZSBUU1ggQXN5bmNocm9ub3VzIEFib3J0
IHNpZGVjaGFubmVsIikKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4v
YXJjaC94ODYvYWNwaS9wb3dlci5jIGIveGVuL2FyY2gveDg2L2FjcGkvcG93
ZXIuYwppbmRleCA5MWE4YzRkMGJkLi4zMWE1NmYwMmQwIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC94ODYvYWNwaS9wb3dlci5jCisrKyBiL3hlbi9hcmNoL3g4
Ni9hY3BpL3Bvd2VyLmMKQEAgLTI4OCw2ICsyODgsOCBAQCBzdGF0aWMgaW50
IGVudGVyX3N0YXRlKHUzMiBzdGF0ZSkKIAogICAgIG1pY3JvY29kZV91cGRh
dGVfb25lKCk7CiAKKyAgICB0c3hfaW5pdCgpOyAvKiBOZWVkcyBtaWNyb2Nv
ZGUuICBNYXkgY2hhbmdlIEhMRS9SVE0gZmVhdHVyZSBiaXRzLiAqLworCiAg
ICAgaWYgKCAhcmVjaGVja19jcHVfZmVhdHVyZXMoMCkgKQogICAgICAgICBw
YW5pYygiTWlzc2luZyBwcmV2aW91c2x5IGF2YWlsYWJsZSBmZWF0dXJlKHMp
XG4iKTsKIAo=

--=separator
Content-Type: application/octet-stream; name="xsa377-4.11.patch"
Content-Disposition: attachment; filename="xsa377-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWl0aWdhdGUgVEFBIGFmdGVy
IFMzIHJlc3VtZQoKVGhlIHVzZXIgY2hvc2VuIHNldHRpbmcgZm9yIE1TUl9U
U1hfQ1RSTCBuZWVkcyByZXN0b3JpbmcgYWZ0ZXIgUzMuCgpBbGwgQVBzIGdl
dCB0aGUgY29ycmVjdCBzZXR0aW5nIHZpYSBzdGFydF9zZWNvbmRhcnkoKSwg
YnV0IHRoZSBCU1Agd2FzIG1pc3NlZApvdXQuCgpUaGlzIGlzIFhTQS0zNzcg
LyBDVkUtMjAyMS0yODY5MC4KCkZpeGVzOiA4YzQzMzA4MThmNiAoIng4Ni9z
cGVjLWN0cmw6IE1pdGlnYXRlIHRoZSBUU1ggQXN5bmNocm9ub3VzIEFib3J0
IHNpZGVjaGFubmVsIikKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4v
YXJjaC94ODYvYWNwaS9wb3dlci5jIGIveGVuL2FyY2gveDg2L2FjcGkvcG93
ZXIuYwppbmRleCAzMGUxYmQ1Y2QzLi40NTFjYmE2MjJjIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC94ODYvYWNwaS9wb3dlci5jCisrKyBiL3hlbi9hcmNoL3g4
Ni9hY3BpL3Bvd2VyLmMKQEAgLTI1OSw2ICsyNTksOCBAQCBzdGF0aWMgaW50
IGVudGVyX3N0YXRlKHUzMiBzdGF0ZSkKIAogICAgIG1pY3JvY29kZV9yZXN1
bWVfY3B1KDApOwogCisgICAgdHN4X2luaXQoKTsgLyogTmVlZHMgbWljcm9j
b2RlLiAgTWF5IGNoYW5nZSBITEUvUlRNIGZlYXR1cmUgYml0cy4gKi8KKwog
ICAgIGlmICggIXJlY2hlY2tfY3B1X2ZlYXR1cmVzKDApICkKICAgICAgICAg
cGFuaWMoIk1pc3NpbmcgcHJldmlvdXNseSBhdmFpbGFibGUgZmVhdHVyZShz
KS4iKTsKIAo=

--=separator
Content-Type: application/octet-stream; name="xsa377-4.12.patch"
Content-Disposition: attachment; filename="xsa377-4.12.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWl0aWdhdGUgVEFBIGFmdGVy
IFMzIHJlc3VtZQoKVGhlIHVzZXIgY2hvc2VuIHNldHRpbmcgZm9yIE1TUl9U
U1hfQ1RSTCBuZWVkcyByZXN0b3JpbmcgYWZ0ZXIgUzMuCgpBbGwgQVBzIGdl
dCB0aGUgY29ycmVjdCBzZXR0aW5nIHZpYSBzdGFydF9zZWNvbmRhcnkoKSwg
YnV0IHRoZSBCU1Agd2FzIG1pc3NlZApvdXQuCgpUaGlzIGlzIFhTQS0zNzcg
LyBDVkUtMjAyMS0yODY5MC4KCkZpeGVzOiA4YzQzMzA4MThmNiAoIng4Ni9z
cGVjLWN0cmw6IE1pdGlnYXRlIHRoZSBUU1ggQXN5bmNocm9ub3VzIEFib3J0
IHNpZGVjaGFubmVsIikKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4v
YXJjaC94ODYvYWNwaS9wb3dlci5jIGIveGVuL2FyY2gveDg2L2FjcGkvcG93
ZXIuYwppbmRleCBhMDdhYTNiOWVkLi42NjAzNjNhM2RmIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC94ODYvYWNwaS9wb3dlci5jCisrKyBiL3hlbi9hcmNoL3g4
Ni9hY3BpL3Bvd2VyLmMKQEAgLTI1OSw2ICsyNTksOCBAQCBzdGF0aWMgaW50
IGVudGVyX3N0YXRlKHUzMiBzdGF0ZSkKIAogICAgIG1pY3JvY29kZV9yZXN1
bWVfY3B1KDApOwogCisgICAgdHN4X2luaXQoKTsgLyogTmVlZHMgbWljcm9j
b2RlLiAgTWF5IGNoYW5nZSBITEUvUlRNIGZlYXR1cmUgYml0cy4gKi8KKwog
ICAgIGlmICggIXJlY2hlY2tfY3B1X2ZlYXR1cmVzKDApICkKICAgICAgICAg
cGFuaWMoIk1pc3NpbmcgcHJldmlvdXNseSBhdmFpbGFibGUgZmVhdHVyZShz
KVxuIik7CiAK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:14:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:14:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138777.256763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfIz-0003EY-Cr; Tue, 08 Jun 2021 17:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138777.256763; Tue, 08 Jun 2021 17:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfIz-0003ER-8q; Tue, 08 Jun 2021 17:14:21 +0000
Received: by outflank-mailman (input) for mailman id 138777;
 Tue, 08 Jun 2021 17:14:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AOFJ=LC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqfBR-0007tU-02
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 17:06:33 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c493c848-7ea8-4cf4-935c-541debdb2129;
 Tue, 08 Jun 2021 17:06:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c493c848-7ea8-4cf4-935c-541debdb2129
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623171968;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=p9D1XpGniGzTsG6kVwBc4O+31iz1OjOUWmwkSQYYvvs=;
  b=hGwpldey5Fuom+9gM277KYJZ1Y6RWlJQcgK2FGaUB/E0HttR1soEgaoT
   6ysPwideWdMWMGd1nNcIIm6Xb08jT+w4QZ43i4Djc0aDRC8TBK/nbRHmZ
   QLIpi9UI3U8Zat9GGU+nSejVtqObimcBVx6p33EjBXITNbprmof2CXf+J
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Ba2iTYZSYIiQZ/d6bvpB67JeR74Ow2yl4hL8BGUPv3NXvrawaCciq1JKOWQ/7YSbmC6g3WhiSA
 vLx+3IVA1S0lkJNgWW5L28HD5YRXf7Tehv7FnCB97vzkht6PUNaPdCexeMFHeCS84fKcZ6R1UZ
 2T4bT6PONwx+Vtsm0hogGL3azopsTP12wNYR9zOSPUf8Mg6PO6iOvIoH2n3TQ61AYFdRzzu2cO
 Jn0fKRcvmOSHIkYuHWmyLsXMfva04T1EjmRHnCZKLXU/ZOVldeDzu7ZZHOZ9xH5qd9aYAxNzvx
 PDs=
X-SBRS: 5.1
X-MesageID: 45405893
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:sWxaMKrAVumozaHEN7YV6QIaV5oleYIsimQD101hICG8cqSj9v
 xG+85rsyMc6QxhP03I9urwW5VoLUmyyXcX2/h0AV7BZniFhILAFugLhuGOrwEIcxeOj9K1vp
 0BT0ERMrPN5CBB/KPH3DU=
X-IronPort-AV: E=Sophos;i="5.83,258,1616472000"; 
   d="scan'208";a="45405893"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/tsx: Cope with TSX deprecation on SKL/KBL/CFL/WHL
Date: Tue, 8 Jun 2021 18:05:59 +0100
Message-ID: <20210608170559.6732-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The June 2021 microcode is formally de-featuring TSX on the older Skylake
client CPUs.  The workaround from the March 2019 microcode is being dropped,
and replaced with additions to MSR_TSX_FORCE_ABORT to hide the HLE/RTM CPUID
bits.

With this microcode in place, TSX is disabled by default on these CPUs.
Backwards compatibility is provided in the same way as for TAA - RTM force
aborts, rather than suffering #UD, and the CPUID bits can be hidden to recover
performance.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 docs/misc/xen-command-line.pandoc           | 13 +++++
 tools/misc/xen-cpuid.c                      |  2 +-
 xen/arch/x86/tsx.c                          | 76 +++++++++++++++++++++++++++--
 xen/include/asm-x86/cpufeature.h            |  1 +
 xen/include/asm-x86/msr-index.h             |  2 +
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 6 files changed, 89 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 1fae872626..3ece83a427 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2309,6 +2309,12 @@ Several microcode updates are relevant:
    and formally retiring HLE from the architecture.  The user can disable TSX
    to mitigate TAA, and elect to hide the HLE/RTM CPUID bits.
 
+ * June 2021, removing the workaround for March 2019 on client CPUs and
+   formally de-featuring TSX on SKL/KBL/WHL/CFL (Note: SKX still retains the
+   March 2019 fix).  Introduced the ability to hide the HLE/RTM CPUID bits.
+   PCR3 works fine, and TSX is disabled by default, but the user can re-enable
+   TSX at their own risk, accepting that the memory order erratum is unfixed.
+
 On systems with the ability to configure TSX, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
@@ -2326,6 +2332,13 @@ control of whether TSX is enabled or disabled.
    ordering errata default to `true` to enable working TSX.  Alternatively,
    selecting `tsx=0` will disable TSX and restore PCR3 to a working state.
 
+   SKX and SKL/KBL/WHL/CFL on pre-June 2021 microcode default to `true`.
+   Alternatively, selecting `tsx=0` will disable TSX and restore PCR3 to a
+   working state.
+
+   SKL/KBL/WHL/CFL on the June 2021 microcode or later default to `false`.
+   Alternatively, selecting `tsx=1` will re-enable TSX at the user's own risk.
+
 ### ucode
 > `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d4bc83d8c9..735bcf8f0e 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -162,7 +162,7 @@ static const char *const str_7d0[32] =
     [ 4] = "fsrm",
 
     [ 8] = "avx512-vp2intersect", [ 9] = "srbds-ctrl",
-    [10] = "md-clear",
+    [10] = "md-clear",            [11] = "rtm-always-abort",
     /* 12 */                [13] = "tsx-force-abort",
     [14] = "serialize",
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 338191df7f..085f1c82a7 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -60,6 +60,38 @@ void tsx_init(void)
              */
 
             /*
+             * Probe for the June 2021 microcode which de-features TSX on
+             * client parts.  (Note - this is a subset of parts impacted by
+             * the memory ordering errata.)
+             *
+             * RTM_ALWAYS_ABORT enumerates the new functionality, but is also
+             * read as zero if TSX_FORCE_ABORT.ENABLE_RTM has been set before
+             * we run.
+             *
+             * Undo this behaviour in Xen's view of the world.
+             */
+            bool has_rtm_always_abort = cpu_has_rtm_always_abort;
+
+            if ( !has_rtm_always_abort )
+            {
+                uint64_t val;
+
+                rdmsrl(MSR_TSX_FORCE_ABORT, val);
+
+                if ( val & TSX_ENABLE_RTM )
+                    has_rtm_always_abort = true;
+            }
+
+            /*
+             * Always force RTM_ALWAYS_ABORT to be visible, even if it
+             * currently isn't.  If the user explicitly opts to enable TSX, we'll
+             * set TSX_FORCE_ABORT.ENABLE_RTM and hide RTM_ALWAYS_ABORT from
+             * the general CPUID scan later.
+             */
+            if ( has_rtm_always_abort )
+                setup_force_cpu_cap(X86_FEATURE_RTM_ALWAYS_ABORT);
+
+            /*
              * If no explicit tsx= option is provided, pick a default.
              *
              * This deliberately overrides the implicit opt_tsx=-3 from
@@ -67,9 +99,16 @@ void tsx_init(void)
             * - parse_spec_ctrl() ran before any CPU details were known.
              * - We now know we're running on a CPU not affected by TAA (as
              *   TSX_FORCE_ABORT is enumerated).
+             * - When RTM_ALWAYS_ABORT is enumerated, TSX malfunctions, so we
+             *   only ever want it enabled by explicit user choice.
+             *
+             * Without RTM_ALWAYS_ABORT, leave TSX active.  In particular,
+             * this includes SKX where TSX is still supported.
+             *
+             * With RTM_ALWAYS_ABORT, disable TSX.
              */
             if ( opt_tsx < 0 )
-                opt_tsx = 1;
+                opt_tsx = !cpu_has_rtm_always_abort;
         }
 
         /*
@@ -90,7 +129,7 @@ void tsx_init(void)
          * Force the features to be visible in Xen's view if we see any of the
          * infrastructure capable of hiding them.
          */
-        if ( cpu_has_tsx_ctrl )
+        if ( cpu_has_tsx_ctrl || cpu_has_tsx_force_abort )
         {
             setup_force_cpu_cap(X86_FEATURE_HLE);
             setup_force_cpu_cap(X86_FEATURE_RTM);
@@ -131,9 +170,36 @@ void tsx_init(void)
         /* Check bottom bit only.  Higher bits are various sentinels. */
         rtm_disabled = !(opt_tsx & 1);
 
-        lo &= ~TSX_FORCE_ABORT_RTM;
-        if ( rtm_disabled )
-            lo |= TSX_FORCE_ABORT_RTM;
+        lo &= ~(TSX_FORCE_ABORT_RTM | TSX_CPUID_CLEAR | TSX_ENABLE_RTM);
+
+        if ( cpu_has_rtm_always_abort )
+        {
+            /*
+             * June 2021 microcode, on a client part with TSX de-featured:
+             *  - There are no mitigations for the TSX memory ordering errata.
+             *  - Performance counter 3 works.  (I.e. it isn't being used by
+             *    microcode to work around the memory ordering errata.)
+             *  - TSX_FORCE_ABORT.FORCE_ABORT_RTM is fixed read1/write-discard.
+             *  - TSX_FORCE_ABORT.TSX_CPUID_CLEAR can be used to hide the
+             *    HLE/RTM CPUID bits.
+             *  - TSX_FORCE_ABORT.ENABLE_RTM may be used to opt in to
+             *    re-enabling RTM, at the user's own risk.
+             */
+            lo |= rtm_disabled ? TSX_CPUID_CLEAR : TSX_ENABLE_RTM;
+        }
+        else
+        {
+            /*
+             * Either a server part where TSX isn't de-featured, or pre-June
+             * 2021 microcode:
+             *  - By default, the TSX memory ordering errata is worked around
+             *    in microcode at the cost of Performance Counter 3.
+             *  - "Working TSX" vs "Working PCR3" can be selected by way of
+             *    setting TSX_FORCE_ABORT.FORCE_ABORT_RTM.
+             */
+            if ( rtm_disabled )
+                lo |= TSX_FORCE_ABORT_RTM;
+        }
 
         wrmsr(MSR_TSX_FORCE_ABORT, lo, hi);
     }
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 9f5ae3aa0d..a539a4bacd 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -131,6 +131,7 @@
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
 #define cpu_has_avx512_4fmaps   boot_cpu_has(X86_FEATURE_AVX512_4FMAPS)
 #define cpu_has_avx512_vp2intersect boot_cpu_has(X86_FEATURE_AVX512_VP2INTERSECT)
+#define cpu_has_rtm_always_abort boot_cpu_has(X86_FEATURE_RTM_ALWAYS_ABORT)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
 #define cpu_has_arch_caps       boot_cpu_has(X86_FEATURE_ARCH_CAPS)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index bd3a3a1e7f..9a772c12b8 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -61,6 +61,8 @@
 
 #define MSR_TSX_FORCE_ABORT                 0x0000010f
 #define  TSX_FORCE_ABORT_RTM                (_AC(1, ULL) <<  0)
+#define  TSX_CPUID_CLEAR                    (_AC(1, ULL) <<  1)
+#define  TSX_ENABLE_RTM                     (_AC(1, ULL) <<  2)
 
 #define MSR_TSX_CTRL                        0x00000122
 #define  TSX_CTRL_RTM_DISABLE               (_AC(1, ULL) <<  0)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index b65af42436..380b51b1b3 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,6 +265,7 @@ XEN_CPUFEATURE(FSRM,          9*32+ 4) /*A  Fast Short REP MOVS */
 XEN_CPUFEATURE(AVX512_VP2INTERSECT, 9*32+8) /*a  VP2INTERSECT{D,Q} insns */
 XEN_CPUFEATURE(SRBDS_CTRL,    9*32+ 9) /*   MSR_MCU_OPT_CTRL and RNGDS_MITG_DIS. */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*A  VERW clears microarchitectural buffers */
+XEN_CPUFEATURE(RTM_ALWAYS_ABORT, 9*32+11) /*! June 2021 TSX defeaturing in microcode. */
 XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*a  SERIALIZE insn */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:14:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138782.256782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfJ3-0003d6-9U; Tue, 08 Jun 2021 17:14:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138782.256782; Tue, 08 Jun 2021 17:14:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfJ3-0003cA-1u; Tue, 08 Jun 2021 17:14:25 +0000
Received: by outflank-mailman (input) for mailman id 138782;
 Tue, 08 Jun 2021 17:14:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbVg=LC=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lqfAG-0007tU-Pb
 for xen-devel@lists.xen.org; Tue, 08 Jun 2021 17:05:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d72a1b62-f0df-4bfb-a265-c632ed6c1b57;
 Tue, 08 Jun 2021 17:04:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9R-0004gU-89; Tue, 08 Jun 2021 17:04:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lqf9R-0004sG-6W; Tue, 08 Jun 2021 17:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d72a1b62-f0df-4bfb-a265-c632ed6c1b57
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=SDVFctSv2PMnBZCWe0sMgIhEUf1wCGU8fcv1bKj95EE=; b=AhabBF0EpCQTSyDErdVNqpPU0R
	m0fuCOvQLwx0DgXPRJQAHsszdRCyeUnvuhtv/I/mTl9bN+ziOaiNwKT+S5O+bP1AM+ZMlxZtXZOgp
	rNCVmgh4+eVtFYXcfAmSrCAs5S9fm9+3jjh3H5xYDU5PczLTbe9sFBpx4XK9Tw0Xo45c=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 373 v2 (CVE-2021-28692) - inappropriate x86
 IOMMU timeout detection / handling
Message-Id: <E1lqf9R-0004sG-6W@xenbits.xenproject.org>
Date: Tue, 08 Jun 2021 17:04:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28692 / XSA-373
                               version 2

         inappropriate x86 IOMMU timeout detection / handling

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

IOMMUs process commands issued to them in parallel with the operation
of the CPU(s) issuing such commands.  In the current implementation in
Xen, asynchronous notification of the completion of such commands is
not used.  Instead, the issuing CPU spin-waits for the completion of
the most recently issued command(s).  Some of these waiting loops try
to apply a timeout to fail overly-slow commands.  The course of action
upon a perceived timeout actually being detected is inappropriate:
 - on Intel hardware guests which did not originally cause the timeout
   may be marked as crashed,
 - on AMD hardware higher layer callers would not be notified of the
   issue, making them continue as if the IOMMU operation succeeded.

IMPACT
======

A malicious guest may be able to elevate its privileges to that of the
host, cause host or guest Denial of Service (DoS), or cause information
leaks.

VULNERABLE SYSTEMS
==================

All Xen versions from at least 3.2 onwards are vulnerable.  Earlier
versions have not been inspected.

Only x86 systems with in-use IOMMU hardware are vulnerable.  x86 systems
without any IOMMUs in use are not vulnerable.  On Arm systems IOMMU /
SMMU use is not security supported.

Only x86 guests which have physical devices passed through to them can
leverage the vulnerability.

MITIGATION
==========

Not passing through physical devices to untrusted guests will avoid
the vulnerability.

CREDITS
=======

This issue was discovered by Igor Druzhinin and Andrew Cooper of Citrix,
and further issues were uncovered by Jan Beulich of SUSE while trying
to fix the first issue.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa373/xsa373-?.patch           xen-unstable
xsa373/xsa373-4.15-?.patch      Xen 4.15.x
xsa373/xsa373-4.14-?.patch      Xen 4.14.x
xsa373/xsa373-4.13-?.patch      Xen 4.13.x
xsa373/xsa373-4.12-?.patch      Xen 4.12.x
xsa373/xsa373-4.11-?.patch      Xen 4.11.x

$ sha256sum xsa373* xsa373*/*
2ded01092088735e0d8a0e378a41b772ec0f17ceb7afabc78228670c43407fc2  xsa373.meta
f62df56cd176237521aa2ed4a22b0e893318b85bb0ce3c17bd7fca5282b6105b  xsa373/xsa373-1.patch
9eed9566508e116c4da6c201b36fe7e53e98f2daf96cce8ed0a9ca192d783edc  xsa373/xsa373-2.patch
ffee9d17e40798c053a67707dd13d7a944e4a53de7bcfe3e146eac7871ca2608  xsa373/xsa373-3.patch
c51bea462222c090ae671f14471ece00724348e6c04e5850f9b91d0b1eceaad8  xsa373/xsa373-4.11-1.patch
9a3b331e404a38c72ec154cefd78f1f67db6f25dcc1bd554b37ff50899ea42ff  xsa373/xsa373-4.11-2.patch
dba77bce4e6c88ec43df61e88bd5c8bee6e32c0ff681cbeddc4bceb0ee6c73dd  xsa373/xsa373-4.11-3.patch
b1f14e8885e3004de79c5012a1d9278d7a0c39633c5b73cbfda28679f1722c38  xsa373/xsa373-4.11-4.patch
791bccec1e7ba4429a0bafef5fd5a35a68562cee333d0962c70477172493ef3b  xsa373/xsa373-4.11-5.patch
cc4e1bcef148dbfc94ada92bef4408c5516cff2cf249e43c5595b1dbffbbc1e4  xsa373/xsa373-4.12-1.patch
12ffdac1526d96c4f1b572360a7f1a0371e8a177cf15228b126c1032de4e8930  xsa373/xsa373-4.12-2.patch
619425ba44f449bf7b0f519040ee579adff0d0293a95e9b0f70c943c02ae22fb  xsa373/xsa373-4.12-3.patch
b1f14e8885e3004de79c5012a1d9278d7a0c39633c5b73cbfda28679f1722c38  xsa373/xsa373-4.12-4.patch
96b3dd11d38ca8ca0b2dfe2dfb571045fcda78dbfe416580c9b04c5a8ce5fcef  xsa373/xsa373-4.12-5.patch
4add1d05ad2780904ebc89b4d1a93a8f2757b6e9f45b075afce46392ae406b58  xsa373/xsa373-4.13-1.patch
b064324db709078b8ef479df0c31ff3391a506755bfb0186d7d165592d025357  xsa373/xsa373-4.13-2.patch
6fe47fbba0c9d86f48643182d8a7c64ff70a7c8b290b0e93afe1d43d04bed480  xsa373/xsa373-4.13-3.patch
b1f14e8885e3004de79c5012a1d9278d7a0c39633c5b73cbfda28679f1722c38  xsa373/xsa373-4.13-4.patch
96b3dd11d38ca8ca0b2dfe2dfb571045fcda78dbfe416580c9b04c5a8ce5fcef  xsa373/xsa373-4.13-5.patch
4add1d05ad2780904ebc89b4d1a93a8f2757b6e9f45b075afce46392ae406b58  xsa373/xsa373-4.14-1.patch
8e61b7dda9ea21a830454e629fd23e3379b73fb230bd04107618e45975e117d1  xsa373/xsa373-4.14-2.patch
a5aa80d8e893c268f171a5e429bfef0c553522f860e3e5132b4bd87d3a73c6b7  xsa373/xsa373-4.14-3.patch
25bfd2b821ae2cc867b8e2d480528ebd435da76cfab766e8106573cf8dc6f36c  xsa373/xsa373-4.14-4.patch
162b3f14d15fe5ca2cb659efad6635f3803dde6fa97a6f0f1f7f202d3ea72d94  xsa373/xsa373-4.14-5.patch
4add1d05ad2780904ebc89b4d1a93a8f2757b6e9f45b075afce46392ae406b58  xsa373/xsa373-4.15-1.patch
9eed9566508e116c4da6c201b36fe7e53e98f2daf96cce8ed0a9ca192d783edc  xsa373/xsa373-4.15-2.patch
13642541b056ed47129d8143a919bcc81a73797baedc3bd90afeb33f021e6d31  xsa373/xsa373-4.15-3.patch
b2517a7e92c26a818e94ed5133d5aef6ef1d3a7a98f2f5355f1ad6f30baa3ab9  xsa373/xsa373-4.15-4.patch
3ca056796b93cb07ddb7e1dfda98410162382fc56135eb08bc5ff19137d8c427  xsa373/xsa373-4.15-5.patch
b2517a7e92c26a818e94ed5133d5aef6ef1d3a7a98f2f5355f1ad6f30baa3ab9  xsa373/xsa373-4.patch
0b7bb146330f7fdc7c8c331a618307819073654a13d9fe1d0a8b83ab037ae802  xsa373/xsa373-5.patch
$
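The digests above can be checked against downloaded attachments with `sha256sum --check`. A minimal sketch (the stand-in patch file and digest file name here are illustrative, not part of the advisory):

```shell
# Hypothetical verification sketch: record sha256 digests for the patch
# files and re-check them, as one would against the listing above.
mkdir -p xsa373
printf 'example patch body\n' > xsa373/xsa373-1.patch   # stand-in file only
sha256sum xsa373/*.patch > xsa373.sha256                # digest listing
sha256sum --check xsa373.sha256                         # reports OK/FAILED per file
```

In practice the digest lines from the advisory itself would be saved to a file and fed to `sha256sum --check` in the directory holding the extracted attachments.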

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmC/oxIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ7oQH/39iA05B0xCxHjYxZJmwplLhtr/RwNt+3zOgsesg
jaG8KMWRobWsfLWpbQdEuWKLQ5kPcK47KBGdFkadbSgNW6ZKeG6iR+HWC04/9uA6
3jjlhyqcdetfGnRUh/EO+4gLEaWxdWegWLWMBqYYp+f9b9lKDp8vyWj5yfzU1FFF
+YOu4bSRnqbY21hapsy2iupbBJugJF1vCLVfMLxQjba8KOjl4bk6cIxx/WgX3FPI
XIH6T+0MtLioCbv7MFaSlfeWoMNjpcimMA8/dmePS6XBtjGX02ahEYSO66lHKk7T
BsrN4QLibAsb8vMb5KjcjGE8ukhrg3AH5EOE950duWF5heQ=
=fAD/
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa373.meta"
Content-Disposition: attachment; filename="xsa373.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzMsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIsCiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICJiMWU0NmJjMzY5YmI0OTBiNzIxYzc3ZjE1ZDI1ODNiYmY0
NjYxNTJkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM3My94c2EzNzMtNC4xMS0/LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEyIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI1OTg0OTA1YjI2MzhkZjg3YTAyNjJkMWVlOTFmMGE2ZTE0
YTg2ZGY2IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM3My94c2EzNzMtNC4xMi0/LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIyODQxMzI5Mzg5MDBjZThjM2IxMWJhYmY3MjU1ZjVjNmRi
YjIxNzE2IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM3My94c2EzNzMtNC4xMy0/LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE0IjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxMGYwYjJkNDkzNzY4NjVkNDk2ODBmMDZjNTJiNDUxZmFi
Y2UzYmI1IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM3My94c2EzNzMtNC4xNC0/LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE1IjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIyODBkNDcyZjRmY2EwNzBhMTAzNzdlMzE4ZDkwY2FiZmMy
NTQwODEwIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM3My94c2EzNzMtNC4xNS0/LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAg
ICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAi
U3RhYmxlUmVmIjogImFhNzdhY2MyODA5OGQwNDk0NWFmOTk4ZjNmYzBkYmQz
NzU5YjViNDEiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAg
IDM3MgogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAg
ICAgICAgICAieHNhMzczL3hzYTM3My0/LnBhdGNoIgogICAgICAgICAgXQog
ICAgICAgIH0KICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDUyLDE3ICs0NTIsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IHZ0
ZF9pb21tdSAqaW9tbXUpOwogCiBzdGF0aWMgdm9pZCBwcmludF9xaV9yZWdz
KGNvbnN0IHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KQpAQCAtNDcsNyArNTAs
NyBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHFpbnZhbF9uZXh0X2luZGV4KHN0
CiAgICAgdGFpbCA+Pj0gUUlOVkFMX0lOREVYX1NISUZUOwogCiAgICAgLyog
KHRhaWwrMSA9PSBoZWFkKSBpbmRpY2F0ZXMgYSBmdWxsIHF1ZXVlLCB3YWl0
IGZvciBIVyAqLwotICAgIHdoaWxlICggKHRhaWwgKyAxKSAlIFFJTlZBTF9F
TlRSWV9OUiA9PQorICAgIHdoaWxlICggKCh0YWlsICsgMSkgJiAocWlfZW50
cnlfbnIgLSAxKSkgPT0KICAgICAgICAgICAgIChkbWFyX3JlYWRsKGlvbW11
LT5yZWcsIERNQVJfSVFIX1JFRykgPj4gUUlOVkFMX0lOREVYX1NISUZUKSAp
CiAgICAgICAgIGNwdV9yZWxheCgpOwogCkBAIC02MCw3ICs2Myw3IEBAIHN0
YXRpYyB2b2lkIHFpbnZhbF91cGRhdGVfcXRhaWwoc3RydWN0IHYKIAogICAg
IC8qIE5lZWQgaG9sZCByZWdpc3RlciBsb2NrIHdoZW4gdXBkYXRlIHRhaWwg
Ki8KICAgICBBU1NFUlQoIHNwaW5faXNfbG9ja2VkKCZpb21tdS0+cmVnaXN0
ZXJfbG9jaykgKTsKLSAgICB2YWwgPSAoaW5kZXggKyAxKSAlIFFJTlZBTF9F
TlRSWV9OUjsKKyAgICB2YWwgPSAoaW5kZXggKyAxKSAmIChxaV9lbnRyeV9u
ciAtIDEpOwogICAgIGRtYXJfd3JpdGVsKGlvbW11LT5yZWcsIERNQVJfSVFU
X1JFRywgdmFsIDw8IFFJTlZBTF9JTkRFWF9TSElGVCk7CiB9CiAKQEAgLTM5
NSw4ICszOTgsMjggQEAgaW50IGVuYWJsZV9xaW52YWwoc3RydWN0IHZ0ZF9p
b21tdSAqaW9tbQogCiAgICAgaWYgKCBpb21tdS0+cWludmFsX21hZGRyID09
IDAgKQogICAgIHsKLSAgICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciA9IGFs
bG9jX3BndGFibGVfbWFkZHIoUUlOVkFMX0FSQ0hfUEFHRV9OUiwKLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW9tbXUtPm5vZGUpOworICAgICAgICBpZiAoICFxaV9lbnRyeV9uciApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgKiBXaXRo
IHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3byBz
bG90cyBmb3IgZXZlcnkKKyAgICAgICAgICAgICAqIG9wZXJhdGlvbiAodGhl
IG9wZXJhdGlvbiBpdHNlbGYgYW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRo
ZXJlCisgICAgICAgICAgICAgKiBjYW4gYmUgb25lIHN1Y2ggcGFpciBvZiBy
ZXF1ZXN0cyBwZW5kaW5nIHBlciBDUFUuICBPbmUgZXh0cmEKKyAgICAgICAg
ICAgICAqIGVudHJ5IGlzIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRl
cmVkIGZ1bGwgd2hlbiB0aGVyZSdzCisgICAgICAgICAgICAgKiBvbmx5IG9u
ZSBlbnRyeSBsZWZ0LgorICAgICAgICAgICAgICovCisgICAgICAgICAgICBC
VUlMRF9CVUdfT04oQ09ORklHX05SX0NQVVMgKiAyID49IFFJTlZBTF9NQVhf
RU5UUllfTlIpOworICAgICAgICAgICAgcWlfcGdfb3JkZXIgPSBnZXRfb3Jk
ZXJfZnJvbV9ieXRlcygobnVtX3ByZXNlbnRfY3B1cygpICogMiArIDEpIDw8
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIChQQUdFX1NISUZUIC0KKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFFJTlZBTF9FTlRSWV9PUkRFUikpOwor
ICAgICAgICAgICAgcWlfZW50cnlfbnIgPSAxdSA8PCAocWlfcGdfb3JkZXIg
KyBRSU5WQUxfRU5UUllfT1JERVIpOworCisgICAgICAgICAgICBkcHJpbnRr
KFhFTkxPR19JTkZPIFZURFBSRUZJWCwKKyAgICAgICAgICAgICAgICAgICAg
IlFJOiB1c2luZyAldS1lbnRyeSByaW5nKHMpXG4iLCBxaV9lbnRyeV9ucik7
CisgICAgICAgIH0KKworICAgICAgICBpb21tdS0+cWludmFsX21hZGRyID0K
KyAgICAgICAgICAgIGFsbG9jX3BndGFibGVfbWFkZHIocWlfZW50cnlfbnIg
Pj4gUUlOVkFMX0VOVFJZX09SREVSLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpb21tdS0+bm9kZSk7CiAgICAgICAgIGlmICggaW9tbXUt
PnFpbnZhbF9tYWRkciA9PSAwICkKICAgICAgICAgewogICAgICAgICAgICAg
ZHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgsCkBAIC00MTAsMTUg
KzQzMywxNiBAQCBpbnQgZW5hYmxlX3FpbnZhbChzdHJ1Y3QgdnRkX2lvbW11
ICppb21tCiAKICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lz
dGVyX2xvY2ssIGZsYWdzKTsKIAotICAgIC8qIFNldHVwIEludmFsaWRhdGlv
biBRdWV1ZSBBZGRyZXNzKElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAg
KiBhZGRyZXNzIG9mIHRoZSBwYWdlIHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMg
ZmllbGQgYXQKLSAgICAgKiBiaXRzWzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBv
ZiBxdWV1ZSBpcyBvbmUgNEtCIHBhZ2UuCi0gICAgICogVGhhdCdzIDI1NiBl
bnRyaWVzLiAgUXVldWVkIEhlYWQgKElRSCkgYW5kIFF1ZXVlIFRhaWwgKElR
VCkKLSAgICAgKiByZWdpc3RlcnMgYXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQg
dG8gMCB3aXRoIHdyaXRlCi0gICAgICogdG8gSVFBIHJlZ2lzdGVyLgorICAg
IC8qCisgICAgICogU2V0dXAgSW52YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3Mg
KElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAg
KiBwYWdlcyB3ZSBqdXN0IGFsbG9jYXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBi
aXRzWzI6MF0gaW5kaWNhdGVzIHRoZSBzaXplCisgICAgICogKHBhZ2Ugb3Jk
ZXIpIG9mIHRoZSBxdWV1ZS4KKyAgICAgKgorICAgICAqIFF1ZXVlZCBIZWFk
IChJUUgpIGFuZCBRdWV1ZSBUYWlsIChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0
b21hdGljYWxseQorICAgICAqIHJlc2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJ
UUEgcmVnaXN0ZXIuCiAgICAgICovCiAgICAgZG1hcl93cml0ZXEoaW9tbXUt
PnJlZywgRE1BUl9JUUFfUkVHLAotICAgICAgICAgICAgICAgIGlvbW11LT5x
aW52YWxfbWFkZHIgfCBRSU5WQUxfUEFHRV9PUkRFUik7CisgICAgICAgICAg
ICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciB8IHFpX3BnX29yZGVyKTsKIAog
ICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfSVFUX1JFRywgMCk7
CiAK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXUtZGVmcy5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdS1kZWZzLmgKQEAgLTIwLDkgKzIwLDYgQEAKICNpZm5k
ZWYgQU1EX0lPTU1VX0RFRlNfSAogI2RlZmluZSBBTURfSU9NTVVfREVGU19I
CiAKLS8qIElPTU1VIENvbW1hbmQgQnVmZmVyIGVudHJpZXM6IGluIHBvd2Vy
IG9mIDIgaW5jcmVtZW50cywgbWluaW11bSBvZiAyNTYgKi8KLSNkZWZpbmUg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMJNTEyCi0KIC8qIElP
TU1VIEV2ZW50IExvZyBlbnRyaWVzOiBpbiBwb3dlciBvZiAyIGluY3JlbWVu
dHMsIG1pbmltdW0gb2YgMjU2ICovCiAjZGVmaW5lIElPTU1VX0VWRU5UX0xP
R19ERUZBVUxUX0VOVFJJRVMgICAgIDUxMgogCkBAIC0xNjQsOCArMTYxLDgg
QEAgc3RydWN0IGFtZF9pb21tdV9kdGUgewogI2RlZmluZSBJT01NVV9DTURf
QlVGRkVSX0xFTkdUSF9NQVNLCQkweDBGMDAwMDAwCiAjZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfTEVOR1RIX1NISUZUCQkyNAogCi0jZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfRU5UUllfU0laRQkJCTE2Ci0jZGVmaW5lIElPTU1VX0NN
RF9CVUZGRVJfUE9XRVJfT0YyX0VOVFJJRVNfUEVSX1BBR0UJOAorI2RlZmlu
ZSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSICAgICAgICAgICAgNAor
I2RlZmluZSBJT01NVV9DTURfQlVGRkVSX01BWF9FTlRSSUVTICAgICAgICAg
ICAgKDF1IDw8IDE1KQogCiAjZGVmaW5lIElPTU1VX0NNRF9PUENPREVfTUFT
SwkJCTB4RjAwMDAwMDAKICNkZWZpbmUgSU9NTVVfQ01EX09QQ09ERV9TSElG
VAkJCTI4Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21t
dV9jbWQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9t
bXVfY21kLmMKQEAgLTI0LDcgKzI0LDcgQEAgc3RhdGljIGludCBxdWV1ZV9p
b21tdV9jb21tYW5kKHN0cnVjdCBhbQogewogICAgIHVpbnQzMl90IHRhaWws
IGhlYWQ7CiAKLSAgICB0YWlsID0gaW9tbXUtPmNtZF9idWZmZXIudGFpbCAr
IElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRTsKKyAgICB0YWlsID0gaW9t
bXUtPmNtZF9idWZmZXIudGFpbCArIHNpemVvZihjbWRfZW50cnlfdCk7CiAg
ICAgaWYgKCB0YWlsID09IGlvbW11LT5jbWRfYnVmZmVyLnNpemUgKQogICAg
ICAgICB0YWlsID0gMDsKIApAQCAtMzMsNyArMzMsNyBAQCBzdGF0aWMgaW50
IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3RydWN0IGFtCiAgICAgaWYgKCBoZWFk
ICE9IHRhaWwgKQogICAgIHsKICAgICAgICAgbWVtY3B5KGlvbW11LT5jbWRf
YnVmZmVyLmJ1ZmZlciArIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwsCi0gICAg
ICAgICAgICAgICBjbWQsIElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRSk7
CisgICAgICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
CiAgICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogICAg
ICAgICByZXR1cm4gMTsKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gv
YW1kL2lvbW11X2luaXQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfaW5pdC5jCkBAIC0xMTgsNyArMTE4LDcgQEAgc3RhdGlj
IHZvaWQgcmVnaXN0ZXJfaW9tbXVfY21kX2J1ZmZlcl9pbgogICAgIHdyaXRl
bChlbnRyeSwgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX0NNRF9CVUZGRVJf
QkFTRV9MT1dfT0ZGU0VUKTsKIAogICAgIHBvd2VyX29mMl9lbnRyaWVzID0g
Z2V0X29yZGVyX2Zyb21fYnl0ZXMoaW9tbXUtPmNtZF9idWZmZXIuc2l6ZSkg
KwotICAgICAgICBJT01NVV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVT
X1BFUl9QQUdFOworICAgICAgICBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JV
RkZFUl9FTlRSWV9PUkRFUjsKIAogICAgIGVudHJ5ID0gMDsKICAgICBpb21t
dV9zZXRfYWRkcl9oaV90b19yZWcoJmVudHJ5LCBhZGRyX2hpKTsKQEAgLTEw
MTgsOSArMTAxOCwzMSBAQCBzdGF0aWMgdm9pZCAqX19pbml0IGFsbG9jYXRl
X3JpbmdfYnVmZmVyCiBzdGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9j
bWRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQogewogICAgIC8q
IGFsbG9jYXRlICdjb21tYW5kIGJ1ZmZlcicgaW4gcG93ZXIgb2YgMiBpbmNy
ZW1lbnRzIG9mIDRLICovCisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3Jl
YWRfbW9zdGx5IG5yX2VudHM7CisKKyAgICBpZiAoICFucl9lbnRzICkKKyAg
ICB7CisgICAgICAgIHVuc2lnbmVkIGludCBvcmRlcjsKKworICAgICAgICAv
KgorICAgICAgICAgKiBXaXRoIHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1v
ZGVsLCB3ZSBuZWVkIHR3byBzbG90cyBmb3IgZXZlcnkKKyAgICAgICAgICog
b3BlcmF0aW9uICh0aGUgb3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0IGNv
bW1hbmQpLiAgVGhlcmUgY2FuIGJlCisgICAgICAgICAqIG9uZSBzdWNoIHBh
aXIgb2YgcmVxdWVzdHMgcGVuZGluZyBwZXIgQ1BVLiAgT25lIGV4dHJhIGVu
dHJ5IGlzCisgICAgICAgICAqIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25z
aWRlcmVkIGZ1bGwgd2hlbiB0aGVyZSdzIG9ubHkgb25lIGVudHJ5CisgICAg
ICAgICAqIGxlZnQuCisgICAgICAgICAqLworICAgICAgICBCVUlMRF9CVUdf
T04oQ09ORklHX05SX0NQVVMgKiAyID49IElPTU1VX0NNRF9CVUZGRVJfTUFY
X0VOVFJJRVMpOworICAgICAgICBvcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5
dGVzKChudW1fcHJlc2VudF9jcHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9DTURfQlVGRkVS
X0VOVFJZX09SREVSKTsKKyAgICAgICAgbnJfZW50cyA9IDF1IDw8IChvcmRl
ciArIFBBR0VfU0hJRlQgLSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVS
KTsKKworICAgICAgICBBTURfSU9NTVVfREVCVUcoInVzaW5nICV1LWVudHJ5
IGNtZCByaW5nKHMpXG4iLCBucl9lbnRzKTsKKyAgICB9CisKKyAgICBCVUlM
RF9CVUdfT04oc2l6ZW9mKGNtZF9lbnRyeV90KSAhPSAoMXUgPDwgSU9NTVVf
Q01EX0JVRkZFUl9FTlRSWV9PUkRFUikpOworCiAgICAgcmV0dXJuIGFsbG9j
YXRlX3JpbmdfYnVmZmVyKCZpb21tdS0+Y21kX2J1ZmZlciwgc2l6ZW9mKGNt
ZF9lbnRyeV90KSwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJDb21tYW5kIEJ1ZmZlciIsIGZhbHNl
KTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbnJfZW50cywg
IkNvbW1hbmQgQnVmZmVyIiwgZmFsc2UpOwogfQogCiBzdGF0aWMgdm9pZCAq
IF9faW5pdCBhbGxvY2F0ZV9ldmVudF9sb2coc3RydWN0IGFtZF9pb21tdSAq
aW9tbXUpCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+Ci0tLQpUQkQ6IEluIHF1ZXVlX2ludmFsaWRhdGVfd2FpdCgpIHdlIGhh
dmUgdGhlIG9wdGlvbiBvZiBwcm9jZXNzaW5nCiAgICAgc29mdGlycXMgZXZl
cnkgb25jZSBpbiBhIHdoaWxlLCBhcyB0aGVyZSBJUlFzIGFyZW4ndCBvZmYg
d2hpbGUKICAgICBzcGlubmluZy4gVGhpcyB3b3VsZCBrZWVwIHRoZSB3YXRj
aGRvZyBoYXBweS4gSSdtIG5vdCBzdXJlIHRob3VnaAogICAgIHdoZXRoZXIg
d2UgYXJlbid0IGJldHRlciBvZmYgaWYgaXQgYWN0dWFsbHkgdHJpZ2dlcmVk
IGluIGNhc2Ugd2UKICAgICBzcGluIGZvciBhIGxvbmcgdGltZS4KCi0tLSBh
L3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2RtYXIuaApAQCAtMTI3LDYgKzEy
NywzNCBAQCBkbyB7CiAgICAgfSAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogfSB3aGlsZSAo
MCkKIAorI2RlZmluZSBJT01NVV9GTFVTSF9XQUlUKHdoYXQsIGlvbW11LCBv
ZmZzZXQsIG9wLCBjb25kLCBzdHMpICAgICAgIFwKK2RvIHsgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9z
dGx5IHRocmVzaG9sZCA9IDE7ICAgICAgICAgICAgICAgXAorICAgIHNfdGlt
ZV90IHN0YXJ0ID0gTk9XKCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICBzX3RpbWVfdCB0aW1lb3V0ID0gc3RhcnQg
KyBETUFSX09QRVJBVElPTl9USU1FT1VUICogdGhyZXNob2xkOyBcCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIGZvciAoIDsgOyApICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICB7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgIHN0cyA9IG9wKGlv
bW11LT5yZWcsIG9mZnNldCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBpZiAoIGNvbmQgKSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgIGJy
ZWFrOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgICAgIGlmICggdGltZW91dCAmJiBOT1coKSA+IHRp
bWVvdXQgKSAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgICAgIHRocmVzaG9sZCB8PSB0aHJl
c2hvbGQgPDwgMTsgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklYICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICAgICAgICAgICAgICIgSU9N
TVUjJXU6ICVzIGZsdXNoIHRha2luZyB0b28gbG9uZ1xuIiwgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQpOyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aW1lb3V0
ID0gMDsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgY3B1X3Jl
bGF4KCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgfSAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICBpZiAoICF0aW1lb3V0ICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVggICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICAgICAgICAgIiBJT01NVSMl
dTogJXMgZmx1c2ggdG9vayAlbHVtc1xuIiwgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgd2hhdCwgKE5PVygpIC0g
c3RhcnQpIC8gMTAwMDAwMDApOyAgICBcCit9IHdoaWxlICggZmFsc2UgKQor
CiBpbnQgdnRkX2h3X2NoZWNrKHZvaWQpOwogdm9pZCBkaXNhYmxlX3Btcihz
dHJ1Y3QgdnRkX2lvbW11ICppb21tdSk7CiBpbnQgaXNfaWdkX2RyaGQoc3Ry
dWN0IGFjcGlfZHJoZF91bml0ICpkcmhkKTsKLS0tIGEveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKKysrIGIveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvdnRkL2lvbW11LmMKQEAgLTM3Myw4ICszNzMsOCBAQCBzdGF0
aWMgdm9pZCBpb21tdV9mbHVzaF93cml0ZV9idWZmZXIoc3RyCiAgICAgZG1h
cl93cml0ZWwoaW9tbXUtPnJlZywgRE1BUl9HQ01EX1JFRywgdmFsIHwgRE1B
X0dDTURfV0JGKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBoYXJkd2FyZSBjb21w
bGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9tbXUsIERNQVJfR1NU
U19SRUcsIGRtYXJfcmVhZGwsCi0gICAgICAgICAgICAgICAgICAhKHZhbCAm
IERNQV9HU1RTX1dCRlMpLCB2YWwpOworICAgIElPTU1VX0ZMVVNIX1dBSVQo
IndyaXRlIGJ1ZmZlciIsIGlvbW11LCBETUFSX0dTVFNfUkVHLCBkbWFyX3Jl
YWRsLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBETUFfR1NUU19X
QkZTKSwgdmFsKTsKIAogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlv
bW11LT5yZWdpc3Rlcl9sb2NrLCBmbGFncyk7CiB9CkBAIC00MjMsOCArNDIz
LDggQEAgaW50IHZ0ZF9mbHVzaF9jb250ZXh0X3JlZyhzdHJ1Y3QgdnRkX2lv
bQogICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfQ0NNRF9SRUcs
IHZhbCk7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFyZHdhcmUgY29tcGxldGUg
aXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11LCBETUFSX0NDTURfUkVH
LCBkbWFyX3JlYWRxLAotICAgICAgICAgICAgICAgICAgISh2YWwgJiBETUFf
Q0NNRF9JQ0MpLCB2YWwpOworICAgIElPTU1VX0ZMVVNIX1dBSVQoImNvbnRl
eHQiLCBpb21tdSwgRE1BUl9DQ01EX1JFRywgZG1hcl9yZWFkcSwKKyAgICAg
ICAgICAgICAgICAgICAgICEodmFsICYgRE1BX0NDTURfSUNDKSwgdmFsKTsK
IAogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11LT5yZWdpc3Rl
cl9sb2NrLCBmbGFncyk7CiAgICAgLyogZmx1c2ggY29udGV4dCBlbnRyeSB3
aWxsIGltcGxpY2l0bHkgZmx1c2ggd3JpdGUgYnVmZmVyICovCkBAIC01MDEs
OCArNTAxLDggQEAgaW50IHZ0ZF9mbHVzaF9pb3RsYl9yZWcoc3RydWN0IHZ0
ZF9pb21tdQogICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIHRsYl9vZmZz
ZXQgKyA4LCB2YWwpOwogCiAgICAgLyogTWFrZSBzdXJlIGhhcmR3YXJlIGNv
bXBsZXRlIGl0ICovCi0gICAgSU9NTVVfV0FJVF9PUChpb21tdSwgKHRsYl9v
ZmZzZXQgKyA4KSwgZG1hcl9yZWFkcSwKLSAgICAgICAgICAgICAgICAgICEo
dmFsICYgRE1BX1RMQl9JVlQpLCB2YWwpOworICAgIElPTU1VX0ZMVVNIX1dB
SVQoImlvdGxiIiwgaW9tbXUsICh0bGJfb2Zmc2V0ICsgOCksIGRtYXJfcmVh
ZHEsCisgICAgICAgICAgICAgICAgICAgICAhKHZhbCAmIERNQV9UTEJfSVZU
KSwgdmFsKTsKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21tdS0+
cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogCiAgICAgLyogY2hlY2sgSU9UTEIg
aW52YWxpZGF0aW9uIGdyYW51bGFyaXR5ICovCi0tLSBhL3hlbi9kcml2ZXJz
L3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYworKysgYi94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvcWludmFsLmMKQEAgLTI5LDggKzI5LDYgQEAKICNp
bmNsdWRlICJleHRlcm4uaCIKICNpbmNsdWRlICIuLi9hdHMuaCIKIAotI2Rl
ZmluZSBWVERfUUlfVElNRU9VVAkxCi0KIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSBxaV9wZ19vcmRlcjsKIHN0YXRpYyB1bnNpZ25lZCBp
bnQgX19yZWFkX21vc3RseSBxaV9lbnRyeV9ucjsKIApAQCAtNTIsNyArNTAs
MTEgQEAgc3RhdGljIHVuc2lnbmVkIGludCBxaW52YWxfbmV4dF9pbmRleChz
dAogICAgIC8qICh0YWlsKzEgPT0gaGVhZCkgaW5kaWNhdGVzIGEgZnVsbCBx
dWV1ZSwgd2FpdCBmb3IgSFcgKi8KICAgICB3aGlsZSAoICgodGFpbCArIDEp
ICYgKHFpX2VudHJ5X25yIC0gMSkpID09CiAgICAgICAgICAgICAoZG1hcl9y
ZWFkbChpb21tdS0+cmVnLCBETUFSX0lRSF9SRUcpID4+IFFJTlZBTF9JTkRF
WF9TSElGVCkgKQorICAgIHsKKyAgICAgICAgcHJpbnRrX29uY2UoWEVOTE9H
X0VSUiBWVERQUkVGSVggIiBJT01NVSMldTogbm8gUUkgc2xvdCBhdmFpbGFi
bGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCk7CiAg
ICAgICAgIGNwdV9yZWxheCgpOworICAgIH0KIAogICAgIHJldHVybiB0YWls
OwogfQpAQCAtMTcyLDIzICsxNzQsMzIgQEAgc3RhdGljIGludCBfX211c3Rf
Y2hlY2sgcXVldWVfaW52YWxpZGF0ZQogICAgIC8qIE5vdyB3ZSBkb24ndCBz
dXBwb3J0IGludGVycnVwdCBtZXRob2QgKi8KICAgICBpZiAoIHN3ICkKICAg
ICB7Ci0gICAgICAgIHNfdGltZV90IHRpbWVvdXQ7Ci0KLSAgICAgICAgLyog
SW4gY2FzZSBhbGwgd2FpdCBkZXNjcmlwdG9yIHdyaXRlcyB0byBzYW1lIGFk
ZHIgd2l0aCBzYW1lIGRhdGEgKi8KLSAgICAgICAgdGltZW91dCA9IE5PVygp
ICsgTUlMTElTRUNTKGZsdXNoX2Rldl9pb3RsYiA/Ci0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBpb21tdV9kZXZfaW90bGJfdGltZW91
dCA6IFZURF9RSV9USU1FT1VUKTsKKyAgICAgICAgc3RhdGljIHVuc2lnbmVk
IGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7CisgICAgICAgIHNf
dGltZV90IHN0YXJ0ID0gTk9XKCk7CisgICAgICAgIHNfdGltZV90IHRpbWVv
dXQgPSBzdGFydCArIChmbHVzaF9kZXZfaW90bGIKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgID8gaW9tbXVfZGV2X2lvdGxiX3RpbWVv
dXQKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDogMTAw
KSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwogCiAgICAgICAgIHdoaWxlICgg
QUNDRVNTX09OQ0UoKnRoaXNfcG9sbF9zbG90KSAhPSBRSU5WQUxfU1RBVF9E
T05FICkKICAgICAgICAgewotICAgICAgICAgICAgaWYgKCBOT1coKSA+IHRp
bWVvdXQgKQorICAgICAgICAgICAgaWYgKCB0aW1lb3V0ICYmIE5PVygpID4g
dGltZW91dCApCiAgICAgICAgICAgICB7Ci0gICAgICAgICAgICAgICAgcHJp
bnRfcWlfcmVncyhpb21tdSk7CisgICAgICAgICAgICAgICAgdGhyZXNob2xk
IHw9IHRocmVzaG9sZCA8PCAxOwogICAgICAgICAgICAgICAgIHByaW50ayhY
RU5MT0dfV0FSTklORyBWVERQUkVGSVgKLSAgICAgICAgICAgICAgICAgICAg
ICAgIiBRdWV1ZSBpbnZhbGlkYXRlIHdhaXQgZGVzY3JpcHRvciB0aW1lZCBv
dXRcbiIpOwotICAgICAgICAgICAgICAgIHJldHVybiAtRVRJTUVET1VUOwor
ICAgICAgICAgICAgICAgICAgICAgICAiIElPTU1VIyV1OiBRSSVzIHdhaXQg
ZGVzY3JpcHRvciB0YWtpbmcgdG9vIGxvbmdcbiIsCisgICAgICAgICAgICAg
ICAgICAgICAgIGlvbW11LT5pbmRleCwgZmx1c2hfZGV2X2lvdGxiID8gIiBk
ZXYiIDogIiIpOworICAgICAgICAgICAgICAgIHByaW50X3FpX3JlZ3MoaW9t
bXUpOworICAgICAgICAgICAgICAgIHRpbWVvdXQgPSAwOwogICAgICAgICAg
ICAgfQogICAgICAgICAgICAgY3B1X3JlbGF4KCk7CiAgICAgICAgIH0KKwor
ICAgICAgICBpZiAoICF0aW1lb3V0ICkKKyAgICAgICAgICAgIHByaW50ayhY
RU5MT0dfV0FSTklORyBWVERQUkVGSVgKKyAgICAgICAgICAgICAgICAgICAi
IElPTU1VIyV1OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0b29rICVsdW1zXG4i
LAorICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgZmx1c2hfZGV2
X2lvdGxiID8gIiBkZXYiIDogIiIsCisgICAgICAgICAgICAgICAgICAgKE5P
VygpIC0gc3RhcnQpIC8gMTAwMDAwMDApOworCiAgICAgICAgIHJldHVybiAw
OwogICAgIH0KIAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.11-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDQ3LDE3ICs0NDcsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IGlv
bW11ICppb21tdSk7CiAKIHN0YXRpYyB2b2lkIHByaW50X3FpX3JlZ3Moc3Ry
dWN0IGlvbW11ICppb21tdSkKQEAgLTU1LDcgKzU4LDcgQEAgc3RhdGljIHVu
c2lnbmVkIGludCBxaW52YWxfbmV4dF9pbmRleChzdAogICAgIHRhaWwgPj49
IFFJTlZBTF9JTkRFWF9TSElGVDsKIAogICAgIC8qICh0YWlsKzEgPT0gaGVh
ZCkgaW5kaWNhdGVzIGEgZnVsbCBxdWV1ZSwgd2FpdCBmb3IgSFcgKi8KLSAg
ICB3aGlsZSAoICggdGFpbCArIDEgKSAlIFFJTlZBTF9FTlRSWV9OUiA9PQor
ICAgIHdoaWxlICggKCh0YWlsICsgMSkgJiAocWlfZW50cnlfbnIgLSAxKSkg
PT0KICAgICAgICAgICAgICggZG1hcl9yZWFkcShpb21tdS0+cmVnLCBETUFS
X0lRSF9SRUcpID4+IFFJTlZBTF9JTkRFWF9TSElGVCApICkKICAgICAgICAg
Y3B1X3JlbGF4KCk7CiAKQEAgLTY4LDcgKzcxLDcgQEAgc3RhdGljIHZvaWQg
cWludmFsX3VwZGF0ZV9xdGFpbChzdHJ1Y3QgaQogCiAgICAgLyogTmVlZCBo
b2xkIHJlZ2lzdGVyIGxvY2sgd2hlbiB1cGRhdGUgdGFpbCAqLwogICAgIEFT
U0VSVCggc3Bpbl9pc19sb2NrZWQoJmlvbW11LT5yZWdpc3Rlcl9sb2NrKSAp
OwotICAgIHZhbCA9IChpbmRleCArIDEpICUgUUlOVkFMX0VOVFJZX05SOwor
ICAgIHZhbCA9IChpbmRleCArIDEpICYgKHFpX2VudHJ5X25yIC0gMSk7CiAg
ICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgRE1BUl9JUVRfUkVHLCAodmFs
IDw8IFFJTlZBTF9JTkRFWF9TSElGVCkpOwogfQogCkBAIC00MTcsNyArNDIw
LDI3IEBAIGludCBlbmFibGVfcWludmFsKHN0cnVjdCBpb21tdSAqaW9tbXUp
CiAgICAgaWYgKCBxaV9jdHJsLT5xaW52YWxfbWFkZHIgPT0gMCApCiAgICAg
ewogICAgICAgICBkcmhkID0gaW9tbXVfdG9fZHJoZChpb21tdSk7Ci0gICAg
ICAgIHFpX2N0cmwtPnFpbnZhbF9tYWRkciA9IGFsbG9jX3BndGFibGVfbWFk
ZHIoZHJoZCwgUUlOVkFMX0FSQ0hfUEFHRV9OUik7CisgICAgICAgIGlmICgg
IXFpX2VudHJ5X25yICkKKyAgICAgICAgeworICAgICAgICAgICAgLyoKKyAg
ICAgICAgICAgICAqIFdpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9k
ZWwsIHdlIG5lZWQgdHdvIHNsb3RzIGZvciBldmVyeQorICAgICAgICAgICAg
ICogb3BlcmF0aW9uICh0aGUgb3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0
IGRlc2NyaXB0b3IpLiAgVGhlcmUKKyAgICAgICAgICAgICAqIGNhbiBiZSBv
bmUgc3VjaCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gIE9u
ZSBleHRyYQorICAgICAgICAgICAgICogZW50cnkgaXMgbmVlZGVkIGFzIHRo
ZSByaW5nIGlzIGNvbnNpZGVyZWQgZnVsbCB3aGVuIHRoZXJlJ3MKKyAgICAg
ICAgICAgICAqIG9ubHkgb25lIGVudHJ5IGxlZnQuCisgICAgICAgICAgICAg
Ki8KKyAgICAgICAgICAgIEJVSUxEX0JVR19PTihDT05GSUdfTlJfQ1BVUyAq
IDIgPj0gUUlOVkFMX01BWF9FTlRSWV9OUik7CisgICAgICAgICAgICBxaV9w
Z19vcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5dGVzKChudW1fcHJlc2VudF9j
cHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgKFBBR0VfU0hJRlQgLQorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgUUlOVkFM
X0VOVFJZX09SREVSKSk7CisgICAgICAgICAgICBxaV9lbnRyeV9uciA9IDF1
IDw8IChxaV9wZ19vcmRlciArIFFJTlZBTF9FTlRSWV9PUkRFUik7CisKKyAg
ICAgICAgICAgIGRwcmludGsoWEVOTE9HX0lORk8gVlREUFJFRklYLAorICAg
ICAgICAgICAgICAgICAgICAiUUk6IHVzaW5nICV1LWVudHJ5IHJpbmcocylc
biIsIHFpX2VudHJ5X25yKTsKKyAgICAgICAgfQorCisgICAgICAgIHFpX2N0
cmwtPnFpbnZhbF9tYWRkciA9CisgICAgICAgICAgICBhbGxvY19wZ3RhYmxl
X21hZGRyKGRyaGQsIHFpX2VudHJ5X25yID4+IFFJTlZBTF9FTlRSWV9PUkRF
Uik7CiAgICAgICAgIGlmICggcWlfY3RybC0+cWludmFsX21hZGRyID09IDAg
KQogICAgICAgICB7CiAgICAgICAgICAgICBkcHJpbnRrKFhFTkxPR19XQVJO
SU5HIFZURFBSRUZJWCwKQEAgLTQzMSwxNSArNDU0LDE2IEBAIGludCBlbmFi
bGVfcWludmFsKHN0cnVjdCBpb21tdSAqaW9tbXUpCiAKICAgICBzcGluX2xv
Y2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsKIAot
ICAgIC8qIFNldHVwIEludmFsaWRhdGlvbiBRdWV1ZSBBZGRyZXNzKElRQSkg
cmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAgKiBhZGRyZXNzIG9mIHRoZSBwYWdl
IHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMgZmllbGQgYXQKLSAgICAgKiBiaXRz
WzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBvZiBxdWV1ZSBpcyBvbmUgNEtCIHBh
Z2UuCi0gICAgICogVGhhdCdzIDI1NiBlbnRyaWVzLiAgUXVldWVkIEhlYWQg
KElRSCkgYW5kIFF1ZXVlIFRhaWwgKElRVCkKLSAgICAgKiByZWdpc3RlcnMg
YXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQgdG8gMCB3aXRoIHdyaXRlCi0gICAg
ICogdG8gSVFBIHJlZ2lzdGVyLgorICAgIC8qCisgICAgICogU2V0dXAgSW52
YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3MgKElRQSkgcmVnaXN0ZXIgd2l0aCB0
aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAgKiBwYWdlcyB3ZSBqdXN0IGFsbG9j
YXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBiaXRzWzI6MF0gaW5kaWNhdGVzIHRo
ZSBzaXplCisgICAgICogKHBhZ2Ugb3JkZXIpIG9mIHRoZSBxdWV1ZS4KKyAg
ICAgKgorICAgICAqIFF1ZXVlZCBIZWFkIChJUUgpIGFuZCBRdWV1ZSBUYWls
IChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0b21hdGljYWxseQorICAgICAqIHJl
c2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJUUEgcmVnaXN0ZXIuCiAgICAgICov
CiAgICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgRE1BUl9JUUFfUkVHLAot
ICAgICAgICAgICAgICAgIHFpX2N0cmwtPnFpbnZhbF9tYWRkciB8IFFJTlZB
TF9QQUdFX09SREVSKTsKKyAgICAgICAgICAgICAgICBxaV9jdHJsLT5xaW52
YWxfbWFkZHIgfCBxaV9wZ19vcmRlcik7CiAKICAgICBkbWFyX3dyaXRlcShp
b21tdS0+cmVnLCBETUFSX0lRVF9SRUcsIDApOwogCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.11-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yNCw4ICsyNCw3IEBACiAKIHN0YXRp
YyBpbnQgcXVldWVfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICpp
b21tdSwgdTMyIGNtZFtdKQogewotICAgIHUzMiB0YWlsLCBoZWFkLCAqY21k
X2J1ZmZlcjsKLSAgICBpbnQgaTsKKyAgICB1aW50MzJfdCB0YWlsLCBoZWFk
OwogCiAgICAgdGFpbCA9IGlvbW11LT5jbWRfYnVmZmVyLnRhaWw7CiAgICAg
aWYgKCArK3RhaWwgPT0gaW9tbXUtPmNtZF9idWZmZXIuZW50cmllcyApCkBA
IC0zNSwxMiArMzQsOSBAQCBzdGF0aWMgaW50IHF1ZXVlX2lvbW11X2NvbW1h
bmQoc3RydWN0IGFtCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpKTsKICAgICBp
ZiAoIGhlYWQgIT0gdGFpbCApCiAgICAgewotICAgICAgICBjbWRfYnVmZmVy
ID0gKHUzMiAqKShpb21tdS0+Y21kX2J1ZmZlci5idWZmZXIgKwotICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAoaW9tbXUtPmNtZF9idWZmZXIudGFp
bCAqCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1VX0NNRF9C
VUZGRVJfRU5UUllfU0laRSkpOwotCi0gICAgICAgIGZvciAoIGkgPSAwOyBp
IDwgSU9NTVVfQ01EX0JVRkZFUl9VMzJfUEVSX0VOVFJZOyBpKysgKQotICAg
ICAgICAgICAgY21kX2J1ZmZlcltpXSA9IGNtZFtpXTsKKyAgICAgICAgbWVt
Y3B5KGlvbW11LT5jbWRfYnVmZmVyLmJ1ZmZlciArCisgICAgICAgICAgICAg
ICAoaW9tbXUtPmNtZF9idWZmZXIudGFpbCAqIHNpemVvZihjbWRfZW50cnlf
dCkpLAorICAgICAgICAgICAgICAgY21kLCBzaXplb2YoY21kX2VudHJ5X3Qp
KTsKIAogICAgICAgICBpb21tdS0+Y21kX2J1ZmZlci50YWlsID0gdGFpbDsK
ICAgICAgICAgcmV0dXJuIDE7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9pbml0LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvYW1kL2lvbW11X2luaXQuYwpAQCAtMTM2LDcgKzEzNiw3IEBAIHN0
YXRpYyB2b2lkIHJlZ2lzdGVyX2lvbW11X2NtZF9idWZmZXJfaW4KICAgICB3
cml0ZWwoZW50cnksIGlvbW11LT5tbWlvX2Jhc2UgKyBJT01NVV9DTURfQlVG
RkVSX0JBU0VfTE9XX09GRlNFVCk7CiAKICAgICBwb3dlcl9vZjJfZW50cmll
cyA9IGdldF9vcmRlcl9mcm9tX2J5dGVzKGlvbW11LT5jbWRfYnVmZmVyLmFs
bG9jX3NpemUpICsKLSAgICAgICAgSU9NTVVfQ01EX0JVRkZFUl9QT1dFUl9P
RjJfRU5UUklFU19QRVJfUEFHRTsKKyAgICAgICAgUEFHRV9TSElGVCAtIElP
TU1VX0NNRF9CVUZGRVJfRU5UUllfT1JERVI7CiAKICAgICBlbnRyeSA9IDA7
CiAgICAgaW9tbXVfc2V0X2FkZHJfaGlfdG9fcmVnKCZlbnRyeSwgYWRkcl9o
aSk7CkBAIC0xMDAwLDkgKzEwMDAsMzEgQEAgc3RhdGljIHZvaWQgKiBfX2lu
aXQgYWxsb2NhdGVfcmluZ19idWZmZQogc3RhdGljIHZvaWQgKiBfX2luaXQg
YWxsb2NhdGVfY21kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSkK
IHsKICAgICAvKiBhbGxvY2F0ZSAnY29tbWFuZCBidWZmZXInIGluIHBvd2Vy
IG9mIDIgaW5jcmVtZW50cyBvZiA0SyAqLworICAgIHN0YXRpYyB1bnNpZ25l
ZCBpbnQgX19yZWFkX21vc3RseSBucl9lbnRzOworCisgICAgaWYgKCAhbnJf
ZW50cyApCisgICAgeworICAgICAgICB1bnNpZ25lZCBpbnQgb3JkZXI7CisK
KyAgICAgICAgLyoKKyAgICAgICAgICogV2l0aCB0aGUgcHJlc2VudCBzeW5j
aHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xvdHMgZm9yIGV2ZXJ5Cisg
ICAgICAgICAqIG9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYgYW5k
IGEgd2FpdCBjb21tYW5kKS4gIFRoZXJlIGNhbiBiZQorICAgICAgICAgKiBv
bmUgc3VjaCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gIE9u
ZSBleHRyYSBlbnRyeSBpcworICAgICAgICAgKiBuZWVkZWQgYXMgdGhlIHJp
bmcgaXMgY29uc2lkZXJlZCBmdWxsIHdoZW4gdGhlcmUncyBvbmx5IG9uZSBl
bnRyeQorICAgICAgICAgKiBsZWZ0LgorICAgICAgICAgKi8KKyAgICAgICAg
QlVJTERfQlVHX09OKENPTkZJR19OUl9DUFVTICogMiA+PSBJT01NVV9DTURf
QlVGRkVSX01BWF9FTlRSSUVTKTsKKyAgICAgICAgb3JkZXIgPSBnZXRfb3Jk
ZXJfZnJvbV9ieXRlcygobnVtX3ByZXNlbnRfY3B1cygpICogMiArIDEpIDw8
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVf
Q01EX0JVRkZFUl9FTlRSWV9PUkRFUik7CisgICAgICAgIG5yX2VudHMgPSAx
dSA8PCAob3JkZXIgKyBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JVRkZFUl9F
TlRSWV9PUkRFUik7CisKKyAgICAgICAgQU1EX0lPTU1VX0RFQlVHKCJ1c2lu
ZyAldS1lbnRyeSBjbWQgcmluZyhzKVxuIiwgbnJfZW50cyk7CisgICAgfQor
CisgICAgQlVJTERfQlVHX09OKHNpemVvZihjbWRfZW50cnlfdCkgIT0gKDF1
IDw8IElPTU1VX0NNRF9CVUZGRVJfRU5UUllfT1JERVIpKTsKKwogICAgIHJl
dHVybiBhbGxvY2F0ZV9yaW5nX2J1ZmZlcigmaW9tbXUtPmNtZF9idWZmZXIs
IHNpemVvZihjbWRfZW50cnlfdCksCi0gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIElPTU1VX0NNRF9CVUZGRVJfREVGQVVMVF9FTlRSSUVTLAot
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAiQ29tbWFuZCBCdWZm
ZXIiKTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbnJfZW50
cywgIkNvbW1hbmQgQnVmZmVyIik7CiB9CiAKIHN0YXRpYyB2b2lkICogX19p
bml0IGFsbG9jYXRlX2V2ZW50X2xvZyhzdHJ1Y3QgYW1kX2lvbW11ICppb21t
dSkKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vc3ZtL2FtZC1pb21t
dS1kZWZzLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vc3ZtL2Ft
ZC1pb21tdS1kZWZzLmgKQEAgLTIwLDkgKzIwLDYgQEAKICNpZm5kZWYgX0FT
TV9YODZfNjRfQU1EX0lPTU1VX0RFRlNfSAogI2RlZmluZSBfQVNNX1g4Nl82
NF9BTURfSU9NTVVfREVGU19ICiAKLS8qIElPTU1VIENvbW1hbmQgQnVmZmVy
IGVudHJpZXM6IGluIHBvd2VyIG9mIDIgaW5jcmVtZW50cywgbWluaW11bSBv
ZiAyNTYgKi8KLSNkZWZpbmUgSU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VO
VFJJRVMJNTEyCi0KIC8qIElPTU1VIEV2ZW50IExvZyBlbnRyaWVzOiBpbiBw
b3dlciBvZiAyIGluY3JlbWVudHMsIG1pbmltdW0gb2YgMjU2ICovCiAjZGVm
aW5lIElPTU1VX0VWRU5UX0xPR19ERUZBVUxUX0VOVFJJRVMgICAgIDUxMgog
CkBAIC0xODUsOSArMTgyLDggQEAKICNkZWZpbmUgSU9NTVVfQ01EX0JVRkZF
Ul9MRU5HVEhfTUFTSwkJMHgwRjAwMDAwMAogI2RlZmluZSBJT01NVV9DTURf
QlVGRkVSX0xFTkdUSF9TSElGVAkJMjQKIAotI2RlZmluZSBJT01NVV9DTURf
QlVGRkVSX0VOVFJZX1NJWkUJCQkxNgotI2RlZmluZSBJT01NVV9DTURfQlVG
RkVSX1BPV0VSX09GMl9FTlRSSUVTX1BFUl9QQUdFCTgKLSNkZWZpbmUgSU9N
TVVfQ01EX0JVRkZFUl9VMzJfUEVSX0VOVFJZIAkoSU9NTVVfQ01EX0JVRkZF
Ul9FTlRSWV9TSVpFIC8gNCkKKyNkZWZpbmUgSU9NTVVfQ01EX0JVRkZFUl9F
TlRSWV9PUkRFUiAgICAgICAgICAgIDQKKyNkZWZpbmUgSU9NTVVfQ01EX0JV
RkZFUl9NQVhfRU5UUklFUyAgICAgICAgICAgICgxdSA8PCAxNSkKIAogI2Rl
ZmluZSBJT01NVV9DTURfT1BDT0RFX01BU0sJCQkweEYwMDAwMDAwCiAjZGVm
aW5lIElPTU1VX0NNRF9PUENPREVfU0hJRlQJCQkyOAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.11-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.11-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZG1hci5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKQEAg
LTEyNyw2ICsxMjcsMzQgQEAgZG8gewogICAgIH0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
IH0gd2hpbGUgKDApCiAKKyNkZWZpbmUgSU9NTVVfRkxVU0hfV0FJVCh3aGF0
LCBpb21tdSwgb2Zmc2V0LCBvcCwgY29uZCwgc3RzKSAgICAgICBcCitkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOyAgICAgICAgICAgICAgIFwK
KyAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc190aW1lX3QgdGltZW91
dCA9IHN0YXJ0ICsgRE1BUl9PUEVSQVRJT05fVElNRU9VVCAqIHRocmVzaG9s
ZDsgXAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBmb3IgKCA7IDsg
KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBz
dHMgPSBvcChpb21tdS0+cmVnLCBvZmZzZXQpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBjb25kICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBicmVhazsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aHJlc2hv
bGQgfD0gdGhyZXNob2xkIDw8IDE7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBS
RUZJWCAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICAgICAiIElPTU1VIyV1OiAlcyBmbHVzaCB0YWtpbmcgdG9vIGxvbmdcbiIs
ICAgICAgICBcCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCB3
aGF0KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgdGltZW91dCA9IDA7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgfSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIGNwdV9yZWxheCgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIH0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgaWYgKCAhdGltZW91dCAp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklY
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICIgSU9NTVUjJXU6ICVzIGZsdXNoIHRvb2sgJWx1bXNcbiIsICAgICAgICAg
ICAgICAgICBcCisgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQs
IChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsgICAgXAorfSB3aGlsZSAo
IGZhbHNlICkKKwogaW50IHZ0ZF9od19jaGVjayh2b2lkKTsKIHZvaWQgZGlz
YWJsZV9wbXIoc3RydWN0IGlvbW11ICppb21tdSk7CiBpbnQgaXNfaWdkX2Ry
aGQoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkKTsKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKKysrIGIveGVuL2RyaXZl
cnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKQEAgLTM1Nyw4ICszNTcsOCBA
QCBzdGF0aWMgdm9pZCBpb21tdV9mbHVzaF93cml0ZV9idWZmZXIoc3RyCiAg
ICAgZG1hcl93cml0ZWwoaW9tbXUtPnJlZywgRE1BUl9HQ01EX1JFRywgdmFs
IHwgRE1BX0dDTURfV0JGKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBoYXJkd2Fy
ZSBjb21wbGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9tbXUsIERN
QVJfR1NUU19SRUcsIGRtYXJfcmVhZGwsCi0gICAgICAgICAgICAgICAgICAh
KHZhbCAmIERNQV9HU1RTX1dCRlMpLCB2YWwpOworICAgIElPTU1VX0ZMVVNI
X1dBSVQoIndyaXRlIGJ1ZmZlciIsIGlvbW11LCBETUFSX0dTVFNfUkVHLCBk
bWFyX3JlYWRsLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBETUFf
R1NUU19XQkZTKSwgdmFsKTsKIAogICAgIHNwaW5fdW5sb2NrX2lycXJlc3Rv
cmUoJmlvbW11LT5yZWdpc3Rlcl9sb2NrLCBmbGFncyk7CiB9CkBAIC00MDgs
OCArNDA4LDggQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgZmx1c2hfY29u
dGV4dF9yZQogICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfQ0NN
RF9SRUcsIHZhbCk7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFyZHdhcmUgY29t
cGxldGUgaXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11LCBETUFSX0ND
TURfUkVHLCBkbWFyX3JlYWRxLAotICAgICAgICAgICAgICAgICAgISh2YWwg
JiBETUFfQ0NNRF9JQ0MpLCB2YWwpOworICAgIElPTU1VX0ZMVVNIX1dBSVQo
ImNvbnRleHQiLCBpb21tdSwgRE1BUl9DQ01EX1JFRywgZG1hcl9yZWFkcSwK
KyAgICAgICAgICAgICAgICAgICAgICEodmFsICYgRE1BX0NDTURfSUNDKSwg
dmFsKTsKIAogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11LT5y
ZWdpc3Rlcl9sb2NrLCBmbGFncyk7CiAgICAgLyogZmx1c2ggY29udGV4dCBl
bnRyeSB3aWxsIGltcGxpY2l0bHkgZmx1c2ggd3JpdGUgYnVmZmVyICovCkBA
IC00OTEsOCArNDkxLDggQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgZmx1
c2hfaW90bGJfcmVnKAogICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIHRs
Yl9vZmZzZXQgKyA4LCB2YWwpOwogCiAgICAgLyogTWFrZSBzdXJlIGhhcmR3
YXJlIGNvbXBsZXRlIGl0ICovCi0gICAgSU9NTVVfV0FJVF9PUChpb21tdSwg
KHRsYl9vZmZzZXQgKyA4KSwgZG1hcl9yZWFkcSwKLSAgICAgICAgICAgICAg
ICAgICEodmFsICYgRE1BX1RMQl9JVlQpLCB2YWwpOworICAgIElPTU1VX0ZM
VVNIX1dBSVQoImlvdGxiIiwgaW9tbXUsICh0bGJfb2Zmc2V0ICsgOCksIGRt
YXJfcmVhZHEsCisgICAgICAgICAgICAgICAgICAgICAhKHZhbCAmIERNQV9U
TEJfSVZUKSwgdmFsKTsKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZp
b21tdS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogCiAgICAgLyogY2hlY2sg
SU9UTEIgaW52YWxpZGF0aW9uIGdyYW51bGFyaXR5ICovCi0tLSBhL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYworKysgYi94ZW4vZHJp
dmVycy9wYXNzdGhyb3VnaC92dGQvcWludmFsLmMKQEAgLTI5LDggKzI5LDYg
QEAKICNpbmNsdWRlICJleHRlcm4uaCIKICNpbmNsdWRlICIuLi9hdHMuaCIK
IAotI2RlZmluZSBWVERfUUlfVElNRU9VVAkxCi0KIHN0YXRpYyB1bnNpZ25l
ZCBpbnQgX19yZWFkX21vc3RseSBxaV9wZ19vcmRlcjsKIHN0YXRpYyB1bnNp
Z25lZCBpbnQgX19yZWFkX21vc3RseSBxaV9lbnRyeV9ucjsKIApAQCAtNjAs
NyArNTgsMTEgQEAgc3RhdGljIHVuc2lnbmVkIGludCBxaW52YWxfbmV4dF9p
bmRleChzdAogICAgIC8qICh0YWlsKzEgPT0gaGVhZCkgaW5kaWNhdGVzIGEg
ZnVsbCBxdWV1ZSwgd2FpdCBmb3IgSFcgKi8KICAgICB3aGlsZSAoICgodGFp
bCArIDEpICYgKHFpX2VudHJ5X25yIC0gMSkpID09CiAgICAgICAgICAgICAo
IGRtYXJfcmVhZHEoaW9tbXUtPnJlZywgRE1BUl9JUUhfUkVHKSA+PiBRSU5W
QUxfSU5ERVhfU0hJRlQgKSApCisgICAgeworICAgICAgICBwcmludGtfb25j
ZShYRU5MT0dfRVJSIFZURFBSRUZJWCAiIElPTU1VIyV1OiBubyBRSSBzbG90
IGF2YWlsYWJsZVxuIiwKKyAgICAgICAgICAgICAgICAgICAgaW9tbXUtPmlu
ZGV4KTsKICAgICAgICAgY3B1X3JlbGF4KCk7CisgICAgfQogCiAgICAgcmV0
dXJuIHRhaWw7CiB9CkBAIC0xODAsMjMgKzE4MiwzMiBAQCBzdGF0aWMgaW50
IF9fbXVzdF9jaGVjayBxdWV1ZV9pbnZhbGlkYXRlCiAgICAgLyogTm93IHdl
IGRvbid0IHN1cHBvcnQgaW50ZXJydXB0IG1ldGhvZCAqLwogICAgIGlmICgg
c3cgKQogICAgIHsKLSAgICAgICAgc190aW1lX3QgdGltZW91dDsKLQotICAg
ICAgICAvKiBJbiBjYXNlIGFsbCB3YWl0IGRlc2NyaXB0b3Igd3JpdGVzIHRv
IHNhbWUgYWRkciB3aXRoIHNhbWUgZGF0YSAqLwotICAgICAgICB0aW1lb3V0
ID0gTk9XKCkgKyBNSUxMSVNFQ1MoZmx1c2hfZGV2X2lvdGxiID8KLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGlvbW11X2Rldl9pb3Rs
Yl90aW1lb3V0IDogVlREX1FJX1RJTUVPVVQpOworICAgICAgICBzdGF0aWMg
dW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkgdGhyZXNob2xkID0gMTsKKyAg
ICAgICAgc190aW1lX3Qgc3RhcnQgPSBOT1coKTsKKyAgICAgICAgc190aW1l
X3QgdGltZW91dCA9IHN0YXJ0ICsgKGZsdXNoX2Rldl9pb3RsYgorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyBpb21tdV9kZXZfaW90
bGJfdGltZW91dAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgOiAxMDApICogTUlMTElTRUNTKHRocmVzaG9sZCk7CiAKICAgICAgICAg
d2hpbGUgKCBBQ0NFU1NfT05DRSgqdGhpc19wb2xsX3Nsb3QpICE9IFFJTlZB
TF9TVEFUX0RPTkUgKQogICAgICAgICB7Ci0gICAgICAgICAgICBpZiAoIE5P
VygpID4gdGltZW91dCApCisgICAgICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkKICAgICAgICAgICAgIHsKLSAgICAgICAgICAg
ICAgICBwcmludF9xaV9yZWdzKGlvbW11KTsKKyAgICAgICAgICAgICAgICB0
aHJlc2hvbGQgfD0gdGhyZXNob2xkIDw8IDE7CiAgICAgICAgICAgICAgICAg
cHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBSRUZJWAotICAgICAgICAgICAg
ICAgICAgICAgICAiIFF1ZXVlIGludmFsaWRhdGUgd2FpdCBkZXNjcmlwdG9y
IHRpbWVkIG91dFxuIik7Ci0gICAgICAgICAgICAgICAgcmV0dXJuIC1FVElN
RURPVVQ7CisgICAgICAgICAgICAgICAgICAgICAgICIgSU9NTVUjJXU6IFFJ
JXMgd2FpdCBkZXNjcmlwdG9yIHRha2luZyB0b28gbG9uZ1xuIiwKKyAgICAg
ICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCBmbHVzaF9kZXZfaW90
bGIgPyAiIGRldiIgOiAiIik7CisgICAgICAgICAgICAgICAgcHJpbnRfcWlf
cmVncyhpb21tdSk7CisgICAgICAgICAgICAgICAgdGltZW91dCA9IDA7CiAg
ICAgICAgICAgICB9CiAgICAgICAgICAgICBjcHVfcmVsYXgoKTsKICAgICAg
ICAgfQorCisgICAgICAgIGlmICggIXRpbWVvdXQgKQorICAgICAgICAgICAg
cHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBSRUZJWAorICAgICAgICAgICAg
ICAgICAgICIgSU9NTVUjJXU6IFFJJXMgd2FpdCBkZXNjcmlwdG9yIHRvb2sg
JWx1bXNcbiIsCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCBm
bHVzaF9kZXZfaW90bGIgPyAiIGRldiIgOiAiIiwKKyAgICAgICAgICAgICAg
ICAgICAoTk9XKCkgLSBzdGFydCkgLyAxMDAwMDAwMCk7CisKICAgICAgICAg
cmV0dXJuIDA7CiAgICAgfQogCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.11-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.11-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMiw0
OCArMjIsMzYgQEAKICNpbmNsdWRlIDxhc20vaHZtL3N2bS9hbWQtaW9tbXUt
cHJvdG8uaD4KICNpbmNsdWRlICIuLi9hdHMuaCIKIAotc3RhdGljIGludCBx
dWV1ZV9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1
MzIgY21kW10pCitzdGF0aWMgdm9pZCBzZW5kX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29uc3QgdWludDMyX3QgY21kWzRdKQogewotICAgIHVpbnQz
Ml90IHRhaWwsIGhlYWQ7CisgICAgdWludDMyX3QgdGFpbDsKIAogICAgIHRh
aWwgPSBpb21tdS0+Y21kX2J1ZmZlci50YWlsOwogICAgIGlmICggKyt0YWls
ID09IGlvbW11LT5jbWRfYnVmZmVyLmVudHJpZXMgKQogICAgICAgICB0YWls
ID0gMDsKIAotICAgIGhlYWQgPSBpb21tdV9nZXRfcmJfcG9pbnRlcihyZWFk
bChpb21tdS0+bW1pb19iYXNlICsKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU9NTVVfQ01EX0JVRkZFUl9IRUFEX09GRlNFVCkp
OwotICAgIGlmICggaGVhZCAhPSB0YWlsICkKKyAgICB3aGlsZSAoIHRhaWwg
PT0gaW9tbXVfZ2V0X3JiX3BvaW50ZXIocmVhZGwoaW9tbXUtPm1taW9fYmFz
ZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpKSApCiAgICAg
ewotICAgICAgICBtZW1jcHkoaW9tbXUtPmNtZF9idWZmZXIuYnVmZmVyICsK
LSAgICAgICAgICAgICAgIChpb21tdS0+Y21kX2J1ZmZlci50YWlsICogc2l6
ZW9mKGNtZF9lbnRyeV90KSksCi0gICAgICAgICAgICAgICBjbWQsIHNpemVv
ZihjbWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJp
bnRrX29uY2UoWEVOTE9HX0VSUgorICAgICAgICAgICAgICAgICAgICAiQU1E
IElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiBubyBjbWQgc2xvdCBhdmFpbGFi
bGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11LT5zZWcsIFBDSV9C
VVMoaW9tbXUtPmJkZiksCisgICAgICAgICAgICAgICAgICAgIFBDSV9TTE9U
KGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSk7CisgICAgICAg
IGNwdV9yZWxheCgpOwogICAgIH0KIAotICAgIHJldHVybiAwOwotfQorICAg
IG1lbWNweShpb21tdS0+Y21kX2J1ZmZlci5idWZmZXIgKworICAgICAgICAg
ICAoaW9tbXUtPmNtZF9idWZmZXIudGFpbCAqIHNpemVvZihjbWRfZW50cnlf
dCkpLAorICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
Ci1zdGF0aWMgdm9pZCBjb21taXRfaW9tbXVfY29tbWFuZF9idWZmZXIoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUpCi17Ci0gICAgdTMyIHRhaWwgPSAwOwor
ICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogCisgICAgdGFp
bCA9IDA7CiAgICAgaW9tbXVfc2V0X3JiX3BvaW50ZXIoJnRhaWwsIGlvbW11
LT5jbWRfYnVmZmVyLnRhaWwpOwogICAgIHdyaXRlbCh0YWlsLCBpb21tdS0+
bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZTRVQpOwogfQog
Ci1pbnQgc2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlv
bW11LCB1MzIgY21kW10pCi17Ci0gICAgaWYgKCBxdWV1ZV9pb21tdV9jb21t
YW5kKGlvbW11LCBjbWQpICkKLSAgICB7Ci0gICAgICAgIGNvbW1pdF9pb21t
dV9jb21tYW5kX2J1ZmZlcihpb21tdSk7Ci0gICAgICAgIHJldHVybiAxOwot
ICAgIH0KLQotICAgIHJldHVybiAwOwotfQotCiBzdGF0aWMgdm9pZCBmbHVz
aF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKIHsK
ICAgICB1MzIgY21kWzRdLCBzdGF0dXM7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.11-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.11-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQv
aW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1k
L2lvbW11X2NtZC5jCkBAIC01MiwxMCArNTIsMTIgQEAgc3RhdGljIHZvaWQg
c2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbQogICAgIHdyaXRlbCh0YWls
LCBpb21tdS0+bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZT
RVQpOwogfQogCi1zdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihz
dHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKK3N0YXRpYyB2b2lkIGZsdXNoX2Nv
bW1hbmRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHRpbWVv
dXRfYmFzZSkKIHsKLSAgICB1MzIgY21kWzRdLCBzdGF0dXM7Ci0gICAgaW50
IGxvb3BfY291bnQsIGNvbXBfd2FpdDsKKyAgICB1aW50MzJfdCBjbWRbNF07
CisgICAgc190aW1lX3Qgc3RhcnQsIHRpbWVvdXQ7CisgICAgc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7CiAKICAg
ICAvKiBSVzFDICdDb21XYWl0SW50JyBpbiBzdGF0dXMgcmVnaXN0ZXIgKi8K
ICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSywK
QEAgLTcxLDI0ICs3MywzMSBAQCBzdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5k
X2J1ZmZlcihzdHJ1Y3QKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01N
VV9DT01QX1dBSVRfSV9GTEFHX1NISUZULCAmY21kWzBdKTsKICAgICBzZW5k
X2lvbW11X2NvbW1hbmQoaW9tbXUsIGNtZCk7CiAKLSAgICAvKiBNYWtlIGxv
b3BfY291bnQgbG9uZyBlbm91Z2ggZm9yIHBvbGxpbmcgY29tcGxldGlvbiB3
YWl0IGJpdCAqLwotICAgIGxvb3BfY291bnQgPSAxMDAwOwotICAgIGRvIHsK
LSAgICAgICAgc3RhdHVzID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIGNvbXBfd2FpdCA9
IGdldF9maWVsZF9mcm9tX3JlZ191MzIoc3RhdHVzLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1VX1NUQVRVU19D
T01QX1dBSVRfSU5UX01BU0ssCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRf
U0hJRlQpOwotICAgICAgICAtLWxvb3BfY291bnQ7Ci0gICAgfSB3aGlsZSAo
ICFjb21wX3dhaXQgJiYgbG9vcF9jb3VudCApOwotCi0gICAgaWYgKCBjb21w
X3dhaXQgKQorICAgIHN0YXJ0ID0gTk9XKCk7CisgICAgdGltZW91dCA9IHN0
YXJ0ICsgKHRpbWVvdXRfYmFzZSA/OiAxMDApICogTUlMTElTRUNTKHRocmVz
aG9sZCk7CisgICAgd2hpbGUgKCAhKHJlYWRsKGlvbW11LT5tbWlvX2Jhc2Ug
KyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpICYKKyAgICAgICAgICAgICAg
SU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSykgKQogICAgIHsKLSAg
ICAgICAgLyogUlcxQyAnQ29tV2FpdEludCcgaW4gc3RhdHVzIHJlZ2lzdGVy
ICovCi0gICAgICAgIHdyaXRlbChJT01NVV9TVEFUVVNfQ09NUF9XQUlUX0lO
VF9NQVNLLAotICAgICAgICAgICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIHJldHVybjsKKyAg
ICAgICAgaWYgKCB0aW1lb3V0ICYmIE5PVygpID4gdGltZW91dCApCisgICAg
ICAgIHsKKyAgICAgICAgICAgIHRocmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwg
MTsKKyAgICAgICAgICAgIHByaW50ayhYRU5MT0dfV0FSTklORworICAgICAg
ICAgICAgICAgICAgICJBTUQgSU9NTVUgJTA0eDolMDJ4OiUwMnguJXU6ICVz
Y29tcGxldGlvbiB3YWl0IHRha2luZyB0b28gbG9uZ1xuIiwKKyAgICAgICAg
ICAgICAgICAgICBpb21tdS0+c2VnLCBQQ0lfQlVTKGlvbW11LT5iZGYpLAor
ICAgICAgICAgICAgICAgICAgIFBDSV9TTE9UKGlvbW11LT5iZGYpLCBQQ0lf
RlVOQyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgICAgICB0aW1lb3V0
X2Jhc2UgPyAiaW90bGIgIiA6ICIiKTsKKyAgICAgICAgICAgIHRpbWVvdXQg
PSAwOworICAgICAgICB9CisgICAgICAgIGNwdV9yZWxheCgpOwogICAgIH0K
LSAgICBBTURfSU9NTVVfREVCVUcoIldhcm5pbmc6IENvbVdhaXRJbnQgYml0
IGRpZCBub3QgYXNzZXJ0IVxuIik7CisKKyAgICBpZiAoICF0aW1lb3V0ICkK
KyAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HCisgICAgICAgICAgICAg
ICAiQU1EIElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiAlc2NvbXBsZXRpb24g
d2FpdCB0b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgaW9tbXUtPnNl
ZywgUENJX0JVUyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgIFBDSV9T
TE9UKGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSwKKyAgICAg
ICAgICAgICAgIHRpbWVvdXRfYmFzZSA/ICJpb3RsYiAiIDogIiIsCisgICAg
ICAgICAgICAgICAoTk9XKCkgLSBzdGFydCkgLyAxMDAwMDAwMCk7CiB9CiAK
IC8qIEJ1aWxkIGxvdyBsZXZlbCBpb21tdSBjb21tYW5kIG1lc3NhZ2VzICov
CkBAIC0zMDAsNyArMzA5LDcgQEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW90
bGIodTggZGV2Zm4sIGNvbgogICAgIC8qIHNlbmQgSU5WQUxJREFURV9JT1RM
Ql9QQUdFUyBjb21tYW5kICovCiAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlv
bW11LT5sb2NrLCBmbGFncyk7CiAgICAgaW52YWxpZGF0ZV9pb3RsYl9wYWdl
cyhpb21tdSwgbWF4cGVuZCwgMCwgcXVldWVpZCwgZ2FkZHIsIHJlcV9pZCwg
b3JkZXIpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgaW9tbXVfZGV2X2lvdGxi
X3RpbWVvdXQpOwogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11
LT5sb2NrLCBmbGFncyk7CiB9CiAKQEAgLTMzNyw3ICszNDYsNyBAQCBzdGF0
aWMgdm9pZCBfYW1kX2lvbW11X2ZsdXNoX3BhZ2VzKHN0cnVjCiAgICAgewog
ICAgICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdz
KTsKICAgICAgICAgaW52YWxpZGF0ZV9pb21tdV9wYWdlcyhpb21tdSwgZ2Fk
ZHIsIGRvbV9pZCwgb3JkZXIpOwotICAgICAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSk7CisgICAgICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlv
bW11LCAwKTsKICAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9t
bXUtPmxvY2ssIGZsYWdzKTsKICAgICB9CiAKQEAgLTM2MSw3ICszNzAsNyBA
QCB2b2lkIGFtZF9pb21tdV9mbHVzaF9kZXZpY2Uoc3RydWN0IGFtZF9pCiAg
ICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAK
ICAgICBpbnZhbGlkYXRlX2Rldl90YWJsZV9lbnRyeShpb21tdSwgYmRmKTsK
LSAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hf
Y29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21t
dV9mbHVzaF9pbnRyZW1hcChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwgdWlu
dDE2X3QgYmRmKQpAQCAtMzY5LDcgKzM3OCw3IEBAIHZvaWQgYW1kX2lvbW11
X2ZsdXNoX2ludHJlbWFwKHN0cnVjdCBhbWQKICAgICBBU1NFUlQoIHNwaW5f
aXNfbG9ja2VkKCZpb21tdS0+bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVf
aW50ZXJydXB0X3RhYmxlKGlvbW11LCBiZGYpOwotICAgIGZsdXNoX2NvbW1h
bmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9jYWNo
ZXMoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUpCkBAIC0zNzcsNyArMzg2LDcg
QEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfYWxsX2NhY2hlcyhzdHJ1Y3QgYQog
ICAgIEFTU0VSVCggc3Bpbl9pc19sb2NrZWQoJmlvbW11LT5sb2NrKSApOwog
CiAgICAgaW52YWxpZGF0ZV9pb21tdV9hbGwoaW9tbXUpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X3NlbmRfZ3Vl
c3RfY21kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1MzIgY21kW10pCkBA
IC0zODcsNyArMzk2LDggQEAgdm9pZCBhbWRfaW9tbXVfc2VuZF9ndWVzdF9j
bWQoc3RydWN0IGFtZAogICAgIHNwaW5fbG9ja19pcnFzYXZlKCZpb21tdS0+
bG9jaywgZmxhZ3MpOwogCiAgICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11
LCBjbWQpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICAvKiBUQkQ6IFRpbWVvdXQgc2VsZWN0aW9uIG1heSByZXF1aXJlIHBlZWtp
bmcgaW50byBjbWRbXS4gKi8KKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+bG9jaywgZmxhZ3MpOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.12-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDUwLDE3ICs0NTAsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IGlv
bW11ICppb21tdSk7CiAKIHN0YXRpYyB2b2lkIHByaW50X3FpX3JlZ3Moc3Ry
dWN0IGlvbW11ICppb21tdSkKQEAgLTU1LDcgKzU4LDcgQEAgc3RhdGljIHVu
c2lnbmVkIGludCBxaW52YWxfbmV4dF9pbmRleChzdAogICAgIHRhaWwgPj49
IFFJTlZBTF9JTkRFWF9TSElGVDsKIAogICAgIC8qICh0YWlsKzEgPT0gaGVh
ZCkgaW5kaWNhdGVzIGEgZnVsbCBxdWV1ZSwgd2FpdCBmb3IgSFcgKi8KLSAg
ICB3aGlsZSAoICggdGFpbCArIDEgKSAlIFFJTlZBTF9FTlRSWV9OUiA9PQor
ICAgIHdoaWxlICggKCh0YWlsICsgMSkgJiAocWlfZW50cnlfbnIgLSAxKSkg
PT0KICAgICAgICAgICAgICggZG1hcl9yZWFkcShpb21tdS0+cmVnLCBETUFS
X0lRSF9SRUcpID4+IFFJTlZBTF9JTkRFWF9TSElGVCApICkKICAgICAgICAg
Y3B1X3JlbGF4KCk7CiAKQEAgLTY4LDcgKzcxLDcgQEAgc3RhdGljIHZvaWQg
cWludmFsX3VwZGF0ZV9xdGFpbChzdHJ1Y3QgaQogCiAgICAgLyogTmVlZCBo
b2xkIHJlZ2lzdGVyIGxvY2sgd2hlbiB1cGRhdGUgdGFpbCAqLwogICAgIEFT
U0VSVCggc3Bpbl9pc19sb2NrZWQoJmlvbW11LT5yZWdpc3Rlcl9sb2NrKSAp
OwotICAgIHZhbCA9IChpbmRleCArIDEpICUgUUlOVkFMX0VOVFJZX05SOwor
ICAgIHZhbCA9IChpbmRleCArIDEpICYgKHFpX2VudHJ5X25yIC0gMSk7CiAg
ICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgRE1BUl9JUVRfUkVHLCAodmFs
IDw8IFFJTlZBTF9JTkRFWF9TSElGVCkpOwogfQogCkBAIC00MTcsNyArNDIw
LDI3IEBAIGludCBlbmFibGVfcWludmFsKHN0cnVjdCBpb21tdSAqaW9tbXUp
CiAgICAgaWYgKCBxaV9jdHJsLT5xaW52YWxfbWFkZHIgPT0gMCApCiAgICAg
ewogICAgICAgICBkcmhkID0gaW9tbXVfdG9fZHJoZChpb21tdSk7Ci0gICAg
ICAgIHFpX2N0cmwtPnFpbnZhbF9tYWRkciA9IGFsbG9jX3BndGFibGVfbWFk
ZHIoZHJoZCwgUUlOVkFMX0FSQ0hfUEFHRV9OUik7CisgICAgICAgIGlmICgg
IXFpX2VudHJ5X25yICkKKyAgICAgICAgeworICAgICAgICAgICAgLyoKKyAg
ICAgICAgICAgICAqIFdpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9k
ZWwsIHdlIG5lZWQgdHdvIHNsb3RzIGZvciBldmVyeQorICAgICAgICAgICAg
ICogb3BlcmF0aW9uICh0aGUgb3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0
IGRlc2NyaXB0b3IpLiAgVGhlcmUKKyAgICAgICAgICAgICAqIGNhbiBiZSBv
bmUgc3VjaCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gIE9u
ZSBleHRyYQorICAgICAgICAgICAgICogZW50cnkgaXMgbmVlZGVkIGFzIHRo
ZSByaW5nIGlzIGNvbnNpZGVyZWQgZnVsbCB3aGVuIHRoZXJlJ3MKKyAgICAg
ICAgICAgICAqIG9ubHkgb25lIGVudHJ5IGxlZnQuCisgICAgICAgICAgICAg
Ki8KKyAgICAgICAgICAgIEJVSUxEX0JVR19PTihDT05GSUdfTlJfQ1BVUyAq
IDIgPj0gUUlOVkFMX01BWF9FTlRSWV9OUik7CisgICAgICAgICAgICBxaV9w
Z19vcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5dGVzKChudW1fcHJlc2VudF9j
cHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgKFBBR0VfU0hJRlQgLQorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgUUlOVkFM
X0VOVFJZX09SREVSKSk7CisgICAgICAgICAgICBxaV9lbnRyeV9uciA9IDF1
IDw8IChxaV9wZ19vcmRlciArIFFJTlZBTF9FTlRSWV9PUkRFUik7CisKKyAg
ICAgICAgICAgIGRwcmludGsoWEVOTE9HX0lORk8gVlREUFJFRklYLAorICAg
ICAgICAgICAgICAgICAgICAiUUk6IHVzaW5nICV1LWVudHJ5IHJpbmcocylc
biIsIHFpX2VudHJ5X25yKTsKKyAgICAgICAgfQorCisgICAgICAgIHFpX2N0
cmwtPnFpbnZhbF9tYWRkciA9CisgICAgICAgICAgICBhbGxvY19wZ3RhYmxl
X21hZGRyKGRyaGQsIHFpX2VudHJ5X25yID4+IFFJTlZBTF9FTlRSWV9PUkRF
Uik7CiAgICAgICAgIGlmICggcWlfY3RybC0+cWludmFsX21hZGRyID09IDAg
KQogICAgICAgICB7CiAgICAgICAgICAgICBkcHJpbnRrKFhFTkxPR19XQVJO
SU5HIFZURFBSRUZJWCwKQEAgLTQzMSwxNSArNDU0LDE2IEBAIGludCBlbmFi
bGVfcWludmFsKHN0cnVjdCBpb21tdSAqaW9tbXUpCiAKICAgICBzcGluX2xv
Y2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsKIAot
ICAgIC8qIFNldHVwIEludmFsaWRhdGlvbiBRdWV1ZSBBZGRyZXNzKElRQSkg
cmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAgKiBhZGRyZXNzIG9mIHRoZSBwYWdl
IHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMgZmllbGQgYXQKLSAgICAgKiBiaXRz
WzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBvZiBxdWV1ZSBpcyBvbmUgNEtCIHBh
Z2UuCi0gICAgICogVGhhdCdzIDI1NiBlbnRyaWVzLiAgUXVldWVkIEhlYWQg
KElRSCkgYW5kIFF1ZXVlIFRhaWwgKElRVCkKLSAgICAgKiByZWdpc3RlcnMg
YXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQgdG8gMCB3aXRoIHdyaXRlCi0gICAg
ICogdG8gSVFBIHJlZ2lzdGVyLgorICAgIC8qCisgICAgICogU2V0dXAgSW52
YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3MgKElRQSkgcmVnaXN0ZXIgd2l0aCB0
aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAgKiBwYWdlcyB3ZSBqdXN0IGFsbG9j
YXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBiaXRzWzI6MF0gaW5kaWNhdGVzIHRo
ZSBzaXplCisgICAgICogKHBhZ2Ugb3JkZXIpIG9mIHRoZSBxdWV1ZS4KKyAg
ICAgKgorICAgICAqIFF1ZXVlZCBIZWFkIChJUUgpIGFuZCBRdWV1ZSBUYWls
IChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0b21hdGljYWxseQorICAgICAqIHJl
c2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJUUEgcmVnaXN0ZXIuCiAgICAgICov
CiAgICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgRE1BUl9JUUFfUkVHLAot
ICAgICAgICAgICAgICAgIHFpX2N0cmwtPnFpbnZhbF9tYWRkciB8IFFJTlZB
TF9QQUdFX09SREVSKTsKKyAgICAgICAgICAgICAgICBxaV9jdHJsLT5xaW52
YWxfbWFkZHIgfCBxaV9wZ19vcmRlcik7CiAKICAgICBkbWFyX3dyaXRlcShp
b21tdS0+cmVnLCBETUFSX0lRVF9SRUcsIDApOwogCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.12-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0zNSw4ICszNSw4IEBAIHN0YXRpYyBp
bnQgcXVldWVfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW0KICAgICBpZiAoIGhl
YWQgIT0gdGFpbCApCiAgICAgewogICAgICAgICBtZW1jcHkoaW9tbXUtPmNt
ZF9idWZmZXIuYnVmZmVyICsKLSAgICAgICAgICAgICAgIChpb21tdS0+Y21k
X2J1ZmZlci50YWlsICogSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9TSVpFKSwK
LSAgICAgICAgICAgICAgIGNtZCwgSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9T
SVpFKTsKKyAgICAgICAgICAgICAgIChpb21tdS0+Y21kX2J1ZmZlci50YWls
ICogc2l6ZW9mKGNtZF9lbnRyeV90KSksCisgICAgICAgICAgICAgICBjbWQs
IHNpemVvZihjbWRfZW50cnlfdCkpOwogCiAgICAgICAgIGlvbW11LT5jbWRf
YnVmZmVyLnRhaWwgPSB0YWlsOwogICAgICAgICByZXR1cm4gMTsKLS0tIGEv
eGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2luaXQuYworKysg
Yi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfaW5pdC5jCkBA
IC0xMzYsNyArMTM2LDcgQEAgc3RhdGljIHZvaWQgcmVnaXN0ZXJfaW9tbXVf
Y21kX2J1ZmZlcl9pbgogICAgIHdyaXRlbChlbnRyeSwgaW9tbXUtPm1taW9f
YmFzZSArIElPTU1VX0NNRF9CVUZGRVJfQkFTRV9MT1dfT0ZGU0VUKTsKIAog
ICAgIHBvd2VyX29mMl9lbnRyaWVzID0gZ2V0X29yZGVyX2Zyb21fYnl0ZXMo
aW9tbXUtPmNtZF9idWZmZXIuYWxsb2Nfc2l6ZSkgKwotICAgICAgICBJT01N
VV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVTX1BFUl9QQUdFOworICAg
ICAgICBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9PUkRF
UjsKIAogICAgIGVudHJ5ID0gMDsKICAgICBpb21tdV9zZXRfYWRkcl9oaV90
b19yZWcoJmVudHJ5LCBhZGRyX2hpKTsKQEAgLTEwMDAsOSArMTAwMCwzMSBA
QCBzdGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9yaW5nX2J1ZmZlCiBz
dGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9jbWRfYnVmZmVyKHN0cnVj
dCBhbWRfaW9tbXUgKmlvbW11KQogewogICAgIC8qIGFsbG9jYXRlICdjb21t
YW5kIGJ1ZmZlcicgaW4gcG93ZXIgb2YgMiBpbmNyZW1lbnRzIG9mIDRLICov
CisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IG5yX2Vu
dHM7CisKKyAgICBpZiAoICFucl9lbnRzICkKKyAgICB7CisgICAgICAgIHVu
c2lnbmVkIGludCBvcmRlcjsKKworICAgICAgICAvKgorICAgICAgICAgKiBX
aXRoIHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3
byBzbG90cyBmb3IgZXZlcnkKKyAgICAgICAgICogb3BlcmF0aW9uICh0aGUg
b3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0IGNvbW1hbmQpLiAgVGhlcmUg
Y2FuIGJlCisgICAgICAgICAqIG9uZSBzdWNoIHBhaXIgb2YgcmVxdWVzdHMg
cGVuZGluZyBwZXIgQ1BVLiAgT25lIGV4dHJhIGVudHJ5IGlzCisgICAgICAg
ICAqIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRlcmVkIGZ1bGwgd2hl
biB0aGVyZSdzIG9ubHkgb25lIGVudHJ5CisgICAgICAgICAqIGxlZnQuCisg
ICAgICAgICAqLworICAgICAgICBCVUlMRF9CVUdfT04oQ09ORklHX05SX0NQ
VVMgKiAyID49IElPTU1VX0NNRF9CVUZGRVJfTUFYX0VOVFJJRVMpOworICAg
ICAgICBvcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5dGVzKChudW1fcHJlc2Vu
dF9jcHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSKTsK
KyAgICAgICAgbnJfZW50cyA9IDF1IDw8IChvcmRlciArIFBBR0VfU0hJRlQg
LSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSKTsKKworICAgICAgICBB
TURfSU9NTVVfREVCVUcoInVzaW5nICV1LWVudHJ5IGNtZCByaW5nKHMpXG4i
LCBucl9lbnRzKTsKKyAgICB9CisKKyAgICBCVUlMRF9CVUdfT04oc2l6ZW9m
KGNtZF9lbnRyeV90KSAhPSAoMXUgPDwgSU9NTVVfQ01EX0JVRkZFUl9FTlRS
WV9PUkRFUikpOworCiAgICAgcmV0dXJuIGFsbG9jYXRlX3JpbmdfYnVmZmVy
KCZpb21tdS0+Y21kX2J1ZmZlciwgc2l6ZW9mKGNtZF9lbnRyeV90KSwKLSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfQ01EX0JVRkZF
Ul9ERUZBVUxUX0VOVFJJRVMsCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICJDb21tYW5kIEJ1ZmZlciIpOworICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBucl9lbnRzLCAiQ29tbWFuZCBCdWZmZXIiKTsKIH0K
IAogc3RhdGljIHZvaWQgKiBfX2luaXQgYWxsb2NhdGVfZXZlbnRfbG9nKHN0
cnVjdCBhbWRfaW9tbXUgKmlvbW11KQotLS0gYS94ZW4vaW5jbHVkZS9hc20t
eDg2L2h2bS9zdm0vYW1kLWlvbW11LWRlZnMuaAorKysgYi94ZW4vaW5jbHVk
ZS9hc20teDg2L2h2bS9zdm0vYW1kLWlvbW11LWRlZnMuaApAQCAtMjAsOSAr
MjAsNiBAQAogI2lmbmRlZiBfQVNNX1g4Nl82NF9BTURfSU9NTVVfREVGU19I
CiAjZGVmaW5lIF9BU01fWDg2XzY0X0FNRF9JT01NVV9ERUZTX0gKIAotLyog
SU9NTVUgQ29tbWFuZCBCdWZmZXIgZW50cmllczogaW4gcG93ZXIgb2YgMiBp
bmNyZW1lbnRzLCBtaW5pbXVtIG9mIDI1NiAqLwotI2RlZmluZSBJT01NVV9D
TURfQlVGRkVSX0RFRkFVTFRfRU5UUklFUwk1MTIKLQogLyogSU9NTVUgRXZl
bnQgTG9nIGVudHJpZXM6IGluIHBvd2VyIG9mIDIgaW5jcmVtZW50cywgbWlu
aW11bSBvZiAyNTYgKi8KICNkZWZpbmUgSU9NTVVfRVZFTlRfTE9HX0RFRkFV
TFRfRU5UUklFUyAgICAgNTEyCiAKQEAgLTE4NCw4ICsxODEsOCBAQAogI2Rl
ZmluZSBJT01NVV9DTURfQlVGRkVSX0xFTkdUSF9NQVNLCQkweDBGMDAwMDAw
CiAjZGVmaW5lIElPTU1VX0NNRF9CVUZGRVJfTEVOR1RIX1NISUZUCQkyNAog
Ci0jZGVmaW5lIElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRQkJCTE2Ci0j
ZGVmaW5lIElPTU1VX0NNRF9CVUZGRVJfUE9XRVJfT0YyX0VOVFJJRVNfUEVS
X1BBR0UJOAorI2RlZmluZSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVS
ICAgICAgICAgICAgNAorI2RlZmluZSBJT01NVV9DTURfQlVGRkVSX01BWF9F
TlRSSUVTICAgICAgICAgICAgKDF1IDw8IDE1KQogCiAjZGVmaW5lIElPTU1V
X0NNRF9PUENPREVfTUFTSwkJCTB4RjAwMDAwMDAKICNkZWZpbmUgSU9NTVVf
Q01EX09QQ09ERV9TSElGVAkJCTI4Cg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.12-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.12-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZG1hci5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKQEAg
LTEyNyw2ICsxMjcsMzQgQEAgZG8gewogICAgIH0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
IH0gd2hpbGUgKDApCiAKKyNkZWZpbmUgSU9NTVVfRkxVU0hfV0FJVCh3aGF0
LCBpb21tdSwgb2Zmc2V0LCBvcCwgY29uZCwgc3RzKSAgICAgICBcCitkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOyAgICAgICAgICAgICAgIFwK
KyAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc190aW1lX3QgdGltZW91
dCA9IHN0YXJ0ICsgRE1BUl9PUEVSQVRJT05fVElNRU9VVCAqIHRocmVzaG9s
ZDsgXAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBmb3IgKCA7IDsg
KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBz
dHMgPSBvcChpb21tdS0+cmVnLCBvZmZzZXQpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBjb25kICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBicmVhazsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aHJlc2hv
bGQgfD0gdGhyZXNob2xkIDw8IDE7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBS
RUZJWCAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICAgICAiIElPTU1VIyV1OiAlcyBmbHVzaCB0YWtpbmcgdG9vIGxvbmdcbiIs
ICAgICAgICBcCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCB3
aGF0KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgdGltZW91dCA9IDA7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgfSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIGNwdV9yZWxheCgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIH0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgaWYgKCAhdGltZW91dCAp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklY
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICIgSU9NTVUjJXU6ICVzIGZsdXNoIHRvb2sgJWx1bXNcbiIsICAgICAgICAg
ICAgICAgICBcCisgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQs
IChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsgICAgXAorfSB3aGlsZSAo
IGZhbHNlICkKKwogaW50IHZ0ZF9od19jaGVjayh2b2lkKTsKIHZvaWQgZGlz
YWJsZV9wbXIoc3RydWN0IGlvbW11ICppb21tdSk7CiBpbnQgaXNfaWdkX2Ry
aGQoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkKTsKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKKysrIGIveGVuL2RyaXZl
cnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKQEAgLTM1Nyw4ICszNTcsOCBA
QCBzdGF0aWMgdm9pZCBpb21tdV9mbHVzaF93cml0ZV9idWZmZXIoc3RyCiAg
ICAgZG1hcl93cml0ZWwoaW9tbXUtPnJlZywgRE1BUl9HQ01EX1JFRywgdmFs
IHwgRE1BX0dDTURfV0JGKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBoYXJkd2Fy
ZSBjb21wbGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9tbXUsIERN
QVJfR1NUU19SRUcsIGRtYXJfcmVhZGwsCi0gICAgICAgICAgICAgICAgICAh
KHZhbCAmIERNQV9HU1RTX1dCRlMpLCB2YWwpOworICAgIElPTU1VX0ZMVVNI
X1dBSVQoIndyaXRlIGJ1ZmZlciIsIGlvbW11LCBETUFSX0dTVFNfUkVHLCBk
bWFyX3JlYWRsLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBETUFf
R1NUU19XQkZTKSwgdmFsKTsKIAogICAgIHNwaW5fdW5sb2NrX2lycXJlc3Rv
cmUoJmlvbW11LT5yZWdpc3Rlcl9sb2NrLCBmbGFncyk7CiB9CkBAIC00MDgs
OCArNDA4LDggQEAgaW50IHZ0ZF9mbHVzaF9jb250ZXh0X3JlZyh2b2lkICpf
aW9tbXUsCiAgICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgRE1BUl9DQ01E
X1JFRywgdmFsKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBoYXJkd2FyZSBjb21w
bGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9tbXUsIERNQVJfQ0NN
RF9SRUcsIGRtYXJfcmVhZHEsCi0gICAgICAgICAgICAgICAgICAhKHZhbCAm
IERNQV9DQ01EX0lDQyksIHZhbCk7CisgICAgSU9NTVVfRkxVU0hfV0FJVCgi
Y29udGV4dCIsIGlvbW11LCBETUFSX0NDTURfUkVHLCBkbWFyX3JlYWRxLAor
ICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBETUFfQ0NNRF9JQ0MpLCB2
YWwpOwogCiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9tbXUtPnJl
Z2lzdGVyX2xvY2ssIGZsYWdzKTsKICAgICAvKiBmbHVzaCBjb250ZXh0IGVu
dHJ5IHdpbGwgaW1wbGljaXRseSBmbHVzaCB3cml0ZSBidWZmZXIgKi8KQEAg
LTQ5MSw4ICs0OTEsOCBAQCBpbnQgdnRkX2ZsdXNoX2lvdGxiX3JlZyh2b2lk
ICpfaW9tbXUsIHVpCiAgICAgZG1hcl93cml0ZXEoaW9tbXUtPnJlZywgdGxi
X29mZnNldCArIDgsIHZhbCk7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFyZHdh
cmUgY29tcGxldGUgaXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11LCAo
dGxiX29mZnNldCArIDgpLCBkbWFyX3JlYWRxLAotICAgICAgICAgICAgICAg
ICAgISh2YWwgJiBETUFfVExCX0lWVCksIHZhbCk7CisgICAgSU9NTVVfRkxV
U0hfV0FJVCgiaW90bGIiLCBpb21tdSwgKHRsYl9vZmZzZXQgKyA4KSwgZG1h
cl9yZWFkcSwKKyAgICAgICAgICAgICAgICAgICAgICEodmFsICYgRE1BX1RM
Ql9JVlQpLCB2YWwpOwogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlv
bW11LT5yZWdpc3Rlcl9sb2NrLCBmbGFncyk7CiAKICAgICAvKiBjaGVjayBJ
T1RMQiBpbnZhbGlkYXRpb24gZ3JhbnVsYXJpdHkgKi8KLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMjksOCArMjksNiBA
QAogI2luY2x1ZGUgImV4dGVybi5oIgogI2luY2x1ZGUgIi4uL2F0cy5oIgog
Ci0jZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKLQogc3RhdGljIHVuc2lnbmVk
IGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOwogc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOwogCkBAIC02MCw3
ICs1OCwxMSBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHFpbnZhbF9uZXh0X2lu
ZGV4KHN0CiAgICAgLyogKHRhaWwrMSA9PSBoZWFkKSBpbmRpY2F0ZXMgYSBm
dWxsIHF1ZXVlLCB3YWl0IGZvciBIVyAqLwogICAgIHdoaWxlICggKCh0YWls
ICsgMSkgJiAocWlfZW50cnlfbnIgLSAxKSkgPT0KICAgICAgICAgICAgICgg
ZG1hcl9yZWFkcShpb21tdS0+cmVnLCBETUFSX0lRSF9SRUcpID4+IFFJTlZB
TF9JTkRFWF9TSElGVCApICkKKyAgICB7CisgICAgICAgIHByaW50a19vbmNl
KFhFTkxPR19FUlIgVlREUFJFRklYICIgSU9NTVUjJXU6IG5vIFFJIHNsb3Qg
YXZhaWxhYmxlXG4iLAorICAgICAgICAgICAgICAgICAgICBpb21tdS0+aW5k
ZXgpOwogICAgICAgICBjcHVfcmVsYXgoKTsKKyAgICB9CiAKICAgICByZXR1
cm4gdGFpbDsKIH0KQEAgLTE4MCwyMyArMTgyLDMyIEBAIHN0YXRpYyBpbnQg
X19tdXN0X2NoZWNrIHF1ZXVlX2ludmFsaWRhdGUKICAgICAvKiBOb3cgd2Ug
ZG9uJ3Qgc3VwcG9ydCBpbnRlcnJ1cHQgbWV0aG9kICovCiAgICAgaWYgKCBz
dyApCiAgICAgewotICAgICAgICBzX3RpbWVfdCB0aW1lb3V0OwotCi0gICAg
ICAgIC8qIEluIGNhc2UgYWxsIHdhaXQgZGVzY3JpcHRvciB3cml0ZXMgdG8g
c2FtZSBhZGRyIHdpdGggc2FtZSBkYXRhICovCi0gICAgICAgIHRpbWVvdXQg
PSBOT1coKSArIE1JTExJU0VDUyhmbHVzaF9kZXZfaW90bGIgPwotICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW9tbXVfZGV2X2lvdGxi
X3RpbWVvdXQgOiBWVERfUUlfVElNRU9VVCk7CisgICAgICAgIHN0YXRpYyB1
bnNpZ25lZCBpbnQgX19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOworICAg
ICAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOworICAgICAgICBzX3RpbWVf
dCB0aW1lb3V0ID0gc3RhcnQgKyAoZmx1c2hfZGV2X2lvdGxiCisgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICA/IGlvbW11X2Rldl9pb3Rs
Yl90aW1lb3V0CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICA6IDEwMCkgKiBNSUxMSVNFQ1ModGhyZXNob2xkKTsKIAogICAgICAgICB3
aGlsZSAoIEFDQ0VTU19PTkNFKCp0aGlzX3BvbGxfc2xvdCkgIT0gUUlOVkFM
X1NUQVRfRE9ORSApCiAgICAgICAgIHsKLSAgICAgICAgICAgIGlmICggTk9X
KCkgPiB0aW1lb3V0ICkKKyAgICAgICAgICAgIGlmICggdGltZW91dCAmJiBO
T1coKSA+IHRpbWVvdXQgKQogICAgICAgICAgICAgewotICAgICAgICAgICAg
ICAgIHByaW50X3FpX3JlZ3MoaW9tbXUpOworICAgICAgICAgICAgICAgIHRo
cmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwgMTsKICAgICAgICAgICAgICAgICBw
cmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklYCi0gICAgICAgICAgICAg
ICAgICAgICAgICIgUXVldWUgaW52YWxpZGF0ZSB3YWl0IGRlc2NyaXB0b3Ig
dGltZWQgb3V0XG4iKTsKLSAgICAgICAgICAgICAgICByZXR1cm4gLUVUSU1F
RE9VVDsKKyAgICAgICAgICAgICAgICAgICAgICAgIiBJT01NVSMldTogUUkl
cyB3YWl0IGRlc2NyaXB0b3IgdGFraW5nIHRvbyBsb25nXG4iLAorICAgICAg
ICAgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIGZsdXNoX2Rldl9pb3Rs
YiA/ICIgZGV2IiA6ICIiKTsKKyAgICAgICAgICAgICAgICBwcmludF9xaV9y
ZWdzKGlvbW11KTsKKyAgICAgICAgICAgICAgICB0aW1lb3V0ID0gMDsKICAg
ICAgICAgICAgIH0KICAgICAgICAgICAgIGNwdV9yZWxheCgpOwogICAgICAg
ICB9CisKKyAgICAgICAgaWYgKCAhdGltZW91dCApCisgICAgICAgICAgICBw
cmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklYCisgICAgICAgICAgICAg
ICAgICAgIiBJT01NVSMldTogUUklcyB3YWl0IGRlc2NyaXB0b3IgdG9vayAl
bHVtc1xuIiwKKyAgICAgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIGZs
dXNoX2Rldl9pb3RsYiA/ICIgZGV2IiA6ICIiLAorICAgICAgICAgICAgICAg
ICAgIChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsKKwogICAgICAgICBy
ZXR1cm4gMDsKICAgICB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.12-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.12-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMiw0
OCArMjIsMzYgQEAKICNpbmNsdWRlIDxhc20vaHZtL3N2bS9hbWQtaW9tbXUt
cHJvdG8uaD4KICNpbmNsdWRlICIuLi9hdHMuaCIKIAotc3RhdGljIGludCBx
dWV1ZV9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1
MzIgY21kW10pCitzdGF0aWMgdm9pZCBzZW5kX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29uc3QgdWludDMyX3QgY21kWzRdKQogewotICAgIHVpbnQz
Ml90IHRhaWwsIGhlYWQ7CisgICAgdWludDMyX3QgdGFpbDsKIAogICAgIHRh
aWwgPSBpb21tdS0+Y21kX2J1ZmZlci50YWlsOwogICAgIGlmICggKyt0YWls
ID09IGlvbW11LT5jbWRfYnVmZmVyLmVudHJpZXMgKQogICAgICAgICB0YWls
ID0gMDsKIAotICAgIGhlYWQgPSBpb21tdV9nZXRfcmJfcG9pbnRlcihyZWFk
bChpb21tdS0+bW1pb19iYXNlICsKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU9NTVVfQ01EX0JVRkZFUl9IRUFEX09GRlNFVCkp
OwotICAgIGlmICggaGVhZCAhPSB0YWlsICkKKyAgICB3aGlsZSAoIHRhaWwg
PT0gaW9tbXVfZ2V0X3JiX3BvaW50ZXIocmVhZGwoaW9tbXUtPm1taW9fYmFz
ZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpKSApCiAgICAg
ewotICAgICAgICBtZW1jcHkoaW9tbXUtPmNtZF9idWZmZXIuYnVmZmVyICsK
LSAgICAgICAgICAgICAgIChpb21tdS0+Y21kX2J1ZmZlci50YWlsICogc2l6
ZW9mKGNtZF9lbnRyeV90KSksCi0gICAgICAgICAgICAgICBjbWQsIHNpemVv
ZihjbWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJp
bnRrX29uY2UoWEVOTE9HX0VSUgorICAgICAgICAgICAgICAgICAgICAiQU1E
IElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiBubyBjbWQgc2xvdCBhdmFpbGFi
bGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11LT5zZWcsIFBDSV9C
VVMoaW9tbXUtPmJkZiksCisgICAgICAgICAgICAgICAgICAgIFBDSV9TTE9U
KGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSk7CisgICAgICAg
IGNwdV9yZWxheCgpOwogICAgIH0KIAotICAgIHJldHVybiAwOwotfQorICAg
IG1lbWNweShpb21tdS0+Y21kX2J1ZmZlci5idWZmZXIgKworICAgICAgICAg
ICAoaW9tbXUtPmNtZF9idWZmZXIudGFpbCAqIHNpemVvZihjbWRfZW50cnlf
dCkpLAorICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
Ci1zdGF0aWMgdm9pZCBjb21taXRfaW9tbXVfY29tbWFuZF9idWZmZXIoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUpCi17Ci0gICAgdTMyIHRhaWwgPSAwOwor
ICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogCisgICAgdGFp
bCA9IDA7CiAgICAgaW9tbXVfc2V0X3JiX3BvaW50ZXIoJnRhaWwsIGlvbW11
LT5jbWRfYnVmZmVyLnRhaWwpOwogICAgIHdyaXRlbCh0YWlsLCBpb21tdS0+
bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZTRVQpOwogfQog
Ci1pbnQgc2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlv
bW11LCB1MzIgY21kW10pCi17Ci0gICAgaWYgKCBxdWV1ZV9pb21tdV9jb21t
YW5kKGlvbW11LCBjbWQpICkKLSAgICB7Ci0gICAgICAgIGNvbW1pdF9pb21t
dV9jb21tYW5kX2J1ZmZlcihpb21tdSk7Ci0gICAgICAgIHJldHVybiAxOwot
ICAgIH0KLQotICAgIHJldHVybiAwOwotfQotCiBzdGF0aWMgdm9pZCBmbHVz
aF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKIHsK
ICAgICB1MzIgY21kWzRdLCBzdGF0dXM7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.12-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.12-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQv
aW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1k
L2lvbW11X2NtZC5jCkBAIC01MiwxMCArNTIsMTIgQEAgc3RhdGljIHZvaWQg
c2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbQogICAgIHdyaXRlbCh0YWls
LCBpb21tdS0+bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZT
RVQpOwogfQogCi1zdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihz
dHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKK3N0YXRpYyB2b2lkIGZsdXNoX2Nv
bW1hbmRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHRpbWVv
dXRfYmFzZSkKIHsKLSAgICB1MzIgY21kWzRdLCBzdGF0dXM7Ci0gICAgaW50
IGxvb3BfY291bnQsIGNvbXBfd2FpdDsKKyAgICB1aW50MzJfdCBjbWRbNF07
CisgICAgc190aW1lX3Qgc3RhcnQsIHRpbWVvdXQ7CisgICAgc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7CiAKICAg
ICAvKiBSVzFDICdDb21XYWl0SW50JyBpbiBzdGF0dXMgcmVnaXN0ZXIgKi8K
ICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSywK
QEAgLTcxLDI0ICs3MywzMSBAQCBzdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5k
X2J1ZmZlcihzdHJ1Y3QKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01N
VV9DT01QX1dBSVRfSV9GTEFHX1NISUZULCAmY21kWzBdKTsKICAgICBzZW5k
X2lvbW11X2NvbW1hbmQoaW9tbXUsIGNtZCk7CiAKLSAgICAvKiBNYWtlIGxv
b3BfY291bnQgbG9uZyBlbm91Z2ggZm9yIHBvbGxpbmcgY29tcGxldGlvbiB3
YWl0IGJpdCAqLwotICAgIGxvb3BfY291bnQgPSAxMDAwOwotICAgIGRvIHsK
LSAgICAgICAgc3RhdHVzID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIGNvbXBfd2FpdCA9
IGdldF9maWVsZF9mcm9tX3JlZ191MzIoc3RhdHVzLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1VX1NUQVRVU19D
T01QX1dBSVRfSU5UX01BU0ssCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRf
U0hJRlQpOwotICAgICAgICAtLWxvb3BfY291bnQ7Ci0gICAgfSB3aGlsZSAo
ICFjb21wX3dhaXQgJiYgbG9vcF9jb3VudCApOwotCi0gICAgaWYgKCBjb21w
X3dhaXQgKQorICAgIHN0YXJ0ID0gTk9XKCk7CisgICAgdGltZW91dCA9IHN0
YXJ0ICsgKHRpbWVvdXRfYmFzZSA/OiAxMDApICogTUlMTElTRUNTKHRocmVz
aG9sZCk7CisgICAgd2hpbGUgKCAhKHJlYWRsKGlvbW11LT5tbWlvX2Jhc2Ug
KyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpICYKKyAgICAgICAgICAgICAg
SU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSykgKQogICAgIHsKLSAg
ICAgICAgLyogUlcxQyAnQ29tV2FpdEludCcgaW4gc3RhdHVzIHJlZ2lzdGVy
ICovCi0gICAgICAgIHdyaXRlbChJT01NVV9TVEFUVVNfQ09NUF9XQUlUX0lO
VF9NQVNLLAotICAgICAgICAgICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIHJldHVybjsKKyAg
ICAgICAgaWYgKCB0aW1lb3V0ICYmIE5PVygpID4gdGltZW91dCApCisgICAg
ICAgIHsKKyAgICAgICAgICAgIHRocmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwg
MTsKKyAgICAgICAgICAgIHByaW50ayhYRU5MT0dfV0FSTklORworICAgICAg
ICAgICAgICAgICAgICJBTUQgSU9NTVUgJTA0eDolMDJ4OiUwMnguJXU6ICVz
Y29tcGxldGlvbiB3YWl0IHRha2luZyB0b28gbG9uZ1xuIiwKKyAgICAgICAg
ICAgICAgICAgICBpb21tdS0+c2VnLCBQQ0lfQlVTKGlvbW11LT5iZGYpLAor
ICAgICAgICAgICAgICAgICAgIFBDSV9TTE9UKGlvbW11LT5iZGYpLCBQQ0lf
RlVOQyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgICAgICB0aW1lb3V0
X2Jhc2UgPyAiaW90bGIgIiA6ICIiKTsKKyAgICAgICAgICAgIHRpbWVvdXQg
PSAwOworICAgICAgICB9CisgICAgICAgIGNwdV9yZWxheCgpOwogICAgIH0K
LSAgICBBTURfSU9NTVVfREVCVUcoIldhcm5pbmc6IENvbVdhaXRJbnQgYml0
IGRpZCBub3QgYXNzZXJ0IVxuIik7CisKKyAgICBpZiAoICF0aW1lb3V0ICkK
KyAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HCisgICAgICAgICAgICAg
ICAiQU1EIElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiAlc2NvbXBsZXRpb24g
d2FpdCB0b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgaW9tbXUtPnNl
ZywgUENJX0JVUyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgIFBDSV9T
TE9UKGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSwKKyAgICAg
ICAgICAgICAgIHRpbWVvdXRfYmFzZSA/ICJpb3RsYiAiIDogIiIsCisgICAg
ICAgICAgICAgICAoTk9XKCkgLSBzdGFydCkgLyAxMDAwMDAwMCk7CiB9CiAK
IC8qIEJ1aWxkIGxvdyBsZXZlbCBpb21tdSBjb21tYW5kIG1lc3NhZ2VzICov
CkBAIC0zMDAsNyArMzA5LDcgQEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW90
bGIodTggZGV2Zm4sIGNvbgogICAgIC8qIHNlbmQgSU5WQUxJREFURV9JT1RM
Ql9QQUdFUyBjb21tYW5kICovCiAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlv
bW11LT5sb2NrLCBmbGFncyk7CiAgICAgaW52YWxpZGF0ZV9pb3RsYl9wYWdl
cyhpb21tdSwgbWF4cGVuZCwgMCwgcXVldWVpZCwgZGFkZHIsIHJlcV9pZCwg
b3JkZXIpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgaW9tbXVfZGV2X2lvdGxi
X3RpbWVvdXQpOwogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11
LT5sb2NrLCBmbGFncyk7CiB9CiAKQEAgLTMzNyw3ICszNDYsNyBAQCBzdGF0
aWMgdm9pZCBfYW1kX2lvbW11X2ZsdXNoX3BhZ2VzKHN0cnVjCiAgICAgewog
ICAgICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdz
KTsKICAgICAgICAgaW52YWxpZGF0ZV9pb21tdV9wYWdlcyhpb21tdSwgZGFk
ZHIsIGRvbV9pZCwgb3JkZXIpOwotICAgICAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSk7CisgICAgICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlv
bW11LCAwKTsKICAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9t
bXUtPmxvY2ssIGZsYWdzKTsKICAgICB9CiAKQEAgLTM2MSw3ICszNzAsNyBA
QCB2b2lkIGFtZF9pb21tdV9mbHVzaF9kZXZpY2Uoc3RydWN0IGFtZF9pCiAg
ICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAK
ICAgICBpbnZhbGlkYXRlX2Rldl90YWJsZV9lbnRyeShpb21tdSwgYmRmKTsK
LSAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hf
Y29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21t
dV9mbHVzaF9pbnRyZW1hcChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwgdWlu
dDE2X3QgYmRmKQpAQCAtMzY5LDcgKzM3OCw3IEBAIHZvaWQgYW1kX2lvbW11
X2ZsdXNoX2ludHJlbWFwKHN0cnVjdCBhbWQKICAgICBBU1NFUlQoIHNwaW5f
aXNfbG9ja2VkKCZpb21tdS0+bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVf
aW50ZXJydXB0X3RhYmxlKGlvbW11LCBiZGYpOwotICAgIGZsdXNoX2NvbW1h
bmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9jYWNo
ZXMoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUpCkBAIC0zNzcsNyArMzg2LDcg
QEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfYWxsX2NhY2hlcyhzdHJ1Y3QgYQog
ICAgIEFTU0VSVCggc3Bpbl9pc19sb2NrZWQoJmlvbW11LT5sb2NrKSApOwog
CiAgICAgaW52YWxpZGF0ZV9pb21tdV9hbGwoaW9tbXUpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X3NlbmRfZ3Vl
c3RfY21kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1MzIgY21kW10pCkBA
IC0zODcsNyArMzk2LDggQEAgdm9pZCBhbWRfaW9tbXVfc2VuZF9ndWVzdF9j
bWQoc3RydWN0IGFtZAogICAgIHNwaW5fbG9ja19pcnFzYXZlKCZpb21tdS0+
bG9jaywgZmxhZ3MpOwogCiAgICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11
LCBjbWQpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICAvKiBUQkQ6IFRpbWVvdXQgc2VsZWN0aW9uIG1heSByZXF1aXJlIHBlZWtp
bmcgaW50byBjbWRbXS4gKi8KKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+bG9jaywgZmxhZ3MpOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.13-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDUwLDE3ICs0NTAsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IHZ0
ZF9pb21tdSAqaW9tbXUpOwogCiBzdGF0aWMgdm9pZCBwcmludF9xaV9yZWdz
KHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KQpAQCAtNTUsNyArNTgsNyBAQCBz
dGF0aWMgdW5zaWduZWQgaW50IHFpbnZhbF9uZXh0X2luZGV4KHN0CiAgICAg
dGFpbCA+Pj0gUUlOVkFMX0lOREVYX1NISUZUOwogCiAgICAgLyogKHRhaWwr
MSA9PSBoZWFkKSBpbmRpY2F0ZXMgYSBmdWxsIHF1ZXVlLCB3YWl0IGZvciBI
VyAqLwotICAgIHdoaWxlICggKCB0YWlsICsgMSApICUgUUlOVkFMX0VOVFJZ
X05SID09CisgICAgd2hpbGUgKCAoKHRhaWwgKyAxKSAmIChxaV9lbnRyeV9u
ciAtIDEpKSA9PQogICAgICAgICAgICAgKCBkbWFyX3JlYWRxKGlvbW11LT5y
ZWcsIERNQVJfSVFIX1JFRykgPj4gUUlOVkFMX0lOREVYX1NISUZUICkgKQog
ICAgICAgICBjcHVfcmVsYXgoKTsKIApAQCAtNjgsNyArNzEsNyBAQCBzdGF0
aWMgdm9pZCBxaW52YWxfdXBkYXRlX3F0YWlsKHN0cnVjdCB2CiAKICAgICAv
KiBOZWVkIGhvbGQgcmVnaXN0ZXIgbG9jayB3aGVuIHVwZGF0ZSB0YWlsICov
CiAgICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPnJlZ2lzdGVy
X2xvY2spICk7Ci0gICAgdmFsID0gKGluZGV4ICsgMSkgJSBRSU5WQUxfRU5U
UllfTlI7CisgICAgdmFsID0gKGluZGV4ICsgMSkgJiAocWlfZW50cnlfbnIg
LSAxKTsKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFSX0lRVF9S
RUcsICh2YWwgPDwgUUlOVkFMX0lOREVYX1NISUZUKSk7CiB9CiAKQEAgLTQw
Myw4ICs0MDYsMjggQEAgaW50IGVuYWJsZV9xaW52YWwoc3RydWN0IHZ0ZF9p
b21tdSAqaW9tbQogCiAgICAgaWYgKCBpb21tdS0+cWludmFsX21hZGRyID09
IDAgKQogICAgIHsKLSAgICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciA9IGFs
bG9jX3BndGFibGVfbWFkZHIoUUlOVkFMX0FSQ0hfUEFHRV9OUiwKLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW9tbXUtPm5vZGUpOworICAgICAgICBpZiAoICFxaV9lbnRyeV9uciApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgKiBXaXRo
IHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3byBz
bG90cyBmb3IgZXZlcnkKKyAgICAgICAgICAgICAqIG9wZXJhdGlvbiAodGhl
IG9wZXJhdGlvbiBpdHNlbGYgYW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRo
ZXJlCisgICAgICAgICAgICAgKiBjYW4gYmUgb25lIHN1Y2ggcGFpciBvZiBy
ZXF1ZXN0cyBwZW5kaW5nIHBlciBDUFUuICBPbmUgZXh0cmEKKyAgICAgICAg
ICAgICAqIGVudHJ5IGlzIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRl
cmVkIGZ1bGwgd2hlbiB0aGVyZSdzCisgICAgICAgICAgICAgKiBvbmx5IG9u
ZSBlbnRyeSBsZWZ0LgorICAgICAgICAgICAgICovCisgICAgICAgICAgICBC
VUlMRF9CVUdfT04oQ09ORklHX05SX0NQVVMgKiAyID49IFFJTlZBTF9NQVhf
RU5UUllfTlIpOworICAgICAgICAgICAgcWlfcGdfb3JkZXIgPSBnZXRfb3Jk
ZXJfZnJvbV9ieXRlcygobnVtX3ByZXNlbnRfY3B1cygpICogMiArIDEpIDw8
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIChQQUdFX1NISUZUIC0KKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFFJTlZBTF9FTlRSWV9PUkRFUikpOwor
ICAgICAgICAgICAgcWlfZW50cnlfbnIgPSAxdSA8PCAocWlfcGdfb3JkZXIg
KyBRSU5WQUxfRU5UUllfT1JERVIpOworCisgICAgICAgICAgICBkcHJpbnRr
KFhFTkxPR19JTkZPIFZURFBSRUZJWCwKKyAgICAgICAgICAgICAgICAgICAg
IlFJOiB1c2luZyAldS1lbnRyeSByaW5nKHMpXG4iLCBxaV9lbnRyeV9ucik7
CisgICAgICAgIH0KKworICAgICAgICBpb21tdS0+cWludmFsX21hZGRyID0K
KyAgICAgICAgICAgIGFsbG9jX3BndGFibGVfbWFkZHIocWlfZW50cnlfbnIg
Pj4gUUlOVkFMX0VOVFJZX09SREVSLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpb21tdS0+bm9kZSk7CiAgICAgICAgIGlmICggaW9tbXUt
PnFpbnZhbF9tYWRkciA9PSAwICkKICAgICAgICAgewogICAgICAgICAgICAg
ZHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgsCkBAIC00MTgsMTUg
KzQ0MSwxNiBAQCBpbnQgZW5hYmxlX3FpbnZhbChzdHJ1Y3QgdnRkX2lvbW11
ICppb21tCiAKICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lz
dGVyX2xvY2ssIGZsYWdzKTsKIAotICAgIC8qIFNldHVwIEludmFsaWRhdGlv
biBRdWV1ZSBBZGRyZXNzKElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAg
KiBhZGRyZXNzIG9mIHRoZSBwYWdlIHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMg
ZmllbGQgYXQKLSAgICAgKiBiaXRzWzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBv
ZiBxdWV1ZSBpcyBvbmUgNEtCIHBhZ2UuCi0gICAgICogVGhhdCdzIDI1NiBl
bnRyaWVzLiAgUXVldWVkIEhlYWQgKElRSCkgYW5kIFF1ZXVlIFRhaWwgKElR
VCkKLSAgICAgKiByZWdpc3RlcnMgYXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQg
dG8gMCB3aXRoIHdyaXRlCi0gICAgICogdG8gSVFBIHJlZ2lzdGVyLgorICAg
IC8qCisgICAgICogU2V0dXAgSW52YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3Mg
KElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAg
KiBwYWdlcyB3ZSBqdXN0IGFsbG9jYXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBi
aXRzWzI6MF0gaW5kaWNhdGVzIHRoZSBzaXplCisgICAgICogKHBhZ2Ugb3Jk
ZXIpIG9mIHRoZSBxdWV1ZS4KKyAgICAgKgorICAgICAqIFF1ZXVlZCBIZWFk
IChJUUgpIGFuZCBRdWV1ZSBUYWlsIChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0
b21hdGljYWxseQorICAgICAqIHJlc2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJ
UUEgcmVnaXN0ZXIuCiAgICAgICovCiAgICAgZG1hcl93cml0ZXEoaW9tbXUt
PnJlZywgRE1BUl9JUUFfUkVHLAotICAgICAgICAgICAgICAgIGlvbW11LT5x
aW52YWxfbWFkZHIgfCBRSU5WQUxfUEFHRV9PUkRFUik7CisgICAgICAgICAg
ICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciB8IHFpX3BnX29yZGVyKTsKIAog
ICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfSVFUX1JFRywgMCk7
CiAK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.13-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0zNSw4ICszNSw4IEBAIHN0YXRpYyBp
bnQgcXVldWVfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW0KICAgICBpZiAoIGhl
YWQgIT0gdGFpbCApCiAgICAgewogICAgICAgICBtZW1jcHkoaW9tbXUtPmNt
ZF9idWZmZXIuYnVmZmVyICsKLSAgICAgICAgICAgICAgIChpb21tdS0+Y21k
X2J1ZmZlci50YWlsICogSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9TSVpFKSwK
LSAgICAgICAgICAgICAgIGNtZCwgSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9T
SVpFKTsKKyAgICAgICAgICAgICAgIChpb21tdS0+Y21kX2J1ZmZlci50YWls
ICogc2l6ZW9mKGNtZF9lbnRyeV90KSksCisgICAgICAgICAgICAgICBjbWQs
IHNpemVvZihjbWRfZW50cnlfdCkpOwogCiAgICAgICAgIGlvbW11LT5jbWRf
YnVmZmVyLnRhaWwgPSB0YWlsOwogICAgICAgICByZXR1cm4gMTsKLS0tIGEv
eGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2luaXQuYworKysg
Yi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfaW5pdC5jCkBA
IC0xMjUsNyArMTI1LDcgQEAgc3RhdGljIHZvaWQgcmVnaXN0ZXJfaW9tbXVf
Y21kX2J1ZmZlcl9pbgogICAgIHdyaXRlbChlbnRyeSwgaW9tbXUtPm1taW9f
YmFzZSArIElPTU1VX0NNRF9CVUZGRVJfQkFTRV9MT1dfT0ZGU0VUKTsKIAog
ICAgIHBvd2VyX29mMl9lbnRyaWVzID0gZ2V0X29yZGVyX2Zyb21fYnl0ZXMo
aW9tbXUtPmNtZF9idWZmZXIuYWxsb2Nfc2l6ZSkgKwotICAgICAgICBJT01N
VV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVTX1BFUl9QQUdFOworICAg
ICAgICBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9PUkRF
UjsKIAogICAgIGVudHJ5ID0gMDsKICAgICBpb21tdV9zZXRfYWRkcl9oaV90
b19yZWcoJmVudHJ5LCBhZGRyX2hpKTsKQEAgLTEwNTAsOSArMTA1MCwzMSBA
QCBzdGF0aWMgdm9pZCAqX19pbml0IGFsbG9jYXRlX3JpbmdfYnVmZmVyCiBz
dGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9jbWRfYnVmZmVyKHN0cnVj
dCBhbWRfaW9tbXUgKmlvbW11KQogewogICAgIC8qIGFsbG9jYXRlICdjb21t
YW5kIGJ1ZmZlcicgaW4gcG93ZXIgb2YgMiBpbmNyZW1lbnRzIG9mIDRLICov
CisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IG5yX2Vu
dHM7CisKKyAgICBpZiAoICFucl9lbnRzICkKKyAgICB7CisgICAgICAgIHVu
c2lnbmVkIGludCBvcmRlcjsKKworICAgICAgICAvKgorICAgICAgICAgKiBX
aXRoIHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3
byBzbG90cyBmb3IgZXZlcnkKKyAgICAgICAgICogb3BlcmF0aW9uICh0aGUg
b3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0IGNvbW1hbmQpLiAgVGhlcmUg
Y2FuIGJlCisgICAgICAgICAqIG9uZSBzdWNoIHBhaXIgb2YgcmVxdWVzdHMg
cGVuZGluZyBwZXIgQ1BVLiAgT25lIGV4dHJhIGVudHJ5IGlzCisgICAgICAg
ICAqIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRlcmVkIGZ1bGwgd2hl
biB0aGVyZSdzIG9ubHkgb25lIGVudHJ5CisgICAgICAgICAqIGxlZnQuCisg
ICAgICAgICAqLworICAgICAgICBCVUlMRF9CVUdfT04oQ09ORklHX05SX0NQ
VVMgKiAyID49IElPTU1VX0NNRF9CVUZGRVJfTUFYX0VOVFJJRVMpOworICAg
ICAgICBvcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5dGVzKChudW1fcHJlc2Vu
dF9jcHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSKTsK
KyAgICAgICAgbnJfZW50cyA9IDF1IDw8IChvcmRlciArIFBBR0VfU0hJRlQg
LSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSKTsKKworICAgICAgICBB
TURfSU9NTVVfREVCVUcoInVzaW5nICV1LWVudHJ5IGNtZCByaW5nKHMpXG4i
LCBucl9lbnRzKTsKKyAgICB9CisKKyAgICBCVUlMRF9CVUdfT04oc2l6ZW9m
KGNtZF9lbnRyeV90KSAhPSAoMXUgPDwgSU9NTVVfQ01EX0JVRkZFUl9FTlRS
WV9PUkRFUikpOworCiAgICAgcmV0dXJuIGFsbG9jYXRlX3JpbmdfYnVmZmVy
KCZpb21tdS0+Y21kX2J1ZmZlciwgc2l6ZW9mKGNtZF9lbnRyeV90KSwKLSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfQ01EX0JVRkZF
Ul9ERUZBVUxUX0VOVFJJRVMsCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICJDb21tYW5kIEJ1ZmZlciIsIGZhbHNlKTsKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbnJfZW50cywgIkNvbW1hbmQgQnVmZmVy
IiwgZmFsc2UpOwogfQogCiBzdGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0
ZV9ldmVudF9sb2coc3RydWN0IGFtZF9pb21tdSAqaW9tbXUpCi0tLSBhL3hl
bi9pbmNsdWRlL2FzbS14ODYvaHZtL3N2bS9hbWQtaW9tbXUtZGVmcy5oCisr
KyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvaHZtL3N2bS9hbWQtaW9tbXUtZGVm
cy5oCkBAIC0yMCw5ICsyMCw2IEBACiAjaWZuZGVmIF9BU01fWDg2XzY0X0FN
RF9JT01NVV9ERUZTX0gKICNkZWZpbmUgX0FTTV9YODZfNjRfQU1EX0lPTU1V
X0RFRlNfSAogCi0vKiBJT01NVSBDb21tYW5kIEJ1ZmZlciBlbnRyaWVzOiBp
biBwb3dlciBvZiAyIGluY3JlbWVudHMsIG1pbmltdW0gb2YgMjU2ICovCi0j
ZGVmaW5lIElPTU1VX0NNRF9CVUZGRVJfREVGQVVMVF9FTlRSSUVTCTUxMgot
CiAvKiBJT01NVSBFdmVudCBMb2cgZW50cmllczogaW4gcG93ZXIgb2YgMiBp
bmNyZW1lbnRzLCBtaW5pbXVtIG9mIDI1NiAqLwogI2RlZmluZSBJT01NVV9F
VkVOVF9MT0dfREVGQVVMVF9FTlRSSUVTICAgICA1MTIKIApAQCAtMTY4LDgg
KzE2NSw4IEBAIHN0cnVjdCBhbWRfaW9tbXVfZHRlIHsKICNkZWZpbmUgSU9N
TVVfQ01EX0JVRkZFUl9MRU5HVEhfTUFTSwkJMHgwRjAwMDAwMAogI2RlZmlu
ZSBJT01NVV9DTURfQlVGRkVSX0xFTkdUSF9TSElGVAkJMjQKIAotI2RlZmlu
ZSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX1NJWkUJCQkxNgotI2RlZmluZSBJ
T01NVV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVTX1BFUl9QQUdFCTgK
KyNkZWZpbmUgSU9NTVVfQ01EX0JVRkZFUl9FTlRSWV9PUkRFUiAgICAgICAg
ICAgIDQKKyNkZWZpbmUgSU9NTVVfQ01EX0JVRkZFUl9NQVhfRU5UUklFUyAg
ICAgICAgICAgICgxdSA8PCAxNSkKIAogI2RlZmluZSBJT01NVV9DTURfT1BD
T0RFX01BU0sJCQkweEYwMDAwMDAwCiAjZGVmaW5lIElPTU1VX0NNRF9PUENP
REVfU0hJRlQJCQkyOAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.13-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZG1hci5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKQEAg
LTEyNyw2ICsxMjcsMzQgQEAgZG8gewogICAgIH0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
IH0gd2hpbGUgKDApCiAKKyNkZWZpbmUgSU9NTVVfRkxVU0hfV0FJVCh3aGF0
LCBpb21tdSwgb2Zmc2V0LCBvcCwgY29uZCwgc3RzKSAgICAgICBcCitkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOyAgICAgICAgICAgICAgIFwK
KyAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc190aW1lX3QgdGltZW91
dCA9IHN0YXJ0ICsgRE1BUl9PUEVSQVRJT05fVElNRU9VVCAqIHRocmVzaG9s
ZDsgXAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBmb3IgKCA7IDsg
KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBz
dHMgPSBvcChpb21tdS0+cmVnLCBvZmZzZXQpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBjb25kICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBicmVhazsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aHJlc2hv
bGQgfD0gdGhyZXNob2xkIDw8IDE7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBS
RUZJWCAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICAgICAiIElPTU1VIyV1OiAlcyBmbHVzaCB0YWtpbmcgdG9vIGxvbmdcbiIs
ICAgICAgICBcCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCB3
aGF0KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgdGltZW91dCA9IDA7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgfSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIGNwdV9yZWxheCgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIH0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgaWYgKCAhdGltZW91dCAp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklY
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICIgSU9NTVUjJXU6ICVzIGZsdXNoIHRvb2sgJWx1bXNcbiIsICAgICAgICAg
ICAgICAgICBcCisgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQs
IChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsgICAgXAorfSB3aGlsZSAo
IGZhbHNlICkKKwogaW50IHZ0ZF9od19jaGVjayh2b2lkKTsKIHZvaWQgZGlz
YWJsZV9wbXIoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUpOwogaW50IGlzX2ln
ZF9kcmhkKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7Ci0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0zMjAsOCArMzIw
LDggQEAgc3RhdGljIHZvaWQgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKHN0
cgogICAgIGRtYXJfd3JpdGVsKGlvbW11LT5yZWcsIERNQVJfR0NNRF9SRUcs
IHZhbCB8IERNQV9HQ01EX1dCRik7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFy
ZHdhcmUgY29tcGxldGUgaXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11
LCBETUFSX0dTVFNfUkVHLCBkbWFyX3JlYWRsLAotICAgICAgICAgICAgICAg
ICAgISh2YWwgJiBETUFfR1NUU19XQkZTKSwgdmFsKTsKKyAgICBJT01NVV9G
TFVTSF9XQUlUKCJ3cml0ZSBidWZmZXIiLCBpb21tdSwgRE1BUl9HU1RTX1JF
RywgZG1hcl9yZWFkbCwKKyAgICAgICAgICAgICAgICAgICAgICEodmFsICYg
RE1BX0dTVFNfV0JGUyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFy
ZXN0b3JlKCZpb21tdS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogfQpAQCAt
MzcwLDggKzM3MCw4IEBAIGludCB2dGRfZmx1c2hfY29udGV4dF9yZWcoc3Ry
dWN0IHZ0ZF9pb20KICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFS
X0NDTURfUkVHLCB2YWwpOwogCiAgICAgLyogTWFrZSBzdXJlIGhhcmR3YXJl
IGNvbXBsZXRlIGl0ICovCi0gICAgSU9NTVVfV0FJVF9PUChpb21tdSwgRE1B
Ul9DQ01EX1JFRywgZG1hcl9yZWFkcSwKLSAgICAgICAgICAgICAgICAgICEo
dmFsICYgRE1BX0NDTURfSUNDKSwgdmFsKTsKKyAgICBJT01NVV9GTFVTSF9X
QUlUKCJjb250ZXh0IiwgaW9tbXUsIERNQVJfQ0NNRF9SRUcsIGRtYXJfcmVh
ZHEsCisgICAgICAgICAgICAgICAgICAgICAhKHZhbCAmIERNQV9DQ01EX0lD
QyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogICAgIC8qIGZsdXNoIGNvbnRl
eHQgZW50cnkgd2lsbCBpbXBsaWNpdGx5IGZsdXNoIHdyaXRlIGJ1ZmZlciAq
LwpAQCAtNDQ4LDggKzQ0OCw4IEBAIGludCB2dGRfZmx1c2hfaW90bGJfcmVn
KHN0cnVjdCB2dGRfaW9tbXUKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVn
LCB0bGJfb2Zmc2V0ICsgOCwgdmFsKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBo
YXJkd2FyZSBjb21wbGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9t
bXUsICh0bGJfb2Zmc2V0ICsgOCksIGRtYXJfcmVhZHEsCi0gICAgICAgICAg
ICAgICAgICAhKHZhbCAmIERNQV9UTEJfSVZUKSwgdmFsKTsKKyAgICBJT01N
VV9GTFVTSF9XQUlUKCJpb3RsYiIsIGlvbW11LCAodGxiX29mZnNldCArIDgp
LCBkbWFyX3JlYWRxLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBE
TUFfVExCX0lWVCksIHZhbCk7CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsKIAogICAgIC8qIGNo
ZWNrIElPVExCIGludmFsaWRhdGlvbiBncmFudWxhcml0eSAqLwotLS0gYS94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvcWludmFsLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCkBAIC0yOSw4ICsy
OSw2IEBACiAjaW5jbHVkZSAiZXh0ZXJuLmgiCiAjaW5jbHVkZSAiLi4vYXRz
LmgiCiAKLSNkZWZpbmUgVlREX1FJX1RJTUVPVVQJMQotCiBzdGF0aWMgdW5z
aWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfcGdfb3JkZXI7CiBzdGF0aWMg
dW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfZW50cnlfbnI7CiAKQEAg
LTYwLDcgKzU4LDExIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgcWludmFsX25l
eHRfaW5kZXgoc3QKICAgICAvKiAodGFpbCsxID09IGhlYWQpIGluZGljYXRl
cyBhIGZ1bGwgcXVldWUsIHdhaXQgZm9yIEhXICovCiAgICAgd2hpbGUgKCAo
KHRhaWwgKyAxKSAmIChxaV9lbnRyeV9uciAtIDEpKSA9PQogICAgICAgICAg
ICAgKCBkbWFyX3JlYWRxKGlvbW11LT5yZWcsIERNQVJfSVFIX1JFRykgPj4g
UUlOVkFMX0lOREVYX1NISUZUICkgKQorICAgIHsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUiBWVERQUkVGSVggIiBJT01NVSMldTogbm8gUUkg
c2xvdCBhdmFpbGFibGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11
LT5pbmRleCk7CiAgICAgICAgIGNwdV9yZWxheCgpOworICAgIH0KIAogICAg
IHJldHVybiB0YWlsOwogfQpAQCAtMTgwLDIzICsxODIsMzIgQEAgc3RhdGlj
IGludCBfX211c3RfY2hlY2sgcXVldWVfaW52YWxpZGF0ZQogICAgIC8qIE5v
dyB3ZSBkb24ndCBzdXBwb3J0IGludGVycnVwdCBtZXRob2QgKi8KICAgICBp
ZiAoIHN3ICkKICAgICB7Ci0gICAgICAgIHNfdGltZV90IHRpbWVvdXQ7Ci0K
LSAgICAgICAgLyogSW4gY2FzZSBhbGwgd2FpdCBkZXNjcmlwdG9yIHdyaXRl
cyB0byBzYW1lIGFkZHIgd2l0aCBzYW1lIGRhdGEgKi8KLSAgICAgICAgdGlt
ZW91dCA9IE5PVygpICsgTUlMTElTRUNTKGZsdXNoX2Rldl9pb3RsYiA/Ci0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpb21tdV9kZXZf
aW90bGJfdGltZW91dCA6IFZURF9RSV9USU1FT1VUKTsKKyAgICAgICAgc3Rh
dGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7
CisgICAgICAgIHNfdGltZV90IHN0YXJ0ID0gTk9XKCk7CisgICAgICAgIHNf
dGltZV90IHRpbWVvdXQgPSBzdGFydCArIChmbHVzaF9kZXZfaW90bGIKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID8gaW9tbXVfZGV2
X2lvdGxiX3RpbWVvdXQKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIDogMTAwKSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwogCiAgICAg
ICAgIHdoaWxlICggQUNDRVNTX09OQ0UoKnRoaXNfcG9sbF9zbG90KSAhPSBR
SU5WQUxfU1RBVF9ET05FICkKICAgICAgICAgewotICAgICAgICAgICAgaWYg
KCBOT1coKSA+IHRpbWVvdXQgKQorICAgICAgICAgICAgaWYgKCB0aW1lb3V0
ICYmIE5PVygpID4gdGltZW91dCApCiAgICAgICAgICAgICB7Ci0gICAgICAg
ICAgICAgICAgcHJpbnRfcWlfcmVncyhpb21tdSk7CisgICAgICAgICAgICAg
ICAgdGhyZXNob2xkIHw9IHRocmVzaG9sZCA8PCAxOwogICAgICAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKLSAgICAgICAg
ICAgICAgICAgICAgICAgIiBRdWV1ZSBpbnZhbGlkYXRlIHdhaXQgZGVzY3Jp
cHRvciB0aW1lZCBvdXRcbiIpOwotICAgICAgICAgICAgICAgIHJldHVybiAt
RVRJTUVET1VUOworICAgICAgICAgICAgICAgICAgICAgICAiIElPTU1VIyV1
OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0YWtpbmcgdG9vIGxvbmdcbiIsCisg
ICAgICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgZmx1c2hfZGV2
X2lvdGxiID8gIiBkZXYiIDogIiIpOworICAgICAgICAgICAgICAgIHByaW50
X3FpX3JlZ3MoaW9tbXUpOworICAgICAgICAgICAgICAgIHRpbWVvdXQgPSAw
OwogICAgICAgICAgICAgfQogICAgICAgICAgICAgY3B1X3JlbGF4KCk7CiAg
ICAgICAgIH0KKworICAgICAgICBpZiAoICF0aW1lb3V0ICkKKyAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKKyAgICAgICAg
ICAgICAgICAgICAiIElPTU1VIyV1OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0
b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRl
eCwgZmx1c2hfZGV2X2lvdGxiID8gIiBkZXYiIDogIiIsCisgICAgICAgICAg
ICAgICAgICAgKE5PVygpIC0gc3RhcnQpIC8gMTAwMDAwMDApOworCiAgICAg
ICAgIHJldHVybiAwOwogICAgIH0KIAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.13-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.13-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMiw0
OCArMjIsMzYgQEAKICNpbmNsdWRlIDxhc20vaHZtL3N2bS9hbWQtaW9tbXUt
cHJvdG8uaD4KICNpbmNsdWRlICIuLi9hdHMuaCIKIAotc3RhdGljIGludCBx
dWV1ZV9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1
MzIgY21kW10pCitzdGF0aWMgdm9pZCBzZW5kX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29uc3QgdWludDMyX3QgY21kWzRdKQogewotICAgIHVpbnQz
Ml90IHRhaWwsIGhlYWQ7CisgICAgdWludDMyX3QgdGFpbDsKIAogICAgIHRh
aWwgPSBpb21tdS0+Y21kX2J1ZmZlci50YWlsOwogICAgIGlmICggKyt0YWls
ID09IGlvbW11LT5jbWRfYnVmZmVyLmVudHJpZXMgKQogICAgICAgICB0YWls
ID0gMDsKIAotICAgIGhlYWQgPSBpb21tdV9nZXRfcmJfcG9pbnRlcihyZWFk
bChpb21tdS0+bW1pb19iYXNlICsKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU9NTVVfQ01EX0JVRkZFUl9IRUFEX09GRlNFVCkp
OwotICAgIGlmICggaGVhZCAhPSB0YWlsICkKKyAgICB3aGlsZSAoIHRhaWwg
PT0gaW9tbXVfZ2V0X3JiX3BvaW50ZXIocmVhZGwoaW9tbXUtPm1taW9fYmFz
ZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpKSApCiAgICAg
ewotICAgICAgICBtZW1jcHkoaW9tbXUtPmNtZF9idWZmZXIuYnVmZmVyICsK
LSAgICAgICAgICAgICAgIChpb21tdS0+Y21kX2J1ZmZlci50YWlsICogc2l6
ZW9mKGNtZF9lbnRyeV90KSksCi0gICAgICAgICAgICAgICBjbWQsIHNpemVv
ZihjbWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJp
bnRrX29uY2UoWEVOTE9HX0VSUgorICAgICAgICAgICAgICAgICAgICAiQU1E
IElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiBubyBjbWQgc2xvdCBhdmFpbGFi
bGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11LT5zZWcsIFBDSV9C
VVMoaW9tbXUtPmJkZiksCisgICAgICAgICAgICAgICAgICAgIFBDSV9TTE9U
KGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSk7CisgICAgICAg
IGNwdV9yZWxheCgpOwogICAgIH0KIAotICAgIHJldHVybiAwOwotfQorICAg
IG1lbWNweShpb21tdS0+Y21kX2J1ZmZlci5idWZmZXIgKworICAgICAgICAg
ICAoaW9tbXUtPmNtZF9idWZmZXIudGFpbCAqIHNpemVvZihjbWRfZW50cnlf
dCkpLAorICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
Ci1zdGF0aWMgdm9pZCBjb21taXRfaW9tbXVfY29tbWFuZF9idWZmZXIoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUpCi17Ci0gICAgdTMyIHRhaWwgPSAwOwor
ICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogCisgICAgdGFp
bCA9IDA7CiAgICAgaW9tbXVfc2V0X3JiX3BvaW50ZXIoJnRhaWwsIGlvbW11
LT5jbWRfYnVmZmVyLnRhaWwpOwogICAgIHdyaXRlbCh0YWlsLCBpb21tdS0+
bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZTRVQpOwogfQog
Ci1pbnQgc2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbWRfaW9tbXUgKmlv
bW11LCB1MzIgY21kW10pCi17Ci0gICAgaWYgKCBxdWV1ZV9pb21tdV9jb21t
YW5kKGlvbW11LCBjbWQpICkKLSAgICB7Ci0gICAgICAgIGNvbW1pdF9pb21t
dV9jb21tYW5kX2J1ZmZlcihpb21tdSk7Ci0gICAgICAgIHJldHVybiAxOwot
ICAgIH0KLQotICAgIHJldHVybiAwOwotfQotCiBzdGF0aWMgdm9pZCBmbHVz
aF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKIHsK
ICAgICB1MzIgY21kWzRdLCBzdGF0dXM7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.13-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.13-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQv
aW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1k
L2lvbW11X2NtZC5jCkBAIC01MiwxMCArNTIsMTIgQEAgc3RhdGljIHZvaWQg
c2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbQogICAgIHdyaXRlbCh0YWls
LCBpb21tdS0+bW1pb19iYXNlK0lPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZT
RVQpOwogfQogCi1zdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihz
dHJ1Y3QgYW1kX2lvbW11ICppb21tdSkKK3N0YXRpYyB2b2lkIGZsdXNoX2Nv
bW1hbmRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHRpbWVv
dXRfYmFzZSkKIHsKLSAgICB1MzIgY21kWzRdLCBzdGF0dXM7Ci0gICAgaW50
IGxvb3BfY291bnQsIGNvbXBfd2FpdDsKKyAgICB1aW50MzJfdCBjbWRbNF07
CisgICAgc190aW1lX3Qgc3RhcnQsIHRpbWVvdXQ7CisgICAgc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7CiAKICAg
ICAvKiBSVzFDICdDb21XYWl0SW50JyBpbiBzdGF0dXMgcmVnaXN0ZXIgKi8K
ICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSywK
QEAgLTcxLDI0ICs3MywzMSBAQCBzdGF0aWMgdm9pZCBmbHVzaF9jb21tYW5k
X2J1ZmZlcihzdHJ1Y3QKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01N
VV9DT01QX1dBSVRfSV9GTEFHX1NISUZULCAmY21kWzBdKTsKICAgICBzZW5k
X2lvbW11X2NvbW1hbmQoaW9tbXUsIGNtZCk7CiAKLSAgICAvKiBNYWtlIGxv
b3BfY291bnQgbG9uZyBlbm91Z2ggZm9yIHBvbGxpbmcgY29tcGxldGlvbiB3
YWl0IGJpdCAqLwotICAgIGxvb3BfY291bnQgPSAxMDAwOwotICAgIGRvIHsK
LSAgICAgICAgc3RhdHVzID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIGNvbXBfd2FpdCA9
IGdldF9maWVsZF9mcm9tX3JlZ191MzIoc3RhdHVzLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1VX1NUQVRVU19D
T01QX1dBSVRfSU5UX01BU0ssCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRf
U0hJRlQpOwotICAgICAgICAtLWxvb3BfY291bnQ7Ci0gICAgfSB3aGlsZSAo
ICFjb21wX3dhaXQgJiYgbG9vcF9jb3VudCApOwotCi0gICAgaWYgKCBjb21w
X3dhaXQgKQorICAgIHN0YXJ0ID0gTk9XKCk7CisgICAgdGltZW91dCA9IHN0
YXJ0ICsgKHRpbWVvdXRfYmFzZSA/OiAxMDApICogTUlMTElTRUNTKHRocmVz
aG9sZCk7CisgICAgd2hpbGUgKCAhKHJlYWRsKGlvbW11LT5tbWlvX2Jhc2Ug
KyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpICYKKyAgICAgICAgICAgICAg
SU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlRfTUFTSykgKQogICAgIHsKLSAg
ICAgICAgLyogUlcxQyAnQ29tV2FpdEludCcgaW4gc3RhdHVzIHJlZ2lzdGVy
ICovCi0gICAgICAgIHdyaXRlbChJT01NVV9TVEFUVVNfQ09NUF9XQUlUX0lO
VF9NQVNLLAotICAgICAgICAgICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElP
TU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0gICAgICAgIHJldHVybjsKKyAg
ICAgICAgaWYgKCB0aW1lb3V0ICYmIE5PVygpID4gdGltZW91dCApCisgICAg
ICAgIHsKKyAgICAgICAgICAgIHRocmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwg
MTsKKyAgICAgICAgICAgIHByaW50ayhYRU5MT0dfV0FSTklORworICAgICAg
ICAgICAgICAgICAgICJBTUQgSU9NTVUgJTA0eDolMDJ4OiUwMnguJXU6ICVz
Y29tcGxldGlvbiB3YWl0IHRha2luZyB0b28gbG9uZ1xuIiwKKyAgICAgICAg
ICAgICAgICAgICBpb21tdS0+c2VnLCBQQ0lfQlVTKGlvbW11LT5iZGYpLAor
ICAgICAgICAgICAgICAgICAgIFBDSV9TTE9UKGlvbW11LT5iZGYpLCBQQ0lf
RlVOQyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgICAgICB0aW1lb3V0
X2Jhc2UgPyAiaW90bGIgIiA6ICIiKTsKKyAgICAgICAgICAgIHRpbWVvdXQg
PSAwOworICAgICAgICB9CisgICAgICAgIGNwdV9yZWxheCgpOwogICAgIH0K
LSAgICBBTURfSU9NTVVfREVCVUcoIldhcm5pbmc6IENvbVdhaXRJbnQgYml0
IGRpZCBub3QgYXNzZXJ0IVxuIik7CisKKyAgICBpZiAoICF0aW1lb3V0ICkK
KyAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HCisgICAgICAgICAgICAg
ICAiQU1EIElPTU1VICUwNHg6JTAyeDolMDJ4LiV1OiAlc2NvbXBsZXRpb24g
d2FpdCB0b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgaW9tbXUtPnNl
ZywgUENJX0JVUyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgIFBDSV9T
TE9UKGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSwKKyAgICAg
ICAgICAgICAgIHRpbWVvdXRfYmFzZSA/ICJpb3RsYiAiIDogIiIsCisgICAg
ICAgICAgICAgICAoTk9XKCkgLSBzdGFydCkgLyAxMDAwMDAwMCk7CiB9CiAK
IC8qIEJ1aWxkIGxvdyBsZXZlbCBpb21tdSBjb21tYW5kIG1lc3NhZ2VzICov
CkBAIC0zMDAsNyArMzA5LDcgQEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW90
bGIodTggZGV2Zm4sIGNvbgogICAgIC8qIHNlbmQgSU5WQUxJREFURV9JT1RM
Ql9QQUdFUyBjb21tYW5kICovCiAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlv
bW11LT5sb2NrLCBmbGFncyk7CiAgICAgaW52YWxpZGF0ZV9pb3RsYl9wYWdl
cyhpb21tdSwgbWF4cGVuZCwgMCwgcXVldWVpZCwgZGFkZHIsIHJlcV9pZCwg
b3JkZXIpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgaW9tbXVfZGV2X2lvdGxi
X3RpbWVvdXQpOwogICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11
LT5sb2NrLCBmbGFncyk7CiB9CiAKQEAgLTMzNyw3ICszNDYsNyBAQCBzdGF0
aWMgdm9pZCBfYW1kX2lvbW11X2ZsdXNoX3BhZ2VzKHN0cnVjCiAgICAgewog
ICAgICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdz
KTsKICAgICAgICAgaW52YWxpZGF0ZV9pb21tdV9wYWdlcyhpb21tdSwgZGFk
ZHIsIGRvbV9pZCwgb3JkZXIpOwotICAgICAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSk7CisgICAgICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlv
bW11LCAwKTsKICAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9t
bXUtPmxvY2ssIGZsYWdzKTsKICAgICB9CiAKQEAgLTM2MSw3ICszNzAsNyBA
QCB2b2lkIGFtZF9pb21tdV9mbHVzaF9kZXZpY2Uoc3RydWN0IGFtZF9pCiAg
ICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAK
ICAgICBpbnZhbGlkYXRlX2Rldl90YWJsZV9lbnRyeShpb21tdSwgYmRmKTsK
LSAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hf
Y29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21t
dV9mbHVzaF9pbnRyZW1hcChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwgdWlu
dDE2X3QgYmRmKQpAQCAtMzY5LDcgKzM3OCw3IEBAIHZvaWQgYW1kX2lvbW11
X2ZsdXNoX2ludHJlbWFwKHN0cnVjdCBhbWQKICAgICBBU1NFUlQoIHNwaW5f
aXNfbG9ja2VkKCZpb21tdS0+bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVf
aW50ZXJydXB0X3RhYmxlKGlvbW11LCBiZGYpOwotICAgIGZsdXNoX2NvbW1h
bmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9jYWNo
ZXMoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUpCkBAIC0zNzcsNyArMzg2LDcg
QEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfYWxsX2NhY2hlcyhzdHJ1Y3QgYQog
ICAgIEFTU0VSVCggc3Bpbl9pc19sb2NrZWQoJmlvbW11LT5sb2NrKSApOwog
CiAgICAgaW52YWxpZGF0ZV9pb21tdV9hbGwoaW9tbXUpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSwgMCk7CiB9CiAKIHZvaWQgYW1kX2lvbW11X3NlbmRfZ3Vl
c3RfY21kKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11LCB1MzIgY21kW10pCkBA
IC0zODcsNyArMzk2LDggQEAgdm9pZCBhbWRfaW9tbXVfc2VuZF9ndWVzdF9j
bWQoc3RydWN0IGFtZAogICAgIHNwaW5fbG9ja19pcnFzYXZlKCZpb21tdS0+
bG9jaywgZmxhZ3MpOwogCiAgICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11
LCBjbWQpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAg
ICAvKiBUQkQ6IFRpbWVvdXQgc2VsZWN0aW9uIG1heSByZXF1aXJlIHBlZWtp
bmcgaW50byBjbWRbXS4gKi8KKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihp
b21tdSwgMCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+bG9jaywgZmxhZ3MpOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.14-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDUwLDE3ICs0NTAsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IHZ0
ZF9pb21tdSAqaW9tbXUpOwogCiBzdGF0aWMgdm9pZCBwcmludF9xaV9yZWdz
KHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KQpAQCAtNTUsNyArNTgsNyBAQCBz
dGF0aWMgdW5zaWduZWQgaW50IHFpbnZhbF9uZXh0X2luZGV4KHN0CiAgICAg
dGFpbCA+Pj0gUUlOVkFMX0lOREVYX1NISUZUOwogCiAgICAgLyogKHRhaWwr
MSA9PSBoZWFkKSBpbmRpY2F0ZXMgYSBmdWxsIHF1ZXVlLCB3YWl0IGZvciBI
VyAqLwotICAgIHdoaWxlICggKCB0YWlsICsgMSApICUgUUlOVkFMX0VOVFJZ
X05SID09CisgICAgd2hpbGUgKCAoKHRhaWwgKyAxKSAmIChxaV9lbnRyeV9u
ciAtIDEpKSA9PQogICAgICAgICAgICAgKCBkbWFyX3JlYWRxKGlvbW11LT5y
ZWcsIERNQVJfSVFIX1JFRykgPj4gUUlOVkFMX0lOREVYX1NISUZUICkgKQog
ICAgICAgICBjcHVfcmVsYXgoKTsKIApAQCAtNjgsNyArNzEsNyBAQCBzdGF0
aWMgdm9pZCBxaW52YWxfdXBkYXRlX3F0YWlsKHN0cnVjdCB2CiAKICAgICAv
KiBOZWVkIGhvbGQgcmVnaXN0ZXIgbG9jayB3aGVuIHVwZGF0ZSB0YWlsICov
CiAgICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPnJlZ2lzdGVy
X2xvY2spICk7Ci0gICAgdmFsID0gKGluZGV4ICsgMSkgJSBRSU5WQUxfRU5U
UllfTlI7CisgICAgdmFsID0gKGluZGV4ICsgMSkgJiAocWlfZW50cnlfbnIg
LSAxKTsKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFSX0lRVF9S
RUcsICh2YWwgPDwgUUlOVkFMX0lOREVYX1NISUZUKSk7CiB9CiAKQEAgLTQw
Myw4ICs0MDYsMjggQEAgaW50IGVuYWJsZV9xaW52YWwoc3RydWN0IHZ0ZF9p
b21tdSAqaW9tbQogCiAgICAgaWYgKCBpb21tdS0+cWludmFsX21hZGRyID09
IDAgKQogICAgIHsKLSAgICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciA9IGFs
bG9jX3BndGFibGVfbWFkZHIoUUlOVkFMX0FSQ0hfUEFHRV9OUiwKLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW9tbXUtPm5vZGUpOworICAgICAgICBpZiAoICFxaV9lbnRyeV9uciApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgKiBXaXRo
IHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3byBz
bG90cyBmb3IgZXZlcnkKKyAgICAgICAgICAgICAqIG9wZXJhdGlvbiAodGhl
IG9wZXJhdGlvbiBpdHNlbGYgYW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRo
ZXJlCisgICAgICAgICAgICAgKiBjYW4gYmUgb25lIHN1Y2ggcGFpciBvZiBy
ZXF1ZXN0cyBwZW5kaW5nIHBlciBDUFUuICBPbmUgZXh0cmEKKyAgICAgICAg
ICAgICAqIGVudHJ5IGlzIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRl
cmVkIGZ1bGwgd2hlbiB0aGVyZSdzCisgICAgICAgICAgICAgKiBvbmx5IG9u
ZSBlbnRyeSBsZWZ0LgorICAgICAgICAgICAgICovCisgICAgICAgICAgICBC
VUlMRF9CVUdfT04oQ09ORklHX05SX0NQVVMgKiAyID49IFFJTlZBTF9NQVhf
RU5UUllfTlIpOworICAgICAgICAgICAgcWlfcGdfb3JkZXIgPSBnZXRfb3Jk
ZXJfZnJvbV9ieXRlcygobnVtX3ByZXNlbnRfY3B1cygpICogMiArIDEpIDw8
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIChQQUdFX1NISUZUIC0KKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFFJTlZBTF9FTlRSWV9PUkRFUikpOwor
ICAgICAgICAgICAgcWlfZW50cnlfbnIgPSAxdSA8PCAocWlfcGdfb3JkZXIg
KyBRSU5WQUxfRU5UUllfT1JERVIpOworCisgICAgICAgICAgICBkcHJpbnRr
KFhFTkxPR19JTkZPIFZURFBSRUZJWCwKKyAgICAgICAgICAgICAgICAgICAg
IlFJOiB1c2luZyAldS1lbnRyeSByaW5nKHMpXG4iLCBxaV9lbnRyeV9ucik7
CisgICAgICAgIH0KKworICAgICAgICBpb21tdS0+cWludmFsX21hZGRyID0K
KyAgICAgICAgICAgIGFsbG9jX3BndGFibGVfbWFkZHIocWlfZW50cnlfbnIg
Pj4gUUlOVkFMX0VOVFJZX09SREVSLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpb21tdS0+bm9kZSk7CiAgICAgICAgIGlmICggaW9tbXUt
PnFpbnZhbF9tYWRkciA9PSAwICkKICAgICAgICAgewogICAgICAgICAgICAg
ZHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgsCkBAIC00MTgsMTUg
KzQ0MSwxNiBAQCBpbnQgZW5hYmxlX3FpbnZhbChzdHJ1Y3QgdnRkX2lvbW11
ICppb21tCiAKICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lz
dGVyX2xvY2ssIGZsYWdzKTsKIAotICAgIC8qIFNldHVwIEludmFsaWRhdGlv
biBRdWV1ZSBBZGRyZXNzKElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAg
KiBhZGRyZXNzIG9mIHRoZSBwYWdlIHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMg
ZmllbGQgYXQKLSAgICAgKiBiaXRzWzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBv
ZiBxdWV1ZSBpcyBvbmUgNEtCIHBhZ2UuCi0gICAgICogVGhhdCdzIDI1NiBl
bnRyaWVzLiAgUXVldWVkIEhlYWQgKElRSCkgYW5kIFF1ZXVlIFRhaWwgKElR
VCkKLSAgICAgKiByZWdpc3RlcnMgYXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQg
dG8gMCB3aXRoIHdyaXRlCi0gICAgICogdG8gSVFBIHJlZ2lzdGVyLgorICAg
IC8qCisgICAgICogU2V0dXAgSW52YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3Mg
KElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAg
KiBwYWdlcyB3ZSBqdXN0IGFsbG9jYXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBi
aXRzWzI6MF0gaW5kaWNhdGVzIHRoZSBzaXplCisgICAgICogKHBhZ2Ugb3Jk
ZXIpIG9mIHRoZSBxdWV1ZS4KKyAgICAgKgorICAgICAqIFF1ZXVlZCBIZWFk
IChJUUgpIGFuZCBRdWV1ZSBUYWlsIChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0
b21hdGljYWxseQorICAgICAqIHJlc2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJ
UUEgcmVnaXN0ZXIuCiAgICAgICovCiAgICAgZG1hcl93cml0ZXEoaW9tbXUt
PnJlZywgRE1BUl9JUUFfUkVHLAotICAgICAgICAgICAgICAgIGlvbW11LT5x
aW52YWxfbWFkZHIgfCBRSU5WQUxfUEFHRV9PUkRFUik7CisgICAgICAgICAg
ICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciB8IHFpX3BnX29yZGVyKTsKIAog
ICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfSVFUX1JFRywgMCk7
CiAK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.14-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXUtZGVmcy5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdS1kZWZzLmgKQEAgLTIwLDkgKzIwLDYgQEAKICNpZm5k
ZWYgQU1EX0lPTU1VX0RFRlNfSAogI2RlZmluZSBBTURfSU9NTVVfREVGU19I
CiAKLS8qIElPTU1VIENvbW1hbmQgQnVmZmVyIGVudHJpZXM6IGluIHBvd2Vy
IG9mIDIgaW5jcmVtZW50cywgbWluaW11bSBvZiAyNTYgKi8KLSNkZWZpbmUg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMJNTEyCi0KIC8qIElP
TU1VIEV2ZW50IExvZyBlbnRyaWVzOiBpbiBwb3dlciBvZiAyIGluY3JlbWVu
dHMsIG1pbmltdW0gb2YgMjU2ICovCiAjZGVmaW5lIElPTU1VX0VWRU5UX0xP
R19ERUZBVUxUX0VOVFJJRVMgICAgIDUxMgogCkBAIC0xNjQsOCArMTYxLDgg
QEAgc3RydWN0IGFtZF9pb21tdV9kdGUgewogI2RlZmluZSBJT01NVV9DTURf
QlVGRkVSX0xFTkdUSF9NQVNLCQkweDBGMDAwMDAwCiAjZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfTEVOR1RIX1NISUZUCQkyNAogCi0jZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfRU5UUllfU0laRQkJCTE2Ci0jZGVmaW5lIElPTU1VX0NN
RF9CVUZGRVJfUE9XRVJfT0YyX0VOVFJJRVNfUEVSX1BBR0UJOAorI2RlZmlu
ZSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSICAgICAgICAgICAgNAor
I2RlZmluZSBJT01NVV9DTURfQlVGRkVSX01BWF9FTlRSSUVTICAgICAgICAg
ICAgKDF1IDw8IDE1KQogCiAjZGVmaW5lIElPTU1VX0NNRF9PUENPREVfTUFT
SwkJCTB4RjAwMDAwMDAKICNkZWZpbmUgSU9NTVVfQ01EX09QQ09ERV9TSElG
VAkJCTI4Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21t
dV9jbWQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9t
bXVfY21kLmMKQEAgLTI0LDcgKzI0LDcgQEAgc3RhdGljIGludCBxdWV1ZV9p
b21tdV9jb21tYW5kKHN0cnVjdCBhbQogewogICAgIHVpbnQzMl90IHRhaWws
IGhlYWQ7CiAKLSAgICB0YWlsID0gaW9tbXUtPmNtZF9idWZmZXIudGFpbCAr
IElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRTsKKyAgICB0YWlsID0gaW9t
bXUtPmNtZF9idWZmZXIudGFpbCArIHNpemVvZihjbWRfZW50cnlfdCk7CiAg
ICAgaWYgKCB0YWlsID09IGlvbW11LT5jbWRfYnVmZmVyLnNpemUgKQogICAg
ICAgICB0YWlsID0gMDsKIApAQCAtMzMsNyArMzMsNyBAQCBzdGF0aWMgaW50
IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3RydWN0IGFtCiAgICAgaWYgKCBoZWFk
ICE9IHRhaWwgKQogICAgIHsKICAgICAgICAgbWVtY3B5KGlvbW11LT5jbWRf
YnVmZmVyLmJ1ZmZlciArIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwsCi0gICAg
ICAgICAgICAgICBjbWQsIElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRSk7
CisgICAgICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
CiAgICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogICAg
ICAgICByZXR1cm4gMTsKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gv
YW1kL2lvbW11X2luaXQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfaW5pdC5jCkBAIC0xMTgsNyArMTE4LDcgQEAgc3RhdGlj
IHZvaWQgcmVnaXN0ZXJfaW9tbXVfY21kX2J1ZmZlcl9pbgogICAgIHdyaXRl
bChlbnRyeSwgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX0NNRF9CVUZGRVJf
QkFTRV9MT1dfT0ZGU0VUKTsKIAogICAgIHBvd2VyX29mMl9lbnRyaWVzID0g
Z2V0X29yZGVyX2Zyb21fYnl0ZXMoaW9tbXUtPmNtZF9idWZmZXIuc2l6ZSkg
KwotICAgICAgICBJT01NVV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVT
X1BFUl9QQUdFOworICAgICAgICBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JV
RkZFUl9FTlRSWV9PUkRFUjsKIAogICAgIGVudHJ5ID0gMDsKICAgICBpb21t
dV9zZXRfYWRkcl9oaV90b19yZWcoJmVudHJ5LCBhZGRyX2hpKTsKQEAgLTEw
MjIsOSArMTAyMiwzMSBAQCBzdGF0aWMgdm9pZCAqX19pbml0IGFsbG9jYXRl
X3JpbmdfYnVmZmVyCiBzdGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9j
bWRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQogewogICAgIC8q
IGFsbG9jYXRlICdjb21tYW5kIGJ1ZmZlcicgaW4gcG93ZXIgb2YgMiBpbmNy
ZW1lbnRzIG9mIDRLICovCisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3Jl
YWRfbW9zdGx5IG5yX2VudHM7CisKKyAgICBpZiAoICFucl9lbnRzICkKKyAg
ICB7CisgICAgICAgIHVuc2lnbmVkIGludCBvcmRlcjsKKworICAgICAgICAv
KgorICAgICAgICAgKiBXaXRoIHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1v
ZGVsLCB3ZSBuZWVkIHR3byBzbG90cyBmb3IgZXZlcnkKKyAgICAgICAgICog
b3BlcmF0aW9uICh0aGUgb3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0IGNv
bW1hbmQpLiAgVGhlcmUgY2FuIGJlCisgICAgICAgICAqIG9uZSBzdWNoIHBh
aXIgb2YgcmVxdWVzdHMgcGVuZGluZyBwZXIgQ1BVLiAgT25lIGV4dHJhIGVu
dHJ5IGlzCisgICAgICAgICAqIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25z
aWRlcmVkIGZ1bGwgd2hlbiB0aGVyZSdzIG9ubHkgb25lIGVudHJ5CisgICAg
ICAgICAqIGxlZnQuCisgICAgICAgICAqLworICAgICAgICBCVUlMRF9CVUdf
T04oQ09ORklHX05SX0NQVVMgKiAyID49IElPTU1VX0NNRF9CVUZGRVJfTUFY
X0VOVFJJRVMpOworICAgICAgICBvcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5
dGVzKChudW1fcHJlc2VudF9jcHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9DTURfQlVGRkVS
X0VOVFJZX09SREVSKTsKKyAgICAgICAgbnJfZW50cyA9IDF1IDw8IChvcmRl
ciArIFBBR0VfU0hJRlQgLSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVS
KTsKKworICAgICAgICBBTURfSU9NTVVfREVCVUcoInVzaW5nICV1LWVudHJ5
IGNtZCByaW5nKHMpXG4iLCBucl9lbnRzKTsKKyAgICB9CisKKyAgICBCVUlM
RF9CVUdfT04oc2l6ZW9mKGNtZF9lbnRyeV90KSAhPSAoMXUgPDwgSU9NTVVf
Q01EX0JVRkZFUl9FTlRSWV9PUkRFUikpOworCiAgICAgcmV0dXJuIGFsbG9j
YXRlX3JpbmdfYnVmZmVyKCZpb21tdS0+Y21kX2J1ZmZlciwgc2l6ZW9mKGNt
ZF9lbnRyeV90KSwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJDb21tYW5kIEJ1ZmZlciIsIGZhbHNl
KTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbnJfZW50cywg
IkNvbW1hbmQgQnVmZmVyIiwgZmFsc2UpOwogfQogCiBzdGF0aWMgdm9pZCAq
IF9faW5pdCBhbGxvY2F0ZV9ldmVudF9sb2coc3RydWN0IGFtZF9pb21tdSAq
aW9tbXUpCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.14-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.14-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZG1hci5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKQEAg
LTEyNyw2ICsxMjcsMzQgQEAgZG8gewogICAgIH0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
IH0gd2hpbGUgKDApCiAKKyNkZWZpbmUgSU9NTVVfRkxVU0hfV0FJVCh3aGF0
LCBpb21tdSwgb2Zmc2V0LCBvcCwgY29uZCwgc3RzKSAgICAgICBcCitkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOyAgICAgICAgICAgICAgIFwK
KyAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc190aW1lX3QgdGltZW91
dCA9IHN0YXJ0ICsgRE1BUl9PUEVSQVRJT05fVElNRU9VVCAqIHRocmVzaG9s
ZDsgXAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBmb3IgKCA7IDsg
KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBz
dHMgPSBvcChpb21tdS0+cmVnLCBvZmZzZXQpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBjb25kICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBicmVhazsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aHJlc2hv
bGQgfD0gdGhyZXNob2xkIDw8IDE7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBS
RUZJWCAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICAgICAiIElPTU1VIyV1OiAlcyBmbHVzaCB0YWtpbmcgdG9vIGxvbmdcbiIs
ICAgICAgICBcCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCB3
aGF0KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgdGltZW91dCA9IDA7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgfSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIGNwdV9yZWxheCgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIH0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgaWYgKCAhdGltZW91dCAp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklY
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICIgSU9NTVUjJXU6ICVzIGZsdXNoIHRvb2sgJWx1bXNcbiIsICAgICAgICAg
ICAgICAgICBcCisgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQs
IChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsgICAgXAorfSB3aGlsZSAo
IGZhbHNlICkKKwogaW50IHZ0ZF9od19jaGVjayh2b2lkKTsKIHZvaWQgZGlz
YWJsZV9wbXIoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUpOwogaW50IGlzX2ln
ZF9kcmhkKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7Ci0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0zMjYsOCArMzI2
LDggQEAgc3RhdGljIHZvaWQgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKHN0
cgogICAgIGRtYXJfd3JpdGVsKGlvbW11LT5yZWcsIERNQVJfR0NNRF9SRUcs
IHZhbCB8IERNQV9HQ01EX1dCRik7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFy
ZHdhcmUgY29tcGxldGUgaXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11
LCBETUFSX0dTVFNfUkVHLCBkbWFyX3JlYWRsLAotICAgICAgICAgICAgICAg
ICAgISh2YWwgJiBETUFfR1NUU19XQkZTKSwgdmFsKTsKKyAgICBJT01NVV9G
TFVTSF9XQUlUKCJ3cml0ZSBidWZmZXIiLCBpb21tdSwgRE1BUl9HU1RTX1JF
RywgZG1hcl9yZWFkbCwKKyAgICAgICAgICAgICAgICAgICAgICEodmFsICYg
RE1BX0dTVFNfV0JGUyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFy
ZXN0b3JlKCZpb21tdS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogfQpAQCAt
Mzc2LDggKzM3Niw4IEBAIGludCB2dGRfZmx1c2hfY29udGV4dF9yZWcoc3Ry
dWN0IHZ0ZF9pb20KICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFS
X0NDTURfUkVHLCB2YWwpOwogCiAgICAgLyogTWFrZSBzdXJlIGhhcmR3YXJl
IGNvbXBsZXRlIGl0ICovCi0gICAgSU9NTVVfV0FJVF9PUChpb21tdSwgRE1B
Ul9DQ01EX1JFRywgZG1hcl9yZWFkcSwKLSAgICAgICAgICAgICAgICAgICEo
dmFsICYgRE1BX0NDTURfSUNDKSwgdmFsKTsKKyAgICBJT01NVV9GTFVTSF9X
QUlUKCJjb250ZXh0IiwgaW9tbXUsIERNQVJfQ0NNRF9SRUcsIGRtYXJfcmVh
ZHEsCisgICAgICAgICAgICAgICAgICAgICAhKHZhbCAmIERNQV9DQ01EX0lD
QyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogICAgIC8qIGZsdXNoIGNvbnRl
eHQgZW50cnkgd2lsbCBpbXBsaWNpdGx5IGZsdXNoIHdyaXRlIGJ1ZmZlciAq
LwpAQCAtNDU0LDggKzQ1NCw4IEBAIGludCB2dGRfZmx1c2hfaW90bGJfcmVn
KHN0cnVjdCB2dGRfaW9tbXUKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVn
LCB0bGJfb2Zmc2V0ICsgOCwgdmFsKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBo
YXJkd2FyZSBjb21wbGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9t
bXUsICh0bGJfb2Zmc2V0ICsgOCksIGRtYXJfcmVhZHEsCi0gICAgICAgICAg
ICAgICAgICAhKHZhbCAmIERNQV9UTEJfSVZUKSwgdmFsKTsKKyAgICBJT01N
VV9GTFVTSF9XQUlUKCJpb3RsYiIsIGlvbW11LCAodGxiX29mZnNldCArIDgp
LCBkbWFyX3JlYWRxLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBE
TUFfVExCX0lWVCksIHZhbCk7CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsKIAogICAgIC8qIGNo
ZWNrIElPVExCIGludmFsaWRhdGlvbiBncmFudWxhcml0eSAqLwotLS0gYS94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvcWludmFsLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCkBAIC0yOSw4ICsy
OSw2IEBACiAjaW5jbHVkZSAiZXh0ZXJuLmgiCiAjaW5jbHVkZSAiLi4vYXRz
LmgiCiAKLSNkZWZpbmUgVlREX1FJX1RJTUVPVVQJMQotCiBzdGF0aWMgdW5z
aWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfcGdfb3JkZXI7CiBzdGF0aWMg
dW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfZW50cnlfbnI7CiAKQEAg
LTYwLDcgKzU4LDExIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgcWludmFsX25l
eHRfaW5kZXgoc3QKICAgICAvKiAodGFpbCsxID09IGhlYWQpIGluZGljYXRl
cyBhIGZ1bGwgcXVldWUsIHdhaXQgZm9yIEhXICovCiAgICAgd2hpbGUgKCAo
KHRhaWwgKyAxKSAmIChxaV9lbnRyeV9uciAtIDEpKSA9PQogICAgICAgICAg
ICAgKCBkbWFyX3JlYWRxKGlvbW11LT5yZWcsIERNQVJfSVFIX1JFRykgPj4g
UUlOVkFMX0lOREVYX1NISUZUICkgKQorICAgIHsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUiBWVERQUkVGSVggIiBJT01NVSMldTogbm8gUUkg
c2xvdCBhdmFpbGFibGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11
LT5pbmRleCk7CiAgICAgICAgIGNwdV9yZWxheCgpOworICAgIH0KIAogICAg
IHJldHVybiB0YWlsOwogfQpAQCAtMTgwLDIzICsxODIsMzIgQEAgc3RhdGlj
IGludCBfX211c3RfY2hlY2sgcXVldWVfaW52YWxpZGF0ZQogICAgIC8qIE5v
dyB3ZSBkb24ndCBzdXBwb3J0IGludGVycnVwdCBtZXRob2QgKi8KICAgICBp
ZiAoIHN3ICkKICAgICB7Ci0gICAgICAgIHNfdGltZV90IHRpbWVvdXQ7Ci0K
LSAgICAgICAgLyogSW4gY2FzZSBhbGwgd2FpdCBkZXNjcmlwdG9yIHdyaXRl
cyB0byBzYW1lIGFkZHIgd2l0aCBzYW1lIGRhdGEgKi8KLSAgICAgICAgdGlt
ZW91dCA9IE5PVygpICsgTUlMTElTRUNTKGZsdXNoX2Rldl9pb3RsYiA/Ci0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpb21tdV9kZXZf
aW90bGJfdGltZW91dCA6IFZURF9RSV9USU1FT1VUKTsKKyAgICAgICAgc3Rh
dGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7
CisgICAgICAgIHNfdGltZV90IHN0YXJ0ID0gTk9XKCk7CisgICAgICAgIHNf
dGltZV90IHRpbWVvdXQgPSBzdGFydCArIChmbHVzaF9kZXZfaW90bGIKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID8gaW9tbXVfZGV2
X2lvdGxiX3RpbWVvdXQKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIDogMTAwKSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwogCiAgICAg
ICAgIHdoaWxlICggQUNDRVNTX09OQ0UoKnRoaXNfcG9sbF9zbG90KSAhPSBR
SU5WQUxfU1RBVF9ET05FICkKICAgICAgICAgewotICAgICAgICAgICAgaWYg
KCBOT1coKSA+IHRpbWVvdXQgKQorICAgICAgICAgICAgaWYgKCB0aW1lb3V0
ICYmIE5PVygpID4gdGltZW91dCApCiAgICAgICAgICAgICB7Ci0gICAgICAg
ICAgICAgICAgcHJpbnRfcWlfcmVncyhpb21tdSk7CisgICAgICAgICAgICAg
ICAgdGhyZXNob2xkIHw9IHRocmVzaG9sZCA8PCAxOwogICAgICAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKLSAgICAgICAg
ICAgICAgICAgICAgICAgIiBRdWV1ZSBpbnZhbGlkYXRlIHdhaXQgZGVzY3Jp
cHRvciB0aW1lZCBvdXRcbiIpOwotICAgICAgICAgICAgICAgIHJldHVybiAt
RVRJTUVET1VUOworICAgICAgICAgICAgICAgICAgICAgICAiIElPTU1VIyV1
OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0YWtpbmcgdG9vIGxvbmdcbiIsCisg
ICAgICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgZmx1c2hfZGV2
X2lvdGxiID8gIiBkZXYiIDogIiIpOworICAgICAgICAgICAgICAgIHByaW50
X3FpX3JlZ3MoaW9tbXUpOworICAgICAgICAgICAgICAgIHRpbWVvdXQgPSAw
OwogICAgICAgICAgICAgfQogICAgICAgICAgICAgY3B1X3JlbGF4KCk7CiAg
ICAgICAgIH0KKworICAgICAgICBpZiAoICF0aW1lb3V0ICkKKyAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKKyAgICAgICAg
ICAgICAgICAgICAiIElPTU1VIyV1OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0
b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRl
eCwgZmx1c2hfZGV2X2lvdGxiID8gIiBkZXYiIDogIiIsCisgICAgICAgICAg
ICAgICAgICAgKE5PVygpIC0gc3RhcnQpIC8gMTAwMDAwMDApOworCiAgICAg
ICAgIHJldHVybiAwOwogICAgIH0KIAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.14-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.14-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMCw0
MyArMjAsMzIgQEAKICNpbmNsdWRlICJpb21tdS5oIgogI2luY2x1ZGUgIi4u
L2F0cy5oIgogCi1zdGF0aWMgaW50IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsIHUzMiBjbWRbXSkKK3N0YXRpYyB2b2lk
IHNlbmRfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCB1aW50MzJf
dCBjbWRbNF0pCiB7Ci0gICAgdWludDMyX3QgdGFpbCwgaGVhZDsKKyAgICB1
aW50MzJfdCB0YWlsOwogCiAgICAgdGFpbCA9IGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgKyBzaXplb2YoY21kX2VudHJ5X3QpOwogICAgIGlmICggdGFpbCA9
PSBpb21tdS0+Y21kX2J1ZmZlci5zaXplICkKICAgICAgICAgdGFpbCA9IDA7
CiAKLSAgICBoZWFkID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArCi0gICAg
ICAgICAgICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpICYg
SU9NTVVfUklOR19CVUZGRVJfUFRSX01BU0s7Ci0gICAgaWYgKCBoZWFkICE9
IHRhaWwgKQorICAgIHdoaWxlICggdGFpbCA9PSAocmVhZGwoaW9tbXUtPm1t
aW9fYmFzZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9D
TURfQlVGRkVSX0hFQURfT0ZGU0VUKSAmCisgICAgICAgICAgICAgICAgICAg
ICBJT01NVV9SSU5HX0JVRkZFUl9QVFJfTUFTSykgKQogICAgIHsKLSAgICAg
ICAgbWVtY3B5KGlvbW11LT5jbWRfYnVmZmVyLmJ1ZmZlciArIGlvbW11LT5j
bWRfYnVmZmVyLnRhaWwsCi0gICAgICAgICAgICAgICBjbWQsIHNpemVvZihj
bWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRh
aWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUgorICAgICAgICAgICAgICAgICAgICAiQU1EIElP
TU1VICUwNHg6JTAyeDolMDJ4LiV1OiBubyBjbWQgc2xvdCBhdmFpbGFibGVc
biIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11LT5zZWcsIFBDSV9CVVMo
aW9tbXUtPmJkZiksCisgICAgICAgICAgICAgICAgICAgIFBDSV9TTE9UKGlv
bW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSk7CisgICAgICAgIGNw
dV9yZWxheCgpOwogICAgIH0KIAotICAgIHJldHVybiAwOwotfQotCi1zdGF0
aWMgdm9pZCBjb21taXRfaW9tbXVfY29tbWFuZF9idWZmZXIoc3RydWN0IGFt
ZF9pb21tdSAqaW9tbXUpCi17Ci0gICAgd3JpdGVsKGlvbW11LT5jbWRfYnVm
ZmVyLnRhaWwsCi0gICAgICAgICAgIGlvbW11LT5tbWlvX2Jhc2UgKyBJT01N
VV9DTURfQlVGRkVSX1RBSUxfT0ZGU0VUKTsKLX0KKyAgICBtZW1jcHkoaW9t
bXUtPmNtZF9idWZmZXIuYnVmZmVyICsgaW9tbXUtPmNtZF9idWZmZXIudGFp
bCwKKyAgICAgICAgICAgY21kLCBzaXplb2YoY21kX2VudHJ5X3QpKTsKIAot
c3RhdGljIGludCBzZW5kX2lvbW11X2NvbW1hbmQoc3RydWN0IGFtZF9pb21t
dSAqaW9tbXUsIHUzMiBjbWRbXSkKLXsKLSAgICBpZiAoIHF1ZXVlX2lvbW11
X2NvbW1hbmQoaW9tbXUsIGNtZCkgKQotICAgIHsKLSAgICAgICAgY29tbWl0
X2lvbW11X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKLSAgICAgICAgcmV0dXJu
IDE7Ci0gICAgfQorICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWls
OwogCi0gICAgcmV0dXJuIDA7CisgICAgd3JpdGVsKHRhaWwsIGlvbW11LT5t
bWlvX2Jhc2UgKyBJT01NVV9DTURfQlVGRkVSX1RBSUxfT0ZGU0VUKTsKIH0K
IAogc3RhdGljIHZvaWQgZmx1c2hfY29tbWFuZF9idWZmZXIoc3RydWN0IGFt
ZF9pb21tdSAqaW9tbXUpCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.14-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.14-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQv
aW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1k
L2lvbW11X2NtZC5jCkBAIC00OCwxMCArNDgsMTIgQEAgc3RhdGljIHZvaWQg
c2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbQogICAgIHdyaXRlbCh0YWls
LCBpb21tdS0+bW1pb19iYXNlICsgSU9NTVVfQ01EX0JVRkZFUl9UQUlMX09G
RlNFVCk7CiB9CiAKLXN0YXRpYyB2b2lkIGZsdXNoX2NvbW1hbmRfYnVmZmVy
KHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQorc3RhdGljIHZvaWQgZmx1c2hf
Y29tbWFuZF9idWZmZXIoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUsCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgdGlt
ZW91dF9iYXNlKQogewotICAgIHVuc2lnbmVkIGludCBjbWRbNF0sIHN0YXR1
cywgbG9vcF9jb3VudDsKLSAgICBib29sIGNvbXBfd2FpdDsKKyAgICB1aW50
MzJfdCBjbWRbNF07CisgICAgc190aW1lX3Qgc3RhcnQsIHRpbWVvdXQ7Cisg
ICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9s
ZCA9IDE7CiAKICAgICAvKiBSVzFDICdDb21XYWl0SW50JyBpbiBzdGF0dXMg
cmVnaXN0ZXIgKi8KICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJ
VF9JTlQsCkBAIC02NywyMiArNjksMzEgQEAgc3RhdGljIHZvaWQgZmx1c2hf
Y29tbWFuZF9idWZmZXIoc3RydWN0CiAgICAgICAgICAgICAgICAgICAgICAg
ICAgSU9NTVVfQ09NUF9XQUlUX0lfRkxBR19TSElGVCwgJmNtZFswXSk7CiAg
ICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11LCBjbWQpOwogCi0gICAgLyog
TWFrZSBsb29wX2NvdW50IGxvbmcgZW5vdWdoIGZvciBwb2xsaW5nIGNvbXBs
ZXRpb24gd2FpdCBiaXQgKi8KLSAgICBsb29wX2NvdW50ID0gMTAwMDsKLSAg
ICBkbyB7Ci0gICAgICAgIHN0YXR1cyA9IHJlYWRsKGlvbW11LT5tbWlvX2Jh
c2UgKyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpOwotICAgICAgICBjb21w
X3dhaXQgPSBzdGF0dXMgJiBJT01NVV9TVEFUVVNfQ09NUF9XQUlUX0lOVDsK
LSAgICAgICAgLS1sb29wX2NvdW50OwotICAgIH0gd2hpbGUgKCAhY29tcF93
YWl0ICYmIGxvb3BfY291bnQgKTsKLQotICAgIGlmICggY29tcF93YWl0ICkK
KyAgICBzdGFydCA9IE5PVygpOworICAgIHRpbWVvdXQgPSBzdGFydCArICh0
aW1lb3V0X2Jhc2UgPzogMTAwKSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwor
ICAgIHdoaWxlICggIShyZWFkbChpb21tdS0+bW1pb19iYXNlICsgSU9NTVVf
U1RBVFVTX01NSU9fT0ZGU0VUKSAmCisgICAgICAgICAgICAgIElPTU1VX1NU
QVRVU19DT01QX1dBSVRfSU5UKSApCiAgICAgewotICAgICAgICAvKiBSVzFD
ICdDb21XYWl0SW50JyBpbiBzdGF0dXMgcmVnaXN0ZXIgKi8KLSAgICAgICAg
d3JpdGVsKElPTU1VX1NUQVRVU19DT01QX1dBSVRfSU5ULAotICAgICAgICAg
ICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX1NUQVRVU19NTUlPX09G
RlNFVCk7Ci0gICAgICAgIHJldHVybjsKKyAgICAgICAgaWYgKCB0aW1lb3V0
ICYmIE5PVygpID4gdGltZW91dCApCisgICAgICAgIHsKKyAgICAgICAgICAg
IHRocmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwgMTsKKyAgICAgICAgICAgIHBy
aW50ayhYRU5MT0dfV0FSTklORworICAgICAgICAgICAgICAgICAgICJBTUQg
SU9NTVUgJTA0eDolMDJ4OiUwMnguJXU6ICVzY29tcGxldGlvbiB3YWl0IHRh
a2luZyB0b28gbG9uZ1xuIiwKKyAgICAgICAgICAgICAgICAgICBpb21tdS0+
c2VnLCBQQ0lfQlVTKGlvbW11LT5iZGYpLAorICAgICAgICAgICAgICAgICAg
IFBDSV9TTE9UKGlvbW11LT5iZGYpLCBQQ0lfRlVOQyhpb21tdS0+YmRmKSwK
KyAgICAgICAgICAgICAgICAgICB0aW1lb3V0X2Jhc2UgPyAiaW90bGIgIiA6
ICIiKTsKKyAgICAgICAgICAgIHRpbWVvdXQgPSAwOworICAgICAgICB9Cisg
ICAgICAgIGNwdV9yZWxheCgpOwogICAgIH0KLSAgICBBTURfSU9NTVVfREVC
VUcoIldhcm5pbmc6IENvbVdhaXRJbnQgYml0IGRpZCBub3QgYXNzZXJ0IVxu
Iik7CisKKyAgICBpZiAoICF0aW1lb3V0ICkKKyAgICAgICAgcHJpbnRrKFhF
TkxPR19XQVJOSU5HCisgICAgICAgICAgICAgICAiQU1EIElPTU1VICUwNHg6
JTAyeDolMDJ4LiV1OiAlc2NvbXBsZXRpb24gd2FpdCB0b29rICVsdW1zXG4i
LAorICAgICAgICAgICAgICAgaW9tbXUtPnNlZywgUENJX0JVUyhpb21tdS0+
YmRmKSwKKyAgICAgICAgICAgICAgIFBDSV9TTE9UKGlvbW11LT5iZGYpLCBQ
Q0lfRlVOQyhpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgIHRpbWVvdXRf
YmFzZSA/ICJpb3RsYiAiIDogIiIsCisgICAgICAgICAgICAgICAoTk9XKCkg
LSBzdGFydCkgLyAxMDAwMDAwMCk7CiB9CiAKIC8qIEJ1aWxkIGxvdyBsZXZl
bCBpb21tdSBjb21tYW5kIG1lc3NhZ2VzICovCkBAIC0yOTQsNyArMzA1LDcg
QEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW90bGIodTggZGV2Zm4sIGNvbgog
ICAgIC8qIHNlbmQgSU5WQUxJREFURV9JT1RMQl9QQUdFUyBjb21tYW5kICov
CiAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlvbW11LT5sb2NrLCBmbGFncyk7
CiAgICAgaW52YWxpZGF0ZV9pb3RsYl9wYWdlcyhpb21tdSwgbWF4cGVuZCwg
MCwgcXVldWVpZCwgZGFkZHIsIHJlcV9pZCwgb3JkZXIpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSwgaW9tbXVfZGV2X2lvdGxiX3RpbWVvdXQpOwogICAgIHNw
aW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11LT5sb2NrLCBmbGFncyk7CiB9
CiAKQEAgLTMzMSw3ICszNDIsNyBAQCBzdGF0aWMgdm9pZCBfYW1kX2lvbW11
X2ZsdXNoX3BhZ2VzKHN0cnVjCiAgICAgewogICAgICAgICBzcGluX2xvY2tf
aXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsKICAgICAgICAgaW52YWxp
ZGF0ZV9pb21tdV9wYWdlcyhpb21tdSwgZGFkZHIsIGRvbV9pZCwgb3JkZXIp
OwotICAgICAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAg
ICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11LCAwKTsKICAgICAgICAg
c3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsK
ICAgICB9CiAKQEAgLTM1NSw3ICszNjYsNyBAQCB2b2lkIGFtZF9pb21tdV9m
bHVzaF9kZXZpY2Uoc3RydWN0IGFtZF9pCiAgICAgQVNTRVJUKCBzcGluX2lz
X2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAKICAgICBpbnZhbGlkYXRlX2Rl
dl90YWJsZV9lbnRyeShpb21tdSwgYmRmKTsKLSAgICBmbHVzaF9jb21tYW5k
X2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hfY29tbWFuZF9idWZmZXIoaW9t
bXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21tdV9mbHVzaF9pbnRyZW1hcChz
dHJ1Y3QgYW1kX2lvbW11ICppb21tdSwgdWludDE2X3QgYmRmKQpAQCAtMzYz
LDcgKzM3NCw3IEBAIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2ludHJlbWFwKHN0
cnVjdCBhbWQKICAgICBBU1NFUlQoIHNwaW5faXNfbG9ja2VkKCZpb21tdS0+
bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVfaW50ZXJydXB0X3RhYmxlKGlv
bW11LCBiZGYpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsK
KyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiB9CiAKIHZv
aWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9jYWNoZXMoc3RydWN0IGFtZF9pb21t
dSAqaW9tbXUpCkBAIC0zNzEsNyArMzgyLDcgQEAgdm9pZCBhbWRfaW9tbXVf
Zmx1c2hfYWxsX2NhY2hlcyhzdHJ1Y3QgYQogICAgIEFTU0VSVCggc3Bpbl9p
c19sb2NrZWQoJmlvbW11LT5sb2NrKSApOwogCiAgICAgaW52YWxpZGF0ZV9p
b21tdV9hbGwoaW9tbXUpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlv
bW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiB9
CiAKIHZvaWQgYW1kX2lvbW11X3NlbmRfZ3Vlc3RfY21kKHN0cnVjdCBhbWRf
aW9tbXUgKmlvbW11LCB1MzIgY21kW10pCkBAIC0zODEsNyArMzkyLDggQEAg
dm9pZCBhbWRfaW9tbXVfc2VuZF9ndWVzdF9jbWQoc3RydWN0IGFtZAogICAg
IHNwaW5fbG9ja19pcnFzYXZlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwogCiAg
ICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11LCBjbWQpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICAvKiBUQkQ6IFRpbWVvdXQg
c2VsZWN0aW9uIG1heSByZXF1aXJlIHBlZWtpbmcgaW50byBjbWRbXS4gKi8K
KyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiAKICAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwog
fQo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.15-1.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.15-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBzaXplIHFpbnZhbCBxdWV1ZSBkeW5hbWljYWxseQoKV2l0aCB0
aGUgcHJlc2VudCBzeW5jaHJvbm91cyBtb2RlbCwgd2UgbmVlZCB0d28gc2xv
dHMgZm9yIGV2ZXJ5Cm9wZXJhdGlvbiAodGhlIG9wZXJhdGlvbiBpdHNlbGYg
YW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRoZXJlIGNhbiBiZQpvbmUgc3Vj
aCBwYWlyIG9mIHJlcXVlc3RzIHBlbmRpbmcgcGVyIENQVS4gVG8gZW5zdXJl
IHRoYXQgdW5kZXIgYWxsCm5vcm1hbCBjaXJjdW1zdGFuY2VzIGEgc2xvdCBp
cyBhbHdheXMgYXZhaWxhYmxlIHdoZW4gb25lIGlzIHJlcXVlc3RlZCwKc2l6
ZSB0aGUgcXVldWUgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJlciBvZiBw
cmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
aW9tbXUuaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9t
bXUuaApAQCAtNDUwLDE3ICs0NTAsOSBAQCBzdHJ1Y3QgcWludmFsX2VudHJ5
IHsKICAgICB9cTsKIH07CiAKLS8qIE9yZGVyIG9mIHF1ZXVlIGludmFsaWRh
dGlvbiBwYWdlcyhtYXggaXMgOCkgKi8KLSNkZWZpbmUgUUlOVkFMX1BBR0Vf
T1JERVIgICAyCi0KLSNkZWZpbmUgUUlOVkFMX0FSQ0hfUEFHRV9PUkRFUiAg
KFFJTlZBTF9QQUdFX09SREVSICsgUEFHRV9TSElGVF80SyAtIFBBR0VfU0hJ
RlQpCi0jZGVmaW5lIFFJTlZBTF9BUkNIX1BBR0VfTlIgICAgICggUUlOVkFM
X0FSQ0hfUEFHRV9PUkRFUiA8IDAgPyAgXAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAxIDogICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMSA8PCBRSU5W
QUxfQVJDSF9QQUdFX09SREVSICkKLQogLyogRWFjaCBlbnRyeSBpcyAxNiBi
eXRlcywgc28gMl44IGVudHJpZXMgcGVyIHBhZ2UgKi8KICNkZWZpbmUgUUlO
VkFMX0VOVFJZX09SREVSICAoIFBBR0VfU0hJRlQgLSA0ICkKLSNkZWZpbmUg
UUlOVkFMX0VOVFJZX05SICAgICAoMSA8PCAoUUlOVkFMX1BBR0VfT1JERVIg
KyA4KSkKKyNkZWZpbmUgUUlOVkFMX01BWF9FTlRSWV9OUiAoMXUgPDwgKDcg
KyBRSU5WQUxfRU5UUllfT1JERVIpKQogCiAvKiBTdGF0dXMgZGF0YSBmbGFn
ICovCiAjZGVmaW5lIFFJTlZBTF9TVEFUX0lOSVQgIDAKLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xaW52YWwuYwpAQCAtMzEsNiArMzEsOSBA
QAogCiAjZGVmaW5lIFZURF9RSV9USU1FT1VUCTEKIAorc3RhdGljIHVuc2ln
bmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX3BnX29yZGVyOworc3RhdGljIHVu
c2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHFpX2VudHJ5X25yOworCiBzdGF0
aWMgaW50IF9fbXVzdF9jaGVjayBpbnZhbGlkYXRlX3N5bmMoc3RydWN0IHZ0
ZF9pb21tdSAqaW9tbXUpOwogCiBzdGF0aWMgdm9pZCBwcmludF9xaV9yZWdz
KHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KQpAQCAtNTUsNyArNTgsNyBAQCBz
dGF0aWMgdW5zaWduZWQgaW50IHFpbnZhbF9uZXh0X2luZGV4KHN0CiAgICAg
dGFpbCA+Pj0gUUlOVkFMX0lOREVYX1NISUZUOwogCiAgICAgLyogKHRhaWwr
MSA9PSBoZWFkKSBpbmRpY2F0ZXMgYSBmdWxsIHF1ZXVlLCB3YWl0IGZvciBI
VyAqLwotICAgIHdoaWxlICggKCB0YWlsICsgMSApICUgUUlOVkFMX0VOVFJZ
X05SID09CisgICAgd2hpbGUgKCAoKHRhaWwgKyAxKSAmIChxaV9lbnRyeV9u
ciAtIDEpKSA9PQogICAgICAgICAgICAgKCBkbWFyX3JlYWRxKGlvbW11LT5y
ZWcsIERNQVJfSVFIX1JFRykgPj4gUUlOVkFMX0lOREVYX1NISUZUICkgKQog
ICAgICAgICBjcHVfcmVsYXgoKTsKIApAQCAtNjgsNyArNzEsNyBAQCBzdGF0
aWMgdm9pZCBxaW52YWxfdXBkYXRlX3F0YWlsKHN0cnVjdCB2CiAKICAgICAv
KiBOZWVkIGhvbGQgcmVnaXN0ZXIgbG9jayB3aGVuIHVwZGF0ZSB0YWlsICov
CiAgICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPnJlZ2lzdGVy
X2xvY2spICk7Ci0gICAgdmFsID0gKGluZGV4ICsgMSkgJSBRSU5WQUxfRU5U
UllfTlI7CisgICAgdmFsID0gKGluZGV4ICsgMSkgJiAocWlfZW50cnlfbnIg
LSAxKTsKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFSX0lRVF9S
RUcsICh2YWwgPDwgUUlOVkFMX0lOREVYX1NISUZUKSk7CiB9CiAKQEAgLTQw
Myw4ICs0MDYsMjggQEAgaW50IGVuYWJsZV9xaW52YWwoc3RydWN0IHZ0ZF9p
b21tdSAqaW9tbQogCiAgICAgaWYgKCBpb21tdS0+cWludmFsX21hZGRyID09
IDAgKQogICAgIHsKLSAgICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciA9IGFs
bG9jX3BndGFibGVfbWFkZHIoUUlOVkFMX0FSQ0hfUEFHRV9OUiwKLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW9tbXUtPm5vZGUpOworICAgICAgICBpZiAoICFxaV9lbnRyeV9uciApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgKiBXaXRo
IHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1vZGVsLCB3ZSBuZWVkIHR3byBz
bG90cyBmb3IgZXZlcnkKKyAgICAgICAgICAgICAqIG9wZXJhdGlvbiAodGhl
IG9wZXJhdGlvbiBpdHNlbGYgYW5kIGEgd2FpdCBkZXNjcmlwdG9yKS4gIFRo
ZXJlCisgICAgICAgICAgICAgKiBjYW4gYmUgb25lIHN1Y2ggcGFpciBvZiBy
ZXF1ZXN0cyBwZW5kaW5nIHBlciBDUFUuICBPbmUgZXh0cmEKKyAgICAgICAg
ICAgICAqIGVudHJ5IGlzIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25zaWRl
cmVkIGZ1bGwgd2hlbiB0aGVyZSdzCisgICAgICAgICAgICAgKiBvbmx5IG9u
ZSBlbnRyeSBsZWZ0LgorICAgICAgICAgICAgICovCisgICAgICAgICAgICBC
VUlMRF9CVUdfT04oQ09ORklHX05SX0NQVVMgKiAyID49IFFJTlZBTF9NQVhf
RU5UUllfTlIpOworICAgICAgICAgICAgcWlfcGdfb3JkZXIgPSBnZXRfb3Jk
ZXJfZnJvbV9ieXRlcygobnVtX3ByZXNlbnRfY3B1cygpICogMiArIDEpIDw8
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIChQQUdFX1NISUZUIC0KKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFFJTlZBTF9FTlRSWV9PUkRFUikpOwor
ICAgICAgICAgICAgcWlfZW50cnlfbnIgPSAxdSA8PCAocWlfcGdfb3JkZXIg
KyBRSU5WQUxfRU5UUllfT1JERVIpOworCisgICAgICAgICAgICBkcHJpbnRr
KFhFTkxPR19JTkZPIFZURFBSRUZJWCwKKyAgICAgICAgICAgICAgICAgICAg
IlFJOiB1c2luZyAldS1lbnRyeSByaW5nKHMpXG4iLCBxaV9lbnRyeV9ucik7
CisgICAgICAgIH0KKworICAgICAgICBpb21tdS0+cWludmFsX21hZGRyID0K
KyAgICAgICAgICAgIGFsbG9jX3BndGFibGVfbWFkZHIocWlfZW50cnlfbnIg
Pj4gUUlOVkFMX0VOVFJZX09SREVSLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpb21tdS0+bm9kZSk7CiAgICAgICAgIGlmICggaW9tbXUt
PnFpbnZhbF9tYWRkciA9PSAwICkKICAgICAgICAgewogICAgICAgICAgICAg
ZHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgsCkBAIC00MTgsMTUg
KzQ0MSwxNiBAQCBpbnQgZW5hYmxlX3FpbnZhbChzdHJ1Y3QgdnRkX2lvbW11
ICppb21tCiAKICAgICBzcGluX2xvY2tfaXJxc2F2ZSgmaW9tbXUtPnJlZ2lz
dGVyX2xvY2ssIGZsYWdzKTsKIAotICAgIC8qIFNldHVwIEludmFsaWRhdGlv
biBRdWV1ZSBBZGRyZXNzKElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUKLSAgICAg
KiBhZGRyZXNzIG9mIHRoZSBwYWdlIHdlIGp1c3QgYWxsb2NhdGVkLiAgUVMg
ZmllbGQgYXQKLSAgICAgKiBiaXRzWzI6MF0gdG8gaW5kaWNhdGUgc2l6ZSBv
ZiBxdWV1ZSBpcyBvbmUgNEtCIHBhZ2UuCi0gICAgICogVGhhdCdzIDI1NiBl
bnRyaWVzLiAgUXVldWVkIEhlYWQgKElRSCkgYW5kIFF1ZXVlIFRhaWwgKElR
VCkKLSAgICAgKiByZWdpc3RlcnMgYXJlIGF1dG9tYXRpY2FsbHkgcmVzZXQg
dG8gMCB3aXRoIHdyaXRlCi0gICAgICogdG8gSVFBIHJlZ2lzdGVyLgorICAg
IC8qCisgICAgICogU2V0dXAgSW52YWxpZGF0aW9uIFF1ZXVlIEFkZHJlc3Mg
KElRQSkgcmVnaXN0ZXIgd2l0aCB0aGUgYWRkcmVzcyBvZiB0aGUKKyAgICAg
KiBwYWdlcyB3ZSBqdXN0IGFsbG9jYXRlZC4gIFRoZSBRUyBmaWVsZCBhdCBi
aXRzWzI6MF0gaW5kaWNhdGVzIHRoZSBzaXplCisgICAgICogKHBhZ2Ugb3Jk
ZXIpIG9mIHRoZSBxdWV1ZS4KKyAgICAgKgorICAgICAqIFF1ZXVlZCBIZWFk
IChJUUgpIGFuZCBRdWV1ZSBUYWlsIChJUVQpIHJlZ2lzdGVycyBhcmUgYXV0
b21hdGljYWxseQorICAgICAqIHJlc2V0IHRvIDAgd2l0aCB3cml0ZSB0byBJ
UUEgcmVnaXN0ZXIuCiAgICAgICovCiAgICAgZG1hcl93cml0ZXEoaW9tbXUt
PnJlZywgRE1BUl9JUUFfUkVHLAotICAgICAgICAgICAgICAgIGlvbW11LT5x
aW52YWxfbWFkZHIgfCBRSU5WQUxfUEFHRV9PUkRFUik7CisgICAgICAgICAg
ICAgICAgaW9tbXUtPnFpbnZhbF9tYWRkciB8IHFpX3BnX29yZGVyKTsKIAog
ICAgIGRtYXJfd3JpdGVxKGlvbW11LT5yZWcsIERNQVJfSVFUX1JFRywgMCk7
CiAK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.15-2.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.15-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHNpemUgY29tbWFuZCBidWZmZXIgZHluYW1pY2FsbHkK
CldpdGggdGhlIHByZXNlbnQgc3luY2hyb25vdXMgbW9kZWwsIHdlIG5lZWQg
dHdvIHNsb3RzIGZvciBldmVyeQpvcGVyYXRpb24gKHRoZSBvcGVyYXRpb24g
aXRzZWxmIGFuZCBhIHdhaXQgY29tbWFuZCkuICBUaGVyZSBjYW4gYmUgb25l
CnN1Y2ggcGFpciBvZiBjb21tYW5kcyBwZW5kaW5nIHBlciBDUFUuIFRvIGVu
c3VyZSB0aGF0IHVuZGVyIGFsbCBub3JtYWwKY2lyY3Vtc3RhbmNlcyBhIHNs
b3QgaXMgYWx3YXlzIGF2YWlsYWJsZSB3aGVuIG9uZSBpcyByZXF1ZXN0ZWQs
IHNpemUgdGhlCmNvbW1hbmQgcmluZyBhY2NvcmRpbmcgdG8gdGhlIG51bWJl
ciBvZiBwcmVzZW50IENQVXMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAv
IENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50
IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXUtZGVmcy5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdS1kZWZzLmgKQEAgLTIwLDkgKzIwLDYgQEAKICNpZm5k
ZWYgQU1EX0lPTU1VX0RFRlNfSAogI2RlZmluZSBBTURfSU9NTVVfREVGU19I
CiAKLS8qIElPTU1VIENvbW1hbmQgQnVmZmVyIGVudHJpZXM6IGluIHBvd2Vy
IG9mIDIgaW5jcmVtZW50cywgbWluaW11bSBvZiAyNTYgKi8KLSNkZWZpbmUg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMJNTEyCi0KIC8qIElP
TU1VIEV2ZW50IExvZyBlbnRyaWVzOiBpbiBwb3dlciBvZiAyIGluY3JlbWVu
dHMsIG1pbmltdW0gb2YgMjU2ICovCiAjZGVmaW5lIElPTU1VX0VWRU5UX0xP
R19ERUZBVUxUX0VOVFJJRVMgICAgIDUxMgogCkBAIC0xNjQsOCArMTYxLDgg
QEAgc3RydWN0IGFtZF9pb21tdV9kdGUgewogI2RlZmluZSBJT01NVV9DTURf
QlVGRkVSX0xFTkdUSF9NQVNLCQkweDBGMDAwMDAwCiAjZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfTEVOR1RIX1NISUZUCQkyNAogCi0jZGVmaW5lIElPTU1V
X0NNRF9CVUZGRVJfRU5UUllfU0laRQkJCTE2Ci0jZGVmaW5lIElPTU1VX0NN
RF9CVUZGRVJfUE9XRVJfT0YyX0VOVFJJRVNfUEVSX1BBR0UJOAorI2RlZmlu
ZSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVSICAgICAgICAgICAgNAor
I2RlZmluZSBJT01NVV9DTURfQlVGRkVSX01BWF9FTlRSSUVTICAgICAgICAg
ICAgKDF1IDw8IDE1KQogCiAjZGVmaW5lIElPTU1VX0NNRF9PUENPREVfTUFT
SwkJCTB4RjAwMDAwMDAKICNkZWZpbmUgSU9NTVVfQ01EX09QQ09ERV9TSElG
VAkJCTI4Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21t
dV9jbWQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9t
bXVfY21kLmMKQEAgLTI0LDcgKzI0LDcgQEAgc3RhdGljIGludCBxdWV1ZV9p
b21tdV9jb21tYW5kKHN0cnVjdCBhbQogewogICAgIHVpbnQzMl90IHRhaWws
IGhlYWQ7CiAKLSAgICB0YWlsID0gaW9tbXUtPmNtZF9idWZmZXIudGFpbCAr
IElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRTsKKyAgICB0YWlsID0gaW9t
bXUtPmNtZF9idWZmZXIudGFpbCArIHNpemVvZihjbWRfZW50cnlfdCk7CiAg
ICAgaWYgKCB0YWlsID09IGlvbW11LT5jbWRfYnVmZmVyLnNpemUgKQogICAg
ICAgICB0YWlsID0gMDsKIApAQCAtMzMsNyArMzMsNyBAQCBzdGF0aWMgaW50
IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3RydWN0IGFtCiAgICAgaWYgKCBoZWFk
ICE9IHRhaWwgKQogICAgIHsKICAgICAgICAgbWVtY3B5KGlvbW11LT5jbWRf
YnVmZmVyLmJ1ZmZlciArIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwsCi0gICAg
ICAgICAgICAgICBjbWQsIElPTU1VX0NNRF9CVUZGRVJfRU5UUllfU0laRSk7
CisgICAgICAgICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwog
CiAgICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRhaWwgPSB0YWlsOwogICAg
ICAgICByZXR1cm4gMTsKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gv
YW1kL2lvbW11X2luaXQuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hbWQvaW9tbXVfaW5pdC5jCkBAIC0xMTgsNyArMTE4LDcgQEAgc3RhdGlj
IHZvaWQgcmVnaXN0ZXJfaW9tbXVfY21kX2J1ZmZlcl9pbgogICAgIHdyaXRl
bChlbnRyeSwgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX0NNRF9CVUZGRVJf
QkFTRV9MT1dfT0ZGU0VUKTsKIAogICAgIHBvd2VyX29mMl9lbnRyaWVzID0g
Z2V0X29yZGVyX2Zyb21fYnl0ZXMoaW9tbXUtPmNtZF9idWZmZXIuc2l6ZSkg
KwotICAgICAgICBJT01NVV9DTURfQlVGRkVSX1BPV0VSX09GMl9FTlRSSUVT
X1BFUl9QQUdFOworICAgICAgICBQQUdFX1NISUZUIC0gSU9NTVVfQ01EX0JV
RkZFUl9FTlRSWV9PUkRFUjsKIAogICAgIGVudHJ5ID0gMDsKICAgICBpb21t
dV9zZXRfYWRkcl9oaV90b19yZWcoJmVudHJ5LCBhZGRyX2hpKTsKQEAgLTEw
MTgsOSArMTAxOCwzMSBAQCBzdGF0aWMgdm9pZCAqX19pbml0IGFsbG9jYXRl
X3JpbmdfYnVmZmVyCiBzdGF0aWMgdm9pZCAqIF9faW5pdCBhbGxvY2F0ZV9j
bWRfYnVmZmVyKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQogewogICAgIC8q
IGFsbG9jYXRlICdjb21tYW5kIGJ1ZmZlcicgaW4gcG93ZXIgb2YgMiBpbmNy
ZW1lbnRzIG9mIDRLICovCisgICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3Jl
YWRfbW9zdGx5IG5yX2VudHM7CisKKyAgICBpZiAoICFucl9lbnRzICkKKyAg
ICB7CisgICAgICAgIHVuc2lnbmVkIGludCBvcmRlcjsKKworICAgICAgICAv
KgorICAgICAgICAgKiBXaXRoIHRoZSBwcmVzZW50IHN5bmNocm9ub3VzIG1v
ZGVsLCB3ZSBuZWVkIHR3byBzbG90cyBmb3IgZXZlcnkKKyAgICAgICAgICog
b3BlcmF0aW9uICh0aGUgb3BlcmF0aW9uIGl0c2VsZiBhbmQgYSB3YWl0IGNv
bW1hbmQpLiAgVGhlcmUgY2FuIGJlCisgICAgICAgICAqIG9uZSBzdWNoIHBh
aXIgb2YgcmVxdWVzdHMgcGVuZGluZyBwZXIgQ1BVLiAgT25lIGV4dHJhIGVu
dHJ5IGlzCisgICAgICAgICAqIG5lZWRlZCBhcyB0aGUgcmluZyBpcyBjb25z
aWRlcmVkIGZ1bGwgd2hlbiB0aGVyZSdzIG9ubHkgb25lIGVudHJ5CisgICAg
ICAgICAqIGxlZnQuCisgICAgICAgICAqLworICAgICAgICBCVUlMRF9CVUdf
T04oQ09ORklHX05SX0NQVVMgKiAyID49IElPTU1VX0NNRF9CVUZGRVJfTUFY
X0VOVFJJRVMpOworICAgICAgICBvcmRlciA9IGdldF9vcmRlcl9mcm9tX2J5
dGVzKChudW1fcHJlc2VudF9jcHVzKCkgKiAyICsgMSkgPDwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9DTURfQlVGRkVS
X0VOVFJZX09SREVSKTsKKyAgICAgICAgbnJfZW50cyA9IDF1IDw8IChvcmRl
ciArIFBBR0VfU0hJRlQgLSBJT01NVV9DTURfQlVGRkVSX0VOVFJZX09SREVS
KTsKKworICAgICAgICBBTURfSU9NTVVfREVCVUcoInVzaW5nICV1LWVudHJ5
IGNtZCByaW5nKHMpXG4iLCBucl9lbnRzKTsKKyAgICB9CisKKyAgICBCVUlM
RF9CVUdfT04oc2l6ZW9mKGNtZF9lbnRyeV90KSAhPSAoMXUgPDwgSU9NTVVf
Q01EX0JVRkZFUl9FTlRSWV9PUkRFUikpOworCiAgICAgcmV0dXJuIGFsbG9j
YXRlX3JpbmdfYnVmZmVyKCZpb21tdS0+Y21kX2J1ZmZlciwgc2l6ZW9mKGNt
ZF9lbnRyeV90KSwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
SU9NTVVfQ01EX0JVRkZFUl9ERUZBVUxUX0VOVFJJRVMsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJDb21tYW5kIEJ1ZmZlciIsIGZhbHNl
KTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbnJfZW50cywg
IkNvbW1hbmQgQnVmZmVyIiwgZmFsc2UpOwogfQogCiBzdGF0aWMgdm9pZCAq
IF9faW5pdCBhbGxvY2F0ZV9ldmVudF9sb2coc3RydWN0IGFtZF9pb21tdSAq
aW9tbXUpCg==

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.15-3.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.15-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBWVC1kOiBlbGltaW5hdGUgZmx1c2ggcmVsYXRlZCB0aW1lb3V0cwoKTGVh
dmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24gcGVuZGluZyB3aGVuIGl0
IGFwcGVhcnMgdG8gdGFrZSB0b28KbG9uZyBpcyBwcm9ibGVtYXRpYzogSWYg
ZS5nLiBhIFFJIGNvbW1hbmQgY29tcGxldGVkIGxhdGVyLCB0aGUgd3JpdGUg
dG8KdGhlICJwb2xsIHNsb3QiIG1heSBpbnN0ZWFkIGJlIHVuZGVyc3Rvb2Qg
dG8gc2lnbmFsIGEgc3Vic2VxdWVudGx5CnN0YXJ0ZWQgY29tbWFuZCdzIGNv
bXBsZXRpb24uIEFsc28gb3VyIGFjY291bnRpbmcgb2YgdGhlIHRpbWVvdXQg
cGVyaW9kCndhcyBhY3R1YWxseSB3cm9uZzogV2UgaW5jbHVkZWQgdGhlIHRp
bWUgaXQgdG9vayBmb3IgdGhlIGNvbW1hbmQgdG8KYWN0dWFsbHkgbWFrZSBp
dCB0byB0aGUgZnJvbnQgb2YgdGhlIHF1ZXVlLCB3aGljaCBjb3VsZCBiZSBo
ZWF2aWx5CmFmZmVjdGVkIGJ5IGd1ZXN0cyBvdGhlciB0aGFuIHRoZSBvbmUg
Zm9yIHdoaWNoIHRoZSBmbHVzaCBpcyBiZWluZwpwZXJmb3JtZWQuCgpEbyBh
d2F5IHdpdGggYWxsIHRpbWVvdXQgZGV0ZWN0aW9uIG9uIGFsbCBmbHVzaCBy
ZWxhdGVkIGNvZGUgcGF0aHMuCkxvZyBleGNlc3NpdmVseSBsb25nIHByb2Nl
c3NpbmcgdGltZXMgKHdpdGggYSBwcm9ncmVzc2l2ZSB0aHJlc2hvbGQpIHRv
CmhhdmUgc29tZSBpbmRpY2F0aW9uIG9mIHByb2JsZW1zIGluIHRoaXMgYXJl
YS4KCkFkZGl0aW9uYWxseSBsb2cgKG9uY2UpIGlmIHFpbnZhbF9uZXh0X2lu
ZGV4KCkgZGlkbid0IGltbWVkaWF0ZWx5IGZpbmQKYW4gYXZhaWxhYmxlIHNs
b3QuIFRvZ2V0aGVyIHdpdGggdGhlIGVhcmxpZXIgY2hhbmdlIHNpemluZyB0
aGUgcXVldWUocykKZHluYW1pY2FsbHksIHdlIHNob3VsZCBub3cgaGF2ZSBh
IGd1YXJhbnRlZSB0aGF0IHdpdGggb3VyIGZ1bGx5CnN5bmNocm9ub3VzIG1v
ZGVsIGFueSBkZW1hbmQgZm9yIHNsb3RzIGNhbiBhY3R1YWxseSBiZSBzYXRp
c2ZpZWQuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4
NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5v
cmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZG1hci5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9kbWFyLmgKQEAg
LTEyNyw2ICsxMjcsMzQgQEAgZG8gewogICAgIH0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
IH0gd2hpbGUgKDApCiAKKyNkZWZpbmUgSU9NTVVfRkxVU0hfV0FJVCh3aGF0
LCBpb21tdSwgb2Zmc2V0LCBvcCwgY29uZCwgc3RzKSAgICAgICBcCitkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQg
X19yZWFkX21vc3RseSB0aHJlc2hvbGQgPSAxOyAgICAgICAgICAgICAgIFwK
KyAgICBzX3RpbWVfdCBzdGFydCA9IE5PVygpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc190aW1lX3QgdGltZW91
dCA9IHN0YXJ0ICsgRE1BUl9PUEVSQVRJT05fVElNRU9VVCAqIHRocmVzaG9s
ZDsgXAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBmb3IgKCA7IDsg
KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBz
dHMgPSBvcChpb21tdS0+cmVnLCBvZmZzZXQpOyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBjb25kICkgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgICAgICBicmVhazsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIHRpbWVvdXQgJiYg
Tk9XKCkgPiB0aW1lb3V0ICkgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICB0aHJlc2hv
bGQgfD0gdGhyZXNob2xkIDw8IDE7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HIFZURFBS
RUZJWCAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICAgICAiIElPTU1VIyV1OiAlcyBmbHVzaCB0YWtpbmcgdG9vIGxvbmdcbiIs
ICAgICAgICBcCisgICAgICAgICAgICAgICAgICAgaW9tbXUtPmluZGV4LCB3
aGF0KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAg
ICAgdGltZW91dCA9IDA7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgfSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAg
ICAgIGNwdV9yZWxheCgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXAorICAgIH0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgaWYgKCAhdGltZW91dCAp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcgVlREUFJFRklY
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgICAg
ICIgSU9NTVUjJXU6ICVzIGZsdXNoIHRvb2sgJWx1bXNcbiIsICAgICAgICAg
ICAgICAgICBcCisgICAgICAgICAgICAgICBpb21tdS0+aW5kZXgsIHdoYXQs
IChOT1coKSAtIHN0YXJ0KSAvIDEwMDAwMDAwKTsgICAgXAorfSB3aGlsZSAo
IGZhbHNlICkKKwogaW50IHZ0ZF9od19jaGVjayh2b2lkKTsKIHZvaWQgZGlz
YWJsZV9wbXIoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUpOwogaW50IGlzX2ln
ZF9kcmhkKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7Ci0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0zNzMsOCArMzcz
LDggQEAgc3RhdGljIHZvaWQgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKHN0
cgogICAgIGRtYXJfd3JpdGVsKGlvbW11LT5yZWcsIERNQVJfR0NNRF9SRUcs
IHZhbCB8IERNQV9HQ01EX1dCRik7CiAKICAgICAvKiBNYWtlIHN1cmUgaGFy
ZHdhcmUgY29tcGxldGUgaXQgKi8KLSAgICBJT01NVV9XQUlUX09QKGlvbW11
LCBETUFSX0dTVFNfUkVHLCBkbWFyX3JlYWRsLAotICAgICAgICAgICAgICAg
ICAgISh2YWwgJiBETUFfR1NUU19XQkZTKSwgdmFsKTsKKyAgICBJT01NVV9G
TFVTSF9XQUlUKCJ3cml0ZSBidWZmZXIiLCBpb21tdSwgRE1BUl9HU1RTX1JF
RywgZG1hcl9yZWFkbCwKKyAgICAgICAgICAgICAgICAgICAgICEodmFsICYg
RE1BX0dTVFNfV0JGUyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFy
ZXN0b3JlKCZpb21tdS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogfQpAQCAt
NDIzLDggKzQyMyw4IEBAIGludCB2dGRfZmx1c2hfY29udGV4dF9yZWcoc3Ry
dWN0IHZ0ZF9pb20KICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVnLCBETUFS
X0NDTURfUkVHLCB2YWwpOwogCiAgICAgLyogTWFrZSBzdXJlIGhhcmR3YXJl
IGNvbXBsZXRlIGl0ICovCi0gICAgSU9NTVVfV0FJVF9PUChpb21tdSwgRE1B
Ul9DQ01EX1JFRywgZG1hcl9yZWFkcSwKLSAgICAgICAgICAgICAgICAgICEo
dmFsICYgRE1BX0NDTURfSUNDKSwgdmFsKTsKKyAgICBJT01NVV9GTFVTSF9X
QUlUKCJjb250ZXh0IiwgaW9tbXUsIERNQVJfQ0NNRF9SRUcsIGRtYXJfcmVh
ZHEsCisgICAgICAgICAgICAgICAgICAgICAhKHZhbCAmIERNQV9DQ01EX0lD
QyksIHZhbCk7CiAKICAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21t
dS0+cmVnaXN0ZXJfbG9jaywgZmxhZ3MpOwogICAgIC8qIGZsdXNoIGNvbnRl
eHQgZW50cnkgd2lsbCBpbXBsaWNpdGx5IGZsdXNoIHdyaXRlIGJ1ZmZlciAq
LwpAQCAtNTAxLDggKzUwMSw4IEBAIGludCB2dGRfZmx1c2hfaW90bGJfcmVn
KHN0cnVjdCB2dGRfaW9tbXUKICAgICBkbWFyX3dyaXRlcShpb21tdS0+cmVn
LCB0bGJfb2Zmc2V0ICsgOCwgdmFsKTsKIAogICAgIC8qIE1ha2Ugc3VyZSBo
YXJkd2FyZSBjb21wbGV0ZSBpdCAqLwotICAgIElPTU1VX1dBSVRfT1AoaW9t
bXUsICh0bGJfb2Zmc2V0ICsgOCksIGRtYXJfcmVhZHEsCi0gICAgICAgICAg
ICAgICAgICAhKHZhbCAmIERNQV9UTEJfSVZUKSwgdmFsKTsKKyAgICBJT01N
VV9GTFVTSF9XQUlUKCJpb3RsYiIsIGlvbW11LCAodGxiX29mZnNldCArIDgp
LCBkbWFyX3JlYWRxLAorICAgICAgICAgICAgICAgICAgICAgISh2YWwgJiBE
TUFfVExCX0lWVCksIHZhbCk7CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsKIAogICAgIC8qIGNo
ZWNrIElPVExCIGludmFsaWRhdGlvbiBncmFudWxhcml0eSAqLwotLS0gYS94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvcWludmFsLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3FpbnZhbC5jCkBAIC0yOSw4ICsy
OSw2IEBACiAjaW5jbHVkZSAiZXh0ZXJuLmgiCiAjaW5jbHVkZSAiLi4vYXRz
LmgiCiAKLSNkZWZpbmUgVlREX1FJX1RJTUVPVVQJMQotCiBzdGF0aWMgdW5z
aWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfcGdfb3JkZXI7CiBzdGF0aWMg
dW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkgcWlfZW50cnlfbnI7CiAKQEAg
LTYwLDcgKzU4LDExIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgcWludmFsX25l
eHRfaW5kZXgoc3QKICAgICAvKiAodGFpbCsxID09IGhlYWQpIGluZGljYXRl
cyBhIGZ1bGwgcXVldWUsIHdhaXQgZm9yIEhXICovCiAgICAgd2hpbGUgKCAo
KHRhaWwgKyAxKSAmIChxaV9lbnRyeV9uciAtIDEpKSA9PQogICAgICAgICAg
ICAgKCBkbWFyX3JlYWRxKGlvbW11LT5yZWcsIERNQVJfSVFIX1JFRykgPj4g
UUlOVkFMX0lOREVYX1NISUZUICkgKQorICAgIHsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUiBWVERQUkVGSVggIiBJT01NVSMldTogbm8gUUkg
c2xvdCBhdmFpbGFibGVcbiIsCisgICAgICAgICAgICAgICAgICAgIGlvbW11
LT5pbmRleCk7CiAgICAgICAgIGNwdV9yZWxheCgpOworICAgIH0KIAogICAg
IHJldHVybiB0YWlsOwogfQpAQCAtMTgwLDIzICsxODIsMzIgQEAgc3RhdGlj
IGludCBfX211c3RfY2hlY2sgcXVldWVfaW52YWxpZGF0ZQogICAgIC8qIE5v
dyB3ZSBkb24ndCBzdXBwb3J0IGludGVycnVwdCBtZXRob2QgKi8KICAgICBp
ZiAoIHN3ICkKICAgICB7Ci0gICAgICAgIHNfdGltZV90IHRpbWVvdXQ7Ci0K
LSAgICAgICAgLyogSW4gY2FzZSBhbGwgd2FpdCBkZXNjcmlwdG9yIHdyaXRl
cyB0byBzYW1lIGFkZHIgd2l0aCBzYW1lIGRhdGEgKi8KLSAgICAgICAgdGlt
ZW91dCA9IE5PVygpICsgTUlMTElTRUNTKGZsdXNoX2Rldl9pb3RsYiA/Ci0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpb21tdV9kZXZf
aW90bGJfdGltZW91dCA6IFZURF9RSV9USU1FT1VUKTsKKyAgICAgICAgc3Rh
dGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9sZCA9IDE7
CisgICAgICAgIHNfdGltZV90IHN0YXJ0ID0gTk9XKCk7CisgICAgICAgIHNf
dGltZV90IHRpbWVvdXQgPSBzdGFydCArIChmbHVzaF9kZXZfaW90bGIKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID8gaW9tbXVfZGV2
X2lvdGxiX3RpbWVvdXQKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIDogMTAwKSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwogCiAgICAg
ICAgIHdoaWxlICggQUNDRVNTX09OQ0UoKnRoaXNfcG9sbF9zbG90KSAhPSBR
SU5WQUxfU1RBVF9ET05FICkKICAgICAgICAgewotICAgICAgICAgICAgaWYg
KCBOT1coKSA+IHRpbWVvdXQgKQorICAgICAgICAgICAgaWYgKCB0aW1lb3V0
ICYmIE5PVygpID4gdGltZW91dCApCiAgICAgICAgICAgICB7Ci0gICAgICAg
ICAgICAgICAgcHJpbnRfcWlfcmVncyhpb21tdSk7CisgICAgICAgICAgICAg
ICAgdGhyZXNob2xkIHw9IHRocmVzaG9sZCA8PCAxOwogICAgICAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKLSAgICAgICAg
ICAgICAgICAgICAgICAgIiBRdWV1ZSBpbnZhbGlkYXRlIHdhaXQgZGVzY3Jp
cHRvciB0aW1lZCBvdXRcbiIpOwotICAgICAgICAgICAgICAgIHJldHVybiAt
RVRJTUVET1VUOworICAgICAgICAgICAgICAgICAgICAgICAiIElPTU1VIyV1
OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0YWtpbmcgdG9vIGxvbmdcbiIsCisg
ICAgICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgZmx1c2hfZGV2
X2lvdGxiID8gIiBkZXYiIDogIiIpOworICAgICAgICAgICAgICAgIHByaW50
X3FpX3JlZ3MoaW9tbXUpOworICAgICAgICAgICAgICAgIHRpbWVvdXQgPSAw
OwogICAgICAgICAgICAgfQogICAgICAgICAgICAgY3B1X3JlbGF4KCk7CiAg
ICAgICAgIH0KKworICAgICAgICBpZiAoICF0aW1lb3V0ICkKKyAgICAgICAg
ICAgIHByaW50ayhYRU5MT0dfV0FSTklORyBWVERQUkVGSVgKKyAgICAgICAg
ICAgICAgICAgICAiIElPTU1VIyV1OiBRSSVzIHdhaXQgZGVzY3JpcHRvciB0
b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgICAgIGlvbW11LT5pbmRl
eCwgZmx1c2hfZGV2X2lvdGxiID8gIiBkZXYiIDogIiIsCisgICAgICAgICAg
ICAgICAgICAgKE5PVygpIC0gc3RhcnQpIC8gMTAwMDAwMDApOworCiAgICAg
ICAgIHJldHVybiAwOwogICAgIH0KIAo=

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.15-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.15-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMCw0
MyArMjAsMzAgQEAKICNpbmNsdWRlICJpb21tdS5oIgogI2luY2x1ZGUgIi4u
L2F0cy5oIgogCi1zdGF0aWMgaW50IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsIHUzMiBjbWRbXSkKK3N0YXRpYyB2b2lk
IHNlbmRfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCB1aW50MzJf
dCBjbWRbNF0pCiB7Ci0gICAgdWludDMyX3QgdGFpbCwgaGVhZDsKKyAgICB1
aW50MzJfdCB0YWlsOwogCiAgICAgdGFpbCA9IGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgKyBzaXplb2YoY21kX2VudHJ5X3QpOwogICAgIGlmICggdGFpbCA9
PSBpb21tdS0+Y21kX2J1ZmZlci5zaXplICkKICAgICAgICAgdGFpbCA9IDA7
CiAKLSAgICBoZWFkID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArCi0gICAg
ICAgICAgICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpICYg
SU9NTVVfUklOR19CVUZGRVJfUFRSX01BU0s7Ci0gICAgaWYgKCBoZWFkICE9
IHRhaWwgKQorICAgIHdoaWxlICggdGFpbCA9PSAocmVhZGwoaW9tbXUtPm1t
aW9fYmFzZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9D
TURfQlVGRkVSX0hFQURfT0ZGU0VUKSAmCisgICAgICAgICAgICAgICAgICAg
ICBJT01NVV9SSU5HX0JVRkZFUl9QVFJfTUFTSykgKQogICAgIHsKLSAgICAg
ICAgbWVtY3B5KGlvbW11LT5jbWRfYnVmZmVyLmJ1ZmZlciArIGlvbW11LT5j
bWRfYnVmZmVyLnRhaWwsCi0gICAgICAgICAgICAgICBjbWQsIHNpemVvZihj
bWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRh
aWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUiAiQU1EIElPTU1VICVwcDogbm8gY21kIHNsb3Qg
YXZhaWxhYmxlXG4iLAorICAgICAgICAgICAgICAgICAgICAmUENJX1NCREYy
KGlvbW11LT5zZWcsIGlvbW11LT5iZGYpKTsKKyAgICAgICAgY3B1X3JlbGF4
KCk7CiAgICAgfQogCi0gICAgcmV0dXJuIDA7Ci19Ci0KLXN0YXRpYyB2b2lk
IGNvbW1pdF9pb21tdV9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11
ICppb21tdSkKLXsKLSAgICB3cml0ZWwoaW9tbXUtPmNtZF9idWZmZXIudGFp
bCwKLSAgICAgICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX0NNRF9C
VUZGRVJfVEFJTF9PRkZTRVQpOwotfQorICAgIG1lbWNweShpb21tdS0+Y21k
X2J1ZmZlci5idWZmZXIgKyBpb21tdS0+Y21kX2J1ZmZlci50YWlsLAorICAg
ICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwogCi1zdGF0aWMg
aW50IHNlbmRfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICppb21t
dSwgdTMyIGNtZFtdKQotewotICAgIGlmICggcXVldWVfaW9tbXVfY29tbWFu
ZChpb21tdSwgY21kKSApCi0gICAgewotICAgICAgICBjb21taXRfaW9tbXVf
Y29tbWFuZF9idWZmZXIoaW9tbXUpOwotICAgICAgICByZXR1cm4gMTsKLSAg
ICB9CisgICAgaW9tbXUtPmNtZF9idWZmZXIudGFpbCA9IHRhaWw7CiAKLSAg
ICByZXR1cm4gMDsKKyAgICB3cml0ZWwodGFpbCwgaW9tbXUtPm1taW9fYmFz
ZSArIElPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZTRVQpOwogfQogCiBzdGF0
aWMgdm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11
ICppb21tdSkK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.15-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.15-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQv
aW9tbXVfY21kLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1k
L2lvbW11X2NtZC5jCkBAIC00NiwxMCArNDYsMTIgQEAgc3RhdGljIHZvaWQg
c2VuZF9pb21tdV9jb21tYW5kKHN0cnVjdCBhbQogICAgIHdyaXRlbCh0YWls
LCBpb21tdS0+bW1pb19iYXNlICsgSU9NTVVfQ01EX0JVRkZFUl9UQUlMX09G
RlNFVCk7CiB9CiAKLXN0YXRpYyB2b2lkIGZsdXNoX2NvbW1hbmRfYnVmZmVy
KHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQorc3RhdGljIHZvaWQgZmx1c2hf
Y29tbWFuZF9idWZmZXIoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUsCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgdGlt
ZW91dF9iYXNlKQogewotICAgIHVuc2lnbmVkIGludCBjbWRbNF0sIHN0YXR1
cywgbG9vcF9jb3VudDsKLSAgICBib29sIGNvbXBfd2FpdDsKKyAgICB1aW50
MzJfdCBjbWRbNF07CisgICAgc190aW1lX3Qgc3RhcnQsIHRpbWVvdXQ7Cisg
ICAgc3RhdGljIHVuc2lnbmVkIGludCBfX3JlYWRfbW9zdGx5IHRocmVzaG9s
ZCA9IDE7CiAKICAgICAvKiBSVzFDICdDb21XYWl0SW50JyBpbiBzdGF0dXMg
cmVnaXN0ZXIgKi8KICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJ
VF9JTlQsCkBAIC02NSwyMiArNjcsMjkgQEAgc3RhdGljIHZvaWQgZmx1c2hf
Y29tbWFuZF9idWZmZXIoc3RydWN0CiAgICAgICAgICAgICAgICAgICAgICAg
ICAgSU9NTVVfQ09NUF9XQUlUX0lfRkxBR19TSElGVCwgJmNtZFswXSk7CiAg
ICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11LCBjbWQpOwogCi0gICAgLyog
TWFrZSBsb29wX2NvdW50IGxvbmcgZW5vdWdoIGZvciBwb2xsaW5nIGNvbXBs
ZXRpb24gd2FpdCBiaXQgKi8KLSAgICBsb29wX2NvdW50ID0gMTAwMDsKLSAg
ICBkbyB7Ci0gICAgICAgIHN0YXR1cyA9IHJlYWRsKGlvbW11LT5tbWlvX2Jh
c2UgKyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpOwotICAgICAgICBjb21w
X3dhaXQgPSBzdGF0dXMgJiBJT01NVV9TVEFUVVNfQ09NUF9XQUlUX0lOVDsK
LSAgICAgICAgLS1sb29wX2NvdW50OwotICAgIH0gd2hpbGUgKCAhY29tcF93
YWl0ICYmIGxvb3BfY291bnQgKTsKLQotICAgIGlmICggY29tcF93YWl0ICkK
KyAgICBzdGFydCA9IE5PVygpOworICAgIHRpbWVvdXQgPSBzdGFydCArICh0
aW1lb3V0X2Jhc2UgPzogMTAwKSAqIE1JTExJU0VDUyh0aHJlc2hvbGQpOwor
ICAgIHdoaWxlICggIShyZWFkbChpb21tdS0+bW1pb19iYXNlICsgSU9NTVVf
U1RBVFVTX01NSU9fT0ZGU0VUKSAmCisgICAgICAgICAgICAgIElPTU1VX1NU
QVRVU19DT01QX1dBSVRfSU5UKSApCiAgICAgewotICAgICAgICAvKiBSVzFD
ICdDb21XYWl0SW50JyBpbiBzdGF0dXMgcmVnaXN0ZXIgKi8KLSAgICAgICAg
d3JpdGVsKElPTU1VX1NUQVRVU19DT01QX1dBSVRfSU5ULAotICAgICAgICAg
ICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX1NUQVRVU19NTUlPX09G
RlNFVCk7Ci0gICAgICAgIHJldHVybjsKKyAgICAgICAgaWYgKCB0aW1lb3V0
ICYmIE5PVygpID4gdGltZW91dCApCisgICAgICAgIHsKKyAgICAgICAgICAg
IHRocmVzaG9sZCB8PSB0aHJlc2hvbGQgPDwgMTsKKyAgICAgICAgICAgIHBy
aW50ayhYRU5MT0dfV0FSTklORworICAgICAgICAgICAgICAgICAgICJBTUQg
SU9NTVUgJXBwOiAlc2NvbXBsZXRpb24gd2FpdCB0YWtpbmcgdG9vIGxvbmdc
biIsCisgICAgICAgICAgICAgICAgICAgJlBDSV9TQkRGMihpb21tdS0+c2Vn
LCBpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgICAgICB0aW1lb3V0X2Jh
c2UgPyAiaW90bGIgIiA6ICIiKTsKKyAgICAgICAgICAgIHRpbWVvdXQgPSAw
OworICAgICAgICB9CisgICAgICAgIGNwdV9yZWxheCgpOwogICAgIH0KLSAg
ICBBTURfSU9NTVVfREVCVUcoIldhcm5pbmc6IENvbVdhaXRJbnQgYml0IGRp
ZCBub3QgYXNzZXJ0IVxuIik7CisKKyAgICBpZiAoICF0aW1lb3V0ICkKKyAg
ICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HCisgICAgICAgICAgICAgICAi
QU1EIElPTU1VICVwcDogJXNjb21wbGV0aW9uIHdhaXQgdG9vayAlbHVtc1xu
IiwKKyAgICAgICAgICAgICAgICZQQ0lfU0JERjIoaW9tbXUtPnNlZywgaW9t
bXUtPmJkZiksCisgICAgICAgICAgICAgICB0aW1lb3V0X2Jhc2UgPyAiaW90
bGIgIiA6ICIiLAorICAgICAgICAgICAgICAgKE5PVygpIC0gc3RhcnQpIC8g
MTAwMDAwMDApOwogfQogCiAvKiBCdWlsZCBsb3cgbGV2ZWwgaW9tbXUgY29t
bWFuZCBtZXNzYWdlcyAqLwpAQCAtMjkxLDcgKzMwMCw3IEBAIHZvaWQgYW1k
X2lvbW11X2ZsdXNoX2lvdGxiKHU4IGRldmZuLCBjb24KICAgICAvKiBzZW5k
IElOVkFMSURBVEVfSU9UTEJfUEFHRVMgY29tbWFuZCAqLwogICAgIHNwaW5f
bG9ja19pcnFzYXZlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwogICAgIGludmFs
aWRhdGVfaW90bGJfcGFnZXMoaW9tbXUsIG1heHBlbmQsIDAsIHF1ZXVlaWQs
IGRhZGRyLCByZXFfaWQsIG9yZGVyKTsKLSAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSk7CisgICAgZmx1c2hfY29tbWFuZF9idWZmZXIoaW9tbXUs
IGlvbW11X2Rldl9pb3RsYl90aW1lb3V0KTsKICAgICBzcGluX3VubG9ja19p
cnFyZXN0b3JlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwogfQogCkBAIC0zMjgs
NyArMzM3LDcgQEAgc3RhdGljIHZvaWQgX2FtZF9pb21tdV9mbHVzaF9wYWdl
cyhzdHJ1YwogICAgIHsKICAgICAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlv
bW11LT5sb2NrLCBmbGFncyk7CiAgICAgICAgIGludmFsaWRhdGVfaW9tbXVf
cGFnZXMoaW9tbXUsIGRhZGRyLCBkb21faWQsIG9yZGVyKTsKLSAgICAgICAg
Zmx1c2hfY29tbWFuZF9idWZmZXIoaW9tbXUpOworICAgICAgICBmbHVzaF9j
b21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiAgICAgICAgIHNwaW5fdW5sb2Nr
X2lycXJlc3RvcmUoJmlvbW11LT5sb2NrLCBmbGFncyk7CiAgICAgfQogCkBA
IC0zNTIsNyArMzYxLDcgQEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfZGV2aWNl
KHN0cnVjdCBhbWRfaQogICAgIEFTU0VSVCggc3Bpbl9pc19sb2NrZWQoJmlv
bW11LT5sb2NrKSApOwogCiAgICAgaW52YWxpZGF0ZV9kZXZfdGFibGVfZW50
cnkoaW9tbXUsIGJkZik7Ci0gICAgZmx1c2hfY29tbWFuZF9idWZmZXIoaW9t
bXUpOworICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11LCAwKTsKIH0K
IAogdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW50cmVtYXAoc3RydWN0IGFtZF9p
b21tdSAqaW9tbXUsIHVpbnQxNl90IGJkZikKQEAgLTM2MCw3ICszNjksNyBA
QCB2b2lkIGFtZF9pb21tdV9mbHVzaF9pbnRyZW1hcChzdHJ1Y3QgYW1kCiAg
ICAgQVNTRVJUKCBzcGluX2lzX2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAK
ICAgICBpbnZhbGlkYXRlX2ludGVycnVwdF90YWJsZShpb21tdSwgYmRmKTsK
LSAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hf
Y29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21t
dV9mbHVzaF9hbGxfY2FjaGVzKHN0cnVjdCBhbWRfaW9tbXUgKmlvbW11KQpA
QCAtMzY4LDcgKzM3Nyw3IEBAIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9j
YWNoZXMoc3RydWN0IGEKICAgICBBU1NFUlQoIHNwaW5faXNfbG9ja2VkKCZp
b21tdS0+bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVfaW9tbXVfYWxsKGlv
bW11KTsKLSAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAg
Zmx1c2hfY29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogfQogCiB2b2lkIGFt
ZF9pb21tdV9zZW5kX2d1ZXN0X2NtZChzdHJ1Y3QgYW1kX2lvbW11ICppb21t
dSwgdTMyIGNtZFtdKQpAQCAtMzc4LDcgKzM4Nyw4IEBAIHZvaWQgYW1kX2lv
bW11X3NlbmRfZ3Vlc3RfY21kKHN0cnVjdCBhbWQKICAgICBzcGluX2xvY2tf
aXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsKIAogICAgIHNlbmRfaW9t
bXVfY29tbWFuZChpb21tdSwgY21kKTsKLSAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSk7CisgICAgLyogVEJEOiBUaW1lb3V0IHNlbGVjdGlvbiBt
YXkgcmVxdWlyZSBwZWVraW5nIGludG8gY21kW10uICovCisgICAgZmx1c2hf
Y29tbWFuZF9idWZmZXIoaW9tbXUsIDApOwogCiAgICAgc3Bpbl91bmxvY2tf
aXJxcmVzdG9yZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsKIH0K

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-4.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHdhaXQgZm9yIGNvbW1hbmQgc2xvdCB0byBiZSBhdmFp
bGFibGUKCk5vIGNhbGxlciBjYXJlZCBhYm91dCBzZW5kX2lvbW11X2NvbW1h
bmQoKSBpbmRpY2F0aW5nIHVuYXZhaWxhYmlsaXR5IG9mCmEgc2xvdC4gSGVu
Y2UgaWYgYSBzdWZmaWNpZW50IG51bWJlciBwcmlvciBjb21tYW5kcyB0aW1l
ZCBvdXQsIHdlIGRpZApibGluZGx5IGFzc3VtZSB0aGF0IHRoZSByZXF1ZXN0
ZWQgY29tbWFuZCB3YXMgc3VibWl0dGVkIHRvIHRoZSBJT01NVQp3aGVuIHJl
YWxseSBpdCB3YXNuJ3QuIFRoaXMgY291bGQgbWVhbiBib3RoIGEgaGFuZ2lu
ZyBzeXN0ZW0gKHdhaXRpbmcKZm9yIGEgY29tbWFuZCB0byBjb21wbGV0ZSB0
aGF0IHdhcyBuZXZlciBzZWVuIGJ5IHRoZSBJT01NVSkgb3IgYmxpbmRseQpw
cm9wYWdhdGluZyBzdWNjZXNzIGJhY2sgdG8gY2FsbGVycywgbWFraW5nIHRo
ZW0gYmVsaWV2ZSB0aGV5J3JlIGZpbmUKdG8gZS5nLiBmcmVlIHByZXZpb3Vz
bHkgdW5tYXBwZWQgcGFnZXMuCgpGb2xkIHRoZSB0aHJlZSBpbnZvbHZlZCBm
dW5jdGlvbnMgaW50byBvbmUsIGFkZCBzcGluIHdhaXRpbmcgZm9yIGFuCmF2
YWlsYWJsZSBzbG90IGFsb25nIHRoZSBsaW5lcyBvZiBWVC1kJ3MgcWludmFs
X25leHRfaW5kZXgoKSwgYW5kIGFzIGEKY29uc2VxdWVuY2UgZHJvcCBhbGwg
ZXJyb3IgaW5kaWNhdG9yIHJldHVybiB0eXBlcy92YWx1ZXMuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM3MyAvIENWRS0yMDIxLTI4NjkyLgoKU2lnbmVkLW9m
Zi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdl
ZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CgotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKKysrIGIveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2NtZC5jCkBAIC0yMCw0
MyArMjAsMzAgQEAKICNpbmNsdWRlICJpb21tdS5oIgogI2luY2x1ZGUgIi4u
L2F0cy5oIgogCi1zdGF0aWMgaW50IHF1ZXVlX2lvbW11X2NvbW1hbmQoc3Ry
dWN0IGFtZF9pb21tdSAqaW9tbXUsIHUzMiBjbWRbXSkKK3N0YXRpYyB2b2lk
IHNlbmRfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICppb21tdSwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCB1aW50MzJf
dCBjbWRbNF0pCiB7Ci0gICAgdWludDMyX3QgdGFpbCwgaGVhZDsKKyAgICB1
aW50MzJfdCB0YWlsOwogCiAgICAgdGFpbCA9IGlvbW11LT5jbWRfYnVmZmVy
LnRhaWwgKyBzaXplb2YoY21kX2VudHJ5X3QpOwogICAgIGlmICggdGFpbCA9
PSBpb21tdS0+Y21kX2J1ZmZlci5zaXplICkKICAgICAgICAgdGFpbCA9IDA7
CiAKLSAgICBoZWFkID0gcmVhZGwoaW9tbXUtPm1taW9fYmFzZSArCi0gICAg
ICAgICAgICAgICAgIElPTU1VX0NNRF9CVUZGRVJfSEVBRF9PRkZTRVQpICYg
SU9NTVVfUklOR19CVUZGRVJfUFRSX01BU0s7Ci0gICAgaWYgKCBoZWFkICE9
IHRhaWwgKQorICAgIHdoaWxlICggdGFpbCA9PSAocmVhZGwoaW9tbXUtPm1t
aW9fYmFzZSArCisgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9D
TURfQlVGRkVSX0hFQURfT0ZGU0VUKSAmCisgICAgICAgICAgICAgICAgICAg
ICBJT01NVV9SSU5HX0JVRkZFUl9QVFJfTUFTSykgKQogICAgIHsKLSAgICAg
ICAgbWVtY3B5KGlvbW11LT5jbWRfYnVmZmVyLmJ1ZmZlciArIGlvbW11LT5j
bWRfYnVmZmVyLnRhaWwsCi0gICAgICAgICAgICAgICBjbWQsIHNpemVvZihj
bWRfZW50cnlfdCkpOwotCi0gICAgICAgIGlvbW11LT5jbWRfYnVmZmVyLnRh
aWwgPSB0YWlsOwotICAgICAgICByZXR1cm4gMTsKKyAgICAgICAgcHJpbnRr
X29uY2UoWEVOTE9HX0VSUiAiQU1EIElPTU1VICVwcDogbm8gY21kIHNsb3Qg
YXZhaWxhYmxlXG4iLAorICAgICAgICAgICAgICAgICAgICAmUENJX1NCREYy
KGlvbW11LT5zZWcsIGlvbW11LT5iZGYpKTsKKyAgICAgICAgY3B1X3JlbGF4
KCk7CiAgICAgfQogCi0gICAgcmV0dXJuIDA7Ci19Ci0KLXN0YXRpYyB2b2lk
IGNvbW1pdF9pb21tdV9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11
ICppb21tdSkKLXsKLSAgICB3cml0ZWwoaW9tbXUtPmNtZF9idWZmZXIudGFp
bCwKLSAgICAgICAgICAgaW9tbXUtPm1taW9fYmFzZSArIElPTU1VX0NNRF9C
VUZGRVJfVEFJTF9PRkZTRVQpOwotfQorICAgIG1lbWNweShpb21tdS0+Y21k
X2J1ZmZlci5idWZmZXIgKyBpb21tdS0+Y21kX2J1ZmZlci50YWlsLAorICAg
ICAgICAgICBjbWQsIHNpemVvZihjbWRfZW50cnlfdCkpOwogCi1zdGF0aWMg
aW50IHNlbmRfaW9tbXVfY29tbWFuZChzdHJ1Y3QgYW1kX2lvbW11ICppb21t
dSwgdTMyIGNtZFtdKQotewotICAgIGlmICggcXVldWVfaW9tbXVfY29tbWFu
ZChpb21tdSwgY21kKSApCi0gICAgewotICAgICAgICBjb21taXRfaW9tbXVf
Y29tbWFuZF9idWZmZXIoaW9tbXUpOwotICAgICAgICByZXR1cm4gMTsKLSAg
ICB9CisgICAgaW9tbXUtPmNtZF9idWZmZXIudGFpbCA9IHRhaWw7CiAKLSAg
ICByZXR1cm4gMDsKKyAgICB3cml0ZWwodGFpbCwgaW9tbXUtPm1taW9fYmFz
ZSArIElPTU1VX0NNRF9CVUZGRVJfVEFJTF9PRkZTRVQpOwogfQogCiBzdGF0
aWMgdm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11
ICppb21tdSkK

--=separator
Content-Type: application/octet-stream; name="xsa373/xsa373-5.patch"
Content-Disposition: attachment; filename="xsa373/xsa373-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGRyb3AgY29tbWFuZCBjb21wbGV0aW9uIHRpbWVvdXQK
CkZpcnN0IGFuZCBmb3JlbW9zdCAtIHN1Y2ggdGltZW91dHMgd2VyZSBub3Qg
c2lnbmFsZWQgdG8gY2FsbGVycywgbWFraW5nCnRoZW0gYmVsaWV2ZSB0aGV5
J3JlIGZpbmUgdG8gZS5nLiBmcmVlIHByZXZpb3VzbHkgdW5tYXBwZWQgcGFn
ZXMuCgpNaXJyb3IgVlQtZCdzIGJlaGF2aW9yOiBBIGZpeGVkIG51bWJlciBv
ZiBsb29wIGl0ZXJhdGlvbnMgaXMgbm90IGEKc3VpdGFibGUgd2F5IHRvIGRl
dGVjdCB0aW1lb3V0cyBpbiBhbiBlbnZpcm9ubWVudCAoQ1BVIGFuZCBidXMg
c3BlZWRzKQppbmRlcGVuZGVudCBtYW5uZXIgYW55d2F5LiBGdXJ0aGVybW9y
ZSwgbGVhdmluZyBhbiBpbi1wcm9ncmVzcyBvcGVyYXRpb24KcGVuZGluZyB3
aGVuIGl0IGFwcGVhcnMgdG8gdGFrZSB0b28gbG9uZyBpcyBwcm9ibGVtYXRp
YzogSWYgYSBjb21tYW5kCmNvbXBsZXRlZCBsYXRlciwgdGhlIHNpZ25hbGlu
ZyBvZiBpdHMgY29tcGxldGlvbiBtYXkgaW5zdGVhZCBiZQp1bmRlcnN0b29k
IHRvIHNpZ25hbCBhIHN1YnNlcXVlbnRseSBzdGFydGVkIGNvbW1hbmQncyBj
b21wbGV0aW9uLgoKTG9nIGV4Y2Vzc2l2ZWx5IGxvbmcgcHJvY2Vzc2luZyB0
aW1lcyAod2l0aCBhIHByb2dyZXNzaXZlIHRocmVzaG9sZCkgdG8KaGF2ZSBz
b21lIGluZGljYXRpb24gb2YgcHJvYmxlbXMgaW4gdGhpcyBhcmVhLiBBbGxv
dyBjYWxsZXJzIHRvIHNwZWNpZnkKYSBub24tZGVmYXVsdCB0aW1lb3V0IGJp
YXMgZm9yIHRoaXMgbG9nZ2luZywgdXNpbmcgdGhlIHNhbWUgdmFsdWVzIGFz
ClZULWQgZG9lcywgd2hpY2ggaW4gcGFydGljdWxhciBtZWFucyBhIChieSBk
ZWZhdWx0KSBtdWNoIGxhcmdlciB2YWx1ZQpmb3IgZGV2aWNlIElPIFRMQiBp
bnZhbGlkYXRpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM3MyAvIENWRS0y
MDIxLTI4NjkyLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+Ci0tLQp2MjogQXZvaWQgbGVhZGluZyBibGFua3MgaW4gbG9n
IG1lc3NhZ2VzLgotLS0KVEJEOiBBcyBsb25nIGFzIHRoZSBzcGlubmluZyBo
YXBwZW5zIHVuZGVyIGxvY2ssIDEwMG1zIG1heSBiZSB0b28gbGFyZ2UKICAg
ICBhIGdyYW51bGFyaXR5LiBCdXQgSSd2ZSBsZWZ0IGl0IHNpbWlsYXIgdG8g
VlQtZCBhcyBzdWJzZXF1ZW50bHkgSSdtCiAgICAgaW50ZW5kaW5nIHRvIHJl
LXdvcmsgdGhlIGxvY2tpbmcgc3VjaCB0aGF0IHRoZSBzcGlubmluZyB3aWxs
IHN0YXJ0CiAgICAgb25seSBhZnRlciBkcm9wcGluZyB0aGUgbG9jaywgbGlr
ZSBWVC1kIFFJIGFsc28gZG9lcy4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0
aHJvdWdoL2FtZC9pb21tdV9jbWQuYworKysgYi94ZW4vZHJpdmVycy9wYXNz
dGhyb3VnaC9hbWQvaW9tbXVfY21kLmMKQEAgLTQ2LDEwICs0NiwxMiBAQCBz
dGF0aWMgdm9pZCBzZW5kX2lvbW11X2NvbW1hbmQoc3RydWN0IGFtCiAgICAg
d3JpdGVsKHRhaWwsIGlvbW11LT5tbWlvX2Jhc2UgKyBJT01NVV9DTURfQlVG
RkVSX1RBSUxfT0ZGU0VUKTsKIH0KIAotc3RhdGljIHZvaWQgZmx1c2hfY29t
bWFuZF9idWZmZXIoc3RydWN0IGFtZF9pb21tdSAqaW9tbXUpCitzdGF0aWMg
dm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QgYW1kX2lvbW11ICpp
b21tdSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2ln
bmVkIGludCB0aW1lb3V0X2Jhc2UpCiB7Ci0gICAgdW5zaWduZWQgaW50IGNt
ZFs0XSwgc3RhdHVzLCBsb29wX2NvdW50OwotICAgIGJvb2wgY29tcF93YWl0
OworICAgIHVpbnQzMl90IGNtZFs0XTsKKyAgICBzX3RpbWVfdCBzdGFydCwg
dGltZW91dDsKKyAgICBzdGF0aWMgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0
bHkgdGhyZXNob2xkID0gMTsKIAogICAgIC8qIFJXMUMgJ0NvbVdhaXRJbnQn
IGluIHN0YXR1cyByZWdpc3RlciAqLwogICAgIHdyaXRlbChJT01NVV9TVEFU
VVNfQ09NUF9XQUlUX0lOVCwKQEAgLTY1LDIyICs2NywyOSBAQCBzdGF0aWMg
dm9pZCBmbHVzaF9jb21tYW5kX2J1ZmZlcihzdHJ1Y3QKICAgICAgICAgICAg
ICAgICAgICAgICAgICBJT01NVV9DT01QX1dBSVRfSV9GTEFHX1NISUZULCAm
Y21kWzBdKTsKICAgICBzZW5kX2lvbW11X2NvbW1hbmQoaW9tbXUsIGNtZCk7
CiAKLSAgICAvKiBNYWtlIGxvb3BfY291bnQgbG9uZyBlbm91Z2ggZm9yIHBv
bGxpbmcgY29tcGxldGlvbiB3YWl0IGJpdCAqLwotICAgIGxvb3BfY291bnQg
PSAxMDAwOwotICAgIGRvIHsKLSAgICAgICAgc3RhdHVzID0gcmVhZGwoaW9t
bXUtPm1taW9fYmFzZSArIElPTU1VX1NUQVRVU19NTUlPX09GRlNFVCk7Ci0g
ICAgICAgIGNvbXBfd2FpdCA9IHN0YXR1cyAmIElPTU1VX1NUQVRVU19DT01Q
X1dBSVRfSU5UOwotICAgICAgICAtLWxvb3BfY291bnQ7Ci0gICAgfSB3aGls
ZSAoICFjb21wX3dhaXQgJiYgbG9vcF9jb3VudCApOwotCi0gICAgaWYgKCBj
b21wX3dhaXQgKQorICAgIHN0YXJ0ID0gTk9XKCk7CisgICAgdGltZW91dCA9
IHN0YXJ0ICsgKHRpbWVvdXRfYmFzZSA/OiAxMDApICogTUlMTElTRUNTKHRo
cmVzaG9sZCk7CisgICAgd2hpbGUgKCAhKHJlYWRsKGlvbW11LT5tbWlvX2Jh
c2UgKyBJT01NVV9TVEFUVVNfTU1JT19PRkZTRVQpICYKKyAgICAgICAgICAg
ICAgSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlQpICkKICAgICB7Ci0gICAg
ICAgIC8qIFJXMUMgJ0NvbVdhaXRJbnQnIGluIHN0YXR1cyByZWdpc3RlciAq
LwotICAgICAgICB3cml0ZWwoSU9NTVVfU1RBVFVTX0NPTVBfV0FJVF9JTlQs
Ci0gICAgICAgICAgICAgICBpb21tdS0+bW1pb19iYXNlICsgSU9NTVVfU1RB
VFVTX01NSU9fT0ZGU0VUKTsKLSAgICAgICAgcmV0dXJuOworICAgICAgICBp
ZiAoIHRpbWVvdXQgJiYgTk9XKCkgPiB0aW1lb3V0ICkKKyAgICAgICAgewor
ICAgICAgICAgICAgdGhyZXNob2xkIHw9IHRocmVzaG9sZCA8PCAxOworICAg
ICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HCisgICAgICAgICAgICAg
ICAgICAgIkFNRCBJT01NVSAlcHA6ICVzY29tcGxldGlvbiB3YWl0IHRha2lu
ZyB0b28gbG9uZ1xuIiwKKyAgICAgICAgICAgICAgICAgICAmUENJX1NCREYy
KGlvbW11LT5zZWcsIGlvbW11LT5iZGYpLAorICAgICAgICAgICAgICAgICAg
IHRpbWVvdXRfYmFzZSA/ICJpb3RsYiAiIDogIiIpOworICAgICAgICAgICAg
dGltZW91dCA9IDA7CisgICAgICAgIH0KKyAgICAgICAgY3B1X3JlbGF4KCk7
CiAgICAgfQotICAgIEFNRF9JT01NVV9ERUJVRygiV2FybmluZzogQ29tV2Fp
dEludCBiaXQgZGlkIG5vdCBhc3NlcnQhXG4iKTsKKworICAgIGlmICggIXRp
bWVvdXQgKQorICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcKKyAgICAg
ICAgICAgICAgICJBTUQgSU9NTVUgJXBwOiAlc2NvbXBsZXRpb24gd2FpdCB0
b29rICVsdW1zXG4iLAorICAgICAgICAgICAgICAgJlBDSV9TQkRGMihpb21t
dS0+c2VnLCBpb21tdS0+YmRmKSwKKyAgICAgICAgICAgICAgIHRpbWVvdXRf
YmFzZSA/ICJpb3RsYiAiIDogIiIsCisgICAgICAgICAgICAgICAoTk9XKCkg
LSBzdGFydCkgLyAxMDAwMDAwMCk7CiB9CiAKIC8qIEJ1aWxkIGxvdyBsZXZl
bCBpb21tdSBjb21tYW5kIG1lc3NhZ2VzICovCkBAIC0yOTEsNyArMzAwLDcg
QEAgdm9pZCBhbWRfaW9tbXVfZmx1c2hfaW90bGIodTggZGV2Zm4sIGNvbgog
ICAgIC8qIHNlbmQgSU5WQUxJREFURV9JT1RMQl9QQUdFUyBjb21tYW5kICov
CiAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmlvbW11LT5sb2NrLCBmbGFncyk7
CiAgICAgaW52YWxpZGF0ZV9pb3RsYl9wYWdlcyhpb21tdSwgbWF4cGVuZCwg
MCwgcXVldWVpZCwgZGFkZHIsIHJlcV9pZCwgb3JkZXIpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1
ZmZlcihpb21tdSwgaW9tbXVfZGV2X2lvdGxiX3RpbWVvdXQpOwogICAgIHNw
aW5fdW5sb2NrX2lycXJlc3RvcmUoJmlvbW11LT5sb2NrLCBmbGFncyk7CiB9
CiAKQEAgLTMyOCw3ICszMzcsNyBAQCBzdGF0aWMgdm9pZCBfYW1kX2lvbW11
X2ZsdXNoX3BhZ2VzKHN0cnVjCiAgICAgewogICAgICAgICBzcGluX2xvY2tf
aXJxc2F2ZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsKICAgICAgICAgaW52YWxp
ZGF0ZV9pb21tdV9wYWdlcyhpb21tdSwgZGFkZHIsIGRvbV9pZCwgb3JkZXIp
OwotICAgICAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSk7CisgICAg
ICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11LCAwKTsKICAgICAgICAg
c3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9tbXUtPmxvY2ssIGZsYWdzKTsK
ICAgICB9CiAKQEAgLTM1Miw3ICszNjEsNyBAQCB2b2lkIGFtZF9pb21tdV9m
bHVzaF9kZXZpY2Uoc3RydWN0IGFtZF9pCiAgICAgQVNTRVJUKCBzcGluX2lz
X2xvY2tlZCgmaW9tbXUtPmxvY2spICk7CiAKICAgICBpbnZhbGlkYXRlX2Rl
dl90YWJsZV9lbnRyeShpb21tdSwgYmRmKTsKLSAgICBmbHVzaF9jb21tYW5k
X2J1ZmZlcihpb21tdSk7CisgICAgZmx1c2hfY29tbWFuZF9idWZmZXIoaW9t
bXUsIDApOwogfQogCiB2b2lkIGFtZF9pb21tdV9mbHVzaF9pbnRyZW1hcChz
dHJ1Y3QgYW1kX2lvbW11ICppb21tdSwgdWludDE2X3QgYmRmKQpAQCAtMzYw
LDcgKzM2OSw3IEBAIHZvaWQgYW1kX2lvbW11X2ZsdXNoX2ludHJlbWFwKHN0
cnVjdCBhbWQKICAgICBBU1NFUlQoIHNwaW5faXNfbG9ja2VkKCZpb21tdS0+
bG9jaykgKTsKIAogICAgIGludmFsaWRhdGVfaW50ZXJydXB0X3RhYmxlKGlv
bW11LCBiZGYpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlvbW11KTsK
KyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiB9CiAKIHZv
aWQgYW1kX2lvbW11X2ZsdXNoX2FsbF9jYWNoZXMoc3RydWN0IGFtZF9pb21t
dSAqaW9tbXUpCkBAIC0zNjgsNyArMzc3LDcgQEAgdm9pZCBhbWRfaW9tbXVf
Zmx1c2hfYWxsX2NhY2hlcyhzdHJ1Y3QgYQogICAgIEFTU0VSVCggc3Bpbl9p
c19sb2NrZWQoJmlvbW11LT5sb2NrKSApOwogCiAgICAgaW52YWxpZGF0ZV9p
b21tdV9hbGwoaW9tbXUpOwotICAgIGZsdXNoX2NvbW1hbmRfYnVmZmVyKGlv
bW11KTsKKyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiB9
CiAKIHZvaWQgYW1kX2lvbW11X3NlbmRfZ3Vlc3RfY21kKHN0cnVjdCBhbWRf
aW9tbXUgKmlvbW11LCB1MzIgY21kW10pCkBAIC0zNzgsNyArMzg3LDggQEAg
dm9pZCBhbWRfaW9tbXVfc2VuZF9ndWVzdF9jbWQoc3RydWN0IGFtZAogICAg
IHNwaW5fbG9ja19pcnFzYXZlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwogCiAg
ICAgc2VuZF9pb21tdV9jb21tYW5kKGlvbW11LCBjbWQpOwotICAgIGZsdXNo
X2NvbW1hbmRfYnVmZmVyKGlvbW11KTsKKyAgICAvKiBUQkQ6IFRpbWVvdXQg
c2VsZWN0aW9uIG1heSByZXF1aXJlIHBlZWtpbmcgaW50byBjbWRbXS4gKi8K
KyAgICBmbHVzaF9jb21tYW5kX2J1ZmZlcihpb21tdSwgMCk7CiAKICAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZpb21tdS0+bG9jaywgZmxhZ3MpOwog
fQo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 17:32:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 17:32:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138851.256822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfaM-000072-Eo; Tue, 08 Jun 2021 17:32:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138851.256822; Tue, 08 Jun 2021 17:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqfaM-00006v-Bo; Tue, 08 Jun 2021 17:32:18 +0000
Received: by outflank-mailman (input) for mailman id 138851;
 Tue, 08 Jun 2021 17:32:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2lsH=LC=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lqfaL-00006p-RP
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 17:32:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 625e96c5-5f94-478e-a662-00c2d987c32a;
 Tue, 08 Jun 2021 17:32:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id 951F66108E;
 Tue,  8 Jun 2021 17:32:16 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id 84738609E4;
 Tue,  8 Jun 2021 17:32:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 625e96c5-5f94-478e-a662-00c2d987c32a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623173536;
	bh=S1YrODTluW3llQrhVD7txCHxoXFiqcDLpH0NRLN4lUM=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=recBOmloRnskA0nKPOr9tZstnpmTjL2V7yvZQ9GOOqCUJg1zVDuIso8Jwneu1x1J8
	 xEQxJPcVHDURA8RVvJosNkT4qsWkrSpGPhSdTtptRdIGlnUw3M/8tEhLm2KlfZvLp6
	 +4H+zmIGMhUWCTg480z9n5vWnT+DtGZ34W14JiIY2KEprvdmY9m8LMxlPBTv0Cq22E
	 coHjU/KGx80IwMJn2cFxELsEc981aps9x7cTVaWLQI2+NlP9U/BBcj4tpC3kEuBbRT
	 ZaGnBa4yBhR7inJmQYk7SDkK35PRwa3Sw0XkY9tRmmO9EKKi4xPCuFSLqB1lkvZDOn
	 fmvXJOtpnoZSg==
Subject: Re: [GIT PULL] xen: branch for v5.13-rc6
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210608170253.13602-1-jgross@suse.com>
References: <20210608170253.13602-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20210608170253.13602-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc6-tag
X-PR-Tracked-Commit-Id: 107866a8eb0b664675a260f1ba0655010fac1e08
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 368094df48e680fa51cedb68537408cfa64b788e
Message-Id: <162317353647.21484.13422862216864483190.pr-tracker-bot@kernel.org>
Date: Tue, 08 Jun 2021 17:32:16 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Tue,  8 Jun 2021 19:02:53 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc6-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/368094df48e680fa51cedb68537408cfa64b788e

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 18:09:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 18:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138876.256887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqg9k-0004SD-Uz; Tue, 08 Jun 2021 18:08:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138876.256887; Tue, 08 Jun 2021 18:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqg9k-0004S6-R9; Tue, 08 Jun 2021 18:08:52 +0000
Received: by outflank-mailman (input) for mailman id 138876;
 Tue, 08 Jun 2021 18:08:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqg9j-0004Rw-8E; Tue, 08 Jun 2021 18:08:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqg9j-00065N-31; Tue, 08 Jun 2021 18:08:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqg9i-0004RW-Jg; Tue, 08 Jun 2021 18:08:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqg9i-0001k1-JC; Tue, 08 Jun 2021 18:08:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c+4H15C7bRCtQwvR8c/qUT7gfhFul29cyEY0zLxtPjg=; b=FSIZHU01vmODkb7V8QNmUt/fWN
	AMntJIXWdKy0sauqrIE1rb9ME44bY1IwZL9Gvr+F6gY93w/wx6IzPzPBT8iv9eIWZR0JaRMcSs/k5
	qVgZWsg2HmOlNnXDAQZahhdlDvTuFC+JFfKMJ6pFf3Q2pu+ZjFlbwoDzDp/xp7FktpyU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162537-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162537: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a35947f15c0ee695eba3c55248ec8ac3e4e23cca
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 18:08:50 +0000

flight 162537 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162537/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-credit2   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-rtds      5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a35947f15c0ee695eba3c55248ec8ac3e4e23cca
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  292 days
Failing since        152659  2020-08-21 14:07:39 Z  291 days  540 attempts
Testing same since   162527  2021-06-07 21:06:59 Z    0 days    2 attempts

------------------------------------------------------------
526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)

Not pushing.

(No revision log; it would be 169653 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 18:28:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 18:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138909.257002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqgSd-0007zx-Q3; Tue, 08 Jun 2021 18:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138909.257002; Tue, 08 Jun 2021 18:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqgSd-0007zq-Mi; Tue, 08 Jun 2021 18:28:23 +0000
Received: by outflank-mailman (input) for mailman id 138909;
 Tue, 08 Jun 2021 18:28:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqgSc-0007zg-N1; Tue, 08 Jun 2021 18:28:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqgSc-0006Tt-G9; Tue, 08 Jun 2021 18:28:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqgSc-0005Pf-8D; Tue, 08 Jun 2021 18:28:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqgSc-0007L9-7h; Tue, 08 Jun 2021 18:28:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vu7QQYpWGFldPGHD2pjZjFmuNegN29voxaVBbXJ4xbE=; b=3A+A3WzPEoIxFH14R5Aw8lhYJJ
	ZL9mbdkSxILFEYFcKlkxJhiUfrMJzeHc6MNAcOU2ybrEUKmtQBJq30UkNENU9o6xDQJhdvOMioa3m
	JYaIedaF1pXc/vX7RvwvnR8zszrcZAScu1FidMrRPnxEEljaZEWTEauuYIBvST4FpEws=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162542: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=558d83ab1a5179e146a56dd5f3cb16e1ca44ff46
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 18:28:22 +0000

flight 162542 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162542/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    4 days
Failing since        162368  2021-06-04 15:42:59 Z    4 days    4 attempts
Testing same since   162542  2021-06-08 10:41:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1348 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 18:44:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 18:44:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138918.257017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqghv-0001rR-6G; Tue, 08 Jun 2021 18:44:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138918.257017; Tue, 08 Jun 2021 18:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqghv-0001rK-1w; Tue, 08 Jun 2021 18:44:11 +0000
Received: by outflank-mailman (input) for mailman id 138918;
 Tue, 08 Jun 2021 18:44:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vz6+=LC=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1lqghu-0001rE-Do
 for xen-devel@lists.xenproject.org; Tue, 08 Jun 2021 18:44:10 +0000
Received: from mail-oi1-x22b.google.com (unknown [2607:f8b0:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31d3982a-8049-49bd-93b8-a0f0bc821cac;
 Tue, 08 Jun 2021 18:44:06 +0000 (UTC)
Received: by mail-oi1-x22b.google.com with SMTP id a26so1829720oie.11;
 Tue, 08 Jun 2021 11:44:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31d3982a-8049-49bd-93b8-a0f0bc821cac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to:cc;
        bh=yMYHVO+ERtPmmmBOaX2eo+ZhIMxbzVu0YuG005cvwe8=;
        b=liOxfv1Wfeu3baNbBGqOD8Rcbq/Wz6hk3ewjAlBKfw85uynOXPtby3tHnInul6rlVn
         ZkRe1YByIAU2JtiISG96lPzOfZIKYevFFxJK9EtZexagTW91ULGFpI4eBBrv3Pf+MVRg
         HyEaODXMDLIUcgUZQFq6E3plHbQyL3UucKxf1OpBeqwqZbpu07T1U5+asBdyD5MhRU3l
         o096+D0z2yDNAXSC3g3pzP8UXkDKa8CYM2ckTdo1owuxQ26yAafs+Z8tvuWs8AbsPsW1
         O4DfLbdkUNNqD57nuQ/pFrSEdw8tVGZ+3guZFjuON9MpwLZG/bqMORhZ+tusu9Y3uiRR
         6FgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
        bh=yMYHVO+ERtPmmmBOaX2eo+ZhIMxbzVu0YuG005cvwe8=;
        b=SXooLDoOCq2d5yBKU5/0siSEy/NchYBjSR0kJPDTVpNtAPWV3MdG9hcPQXY1/n3Iyz
         CzAYm1/QvsOwqZ7paYimyc7hv6urhJ+SEpsF2iv6livbA9WNhVkwFK/VEr1w0GklCH3Q
         ygxrfA1ksTJ4B/sggL1fSTwFQCoxq0+ae/sv18N/YjB2VQL01BwQAXnfu64DaDOH3YRY
         i0laptbASWTEdWOT8E5YQ/DOMSHsDbWKUIa8yD5rnmI7qu8M6oM/MRXPIb9o/jgXffoV
         gGIU+sut841TTyXt6pgPLpZpxFbol6dslb3wyizResqKOcUt1m6pJFeBJu9u1XzRfHs1
         E/tQ==
X-Gm-Message-State: AOAM530LeIQV5cdazkWnoThcF4CgrlI64qxSOphvjWoZ+W8BdYjZ8Sip
	hkcGN1JMkvP1ueIuQLgiQRU6qnrvcut/bxWL/U9ZXqwrcpibrQ==
X-Google-Smtp-Source: ABdhPJz9pbTtyhnyt9qsUHIxlUmnIvP+pQ89TpXICMW5WfNiJ10uPlkJEPb4KYWioy9tcgEWchfpc4LhajaEyCnsuB0=
X-Received: by 2002:aca:4554:: with SMTP id s81mr3807985oia.152.1623177845386;
 Tue, 08 Jun 2021 11:44:05 -0700 (PDT)
MIME-Version: 1.0
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 8 Jun 2021 11:43:52 -0700
Message-ID: <CACMJ4Ga47G1UZSiy=Ud=audqDr93+5vF8s-tPtoBiN69ZK=v-Q@mail.gmail.com>
Subject: Xen Summit Design Session notes: Hyperlaunch
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Daniel Smith <dpsmith@apertussolutions.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <stefano.stabellini@xilinx.com>, Julien Grall <jgrall@amazon.com>, 
	Julien Grall <Julien.grall.oss@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Rich Persaud <persaur@gmail.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	luca.fancellu@arm.com, paul@xen.org, Adam Schwalm <adam.schwalm@starlab.io>, 
	Scott Davis <scott.davis@starlab.io>, Christopher Clark <christopher.clark@starlab.io>, 
	quinnr@ainfosec.com, openxt <openxt@googlegroups.com>, 
	dgdegra <dgdegra@tycho.nsa.gov>, Artem Mygaiev <artem_mygaiev@epam.com>, 
	Bruce Ashfield <bruce.ashfield@gmail.com>, demi@invisiblethingslab.com, dfaggioli@suse.com, 
	mengxu@cis.upenn.edu, josh.whitehead@dornerworks.com, 
	Stewart Hildebrand <stewart.hildebrand@dornerworks.com>, Juergen Gross <jgross@suse.com>, 
	trenchboot-devel@googlegroups.com, system-dt@lists.openampproject.org, 
	minios-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Design Session - Hyperlaunch
--------------------
Wednesday 26th May, at the Xen Design and Developer Summit 2021
Session Hosts: Christopher Clark & Daniel Smith

tl;dr:
- use cases for Hyperlaunch include supporting bare metal apps
    - latency is a critical requirement for workloads
        - determines success/failure of the system
        - scheduling is hard; Xen has options, including RTDS
- Zephyr in dom0 being explored in the Arm embedded community
- XSM Roles work is to support flexible deployment structure
- System Device Tree is important for Hyperlaunch to integrate
    - migration from dom0less to be supported
    - Lopper tool translates SDT to traditional Device Trees for domains
- Boot Domain could run Lopper
    - could be done as a unikraft unikernel
- US/EU Supply Chain SBOM need aligns with Hyperlaunch + Trenchboot
    - options for funding to accelerate the work:
        - PCI passthrough, Recovery Domain, XSM framework improvements
- Design docs for Hyperlaunch available [patch posted to merge to Xen tree]
https://lists.xenproject.org/archives/html/xen-devel/2021-05/pdfq6mIMNPNoM.pdf
https://lists.xenproject.org/archives/html/xen-devel/2021-05/pdfQlbS0F4suy.pdf

Slides from the Hyperlaunch Keynote:
https://static.sched.com/hosted_files/xen2021/d7/Hyperlaunch%20-%20Keynote_%20Xen%20Summit%202021%20-%20Clark%2C%20Smith.pdf
Video: https://www.youtube.com/watch?v=Xwtq2Q0ylj0&list=PLYyw7IQjL-zGcRPN6EjiTuFVGo4A6KCNf&index=21

Slides from the XSM Roles presentation:
https://static.sched.com/hosted_files/xen2021/75/Tuesday_A%20new%20Role%20model%20XSM-.pdf
Video: https://www.youtube.com/watch?v=j1fDn8ZbyVE&list=PLYyw7IQjL-zGcRPN6EjiTuFVGo4A6KCNf&index=6

Hyperlaunch at the Xen Project wiki:
https://wiki.xenproject.org/wiki/Hyperlaunch

--------------------
Open Discussion:
- floor open for audience requirements, use cases for Hyperlaunch

Stefano:
    - use case: fast unikernel boot (on embedded known as "bare metal
      applications")
        - boot up as quickly as possible

    - difference between unikernels and bare metal applications:
        - a bare metal application is a tiny driver for a hardware block
        - ie. a hardware block in programmable logic, so no existing driver

    - a bare metal application: typically just a driver that executes as the
      "unikernel"
        - usually only a few of them

    - latency is the biggest concern for bare metal apps
        - hypervisor scheduling: a concern
        - priority reason: _must_ respond to hardware action in a very limited
          amount of time
        - ie. Latency more important than anything else
            - missing the latency deadline is software failure: disaster happens
        - consequence: Adding a scheduler makes it a lot harder
            - not doing any scheduling is typically easier
            - also need to do cache partitioning, and more

    - a bare metal app doesn't need any PV drivers since it doesn't
      communicate with any other software, just the hardware block.
        - access to mmio + an interrupt or two sufficient

Christopher:
    - use of unikernels aligns with what is wanted for the boot domain:
        - ie. use short, single-purpose domains for platform services to
          avoid turning the boot domain into another dom0 by continuing to
          add functionality
        - eg. Qubes OS Mirage firewall VM, or something similar from unikraft

Daniel:
    - design: the hypervisor finishes the system, waits for boot domain to exit
          and complete the launch
        - enters 'finalization phase', finishes bringing everything up:
          eg. unpausing other domains not unpaused by boot domain
        - boot domain wiped from memory

--- topic: Scheduling

Christopher:
    - For small, single-purpose domains: have a need to schedule these

Stefano:
    - Illustrative example: 2 domains: dom0 Linux, domU bare metal app
    - no scheduling, to make sure deadlines not broken
    - made domU pause dom0 during critical execution:
        - Interesting inversion of priority.

Point is that domU is the most critical thing on the entire system.
ie. if domU meets deadlines and dom0 not present, system still functional.

Christopher:
    - related: Connor's talk at this Summit re: moving scheduling out of
      Xen into dom0;
https://xen2021.sched.com/event/jAEs/the-root-vm-a-new-xen-domain-species-connor-davis-ais
    - also the Bromium architecture, and Daniel's HAT architecture
https://xen2020.sched.com/event/baXt/design-session-talk-reliable-platform-security-xen-and-the-fidelis-platform-for-hardened-access-terminals-hat
        - has concept of protection domains
    - interested in DomU running the fundamental workload but not being
      Control Domain, doesn't have those permissions

For this use case -- domU pauses dom0 for domU to meet its deadlines --
permission model must have been changed.

Stefano:
    - ad hoc provision of two hypercalls so domU could pause/unpause dom0
    - not easy to make generic:
        - not just vcpu, must pause _everything_ except self
    - 5 lines of code for a hack, 10 months to do it properly, upstream, etc!

    - Critical section: an interrupt occurs, must act within a very limited
      amount of time; else the whole thing fails

    - Critical section is way smaller than a slot of the scheduler

    - Make sure everything else is paused, to get the full bandwidth of not just
      the CPU, but also DDR, no interrupts. Don't screw up those 15 microseconds

George:
    - how long does it take to pause all the other VMs on the system?
    - eg: a foreach domain, foreach vcpu, and just pause them, but:
      involves sending interrupts, waiting for the thing to finish, etc

Stefano:
    - I knew which event started the critical section, so I made that event the
      trigger for pausing dom0.
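
The pause-everything approach discussed here can be sketched as a toy model.
This is illustrative only: the Domain/Vcpu types and pause_all_except helper
are invented stand-ins, not Xen's API; in the real hypervisor this corresponds
roughly to iterating domains and vcpus and pausing each, which involves IPIs
and waiting for every vcpu to actually deschedule (the cost George asks about).

```python
from dataclasses import dataclass, field

@dataclass
class Vcpu:
    paused: bool = False

@dataclass
class Domain:
    domid: int
    vcpus: list = field(default_factory=list)

def pause_all_except(domains, self_domid):
    """Pause every vcpu of every domain except the caller's own domain."""
    for d in domains:
        if d.domid == self_domid:
            continue
        for v in d.vcpus:
            # In a real hypervisor this would send an IPI and wait for the
            # vcpu to leave the run queue; here we just flip a flag.
            v.paused = True

def unpause_all_except(domains, self_domid):
    """Undo pause_all_except once the critical section is over."""
    for d in domains:
        if d.domid == self_domid:
            continue
        for v in d.vcpus:
            v.paused = False

# Example: dom2 (the bare metal app) enters its critical section.
domains = [Domain(0, [Vcpu(), Vcpu()]), Domain(1, [Vcpu()]), Domain(2, [Vcpu()])]
pause_all_except(domains, self_domid=2)
```

The interesting part is not the loop itself but, as noted in the discussion,
how long the real version takes and who is permitted to invoke it.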


[via chat:] Demi Marie: What if we had a hard real-time scheduler like
seL4 does?
[via chat:] Artem: RTDS? NULL?
[via chat:] Andy: yup - those
[via chat:] Julien: https://wiki.xenproject.org/wiki/RTDS-Based-Scheduler
[via chat:] Artem: also ARINC653
[via chat:] Scott: sounds like he wants the hypervisor to disable interrupt
virtualization and sit in a tight loop running a single guest on certain cores
[via chat:] Artem: core pooling?
[via chat:] Demi Marie: IIRC seL4 can do this with the mixed-criticality
scheduling work
[via chat:] Artem: RTDS can do that but AFAIK it cannot reschedule slack

Daniel:
    - you need a scheduler that is aware of these critical interrupts, so that
      when one occurs, that domain gets exclusivity over the system for the
      duration of its critical work.

Stefano: responding to Demi, re: "seL4 can do this with the mixed-criticality
scheduling work"
    - Yes, other domains in the past used this technique

George: you don't actually need to pause the other domains;
    - you just need to make sure that the other CPUs stop doing stuff.

Stefano: what I did: slept in Xen, didn't even pause the CPU: busy-looped in Xen

George: in a sense that is correct; similar to core scheduling, where sibling
cores switch to not doing anything

Daniel: yes, lots of academic papers on these problems, eg. implemented in seL4
and other kernels.
XSM Roles work was done to help more advanced Hyperlaunch scenarios
    - (I don't like this idea but:) you could build a role-based scheduler

Christopher: ARINC653 scheduler mentioned - Artem, have you experience with it?

Artem: no, sticking with RTDS. Also used it with full preemption for Xen.
- Really interested in RTDS.
    - want to explore using slack time for domains with best effort priorities
    - RTDS seems like the best option for future development.

Our scenarios, on Arm:
- distinguish between: hardware-controlling domain, hardware domains,
  and controlling domain:
- using dom0 as a controlling domain, able to recreate domains if needed

- using device tree and don't have ACPI: split hardware between domains
    - each domain can talk directly to some piece of hardware
        - ie. they all are, in a sense, hardware domains
    - each can be independently restarted to deal with faulty hardware drivers
        - eg. we can restart the GPU from dom0

- dom0 path to safety certifications: working on Zephyr as a dom0
    - event channels working
    - an early draft implementation

- aims:
    - a small RTOS acting as a starter in dom0
    - don't put other domain kernels in dom0 - instead: a bootloader
        - dom0 starts a domain, gives a generic bootloader, common for all other
          domains, and then other domains have their own filesystems
        - guest domains know which kernel to use, so dom0 becomes very small
          and very generic, and not dependent on other domains' kernels, etc.
        - ie. dom0 is purely for control functions

--- topic: how does Hyperlaunch help?

Stefano:
    - domU should not be started from dom0
        - two domains, no PV drivers at all
        - a clear use case for dom0less
    - more detailed XSM policies allow dom0 to not be fully privileged
    - XSM policy can allow one domU to stop the other domU


--- topic: request to review the design doc

Daniel:
    - we want to make sure that we're good on this idea of the boot domain
    - that we understand how these handoffs are going
    - the roles work, the subtask to get that integrated in so that we can do
      these disaggregated boots.

New definitions for Roles within the Xen system:
    - get away from concepts of 'is_control_domain' and 'is_hardware_domain'
    - talk about what Role we're asking a domain to do and function as
    - want a common language for roles
     (eg. avoid (possibly unaware) misconceptions of current differences in
     views on what a Control Domain is and what a Hardware Domain is)

Review the design doc, give us some feedback; will be adding a design doc for
the Roles work as well -- have a draft form of it and just want to flesh it out
further, and hopefully we can get all of that adopted.


--- topic: Question from Julien: is the plan to completely remove dom0less
or keep the two together?

Christopher: integrate, so no boundary between the two
    - Everything with dom0less should continue to work

Daniel: yes
    - dom0less constructing domains from the hypervisor will continue,
      become common code, used by both Arm and x86.
    - biggest difference: migration from dom0less to Hyperlaunch trees;
      not sure what that migration period will be.
        - much broader Device Tree definition
        - trying to ensure aligned with System Device Tree
            - (dom0less today has own specific Device Tree configuration)
        - for some period of time, the parser for the dom0less Device Tree is
          going to have to coexist with the Hyperlaunch one
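For context, a dom0less domU today is described under the /chosen node of the device tree passed to Xen. A minimal sketch, following the dom0less feature documentation (exact properties can vary by Xen version):

```
/* Sketch of a dom0less domU under /chosen (per docs/features/dom0less;
 * addresses and sizes here are illustrative only). */
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        cpus = <1>;
        memory = <0x0 0x20000>;   /* in KB */

        module@42000000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x0 0x42000000 0x0 0x800000>;
            bootargs = "console=ttyAMA0";
        };
    };
};
```

The Hyperlaunch device tree bindings extend this kind of per-domain description, which is why the two parsers need to coexist during the migration period.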


--- topic: System Device Tree and Lopper

Stefano:
System Device Tree:
    - very similar to Hyperlaunch and dom0less
    - defines 'domains': VMs for Xen, or could be bare metal things
      running on a coprocessor
    - next few months: finish cleaning up the definition of domains in
      System Device Tree, and cover VMs properly

Align Hyperlaunch with the System Device Tree domains.
- already need migration from dom0less to System Device Tree
- don't want to do two migrations

System Device Tree comes with a tool called 'Lopper':
    [ https://github.com/devicetree-org/lopper ]

Lopper takes a single System Device Tree and generates multiple
traditional Device Trees, one for each domain in the System Device Tree.
    - Device Tree for VMs can be very different from the one on the host
    - Device Tree for bare metal domains can be much closer

Lopper supports Python plugins
- eg. a Lopper plugin to convert the System Device Tree format into the
  dom0less format, so that it works with Xen's current parsing
- changing the Xen parsing eventually would be better
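As a rough illustration of the kind of transform such a plugin performs (this is not the real Lopper API; the node layout and helper below are simplified inventions), consider rendering a System Device Tree style domain description as a dom0less /chosen fragment:

```python
# Hypothetical sketch, NOT the real Lopper plugin API: turn a simplified
# "domain" description into dom0less-style DTS text under /chosen.

def domain_to_dom0less(name: str, cpus: int, memory_kb: int) -> str:
    """Render one domain as a dom0less chosen-node snippet (DTS text)."""
    lines = [
        f"{name} {{",
        '    compatible = "xen,domain";',
        f"    cpus = <{cpus}>;",
        f"    memory = <0x0 {memory_kb:#x}>;",  # dom0less memory is in KB
        "};",
    ]
    return "\n".join(lines)

print(domain_to_dom0less("domU1", 2, 256 * 1024))
```

A real plugin would walk the System Device Tree nodes and also emit the kernel/ramdisk module subnodes; the point is that the conversion is a pure tree-to-tree rewrite, which is why it fits Lopper's plugin model.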


Daniel:
- with Hyperlaunch: could boot with the System Device Tree, and pass it
  into the Boot Domain, where Lopper runs, and Lopper can generate domain device
  trees for guest domains to start

Stefano: would be very cool!
- System Device Tree (and Lopper, in python) so far always used at build time

Daniel: the Unikraft project has its MicroPython unikernel
- for embedding scripts as a unikernel
- eg. a Unikraft unikernel Python domain with Lopper and the Boot Domain logic
    - takes the System Device Tree used to construct all the domains, and does
      the Device Tree generation
- nice from a security standpoint: the hypervisor is not generating Device Trees
    - all at runtime in a clean, safe architecture

Christopher: interesting for CI looping as well

--- topic: Scope, Funding, Alignment of work

Rich: Q: You said that you were managing the scope because it could become
quite big. Could you talk about:
    - Some of the things that you have left out of scope?
    - Areas where funding would help?
    - Areas where other contributors would help?
    - How Trenchboot is connected to this or just launch integrity in general?

"In both the US and the EU, there is a top-down effort for supply chain
security, powered by ransomware and bitcoin, so " [ money is available ]
" to get more integrity in the software stack, and they're
pushing 'Secure Bill Of Materials (SBOM)', which we saw at the Yocto event.
So if you have a Secure Bill Of Materials, and your Hyperlaunch system with
Trenchboot can prove that the thing running matches the manifests, people might
want to pay money for that, and help drive your roadmap."

Daniel: Yes.

Trenchboot: Correct, the whole idea of this spawned out of the same thoughts
that created Trenchboot [ https://trenchboot.org/ ]

- proposed back in May/June 2018, driven by:
  how do we use Trenchboot in a Xen launch system that has the security
  properties we're seeking, without blowing up the hypervisor in terms of
  size, code, and responsibility
    - ref: talk at Trenchboot Developer Forum
        - [ https://www.youtube.com/watch?v=qWMRcfQdc6c ]

- standard pattern followed with Trenchboot: launch into a kernel that then
  launches into an integrity measurement system, a security engine
    - ie. for Xen: we do a DRTM launch into Xen, which starts a Boot Domain:
      our security engine, running in a restricted, protected environment,
      that takes measurements of the system and provides attestable
      information, attestable evidence of what's in your system, to the
      degree that's possible
    - at the same time, not everybody wanted a capability specifically
      focused on that, so there had already been discussions about a
      bootstrap domain, dating back to when Daniel De Graaf did the original
      Hardware Domain work - he posted an example Boot Domain capability
        - so we are building all of this as the foundation
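The measurement side of a DRTM launch can be sketched conceptually (this is not TrenchBoot or Xen code; it just shows the TPM-style hash chaining that makes the final value attest to everything loaded, in order):

```python
# Conceptual sketch of a DRTM measurement chain: each component is
# hashed into a register, so the final value depends on every component
# and on the order in which they were measured.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # a DRTM event resets the dynamic PCRs to zeros
for blob in [b"xen", b"boot-domain", b"domain-configs"]:
    pcr = extend(pcr, blob)

print(pcr.hex())  # attestable evidence of what was launched
```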

Rich: are there things you have wanted to do but have postponed or are there
tasks where you need external people to help, or external funding sources that
would allow those features to be addressed?

Christopher:
    - PCI passthrough is the big one
        - really important
        - highly complementary to Hyperlaunch: passing through PCI devices
          right from the start to all of the initial VMs
        - but complex

    - the Recovery Domain
        - mentioned in the Design Document
        - the ability to have a VM built and configured so that, when a
          failure is detected during host boot (eg. malfunction of a
          critical VM), rescue logic in it can enable recovery

Daniel:
    - for the Roles work, we have done the minimum that Hyperlaunch needed
    - but could definitely go much further:
        - get the XSM framework cleaned up
        - get Flask into a much better position
        - more advanced Roles
        - reevaluate all the XSM hooks in terms of Roles and everything
        - get all of the security framework into a better state

--------------------
The recording for this Design Session is available at:
https://www.youtube.com/watch?v=j75orDMXO2M&list=PLYyw7IQjL-zGcRPN6EjiTuFVGo4A6KCNf&index=13


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 19:08:24 2021
Subject: Re: [PATCH v8 2/2] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Alistair Francis <alistair.francis@wdc.com>
References: <cover.1622772299.git.connojdavis@gmail.com>
 <4337d3cd6891b34f534d85ca62712bd3b446edf8.1622772299.git.connojdavis@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <38ffb102-a403-23d6-8b0b-607a8cd3d515@gmail.com>
Date: Tue, 8 Jun 2021 12:08:10 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <4337d3cd6891b34f534d85ca62712bd3b446edf8.1622772299.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/3/21 7:14 PM, Connor Davis wrote:
> Add arch-specific makefiles and configs needed to build for
> riscv. Also add a minimal head.S that is a simple infinite loop.
> head.o can be built with
> 
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen tiny64_defconfig
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=riscv64/head.o
> 
> No other TARGET is supported at the moment.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> ---
>  MAINTAINERS                             |  9 +++++
>  config/riscv64.mk                       |  5 +++
>  xen/Makefile                            |  8 +++--
>  xen/arch/riscv/Kconfig                  | 48 +++++++++++++++++++++++++
>  xen/arch/riscv/Kconfig.debug            |  0
>  xen/arch/riscv/Makefile                 |  2 ++
>  xen/arch/riscv/Rules.mk                 |  0
>  xen/arch/riscv/arch.mk                  | 14 ++++++++
>  xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>  xen/arch/riscv/riscv64/asm-offsets.c    |  0
>  xen/arch/riscv/riscv64/head.S           |  6 ++++
>  xen/include/asm-riscv/config.h          | 47 ++++++++++++++++++++++++
>  12 files changed, 150 insertions(+), 2 deletions(-)
>  create mode 100644 config/riscv64.mk
>  create mode 100644 xen/arch/riscv/Kconfig
>  create mode 100644 xen/arch/riscv/Kconfig.debug
>  create mode 100644 xen/arch/riscv/Makefile
>  create mode 100644 xen/arch/riscv/Rules.mk
>  create mode 100644 xen/arch/riscv/arch.mk
>  create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>  create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
>  create mode 100644 xen/arch/riscv/riscv64/head.S
>  create mode 100644 xen/include/asm-riscv/config.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index d46b08a0d2..5a1f92422a 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -456,6 +456,15 @@ F:	tools/libs/light/libxl_nonetbuffer.c
>  F:	tools/hotplug/Linux/remus-netbuf-setup
>  F:	tools/hotplug/Linux/block-drbd-probe
>  
> +RISCV
> +M:	Bob Eshleman <bobbyeshleman@gmail.com>
> +M:	Alistair Francis <alistair.francis@wdc.com>
> +R:	Connor Davis <connojdavis@gmail.com>
> +S:	Supported
> +F:	config/riscv64.mk
> +F:	xen/arch/riscv/
> +F:	xen/include/asm-riscv/
> +
>  RTDS SCHEDULER
>  M:	Dario Faggioli <dfaggioli@suse.com>
>  M:	Meng Xu <mengxu@cis.upenn.edu>
> diff --git a/config/riscv64.mk b/config/riscv64.mk
> new file mode 100644
> index 0000000000..a5a21e5fa2
> --- /dev/null
> +++ b/config/riscv64.mk
> @@ -0,0 +1,5 @@
> +CONFIG_RISCV := y
> +CONFIG_RISCV_64 := y
> +CONFIG_RISCV_$(XEN_OS) := y
> +
> +CONFIG_XEN_INSTALL_SUFFIX :=
> diff --git a/xen/Makefile b/xen/Makefile
> index 7ce7692354..89879fad4c 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -26,7 +26,9 @@ MAKEFLAGS += -rR
>  EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
>  
>  ARCH=$(XEN_TARGET_ARCH)
> -SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +SRCARCH=$(shell echo $(ARCH) | \
> +          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +              -e s'/riscv.*/riscv/g')
>  
>  # Don't break if the build process wasn't called from the top level
>  # we need XEN_TARGET_ARCH to generate the proper config
> @@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
>  # Set ARCH/SUBARCH appropriately.
>  export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>  export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> -                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +                                -e s'/riscv.*/riscv/g')
>  
>  # Allow someone to change their config file
>  export KCONFIG_CONFIG ?= .config
> @@ -335,6 +338,7 @@ _clean: delete-unfresh-files
>  	$(MAKE) $(clean) xsm
>  	$(MAKE) $(clean) crypto
>  	$(MAKE) $(clean) arch/arm
> +	$(MAKE) $(clean) arch/riscv
>  	$(MAKE) $(clean) arch/x86
>  	$(MAKE) $(clean) test
>  	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
> diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
> new file mode 100644
> index 0000000000..468e250c86
> --- /dev/null
> +++ b/xen/arch/riscv/Kconfig
> @@ -0,0 +1,48 @@
> +config RISCV
> +	def_bool y
> +
> +config RISCV_64
> +	def_bool y
> +	select 64BIT
> +
> +config ARCH_DEFCONFIG
> +	string
> +	default "arch/riscv/configs/tiny64_defconfig"
> +
> +menu "Architecture Features"
> +
> +source "arch/Kconfig"
> +
> +endmenu
> +
> +menu "ISA Selection"
> +
> +choice
> +	prompt "Base ISA"
> +	default RISCV_ISA_RV64IMA if RISCV_64
> +	help
> +	  This selects the base ISA extensions that Xen will target.
> +
> +config RISCV_ISA_RV64IMA
> +	bool "RV64IMA"
> +	help
> +	  Use the RV64I base ISA, plus the "M" and "A" extensions
> +	  for integer multiply/divide and atomic instructions, respectively.
> +
> +endchoice
> +
> +config RISCV_ISA_C
> +	bool "Compressed extension"
> +	default y
> +	help
> +	  Add "C" to the ISA subsets that the toolchain is allowed to
> +	  emit when building Xen, which results in compressed instructions
> +	  in the Xen binary.
> +
> +	  If unsure, say Y.
> +
> +endmenu
> +
> +source "common/Kconfig"
> +
> +source "drivers/Kconfig"
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> new file mode 100644
> index 0000000000..942e4ffbc1
> --- /dev/null
> +++ b/xen/arch/riscv/Makefile
> @@ -0,0 +1,2 @@
> +.PHONY: include
> +include:
> diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> new file mode 100644
> index 0000000000..53dadb8975
> --- /dev/null
> +++ b/xen/arch/riscv/arch.mk
> @@ -0,0 +1,14 @@
> +########################################
> +# RISCV-specific definitions
> +
> +CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
> +
> +riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
> +riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
> +
> +# Note that -mcmodel=medany is used so that Xen can be mapped
> +# into the upper half _or_ the lower half of the address space.
> +# -mcmodel=medlow would force Xen into the lower half.
> +
> +CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> +CFLAGS += -I$(BASEDIR)/include
> diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
> new file mode 100644
> index 0000000000..3c9a2ff941
> --- /dev/null
> +++ b/xen/arch/riscv/configs/tiny64_defconfig
> @@ -0,0 +1,13 @@
> +# CONFIG_SCHED_CREDIT is not set
> +# CONFIG_SCHED_RTDS is not set
> +# CONFIG_SCHED_NULL is not set
> +# CONFIG_SCHED_ARINC653 is not set
> +# CONFIG_TRACEBUFFER is not set
> +# CONFIG_HYPFS is not set
> +# CONFIG_GRANT_TABLE is not set
> +# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
> +
> +CONFIG_RISCV_64=y
> +CONFIG_DEBUG=y
> +CONFIG_DEBUG_INFO=y
> +CONFIG_EXPERT=y
> diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> new file mode 100644
> index 0000000000..0dbc27ba75
> --- /dev/null
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -0,0 +1,6 @@
> +#include <asm/config.h>
> +
> +        .text
> +
> +ENTRY(start)
> +        j  start
> diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
> new file mode 100644
> index 0000000000..e2ae21de61
> --- /dev/null
> +++ b/xen/include/asm-riscv/config.h
> @@ -0,0 +1,47 @@
> +#ifndef __RISCV_CONFIG_H__
> +#define __RISCV_CONFIG_H__
> +
> +#if defined(CONFIG_RISCV_64)
> +# define LONG_BYTEORDER 3
> +# define ELFSIZE 64
> +# define MAX_VIRT_CPUS 128u
> +#else
> +# error "Unsupported RISCV variant"
> +#endif
> +
> +#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
> +#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
> +#define POINTER_ALIGN  BYTES_PER_LONG
> +
> +#define BITS_PER_LLONG 64
> +
> +/* xen_ulong_t is always 64 bits */
> +#define BITS_PER_XEN_ULONG 64
> +
> +#define CONFIG_RISCV_L1_CACHE_SHIFT 6
> +#define CONFIG_PAGEALLOC_MAX_ORDER  18
> +#define CONFIG_DOMU_MAX_ORDER       9
> +#define CONFIG_HWDOM_MAX_ORDER      10
> +
> +#define OPT_CONSOLE_STR "dtuart"
> +#define INVALID_VCPU_ID MAX_VIRT_CPUS
> +
> +/* Linkage for RISCV */
> +#ifdef __ASSEMBLY__
> +#define ALIGN .align 2
> +
> +#define ENTRY(name)                                \
> +  .globl name;                                     \
> +  ALIGN;                                           \
> +  name:
> +#endif
> +
> +#endif /* __RISCV_CONFIG_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 19:52:25 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162545: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 19:52:10 +0000

flight 162545 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162545/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 162327

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dd77e859686f458a5313786deae0a63683d3cba3
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    7 days
Failing since        162370  2021-06-04 17:01:35 Z    4 days   27 attempts
Testing same since   162545  2021-06-08 16:02:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 326 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 22:22:46 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162539-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162539: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:host-install:broken:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=614124bea77e452aa6df7a8714e8bc820b489922
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 22:22:27 +0000

flight 162539 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162539/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start    fail in 162532 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      5 host-install             broken pass in 162532
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail in 162483 pass in 162539
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 162532 pass in 162539
 test-armhf-armhf-libvirt-raw 12 debian-di-install fail in 162532 pass in 162539
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 162483
 test-arm64-arm64-xl-thunderx 13 debian-fixup               fail pass in 162532

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                614124bea77e452aa6df7a8714e8bc820b489922
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  312 days
Failing since        152366  2020-08-01 20:49:34 Z  311 days  532 attempts
Testing same since   162483  2021-06-07 04:42:27 Z    1 days    3 attempts

------------------------------------------------------------
6149 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-amd64-amd64-examine host-install

Not pushing.

(No revision log; it would be 1674012 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 08 22:24:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Jun 2021 22:24:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.138986.257080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqk9Q-0006t0-3C; Tue, 08 Jun 2021 22:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 138986.257080; Tue, 08 Jun 2021 22:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqk9P-0006st-VS; Tue, 08 Jun 2021 22:24:47 +0000
Received: by outflank-mailman (input) for mailman id 138986;
 Tue, 08 Jun 2021 22:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqk9O-0006sh-Cl; Tue, 08 Jun 2021 22:24:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqk9O-00024a-7Y; Tue, 08 Jun 2021 22:24:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqk9N-00088H-W7; Tue, 08 Jun 2021 22:24:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqk9N-000487-T5; Tue, 08 Jun 2021 22:24:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Gh/CPd42dq/yMe7iQQsDRb3av2L1x3WeMWiv0GuP0E=; b=Kecwj/X393EMuEyH1bnP/Zpwpp
	UzeFn37Kn9k3cQh1UEYbZjRFrIaeLaKZqK+mYP2HVgSVvnODmsJzTlumiWwrOzXG2gx/h4w9kBZIO
	diGozniEDUvB47xCkN7nOFxl2ErtRmdKp2KKUWbOA4Afgw/5hhtWPmflZGBMXCMnVSOM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162554-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162554: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e4fee66043120c954fc309bbb37813604c1c0eb7
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Jun 2021 22:24:45 +0000

flight 162554 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162554/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e4fee66043120c954fc309bbb37813604c1c0eb7
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162327  2021-06-01 16:01:37 Z    7 days
Failing since        162370  2021-06-04 17:01:35 Z    4 days   28 attempts
Testing same since   162554  2021-06-08 20:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5268b2dcf7..e4fee66043  e4fee66043120c954fc309bbb37813604c1c0eb7 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 02:19:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 02:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139007.257112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqnnt-0000CI-Ds; Wed, 09 Jun 2021 02:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139007.257112; Wed, 09 Jun 2021 02:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqnnt-0000CB-AC; Wed, 09 Jun 2021 02:18:49 +0000
Received: by outflank-mailman (input) for mailman id 139007;
 Wed, 09 Jun 2021 02:18:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqnnr-0000C1-Ky; Wed, 09 Jun 2021 02:18:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqnnr-0003ox-DY; Wed, 09 Jun 2021 02:18:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqnnr-0003me-3V; Wed, 09 Jun 2021 02:18:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqnnr-0008N3-31; Wed, 09 Jun 2021 02:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s7K+pROI+H8/XeLbUaKqt7OFEkMXTe6eK3DUxM9nTe8=; b=ztwUAROnxxg89upJxuV5hrgRHy
	3fyeAPV/o+d4Lgq5FDBz2+I0f1JfKLM6TyOTy8uDgm+k0JUgD+Jk0cLxctHndpTmbu8vUI9J3+wY2
	ixR74i7H1SbKtJwPwsuYfrmUkepB7TByX+Sfrres2pqumZlcPAITC3OBymJ6JlE/9Gmw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162546-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 162546: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f034c96e882b81738720472cd28e75e6d6eb66fe
X-Osstest-Versions-That:
    xen=eae0dfac891f521ceb6c4733e22a0cd718f336c0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 02:18:47 +0000

flight 162546 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162546/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162366
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162366
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162366
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162366
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162366
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162366
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162366
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162366
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162366
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162366
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162366
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f034c96e882b81738720472cd28e75e6d6eb66fe
baseline version:
 xen                  eae0dfac891f521ceb6c4733e22a0cd718f336c0

Last test of basis   162366  2021-06-04 13:08:58 Z    4 days
Testing same since   162546  2021-06-08 17:06:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   eae0dfac89..f034c96e88  f034c96e882b81738720472cd28e75e6d6eb66fe -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 06:13:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 06:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139037.257204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqrSH-0005Yc-1B; Wed, 09 Jun 2021 06:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139037.257204; Wed, 09 Jun 2021 06:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqrSG-0005YV-SB; Wed, 09 Jun 2021 06:12:44 +0000
Received: by outflank-mailman (input) for mailman id 139037;
 Wed, 09 Jun 2021 06:12:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqrSF-0005YK-BH; Wed, 09 Jun 2021 06:12:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqrSF-0008HU-2c; Wed, 09 Jun 2021 06:12:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqrSE-0006tl-Mr; Wed, 09 Jun 2021 06:12:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqrSE-00080h-MI; Wed, 09 Jun 2021 06:12:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4lZ4ZnOo5YAbbJwH74FpkadFxDCJRgRsmEjnD49GYD4=; b=rK863fj6V/GiCyiNkWos5uK1C0
	Dq8Zw3mhAHCuFnz/tat7gNfjHXelY+an0KasEshjVVkUiZFoOuiI/ybKba4F+nw7ghdNXBtOcKCa/
	ubsaSyP927DeSZy0TRPj8MXtRewLVv4kzrXEV4C4/4ho6wQZzE/tLUZ/+5L55SaWLsE0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 162547: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0ff7f9c5aa02cd2469a8fc03f1ed262f18933721
X-Osstest-Versions-That:
    xen=b046e05736deecbd8254540c5e45444115fb1c98
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 06:12:42 +0000

flight 162547 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162547/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 162365

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162365
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162365
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162365
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162365
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162365
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162365
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162365
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162365
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162365
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162365
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162365
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0ff7f9c5aa02cd2469a8fc03f1ed262f18933721
baseline version:
 xen                  b046e05736deecbd8254540c5e45444115fb1c98

Last test of basis   162365  2021-06-04 13:08:58 Z    4 days
Testing same since   162547  2021-06-08 18:11:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   b046e05736..0ff7f9c5aa  0ff7f9c5aa02cd2469a8fc03f1ed262f18933721 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 06:36:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 06:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139046.257219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqrp7-000822-5u; Wed, 09 Jun 2021 06:36:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139046.257219; Wed, 09 Jun 2021 06:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqrp7-00081v-2n; Wed, 09 Jun 2021 06:36:21 +0000
Received: by outflank-mailman (input) for mailman id 139046;
 Wed, 09 Jun 2021 06:36:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqrp5-00081p-Te
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 06:36:19 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53270274-7e39-4e0d-b9f8-78d5c8b38193;
 Wed, 09 Jun 2021 06:36:18 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2057.outbound.protection.outlook.com [104.47.5.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-Z2ewEaSAP7KcNuosk0-lBQ-1; Wed, 09 Jun 2021 08:36:16 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3536.eurprd04.prod.outlook.com (2603:10a6:803:2::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.25; Wed, 9 Jun
 2021 06:36:14 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.021; Wed, 9 Jun 2021
 06:36:13 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0286.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Wed, 9 Jun 2021 06:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53270274-7e39-4e0d-b9f8-78d5c8b38193
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623220577;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DpG5z7mszEvm3pGLZqzstd4pzaFEQVrKJ2H3PJilBk8=;
	b=GyvGk1TWlGlHgoKyjFGCXkTI2VJ6Pj105bRQS263rTuSegU5QIdK1AJi9EOuS/2ZT4CtgT
	e170bTc16mWZ1vyEoNQ9/LvxjkxZnBI9Fg7hoGto68JWlokB7aPDi21ckRvAft8/KLdT3Q
	vTmXLXb7diqj7e3sTPRVOwkRERRvne0=
X-MC-Unique: Z2ewEaSAP7KcNuosk0-lBQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lbBpFCxLGqMvpkCnz32LZa5FiNVy9blofEvgLBZ6Tpd2765JOS4MbLVZDeo6ezU3QjGoFpFKNCJeIY1hVtN5N55p5WgczxY05lksVDDgsW7wlY3kSXtzUYKbpkPkLQr490gj8YcIr+PZIEwkcCgF1wq3ri4dOE/j4RmcpLxdTX1TP4+bKFbk5J0UD9MIF60omS7C56qDri3YJcCXMALcgX+ulutltIcsZDG24n3yU5DIvLOi6tZHVs6eCRIA7MZYuwjuDmqM/vM4nGtPRseb35CdMmmZEGNVHYaRzCcKcomYsuMbzYx6m1NAGN2kcCCOCeD+AXmGKkwq1SiLQJf56w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpG5z7mszEvm3pGLZqzstd4pzaFEQVrKJ2H3PJilBk8=;
 b=Lk15ZDsSdUMaE292s19GtiVCEE3pNgSPDviaX7MLigp8tRIXPvP8EgNHGRASXoNAVjAXddcMzvjdY7ym4hBGsWkC+KRztCO87+jJo8z1HKn9cxaWVvPzd0Uvk3LxMwPOJO4UfH2HoazI4aFx+TDVEcprY5HsUBWtJGwtfNYpyord9t8B/rEFUydTvRhY6FM/GFz/ylIqHXNk4z9nxv4VUDFSxgLTtDC234+8Z7p+VayooW07cpwKWucC9n+V40zlXYQA7LbvBaaq9uGglQO215OSNuxRIGtuAvgz35FAMuOW4DWqaxRZM3XQMIt6ZrHeceofGWWKLfuk35Bh0BCV9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/tsx: Cope with TSX deprecation on SKL/KBL/CFL/WHL
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210608170559.6732-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <72aedd57-9722-2c5b-7365-f46a0e0fe39d@suse.com>
Date: Wed, 9 Jun 2021 08:36:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210608170559.6732-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0286.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a3364c35-3197-43d0-65c7-08d92b10e062
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3536:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3536A022159138578572F614B3369@VI1PR0402MB3536.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a3364c35-3197-43d0-65c7-08d92b10e062
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 06:36:13.4781
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: h+JDxLi2iXRbeZOK1d2L+QQ/XgAr+83DoeH+Wc1AbCc4zRP6HJfv0jy2XenWEwUkzDioj/a4ZRkqRw8tEwjDlg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3536

On 08.06.2021 19:05, Andrew Cooper wrote:
> --- a/xen/arch/x86/tsx.c
> +++ b/xen/arch/x86/tsx.c
> @@ -60,6 +60,38 @@ void tsx_init(void)
>               */
>  
>              /*
> +             * Probe for the June 2021 microcode which de-features TSX on
> +             * client parts.  (Note - this is a subset of parts impacted by
> +             * the memory ordering errata.)
> +             *
> +             * RTM_ALWAYS_ABORT enumerates the new functionality, but is also
> +             * read as zero if TSX_FORCE_ABORT.ENABLE_RTM has been set before
> +             * we run.
> +             *
> +             * Undo this behaviour in Xen's view of the world.
> +             */
> +            bool has_rtm_always_abort = cpu_has_rtm_always_abort;
> +
> +            if ( !has_rtm_always_abort )
> +            {
> +                uint64_t val;
> +
> +                rdmsrl(MSR_TSX_FORCE_ABORT, val);
> +
> +                if ( val & TSX_ENABLE_RTM )
> +                    has_rtm_always_abort = true;
> +            }
> +
> +            /*
> +             * Always force RTM_ALWAYS_ABORT to be visible, even if it
> +             * currently is.  If the user explicitly opts to enable TSX, we'll
> +             * set TSX_FORCE_ABORT.ENABLE_RTM and hide RTM_ALWAYS_ABORT from
> +             * the general CPUID scan later.
> +             */
> +            if ( has_rtm_always_abort )
> +                setup_force_cpu_cap(X86_FEATURE_RTM_ALWAYS_ABORT);

I understand the "we'll set" part, but I don't think "we'll hide"
anything explicitly. Aiui it is ...

> @@ -131,9 +170,36 @@ void tsx_init(void)
>          /* Check bottom bit only.  Higher bits are various sentinels. */
>          rtm_disabled = !(opt_tsx & 1);
>  
> -        lo &= ~TSX_FORCE_ABORT_RTM;
> -        if ( rtm_disabled )
> -            lo |= TSX_FORCE_ABORT_RTM;
> +        lo &= ~(TSX_FORCE_ABORT_RTM | TSX_CPUID_CLEAR | TSX_ENABLE_RTM);
> +
> +        if ( cpu_has_rtm_always_abort )
> +        {
> +            /*
> +             * June 2021 microcode, on a client part with TSX de-featured:
> +             *  - There are no mitigations for the TSX memory ordering errata.
> +             *  - Performance counter 3 works.  (I.e. it isn't being used by
> +             *    microcode to work around the memory ordering errata.)
> +             *  - TSX_FORCE_ABORT.FORCE_ABORT_RTM is fixed read1/write-discard.
> +             *  - TSX_FORCE_ABORT.TSX_CPUID_CLEAR can be used to hide the
> +             *    HLE/RTM CPUID bits.
> +             *  - TSX_FORCE_ABORT.ENABLE_RTM may be used to opt in to
> +             *    re-enabling RTM, at the user's own risk.
> +             */
> +            lo |= rtm_disabled ? TSX_CPUID_CLEAR : TSX_ENABLE_RTM;

... the setting of TSX_ENABLE_RTM here which, as a result, causes
RTM_ALWAYS_ABORT to be clear. If that's correct, perhaps the wording
in that earlier comment would better be something like "we'll set
TSX_FORCE_ABORT.ENABLE_RTM and hence cause RTM_ALWAYS_ABORT to be
hidden from the general CPUID scan later"?

If this understanding of mine is correct, then preferably with some
suitable adjustment to the comment wording:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Also Intel recommends this for SDVs only - can we consider such a
setup supported (not to speak of security supported) at all? I guess
you mean to express this by saying "at their own risk" in the
cmdline doc? If so, perhaps mentioning this in SUPPORT.md would be
a good thing nevertheless, notwithstanding the fact that we're not
really good at expressing there how command line option use affects
support status.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:26:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139061.257237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquTH-0006hY-HC; Wed, 09 Jun 2021 09:25:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139061.257237; Wed, 09 Jun 2021 09:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquTH-0006hR-DP; Wed, 09 Jun 2021 09:25:59 +0000
Received: by outflank-mailman (input) for mailman id 139061;
 Wed, 09 Jun 2021 09:25:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquTF-0006hL-PC
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:25:57 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b813fc1e-31e0-4128-9345-d79a4c757b84;
 Wed, 09 Jun 2021 09:25:55 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2104.outbound.protection.outlook.com [104.47.18.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-dAaOE4-tOJqkxfqb_tqnPQ-1; Wed, 09 Jun 2021 11:25:53 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3118.eurprd04.prod.outlook.com (2603:10a6:802:a::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.30; Wed, 9 Jun
 2021 09:25:52 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:25:52 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0074.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1e::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Wed, 9 Jun 2021 09:25:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b813fc1e-31e0-4128-9345-d79a4c757b84
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623230754;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=1UKKWmfXzIXaDaMFirhIX630AbZ84/Bj/Gao4GpCo0U=;
	b=b39h2Y9W3kERxb1kN96uZHWry2wmmtt7g+fBVXXXL8QgycmjB1We3xBXaUfDSiu9ZwYsKc
	U7VEkiwLgkoQ6To8N41oFoHXQ8HzT96a+I1+8FPYnTGIlYi32S8Rha9vGP55LbPAr6wlii
	bTEA2QxbjJzShVln2xwJUOvhzINGYEY=
X-MC-Unique: dAaOE4-tOJqkxfqb_tqnPQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X0uCyvYqmXF0J4qHZqBdPyzH1m+ZUQCZmgleEaHxpERY+935PiRMqhrRqoYgxn4/EkhO0s0siMNb7jEF5c443uxx2DQR5llO0KpLGnIH5UmZ3evT19Hfde54453QjOhkH2vCgV3jJPF9Oagf/RwVNwpgPTYK9nMd4XpcBgdPD1dBCp3/eSicRNpvbliAjZEuJhUuty08hLF+qAiIdU+aNJL+sdPUELEergh9+m0V2oQXNWC3/oDN1cBiapPpccQXrR+Ov+gZSoIvZ3Tv2HblspnXXSp9s/+DR5LS36ycfAVC5xNkIjSJGAI4yxkO58Z20wikwMBUhP/AQmzCS5NHsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1UKKWmfXzIXaDaMFirhIX630AbZ84/Bj/Gao4GpCo0U=;
 b=UGAp8p9pHluo63lkw30wZmKznhgziIaUiO4B/yWqYl2jaQvJatzsgBX7iM17QbE5t5vdCa5x3BlQrWx0ZPwP8VgMwuyVU0UKjhH2kLGPnrJCOCmGSB0guAix5Y2H5++aDUVwYKfKNqDbp4U09SVxpRY/s/isua8LwDqnMrQv0XVkjyW5KskQwGK+12FpmFk7lHSZwkWnrJsnEtNEZlBJ4FGAyH8tJa0iEDtCab8Sc0tMplTpzIObRIOKnjeha8L5q0X99JrOB/xAJkFEw83UGbG/0iyAo1XW0lk5FhHay1tyDO/9EOOo2Abwd80XdVdXmVxkEDSZ4MYtf4XAqI0TiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/9] IOMMU: XSA-373 follow-on
Message-ID: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Date: Wed, 9 Jun 2021 11:25:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0074.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e18219f6-e490-40f1-3a97-08d92b289331
X-MS-TrafficTypeDiagnostic: VI1PR04MB3118:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB3118F80F6EC8A0432EEC5BB0B3369@VI1PR04MB3118.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e18219f6-e490-40f1-3a97-08d92b289331
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 09:25:51.9118
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6m3KFYPI3MsUJGpbOpCU87LPGAEUL4hicN8b7JErFSpEfEPJUx6OmX5KhtasOPh5zFFxXrYJSz1IxAf62cPjfw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3118

A number of further adjustments were left out of the XSA, as they are
not a security concern (or no longer are, in some of the cases, with
the changes put in place there). This is that collection, which may
look a little random in what it contains.

1: AMD/IOMMU: redo awaiting of command completion
2: AMD/IOMMU: re-work locking around sending of commands
3: VT-d: undo device mappings upon error
4: VT-d: adjust domid map updating when unmapping context
5: VT-d: clear_fault_bits() should clear all fault bits
6: VT-d: don't lose errors when flushing TLBs on multiple IOMMUs
7: VT-d: centralize mapping of QI entries
8: VT-d: drop/move a few QI related constants
9: IOMMU/PCI: don't let domain cleanup continue when device de-assignment failed

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:26:55 2021
Subject: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
Date: Wed, 9 Jun 2021 11:26:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>

The present abuse of the completion interrupt not only stands in the
way of, down the road, using it for its actual purpose, but also
requires holding the IOMMU lock while waiting for command completion,
limiting parallelism and keeping interrupts off for non-negligible
periods of time. Have the IOMMU do an ordinary memory write instead of
signaling an otherwise disabled interrupt (by merely updating a status
register bit).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -20,6 +20,9 @@
 #include "iommu.h"
 #include "../ats.h"
 
+#define CMD_COMPLETION_INIT 0
+#define CMD_COMPLETION_DONE 1
+
 static void send_iommu_command(struct amd_iommu *iommu,
                                const uint32_t cmd[4])
 {
@@ -49,28 +52,31 @@ static void send_iommu_command(struct am
 static void flush_command_buffer(struct amd_iommu *iommu,
                                  unsigned int timeout_base)
 {
+    static DEFINE_PER_CPU(uint64_t, poll_slot);
+    uint64_t *this_poll_slot = &this_cpu(poll_slot);
+    paddr_t addr = virt_to_maddr(this_poll_slot);
     uint32_t cmd[4];
     s_time_t start, timeout;
     static unsigned int __read_mostly threshold = 1;
 
-    /* RW1C 'ComWaitInt' in status register */
-    writel(IOMMU_STATUS_COMP_WAIT_INT,
-           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
-
-    /* send an empty COMPLETION_WAIT command to flush command buffer */
-    cmd[3] = cmd[2] = 0;
-    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
+    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
+
+    /* send a COMPLETION_WAIT command to flush command buffer */
+    cmd[0] = addr;
+    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
+                         IOMMU_COMP_WAIT_S_FLAG_MASK,
+                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);
+    cmd[1] = addr >> 32;
+    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
                          IOMMU_CMD_OPCODE_MASK,
                          IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
-    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
-                         IOMMU_COMP_WAIT_I_FLAG_MASK,
-                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
+    cmd[2] = CMD_COMPLETION_DONE;
+    cmd[3] = 0;
     send_iommu_command(iommu, cmd);
 
     start = NOW();
     timeout = start + (timeout_base ?: 100) * MILLISECS(threshold);
-    while ( !(readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET) &
-              IOMMU_STATUS_COMP_WAIT_INT) )
+    while ( ACCESS_ONCE(*this_poll_slot) != CMD_COMPLETION_DONE )
     {
         if ( timeout && NOW() > timeout )
         {



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:27:24 2021
Subject: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
Date: Wed, 9 Jun 2021 11:27:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>

It appears unhelpful to me for flush_command_buffer() to block all
progress elsewhere for the given IOMMU by holding its lock while
waiting for command completion. Unless the lock is already held,
acquire it in send_iommu_command(). Release it in all cases in
flush_command_buffer(), before actually starting the wait loop.

Some of the involved functions did/do get called with the lock already
held: For amd_iommu_flush_intremap() we can simply move the locking
inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
the lock now gets dropped in the course of the function's operation.

Where function headers get touched anyway, also adjust the types used,
to bring them better in line with our coding style and, where
applicable, with the functions' callers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v2: Add comments. Adjust parameter types when function headers get
    touched anyway.

--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -253,9 +253,10 @@ void amd_iommu_flush_pages(struct domain
                            unsigned int order);
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            uint64_t gaddr, unsigned int order);
-void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf);
+void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf,
+                            unsigned long flags);
 void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf);
-void amd_iommu_flush_all_caches(struct amd_iommu *iommu);
+void amd_iommu_flush_all_caches(struct amd_iommu *iommu, unsigned long flags);
 
 /* find iommu for bdf */
 struct amd_iommu *find_iommu_for_device(int seg, int bdf);
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -23,11 +23,20 @@
 #define CMD_COMPLETION_INIT 0
 #define CMD_COMPLETION_DONE 1
 
+/*
+ * When @flags is non-NULL, the function will acquire the IOMMU lock,
+ * transferring lock ownership to the caller.  When @flags is NULL,
+ * the lock is assumed to be already held.
+ */
 static void send_iommu_command(struct amd_iommu *iommu,
-                               const uint32_t cmd[4])
+                               const uint32_t cmd[4],
+                               unsigned long *flags)
 {
     uint32_t tail;
 
+    if ( flags )
+        spin_lock_irqsave(&iommu->lock, *flags);
+
     tail = iommu->cmd_buffer.tail + sizeof(cmd_entry_t);
     if ( tail == iommu->cmd_buffer.size )
         tail = 0;
@@ -49,8 +58,13 @@ static void send_iommu_command(struct am
     writel(tail, iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
 }
 
+/*
+ * Callers need to hold the IOMMU lock, which will be released here before
+ * entering the loop to await command completion.
+ */
 static void flush_command_buffer(struct amd_iommu *iommu,
-                                 unsigned int timeout_base)
+                                 unsigned int timeout_base,
+                                 unsigned long flags)
 {
     static DEFINE_PER_CPU(uint64_t, poll_slot);
     uint64_t *this_poll_slot = &this_cpu(poll_slot);
@@ -72,7 +86,9 @@ static void flush_command_buffer(struct
                          IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
     cmd[2] = CMD_COMPLETION_DONE;
     cmd[3] = 0;
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, NULL);
+
+    spin_unlock_irqrestore(&iommu->lock, flags);
 
     start = NOW();
     timeout = start + (timeout_base ?: 100) * MILLISECS(threshold);
@@ -99,12 +115,19 @@ static void flush_command_buffer(struct
 }
 
 /* Build low level iommu command messages */
-static void invalidate_iommu_pages(struct amd_iommu *iommu,
-                                   u64 io_addr, u16 domain_id, u16 order)
+
+/*
+ * The function will acquire the IOMMU lock, via its call to
+ * send_iommu_command(), and then transfer lock ownership to the caller.
+ */
+static unsigned long invalidate_iommu_pages(struct amd_iommu *iommu,
+                                            daddr_t io_addr, domid_t domain_id,
+                                            unsigned int order)
 {
     u64 addr_lo, addr_hi;
     u32 cmd[4], entry;
     int sflag = 0, pde = 0;
+    unsigned long flags;
 
     ASSERT ( order == 0 || order == 9 || order == 18 );
 
@@ -152,16 +175,27 @@ static void invalidate_iommu_pages(struc
     cmd[3] = entry;
 
     cmd[0] = 0;
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, &flags);
+
+    return flags;
 }
 
-static void invalidate_iotlb_pages(struct amd_iommu *iommu,
-                                   u16 maxpend, u32 pasid, u16 queueid,
-                                   u64 io_addr, u16 dev_id, u16 order)
+/*
+ * The function will acquire the IOMMU lock, via its call to
+ * send_iommu_command(), and then transfer lock ownership to the caller.
+ */
+static unsigned long invalidate_iotlb_pages(struct amd_iommu *iommu,
+                                            unsigned int maxpend,
+                                            unsigned int pasid,
+                                            unsigned int queueid,
+                                            daddr_t io_addr,
+                                            unsigned int dev_id,
+                                            unsigned int order)
 {
     u64 addr_lo, addr_hi;
     u32 cmd[4], entry;
     int sflag = 0;
+    unsigned long flags;
 
     ASSERT ( order == 0 || order == 9 || order == 18 );
 
@@ -222,9 +256,12 @@ static void invalidate_iotlb_pages(struc
                          IOMMU_INV_IOTLB_PAGES_ADDR_HIGH_SHIFT, &entry);
     cmd[3] = entry;
 
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, &flags);
+
+    return flags;
 }
 
+/* Callers need to hold the IOMMU lock. */
 static void invalidate_dev_table_entry(struct amd_iommu *iommu,
                                        u16 device_id)
 {
@@ -241,12 +278,18 @@ static void invalidate_dev_table_entry(s
                          &entry);
     cmd[1] = entry;
 
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, NULL);
 }
 
-static void invalidate_interrupt_table(struct amd_iommu *iommu, u16 device_id)
+/*
+ * The function will acquire the IOMMU lock, via its call to
+ * send_iommu_command(), and then transfer lock ownership to the caller.
+ */
+static unsigned long invalidate_interrupt_table(struct amd_iommu *iommu,
+                                                uint16_t device_id)
 {
     u32 cmd[4], entry;
+    unsigned long flags;
 
     cmd[3] = cmd[2] = 0;
     set_field_in_reg_u32(device_id, 0,
@@ -257,9 +300,12 @@ static void invalidate_interrupt_table(s
                          IOMMU_CMD_OPCODE_MASK, IOMMU_CMD_OPCODE_SHIFT,
                          &entry);
     cmd[1] = entry;
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, &flags);
+
+    return flags;
 }
 
+/* Callers need to hold the IOMMU lock. */
 static void invalidate_iommu_all(struct amd_iommu *iommu)
 {
     u32 cmd[4], entry;
@@ -271,7 +317,7 @@ static void invalidate_iommu_all(struct
                          &entry);
     cmd[1] = entry;
 
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, NULL);
 }
 
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
@@ -304,10 +350,9 @@ void amd_iommu_flush_iotlb(u8 devfn, con
     maxpend = pdev->ats.queue_depth & 0xff;
 
     /* send INVALIDATE_IOTLB_PAGES command */
-    spin_lock_irqsave(&iommu->lock, flags);
-    invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr, req_id, order);
-    flush_command_buffer(iommu, iommu_dev_iotlb_timeout);
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    flags = invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr,
+                                   req_id, order);
+    flush_command_buffer(iommu, iommu_dev_iotlb_timeout, flags);
 }
 
 static void amd_iommu_flush_all_iotlbs(struct domain *d, daddr_t daddr,
@@ -336,15 +381,12 @@ static void _amd_iommu_flush_pages(struc
 {
     unsigned long flags;
     struct amd_iommu *iommu;
-    unsigned int dom_id = d->domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
     {
-        spin_lock_irqsave(&iommu->lock, flags);
-        invalidate_iommu_pages(iommu, daddr, dom_id, order);
-        flush_command_buffer(iommu, 0);
-        spin_unlock_irqrestore(&iommu->lock, flags);
+        flags = invalidate_iommu_pages(iommu, daddr, d->domain_id, order);
+        flush_command_buffer(iommu, 0, flags);
     }
 
     if ( ats_enabled )
@@ -362,39 +404,44 @@ void amd_iommu_flush_pages(struct domain
     _amd_iommu_flush_pages(d, __dfn_to_daddr(dfn), order);
 }
 
-void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf)
+/*
+ * Callers need to hold the IOMMU lock, which will be released here by
+ * calling flush_command_buffer().
+ */
+void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf,
+                            unsigned long flags)
 {
     ASSERT( spin_is_locked(&iommu->lock) );
 
     invalidate_dev_table_entry(iommu, bdf);
-    flush_command_buffer(iommu, 0);
+    flush_command_buffer(iommu, 0, flags);
 }
 
 void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf)
 {
-    ASSERT( spin_is_locked(&iommu->lock) );
+    unsigned long flags;
 
-    invalidate_interrupt_table(iommu, bdf);
-    flush_command_buffer(iommu, 0);
+    flags = invalidate_interrupt_table(iommu, bdf);
+    flush_command_buffer(iommu, 0, flags);
 }
 
-void amd_iommu_flush_all_caches(struct amd_iommu *iommu)
+/*
+ * Callers need to hold the IOMMU lock, which will be released here by
+ * calling flush_command_buffer().
+ */
+void amd_iommu_flush_all_caches(struct amd_iommu *iommu, unsigned long flags)
 {
     ASSERT( spin_is_locked(&iommu->lock) );
 
     invalidate_iommu_all(iommu);
-    flush_command_buffer(iommu, 0);
+    flush_command_buffer(iommu, 0, flags);
 }
 
 void amd_iommu_send_guest_cmd(struct amd_iommu *iommu, u32 cmd[])
 {
     unsigned long flags;
 
-    spin_lock_irqsave(&iommu->lock, flags);
-
-    send_iommu_command(iommu, cmd);
+    send_iommu_command(iommu, cmd, &flags);
     /* TBD: Timeout selection may require peeking into cmd[]. */
-    flush_command_buffer(iommu, 0);
-
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    flush_command_buffer(iommu, 0, flags);
 }
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -449,8 +449,7 @@ static int do_invalidate_dte(struct doma
     spin_lock_irqsave(&iommu->lock, flags);
     dte_set_gcr3_table(mdte, hdom_id, gcr3_mfn << PAGE_SHIFT, gv, glx);
 
-    amd_iommu_flush_device(iommu, req_id);
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    amd_iommu_flush_device(iommu, req_id, flags);
 
     return 0;
 }
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -921,13 +921,13 @@ static void enable_iommu(struct amd_iomm
 
     set_iommu_translation_control(iommu, IOMMU_CONTROL_ENABLED);
 
-    if ( iommu->features.flds.ia_sup )
-        amd_iommu_flush_all_caches(iommu);
-
     iommu->enabled = 1;
 
+    if ( iommu->features.flds.ia_sup )
+        amd_iommu_flush_all_caches(iommu, flags);
+    else
  out:
-    spin_unlock_irqrestore(&iommu->lock, flags);
+        spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
 static void disable_iommu(struct amd_iommu *iommu)
@@ -1554,9 +1554,8 @@ static int _invalidate_all_devices(
         if ( iommu )
         {
             spin_lock_irqsave(&iommu->lock, flags);
-            amd_iommu_flush_device(iommu, req_id);
+            amd_iommu_flush_device(iommu, req_id, flags);
             amd_iommu_flush_intremap(iommu, req_id);
-            spin_unlock_irqrestore(&iommu->lock, flags);
         }
     }
 
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -310,9 +310,7 @@ static int update_intremap_entry_from_io
         entry.ptr32->flds.remap_en = false;
         spin_unlock(lock);
 
-        spin_lock(&iommu->lock);
         amd_iommu_flush_intremap(iommu, req_id);
-        spin_unlock(&iommu->lock);
 
         spin_lock(lock);
     }
@@ -527,11 +525,9 @@ static int update_intremap_entry_from_ms
 
         if ( iommu->enabled )
         {
-            spin_lock_irqsave(&iommu->lock, flags);
             amd_iommu_flush_intremap(iommu, req_id);
             if ( alias_id != req_id )
                 amd_iommu_flush_intremap(iommu, alias_id);
-            spin_unlock_irqrestore(&iommu->lock, flags);
         }
 
         return 0;
@@ -567,11 +563,9 @@ static int update_intremap_entry_from_ms
         entry.ptr32->flds.remap_en = false;
         spin_unlock(lock);
 
-        spin_lock(&iommu->lock);
         amd_iommu_flush_intremap(iommu, req_id);
         if ( alias_id != req_id )
             amd_iommu_flush_intremap(iommu, alias_id);
-        spin_unlock(&iommu->lock);
 
         spin_lock(lock);
     }
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -129,7 +129,7 @@ static void amd_iommu_setup_domain_devic
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
             dte->i = ats_enabled;
 
-        amd_iommu_flush_device(iommu, req_id);
+        amd_iommu_flush_device(iommu, req_id, flags);
 
         AMD_IOMMU_DEBUG("Setup I/O page table: device id = %#x, type = %#x, "
                         "root table = %#"PRIx64", "
@@ -138,8 +138,8 @@ static void amd_iommu_setup_domain_devic
                         page_to_maddr(hd->arch.amd.root_table),
                         domain->domain_id, hd->arch.amd.paging_mode);
     }
-
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    else
+        spin_unlock_irqrestore(&iommu->lock, flags);
 
     ASSERT(pcidevs_locked());
 
@@ -307,14 +307,15 @@ static void amd_iommu_disable_domain_dev
         smp_wmb();
         dte->v = true;
 
-        amd_iommu_flush_device(iommu, req_id);
+        amd_iommu_flush_device(iommu, req_id, flags);
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
                         req_id,  domain->domain_id,
                         dom_iommu(domain)->arch.amd.paging_mode);
     }
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    else
+        spin_unlock_irqrestore(&iommu->lock, flags);
 
     ASSERT(pcidevs_locked());
 
@@ -455,9 +456,7 @@ static int amd_iommu_add_device(u8 devfn
             iommu->dev_table.buffer + (bdf * IOMMU_DEV_TABLE_ENTRY_SIZE),
             ivrs_mappings[bdf].intremap_table, iommu, iommu_intremap);
 
-        amd_iommu_flush_device(iommu, bdf);
-
-        spin_unlock_irqrestore(&iommu->lock, flags);
+        amd_iommu_flush_device(iommu, bdf, flags);
     }
 
     amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:27:50 2021
Subject: [PATCH 3/9] VT-d: undo device mappings upon error
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <d6370703-97e3-2571-5ae3-8a5ec11e9bcd@suse.com>
Date: Wed, 9 Jun 2021 11:27:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When
 - flushes (supposedly no longer possible after XSA-373),
 - secondary mappings for legacy PCI devices behind bridges,
 - secondary mappings for chipset quirks, or
 - find_upstream_bridge() invocations
fail, any successfully established device mappings should not be left
in place.

Further, when (parts of) unmapping fail, simply returning an error is
typically not enough. Crash the domain instead in such cases, arranging
for domain cleanup to continue in a best-effort manner despite such
failures.

Finally, make domain_context_unmap()'s error behavior consistent in the
legacy PCI device case: don't bail from the function in one special
case, but always just exit the switch statement.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1442,9 +1442,15 @@ int domain_context_mapping_one(
     if ( !seg && !rc )
         rc = me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
+    if ( rc )
+        domain_context_unmap_one(domain, iommu, bus, devfn);
+
     return rc;
 }
 
+static int domain_context_unmap(struct domain *d, uint8_t devfn,
+                                struct pci_dev *pdev);
+
 static int domain_context_mapping(struct domain *domain, u8 devfn,
                                   struct pci_dev *pdev)
 {
@@ -1505,16 +1511,21 @@ static int domain_context_mapping(struct
         if ( ret )
             break;
 
-        if ( find_upstream_bridge(seg, &bus, &devfn, &secbus) < 1 )
-            break;
+        if ( (ret = find_upstream_bridge(seg, &bus, &devfn, &secbus)) < 1 )
+        {
+            if ( !ret )
+                break;
+            ret = -ENXIO;
+        }
 
         /*
          * Mapping a bridge should, if anything, pass the struct pci_dev of
          * that bridge. Since bridges don't normally get assigned to guests,
          * their owner would be the wrong one. Pass NULL instead.
          */
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
-                                         NULL);
+        if ( ret >= 0 )
+            ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
+                                             NULL);
 
         /*
          * Devices behind PCIe-to-PCI/PCIx bridge may generate different
@@ -1531,6 +1542,9 @@ static int domain_context_mapping(struct
             ret = domain_context_mapping_one(domain, drhd->iommu, secbus, 0,
                                              NULL);
 
+        if ( ret )
+            domain_context_unmap(domain, devfn, pdev);
+
         break;
 
     default:
@@ -1609,6 +1623,19 @@ int domain_context_unmap_one(
     if ( !iommu->drhd->segment && !rc )
         rc = me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
+    if ( rc && !is_hardware_domain(domain) && domain != dom_io )
+    {
+        if ( domain->is_dying )
+        {
+            printk(XENLOG_ERR "%pd: error %d unmapping %04x:%02x:%02x.%u\n",
+                   domain, rc, iommu->drhd->segment, bus,
+                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            rc = 0; /* Make upper layers continue in a best effort manner. */
+        }
+        else
+            domain_crash(domain);
+    }
+
     return rc;
 }
 
@@ -1661,17 +1688,29 @@ static int domain_context_unmap(struct d
 
         tmp_bus = bus;
         tmp_devfn = devfn;
-        if ( find_upstream_bridge(seg, &tmp_bus, &tmp_devfn, &secbus) < 1 )
+        if ( (ret = find_upstream_bridge(seg, &tmp_bus, &tmp_devfn,
+                                         &secbus)) < 1 )
+        {
+            if ( ret )
+            {
+                ret = -ENXIO;
+                if ( !domain->is_dying &&
+                     !is_hardware_domain(domain) && domain != dom_io )
+                {
+                    domain_crash(domain);
+                    /* Make upper layers continue in a best effort manner. */
+                    ret = 0;
+                }
+            }
             break;
+        }
 
         /* PCIe to PCI/PCIx bridge */
         if ( pdev_type(seg, tmp_bus, tmp_devfn) == DEV_TYPE_PCIe2PCI_BRIDGE )
         {
             ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);
-            if ( ret )
-                return ret;
-
-            ret = domain_context_unmap_one(domain, iommu, secbus, 0);
+            if ( !ret )
+                ret = domain_context_unmap_one(domain, iommu, secbus, 0);
         }
         else /* Legacy PCI bridge */
             ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:28:17 2021
Subject: [PATCH 4/9] VT-d: adjust domid map updating when unmapping context
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <be453d69-dfa8-1f75-b30f-918229c73d02@suse.com>
Date: Wed, 9 Jun 2021 11:28:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When an earlier error occurred, cleaning up the domid mapping data is
wrong, as references are likely to still exist. The only exception is
when the actual unmapping worked but some flush failed (supposedly
impossible after XSA-373). The guest will be crashed in that case
though, so add fallback cleanup to domain destruction to cover it. This
in turn makes it desirable to silence the dprintk() in
domain_iommu_domid().

Note that no error is returned anymore when the lookup fails: in the
common case, lookup failure would already have caused
domain_context_unmap_one() to fail. More generally, it doesn't look
right to fail domain_context_unmap() only when the failing lookup is for
the last device, while any earlier unmap was otherwise successful.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -80,9 +80,11 @@ static int domain_iommu_domid(struct dom
         i = find_next_bit(iommu->domid_bitmap, nr_dom, i+1);
     }
 
-    dprintk(XENLOG_ERR VTDPREFIX,
-            "Cannot get valid iommu domid: domid=%d iommu->index=%d\n",
-            d->domain_id, iommu->index);
+    if ( !d->is_dying )
+        dprintk(XENLOG_ERR VTDPREFIX,
+                "Cannot get valid iommu %u domid: %pd\n",
+                iommu->index, d);
+
     return -1;
 }
 
@@ -147,6 +149,17 @@ static int context_get_domain_id(struct
     return domid;
 }
 
+static void cleanup_domid_map(struct domain *domain, struct vtd_iommu *iommu)
+{
+    int iommu_domid = domain_iommu_domid(domain, iommu);
+
+    if ( iommu_domid >= 0 )
+    {
+        clear_bit(iommu_domid, iommu->domid_bitmap);
+        iommu->domid_map[iommu_domid] = 0;
+    }
+}
+
 static void sync_cache(const void *addr, unsigned int size)
 {
     static unsigned long clflush_size = 0;
@@ -1724,6 +1737,9 @@ static int domain_context_unmap(struct d
         goto out;
     }
 
+    if ( ret )
+        goto out;
+
     /*
      * if no other devices under the same iommu owned by this domain,
      * clear iommu in iommu_bitmap and clear domain_id in domid_bitmp
@@ -1743,19 +1759,8 @@ static int domain_context_unmap(struct d
 
     if ( found == 0 )
     {
-        int iommu_domid;
-
         clear_bit(iommu->index, &dom_iommu(domain)->arch.vtd.iommu_bitmap);
-
-        iommu_domid = domain_iommu_domid(domain, iommu);
-        if ( iommu_domid == -1 )
-        {
-            ret = -EINVAL;
-            goto out;
-        }
-
-        clear_bit(iommu_domid, iommu->domid_bitmap);
-        iommu->domid_map[iommu_domid] = 0;
+        cleanup_domid_map(domain, iommu);
     }
 
 out:
@@ -1775,6 +1780,7 @@ static void iommu_domain_teardown(struct
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct mapped_rmrr *mrmrr, *tmp;
+    const struct acpi_drhd_unit *drhd;
 
     if ( list_empty(&acpi_drhd_units) )
         return;
@@ -1786,6 +1792,9 @@ static void iommu_domain_teardown(struct
     }
 
     ASSERT(!hd->arch.vtd.pgd_maddr);
+
+    for_each_drhd_unit ( drhd )
+        cleanup_domid_map(d, drhd->iommu);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:28:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139091.257300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquW1-0001IR-8J; Wed, 09 Jun 2021 09:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139091.257300; Wed, 09 Jun 2021 09:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquW1-0001II-5C; Wed, 09 Jun 2021 09:28:49 +0000
Received: by outflank-mailman (input) for mailman id 139091;
 Wed, 09 Jun 2021 09:28:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquW0-0001Hq-4O
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:28:48 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6efa68c-0834-4d93-98d6-0b6d94b75f3b;
 Wed, 09 Jun 2021 09:28:47 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2050.outbound.protection.outlook.com [104.47.8.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-29-bvIlS6VoPZi8lL9dURHfSA-1; Wed, 09 Jun 2021 11:28:45 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2335.eurprd04.prod.outlook.com (2603:10a6:800:2e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Wed, 9 Jun
 2021 09:28:44 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:28:44 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0004.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:52::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Wed, 9 Jun 2021 09:28:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6efa68c-0834-4d93-98d6-0b6d94b75f3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623230926;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=txVdk0hVPwWU6U7vm39WYjxerByLtrHY/Ueo8afMxSA=;
	b=gYH1iwYocVb6ckhUnMKF8HR0/TCqsX0aoYEOcE4WabtBTdwey4I8RAPjCL3dh+nAN2Hhqf
	5GJ7CqWLCl+/vAO+bAenKBdOLFvutEe9svevQ5ksvzR1jWaa5zlaUvkjShmM8Fdvru9+Si
	R+51xps3X1usXjtJvMjkJsX+9EXKzAE=
X-MC-Unique: bvIlS6VoPZi8lL9dURHfSA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BalcyRP884+twz+aBkZDTFM82Kvyjxi04FtYerzBBokUf/wzgZA6KQCL7f2FOe+zkebvQ2debhJiPA/kQgVM82O+QaNFMQUbHRJ/+uskSJ2M1W/Ig8J/RTTsLIncu75x3tW5YO/sxuhGClm2q+uKAcncPwwe6kIpxVw8+6hqL3doAeeDOwfGYvKcuO+/+vI9jVgi/pihgPjdWe6hb8uJ48crTGskNgyNNeRPekCkh/nsFkbFk9QJaLAC84T95QJgTMA3x7V5SH0c+Z7j9oSArEgC0t+ixabtHB81b2EnUmYbtAg4fwVEGEjHgkXR1cTKbfAIsIzt6e8uEfV+npk8dQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=txVdk0hVPwWU6U7vm39WYjxerByLtrHY/Ueo8afMxSA=;
 b=MkleqzeUjE2JJiIKTstSyHNy4+I0hczL9XH9MQ6j5jlyIUuGly5BJaaWOXtk30F7p6uiRbhoM5CfrS4a5nt8R50E4IkSNANdGOkFGP6Rbb3xyAPUQ8UyFwkPhAJtm4xmYEvadw5p6JX4tr8e3c6w2NJIjcH/5/P6ZvDd/SsknjGDtHc6eOlX42MDGCtV3tKq2+7rVmoRZiP1b+vW1pipIjBoGjy4AN/Z+HcNweGlj1J4iTuLQ5aBp3VNj+2p+izGclAS+HhRdQLwcaS/nOeGPMeTcfE0DptgtS/A9SCOFZ1pE8BwfmdaIO6M5GgNxwNpDT+NFgHYrHqEJiM/vubhvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 5/9] VT-d: clear_fault_bits() should clear all fault bits
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <fbb7664b-d55c-f3ce-01b2-e4e379e3780b@suse.com>
Date: Wed, 9 Jun 2021 11:28:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0004.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:52::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 62062e91-acae-4b28-1f80-08d92b28f9e5
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2335:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2335C8685908C22588889D5EB3369@VI1PR0401MB2335.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1388;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6rMtZJP2nkCTW6Dkar5gUUaKr4vGyp0pIPzA3CxA40dyVW+P0c017KwvOpavyZlC83KqZVKGCcEwW6ER/nGmte7pBXD7xbHVqZldyIJasZuKUjZjKnu/PLyNCoxqTVh6ryoXWQZuuPrKkB/2QbQxUggOHFhxA0H6iGtw4KJY1+MHyO2/kAXpVWUp+Dt/uJECkTCzXtU96c7zkmAHvCJM2E//mBkQTNOTIkjfSZ6FEUJG51vyzrVgqR+Ki+6EUlluAVOgJKbGu7TOGbxEwa8rwYHY+xkIjDuA3Tu9lahcm8AONTBtsh9ZRvJwgn+ewtUMVoINyb22Qokzhe8RW+PzaU9mWB7pIEmfT3+/F7sfg3HrrYo7UzqzQSEaoh/OAuBLa3gIXG5NC4wMc6CzMD1c4XFLZwMvtgIcwSczBCfcmd7begJcZTnvrV3X1izgZ7U1V7Pms9rQXdzJXgTqQ10W6XIS7QavjlGEOTtpiEg4uSSO8FvPpA5a4T3JJZmV2wAo0Kq3b3iWtQZ6LnVTXhc/NHJ5brYG6dL95RuWb3flan3vRs276v+UEIlA1/4Ux61QBgEU/poxopnPNBgfQ1SRxjQudVImW62v6IbHXRGiBEanx+4Y2p5cs6dzvXq2EvBXO4Kiw2tNJsQWLqrnmRhbF5+0oji0V131qgLNU6LnKkVOY0jrVjltUYXRM3aFldS5
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(396003)(376002)(346002)(136003)(39860400002)(6486002)(6916009)(83380400001)(31686004)(478600001)(956004)(2616005)(36756003)(2906002)(8676002)(4326008)(66946007)(8936002)(66476007)(5660300002)(38100700002)(66556008)(86362001)(16576012)(26005)(316002)(16526019)(31696002)(186003)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?c3Qrb1BabEFDQmxuZnd3TXE5UGN2Q2h4azd3RTRIck5mbzhaUFpoUnhnQkpt?=
 =?utf-8?B?MWFYckVVUHRCbHJoL3RQbWZVa3N3NjNDSm56OFdDVEE5eDhqam9PdXdOb3Ev?=
 =?utf-8?B?QTRlY2M1eE1Fa2FwVTJLUDBhZmZmMTI1ZUxnV3A4MzJHSDkwbEoxbkV5bVZI?=
 =?utf-8?B?Y1pRRXhjb25YUEFEQTlLMDhqZjdiLy9xZ0FxSWJDVE5XZm1NWWlSSUxzYURn?=
 =?utf-8?B?a2FHSXp0N2RxYlhiSXUyVGYyZEl0WnFERnM5a1BIV2Z0QS83Z05JS25PMDlU?=
 =?utf-8?B?NnMzdVFNL1Nkblc4QWZoWWVuY0tNMU9EbllkYTZpUHk5c3o5YkVLUjZCNlh5?=
 =?utf-8?B?ZlFkUDJZd3N2M3lMZHFYTklYRXQ3SVBiZWFxRklBQXVjWU96ZktuNmNUT3Ur?=
 =?utf-8?B?TkFUcVVWSTVZakdmVnFmMVBUS0xjaXBRQ0l3d2lkL2krYjhTNWlFTjBRSGVz?=
 =?utf-8?B?eEg4ZVg5ektKV0ZtYlpLUndJS2g1ZmJPT0p4RkdpWlBnWDlLb1Y0QWpjallQ?=
 =?utf-8?B?Q2FJU0xML0hNRjAzT0VsUFg4cFFQQXlMaXU1RFNQbUp4TnBVcVorVEpNWEpF?=
 =?utf-8?B?aUVKRkNERktNQXZuQlZQYWZTVnNHd1BQS1RTcERQTkJCWE1jV3RKM3Y4WWZR?=
 =?utf-8?B?ZlFOSEU5NjVEdytoRWNBcGpReTB5Szl4OU5KZndMOTRKN215a1NsSTZqZGox?=
 =?utf-8?B?UjJZVEQrbUQ3RnZDbGlFdVdXbVQvRGIwQW5kZzBmci9ObExNd1Bod0Z4WDJW?=
 =?utf-8?B?TG5vZUk0U1NlbDROR2RCZ2FwME5TbTkyaGROUjlrWEJZem4rdjJXNytsMmti?=
 =?utf-8?B?MDNIZmtzcFRhVEViRHM5ZVhNWW4yWHdqVW56b0pJM0ZaRGdHbXRKVERnMnNJ?=
 =?utf-8?B?TUVVc3UrZElZbUR4NkJNWWlBY0dtWHB4QmRoWGsyQkhRZEFhdHhhVGRyWnpF?=
 =?utf-8?B?eG1ENERtWFNKSUJWRmJmTlpKSXNGd1RkRkViRDZXbFphcUpGRHhac2c1Um8r?=
 =?utf-8?B?WE5iM1JCVjMwM1ljT0VUdk5YaFF0T2xjVDRPMjluU0p3ZU9OdXoyYTlPTHg4?=
 =?utf-8?B?ZDJZMXVBU0Frdlh4WlMyQ0tVRWNFZGJNbkpBNklDQzkraXFQTnFOQ09XRit4?=
 =?utf-8?B?SFpBazVBeXRxd3B5VXQ5bUdCTjJMSTZKZWdGTW9IeDBWZDZuU201bW5TSjU5?=
 =?utf-8?B?cXB3M2tCTXhWbFBzWS8yd1pEVWN4SzBoSzZwY0F4R0QyWlpFdmlwYnFtMCtM?=
 =?utf-8?B?VC93UTRPaUZaMTlvTGlwaTRMSHJ6YktnZGkyMzcrODZJWGY2RkZDK2pVQUlS?=
 =?utf-8?B?eVg3Ull3b0lidFRtRUxnYkVrQ29SallIeitMVUhHY04rdzVZMHg2Q1N6M0Nu?=
 =?utf-8?B?MFZPdmtVNUxkWU1NSUNnTnpYU1E2SXFOd0V3ZmJoenFuLyszRjFuamRpcStt?=
 =?utf-8?B?RVR1eklFMW9rcmtVdzZiby8yRDhJU0tRYzAvUWxmclRBMzI5OHBmZ3c2SFdl?=
 =?utf-8?B?dndOS05SVFV4aE1CNFBiaWtDam50ZTQrcUxhdSttc3NweURZUHpqSjFZdDly?=
 =?utf-8?B?c0xybkRXSHBVSisyWXhqNytLRlA1STQvUE1pOHM0eGt3WEtqK0k3TytENk1J?=
 =?utf-8?B?TDhScmFQWVBSSnBjRnl1MW10L2FDN2Q4ZFRTWi9keU5pRFNRWklDbkRXRXMw?=
 =?utf-8?B?dWVRU1gvRHo2Q1hhV1l5MHZjaGRyR1dDWXhISC84UW5KSUVsbGpxZW0zd3cw?=
 =?utf-8?Q?ogzZYuE+X+1Ro/BuDQgQzdtguDOZ/jHPFWI3ojO?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 62062e91-acae-4b28-1f80-08d92b28f9e5
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 09:28:44.2184
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bhsu9ev4SkKUXklHtRkbO91+z63MR0SjE73zBq+wUtS55yaLjUnwapLKbffEGW63c3XOQEzzo9+Q78tNbdPT1w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2335

If there is any way for one fault to be left set in the recording
registers, there's no reason there couldn't also be multiple ones. If
PPF is set (being the OR of all F fields), simply loop over the entire
range of fault recording registers, clearing F everywhere.

Since PPF is an r/o bit, also remove it from DMA_FSTS_FAULTS (arguably
the constant's name is ambiguous as well).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2094,13 +2094,23 @@ static int __hwdom_init setup_hwdom_devi
 
 void clear_fault_bits(struct vtd_iommu *iommu)
 {
-    u64 val;
     unsigned long flags;
 
     spin_lock_irqsave(&iommu->register_lock, flags);
-    val = dmar_readq(iommu->reg, cap_fault_reg_offset(iommu->cap) + 8);
-    dmar_writeq(iommu->reg, cap_fault_reg_offset(iommu->cap) + 8, val);
+
+    if ( dmar_readl(iommu->reg, DMAR_FSTS_REG) & DMA_FSTS_PPF )
+    {
+        unsigned int reg = cap_fault_reg_offset(iommu->cap);
+        unsigned int end = reg + cap_num_fault_regs(iommu->cap);
+
+        do {
+            dmar_writel(iommu->reg, reg + 12, DMA_FRCD_F);
+            reg += PRIMARY_FAULT_REG_LEN;
+        } while ( reg < end );
+    }
+
     dmar_writel(iommu->reg, DMAR_FSTS_REG, DMA_FSTS_FAULTS);
+
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 }
 
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -174,9 +174,8 @@
 #define DMA_FSTS_IQE (1u << 4)
 #define DMA_FSTS_ICE (1u << 5)
 #define DMA_FSTS_ITE (1u << 6)
-#define DMA_FSTS_FAULTS (DMA_FSTS_PFO | DMA_FSTS_PPF | DMA_FSTS_AFO | \
-                         DMA_FSTS_APF | DMA_FSTS_IQE | DMA_FSTS_ICE | \
-                         DMA_FSTS_ITE)
+#define DMA_FSTS_FAULTS (DMA_FSTS_PFO | DMA_FSTS_AFO | DMA_FSTS_APF | \
+                         DMA_FSTS_IQE | DMA_FSTS_ICE | DMA_FSTS_ITE)
 #define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
 
 /* FRCD_REG, 32 bits access */



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:29:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139096.257312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquWM-0001sd-IG; Wed, 09 Jun 2021 09:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139096.257312; Wed, 09 Jun 2021 09:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquWM-0001sW-EZ; Wed, 09 Jun 2021 09:29:10 +0000
Received: by outflank-mailman (input) for mailman id 139096;
 Wed, 09 Jun 2021 09:29:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquWL-0001qN-D3
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:29:09 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a0b0085-7e88-4b03-8904-56600271e260;
 Wed, 09 Jun 2021 09:29:08 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-25-O5LBzrL5M6WxY_Bt2DdAcw-1; Wed, 09 Jun 2021 11:29:06 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4845.eurprd04.prod.outlook.com (2603:10a6:803:51::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 09:29:05 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:29:05 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0017.eurprd04.prod.outlook.com (2603:10a6:208:122::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Wed, 9 Jun 2021 09:29:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a0b0085-7e88-4b03-8904-56600271e260
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623230947;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OdIIKv781vcvaB9/e0X5fG6O/EkokaPYCef67sSMkIA=;
	b=SlIl0W7Fv60Q0n3DONFgABQKY8ULsGwD8rTN3mJK6zTN+5yk6CN1Yk2QA0b0nxJCNcJGa4
	87U9sEgC0nN+GcXtspa19iZUGQu3fF+RrAYvZx+ApZ+l/cME/VZ1WTYQGbrLiB8giI3dxt
	ZP5f8aiXxuLjJfCn3vkP/KYFT2e/iOs=
X-MC-Unique: O5LBzrL5M6WxY_Bt2DdAcw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VDPfesniEpPeHlNwJRDkUoJG4vN6INT1kyxXqXEtAxpalMdF9/xwV2rq74v7SHoXFvSDGzO+NajZW66pvl2O7TbUldWfb/S17fgg/cNP1hJKIJ6ads433n3N4kJvk7DNDZVzqft4IZ+BV2PPOzFS/JmIhPAzz0W2Yda9WnWvpSrnx1MrwDnk/0ctUV0HnENLk8H5y5VQ5yU9g59kveAow+BfNEr03BKu2GmZanF1n369qDvc5ZnVQO9UUDq+HkdoVcMSTsUjxGdMWeECpWTM98hraK+4Z2Yc2y/BDcj93RbxBL5YaOgjyXsGDif1THtRM2dnghuFAE6U8n13obI17w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OdIIKv781vcvaB9/e0X5fG6O/EkokaPYCef67sSMkIA=;
 b=GgBX3XSsNOhx1FdUSpzcf1dWzYNzEuIUZdK+KjlmhOvbGLlpmtByq0wDWOVKKtaQONijhAwFgU0bsOwTFtN0AB2ThM77dZmB8dpkqeYaAPawcCgD3Hl5uXxABQ3CQQYxZy6M2jj+v5m4h4+Bc5OrXCahMwL/UVB0K6EZMCwdpv3BvOCXU53hj0bKm5RWCcDmmY/2OIiwR6GIb/39x3X+bonhVScDovoHDdBDJcTVnzYWLkYoMuLcBHW/QymXb7D4lRpsor/zopQHWnIEvBG0z8TTWVLt/NCEJaTSdo+amQP+hHxulgtiIMdaaqQqBfgK/4G5Ceb0HUwlO2i/mltlhQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 6/9] VT-d: don't lose errors when flushing TLBs on multiple
 IOMMUs
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <8a4f355e-c381-412c-8949-061851d0f7e2@suse.com>
Date: Wed, 9 Jun 2021 11:29:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0017.eurprd04.prod.outlook.com
 (2603:10a6:208:122::30) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2ab675b8-f533-4885-19a4-08d92b290673
X-MS-TrafficTypeDiagnostic: VI1PR04MB4845:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4845078DD2FB4EE3B1F30200B3369@VI1PR04MB4845.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	clgvOzyeGfAea+7RLH4EaBGyO4g11cgn6cP4AQUSQ7k5MRUmMglndIcfeOA5jvDEhm2hIOu1Q8cdWfpoDTajL7bVeN1FNo/bngwvP/9W3k3uGssEWKuGMjs/FWISYcGgjC35a0fue0B60R5tzJSvcPc+Ke5T/ukXyTuqrFP3xFBZLo/ddZZUKUztYHyK+0w6HG5PDNyeFUVQV0NuATfmDKNWDpZLu1eqA+rrvsBSbSyh+tqsqeDN54fNeRDO9Nomp3Myw8mgcNCttwVL0TXiZMZa6bbJOEYTfC/P+vL5FySJ5en84abUtMQ4gvwJT1ksOqOk/SdBgYjBQgqUu874Va3IQevl1P6hJOFYTJftXa1BCmDrwAcm6V6qa0kOXM28yCgki1aOOuiMF1z9xz697RDjf37DOf+XfuGvgGp2yItqnrHjMqBMglXPPJuh3BG3ehxCGZVr/pQXpPxDIol02zR6dEx/ExybZoy/YcXpgE5+HYo5Yu00Qp76KaduKXlVP7vhfapx6T7DGduV8MXXzflKREvanIx3lzyge8ABpmzgdTDjHiO3iljCcH8rE/5pbMHJIp0jyypQVkIki4Y/82nBXDeWDzfjlOfc/8JPiYDe+KxEh6ESLPLdPhRNOBigVG7NGUtOEZXspLxLX3owusuHxyqERFnW/7SHBqfS1FHDKL9kOAY+fObZ7O/VXi7Q
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(396003)(39860400002)(376002)(346002)(316002)(478600001)(86362001)(8676002)(38100700002)(83380400001)(16526019)(26005)(186003)(36756003)(956004)(4326008)(2616005)(8936002)(5660300002)(6486002)(31696002)(6916009)(2906002)(66946007)(54906003)(31686004)(16576012)(66556008)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?QXIwR0pQL2FXKzBMZDFsMmJacERwODhuSHdKSG16QXZSdVdhU3p2TVBQbERR?=
 =?utf-8?B?RzJaamsxbjVscDBOZ0FVdDE1ZS9rTE5pOGN1Ym5wS1V2a2ZrV0FrY3l2UU9Z?=
 =?utf-8?B?TTBMM1RTSWxqNlNYT0ExbzVYc2l4bjF5WmpqTlRPMC9lWFFiNmVXOG01UE5D?=
 =?utf-8?B?eEE1Z0Fza2RKWnI5NTJoUEluYmdyRG9IdXEzU09XMHdNa3V1YXM4RDZRU0Ft?=
 =?utf-8?B?OHh4UFo4N01vb0xUaDdGVUJROHV4RWdZWDNHWkVKNVd5Z3B2dHVWSWlDNHJk?=
 =?utf-8?B?UUFLN0FINTc5ektnN0FzYlgzVjkvYTNLWGdIN21COW9vUWpjbnR6a1ZsYWRa?=
 =?utf-8?B?dm11d2UrdTVvUFpwZ3BzTGxaQ2p4NFFsQ1lzZ3NuU2d3aUlCdllvWTVKMk9R?=
 =?utf-8?B?VkNHcjBDV0tGTHVScEd3M3lQM2w5S2ZRbEpNdWRHUXhFUjZCcXBscWc2TVV4?=
 =?utf-8?B?OEpFcmRQeUo0SUJaSHcwM0FmRlhRZzZXS3gzaExuRTdLS1NKeVY4aThIRll3?=
 =?utf-8?B?QjVOdkJpR1VDa2NlNUR5QVh3aUdGR0VVOVlPUkpiY2VNQ0xFeUtSdmlYNjhw?=
 =?utf-8?B?bHlNdlBnSkRDZXR5VTZXYW1KeEF2R2xMVGVWK1JMYmJiSUs3ZU9qK3o2M1hu?=
 =?utf-8?B?UlR3Yk1lSTZqTzBudFdKOGVDMjBDMWo1VGg1cXQzZGV4bC9aZjZPY1JQYUR6?=
 =?utf-8?B?UXpPdm1qZ01EVE1ESVhXS2l3d2RJdm90Vmo2dWtWS0FqeVZMeU1vR25ISHFH?=
 =?utf-8?B?cFNxc1hjd29ETVRSV21FdjIyMVlvZ0tJVDFSa2RJL0Vzem80ZVFSK25sbVYw?=
 =?utf-8?B?eTIxYWNTeDkyeExTSnRoZldZaDhnZHRFN3c2L3M3VVVkOG1JaXVnVm9oYTZX?=
 =?utf-8?B?TW5PV2RGSEJaRFViNHMvcHFPSWVpWWtzOW5OTDhpZW1UQ1g1VWZZblorTTEr?=
 =?utf-8?B?QXFuZ2MraG1RN0pCQzNFY29DUndOZ2JBRXIyYkduY21FeFlnU1VRT0h6RlV6?=
 =?utf-8?B?d0t6TjE5ZUVwc3Y0TVc2dzdhN0haekFJZmJDREcrU2paZDNMRk5hTHo0VXJH?=
 =?utf-8?B?TVp1b1JKaEVjZ29rdmtWUThUUTdvcjQxZzlEV0dmanhKdHZ4Ymp5RjFldVRK?=
 =?utf-8?B?SksvYmFGakFKWnNqSEY4WmFVbDhvaFFQTjhGYXdiT090TWViTEh1U3VXcjRP?=
 =?utf-8?B?VmtSUEMxYTNGdHROYUlCK1FIVWFqbFFKSi9tZ3VXVCtjOHhJMTVhQ2Q3UVN3?=
 =?utf-8?B?Q0ZGKzdFblRkaUlrTm1wSjZ6bDM1M2k1OW12YmFaK0p3d0JnOEhCWHp3OWRv?=
 =?utf-8?B?cllTYUQ2aGdZMnkxUE1DT2tMRk5SdUxWeFBmRC9XWHMzL1duQ0hYMzluT0dE?=
 =?utf-8?B?UG5xeUp1UkN6NXNHN0h4d0RSOEtIQ01TejRmY2xGM1NHUStDNlZsSTdaSXhC?=
 =?utf-8?B?aVZxSDJTWXJuZHZBekVqWTJ0M1VyYWVGRFZsOHk4eWJLaHV3aGNRc096N0dn?=
 =?utf-8?B?dVF2SDhEMkZiOHUvZGlvMnJMd2NYOEVvOCtSQ0EyeHpqVWxoczdIRVptTXFL?=
 =?utf-8?B?V0RSV2gzRnZHYmY5SFhxR28vRFN5S3pTeEJaWVNVVXJ4RXNlalYxeDBIZ3ll?=
 =?utf-8?B?Y1N2Q0RUWWc5VCt6WFNZMTVaYmtLSmVFVVM1SW9sdkZ2cnBkeXVqSVd6U2RW?=
 =?utf-8?B?eEZrT2Z4RzZyOWhKRDRUT0JWTmRRRTY5VEdaWjZHZEJlTEVCQ1JiVHZyN2Ny?=
 =?utf-8?Q?ED6GIlCdbKTGJCEUvzprQTKkk0Cm3Il8jgHFEHG?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ab675b8-f533-4885-19a4-08d92b290673
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 09:29:05.2564
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hCsL72nNclbwx0FAHEcIUi8BXiZiKmnIH+Isbuz+N4drvvd06e2PP7ku2hgcxzFGSC6i5OQE0iPgxfOejgXKSA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4845

While no longer an immediate problem now that flushes can no longer
time out, errors (if any) get properly reported by
iommu_flush_iotlb_{dsi,psi}(). Overwriting such an error with, perhaps,
a success indicator received from another IOMMU will misguide callers.
Record the first error, but don't bail from the loop (so that further
necessary invalidation still gets carried out on a best-effort basis).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -643,7 +643,7 @@ static int __must_check iommu_flush_iotl
     struct vtd_iommu *iommu;
     bool_t flush_dev_iotlb;
     int iommu_domid;
-    int rc = 0;
+    int ret = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -651,6 +651,8 @@ static int __must_check iommu_flush_iotl
      */
     for_each_drhd_unit ( drhd )
     {
+        int rc;
+
         iommu = drhd->iommu;
 
         if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) )
@@ -673,13 +675,12 @@ static int __must_check iommu_flush_iotl
                                        flush_dev_iotlb);
 
         if ( rc > 0 )
-        {
             iommu_flush_write_buffer(iommu);
-            rc = 0;
-        }
+        else if ( !ret )
+            ret = rc;
     }
 
-    return rc;
+    return ret;
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:29:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139107.257324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquWp-0002ZS-V5; Wed, 09 Jun 2021 09:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139107.257324; Wed, 09 Jun 2021 09:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquWp-0002ZJ-Rp; Wed, 09 Jun 2021 09:29:39 +0000
Received: by outflank-mailman (input) for mailman id 139107;
 Wed, 09 Jun 2021 09:29:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquWo-0002Yx-Mr
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:29:38 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce1b3c19-d1d6-494b-aeaf-6ab966fac692;
 Wed, 09 Jun 2021 09:29:37 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-25-0dbrQj_KMUOVT9SqWq-sVA-1; Wed, 09 Jun 2021 11:29:34 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4845.eurprd04.prod.outlook.com (2603:10a6:803:51::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 09:29:33 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:29:33 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0127.eurprd04.prod.outlook.com (2603:10a6:208:55::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 9 Jun 2021 09:29:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce1b3c19-d1d6-494b-aeaf-6ab966fac692
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623230976;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wObYJPemsOXsGyhjl3tI23Gvv+HFSVMfSoBU/wvfj5Q=;
	b=NmjY5+Mi/HprhL86a57fMVyMSBXnoTatfheC2AN4AaockXJqgz/ZvI/OYQWTbPtRbPXRwx
	Ob5zZ7nhkmuXYEWxxOmYSqhZdz1cAkz5mNJPVrWpfMf0pxMQcu/3B9Q7C9ZDkMr96/12ea
	2qlXf619Yk03urH1lFANtdYyA7USDsI=
X-MC-Unique: 0dbrQj_KMUOVT9SqWq-sVA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=njzdSJMVjinNRJyNlUVNY1h70Q6WeBsLdM2lkds/qZzL79k9Xqun1HfoR1mgOeXsekczJ3yfjAnKSIaa6P8z/0bHpZPmSTNZsSv3t/KRbWGqaOZzqrRwI6MFMeGkhZmp4uwdJcWGRBPsEKbLySFVc6SCmVHKQHatmIPRCh4H3ueaGR/u9i4oWhJt+fE70WcTwB/4gvdK7VVwVE1AJZM9Ej9TDpXOix+PX9386WhZKFrg4vXnJ/qrYUNeUX8JYGuWGOjDRCtyWOPNdL2pal5KivYPxe7AxU15wA+A/FWtpkFLfvDWfBCykRVmL5RLLL46W6liki23bnWgHkdGU2cVbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wObYJPemsOXsGyhjl3tI23Gvv+HFSVMfSoBU/wvfj5Q=;
 b=OwXjyYj+scBn2M8zzk4qjF3vjYTkHVuw2tJHAzLC5Hp4AM8TaNlmdsFFj562xCIlYNKZDIxo2mUasM667ZEImIeR4CKvTIqPg0SLXA2J6BJNsWDDkMeHVF5Mjusn60sYe5GA9gVIxDDKRBMoTutbZZn8TTdNQ3ahl4Oc+TDe/9wT97972dPRDulxMheKrESG4X1skvIBgzugERboEj47d47Vo08zjeQFS38f6LgPQ5nfD7kLJJ/YgOxurF6Q5655R0cvP+dxu+Qf398yo74I5i/W5obcVMTWVwOXR1YJAmIPGDgOG/H9CmsASlFY6xO9O5iWzQG5Vof87mTyhh5YaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 7/9] VT-d: centralize mapping of QI entries
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <b1aba243-e05e-1f50-d85d-00f60703b62b@suse.com>
Date: Wed, 9 Jun 2021 11:29:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0127.eurprd04.prod.outlook.com
 (2603:10a6:208:55::32) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c1703c37-502e-41c0-c692-08d92b291716
X-MS-TrafficTypeDiagnostic: VI1PR04MB4845:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4845A75DDD702A81EE01BCEAB3369@VI1PR04MB4845.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c1703c37-502e-41c0-c692-08d92b291716
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 09:29:33.1857
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QfsfkuskLWjicyoOeSKnfJFC1PZV9bIfWOvHSUFizrVf7Kg22JbsUVfhQ7JF2QvhrcaT1Wxpd131k3Nuqb5Wvg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4845

Introduce a helper function to reduce redundancy. Take the opportunity
to express the logic without the somewhat odd QINVAL_ENTRY_ORDER, and
to uniformly unmap the entry only after updating the queue tail and
dropping the lock (as was previously done only by
queue_invalidate_context_sync()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder though whether we wouldn't be better off permanently mapping
the queue(s).

--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -69,6 +69,16 @@ static void qinval_update_qtail(struct v
     dmar_writel(iommu->reg, DMAR_IQT_REG, val << QINVAL_INDEX_SHIFT);
 }
 
+static struct qinval_entry *qi_map_entry(const struct vtd_iommu *iommu,
+                                         unsigned int index)
+{
+    paddr_t base = iommu->qinval_maddr +
+                   ((index * sizeof(struct qinval_entry)) & PAGE_MASK);
+    struct qinval_entry *entries = map_vtd_domain_page(base);
+
+    return &entries[index % (PAGE_SIZE / sizeof(*entries))];
+}
+
 static int __must_check queue_invalidate_context_sync(struct vtd_iommu *iommu,
                                                       u16 did, u16 source_id,
                                                       u8 function_mask,
@@ -76,15 +86,11 @@ static int __must_check queue_invalidate
 {
     unsigned long flags;
     unsigned int index;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
+    struct qinval_entry *qinval_entry;
 
     spin_lock_irqsave(&iommu->register_lock, flags);
     index = qinval_next_index(iommu);
-    entry_base = iommu->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+    qinval_entry = qi_map_entry(iommu, index);
 
     qinval_entry->q.cc_inv_dsc.lo.type = TYPE_INVAL_CONTEXT;
     qinval_entry->q.cc_inv_dsc.lo.granu = granu;
@@ -98,7 +104,7 @@ static int __must_check queue_invalidate
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
-    unmap_vtd_domain_page(qinval_entries);
+    unmap_vtd_domain_page(qinval_entry);
 
     return invalidate_sync(iommu);
 }
@@ -110,15 +116,11 @@ static int __must_check queue_invalidate
 {
     unsigned long flags;
     unsigned int index;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
+    struct qinval_entry *qinval_entry;
 
     spin_lock_irqsave(&iommu->register_lock, flags);
     index = qinval_next_index(iommu);
-    entry_base = iommu->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+    qinval_entry = qi_map_entry(iommu, index);
 
     qinval_entry->q.iotlb_inv_dsc.lo.type = TYPE_INVAL_IOTLB;
     qinval_entry->q.iotlb_inv_dsc.lo.granu = granu;
@@ -133,10 +135,11 @@ static int __must_check queue_invalidate
     qinval_entry->q.iotlb_inv_dsc.hi.res_1 = 0;
     qinval_entry->q.iotlb_inv_dsc.hi.addr = addr >> PAGE_SHIFT_4K;
 
-    unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
+    unmap_vtd_domain_page(qinval_entry);
+
     return invalidate_sync(iommu);
 }
 
@@ -147,17 +150,13 @@ static int __must_check queue_invalidate
     static DEFINE_PER_CPU(uint32_t, poll_slot);
     unsigned int index;
     unsigned long flags;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
+    struct qinval_entry *qinval_entry;
     uint32_t *this_poll_slot = &this_cpu(poll_slot);
 
     spin_lock_irqsave(&iommu->register_lock, flags);
     ACCESS_ONCE(*this_poll_slot) = QINVAL_STAT_INIT;
     index = qinval_next_index(iommu);
-    entry_base = iommu->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+    qinval_entry = qi_map_entry(iommu, index);
 
     qinval_entry->q.inv_wait_dsc.lo.type = TYPE_INVAL_WAIT;
     qinval_entry->q.inv_wait_dsc.lo.iflag = iflag;
@@ -167,10 +166,11 @@ static int __must_check queue_invalidate
     qinval_entry->q.inv_wait_dsc.lo.sdata = QINVAL_STAT_DONE;
     qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(this_poll_slot);
 
-    unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
+    unmap_vtd_domain_page(qinval_entry);
+
     /* Now we don't support interrupt method */
     if ( sw )
     {
@@ -246,16 +246,12 @@ int qinval_device_iotlb_sync(struct vtd_
 {
     unsigned long flags;
     unsigned int index;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
+    struct qinval_entry *qinval_entry;
 
     ASSERT(pdev);
     spin_lock_irqsave(&iommu->register_lock, flags);
     index = qinval_next_index(iommu);
-    entry_base = iommu->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+    qinval_entry = qi_map_entry(iommu, index);
 
     qinval_entry->q.dev_iotlb_inv_dsc.lo.type = TYPE_INVAL_DEVICE_IOTLB;
     qinval_entry->q.dev_iotlb_inv_dsc.lo.res_1 = 0;
@@ -268,10 +264,11 @@ int qinval_device_iotlb_sync(struct vtd_
     qinval_entry->q.dev_iotlb_inv_dsc.hi.res_1 = 0;
     qinval_entry->q.dev_iotlb_inv_dsc.hi.addr = addr >> PAGE_SHIFT_4K;
 
-    unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
+    unmap_vtd_domain_page(qinval_entry);
+
     return dev_invalidate_sync(iommu, pdev, did);
 }
 
@@ -280,16 +277,12 @@ static int __must_check queue_invalidate
 {
     unsigned long flags;
     unsigned int index;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
+    struct qinval_entry *qinval_entry;
     int ret;
 
     spin_lock_irqsave(&iommu->register_lock, flags);
     index = qinval_next_index(iommu);
-    entry_base = iommu->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+    qinval_entry = qi_map_entry(iommu, index);
 
     qinval_entry->q.iec_inv_dsc.lo.type = TYPE_INVAL_IEC;
     qinval_entry->q.iec_inv_dsc.lo.granu = granu;
@@ -299,10 +292,11 @@ static int __must_check queue_invalidate
     qinval_entry->q.iec_inv_dsc.lo.res_2 = 0;
     qinval_entry->q.iec_inv_dsc.hi.res = 0;
 
-    unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
+    unmap_vtd_domain_page(qinval_entry);
+
     ret = invalidate_sync(iommu);
 
     /*



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:29:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:29:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139111.257336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquX7-00037H-8U; Wed, 09 Jun 2021 09:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139111.257336; Wed, 09 Jun 2021 09:29:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquX7-000376-4n; Wed, 09 Jun 2021 09:29:57 +0000
Received: by outflank-mailman (input) for mailman id 139111;
 Wed, 09 Jun 2021 09:29:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquX6-00035P-Cp
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:29:56 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59f7ec7b-23dd-4058-85ad-4173d4163aa5;
 Wed, 09 Jun 2021 09:29:55 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2104.outbound.protection.outlook.com [104.47.17.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-UC3lEw0DPp-taYdeEFZHVQ-1; Wed, 09 Jun 2021 11:29:53 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4845.eurprd04.prod.outlook.com (2603:10a6:803:51::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 09:29:52 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:29:52 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0009.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.12 via Frontend Transport; Wed, 9 Jun 2021 09:29:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59f7ec7b-23dd-4058-85ad-4173d4163aa5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623230994;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KAEC3zWHdiALl0C1ggZlQZ2fYmIEsDbbA+043+P3nms=;
	b=FleOa31A9W3LlJNgFXt0P7ZUUOMSQQVFBQ3IyKee4pZvUHFI5+nnc69hUU1CbEceEtmekd
	IQQ3MTvSTKEw/0B6R0D6/2b/H7Vl5qqMT3f+L+WvoWLtvP033MbBiYb+G1ejraOEWTbXT1
	SM9ZDk7ikvD9a7MLuoNebZOzprEDMU0=
X-MC-Unique: UC3lEw0DPp-taYdeEFZHVQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 8/9] VT-d: drop/move a few QI related constants
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <1b8558d4-42cc-bd68-e6c8-138f40f81e1c@suse.com>
Date: Wed, 9 Jun 2021 11:29:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0009.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2f23b461-0b43-4604-b9e4-08d92b292263
X-MS-TrafficTypeDiagnostic: VI1PR04MB4845:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4845B755358F51D92EA1228DB3369@VI1PR04MB4845.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2958;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f23b461-0b43-4604-b9e4-08d92b292263
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 09:29:52.1619
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: twIWbEA6QsPJfnyy4cbzQwoGeM/enrB02AXLfANTVHdZG0yp7oQZ4gRCqL6inFokNZ2dSHt9BDTAO1DxH3d1Lg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4845

Replace the uses of QINVAL_ENTRY_ORDER and QINVAL_INDEX_SHIFT so that
these constants can be dropped. Move the remaining QINVAL_* constants
to the single source file using them.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -451,17 +451,6 @@ struct qinval_entry {
     }q;
 };
 
-/* Each entry is 16 bytes, so 2^8 entries per page */
-#define QINVAL_ENTRY_ORDER  ( PAGE_SHIFT - 4 )
-#define QINVAL_MAX_ENTRY_NR (1u << (7 + QINVAL_ENTRY_ORDER))
-
-/* Status data flag */
-#define QINVAL_STAT_INIT  0
-#define QINVAL_STAT_DONE  1
-
-/* Queue invalidation head/tail shift */
-#define QINVAL_INDEX_SHIFT 4
-
 #define TYPE_INVAL_CONTEXT      0x1
 #define TYPE_INVAL_IOTLB        0x2
 #define TYPE_INVAL_DEVICE_IOTLB 0x3
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -29,6 +29,13 @@
 #include "extern.h"
 #include "../ats.h"
 
+/* Each entry is 16 bytes, and there can be up to 2^7 pages. */
+#define QINVAL_MAX_ENTRY_NR (1u << (7 + PAGE_SHIFT_4K - 4))
+
+/* Status data flag */
+#define QINVAL_STAT_INIT  0
+#define QINVAL_STAT_DONE  1
+
 static unsigned int __read_mostly qi_pg_order;
 static unsigned int __read_mostly qi_entry_nr;
 
@@ -45,11 +52,11 @@ static unsigned int qinval_next_index(st
 {
     unsigned int tail = dmar_readl(iommu->reg, DMAR_IQT_REG);
 
-    tail >>= QINVAL_INDEX_SHIFT;
+    tail /= sizeof(struct qinval_entry);
 
     /* (tail+1 == head) indicates a full queue, wait for HW */
     while ( ((tail + 1) & (qi_entry_nr - 1)) ==
-            (dmar_readl(iommu->reg, DMAR_IQH_REG) >> QINVAL_INDEX_SHIFT) )
+            (dmar_readl(iommu->reg, DMAR_IQH_REG) / sizeof(struct qinval_entry)) )
     {
         printk_once(XENLOG_ERR VTDPREFIX " IOMMU#%u: no QI slot available\n",
                     iommu->index);
@@ -66,7 +73,7 @@ static void qinval_update_qtail(struct v
     /* Need hold register lock when update tail */
     ASSERT( spin_is_locked(&iommu->register_lock) );
     val = (index + 1) & (qi_entry_nr - 1);
-    dmar_writel(iommu->reg, DMAR_IQT_REG, val << QINVAL_INDEX_SHIFT);
+    dmar_writel(iommu->reg, DMAR_IQT_REG, val * sizeof(struct qinval_entry));
 }
 
 static struct qinval_entry *qi_map_entry(const struct vtd_iommu *iommu,
@@ -413,17 +420,18 @@ int enable_qinval(struct vtd_iommu *iomm
              * only one entry left.
              */
             BUILD_BUG_ON(CONFIG_NR_CPUS * 2 >= QINVAL_MAX_ENTRY_NR);
-            qi_pg_order = get_order_from_bytes((num_present_cpus() * 2 + 1) <<
-                                               (PAGE_SHIFT -
-                                                QINVAL_ENTRY_ORDER));
-            qi_entry_nr = 1u << (qi_pg_order + QINVAL_ENTRY_ORDER);
+            qi_pg_order = get_order_from_bytes((num_present_cpus() * 2 + 1) *
+                                               sizeof(struct qinval_entry));
+            qi_entry_nr = (PAGE_SIZE << qi_pg_order) /
+                          sizeof(struct qinval_entry);
 
             dprintk(XENLOG_INFO VTDPREFIX,
                     "QI: using %u-entry ring(s)\n", qi_entry_nr);
         }
 
         iommu->qinval_maddr =
-            alloc_pgtable_maddr(qi_entry_nr >> QINVAL_ENTRY_ORDER,
+            alloc_pgtable_maddr(PFN_DOWN(qi_entry_nr *
+                                         sizeof(struct qinval_entry)),
                                 iommu->node);
         if ( iommu->qinval_maddr == 0 )
         {



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:30:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 09:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139117.257347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquXg-0004Ty-H7; Wed, 09 Jun 2021 09:30:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139117.257347; Wed, 09 Jun 2021 09:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lquXg-0004Tr-Dy; Wed, 09 Jun 2021 09:30:32 +0000
Received: by outflank-mailman (input) for mailman id 139117;
 Wed, 09 Jun 2021 09:30:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lquXe-0004TW-U3
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 09:30:30 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73802cb2-9a11-45a0-a412-86b413ab30fc;
 Wed, 09 Jun 2021 09:30:29 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-5M9f1GgPPx2bXU4qQhSa4A-1; Wed, 09 Jun 2021 11:30:27 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4845.eurprd04.prod.outlook.com (2603:10a6:803:51::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 09:30:26 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 09:30:26 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0200.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 9 Jun 2021 09:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73802cb2-9a11-45a0-a412-86b413ab30fc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623231028;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5nH2cLUYwbL9eZj4RVdVH4PyeqIGtsVVYtNIc2+3fhE=;
	b=jPadiLvU9fJcOWVNjaPhJSLgSLsHFJMz3egERGm7bYxhwP+yq3C2xHbl38W/LSUxOjhM0g
	82cWwvl/Cec7ss3OIHGStn23dMQKIYtTOvtO8hAPeiDhw7dB3zGRSBr8ixLGLT3xgtDGB5
	6PCrhGJDsm1FEAOM/pvB+lip6MdYu2I=
X-MC-Unique: 5M9f1GgPPx2bXU4qQhSa4A-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 9/9] IOMMU/PCI: don't let domain cleanup continue when device
 de-assignment failed
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <1a7b974c-8dee-3422-28fb-4118fe145b4e@suse.com>
Date: Wed, 9 Jun 2021 11:30:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Failure here could in principle mean the device may still be issuing DMA
requests, which would continue to be translated by the page tables the
device entry currently points at. With this we cannot allow the
subsequent cleanup step of freeing the page tables to occur, to prevent
use-after-free issues. We would need to accept, for the time being, that
in such a case the remaining domain resources will all be leaked, and
the domain will continue to exist as a zombie.

However, with flushes no longer timing out (and with proper timeout
detection for device I/O TLB flushing yet to be implemented), there is
no longer any way for failures to occur, except due to bugs elsewhere.
Hence the change here is merely a "just in case" one.

In order to continue the loop in spite of an error, we can't use
pci_get_pdev_by_domain() anymore. I have no idea why it was used here in
the first place, instead of the cheaper list iteration.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
A first step beyond this could be to have the backing functions of
deassign_device() allow the caller to tell whether the failure was from
removing the device from the domain being cleaned up, or from re-setup
in wherever the device was supposed to get moved to. In the latter case
we could allow domain cleanup to continue. I wonder whether we could
simply make those functions return "success" anyway, overriding their
returning of an error when ->is_dying is set.

A next step then might be to figure whether there's any "emergency"
adjustment that could be done instead of the full-fledged (and failed)
de-assign, to allow at least recovering all the memory from the guest.

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -894,7 +894,7 @@ static int deassign_device(struct domain
 
 int pci_release_devices(struct domain *d)
 {
-    struct pci_dev *pdev;
+    struct pci_dev *pdev, *tmp;
     u8 bus, devfn;
     int ret;
 
@@ -905,15 +905,15 @@ int pci_release_devices(struct domain *d
         pcidevs_unlock();
         return ret;
     }
-    while ( (pdev = pci_get_pdev_by_domain(d, -1, -1, -1)) )
+    list_for_each_entry_safe ( pdev, tmp, &d->pdev_list, domain_list )
     {
         bus = pdev->bus;
         devfn = pdev->devfn;
-        deassign_device(d, pdev->seg, bus, devfn);
+        ret = deassign_device(d, pdev->seg, bus, devfn) ?: ret;
     }
     pcidevs_unlock();
 
-    return 0;
+    return ret;
 }
 
 #define PCI_CLASS_BRIDGE_HOST    0x0600



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:51:39 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162566-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162566: all pass - PUSHED
X-Osstest-Versions-This:
    xen=e4fee66043120c954fc309bbb37813604c1c0eb7
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 09:51:30 +0000

flight 162566 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162566/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  e4fee66043120c954fc309bbb37813604c1c0eb7
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162335  2021-06-02 09:20:40 Z    7 days
Testing same since   162566  2021-06-09 09:19:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5268b2dcf7..e4fee66043  e4fee66043120c954fc309bbb37813604c1c0eb7 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 09:56:28 2021
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien.grall.oss@gmail.com>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Chen
	<Wei.Chen@arm.com>, nd <nd@arm.com>
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Date: Wed, 9 Jun 2021 09:56:11 +0000
Message-ID: <BAC8BC8D-9CD6-4857-88C0-7DCE9267EF0E@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
 <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>
In-Reply-To: <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>

Hi All,

On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org> wrote:

Hi Stefano,

On 04/06/2021 00:55, Stefano Stabellini wrote:
On Thu, 3 Jun 2021, Julien Grall wrote:
On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
On Thu, 3 Jun 2021, Julien Grall wrote:
On 02/06/2021 11:09, Penny Zheng wrote:
I could not think of a way to fix it properly in code; do you have any
suggestion? Or we could just put a warning in the doc/commits.

The correct approach is to find the parent of staticmemdomU1 (i.e.
reserved-memory) and use the #address-cells and #size-cells from there.

Julien is right about how to parse the static-memory.

But I have a suggestion on the new binding. The /reserved-memory node is
a weird node: it is one of the very few node (the only node aside from
/chosen) which is about software configurations rather than hardware
description.

For this reason, in a device tree with multiple domains /reserved-memory
doesn't make a lot of sense: for which domain is the memory reserved?

IMHO, /reserved-memory refers to the memory that the hypervisor should
not touch. It is just a coincidence that most of the domains are then
passed through to dom0.

This also matches the fact that the GIC and /memory are consumed by the
hypervisor directly and not by the domain.
In system device tree one of the key principles is to distinguish between
hardware description and domains configuration. The domains
configuration is under /domains (originally it was under /chosen then
the DT maintainers requested to move it to its own top-level node), while
everything else is for hardware description.
/chosen and /reserved-memory are exceptions. They are top-level nodes
but they are for software configurations. In system device tree
configurations go under the domain node. This makes sense: Xen, dom0 and
domU can all have different reserved-memory and chosen configurations.
/domains/domU1/reserved-memory gives us a clear way to express
reserved-memory configurations for domU1.
Which leaves us with /reserved-memory. Who is that for? It is for the
default domain.
The default domain is the one receiving all devices by default. In a Xen
setting, it is probably Dom0.

Let's take an example: say in the future someone wants to allocate a
specific region for the memory used by the GICv3 ITS.

From what you said above, /reserved-memory would be used by dom0. So how
would you be able to tell the hypervisor that the region is reserved for
itself?

In this case, we don't want to add
reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
list is for its own drivers. We could also make an argument that the
default domain is Xen itself. From a spec perspective, that would be
fine too. In this case, /reserved-memory is a list of memory regions
reserved for Xen drivers.

We seem to have a different way to read the binding description in [1].
For convenience, I will copy it here:

"Reserved memory is specified as a node under the /reserved-memory node.
The operating system shall exclude reserved memory from normal usage
one can create child nodes describing particular reserved (excluded from
normal use) memory regions. Such memory regions are usually designed for
the special usage by various device drivers.
"

I read it as: this can be used to exclude any memory from the allocator
for a specific purpose. They give the example of device drivers, but they
don't exclude other purposes. So...

Either way, I don't think it is a great fit for
domain memory allocations.

... I don't really understand why this is not a great fit. The regions
have been *reserved* for a purpose.


So I don't think we want to use reserved-memory for this, either
/reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
good to align it with system device tree and define it as a new property
under domU1.

Do you have any formal documentation of the system device-tree?
It lives here:
https://github.com/devicetree-org/lopper/tree/master/specification
Start from specification.md. It is the oldest part of the spec, so it is
not yet written with a formal specification language.
FYI there are a number of things in flight with regard to domains that we discussed in the last call, but they are not yet settled and thus not yet committed (access flags definitions and hierarchical domains). However, they don't affect domain memory allocations, so from that perspective nothing has changed.

Thanks!

In system device tree we would use a property called "memory" to specify
one or more ranges, e.g.:

    domU1 {
        memory = <0x0 0x500000 0x0 0x7fb00000>;
    };

Unfortunately for xen,domains we have already defined "memory" to
specify an amount, rather than a range. That's too bad because the most
natural way to do this would be:

    domU1 {
        size = <amount>;
        memory = <ranges>;
    };

When we introduce native system device tree support in Xen, we will be able to do that. For now, we need to come up with a different property. For instance: "static-memory" (other names are welcome if you have a better suggestion).

We use a new property called "static-memory" together with
#static-memory-address-cells and #static-memory-size-cells to define how
many cells to use for address and size.

Example:

    domU1 {
        #static-memory-address-cells = <2>;
        #static-memory-size-cells = <2>;
        static-memory = <0x0 0x500000 0x0 0x7fb00000>;
    };

This is pretty similar to what Penny suggested. But I dislike it because of the amount of code that would be duplicated with the reserved-memory handling.
Where is the code duplication? In the parsing itself?

So the problem is that we need an entirely new way to parse and walk yet another binding describing memory excluded from the hypervisor's normal allocator.

The code is pretty much the same as parsing /reserved-memory, except it will use different address cells, size cells, and a different property.
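For what it's worth, the shared shape of that parsing can be sketched in plain C (this is an illustration, not Xen code; `read_cells` and `parse_region` are made-up helper names). Only the property looked up and the cell counts differ between /reserved-memory's `reg` and a would-be `static-memory`:

```c
#include <stdint.h>
#include <stddef.h>

/* Fold 'cells' consecutive 32-bit cells into one 64-bit value,
 * advancing the cursor. (Real FDT data is big-endian; host-order
 * values are used here to keep the sketch short.) */
uint64_t read_cells(const uint32_t **cellp, int cells)
{
    uint64_t val = 0;

    while (cells--)
        val = (val << 32) | *(*cellp)++;
    return val;
}

/* Decode the first <address size> tuple of a property, given the
 * #address-cells / #size-cells taken from the *parent* node.
 * Returns 0 on success, -1 if the property is too short. */
int parse_region(const uint32_t *prop, size_t n_cells,
                 int addr_cells, int size_cells,
                 uint64_t *addr, uint64_t *size)
{
    const uint32_t *cur = prop;

    if (n_cells < (size_t)(addr_cells + size_cells))
        return -1;                  /* malformed property */

    *addr = read_cells(&cur, addr_cells);
    *size = read_cells(&cur, size_cells);
    return 0;
}
```

With the example from this thread (<0x0 0x500000 0x0 0x7fb00000>, two address cells and two size cells) this yields address 0x500000 and size 0x7fb00000.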

If there is code duplication, can we find a way to share some of the
code to avoid the duplication?

Feel free to propose one. I suggested using /reserved-memory because this is the approach that makes the most sense to me (see my reply above).

TBH, even after your explanation, I am still a bit puzzled as to why /reserved-memory cannot be leveraged to exclude domain regions from the hypervisor allocator.

I really tend to think that the original solution from Penny is, for now, the easiest and simplest to document.

In the long term, using memory directly and giving it the address range is the most natural solution, but that would clash with its current usage.

I would like to suggest the following approach:
- keep the original solution from Penny
- start to discuss a v2 of the domain binding so that we can solve the current issues we have, including passthrough devices, which are not really easy to define.

As a user, I would just expect to put a device tree or links in a domain definition to define its characteristics and devices, using the standard names ("memory", for example).

Also, I must admit I need to read the system device tree spec more to check whether we could just use it directly (and be compliant with a standard).

Would that approach be acceptable? I am more than happy to drive a working group on rethinking the device tree together with Penny.

Cheers
Bertrand


Cheers,

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt

--
Julien Grall


 0px; text-decoration: none; float: none; display: inline !important;" clas=
s=3D"">TBH,
 even after your explanation, I am still a bit puzzled into why /reserved-m=
emory cannot be leveraged to exclude domain region from the hypervisor allo=
cator.</span><br style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica=
; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weig=
ht: normal; letter-spacing: normal; text-align: start; text-indent: 0px; te=
xt-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-st=
roke-width: 0px; text-decoration: none;" class=3D"">
</div>
</blockquote>
<div><br class=3D"">
</div>
<div>I really tend to think that the original solution from Penny is for no=
w the easiest and simplest to document.</div>
<div><br class=3D"">
</div>
<div>In the long term, using directly memory and giving in it the address r=
ange directly is the most natural solution but that would clash with the cu=
rrent usage for it.</div>
<div><br class=3D"">
</div>
<div>I would like to suggest the following approach:</div>
<div>- keep original solution from Penny</div>
<div>- start to discuss a domain v2 so that we could solve current issues w=
e have including the passthrough devices which are not really easy to defin=
e.&nbsp;</div>
<div>As a user I would just expect to put a device tree or links in a domai=
n definition to define its characteristic and devices and using the standar=
d names (memory for example).</div>
<div>Also I must admit I need to read more the system device tree spec to c=
heck if we could just use it directly (and be compliant to a standard).</di=
v>
<div><br class=3D"">
</div>
<div>Would that approach be acceptable ?</div>
<div>I am more then happy to drive a working group on rethinking the device=
 tree together with Penny.</div>
<div><br class=3D"">
</div>
<div>Cheers</div>
<div>Bertrand</div>
<br class=3D"">
<blockquote type=3D"cite" class=3D"">
<div class=3D""><br style=3D"caret-color: rgb(0, 0, 0); font-family: Helvet=
ica; font-size: 12px; font-style: normal; font-variant-caps: normal; font-w=
eight: normal; letter-spacing: normal; text-align: start; text-indent: 0px;=
 text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text=
-stroke-width: 0px; text-decoration: none;" class=3D"">
<span style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size=
: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal;=
 letter-spacing: normal; text-align: start; text-indent: 0px; text-transfor=
m: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width:=
 0px; text-decoration: none; float: none; display: inline !important;" clas=
s=3D"">Cheers,</span><br style=3D"caret-color: rgb(0, 0, 0); font-family: H=
elvetica; font-size: 12px; font-style: normal; font-variant-caps: normal; f=
ont-weight: normal; letter-spacing: normal; text-align: start; text-indent:=
 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit=
-text-stroke-width: 0px; text-decoration: none;" class=3D"">
<br style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: =
12px; font-style: normal; font-variant-caps: normal; font-weight: normal; l=
etter-spacing: normal; text-align: start; text-indent: 0px; text-transform:=
 none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0=
px; text-decoration: none;" class=3D"">
<span style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size=
: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal;=
 letter-spacing: normal; text-align: start; text-indent: 0px; text-transfor=
m: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width:=
 0px; text-decoration: none; float: none; display: inline !important;" clas=
s=3D"">[1]<span class=3D"Apple-converted-space">&nbsp;</span></span><a href=
=3D"https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-m=
emory/reserved-memory.txt" style=3D"font-family: Helvetica; font-size: 12px=
; font-style: normal; font-variant-caps: normal; font-weight: normal; lette=
r-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text=
-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -we=
bkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px;" class=3D"">ht=
tps://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/=
reserved-memory.txt</a><br style=3D"caret-color: rgb(0, 0, 0); font-family:=
 Helvetica; font-size: 12px; font-style: normal; font-variant-caps: normal;=
 font-weight: normal; letter-spacing: normal; text-align: start; text-inden=
t: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webk=
it-text-stroke-width: 0px; text-decoration: none;" class=3D"">
<br style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: =
12px; font-style: normal; font-variant-caps: normal; font-weight: normal; l=
etter-spacing: normal; text-align: start; text-indent: 0px; text-transform:=
 none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0=
px; text-decoration: none;" class=3D"">
<span style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size=
: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal;=
 letter-spacing: normal; text-align: start; text-indent: 0px; text-transfor=
m: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width:=
 0px; text-decoration: none; float: none; display: inline !important;" clas=
s=3D"">--<span class=3D"Apple-converted-space">&nbsp;</span></span><br styl=
e=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; fo=
nt-style: normal; font-variant-caps: normal; font-weight: normal; letter-sp=
acing: normal; text-align: start; text-indent: 0px; text-transform: none; w=
hite-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text=
-decoration: none;" class=3D"">
<span style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size=
: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal;=
 letter-spacing: normal; text-align: start; text-indent: 0px; text-transfor=
m: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width:=
 0px; text-decoration: none; float: none; display: inline !important;" clas=
s=3D"">Julien
 Grall</span></div>
</blockquote>
</div>
<br class=3D"">
</body>
</html>

--_000_BAC8BC8D9CD6485788C07DCE9267EF0Earmcom_--
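The point about parsing being "pretty much the same" rests on how device
trees work: a "reg"-style property is decoded through the parent node's
#address-cells / #size-cells, so the same walker can serve /reserved-memory
and a domain node with different cell counts. A minimal, hypothetical C
sketch of that idea (not Xen's actual fdt code; all names are illustrative,
and cells are assumed already converted to host endianness, i.e. after
fdt32_to_cpu()):

```c
#include <stdint.h>

/*
 * Fold 'n' 32-bit cells into one 64-bit value, most significant
 * cell first, as the devicetree "reg" encoding prescribes.
 */
static uint64_t read_cells(const uint32_t *cells, unsigned int n)
{
    uint64_t val = 0;

    while ( n-- )
        val = (val << 32) | *cells++;
    return val;
}

/*
 * Decode one (address, size) pair; the caller supplies the parent's
 * #address-cells and #size-cells, which is the only thing that
 * differs between the /reserved-memory and domain-node cases.
 */
static void parse_reg(const uint32_t *prop, unsigned int addr_cells,
                      unsigned int size_cells,
                      uint64_t *addr, uint64_t *size)
{
    *addr = read_cells(prop, addr_cells);
    *size = read_cells(prop + addr_cells, size_cells);
}
```

With #address-cells = #size-cells = 2 the same four cells decode to a
64-bit address/size pair; with 1/1 they decode to two 32-bit pairs.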


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:15:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:15:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139159.257393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvEm-00027e-KR; Wed, 09 Jun 2021 10:15:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139159.257393; Wed, 09 Jun 2021 10:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvEm-00027X-H6; Wed, 09 Jun 2021 10:15:04 +0000
Received: by outflank-mailman (input) for mailman id 139159;
 Wed, 09 Jun 2021 10:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqvEl-00027R-JU
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:15:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c8032c4-38e2-403e-bbf6-ea3eb08d0734;
 Wed, 09 Jun 2021 10:15:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c8032c4-38e2-403e-bbf6-ea3eb08d0734
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623233701;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lTAGjxPCj0KzK7t4TmR0qb8g1QfDeawSDUk5Z7yEDS0=;
  b=af0JbPjxN8n1T8IZdUAgDzKodfnefJ8iR3/YTqPtFHGiN0brQLvDmWcq
   DG3VV3XypdDD70usKfB3lIwP4WEM19QcAyRQu4ipzlKLtkLx22OrdDPkg
   +4M9lvM/kM7pLZ6PiYIaxQRG7cgmNuJvZbTZxSUF421OUc/vJN20S7UIV
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45709801
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,260,1616472000"; 
   d="scan'208";a="45709801"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uRr5FKRYhqIeN9TKQ9MRapuck92JpOGmCbzbsLrxdRw=;
 b=qVe/77NO4ZUmVcB5P8Iw5r/3d028oCNxs0ZwloSEg+ko2Q8siyXsGReB4cBGsWa30MKQnmxg1oE77UjOO1dq3BqP9PlWXGaF2q/hviz2rrwRVSno8q501tzl89zFOL6JOvQwGkV/oz+gP0XBI81v8Aq/MegzqfXIUM6xDN5ElLg=
Subject: Re: [PATCH] MAINTAINERS: adjust x86/mm/shadow maintainers
To: Jan Beulich <jbeulich@suse.com>, Tim Deegan <tim@xen.org>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <YLjUM0Dzqn0lWA0l@deinos.phlegethon.org>
 <ce4b0cbc-2168-9320-d4df-ff9f27fb4559@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <476340d8-3814-7210-9eff-eb3aea94c880@citrix.com>
Date: Wed, 9 Jun 2021 11:14:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <ce4b0cbc-2168-9320-d4df-ff9f27fb4559@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0126.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::18) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 546e7868-1e3f-41f6-bdf8-08d92b2f62b6
X-MS-TrafficTypeDiagnostic: BN8PR03MB5124:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BN8PR03MB5124E488770BF3A8BB9A0039BA369@BN8PR03MB5124.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:962;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 546e7868-1e3f-41f6-bdf8-08d92b2f62b6
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 10:14:37.0665
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: O1ov5Q8XF7SSXQxf7jFSf5RVCdoizYpPFQEjwpXuO8rJosL91+lkH+bj37/ihtQws6PFdAtJBi5+LqPKK6FnIgOpf7MhBoXRiYKiiiL63EU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5124
X-OriginatorOrg: citrix.com

On 07/06/2021 07:33, Jan Beulich wrote:
> On 03.06.2021 15:08, Tim Deegan wrote:
>> Better reflect reality: Andrew and Jan are active maintainers
>> and I review patches.  Keep myself as a reviewer so I can help
>> with historical context &c.
>>
>> Signed-off-by: Tim Deegan <tim@xen.org>
> Largely for formal reasons
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:20:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139166.257405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvJx-0003WN-9p; Wed, 09 Jun 2021 10:20:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139166.257405; Wed, 09 Jun 2021 10:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvJx-0003WG-50; Wed, 09 Jun 2021 10:20:25 +0000
Received: by outflank-mailman (input) for mailman id 139166;
 Wed, 09 Jun 2021 10:20:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqvJv-0003W8-Uk
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:20:23 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9668a438-1e40-4d2d-bb48-ec8c8df461d8;
 Wed, 09 Jun 2021 10:20:18 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2053.outbound.protection.outlook.com [104.47.0.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-3-FOlbmTpoNKCtwJ5evrWRaA-1;
 Wed, 09 Jun 2021 12:20:16 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6861.eurprd04.prod.outlook.com (2603:10a6:803:13c::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Wed, 9 Jun
 2021 10:20:14 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 10:20:14 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1PR01CA0018.eurprd01.prod.exchangelabs.com (2603:10a6:102::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21 via Frontend
 Transport; Wed, 9 Jun 2021 10:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9668a438-1e40-4d2d-bb48-ec8c8df461d8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623234017;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qTDl0/KVhcTvFDB9ZTaEJZ7ejh9uPJ6WnYYJjdsfbV8=;
	b=ByaZF5D62HGMVr6OuwgKgmFR/R/1CIxalg9LSI+xzqJpE5oaoypPARH2lG2rPU+1egY6jJ
	0Er9sIsjHUBHhjB6CNenDKwQuUsoEGPYS23bgemAsixhAXjz/m+Cowwu5/hDOmXEVTLQ2a
	+l1ljdb9wzlwehjTuDiHVQwPlvQH0V0=
X-MC-Unique: FOlbmTpoNKCtwJ5evrWRaA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2] xen/grant-table: Simplify the update to the per-vCPU
 maptrack freelist
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210608100824.25141-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3df3cc7b-a084-cc9c-5446-b662c884addd@suse.com>
Date: Wed, 9 Jun 2021 12:20:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210608100824.25141-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1PR01CA0018.eurprd01.prod.exchangelabs.com
 (2603:10a6:102::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 86132b79-acf9-4fe9-40ad-08d92b302bfe
X-MS-TrafficTypeDiagnostic: VI1PR04MB6861:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6861698E6EBC9F1DFB9599B0B3369@VI1PR04MB6861.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 86132b79-acf9-4fe9-40ad-08d92b302bfe
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 10:20:14.7542
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4Hs5SdSPWcG6u2eNCOHB7s8pqcgH7FI2Q6rZknHxFr8jLpTCW4OE1ldm50fcvxn1JchyiWeAs5xSckinPo3hIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6861

On 08.06.2021 12:08, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since XSA-228 (commit 02cbeeb62075 "gnttab: split maptrack lock
> to make it fulfill its purpose again"), v->maptrack_head,
> v->maptrack_tail and the content of the freelist are accessed with
> the lock v->maptrack_freelist_lock held.
> 
> Therefore it is no longer necessary to update the fields using cmpxchg(),
> nor to read them atomically.
> 
> Note that there are two cases where v->maptrack_tail is accessed without
> the lock. They both happen in get_maptrack_handle() when initializing
> the free list of the current vCPU. Therefore there is no possible race.
> 
> The code is now reworked to remove any use of cmpxchg() and read_atomic()
> when accessing the fields v->maptrack_{head, tail} as well as the
> freelist.
> 
> Take the opportunity to add a comment on top of the lock definition
> and explain what it protects.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one nit:

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -255,7 +255,13 @@ struct vcpu
>      /* VCPU paused by system controller. */
>      int              controller_pause_count;
>  
> -    /* Grant table map tracking. */
> +    /*
> +     * Grant table map tracking. The lock maptrack_freelist_lock
> +     * protects to:

I don't think you want "to" here.

Jan

> +     *  - The entries in the freelist
> +     *  - maptrack_head
> +     *  - maptrack_tail
> +     */
>      spinlock_t       maptrack_freelist_lock;
>      unsigned int     maptrack_head;
>      unsigned int     maptrack_tail;
> 



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:35:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:35:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139174.257420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvY2-0005AD-Ky; Wed, 09 Jun 2021 10:34:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139174.257420; Wed, 09 Jun 2021 10:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvY2-0005A6-HQ; Wed, 09 Jun 2021 10:34:58 +0000
Received: by outflank-mailman (input) for mailman id 139174;
 Wed, 09 Jun 2021 10:34:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqvY1-00059y-KZ
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:34:57 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6bc1069-6ebd-4b19-903d-eb1d4d06bae5;
 Wed, 09 Jun 2021 10:34:56 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2053.outbound.protection.outlook.com [104.47.14.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-1rLL0AnpNAy-Pp0hDhr6YQ-1; Wed, 09 Jun 2021 12:34:54 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5902.eurprd04.prod.outlook.com (2603:10a6:803:ed::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 10:34:52 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 10:34:52 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0088.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:18::28) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Wed, 9 Jun 2021 10:34:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6bc1069-6ebd-4b19-903d-eb1d4d06bae5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623234895;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=uis2epUXIm/AFKHpwnFZcYA2it1l1skFRqJ73nbBLAA=;
	b=TPyVcdzZNdFJmJZl5vWXkpl4hpmSDGNFLeNTDhrbbFlr+MPvEyMyRGeOtwty3pyqxDzTGF
	LjWGrbqDjbBykSeH3hZ44ZPuJzWQK93NeiEPYmHL3WsWVu0k4SXwYtDDpotHL1H2rUkru9
	FtL+Thf04TapH5WN2i8XbH5TAgSARrs=
X-MC-Unique: 1rLL0AnpNAy-Pp0hDhr6YQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LoeTU+JVn71uEqICoNBF7KkUeESjwZgJMAjmXNiUf7s9I/BJ0Utlb8Gjf6aUE12XBbfJcywQjZRzt5OwZRx/i79MGA+pmHuYmFSSk0t91Pg0XU1Y/5YF9tPvLnac7hQKMEztOJtYVo2QvPKrRfV57R9AISHZI/cbnue0LLR316C05tyvDyBzg0y12S+AmuKWXBgZyPHQ9jAXJUw6FdsXtMhF88X+BF0//JodVKz/ZmwarkQZRYeFZ8zgj9SdIs8RNpHTHUM+PT6j3b1IlopOpPTchI7KxEKZeDx3DIIb1pIKpH3Wwpw00amRoNZFT/Hz5A2K1gxIX5V0UPLseQzg2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uis2epUXIm/AFKHpwnFZcYA2it1l1skFRqJ73nbBLAA=;
 b=Q3lJSm0Oyq/s5q8kEib8aiS8Nye9MsIMQiCk3RQBrKZPqX6wdv3/tXmcJI38fNMeLMxMYMgUqU1r8vy6dqf/ch/IKn6gsjUYjSUJogTSgz+b7RP5VoZgY1yh1qMPfkz9gDl3aIp72TvFr4Xm33Wx/NKHEmjlL5dFd7CXDZ3pem3TWqhi7ZUexmJoPhCKa9OyJoTWQ/NS02OPZi0JnjuwHCt0+fLli04g3Zx2sDT2zV5PwNTlzxRFTGz8QOX+1AlCf9RD44joDdRBQk3ppV18H2jZ0vqxI1eL5VfJ/X1cL8tij7eDzLTpw5raAz8zrJk+1TwtCPMVzoAezMgPqXbkEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: mark hypercall argument regs clobbering for intended
 fall-through
Message-ID: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
Date: Wed, 9 Jun 2021 12:34:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0088.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:18::28) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 44707495-6933-4989-90e1-08d92b323706
X-MS-TrafficTypeDiagnostic: VI1PR04MB5902:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5902EF31AAF8E45ACDF94EF2B3369@VI1PR04MB5902.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 44707495-6933-4989-90e1-08d92b323706
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 10:34:52.2481
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zD8lLBpLF/5OFL/jq9BTdIkzJ6WtqyGc1HxkRQ47KLWrJ3JYh5yj/TZlTaPBOxXSAkJ4wfTDqG6I+pZvBr1Xqw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5902

The CIDs below are all for the PV side of things, but the change also
covers the HVM side.

Coverity-ID: 1485896, 1485901, 1485906, 1485910, 1485911, 
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Let's see whether Coverity actually understands the (relatively) new
pseudo-keyword.

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -246,11 +246,11 @@ int hvm_hypercall(struct cpu_user_regs *
         /* Deliberately corrupt parameter regs not used by this hypercall. */
         switch ( hypercall_args_table[eax].native )
         {
-        case 0: rdi = 0xdeadbeefdeadf00dUL;
-        case 1: rsi = 0xdeadbeefdeadf00dUL;
-        case 2: rdx = 0xdeadbeefdeadf00dUL;
-        case 3: r10 = 0xdeadbeefdeadf00dUL;
-        case 4: r8 = 0xdeadbeefdeadf00dUL;
+        case 0: rdi = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 1: rsi = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 2: rdx = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 3: r10 = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 4: r8 = 0xdeadbeefdeadf00dUL; fallthrough;
         case 5: r9 = 0xdeadbeefdeadf00dUL;
         }
 #endif
@@ -264,11 +264,11 @@ int hvm_hypercall(struct cpu_user_regs *
             /* Deliberately corrupt parameter regs used by this hypercall. */
             switch ( hypercall_args_table[eax].native )
             {
-            case 6: regs->r9  = 0xdeadbeefdeadf00dUL;
-            case 5: regs->r8  = 0xdeadbeefdeadf00dUL;
-            case 4: regs->r10 = 0xdeadbeefdeadf00dUL;
-            case 3: regs->rdx = 0xdeadbeefdeadf00dUL;
-            case 2: regs->rsi = 0xdeadbeefdeadf00dUL;
+            case 6: regs->r9  = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 5: regs->r8  = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 4: regs->r10 = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 3: regs->rdx = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 2: regs->rsi = 0xdeadbeefdeadf00dUL; fallthrough;
             case 1: regs->rdi = 0xdeadbeefdeadf00dUL;
             }
         }
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -149,11 +149,11 @@ void pv_hypercall(struct cpu_user_regs *
         /* Deliberately corrupt parameter regs not used by this hypercall. */
         switch ( hypercall_args_table[eax].native )
         {
-        case 0: rdi = 0xdeadbeefdeadf00dUL;
-        case 1: rsi = 0xdeadbeefdeadf00dUL;
-        case 2: rdx = 0xdeadbeefdeadf00dUL;
-        case 3: r10 = 0xdeadbeefdeadf00dUL;
-        case 4: r8 = 0xdeadbeefdeadf00dUL;
+        case 0: rdi = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 1: rsi = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 2: rdx = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 3: r10 = 0xdeadbeefdeadf00dUL; fallthrough;
+        case 4: r8 = 0xdeadbeefdeadf00dUL; fallthrough;
         case 5: r9 = 0xdeadbeefdeadf00dUL;
         }
 #endif
@@ -172,11 +172,11 @@ void pv_hypercall(struct cpu_user_regs *
             /* Deliberately corrupt parameter regs used by this hypercall. */
             switch ( hypercall_args_table[eax].native )
             {
-            case 6: regs->r9  = 0xdeadbeefdeadf00dUL;
-            case 5: regs->r8  = 0xdeadbeefdeadf00dUL;
-            case 4: regs->r10 = 0xdeadbeefdeadf00dUL;
-            case 3: regs->rdx = 0xdeadbeefdeadf00dUL;
-            case 2: regs->rsi = 0xdeadbeefdeadf00dUL;
+            case 6: regs->r9  = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 5: regs->r8  = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 4: regs->r10 = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 3: regs->rdx = 0xdeadbeefdeadf00dUL; fallthrough;
+            case 2: regs->rsi = 0xdeadbeefdeadf00dUL; fallthrough;
             case 1: regs->rdi = 0xdeadbeefdeadf00dUL;
             }
         }



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:36:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:36:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139181.257431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvZo-0005o5-6c; Wed, 09 Jun 2021 10:36:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139181.257431; Wed, 09 Jun 2021 10:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvZo-0005ny-3l; Wed, 09 Jun 2021 10:36:48 +0000
Received: by outflank-mailman (input) for mailman id 139181;
 Wed, 09 Jun 2021 10:36:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqvZm-0005ns-Pa
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:36:46 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b800456-4bf6-457c-af08-5ffd037f8023;
 Wed, 09 Jun 2021 10:36:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b800456-4bf6-457c-af08-5ffd037f8023
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623235005;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=UVGTGEoVd/pxTWlHFP0Rx8muoqsikf6KwEk5ZWvV6uw=;
  b=coy4mgFbhFJgdEHsAqOT08zJG1KcdxSOySAeIJihkehmC9lhDSlw1gkg
   Iqu9qnbtJl0kv6GLhyC9yKxEbfqNTf+NU0CEyesFs5eKQ2GWKJYEWBZ7E
   7VigHGfIfMzn4ebQF4Am94nuMQlDLA2avehGKNH+L3Qbtj3B+3CPyhb20
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SCiWo8t/ytyRlV1dW+4fmP8RumclkLCOZZ1g09tLbYuan3SHv0DkZGhXJTQJHTN1LGsjzvliNb
 HU9dChkh00egfuyMf8gRSyuBvJ+Gi2TmSg+PgkP0i0Ynb85IG2X6LerrvgCppur19bP7ZHm8pY
 n7u8EP6evkDc+PO+8CyMMDXlc9uia8VUPdOAPZD7iodvptPP7nJ87HDsuvEmReYDnxrcGh2752
 HLwZXtRfNnQnHgv38ghYqlVdF66NZeoyvRa9QEXEU45v+qI6ESOUoyPtDRI9nz1OVxTHsHH6Gn
 /QM=
X-SBRS: 5.1
X-MesageID: 45461785
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,260,1616472000"; 
   d="scan'208";a="45461785"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z3pm6iPt89FWVqiFWDqIrbJx9ge3AtMUAuiVPjSzyE1DoBLMx8lVAerzQ682UiIcCye/Nu+MeI3DvmgshhFLMGf14fs/zStXEZun1gJ8ShQOrIV/tCPBYxlagYYDIgzWPkfy8CydnPbRD09TIYWq7FMi8ko72aJ8oUmnDpsMEp+r39tzJk05NEY4n9H5PiYHiZtY0muoXlMWimZMX5h4YrDzoCk+Ydubn1MYBzchmGMEnx5KjMeO+A4yzQ3k46qVH6t8PcMqE1AUKZ0EZw/JsIq7Ao5XUW2V4FcwXEN7dTUbsIAlqRgzsby8xzrDl1RLzyYOa1QDImTralEFnP9mdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7BihrOTn/kOw2O9WjKKPzFDYRCQIKsFGXWwK8nP4X7w=;
 b=QUXnQQOaQBgtq1wn3x+sAJzZC+FcM98nKJoEWKMa2Ai3wAy0pLk6n//VIMhy6D94Fvq/ndrAAGO7IqBY+T1bejEJrVSQ2viD+xmQBm1VpsX6BBnkDewF29MK7tEBMJtG4LKVRjnPsgWmoPtHDM8AJKNPq+rG9gKp/oCZENTjoasSclvHKSrXEBa4S5V9Pfm5IJmKPnMdQz4PvjDNwxRkDwWWCtGen5W/6F8H1iNz/k//4/5tzH2DPp8enb4GOEMSi74oECjwJKwPiRINRlf68yma185i5Phlkin+RacjUqQmuk245wiwd8P4dtOSrFpyHPNoLiGBh3pcyYLa00IZ9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7BihrOTn/kOw2O9WjKKPzFDYRCQIKsFGXWwK8nP4X7w=;
 b=QZQftUO7EgYMkS0z9pUAOxHUKQtKHNi8cOb2a9joMpwd8n/nZPOxMYQmsnajI1M57PWbZBpur7OCp/o7BY6q5pbg69F0bGm7zvvi1/U0kySi0EX0FjOfD8QSRop5LheXxd3up1dpwdPVkUgGO2kSkSY3LZu/K2BmbFtg71AZVxc=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
Message-ID: <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
Date: Wed, 9 Jun 2021 11:36:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0155.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c0b745c5-fe1c-4e47-dfc4-08d92b32713f
X-MS-TrafficTypeDiagnostic: BYAPR03MB4871:
X-Microsoft-Antispam-PRVS: <BYAPR03MB487166F426E76DE378DEDCBEBA369@BYAPR03MB4871.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c0b745c5-fe1c-4e47-dfc4-08d92b32713f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 10:36:30.1314
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: t+UWWrlZs3d6d6kONc7FaWLihFL4QB2a4Pyj4HwTC1FBT5DxrB9TQI5ws9CU6pwJTBhblvEJh+B+Uc+utd/CsE0xb2vr+xDQGJky36ZQ/rM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4871
X-OriginatorOrg: citrix.com

On 09/06/2021 10:26, Jan Beulich wrote:
> The present abuse of the completion interrupt does not only stand in the
> way of, down the road, using it for its actual purpose, but also
> requires holding the IOMMU lock while waiting for command completion,
> limiting parallelism and keeping interrupts off for non-negligible
> periods of time. Have the IOMMU do an ordinary memory write instead of
> signaling an otherwise disabled interrupt (by just updating a status
> register bit).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

While I agree with the direction of the patch, some of the details could
do with improvement.

>
> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> @@ -20,6 +20,9 @@
>  #include "iommu.h"
>  #include "../ats.h"
> 
> +#define CMD_COMPLETION_INIT 0
> +#define CMD_COMPLETION_DONE 1
> +
>  static void send_iommu_command(struct amd_iommu *iommu,
>                                 const uint32_t cmd[4])
>  {
> @@ -49,28 +52,31 @@ static void send_iommu_command(struct am
>  static void flush_command_buffer(struct amd_iommu *iommu,
>                                   unsigned int timeout_base)
>  {
> +    static DEFINE_PER_CPU(uint64_t, poll_slot);
> +    uint64_t *this_poll_slot = &this_cpu(poll_slot);
> +    paddr_t addr = virt_to_maddr(this_poll_slot);
>      uint32_t cmd[4];
>      s_time_t start, timeout;
>      static unsigned int __read_mostly threshold = 1;
> 
> -    /* RW1C 'ComWaitInt' in status register */
> -    writel(IOMMU_STATUS_COMP_WAIT_INT,
> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> -
> -    /* send an empty COMPLETION_WAIT command to flush command buffer */
> -    cmd[3] = cmd[2] = 0;
> -    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
> +    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
> +
> +    /* send a COMPLETION_WAIT command to flush command buffer */
> +    cmd[0] = addr;
> +    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
> +                         IOMMU_COMP_WAIT_S_FLAG_MASK,
> +                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);

set_field_in_reg_u32() is a disaster of a function - both in terms of
semantics, and code gen - and needs to be purged from the code.

It is a shame we don't have a real struct for objects in the command
buffer, but in lieu of that, this is just

    cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;

which is the direction that previous cleanup has gone.

There are no current users of IOMMU_COMP_WAIT_S_FLAG_SHIFT, and ...

> +    cmd[1] = addr >> 32;
> +    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
>                           IOMMU_CMD_OPCODE_MASK,
>                           IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
> -    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
> -                         IOMMU_COMP_WAIT_I_FLAG_MASK,
> -                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);

... this drops the final use of IOMMU_COMP_WAIT_I_FLAG_SHIFT, so both
should be dropped.

As for IOMMU_CMD_OPCODE_SHIFT, that can't be dropped yet, but it would
still be better to use

    cmd[1] = (addr >> 32) | MASK_INSR(IOMMU_CMD_COMPLETION_WAIT,
                                      IOMMU_CMD_OPCODE_MASK);

in the short term.

~Andrew

P.S. An observation on cmd[1]: AMD IOMMUs don't actually work for a
physical address width of >52 bits, despite some support along these
lines elsewhere in the spec.



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:47:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139190.257443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvjy-0007K9-6V; Wed, 09 Jun 2021 10:47:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139190.257443; Wed, 09 Jun 2021 10:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvjy-0007K2-3Z; Wed, 09 Jun 2021 10:47:18 +0000
Received: by outflank-mailman (input) for mailman id 139190;
 Wed, 09 Jun 2021 10:47:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lqvjx-0007Jw-74
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:47:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqvjw-0005JT-TG; Wed, 09 Jun 2021 10:47:16 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lqvjw-0002QQ-Hu; Wed, 09 Jun 2021 10:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JTNXSqGFm09oaNbsntUhAmOSel3ZY+XKE66APWspejc=; b=gk9s33y8YFwLzl9WAfcILUtAW6
	yCUVU2htzUIfDg0HmdnuIekWPnANDto5CkrJkuTj06sYgC42trdQHpot7KfCOKy8RevaYtggSMNgA
	VL+vc/lI3dHsE/J90jUY9Nl2GrIBb0B2FjhGbtU5hSUJ4oDgIyTJMuBJkIbXfmjwWTps=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>, Penny Zheng
 <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>, nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
 <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>
 <BAC8BC8D-9CD6-4857-88C0-7DCE9267EF0E@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e3a81b21-fd11-852c-aed7-25e71e4b5539@xen.org>
Date: Wed, 9 Jun 2021 11:47:14 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <BAC8BC8D-9CD6-4857-88C0-7DCE9267EF0E@arm.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 09/06/2021 10:56, Bertrand Marquis wrote:
> Hi All,

Hi,

>> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org 
>> <mailto:julien@xen.org>> wrote:
>> Feel free to propose one. I suggested using /reserved-memory because 
>> this is the approach that makes the most sense to me (see my reply above).
>>
>> TBH, even after your explanation, I am still a bit puzzled as to why 
>> /reserved-memory cannot be leveraged to exclude a domain's region from 
>> the hypervisor allocator.
> 
> I really tend to think that the original solution from Penny is for now 
> the easiest and simplest to document.

I can live with Penny's solution so long as we don't duplicate the parsing 
and don't create a new data structure in Xen for the new type of 
reserved memory. However...

> 
> In the long term, directly using "memory" and specifying the address 
> range in it is the most natural solution, but that would clash with 
> its current usage.

... we are already going to have quite some churn to support the system 
device-tree. So I don't want yet another binding to be invented in a few 
months' time.

IOW, the new binding should be a long-term solution rather than a 
temporary one to fill the gap until we agree on what you call "domain v2".
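To make the discussion concrete, a /reserved-memory layout of the kind being debated could look roughly like this. This is a hypothetical sketch only: the node name, addresses, and the compatible string are invented for illustration, not an agreed binding.

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Region excluded from the Xen heap allocator and handed
		 * to a statically allocated domain.  The compatible string
		 * below is made up -- no such binding has been agreed. */
		domU1-reserved@60000000 {
			compatible = "xen,static-memory-domain";
			reg = <0x0 0x60000000 0x0 0x20000000>;
			no-map;
		};
	};
};
```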

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 10:53:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 10:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139197.257456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvq5-0000IM-TO; Wed, 09 Jun 2021 10:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139197.257456; Wed, 09 Jun 2021 10:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqvq5-0000IF-QF; Wed, 09 Jun 2021 10:53:37 +0000
Received: by outflank-mailman (input) for mailman id 139197;
 Wed, 09 Jun 2021 10:53:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqvq4-0000I9-FX
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 10:53:36 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba8a8288-9f46-4815-b7d1-15fce8e97464;
 Wed, 09 Jun 2021 10:53:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba8a8288-9f46-4815-b7d1-15fce8e97464
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623236015;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=FvtKM8hZcyiOKUxPEBrh3kQZZTzUXlzcfWC1I/fdj5s=;
  b=AP5lr72YR1FCpRrn8gnZ26KQHVnepzlLbe5XMHm2vZblPZvftybQTilc
   h48xg7AIGGq5mO3XT6ePzh3p8mRbcPk5iC8Bz0PJFOda/MNMQlvkdfGqZ
   WT1i7o97BaDzSb4ZzizPWZrxOoujJPtyZwG7+rKCKeHC5OYe5WljWi5kr
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 1kgU9iqzK8ymFuM9n5R/Z4I3cglnAgEhhyfmGIq73u6wBZ408AnlOR+I1oxCWiLa3GSqtGiLZX
 7y6pQCXvK6NzIfvmwaD2MnV7kWyspNuHps0Gyu4xaRFWWlfDO2lQS9a34tX8Qs0v6THQaUOMVv
 htTNY92sNewatbB8p/do7BQCtRC3feYrFwE6MIYr/r4HU6oORy2l9MMhfS5GOLIY16jiWt0N7j
 ZNCcWM9GOMQTQlsgmGoRQ/ohSgG+KcsK5ODdx+stQvISThBaqgoe+0zP/9m1BWJ+XVF72uolZG
 ldo=
X-SBRS: 5.1
X-MesageID: 47290813
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:+FEuc6DhWs+F2FDlHehBsceALOsnbusQ8zAXPh9KJyC9I/b2qy
 nxppgmPH/P6Ar4WBkb6LS90dq7MA3hHPlOkPYs1NaZLXXbUQ6TTb2KgrGSuAEIdxeOkNK1kJ
 0QDpSWa+eAf2SS7/yKmDVQeuxIqLLsndHK9IWuu0uFDzsaDJ2Ihz0JeTpzeXcGPTWua6BJca
 Z0qvA33QZJLh8sH7SG7zQ+Lqf+juyOsKijTQ8NBhYh5gXLpTS06ITiGxzd+hsFSTtAzZor7G
 CAymXCl+SemsD+7iWZ+37Y7pxQltek4txfBPaUgsxQDjn3kA6naKloRrXHljEop+OE7kosjb
 D30lkdFvU2z0mUUnC+oBPr1QWl+i0p8WXexViRhmamidDlRRohYvAxx75xQ1/80Q4Nrdt82K
 VE0yayrJxMFy7Nmyz7+pzhSwxqrEypunAv+NRjzEC3abFuLIO5kLZvu3+8SPw7bWTHAcEcYa
 lT5fjnlbNrmQjwVQGBgoEHq+bcK0jaHX+9MwI/U4KuomBrdUtCvj0lLfok7zw9HaIGOu55Dt
 v/Q+1VfZF1P4IrhPFGdas8qfXeMB2EffuaChPiHb2gLtBdB07w
X-IronPort-AV: E=Sophos;i="5.83,260,1616472000"; 
   d="scan'208";a="47290813"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=glrVPH/NGipcmYS+hBER8wcgvMafYEc7sTSeNRxka0esobIsIAe4g9Ogp1ior4rBRXhcfnhwl92Sp4pr5EC86aJqWfRZ6NMw3FLd7Sfpph1+HdSPF3LM0tfsUGZlPHukbsg8vc3VLflaTtt+SvypUjMFY7kiK+76ydfgm2qsm3B69PRrxHjTJvMw7pjjfQrUZiGSaTtF0J+6A7de7sDc9fdvLW8OOy2nesNprwCHk0bJSje2SbJtiiH4vSxe6OrRKd54YgdIKzv5zVB+R+nTyyYpMo+iyI30qJDetwfjht71bTQ+q5qyFW3eyMRV7qHLbfjorDBx//ET9nDGFrUsmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FvtKM8hZcyiOKUxPEBrh3kQZZTzUXlzcfWC1I/fdj5s=;
 b=LoEBB77bY8uk5mKXG/wNAfN8HHHOAdsV5R1BQmRZHvzhosY5YPikJ2tuAehuT4wR+M4jYmlTMqnyB+nxaKISoB3uWNMMt7o3/S1H1Et5IxozgGRvt8JhOBSkp96t8HxY0IA9dON8WWqXK4UzJY/GYX/m5S39kTfkaTNegNM79xZuozVlCZLDsaqmkt+ldLHx2mtMZzvTF+9bX80kRRdOMMfs2kjn1BdGJf0p/q5McQnzblCN7C84tAYXqBQnF/zu3mp9bSxCx/6pTryAHQhkmS99uf/ebap3lj1lEjX0vFO9bPyG3QtmgClND5xOLqJA7VFrkElWkmaDayeR0OzMaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FvtKM8hZcyiOKUxPEBrh3kQZZTzUXlzcfWC1I/fdj5s=;
 b=LkfzBpzyyEUJAWX5MzJltCYcs+lf7BRJDNhR/Rx2nL2PL4cTFgXll3UaGBbeUR70N6HZpdxjgUk2Nz7n9cYGz3p3uQRlq32RCsgmiJcsa8uIAIO7f/XhPCV1bz3h+K9CsWWbMcpONl5+APNUd0DCUHHuVF6Kk7FLzXeKGC9+fQE=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands
Message-ID: <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
Date: Wed, 9 Jun 2021 11:53:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0291.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:196::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 97d58c6e-ece4-4763-3e36-08d92b34d231
X-MS-TrafficTypeDiagnostic: BYAPR03MB4247:
X-Microsoft-Antispam-PRVS: <BYAPR03MB42473E8739BE1F6203CC066DBA369@BYAPR03MB4247.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: vWGmCXexLoPO6f+8EpUCC+FGhz6D5RRaDiOGpZZOntAIoaulBr1/WKTsCC7vgS/X/X/YHu+v0sKypC/bAJasP4huWY8fNPEiscfI6I4DrB0wISrVkgmwwYnXi5qtd1xTUX1R24YCT3JUhLqRy5DUU0CXXn3QiW8CUHjgjpknd+ARM9EPms8OWkjj0BjfmzJ1apI/HOthH3latITcTw9zhnMhijTJecfRK2Ju7TXibnhyC3SsWPLAMbZ0hCL/xqBiCROJbdDZ2GsNZPYeHVvjBB0A955ptmLsrDztFJqb9xRaCufomWAt1lwVWNe2g+hlhCY8zXRArQc+M5Twd4Vg0450QibuLVbzYkZndnENVS74POcJOro4KPhRbiExDpOEuKNTPiR+tXl0apJeomBK/c/gVg+TDY8YLu1f1nAsgJLY4l4q82IYRnAtCAJbZq3ilUDyVhU5yO1UYFe7ssSEYyUQG9gfKkVhaIQiUyy6dAVJ2XEM1FdYeC9vFjKzzTKsZ8BVmR4JRzrJrGF8R+EaBQEnEXpWVmga2fpm8KL2Ta1nUvh/PvJF7RcQ4hn92gsYVkCtKL1tVtuGbWBFUoId96z6YZaDbAoArPMg4q5axOPo0iFmA1wfDJHhCLgS2amxg2Y4YmsReqysUYrWj8mzJMPdpAgzmSdD3OPhGN+KPBvXjytkG7ik6wsvr39YZRwh
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(346002)(136003)(39860400002)(376002)(66946007)(2616005)(86362001)(8676002)(66556008)(956004)(4326008)(53546011)(66476007)(16576012)(38100700002)(316002)(110136005)(5660300002)(31686004)(16526019)(83380400001)(2906002)(478600001)(26005)(6486002)(6666004)(8936002)(36756003)(186003)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?N0F4RFh6NE5acytHUzFaNmxKdWo1WmVkR2dlVFlBM1EyT2VDRnhIUGxVbVRz?=
 =?utf-8?B?TmJWQjNaWExPSFRXZDhvd2hxT1JRRWFRellhRGdOb003K3RYb1pITWIrQTFp?=
 =?utf-8?B?RzhQd3l0ZmFkZmRqazJjSTQvR09yaWI3MEN3OUdkSlZscVoweC9ZdmlGMGFO?=
 =?utf-8?B?WTdqTk44ajgyQU44bTY3Ri96NGYyN1VKZ21GNkduRDB5NEFFR3ViaDViK090?=
 =?utf-8?B?RkJ1SmZ2YUdvOEZqZmk3MmdRNmIyNTZXcW9mc1B6T2twT1Izc050Q1Y1SmpY?=
 =?utf-8?B?Wi80SjBWL1pCenpJQ0NtS0lWTlZwbnV3elRVTFdVQ0ZwRU0xVlZKZ0d1RC91?=
 =?utf-8?B?VkhabkNUWTZ3ZTNoS1c5WFNaeUZ6NFFpLzlhM2hUQ3V4a3UvSTVTc0VFOGpq?=
 =?utf-8?B?ZWd5eUNlV0FQSHlDUWRrV2I2elFLdXNSTEdNRmhHQ0NkSzdncjE0eXk1bUl4?=
 =?utf-8?B?cFNaclpJZWpIa1BieDJ6ZGhEcEdRWE5GQ0xXVFNQRUt4WVFlQ1EyMFdXbkVF?=
 =?utf-8?B?Y2dPMVZBU2dvUmV5MkNyMko5VjdCRDkzdlUvOGYrdVB4NEp6WXFkdlpPcUtM?=
 =?utf-8?B?Ukwrd2FmTGpHVStKQWdUc2F0V1VTeFBxUEJiclNHVDFaQUVhQkpWSzF3aTcz?=
 =?utf-8?B?MnlnN2FwRDhZekFtMVlyVVJ6U3RPdURlZEJ2cjkwdjc1Z1FkRy9HWURsenVP?=
 =?utf-8?B?UEF4Y2Jkem1vMXZEZkZMbHFNbEtla0t0SnFFWVVPZS82NE52YkFlWndVd2dp?=
 =?utf-8?B?SlNFNWdKa1RXNEw4aGZvQ0FvdkVCSERQU09ySU1ZcWJZczhhVDJLSzlydXIv?=
 =?utf-8?B?NTdta2ZDelVoaHpDUHNuSmpoYTlwRUxlUlBNMWphdFB2SnVMWWdSeEZzZHI0?=
 =?utf-8?B?SFhZVTZDYVpCbVhJNXZnZXZmVS9nVUlDYyt4NjFlUG55T2R0QXR0N2RRUURI?=
 =?utf-8?B?Y3VxRXAyZi9SMTEybzlmQkpacStETUhEdW9hd2dQMWRIdUl2Nkpzamd4RTlR?=
 =?utf-8?B?SXdUU3JuSlBIY0ZaS1ZNM1VhaFBKRGdId1kwMDBSc1B3V0xTZFBQQ0RPbVFl?=
 =?utf-8?B?Qy9nc0JoM0xiNC9sRlZoTHlnL2wrODFoT1pOOFNGeFpTWlNsdmdORHBPMGNp?=
 =?utf-8?B?TWIvQ1Q5KytDVUZGSGlmM25Vd0h2ZXp3YmFqZmpObFNsdVQwZzRJUEViU2hn?=
 =?utf-8?B?Y0FON0hHcFpHWDMzVFh3a2RwaE4xUkhMYlVMeU1TSHhCMlpGV2dlU2N2dUtl?=
 =?utf-8?B?OTgzb3JnV3pxZWNwQ0FMMmRuNUpoR3laRTVoVWs4ZC81eTIyTGtNNEVleFpV?=
 =?utf-8?B?UTk3OW1zRVdXY2ZWTitRSXJmTW1XL0ZJWVMwNEZQa2tXd3hYOVdUemZCeGMw?=
 =?utf-8?B?cmM4eFU3a0ZHbXBrMXNVKzJyMFZQZVhzR3ZsUFluT2pjczJjSDBNZjV6Zzlv?=
 =?utf-8?B?b25JV2d4Tk43SldmZDBsWENQSTZna1c3MFNqRzQ2RFlnWXp4aHc1VzBidWx3?=
 =?utf-8?B?LzhRNXk5ZkwwQ1BXL0FvL0ZYZlRwcTZZVXI4NVBwQmhlOVVkWW53WEU5cHNy?=
 =?utf-8?B?WDRqVWVqT0dwT3RYY0w2dm1nT0JLTFA5Q1Iwd0tXNlhzSEQzdHFmQ0pCZ0dt?=
 =?utf-8?B?cWM5QW5vcXVUa2pRN3dIK1lkZzFhTDVuMVVZWmR0N1A1di9pNnhCL2xDam1s?=
 =?utf-8?B?ZnRJK0RzOXVnUEYrR0pVL1ZEa3V3K2g3RlppL0JKUkpIKzhpNmcxeHFNNUVM?=
 =?utf-8?Q?Njtg8AHUvBd7Qw+MSv+6edLMsKVD5vlzi6DRKP/?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 97d58c6e-ece4-4763-3e36-08d92b34d231
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 10:53:31.6021
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pTE8WlcVAHSarO+YwcyjisEr0lk70AQYXwBE8bHZvAhC8T66gRWJxft/GJkD2GgH25f7c7LsMvFzORdir81f+muZfh6M0uATE2aVIT0TTE0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4247
X-OriginatorOrg: citrix.com

On 09/06/2021 10:27, Jan Beulich wrote:
> It appears unhelpful to me for flush_command_buffer() to block all
> progress elsewhere for the given IOMMU by holding its lock while
> waiting for command completion. Unless the lock is already held,
> acquire it in send_iommu_command(). Release it in all cases in
> flush_command_buffer(), before actually starting the wait loop.
>
> Some of the involved functions did/do get called with the lock already
> held: For amd_iommu_flush_intremap() we can simply move the locking
> inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
> the lock now gets dropped in the course of the function's operation.
>
> Where touching function headers anyway, also adjust types used to be
> better in line with our coding style and, where applicable, the
> functions' callers.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Honestly, I'm -2 to this.  It is horrible obfuscation of the logic.

I agree with the premise of not holding the lock when we don't need to,
but moving the lock/unlocks into different functions makes it impossible
to follow.  (Also, the static analysers are going to scream at this
patch, and rightfully so IMO.)

send_iommu_command() is static, as is flush_command_buffer(), so there
is no need to split the locking like this AFAICT.

Instead, each amd_iommu_flush_* external accessor knows exactly what it
is doing, and whether a wait descriptor is wanted.
flush_command_buffer() wants merging into send_iommu_command() as a
"bool wait" parameter, at which point the locking and unlocking moves
entirely into send_iommu_command() with no pointer games.
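The shape being proposed could be sketched roughly as below. Everything here (the struct, the lock/queue helpers, the instant-completion model) is a simplified stand-in, not the real Xen AMD IOMMU code; the point is only that a "bool wait" parameter lets the locking live entirely inside one function, with the completion poll happening after the lock is dropped.

```c
/* Sketch: all locking confined to send_iommu_command(), optional wait.
 * Types and helpers are simplified stand-ins for the real Xen code. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct amd_iommu {
    int lock;                  /* stand-in for Xen's spinlock_t */
    volatile int last_done;    /* stand-in for completion signalling */
};

static void cmd_lock(struct amd_iommu *iommu)
{
    assert(!iommu->lock);      /* no recursive locking in this sketch */
    iommu->lock = 1;
}

static void cmd_unlock(struct amd_iommu *iommu)
{
    iommu->lock = 0;
}

/* Ring one command into the command buffer (contents elided). */
static void queue_command(struct amd_iommu *iommu, const uint32_t cmd[4])
{
    (void)cmd;
    iommu->last_done = 1;      /* pretend the hardware completes instantly */
}

/*
 * The single entry point callers use.  The lock protects only the ring
 * manipulation; the wait for completion happens after the lock is
 * released, so other CPUs can queue commands while this one polls.
 */
static void send_iommu_command(struct amd_iommu *iommu,
                               const uint32_t cmd[4], bool wait)
{
    cmd_lock(iommu);
    queue_command(iommu, cmd);
    if ( wait )
        queue_command(iommu, cmd); /* would be a COMPLETION_WAIT descriptor */
    cmd_unlock(iommu);

    if ( wait )
        while ( !iommu->last_done )
            ;                      /* poll outside the lock */
}
```

Each external accessor then decides for itself whether it passes wait=true, and no lock ever crosses a function boundary.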

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139210.257516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG0-0004Ro-7m; Wed, 09 Jun 2021 11:20:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139210.257516; Wed, 09 Jun 2021 11:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG0-0004Rc-2d; Wed, 09 Jun 2021 11:20:24 +0000
Received: by outflank-mailman (input) for mailman id 139210;
 Wed, 09 Jun 2021 11:20:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwFz-0003dc-4x
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:23 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd46054f-49bf-4788-b830-e142389e01de;
 Wed, 09 Jun 2021 11:20:16 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 776951FD3C;
 Wed,  9 Jun 2021 11:20:15 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id CCF60118DD;
 Wed,  9 Jun 2021 11:20:14 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id yDM8Me6jwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd46054f-49bf-4788-b830-e142389e01de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237615; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nWjkH2kYcjIAyzh3jD5msWYfe6YCZBpfuuajwa0m0a0=;
	b=IAtem6UO8ka5vH8Cg0PiSS22RvapW/ocX82RQpxPce0FBd+pNl5jb95flgADByK2rDHLNJ
	TgWWF4pzyQr9HgUSWj/A+2DROM9qvqfJkHjg5jrOJR73IgD07UeB7RsvawTyl0Rdu6+TSo
	b6c+g9ojK56kuA4P6Kt5boUu456N7KU=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237615;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nWjkH2kYcjIAyzh3jD5msWYfe6YCZBpfuuajwa0m0a0=;
	b=lXxXVlzUjbUFpw5VOvZWcN4FvwlBG+8u52/vI+U5dqir13Xr1GeRq2LFg+XWZsi3Av4+Y3
	8D6XS5wGDQOOuBBA==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 2/9] drm/exynos: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:05 +0200
Message-Id: <20210609112012.10019-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
for using DRM helpers for various mmap callbacks.

The respective exynos functions are being removed. The file_operations
structure exynos_drm_driver_fops is now being created by the helper macro
DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/exynos/exynos_drm_drv.c   | 13 ++-----
 drivers/gpu/drm/exynos/exynos_drm_fbdev.c | 20 ++---------
 drivers/gpu/drm/exynos/exynos_drm_gem.c   | 43 +++++------------------
 drivers/gpu/drm/exynos/exynos_drm_gem.h   |  5 ---
 4 files changed, 13 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
index e60257f1f24b..1d46751cad02 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
@@ -102,16 +102,7 @@ static const struct drm_ioctl_desc exynos_ioctls[] = {
 			DRM_RENDER_ALLOW),
 };
 
-static const struct file_operations exynos_drm_driver_fops = {
-	.owner		= THIS_MODULE,
-	.open		= drm_open,
-	.mmap		= exynos_drm_gem_mmap,
-	.poll		= drm_poll,
-	.read		= drm_read,
-	.unlocked_ioctl	= drm_ioctl,
-	.compat_ioctl = drm_compat_ioctl,
-	.release	= drm_release,
-};
+DEFINE_DRM_GEM_FOPS(exynos_drm_driver_fops);
 
 static const struct drm_driver exynos_drm_driver = {
 	.driver_features	= DRIVER_MODESET | DRIVER_GEM
@@ -124,7 +115,7 @@ static const struct drm_driver exynos_drm_driver = {
 	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
 	.gem_prime_import	= exynos_drm_gem_prime_import,
 	.gem_prime_import_sg_table	= exynos_drm_gem_prime_import_sg_table,
-	.gem_prime_mmap		= exynos_drm_gem_prime_mmap,
+	.gem_prime_mmap		= drm_gem_prime_mmap,
 	.ioctls			= exynos_ioctls,
 	.num_ioctls		= ARRAY_SIZE(exynos_ioctls),
 	.fops			= &exynos_drm_driver_fops,
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
index 5147f5929be7..02c97b9ca926 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
@@ -15,6 +15,7 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_prime.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/exynos_drm.h>
 
@@ -39,25 +40,8 @@ static int exynos_drm_fb_mmap(struct fb_info *info,
 	struct drm_fb_helper *helper = info->par;
 	struct exynos_drm_fbdev *exynos_fbd = to_exynos_fbdev(helper);
 	struct exynos_drm_gem *exynos_gem = exynos_fbd->exynos_gem;
-	unsigned long vm_size;
-	int ret;
-
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
-
-	vm_size = vma->vm_end - vma->vm_start;
-
-	if (vm_size > exynos_gem->size)
-		return -EINVAL;
 
-	ret = dma_mmap_attrs(to_dma_dev(helper->dev), vma, exynos_gem->cookie,
-			     exynos_gem->dma_addr, exynos_gem->size,
-			     exynos_gem->dma_attrs);
-	if (ret < 0) {
-		DRM_DEV_ERROR(to_dma_dev(helper->dev), "failed to mmap.\n");
-		return ret;
-	}
-
-	return 0;
+	return drm_gem_prime_mmap(&exynos_gem->base, vma);
 }
 
 static const struct fb_ops exynos_drm_fb_ops = {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 4396224227d1..c4b63902ee7a 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -17,6 +17,8 @@
 #include "exynos_drm_drv.h"
 #include "exynos_drm_gem.h"
 
+static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+
 static int exynos_drm_alloc_buf(struct exynos_drm_gem *exynos_gem, bool kvmap)
 {
 	struct drm_device *dev = exynos_gem->base.dev;
@@ -135,6 +137,7 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
 static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
 	.free = exynos_drm_gem_free_object,
 	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
+	.mmap = exynos_drm_gem_mmap,
 	.vm_ops = &exynos_drm_gem_vm_ops,
 };
 
@@ -354,12 +357,16 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
 	return 0;
 }
 
-static int exynos_drm_gem_mmap_obj(struct drm_gem_object *obj,
-				   struct vm_area_struct *vma)
+static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 {
 	struct exynos_drm_gem *exynos_gem = to_exynos_gem(obj);
 	int ret;
 
+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
+
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+
 	DRM_DEV_DEBUG_KMS(to_dma_dev(obj->dev), "flags = 0x%x\n",
 			  exynos_gem->flags);
 
@@ -385,26 +392,6 @@ static int exynos_drm_gem_mmap_obj(struct drm_gem_object *obj,
 	return ret;
 }
 
-int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_gem_object *obj;
-	int ret;
-
-	/* set vm_area_struct. */
-	ret = drm_gem_mmap(filp, vma);
-	if (ret < 0) {
-		DRM_ERROR("failed to mmap.\n");
-		return ret;
-	}
-
-	obj = vma->vm_private_data;
-
-	if (obj->import_attach)
-		return dma_buf_mmap(obj->dma_buf, vma, 0);
-
-	return exynos_drm_gem_mmap_obj(obj, vma);
-}
-
 /* low-level interface prime helpers */
 struct drm_gem_object *exynos_drm_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf)
@@ -466,15 +453,3 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	exynos_gem->sgt = sgt;
 	return &exynos_gem->base;
 }
-
-int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
-			      struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	return exynos_drm_gem_mmap_obj(obj, vma);
-}
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index a23272fb96fb..79d7e1a87419 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -96,9 +96,6 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
 			       struct drm_device *dev,
 			       struct drm_mode_create_dumb *args);
 
-/* set vm_flags and we can change the vm attribute to other one at here. */
-int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
 /* low-level interface prime helpers */
 struct drm_gem_object *exynos_drm_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
@@ -107,7 +104,5 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
-			      struct vm_area_struct *vma);
 
 #endif
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139207.257480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFx-0003dw-7V; Wed, 09 Jun 2021 11:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139207.257480; Wed, 09 Jun 2021 11:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFx-0003dp-4E; Wed, 09 Jun 2021 11:20:21 +0000
Received: by outflank-mailman (input) for mailman id 139207;
 Wed, 09 Jun 2021 11:20:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwFv-0003db-4M
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:19 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c94584d9-69f0-4241-9ee9-02f938e33b3c;
 Wed, 09 Jun 2021 11:20:15 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 322EA1FD2A;
 Wed,  9 Jun 2021 11:20:14 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8991F118DD;
 Wed,  9 Jun 2021 11:20:13 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id Pn27IO2jwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c94584d9-69f0-4241-9ee9-02f938e33b3c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237614; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=oCiBA5IQR9wf4+Cc+r3nE+xIhszdcMtCPYXz7tk1DUc=;
	b=fY/lBPn1RzheTsgcxOOcUlu2A6QxXJzVzaZKYGY8dQ/N9dkPZxAh+IzMlWy3Mdw4YcifeB
	euR+HK45GJ3f0u+/7nmbJqJDLXGEJmIB2cOWMU93FBBin5A1MqrE27Y9jBklc67vNLz8q3
	MEB05kirgOr7hJDXocRIMCqdq+d45FY=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237614;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=oCiBA5IQR9wf4+Cc+r3nE+xIhszdcMtCPYXz7tk1DUc=;
	b=qtPzutLGd6X37XBCAvAZ4AuMX1bEEdUc0wqN5VeNSoa0OrIM4LVgceME8x2WEfyyR258wQ
	Jhvg0YtWBnitxsBg==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 0/9] drm: Implement gem_prime_mmap with drm_gem_prime_mmap()
Date: Wed,  9 Jun 2021 13:20:03 +0200
Message-Id: <20210609112012.10019-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Replace all remaining implementations of struct drm_driver.gem_prime_mmap
with drm_gem_prime_mmap(). For each affected driver, move the mmap code
into struct drm_gem_object_funcs.mmap. With that change in place, create
the driver's struct file_operations via DEFINE_DRM_GEM_FOPS().

As a next step, the remaining drivers (e.g., Tegra) can be converted to use
drm_gem_prime_mmap() and drm_gem_mmap(). The default mmap code in
drm_gem_prime_mmap() can then be pushed into the affected drivers or a
helper function. The gem_prime_mmap hook can probably be removed at some
point.

Testing is welcome. I don't have all the necessary hardware.

Thomas Zimmermann (9):
  drm/etnaviv: Implement mmap as GEM object function
  drm/exynos: Implement mmap as GEM object function
  drm/mediatek: Implement mmap as GEM object function
  drm/msm: Implement mmap as GEM object function
  drm/qxl: Remove empty qxl_gem_prime_mmap()
  drm/vgem: Implement mmap as GEM object function
  drm/xen: Implement mmap as GEM object function
  drm/rockchip: Implement mmap as GEM object function
  drm: Update documentation and TODO of gem_prime_mmap hook

 Documentation/gpu/todo.rst                    |  11 --
 drivers/gpu/drm/etnaviv/etnaviv_drv.c         |  14 +--
 drivers/gpu/drm/etnaviv/etnaviv_drv.h         |   3 -
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  18 +--
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 ---
 drivers/gpu/drm/exynos/exynos_drm_drv.c       |  13 +--
 drivers/gpu/drm/exynos/exynos_drm_fbdev.c     |  20 +---
 drivers/gpu/drm/exynos/exynos_drm_gem.c       |  43 ++-----
 drivers/gpu/drm/exynos/exynos_drm_gem.h       |   5 -
 drivers/gpu/drm/mediatek/mtk_drm_drv.c        |  13 +--
 drivers/gpu/drm/mediatek/mtk_drm_gem.c        |  44 ++-----
 drivers/gpu/drm/mediatek/mtk_drm_gem.h        |   3 -
 drivers/gpu/drm/msm/msm_drv.c                 |  14 +--
 drivers/gpu/drm/msm/msm_drv.h                 |   1 -
 drivers/gpu/drm/msm/msm_fbdev.c               |  10 +-
 drivers/gpu/drm/msm/msm_gem.c                 |  67 +++++------
 drivers/gpu/drm/msm/msm_gem.h                 |   3 -
 drivers/gpu/drm/msm/msm_gem_prime.c           |  11 --
 drivers/gpu/drm/qxl/qxl_drv.c                 |   1 -
 drivers/gpu/drm/qxl/qxl_drv.h                 |   2 -
 drivers/gpu/drm/qxl/qxl_prime.c               |   6 -
 drivers/gpu/drm/rockchip/rockchip_drm_drv.c   |  13 +--
 drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c |   3 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c   |  44 ++-----
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h   |   7 --
 drivers/gpu/drm/vgem/vgem_drv.c               |  46 +-------
 drivers/gpu/drm/xen/xen_drm_front.c           |  16 +--
 drivers/gpu/drm/xen/xen_drm_front_gem.c       | 108 +++++++-----------
 drivers/gpu/drm/xen/xen_drm_front_gem.h       |   7 --
 include/drm/drm_drv.h                         |  11 +-
 30 files changed, 136 insertions(+), 434 deletions(-)


base-commit: 70e4d80795934312a3853a4f4f49445ce6db1271
prerequisite-patch-id: c2b2f08f0eccc9f5df0c0da49fa1d36267deb11d
prerequisite-patch-id: c67e5d886a47b7d0266d81100837557fda34cb24
--
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139208.257486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFx-0003ha-Iv; Wed, 09 Jun 2021 11:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139208.257486; Wed, 09 Jun 2021 11:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFx-0003gw-D7; Wed, 09 Jun 2021 11:20:21 +0000
Received: by outflank-mailman (input) for mailman id 139208;
 Wed, 09 Jun 2021 11:20:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwFv-0003dc-33
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:19 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2384f954-1f82-4d11-9f27-5f2373317b20;
 Wed, 09 Jun 2021 11:20:16 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D370E219BC;
 Wed,  9 Jun 2021 11:20:14 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 354B811A98;
 Wed,  9 Jun 2021 11:20:14 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 2DYmDO6jwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2384f954-1f82-4d11-9f27-5f2373317b20
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237614; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h+ZIKR66Gjb9uIHTGXu21Qs0ridWWVUt4g+4HcSLyfc=;
	b=XnWyLcXU2DzGKjVpWY+v6GsFXDEvAnsTBuiSFsLYtDqOuSuQ24o7BZ7onxGf1xLaPad5mU
	918hNEVR8evAVRyMQoWrhChlZiOEb+TV/XShyiMhFRl8SP//hv3k9Igf72r/48f0zcqDEc
	MPAr1LiOPOj6guYJivFP9U39T2osWL4=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237614;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h+ZIKR66Gjb9uIHTGXu21Qs0ridWWVUt4g+4HcSLyfc=;
	b=3hlvfUg2sc+GZSp3mXVxRN0jtCQ6ZZAWw26CXhEuVMNAOePsQpDVWMJIvwDLKIJh/YG3fs
	M2puEtDQ2CPjnHBg==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 1/9] drm/etnaviv: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:04 +0200
Message-Id: <20210609112012.10019-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the DRM helpers to be used for the various mmap callbacks.

The respective etnaviv functions are removed. The file_operations
structure fops is now created with the helper macro
DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/etnaviv/etnaviv_drv.c       | 14 ++------------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  3 ---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 18 +++++-------------
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 13 -------------
 4 files changed, 7 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
index f0a07278ad04..7dcc6392792d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
@@ -468,17 +468,7 @@ static const struct drm_ioctl_desc etnaviv_ioctls[] = {
 	ETNA_IOCTL(PM_QUERY_SIG, pm_query_sig, DRM_RENDER_ALLOW),
 };
 
-static const struct file_operations fops = {
-	.owner              = THIS_MODULE,
-	.open               = drm_open,
-	.release            = drm_release,
-	.unlocked_ioctl     = drm_ioctl,
-	.compat_ioctl       = drm_compat_ioctl,
-	.poll               = drm_poll,
-	.read               = drm_read,
-	.llseek             = no_llseek,
-	.mmap               = etnaviv_gem_mmap,
-};
+DEFINE_DRM_GEM_FOPS(fops);
 
 static const struct drm_driver etnaviv_drm_driver = {
 	.driver_features    = DRIVER_GEM | DRIVER_RENDER,
@@ -487,7 +477,7 @@ static const struct drm_driver etnaviv_drm_driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = etnaviv_gem_prime_import_sg_table,
-	.gem_prime_mmap     = etnaviv_gem_prime_mmap,
+	.gem_prime_mmap     = drm_gem_prime_mmap,
 #ifdef CONFIG_DEBUG_FS
 	.debugfs_init       = etnaviv_debugfs_init,
 #endif
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 003288ebd896..049ae87de9be 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -47,12 +47,9 @@ struct etnaviv_drm_private {
 int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 		struct drm_file *file);
 
-int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
-int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
 	struct dma_buf_attachment *attach, struct sg_table *sg);
 int etnaviv_gem_prime_pin(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index b8fa6ed3dd73..8f1b5af47dd6 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -130,8 +130,7 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
 {
 	pgprot_t vm_page_prot;
 
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
 
 	vm_page_prot = vm_get_page_prot(vma->vm_flags);
 
@@ -154,19 +153,11 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
 	return 0;
 }
 
-int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 {
-	struct etnaviv_gem_object *obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret) {
-		DBG("mmap failed: %d", ret);
-		return ret;
-	}
+	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
 
-	obj = to_etnaviv_bo(vma->vm_private_data);
-	return obj->ops->mmap(obj, vma);
+	return etnaviv_obj->ops->mmap(etnaviv_obj, vma);
 }
 
 static vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf)
@@ -567,6 +558,7 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
 	.unpin = etnaviv_gem_prime_unpin,
 	.get_sg_table = etnaviv_gem_prime_get_sg_table,
 	.vmap = etnaviv_gem_prime_vmap,
+	.mmap = etnaviv_gem_mmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index b390dd4d60b7..4d9e8e9b6191 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -34,19 +34,6 @@ int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 	return 0;
 }
 
-int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
-{
-	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	return etnaviv_obj->ops->mmap(etnaviv_obj, vma);
-}
-
 int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
 {
 	if (!obj->import_attach) {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139209.257504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFy-0004Ai-TD; Wed, 09 Jun 2021 11:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139209.257504; Wed, 09 Jun 2021 11:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwFy-0004Ab-Pl; Wed, 09 Jun 2021 11:20:22 +0000
Received: by outflank-mailman (input) for mailman id 139209;
 Wed, 09 Jun 2021 11:20:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwFy-0003db-87
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:22 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 321ebd2a-197a-418a-9d12-bf01fe858eac;
 Wed, 09 Jun 2021 11:20:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1EEE2219DE;
 Wed,  9 Jun 2021 11:20:18 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 77538118DD;
 Wed,  9 Jun 2021 11:20:17 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 4KVTHPGjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 321ebd2a-197a-418a-9d12-bf01fe858eac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237618; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IRUa/mvyWAtsipKLLVmWT7cpX13C9nTjn7UKOIQtpok=;
	b=Qqr+wxRHDfcailoytXEx2oS2lcd/30JrzOhkpalX061VmVUSrdLwmnhSN0MtbG7SF3Lswx
	sQnQ8cPp3QB4PF/l8crVWeS09RmiLZkfgwuhMYtcPRnrUahhNxSrscRyW90luzSwv3G0CF
	7A0GUIi66rqUX0OkNYcpsZIV/8EwyQU=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237618;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IRUa/mvyWAtsipKLLVmWT7cpX13C9nTjn7UKOIQtpok=;
	b=EYZQ1fZ4GiUePdUKfwvu3c7uANqrcaYHVRwXY+xLr2rs0MZBPp9J/4r/U+gB27YIyBmPIF
	iyt7y2F2VM4VApDw==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 6/9] drm/vgem: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:09 +0200
Message-Id: <20210609112012.10019-7-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the DRM helpers to be used for the various mmap callbacks.

The respective vgem functions are removed. The file_operations
structure vgem_driver_fops is now created with the helper macro
DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/vgem/vgem_drv.c | 46 ++++-----------------------------
 1 file changed, 5 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index bf38a7e319d1..df634aa52638 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -239,32 +239,7 @@ static struct drm_ioctl_desc vgem_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
 };
 
-static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	unsigned long flags = vma->vm_flags;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	/* Keep the WC mmaping set by drm_gem_mmap() but our pages
-	 * are ordinary and not special.
-	 */
-	vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
-	return 0;
-}
-
-static const struct file_operations vgem_driver_fops = {
-	.owner		= THIS_MODULE,
-	.open		= drm_open,
-	.mmap		= vgem_mmap,
-	.poll		= drm_poll,
-	.read		= drm_read,
-	.unlocked_ioctl = drm_ioctl,
-	.compat_ioctl	= drm_compat_ioctl,
-	.release	= drm_release,
-};
+DEFINE_DRM_GEM_FOPS(vgem_driver_fops);
 
 static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
 {
@@ -387,24 +362,12 @@ static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *ma
 	vgem_unpin_pages(bo);
 }
 
-static int vgem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
+static int vgem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 {
-	int ret;
-
-	if (obj->size < vma->vm_end - vma->vm_start)
-		return -EINVAL;
-
-	if (!obj->filp)
-		return -ENODEV;
-
-	ret = call_mmap(obj->filp, vma);
-	if (ret)
-		return ret;
-
 	vma_set_file(vma, obj->filp);
 	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	return 0;
 }
@@ -416,6 +379,7 @@ static const struct drm_gem_object_funcs vgem_gem_object_funcs = {
 	.get_sg_table = vgem_prime_get_sg_table,
 	.vmap = vgem_prime_vmap,
 	.vunmap = vgem_prime_vunmap,
+	.mmap = vgem_prime_mmap,
 	.vm_ops = &vgem_gem_vm_ops,
 };
 
@@ -433,7 +397,7 @@ static const struct drm_driver vgem_driver = {
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import = vgem_prime_import,
 	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
-	.gem_prime_mmap = vgem_prime_mmap,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 
 	.name	= DRIVER_NAME,
 	.desc	= DRIVER_DESC,
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139211.257528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG4-0004oz-IW; Wed, 09 Jun 2021 11:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139211.257528; Wed, 09 Jun 2021 11:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG4-0004oq-De; Wed, 09 Jun 2021 11:20:28 +0000
Received: by outflank-mailman (input) for mailman id 139211;
 Wed, 09 Jun 2021 11:20:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwG3-0003db-8C
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:27 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bc5c831-5a9f-4b05-a29b-4acf09e4a797;
 Wed, 09 Jun 2021 11:20:19 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C9F0C219E7;
 Wed,  9 Jun 2021 11:20:18 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 249C311A98;
 Wed,  9 Jun 2021 11:20:18 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id gK7nB/KjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bc5c831-5a9f-4b05-a29b-4acf09e4a797
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237618; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0LgmkNBFCrk1hW0/5PXKqc/A6qw4hyWmJvURgp3U/IY=;
	b=feETavSAwPIN91p0cp256C9fiEsKaNYIKuVNT50Mf3so2lwsaBjQikkEdEpjs8+dLr+wQZ
	aRXgCTaShXVXVpbrYad0kOe3F7/pN97IxqZCBf5JanIEjq1G4uCGdjvA1wfggeqAJJdyA7
	cuUfQ5zlL3AEG3KxZiJ6na/5xlBS7gc=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237618;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0LgmkNBFCrk1hW0/5PXKqc/A6qw4hyWmJvURgp3U/IY=;
	b=FiHMe32Ni2REvI8BzxgNMv2YcNCb3Od9taV4cLY/CDSaTllreQL1Zf7x3eX+qMtrvXbqLK
	svw5Gcb+w5vufCDQ==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 7/9] drm/xen: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:10 +0200
Message-Id: <20210609112012.10019-8-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the DRM helpers to be used for the various mmap callbacks.

The respective xen functions are removed. The file_operations
structure xen_drm_dev_fops is now created with the helper macro
DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/xen/xen_drm_front.c     |  16 +---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 108 +++++++++---------------
 drivers/gpu/drm/xen/xen_drm_front_gem.h |   7 --
 3 files changed, 44 insertions(+), 87 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 9f14d99c763c..434064c820e8 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -469,19 +469,7 @@ static void xen_drm_drv_release(struct drm_device *dev)
 	kfree(drm_info);
 }
 
-static const struct file_operations xen_drm_dev_fops = {
-	.owner          = THIS_MODULE,
-	.open           = drm_open,
-	.release        = drm_release,
-	.unlocked_ioctl = drm_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl   = drm_compat_ioctl,
-#endif
-	.poll           = drm_poll,
-	.read           = drm_read,
-	.llseek         = no_llseek,
-	.mmap           = xen_drm_front_gem_mmap,
-};
+DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops);
 
 static const struct drm_driver xen_drm_driver = {
 	.driver_features           = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
@@ -489,7 +477,7 @@ static const struct drm_driver xen_drm_driver = {
 	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
-	.gem_prime_mmap            = xen_drm_front_gem_prime_mmap,
+	.gem_prime_mmap            = drm_gem_prime_mmap,
 	.dumb_create               = xen_drm_drv_dumb_create,
 	.fops                      = &xen_drm_dev_fops,
 	.name                      = "xendrm-du",
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index b293c67230ef..dd358ba2bf8e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -57,6 +57,47 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
 	xen_obj->pages = NULL;
 }
 
+static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
+					 struct vm_area_struct *vma)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	int ret;
+
+	vma->vm_ops = gem_obj->funcs->vm_ops;
+
+	/*
+	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+	 * the whole buffer.
+	 */
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_pgoff = 0;
+
+	/*
+	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
+	 * all memory which is shared with other entities in the system
+	 * (including the hypervisor and other guests) must reside in memory
+	 * which is mapped as Normal Inner Write-Back Outer Write-Back
+	 * Inner-Shareable.
+	 */
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+	/*
+	 * The vm_operations_struct.fault handler would be called on CPU
+	 * access to the VMA. GPUs don't trigger faults, as the CPU doesn't
+	 * touch the memory. Insert all pages now, so CPU and GPU are happy.
+	 *
+	 * FIXME: as all pages are inserted now, no .fault handler will
+	 * ever be called, so don't provide one.
+	 */
+	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
+	if (ret < 0)
+		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
+
+	return ret;
+}
+
 static const struct vm_operations_struct xen_drm_drv_vm_ops = {
 	.open           = drm_gem_vm_open,
 	.close          = drm_gem_vm_close,
@@ -67,6 +108,7 @@ static const struct drm_gem_object_funcs xen_drm_front_gem_object_funcs = {
 	.get_sg_table = xen_drm_front_gem_get_sg_table,
 	.vmap = xen_drm_front_gem_prime_vmap,
 	.vunmap = xen_drm_front_gem_prime_vunmap,
+	.mmap = xen_drm_front_gem_object_mmap,
 	.vm_ops = &xen_drm_drv_vm_ops,
 };
 
@@ -238,58 +280,6 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 	return &xen_obj->base;
 }
 
-static int gem_mmap_obj(struct xen_gem_object *xen_obj,
-			struct vm_area_struct *vma)
-{
-	int ret;
-
-	/*
-	 * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-	 * the whole buffer.
-	 */
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
-	vma->vm_pgoff = 0;
-	/*
-	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
-	 * all memory which is shared with other entities in the system
-	 * (including the hypervisor and other guests) must reside in memory
-	 * which is mapped as Normal Inner Write-Back Outer Write-Back
-	 * Inner-Shareable.
-	 */
-	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-
-	/*
-	 * vm_operations_struct.fault handler will be called if CPU access
-	 * to VM is here. For GPUs this isn't the case, because CPU
-	 * doesn't touch the memory. Insert pages now, so both CPU and GPU are
-	 * happy.
-	 * FIXME: as we insert all the pages now then no .fault handler must
-	 * be called, so don't provide one
-	 */
-	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
-	if (ret < 0)
-		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
-
-	return ret;
-}
-
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	struct drm_gem_object *gem_obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret < 0)
-		return ret;
-
-	gem_obj = vma->vm_private_data;
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
@@ -313,17 +303,3 @@ void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 {
 	vunmap(map->vaddr);
 }
-
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	int ret;
-
-	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a4e67d0a149c..eaea470f7001 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -15,9 +15,7 @@ struct dma_buf_attachment;
 struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
-struct file;
 struct sg_table;
-struct vm_area_struct;
 
 struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
 						size_t size);
@@ -33,15 +31,10 @@ struct page **xen_drm_front_gem_get_pages(struct drm_gem_object *obj);
 
 void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
 				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    struct dma_buf_map *map);
 
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma);
-
 #endif /* __XEN_DRM_FRONT_GEM_H */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139212.257534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG5-0004wV-5y; Wed, 09 Jun 2021 11:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139212.257534; Wed, 09 Jun 2021 11:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG4-0004uj-Ui; Wed, 09 Jun 2021 11:20:28 +0000
Received: by outflank-mailman (input) for mailman id 139212;
 Wed, 09 Jun 2021 11:20:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwG4-0003dc-50
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:28 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id babb8331-492f-4b08-a69e-ab3f952be26d;
 Wed, 09 Jun 2021 11:20:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C5D15219E1;
 Wed,  9 Jun 2021 11:20:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 24EAC118DD;
 Wed,  9 Jun 2021 11:20:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id +B86CPCjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: babb8331-492f-4b08-a69e-ab3f952be26d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237616; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kIw8orWoizr06c23XZxXH8tu8fam0HK3xTsglG8tnXo=;
	b=LG9LpFkFf2FeAPIZvQHSlZxvFOFiAr8/Xb9VRpnPbxhJDFNuCZPrrFFO8CQ0151y3QZCPG
	EZnHbcNSNKBO6Ps/TnPZ4jkzzMdr41xtZH32l5UkrK3yumwOVhU1I1EBdMg7nYTvO2eITu
	zF3UdhO7XDUub97PxRJrOIW4LCyNgwE=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237616;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kIw8orWoizr06c23XZxXH8tu8fam0HK3xTsglG8tnXo=;
	b=jhR1iekYasYXTbZt3T5WSNE/w+9z69slwIJW4ZhnnMWXyMiVOSnQRzFw7pSNWjmKiIGfdf
	OettCmFXDjb/tPCw==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 4/9] drm/msm: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:07 +0200
Message-Id: <20210609112012.10019-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the generic DRM helpers to serve the various mmap callbacks.

The respective msm functions are removed. The file_operations
instance fops is now generated by the helper macro
DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/msm/msm_drv.c       | 14 +-----
 drivers/gpu/drm/msm/msm_drv.h       |  1 -
 drivers/gpu/drm/msm/msm_fbdev.c     | 10 +----
 drivers/gpu/drm/msm/msm_gem.c       | 67 ++++++++++++-----------------
 drivers/gpu/drm/msm/msm_gem.h       |  3 --
 drivers/gpu/drm/msm/msm_gem_prime.c | 11 -----
 6 files changed, 31 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index fe7d17cd35ec..f62eaedfc0d7 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -985,17 +985,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW),
 };
 
-static const struct file_operations fops = {
-	.owner              = THIS_MODULE,
-	.open               = drm_open,
-	.release            = drm_release,
-	.unlocked_ioctl     = drm_ioctl,
-	.compat_ioctl       = drm_compat_ioctl,
-	.poll               = drm_poll,
-	.read               = drm_read,
-	.llseek             = no_llseek,
-	.mmap               = msm_gem_mmap,
-};
+DEFINE_DRM_GEM_FOPS(fops);
 
 static const struct drm_driver msm_driver = {
 	.driver_features    = DRIVER_GEM |
@@ -1015,7 +1005,7 @@ static const struct drm_driver msm_driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
-	.gem_prime_mmap     = msm_gem_prime_mmap,
+	.gem_prime_mmap     = drm_gem_prime_mmap,
 #ifdef CONFIG_DEBUG_FS
 	.debugfs_init       = msm_debugfs_init,
 #endif
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 2668941df529..8f1e0d7c8bbb 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -300,7 +300,6 @@ void msm_gem_shrinker_cleanup(struct drm_device *dev);
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
-int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
 int msm_gem_prime_pin(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index 227404077e39..07225907fd2d 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -8,6 +8,7 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_prime.h>
 
 #include "msm_drv.h"
 #include "msm_gem.h"
@@ -48,15 +49,8 @@ static int msm_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
 	struct drm_fb_helper *helper = (struct drm_fb_helper *)info->par;
 	struct msm_fbdev *fbdev = to_msm_fbdev(helper);
 	struct drm_gem_object *bo = msm_framebuffer_bo(fbdev->fb, 0);
-	int ret = 0;
 
-	ret = drm_gem_mmap_obj(bo, bo->size, vma);
-	if (ret) {
-		pr_err("%s:drm_gem_mmap_obj fail\n", __func__);
-		return ret;
-	}
-
-	return msm_gem_mmap_obj(bo, vma);
+	return drm_gem_prime_mmap(bo, vma);
 }
 
 static int msm_fbdev_create(struct drm_fb_helper *helper,
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a94a43de95ef..09fd1a990b3c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -211,46 +211,6 @@ void msm_gem_put_pages(struct drm_gem_object *obj)
 	msm_gem_unlock(obj);
 }
 
-int msm_gem_mmap_obj(struct drm_gem_object *obj,
-		struct vm_area_struct *vma)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
-
-	if (msm_obj->flags & MSM_BO_WC) {
-		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
-	} else if (msm_obj->flags & MSM_BO_UNCACHED) {
-		vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags));
-	} else {
-		/*
-		 * Shunt off cached objs to shmem file so they have their own
-		 * address_space (so unmap_mapping_range does what we want,
-		 * in particular in the case of mmap'd dmabufs)
-		 */
-		vma->vm_pgoff = 0;
-		vma_set_file(vma, obj->filp);
-
-		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-	}
-
-	return 0;
-}
-
-int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret) {
-		DBG("mmap failed: %d", ret);
-		return ret;
-	}
-
-	return msm_gem_mmap_obj(vma->vm_private_data, vma);
-}
-
 static vm_fault_t msm_gem_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -1119,6 +1079,32 @@ void msm_gem_free_object(struct drm_gem_object *obj)
 	kfree(msm_obj);
 }
 
+static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_MIXEDMAP;
+
+	if (msm_obj->flags & MSM_BO_WC) {
+		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	} else if (msm_obj->flags & MSM_BO_UNCACHED) {
+		vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags));
+	} else {
+		/*
+		 * Shunt off cached objs to shmem file so they have their own
+		 * address_space (so unmap_mapping_range does what we want,
+		 * in particular in the case of mmap'd dmabufs)
+		 */
+		vma->vm_pgoff = 0;
+		vma_set_file(vma, obj->filp);
+
+		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+	}
+
+	return 0;
+}
+
 /* convenience method to construct a GEM buffer object, and userspace handle */
 int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 		uint32_t size, uint32_t flags, uint32_t *handle,
@@ -1156,6 +1142,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
 	.get_sg_table = msm_gem_prime_get_sg_table,
 	.vmap = msm_gem_prime_vmap,
 	.vunmap = msm_gem_prime_vunmap,
+	.mmap = msm_gem_object_mmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 03e2cc2a2ce1..8508163088a9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -112,9 +112,6 @@ struct msm_gem_object {
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
-int msm_gem_mmap_obj(struct drm_gem_object *obj,
-			struct vm_area_struct *vma);
-int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_get_iova(struct drm_gem_object *obj,
 		struct msm_gem_address_space *aspace, uint64_t *iova);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 9880348a4dc7..fc94e061d6a7 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -39,17 +39,6 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 	msm_gem_put_vaddr(obj);
 }
 
-int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	return msm_gem_mmap_obj(vma->vm_private_data, vma);
-}
-
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg)
 {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139213.257552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG9-0005Z7-Gs; Wed, 09 Jun 2021 11:20:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139213.257552; Wed, 09 Jun 2021 11:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwG9-0005Yo-AT; Wed, 09 Jun 2021 11:20:33 +0000
Received: by outflank-mailman (input) for mailman id 139213;
 Wed, 09 Jun 2021 11:20:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwG8-0003db-8X
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:32 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 804ee5db-524f-4584-91f0-d1a1884814cb;
 Wed, 09 Jun 2021 11:20:20 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 19DE7219E6;
 Wed,  9 Jun 2021 11:20:20 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 74EFE11A98;
 Wed,  9 Jun 2021 11:20:19 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id SIS9G/OjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 804ee5db-524f-4584-91f0-d1a1884814cb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237620; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Hk665B6tEfCs3+xiggyrmaaBVgfZTuKVzd+3pkK7Ryg=;
	b=0riftsnENuEtlKaO45VfMLILTW7CVsIiXdx49o3MHlH7Q+EKEz+QeJkuwWMGzAEhU6UHHH
	3ts3WOmOU4iussIVGweeahprcgL7L9c+tNdjKY5Dk6n3NaNwC28S1xW4RjYy+UmH7VLOjH
	k2GIcXr/JV10H9i/R8LvGdlonJJ8RMk=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237620;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Hk665B6tEfCs3+xiggyrmaaBVgfZTuKVzd+3pkK7Ryg=;
	b=Wcvg3bkzQ+lyHvS1B+AJdv/5DCMi0f2Em4b89mhyz0mHV8DFPdA7XrbwOT44PzMYlcG3sD
	mYqnz4K5nYFXyzDg==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 9/9] drm: Update documentation and TODO of gem_prime_mmap hook
Date: Wed,  9 Jun 2021 13:20:12 +0200
Message-Id: <20210609112012.10019-10-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The hook gem_prime_mmap in struct drm_driver is deprecated. Document
the new requirements.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 Documentation/gpu/todo.rst | 11 -----------
 include/drm/drm_drv.h      | 11 +++++++----
 2 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 12e61869939e..50ad731d579b 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -268,17 +268,6 @@ Contact: Daniel Vetter
 
 Level: Intermediate
 
-Clean up mmap forwarding
-------------------------
-
-A lot of drivers forward gem mmap calls to dma-buf mmap for imported buffers.
-And also a lot of them forward dma-buf mmap to the gem mmap implementations.
-There's drm_gem_prime_mmap() for this now, but still needs to be rolled out.
-
-Contact: Daniel Vetter
-
-Level: Intermediate
-
 Generic fbdev defio support
 ---------------------------
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index b439ae1921b8..40d93a52cf7a 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -385,11 +385,14 @@ struct drm_driver {
 	 * mmap hook for GEM drivers, used to implement dma-buf mmap in the
 	 * PRIME helpers.
 	 *
-	 * FIXME: There's way too much duplication going on here, and also moved
-	 * to &drm_gem_object_funcs.
+	 * This hook only exists for historical reasons. Drivers must use
+	 * drm_gem_prime_mmap() to implement it.
+	 *
+	 * FIXME: Convert all drivers to implement mmap in struct
+	 * &drm_gem_object_funcs and inline drm_gem_prime_mmap() into
+	 * its callers. This hook should be removed afterwards.
 	 */
-	int (*gem_prime_mmap)(struct drm_gem_object *obj,
-				struct vm_area_struct *vma);
+	int (*gem_prime_mmap)(struct drm_gem_object *obj, struct vm_area_struct *vma);
 
 	/**
 	 * @dumb_create:
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139214.257563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGA-0005t2-Vq; Wed, 09 Jun 2021 11:20:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139214.257563; Wed, 09 Jun 2021 11:20:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGA-0005sR-Od; Wed, 09 Jun 2021 11:20:34 +0000
Received: by outflank-mailman (input) for mailman id 139214;
 Wed, 09 Jun 2021 11:20:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwG9-0003dc-5G
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:33 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3cc1e3e-43d2-4141-af65-1314677e90fb;
 Wed, 09 Jun 2021 11:20:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 72A051FD5E;
 Wed,  9 Jun 2021 11:20:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id C7C0311A98;
 Wed,  9 Jun 2021 11:20:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id UPnRL/CjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3cc1e3e-43d2-4141-af65-1314677e90fb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237617; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DKNzmpUMnKGetW9gWSfXN0fWNO4Ill+y5d1BrTMC3Bc=;
	b=Rq55TcomPGhMXwpeFyESUm3GDvlg7OmaZX/qRaxtJN732EyZeijE7CTgQQ81HFu+nOAnWo
	Z+WU7zDsm0G3ahHPlWoKBpb5pqwby6nzNTPZ1PS16fXw3BOYMdf/6A8Nsq4lS+x0+pYTRj
	GUC+kQK64wYiUp8guoJ8SnsSvgqFYrw=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237617;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DKNzmpUMnKGetW9gWSfXN0fWNO4Ill+y5d1BrTMC3Bc=;
	b=UohKZc2F4Ut9lh/4Tyt67jSW7I9hMjqHQGWVvsSXRh0Vw7JlrrpjYo8yCJSaYDQ6uGjzTJ
	H4WzmfaRgb3bONBg==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 5/9] drm/qxl: Remove empty qxl_gem_prime_mmap()
Date: Wed,  9 Jun 2021 13:20:08 +0200
Message-Id: <20210609112012.10019-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function qxl_gem_prime_mmap() does nothing but return an error. The
two callers of the gem_prime_mmap callback, drm_fbdev_fb_mmap() and
drm_gem_dmabuf_mmap(), both already handle a NULL callback by returning
an error code. So clear gem_prime_mmap in qxl and remove
qxl_gem_prime_mmap().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/qxl/qxl_drv.c   | 1 -
 drivers/gpu/drm/qxl/qxl_drv.h   | 2 --
 drivers/gpu/drm/qxl/qxl_prime.c | 6 ------
 3 files changed, 9 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index 854e6c5a563f..b3d75ea7e6b3 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -281,7 +281,6 @@ static struct drm_driver qxl_driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = qxl_gem_prime_import_sg_table,
-	.gem_prime_mmap = qxl_gem_prime_mmap,
 	.fops = &qxl_fops,
 	.ioctls = qxl_ioctls,
 	.irq_handler = qxl_irq_handler,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index dd6abee55f56..f95885a8bd2b 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -434,8 +434,6 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
 			  struct dma_buf_map *map);
-int qxl_gem_prime_mmap(struct drm_gem_object *obj,
-				struct vm_area_struct *vma);
 
 /* qxl_irq.c */
 int qxl_irq_init(struct qxl_device *qdev);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 0628d1cc91fe..4a10cb0a413b 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -73,9 +73,3 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
 
 	qxl_bo_vunmap(bo);
 }
-
-int qxl_gem_prime_mmap(struct drm_gem_object *obj,
-		       struct vm_area_struct *area)
-{
-	return -ENOSYS;
-}
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139219.257576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGF-0006Wy-EK; Wed, 09 Jun 2021 11:20:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139219.257576; Wed, 09 Jun 2021 11:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGF-0006Wl-9I; Wed, 09 Jun 2021 11:20:39 +0000
Received: by outflank-mailman (input) for mailman id 139219;
 Wed, 09 Jun 2021 11:20:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwGE-0003dc-5D
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:38 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db1aa905-b871-42b8-b230-9d14348c093a;
 Wed, 09 Jun 2021 11:20:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1FBE71FD58;
 Wed,  9 Jun 2021 11:20:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 7A51C11A98;
 Wed,  9 Jun 2021 11:20:15 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id SMj/HO+jwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db1aa905-b871-42b8-b230-9d14348c093a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237616; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nnv/82/uEe+mIz6yhpeOdfG//15FkUncLx/LmWsKeN8=;
	b=FuIyGzxcA7GgNKkvTlwpnn6kjwOF34sI2vGp7Cshyopr3F8vwD4i52PwKGUHIYsWqS5U13
	D7xTpv6S1PAsit/lhLrB9haGxyyzGExwfBReiFLAzgaBGpncqwmbRtwoIJ/4Zjtm0lTzJ7
	d4W9FXLh0tYIObTE6Cpc0Aj7HBwIrDs=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237616;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nnv/82/uEe+mIz6yhpeOdfG//15FkUncLx/LmWsKeN8=;
	b=8PrudITCijMRShhn2OTYjJVPnd+uzcFDtfGBeTDtOp2He5PDfG4ssHt0tTauxHtTDvehbv
	ACaqSeiE6s48xMCQ==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 3/9] drm/mediatek: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:06 +0200
Message-Id: <20210609112012.10019-4-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the driver to use generic DRM helpers for its various mmap callbacks.

The respective mediatek wrapper functions are removed. The
file_operations instance mtk_drm_fops is now created by the helper
macro DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/mediatek/mtk_drm_drv.c | 13 ++------
 drivers/gpu/drm/mediatek/mtk_drm_gem.c | 44 +++++++-------------------
 drivers/gpu/drm/mediatek/mtk_drm_gem.h |  3 --
 3 files changed, 14 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index b46bdb8985da..bbfefb29c211 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -300,16 +300,7 @@ static void mtk_drm_kms_deinit(struct drm_device *drm)
 	component_unbind_all(drm->dev, drm);
 }
 
-static const struct file_operations mtk_drm_fops = {
-	.owner = THIS_MODULE,
-	.open = drm_open,
-	.release = drm_release,
-	.unlocked_ioctl = drm_ioctl,
-	.mmap = mtk_drm_gem_mmap,
-	.poll = drm_poll,
-	.read = drm_read,
-	.compat_ioctl = drm_compat_ioctl,
-};
+DEFINE_DRM_GEM_FOPS(mtk_drm_fops);
 
 /*
  * We need to override this because the device used to import the memory is
@@ -332,7 +323,7 @@ static const struct drm_driver mtk_drm_driver = {
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import = mtk_drm_gem_prime_import,
 	.gem_prime_import_sg_table = mtk_gem_prime_import_sg_table,
-	.gem_prime_mmap = mtk_drm_gem_mmap_buf,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 	.fops = &mtk_drm_fops,
 
 	.name = DRIVER_NAME,
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index 280ea0d5e840..d0544962cfc1 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -14,11 +14,14 @@
 #include "mtk_drm_drv.h"
 #include "mtk_drm_gem.h"
 
+static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+
 static const struct drm_gem_object_funcs mtk_drm_gem_object_funcs = {
 	.free = mtk_drm_gem_free_object,
 	.get_sg_table = mtk_gem_prime_get_sg_table,
 	.vmap = mtk_drm_gem_prime_vmap,
 	.vunmap = mtk_drm_gem_prime_vunmap,
+	.mmap = mtk_drm_gem_object_mmap,
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
 
@@ -145,11 +148,19 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
 	struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
 	struct mtk_drm_private *priv = obj->dev->dev_private;
 
+	/*
+	 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
+	 * whole buffer from the start.
+	 */
+	vma->vm_pgoff = 0;
+
 	/*
 	 * dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear
 	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
 	 */
-	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
 			     mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
@@ -159,37 +170,6 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
 	return ret;
 }
 
-int mtk_drm_gem_mmap_buf(struct drm_gem_object *obj, struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret)
-		return ret;
-
-	return mtk_drm_gem_object_mmap(obj, vma);
-}
-
-int mtk_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_gem_object *obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	obj = vma->vm_private_data;
-
-	/*
-	 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
-	 * whole buffer from the start.
-	 */
-	vma->vm_pgoff = 0;
-
-	return mtk_drm_gem_object_mmap(obj, vma);
-}
-
 /*
  * Allocate a sg_table for this GEM object.
  * Note: Both the table's contents, and the sg_table itself must be freed by
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
index 6da5ccb4b933..9a359a06cb73 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
@@ -39,9 +39,6 @@ struct mtk_drm_gem_obj *mtk_drm_gem_create(struct drm_device *dev, size_t size,
 					   bool alloc_kmap);
 int mtk_drm_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
 			    struct drm_mode_create_dumb *args);
-int mtk_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-int mtk_drm_gem_mmap_buf(struct drm_gem_object *obj,
-			 struct vm_area_struct *vma);
 struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 			struct dma_buf_attachment *attach, struct sg_table *sg);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 11:20:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 11:20:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139222.257588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGK-0007RT-Fm; Wed, 09 Jun 2021 11:20:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139222.257588; Wed, 09 Jun 2021 11:20:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqwGK-0007R4-AM; Wed, 09 Jun 2021 11:20:44 +0000
Received: by outflank-mailman (input) for mailman id 139222;
 Wed, 09 Jun 2021 11:20:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYhx=LD=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lqwGJ-0003dc-5P
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 11:20:43 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac8186da-ad10-4a01-b2de-9a5e38588b6d;
 Wed, 09 Jun 2021 11:20:20 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 70A741FD4E;
 Wed,  9 Jun 2021 11:20:19 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id CDACA118DD;
 Wed,  9 Jun 2021 11:20:18 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 0G1AMfKjwGBTUgAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Wed, 09 Jun 2021 11:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac8186da-ad10-4a01-b2de-9a5e38588b6d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1623237619; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uHgjG881Tzc6TF1wm3lJNykWl3nzYgQpZTifsGlExSU=;
	b=hgqBLcoJY2hRbV6/3Hrf2WYuynMUd9lzWYUR+DdXlNfTZli8hzGqhtU3QmtTRRAqV6fS8M
	wDJdF1FkdaZY1ArXcHL5VCIPMV0PRYELkHJOZDDwCl2CsYPEyMKRwPJ3ZZ5Ic5fk30uGGW
	rCSXuXdk8BiSRqjnpkEqOsqJ8E3bvFY=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1623237619;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uHgjG881Tzc6TF1wm3lJNykWl3nzYgQpZTifsGlExSU=;
	b=eGgR9KjAmEUGAXpoJk3a/G6GTIkV9mSVYZLHZAU/NHGNMndowHzlD+P9aLIVMDFR5MbUPm
	KIutgtmlNht3fHCg==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	mripard@kernel.org,
	maarten.lankhorst@linux.intel.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	matthias.bgg@gmail.com,
	robdclark@gmail.com,
	sean@poorly.run,
	airlied@redhat.com,
	kraxel@redhat.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	oleksandr_andrushchenko@epam.com,
	sumit.semwal@linaro.org,
	christian.koenig@amd.com
Cc: dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 8/9] drm/rockchip: Implement mmap as GEM object function
Date: Wed,  9 Jun 2021 13:20:11 +0200
Message-Id: <20210609112012.10019-9-tzimmermann@suse.de>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609112012.10019-1-tzimmermann@suse.de>
References: <20210609112012.10019-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
the driver to use generic DRM helpers for its various mmap callbacks.

The respective rockchip wrapper functions are removed. The
file_operations instance rockchip_drm_driver_fops is now created by the
helper macro DEFINE_DRM_GEM_FOPS().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/rockchip/rockchip_drm_drv.c   | 13 +-----
 drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c |  3 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c   | 44 +++++--------------
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h   |  7 ---
 4 files changed, 15 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
index b730b8d5d949..2e3ab573a817 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
@@ -208,16 +208,7 @@ static void rockchip_drm_unbind(struct device *dev)
 	drm_dev_put(drm_dev);
 }
 
-static const struct file_operations rockchip_drm_driver_fops = {
-	.owner = THIS_MODULE,
-	.open = drm_open,
-	.mmap = rockchip_gem_mmap,
-	.poll = drm_poll,
-	.read = drm_read,
-	.unlocked_ioctl = drm_ioctl,
-	.compat_ioctl = drm_compat_ioctl,
-	.release = drm_release,
-};
+DEFINE_DRM_GEM_FOPS(rockchip_drm_driver_fops);
 
 static const struct drm_driver rockchip_drm_driver = {
 	.driver_features	= DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
@@ -226,7 +217,7 @@ static const struct drm_driver rockchip_drm_driver = {
 	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table	= rockchip_gem_prime_import_sg_table,
-	.gem_prime_mmap		= rockchip_gem_mmap_buf,
+	.gem_prime_mmap		= drm_gem_prime_mmap,
 	.fops			= &rockchip_drm_driver_fops,
 	.name	= DRIVER_NAME,
 	.desc	= DRIVER_DESC,
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c b/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
index 2fdc455c4ad7..d8418dd39d0e 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
@@ -7,6 +7,7 @@
 #include <drm/drm.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_prime.h>
 #include <drm/drm_probe_helper.h>
 
 #include "rockchip_drm_drv.h"
@@ -24,7 +25,7 @@ static int rockchip_fbdev_mmap(struct fb_info *info,
 	struct drm_fb_helper *helper = info->par;
 	struct rockchip_drm_private *private = to_drm_private(helper);
 
-	return rockchip_gem_mmap_buf(private->fbdev_bo, vma);
+	return drm_gem_prime_mmap(private->fbdev_bo, vma);
 }
 
 static const struct fb_ops rockchip_drm_fbdev_ops = {
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7971f57436dd..63eb73b624aa 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -240,12 +240,22 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
 	int ret;
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
+	/*
+	 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
+	 * whole buffer from the start.
+	 */
+	vma->vm_pgoff = 0;
+
 	/*
 	 * We allocated a struct page table for rk_obj, so clear
 	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
 	 */
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
 	vma->vm_flags &= ~VM_PFNMAP;
 
+	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+
 	if (rk_obj->pages)
 		ret = rockchip_drm_gem_object_mmap_iommu(obj, vma);
 	else
@@ -257,39 +267,6 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
 	return ret;
 }
 
-int rockchip_gem_mmap_buf(struct drm_gem_object *obj,
-			  struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret)
-		return ret;
-
-	return rockchip_drm_gem_object_mmap(obj, vma);
-}
-
-/* drm driver mmap file operations */
-int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_gem_object *obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	/*
-	 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
-	 * whole buffer from the start.
-	 */
-	vma->vm_pgoff = 0;
-
-	obj = vma->vm_private_data;
-
-	return rockchip_drm_gem_object_mmap(obj, vma);
-}
-
 static void rockchip_gem_release_object(struct rockchip_gem_object *rk_obj)
 {
 	drm_gem_object_release(&rk_obj->base);
@@ -301,6 +278,7 @@ static const struct drm_gem_object_funcs rockchip_gem_object_funcs = {
 	.get_sg_table = rockchip_gem_prime_get_sg_table,
 	.vmap = rockchip_gem_prime_vmap,
 	.vunmap	= rockchip_gem_prime_vunmap,
+	.mmap = rockchip_drm_gem_object_mmap,
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 5a70a56cd406..47c1861eece0 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -34,13 +34,6 @@ rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
-/* drm driver mmap file operations */
-int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
-/* mmap a gem object to userspace. */
-int rockchip_gem_mmap_buf(struct drm_gem_object *obj,
-			  struct vm_area_struct *vma);
-
 struct rockchip_gem_object *
 	rockchip_gem_create_object(struct drm_device *drm, unsigned int size,
 				   bool alloc_kmap);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:09:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:09:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139287.257600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqx0y-0005f2-8j; Wed, 09 Jun 2021 12:08:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139287.257600; Wed, 09 Jun 2021 12:08:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqx0y-0005ev-4p; Wed, 09 Jun 2021 12:08:56 +0000
Received: by outflank-mailman (input) for mailman id 139287;
 Wed, 09 Jun 2021 12:08:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqx0w-0005ep-JM
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:08:54 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 761ad142-23e2-42e6-a145-6983d95b3aa7;
 Wed, 09 Jun 2021 12:08:53 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2171.outbound.protection.outlook.com [104.47.17.171])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-29-fOMUf2AMMhGzyFXQkGEGIw-1; Wed, 09 Jun 2021 14:08:51 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5167.eurprd04.prod.outlook.com (2603:10a6:803:5b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Wed, 9 Jun
 2021 12:08:49 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 12:08:49 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM3PR07CA0075.eurprd07.prod.outlook.com (2603:10a6:207:4::33) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.12 via Frontend Transport; Wed, 9 Jun 2021 12:08:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
 <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e3283632-7621-69b8-5051-ec528c6ad8fc@suse.com>
Date: Wed, 9 Jun 2021 14:08:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>

On 09.06.2021 12:36, Andrew Cooper wrote:
> On 09/06/2021 10:26, Jan Beulich wrote:
>> @@ -49,28 +52,31 @@ static void send_iommu_command(struct am
>>  static void flush_command_buffer(struct amd_iommu *iommu,
>>                                   unsigned int timeout_base)
>>  {
>> +    static DEFINE_PER_CPU(uint64_t, poll_slot);
>> +    uint64_t *this_poll_slot = &this_cpu(poll_slot);
>> +    paddr_t addr = virt_to_maddr(this_poll_slot);
>>      uint32_t cmd[4];
>>      s_time_t start, timeout;
>>      static unsigned int __read_mostly threshold = 1;
>>
>> -    /* RW1C 'ComWaitInt' in status register */
>> -    writel(IOMMU_STATUS_COMP_WAIT_INT,
>> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>> -
>> -    /* send an empty COMPLETION_WAIT command to flush command buffer */
>> -    cmd[3] = cmd[2] = 0;
>> -    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
>> +    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
>> +
>> +    /* send a COMPLETION_WAIT command to flush command buffer */
>> +    cmd[0] = addr;
>> +    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
>> +                         IOMMU_COMP_WAIT_S_FLAG_MASK,
>> +                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);
>
> set_field_in_reg_u32() is a disaster of a function - both in terms of
> semantics, and code gen - and needs to be purged from the code.

Long ago I had an item on my todo list to get this cleaned up, but it
never made it high enough up the list, so I dropped it at some point,
in the hope that we'd manage to get this sorted while re-writing code
step by step.

> It is a shame we don't have a real struct for objects in the command
> buffer, but in lieu of that, this is just
>
>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;
>
> which is the direction that previous cleanup has gone.

I don't think I can spot a single instance of such. Some work was
done to introduce (mainly bitfield) structs, but this surely goes
too far for the change at hand. I can spot two instances using
MASK_INSR(), so I can see two consistent ways of doing what you
ask for:

    cmd[0] = addr | MASK_INSR(IOMMU_CONTROL_ENABLED,
                              IOMMU_COMP_WAIT_S_FLAG_MASK);

keeping the name as *_MASK (and I'd be open to replacing
IOMMU_CONTROL_ENABLED with true) or

    cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG;

i.e. dropping _MASK (but requiring adjustments elsewhere in the
code). Please let me know which one you'd prefer.

> There are no current users of IOMMU_COMP_WAIT_S_FLAG_SHIFT, and ...
>
>> +    cmd[1] = addr >> 32;
>> +    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
>>                           IOMMU_CMD_OPCODE_MASK,
>>                           IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
>> -    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
>> -                         IOMMU_COMP_WAIT_I_FLAG_MASK,
>> -                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
>
> ... this drops the final use of IOMMU_COMP_WAIT_I_FLAG_SHIFT, so both
> should be dropped.

Well, I can surely do so, but like this entire request of yours this
feels like scope creep - there was no intention here to do any
unrelated cleanup. And if I remove _S_ and _I_, then surely _F_
wants dropping as well, while IOMMU_COMP_WAIT_ADDR_*_SHIFT have a
use each in iommu_guest.c and hence need to stay for now.

> As for IOMMU_CMD_OPCODE_SHIFT, that can't be dropped yet, but it would
> still be better to use
>
>     cmd[1] = (addr >> 32) | MASK_INSR(IOMMU_CMD_COMPLETION_WAIT,
> IOMMU_CMD_COMPLETION_WAIT);
>
> in the short term.

Can do (using IOMMU_CMD_OPCODE_MASK).

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:19:24 2021
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210608170559.6732-1-andrew.cooper3@citrix.com>
 <72aedd57-9722-2c5b-7365-f46a0e0fe39d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/tsx: Cope with TSX deprecation on SKL/KBL/CFL/WHL
Message-ID: <8085dad5-2957-14a2-c259-87d8bdd388b7@citrix.com>
Date: Wed, 9 Jun 2021 13:19:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <72aedd57-9722-2c5b-7365-f46a0e0fe39d@suse.com>

On 09/06/2021 07:36, Jan Beulich wrote:
> On 08.06.2021 19:05, Andrew Cooper wrote:
>> --- a/xen/arch/x86/tsx.c
>> +++ b/xen/arch/x86/tsx.c
>> @@ -60,6 +60,38 @@ void tsx_init(void)
>>               */
>>
>>              /*
>> +             * Probe for the June 2021 microcode which de-features TSX on
>> +             * client parts.  (Note - this is a subset of parts impacted by
>> +             * the memory ordering errata.)
>> +             *
>> +             * RTM_ALWAYS_ABORT enumerates the new functionality, but is also
>> +             * read as zero if TSX_FORCE_ABORT.ENABLE_RTM has been set before
>> +             * we run.
>> +             *
>> +             * Undo this behaviour in Xen's view of the world.
>> +             */
>> +            bool has_rtm_always_abort = cpu_has_rtm_always_abort;
>> +
>> +            if ( !has_rtm_always_abort )
>> +            {
>> +                uint64_t val;
>> +
>> +                rdmsrl(MSR_TSX_FORCE_ABORT, val);
>> +
>> +                if ( val & TSX_ENABLE_RTM )
>> +                    has_rtm_always_abort = true;
>> +            }
>> +
>> +            /*
>> +             * Always force RTM_ALWAYS_ABORT to be visible, even if it
>> +             * currently is.  If the user explicitly opts to enable TSX, we'll
>> +             * set TSX_FORCE_ABORT.ENABLE_RTM and hide RTM_ALWAYS_ABORT from
>> +             * the general CPUID scan later.
>> +             */
>> +            if ( has_rtm_always_abort )
>> +                setup_force_cpu_cap(X86_FEATURE_RTM_ALWAYS_ABORT);
> I understand the "we'll set" part, but I don't think "we'll hide"
> anything explicitly. Aiui it is ...
>
>> @@ -131,9 +170,36 @@ void tsx_init(void)
>>          /* Check bottom bit only.  Higher bits are various sentinels. */
>>          rtm_disabled = !(opt_tsx & 1);
>>
>> -        lo &= ~TSX_FORCE_ABORT_RTM;
>> -        if ( rtm_disabled )
>> -            lo |= TSX_FORCE_ABORT_RTM;
>> +        lo &= ~(TSX_FORCE_ABORT_RTM | TSX_CPUID_CLEAR | TSX_ENABLE_RTM);
>> +
>> +        if ( cpu_has_rtm_always_abort )
>> +        {
>> +            /*
>> +             * June 2021 microcode, on a client part with TSX de-featured:
>> +             *  - There are no mitigations for the TSX memory ordering errata.
>> +             *  - Performance counter 3 works.  (I.e. it isn't being used by
>> +             *    microcode to work around the memory ordering errata.)
>> +             *  - TSX_FORCE_ABORT.FORCE_ABORT_RTM is fixed read1/write-discard.
>> +             *  - TSX_FORCE_ABORT.TSX_CPUID_CLEAR can be used to hide the
>> +             *    HLE/RTM CPUID bits.
>> +             *  - TSX_FORCE_ABORT.ENABLE_RTM may be used to opt in to
>> +             *    re-enabling RTM, at the user's own risk.
>> +             */
>> +            lo |= rtm_disabled ? TSX_CPUID_CLEAR : TSX_ENABLE_RTM;
> ... the setting of TSX_ENABLE_RTM here which, as a result, causes
> RTM_ALWAYS_ABORT to be clear. If that's correct, perhaps the wording
> in that earlier comment would better be something like "we'll set
> TSX_FORCE_ABORT.ENABLE_RTM and hence cause RTM_ALWAYS_ABORT to be
> hidden from the general CPUID scan later"?

Yes - that is the intended meaning.  I'll adjust.

> If this understanding of mine is correct, then preferably with some
> suitable adjustment to the comment wording
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> Also Intel recommends this for SDVs only - can we consider such a
> setup supported (not to speak of security supported) at all? I guess
> you mean to express this by saying "at their own risk" in the
> cmdline doc? If so, perhaps mentioning this in SUPPORT.md would be
> a good thing nevertheless, notwithstanding the fact that we're not
> really good at expressing there how command line option use affects
> support status.

I think this is too fine grained to be expressed in SUPPORT.md, but
given that there is a very clear specific issue, I wouldn't consider
this an unsupported configuration overall.

I don't expect people to want to use TSX on these CPUs in the first
place (which was a factor in choosing off-by-default), but if they do,
there is just the one specific memory ordering issue involving
reads/writes within a committed transaction.

None of our code uses RTM, so issues in Xen which manifest with tsx=1
won't be related to TSX being enabled.

Obviously, if someone does report an issue, we can ask them to
re-confirm it without tsx=1 just to rule out interactions, but I don't
expect that to be relevant to reported issues.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:22:39 2021
Subject: Re: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
 <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6a9c5a3-a061-3401-671e-48cdb408c694@suse.com>
Date: Wed, 9 Jun 2021 14:22:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0001.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19e::6) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 09.06.2021 12:53, Andrew Cooper wrote:
> On 09/06/2021 10:27, Jan Beulich wrote:
>> It appears unhelpful to me for flush_command_buffer() to block all
>> progress elsewhere for the given IOMMU by holding its lock while
>> waiting for command completion. Unless the lock is already held,
>> acquire it in send_iommu_command(). Release it in all cases in
>> flush_command_buffer(), before actually starting the wait loop.
>>
>> Some of the involved functions did/do get called with the lock already
>> held: For amd_iommu_flush_intremap() we can simply move the locking
>> inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
>> the lock now gets dropped in the course of the function's operation.
>>
>> Where touching function headers anyway, also adjust types used to be
>> better in line with our coding style and, where applicable, the
>> functions' callers.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
>
> Honestly, I'm -2 to this.  It is horrible obfuscation of the logic.
>
> I agree with the premise of not holding the lock when we don't need to,
> but moving the lock/unlocks into different functions makes it impossible
> to follow.  (Also, the static analysers are going to scream at this
> patch, and rightfully so IMO.)

Just to make it explicit - the immediate goal isn't so much to
shrink the locked regions as far as possible, but first of all to
avoid spin-waiting in flush_command_buffer() while holding the lock
(and with IRQs off).

> send_iommu_command() is static, as is flush_command_buffer(), so there
> is no need to split the locking like this AFAICT.
>
> Instead, each amd_iommu_flush_* external accessor knows exactly what it
> is doing, and whether a wait descriptor is wanted.
> flush_command_buffer() wants merging into send_iommu_command() as a
> "bool wait" parameter, at which point the locking and unlocking moves
> entirely into send_iommu_command() with no pointer games.

Then I can only guess you didn't look closely at the pci_amd_iommu.c
part of the change? You may rest assured that I wouldn't have taken
the chosen route if there was a reasonable alternative (within the
current overall code structure). In fact I had tried first with what
you suggest, and had to make it the way it was posted because of the
requirements of these callers.

I'm also pretty certain Paul wouldn't have given his R-b if there
was this simple an alternative.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:33:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139310.257636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxOp-0001hF-05; Wed, 09 Jun 2021 12:33:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139310.257636; Wed, 09 Jun 2021 12:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxOo-0001h8-Rf; Wed, 09 Jun 2021 12:33:34 +0000
Received: by outflank-mailman (input) for mailman id 139310;
 Wed, 09 Jun 2021 12:33:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqxOn-0001h2-Ga
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:33:33 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f8f91dc-d610-4f47-b0ba-b3fdb6659509;
 Wed, 09 Jun 2021 12:33:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f8f91dc-d610-4f47-b0ba-b3fdb6659509
To: Jan Beulich <jbeulich@suse.com>
CC: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
 <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
 <e3283632-7621-69b8-5051-ec528c6ad8fc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
Message-ID: <4950c000-2984-a9be-d164-ecb65edffa2a@citrix.com>
Date: Wed, 9 Jun 2021 13:33:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <e3283632-7621-69b8-5051-ec528c6ad8fc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 09/06/2021 13:08, Jan Beulich wrote:
> On 09.06.2021 12:36, Andrew Cooper wrote:
>> On 09/06/2021 10:26, Jan Beulich wrote:
>>> @@ -49,28 +52,31 @@ static void send_iommu_command(struct am
>>>  static void flush_command_buffer(struct amd_iommu *iommu,
>>>                                   unsigned int timeout_base)
>>>  {
>>> +    static DEFINE_PER_CPU(uint64_t, poll_slot);
>>> +    uint64_t *this_poll_slot = &this_cpu(poll_slot);
>>> +    paddr_t addr = virt_to_maddr(this_poll_slot);
>>>      uint32_t cmd[4];
>>>      s_time_t start, timeout;
>>>      static unsigned int __read_mostly threshold = 1;
>>>
>>> -    /* RW1C 'ComWaitInt' in status register */
>>> -    writel(IOMMU_STATUS_COMP_WAIT_INT,
>>> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>>> -
>>> -    /* send an empty COMPLETION_WAIT command to flush command buffer */
>>> -    cmd[3] = cmd[2] = 0;
>>> -    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
>>> +    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
>>> +
>>> +    /* send a COMPLETION_WAIT command to flush command buffer */
>>> +    cmd[0] = addr;
>>> +    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
>>> +                         IOMMU_COMP_WAIT_S_FLAG_MASK,
>>> +                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);
>> set_field_in_reg_u32() is a disaster of a function - both in terms of
>> semantics, and code gen - and needs to be purged from the code.
> Long ago I had an item on my todo list to get this cleaned up. But
> it never really having made it up high enough, I dropped it at
> some point, in the hope that we'd manage to get this sorted while
> re-writing code step by step.
>
>> It is a shame we don't have a real struct for objects in the command
>> buffer, but in lieu of that, this is just
>>
>>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;
>>
>> which is the direction that previous cleanup has gone.
> I don't think I can spot a single instance of such.

It's actually the other way around, for the emulation logic (which isn't
used in practice).

drivers/passthrough/amd/iommu_guest.c:348
    i = cmd->data[0] & IOMMU_COMP_WAIT_I_FLAG_MASK;

>  Some work was
> done to introduce (mainly bitfield) structs, but this surely goes
> too far for the change at hand. I can spot two instances using
> MASK_INSR(), so I can see two consistent ways of doing what you
> ask for:
>
>     cmd[0] = addr | MASK_INSR(IOMMU_CONTROL_ENABLED,
>                               IOMMU_COMP_WAIT_S_FLAG_MASK);
>
> keeping the name as *_MASK (and I'd be open to replace
> IOMMU_CONTROL_ENABLED by true) or
>
>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG;
>
> i.e. dropping _MASK (but requiring adjustments elsewhere in the
> code). Please let me know which one you'd prefer.

TBH, I'd suggest just using

    cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;

for now.  The constant is correct - it's just the name which is wonky.
This in particular will reduce the code churn for ...

>> There are no current users of IOMMU_COMP_WAIT_S_FLAG_SHIFT, and ...
>>
>>> +    cmd[1] = addr >> 32;
>>> +    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
>>>                           IOMMU_CMD_OPCODE_MASK,
>>>                           IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
>>> -    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
>>> -                         IOMMU_COMP_WAIT_I_FLAG_MASK,
>>> -                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
>> ... this drops the final use of IOMMU_COMP_WAIT_I_FLAG_SHIFT, so both
>> should be dropped.
> Well, I can surely do so, but like this entire request of yours this
> feels like scope creep - there was no intention here to do any
> unrelated cleanup. And if I remove _S_ and _I_, then surely _F_
> wants dropping as well, while IOMMU_COMP_WAIT_ADDR_*_SHIFT have a
> use each in iommu_guest.c and hence need to stay for now.

... this, which I'm perfectly happy leaving to a subsequent change.
(I'll even do it, if you're too busy right now).

What I am mainly concerned with is not using this opportunity to remove
uses of set_field_in_reg_u32().

>
>> As for IOMMU_CMD_OPCODE_SHIFT, that can't be dropped yet, but it would
>> still be better to use
>>
>>     cmd[1] = (addr >> 32) | MASK_INSR(IOMMU_CMD_COMPLETION_WAIT,
>> IOMMU_CMD_COMPLETION_WAIT);
>>
>> in the short term.
> Can do (using IOMMU_CMD_OPCODE_MASK).

Oops yes.  That was a copy&paste mistake.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:38:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139317.257648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxTf-0002OS-J6; Wed, 09 Jun 2021 12:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139317.257648; Wed, 09 Jun 2021 12:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxTf-0002OL-G0; Wed, 09 Jun 2021 12:38:35 +0000
Received: by outflank-mailman (input) for mailman id 139317;
 Wed, 09 Jun 2021 12:38:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9c7t=LD=8bytes.org=joro@srs-us1.protection.inumbo.net>)
 id 1lqxTd-0002OF-SU
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:38:33 +0000
Received: from theia.8bytes.org (unknown
 [2a01:238:4383:600:38bc:a715:4b6d:a889])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e60bb87b-ea08-4a2b-8dbe-5219760b3ea6;
 Wed, 09 Jun 2021 12:38:32 +0000 (UTC)
Received: by theia.8bytes.org (Postfix, from userid 1000)
 id 99A0E36A; Wed,  9 Jun 2021 14:38:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e60bb87b-ea08-4a2b-8dbe-5219760b3ea6
Date: Wed, 9 Jun 2021 14:38:29 +0200
From: Joerg Roedel <joro@8bytes.org>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 01/11] x86/HV: Initialize GHCB page in Isolation VM
Message-ID: <YMC2RSr/J1WYCvtz@8bytes.org>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-2-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-2-ltykernel@gmail.com>

On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> 
> Hyper-V exposes GHCB page via SEV ES GHCB MSR for SNP guest
> to communicate with hypervisor. Map GHCB page for all
> cpus to read/write MSR register and submit hvcall request
> via GHCB.
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  arch/x86/hyperv/hv_init.c       | 60 ++++++++++++++++++++++++++++++---
>  arch/x86/include/asm/mshyperv.h |  2 ++
>  include/asm-generic/mshyperv.h  |  2 ++
>  3 files changed, 60 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index bb0ae4b5c00f..dc74d01cb859 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
>  	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
>  	void **input_arg;
>  	struct page *pg;
> +	u64 ghcb_gpa;
> +	void *ghcb_va;
> +	void **ghcb_base;

Any reason you can't reuse the SEV-ES support code in the Linux kernel?
It already has code to setup GHCBs for all vCPUs.

I see that you don't need #VC handling in your SNP VMs because of the
paravisor running underneath it, but just re-using the GHCB setup code
shouldn't be too hard.

Regards,

	Joerg


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:46:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:46:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139324.257660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxbO-0003rO-DW; Wed, 09 Jun 2021 12:46:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139324.257660; Wed, 09 Jun 2021 12:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxbO-0003rH-9j; Wed, 09 Jun 2021 12:46:34 +0000
Received: by outflank-mailman (input) for mailman id 139324;
 Wed, 09 Jun 2021 12:46:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9c7t=LD=8bytes.org=joro@srs-us1.protection.inumbo.net>)
 id 1lqxbN-0003rA-LL
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:46:33 +0000
Received: from theia.8bytes.org (unknown
 [2a01:238:4383:600:38bc:a715:4b6d:a889])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8e88f73-63c6-4ecb-8f9e-3e754a1751b2;
 Wed, 09 Jun 2021 12:46:32 +0000 (UTC)
Received: by theia.8bytes.org (Postfix, from userid 1000)
 id C211341A; Wed,  9 Jun 2021 14:46:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8e88f73-63c6-4ecb-8f9e-3e754a1751b2
Date: Wed, 9 Jun 2021 14:46:29 +0200
From: Joerg Roedel <joro@8bytes.org>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 04/11] HV: Add Write/Read MSR registers via ghcb
Message-ID: <YMC4JdtYO+eLDKh5@8bytes.org>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-5-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-5-ltykernel@gmail.com>

On Sun, May 30, 2021 at 11:06:21AM -0400, Tianyu Lan wrote:
> +void hv_ghcb_msr_write(u64 msr, u64 value)
> +{
> +	union hv_ghcb *hv_ghcb;
> +	void **ghcb_base;
> +	unsigned long flags;
> +
> +	if (!ms_hyperv.ghcb_base)
> +		return;
> +
> +	local_irq_save(flags);
> +	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
> +	hv_ghcb = (union hv_ghcb *)*ghcb_base;
> +	if (!hv_ghcb) {
> +		local_irq_restore(flags);
> +		return;
> +	}
> +
> +	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
> +
> +	hv_ghcb->ghcb.protocol_version = 1;
> +	hv_ghcb->ghcb.ghcb_usage = 0;
> +
> +	ghcb_set_sw_exit_code(&hv_ghcb->ghcb, SVM_EXIT_MSR);
> +	ghcb_set_rcx(&hv_ghcb->ghcb, msr);
> +	ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value));
> +	ghcb_set_rdx(&hv_ghcb->ghcb, value >> 32);
> +	ghcb_set_sw_exit_info_1(&hv_ghcb->ghcb, 1);
> +	ghcb_set_sw_exit_info_2(&hv_ghcb->ghcb, 0);
> +
> +	VMGEXIT();

This is not safe to use from NMI context. You need at least some
checking or WARN_ON/assertion/whatever to catch cases where this is
violated. Otherwise it will result in some hard to debug bug reports.

Regards,

	Joerg


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:49:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:49:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139330.257672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxeB-0004VK-Ri; Wed, 09 Jun 2021 12:49:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139330.257672; Wed, 09 Jun 2021 12:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxeB-0004VD-Od; Wed, 09 Jun 2021 12:49:27 +0000
Received: by outflank-mailman (input) for mailman id 139330;
 Wed, 09 Jun 2021 12:49:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9c7t=LD=8bytes.org=joro@srs-us1.protection.inumbo.net>)
 id 1lqxeA-0004V5-J1
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:49:26 +0000
Received: from theia.8bytes.org (unknown [81.169.241.247])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3dfd65d-0368-4fe3-9947-a777fb2dbfff;
 Wed, 09 Jun 2021 12:49:21 +0000 (UTC)
Received: by theia.8bytes.org (Postfix, from userid 1000)
 id B439236A; Wed,  9 Jun 2021 14:49:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3dfd65d-0368-4fe3-9947-a777fb2dbfff
Date: Wed, 9 Jun 2021 14:49:19 +0200
From: Joerg Roedel <joro@8bytes.org>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 05/11] HV: Add ghcb hvcall support for SNP VM
Message-ID: <YMC4z6L0PU3+HCTD@8bytes.org>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-6-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-6-ltykernel@gmail.com>

On Sun, May 30, 2021 at 11:06:22AM -0400, Tianyu Lan wrote:
> +u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size)
> +{
> +	union hv_ghcb *hv_ghcb;
> +	void **ghcb_base;
> +	unsigned long flags;
> +
> +	if (!ms_hyperv.ghcb_base)
> +		return -EFAULT;
> +
> +	local_irq_save(flags);
> +	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
> +	hv_ghcb = (union hv_ghcb *)*ghcb_base;
> +	if (!hv_ghcb) {
> +		local_irq_restore(flags);
> +		return -EFAULT;
> +	}
> +
> +	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
> +	hv_ghcb->ghcb.protocol_version = 1;
> +	hv_ghcb->ghcb.ghcb_usage = 1;
> +
> +	hv_ghcb->hypercall.outputgpa = (u64)output;
> +	hv_ghcb->hypercall.hypercallinput.asuint64 = 0;
> +	hv_ghcb->hypercall.hypercallinput.callcode = control;
> +
> +	if (input_size)
> +		memcpy(hv_ghcb->hypercall.hypercalldata, input, input_size);
> +
> +	VMGEXIT();

Also not NMI-safe. When you re-use the existing GHCB setup code from
the SEV-ES code, you can also use sev_es_get/put_ghcb(), which takes
care of re-using a GHCB that is already in use.

Regards,

	Joerg



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:50:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139336.257684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxer-0005kz-4S; Wed, 09 Jun 2021 12:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139336.257684; Wed, 09 Jun 2021 12:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxer-0005ks-1R; Wed, 09 Jun 2021 12:50:09 +0000
Received: by outflank-mailman (input) for mailman id 139336;
 Wed, 09 Jun 2021 12:50:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lqxep-0005Fv-Kx
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:50:07 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da256d4e-4ed9-4b65-86e2-55f1972ac695;
 Wed, 09 Jun 2021 12:50:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da256d4e-4ed9-4b65-86e2-55f1972ac695
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86: mark hypercall argument regs clobbering for intended
 fall-through
Message-ID: <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
Date: Wed, 9 Jun 2021 13:49:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 09/06/2021 11:34, Jan Beulich wrote:
> The CIDs below are all for the PV side of things, but also take care of
> the HVM side.
>
> Coverity-ID: 1485896, 1485901, 1485906, 1485910, 1485911,
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Let's see whether Coverity actually understands the (relatively) new
> pseudo-keyword.

This is exceedingly disappointing.  Coverity used to have the only
sensible rule for not causing spurious fallthrough warnings, but this
has apparently regressed.

Coverity works on the AST, so it ought to run after GCC has interpreted
__attribute__((__fallthrough__)), if applicable.

However, I doubt it will work in the fallback case, because #define
fallthrough looks dubious.  To trigger the older logic, the /*
fallthrough */ comment needs to be the final thing before the next case
label, and it isn't with the added semicolon.

Given that this pseudo-keyword is restricted to the SMMU driver for now,
we don't actually know if Coverity likes it or not.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:51:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:51:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139345.257696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxgU-0006T1-Kq; Wed, 09 Jun 2021 12:51:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139345.257696; Wed, 09 Jun 2021 12:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxgU-0006Su-H0; Wed, 09 Jun 2021 12:51:50 +0000
Received: by outflank-mailman (input) for mailman id 139345;
 Wed, 09 Jun 2021 12:51:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqxgS-0006Se-SI
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:51:48 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90b001fe-08ff-4e79-a8a2-2c7f73e42e44;
 Wed, 09 Jun 2021 12:51:43 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2053.outbound.protection.outlook.com [104.47.13.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-26-C97BAj0hPLGaNsmDosTIgQ-1; Wed, 09 Jun 2021 14:51:41 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6477.eurprd04.prod.outlook.com (2603:10a6:803:11e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.26; Wed, 9 Jun
 2021 12:51:39 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 12:51:39 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0240.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1e::36) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 9 Jun 2021 12:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90b001fe-08ff-4e79-a8a2-2c7f73e42e44
Subject: Re: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
 <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
 <e3283632-7621-69b8-5051-ec528c6ad8fc@suse.com>
 <4950c000-2984-a9be-d164-ecb65edffa2a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0b38a2df-0cb2-d31d-3733-38285614d154@suse.com>
Date: Wed, 9 Jun 2021 14:51:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <4950c000-2984-a9be-d164-ecb65edffa2a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On 09.06.2021 14:33, Andrew Cooper wrote:
> On 09/06/2021 13:08, Jan Beulich wrote:
>> On 09.06.2021 12:36, Andrew Cooper wrote:
>>> On 09/06/2021 10:26, Jan Beulich wrote:
>>>> @@ -49,28 +52,31 @@ static void send_iommu_command(struct am
>>>>  static void flush_command_buffer(struct amd_iommu *iommu,
>>>>                                   unsigned int timeout_base)
>>>>  {
>>>> +    static DEFINE_PER_CPU(uint64_t, poll_slot);
>>>> +    uint64_t *this_poll_slot = &this_cpu(poll_slot);
>>>> +    paddr_t addr = virt_to_maddr(this_poll_slot);
>>>>      uint32_t cmd[4];
>>>>      s_time_t start, timeout;
>>>>      static unsigned int __read_mostly threshold = 1;
>>>>
>>>> -    /* RW1C 'ComWaitInt' in status register */
>>>> -    writel(IOMMU_STATUS_COMP_WAIT_INT,
>>>> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>>>> -
>>>> -    /* send an empty COMPLETION_WAIT command to flush command buffer */
>>>> -    cmd[3] = cmd[2] = 0;
>>>> -    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
>>>> +    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
>>>> +
>>>> +    /* send a COMPLETION_WAIT command to flush command buffer */
>>>> +    cmd[0] = addr;
>>>> +    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
>>>> +                         IOMMU_COMP_WAIT_S_FLAG_MASK,
>>>> +                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);
>>> set_field_in_reg_u32() is a disaster of a function - both in terms of
>>> semantics, and code gen - and needs to be purged from the code.
>> Long ago I had an item on my todo list to get this cleaned up. But
>> it never really having made it up high enough, I dropped it at
>> some point, in the hope that we'd manage to get this sorted while
>> re-writing code step by step.
>>
>>> It is a shame we don't have a real struct for objects in the command
>>> buffer, but in lieu of that, this is just
>>>
>>>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;
>>>
>>> which is the direction that previous cleanup has gone.
>> I don't think I can spot a single instance of such.
>
> It's actually the other way around, for the emulation logic (which isn't
> used in practice).
>
> drivers/passthrough/amd/iommu_guest.c:348
>     i = cmd->data[0] & IOMMU_COMP_WAIT_I_FLAG_MASK;
>
>>  Some work was
>> done to introduce (mainly bitfield) structs, but this surely goes
>> too far for the change at hand. I can spot two instances using
>> MASK_INSR(), so I can see two consistent ways of doing what you
>> ask for:
>>
>>     cmd[0] = addr | MASK_INSR(IOMMU_CONTROL_ENABLED,
>>                               IOMMU_COMP_WAIT_S_FLAG_MASK);
>>
>> keeping the name as *_MASK (and I'd be open to replace
>> IOMMU_CONTROL_ENABLED by true) or
>>
>>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG;
>>
>> i.e. dropping _MASK (but requiring adjustments elsewhere in the
>> code). Please let me know which one you'd prefer.
>
> TBH, I'd suggest just using
>
>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;
>
> for now.  The constant is correct - it's just the name which is wonky.

But my previous reply was to make clear that I don't agree with
ORing in a *_MASK into a (to become) live value. *_MASK should be
used exclusively for masking, not as actual field values. Any
code violating this should imo be looked at with suspicion, as to
possibly having used the wrong value altogether.

> This in particular will reduce the code churn for ...
>
>>> There are no current users of IOMMU_COMP_WAIT_S_FLAG_SHIFT, and ...
>>>
>>>> +    cmd[1] = addr >> 32;
>>>> +    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
>>>>                           IOMMU_CMD_OPCODE_MASK,
>>>>                           IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
>>>> -    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
>>>> -                         IOMMU_COMP_WAIT_I_FLAG_MASK,
>>>> -                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
>>> ... this drops the final use of IOMMU_COMP_WAIT_I_FLAG_SHIFT, so both
>>> should be dropped.
>> Well, I can surely do so, but like this entire request of yours this
>> feels like scope creep - there was no intention here to do any
>> unrelated cleanup. And if I remove _S_ and _I_, then surely _F_
>> wants dropping as well, while IOMMU_COMP_WAIT_ADDR_*_SHIFT have a
>> use each in iommu_guest.c and hence need to stay for now.
>
> ... this, which I'm perfectly happy leaving to a subsequent change.
> (I'll even do it, if you're too busy right now).
>
> What I am mainly concerned with is not using this opportunity to remove
> uses of set_field_in_reg_u32().

Well, when I put the patch together I was thinking of two "proper"
options - keeping the use of set_field_in_reg_u32(), or replacing it
by (bitfield) struct uses. The latter would be a far larger change,
and should imo be one on its own (i.e. no functional change) anyway.
Hence I went the former route. Since you vehemently ask for it, I'll
go the middle route you suggest, but this only sets us up for re-
writing this piece of code another time once we introduce proper
structs.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 12:55:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 12:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139353.257708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxjz-0007F8-3u; Wed, 09 Jun 2021 12:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139353.257708; Wed, 09 Jun 2021 12:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqxjz-0007F1-0r; Wed, 09 Jun 2021 12:55:27 +0000
Received: by outflank-mailman (input) for mailman id 139353;
 Wed, 09 Jun 2021 12:55:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqxjx-0007Ev-PT
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 12:55:25 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b69c10c-d5b4-4f42-9906-fa84a30d1c82;
 Wed, 09 Jun 2021 12:55:24 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2052.outbound.protection.outlook.com [104.47.12.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-18-L5tsVD-BM6WnaEzkI0BWyw-2; Wed, 09 Jun 2021 14:55:22 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2335.eurprd04.prod.outlook.com (2603:10a6:800:2e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Wed, 9 Jun
 2021 12:55:20 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 12:55:20 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0010.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.10 via Frontend Transport; Wed, 9 Jun 2021 12:55:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] x86: mark hypercall argument regs clobbering for intended
 fall-through
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
 <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ee3a0434-c4b2-3dcb-bd26-3550b80188c4@suse.com>
Date: Wed, 9 Jun 2021 14:55:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0010.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 09.06.2021 14:49, Andrew Cooper wrote:
> On 09/06/2021 11:34, Jan Beulich wrote:
>> The CIDs below are all for the PV side of things, but also take care of
>> the HVM side.
>>
>> Coverity-ID: 1485896, 1485901, 1485906, 1485910, 1485911,
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Let's see whether Coverity actually understands the (relatively) new
>> pseudo-keyword.
>
> This is exceedingly disappointing. Coverity used to have the only
> sensible rule for not causing spurious fallthrough warnings, but this
> has apparently regressed.
>
> Coverity works on the AST, so ought to be after GCC has interpreted
> __attribute__((__fallthrough__)) if applicable.
>
> However, I doubt it will work in the fallback case, because #define
> fallthrough looks dubious. To trigger the older logic, the /*
> fallthrough */ comment needs to be the final thing before the next case
> label, and it isn't with the added semicolon.
>
> Given that this pseudo-keyword is restricted to the SMMU driver for now,
> we don't actually know if Coverity likes it or not.

When it was introduced, I did specifically ask whether it would
satisfy Coverity. I was told it would, and I had no proof to the
contrary, so I had to accept what I was told. I asked at the time
precisely to avoid needing two forms of annotation on every single
legitimate / intentional fall-through case.
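For reference, the pseudo-keyword follows the Linux-style pattern sketched below (an illustration of the idea, not Xen's exact definition); the fallback branch is where the /* fallthrough */ comment placement matters to older Coverity logic:

```c
#ifndef __has_attribute
# define __has_attribute(x) 0
#endif

/* Sketch of the pseudo-keyword; Xen's actual definition may differ. */
#if __has_attribute(__fallthrough__)
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while ( 0 ) /* fallthrough */
#endif

/* An intentional fall-through annotated with the pseudo-keyword. */
static int classify(int ch)
{
    int score = 0;

    switch ( ch )
    {
    case 'a':
        score += 1;
        fallthrough;
    case 'b':
        score += 10;
        break;
    default:
        break;
    }

    return score;
}
```

In the fallback case the use site expands to `do {} while ( 0 ) /* fallthrough */;` - i.e. the semicolon, not the comment, ends up as the last token before the next case label, which is precisely the concern raised above.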

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 13:14:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 13:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139361.257720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy1p-0001Af-NW; Wed, 09 Jun 2021 13:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139361.257720; Wed, 09 Jun 2021 13:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy1p-0001AY-K2; Wed, 09 Jun 2021 13:13:53 +0000
Received: by outflank-mailman (input) for mailman id 139361;
 Wed, 09 Jun 2021 13:13:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqy1o-0001AO-2v; Wed, 09 Jun 2021 13:13:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqy1n-0007wC-Tc; Wed, 09 Jun 2021 13:13:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lqy1n-00082J-Hi; Wed, 09 Jun 2021 13:13:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lqy1n-0005S2-HB; Wed, 09 Jun 2021 13:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162572-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162572: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f5035d480f7a7033f15765b67c19df86a8ef2c69
X-Osstest-Versions-That:
    xen=e4fee66043120c954fc309bbb37813604c1c0eb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 13:13:51 +0000

flight 162572 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162572/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f5035d480f7a7033f15765b67c19df86a8ef2c69
baseline version:
 xen                  e4fee66043120c954fc309bbb37813604c1c0eb7

Last test of basis   162554  2021-06-08 20:01:31 Z    0 days
Testing same since   162572  2021-06-09 11:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e4fee66043..f5035d480f  f5035d480f7a7033f15765b67c19df86a8ef2c69 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 13:15:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 13:15:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139370.257735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy2y-0001pz-6E; Wed, 09 Jun 2021 13:15:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139370.257735; Wed, 09 Jun 2021 13:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy2y-0001ps-2o; Wed, 09 Jun 2021 13:15:04 +0000
Received: by outflank-mailman (input) for mailman id 139370;
 Wed, 09 Jun 2021 13:15:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqy2w-0001pm-5l
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 13:15:02 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b99d3d57-45c5-46df-ae00-645cd10f6bcc;
 Wed, 09 Jun 2021 13:15:01 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2055.outbound.protection.outlook.com [104.47.0.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-NK8jlv9RPZaJjPs-9_P21w-1; Wed, 09 Jun 2021 15:14:59 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4944.eurprd04.prod.outlook.com (2603:10a6:803:60::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.29; Wed, 9 Jun
 2021 13:14:57 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 13:14:56 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0116.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:19::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 9 Jun 2021 13:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: please Clang in arch_set_info_guest()
Message-ID: <c758de2b-c453-4dba-ddc0-0c9548172c6e@suse.com>
Date: Wed, 9 Jun 2021 15:14:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0116.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:19::32) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

Clang 10 reports

domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
    if ( !compat )
         ^~~~~~~
domain.c:1334:34: note: uninitialized use occurs here
    cr3_page = get_page_from_mfn(cr3_mfn, d);
                                 ^~~~~~~
domain.c:1328:5: note: remove the 'if' if its condition is always true
    if ( !compat )
    ^~~~~~~~~~~~~~
domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
    mfn_t cr3_mfn;
                 ^
                  = 0
domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
domain.c:1211:9: note: uninitialized use occurs here
        fail |= v->arch.pv.gdt_ents != c(gdt_ents);
        ^~~~
domain.c:1189:9: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
        bool fail;
                 ^
                  = false

despite this being a build with -O2 in effect, and despite "compat"
being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
defined, as it gets set at the top of the function from the result of
is_pv_32bit_domain().

Re-arrange the two "offending" if()s such that, when COMPAT=n, the
respective variables are seen as unconditionally initialized. The
original aim, though, was to have the !compat cases first.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder how many more there are to come.
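The pattern can be reproduced stand-alone. In this sketch, COMPAT stands in for CONFIG_COMPAT and pick() is a made-up function; it is compiled with COMPAT undefined, mirroring the CONFIG_PV32=n case:

```c
/* Original arrangement (trips Clang's -Wsometimes-uninitialized, since
 * the "if" body is the only initialization Clang's AST-level check sees):
 *
 *     if ( !compat )
 *         val = 1;
 * #ifdef COMPAT
 *     else
 *         val = 2;
 * #endif
 *
 * Rearranged, as in the patch below: the conditional branch sits inside
 * the #ifdef and the common case is the trailing "else" body, so with
 * COMPAT undefined the assignment is unconditional. */
static int pick(int compat)
{
    int val;

#ifdef COMPAT
    if ( compat )
        val = 2;
    else
#endif
        val = 1;

    return val;
}
```

With COMPAT undefined, the preprocessor drops the whole if/else, leaving a plain unconditional assignment, so no "uninitialized" path remains.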

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1186,7 +1186,17 @@ int arch_set_info_guest(
         unsigned long pfn = pagetable_get_pfn(v->arch.guest_table);
         bool fail;
 
-        if ( !compat )
+#ifdef CONFIG_COMPAT
+        if ( compat )
+        {
+            l4_pgentry_t *l4tab = map_domain_page(_mfn(pfn));
+
+            pfn = l4e_get_pfn(*l4tab);
+            unmap_domain_page(l4tab);
+            fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
+        }
+        else
+#endif
         {
             fail = xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[3];
             if ( pagetable_is_null(v->arch.guest_table_user) )
@@ -1197,16 +1207,6 @@ int arch_set_info_guest(
                 fail |= xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[1];
             }
         }
-#ifdef CONFIG_COMPAT
-        else
-        {
-            l4_pgentry_t *l4tab = map_domain_page(_mfn(pfn));
-
-            pfn = l4e_get_pfn(*l4tab);
-            unmap_domain_page(l4tab);
-            fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
-        }
-#endif
 
         fail |= v->arch.pv.gdt_ents != c(gdt_ents);
         for ( i = 0; !fail && i < nr_gdt_frames; ++i )
@@ -1325,12 +1325,12 @@ int arch_set_info_guest(
 
     set_bit(_VPF_in_reset, &v->pause_flags);
 
-    if ( !compat )
-        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
 #ifdef CONFIG_COMPAT
-    else
+    if ( compat )
         cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
+    else
 #endif
+        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
     cr3_page = get_page_from_mfn(cr3_mfn, d);
 
     if ( !cr3_page )



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 13:18:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 13:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139376.257747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy6M-0002ZU-MU; Wed, 09 Jun 2021 13:18:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139376.257747; Wed, 09 Jun 2021 13:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqy6M-0002ZN-Is; Wed, 09 Jun 2021 13:18:34 +0000
Received: by outflank-mailman (input) for mailman id 139376;
 Wed, 09 Jun 2021 13:18:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Llia=LD=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1lqy6K-0002ZD-If
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 13:18:32 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b762a402-6ce0-4de1-9ebf-e8f31535baef;
 Wed, 09 Jun 2021 13:18:31 +0000 (UTC)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 159DITjv017183; Wed, 9 Jun 2021 13:18:29 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 391g4g8ynr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 09 Jun 2021 13:18:29 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 159DFei1014735;
 Wed, 9 Jun 2021 13:18:28 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2168.outbound.protection.outlook.com [104.47.56.168])
 by aserp3030.oracle.com with ESMTP id 38yyabqp90-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 09 Jun 2021 13:18:27 +0000
Received: from BN6PR1001MB2228.namprd10.prod.outlook.com
 (2603:10b6:405:2e::38) by BN0PR10MB5094.namprd10.prod.outlook.com
 (2603:10b6:408:129::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Wed, 9 Jun
 2021 13:18:25 +0000
Received: from BN6PR1001MB2228.namprd10.prod.outlook.com
 ([fe80::ed56:7043:6d81:8547]) by BN6PR1001MB2228.namprd10.prod.outlook.com
 ([fe80::ed56:7043:6d81:8547%4]) with mapi id 15.20.4195.030; Wed, 9 Jun 2021
 13:18:25 +0000
Received: from tomti.i.net-space.pl (84.10.22.86) by
 AM6PR01CA0071.eurprd01.prod.exchangelabs.com (2603:10a6:20b:e0::48) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21 via Frontend
 Transport; Wed, 9 Jun 2021 13:18:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b762a402-6ce0-4de1-9ebf-e8f31535baef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=aWZvyMpXSmm1aqoicaf7ffWPiF/m0Fq3CfRZKcMo76Y=;
 b=Tw9UofTARJ7y0tuOez62YrdrVEvPQdJERQGRLhTTQSb9dTCYgzJVZep1WosVL00S4a/D
 RaCpIKMs5huMHGg2N6GbBg3AKNpnfIgq58MHcfSETqOWXWL5m2cDJG2rnWMdkfACK2Gi
 ZqpluGW1N2wd/Vn1f0Z17H6sEuCuEbhozjNwFy6OU1ZLMcdNdmkxwQNA3wUbIadbndH9
 QOKVuCxoPQXnZyDtVTW4t1R6GCXlgV2Zn2Db0H51yNQVTdAOB5p41yq4ggj+lnuHkGbk
 lAELBJJFpHZYUuc/w26l+OxOSYGLcBWDGWiQvxW0Nw9aZL2u6NaFczIlfDO/VOy4ZkdN tQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UxLEuQ0YR+N6YM/PRplVBH5j/cCaYWgano26ZFk2ShJhFFis5Jkh7Edrvh2judbJgZnENAegEjyShs1Lmqf3aCUg+FBUOu3Tp2o4xUmkmO1kr5n2sPkQj3XOolMroOgayc6zvVqw1ErepXpL/YEmotIbQhyXra6MEr3V09nibrmddtDqhAaUP+h78E9gP2U0Z069TYqPlulhQRQshRUN+y8qd9RoG1h5JTdXoCWxUzPoUFXs6GB01zPjUO+IOYNMlvW8RZmnEKCXA5Ug34fpVrZfpULMzfuKCZrlCFueNIhZ3etTvxos/7tWXJaBDbramoOQW00Q+kXLGAGWdQU32Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aWZvyMpXSmm1aqoicaf7ffWPiF/m0Fq3CfRZKcMo76Y=;
 b=E7/SC+NGqV8Tn2/jRgPIK5vqIgbXwM6GMNp4RV0Ji4P0h/yYzgWa1NKDOOE1dfopxqxIeDstTcc9i8h1V7wI2ZjRYPf6ZjALCzWjghVcKqBLg7EpSxkCHiDhJo/MWR4QIiYR+bQKkNMudL/iZS9+Ghr1O+4RarYI2hZU8u074tfacpKdF8FJdoxyybk9wxjOsYwr/gzVFghWvhI7qcVw94AzP6Jpf90EFgHQEZE0/wrdS9UJe9Me0OYUSjvFtrq9jhf9sRFm0eB0tgEX+tn4KMdJo3uK/fN3lOgbz3dNJOIKnGyFopjZ+nW29VUT/PkjOnir2ZVldxE14f5FBKBGlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aWZvyMpXSmm1aqoicaf7ffWPiF/m0Fq3CfRZKcMo76Y=;
 b=ZblRIXZsY0fQNRAkmhtSU7rWG5F39IMu0txatRzmqx38gzRrX78+MfUUNwXYaBftHCcillwiIGo4d2pKHNSlNm+5z1Ciy/QSMipriVhNgF59aFDZwShX7lnrORF4E5y18t3Zj8BfAHrMKRAFEqHsAhlKQV5aJLkHvBTDkmb9T1c=
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=oracle.com;
Date: Wed, 9 Jun 2021 15:18:18 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
Message-ID: <20210609131818.pkpzbzi7p5x2fu7i@tomti.i.net-space.pl>
References: <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
 <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
 <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
 <20210519124846.go3zyqzojsaj35in@tomti.i.net-space.pl>
 <c55f44dd-47bb-8e60-c1a3-446c561d6740@suse.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c55f44dd-47bb-8e60-c1a3-446c561d6740@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Originating-IP: [84.10.22.86]
X-ClientProxiedBy: AM6PR01CA0071.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::48) To BN6PR1001MB2228.namprd10.prod.outlook.com
 (2603:10b6:405:2e::38)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4894e9f2-d705-4616-9a7a-08d92b491037
X-MS-TrafficTypeDiagnostic: BN0PR10MB5094:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BN0PR10MB5094A70BB12957680DD86D8BE3369@BN0PR10MB5094.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	Rio+zWsjCcaGjOS3OYbE5ALnGE9FCsmXbPgO+lcNsyGFEfjgnaleWbSHnddDgELhOs08kVgNmn9tH74JJhh3Z7qtzVZLj55AE7qeVSly6OZqyylv/iQ/vyya+HKuRdTMrc28lxxXk0BclbtpQihWcCBItRo6CGhmCjDEkC06kWTz3t+vBAS7i2astDGrf2hTJDauq6Z+BctlZX11fE0KtMNQr1gY5fIfcguhnjHfn7OMrKXvpMRS4u6oILuGQJgBGttiUErile7ZmlyVlDTT6/7D1m0FtK8Z9EDmqhwj/NzXaVxuRrg1dmSQOu42/ezZWThoyJYXM68EelkgoIJy0ndnrvkDZLyg8eBY8p6SebyAZ5DhlrNOXkdafDzMmPMlhILaJwH9npnwG8DlEWQKn8LiKMN+1n3v7/avJ8TSUz71uFNmP0mlYAs17W1S2rnKe0Q3E5XPgoWHwDV8xaG8x7vM/ZJ1Xw03gMqQBZAjC0YpLXgzpfgTFifQw3e208e3PdfV1KNRObpM+qjgdhLbf/fXs/I76G/cJAW6w10Vr/8cMKvwSD9Q7CgNflYfBpPhqR4MihMxmZwQLfj8KugbCBNwS5Noj5aPHUHOUOQAjVyC9YDJAsV/VzVqVLhjvSZnNrJmHaPGsWZ3sGtZKblzYQ==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BN6PR1001MB2228.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(376002)(396003)(366004)(136003)(39860400002)(9686003)(55016002)(83380400001)(2906002)(5660300002)(44832011)(6916009)(66946007)(478600001)(66556008)(66476007)(86362001)(7696005)(4326008)(54906003)(8936002)(16526019)(8676002)(53546011)(52116002)(316002)(26005)(956004)(186003)(1076003)(38100700002)(38350700002)(6506007)(6666004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?Vsz79ukWZHI/5CwrWREouw9GXrhn2ZP8NBOHC6WLadXEBeiH6UTzJ7batwZF?=
 =?us-ascii?Q?34P3yJOJQprtqxtwn8IQOsNbkNAHJCjQU3Od9M/8k6TPY1IBtPU7OEZt+aeG?=
 =?us-ascii?Q?LaUvVz81xcCUmsg5Z4PLP2vs2Yryz3QeOq7hbwe34+HGT1D8s5JaZ6ZHF96b?=
 =?us-ascii?Q?rPYoI8i5kubgE14Yxy7dPe5pLGKkpjvPiLEzdpiJYoArsH9CgZQH5GoUt/ZS?=
 =?us-ascii?Q?oIgtV2WD9Z/AljCjRLtFfujV3ybGi8CimsG5rWmLDEdjp7Fb+y2B5papK0JF?=
 =?us-ascii?Q?cSIsQ5MXbNgLsxKVa6PoZYPiWkiGkTZSLB7cawMAKzCdOezjlldpkL3vNCQk?=
 =?us-ascii?Q?oS9QfonY69CF1u9Om9O6SckPY3gKhKzWkeJaHWNUzMG257lGjfxA81w41j+G?=
 =?us-ascii?Q?5uEAryWcityb0kpGSl7p0VZCUpLYIR5ClF2SFztq3Lg+88KFbJfRwHMut1Cc?=
 =?us-ascii?Q?FqDBFVWY0ctV/zB/Tkd3aK0P+nf1FdyJosYGadqMoO7dCipMtHuatYHywNBs?=
 =?us-ascii?Q?jUwswOaIF8DM1QDjPMUxkeZq0v9lYuljSCCenH87Ua3/XwqEu3/xfsA+syQC?=
 =?us-ascii?Q?rqjWC8Q3wwveIVeiI+vKs+VvRJiy5jhAyjgkVSDTCE4dbas2nt+Cw/WoJchj?=
 =?us-ascii?Q?7tuJjFrEpZ9q6sQsw/xGQNAcb3Sc8u8C6CrPvITgs8pdYwxq4+19jl6k3KQr?=
 =?us-ascii?Q?NFJaqBR8ZZf4RZqTHKFDTmyh8mEjStrebMfw5JDv/N4yNtke5d7m9p1UaJH/?=
 =?us-ascii?Q?TDm2LY/upif5ZoagxvoJwPfa4oPvEE4TXu4DC5wVcIj2mKwRrpq7xIVal/zi?=
 =?us-ascii?Q?TXQEQuaM9CZ+xOURGHfUUgkOXX7JvIsaFT34ioI1AKmsfZKLc2JWuUUEaoWN?=
 =?us-ascii?Q?p+yW+MSFjBJSYkJy8+Pw0N6Nbhf8CQssVNWTSpnufMjTEonFrzpfFKbDFtU3?=
 =?us-ascii?Q?aZtFLW40g0YYmHJ7m+Ulqw3Gq7FkymnRixomXZXp+k376vhb/ugjK6h8l0pO?=
 =?us-ascii?Q?0kr9QJoWSB9BGA5/mInKrInXw/9QuBbPF0meLABOQmWTQKh7MmVS8nMYS0e0?=
 =?us-ascii?Q?Q1IK3oJK2L1bpalsRPrRx42wkGGEy1+5ek+u5u03KmW9hgTR5i/subYAOBRd?=
 =?us-ascii?Q?HJiZsSO01tFE+IFMmlHCJU4GpcODzhaGg1WiF20htrspfy5zmzmWTYMdEhP2?=
 =?us-ascii?Q?2Xxle76AwbOe5TGLtdLvc9xQD5WlMEzGAPauHyt5UP3zUeYtaOzAGF+TKrZv?=
 =?us-ascii?Q?oP8ySAGy2YaaeicjGV1jFOcdkoQwQWosZcyUV4QuNDD/gh1pHlKGFt6E+RD2?=
 =?us-ascii?Q?TbjuLFvJGj+YqcsStcSh+9MZ?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4894e9f2-d705-4616-9a7a-08d92b491037
X-MS-Exchange-CrossTenant-AuthSource: BN6PR1001MB2228.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 13:18:25.6761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8sANRKjskIO0gpdPI4mw3l4P6J8RsTFKNIhKRe9fakWL6J2ZFgkXk2RPtmElJqzEumzu+1XTo93w3I95gzCjJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN0PR10MB5094
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10010 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 malwarescore=0 spamscore=0
 adultscore=0 suspectscore=0 mlxscore=0 mlxlogscore=999 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106090066
X-Proofpoint-GUID: JZXLkFih4OPPl-Ln4uPQ1-5JNpMjKM8a
X-Proofpoint-ORIG-GUID: JZXLkFih4OPPl-Ln4uPQ1-5JNpMjKM8a

On Wed, May 19, 2021 at 04:35:00PM +0200, Jan Beulich wrote:
> On 19.05.2021 14:48, Daniel Kiper wrote:
> > On Wed, May 19, 2021 at 11:29:43AM +0200, Jan Beulich wrote:
> >> On 18.05.2021 19:46, Daniel Kiper wrote:
> >>> On Mon, May 17, 2021 at 03:24:28PM +0200, Jan Beulich wrote:
> >>>> On 17.05.2021 15:20, Daniel Kiper wrote:
> >>>>> On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
> >>>>>> On 07.05.2021 22:26, Bob Eshleman wrote:
> >>>>>>> What is your intuition WRT the idea that, instead of trying to add a
> >>>>>>> PE/COFF hdr in front of Xen's mb2 bin, we go the route of introducing
> >>>>>>> valid mb2 entry points into xen.efi?
> >>>>>>
> >>>>>> At the first glance I think this is going to be less intrusive, and hence
> >>>>>> to be preferred. But of course I haven't experimented in any way ...
> >>>>>
> >>>>> When I worked on this a few years ago I tried that way. Sadly I failed
> >>>>> because I was not able to produce a "linear" PE image using the binutils
> >>>>> existing in those days.
> >>>>
> >>>> What is a "linear" PE image?
> >>>
> >>> The problem with the Multiboot family of protocols is that all code and
> >>> data sections have to be glued together in the image and loaded into
> >>> memory as such (IIRC .bss is an exception, but it has to live behind the
> >>> image). So, you cannot use a PE image which has different representations
> >>> in the file and in memory. IIRC, by default at least the code and data
> >>> sections in xen.efi have different sizes in the PE file and in memory. I
> >>> tried to fix that using a linker script and objcopy, but it did not work.
> >>> Sadly I do not remember the details, but there is a pretty good chance
> >>> you can find the relevant emails in the xen-devel archive with me
> >>> explaining what kind of problems I met.
> >>
> >> Ah, this rings a bell. Even the .bss-is-last assumption doesn't hold,
> >> because .reloc (for us as well as in general) comes later, but needs
> >> loading (in the right place). Since even xen.gz isn't simply the
> >
> > However, IIRC it is not used when Xen is loaded through the Multiboot2
> > protocol. So, I think it may stay in the image as is, and the Multiboot2
> > header should not cover the .reloc section.
> >
> > By the way, why do we need the .reloc section in the PE image? Is
> > %rip-relative addressing not sufficient? IIRC the Linux kernel just
> > contains a stub .reloc section. Could we not do the same?
>
> %rip-relative addressing can (obviously, I think) help only for text.
> But we also have data containing pointers, which need relocating.

Ahhh, right, I totally forgot about it.

> >> compressed linker output, but a post-processed (by mkelf32) image,
> >> maybe what we need is a build tool doing similar post-processing on
> >> xen.efi? Otoh getting disk image and in-memory image aligned ought
> >
> > Yep, this should work too.
> >
> >> to be possible by setting --section-alignment= and --file-alignment=
> >> to the same value (resulting in a much larger file) - adjusting file
> >
> > IIRC this did not work for some reason. Maybe it would be better to
> > enforce the correct alignment and required padding using a linker script.
>
> I'm not convinced the linker script is the correct vehicle here. It
> is mainly about placement in the address space (i.e. laying out how
> things will end up in memory), not about file layout.

OK, but I would at least check what is possible and then do it.

> >> positions would effectively be what a post-processing tool would need
> >> to do (like with mkelf32 perhaps we could then at least save the
> >> first ~2Mb of space). Which would still leave .reloc to be dealt with
> >> - maybe we could place this after .init, but still ahead of
> >> __init_end (such that the memory would get freed late in the boot
> >> process). Not sure whether EFI loaders would "like" such an unusual
> >> placement.
> >
> > Yeah, good question...
> >
> >> Also not sure what to do with Dwarf debug info, which just recently
> >> we managed to avoid needing to strip unconditionally.
> >
> > I think debug info may stay as is. Just the Multiboot2 header should
> > not cover it if it is not needed.
>
> You did say that .bss is expected to be last, which both .reloc and
> debug info violate.

The .bss section has to be the last one in memory from the Multiboot2
protocol's point of view. However, nothing, AFAICT, forbids having
something behind it in the file, provided, of course, that the data at
the end of the file is ignored when the image is loaded via the
Multiboot2 protocol.

Daniel


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 13:46:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 13:46:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139388.257768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqyX3-0005sw-13; Wed, 09 Jun 2021 13:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139388.257768; Wed, 09 Jun 2021 13:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqyX2-0005sp-UB; Wed, 09 Jun 2021 13:46:08 +0000
Received: by outflank-mailman (input) for mailman id 139388;
 Wed, 09 Jun 2021 13:46:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jMcQ=LD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lqyX2-0005sj-6V
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 13:46:08 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c6fdfe0-5c6f-409b-ae00-0fe700dc4e61;
 Wed, 09 Jun 2021 13:46:06 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2058.outbound.protection.outlook.com [104.47.1.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5-YkWM81_MPFCGO_XLIQFRBw-1;
 Wed, 09 Jun 2021 15:46:03 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5472.eurprd04.prod.outlook.com (2603:10a6:803:d3::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.22; Wed, 9 Jun
 2021 13:46:02 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Wed, 9 Jun 2021
 13:46:02 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0042.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:51::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 9 Jun 2021 13:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c6fdfe0-5c6f-409b-ae00-0fe700dc4e61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623246365;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=R1RCBhkH3bKk0wnN0QyKqFvRVsI0s+BU6zZsur0WPRM=;
	b=FlTyCycQgzrE8yt/3mCCLccAVWUHupnSV0uoUF6Kr2h3zfKwWCWZtEY1AQv41HqeHiSNAh
	gEADEFKD0U6coAP9iwbRSvn3uAnl2NOYKDtS3+uBwgrsa9H2yeeVuFl3R+7va9CtGG+JaQ
	XmMSxc8FiFdPC71MBnXzqeI+zAjQhDA=
X-MC-Unique: YkWM81_MPFCGO_XLIQFRBw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lGERnIUKFKkNZUopgQL/pxKUtqF1B80+NjdTFvixelpJLkwmbQHF0tD96P7Bf87tGLku0vRt7Y2oDbzQr1eM76ZPNl/PDPrrxI+CtEMpm4Siydb4YqAngHOy6XW0N25M04PfU7zzdc96IzTJXYpjxyWyCP5y2nqBTS9aJyiqhXLjeNy7QPqkGjISbOLEaBm9M+dII7vWaRxnfiH96ORpI5oiISDNoyxtktDL5+NOzV2mrXMm3y1w+gcpfk9ne7mvFHqNKy04g0gv8bYg+6YcDeU82F/lC+8aJTZyDy3iWle0eyKpIyCAbOyQVuM0JK6ndIkWgQnzayyL8nzmMfWWKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R1RCBhkH3bKk0wnN0QyKqFvRVsI0s+BU6zZsur0WPRM=;
 b=LS8HP+aAzuiCNpYg4v7m+Kqh4FFqyNhtgqaqt+H96+dek5TdfQvElsTuv+CAT07uaUtoch1w/1tY0xc9pRD+SEqFPDkpa6i0cOGJByCL8e7n6V/NlsXFU16/2nVO0ohJn3374cCIS/VSo0ReI/zJEcZ50o/g8eiMwzZhwtEcy//CfceHxLccJQfQq0xmUVt1/F39J4m2YPth346trdXfExuX2jLoq61fG/FqlUNNujb5Hb7udfBISz/g3nZfDlgjn1KKmsem24wjmQY33Y2uBZPBWmfMbEtgWDIfioNAQxl5N3Yw6q4ZwVLK6RG2v5wI9+93fG6FFK90bJd/kcAcEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
 <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
 <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
 <20210519124846.go3zyqzojsaj35in@tomti.i.net-space.pl>
 <c55f44dd-47bb-8e60-c1a3-446c561d6740@suse.com>
 <20210609131818.pkpzbzi7p5x2fu7i@tomti.i.net-space.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8380715a-f26b-eccf-a8e1-42db29b6ce6f@suse.com>
Date: Wed, 9 Jun 2021 15:45:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210609131818.pkpzbzi7p5x2fu7i@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0042.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:51::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f615c4b3-39ed-497f-fdf8-08d92b4cebac
X-MS-TrafficTypeDiagnostic: VI1PR04MB5472:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5472B9DB4DFF9A3D46EA5A19B3369@VI1PR04MB5472.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GRopGlcA2QI6RoRzULLuiPxWg9p46Zuw6fjaCFHxLGrU+jcRXeKOFSWZcndL4reQEa89nugLkBf91uCH5EZfbk7+fUAP5cdvcEP9RfYPTYCXaP2z5kk7YuVLIvmuEYTRfJbCigg9fIVQ1mSA6SXq9Jl0HOGX4wsJOO/2SmAI2Cvo0xpE+oqynpjZ1O8TJaIyp0aprz9y78ttWfiQ1UYILoB8ifhYFAd4xIrpwAfk/TL7nhuIYaMyUuhkRJeM8kf+f1jxY6XRPWURO+ANDxNnjK7sABcGwFQGslktmh7p8z7ZdO8g6jBAzqoVVbSRLtPDLJTBN7IPAUnFfq2KOE0eQhwxFUVhcImpYrs9CmZyRFYrqYQhxvcydtz1wwEOiu01NwO5XuvHLE1yM1NqZxTEvzx/+KvZpKaOA36QZ3ncBa5LiUK46YTIOQPSD6+bylQnUMJ4mvqzIXZXEitNP++Y2s5APXafzYTbun2rtHQWKxc67fp4lqiVoMUkBGBc9J2FALnBKsHaegC4fOBjRd116COSA3EZYxH0X5hSBA1cAJFNCQ3FeB6CtyTbPXaTrIAbZ7fXHCkcKy8EcdQ6JEOpfOA3seVMc5iLi7t+RA+ymt/C+3ukxI/QdFNkxU4zlRXl4+ZQ1SWlHzDtIztKIzdN6XBoyb8ApIhwEY+HtI0HjCRc155Wd1M1o+feqgebMLL3
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(396003)(366004)(39860400002)(136003)(376002)(66556008)(31696002)(66476007)(478600001)(16526019)(186003)(53546011)(6486002)(16576012)(86362001)(4326008)(956004)(36756003)(6916009)(26005)(38100700002)(66946007)(316002)(5660300002)(4744005)(2906002)(2616005)(31686004)(8936002)(8676002)(83380400001)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?dWlndnlpQWpuVGtHRVBkTno0THJUVkNxTXpUV1FvN1JmMS82aU5NRHFNVUda?=
 =?utf-8?B?cFJuSEJwRDdvT1Y1a3FsM2o1Y1hIajd1ZHpSbUIrTEc0REtpeVdMTzBTRlNN?=
 =?utf-8?B?YldQV1JtNTMvMTkwR2pqNzJsNXBjekdNYUw3ZTZvRGlhK1RkZkVyenQxUHZU?=
 =?utf-8?B?Z2c0a3ovd1hzT1Q1R01aakt3T21JRytGYUdMOXVjYlo4YkJnRHVEL3JoU0V5?=
 =?utf-8?B?bG9JMTRiU0x5WjNJS0RNRng4dm0ySGxMTmlJOHJtR3AwUlE4cFZFZE42aldN?=
 =?utf-8?B?WS9odzJVWlZmZnZyM0RZUkNac1RkaWNVemtITlo2K29qajJhRTRmbXppL3p1?=
 =?utf-8?B?Z3lkSFZKM05QZjc3M3dwZDhTdGlHWm5YNy9ZMWxqR0JTOFBXUWk5UmlZOWN3?=
 =?utf-8?B?SUQyLzJ3MGdwUjFKYUVXekxqQndXa0tGMlJzbENJVEJiTWJ5SlFpZlRrR2xX?=
 =?utf-8?B?Y3N1NHR0VHZKK1dvYmVSL081Rm1pRVNyOVVxeGMvNzljSkd2Rk00dkY2cXdo?=
 =?utf-8?B?Z2x5YUJTUzdQT2tLVC9kN1hmRVRPaEF6bWh1SXQ3eFovRXh5aW5ibjVKcWc3?=
 =?utf-8?B?WEY2K3VWc3ZzOVN1Y2gxMm9OMXg1VjIyZHFtQlN5cUY0SkVWbkRlK3RIby9H?=
 =?utf-8?B?d2FNQlVqLy9oalZYNUVZYm5ucDRNZ3oyNjFkTmtzeTMxRjNOOUFuZytBbnV6?=
 =?utf-8?B?amw1T2dac1VoTGhSQnRqRWJjTmtiY1dNM0N4RlpuNGhyLzR5L05TSjNHZStT?=
 =?utf-8?B?eEJod2YxVkV1VW05UXB6QmlsTUdkZWtuSS9YZkVLUUFxVVdTcHpDLzJDMDNM?=
 =?utf-8?B?Rk4wdjdpYWYvZjVMVWJUUTlMYUVQeWs4emM1NGRuR2wrTmRFVFlZV3VxRnJO?=
 =?utf-8?B?S0dzc3dqYUhodi9TSDBvSXRUY05xbStHc0N3SFc2QlMwbW9tWm5iSVpQMDJi?=
 =?utf-8?B?WFJBVkRsbVBhbllpemlEcG10bFFMaXE1ak1lbXE4d0VCcHh1TitGbThpa0d0?=
 =?utf-8?B?TXdkYXU0MVJ5WVlZbjg5UnlwVHVpeUdjVmZEYXExQjlCbzNMTWRRR1U2MmdD?=
 =?utf-8?B?YmRxNXRSTGhOMk8zN0plVUFsM09OZ1B6Y0YrdDBvajdzS09XZkJTb3Yxd1Ft?=
 =?utf-8?B?VFJ3RzVVWGQyQVpsc2RaSGhFTzNBa0tOcnFnK25ocnZNUzBoTXpielcwR21H?=
 =?utf-8?B?ZkJLUkF5ZTBLNkxBbTNpTGdZRmxuVlJQN0pWTlBLZTAwUkRkZ2dvNVhBMy9C?=
 =?utf-8?B?alUzN3p4VWhvSjlTWVhTcEkyOG83MDdhQkhIVzdBSndVa3h0V3Fsbkk4YVls?=
 =?utf-8?B?bFlDbkJNK28zUHZGVXIyb0RqdElNTmpjaFJ6TWFqUEVTMk9MdFBkSnFId2Yr?=
 =?utf-8?B?Ykd2elZHNGRTSU5WNjJ0TkU0bUVEbG5kTEE2TGZLaG52V09Lby9lZ0tCZ1Bh?=
 =?utf-8?B?aXhmaUxxdFBjQk10TE5mMloxeXJPeXROeUZBNU8yQmZ0OTZ3MWpDQnZQRERO?=
 =?utf-8?B?ZW5OajhzZzFLdE01U0VzWG9WKzdPRmljaGdLdWo2NHJmb0VZUEx0cjRhekJu?=
 =?utf-8?B?b3BhOHRmU0RFMWZ1K2p5cnlKVGtxTHQ1U0pwK1A4ZzJBV3pTTi9wZm44Q3Yz?=
 =?utf-8?B?VnI1a3pqajBPaktLa2haZzFsYkxQVXVZb2krRFlPd2pzVlNBem1aaml6c21n?=
 =?utf-8?B?cUN5ZlhwK1B0a0FOdEFRSk51MUZBM21DNllIQ1NEaXVOVkdkTFNWUTRwTmNX?=
 =?utf-8?Q?nsDEMuYW7yx3xov/Oc77FW4ZTzlZp/f+D5ocf3e?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f615c4b3-39ed-497f-fdf8-08d92b4cebac
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 13:46:02.2217
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4/8CPiVQDKpy0CvmqX8Xqz5hZ4zavYwdoXM9rc2Nnaq13MhcgLj/MaNKuDCEuIlxOOIqk/TO0lTgewZO2mx9BQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5472

On 09.06.2021 15:18, Daniel Kiper wrote:
> On Wed, May 19, 2021 at 04:35:00PM +0200, Jan Beulich wrote:
>> On 19.05.2021 14:48, Daniel Kiper wrote:
>>> On Wed, May 19, 2021 at 11:29:43AM +0200, Jan Beulich wrote:
>>>> Also not sure what to do with Dwarf debug info, which just recently
>>>> we managed to avoid needing to strip unconditionally.
>>>
>>> I think debug info may stay as is. Just the Multiboot2 header should
>>> not cover it if it is not needed.
>>
>> You did say that .bss is expected to be last, which both .reloc and
>> debug info violate.
> 
> The .bss section has to be the last one in memory from the Multiboot2
> protocol's point of view. However, nothing, AFAICT, forbids having
> something behind it in the file, provided, of course, that the data at
> the end of the file is ignored when the image is loaded via the
> Multiboot2 protocol.

Well, debug info can be ignored. If MB2 continues to work like it does
today, then .reloc would also never be touched. Feels a little fragile,
but might be okay then.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 13:50:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 13:50:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139398.257784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqybb-0007LM-Oz; Wed, 09 Jun 2021 13:50:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139398.257784; Wed, 09 Jun 2021 13:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lqybb-0007LF-M0; Wed, 09 Jun 2021 13:50:51 +0000
Received: by outflank-mailman (input) for mailman id 139398;
 Wed, 09 Jun 2021 13:50:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N7Um=LD=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1lqyba-0007Jc-1m
 for xen-devel@lists.xen.org; Wed, 09 Jun 2021 13:50:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8213aecf-b1a1-4682-8ef0-ae9e0af04fc7;
 Wed, 09 Jun 2021 13:50:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lqybO-00009C-7F; Wed, 09 Jun 2021 13:50:38 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lqybO-0000fZ-5i; Wed, 09 Jun 2021 13:50:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8213aecf-b1a1-4682-8ef0-ae9e0af04fc7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=QfKpnKz4ByVbxkzhNuZ4GYqpHZ1G4uUiNnR6er0r7Y8=; b=LSRRBAv1p7m1yD14lrjQ0rxVIq
	wTeDtQQhDgcoJWexRe8wrI1fKT/iRNMVCGoAja7Wt6ae+OqgRUJVidxppiZKG9r2SMdbdMQnvQLir
	s0fgZq21BPcJwcrvi/VRm2FB94kus3YqwKQL+8m39NOo+XHUhHjLUSEetR5yA3ghYmdM=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 375 v3 (CVE-2021-0089,CVE-2021-26313) -
 Speculative Code Store Bypass
Message-Id: <E1lqybO-0000fZ-5i@xenbits.xenproject.org>
Date: Wed, 09 Jun 2021 13:50:38 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

      Xen Security Advisory CVE-2021-0089,CVE-2021-26313 / XSA-375
                              version 3

                    Speculative Code Store Bypass

UPDATES IN VERSION 3
====================

Added additional CVE, as Intel and AMD allocated different ones.

ISSUE DESCRIPTION
=================

Modern superscalar processors may employ sophisticated decoding and
caching of the instruction stream to improve performance.  However, a
consequence is that self-modifying code updates may not take effect
instantly.

Whatever the architectural guarantees, some CPUs have microarchitectural
behaviour whereby the stale instruction stream may be speculatively
decoded and executed.

Speculation of this form can suffer from type confusion in registers,
and potentially leak data.

For more details, see:
  https://www.vusec.net/projects/fpvi-scsb
  https://www.amd.com/en/corporate-product-security-bulletin-amd-sb-1003
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/speculative-code-store-bypass.html
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/floating-point-value-injection.html
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#scsb
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#fvpi

IMPACT
======

An attacker might be able to infer the contents of arbitrary host
memory, including memory assigned to other guests.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Whether a CPU is potentially vulnerable depends on its
microarchitecture.  Consult your hardware vendor.

Xen running on ARM does not have runtime self-modifying code, so is
believed to be not vulnerable, irrespective of any hardware
susceptibility.

Xen running on x86 does have runtime self-modifying code as part of
emulation, and is believed to be potentially vulnerable.

Xen is not vulnerable if retpoline or lfence mitigations for Spectre v2
protection are active.  Protections depend on compiler support (as
indicated by INDIRECT_THUNK), and a runtime setting (BTI-Thunk):

  # xl dmesg | grep -e INDIRECT_THUNK -e BTI-Thunk
  (XEN)   Compiled-in support: INDIRECT_THUNK SHADOW_PAGING
  (XEN)   Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: IBRS+ SSBD-, Other: SRB_LOCK+ IBPB L1D_FLUSH VERW BRANCH_HARDEN

BTI-Thunk as either RETPOLINE or LFENCE prevents the vulnerability.
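
As a sketch, the active thunk can be extracted from a captured `xl
dmesg` line like the one shown above (the sample line below is
illustrative; on a live system the line would come from `xl dmesg`
itself):

```shell
# Sample settings line, as printed by Xen at boot (illustrative copy):
line='(XEN)   Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: IBRS+ SSBD-'

# Pull out the thunk name; RETPOLINE or LFENCE means SCSB is mitigated,
# JMP means it is not.
thunk=$(printf '%s\n' "$line" | sed -n 's/.*BTI-Thunk \([A-Z]*\).*/\1/p')
echo "$thunk"
```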

MITIGATION
==========

If Spectre v2 support is compiled in but the JMP thunk is in use by
default, RETPOLINE or LFENCE can be selected on the Xen command line
with `spec-ctrl=bti-thunk=retpoline` or `spec-ctrl=bti-thunk=lfence`.

CREDITS
=======

This issue was discovered by Enrico Barberis, Hany Ragab, Herbert Bos,
and Cristiano Giuffrida from the VUSec group at VU Amsterdam.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.  Note that
in 4.13 and newer the patch will only take effect when the
SPECULATIVE_HARDEN_BRANCH hypervisor config option is enabled.  4.12 and
older do not have such an option, and the change will take effect
unconditionally.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa375.patch           xen-unstable - 4.14.x
xsa375-4.13.patch      Xen 4.13.x
xsa375-4.12.patch      Xen 4.12.x - 4.11.x

$ sha256sum xsa375*
367d5bb97c942b9f744a57645df87148772c0879de6f351f36f88147f3958e83  xsa375.meta
301ef80da837bc2af36a0958f35f42f4d267b20ec6e91ae5faf2616167ef49f8  xsa375.patch
dc024daf17242b6477a16a349754a94b2b25cbbfd8c14475741b778710a44c93  xsa375-4.12.patch
f70511d843c6617b932da11ffe857e2e3aa3834ccff07d4d0beba90d63a3dae2  xsa375-4.13.patch
$

NOTE CONCERNING CVE-2021-0086 / CVE-2021-26314
==============================================

Floating Point Value Injection (FPVI) was discovered and disclosed in
the same research as SCSB.  Xen on x86 does in some cases emulate
floating point operations with guest provided inputs, but does not have
subsequent control flow dependent on results, transient or otherwise, of
the operation.

Therefore, we believe Xen is not vulnerable to FPVI, irrespective of any
hardware susceptibility.

NOTE CONCERNING MULTIPLE CVES
=============================

Intel and AMD allocated different CVEs for SCSB and FPVI.  We have
included both in this advisory.  The allocations are as follows:

  Issue | Intel         | AMD
  ------+---------------+---------------
  SCSB  | CVE-2021-0089 | CVE-2021-26313
  FPVI  | CVE-2021-0086 | CVE-2021-26314

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmDAxVYMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZfKoH/3oVrY0exNwvxp18bXOCYUrzIUYwaXDYCPt3S4GX
JIEZ5Em1SPKEOfexGfjjul6WTiLXQYVof2gx1gWU06ENafEKqoRVJMTryL2Yfi63
IVUifr2lILnYouuIXk+dGSzPmhg9iZ+HwRseNQHwcrRzJnW16VNijWnn74JwfSAV
AWn1inVKriUXJYCTJBBRraQiHMzrDelOo+qB5pNIJHIMtpAK3N1EfkIJFJ0Xe9gl
iKfn+j66CuZorj83bpj5RvSOjgEJiKuMZsKYXK8TPJK6OLR+fEDNx79mHzh1tl2g
VBZOYxXHvTE+SlZwCJotGQ7g3tQJ0JwACPdzvQ6if+xh2N0=
=o800
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa375.meta"
Content-Disposition: attachment; filename="xsa375.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIsCiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICJiMWU0NmJjMzY5YmI0OTBiNzIxYzc3ZjE1ZDI1ODNiYmY0
NjYxNTJkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIsCiAgICAgICAgICAgIDM3MwogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzc1LTQuMTMucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjU5ODQ5MDViMjYzOGRmODdhMDI2MmQxZWU5
MWYwYTZlMTRhODZkZjYiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAg
ICAgICAgIDM3MiwKICAgICAgICAgICAgMzczCiAgICAgICAgICBdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNzUtNC4xMy5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMjg0MTMyOTM4OTAwY2U4YzNi
MTFiYWJmNzI1NWY1YzZkYmIyMTcxNiIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM3
NS00LjEzLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGYwYjJkNDkz
NzY4NjVkNDk2ODBmMDZjNTJiNDUxZmFiY2UzYmI1IiwKICAgICAgICAgICJQ
cmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3Mwog
ICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAg
ICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyODBkNDcy
ZjRmY2EwNzBhMTAzNzdlMzE4ZDkwY2FiZmMyNTQwODEwIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3
MwogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImFh
NzdhY2MyODA5OGQwNDk0NWFmOTk4ZjNmYzBkYmQzNzU5YjViNDEiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM3MiwKICAgICAgICAg
ICAgMzczCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNzUucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa375.patch"
Content-Disposition: attachment; filename="xsa375.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDg4ODk1MDlkMmEuLjExNDY3YTFlM2EgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTEzOCw2ICsxMzgs
OCBAQCBzdGF0aWMgaW9fZW11bF9zdHViX3QgKmlvX2VtdWxfc3R1Yl9zZXR1
cChzdHJ1Y3QgcHJpdl9vcF9jdHh0ICpjdHh0LCB1OCBvcGNvZGUsCiAgICAg
LyogUnVudGltZSBjb25maXJtYXRpb24gdGhhdCB3ZSBoYXZlbid0IGNsb2Ji
ZXJlZCBhbiBhZGphY2VudCBzdHViLiAqLwogICAgIEJVR19PTihTVFVCX0JV
Rl9TSVpFIC8gMiA8IChwIC0gY3R4dC0+aW9fZW11bF9zdHViKSk7CiAKKyAg
ICBibG9ja19zcGVjdWxhdGlvbigpOyAvKiBTQ1NCICovCisKICAgICAvKiBI
YW5keSBmdW5jdGlvbi10eXBlZCBwb2ludGVyIHRvIHRoZSBzdHViLiAqLwog
ICAgIHJldHVybiAodm9pZCAqKXN0dWJfdmE7CiAKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIveGVuL2Fy
Y2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXggYzI1ZDg4
ZDBkOC4uZjQyZmYyYTgzNyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTEyNTcsNiArMTI1Nyw3IEBA
IHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQzMl90IGVj
LCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3QsIGNvbnN0
cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAgICAgc3R1
Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tlbikgeyAu
cmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGluZSA9IF9f
TElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hpbmcgY29z
dCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NTQiAqLyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgYXNt
IHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0dWJdXG5c
dCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5M
cmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlvbiAuZml4
dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa375-4.12.patch"
Content-Disposition: attachment; filename="xsa375-4.12.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGFzbSB2b2xhdGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiAp
OyAvKiBTQ1NCICovCisKICAgICAvKiBIYW5keSBmdW5jdGlvbi10eXBlZCBw
b2ludGVyIHRvIHRoZSBzdHViLiAqLwogICAgIHJldHVybiAodm9pZCAqKXN0
dWJfdmE7CiB9CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2X2VtdWxh
dGUveDg2X2VtdWxhdGUuYyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94
ODZfZW11bGF0ZS5jCmluZGV4IGJiYTZkZDAxODcuLmNkMTIzNDkyYTYgMTAw
NjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCkBAIC0xMDkzLDYgKzEwOTMsNyBAQCBzdGF0aWMgaW5saW5lIGludCBt
a2VjKHVpbnQ4X3QgZSwgaW50MzJfdCBlYywgLi4uKQogIyBkZWZpbmUgaW52
b2tlX3N0dWIocHJlLCBwb3N0LCBjb25zdHJhaW50cy4uLikgZG8geyAgICAg
ICAgICAgICAgICAgICAgXAogICAgIHN0dWJfZXhuLmluZm8gPSAodW5pb24g
c3R1Yl9leGNlcHRpb25fdG9rZW4pIHsgLnJhdyA9IH4wIH07ICAgICAgICAg
XAogICAgIHN0dWJfZXhuLmxpbmUgPSBfX0xJTkVfXzsgLyogVXRpbGl0eSBv
dXR3ZWlnaHMgbGl2ZXBhdGNoaW5nIGNvc3QgKi8gXAorICAgIGFzbSB2b2xh
dGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiApOyAvKiBTQ1NCICovICAg
ICAgICAgICAgICAgICAgXAogICAgIGFzbSB2b2xhdGlsZSAoIHByZSAiXG5c
dElORElSRUNUX0NBTEwgJVtzdHViXVxuXHQiIHBvc3QgIlxuIiAgICAgICAg
XAogICAgICAgICAgICAgICAgICAgICIuTHJldCU9OlxuXHQiICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAg
ICAgICAgICIucHVzaHNlY3Rpb24gLmZpeHVwLFwiYXhcIlxuIiAgICAgICAg
ICAgICAgICAgICAgICAgXAo=

--=separator
Content-Type: application/octet-stream; name="xsa375-4.13.patch"
Content-Disposition: attachment; filename="xsa375-4.13.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGJsb2NrX3NwZWN1bGF0aW9uKCk7IC8qIFNDU0IgKi8KKwogICAg
IC8qIEhhbmR5IGZ1bmN0aW9uLXR5cGVkIHBvaW50ZXIgdG8gdGhlIHN0dWIu
ICovCiAgICAgcmV0dXJuICh2b2lkICopc3R1Yl92YTsKIH0KZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIv
eGVuL2FyY2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXgg
YmJhNmRkMDE4Ny4uY2QxMjM0OTJhNiAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTExNzIsNiArMTE3
Miw3IEBAIHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQz
Ml90IGVjLCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3Qs
IGNvbnN0cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgc3R1Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tl
bikgeyAucmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGlu
ZSA9IF9fTElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hp
bmcgY29zdCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NT
QiAqLyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgYXNtIHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0
dWJdXG5cdCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAg
ICAgIi5McmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlv
biAuZml4dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 15:36:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 15:36:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139449.257834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0F9-0001lW-BT; Wed, 09 Jun 2021 15:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139449.257834; Wed, 09 Jun 2021 15:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0F9-0001lP-7G; Wed, 09 Jun 2021 15:35:47 +0000
Received: by outflank-mailman (input) for mailman id 139449;
 Wed, 09 Jun 2021 15:35:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lr0F7-0001lJ-Sn
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 15:35:46 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b589b9fa-90c2-4678-acee-8134c2867c40;
 Wed, 09 Jun 2021 15:35:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b589b9fa-90c2-4678-acee-8134c2867c40
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623252944;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ApYU524JYYCvOXZMsb2y7UfVqOgfoq8wLQAUB5ZTHJ8=;
  b=Z+Mfk9ktKhRIRbLWnCdS+vHplu69mFnlUEIxUn5qNeartEyEXYNQA7S7
   +mHvLgOqZza1V9YXx4AiG2RdtQFDhC3NYwL9UEoy9nnTGkT7qgi82Jiol
   eyre4q04dQAtLX9hqnUk0EURDRzYwX0xUwoazRqCQT3RAD5hGK/i5EIIR
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ub/3hJS9C74d5+z06jTbN0064RRUmyK8p6f93ElGWDwWkzJ5AdVkXFWXcQ6Ch9/CvgfhFkYi6i
 VOiV1IbcfpGLcdanEXiK3Jhrp9Unq6FTpRPfmqlk5HpZvpCuK/Vy9HDrV0NBlnO+uJktx83PGX
 Sl/pmsUd+KdQhxg/HH2FaRPbvdOIaLs627N7gZ4x3vUV7poE319K2/iLu0wtEsFZf45/GL0u7P
 Gw6JuFcfm7vVYw328HMTDJpCMGCzPn+Q1Y5zoT5en26oPUBEQeONl0AW0Ka0DOrUpyPZ5gG7xf
 1AY=
X-SBRS: 5.1
X-MesageID: 47323016
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,261,1616472000"; 
   d="scan'208";a="47323016"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T7mtJtcU2Lhj8ptOAH6Kr0cgLJCPRF568CV8I+EPeO3mr7niDPvuJ5f6Ju4ALJYcRUuTMoOp0sUV8J6qVWAcB4npjK2LQOUK0TzXiElU0h55CgX4+i+PVDFRuHisptpNmwgQ7/asYKMWN9SHeYjZnwUzvb9hrTFOzpwam0zT3rGt7npCkDetj/a3dAB9fIs2ffesqInDs2WjkQhl9gDtzqrltcHb7T5LJKZSJJe7k6DLyudkPEHdfblNvA7YSAWfcRSCxO5GAq1mhFtbeYJaT/C8z2wMD785PwAnMJwAyxV1kPE2B2V0lMvK5wCqGJxzdVfFmxxiLKFx3ZKrO9It6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ApYU524JYYCvOXZMsb2y7UfVqOgfoq8wLQAUB5ZTHJ8=;
 b=ZmQJctTCB1qZjo5C+kD0JLIuLtdzyWbsU/cpaNVHaUHGbNhGjHHbZxGOiN6J/+gy+VKrn19jWv5APuFSVuRq3OoHSH79REGvhnTMQkX20vcGyYDNsrUJezO8pzjJ0ujQ18seuzgyHVC+WoUYcn1rNFH9WihPQmLZOOH9cQRB+/gHtz8kyeo+9jrSjaoVyV51fWKeHSgo9DuwMMTBV5ar0tlz0rySsmi90qg0ZU52yybRZVivpq/Buf5NSsQPSTp7JIHXHNVyJ7LItihOl/82Wc7eMDuqugokmNlLEc1JOuo6mHRjDWpYJdgGzLreytpsESMWCa4QmIX5YHwJ0CC+6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ApYU524JYYCvOXZMsb2y7UfVqOgfoq8wLQAUB5ZTHJ8=;
 b=MZFggs2PrN/NOV9U2lHdMVNoommNz5KhVB3wP/iOzalINHk3xDr3O5W+GkzL6dHT8t03h238EuPB8vJgUoq29W9HtKFvOWQ2NLWtedAChIEhaJ24tYnbOCVgPSUqCKsc2yPCPR4OPYtl6ZI6GbF/LDK+wTadSb1GVAdaUZOEAl0=
Subject: Re: [PATCH v2 5/6] tools/libs: move xc_core* from libxenctrl to
 libxenguest
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: <julien@xen.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210604060214.14924-1-jgross@suse.com>
 <20210604060214.14924-6-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <33925e6a-b416-75e8-421c-f46b7eed5d00@citrix.com>
Date: Wed, 9 Jun 2021 16:35:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210604060214.14924-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0066.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::30) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f34d4eff-1261-4818-8055-08d92b5c3c3c
X-MS-TrafficTypeDiagnostic: BYAPR03MB4614:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4614425C4F3E1FCE25163147BA369@BYAPR03MB4614.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(376002)(346002)(136003)(396003)(5660300002)(6486002)(31686004)(66946007)(478600001)(38100700002)(8936002)(8676002)(6666004)(4326008)(31696002)(316002)(16576012)(86362001)(26005)(54906003)(16526019)(186003)(66476007)(66556008)(956004)(2616005)(53546011)(2906002)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: f34d4eff-1261-4818-8055-08d92b5c3c3c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 15:35:40.0267
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uqCXSEdYFkG6begcHZ3lx4TPiAAfONcGpQjELelizK9mjtV1Whbntuzf80rOsgCfjuVklq+0AkqaDL76YZZP8DYCnPv7A3XKHngglSvgXWA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4614
X-OriginatorOrg: citrix.com

On 04/06/2021 07:02, Juergen Gross wrote:
> The functionality in xc_core* should be part of libxenguest instead
> of libxenctrl. Users are already either in libxenguest, or in xl.
> There is one single exception: xc_core_arch_auto_translated_physmap()
> is being used by xc_domain_memory_mapping(), which is used by qemu.
> So leave the xc_core_arch_auto_translated_physmap() functionality in
> libxenctrl.
>
> This will make it easier to merge common functionality of xc_core*
> and xg_sr_save*.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Wei Liu <wl@xen.org>

This change is incomplete.

$ git grep xc_core\\.h
.gitignore:131:tools/libs/guest/xc_core.h
docs/misc/dump-core-format.txt:144:The note descriptors are defined in tools/libxc/xc_core.h
tools/libs/guest/xg_core.c:40: *  |    and descriptors are defined in xc_core.h.           |
xen/include/public/elfnote.h:244: * See tools/libxc/xc_core.h for more information.
xen/include/public/elfnote.h:252: * See tools/libxc/xc_core.h for more information.
xen/include/public/elfnote.h:261: * See tools/libxc/xc_core.h for more information.
xen/include/public/elfnote.h:269: * See tools/libxc/xc_core.h for more information.

Lots of stale references now.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 15:45:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 15:45:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139457.257846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0Oq-0003Gn-9y; Wed, 09 Jun 2021 15:45:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139457.257846; Wed, 09 Jun 2021 15:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0Oq-0003Gg-6i; Wed, 09 Jun 2021 15:45:48 +0000
Received: by outflank-mailman (input) for mailman id 139457;
 Wed, 09 Jun 2021 15:45:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YCjx=LD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lr0Oo-0003Ga-Gq
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 15:45:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed710c9e-f1f3-4bb4-9310-d0d98b163f65;
 Wed, 09 Jun 2021 15:45:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed710c9e-f1f3-4bb4-9310-d0d98b163f65
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623253544;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DrCUoRFFTMoSoL1c19ehNZDOPYbBMFwcnbIx69FHGaM=;
  b=VvwFewidmOrpHegReps8qZyv5wEFNRjO+OZXiIENsvt2yy+F6uxzFiuN
   5HU/9doJjrKlVOTqQzLKmKGXn+Gn/tUWnLh8cgYZhAY0xGEWrknkx6HC+
   nTdnH0iV9ENrtakISuDb4ht4LojfQz11NVN7nTXMxPLby8W30QmqxrX0C
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45738211
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,261,1616472000"; 
   d="scan'208";a="45738211"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZC5QUXtQ/wDtkBplkxLxp2aU0VAZlci8WoSyD5pw8uo=;
 b=A5Ht7RCNpFkfA4hOVd/dPTXofNZmfTxjs5QWH75ISuG24QCPMemG7DO73Xm90PHSsgVsT1S+oWuC72po228LXrsJXr3ISJLgoP7gGLi0W6YZFKcUF4qOiNvXpiOPMFigKyxKxze5+ZBuVeyXHbMWdGERjRbLyD22g8+i2vpFNSQ=
Subject: Re: [PATCH] x86: please Clang in arch_set_info_guest()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <c758de2b-c453-4dba-ddc0-0c9548172c6e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c032362e-51cb-00fe-dfe9-782bd4600163@citrix.com>
Date: Wed, 9 Jun 2021 16:45:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <c758de2b-c453-4dba-ddc0-0c9548172c6e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0294.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:196::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 15bd4da3-a576-4ebf-de89-08d92b5da1e1
X-MS-TrafficTypeDiagnostic: BY5PR03MB5080:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB50801ED461E398F8C24A9794BA369@BY5PR03MB5080.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 15bd4da3-a576-4ebf-de89-08d92b5da1e1
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 15:45:40.0581
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YgJTK3q7Z5ws6tL7ZNI4VOg2FhSMVk//5SHaIQLvozbKtAKbtVpJycJxQnFIYSYAdoWtITXxLCYdB3KdtiNvIhHrfCludLJ6JWAw0V7jpP0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5080
X-OriginatorOrg: citrix.com

On 09/06/2021 14:14, Jan Beulich wrote:
> Clang 10 reports
>
> domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
>     if ( !compat )
>          ^~~~~~~
> domain.c:1334:34: note: uninitialized use occurs here
>     cr3_page = get_page_from_mfn(cr3_mfn, d);
>                                  ^~~~~~~
> domain.c:1328:5: note: remove the 'if' if its condition is always true
>     if ( !compat )
>     ^~~~~~~~~~~~~~
> domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
>     mfn_t cr3_mfn;
>                  ^
>                   = 0
> domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
>         if ( !compat )
>              ^~~~~~~
> domain.c:1211:9: note: uninitialized use occurs here
>         fail |= v->arch.pv.gdt_ents != c(gdt_ents);
>         ^~~~
> domain.c:1189:9: note: remove the 'if' if its condition is always true
>         if ( !compat )
>         ^~~~~~~~~~~~~~
> domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
>         bool fail;
>                  ^
>                   = false
>
> despite this being a build with -O2 in effect, and despite "compat"
> being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
> defined, as it gets set at the top of the function from the result of
> is_pv_32bit_domain().
>
> Re-arrange the two "offending" if()s such that when COMPAT=n the
> respective variables will be seen as unconditionally initialized. The
> original aim was to have the !compat cases first, though.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I wonder how many more there are to come.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/317744453

Everything seems ok now.  The failure is a known arm32 randconfig issue
which still hasn't been fixed, and is unrelated to this.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 16:07:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 16:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139465.257857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0jg-00066p-15; Wed, 09 Jun 2021 16:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139465.257857; Wed, 09 Jun 2021 16:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr0jf-00066i-UM; Wed, 09 Jun 2021 16:07:19 +0000
Received: by outflank-mailman (input) for mailman id 139465;
 Wed, 09 Jun 2021 16:07:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr0je-00066Y-Al; Wed, 09 Jun 2021 16:07:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr0je-000397-16; Wed, 09 Jun 2021 16:07:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr0jd-0000ri-QQ; Wed, 09 Jun 2021 16:07:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr0jd-00056L-Pp; Wed, 09 Jun 2021 16:07:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=8xPYp2kOna25W//79kbRC2uua0GZQngaxmHB3QVNzgs=; b=LnpEFekoXBWMSZ+LuMlbAWDo6+
	vGTmfr16ePocbK2rPS3DFMPBaA9WuD8kejO3OFSruwe47n7ClrOcs4t7TVvbiAk+tG2n8M8zXZYRk
	xHmU09QQatQkCaqdu03yjs08a1K/NXrOr6hoM9azFCs4DuQZRVdSOwPJq/Bd7h1eiQNA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [ovmf bisection] complete test-amd64-i386-xl-qemuu-ovmf-amd64
Message-Id: <E1lr0jd-00056L-Pp@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 16:07:17 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162575/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
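
The search recorded below is essentially a bisection over the ovmf
revision range between the basis pass and the failure. A minimal,
generic sketch of the idea (not osstest's actual implementation, which
also handles multi-tree tuples, retries and an irregular revision
graph):

```python
def bisect(revs, fails):
    """Return the first failing revision in an ordered list, given
    that revs[0] passes, revs[-1] fails, and failures are monotonic
    (every revision from the culprit onwards fails)."""
    lo, hi = 0, len(revs) - 1       # revs[lo] passes, revs[hi] fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(revs[mid]):        # test the midpoint revision
            hi = mid                # failure moves the upper bound down
        else:
            lo = mid                # pass moves the lower bound up
    return revs[hi]                 # first revision known to fail
```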

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/162575.bisection-summary --basis-template=162359 --blessings=real,real-bisect,real-retry ovmf test-amd64-i386-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 162542 fail [host=huxelrebe1] / 162359 [host=elbling1] 162341 [host=pinot1] 162338 [host=albana0] 162334 [host=albana1] 162326 [host=fiano0] 162288 [host=pinot0] 162271 [host=chardonnay0] 162259 [host=elbling0] 162256 [host=chardonnay1] 162217 [host=huxelrebe0] 162131 [host=fiano1] 162113 ok.
Failure / basis pass flights: 162542 / 162113
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 https://github.com/tianocore/edk2.git#1fb80369b72c6ba7f80b442e4acf771a6dd56ee7-558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c743\
 7ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#aa77acc28098d04945af998f3fc0dbd3759b5b41-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From git://cache:9419/git://xenbits.xen.org/osstest/seabios
 - [deleted]         (none)     -> origin/0.5.1-stable
 - [deleted]         (none)     -> origin/0.6.1-stable
 - [deleted]         (none)     -> origin/1.10-stable
 - [deleted]         (none)     -> origin/1.11-stable
 - [deleted]         (none)     -> origin/1.12-stable
 - [deleted]         (none)     -> origin/1.6.3-stable
 - [deleted]         (none)     -> origin/1.7.2-stable
 - [deleted]         (none)     -> origin/1.7.3-stable
 - [deleted]         (none)     -> origin/1.7.5-stable
 - [deleted]         (none)     -> origin/1.8-stable
 - [deleted]         (none)     -> origin/1.9-stable
 - [deleted]         (none)     -> origin/master
 * [new branch]      xen-tested-master -> origin/xen-tested-master
Loaded 12601 nodes in revision graph
Searching for test results:
 162113 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162131 [host=fiano1]
 162217 [host=huxelrebe0]
 162256 [host=chardonnay1]
 162259 [host=elbling0]
 162271 [host=chardonnay0]
 162288 [host=pinot0]
 162326 [host=fiano0]
 162334 [host=albana1]
 162338 [host=albana0]
 162341 [host=pinot1]
 162359 [host=elbling1]
 162368 []
 162371 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162436 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162542 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162553 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162555 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162558 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162559 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162560 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c2f24ba3218ae91a8d5a1a31c31dad3417850d0c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162562 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2833589ad054ee51fadc5c408de4f028ddf485e3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162564 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162565 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162567 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162569 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162573 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162575 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 162113 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x55b70f121ea0) HASH(0x55b70f1281e0) HASH(0x55b70f11f898) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x55b70f113098) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x55b70f106a40) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41, results HASH(0x55b70f0ff058) HASH(0x55\
 b70f115248) Result found: flight 162371 (fail), for basis failure (at ancestor ~5477)
 Repro found: flight 162553 (pass), for basis pass
 Repro found: flight 162555 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
No revisions left to test, checking graph state.
 Result found: flight 162564 (pass), for last pass
 Result found: flight 162565 (fail), for first failure
 Repro found: flight 162567 (pass), for last pass
 Repro found: flight 162569 (fail), for first failure
 Repro found: flight 162573 (pass), for last pass
 Repro found: flight 162575 (fail), for first failure

Revision graph left in /home/logs/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
162575: tolerable ALL FAIL

flight 162575 ovmf real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162575/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 16:34:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 16:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139484.257903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr1A6-0001Cy-P2; Wed, 09 Jun 2021 16:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139484.257903; Wed, 09 Jun 2021 16:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr1A6-0001Cr-Lx; Wed, 09 Jun 2021 16:34:38 +0000
Received: by outflank-mailman (input) for mailman id 139484;
 Wed, 09 Jun 2021 16:34:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1A5-0001Ch-Ci; Wed, 09 Jun 2021 16:34:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1A5-0003aw-8l; Wed, 09 Jun 2021 16:34:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1A4-0001y9-R1; Wed, 09 Jun 2021 16:34:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1A4-00038k-QX; Wed, 09 Jun 2021 16:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MB/AwqVOA5Xgqx7SFM1EDpxI3AJXtCzl+XAkK2TuNKA=; b=gqD/ZKOhtWQHr312YGPR0lAs/w
	naD+SU08H4eOllsacpCakgL+3YnAngfUpOIadVj0/yLdBZNwta4UVk5gtheOkfDxUaBmaaPb2Sfd/
	ncPYvbUW1g9gRPSVi+1JMhugK8/rqI48OIkJ6yAcH4oFNbdmtZ7/AJ9+wtPLiDeeQnD8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162548-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 162548: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ef32c7afa2731b758226d6e10a1e489b1a15fc41
X-Osstest-Versions-That:
    xen=b1e46bc369bb490b721c77f15d2583bbf466152d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 16:34:36 +0000

flight 162548 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162548/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 161769
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 161769
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161769
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161769
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161769
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161769
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161769
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161769
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161769
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161769
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161769
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161769
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161769
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ef32c7afa2731b758226d6e10a1e489b1a15fc41
baseline version:
 xen                  b1e46bc369bb490b721c77f15d2583bbf466152d

Last test of basis   161769  2021-05-04 13:07:00 Z   36 days
Testing same since   162548  2021-06-08 18:37:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b1e46bc369..ef32c7afa2  ef32c7afa2731b758226d6e10a1e489b1a15fc41 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 17:28:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 17:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139494.257918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr1za-00065u-Ru; Wed, 09 Jun 2021 17:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139494.257918; Wed, 09 Jun 2021 17:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr1za-00065n-Ox; Wed, 09 Jun 2021 17:27:50 +0000
Received: by outflank-mailman (input) for mailman id 139494;
 Wed, 09 Jun 2021 17:27:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1zZ-00065d-SY; Wed, 09 Jun 2021 17:27:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1zZ-0004X6-L8; Wed, 09 Jun 2021 17:27:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1zZ-0006tI-Dl; Wed, 09 Jun 2021 17:27:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr1zZ-00055Z-DG; Wed, 09 Jun 2021 17:27:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fVbfUwJcj3TE/WulPzy+3hZcoCS4eYRPcQ2qtHNMJ2w=; b=NeyMw/SZkv5EmdmsMwJvklgq75
	GdNBzYxJjgwillKTOT1+nS8sXDOgIkC/qzMlbC4EqkPxFTEOvJ4fA13ETyvdo+tG+/YY56wd4wbWE
	CuYKE6/uzt9t2dPmO/fWpgkTR4+G4/wz9xQcyShvxzv9T8BFiQideTJYlbHpoHb5j0cE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162574-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162574: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3e09045991cde360432bc7437103f8f8a6699359
X-Osstest-Versions-That:
    xen=f5035d480f7a7033f15765b67c19df86a8ef2c69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 17:27:49 +0000

flight 162574 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162574/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3e09045991cde360432bc7437103f8f8a6699359
baseline version:
 xen                  f5035d480f7a7033f15765b67c19df86a8ef2c69

Last test of basis   162572  2021-06-09 11:00:27 Z    0 days
Testing same since   162574  2021-06-09 14:00:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f5035d480f..3e09045991  3e09045991cde360432bc7437103f8f8a6699359 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 17:43:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 17:43:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139504.257934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr2Ej-0008LX-B7; Wed, 09 Jun 2021 17:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139504.257934; Wed, 09 Jun 2021 17:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr2Ej-0008LQ-88; Wed, 09 Jun 2021 17:43:29 +0000
Received: by outflank-mailman (input) for mailman id 139504;
 Wed, 09 Jun 2021 17:43:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1y+9=LD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lr2Eh-0008LI-Tw
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 17:43:27 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9d25d24-55dd-46d3-935c-dc50d445aae1;
 Wed, 09 Jun 2021 17:43:27 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6010F613BD;
 Wed,  9 Jun 2021 17:43:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9d25d24-55dd-46d3-935c-dc50d445aae1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623260606;
	bh=u29fOyrNfsPeTkxLnkA2fvyZ8AxZ0uuHL+SikhRqxs8=;
	h=From:To:Cc:Subject:Date:From;
	b=DbfpUOiJbTIxDrx5DAkmHIJOMG1fPhpMeaufcN0Cy/Ju6n1Q3C0ZWKD8JC5B4BNh5
	 BgIJyjtNC1K1kTRrfelLHzRG4/5hTUMk1lXvZ8vs/3bKTop+otPmqFdkB4w56qaaYu
	 6hzg9DqXmz/EgKhCWjFXfs4SxWVbmwMIMQoGSpSVQgiIkgfJTsiheY231VLcMdGoWW
	 PNfd2zAmE2BmKLZ1uceXLt5ZACwytjtyBu1+sUizpVGJzAZGoCDAsO2doBG7nxRMy1
	 HxpnI6zZR5BZC8CGjVP1kr9G6rQgm/AZ0iapIqmc4ZMPcS3IMkXlRhoUMLiWqJDmpw
	 MMy852tf8i9Rg==
From: Stefano Stabellini <sstabellini@kernel.org>
To: julien@xen.org
Cc: Volodymyr_Babchuk@epam.com,
	xen-devel@lists.xenproject.org,
	Bertrand.Marquis@arm.com,
	Michal.Orzel@arm.com,
	edgar.iglesias@xilinx.com,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH] xen/arm32: SPSR_hyp/SPSR
Date: Wed,  9 Jun 2021 10:43:24 -0700
Message-Id: <20210609174324.6621-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.

This fixes booting Xen/arm32 on QEMU.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/arch/arm/arm32/entry.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index f2f1bc7a31..4e421109db 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -170,7 +170,7 @@ ENDPROC(prepare_context_from_guest)
         mrc     CP32(r11, HSR)                 /* Save exception syndrome */
         str     r11, [sp, #UREGS_hsr]
 
-        mrs     r11, SPSR_hyp
+        mrs     r11, SPSR
         str     r11, [sp, #UREGS_cpsr]
 
         /*
@@ -395,7 +395,7 @@ return_to_hypervisor:
         ldr r11, [sp, #UREGS_pc]
         msr ELR_hyp, r11
         ldr r11, [sp, #UREGS_cpsr]
-        msr SPSR_hyp, r11
+        msr SPSR, r11
 #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
         /*
          * Hardening branch predictor may require to setup a different
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 17:53:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 17:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139511.257946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr2O3-0001Qd-8K; Wed, 09 Jun 2021 17:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139511.257946; Wed, 09 Jun 2021 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr2O3-0001QV-4v; Wed, 09 Jun 2021 17:53:07 +0000
Received: by outflank-mailman (input) for mailman id 139511;
 Wed, 09 Jun 2021 17:53:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lr2O1-0001QP-RL
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 17:53:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lr2O0-0004yL-KW; Wed, 09 Jun 2021 17:53:04 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lr2O0-000368-E9; Wed, 09 Jun 2021 17:53:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=N7fqOEJI6vy3tiWAIRn0BxUEzWFvHVPSCYtJGaFrZBg=; b=RElgDiIz73+mm9sODkKxqyyOjm
	AVNFG42+Dtuis6eIqq8p3GCcvZN2FRp27L49Do1BacUoUyXOLriYH551mJ0xGdrO9+qtDRg8TGAe0
	2IaRlcx09m2Nqg7YCqzFq7VgAS4f8/WVPD+mnJ0wyreTvmyR7N4kX31pZ8qMIS4GcA2A=;
Subject: Re: [PATCH] xen/arm32: SPSR_hyp/SPSR
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr_Babchuk@epam.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com, Michal.Orzel@arm.com, edgar.iglesias@xilinx.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210609174324.6621-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <712da7a7-2c1f-fd24-398d-27966335618a@xen.org>
Date: Wed, 9 Jun 2021 18:53:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210609174324.6621-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 09/06/2021 18:43, Stefano Stabellini wrote:
> SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
> trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.

Please provide a reference to the spec. This helps reviewers and/or 
future developers to figure out quickly where this comes from.

> 
> This fixes booting Xen/arm32 on QEMU.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

With the reference added:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 19:10:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 19:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139526.257968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr3aL-0008I6-0y; Wed, 09 Jun 2021 19:09:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139526.257968; Wed, 09 Jun 2021 19:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr3aK-0008Hz-TR; Wed, 09 Jun 2021 19:09:52 +0000
Received: by outflank-mailman (input) for mailman id 139526;
 Wed, 09 Jun 2021 19:09:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr3aJ-0008Hn-9n; Wed, 09 Jun 2021 19:09:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr3aJ-0006Td-3W; Wed, 09 Jun 2021 19:09:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr3aI-0003LS-P4; Wed, 09 Jun 2021 19:09:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr3aI-0007GR-OD; Wed, 09 Jun 2021 19:09:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KGI9/a00n4wFVC5omS/avH99VXPxQ3VUcR2Hhx/J/QU=; b=UQXOWUSv8bl72zxDjAwlvvaeIL
	1YFmjkTnO3vZd9C8DsSeUFqPiObrzAT4YOrMJXM1CtWP2iF7RBSBr7Jgh9AtkvpjnTXpdXIVY4pSe
	Jmc/hU6tourFVm2g7IAEt1IV4HUp/NOi4iUUl3bA98zQBCsiN5QnFo5dRSwSaDUJ8YlM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162550-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 162550: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9bd6416528f9080cead0f8c22441f5568dbd0bf3
X-Osstest-Versions-That:
    xen=ef8b2357d83442b5d2e7607379a935d4f8b35416
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 19:09:50 +0000

flight 162550 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162550/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail blocked in 162367
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162367
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162367
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162367
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162367
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162367
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162367
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162367
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162367
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162367
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162367
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162367
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9bd6416528f9080cead0f8c22441f5568dbd0bf3
baseline version:
 xen                  ef8b2357d83442b5d2e7607379a935d4f8b35416

Last test of basis   162367  2021-06-04 13:36:12 Z    5 days
Testing same since   162550  2021-06-08 18:37:00 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ef8b2357d8..9bd6416528  9bd6416528f9080cead0f8c22441f5568dbd0bf3 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 19:42:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 19:42:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139546.258024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr45h-0004Bb-0b; Wed, 09 Jun 2021 19:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139546.258024; Wed, 09 Jun 2021 19:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr45g-0004BU-Tx; Wed, 09 Jun 2021 19:42:16 +0000
Received: by outflank-mailman (input) for mailman id 139546;
 Wed, 09 Jun 2021 19:42:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j3Z5=LD=xilinx.com=edgar@srs-us1.protection.inumbo.net>)
 id 1lr45f-0004BO-So
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 19:42:16 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (unknown
 [40.107.237.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba333e9c-a3aa-4927-8c7c-83cb886729fc;
 Wed, 09 Jun 2021 19:42:14 +0000 (UTC)
Received: from BN9PR03CA0460.namprd03.prod.outlook.com (2603:10b6:408:139::15)
 by DM6PR02MB6793.namprd02.prod.outlook.com (2603:10b6:5:213::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Wed, 9 Jun
 2021 19:42:13 +0000
Received: from BN1NAM02FT041.eop-nam02.prod.protection.outlook.com
 (2603:10b6:408:139:cafe::27) by BN9PR03CA0460.outlook.office365.com
 (2603:10b6:408:139::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Wed, 9 Jun 2021 19:42:13 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 BN1NAM02FT041.mail.protection.outlook.com (10.13.2.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4195.18 via Frontend Transport; Wed, 9 Jun 2021 19:42:13 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2176.2; Wed, 9 Jun 2021 12:42:12 -0700
Received: from smtp.xilinx.com (172.19.127.95) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id
 15.1.2176.2 via Frontend Transport; Wed, 9 Jun 2021 12:42:12 -0700
Received: from [10.71.119.214] (port=52456 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <edgar@xilinx.com>)
 id 1lr45c-00032B-0N; Wed, 09 Jun 2021 12:42:12 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba333e9c-a3aa-4927-8c7c-83cb886729fc
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZrfnfA4j+5aeA6ipStugEk+R0/FsH0PZsluFpLtZelBEl6ow6eWEiYZDI1k/qrNty/ulg7182vvgCbQgRZPz4t0jHTRKIJrPzEwdTzjhXxzX5ZWCTJdhzyTu+ZGU1fyVnloJYmfEzU6AClR0RTrTwiXNnWWELFwPKo7F/q+fdmBDp8oR2ggQ1jaXDzlVYW4StStco0b0lvbRIXG71EwwzI9uA/fonnH1ff1aUGL/XTwok03voZppaA7UXo/m6zN5uAklkXRfb30VDY7lUeMOSP8gPnV9v0P+fVfuNipEuYKJWprL1sr0mnNNr5aquuxo5+WCNQCkfjs4F8qJdkxdrw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TT2wVoqIIhbCEe6+6/5VTaC1yHQtIarCslgk7GCNoMw=;
 b=ayWOQ3eZifvc4Id6+dEs/6GC7HjV69jgvxOZTsfyX2lHwsXYpx2rchyxgkMnT8cnprP91dAx4ReNE3o+SJrpVQO4ui7dLYIu9t6awnVIhpaK1qNUJ8ntjHRYm/gDE7KMAHD5cLGJLyTTSG7JVbGfEC7pfk4sMZRqsOtNPW6C8aVWYj6laybGJf885XPVvf/DLQtgyHy2zjurpjHNTxgpPmdhsM3hgqtxU7GPkTJf3MWq0q+ePYYnxLlg7c+dbx4BN//zGJ1iX91Aa1fNqVJtksfR/oZtd2IDE1tNNOxEIIPaKLMiIb6avUXnlJurspv+bO/sd4bYhpboyWGMi5IPCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=xen.org smtp.mailfrom=xilinx.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=xilinx.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TT2wVoqIIhbCEe6+6/5VTaC1yHQtIarCslgk7GCNoMw=;
 b=oRzBVgiZfaI11miNMPQjDFwiUtuI5kPW8AXkCnwPU2aZ+FFAoXbKE7WsmnBIQ+JjcYyN+PMkF6RfWLXwm1GcarzbvAJnfvsP6DeAC2UZCMsX664ZL5Ek+Y2lwX/2cfWgUIPHQiIKSkCNhC7bZWr8uQzuF89Q29+d4T/7jJaPlRI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=pass action=none header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com;
Date: Wed, 9 Jun 2021 21:42:09 +0200
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, <Volodymyr_Babchuk@epam.com>,
	<xen-devel@lists.xenproject.org>, <Bertrand.Marquis@arm.com>,
	<Michal.Orzel@arm.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] xen/arm32: SPSR_hyp/SPSR
Message-ID: <20210609194209.GE6590@toto>
References: <20210609174324.6621-1-sstabellini@kernel.org>
 <712da7a7-2c1f-fd24-398d-27966335618a@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <712da7a7-2c1f-fd24-398d-27966335618a@xen.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7f5c623a-b282-4be3-98e8-08d92b7eadd1
X-MS-TrafficTypeDiagnostic: DM6PR02MB6793:
X-Microsoft-Antispam-PRVS:
	<DM6PR02MB67937FE8E7D209DA11388BA0C2369@DM6PR02MB6793.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TGVehs2enPX3K0bxiRJrsyhLbTACeguAzVAXHc0jXmWl88u1fpOpxbaDntQ+EOYBcuisjkFMnUJV7ES7LtBxMag90i31WxwK+7U5XRiulso7jIhadDEYAIf9D7N4/FiDz/B3EVkfnwgaNd/62OUlD3Fauf/oCbdjsfDL8CPKxHKkmmNDnhj+Qqtu/rrELyqEitbxjrn32BThi3Hbw7oZb+1ralvUHZh5qPYgI3vYbswcrZ0yUDJJHezY2ETvG+AsfdUjowPPCVa5jHQHQZ1WT9TVPpAQrv3NIYgBglaJU3MiGkG3kAlWe4cMX0jZYzQDgfNHORZkXSABd16J/27JgZnubMFYBv/YfAn9/dcDE6LnSfO6y1IzO8/8mjrZry7PYph14WbuFLdBIdxZCOqLzPbZwrFH9uwL6VxkWX8imVLylPvIHLEjcsxkk6w5ULQ9OCWCjO52MBoIbLj8LE6H1LPE09H20jReqPwwNmZRO/J6+Cic9RJE90/yG6hB0ENA1A7HVd7TZ0JEMWY9CzAjEdK7UFWWiUZoRz8fX0xros4ZbnP8h58a4nWxUdJHZz1gTugdxb/FtsTTfAXSox8DTQ8apNhC+iqQPR3FJL4t8mxHojRATDRh2U6armrTIs0G6NHPKmpoykuJ0KOyAaY39sbxvEkaG43+9msVG4coy48=
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch01.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(7916004)(4636009)(396003)(376002)(136003)(346002)(39860400002)(46966006)(36840700001)(33716001)(47076005)(53546011)(9686003)(2906002)(1076003)(316002)(8676002)(26005)(82310400003)(4326008)(336012)(4744005)(107886003)(83380400001)(5660300002)(33656002)(82740400003)(36906005)(36860700001)(478600001)(7636003)(426003)(6916009)(6666004)(70586007)(356005)(9786002)(54906003)(186003)(8936002)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2021 19:42:13.0284
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f5c623a-b282-4be3-98e8-08d92b7eadd1
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch01.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN1NAM02FT041.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR02MB6793

On Wed, Jun 09, 2021 at 06:53:02PM +0100, Julien Grall wrote:
> Hi Stefano,
> 
> On 09/06/2021 18:43, Stefano Stabellini wrote:
> > SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
> > trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
> 
> Please provide a reference to the spec. This helps reviewers and/or future
> developers to quickly figure out where this comes from.
> 
> > 
> > This fixes booting Xen/arm32 on QEMU.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> With the reference added:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

Cheers,
Edgar
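
[Archive editor's note: for readers following the thread, the pattern under
discussion can be sketched as below. This is an illustrative sketch only, not
the literal patch hunks; register choice (r11) and the exact sites in Xen's
arm32 entry code are assumptions.]

```
@ On arm32, Xen itself runs in Hyp mode. From Hyp mode, the banked SPSR of
@ the current mode is accessed via the plain SPSR name; naming SPSR_hyp
@ explicitly from within Hyp mode is UNPREDICTABLE per the ARMv7-A/R
@ architecture's banked-register MRS/MSR rules.

        @ Problematic: UNPREDICTABLE when executed in Hyp mode itself
        mrs     r11, SPSR_hyp
        msr     SPSR_hyp, r11

        @ Correct: access the current mode's SPSR directly
        mrs     r11, SPSR
        msr     SPSR, r11
```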



From xen-devel-bounces@lists.xenproject.org Wed Jun 09 22:13:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 22:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139569.258085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr6SF-00016q-FO; Wed, 09 Jun 2021 22:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139569.258085; Wed, 09 Jun 2021 22:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr6SF-00016j-Bt; Wed, 09 Jun 2021 22:13:43 +0000
Received: by outflank-mailman (input) for mailman id 139569;
 Wed, 09 Jun 2021 22:13:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr6SD-00016Z-Cz; Wed, 09 Jun 2021 22:13:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr6SD-0001EM-6b; Wed, 09 Jun 2021 22:13:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr6SC-0004qL-S5; Wed, 09 Jun 2021 22:13:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr6SC-0006iW-RX; Wed, 09 Jun 2021 22:13:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DI5LshmbomRlbXAClYW0tLf5iMB2WVe05Htby0+4TQE=; b=QFgM2tVtf06k2nyZRGYwIgwR0q
	wM/07WRFwe92VxhsuWPCfs7pF2Oe4A0PH/oUjcG2c8meW/8tOf3427ZmAf5z2ifrMLEY9H6i8z7vQ
	6hpdtmZA/0osUhcU3QkGFC2FqQfGeYPkmKDWqcR7KtG8c52FW+r5b3Amg759H2txdZOs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 162549: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ea20eee97e9e0861127a8070cc7b9ae3557b09fb
X-Osstest-Versions-That:
    xen=5984905b2638df87a0262d1ee91f0a6e14a86df6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 22:13:40 +0000

flight 162549 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162549/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 161821
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161821
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161821
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161821
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161821
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161821
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161821
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161821
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161821
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161821
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161821
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161821
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ea20eee97e9e0861127a8070cc7b9ae3557b09fb
baseline version:
 xen                  5984905b2638df87a0262d1ee91f0a6e14a86df6

Last test of basis   161821  2021-05-06 22:06:21 Z   34 days
Testing same since   162549  2021-06-08 18:37:01 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5984905b26..ea20eee97e  ea20eee97e9e0861127a8070cc7b9ae3557b09fb -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 23:44:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 23:44:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139581.258106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr7s7-0000wR-30; Wed, 09 Jun 2021 23:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139581.258106; Wed, 09 Jun 2021 23:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr7s6-0000wK-WC; Wed, 09 Jun 2021 23:44:31 +0000
Received: by outflank-mailman (input) for mailman id 139581;
 Wed, 09 Jun 2021 23:44:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr7s5-0000wA-QS; Wed, 09 Jun 2021 23:44:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr7s5-0002iK-MO; Wed, 09 Jun 2021 23:44:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lr7s5-0007uQ-FP; Wed, 09 Jun 2021 23:44:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lr7s5-0005ii-Et; Wed, 09 Jun 2021 23:44:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+6LDUq1L80ODxYciZ785X0wl88yoATNmBcNCB7BM7AI=; b=5Ms0P2X5Ul0IsmvvJeb4B/vONU
	fgOoWrZAVtUrn1TtElSasoei9Af8GUPawaNsR6x9vCWBDkTpsVM6NAnkYQ3TnAYatsM38bZ0PA5+k
	jc67SC/n4AqJteCtUSWrWnLXS49D4MIfH/wq9V4DVUJZ34JwWXhnVw1pTHjD7mGxwAmI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162552-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162552: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=558d83ab1a5179e146a56dd5f3cb16e1ca44ff46
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Jun 2021 23:44:29 +0000

flight 162552 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162552/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    5 days
Failing since        162368  2021-06-04 15:42:59 Z    5 days    5 attempts
Testing same since   162542  2021-06-08 10:41:25 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1348 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 09 23:50:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Jun 2021 23:50:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139589.258121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr7y6-0002L3-Q1; Wed, 09 Jun 2021 23:50:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139589.258121; Wed, 09 Jun 2021 23:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lr7y6-0002Kw-N4; Wed, 09 Jun 2021 23:50:42 +0000
Received: by outflank-mailman (input) for mailman id 139589;
 Wed, 09 Jun 2021 23:50:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1y+9=LD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lr7y5-0002Kp-8R
 for xen-devel@lists.xenproject.org; Wed, 09 Jun 2021 23:50:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c65a232-ec1b-4586-a98a-ee313c040ebf;
 Wed, 09 Jun 2021 23:50:40 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A06BE613EA;
 Wed,  9 Jun 2021 23:50:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c65a232-ec1b-4586-a98a-ee313c040ebf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623282639;
	bh=yEJO2DI/IKbhJ4+Dt/QgSklrpmtCt/IWx3ftV032t9I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PwNh5zRH09/pnEhCt/REsgAGKABBMpS4dcGGnIZWfIcDXCgTjGM+ofXaEdlOEj7q5
	 0sbqSlvdwqvFbjxxi8fCpoBj7+SA0nK2gH3j0xfOVnNh1U/pIigp21Q+lLZit2ygFC
	 M5mHn9dNUdA8Ws+6oOVf6MrCOa0+Ce56zeA4bJUwZB4MMb7M9/KaggJ/62Rw/OorvE
	 fgvXhOwuR1ewdn2WG3sjHD7Sd4yA1IyUc8lX6NZQiGNfYUkko9WYEeFyD1btKg8zdv
	 i+D2kUj9RLjPEbAgmE/0BAEBYLhIg5RZZB1i2mbE5RUKD/Ly0/t0yVi4RkRylFlHAH
	 1JiDL6bA4YMDA==
Date: Wed, 9 Jun 2021 16:50:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, Volodymyr_Babchuk@epam.com, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Michal.Orzel@arm.com, edgar.iglesias@xilinx.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] xen/arm32: SPSR_hyp/SPSR
In-Reply-To: <712da7a7-2c1f-fd24-398d-27966335618a@xen.org>
Message-ID: <alpine.DEB.2.21.2106091647100.24906@sstabellini-ThinkPad-T480s>
References: <20210609174324.6621-1-sstabellini@kernel.org> <712da7a7-2c1f-fd24-398d-27966335618a@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Jun 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 09/06/2021 18:43, Stefano Stabellini wrote:
> > SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
> > trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
> 
> Please provide a reference to the spec. This helps reviewers and/or future
> developers figure out quickly where this comes from.
> 
> > 
> > This fixes booting Xen/arm32 on QEMU.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> With the reference added:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks!

I added: ARM DDI 0487D.b page G8-5993, and committed it.
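
For readers following along, the change amounts to a one-register substitution in
Xen's arm32 exception entry/exit assembly. The sketch below is illustrative only:
the register operand and surrounding instructions are assumptions, not lines taken
from the actual diff.

```
@ Illustrative arm32 sketch (not the actual patch hunk):

@ Before: explicit SPSR_hyp access. Per ARM DDI 0487D.b (G8-5993),
@ accessing SPSR_hyp *from* Hyp mode itself is UNPREDICTABLE.
    mrs     r11, SPSR_hyp       @ UNPREDICTABLE when executed in Hyp mode

@ After: plain SPSR, which names the banked SPSR of the current mode
@ (i.e. SPSR_hyp when Xen is running in Hyp mode) and is architected.
    mrs     r11, SPSR           @ well-defined from Hyp mode
```

The same substitution applies to the `msr` (write) direction; QEMU's stricter
modelling of the UNPREDICTABLE case is what exposed the boot failure.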




From xen-devel-bounces@lists.xenproject.org Thu Jun 10 03:05:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 03:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139602.258149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrB0V-0000U7-8r; Thu, 10 Jun 2021 03:05:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139602.258149; Thu, 10 Jun 2021 03:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrB0V-0000Tf-1F; Thu, 10 Jun 2021 03:05:23 +0000
Received: by outflank-mailman (input) for mailman id 139602;
 Thu, 10 Jun 2021 03:05:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrB0T-0000TU-Kf; Thu, 10 Jun 2021 03:05:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrB0T-0003zS-C3; Thu, 10 Jun 2021 03:05:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrB0T-0008AK-3j; Thu, 10 Jun 2021 03:05:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrB0T-00043B-3C; Thu, 10 Jun 2021 03:05:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jDQOwS1j92maOsjmqB1TPX5RJe7NtaVctoUHdC6RCWY=; b=FJyuZERgr30EHKbUycx650ddP/
	30DwwP99VMejq/VWb4q6S+YlkE/0L+lIGuhSLi047Y8dkW9C/b5U2THZzkKjaqtJAqlbGyJqDctbx
	1Hm6rBUqbx12kViLKYSI7J1IvEt67ETzEVYcDoTFIYMr+ISeZYV/2iyy8jmc0JWyyrxg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162584-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162584: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dfcffb128be46a3e413eaa941744536fe53c94b6
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 03:05:21 +0000

flight 162584 xen-unstable-smoke real [real]
flight 162587 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162584/
http://logs.test-lab.xenproject.org/osstest/logs/162587/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dfcffb128be46a3e413eaa941744536fe53c94b6
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    0 days
Testing same since   162584  2021-06-10 00:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 04:33:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 04:33:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139613.258172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrCNM-0000NX-N6; Thu, 10 Jun 2021 04:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139613.258172; Thu, 10 Jun 2021 04:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrCNM-0000NQ-K0; Thu, 10 Jun 2021 04:33:04 +0000
Received: by outflank-mailman (input) for mailman id 139613;
 Thu, 10 Jun 2021 04:33:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrCNL-0000NG-Bx; Thu, 10 Jun 2021 04:33:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrCNL-0005VL-2X; Thu, 10 Jun 2021 04:33:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrCNK-0003yQ-Om; Thu, 10 Jun 2021 04:33:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrCNK-00081B-OE; Thu, 10 Jun 2021 04:33:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JN8ZkJSnPqcM3uRkGbL9FR9dpcuesRpK0NT5+If2V+A=; b=1kecYY/SSF8QC/WP5CIiyozFHq
	2Pd9VUuQnj3Op68guU8D0QaPJdnQ78bJC47WtyRrDo8NqwJXbAtjD1RzLnIdZHhI7c7ljATX1oClN
	3lG8ihplYpY5zLxm3ATr+3o9zxhdHV2YmqfKcnly6JIIZCm/ZwhOBGZPQsMZ6LK8+mb4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162551-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162551: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a4716fd8d7c877185652f5f8e25032dc7699d51b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 04:33:02 +0000

flight 162551 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162551/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a4716fd8d7c877185652f5f8e25032dc7699d51b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  293 days
Failing since        152659  2020-08-21 14:07:39 Z  292 days  541 attempts
Testing same since   162551  2021-06-08 18:39:57 Z    1 days    1 attempts

------------------------------------------------------------
530 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170393 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 07:32:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 07:32:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139631.258203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFAv-0008AX-P6; Thu, 10 Jun 2021 07:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139631.258203; Thu, 10 Jun 2021 07:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFAv-0008AQ-Li; Thu, 10 Jun 2021 07:32:25 +0000
Received: by outflank-mailman (input) for mailman id 139631;
 Thu, 10 Jun 2021 07:32:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrFAu-0008AG-6z; Thu, 10 Jun 2021 07:32:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrFAu-0000YF-2x; Thu, 10 Jun 2021 07:32:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrFAt-0002kz-Rf; Thu, 10 Jun 2021 07:32:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrFAt-0002Gu-R8; Thu, 10 Jun 2021 07:32:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ejBTD4v9UhoT4IBRG1+gQ08WIQnCBGahRrTN5j+6EeI=; b=0jbjT4hxg0ALvWh01vtqqWDkcI
	HGkF0x7mv+qjcjUtze8/Hpe1LT9IGdumfTFmvQdyeVkBC/ZT2IzBwpODxYhiBaSEfW3I1Wog/zjA2
	S74zaec4krtim4vQi4cLfMdSN/xQQks7X+Ug2M1GT5uerlEMUJBE+EJ2GtjCHWfdprVs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162590: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dfcffb128be46a3e413eaa941744536fe53c94b6
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 07:32:23 +0000

flight 162590 xen-unstable-smoke real [real]
flight 162595 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162590/
http://logs.test-lab.xenproject.org/osstest/logs/162595/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dfcffb128be46a3e413eaa941744536fe53c94b6
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    0 days
Testing same since   162584  2021-06-10 00:00:27 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 07:37:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 07:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139638.258218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFFY-0000QZ-Ep; Thu, 10 Jun 2021 07:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139638.258218; Thu, 10 Jun 2021 07:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFFY-0000QS-AH; Thu, 10 Jun 2021 07:37:12 +0000
Received: by outflank-mailman (input) for mailman id 139638;
 Thu, 10 Jun 2021 07:37:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrFFW-0000QM-Qw
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 07:37:10 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6057b24-c9ed-4158-b39a-4f8f33cd5957;
 Thu, 10 Jun 2021 07:37:09 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2052.outbound.protection.outlook.com [104.47.14.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-3-TlOIDy6JOfuO28R73fCnJA-1;
 Thu, 10 Jun 2021 09:37:07 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB7360.eurprd04.prod.outlook.com (2603:10a6:800:1a3::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Thu, 10 Jun
 2021 07:37:06 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 07:37:06 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR02CA0114.eurprd02.prod.outlook.com (2603:10a6:20b:28c::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Thu, 10 Jun 2021 07:37:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6057b24-c9ed-4158-b39a-4f8f33cd5957
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623310628;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nVJr6qolGfc+TnGCF7gON2ry05AxZE7V1JPgJtBIWs4=;
	b=DgQcAcINZ3UBz9eIcBs8B+c4m4e9MDEfxLMBJuUZcaymCxhhtZ931Y5dUhV0+exg/biF/S
	lZRqjFSwfAdHeUw3vBJVkFa/N2oY5r4z+GdDFQJGNvpNhDu2fh2WpSiu9YqN1TyZi54k6z
	SEs7Y8JvO4gxAx/On31qWXPVZtKz5mU=
X-MC-Unique: TlOIDy6JOfuO28R73fCnJA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NVBHIRX/kgXHpxvyT7EyFq8QerOHcxqDtFkqXt7e7maQM76kRA2+49QzjGT2QlS42BdsY3WEzK4Qho9K8ZZy0v3fBeF9pzQZaZFk7DnWY6zKSJsFKe8j7ONYqO8+5/GH+pWoNK11BqOmnoiFTJ5eaPBg9MxAJTZ+DgC2ud60ADY9dOtnQJQ1yms5OJqpcZy/FwjlJr/DPmHEmP2GzmwYAJHivkeAcHoT1QX4kippYMQbHz87D7FgEvUZpTm40kH7PP289zn1JGEn8TiNlqlU3xV56AGDlLProh5h27mA7a4CbeCoI9kIyI55+FS1tZmlDJSxN+T/BBavmrLKW1blaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8H7kS/QkjXPqTdpjKRKfSXQbK37UtVeb37RDe4HxBW0=;
 b=Z3PrO4s41QI5oCW5ZJSlsh82PE8NCseeExI77OX2JqkSB+TDUnRyoxK0gbFfnlYHf/0WxG+4ndzTttvgJVu5qkQ8kt5iOQ8JzeWH4P4ehfnk3ClmhJV0Zir6EP42Jx5+QaTxX2EWndLdZltnGPWJef+nM7EeUK7lW6SmN1817Shr9eLi8Fv6SAJjRXV8Q8e82ebI9R4cOTDB2U60jUPwoGCvgDavKTHQxZRszvHxxj6/vTDss0tx82IV+wgMhGl5f8L5L6ufew/nXNf/xd1XozA5FR5nuehH7U3yTHqZVNZmAXWhH46hrBKfRESjQMz0O6FgsDfWaoqd4x5olPhggw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86: please Clang in arch_set_info_guest()
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c758de2b-c453-4dba-ddc0-0c9548172c6e@suse.com>
 <c032362e-51cb-00fe-dfe9-782bd4600163@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5b9383a7-cdcb-f3e9-3ac3-f703f4bfe4db@suse.com>
Date: Thu, 10 Jun 2021 09:37:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c032362e-51cb-00fe-dfe9-782bd4600163@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR02CA0114.eurprd02.prod.outlook.com
 (2603:10a6:20b:28c::11) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8c2f2ddf-054d-48c0-320a-08d92be28c4a
X-MS-TrafficTypeDiagnostic: VE1PR04MB7360:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB7360D8E6E034FC738B097316B3359@VE1PR04MB7360.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KGhBbaIVPQ6jEk7g/DESPbsG0l3ghmevFiZG8GR8VZgO2n+d+f/DD6PSQcpEzGn++nD48JZtmWJ84MSy9/NQd2bNGMaM2VfXw/fX7Ms1MrcoJfF+bkjfzoiZWiEmZzKaowQrgYLhX5IFV+fL6HNdYN00OkgOYNDaCypCaH2s8W7Er0O0+cURRNTSDJlw8PtM3Psr6RVCpWiAAZDBtKDHMFOY56Csw/lAd4xaCzVpNGQYArrZFDau6oS4Q4XDP0Ef7GafvH9ZvVJ/qGGd5Qjj1DqUE6BYc9mrRNf0KKv0326ulK3bw6nnVUMsZeIiwqB2AfsWFCfG2Tt2KNIBU/GmvItPb870n9gTqwY4H1MU/0Qs/QjKuIqf/xbvSvZjhBHwjJ071fbspYZu6K1HoBlKHbAVVX7N8EMEtpew5uS3oEgep5IKupTWGqDwqkmYdWjQ6KnXA2k3ZxsV6Ev+4tTw2ZOSd3Da4pwk9G2E+cO6y+pKPghyXBQsr2l+ZqMUN0RSrlkzfGx8Ed1/7vYYthzLdqNHuy4adjA5L+2MGCwXLubNFrwKWyogTP5bKjTpl6RM4g6urEJ1LWfiW70MsckKpj3NXHhh9XOX3IZCo3fuYkYq2jUP0wZ9BGX6SgKJvG0z2483b1chfvVS7YTesLt2tOLM4JzKwfdUZ/+oTiUFIk5VWsagd1XVZjVw/QXg48XLbXofWhl9Os0FETG9x04gzwTXAStRqer0SNbyC0OvOqWJcIg1RUkCuBIPQAVEQQJlztI5Tx1xUHdPvoMBfKtpH9FnwSL2hIp7qhMEfopEiamKI6hW4lxCBGtFMn1mvK/g
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(136003)(346002)(376002)(366004)(39850400004)(4326008)(16526019)(6486002)(66556008)(31696002)(66476007)(5660300002)(83380400001)(36756003)(31686004)(26005)(186003)(8676002)(316002)(966005)(478600001)(54906003)(16576012)(6916009)(86362001)(2616005)(66946007)(8936002)(53546011)(956004)(38100700002)(2906002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?4HqaOvJ4Vn7emvqmAkrcvcdb8dRVlaCdPFLUvXsdtXQ4dFcjjjSQncFUL2yc?=
 =?us-ascii?Q?BaL96IJXvSxWX4VKHUxswkAGTO1xIPfqNwJq9rAD2aLw1JaCVTXfYnh0Mlew?=
 =?us-ascii?Q?Q3mNxkCm40be96zj4mTShONE4SKMk/l+/Vjbo464bkQze7/ewujzyTyi3kG4?=
 =?us-ascii?Q?UyRRDvzRmSkmBuKCLUO5ODeRAITiBf2j27a6hQZ2tgKX2SEYINuCjPkUd1oL?=
 =?us-ascii?Q?z4jwi8RGo45NN+OFQGhmgPpwUE2zC+XnPNHqvWC3wgfzT8W2XYyHr6qpKXR8?=
 =?us-ascii?Q?xn6CexP0UXVSLRaR+J1FrWfFLMjXNc4xiMTUfTvjPisZ2Pwv5uIOA2+tp0Hc?=
 =?us-ascii?Q?Q8oHo8zUg4LOkDE1R1ymAD085VYPM4CPIcI3gNyXPyq3FzyRF4uoMV+h8MgB?=
 =?us-ascii?Q?ZOuCME7TK84/B+b6Q7ghSDqjl//ck0Y+y9H4WlTqf0YT/RCKNPkq27tnBYQA?=
 =?us-ascii?Q?i22IeNqmK7/zarKCF+Wc3+wtR8S/XuEwOi6Nu+OZvAHGHhiMgUk0D/FPlewZ?=
 =?us-ascii?Q?K3qj6rnK0bQujEKggmJF0QnNl75cyYkFvMcgBQAdHxEtKyhfkk+JX1rdScPt?=
 =?us-ascii?Q?49I4+PTQ7f6rr0pcb7tRZiW+C3WQeviOJ3S9xQZA82via/8W8MRy/SqxduPj?=
 =?us-ascii?Q?QWJnsCuWnh+Pz/lDgzJZ/up8J/SPCBWNhUYHcReb+NTp/CQN3bRPPJpmphr3?=
 =?us-ascii?Q?G0JkkoOsO2GeckyXaPz4YgtYTOi9O0BpjrQhIXs+3K0QJqTR73tBl8BVOf3a?=
 =?us-ascii?Q?oSoTrzdrZ3w4rOWn9G83AL3Lxd8luQh3rtksKOkfaqERhXx+VPupN8IE54nZ?=
 =?us-ascii?Q?9lLNnNiWGLhOv5ZAA6O0TrwRxIfGY4MlbeFDwOwq+a+nstQy5ZQVg/tEJ+zN?=
 =?us-ascii?Q?1OCzaAYmfpaou17cWqiwUq7XAo6sCQzqyEbGcKI2y+poN5+1c6YGQOFAXNln?=
 =?us-ascii?Q?PHf29AKVnO21lP3iSW2NJrNX9X0PZhuKbQObvOcYdxNAl0dpf5PzlXOXwH2t?=
 =?us-ascii?Q?FKVZoapMrWaW0X9IsdEBXrhANtbXrcbHWCdhO7opwgmxwyD6/bSx9HWGqBx8?=
 =?us-ascii?Q?kYvFobJser+LAOPHnDL+CjJsawPZXwk16G7p70dtPsJLFf/CKy8t23Ruk5XY?=
 =?us-ascii?Q?wA907mNrp2LTLeMByO1JYbl/6MzODqY9TukkxygCcjSMC2Lg1tfKSdhxzOll?=
 =?us-ascii?Q?yEhExGSUCcvyYGTXihZR2a8EzXRYva8h46wxo/EeEoeus93CqjZOuF6Htt/Q?=
 =?us-ascii?Q?2CmhVJqKynkMofAtL5dycqbQi8aKZ8zeUK/VeiLmZqRK9p8UdGN7iE1OKjaM?=
 =?us-ascii?Q?sDcYycHTHbm/z4skaL5XsL5I?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c2f2ddf-054d-48c0-320a-08d92be28c4a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 07:37:06.7071
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3FmiqWI2lfd54ev2tSdeEMib3OfBZ8Z3QXGJJTr/PHlqWU9Xap0uHWX6NZshNk3fHKHsyN7/UfgWJ7KLmQGAtQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7360

On 09.06.2021 17:45, Andrew Cooper wrote:
> On 09/06/2021 14:14, Jan Beulich wrote:
>> Clang 10 reports
>>
>> domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
>>     if ( !compat )
>>          ^~~~~~~
>> domain.c:1334:34: note: uninitialized use occurs here
>>     cr3_page = get_page_from_mfn(cr3_mfn, d);
>>                                  ^~~~~~~
>> domain.c:1328:5: note: remove the 'if' if its condition is always true
>>     if ( !compat )
>>     ^~~~~~~~~~~~~~
>> domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
>>     mfn_t cr3_mfn;
>>                  ^
>>                   = 0
>> domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
>>         if ( !compat )
>>              ^~~~~~~
>> domain.c:1211:9: note: uninitialized use occurs here
>>         fail |= v->arch.pv.gdt_ents != c(gdt_ents);
>>         ^~~~
>> domain.c:1189:9: note: remove the 'if' if its condition is always true
>>         if ( !compat )
>>         ^~~~~~~~~~~~~~
>> domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
>>         bool fail;
>>                  ^
>>                   = false
>>
>> despite this being a build with -O2 in effect, and despite "compat"
>> being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
>> defined, as it gets set at the top of the function from the result of
>> is_pv_32bit_domain().
>>
>> Re-arrange the two "offending" if()s such that when COMPAT=n the
>> respective variables will be seen as unconditionally initialized. The
>> original aim was to have the !compat cases first, though.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I wonder how many more there are to come.
>
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/317744453
>
> Everything seems ok now.  The failure is a known arm32 randconfig issue
> which still hasn't been fixed, and is unrelated to this.

Well, the question was primarily about current code and the presently
used Clang version (which you say looks okay now), but also about
arbitrary code changes which may trigger the same issue for other
similar constructs, about future Clang versions, which may become even
pickier, and, not to forget, about .config variations.

> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 07:55:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 07:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139647.258233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFWg-0002iC-Vc; Thu, 10 Jun 2021 07:54:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139647.258233; Thu, 10 Jun 2021 07:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFWg-0002i5-S2; Thu, 10 Jun 2021 07:54:54 +0000
Received: by outflank-mailman (input) for mailman id 139647;
 Thu, 10 Jun 2021 07:54:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sV8R=LE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lrFWf-0002hu-H9
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 07:54:53 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1136146a-edeb-4bc5-b8d1-23faea77719a;
 Thu, 10 Jun 2021 07:54:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1136146a-edeb-4bc5-b8d1-23faea77719a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623311692;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=X7/AESbqMCZStgHVgrx1hhFimx2DE8DJqNRE1XvfzKg=;
  b=ZJ4D1B+Cq7lhQX1fT68sVQvW3QkuT1iwHFUvULcaxaPXgE8fWyYAZ/gb
   eLasCWlZTqzrdrx8wkqnqDoLuGIaFDJSbNSJvfHVfqi3E2uJtbdgtHbCk
   0ld3sQVN6GkfVHCo6fNZZJ2YWsn60o6wvBRfT5Fo8vRCwDfjhHn6rN39n
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qBJcfkdp4OF5pEqEX7mx58Jdfr2jbIHnpM1hONdPUMYRbWP5+a2Px++ryVdBB2zGuk9931Q2pY
 GAdQ7mmPHzg9x7oCoV8K6Sj0QbeEB55P6JRoxUnTrLZPfE6eAj+cwXzyi4tuNiM/AWq6eKPFl0
 Qr7IJcRS2s0VnHmAHbvlviQXMShpFiURQvGiTLWHo3U+9Ym/sHJBQ4HbreB83pswUzhDNNmVeS
 en4q5BkSfFdSb5JFHNPYHvTYFasIUQuuVb0EAZ0b/iHbV6WOPV/omrf2QoEK06YoYP6QudeJ+O
 sYY=
X-SBRS: 5.1
X-MesageID: 45798123
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:MoZUe6jrPn1T/hVPW3KE28GLpHBQXyt13DAbv31ZSRFFG/FwyP
 rBoB1L73DJYWgqNE3I+erhBEGBKUmsk6KdxbNhQItKPTOWwldASbsC0WKM+UyEJ8STzJ846U
 4kSdkDNDSSNykLsS+Z2njBLz9I+rDum8rE9ISurQYecegpUdAa0+4QMHfrLqQcfng+OXNWLu
 v62iIRzADQB0j/I/7LS0UtbqzmnZnmhZjmaRkJC1oO7xSPtyqh7PrfHwKD1hkTfjtTyfN6mF
 K13DDR1+GGibWW2xXc32jc49B/n8bg8MJKAIiphtIOIjvhpw60bMBKWqGEvhoyvOazgWxa3O
 XkklMFBYBe+nnRdma6rV/GwA/7ygsj7Hfk1BuxnWbjidaRfkN7N+NxwaZiNjfJ4Uspu99xlI
 hR2XiCipZRBRTc2Azg+tnzUQ1wnEbcmwtirQcqtQ0cbWIiUs4VkWRGl3klVKvoXRiKprzPKd
 MeT/01v51tABSnhxmzhBgd/DSuNk5DVituDHJy/PB96AIm6EyR+XFojfD3rk1wga7VdKM0kN
 gsEp4Y342mHfVmJ56UOo86ML2K4zv2MG3x2SSpUA3aKJ0=
X-IronPort-AV: E=Sophos;i="5.83,263,1616472000"; 
   d="scan'208";a="45798123"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jNoufSonGXXLycwBc1ZJkC7ydbBl1lSAQie0zpoNIyItfyAHsXdpqO9hoDhqfgkXRmF5jfX5PRGj5SQ8EYdC6QVMGPjm66C+ePAnJIfgEoeWgbeLxZleTUkHiv9awmkkls7TGI55p2MbBjC9G+2n0Ds/sqDqy7pmciaUTSUZrjszrRYJxWVeqhJnVxeLbRVr7hcn60oPIe7DyqC/Z9r8f5R9ESfBUMbMKcdFX5HiNHow+ps8zB1BuSepWAWVZSP1Z8BUYyzS+qTk9QezHvK45bWSqL6JCvOukPOwIxgtLuZJ02BnNwJFeyTggjOg3LobVSZcuIkvl2W7nGPixrEhwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KUP7Nqu6Fmge9xIhhwPZjy2IirUsrJcuon4/g2rpM/Y=;
 b=YqpMcUBPLYW1kvACF8ARtfAxKQa9vIUDbcJZ/JgqhvPxo45/hCDZXkNe3hHldoMnw6xWNPjLJPh7Uv4fPnJuA78zAQMgBolg8InM4CNo9LTsfMpRlGXnWYkxjg6fYGhVCoqBJcjU9jMr1FJrOGp4MMgP8mE1lnbHI0eIc+t+Ng9xLungtXxxmBX4tisaSr07pFIK+GucM7Le0z4CxFq4imCER8QU+EqxCGhuVbAu0RMtmjaxlvgyA0+q8vBPRu1Z3iiENf8sDxVfYJfIN3/X9S0iu7Aj0Y6YIhubeVJpVtSdL589uEXu77HSijgvOQFCVJy9bNFc+uWSzdT0Qh7wYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KUP7Nqu6Fmge9xIhhwPZjy2IirUsrJcuon4/g2rpM/Y=;
 b=KX8atIBQcEpETEAzK4mOXl5mBqVskSd6AlgxZNjko6VQsFXI+f5cql2uygAKv4n+UzlAk1r8OcHVTzqt8PAG9uS4x/E6aiZm8oarjxEuY4UycuAixQLUaW7z/9XAO/qJc8O0/wAln+P/hdCqttWIKV5XZMhSsXllMgjHgvRLrJs=
Date: Thu, 10 Jun 2021 09:54:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Message-ID: <YMHFQA1L61ntKNRq@Air-de-Roger>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
X-ClientProxiedBy: MR2P264CA0122.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8fc9bcd9-4cc5-4d44-fb95-08d92be50448
X-MS-TrafficTypeDiagnostic: DM5PR03MB3067:
X-Microsoft-Antispam-PRVS: <DM5PR03MB306755186EA35277BAAA07BD8F359@DM5PR03MB3067.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ur2Qp4FDJ/xPQutqBmDsScumP9OqVhhYvkmP6IwbINMcWWZaF35sKbGu2KFJqFhhkBEKpCpo9AK0LaYfhbCXvsgalpR11WIYZsTpFztbrTu1XuecFNOWlWKefgsfHgGaMSdKaD2iAAqwTtCjCkUCdYb5Tga+DJEU3nJIvuToVo02ndpH3VWe33f7ryQhWGHOZIsWPu8hT0sp3IpURQ+xwe6ACggClqclTBFLr+D/rE1LqebCyvyZgYdi+6VuSmMfSBHHVhuxBORwMyqsTASz4egs8CLRn26DWuItIB1bSq8E9R/fY7eHUZhR9yffqd0+f1McWwYMHKj98vOwEIbHYPf6iwFx2gy4qFdxPMb5gf4fZieusDOR4wohnyXuiIatD9f6cGxoDK1hiKYUgohyD8dQ0fFe39dTgQbtLLY0vNGzCNmd3ZznmZ7hecHWLu+RsiSModJO4WO6zJk0cXbuZ8lGYHn8YUfiJkzVk/gCeqOasrmQ8CltCRKh4DCm5bLX5b9AHgM9B1jdKKZzT+uarHiF53mpM+8IZlt095sKr+LUsLRpTlDtwJvsblUVE1cfOTtA5L92MibtfFP119MUFTO+KIBjq5PpVamLJIp7Y89LO9+UYwPXqIPo87Wb341eAZflGWquQvrzWiclhm3s36ZIYX2pYc1aG4Sq7OtUYKdskY87H1Ip3bXgEN6LY2C/TMiqIA4VGogJVb2/0ms0d8WAxjfFBW8v6b+1AlC5T91OfrGd9x2PUqns6utuUL+q
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(376002)(366004)(39860400002)(396003)(83380400001)(6666004)(4326008)(2906002)(66476007)(8676002)(85182001)(6916009)(54906003)(316002)(5660300002)(6486002)(478600001)(6496006)(966005)(33716001)(86362001)(8936002)(66556008)(66946007)(9686003)(38100700002)(26005)(16526019)(186003)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?LzZVcGhjSGJ0R0pzekVBVnk4MjFFVFM0SnR6WGxnS0phS1BEWTF5cUZUT0hv?=
 =?utf-8?B?WDUvTGo3K0FPZzY4d1JkVE5MTHAwZDVlMGloYlZrQzR3VW8yWkhyM2FiWmVh?=
 =?utf-8?B?Z08zVHFtL3d2c1BSbnZlVlhYbW0vSDg5Z0h6cStTQWJXRjZzYlJyekJ0TGhr?=
 =?utf-8?B?VVVJeVcvVlhDUXRkUDVvNVNzV25aQXJPTHB0Rll2RlJvOUswbzg2UWFlbUlB?=
 =?utf-8?B?UnZ1YWRsaHM0QzRrMVhYU0FyUk1RRFBmb3FJcG9NNlF1dmp4d0NNZkJqcXNE?=
 =?utf-8?B?OWhySVJ5cHpOZHh5TStWRHRlUVAyUE9JQkU1aGJYSVhHWUlIaDFxZFA4RzU0?=
 =?utf-8?B?Z1hlVUI4STNUWGxvM1oxZnptY1hBb3BaaEI0cStEd2hZS004TC8vV3JhN21p?=
 =?utf-8?B?YXVpZlBCZWY3MTVRZk1tRTFnZk1uK05hUlpycUcra0FBYUl4R0gvVURjbFVN?=
 =?utf-8?B?WWtYSWcrRFd6bmtPL0FyeGlMMnJwd1ZBZjV4UW9yUVZNRnlmWUhVZnQxVjdV?=
 =?utf-8?B?RW0xbVBIWnJNb1pSaFIrUmZjMDVOWkF1cnEzR1dEcWxLbHR3WWNHaDNHL1Iw?=
 =?utf-8?B?VVJId3RKTzBib1NqLzdiMVdJc2crVDlnbmFYTVFwc1FVbitBc3FUWTV6WEdL?=
 =?utf-8?B?ZVBuZTAwTUdmckJLQXZrR3IwS0VaWTA3T3JjcTNSSGozSmQ3bVl6YkV3RnJV?=
 =?utf-8?B?VkphOXhlZEliUTkvWm5oMVpGdEF3S0tvME5EemM1QlVVOEtoaHlqSDROOVdR?=
 =?utf-8?B?cnJpWGY3Qy9OWC9YQzhCaUFKTEk0RUEyRjkwd3Z5MWJSZjlSd05udG1tMTB0?=
 =?utf-8?B?Qlp5NUJCaXltVVZFaGFhZW1XdTBHMlJMc2h1djBKbXk4TURMeW1qcGtkVEVP?=
 =?utf-8?B?dUQrOVlOTG1DUlNpa0tKb3JjQ0UxdFdpT1R1RHNtUkttcWNUek5QQ1VIeXh4?=
 =?utf-8?B?Ym9vd2tPWE9wYTV2UTdDWmhBaTVwTFdkV1AxUmdTaXVmRExzZk9vSTZZRGZM?=
 =?utf-8?B?STJPdmhpTWZZZGVWMzY4T0JEcS9WNkhoVi92d3BjWC95VE04Q0xub2Nza1Zs?=
 =?utf-8?B?aXAwNjc3SDRmMWVBa3Urb3NSc3R5aHI5V2tCS1RJdTUyR1hTMXFUYmpqU0ha?=
 =?utf-8?B?KzNtMFZtay94YVFYUXdVZUNFMXNJcUZRMWl3VmZtVWQyWlkySWovVi83Tkx2?=
 =?utf-8?B?ZThZYlU0K2JUQTJLT1hFcWdyemtsMit0NHFPc3I5NTVjODBmTlZXMFUrT0Iy?=
 =?utf-8?B?a3o2TlJuMSt3K2xoYnVsZEdnVXZtQ0JmSll6Y3ZTaVBXb21OWFloSFU1elBp?=
 =?utf-8?B?dGVId2NOcXZWRVhuQ3JiRXIwOFlTRG9uakJJUStMcFFKRFBmRmJQTVdqc0Fi?=
 =?utf-8?B?QlpyWHJGMC9jblk1dGpOaUI3enA0T0p3SXVNSFV5cWdaM3hQazMvVUJxTTZn?=
 =?utf-8?B?dFYwQWVMT2FPQ0QrdU1iaWxvaWVRRlZrVXVqaHY5QXdNbFNRdndMc0lvMTdX?=
 =?utf-8?B?VDdNOXBaaHlSR2F0K0w0b3EvaGxadlVjYVNmTCtMUDJWdllHMUZlUkI4R0pB?=
 =?utf-8?B?cVhSMmVqLzNkUEJPTUR1TjI4bXRtcXBJNU10SWdVRHE4dGdCWGthdmg1REM5?=
 =?utf-8?B?RU5IU1EyUUVvanlRME42Mk5EVWdTTzJPMzhxc2dLZ0xTT3pmOTN5VmdMbUMw?=
 =?utf-8?B?NmY4NkozNEs2YmQ0RGhhK09JbzBTOVdrOEZCcG5oWHYySkNqekc5S2xxcHBV?=
 =?utf-8?Q?wF6pavi8Vv1hjfeD76Smk/2fynOuSF9mFAMHMiu?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 8fc9bcd9-4cc5-4d44-fb95-08d92be50448
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 07:54:47.1920
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R8BNNvUZLpsbzKPru1PDJKZDYUygOWfl69RYPYFPInqlBjSUq2JoTBLQ35WfcwyzhRRAL2vke4Kowhw0R662gQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3067
X-OriginatorOrg: citrix.com

On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
> Hi, all!
> 
> While working on PCI SR-IOV support for ARM I started porting [1] on top
> of current PCI on ARM support [2]. The question I have for this series
> is if we really need emulating SR-IOV code in Xen?
> 
> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2 
> patches)
> and it "works for me": MSI support is still WIP, but I was able to see that
> VFs are properly seen in the guest and BARs are properly programmed in p2m.
> 
> What I can't fully understand is if we can live with this approach or there
> are use-cases I can't see.
> 
> Previously I've been told that this approach might not work on FreeBSD 
> running
> as Domain-0, but is seems that "PCI Passthrough is not supported 
> (Xen/FreeBSD)"
> anyways [4].

PCI passthrough is not supported on FreeBSD dom0 because PCI
passthrough is not supported by Xen itself when using a PVH dom0, and
that's the only mode FreeBSD dom0 can use.

PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
to work. However, I think this is not the proper way to implement
SR-IOV support.

> 
> I also see ACRN hypervisor [5] implements SR-IOV inside it which makes 
> me think I
> miss some important use-case on x86 though.
> 
> I would like to ask for any advise with SR-IOV in hypervisor respect, 
> any pointers
> to documentation or any other source which might be handy in deciding if 
> we do
> need SR-IOV complexity in Xen.
> 
> And it does bring complexity if you compare [1] and [3])...
> 
> A bit of technical details on the approach implemented [3]:
> 1. We rely on PHYSDEVOP_pci_device_add
> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs
> 3. BARs are programmed in p2m implementing guest view on those (we have 
> extended
> vPCI code for that and this path is used for both "normal" devices and 
> VFs the same way)
> 4. No need to trap PCI_SRIOV_CTRL
> 5. No need to wait 100ms in Xen before attempting to access VF registers 
> when
> enabling virtual functions on the PF - this is handled by Domain-0 itself.

I think the SR-IOV capability should be handled like any other PCI
capability, ie: like we currently handle MSI or MSI-X in vPCI.

It's likely that using some kind of hypercall in order to deal with
SR-IOV could make this easier to implement in Xen, but that just adds
more code to all OSes that want to run as the hardware domain.

OTOH if we properly trap accesses to the SR-IOV capability (as proposed
in [1] from your references) we won't have to modify OSes that want to
run as hardware domains in order to handle SR-IOV devices.

IMO going for the hypercall option seems easier now, but adds a burden
to all OSes that want to manage SR-IOV devices that will hurt us long
term.

Thanks, Roger.

> Thank you in advance,
> Oleksandr
> 
> [1] 
> https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
> [2] 
> https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
> [3] https://github.com/xen-troops/xen/commits/pci_phase2
> [4] https://wiki.freebsd.org/Xen
> [5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 08:21:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 08:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139661.258247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFwi-0006S2-Js; Thu, 10 Jun 2021 08:21:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139661.258247; Thu, 10 Jun 2021 08:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrFwi-0006Rv-Gk; Thu, 10 Jun 2021 08:21:48 +0000
Received: by outflank-mailman (input) for mailman id 139661;
 Thu, 10 Jun 2021 08:04:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g+bs=LE=mittwald.de=s.kieske@srs-us1.protection.inumbo.net>)
 id 1lrFgG-0004mx-1p
 for xen-devel@lists.xen.org; Thu, 10 Jun 2021 08:04:48 +0000
Received: from mailbulkout04.agenturserver.de (unknown [153.92.196.163])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abbf89da-0741-433d-8658-2910531f69ab;
 Thu, 10 Jun 2021 08:04:39 +0000 (UTC)
Received: from mail03.agenturserver.de (mail03.internal [192.168.51.40])
 by mailbulkout04.agenturserver.de (Postfix) with ESMTP id 4D47C42405;
 Thu, 10 Jun 2021 10:04:38 +0200 (CEST)
Received: from XXX.XXX.XXX.XXX (XXXXX.XX [XXX.XXX.XXX.XXX])
 (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested) (Authenticated sender: p1000p113)
 by mail.agenturserver.de (Postfix) with ESMTPSA id B3DE4413A5;
 Thu, 10 Jun 2021 10:04:37 +0200 (CEST)
Received: from XXX.XXX.XXX.XXX (XXXXX.XX [XXX.XXX.XXX.XXX])
 ex2.mw-ks.local (2a03:2a00:0:3:edc0:71ef:fb82:674a) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2176.14; Thu, 10 Jun 2021 10:04:37 +0200
Received: from XXX.XXX.XXX.XXX (XXXXX.XX [XXX.XXX.XXX.XXX])
 ex1.mw-ks.local ([fe80::e187:7dcb:b1d9:d875%2]) with mapi id 15.01.2176.014;
 Thu, 10 Jun 2021 10:04:37 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abbf89da-0741-433d-8658-2910531f69ab
From: Sven Kieske <S.Kieske@mittwald.de>
To: "xen-announce@lists.xen.org" <xen-announce@lists.xen.org>,
	"oss-security@lists.openwall.com" <oss-security@lists.openwall.com>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
CC: "security-team-members@xen.org" <security-team-members@xen.org>
Subject: Re: [oss-security] Xen Security Advisory 375 v3
 (CVE-2021-0089,CVE-2021-26313) - Speculative Code Store Bypass
Thread-Topic: [oss-security] Xen Security Advisory 375 v3
 (CVE-2021-0089,CVE-2021-26313) - Speculative Code Store Bypass
Thread-Index: AQHXXTa+QYm8IqItZ0WWKzH19j3F/KsMwoyA
Date: Thu, 10 Jun 2021 08:04:37 +0000
Message-ID: <90160ae63614ca1098c87f5c60002b9a35e922ef.camel@mittwald.de>
References: <E1lqybO-0000fZ-5i@xenbits.xenproject.org>
In-Reply-To: <E1lqybO-0000fZ-5i@xenbits.xenproject.org>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-originating-ip: [2a03:2a00:2:1::48]
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="=-0l3DJMvbZ87xKiFC2jeo"
MIME-Version: 1.0

--=-0l3DJMvbZ87xKiFC2jeo
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mi, 2021-06-09 at 13:50 +0000, Xen.org security team wrote:
> For more details, see:
[..]
>   https://www.amd.com/en/corporate-product-security-bulletin-amd-sb-1003

The above link turns into a "Page not found", at least for me, I believe the correct link is:

https://www.amd.com/en/corporate/product-security/bulletin/amd-sb-1003

HTH

Mit freundlichen Grüßen / Regards

Sven Kieske
Systems Developer


Mittwald CM Service GmbH & Co. KG
Königsberger Straße 4-6
32339 Espelkamp

Tel.: 05772 / 293-900
Fax: 05772 / 293-333

https://www.mittwald.de

Managing directors (Geschäftsführer): Robert Meyer, Florian Jürgens

St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
General partner (Komplementärin): Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Information on data processing in the course of our business activities
pursuant to Art. 13-14 GDPR is available at www.mittwald.de/ds.


--=-0l3DJMvbZ87xKiFC2jeo
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEdKGxKl7rK5iwbpcWxvL1MwMBtBEFAmDBx5MACgkQxvL1MwMB
tBESWRAA1dpFqCNLMZBg07fdyqYg4VJ2uM+jRKRrPFBLev0QNYF/VRjs7vtO/R/4
/gHlFoOX4uxaMO0GFlPgQ8bZ6qKmNTQ4CDIsLgWvIPf4Ldy0p3Qs92AQxyEGLHpL
IFc1qNdn07koYU+KxfPKNMFH1TZAqrU+hGZec68LtyhVBSpYKby/co2iZ6bH/88l
6Houy2etNAKkztiEUEEKBPPXTFG8v3AqEXq/mwfOUYU9IfuRPcaxcyvTmKdpwLun
24kU2fbifpwV7YDP1J/Q97p4YwD6Y2TSPDlxEu0eOlWCLQTs1dHx2UPBCZf85CjK
ZgHewt7oFcnwdWmtLtm3Q8/ALWZJdoERDUTOXds8pjjt84Pn9iiHdhNJE7LE0hnR
XFUha/OjQw9NxIer+K4YPo5bbG9wAY0lkFYPYZ05D3Ebnncmk58VXJUkWo+4E3zW
LL0AO2DdcP4UOZef7zwhtDD6BNlBxJSz+YvGtNn1KoS7JkdPFsD61x+MBWZXeqY1
ke8Pdq8BnZPCq/6ked5iJtxOpwkjQWz9Owvr1lKRQOfyw4G5G7hr7eB/6/eXyTRw
Yj29vqAI0F/FtqqaSahtXYlvneLePX5vgajH/C3sdVLkVQvQBVlwF4eqCQRmkmCo
iG658IfWJvD6DsYTGM50PjXunfaSsWKz8j7QDRr6RaHEu3O7Tdk=
=o3nj
-----END PGP SIGNATURE-----

--=-0l3DJMvbZ87xKiFC2jeo--


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 08:32:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 08:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139671.258259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrG74-0007tk-JV; Thu, 10 Jun 2021 08:32:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139671.258259; Thu, 10 Jun 2021 08:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrG74-0007td-GV; Thu, 10 Jun 2021 08:32:30 +0000
Received: by outflank-mailman (input) for mailman id 139671;
 Thu, 10 Jun 2021 08:32:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5JI=LE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrG72-0007tX-UJ
 for xen-devel@lists.xen.org; Thu, 10 Jun 2021 08:32:28 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0baeeb6-9019-404c-ba2f-39e620e57642;
 Thu, 10 Jun 2021 08:32:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0baeeb6-9019-404c-ba2f-39e620e57642
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623313947;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DZYjvCSVjV+s3xLPFinTOyVZZGZtpPehOb6pKmfXD6k=;
  b=T80/ZhIe9uT7ocm+ZIhD2FW7l1B27T5Q+fJw5e7w8k5LSk9lHjU7xB5y
   sGi6Wnsras3e+IIGjndq85qHW7BS5EMMe2vJGivKH5chqWgvTUT5WF/8C
   /kiMHmSrm0hXJBVBsetuqnNrq+QBECOyttQZSagu5zZTWHYtqVvPymO7f
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: iK8+YVcPcC63ACzXlTh1AnQIiIZiO1ogszwcwViBMaOyyPobOKgop9hH+TDH6+2svL3voYmdme
 tLOUsVgjYFIlqL4LCGLOW/J4dmkF1NfSpiFUAl86t4QzccR67aIBMQsi/qOf2zs3AluEyTUg/2
 sNxBAN2hsKkmLLtvQC0uoSW1pmEo9smv44FWRqhRxbE0eHg1QsHXbfnx6likw5/85U6P16HTGy
 yOtdoS14Ul1quH4afRsWDa+5fAN2o9eBQUQVCIxdPeYQg9Jrql3H90PnHGM++sGmP2bkivGQ6F
 cz8=
X-SBRS: 5.1
X-MesageID: 47388255
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:n1xH866NCcCA4JKldQPXwSOBI+orL9Y04lQ7vn2ZFiY7TiXIra
 yTdaoguCMc6AxxZJkh8erwXJVoMkmsiqKdhrNhQYtKPTOWxVdASbsN0WKM+UyZJ8STzJ866U
 4kSdkFNDSSNykIsS+Z2njALz9I+rDum8rJ9ISuukuFDzsaDJ2Ihz0JejpzeXcGJjWua6BJca
 Z0qvA33AZJLh8sH7WG7zQ+LqT+juyOsKijTQ8NBhYh5gXLpTS06ITiGxzd+hsFSTtAzZor7G
 CAymXCl+uemsD+7iWZ+37Y7pxQltek4txfBPaUgsxQDjn3kA6naKloRrXHljEop+OE7kosjb
 D30lgdFvU2z0mUUnC+oBPr1QWl+i0p8WXexViRhmamidDlRRohYvAxxr5xQ1/80Q4Nrdt82K
 VE0yayrJxMFy7Nmyz7+pzhSwxqrEypunAv+NRjz0C3abFuLYO5kLZvuH+8SPw7bWXHAcEcYa
 hT5fjnlbRrmQjwVQGegoEHq+bcLEjaHX+9MwM/U4KuomFrdUtCvjwlLfok7z89HaIGOu15Dt
 v/Q9JVfZF1P4UrhPFGdao8qfXeMB2FffuaChPtHb2gLtBeB07w
X-IronPort-AV: E=Sophos;i="5.83,263,1616472000"; 
   d="scan'208";a="47388255"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jxII/NPrBZMvKlsJeLI/OYEIkukn2NT4EIBAvnc/MCDlWXAft2807nen7w72wx1QMmF6iHwIGjs6yEgUzPsArYinKndLqCraXtdxUq+kofAJZi3hc80PPMvcA89m9G5mnLJKtO49ziCrGlvTeQosrsP2QfJYm+KoTYRN5/ofqh+nWw6LEBRQYl7et2QUFJJsTbDvBWNJXhVnKy6nHwANVcbSJlmsGcTOsia04XkrBmwC4OdNHTbCN61lfa3ZPL/lchnB35y7e+/aa9cY9jnu4Qw4GNSq9fKmWb1teY1URPA1M/S9hGH/wrOQxSBFt9WR0Golsyr9XznylSu/J4tRpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R7sciI+MaQ0nI60b66CR3W8s+XfH9h39EKAtUGc0uBM=;
 b=Sf1vRek5h4PsDnx77EfZv6aAph+UWI8wO7F0U++01DyjQaygH4c2y00OTTvmdp3HHR4noyMFLy3VvfvMhFVexTao5eJ0Ycyby+lflp8blNEcIK1aKCdhz8Ii/aqzSB/7tuD9Buq9VoalWtFXiaaPwDnn8ogtMebFRRHWeBfnl0Z8Q8F/jPgpNpmvtNheOetyj404FdyJunxIcmWpH+lkYmyGTn4iNUfPUoNwj8JtcsRVN0qYzpRGLAwEHP9Q4LfJH3DN+lX3pf8aLJHNAM+B7eWezJAEFlKI0T4ZiEOvSMcPGc0dDhg1VoJCpzr7qhJU0kbUP8y+ykHR0lU7kQIaXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R7sciI+MaQ0nI60b66CR3W8s+XfH9h39EKAtUGc0uBM=;
 b=Lmx94FssPObZPCiTVWRceQaQMTn+cBvQz7/czCgFlCvhwoEfLb+LcNp4teRC09SM/mK5eSAC+gbNetz5PgvyEE64iUeIdtrdRWdQ1Fkps0UK3VhcLPxgIcCVNeu8q6pvMkMnK9oBhRrSndqYDSfUfsPYG8c3aG2N1Shfj03sIpY=
Subject: Re: [oss-security] Xen Security Advisory 375 v3
 (CVE-2021-0089,CVE-2021-26313) - Speculative Code Store Bypass
To: Sven Kieske <S.Kieske@mittwald.de>, "xen-announce@lists.xen.org"
	<xen-announce@lists.xen.org>, "oss-security@lists.openwall.com"
	<oss-security@lists.openwall.com>, "xen-users@lists.xen.org"
	<xen-users@lists.xen.org>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
CC: "security-team-members@xen.org" <security-team-members@xen.org>
References: <E1lqybO-0000fZ-5i@xenbits.xenproject.org>
 <90160ae63614ca1098c87f5c60002b9a35e922ef.camel@mittwald.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0f39a2b1-9e82-f909-8d5b-4c74ef6b5535@citrix.com>
Date: Thu, 10 Jun 2021 09:32:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <90160ae63614ca1098c87f5c60002b9a35e922ef.camel@mittwald.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0061.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 20472282-6dc0-48ad-cd8c-08d92bea43bf
X-MS-TrafficTypeDiagnostic: BYAPR03MB4423:
X-Microsoft-Antispam-PRVS: <BYAPR03MB44239750B3B043DE02EC6844BA359@BYAPR03MB4423.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: hkgXXx86r+O3Gh5m12hGN+MkeyA8RgHRWo661jHofpGJXE6WywEpKIphHXQg0CCRLyRCUDNkfNZ0UCIDGIatV9LwFL8dbFVgpPRD2y24RkcR1+xj6fjYe+VgMxMEUupo8WN3AaHYNwQ4B32HTVCfkphAA5V4wt0PET4EGVCIABFoUKfRI/eE4tD+hl6KErMMeLZH4BbXKFHul9ud2rH/ND7XZuypLEPIGaVeLem5Go9lGi6YPA4U/VRyGNka9PVrCRwPA1w5C9iPTrbh/uYsP57LpgzGG0MuoXQcj4hd+crQjssUXoRSi7lClvnEu0MUnSqhuKyIaA3Rr3SiKD79E2S1q4rdlz6byStEtPyaWSHUZjpsGiiOncERc0VV09ysl6qbPoowh+VU9nRqgaFUGCwvWfSpFIIrscKOaOW7A6VDxJ1fSI6nDee7rG+/XVpTAWrCTBrkZpVkh8uyjgMJAN9Tka+kwg8Wd+F4WbuzfuziU2lR44LvJhjEdJnlUtvpaZjwsnyYLCFiYAy0SKv9PqBbw5Od4JhDcmHW1J5OfCSLthMGXTnJhf4R8u3b9uQCWc6LasWyMGEWwXqQzhtcMRI+RTJVqoW9mYm8iD/xanCdYPoDRehkbvWFoMOt7cSveemN3WtiE5Ky/M3wgl9nXFuZjAmBgkRD49zid/WNYiC0B+BXnufZQ+I2/OFegycUb224qTjUN1ovBjnt+6FF0QYFkFcTZkDsRBoTSU1zDcMmS2MQCMwsKtOwgabeLHDxLkwxKatgJ4zBjwzlqWcrE9UB8q0YGSGzK72FRUa/KuZEt1bCPejLO7Gp0xzvpC4Gg+pynGFh4B0SMTyHCaJIqlRxTtakCfC1Eu2cZhqb2Is=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(366004)(376002)(346002)(396003)(478600001)(2906002)(6666004)(4326008)(16526019)(31686004)(6486002)(36756003)(186003)(66946007)(2616005)(53546011)(5660300002)(26005)(31696002)(86362001)(8936002)(966005)(83380400001)(4744005)(66556008)(66476007)(8676002)(16576012)(316002)(110136005)(38100700002)(15650500001)(956004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?cERSM2c3SGs1TkQ1MmU4Rks3TmYzaUswZHF6T2xqNjk5dVZjZVNvbTFSQzBI?=
 =?utf-8?B?dDVEdkJJbTBkVVdrSVdXTU94MHhUZnFBUENSSXNVaGkyYjVFY3dnZzd4TUZF?=
 =?utf-8?B?cFVtck9tWmFyNFp3MTB5ZnJGdnliOXQ0NEI3M1YxUHIycUhTMFBhYmlDbmFN?=
 =?utf-8?B?eUJpb0dObkt3RzEvYXFrbkFsVlRhcURzcVJLck1mQTZKZ0syMmpDQnMrR3dC?=
 =?utf-8?B?elJoMDNXaXpMRnFtRHBwYisxdG5GUml6N2ZpWTBVQWwvZEo1MjhTQnpIc3Vu?=
 =?utf-8?B?VDFqaUpwNy91ODNLZ1ZiTHR0TlYwb1hYblpZd3hlSjZ1eGI5SW9XMkQwQWJM?=
 =?utf-8?B?UlVDQmxkQ3hKRVRFTDE0S2lYVk1OdS93RDZLV1k0MkZVeW56VGZZd1VPMjJz?=
 =?utf-8?B?THNVd0lINmlzVE9xTDNHcUNsUjV1eWNWdWFPbk5kR3VCVnRMd1pnb3JtY0tN?=
 =?utf-8?B?TWh3ZXJJUGFKYk9BbmNuVTlZMGJ3aVBaUmRGa3lIVWEzR0c4YlQ1SlZrSUkz?=
 =?utf-8?B?U2ZXd01zUDlSTHNHQ1NRSDduZDV4VmE3bGtUS2p6UGNIelVwM1JTMkZ0b0VF?=
 =?utf-8?B?ZXVLWVN3dUVmdTRuV3BOM0puRFJESS9pRVZxVFBjaHlHd1J3M1Rwakc4OTRn?=
 =?utf-8?B?Q2REM0diY09lZjdoM3RlQzExellHVm5LWHh6Nk5FMVJJK0ZKeWgyaDZvTzFF?=
 =?utf-8?B?UXlFNEF5MkJWQlF0SGY0WStaREpFa1U3RzJpNXJibGVUY1NLMERxMkZOcGY3?=
 =?utf-8?B?MG5EMHAybmVhcSsxME9GV2Nsak55MjJlak16alFERmV6Y2xuSzJBWlBQMHk2?=
 =?utf-8?B?MHFrS0dlQjRTZEdoVmxXNWxCUEo3cWpEWjA3T0ZLQWJLbG5FN1RuWjBiRFp1?=
 =?utf-8?B?eXFqN2lkN2kyYWZnYndmSDQxT1BTK2VGeWdCOS9LWWRNQ1BFWlRNRmhEN0U5?=
 =?utf-8?B?RTVGODg0cFJQTnFoRmh5Tm5Ldm9SaER0MXcvazErc3N2YnYxbWxYem1BQXBZ?=
 =?utf-8?B?eHNKOUhNUFd1R3RwMDl0cVRpYm9jdDUwdkYvZHY2MGdmY01ja1hvaTJEZDQv?=
 =?utf-8?B?OE1nczM4OHZBNlVRTEVuTnBxTjJPdkxFNnNaUWtTT0ZUR011cjZvaURNOVdu?=
 =?utf-8?B?cUdpdkFNMlVRUWxlSzNIQkN6dUtNY2RWY25tZnZDSnRTclFwdlM0eFM2aTlx?=
 =?utf-8?B?TDhWM3pid21FamcxSWpqcVBsUDRYQi8xcktrakJ2dXFSVEE2QlVIaElaUGJj?=
 =?utf-8?B?emxHN2Q2bGpOcm4xOGE4SVUzTGlTVkdZUkx0Zk5KMFFNY0RNNmZsWHpnZXNh?=
 =?utf-8?B?Sms4SEV2UFllV3NwQ2FqbjJyMUwzb3ExU2RiMDJqNEUrOFg0MGNkaXJSblJ4?=
 =?utf-8?B?czNCWldqdmRzRmxWeXh1RFRhS3NzODVwK3QyMy9xbkhTQnpnOWpRT1ZZV2xo?=
 =?utf-8?B?ZW1aMHFIVDNBMWdEaUVIY2xObHhCUnF3bTRidFFmSW9sT0Q3OWN0R0hwWjZZ?=
 =?utf-8?B?aVpXdk4xdzAzZC95ZnlsOTVqTnZLQkN4U0trUEZFL3g3WUxjS1dJRzYxWnpm?=
 =?utf-8?B?VzhndzJpNkJVajlrVUxVMXpuK2daajh5OER6MUVkYkQrUm5SbDlpWUtmRU0w?=
 =?utf-8?B?SDdmZVF4MEZGUDNicVhCeHMreGpvVDFWNVZQLzVGeVBNb1VMRnAzcmJ4V3N4?=
 =?utf-8?B?VW9CWXFicUJKOEE1WDMwcG50azR3aFhxbE5MTjZEVm9xSUw0dm1hUmVSV0My?=
 =?utf-8?Q?DHM9jel4n9OGAnwULgifD0VFO1lQWkMCCu9EhgJ?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 20472282-6dc0-48ad-cd8c-08d92bea43bf
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 08:32:20.9840
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ci2DujcDDRagq1oknXZ89DVzPs2k4xjTzrUvkQ04Q0WguFR5eXgSC9TgUaZ2BdemFWMTTYkaVDMrc8pAQoeqeTDGQ90XPhJh9X3PkhePHhQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4423
X-OriginatorOrg: citrix.com

On 10/06/2021 09:04, Sven Kieske wrote:
> On Mi, 2021-06-09 at 13:50 +0000, Xen.org security team wrote:
>> For more details, see:
> [..]
>>   https://www.amd.com/en/corporate-product-security-bulletin-amd-sb-1003
> The above link turns into a "Page not found", at least for me, I believe the correct link is:
>
> https://www.amd.com/en/corporate/product-security/bulletin/amd-sb-1003

Ah - the link changed, and I thought I'd fixed it.  Clearly not.

Thanks - I'll issue a correction to the XSA.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:07:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139696.258290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrGfC-0003ly-S8; Thu, 10 Jun 2021 09:07:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139696.258290; Thu, 10 Jun 2021 09:07:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrGfC-0003lr-OC; Thu, 10 Jun 2021 09:07:46 +0000
Received: by outflank-mailman (input) for mailman id 139696;
 Thu, 10 Jun 2021 09:07:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrGfB-0003lh-R9; Thu, 10 Jun 2021 09:07:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrGfB-0002if-M2; Thu, 10 Jun 2021 09:07:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrGfB-000613-CV; Thu, 10 Jun 2021 09:07:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrGfB-0004dI-Bz; Thu, 10 Jun 2021 09:07:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rd1448lMcy8mrfTBWed8BiwWu69yvxxnnfKRCfOjevE=; b=B2rRT+WDQHObwcq1mxa1QaBkrt
	Gve3TCrAv4AZB/uBJB4K1qxj2rwo+LHsWGO92wt+pIYCihcfoSEgO3smcU1ehs4iDDUcyEvHKPPAG
	KZIFr1X7JUhnGq9y4SSDJjcmtGbcIduGH7osDc801MomuxtbXS4/mH3Q/hZGoCRRYnhs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162563-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162563: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=07dc1ac9d29e40ac959c948dcc87923687016291
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 09:07:45 +0000

flight 162563 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162563/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              07dc1ac9d29e40ac959c948dcc87923687016291
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  335 days
Failing since        151818  2020-07-11 04:18:52 Z  334 days  326 attempts
Testing same since   162563  2021-06-09 04:18:56 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60756 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:17:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:17:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139707.258321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrGo5-0005jD-G5; Thu, 10 Jun 2021 09:16:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139707.258321; Thu, 10 Jun 2021 09:16:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrGo5-0005j4-Cx; Thu, 10 Jun 2021 09:16:57 +0000
Received: by outflank-mailman (input) for mailman id 139707;
 Thu, 10 Jun 2021 09:16:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xmrX=LE=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1lrGo4-0005Nu-65
 for xen-devel@lists.xen.org; Thu, 10 Jun 2021 09:16:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6956cd60-6143-44ef-af56-aae603ecec05;
 Thu, 10 Jun 2021 09:16:48 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lrGnq-0002tN-Bw; Thu, 10 Jun 2021 09:16:42 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lrGnq-0003by-9y; Thu, 10 Jun 2021 09:16:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6956cd60-6143-44ef-af56-aae603ecec05
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=HQQ0/VpUTzU0PMDoFxBU+QcP4disNR968M6tzreckD4=; b=JsASxyBKXUdHWhP0JElyswO8DO
	Pss+lXRxLxEMuGWuEkzgfKKZmDBbCjnflU4dK/phOuC12hihXpQcFJEZUBwLNjaVFu+XpKUYuLHe7
	P1hV/yc8tc2RgFwUPskPA+QYJhk7bYs1djA+9lxhJUDXZcmgJvTKnrf9+uX8sw8y2nf4=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 375 v4 (CVE-2021-0089,CVE-2021-26313) -
 Speculative Code Store Bypass
Message-Id: <E1lrGnq-0003by-9y@xenbits.xenproject.org>
Date: Thu, 10 Jun 2021 09:16:42 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

      Xen Security Advisory CVE-2021-0089,CVE-2021-26313 / XSA-375
                                version 4

                    Speculative Code Store Bypass

UPDATES IN VERSION 4
====================

Correct the link to the AMD bulletin.

ISSUE DESCRIPTION
=================

Modern superscalar processors may employ sophisticated decoding and
caching of the instruction stream to improve performance.  However, a
consequence is that self-modifying code updates may not take effect
instantly.

Whatever the architectural guarantees, some CPUs have microarchitectural
behaviour whereby the stale instruction stream may be speculatively
decoded and executed.

Speculation of this form can suffer from type confusion in registers,
and potentially leak data.

For more details, see:
  https://www.vusec.net/projects/fpvi-scsb
  https://www.amd.com/en/corporate/product-security/bulletin/amd-sb-1003
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/speculative-code-store-bypass.html
  https://software.intel.com/content/www/us/en/develop/articles/software-security-guidance/advisory-guidance/floating-point-value-injection.html
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#scsb
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#fvpi

IMPACT
======

An attacker might be able to infer the contents of arbitrary host
memory, including memory assigned to other guests.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Whether a CPU is potentially vulnerable depends on its
microarchitecture.  Consult your hardware vendor.

Xen running on ARM does not have runtime self-modifying code, so is
believed not to be vulnerable, irrespective of any hardware
susceptibility.

Xen running on x86 does have runtime self-modifying code as part of
emulation, and is believed to be potentially vulnerable.

Xen is not vulnerable if retpoline or lfence mitigations for Spectre v2
protection are active.  Protections depend on compiler support (as
indicated by INDIRECT_THUNK), and a runtime setting (BTI-Thunk):

  # xl dmesg | grep -e INDIRECT_THUNK -e BTI-Thunk
  (XEN)   Compiled-in support: INDIRECT_THUNK SHADOW_PAGING
  (XEN)   Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: IBRS+ SSBD-, Other: SRB_LOCK+ IBPB L1D_FLUSH VERW BRANCH_HARDEN

BTI-Thunk as either RETPOLINE or LFENCE prevents the vulnerability.

MITIGATION
==========

If Spectre v2 support is compiled in, but JMP is used by default,
RETPOLINE or LFENCE can be selected with `spec-ctrl=bti-thunk=retpoline`
or `spec-ctrl=bti-thunk=lfence`.

CREDITS
=======

This issue was discovered by Enrico Barberis, Hany Ragab, Herbert Bos,
and Cristiano Giuffrida from the VUSec group at VU Amsterdam.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.  Note that
in 4.13 and newer the patch will only take effect when the
SPECULATIVE_HARDEN_BRANCH hypervisor config option is enabled.  4.12 and
older do not have such an option, and the change will take effect
unconditionally.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa375.patch           xen-unstable - 4.14.x
xsa375-4.13.patch      Xen 4.13.x
xsa375-4.12.patch      Xen 4.12.x - 4.11.x

$ sha256sum xsa375*
367d5bb97c942b9f744a57645df87148772c0879de6f351f36f88147f3958e83  xsa375.meta
301ef80da837bc2af36a0958f35f42f4d267b20ec6e91ae5faf2616167ef49f8  xsa375.patch
dc024daf17242b6477a16a349754a94b2b25cbbfd8c14475741b778710a44c93  xsa375-4.12.patch
f70511d843c6617b932da11ffe857e2e3aa3834ccff07d4d0beba90d63a3dae2  xsa375-4.13.patch
$

NOTE CONCERNING CVE-2021-0086 / CVE-2021-26314
==============================================

Floating Point Value Injection (FPVI) was discovered and disclosed in
the same research as SCSB.  Xen on x86 does in some cases emulate
floating point operations with guest provided inputs, but does not have
subsequent control flow dependent on results, transient or otherwise, of
the operation.

Therefore, we believe Xen is not vulnerable to FPVI, irrespective of any
hardware susceptibility.

NOTE CONCERNING MULTIPLE CVES
=============================

Intel and AMD allocated different CVEs for SCSB and FPVI.  We have
included both in this advisory.  The allocations are as follows:

  Issue | Intel         | AMD
  ------+---------------+---------------
  SCSB  | CVE-2021-0089 | CVE-2021-26313
  FPVI  | CVE-2021-0086 | CVE-2021-26314

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmDB2EQMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZtgkIAMJ6zrjSMK/mrnJ8+vRwfaG7hYIOwIa8k18CnIin
DH4LZ1PIyWRqOjgRo+oqgZEIOXFAlEx/ZXHJscf+SaleemA9klsBWpoiyURONchC
4Sz/qUcJnTHXjakw21seaxtYA4FzBtGQ6V/Ccm/3vDxVhDewtbNSJLflq2kZDLv0
nRMJkSajeCml/YPcSQ2y32KE49kQK726H9hzHIMuRA6fDAKCT51bWiyelH405vnR
vanJetUHys1Uye0arqfi7Z9tv0KMKAspgR/ccOGh5g0EvDOTyOo6ZLAOm69wqdfr
AC0IShNIPyk85k1VJBkU8VSsWvasPmbcT9NYWK6HeP6ZdRg=
=T+nf
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa375.meta"
Content-Disposition: attachment; filename="xsa375.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIs
CiAgICAiNC4xMiIsCiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICJiMWU0NmJjMzY5YmI0OTBiNzIxYzc3ZjE1ZDI1ODNiYmY0
NjYxNTJkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NzIsCiAgICAgICAgICAgIDM3MwogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzc1LTQuMTMucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjU5ODQ5MDViMjYzOGRmODdhMDI2MmQxZWU5
MWYwYTZlMTRhODZkZjYiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAg
ICAgICAgIDM3MiwKICAgICAgICAgICAgMzczCiAgICAgICAgICBdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNzUtNC4xMy5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMjg0MTMyOTM4OTAwY2U4YzNi
MTFiYWJmNzI1NWY1YzZkYmIyMTcxNiIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFsKICAgICAgICAgICAgMzcyLAogICAgICAgICAgICAzNzMKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM3
NS00LjEzLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGYwYjJkNDkz
NzY4NjVkNDk2ODBmMDZjNTJiNDUxZmFiY2UzYmI1IiwKICAgICAgICAgICJQ
cmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3Mwog
ICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAg
ICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyODBkNDcy
ZjRmY2EwNzBhMTAzNzdlMzE4ZDkwY2FiZmMyNTQwODEwIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNzIsCiAgICAgICAgICAgIDM3
MwogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzc1LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImFh
NzdhY2MyODA5OGQwNDk0NWFmOTk4ZjNmYzBkYmQzNzU5YjViNDEiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM3MiwKICAgICAgICAg
ICAgMzczCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNzUucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa375.patch"
Content-Disposition: attachment; filename="xsa375.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDg4ODk1MDlkMmEuLjExNDY3YTFlM2EgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTEzOCw2ICsxMzgs
OCBAQCBzdGF0aWMgaW9fZW11bF9zdHViX3QgKmlvX2VtdWxfc3R1Yl9zZXR1
cChzdHJ1Y3QgcHJpdl9vcF9jdHh0ICpjdHh0LCB1OCBvcGNvZGUsCiAgICAg
LyogUnVudGltZSBjb25maXJtYXRpb24gdGhhdCB3ZSBoYXZlbid0IGNsb2Ji
ZXJlZCBhbiBhZGphY2VudCBzdHViLiAqLwogICAgIEJVR19PTihTVFVCX0JV
Rl9TSVpFIC8gMiA8IChwIC0gY3R4dC0+aW9fZW11bF9zdHViKSk7CiAKKyAg
ICBibG9ja19zcGVjdWxhdGlvbigpOyAvKiBTQ1NCICovCisKICAgICAvKiBI
YW5keSBmdW5jdGlvbi10eXBlZCBwb2ludGVyIHRvIHRoZSBzdHViLiAqLwog
ICAgIHJldHVybiAodm9pZCAqKXN0dWJfdmE7CiAKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIveGVuL2Fy
Y2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXggYzI1ZDg4
ZDBkOC4uZjQyZmYyYTgzNyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gveDg2L3g4
Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTEyNTcsNiArMTI1Nyw3IEBA
IHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQzMl90IGVj
LCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3QsIGNvbnN0
cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAgICAgc3R1
Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tlbikgeyAu
cmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGluZSA9IF9f
TElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hpbmcgY29z
dCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NTQiAqLyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgYXNt
IHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0dWJdXG5c
dCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5M
cmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlvbiAuZml4
dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa375-4.12.patch"
Content-Disposition: attachment; filename="xsa375-4.12.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGFzbSB2b2xhdGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiAp
OyAvKiBTQ1NCICovCisKICAgICAvKiBIYW5keSBmdW5jdGlvbi10eXBlZCBw
b2ludGVyIHRvIHRoZSBzdHViLiAqLwogICAgIHJldHVybiAodm9pZCAqKXN0
dWJfdmE7CiB9CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2X2VtdWxh
dGUveDg2X2VtdWxhdGUuYyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94
ODZfZW11bGF0ZS5jCmluZGV4IGJiYTZkZDAxODcuLmNkMTIzNDkyYTYgMTAw
NjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0
ZS5jCkBAIC0xMDkzLDYgKzEwOTMsNyBAQCBzdGF0aWMgaW5saW5lIGludCBt
a2VjKHVpbnQ4X3QgZSwgaW50MzJfdCBlYywgLi4uKQogIyBkZWZpbmUgaW52
b2tlX3N0dWIocHJlLCBwb3N0LCBjb25zdHJhaW50cy4uLikgZG8geyAgICAg
ICAgICAgICAgICAgICAgXAogICAgIHN0dWJfZXhuLmluZm8gPSAodW5pb24g
c3R1Yl9leGNlcHRpb25fdG9rZW4pIHsgLnJhdyA9IH4wIH07ICAgICAgICAg
XAogICAgIHN0dWJfZXhuLmxpbmUgPSBfX0xJTkVfXzsgLyogVXRpbGl0eSBv
dXR3ZWlnaHMgbGl2ZXBhdGNoaW5nIGNvc3QgKi8gXAorICAgIGFzbSB2b2xh
dGlsZSAoICJsZmVuY2UiIDo6OiAibWVtb3J5IiApOyAvKiBTQ1NCICovICAg
ICAgICAgICAgICAgICAgXAogICAgIGFzbSB2b2xhdGlsZSAoIHByZSAiXG5c
dElORElSRUNUX0NBTEwgJVtzdHViXVxuXHQiIHBvc3QgIlxuIiAgICAgICAg
XAogICAgICAgICAgICAgICAgICAgICIuTHJldCU9OlxuXHQiICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAg
ICAgICAgICIucHVzaHNlY3Rpb24gLmZpeHVwLFwiYXhcIlxuIiAgICAgICAg
ICAgICAgICAgICAgICAgXAo=

--=separator
Content-Type: application/octet-stream; name="xsa375-4.13.patch"
Content-Disposition: attachment; filename="xsa375-4.13.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogUHJvdGVjdCBhZ2FpbnN0IFNw
ZWN1bGF0aXZlIENvZGUgU3RvcmUgQnlwYXNzCgpNb2Rlcm4geDg2IHByb2Nl
c3NvcnMgaGF2ZSBmYXItYmV0dGVyLXRoYW4tYXJjaGl0ZWN0dXJhbGx5LWd1
YXJhbnRlZWQgc2VsZgptb2RpZnlpbmcgY29kZSBkZXRlY3Rpb24uICBUeXBp
Y2FsbHksIHdoZW4gYSB3cml0ZSBoaXRzIGFuIGluc3RydWN0aW9uIGluCmZs
aWdodCwgYSBNYWNoaW5lIENsZWFyIG9jY3VycyB0byBmbHVzaCBzdGFsZSBj
b250ZW50IGluIHRoZSBmcm9udGVuZCBhbmQKYmFja2VuZC4KCkZvciBzZWxm
IG1vZGlmeWluZyBjb2RlLCBiZWZvcmUgYSB3cml0ZSB3aGljaCBoaXRzIGFu
IGluc3RydWN0aW9uIGluIGZsaWdodApyZXRpcmVzLCB0aGUgZnJvbnRlbmQg
Y2FuIHNwZWN1bGF0aXZlbHkgZGVjb2RlIGFuZCBleGVjdXRlIHRoZSBvbGQg
aW5zdHJ1Y3Rpb24Kc3RyZWFtLiAgU3BlY3VsYXRpb24gb2YgdGhpcyBmb3Jt
IGNhbiBzdWZmZXIgZnJvbSB0eXBlIGNvbmZ1c2lvbiBpbiByZWdpc3RlcnMs
CmFuZCBwb3RlbnRpYWxseSBsZWFrIGRhdGEuCgpGdXJ0aGVybW9yZSwgdXBk
YXRlcyBhcmUgdHlwaWNhbGx5IGJ5dGUtd2lzZSwgcmF0aGVyIHRoYW4gYXRv
bWljLiAgRGVwZW5kaW5nCm9uIHRpbWluZywgc3BlY3VsYXRpb24gY2FuIHJh
Y2UgYWhlYWQgbXVsdGlwbGUgdGltZXMgYmV0d2VlbiBpbmRpdmlkdWFsCndy
aXRlcywgYW5kIGV4ZWN1dGUgdGhlIHRyYW5zaWVudGx5LW1hbGZvcm1lZCBp
bnN0cnVjdGlvbiBzdHJlYW0uCgpYZW4gaGFzIHN0dWJzIHdoaWNoIGFyZSB1
c2VkIGluIGNlcnRhaW4gY2FzZXMgZm9yIGVtdWxhdGlvbiBwdXJwb3Nlcy4g
IEluaGliaXQKc3BlY3VsYXRpb24gYmV0d2VlbiB1cGRhdGluZyB0aGUgc3R1
YiBhbmQgZXhlY3V0aW5nIGl0LgoKVGhpcyBpcyBYU0EtMzc1IC8gQ1ZFLTIw
MjEtMDA4OS4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJl
dy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L3B2L2VtdWwtcHJpdi1vcC5jIGIveGVuL2FyY2gveDg2L3B2L2VtdWwt
cHJpdi1vcC5jCmluZGV4IDZkYzRmOTJhODQuLjU5YzE1Y2EwZTcgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYworKysgYi94
ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKQEAgLTk3LDYgKzk3LDgg
QEAgc3RhdGljIGlvX2VtdWxfc3R1Yl90ICppb19lbXVsX3N0dWJfc2V0dXAo
c3RydWN0IHByaXZfb3BfY3R4dCAqY3R4dCwgdTggb3Bjb2RlLAogICAgIEJV
SUxEX0JVR19PTihTVFVCX0JVRl9TSVpFIC8gMiA8IE1BWCg5LCAvKiBEZWZh
dWx0IGVtdWwgc3R1YiAqLwogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA1ICsgSU9FTVVMX1FVSVJLX1NUVUJfQllURVMpKTsK
IAorICAgIGJsb2NrX3NwZWN1bGF0aW9uKCk7IC8qIFNDU0IgKi8KKwogICAg
IC8qIEhhbmR5IGZ1bmN0aW9uLXR5cGVkIHBvaW50ZXIgdG8gdGhlIHN0dWIu
ICovCiAgICAgcmV0dXJuICh2b2lkICopc3R1Yl92YTsKIH0KZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni94ODZfZW11bGF0ZS94ODZfZW11bGF0ZS5jIGIv
eGVuL2FyY2gveDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKaW5kZXgg
YmJhNmRkMDE4Ny4uY2QxMjM0OTJhNiAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKKysrIGIveGVuL2FyY2gv
eDg2L3g4Nl9lbXVsYXRlL3g4Nl9lbXVsYXRlLmMKQEAgLTExNzIsNiArMTE3
Miw3IEBAIHN0YXRpYyBpbmxpbmUgaW50IG1rZWModWludDhfdCBlLCBpbnQz
Ml90IGVjLCAuLi4pCiAjIGRlZmluZSBpbnZva2Vfc3R1YihwcmUsIHBvc3Qs
IGNvbnN0cmFpbnRzLi4uKSBkbyB7ICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgc3R1Yl9leG4uaW5mbyA9ICh1bmlvbiBzdHViX2V4Y2VwdGlvbl90b2tl
bikgeyAucmF3ID0gfjAgfTsgICAgICAgICBcCiAgICAgc3R1Yl9leG4ubGlu
ZSA9IF9fTElORV9fOyAvKiBVdGlsaXR5IG91dHdlaWdocyBsaXZlcGF0Y2hp
bmcgY29zdCAqLyBcCisgICAgYmxvY2tfc3BlY3VsYXRpb24oKTsgLyogU0NT
QiAqLyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgYXNtIHZvbGF0aWxlICggcHJlICJcblx0SU5ESVJFQ1RfQ0FMTCAlW3N0
dWJdXG5cdCIgcG9zdCAiXG4iICAgICAgICBcCiAgICAgICAgICAgICAgICAg
ICAgIi5McmV0JT06XG5cdCIgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgIi5wdXNoc2VjdGlv
biAuZml4dXAsXCJheFwiXG4iICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:36:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:36:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139757.258352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrH6G-0001JG-HX; Thu, 10 Jun 2021 09:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139757.258352; Thu, 10 Jun 2021 09:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrH6G-0001J9-EY; Thu, 10 Jun 2021 09:35:44 +0000
Received: by outflank-mailman (input) for mailman id 139757;
 Thu, 10 Jun 2021 09:35:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrH6E-0001J3-MC
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 09:35:42 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9378f33-d42d-46ba-b2fb-890e13ecbc79;
 Thu, 10 Jun 2021 09:35:41 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2169.outbound.protection.outlook.com [104.47.17.169])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-N0koyGfsM4mAvccMGC5_eg-1; Thu, 10 Jun 2021 11:35:39 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4352.eurprd04.prod.outlook.com (2603:10a6:803:4a::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Thu, 10 Jun
 2021 09:35:37 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 09:35:37 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3PR09CA0029.eurprd09.prod.outlook.com (2603:10a6:102:b7::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Thu, 10 Jun 2021 09:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9378f33-d42d-46ba-b2fb-890e13ecbc79
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623317740;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l53C/sQpvdxNO+AMMq7KVDzxPzCP0wDnMDSJ26C6Xf4=;
	b=EHKL3vglV9hVjjrq+yCAUsXp6PFy9KLp/LCwwHVdJUNjZqAp9gRjtjSVH1UVy+la7WIuJH
	PcC19RcwD7veh+X3GUEuc/zixHUCfZ+JKK0iJuqx+zauiD5nDfaEdm2F16S65LXdnVwHqY
	YX9Qa4Ig/aG20koLiRhKjCWFGbiI9fc=
X-MC-Unique: N0koyGfsM4mAvccMGC5_eg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZEIpSin8N5c1v34GMyfFkDrOgOFwboXOsQB0zKI0LZpyYm5KEfx3Z5mMbpWJ9ZnnvL7DgUtfrdraGJqDGCXgnyBlrpagepPaP41kH681RdOnVg8ok113JdvBASb2j30nJXBA5RxAxKOYlKCivuI7Btw1Y2yzvRN39L+8yoj5UiJh4TIbCUMWsGnPaJnsSmgU8HPhPnWtCD7icrBKNWwkd++d0327u+GoJnzXOe5G0v40ovh55rDwCob9WTvspHMov/+MMv4cD2qv/A6uRs2R9cmAxiHrvzRZCtzUJ+TKpmriSHATYFD0P2e4/8ToQl3HADny2gP0yCmS9d4kStgRaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l53C/sQpvdxNO+AMMq7KVDzxPzCP0wDnMDSJ26C6Xf4=;
 b=oL/MLi06sJ/J9Oe386wpe5Rg23Dcyc1mpZEghbUlCqvn2Kb27PLmfL9D/c5d4VQUYWewfD7IgmpYgED1ARX7dIMBnaihoE7N828UaVBr2wZUr7JLAxOc8n42giBp00JMqhFzbA/WmafjHNsPzI1tcqLjWejo8zQ+1f5pdWt1YPCIeSN8JIU9Vi472IRB3W2tYX9jLkzy7dABbmA8DFtEPhD18EbyVWTDELaLlY2oeIM7c51Co0M4Qzu43CrZnTeafHrTaEGIqgur4cae5SYALUY5QLfwlgxCLDEW+111JqBSYUeK3vrAcuVddut7StmdjcpFaoMZrGpQnTKkY0CEoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 4/9] xen/arm: static memory initialization
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-5-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e0a312a1-f430-3ff0-6dd6-fcfe18e58071@suse.com>
Date: Thu, 10 Jun 2021 11:35:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607024318.3988467-5-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3PR09CA0029.eurprd09.prod.outlook.com
 (2603:10a6:102:b7::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 07.06.2021 04:43, Penny Zheng wrote:
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -611,6 +611,30 @@ static void __init init_pdx(void)
>      }
>  }
>  
> +/* Static memory initialization */
> +static void __init init_staticmem_pages(void)
> +{
> +    int bank;

While I'm not a maintainer of this code, I'd still like to point out
that wherever possible we prefer "unsigned int" when dealing with
only non-negative values, and even more so when using them as array
indexes.
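Illustratively (a toy stand-alone example, not Xen code), the preferred shape
is something like:

```c
/* Toy example: walking an array of bank sizes with an unsigned index,
 * as preferred for values which can never be negative. */
static unsigned long sum_bank_sizes(const unsigned long *sizes,
                                    unsigned int nr_banks)
{
    unsigned long total = 0;
    unsigned int bank;   /* "unsigned int", not plain "int" */

    for ( bank = 0; bank < nr_banks; bank++ )
        total += sizes[bank];

    return total;
}
```

Besides matching style, an unsigned index avoids signed/unsigned comparison
surprises against unsigned bounds like nr_banks.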

> +    /*
> +     * TODO: Considering NUMA-support scenario.
> +     */

Nit: Comment style.

> @@ -872,6 +896,9 @@ void __init start_xen(unsigned long boot_phys_offset,
>      cmdline_parse(cmdline);
>  
>      setup_mm();
> +    /* If exists, Static Memory Initialization. */
> +    if ( bootinfo.static_mem.nr_banks > 0 )
> +        init_staticmem_pages();

I don't think the conditional is really needed here?
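With the loop inside bounded by nr_banks, calling the function with zero
banks is already a no-op, as this much-simplified stand-in illustrates:

```c
/* Simplified stand-in for init_staticmem_pages(): the loop bound makes
 * a zero-bank call do nothing, so a caller-side "nr_banks > 0" guard
 * adds nothing. */
static unsigned int process_banks(unsigned int nr_banks)
{
    unsigned int bank, processed = 0;

    for ( bank = 0; bank < nr_banks; bank++ )
        processed++;

    return processed;
}
```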

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1376,6 +1376,37 @@ bool scrub_free_pages(void)
>      return node_to_scrub(false) != NUMA_NO_NODE;
>  }
>  
> +static void free_page(struct page_info *pg, bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);

With pdx compression this is a non-trivial conversion. The function
being an internal helper and the caller already holding the MFN, I
think it would be preferable if the MFN was passed in here. If done
this way, you may want to consider adding an ASSERT() to double
check both passed in arguments match up.
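To illustrate the suggested shape (a hypothetical, much-simplified sketch,
where page_to_mfn() is just an array index standing in for the real, costlier
pdx-compressed conversion):

```c
#include <assert.h>

/* Toy frame table; names only mirror the Xen ones. */
struct page_info { unsigned long count_info; };

static struct page_info frame_table[16];

static unsigned long toy_page_to_mfn(const struct page_info *pg)
{
    return (unsigned long)(pg - frame_table);
}

/* The caller already holds the MFN, so pass it in, and double check
 * that both arguments match up. */
static void mark_page_free(struct page_info *pg, unsigned long mfn)
{
    assert(toy_page_to_mfn(pg) == mfn);
    pg->count_info = 0;   /* "free" in this toy model */
}
```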

> +    /* If a page has no owner it will need no safety TLB flush. */
> +    pg->u.free.need_tlbflush = (page_get_owner(pg) != NULL);
> +    if ( pg->u.free.need_tlbflush )
> +        page_set_tlbflush_timestamp(pg);
> +
> +    /* This page is not a guest frame any more. */
> +    page_set_owner(pg, NULL); /* set_gpfn_from_mfn snoops pg owner */
> +    set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +
> +#ifdef CONFIG_ARM

If avoidable, there should be no arch-specific code added to this
file. Assuming another arch gained PGC_reserved, what's wrong with
enabling this code right away for them as well? I.e. use
PGC_reserved here instead of CONFIG_ARM? Alternatively this may
want to be CONFIG_STATIC_ALLOCATION, assuming we consider
PGC_reserved tied to it.

> +    if ( pg->count_info & PGC_reserved )
> +    {
> +        /* TODO: asynchronous scrubbing. */
> +        if ( need_scrub )
> +            scrub_one_page(pg);
> +        return;
> +    }
> +#endif
> +    if ( need_scrub )

Nit: Please have a blank line between these last two.

> +    {
> +        pg->count_info |= PGC_need_scrub;
> +        poison_one_page(pg);
> +    }
> +
> +    return;

Please omit return statements at the end of functions returning void.

> +}

On the whole, bike shedding or not, I'm afraid the function's name
doesn't match what it does: There's no freeing of a page here. What
gets done is marking of a page as free. Hence maybe mark_page_free()
or mark_free_page() or some such?

> @@ -1512,6 +1530,38 @@ static void free_heap_pages(
>      spin_unlock(&heap_lock);
>  }
>  
> +#ifdef CONFIG_STATIC_ALLOCATION
> +/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
> +void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    unsigned long i;
> +
> +    for ( i = 0; i < nr_mfns; i++ )
> +    {
> +        switch ( pg[i].count_info & PGC_state )
> +        {
> +        case PGC_state_inuse:
> +            BUG_ON(pg[i].count_info & PGC_broken);
> +            /* Mark it free and reserved. */
> +            pg[i].count_info = PGC_state_free | PGC_reserved;
> +            break;
> +
> +        default:
> +            printk(XENLOG_ERR
> +                   "Page state shall be only in PGC_state_inuse. "

Why? A page (static or not) can become broken while in use. IOW I
don't think you can avoid handling PGC_state_offlining here. At which
point this code will match free_heap_pages()'s, and hence will likely
want folding as well.
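For illustration, a toy model (values and names illustrative only, not Xen's
PGC_* encoding) of the state handling free_heap_pages() performs, which the
static-memory variant would share once folded:

```c
/* Toy page states: an in-use page becomes free on freeing, while a
 * page caught mid-offlining becomes offlined; any other input state
 * is unexpected. */
enum page_state {
    state_inuse,
    state_offlining,
    state_free,
    state_offlined,
};

/* Returns 1 and sets *out on a valid transition, 0 otherwise. */
static int transition_on_free(enum page_state in, enum page_state *out)
{
    switch ( in )
    {
    case state_inuse:
        *out = state_free;
        return 1;

    case state_offlining:
        *out = state_offlined;
        return 1;

    default:
        return 0;
    }
}
```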

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,6 +85,12 @@ bool scrub_free_pages(void);
>  } while ( false )
>  #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>  
> +#ifdef CONFIG_ARM

ITYM CONFIG_STATIC_ALLOCATION here?

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:39:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139766.258366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrH9q-000221-5K; Thu, 10 Jun 2021 09:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139766.258366; Thu, 10 Jun 2021 09:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrH9q-00021u-2I; Thu, 10 Jun 2021 09:39:26 +0000
Received: by outflank-mailman (input) for mailman id 139766;
 Thu, 10 Jun 2021 09:39:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9o-00021k-Rr; Thu, 10 Jun 2021 09:39:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9o-0003Gh-NR; Thu, 10 Jun 2021 09:39:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9o-0007TI-DG; Thu, 10 Jun 2021 09:39:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9o-0007x0-Cm; Thu, 10 Jun 2021 09:39:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162556-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162556: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 09:39:24 +0000

flight 162556 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162556/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 162533
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 162533
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 162533
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 162533
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 162533
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore        fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  e4fee66043120c954fc309bbb37813604c1c0eb7
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    2 days
Testing same since   162556  2021-06-08 22:39:08 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:39:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139768.258382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHA1-0002Ol-Fn; Thu, 10 Jun 2021 09:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139768.258382; Thu, 10 Jun 2021 09:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHA1-0002Oe-Cc; Thu, 10 Jun 2021 09:39:37 +0000
Received: by outflank-mailman (input) for mailman id 139768;
 Thu, 10 Jun 2021 09:39:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrHA0-0002NH-3h; Thu, 10 Jun 2021 09:39:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9z-0003Gp-T8; Thu, 10 Jun 2021 09:39:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9z-0007TS-Lq; Thu, 10 Jun 2021 09:39:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrH9z-0008Gv-LK; Thu, 10 Jun 2021 09:39:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=z8MZHlfJyHLvJrRLMLrLR/SyLlhU/yhdm2hOk0AjiCc=; b=lGQReG2rWfkJU/YIF9f8qT7GH+
	hQR4XjvARymu/DY6YJYTp9kHS0AZEsDrI46IgyDjZ5iG6/nC5OEgaGvVCHdfkrr5dykVb8v+Gypiz
	QnGcFUKm3qxYejPYXexeiVE3rtUYP2peY1fpxpEgle2GWPNY37N6Hl5nwbrDlo4ienU4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [ovmf bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1lrH9z-0008Gv-LK@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 09:39:35 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162596/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/162596.bisection-summary --basis-template=162359 --blessings=real,real-bisect,real-retry ovmf test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 162552 fail [host=godello1] / 162359 [host=elbling1] 162341 [host=fiano0] 162338 [host=elbling0] 162334 [host=albana1] 162326 [host=chardonnay1] 162288 [host=fiano1] 162271 [host=godello0] 162259 [host=huxelrebe0] 162256 [host=albana0] 162217 [host=huxelrebe1] 162131 [host=chardonnay0] 162113 [host=pinot1] 162111 ok.
Failure / basis pass flights: 162552 / 162111
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 04ae17218deec25c6f488609c5e2ca9c419d2c4b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 https://github.com/tianocore/edk2.git#04ae17218deec25c6f488609c5e2ca9c419d2c4b-558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c743\
 7ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#aa77acc28098d04945af998f3fc0dbd3759b5b41-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 12601 nodes in revision graph
Searching for test results:
 162111 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 04ae17218deec25c6f488609c5e2ca9c419d2c4b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162113 [host=pinot1]
 162131 [host=chardonnay0]
 162217 [host=huxelrebe1]
 162256 [host=albana0]
 162259 [host=huxelrebe0]
 162271 [host=godello0]
 162288 [host=fiano1]
 162326 [host=chardonnay1]
 162334 [host=albana1]
 162338 [host=elbling0]
 162341 [host=fiano0]
 162359 [host=elbling1]
 162368 []
 162371 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 51adb689e1db695cffdeeacafad218768fbc018c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162436 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ddb3fdbef30de5a2946f9bd51060e8d5b1987aef 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162542 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162577 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 04ae17218deec25c6f488609c5e2ca9c419d2c4b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162578 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162579 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162580 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162581 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c2f24ba3218ae91a8d5a1a31c31dad3417850d0c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162552 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 558d83ab1a5179e146a56dd5f3cb16e1ca44ff46 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162582 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2833589ad054ee51fadc5c408de4f028ddf485e3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162585 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162586 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162589 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162593 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162594 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162596 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 162111 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x561ecdb33290) HASH(0x561ecce93d58) HASH(0x561eccdf4b08) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 4174c5c7874ec21c2e693565d3685cf9f5c2e2e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x561ecdb1aa98) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x561ecdb31408) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 04ae17218deec25c6f488609c5e2ca9c419d2c4b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41, results HASH(0x561ecdb2b0c8) HASH(0x56\
 1ecdb18790) Result found: flight 162371 (fail), for basis failure (at ancestor ~5477)
 Repro found: flight 162577 (pass), for basis pass
 Repro found: flight 162578 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3357ac73807d83eb212632ee7c2e032a20a49c56 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
No revisions left to test, checking graph state.
 Result found: flight 162585 (pass), for last pass
 Result found: flight 162586 (fail), for first failure
 Repro found: flight 162589 (pass), for last pass
 Repro found: flight 162593 (fail), for first failure
 Repro found: flight 162594 (pass), for last pass
 Repro found: flight 162596 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Bug not present: 3357ac73807d83eb212632ee7c2e032a20a49c56
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162596/


  commit d06eb2d1d9dd8da1ed84bd08c5783a0264fe2b64
  Author: Laszlo Ersek <lersek@redhat.com>
  Date:   Wed May 26 22:14:24 2021 +0200
  
      OvmfPkg/PlatformPei: remove Xen support
      
      The "OvmfPkg/PlatformPei/PlatformPei.inf" module is used by the following
      platform DSCs:
      
        OvmfPkg/AmdSev/AmdSevX64.dsc
        OvmfPkg/OvmfPkgIa32.dsc
        OvmfPkg/OvmfPkgIa32X64.dsc
        OvmfPkg/OvmfPkgX64.dsc
      
      Remove Xen support from "OvmfPkg/PlatformPei", including any dependencies
      that now become unused. The basic idea is to substitute FALSE for "mXen".
      
      Remove "OvmfPkg/PlatformPei" from the "OvmfPkg: Xen-related modules"
      section of "Maintainers.txt".
      
      This patch is best reviewed with "git show -b -W".
      
      Cc: Andrew Fish <afish@apple.com>
      Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
      Cc: Jordan Justen <jordan.l.justen@intel.com>
      Cc: Leif Lindholm <leif@nuviainc.com>
      Cc: Michael D Kinney <michael.d.kinney@intel.com>
      Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
      Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2122
      Signed-off-by: Laszlo Ersek <lersek@redhat.com>
      Message-Id: <20210526201446.12554-22-lersek@redhat.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Leif Lindholm <leif@nuviainc.com>

Revision graph left in /home/logs/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
162596: tolerable ALL FAIL

flight 162596 ovmf real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162596/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:47:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139783.258397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHHf-00049I-II; Thu, 10 Jun 2021 09:47:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139783.258397; Thu, 10 Jun 2021 09:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHHf-00049B-FE; Thu, 10 Jun 2021 09:47:31 +0000
Received: by outflank-mailman (input) for mailman id 139783;
 Thu, 10 Jun 2021 09:47:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Xc/=LE=redhat.com=vkuznets@srs-us1.protection.inumbo.net>)
 id 1lrHHe-000495-NV
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 09:47:30 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [170.10.133.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 02c1632c-9a8d-4d37-8018-41c79d12818a;
 Thu, 10 Jun 2021 09:47:29 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-586--wodb-1MMF6U25U78ezhWA-1; Thu, 10 Jun 2021 05:47:27 -0400
Received: by mail-wm1-f71.google.com with SMTP id
 o82-20020a1ca5550000b029019ae053d508so2851437wme.6
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 02:47:27 -0700 (PDT)
Received: from vitty.brq.redhat.com (g-server-2.ign.cz. [91.219.240.2])
 by smtp.gmail.com with ESMTPSA id x125sm2708442wmg.37.2021.06.10.02.47.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 10 Jun 2021 02:47:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02c1632c-9a8d-4d37-8018-41c79d12818a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1623318449;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6SOXwaEoB1r4wx6rBI6iO/FkwbNbkxuPQhDm99+pms4=;
	b=DxUbaHr+IR2BJrfcV3lRRTsDMkQD3Eq+/l/GWK74HGbPDkk0sCtsNtCYoH994aTx6IsdIn
	dl0ESmKY1Is3DZIZJ5YIBKh/oxvDt2K6h3vrSCllkGU9/54XcYuaiBHHu3PSh+UhfMVTOE
	PegkkPomB14HicD7aUhJrSSyiUbDWRU=
X-MC-Unique: -wodb-1MMF6U25U78ezhWA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:in-reply-to:references:date
         :message-id:mime-version;
        bh=6SOXwaEoB1r4wx6rBI6iO/FkwbNbkxuPQhDm99+pms4=;
        b=cBNQR+Xyo0mKLeptCBCf/KE65GeTHXuqOn0qRSNWybi773tWRd2wTBp35HscgQAkQT
         ixEjZFjeYcgWx0QBcDvx6OZq6PYBpZzecFOkuGg4fm/Nl/vWvdNUCFW54GV6eP8k8KgD
         mjIVjTgf8cSNilVUJkV7d8RjjdHZx+p57FS24/iOQStFx1ycmuv5/xYWaFqYVqoNBppr
         i08V299oLP1bgE4DfmJVQ+AAlrXzVDuAoRtW4JwrXv1VAavOUcMoL8uzNc79zohHBbcE
         QOHPNovyrIY06nuqEH8BKr90Upv3l8LlAXz9DQP4SLY6+S4JQwRyezUpJdnj4r1ID6Tv
         SvAg==
X-Gm-Message-State: AOAM5315BzTWw7vPOS6FzBwaMj9fyU9UP0SJ0v5tu4kuyHoRdgn/01Lb
	HVzLaFDVkL71NwkJ9xP+kP6WW8N64RfnAhgCAUPRN9Mo53GQRyUzqTJwc2Di+ahDU0j0bH/xMhP
	VRSoY8KNT1HSZykXlw6JbSiqqHHY=
X-Received: by 2002:a5d:6a02:: with SMTP id m2mr4390681wru.77.1623318446697;
        Thu, 10 Jun 2021 02:47:26 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJx2dUezxzGYnsl/xPnZ5DutUwwD8Qpsl170CTiWMTdnHNIbMR6YpxqB5Zb+USUdHAJsBsFR8g==
X-Received: by 2002:a5d:6a02:: with SMTP id m2mr4390649wru.77.1623318446473;
        Thu, 10 Jun 2021 02:47:26 -0700 (PDT)
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Tianyu Lan <ltykernel@gmail.com>, kys@microsoft.com,
 haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
 decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 x86@kernel.org, hpa@zytor.com, arnd@arndb.de, dave.hansen@linux.intel.com,
 luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com,
 boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
 davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
 martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 03/11] x86/Hyper-V: Add new hvcall guest address
 host visibility support
In-Reply-To: <20210530150628.2063957-4-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-4-ltykernel@gmail.com>
Date: Thu, 10 Jun 2021 11:47:23 +0200
Message-ID: <878s3iyrtg.fsf@vitty.brq.redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=vkuznets@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain

Tianyu Lan <ltykernel@gmail.com> writes:

> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>
> Add new hvcall guest address host visibility support. Mark the vmbus
> ring buffer visible to the host when creating the gpadl buffer and mark
> it not visible again when tearing down the gpadl buffer.
>
> Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  arch/x86/hyperv/Makefile           |   2 +-
>  arch/x86/hyperv/ivm.c              | 106 +++++++++++++++++++++++++++++
>  arch/x86/include/asm/hyperv-tlfs.h |  24 +++++++
>  arch/x86/include/asm/mshyperv.h    |   4 +-
>  arch/x86/mm/pat/set_memory.c       |  10 ++-
>  drivers/hv/channel.c               |  38 ++++++++++-
>  include/asm-generic/hyperv-tlfs.h  |   1 +
>  include/linux/hyperv.h             |  10 +++
>  8 files changed, 190 insertions(+), 5 deletions(-)
>  create mode 100644 arch/x86/hyperv/ivm.c
>
> diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile
> index 48e2c51464e8..5d2de10809ae 100644
> --- a/arch/x86/hyperv/Makefile
> +++ b/arch/x86/hyperv/Makefile
> @@ -1,5 +1,5 @@
>  # SPDX-License-Identifier: GPL-2.0-only
> -obj-y			:= hv_init.o mmu.o nested.o irqdomain.o
> +obj-y			:= hv_init.o mmu.o nested.o irqdomain.o ivm.o
>  obj-$(CONFIG_X86_64)	+= hv_apic.o hv_proc.o
>  
>  ifdef CONFIG_X86_64
> diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
> new file mode 100644
> index 000000000000..fad1d3024056
> --- /dev/null
> +++ b/arch/x86/hyperv/ivm.c
> @@ -0,0 +1,106 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Hyper-V Isolation VM interface with paravisor and hypervisor
> + *
> + * Author:
> + *  Tianyu Lan <Tianyu.Lan@microsoft.com>
> + */
> +
> +#include <linux/hyperv.h>
> +#include <linux/types.h>
> +#include <linux/bitfield.h>
> +#include <asm/io.h>
> +#include <asm/mshyperv.h>
> +
> +/*
> + * hv_mark_gpa_visibility - Set pages visible to host via hvcall.
> + *
> + * In Isolation VM, all guest memory is encrypted from host and guest
> + * needs to set memory visible to host via hvcall before sharing memory
> + * with host.
> + */
> +int hv_mark_gpa_visibility(u16 count, const u64 pfn[], u32 visibility)
> +{
> +	struct hv_gpa_range_for_visibility **input_pcpu, *input;
> +	u16 pages_processed;
> +	u64 hv_status;
> +	unsigned long flags;
> +
> +	/* no-op if partition isolation is not enabled */
> +	if (!hv_is_isolation_supported())
> +		return 0;
> +
> +	if (count > HV_MAX_MODIFY_GPA_REP_COUNT) {
> +		pr_err("Hyper-V: GPA count:%d exceeds supported:%lu\n", count,
> +			HV_MAX_MODIFY_GPA_REP_COUNT);
> +		return -EINVAL;
> +	}
> +
> +	local_irq_save(flags);
> +	input_pcpu = (struct hv_gpa_range_for_visibility **)
> +			this_cpu_ptr(hyperv_pcpu_input_arg);
> +	input = *input_pcpu;
> +	if (unlikely(!input)) {
> +		local_irq_restore(flags);
> +		return -EINVAL;
> +	}
> +
> +	input->partition_id = HV_PARTITION_ID_SELF;
> +	input->host_visibility = visibility;
> +	input->reserved0 = 0;
> +	input->reserved1 = 0;
> +	memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn));
> +	hv_status = hv_do_rep_hypercall(
> +			HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count,
> +			0, input, &pages_processed);
> +	local_irq_restore(flags);
> +
> +	if (!(hv_status & HV_HYPERCALL_RESULT_MASK))
> +		return 0;
> +
> +	return hv_status & HV_HYPERCALL_RESULT_MASK;
> +}
> +EXPORT_SYMBOL(hv_mark_gpa_visibility);
> +
> +/*
> + * hv_set_mem_host_visibility - Set specified memory visible to host.
> + *
> + * In Isolation VM, all guest memory is encrypted from host and guest
> + * needs to set memory visible to host via hvcall before sharing memory
> + * with host. This function works as a wrapper around
> + * hv_mark_gpa_visibility(), taking a memory base and size.
> + */
> +int hv_set_mem_host_visibility(void *kbuffer, size_t size,
> +			       enum vmbus_page_visibility visibility)
> +{
> +	int pagecount = size >> HV_HYP_PAGE_SHIFT;
> +	u64 *pfn_array;
> +	int ret = 0;
> +	int i, pfn;
> +
> +	if (!hv_is_isolation_supported())
> +		return 0;
> +
> +	pfn_array = vzalloc(HV_HYP_PAGE_SIZE);
> +	if (!pfn_array)
> +		return -ENOMEM;
> +
> +	for (i = 0, pfn = 0; i < pagecount; i++) {
> +		pfn_array[pfn] = virt_to_hvpfn(kbuffer + i * HV_HYP_PAGE_SIZE);
> +		pfn++;
> +
> +		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
> +			ret |= hv_mark_gpa_visibility(pfn, pfn_array, visibility);
> +			pfn = 0;
> +
> +			if (ret)
> +				goto err_free_pfn_array;
> +		}
> +	}
> +
> + err_free_pfn_array:
> +	vfree(pfn_array);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(hv_set_mem_host_visibility);
> +
> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> index 606f5cc579b2..632281b91b44 100644
> --- a/arch/x86/include/asm/hyperv-tlfs.h
> +++ b/arch/x86/include/asm/hyperv-tlfs.h
> @@ -262,6 +262,17 @@ enum hv_isolation_type {
>  #define HV_X64_MSR_TIME_REF_COUNT	HV_REGISTER_TIME_REF_COUNT
>  #define HV_X64_MSR_REFERENCE_TSC	HV_REGISTER_REFERENCE_TSC
>  
> +/* Hyper-V GPA map flags */
> +#define HV_MAP_GPA_PERMISSIONS_NONE            0x0
> +#define HV_MAP_GPA_READABLE                    0x1
> +#define HV_MAP_GPA_WRITABLE                    0x2
> +
> +enum vmbus_page_visibility {
> +	VMBUS_PAGE_NOT_VISIBLE = 0,
> +	VMBUS_PAGE_VISIBLE_READ_ONLY = 1,
> +	VMBUS_PAGE_VISIBLE_READ_WRITE = 3
> +};
> +

Why do we need both flags and the enum? I don't see HV_MAP_GPA_* being
used anywhere and VMBUS_PAGE_VISIBLE_READ_WRITE looks like
HV_MAP_GPA_READABLE | HV_MAP_GPA_WRITABLE.

As this is used to communicate with the host, I'd suggest avoiding the
enum and just using the flags everywhere.

>  /*
>   * Declare the MSR used to setup pages used to communicate with the hypervisor.
>   */
> @@ -561,4 +572,17 @@ enum hv_interrupt_type {
>  
>  #include <asm-generic/hyperv-tlfs.h>
>  
> +/* All input parameters should be in single page. */
> +#define HV_MAX_MODIFY_GPA_REP_COUNT		\
> +	((PAGE_SIZE / sizeof(u64)) - 2)
> +
> +/* HvCallModifySparseGpaPageHostVisibility hypercall */
> +struct hv_gpa_range_for_visibility {
> +	u64 partition_id;
> +	u32 host_visibility:2;
> +	u32 reserved0:30;
> +	u32 reserved1;
> +	u64 gpa_page_list[HV_MAX_MODIFY_GPA_REP_COUNT];
> +} __packed;
> +
>  #endif
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index aeacca7c4da8..6af9d55ffe3b 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -194,7 +194,9 @@ struct irq_domain *hv_create_pci_msi_domain(void);
>  int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector,
>  		struct hv_interrupt_entry *entry);
>  int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
> -
> +int hv_mark_gpa_visibility(u16 count, const u64 pfn[], u32 visibility);
> +int hv_set_mem_host_visibility(void *kbuffer, size_t size,
> +			       enum vmbus_page_visibility visibility);
>  #else /* CONFIG_HYPERV */
>  static inline void hyperv_init(void) {}
>  static inline void hyperv_setup_mmu_ops(void) {}
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 156cd235659f..a82975600107 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -29,6 +29,8 @@
>  #include <asm/proto.h>
>  #include <asm/memtype.h>
>  #include <asm/set_memory.h>
> +#include <asm/hyperv-tlfs.h>
> +#include <asm/mshyperv.h>
>  
>  #include "../mm_internal.h"
>  
> @@ -1986,8 +1988,14 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>  	int ret;
>  
>  	/* Nothing to do if memory encryption is not active */
> -	if (!mem_encrypt_active())
> +	if (hv_is_isolation_supported()) {
> +		return hv_set_mem_host_visibility((void *)addr,
> +				numpages * HV_HYP_PAGE_SIZE,
> +				enc ? VMBUS_PAGE_NOT_VISIBLE
> +				: VMBUS_PAGE_VISIBLE_READ_WRITE);
> +	} else if (!mem_encrypt_active()) {
>  		return 0;
> +	}
>  
>  	/* Should not be working on unaligned addresses */
>  	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
> diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
> index f3761c73b074..01048bb07082 100644
> --- a/drivers/hv/channel.c
> +++ b/drivers/hv/channel.c
> @@ -17,6 +17,7 @@
>  #include <linux/hyperv.h>
>  #include <linux/uio.h>
>  #include <linux/interrupt.h>
> +#include <linux/set_memory.h>
>  #include <asm/page.h>
>  #include <asm/mshyperv.h>
>  
> @@ -465,7 +466,7 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
>  	struct list_head *curr;
>  	u32 next_gpadl_handle;
>  	unsigned long flags;
> -	int ret = 0;
> +	int ret = 0, index;
>  
>  	next_gpadl_handle =
>  		(atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1);
> @@ -474,6 +475,13 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
>  	if (ret)
>  		return ret;
>  
> +	ret = set_memory_decrypted((unsigned long)kbuffer,
> +				   HVPFN_UP(size));
> +	if (ret) {
> +		pr_warn("Failed to set host visibility.\n");
> +		return ret;
> +	}
> +
>  	init_completion(&msginfo->waitevent);
>  	msginfo->waiting_channel = channel;
>  
> @@ -539,6 +547,15 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
>  	/* At this point, we received the gpadl created msg */
>  	*gpadl_handle = gpadlmsg->gpadl;
>  
> +	if (type == HV_GPADL_BUFFER)
> +		index = 0;
> +	else
> +		index = channel->gpadl_range[1].gpadlhandle ? 2 : 1;
> +
> +	channel->gpadl_range[index].size = size;
> +	channel->gpadl_range[index].buffer = kbuffer;
> +	channel->gpadl_range[index].gpadlhandle = *gpadl_handle;
> +
>  cleanup:
>  	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
>  	list_del(&msginfo->msglistentry);
> @@ -549,6 +566,11 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
>  	}
>  
>  	kfree(msginfo);
> +
> +	if (ret)
> +		set_memory_encrypted((unsigned long)kbuffer,
> +				     HVPFN_UP(size));
> +
>  	return ret;
>  }
>  
> @@ -811,7 +833,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
>  	struct vmbus_channel_gpadl_teardown *msg;
>  	struct vmbus_channel_msginfo *info;
>  	unsigned long flags;
> -	int ret;
> +	int ret, i;
>  
>  	info = kzalloc(sizeof(*info) +
>  		       sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
> @@ -859,6 +881,18 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
>  	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
>  
>  	kfree(info);
> +
> +	/* Find gpadl buffer virtual address and size. */
> +	for (i = 0; i < VMBUS_GPADL_RANGE_COUNT; i++)
> +		if (channel->gpadl_range[i].gpadlhandle == gpadl_handle)
> +			break;
> +
> +	if (set_memory_encrypted((unsigned long)channel->gpadl_range[i].buffer,
> +			HVPFN_UP(channel->gpadl_range[i].size)))
> +		pr_warn("Fail to set mem host visibility.\n");
> +
> +	channel->gpadl_range[i].gpadlhandle = 0;
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl);
> diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
> index 515c3fb06ab3..8a0219255545 100644
> --- a/include/asm-generic/hyperv-tlfs.h
> +++ b/include/asm-generic/hyperv-tlfs.h
> @@ -158,6 +158,7 @@ struct ms_hyperv_tsc_page {
>  #define HVCALL_RETARGET_INTERRUPT		0x007e
>  #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
>  #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
> +#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
>  
>  /* Extended hypercalls */
>  #define HV_EXT_CALL_QUERY_CAPABILITIES		0x8001
> diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> index 2e859d2f9609..06eccaba10c5 100644
> --- a/include/linux/hyperv.h
> +++ b/include/linux/hyperv.h
> @@ -809,6 +809,14 @@ struct vmbus_device {
>  
>  #define VMBUS_DEFAULT_MAX_PKT_SIZE 4096
>  
> +struct vmbus_gpadl_range {
> +	u32 gpadlhandle;
> +	u32 size;
> +	void *buffer;
> +};
> +
> +#define VMBUS_GPADL_RANGE_COUNT		3
> +
>  struct vmbus_channel {
>  	struct list_head listentry;
>  
> @@ -829,6 +837,8 @@ struct vmbus_channel {
>  	struct completion rescind_event;
>  
>  	u32 ringbuffer_gpadlhandle;
> +	/* GPADL_RING and Send/Receive GPADL_BUFFER. */
> +	struct vmbus_gpadl_range gpadl_range[VMBUS_GPADL_RANGE_COUNT];
>  
>  	/* Allocated memory for ring buffer */
>  	struct page *ringbuffer_page;

-- 
Vitaly



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:49:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:49:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139789.258408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHJF-0004mN-Tr; Thu, 10 Jun 2021 09:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139789.258408; Thu, 10 Jun 2021 09:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHJF-0004mG-Qx; Thu, 10 Jun 2021 09:49:09 +0000
Received: by outflank-mailman (input) for mailman id 139789;
 Thu, 10 Jun 2021 09:49:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrHJE-0004mA-ON
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 09:49:08 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c676d11-95af-4097-ae64-13aea946c175;
 Thu, 10 Jun 2021 09:49:07 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-33-3ZnUP_CJObmNmbl8f2_10Q-1; Thu, 10 Jun 2021 11:49:05 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2333.eurprd04.prod.outlook.com (2603:10a6:800:28::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.29; Thu, 10 Jun
 2021 09:49:04 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 09:49:04 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0046.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4a::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Thu, 10 Jun 2021 09:49:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c676d11-95af-4097-ae64-13aea946c175
X-MC-Unique: 3ZnUP_CJObmNmbl8f2_10Q-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 5/9] xen: introduce assign_pages_nr
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-6-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41a7389b-630c-6cf4-fa28-7d80cb79176b@suse.com>
Date: Thu, 10 Jun 2021 11:49:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607024318.3988467-6-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0046.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::12) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email

On 07.06.2021 04:43, Penny Zheng wrote:
> Introduce a new interface, assign_pages_nr, to handle cases where the
> page count is not a power of two. This saves callers the trouble of
> splitting the size into power-of-two chunks before using assign_pages.

First of all, I still don't see why, in this one special case, it is a
meaningful burden to do the count-to-order conversion in the caller you
mean to add, and hence why we really need this new function (to keep it
simple, you could even have the caller not break the allocation down
into arbitrary power-of-2 chunks, but simply iterate over all the
individual [order-0] pages). What's more, I'm not happy with the chosen
name, despite it having been suggested during v1 review. _If_ we needed
two functions, imo they ought to be named assign_page() (dealing with a
single page of the given order) and assign_pages(). Backporting
confusion could be avoided by altering the order of parameters, such
that the compiler would point out that adjustments at call sites are
needed.

Irrespective of this a few remarks on the code change itself:

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2301,14 +2301,14 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>  }
>  
>  
> -int assign_pages(
> +int assign_pages_nr(
>      struct domain *d,
>      struct page_info *pg,
> -    unsigned int order,
> +    unsigned int nr_pfns,

Even leaving the naming aspect of "pfns" aside, I can't see why this
can't be simply "nr" (of appropriate type, see next remark).

>      unsigned int memflags)
>  {
>      int rc = 0;
> -    unsigned long i;
> +    unsigned int i;

This is not an acceptable type change, at least not as long as it isn't
justified at all in the description. While both Arm and x86 will be
fine this way, the code here is supposed to be generic, and hence had
better remain generally correct.

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -131,12 +131,18 @@ int query_page_offline(mfn_t mfn, uint32_t *status);
>  
>  void heap_init_late(void);
>  
> -int assign_pages(
> +int assign_pages_nr(
>      struct domain *d,
>      struct page_info *pg,
> -    unsigned int order,
> +    unsigned int nr_pfns,
>      unsigned int memflags);
>  
> + int assign_pages(
> +     struct domain *d,
> +     struct page_info *pg,
> +     unsigned int order,
> +     unsigned int memflags);

Bogus extra leading space on all lines of this addition.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 09:52:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 09:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139796.258420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHMB-00067f-CC; Thu, 10 Jun 2021 09:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139796.258420; Thu, 10 Jun 2021 09:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHMB-00067Y-97; Thu, 10 Jun 2021 09:52:11 +0000
Received: by outflank-mailman (input) for mailman id 139796;
 Thu, 10 Jun 2021 09:52:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Xc/=LE=redhat.com=vkuznets@srs-us1.protection.inumbo.net>)
 id 1lrHMA-00067Q-DC
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 09:52:10 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ee754177-001f-4f96-b3ec-6f579e49f695;
 Thu, 10 Jun 2021 09:52:09 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-511-p0wckfAgP6mmGgARLGpbkQ-1; Thu, 10 Jun 2021 05:52:05 -0400
Received: by mail-wm1-f70.google.com with SMTP id
 m33-20020a05600c3b21b02901a44b1d2d87so2856653wms.3
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 02:52:05 -0700 (PDT)
Received: from vitty.brq.redhat.com (g-server-2.ign.cz. [91.219.240.2])
 by smtp.gmail.com with ESMTPSA id i2sm2324384wmo.40.2021.06.10.02.52.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 10 Jun 2021 02:52:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee754177-001f-4f96-b3ec-6f579e49f695
X-MC-Unique: p0wckfAgP6mmGgARLGpbkQ-1
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com,
 kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com,
 boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
 davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
 martin.petersen@oracle.com
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
In-Reply-To: <20210530150628.2063957-11-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-11-ltykernel@gmail.com>
Date: Thu, 10 Jun 2021 11:52:01 +0200
Message-ID: <874ke6yrlq.fsf@vitty.brq.redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=vkuznets@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain

Tianyu Lan <ltykernel@gmail.com> writes:

> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>
> In Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hypercall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffers, but the page buffers used
> by vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA
> API to map/unmap this memory when sending/receiving packets; the
> Hyper-V DMA ops callback will use swiotlb functions to allocate a
> bounce buffer and copy data from/to the bounce buffer.
>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  drivers/net/hyperv/hyperv_net.h   |   6 ++
>  drivers/net/hyperv/netvsc.c       | 125 ++++++++++++++++++++++++++++--
>  drivers/net/hyperv/rndis_filter.c |   3 +
>  include/linux/hyperv.h            |   5 ++
>  4 files changed, 133 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
> index b11aa68b44ec..c2fbb9d4df2c 100644
> --- a/drivers/net/hyperv/hyperv_net.h
> +++ b/drivers/net/hyperv/hyperv_net.h
> @@ -164,6 +164,7 @@ struct hv_netvsc_packet {
>  	u32 total_bytes;
>  	u32 send_buf_index;
>  	u32 total_data_buflen;
> +	struct hv_dma_range *dma_range;
>  };
>  
>  #define NETVSC_HASH_KEYLEN 40
> @@ -1074,6 +1075,7 @@ struct netvsc_device {
>  
>  	/* Receive buffer allocated by us but manages by NetVSP */
>  	void *recv_buf;
> +	void *recv_original_buf;
>  	u32 recv_buf_size; /* allocated bytes */
>  	u32 recv_buf_gpadl_handle;
>  	u32 recv_section_cnt;
> @@ -1082,6 +1084,8 @@ struct netvsc_device {
>  
>  	/* Send buffer allocated by us */
>  	void *send_buf;
> +	void *send_original_buf;
> +	u32 send_buf_size;
>  	u32 send_buf_gpadl_handle;
>  	u32 send_section_cnt;
>  	u32 send_section_size;
> @@ -1729,4 +1733,6 @@ struct rndis_message {
>  #define RETRY_US_HI	10000
>  #define RETRY_MAX	2000	/* >10 sec */
>  
> +void netvsc_dma_unmap(struct hv_device *hv_dev,
> +		      struct hv_netvsc_packet *packet);
>  #endif /* _HYPERV_NET_H */
> diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
> index 7bd935412853..a01740c6c6b8 100644
> --- a/drivers/net/hyperv/netvsc.c
> +++ b/drivers/net/hyperv/netvsc.c
> @@ -153,8 +153,21 @@ static void free_netvsc_device(struct rcu_head *head)
>  	int i;
>  
>  	kfree(nvdev->extension);
> -	vfree(nvdev->recv_buf);
> -	vfree(nvdev->send_buf);
> +
> +	if (nvdev->recv_original_buf) {
> +		vunmap(nvdev->recv_buf);
> +		vfree(nvdev->recv_original_buf);
> +	} else {
> +		vfree(nvdev->recv_buf);
> +	}
> +
> +	if (nvdev->send_original_buf) {
> +		vunmap(nvdev->send_buf);
> +		vfree(nvdev->send_original_buf);
> +	} else {
> +		vfree(nvdev->send_buf);
> +	}
> +
>  	kfree(nvdev->send_section_map);
>  
>  	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
> @@ -338,8 +351,10 @@ static int netvsc_init_buf(struct hv_device *device,
>  	struct net_device *ndev = hv_get_drvdata(device);
>  	struct nvsp_message *init_packet;
>  	unsigned int buf_size;
> +	unsigned long *pfns;
>  	size_t map_words;
>  	int i, ret = 0;
> +	void *vaddr;
>  
>  	/* Get receive buffer area. */
>  	buf_size = device_info->recv_sections * device_info->recv_section_size;
> @@ -375,6 +390,21 @@ static int netvsc_init_buf(struct hv_device *device,
>  		goto cleanup;
>  	}
>  
> +	if (hv_isolation_type_snp()) {
> +		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
> +			       GFP_KERNEL);
> +		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
> +			pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
> +				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
> +
> +		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
> +		kfree(pfns);
> +		if (!vaddr)
> +			goto cleanup;
> +		net_device->recv_original_buf = net_device->recv_buf;
> +		net_device->recv_buf = vaddr;
> +	}
> +
>  	/* Notify the NetVsp of the gpadl handle */
>  	init_packet = &net_device->channel_init_pkt;
>  	memset(init_packet, 0, sizeof(struct nvsp_message));
> @@ -477,6 +507,23 @@ static int netvsc_init_buf(struct hv_device *device,
>  		goto cleanup;
>  	}
>  
> +	if (hv_isolation_type_snp()) {
> +		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
> +			       GFP_KERNEL);
> +
> +		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
> +			pfns[i] = virt_to_hvpfn(net_device->send_buf + i * HV_HYP_PAGE_SIZE)
> +				+ (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
> +
> +		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
> +		kfree(pfns);
> +		if (!vaddr)
> +			goto cleanup;
> +
> +		net_device->send_original_buf = net_device->send_buf;
> +		net_device->send_buf = vaddr;
> +	}
> +
>  	/* Notify the NetVsp of the gpadl handle */
>  	init_packet = &net_device->channel_init_pkt;
>  	memset(init_packet, 0, sizeof(struct nvsp_message));
> @@ -767,7 +814,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
>  
>  	/* Notify the layer above us */
>  	if (likely(skb)) {
> -		const struct hv_netvsc_packet *packet
> +		struct hv_netvsc_packet *packet
>  			= (struct hv_netvsc_packet *)skb->cb;
>  		u32 send_index = packet->send_buf_index;
>  		struct netvsc_stats *tx_stats;
> @@ -783,6 +830,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
>  		tx_stats->bytes += packet->total_bytes;
>  		u64_stats_update_end(&tx_stats->syncp);
>  
> +		netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
>  		napi_consume_skb(skb, budget);
>  	}
>  
> @@ -947,6 +995,63 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
>  		memset(dest, 0, padding);
>  }
>  
> +void netvsc_dma_unmap(struct hv_device *hv_dev,
> +		      struct hv_netvsc_packet *packet)
> +{
> +	u32 page_count = packet->cp_partial ?
> +		packet->page_buf_cnt - packet->rmsg_pgcnt :
> +		packet->page_buf_cnt;
> +	int i;
> +
> +	if (!packet->dma_range)
> +		return;
> +
> +	for (i = 0; i < page_count; i++)
> +		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
> +				 packet->dma_range[i].mapping_size,
> +				 DMA_TO_DEVICE);
> +
> +	kfree(packet->dma_range);
> +}
> +
> +int netvsc_dma_map(struct hv_device *hv_dev,
> +		   struct hv_netvsc_packet *packet,
> +		   struct hv_page_buffer *pb)
> +{
> +	u32 page_count =  packet->cp_partial ?
> +		packet->page_buf_cnt - packet->rmsg_pgcnt :
> +		packet->page_buf_cnt;
> +	dma_addr_t dma;
> +	int i;
> +
> +	packet->dma_range = kcalloc(page_count,
> +				    sizeof(*packet->dma_range),
> +				    GFP_KERNEL);
> +	if (!packet->dma_range)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < page_count; i++) {
> +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
> +					 + pb[i].offset);
> +		u32 len = pb[i].len;
> +
> +		dma = dma_map_single(&hv_dev->device, src, len,
> +				     DMA_TO_DEVICE);
> +		if (dma_mapping_error(&hv_dev->device, dma)) {
> +			kfree(packet->dma_range);
> +			return -ENOMEM;
> +		}
> +
> +		packet->dma_range[i].dma = dma;
> +		packet->dma_range[i].mapping_size = len;
> +		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
> +		pb[i].offset = offset_in_hvpage(dma);
> +		pb[i].len = len;
> +	}
> +
> +	return 0;
> +}
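
One detail shared by netvsc_dma_map() and netvsc_dma_unmap() above: when the packet was partially copied into the send buffer (cp_partial), the first rmsg_pgcnt page buffers are excluded from mapping and unmapping. A tiny sketch of that rule (Python, illustrative only, not driver code):

```python
# Toy model of the page-count rule used by both map and unmap above:
# a partially-copied packet excludes its RNDIS message pages.
def effective_page_count(page_buf_cnt, rmsg_pgcnt, cp_partial):
    return page_buf_cnt - rmsg_pgcnt if cp_partial else page_buf_cnt

print(effective_page_count(8, 2, True))   # -> 6
print(effective_page_count(8, 2, False))  # -> 8
```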
> +
>  static inline int netvsc_send_pkt(
>  	struct hv_device *device,
>  	struct hv_netvsc_packet *packet,
> @@ -987,14 +1092,22 @@ static inline int netvsc_send_pkt(
>  
>  	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
>  
> +	packet->dma_range = NULL;
>  	if (packet->page_buf_cnt) {
>  		if (packet->cp_partial)
>  			pb += packet->rmsg_pgcnt;
>  
> +		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
> +		if (ret)
> +			return ret;
> +
>  		ret = vmbus_sendpacket_pagebuffer(out_channel,
> -						  pb, packet->page_buf_cnt,
> -						  &nvmsg, sizeof(nvmsg),
> -						  req_id);
> +					  pb, packet->page_buf_cnt,
> +					  &nvmsg, sizeof(nvmsg),
> +					  req_id);

Nitpick: stray whitespace-only change?

> +
> +		if (ret)
> +			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
>  	} else {
>  		ret = vmbus_sendpacket(out_channel,
>  				       &nvmsg, sizeof(nvmsg),
> diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
> index 983bf362466a..448c1ee39246 100644
> --- a/drivers/net/hyperv/rndis_filter.c
> +++ b/drivers/net/hyperv/rndis_filter.c
> @@ -293,6 +293,8 @@ static void rndis_filter_receive_response(struct net_device *ndev,
>  	u32 *req_id = &resp->msg.init_complete.req_id;
>  	struct rndis_device *dev = nvdev->extension;
>  	struct rndis_request *request = NULL;
> +	struct hv_device *hv_dev = ((struct net_device_context *)
> +			netdev_priv(ndev))->device_ctx;
>  	bool found = false;
>  	unsigned long flags;
>  
> @@ -361,6 +363,7 @@ static void rndis_filter_receive_response(struct net_device *ndev,
>  			}
>  		}
>  
> +		netvsc_dma_unmap(hv_dev, &request->pkt);
>  		complete(&request->wait_event);
>  	} else {
>  		netdev_err(ndev,
> diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> index babbe19f57e2..90abff664495 100644
> --- a/include/linux/hyperv.h
> +++ b/include/linux/hyperv.h
> @@ -1616,6 +1616,11 @@ struct hyperv_service_callback {
>  	void (*callback)(void *context);
>  };
>  
> +struct hv_dma_range {
> +	dma_addr_t dma;
> +	u32 mapping_size;
> +};
> +
>  #define MAX_SRV_VER	0x7ffffff
>  extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, u8 *buf, u32 buflen,
>  				const int *fw_version, int fw_vercnt,

-- 
Vitaly



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 10:01:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 10:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139806.258435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHVF-0007iK-CU; Thu, 10 Jun 2021 10:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139806.258435; Thu, 10 Jun 2021 10:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHVF-0007iD-9X; Thu, 10 Jun 2021 10:01:33 +0000
Received: by outflank-mailman (input) for mailman id 139806;
 Thu, 10 Jun 2021 10:01:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVv/=LE=epam.com=prvs=679567fbaa=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lrHVD-0007hn-5Q
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 10:01:31 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7b03cec-e9bf-472a-b02f-b38ef5b50a53;
 Thu, 10 Jun 2021 10:01:30 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15A9xoOX000751; Thu, 10 Jun 2021 10:01:28 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2052.outbound.protection.outlook.com [104.47.0.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 393b44gy63-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Jun 2021 10:01:25 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB7347.eurprd03.prod.outlook.com (2603:10a6:20b:268::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.20; Thu, 10 Jun
 2021 10:01:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.031; Thu, 10 Jun 2021
 10:01:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7b03cec-e9bf-472a-b02f-b38ef5b50a53
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oUuKFS68aWIdFI6Tyq14vHp5+H1TTWYUYKzevIt07H7I7aE7lBhp/FPUc3OImhzZCeBS6PFGdvPcc5oZH0j20sfLsntwmNvmSVNl3HqHGt7cEahykAlpUD06bRwKTy1ppSovQwSQT0VmAPpMEXmjS7JQkNMvvrWob80L2mfId8wiEsSkEuPZvAnIC8g490aK0SnlMfPOjDBKYag5VY3YlhxK2cvuMkjtmYHV1Bobn5YD6cUxtAeT+isGXFhmPqrDXvvi9YIf/gQUbSvDOf2d2KU8ymacK6G8QFvTq9bDrYpCwWqPytYQMuHMOewQRbI4hQ+koncITKspqmd7Mh4tJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xyG66UZuq8U1OUofAufPfAzsja+kZ1kdTElr5MgJ/Ro=;
 b=bVNt+Rbxkmcu3EDZndQ4wNj71Ecl3nSWeOA5ZD0vPlIbhO9zNZxWgCtJhD5sT5C0P+qq/hJgqJDyCW78rH6pIZ3OYI3eoAT1dbnGbCsOGREHfZwBrfPr3fbnAClTm5hgaQbp/b6a9ulEp6STsbELf6Sv6/ZIGnvluMIh+wlbzP8EQ0XlcwPrDFcqwqFi2UhFe6vFHTJQdNiw7JwpKUWDAXEcCgX7J4VOIeMAISn1A5LKpOvcoxt8ePodu9npsEa5xwSI25BLDDb6E8uXq1Uu4hldtdZmaYsaRLmiJDQpGKvVFRLicL5pEdaWCi75lLgB0YWZsvjIlTKDuBLXHwolBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xyG66UZuq8U1OUofAufPfAzsja+kZ1kdTElr5MgJ/Ro=;
 b=tNyCVJ7o/8pYXLa1acsCXmp54foWdHqxDdJgGFALhKYSR0cbiQCFSFgLx/f/ZD4YU9CCSZ9JEBlDPr2L7KEIMcrDREmcj2hTa8EHAOdoJ7uf1Csvp6YuDYQAYmuS2jx3upPBaqrqlHQ3FmTR+CLTskpvDZyDBjrCVWEoL1p2ws4JwajD4b/RPuE07gRNz/Mlb2wE6a5XTOOZ8Vkf0GNLw4cwph2Vw9GDyC1GxOfTL9lU7vXbubhbGxq/uRqhXaBoOh+94woc/gXghIxWgmcAjNM2YaA/r/3iWJVTUNsI5+om+yl505NscJ0G9BxCyZpprx0yQTA7pOC83Xgm00Dpwg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: AQHXWQwW/O3obtZgqkmpGRvt7ZNDkasM6aMAgAAjXoA=
Date: Thu, 10 Jun 2021 10:01:16 +0000
Message-ID: <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
In-Reply-To: <YMHFQA1L61ntKNRq@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1116bc66-d735-4204-7e66-08d92bf6b020
x-ms-traffictypediagnostic: AM9PR03MB7347:
x-microsoft-antispam-prvs: 
 <AM9PR03MB7347B2F8406C9D98B63DC610E7359@AM9PR03MB7347.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <7745AE2CF679EC44B6EA1AB1ED518C54@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1116bc66-d735-4204-7e66-08d92bf6b020
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2021 10:01:16.5220
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7347
X-Proofpoint-ORIG-GUID: ysHWm0mYQL2LHm0pwD9Uro7oJpg975tf
X-Proofpoint-GUID: ysHWm0mYQL2LHm0pwD9Uro7oJpg975tf

Hello, Roger!

On 10.06.21 10:54, Roger Pau Monné wrote:
> On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
>> Hi, all!
>>
>> While working on PCI SR-IOV support for ARM I started porting [1] on top
>> of the current PCI on ARM support [2]. The question I have for this series
>> is whether we really need to emulate SR-IOV code in Xen.
>>
>> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
>> patches) and it "works for me": MSI support is still WIP, but I was able
>> to see that VFs are properly seen in the guest and BARs are properly
>> programmed in p2m.
>>
>> What I can't fully understand is whether we can live with this approach
>> or there are use-cases I can't see.
>>
>> Previously I've been told that this approach might not work on FreeBSD
>> running as Domain-0, but it seems that "PCI Passthrough is not supported
>> (Xen/FreeBSD)" anyway [4].
> PCI passthrough is not supported on FreeBSD dom0 because PCI
> passthrough is not supported by Xen itself when using a PVH dom0, and
> that's the only mode FreeBSD dom0 can use.

So it is still not clear to me how and whether PCI passthrough is
supported on FreeBSD, and what the scenarios and requirements for that are.

>
> PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
> to work. I however think this is not the proper way to implement
> SR-IOV support.

I was not able to find any support for PHYSDEVOP_XXX in the FreeBSD code;
could you please point me to where these are used? If they are not used,
how does Xen under FreeBSD know about PCI devices? I am trying to
extrapolate my knowledge of how Linux does that (during PCI enumeration
in Domain-0 we use hypercalls).

>
>> I also see that the ACRN hypervisor [5] implements SR-IOV inside it,
>> which makes me think I miss some important use-case on x86 though.
>>
>> I would like to ask for any advice with respect to SR-IOV in the
>> hypervisor, any pointers to documentation or any other source which
>> might be handy in deciding whether we do need SR-IOV complexity in Xen.
>>
>> And it does bring complexity if you compare [1] and [3]...
>>
>> A bit of technical detail on the approach implemented [3]:
>> 1. We rely on PHYSDEVOP_pci_device_add
>> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs
>> 3. BARs are programmed in p2m implementing a guest view on those (we
>> have extended the vPCI code for that, and this path is used for both
>> "normal" devices and VFs the same way)
>> 4. No need to trap PCI_SRIOV_CTRL
>> 5. No need to wait 100ms in Xen before attempting to access VF
>> registers when enabling virtual functions on the PF - this is handled
>> by Domain-0 itself.
> I think the SR-IOV capability should be handled like any other PCI
> capability, ie: like we currently handle MSI or MSI-X in vPCI.
>
> It's likely that using some kind of hypercall in order to deal with
> SR-IOV could make this easier to implement in Xen, but that just adds
> more code to all OSes that want to run as the hardware domain.

I didn't introduce anything new; PHYSDEVOP_pci_device_add was enough.
The rest I did in Xen itself wrt SR-IOV.

>
> OTOH if we properly trap accesses to the SR-IOV capability (like it
> was proposed in [1] from your references) we won't have to modify OSes
> that want to run as hardware domains in order to handle SR-IOV devices.

Out of curiosity, could you please name a few? I do understand that we
want to support unmodified OSes and this is indeed important. But still,
what are the other OSes which do support Xen + PCI passthrough?

>
> IMO going for the hypercall option seems easier now, but adds a burden
> to all OSes that want to manage SR-IOV devices that will hurt us long
> term.

Again, I was able to make it somewhat work with PHYSDEVOP_pci_device_add
only.

>
> Thanks, Roger.

Thank you for your valuable comments,

Oleksandr

>
>> Thank you in advance,
>> Oleksandr
>>
>> [1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
>> [2] https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
>> [3] https://github.com/xen-troops/xen/commits/pci_phase2
>> [4] https://wiki.freebsd.org/Xen
>> [5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 10:23:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 10:23:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139813.258448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHqU-0001a4-8E; Thu, 10 Jun 2021 10:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139813.258448; Thu, 10 Jun 2021 10:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrHqU-0001Zx-4i; Thu, 10 Jun 2021 10:23:30 +0000
Received: by outflank-mailman (input) for mailman id 139813;
 Thu, 10 Jun 2021 10:23:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrHqT-0001Zr-IG
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 10:23:29 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bc8fc67-5c8b-4521-88f7-c2a9011d52ec;
 Thu, 10 Jun 2021 10:23:27 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-XxQ8Lki0Nyul9gAtzBrvTw-1; Thu, 10 Jun 2021 12:23:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB7375.eurprd04.prod.outlook.com (2603:10a6:800:1a8::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Thu, 10 Jun
 2021 10:23:24 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 10:23:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0014.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Thu, 10 Jun 2021 10:23:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bc8fc67-5c8b-4521-88f7-c2a9011d52ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623320606;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XjK+LwKx4hRb33TMruVqDXAuyc8ZGjXa5fzsOdWOTpg=;
	b=IuplYSgKVgije5ypRIHwLPDpjAvofZZ+mUkRWZYSc4S+I1XBhU7PH+2ioebqr6HuQ+PQnB
	ScmlprMi6twp4I5UhZ4DwY2ItPH02s3fsuYcYUr6ZXjdHZY0bIDLCX9N7vPjlwpbgNuHpR
	Mf8GEpwkfJSN4tvFEmbkdMPwMKlap3I=
X-MC-Unique: XxQ8Lki0Nyul9gAtzBrvTw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fE9QE1YSni9zNONL+XONWAEhWOaJDG/ro4HM8iwpnGufUgbyCkxwYXKeiJV3FwaQIyd7Vgqeaf6f3cNaxK5AW1cqlOJKAzZPDqnEuIWiGezK7JxXyJcbzhydYKdoqL54/Mk7FIRFATPAoGPck8Ls4O+KGNvMpcEmevZSwnjJG8Y2vxTPah54gUbDPxC8fh3redJQW5Wu4d8VXZikR18+gco+kU3ZNP+SCz8US8fcEVzX4ZWQkVKMtw+zBlncieoTDPlOIbNMYAskCK4bHPlMDYKPojU3w2BUceEAqKDhHMXO98srKXFJ/oQZzfiDhQM3IY7lnEsqPIYhoaztRP7w9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XjK+LwKx4hRb33TMruVqDXAuyc8ZGjXa5fzsOdWOTpg=;
 b=OIkNrWWBb8ayyf1Ab+BVe1BKW/Si7QnkplKxNatR1jOre9zJ54qO+GvqJ0LVbCOFeytwrqCLd2mxTdYbMXxBV8PR8tm2fallSsvVdAJwEecmBAcODBNXJgIlZjjNuTWD4a5t+NYW2vkZT3nf9vHvtYk4TZZwU4aynUL9/GFdqQDUaH6k5/DgeUrSeZzJFPurWtIsxb0ouF+2Q5K8/t4CLpL9FA56MPCfgrTkBN5gFCR/3ZHDJM7G8VL8yv5S4SjpvIjEGr2ekWfbryKuiHzNn0+0hRCflha4DAPKqQ7tbzSpTtE5W2ZBU87ySw6JPqRUEnW+TFGdumK/0OOxVchasg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
 alloc_domstatic_pages
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-7-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c31a85c3-89ea-76a4-3b29-a411d419fb59@suse.com>
Date: Thu, 10 Jun 2021 12:23:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607024318.3988467-7-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0014.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e8df58d2-e24e-471b-eef7-08d92bf9c70a
X-MS-TrafficTypeDiagnostic: VE1PR04MB7375:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB73750D4DD26840470C084248B3359@VE1PR04MB7375.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e8df58d2-e24e-471b-eef7-08d92bf9c70a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 10:23:23.7168
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7375

On 07.06.2021 04:43, Penny Zheng wrote:
> alloc_staticmem_pages aims to allocate nr_mfns contiguous pages of
> static memory; it is the equivalent of alloc_heap_pages for static
> memory. For now it only covers allocating at a specified starting address.
> 
> For each page, it shall check whether the page is reserved (PGC_reserved)
> and free. It shall also do the necessary initialization, mostly the
> same as in alloc_heap_pages, like following the same cache-coherency
> policy and turning the page state into PGC_state_inuse.
> 
> alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
> static memory; it allocates nr_mfns pages of static memory
> and assigns them to one specific domain.
> 
> It uses alloc_staticmem_pages to get nr_mfns pages of static memory,
> then, on success, uses assign_pages_nr to assign those pages to
> one specific domain.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> changes v2:
> - use mfn_valid() to do validation
> - change pfn-named to mfn-named
> - put CONFIG_STATIC_ALLOCATION around to remove dead codes
> - correct off-by-one indentation
> - remove meaningless MEMF_no_owner case
> - leave zone concept out of DMA limitation check
> ---
>  xen/common/page_alloc.c | 129 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/mm.h    |   2 +
>  2 files changed, 131 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index e244d2e52e..a0eea5f1a4 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
>      return pg;
>  }
>  
> +#ifdef CONFIG_STATIC_ALLOCATION
> +/*
> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> +                                               mfn_t smfn,
> +                                               unsigned int memflags)
> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned long i;
> +    struct page_info *pg;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    if ( !mfn_valid(smfn) || !nr_mfns )
> +    {
> +        printk(XENLOG_ERR
> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",

Reading a log containing e.g. "Invalid 0 static memory starting at
..." I don't think I would recognize that the "0" is the count of
pages.
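
Naming what each number is would avoid the ambiguity; a standalone
sketch of such wording (snprintf and %#lx stand in for printk() and
PRI_mfn here, purely for illustration):

```c
#include <stdio.h>

/* Sketch only: snprintf stands in for Xen's printk(XENLOG_ERR ...) and
 * %#lx for PRI_mfn.  Each number is labelled, so a "0" unambiguously
 * reads as the page count rather than part of the address. */
static int format_invalid(char *buf, size_t len,
                          unsigned long nr_mfns, unsigned long smfn)
{
    return snprintf(buf, len,
                    "Invalid static memory request: %lu page(s) starting at MFN %#lx",
                    nr_mfns, smfn);
}
```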

> +               nr_mfns, mfn_x(smfn));
> +        return NULL;
> +    }
> +    pg = mfn_to_page(smfn);
> +
> +    for ( i = 0; i < nr_mfns; i++ )
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);

What logic elsewhere guarantees that this will hold? ASSERT()s are
to verify that assumptions are met. But I don't think you can
sensibly assume the caller knows the range is reserved (and free),
or else you could get away without any allocation function.

> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                   "Reference count must continuously be zero for free pages"
> +                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                   i, mfn_x(page_to_mfn(pg + i)),
> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
> +            BUG();
> +        }

The same applies here at least until proper locking gets added,
which I guess is happening in the next patch when really it would
need to happen right here.

Furthermore I don't see why you don't fold ASSERT() and if into

        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )

After all PGC_reserved is not similar to PGC_need_scrub, which
alloc_heap_pages() masks out the way you also have it here.
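
To illustrate the folded check in isolation (the PGC_* values below are
stand-ins for this sketch, not Xen's real bit layout from asm/mm.h):

```c
/* Stand-in flag values, for illustration only; Xen's real PGC_* layout
 * differs, and only the combination logic matters here. */
#define PGC_state_free (1ul << 8)
#define PGC_reserved   (1ul << 9)

/* A free page of static memory must be exactly free and reserved:
 * any other bit set (e.g. a non-zero refcount in the low bits) fails,
 * which is what the separate ASSERT() plus if checked in two steps. */
static int is_free_reserved(unsigned long count_info)
{
    return count_info == (PGC_state_free | PGC_reserved);
}
```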

As to the printk() - the extra verbosity compared to the original
isn't helpful or necessary imo. The message needs to be
distinguishable from the other one, yes, so it would better
mention "static" in some way. But the prefix you have is too long
for my taste (and lacks a separating blank anyway).

As a separate matter - have you given up on the concept of
reserving particular memory ranges for _particular_ guests? The
cover letter, saying "Static Allocation refers to system or
sub-system(domains) for which memory areas are pre-defined by
configuration using physical address ranges" as the very first
thing, doesn't seem to suggest so.

> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Preserve flag PGC_reserved and change page state
> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;

Why not

        pg[i].count_info = PGC_state_inuse | PGC_reserved;

? Again, PGC_reserved is sufficiently different from PGC_need_scrub.

> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);

Depending on whether static pages have a designated owner, this may
(as suggested before) not be necessary.

> @@ -2326,7 +2395,11 @@ int assign_pages_nr(
>  
>          for ( i = 0; i < nr_pfns; i++ )
>          {
> +#ifdef CONFIG_STATIC_ALLOCATION
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
> +#else
>              ASSERT(!(pg[i].count_info & ~PGC_extra));
> +#endif
>              if ( pg[i].count_info & PGC_extra )
>                  extra_pages++;
>          }
> @@ -2365,7 +2438,12 @@ int assign_pages_nr(
>          page_set_owner(&pg[i], d);
>          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>          pg[i].count_info =
> +#ifdef CONFIG_STATIC_ALLOCATION
> +            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
> +#else
>              (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> +#endif

Both hunks' #ifdef-ary needs to be avoided, e.g. by

#ifndef CONFIG_STATIC_ALLOCATION
# define PGC_reserved 0
#endif

near the top of the file.
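
The compiler then folds the zero away and both hunks keep a single
expression; a standalone sketch of the pattern (flag values illustrative):

```c
/* With the feature compiled out, defining the flag as 0 lets common
 * code use it unconditionally; the values here are illustrative only. */
#ifndef CONFIG_STATIC_ALLOCATION
# define PGC_reserved 0
#endif

#define PGC_extra     (1ul << 10)
#define PGC_allocated (1ul << 11)

/* The assign_pages_nr() hunk then needs no #ifdef at all: */
static unsigned long assign_count_info(unsigned long count_info)
{
    return (count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
}
```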

> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
>      return pg;
>  }
>  
> +#ifdef CONFIG_STATIC_ALLOCATION
> +/*
> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
> + * then assign them to one specific domain #d.
> + * It is the equivalent of alloc_domheap_pages for static memory.
> + */
> +struct page_info *alloc_domstatic_pages(
> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +    unsigned long dma_size;
> +
> +    ASSERT(!in_irq());
> +
> +    if ( !dma_bitsize )
> +        memflags &= ~MEMF_no_dma;
> +    else
> +    {
> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
> +        {
> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
> +            /* Starting address shall meet the DMA limitation. */
> +            if ( mfn_x(smfn) < dma_size )
> +                return NULL;

I think I did ask this on v1 already: Why the first page? Static
memory regions, unlike buddy allocator zones, can cross power-of-2
address boundaries. Hence it ought to be the last page that gets
checked for fitting address width restriction requirements.
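
A sketch of such an end check (the helper name and the direction of the
comparison are this sketch's assumptions, not the patch's):

```c
/* Sketch: if pages must be addressable within dma_bitsize bits, it is
 * the *last* page of the range that must fall below the limit.
 * dma_size is in pages, i.e. 1ul << (dma_bitsize - PAGE_SHIFT). */
static int range_fits_addr_width(unsigned long smfn, unsigned long nr_mfns,
                                 unsigned long dma_size)
{
    /* Last MFN used is smfn + nr_mfns - 1; written to avoid overflow. */
    return nr_mfns <= dma_size && smfn <= dma_size - nr_mfns;
}
```

A range starting below the boundary but ending above it is caught only
by checking the end, which is the point about static regions crossing
power-of-2 address boundaries.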

And then - is this necessary at all? Shouldn't "pre-defined by
configuration using physical address ranges" imply the memory
designated for a guest fits its DMA needs?

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 10:48:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 10:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139821.258459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIEZ-00043s-3F; Thu, 10 Jun 2021 10:48:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139821.258459; Thu, 10 Jun 2021 10:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIEZ-00043l-0K; Thu, 10 Jun 2021 10:48:23 +0000
Received: by outflank-mailman (input) for mailman id 139821;
 Thu, 10 Jun 2021 10:48:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrIEX-00043d-KX
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 10:48:21 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25085f1c-382e-47e6-9cdf-a9fa836cc71a;
 Thu, 10 Jun 2021 10:48:19 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-27-hlbNZYm_NSeB69vjGB0GOA-1; Thu, 10 Jun 2021 12:48:17 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5165.eurprd04.prod.outlook.com (2603:10a6:803:54::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Thu, 10 Jun
 2021 10:48:16 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 10:48:16 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0074.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:b4::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Thu, 10 Jun 2021 10:48:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25085f1c-382e-47e6-9cdf-a9fa836cc71a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623322098;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WRXC7e/10hM3vysBE7o6rC/+9XiQwbKNMTh1OfkDXOQ=;
	b=iBnD59eKiriLSqd4wIDzvskI+h5XstCmarR1P+CjAdD4WMv2N2KCVI4u88NQ1SqE2H7ywf
	KQMhtFooGvHg7GbYm4W+sMGW3biXzZwlheGPE5OCYfdZY0UIP4wkpx0yeWZYkM/TVsX6Wr
	rdyAzuqRaUFUPz2AmJr2WNb0SUXS/Io=
X-MC-Unique: hlbNZYm_NSeB69vjGB0GOA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0aa4bf61-e08f-6da0-1cda-48e61bf876af@suse.com>
Date: Thu, 10 Jun 2021 12:48:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0074.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b1fb4bae-11d6-478c-265d-08d92bfd40dd
X-MS-TrafficTypeDiagnostic: VI1PR04MB5165:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5165BD8F97730E93D13429D8B3359@VI1PR04MB5165.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b1fb4bae-11d6-478c-265d-08d92bfd40dd
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 10:48:16.5740
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5165

On 10.06.2021 12:01, Oleksandr Andrushchenko wrote:
> On 10.06.21 10:54, Roger Pau Monné wrote:
>> OTOH if we properly trap accesses to the SR-IOV capability (like it
>> was proposed in [1] from your references) we won't have to modify OSes
>> that want to run as hardware domains in order to handle SR-IOV devices.
>
> Out of curiosity, could you please name a few? I do understand that
>
> we do want to support unmodified OSes and this is indeed important.
>
> But, still, what are the other OSes which do support Xen + PCI passthrough?

I think Roger, in saying "want", meant to also cover OSes which
currently don't, and which would have to undergo more extensive changes
if they were to be enabled.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 10:50:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 10:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139827.258472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIH4-0005PS-JV; Thu, 10 Jun 2021 10:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139827.258472; Thu, 10 Jun 2021 10:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIH4-0005PL-Fa; Thu, 10 Jun 2021 10:50:58 +0000
Received: by outflank-mailman (input) for mailman id 139827;
 Thu, 10 Jun 2021 10:50:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrIH3-0005PB-3A; Thu, 10 Jun 2021 10:50:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrIH2-0004Zp-Sh; Thu, 10 Jun 2021 10:50:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrIH2-0002a5-Jq; Thu, 10 Jun 2021 10:50:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrIH2-0000T3-JL; Thu, 10 Jun 2021 10:50:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ihz/wrPnf6IIBbDL6369vUUBgvKlrfPt9ic5PHUUhJ4=; b=B7KwEowd8R5E1EEwy5QWqplZ/z
	Gu2CKYjLHY5d/cnlzrySOAE+EdClN9DnwhA8WoWF6nNmtPjMAePfNQdFKfbLF3qsAoRpHFctOMuKO
	M6VGLH7e5Xu7YLPb60vyY6/yY71+R4QhbfAKsFnWwxgrxf4nhkX0VcZ1gDzFnc+o44a4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162597-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162597: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dfcffb128be46a3e413eaa941744536fe53c94b6
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 10:50:56 +0000

flight 162597 xen-unstable-smoke real [real]
flight 162602 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162597/
http://logs.test-lab.xenproject.org/osstest/logs/162602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dfcffb128be46a3e413eaa941744536fe53c94b6
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    0 days
Testing same since   162584  2021-06-10 00:00:27 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 10:53:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 10:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139834.258487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIJd-00064v-2n; Thu, 10 Jun 2021 10:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139834.258487; Thu, 10 Jun 2021 10:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIJc-00064o-W0; Thu, 10 Jun 2021 10:53:36 +0000
Received: by outflank-mailman (input) for mailman id 139834;
 Thu, 10 Jun 2021 10:53:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrIJb-00064e-NI
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 10:53:35 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b18dfbe-3d7b-4fe6-999d-acfa82f864cb;
 Thu, 10 Jun 2021 10:53:34 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2056.outbound.protection.outlook.com [104.47.0.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-24-1VUHVzSvMfm0gvTPYaktZQ-1; Thu, 10 Jun 2021 12:53:32 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB2702.eurprd04.prod.outlook.com (2603:10a6:800:b4::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.27; Thu, 10 Jun
 2021 10:53:29 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 10:53:29 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0050.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:53::25) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Thu, 10 Jun 2021 10:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b18dfbe-3d7b-4fe6-999d-acfa82f864cb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623322413;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VZMruOKPvUj3F4CifjynfF7FGm3G6uXlbGRQuTBF1xM=;
	b=AAoC2Dzl+AVufBk43svSf4DUNVNksa/YgYaXEXaoMb2wsjN6dZuSVSNLX147HrD5Vr+Txh
	Ia86/lNqJPqmIcE3fqTM64laQZRT0HVJXaDw8FdixF9deXr1/dy8uUVbzzm/V7O2rnwsHh
	xqcFJrUDNkZFDknUbBFVVBTbV5E2GkA=
X-MC-Unique: 1VUHVzSvMfm0gvTPYaktZQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 7/9] xen/arm: take care of concurrency on static memory
 allocation
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-8-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74756c9d-4b9d-60ee-a70f-47c73ba4c442@suse.com>
Date: Thu, 10 Jun 2021 12:53:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210607024318.3988467-8-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0050.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::25) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 82c0ae4f-205f-48a3-664c-08d92bfdfb48
X-MS-TrafficTypeDiagnostic: VI1PR0402MB2702:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB270202F5FCF5ACA5E734F727B3359@VI1PR0402MB2702.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IaA3EtoOpOHvQ9HEIYY7ZIUXIc06cMfw7kMrWEd75fbTyKBJ8lwfoxCj9sh9y0um8Tyb89C9cKutEWgZWe93bVMuW2aMAoClmy09SRBFOip0l1Oh1UVpfviKol6MThjQVYJc55q6yWTlM4ID9Uv0ImtuDA8w4zGtg4Ic1cgjSBrpU39auzjnP+CAOEdsbaEHVmDLuT3tH268vPaqLgJLTWyDZBelZI/0YiQuslS64+KuFLQjRexpWRiyRJOBFG16eTVX3QNaKvQsM/bJCk3YlLT2vNjL0tm7CllttgIzsKSQajpdbd1Ndi8lXo4dwPSa6OScbfCYfS/QNfgYutCeURbualwBmRejJwMO8ARuo9XafihMemPyFZiL4wmrsYG3ccxutA9V2lDKBObnURKRtJakoetk/tUmzaStLYCHw81e4wxM0H4AIutr1/kis8WJAqZPG2fia5ZfPuvcwbQqkAUwNejt/Cq6KxFLDwneYi3px9TrhCQxouW4FhVk9PZ7BwLxJbRuvqwT+ngdYy/BKCscotWc2jvyKuLz1C2+HCSEH9MFB7vKyCuWqOfl5QsM/r10QXuh7/f+TV66zmwfVLxHHLdfe/cl0z3m+DxqD0tJtoqsJeGhVKalV/kJSJgDmwfRC/rFHyW8blw797Qe+f7vrKfB3XA7qSh/IFpbIDX/Hhkr1GjwZNowvMFj79Vq
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(346002)(136003)(396003)(39860400002)(366004)(66476007)(36756003)(66556008)(316002)(86362001)(8936002)(6916009)(66946007)(4326008)(83380400001)(8676002)(31696002)(478600001)(16576012)(53546011)(5660300002)(2906002)(16526019)(26005)(186003)(31686004)(956004)(2616005)(6486002)(4744005)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?cVhiWTc1ZGxzNDdRbXJNc2krWUFCQWs5N3piNlFtWXhDT2w3RVFzVU5OL0tX?=
 =?utf-8?B?M2d6ZHZzZ20rSUtJd3hSbFkvVjZiVkRlL0xjYmpHUnM4enc0bE9KWW81UWl6?=
 =?utf-8?B?TDVOTWlENHF6UVQ1M20yTmVQQWVRWDZJL3dlYk5RT3VVK2dneVEwMkVNT01M?=
 =?utf-8?B?SE9sRnJGbXhGc0Y1Uk0xcDBRdnZ1cE15K20zLzlHLzBucEt3M0FvRFNReTEx?=
 =?utf-8?B?VnpVVVIvdDdFcFJxWlRWMnUzOHRZQW1CbmFBcW12ZzBSU3lFU1g0RC9saStW?=
 =?utf-8?B?REgvUEx4aTdYY3QxRVJoY3YrM21Pa0k0RzFvaXprNVNubjZQdDFQMDNMV2FI?=
 =?utf-8?B?NFF1Si9JY25OeXNUNUxvNjJFRXlmTXJCRkl1MzA5QU5Fem16NFVualA1dVpo?=
 =?utf-8?B?T0tmQW40bW9sVWR1Nk9kWU1oclZyS3Q5OGRtak1jcEh0STBPdWt3MlhmaHI3?=
 =?utf-8?B?NzJ5MFJmU2x1M2JUZWRYeHV2VlNiOWtYcUNVMStQcmFXTm4rdUltWkRsSkxn?=
 =?utf-8?B?cG9CSzRCd2N4Mi9Pakh1S3A4QXNKUk1vSHNKNGFWdE5vVTNqOGdNMmRPLzBk?=
 =?utf-8?B?dks1T0FWTWR5andrTzlJTTZrV3BvVzUxckdkbnVBK1gvMVRFbmZQeUZsV29W?=
 =?utf-8?B?Rk1vWTF4YnNoVTVVaUwzcisybTB5VFNHWDFGZmRrNU12VDlrODJwK1VzM0Jl?=
 =?utf-8?B?Wjk5c0dQaVF6UC9hcFNaY2tOb0Zid2hUMkZSK2JGVFNGVTBhaFlqcW1JdDZJ?=
 =?utf-8?B?S0lwNnNmbE9MTHl4c2NFS3pGSU1LY1VUQXord0c0Wng2enFvMVlaeURZT3h6?=
 =?utf-8?B?WWFmZjFJTGNXUFhuUkxDN2RxR3ZFWUpLd2VMbFZ6cCtwVllpN2craGhMdzlI?=
 =?utf-8?B?V2Z3M01lMnJySkNOeSs0K0EyeUR1Skk0c2tCa0lKcDZNV0kvY0tGdTNkbWpz?=
 =?utf-8?B?ME5pbEFUMUNuczdwWTJrbFlsVWwrSUpheC9SVFF3Qk1uQlpxN1FKZjJ2K2k0?=
 =?utf-8?B?MGhmTEh2MGlMb0ZHWFJDNzhNMWIwemVnMk9hTVhDSENLakEzRkluUlRZYnUv?=
 =?utf-8?B?cFFtRC9sS29SK2VyaWJBQkRvdmRQUHFiM3NaT0JMOFBWT09tK0F1a0E3R0E4?=
 =?utf-8?B?U01MNTM4VXdUK2J0TGpVQUNPb2RyTnEvOVkwclNFakdNMnJJdFRncTFzanBZ?=
 =?utf-8?B?L05CWlQ4WDlHY296M1NPOGtrVDVkUVVpOEVHVlBocDVJTkpkcWk3ZlA0OWZW?=
 =?utf-8?B?Y1piSmtmQlhuZkJ0elNXQ0IvMzRPRzJKenNQUlNOTGlta0JFMWZKMGRObysx?=
 =?utf-8?B?bi90SU45TlF1WkNnNXNxVWUzZXltaFpxVUFvWTdZaXFaL2ZFTVBiQk9yNHVw?=
 =?utf-8?B?bWZzMnozeXhhTW9YeEZBaE45OXloekdXUDVYRG1xS0NVTW91d25lZU1LaVVk?=
 =?utf-8?B?N1ljUFMzSXRQTkdZd01aUWI4RzNnSkE0V3h5N0JwMlo2WDVISGpGeFdkTXI5?=
 =?utf-8?B?U29abll6QkFUUVhjb21Tcm1CQ1ZMT3h1dG5MV2VJSWM2TmxwN25XZ1BCdWZ5?=
 =?utf-8?B?eGpqaldVUHpUY3NOUVBVdUtKSk5JaWNFTE5xUm9TOW9zWVNqSUZpUmYwWEht?=
 =?utf-8?B?R0MwSXJvOU1JZ0Q1KytYZWsyOXBvRHhRQk9xdEVEdnl6b0ttYTl2Ylh1QVpr?=
 =?utf-8?B?NHB6NkIvTVVtNjRSdjMrc0E2MG1ZRGc1UWczSVJxbDRmT1J4ZnFzRmtwOFIw?=
 =?utf-8?Q?JE4GevLv5UBOukK3jhvH2/T+wyFnmcZ/MYZrURm?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 82c0ae4f-205f-48a3-664c-08d92bfdfb48
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 10:53:29.3621
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yp2UBiM5frMEDDvUi8UEroaPh6ESukxWwKwjRq5Dy5jJ633N4HdbgEPRaz84Xr5uwaO8uOjwAjg2DqgbaHjF0g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB2702

On 07.06.2021 04:43, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1087,6 +1087,9 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
>                 nr_mfns, mfn_x(smfn));
>          return NULL;
>      }
> +
> +    spin_lock(&heap_lock);
> +
>      pg = mfn_to_page(smfn);
>  
>      for ( i = 0; i < nr_mfns; i++ )
> @@ -1127,6 +1130,8 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
>                              !(memflags & MEMF_no_icache_flush));
>      }
>  
> +    spin_unlock(&heap_lock);
> +
>      if ( need_tlbflush )
>          filtered_flush_tlb_mask(tlbflush_timestamp);

Besides the need, as indicated there, to fold this into the previous
patch, you will also want to pay attention to how alloc_heap_pages()
carefully avoids scrubbing or flushing pages with the heap lock held.
Your additions will want to follow the same pattern.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 11:33:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 11:33:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139843.258498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIvm-0001fW-4v; Thu, 10 Jun 2021 11:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139843.258498; Thu, 10 Jun 2021 11:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrIvm-0001fP-1o; Thu, 10 Jun 2021 11:33:02 +0000
Received: by outflank-mailman (input) for mailman id 139843;
 Thu, 10 Jun 2021 11:33:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrIvk-0001fJ-Rr
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 11:33:00 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07d8f4a7-f3b9-4e14-b725-fdd1f55c21f1;
 Thu, 10 Jun 2021 11:32:59 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2053.outbound.protection.outlook.com [104.47.12.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-7-FRwXrX2CO3SjesCVKFISnQ-1;
 Thu, 10 Jun 2021 13:32:57 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Thu, 10 Jun
 2021 11:32:55 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 11:32:55 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0086.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:18::26) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Thu, 10 Jun 2021 11:32:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07d8f4a7-f3b9-4e14-b725-fdd1f55c21f1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623324778;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m6Ao1vY1duG4cub/667UzQL8XBxQH9wjKhboKvMhOYo=;
	b=LAT+dB/pdSvBJPJNFP15+JJHQlHhjCpJKNRf3B+QB+uY9lQ7PMe2n6/QFNjC7EJNNhpyx6
	Yoy6ge3yJlyogufut97L0bLnKnZHpb2xrpOC9YU+zVv1Mb3BlMDdTbUO6wuJeEwwR9tEwY
	pIaxMZj7qnRqHTwIoPncs49QjGYL84I=
X-MC-Unique: FRwXrX2CO3SjesCVKFISnQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fhngoop+kwqu5FbuHeaZHSFPkEgViTt+83buc8t0Bh65UxNhWRgt55OM7U/ln7jjb939r55VEpr+GRd6AtDmUtZ7j8MK3xgNwX09i00VlkcPx+DONzO3WOePoz4uD6qaOKKeLOfPOgmb4TIh/CrJ5SDp3Cn9zU2P4sUbV5kRLuXPhWwCkoV7PGWGCiY8kkEWroFaDmD5zp7a1BWMV6lpv9b+BsWcWAf2fQoG0jYFdM69ARASM70A84UfNvP5xhR6Qv9FP3Q2upa9CXBTtBdatQPIOlU1h14g088xnimMcmYLZXKC6KKVpY+aiKeuJkbalsqz2GwUx0kzTEeIOzUHVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=m6Ao1vY1duG4cub/667UzQL8XBxQH9wjKhboKvMhOYo=;
 b=iqF1MvhjrTCja9fY3Lcg0urGA2APg5NHFUaB0R3kHUhAFGw3rZTRkjHuCabqKbh4EU/z3DDXbxy33L1PkhQPj/1505wNkAaBhm3xRWTGw/2F/s2vnMjbfop0xAAFVMZzV3kuvauK3hRxx88lRSKwVyE5yc+fzDscom7k3V3Gp6m24gb0ekB2kODAxpUZn4z3ybxiK6QII9L1e+gLEw/Y/IUsLKi6r6tcpqQfM5OunY+WQqH5Grh1JXX7ptPdRFOqjPFYmIfzrYYOoIVyPq0O13Accue/YJTxQJFt27U1uvB8U4F1ugKkuA3LSNu/rjCWpDzE0JObyTR6sqI0gZFqUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [xen-unstable-smoke test] 162597: regressions - FAIL
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <osstest-162597-mainreport@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com>
Date: Thu, 10 Jun 2021 13:32:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <osstest-162597-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0086.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:18::26) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4d9e83d8-8660-48e8-0986-08d92c037d67
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB244519808FA02F059230DBA4B3359@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1443;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6s3gwUA/U1giFuJYvFLWk2Yqi0aCfR9UCyfFb4n4XlqBUIo3n09eBYO77hbrx9c15h8eIfknKnTHTeQDP3x7aYkxGDr5WIJSyU/SKqrgrRK+zJHbZjUIO4VnwoEpovgVQgrPc5Y7cgwukhdP44s/7nvOu43a2sqZW47xtxM0gTLC00gfgY/AjlyN/lGTe4eJxjObX6fR9y8+FITsXV67yDgovpQM7z0/XIpr/azXHUBGgH85oBcErF0Omab7wHQ4qC3BAiMqJcW7wKwbVykiDOqDbL0j0T8brK/EbbVptmfzOk7OT5yq7t4fjUtaj77GmIXfVOgyQo+CMaHuiLnEKUmJye9yYrZd2TjyiG3/OLkgjPIT/kPBzNHfUtc1wbWCLWWBd6zcChD/lT65UGc5X+hxlKTstPUYLuB332iXOMjcj9w9+hGdHAItNCf/fYuhFlZ7fPoSVilHT77rKlLB9fBBQgmDgsZWfwQAru19Rb2f9V75Fhnqi4MRa1Dze3ak6kGzuuWFGsYV+8nOFThW7utqq692pf37w2nu1m1PSKcw4926HTt8G2UBUBl5TW2zYVpE1Jon+37xzsdOlIyIrBbj/+8OA/Rvb5yKYafdOC3jXanHXpzeBeOfbsdDg9Ok3GnHyLaP6wJhHg1fo0/jiyhA7ptjpJQZTX+Zb9bvJQPgSERlEAC6d1jgu7aP+yQebEKyyY2PrXzDNU2DeCXvAHxDQSI0tLSixzf2oOusX/Fmkrjo4HsIgWe0WDrz+sD7QkGEAK3vc48Z+2ZrgOo1SxkhbiqFyOEyZbxJpS2j8kC7Hq1C2ZkmgCMAXdEi/ryN
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(39860400002)(136003)(396003)(376002)(366004)(38100700002)(316002)(66946007)(16526019)(2906002)(110136005)(4326008)(186003)(66556008)(86362001)(966005)(66476007)(83380400001)(8676002)(16576012)(6486002)(2616005)(31686004)(956004)(36756003)(53546011)(5660300002)(31696002)(478600001)(26005)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?eGdXaHhJczZQaVc1ZTg3NlpqNmxrVnJsWGJ3NGtiYnU4TWpnUkE2TTZzVGlM?=
 =?utf-8?B?eG9qRStsam5UUHVpSVhCelh3QjJWN040K0R4K1F2dzRFU1N1ZGJZTlRZMHo0?=
 =?utf-8?B?cDFkM2tnem0rbm44MmlmMk51Z0NHdWxHd3N2eEtKT1VhK25YbG9nb3BDTFNN?=
 =?utf-8?B?UmF0N210Y0JLa0hSZitHNUhzb2ZZSE1SWlNrYzNlTTY1bWh3c3huN2hibDRK?=
 =?utf-8?B?YTJQczd0WHJPMjlKRjk2ZklHTm9KcXAvTGRIY0tOMjhjSURXNzZJTFRnMjRQ?=
 =?utf-8?B?ODBUWGY0cHFNdmFVMW5jajZ3NTJHU3N6RTljcjVWTnQ2MFNjdDFQdkcxSVJ4?=
 =?utf-8?B?VVBCOEYvdkVEQk1LNDFrY2pWOUNyVHRuQlF6RTcyL3gwVmZlcGNtUmdIeHFp?=
 =?utf-8?B?MU9yOVc4UEUyNzdhRTNrQkJsWTlmWUg2YXRlYXlwRE41cDVFU3hsQWU2cE1x?=
 =?utf-8?B?bjlORS9NWFVZUVZ4OFZmeUo4bjdkNU5TSTh6K0RzSEMxRzBid3VjejRYOUY1?=
 =?utf-8?B?OWFES3l5aXZndXBVM0V5LzB2YnFuOERpMmR2M21sRExIZGFodmxBN1pJOHVa?=
 =?utf-8?B?UWpwdFY1dGRCeW9RSUIrdkhQdmhTN1VkV3BVWjNZYkhIcG1HUVAyWTJGdzNu?=
 =?utf-8?B?aDVnd25CTU9RRHc2T2FaeTlqNVRXMHcveW04YmNmcmhiLzZvUjBXdmxUUCtB?=
 =?utf-8?B?MU1VY0JKVU4weFVmYUFzcjFaU2xHMGk5RE9HeFZZeC83TjFZUXZGQUE3dElP?=
 =?utf-8?B?QmFLM3JJV2xWRm9JaVhKakxsV21OdnZYd01NK1E5S25UVTExb2F6TDZ6ZHdh?=
 =?utf-8?B?SWplNExsTU1nQ3R6UDJXWGdQaDFoVWdLOW5BNnFDaGRWdi9ra0JUSjNvZ0dp?=
 =?utf-8?B?cjI5MnRHNjVWYm9qTHhMeDBvQ0FiYVk3OERkb1VZQ1RlRUI5bkdEUHFocjFE?=
 =?utf-8?B?clBQV2hLaTl0VGpvS2gvRk4zeUh6RFYvclJyMFJ6a3JZdHY4czRvakZLVUdp?=
 =?utf-8?B?TXV5R2k5MmR1UnErTlNlOHlSTkV4NjVoQzZyZHg4Qlp1TjJDRVhEU2ozdmtr?=
 =?utf-8?B?NWpQa1hiM1NtZXVZY3paczg4K0dUN0s0eWRldllxTHBWR3FnR09FdFhYTUJ0?=
 =?utf-8?B?eGo2VWc1OWxhWjVBV3pudkFpcWhJa0ZkN0JVSVgyb0xaZ0dMWFpZMjVHbkdO?=
 =?utf-8?B?N1dYN3J2cTBOeGhsTEt3L095aTMva3hPakRwUU9NVFhZRjlPWWNndWdKQlly?=
 =?utf-8?B?clc5T2RLVjBZSlJZMmM3VENEa1BKSHJDZnNnaFlISmRwUFQ0UVg1SXZqckJG?=
 =?utf-8?B?QmRNa01Sd1BsTDVoTmh3bVU2aVFEY0p5QVFRaVN3TS9abEJLVXZRTTNjZUN1?=
 =?utf-8?B?ZFNXcHhsTkRCbU0vc0hFMXhONENSMkdxKzR4eG95cldpZlZwRFdqejk5STZI?=
 =?utf-8?B?bTNQTFlnZ1pFc0d1a2locE1WRndHK0VFbFhpYWpOWUV1M1pKWlBTY0V0OWdv?=
 =?utf-8?B?LzJxa1pzMlJ1anQwVXc0Y00rdXkzbDZ5bXA3RDZMbGRpdi9xYncrbkVxK29R?=
 =?utf-8?B?YUd4UXU0ZlRYeHpva1hxRmJCRSt2ckJIb25jMHpzRU5MRzF4UUplVjJaTkxR?=
 =?utf-8?B?VG1abVllS2tZTlBhWHBtQ3pIUEQwaW1QN3J2dXBSNkJsSk1WYUorak53cVBR?=
 =?utf-8?B?REYyYW9FV0doV1lqbnJ0U1d3Q0Y4VkVPMFZwYkQwaXkrWE9ObXlEVUo2RGo1?=
 =?utf-8?Q?ZxQkzFIVT/nXHSWv9bU1zqkm+nOIajJ3W5IJSQV?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d9e83d8-8660-48e8-0986-08d92c037d67
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 11:32:55.2013
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FvWkSDMRu4cba8XWb8QavXHC94klHHsxv+tjVYNF07ufKWr0usV4+ZzC3fqjaM6aWHYnD4FciizYbUJUu/NOrg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

On 10.06.2021 12:50, osstest service owner wrote:
> flight 162597 xen-unstable-smoke real [real]
> flight 162602 xen-unstable-smoke real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/162597/
> http://logs.test-lab.xenproject.org/osstest/logs/162602/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

This now being the 3rd failure in a row, I guess there's a fair chance
of there actually being something wrong with ...

> commit dfcffb128be46a3e413eaa941744536fe53c94b6
> Author: Stefano Stabellini <sstabellini@kernel.org>
> Date:   Wed Jun 9 10:37:59 2021 -0700
> 
>     xen/arm32: SPSR_hyp/SPSR
>     
>     SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
>     trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
>     See: ARM DDI 0487D.b page G8-5993.
>     
>     This fixes booting Xen/arm32 on QEMU.
>     
>     Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>     Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>     Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

... this. My Arm-untrained eye couldn't spot anything in the logs.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 11:46:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 11:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139851.258511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJ8G-0003Em-F8; Thu, 10 Jun 2021 11:45:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139851.258511; Thu, 10 Jun 2021 11:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJ8G-0003Ef-B3; Thu, 10 Jun 2021 11:45:56 +0000
Received: by outflank-mailman (input) for mailman id 139851;
 Thu, 10 Jun 2021 11:45:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVv/=LE=epam.com=prvs=679567fbaa=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lrJ8E-0003EZ-Jc
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 11:45:54 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6465ebd6-8b62-4a5a-b90a-652ccf693416;
 Thu, 10 Jun 2021 11:45:53 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15ABj6SJ001593; Thu, 10 Jun 2021 11:45:52 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2056.outbound.protection.outlook.com [104.47.13.56])
 by mx0b-0039f301.pphosted.com with ESMTP id 393hssg2ty-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Jun 2021 11:45:52 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6881.eurprd03.prod.outlook.com (2603:10a6:20b:286::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Thu, 10 Jun
 2021 11:45:49 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.031; Thu, 10 Jun 2021
 11:45:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6465ebd6-8b62-4a5a-b90a-652ccf693416
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NhqrhdYaUPaW+ow0rsceObB6ictU+dh6LpdNdQRAacEwyzxifTMdrYcbR5a8K/59odhSCMG7jBUdC8s3/ORrTwWjZ1g+llf5uLJJyJ8W6LBpjwpQiotOUxySt6kmW/AVpRPYAkOzsns2RRN82kNnOL7W2E/koKN0zOwguLN0GYv5yk6NUPPzMcwJrWd4uz1iRvAEXgfYNeryJ9zN/2Unl7QVoaN5JNCvc/FJYJI5NfDxMD8/oqj+WOjoS+2ydgWRkZ/UdEBD23J6TAWrGpE7nvbQizjPa/ci6kGOuK8SO6md3NaBtEHLYaeZAARGs0yV0y6IxoYfK9x98Un5ipQQfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a/N4AyG+ggLqAnnE4jf4eKZl3woG/+VJJvDtn5NUYZw=;
 b=JR5BKLhgpszKVFx758xLIztac/fVxYo/uv7KBirkp4BOPhsp48B5pVz2v+5STgHdov+4XX1rHVCQyOfB9oz+UAS+RV8dyekdReMlpD2xOo9ZljsGc+rPMYz6GU3oJb4iEU+s15h2lgFnMyCGC1VDjlhV6ooLgnEOW6JFBlxnKD9sGQ6OPaALSL0qFV7fDP6uOTK5lP3NU8w7mYTJHc3o7Zp4MP+bFoQ5BX/oJNQf7NgEuczaon2U6Ami4nuDR8MzEJrucZnHP8FHxywrxjA5THuJa6W/eMz4fxyAnr7pYKnYxLd9J/BkNCxJneb3M3xYCIpdjcL9uTpCrOoa+SWw+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a/N4AyG+ggLqAnnE4jf4eKZl3woG/+VJJvDtn5NUYZw=;
 b=AgUyMSJgOPIPsQRhabrZ6j/T+DLKoFlwGAxbO3MwQLyCNQvwONYostlrFXLqaO0lIjwmC4qfhE++gLLm7Kw66oltl7sj456ilwrWnFlQN+z3eF3J+Og/NtCvCgndW2MyJUsfItSskwGp2UOpCt718XiIt/yJlrC3R3TAkyauBpB9ADmBLHIm63UI01u55c6sJT4o3io9yTFIpUCLE8Pz2THyFkj2imc/FKZl2ub5+txsEVu9G8Ewnn0B1Q93bntv1INytjubB+aEMvBPq11mYFno1qlE9UmWCZszmSclX8XRsPaPbneCipYEofQOfvqN+J0yp+Fc6/3qT5aDB27wHw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: AQHXWQwW/O3obtZgqkmpGRvt7ZNDkasM6aMAgAAjXoCAAA0jAIAAEBMA
Date: Thu, 10 Jun 2021 11:45:49 +0000
Message-ID: <38cfe7b7-e5a3-2216-f52e-fdebfc7af517@epam.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <0aa4bf61-e08f-6da0-1cda-48e61bf876af@suse.com>
In-Reply-To: <0aa4bf61-e08f-6da0-1cda-48e61bf876af@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1aa696ec-f5be-45de-3a79-08d92c054b10
x-ms-traffictypediagnostic: AM9PR03MB6881:
x-microsoft-antispam-prvs: 
 <AM9PR03MB68814DBD98BAAE900DC91761E7359@AM9PR03MB6881.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 hKnGr3G9E23PjvCVzAeWedcA0n7Rf6ik2TuBOeZ8vTv9fO7Z5mBwom6NZRd4ej7LFmtq3RqoQ3iY/cbWxOjfVb2NKfY9MALLSqhtIsDmNM5j0Bi3/AdN7+qfb7eFRQVH4wLAde8K9MWptHNLz8ZCSo1cLNxGj0qZjBzEKwHSzbvwBB95jaoNqFTpECBD4mYx4LJOsMYP1NPcOSNre/8my0bIDaNX/+ma92EDgm8LZzQVvtEN4g+QyAuVc4EXuB3qGH77oRyF8ymLBoXl2NQANimmZOfsDQW8EjxqJwOTW6eIbc/LhVTpQKEyBkSfrm649+/NmeTvDnsGj3/u03juXp672UkSikpkWFBVZcuHv7QYg2vqPNeKQ0Tt/bVb5yPOMlryoUB2PAEsOPWbN/b1VAHVxQ9+wbx/Sd5uaKIAhckrmzKe06HTn6fdGAwtCqQr1Lz5jPNz1/NAyY1HLu4HNanbQsD+QBXrInVK0xNuQnpV7WgCz5sKRckN49uUzbYBl4iGOwpyhwb8wSwMhXB5jMalvu16Q85QCMII0JxQdbwNSOERvOosKm+NinayPSFspDJ2SikTExOTtEI9LonSennRXNMiwlDY7wcKxvzVsuL6Ec0mm9pZEp1TGlGh7TVOEc2L50Nr9eKvzoWnCKFZffHiPWToSv49Mo7vWWIYnS/4+PynNh3Rw/ZEga/J8SqZ
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(396003)(346002)(136003)(376002)(39860400002)(6486002)(122000001)(6512007)(8936002)(53546011)(8676002)(38100700002)(36756003)(71200400001)(83380400001)(6506007)(2906002)(66556008)(76116006)(66946007)(31696002)(2616005)(66476007)(86362001)(31686004)(6916009)(5660300002)(54906003)(316002)(64756008)(478600001)(26005)(186003)(4326008)(66446008)(21314003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?VE1FZlBVVGRrakZoZ1pMbmxUZk9LKzloTlJ1NkR3SFNxT3RPZWNUMlk5cXMr?=
 =?utf-8?B?eFdqN0xKdmZtMWt6Y21LZmVlQ1A0TmdqbCtJQ2NBcGtEZSs5eERVWG5KN0w4?=
 =?utf-8?B?NFo0K05zQSs4OWYwMTB1NGJGVEwxYUJZUzNOREVnajYrSWxxRk5UaEc4QUdp?=
 =?utf-8?B?V2E5SEs0Um44ZEV6dWQyYnlrZjNua1k5SkNHSDNTcS9IeWtlc2M2MithQldU?=
 =?utf-8?B?WUpSa0w5aG1oM2tVd3pFK3F5MjE2VFpIQ1VKbllETzMwVDBXenkveHAyWlM4?=
 =?utf-8?B?NDVSNlhSME1HcnVjOUtNVVJRc1BLc2R4QS9mZGE0emk3eHRPVlZMbFhJREpz?=
 =?utf-8?B?MTM1YjBncWd5eVhpVlRJaUMvZVFQcFZoQzB0TnNKaHNDVU8zWmc5WDJ3SlQ5?=
 =?utf-8?B?M0pQeDQ1QWFRbkhMN3dwR2I1ZlBmdVFCQkIveUkrcCtsTFMwcHg3N1NqWjNS?=
 =?utf-8?B?MERmbnpsUlJQM0kvaVZpdExnSnZ3YmhVUHJyRHA1YXJvNjNlWVU3eWMzRjd4?=
 =?utf-8?B?K0JRWk5qblRlL1BmTFNtS001R1RuT2dNVWk2SW8yeGlNMC9FSU5GK2Rud1FV?=
 =?utf-8?B?S1RWM2VtSnZzbzcwRXJtS3RGb01EZFUwcTBQdmNEcEpoZEI5WmxpYUpqK3Rk?=
 =?utf-8?B?NCtMRFczVFdXVUttWkxNeUVqYmo0NFJlOUlKN0hxWkpEVWV0NjhWT2tvdFlU?=
 =?utf-8?B?L1E1MFFqYlBsc3NST2NCakliMHN6dDB0cEdKdGtDRWJGM1RZTG1oeGVOd0Jp?=
 =?utf-8?B?dGlBMkhmbmZQU2VDaW9ndWxUVTVwYUdKS2FtcXlTSGtWb05aeUFGSUFpa0tK?=
 =?utf-8?B?VC9wSEpta3VhUkJhNlZvV1ZGZkNLZm5GUmJaeUYzcmxreUY1Q0J0NmU1MEwr?=
 =?utf-8?B?TnlSMm1uS2NxcUExNWtMTTdmWERRZWx0eXRBcGE5WjhRT3JTZnNDSWh1TXhM?=
 =?utf-8?B?aThXKzBSbHJGVnNHYzdJdnFCTmwzSXJiNStCQ2dySE9aMXA0UlV1VkpudkVI?=
 =?utf-8?B?UFZLYTZFRG0xWkZGTUg5VDZwSk4ydlNxVENSd2lIYU1WS3VkVnRCbnQvbTBa?=
 =?utf-8?B?Zmxoc3YyaDVJU2JJWk9FVGxiZzRreGhjeTg2QlNpSHRiRDUyOW1KbHpqUFhs?=
 =?utf-8?B?aHVvSTBqaTVtUWdXS3hiWEk4eVFwTzNnT3RXNlN2S2g1RytEUTFod0tIMm5m?=
 =?utf-8?B?WjJ3UkVLZVBaUkFkbHBNdlVsWUdTQXBaTCtsakVveVFhNStWRVhQMCttY24z?=
 =?utf-8?B?QVh3eTduSncvMEV5WEdhMjlLMnUvQUlncEt2UlRoUmN6REd1ak5qM0QycUNF?=
 =?utf-8?B?b1E1NGp6VFRtZzFmWmdjOHFKdDl2SVh4Q1g5MjViVmJlS2ZDSFloTHNxRC9P?=
 =?utf-8?B?dGJKYjlqaXRmQzBLelFlbCs2OHBvTllpaGcvamdhaEN4K1FiaklWcklsWjhy?=
 =?utf-8?B?bU1LSTFwVFJkTG10VDdxYnlneVZDMzdCNnE2Z3BleHlja3JqMHpDL3h4Qjhj?=
 =?utf-8?B?RVZ2S2YzTmtJZVNXZ09VQ2VIbWhSdTUyb1VXeVkwTHFFWTl1YjVoNXNwYnlh?=
 =?utf-8?B?TllUSWNiUGc5VWIvL2Z5R2xqWkRhM05BSnhpVjBhSXowVnQxRGVoSk9ZNDdY?=
 =?utf-8?B?b1cvcWtHeDlaSmQyT1dBY29pUHRDeUVESGorTWdQUWpYR05GNGpoaG5jOGt0?=
 =?utf-8?B?aW0zZEMvL0t1Y2hQNmRObHJlcHZObEhmeS9wU3ZGMWZaMUxxYnlVc0k0cnht?=
 =?utf-8?Q?M/ji12Tg8B1NNot4rb3i1YdJgUqs2FBIXa9tXP7?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <43DA9A3E26C8704EB70E6C98AC139036@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1aa696ec-f5be-45de-3a79-08d92c054b10
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2021 11:45:49.4513
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: g0eTbNu5aIwxxh9su4R8oC1835FDVgg/aQ7KC31rM/rBlH5eVaRt+ujV4Ax4DfYANyuo+RmEHI70wWPLfITepsuD6vIWbijBcAPHWrf2s1V/f7k8dmLtFCuTy+3arE39
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6881
X-Proofpoint-GUID: 5s0JJGMc7zdHYUD8MqZEAPfqohBDW5y0
X-Proofpoint-ORIG-GUID: 5s0JJGMc7zdHYUD8MqZEAPfqohBDW5y0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0
 impostorscore=0 priorityscore=1501 clxscore=1015 spamscore=0
 mlxlogscore=802 suspectscore=0 lowpriorityscore=0 phishscore=0
 adultscore=0 malwarescore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2104190000 definitions=main-2106100076

Hi, Jan!

On 10.06.21 13:48, Jan Beulich wrote:
> On 10.06.2021 12:01, Oleksandr Andrushchenko wrote:
>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>> OTOH if we properly trap accesses to the SR-IOV capability (like it
>>> was proposed in [1] from your references) we won't have to modify OSes
>>> that want to run as hardware domains in order to handle SR-IOV devices.
>> Out of curiosity, could you please name a few? I do understand that
>> we do want to support unmodified OSes and this is indeed important.
>> But, still what are the other OSes which do support Xen + PCI passthrough?
> I think Roger saying "want" meant to cover ones which currently don't,
> and which would have to undergo more extensive changes if they were to
> be enabled.

Fair enough. Do you think we would also need to re-work the existing code
in Xen to support normal devices (not SR-IOV), e.g. we currently rely on
PHYSDEVOP_XXX and other Linux specifics. And even if SR-IOV is implemented
in Xen this won't allow those OSes to stay unmodified, including FreeBSD.

Is my understanding correct?

>
> Jan
>
Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 11:58:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 11:58:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139858.258522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJKL-0004hf-KF; Thu, 10 Jun 2021 11:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139858.258522; Thu, 10 Jun 2021 11:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJKL-0004hY-HK; Thu, 10 Jun 2021 11:58:25 +0000
Received: by outflank-mailman (input) for mailman id 139858;
 Thu, 10 Jun 2021 11:58:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrJKJ-0004hS-I6
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 11:58:23 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34daabfc-11de-4734-984a-776cca21dc93;
 Thu, 10 Jun 2021 11:58:20 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2058.outbound.protection.outlook.com [104.47.0.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-28-TjGd1bgLPzeoMvIx1dUZgQ-1; Thu, 10 Jun 2021 13:58:18 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2447.eurprd04.prod.outlook.com (2603:10a6:800:53::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Thu, 10 Jun
 2021 11:58:17 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 11:58:17 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR10CA0078.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Thu, 10 Jun 2021 11:58:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34daabfc-11de-4734-984a-776cca21dc93
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623326299;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zB+Ifc4MXF4+UfKXXK2yzfStPqUuJORwZZdfBgnCuQU=;
	b=mhdOChcwIch6FdP0uIvLkcG0IyRR+aylGaCcVtcWus/G27y1tOjifp8cTdzoOWcWnaU28w
	rbptu7B74pS0N1jbPLCvI5eLc+w5DArHPnb48+tSbnJ/CtOI2mOFy9mzA5FyyKzZmlTKGN
	hj7Y2ULfTYfMD6052pNHU/1jX/dZZ4w=
X-MC-Unique: TjGd1bgLPzeoMvIx1dUZgQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m74IX+Z/TzfzUxl/BVeTg40hEauobpOFKvcGQuLUPnQLfWhexJJp086GDtoh5p9yp9uW/Vi+iTfGaDhmAmQR7xYpYxhjIRT8UyfdzLGgRNn53+edUTNg8mBgyzc9nwpPH/UXou5psdbnQ9bQqpiFi7dDwk57Q0v33mjZ++dLTciki7lCecQQa2+80xgpRwPAu+uNslQbDwGbubxPxO21vAT2Ggdg8lv0YLgl0cv0DowC6/cdbM+ECNXUwrY8HUFa1hyIu2DHaAwm2U4TlF4JxSWKW0qr7LKy1EB/05gu37fUtaGaulD8Yhnv6o/wiafCaRo2jF1Wavh4KJB3oTd/TQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0AQ6SoYlc3WS+MVGsuSs3Udt9q2tHA2ImOQ9mn0ikFc=;
 b=B2IcZUUUTeqU0fjpAt/tLYwLQbGhUTy44sk8QwS1ffVyGOQtI2CPrrDWiOHlIq4pewCWG9TdBr9BS9qjo3R27+uIDp1tqcaFkMwAhujXeKEhBLt+VHkENb7wIrTZ3FnIkKxcuHbihJAtqJH+lI2QjJgMdxFs1AygiMLu3lKBSA/FZ6xWnfnvkG88DVJP7KtILCe6+aliuLUmB43ktLPmvZ51K3Xj7THnBWu+Gdt3DZnRZaiK3pjNtljeBptirdFKICvTc6CgJMVt+VaKUtKC4DVZMRr77UHgjGame4gSakb3zhWqZsS6SyhBWe4RxEeUvItTxDw7ej2sRfMxxhdB7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
 <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <11f4cb58-d0f4-f735-2b18-5e02cb6950e4@suse.com>
Date: Thu, 10 Jun 2021 13:58:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR10CA0078.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:15::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 99569aa1-f59b-4908-243e-08d92c070888
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2447:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB24474F7E28491A5B313D64F6B3359@VI1PR0401MB2447.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	h2N4VehMkB+ozmBko3EVwfRIQXBUYxe8RCowjhCc07sMGVelfyWxXE50KkT6QvynGaY/bNl7yHNooeNy3j1iB3nljH1xIHA4oSOG407HfpJjBrcT7RUbb1VcIx2ndPnQnHKjvrkMIdztjghZ/aulgpB5ZGix5yv2UMU5dfWPmg79H2ShEzA+2ax0h7kgr/XiMzlsAnK0Kb/fCwIlyMNTnf77IhCuymwtqTkZ/tHcQQvE4TfkEKllhV848tBSdarhUlhQIO3AWysiaGQ1MbCWmPjd1YGKbC7YksOsKeRskAdoHpvR52ApQfDCY8kR9qu25UpVb1afb7n+2auGWKd2YVgvs2fM5Eh4kdtQXAPsoOp0hzqcWBmNAognvZG3mTBjRvIxu7KobRnoqs65+O200xA02KnT0hHQH7iQOAxYZ3lx6XigZ5d0mCBp6YXV0/3VQ7RPedZ2y5jcU4/AIEMu5ekUX5lC3DulriyGved52mIl7H9jhfxbn6yGvthfxAxRSSdPSINjOiLlaLfvfLEjvIHW9r+0yqya9YQp8iCghtULVf9sbAfqiQbvcF08Aj2blZ2MpKAxFTS3QtRSI/MKZACEXssNqyc96EAhFhO+Xni9SI86/Y8oz9ew/PVgjG6PBOzq3JvNgTG/6ZhOMWmk77AOAVpwIo05Yh+kDgRA+CjczCPGymhwk1POkRp/Q38l
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(346002)(396003)(39860400002)(376002)(8936002)(83380400001)(86362001)(16576012)(16526019)(31686004)(6916009)(2906002)(5660300002)(956004)(8676002)(53546011)(26005)(478600001)(2616005)(36756003)(316002)(38100700002)(66946007)(66556008)(54906003)(6486002)(66476007)(186003)(31696002)(4326008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?EpfHnmEBvM9KDoV/dn3kjOtPof1ERxWuPIcjmNQu7nSIZE3+bxRlBAFZc2xB?=
 =?us-ascii?Q?cdMJIzdZSUlmq+PsOJUMCVMvnYAVWgnVzDhYDm1fmF5xlEHnzOpf3DRTXJYf?=
 =?us-ascii?Q?GdKtqecyHESiPtVVdp5oqVRp4qLTjOIRwqfcDolzi+NOSOMw6sMhI/5MTNQt?=
 =?us-ascii?Q?UbjXduTUmQ1/dvmZQrH3Iz9YV1d92zSBaSHn9loS/0zD9hKgS/oct4I1Wi2s?=
 =?us-ascii?Q?/hOhuoeNvYVDb1N8c7wi4stH/TogMmiu1r4YRRZyfzosVX5cbG3Ev2yf9bL4?=
 =?us-ascii?Q?x4pmNECQeLfGYtkKFuquCCn0jXMFo9WOzj9SBEaMJy317kBfTe1TgKT5Bt5g?=
 =?us-ascii?Q?XmQscut/WBth9RrEi3MFO6w8VzycR2FEHsM+VF5qtwePBDjenAGZ/YkmodaR?=
 =?us-ascii?Q?EaIrEHCAI6L7CYFDsR1t4xzEiioMtlrRx993gWV5QfvXCBm43vn2tXSaErmG?=
 =?us-ascii?Q?kznGuQc1yaVk+jJQdRnNp9fe559VMG/GWywu4LTVPVIwpbfspcs9z1ssAIhE?=
 =?us-ascii?Q?2jNEk8XHT3G/C+rpKsnfoVTUiVfh0wVw2XbZV7g9JaPq/yXmGZmVuMzfTPkX?=
 =?us-ascii?Q?i+BJtIVn0upuFp9gqjQFjG+VBrtkRJRtA5PGgym8Qkez1Gp6ACqq2gqu+1mn?=
 =?us-ascii?Q?o/1o9PSF0AXEZ9CbtAaS8wrJOoQOzN70eIRRBp48/jqa5ozy8ZHRcwg4M178?=
 =?us-ascii?Q?BJ+TImuPJq3LJ13qoRlyYqTe864S8R/TT6RbeeJ44iBZwpcctbDBJo42SZL5?=
 =?us-ascii?Q?y0VQZ/ohznZUKPU7DnYmqYul9kI07f4HxrfvOzszwvPUlOYT4wT1y2NEI5Uj?=
 =?us-ascii?Q?DwdoT9u/N7kww+ld5+6EtMA0/Brd5W38Z7CcYt1QbMxfBWVgxhQxEEhw6UUH?=
 =?us-ascii?Q?f4tYl95A8s0x99dkecMvTU+P8H0r417A6RrQ6J6h+sNUgTXMqn2M2H494mCb?=
 =?us-ascii?Q?PU9Do+eIet/ooVYW7Lf7AD6Bbo0Ox2eXfkZP2b5Da0maGnrETM+ithhiCCdb?=
 =?us-ascii?Q?mLHhJs8na6JqAH3ocYo/pOiDZD/W3+gk80k0fcK0aX5nMPTbWEXMJOEXQb7Z?=
 =?us-ascii?Q?PD4Muosqw6p5fZ/nuxfSqEGNyLRHPdH+OCeM1krrD7il9bu+uvB2BHQjF+ff?=
 =?us-ascii?Q?NCA1CGhOItksD8PNFPnORBj1lGqbRG+XgK4mVnFEbkIMdsI6dmlHI9BoIr2E?=
 =?us-ascii?Q?C/27CFh0e6a4F5Wo+Wj2qF94K94m2Zavd9ROmv6q4SG4HdlmgdQ/aOecplBG?=
 =?us-ascii?Q?Q4xVHEyin6JFVrL/3RRbFyPp9x8Beuputl/ZPgEO/QLmEORxgjS/1E1fbQ+g?=
 =?us-ascii?Q?07c7abfJvmOF0qRADrmUki/3?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 99569aa1-f59b-4908-243e-08d92c070888
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 11:58:17.3536
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PwYLMQjE9hsAtv10Fwvm/QZMiiRcHn4AWubPO8wZcM96Qm8pWWc/8Of5f7ML0uymaZYqKEg8j6eVWOj2eX1hpg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2447

On 09.06.2021 12:53, Andrew Cooper wrote:
> On 09/06/2021 10:27, Jan Beulich wrote:
>> It appears unhelpful to me for flush_command_buffer() to block all
>> progress elsewhere for the given IOMMU by holding its lock while
>> waiting for command completion. Unless the lock is already held,
>> acquire it in send_iommu_command(). Release it in all cases in
>> flush_command_buffer(), before actually starting the wait loop.
>>
>> Some of the involved functions did/do get called with the lock already
>> held: For amd_iommu_flush_intremap() we can simply move the locking
>> inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
>> the lock now gets dropped in the course of the function's operation.
>>
>> Where touching function headers anyway, also adjust types used to be
>> better in line with our coding style and, where applicable, the
>> functions' callers.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
>
> Honestly, I'm -2 to this.  It is horrible obfuscation of the logic.
>
> I agree with the premise of not holding the lock when we don't need to,
> but moving the lock/unlocks into different functions makes it impossible
> to follow.  (Also, the static analysers are going to scream at this
> patch, and rightfully so IMO.)
>
> send_iommu_command() is static, as is flush_command_buffer(), so there
> is no need to split the locking like this AFAICT.
>
> Instead, each amd_iommu_flush_* external accessor knows exactly what it
> is doing, and whether a wait descriptor is wanted.
> flush_command_buffer() wants merging into send_iommu_command() as a
> "bool wait" parameter,

A further remark on this particular suggestion: While this is likely
doable, the result will presumably look a little odd: Besides the
various code paths calling send_iommu_command() and then
flush_command_buffer(), the former is also called _by_ the latter.
I can give this a try, but I'd like to be halfway certain I won't
be asked to undo that later on.

And of course this won't help with the split locking, only with some
of the passing around of the saved / to-be-restored eflags.

As an aside, the suggested "bool wait" parameter would (right now) only
ever get passed a "true" argument, so I'm not convinced it's useful to
have at this point, as then we'd also need to deal with the "false"
case (requiring a completion interrupt to be arranged for, which we
have no handler for) despite that code path being unused (and hence
also un-testable).

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 12:02:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 12:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139867.258535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJNz-0006FJ-HL; Thu, 10 Jun 2021 12:02:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139867.258535; Thu, 10 Jun 2021 12:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJNz-0006FC-Dg; Thu, 10 Jun 2021 12:02:11 +0000
Received: by outflank-mailman (input) for mailman id 139867;
 Thu, 10 Jun 2021 12:02:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrJNy-0006F4-KO
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 12:02:10 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7f6f014-7577-4a11-aa19-9e4b2554b201;
 Thu, 10 Jun 2021 12:02:10 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-n8gypHHqMf2FRq2LMe-L3Q-1; Thu, 10 Jun 2021 14:02:08 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6175.eurprd04.prod.outlook.com (2603:10a6:803:fb::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Thu, 10 Jun
 2021 12:02:07 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 12:02:06 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR07CA0032.eurprd07.prod.outlook.com (2603:10a6:205:1::45) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Thu, 10 Jun 2021 12:02:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7f6f014-7577-4a11-aa19-9e4b2554b201
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623326529;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0HTaZgCRt0TkfzbiM60ImI+eb6R3SnEDiFO+kQHa+no=;
	b=VZsO0ZBocGaAE+pzQOVXbyDYs46cXpPfZKyTOCQEQeNCjNjMXd5Q9zihoVB03Lbi7E71ZW
	6TKBaOcmYSxMFr0NYhuMIVWejB/A2cyAgroe4FHx3M83X8gGD5OmWS5c0ZHPlkA+S8HruH
	E+TcHHuLXZ9r2LarcNxGNirAnZTqdhk=
X-MC-Unique: n8gypHHqMf2FRq2LMe-L3Q-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SxLDbCkXMrUPDISmgUdSPTN+CHnwN9gX9MC44oLwRNO0VO0fFywAOO23b5tBmvy3BDRSKhnLeUb9zuxmbwx8Jl8njiC+zrJJN5cJKaZ9t4ffkQrp4pAU0srTbSzmRot3fjLuMtsUL07xmyQ9nVbTM7Xdt8j13PHbBEsGDmdx+igxgYrLrKUk4+JG+DL28hppY7fyGPNS366ZAgLXywzt8G9qzg6cORR3mlt8c6adpTYvTXkxCOUH0OJJ9qzT6doXeT+xJd9doQZgfO3WKWXmqACh30p9RajUeg40MW6a+R8se5QO+Q8jc9sG9vINltznvHYRDrQbCtXPBipBefhHFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vrqhD6PTMrm4PTQJlWlqfUI4BRXNvGy/TXalfvjz0Ig=;
 b=DCNzE+zJb0lEOy47LkGLQB9zsLBGraRYPuzNwjQ7UIldVnTCrYyjmfGMzEyk8HJRjNDlUIMDW77ZcgbG5SDEtYN/uTSUtWjx4BpryEL4bchreVdl9asu3zbq4WbmTvT/UAmysMK8iav1gLhROvqfM/NN0gZFOvanGloHjSi+krRFEcT5AQej95+/23xJikbRuaqZdU/XZYqEhSblBCuShCYh1JuRou5H3kPs5NHOSKv6SovxWwIxrBF4mOSQMJB4JODD4Aqx4slhsEKrpG+tPwwueK8OXYeMd2SOs7rJcdXbzt7pvx+02A67Ep5gvwbWfaegr3PRC8AObbzKYUcb0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <0aa4bf61-e08f-6da0-1cda-48e61bf876af@suse.com>
 <38cfe7b7-e5a3-2216-f52e-fdebfc7af517@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6b6142fa-3dc7-cd54-a40c-d4b9ef47afad@suse.com>
Date: Thu, 10 Jun 2021 14:02:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <38cfe7b7-e5a3-2216-f52e-fdebfc7af517@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR07CA0032.eurprd07.prod.outlook.com
 (2603:10a6:205:1::45) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 80b7ebd6-3e91-454f-9ed6-08d92c07917f
X-MS-TrafficTypeDiagnostic: VI1PR04MB6175:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB61754D367116D5FE2C47A054B3359@VI1PR04MB6175.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dFP0Wm6/pMbrFvsRA3w446jJk2CfvisohDFDZbWyQo2iZi5PXB07CDkYgvIsoITROuPgZ3WRHEorkOdK/GCSgj2BgtcwzwKrST6veW5UTMOWJmb2AK7DnKP1WAV07GTPXpY24LWuaLUET28e2TF5D6UdfntGa0DKq+IfMKfTg/QsLj0tu23SXJ9Eo/mZfYaeahN6h+Zb1ZXm/U6IJb9iMXrqMsz7iVEI9vczBjrl95uC+e6Acb+1U9p8UMgyGhKYg0piZra0gsVcLiFYQZELDxydSE2jwBW7kjP0Xf2m0p0AOoJFDJNB0sVr16cBKmDapkDYCNdP1KtIXvLJkS3s790iSkt2USiOKevLEAEcsSvGlVqClXUqoeSa/Gkztc921Baj3HWVIgRTHc9ab6d0PWiKeVFosF7QsqFPVveJEwQ2gLN+jVOnbfwmDsguL2tVOaLL2ZUeYMzpN4jbEF5F7OO2uranB+XwIT7WRwhN3xvoKIWL4FRPC6S8LtyLRjWUmX54Qf/MBxS4DeG6kb5luM5gfGeTqElImUTXDs5CmI6s3Jau2z6aRPL+HBcnZSQVCIRg97bco4cjmRVT2VMz1KM3e93oig8ytyvpQwy4c99p3Dc8mzHgPYED093DAt8S/WrJ0KA5o1LsATPPD6I2O85bxPoiQUMkLMZLMbTEChgEeKscxTdVNmeUg5pvT/sUtfJTZ0dd3ur/yB5ELddCDw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(136003)(366004)(396003)(39860400002)(376002)(83380400001)(186003)(8936002)(16526019)(31686004)(86362001)(316002)(4326008)(36756003)(8676002)(16576012)(31696002)(53546011)(6486002)(66946007)(54906003)(66476007)(26005)(6916009)(66556008)(478600001)(2616005)(956004)(2906002)(38100700002)(5660300002)(21314003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?/2TJvnJ9faWA/iGi76VnP45irqOez1ARaYk0cuFdptzvVKPO1+BaZzUuNxKY?=
 =?us-ascii?Q?7WTxyw0rWbG106i/C1B1KWBRKzJkSPkUD5Y82Szvy19XPERArAnc3ysSbg2S?=
 =?us-ascii?Q?HMN3idFxr9EoVkrozSsl9OF5xgKCZhVAK+iXudpiYMFQHmXbv/1qs+FGkz2N?=
 =?us-ascii?Q?PZaNSXSjJSUX+HaJMtQ5Y7YBBRGgARM/lzs0cY1OKPVDiURIoFww/J8PhP+W?=
 =?us-ascii?Q?bOl5uUXuP5tt8izTsdjHdYup3y+b7LAjX3ZgTnBKYHGFkr4xbz39K3dAtAIy?=
 =?us-ascii?Q?aBBISXjsgNkTaL5OiOMSPWpGyhtaZW6V9dW7q90QWwbiGfUxVebmO+FbuHw6?=
 =?us-ascii?Q?FbUSq0wT9qP454gkseI5JIPjeOKAvitSb9VMrw5POAKrZCwGdG6UwIdAPqTk?=
 =?us-ascii?Q?dOEmr0LzrYVwz5rwdcrbjJEc9qMErlnhPjkf1z4e6zAQJcwApL8/B3/NiCDo?=
 =?us-ascii?Q?M8PcGeyRAbumyaP+PttUYtbEQZI63AMTNjNxmpU2770IUXxm4jVEeWT2yWN/?=
 =?us-ascii?Q?bB87l6lrkkkWYGCAKfakZ1snUuTOTOnsehbUev1Iujs6dzS1m1KxCl3ovuCw?=
 =?us-ascii?Q?dTcNnuTL/P+w/MYZ5VqtLW4p9GmacX8GZVilOvdKYH4E3vUnWhkLpHAaDJN5?=
 =?us-ascii?Q?MVroDpcoyAgB0YYKMvVnHvAOYkeNGpFJcUx684sqQn5+4ryPMMTcHLfHazCj?=
 =?us-ascii?Q?EEQTEQ3M9SeYzD/T0BQonbex0oPU8VZB33m/ZMPZFkdT6wbETp6H534cJVfN?=
 =?us-ascii?Q?wRguOAj6wCcqYsGnJgsAR79wQOgGdzTWh+FPepNCUB5prYSuTNJIz9+e4QMY?=
 =?us-ascii?Q?mh9R+ysO2akombfEKwQKdmNw/+2Wa1+enDHilLeOgo+D+G9pZJYncmVuBMTc?=
 =?us-ascii?Q?vedDyg9RBanT5I0uI1gZt7ALMS+Dc5LJImy+XS6ztOjlYYco1iLg31NOMlRl?=
 =?us-ascii?Q?W1vYSGI0X0JKP9WI1QQKekQ/s/ufQ3c2v78titQlkq6Y40DlW6fB3H8aaGpj?=
 =?us-ascii?Q?uM/+za0VMND5VhMTbwxZ8vzGSLKKKoW/0n54bfdX5okgcZnRhGtgXXsN4ddu?=
 =?us-ascii?Q?hBgAHkCRnrYZ8UKycTFV2B97dgSR51ey6jVhSvFgu5j1RvwrlkNaEQnk5FRn?=
 =?us-ascii?Q?ZBzlAKp5oQ6UotFFDQQlcYPDzDNQuiN2HIF4mAiWmAiOSiJkXFKskE+RpFzk?=
 =?us-ascii?Q?i8uAz1IEvlC9FRxVUwiZOPqG0vED35Fnf66dB2TOeKvKt7l34IOFcHYG6afz?=
 =?us-ascii?Q?1eBxsXPVx2XSI5KrYq9MseqGkIo4NfHBkRgTN7LaTwROV7ErHhR4rrZvAfqG?=
 =?us-ascii?Q?QmMgwSvRbOnviDNM9CAyRlnu?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80b7ebd6-3e91-454f-9ed6-08d92c07917f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 12:02:06.8668
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nTo2xxdBtiEAJ/IJahV8/GDW8n7i/37+WULG4J+k9LnSDghK1iqRtjHbjMBw17fAyZrfrgIGPdQ8W84zlQyk/A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6175

On 10.06.2021 13:45, Oleksandr Andrushchenko wrote:
> Hi, Jan!
>
> On 10.06.21 13:48, Jan Beulich wrote:
>> On 10.06.2021 12:01, Oleksandr Andrushchenko wrote:
>>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>>> OTOH if we properly trap accesses to the SR-IOV capability (like it
>>>> was proposed in [1] from your references) we won't have to modify OSes
>>>> that want to run as hardware domains in order to handle SR-IOV devices.
>>> Out of curiosity, could you please name a few? I do understand that
>>>
>>> we do want to support unmodified OSes and this is indeed important.
>>>
>>> But, still what are the other OSes which do support Xen + PCI passthrough?
>> I think Roger saying "want" meant to cover ones which currently don't,
>> and which would have to undergo more extensive changes if they were to
>> be enabled.
>
> Fair enough. Do you think we would also need to re-work the existing code
>
> in Xen to support normal devices (not SR-IOV), e.g. we currently rely on
>
> PHYSDEVOP_XXX and other Linux specifics.

Yes, work in that area would also be needed. For example we'd need to
scan buses / segments as they become accessible. Right now we only scan
segment 0, and even that's only possible because on x86 mmconfig is not
the only way to access config space.

> And even if SR-IOV is implemented
>
> in Xen this won't allow those OSes to stay unmodified, including FreeBSD.

Of course, it's the nature of PVH (as opposed to HVM) that OSes need
modification. The question is the scope thereof.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 12:24:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 12:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139880.258547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJjn-0000En-Dt; Thu, 10 Jun 2021 12:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139880.258547; Thu, 10 Jun 2021 12:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJjn-0000Eg-As; Thu, 10 Jun 2021 12:24:43 +0000
Received: by outflank-mailman (input) for mailman id 139880;
 Thu, 10 Jun 2021 12:24:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrJjm-0000Ea-Ct
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 12:24:42 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c7e5254-1ab2-4cf4-a012-87411542d0c6;
 Thu, 10 Jun 2021 12:24:40 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2054.outbound.protection.outlook.com [104.47.10.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-qhl7JV2DNMCIVn_qkT6EdQ-1; Thu, 10 Jun 2021 14:24:38 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3935.eurprd04.prod.outlook.com (2603:10a6:803:1f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Thu, 10 Jun
 2021 12:24:36 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 12:24:36 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0024.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.22 via Frontend Transport; Thu, 10 Jun 2021 12:24:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c7e5254-1ab2-4cf4-a012-87411542d0c6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623327879;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L/XrWhPHwPf7SqkXDfSvxlHbbTeCx5QQ/CwE9JbO2ic=;
	b=JgKA2xy13RwU+6Cybb2+Iyd1k3mO3JYJYeqjh0oBXH50BUOJUxg8GHgc6a9zMEDNhqfiE7
	GezwsBAOuIo7mqp1nIdk4P7A32G0vMIPge/uA0FibmQxUi4MB/3vX/JMInj0IrWgCZfK3x
	K8RiDxyaNZB2qe7irApd2jG6hGLZ5ls=
X-MC-Unique: qhl7JV2DNMCIVn_qkT6EdQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bpt+rlpaFxMGidkfPJHWmC+E0QYYx6is20OvEUvCRt29WRBtbFn6ZpBJuJtw8IrV5Ie1ZSNktjJSbF2b3rlXufEwr9nareKyFk+1LdROSKbyVPGNC9y9iKosDKtcR73kmnuvDBeVmzAv+z0KDSYLZ8Z0Tct+qAmfjy6K5C2mKWwUYx/TANt6Ox+12sZUQ56B6wMPPtortiW58GfWIQPpz+TgFdJx4qfb6i0c8vvOYqUhmvDBYkWcmYkvZm8bSuKtJy2TNlTtiO4LuL3BBC/3q7UEkKR+0KbQdGK9ivueT4R6KPJ6RS8FJjJev/ZtE+1UFCAz6Vu+8JUjlzIfy6ougg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dpw2Tc9J6gCifNz782V1DgMZt/Mp+Jf78OlFyyxmuT8=;
 b=dgPJmb6DgnsuseWidSMdfPidHf6b8mSy4ClDEET4MLdLAWxc9I+W/QbXtnOtPU/38PbqZUG4ddFizJkSNPgt2e4PSzZqanzqLR5yO/gSY9VYKocWPp5RWcNIo6b16GguxYhd/UocUlkE3ikywFfb7LT+UAWctJzSeJV4Zqidz6OIA7/LFPx7UWObwC6nQ1jucKYmNBaxOlnE1rgZoB5oP85LeR8RhF01ptanLS/iaGuUt3SEXfEZ/6enBONkEMopAublS3dqbLIWjM0iNALdQ1pgTMFWrODdXRHo+sxqEwSFZ0lEWRDt2YPRRPoKwupwcPf2nTMskemrRIArrw4Lww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 1/9] AMD/IOMMU: redo awaiting of command completion
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1bb49cce-537e-9fba-80bb-82bf502728d0@suse.com>
 <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6632eec-d64e-5167-2f0c-3ad919620327@suse.com>
Date: Thu, 10 Jun 2021 14:24:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <1fcb1140-b9b4-5c0b-de6c-e14d33937318@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0024.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 08d67d98-169c-4ec5-c493-08d92c0ab615
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3935:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08d67d98-169c-4ec5-c493-08d92c0ab615
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 12:24:36.7540
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: b0ZRbgfmWI70jYTZxwn9m0wMXE4UtO9r1iI1YRDDGwSajjP8eIsWJPD2O9mb37k8wTqv8M6FVOzNn/HpXTMt1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3935

On 09.06.2021 12:36, Andrew Cooper wrote:
> On 09/06/2021 10:26, Jan Beulich wrote:
>> The present abuse of the completion interrupt not only stands in the
>> way of, down the road, using it for its actual purpose, but also
>> requires holding the IOMMU lock while waiting for command completion,
>> limiting parallelism and keeping interrupts off for non-negligible
>> periods of time. Have the IOMMU do an ordinary memory write instead of
>> signaling an otherwise disabled interrupt (by just updating a status
>> register bit).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
>
> While I agree with the direction of the patch, some of the details could
> do with improvement.
>
>>
>> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
>> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
>> @@ -20,6 +20,9 @@
>>  #include "iommu.h"
>>  #include "../ats.h"
>>
>> +#define CMD_COMPLETION_INIT 0
>> +#define CMD_COMPLETION_DONE 1
>> +
>>  static void send_iommu_command(struct amd_iommu *iommu,
>>                                 const uint32_t cmd[4])
>>  {
>> @@ -49,28 +52,31 @@ static void send_iommu_command(struct am
>>  static void flush_command_buffer(struct amd_iommu *iommu,
>>                                   unsigned int timeout_base)
>>  {
>> +    static DEFINE_PER_CPU(uint64_t, poll_slot);
>> +    uint64_t *this_poll_slot = &this_cpu(poll_slot);
>> +    paddr_t addr = virt_to_maddr(this_poll_slot);
>>      uint32_t cmd[4];
>>      s_time_t start, timeout;
>>      static unsigned int __read_mostly threshold = 1;
>>
>> -    /* RW1C 'ComWaitInt' in status register */
>> -    writel(IOMMU_STATUS_COMP_WAIT_INT,
>> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>> -
>> -    /* send an empty COMPLETION_WAIT command to flush command buffer */
>> -    cmd[3] = cmd[2] = 0;
>> -    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
>> +    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
>> +
>> +    /* send a COMPLETION_WAIT command to flush command buffer */
>> +    cmd[0] = addr;
>> +    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, cmd[0],
>> +                         IOMMU_COMP_WAIT_S_FLAG_MASK,
>> +                         IOMMU_COMP_WAIT_S_FLAG_SHIFT, &cmd[0]);
>
> set_field_in_reg_u32() is a disaster of a function - both in terms of
> semantics, and code gen - and needs to be purged from the code.
>
> It is a shame we don't have a real struct for objects in the command
> buffer, but in lieu of that, this is just
>
>     cmd[0] = addr | IOMMU_COMP_WAIT_S_FLAG_MASK;
>
> which is the direction that previous cleanup has gone.
>
> There are no current users of IOMMU_COMP_WAIT_S_FLAG_SHIFT, and ...
>
>> +    cmd[1] = addr >> 32;
>> +    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, cmd[1],
>>                           IOMMU_CMD_OPCODE_MASK,
>>                           IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
>> -    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
>> -                         IOMMU_COMP_WAIT_I_FLAG_MASK,
>> -                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
>
> ... this drops the final use of IOMMU_COMP_WAIT_I_FLAG_SHIFT, so both
> should be dropped.
>
> As for IOMMU_CMD_OPCODE_SHIFT, that can't be dropped yet, but it would
> still be better to use
>
>     cmd[1] = (addr >> 32) | MASK_INSR(IOMMU_CMD_COMPLETION_WAIT,
>                                       IOMMU_CMD_OPCODE_MASK);
>
> in the short term.

Okay, this conversion has indeed saved a single

	and	$0x0FFFFFFF, %eax

But we're down by two set_field_in_reg_u32() now; only some 30 left.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 12:25:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 12:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139885.258559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJkX-0000nf-On; Thu, 10 Jun 2021 12:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139885.258559; Thu, 10 Jun 2021 12:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrJkX-0000nY-KL; Thu, 10 Jun 2021 12:25:29 +0000
Received: by outflank-mailman (input) for mailman id 139885;
 Thu, 10 Jun 2021 12:25:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVv/=LE=epam.com=prvs=679567fbaa=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lrJkV-0000nS-QY
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 12:25:27 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d286d98-cfce-464e-b9ad-4b5eea399310;
 Thu, 10 Jun 2021 12:25:26 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15ACA6br007232; Thu, 10 Jun 2021 12:25:25 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2052.outbound.protection.outlook.com [104.47.4.52])
 by mx0a-0039f301.pphosted.com with ESMTP id 393j8fr405-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Jun 2021 12:25:25 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Thu, 10 Jun
 2021 12:25:22 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.031; Thu, 10 Jun 2021
 12:25:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d286d98-cfce-464e-b9ad-4b5eea399310
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: 
 AQHXWQwW/O3obtZgqkmpGRvt7ZNDkasM6aMAgAAjXoCAAA0jAIAAEBMAgAAEiwCAAAaCgA==
Date: Thu, 10 Jun 2021 12:25:22 +0000
Message-ID: <1bfb4b75-c70a-ea79-1cc7-f5543077c52e@epam.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <0aa4bf61-e08f-6da0-1cda-48e61bf876af@suse.com>
 <38cfe7b7-e5a3-2216-f52e-fdebfc7af517@epam.com>
 <6b6142fa-3dc7-cd54-a40c-d4b9ef47afad@suse.com>
In-Reply-To: <6b6142fa-3dc7-cd54-a40c-d4b9ef47afad@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0e541ef1-0e48-4b90-c44b-08d92c0ad18e
x-ms-traffictypediagnostic: AM0PR03MB6324:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F6E985C78735CF4B9ACB739CFDC37BF4@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0e541ef1-0e48-4b90-c44b-08d92c0ad18e
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2021 12:25:22.5271
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Q0LekxDpVmRe+pK4oVAy5rO/XcC7O/xgMA1DPvnX8b60dsjm4yX2zMdE5Y88t+4sFyCZH9IoPWSzTXVemzci7YruxPqseawG8OBvyyI/L80gSciiMfd12q9qGwo32GPZ
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6324
X-Proofpoint-GUID: Xk3SioopToh4-fkDcSEanYxy9BrKlgue
X-Proofpoint-ORIG-GUID: Xk3SioopToh4-fkDcSEanYxy9BrKlgue
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 phishscore=0
 spamscore=0 lowpriorityscore=0 clxscore=1015 suspectscore=0 adultscore=0
 mlxlogscore=999 malwarescore=0 impostorscore=0 mlxscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106100080

On 10.06.21 15:02, Jan Beulich wrote:
> On 10.06.2021 13:45, Oleksandr Andrushchenko wrote:
>> Hi, Jan!
>>
>> On 10.06.21 13:48, Jan Beulich wrote:
>>> On 10.06.2021 12:01, Oleksandr Andrushchenko wrote:
>>>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>>>> OTOH if we properly trap accesses to the SR-IOV capability (like it
>>>>> was proposed in [1] from your references) we won't have to modify OSes
>>>>> that want to run as hardware domains in order to handle SR-IOV devices.
>>>> Out of curiosity, could you please name a few? I do understand that
>>>>
>>>> we do want to support unmodified OSes and this is indeed important.
>>>>
>>>> But, still what are the other OSes which do support Xen + PCI passthrough?
>>> I think Roger saying "want" meant to cover ones which currently don't,
>>> and which would have to undergo more extensive changes if they were to
>>> be enabled.
>> Fair enough. Do you think we would also need to re-work the existing code
>>
>> in Xen to support normal devices (not SR-IOV), e.g. we currently rely on
>>
>> PHYSDEVOP_XXX and other Linux specifics.
> Yes, work in that area would also be needed. For example we'd need to
> scan buses / segments as they become accessible. Right now we only scan
> segment 0, and even that's only possible because on x86 mmconfig is not
> the only way to access config space.
>
>> And even if SR-IOV is implemented
>>
>> in Xen this won't allow those OSes to stay unmodified, including FreeBSD.
> Of course, it's the nature of PVH (as opposed to HVM) that OSes need
> modification. The question is the scope thereof.

Ok, then it seems I need to get [1] back into the picture.

I have modified vPCI code a lot for ARM support, so [1] will not apply

as is anymore and needs to be re-worked. But, still it can mostly be re-used


> Jan
>
Thank you,

Oleksandr

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 12:53:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 12:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139895.258571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrKB6-00041V-3H; Thu, 10 Jun 2021 12:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139895.258571; Thu, 10 Jun 2021 12:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrKB6-00041O-09; Thu, 10 Jun 2021 12:52:56 +0000
Received: by outflank-mailman (input) for mailman id 139895;
 Thu, 10 Jun 2021 12:52:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrKB5-00041E-0K; Thu, 10 Jun 2021 12:52:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrKB4-0006gF-Qo; Thu, 10 Jun 2021 12:52:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrKB4-0000vN-8K; Thu, 10 Jun 2021 12:52:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrKB4-0006yK-7B; Thu, 10 Jun 2021 12:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162557-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162557: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=368094df48e680fa51cedb68537408cfa64b788e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 12:52:54 +0000

flight 162557 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162557/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                368094df48e680fa51cedb68537408cfa64b788e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  313 days
Failing since        152366  2020-08-01 20:49:34 Z  312 days  533 attempts
Testing same since   162557  2021-06-08 22:41:51 Z    1 days    1 attempts

------------------------------------------------------------
6152 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1674723 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 12:53:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 12:53:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139899.258586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrKBT-0004T5-D6; Thu, 10 Jun 2021 12:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139899.258586; Thu, 10 Jun 2021 12:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrKBT-0004Sy-9B; Thu, 10 Jun 2021 12:53:19 +0000
Received: by outflank-mailman (input) for mailman id 139899;
 Thu, 10 Jun 2021 12:53:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iP0d=LE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrKBS-0004SW-2R
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 12:53:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a7a7dec5-c834-4c44-b390-84f0b48477cb;
 Thu, 10 Jun 2021 12:53:17 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2058.outbound.protection.outlook.com [104.47.5.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-13-3Iiq9H7_MiaA5CQvDCAieg-1; Thu, 10 Jun 2021 14:53:15 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2335.eurprd04.prod.outlook.com (2603:10a6:800:2e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Thu, 10 Jun
 2021 12:53:13 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Thu, 10 Jun 2021
 12:53:13 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1P264CA0017.FRAP264.PROD.OUTLOOK.COM (2603:10a6:102:19e::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Thu, 10 Jun 2021 12:53:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7a7dec5-c834-4c44-b390-84f0b48477cb
X-MC-Unique: 3Iiq9H7_MiaA5CQvDCAieg-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <da2e161f-5d5e-c4bb-bce4-7b86e9418a1e@suse.com>
 <31dc681c-8713-7ddd-6c4e-3c385586da4c@citrix.com>
 <11f4cb58-d0f4-f735-2b18-5e02cb6950e4@suse.com>
Message-ID: <f3f47e98-e18c-50f1-ade9-e4b4df055b5a@suse.com>
Date: Thu, 10 Jun 2021 14:53:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <11f4cb58-d0f4-f735-2b18-5e02cb6950e4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0017.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19e::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ba52f58-783a-4afa-27bd-08d92c0eb516
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2335:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ba52f58-783a-4afa-27bd-08d92c0eb516
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 12:53:13.0167
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IouY1rGn/T9b5x77sBCBsQqmJF/M53rq+DBJVs0Imz1jxvs0JzM4MZA7ulaQ7MDsbwVTnzjrh/eyKPPwvTIEpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2335

On 10.06.2021 13:58, Jan Beulich wrote:
> On 09.06.2021 12:53, Andrew Cooper wrote:
>> On 09/06/2021 10:27, Jan Beulich wrote:
>>> It appears unhelpful to me for flush_command_buffer() to block all
>>> progress elsewhere for the given IOMMU by holding its lock while
>>> waiting for command completion. Unless the lock is already held,
>>> acquire it in send_iommu_command(). Release it in all cases in
>>> flush_command_buffer(), before actually starting the wait loop.
>>>
>>> Some of the involved functions did/do get called with the lock already
>>> held: For amd_iommu_flush_intremap() we can simply move the locking
>>> inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
>>> the lock now gets dropped in the course of the function's operation.
>>>
>>> Where touching function headers anyway, also adjust types used to be
>>> better in line with our coding style and, where applicable, the
>>> functions' callers.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Reviewed-by: Paul Durrant <paul@xen.org>
>>
>> Honestly, I'm -2 to this.  It is horrible obfuscation of the logic.
>>
>> I agree with the premise of not holding the lock when we don't need to,
>> but moving the lock/unlocks into different functions makes it impossible
>> to follow.  (Also, the static analysers are going to scream at this
>> patch, and rightfully so IMO.)
>>
>> send_iommu_command() is static, as is flush_command_buffer(), so there
>> is no need to split the locking like this AFAICT.
>>
>> Instead, each amd_iommu_flush_* external accessor knows exactly what it
>> is doing, and whether a wait descriptor is wanted.
>> flush_command_buffer() wants merging into send_iommu_command() as a
>> "bool wait" parameter,
>
> A further remark on this particular suggestion: While this is likely
> doable, the result will presumably look a little odd: Besides the
> various code paths calling send_iommu_command() and then
> flush_command_buffer(), the former is also called _by_ the latter.
> I can give this a try, but I'd like to be halfway certain I won't
> be asked to undo that later on.
>
> And of course this won't help with the split locking, only with some
> of the passing around of the saved / to-be-restored eflags.

Actually, different observation: I don't think there really is a need
for either amd_iommu_flush_device() or amd_iommu_flush_all_caches()
to be called with the lock held. The callers can drop the lock, and
then all locking in iommu_cmd.c can likely be contained to
send_iommu_command() alone, without any need to fold in
flush_command_buffer().

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:11:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:11:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139911.258598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLOj-0003lr-BK; Thu, 10 Jun 2021 14:11:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139911.258598; Thu, 10 Jun 2021 14:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLOj-0003lk-7b; Thu, 10 Jun 2021 14:11:05 +0000
Received: by outflank-mailman (input) for mailman id 139911;
 Thu, 10 Jun 2021 14:11:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sV8R=LE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lrLOh-0003le-NQ
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 14:11:04 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85a99871-631b-457d-b6c8-a73b39e154f0;
 Thu, 10 Jun 2021 14:11:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85a99871-631b-457d-b6c8-a73b39e154f0
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 47418244
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,263,1616472000"; 
   d="scan'208";a="47418244"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Date: Thu, 10 Jun 2021 16:10:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Message-ID: <YMIdbGCpFGZGwLoN@Air-de-Roger>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
X-ClientProxiedBy: AM0PR02CA0213.eurprd02.prod.outlook.com
 (2603:10a6:20b:28f::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 19ff5831-6583-4e9a-ccd7-08d92c19919e
X-MS-TrafficTypeDiagnostic: DM6PR03MB4298:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4298368D3B058FBE3BE936F58F359@DM6PR03MB4298.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 19ff5831-6583-4e9a-ccd7-08d92c19919e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2021 14:10:57.9892
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WU+/BECNLY0Y7/73/kcw1v2sg0VXkrUXsO0O96ww2lODM8f2xiNdpO8usZU3DLvJTfIrZ02F4KM7kHRrTY365w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4298
X-OriginatorOrg: citrix.com

On Thu, Jun 10, 2021 at 10:01:16AM +0000, Oleksandr Andrushchenko wrote:
> Hello, Roger!
> 
> On 10.06.21 10:54, Roger Pau Monné wrote:
> > On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
> >> Hi, all!
> >>
> >> While working on PCI SR-IOV support for ARM I started porting [1] on top
> >> of the current PCI on ARM support [2]. The question I have for this series
> >> is whether we really need SR-IOV emulation code in Xen.
> >>
> >> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
> >> patches) and it "works for me": MSI support is still WIP, but I was able
> >> to see that VFs are properly seen in the guest and BARs are properly
> >> programmed in p2m.
> >>
> >> What I can't fully understand is whether we can live with this approach
> >> or whether there are use-cases I can't see.
> >>
> >> Previously I've been told that this approach might not work on FreeBSD
> >> running as Domain-0, but it seems that "PCI Passthrough is not supported
> >> (Xen/FreeBSD)" anyways [4].
> > PCI passthrough is not supported on FreeBSD dom0 because PCI
> > passthrough is not supported by Xen itself when using a PVH dom0, and
> > that's the only mode FreeBSD dom0 can use.
> 
> So it is still not clear to me whether and how PCI passthrough is
> supported on FreeBSD, and what the scenarios and requirements for that
> are.
> 
> >
> > PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
> > to work. I however think this is not the proper way to implement
> > SR-IOV support.
> 
> I was not able to find any support for PHYSDEVOP_XXX in the FreeBSD
> code, could you please point me to where these are used?

Those are not used on FreeBSD, because x86 PVHv2 dom0 doesn't
implement them anymore. They are implemented on Linux for x86 PV dom0;
AFAIK Arm doesn't use them either.

> If they are not, how does Xen running under FreeBSD know about PCI devices?

Xen scans the PCI bus itself, see scan_pci_devices.
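To make the discovery loop concrete, here is a minimal, hedged sketch of the kind of walk scan_pci_devices performs at boot (mock_cfg_read16 is a stand-in for real config-space accesses, and none of these names are Xen's — this is purely illustrative):

```c
#include <stdint.h>

/* Hypothetical mock of a config-space read: a real implementation would
 * issue legacy type-0/1 config accesses or use ECAM/MCFG mappings.
 * Slot (bus 0, dev 3, fn 0) is populated for illustration. */
static uint16_t mock_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn,
                                uint8_t reg)
{
    if (bus == 0 && dev == 3 && fn == 0 && reg == 0x00)
        return 0x8086;          /* vendor ID of our pretend device */
    return 0xffff;              /* empty slot: all-ones vendor ID */
}

/* Walk every bus/device/function and count slots that answer with a
 * valid vendor ID -- the same style of discovery loop that lets the
 * hypervisor learn about devices without dom0 reporting them by
 * hypercall. */
static unsigned int scan_pci_bus(void)
{
    unsigned int found = 0;

    for (unsigned int bus = 0; bus < 256; bus++)
        for (unsigned int dev = 0; dev < 32; dev++)
            for (unsigned int fn = 0; fn < 8; fn++)
                if (mock_cfg_read16(bus, dev, fn, 0x00) != 0xffff)
                    found++;

    return found;
}
```

A real scan would also honor the multi-function bit in the header type register instead of probing all eight functions unconditionally; the loop above is the simplest correct-enough form for illustration.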

> I am trying to extrapolate my knowledge of how Linux does that
> (during PCI enumeration in Domain-0 we use hypercalls).
> 
> >
> >> I also see that the ACRN hypervisor [5] implements SR-IOV inside it,
> >> which makes me think I am missing some important use-case on x86 though.
> >>
> >> I would like to ask for any advice with respect to SR-IOV in the
> >> hypervisor: any pointers to documentation or any other source which
> >> might be handy in deciding whether we do need SR-IOV complexity in Xen.
> >>
> >> And it does bring complexity if you compare [1] and [3]...
> >>
> >> A few technical details on the approach implemented [3]:
> >> 1. We rely on PHYSDEVOP_pci_device_add
> >> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs
> >> 3. BARs are programmed in p2m, implementing the guest view on those
> >> (we have extended the vPCI code for that, and this path is used for
> >> both "normal" devices and VFs the same way)
> >> 4. No need to trap PCI_SRIOV_CTRL
> >> 5. No need to wait 100ms in Xen before attempting to access VF
> >> registers when enabling virtual functions on the PF - this is handled
> >> by Domain-0 itself.
> > I think the SR-IOV capability should be handled like any other PCI
> > capability, ie: like we currently handle MSI or MSI-X in vPCI.
> >
> > It's likely that using some kind of hypercall in order to deal with
> > SR-IOV could make this easier to implement in Xen, but that just adds
> > more code to all OSes that want to run as the hardware domain.
> 
> I didn't introduce any new ones; PHYSDEVOP_pci_device_add was enough.

Well, that would be 'new' on x86 PVH or Arm, as they don't implement
any PHYSDEVOP at the moment.

Long term we might need a hypercall to report dynamic MCFG regions,
but I haven't got around to it yet (and haven't found any system that
reports extra MCFG regions from ACPI AML).

> The rest I did in Xen itself wrt SR-IOV.
> 
> >
> > OTOH if we properly trap accesses to the SR-IOV capability (like it
> > was proposed in [1] from your references) we won't have to modify OSes
> > that want to run as hardware domains in order to handle SR-IOV devices.
> 
> Out of curiosity, could you please name a few? I do understand that
> we do want to support unmodified OSes and this is indeed important.
> But still, what are the other OSes which do support Xen + PCI passthrough?

NetBSD PV dom0 does support PCI passthrough, but I'm not sure that's
relevant.

We shouldn't focus on current users to come up with an interface,
but rather think about how we want that interface to be.

As I said in the previous email, my opinion is that, unless it's
technically impossible, we should just trap accesses to the SR-IOV
capability like we do for MSI(-X) and handle it transparently from a
guest PoV.

> >
> > IMO going for the hypercall option seems easier now, but adds a burden
> > to all OSes that want to manage SR-IOV devices that will hurt us long
> > term.
> 
> Again, I was able to make it somewhat work with PHYSDEVOP_pci_device_add only.

Sure, that's how it works for an x86 PV hardware domain, so it's
certainly possible. My comments against that route are not because
it's technically infeasible, but because I don't like the approach.

So far we have avoided PVH having to implement any PHYSDEVOP
hypercall, and that's a design decision, not a coincidence. I'm in
favor of using the existing hardware interfaces for guests instead of
introducing custom Xen ones when technically feasible.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:13:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:13:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139917.258610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLRN-0004PN-QI; Thu, 10 Jun 2021 14:13:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139917.258610; Thu, 10 Jun 2021 14:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLRN-0004PG-MU; Thu, 10 Jun 2021 14:13:49 +0000
Received: by outflank-mailman (input) for mailman id 139917;
 Thu, 10 Jun 2021 14:13:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cwq8=LE=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lrLRM-0004P7-Tg
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 14:13:48 +0000
Received: from mail-pf1-x42d.google.com (unknown [2607:f8b0:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71edec09-84c6-4be8-85ac-946ad95a7d8e;
 Thu, 10 Jun 2021 14:13:48 +0000 (UTC)
Received: by mail-pf1-x42d.google.com with SMTP id k15so1731675pfp.6
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 07:13:48 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 h12sm3035510pgn.54.2021.06.10.07.13.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Jun 2021 07:13:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71edec09-84c6-4be8-85ac-946ad95a7d8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=uVvPHfIwriaeCqaGS4SWF2JfjGEeCNMTseqgcHKL0PY=;
        b=ot0iffXK2/znAlvDWUx0DvEHPs9Hu42jQhvk3aHGx+T2NhWFpnwKA1vIkYGIy+cgYb
         LxEbfgCrbULtFd7nVuJsfQAH+IMqZIL2R35dwiAl9sJZz1GKTswJr7viHRHzd8ec6z9L
         cpBdHTN6hAjfnZCW/MdapYStRdZ2pSAmS2zSYAlKj+wqby1v1sKfa0Z3IJGU1Q2S8YcP
         HqfC/opPsY22Z36tiTPSUNG8yLuAg9Bq77HzKuJDM2mT1cANyfp9SOlXy6wpSwett2B6
         yucBibCk75zJ2NzA/8lMhfuq11vR41IllnpzmIyKLTH/P3wGLWQCIUfWmxQ4zGD/3HTx
         tacA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=uVvPHfIwriaeCqaGS4SWF2JfjGEeCNMTseqgcHKL0PY=;
        b=iob7/ic0gVDR6PsMSqFr3mZZqe+mzcYj+nuJzEKn8N3n75udjDk3HLlXGnETA548XQ
         9LfqRKbvkLYe+SFLMHVphQE6edexwYwjbr4K5+rLg39KF1UfrXfRL8yqkl4hsPXSO8HO
         /smBhWiqlxAIX8a3ZFBrffH5L6c3yKz4YfTtt9qjkaSf0BjDZM5f3PE8VTt85ZmPStSU
         h8m9Xfq0ayg9dL3CxP76IXW0HJEugLXWljN3ynim+Mf4urb2TfFymOryarM+tBBZZO6L
         7WIcv7s6tZVF2uNiQJl7OEYVqXIpNQQLtxa1mwoLSLJqBtDzOsGjMlRlxI/bXtKLQG66
         wklw==
X-Gm-Message-State: AOAM531MKnzjyqlPOPHTRVpXcpleiAs4DjjIhi3rnr6q4kxEgExngvJp
	PFTUGlc87Z1AfmamJEjpgOU=
X-Google-Smtp-Source: ABdhPJz4FYgWX1XRVYnd1fLznm93BCcmg2/a4wkuQ7x2l90mgEwjIr25Q5mcrDlHBGTMoQXHYCH6CA==
X-Received: by 2002:a05:6a00:2353:b029:2f2:987a:5da2 with SMTP id j19-20020a056a002353b02902f2987a5da2mr3220673pfj.58.1623334427380;
        Thu, 10 Jun 2021 07:13:47 -0700 (PDT)
Subject: Re: [RFC PATCH V3 01/11] x86/HV: Initialize GHCB page in Isolation VM
To: Joerg Roedel <joro@8bytes.org>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, boris.ostrovsky@oracle.com,
 jgross@suse.com, sstabellini@kernel.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-2-ltykernel@gmail.com> <YMC2RSr/J1WYCvtz@8bytes.org>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <c9a7eaa8-a8b3-3ed3-c52c-7a2cea3c95bc@gmail.com>
Date: Thu, 10 Jun 2021 22:13:32 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <YMC2RSr/J1WYCvtz@8bytes.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi Joerg:
	Thanks for your review.


On 6/9/2021 8:38 PM, Joerg Roedel wrote:
> On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> Hyper-V exposes the GHCB page via the SEV-ES GHCB MSR for SNP
>> guests to communicate with the hypervisor. Map the GHCB page for
>> all CPUs to read/write the MSR register and submit hvcall requests
>> via the GHCB.
>>
>> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
>> ---
>>   arch/x86/hyperv/hv_init.c       | 60 ++++++++++++++++++++++++++++++---
>>   arch/x86/include/asm/mshyperv.h |  2 ++
>>   include/asm-generic/mshyperv.h  |  2 ++
>>   3 files changed, 60 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
>> index bb0ae4b5c00f..dc74d01cb859 100644
>> --- a/arch/x86/hyperv/hv_init.c
>> +++ b/arch/x86/hyperv/hv_init.c
>> @@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
>>   	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
>>   	void **input_arg;
>>   	struct page *pg;
>> +	u64 ghcb_gpa;
>> +	void *ghcb_va;
>> +	void **ghcb_base;
> 
> Any reason you can't reuse the SEV-ES support code in the Linux kernel?
> It already has code to setup GHCBs for all vCPUs.
> 
> I see that you don't need #VC handling in your SNP VMs because of the
> paravisor running underneath it, but just re-using the GHCB setup code
> shouldn't be too hard.
> 

Thanks for your suggestion. I will try to reuse the SEV-ES code.



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:16:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:16:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139923.258621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLTX-00053y-5Z; Thu, 10 Jun 2021 14:16:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139923.258621; Thu, 10 Jun 2021 14:16:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLTX-00053r-2X; Thu, 10 Jun 2021 14:16:03 +0000
Received: by outflank-mailman (input) for mailman id 139923;
 Thu, 10 Jun 2021 14:16:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cwq8=LE=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lrLTV-00053i-No
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 14:16:01 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0040e02-451b-4e69-a615-a25f5f320783;
 Thu, 10 Jun 2021 14:16:01 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id e22so22678353pgv.10
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 07:16:01 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 f6sm2629239pfb.28.2021.06.10.07.15.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Jun 2021 07:15:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0040e02-451b-4e69-a615-a25f5f320783
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=Z06P6xfCZfBoLzLGshR2eXD11imPLnqUoouenixaYFw=;
        b=APiM6yjkYAKij1RjMOo9hW4vLGKN36RaoDPpCZzJkgCIWIc5bNnUPRKfG+5uelh6im
         ZwmwB/K2y2Z+UnBHFw6Ok84giPqAWtESqJu7RLH3vi1s6Wmdl12YoCN5FEDC7FxVL8S3
         xBQJ/O0hMPtGsBtUowS8mRCmJHg4qC1j99bvIODTP9XmlSIGPDf9QrQ49kI/Ur72auK8
         XR3tyDiPInVQQ5Wc5igpAdJ+WpQK2Nh7QdvgHpHFXPPDhb1Z06nj6QGPTXJjDDNCczPn
         w4fJFgDDWFpzxBgC5Dh+DND5ldxosu/Bv7P7MHz+0qfPe5TrP3iyAg6tpfZqrx3I67eW
         JLfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Z06P6xfCZfBoLzLGshR2eXD11imPLnqUoouenixaYFw=;
        b=CtMxXyLFHIPIiiw7IJp3+thvCn7qW/oEkjUwfM6dTfi6pLAYdvsUdv15Ln1q+T/YGY
         pi+bTHqdpcFqO4Xrl0HP3Wx54TYymQfV+BRcqAtCpTMkLMBCe/J7OLx94NYYfLCgQR43
         HZ9SRuP49wnSJbCxZBWvnPXsmOOO9K7P6VWqDFnpeUnCAaIik4ihdeFETgu2kY7wNLrb
         H/QlOr2eM9gXQjoS/LjSRer/TgI42viSSfIkPWB4YsOdN3Jvb9d2lVVdlCkbkaV3XmFw
         Pu/ltqgxMaTSWvJEfddc2AhbBR8fTJEisyqqkUYMsU7fdVj3qYojZir6OVTSGTlFSUZl
         /sSQ==
X-Gm-Message-State: AOAM530x+QM+7w39BqJ3rb2k9teRPWsAUBVLr+zhvoQgshjcB8prZuXa
	zmHfir8HU+WbgwS6uz7eZ8I=
X-Google-Smtp-Source: ABdhPJzmTLyJiMIH7/kQoP6L+VhlUA+7RnmijLx7oWBaXhPdpzGgcAEZV6BSepEVhxfy3ZoxIY2hkg==
X-Received: by 2002:a63:4e20:: with SMTP id c32mr5182730pgb.104.1623334560354;
        Thu, 10 Jun 2021 07:16:00 -0700 (PDT)
Subject: Re: [RFC PATCH V3 04/11] HV: Add Write/Read MSR registers via ghcb
To: Joerg Roedel <joro@8bytes.org>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, boris.ostrovsky@oracle.com,
 jgross@suse.com, sstabellini@kernel.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-5-ltykernel@gmail.com> <YMC4JdtYO+eLDKh5@8bytes.org>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <bd84a1a1-1dae-1dc0-8175-ed8bf19e705c@gmail.com>
Date: Thu, 10 Jun 2021 22:15:46 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <YMC4JdtYO+eLDKh5@8bytes.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 6/9/2021 8:46 PM, Joerg Roedel wrote:
> On Sun, May 30, 2021 at 11:06:21AM -0400, Tianyu Lan wrote:
>> +void hv_ghcb_msr_write(u64 msr, u64 value)
>> +{
>> +	union hv_ghcb *hv_ghcb;
>> +	void **ghcb_base;
>> +	unsigned long flags;
>> +
>> +	if (!ms_hyperv.ghcb_base)
>> +		return;
>> +
>> +	local_irq_save(flags);
>> +	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
>> +	hv_ghcb = (union hv_ghcb *)*ghcb_base;
>> +	if (!hv_ghcb) {
>> +		local_irq_restore(flags);
>> +		return;
>> +	}
>> +
>> +	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
>> +
>> +	hv_ghcb->ghcb.protocol_version = 1;
>> +	hv_ghcb->ghcb.ghcb_usage = 0;
>> +
>> +	ghcb_set_sw_exit_code(&hv_ghcb->ghcb, SVM_EXIT_MSR);
>> +	ghcb_set_rcx(&hv_ghcb->ghcb, msr);
>> +	ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value));
>> +	ghcb_set_rdx(&hv_ghcb->ghcb, value >> 32);
>> +	ghcb_set_sw_exit_info_1(&hv_ghcb->ghcb, 1);
>> +	ghcb_set_sw_exit_info_2(&hv_ghcb->ghcb, 0);
>> +
>> +	VMGEXIT();
> 
> This is not safe to use from NMI context. You need at least some
> checking or WARN_ON/assertion/whatever to catch cases where this is
> violated. Otherwise it will result in some hard to debug bug reports.
> 

Nice catch. Will update in the next version.

Thanks.
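For reference, the guard being asked for is the usual "refuse or warn when entered from NMI context" pattern, since the per-CPU GHCB page may already be in use by the interrupted context. A minimal sketch of the shape of such a check — fake_in_nmi stands in for the kernel's in_nmi(), and the reporting flag stands in for WARN_ON_ONCE(), so it can be exercised outside a kernel build:

```c
#include <stdbool.h>

/* Stand-ins for kernel facilities, purely for illustration:
 * fake_in_nmi models in_nmi(); nmi_violation_reported models the
 * one-shot warning a WARN_ON_ONCE() would emit. */
static bool fake_in_nmi;
static bool nmi_violation_reported;

/* Gate for the GHCB MSR write path: reusing the per-CPU GHCB from NMI
 * context could clobber an in-flight VMGEXIT, so bail out (and record
 * the violation) instead of proceeding. */
static bool ghcb_msr_write_allowed(void)
{
    if (fake_in_nmi) {
        nmi_violation_reported = true; /* WARN_ON_ONCE(1) in real code */
        return false;
    }
    return true;
}
```

In the actual patch the check would sit at the top of hv_ghcb_msr_write()/hv_ghcb_msr_read() before the GHCB page is touched; alternatively a dedicated backup GHCB (as the SEV-ES #VC code keeps) can make the NMI path safe rather than forbidden.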


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:17:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139930.258634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLVH-0005gM-Hx; Thu, 10 Jun 2021 14:17:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139930.258634; Thu, 10 Jun 2021 14:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLVH-0005gF-Es; Thu, 10 Jun 2021 14:17:51 +0000
Received: by outflank-mailman (input) for mailman id 139930;
 Thu, 10 Jun 2021 14:17:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrLVF-0005g1-QC; Thu, 10 Jun 2021 14:17:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrLVF-0008Bp-Lm; Thu, 10 Jun 2021 14:17:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrLVF-0005rp-Dc; Thu, 10 Jun 2021 14:17:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrLVF-0004cO-B0; Thu, 10 Jun 2021 14:17:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rpA398pAYPja9XJALLFQyCqzp+8wu4zY6N+hRC1+Bbc=; b=4AiUnnq8JJa0IbQQp755kZlLqH
	PQo+WFcBPbomknYGrQV5m3tTVevbla15V2+loGvs2qQKi6zSLC9akywhLI4AgHroAYuO3LkRnpo9e
	jtRqVY79hh3327X9dfRR0/A3ZU2DvFgqv3qvcNBwTaXDiRggWzX04fyM85VAQSA7Z4o8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162603-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162603: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start:fail:heisenbug
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dfcffb128be46a3e413eaa941744536fe53c94b6
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 14:17:49 +0000

flight 162603 xen-unstable-smoke real [real]
flight 162606 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162603/
http://logs.test-lab.xenproject.org/osstest/logs/162606/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl 18 guest-start/debian.repeat fail in 162597 REGR. vs. 162574

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          14 guest-start                fail pass in 162597

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162597 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162597 never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dfcffb128be46a3e413eaa941744536fe53c94b6
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Testing same since   162584  2021-06-10 00:00:27 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:19:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:19:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139938.258649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLWR-0006Lc-1a; Thu, 10 Jun 2021 14:19:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139938.258649; Thu, 10 Jun 2021 14:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLWQ-0006LT-Tq; Thu, 10 Jun 2021 14:19:02 +0000
Received: by outflank-mailman (input) for mailman id 139938;
 Thu, 10 Jun 2021 14:19:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cwq8=LE=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lrLWP-0006LD-Rj
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 14:19:01 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c33fb1a-a327-4caf-9934-b2c4a5bf7c47;
 Thu, 10 Jun 2021 14:19:01 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id z1so500953pgj.6
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 07:19:01 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 s22sm2725797pfd.94.2021.06.10.07.18.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Jun 2021 07:18:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c33fb1a-a327-4caf-9934-b2c4a5bf7c47
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=S7U7wM1689XPFk0ln7YagGhi2kwC7oBCIYjLHGTATmA=;
        b=j1ssCm/VIao3T0Xc2kwXH6tQqWfrySQ1IoYffbIUZhSffD09z3SFuktT6padpAZaa/
         O6dOSr9MfmJ3BaiaCjc0PA9hj1D56rpnhXvTnUEKbRnxOiqwlkMuTGEQqMho8+laNO3A
         Ps4g2Ewhuj5lqgG3+vb3Pr6h4xORMt0ouVqwcSeoFvA+lTlqRvdEpC++3hJnHyetmZQS
         Y1STui7T1ekgTq6zzEkx3p9dSbyB7UT0DfPgF2o16fgVpJiH+bKr87Q/n9chxbr1Li1Z
         sUz1mgnrmF2Vvd/rNoSeF09YxbBnVBhhC9qN6Rqm62kNLgrUCEgGYVPXMjW1fXotfIvd
         p8kw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=S7U7wM1689XPFk0ln7YagGhi2kwC7oBCIYjLHGTATmA=;
        b=jDvl5refJGckccdKBFeY+r2ikeF442cIUkMqZrJIL8yIRyBuokKB+w5G1ziKHnu8CB
         03+vgpscPz12ZXeA+e/FhdroUkr7aOdGCPN1sKn//OmfUvYnIlbjiyGhMzeFel6OmGTO
         d51lK4i+RpUGhY4O3ouAiDGWuA7ane1TvucUt5W/eDiG2HQknX1/brbGFb0i3eeSfNGe
         wvWp3/EuQhm/mos9jOIkXkymTVj3j5hud3UCFVHRo7FtlxXPKWQoZ98OLOupaPXZubF+
         IPsv/dZ2BqN7xpzywdV3R7snCb7NzjjLfyT8gByiHI1kY+HxUgcbR58FukuBIPjYueNJ
         QLYQ==
X-Gm-Message-State: AOAM5310w1x5B4OqX2JMCa+vDl2xOmKCvPXMQZHVITONtBwZZHbVJ/46
	tTTgxkxQp2UM5PqlQwTJta4=
X-Google-Smtp-Source: ABdhPJykpYU7kOZkoptn+vRUH+LpqhlVAnDQIkp8KIQpRwT8RFYkpZNT1LtsR90ESObN7s+cfunBTA==
X-Received: by 2002:a63:7404:: with SMTP id p4mr5123864pgc.405.1623334740224;
        Thu, 10 Jun 2021 07:19:00 -0700 (PDT)
Subject: Re: [RFC PATCH V3 03/11] x86/Hyper-V: Add new hvcall guest address
 host visibility support
To: Vitaly Kuznetsov <vkuznets@redhat.com>, kys@microsoft.com,
 haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
 decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 x86@kernel.org, hpa@zytor.com, arnd@arndb.de, dave.hansen@linux.intel.com,
 luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
 m.szyprowski@samsung.com, robin.murphy@arm.com, boris.ostrovsky@oracle.com,
 jgross@suse.com, sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, thomas.lendacky@amd.com,
 brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-4-ltykernel@gmail.com>
 <878s3iyrtg.fsf@vitty.brq.redhat.com>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <2a0170a9-e4d5-1c63-7901-416094f6ab64@gmail.com>
Date: Thu, 10 Jun 2021 22:18:45 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <878s3iyrtg.fsf@vitty.brq.redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Vitaly:
	Thanks for your review.

On 6/10/2021 5:47 PM, Vitaly Kuznetsov wrote:
>> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
>> index 606f5cc579b2..632281b91b44 100644
>> --- a/arch/x86/include/asm/hyperv-tlfs.h
>> +++ b/arch/x86/include/asm/hyperv-tlfs.h
>> @@ -262,6 +262,17 @@ enum hv_isolation_type {
>>   #define HV_X64_MSR_TIME_REF_COUNT	HV_REGISTER_TIME_REF_COUNT
>>   #define HV_X64_MSR_REFERENCE_TSC	HV_REGISTER_REFERENCE_TSC
>>   
>> +/* Hyper-V GPA map flags */
>> +#define HV_MAP_GPA_PERMISSIONS_NONE            0x0
>> +#define HV_MAP_GPA_READABLE                    0x1
>> +#define HV_MAP_GPA_WRITABLE                    0x2
>> +
>> +enum vmbus_page_visibility {
>> +	VMBUS_PAGE_NOT_VISIBLE = 0,
>> +	VMBUS_PAGE_VISIBLE_READ_ONLY = 1,
>> +	VMBUS_PAGE_VISIBLE_READ_WRITE = 3
>> +};
>> +
> Why do we need both flags and the enum? I don't see HV_MAP_GPA_* being
> used anywhere and VMBUS_PAGE_VISIBLE_READ_WRITE looks like
> HV_MAP_GPA_READABLE | HV_MAP_GPA_WRITABLE.
> 
> As this is used to communicate with the host, I'd suggest to avoid using
> enum and just use flags everywhere.
> 

Nice catch. Will update in the next version.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 14:25:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 14:25:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139948.258661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLcd-0007ob-Ou; Thu, 10 Jun 2021 14:25:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139948.258661; Thu, 10 Jun 2021 14:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrLcd-0007oU-Kl; Thu, 10 Jun 2021 14:25:27 +0000
Received: by outflank-mailman (input) for mailman id 139948;
 Thu, 10 Jun 2021 14:25:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cwq8=LE=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lrLcb-0007oO-LN
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 14:25:25 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57183a0a-bf32-43d7-a106-7200f7f0f2de;
 Thu, 10 Jun 2021 14:25:24 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id g24so3744925pji.4
 for <xen-devel@lists.xenproject.org>; Thu, 10 Jun 2021 07:25:24 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id 1sm8338487pjm.8.2021.06.10.07.25.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Jun 2021 07:25:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57183a0a-bf32-43d7-a106-7200f7f0f2de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=9LfruKTSmGsaKr3T0KtsgMI2g1V5D5r+FfKoliRoTY8=;
        b=c3W9IY55kAGRSZQMUKIse7CYjATOc/WNsezUixaF5fWDkVciqS0+iWaZcHtL2rFFUw
         q6Af9zyVo5zJyfySRvKv8k+huR+szVR1Xb8GDyTonmbkQ3V0MoPt+EBqk/fvrUpoIhu0
         FpoCg5T+vTYyaf/IELWrWdHqeg6MOtolC1d7jn3ifj33bWYJ1rAiXRMyOW50J2wlpwFG
         OzCk7HbGROQ8oMztcbHmPooQ1M2yg6rOw9Gp0rsVbF3wVqRpT60yIqJM59aQu50FPywK
         5PKEXQwlOBjWNljf/CAPll8LJKaf8zXSJtCqfJmCU42oGpqB5i4sk/wWojxWFZDHy/Ps
         2toA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=9LfruKTSmGsaKr3T0KtsgMI2g1V5D5r+FfKoliRoTY8=;
        b=uNn2diLrEr2Zp4nSysO98SLXQXd1JhcnRIotIpIzb7mNAf1aW/EKcElQdFfAoT+Bbr
         kSrVG0Tpdh/ivTjMLcHo5PpkEpVlDv21tngQMJLXshGxU5XoirhJsU7wV4evH1UmziJ1
         GCzY84oF+KvuujpGIminnEXPFKxPtjXbI5hYOVlzQGPI4FJVTWTiiO8Jbx7qb//7NROV
         lUg6IXSOtU1Quwz4MkFFjJbvHsal3xU1OOGvVv0u3qWLMv6rMqV92BfhD3dNzhZeP2u0
         wBSK6FyluUwXsyBAzLrLdX+73NvWvuS09cVqwDol/pn6m4W6stUx2rmsdaI6HSqAhivo
         ufMg==
X-Gm-Message-State: AOAM533VG0PbZRyTqBKuAZ4gKayoCMUTVRymiN8BlTHuzsUP/0Mx5aaS
	eImptwgCSrnkuaFgTJYm1l8=
X-Google-Smtp-Source: ABdhPJzdwp2MX3HMzPasMcsmUlDqisDdNAsXcZ2anZ9ouahX31714ETLaZ+TtT/Nrt8h3xRgpVym6w==
X-Received: by 2002:a17:90a:7bce:: with SMTP id d14mr3702065pjl.38.1623335123951;
        Thu, 10 Jun 2021 07:25:23 -0700 (PDT)
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
From: Tianyu Lan <ltykernel@gmail.com>
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
 <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
Message-ID: <9c05f7fd-6460-5d4a-aa83-08626839d18e@gmail.com>
Date: Thu, 10 Jun 2021 22:25:10 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 6/7/2021 10:56 PM, Tianyu Lan wrote:
> 
> On 6/7/2021 2:43 PM, Christoph Hellwig wrote:
>> On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
>>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>>
>>> For a Hyper-V isolation VM with AMD SEV-SNP, the bounce buffer (shared
>>> memory) needs to be accessed via an extra address space (e.g. addresses
>>> above bit 39). Hyper-V code may remap the extra address space outside of
>>> swiotlb. swiotlb_bounce() needs to use the remapped virtual address to
>>> copy data from/to the bounce buffer. Add a new interface,
>>> swiotlb_set_bounce_remap(), to do that.
>>
>> Why can't you use the bus_dma_region ranges to remap to your preferred
>> address?
>>
> 
> Thanks for your suggestion.
> 
> These addresses in the extra address space work as a mirror of system
> memory. The shared memory with the host in an Isolation VM needs to be
> accessed via the extra address space, which is above the shared GPA
> boundary. During initialization of the swiotlb bounce buffer pool, only
> addresses below the shared GPA boundary can be accepted by the swiotlb
> API, because they are treated as system memory and managed by memory
> management. This is why the Hyper-V swiotlb bounce buffer pool needs to
> be allocated in Hyper-V code and mapped at the associated physical
> address in the extra address space. The target of this patch is to add a
> new interface to set the start virtual address of the bounce buffer pool
> and let the swiotlb bounce buffer copy function use the right virtual
> address for the extra address space.
> 
> bus_dma_region is for translating a CPU physical address to a DMA
> address. It can't modify the virtual address of the bounce buffer pool
> and let the swiotlb code copy data with the right address. If something
> is missed, please correct me.
> 

Hi Christoph:
	Sorry to bother you. Could you have a look at my previous reply?
I am trying to figure out the right way.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 15:33:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 15:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.139956.258677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrMgV-0005yn-V5; Thu, 10 Jun 2021 15:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 139956.258677; Thu, 10 Jun 2021 15:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrMgV-0005yg-S4; Thu, 10 Jun 2021 15:33:31 +0000
Received: by outflank-mailman (input) for mailman id 139956;
 Thu, 10 Jun 2021 15:33:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVv/=LE=epam.com=prvs=679567fbaa=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lrMgU-0005ya-NB
 for xen-devel@lists.xenproject.org; Thu, 10 Jun 2021 15:33:30 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 765242df-cc0a-479e-8e6c-996c1c274d6b;
 Thu, 10 Jun 2021 15:33:28 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15AFU9DN016699; Thu, 10 Jun 2021 15:33:27 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2172.outbound.protection.outlook.com [104.47.17.172])
 by mx0a-0039f301.pphosted.com with ESMTP id 393mqp07bn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Jun 2021 15:33:27 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR0302MB3425.eurprd03.prod.outlook.com (2603:10a6:208:3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.26; Thu, 10 Jun
 2021 15:33:22 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%6]) with mapi id 15.20.4195.031; Thu, 10 Jun 2021
 15:33:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 765242df-cc0a-479e-8e6c-996c1c274d6b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l1/gRDqDlriGBVRGYmEJZrP3wLIVV/CEbFPn7FTBPXItoPwjSDxjfqqCI199QnvYn6G5knjk4bDqrXg8a/bMD4q2stTPuDP/KH/qHMpyV3fdWP5bGoZ6xIyah4HVhyy+AjKNK1JmQ7F+b7iSHfCNH+WHj+UNouRZQAn8BmORFJ6zklPdDAlcskhT56ecldl4P+RmIiJcKN/0EHyuc/2KItNDfTwZvtO9FFC1EstujmjAEFKjXb688eSfDoymOIuzw76v4ey3TKMBABhx+iauGRs8UQnuDkBH2odhcyWSrIRJ+t3M+qR4NOdDyGTo2np2hAiDaWy55F6TQD0c7EY8NA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZSjApoFD0gmo8JADzxrc3DbSryHX8BK83gTeR0Vn8Jo=;
 b=e3EhKj7Nu/gG71xkWWJg4LKqmwQWQl+1kTuciOu3rJt89CnDswv08mKixDIMI8TrgCYe66iBeiY9G4vSKu3QBfJqGViJKnPQ+p8pgfWc9svy46p0gkPxC3HurHthmbwHwLoXx7hQAvMIDNtqv0cutvzfdlUrXA1on1ekjxTU+dqe3E7F9vVqAPOjbyn4oER5LT508CJcgwrDgcjTSQnfsSoDgME55LUB29B2tO1sZM06kSpftnMLD5+Eb8hP3qp9/qGzE+eNjFAN2tks6x5OKVFfLGFgbl7H+D0jqeBQqVQAvb8dEyavE9Dn+N7hX131yOHNkPoqCwbUGjVlQ0NITw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZSjApoFD0gmo8JADzxrc3DbSryHX8BK83gTeR0Vn8Jo=;
 b=NRLQErikzmutj/DBOWj9TDZpgN60dfChFYQVOpv4eDYa7tT8+d+5qkWcKxgIme1AB5pC1O/xQ7PJr6qbV6rC+Fm15O+vnwYYnT1lyrw5QEEPh7OTEvJIJ3QEMU6H/31ROEr4cL/xnZc+UcMTDrvVpHO0TWQj6iVokHso54Jx++L1/nZ65Z3LyC8Ra+xibHxwTVhIcY46hi3/lHKffeb0vuHmoo96QmB/HCEO5YNfMSN1iD20vtByrfOx0dkVzQ4kYFc7uJxy4PGHilIxOrK2tooKYLji3AVBfQeo4B6bmWT8ajKaoWZknvf5LalG3Nc+8Tey/W3hvmuLclf7HE+pdA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: AQHXWQwW/O3obtZgqkmpGRvt7ZNDkasM6aMAgAAjXoCAAEW+AIAAFw0A
Date: Thu, 10 Jun 2021 15:33:22 +0000
Message-ID: <e0f73a05-b027-d0b6-8f8d-a1078dedccf7@epam.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <YMIdbGCpFGZGwLoN@Air-de-Roger>
In-Reply-To: <YMIdbGCpFGZGwLoN@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 35281ce5-fdd6-4ac8-2b8b-08d92c251516
x-ms-traffictypediagnostic: AM0PR0302MB3425:
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3425982BC5232A6A39FF1AE5E7359@AM0PR0302MB3425.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 XOIvHe7+K/JJdfXTLVIatV8Y5mf5cl+meYUyMrQJPeMDqVCDsHV5S/S10+K09B8KN4Edk9gZ6nHZKC6gwLM0Mgx74fGDTMzOyADi00PlLhNkduJFYGPjurBkKpgugqv+Ov7cj76wMw+I//2SlsD9OBQPWu0udWqes7R9R9/4pc1wuJTLWBfMUFN13nd/LPyie/LsFvFkyS+/MHkZiQu+AFtyRkMMUolysMSeqrIHkgePY7hjeH8LTu5VDMQszVnt49fHXvDRSrRA6saZl31uVtxe1z3ff05befxBH+KY3w7VK0bKufszBpChloxz7Umr4CPLD/JTvrxG8QKrkh1xAuefNils8PlShjTKD0WLVWlAS3JH2IU6wUJSoVLIBnV/7OnBCRnDNKUGSv3LjYkGbflsUQ0o79xvhrb+dsE3Tjj09zTKY5q4b50F0NfYCgQf1dNjqwxRxBYAE4qcVHe2vhj+7EPyX8Q4q2xAPq4ycRhVsQQkkwo/DzRAKsFbf/Mi4Q80hH+UxSzaBCEf2x+FRthYRqkiSyq+5xiVImNkeqOWTVZMvjubsSUp5eflGZEuwPXRVDnyYooV9hW4Dv1oMSw/Yfh6zdRmklZQPHEtOIVuwYEOB9Hr67jnfPo4SUi0/dIaJ19TF6EPb6pjRpgLDDSEYb7mhh2sDtXMFin7xz5pOCrZPJvEfm/ltkjISo1LFXcOgMjJin+S8jXHcxutKOvKTXOeb0AD12M3Af3ZmH12ggK2eQbNDhi8UK/ViW6RzjMvfmYvLDXS1805HkKWcOiRCWrGJj4GFUQxODzR+Lo=
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(376002)(366004)(346002)(2906002)(31696002)(6506007)(316002)(66446008)(53546011)(6916009)(5660300002)(64756008)(6512007)(71200400001)(54906003)(66946007)(6486002)(26005)(76116006)(38100700002)(66556008)(91956017)(83380400001)(36756003)(478600001)(8936002)(86362001)(66476007)(2616005)(4326008)(186003)(31686004)(122000001)(8676002)(966005)(21314003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?aEJTMW9WT3hsNUd2UTBjbWIybXBadFZ2WHBWNy9aNWRUMWF5dWt4SlFuZXI4?=
 =?utf-8?B?Vi9Iczd0RzVSUVhmd1JNakUwSjBtNjg0U29kL2FGcG1QL1dVMDVDclBRV1Jp?=
 =?utf-8?B?Y1UyNWJoQkxrcDRaeTVPNlBjS3BYc09YbGdXRXZRTjVYWko1bGNkVXNBYnMv?=
 =?utf-8?B?dUZILzE3ckhyVTR6Q1crRzFBdXd6V2prQVNtMllDQkZmQnlvdlc2V0l3QWo1?=
 =?utf-8?B?alBtOU5FMFZXQnVBN0FwN2doUHh5TGwySHNEaE0zazRHb2g5dDl0cEJvMzZP?=
 =?utf-8?B?enU2eVRRTXQyS2xoL0RCZlZWYlZ2OTdIeWV6TFlEbUxSNUdMVVIyU29MSEdC?=
 =?utf-8?B?L08rRWZwczlVL1hpWkhBNEl6Skx4UzJEcFR4VGZ5NmZ6Nkhqd2NuRC9kd3Bw?=
 =?utf-8?B?NUI0U3VTcklNTHlva29UOVFhYk50M1dCY2RhYXN5OWFQUGpVTGFDME5PS2lN?=
 =?utf-8?B?aUlVZkNLNUR6R2I5UkdsMXJVa1Q3cWhpeWtnRE5xS0FoOFhDYmp0clBpU0pv?=
 =?utf-8?B?Q1loRTUwVkNiOHRxaEw5cm5RbWVuc2tHcU4yVWhNcFpxbnVFNFJma09uWkUv?=
 =?utf-8?B?R3JxZWFzL0ZkNmlmUkVXS0pSaHdLWlJOYlMvajdBZ3I2ekRDVEJwME5UbDdy?=
 =?utf-8?B?amtncW1wMUxtTExkQkJUcFRsc1pPOUVVbXhobk8wTTZ3RjRFUUxiak5YMGV4?=
 =?utf-8?B?Sy9NdDJXQkZUdmVJblFmQ1U2Yk0yb2Q2UThObDBiUWNTdVJPWUxVb0ZnZkdi?=
 =?utf-8?B?cXpOYjZabGNUZ3dCYTd3Q010SnIzYi9VazdLclltTXdtTUVZbEtrNFlWMUlv?=
 =?utf-8?B?bXZCOEptWWx1ajBOS0Z1clUxM0tHL0ZCUllQMGZrYTF3MXE5TC9HNTlyQXpv?=
 =?utf-8?B?MFk0N1VXS3JtNHVmZzlVeHp4RitkMjJSNHpZcUp2RlViSFRjdTYrS1VSL045?=
 =?utf-8?B?NFFxOTE4NGQvdE5Ca0JBODh1N1pHNXd1Z0VYdTBWNW8wRGxrcnM2UHQxRnE3?=
 =?utf-8?B?cEg5WDdlQ3NMQUR2VDVaQWVicU1BNlZhRXhueFJZS00yNjhBTjZuaUFnL2x4?=
 =?utf-8?B?dHJrQWc2RGlJQzY5YXZtS1FwWWZINWM5dU8vcFRTTmVDeHhGNkdhekpYUFVY?=
 =?utf-8?B?MXU2bXZoWXA3Wk9UamJERVZ4NjkwZ3EzUlEyeDY5dVBEamRjSkdkL2IvajZr?=
 =?utf-8?B?TTZDMExBMHNudDdrWk5ZZVh5b1hZdnNLbytYcmNMNFM3bVJjUnZCNjRUVkg3?=
 =?utf-8?B?NkhEaURpOTRERkVudnFaRHkyRUlBdlVpMlB5MXg5aE4zbnNyeUVaL0dCZktX?=
 =?utf-8?B?Y0dORTkyR1hxajFCQ1lCWVkzbDJMY1F5MURMWUhYc2QxdXpmVS84SGljbEQr?=
 =?utf-8?B?L1hvL09DZTF0ZFQxTFNDRVhwTjNVQ0NudXBHSlB6bE5MaVR0WFpvcGVQR2dF?=
 =?utf-8?B?OFVNb281V0xuYzFVTEdCbUpYYmxzY2tTemw3WlkwV3FsckxLekVVZXluVjF4?=
 =?utf-8?B?N2JiWk9pUFFGTjdySGcrQW9RR1dmVGhLUmxmRXBVbjZjbnkwZmVqQ254V3F6?=
 =?utf-8?B?UElJTWxUUG9Mc3lFUExGZVdxU013eXZOSE5WcGlTc0EyZU9hbFhsN1dKTkg3?=
 =?utf-8?B?MFRsNUZaOFQ0SUwwY01qSGozc3YrVllCK3lnN3pYRWZRd0M2Z2ZCYTlXNXpC?=
 =?utf-8?B?WDRJVGxsQlErTDZnWm5HOWUzNHJmVXZ4M1JqRUhISUJBd2YreGFXMUxTOG9H?=
 =?utf-8?Q?5LpvqH4Wr/+lSH2Ia7RD67YwEtKuXhM7eHgUer8?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <1E50BAF7FFE5AE4E98E48347C8B4937E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 35281ce5-fdd6-4ac8-2b8b-08d92c251516
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2021 15:33:22.7667
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: CKg0xWsdKH1xLcs7KIyarBOeeecuAUo5LuE23MCkB0rj77k/CskMpL2vmDjHiPdJTSiV1xJ4VUj/Fk/4Gm8gurJ+EfOzwjVOt6r0ZtoJ53bXj9oo091UhyVGKFskXDe0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3425
X-Proofpoint-GUID: CBU5R6i5jAt7B3e_iTuKBLf3xh4Zdh-g
X-Proofpoint-ORIG-GUID: CBU5R6i5jAt7B3e_iTuKBLf3xh4Zdh-g
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 malwarescore=0 impostorscore=0 bulkscore=0 phishscore=0 suspectscore=0
 lowpriorityscore=0 mlxlogscore=999 clxscore=1015 mlxscore=0 spamscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106100099

On 10.06.21 17:10, Roger Pau Monné wrote:
> On Thu, Jun 10, 2021 at 10:01:16AM +0000, Oleksandr Andrushchenko wrote:
>> Hello, Roger!
>>
>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>> On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
>>>> Hi, all!
>>>>
>>>> While working on PCI SR-IOV support for ARM I started porting [1] on
>>>> top of current PCI on ARM support [2]. The question I have for this
>>>> series is if we really need emulating SR-IOV code in Xen?
>>>>
>>>> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
>>>> patches) and it "works for me": MSI support is still WIP, but I was
>>>> able to see that VFs are properly seen in the guest and BARs are
>>>> properly programmed in p2m.
>>>>
>>>> What I can't fully understand is if we can live with this approach
>>>> or there are use-cases I can't see.
>>>>
>>>> Previously I've been told that this approach might not work on
>>>> FreeBSD running as Domain-0, but it seems that "PCI Passthrough is
>>>> not supported (Xen/FreeBSD)" anyways [4].
>>> PCI passthrough is not supported on FreeBSD dom0 because PCI
>>> passthrough is not supported by Xen itself when using a PVH dom0, and
>>> that's the only mode FreeBSD dom0 can use.
>> So, it is still not clear to me: how and if PCI passthrough is
>> supported on FreeBSD, what are the scenarios and requirements for that?
>>
>>> PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
>>> to work. I however think this is not the proper way to implement
>>> SR-IOV support.
>> I was not able to find any support for PHYSDEVOP_XXX in FreeBSD code,
>> could you please point me to where these are used?
> Those are not used on FreeBSD, because x86 PVHv2 dom0 doesn't
> implement them anymore. They are implemented on Linux for x86 PV dom0,
> AFAIK Arm doesn't use them either.

Well, ARM didn't until we started implementing PCI passthrough [1].

It was previously discussed [2], "# Discovering PCI devices:", and it
was proposed to use PHYSDEVOP_pci_device_add.

Long story short, it is not easy for ARM to enumerate PCI devices in Xen
as there is no unified way of doing so: different platforms implement
different PCI host bridges which require complex initialization,
including clocks, power domains etc.

It was also discussed that PCI on ARM would want to support dom0less
(DomB) setups, so we should have some bootloader which will enumerate
PCI devices for Xen beforehand, and Xen will only support ECAM-based
host bridges.

Anyways, as the above does not exist yet, we use PHYSDEVOP_pci_device_add
on ARM, and we rely on Dom0 to initialize the PCI host bridge, so Xen
can also access PCI.

>
>> If they are not, then how does Xen under FreeBSD know about PCI devices?
> Xen scans the PCI bus itself, see scan_pci_devices.
See above, this is not yet available on ARM.
>
>> I am trying to extrapolate my knowledge of how Linux does that
>> (during PCI enumeration in Domain-0 we use hypercalls).
>>
>>>> I also see the ACRN hypervisor [5] implements SR-IOV inside it,
>>>> which makes me think I miss some important use-case on x86 though.
>>>>
>>>> I would like to ask for any advice with SR-IOV in hypervisor
>>>> respect, any pointers to documentation or any other source which
>>>> might be handy in deciding if we do need SR-IOV complexity in Xen.
>>>>
>>>> And it does bring complexity if you compare [1] and [3]...
>>>>
>>>> A bit of technical details on the approach implemented [3]:
>>>> 1. We rely on PHYSDEVOP_pci_device_add
>>>> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs
>>>> 3. BARs are programmed in p2m implementing a guest view on those (we
>>>> have extended vPCI code for that and this path is used for both
>>>> "normal" devices and VFs the same way)
>>>> 4. No need to trap PCI_SRIOV_CTRL
>>>> 5. No need to wait 100ms in Xen before attempting to access VF
>>>> registers when enabling virtual functions on the PF - this is
>>>> handled by Domain-0 itself.
>>> I think the SR-IOV capability should be handled like any other PCI
>>> capability, ie: like we currently handle MSI or MSI-X in vPCI.
>>>
>>> It's likely that using some kind of hypercall in order to deal with
>>> SR-IOV could make this easier to implement in Xen, but that just adds
>>> more code to all OSes that want to run as the hardware domain.
>> I didn't introduce anything new; PHYSDEVOP_pci_device_add was enough.
> Well, that would be 'new' on x86 PVH or Arm, as they don't implement
> any PHYSDEVOP at the moment.
Agree for x86 PVH.
>
> Long term we might need a hypercall to report dynamic MCFG regions,
> but I haven't got around to it yet (and haven't found any system that
> reports extra MCFG regions from ACPI AML).
Which means we'll need to modify the guest OS.
>
>> The rest I did in Xen itself wrt SR-IOV.
>>
>>> OTOH if we properly trap accesses to the SR-IOV capability (like it
>>> was proposed in [1] from your references) we won't have to modify
>>> OSes that want to run as hardware domains in order to handle SR-IOV
>>> devices.
>> Out of curiosity, could you please name a few? I do understand that
>> we do want to support unmodified OSes and this is indeed important.
>>
>> But, still, what are the other OSes which do support Xen + PCI
>> passthrough?
> NetBSD PV dom0 does support PCI passthrough, but I'm not sure that's
> relevant.

That was just for me to understand where to look for the PCI passthrough
implementations and not to break something which I don't see.

>
> We shouldn't focus on current users to come up with an interface,
> but rather think how we want that interface to be.
>
> As I said on the previous email, my opinion is that unless not
> technically possible we should just trap accesses to the SR-IOV
> capability like we do for MSI(-X) and handle it transparently from a
> guest PoV.

Ok, I understand. It seems that Jan also supports your idea. So, I am
not against that, just trying to see the whole picture, which is a bit
bigger than ARM.

>
>>> IMO going for the hypercall option seems easier now, but adds a
>>> burden to all OSes that want to manage SR-IOV devices that will hurt
>>> us long term.
>> Again, I was able to make it somewhat work with
>> PHYSDEVOP_pci_device_add only.
> Sure, that's how it works on x86 PV hardware domain, so it's certainly
> possible. My comments to avoid that route are not because it's not
> technically feasible, but because I don't like the approach.

Unless we have some unified way of accessing PCI on ARM I am not sure we
can live without the PHYSDEVOP_pci_device_add hypercall.

>
> So far we have avoided PVH from having to implement any PHYSDEVOP
> hypercall, and that's a design decision, not a coin
Y2lkZW5jZS4gSSdtIGluDQo+IGZhdm9yIG9mIHVzaW5nIHRoZSBleGlzdGluZyBoYXJkd2FyZSBp
bnRlcmZhY2VzIGZvciBndWVzdHMgaW5zdGVhZCBvZg0KPiBpbnRyb2R1Y2luZyBjdXN0b20gWGVu
IG9uZXMgd2hlbiB0ZWNobmljYWxseSBmZWFzaWJsZS4NCg0KVW5mb3J0dW5hdGVseSwgb24gQVJN
IChhbmQgSSBiZWxpZXZlIGl0IG1heSBhbHNvIGhhcHBlbiBvbiBvdGhlcg0KDQpub24teDg2IHBs
YXRmb3JtcykgdGhlcmUgYXJlIG5ldyBvYnN0YWNsZXMgdG8gdGhpcyBkZXNpZ24uIEFuZCBpZg0K
DQp3ZSB3YW50IFhlbiArIFBDSSBiZSBzdXBwb3J0ZWQgb24gb3RoZXIgdGhhbiB4ODYgcGxhdGZv
cm1zIHdlIGhhdmUNCg0KdG8gcmUtdGhpbmsgdGhlIGV4aXN0aW5nIGFwcHJvYWNoIHRvIGluY2x1
ZGUgb3RoZXJzIGluIHRoZSBnYW1lLg0KDQo+DQo+IFRoYW5rcywgUm9nZXIuDQoNClRoYW5rIHlv
dSwNCg0KT2xla3NhbmRyDQoNClsxXSBodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvZnVz
YS94ZW4taW50ZWdyYXRpb24vLS9ibG9iL2ludGVncmF0aW9uL3BjaS1wYXNzdGhyb3VnaC94ZW4v
YXJjaC9hcm0vcGh5c2Rldi5jDQoNClsyXSBodHRwczovL3d3dy5tYWlsLWFyY2hpdmUuY29tL3hl
bi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZy9tc2c3NzQyMi5odG1sDQoNCg==


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 15:52:50 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162583-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162583: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 15:52:38 +0000

flight 162583 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162583/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    6 days
Failing since        162368  2021-06-04 15:42:59 Z    6 days    6 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 16:20:02 2021
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, osstest service owner <osstest-admin@xenproject.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [xen-unstable-smoke test] 162597: regressions - FAIL
Date: Thu, 10 Jun 2021 16:19:27 +0000
Message-ID: <E28F5F88-7D8A-46C1-89B8-9841071778D1@arm.com>
References: <osstest-162597-mainreport@xen.org>
 <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com>
In-Reply-To: <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com>

Hi Jan,

> On 10 Jun 2021, at 12:32, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 10.06.2021 12:50, osstest service owner wrote:
>> flight 162597 xen-unstable-smoke real [real]
>> flight 162602 xen-unstable-smoke real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/162597/
>> http://logs.test-lab.xenproject.org/osstest/logs/162602/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>> test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574
>
> This now being the 3rd failure in a row, I guess there's a fair chance
> of there actually being something wrong with ...
>
>> commit dfcffb128be46a3e413eaa941744536fe53c94b6
>> Author: Stefano Stabellini <sstabellini@kernel.org>
>> Date:   Wed Jun 9 10:37:59 2021 -0700
>>
>>    xen/arm32: SPSR_hyp/SPSR
>>
>>    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
>>    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
>>    See: ARM DDI 0487D.b page G8-5993.
>>
>>    This fixes booting Xen/arm32 on QEMU.
>>
>>    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>    Reviewed-by: Julien Grall <jgrall@amazon.com>
>>    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>>    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>
> ... this. My Arm-untrained eye couldn't spot anything in the logs.

I am not sure I am reading the log correctly, but do I see it right that
dom0 started and it then failed to start a guest?

Regards
Bertrand

>
> Jan
>
>



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 16:57:03 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162561-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 162561: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    xen-4.15-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.15-testing:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 16:56:47 +0000

flight 162561 xen-4.15-testing real [real]
flight 162608 xen-4.15-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162561/
http://logs.test-lab.xenproject.org/osstest/logs/162608/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore   fail pass in 162608-retest
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162608-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 162546

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162546
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162546
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162546
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162546
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162546
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162546
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162546
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162546
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162546
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162546
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162546
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a339ceaa8f17e827ee5eb25f05ad6f52ba8d6b1c
baseline version:
 xen                  f034c96e882b81738720472cd28e75e6d6eb66fe

Last test of basis   162546  2021-06-08 17:06:32 Z    1 days
Testing same since   162561  2021-06-09 02:20:51 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   f034c96e88..a339ceaa8f  a339ceaa8f17e827ee5eb25f05ad6f52ba8d6b1c -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 17:57:31 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162598-mainreport@xen.org>
Subject: [libvirt test] 162598: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 17:57:15 +0000

flight 162598 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162598/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2a51ff7b40ac7ed81d9244120716e1fd38371572
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  335 days
Failing since        151818  2020-07-11 04:18:52 Z  334 days  327 attempts
Testing same since   162598  2021-06-10 09:09:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61044 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 18:08:27 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-debianhvm-amd64
Message-Id: <E1lrP6L-0007S9-EM@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 18:08:21 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-debianhvm-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160849/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
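
As context for the commit message above, the replacement command it names is issued over QMP roughly as follows (a sketch only; the exact set of reply fields varies by QEMU version and target, and the values here are illustrative):

```
-> { "execute": "query-cpus-fast" }
<- { "return": [
       { "cpu-index": 0,
         "qom-path": "/machine/unattached/device[0]",
         "thread-id": 25627,
         "target": "x86_64",
         "props": { "core-id": 0, "thread-id": 0, "socket-id": 0 } }
   ] }
```

Note the hyphenated field names (e.g. "cpu-index" rather than the old command's "CPU"), which is the renaming the commit message warns callers about.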


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64.guest-saverestore --summary-out=tmp/162610.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-debianhvm-amd64 guest-saverestore
Searching for failure / basis pass:
 162551 fail [host=albana0] / 160125 [host=chardonnay0] 160119 [host=albana1] 160113 [host=pinot1] 160104 [host=huxelrebe1] 160097 [host=chardonnay1] 160091 ok.
Failure / basis pass flights: 162551 / 160091
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c-a4716fd8d7c877185652f5f8e25032dc7699d51b git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#14b95b3b8546db201e7efd0636ae0e215fae98f3-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 31477 nodes in revision graph
Searching for test results:
 160091 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160097 [host=chardonnay1]
 160104 [host=huxelrebe1]
 160113 [host=pinot1]
 160119 [host=albana1]
 160125 [host=chardonnay0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160812 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160818 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160814 fail irrelevant
 160816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160819 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1db136a29ce8594b693938ab8e788d8bcef54770 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160822 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160823 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160801 fail irrelevant
 160824 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160826 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 87a80dc4f2f5e51894db143685a5e39c8ce6f651 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160828 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160830 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 313d86c956d4599054a9dcd524668f67797d317a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160832 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 56b89f455894e4628ad7994fe5dd348145d1a9c5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b50101833987b47e0740f1621de48637c468c3d1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160835 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160836 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160838 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160839 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160842 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160844 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160846 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160847 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160827 fail irrelevant
 160849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160854 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160856 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162592 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 162610 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160091 (pass), for basis pass
 Result found: flight 162551 (fail), for basis failure
 Repro found: flight 162592 (pass), for basis pass
 Repro found: flight 162610 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160839 (pass), for last pass
 Result found: flight 160842 (fail), for first failure
 Repro found: flight 160844 (pass), for last pass
 Repro found: flight 160846 (fail), for first failure
 Repro found: flight 160847 (pass), for last pass
 Repro found: flight 160849 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160849/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
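For consumers affected by this removal, the practical change is switching to 'query-cpus-fast' and updating the JSON field names read from the reply. The sketch below is purely illustrative (it is not part of the bisected commit); the key pairs shown reflect the commonly renamed fields, and the exact set should be checked against the QEMU QMP schema rather than taken as authoritative.

```c
/* Hypothetical sketch: map the old 'query-cpus' JSON field names to the
 * 'query-cpus-fast' equivalents. The mapping below is an assumption drawn
 * from the commit message's note that "some of the field names are
 * different"; verify against the QMP schema for your QEMU version. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char *fast_key_for(const char *old_key)
{
    static const char *map[][2] = {
        { "CPU",       "cpu-index" },  /* integer vCPU index */
        { "qom_path",  "qom-path"  },  /* QOM path of the vCPU object */
        { "thread_id", "thread-id" },  /* host thread backing the vCPU */
    };

    for (size_t i = 0; i < sizeof map / sizeof map[0]; i++)
        if (strcmp(map[i][0], old_key) == 0)
            return map[i][1];

    /* Fields such as 'halted' have no fast equivalent: reading them
     * required synchronizing with the vCPU, i.e. the very side effect
     * on guest execution that 'query-cpus-fast' avoids. */
    return NULL;
}
```

A caller migrating old tooling would translate each key it reads and treat a NULL result as "no longer available without perturbing the guest".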

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.343816 to fit
pnmtopng: 207 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162610: tolerable FAIL

flight 162610 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162610/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 18:28:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 18:28:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140027.258807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrPPg-0000jc-7b; Thu, 10 Jun 2021 18:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140027.258807; Thu, 10 Jun 2021 18:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrPPg-0000jV-4d; Thu, 10 Jun 2021 18:28:20 +0000
Received: by outflank-mailman (input) for mailman id 140027;
 Thu, 10 Jun 2021 18:28:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrPPe-0000jL-Om; Thu, 10 Jun 2021 18:28:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrPPe-0004di-J0; Thu, 10 Jun 2021 18:28:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrPPe-0007nE-BD; Thu, 10 Jun 2021 18:28:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrPPe-0004b1-Ai; Thu, 10 Jun 2021 18:28:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tRJjPyJ0ayceWGBuGwmdlYKRQQEXndPa9N4GPOZ1Um8=; b=bjEjSBE+4wKCNMdisRHsPHcGqK
	DXT0ySls3rbP+l3dTxNN83vhc9K7qLs6quhk02XCbgxXgocR3tDoO9nKYRXgmJgwULlsmu0AVohp+
	Q1Aky/lYStk6+uhhRYYdn0zUtMBhR9MrKmqXCvlaCOI6ntUiYtZSx91XL708+72SxfXI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162576-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 162576: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=e3c30795823672eec9bde75187e184f23ed98d70
X-Osstest-Versions-That:
    seabios=7292e4a0a8f58333ccbd2d0d47242f9865083c9c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 18:28:18 +0000

flight 162576 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162576/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162361
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162361
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162361
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162361
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162361
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              e3c30795823672eec9bde75187e184f23ed98d70
baseline version:
 seabios              7292e4a0a8f58333ccbd2d0d47242f9865083c9c

Last test of basis   162361  2021-06-04 06:09:59 Z    6 days
Testing same since   162576  2021-06-09 15:09:57 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Volker Rümelin <vr_qemu@t-online.de>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   7292e4a..e3c3079  e3c30795823672eec9bde75187e184f23ed98d70 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 19:14:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 19:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140039.258835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrQ8T-0005YS-W0; Thu, 10 Jun 2021 19:14:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140039.258835; Thu, 10 Jun 2021 19:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrQ8T-0005YL-S4; Thu, 10 Jun 2021 19:14:37 +0000
Received: by outflank-mailman (input) for mailman id 140039;
 Thu, 10 Jun 2021 19:14:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrQ8R-0005YB-Tx; Thu, 10 Jun 2021 19:14:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrQ8R-0005S4-H1; Thu, 10 Jun 2021 19:14:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrQ8R-0001Wk-9l; Thu, 10 Jun 2021 19:14:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrQ8R-0001RE-9E; Thu, 10 Jun 2021 19:14:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Xkv7/zF+8gps3lbOtXzVUuxuplQIK34I3E0n+1kI230=; b=tNlB+0Z4amgaVTW+JGC3TNLAQc
	sMzYhcThxF/k1G0tbJqdKoVjxEY1qBeLwEuAXmM1EOZkneBZv7j6hb/UjkTl9T0DAkLMO3MQwP/2L
	C+ktRKYzyKZHRYgc5Lw0snQd5dEBgpreINSoegz6ZppqDmxutYqe4ZlwtN1/c5iWkWNo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162607-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162607: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2bb17a45b1814b0b6aa4646eff58e16f876281fd
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 19:14:35 +0000

flight 162607 xen-unstable-smoke real [real]
flight 162612 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162607/
http://logs.test-lab.xenproject.org/osstest/logs/162612/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2bb17a45b1814b0b6aa4646eff58e16f876281fd
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    0 days    5 attempts
Testing same since   162607  2021-06-10 15:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
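The rearrangement described in the commit can be sketched in miniature. Clang's -Wsometimes-uninitialized is a textual/flow check: if a variable is assigned only under an 'if' and then used unconditionally, the warning fires even when the condition is provably always true in a given build configuration. Structuring the code so every path assigns the variable silences it. The names below are illustrative, not Xen's actual code.

```c
/* Minimal sketch of the pattern the commit fixes (names are made up).
 * Before: `unsigned long v; if ( !compat ) v = in; use(v);` triggers
 * -Wsometimes-uninitialized under Clang, because textually 'v' is only
 * assigned on one path. After: both paths assign 'v' before use. */
#include <assert.h>
#include <stdbool.h>

static unsigned long widen(bool compat, unsigned long in)
{
    unsigned long v;

    if ( compat )
        v = in & 0xffffffffUL;  /* 32-bit guest: truncate to 32 bits */
    else
        v = in;                 /* 64-bit guest: use the value as-is */

    return v;                   /* every path assigned v: no warning */
}
```

Initializing the variable at declaration (as Clang's fix-it suggests) would also silence the warning, but restructuring keeps the dead store out of the generated code.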

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 20:16:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 20:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140049.258856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrR60-0002xk-Ov; Thu, 10 Jun 2021 20:16:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140049.258856; Thu, 10 Jun 2021 20:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrR60-0002xd-LL; Thu, 10 Jun 2021 20:16:08 +0000
Received: by outflank-mailman (input) for mailman id 140049;
 Thu, 10 Jun 2021 20:16:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrR5z-0002xT-Af; Thu, 10 Jun 2021 20:16:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrR5z-0006Xb-1j; Thu, 10 Jun 2021 20:16:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrR5y-0005ce-Pe; Thu, 10 Jun 2021 20:16:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrR5y-0000Oi-P8; Thu, 10 Jun 2021 20:16:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=clLeGGGWR86ZauUuWjYCvT65iuIAfuDb13J8/Zl9azw=; b=LyazpPKo7xRChVrX7/Fva6EUuH
	n0VtOvUenzIw69M21IF6dAoZL2vK0R79KH/J+zFrV84YQWxd4iFaX9+V+A7m2lf7djEEhHy5UYtFo
	/WFiSDJ7+N5u+6kDqbYtu6jwMWLIdXipz7Me0S2rtPsrFifN3+5OEZ9sVRHQn72cjDkY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm
Message-Id: <E1lrR5y-0000Oi-P8@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 20:16:06 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160911/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.guest-saverestore --summary-out=tmp/162616.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm guest-saverestore
Searching for failure / basis pass:
 162551 fail [host=albana0] / 160125 [host=huxelrebe1] 160119 [host=godello0] 160104 [host=fiano1] 160097 [host=chardonnay1] 160091 [host=albana1] 160082 [host=pinot1] 160079 [host=godello0] 160070 [host=huxelrebe1] 160066 [host=elbling0] 160002 ok.
Failure / basis pass flights: 162551 / 160002
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#6f34661b6c97a37a5efc27d31c037ddeda4547e2-a4716fd8d7c877185652f5f8e25032dc7699d51b git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-7292e4a0a8f58333ccbd2d0d47242f9865083c9c git://xenbits.xen.org/xen.git#e4bdcc8aef6707027168ea29caed844a7da67b4d-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 31520 nodes in revision graph
Searching for test results:
 159947 [host=pinot0]
 160002 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=elbling0]
 160070 [host=huxelrebe1]
 160079 [host=godello0]
 160082 [host=pinot1]
 160088 []
 160091 [host=albana1]
 160097 [host=chardonnay1]
 160104 [host=fiano1]
 160113 [host=godello0]
 160119 [host=godello0]
 160125 [host=huxelrebe1]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 blocked irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160862 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 160866 fail irrelevant
 160868 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160869 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160872 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160874 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160875 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81cbfd5088690c53541ffd0d74851c8ab055a829 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160879 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160881 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160851 fail irrelevant
 160882 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f3bdfc41866edf7c256e689deb9d091a950c8fca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5b7f5586d182b0cafb1f8d558992a14763e2953e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160886 fail irrelevant
 160887 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160889 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160891 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160893 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 24e13a4dc1eb1630eceffc7ab334145d902e763d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160895 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160897 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160900 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160902 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160905 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160908 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160910 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160911 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 blocked irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 blocked irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162613 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 162616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160002 (pass), for basis pass
 Result found: flight 162551 (fail), for basis failure
 Repro found: flight 162613 (pass), for basis pass
 Repro found: flight 162616 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160897 (pass), for last pass
 Result found: flight 160900 (fail), for first failure
 Repro found: flight 160902 (pass), for last pass
 Repro found: flight 160908 (fail), for first failure
 Repro found: flight 160910 (pass), for last pass
 Repro found: flight 160911 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160911/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.329807 to fit
pnmtopng: 226 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162616: tolerable FAIL

flight 162616 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162616/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail baseline untested


jobs:
 build-amd64-xsm                                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Jun 10 22:22:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 22:22:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140065.258888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrT3d-00068s-Kf; Thu, 10 Jun 2021 22:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140065.258888; Thu, 10 Jun 2021 22:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrT3d-00068l-HH; Thu, 10 Jun 2021 22:21:49 +0000
Received: by outflank-mailman (input) for mailman id 140065;
 Thu, 10 Jun 2021 22:21:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrT3c-00068b-95; Thu, 10 Jun 2021 22:21:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrT3c-00009v-2v; Thu, 10 Jun 2021 22:21:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrT3b-00027D-P6; Thu, 10 Jun 2021 22:21:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrT3b-0002P6-Oa; Thu, 10 Jun 2021 22:21:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HNLAAOQUfMyDTuKlY/Z8W5stgZwPvC5F/mzdVpuucWs=; b=t0yk/iufhe6yBI6zf++JnC1YH8
	FB7lSDGT+KJ40yPueG/K6ZYaSxRxOECqRFk3wpY1jfAPIOaibG0zc6qzIbIndzoyP29Ck0fsTvNwt
	OQNoqC7XIUxmcpwtxz5doe6Mx4eQe+llt20Db1PbYRfscYAkvT8RzMDkQiuY+cKozyo8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162591-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162591: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 22:21:47 +0000

flight 162591 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162591/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  294 days
Failing since        152659  2020-08-21 14:07:39 Z  293 days  542 attempts
Testing same since   162591  2021-06-10 04:36:22 Z    0 days    1 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170591 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 10 23:15:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Jun 2021 23:15:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140092.258926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrTss-0003et-8M; Thu, 10 Jun 2021 23:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140092.258926; Thu, 10 Jun 2021 23:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrTss-0003em-5O; Thu, 10 Jun 2021 23:14:46 +0000
Received: by outflank-mailman (input) for mailman id 140092;
 Thu, 10 Jun 2021 23:14:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrTsr-0003ec-CG; Thu, 10 Jun 2021 23:14:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrTso-00012N-M9; Thu, 10 Jun 2021 23:14:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrTso-0003ug-GO; Thu, 10 Jun 2021 23:14:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrTso-0005CN-Fv; Thu, 10 Jun 2021 23:14:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KXSI9iPgSOhbTG5lBul9C5MRTH/xH8CBbRVF6gxfV7c=; b=hi3wn9fUC+uCNCCzuKrB+/t3J9
	WXy/AVDEESvUqZrwIl8ha+EcAQzcL48zYHFzZa030TBGKHDXnoROfpOiaN3v82kk1TkXgiamg9h/s
	PYtJpbzul13ocw35RvEnB/xsmjeyHMPFFmlMnJwdKI9wR/JebUDarAz01VD367T3S9T8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162618-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162618: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2bb17a45b1814b0b6aa4646eff58e16f876281fd
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Jun 2021 23:14:42 +0000

flight 162618 xen-unstable-smoke real [real]
flight 162621 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162618/
http://logs.test-lab.xenproject.org/osstest/logs/162621/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2bb17a45b1814b0b6aa4646eff58e16f876281fd
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    0 days    6 attempts
Testing same since   162607  2021-06-10 15:00:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
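
    [Editorial note: the following is a minimal, hypothetical reduction of
    the pattern described above, not the actual domain.c code. The names
    set_cr3_mfn, native_mfn and compat_mfn are invented for illustration.
    It shows why putting the compat case first under #ifdef lets the
    compiler see the variable as unconditionally initialized when
    CONFIG_COMPAT is not defined.]

```c
#include <stdbool.h>

/* Before the fix the !compat case came first:
 *
 *     if ( !compat )
 *         cr3_mfn = native_mfn;
 * #ifdef CONFIG_COMPAT
 *     else
 *         cr3_mfn = compat_mfn;
 * #endif
 *
 * With CONFIG_COMPAT undefined the assignment still sits behind a
 * condition, so Clang's -Wsometimes-uninitialized fires even though
 * "compat" is constant false there.
 */
static unsigned long set_cr3_mfn(bool compat, unsigned long native_mfn,
                                 unsigned long compat_mfn)
{
    unsigned long cr3_mfn;

    /* After the fix: compat case first.  When CONFIG_COMPAT is not
     * defined, the only statement left is the unconditional assignment,
     * so the variable is provably initialized on every path. */
#ifdef CONFIG_COMPAT
    if ( compat )
        cr3_mfn = compat_mfn;
    else
#endif
        cr3_mfn = native_mfn;

    return cr3_mfn;
}
```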

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 01:49:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 01:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140119.258971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrWIX-0007TH-9f; Fri, 11 Jun 2021 01:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140119.258971; Fri, 11 Jun 2021 01:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrWIX-0007Sp-1T; Fri, 11 Jun 2021 01:49:25 +0000
Received: by outflank-mailman (input) for mailman id 140119;
 Fri, 11 Jun 2021 01:49:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C43g=LF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lrWIV-0007Sh-H4
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 01:49:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd4b84f1-6dd1-4db7-92f0-a54d5e4886f9;
 Fri, 11 Jun 2021 01:49:22 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F1260613B0;
 Fri, 11 Jun 2021 01:49:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd4b84f1-6dd1-4db7-92f0-a54d5e4886f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623376162;
	bh=LzqymiWl9W3uKLPJog1kh+tyHSofk1JHiqsNFGP3NRA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aZo0/fuN+9UOLe0Q7uXPSFb93Qu5PdVlN2mgthWsPFNnrkI9CJpdVcAvc5obB2JAp
	 WACvowRZa7T91OJ4srBOKrE1twBQR7NnOZ9XuhLNHX/zKWKsGdlhVpNSGuRU3/QZkr
	 zdbjPgJUxDJWYODbyLDxidkFmL9yxeSTvHzaU988PVScxjxVGTRKV76QZiguWiRUbj
	 uqVIhah/6+MpZ3r7kAXRX0N0fapUW7p1ydMk9eanGkMmy0vVxlfZu395Pdi+OqRJ3Q
	 yhRENYUrgte6hT8aFIE3NbAqEsicfMbrxqxPcRchtqQriuzjEAcqj/6NgQweJ3Lp6E
	 tFiJeYywOvYJA==
Date: Thu, 10 Jun 2021 18:49:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    osstest service owner <osstest-admin@xenproject.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [xen-unstable-smoke test] 162597: regressions - FAIL
In-Reply-To: <E28F5F88-7D8A-46C1-89B8-9841071778D1@arm.com>
Message-ID: <alpine.DEB.2.21.2106101644340.24906@sstabellini-ThinkPad-T480s>
References: <osstest-162597-mainreport@xen.org> <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com> <E28F5F88-7D8A-46C1-89B8-9841071778D1@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Jun 2021, Bertrand Marquis wrote:
> Hi Jan,
> 
> > On 10 Jun 2021, at 12:32, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 10.06.2021 12:50, osstest service owner wrote:
> >> flight 162597 xen-unstable-smoke real [real]
> >> flight 162602 xen-unstable-smoke real-retest [real]
> >> http://logs.test-lab.xenproject.org/osstest/logs/162597/
> >> http://logs.test-lab.xenproject.org/osstest/logs/162602/
> >> 
> >> Regressions :-(
> >> 
> >> Tests which did not succeed and are blocking,
> >> including tests which could not be run:
> >> test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574
> > 
> > This now being the 3rd failure in a row, I guess there's a fair chance
> > of there actually being something wrong with ...
> > 
> >> commit dfcffb128be46a3e413eaa941744536fe53c94b6
> >> Author: Stefano Stabellini <sstabellini@kernel.org>
> >> Date:   Wed Jun 9 10:37:59 2021 -0700
> >> 
> >>    xen/arm32: SPSR_hyp/SPSR
> >> 
> >>    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
> >>    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
> >>    See: ARM DDI 0487D.b page G8-5993.
> >> 
> >>    This fixes booting Xen/arm32 on QEMU.
> >> 
> >>    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >>    Reviewed-by: Julien Grall <jgrall@amazon.com>
> >>    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> >>    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> > 
> > ... this. My Arm-untrained eye couldn't spot anything in the logs.
> 
> I am not sure I am reading the log correctly, but do I see it right that dom0 started and it then failed to start a guest?

Thanks, Jan, for bringing this to my attention.

I am not an expert in reading OSSTest logs. From the following:

http://logs.test-lab.xenproject.org/osstest/logs/162597/test-armhf-armhf-xl/info.html

I understand that Xen booted and a DomU was started. However,
"migrate-support-check" and "saverestore-support-check" failed. Is that
correct?

If so, it would be really strange for the SPSR_hyp/SPSR change to cause
the problem, because I would expect Xen to hang at boot, before Dom0 is
started.


I don't have any ARMv7 hardware to try to reproduce this issue, and
ARMv7 is most certainly required (ARMv8/aarch32 won't repro).

Could someone more at ease with OSSTest than I am arrange for a run with
this commit reverted, to verify that it is actually the culprit?



In any case, I tried to figure it out. I guessed it could be a compiler
error. I followed the white rabbit down the ARM ARM hole. I disassembled
the Xen binary [1] from the failed job. "msr SPSR, r11" is 0x0026a38c.

The encoding should be at B9.3.12 of the ARMv7-A DDI 0406C and F5.1.121
of the ARMv8 DDI 0487D.b. Unfortunately it doesn't seem to match either
of them, and I don't understand why.


The "mrs r11, SPSR" is generated as 0x00262ecc. That should be described
at F5.1.117 for ARMv8 and B9.3.9 for ARMv7. It also doesn't seem to match.


I guess I am looking at the wrong encoding, but I am not exactly sure why.



[1] http://logs.test-lab.xenproject.org/osstest/logs/162597/build-armhf/build/


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 02:21:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 02:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140127.258982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrWnO-0003JK-R7; Fri, 11 Jun 2021 02:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140127.258982; Fri, 11 Jun 2021 02:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrWnO-0003JD-O9; Fri, 11 Jun 2021 02:21:18 +0000
Received: by outflank-mailman (input) for mailman id 140127;
 Fri, 11 Jun 2021 02:21:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrWnN-0003J3-Bv; Fri, 11 Jun 2021 02:21:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrWnN-00021f-5f; Fri, 11 Jun 2021 02:21:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrWnM-0003YN-Qm; Fri, 11 Jun 2021 02:21:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrWnM-0003Ul-QG; Fri, 11 Jun 2021 02:21:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=J5QV4Pd9dGls7qGowo77E1k2MZV/5kOo05iFGgWycxw=; b=Gw9/nZ8djNJu+mfiZkUntMtVqq
	cH2vCejc6iRRD9xko/g6r808oL2J/mKVA+r0JUP9bfpa3K3ZGPexKKoQKXQMyahpLqW9ufOohD7eA
	jD3/9WqBdKv9PqonQMXhEXx2e52OQVGc3PR5sMwnQAYFqIN1SItdUd0wkMi184HIoFpM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-4
Message-Id: <E1lrWnM-0003Ul-QG@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 02:21:16 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-4
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Bug not present: 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162627/


  commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Author: George Dunlap <george.dunlap@citrix.com>
  Date:   Thu May 6 13:38:02 2021 +0100
  
      SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
      
      The support status of 32-bit guests doesn't seem particularly useful.
      
      With it changed to fully unsupported outside of PV-shim, adjust the PV32
      Kconfig default accordingly.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: George Dunlap <george.dunlap@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest --summary-out=tmp/162627.bisection-summary --basis-template=162533 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-4 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 162556 fail [host=huxelrebe0] / 162533 [host=fiano1] 162475 [host=godello0] 162422 [host=huxelrebe1] 162385 [host=godello1] 162343 [host=albana0] 162337 [host=chardonnay0] 162330 [host=godello1] 162325 [host=albana1] 162282 [host=elbling0] 162276 ok.
Failure / basis pass flights: 162556 / 162276
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e4fee66043120c954fc309bbb37813604c1c0eb7 5ead491e36af6cb8681fc1278bd36c756ad62ac2
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299 5ead491e36af6cb8681fc1278bd36c756ad62ac2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#57f68dfd2d111a2ad381df740543c901b41f2299-e4fee66043120c954fc309bbb37813604c1c0eb7 git://xenbits.xen.org/xtf.git#5ead491e36af6cb8681fc1278bd36c756ad62ac2-5ead491e36af6cb8681fc1278bd36c756ad62ac2
Loaded 5001 nodes in revision graph
Searching for test results:
 162276 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162282 [host=elbling0]
 162325 [host=albana1]
 162330 [host=godello1]
 162337 [host=chardonnay0]
 162343 [host=albana0]
 162385 [host=godello1]
 162422 [host=huxelrebe1]
 162475 [host=godello0]
 162533 [host=fiano1]
 162556 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e4fee66043120c954fc309bbb37813604c1c0eb7 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162601 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162617 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162611 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e4fee66043120c954fc309bbb37813604c1c0eb7 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162614 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d21121685fac829c988e432407fb0e4ef9b19331 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162615 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 455790573d3bbad6d5a1bb7e9d28b6dd71075693 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162619 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162620 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162622 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162624 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162625 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 5ead491e36af6cb8681fc1278bd36c756ad62ac2
 162627 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13 5ead491e36af6cb8681fc1278bd36c756ad62ac2
Searching for interesting versions
 Result found: flight 162276 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d21121685fac829c988e432407fb0e4ef9b19331 5ead491e36af6cb8681fc1278bd36c756ad62ac2, results HASH(0x560960afdb60) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895a\
 f2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 5ead491e36af6cb8681fc1278bd36c756ad62ac2, results HASH(0x560960af9cd0) HASH(0x560960b0ee18) HASH(0x560960b0ade0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299 5ead491e36af6cb8681fc1278bd36c756ad62ac2, results HASH(0x560960af\
 5398) HASH(0x560960aeb160) Result found: flight 162556 (fail), for basis failure (at ancestor ~500)
 Repro found: flight 162601 (pass), for basis pass
 Repro found: flight 162611 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1 5ead491e36af6cb8681fc1278bd36c756ad62ac2
No revisions left to test, checking graph state.
 Result found: flight 162619 (pass), for last pass
 Result found: flight 162620 (fail), for first failure
 Repro found: flight 162622 (pass), for last pass
 Repro found: flight 162624 (fail), for first failure
 Repro found: flight 162625 (pass), for last pass
 Repro found: flight 162627 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Bug not present: 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162627/


  commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Author: George Dunlap <george.dunlap@citrix.com>
  Date:   Thu May 6 13:38:02 2021 +0100
  
      SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
      
      The support status of 32-bit guests doesn't seem particularly useful.
      
      With it changed to fully unsupported outside of PV-shim, adjust the PV32
      Kconfig default accordingly.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: George Dunlap <george.dunlap@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
162627: tolerable all pass

flight 162627 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162627/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-4     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-4                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 02:55:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 02:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140136.258998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrXKU-0006V1-JB; Fri, 11 Jun 2021 02:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140136.258998; Fri, 11 Jun 2021 02:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrXKU-0006Uu-Fs; Fri, 11 Jun 2021 02:55:30 +0000
Received: by outflank-mailman (input) for mailman id 140136;
 Fri, 11 Jun 2021 02:55:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrXKS-0006Uk-RH; Fri, 11 Jun 2021 02:55:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrXKS-0002a4-Kx; Fri, 11 Jun 2021 02:55:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrXKS-0005yl-Es; Fri, 11 Jun 2021 02:55:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrXKS-0001cb-EL; Fri, 11 Jun 2021 02:55:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kcBk3QPwM/0qJSwmjRWlJ6RFt3+9LRzVNfRlxCWMzkk=; b=CXacyWAIUgP3oTAeZFhauoXfb/
	3A/DR6kyDJ/KHmTB1jqN9+GMrMdgrYUD9XpqFyY0rTq93rDhKL8UzaGgLqx3+q1MdvHagnOFRQq5Q
	xkVnWNWII6BA5K8aQfJD5MBzdJGwZ9dXCEDgxD2VmYY38E3hT0TxrJJWc8vJXmc6SxuY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162626-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162626: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start:fail:heisenbug
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2bb17a45b1814b0b6aa4646eff58e16f876281fd
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 02:55:28 +0000

flight 162626 xen-unstable-smoke real [real]
flight 162628 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162626/
http://logs.test-lab.xenproject.org/osstest/logs/162628/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl 18 guest-start/debian.repeat fail in 162618 REGR. vs. 162574

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          14 guest-start                fail pass in 162618

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162618 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162618 never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2bb17a45b1814b0b6aa4646eff58e16f876281fd
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days    7 attempts
Testing same since   162607  2021-06-10 15:00:30 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 05:01:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 05:01:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140161.259036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrZIV-0002g8-3l; Fri, 11 Jun 2021 05:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140161.259036; Fri, 11 Jun 2021 05:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrZIV-0002g1-0R; Fri, 11 Jun 2021 05:01:35 +0000
Received: by outflank-mailman (input) for mailman id 140161;
 Fri, 11 Jun 2021 05:01:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=axIZ=LF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lrZIT-0002fu-QZ
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 05:01:33 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7413d49-2c95-47a9-8df8-7a2c5b92f8e3;
 Fri, 11 Jun 2021 05:01:32 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E8876219A1;
 Fri, 11 Jun 2021 05:01:31 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id BA37C118DD;
 Fri, 11 Jun 2021 05:01:31 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 7aqIKyvuwmBbUAAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 11 Jun 2021 05:01:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7413d49-2c95-47a9-8df8-7a2c5b92f8e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623387691; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3TOCMHxRSWMQmIL1qFm7wVsPArCv6NvKRnxX4ESC9yE=;
	b=dKDM+/tMFSjSPmj46Lea4DSY9OBD86nVtaTQLMfsl2xH6oM66GDOCO/dERBTnd34iqbmnP
	RgmoOTqX/F9N5GdMcT/uwmlvcQrbG4EjqTZ6wcrFGOWcvzxQmjK85fwd39zeo+InKqelzF
	g0Fjg3pmVTROo3cEO2HI/u1fQ8+qwvs=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623387691; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3TOCMHxRSWMQmIL1qFm7wVsPArCv6NvKRnxX4ESC9yE=;
	b=dKDM+/tMFSjSPmj46Lea4DSY9OBD86nVtaTQLMfsl2xH6oM66GDOCO/dERBTnd34iqbmnP
	RgmoOTqX/F9N5GdMcT/uwmlvcQrbG4EjqTZ6wcrFGOWcvzxQmjK85fwd39zeo+InKqelzF
	g0Fjg3pmVTROo3cEO2HI/u1fQ8+qwvs=
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit for
 xenstored
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210608055839.10313-1-jgross@suse.com>
 <20210608055839.10313-3-jgross@suse.com>
 <20210608183833.023551f4.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
Date: Fri, 11 Jun 2021 07:01:31 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210608183833.023551f4.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wazqPFXhfvEd0jbUHsYSycV4QL1Ow3Tlm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wazqPFXhfvEd0jbUHsYSycV4QL1Ow3Tlm
Content-Type: multipart/mixed; boundary="9Ap1prkew4sek1pfrNiZD87zztnuv6s44";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit for
 xenstored
References: <20210608055839.10313-1-jgross@suse.com>
 <20210608055839.10313-3-jgross@suse.com>
 <20210608183833.023551f4.olaf@aepfle.de>
In-Reply-To: <20210608183833.023551f4.olaf@aepfle.de>

--9Ap1prkew4sek1pfrNiZD87zztnuv6s44
Content-Type: multipart/mixed;
 boundary="------------E4AB0010D6D67D200D16945F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E4AB0010D6D67D200D16945F
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.06.21 18:39, Olaf Hering wrote:
> Am Tue,  8 Jun 2021 07:58:39 +0200
> schrieb Juergen Gross <jgross@suse.com>:
>
>> +#XENSTORED_MAX_N_DOMAINS=32768
>
> This will break fillup.

Why? You realize that above is a comment just documenting the default?

> Provide an empty variable like it is done for a few others in that file.

I'm following the pattern of basically all variables in that file, BTW.
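
[Editor's note: a minimal sketch of the convention under discussion, not
the actual patch code. In SUSE-style sysconfig files a commented line like
"#XENSTORED_MAX_N_DOMAINS=32768" documents the built-in default; the
consuming script falls back to that value when the variable is unset. The
variable name is from the patch; the fd-limit arithmetic and headroom
constant below are illustrative assumptions only.]

```shell
#!/bin/sh
# Sketch: how a start script typically consumes such a sysconfig variable.
# The commented default stays a comment; the script supplies the fallback.
# In a real setup this value would be sourced from /etc/sysconfig/xencommons.

max_domains="${XENSTORED_MAX_N_DOMAINS:-32768}"

# xenstored needs roughly a couple of open file descriptors per domain
# (socket, event channel), so derive the fd limit from the domain count
# plus some fixed headroom (factor and headroom are illustrative).
fd_limit=$((max_domains * 2 + 100))

echo "$fd_limit"
```

With the variable left unset, the fallback applies and the sketch prints
65636; setting XENSTORED_MAX_N_DOMAINS in the sysconfig file overrides it
without editing the script.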


Juergen

--------------E4AB0010D6D67D200D16945F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E4AB0010D6D67D200D16945F--

--9Ap1prkew4sek1pfrNiZD87zztnuv6s44--

--wazqPFXhfvEd0jbUHsYSycV4QL1Ow3Tlm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDC7isFAwAAAAAACgkQsN6d1ii/Ey91
BQf9EHtbyf8KFwNyaez2zliwdZyC7lnl4kwjf7tt1OPhR1AbGirmygn0JhedmZwoV2YD+KnvCXhB
uUGKf9gGKbaJyDR2vmAwimz7xxEOZQ+iD7j0ExGQdBfCR1xD+5ql5Bna1wpkMuJbmYGott+ZXSYL
6yrltZ0f8DFJs3koCES0cL7SWMUggXn1jOlN+1JRdOsN4RtMLtBe6YK4LOMjqdQiAM8r0zhYuMfJ
xrWTRwJYdNtDqdr+Gj0Uzb7u20ccLSKcbJ6On0ZP94kzynnD5+jaL40mV7B/9ywlgM09O5iGJAF2
/JjDMYzECyCNoWWcs/7dyAF+vPNWSkZCHR+9anfjIQ==
=NbFP
-----END PGP SIGNATURE-----

--wazqPFXhfvEd0jbUHsYSycV4QL1Ow3Tlm--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 05:35:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 05:35:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140169.259048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrZpU-0005uc-Ls; Fri, 11 Jun 2021 05:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140169.259048; Fri, 11 Jun 2021 05:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrZpU-0005uV-Hx; Fri, 11 Jun 2021 05:35:40 +0000
Received: by outflank-mailman (input) for mailman id 140169;
 Fri, 11 Jun 2021 05:35:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrZpT-0005uL-OD; Fri, 11 Jun 2021 05:35:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrZpT-0005mM-GH; Fri, 11 Jun 2021 05:35:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrZpT-0004AW-4z; Fri, 11 Jun 2021 05:35:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrZpT-0000g8-4A; Fri, 11 Jun 2021 05:35:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9740yjzE2He9zjtQRAPJ5dw1pqDG3y20hGNS+6Z//tk=; b=0uayLSJ/BlMGStMTs5T90ZYpbu
	qzADyWCkY9/4dyIbnglfhvoAUbXOqT8RW298kTRxXdngei25yM3rDk1zkObT0bAtsIzrS6MOkBALw
	FMpyKfER8Nh4AS9sbFcRp/9nEV2+8MFHxg5waR9Fb5wnYKAoq5Hlv9K82wMYsblBKznY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162600: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3e09045991cde360432bc7437103f8f8a6699359
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 05:35:39 +0000

flight 162600 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162600/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 162533
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 162533
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 162533
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 162533
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 162533
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 162533
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 162533
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  3e09045991cde360432bc7437103f8f8a6699359
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    3 days
Failing since        162556  2021-06-08 22:39:08 Z    2 days    2 attempts
Testing same since   162600  2021-06-10 09:41:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 621 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 05:46:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 05:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140179.259062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lra0A-0007ND-QV; Fri, 11 Jun 2021 05:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140179.259062; Fri, 11 Jun 2021 05:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lra0A-0007N6-M9; Fri, 11 Jun 2021 05:46:42 +0000
Received: by outflank-mailman (input) for mailman id 140179;
 Fri, 11 Jun 2021 05:46:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6eg9=LF=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lra08-0007N0-Rc
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 05:46:41 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3ded2e1-8f86-4fff-8f94-16c1b53b8a90;
 Fri, 11 Jun 2021 05:46:40 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5B5kYan8
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 11 Jun 2021 07:46:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3ded2e1-8f86-4fff-8f94-16c1b53b8a90
ARC-Seal: i=1; a=rsa-sha256; t=1623390394; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=PusxxvblkP6TtULIxAwlJD5W5L72yD74vJNyl/UXAaGXSILdDXKg7fYElqay47LnQ3
    UUjGGQJaq4TS9VMTcTAh1RtdRvbE3gxoqd5qcbBvZ0XVMEGaHKAnH2IUKbq7JXmmBHmM
    AzhWvC1Px1DHG0V3cwQiph615H41u3Yo5kS/39gnrFrGRd/n68NVwmxpjmfP+/C/w+uQ
    UCDj+5RPe4CSntkjclKb/etbtWddqmnoup1F68okcVsHJbydoQuB8HlgtttkyhTfH49J
    0rQg4mWTpuzb9SS9yfJsUdCXQCl4PH+gsgSPTK78HK9TlPHGpasFQacKkhV4Grmm2F79
    Z/2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623390394;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=vjMzR4FeZm4cvFakoMghiFq6SXk0IuoSP8kIiQZsVEs=;
    b=j8IRvA9HFk4dNL+2QWnCV6Kig1jTrTX8dSXxRugfTqK+NMSJ1lh9uYtCtPF0/jkbcd
    AqmAxj3IOLci6U8TXAkaDjJC9Zr9/RsNH8ZYTb/rQAw8UWS0/FIBnVKzT8jD+0dR381L
    RwtlnZf/V02dR9teFAv3jS/ncd8zYytg9UhEnEU6GPMejF9lM0UpZnwIVuiHbNn7Kmqa
    H06ORsBIsh7kOa1H7BoZYPnLn7qJXHjg8NTgIyt+VNXDQnnCg7wJ5iVVNSMkSoXWTQV+
    ddK+SmG7j2vXY12B8LnK6mfSe05fSd74PIXkx2sSgsevWft1Ev2x6DSYTiyzaZWabDI1
    FJEA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623390394;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=vjMzR4FeZm4cvFakoMghiFq6SXk0IuoSP8kIiQZsVEs=;
    b=G3v8OKC0KmHA0vpKYm0/Gt2VZprUAy4RQpCE+h8IY1rShSxNhnNd9FFGgH0kJiYaio
    5S5+tGH9xAP0E/Zl/EcnfRJPpGUhSGiQHw+Q7udwUmx7wW3UTWyyhzQherPdGdeCmxHb
    jGOjwjG2l4Mb4Fbw4Jufw8cBSzP/WfV0aiKmEijBpxrA6rDB+GpfO5ZPezN6aLiQkpYb
    XM1STOroXQIBgSNeMZGtvsoINTSNhtaLddLZULtm2um28qqwCMI3dzSgC/NwHGCpmYk/
    SFPbJXs8yVUXG/RF6I7vHjsd7xDTfDAwzk4iAghSPevKCyyEUzzbMiznIW0aS1swtBT9
    zU8Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Fri, 11 Jun 2021 07:46:16 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit
 for xenstored
Message-ID: <20210611074616.2a4b96fb.olaf@aepfle.de>
In-Reply-To: <eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
References: <20210608055839.10313-1-jgross@suse.com>
	<20210608055839.10313-3-jgross@suse.com>
	<20210608183833.023551f4.olaf@aepfle.de>
	<eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/_3zU0s57d+DGRnJ1RRL3E5f";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/_3zU0s57d+DGRnJ1RRL3E5f
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 11 Jun 2021 07:01:31 +0200
schrieb Juergen Gross <jgross@suse.com>:

> Why? You realize that above is a comment just documenting the default?

That depends on the context. See https://bugzilla.opensuse.org/show_bug.cgi?id=1185682
for a reason why it should become an empty variable. But yes, we can patch
that one too.

Olaf

--Sig_/_3zU0s57d+DGRnJ1RRL3E5f
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDC+KgACgkQ86SN7mm1
DoA1CA/+P2LL79vAacNURWmh/ed5rqoSlrR4Xfi2XZJL3OdmZEk19I2Z6KBU9jbp
6zE/JHGBkR8AEpbwXJipbiM/RjlMfXRPWK8l93+7l4JyfgGrVUGrzbAvTmq521pO
qWCjPv2FUYy7Oo8d9B7ul+SpOGMkEz2Zrf4/1kKhqC4IhrMfVVB/TNX6UGeogGH/
jl/fsjQNObseXqVj/Ep5IryRvPxUpNPjIMyvPsqKQwdWA9+EYHe+33PghlCpdaWk
d5qRad46HptnhG3eHh6rDQDncYQXUDnqOHgvrhVOpnzwl5FV+L21Wg5ZC9gCljva
hIJ6sLfrV4w3wFpkPi00HIr7MxGk+TSMUEKisqBTyPdz7Yk0/5AXoWR/Io73HMGp
4i/mURj6NXPonzVRBG+18Z3ffEekbUL+9tlhOFkx5b0rltM1le8mCjyRaBrIFdiR
sdKV8rb6dkAyowLdpIRn0ZNva4euEgYa+BwQ381ziNvQV4U1qnaYImSG2OW63hqd
waVFuxCQJsWtW1hpkvViFUkczjvcH8gOHS5Lc9Lk+jMQo6zDpKYNWWH4a+YU0yQI
uPxQ0QTkbHO735EOZZWDr3jBRJpRg9jl/TI2lS+xR5OxITPJ39uO03RXlYCGXlrh
qcrrtno0E9lFAgdwVj23pdWYQs2YwAc3ps95KMNCnNt6C1djr+w=
=Yuqa
-----END PGP SIGNATURE-----

--Sig_/_3zU0s57d+DGRnJ1RRL3E5f--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 06:41:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 06:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140189.259079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrarM-0004g0-V1; Fri, 11 Jun 2021 06:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140189.259079; Fri, 11 Jun 2021 06:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrarM-0004ft-Rw; Fri, 11 Jun 2021 06:41:40 +0000
Received: by outflank-mailman (input) for mailman id 140189;
 Fri, 11 Jun 2021 06:41:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrarL-0004fn-Sl
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 06:41:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5fb289a0-1694-43cb-9aec-c75ecf00e5a8;
 Fri, 11 Jun 2021 06:41:38 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2052.outbound.protection.outlook.com [104.47.13.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-11-xqd0Q1zHP-yMY8HIjc9JOg-1; Fri, 11 Jun 2021 08:41:36 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4352.eurprd04.prod.outlook.com (2603:10a6:803:4a::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Fri, 11 Jun
 2021 06:41:34 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 06:41:34 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0150.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1b::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Fri, 11 Jun 2021 06:41:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fb289a0-1694-43cb-9aec-c75ecf00e5a8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623393697;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ObbI7/45YPQat7fozDagLx5gV6orzZ+yCBz+NPG1YyA=;
	b=PF5Jhr1D6TWHAMcrCrgU3kY3aFib4AfzHT1ICljHpg0joR30rjQjBPIrUvlBkp34/k7js5
	/QsOtgKKXvzhte02OSGhiR73HEQ5le4X1zEFu0RfChkNR5iGOTCE6T+wumhk02gPbY9QKY
	3RSA1Ga1tVT+/OJFskbDSkDt7MIZyQ0=
X-MC-Unique: xqd0Q1zHP-yMY8HIjc9JOg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KQ0LkItpcsHILgyZa4IG6gsQsVojfkt6Ntm118dFKTzg7Q4hIModKUsbACUTVa34sgfn+aKijHoRAIvIzYq9uyPA5X3WAG2/pW7OWAxTGdZN/1GtE/q4tYabg8QY0WHfBhGaNC54B8/XIj/XofMfbcb2nxu0bqPstLm0nW8S+Oxw8Sf6JjsHdq+m1wWU8pWs2xtl6SeTcfZLxd5zLyRPo3J0DT9L/VeS+mTfC+HzLQu3KI8Mod61U9kj+V1kRfB8gzRckx38p3Oy3oeRcykzQqy+yWeNYRKhFQ5cSEKySz7CNk0ZHQvDf7YlXZTb959cvVSI+5B+Mm7D2ZntdnHR1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ObbI7/45YPQat7fozDagLx5gV6orzZ+yCBz+NPG1YyA=;
 b=MTvTVHSkUnJ1OfJKcsYncS+iMwnUj+ceIcl+fIBaVFH6PcZ1p+VxMKWB/8fNhnqv1eQxBmZzGr+2dTrsd08MMFnIixrxql6eSDD3PojYNPRS2aGSVATMpTZWctv2KacKENIuR6L1xaw7OVWueI+Pzb2aPpGI8iNdGcRAtIO/Qa4AEgj1gCtg/g+4+0ntQ6dAqPpWH4OoEX9aoiN0yg6rb6aZ5UTw222p8RWIO8cYMPjYn6PFDiKMHGH3AC65l6S75CW4lqNtM89uaCSpjGHAdPM/YItGdxPsK/X3lVVlRvjDJAUxgdqMP49JIY+TF55eQHX7VR/a8Fcgve7qnbIVmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable bisection] complete test-xtf-amd64-amd64-4
To: Ian Jackson <iwj@xenproject.org>
References: <E1lrWnM-0003Ul-QG@osstest.test-lab.xenproject.org>
Cc: xen-devel@lists.xenproject.org,
 osstest service owner <osstest-admin@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1fb151ef-e04a-f244-6c2a-5b562893a2a9@suse.com>
Date: Fri, 11 Jun 2021 08:41:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <E1lrWnM-0003Ul-QG@osstest.test-lab.xenproject.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0150.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1b::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eb5a6889-08b6-4842-6f77-08d92ca3f438
X-MS-TrafficTypeDiagnostic: VI1PR04MB4352:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4352899DEA3305A2010CA43AB3349@VI1PR04MB4352.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WSlgIkXLxCWoLnP4haNxL6lgxzHIzODTJ5M+dO6Z54LyX/0TlLqPoIm6h+epesEyILWsukJrGIxJ59ia6xlMpQ7mOgctooKIJTtpNmTSNuCtFYH1k0vqVbGwVALQXXvd711fXd+69KflmouwFU5d9jF/vYKnLTUGtStrtG8g1w2p/NmieEnGYN+jOCV976Hcl71wLQ186wSWFzrSjeQ9xkmg+cmhD4PT+/lUBpjM0dz3mkbz/J3OgQXzMzazGj5LPXv5F0MZVUtipXP9w9MibfWfuZudKkZg8mqYLpfpJDmiQ0VRF9U8ej7MmSgJH44IkoeFwxXWJK8+ode69uNdG9rofGmHcGXW5qSE7fIcKEgTlLbrr+J7SJOb4k3Y4q9fYDud1L1kqjYFOrrjcPGzOIDgmqgPNRhTJ0/HGDV8EWViNex8ptNwNAWKfv1V5piaVWr2Fbdn5FV9/Xyc7fYASBrbegdfMlezPfJBQy/UHMEaScoflxc+lmd0GLC6IQGwkRdlHkjuMMWhZDNyWc2Y5hZPSmRd9Dy7XcAGfTs+lLlvpT4KqpPpoT+J/O4YKKU8q++RJmYWnHpgJFQh+LqPW0oOzSGRx6Hs8N7rdtQSG2IbeQCaSLa0ncIcWHw7KDrlTuK+ddFdUiJU0D0CrI2zDjpBDHilCwgg3LUTH3dwIBi76tas2XBFBq+zvLtHvwuhfh4z//ISY7UN5kShlj+7V2LWxEbQfRnzGzZqjLqOQqWARb4ct++3v7YKHv3tfPBKimaSsE4TkYMTsnzgoqsao8jfb2CSFvUg1GLQwXoxwbYae0ccj3E5s81ku59eqnp/
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(136003)(376002)(346002)(396003)(366004)(6486002)(83380400001)(38100700002)(2906002)(26005)(8936002)(4326008)(956004)(36756003)(2616005)(316002)(31696002)(86362001)(16576012)(53546011)(16526019)(478600001)(8676002)(186003)(966005)(6916009)(66946007)(66476007)(66556008)(31686004)(5660300002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UHFSd1BjamVSbVJvWHlEeitZNUN3aWJxWUsxVmJodm1nSXFVTEFxUlN6b256?=
 =?utf-8?B?dmFTT3NvakhpYWVLeTl0clN2aHJRWFIxSkVaUWF3aHF5K3R5SUk3cTVDQWJ4?=
 =?utf-8?B?NU9mSW9LMVI3UE5rMnAxQ0IrQnZtNmUvdEJtcDRjWHNoV08rZmx2bnNzMVdC?=
 =?utf-8?B?YU1wZitVWGx2OE8zcG1PbjlmcWxoanhKbUtBeWkvN0J5c1U3MEpKbU5ORWtI?=
 =?utf-8?B?aWkwM2RqRXB3T1BlbkVqbXpKNWJtUEdhMXRWYjMvZDBDQUM0VjJOcnFkalRt?=
 =?utf-8?B?dmhuWXR5bGJFdHh0aHliMmlGOXhMZk5IaFBoZGJKWnk3d0djODhOaG9uSHUr?=
 =?utf-8?B?VVB6Y0hDdExuMmcvbTBHbEVVZ1VTWjdBR1dDdlFOSUhzc0NtUEtJVkVoNlJR?=
 =?utf-8?B?bXMxWlVrcHl1S1EycUFVNmk4RkRtNzVHQkFKTU9mOG1WWXZRbFZDQ0xLMHVT?=
 =?utf-8?B?MHR1MG15Y0wwb3dqdW9WUGVvcitNUXJLd240RmJaYzVtMTJ0WjRpYVR0T0s4?=
 =?utf-8?B?UUl1MDJ1ajlMUmNkeGJhbnQ1YUlPMDRiZ2J5eVJobFhpSG1xMlk4UDBBVUsy?=
 =?utf-8?B?QTd3MXNiTUoyWHdpT0pobzYwK0tJL1FOR1pxT1Y3aHRNbnh4NnFLZldSMlIw?=
 =?utf-8?B?L0NHd3VnWktTdnNVSUNubTZoVlhEbDlDVFBQK3dTRFY4U2diQnFDZkY1eXh1?=
 =?utf-8?B?WXFyUjVyb05vY1JTdEtWSk5jOFU2dFZoZGJjcG45Qnp3ZjU0eGxybzFnSEh6?=
 =?utf-8?B?MHFSOTBBWUJWTFl0dlhQUlZkRUg4SjdWRGlPaStqU2hlRW93eWNQbUIwbGpa?=
 =?utf-8?B?UDlCcVhWQU45aFdkWUVHQkdVSHNCQmR3QUUvS2VmbXkzb3M4aUt6ZWZJY1JZ?=
 =?utf-8?B?Q2xQMGxDU3NKRER6T2xoZ0NtRnMrOEYzTDdpekJyNElNTHg2ZFNQUG5uMjdF?=
 =?utf-8?B?SktncUhRQ285UnFmTW5lK3NKSlhKRHI5YzBVWWk1WFVLa2liUEVhcmlpWU9C?=
 =?utf-8?B?WisvRVdwUGVIY0NDYjJvYk9vTGVmTE00bzRjdEVKeHJtYUxoeUlBaWpDRUdJ?=
 =?utf-8?B?bS9tUzlEdXRxcEMwTWtjUTh4OWRUcEx3M0ZtQjl5bU9FVVd4NXVaZEc1bTE3?=
 =?utf-8?B?bXpnVVRKcHNaUDBjWmdZOFRBdzY5Rmlta24zcXZsV2dBTHBMRGZxdGl4SzUy?=
 =?utf-8?B?ZERIaXk0WUJqWkdxMTdHSldBaWRaSUJzejFmZlNlQk5rODNHOWNlQmNjTXdE?=
 =?utf-8?B?aXFNbTdGaEtkV0RmZHhtTlcxdTM3WEFrVmlLbm95TGpYdzh0MnRGOXZ5czhi?=
 =?utf-8?B?aTBUMEM3NVlHbXNMNkFndkZINDVkelhWV0FVaWJ1Zk1tQ09Oejlvb0prZGNq?=
 =?utf-8?B?dVpzRk5oeUVEVFU1a25DWE9mbm5OeEhyejYyeVFjMG5hdDVROURYNHVUMmVM?=
 =?utf-8?B?QUVRTTNGMWJkMlUvNU9FTHFnR2R3eW15WnpYWVpZUUkvL0xVNVdIRHMrTjF0?=
 =?utf-8?B?cWVsSDdQUXhJK09IakhKZ1hkajZKcnRQNlRJZ3ZZNytNanRIUER2L1UyeDVv?=
 =?utf-8?B?NVlJamNGVnBzbzRXWlJ4OHoxZlVPN1BmZG1mSmJncnNWUjNqR09FRkdqR29w?=
 =?utf-8?B?ZDFSZHh2UlBpVGNtK3M1VHVWVk9CdWRTTnAwTlVCTm9pQnpCb2NhK0g2TjdP?=
 =?utf-8?B?c3hjU2xKS0FpeFNHcS8vbkh6cTdsRjAwMGZQY1Z2VmNPMHFUMWlvMWEwdnEr?=
 =?utf-8?Q?tedIvI0j6eleLzK7KEg201Fray5DSb3szTtDN4w?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eb5a6889-08b6-4842-6f77-08d92ca3f438
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 06:41:34.0095
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kyyQiXkjMfxgdp23N68bhGi6z0rTg8V8cJYoeFm51D6aymoZQiEl6IM8VXZgF/y7gGc24RRKigO0XQbbopC4Ag==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4352

Ian,

On 11.06.2021 04:21, osstest service owner wrote:
> branch xen-unstable
> xenbranch xen-unstable
> job test-xtf-amd64-amd64-4
> testid xtf/test-pv32pae-selftest
> 
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
> Tree: xtf git://xenbits.xen.org/xtf.git
> 
> *** Found and reproduced problem changeset ***
> 
>   Bug is in tree:  xen git://xenbits.xen.org/xen.git
>   Bug introduced:  1a0f2fe2297d122a08fee2b26de5de995fdeca13
>   Bug not present: 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162627/
> 
> 
>   commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
>   Author: George Dunlap <george.dunlap@citrix.com>
>   Date:   Thu May 6 13:38:02 2021 +0100
>   
>       SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
>       
>       The support status of 32-bit guests doesn't seem particularly useful.
>       
>       With it changed to fully unsupported outside of PV-shim, adjust the PV32
>       Kconfig default accordingly.
>       
>       Reported-by: Jann Horn <jannh@google.com>
>       Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>       Signed-off-by: Jan Beulich <jbeulich@suse.com>

As indicated upfront, and as now also confirmed by the large number of
failures in the two most recent full unstable flights, defaulting PV32
to off for non-shim Xen builds means that osstest needs to turn PV32
back on, or all the 32-bit PV Dom0 / DomU tests (and their xtf
equivalents) would need to be disabled. This will eventually be needed
on all branches we backport this change to (I expect at least all
fully maintained ones) which have a PV32 setting (it exists only as of
4.14).

As I don't consider disabling any tests a reasonable course of action,
I think it's going to have to be the .config override. Could you please
arrange for this?

Jan
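
A sketch of the kind of override requested above. The symbol `CONFIG_PV32` comes from the commit under discussion; the tree path and the make target are assumptions to verify against the build scripts actually in use:

```shell
# Hypothetical .config fragment for an osstest Xen build: force 32-bit
# PV support back on even though the default is now off for non-shim
# builds.
echo 'CONFIG_PV32=y' >> xen/.config
# Re-run Kconfig so dependent options are recomputed (target name
# assumed to mirror Linux's Kconfig, from which Xen's is derived).
make -C xen olddefconfig
```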



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 06:45:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 06:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140195.259089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lravK-0005Lc-GD; Fri, 11 Jun 2021 06:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140195.259089; Fri, 11 Jun 2021 06:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lravK-0005LV-D1; Fri, 11 Jun 2021 06:45:46 +0000
Received: by outflank-mailman (input) for mailman id 140195;
 Fri, 11 Jun 2021 06:45:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lravI-0005LN-V9
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 06:45:44 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 348248fc-3f35-4765-81d7-0ea5013ba5f8;
 Fri, 11 Jun 2021 06:45:41 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2051.outbound.protection.outlook.com [104.47.10.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-33-7aVcrUTmMumUuwm6SPF6pQ-1; Fri, 11 Jun 2021 08:45:39 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3119.eurprd04.prod.outlook.com (2603:10a6:802:10::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.26; Fri, 11 Jun
 2021 06:45:37 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 06:45:37 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P195CA0025.EURP195.PROD.OUTLOOK.COM (2603:10a6:102:b6::30) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Fri, 11 Jun 2021 06:45:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 348248fc-3f35-4765-81d7-0ea5013ba5f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623393940;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qmXVtF2Dy72s0C5b8+IksW14Yiitv7bQfNQxpNHOTdw=;
	b=JJJ8FZX1LiLjLeuu5SeBIcLPtH/dRnDeK5l+72Eb8mO3Sf/nrNz4C47PXO7kuh8m0J19K3
	HHBwMShppQNwVrEAjcDYp3n3yYVXQXtpPnCGAFjBoKUZY+2rhcgk0UAMqnw/BwdgrWaXdN
	GadMAclT2HDEz1e4Vp2wdMq+cGTPrE0=
X-MC-Unique: 7aVcrUTmMumUuwm6SPF6pQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IagxxOC8NCjbggPbZV7snHpulWJUdLCvqLZtVJS8sWdOPnUZK3Hw48JU/MKNMuz+3V69BMm73gKZtM6qCaOgVE+6uIgCwCU3lETMJaWvEkYHrKNRAPxjIan1lETvO7tCmX8rBE+3wHhPLwGdv1J8DJNNA+97kLOa8L2nOtPYnF9v4Fz15D+IIRyGzQIl+k8eUz8J/6+PsP50fUu6KdW0V9W1QywKhQLIR0SaAvg115Ieo37k+O5z6varZzOWz6JHjvWJGFa04Rzm10XpvRvdzi4ZLYS3OlXGQLsD+DKgrkwlDkJ5Ebmih34n65CXKLMAzpRXKuzMQfzJ7l/zlReO2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0CnMwqsHXT/LLouNSySeFUh/lzId/eX52WltUzhQgoQ=;
 b=QfqNGBUriTdHbIzUDJD1hcdWLutKNebNSZeTtIf/Z9C5lmyxxHTogBXghr6+vBI8sFIf2M8v9WBa9D5FIotiOvbPrFw0cEhUuTfhWSVBNN2GB8V/RDut77m2ohHXRyNtNfnbrYukCGAm4pU/0IpPeNPY3WZ/+fVvr9sXqKNLiE4GDy7rZ5Vy9V33O7f2vHfSdBCfCRLdIBLqUL2gis3eIVeXPMNpQYH6KZDu/C+h09eoR+DEZ9dMHMkbq/WMuKBRusRzicGvyQURrVg79b54g3i3WEsSbgxbSjtuQbWkgRGz3NoWEg0jYOiQS+DcxD5u85TjS3f/U373bMr9v1Minw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <YMIdbGCpFGZGwLoN@Air-de-Roger>
 <e0f73a05-b027-d0b6-8f8d-a1078dedccf7@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <632d72e8-e794-2637-d5fe-acc52b530875@suse.com>
Date: Fri, 11 Jun 2021 08:45:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <e0f73a05-b027-d0b6-8f8d-a1078dedccf7@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P195CA0025.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::30) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f25838dd-ac6b-44f1-9f87-08d92ca48505
X-MS-TrafficTypeDiagnostic: VI1PR04MB3119:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB3119A77F942E4DB536F57D62B3349@VI1PR04MB3119.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9d3REmjgQX6iRQk3R1QQ4nhOZtWl4Zv4WKddIAQaa1ScgEb3mcRMrHXqu18Bq3Zl1xcXNg39xudMvwwuNGg9w6qKHhW5NdTn9GAAu4DDicmyNtb5SFpVAceQ22iu5eM+msIFeds0MsUfBW36VU6jTL5YEIrB+dWp15Yv38FHwx3M6SasoazSqRSSl3aI8LkgbpbAPj7NDvuvA8mqXnBnhSADmnUvnpXhoNWTUkaWWywB8jTGC8nqv0/Yx2bWblyBaW1i/l+FC8EjuvfcVgyx42Gs2XLCh2GqRPAcQ+ljm85HpMPVWgRIceXmpeN5Tesra2OHg/NTtxPvUtwu061YH1ZHC7ibW6q5stmGUXnF+81yZ2N75akqgVvoMwhmrXUuxheuFMSxwYOqLuRXCiPiiqNZBvUnWo/q3/ZjoIL4cruvOt38AcJUw89YnITBjtMryQIO3B2SBM1sR2KYwS2NDwGx/p+szer8Jea5QIoaaMQ665Pl37horOYuNF4MkrKwwyYO7NNTkWOyyYGQzONfPN/i2wnjV5yVx32hc38DjgRtWmVNyH7kpXSOSa2HfciEIG9v8XjUff72wH5GXr05gh2HlbZTSb6rAYNLMYal9Fo7xRkzyTbE/Ia6KLVTATA0HMyjfiYDiJRttPu+2Jb29JbKaiU2cmTd58627ADkE52TnEUp91a2KkUQ+P82ksIpa4eOCPMXR9oO7iMIZTqBZA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(346002)(376002)(136003)(366004)(39860400002)(8676002)(6916009)(316002)(6486002)(54906003)(16576012)(38100700002)(186003)(16526019)(4326008)(478600001)(2616005)(26005)(956004)(53546011)(66476007)(2906002)(8936002)(86362001)(66556008)(5660300002)(66946007)(36756003)(31696002)(31686004)(83380400001)(21314003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?eh3mzovwA2M3IOf/YUqm0DIRwM0Xa78lmGLIkkOXylTB89Af2ArWAg8DP7yX?=
 =?us-ascii?Q?YtLltsrmUSq7tJVEOIA+4waSREeRw4RpgqNDtxDVixWpk+xjTQ+7CJcPZf1g?=
 =?us-ascii?Q?tpiBoFrJH/rrVaXRsahXrdQn25CLFqloZuFmXIbvDmAhPdq8ORseU2slR17M?=
 =?us-ascii?Q?agvWG916x6f9C58JMmBC9UoKm1+C3uUwC4kyemW159IEe1wCaqYu7P0uTCnd?=
 =?us-ascii?Q?siW8BkiAAlk7d3xcfaKTp+BEsSGtaEyAKGERT9Moqwdsta+NzyNgAw2rjV3M?=
 =?us-ascii?Q?v/vy8cGB8k3S0gO3fmZlW38l5dpglC/ztYyA9FqsxDvnN/2DT45BK8+J+b4t?=
 =?us-ascii?Q?GKVSx+3yOwe8TRoTSICm3nzXW7zjfycj/jmA8TAGuzkKbD4/StwbG/D5GGwP?=
 =?us-ascii?Q?uT3Q9U78CFFrE6JTztwdDN7MfBU0vp24AqFHCpst6g3akfNGXyH6kPV4VMjf?=
 =?us-ascii?Q?4E4AR+XCVrAOSf25mOzGzRcO21cFKC+9U2VAfbMb5B3qmTA5Yy6JcMEgfjMo?=
 =?us-ascii?Q?1KonuHZGoih1D00ujkDEjIb5GShcBV6NkDutdj0VnydIbWGmv/mICh6vZQ/y?=
 =?us-ascii?Q?UNbXcbPTa7k6+2Bc3n29LPZx2vvKKH3nvVbPhAJF2+0l31R8o98L8MQ1bbeN?=
 =?us-ascii?Q?Nw30gTDfBbW5Jl5wluADlMrM/COsDSP+TjaY4uzCOoL3yqFjUpTrDrhtjYW2?=
 =?us-ascii?Q?ZkzhtnslVU6jwh9/rLlU/QVuRdZezqrd8ip2sb4yX0M3IO9Qv8vB6+d8JMoe?=
 =?us-ascii?Q?RCG6jgqu+H1i0EkiXBWygxpPI++qv49BSH7obzlZR3o6NkO/0vA8KxQ0dOPs?=
 =?us-ascii?Q?WM+iagMlEVhBFklJgGUx9fO6k97iXcUSP76BHw7DdAnzacoZl4pgInT0szbS?=
 =?us-ascii?Q?Vh89XkXADb+1cPB5JerpRAjgYZHre/AsUq2+bWvvisJ1XBlKzz+PEsT6SIuy?=
 =?us-ascii?Q?AuMftYzZ/7+bNpHSO0oVXpDlRcTfOpkFkmqtlaXDboJz5lhBSmdgCdi097nU?=
 =?us-ascii?Q?gsU+frpv2l2CcK5hVHq2sH3EuplB7h6D7tb1BM087eiblxhp/jurTkKLweIH?=
 =?us-ascii?Q?wkQ6OJxlugkwR+YHu0uP8tucmSqc9wEydCC2DScde+TI1GHmNV5c59CFw9FM?=
 =?us-ascii?Q?TPdmrtcLAYbInMWpK1NHHZgRIj9sdfj1MhzX/3C1Us7a54t/PZhn2CXfQGsO?=
 =?us-ascii?Q?3X7jOZWCA2MX0myFxvMImkP0pNxibnIfbY4ums7xp+KJ8iMRv+7Db0Qz4XgD?=
 =?us-ascii?Q?TkB1R0oq9jU2tcWyhbAVYJH06gqWJ/5dyqA/L/i8cMwemhQ6gReSABhWyQX0?=
 =?us-ascii?Q?0PkoojIoFAY7UIyAR25ut3/p?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f25838dd-ac6b-44f1-9f87-08d92ca48505
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 06:45:36.8802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ImKbKdgFxg14KSoyufiLEJoV2zp2/CqpkF/gYDKjkuCi9YpkKA+O42XfiLKH53RESjkjYYhm/ZJbjvRf2cDVEg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3119

On 10.06.2021 17:33, Oleksandr Andrushchenko wrote:
> On 10.06.21 17:10, Roger Pau Monné wrote:
>> On Thu, Jun 10, 2021 at 10:01:16AM +0000, Oleksandr Andrushchenko wrote:
>>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>>> On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
>>>>> Hi, all!
>>>>>
>>>>> While working on PCI SR-IOV support for ARM I started porting [1] on top
>>>>> of current PCI on ARM support [2]. The question I have for this series
>>>>> is if we really need emulating SR-IOV code in Xen?
>>>>>
>>>>> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
>>>>> patches)
>>>>> and it "works for me": MSI support is still WIP, but I was able to see that
>>>>> VFs are properly seen in the guest and BARs are properly programmed in p2m.
>>>>>
>>>>> What I can't fully understand is if we can live with this approach or there
>>>>> are use-cases I can't see.
>>>>>
>>>>> Previously I've been told that this approach might not work on FreeBSD
>>>>> running
>>>>> as Domain-0, but it seems that "PCI Passthrough is not supported
>>>>> (Xen/FreeBSD)"
>>>>> anyways [4].
>>>> PCI passthrough is not supported on FreeBSD dom0 because PCI
>>>> passthrough is not supported by Xen itself when using a PVH dom0, and
>>>> that's the only mode FreeBSD dom0 can use.
>>> So, it is still not clear to me: how and if PCI passthrough is supported
>>> on FreeBSD, what are the scenarios and requirements for that?
>>>
>>>> PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
>>>> to work. I however think this is not the proper way to implement
>>>> SR-IOV support.
>>> I was not able to find any support for PHYSDEVOP_XXX in FreeBSD code,
>>> could you please point me to where these are used?
>> Those are not used on FreeBSD, because x86 PVHv2 dom0 doesn't
>> implement them anymore. They are implemented on Linux for x86 PV dom0,
>> AFAIK Arm doesn't use them either.
>
> Well, ARM didn't until we started implementing PCI passthrough [1].
>
> It was previously discussed [2], "# Discovering PCI devices:", and proposed
> to use PHYSDEVOP_pci_device_add.
>
> Long story short, it is not easy for ARM to enumerate PCI devices in Xen as
> there is no unified way of doing so: different platforms implement different
> PCI host bridges which require complex initialization including clocks,
> power domains etc.

Just for my own understanding: If this isn't done by firmware, doesn't
this mean you can't boot an Arm system from e.g. a disk connected through
a PCI-based controller? Host bridge setup is definitely firmware's job on
x86 ...

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 06:46:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 06:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140201.259101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lraw8-0005vf-SB; Fri, 11 Jun 2021 06:46:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140201.259101; Fri, 11 Jun 2021 06:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lraw8-0005vY-OS; Fri, 11 Jun 2021 06:46:36 +0000
Received: by outflank-mailman (input) for mailman id 140201;
 Fri, 11 Jun 2021 06:46:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lraw7-0005vI-4N; Fri, 11 Jun 2021 06:46:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lraw6-00074o-Vs; Fri, 11 Jun 2021 06:46:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lraw6-0007rd-OO; Fri, 11 Jun 2021 06:46:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lraw6-0007JN-Nu; Fri, 11 Jun 2021 06:46:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JypJ9f4NbQgGYqU3Y/6ASbDIN4Vee/Iz58p8dmdvTvc=; b=x8SEFbPunLv40JphStmorH38+/
	c3YevuGgHbA8WqMPdfQi1JokRWXy2YHx9wLJDIuILs+atXhFw81USBgsolHUYlQyyNA0+T69mPoJR
	cSaWkDgsuTOq1aZ3zEnKXazWavWVD7xv8t+yF3RjvGdcS6Pq6W246Mraze4izFWvD3o8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162630-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162630: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2bb17a45b1814b0b6aa4646eff58e16f876281fd
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 06:46:34 +0000

flight 162630 xen-unstable-smoke real [real]
flight 162634 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162630/
http://logs.test-lab.xenproject.org/osstest/logs/162634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2bb17a45b1814b0b6aa4646eff58e16f876281fd
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days    8 attempts
Testing same since   162607  2021-06-10 15:00:30 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 06:53:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 06:53:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140212.259115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrb2t-0007Q5-PZ; Fri, 11 Jun 2021 06:53:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140212.259115; Fri, 11 Jun 2021 06:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrb2t-0007Py-M4; Fri, 11 Jun 2021 06:53:35 +0000
Received: by outflank-mailman (input) for mailman id 140212;
 Fri, 11 Jun 2021 06:53:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sBsJ=LF=epam.com=prvs=6796b971c3=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lrb2r-0007Ps-Kb
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 06:53:33 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 087b4d57-2ac5-4eba-8f58-e4fa0894278f;
 Fri, 11 Jun 2021 06:53:32 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15B6pg8Q020717; Fri, 11 Jun 2021 06:53:31 GMT
Received: from eur03-db5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2058.outbound.protection.outlook.com [104.47.10.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 3942dm03ye-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 11 Jun 2021 06:53:31 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6068.eurprd03.prod.outlook.com (2603:10a6:208:166::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Fri, 11 Jun
 2021 06:53:26 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::b459:9e8c:964b:a3d1%7]) with mapi id 15.20.4219.024; Fri, 11 Jun 2021
 06:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 087b4d57-2ac5-4eba-8f58-e4fa0894278f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OeipfMNk9Io0KE34PwOxvslfYi0KJ4pNxUWXh+vpODlR0CsTEoNEVgtEgbmGufjhXvyGmG1Nk7eldUhGvWUTQ1ZFfDVyXWRW8XorFvICyln7GhhgOShl32mXdqZ3YT6mBjfITifbw38JJ9EESQlRLRR3LaxdptVsisp6roojS8VslkknFSdZmx+u7ztQ7n2BgpAUszU1f2uk89nGdG8J327kjWrlPVHzfGBTaNdduuLepF4R0O9db/QrQW4Z1SD/0rGbyyiyuo/iLTrvakdArxuq5fadTrna84MkQIB3iWSzeLeIi++tOYlI8vcs3YoaGYZgwJyU0Mk+ePxPYHLVUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kTOpmOIDaR981wQQoovjRFppx1CKxP5ikGvnwSG0umU=;
 b=iEitRxcS8ONyrO+81aBfJYS9PGc4RFlG7p4qnD30mAgUs5Je9sIgBRFqTN7Xqek5R8rzIJ1a08aSJKDwtr4AaC4DlGZ2jhoRH1OXy/gWMrqDvY4Ta7yLk4XmFyj78MZIKnH6UCi0CjaPq5/Evb4Ho9xuxKWUV1CeTeaDvHxLxwGmnQfWI78s4jympXIZXR2PZlDX2hNjm7UnQL1L2OJ2r/OryKAgRK+u+3xXeJNX2P85+B/XruIOCA+v/LbeFyfgUaqJR4GGca4wNh7VkYw6bIDunz1rsSnr387u9BcWhd00HPKZdK0H9IvunFFFLqGxxSvU7m/xOxs1C6tnRjVjug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kTOpmOIDaR981wQQoovjRFppx1CKxP5ikGvnwSG0umU=;
 b=Fiy4kEtVyMJTQzsRXZwi2zUFYUotq4Z3+GNq1EZ0eogP4BsvfKgCYWU0TYa86slExdhuEkCL9RcAnZD6XZK6W3/Zj9s1S7wu0e6FcZQ8otk1o1jV6nlWTEOGJr5Hg2+yPSfe4XQlqITiK5giINxKZ2DP02s3oSoFdsz8cTO2a/gN65W6wWfkcTkgf7kHDDEVsgyc116QMj6rXTmQNf1SvbU6F+hzhHDFbjaf0tQaNvxFLQD15grqLqXbmvis8Qv/Wvg4omxB3w7loflJgyJIHht7vj4lmQLplgf2LWt6KYPzgRwqsfDg2TNmYfI7v2olYdH3dp9dfN0k5A1i9E9PTw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Julien
 Grall <julien@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Thread-Index: 
 AQHXWQwW/O3obtZgqkmpGRvt7ZNDkasM6aMAgAAjXoCAAEW+AIAAFw0AgAD+3oCAAAIxgA==
Date: Fri, 11 Jun 2021 06:53:25 +0000
Message-ID: <bd8522b5-c126-3d54-b85b-eba46afc30c1@epam.com>
References: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>
 <YMHFQA1L61ntKNRq@Air-de-Roger>
 <30955a5b-ee46-60d7-ae56-23dc7c91008c@epam.com>
 <YMIdbGCpFGZGwLoN@Air-de-Roger>
 <e0f73a05-b027-d0b6-8f8d-a1078dedccf7@epam.com>
 <632d72e8-e794-2637-d5fe-acc52b530875@suse.com>
In-Reply-To: <632d72e8-e794-2637-d5fe-acc52b530875@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2b508bdc-8e3b-481c-1d9f-08d92ca59cbb
x-ms-traffictypediagnostic: AM0PR03MB6068:
x-microsoft-antispam-prvs: 
 <AM0PR03MB6068B25241173C9F062A97B0E7349@AM0PR03MB6068.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 2qEoxPjreEWWro8Rg2JctDPzy7xBeygeK7aHxlwddGX5GICoqv6MqhDjvSGZrvFK6PHwKPUlkZnLEQOemyHSNMlkyaSTezKJ0Mugz/rq+wc5f6wAP8BAQ1dAOEf/N3yx9+5E1d/Bd+JASrwltE+daSeNXsrfKNXWqeCN5FIIO03I63rRiBBjbZZ5t74Td8LTAe2gmHKe1LQ3X8jFHpNulxxTc8PmU53N0VZ5Q1clnPPaAUETJ3nx/tf8/00IWH23XN5FcF8s4rYkZp1uWjkjxxsovOQIAMWsvPQPsSu6ZhNohzyVCq4xh7qWka2KBcAE58ZI2V3Zg4OwsO/x8a4HI8VZmXqIqI/D7zO7Y88toUsGET3TME+ICnXOf7vXG+JT/suVQ4VhuarYk1myfHkxFDrEVJIuv+JMimOnBXTzPijLcR7Q0l3LFOeY0wW9FChe+huCW5qOJP21fsKceXFevOWl/sHBuDVD4XMrRiEur6B/JSSryrOiQcHraZ6Ssd8oghnRGh1OEhnV9GQMbjig1Agea42gDXUeVNKzACRn0iWb6u68dDCGM1Wao388kBLPxw3G91CscCyAxLoHfltgDLPckqkBIORqwxM3cyvesBTkkNp5BCgllk+sf/FDkL7+9c7cV0sMjaBYzEUwVV0qdzWbhzl+FBInj5t5Xh+8jtDzGoRzCMBYwdix6BGOHXL1
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(396003)(136003)(376002)(366004)(76116006)(66446008)(66946007)(66476007)(8676002)(2906002)(6506007)(91956017)(6916009)(5660300002)(54906003)(66556008)(64756008)(53546011)(8936002)(4326008)(6486002)(316002)(2616005)(83380400001)(31686004)(26005)(478600001)(186003)(31696002)(36756003)(122000001)(71200400001)(6512007)(86362001)(38100700002)(21314003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?utf-8?B?eFh5UDd0bWdzMUg2dUZtZjZ2a09pVVQ3alVrem0ybzNOdzZubkF4MEFMZFRW?=
 =?utf-8?B?N0lHQmxXRVlxcE9DYXBPVTRsclE1cG9pZ0cvUitRM214ZE54VGxScmRKY3or?=
 =?utf-8?B?aUxQaThncFhsSHo5SDdtQ0Fna25MKzlROWMwS1h1MjFkdWNkOFJCTUhoU0lz?=
 =?utf-8?B?UHB2ZXM5RW1Mb290MzUxQlhrN254MTB5anJhU2M1VTB5OHJWNVBxcjBIazJV?=
 =?utf-8?B?WHF4Y05XUnpLR3AxVXR4dGR0U0NPTkJNbURIampLUm5wN3RydXpubWYzS0VS?=
 =?utf-8?B?SG1kY3RJYURvbWlVQStDL2NMR2h6WmRmRjhjenhsTEVlekNBWnlFWGhYTTNW?=
 =?utf-8?B?ZGJWODRoVFQzb0ZqbEdmYzl1ZXdhSlljb29VMjIyZkdkYmVuSkhuL2pVY1BC?=
 =?utf-8?B?UjhYZE5ONUgwOGtPTU9EWEJZTnZXTktLcnJEZGVXK21VYjhZejVTZlFQSzl5?=
 =?utf-8?B?ZTI4OVZteVJ6d3cvU0t6N0xxeGNPeTJUZzVveG9zYkVMVmZkekl0TklCVEE4?=
 =?utf-8?B?RWJMaEtidEp3MGVjRzhDbk4zR3VueHZRUG04aTNsd1N6OTg4eS9yaks5eEo5?=
 =?utf-8?B?N0w5Yk1HOGxJY3JBYUx0Y1ZjNGFFNkVXK25ONjMvb0dUcWt3Z2xsWTZzdGVa?=
 =?utf-8?B?Rk1KVy96ZUVRclB2UWZPMmtTbTF1amo3dmxTWGdaRC8zNnJBSDNGYlZzb2F4?=
 =?utf-8?B?Q285YjlURU5ZYk5WRUNLdisrWk54UTZHTndjQWdiOHBDV05Nd0lIS0pxUmQ1?=
 =?utf-8?B?VFFQY0QvRXcvd2trMDFSbG1DTlowamozdzR2c3VjOS9tSUxwMkIyL3hzOW8z?=
 =?utf-8?B?Kys4K1E1VUdyOUoxeEJBWmZyajZmR0Y3MHhoZmM5ZkM1bVFNZHdMa3ZObzNS?=
 =?utf-8?B?UUc5RUFuazBtQm1lVmFlZ24xdWRNOGhCUWtIcVFmRllzRWozejhYTCt6RFZG?=
 =?utf-8?B?MFVGNHNyVU1WVkhHN29PMzN6ZnhzdUY4ZWQrb2tiejdwWndVTWd2dUdaYzNk?=
 =?utf-8?B?NFlaclQrTTdrS2o2QzJkNUhBTnJkMEJaUnJtVEUrUHp5NE0xV0FkK241a0U4?=
 =?utf-8?B?akQ0MzM1WFJXZU1DSCtrYi84bmJBOUJ1bzNvb2FTV1p0eXFkQzVZaW05Y0R3?=
 =?utf-8?B?SVRxMEsxenpLNlhJZmlKZVNDeEplRUdYRzYxU2xLcExYMTlvOVZEMGhLM3Qz?=
 =?utf-8?B?OXBIL2JVRXY3SjQrejRVZGhVSUJrNDZzSTRia0gwNENtcnlvcUYzQ1VMaGVL?=
 =?utf-8?B?QUtRNlA5QXkwbVhFalRpMTBsbUliRGdOb1h0cVVuV3B3WG5YcjlJQW5vRFBi?=
 =?utf-8?B?RmE5M3RHSm5USWVRNDR5SU05RGI4Z21xS0J6SWJSRDJMSzlLWXF4NWU1UFN0?=
 =?utf-8?B?S1hYU3oreFBFY0FweGNIMzdTbnUyZWtIdDNtOWYxSXc5Tm45aWRuYlFxOXB4?=
 =?utf-8?B?Z2FFb1gxTU1pb1VQMnNhc0JKTUpoOTdDNXpzS1BidzI1R3hvRXMrY3QrUU1x?=
 =?utf-8?B?WnhSaXlaYkt1S3lpL2t2eFp6cTRkTWt2VC9Id01WQXZDTDJ4aXhGZ2dMQWVB?=
 =?utf-8?B?dWNCYWwyS3orbmtCUVpHNm9KV250WUk5WG5kUmU4TnRXUTlrU2VKekxpajVm?=
 =?utf-8?B?SzdpTjc0ZnVyRnRCQ0krbzZ4VU5JN1lBaWxrNkNnbXFMWUpObDhqbXhJcmov?=
 =?utf-8?B?TUVENDM0NDhZL3dUQ2wzRWg4UFl0SlloYi9GaHkzWjhoQ0Y3TXY0dzZ0bDVo?=
 =?utf-8?Q?Lim3JBgitW7cS/NnT6B+5wJcGQxTO10Ow6n1QPY?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <B19702064797BE4693FD24DFCD35AEF4@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b508bdc-8e3b-481c-1d9f-08d92ca59cbb
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 2021 06:53:25.9295
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: JRAxKfFL1HHz06uVgvhYbtvwQ4QIieFUAHrK34iNM9tsAG9yc8qUUuCgYirGTt7uhf0lIb3WGpyAttWFybbnAZPud2G1irzpryCi416lTxQJNIR7PbuM5i2ltt2CduOw
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6068
X-Proofpoint-ORIG-GUID: V6mfsDIo8iop4S9C5WAv2BEczOKoWLQK
X-Proofpoint-GUID: V6mfsDIo8iop4S9C5WAv2BEczOKoWLQK
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 lowpriorityscore=0
 adultscore=0 mlxlogscore=625 malwarescore=0 spamscore=0 suspectscore=0
 bulkscore=0 clxscore=1015 impostorscore=0 priorityscore=1501 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106110044

On 11.06.21 09:45, Jan Beulich wrote:
> On 10.06.2021 17:33, Oleksandr Andrushchenko wrote:
>> On 10.06.21 17:10, Roger Pau Monné wrote:
>>> On Thu, Jun 10, 2021 at 10:01:16AM +0000, Oleksandr Andrushchenko wrote:
>>>> On 10.06.21 10:54, Roger Pau Monné wrote:
>>>>> On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
>>>>>> Hi, all!
>>>>>>
>>>>>> While working on PCI SR-IOV support for ARM I started porting [1] on top
>>>>>> of current PCI on ARM support [2]. The question I have for this series
>>>>>> is if we really need emulating SR-IOV code in Xen?
>>>>>>
>>>>>> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
>>>>>> patches)
>>>>>> and it "works for me": MSI support is still WIP, but I was able to see that
>>>>>> VFs are properly seen in the guest and BARs are properly programmed in p2m.
>>>>>>
>>>>>> What I can't fully understand is if we can live with this approach or there
>>>>>> are use-cases I can't see.
>>>>>>
>>>>>> Previously I've been told that this approach might not work on FreeBSD
>>>>>> running
>>>>>> as Domain-0, but is seems that "PCI Passthrough is not supported
>>>>>> (Xen/FreeBSD)"
>>>>>> anyways [4].
>>>>> PCI passthorgh is not supported on FreeBSD dom0 because PCI
>>>>> passthrough is not supported by Xen itself when using a PVH dom0, and
>>>>> that's the only mode FreeBSD dom0 can use.
>>>> So, it is still not clear to me: how and if PCI passthrough is supported
>>>>
>>>> on FreeBSD, what are the scenarios and requirements for that?
>>>>
>>>>> PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
>>>>> to work. I however think this is not the proper way to implement
>>>>> SR-IOV support.
>>>> I was not able to find any support for PHYSDEVOP_XXX in FreeBSD code,
>>>>
>>>> could you please point me to where are these used?
>>> Those are not used on FreeBSD, because x86 PVHv2 dom0 doesn't
>>> implement them anymore. They are implemented on Linux for x86 PV dom0,
>>> AFAIK Arm doesn't use them either.
>> Well, ARM didn't until we started implementing PCI passthrough [1].
>>
>> It was previously discussed [2], "# Discovering PCI devices:" and proposed
>>
>> to use PHYSDEVOP_pci_device_add.
>>
>> Long story short, it is not easy for ARM to enumerate PCI devices in Xen as there is
>>
>> no unified way of doing so: different platforms implement different PCI host bridges
>>
>> which require complex initialization including clocks, power domains etc.
> Just for my own understanding: If this isn't done by firmware, doesn't
> this mean you can't boot an Arm system from e.g. a disk connected through
> a PCI-based controller? Host bridge setup is definitely firmware's job on
> x86 ...

On the platforms I work with: no, you can't. Well, it is possible to add PCI

support to the firmware, but we normally boot out of eMMC, SD, network and

all those are typically NOT PCI devices.

Even more. In my everyday work I don't use (need) any PCI device to run

the system at all.


> Jan
>
Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 06:59:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 06:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140219.259126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrb8D-00086X-Cm; Fri, 11 Jun 2021 06:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140219.259126; Fri, 11 Jun 2021 06:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrb8D-00086Q-8q; Fri, 11 Jun 2021 06:59:05 +0000
Received: by outflank-mailman (input) for mailman id 140219;
 Fri, 11 Jun 2021 06:59:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrb8C-00086K-Ev
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 06:59:04 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9b85b50-f34e-41d9-a70d-d4c244cd1230;
 Fri, 11 Jun 2021 06:59:03 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2107.outbound.protection.outlook.com [104.47.18.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-PYCYwZttPNmUQhuQTGb-Og-1; Fri, 11 Jun 2021 08:59:01 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4382.eurprd04.prod.outlook.com (2603:10a6:803:73::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Fri, 11 Jun
 2021 06:58:59 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 06:58:59 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P195CA0028.EURP195.PROD.OUTLOOK.COM (2603:10a6:20b:21f::33) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Fri, 11 Jun 2021 06:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9b85b50-f34e-41d9-a70d-d4c244cd1230
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623394742;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=syW9HbkIiOa9IrkfW/lkq+c4qWuGbNaK1+k53L9ByiQ=;
	b=WjvfdVwtVFGvdOHD/YeD3cxh8cY9MORR1bnWeMuA3yQbA/8vWkIOs8bUDAWzo5dqZ5U+40
	EK+0+5XhCY0Lg90y63dP/aAGAtRNU+tAFMqlAskUpnnQQbDtJJ/P/IvTywyJk3KAJZpymT
	vMh+l8TYaP2JmTCFQHkqOpjdm7ZafyE=
X-MC-Unique: PYCYwZttPNmUQhuQTGb-Og-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iylFZqzjN4wXnybxoVW8fH2tZtiCzLI3F+fv28I3pGbiCeMAJq66zoNBROlo4SwPDs3hkuOpOfxwXEH+FEgr5QpC/F4KxUQ80KVe+ZRTJMYT0f6YfZDX3o+QwKjzzpGNDDznKTPdE3vFQzsyV/7BHWn/Vx6VfpKkb8L7Juk6Wu4wfBrePYI92Nx596oiE20RoguM3PGYF7eB94R7Yqv7dYoBmZNYOijWDk5fyD6VOEJgmDm82Ci8/a/ReHsb3MQ8cGLDndQJ2RejURSTRv2kWVwpeTH32XP78pAYGRgIQ/n28bgVpy0MqNTBiFIgDK2QpDgIbXg0GHKFdEUvMp6miQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=syW9HbkIiOa9IrkfW/lkq+c4qWuGbNaK1+k53L9ByiQ=;
 b=kY9XUWvAGWdtEKsFZXzbPCDGwLsYjqGrfvWXMW6Xu605qNUnBbeQbusgBVsjfnaNtN6D3i6z2DgBlNNnKURI5NcZxvhbaaYlugA+32q7uAhxdcolvAWngG6x4Fc+YAhqCWEU+SACYWH+pNFlD5IYd9l5ud3ZhB5tQGhPhNVMp4GTrwEivHZ8J8OCLhjkKoAp/VvcbqGpRsMHQpULacEe9uE9qYEzoYeyQT4vI4L7QXIS+RpTNJsJED42uCjwCKQKgr/uWAWU/s1u6yG1D146F7HYLDF9r2y3OX4vesRFBXARSBHmHcI7VG8X8tYRuP+4j7jNT8XIdKWDnAQ99aj7ng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [xen-unstable-smoke test] 162597: regressions - FAIL
To: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>,
 osstest service owner <osstest-admin@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <osstest-162597-mainreport@xen.org>
 <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com>
 <E28F5F88-7D8A-46C1-89B8-9841071778D1@arm.com>
 <alpine.DEB.2.21.2106101644340.24906@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f7bf86cd-520c-f2f1-7928-448cf5ba8ed0@suse.com>
Date: Fri, 11 Jun 2021 08:58:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <alpine.DEB.2.21.2106101644340.24906@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P195CA0028.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:20b:21f::33) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 957fbe47-4b35-4324-06a4-08d92ca66341
X-MS-TrafficTypeDiagnostic: VI1PR04MB4382:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB43824F79C0EC14414ED5FEB6B3349@VI1PR04MB4382.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jI22ID+q0XXyQlvoVu+t66qF5u4rJDSkkvX2eXZaxFjXzplaeFKHnfP7c66122ftDOHuXX9z75ADh2ksMmMIUc4YfWDizAIiJ4NiEsPVXx7m/CnbRDHTl/+Nxi40pVe9zNQNlev2imf5ldB1jct9hDVlwB3inEFM0zSq7leBAYDpBxPaJKNx5sMfPVVoulmwokm1dVtqcxycoZ2alMatKcxohjIoGykoRGCIvCH8tiQEsWtsFoZS4w4RU4qRqK+CASLfUr7woDzXtzxMZIelxMbJs+5bnVDrw8M8xrD9uJI/Eu71cOVLGgc8SkTm96CDfl62fNig0UVlBJNv/DhwI0n2OCZosKXmXgqvlvYmZjtd730oIgfr9EJdxJ20v8cThx3j9g5zgts3n4WyGg7CPgA3KcALpwcumRCC8728ekyB2pCvQbeMm9h5oOXrfo2LsxKLSuvfO3pwEJ973IcXUsnnRs61c7ceIcJ8J91xUSOSGEvdaW/M1UXYjUVFx8ZJpTedS3exE4eLqOILZtasaRZvWOGDUGSRDGh2xQmpZSnApxfDGOABMb8Upt96J6wQRazv/ATWqq0mCAdNN3fDdp6e6b2jMIC2dmUEHvB+aGKDkQd75FA/qGpsdqbpgqHc3VyLnf9Vgd7i29e9Dte0gLznh1QbIxRqK/dI1hHYM8npDUgbb/ObtaCESFl8djQHw6ho5b/kWkzAimkIPSn3Lp8VgwjFgomHpx6p20fBfiux2ZxMwWOohInPZDwTIEb1H6FR/ayAbP62eV3MXSnDuv5/l9PEHkNFomDrFf2EBdqoH1L3pN0BUC43nNiIapgi
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(396003)(376002)(346002)(136003)(366004)(31696002)(478600001)(86362001)(16576012)(4326008)(54906003)(110136005)(38100700002)(316002)(8676002)(53546011)(66556008)(26005)(966005)(6486002)(956004)(8936002)(66946007)(36756003)(5660300002)(2616005)(66476007)(31686004)(16526019)(186003)(2906002)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bDFBZUxVTTRueUkvbTVEdGpoK1ZRTUg2TC9NaFowOGJxcW5yWW10a1RCM2ZR?=
 =?utf-8?B?cVJzWXdYenNMSXFiVlk0K3BTd3JZU1lFUlYzNzE0MlpnYURvMlhSZ3prQzhR?=
 =?utf-8?B?NU03R2VXT1dvRWRKN2ptUFR1clRrYStSYUl5dEVzemNVZ0VkbWlFeFBtWFNM?=
 =?utf-8?B?ZHIrRDFBNDYwbDhrYWxSV3ZxRzVLTEc5VG1UN0U0aE4rWlVoSDByWUZ3TXFG?=
 =?utf-8?B?eGJTYkREME1KQVhYRHhrNHIwdkpqTWlFNE5hcHZJY2ZQRG95aHJVRytwMEor?=
 =?utf-8?B?RklsZjJuUHQxTStJRGE1M3RZSFZYVGRnUmFCMmhLQVdMRjVHOXRmNFNtd2FZ?=
 =?utf-8?B?NXlMelk5d3dYbmtNbWRNMzNLUzR0eFdjTGU3cGFTQ0pBOXJHeXJiY1V6My9l?=
 =?utf-8?B?dU5FcUZ0Z0JKaUU1ODlWSGlqeHV1M2J2Y05IZG9LR1lNVlA3aitReGRGWDBw?=
 =?utf-8?B?WVd5UHVlN0hiRUZHVU5qbzVScjdWME1NQzhOTTlJRTgvSGl2NDFOdXJ0d3Ns?=
 =?utf-8?B?Z1JwOWF3eUdZdUlBYUZpcTd3bWVMbGljMThNWDVob1ErTHdVY3I1Zm01d2xz?=
 =?utf-8?B?S1V6VGdtZ09CdXpBaG1Vd0M5amlCVllKaTVsWGoyMDlqUzJnMEM5RkFRU3Ny?=
 =?utf-8?B?dFpoS2ZTM0E3SUxROTdmbzNoR1hRNEFPNldmTEpRZ1FXN3RLZU5vL0tueisy?=
 =?utf-8?B?R3E3NGZLOG1QNmdxMkRCdmFESURLS0JBUjRxdGlhcE1pSFpoY1Bqdjl0VVgz?=
 =?utf-8?B?WlBTYjY2czFSUFh4Z0J5M0Q5SWdDbHpaSjJJdVo0NHU5QXkyY3dXbXB5dGhM?=
 =?utf-8?B?b1gyQTR2R3A1cEZBWmxFaFArZWl1bkhwK3JpUFdsekdWT2hTMzdrSmFaQ3A3?=
 =?utf-8?B?V2VkTjZrZDJ3TTVFVVl1RTAxVE45SmFrbXRUWlJ4WkwwOTM2dXQ5ZjhWaWJP?=
 =?utf-8?B?eFQvMGFLSkpEMksrTHF1UFZuQStDb0NFVEJUZXQ1Nk1vQzJVTlBYUlhROVZL?=
 =?utf-8?B?aVQxU2JrYWhrUWxMeEdtNDJtemdXTWVVYk5pMGZsUG51UTVENHhUWFhxMWg1?=
 =?utf-8?B?QzlYZGZML3FOTWpTZ2hvajhUWUl0aDNwYWxaRDNXUnRFVkpTRVJoaHBWWStw?=
 =?utf-8?B?dTlBKzA4aGdsa2RPaTZ5ZmIzc1ExcVJzbE1XbCtPRnptcGNmN2pPSTNiZlJ5?=
 =?utf-8?B?dlMyQmxhSkRNanZ0MVlMSXdqd1RHNFZhQjNaQW8zQnVkL0xwanlHSG9ZK3o3?=
 =?utf-8?B?K0thZzVmM29WbmFTVEVtTURIUVJQTUtxOC8rK0FIbFhYVTlnNFljLzVpZTlR?=
 =?utf-8?B?Rklva05vVEd2eExFOFc4dk5aejh0YlVYbDlPbGZKci8rNkJmWUxBenNpczVJ?=
 =?utf-8?B?Z2k2QU9LaEtBZmI0ejdacnF3cnZDNVVCeTRMQmNtSkdhODBrazNwVm5qRS9j?=
 =?utf-8?B?QVZ0R04vL3M0clYxSjNKVDJZLytvblVjby9aV2pIa3MyNEVYMjZoTy8yaU1i?=
 =?utf-8?B?N25DY3VSRkt6UlBBZERzUW40dDNWdHVGNGp0ZXk2YTVTMDNwSU5VYWE4RkQ3?=
 =?utf-8?B?QXNqWlExLy9MSi9kbXVjN1pCeEJNUG14NXlKWFltd1ZDd3JSd01IbDl5UTls?=
 =?utf-8?B?ZFZ5dHlKYmd6VXQ1Q29IWnZoVFhKaUpxaG9tT3NNSG9TK3M4R09iYStHNE45?=
 =?utf-8?B?MUVSUHdHR3pMMUR4MzY1ZlROOUVFU1NneXNTNy9SYmhaMXdKZDl0cGR2ZmVJ?=
 =?utf-8?Q?+wwbUfCjjOttvBqzIGhNL4Qv9C9l3TBa0Yfa/GS?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 957fbe47-4b35-4324-06a4-08d92ca66341
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 06:58:59.2254
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Idhu9Hzd4tVQzhwRghXyY0/NbDootdOR26GJLsT0uBnJgzriQRQliE5g/Y3XgwmojwqIBMlIlRnSPBGRyc3+qA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4382

On 11.06.2021 03:49, Stefano Stabellini wrote:
> On Thu, 10 Jun 2021, Bertrand Marquis wrote:
>>> On 10 Jun 2021, at 12:32, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 10.06.2021 12:50, osstest service owner wrote:
>>>> flight 162597 xen-unstable-smoke real [real]
>>>> flight 162602 xen-unstable-smoke real-retest [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162597/
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162602/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>> test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574
>>>
>>> This now being the 3rd failure in a row, I guess there's a fair chance
>>> of there actually being something wrong with ...
>>>
>>>> commit dfcffb128be46a3e413eaa941744536fe53c94b6
>>>> Author: Stefano Stabellini <sstabellini@kernel.org>
>>>> Date:   Wed Jun 9 10:37:59 2021 -0700
>>>>
>>>>    xen/arm32: SPSR_hyp/SPSR
>>>>
>>>>    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
>>>>    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
>>>>    See: ARM DDI 0487D.b page G8-5993.
>>>>
>>>>    This fixes booting Xen/arm32 on QEMU.
>>>>
>>>>    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>    Reviewed-by: Julien Grall <jgrall@amazon.com>
>>>>    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>>>>    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>>>
>>> ... this. My Arm-untrained eye couldn't spot anything in the logs.
>>
>> I am not sure I am reading the log correctly, so do I see it right that dom0 started and it then failed to start a guest?

Well, in this particular flight it succeeded in creating Dom1 (for
guest-start) and then also managed to create Dom2, but failed to get
the expected "sign of life". It varies at which of the repeated
attempts the failure occurs (in one of the flights it also occurred
right at guest-start), but the failure probability is high enough
that so far none of the flights has completed successfully. And with
this high a failure rate, a run accidentally succeeding and thus
leading to a push would probably do us more harm than good.

> Thanks Jan for bringing it to my attention. 
> 
> I am not an expert in reading OSSTest logs. From the following:
> 
> http://logs.test-lab.xenproject.org/osstest/logs/162597/test-armhf-armhf-xl/info.html
> 
> I understand that Xen booted and a DomU was started. However,
> "migrate-support-check" and "saverestore-support-check" failed. Is that
> correct?

Yes, but these two steps aren't the problem - afaict they always fail,
and hence wouldn't prevent a push.

It's guest-start/debian.repeat which is the problem in this flight.

> If so, it would be really strange for SPSR_hyp/SPSR to cause the problem
> because I would expect Xen to hang at boot before Dom0 is started
> instead.
> 
> 
> I don't have any ARMv7 hardware to try to repro this issue, and ARMv7 is
> most certainly required (ARMv8/aarch32 won't repro.)
> 
> Could someone more at ease with OSSTest than me arrange for a run with
> this commit reverted to verify that it is the issue?
> 
> 
> 
> In any case, I tried to figure it out. I guessed it could be a compiler
> error. I followed the white rabbit down the ARM ARM hole. I disassembled
> the Xen binary [1] from the failed job. "msr SPSR, r11" is 0x0026a38c.
> 
> The encoding should be at B9.3.12 of the ARMv7-A DDI 0406C and F5.1.121
> of ARMv8 DDI 0487D.b. Unfortunately it doesn't seem to match either one
> of them and I don't understand why.
> 
> 
> The "mrs r11, SPSR" is generated as 0x00262ecc. That should be described
> at F5.1.117 for ARMv8 and B9.3.9 for ARMv7. Also doesn't seem to match.

Indeed I was wondering whether perhaps the tool chain has an issue here.
Otoh I'd expect a tool chain issue to yield consistent failures rather
than ones with just a fair probability. Unless, of course, unspecified
behavior is hit, and the hardware indeed behaves randomly in this case.

Jan
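
[Editorial note: the cross-check Stefano describes above can be sketched in
a few lines of Python. This is my own illustration, not something from the
thread: the A1 encoding of MSR (register) with R=1 (SPSR destination) fixes
bits 27:20 and 15:4 of the instruction word, so a candidate word can be
field-checked against the manual's diagram. The reference word 0xE16FF00B
for an unconditional "msr SPSR_fsxc, r11" is my own assembly of that
diagram, for illustration only.]

```python
# Sketch: field-check a 32-bit A32 word against "MSR (register)" writing
# SPSR (ARMv8 DDI 0487 F5.1.121 / ARMv7-A DDI 0406C B9.3.12).
# A1 encoding: cond | 00010 | R | 10 | mask | (1)(1)(1)(1) | 00000000 | Rn
# with R=1 selecting SPSR as the destination.

MSR_SPSR_FIXED_MASK = 0x0FF0FFF0  # bits 27:20 and 15:4 are fixed (R forced to 1)
MSR_SPSR_FIXED_VAL = 0x0160F000   # 0b00010110 at 27:20, 0b111100000000 at 15:4

def is_msr_spsr_reg(word: int) -> bool:
    """True if `word` decodes as an A32 'msr SPSR_<fields>, Rn'."""
    return (word & MSR_SPSR_FIXED_MASK) == MSR_SPSR_FIXED_VAL

def msr_fields(word: int) -> dict:
    """Extract the variable fields of the MSR (register) encoding."""
    return {
        "cond": word >> 28,          # condition code (0b1110 = AL)
        "mask": (word >> 16) & 0xF,  # which PSR fields get written (c/x/s/f)
        "Rn":   word & 0xF,          # source general-purpose register
    }
```

A word pulled from a disassembly that fails this check either is not an
MSR-to-SPSR at all or has one of the "should be" bits (15:12, 11:4) wrong.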



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:07:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140226.259136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbGL-0001D1-6q; Fri, 11 Jun 2021 07:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140226.259136; Fri, 11 Jun 2021 07:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbGL-0001Cu-3w; Fri, 11 Jun 2021 07:07:29 +0000
Received: by outflank-mailman (input) for mailman id 140226;
 Fri, 11 Jun 2021 07:07:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=axIZ=LF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lrbGJ-0001Co-Jo
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 07:07:27 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 625e4031-a66f-413e-9de9-88b0edcc5f11;
 Fri, 11 Jun 2021 07:07:26 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 187FB21977;
 Fri, 11 Jun 2021 07:07:25 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id DFCC6118DD;
 Fri, 11 Jun 2021 07:07:24 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id XnpvNawLw2DcBwAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 11 Jun 2021 07:07:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 625e4031-a66f-413e-9de9-88b0edcc5f11
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623395245; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QkWNiooVNQSs05V1bJZC4hCpcru2xf+saAlGuIFl1xA=;
	b=OhwgWzbw67aykne2WUrVUl6cpEksFKAew3hgh9l42M5jWh1pSEZ7avFKRMSLKKefRdviDb
	hSBy1J9VN3KtbocI7RBo9E+UWfh/x0mCSBphmNcbo1AOFVkZyKexkPlzXMv8Z5T5T6Yq3z
	1nnD0ZU20ZTrkH9Im/UY3DCPy+CuI8Y=
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210608055839.10313-1-jgross@suse.com>
 <20210608055839.10313-3-jgross@suse.com>
 <20210608183833.023551f4.olaf@aepfle.de>
 <eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
 <20210611074616.2a4b96fb.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit for
 xenstored
Message-ID: <fcb0a1d6-c392-e0a1-2ec6-d6cf6a40d6bf@suse.com>
Date: Fri, 11 Jun 2021 09:07:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210611074616.2a4b96fb.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="AP7e712yQkVLyO4fDGY3KEqV21vixnSku"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--AP7e712yQkVLyO4fDGY3KEqV21vixnSku
Content-Type: multipart/mixed; boundary="CJEOEGrliY7zmAbG4AIy6KhUqkPeJf15M";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <fcb0a1d6-c392-e0a1-2ec6-d6cf6a40d6bf@suse.com>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit for
 xenstored
References: <20210608055839.10313-1-jgross@suse.com>
 <20210608055839.10313-3-jgross@suse.com>
 <20210608183833.023551f4.olaf@aepfle.de>
 <eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
 <20210611074616.2a4b96fb.olaf@aepfle.de>
In-Reply-To: <20210611074616.2a4b96fb.olaf@aepfle.de>

--CJEOEGrliY7zmAbG4AIy6KhUqkPeJf15M
Content-Type: multipart/mixed;
 boundary="------------25A267764D7B3BBA5256CE7F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------25A267764D7B3BBA5256CE7F
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.06.21 07:46, Olaf Hering wrote:
> Am Fri, 11 Jun 2021 07:01:31 +0200
> schrieb Juergen Gross <jgross@suse.com>:
>
>> Why? You realize that above is a comment just documenting the default?
>
> That depends on the context. See
> https://bugzilla.opensuse.org/show_bug.cgi?id=1185682 for a reason why it
> should become an empty variable. But yes, we can patch that one too.
Isn't that a bug in fillup or the related rpm macro? A variable already
set in the existing /etc/sysconfig/xencommons file should simply be
preserved.

In general I think we should be consistent in the file.

In case there is no downside for other distributions, I'd recommend
switching all variables to your suggested pattern.

If there are disadvantages for others, we should keep the current
pattern, as changing it now would break existing installations.

Any thoughts?


Juergen

--------------25A267764D7B3BBA5256CE7F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------25A267764D7B3BBA5256CE7F--

--CJEOEGrliY7zmAbG4AIy6KhUqkPeJf15M--

--AP7e712yQkVLyO4fDGY3KEqV21vixnSku
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDDC6wFAwAAAAAACgkQsN6d1ii/Ey9m
0wgAkIOlHVzqDVW0pGvsXcDniYzyKT5WqvgLEiE0Ld1sYZtw+uV1ctLcAFhqrW01s4uQ925VUnUR
0WRSwfSA90/ht37Bh9z8fnLq/qAOHioirl46iODLZ5eLxc6yWVYBd/Janz9/Pa9ygAO5FdFN835o
K6JK5CCfXWiUw744PP6egPJb3p39f7WvxFti0tOaeh69g6vw6dyMsKifgpPubRzqp7TX8hnTnqmV
AFhwhj8E+qKCDL9PjHBKNsLICJA6IYGM//8hYNzzGCV+2WAjXmLlimpYpDSdjVSVOrTsRADhjdFq
dOpyuZ0HK0X07dTskah0+YhDFYXNKSWDDv5pTlDihg==
=3HK1
-----END PGP SIGNATURE-----

--AP7e712yQkVLyO4fDGY3KEqV21vixnSku--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:28:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140234.259148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbap-0003YQ-4L; Fri, 11 Jun 2021 07:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140234.259148; Fri, 11 Jun 2021 07:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbap-0003YJ-1L; Fri, 11 Jun 2021 07:28:39 +0000
Received: by outflank-mailman (input) for mailman id 140234;
 Fri, 11 Jun 2021 07:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6eg9=LF=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lrbam-0003YA-NU
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 07:28:37 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.218])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dcfa0988-751b-485f-b586-41cf82c56ad5;
 Fri, 11 Jun 2021 07:28:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5B7SSbBd
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 11 Jun 2021 09:28:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcfa0988-751b-485f-b586-41cf82c56ad5
ARC-Seal: i=1; a=rsa-sha256; t=1623396508; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=SXppBXc7jLwi1hjItE1Q6WBIbiKMN0OWpSYpfP3oVVYvUYh/Sj6tKa8VTk2Yq+3bXA
    6jJKD4AoWAlWToBD1M33zkeVj0OSJ/y1KhCGBNxMC+U/VhiX2pB1fq7ae2a2HmN8Qidy
    DDybPpEU6rtZn/fUGrKgGMZ560Nv+Wv37dUUtlmvgw77VoVYTu3tOh/fSwBwpBDeGcNr
    lqfbo7F91HGvFfXHrCAAQXIUtPVzl8lhzByytSAG4bCC9n70cjMLV3OaCXv1DPVHQHz+
    uYtTOk6KqmCkbgK2nJ8HYt21CMVbtt5Xg+6Z+5ffurXjrTxBa6cQyWQe4co90q1900qq
    uF9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623396508;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=IwC2SpROjPR4yfqIr6wGKLMyHG7VjnBvkUQAWkc20T8=;
    b=aMG0hH6giNZDZ/LTSQQ81A/Oyg7hRNkwD9QvenNu/aaQPcynTEZ7Q2tTmE/mnL4yON
    pIBm6qmRRxGbh4l7/A2Z+57+KiWhfHw4neOCH0Q5Si1bl1M8VwC/jMwFxWxxz8+ljB+p
    AvuT1GCQlwuP1fLTKOG8jApM2ypJB9nRadVhBu8nDKCYBe3e/QxicaurcFFt+EE8e/DH
    7klxeilZ9X7L7o73BGp33OA0SjJfh7L49xySBmOwVsXkPlzspZhvOvA4n1/+LuMRivfe
    ja8eOs83A4VbDLiL0Kxr1DhLgxhyVFCBUxJTzON7vdUY7JzVehSldyzZ1UqO9Kx8PIXS
    lwHA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623396508;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=IwC2SpROjPR4yfqIr6wGKLMyHG7VjnBvkUQAWkc20T8=;
    b=T1xtB8D0PFIt+MxvWwtTG+QP+fJc7rK60HKyqH+gaCZSsBaB1OXNrCQTKkvabd5A63
    IV/u9oFurFswnXPuMZlazHaBeO5WtWXCvoH39Sz7J38+a1iEyZMQCMQEi5+8d1Srir30
    OA8OKJNTn4p8SLwCvnqBk8ME4iEtrwEigwEgEixcwf2UmK2mlJh5JGKF5ikUkaE402aa
    QUm7uTAollT3VIMedB7hMekpc2hI1/I5wBR/iJOvRlWmx089deegZph/VPwZmpiqRPeE
    GPhKf40x7Fv/VxqwG2FPRrPNO+r8BC8o+klmYnDjssvbW7Y0QwZfHlUIzjXsYueyB3xI
    6iWw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Ax6Fb03sCxOoTBq7r1dZtjiRLxxzC79Iv3HI"
X-RZG-CLASS-ID: mo00
Date: Fri, 11 Jun 2021 09:28:14 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/2] tools/xenstore: set open file descriptor limit
 for xenstored
Message-ID: <20210611092814.1c86c350.olaf@aepfle.de>
In-Reply-To: <fcb0a1d6-c392-e0a1-2ec6-d6cf6a40d6bf@suse.com>
References: <20210608055839.10313-1-jgross@suse.com>
	<20210608055839.10313-3-jgross@suse.com>
	<20210608183833.023551f4.olaf@aepfle.de>
	<eaf53d99-fee9-0c79-7f29-efd00aae4d16@suse.com>
	<20210611074616.2a4b96fb.olaf@aepfle.de>
	<fcb0a1d6-c392-e0a1-2ec6-d6cf6a40d6bf@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/8snxwIAzlWmCToxQCgDPaHz";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/8snxwIAzlWmCToxQCgDPaHz
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 11 Jun 2021 09:07:24 +0200
schrieb Juergen Gross <jgross@suse.com>:

> Isn't that a bug in fillup or the related rpm-macro?

No. Fillup expects a certain pattern: a bunch of comments and a single
key=var right after that. With such a format it may be able to adjust the
comment and leave the key=var as it is. Without a key=var it will treat
it as a stale comment and remove the entire section of comments during
the next package update.

Olaf
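
[Editorial note: to make the failure mode concrete, here is a hypothetical
/etc/sysconfig fragment — the variable name and comments are invented for
illustration — contrasting the pattern fillup preserves with the one it
discards:]

```shell
# Pattern fillup preserves: a comment block immediately followed by
# its key=value line.
## Type: string
## Default: ""
# Extra arguments passed to xenstored (hypothetical example variable).
XENSTORED_EXTRA_ARGS=""

# Pattern fillup discards: the same comment block with no key=value
# directly after it is classified as a stale comment, and the whole
# section is removed during the next package update.
## Type: string
## Default: ""
# Extra arguments passed to xenstored (hypothetical example variable).
```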

--Sig_/8snxwIAzlWmCToxQCgDPaHz
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDDEI4ACgkQ86SN7mm1
DoCJxw//ZA0XDNN2hxkZA7DY197qDRjw9N5NXy83isgI0uDdYMlhQKu9YfLd4KoE
cbJcfZ7kjHGtCPrOt7skMy88gIIUeLxGzNsxczVutc6SvLuNW/0f5RQbZJVhxBP9
4flPUJBnB6Z1pF2sADYyEvNJ+P+PIduA0NS5b/IcQXUXZ+1RYLezO8SltLTPWlHg
2DC0wbzUTIVPBZ67dyZxNQL3nHNJStp+Z/ZnYOH9u4hCmwX7GQE2Is03v3rkmMRO
o4g+M/pCHh/b2KPu9N3kkXc6Pju2eY1rID2q/tR4rUVYlPYsCQXWbcPcDNilMvyc
VoHJmy8pXdyTBe20q0hdgT+PRbLgHD/HLUvmESwJHhoVcxkwPs6MIYaAiQBvezBs
HNl1ANhNCxqQPVi2d9xi2+gIcI6ONC91oSgOQE2CuPLKugoS+ExlYpIwqBo3gcfs
qsyjs8CnoYCTQS+CymHbyx7IdyvNvi4eu4fbBztMNCeP7vn3ThmcZbgtPB8A5jWo
pQ9ZsvVUz1PC0v8HS+WoAHLuM3o/4fz4Wiy4NDChdYEyyqGd+6sZkpBG/+LWf8no
NTqy4Ymzd0nxpw1140bGyVw4AvGSCIf0BtrvuqRcerrhe8QnVNDxDf5/AR4YKsc2
N7Qmq3HhQBALM7mJDXw2XqGZmBg9s+0zTfI/vLmO0BbYRJ5W95A=
=52NC
-----END PGP SIGNATURE-----

--Sig_/8snxwIAzlWmCToxQCgDPaHz--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:30:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:30:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140240.259159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbcE-0004oB-Gq; Fri, 11 Jun 2021 07:30:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140240.259159; Fri, 11 Jun 2021 07:30:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbcE-0004o4-DS; Fri, 11 Jun 2021 07:30:06 +0000
Received: by outflank-mailman (input) for mailman id 140240;
 Fri, 11 Jun 2021 07:30:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbcD-0004ca-09; Fri, 11 Jun 2021 07:30:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbcC-0007pB-OY; Fri, 11 Jun 2021 07:30:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbcC-0000pi-HL; Fri, 11 Jun 2021 07:30:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbcC-00078z-Gr; Fri, 11 Jun 2021 07:30:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C4AcvXJPR99Y7VOdAPdXQdEJFTQNDp6jPP5emt5H44g=; b=BKqjTB5MonjBo6SmtXGgLzHyFM
	NaQXJrKLSOo8iS+WAn+kVeEFLVYnVpmcCCTgnuVhgcXQOKyB+I0DwLFPmMQxY4rWDl2hTOU7LhR7V
	5wIqZRcluxwXxUCxAp0nMWQrTx6iL5kf3/y/Tc10AfJvn/l+caxMJhvQbx0AS8MXGAok=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162609-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162609: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 07:30:04 +0000

flight 162609 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162609/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    7 days
Failing since        162368  2021-06-04 15:42:59 Z    6 days    7 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:39:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140249.259173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbkq-0005cR-G4; Fri, 11 Jun 2021 07:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140249.259173; Fri, 11 Jun 2021 07:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbkq-0005cK-CA; Fri, 11 Jun 2021 07:39:00 +0000
Received: by outflank-mailman (input) for mailman id 140249;
 Fri, 11 Jun 2021 07:38:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrbkp-0005cE-J6
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 07:38:59 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 676f9e80-1b45-409c-a5e9-b107cb70d5ba;
 Fri, 11 Jun 2021 07:38:58 +0000 (UTC)
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2053.outbound.protection.outlook.com [104.47.2.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-VKyVc1NSPQCeOb4e9aUOjw-1; Fri, 11 Jun 2021 09:38:56 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3935.eurprd04.prod.outlook.com (2603:10a6:803:1f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 11 Jun
 2021 07:38:54 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 07:38:54 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0058.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4b::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Fri, 11 Jun 2021 07:38:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 676f9e80-1b45-409c-a5e9-b107cb70d5ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623397137;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X11Ut+Mjv9dMzNg2zGs9HBO5d0wHIHjV5BuGHTa6zMc=;
	b=YoIB0S38T1+X0GJF5SsEl+NTXm6BErkcxcSuJsr/sEll1AnNQKro3UtIMHewUAyPSgezkx
	phwOoBVPmxKeCYdWIyDPb1yISOCedaF2PJ8zY11jua2RsQAXqABdl6cxcY7OggLRXU+PUl
	lXdOJxOM1IGejT2CJ9RXzlKQtUYnhfs=
X-MC-Unique: VKyVc1NSPQCeOb4e9aUOjw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SlN0fz6ZbqLziWTH9npyxaEyvG9KZHmYXbFloNm0EPSIex7zfCQEGKONPnBa2c8VkNIqvBT80S+VskdPMNfPTKh744n5lylMna6pOAlNCZ21qEzufMoxjvKkrwQH0D5Oof9BIEIWkg1OTDMjegxCOW65PM/+eoVpcffH5a3Dxj9dpaTs+6O2optewWDt+JXLSl3r9STv5oECpZAq3cva49SLZ1knu4Fl7cIjq3h25+PSAc2wvLZCGNg0a6u68BCzBdJ5vyPGu6uL1UXZwIPoVxUv8QZoIUhX7vXujrB72GpLNzXbOjFbp8z2owXYJcXhw+yJWfP71iygMxroAXdTfQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X11Ut+Mjv9dMzNg2zGs9HBO5d0wHIHjV5BuGHTa6zMc=;
 b=nddZC44p4kaGMTgo38Ey2EJtm4xoJYM4G/50L3QJ/sT53h5pDzlBDeg9M6FHHNx5OI7//ygBE3fDUdPHVZYBMT3l7AuC3RZD8CoKyt658RYyYBPx0ym5VJ8Wn0t+CNcRDYNuwV6pzYFBML2Q3yRXunTpytpnSeLadxch0mCXlREOFprkZwfsEZfUb5Cg1UTX9JwRAZvRvEvXGRafdEYodrpb39q3f6HAARCgaCrbyWHcTTVqWMAX7XLuViC/fYQmyZ9gpl/vtMXUAvbc2YmYuULRll77ncnJxBBU91N8MoWapmEhkRUeAakAlg79HNSAOP1rJpnxIHJAZ5rq4UsKQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable-smoke test] 162597: regressions - FAIL
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>,
 osstest service owner <osstest-admin@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <osstest-162597-mainreport@xen.org>
 <6d95cfac-e43c-d1f0-f988-4f11335b104d@suse.com>
 <E28F5F88-7D8A-46C1-89B8-9841071778D1@arm.com>
 <alpine.DEB.2.21.2106101644340.24906@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88b6c31a-d8ea-6d0c-2c97-ede8b01d3117@suse.com>
Date: Fri, 11 Jun 2021 09:38:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <alpine.DEB.2.21.2106101644340.24906@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0058.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c0e49b50-c4c5-4f98-6fe2-08d92cabf698
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3935:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB393501BA84FE7BF7B0112762B3349@VI1PR0402MB3935.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1Q+Bi1m9UF1UnvtNWNS3c95/utVsm74/dAuoERKEjTBC7YqHCUdnRdvb3gbYc6d3DJU6OajT3bTFdz/c15qc73Sk09jn9xSiLlX0qQfhSy2vQwp3HYGkLMN/XujYCpYCDsc9yhvauerxaBB+2GHXfCwvaTj6PZHbjCMCPWMUBCPdWyfaYxalssZfadawWwlZho4WYS+U7BCW9lqsOZZJ6/zkTWHO3RNryZM5XnoJPWOSyLXBw+8RNzYnUD7SYPeLk+0SZ/Q83aduxfWO4xydUoGGGHoqmPN2GU5MruMgdB7ajR9Cepol6XuJvLk2vxpWWCMKhAYM94ktyTw6w4N3gTmKdLgP1DB78DGZs+jLvbEWzkE+bHtZVD6ZLP7gtlxiA8Lcn29hzUvK84hF95y0vO5OCF5joZbLBCT6G8SNvp+on1wSXjDSIwnP1jqXCw+80LJJdY0eH9fejcCWksyI/G3sKOdmwlqGc/TpnnsRidfdJYfVWXcUJUfZ3xwQwXlG9hXieuSIkZUyt2/TveyN+7WMakLelqr5qW+/HatqfUVDRlqwOW66zS4JbqwxpxDYTVtDO3aUptuDhivKAcfuvPgLKJkmLDJzPbFz7mYlIWIJ7uUKYdZTANvq14wyPdJBc/FAgyE5wHZQI4uovjTXBElWO3kiwUKT2CnuW+UdLMxlM+lNwyminLQX1CMCz1/l
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(366004)(136003)(376002)(346002)(396003)(38100700002)(316002)(54906003)(16526019)(66946007)(956004)(66476007)(26005)(2616005)(16576012)(8936002)(36756003)(66556008)(53546011)(8676002)(4326008)(2906002)(6486002)(31686004)(6916009)(83380400001)(186003)(31696002)(86362001)(478600001)(5660300002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?amJXOHdNOHdUUDNxa0dtZHNwNnpTUE1sMXRtZDNrZTVVbEFhV2pWbGlxUksy?=
 =?utf-8?B?dEx1WlM5a0x3ZzhtOEJkZ1NaaXhiSURiYlptVzNSbU12blRRelV0VnhsQkl1?=
 =?utf-8?B?Q2lrY3dBeVVCNUM5bElGVTR6REtiK0xSOE8ra3Nmc2kwUmxQbkV3RnIySkpC?=
 =?utf-8?B?andBNnQvdUUzWGl2WUxpdG9QajJ1K0tMb3VKY0xhQmlFZktCcUJwNGpBSmc4?=
 =?utf-8?B?WjB2Q2R1U2NmdzAreG1OUlpza0lTZXNvWFBheTZ1SElUUlBKSUVVUHB6RXpx?=
 =?utf-8?B?cnhReFBhOUYzSExaWjNlYkJERjJTZkM3YnBPdzh5WEplM1FIcTF3K0FpeUt2?=
 =?utf-8?B?RkNsMnhBRkpaLzB3K1ViYzA1QTBCbVR3a09INU1oeHgrYnI2Ty8ybmNtcmNE?=
 =?utf-8?B?SHJ3Zk90d2Juam8wdXU0WkdhQnF3Zi9HWDdESXpVNDY5MjN5akFiRENZa0ZP?=
 =?utf-8?B?WldQMjNmbWJSVkFlakQzL1FXQ3FBNENMbEJWVkNWZk9jWFpzeUV3UXVYK0xt?=
 =?utf-8?B?b05KZXBzbjhqajlJR2hVVmdvQjNKQitybUtBa0JyRy9NYkNxMFNNU0RGd3Yw?=
 =?utf-8?B?U0JZTHFSaTFQWUt3UE52VVpoK20xSHdoRC9xZllIcGhhaGVIMDhZQVpTWjhh?=
 =?utf-8?B?d2NQUkplQ3pWNC9zRTBnMnZiY3NjeTVNMEJOWDRlRkM0V0tIaWdORS9SN0dh?=
 =?utf-8?B?Z0d4Sm9rL0tialk0QStkUTdDVFJJbTBkdmE4YTF1bmNSUlJiVVBCMVA4aTBW?=
 =?utf-8?B?OVlmQjlDUkNxaHhUNUUvVjhuT0xMd1pxOWloVzBrT0w4KzhYSnNIbnl5SU8v?=
 =?utf-8?B?aGRoREJZN3pMZWhUaGRqZXN5S2ZLY0JKUytXbHZTWXRoMnR5ZStBR2lyWWVy?=
 =?utf-8?B?UWFiaWVRVlFjZmdYNVlDMW1mNy9YYXhIeUJqdjdKRVVocUUvM0dWck9rQkJI?=
 =?utf-8?B?dzN5NDFUckMzSFlQQWM1ZzloMU9LelBjS0hyOTZPbWIrTmZsUTB3MHpFWkI2?=
 =?utf-8?B?Q296Myt6anZ4MHR2VmFpN1gyaGdHK29CaStRUTlOb1pQeEF5RjVTSUUzdzBl?=
 =?utf-8?B?d3JxMGQybUlaU0pWQWdnb3FzeUhqNHJvQlg3REVFRjNzK3NobUVGdWZwVklN?=
 =?utf-8?B?bWU1a3VZMUVMUElGSzJxMFpwckc0cGZ5L1Y2MkZpVmJEVUR0UkprSFlSblZi?=
 =?utf-8?B?SUc2RDhmUWxNZzJud3FaQk1pNW1pYm5NV1J5dDlYQ01obkJSbXJhZXVCd0Zq?=
 =?utf-8?B?alBZekFoNU4veUUyRitseTFQZU1USmUyMTBCRDF0SHRrektkYS8vL2gyM1lu?=
 =?utf-8?B?M09DdjBheXhmTHFQZGUzMkRTcnB5MjRkdW4wdS96NGZmUDhHbndTTlFJWExh?=
 =?utf-8?B?NVIzMUxwSldkOC92NVJTMDhWTjhYaElZZGo0a1d1TjFoL2pGZVNJdnZNdWk4?=
 =?utf-8?B?YmlCc1hBdTNHTUc1L2Yvb3c0STI2YjBvRnV0ODFzQ0FYT1hZMkxEOEVnUnE4?=
 =?utf-8?B?V0hBaUJocTNIRHl5L0ZabnVtbHpxTTQ2WnprVXJob3l3WGd6TzY2SlV5aThF?=
 =?utf-8?B?bVVxSEd2OUtHMksxU1JqZE5Wem5XUHErTG9qY1VvenVvcUo4WEpjOU9tMFFz?=
 =?utf-8?B?dk4yTE12QTRHaVJjRGo0LzdpYVJYbzVldHFLQ0s1RDUySjRzU0lYeFlVZldL?=
 =?utf-8?B?Ty9FQ1JXOEROSTd1Skd0b1JVSkdrcC9DZUNoZVcyUHR0cGdIb294VE53VnpH?=
 =?utf-8?Q?kL4cXv5g87kKrKZtDBvQj9zuHscGoLmrSwA62lJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c0e49b50-c4c5-4f98-6fe2-08d92cabf698
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 07:38:54.3027
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uSfsXMylMGflqofr9dzuLzFtNEFVmBzvutMSkhy0Xiw8y2iyWgPj7Wx587EoKHMuP2fALchQEL4O4PbHzhKr0Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3935

On 11.06.2021 03:49, Stefano Stabellini wrote:
> In any case, I tried to figure it out. I guessed it could be a compiler
> error. I followed the white rabbit down the ARM ARM hole. I disassembled
> the Xen binary [1] from the failed job. "msr SPSR, r11" is 0x0026a38c.
> 
> The encoding should be at B9.3.12 of the ARMv7-A DDI 0406C and F5.1.121
> of ARMv8 DDI 0487D.b. Unfortunately it doesn't seem to match either one
> of them and I don't understand why.
> 
> 
> The "mrs r11, SPSR" is generated as 0x00262ecc. That should be described
> at F5.1.117 for ARMv8 and B9.3.9 for ARMv7. Also doesn't seem to match.

Looking at the disassembly, the two numbers you've quoted are addresses,
not instruction encodings. Using my own disassembler (i.e. there's room
for it being wrong), I get

        E169F00B        msr     spsr_cf, r11
        E14FB000        mrs     r11, spsr

the former of which doesn't look like an exact equivalent of the input
instruction. I guess "msr spsr_cxsf, r11" is what's really meant?
In gas sources I find this:

      /* Unadorned APSR is equivalent to APSR_nzcvq/CPSR_f (for writes).  This
	 is deprecated, but allow it anyway.  */
      if (is_apsr && lhs)
	{
	  psr_field |= PSR_f;
	  as_tsktsk (_("writing to APSR without specifying a bitmask is "
		       "deprecated"));
	}
      else if (!m_profile)
	/* These bits are never right for M-profile devices: don't set them
	   (only code paths which read/write APSR reach here).  */
	psr_field |= (PSR_c | PSR_f);

There's clearly a comment missing to talk about the "unadorned" SPSR
case, but the effect is exactly what is observed: rather than
defaulting to setting all four field bits, only two of them get set
when plain "SPSR" is used. I've not been able to spot a place where
the Arm ARM specifies this, but given its size I'm not surprised at
all. I'd like to note though that the MSR description doesn't even
allow for plain "SPSR" (unlike MRS); only SPSR_<...> is described
there.
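For what it's worth, the field mask sits in bits 19:16 of the A32
"MSR (register)" encoding, so one can check mechanically which PSR
fields the encoding quoted above actually writes. A quick sketch (not
a full decoder; the bit-to-field mapping follows the Arm ARM, F5.1.121:
mask bit 3 = f, bit 2 = s, bit 1 = x, bit 0 = c):

```python
FIELD_NAMES = "cxsf"  # mask bit 0 -> c, 1 -> x, 2 -> s, 3 -> f

def msr_fields(insn):
    """Return the <fields> suffix selected by an A32 MSR (register) encoding."""
    mask = (insn >> 16) & 0xF  # bits 19:16 hold the field mask
    return "".join(n for i, n in enumerate(FIELD_NAMES) if mask & (1 << i))

# The encoding found in the binary: mask = 0b1001, i.e. only c and f.
print(msr_fields(0xE169F00B))  # -> "cf"

# What "msr spsr_cxsf, r11" would encode: mask = 0b1111, all four fields.
print(msr_fields(0xE169F00B | (0xF << 16)))  # -> "cxsf"
```

which agrees with the disassembly above: plain "SPSR" got assembled
with only the c and f fields selected.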

Based on this analysis I guess I can make a patch despite not being
able to test it, as I'm pretty certain you really want to restore
all of PSR, not just the low half ...

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:52:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140256.259184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbxP-0007o8-Md; Fri, 11 Jun 2021 07:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140256.259184; Fri, 11 Jun 2021 07:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrbxP-0007o1-IM; Fri, 11 Jun 2021 07:51:59 +0000
Received: by outflank-mailman (input) for mailman id 140256;
 Fri, 11 Jun 2021 07:51:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbxO-0007nr-7O; Fri, 11 Jun 2021 07:51:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbxN-0008GH-Tx; Fri, 11 Jun 2021 07:51:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbxN-0001rT-Kz; Fri, 11 Jun 2021 07:51:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrbxN-0001Tp-KV; Fri, 11 Jun 2021 07:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aAOJfvBkGckeFMMsag6h6X7inbV+R7aBVR4ghmTet5Q=; b=KqVANR2J/KFqpQtkvogVvfs0uA
	+sv3rrM1CL5R/FKKLGLQUmiXZpoUjemImFi6+puAjm7UN6heJw3j0vba9u4G7qHmq+bBQ5jMT4YVE
	7wlQ0BsIbRwO+CqnGeTitF+G1TlzQhVrp7vvFPhF1S4KA8r2/5FPp4tuxvVVRmAL39kU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162604-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162604: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3909e2374335335c9504467caabc906d3f7487e4
X-Osstest-Versions-That:
    linux=70154d2f82a9058e8316b6e106071c72fcc58718
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 07:51:57 +0000

flight 162604 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162604/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162346
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162346
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162346
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162346
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162346
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162346
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162346
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162346
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162346
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162346
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162346
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3909e2374335335c9504467caabc906d3f7487e4
baseline version:
 linux                70154d2f82a9058e8316b6e106071c72fcc58718

Last test of basis   162346  2021-06-03 07:12:37 Z    8 days
Testing same since   162604  2021-06-10 12:11:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Bowers <andrewx.bowers@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Levkovich <lariel@nvidia.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bob Moore <robert.moore@intel.com>
  Borislav Petkov <bp@suse.de>
  Brett Creeley <brett.creeley@intel.com>
  Carlos M <carlos.marr.pz@gmail.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Coco Li <lixiaoyan@google.com>
  Dave Ertman <david.m.ertman@intel.com>
  David Ahern <dsahern@kernel.org>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Douglas Anderson <dianders@chromium.org>
  Erik Kaneda <erik.kaneda@intel.com>
  Fabio Estevam <festevam@gmail.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gao Xiang <hsiangkao@linux.alibaba.com>
  Gao Xiang <hsiangkao@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
  Grant Grundler <grundler@chromium.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hao Xiong <mart1n@zju.edu.cn>
  Heiner Kallweit <hkallweit1@gmail.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Jakub Kicinski <kuba@kernel.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Self <jason@bluehome.net>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Johnny Chuang <johnny.chuang.emc@gmail.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Julian Anastasov <ja@ssi.bg>
  Junxiao Bi <junxiao.bi@oracle.com>
  Kiran Bhandare <kiranx.bhandare@intel.com>
  Konrad Jankowski <konrad0.jankowski@intel.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
  Lin Ma <linma@zju.edu.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Lucas Stach <l.stach@pengutronix.de>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Zyngier <maz@kernel.org>
  Marcel Holtmann <marcel@holtmann.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Vasut <marex@denx.de>
  Mark Rutland <mark.rutland@arm.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Max Gurtovoy <mgurtovoy@nvidia.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Walle <michael@walle.cc>
  Michal Vokáč <michal.vokac@ysoft.com>
  Mina Almasry <almasrymina@google.com>
  Mitch Williams <mitch.a.williams@intel.com>
  Nirmoy Das <nirmoy.das@amd.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Skripkin <paskripkin@gmail.com>
  Phil Elwell <phil@raspberrypi.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Roja Rani Yarubandi <rojay@codeaurora.org>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefan Schmidt <stefan@datenfreihafen.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  syzbot+49d4cab497c2142ee170@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Brelinski <tonyx.brelinski@intel.com>
  Tony Lindgren <tony@atomide.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Vishakha Jambekar <vishakha.jambekar@intel.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang Chen <chenxiang66@hisilicon.com>
  Ye Bin <yebin10@huawei.com>
  Zhen Lei <thunder.leizhen@huawei.com>
  Zubin Mithra <zsm@chromium.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   70154d2f82a9..3909e2374335  3909e2374335335c9504467caabc906d3f7487e4 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 07:55:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 07:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140264.259198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrc0c-00006Z-DA; Fri, 11 Jun 2021 07:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140264.259198; Fri, 11 Jun 2021 07:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrc0c-00006S-AA; Fri, 11 Jun 2021 07:55:18 +0000
Received: by outflank-mailman (input) for mailman id 140264;
 Fri, 11 Jun 2021 07:55:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrc0b-00006K-4f
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 07:55:17 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ed82467-1ec1-48e2-8505-df0d566e4a89;
 Fri, 11 Jun 2021 07:55:16 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2050.outbound.protection.outlook.com [104.47.13.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-12-isM2scMtNW-eUyjKO_g1Yw-1; Fri, 11 Jun 2021 09:55:14 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Fri, 11 Jun
 2021 07:55:12 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 07:55:12 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR02CA0198.eurprd02.prod.outlook.com (2603:10a6:20b:28e::35) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21 via Frontend
 Transport; Fri, 11 Jun 2021 07:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ed82467-1ec1-48e2-8505-df0d566e4a89
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623398115;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=3yP0rvJv+Nzvyw8VKF6zz1e403fV6kW3ee23UwS+wE4=;
	b=Lz8APM6OUzafGhszYjfOPub01OCJY3htscK81DGsS9s3ofIccomD30WhD9AAzYYUzleZZe
	llmaXdb5DlcFhURHyO7CikWM0p0so0fAGr4heYY9GW6o8ggrdQ2ZlIvNIyf0oisg3v0ISm
	d5Ul9VSueQYlLxa113oAwZqDG1Kow78=
X-MC-Unique: isM2scMtNW-eUyjKO_g1Yw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fcAPQnLESlCO0VkmGp3KPMEuDyvHFFdPCWqmkMFMk0mZnnfnVx/teHZaKj7c7OeB6Qf8GCVQUhoAwf/9yMZG9/9ECQMBAgqM5P/e96UUNV6gFkl5suotz0pQ8+XkupNmE90HUT5RPjeToaSyWfaP2JaLXlEtoq+pDlL79PJErUvxCBk3/ib8TxRTbHA/r0SztrcryEf0LHz8OhCFpHDaGkv/qGI4skc7xjzzkzZWbNkfyB3X35Tl4BLwOTVWgxCeSWvPKSbub46RR7AUAsk2p2c3UEB2oy7+SiGIFcUBZyuJGHJStq+RgfSBqXG6G+B4NOX75erJ72FbfVh4wzGpBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3yP0rvJv+Nzvyw8VKF6zz1e403fV6kW3ee23UwS+wE4=;
 b=oT/FVskDScveZc98vg02ufDYbt0f3DBk0bcC4b7wwFHZKbCEqUPwNIeXZrHAdyc1wphOmqHtp+GR1Gz9tqqd3GWNsd5/Y1VsXXky10Pemd89NjaWB9Onhm26DtFwg8+DKptXoHDmlAtuQtuWw3v28czNP6rrb5RtBR9jhPdu4axWSR4J32BWDa2lgtJALyC8oi6n9oZbTmaXf1P1zHaEjLg0PGBV3c52Yw/ioN1xK7AlfV5GuOm35I69WDQm0fAoeJREv6OKQDhbNXpD9MWGFouOHaSjk3VpX0Gm4Xw8UTahngV2595RAmwKwbvFVKzpPI1VSw2aGbqD/DIPgIkGCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] Arm32: MSR to SPSR needs qualification
Message-ID: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
Date: Fri, 11 Jun 2021 09:55:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR02CA0198.eurprd02.prod.outlook.com
 (2603:10a6:20b:28e::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eb702eed-55e2-473e-3fe5-08d92cae3d61
X-MS-TrafficTypeDiagnostic: VI1PR04MB5600:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5600B4A35155415E8F0F59A9B3349@VI1PR04MB5600.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	A2/fax+4MTWwT1qNJMIO8aBg/2CCIpJ3C/vvJwSiQvxftvtmdQ/C+l7aVxgRUOFETj268S4MxfoRe4cORoiIASH8mtG0rYQfF9b2hnQknAFz9eo7idZe4emqi/0+qBjvmWFZqnNRJRGIOwMI0UDpkx4FMHF6DRXOQdHEhMgWCKJ61d2zZJxNVpOQ7W3wklkUXOBrd9mJCAEVs+uz9NOAgtAqZ31qL5nO9bWVuxlPMyRoeiwjuAXF1SxSXpQmJAa5u4Aam4Gc/DTgO50Uys+nG7eAGaXaFTU87WZXpy5I8l+93sjd+X/HWyXx6zgx4jQk7GCOQ23rjbnqI02iggQNF02EoQD02QLrwW0Y/geZ6SAjLQZeRJSn1MMkvhDbxoQSXr/2js3QnRVekuzwnCLWl3wIn9SMW7i6cvVLY3LsstFzD2qPFpUcCNPidPmcAMIGRbfn3PMHE61tqMsdAom+uapgwQMlghG6YuDTKuK9PGyp2ToC7Fr5xTGxeztysrBDbEh39R7uuAmYcR4iYBbbXewkcmnFnw7K0zLtyMs1Aai6Rul8euQ4Wkmv8yN+t5Ntjb+gGs+XsJaaoEVYPH7upPQTOt+DRcMjyiqbVw5TD348ejGAbQDKW1x1rYU4YetiIU7EOKBOxUufnWQVPnJEVyqA9aDvCNMB0LTp+H5TA0Rwbcoi2ZnXX1xf5BiOCMjl
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(376002)(346002)(366004)(136003)(39860400002)(38100700002)(2906002)(8936002)(66476007)(16576012)(5660300002)(478600001)(316002)(2616005)(86362001)(186003)(8676002)(6486002)(66946007)(36756003)(26005)(16526019)(4744005)(4326008)(31696002)(54906003)(956004)(31686004)(6916009)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dy9UYXFZMC9JNjgzUTNJVXpmYXJXT3d0L1NzNDgweDUrVjBMOVJyYjM2Wkp2?=
 =?utf-8?B?OUIzdmlrN0NpU1Q5ay9WRjgveDRwQ3oyeG1VUDh0bkhrcUlaYXFRNlVjQW4r?=
 =?utf-8?B?dmJhY1dwT1RvNSttZXdkNE16MzRjc1Q3RjRJSXJoc3VweXpqNndmbE1wK3pv?=
 =?utf-8?B?VjZFVVQrN0NlcGsrSXNIdlBoblRkQ3l5b1FxUUFvKzQzNGx3d0crUXdCMXlS?=
 =?utf-8?B?NnBEMW90NmNMeXdmbTBJSXhKYm01clhkWWNDdmY2eDFJOEtXYXpZSjM3bnE2?=
 =?utf-8?B?THNUL0JkUi9PSk1WY3BydHN6OC9JUVgxdndDSVFNb3JLVFMzRzFreWtnWDVQ?=
 =?utf-8?B?bDNSMDFiSElKQUUzTk9zaUMrMFhiUGRZZStDTFZVTW9FWWhQZ2FzaFRqc2cw?=
 =?utf-8?B?OUVidkV3UFYzSW43VTNraWxtbHMxbXMxRVFGRk9MalNJeE5Eazh1NGFCQ3dH?=
 =?utf-8?B?cFNhU3hmN20xZVlZWkgwNkdkVXM5YWQ3a2FKUUFEVjhrSHlQVUs5NVRmMG9R?=
 =?utf-8?B?OXBSUnp0NkxkSUZObXV3WUNPSE9QSnk3MHo3aWxvL1pIZlVZQ0IveGlwYXhB?=
 =?utf-8?B?SG10akxoai9PUzVhVHRTWC90cXA4UGYzYk5kZ3BQN0pGc2lxV2ZmYnEwUlBY?=
 =?utf-8?B?WGRnMDRORGxTUjdDb0p6KzdsOWsrWEp1N0wzN3ZSQzllTVppOTNrVFhNQnBx?=
 =?utf-8?B?ZVJSQUxVcjlWMVhsRFI0K0ZCQnh4Rm5BWUJNVEpicS9DV01rcnowYnI4RGY4?=
 =?utf-8?B?dGYxbjBGRTF2dVpvMVAySENHZUNWNEh2RHA3M2s3b2lMNFZ2NlF4RkkwZGlS?=
 =?utf-8?B?SXFHS1R6U0ZsRERPWkJqKzVaelNObXFaT1dvZFNTVDVDcDJPUFJ3eE41YzEx?=
 =?utf-8?B?Q2hScmk2cG5pMUxBVXlaS2JWZVFaTXQ3amZleHFTcmtnVVNCejloMGYwOHJa?=
 =?utf-8?B?STdsQ0t5bnFtK3YwSmgvSTNOVVNoVWZxbWpROXlnQUhvVFVBTWxjelROUjNp?=
 =?utf-8?B?dE13OFZEV2V5cWQ2cjhsWnp4UHdTbUFFaWh0dFl3eDNKUW44UUZCV25HcUxF?=
 =?utf-8?B?UzZuczJ5UFFleWVKekRKUmxaNnc3K3BYNDduWWxTSUZ6RWh5c0NCcHMyekRu?=
 =?utf-8?B?VG82Wll3bmx3RjlOTHNvQ0NPL0ZmSUZySW95R0kwNnI4VDdHeU1Id0hVZ3VY?=
 =?utf-8?B?c05sRkNXd1dlMUp5SDdtN0ZJbXJxZC9Sc2Q3OE9qdUdPdEFKbkd3U1ptODlT?=
 =?utf-8?B?dG9iTXhiaHNDZldFNFF4bzU4U245MlBYTkt0K3dLRW1mOFdXL1o3NU9uTmV5?=
 =?utf-8?B?RmhGMUNCTGdBSTBlYmpUdVFDdW1KRmlBTnBPbURLajcyU0cyUy9XTkJ3eVN0?=
 =?utf-8?B?d0h1cnNKSXZkZm9LOWxLTjZWbG54TTBQYnFTVGZiTEpEem9HNVQ2ZjRCSkJC?=
 =?utf-8?B?c1QraFE1WnBVemJnWkVlejB3NmF6T0E2WEZPL3RTdlN4VFJDdEhTYlF2Qk5r?=
 =?utf-8?B?bzAzWEprVG1nWmttZEtqaWFmNW5OUG1tenJ6TktRRHV4WlljT3krK0Yva3BJ?=
 =?utf-8?B?Zi9zcHpUMXJUNytqcHhyL2RQdFlZTTJrWEJzSkVRTlZrbFlOVE9jMmZiQTY2?=
 =?utf-8?B?S0lacThieU5zRWdRTUplOWZsSXBCOWJYRkUwVzNIZXNKNG9GNUMvT0hCOUw0?=
 =?utf-8?B?aGMwMm9SUSthOTZJcUNXazRVSDdrNUNyc1JUYkJ6ak5nQVEzdEpyQkxPQXQw?=
 =?utf-8?Q?FbVSb2gT0EC7TvUn6ardeFRJDCymfC84bUruSd6?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eb702eed-55e2-473e-3fe5-08d92cae3d61
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 07:55:11.6610
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: y+mRKdhl6f4Ar2msW6SwLMGGnFGqthtYjn7SzIRM9Vp3OfdwYbBIQooOXXULrdODKeSwlNorTD11WzBFrHy3oA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5600

The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
surely all of SPSR wants updating on this path, not just the lowest and
highest 8 bits.

Fixes: dfcffb128be4 ("xen/arm32: SPSR_hyp/SPSR")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -395,7 +395,7 @@ return_to_hypervisor:
         ldr r11, [sp, #UREGS_pc]
         msr ELR_hyp, r11
         ldr r11, [sp, #UREGS_cpsr]
-        msr SPSR, r11
+        msr SPSR_cxsf, r11
 #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
         /*
          * Hardening branch predictor may require to setup a different
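[Editorial note: the effect of the one-line fix above can be sketched outside of assembly. On Arm32, the `_c`/`_x`/`_s`/`_f` suffixes on an `msr` to SPSR select which 8-bit fields of the register are written; per the commit message, gas takes an unqualified "SPSR" to mean `SPSR_cf`, so only the lowest and highest bytes were restored. The following Python model is illustrative only (it is not Xen code; the field-mask values follow the ARMv7-A PSR layout):]

```python
# Illustrative model (not Xen code) of which SPSR bytes each MSR field
# qualifier writes on Arm32. "msr SPSR_cf" touches only the control
# field (bits 7:0) and flags field (bits 31:24); "msr SPSR_cxsf"
# writes all 32 bits.

FIELD_MASKS = {
    "c": 0x000000FF,  # control field   (mode bits, IRQ/FIQ masks, Thumb bit)
    "x": 0x0000FF00,  # extension field (bits 15:8)
    "s": 0x00FF0000,  # status field    (bits 23:16)
    "f": 0xFF000000,  # flags field     (N, Z, C, V, Q, ...)
}

def msr_spsr(spsr, value, qualifiers):
    """Model 'msr SPSR_<qualifiers>, value': update only the named fields."""
    mask = 0
    for q in qualifiers:
        mask |= FIELD_MASKS[q]
    return (spsr & ~mask) | (value & mask)

# With only "cf" (what gas silently assumed for a plain "SPSR"),
# the middle 16 bits of the saved CPSR are not restored:
assert msr_spsr(0x00000000, 0xFFFFFFFF, "cf") == 0xFF0000FF
# With the full "cxsf" qualification, every bit is written:
assert msr_spsr(0x00000000, 0xFFFFFFFF, "cxsf") == 0xFFFFFFFF
```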



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 08:00:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 08:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140275.259208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrc5Q-0001vs-85; Fri, 11 Jun 2021 08:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140275.259208; Fri, 11 Jun 2021 08:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrc5Q-0001vl-4W; Fri, 11 Jun 2021 08:00:16 +0000
Received: by outflank-mailman (input) for mailman id 140275;
 Fri, 11 Jun 2021 08:00:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nlzc=LF=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1lrc5O-0001vf-Jr
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 08:00:14 +0000
Received: from mail-ed1-x530.google.com (unknown [2a00:1450:4864:20::530])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d60073ed-eb01-44c4-8d2f-70995ecb5c53;
 Fri, 11 Jun 2021 08:00:13 +0000 (UTC)
Received: by mail-ed1-x530.google.com with SMTP id d13so22552970edt.5
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 01:00:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d60073ed-eb01-44c4-8d2f-70995ecb5c53
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=0R7E4T9d2tEv8nooyR5gAHHyWAHViu7nEx7MS5YEQDU=;
        b=YlLAmwt/KRq5P0Yjbeb1nBSdFXr1Od49TDDjew0HO4BGBgvy13YrE5p4k1dVEL2Vtx
         N3Ls/2BT95qHUjPD74OX56OzB56Mww3cukaRgOU8ycoOffwZ5fT1ATnTRb69TDmCtUAg
         pclMJYkSLjNNEz6Vxj9hCUWzQOctoV0fw8PAsHPUsK5cp6muzf3gFGLRixxLdJGTaRI4
         jT6SvbuUMhqfdKY1eUpHUmCYuugQKOsGULKD/Gcn+wQtT1xRyVOoZCDzTFv5lEgPJvKC
         TKn42kbioWHQDRHpZxoIDVJ6dJbLVSz22Ss39djfOlN2r6tUkQnqfn14m4FIVF+nzRij
         7iOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=0R7E4T9d2tEv8nooyR5gAHHyWAHViu7nEx7MS5YEQDU=;
        b=lSox8zWJMAAiLnYFHrNDpVA5Ja/WJ0sugOEOP0335B+r5ZYALPXEMUY6ee3DNE0Koh
         fqt2mmur5IV+r+T3Vn8RIPuzAl2H/oKo2oexC8a3cb7C0XM9/E2inowacZadqHBR0jhM
         +Otn6pUnSxEAcpcx3tR2OfvHGLlJU24BosVZqyA+bR77DRt1NfxsPUn7cncwSpXDE4fF
         TodzVBiZ4/sZDIdgIe6LE8Bc/yPhxUZJl3g7OpgMBCp2nkDMM0eSz3FeF2yvO7KmnEmk
         CtZsHWba8PZn3tnwkU7Y2aSzBphb3CYtNsWStkYNBcVbLkvtu3pVEGuldXbYOXwDrWT5
         +v+Q==
X-Gm-Message-State: AOAM5325b7UqHIzs5ZJ0DR9EZ3lbNmskbk1oyFRyDEkm777GFjnLVF82
	cZwPWe+E8fhS0mOwCobPAvFJoXkCPJiXwldtx9A=
X-Google-Smtp-Source: ABdhPJxBUd64j58LzHObKQZ4tDynm4VV4XWl96/NiPmL24BAnFlndULqvHBb0nmGr9cX+igiGtFROvGQ08jDwSDNqDc=
X-Received: by 2002:a05:6402:2789:: with SMTP id b9mr2254592ede.142.1623398412973;
 Fri, 11 Jun 2021 01:00:12 -0700 (PDT)
MIME-Version: 1.0
References: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
In-Reply-To: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 11 Jun 2021 09:00:01 +0100
Message-ID: <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com>
Subject: Re: [PATCH] Arm32: MSR to SPSR needs qualification
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: multipart/alternative; boundary="000000000000bbfa0f05c478e559"

--000000000000bbfa0f05c478e559
Content-Type: text/plain; charset="UTF-8"

On Fri, 11 Jun 2021, 08:55 Jan Beulich, <jbeulich@suse.com> wrote:

> The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
> here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
> surely all of SPSR wants updating on this path, not just the lowest and
> highest 8 bits.
>

Can you provide a reference to the Arm ARM? This would help with navigating
through its 8000 pages.

Cheers,



> Fixes: dfcffb128be4 ("xen/arm32: SPSR_hyp/SPSR")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/arm/arm32/entry.S
> +++ b/xen/arch/arm/arm32/entry.S
> @@ -395,7 +395,7 @@ return_to_hypervisor:
>          ldr r11, [sp, #UREGS_pc]
>          msr ELR_hyp, r11
>          ldr r11, [sp, #UREGS_cpsr]
> -        msr SPSR, r11
> +        msr SPSR_cxsf, r11
>  #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
>          /*
>           * Hardening branch predictor may require to setup a different
>
>

--000000000000bbfa0f05c478e559--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 08:43:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 08:43:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140287.259244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrckz-0006JE-Ns; Fri, 11 Jun 2021 08:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140287.259244; Fri, 11 Jun 2021 08:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrckz-0006J7-KH; Fri, 11 Jun 2021 08:43:13 +0000
Received: by outflank-mailman (input) for mailman id 140287;
 Fri, 11 Jun 2021 08:43:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrcky-0006Iv-KW; Fri, 11 Jun 2021 08:43:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrcky-0001Dh-Cg; Fri, 11 Jun 2021 08:43:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrcky-0004eH-4u; Fri, 11 Jun 2021 08:43:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrcky-00059X-4O; Fri, 11 Jun 2021 08:43:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TTCuNDAdGZjjEGseCR6AVVyBIMzpAaAGS5610agTPq8=; b=o0TemaZczxj1XYgGo/khudAe0p
	Z7X/DijTr93iTbZGNGBo+T6EcIq6muOEnusLT+EBdHKCcn6oEsJSfup3eFqdbd303FNieo9PwHzMY
	K8lruVal9DeZKeKZ66tDQzpqsEANufydXBUaISuGiHExuSKE4VquLDmJ9u6pKZMV21Zw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162632-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162632: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=2a51ff7b40ac7ed81d9244120716e1fd38371572
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 08:43:12 +0000

flight 162632 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2a51ff7b40ac7ed81d9244120716e1fd38371572
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  336 days
Failing since        151818  2020-07-11 04:18:52 Z  335 days  328 attempts
Testing same since   162598  2021-06-10 09:09:48 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61044 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 09:16:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 09:16:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140296.259257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdH0-0001Jj-Gz; Fri, 11 Jun 2021 09:16:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140296.259257; Fri, 11 Jun 2021 09:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdH0-0001Jc-Dr; Fri, 11 Jun 2021 09:16:18 +0000
Received: by outflank-mailman (input) for mailman id 140296;
 Fri, 11 Jun 2021 09:16:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrdH0-0001JV-1u
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 09:16:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 003df88a-39a1-4007-98e4-ddcaf689c4e3;
 Fri, 11 Jun 2021 09:16:16 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2053.outbound.protection.outlook.com [104.47.14.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-9-gObRCzchPne2AjneFU1o3Q-1;
 Fri, 11 Jun 2021 11:16:13 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3120.eurprd04.prod.outlook.com (2603:10a6:802:e::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Fri, 11 Jun
 2021 09:16:11 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 09:16:11 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0070.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:b4::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Fri, 11 Jun 2021 09:16:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 003df88a-39a1-4007-98e4-ddcaf689c4e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623402974;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RA/siaUq5mzl60nL+m+UPGxrGbDmbfXqDnjrfT/8wF8=;
	b=dkQm5ppEA6BVlS1fepTVAIBVo4Yuqw2XVxy6qmvssltXp4dK1hP+x36hx9ZtZJ+Muny5sE
	wAnq5HotwP78X3HJ9+GKHqShvkFcUP+FK3PnKTZUgjjkLLMkBn9sagY/GUwUsBHZ71/hhE
	hPU5qHjEQSkDHSf7S4OLRSsibdvSF5o=
X-MC-Unique: gObRCzchPne2AjneFU1o3Q-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HIB4FA09p76fXcqjf7fXWqkS2haaYWrqBJe4xlDv4bndTStGSTEm7qQQ1iafqn/yPQUOV+J9xkokx3Gxzjg6twDhhB5i6Ow06EHh9yQ3eU9ldrnLt8ISfM7gF/Lv9gSOg6nk561j5Ah1FH4Icb+RWYyl1aWnxDBY5ZxV/Si9Fnxbk03V309WhpiXCN43Ig6aGe84sHOnCPOl1kkZXzZrLeb9Yv9wa9z3+SK7Hhx7J7nXj4UchOZF3NieefZbifEbrj9L6zr3LkEpRxeBj2z1R48oCjW2HiYxI+2DqsGfffAVfK/oIQ9s8pvtlBr6oKyai4+NbcXMJ17xZt0Qx2u92Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RA/siaUq5mzl60nL+m+UPGxrGbDmbfXqDnjrfT/8wF8=;
 b=VZf/ki4PY3YfaRldFBwA4iOhf0M1Dz6ECPO01xNpkVoP2FgCE3W6urYz7bBaFSeYnvFpjN16R27NNpmVeVbrF2xtRxOThbJziJWPE0WZy9NXB8k0nvtJUh9457rS6QgrEh13oDKQN0oM1zkxcxhaqe/G7o3BhDI0+FAqZZ6ER9eB4IvhH/hg+RmWAeQuJ3efnruF6nxb+zeuzL8Yj94AjDxMwVm4pkLBJoRAO+CVTISQR7iFbKo0FYfsaeT705x3H/wBznhflLmJVqEXVC6mSgppY0tj5bwejx7iTcfcArtScoBJP5wekyAMsycal+GAFhiq0i2vTKMyKSzcThj+fA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH] Arm32: MSR to SPSR needs qualification
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
 <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <af2f231a-5130-8e5f-b024-04f74e57d1ad@suse.com>
Date: Fri, 11 Jun 2021 11:16:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0070.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c1bacd2a-0938-4ef0-cc02-08d92cb98e0c
X-MS-TrafficTypeDiagnostic: VI1PR04MB3120:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB312030C02EB4206E6D37BE71B3349@VI1PR04MB3120.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c1bacd2a-0938-4ef0-cc02-08d92cb98e0c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 09:16:11.4799
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0Beh9YUSIOSYZ3KLnjzkCgFcSo/AaaG+X5R82yf7ioj/LT+D1auG8vkjoHfWEI9IIms146ZCjaAXfgXccE++3A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3120

On 11.06.2021 10:00, Julien Grall wrote:
> On Fri, 11 Jun 2021, 08:55 Jan Beulich, <jbeulich@suse.com> wrote:
> 
>> The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
>> here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
>> surely all of SPSR wants updating on this path, not just the lowest and
>> highest 8 bits.
>>
> 
> Can you provide a reference to the Arm Arm? This would help to navigate
> through the 8000 pages.

Referencing the instruction page would be enough, I thought (as
even I, not being an Arm person, have no difficulty locating it).
If it isn't, what is a canonical doc ref supposed to look like on
Arm? On x86, we avoid recording document versions, section
numbers, or even page numbers in code comments or commit messages
(which isn't to say we have none of these, but we try to avoid
introducing new ones), as these tend to change with every new
version of the doc. Therefore, to me, the offending commit's "ARM
DDI 0487D.b page G8-5993" doesn't look like something I'd want to
clone from. But if you tell me otherwise, then well - so be it.

Jan
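For readers without the original patch hunk at hand, the qualification being discussed looks like this in gas syntax (an illustrative sketch only; the source register below is made up, not taken from the actual patch):

```asm
        /* Plain "SPSR" is assembled by gas as SPSR_cf, i.e. only the
         * control (bits 7:0) and flags (bits 31:24) fields get written.
         * Naming all four fields updates the whole register: */
-        msr   SPSR, r4          /* gas: effectively SPSR_cf */
+        msr   SPSR_cxsf, r4     /* control, extension, status, flags */
```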



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 09:19:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 09:19:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140302.259269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdJx-0001x1-Vt; Fri, 11 Jun 2021 09:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140302.259269; Fri, 11 Jun 2021 09:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdJx-0001wu-Sr; Fri, 11 Jun 2021 09:19:21 +0000
Received: by outflank-mailman (input) for mailman id 140302;
 Fri, 11 Jun 2021 09:19:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrdJw-0001wl-Ao
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 09:19:20 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae846314-eb8d-48c1-9257-3b79dd6cd847;
 Fri, 11 Jun 2021 09:19:18 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2053.outbound.protection.outlook.com [104.47.8.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-dJbJ-qPhMb2OlkGPv3_6_w-1; Fri, 11 Jun 2021 11:19:17 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7038.eurprd04.prod.outlook.com (2603:10a6:800:12d::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Fri, 11 Jun
 2021 09:19:16 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 09:19:16 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR03CA0095.eurprd03.prod.outlook.com (2603:10a6:208:69::36) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Fri, 11 Jun 2021 09:19:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae846314-eb8d-48c1-9257-3b79dd6cd847
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623403158;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=lKEcglE5r0lHaY0ElKUC9DDU4DNKyO6COr+pggkSwLY=;
	b=KuqnQfFDFIYgOMRiInt2LVQ9LeGdgdigeLiBJ6jcaS+ZNC96PSP9nXbxFKO2la121dUTgR
	biRxiZKBHfzrivwOcUl4/ayIEA9Ae2sCHjTAF8LF3JLJRPBmC+WhsvkG0pHoWGVGbkkUpt
	/Rma/hxBGGQ4YNBpNxnG7VDes9ril/U=
X-MC-Unique: dJbJ-qPhMb2OlkGPv3_6_w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lJsn829xi1UVDXQxOw/Tt+q334wIwpT49Z/GBnklr2FwoMVjNi6iOZ2SbvTUBbakZZ2rkm3q/juC5Kd1Dl68C8d9rX2AHv8ZXdkgZuqlB9evPEvsDyiIimKrZQeUus/204SefofXk9nwI6IDS33O8AewFRg0m7kxJiHBHD9ZZubUts8kt6g2pXDZAam7v0jf23uEb0cKGBXcAI5AvrFNp20BMr9XoJ2AeeaklQeRlK0vM3nC1t1mZnCIQl/E6qcPWcMv8a27ZTGx32Qf876d9rbLSO/hShVu+sLz1eQH7trARAHt6zPEUQ3rzv3YYLnXoCi8GaZa1HJOCdSjv/IKnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lKEcglE5r0lHaY0ElKUC9DDU4DNKyO6COr+pggkSwLY=;
 b=CT1OPRtdzhQp8B1YjYqkJDD64Xg/3JDuVW1sPXFDrxISkqK9tj4Th0PToyIukZDWU3j8uQDsnPqdT/kMEWHc66pKfkLmyLf4YWaumqwCxetyT3dR+cDMpfU3Op4L2BoyAemCyUoMCQyUBAsmII0kcO6s2y0F1tftBj+C1c2KE5YKbs2BTbgVQoommYWqrC4QMKfJuiICW0YmJHhxvEZUxUyfsg+G+CYJcJIc2zF/uy8QU2Y+kxSwvSXZ+i0UdeIxTJUb2CbbKEhOV70raeHsDigaD73uH5WWtPSLoRZH951SZcP/6r/pjEJWtTofC/Or8UjM9AFDYK7AnKRzcZ/0sw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] Arm32: avoid .rodata to be marked as executable
Message-ID: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
Date: Fri, 11 Jun 2021 11:19:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR03CA0095.eurprd03.prod.outlook.com
 (2603:10a6:208:69::36) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 93388112-e2f9-4630-ea58-08d92cb9fc1d
X-MS-TrafficTypeDiagnostic: VI1PR04MB7038:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB703856A02732033C1F0EB435B3349@VI1PR04MB7038.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 93388112-e2f9-4630-ea58-08d92cb9fc1d
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 09:19:16.2225
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2v05AwhOLPLro/NWMGvkXFwm2Gy9VvtGwkDQuC863dpb2sUYS7+zaX8IyjyWC8NDC+Mc+BAbR0C7rzJi17HQHw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7038

This confuses disassemblers, at the very least. When this data still
lived in .init.*, this probably didn't matter much, albeit the
"#execinstr" would already have been suspect to me back then. But at
the latest with the data's movement to .rodata, these attributes
should have been dropped.

Fixes: 9cbe093b7b84 ("xen/arm: link: Link proc_info_list in .rodata instead of .init.data")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The PRINT() macro in head.S is another source of such code-vs-data
confusion. While head.o carries mapping symbols guiding disassemblers,
those symbols are gone when looking at xen-syms. But I realize adr's
reach is too limited to allow for a halfway reasonable approach to
moving those strings (e.g. to at least group them all together).

--- a/xen/arch/arm/arm32/proc-v7.S
+++ b/xen/arch/arm/arm32/proc-v7.S
@@ -29,7 +29,7 @@ brahma15mp_init:
         mcr   CP32(r0, ACTLR)
         mov   pc, lr
 
-        .section ".proc.info", #alloc, #execinstr
+        .section ".proc.info", #alloc
         .type __v7_ca15mp_proc_info, #object
 __v7_ca15mp_proc_info:
         .long 0x410FC0F0             /* Cortex-A15 */
@@ -38,7 +38,7 @@ __v7_ca15mp_proc_info:
         .long caxx_processor
         .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
 
-        .section ".proc.info", #alloc, #execinstr
+        .section ".proc.info", #alloc
         .type __v7_ca7mp_proc_info, #object
 __v7_ca7mp_proc_info:
         .long 0x410FC070             /* Cortex-A7 */
@@ -47,7 +47,7 @@ __v7_ca7mp_proc_info:
         .long caxx_processor
         .size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
 
-        .section ".proc.info", #alloc, #execinstr
+        .section ".proc.info", #alloc
         .type __v7_brahma15mp_proc_info, #object
 __v7_brahma15mp_proc_info:
         .long 0x420F00F0             /* Broadcom Brahma-B15 */



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 09:37:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 09:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140310.259280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdax-0004D6-Eh; Fri, 11 Jun 2021 09:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140310.259280; Fri, 11 Jun 2021 09:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdax-0004Cz-Bi; Fri, 11 Jun 2021 09:36:55 +0000
Received: by outflank-mailman (input) for mailman id 140310;
 Fri, 11 Jun 2021 09:36:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrdav-0004Cm-UE; Fri, 11 Jun 2021 09:36:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrdav-00029v-P7; Fri, 11 Jun 2021 09:36:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrdav-0006b0-Hu; Fri, 11 Jun 2021 09:36:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrdav-0008DB-HJ; Fri, 11 Jun 2021 09:36:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4+Ggp4Uh+X6lr0GimNCBrQqIURkpYd+W7sKT6tAXhUs=; b=srqui58OzNXWyfYYhJv/roNp59
	oNM6fw/AXxuF+zWLF/QnH2kTcuk54/SuV4gh0LBRK55qRd24jrb+0rX+mwFP5Syo0hHYVq4oscrav
	jF7p2It73l9cEPP+6+f7M+S2lLPxlKXuZSjKgHeKIUhUpULgf/Bs+++mOVzDTfxV88s4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162636-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162636: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2bb17a45b1814b0b6aa4646eff58e16f876281fd
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 09:36:53 +0000

flight 162636 xen-unstable-smoke real [real]
flight 162640 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162636/
http://logs.test-lab.xenproject.org/osstest/logs/162640/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2bb17a45b1814b0b6aa4646eff58e16f876281fd
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days    9 attempts
Testing same since   162607  2021-06-10 15:00:30 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 09:39:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 09:39:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140319.259297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrddR-0004vd-4s; Fri, 11 Jun 2021 09:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140319.259297; Fri, 11 Jun 2021 09:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrddR-0004vW-1Y; Fri, 11 Jun 2021 09:39:29 +0000
Received: by outflank-mailman (input) for mailman id 140319;
 Fri, 11 Jun 2021 09:39:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrddP-0004vQ-My
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 09:39:27 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fde5dec2-cbdd-4450-a2d5-0021c32a2572;
 Fri, 11 Jun 2021 09:39:26 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2050.outbound.protection.outlook.com [104.47.14.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-25-enljtwdaNdeLgcMk78rXlg-1; Fri, 11 Jun 2021 11:39:24 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3933.eurprd04.prod.outlook.com (2603:10a6:803:24::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 11 Jun
 2021 09:39:22 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 09:39:22 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0112.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:19::28) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Fri, 11 Jun 2021 09:39:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde5dec2-cbdd-4450-a2d5-0021c32a2572
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623404365;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=5tcyDQtqV243Vt1Pu4MZc3Wx8qSzPD4ZzLi6EoT2Pmo=;
	b=QcJQtAxxh7LMdnNMvMG7pzqOWLopVzB9xK4SdHWAW9kbvRKyMxLPV1LdRJHBgKXiZqzX72
	9fDVL08M6kGyh1vMUWw7emXUK3eLF21iC/VYcoC0va2hRULkHKWxFGY5HS60Ou0Q2SxzBx
	KYAd1QnPET+SjakRm1wwe9YXDvEBpUc=
X-MC-Unique: enljtwdaNdeLgcMk78rXlg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GcfAy0q1nbDEoq7+LPviz3cpfGEekELTZGCVL2T/rgBIDhjPQBSwxG3Bn2yPE/IhCtWsUDrWw1EQOUgd2DCon35d3Q54OGG8O3U04at/6rDEVZYkWLnTilU85CV+9UTQ8yeofqY+Alw+yIT/C1q0B6EylMoSyiPaZkKcNRok32WrsPaJXKPtrGIG11xNVt8g6d7a+WzDuVtH2+tHY+k8MeIJLh0K562BweTUfqgOVqOCRyFMf7nksSi164ONmS1bSIpIDTIHvEaefeQ4cO71keXjtHqdNKErZhURbT4X4l0869x8/U382Mp/A0TUHbst/JTUoOItm6ySfWh3UwF+5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5tcyDQtqV243Vt1Pu4MZc3Wx8qSzPD4ZzLi6EoT2Pmo=;
 b=mBwroHljrUJAN69Mq04M+Rw3/wWKn5/5nBmaN3554EbjkAUViUqW1FN3wHLk9X48WZrFicMQRsNLAWWf3vzpPYjE2bHiNSku/3E/MXGQ5cxAduTkUPEITNPtfiJnWeTnkiDKVfIV3+3iENsII2MML6SfB9bcGYigkKFUDKn0IwJznEL/XkCrertEznf9Y8oqzPLjYSSdGddeW/BmtipesUp2g7iRnLxN3L83Btdh/Wxk8eQdDM0vjcXMYjtyh2mXvYX9wO3tgZ8HvjqUD47ey5FoNUJVdqoZdueXccHIwqnu0DW+sQKZy65jIs8zEFQcv8jWijCOhnx7bvVjP9ThYg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] Arm: avoid .init.data to be marked as executable
Message-ID: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
Date: Fri, 11 Jun 2021 11:39:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0112.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:19::28) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6e8b4353-761f-40ed-d1bd-08d92cbccb5b
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3933:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB39338FB33B0A46CC626749E5B3349@VI1PR0402MB3933.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6ao2bmoqV+PhJW+nZDP2uUEvfGRV2xXSrrq+OX1RL/bFizkiV6zsjnutVQxI8QGZHAMZUsWJ9jKNKiotllVFscMjZUGzJMA3HRgWJmqTzOXZR8t0zgUPMDDyAyhXPUAxbkF8B4TyHSKuHkeVPbGnJggTDxo0EXamTHCjcESlCJ+gvMgzneNaAZBcEVt2qsdcZka6EzJIATK+zs2vbErEabhrXtZtny2SVS/26yh39ox8AxmIXi7GgZH/2lczEO6acFHKsjOQjZtQ14jMxq3Pwamra8MBMjFkAB7jZZp7m5OXIB4opm02gKOIqECZQE+RjACwgCBRTb0hZH8LmmMt3VJya3bkvFYt9x6BD5+kI7h5QY5uoAbF5jq4VxpEb3ZlBVRHFLKw4Id9DD8MsZWUbFGmA03sLddv5/vHhqh/tngE54K1E6GhEz2aCxMnSMMx5hFLta61QFK/9brnbOQwsvW7JJSf1S2/bEmnngviedMNoJ4ZEU3bofPM5m1hK9N9sqcWsnmpKONLgwU6BMG4hOGvrErjCqaUlLpJ7AgM2JsDhaaFG5T65GVOzDc8R/KAP3wGUxCjWXDvPfxKONg8uYKsgBNDlcl5UmmiaaibXe4Rq0ZC6mXjA6ysQ1/sczQf8QdDNnV2Zk2eslPsw3R26/wNJJlw2HYROvReOJ/1ejf+F1hzzYsC8ujqU8I+sWReMJyMSEurI0SWX2rzc3FhOw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39850400004)(136003)(366004)(346002)(376002)(86362001)(31696002)(66556008)(2906002)(66946007)(478600001)(26005)(36756003)(316002)(54906003)(16576012)(16526019)(2616005)(6486002)(6916009)(4326008)(956004)(83380400001)(8676002)(8936002)(38100700002)(186003)(31686004)(66476007)(5660300002)(142923001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Q3RYZlptdlZ6dUMxeUJsQzZTNGZPSlVWbW91NXRiWXVIYU9LNExqY2JMZ2Jy?=
 =?utf-8?B?WUpYNE00TW0rZm1aYlVjbXZOQ3JhMDNNNDJGaHdyOVdzYXcyeklMTUpXWlRO?=
 =?utf-8?B?SHptS3hpdVlwdFlydFl0bmdxdEdwaXhrbzdoQVVuSkVqTE5vNW8zZTVCK2po?=
 =?utf-8?B?dEdpdXFNS2Q3aXBzYkZudnMvS0dqN3o3M29xU0R2MmJxQ2lyVkxFQk9qSEFy?=
 =?utf-8?B?QlpUTWhhS0VOa0pJV3JaQ21scVI4RER2KzEzSzViSlVuYlNoLzFCU2NUMmxX?=
 =?utf-8?B?VzEzdStFSHR1OElhWmc1T2NCeVNLMW9xZGNEL2FGYU5wT3N2OXUwTUdkRzlU?=
 =?utf-8?B?d0xrZkN2TjFFam42ZnlqOXg3T0hQK0VYY1JJa05nQkZkY0ZPbG9IQWNYYzRN?=
 =?utf-8?B?c3I5V2dOWDluMUZ5T3BOZTVCNHR3UnBHa2ZVM2dUZ1B5NkhFcDRpRFV4Y2hQ?=
 =?utf-8?B?VkoyN3M1cy84TCtKNnlWUXdtdHowZFFWYXhuVXIxSmdGN3VVQ0lreXdoRmN2?=
 =?utf-8?B?MVpYMzFDSUJhZUFlZVlIS3ZRVGFJOXdCbkdIakdwNGJ5RnhNQXh4MTB3WGI5?=
 =?utf-8?B?cFNBVFRvM3JMVzh0dFg5S2VXejBVQmkzenhjbzhGOFpTci8rR1dGZXJsdUNH?=
 =?utf-8?B?Y0IzajZtSEpzYzloR24wajVUZyswNWV1VUV0cmFHNDBzRVR1Ni95SFhVaDhv?=
 =?utf-8?B?enZUdzM4RkR3WU9ud1k3d3VnWU9sRE1WRFh1dytUQUJ6Qjdmcklwak9CSVJL?=
 =?utf-8?B?aFExVVRFVEFzaXdoSTV6QlV4ZHZJRzBBNnhjaEVTdTdhQkxYditxZ01UUEll?=
 =?utf-8?B?V1I3cGFRcGVoL3JFOHlBc0V4Ni8zcVRMZVQza0dxT2cramtBTVpqQldCM2h3?=
 =?utf-8?B?SXNFK3QrOHZ6a2hvL3dRSTkvVHFYQUROcHl4Vmk2U1kreS9iQmcxRHNUNmRw?=
 =?utf-8?B?Q1ZpaXJqTjRuYnI5NHpJalU4VFBVWURyZjVlWnpkdnQ2bzdQQmpiMlpxc2k0?=
 =?utf-8?B?dWdXREhrZ2lOYnZmc2JmQVJTSW1CMThIT0tWNHVJbFl2R2U1YkFMZ0VQZ25K?=
 =?utf-8?B?aE41RExmYTBZYlpQUS91R2p0RFlqb0dibjJEM2p4akZsUDF1L1pWVy9mZG1X?=
 =?utf-8?B?Zjg0RHA0aXovOS9zT3gvUWJodGRZYzJYL1RHVlVua1BzSHZRSzVOTEE0dHlX?=
 =?utf-8?B?RXdaRC9zTFJMaTJjYm1RVElROW9hWUtDM0FpeS9wWDg3VENaUTRtU01iYUJW?=
 =?utf-8?B?YVBkb25yOXZEemlKcjA3aVNiOU1RWTZHSnMxS3RWWUxpT3BVTkFTVHI0MENT?=
 =?utf-8?B?L1FlOFlYYWZ4RmE4U0syZVRmNnYyZmQ1OVZmYXdQc0RESnZ2ZHlPeDdLTFBm?=
 =?utf-8?B?ZTRZRlhVYWJWdmlUMEJqL1ZxdHhwaE80S1JjQi9zbmVzcWJTSkFuamxQZ0k0?=
 =?utf-8?B?NExudENLNUtTS2I5MHFqREZneHpRcW1LSUFMNHBKYWY5c3VYQmJiYmk2d2JO?=
 =?utf-8?B?VUR1UW0vZ0Y3b3l3QWkwRjB5UkxiM09OcEVNT2dXYTRnMUEyZ1lvQUpnMTBS?=
 =?utf-8?B?TzRHSUVYQm8vSFhabi9sSWQvTEFjajUzcnlZTnh3dzVINzhydkIwOFNrM0po?=
 =?utf-8?B?aEFta2ZhSHE5U2NwbnBrd0VjZTdIT2pRY2RpMGtQa2tQczcwKzFudVZHNVZC?=
 =?utf-8?B?R01TcEpXRlVUQkRtL3Mra2RCMW10Z2lEbXdzdHVoLzBnUUVxN3UvMy9VeXBV?=
 =?utf-8?Q?J8SDIR4eD8l41xB2HQW8BZdvVE/90I3s2CUB2LB?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e8b4353-761f-40ed-d1bd-08d92cbccb5b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 09:39:22.8581
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4Q4TXC3y3ljy2kr11Qaspk6kogf/LBAYzBsWObz8XaN4rY1PXeh8Jil03aTaTDpuYyUSJXn7J23athmDLWyFEg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3933

This confuses disassemblers, at the very least. Move
.altinstr_replacement to .init.text, dropping the redundant ALIGN().

Also, so that .altinstr_replacement has consistent attributes across the
object files, add "x" to the one instance where it was missing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm uncertain whether having .altinstr_replacement inside or outside the
[_sinittext,_einittext) region is better; I simply followed what we have
on the x86 side right now.

--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -147,6 +147,7 @@ SECTIONS
   .init.text : {
        _sinittext = .;
        *(.init.text)
+       *(.altinstr_replacement)
        _einittext = .;
   } :text
   . = ALIGN(PAGE_SIZE);
@@ -169,8 +170,6 @@ SECTIONS
        __alt_instructions = .;
        *(.altinstructions)
        __alt_instructions_end = .;
-       . = ALIGN(4);
-       *(.altinstr_replacement)
 
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
        . = ALIGN(POINTER_ALIGN);
--- a/xen/include/asm-arm/alternative.h
+++ b/xen/include/asm-arm/alternative.h
@@ -67,7 +67,7 @@ int apply_alternatives(const struct alt_
 	ALTINSTR_ENTRY(feature,cb)					\
 	".popsection\n"							\
 	" .if " __stringify(cb) " == 0\n"				\
-	".pushsection .altinstr_replacement, \"a\"\n"			\
+	".pushsection .altinstr_replacement, \"ax\"\n"			\
 	"663:\n\t"							\
 	newinstr "\n"							\
 	"664:\n\t"							\



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 09:55:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 09:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140328.259312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdtG-0007BL-Jm; Fri, 11 Jun 2021 09:55:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140328.259312; Fri, 11 Jun 2021 09:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrdtG-0007BE-EY; Fri, 11 Jun 2021 09:55:50 +0000
Received: by outflank-mailman (input) for mailman id 140328;
 Fri, 11 Jun 2021 09:55:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Erf=LF=gmail.com=rm.skakun@srs-us1.protection.inumbo.net>)
 id 1lrdtF-0007B8-RM
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 09:55:49 +0000
Received: from mail-lf1-x132.google.com (unknown [2a00:1450:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2946a7b-4725-4054-861a-dff63ae2b86e;
 Fri, 11 Jun 2021 09:55:48 +0000 (UTC)
Received: by mail-lf1-x132.google.com with SMTP id v22so7728501lfa.3
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 02:55:48 -0700 (PDT)
Received: from localhost ([178.151.124.169])
 by smtp.gmail.com with ESMTPSA id r17sm651828ljp.40.2021.06.11.02.55.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 02:55:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2946a7b-4725-4054-861a-dff63ae2b86e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=L6Nq/3eMJ61O2ppaiV+K5WCKa4wxPQ8pA+EactXa5/Q=;
        b=ph/XomxAPqIvxyU2lBvMH35+Kz6l15GgxHvCZPiV1EKnsNbwv18k8r3CfcXnDTadvY
         0HnwK+tCN0fm+FLw08qSlxbWGgLd6GqEV8gP9+/y3mC8mcoEseYaHD7LCYroqq3CWIuc
         rvV1kJGEsSsa+H0Bu3TBsRdiyQHI4toAKrUIEEeDeNkbUJRQmjgiwHPiHMItUrOZc1Jp
         W4prezTF9Lefwdpg90MeiDy/5zo0TrVfeIoYveukEK7APT3Exak8EMxXVAcMlp+QDfko
         P6XaKRDFXiSX8yq0rw+oLRrog118hw6/Wc5k0gg7+VdGPRxrUvhmGPfxUDfyLjcQfPpN
         E6/w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=L6Nq/3eMJ61O2ppaiV+K5WCKa4wxPQ8pA+EactXa5/Q=;
        b=nLPpMC9vkrEwbDCXbtbppNT6dZAUFunhpLIRU7dESK2EEeKjasKQLwNGijIjQCvqYB
         nCIGtSYUZMUK4c9uYls5ChhLZkWvWBW1pLkHU3tSFYD4JgOghx5wFTo/ls2+Qb4CfNbz
         sl19wrcxacqDCLS+MNos2VQcfTFjFNEMlecn9mX9P+MAK8ixxNFua0l117gGIsAIBqwK
         sjlo/WHXFWCWqGd9rJjD9BgFg8mmreOIQdtg053uui9xYKR23+uGDoyJp151iP+pgE/C
         CTWwTlLvryjfUMmSoJFO5p56LIDssUSiBPG86luDlgSGNKauNh3ZCtLryffv3phTZ5Kp
         avQw==
X-Gm-Message-State: AOAM532Q3PO9B1b8AUqu9jASZ6myKPwGibKDNdt4Mn9NF6RAzvpCaAAn
	YS1twyxIhYOyNJcxdkGarN0=
X-Google-Smtp-Source: ABdhPJy8EDHCE1f4s4ZC6KDj2A/UHmJhDH+REBsIFZnu4F/CM9rWslxdJXdDy1hMj7lLIs/rkF7G0w==
X-Received: by 2002:a19:dc02:: with SMTP id t2mr2158836lfg.261.1623405347760;
        Fri, 11 Jun 2021 02:55:47 -0700 (PDT)
From: Roman Skakun <rm.skakun@gmail.com>
X-Google-Original-From: Roman Skakun <roman_skakun@epam.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <rm.skakun@gmail.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops
Date: Fri, 11 Jun 2021 12:55:28 +0300
Message-Id: <20210611095528.9230-1-roman_skakun@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit fixes an incorrect conversion from cpu_addr to a page
address in cases where we get a virtual address that was allocated
through xen_swiotlb_alloc_coherent() and may be mapped in the
vmalloc range.
As a result, virt_to_page() cannot convert such an address properly
and returns an incorrect page address.

We need to detect such cases and obtain the page address using
vmalloc_to_page() instead.

The reference code was copied from kernel/dma/ops_helpers.c and
modified to add the detection described above.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
Also, I have observed that the original common code doesn't perform
additional checks on the contiguity of the memory region represented
by cpu_addr and size.

Maybe this means that these functions can only be given physically
contiguous memory. Is this correct?

Cheers!

---
 drivers/xen/swiotlb-xen.c | 51 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..f99c98472927 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -563,6 +563,53 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
 }
 
+static int
+xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+		void *cpu_addr, dma_addr_t dma_addr, size_t size,
+		unsigned long attrs)
+{
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long off = vma->vm_pgoff;
+	struct page *page;
+	int ret = -ENXIO;
+
+	if (is_vmalloc_addr(cpu_addr))
+		page = vmalloc_to_page(cpu_addr);
+	else
+		page = virt_to_page(cpu_addr);
+
+	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+
+	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off >= count || user_count > count - off)
+		return -ENXIO;
+
+	return remap_pfn_range(vma, vma->vm_start,
+			page_to_pfn(page) + vma->vm_pgoff,
+			user_count << PAGE_SHIFT, vma->vm_page_prot);
+}
+
+static int
+xen_swiotlb_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
+		 unsigned long attrs)
+{
+	struct page *page;
+	int ret;
+
+	if (is_vmalloc_addr(cpu_addr))
+		page = vmalloc_to_page(cpu_addr);
+	else
+		page = virt_to_page(cpu_addr);
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (!ret)
+		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+	return ret;
+}
+
 const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.alloc = xen_swiotlb_alloc_coherent,
 	.free = xen_swiotlb_free_coherent,
@@ -575,8 +622,8 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.map_page = xen_swiotlb_map_page,
 	.unmap_page = xen_swiotlb_unmap_page,
 	.dma_supported = xen_swiotlb_dma_supported,
-	.mmap = dma_common_mmap,
-	.get_sgtable = dma_common_get_sgtable,
+	.mmap = xen_swiotlb_dma_mmap,
+	.get_sgtable = xen_swiotlb_dma_get_sgtable,
 	.alloc_pages = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 10:03:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 10:03:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140336.259326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lre0p-0000J2-EQ; Fri, 11 Jun 2021 10:03:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140336.259326; Fri, 11 Jun 2021 10:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lre0p-0000Is-BW; Fri, 11 Jun 2021 10:03:39 +0000
Received: by outflank-mailman (input) for mailman id 140336;
 Fri, 11 Jun 2021 10:03:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lre0o-0000If-HC; Fri, 11 Jun 2021 10:03:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lre0o-0002i1-9y; Fri, 11 Jun 2021 10:03:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lre0n-0007N5-Uq; Fri, 11 Jun 2021 10:03:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lre0n-0007oj-UJ; Fri, 11 Jun 2021 10:03:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qzKEsLjc9Jfea3cWNFZuR+ilSY7DpIlqeVS4AaW6JC0=; b=jjwC2wBKDu++F1dCYCcalWrSiZ
	msVB2VOnwJzP7BSyV9nLy/JI9F+BdD4HBnAE6q8RT4aKVCczO3vTzVvP17GfsO8yfkUZz3YlPi62S
	Y0Kud8Ux+BNymCLnxvzl95xBy1iFqVIw0l1faP0RwTZYYpYu3ZabZIzJyQP/xjZXA6qo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162605-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162605: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:host-ping-check-native:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 10:03:37 +0000

flight 162605 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162605/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  6 host-ping-check-native  fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  314 days
Failing since        152366  2020-08-01 20:49:34 Z  313 days  534 attempts
Testing same since   162605  2021-06-10 12:55:52 Z    0 days    1 attempts

------------------------------------------------------------
6154 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1675379 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 10:41:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 10:41:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140345.259340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrebh-0004KC-Mi; Fri, 11 Jun 2021 10:41:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140345.259340; Fri, 11 Jun 2021 10:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrebh-0004K5-JJ; Fri, 11 Jun 2021 10:41:45 +0000
Received: by outflank-mailman (input) for mailman id 140345;
 Fri, 11 Jun 2021 10:41:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HBs1=LF=gmail.com=julien.grall@srs-us1.protection.inumbo.net>)
 id 1lrebg-0004Jz-Bq
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 10:41:44 +0000
Received: from mail-oo1-xc2f.google.com (unknown [2607:f8b0:4864:20::c2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4aea13a4-3130-4dc0-8790-2cb94719df8f;
 Fri, 11 Jun 2021 10:41:43 +0000 (UTC)
Received: by mail-oo1-xc2f.google.com with SMTP id
 o5-20020a4a2c050000b0290245d6c7b555so595989ooo.11
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 03:41:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4aea13a4-3130-4dc0-8790-2cb94719df8f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=dzmI3mcjI9bUqS7jOZe2FaB9jsbtLcrJ9WxglaseTWk=;
        b=NJCDqOtUSiKC9vHEGGAOmo2nBCb8xe3OL4alTUH6l0IgYKHJq8007qP5L6PP4fjHXN
         eAfjz3EgRXIzRScTtwAjUpkGs6N4tDoOFbCk4TBjrL0WAm+ooeNDyO6R9NXyARTH2Esj
         7PVG2wpHi0JV/b64Um4sxtFNROymjgbcNlk8uZmUfVMlg3mwfQmMpkxLmRm5bqANyuOS
         1QF/AEimMIzSNJsb3KCGUZqlVafWkxY6jDb0NsV3lxNWRfQYgYQ99h/rcHvLgUbVkb+n
         9oMnut0KgtyQ5exZisrICJhHgxilJnJS9Rb06/eSENSDz1E9EU19rGx334MHBgK/bi5I
         4KYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=dzmI3mcjI9bUqS7jOZe2FaB9jsbtLcrJ9WxglaseTWk=;
        b=gV+qLQB+amDv63eyzmTcUdobp5m29BwLNAYK8pz6k8ZmQCsVBhpmSe4rVUSaP/waBx
         6hqf9I+2X7O0iZCjkyKkiHN8Lqn81C9ck2w9hE+QjiVkrP8N+2wI/vuFGLzhS9EKHBh/
         KJXhs3n6KB6/wP+pDV3O8QCizUF6zl3+glYPFocF4bbWzSw6fwqKR/6KUZ04FLq0qTNt
         m+aq73Jg9g4h1Q5MBVyNEgkwo+jYFFVP1l0l25n4bcmMQmBMB59ZLwWBPDuwFYl2RdgL
         CgIfR/eJIbmwzRQuS3UdTK7/0ifbnim6Czr3SIY4AMYAPHU48NK0nwB4UkaXqQe9jz2q
         YPeA==
X-Gm-Message-State: AOAM533QV9l7hd8mHRbNee1C73LosqLcMwiJTRchRzBc5zv0wZqDu//P
	L4Iwu+WslsihKntR+DqPBxhpa4/M2p3ReJ1d2hM=
X-Google-Smtp-Source: ABdhPJzHaE/hJ3Ygoh19x0vqZOz7BX5zgJi5Ecf1EryaUuWy8ZhEPPlEb07Xp9lVtjnymSgrDG/1sLz2PpYetvX+qns=
X-Received: by 2002:a4a:625a:: with SMTP id y26mr2511543oog.38.1623408102677;
 Fri, 11 Jun 2021 03:41:42 -0700 (PDT)
MIME-Version: 1.0
References: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
 <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com> <af2f231a-5130-8e5f-b024-04f74e57d1ad@suse.com>
In-Reply-To: <af2f231a-5130-8e5f-b024-04f74e57d1ad@suse.com>
From: Julien Grall <julien.grall@gmail.com>
Date: Fri, 11 Jun 2021 11:41:30 +0100
Message-ID: <CAF3u54BrJ9MViXnBUMykukaOrpO=SyEV0KwE8Pbs8=tQiLb7wg@mail.gmail.com>
Subject: Re: [PATCH] Arm32: MSR to SPSR needs qualification
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall.oss@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: multipart/alternative; boundary="00000000000049206905c47b2798"

--00000000000049206905c47b2798
Content-Type: text/plain; charset="UTF-8"

On Fri, 11 Jun 2021, 11:16 Jan Beulich, <jbeulich@suse.com> wrote:

> On 11.06.2021 10:00, Julien Grall wrote:
> > On Fri, 11 Jun 2021, 08:55 Jan Beulich, <jbeulich@suse.com> wrote:
> >
> >> The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
> >> here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
> >> surely all of SPSR wants updating on this path, not just the lowest and
> >> highest 8 bits.
> >>
> >
> > Can you provide a reference to the Arm Arm? This would help to navigate
> > through the 8000 pages.
>
> Referencing the instruction page would be enough, I thought (as
> even I, not being an Arm person, have no difficulty locating it).
If it isn't, what is a canonical doc ref supposed to look like on
> Arm? On x86, we avoid recording document versions, section
> numbers, or even page numbers in code comments or commit messages
(which isn't to say we have none of these, but we try to avoid
introducing new ones), as these tend to change with every new
> version of the doc. Therefore, to me, the offending commit's "ARM
> DDI 0487D.b page G8-5993" doesn't look like something I wanted to
> clone from. But if you tell me otherwise, then well - so be it.


The Arm website provides a link for nearly every revision of the specs. As
the wording can change between versions, it is useful to know which spec the
understanding is based on.

Note that for Arm32 we should quote the Armv7 spec and not the Armv8 one,
because we only follow the former (there are a few small differences).
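
To make the distinction concrete, here is a sketch based on the commit
description (not the actual patch; the choice of r0 is purely illustrative):

```asm
@ Sketch, Arm32 GNU as syntax; r0 is an illustrative source register.

@ Accepted by gas, but per the discussion treated as if SPSR_cf had been
@ written: only the control and flags fields are updated.
msr SPSR, r0

@ Explicitly naming all four fields (flags, status, extension, control)
@ updates the whole SPSR, which is what a full register restore wants.
msr SPSR_fsxc, r0
```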



> Jan
>
>
>

--00000000000049206905c47b2798--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 13:03:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 13:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140366.259366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrgoP-0000FP-GD; Fri, 11 Jun 2021 13:03:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140366.259366; Fri, 11 Jun 2021 13:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrgoP-0000FI-D6; Fri, 11 Jun 2021 13:03:01 +0000
Received: by outflank-mailman (input) for mailman id 140366;
 Fri, 11 Jun 2021 13:03:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrgoO-0000FC-9v
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 13:03:00 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcbe23ae-7f41-4063-ad71-a1d33bf63e48;
 Fri, 11 Jun 2021 13:02:59 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2059.outbound.protection.outlook.com [104.47.14.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-30-ZlK3tSEYMmOxR17o2uNxLQ-1; Fri, 11 Jun 2021 15:02:56 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7149.eurprd04.prod.outlook.com (2603:10a6:800:12e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Fri, 11 Jun
 2021 13:02:54 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 13:02:54 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0054.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:51::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Fri, 11 Jun 2021 13:02:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcbe23ae-7f41-4063-ad71-a1d33bf63e48
Subject: Re: [PATCH] Arm32: MSR to SPSR needs qualification
To: Julien Grall <julien.grall@gmail.com>
Cc: Julien Grall <julien.grall.oss@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
 <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com>
 <af2f231a-5130-8e5f-b024-04f74e57d1ad@suse.com>
 <CAF3u54BrJ9MViXnBUMykukaOrpO=SyEV0KwE8Pbs8=tQiLb7wg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1dca7acd-8f37-b9a0-1ea5-dcd7afc62710@suse.com>
Date: Fri, 11 Jun 2021 15:02:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <CAF3u54BrJ9MViXnBUMykukaOrpO=SyEV0KwE8Pbs8=tQiLb7wg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0054.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:51::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 11.06.2021 12:41, Julien Grall wrote:
> On Fri, 11 Jun 2021, 11:16 Jan Beulich, <jbeulich@suse.com> wrote:
> 
>> On 11.06.2021 10:00, Julien Grall wrote:
>>> On Fri, 11 Jun 2021, 08:55 Jan Beulich, <jbeulich@suse.com> wrote:
>>>
>>>> The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
>>>> here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
>>>> surely all of SPSR wants updating on this path, not just the lowest and
>>>> highest 8 bits.
>>>>
>>>
>>> Can you provide a reference to the Arm Arm? This would help to navigate
>>> through the 8000 pages.
>>
>> Referencing the instruction page would be enough, I thought (as
>> even I, not being an Arm person, have no difficulty locating it).
>> If it isn't, what is a canonical doc ref supposed to look like on
>> Arm? On x86, we avoid recording document versions, section
>> numbers, or even page numbers in code comments or commit messages
>> (which isn't to say we have none of these, but we try to avoid
>> introducing new ones), as these tend to change with every new
>> version of the doc. Therefore, to me, the offending commit's "ARM
>> DDI 0487D.b page G8-5993" doesn't look like something I wanted to
>> clone from. But if you tell me otherwise, then well - so be it.
> 
> 
> The Arm website provides a link for nearly every revision on the specs. As
> the wording can change between version, it is useful to know which spec the
> understanding is based from.
> 
>  Note that for Arm32 we should quote the Armv7 spec and not the Armv8 one
> because we only follow the former (there are a few small differences).

Thanks for having me dig out an up-to-date Armv7 spec. I find this
puzzling in particular because you didn't care to have the earlier
commit provide a v7 doc ref. Initially I did intentionally use (a
newer version of) the doc that was pointed at there (which I also
think is better structured than the v7 one).

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 13:04:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 13:04:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140371.259377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrgpq-0000pt-Rc; Fri, 11 Jun 2021 13:04:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140371.259377; Fri, 11 Jun 2021 13:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrgpq-0000pm-O6; Fri, 11 Jun 2021 13:04:30 +0000
Received: by outflank-mailman (input) for mailman id 140371;
 Fri, 11 Jun 2021 13:04:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0bEB=LF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lrgpp-0000pg-AK
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 13:04:29 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d143e79-4179-4fa9-86da-e0fecf4e644e;
 Fri, 11 Jun 2021 13:04:28 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2057.outbound.protection.outlook.com [104.47.14.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-34-6ZGDkgUFPcuVh2sEyKoULA-1; Fri, 11 Jun 2021 15:04:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7149.eurprd04.prod.outlook.com (2603:10a6:800:12e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Fri, 11 Jun
 2021 13:04:25 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.022; Fri, 11 Jun 2021
 13:04:25 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1P264CA0030.FRAP264.PROD.OUTLOOK.COM (2603:10a6:102:19f::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Fri, 11 Jun 2021 13:04:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d143e79-4179-4fa9-86da-e0fecf4e644e
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] Arm32: MSR to SPSR needs qualification
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Message-ID: <2d0ac238-bf23-51ed-9ccf-6fd65fc6eec4@suse.com>
Date: Fri, 11 Jun 2021 15:04:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0030.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19f::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

The Arm ARM's description of MSR (ARM DDI 0406C.d section B9.3.12)
doesn't even allow for plain "SPSR" here, and while gas accepts this, it
takes it to mean SPSR_cf. Yet surely all of SPSR wants updating on this
path, not just the lowest and highest 8 bits.

Fixes: dfcffb128be4 ("xen/arm32: SPSR_hyp/SPSR")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add doc ref.

--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -395,7 +395,7 @@ return_to_hypervisor:
         ldr r11, [sp, #UREGS_pc]
         msr ELR_hyp, r11
         ldr r11, [sp, #UREGS_cpsr]
-        msr SPSR, r11
+        msr SPSR_cxsf, r11
 #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
         /*
          * Hardening branch predictor may require to setup a different



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 13:15:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 13:15:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140380.259387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrh0s-0002NG-Sf; Fri, 11 Jun 2021 13:15:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140380.259387; Fri, 11 Jun 2021 13:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrh0s-0002N9-PQ; Fri, 11 Jun 2021 13:15:54 +0000
Received: by outflank-mailman (input) for mailman id 140380;
 Fri, 11 Jun 2021 13:15:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrh0r-0002Mz-Kc; Fri, 11 Jun 2021 13:15:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrh0r-00064N-FA; Fri, 11 Jun 2021 13:15:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrh0r-0007Cn-6P; Fri, 11 Jun 2021 13:15:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrh0r-0005Rd-5w; Fri, 11 Jun 2021 13:15:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162642: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 13:15:53 +0000

flight 162642 xen-unstable-smoke real [real]
flight 162646 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162642/
http://logs.test-lab.xenproject.org/osstest/logs/162646/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    1 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days   10 attempts
Testing same since   162642  2021-06-11 10:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 12 16:48:32 2021 +0200

    tools/libs/store: cleanup libxenstore interface
    
    There are some internals in the libxenstore interface which should be
    removed.
    
    Move those functions into xs_lib.c and the related definitions into
    xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
    xenstore_client as some of the internal functions are needed there.
    
    Bump the libxenstore version to 4.0 as the change is incompatible.
    Note that the removed functions should not result in any problem as
    they ought to be used by xenstored or xenstore_client only.
    
    Avoid an enum as part of a structure as the size of an enum is
    compiler implementation dependent.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 13:22:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 13:22:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140389.259401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrh7M-0003ov-OS; Fri, 11 Jun 2021 13:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140389.259401; Fri, 11 Jun 2021 13:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrh7M-0003oo-LY; Fri, 11 Jun 2021 13:22:36 +0000
Received: by outflank-mailman (input) for mailman id 140389;
 Fri, 11 Jun 2021 13:22:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HBs1=LF=gmail.com=julien.grall@srs-us1.protection.inumbo.net>)
 id 1lrh7L-0003of-5R
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 13:22:35 +0000
Received: from mail-ot1-x334.google.com (unknown [2607:f8b0:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb131802-6ec8-4e34-8221-a528efc75d6f;
 Fri, 11 Jun 2021 13:22:34 +0000 (UTC)
Received: by mail-ot1-x334.google.com with SMTP id
 l15-20020a05683016cfb02903fca0eacd15so3099846otr.7
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 06:22:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb131802-6ec8-4e34-8221-a528efc75d6f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=nMkPtLnjfjBcMRUBIpL9t8lGJOfmiK+zsKjRUYJ9MSE=;
        b=LLLorqJCw5VddTYr7qLEz4B0H1/1vOziV+00oJRaOPGkNOipj8lGLauR1pymXocV95
         yPkOjjTAx4wutnveJ0pPYmjpdmS2UDlGCd1REjFRmvequPmrRbOdQIJKcQQsTir3SnLm
         bA4elxmTPRCzhjwZ8AZEz1nxpSJO+6gWPA/IK2bLPo/FXkwjGDVYSHqD6eqlMHpUrFCC
         DjztbKQujN0JFhq2yYlLlVcVNff4kXuaQKaRMhn+xpV0wGSEPV0FEBH8EaryN1BG324Y
         mPGt9GqDCVxG+vPSKUlkmKkFP3RBN4oXPVdhAxiv78fRUj851uPklk3C+jVltI1BNY/c
         cyHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=nMkPtLnjfjBcMRUBIpL9t8lGJOfmiK+zsKjRUYJ9MSE=;
        b=LPZH4VqnNawlMlGYwOuisV4OYmiaig+fCkx8RBAXei2+Ta1YanNDRVh/2tZMh4GY79
         G90hE2+G4ZAvSo7Ir27x1chngzMdXbKlR8QbzZqU3AE1lhpUcLwzkIOi0kc2aJ002UOs
         JgNLW8Rm2usn9Xob5GaB+7Xvf8SkUBR6Cq8DkyKap/4GJQe/EsuQC/CnLrkk8jfqScy8
         X9eumqiWYVFBV4dxVMHFnT7y56ZxRJ4RxowcPE17GsiENDyBJlUhyIePVLptywztn3xz
         p1k1h94NruVxgwWry+3WVv+UowJG9UuuRiHf765/eoiUPYyj99GV74VfnyTnvm2utdJ8
         eOrg==
X-Gm-Message-State: AOAM532NPlXWKblKELlIIs8VLh9SO1HyLV2XXiWif1wm/W0XLsWSe+Dx
	4wxcEVfKrwxahn0hasaFLE5t8Q8dWR40P5tK5uQ=
X-Google-Smtp-Source: ABdhPJzWf8ciN41S0pKlGCesVgGbsXdNWI6G7QUTrlw8FWICn1UmEdx7VU+Kb05zgZONlyMQ5BeKMNLLeiyCnEHjohE=
X-Received: by 2002:a9d:7682:: with SMTP id j2mr3106147otl.299.1623417753606;
 Fri, 11 Jun 2021 06:22:33 -0700 (PDT)
MIME-Version: 1.0
References: <e4946a69-bc1a-d54c-dadf-e71feecd99ab@suse.com>
 <CAJ=z9a07v-cnMhK=cVjjdN3-f4t8qGc3oQz17zRdLxOauBp=qA@mail.gmail.com>
 <af2f231a-5130-8e5f-b024-04f74e57d1ad@suse.com> <CAF3u54BrJ9MViXnBUMykukaOrpO=SyEV0KwE8Pbs8=tQiLb7wg@mail.gmail.com>
 <1dca7acd-8f37-b9a0-1ea5-dcd7afc62710@suse.com>
In-Reply-To: <1dca7acd-8f37-b9a0-1ea5-dcd7afc62710@suse.com>
From: Julien Grall <julien.grall@gmail.com>
Date: Fri, 11 Jun 2021 14:22:21 +0100
Message-ID: <CAF3u54CRe9WnXob8a6-NnT76hfi55a=-9vjoFN2yyePHhQzKOA@mail.gmail.com>
Subject: Re: [PATCH] Arm32: MSR to SPSR needs qualification
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall.oss@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: multipart/alternative; boundary="000000000000869d1105c47d66d1"

--000000000000869d1105c47d66d1
Content-Type: text/plain; charset="UTF-8"

On Fri, 11 Jun 2021, 15:02 Jan Beulich, <jbeulich@suse.com> wrote:

> On 11.06.2021 12:41, Julien Grall wrote:
> > On Fri, 11 Jun 2021, 11:16 Jan Beulich, <jbeulich@suse.com> wrote:
> >
> >> On 11.06.2021 10:00, Julien Grall wrote:
> >>> On Fri, 11 Jun 2021, 08:55 Jan Beulich, <jbeulich@suse.com> wrote:
> >>>
> >>>> The Arm ARM's description of MSR doesn't even allow for plain "SPSR"
> >>>> here, and while gas accepts this, it takes it to mean SPSR_cf. Yet
> >>>> surely all of SPSR wants updating on this path, not just the lowest
> and
> >>>> highest 8 bits.
> >>>>
> >>>
> >>> Can you provide a reference to the Arm Arm? This would help to navigate
> >>> through the 8000 pages.
> >>
> >> Referencing the instruction page would be enough, I thought (as
> >> even I, not being an Arm person, have no difficulty locating it).
> >> If it isn't, what is a canonical doc ref supposed to look like on
> >> Arm? On x86, we avoid recording document versions, section
> >> numbers, or even page numbers in code comments or commit messages
> >> (which isn't to say we have none of these, but we try to avoid
> >> new ones to appear), as these tend to change with every new
> >> version of the doc. Therefore, to me, the offending commit's "ARM
> >> DDI 0487D.b page G8-5993" doesn't look like something I wanted to
> >> clone from. But if you tell me otherwise, then well - so be it.
> >
> >
> > The Arm website provides a link for nearly every revision on the specs.
> As
> > the wording can change between version, it is useful to know which spec
> the
> > understanding is based from.
> >
> >  Note that for Arm32 we should quote the Armv7 spec and not the Armv8 one
> > because we only follow the former (there are a few small differences).
>
> Thanks for having me dig out an up-to-date Armv7 spec. I find this
> puzzling in particular because you didn't care to have the earlier
> commit provide a v7 doc ref. Initially I did intentionally use (a
> newer version of) the doc that was pointed at there (which I also
> think is better structured than the v7 one).


Well, Stefano replied past midnight UK time with the reference and committed
almost immediately afterwards, so I didn't really have time to object...

When I asked for the reference, I didn't think I needed to mention that it
should be the Armv7 one, as he should know we only support Armv7 for 32-bit.

I didn't bother to reply afterwards. But given there is a bug and you
quoted him, I chose to make clear that the reference should be to the
Armv7 spec only.

Cheers,



> Jan
>
>

--000000000000869d1105c47d66d1--


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:00:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:00:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140400.259425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrids-0004V3-WB; Fri, 11 Jun 2021 15:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140400.259425; Fri, 11 Jun 2021 15:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrids-0004Uw-SP; Fri, 11 Jun 2021 15:00:16 +0000
Received: by outflank-mailman (input) for mailman id 140400;
 Fri, 11 Jun 2021 15:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lridr-0004Um-00; Fri, 11 Jun 2021 15:00:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lridq-0007rs-Qy; Fri, 11 Jun 2021 15:00:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lridq-0003w5-KV; Fri, 11 Jun 2021 15:00:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lridq-0006YP-K4; Fri, 11 Jun 2021 15:00:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lrXdviCJujyCbXBx/zPZ9bSIi76sdVcUYp1YYpkSQ2w=; b=zPYWgNBhqmvqbXi3y0e3aX28D7
	Qsyxj5lMq5nGlInkdxyoCx1TYYuGI7Bf/yLynG+LY2ercKQPV0JJAYljgoGgSVomHq7B+takHf7yf
	tloF1ym7fiDY1l3BaF2H9Fkb32+xR7K2KlKTAq8PaMPusBI8zuQiutbWMQFFkI8fDHmw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162623-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162623: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 15:00:14 +0000

flight 162623 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162623/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  295 days
Failing since        152659  2020-08-21 14:07:39 Z  294 days  543 attempts
Testing same since   162591  2021-06-10 04:36:22 Z    1 days    2 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170591 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:20:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140409.259439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrixA-0006r6-Jo; Fri, 11 Jun 2021 15:20:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140409.259439; Fri, 11 Jun 2021 15:20:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrixA-0006qz-Fw; Fri, 11 Jun 2021 15:20:12 +0000
Received: by outflank-mailman (input) for mailman id 140409;
 Fri, 11 Jun 2021 15:20:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4LSv=LF=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lrix9-0006qt-GI
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:20:11 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6070e95d-6f4c-48c9-8735-8b8d71ac7332;
 Fri, 11 Jun 2021 15:20:10 +0000 (UTC)
Received: from pps.filterd (m0246632.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15BFB5KF014142; Fri, 11 Jun 2021 15:20:06 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 393mkb8d0r-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 11 Jun 2021 15:20:06 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15BFK5pY048046;
 Fri, 11 Jun 2021 15:20:05 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2170.outbound.protection.outlook.com [104.47.56.170])
 by aserp3030.oracle.com with ESMTP id 38yyadhwf1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 11 Jun 2021 15:20:05 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB3073.namprd10.prod.outlook.com (2603:10b6:208:32::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.24; Fri, 11 Jun
 2021 15:20:03 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4219.021; Fri, 11 Jun 2021
 15:20:03 +0000
Received: from [10.74.99.109] (160.34.89.109) by
 SN7PR04CA0087.namprd04.prod.outlook.com (2603:10b6:806:121::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Fri, 11 Jun 2021 15:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6070e95d-6f4c-48c9-8735-8b8d71ac7332
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=RHPeAxPjO/Hey6EKM0y2x1JoEIz/0e2OqSrgeCEPOdM=;
 b=kPkGaxNspdv+gGtcC/3n0n0WLx6RRclWNrsFMvcGC5d96gtSLqzR2rcFwMDi/J3ngm2r
 fs3+geIY7pzZvZJWsBd7hKnIfPgP+PxN8iEkwGTpKEHjIatNqELzNWLsvmirExdOgwcu
 zhKOC2SEi4pMnut7bGTadB11Wz2rcMUXQ2Zo0iQuXBCYB5xDSDrSw0J4gsMLJyrugjh2
 oIVyQW+fMyZD7LTEN0Iwn3n3bg2bRE/BzFq0VvphTSppxHWuLaQK9fACMFHR3SeiRw+U
 FR31BVvFlK3edqMhG4Eoqb3GZn0fdzaOdXG8/Wab0LICOyzmYqV/2du1PnuhSFbsC4J0 3Q== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DJ+z2AQfj4TfEHH+L5Es6fW/jZq7jCKglCNxcW4NGoM75ez6/9jYJthlJlKjS7I+cLCF07Uh38+zJpEleXzrOZHfuqd+DqKD/T9C9PxfjAvcgm0xc4mqHT2fgNQQskoPiM5wHiOfueRG7XlS7gEByK1x79nWjSuLwUjpkMDsEFnh2UtlFaYyPs26/0a8EV9gmt9v6QDjMR2cJlIkOrPkdt3VfklO5qawi8y0fpbDEl25KT1DJHGD++w+Z9s6BjWNGLfrl/ZQn+hfDaXapsgtJBR7KkMSJRLMaXx+ivcgciMqVhEZAh1v4WQJtBZgObvxRkA0OQEyZk5fCnJD+OyX4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RHPeAxPjO/Hey6EKM0y2x1JoEIz/0e2OqSrgeCEPOdM=;
 b=GH1lrR550GGZ0LT5oQAltVOKO5ZOE0VW5+9wk2xeZeRcA1/HS7SMjszlfhPBmn+lPOxgvsEMPztcDDwLrUBZeyFU/J/7v4jy1G+wOwBuagf+aIZat9xoSHrwLnyKrrS4bKsUm5n/wu82rHpPipbqoJiZbJUw+qF2Jhf9s5sjXhDHGLzY95HoM6ornQu2tkxrjVupONO7rnasmuuzKp4sPEKQoFghqbGJistIBHbPUYfVh6ULlCxOqJar7pxr063mqBgIYlcVPd9EBmIdmT/5olKwEW06ddPPMnUaF1+ykUDbPIqMRIG+OWlhbFccwmghS/1d7V5pHGK+sksruMnKoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RHPeAxPjO/Hey6EKM0y2x1JoEIz/0e2OqSrgeCEPOdM=;
 b=f0REyQgm8toI4XbxibvZdoMuCSFwa+xXs1OsoAYQ6w/BoSvl9LGbKB/HPMTJPvpp0JpZhNAHoV2IiDwrq/hSEpLnCrxb7bJvoo/J5zi8+rbwLSDdOtLhcm2VeIlJx5vlV4MbI4QziEXlq5odZ7RKOURRaQdh8kMIBg5q7YnKhhA=
Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops
To: Roman Skakun <rm.skakun@gmail.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
        Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
        Roman Skakun <roman_skakun@epam.com>,
        Andrii Anisov <andrii_anisov@epam.com>
References: <20210611095528.9230-1-roman_skakun@epam.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
Date: Fri, 11 Jun 2021 11:19:58 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <20210611095528.9230-1-roman_skakun@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.89.109]
X-ClientProxiedBy: SN7PR04CA0087.namprd04.prod.outlook.com
 (2603:10b6:806:121::32) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: da0c009c-4c2f-4099-8207-08d92cec62fa
X-MS-TrafficTypeDiagnostic: BL0PR10MB3073:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BL0PR10MB3073A801473EC8CF755F71668A349@BL0PR10MB3073.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da0c009c-4c2f-4099-8207-08d92cec62fa
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2021 15:20:03.5757
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o9y1+DiNv74pKGkCIn3IfPhRB22pDJZXiRYso7WcNt+a7XxDevkZQ76PPYI07yOPfmdEAmXJVL/rietfpWG/8F5syYL3idhSVXjYju5DPO0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR10MB3073
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10012 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 malwarescore=0 spamscore=0
 adultscore=0 suspectscore=0 mlxscore=0 mlxlogscore=999 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106110096
X-Proofpoint-ORIG-GUID: kiyQqyhqWKmDku9sLu1WET854s78Yl79
X-Proofpoint-GUID: kiyQqyhqWKmDku9sLu1WET854s78Yl79


On 6/11/21 5:55 AM, Roman Skakun wrote:
>  
> +static int
> +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> +		void *cpu_addr, dma_addr_t dma_addr, size_t size,
> +		unsigned long attrs)
> +{
> +	unsigned long user_count = vma_pages(vma);
> +	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +	unsigned long off = vma->vm_pgoff;
> +	struct page *page;
> +	int ret = -ENXIO;
> +
> +	if (is_vmalloc_addr(cpu_addr))
> +		page = vmalloc_to_page(cpu_addr);
> +	else
> +		page = virt_to_page(cpu_addr);
> +
> +	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
> +
> +	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
> +		return ret;
> +
> +	if (off >= count || user_count > count - off)
> +		return -ENXIO;
> +
> +	return remap_pfn_range(vma, vma->vm_start,
> +			page_to_pfn(page) + vma->vm_pgoff,
> +			user_count << PAGE_SHIFT, vma->vm_page_prot);
> +}


I suggest you create a helper for computing the page value, then revert 922659ea771b3fd728149262c5ea15608fab9719 and pass the helper's result instead of cpu_addr. Here and in xen_swiotlb_dma_get_sgtable().


And use this new helper in xen_swiotlb_free_coherent() too. I am curious, though, why this was not a problem when Stefano was looking at the issue that introduced this vmalloc check (i.e. 8b1e868f66076490189a36d984fcce286cdd6295). Stefano?
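Such a helper would only need to fold the vmalloc check into one place. A minimal sketch of what is being suggested (the helper name here is made up; the thread only proposes creating one):

```c
/*
 * Sketch of the suggested helper: translate the CPU address of a
 * coherent buffer to its struct page, handling both vmalloc'ed and
 * directly-mapped allocations. The name is illustrative only.
 */
static struct page *xen_dma_cpu_addr_to_page(void *cpu_addr)
{
	if (is_vmalloc_addr(cpu_addr))
		return vmalloc_to_page(cpu_addr);

	return virt_to_page(cpu_addr);
}
```

xen_swiotlb_dma_mmap(), xen_swiotlb_dma_get_sgtable() and xen_swiotlb_free_coherent() could then all call this instead of open-coding the same check.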


-boris


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:27:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140416.259449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj3z-0007YA-Be; Fri, 11 Jun 2021 15:27:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140416.259449; Fri, 11 Jun 2021 15:27:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj3z-0007Y3-8c; Fri, 11 Jun 2021 15:27:15 +0000
Received: by outflank-mailman (input) for mailman id 140416;
 Fri, 11 Jun 2021 15:27:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj3y-0007Xx-6O
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:14 +0000
Received: from mail-pg1-x52b.google.com (unknown [2607:f8b0:4864:20::52b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60e4ead7-5829-4873-bba9-94d46e637d06;
 Fri, 11 Jun 2021 15:27:13 +0000 (UTC)
Received: by mail-pg1-x52b.google.com with SMTP id o9so2769832pgd.2
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:13 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id m18sm5552391pff.88.2021.06.11.08.27.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60e4ead7-5829-4873-bba9-94d46e637d06
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HzdTumh9BrSYWktV5WZf7y51LbAaw3C3h39TEtEC61A=;
        b=glnoPPl5BjuEU+aHwo6DAiEerdZdwKnsli+A2zYzDPxEZiqMUG0n7GdfkjKsK9QW0e
         93P4fACu9M1YhL7kJ17IJaADBozzkAYJ7IX93Y7kPx7ZOfCFV+3jbYHpDWgy2cTfANYo
         0mhxwlPjX1zbtnhLTHogjuk/5yAGZZVOgaVh4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HzdTumh9BrSYWktV5WZf7y51LbAaw3C3h39TEtEC61A=;
        b=bmNZitbHrcHEl7zocS0xVvX2tdJiZdq47eipvY8N1yzGxLt9Lh3ePlh8KI0doE7BDu
         oyS1M0sj6ZLTzmowz9Psc+NeKBnRcU3Xhn/mYR/G07JIwe0k0Iz1VyxgM+XHgsNpvN9l
         mcsyLZurJxsTe9vyoJ+IZ11xPEfRcbs/B+qD8xz1lKfhhi1W/TQ73rnCVgWz558HhVE9
         /OtfJS0DgUEH3+/ENkMWLccZN3Io45v6yjF9EQEkrQUHE4a2UgwPwjjE6P+Y0z+xqwS9
         kFmDvJbfz9G2z73lpaBE6dLjuGsabvNTMZzFMFNZCJvPbU/QyW3RZoI6raWSb5e3ZKFr
         BmbQ==
X-Gm-Message-State: AOAM5332rgWQWxm2egSXnPTCnolAslfYXYb9e1yaGEi4aMLbT0sSDSvb
	idrmgyA//QTX24+9/n/OaPX4eA==
X-Google-Smtp-Source: ABdhPJz1S7kolJJDhDCqvr+nlVhh2CsYK+ut0dTmzSatQWDsZRkoRFJesVPfEDUsaDthsVEt0zyWaw==
X-Received: by 2002:aa7:828f:0:b029:200:6e27:8c8f with SMTP id s15-20020aa7828f0000b02902006e278c8fmr8819226pfm.44.1623425232214;
        Fri, 11 Jun 2021 08:27:12 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 00/14] Restricted DMA 
Date: Fri, 11 Jun 2021 23:26:45 +0800
Message-Id: <20210611152659.2142983-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. Since PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a full
exploit chain; see also [2] and [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region, and performs memory allocations from that same
region. On its own, the feature provides a basic level of protection against
the DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system also
needs a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. by the MPU in ATF on some Arm
platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
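The dt-binding patch later in this series ("dt-bindings: of: Add restricted DMA pool") declares such a region under reserved-memory. A rough sketch of what a consumer setup might look like (node names, labels, addresses and sizes below are illustrative, not taken from the binding):

```
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Pool the Wi-Fi device is restricted to; address/size are examples. */
	wifi_restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x4000000>;
	};
};

&pcie_wifi {
	memory-region = <&wifi_restricted_dma>;
};
```

Streaming DMA for the device is then bounced through this pool, and coherent allocations are served from it as well.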

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/


Claire Chang (14):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   6 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  45 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  62 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 376 insertions(+), 101 deletions(-)

-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:27:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140417.259461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj48-0007r9-KW; Fri, 11 Jun 2021 15:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140417.259461; Fri, 11 Jun 2021 15:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj48-0007r1-H9; Fri, 11 Jun 2021 15:27:24 +0000
Received: by outflank-mailman (input) for mailman id 140417;
 Fri, 11 Jun 2021 15:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj47-0007qJ-2V
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:23 +0000
Received: from mail-pg1-x52c.google.com (unknown [2607:f8b0:4864:20::52c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34539d52-7439-46a2-806b-6fadafdc625e;
 Fri, 11 Jun 2021 15:27:22 +0000 (UTC)
Received: by mail-pg1-x52c.google.com with SMTP id t9so2752751pgn.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:22 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id a11sm5354193pjq.45.2021.06.11.08.27.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34539d52-7439-46a2-806b-6fadafdc625e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=nk5KLnjJUZzmwtQpVnUAlLQfo+45PCj3pfGwn0Xh8Lc=;
        b=ZWUxrjtFrWfMiz2vkDC3jaU4HtvtATQivM2/a1WCbJcnmtmvKI0dai77VR4iYEO7wi
         qbyNf8h4nuDiw8nL5J1NWb9gMMiyGOXWgSIcxhIHjtKOTxqX7IkzBlm/uHswtyFUBs9r
         1dmPjPkxMqFDM4eWBrACbXxSEo8vyq9XlFtA4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=nk5KLnjJUZzmwtQpVnUAlLQfo+45PCj3pfGwn0Xh8Lc=;
        b=OZdTW283g0zM2B9jyLEHtjdAXkHwbInXsq8hdqZKbtqE3JN9vWV79QYG86wRZ1/T0e
         IbTu0/aEWx+MLl8jIt4BqP6LMOUYOls0uxyVWZzi+e2IMHLbKBftabJ/bHz97Ivc23M+
         Dc27ih0axw1CT/yMojCuQBbQCHYeJINDCJa12ukoq/hwTxMw0K8PQ62MMgnpfPRqKF8G
         pCagACJYb+WAN+CLxibceXh4WW1LVpVGgZJFRlxfM3LMbngd++PmNxixsb+wLk75hMdF
         ZZhc6rfnNdGJCPKNToCf1ZUNAIJEWD1RJg4XBxumXNcTTGR0W3t5GzIn4Fo3ULpwb/kC
         o1tQ==
X-Gm-Message-State: AOAM530UOkQNOTlQQz1h9sc4CdEhnW1HdUQgDFY6KqrzNR+qohEzV+FB
	4qgrVlJ03Yjv2tUpuCAOaDO6cQ==
X-Google-Smtp-Source: ABdhPJzVViGqCnvRmV8jjGxainZfFkO96cSEyNRWz3J/3hQzjle8DzEEYAlc9QxFMOZm5LzjKL8vSg==
X-Received: by 2002:a63:1210:: with SMTP id h16mr4191204pgl.189.1623425241610;
        Fri, 11 Jun 2021 08:27:21 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
Date: Fri, 11 Jun 2021 23:26:46 +0800
Message-Id: <20210611152659.2142983-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, to handle the io_tlb_mem
struct initialization so the code can be reused.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 53 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..1a1208c81e85 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,32 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc,
+				    bool memory_decrypted)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	if (memory_decrypted)
+		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +209,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,7 +297,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +311,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:27:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140418.259472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4I-0008J4-2H; Fri, 11 Jun 2021 15:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140418.259472; Fri, 11 Jun 2021 15:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4H-0008It-V6; Fri, 11 Jun 2021 15:27:33 +0000
Received: by outflank-mailman (input) for mailman id 140418;
 Fri, 11 Jun 2021 15:27:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj4G-0008HP-5z
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:32 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f0b04a3-6f21-4efd-98a3-a9fc29516775;
 Fri, 11 Jun 2021 15:27:31 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id
 o17-20020a17090a9f91b029015cef5b3c50so6030754pjp.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:31 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id fs10sm10781936pjb.31.2021.06.11.08.27.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f0b04a3-6f21-4efd-98a3-a9fc29516775
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=POwdGvWH9uF6GuFYGDpfWh3gBTCdeay2Nuv62kz+670=;
        b=C9jITNUpsFL469uBCWw+ylcEXmDJHC24qGoWeq47lXmwrDoiFy9SQ+nYUnDf9eoUhJ
         UQMk8xg5DYQsIoGnRwISCRXEWnbFfVNYAqeVla6v7rlivkLs9OCXtvoP8+LC1h0SWDED
         3sRx2R1OhX94fVL71OHRWyugi7Thhky5EB+DQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=POwdGvWH9uF6GuFYGDpfWh3gBTCdeay2Nuv62kz+670=;
        b=UZ5JLewqqqqIftpNwr9/BuC9ZZgMqHQRNQGbf3JT0HcUbYynZr2IXpdWGG5lvW9TW7
         ntfC9+a4sRGsZfTabIVSvl639MDolgTwAkOILESnfxcf+KFLR6Pw1OcPRyDU2nwb1pid
         Qaivod7QYMGydLliIVTdpl+qH/5/Z9DitO8wVhDFVe25ltnOU0A7ptPKDDbEt/G2aBlS
         kMAvpiM1wFjJKoinPsumI4TGhRONwwkXgDLOJarzrPVhXjf9HZ49q6w8JuZZwY+Jl84W
         KO121uaMrt61gXXApFHGw44Uhdx7EjMIkN8a7pNSYLyrGnDYciVEMo7IOzc0/pC5lmFM
         Kx4w==
X-Gm-Message-State: AOAM531unb8tM0HGj93UOteyAeCgILyQu8PO0ARp4hvr6YY8oUPI1bAY
	vtMFbc5ofR9HZlcq7P+QvrgdYw==
X-Google-Smtp-Source: ABdhPJye589iRfRWyEdkeIKw72OE8rv2dNVWo6QIa4VZn02HOM9CDAmVKp9644S+z7efQaweuHXMtA==
X-Received: by 2002:a17:90a:7bce:: with SMTP id d14mr5098913pjl.38.1623425250808;
        Fri, 11 Jun 2021 08:27:30 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs
Date: Fri, 11 Jun 2021 23:26:47 +0800
Message-Id: <20210611152659.2142983-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split out the debugfs file creation so the code can be reused to support
different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1a1208c81e85..8a3e2b3b246d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,9 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
+#endif
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -664,18 +667,24 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:27:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140421.259483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4Q-0000Ro-Bo; Fri, 11 Jun 2021 15:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140421.259483; Fri, 11 Jun 2021 15:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4Q-0000Rf-8c; Fri, 11 Jun 2021 15:27:42 +0000
Received: by outflank-mailman (input) for mailman id 140421;
 Fri, 11 Jun 2021 15:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj4P-0000Fn-Fm
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:41 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1827cbe9-7de8-4bd5-9a90-dbd3f13cd159;
 Fri, 11 Jun 2021 15:27:40 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id h12so3009349plf.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:40 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id t143sm6505494pgb.93.2021.06.11.08.27.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1827cbe9-7de8-4bd5-9a90-dbd3f13cd159
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=N9TVmO9aGqoduxqk5pFie0IHXd8vLspLVt82aTdw2co=;
        b=fiJV4qFUiRXaxOp4vBkL4JToBkgYVY+AaQy2O2SrrY5MvO7od5oJ3tYXCIi5YBEuJh
         DGJzgMCIu0uMgBXkHJ7/7dTWoJ1FXDPZDsfDXnlYmJUO5c8ZukGBzjI7CYg/CWlag+/P
         qBP0ch5ltrmS/XxZKNv+okJuDN4vM3IFZNebE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=N9TVmO9aGqoduxqk5pFie0IHXd8vLspLVt82aTdw2co=;
        b=KA+g0USWjyYbseFOsyR8XOnnCXkCqvgRXgARBkNdC6e9roLFJ5/qgwcUVtUaSBF+bS
         5DzOaOVhZLuO+UMpZhs32u0MFpx/qWQLxTvz80FrYDG/90TMKnyOWmy1r6yW71vlA8Ec
         wICSKxBHMf/QmjEbQsJsHcnYZxqhkBZX4/1OuidE7e+3mnoVEwAO2mB+wrQWFvXoMDZq
         L9QRughYZ/qRd2A55TqsTUHN/DJTWZkMftixmpMfX6VX33FrCgwD9UEPaAhqmgQnw/wK
         b/twy8fBx7NxCIOb1HExhSYTJp1xJlmqM+u+JPGYPC41Gimxaeq1oRym9ccg0FwpXlAy
         voMA==
X-Gm-Message-State: AOAM531vby9DPt7NFIAOHrFQCdM7Cc+lJjxW6LMZjpMczJXWTmD4If96
	bDrGNjccpu+1suOMfZ15VBzgfA==
X-Google-Smtp-Source: ABdhPJx0OE49qJUIllsnRRF41WCicXTNT2KzJUs20xdPS7xiVUkY/qS3IozDVpjfuVo5+5ukcDBuNQ==
X-Received: by 2002:a17:90a:5309:: with SMTP id x9mr9513136pjh.111.1623425259905;
        Fri, 11 Jun 2021 08:27:39 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Fri, 11 Jun 2021 23:26:48 +0800
Message-Id: <20210611152659.2142983-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always keep a pointer to the swiotlb pool in use in struct device. This
helps simplify the code when other pools are added.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/device.c     | 3 +++
 include/linux/device.h  | 4 ++++
 include/linux/swiotlb.h | 8 ++++++++
 kernel/dma/swiotlb.c    | 8 ++++----
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..1defdf15ba95 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (IS_ENABLED(CONFIG_SWIOTLB))
+		swiotlb_set_io_tlb_default_mem(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/include/linux/device.h b/include/linux/device.h
index 4443e12238a0..2e9a378c9100 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -432,6 +432,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -540,6 +541,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..008125ccd509 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -108,6 +108,11 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -119,6 +124,9 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return false;
 }
+static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+{
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8a3e2b3b246d..29b950ab1351 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -344,7 +344,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -426,7 +426,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -503,7 +503,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -554,7 +554,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:27:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:27:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140424.259494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4Z-000124-M7; Fri, 11 Jun 2021 15:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140424.259494; Fri, 11 Jun 2021 15:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4Z-00011x-He; Fri, 11 Jun 2021 15:27:51 +0000
Received: by outflank-mailman (input) for mailman id 140424;
 Fri, 11 Jun 2021 15:27:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj4Y-0000x4-5C
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:50 +0000
Received: from mail-pf1-x42b.google.com (unknown [2607:f8b0:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f162781a-b49c-423c-8c95-ddddc34592aa;
 Fri, 11 Jun 2021 15:27:49 +0000 (UTC)
Received: by mail-pf1-x42b.google.com with SMTP id k15so4734129pfp.6
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:49 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id m2sm5324723pjf.24.2021.06.11.08.27.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f162781a-b49c-423c-8c95-ddddc34592aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=z3csfjTdnaGEbW5Yiu81rOYxUROdkDuu0pkC+1dIGAc=;
        b=IkC7yFFhO8hIfNYdMdJ54k4hExWD/qoffPbEBXy7aMHVEBwdwVaaywxk2mt0G3QZuk
         kHNUXds8Y0K0b8H58qwuxnw/8yQrewipm9rVIDVGn4pqLKNyquN2PKy3qJ+VVeLcukCR
         pIHcSWbnGwWNwrAuGd/5jq088bjDa5bnVis6E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=z3csfjTdnaGEbW5Yiu81rOYxUROdkDuu0pkC+1dIGAc=;
        b=KA9klA5mR5qncIBnTh7mnelRCzMZUMiV+L/GETlHNJkVDtIE3Wad9499a/cg5cuhlM
         RFrpACfheNbSSvXajC7mzSUNcmzjmY59zljBRBYTOmq3Au614C8o1LB918SU7vFTHVKK
         fWeDBYFfqVC5JDtQdlbmJeQsl8AAZqvg2DYiCfy8U5LTwpbXt3DdV8O8fE52cSCnXZo7
         OYPc221IyoEfr7WAp/sl6swqHIUeER6NAH4SydKPAmlO7/YRIffk5wXL/yxGUu5zKkAM
         ggcetl/DjOKIp3lLcGzYsz2GW/dhBcIbLbVwqIzqSUc4l/+4RG74mbvqbRwtsNwTh31I
         O3pw==
X-Gm-Message-State: AOAM531MsIg0/Tb2BFrF7OVPmf73qNHTXjo0SI8cvfrYcoZgMpPPIbfl
	c0rmKYxMkuu2wzOqpgzs1jpDqg==
X-Google-Smtp-Source: ABdhPJyj9WektlHINHWsfATaExSaFcZYhgD+NDtqPb5EwrK4uHwBuEuhV53c0IRLlzaKiXoj5cmzYw==
X-Received: by 2002:aa7:84c7:0:b029:2e9:2d18:54a5 with SMTP id x7-20020aa784c70000b02902e92d1854a5mr8750291pfn.44.1623425268566;
        Fri, 11 Jun 2021 08:27:48 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization
Date: Fri, 11 Jun 2021 23:26:49 +0800
Message-Id: <20210611152659.2142983-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.
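
For illustration only (node names, unit addresses, and sizes below are
hypothetical), a reserved-memory node that rmem_swiotlb_setup() would match
via the "restricted-dma-pool" compatible could look like:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Pool picked up by rmem_swiotlb_setup() at boot. */
		restricted_dma_reserved: restricted-dma-pool@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0x50000000 0x0 0x4000000>; /* 64 MiB */
		};
	};

	/*
	 * A device bouncing its streaming DMA through the pool above;
	 * the memory-region link triggers rmem_swiotlb_device_init()
	 * when the device is attached.
	 */
	pcie@0 {
		memory-region = <&restricted_dma_reserved>;
	};
};
```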

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 75 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 008125ccd509..ec0c01796c8a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 29b950ab1351..c4a071d6a63f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -688,3 +695,71 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false, true);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.\n");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
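For reference, a reserved-memory node matching the "restricted-dma-pool" compatible registered above by RESERVEDMEM_OF_DECLARE might look like the sketch below. The node name, addresses, sizes, and the consuming device are illustrative only; the compatible string and the memory-region consumer property reflect how rmem_swiotlb_setup() and rmem_swiotlb_device_init() are expected to be wired up:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Illustrative pool; base and size are arbitrary examples.
	 * Note: no "reusable" or "no-map" here -- rmem_swiotlb_setup()
	 * rejects nodes carrying those properties. */
	restricted_dma_reserved: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

pcie@0 {
	/* Hypothetical device whose streaming DMA bounces through the pool. */
	memory-region = <&restricted_dma_reserved>;
};
```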
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:28:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140427.259505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4j-0001aR-09; Fri, 11 Jun 2021 15:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140427.259505; Fri, 11 Jun 2021 15:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj4i-0001Zd-RR; Fri, 11 Jun 2021 15:28:00 +0000
Received: by outflank-mailman (input) for mailman id 140427;
 Fri, 11 Jun 2021 15:27:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj4h-0001UV-HM
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:27:59 +0000
Received: from mail-pl1-x62c.google.com (unknown [2607:f8b0:4864:20::62c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d98ad9c-869b-4aac-8060-086fa8c523f2;
 Fri, 11 Jun 2021 15:27:58 +0000 (UTC)
Received: by mail-pl1-x62c.google.com with SMTP id v11so3008238ply.6
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:27:58 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id p11sm5083386pfo.126.2021.06.11.08.27.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:27:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d98ad9c-869b-4aac-8060-086fa8c523f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=bPhM+WorBl/5IPWUopQpopRI8e+hD6cwsqTSUdlP4rc=;
        b=iVxcYyQUW7cXzxhHWTGSRZQlNt4ASqmdafviXMeqVVZDUYoVKbp5gazYTD0liisFpL
         kOReUJY/l+JjcnvT9eiW5Ttd2yOdW/2cOnL6edMsj2jAzKAFxbgUCQs/mv16tTokHkWR
         td0y0Z62+y3sEgdlMapYBkw1WBxSvtMM1bhaM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bPhM+WorBl/5IPWUopQpopRI8e+hD6cwsqTSUdlP4rc=;
        b=J6L3IGPG7Vd1gwvAbyOszi8Xq6Ftx77p9s+FdIc+/zKUmQvU50NBHz3+uHxjPpaSID
         w9FNOJd1kc++oGb6k5KJsKGytYa1MGpNVGqSLtrt/Uq+k66dLcWhw2r3sMW4PcMV9O1T
         WHbZOULRXyFKyCarz+t1NCJ8grpSrdI04H4979U8olTuhNShZNqShdJZGV228eSMq9qQ
         YyiLbolnI70Wf6RUhMVlpbqjcHEpMOCvFYo4RFygZ5RUMxQxU7qFJ51qhjVMvDtLNBLW
         Cd0/81Sy66B5g12UWAhGRSJmdXOdm3cRNt5ks4Me4IweVJgq1FN8nr6703iRDTrKqWBI
         wMYQ==
X-Gm-Message-State: AOAM530TG5GmgsDSce3hSPastubpYSMrAuhnfXEzaf0f2kj3c1Jgkul2
	HrICcPxgFDfJbgrXcDkegMXQkQ==
X-Google-Smtp-Source: ABdhPJx+9IDb/ywuiMW4gqarDNqt7pH3N+XSFMl5YC3dntNSpPaUZEs1grfZvotZIiN3xZs6Xp8Hkg==
X-Received: by 2002:a17:90a:1941:: with SMTP id 1mr9632407pjh.217.1623425277894;
        Fri, 11 Jun 2021 08:27:57 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Fri, 11 Jun 2021 23:26:50 +0800
Message-Id: <20210611152659.2142983-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to allow for the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5d96fcc45fec..1a6a08908245 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -506,7 +506,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -577,7 +577,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -783,7 +783,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -796,7 +796,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -817,7 +817,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -834,7 +834,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..0c4fb34f11ab 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index ec0c01796c8a..921b469c6ad2 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,9 +103,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -121,7 +122,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:28:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140435.259516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj51-0002Sd-Dp; Fri, 11 Jun 2021 15:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140435.259516; Fri, 11 Jun 2021 15:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj51-0002SK-AO; Fri, 11 Jun 2021 15:28:19 +0000
Received: by outflank-mailman (input) for mailman id 140435;
 Fri, 11 Jun 2021 15:28:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj50-0001UV-JQ
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:28:18 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc172d26-5ff1-4637-b8fd-7db3b49771ed;
 Fri, 11 Jun 2021 15:28:07 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id e20so2797948pgg.0
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:07 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id h12sm5753859pgn.54.2021.06.11.08.27.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc172d26-5ff1-4637-b8fd-7db3b49771ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=MalV2CJ985iagzNSMyE2gpGad3UeVZNg8z2N/3AvmfA=;
        b=eCBgbtVRpMlNoTp3Qb9z3igfLg56XJvLlX9C35Jsgr3nIsZb4Kok6rgN/bbLfD80Qy
         e3Pgw0kbSJakGzAv3QSsIgmZpc78a08an5Bw5pXx6xrZWrIDvAnZhP9zKwZq+J6CORqn
         K0fF06KCx7SV0W5IFjWZ5axcloQj/KU2Qc5g0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=MalV2CJ985iagzNSMyE2gpGad3UeVZNg8z2N/3AvmfA=;
        b=JCbBNtM9zjO5IK4VUTxGQiOUu5weiNRmUi8r4QO8bqyJsol4XShc89a4Td5CaUW8sl
         voP4nqB08paN3VJiS2GserkocgkcDGPEqIzlfxSWCoGkL4GxyUuztT51Ag2RK169nxTF
         8ybZgIz9xbCWt1V0t2gb80DoojbeHMr1jTHmt4Z/LiCg32Kt+aQ/c0+TQB21PTqpSbKX
         UTfHqapG4eZ+g/5t7UaT6NkBDNYnb2AlECi74LqaIHwKWV036okGvQiBxSAXtzYMgrvt
         yby9MYmkfEc4+RXMOSY5m1fPMLO53ii4rEAJHZFHbwzqHLyRi9Nfo0ji3ADa4BDCkr5W
         dL3Q==
X-Gm-Message-State: AOAM531q6qGW9kdgtpLO6Sx/TgYGN33UE/p/HveSruM1GjTZ8WgMnrK+
	HphrYHWn2IzwBwd9KW0IFFxH4g==
X-Google-Smtp-Source: ABdhPJzlQenh9k0nqVW0E8ub36FyXOThaRF1VbV2QslYPlfabmGDPoR8tL9zv2YKB/OJ0Pgomd7a2w==
X-Received: by 2002:a63:5d52:: with SMTP id o18mr4196807pgm.440.1623425286584;
        Fri, 11 Jun 2021 08:28:06 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Fri, 11 Jun 2021 23:26:51 +0800
Message-Id: <20210611152659.2142983-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to take a struct device argument. This will be
useful later to allow for the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..89a894354263 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index f4c2e46b6fe1..2ca9d9a9e5d5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -276,7 +276,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 921b469c6ad2..06cf17a80f5c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -118,7 +118,7 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -141,7 +141,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c4a071d6a63f..21e99907edd6 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -666,9 +666,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:28:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:28:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140440.259526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj5G-00034z-O6; Fri, 11 Jun 2021 15:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140440.259526; Fri, 11 Jun 2021 15:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj5G-00034o-KS; Fri, 11 Jun 2021 15:28:34 +0000
Received: by outflank-mailman (input) for mailman id 140440;
 Fri, 11 Jun 2021 15:28:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj5F-0001UV-Js
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:28:33 +0000
Received: from mail-pg1-x531.google.com (unknown [2607:f8b0:4864:20::531])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aaf25c3-e907-4874-bea0-63bbea4d62d9;
 Fri, 11 Jun 2021 15:28:16 +0000 (UTC)
Received: by mail-pg1-x531.google.com with SMTP id t9so2754850pgn.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:16 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id e21sm5534829pjh.55.2021.06.11.08.28.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5aaf25c3-e907-4874-bea0-63bbea4d62d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xr7PRB7Hd7uTEVi/Kqqr6QruQxC5reNs5RNxGaybH98=;
        b=YSZ4LEx/9RYLf/YKEG6tawzieyj/VAOpPCm5Fm982cDV6gY8FOWSj8Iha0Cw2ygemc
         3I0c6r5KExTHw/LJYZsBJHzskVIqgxxuoNNap9FQ4XuW20PKiHcekGBx9346xteGvy0g
         PyEd4u9wNOBOko8Jf1JfFmLyPmRNUFV9L6mCY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xr7PRB7Hd7uTEVi/Kqqr6QruQxC5reNs5RNxGaybH98=;
        b=GJSUst6dBi1yj8raI3H+ZFV//ql/JZH5plef7Gb42If7mB4TvMC0EUojD1V8mbInVC
         y1szwT28ocx4wRlmQumY/0vdWX6bvNSamTrjgiHxhgnzu+DUQ72l44D9v3on0noOkEWx
         0xni3uhB85GA4RLIxQNFCZhvQcHJLOFp/3yemHUmZr3PMODaEdhrh+41zKiUytRSOMyV
         7/qqhtuTzf3g210QTC7gkdrNWZkH+A2N3l7Qjz9jczr3+PmIF72OSDVgyGjmkhqBWoVK
         x9GWvH6uEsaApk0TIoAI4m5jriHd2pWANjRaIDnLBlagPTPcfhbT3mX+M6E5G2aoY+Kv
         XHxw==
X-Gm-Message-State: AOAM5314PHSrhv43dc217KcXxEbY00T+xBEPMU5cUO0Z+KB/cOxIbd0+
	qSZymlxFs4wJS9qUVyNAmqBCbA==
X-Google-Smtp-Source: ABdhPJxAymBN9XfurYEcX4Hnlqi/pyl7ilzdDB9Ga+mX+C72T/+t3ghcDjkBJ1DjqsKqJ9yREhHJyw==
X-Received: by 2002:a62:2682:0:b029:2f4:e1cf:9575 with SMTP id m124-20020a6226820000b02902f4e1cf9575mr8860532pfm.51.1623425295638;
        Fri, 11 Jun 2021 08:28:15 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Fri, 11 Jun 2021 23:26:52 +0800
Message-Id: <20210611152659.2142983-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

A restricted DMA pool provides a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
still needs a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force doesn't check whether
swiotlb_force == SWIOTLB_FORCE; otherwise the memory allocation behavior
with the default swiotlb would be changed by the following patch
("dma-direct: Allocate memory from restricted DMA pool if available").

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 10 +++++++++-
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  1 +
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 06cf17a80f5c..8200c100fe10 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_swiotlb: %true if swiotlb is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -95,6 +96,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_swiotlb;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
 	dev->dma_io_tlb_mem = io_tlb_default_mem;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_swiotlb;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
-static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+static inline bool is_dev_swiotlb_force(struct device *dev)
 {
+	return false;
 }
 static inline void swiotlb_exit(void)
 {
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 21e99907edd6..e5ccc198d0a7 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -714,6 +714,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 			return -ENOMEM;
 
 		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false, true);
+		mem->force_swiotlb = true;
 
 		rmem->priv = mem;
 
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:28:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:28:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140443.259538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj5M-0003V8-1r; Fri, 11 Jun 2021 15:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140443.259538; Fri, 11 Jun 2021 15:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj5L-0003Uu-U6; Fri, 11 Jun 2021 15:28:39 +0000
Received: by outflank-mailman (input) for mailman id 140443;
 Fri, 11 Jun 2021 15:28:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj5K-0001UV-K4
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:28:38 +0000
Received: from mail-pg1-x52b.google.com (unknown [2607:f8b0:4864:20::52b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc6f7187-f193-4ce3-bb8f-39b8d35ecf16;
 Fri, 11 Jun 2021 15:28:25 +0000 (UTC)
Received: by mail-pg1-x52b.google.com with SMTP id n12so2729693pgs.13
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:25 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id u24sm5764598pfm.156.2021.06.11.08.28.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc6f7187-f193-4ce3-bb8f-39b8d35ecf16
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=bmASqPhRufrL8xgJ1lr0aYPEW2vwCzgwqGTzlcTpKYs=;
        b=VEX8i0DIIoAo+ElXJJaiZoSDgtnFLF/I1TK84grpmdSi4SdYlXjC9TQ850lntV+P5S
         831kSO2v3dGqaOBGzqf/letn3Nd8SFR1zZSHZzcvG9QoeKAD5MpmBTmIAKv31z3N4Dib
         pBY3Kg/UV3b1u2uzwExihRDf6okRk8gDOvA7w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bmASqPhRufrL8xgJ1lr0aYPEW2vwCzgwqGTzlcTpKYs=;
        b=ENCqtk5zJUMYyJFyORc69nL7Qf6f7/3FH+uGgaDwTaXv2TrasPrXulWuVvBf/0CBIh
         Ce/FjF2AIObBIzwf9Mxdx0z8blEEAlIt/eYAz6bt2oN1ZoHe0ddev2RvievL0OK19WEz
         OCNr62X/BBD83ydwqjketdU1bB0ckHiHAwSfiebEhVC5oOJOFjtXZlZ4OiNvU12100qB
         smmL4K8AGNWOGBJqjUVyCoPd5bhPyZyBHLC4LGOMQHONHsZsh3EhMi+QXelhTcPq/t1N
         Ew0U3PQxEi75I2BUANHnJ6DMq7qIWIXMbFl0hLIuJ6lKVkg1mj9ptPihJqVX0aHR/IzJ
         +uJQ==
X-Gm-Message-State: AOAM532WGfbUlDzDUB/Uca58r5FWzC2tDWOzt6jSUirDS4CGk7+TuPIj
	t+XQc/BK+6cxyiEpo7EM3+YiVQ==
X-Google-Smtp-Source: ABdhPJxcN6GQyaDPIchNmU5M9fts7obcNJdbHA864T6brE5WTlPTfm/9dx+UosM5tapLbm55ICy5Kg==
X-Received: by 2002:a62:7b4c:0:b029:2e9:cec2:e252 with SMTP id w73-20020a627b4c0000b02902e9cec2e252mr8677730pfc.56.1623425304239;
        Fri, 11 Jun 2021 08:28:24 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots
Date: Fri, 11 Jun 2021 23:26:53 +0800
Message-Id: <20210611152659.2142983-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the maintenance of alloc_size into find_slots() so that the
per-slot bookkeeping can be reused by later patches.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e5ccc198d0a7..364c6c822063 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -486,8 +486,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -542,11 +545,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140465.259554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9i-0005Uz-Ts; Fri, 11 Jun 2021 15:33:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140465.259554; Fri, 11 Jun 2021 15:33:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9i-0005Uk-OY; Fri, 11 Jun 2021 15:33:10 +0000
Received: by outflank-mailman (input) for mailman id 140465;
 Fri, 11 Jun 2021 15:33:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj6D-0001UV-Lh
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:29:33 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 099fc472-430e-4ffa-af91-63eece9c8906;
 Fri, 11 Jun 2021 15:28:59 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id k5so5895578pjj.1
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:59 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id z7sm5637671pgr.28.2021.06.11.08.28.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 099fc472-430e-4ffa-af91-63eece9c8906
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zXV34JP57GpdYi7KETjy6yE9WG6lb3BfY1eExPN5D0k=;
        b=n/l5cUK61jqQlbzQONnRrSTYUM2heU0XHxLqBLodv+ZACW+eYtMX6UrtLKV+DFbDS4
         ciYFCzgOAsbkB8Yxzp2dJIy6q04U2SdNzdIuxnanEC3CRC9P9XtiJVSvAilxoKjJrOtm
         FosJPFUL7qbFgUfC4+wahlbwgRASvGZhQwnMk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zXV34JP57GpdYi7KETjy6yE9WG6lb3BfY1eExPN5D0k=;
        b=pDBoYcjl2UZkIK+oAveBnKJ3nQQIq6jdbT6M+doHoT30gghYcUNSnL8Cbrd0imAp1y
         GvfVjjpXCqNKsCHI6QjLjxQuiflXa+NjPM9V9PqAxwskmo+Rr0TeuWi5Zr60zeZ4k2qS
         qVWXNbV49rESbz2NkOQdauGghnDzosCVVgknLFkXffxB4KTAcGkFW5nhqQ9SDp0hMwCk
         EoJnyyxu0br4kYXf0/YRL39A/fedAM+G2mQw8i1oLOgCvh2DB0cCMZcmc+qjIMFarhjo
         nBys8zoL6aWmBKeG0XcIuw+55AiNbqBgCwMT4H59Ott5g3UigKGcUDwzmgTc60C/gT5i
         w/wg==
X-Gm-Message-State: AOAM530YsxFM0AYYct6maKJ5L+doLxWUspMNMgJf5keV5xz9enUM68wR
	5cJ/hURR7gMnvorpz/vr+C5C1A==
X-Google-Smtp-Source: ABdhPJxO4yId/BXBPVPZ3GpkGGg2f5gxH07V1pBB/gWW/cLxB2G3X66ogJG3rkzaS8olzWg6RRH4fw==
X-Received: by 2002:a17:90a:b10a:: with SMTP id z10mr5159407pjq.226.1623425339045;
        Fri, 11 Jun 2021 08:28:59 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available
Date: Fri, 11 Jun 2021 23:26:57 +0800
Message-Id: <20210611152659.2142983-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The restricted DMA pool is preferred if available.

The restricted DMA pools provide a basic level of protection against a
device DMA'ing over buffer contents at unexpected times. However, to
protect against general data leakage and system memory corruption, the
system needs to provide a way to lock down the memory access, e.g., an
MPU.

Note that since coherent allocations need remapping, one must set up a
separate device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent() instead for atomic coherent allocations.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index eb4098323bbc..73fc4c659ba7 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,9 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +95,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			page = NULL;
+		}
+		return page;
+	}
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140464.259549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9i-0005Sy-Jz; Fri, 11 Jun 2021 15:33:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140464.259549; Fri, 11 Jun 2021 15:33:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9i-0005Sr-Gv; Fri, 11 Jun 2021 15:33:10 +0000
Received: by outflank-mailman (input) for mailman id 140464;
 Fri, 11 Jun 2021 15:33:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj6m-0001UV-NE
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:30:08 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cdb0ed64-1a85-4480-91e0-857f539f2673;
 Fri, 11 Jun 2021 15:29:17 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id z26so4727633pfj.5
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:29:17 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id n11sm5376420pfu.29.2021.06.11.08.29.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:29:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdb0ed64-1a85-4480-91e0-857f539f2673
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=/h+uMK5EvTU9y/lCGU3ZvCTAZDjSKC/jtJCWXvTsWEE=;
        b=HhpdzooM/9OahuuJNkXbvXBqmvqw8FUa6EKj2BaW6crQMvq/ScwxDhTvwyC0rkCxs+
         qdWlvtSqQ4JGBA/bijy6bpuQWOxgyJPJ9pxnXvbdvDkSLuKEZ1SJkRU0zubFn6wd3k5G
         bES3BFVcePGohMbxPDI0s9cBVMuc2zmUCtVw8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=/h+uMK5EvTU9y/lCGU3ZvCTAZDjSKC/jtJCWXvTsWEE=;
        b=NQOmQO3V/DrSV15Y/VuNejiXKY3yD/K6CW3NnW2hu7lkmHscOrgoaEq5rtpgktlnr4
         gCrP+jeW3j1/Ic8fKXQ1f82TBcUJ4qq1wZIuaYIcs/JAm8RLmhjbzbXe20FVjWdow5iO
         L7w8Z5Im4hnSMwEo6rTTrWQKmzPVrmjKTQ3bb0fRUI0d0NmjY/KCn0q1eumYSaDDlvsM
         q6aj/aYsCmCSEhWWLn2zDtstOifOO+S6OqZOnGalJtBg+rY1Y0yEnmTV8dGn3/RNkEbX
         5PWgu+BYrsBi4JDhP5lsOFgrRPSzFsXMg6EEg4FmFUzfqkuoh8OMuJw0mOdogRAV/4up
         5Nyg==
X-Gm-Message-State: AOAM531iRokt+2kAqYePBPmN3Syh7cN62tD1zCssoetuk5b7mf36X+3u
	AgA+p2F6k4U9KrAiBkDerXAAvA==
X-Google-Smtp-Source: ABdhPJxm0sDDpDqGQ8rNYiECVqpZ7eLqxfJ9mC0qI8JX65WyVERI818aqtVbmfi4eNxIlAsNsotgSA==
X-Received: by 2002:a63:e954:: with SMTP id q20mr1993869pgj.332.1623425356990;
        Fri, 11 Jun 2021 08:29:16 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 14/14] of: Add plumbing for restricted DMA pool
Date: Fri, 11 Jun 2021 23:26:59 +0800
Message-Id: <20210611152659.2142983-15-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when a restricted-dma-pool reserved-memory
region is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 3b2acca7e363..c8066d95ff0e 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1001,6 +1002,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1defdf15ba95..ba4656e77502 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -168,6 +168,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	if (IS_ENABLED(CONFIG_SWIOTLB))
 		swiotlb_set_io_tlb_default_mem(dev);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 631489f7f8c0..376462798f7e 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 void fdt_init_reserved_mem(void);
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140473.259571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9p-00064p-Aj; Fri, 11 Jun 2021 15:33:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140473.259571; Fri, 11 Jun 2021 15:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9p-00064e-71; Fri, 11 Jun 2021 15:33:17 +0000
Received: by outflank-mailman (input) for mailman id 140473;
 Fri, 11 Jun 2021 15:33:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj6X-0001UV-MY
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:29:53 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d59accb-0da0-4dad-b2a2-b75a8054572f;
 Fri, 11 Jun 2021 15:29:08 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id
 o10-20020a17090aac0ab029016e92770073so246540pjq.5
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:29:08 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id fw16sm10709535pjb.30.2021.06.11.08.29.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:29:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d59accb-0da0-4dad-b2a2-b75a8054572f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=RsUtJuQ/xhbMtMc9XuxV/0qNZvgSrMkqrnyRrOyme9H1qYO1hZTedsVrCqpq2htHSN
         ZlwkSxjdIBM3PkDZK/6KVtHrZLtJJw6srF6HFJmL1NXvYoRqj5+Sdh8htaj0wdQKo+0g
         e5vmju65RPaMyCngcRLDKqjt2q5GFgsBLXP+0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=TexeFee/ILH/dcYnOYyPDt4EEhv4Pdn8InouuCaxWBLAjiyM+koLoZ3Gsd9717o2V0
         XGAhZBrFzr3EPyWIDzMn9FgimYMmCgMBafpkxpEV4Uqvds12rvaNHjchzyoXEqkuuML5
         XpwfHkLfBNL0X38bXYUCcZsUba8eSL7FK247W0WgWNpL27yBUaGZNkZAZsN4kkqykiqj
         UFtMuCinl471OvYn7sdttvc99ofY/eHoaoU8gFqMcIlatLJdbRLNdLnJ5evKJxvC5DWh
         l2AHHoJjNBe5IX99h/IQcvBdVzKVckH6wf1F6F7QZ8IkrjHs8tLpTs8kh+1r3yWdEuEu
         8buQ==
X-Gm-Message-State: AOAM533wxI/4IJJzcqGW104dXqqQU/yfI92Z0dzmBWIuP8wirg2/qW5Q
	sYnD7dCwka8d35IlT78y890LhQ==
X-Google-Smtp-Source: ABdhPJyzFPVM9UZ5f45yCONws2kuIvnajIULWaoJUEbMf7QEQOvvj6qM2zziKH5TeQV+jYXT9K0GEg==
X-Received: by 2002:a17:90a:e2c1:: with SMTP id fr1mr5051480pjb.83.1623425348249;
        Fri, 11 Jun 2021 08:29:08 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool
Date: Fri, 11 Jun 2021 23:26:58 +0800
Message-Id: <20210611152659.2142983-14-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted DMA.
The address and length of the restricted DMA memory region can be specified
via a restricted-dma-pool child node of the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for a restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.272.g935e593368-goog
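
For reference, a consumer of the new binding might look like the following
sketch (the label, addresses, sizes, and the wifi node are all hypothetical;
the normative example is the one in the patch itself):

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* Pool declared with the new compatible string */
	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x400000>;	/* 4 MiB */
	};
};

wifi@a0000000 {
	/* ... */
	/* Device DMA is confined to the pool above */
	memory-region = <&restricted_dma>;
};
```

Note that, per the binding text, neither no-map nor reusable may be set on
the pool node, since the OS needs a virtual mapping for synchronization.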



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140476.259582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9x-0006Ww-Id; Fri, 11 Jun 2021 15:33:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140476.259582; Fri, 11 Jun 2021 15:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrj9x-0006Wl-F7; Fri, 11 Jun 2021 15:33:25 +0000
Received: by outflank-mailman (input) for mailman id 140476;
 Fri, 11 Jun 2021 15:33:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj5t-0001UV-LA
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:29:13 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe381c5a-0df4-4c6a-8657-b67434233f7b;
 Fri, 11 Jun 2021 15:28:51 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id ei4so5887851pjb.3
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:51 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id p11sm3312697pjj.43.2021.06.11.08.28.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe381c5a-0df4-4c6a-8657-b67434233f7b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=kF6lfFlm47ewlGSwNHdOa0bRh+k/2w/JyGCG+Bsk7jg=;
        b=H/C67oW7mc9oS6nqFrlyQ5bKo2M4IEytOonNmobjWozjPKJmD5foT/Xw7AFSNzd2Jo
         0wjSUN2RW/ppa8UiQ09kGN695gbXZ6TGfgjGghWkAaCeHdylaYggXNn7A87sM831TjH9
         l31C3Fl2eMIXd/0uX15rNLpBd42Axwn6s/C8I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=kF6lfFlm47ewlGSwNHdOa0bRh+k/2w/JyGCG+Bsk7jg=;
        b=YD03UHNz3zMhFquW0h1nVYYD+4Cg2oUn2bsAlDtJZeLe2QDxnporCS40LkyVInGJCH
         8WssS2lbL34EDli0srSKHz9iSUZPTop2tTYXz00Pk44rdq4RUxRKiBCsxtWc4A77Hujr
         lLqCSaHnNIeTVlDNdFJnQEXnhJx1VChtQupeILb1sK8wwitvw0LAEllk6SPQLJTHJe9i
         pQfLYpILjFC/147dJpYKfUBUwCoPZ11p/tfVYffLgLOHxPCRktSdNS7dKGo/1V5RIKqE
         KuljBxXpeUTUPqg1FZ3JojowrYxBwB6PV3s1urbgLiNbvcGVjZC40sJLzlGW3HlP6Nix
         BWfQ==
X-Gm-Message-State: AOAM532zuARjH8LK1hqKd2zQbSOTzuCVhoWihlBosmWPt6D25lHKaBya
	uACdzU7eTKJlsjTDArZGRQghxA==
X-Google-Smtp-Source: ABdhPJwmTa5xH/d7onV720jkujld6c+EE1EDx86RTuZajd94EHAxdcF++t/hrsHfq11H6lXLYCr71w==
X-Received: by 2002:a17:902:d4d0:b029:113:fb3d:3644 with SMTP id o16-20020a170902d4d0b0290113fb3d3644mr4373020plg.58.1623425330334;
        Fri, 11 Jun 2021 08:28:50 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support.
Date: Fri, 11 Jun 2021 23:26:56 +0800
Message-Id: <20210611152659.2142983-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add two functions, swiotlb_alloc() and swiotlb_free(), to support memory
allocation from the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 15 +++++++++++++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8200c100fe10..d3374497a4f8 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -162,4 +162,19 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a6562573f090..0a19858da5b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -461,8 +461,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -702,6 +703,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.32.0.272.g935e593368-goog
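
The find_slots() hunk above adds an `orig_addr &&` guard. The constraint it
guards can be sketched in isolation (the function below is a standalone
illustration, not kernel API): a bounce slot is only usable when its address
agrees with the original buffer's address modulo iotlb_align_mask, and
swiotlb_alloc() passes orig_addr == 0 because an allocation has no original
buffer, so the check must be bypassed for it or most slots would be rejected
spuriously.

```c
/* Standalone sketch of the alignment test guarded in find_slots().
 * Not kernel code; the mask value used in the tests is illustrative. */
#include <stdbool.h>
#include <stdint.h>

static bool slot_usable(uint64_t slot_addr, uint64_t orig_addr,
			uint64_t iotlb_align_mask)
{
	/* Mirrors the patched condition: only apply the alignment
	 * constraint when there is a real original address. */
	if (orig_addr &&
	    (slot_addr & iotlb_align_mask) != (orig_addr & iotlb_align_mask))
		return false;
	return true;
}
```

With a 4 KiB mask, a slot at 0x1800 can bounce a buffer at 0x2800 (same
0x800 offset) but not one at 0x2400, while an allocation (orig_addr == 0)
can use any slot.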



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140481.259593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjA1-00070O-UB; Fri, 11 Jun 2021 15:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140481.259593; Fri, 11 Jun 2021 15:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjA1-000703-PJ; Fri, 11 Jun 2021 15:33:29 +0000
Received: by outflank-mailman (input) for mailman id 140481;
 Fri, 11 Jun 2021 15:33:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj5U-0001UV-KM
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:28:48 +0000
Received: from mail-pf1-x432.google.com (unknown [2607:f8b0:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6d191a7-05e8-4411-9f3f-3305a3a0be42;
 Fri, 11 Jun 2021 15:28:33 +0000 (UTC)
Received: by mail-pf1-x432.google.com with SMTP id h12so4737284pfe.2
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:33 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id j12sm5784068pgs.83.2021.06.11.08.28.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6d191a7-05e8-4411-9f3f-3305a3a0be42
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=DIGkdODP5I9MRpEvhkJ7HW1s6uTYZn4rnCzwtz1SAMY=;
        b=lhAYxyf+2EghTuKgem3YKjDes+j3ljet4Ra8WAuPAmfLRQfXVHLYxk9WtJsBCtY28E
         IuaqxaEDW5QIxfOZvX0LKXau3+uxIIgMfu0x386ocAscsOejf4LbnCS7xFpX/SEkViXk
         qWH7bieG65MLRgMm+LFtf2LJCaC8caomnkZGw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=DIGkdODP5I9MRpEvhkJ7HW1s6uTYZn4rnCzwtz1SAMY=;
        b=M3F8dAyAg8vSFRHNNl+VOiTDtz7ZEzW97p+v8XTNd8PJshx6ZtfHWvfTcqkEIN0ZPB
         TUyeNGtKYpLtcx8Lj4+TmmgIU+l/MrJeNPg7+TjMTbR4MopT4zqN6uApZTcVd3Z02Tor
         FLxCMICa4YBCTW8lCvgSNeiNXgQ6ht2jRUsFcybMnpyyBccJOsejvvrjrZvpytqKXCFx
         kX1dl+Dg78kX+V7wArYdQHk9T2LMbeheWlxxjT6JySy7yrw/J9M/TzD5PFv/vErfSOw2
         prfnS//4vSSryUUehJ3eeqyD3ELY8nhvBGZKtCQ/VwZSG+uUVwXc6USH7iG89B1+rbOH
         P7Xw==
X-Gm-Message-State: AOAM532h7PICvbY83sZT2vMywlfU4QZJ12k/Udq3nmYptf9SsVzA3EB7
	zI6XZATxu0mWjKbKdL9VZzAg9w==
X-Google-Smtp-Source: ABdhPJzQPgrFOqie91DIAajE2oiuNE8F7FW4Yfc8mGamEZxew4gLUUvHH/uFdPCPIOoRHfVNxt8tNA==
X-Received: by 2002:a63:571d:: with SMTP id l29mr4136061pgb.179.1623425312945;
        Fri, 11 Jun 2021 08:28:32 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Fri, 11 Jun 2021 23:26:54 +0800
Message-Id: <20210611152659.2142983-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Factor the slot-release logic out into a new function, release_slots(), to
make the code reusable for different bounce buffer pools, e.g. the restricted
DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 364c6c822063..a6562573f090 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -554,27 +554,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -609,6 +597,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.272.g935e593368-goog
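
The comment retained in release_slots() above describes returning a buffer
"by setting the corresponding entries to indicate the number of contiguous
entries available". That encoding can be sketched standalone (this is a
simplified, single-threaded model, not the kernel code: the kernel also caps
runs at IO_TLB_SEGSIZE and holds mem->lock; NSLOTS and the helper names here
are hypothetical):

```c
/* Free-list model: for each free slot i, list[i] holds the number of
 * contiguous free slots starting at i; 0 means the slot is in use. */
#define NSLOTS 16

static int list[NSLOTS];

/* Return [idx, idx + nslots) to the free list, merging with the free run
 * that follows and extending any free run that precedes it. */
static void release(int idx, int nslots)
{
	int count = (idx + nslots < NSLOTS) ? list[idx + nslots] : 0;

	for (int i = idx + nslots - 1; i >= idx; i--)
		list[i] = ++count;
	for (int i = idx - 1; i >= 0 && list[i]; i--)
		list[i] = ++count;
}

/* Claim [idx, idx + nslots) and shrink the preceding free run. */
static void mark_used(int idx, int nslots)
{
	for (int i = idx; i < idx + nslots; i++)
		list[i] = 0;
	for (int i = idx - 1; i >= 0 && list[i]; i--)
		list[i] = idx - i;
}
```

With all 16 slots free, list[0] is 16; claiming slots 4..7 shrinks the
leading run so list[0] becomes 4, and releasing them merges the runs back
so list[0] is 16 again.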



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140482.259597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjA2-00073S-9a; Fri, 11 Jun 2021 15:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140482.259597; Fri, 11 Jun 2021 15:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjA2-00072d-37; Fri, 11 Jun 2021 15:33:30 +0000
Received: by outflank-mailman (input) for mailman id 140482;
 Fri, 11 Jun 2021 15:33:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrj5j-0001UV-Kj
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:29:03 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fb9e420-300f-44e0-9fdd-b161116f4b25;
 Fri, 11 Jun 2021 15:28:42 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id
 x21-20020a17090aa395b029016e25313bfcso6191315pjp.2
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:28:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:33c8:8e01:1161:6797])
 by smtp.gmail.com with UTF8SMTPSA id m1sm5459163pfc.63.2021.06.11.08.28.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:28:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fb9e420-300f-44e0-9fdd-b161116f4b25
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XS4G5xy6+vfRdhwlX4dDJ0Yj9yl3k4TUpXEyoiYlXdQ=;
        b=aFasplHKLdqDM3eBlOV7pwqTk+B1SD0kHcbylvEQ6ctrIvMXBUr8LejksidOkg8J24
         fcCmbphvGffW+g6oA32rHtV2UUmZpuqwluVETGeSf2V3z0AYNoHtsF328JTdyVv/LiVW
         epogX0cUxaR/GY91cHvrC/Zy6jCT7BOWrG+mU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XS4G5xy6+vfRdhwlX4dDJ0Yj9yl3k4TUpXEyoiYlXdQ=;
        b=umYzmsLpm3FlOra06A69q36/tLTfkA/On3Wl+TKzgc8g+u8h6XcMT5S7L1dGh61zk2
         qlysAE3yy1B/0INEWUFSWwZbhHNcyu9iJ/UWOHJ1fGxo3RYgLmBBVpklC/WRqRLj9BW/
         bXEpowyEUfUtUASVbddUqmWvJ4MIWlreeiCVi1TJ5saNjasR6VIc9Ohp50g7qR8TbZzr
         Gn8Di+Yi7aGkj08ZBE/NhONzHk9AZdgdrkMWKiHJpLu1GkPGXOLr29T3uDBaKktTpAFB
         VTD1MdKSV0qMdro9kM9uloHo6WxEEjJsEgXABGfLSNhjICgP7rQFIHG8fjXuKVlXdL7Q
         s7sw==
X-Gm-Message-State: AOAM530kUY02wsfBDejrSBY5DV3VBKk5HMkJ9XhXSLOJTF9SbTHvVTbT
	zCdhiJm5MsUgbUY853HjKd5oiw==
X-Google-Smtp-Source: ABdhPJwicfQo/0v2MeKUXopLCPgFcwJpi4Wap+QFjjvputvdMnB/ECh+CzhqbhVAhbqd974yX1BCiw==
X-Received: by 2002:a17:90a:ce0b:: with SMTP id f11mr9250452pju.185.1623425321710;
        Fri, 11 Jun 2021 08:28:41 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Fri, 11 Jun 2021 23:26:55 +0800
Message-Id: <20210611152659.2142983-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
References: <20210611152659.2142983-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb4098323bbc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:33:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140490.259615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjAC-00088J-QJ; Fri, 11 Jun 2021 15:33:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140490.259615; Fri, 11 Jun 2021 15:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjAC-000886-LS; Fri, 11 Jun 2021 15:33:40 +0000
Received: by outflank-mailman (input) for mailman id 140490;
 Fri, 11 Jun 2021 15:33:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrjAB-00084A-Iq
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:33:39 +0000
Received: from mail-qt1-x82e.google.com (unknown [2607:f8b0:4864:20::82e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df2190ff-d6eb-4a4e-869d-71111195f6b3;
 Fri, 11 Jun 2021 15:33:38 +0000 (UTC)
Received: by mail-qt1-x82e.google.com with SMTP id o20so2859200qtr.8
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:33:38 -0700 (PDT)
Received: from mail-qk1-f175.google.com (mail-qk1-f175.google.com.
 [209.85.222.175])
 by smtp.gmail.com with ESMTPSA id p13sm4604412qkg.80.2021.06.11.08.33.36
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:33:37 -0700 (PDT)
Received: by mail-qk1-f175.google.com with SMTP id c138so18483830qkg.5
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:33:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df2190ff-d6eb-4a4e-869d-71111195f6b3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=fBuYG3h4tzknol76ldpkR8y5ZC6wvonHUnBR2HL9Wtc=;
        b=M4+LxFlHhc172xr0wDyY3Eu6l5zAWJucjpWrnMkz6xoYVmzJVmf2sjGSgMe9ycmELY
         m5ZJiAX1In+iQOKRwQmSpbPtym+qFrQOUCVwhjVtCNr+j5I7ahondcOGDBRXfde7p3O6
         sP5HLhecC4zQN5hX5jVm8CbTc59EB916NxMBc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=fBuYG3h4tzknol76ldpkR8y5ZC6wvonHUnBR2HL9Wtc=;
        b=mjJZHyXANG7nA/sOEpdGJQVXpq3GK2tGGrGyh5OEI6Lp6IBYTCxsKOnBUowI70eCKa
         waxUQPxrCWcKV7E39yKyl/ShYrg09FK++v3jDgH0i1qfLfKHL7NLpONd3gQv25Z3NcvZ
         X0v5I3DBObPFmZ7lJAj936ofxPB/MfzrzM7Gd2PkHfCpIGKt4B2iK621JRXCeyk9LzaZ
         aaYoT1JuT1ZlhH7OhnXGztGtIMacPEbAkdqOw2HWyFX6nPKx0EJrYsDQmrQ7L7WUISO0
         u29bqWPweGeSF40vLq+/ZbfI66JhARfQ3DUN5ynYFSE3WyA0adcIXISfmUiMIM0WBT7z
         7s+A==
X-Gm-Message-State: AOAM533xeQ2eCU62aL4QFsBnk92c7jCmyjlLhEid0vraXbOZE+L0n8eC
	4CWzPGqrDz6Y4A+PzScDz2KDvl5wu7DCpQ==
X-Google-Smtp-Source: ABdhPJwsV6MSU1nr7Fqavxr1GQQUCPAhqskMUJ6QNXHvsRNSmrnUeIXrqDMF/YsRE5TUzcUkpD48Qw==
X-Received: by 2002:ac8:71d9:: with SMTP id i25mr4376046qtp.385.1623425618082;
        Fri, 11 Jun 2021 08:33:38 -0700 (PDT)
X-Received: by 2002:a02:cc2f:: with SMTP id o15mr4521234jap.3.1623425606067;
 Fri, 11 Jun 2021 08:33:26 -0700 (PDT)
MIME-Version: 1.0
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-4-tientzu@chromium.org>
In-Reply-To: <20210611152659.2142983-4-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 11 Jun 2021 23:33:15 +0800
X-Gmail-Original-Message-ID: <CALiNf2_nzP=qLg5Fqvn3kiaMiaR9r+QJhE3pqypW4FPrgo23DQ@mail.gmail.com>
Message-ID: <CALiNf2_nzP=qLg5Fqvn3kiaMiaR9r+QJhE3pqypW4FPrgo23DQ@mail.gmail.com>
Subject: Re: [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb
 pool used
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

I'm not sure whether this would break arch/x86/pci/sta2x11-fixup.c:
swiotlb_late_init_with_default_size() is called here
https://elixir.bootlin.com/linux/v5.13-rc5/source/arch/x86/pci/sta2x11-fixup.c#L60

On Fri, Jun 11, 2021 at 11:27 PM Claire Chang <tientzu@chromium.org> wrote:
>
> Always have the pointer to the swiotlb pool used in struct device. This
> could help simplify the code for other pools.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  drivers/of/device.c     | 3 +++
>  include/linux/device.h  | 4 ++++
>  include/linux/swiotlb.h | 8 ++++++++
>  kernel/dma/swiotlb.c    | 8 ++++----
>  4 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index c5a9473a5fb1..1defdf15ba95 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
>
>         arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
>
> +       if (IS_ENABLED(CONFIG_SWIOTLB))
> +               swiotlb_set_io_tlb_default_mem(dev);
> +
>         return 0;
>  }
>  EXPORT_SYMBOL_GPL(of_dma_configure_id);
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 4443e12238a0..2e9a378c9100 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -432,6 +432,7 @@ struct dev_links_info {
>   * @dma_pools: Dma pools (if dma'ble device).
>   * @dma_mem:   Internal for coherent mem override.
>   * @cma_area:  Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
>   * @archdata:  For arch-specific additions.
>   * @of_node:   Associated device tree node.
>   * @fwnode:    Associated device node supplied by platform firmware.
> @@ -540,6 +541,9 @@ struct device {
>  #ifdef CONFIG_DMA_CMA
>         struct cma *cma_area;           /* contiguous memory area for dma
>                                            allocations */
> +#endif
> +#ifdef CONFIG_SWIOTLB
> +       struct io_tlb_mem *dma_io_tlb_mem;
>  #endif
>         /* arch specific additions */
>         struct dev_archdata     archdata;
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 216854a5e513..008125ccd509 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -108,6 +108,11 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
>         return mem && paddr >= mem->start && paddr < mem->end;
>  }
>
> +static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +{
> +       dev->dma_io_tlb_mem = io_tlb_default_mem;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -119,6 +124,9 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
>  {
>         return false;
>  }
> +static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +{
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 8a3e2b3b246d..29b950ab1351 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -344,7 +344,7 @@ void __init swiotlb_exit(void)
>  static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
>                            enum dma_data_direction dir)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
>         phys_addr_t orig_addr = mem->slots[index].orig_addr;
>         size_t alloc_size = mem->slots[index].alloc_size;
> @@ -426,7 +426,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
>  static int find_slots(struct device *dev, phys_addr_t orig_addr,
>                 size_t alloc_size)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         unsigned long boundary_mask = dma_get_seg_boundary(dev);
>         dma_addr_t tbl_dma_addr =
>                 phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
> @@ -503,7 +503,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>                 size_t mapping_size, size_t alloc_size,
>                 enum dma_data_direction dir, unsigned long attrs)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>         unsigned int i;
>         int index;
> @@ -554,7 +554,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>                               size_t mapping_size, enum dma_data_direction dir,
>                               unsigned long attrs)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
>         unsigned long flags;
>         unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
>         int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> --
> 2.32.0.272.g935e593368-goog
>


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:38:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:38:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140532.259626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjEl-0001XF-CU; Fri, 11 Jun 2021 15:38:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140532.259626; Fri, 11 Jun 2021 15:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjEl-0001X8-9J; Fri, 11 Jun 2021 15:38:23 +0000
Received: by outflank-mailman (input) for mailman id 140532;
 Fri, 11 Jun 2021 15:38:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrjEk-0001X2-KE
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:38:22 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 346940d3-df7f-42d8-b61d-194b104e03b9;
 Fri, 11 Jun 2021 15:38:21 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id i68so27785135qke.3
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:38:21 -0700 (PDT)
Received: from mail-qk1-f182.google.com (mail-qk1-f182.google.com.
 [209.85.222.182])
 by smtp.gmail.com with ESMTPSA id m6sm1443914qtu.11.2021.06.11.08.38.20
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:38:20 -0700 (PDT)
Received: by mail-qk1-f182.google.com with SMTP id i67so31355426qkc.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:38:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 346940d3-df7f-42d8-b61d-194b104e03b9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=89FDdm5otB6kfSIY1wN8GtI+ciQx5v+glSl+A+k1+x0=;
        b=UdgbVNlCM2Z2m4AZZMvHuq5iZqq6a6B1k1NgNWN8Vbr0FhlXEkOMcXQOmqkTt81W9l
         ZZvZ8qEfilw4IVJQmcyphxlOFoGk0yzalYprjSKjTm0i0iDciVsO4hDbAAQ2CGlI10y9
         ZbSgy3asVidQE8sG2vEkYEnh2JJM0V3DCy6O0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=89FDdm5otB6kfSIY1wN8GtI+ciQx5v+glSl+A+k1+x0=;
        b=YHqtc7qxT7LVIJE3jt3gDQElnWEtvnlL5+sLz9HKx3J16HnYKqt4rpbgNz/Ba4ADG7
         o3xT9Z4CA0TvGQDfYfPZt7PUMW240yh/exKYUaFn24XdA1x2VB4unnc2hGRmZXOyyeFd
         9C7ePzrLGXdEkOylOZ7Wntsv35qAogEojfVvEm/4uSNRy76GgHFn5fp3WrGgp8mLqCEQ
         YzsS1i7Rm2WvK6YUQyHolEZFKmpChK16IhTagcqSSKT9tNmK1shK5zo5yDUu+wT6O/ts
         bb70np+s/+kRH+fdOcjBDgpZ8M/VKgNm189gAqdTDTP24YJAiGKUP/G6ZrYhLAzH/VhI
         OO0g==
X-Gm-Message-State: AOAM531p42v15/ZjRYPqxB//ZLDV6uHzpjvTVFmA8omYscynUG+DU9Mo
	k1UPkHIEd42wV/Gig4rW1R9jAzUwZlojnQ==
X-Google-Smtp-Source: ABdhPJwMC1prIQvPDYrypw8NApA4dBm6Sd0gXpL8QAl8rTywG+b5yUOwR/7o5F0qlG19UvrTnhRvKA==
X-Received: by 2002:a05:620a:2148:: with SMTP id m8mr4447125qkm.190.1623425901157;
        Fri, 11 Jun 2021 08:38:21 -0700 (PDT)
X-Received: by 2002:a05:6638:151:: with SMTP id y17mr4471864jao.128.1623425522838;
 Fri, 11 Jun 2021 08:32:02 -0700 (PDT)
MIME-Version: 1.0
References: <20210527125845.1852284-1-tientzu@chromium.org>
 <20210604174818.GC3703@willie-the-truck> <CALiNf29=z2uBM1ZA_GTu04iFS2dJwH0npdGvid1PL5KQM_HrxA@mail.gmail.com>
In-Reply-To: <CALiNf29=z2uBM1ZA_GTu04iFS2dJwH0npdGvid1PL5KQM_HrxA@mail.gmail.com>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 11 Jun 2021 23:31:52 +0800
X-Gmail-Original-Message-ID: <CALiNf29RGoFq7L+t_Bi6TsE-93-=m49DdV6QrVBV=pvoAjKsvw@mail.gmail.com>
Message-ID: <CALiNf29RGoFq7L+t_Bi6TsE-93-=m49DdV6QrVBV=pvoAjKsvw@mail.gmail.com>
Subject: Re: [PATCH v8 00/15] Restricted DMA
To: Will Deacon <will@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v9 here: https://lore.kernel.org/patchwork/cover/1445081/

On Mon, Jun 7, 2021 at 11:28 AM Claire Chang <tientzu@chromium.org> wrote:
>
> On Sat, Jun 5, 2021 at 1:48 AM Will Deacon <will@kernel.org> wrote:
> >
> > Hi Claire,
> >
> > On Thu, May 27, 2021 at 08:58:30PM +0800, Claire Chang wrote:
> > > This series implements mitigations for lack of DMA access control on
> > > systems without an IOMMU, which could result in the DMA accessing the
> > > system memory at unexpected times and/or unexpected addresses, possibly
> > > leading to data leakage or corruption.
> > >
> > > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> > > not behind an IOMMU. As PCI-e, by design, gives the device full access to
> > > system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> > > to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a
> > > full exploit chain; see also [2], [3]).
> > >
> > > To mitigate the security concerns, we introduce restricted DMA. Restricted
> > > DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> > > specially allocated region and does memory allocation from the same region.
> > > The feature on its own provides a basic level of protection against the DMA
> > > overwriting buffer contents at unexpected times. However, to protect
> > > against general data leakage and system memory corruption, the system needs
> > > to provide a way to restrict the DMA to a predefined memory region (this is
> > > usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> > >
> > > [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> > > [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> > > [2] https://blade.tencent.com/en/advisories/qualpwn/
> > > [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> > > [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
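The bounce-buffering flow described above can be modelled in a few lines (an illustrative toy, not the kernel implementation): streaming DMA is staged through a fixed pool, so the device only ever sees addresses inside the restricted region.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { POOL_SIZE = 4096 };
static uint8_t pool[POOL_SIZE];	/* the "restricted" region */
static size_t pool_used;

/* Map: copy the caller's buffer into the pool; the returned pointer is
 * what would be handed to the device as the DMA address. */
static uint8_t *restricted_map(const void *buf, size_t len)
{
	uint8_t *slot;

	if (pool_used + len > POOL_SIZE)
		return NULL;	/* pool exhausted */
	slot = pool + pool_used;
	pool_used += len;
	memcpy(slot, buf, len);
	return slot;
}

/* Unmap: copy device-written data back to the caller's buffer. */
static void restricted_unmap(void *buf, const uint8_t *slot, size_t len)
{
	memcpy(buf, slot, len);
}
```

A misbehaving device can then scribble only within the pool, which is the basic protection the series provides; keeping the device from DMAing outside the pool at all is left to firmware (e.g. an MPU), as noted above.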
> > >
> > > v8:
> > > - Fix reserved-memory.txt and add the reg property in example.
> > > - Fix sizeof for of_property_count_elems_of_size in
> > >   drivers/of/address.c#of_dma_set_restricted_buffer.
> > > - Apply Will's suggestion to try the OF node having DMA configuration in
> > >   drivers/of/address.c#of_dma_set_restricted_buffer.
> > > - Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
> > > - Add error message for PageHighMem in
> > >   kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
> > >   rmem_swiotlb_setup.
> > > - Fix the message string in rmem_swiotlb_setup.
> >
> > Thanks for the v8. It works for me out of the box on arm64 under KVM, so:
> >
> > Tested-by: Will Deacon <will@kernel.org>
> >
> > Note that something seems to have gone wrong with the mail threading, so
> > the last 5 patches ended up as a separate thread for me. Probably worth
> > posting again with all the patches in one place, if you can.
>
> Thanks for testing.
>
> Christoph also added some comments in v7, so I'll prepare v9.
>
> >
> > Cheers,
> >
> > Will


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 15:41:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 15:41:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140537.259636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjHM-0002sH-Qw; Fri, 11 Jun 2021 15:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140537.259636; Fri, 11 Jun 2021 15:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrjHM-0002sA-Nq; Fri, 11 Jun 2021 15:41:04 +0000
Received: by outflank-mailman (input) for mailman id 140537;
 Fri, 11 Jun 2021 15:41:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xgm=LF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lrjHL-0002s4-V5
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 15:41:04 +0000
Received: from mail-qt1-x836.google.com (unknown [2607:f8b0:4864:20::836])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cdbc737-169c-45cb-99a1-c07902716ef5;
 Fri, 11 Jun 2021 15:41:03 +0000 (UTC)
Received: by mail-qt1-x836.google.com with SMTP id z4so2899814qts.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:41:03 -0700 (PDT)
Received: from mail-qt1-f176.google.com (mail-qt1-f176.google.com.
 [209.85.160.176])
 by smtp.gmail.com with ESMTPSA id f19sm4870665qkg.70.2021.06.11.08.41.02
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 08:41:02 -0700 (PDT)
Received: by mail-qt1-f176.google.com with SMTP id z4so2899768qts.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 08:41:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cdbc737-169c-45cb-99a1-c07902716ef5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=AzuwWZVb0hvYDmhYzyjXOdcnRd7gmJwLqOExosCeDwM=;
        b=IRqkW7akj902Qmh2dpTJs35MbbbyYdNh845FqGa90ZgwGlX/fY4AEM0E2Tq/OkUGrx
         VVpf/9sUsDrF+Qmi4N27tWzf7jckOIK7UvGuS34GnQEYg0G4BU1Idcj7Q7ryrjHTdpBh
         ql1llzJVDf1YUXhBMlreUDeIOuUDsqa6G9X1Y=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=AzuwWZVb0hvYDmhYzyjXOdcnRd7gmJwLqOExosCeDwM=;
        b=aMO1oY8LNC4KJQD42BQ5HzKnJI6lhnRbzdDnHBDw33Km9a6tM6+Ghqq3F00f3vNtTJ
         2CFIxm6w6YpMKSHrhwJQfbqRyXLSZ0kmBV3ci8LDNXVfjFewpSapC+tPks1Cp2I+4FS2
         FEjnC/Xqz+6Rabf5lyHllFlYAkLX5n7DEPZ51Ehtd7B2Aq0MPSRD6fd4c8SpyPMh2Bar
         TMkZ23V97yVSzZvDYSFP/tDjeSOyDvtkijJqJBPAql+5jIsxgJazfdhEWLR90bV5AtH3
         PRskLDCDofo3QlvAkR0QPeocmltXYYhYJql/mC+hR/DsEKe08SdGCoV+HRufwKfzDcen
         H+uw==
X-Gm-Message-State: AOAM531tQDQ/36RdzYS5yV6WsURygEURYW8Fl4JhAa+JpBT9zb2kqQxc
	e71YBquflgUlNoJ4riBtneMZmFX9sQzKAg==
X-Google-Smtp-Source: ABdhPJy+K/e9SZ+PNHtiIxoPlJOyjvO1tUHXGgRmgFdTtjZn0JZczckspo7kuHeWZa797rdjjJuS1A==
X-Received: by 2002:ac8:7699:: with SMTP id g25mr4327188qtr.309.1623426062931;
        Fri, 11 Jun 2021 08:41:02 -0700 (PDT)
X-Received: by 2002:a05:6638:e81:: with SMTP id p1mr4464281jas.84.1623425710461;
 Fri, 11 Jun 2021 08:35:10 -0700 (PDT)
MIME-Version: 1.0
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-7-tientzu@chromium.org>
In-Reply-To: <20210611152659.2142983-7-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 11 Jun 2021 23:34:59 +0800
X-Gmail-Original-Message-ID: <CALiNf28OT2L1qcH4dKK8mb0=uCyaKDHW7r=LrC9MTJw0PSzSbw@mail.gmail.com>
Message-ID: <CALiNf28OT2L1qcH4dKK8mb0=uCyaKDHW7r=LrC9MTJw0PSzSbw@mail.gmail.com>
Subject: Re: [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a
 struct device argument
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

I don't have the HW to verify the change. Hopefully I used the right
device struct for is_swiotlb_active().


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:36:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:36:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140550.259665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9H-0000E8-Ff; Fri, 11 Jun 2021 16:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140550.259665; Fri, 11 Jun 2021 16:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9H-0000Dz-CZ; Fri, 11 Jun 2021 16:36:47 +0000
Received: by outflank-mailman (input) for mailman id 140550;
 Fri, 11 Jun 2021 16:36:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9G-0008Of-5R
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:36:46 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d874748-0eb0-41e7-bfe0-e803891e92b6;
 Fri, 11 Jun 2021 16:36:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d874748-0eb0-41e7-bfe0-e803891e92b6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429400;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=IFZX+h/NDurQeM2yQU3pr/ixDTqiEiHIAboJj1hyhp4=;
  b=H90I6vXRjFrL1ZQ35dIRuffGdJz363fl6rligaEvX4mXuXOX5D5Hut/o
   JPLbs9qfvCQVnz9BkObO5L8CYoJF7GwxIKzLcY1oEV7RyXFS10v1Thw3b
   OVXPEzHCHGPy0awpSNe6UOQ/ma9NiArD+4DJIoA5jVP4O3EZ1ZbiqcpNH
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: xrSGWtslKYaKBq4K6IL5VyQMKgp6haUR+g8xAsBarllu5V+3lM9qe7FIAw+6U5RjHBW0ycZywg
 FDCWD1a1Z5OWnlkIg4kuVGnpGqIfZvioF/QGFjiwS24CtJ8wrfXCeLxfahX8ID5m8RldEBiXE9
 ahR5WaN3SDqohtxF8F07vO0Ls3QQBCkBZx1zrYpv0DlTAst3fuNmTAmFtyo9CwTuoiLfMgXF8P
 6sD+DvxQvW0jd9pPyPk3A9Cp/0elYQRy0Hx+s3W4JugOxcjXNyRh3ryEDpEBGxHw2sHfVQYrI3
 nxk=
X-SBRS: 5.1
X-MesageID: 45692781
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:S3ZKOa2L7Fr689kvE6CDQgqjBcpxeYIsimQD101hICG9Lfbo8v
 xGzc5rtyMc1gxhO03IwerwSZVohEmsgaKdkrNhTYtKPTOGhILMFupfBOTZskDd8kHFh4hgPO
 JbAtZD4b7LfBpHZKTBkXWF+r8bqbHsnc7J9IOuqEuBVTsEV0gj1XYHNu/yKDwxeOAsP+tAKH
 Po3Ls8m9PWQwVtUi3UPAh9Y8Hz4/fMmZ7afxhDIxI88gGBgROEgYSKVySw71M1VT5C/KklyH
 PCmQDi/Kmv2svLjSM041Wjtqi+1eGRlueqy6S3+4YowmGHsGqVTbUkf4fHkCE+oemp5lpvus
 LLuQ0cM8N67G6UVn2poDP2sjOQhQoG2jvH8xu1kHHjqcv2SHYREMxan79UdRPf9g4JoMx86q
 RWxGiU3qAnXy8opB6Ns+QgaisaxnZc4EBSwNL7tkYvD7f2vYUh/rD2/ytuYdo99WzBmcVXR9
 WHyqnnla5rmBihHgPkV1JUsZWRtq5aJGbcfqFLgL3m79F3pgEi86JK/r1Dop/3nKhNBKWt2Y
 z/Q+9VfEYndL5bUUs6PpZffSP8YFa9NC7kISaXOxDqBasHM3XCp9r+56g0/vijfNgNwIEpkJ
 rMXVtEvSpqEnieSfGmzdlO6FTAUW+9VTPixoVX4IV4oKT1QP7uPTeYQF4jnsO8q7EUA9HdWf
 y0JJVKasWTbFcGPLw5nDEWaqMiY0X2CvdlzOrTc2j+6v4jBLeawdDmTA==
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45692781"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/5] x86/msr: Expose MSR_ARCH_CAPS in the raw and host policies
Date: Fri, 11 Jun 2021 17:36:25 +0100
Message-ID: <20210611163627.4878-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-1-andrew.cooper3@citrix.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

MSR_ARCH_CAPS is not yet supported for guests (other than the hardware
domain), and won't be until the toolstack learns how to construct an MSR policy.

However, we specifically want access to the host ARCH_CAPS_TSX_CTRL value for
testing purposes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/msr.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 374f92b2c5..6dbb4744e7 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -47,8 +47,13 @@ struct msr_policy __read_mostly hvm_def_msr_policy;
 
 static void __init calculate_raw_policy(void)
 {
+    struct msr_policy *mp = &raw_msr_policy;
+
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
+
+    if ( cpu_has_arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, mp->arch_caps.raw);
 }
 
 static void __init calculate_host_policy(void)
@@ -60,6 +65,11 @@ static void __init calculate_host_policy(void)
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
     mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
+
+    mp->arch_caps.raw &=
+        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
+         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO);
 }
 
 static void __init calculate_pv_max_policy(void)
@@ -67,6 +77,8 @@ static void __init calculate_pv_max_policy(void)
     struct msr_policy *mp = &pv_max_msr_policy;
 
     *mp = host_msr_policy;
+
+    mp->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -84,6 +96,8 @@ static void __init calculate_hvm_max_policy(void)
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     mp->platform_info.cpuid_faulting = true;
+
+    mp->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_hvm_def_policy(void)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:36:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140549.259654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9D-0008Os-80; Fri, 11 Jun 2021 16:36:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140549.259654; Fri, 11 Jun 2021 16:36:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9D-0008Ol-4i; Fri, 11 Jun 2021 16:36:43 +0000
Received: by outflank-mailman (input) for mailman id 140549;
 Fri, 11 Jun 2021 16:36:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9B-0008Of-AX
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:36:41 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe792084-a39e-4b89-bba2-436f0f98475f;
 Fri, 11 Jun 2021 16:36:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe792084-a39e-4b89-bba2-436f0f98475f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429399;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=k+DAiAmuwhraKaXytC6IFs7/40wbglrQbj+JD+BONFk=;
  b=Bv2Uhvdn8lC3raRHUvxODZxj2w6ezybcGz+MM5Qfx8X5WQi4dhEy5Qlh
   soeEzu/5VZBek55BSePKqKqqDfVywhKAXTMFdx78uGRzc9Lu09Zz6QSae
   F6UT/PLleAnDg7OAp5ClEriZbiiL1TjKQ/L27j/mumDHLa+1cRKepAdtD
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: zxQ5/xyaT28GDUXdUOdrAQioIp0Rn9I7Sb0Ad93qqhg4wRiWJg+ButUdEBLV8rk10x8VBE3OGV
 le8OdVGSshE/HhU9OZHaRO1IEDk1ITuqTzsNebcGyEOSG+8i8Rh1kY11opZGpV8iowLZZH9Csg
 WPJcFBlkDClS/A040ymUau0iplI2FPzH6bDXeu3rZSFiSSmC7sm1ougXMKLf6XemUZvtsjTH75
 ikrgaZSPOLQqn/PHnQ+tWDPFrQEVzr5ockZ8J6Kmphtnv3sYoAgVJWxPvtYJ/tuPJpKnEBnKcA
 pmM=
X-SBRS: 5.1
X-MesageID: 45958352
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:QCa4YqHzEo7/RYhCpLqE5MeALOsnbusQ8zAXP0AYc3Jom6uj5q
 eTdZUgpHvJYVkqOE3I9ertBEDiewK4yXcW2/hzAV7KZmCP0wHEEGgL1/qF/9SKIUzDH4Bmup
 uIC5IOauHNMQ==
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45958352"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/5] x86/tsx: Consistency and settings test
Date: Fri, 11 Jun 2021 17:36:22 +0100
Message-ID: <20210611163627.4878-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See patch 5 for details.

Andrew Cooper (5):
  x86/platform: Improve MSR permission handling for XENPF_resource_op
  x86/platform: Permit reading the TSX control MSRs via XENPF_resource_op
  x86/msr: Expose MSR_ARCH_CAPS in the raw and host policies
  libs/guest: Move struct xc_cpu_policy into xg_private.h
  tests: Introduce a TSX test

 tools/libs/guest/xg_cpuid_x86.c   |  11 +-
 tools/libs/guest/xg_private.h     |   9 +
 tools/tests/Makefile              |   1 +
 tools/tests/tsx/.gitignore        |   1 +
 tools/tests/tsx/Makefile          |  43 ++++
 tools/tests/tsx/test-tsx.c        | 474 ++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/msr.c                |  14 ++
 xen/arch/x86/platform_hypercall.c |  47 +++-
 xen/arch/x86/psr.c                |   2 +-
 xen/include/asm-x86/cpufeature.h  |   1 +
 10 files changed, 581 insertions(+), 22 deletions(-)
 create mode 100644 tools/tests/tsx/.gitignore
 create mode 100644 tools/tests/tsx/Makefile
 create mode 100644 tools/tests/tsx/test-tsx.c

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:36:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140552.259686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9R-0000sa-1Y; Fri, 11 Jun 2021 16:36:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140552.259686; Fri, 11 Jun 2021 16:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9Q-0000sT-UY; Fri, 11 Jun 2021 16:36:56 +0000
Received: by outflank-mailman (input) for mailman id 140552;
 Fri, 11 Jun 2021 16:36:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9Q-0008Of-5n
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:36:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d8aae90-1e31-4eb7-9fba-e8c498c27063;
 Fri, 11 Jun 2021 16:36:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d8aae90-1e31-4eb7-9fba-e8c498c27063
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429401;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Np7xHnAwFYgIgfHxRggucz+t8E1POmf6vghJiDzXVc0=;
  b=I7aFmaZViIjLIFqyHfg8D/XrDtqdK2R4PpDoywT8SA2EcMIDG7xq/rIr
   69oMp8+rEP+GBAZ8eV3UcAAgCP9M32O3KFX8eBfDxNjnP2XBH0Qq4A8g+
   wNHvchXO7B0x4kbcwPTL+c/W0x06lne/VqM4HXFQJKPbPSmuqJW8wQvUq
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ahNr86/5GpRD2MjpA0P4eHkwCelEh2JvUVuaZT/ERuzqDxbDXJMek1xEwKioczOC9DAiGKkh8+
 B2VJVi+aHKswMCOoSsmxIXbd0+j81MrnO5LRsaC4mTSqASWJ8wKWbYjykst/eDLLsjKlW4EYpG
 lrcFtQ0fmzZMKmZqeMwUGOcsFZyw3CcI4Lj9UyjMuaeZlNxBCAz3sNjhXDeh2I0pOYcBIsMJX+
 3ZrYAA+jy43weiXY3R4za3dV4gN2iO/DJV89NjKvDjiwUoNT1+OrU4ONmKeskjh0qNFiFOqPgs
 0Ng=
X-SBRS: 5.1
X-MesageID: 45958354
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:NQ5ZkKprXq0NUdKbAW16AaIaV5oReYIsimQD101hICG8cqSj9v
 xG+85rrSMc6QxhIU3I9urwW5VoLUmyyXcx2/h0AV7AZniBhILLFvAB0WKK+VSJcEeSmtK1l5
 0QFJSWYOeAdmSS5vyb3ODXKbgdKaG8gcWVuds=
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45958354"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/5] x86/platform: Improve MSR permission handling for XENPF_resource_op
Date: Fri, 11 Jun 2021 17:36:23 +0100
Message-ID: <20210611163627.4878-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-1-andrew.cooper3@citrix.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The logic to disallow writes to the TSC is out-of-place, and should be in
check_resource_access() rather than in resource_access().

Split the existing allow_access_msr() into two - msr_{read,write}_allowed() -
and move all permission checks there.

Furthermore, guard access to MSR_IA32_CMT_{EVTSEL,CTR} to prohibit their use
on hardware lacking the QoS Monitoring feature.  Introduce cpu_has_pqe to
help with the logic.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/platform_hypercall.c | 41 ++++++++++++++++++++++++++++-----------
 xen/arch/x86/psr.c                |  2 +-
 xen/include/asm-x86/cpufeature.h  |  1 +
 3 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 23fadbc782..41d8e59563 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -64,17 +64,33 @@ long cpu_frequency_change_helper(void *data)
     return cpu_frequency_change((uint64_t)data);
 }
 
-static bool allow_access_msr(unsigned int msr)
+static bool msr_read_allowed(unsigned int msr)
 {
     switch ( msr )
     {
-    /* MSR for CMT, refer to chapter 17.14 of Intel SDM. */
     case MSR_IA32_CMT_EVTSEL:
     case MSR_IA32_CMT_CTR:
+        return cpu_has_pqe;
+
     case MSR_IA32_TSC:
         return true;
     }
 
+    if ( ppin_msr && msr == ppin_msr )
+        return true;
+
+    return false;
+}
+
+static bool msr_write_allowed(unsigned int msr)
+{
+    switch ( msr )
+    {
+    case MSR_IA32_CMT_EVTSEL:
+    case MSR_IA32_CMT_CTR:
+        return cpu_has_pqe;
+    }
+
     return false;
 }
 
@@ -96,15 +112,19 @@ void check_resource_access(struct resource_access *ra)
         switch ( entry->u.cmd )
         {
         case XEN_RESOURCE_OP_MSR_READ:
-            if ( ppin_msr && entry->idx == ppin_msr )
-                break;
-            /* fall through */
+            if ( entry->idx >> 32 )
+                ret = -EINVAL;
+            else if ( !msr_read_allowed(entry->idx) )
+                ret = -EPERM;
+            break;
+
         case XEN_RESOURCE_OP_MSR_WRITE:
             if ( entry->idx >> 32 )
                 ret = -EINVAL;
-            else if ( !allow_access_msr(entry->idx) )
-                ret = -EACCES;
+            else if ( !msr_write_allowed(entry->idx) )
+                ret = -EPERM;
             break;
+
         default:
             ret = -EOPNOTSUPP;
             break;
@@ -163,12 +183,11 @@ void resource_access(void *info)
                 }
             }
             break;
+
         case XEN_RESOURCE_OP_MSR_WRITE:
-            if ( unlikely(entry->idx == MSR_IA32_TSC) )
-                ret = -EPERM;
-            else
-                ret = wrmsr_safe(entry->idx, entry->val);
+            ret = wrmsr_safe(entry->idx, entry->val);
             break;
+
         default:
             BUG();
             break;
diff --git a/xen/arch/x86/psr.c b/xen/arch/x86/psr.c
index d7f8864651..d805b85dc6 100644
--- a/xen/arch/x86/psr.c
+++ b/xen/arch/x86/psr.c
@@ -1558,7 +1558,7 @@ static void psr_cpu_init(void)
     struct cpuid_leaf regs;
     uint32_t feat_mask;
 
-    if ( !psr_alloc_feat_enabled() || !boot_cpu_has(X86_FEATURE_PQE) )
+    if ( !psr_alloc_feat_enabled() || !cpu_has_pqe )
         goto assoc_init;
 
     if ( boot_cpu_data.cpuid_level < PSR_CPUID_LEVEL_CAT )
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index a539a4bacd..5f6b83f71c 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -94,6 +94,7 @@
 #define cpu_has_bmi2            boot_cpu_has(X86_FEATURE_BMI2)
 #define cpu_has_invpcid         boot_cpu_has(X86_FEATURE_INVPCID)
 #define cpu_has_rtm             boot_cpu_has(X86_FEATURE_RTM)
+#define cpu_has_pqe             boot_cpu_has(X86_FEATURE_PQE)
 #define cpu_has_fpu_sel         (!boot_cpu_has(X86_FEATURE_NO_FPU_SEL))
 #define cpu_has_mpx             boot_cpu_has(X86_FEATURE_MPX)
 #define cpu_has_avx512f         boot_cpu_has(X86_FEATURE_AVX512F)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:36:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140551.259676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9L-0000XM-OK; Fri, 11 Jun 2021 16:36:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140551.259676; Fri, 11 Jun 2021 16:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9L-0000X8-Ks; Fri, 11 Jun 2021 16:36:51 +0000
Received: by outflank-mailman (input) for mailman id 140551;
 Fri, 11 Jun 2021 16:36:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9L-0008Of-5W
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:36:51 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 051cac6f-9077-4608-9d60-e1b53a200168;
 Fri, 11 Jun 2021 16:36:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 051cac6f-9077-4608-9d60-e1b53a200168
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429401;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=j0PLpWa/uAzrwIeuD9efCXaNBu+kISYj0KlPiUCAb5o=;
  b=Emx2XDaGGGOSeHTndNeLi6UrqTNH33e9r6t6VIN9QbzfgdjU1IDoBsus
   8Oqu8cs7bFuSh+dAX2vXl+G1AuMp+TPQAJdESzxd5BFk8aesADSGL2lj7
   WEpsCVFaWMPZ3t2qXOknBeLHtb6rQqSo5L61VjHkGsLI4jWEs2DhMrKVg
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KHWKseF7XjIeVzjmp3F1vZTuP1yEelPBkcq1vTL/hEpqnckMxU2Lb/l5h5Kto+YsIWPmJrSaR/
 t75CE0FdW2AEikEQirZjHAj/N3mHgEwImpOaA7NFgFcd7+SBMigOZrZruBkaPFsybFAjh0TxzU
 cH81rl7+m/7JORFCqrkgdDhBNfNUxLpKYuv5YuNIeQtxE39wk78l2i8Y5q2rdqdooLTHe8bGOD
 SlSlJ9WhcVY1saKKLUbVg0LVwp8R7WBIDn+WdP8xHt26Dwg+y4woSF6uMtaguxd/qPMnDb1VDu
 gU0=
X-SBRS: 5.1
X-MesageID: 45958353
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:InE0Sa2ax5NQUwOoxYFI5QqjBIgkLtp133Aq2lEZdPRUGvb4qy
 nIpoVi6faUskdpZJhOo6HiBEDtexzhHNtOkO0s1NSZLW/bUQmTXeNfBOLZqlWKcUCTygce79
 YGT0EXMqyKMbEQt6bHCWeDferIuOP3lZyVuQ==
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45958353"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/5] x86/platform: Permit reading the TSX control MSRs via XENPF_resource_op
Date: Fri, 11 Jun 2021 17:36:24 +0100
Message-ID: <20210611163627.4878-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-1-andrew.cooper3@citrix.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We are going to want this in order to write some tests.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/platform_hypercall.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 41d8e59563..284c2dfb9e 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -74,6 +74,12 @@ static bool msr_read_allowed(unsigned int msr)
 
     case MSR_IA32_TSC:
         return true;
+
+    case MSR_TSX_FORCE_ABORT:
+        return cpu_has_tsx_force_abort;
+
+    case MSR_TSX_CTRL:
+        return cpu_has_tsx_ctrl;
     }
 
     if ( ppin_msr && msr == ppin_msr )
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:37:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140553.259698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9W-0001I2-BF; Fri, 11 Jun 2021 16:37:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140553.259698; Fri, 11 Jun 2021 16:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9W-0001Hm-7u; Fri, 11 Jun 2021 16:37:02 +0000
Received: by outflank-mailman (input) for mailman id 140553;
 Fri, 11 Jun 2021 16:37:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9V-0008Of-68
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:37:01 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eddaf960-55c5-49ef-860c-5891f8024d07;
 Fri, 11 Jun 2021 16:36:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eddaf960-55c5-49ef-860c-5891f8024d07
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429402;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ljXhb6YM4lTDZym7VtQuZUsyYm1RmolfquSRiw5dDh4=;
  b=BBpRcOBE4g+G58uI34dDPbF1I0s3Ecdtle9mxDs2b07fSUzsidCDLHgg
   DShL05T6rCoBhygMsvbcFK9E80fc3mV3uZN4YrF1xCs1PIZClccmz6MAv
   qMbwHRfuCmXp8T9T/R+Sk1OGGxmTSH6toh+/vLcHpc82LThAuqniin1C2
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cVXV/GDTpGt05FzP2P3Fqd/JusJOCA8XM38W8JEcf8HkKiLsH7OynA+T6mUeldooGDkaMe/bhw
 rmIb1ZtSXBKW625S4pxahD9g3/vLYrhLRUcmiDXlmNaeY0l584g3oGUkEcck91OpqZd72d62ku
 3AQxpkkjABtQ9A3CxuOEUEpyfIp8/qSy1UctNqoNhMDnCwDz2uVzuatKUDNKRc86HVV27Y29YR
 VcxOfMybBZmQDihTdsuwheX5O66gbUXfg9zvGxtUpe6KV/JGkCwuQIU9HvfvafdZBJ1iix+It0
 L5w=
X-SBRS: 5.1
X-MesageID: 45958359
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:OGHUaqtjHh3KJMz4Abfz2WR77skDTNV00zEX/kB9WHVpmszxra
 GTdZMgpGfJYVcqKQgdcL+7Scq9qB/nmqKdpLNhWYtKPzOW3ldATrsSj7cKqgeIc0aVm4JgPO
 VbAs9D4bXLfCNHZK3BgDVQfexP/DD+ytHMudvj
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45958359"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into xg_private.h
Date: Fri, 11 Jun 2021 17:36:26 +0100
Message-ID: <20210611163627.4878-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-1-andrew.cooper3@citrix.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

... so that tests can peek at the internals without the structure being
generally available to users of the library.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/guest/xg_cpuid_x86.c | 11 +----------
 tools/libs/guest/xg_private.h   |  9 +++++++++
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index ec5a47fde4..e01d657e03 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -22,7 +22,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <limits.h>
-#include "xc_private.h"
+#include "xg_private.h"
 #include "xc_bitops.h"
 #include <xen/hvm/params.h>
 #include <xen-tools/libs.h>
@@ -34,18 +34,9 @@ enum {
 
 #include <xen/asm/x86-vendors.h>
 
-#include <xen/lib/x86/cpu-policy.h>
-
 #define bitmaskof(idx)      (1u << ((idx) & 31))
 #define featureword_of(idx) ((idx) >> 5)
 
-struct xc_cpu_policy {
-    struct cpuid_policy cpuid;
-    struct msr_policy msr;
-    xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
-    xen_msr_entry_t entries[MSR_MAX_SERIALISED_ENTRIES];
-};
-
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps)
 {
     DECLARE_SYSCTL;
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 03d765da21..59909d2a2c 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -33,6 +33,8 @@
 #include <xen/elfnote.h>
 #include <xen/libelf/libelf.h>
 
+#include <xen/lib/x86/cpu-policy.h>
+
 #ifndef ELFSIZE
 #include <limits.h>
 #if UINT_MAX == ULONG_MAX
@@ -168,4 +170,11 @@ int pin_table(xc_interface *xch, unsigned int type, unsigned long mfn,
 #define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT)
 #define M2P_CHUNKS(_m)  (M2P_SIZE((_m)) >> M2P_SHIFT)
 
+struct xc_cpu_policy {
+    struct cpuid_policy cpuid;
+    struct msr_policy msr;
+    xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
+    xen_msr_entry_t entries[MSR_MAX_SERIALISED_ENTRIES];
+};
+
 #endif /* XG_PRIVATE_H */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 16:37:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 16:37:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140556.259709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9b-0001nF-Nf; Fri, 11 Jun 2021 16:37:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140556.259709; Fri, 11 Jun 2021 16:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrk9b-0001n5-I3; Fri, 11 Jun 2021 16:37:07 +0000
Received: by outflank-mailman (input) for mailman id 140556;
 Fri, 11 Jun 2021 16:37:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pdcj=LF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lrk9a-0008Of-6J
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 16:37:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a8627c0-347f-40e2-ae5d-ec513bc0a140;
 Fri, 11 Jun 2021 16:36:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a8627c0-347f-40e2-ae5d-ec513bc0a140
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623429402;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SumFpR+oY+hds6iIruCSqHSplJXSvo3V45dPCKzNM9s=;
  b=YHk0ZrugOQljfMuEviKgBZUaXVnObaKbU2o4FQlT2UUPBcsgX8weZD6u
   l/vI9PzL386w8DZy+DgUoxTjRbq/HQjiqWqcPsew3MuK18R0IPV6JvPH7
   KryQSiX+WNMB6/Yj/dRdc2WmRtYjgcsXmFcUwOVVQRWgau239LXgibRVR
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MdZ0hv6Rnx6nFYzpWf79xr1ooGPl9paJq9x07BdZ32fjjTBGHYCgCMvE4CCdlrh4fYLoWjB+ue
 9BzEbHUDpmcwVv+oTt5psYGh8ElbhM8GL9Md1Vvrh+bR+DOloM+A6w93DOH7NeKz1eF+tArmRl
 7PYxtCEVyLsKx3sUbjoAkNPZBvp12nc7BGfo6RvuiWNA/bXMoctHmhhFRyLH8X+I0XbMoyqx8Z
 biISeW4nAITWTjTfoB9IjKxC5A+9uSSkCzZmB+X+zDa2yRkjo3YMRcAUtpBAqGs0IX2r1lTTkt
 DgE=
X-SBRS: 5.1
X-MesageID: 45958357
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:B1fo8KzIEkVSU3WJsJIUKrPwFr1zdoMgy1knxilNoRw8SK2lfq
 eV7YwmPH7P+U8ssR4b6LO90cW7Lk80sKQFhbX5Xo3SOjUO2lHYTr2KhLGKq1aLdkHDH6xmpM
 BdmsBFeabN5DNB7foSjjPXLz9Z+qjjzJyV
X-IronPort-AV: E=Sophos;i="5.83,265,1616472000"; 
   d="scan'208";a="45958357"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 5/5] tests: Introduce a TSX test
Date: Fri, 11 Jun 2021 17:36:27 +0100
Message-ID: <20210611163627.4878-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-1-andrew.cooper3@citrix.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See the comment at the top of test-tsx.c for details.

This covers various complexities encountered while trying to address the
recent TSX deprecation on client parts.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/tests/Makefile       |   1 +
 tools/tests/tsx/.gitignore |   1 +
 tools/tests/tsx/Makefile   |  43 ++++
 tools/tests/tsx/test-tsx.c | 474 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 519 insertions(+)
 create mode 100644 tools/tests/tsx/.gitignore
 create mode 100644 tools/tests/tsx/Makefile
 create mode 100644 tools/tests/tsx/test-tsx.c

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 8746aabe6b..25531a984a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -5,6 +5,7 @@ SUBDIRS-y :=
 SUBDIRS-y += resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
 SUBDIRS-$(CONFIG_X86) += mce-test
+SUBDIRS-$(CONFIG_X86) += tsx
 ifneq ($(clang),y)
 SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
diff --git a/tools/tests/tsx/.gitignore b/tools/tests/tsx/.gitignore
new file mode 100644
index 0000000000..97ec4db7ff
--- /dev/null
+++ b/tools/tests/tsx/.gitignore
@@ -0,0 +1 @@
+test-tsx
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
new file mode 100644
index 0000000000..7381a4f5a4
--- /dev/null
+++ b/tools/tests/tsx/Makefile
@@ -0,0 +1,43 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+TARGET := test-tsx
+
+.PHONY: all
+all: $(TARGET)
+
+.PHONY: run
+run: $(TARGET)
+	./$(TARGET)
+
+.PHONY: clean
+clean:
+	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+	$(RM) -f -- *~
+
+.PHONY: install
+install: all
+
+.PHONY: uninstall
+uninstall:
+
+CFLAGS += -Werror -std=gnu11
+CFLAGS += $(CFLAGS_xeninclude)
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenguest)
+CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenctrl)
+LDFLAGS += $(LDLIBS_libxenguest)
+LDFLAGS += $(APPEND_LDFLAGS)
+
+test-tsx.o: Makefile
+
+test-tsx: test-tsx.o
+	$(CC) -o $@ $< $(LDFLAGS)
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
new file mode 100644
index 0000000000..2bf22cea81
--- /dev/null
+++ b/tools/tests/tsx/test-tsx.c
@@ -0,0 +1,474 @@
+/*
+ * TSX settings and consistency tests
+ *
+ * This tests various behaviours and invariants with regards to TSX.  It
+ * ideally wants running for several microcode versions, and all applicable
+ * tsx= commandline settings, on a single CPU, including after an S3
+ * suspend/resume event.
+ *
+ * It tests specifically:
+ *  - The consistency of MSR_TSX_CTRL/MSR_TSX_FORCE_ABORT values across the
+ *    system, and their accessibility WRT data in the host CPU policy.
+ *  - The actual behaviour of RTM on the system.
+ *
+ *  - The default/max guest policies, and the policy a new guest receives.
+ */
+
+#define _GNU_SOURCE
+
+#include <err.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/ucontext.h>
+
+#include <xenctrl.h>
+#include <xenguest.h>
+#include <xen-tools/libs.h>
+
+#include "xg_private.h"
+
+enum {
+#define XEN_CPUFEATURE(name, value) X86_FEATURE_##name = value,
+#include <xen/arch-x86/cpufeatureset.h>
+};
+#define bitmaskof(idx)      (1u << ((idx) & 31))
+
+#define MSR_ARCH_CAPABILITIES               0x0000010a
+#define  ARCH_CAPS_TSX_CTRL                 (1 <<  7)
+#define MSR_TSX_FORCE_ABORT                 0x0000010f
+#define MSR_TSX_CTRL                        0x00000122
+
+static unsigned int nr_failures;
+#define fail(fmt, ...)                          \
+({                                              \
+    nr_failures++;                              \
+    (void)printf(fmt, ##__VA_ARGS__);           \
+})
+
+static xc_interface *xch;
+
+/*
+ * Policies, arranged as an array for easy collection of all of them.  We
+ * don't care about the raw policy (index 0) so reuse that for the guest
+ * policy.
+ */
+static struct xc_cpu_policy policies[6];
+#define guest_policy policies[0]
+#define host         policies[XEN_SYSCTL_cpu_policy_host]
+#define pv_max       policies[XEN_SYSCTL_cpu_policy_pv_max]
+#define hvm_max      policies[XEN_SYSCTL_cpu_policy_hvm_max]
+#define pv_default   policies[XEN_SYSCTL_cpu_policy_pv_default]
+#define hvm_default  policies[XEN_SYSCTL_cpu_policy_hvm_default]
+
+static bool xen_has_pv = true, xen_has_hvm = true;
+
+static unsigned int nr_cpus;
+static enum rtm_behaviour {
+    RTM_UD,
+    RTM_OK,
+    RTM_ABORT,
+} rtm_behaviour;
+
+/*
+ * Test a specific TSX MSR for consistency across the system, taking into
+ * account whether it ought to be accessible or not.
+ *
+ * We can't query offline CPUs, so skip those if encountered.  We don't care
+ * particularly for the exact MSR value, but we do care that it is the same
+ * everywhere.
+ */
+static void test_tsx_msr_consistency(unsigned int msr, bool accessible)
+{
+    uint64_t cpu0_val = ~0;
+
+    for ( unsigned int cpu = 0; cpu < nr_cpus; ++cpu )
+    {
+        xc_resource_entry_t ent = {
+            .u.cmd = XEN_RESOURCE_OP_MSR_READ,
+            .idx = msr,
+        };
+        xc_resource_op_t op = {
+            .cpu = cpu,
+            .entries = &ent,
+            .nr_entries = 1,
+        };
+        int rc = xc_resource_op(xch, 1, &op);
+
+        if ( rc < 0 )
+        {
+            /* Don't emit a message for offline CPUs */
+            if ( errno != ENODEV )
+                fail("  xc_resource_op() for CPU%u failed: rc %d, errno %d - %s\n",
+                     cpu, rc, errno, strerror(errno));
+            continue;
+        }
+
+        if ( accessible )
+        {
+            if ( rc != 1 )
+            {
+                fail("  Expected 1 result, got %u\n", rc);
+                continue;
+            }
+            if ( ent.u.ret != 0 )
+            {
+                fail("  Expected ok, got %d\n", ent.u.ret);
+                continue;
+            }
+        }
+        else
+        {
+            if ( rc != 0 )
+                fail("  Expected 0 results, got %u\n", rc);
+            else if ( ent.u.ret != -EPERM )
+                fail("  Expected -EPERM, got %d\n", ent.u.ret);
+            continue;
+        }
+
+        if ( cpu == 0 )
+        {
+            cpu0_val = ent.val;
+            printf("  CPU0 val %#"PRIx64"\n", cpu0_val);
+        }
+        else if ( ent.val != cpu0_val )
+            fail("  CPU%u val %#"PRIx64" differs from CPU0 %#"PRIx64"\n",
+                 cpu, ent.val, cpu0_val);
+    }
+}
+
+/*
+ * Check all TSX MSRs, and in particular that their accessibility matches what
+ * is expressed in the host CPU policy.
+ */
+static void test_tsx_msrs(void)
+{
+    printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
+
+    printf("Testing MSR_TSX_CTRL consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
+}
+
+/*
+ * Probe for how RTM behaves, deliberately not inspecting CPUID.
+ * Distinguishes between "no support at all" (i.e. XBEGIN suffers #UD),
+ * working ok, and appearing to always abort.
+ */
+static enum rtm_behaviour probe_rtm_behaviour(void)
+{
+    for ( int i = 0; i < 1000; ++i )
+    {
+        /*
+         * Opencoding the RTM infrastructure from immintrin.h, because we
+         * still support older versions of GCC.  Also so we can include #UD
+         * detection logic.
+         */
+#define XBEGIN_STARTED -1
+#define XBEGIN_UD      -2
+        unsigned int status = XBEGIN_STARTED;
+
+        asm volatile (".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN 1f; 1: */
+                      : "+a" (status) :: "memory");
+        if ( status == XBEGIN_STARTED )
+        {
+            asm volatile (".byte 0x0f,0x01,0xd5" ::: "memory"); /* XEND */
+            return RTM_OK;
+        }
+        else if ( status == XBEGIN_UD )
+            return RTM_UD;
+    }
+
+    return RTM_ABORT;
+}
+
+static struct sigaction old_sigill;
+
+static void sigill_handler(int signo, siginfo_t *info, void *extra)
+{
+    extern char xbegin_label[] asm(".Lxbegin");
+
+    if ( info->si_addr == xbegin_label ||
+         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
+    {
+        ucontext_t *context = extra;
+
+        /*
+         * Found the XBEGIN instruction.  Step over it, and update `status` to
+         * signal #UD.
+         */
+#ifdef __x86_64__
+        context->uc_mcontext.gregs[REG_RIP] += 6;
+        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
+#else
+        context->uc_mcontext.gregs[REG_EIP] += 6;
+        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
+#endif
+    }
+    else
+    {
+        /*
+         * Not the SIGILL we're looking for...  Restore the old handler and
+         * try again.  Will likely coredump as a result.
+         */
+        sigaction(SIGILL, &old_sigill, NULL);
+    }
+}
+
+static void test_rtm_behaviour(void)
+{
+    struct sigaction new_sigill = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = sigill_handler,
+    };
+    const char *str;
+
+    printf("Testing RTM behaviour\n");
+
+    /*
+     * Install a custom SIGILL handler while probing for RTM behaviour, as the
+     * XBEGIN instruction might suffer #UD.
+     */
+    sigaction(SIGILL, &new_sigill, &old_sigill);
+    rtm_behaviour = probe_rtm_behaviour();
+    sigaction(SIGILL, &old_sigill, NULL);
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:    str = "#UD";   break;
+    case RTM_OK:    str = "OK";    break;
+    case RTM_ABORT: str = "Abort"; break;
+    default:        str = NULL;    break;
+    }
+
+    if ( str )
+        printf("  Got %s\n", str);
+    else
+        return fail("  Got unexpected behaviour %d\n", rtm_behaviour);
+
+    if ( host.cpuid.feat.rtm )
+    {
+        if ( rtm_behaviour == RTM_UD )
+            fail("  Host reports RTM, but appears unavailable\n");
+    }
+    else
+    {
+        if ( rtm_behaviour != RTM_UD )
+            fail("  Host reports no RTM, but appears available\n");
+    }
+}
+
+static void dump_tsx_details(const struct xc_cpu_policy *p, const char *pref)
+{
+    printf("  %s RTM %u, HLE %u, TSX_FORCE_ABORT %u, RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
+           pref,
+           p->cpuid.feat.rtm,
+           p->cpuid.feat.hle,
+           p->cpuid.feat.tsx_force_abort,
+           p->cpuid.feat.rtm_always_abort,
+           p->msr.arch_caps.tsx_ctrl
+        );
+}
+
+/*
+ * Sanity test various invariants we expect in the default/max policies.
+ */
+static void test_guest_policies(const struct xc_cpu_policy *max,
+                                const struct xc_cpu_policy *def)
+{
+    const struct cpuid_policy *cm = &max->cpuid;
+    const struct cpuid_policy *cd = &def->cpuid;
+    const struct msr_policy *mm = &max->msr;
+    const struct msr_policy *md = &def->msr;
+
+    dump_tsx_details(max, "Max:");
+    dump_tsx_details(def, "Def:");
+
+    if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
+          (bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
+           bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT))) ||
+         ((mm->arch_caps.raw | md->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
+        fail("  Xen-only TSX controls offered to guest\n");
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:
+        if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests despite not being available\n");
+        break;
+
+    case RTM_ABORT:
+        if ( cd->feat.raw[0].b &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests by default despite not being usable\n");
+        break;
+
+    case RTM_OK:
+        if ( !cm->feat.rtm || !cd->feat.rtm )
+             fail("  RTM not offered to guests despite being available\n");
+        break;
+    }
+
+    if ( cd->feat.hle )
+        fail("  Fail: HLE offered in default policy\n");
+}
+
+static void test_def_max_policies(void)
+{
+    if ( xen_has_pv )
+    {
+        printf("Testing PV default/max policies\n");
+        test_guest_policies(&pv_max, &pv_default);
+    }
+
+    if ( xen_has_hvm )
+    {
+        printf("Testing HVM default/max policies\n");
+        test_guest_policies(&hvm_max, &hvm_default);
+    }
+}
+
+static void test_guest(struct xen_domctl_createdomain *c)
+{
+    uint32_t domid = 0;
+    int rc;
+
+    rc = xc_domain_create(xch, &domid, c);
+    if ( rc )
+        return fail("  Domain create failure: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("  Created d%u\n", domid);
+
+    rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
+    if ( rc )
+    {
+        fail("  Failed to obtain domain policy: %d - %s\n",
+             errno, strerror(errno));
+        goto out;
+    }
+
+    dump_tsx_details(&guest_policy, "Cur:");
+
+    /*
+     * Check defaults given to the guest.
+     */
+    if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
+        fail("  RTM %u in guest, despite rtm behaviour\n",
+             guest_policy.cpuid.feat.rtm);
+
+    if ( guest_policy.cpuid.feat.hle ||
+         guest_policy.cpuid.feat.tsx_force_abort ||
+         guest_policy.cpuid.feat.rtm_always_abort ||
+         guest_policy.msr.arch_caps.tsx_ctrl )
+        fail("  Unexpected features advertised\n");
+
+ out:
+    rc = xc_domain_destroy(xch, domid);
+    if ( rc )
+        fail("  Failed to destroy domain: %d - %s\n",
+             errno, strerror(errno));
+}
+
+static void test_guests(void)
+{
+    if ( xen_has_pv )
+    {
+        struct xen_domctl_createdomain c = {
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+        };
+
+        printf("Testing PV guest\n");
+        test_guest(&c);
+    }
+
+    if ( xen_has_hvm )
+    {
+        struct xen_domctl_createdomain c = {
+            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+            .arch = {
+                .emulation_flags = XEN_X86_EMU_LAPIC,
+            },
+        };
+
+        printf("Testing HVM guest\n");
+        test_guest(&c);
+    }
+}
+
+/* Obtain some general data, then run the tests. */
+static void test_tsx(void)
+{
+    int rc;
+    xc_physinfo_t info = {};
+
+    /* Read all policies except raw. */
+    for ( int i = XEN_SYSCTL_cpu_policy_host;
+          i <= XEN_SYSCTL_cpu_policy_hvm_default; ++i )
+    {
+        rc = xc_cpu_policy_get_system(xch, i, &policies[i]);
+
+        if ( rc == -1 && errno == EOPNOTSUPP )
+        {
+            /*
+             * Use EOPNOTSUPP to spot Xen missing CONFIG_{PV,HVM}, and adjust
+             * later testing accordingly.
+             */
+            switch ( i )
+            {
+            case XEN_SYSCTL_cpu_policy_pv_max:
+            case XEN_SYSCTL_cpu_policy_pv_default:
+                if ( xen_has_pv )
+                    printf("  Xen doesn't support PV\n");
+                xen_has_pv = false;
+                continue;
+
+            case XEN_SYSCTL_cpu_policy_hvm_max:
+            case XEN_SYSCTL_cpu_policy_hvm_default:
+                if ( xen_has_hvm )
+                    printf("  Xen doesn't support HVM\n");
+                xen_has_hvm = false;
+                continue;
+            }
+        }
+        if ( rc )
+            return fail("Failed to obtain policy[%u]: %d - %s\n",
+                        i, errno, strerror(errno));
+    }
+
+    rc = xc_physinfo(xch, &info);
+    if ( rc )
+        return fail("Failed to obtain physinfo: %d - %s\n",
+                    errno, strerror(errno));
+
+    nr_cpus = info.max_cpu_id + 1;
+    printf("  Got %u CPUs\n", nr_cpus);
+
+    test_tsx_msrs();
+    test_rtm_behaviour();
+    test_def_max_policies();
+    test_guests();
+}
+
+int main(int argc, char **argv)
+{
+    printf("TSX tests\n");
+
+    xch = xc_interface_open(NULL, NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+
+    test_tsx();
+
+    return !!nr_failures;
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 17:02:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 17:02:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140594.259725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrkYK-0006CR-3f; Fri, 11 Jun 2021 17:02:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140594.259725; Fri, 11 Jun 2021 17:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrkYK-0006CK-0d; Fri, 11 Jun 2021 17:02:40 +0000
Received: by outflank-mailman (input) for mailman id 140594;
 Fri, 11 Jun 2021 17:02:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lrkYJ-0006CE-2u
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 17:02:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lrkYJ-0002As-24
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 17:02:39 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1lrkYJ-0001fH-15
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 17:02:39 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1lrkYH-0006mH-9o; Fri, 11 Jun 2021 18:02:37 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=nIVDGrpYEEFfU9g4fKC8dpS6IW30yBrHRV+oWfW+Vxo=; b=UzkfsCwnKHYWfKOjHmzYAY1VNl
	W+s3N8RmVAtIroxI1wNehNrZj2+GYqtDtIM3FKl9XF9Oq7em3+UB4Hbxuacbs2zAQjjiJB+AE82f1
	bserwnSUGk2ifCs3WUiFptHHRbNlPhSCpWXLy/FjH37uKppZ+MF/caxZhkEmExiuIoPU=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [OSSTEST PATCH] ts-xen-build: Turn on CONFIG_PV32 again
Date: Fri, 11 Jun 2021 18:02:30 +0100
Message-Id: <20210611170230.20195-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

CC: George Dunlap <george.dunlap@citrix.com>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-xen-build | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-xen-build b/ts-xen-build
index deec52b2..af0dd894 100755
--- a/ts-xen-build
+++ b/ts-xen-build
@@ -132,6 +132,10 @@ END
 		# on Xen. For now (Xen 4.10/4.11 at at least),
 		# will be not built by default and gated by expert mode
 		echo >>xen/.config CONFIG_HAS_ITS=y
+
+		# PV32 is disabled by default but we still want to test
+		# it, for now at least until everything is updated.
+		echo >>xen/.config CONFIG_PV32=y
 	fi
 END
                );
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 17:51:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 17:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140601.259737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlIx-0002YW-O9; Fri, 11 Jun 2021 17:50:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140601.259737; Fri, 11 Jun 2021 17:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlIx-0002YP-Ka; Fri, 11 Jun 2021 17:50:51 +0000
Received: by outflank-mailman (input) for mailman id 140601;
 Fri, 11 Jun 2021 17:50:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlIv-0002YF-SL; Fri, 11 Jun 2021 17:50:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlIv-0002yC-Oh; Fri, 11 Jun 2021 17:50:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlIv-0001Wv-GG; Fri, 11 Jun 2021 17:50:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlIv-00009R-Fn; Fri, 11 Jun 2021 17:50:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7Yr84GyeuFba+lvyO940RVjQ3ezhEMLCXjAH4JPRDfI=; b=CN5XHnLqsFs+D8pMZ5okFsRww+
	+7qyNN2rENOuc4/lrIl6hkjoOKG8ZN9BszaSwhnth1W/PIPRCZCDkT2iqH9gQz4j+2N4yx0bKlOSe
	0i1jIHnFaPo2yfv6BSoxN71B38F6BmuPMNMOeIx4akayNN/ABazFi3vzoJ0eb4J61Yvc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162647-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162647: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 17:50:49 +0000

flight 162647 xen-unstable-smoke real [real]
flight 162652 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162647/
http://logs.test-lab.xenproject.org/osstest/logs/162652/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    2 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days   11 attempts
Testing same since   162642  2021-06-11 10:00:31 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 12 16:48:32 2021 +0200

    tools/libs/store: cleanup libxenstore interface
    
    There are some internals in the libxenstore interface which should be
    removed.
    
    Move those functions into xs_lib.c and the related definitions into
    xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
    xenstore_client as some of the internal functions are needed there.
    
    Bump the libxenstore version to 4.0 as the change is incompatible.
    Note that the removed functions should not result in any problem as
    they ought to be used by xenstored or xenstore_client only.
    
    Avoid an enum as part of a structure as the size of an enum is
    compiler implementation dependent.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
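
    [Archive note: the "avoid an enum as part of a structure" rationale above can
    be illustrated with a minimal, standalone C sketch.  The struct and enum
    names here are hypothetical, not from libxenstore; the point is only that an
    enum member's size is implementation-defined, while a fixed-width integer
    member is not.]

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The compiler picks the enum's underlying integer type, so the size of
     * a structure containing it can differ between compilers/ABIs. */
    enum small { SMALL_A = 1 };

    struct with_enum { enum small e; }; /* size is implementation-defined */
    struct with_u32  { uint32_t   e; }; /* member is always 4 bytes       */

    int main(void)
    {
        printf("enum member: %zu bytes, uint32_t member: %zu bytes\n",
               sizeof(struct with_enum), sizeof(struct with_u32));

        /* Only the fixed-width variant has a guaranteed member size. */
        assert(sizeof(uint32_t) == 4);
        return 0;
    }
    ```

    Using a fixed-width type in any structure that crosses a library or wire
    boundary keeps the layout stable regardless of the compiler in use.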

commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 17:55:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 17:55:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140608.259750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlN6-0003Dc-8Q; Fri, 11 Jun 2021 17:55:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140608.259750; Fri, 11 Jun 2021 17:55:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlN6-0003DV-5D; Fri, 11 Jun 2021 17:55:08 +0000
Received: by outflank-mailman (input) for mailman id 140608;
 Fri, 11 Jun 2021 17:55:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlN4-0003DL-0F; Fri, 11 Jun 2021 17:55:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlN3-00031b-SH; Fri, 11 Jun 2021 17:55:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlN3-0001da-Kx; Fri, 11 Jun 2021 17:55:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrlN3-0003hS-KQ; Fri, 11 Jun 2021 17:55:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=BVGZEIFyMa4IJlFRs1jPazcuZplxJwd9UCya3u2/xLM=; b=J5elnF6RcPlwqldhq6hSdViRX3
	jEDvTosJyf4BXsNIrubiJJ0VoH5lUtaG0zNzpGu4BJqCnw3eP0TYOWKceU/4aomPXnRB5VVPtxLD1
	6eyq8WoP6H72mZqVqxanmcuMZxJec6VvaZXuRz/hSh634SUF/WUtI55i0bWZH//63I6M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-examine
Message-Id: <E1lrlN3-0003hS-KQ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 17:55:05 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-examine
testid reboot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Bug not present: 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162653/


  commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Author: George Dunlap <george.dunlap@citrix.com>
  Date:   Thu May 6 13:38:02 2021 +0100
  
      SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
      
      The support status of 32-bit guests doesn't seem particularly useful.
      
      With it changed to fully unsupported outside of PV-shim, adjust the PV32
      Kconfig default accordingly.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: George Dunlap <george.dunlap@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-examine.reboot.html
Revision IDs in each graph node refer, respectively, to the Trees above.
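
The bisection above narrows a range of xen.git revisions between a known-good basis pass and a failing flight, re-running the test at intermediate revisions until adjacent pass/fail revisions are found (and then re-reproducing both to guard against heisenbugs). The core search is essentially a binary search over an ordered revision history; a minimal sketch of that idea (hypothetical illustration, not osstest's actual cs-bisection-step code, which works over revision tuples and a graph rather than a flat list):

```python
def bisect(revisions, is_bad):
    """Binary-search an ordered revision history for the first bad revision.

    `revisions` runs oldest to newest; revisions[0] is assumed good and
    revisions[-1] is assumed bad, mirroring the basis pass / basis failure
    flights in the report above. `is_bad` runs the test at one revision.
    """
    lo, hi = 0, len(revisions) - 1  # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid  # failure reproduced: first-bad lies at or below mid
        else:
            lo = mid  # test passed: last-good lies at or above mid
    # revisions[lo] is the last good revision, revisions[hi] the first bad
    return revisions[lo], revisions[hi]
```

With a 504-commit range (the "ancestor ~504" distance reported above), this needs only about nine test flights to converge.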

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-examine.reboot --summary-out=tmp/162653.bisection-summary --basis-template=162533 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-examine reboot
Searching for failure / basis pass:
 162600 fail [host=huxelrebe1] / 162533 [host=albana1] 162475 [host=elbling0] 162422 [host=chardonnay0] 162385 [host=fiano1] 162343 [host=huxelrebe0] 162337 [host=elbling0] 162330 [host=chardonnay1] 162325 [host=pinot1] 162282 ok.
Failure / basis pass flights: 162600 / 162282
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3e09045991cde360432bc7437103f8f8a6699359
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#57f68dfd2d111a2ad381df740543c901b41f2299-3e09045991cde360432bc7437103f8f8a6699359
Loaded 5001 nodes in revision graph
Searching for test results:
 162276 [host=elbling1]
 162282 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299
 162325 [host=pinot1]
 162330 [host=chardonnay1]
 162337 [host=elbling0]
 162343 [host=huxelrebe0]
 162385 [host=fiano1]
 162422 [host=chardonnay0]
 162475 [host=elbling0]
 162533 [host=albana1]
 162556 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e4fee66043120c954fc309bbb37813604c1c0eb7
 162649 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13
 162629 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299
 162600 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3e09045991cde360432bc7437103f8f8a6699359
 162631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e4fee66043120c954fc309bbb37813604c1c0eb7
 162635 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3e09045991cde360432bc7437103f8f8a6699359
 162638 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d21121685fac829c988e432407fb0e4ef9b19331
 162639 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 455790573d3bbad6d5a1bb7e9d28b6dd71075693
 162641 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7bd8989ab77b6ade3b7a5f4b640a55248d1791a3
 162644 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13
 162648 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162651 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162653 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 1a0f2fe2297d122a08fee2b26de5de995fdeca13
Searching for interesting versions
 Result found: flight 162282 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d21121685fac829c988e432407fb0e4ef9b19331, results HASH(0x55a1e988f0c0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7\
 e5342c8a51ceb4bed3e7740c69f5c1, results HASH(0x55a1e988cc10) HASH(0x55a1e987bb00) HASH(0x55a1e99103d0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 57f68dfd2d111a2ad381df740543c901b41f2299, results HASH(0x55a1e987d388) HASH(0x55a1e9885fd0) Result found: flight 162556 (fail), for basis failure (at ancestor ~504)
 Repro found: flight 162629 (pass), for basis pass
 Repro found: flight 162635 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
No revisions left to test, checking graph state.
 Result found: flight 162644 (pass), for last pass
 Result found: flight 162645 (fail), for first failure
 Repro found: flight 162648 (pass), for last pass
 Repro found: flight 162649 (fail), for first failure
 Repro found: flight 162651 (pass), for last pass
 Repro found: flight 162653 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Bug not present: 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162653/


  commit 1a0f2fe2297d122a08fee2b26de5de995fdeca13
  Author: George Dunlap <george.dunlap@citrix.com>
  Date:   Thu May 6 13:38:02 2021 +0100
  
      SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
      
      The support status of 32-bit guests doesn't seem particularly useful.
      
      With it changed to fully unsupported outside of PV-shim, adjust the PV32
      Kconfig default accordingly.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: George Dunlap <george.dunlap@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-examine.reboot.{dot,ps,png,html,svg}.
----------------------------------------
162653: tolerable ALL FAIL

flight 162653 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162653/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-examine       8 reboot                  fail baseline untested


jobs:
 test-amd64-i386-examine                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 17:55:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 17:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140610.259764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlND-0003XX-HX; Fri, 11 Jun 2021 17:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140610.259764; Fri, 11 Jun 2021 17:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrlND-0003XQ-ES; Fri, 11 Jun 2021 17:55:15 +0000
Received: by outflank-mailman (input) for mailman id 140610;
 Fri, 11 Jun 2021 17:55:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a66D=LF=kernel.dk=axboe@srs-us1.protection.inumbo.net>)
 id 1lrlNC-0003Wp-EK
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 17:55:14 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d155e491-025b-4736-a1ec-82a9a14b9543;
 Fri, 11 Jun 2021 17:55:11 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id ei4so6107751pjb.3
 for <xen-devel@lists.xenproject.org>; Fri, 11 Jun 2021 10:55:11 -0700 (PDT)
Received: from [192.168.4.41] (cpe-72-132-29-68.dc.res.rr.com. [72.132.29.68])
 by smtp.gmail.com with ESMTPSA id
 x3sm6384950pgx.8.2021.06.11.10.55.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Jun 2021 10:55:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d155e491-025b-4736-a1ec-82a9a14b9543
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kernel-dk.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=FiCqgV8kjys+3xuqiyCwJtcf1Wj53LQ5ch2SqyLytgY=;
        b=SD6IZ6g/PQccK+DJj47BNWbjBcu51Hoxg75XX8W0NsqQBg0ABF+hnuOUc/UFnpm8im
         oNVX3uykvZuuO5Y8ikPd3y+WsvzDFhqLGs4yWHQFnc8RzIYufWCIOXqn3AApkjH4Q2eg
         xQaVfA+BsocsKdfD6NHf1Rfl2fMGnqoxNR3DdZP0eXDcQ/vjXZbD88x8d17C79EnssGA
         MaA3XiWo9+AFyaOrA1MdfYif3vi99movLQHJefeIj5rihTG1TSe1D2qp8+37dXVxo+jK
         juNRW5LpLgfCfOOUxjWz+2zHY1TAZQqD4FUpZkK9KYz4TPA/VoJTiiPZCp8OQM00GFzi
         g76Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=FiCqgV8kjys+3xuqiyCwJtcf1Wj53LQ5ch2SqyLytgY=;
        b=NtO7hYyBiRpM/GiQLFm3nx0+F74AdnJwkWAZ5ChUAlGk+QoN6hD3X6y2barpVYq5iR
         IqoIVzmmqq3+bI5z1hbxTjg3DQKAkzdUP3/3/GitFS192z15yTIen/RTSnRYsoWBOCHO
         n157NAmhh0JJBXvzJYNrv58PdvTN9UYA4EBTIGeRmgojko+mSRhj7RoprhVZVhYnVde2
         +IyiM5UmTXlQmoWRuKdsbGTjOFAyp1ACLG1WTZdBZeH2fIKYQGLKpDHF9u54beBcuXx1
         /BmYg+wChpNVeuk1aMu4pL6Lse9kV4Me/yQr5yRAoaPl0mKo5QlbKX1/rrxrx0NUdrzH
         Zo3Q==
X-Gm-Message-State: AOAM5303B4NfsVVF+i1J+Yr9PH1hTv9MhpT1RU5rbcP8RJoHmGSw1PqS
	kcTzjEsPudWyMEayEM4YcWF9Ng==
X-Google-Smtp-Source: ABdhPJxMGSnZ9kJAsN6p7k0McSncvftPUJip5kVUuOxTF3REguFHe02Lgz4i9/8fJhn7M2nOfjp+aA==
X-Received: by 2002:a17:90a:9481:: with SMTP id s1mr5656508pjo.48.1623434110978;
        Fri, 11 Jun 2021 10:55:10 -0700 (PDT)
Subject: Re: simplify gendisk and request_queue allocation for blk-mq based
 drivers
To: Christoph Hellwig <hch@lst.de>
Cc: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
 Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
 Geoff Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>,
 "Md. Haris Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>,
 Alex Dubov <oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>,
 Richard Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>,
 Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-s390@vger.kernel.org
References: <20210602065345.355274-1-hch@lst.de>
From: Jens Axboe <axboe@kernel.dk>
Message-ID: <fa9590e3-20eb-5215-d2f7-6489169c232c@kernel.dk>
Date: Fri, 11 Jun 2021 11:55:09 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/2/21 12:53 AM, Christoph Hellwig wrote:
> Hi all,
> 
> this series is the second part of cleaning up lifetimes and allocation of
> the gendisk and request_queue structure.  It adds a new interface to
> allocate the disk and queue together for blk based drivers, and uses that
> in all drivers that do not have any caveats in their gendisk and
> request_queue lifetime rules.

Applied, thanks.

-- 
Jens Axboe



From xen-devel-bounces@lists.xenproject.org Fri Jun 11 18:48:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 18:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140628.259784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrmCG-0000QG-Qc; Fri, 11 Jun 2021 18:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140628.259784; Fri, 11 Jun 2021 18:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrmCG-0000Q9-NU; Fri, 11 Jun 2021 18:48:00 +0000
Received: by outflank-mailman (input) for mailman id 140628;
 Fri, 11 Jun 2021 18:47:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrmCF-0000Pz-6h; Fri, 11 Jun 2021 18:47:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrmCE-0003yx-Sb; Fri, 11 Jun 2021 18:47:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrmCE-0003sn-L8; Fri, 11 Jun 2021 18:47:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrmCE-0002qi-Kh; Fri, 11 Jun 2021 18:47:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T/7BXkYXg5L33tvYn1Nj35ZVRqyo3PTKD/jg3W0N4Tg=; b=xjP4QNZ+F2jekkid7WPGMIo7sI
	C+mpkq0J8FK9DTdBjJzjiQhxB7HV5ux7JhU6DurOVnRNEhG8rq1QkPo4fQm01NBYzTLP/gEFhAWTT
	x6oEruTexwMc3qlAZVP9zgdm5GS+q4rNrRkNAcy8d02Q6HMSzjxbcAO23aNMBsomYkOo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162637-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162637: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 18:47:58 +0000

flight 162637 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162637/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    7 days
Failing since        162368  2021-06-04 15:42:59 Z    7 days    8 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 19:48:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 19:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140640.259811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrn88-00069o-GR; Fri, 11 Jun 2021 19:47:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140640.259811; Fri, 11 Jun 2021 19:47:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrn88-00069h-DJ; Fri, 11 Jun 2021 19:47:48 +0000
Received: by outflank-mailman (input) for mailman id 140640;
 Fri, 11 Jun 2021 19:47:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrn87-00069X-T4; Fri, 11 Jun 2021 19:47:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrn87-0004zE-MH; Fri, 11 Jun 2021 19:47:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrn87-0005yy-Ae; Fri, 11 Jun 2021 19:47:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrn87-0006zh-A7; Fri, 11 Jun 2021 19:47:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BWe4ksMQQs5QeH8bNzCnboKn89ZI9cImZGVC1uUsyAg=; b=s+Ka946T7bz8BozvM1ERnmyq6N
	4ZlQGmrKsTVfKu5i92Im/dk3AcDj/KaXuQ7UwUXa71YY8p+VO5kBRwRBvOtWMVOGIcnXz+cv1NQld
	jEqMwNKDy2rxGU3q68mW84UIsoIRPLKocMqvjQ428WlNTgbx08FAKVW7PR3Y582OHKVo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162633-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162633: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3e09045991cde360432bc7437103f8f8a6699359
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 19:47:47 +0000

flight 162633 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162633/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 162533
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 162533
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 162533
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 162600 pass in 162633
 test-amd64-amd64-examine    4 memdisk-try-append fail in 162600 pass in 162633
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 162600 pass in 162633
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 162600 pass in 162633
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore  fail pass in 162600

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  3e09045991cde360432bc7437103f8f8a6699359
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    3 days
Failing since        162556  2021-06-08 22:39:08 Z    2 days    3 attempts
Testing same since   162600  2021-06-10 09:41:17 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 621 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 21:48:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 21:48:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140653.259837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrp0Y-0000Dp-Hk; Fri, 11 Jun 2021 21:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140653.259837; Fri, 11 Jun 2021 21:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrp0Y-0000Di-E3; Fri, 11 Jun 2021 21:48:06 +0000
Received: by outflank-mailman (input) for mailman id 140653;
 Fri, 11 Jun 2021 21:48:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrp0X-0000DY-Iv; Fri, 11 Jun 2021 21:48:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrp0X-000751-Em; Fri, 11 Jun 2021 21:48:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrp0X-0002mI-6r; Fri, 11 Jun 2021 21:48:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrp0X-0002kr-6L; Fri, 11 Jun 2021 21:48:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zDA4M1D2fVTnsqTwQw4GncTi05whUe3/lRtn/ue6VD8=; b=M+11jPLgv+dVnT93pJgAEYP2o7
	faox5uI8B8fSGjsSPQ7xEqbjO+0hsTVxRx1Gp78e7DRTVG8EiJnIyNDLwwTEVuq+y73BmkNIAsmNJ
	FZEGoqba1dmYxElI0wmNEZOj45jpUJoxhD+BPXS8hhkGC2CQjLLbdvzKE4YlDDgdRDcs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162656-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162656: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 21:48:05 +0000

flight 162656 xen-unstable-smoke real [real]
flight 162663 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162656/
http://logs.test-lab.xenproject.org/osstest/logs/162663/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    2 days
Failing since        162584  2021-06-10 00:00:27 Z    1 days   12 attempts
Testing same since   162642  2021-06-11 10:00:31 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 12 16:48:32 2021 +0200

    tools/libs/store: cleanup libxenstore interface
    
    There are some internals in the libxenstore interface which should be
    removed.
    
    Move those functions into xs_lib.c and the related definitions into
    xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
    xenstore_client as some of the internal functions are needed there.
    
    Bump the libxenstore version to 4.0 as the change is incompatible.
    Note that the removed functions should not result in any problem as
    they ought to be used by xenstored or xenstore_client only.
    
    Avoid an enum as part of a structure as the size of an enum is
    compiler implementation dependent.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 22:04:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 22:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140661.259851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrpGV-0002WL-Vc; Fri, 11 Jun 2021 22:04:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140661.259851; Fri, 11 Jun 2021 22:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrpGV-0002WE-Sa; Fri, 11 Jun 2021 22:04:35 +0000
Received: by outflank-mailman (input) for mailman id 140661;
 Fri, 11 Jun 2021 22:04:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C43g=LF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lrpGU-0002W8-KW
 for xen-devel@lists.xenproject.org; Fri, 11 Jun 2021 22:04:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1320e0e9-e480-4e6d-aab9-4af6bae76f4f;
 Fri, 11 Jun 2021 22:04:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F0A04613CD;
 Fri, 11 Jun 2021 22:04:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1320e0e9-e480-4e6d-aab9-4af6bae76f4f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623449073;
	bh=RNA12J4X4dUJiKgQsnm7WAaTzCrQsGPkuYyFSojgvFs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=higscilJ00rt2hvrBQl6j2QfG3tYIDo3Ivqq77DsinLkXG9Ee4kDnNJp7dPANLn3A
	 sy4yBiQxBN8fT48krj+T4xnQ4DV0NIp3U/fZka2oYoiOBUxlmp8Wt3h+4INQlLcQ2j
	 MjRNJsV1c/YK5qUpr3EoxZ0MZz6kiZi8tR7q6SlCknd7Wtqn+hDJQDcCruYTA5G2jR
	 9ddZkJXaTSt7s+llOsXJc28eOeeZryir1+cZqg32uIRka1CLd492uc4RMPk6s8belx
	 UpsC+KiJALsZ1bX5dqOionqoWYGQA5kI62Q3KE4HERGN5f5FgRf4enJY57prd6dCEB
	 hGCfMwK+Z38pw==
Date: Fri, 11 Jun 2021 15:04:32 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2] Arm32: MSR to SPSR needs qualification
In-Reply-To: <2d0ac238-bf23-51ed-9ccf-6fd65fc6eec4@suse.com>
Message-ID: <alpine.DEB.2.21.2106111500380.24906@sstabellini-ThinkPad-T480s>
References: <2d0ac238-bf23-51ed-9ccf-6fd65fc6eec4@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 11 Jun 2021, Jan Beulich wrote:
> The Arm ARM's description of MSR (ARM DDI 0406C.d section B9.3.12)
> doesn't even allow for plain "SPSR" here, and while gas accepts this, it
> takes it to mean SPSR_cf. Yet surely all of SPSR wants updating on this
> path, not just the lowest and highest 8 bits.
> 
> Fixes: dfcffb128be4 ("xen/arm32: SPSR_hyp/SPSR")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Thanks for the patch! I disassembled the instruction in the bad Xen
binary and confirmed that 2 of the mask bits are off.

Rebuilding the binary with your patch applied solves the issue: now all
4 bits are set.

Thank you so much!

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2: Add doc ref.
> 
> --- a/xen/arch/arm/arm32/entry.S
> +++ b/xen/arch/arm/arm32/entry.S
> @@ -395,7 +395,7 @@ return_to_hypervisor:
>          ldr r11, [sp, #UREGS_pc]
>          msr ELR_hyp, r11
>          ldr r11, [sp, #UREGS_cpsr]
> -        msr SPSR, r11
> +        msr SPSR_cxsf, r11
>  #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
>          /*
>           * Hardening branch predictor may require to setup a different
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 11 22:17:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Jun 2021 22:17:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140670.259869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrpSm-00044X-5O; Fri, 11 Jun 2021 22:17:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140670.259869; Fri, 11 Jun 2021 22:17:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrpSm-00044Q-2L; Fri, 11 Jun 2021 22:17:16 +0000
Received: by outflank-mailman (input) for mailman id 140670;
 Fri, 11 Jun 2021 22:17:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrpSk-00044G-C5; Fri, 11 Jun 2021 22:17:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrpSk-0007a2-6K; Fri, 11 Jun 2021 22:17:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrpSj-0003gs-S0; Fri, 11 Jun 2021 22:17:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrpSj-0003eE-RU; Fri, 11 Jun 2021 22:17:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DLK/ZWnOSJZdVcE5E7ca+mbxdCWPEnkLaRXp3dnbY9g=; b=5A/BaL3/nKKeRVsIroRnXx/EXN
	+pV8GT7I6y0zBNlRHZOH1XhIIUap/GQcyPvP2sxjD0MLc27hj927FFvIPkJDcTPAU7fqupE6Tpy60
	4W8vZBk0XxFwURuJ4UqYO7McBWywyaxg3KOkf7AJLay6kSCb4KPlFICgin+tfCJhudAc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162643-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162643: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=06af8679449d4ed282df13191fc52d5ba28ec536
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Jun 2021 22:17:13 +0000

flight 162643 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162643/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 17 guest-localmigrate fail REGR. vs. 152332
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                06af8679449d4ed282df13191fc52d5ba28ec536
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  315 days
Failing since        152366  2020-08-01 20:49:34 Z  314 days  535 attempts
Testing same since   162643  2021-06-11 10:06:27 Z    0 days    1 attempts

------------------------------------------------------------
6155 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1675750 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 01:08:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 01:08:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140688.259909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrs8d-0000Cn-Fn; Sat, 12 Jun 2021 01:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140688.259909; Sat, 12 Jun 2021 01:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrs8d-0000Cg-Cv; Sat, 12 Jun 2021 01:08:39 +0000
Received: by outflank-mailman (input) for mailman id 140688;
 Sat, 12 Jun 2021 01:08:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrs8b-0000CQ-Fv; Sat, 12 Jun 2021 01:08:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrs8b-0007uM-6V; Sat, 12 Jun 2021 01:08:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrs8a-0004xV-V9; Sat, 12 Jun 2021 01:08:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrs8a-0006dI-Ud; Sat, 12 Jun 2021 01:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LoQkeKVBqHwXZSfuG8pA0tPB64BD/HJUYP9jmy58GdE=; b=tbFAo2Ka2/JE9nOKTxQBV+LXXf
	Q2IR+gY+0GoxBbMqrHSt/xIZj7Gam7DmETOnNvGyIEdacFG+9yVVoschQh5X8EdvNaiXtmqxqsOhU
	Nl8z+sXXg7QgYuUexI0ruPNuBBWzLXcX9TbxV5oMWq3CqK657Jvh0vVR3nzMk/bzl7yw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162665-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162665: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 01:08:36 +0000

flight 162665 xen-unstable-smoke real [real]
flight 162671 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162665/
http://logs.test-lab.xenproject.org/osstest/logs/162671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl         18 guest-start/debian.repeat fail REGR. vs. 162574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    2 days
Failing since        162584  2021-06-10 00:00:27 Z    2 days   13 attempts
Testing same since   162642  2021-06-11 10:00:31 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 12 16:48:32 2021 +0200

    tools/libs/store: cleanup libxenstore interface
    
    There are some internals in the libxenstore interface which should be
    removed.
    
    Move those functions into xs_lib.c and the related definitions into
    xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
    xenstore_client as some of the internal functions are needed there.
    
    Bump the libxenstore version to 4.0 as the change is incompatible.
    Note that removing these functions should not cause any problems, as
    they ought to be used only by xenstored or xenstore_client.
    
    Avoid using an enum as part of a structure, as the size of an enum is
    implementation-dependent (it varies by compiler).
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 2bb17a45b1814b0b6aa4646eff58e16f876281fd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jun 10 16:56:24 2021 +0200

    x86: please Clang in arch_set_info_guest()
    
    Clang 10 reports
    
    domain.c:1328:10: error: variable 'cr3_mfn' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
        if ( !compat )
             ^~~~~~~
    domain.c:1334:34: note: uninitialized use occurs here
        cr3_page = get_page_from_mfn(cr3_mfn, d);
                                     ^~~~~~~
    domain.c:1328:5: note: remove the 'if' if its condition is always true
        if ( !compat )
        ^~~~~~~~~~~~~~
    domain.c:1042:18: note: initialize the variable 'cr3_mfn' to silence this warning
        mfn_t cr3_mfn;
                     ^
                      = 0
    domain.c:1189:14: error: variable 'fail' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
            if ( !compat )
                 ^~~~~~~
    domain.c:1211:9: note: uninitialized use occurs here
            fail |= v->arch.pv.gdt_ents != c(gdt_ents);
            ^~~~
    domain.c:1189:9: note: remove the 'if' if its condition is always true
            if ( !compat )
            ^~~~~~~~~~~~~~
    domain.c:1187:18: note: initialize the variable 'fail' to silence this warning
            bool fail;
                     ^
                      = false
    
    despite this being a build with -O2 in effect, and despite "compat"
    being constant "false" when CONFIG_COMPAT (and hence CONFIG_PV32) is not
    defined, as it gets set at the top of the function from the result of
    is_pv_32bit_domain().
    
    Re-arrange the two "offending" if()s such that when COMPAT=n the
    respective variables will be seen as unconditionally initialized. The
    original aim was to have the !compat cases first, though.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit dfcffb128be46a3e413eaa941744536fe53c94b6
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Wed Jun 9 10:37:59 2021 -0700

    xen/arm32: SPSR_hyp/SPSR
    
    SPSR_hyp is not meant to be accessed from Hyp mode (EL2); accesses
    trigger UNPREDICTABLE behaviour. Xen should read/write SPSR instead.
    See: ARM DDI 0487D.b page G8-5993.
    
    This fixes booting Xen/arm32 on QEMU.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
    Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 02:27:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 02:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140700.259935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrtMz-0007x0-If; Sat, 12 Jun 2021 02:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140700.259935; Sat, 12 Jun 2021 02:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrtMz-0007wt-FP; Sat, 12 Jun 2021 02:27:33 +0000
Received: by outflank-mailman (input) for mailman id 140700;
 Sat, 12 Jun 2021 02:27:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrtMy-0007wj-I0; Sat, 12 Jun 2021 02:27:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrtMy-0001Kx-AI; Sat, 12 Jun 2021 02:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrtMy-0000Fs-1Q; Sat, 12 Jun 2021 02:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrtMy-0007B9-0y; Sat, 12 Jun 2021 02:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gZknkxjEZpFsBpRgdypESUzALygU9+Zbjnwpv6zdatE=; b=c41dEmU1jGuUzL/txsEUSw8w/X
	SWB5ZoaJxQBH9eifIi+OsqSu6hZGwa/bfATMg1KTb8VcQKam/RzRhIDPEm3WCktmoZTwOqdhEWsDm
	JrZ3bCS70tR3LunpCAdOiRzPXj8j6aOmXl3bxyQ1LLIyBIjiHpGFJtN3Vy89Jw/Pl0Ng=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162650-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162650: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 02:27:32 +0000

flight 162650 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162650/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  295 days
Failing since        152659  2020-08-21 14:07:39 Z  294 days  544 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    0 days    1 attempt

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 04:44:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 04:44:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140715.259971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrvVW-0003kV-6H; Sat, 12 Jun 2021 04:44:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140715.259971; Sat, 12 Jun 2021 04:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrvVW-0003kO-33; Sat, 12 Jun 2021 04:44:30 +0000
Received: by outflank-mailman (input) for mailman id 140715;
 Sat, 12 Jun 2021 04:44:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvVU-0003kE-De; Sat, 12 Jun 2021 04:44:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvVU-0003rq-4N; Sat, 12 Jun 2021 04:44:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvVT-0007So-Tu; Sat, 12 Jun 2021 04:44:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvVT-0007xe-TO; Sat, 12 Jun 2021 04:44:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OudgqBAWNeaEqHD3nzrKWFNqSPTqVQbXwgR6APoIu88=; b=GhwvYDhEY5CfOu+PkzeLz2ijvG
	sr1hm6Lj9RtMadxj/tG34bsnU3rjtjI861XlgCcCCAhx5qQPWq1DjIArUYiktXgciLeAXK5YapYyv
	nhk8+5m0XYQv6cIphMal85j8uJXJ01zjaOUtgAYxmy0ldi1rVEN513gKA7TQl19y/N9g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162674-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162674: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=3e09045991cde360432bc7437103f8f8a6699359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 04:44:27 +0000

flight 162674 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162674/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  3e09045991cde360432bc7437103f8f8a6699359

Last test of basis   162574  2021-06-09 14:00:34 Z    2 days
Failing since        162584  2021-06-10 00:00:27 Z    2 days   14 attempts
Testing same since   162674  2021-06-12 02:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3e09045991..93031fbe9f  93031fbe9f4c341a2e7950a088025ea550291433 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 05:15:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 05:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140726.259990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrvzP-0007Iy-T2; Sat, 12 Jun 2021 05:15:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140726.259990; Sat, 12 Jun 2021 05:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lrvzP-0007Ir-QA; Sat, 12 Jun 2021 05:15:23 +0000
Received: by outflank-mailman (input) for mailman id 140726;
 Sat, 12 Jun 2021 05:15:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvzO-0007Ih-VA; Sat, 12 Jun 2021 05:15:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvzO-0004gK-O0; Sat, 12 Jun 2021 05:15:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvzO-0000sX-0B; Sat, 12 Jun 2021 05:15:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lrvzN-0000qI-Vv; Sat, 12 Jun 2021 05:15:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8R6w8zFInuEAlvI9OKaHSjkk34/g7RLNQwqQxziPJfA=; b=ksF9pWT9wO6xeuIibzHGuP2ttx
	3abQnNdlGAMVeLVvL/C/XmFJqbEQzt4/1/DxRJBRH/dbwiHfz1ok7OBdcMq+MwuUUl0Yfo/16CwDZ
	MSCiH+80vCJXUYsaGePLjn6HNPNGmdkvOKkTg4z6Izp3LORbp361dxWy68QtOJmjcJvE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162659-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162659: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 05:15:21 +0000

flight 162659 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162659/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    8 days
Failing since        162368  2021-06-04 15:42:59 Z    7 days    9 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 11:53:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 11:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140765.260073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls2Cr-0000gw-49; Sat, 12 Jun 2021 11:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140765.260073; Sat, 12 Jun 2021 11:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls2Cr-0000gp-1E; Sat, 12 Jun 2021 11:53:41 +0000
Received: by outflank-mailman (input) for mailman id 140765;
 Sat, 12 Jun 2021 11:53:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2Cp-0000gc-50; Sat, 12 Jun 2021 11:53:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2Co-0003M0-Sq; Sat, 12 Jun 2021 11:53:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2Co-0007Km-Iz; Sat, 12 Jun 2021 11:53:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2Co-0004gq-IW; Sat, 12 Jun 2021 11:53:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qkk7cijbplY1v4t5IX95VtzfzmOFQxcxOi46avs5fCI=; b=IBK8QUXCVA2JFtt4/LcNk8QxBT
	7azfLMh4NMRm/q35HTGp+NV1shx3Tkkfgwh47KKmjWaaGD6Vns6hpOlfriTd/x9+JwzPpx0ay/4en
	uJMeR2r68fLReB0VxIqCPkPHlCKB4s8b7hSmi2t7m5MxjK2kIxORChpVPQu0sxi3mH60=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162661-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162661: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3e09045991cde360432bc7437103f8f8a6699359
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 11:53:38 +0000

flight 162661 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162661/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 162533
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 162533
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 162533
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 162533
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 162533
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 162533
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 162533
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 162533
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 162533
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 162533
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 162600 pass in 162661
 test-amd64-amd64-examine    4 memdisk-try-append fail in 162600 pass in 162661
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 162600 pass in 162661
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 162600 pass in 162661
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore  fail pass in 162600

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  3e09045991cde360432bc7437103f8f8a6699359
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    4 days
Failing since        162556  2021-06-08 22:39:08 Z    3 days    4 attempts
Testing same since   162600  2021-06-10 09:41:17 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 621 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 12:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 12:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140775.260092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls2M2-0002FA-Fc; Sat, 12 Jun 2021 12:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140775.260092; Sat, 12 Jun 2021 12:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls2M2-0002F3-9E; Sat, 12 Jun 2021 12:03:10 +0000
Received: by outflank-mailman (input) for mailman id 140775;
 Sat, 12 Jun 2021 12:03:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2M1-0002Em-BN; Sat, 12 Jun 2021 12:03:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2M1-0003Xi-4a; Sat, 12 Jun 2021 12:03:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2M0-0007d7-Qq; Sat, 12 Jun 2021 12:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls2M0-0002n5-QK; Sat, 12 Jun 2021 12:03:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Dg74H4URQ/s05piXvM/nppekNNSJzfaOAnRzHRTvjGE=; b=XcnW+XHTmsnjCpmaN3Ke8+82tQ
	Y6dCCDcaT2XVYZTZQTJGjv/3GCffj/jzM/GY7jbbk1g3dawbhSKMqe5zYv3ruJpU7WnxZO5UehsPN
	yLhxPcTqE7qNGbEFbvFf/DWxGMPtZi0V1mBLCON4e9AX9o413GYJ1e66F8fMfHKbqqiI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162681: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=55ea45acc99c549c7757efe954aacc33ad30a8ef
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 12:03:08 +0000

flight 162681 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162681/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              55ea45acc99c549c7757efe954aacc33ad30a8ef
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  337 days
Failing since        151818  2020-07-11 04:18:52 Z  336 days  329 attempts
Testing same since   162681  2021-06-12 04:18:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61229 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 12:56:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 12:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140786.260114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls3Bc-00078X-LC; Sat, 12 Jun 2021 12:56:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140786.260114; Sat, 12 Jun 2021 12:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls3Bc-00078Q-H6; Sat, 12 Jun 2021 12:56:28 +0000
Received: by outflank-mailman (input) for mailman id 140786;
 Sat, 12 Jun 2021 12:56:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls3Ba-00078G-BP; Sat, 12 Jun 2021 12:56:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls3Ba-0004NO-2V; Sat, 12 Jun 2021 12:56:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls3BZ-0001Z9-QA; Sat, 12 Jun 2021 12:56:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls3BZ-0008Le-Pg; Sat, 12 Jun 2021 12:56:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=AoFFOto4uZG4E6dmI9e5lA0DPrIS6FA2evvWhCjDotU=; b=5w2LC+STG71Z2Od4JJWlmjKBpV
	S6zB91fV2ktRYcrwFNOSUrdIyGPY3aNQrSg9hWDen6dNk/9jG1L270QYKPZ1PipYNPLPJX0zZBfcz
	wtyJllUzBGDgB9vUw6OXbs7XlpUg3IaZ8vQtH6mmiAMm2sWHM3Ot2cjTNTiJcqoh9xRU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-ovmf-amd64
Message-Id: <E1ls3BZ-0008Le-Pg@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 12:56:25 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ovmf-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161012/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.guest-saverestore --summary-out=tmp/162694.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-ovmf-amd64 guest-saverestore
Searching for failure / basis pass:
 162650 fail [host=albana1] / 160125 [host=pinot0] 160119 [host=fiano1] 160113 [host=fiano0] 160104 [host=chardonnay1] 160097 ok.
Failure / basis pass flights: 162650 / 160097
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#2615a5e433aeb812c300d3a48e1a88e1303e2339-894fc4fd670aaf04a67dc7507739f914ff4bacf2 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#b4011741e6b39a8fd0ed5aded96c16c45ead5888-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 31478 nodes in revision graph
Searching for test results:
 160091 [host=albana0]
 160097 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160104 [host=chardonnay1]
 160113 [host=fiano0]
 160119 [host=fiano1]
 160125 [host=pinot0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160914 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160917 fail irrelevant
 160920 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160922 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160924 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1db136a29ce8594b693938ab8e788d8bcef54770 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160927 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 24e13a4dc1eb1630eceffc7ab334145d902e763d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160930 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160933 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160935 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160941 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160949 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160956 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160962 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160969 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160916 fail irrelevant
 160975 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160984 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160992 fail irrelevant
 160999 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161006 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161012 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 blocked irrelevant
 162379 fail irrelevant
 162428 blocked irrelevant
 162434 blocked irrelevant
 162439 blocked irrelevant
 162443 blocked irrelevant
 162448 blocked irrelevant
 162429 blocked irrelevant
 162452 blocked irrelevant
 162456 blocked irrelevant
 162458 blocked irrelevant
 162462 blocked irrelevant
 162465 blocked irrelevant
 162468 blocked irrelevant
 162471 blocked irrelevant
 162454 blocked irrelevant
 162473 blocked irrelevant
 162474 blocked irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162677 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 162694 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160097 (pass), for basis pass
 Result found: flight 162650 (fail), for basis failure
 Repro found: flight 162677 (pass), for basis pass
 Repro found: flight 162694 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160962 (pass), for last pass
 Result found: flight 160969 (fail), for first failure
 Repro found: flight 160975 (pass), for last pass
 Repro found: flight 160999 (fail), for first failure
 Repro found: flight 161006 (pass), for last pass
 Repro found: flight 161012 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161012/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.340048 to fit
pnmtopng: 210 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162694: tolerable FAIL

flight 162694 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162694/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Jun 12 14:09:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 14:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140801.260139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4KJ-0005Rg-AO; Sat, 12 Jun 2021 14:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140801.260139; Sat, 12 Jun 2021 14:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4KJ-0005RZ-7N; Sat, 12 Jun 2021 14:09:31 +0000
Received: by outflank-mailman (input) for mailman id 140801;
 Sat, 12 Jun 2021 14:09:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4KI-0005RP-0L; Sat, 12 Jun 2021 14:09:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4KH-0005dZ-Pi; Sat, 12 Jun 2021 14:09:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4KH-0003z8-FJ; Sat, 12 Jun 2021 14:09:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4KH-0005KG-En; Sat, 12 Jun 2021 14:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EGzAIb5We7X08WBtd9tF9QOI+oDnt8PccU6FSjHJDjo=; b=3oMrStUR88uWxJa4uHG1TfzUll
	01KssB8Av9m8+X7wczhcqq1Mi8dwIcshM+nYzmy5qURcfIPHcUnIyafZvf88N+iU24pOJBzap89AP
	Rb40gUIh7chanJVCTMtlpIfrZW5kMEk/kl1pGpENKFk7MhewuZETpZTihg8zUHZWKrbQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162667-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162667: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f21b807c3cf8cd7c5ca9e406b27bf1cd2f1c1238
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 14:09:29 +0000

flight 162667 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162667/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f21b807c3cf8cd7c5ca9e406b27bf1cd2f1c1238
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  315 days
Failing since        152366  2020-08-01 20:49:34 Z  314 days  536 attempts
Testing same since   162667  2021-06-11 22:40:14 Z    0 days    1 attempts

------------------------------------------------------------
6158 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1677031 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 14:11:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 14:11:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140808.260154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4ML-0006kS-QV; Sat, 12 Jun 2021 14:11:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140808.260154; Sat, 12 Jun 2021 14:11:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4ML-0006kL-NF; Sat, 12 Jun 2021 14:11:37 +0000
Received: by outflank-mailman (input) for mailman id 140808;
 Sat, 12 Jun 2021 14:11:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFEz=LG=strugglers.net=andy@srs-us1.protection.inumbo.net>)
 id 1ls4MK-0006kF-Dm
 for xen-devel@lists.xenproject.org; Sat, 12 Jun 2021 14:11:36 +0000
Received: from mail.bitfolk.com (unknown [2001:ba8:1f1:f019::25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a544415-edb4-45f3-b812-9119a44c1f6b;
 Sat, 12 Jun 2021 14:11:33 +0000 (UTC)
Received: from andy by mail.bitfolk.com with local (Exim 4.89)
 (envelope-from <andy@strugglers.net>) id 1ls4MG-0003YO-OT
 for xen-devel@lists.xenproject.org; Sat, 12 Jun 2021 14:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a544415-edb4-45f3-b812-9119a44c1f6b
Date: Sat, 12 Jun 2021 14:11:32 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xenproject.org
Subject: Re: dom0 suddenly blocking on all access to md device
Message-ID: <20210612141132.rjtmvjv6377lz4tl@bitfolk.com>
References: <20210226223927.GQ29212@bitfolk.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210226223927.GQ29212@bitfolk.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: NeoMutt/20170113 (1.7.2)

Hello,

Unfortunately I'm still experiencing the problem described in my
earlier email (quoted below), and I'm running out of ideas for
things to test or try.

What was fine for a long time (~5 years): Debian jessie dom0 kernel
4.9.x with Xen 4.10.

The issues below started happening on the same machines once dom0
was upgraded to Debian buster's 4.19.x kernel (currently
4.19.0-16-amd64) and the 4.12 hypervisor, starting around December
2020.

Since then I've also tried moving to Xen 4.14.2 (plus the latest
XSA patches up to XSA377) and it's still happening. I've also tried
switching to the "credit" scheduler, and that did not make a
difference. It can be a month or two between incidents, although
one machine just had it happen twice in 3 days. There have been
maybe half a dozen incidents so far, on different machines with
different hardware configurations.

Hypervisor command line is:

dom0_mem=4096M dom0_max_vcpus=2 com1=115200,8n1,0x2f8,10 console=com1,vga ucode=scan serial_tx_buffer=256k smt=1

There's a serial console, but not much of interest is ever seen on
it. If there are any debug keys whose output you would like to see,
please let me know. Pretty much the only thing that gets logged in
dom0 is the following, and that could just be a symptom.
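
In case it's useful, this is roughly how I'd gather that output on
request (a sketch; assumes the standard xl tooling and that dom0
allows the 'w' sysrq):

```shell
# Send debug keys to the hypervisor; their output lands in the
# hypervisor console ring, which "xl dmesg" reads back.
xl debug-keys q        # dump domain and vcpu state
xl debug-keys r        # dump scheduler run queues
xl dmesg | tail -n 100

# In dom0 itself, dump all blocked (D-state) tasks to the kernel
# log, same information as the hung-task warnings but on demand.
echo w > /proc/sysrq-trigger
dmesg | tail -n 100
```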

Jun 12 12:04:40 clockwork kernel: [216427.246183] INFO: task md5_raid1:205 blocked for more than 120 seconds.
Jun 12 12:04:40 clockwork kernel: [216427.246995]       Not tainted 4.19.0-16-amd64 #1 Debian 4.19.181-1
Jun 12 12:04:40 clockwork kernel: [216427.247852] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 12 12:04:40 clockwork kernel: [216427.248674] md5_raid1       D 0   205      2 0x80000000
Jun 12 12:04:40 clockwork kernel: [216427.249534] Call Trace:
Jun 12 12:04:40 clockwork kernel: [216427.250368] __schedule+0x29f/0x840
Jun 12 12:04:40 clockwork kernel: [216427.251788]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Jun 12 12:04:40 clockwork kernel: [216427.253078] schedule+0x28/0x80
Jun 12 12:04:40 clockwork kernel: [216427.253945] md_super_wait+0x6e/0xa0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.254812]  ? finish_wait+0x80/0x80
Jun 12 12:04:40 clockwork kernel: [216427.256139] md_bitmap_wait_writes+0x93/0xa0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.256994]  ? md_bitmap_get_counter+0x42/0xd0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.257787] md_bitmap_daemon_work+0x1f7/0x370 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.258608]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.259553] md_check_recovery+0x41/0x530 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.260304]  raid1d+0x5c/0xf10 [raid1]
Jun 12 12:04:40 clockwork kernel: [216427.261096]  ? lock_timer_base+0x67/0x80
Jun 12 12:04:40 clockwork kernel: [216427.261863]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Jun 12 12:04:40 clockwork kernel: [216427.262659]  ? try_to_del_timer_sync+0x4d/0x80
Jun 12 12:04:40 clockwork kernel: [216427.263436]  ? del_timer_sync+0x37/0x40
Jun 12 12:04:40 clockwork kernel: [216427.264189]  ? schedule_timeout+0x173/0x3b0
Jun 12 12:04:40 clockwork kernel: [216427.264911]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.265664]  ? md_thread+0x94/0x150 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.266412]  ? process_checks+0x4a0/0x4a0 [raid1]
Jun 12 12:04:40 clockwork kernel: [216427.267124] md_thread+0x94/0x150 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.267842]  ? finish_wait+0x80/0x80
Jun 12 12:04:40 clockwork kernel: [216427.268539] kthread+0x112/0x130
Jun 12 12:04:40 clockwork kernel: [216427.269231]  ? kthread_bind+0x30/0x30
Jun 12 12:04:40 clockwork kernel: [216427.269903] ret_from_fork+0x35/0x40
Jun 12 12:04:40 clockwork kernel: [216427.270590] INFO: task md2_raid1:207 blocked for more than 120 seconds.
Jun 12 12:04:40 clockwork kernel: [216427.271260]       Not tainted 4.19.0-16-amd64 #1 Debian 4.19.181-1
Jun 12 12:04:40 clockwork kernel: [216427.271942] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 12 12:04:40 clockwork kernel: [216427.272721] md2_raid1       D 0   207      2 0x80000000
Jun 12 12:04:40 clockwork kernel: [216427.273432] Call Trace:
Jun 12 12:04:40 clockwork kernel: [216427.274172] __schedule+0x29f/0x840
Jun 12 12:04:40 clockwork kernel: [216427.274869] schedule+0x28/0x80
Jun 12 12:04:40 clockwork kernel: [216427.275543] io_schedule+0x12/0x40
Jun 12 12:04:40 clockwork kernel: [216427.276208] wbt_wait+0x205/0x300
Jun 12 12:04:40 clockwork kernel: [216427.276861]  ? wbt_wait+0x300/0x300
Jun 12 12:04:40 clockwork kernel: [216427.277503] rq_qos_throttle+0x31/0x40
Jun 12 12:04:40 clockwork kernel: [216427.278193] blk_mq_make_request+0x111/0x530
Jun 12 12:04:40 clockwork kernel: [216427.278876] generic_make_request+0x1a4/0x400
Jun 12 12:04:40 clockwork kernel: [216427.279657]  ? try_to_wake_up+0x54/0x470
Jun 12 12:04:40 clockwork kernel: [216427.280400] submit_bio+0x45/0x130
Jun 12 12:04:40 clockwork kernel: [216427.281136]  ? md_super_write.part.63+0x90/0x120 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.281788] md_update_sb.part.65+0x3a8/0x8e0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.282480]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.283106] md_check_recovery+0x272/0x530 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.283738]  raid1d+0x5c/0xf10 [raid1]
Jun 12 12:04:40 clockwork kernel: [216427.284345]  ? __schedule+0x2a7/0x840
Jun 12 12:04:40 clockwork kernel: [216427.284939]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.285522]  ? schedule+0x28/0x80
Jun 12 12:04:40 clockwork kernel: [216427.286121]  ? schedule_timeout+0x26d/0x3b0
Jun 12 12:04:40 clockwork kernel: [216427.286702]  ? __schedule+0x2a7/0x840
Jun 12 12:04:40 clockwork kernel: [216427.287279]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.287871]  ? md_thread+0x94/0x150 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.288458]  ? process_checks+0x4a0/0x4a0 [raid1]
Jun 12 12:04:40 clockwork kernel: [216427.289062] md_thread+0x94/0x150 [md_mod]
Jun 12 12:04:40 clockwork kernel: [216427.289663]  ? finish_wait+0x80/0x80
Jun 12 12:04:40 clockwork kernel: [216427.290288] kthread+0x112/0x130
Jun 12 12:04:40 clockwork kernel: [216427.290858]  ? kthread_bind+0x30/0x30
Jun 12 12:04:40 clockwork kernel: [216427.291433] ret_from_fork+0x35/0x40

What I HAVEN'T yet tried is a much newer kernel. That will probably
be what I try next, having exhausted my ideas about upgrading or
configuring Xen.

Should I take a kernel from buster-backports, which would currently
be:

    https://packages.debian.org/buster-backports/linux-image-5.10.0-0.bpo.5-amd64

or should I build a kernel package from a mainline release?
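
For the backports option, my understanding is that the procedure
would be roughly this (a sketch; assumes backports is not already
in sources.list and that the linux-image-amd64 metapackage tracks
the 5.10 backport):

```shell
# Enable buster-backports if not already present.
echo 'deb http://deb.debian.org/debian buster-backports main' \
    > /etc/apt/sources.list.d/backports.list
apt update

# Pull the kernel from backports explicitly (-t), then reboot
# into it; grub should prefer the newer version by default.
apt install -t buster-backports linux-image-amd64
```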

Thanks,
Andy

On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wrote:
> Hi,
> 
> I suspect this might be an issue in the dom0 kernel (Debian buster,
> kernel 4.19.0-13-amd64), but just lately I've been sporadically
> having issues where dom0 blocks or severely slows down on all access
> to the particular md device that hosts all domU block devices.
> 
> Setup in dom0: an md RAID10 that is used as an LVM PV for an LVM volume
> group, where all domU block devices are LVM logical volumes in that
> group. So the relevant part of a domU config file might look like:
> 
> disk = [ "phy:/dev/myvg/domu_debtest1_xvda,xvda,w",
>          "phy:/dev/myvg/domu_debtest1_xvdb,xvdb,w" ]
> 
> The guests are mostly PV, a sprinkling of PVH, no HVM.
> 
> There's 5 of these servers but 3 of them have only recently been
> upgraded to Xen 4.12.14 (on Debian buster) from Xen 4.10 (on Debian
> jessie). The fact that all of them have been pretty stable in the
> past, on differing hardware, makes me discount a hardware issue. The
> fact that two of them have been buster / 4.12.x for a long time
> without issue but are also now starting to see this does make me
> think that it's a recent dom0 kernel issue.
> 
> When the problem occurs, inside every domU I see things like this:
> 
> Feb 26 20:02:34 backup4 kernel: [2530464.736085] INFO: task btrfs-transacti:333 blocked for more than 120 seconds.
> Feb 26 20:02:34 backup4 kernel: [2530464.736107]       Not tainted 4.9.0-14-amd64 #1 Debian 4.9.246-2
> Feb 26 20:02:34 backup4 kernel: [2530464.736117] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Feb 26 20:02:34 backup4 kernel: [2530464.736131] btrfs-transacti D    0   333      2 0x00000000
> Feb 26 20:02:34 backup4 kernel: [2530464.736146]  0000000000000246 ffff8800f4e0c400 0000000000000000 ffff8800f8a7f100
> Feb 26 20:02:34 backup4 kernel: [2530464.736168]  ffff8800fad18a00 ffff8800fa7dd000 ffffc90040b2f670 ffffffff8161a979
> Feb 26 20:02:34 backup4 kernel: [2530464.736188]  ffff8800fa6d0200 0000000000000000 ffff8800fad18a00 0000000000000010
> Feb 26 20:02:34 backup4 kernel: [2530464.736209] Call Trace:
> Feb 26 20:02:34 backup4 kernel: [2530464.736223]  [<ffffffff8161a979>] ? __schedule+0x239/0x6f0
> Feb 26 20:02:34 backup4 kernel: [2530464.736236]  [<ffffffff8161ae62>] ? schedule+0x32/0x80
> Feb 26 20:02:34 backup4 kernel: [2530464.736248]  [<ffffffff8161e1fd>] ? schedule_timeout+0x1dd/0x380
> Feb 26 20:02:34 backup4 kernel: [2530464.736263]  [<ffffffff8101c201>] ? xen_clocksource_get_cycles+0x11/0x20
> Feb 26 20:02:34 backup4 kernel: [2530464.736275]  [<ffffffff8161a6dd>] ? io_schedule_timeout+0x9d/0x100
> Feb 26 20:02:34 backup4 kernel: [2530464.736289]  [<ffffffff81367964>] ? __sbitmap_queue_get+0x24/0x90
> Feb 26 20:02:34 backup4 kernel: [2530464.736302]  [<ffffffff81317f60>] ? bt_get.isra.6+0x160/0x220
> Feb 26 20:02:34 backup4 kernel: [2530464.736338]  [<ffffffffc0148bf8>] ? __btrfs_map_block+0x6c8/0x11d0 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736353]  [<ffffffff810bf010>] ? prepare_to_wait_event+0xf0/0xf0
> Feb 26 20:02:34 backup4 kernel: [2530464.736364]  [<ffffffff813182d3>] ? blk_mq_get_tag+0x23/0x90
> Feb 26 20:02:34 backup4 kernel: [2530464.736377]  [<ffffffff81313b6a>] ? __blk_mq_alloc_request+0x1a/0x220
> Feb 26 20:02:34 backup4 kernel: [2530464.736390]  [<ffffffff81314a39>] ? blk_mq_map_request+0xd9/0x170
> Feb 26 20:02:34 backup4 kernel: [2530464.736402]  [<ffffffff8131726b>] ? blk_mq_make_request+0xbb/0x580
> Feb 26 20:02:34 backup4 kernel: [2530464.736429]  [<ffffffffc0148bf8>] ? __btrfs_map_block+0x6c8/0x11d0 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736444]  [<ffffffff8130b0f5>] ? generic_make_request+0x115/0x2d0
> Feb 26 20:02:34 backup4 kernel: [2530464.736456]  [<ffffffff8130b326>] ? submit_bio+0x76/0x140
> Feb 26 20:02:34 backup4 kernel: [2530464.736481]  [<ffffffffc0149d9a>] ? btrfs_map_bio+0x19a/0x340 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736505]  [<ffffffffc0111635>] ? btree_submit_bio_hook+0xf5/0x110 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736535]  [<ffffffffc0138318>] ? submit_one_bio+0x68/0x90 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736561]  [<ffffffffc013fd4d>] ? read_extent_buffer_pages+0x1cd/0x300 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736587]  [<ffffffffc010fbe0>] ? free_root_pointers+0x60/0x60 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736609]  [<ffffffffc010ff9c>] ? btree_read_extent_buffer_pages+0x8c/0x100 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736635]  [<ffffffffc0111814>] ? read_tree_block+0x34/0x50 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736655]  [<ffffffffc00ef9f3>] ? read_block_for_search.isra.36+0x133/0x320 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736678]  [<ffffffffc00eabe4>] ? unlock_up+0xd4/0x180 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736700]  [<ffffffffc00f1b8d>] ? btrfs_search_slot+0x3ad/0xa00 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736723]  [<ffffffffc00f3a47>] ? btrfs_insert_empty_items+0x67/0xc0 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736748]  [<ffffffffc00ffe24>] ? __btrfs_run_delayed_refs+0xfc4/0x13a0 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736763]  [<ffffffff810164bd>] ? xen_mc_flush+0xdd/0x1d0
> Feb 26 20:02:34 backup4 kernel: [2530464.736785]  [<ffffffffc01033ad>] ? btrfs_run_delayed_refs+0x9d/0x2b0 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736811]  [<ffffffffc0119817>] ? btrfs_commit_transaction+0x57/0xa10 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736837]  [<ffffffffc011a266>] ? start_transaction+0x96/0x480 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736861]  [<ffffffffc011464c>] ? transaction_kthread+0x1dc/0x200 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736886]  [<ffffffffc0114470>] ? btrfs_cleanup_transaction+0x590/0x590 [btrfs]
> Feb 26 20:02:34 backup4 kernel: [2530464.736901]  [<ffffffff8109be69>] ? kthread+0xd9/0xf0
> Feb 26 20:02:34 backup4 kernel: [2530464.736913]  [<ffffffff8109bd90>] ? kthread_park+0x60/0x60
> Feb 26 20:02:34 backup4 kernel: [2530464.736926]  [<ffffffff8161f8f7>] ? ret_from_fork+0x57/0x70
> 
> It's all kinds of guest kernel, and the processes are basically
> anything that tries to access its block devices.
> 
> Over in the dom0 at the time, I mostly haven't managed to get logs,
> probably because its logging is on the same md device that is having
> problems. Some of the servers are fortunate to have their dom0
> operating system installed on separate devices to the guest devices,
> and on one of those I got this:
> 
> Feb 20 00:58:44 talisker kernel: [5876461.472590] INFO: task md5_raid10:226 blocked for more than 120 seconds.
> Feb 20 00:58:44 talisker kernel: [5876461.473105]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
> Feb 20 00:58:44 talisker kernel: [5876461.473523] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Feb 20 00:58:44 talisker kernel: [5876461.473936] md5_raid10      D    0   226      2 0x80000000
> Feb 20 00:58:44 talisker kernel: [5876461.474341] Call Trace:
> Feb 20 00:58:44 talisker kernel: [5876461.474743]  __schedule+0x29f/0x840
> Feb 20 00:58:44 talisker kernel: [5876461.475142]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Feb 20 00:58:44 talisker kernel: [5876461.475554]  schedule+0x28/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.475964]  md_super_wait+0x6e/0xa0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.476372]  ? finish_wait+0x80/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.476817]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.477504]  ? md_bitmap_get_counter+0x42/0xd0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.478248]  md_bitmap_daemon_work+0x1f7/0x370 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.478904]  md_check_recovery+0x41/0x530 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.479309]  raid10d+0x62/0x1460 [raid10]
> Feb 20 00:58:44 talisker kernel: [5876461.479722]  ? __switch_to_asm+0x41/0x70
> Feb 20 00:58:44 talisker kernel: [5876461.480133]  ? finish_task_switch+0x78/0x280
> Feb 20 00:58:44 talisker kernel: [5876461.480540]  ? _raw_spin_lock_irqsave+0x15/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.480987]  ? lock_timer_base+0x67/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.481719]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Feb 20 00:58:44 talisker kernel: [5876461.482358]  ? try_to_del_timer_sync+0x4d/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.482768]  ? del_timer_sync+0x37/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.483162]  ? schedule_timeout+0x173/0x3b0
> Feb 20 00:58:44 talisker kernel: [5876461.483553]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.483944]  ? md_thread+0x94/0x150 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.484345]  ? r10bio_pool_alloc+0x20/0x20 [raid10]
> Feb 20 00:58:44 talisker kernel: [5876461.484777]  md_thread+0x94/0x150 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.485500]  ? finish_wait+0x80/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.486083]  kthread+0x112/0x130
> Feb 20 00:58:44 talisker kernel: [5876461.486479]  ? kthread_bind+0x30/0x30
> Feb 20 00:58:44 talisker kernel: [5876461.486870]  ret_from_fork+0x35/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.487260] INFO: task 1.xvda-0:4237 blocked for more than 120 seconds.
> Feb 20 00:58:44 talisker kernel: [5876461.487644]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
> Feb 20 00:58:44 talisker kernel: [5876461.488027] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Feb 20 00:58:44 talisker kernel: [5876461.488422] 1.xvda-0        D    0  4237      2 0x80000000
> Feb 20 00:58:44 talisker kernel: [5876461.488842] Call Trace:
> Feb 20 00:58:44 talisker kernel: [5876461.489530]  __schedule+0x29f/0x840
> Feb 20 00:58:44 talisker kernel: [5876461.490149]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Feb 20 00:58:44 talisker kernel: [5876461.490545]  schedule+0x28/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.490954]  md_super_wait+0x6e/0xa0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.491330]  ? finish_wait+0x80/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.491708]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.492101]  md_bitmap_unplug+0xc5/0x120 [md_mod]
> Feb 20 00:58:44 talisker kernel: [5876461.492490]  raid10_unplug+0xd4/0x190 [raid10]
> Feb 20 00:58:44 talisker kernel: [5876461.492926]  blk_flush_plug_list+0xcf/0x240
> Feb 20 00:58:44 talisker kernel: [5876461.493648]  blk_finish_plug+0x21/0x2e
> Feb 20 00:58:44 talisker kernel: [5876461.494277]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.494657]  ? inv_show+0x30/0x30
> Feb 20 00:58:44 talisker kernel: [5876461.495043]  __do_block_io_op+0x30f/0x610 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.495458]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Feb 20 00:58:44 talisker kernel: [5876461.495871]  ? try_to_del_timer_sync+0x4d/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.496264]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.496784]  ? finish_wait+0x80/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.497418]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.498041]  kthread+0x112/0x130
> Feb 20 00:58:44 talisker kernel: [5876461.498668]  ? kthread_bind+0x30/0x30
> Feb 20 00:58:44 talisker kernel: [5876461.499309]  ret_from_fork+0x35/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.499960] INFO: task 1.xvda-1:4238 blocked for more than 120 seconds.
> Feb 20 00:58:44 talisker kernel: [5876461.500518]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
> Feb 20 00:58:44 talisker kernel: [5876461.500943] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Feb 20 00:58:44 talisker kernel: [5876461.501609] 1.xvda-1        D    0  4238      2 0x80000000
> Feb 20 00:58:44 talisker kernel: [5876461.501992] Call Trace:
> Feb 20 00:58:44 talisker kernel: [5876461.502372]  __schedule+0x29f/0x840
> Feb 20 00:58:44 talisker kernel: [5876461.502747]  schedule+0x28/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.503121]  io_schedule+0x12/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.503494]  wbt_wait+0x205/0x300
> Feb 20 00:58:44 talisker kernel: [5876461.503863]  ? wbt_wait+0x300/0x300
> Feb 20 00:58:44 talisker kernel: [5876461.504237]  rq_qos_throttle+0x31/0x40
> Feb 20 00:58:44 talisker kernel: [5876461.504637]  blk_mq_make_request+0x111/0x530
> Feb 20 00:58:44 talisker kernel: [5876461.505319]  generic_make_request+0x1a4/0x400
> Feb 20 00:58:44 talisker kernel: [5876461.505999]  raid10_unplug+0xfd/0x190 [raid10]
> Feb 20 00:58:44 talisker kernel: [5876461.506402]  blk_flush_plug_list+0xcf/0x240
> Feb 20 00:58:44 talisker kernel: [5876461.506772]  blk_finish_plug+0x21/0x2e
> Feb 20 00:58:44 talisker kernel: [5876461.507140]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.507792]  ? inv_show+0x30/0x30
> Feb 20 00:58:44 talisker kernel: [5876461.508166]  __do_block_io_op+0x30f/0x610 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.508549]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Feb 20 00:58:44 talisker kernel: [5876461.508967]  ? try_to_del_timer_sync+0x4d/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.509673]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.510304]  ? finish_wait+0x80/0x80
> Feb 20 00:58:44 talisker kernel: [5876461.510678]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
> Feb 20 00:58:44 talisker kernel: [5876461.511049]  kthread+0x112/0x130
> Feb 20 00:58:44 talisker kernel: [5876461.511413]  ? kthread_bind+0x30/0x30
> Feb 20 00:58:44 talisker kernel: [5876461.511776]  ret_from_fork+0x35/0x40
> 
> Administrators of the guests notice problems and try to shut down or
> reboot, but that fails because dom0 can't write to its xenstore, so
> domains mostly can't be managed after this happens and the server
> has to be forcibly rebooted.
> 
> These are all using the default scheduler, which I understand has
> been credit2 since Xen 4.12. SMT is enabled and I've limited dom0
> to 2 cores, then pinned dom0 to cores 0 and 1, and pinned all other
> guests to their choice out of the remaining cores. That is something
> I did fairly recently though; for a long time there was no pinning,
> yet this still started happening.
> 
> In a couple of cases I have found that I've been able to run
> "xentop" and see a particular guest doing heavy block device reads.
> I've done an "xl destroy" on that guest and then everything has
> returned to normal. Unfortunately the times this has happened have
> been on dom0s without useful logs. There's just a gap in logs
> between when the problems started and when the (apparently)
> problematic domU is destroyed. The problematic domU can then be
> booted again and life goes on.
> 
> So, it could be totally unrelated to Xen, and as I investigate
> further I will try different kernels in dom0. But the way that
> destroying a domU frees things up makes me wonder if it could be
> Xen-related, maybe scheduler-related? Also, it's always the md
> device that the guest block devices are on that is stalled; IO to
> other devices in dom0 remains fine.
> 
> Are there any hypervisor magic-sysrq debug keys that could provide
> information useful in ruling a Xen issue in or out?
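[Editorial note: Xen does expose debug keys that can be injected
from dom0 via xl; a sketch of how they are usually used follows.
The specific keys shown are the documented ones; their output lands
in the hypervisor console buffer, readable with "xl dmesg".]

```sh
# List the available debug keys ('h' prints the key help):
xl debug-keys h

# 'r' dumps the scheduler run queues; 'q' dumps domain/vCPU state:
xl debug-keys r
xl debug-keys q

# Read the resulting output from the hypervisor console log:
xl dmesg | tail -n 100
```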
> 
> Should I try using the "credit" scheduler (instead of "credit2") at
> next boot?
> 
> I *think* this has only been seen with kernel version
> 4.19.0-13-amd64. Some of these servers have now been rebooted into
> 4.19.0-14-amd64 (the latest available package) due to the issue,
> which has not recurred for them yet.
> 
> If it does recur with 4.19.0-14-amd64, what kernel version would
> you advise I try at the next reboot, so as to take the Debian
> kernel out of the picture? I will download an upstream kernel
> release and build a Debian package from it, using my existing
> kernel config as a base.
> 
> As Debian buster is on the 4.19 series, should I pick the latest
> 4.19.x longterm to stay close to it, or the 5.10.x longterm, or
> the 5.11.x stable?
> 
> Thanks,
> Andy


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 14:34:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 14:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140819.260171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4ih-0000mb-U9; Sat, 12 Jun 2021 14:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140819.260171; Sat, 12 Jun 2021 14:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls4ih-0000mU-Q2; Sat, 12 Jun 2021 14:34:43 +0000
Received: by outflank-mailman (input) for mailman id 140819;
 Sat, 12 Jun 2021 14:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4ih-0000mK-8r; Sat, 12 Jun 2021 14:34:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4ih-00063k-3k; Sat, 12 Jun 2021 14:34:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4ig-000592-Pn; Sat, 12 Jun 2021 14:34:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls4ig-0005qE-PH; Sat, 12 Jun 2021 14:34:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Cm3QllWTuZ4P6gSnRGEmEJu0cnqKj59CRy4ffKX7Ow=; b=32GqEDMETUq+/Ql/mNTwqQJXFS
	mhRR/9mry1Y66nTr7qPlOsV0osJTwlsY4QqZZBj+l+QezH3Q/jYQJd/aCGSdZ4wKteVIczfysKaXI
	X4D6q4kUnvJVA68yGSaW/kOGZ9tnKmH2DFdOQ//WVHvi2lmrHq80crCu1eZjVcNmWg28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162683-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162683: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 14:34:42 +0000

flight 162683 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162683/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    8 days
Failing since        162368  2021-06-04 15:42:59 Z    7 days   10 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 18:55:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 18:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140843.260233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls8md-0007Dr-67; Sat, 12 Jun 2021 18:55:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140843.260233; Sat, 12 Jun 2021 18:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ls8md-0007Dk-20; Sat, 12 Jun 2021 18:55:03 +0000
Received: by outflank-mailman (input) for mailman id 140843;
 Sat, 12 Jun 2021 18:55:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls8mb-0007Da-Pm; Sat, 12 Jun 2021 18:55:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls8mb-0002XK-HZ; Sat, 12 Jun 2021 18:55:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ls8mb-0002oT-90; Sat, 12 Jun 2021 18:55:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ls8mb-0002XW-64; Sat, 12 Jun 2021 18:55:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EhKqQq8pP4onCS0nl0shRRRZYndAhe1vi5KWdmiVt+8=; b=45IODcrX0HAgXXr61ewjXcMZAW
	rnT4k4nAc3zpQ9SxvBh6oE80OqDPs2MuU4KieRBPjMTE4ArhE8K6VIO+nDOKq7moHNTTOcON0F6D6
	GlbMTd4KJOmlfVxIKy51c4ZzYEQYNhxGuoS3cOecaRDtBTUd1SxCsUM/EZ6aTF/Vw+Co=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162676-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162676: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Jun 2021 18:55:01 +0000

flight 162676 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  296 days
Failing since        152659  2020-08-21 14:07:39 Z  295 days  545 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    1 days    2 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 22:48:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 22:48:22 +0000
MIME-Version: 1.0
References: <20210226223927.GQ29212@bitfolk.com> <20210612141132.rjtmvjv6377lz4tl@bitfolk.com>
In-Reply-To: <20210612141132.rjtmvjv6377lz4tl@bitfolk.com>
Reply-To: Rob.Townley@gmail.com
From: Rob Townley <rob.townley@gmail.com>
Date: Sat, 12 Jun 2021 17:47:49 -0500
Message-ID: <CA+VdTb8TQFu81S=s4n26NyBoZ2Lr-XQo6wWBrsN4hsv0_y-gcA@mail.gmail.com>
Subject: Re: dom0 suddenly blocking on all access to md device
To: Andy Smith <andy@strugglers.net>
Cc: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000a269d705c4996af1"

--000000000000a269d705c4996af1
Content-Type: text/plain; charset="UTF-8"

mdadm.conf has email reporting capabilities to alert you to failing drives.
Test that you actually receive the emails.

Use mdadm to run tests on the RAID array.

iostat may indicate a failing drive as well, as may
smartctl -a /dev/
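For reference, a minimal sketch of those checks. The array and disk names
(md5, sda) are placeholders, not taken from this thread; substitute your
own devices, and run as root.

```shell
# 1. Confirm mdadm is configured to mail alerts (MAILADDR in mdadm.conf)
grep -E '^(MAILADDR|PROGRAM)' /etc/mdadm/mdadm.conf

# 2. Send a test alert for each array to verify mail delivery end to end
mdadm --monitor --scan --test --oneshot

# 3. Trigger a consistency check on an array and watch its progress
echo check > /sys/block/md5/md/sync_action
cat /proc/mdstat

# 4. After the check completes, a non-zero mismatch count needs looking into
cat /sys/block/md5/md/mismatch_cnt

# 5. Per-disk SMART health for each array member
smartctl -a /dev/sda
```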


On Sat, Jun 12, 2021 at 9:12 AM Andy Smith <andy@strugglers.net> wrote:

> Hello,
>
> Unfortunately I'm still experiencing this problem as described in
> the earlier email below and I'm running out of ideas for things to
> test / try.
>
> What was fine for a long time (~5 years): Debian jessie dom0 kernel
> 4.9.x with Xen 4.10.
>
> The issues below started happening on the same machines once dom0 was
> upgraded to a Debian buster 4.19.x kernel (currently 4.19.0-16-amd64)
> and the 4.12 hypervisor, starting around December 2020.
>
> Since then I've also tried going to Xen 4.14.2 (plus latest XSA
> patches up to XSA377) and it's still happening. I've also tried
> switching to "credit" scheduler and that did not make a difference.
> It can be a month or two between incidents although one machine just
> had it happen twice in 3 days. Maybe half a dozen incidents so far
> on different machines, different hardware configs.
>
> Hypervisor command line is:
>
> dom0_mem=4096M dom0_max_vcpus=2 com1=115200,8n1,0x2f8,10 console=com1,vga
> ucode=scan serial_tx_buffer=256k smt=1
>
> There's a serial console, but not much of interest is ever seen on
> it. If there are some debug keys you would like to see the output of,
> please let me know. Pretty much the only sort of thing that gets
> logged in dom0 is the following, and that could just be a symptom.
>
> Jun 12 12:04:40 clockwork kernel: [216427.246183] INFO: task md5_raid1:205 blocked for more than 120 seconds.
> Jun 12 12:04:40 clockwork kernel: [216427.246995]       Not tainted 4.19.0-16-amd64 #1 Debian 4.19.181-1
> Jun 12 12:04:40 clockwork kernel: [216427.247852] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Jun 12 12:04:40 clockwork kernel: [216427.248674] md5_raid1       D 0   205      2 0x80000000
> Jun 12 12:04:40 clockwork kernel: [216427.249534] Call Trace:
> Jun 12 12:04:40 clockwork kernel: [216427.250368]  __schedule+0x29f/0x840
> Jun 12 12:04:40 clockwork kernel: [216427.251788]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Jun 12 12:04:40 clockwork kernel: [216427.253078]  schedule+0x28/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.253945]  md_super_wait+0x6e/0xa0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.254812]  ? finish_wait+0x80/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.256139]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.256994]  ? md_bitmap_get_counter+0x42/0xd0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.257787]  md_bitmap_daemon_work+0x1f7/0x370 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.258608]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.259553]  md_check_recovery+0x41/0x530 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.260304]  raid1d+0x5c/0xf10 [raid1]
> Jun 12 12:04:40 clockwork kernel: [216427.261096]  ? lock_timer_base+0x67/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.261863]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> Jun 12 12:04:40 clockwork kernel: [216427.262659]  ? try_to_del_timer_sync+0x4d/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.263436]  ? del_timer_sync+0x37/0x40
> Jun 12 12:04:40 clockwork kernel: [216427.264189]  ? schedule_timeout+0x173/0x3b0
> Jun 12 12:04:40 clockwork kernel: [216427.264911]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.265664]  ? md_thread+0x94/0x150 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.266412]  ? process_checks+0x4a0/0x4a0 [raid1]
> Jun 12 12:04:40 clockwork kernel: [216427.267124]  md_thread+0x94/0x150 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.267842]  ? finish_wait+0x80/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.268539]  kthread+0x112/0x130
> Jun 12 12:04:40 clockwork kernel: [216427.269231]  ? kthread_bind+0x30/0x30
> Jun 12 12:04:40 clockwork kernel: [216427.269903]  ret_from_fork+0x35/0x40
> Jun 12 12:04:40 clockwork kernel: [216427.270590] INFO: task md2_raid1:207 blocked for more than 120 seconds.
> Jun 12 12:04:40 clockwork kernel: [216427.271260]       Not tainted 4.19.0-16-amd64 #1 Debian 4.19.181-1
> Jun 12 12:04:40 clockwork kernel: [216427.271942] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Jun 12 12:04:40 clockwork kernel: [216427.272721] md2_raid1       D 0   207      2 0x80000000
> Jun 12 12:04:40 clockwork kernel: [216427.273432] Call Trace:
> Jun 12 12:04:40 clockwork kernel: [216427.274172]  __schedule+0x29f/0x840
> Jun 12 12:04:40 clockwork kernel: [216427.274869]  schedule+0x28/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.275543]  io_schedule+0x12/0x40
> Jun 12 12:04:40 clockwork kernel: [216427.276208]  wbt_wait+0x205/0x300
> Jun 12 12:04:40 clockwork kernel: [216427.276861]  ? wbt_wait+0x300/0x300
> Jun 12 12:04:40 clockwork kernel: [216427.277503]  rq_qos_throttle+0x31/0x40
> Jun 12 12:04:40 clockwork kernel: [216427.278193]  blk_mq_make_request+0x111/0x530
> Jun 12 12:04:40 clockwork kernel: [216427.278876]  generic_make_request+0x1a4/0x400
> Jun 12 12:04:40 clockwork kernel: [216427.279657]  ? try_to_wake_up+0x54/0x470
> Jun 12 12:04:40 clockwork kernel: [216427.280400]  submit_bio+0x45/0x130
> Jun 12 12:04:40 clockwork kernel: [216427.281136]  ? md_super_write.part.63+0x90/0x120 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.281788]  md_update_sb.part.65+0x3a8/0x8e0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.282480]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.283106]  md_check_recovery+0x272/0x530 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.283738]  raid1d+0x5c/0xf10 [raid1]
> Jun 12 12:04:40 clockwork kernel: [216427.284345]  ? __schedule+0x2a7/0x840
> Jun 12 12:04:40 clockwork kernel: [216427.284939]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.285522]  ? schedule+0x28/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.286121]  ? schedule_timeout+0x26d/0x3b0
> Jun 12 12:04:40 clockwork kernel: [216427.286702]  ? __schedule+0x2a7/0x840
> Jun 12 12:04:40 clockwork kernel: [216427.287279]  ? md_rdev_init+0xb0/0xb0 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.287871]  ? md_thread+0x94/0x150 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.288458]  ? process_checks+0x4a0/0x4a0 [raid1]
> Jun 12 12:04:40 clockwork kernel: [216427.289062]  md_thread+0x94/0x150 [md_mod]
> Jun 12 12:04:40 clockwork kernel: [216427.289663]  ? finish_wait+0x80/0x80
> Jun 12 12:04:40 clockwork kernel: [216427.290288]  kthread+0x112/0x130
> Jun 12 12:04:40 clockwork kernel: [216427.290858]  ? kthread_bind+0x30/0x30
> Jun 12 12:04:40 clockwork kernel: [216427.291433]  ret_from_fork+0x35/0x40
>
> What I HAVEN'T yet tried is a much newer kernel. That will probably
> be what I try next having exhausted all ideas about upgrading or
> configuring Xen.
>
> Should I take a kernel from buster-backports which would currently
> be:
>
>
> https://packages.debian.org/buster-backports/linux-image-5.10.0-0.bpo.5-amd64
>
> or should I build a kernel package from a mainline release?
>
> Thanks,
> Andy
>
> On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wrote:
> > Hi,
> >
> > I suspect this might be an issue in the dom0 kernel (Debian buster,
> > kernel 4.19.0-13-amd64), but just lately I've been sporadically
> > having issues where dom0 blocks or severely slows down on all access
> > to the particular md device that hosts all domU block devices.
> >
> > Setup in dom0: an md RAID10 that is used as an LVM PV for an LVM volume
> > group, where all domU block devices are LVM logical volumes in that
> > group. So the relevant part of a domU config file might look like:
> >
> > disk = [ "phy:/dev/myvg/domu_debtest1_xvda,xvda,w",
> >          "phy:/dev/myvg/domu_debtest1_xvdb,xvdb,w" ]
> >
> > The guests are mostly PV, a sprinkling of PVH, no HVM.
> >
> > There's 5 of these servers but 3 of them have only recently been
> > upgraded to Xen 4.12.14 (on Debian buster) from Xen 4.10 (on Debian
> > jessie). The fact that all of them have been pretty stable in the
> > past, on differing hardware, makes me discount a hardware issue. The
> > fact that two of them have been buster / 4.12.x for a long time
> > without issue but are also now starting to see this does make me
> > think that it's a recent dom0 kernel issue.
> >
> > When the problem occurs, inside every domU I see things like this:
> >
> > Feb 26 20:02:34 backup4 kernel: [2530464.736085] INFO: task
> btrfs-transacti:333 blocked for more than 120 seconds.
> > Feb 26 20:02:34 backup4 kernel: [2530464.736107]       Not tainted
> 4.9.0-14-amd64 #1 Debian 4.9.246-2
> > Feb 26 20:02:34 backup4 kernel: [2530464.736117] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Feb 26 20:02:34 backup4 kernel: [2530464.736131] btrfs-transacti D    0
>  333      2 0x00000000
> > Feb 26 20:02:34 backup4 kernel: [2530464.736146]  0000000000000246
> ffff8800f4e0c400 0000000000000000 ffff8800f8a7f100
> > Feb 26 20:02:34 backup4 kernel: [2530464.736168]  ffff8800fad18a00
> ffff8800fa7dd000 ffffc90040b2f670 ffffffff8161a979
> > Feb 26 20:02:34 backup4 kernel: [2530464.736188]  ffff8800fa6d0200
> 0000000000000000 ffff8800fad18a00 0000000000000010
> > Feb 26 20:02:34 backup4 kernel: [2530464.736209] Call Trace:
> > Feb 26 20:02:34 backup4 kernel: [2530464.736223]  [<ffffffff8161a979>] ?
> __schedule+0x239/0x6f0
> > Feb 26 20:02:34 backup4 kernel: [2530464.736236]  [<ffffffff8161ae62>] ?
> schedule+0x32/0x80
> > Feb 26 20:02:34 backup4 kernel: [2530464.736248]  [<ffffffff8161e1fd>] ?
> schedule_timeout+0x1dd/0x380
> > Feb 26 20:02:34 backup4 kernel: [2530464.736263]  [<ffffffff8101c201>] ?
> xen_clocksource_get_cycles+0x11/0x20
> > Feb 26 20:02:34 backup4 kernel: [2530464.736275]  [<ffffffff8161a6dd>] ?
> io_schedule_timeout+0x9d/0x100
> > Feb 26 20:02:34 backup4 kernel: [2530464.736289]  [<ffffffff81367964>] ?
> __sbitmap_queue_get+0x24/0x90
> > Feb 26 20:02:34 backup4 kernel: [2530464.736302]  [<ffffffff81317f60>] ?
> bt_get.isra.6+0x160/0x220
> > Feb 26 20:02:34 backup4 kernel: [2530464.736338]  [<ffffffffc0148bf8>] ?
> __btrfs_map_block+0x6c8/0x11d0 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736353]  [<ffffffff810bf010>] ?
> prepare_to_wait_event+0xf0/0xf0
> > Feb 26 20:02:34 backup4 kernel: [2530464.736364]  [<ffffffff813182d3>] ?
> blk_mq_get_tag+0x23/0x90
> > Feb 26 20:02:34 backup4 kernel: [2530464.736377]  [<ffffffff81313b6a>] ?
> __blk_mq_alloc_request+0x1a/0x220
> > Feb 26 20:02:34 backup4 kernel: [2530464.736390]  [<ffffffff81314a39>] ?
> blk_mq_map_request+0xd9/0x170
> > Feb 26 20:02:34 backup4 kernel: [2530464.736402]  [<ffffffff8131726b>] ?
> blk_mq_make_request+0xbb/0x580
> > Feb 26 20:02:34 backup4 kernel: [2530464.736429]  [<ffffffffc0148bf8>] ?
> __btrfs_map_block+0x6c8/0x11d0 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736444]  [<ffffffff8130b0f5>] ?
> generic_make_request+0x115/0x2d0
> > Feb 26 20:02:34 backup4 kernel: [2530464.736456]  [<ffffffff8130b326>] ?
> submit_bio+0x76/0x140
> > Feb 26 20:02:34 backup4 kernel: [2530464.736481]  [<ffffffffc0149d9a>] ?
> btrfs_map_bio+0x19a/0x340 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736505]  [<ffffffffc0111635>] ?
> btree_submit_bio_hook+0xf5/0x110 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736535]  [<ffffffffc0138318>] ?
> submit_one_bio+0x68/0x90 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736561]  [<ffffffffc013fd4d>] ?
> read_extent_buffer_pages+0x1cd/0x300 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736587]  [<ffffffffc010fbe0>] ?
> free_root_pointers+0x60/0x60 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736609]  [<ffffffffc010ff9c>] ?
> btree_read_extent_buffer_pages+0x8c/0x100 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736635]  [<ffffffffc0111814>] ?
> read_tree_block+0x34/0x50 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736655]  [<ffffffffc00ef9f3>] ?
> read_block_for_search.isra.36+0x133/0x320 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736678]  [<ffffffffc00eabe4>] ?
> unlock_up+0xd4/0x180 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736700]  [<ffffffffc00f1b8d>] ?
> btrfs_search_slot+0x3ad/0xa00 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736723]  [<ffffffffc00f3a47>] ?
> btrfs_insert_empty_items+0x67/0xc0 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736748]  [<ffffffffc00ffe24>] ?
> __btrfs_run_delayed_refs+0xfc4/0x13a0 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736763]  [<ffffffff810164bd>] ?
> xen_mc_flush+0xdd/0x1d0
> > Feb 26 20:02:34 backup4 kernel: [2530464.736785]  [<ffffffffc01033ad>] ?
> btrfs_run_delayed_refs+0x9d/0x2b0 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736811]  [<ffffffffc0119817>] ?
> btrfs_commit_transaction+0x57/0xa10 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736837]  [<ffffffffc011a266>] ?
> start_transaction+0x96/0x480 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736861]  [<ffffffffc011464c>] ?
> transaction_kthread+0x1dc/0x200 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736886]  [<ffffffffc0114470>] ?
> btrfs_cleanup_transaction+0x590/0x590 [btrfs]
> > Feb 26 20:02:34 backup4 kernel: [2530464.736901]  [<ffffffff8109be69>] ?
> kthread+0xd9/0xf0
> > Feb 26 20:02:34 backup4 kernel: [2530464.736913]  [<ffffffff8109bd90>] ?
> kthread_park+0x60/0x60
> > Feb 26 20:02:34 backup4 kernel: [2530464.736926]  [<ffffffff8161f8f7>] ?
> ret_from_fork+0x57/0x70
> >
> > It's all kinds of guest kernel, and the processes are basically
> > anything that tries to access its block devices.
> >
> > Over in the dom0 at the time, I mostly haven't managed to get logs,
> > probably because its logging is on the same md device that is having
> > problems. Some of the servers are fortunate to have their dom0
> > operating system installed on separate devices to the guest devices,
> > and on one of those I got this:
> >
> > Feb 20 00:58:44 talisker kernel: [5876461.472590] INFO: task
> md5_raid10:226 blocked for more than 120 seconds.
> > Feb 20 00:58:44 talisker kernel: [5876461.473105]       Not tainted
> 4.19.0-13-amd64 #1 Debian 4.19.160-2
> > Feb 20 00:58:44 talisker kernel: [5876461.473523] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Feb 20 00:58:44 talisker kernel: [5876461.473936] md5_raid10      D
> 0   226      2 0x80000000
> > Feb 20 00:58:44 talisker kernel: [5876461.474341] Call Trace:
> > Feb 20 00:58:44 talisker kernel: [5876461.474743]  __schedule+0x29f/0x840
> > Feb 20 00:58:44 talisker kernel: [5876461.475142]  ?
> _raw_spin_unlock_irqrestore+0x14/0x20
> > Feb 20 00:58:44 talisker kernel: [5876461.475554]  schedule+0x28/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.475964]
> md_super_wait+0x6e/0xa0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.476372]  ?
> finish_wait+0x80/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.476817]
> md_bitmap_wait_writes+0x93/0xa0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.477504]  ?
> md_bitmap_get_counter+0x42/0xd0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.478248]
> md_bitmap_daemon_work+0x1f7/0x370 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.478904]
> md_check_recovery+0x41/0x530 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.479309]  raid10d+0x62/0x1460
> [raid10]
> > Feb 20 00:58:44 talisker kernel: [5876461.479722]  ?
> __switch_to_asm+0x41/0x70
> > Feb 20 00:58:44 talisker kernel: [5876461.480133]  ?
> finish_task_switch+0x78/0x280
> > Feb 20 00:58:44 talisker kernel: [5876461.480540]  ?
> _raw_spin_lock_irqsave+0x15/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.480987]  ?
> lock_timer_base+0x67/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.481719]  ?
> _raw_spin_unlock_irqrestore+0x14/0x20
> > Feb 20 00:58:44 talisker kernel: [5876461.482358]  ?
> try_to_del_timer_sync+0x4d/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.482768]  ?
> del_timer_sync+0x37/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.483162]  ?
> schedule_timeout+0x173/0x3b0
> > Feb 20 00:58:44 talisker kernel: [5876461.483553]  ?
> md_rdev_init+0xb0/0xb0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.483944]  ?
> md_thread+0x94/0x150 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.484345]  ?
> r10bio_pool_alloc+0x20/0x20 [raid10]
> > Feb 20 00:58:44 talisker kernel: [5876461.484777]  md_thread+0x94/0x150
> [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.485500]  ?
> finish_wait+0x80/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.486083]  kthread+0x112/0x130
> > Feb 20 00:58:44 talisker kernel: [5876461.486479]  ?
> kthread_bind+0x30/0x30
> > Feb 20 00:58:44 talisker kernel: [5876461.486870]
> ret_from_fork+0x35/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.487260] INFO: task
> 1.xvda-0:4237 blocked for more than 120 seconds.
> > Feb 20 00:58:44 talisker kernel: [5876461.487644]       Not tainted
> 4.19.0-13-amd64 #1 Debian 4.19.160-2
> > Feb 20 00:58:44 talisker kernel: [5876461.488027] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Feb 20 00:58:44 talisker kernel: [5876461.488422] 1.xvda-0        D
> 0  4237      2 0x80000000
> > Feb 20 00:58:44 talisker kernel: [5876461.488842] Call Trace:
> > Feb 20 00:58:44 talisker kernel: [5876461.489530]  __schedule+0x29f/0x840
> > Feb 20 00:58:44 talisker kernel: [5876461.490149]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> > Feb 20 00:58:44 talisker kernel: [5876461.490545]  schedule+0x28/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.490954]  md_super_wait+0x6e/0xa0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.491330]  ? finish_wait+0x80/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.491708]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.492101]  md_bitmap_unplug+0xc5/0x120 [md_mod]
> > Feb 20 00:58:44 talisker kernel: [5876461.492490]  raid10_unplug+0xd4/0x190 [raid10]
> > Feb 20 00:58:44 talisker kernel: [5876461.492926]  blk_flush_plug_list+0xcf/0x240
> > Feb 20 00:58:44 talisker kernel: [5876461.493648]  blk_finish_plug+0x21/0x2e
> > Feb 20 00:58:44 talisker kernel: [5876461.494277]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.494657]  ? inv_show+0x30/0x30
> > Feb 20 00:58:44 talisker kernel: [5876461.495043]  __do_block_io_op+0x30f/0x610 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.495458]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> > Feb 20 00:58:44 talisker kernel: [5876461.495871]  ? try_to_del_timer_sync+0x4d/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.496264]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.496784]  ? finish_wait+0x80/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.497418]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.498041]  kthread+0x112/0x130
> > Feb 20 00:58:44 talisker kernel: [5876461.498668]  ? kthread_bind+0x30/0x30
> > Feb 20 00:58:44 talisker kernel: [5876461.499309]  ret_from_fork+0x35/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.499960] INFO: task 1.xvda-1:4238 blocked for more than 120 seconds.
> > Feb 20 00:58:44 talisker kernel: [5876461.500518]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
> > Feb 20 00:58:44 talisker kernel: [5876461.500943] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Feb 20 00:58:44 talisker kernel: [5876461.501609] 1.xvda-1        D    0  4238      2 0x80000000
> > Feb 20 00:58:44 talisker kernel: [5876461.501992] Call Trace:
> > Feb 20 00:58:44 talisker kernel: [5876461.502372]  __schedule+0x29f/0x840
> > Feb 20 00:58:44 talisker kernel: [5876461.502747]  schedule+0x28/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.503121]  io_schedule+0x12/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.503494]  wbt_wait+0x205/0x300
> > Feb 20 00:58:44 talisker kernel: [5876461.503863]  ? wbt_wait+0x300/0x300
> > Feb 20 00:58:44 talisker kernel: [5876461.504237]  rq_qos_throttle+0x31/0x40
> > Feb 20 00:58:44 talisker kernel: [5876461.504637]  blk_mq_make_request+0x111/0x530
> > Feb 20 00:58:44 talisker kernel: [5876461.505319]  generic_make_request+0x1a4/0x400
> > Feb 20 00:58:44 talisker kernel: [5876461.505999]  raid10_unplug+0xfd/0x190 [raid10]
> > Feb 20 00:58:44 talisker kernel: [5876461.506402]  blk_flush_plug_list+0xcf/0x240
> > Feb 20 00:58:44 talisker kernel: [5876461.506772]  blk_finish_plug+0x21/0x2e
> > Feb 20 00:58:44 talisker kernel: [5876461.507140]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.507792]  ? inv_show+0x30/0x30
> > Feb 20 00:58:44 talisker kernel: [5876461.508166]  __do_block_io_op+0x30f/0x610 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.508549]  ? _raw_spin_unlock_irqrestore+0x14/0x20
> > Feb 20 00:58:44 talisker kernel: [5876461.508967]  ? try_to_del_timer_sync+0x4d/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.509673]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.510304]  ? finish_wait+0x80/0x80
> > Feb 20 00:58:44 talisker kernel: [5876461.510678]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
> > Feb 20 00:58:44 talisker kernel: [5876461.511049]  kthread+0x112/0x130
> > Feb 20 00:58:44 talisker kernel: [5876461.511413]  ? kthread_bind+0x30/0x30
> > Feb 20 00:58:44 talisker kernel: [5876461.511776]  ret_from_fork+0x35/0x40
> >
> > Administrators of the guests notice problems and try to shutdown or
> > reboot, but that fails because dom0 can't write to its xenstore, so
> > mostly domains can't be managed after this happens and the server
> > has to be forcibly rebooted.
> >
> > These are all using the default scheduler, which I understand is
> > credit2 since 4.12. SMT is enabled and I've limited dom0 to 2 cores,
> > then pinned dom0 to cores 0 and 1, and pinned all other guests to
> > their choice out of the remaining cores. That is something I did
> > fairly recently though; for a long time there was no pinning yet
> > this still started happening.
> >
> > In a couple of cases I have found that I've been able to run
> > "xentop" and see a particular guest doing heavy block device reads.
> > I've done an "xl destroy" on that guest and then everything has
> > returned to normal. Unfortunately the times this has happened have
> > been on dom0s without useful logs. There's just a gap in logs
> > between when the problems started and when the (apparently)
> > problematic domU is destroyed. The problematic domU can then be
> > booted again and life goes on.
> >
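The recovery sequence described above (spot the busy guest in xentop, destroy it, then boot it again) can be sketched as a dom0 shell session; the domain name and config path are placeholders:

```shell
# Watch per-domain CPU and virtual block device counters; the VBD_RD
# column identifies a guest doing heavy block reads (-d sets the
# refresh interval in seconds).
xentop -d 5

# Forcibly tear down the suspect guest - the equivalent of pulling
# the plug, as described in the report above.
xl destroy domu_debtest1

# Once I/O on the md device recovers, restart the guest.
xl create /etc/xen/domu_debtest1.cfg
```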
> > So, it could be totally unrelated to Xen, and as I investigate
> > further I will try different kernels in dom0. But the way that
> > destroying a domU frees things up makes me wonder if it could be Xen
> > related, maybe scheduler related? Also, it's always the md device
> > that the guest block devices are on that is stalled - IO to other
> > devices in dom0 is unaffected.
> >
> > Are there any hypervisor magic sysrq debug keys that could provide
> > useful information to you in ruling in / out a Xen issue?
> >
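On the debug-keys question: from dom0, `xl debug-keys` injects a key into the hypervisor and the resulting dump appears in the Xen console ring, readable with `xl dmesg`. A minimal sketch (key letters taken from the `h` help output of Xen 4.12-era hypervisors; check `h` first on the running system):

```shell
# 'h' prints the list of available debug keys to the Xen console.
xl debug-keys h

# 'q' dumps domain and vCPU state; 'r' dumps the scheduler run queues -
# both useful for ruling a scheduler problem in or out.
xl debug-keys q
xl debug-keys r

# Read the hypervisor console ring where the dumps land.
xl dmesg
```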
> > Should I try using the "credit" scheduler (instead of "credit2") at
> > next boot?
> >
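For reference, the scheduler is selected on the hypervisor (not dom0 kernel) command line at boot. A sketch for a Debian dom0, assuming the grub snippet shipped by the Xen packages is in use:

```shell
# In /etc/default/grub.d/xen.cfg (path as installed by Debian's Xen
# packages), append the scheduler option to the Xen command line:
#   GRUB_CMDLINE_XEN_DEFAULT="... sched=credit"
# then regenerate the grub config and reboot:
update-grub

# After reboot, the active scheduler is shown in the Sched column of
# the cpupool listing.
xl cpupool-list
```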
> > I *think* this has only been seen with kernel version
> > 4.19.0-13-amd64. Some of these servers have now been rebooted into
> > 4.19.0-14-amd64 (the latest available package) due to the issue,
> > which has not yet re-occurred for them.
> >
> > If it does re-occur with 4.19.0-14-amd64 what kernel version would
> > you advise I try out at next reboot so as to take the Debian kernel
> > out of the picture? I will download an upstream kernel release and
> > build a Debian package out of it, using my existing kernel config as
> > a base.
> >
> > As Debian buster is on the 4.19 series should I pick the latest
> > 4.19.x longterm to be near to it, or the 5.10.x longterm, or the
> > 5.11.x stable?
> >
> > Thanks,
> > Andy
>
>


--000000000000a269d705c4996af1--


From xen-devel-bounces@lists.xenproject.org Sat Jun 12 23:14:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Jun 2021 23:14:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140875.260303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsCpF-0005Kk-Ch; Sat, 12 Jun 2021 23:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140875.260303; Sat, 12 Jun 2021 23:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsCpF-0005Kd-9c; Sat, 12 Jun 2021 23:14:01 +0000
Received: by outflank-mailman (input) for mailman id 140875;
 Sat, 12 Jun 2021 23:13:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFEz=LG=strugglers.net=andy@srs-us1.protection.inumbo.net>)
 id 1lsCpD-0005KX-Gc
 for xen-devel@lists.xenproject.org; Sat, 12 Jun 2021 23:13:59 +0000
Received: from mail.bitfolk.com (unknown [2001:ba8:1f1:f019::25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91059439-b96c-4bb6-a5e3-45ca0bf01d61;
 Sat, 12 Jun 2021 23:13:58 +0000 (UTC)
Received: from andy by mail.bitfolk.com with local (Exim 4.89)
 (envelope-from <andy@strugglers.net>) id 1lsCpB-0000GS-9p
 for xen-devel@lists.xenproject.org; Sat, 12 Jun 2021 23:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91059439-b96c-4bb6-a5e3-45ca0bf01d61
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com
	; s=alpha; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID:
	Subject:To:From:Date:Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:
	Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
	:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
	List-Post:List-Owner:List-Archive;
	bh=OWqO3i/j6QzXQuag1dyHMW1KO6nKQG3OtJaUXgkTHN4=; b=fXBEVLDH9ukq5atMdaluX/xDrG
	kdELEy6ZZuyeHmIqKyEurU5FOtHqPPgA5XCrWyu4QcRPB3Yo8nrIcqagLA7Xz4TFI1sNQjHDm+tOW
	cW48OltHGva7YY4z89JZCoKmttlBGUgvgQc7OVXL8Kb7hkLkaPLv6CODsIifc5uPSv6HMSj2CvgZK
	SytOxxTji1edDEfijkQmHQXyVo3suYcM8d1jNOrie6jAuLudHF1H8nFbB0ynHudMzYSuBPul6R5fm
	leKhiBjOSKm002t3dRWv4VICjXLWy5rAQKKfjh0LWfpWK7zyMPzQq7jkLX1IE5AF3Fs5ixh1wijhN
	TlKfqbIQ==;
Date: Sat, 12 Jun 2021 23:13:57 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xenproject.org
Subject: Re: dom0 suddenly blocking on all access to md device
Message-ID: <20210612231357.upxplm7ecpvl3zlo@bitfolk.com>
References: <20210226223927.GQ29212@bitfolk.com>
 <20210612141132.rjtmvjv6377lz4tl@bitfolk.com>
 <CA+VdTb8TQFu81S=s4n26NyBoZ2Lr-XQo6wWBrsN4hsv0_y-gcA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CA+VdTb8TQFu81S=s4n26NyBoZ2Lr-XQo6wWBrsN4hsv0_y-gcA@mail.gmail.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: NeoMutt/20170113 (1.7.2)
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-SA-Exim-Scanned: No (on mail.bitfolk.com); SAEximRunCond expanded to false

Hi Rob,

On Sat, Jun 12, 2021 at 05:47:49PM -0500, Rob Townley wrote:
> mdadm.conf has email reporting capabilities to alert to failing drives.
> Test that you receive emails.

I do receive those emails when such things occur, but the drives
are not failing.

Devices are not kicked out of the MD arrays; all IO just stalls
completely. These incidents also coincide with an upgrade of the OS
and hypervisor, and have happened on 5 different servers so far, so
it is highly unlikely that so many devices suddenly went bad.

> Use mdadm to run tests on the raid.

Weekly scrubs take place using /usr/share/mdadm/checkarray.

> smartctl -a /dev/

Yep, SMART health checks and self-testing are enabled.
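
For reference, the routine checks above amount to something like the
following sketch (the device names /dev/sda and md0 are placeholders,
not the actual layout here):

```shell
#!/bin/sh
# Illustrative health checks; /dev/sda and md0 are placeholder names,
# not the real devices on these servers.
DEV=/dev/sda
MD=md0

# Overall SMART health verdict, if smartmontools is installed.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -H "$DEV"
fi

# Kick off an md scrub by hand; the packaged checkarray script does
# essentially this on a weekly schedule.
if [ -w "/sys/block/$MD/md/sync_action" ]; then
    echo check > "/sys/block/$MD/md/sync_action"
    cat "/sys/block/$MD/md/sync_action"
fi
```

Neither check would catch the stall described here, since no device is
reporting errors; they only rule failing drives in or out.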

I've now put two test servers on linux-image-amd64/buster-backports
and any time any of the production servers experiences the issue I
will boot it into that kernel next time.

Cheers,
Andy


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 00:26:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 00:26:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140885.260322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsDxM-00043D-PE; Sun, 13 Jun 2021 00:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140885.260322; Sun, 13 Jun 2021 00:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsDxM-000436-MJ; Sun, 13 Jun 2021 00:26:28 +0000
Received: by outflank-mailman (input) for mailman id 140885;
 Sun, 13 Jun 2021 00:26:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsDxM-00042w-1e; Sun, 13 Jun 2021 00:26:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsDxL-0000BD-Tl; Sun, 13 Jun 2021 00:26:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsDxL-0007y2-Kz; Sun, 13 Jun 2021 00:26:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsDxL-00047J-KX; Sun, 13 Jun 2021 00:26:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EJXFEoCAoAxnB4Goc9NuX/NaqrHqLj7oohQNRytNQ2g=; b=ySk0ylT5uVpOdJ+9GbM5neZdvW
	Yoebz2pSlqxDziB6sZPsMzSNHWZ/GbMYmOrOLNiYeAijH7MFp7mRh/SOdRz5btB2/EI7eqCpeK/Je
	9UPrrk8+3xIv4o9JZg2oS4fi01FwUBMMba3dzQpwwVrGgp1GOb2Z2QpXus9//cWUbzeM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162702-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162702: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 00:26:27 +0000

flight 162702 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162702/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    8 days
Failing since        162368  2021-06-04 15:42:59 Z    8 days   11 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 01:43:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 01:43:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140926.260436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsF9N-0000Tj-Ek; Sun, 13 Jun 2021 01:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140926.260436; Sun, 13 Jun 2021 01:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsF9N-0000TH-7G; Sun, 13 Jun 2021 01:42:57 +0000
Received: by outflank-mailman (input) for mailman id 140926;
 Sun, 13 Jun 2021 01:42:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsF9L-0000T7-D2; Sun, 13 Jun 2021 01:42:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsF9L-0006jc-0C; Sun, 13 Jun 2021 01:42:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsF9K-0003Cu-Lw; Sun, 13 Jun 2021 01:42:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsF9K-0002Hu-LV; Sun, 13 Jun 2021 01:42:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Xavc19WV5uzflLgU6La9EBmszyh0O7GLFjoQzEYWJ38=; b=WC4+4ItcKvYZ+Ew3c42gyeSmm6
	1OAhd2EQGEANMENoM0P/C7MDZxyWQufD07HaZLQB/hCJV8Vas4Pod+vDqD8hM4bXAbIzcgvstoPzb
	vHup5RYrXAB5SEABbFXIcyqWFv0KJPdom26iHG0MwEZrSQx1sm6NLN8LO4CD/pvUAKiI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162696-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162696: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 01:42:54 +0000

flight 162696 xen-unstable real [real]
flight 162721 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162696/
http://logs.test-lab.xenproject.org/osstest/logs/162721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    4 days
Failing since        162556  2021-06-08 22:39:08 Z    4 days    5 attempts
Testing same since   162696  2021-06-12 11:59:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 724 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 03:39:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 03:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140939.260456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsGxc-0002Uw-94; Sun, 13 Jun 2021 03:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140939.260456; Sun, 13 Jun 2021 03:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsGxc-0002Up-5Y; Sun, 13 Jun 2021 03:38:56 +0000
Received: by outflank-mailman (input) for mailman id 140939;
 Sun, 13 Jun 2021 03:38:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGxa-0002Uf-BM; Sun, 13 Jun 2021 03:38:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGxa-0000dk-4y; Sun, 13 Jun 2021 03:38:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGxZ-00083M-Sc; Sun, 13 Jun 2021 03:38:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGxZ-0000BJ-S1; Sun, 13 Jun 2021 03:38:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+4Bm34B/x8sdGvKNzKreIp5QDdF0wfydmEJRPiRzVsk=; b=FL90SKQF7HI0HKN43WRVYL/o4j
	lo7idhxsgYL/GtupEuZNbw1MruuMw3M8pLpUGRWhOF6I1YsjndeHaivEm3NoJ0pblzHcs2hrJC7i9
	Jsc8IZ/RI1La2OdumLGVEux/ffHXlWAuPTFukU+/NaJOhHaWb0dOWNkbrZ1xyGBN7q5Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162700-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162700: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start.2:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ad347abe4a9876b1f65f408ab467137e88f77eb4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 03:38:53 +0000

flight 162700 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162700/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      23 guest-start.2            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ad347abe4a9876b1f65f408ab467137e88f77eb4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  316 days
Failing since        152366  2020-08-01 20:49:34 Z  315 days  537 attempts
Testing same since   162700  2021-06-12 14:13:47 Z    0 days    1 attempts

------------------------------------------------------------
6159 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1677280 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 03:40:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 03:40:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140945.260469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsGyf-00035M-M2; Sun, 13 Jun 2021 03:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140945.260469; Sun, 13 Jun 2021 03:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsGyf-00035F-J0; Sun, 13 Jun 2021 03:40:01 +0000
Received: by outflank-mailman (input) for mailman id 140945;
 Sun, 13 Jun 2021 03:40:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGye-000351-Sx; Sun, 13 Jun 2021 03:40:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGye-0000eY-PM; Sun, 13 Jun 2021 03:40:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGye-00086h-Jq; Sun, 13 Jun 2021 03:40:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsGye-0002TG-JL; Sun, 13 Jun 2021 03:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DhIN3otFLVFVK02No73Cvz14wTeTcU2r45Qu4hu4gAc=; b=vmUCIL1ELbJ6la9LKURUNbfMaL
	ps7zqtaFHSA9+ME5WpY0tNAkrlTF4LBRCT5NLogn1misZn2Qo7U4RSnE2Xezq9UizeoF7nl9uNGrO
	BsBAFO9yrhLTYH/Ha5KxEcd+IWMTVJmxUDQa3lP20PBthoUTASYTNEW9zrFte55PCzhk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162722-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162722: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 03:40:00 +0000

flight 162722 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162722/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    8 days
Failing since        162368  2021-06-04 15:42:59 Z    8 days   12 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    3 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 06:27:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 06:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140960.260496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsJak-0001mn-US; Sun, 13 Jun 2021 06:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140960.260496; Sun, 13 Jun 2021 06:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsJak-0001mg-Qk; Sun, 13 Jun 2021 06:27:30 +0000
Received: by outflank-mailman (input) for mailman id 140960;
 Sun, 13 Jun 2021 06:27:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsJaj-0001mW-Hl; Sun, 13 Jun 2021 06:27:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsJaj-0003wi-9y; Sun, 13 Jun 2021 06:27:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsJai-0007Sa-UK; Sun, 13 Jun 2021 06:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsJai-00037d-Tp; Sun, 13 Jun 2021 06:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2iuuauoudAcgEtEIVV1UJ/gTxmKntXWXodMyA2r9Id8=; b=HULsNDqCRM3vP6+o157OYOl7JI
	zPT5ein69lTVzVmZ3tAaOqytPFY4yVygngQp3oPKy49uVi6dW+m4eqxsp8Ja7EDmfKw5jY8/HQ67F
	K5cRKuQzxINWFQAHlnNRLLcvcwn0oPyVwI+N9yTJGkAxMtD3DtnoC2apmF9vR40tScsQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162712-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162712: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 06:27:28 +0000

flight 162712 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162712/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  296 days
Failing since        152659  2020-08-21 14:07:39 Z  295 days  546 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    1 days    3 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 07:40:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 07:40:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140968.260516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsKiq-0008RE-O0; Sun, 13 Jun 2021 07:39:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140968.260516; Sun, 13 Jun 2021 07:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsKiq-0008R7-L0; Sun, 13 Jun 2021 07:39:56 +0000
Received: by outflank-mailman (input) for mailman id 140968;
 Sun, 13 Jun 2021 06:29:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ejil=LH=gmail.com=salvatore.bonaccorso@srs-us1.protection.inumbo.net>)
 id 1lsJd7-0002VB-QW
 for xen-devel@lists.xenproject.org; Sun, 13 Jun 2021 06:29:57 +0000
Received: from mail-ej1-x629.google.com (unknown [2a00:1450:4864:20::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab54bf0b-ff02-435f-871e-54a6d4bc37bf;
 Sun, 13 Jun 2021 06:29:56 +0000 (UTC)
Received: by mail-ej1-x629.google.com with SMTP id my49so10991383ejc.7
 for <xen-devel@lists.xenproject.org>; Sat, 12 Jun 2021 23:29:56 -0700 (PDT)
Received: from eldamar (80-218-24-251.dclient.hispeed.ch. [80.218.24.251])
 by smtp.gmail.com with ESMTPSA id bh2sm4076028ejb.80.2021.06.12.23.29.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 12 Jun 2021 23:29:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: ab54bf0b-ff02-435f-871e-54a6d4bc37bf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=nownNaLyh+w2Hh2hW4XMdkpyjXZowhZ5hjX6G051lLE=;
        b=lwzDq4P822FD7x7i1tDBpyVYCy/giWAHWZxhe3KDt4Og2Ld+UYFPvzQJYbufImNACE
         MbeMajFChJylYVz2YzhzmJmMZsH5yMaNgMYmNVLfDBsNtKkRtzGaKNp1kU4hqNETa6LK
         mice6EaId4gnYzYXacc5bAec3HAT03HlN39y0nQw4YxL5/wBc4t/OeKZft/m9BK7d4At
         8TRpiRrAUWyFoMYP83wNu19oga0SgH6TzsC0k0cFD0fFbbsZpbZb9gRAUYo+JQNj1W4R
         zd9vPqZ3/POWxNWhTSJr5vQnXr12VTGp3B06uaGX+t+mYUHgZP3IRXNGr+poq8D6bTtS
         UU+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition
         :content-transfer-encoding:in-reply-to;
        bh=nownNaLyh+w2Hh2hW4XMdkpyjXZowhZ5hjX6G051lLE=;
        b=CQERRXxCG7R6Dk+9/qbQ39NSwowIVL9ZXA5sYU3g53/2BAqwxt7tb6Ls5RMi2XME17
         sJXXjE2I6xmQ0pac7zO6C2P7hL0QDZGWz55jQPXim7g4liW0/o1FM3R85hwFuNPHv4g9
         jtYq9vzW+xL0gE9zhsEmpu0FejGo2Naa0HYkryhDl3J6Y6k7Jcqk4eqbE4wn7ZpY5lZZ
         gyIYCnOg5xEUJ5lKSNv1uv2qE+br6tilVZ9pJL5T4zatPwBnqz2czh9DqDCNWqAGByp+
         QzqUm8Bf2pZYBkk8MvJOvxlprEdjBFecr9Y2x8lP8EwdF6ypobmASpCpt8g2F6mQ4FE8
         uESQ==
X-Gm-Message-State: AOAM531+UdCgxeD1Nocrzi5lWBoaFU16NIepjEkOv8CXJQ7oZsB6PkQU
	VmmT/KR1cb5yXOh0F31mYPI=
X-Google-Smtp-Source: ABdhPJyCZLvVBZwnmjVV9bW3wGTKBQLXrnDE4vdwzq3ihX/PNWK29zGPP6lAzW4+QKO3OLlfuXpc4Q==
X-Received: by 2002:a17:906:4e91:: with SMTP id v17mr10641608eju.119.1623565795327;
        Sat, 12 Jun 2021 23:29:55 -0700 (PDT)
Sender: Salvatore Bonaccorso <salvatore.bonaccorso@gmail.com>
Date: Sun, 13 Jun 2021 08:29:53 +0200
From: Salvatore Bonaccorso <carnil@debian.org>
To: =?utf-8?B?5bCP5aSq?= <nospam@kota.moe>,
	Jianxiong Gao <jxgao@google.com>, Christoph Hellwig <hch@lst.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Robin Murphy <robin.murphy@arm.com>, xen-devel@lists.xenproject.org
Cc: 989778-maintonly@bugs.debian.org
Subject: Regression in at least 5.10.y and mainline: Firewire audio interface
 fails to work properly (when booted under Xen)
Message-ID: <YMWl4UnFBAVRDnys@eldamar.lan>
References: <162352833546.2353.230557992597997974.reportbug@home.kota.moe>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <162352833546.2353.230557992597997974.reportbug@home.kota.moe>

Hi,

On Sun, Jun 13, 2021 at 06:05:37AM +1000, 小太 wrote:
> Package: src:linux
> Version: 5.10.40-1
> Severity: normal
> Tags: upstream
> X-Debbugs-Cc: nospam@kota.moe
> 
> After updating from linux-image-5.10.0-6-amd64, jackd now fails to sync to my
> DICE-compatible firewire audio interface (Profire 610), with the following
> error messages (full log attached):
> 
> > $ jackd -d firewire -v
> > jackdmp 1.9.12
> > ...snip...
> > 00301056761: Warning (StreamProcessorManager.cpp)[ 913] alignReceivedStreams:
> xrun while aligning streams...
> > 00301056793: Error (StreamProcessorManager.cpp)[ 877] syncStartAll: Could not
> align streams...
> > 00301056829: Fatal (StreamProcessorManager.cpp)[1025] start: Could not
> syncStartAll...
> > 00301400626: Warning (TimestampedBuffer.cpp)[ 248] calculateRate: (0x1fa5a20)
> rate ( 708.18713) more that 10% off nominal (rate= 512.00000, diff=
> 5665.497, update_period=8)
> > 00301416642: Warning (TimestampedBuffer.cpp)[ 248] calculateRate: (0x1fa5a20)
> rate ( 686.49011) more that 10% off nominal (rate= 512.00000, diff=
> 5491.921, update_period=8)
> > 00301416925: Warning (devicemanager.cpp)[ 925] startStreaming: Failed to
> start SPM!
> > firewire ERR: Could not start streaming threads
> > Cannot start driver
> > JackServer::Start() failed with -1
> > 00301424329: Warning (ieee1394service.cpp)[1509] freeIsoChannel:  Channel 1
> not registered
> > 00301424360: Error (dice_avdevice.cpp)[1440] startstopStreamByIndex: Could
> not deallocate iso channel for SP 1 (ARX 0)
> > 00301424397: Warning (devicemanager.cpp)[ 959] stopStreamingOnDevice: Could
> not stop stream 1 of device 0x1f6e600
> > 00301424406: Warning (devicemanager.cpp)[ 931] startStreaming: Could not stop
> streaming on device 0x1f6e600!
> > 00301424429: Fatal (ffado.cpp)[ 220] ffado_streaming_start: Could not start
> the streaming system
> > Failed to start server
> > no message buffer overruns
> 
> Additionally, I also tried using the snd-dice driver to expose the audio
> interface directly in ALSA. While the interface did appear and was usable
> there, all inputs came out of my speakers highly distorted, with channels
> bleeding into each other - practically unusable.
> 
> I've reproduced the issue on upstream kernel version v5.13-rc5+
> (ad347abe4a9876b1f65f408ab467137e88f77eb4), and bisected the first bad commit
> down to 85a5a6875ca93dc4efbf20df942ba41d27a917e3.
> 
> To double check commit 85a5a6875ca93dc4efbf20df942ba41d27a917e3 was indeed the
> issue, I built the latest v5.10 kernel v5.10.43 with the commit reverted, and
> indeed the issue went away.
> Unfortunately, the reverse patch would not apply to v5.13-rc5+, since it seems
> like the file has changed too much.

A user in Debian reported the above issue, which was reproducible with
5.13-rc5 and with 5.10.y as packaged in Debian, and bisected it to
85a5a6875ca9 ("swiotlb: don't modify orig_addr in
swiotlb_tbl_sync_single"), which introduced the issue.

The full bug log is at https://bugs.debian.org/989778

I'm CC'ing the xen-devel list as well, since it appears from
https://bugs.debian.org/989778#10 that the issue is only exposed when
booting under Xen.

Any ideas?

Regards,
Salvatore


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 07:51:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 07:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140979.260527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsKtq-0002Am-QP; Sun, 13 Jun 2021 07:51:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140979.260527; Sun, 13 Jun 2021 07:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsKtq-0002Af-MH; Sun, 13 Jun 2021 07:51:18 +0000
Received: by outflank-mailman (input) for mailman id 140979;
 Sun, 13 Jun 2021 07:51:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsKtp-0002AV-Gp; Sun, 13 Jun 2021 07:51:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsKto-0005Jq-IC; Sun, 13 Jun 2021 07:51:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsKto-0003ph-9D; Sun, 13 Jun 2021 07:51:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsKto-0001wI-8i; Sun, 13 Jun 2021 07:51:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1SdWFx91orQwF/u8XwFmFRD0wtfwoXNe08uxVog2iGg=; b=SfZ56xuEfFxa4R/y5dn3Kwp3zT
	sMkUCYM8oVO6rNvfhdlFDwBTFAMCE/7mowN/bIsz8TeaXTSe2lFI7tBHnhIE0IYSlT8h95VtY4a00
	0hE94W+n4y2PTxjJ+ZqMCgE1eAW4R7hPGMqgYWE52uYsoKIAHt20C1Z7VHlKyb88X8h8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162760-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162760: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=55ea45acc99c549c7757efe954aacc33ad30a8ef
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 07:51:16 +0000

flight 162760 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162760/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              55ea45acc99c549c7757efe954aacc33ad30a8ef
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  338 days
Failing since        151818  2020-07-11 04:18:52 Z  337 days  330 attempts
Testing same since   162681  2021-06-12 04:18:50 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61229 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 10:02:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 10:02:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.140991.260547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsMwu-0005xh-Dq; Sun, 13 Jun 2021 10:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 140991.260547; Sun, 13 Jun 2021 10:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsMwu-0005xa-Aq; Sun, 13 Jun 2021 10:02:36 +0000
Received: by outflank-mailman (input) for mailman id 140991;
 Sun, 13 Jun 2021 10:02:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsMws-0005xP-Hn; Sun, 13 Jun 2021 10:02:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsMws-000880-77; Sun, 13 Jun 2021 10:02:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsMwr-0003YW-Vs; Sun, 13 Jun 2021 10:02:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsMwr-0006ar-VO; Sun, 13 Jun 2021 10:02:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QC5DrSitIhGXC/EWuoBRNXw1A7BhZaVYMQFuX+DwinQ=; b=pHsosaqnjYvf94I7XtOrfGrpx4
	IfC/RBu0/aEhwMSltX/4NlfqYMOiWZZ2pLo5Ao0autkhCeRewfjYdOKtPueC4G0BsMoKMEpQMGJKR
	SL01d1TZZbTY5XexQs0gsma3PCCKynLgcoaWNSlI54lDA1yfsXOnlzty2F/9dSnXldZo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162765-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162765: all pass - PUSHED
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=e4fee66043120c954fc309bbb37813604c1c0eb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 10:02:33 +0000

flight 162765 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162765/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  e4fee66043120c954fc309bbb37813604c1c0eb7

Last test of basis   162566  2021-06-09 09:19:38 Z    4 days
Testing same since   162765  2021-06-13 09:18:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Connor Davis <connojdavis@gmail.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e4fee66043..93031fbe9f  93031fbe9f4c341a2e7950a088025ea550291433 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 11:45:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 11:45:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141002.260567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsOYh-0006q2-5D; Sun, 13 Jun 2021 11:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141002.260567; Sun, 13 Jun 2021 11:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsOYh-0006pv-1Q; Sun, 13 Jun 2021 11:45:43 +0000
Received: by outflank-mailman (input) for mailman id 141002;
 Sun, 13 Jun 2021 11:45:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsOYf-0006pl-IA; Sun, 13 Jun 2021 11:45:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsOYf-00027y-BW; Sun, 13 Jun 2021 11:45:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsOYf-0007A7-2p; Sun, 13 Jun 2021 11:45:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsOYf-0005xj-2L; Sun, 13 Jun 2021 11:45:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hoRmXDLe1Ma2eExHKdXXUbjbLz1wetmYJ9T5DRvtlLE=; b=oiUpzVvCHQ6xMpXPKAIveJKTLY
	l2JP7BZQreABX2VHCFLFfYW5mj6vv4q7b5J+LmGj8A1qjU2NA+FhaFzShQgXmBNH8fZOGDhj7+zYF
	8vQs1/zUSI+myeBNhXcZiT77rZUMMdkc92VyLxz1k+xTclpWZx1RYviDDWbT/WyRjaqQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162758-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162758: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 11:45:41 +0000

flight 162758 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162758/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    9 days
Failing since        162368  2021-06-04 15:42:59 Z    8 days   13 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    3 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 13:06:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 13:06:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141013.260587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsPoZ-0005um-Ai; Sun, 13 Jun 2021 13:06:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141013.260587; Sun, 13 Jun 2021 13:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsPoZ-0005uf-7N; Sun, 13 Jun 2021 13:06:11 +0000
Received: by outflank-mailman (input) for mailman id 141013;
 Sun, 13 Jun 2021 13:06:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsPoY-0005uV-33; Sun, 13 Jun 2021 13:06:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsPoX-0003T5-RT; Sun, 13 Jun 2021 13:06:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsPoX-0002E1-G3; Sun, 13 Jun 2021 13:06:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsPoX-0001dU-FX; Sun, 13 Jun 2021 13:06:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sFfrDo0sh9ZnltyGGbjhG2JPn0G01hU8uUdTDOW0Bfs=; b=sYxr7TvUUxfGLe64totObqzJfN
	RG3jGwp1J3sbRygxmhwn47gIFroqY74cDSDs7dU1RTKxbaACHyIIZUI+CrossiAHDi2QCspvDUd97
	rAwRlgCS2oVjtXZAIJ9OH7Z7bWVlfVdFMLIWWil59asXSaWmFJjoyvnSsuytNCqewsGo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162755-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162755: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 13:06:09 +0000

flight 162755 xen-unstable real [real]
flight 162767 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162755/
http://logs.test-lab.xenproject.org/osstest/logs/162767/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    5 days
Failing since        162556  2021-06-08 22:39:08 Z    4 days    6 attempts
Testing same since   162696  2021-06-12 11:59:49 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 724 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 16:01:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 16:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141030.260618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsSYD-0005MT-AB; Sun, 13 Jun 2021 16:01:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141030.260618; Sun, 13 Jun 2021 16:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsSYD-0005MM-7J; Sun, 13 Jun 2021 16:01:29 +0000
Received: by outflank-mailman (input) for mailman id 141030;
 Sun, 13 Jun 2021 16:01:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsSYC-0005MC-C0; Sun, 13 Jun 2021 16:01:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsSYC-0006t1-2j; Sun, 13 Jun 2021 16:01:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsSYB-0001jR-PJ; Sun, 13 Jun 2021 16:01:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsSYB-0001oM-Or; Sun, 13 Jun 2021 16:01:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PBfaGNiSZuTQWPG1mefkqT+TXErDSXFF2om9RRCqIRU=; b=vX0T+FEM7WlSyB7o2RSXOyxmSG
	imyhafarfd7st/NiOiC+NezCafpDzCtITCVuTE60km3Qxh67qsL2YjzD1u0Cbi+ap0PVqIRQpQiLk
	EnpAfGbfacaf6mxLYqQK7ZxCuA58dBWqET1X6I4zd/zAky73CzWs0U80aKoiHqHP9vUA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162769-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162769: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 16:01:27 +0000

flight 162769 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162769/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    9 days
Failing since        162368  2021-06-04 15:42:59 Z    9 days   14 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    3 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 17:54:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 17:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141045.260638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsUJM-00076v-Cm; Sun, 13 Jun 2021 17:54:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141045.260638; Sun, 13 Jun 2021 17:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsUJM-00076o-9V; Sun, 13 Jun 2021 17:54:16 +0000
Received: by outflank-mailman (input) for mailman id 141045;
 Sun, 13 Jun 2021 17:54:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsUJK-00076e-W1; Sun, 13 Jun 2021 17:54:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsUJK-0000FV-PR; Sun, 13 Jun 2021 17:54:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsUJK-0006ux-Du; Sun, 13 Jun 2021 17:54:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsUJK-0000O4-DP; Sun, 13 Jun 2021 17:54:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P+ULEqji9jnQKS76xlbYGsnOSO2LF6pzlJk62Yr2EzI=; b=hscYZx0o/dXCC5JSUFjLSZndNo
	EBYum8dEWGqDklua2eagqmaQVJT7QGMqFqHol3f06Wylq0zwt7xgNWpjdyph6O1SV/Y5FYueGx9+o
	R/krBiQmX04GPsKn7htTEEUOsQIwhlxd5wHYMDYZcWnMWv2IVkV8X7Yj7/j9QzcUuqe8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162757-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162757: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8ecfa36cd4db3275bf3b6c6f32c7e3c6bb537de2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 17:54:14 +0000

flight 162757 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162757/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8ecfa36cd4db3275bf3b6c6f32c7e3c6bb537de2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  316 days
Failing since        152366  2020-08-01 20:49:34 Z  315 days  538 attempts
Testing same since   162757  2021-06-13 03:43:17 Z    0 days    1 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680028 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 19:47:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 19:47:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141059.260659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsW4G-0000Hn-Rf; Sun, 13 Jun 2021 19:46:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141059.260659; Sun, 13 Jun 2021 19:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsW4G-0000HY-OY; Sun, 13 Jun 2021 19:46:48 +0000
Received: by outflank-mailman (input) for mailman id 141059;
 Sun, 13 Jun 2021 19:46:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsW4F-0000HN-Ke; Sun, 13 Jun 2021 19:46:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsW4F-0002AL-Eg; Sun, 13 Jun 2021 19:46:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsW4F-0004BM-5w; Sun, 13 Jun 2021 19:46:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsW4F-00049J-5Q; Sun, 13 Jun 2021 19:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zbf0Vng3F00OCDV+k/angvypaHauV6XQk7A7Uo/v9Y0=; b=yMBJtdlzodZPd61Sn+saQIJvF4
	rWcVhmC0OPSuMHIDL0zdapHGmN7zCJ26ZnltzIVWCco0Wb2zKnd1/hrO9TjLq8Et2WUF64A+umi8r
	13uxKpjBz6FZTN+eHcuudUT7StI4zVLeTcusWYp5w9rswe681TTw5g2OIgfLdHmpmFoA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162762-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162762: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 19:46:47 +0000

flight 162762 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  297 days
Failing since        152659  2020-08-21 14:07:39 Z  296 days  547 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    2 days    4 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 13 22:06:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Jun 2021 22:06:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141073.260682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsYFW-0004Rk-0F; Sun, 13 Jun 2021 22:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141073.260682; Sun, 13 Jun 2021 22:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsYFV-0004Rd-Se; Sun, 13 Jun 2021 22:06:33 +0000
Received: by outflank-mailman (input) for mailman id 141073;
 Sun, 13 Jun 2021 22:06:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsYFV-0004RT-1U; Sun, 13 Jun 2021 22:06:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsYFU-0004Zc-Ri; Sun, 13 Jun 2021 22:06:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsYFU-0004cr-Hg; Sun, 13 Jun 2021 22:06:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsYFU-00030s-HB; Sun, 13 Jun 2021 22:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VpuXYxOnEe10lntmvRQlMe2kOfYAJiwpqXFZYwskL7Y=; b=iCmt7uHQLprSiT0bKKOq70bWfK
	0j1PSx/HFhU6Hyer3Z2g3jWc3pq/7BorcfhylpJAtwNamWA5tJv/3eJWmKuKlIQL1quUADRWW9/TJ
	qztGunyxy4BfIyZkvazQnBULU67hL3f3RytfDezhLeZgmjJhwv9+KVjKqNkOqEMknGik=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162774-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162774: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Jun 2021 22:06:32 +0000

flight 162774 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162774/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    9 days
Failing since        162368  2021-06-04 15:42:59 Z    9 days   15 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    3 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 02:22:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 02:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141100.260741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lscEH-0008Jk-Bw; Mon, 14 Jun 2021 02:21:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141100.260741; Mon, 14 Jun 2021 02:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lscEH-0008Jc-6g; Mon, 14 Jun 2021 02:21:33 +0000
Received: by outflank-mailman (input) for mailman id 141100;
 Mon, 14 Jun 2021 02:21:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscEG-0008JC-KW; Mon, 14 Jun 2021 02:21:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscEG-0006eu-DQ; Mon, 14 Jun 2021 02:21:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscEG-0006PR-6h; Mon, 14 Jun 2021 02:21:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lscEG-0006zw-68; Mon, 14 Jun 2021 02:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sk8ullAI4SnNURpPPtn9VAvAm4uSvDaP9kF2VPWLgc8=; b=SJElUyqiUq3lxwE4iFK0O7GbaA
	seI+4e11GD87EvBTQ0BQHj7raNB9h0cQNa3ttkLRaIXaLrO7TOMPECN7bkab2l0XOpHSKk9Z0lwCb
	NFZrFVmfSxbV17azkBhZ8PzyhaJB3slm/Qlfnjo8J1fjRhncmbWvf69HDR1MOlapdITQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162771-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162771: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 02:21:32 +0000

flight 162771 xen-unstable real [real]
flight 162783 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162771/
http://logs.test-lab.xenproject.org/osstest/logs/162783/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    6 days
Failing since        162556  2021-06-08 22:39:08 Z    5 days    7 attempts
Testing same since   162696  2021-06-12 11:59:49 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 724 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 02:44:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 02:44:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141111.260761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lscZt-0002HX-9r; Mon, 14 Jun 2021 02:43:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141111.260761; Mon, 14 Jun 2021 02:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lscZt-0002HQ-6k; Mon, 14 Jun 2021 02:43:53 +0000
Received: by outflank-mailman (input) for mailman id 141111;
 Mon, 14 Jun 2021 02:43:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscZs-0002HG-5S; Mon, 14 Jun 2021 02:43:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscZs-00070W-0D; Mon, 14 Jun 2021 02:43:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lscZr-00070v-Pe; Mon, 14 Jun 2021 02:43:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lscZr-00009W-P7; Mon, 14 Jun 2021 02:43:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rXrXJCU8j2jf7nqIL/KiZYutaP85bWXOXThpvafQexQ=; b=YagK8QuILlB2Juctjcv+cFsX1I
	duUvO5nwKknE4oGQ3rAGFD5Ng2mTqYVKRwteWeOZS0QhCt+vrQ8EMss3dGlWIkDQLHkap2OJmaQz+
	dbO/AFpJe7UeHMXTyU/VPbFJcNgemTX7oCr97Jhxh73G+IrRRofkPLxZ45zi5LLdv4+8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162781-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162781: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 02:43:51 +0000

flight 162781 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162781/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z    9 days
Failing since        162368  2021-06-04 15:42:59 Z    9 days   16 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    4 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 03:56:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 03:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141119.260775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsdhd-0000p6-EN; Mon, 14 Jun 2021 03:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141119.260775; Mon, 14 Jun 2021 03:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsdhd-0000oz-AQ; Mon, 14 Jun 2021 03:55:57 +0000
Received: by outflank-mailman (input) for mailman id 141119;
 Mon, 14 Jun 2021 03:55:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsdhc-0000oo-HW; Mon, 14 Jun 2021 03:55:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsdhc-0000Ck-Ag; Mon, 14 Jun 2021 03:55:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsdhb-0002dM-Ko; Mon, 14 Jun 2021 03:55:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsdhb-0000ao-KG; Mon, 14 Jun 2021 03:55:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eYsxb55CapChp6p/ySAYs6JqSUFcfgp6eeDkYD956eI=; b=i93z6Qr6aYf6HvCrkNy/JxVyyL
	pEpIeFfetHp69iX8DTC49Uxr0k3NfESqYEnCe+NRf+/x0kYO3FN4rFHA5uiMDW9ggP97HChOyU3Zj
	9ouNJdVfKB1zv3K8+dc6iF5oC1LpC2k4DSiDuzra/RT3cODW22wKs6PoX3suQisw0Fg4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162776-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162776: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8ecfa36cd4db3275bf3b6c6f32c7e3c6bb537de2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 03:55:55 +0000

flight 162776 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162776/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 162757 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl          13 debian-fixup     fail in 162757 pass in 162776
 test-arm64-arm64-xl-xsm      13 debian-fixup     fail in 162757 pass in 162776
 test-arm64-arm64-libvirt-xsm 13 debian-fixup     fail in 162757 pass in 162776
 test-amd64-amd64-examine    4 memdisk-try-append fail in 162757 pass in 162776
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10    fail pass in 162757
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail pass in 162757

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 162757 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8ecfa36cd4db3275bf3b6c6f32c7e3c6bb537de2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  317 days
Failing since        152366  2020-08-01 20:49:34 Z  316 days  539 attempts
Testing same since   162757  2021-06-13 03:43:17 Z    1 days    2 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680028 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:11:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:11:12 +0000
Subject: Re: [OSSTEST PATCH] ts-xen-build: Turn on CONFIG_PV32 again
To: Ian Jackson <iwj@xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20210611170230.20195-1-iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f45a45c0-b847-d6f8-d613-f2111c390929@suse.com>
Date: Mon, 14 Jun 2021 08:10:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210611170230.20195-1-iwj@xenproject.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 11.06.2021 19:02, Ian Jackson wrote:
> CC: George Dunlap <george.dunlap@citrix.com>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Ian Jackson <iwj@xenproject.org>

FWIW:
Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:16:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:16:56 +0000
Date: Mon, 14 Jun 2021 08:16:44 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
Message-ID: <20210614061644.GA28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-2-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-2-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +
> +	if (memory_decrypted)
> +		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> +	memset(vaddr, 0, bytes);

We don't really need to do this call before the memset, which means we
can just move it to the callers that care instead of having a bool
argument.
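The suggested refactor might look roughly like the sketch below. The kernel APIs are replaced by userspace stubs so the control flow can be exercised; the caller names (`early_init_caller`, `late_init_caller`) are illustrative, not taken from the actual series.

```c
/* Sketch only: stubs stand in for kernel APIs (set_memory_decrypted,
 * PAGE_SHIFT); the point is that the bool argument goes away and the
 * decryption call moves to the one caller that needs it. */
#include <assert.h>
#include <string.h>

static int decrypt_calls;

/* stub for the kernel's set_memory_decrypted() */
static void set_memory_decrypted(void *vaddr, unsigned long npages)
{
	(void)vaddr; (void)npages;
	decrypt_calls++;
}

/* After the refactor the init helper takes no bool: it only fills in
 * the slot metadata (elided here) and zeroes the buffer. */
static void swiotlb_init_io_tlb_mem(void *vaddr, unsigned long bytes)
{
	memset(vaddr, 0, bytes);
}

/* A caller that actually needs decryption does the call itself; since
 * the memset does not depend on it, the order no longer matters. */
static void late_init_caller(void *vaddr, unsigned long bytes)
{
	set_memory_decrypted(vaddr, bytes >> 12 /* stand-in PAGE_SHIFT */);
	swiotlb_init_io_tlb_mem(vaddr, bytes);
}

/* A caller that does not need it simply skips the call. */
static void early_init_caller(void *vaddr, unsigned long bytes)
{
	swiotlb_init_io_tlb_mem(vaddr, bytes);
}
```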

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:17:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:17:41 +0000
Date: Mon, 14 Jun 2021 08:17:36 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs
Message-ID: <20210614061736.GB28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-3-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-3-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:47PM +0800, Claire Chang wrote:
> Split the debugfs creation to make the code reusable for supporting
> different bounce buffer pools, e.g. restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 23 ++++++++++++++++-------
>  1 file changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 1a1208c81e85..8a3e2b3b246d 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -64,6 +64,9 @@
>  enum swiotlb_force swiotlb_force;
>  
>  struct io_tlb_mem *io_tlb_default_mem;
> +#ifdef CONFIG_DEBUG_FS
> +static struct dentry *debugfs_dir;
> +#endif

What about moving this declaration into the main CONFIG_DEBUG_FS block
near the functions using it?
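The layout being suggested might look roughly as follows: one `#ifdef CONFIG_DEBUG_FS` block holding both the `debugfs_dir` variable and the functions that use it, rather than a separate `#ifdef` near the top of the file. Everything below is a userspace stand-in (the `dentry` struct and creation function are faked), not the actual kernel code.

```c
/* Sketch: keep the static debugfs_dir declaration inside the single
 * CONFIG_DEBUG_FS block, next to its only users. */
#include <assert.h>
#include <stddef.h>

#define CONFIG_DEBUG_FS 1	/* pretend the option is enabled */

struct dentry { int dummy; };	/* fake, for the stub below */

#ifdef CONFIG_DEBUG_FS
/* declaration lives next to the functions that use it */
static struct dentry *debugfs_dir;
static struct dentry fake_root;

static void swiotlb_create_debugfs(void)
{
	if (!debugfs_dir)
		debugfs_dir = &fake_root; /* stands in for debugfs_create_dir() */
}
#else
/* with debugfs disabled, neither the variable nor the code exists */
static inline void swiotlb_create_debugfs(void) { }
#endif
```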

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:20:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:20:24 +0000
Date: Mon, 14 Jun 2021 08:20:15 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the
 swiotlb pool used
Message-ID: <20210614062015.GC28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-4-tientzu@chromium.org> <CALiNf2_nzP=qLg5Fqvn3kiaMiaR9r+QJhE3pqypW4FPrgo23DQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2_nzP=qLg5Fqvn3kiaMiaR9r+QJhE3pqypW4FPrgo23DQ@mail.gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:33:15PM +0800, Claire Chang wrote:
> I'm not sure if this would break arch/x86/pci/sta2x11-fixup.c
> swiotlb_late_init_with_default_size is called here
> https://elixir.bootlin.com/linux/v5.13-rc5/source/arch/x86/pci/sta2x11-fixup.c#L60

It will.  It will also break all non-OF devices.  I think you need to
initialize the initial pool in device_initialize, which covers all devices.
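The alternative being suggested might be sketched like this: every device gets pointed at the default swiotlb pool from `device_initialize()`, so non-OF devices (and late-init users like sta2x11) keep working, and a restricted pool only overrides the field afterwards. The structs below are simplified stand-ins for the kernel's `struct device` and `struct io_tlb_mem`, and `set_restricted_pool` is a hypothetical helper.

```c
/* Sketch with simplified types; not the kernel's actual definitions. */
#include <assert.h>
#include <stddef.h>

struct io_tlb_mem { unsigned long nslabs; };

static struct io_tlb_mem default_mem = { .nslabs = 1024 };
static struct io_tlb_mem *io_tlb_default_mem = &default_mem;

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* every device, OF-based or not, starts out on the default pool */
static void device_initialize(struct device *dev)
{
	dev->dma_io_tlb_mem = io_tlb_default_mem;
}

/* a restricted-DMA device overrides the pointer later (hypothetical
 * helper name, standing in for the OF reserved-memory hookup) */
static void set_restricted_pool(struct device *dev, struct io_tlb_mem *mem)
{
	dev->dma_io_tlb_mem = mem;
}
```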


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:21:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:21:24 +0000
Date: Mon, 14 Jun 2021 08:21:18 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 04/14] swiotlb: Add restricted DMA pool
 initialization
Message-ID: <20210614062118.GD28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-5-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-5-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:49PM +0800, Claire Chang wrote:
> Add the initialization function to create restricted DMA pools from
> matching reserved-memory nodes.

Bisection hazard: we should only add the new config option when the
code is actually ready to be used.  So this patch should move to the end
of the series.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:21:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:21:51 +0000
Date: Mon, 14 Jun 2021 08:21:39 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a
 struct device argument
Message-ID: <20210614062139.GE28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-6-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-6-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:24:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:24:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141167.260855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg0v-0006G7-MB; Mon, 14 Jun 2021 06:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141167.260855; Mon, 14 Jun 2021 06:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg0v-0006G0-IJ; Mon, 14 Jun 2021 06:24:01 +0000
Received: by outflank-mailman (input) for mailman id 141167;
 Mon, 14 Jun 2021 06:24:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg0u-0006Fr-Gd
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:24:00 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76eea99a-bf9b-43aa-b784-71a4f117ec86;
 Mon, 14 Jun 2021 06:24:00 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B6EE767373; Mon, 14 Jun 2021 08:23:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76eea99a-bf9b-43aa-b784-71a4f117ec86
Date: Mon, 14 Jun 2021 08:23:55 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a
 struct device argument
Message-ID: <20210614062355.GF28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-7-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-7-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

>  kernel/dma/direct.c                          | 2 +-
>  kernel/dma/swiotlb.c                         | 4 ++--
>  6 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index ce6b664b10aa..89a894354263 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>  
>  	max_order = MAX_ORDER;
>  #ifdef CONFIG_SWIOTLB
> -	if (is_swiotlb_active()) {
> +	if (is_swiotlb_active(obj->base.dev->dev)) {

This is the same device used for DMA mapping in
i915_gem_gtt_prepare_pages, so this looks good.

> index f4c2e46b6fe1..2ca9d9a9e5d5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> @@ -276,7 +276,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>  	}
>  
>  #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active();
> +	need_swiotlb = is_swiotlb_active(dev->dev);
>  #endif

This looks good, too.

> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index b7a8f3a1921f..0d56985bfe81 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
>  
>  	spin_unlock(&pcifront_dev_lock);
>  
> -	if (!err && !is_swiotlb_active()) {
> +	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {

This looks good as well.

So I think the devices are all good.

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:25:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141175.260866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg2R-0006yq-0o; Mon, 14 Jun 2021 06:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141175.260866; Mon, 14 Jun 2021 06:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg2Q-0006yj-U3; Mon, 14 Jun 2021 06:25:34 +0000
Received: by outflank-mailman (input) for mailman id 141175;
 Mon, 14 Jun 2021 06:25:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg2P-0006yZ-V3
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:25:33 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43e40c08-d6f7-4949-ba28-faf31fecdd93;
 Mon, 14 Jun 2021 06:25:32 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id BF02567373; Mon, 14 Jun 2021 08:25:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43e40c08-d6f7-4949-ba28-faf31fecdd93
Date: Mon, 14 Jun 2021 08:25:30 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA
 pool if available
Message-ID: <20210614062530.GG28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-8-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-8-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:52PM +0800, Claire Chang wrote:
> Regardless of swiotlb setting, the restricted DMA pool is preferred if
> available.
> 
> The restricted DMA pools provide a basic level of protection against the
> DMA overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system
> needs to provide a way to lock down the memory access, e.g., MPU.
> 
> Note that is_dev_swiotlb_force doesn't check if
> swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
> with default swiotlb will be changed by the following patch
> ("dma-direct: Allocate memory from restricted DMA pool if available").
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  include/linux/swiotlb.h | 10 +++++++++-
>  kernel/dma/direct.c     |  3 ++-
>  kernel/dma/direct.h     |  3 ++-
>  kernel/dma/swiotlb.c    |  1 +
>  4 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 06cf17a80f5c..8200c100fe10 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_swiotlb: %true if swiotlb is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -95,6 +96,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_swiotlb;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
>  	dev->dma_io_tlb_mem = io_tlb_default_mem;
>  }
>  
> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_swiotlb;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> -static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +static inline bool is_dev_swiotlb_force(struct device *dev)
>  {
> +	return false;
>  }
>  static inline void swiotlb_exit(void)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..078f7087e466 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

I think we can remove the extra swiotlb_force check here if the
swiotlb_force setting is propagated into io_tlb_default_mem->force
when that is initialized, avoiding the extra check in the fast path.

> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
> +	    is_dev_swiotlb_force(dev))

Same here.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:25:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141177.260877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg2e-0007QQ-CB; Mon, 14 Jun 2021 06:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141177.260877; Mon, 14 Jun 2021 06:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg2e-0007QJ-8w; Mon, 14 Jun 2021 06:25:48 +0000
Received: by outflank-mailman (input) for mailman id 141177;
 Mon, 14 Jun 2021 06:25:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg2c-0007OO-Qn
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:25:46 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b3b12ef-4a8a-44d3-b4b1-7d230eb3233f;
 Mon, 14 Jun 2021 06:25:46 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4495467373; Mon, 14 Jun 2021 08:25:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b3b12ef-4a8a-44d3-b4b1-7d230eb3233f
Date: Mon, 14 Jun 2021 08:25:44 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots
Message-ID: <20210614062544.GH28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-9-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-9-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:53PM +0800, Claire Chang wrote:
> Move the maintenance of alloc_size to find_slots for better code
> reusability later.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:26:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141186.260888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg38-0008CA-N9; Mon, 14 Jun 2021 06:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141186.260888; Mon, 14 Jun 2021 06:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg38-0008C3-JC; Mon, 14 Jun 2021 06:26:18 +0000
Received: by outflank-mailman (input) for mailman id 141186;
 Mon, 14 Jun 2021 06:26:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg37-0008BU-4L
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:26:17 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fde41bb-1bd4-475b-add9-be85f15a8019;
 Mon, 14 Jun 2021 06:26:16 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 1F6F967373; Mon, 14 Jun 2021 08:26:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fde41bb-1bd4-475b-add9-be85f15a8019
Date: Mon, 14 Jun 2021 08:26:13 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single
Message-ID: <20210614062613.GI28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-10-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-10-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 11, 2021 at 11:26:54PM +0800, Claire Chang wrote:
> Add a new function, release_slots, to make the code reusable for supporting
> different bounce buffer pools, e.g. restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
>  1 file changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 364c6c822063..a6562573f090 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -554,27 +554,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  	return tlb_addr;
>  }
>  
> -/*
> - * tlb_addr is the physical address of the bounce buffer to unmap.
> - */
> -void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
> -			      size_t mapping_size, enum dma_data_direction dir,
> -			      unsigned long attrs)
> +static void release_slots(struct device *dev, phys_addr_t tlb_addr)
>  {
> -	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned long flags;
> -	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
> +	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
>  	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
>  	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
>  	int count, i;
>  
> -	/*
> -	 * First, sync the memory before unmapping the entry
> -	 */
> -	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> -	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> -		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
> -
>  	/*
>  	 * Return the buffer to the free list by setting the corresponding
>  	 * entries to indicate the number of contiguous entries available.
> @@ -609,6 +597,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>  	spin_unlock_irqrestore(&mem->lock, flags);
>  }
>  
> +/*
> + * tlb_addr is the physical address of the bounce buffer to unmap.
> + */
> +void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
> +			      size_t mapping_size, enum dma_data_direction dir,
> +			      unsigned long attrs)
> +{
> +	/*
> +	 * First, sync the memory before unmapping the entry
> +	 */
> +	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> +	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> +		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
> +
> +	release_slots(dev, tlb_addr);

Can you give this a swiotlb_ prefix?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:28:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141196.260898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg4t-0000Y7-3i; Mon, 14 Jun 2021 06:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141196.260898; Mon, 14 Jun 2021 06:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg4t-0000Y0-0Z; Mon, 14 Jun 2021 06:28:07 +0000
Received: by outflank-mailman (input) for mailman id 141196;
 Mon, 14 Jun 2021 06:28:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg4s-0000Xs-JC
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:28:06 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01f0d984-1e77-48fc-9dbf-4b91536791bd;
 Mon, 14 Jun 2021 06:28:06 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id DE87267373; Mon, 14 Jun 2021 08:28:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01f0d984-1e77-48fc-9dbf-4b91536791bd
Date: Mon, 14 Jun 2021 08:28:01 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free
 support.
Message-ID: <20210614062801.GJ28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-12-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210611152659.2142983-12-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

I think merging this with the next two patches would make the series a
little clearer.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:28:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141198.260909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg59-0000zN-Cs; Mon, 14 Jun 2021 06:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141198.260909; Mon, 14 Jun 2021 06:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsg59-0000zE-9q; Mon, 14 Jun 2021 06:28:23 +0000
Received: by outflank-mailman (input) for mailman id 141198;
 Mon, 14 Jun 2021 06:28:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsg57-0000wh-VN
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:28:21 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d910065-704a-4984-b47a-db12add2a4ca;
 Mon, 14 Jun 2021 06:28:21 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4C6E068AFE; Mon, 14 Jun 2021 08:28:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d910065-704a-4984-b47a-db12add2a4ca
Date: Mon, 14 Jun 2021 08:28:18 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free
 support.
Message-ID: <20210614062818.GK28343@lst.de>
References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-12-tientzu@chromium.org> <20210614062801.GJ28343@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210614062801.GJ28343@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 14, 2021 at 08:28:01AM +0200, Christoph Hellwig wrote:
> I think merging this with the next two patches would be a little more
> clear.

Sorry, I meant the next patch and the previous one.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:41:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:41:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141213.260921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgHw-0003w6-JO; Mon, 14 Jun 2021 06:41:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141213.260921; Mon, 14 Jun 2021 06:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgHw-0003vz-GA; Mon, 14 Jun 2021 06:41:36 +0000
Received: by outflank-mailman (input) for mailman id 141213;
 Mon, 14 Jun 2021 06:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9Pv4=LI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lsgHv-0003vY-88
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 06:41:35 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dad38461-204c-4d68-9fd8-ecc2473cc65e;
 Mon, 14 Jun 2021 06:41:34 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 136C821975;
 Mon, 14 Jun 2021 06:41:33 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id CD98F118DD;
 Mon, 14 Jun 2021 06:41:32 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id OCT7MBz6xmAUbQAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 14 Jun 2021 06:41:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dad38461-204c-4d68-9fd8-ecc2473cc65e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623652893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zebcSGQFr6VJ3TfznxJ6Xuyv0bmnuLIQaGpN9k8ulms=;
	b=S6Z5nhqlhxVd7/rNPaAv3ubRHNI+3SPoqmIzPyzRIB4nPwh1be0LvypfLqqK6T9EmhPF0V
	XOXwIDkgdXCoXGO5+9uqQ/WYpiDvDp/5KivURWB1m0+O+8OVA9FaTVB5+IDPk3oewVGUCv
	wNggUElAFhLenteHH42qc4aJS1da7os=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623652893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zebcSGQFr6VJ3TfznxJ6Xuyv0bmnuLIQaGpN9k8ulms=;
	b=S6Z5nhqlhxVd7/rNPaAv3ubRHNI+3SPoqmIzPyzRIB4nPwh1be0LvypfLqqK6T9EmhPF0V
	XOXwIDkgdXCoXGO5+9uqQ/WYpiDvDp/5KivURWB1m0+O+8OVA9FaTVB5+IDPk3oewVGUCv
	wNggUElAFhLenteHH42qc4aJS1da7os=
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-162771-mainreport@xen.org>
From: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [xen-unstable test] 162771: regressions - FAIL
Message-ID: <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
Date: Mon, 14 Jun 2021 08:41:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <osstest-162771-mainreport@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="yuJ0QH2tXUQ66J2e8qyL0feqaE3MooDDt"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--yuJ0QH2tXUQ66J2e8qyL0feqaE3MooDDt
Content-Type: multipart/mixed; boundary="GgHN18e8Ky8NYu93pkY4JBlS8EPnrygKL";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
Subject: Re: [xen-unstable test] 162771: regressions - FAIL
References: <osstest-162771-mainreport@xen.org>
In-Reply-To: <osstest-162771-mainreport@xen.org>

--GgHN18e8Ky8NYu93pkY4JBlS8EPnrygKL
Content-Type: multipart/mixed;
 boundary="------------FF798DF7B5533FDA60FC68D7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FF798DF7B5533FDA60FC68D7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.06.21 04:21, osstest service owner wrote:
> flight 162771 xen-unstable real [real]
> flight 162783 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/162771/
> http://logs.test-lab.xenproject.org/osstest/logs/162783/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>   test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>   test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
>   test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Hmm, this is rather unfortunate.

Those last two tests failed due to commit 7bd8989ab77b6ade3b, but just
reverting that patch doesn't seem right to me either.

The Linux kernel has a bug here: it initially sets max_pfn in the
shared_info page to the size of the p2m_list (so my reasoning for the
above patch was wrong in this case), but when growing the p2m_list
(e.g. due to ballooning or grant mapping) it stores a real pfn number
in max_pfn. Even this pfn might be wrong, as only the pfn that triggers
allocation of a new p2m page is stored in max_pfn; any higher new pfn
whose p2m entry lands in that new p2m page won't update max_pfn.

As a result I think the only sane handling would be to assume that the
max_pfn value read from the shared_info page really is a pfn. This
value should be adjusted to the last pfn of the related p2m page, and
it should be tolerated if the resulting last p2m page is not valid.
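
To illustrate, here is a rough sketch of that adjustment (purely
illustrative, not actual Xen tools code; the 512 p2m entries per page
are an assumption for a 64-bit guest with 4k pages):

```python
# Hypothetical sketch of the proposed handling, NOT the real Xen code.
# Assumption: a 4k p2m page holds 512 eight-byte entries (64-bit guest).
P2M_PER_PAGE = 4096 // 8

def adjusted_max_pfn(shared_info_max_pfn: int) -> int:
    """Treat the shared_info value as a real pfn and round it up to the
    last pfn covered by the p2m page containing it; the resulting last
    p2m page may then be only partially valid and must be tolerated."""
    p2m_page_index = shared_info_max_pfn // P2M_PER_PAGE
    return (p2m_page_index + 1) * P2M_PER_PAGE - 1
```

E.g. a max_pfn of 512 read from shared_info would be widened to 1023,
i.e. the last pfn covered by the second p2m page.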

Another variant would be to just revert the above commit and modify the
semantics of max_pfn in the shared_info page to really mean max_pfn+1.
This would keep the possibility of migration failures of ballooned
Linux systems, as is the case today.

Additionally I'll fix the Linux kernel, of course.

Any thoughts?


Juergen

--------------FF798DF7B5533FDA60FC68D7--

--GgHN18e8Ky8NYu93pkY4JBlS8EPnrygKL--

--yuJ0QH2tXUQ66J2e8qyL0feqaE3MooDDt
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDG+hwFAwAAAAAACgkQsN6d1ii/Ey88
ZQgAkAy+217Q820vkq/NAI0SIamkTmTXYhkiEiDGYNZbM39U6eL2HQZYXN8wCh0jMishh3f4Np0f
MiKMcHjXkOW6IbYcNLswk5+jEaxGOc5Wwe0BpoaQJWwqrlzQwdEgDb5O6dhY45s7etjBdKynIsKb
Ne1Q6xSq3bTzclKVr0YcnSAySk2qYV/oiOHbLMDnBrhOVSR36iurfGYeUHOQ74z9uda4nXnAHsdx
nOTEGeMca04MbA2bGMDplzRwnfPNDyyBZjTTRBRmQoXxAoi1vB7/pg3fhr6idbUc67RnQhefkzbm
OOd3X3wuIEg2hY0T5jGwh0xSU7BCKw4/LCoSwrDhZg==
=CoW3
-----END PGP SIGNATURE-----

--yuJ0QH2tXUQ66J2e8qyL0feqaE3MooDDt--


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 06:43:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 06:43:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141219.260931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgJP-0004aq-VL; Mon, 14 Jun 2021 06:43:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141219.260931; Mon, 14 Jun 2021 06:43:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgJP-0004aj-S9; Mon, 14 Jun 2021 06:43:07 +0000
Received: by outflank-mailman (input) for mailman id 141219;
 Mon, 14 Jun 2021 06:43:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsgJO-0004aS-4L; Mon, 14 Jun 2021 06:43:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsgJN-0003Yj-Qh; Mon, 14 Jun 2021 06:43:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsgJN-0002C7-Fv; Mon, 14 Jun 2021 06:43:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsgJN-0003NT-FT; Mon, 14 Jun 2021 06:43:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pA3w0iIbhkJh2PD/n8hNCWI9sOzp5KUfJS8Fm5NR8bc=; b=DyaU1An8TWKtdWkJ3hT1+jadl3
	G5kqkE3SsXmo2YftDf36MzQ5PYy0nfBEfNvo0/XS/9GcNLP9+W0W19fxT0U/wIG8Ju9Ee1XrFjb9C
	B6pcvMah7ZTsBpsB6dCtRVZXqTng31+9PnbQym/szcl/rIWllb9l3kxc0N4WCX5ylUUY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162778-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162778: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 06:43:05 +0000

flight 162778 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  297 days
Failing since        152659  2020-08-21 14:07:39 Z  296 days  548 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    2 days    5 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 07:09:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 07:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141231.260952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgic-0008Du-Cn; Mon, 14 Jun 2021 07:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141231.260952; Mon, 14 Jun 2021 07:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsgic-0008Dn-8b; Mon, 14 Jun 2021 07:09:10 +0000
Received: by outflank-mailman (input) for mailman id 141231;
 Mon, 14 Jun 2021 07:09:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsgia-0008Dd-VC
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 07:09:08 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d6c04c5-087e-4964-8997-ea3e5ae4adff;
 Mon, 14 Jun 2021 07:09:08 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 1F7FB67373; Mon, 14 Jun 2021 09:09:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d6c04c5-087e-4964-8997-ea3e5ae4adff
Date: Mon, 14 Jun 2021 09:09:03 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>, kys@microsoft.com,
	haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
	krish.sadhukhan@oracle.com, saravanand@fb.com,
	Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
Message-ID: <20210614070903.GA29976@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-11-ltykernel@gmail.com> <20210607065007.GE24478@lst.de> <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 07, 2021 at 11:21:20PM +0800, Tianyu Lan wrote:
>> dma_map_single can only be used on page backed memory, and if this is
>> using page backed memory you wouldn't need to do the phys_to_virt
>> tricks.  Can someone explain the mess here in more detail?
>
> Sorry. Could you elaborate on the issue? The pages in the pb array are not
> allocated by the DMA API; dma_map_single() is used here to map these pages'
> addresses to the bounce buffer's physical address.

dma_map_single just calls dma_map_page using virt_to_page.  So this
can't work on addresses not in the kernel linear mapping.
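To illustrate the point, here is a simplified sketch (a paraphrase, not the verbatim kernel source in include/linux/dma-mapping.h) of why dma_map_single() is restricted to linear-map addresses:

```c
/*
 * Simplified sketch of dma_map_single(): it derives a struct page from
 * the virtual address with virt_to_page() before delegating to
 * dma_map_page().
 */
static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
					size_t size,
					enum dma_data_direction dir)
{
	/*
	 * virt_to_page()/offset_in_page() are only valid for addresses in
	 * the kernel linear mapping; an address manufactured via
	 * phys_to_virt() for memory that is not in the linear map yields a
	 * bogus struct page here.
	 */
	return dma_map_page(dev, virt_to_page(ptr), offset_in_page(ptr),
			    size, dir);
}
```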


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 07:12:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 07:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141237.260963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsglp-0001Eg-Sl; Mon, 14 Jun 2021 07:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141237.260963; Mon, 14 Jun 2021 07:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsglp-0001EY-Oe; Mon, 14 Jun 2021 07:12:29 +0000
Received: by outflank-mailman (input) for mailman id 141237;
 Mon, 14 Jun 2021 07:12:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsglo-0001EO-HG
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 07:12:28 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5ce7fb8-1947-45de-85c0-4b9025f5b1cc;
 Mon, 14 Jun 2021 07:12:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 537CA67373; Mon, 14 Jun 2021 09:12:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5ce7fb8-1947-45de-85c0-4b9025f5b1cc
Date: Mon, 14 Jun 2021 09:12:23 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>, kys@microsoft.com,
	haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
	krish.sadhukhan@oracle.com, saravanand@fb.com,
	Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
Message-ID: <20210614071223.GA30171@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-9-ltykernel@gmail.com> <20210607064312.GB24478@lst.de> <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 07, 2021 at 10:56:47PM +0800, Tianyu Lan wrote:
> These addresses in the extra address space work as a mirror of system
> memory. The shared memory with the host in an Isolation VM needs to be
> accessed via the extra address space, which is above the shared GPA boundary.

Why?


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 08:23:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 08:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141255.260977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lshsY-0002LV-Cm; Mon, 14 Jun 2021 08:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141255.260977; Mon, 14 Jun 2021 08:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lshsY-0002LO-9M; Mon, 14 Jun 2021 08:23:30 +0000
Received: by outflank-mailman (input) for mailman id 141255;
 Mon, 14 Jun 2021 08:23:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lshsW-0002L7-Ql; Mon, 14 Jun 2021 08:23:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lshsW-0005je-Bf; Mon, 14 Jun 2021 08:23:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lshsV-0000WE-U1; Mon, 14 Jun 2021 08:23:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lshsV-0007Ib-TW; Mon, 14 Jun 2021 08:23:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Lj1gkJJLjvmzL50oGvoAvZLlSCWPsJEXblgse6BNbKs=; b=DjgtwgWv/b6BlN783ayo2fJo0W
	zZXX3SYO5bHPaSTVsIs9x00mPWFeB3vnINMlg+4+mu95sZnK2Ymx8r3Tg5yQaOUoYREc+JCGmC+IN
	8/JtcuweEp3zxxKttjHAUSiot42Xlqx13mD2Wx6odFCW+rnP9LaTtWAKKpcyQxt4veZA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162794-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162794: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=55ea45acc99c549c7757efe954aacc33ad30a8ef
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 08:23:27 +0000

flight 162794 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162794/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              55ea45acc99c549c7757efe954aacc33ad30a8ef
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  339 days
Failing since        151818  2020-07-11 04:18:52 Z  338 days  331 attempts
Testing same since   162681  2021-06-12 04:18:50 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61229 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 09:42:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 09:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141276.261002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsj74-0004dZ-TH; Mon, 14 Jun 2021 09:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141276.261002; Mon, 14 Jun 2021 09:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsj74-0004d7-MK; Mon, 14 Jun 2021 09:42:34 +0000
Received: by outflank-mailman (input) for mailman id 141276;
 Mon, 14 Jun 2021 09:42:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsj73-0004d1-7P
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 09:42:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsj73-00017X-2V; Mon, 14 Jun 2021 09:42:33 +0000
Received: from [54.239.6.179] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsj72-0003RU-R5; Mon, 14 Jun 2021 09:42:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4Vm3P39vFSk9p2iCk5JrRc1i21iUXkH1ftY0XfY2dRI=; b=czzOZx0sasQAD248a9kj8Rn6r/
	Ha0GDRq/izVaS2lsDbzuLMIDV/zh4Q1zlKrKPlzD9gCE7L6k5d0JGHrd50Vgxq1HIP1HU2xTrIuwi
	EI3Ew+UW48aMPzslTMo5EJHscSloKz+CNUyoELaEEaC0XdrlvLkBWoLPfI7uXkiufwoo=;
Subject: Re: [PATCH] Arm: avoid .init.data to be marked as executable
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
Date: Mon, 14 Jun 2021 11:41:53 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 11/06/2021 11:39, Jan Beulich wrote:
> This confuses disassemblers, at the very least. Move
> .altinstr_replacement to .init.text,

The alternatives code was borrowed from Linux. The code there has since 
changed to cater for very large kernels. They used to keep 
.altinstr_replacement and .altinstructions close to each other (albeit 
they were both in .init.text).

I am not entirely sure why, but I am a bit worried about separating 
them. What sort of testing did you do?

> dropping the redundant ALIGN().
> 
> Also, to have .altinstr_replacement have consistent attributes in the
> object files, add "x" to the one instance where it was missing.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I'm uncertain whether having .altinstr_replacement inside or outside the
> [_sinittext,_einittext) region is better; I simply followed what we have
> on the x86 side right now.

This means the altinstructions will be marked executable in the 
page tables. They technically should not be executable, so I would move 
them outside _einittext and make sure the section is aligned to PAGE_SIZE.
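For illustration, a hedged sketch of what such a placement could look like in the linker script (a hypothetical fragment, loosely modelled on xen/arch/arm/xen.lds.S; not the actual file):

```
/* Hypothetical fragment: keep .altinstr_replacement out of the
 * [_sinittext,_einittext) executable range, padded to a page boundary
 * so its page-table attributes can be set independently. */
.init.text : {
        _sinittext = .;
        *(.init.text)
        _einittext = .;
        . = ALIGN(PAGE_SIZE);       /* executable init mapping ends here */
        *(.altinstr_replacement)    /* replacement insns, mapped non-exec */
}
```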

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 09:57:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 09:57:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141284.261017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjLX-00068U-6H; Mon, 14 Jun 2021 09:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141284.261017; Mon, 14 Jun 2021 09:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjLX-00068N-0u; Mon, 14 Jun 2021 09:57:31 +0000
Received: by outflank-mailman (input) for mailman id 141284;
 Mon, 14 Jun 2021 09:57:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsjLV-00067y-ST
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 09:57:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsjLV-0001NF-MP; Mon, 14 Jun 2021 09:57:29 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsjLV-0004S8-Ez; Mon, 14 Jun 2021 09:57:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=30YTuEitrkjhxbU9lj+Ns4W/H3DW4PBOizrGuvmuMe0=; b=N7zGe2xX0P50RtQYfsbqhiwpfF
	//Z9wD9dFz0+ah/L+V28TMQKYS4aHLkzJlpGhHi2ow4xHjx2kazEcsByHxmF/rSpD8CcxQo1vVLdL
	YHXNr9usRoVuf6nzq7qc9zP+myWPPLAFXEPFxaVF8OtXiIf/hW4bFT0rtxTrwX2QkbZA=;
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
Date: Mon, 14 Jun 2021 11:57:27 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 11/06/2021 11:19, Jan Beulich wrote:
> This confuses disassemblers, at the very least. When this data still
> lived in .init.*, this probably didn't matter much, albeit the
> "#execinstr" would have been suspicious to me already then. But at the
> latest, with their movement to .rodata, these attributes should have been
> dropped.

I don't quite understand why this wasn't really a problem for .init.data 
but is one for .rodata. Can you expand on your reasoning?

> 
> Fixes: 9cbe093b7b84 ("xen/arm: link: Link proc_info_list in .rodata instead of .init.data")

I don't view this commit as the buggy one, so I would prefer it if there 
were no Fixes tag. But if you want one, it should reference the patch 
that introduced #execinstr.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

The change below looks good to me. But I don't understand the commit 
message, so I will wait for an answer before acking it.

> ---
> The PRINT() macro in head.S is another source of such confusion of code
> vs data. While in head.o there are mapping symbols guiding disassemblers,
> these mapping symbols are gone when looking at xen-syms. But I realize
> adr's reach is too limited to allow for a halfway reasonable approach of
> moving those strings (e.g. to, at least, have them all together).

I have tried this in the past. The solution I had was to differentiate 
calls made with the MMU on from those made with the MMU off. But I 
decided it wasn't worth the trouble.
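For readers unfamiliar with the constraint: `adr` is PC-relative and therefore works whether the MMU is on or off, but its reach on Arm32 is limited to a small offset from the current instruction, while an absolute load has unlimited reach but yields the link-time (virtual) address. A hedged, hypothetical illustration (not Xen code):

```
        @ Hypothetical Arm32 snippet illustrating the trade-off.
        adr   r0, near_str     @ PC-relative: correct with MMU on or off,
                               @ but the target must be close by
        ldr   r0, =far_str     @ literal-pool load of an absolute address:
                               @ any distance, but it is the link-time
                               @ (virtual) address -- wrong before the
                               @ MMU is enabled
```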

> 
> --- a/xen/arch/arm/arm32/proc-v7.S
> +++ b/xen/arch/arm/arm32/proc-v7.S
> @@ -29,7 +29,7 @@ brahma15mp_init:
>           mcr   CP32(r0, ACTLR)
>           mov   pc, lr
>   
> -        .section ".proc.info", #alloc, #execinstr
> +        .section ".proc.info", #alloc
>           .type __v7_ca15mp_proc_info, #object
>   __v7_ca15mp_proc_info:
>           .long 0x410FC0F0             /* Cortex-A15 */
> @@ -38,7 +38,7 @@ __v7_ca15mp_proc_info:
>           .long caxx_processor
>           .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
>   
> -        .section ".proc.info", #alloc, #execinstr
> +        .section ".proc.info", #alloc
>           .type __v7_ca7mp_proc_info, #object
>   __v7_ca7mp_proc_info:
>           .long 0x410FC070             /* Cortex-A7 */
> @@ -47,7 +47,7 @@ __v7_ca7mp_proc_info:
>           .long caxx_processor
>           .size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
>   
> -        .section ".proc.info", #alloc, #execinstr
> +        .section ".proc.info", #alloc
>           .type __v7_brahma15mp_proc_info, #object
>   __v7_brahma15mp_proc_info:
>           .long 0x420F00F0             /* Broadcom Brahma-B15 */
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:03:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 10:03:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141290.261027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjQl-0007cE-Mr; Mon, 14 Jun 2021 10:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141290.261027; Mon, 14 Jun 2021 10:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjQl-0007c7-JW; Mon, 14 Jun 2021 10:02:55 +0000
Received: by outflank-mailman (input) for mailman id 141290;
 Mon, 14 Jun 2021 10:02:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsjQj-0007br-Mg
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 10:02:53 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5a9e4e9-51fb-4301-b384-b4f02484d6d2;
 Mon, 14 Jun 2021 10:02:52 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2055.outbound.protection.outlook.com [104.47.4.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-12-ZjViEyiiPLKoyhGFGDkU1Q-1; Mon, 14 Jun 2021 12:02:50 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6382.eurprd04.prod.outlook.com (2603:10a6:803:122::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Mon, 14 Jun
 2021 10:02:45 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 10:02:45 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2P264CA0008.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 10:02:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5a9e4e9-51fb-4301-b384-b4f02484d6d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623664971;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sHdTEpmXpCdOEPV+CfkRImOUaqcbwd4uriVh5rok7iQ=;
	b=VDdeFpmpCeMlR6lUhpkyhDTUO6XEJ8aYJfrnXXPx/jPRhS/iIkT+QhpOGBSNJx9b7n+bho
	w+FBfXA011Dx01Yr81JKdMqeEP0cDZqk2EKjImhtCQU7KA2gbIALVpj2mnxAY+7p5eqiDZ
	wqS28gpfat5D1SaiJYo2ZbBuWUEhz5Y=
X-MC-Unique: ZjViEyiiPLKoyhGFGDkU1Q-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] Arm: avoid .init.data to be marked as executable
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
 <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8c5ec03f-5ea1-99f8-a521-3552d0015ac4@suse.com>
Date: Mon, 14 Jun 2021 12:02:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0008.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::20)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bfef8889-7706-4740-e154-08d92f1b8ea5
X-MS-TrafficTypeDiagnostic: VE1PR04MB6382:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bfef8889-7706-4740-e154-08d92f1b8ea5
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 10:02:45.5293
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PsPdiE2TQDgCfXCtRX1NwqS9akMjxqD57mDTvesF5v01dDlboPmwvlCwuuO39+bXOc0sXUJuhad3/HnxVh3aLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6382

On 14.06.2021 11:41, Julien Grall wrote:
> On 11/06/2021 11:39, Jan Beulich wrote:
>> This confuses disassemblers, at the very least. Move
>> .altinstr_replacement to .init.text,
> 
> The alternative code was borrowed from Linux. Their code has since changed 
> to cater for very large kernels, but they used to keep .altinstr_replacement 
> and .altinstructions close to each other (albeit both were in 
> .init.text).
> 
> I am not entirely sure why, but I am a bit worried about separating them. 
> What sort of testing did you do?

Well, just build tests, on the assumption that relocation overflows
would be reported by the linker if the sections ended up too far
apart.

>> dropping the redundant ALIGN().
>>
>> Also, to have .altinstr_replacement have consistent attributes in the
>> object files, add "x" to the one instance where it was missing.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I'm uncertain whether having .altinstr_replacement inside or outside the
>> [_sinittext,_einittext) region is better; I simply followed what we have
>> on the x86 side right now.
> 
> This means the altinstructions will be marked executable in the 
> page-table. They technically should not be executable, so I would move 
> them outside _einittext and make sure the section is aligned to a PAGE_SIZE.

Hmm, are you saying you bother getting attributes right for .init.*
in the page tables? I ask because we don't on x86, and because it
would seem wasteful to me to pad to PAGE_SIZE just for this. But
you're the maintainer, i.e. I'm merely double checking ...

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:03:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 10:03:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141291.261033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjQm-0007iy-4w; Mon, 14 Jun 2021 10:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141291.261033; Mon, 14 Jun 2021 10:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjQl-0007gB-VF; Mon, 14 Jun 2021 10:02:55 +0000
Received: by outflank-mailman (input) for mailman id 141291;
 Mon, 14 Jun 2021 10:02:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsjQk-0007bx-Nq; Mon, 14 Jun 2021 10:02:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsjQk-0001Z1-GI; Mon, 14 Jun 2021 10:02:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsjQk-00051g-81; Mon, 14 Jun 2021 10:02:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsjQk-0003ZW-7O; Mon, 14 Jun 2021 10:02:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LHNHaDLO8UOUXKmUoJvBrUq3+OUVuzdcA9fKKl0uOmk=; b=mx53fCwJqt+fk72n8KICdWZqPR
	CgpVdQ363JBNcCPzgwYgQy4Yimv0tKUhdNi9tljXeRZZax7vuj79xsHaLzfoiKCVYUhYI5Fxh5Xu4
	5AToBB5vxzKgT28tPgQtAH8fB09VQts7CB7lia2O3ZOxqZo8AiEw+0OrCITVhnllj+YQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162792-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162792: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 10:02:54 +0000

flight 162792 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162792/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   10 days
Failing since        162368  2021-06-04 15:42:59 Z    9 days   17 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    4 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:11:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 10:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141308.261055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjYz-0001JE-1z; Mon, 14 Jun 2021 10:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141308.261055; Mon, 14 Jun 2021 10:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsjYy-0001J7-Ux; Mon, 14 Jun 2021 10:11:24 +0000
Received: by outflank-mailman (input) for mailman id 141308;
 Mon, 14 Jun 2021 10:11:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsjYx-0001Ix-Td
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 10:11:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsjYw-0001gd-R1; Mon, 14 Jun 2021 10:11:22 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsjYw-0005e0-J4; Mon, 14 Jun 2021 10:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NpzWb+maa2BuXXssB8hk453klw7p/KghmtSFkGr3htg=; b=pvwnyHaEGPmld9aR3SpOn7KMxC
	9Cm984ggqEy3knbeiCMs0Q0RlJJc9tf+8qmjTY7Nx3S3UgzG8Q1myl7gGgutE/vm/hxrMWy1m+mB8
	q9A/EaN4LyvNqJg/SVUcklbiEuGKAnTVxV7PHQPNmIqbvDIKu9N4Y6Z3RIzdUF59kJwA=;
Subject: Re: [PATCH v2] xen/grant-table: Simplify the update to the per-vCPU
 maptrack freelist
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210608100824.25141-1-julien@xen.org>
 <3df3cc7b-a084-cc9c-5446-b662c884addd@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2d1d5588-5464-541c-5911-5c7942835b56@xen.org>
Date: Mon, 14 Jun 2021 12:11:20 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <3df3cc7b-a084-cc9c-5446-b662c884addd@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 09/06/2021 12:20, Jan Beulich wrote:
> On 08.06.2021 12:08, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Since XSA-228 (commit 02cbeeb62075 "gnttab: split maptrack lock
>> to make it fulfill its purpose again"), v->maptrack_head,
>> v->maptrack_tail and the content of the freelist are accessed with
>> the lock v->maptrack_freelist_lock held.
>>
>> Therefore it is no longer necessary to update the fields using cmpxchg()
>> nor to read them atomically.
>>
>> Note that there are two cases where v->maptrack_tail is accessed without
>> the lock. They both happen in get_maptrack_handle() when initializing
>> the free list of the current vCPU. Therefore there is no possible race.
>>
>> The code is now reworked to remove any use of cmpxchg() and read_atomic()
>> when accessing the fields v->maptrack_{head, tail} as well as the
>> freelist.
>>
>> Take the opportunity to add a comment on top of the lock definition
>> and explain what it protects.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks. I committed it with...

> with one nit:
> 
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -255,7 +255,13 @@ struct vcpu
>>       /* VCPU paused by system controller. */
>>       int              controller_pause_count;
>>   
>> -    /* Grant table map tracking. */
>> +    /*
>> +     * Grant table map tracking. The lock maptrack_freelist_lock
>> +     * protects to:
> 
> I don't think you want "to" here.

... this addressed and ...

> 
> Jan
> 
>> +     *  - The entries in the freelist

... "The" removed.

>> +     *  - maptrack_head
>> +     *  - maptrack_tail
>> +     */
>>       spinlock_t       maptrack_freelist_lock;
>>       unsigned int     maptrack_head;
>>       unsigned int     maptrack_tail;
>>
> 
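The pattern the patch moves to can be sketched in plain C (a hypothetical, simplified model; names like put_handle/get_handle and the fixed-size array are illustrative, not Xen's API): once every access to the head, tail and list entries happens under a single lock, plain loads and stores suffice and cmpxchg()/read_atomic() become unnecessary.

```c
#include <assert.h>
#include <pthread.h>

#define NENT    8
#define INVALID (~0u)

/* Simplified stand-in for the per-vCPU maptrack freelist. */
struct vcpu_freelist {
    pthread_mutex_t lock;        /* stands in for maptrack_freelist_lock */
    unsigned int head;
    unsigned int tail;
    unsigned int next[NENT];     /* freelist links between entries */
};

static void freelist_init(struct vcpu_freelist *v)
{
    pthread_mutex_init(&v->lock, NULL);
    v->head = v->tail = INVALID;
}

/* Append a handle; plain writes are fine because the lock is held. */
static void put_handle(struct vcpu_freelist *v, unsigned int h)
{
    pthread_mutex_lock(&v->lock);
    v->next[h] = INVALID;
    if (v->tail == INVALID)
        v->head = h;             /* list was empty */
    else
        v->next[v->tail] = h;
    v->tail = h;
    pthread_mutex_unlock(&v->lock);
}

/* Pop the head handle, or INVALID when the list is empty. */
static unsigned int get_handle(struct vcpu_freelist *v)
{
    unsigned int h;

    pthread_mutex_lock(&v->lock);
    h = v->head;
    if (h != INVALID) {
        v->head = v->next[h];
        if (v->head == INVALID)
            v->tail = INVALID;
    }
    pthread_mutex_unlock(&v->lock);
    return h;
}
```

The design point matches the commit message: the lock serialises all readers and writers of head/tail/freelist, so no lock-free primitives are needed.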

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:32:22 2021
Subject: Re: [PATCH] Arm: avoid .init.data to be marked as executable
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
 <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
 <8c5ec03f-5ea1-99f8-a521-3552d0015ac4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1b44cb6d-dda6-5297-893b-a53fe7d123d9@xen.org>
Date: Mon, 14 Jun 2021 12:32:11 +0200
In-Reply-To: <8c5ec03f-5ea1-99f8-a521-3552d0015ac4@suse.com>



On 14/06/2021 12:02, Jan Beulich wrote:
> On 14.06.2021 11:41, Julien Grall wrote:
>> On 11/06/2021 11:39, Jan Beulich wrote:
>>> This confuses disassemblers, at the very least. Move
>>> .altinstr_replacement to .init.text,
>>
>> The alternative code was borrowed from Linux. The code has now changed
>> to cater for very large kernels. They used to keep .altinstr_replacement
>> and .altinstructions close to each other (albeit they were both in
>> .init.text).
>>
>> I am not entirely sure why, but I am a bit worried about separating
>> them. What sort of testing did you do?
> 
> Well, just build tests, on the assumption that relocation overflows
> would be reported by the linker if the sections ended up too far
> apart.

Hmmm, fair point. They should also not be further away than the original 
instruction. So they ought to be fine.

> 
>>> dropping the redundant ALIGN().
>>>
>>> Also, to have .altinstr_replacement have consistent attributes in the
>>> object files, add "x" to the one instance where it was missing. >
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> I'm uncertain whether having .altinstr_replacement inside or outside the
>>> [_sinittext,_einittext) region is better; I simply followed what we have
>>> on the x86 side right now.
>>
>> This means the altinstructions will be marked executable in the
>> page-table. They technically should not be executable, so I would move
>> them outside _einittext and make sure the section is aligned to a PAGE_SIZE.
> 
> Hmm, are you saying you bother getting attributes right for .init.*
> in the page tables? I ask because we don't on x86, and because it
> would seem wasteful to me to pad to PAGE_SIZE just for this. But
> you're the maintainer, i.e. I'm merely double checking ...

So this is defense in depth. Your assumption is that .init.text is going 
to disappear after boot. However, if a bug left .init.text present, then 
this would add more attack surface. So I think it is good practice to 
keep the permissions correct.

However... looking at the alternative code again, there is another 
reason to move this change out of the range _sinittext - _einittext. The 
function branch_insn_requires_update() will forbid branch targets in 
other alternative instructions.

It first checks whether the target is part of the active text. With this 
change, that check will return true, because the alternative instruction 
replacements will be between _sinittext and _einittext.

So .altinstr_replacement must be outside of the region [_sinittext, 
_einittext).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:41:03 2021
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
 <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
Date: Mon, 14 Jun 2021 12:40:47 +0200
In-Reply-To: <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>

On 14.06.2021 11:57, Julien Grall wrote:
> On 11/06/2021 11:19, Jan Beulich wrote:
>> This confuses disassemblers, at the very least. When this data still
>> lived in .init.*, this probably didn't matter much, albeit the
>> "#execinstr" would have been suspicious to me already then. But the
>> latest with their movement to .rodata these attributes should have been
>> dropped.
> 
> I don't quite understand why this wasn't really a problem for .init.data 
> but it is a problem for .rodata. Can you expand on your reasoning?

I've said "probably" for a reason, and my thinking here goes along
the lines of what I've said on the other patch regarding .init.*:
There's perhaps not overly much reason to be picky about the
attributes of .init.*, and at least on x86 there is also a case
(the EFI binary) where we fold all .init.* into just .init anyway.

The alternative to the present description that I see would be to
go with just the 1st sentence. But I would be afraid in such a
case that you would come back and tell me this is too little of a
description.

>> Fixes: 9cbe093b7b84 ("xen/arm: link: Link proc_info_list in .rodata instead of .init.data")
> I don't view this commit as the buggy one. I would prefer if there is no 
> Fixes tag. But if you want one, then it should be the patch that 
> introduced #execinstr.

I've dropped the tag.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:48:10 2021
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v1.1 5/5] tests: Introduce a TSX test
Date: Mon, 14 Jun 2021 11:47:16 +0100
Message-ID: <20210614104716.23405-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-6-andrew.cooper3@citrix.com>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>

See the comment at the top of test-tsx.c for details.

This covers various complexities encountered while trying to address the
recent TSX deprecation on client parts.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v1.1:
 * Set alternative guest policy, and check.
 * Cope with !HAP configurations.
 * Complete the comment at the top of test-tsx.c
---
 tools/tests/Makefile       |   1 +
 tools/tests/tsx/.gitignore |   1 +
 tools/tests/tsx/Makefile   |  43 ++++
 tools/tests/tsx/test-tsx.c | 515 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 560 insertions(+)
 create mode 100644 tools/tests/tsx/.gitignore
 create mode 100644 tools/tests/tsx/Makefile
 create mode 100644 tools/tests/tsx/test-tsx.c

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 8746aabe6b..25531a984a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -5,6 +5,7 @@ SUBDIRS-y :=
 SUBDIRS-y += resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
 SUBDIRS-$(CONFIG_X86) += mce-test
+SUBDIRS-$(CONFIG_X86) += tsx
 ifneq ($(clang),y)
 SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
diff --git a/tools/tests/tsx/.gitignore b/tools/tests/tsx/.gitignore
new file mode 100644
index 0000000000..97ec4db7ff
--- /dev/null
+++ b/tools/tests/tsx/.gitignore
@@ -0,0 +1 @@
+test-tsx
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
new file mode 100644
index 0000000000..7381a4f5a4
--- /dev/null
+++ b/tools/tests/tsx/Makefile
@@ -0,0 +1,43 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+TARGET := test-tsx
+
+.PHONY: all
+all: $(TARGET)
+
+.PHONY: run
+run: $(TARGET)
+	./$(TARGET)
+
+.PHONY: clean
+clean:
+	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+	$(RM) -f -- *~
+
+.PHONY: install
+install: all
+
+.PHONY: uninstall
+uninstall:
+
+CFLAGS += -Werror -std=gnu11
+CFLAGS += $(CFLAGS_xeninclude)
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenguest)
+CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenctrl)
+LDFLAGS += $(LDLIBS_libxenguest)
+LDFLAGS += $(APPEND_LDFLAGS)
+
+test-tsx.o: Makefile
+
+test-tsx: test-tsx.o
+	$(CC) -o $@ $< $(LDFLAGS)
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
new file mode 100644
index 0000000000..036b36e797
--- /dev/null
+++ b/tools/tests/tsx/test-tsx.c
@@ -0,0 +1,515 @@
+/*
+ * TSX settings and consistency tests
+ *
+ * This tests various behaviours and invariants with regards to TSX.  It
+ * ideally wants running for several microcode versions, and all applicable
+ * tsx= commandline settings, on a single CPU, including after an S3
+ * suspend/resume event.
+ *
+ * It tests specifically:
+ *  - The consistency of MSR_TSX_CTRL/MSR_TSX_FORCE_ABORT values across the
+ *    system, and their accessibility WRT data in the host CPU policy.
+ *  - The actual behaviour of RTM on the system.
+ *  - Cross-check the default/max policies based on the actual RTM behaviour.
+ *  - Create some guests, check their defaults, and check that the defaults
+ *    can be changed.
+ */
+
+#define _GNU_SOURCE
+
+#include <err.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/ucontext.h>
+
+#include <xenctrl.h>
+#include <xenguest.h>
+#include <xen-tools/libs.h>
+
+#include "xg_private.h"
+
+enum {
+#define XEN_CPUFEATURE(name, value) X86_FEATURE_##name = value,
+#include <xen/arch-x86/cpufeatureset.h>
+};
+#define bitmaskof(idx)      (1u << ((idx) & 31))
+
+#define MSR_ARCH_CAPABILITIES               0x0000010a
+#define  ARCH_CAPS_TSX_CTRL                 (1 <<  7)
+#define MSR_TSX_FORCE_ABORT                 0x0000010f
+#define MSR_TSX_CTRL                        0x00000122
+
+static unsigned int nr_failures;
+#define fail(fmt, ...)                          \
+({                                              \
+    nr_failures++;                              \
+    (void)printf(fmt, ##__VA_ARGS__);           \
+})
+
+static xc_interface *xch;
+
+/*
+ * Policies, arranged as an array for easy collection of all of them.  We
+ * don't care about the raw policy (index 0) so reuse that for the guest
+ * policy.
+ */
+static struct xc_cpu_policy policies[6];
+#define guest_policy policies[0]
+#define host         policies[XEN_SYSCTL_cpu_policy_host]
+#define pv_max       policies[XEN_SYSCTL_cpu_policy_pv_max]
+#define hvm_max      policies[XEN_SYSCTL_cpu_policy_hvm_max]
+#define pv_default   policies[XEN_SYSCTL_cpu_policy_pv_default]
+#define hvm_default  policies[XEN_SYSCTL_cpu_policy_hvm_default]
+
+static bool xen_has_pv = true, xen_has_hvm = true;
+
+static xc_physinfo_t physinfo;
+
+static enum rtm_behaviour {
+    RTM_UD,
+    RTM_OK,
+    RTM_ABORT,
+} rtm_behaviour;
+
+/*
+ * Test a specific TSX MSR for consistency across the system, taking into
+ * account whether it ought to be accessible or not.
+ *
+ * We can't query offline CPUs, so skip those if encountered.  We don't care
+ * particularly for the exact MSR value, but we do care that it is the same
+ * everywhere.
+ */
+static void test_tsx_msr_consistency(unsigned int msr, bool accessible)
+{
+    uint64_t cpu0_val = ~0;
+
+    for ( unsigned int cpu = 0; cpu <= physinfo.max_cpu_id; ++cpu )
+    {
+        xc_resource_entry_t ent = {
+            .u.cmd = XEN_RESOURCE_OP_MSR_READ,
+            .idx = msr,
+        };
+        xc_resource_op_t op = {
+            .cpu = cpu,
+            .entries = &ent,
+            .nr_entries = 1,
+        };
+        int rc = xc_resource_op(xch, 1, &op);
+
+        if ( rc < 0 )
+        {
+            /* Don't emit a message for offline CPUs */
+            if ( errno != ENODEV )
+                fail("  xc_resource_op() for CPU%u failed: rc %d, errno %d - %s\n",
+                     cpu, rc, errno, strerror(errno));
+            continue;
+        }
+
+        if ( accessible )
+        {
+            if ( rc != 1 )
+            {
+                fail("  Expected 1 result, got %u\n", rc);
+                continue;
+            }
+            if ( ent.u.ret != 0 )
+            {
+                fail("  Expected ok, got %d\n", ent.u.ret);
+                continue;
+            }
+        }
+        else
+        {
+            if ( rc != 0 )
+                fail("  Expected 0 results, got %u\n", rc);
+            else if ( ent.u.ret != -EPERM )
+                fail("  Expected -EPERM, got %d\n", ent.u.ret);
+            continue;
+        }
+
+        if ( cpu == 0 )
+        {
+            cpu0_val = ent.val;
+            printf("  CPU0 val %#"PRIx64"\n", cpu0_val);
+        }
+        else if ( ent.val != cpu0_val )
+            fail("  CPU%u val %#"PRIx64" differs from CPU0 %#"PRIx64"\n",
+                 cpu, ent.val, cpu0_val);
+    }
+}
+
+/*
+ * Check all TSX MSRs, and in particular that their accessibility matches what
+ * is expressed in the host CPU policy.
+ */
+static void test_tsx_msrs(void)
+{
+    printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
+
+    printf("Testing MSR_TSX_CTRL consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
+}
+
+/*
+ * Probe for how RTM behaves, deliberately not inspecting CPUID.
+ * Distinguishes between "no support at all" (i.e. XBEGIN suffers #UD),
+ * working ok, and appearing to always abort.
+ */
+static enum rtm_behaviour probe_rtm_behaviour(void)
+{
+    for ( int i = 0; i < 1000; ++i )
+    {
+        /*
+         * Opencoding the RTM infrastructure from immintrin.h, because we
+         * still support older versions of GCC.  Also so we can include #UD
+         * detection logic.
+         */
+#define XBEGIN_STARTED -1
+#define XBEGIN_UD      -2
+        unsigned int status = XBEGIN_STARTED;
+
+        asm volatile (".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN 1f; 1: */
+                      : "+a" (status) :: "memory");
+        if ( status == XBEGIN_STARTED )
+        {
+            asm volatile (".byte 0x0f,0x01,0xd5" ::: "memory"); /* XEND */
+            return RTM_OK;
+        }
+        else if ( status == XBEGIN_UD )
+            return RTM_UD;
+    }
+
+    return RTM_ABORT;
+}
+
+static struct sigaction old_sigill;
+
+static void sigill_handler(int signo, siginfo_t *info, void *extra)
+{
+    extern char xbegin_label[] asm(".Lxbegin");
+
+    if ( info->si_addr == xbegin_label ||
+         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
+    {
+        ucontext_t *context = extra;
+
+        /*
+         * Found the XBEGIN instruction.  Step over it, and update `status` to
+         * signal #UD.
+         */
+#ifdef __x86_64__
+        context->uc_mcontext.gregs[REG_RIP] += 6;
+        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
+#else
+        context->uc_mcontext.gregs[REG_EIP] += 6;
+        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
+#endif
+    }
+    else
+    {
+        /*
+         * Not the SIGILL we're looking for...  Restore the old handler and
+         * try again.  Will likely coredump as a result.
+         */
+        sigaction(SIGILL, &old_sigill, NULL);
+    }
+}
+
+static void test_rtm_behaviour(void)
+{
+    struct sigaction new_sigill = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = sigill_handler,
+    };
+    const char *str;
+
+    printf("Testing RTM behaviour\n");
+
+    /*
+     * Install a custom SIGILL handler while probing for RTM behaviour, as the
+     * XBEGIN instruction might suffer #UD.
+     */
+    sigaction(SIGILL, &new_sigill, &old_sigill);
+    rtm_behaviour = probe_rtm_behaviour();
+    sigaction(SIGILL, &old_sigill, NULL);
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:    str = "#UD";   break;
+    case RTM_OK:    str = "OK";    break;
+    case RTM_ABORT: str = "Abort"; break;
+    default:        str = NULL;    break;
+    }
+
+    if ( str )
+        printf("  Got %s\n", str);
+    else
+        return fail("  Got unexpected behaviour %d\n", rtm_behaviour);
+
+    if ( host.cpuid.feat.rtm )
+    {
+        if ( rtm_behaviour == RTM_UD )
+            fail("  Host reports RTM, but appears unavailable\n");
+    }
+    else
+    {
+        if ( rtm_behaviour != RTM_UD )
+            fail("  Host reports no RTM, but appears available\n");
+    }
+}
+
+static void dump_tsx_details(const struct xc_cpu_policy *p, const char *pref)
+{
+    printf("  %s RTM %u, HLE %u, TSX_FORCE_ABORT %u, RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
+           pref,
+           p->cpuid.feat.rtm,
+           p->cpuid.feat.hle,
+           p->cpuid.feat.tsx_force_abort,
+           p->cpuid.feat.rtm_always_abort,
+           p->msr.arch_caps.tsx_ctrl);
+}
+
+/* Sanity test various invariants we expect in the default/max policies. */
+static void test_guest_policies(const struct xc_cpu_policy *max,
+                                const struct xc_cpu_policy *def)
+{
+    const struct cpuid_policy *cm = &max->cpuid;
+    const struct cpuid_policy *cd = &def->cpuid;
+    const struct msr_policy *mm = &max->msr;
+    const struct msr_policy *md = &def->msr;
+
+    dump_tsx_details(max, "Max:");
+    dump_tsx_details(def, "Def:");
+
+    if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
+          (bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
+           bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT))) ||
+         ((mm->arch_caps.raw | md->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
+        fail("  Xen-only TSX controls offered to guest\n");
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:
+        if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests despite not being available\n");
+        break;
+
+    case RTM_ABORT:
+        if ( cd->feat.raw[0].b &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests by default despite not being usable\n");
+        break;
+
+    case RTM_OK:
+        if ( !cm->feat.rtm || !cd->feat.rtm )
+             fail("  RTM not offered to guests despite being available\n");
+        break;
+    }
+
+    if ( cd->feat.hle )
+        fail("  Fail: HLE offered in default policy\n");
+}
+
+static void test_def_max_policies(void)
+{
+    if ( xen_has_pv )
+    {
+        printf("Testing PV default/max policies\n");
+        test_guest_policies(&pv_max, &pv_default);
+    }
+
+    if ( xen_has_hvm )
+    {
+        printf("Testing HVM default/max policies\n");
+        test_guest_policies(&hvm_max, &hvm_default);
+    }
+}
+
+static void test_guest(struct xen_domctl_createdomain *c)
+{
+    uint32_t domid = 0;
+    int rc;
+
+    rc = xc_domain_create(xch, &domid, c);
+    if ( rc )
+        return fail("  Domain create failure: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("  Created d%u\n", domid);
+
+    rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
+    if ( rc )
+    {
+        fail("  Failed to obtain domain policy: %d - %s\n",
+             errno, strerror(errno));
+        goto out;
+    }
+
+    dump_tsx_details(&guest_policy, "Cur:");
+
+    /*
+     * Check defaults given to the guest.
+     */
+    if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
+        fail("  RTM %u in guest, despite rtm behaviour\n",
+             guest_policy.cpuid.feat.rtm);
+
+    if ( guest_policy.cpuid.feat.hle ||
+         guest_policy.cpuid.feat.tsx_force_abort ||
+         guest_policy.cpuid.feat.rtm_always_abort ||
+         guest_policy.msr.arch_caps.tsx_ctrl )
+        fail("  Unexpected features advertised\n");
+
+    if ( host.cpuid.feat.rtm )
+    {
+        unsigned int _7b0;
+
+        /*
+         * If host RTM is available, all combinations of guest flags should be
+         * possible.  Flip both HLE/RTM to check non-default settings.
+         */
+        _7b0 = (guest_policy.cpuid.feat.raw[0].b ^=
+                (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)));
+
+        /* Set the new policy. */
+        rc = xc_cpu_policy_set_domain(xch, domid, &guest_policy);
+        if ( rc )
+        {
+            fail("  Failed to set domain policy: %d - %s\n",
+                 errno, strerror(errno));
+            goto out;
+        }
+
+        /* Re-get the new policy. */
+        rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
+        if ( rc )
+        {
+            fail("  Failed to obtain domain policy: %d - %s\n",
+                 errno, strerror(errno));
+            goto out;
+        }
+
+        dump_tsx_details(&guest_policy, "Cur:");
+
+        if ( guest_policy.cpuid.feat.raw[0].b != _7b0 )
+        {
+            fail("  Expected CPUID.7[1].b 0x%08x differes from actual 0x%08x\n",
+                 _7b0, guest_policy.cpuid.feat.raw[0].b);
+            goto out;
+        }
+    }
+
+ out:
+    rc = xc_domain_destroy(xch, domid);
+    if ( rc )
+        fail("  Failed to destroy domain: %d - %s\n",
+             errno, strerror(errno));
+}
+
+static void test_guests(void)
+{
+    if ( xen_has_pv )
+    {
+        struct xen_domctl_createdomain c = {
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+        };
+
+        printf("Testing PV guest\n");
+        test_guest(&c);
+    }
+
+    if ( xen_has_hvm )
+    {
+        struct xen_domctl_createdomain c = {
+            .flags = XEN_DOMCTL_CDF_hvm,
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+            .arch = {
+                .emulation_flags = XEN_X86_EMU_LAPIC,
+            },
+        };
+
+        if ( physinfo.capabilities & XEN_SYSCTL_PHYSCAP_hap )
+            c.flags |= XEN_DOMCTL_CDF_hap;
+        else if ( !(physinfo.capabilities & XEN_SYSCTL_PHYSCAP_shadow) )
+            return fail("  HVM available, but neither HAP nor Shadow\n");
+
+        printf("Testing HVM guest\n");
+        test_guest(&c);
+    }
+}
+
+/* Obtain some general data, then run the tests. */
+static void test_tsx(void)
+{
+    int rc;
+
+    /* Read all policies except raw. */
+    for ( int i = XEN_SYSCTL_cpu_policy_host;
+          i <= XEN_SYSCTL_cpu_policy_hvm_default; ++i )
+    {
+        rc = xc_cpu_policy_get_system(xch, i, &policies[i]);
+
+        if ( rc == -1 && errno == EOPNOTSUPP )
+        {
+            /*
+             * Use EOPNOTSUPP to spot Xen missing CONFIG_{PV,HVM}, and adjust
+             * later testing accordingly.
+             */
+            switch ( i )
+            {
+            case XEN_SYSCTL_cpu_policy_pv_max:
+            case XEN_SYSCTL_cpu_policy_pv_default:
+                if ( xen_has_pv )
+                    printf("  Xen doesn't support PV\n");
+                xen_has_pv = false;
+                continue;
+
+            case XEN_SYSCTL_cpu_policy_hvm_max:
+            case XEN_SYSCTL_cpu_policy_hvm_default:
+                if ( xen_has_hvm )
+                    printf("  Xen doesn't support HVM\n");
+                xen_has_hvm = false;
+                continue;
+            }
+        }
+        if ( rc )
+            return fail("Failed to obtain policy[%u]: %d - %s\n",
+                        i, errno, strerror(errno));
+    }
+
+    rc = xc_physinfo(xch, &physinfo);
+    if ( rc )
+        return fail("Failed to obtain physinfo: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("  Got %u CPUs\n", physinfo.max_cpu_id + 1);
+
+    test_tsx_msrs();
+    test_rtm_behaviour();
+    test_def_max_policies();
+    test_guests();
+}
+
+int main(int argc, char **argv)
+{
+    printf("TSX tests\n");
+
+    xch = xc_interface_open(NULL, NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+
+    test_tsx();
+
+    return !!nr_failures;
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 10:54:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 10:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141336.261100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lskEI-0007CU-0g; Mon, 14 Jun 2021 10:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141336.261100; Mon, 14 Jun 2021 10:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lskEH-0007CN-Sl; Mon, 14 Jun 2021 10:54:05 +0000
Received: by outflank-mailman (input) for mailman id 141336;
 Mon, 14 Jun 2021 10:54:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lskEG-0007CH-Bz
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 10:54:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lskEG-0002Oz-8r; Mon, 14 Jun 2021 10:54:04 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lskEG-0000Gh-1b; Mon, 14 Jun 2021 10:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=baLOSII5h50GFo7ZzJDPdjkr2wbD5PMMRxCpj5ra8mM=; b=ixQpZCIgJb3cpbX20/b7vXe1EA
	BKtya/E9agB6ChVH5dlsxPw5zagbe15Zn6DOU6bteg3aHeMPLqTrvPAKY3yilmsCnzmqYPcpL6Q7B
	2HmhKIGNEiwquErQp9nrxzI4ZuB2WwcyJgKOvKMwf7YBVjMhQHlpbNvJRGla8D5Wj1RU=;
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
 <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
 <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7a57d3df-94d0-5ee6-1ceb-bf4eddec1392@xen.org>
Date: Mon, 14 Jun 2021 12:54:02 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 14/06/2021 12:40, Jan Beulich wrote:
> On 14.06.2021 11:57, Julien Grall wrote:
>> On 11/06/2021 11:19, Jan Beulich wrote:
>>> This confuses disassemblers, at the very least. When this data still
>>> lived in .init.*, this probably didn't matter much, albeit the
>>> "#execinstr" would have been suspicious to me already then. But the
>>> latest with their movement to .rodata these attributes should have been
>>> dropped.
>>
>> I don't quite understand why this wasn't really a problem for .init.data
>> but it is a problem for .rodata. Can you expand your thought?
> 
> I've said "probably" for a reason, and my thinking here goes along
> the lines of what I've said on the other patch regarding .init.*:
> There's perhaps not overly much reason to be picky about the
> attributes of .init.*, and at least on x86 there is also a case
> (the EFI binary) where we fold all .init.* into just .init anyway.

Makes sense. Thanks for the explanation.

> 
> The alternative to the present description that I see would be to
> go with just the 1st sentence. But I would be afraid in such a
> case that you would come back and tell me this is too little of a
> description.

How about:

"xen/arm: .proc.info doesn't need to be executable

The section .proc.info lives in .rodata as it doesn't contain any 
executable code. However, the section is still marked as executable 
and, as a consequence, .rodata will also be marked executable.

Xen doesn't use the ELF permissions to decide the page-table mapping 
permissions, but the mismatch will confuse disassemblers.

#execinstr is now removed from all the pushsection directives dealing 
with .proc.info".

I can update the commit message on commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 11:16:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 11:16:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141344.261111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lskZR-00015x-Kh; Mon, 14 Jun 2021 11:15:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141344.261111; Mon, 14 Jun 2021 11:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lskZR-00015q-Ha; Mon, 14 Jun 2021 11:15:57 +0000
Received: by outflank-mailman (input) for mailman id 141344;
 Mon, 14 Jun 2021 11:15:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kbqA=LI=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1lskZP-00015k-JX
 for xen-devel@lists.xen.org; Mon, 14 Jun 2021 11:15:55 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b156149-2019-40f9-a6c8-e4e9d11b2c27;
 Mon, 14 Jun 2021 11:15:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b156149-2019-40f9-a6c8-e4e9d11b2c27
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623669354;
  h=from:subject:to:message-id:date:
   content-transfer-encoding:mime-version;
  bh=8HZBenymHg64BucWeXqDtfdc2Z3bnJwGmoXVxoU0QZM=;
  b=fG4USH8JgvTxIAY2VCZFKKXSAgKHLKsmbxIYDTKsmYyotW1Cvke4paRj
   6cn+L9huGmgEhzR8OETebXwDd2bC7VKhenQ0KDKpSYbdNNMJHZ0gnF8YJ
   NGcLDHND6G0qfH9HUxtgLre7eneuCXoTrLCqwmRDbt8QZR1xylaThbSru
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dKmKC9Fpa/NUpDcnAXRybM7rw8FvtvOwlzoNaa2YDD36zc3JJg7KwpTLwz3G0crxTzk1/8v/it
 SzXHOgNaV1zsr5bJpXVfL9ye17RH8rZJ7zftMx5DUBef7JFsgMFALkVKIDgMY3joUXQZmT8R3C
 yP7lNxNmshPSJZRjng2zGF2S7gAqrG/qgZfhxBSaXlrQxtrQMYAeriW3pP8GYMk41iP469lFab
 l5aarYOxuAIXlCQEgqVu3WU1bU6SEr4A3Sy4xoF0S5FiDQ7oCQzRTl2DVfZq503EwsuKBsBwar
 gKs=
X-SBRS: 5.1
X-MesageID: 46036397
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,273,1616472000"; 
   d="scan'208";a="46036397"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W9Mz9KYKkZN/54SsPA7c5vqFYsGKC3HGE3HKwy56sBrSHsHmWb7jh3FQDlcGLN3Fcg/OnZ6j+Xo4lIJZ9QTE2p4ObNPs3en2XGUgCMpZeXTA5Zn/A85CYNJ++SaqTcefN45CWD4O75ykO32mOiFYwuYKsfO41rbeNNmXOEvzDmXMYFijqmjwYuU1ZPyRnrUhq2VBmMUOKgC7JLxzXDFlnmM2RwbJuaBYKpwNlz2knaCrp5gSzEh0MJBARivFTbel85AJxcznLliIM/eZEe6R97/n+ZER7/iZI17RP0KVAHl5SP/XXKbq0lYH9mJBr2fCccG0f8PSK4XLzQJjj8HuiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N3VDvPhjn3tj27Bj+t9BHbFR5yJS7KEvtm/3JBs7nfA=;
 b=Jm+lj/2E9Z8OmpNKNVbo/FZrAwAwbGm9ngVTdty0AdsZwff8Bm6Xp7HME7/V08+GnuHgLM8Kwi6QNLUqxJrXOneHZfwQe8NVFkGA3ssBzytcyaEhQnEgBsLb2dt7K2C1Y8cpp/8dfBk7SIB1ckEg6uySEx+vDDToc+uZSbG1q+wgp4AbZroBwpDIJFyD0lCbWiD0qNiw/FlxhdnQI7s60Dm60KOvoY/63iG5QBEQ6b/8l4GnxOnatG/CRZW2TMjJtwDrr4imXZdIyEe2ldCu+CCpDy+j6SZU19FLIHu2bKTsSmUWrDgNd6lyBqy7y4ESYC20brVpSJX81iY/RXFlHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N3VDvPhjn3tj27Bj+t9BHbFR5yJS7KEvtm/3JBs7nfA=;
 b=xBX76cJwQr9eNA/b9eBk+N2VHF2Fb0EC0+4ed6wKiUuZoaf5hwJyeuvbXqxtGPZ3Gyyufv1fqbyB6TD/nQRhmqyUkxWymMgS7adgKy6fapwjLviHIVpEIPqwGUBO/OEe1BOyuXqELWDeZ9raPovci5E/tdTXeARWLYm8HTG9nXY=
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in some
 cases")
To: <xen-devel@lists.xen.org>, <boris.ostrovsky@oracle.com>,
	<roger.pau@citrix.com>, <stephen.s.brennan@oracle.com>
Message-ID: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
Date: Mon, 14 Jun 2021 12:15:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0303.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::27) To BN3PR03MB2401.namprd03.prod.outlook.com
 (2a01:111:e400:7bbc::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9e08e659-2ee3-41bc-f134-08d92f25c0f2
X-MS-TrafficTypeDiagnostic: BN6PR03MB3060:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BN6PR03MB3060EF5B28FD8D2F7A4135A0E4319@BN6PR03MB3060.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e08e659-2ee3-41bc-f134-08d92f25c0f2
X-MS-Exchange-CrossTenant-AuthSource: BN3PR03MB2401.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 11:15:44.8867
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MSBv/F4fGvJEMzzJ45CWqdETzpKYGk0y4Km4Dz0N6DgvsFmV5d4S2ivyxLAdHsed+NvDbCmnOh7SCuAinibfQcWIHV5ycfVkj9Lovo5zUAQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR03MB3060
X-OriginatorOrg: citrix.com

Hi, Boris, Stephen, Roger,

We have stress tested recent changes on staging-4.13, which include a
backport of the patch in the subject. Since the backport is identical to
the one on the master branch and all of its prerequisites are in place,
we have no reason to believe master is unaffected.

Here is what we got by running heavy stress testing including multiple
repeated VM lifecycle operations with storage and network load:


Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
CPU:    17
RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
cr3: 00000013c1a32000   cr2: 0000000000000000
fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
  0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
Xen stack trace from rsp=ffff83303fff7cf8:
    ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
    000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
    ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
    ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
    ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
    ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
    0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
    ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
    ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
    ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
    ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
    ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
    ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
    ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
    ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
    ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
    ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
    000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
    0000000000000005 000000000003d91d 0000000000000000 0000000000000000
    00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
Xen call trace:
    [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
    [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
    [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
    [<ffff82d08027df19>] F context_switch+0xf9/0xee0
    [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
    [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
    [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
    [<ffff82d08024324a>] F do_softirq+0x13/0x15
    [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30

****************************************
Panic on CPU 17:
Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
****************************************

That looks like a race opened by this change: we didn't see the problem
before while running with all of the prerequisite patches applied.

Could you please analyse this assertion failure?

Igor


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 11:53:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 11:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141351.261122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsl9a-00050p-Eb; Mon, 14 Jun 2021 11:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141351.261122; Mon, 14 Jun 2021 11:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsl9a-00050i-BF; Mon, 14 Jun 2021 11:53:18 +0000
Received: by outflank-mailman (input) for mailman id 141351;
 Mon, 14 Jun 2021 11:53:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsl9Z-00050c-27
 for xen-devel@lists.xen.org; Mon, 14 Jun 2021 11:53:17 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57a63d20-8878-4b9c-a310-470d5b72ebc8;
 Mon, 14 Jun 2021 11:53:15 +0000 (UTC)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2050.outbound.protection.outlook.com [104.47.6.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-2-3z3uM3oyPU6pw9Tf33767A-1;
 Mon, 14 Jun 2021 13:53:13 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7150.eurprd04.prod.outlook.com (2603:10a6:800:12a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Mon, 14 Jun
 2021 11:53:11 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 11:53:11 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0054.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:48::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Mon, 14 Jun 2021 11:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57a63d20-8878-4b9c-a310-470d5b72ebc8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623671594;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ypFJ2bjnskxJSpRs9AnLPofEHdB7DVcudEEVG0dBTZk=;
	b=YTTkSHvTRVSx6Cum49oS/HLXA/SykF6bvH0i8C5cTp7aOlkx6J0vxC4+klVRTR4kVJpwEF
	1kpZu9zKFDFUPIQQnpW1oWLR4rm0VH3GnfJRGKoDdN/MKJUxLNGb6z1xq8tV7RsAkRm50m
	0VebtbHBWcqCqacrYG1pKsbPyErhbrg=
X-MC-Unique: 3z3uM3oyPU6pw9Tf33767A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ceHSVXw9lf9oc0BTSdBuCXknBv+Kt2M2FLATRJTOgRvwjDfu/K0ohpWiw+M6AH1QzwgpPVgy79Ok4Ip7htKXrfjQy4qL4ctsQSxX2PdeviLkJNBu76KY0Ez6q1k4ujsKVNqXw2TR3y8i4eJsV82wdhi+27I+VdHHbaOh8nCKSx4OQO3hEya5VMlECJkNUIksIbC0yfFbH4Q4k4qA6OLXX5i2Zeixpa7dvYMAdbzwpNiQ7ZidqgVgumq2Crq1uzOBSQ/m6mSFptTeXRjAoAc+gdbw/dDvZjc2yVsohTzIYyaxcrOAaEq2RlNU0cvMb9FLqa9+R5t2AmIHPwHgLMLRXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ypFJ2bjnskxJSpRs9AnLPofEHdB7DVcudEEVG0dBTZk=;
 b=Sg3a2gyqmH0LDw+/oLmgsnxrzSQMWx92bWy6EFu53KdPmANq7vqdAHVc57IzIV4GF+U0k9HODG9S/oxm/1ksxYeWF0x+Ze7lNqPoB4VpWJSwQXmqB37EjF7p2/SopVlh+lFaFogki0K50y9I+GVZwAcIKd01SisngHZPG4hpCFtlM2RaqxijERb6DUGr6YaFbROtSIvViTBdg/EJOJYZY5I9fc/AUZUwv5+o54amB1AKK114O59uE45147OVjoZ+481yjtd0nDACRec+wH5oEpe1CG4AbvWhBMDb2gviZ81eOrsA7f+zHcNs8toQG14oydRl5uJ4X1Q0OBg6P4x0LQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: Igor Druzhinin <igor.druzhinin@citrix.com>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
Cc: xen-devel@lists.xen.org, boris.ostrovsky@oracle.com,
 stephen.s.brennan@oracle.com, roger.pau@citrix.com
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
Date: Mon, 14 Jun 2021 13:53:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.06.2021 13:15, Igor Druzhinin wrote:
> Hi, Boris, Stephen, Roger,
> 
> We have stress-tested recent changes on staging-4.13, which include a
> backport of the subject patch. Since the backport is identical to the
> master branch and all of the pre-reqs are in place, we have no reason
> to believe the issue is not the same on master.
> 
> Here is what we got by running heavy stress testing including multiple
> repeated VM lifecycle operations with storage and network load:
> 
> 
> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
> CPU:    17
> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
> cr3: 00000013c1a32000   cr2: 0000000000000000
> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
> Xen stack trace from rsp=ffff83303fff7cf8:
>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
> Xen call trace:
>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
> 
> ****************************************
> Panic on CPU 17:
> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> ****************************************

This suggests a timer was found on the list without ever having been
initialized, and I've spotted a case where that could indeed now happen.
Could you give the patch below a try?

Jan

x86/vpt: fully init timers before putting onto list

With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
iterating the list and acting on the timers of the list entries will no
longer be kept from entering their loops by create_periodic_time()'s
holding of that lock. Therefore at least init_timer() needs calling
ahead of list insertion, but keep this and set_timer() together.

Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- unstable.orig/xen/arch/x86/hvm/vpt.c
+++ unstable/xen/arch/x86/hvm/vpt.c
@@ -554,14 +554,14 @@ void create_periodic_time(
     pt->cb = cb;
     pt->priv = data;
 
+    init_timer(&pt->timer, pt_timer_fn, pt, v->processor);
+    set_timer(&pt->timer, pt->scheduled);
+
     pt_vcpu_lock(v);
     pt->on_list = 1;
     list_add(&pt->list, &v->arch.hvm.tm_list);
     pt_vcpu_unlock(v);
 
-    init_timer(&pt->timer, pt_timer_fn, pt, v->processor);
-    set_timer(&pt->timer, pt->scheduled);
-
     write_unlock(&v->domain->arch.hvm.pl_time->pt_migrate);
 }
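The ordering requirement the patch establishes can be sketched as follows (a simplified illustration with hypothetical names — `fake_timer`, `walker_would_assert`, etc. are not the real Xen types or functions): once pt_vcpu_lock() no longer excludes list walkers, an object must be fully initialized before it becomes reachable from the shared list.

```c
/* Minimal sketch (hypothetical names, not the real Xen code) of the
 * init-before-publish ordering the patch establishes: a timer must leave
 * the "never initialized" state before it appears on a list that other
 * CPUs may walk concurrently. */

enum timer_status {
    TIMER_STATUS_invalid = 0,   /* memory not yet passed to init_timer() */
    TIMER_STATUS_inactive       /* initialized, not armed */
};

struct fake_timer {
    enum timer_status status;
    int on_list;                /* "published": reachable by list walkers */
};

/* stand-in for init_timer(): leaves the invalid state */
static void fake_init_timer(struct fake_timer *t)
{
    t->status = TIMER_STATUS_inactive;
}

/* whether a concurrent walker (stop_timer() via pt_save_timer()) would
 * trip the ASSERT(timer->status >= TIMER_STATUS_inactive) */
static int walker_would_assert(const struct fake_timer *t)
{
    return t->on_list && t->status < TIMER_STATUS_inactive;
}

/* fixed ordering: initialize first, only then publish on the list */
static void create_fixed(struct fake_timer *t)
{
    fake_init_timer(t);
    t->on_list = 1;
}

/* broken ordering: published while still TIMER_STATUS_invalid; a walker
 * running in this window sees an uninitialized timer */
static void create_broken_publish_first(struct fake_timer *t)
{
    t->on_list = 1;
    /* ... init_timer() would only happen after this window ... */
}
```

In the real code the window is between list_add() and init_timer(); moving init_timer()/set_timer() ahead of the list insertion closes it.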
 



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 11:58:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 11:58:52 +0000
Subject: Re: [xen-unstable test] 162771: regressions - FAIL
To: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-162771-mainreport@xen.org>
 <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7dd0b271-7904-0316-2b4f-00d5eaa78bf4@suse.com>
Date: Mon, 14 Jun 2021 13:58:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.06.2021 08:41, Juergen Gross wrote:
> On 14.06.21 04:21, osstest service owner wrote:
>> flight 162771 xen-unstable real [real]
>> flight 162783 xen-unstable real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/162771/
>> http://logs.test-lab.xenproject.org/osstest/logs/162783/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>   test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>   test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
>>   test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
> 
> Hmm, this is rather unfortunate.
> 
> Those last 2 tests failed due to commit 7bd8989ab77b6ade3b, but just
> reverting that patch doesn't seem right to me either.
> 
> The Linux kernel has a bug here: it will initially set max_pfn in the
> shared_info page to the size of the p2m_list (so my reasoning for above
> patch was wrong in this case), but when growing the p2m_list (e.g. due
> to ballooning or grant mapping), it will store a real pfn number in
> max_pfn. But even this pfn might be wrong, as only the pfn leading to
> allocation of a new p2m page will be stored in max_pfn, any higher new
> pfn having its p2m entry in the new p2m page won't result in a new
> max_pfn entry.
> 
> As a result I think the only sane handling would be to assume the
> max_pfn value read from the shared_info page is really a pfn.

This would be contrary to the public interface header having

    /*
     * Number of valid entries in the p2m table(s) anchored at
     * pfn_to_mfn_frame_list_list and/or p2m_vaddr.
     */
    unsigned long max_pfn;

IOW the name containing "max" is misleading (it should be "num" or the
like), but there's no room, imo, to change this.

> This
> value should be adjusted to specify the last pfn of the related p2m
> page, and the resulting last p2m page should be tolerated to not be
> valid.
> 
> Another variant would be to just revert above commit and modify the
> semantics of max_pfn in the shared_info page to really mean max_pfn+1.

But that's what it is already, according to the comment. Are you
suggesting there was code prior to the change you've quoted that
already violated this (in Xen or the tool stack, that is, not
the issue you suggest has been present in Linux)?

Jan

> This would result in possible migration failures of ballooned Linux
> systems as today.
> 
> Additionally I'll fix the Linux kernel, of course.
> 
> Any thoughts?
> 
> 
> Juergen
> 
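
The "adjust to specify the last pfn of the related p2m page" handling described above can be sketched as follows (illustrative only; the constants and the helper name are assumptions — 4k pages holding 512 eight-byte p2m entries — not the actual Xen or Linux code):

```c
/* Assumed layout for illustration: 4k p2m pages, 8 bytes per entry. */
#define P2M_ENTRIES_PER_PAGE (4096 / 8)

/* If the value read from shared_info's max_pfn is really a pfn rather
 * than an entry count, extend it to cover the whole p2m page that pfn's
 * entry lives in, and tolerate the resulting last p2m page not being
 * fully valid. */
static unsigned long last_pfn_of_p2m_page(unsigned long pfn)
{
    return (pfn / P2M_ENTRIES_PER_PAGE + 1) * P2M_ENTRIES_PER_PAGE - 1;
}
```

E.g. a guest that stored pfn 513 there would then be treated as covering pfns 0..1023, i.e. up to the end of the second p2m page.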



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:02:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:02:31 +0000
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
 <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
 <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
 <7a57d3df-94d0-5ee6-1ceb-bf4eddec1392@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <666fdb88-94c0-be05-f4d5-d755b0326dad@suse.com>
Date: Mon, 14 Jun 2021 14:02:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <7a57d3df-94d0-5ee6-1ceb-bf4eddec1392@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
 =?utf-8?B?cUh5R1BDd0xkc0V2ajN2L2RYNDN1WGNDdm9ieTRtZTJ3QzJlWUZNZHBScERL?=
 =?utf-8?Q?86XZw+CBEaqYLzB+fsdIt9aUARpiFJcwbTl6Ixh?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 96c13704-c640-4ee7-aa4d-08d92f2c45c3
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 12:02:24.9910
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KfpQzL4Hq/G/Wlobm6bwi1pomHkDz946Z9NagdJjx2MPCTHfkPzD0V2IQl9XZ+Wa0afcsRfMpp8k+W7stezMsA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7040

On 14.06.2021 12:54, Julien Grall wrote:
> On 14/06/2021 12:40, Jan Beulich wrote:
>> On 14.06.2021 11:57, Julien Grall wrote:
>>> On 11/06/2021 11:19, Jan Beulich wrote:
>>>> This confuses disassemblers, at the very least. When this data still
>>>> lived in .init.*, this probably didn't matter much, albeit the
>>>> "#execinstr" would have been suspicious to me already then. But at the
>>>> latest with their move to .rodata these attributes should have been
>>>> dropped.
>>>
>>> I don't quite understand why this wasn't really a problem for .init.data
>>> but it is a problem for .rodata. Can you expand your thought?
>>
>> I've said "probably" for a reason, and my thinking here goes along
>> the lines of what I've said on the other patch regarding .init.*:
>> There's perhaps not overly much reason to be picky about the
>> attributes of .init.*, and at least on x86 there is also a case
>> (the EFI binary) where we fold all .init.* into just .init anyway.
> 
> Makes sense. Thanks for the explanation.
> 
>>
>> The alternative to the present description that I see would be to
>> go with just the 1st sentence. But I would be afraid in such a
>> case that you would come back and tell me this is too little of a
>> description.
> 
> How about:
> 
> "xen/arm: .proc.info doesn't need to be executable
> 
> The section .proc.info lives in .rodata as it doesn't contain any
> executable code. However, the section is still marked as executable;
> as a consequence, .rodata will also be marked executable.
> 
> Xen doesn't use the ELF permissions to decide the page-table mapping
> permissions. However, the executable flag will confuse disassemblers.
> 
> #execinstr is now removed from all the pushsection directives dealing
> with .proc.info".
> 
> I can update the commit message on commit.

I'm fine with the new commit message, but I'd prefer the title to
remain as is, as that aspect is what triggered me to make this
change.

Jan
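As a side note, the flag distinction under discussion can be sketched in a small
standalone example (the section name, symbol, and value below are made up for
illustration, and the x86 `@progbits` spelling is used instead of Arm's
`%progbits` so it builds on a common host; Xen's real .proc.info entries live
in the Arm assembly sources):

```c
/* Illustrative sketch only, not Xen code: data emitted via .pushsection
 * with flags "a" (allocatable) but without "x"/#execinstr, so the output
 * section - and anything it is folded into, such as .rodata - is not
 * marked executable. */
__asm__(
    ".pushsection .rodata.proc_info, \"a\", @progbits\n"
    ".globl proc_info_tag\n"
    "proc_info_tag: .quad 0x410fd030\n"
    ".popsection\n");

extern const unsigned long proc_info_tag;

unsigned long read_proc_info_tag(void)
{
    return proc_info_tag;
}
```

With "ax" (or #execinstr) in the flag string, tools like objdump would try to
disassemble the data as instructions, which is exactly the confusion the patch
avoids.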



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:06:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:06:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141377.261158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslMB-0007uj-Dh; Mon, 14 Jun 2021 12:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141377.261158; Mon, 14 Jun 2021 12:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslMB-0007uc-Aj; Mon, 14 Jun 2021 12:06:19 +0000
Received: by outflank-mailman (input) for mailman id 141377;
 Mon, 14 Jun 2021 12:06:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9Pv4=LI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lslM9-0007uW-I5
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 12:06:17 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49bdc282-99fa-45b9-9bca-b08e4ca42933;
 Mon, 14 Jun 2021 12:06:16 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 97D881FD33;
 Mon, 14 Jun 2021 12:06:15 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 6969B118DD;
 Mon, 14 Jun 2021 12:06:15 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 9tYiGDdGx2CiKAAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 14 Jun 2021 12:06:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49bdc282-99fa-45b9-9bca-b08e4ca42933
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623672375; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bWA9byQYGvx3i/KMZ+TS2SFSHdemHYmaBgI1QZEX+u4=;
	b=gcoIypOD6iWK7f5RTn0IhinpGco+zN0w9CcVb5zaxj2+OBNEhH2PlNZEogpjYRtWWSmATv
	UZdNLVmZnE6JI6LWC68SRq0B16UQnCP0LeQEInblsI3F+OIsg+MbEm553+6ePc2NgNHCco
	eBwBs4H5B2ON5u9Ehyto7EBkU824bXU=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623672375; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bWA9byQYGvx3i/KMZ+TS2SFSHdemHYmaBgI1QZEX+u4=;
	b=gcoIypOD6iWK7f5RTn0IhinpGco+zN0w9CcVb5zaxj2+OBNEhH2PlNZEogpjYRtWWSmATv
	UZdNLVmZnE6JI6LWC68SRq0B16UQnCP0LeQEInblsI3F+OIsg+MbEm553+6ePc2NgNHCco
	eBwBs4H5B2ON5u9Ehyto7EBkU824bXU=
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-162771-mainreport@xen.org>
 <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
 <7dd0b271-7904-0316-2b4f-00d5eaa78bf4@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [xen-unstable test] 162771: regressions - FAIL
Message-ID: <f3eca526-0669-3c2a-6d3b-40e69deef288@suse.com>
Date: Mon, 14 Jun 2021 14:06:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <7dd0b271-7904-0316-2b4f-00d5eaa78bf4@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xHLWCXlJnTyOvCdbeoZrOQgQIz5krIdsS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xHLWCXlJnTyOvCdbeoZrOQgQIz5krIdsS
Content-Type: multipart/mixed; boundary="b44jQHhricizJZePUmUuSOCfJ5494ZNSi";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <f3eca526-0669-3c2a-6d3b-40e69deef288@suse.com>
Subject: Re: [xen-unstable test] 162771: regressions - FAIL
References: <osstest-162771-mainreport@xen.org>
 <78aa2d24-3e2a-01cd-4596-e2796b4432a7@suse.com>
 <7dd0b271-7904-0316-2b4f-00d5eaa78bf4@suse.com>
In-Reply-To: <7dd0b271-7904-0316-2b4f-00d5eaa78bf4@suse.com>

--b44jQHhricizJZePUmUuSOCfJ5494ZNSi
Content-Type: multipart/mixed;
 boundary="------------388BBEC2B8EF8B7B53A46B53"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------388BBEC2B8EF8B7B53A46B53
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.06.21 13:58, Jan Beulich wrote:
> On 14.06.2021 08:41, Juergen Gross wrote:
>> On 14.06.21 04:21, osstest service owner wrote:
>>> flight 162771 xen-unstable real [real]
>>> flight 162783 xen-unstable real-retest [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/162771/
>>> http://logs.test-lab.xenproject.org/osstest/logs/162783/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>    test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>    test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>    test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
>>>    test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533
>>
>> Hmm, this is rather unfortunate.
>>
>> Those last 2 tests failed due to commit 7bd8989ab77b6ade3b, but just
>> reverting that patch doesn't seem right to me either.
>>
>> The Linux kernel has a bug here: it will initially set max_pfn in the
>> shared_info page to the size of the p2m_list (so my reasoning for above
>> patch was wrong in this case), but when growing the p2m_list (e.g. due
>> to ballooning or grant mapping), it will store a real pfn number in
>> max_pfn. But even this pfn might be wrong, as only the pfn leading to
>> allocation of a new p2m page will be stored in max_pfn, any higher new
>> pfn having its p2m entry in the new p2m page won't result in a new
>> max_pfn entry.
>>
>> As a result I think the only sane handling would be to assume the
>> max_pfn value read from the shared_info page is really a pfn.
>
> This would be contrary to the public interface header having
>
>      /*
>       * Number of valid entries in the p2m table(s) anchored at
>       * pfn_to_mfn_frame_list_list and/or p2m_vaddr.
>       */
>      unsigned long max_pfn;
>
> IOW the name containing "max" is misleading (should be "num" or
> alike), but there's no room imo to change this.

Oh, how nice! :-(

I let myself be fooled by the correctly named max_pfn field in
libxenguest, together with the wrong usage in the kernel.

>
>> This
>> value should be adjusted to specify the last pfn of the related p2m
>> page, and the resulting last p2m page should be tolerated to not be
>> valid.
>>
>> Another variant would be to just revert above commit and modify the
>> semantics of max_pfn in the shared_info page to really mean max_pfn+1.
>
> But that's what it is already, according to the comment. Are you
> suggesting there was code prior to the change you've quoted that
> already violated this (in Xen or the tool stack, that is, not
> the issue you suggest has been present in Linux)?

Reading the comment would have helped.

So a plain revert is the way to go.
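The agreed-upon semantics can be restated with a tiny illustrative sketch
(hypothetical helper names, not the actual libxenguest code): the shared_info
field called "max_pfn" is really a count of valid p2m entries, so the highest
valid pfn is that count minus one.

```c
/* Hypothetical helpers mirroring the semantics above; not Xen code.
 * shared_info's arch.max_pfn is a count of valid p2m entries. */
static unsigned long shinfo_count_to_max_pfn(unsigned long num_pfns)
{
    /* Highest valid pfn = number of entries - 1. */
    return num_pfns - 1;
}

static int pfn_is_covered(unsigned long pfn, unsigned long num_pfns)
{
    /* A pfn has a p2m entry iff it is below the entry count. */
    return pfn < num_pfns;
}
```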


Juergen

--------------388BBEC2B8EF8B7B53A46B53
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------388BBEC2B8EF8B7B53A46B53--

--b44jQHhricizJZePUmUuSOCfJ5494ZNSi--

--xHLWCXlJnTyOvCdbeoZrOQgQIz5krIdsS
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDHRjYFAwAAAAAACgkQsN6d1ii/Ey/I
TAf+L+99a6/YVzz0JI4PXMy+nY8af8g5aJza54IZeAJN/xKGYZ7z/VREPSAyJ75xLmi9RDrsa3Du
0ave4kd/KmHxgYhewHcybD2I1fkLJEJ+NfcIkAGgwYwNiQu5cmnychPNSkWCpK096tW0PqWj+GSy
5iP5Dq4hMJIscFI2RE3fGxR1ZZNtdq3B+4cJi4Vj+X8JGQJvQjqF/5GwCn6cQwCngTpaNR9HE+8c
ECz7q1k9bIjS0we//q7z1yR6BNLbQTMaMo9CfquHOKuM6StotwY5GZolZm8nBDExhW56F1M5oEEc
yIFO9MlV7RsmrEoXNpJzLPWGuPaesN5t5F0zkqdW3g==
=Mybk
-----END PGP SIGNATURE-----

--xHLWCXlJnTyOvCdbeoZrOQgQIz5krIdsS--


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:15:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141390.261171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslVF-00013F-OB; Mon, 14 Jun 2021 12:15:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141390.261171; Mon, 14 Jun 2021 12:15:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslVF-000138-LH; Mon, 14 Jun 2021 12:15:41 +0000
Received: by outflank-mailman (input) for mailman id 141390;
 Mon, 14 Jun 2021 12:15:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9Pv4=LI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lslVE-000132-MG
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 12:15:40 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd206d09-1a52-4b1f-a65a-d6a7e8ae601f;
 Mon, 14 Jun 2021 12:15:39 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C0C7B2196B;
 Mon, 14 Jun 2021 12:15:38 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8E8D9118DD;
 Mon, 14 Jun 2021 12:15:38 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 8DmmIWpIx2CULgAALh3uQQ
 (envelope-from <jgross@suse.com>); Mon, 14 Jun 2021 12:15:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd206d09-1a52-4b1f-a65a-d6a7e8ae601f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623672938; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=elzLLQaqzlfR2T6/kM01JMbhSI9GLF0Dqtml7Q+JSkU=;
	b=ZPvvGamiD8C6r2JotsvBU2enNZy+9TTIYPHpP+ShmHu6ZrKP163mfavFF1osDd3/gyl8yL
	AbiRCqXheHaKZ9cS3wQYQKtvOH2R9PPnCCiMXcXxeKdUEOruz5GJiMIv9dLwKppWXO+r7g
	XQg/v6NogF+2wCrDL1HisopYxAI6Dv8=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623672938; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=elzLLQaqzlfR2T6/kM01JMbhSI9GLF0Dqtml7Q+JSkU=;
	b=ZPvvGamiD8C6r2JotsvBU2enNZy+9TTIYPHpP+ShmHu6ZrKP163mfavFF1osDd3/gyl8yL
	AbiRCqXheHaKZ9cS3wQYQKtvOH2R9PPnCCiMXcXxeKdUEOruz5GJiMIv9dLwKppWXO+r7g
	XQg/v6NogF+2wCrDL1HisopYxAI6Dv8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs/guest: revert fix max_pfn setting in map_p2m()
Date: Mon, 14 Jun 2021 14:15:36 +0200
Message-Id: <20210614121536.5288-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The reasoning for commit 7bd8989ab77b6a ("tools/libs/guest: fix max_pfn
setting in map_p2m()") was wrong.

The max_pfn field in shared_info is misnamed: it has the semantics of
num_pfns. This is hidden at least partially in Linux, as the kernel
(wrongly) treats it like the highest used pfn in some places.

So revert the above commit.

Fixes: 7bd8989ab77b6a ("tools/libs/guest: fix max_pfn setting in map_p2m()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/guest/xg_sr_save_x86_pv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index dae7f2817f..4964f1f7b8 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -468,7 +468,7 @@ static int map_p2m(struct xc_sr_context *ctx)
 
     ctx->x86.pv.p2m_generation = ~0ULL;
     ctx->x86.pv.max_pfn = GET_FIELD(ctx->x86.pv.shinfo, arch.max_pfn,
-                                    ctx->x86.pv.width);
+                                    ctx->x86.pv.width) - 1;
     p2m_cr3 = GET_FIELD(ctx->x86.pv.shinfo, arch.p2m_cr3, ctx->x86.pv.width);
 
     return p2m_cr3 ? map_p2m_list(ctx, p2m_cr3) : map_p2m_tree(ctx);
-- 
2.26.2
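For context, the GET_FIELD(..., width) accessor in the hunk above reads a guest
field at the guest's word size (4 bytes for 32-bit PV guests, 8 for 64-bit). A
hypothetical standalone analogue, assuming a little-endian host, might look
like this (this is not the real Xen macro):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for a width-dependent guest field read; the real
 * GET_FIELD macro in libxenguest operates on the shared_info layouts
 * instead of a raw pointer. */
static uint64_t get_guest_ulong(const void *field, unsigned int width)
{
    if (width == 8) {
        uint64_t v;
        memcpy(&v, field, sizeof(v));
        return v;
    } else {
        uint32_t v; /* 32-bit guest: the field occupies only 4 bytes */
        memcpy(&v, field, sizeof(v));
        return v;
    }
}
```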



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:17:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:17:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141396.261182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslXA-0001h8-4p; Mon, 14 Jun 2021 12:17:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141396.261182; Mon, 14 Jun 2021 12:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lslXA-0001h1-1x; Mon, 14 Jun 2021 12:17:40 +0000
Received: by outflank-mailman (input) for mailman id 141396;
 Mon, 14 Jun 2021 12:17:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lslX8-0001gt-Ta
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 12:17:38 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63222186-4928-4643-a717-76b8af31db60;
 Mon, 14 Jun 2021 12:17:37 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2055.outbound.protection.outlook.com [104.47.12.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-2-lLpuAWNhOImYw_o_AwnmmA-1;
 Mon, 14 Jun 2021 14:17:35 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5743.eurprd04.prod.outlook.com (2603:10a6:803:e0::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Mon, 14 Jun
 2021 12:17:34 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 12:17:34 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0043.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4a::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Mon, 14 Jun 2021 12:17:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63222186-4928-4643-a717-76b8af31db60
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623673056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X118asixY4udkMf49i5xGtdu346NNs10cwM67GSykGs=;
	b=RBXCPmIBOK53NtVwjf4n9kNR/jQl9axBRBAXOQOeLGW/U99OfLVu0IMEHH4s1HKB8Zfa3w
	nFhrU62ILm5GSyc1YOV4imXEh2ek9/ME1q9f85tzygj3RkXzppINACJrYVhJBplHyYCWiH
	ThcdZdTahPeNXQcfm2g11+r1tnJVVG8=
X-MC-Unique: lLpuAWNhOImYw_o_AwnmmA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Me0lQkft+IPM+pbg+BG7dkyMrWHdao8AzQ/B2bJsBJkNKfIq63ahpwCGuiTRp/E2f1lWideWr7KOsUH2D46YwX0r4kVv10m/BHK8baRVOcUZMPnX24w3c7g2cP2ALOPnc/W3q2qi6v+zMBQeByrJ6jcTpTeMBOwfrhLvbX/6aqKf7OJUeFa4Fe5mnj/5V8FmzAWM9aBjgD+m1oZLjSiiP36rWk02KZqA1XhLDjuAHhLB6BM0/9b018Nx4myGkd62RHjXgWaOiFLjhKOyAraf4q7WLlFCFfEePjh8R49Pg/IbvH5l7YaIwHNZTRhb3yZ+RhWZyaCfKDkbJZqEd6cKBA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X118asixY4udkMf49i5xGtdu346NNs10cwM67GSykGs=;
 b=YJ4das0GbZMo0wudUpXdy6+aJ741I1Z+y5f9QgngZa2a+cwjcfRBLF869LIJNZYu2c1oEJKozBdPtpKO9qHhCJVIBh8fzriPVa7nuoY83pJ5Dm75eD3EQPcnIIOKCiHdlV1Bh7ngU4E+R1Y/NrNCio5QiIv1yslDQTAtPDG78utz82zX9AVbzbqylDzRzFTIrMjIR720deMhgeVQNj9c5wfxva/jKleWM9uh7LSlmFNq5OMrR27Uf7Kns13Fe6ndjEvESiCks6fL31BvMUWX1xyqLJrIJt6b3/tWD9dsSet22mO8u+Vh7z/h0n17UwXPX86ysZnpQgmyNNborK1ZaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] Arm: avoid .init.data to be marked as executable
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
 <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
 <8c5ec03f-5ea1-99f8-a521-3552d0015ac4@suse.com>
 <1b44cb6d-dda6-5297-893b-a53fe7d123d9@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <919ddc45-c6a4-20b3-e1ab-7a16fe1c48d2@suse.com>
Date: Mon, 14 Jun 2021 14:17:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <1b44cb6d-dda6-5297-893b-a53fe7d123d9@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0043.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b5fd49f6-a77a-4ff7-e508-08d92f2e6414
X-MS-TrafficTypeDiagnostic: VI1PR04MB5743:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB57435F746127F05D64107ACCB3319@VI1PR04MB5743.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 14.06.2021 12:32, Julien Grall wrote:
> 
> 
> On 14/06/2021 12:02, Jan Beulich wrote:
>> On 14.06.2021 11:41, Julien Grall wrote:
>>> On 11/06/2021 11:39, Jan Beulich wrote:
>>>> This confuses disassemblers, at the very least. Move
>>>> .altinstr_replacement to .init.text,
>>>
>>> The alternative code was borrowed from Linux. The code has since changed
>>> there to cater for very large kernels. They used to keep
>>> .altinstr_replacement and .altinstructions close to each other (albeit
>>> they were both in .init.text).
>>>
>>> I am not entirely sure why, but I am a bit worried about separating
>>> them. What sort of test did you do?
>>
>> Well, just build tests, on the assumption that relocation overflows
>> would be reported by the linker if the sections ended up too far
>> apart.
> 
> Hmmm, fair point. They should also not be farther away than the original 
> instruction, so they ought to be fine.
> 
>>
>>>> dropping the redundant ALIGN().
>>>>
>>>> Also, to have .altinstr_replacement have consistent attributes in the
>>>> object files, add "x" to the one instance where it was missing.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> I'm uncertain whether having .altinstr_replacement inside or outside the
>>>> [_sinittext,_einittext) region is better; I simply followed what we have
>>>> on the x86 side right now.
>>>
>>> This means the altinstructions will be marked executable in the
>>> page-table. They technically should not be executable, so I would move
>>> them outside _einittext and make sure the section is aligned to a PAGE_SIZE.
>>
>> Hmm, are you saying you bother getting attributes right for .init.*
>> in the page tables? I ask because we don't on x86, and because it
>> would seem wasteful to me to pad to PAGE_SIZE just for this. But
>> you're the maintainer, i.e. I'm merely double checking ...
> 
> So this is defense in depth. Your assumption is that .init.text is going 
> to disappear after boot. However, if there is a bug that left .init.text 
> present, then this may add more attack surface. So I think it is good 
> practice to keep the permissions correct.
> 
> However... looking at the alternative code again, there is another reason 
> to move this section out of the range _sinittext - _einittext. The 
> function branch_insn_requires_update() will forbid branch targets inside 
> other alternative instructions.
> 
> It first checks that the target is part of active text. With this change, 
> that check will return true, because the alternative instruction 
> replacement will be between _sinittext and _einittext.
> 
> So .altinstr_replacement must be outside of the region 
> [_sinittext, _einittext).

I see. But I'm not sure about the defense-in-depth aspect: By putting
it outside [_sinittext,_einittext) it'll get mapped r/w, while I think
you were implying that it would become r/o. Not even .init.rodata gets
mapped r/o.

As a result I'm not convinced yet that you really want me to make the
change. Otoh your arguments will make me put together an x86-side
change moving this section past _einittext.

Jan
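For reference, the x86-side change discussed above would amount to moving the input-section matcher past the _einittext assignment in the linker script. A hypothetical fragment, loosely modeled on the style of Xen's xen.lds.S (the names and placement here are illustrative, not the actual patch):

```
.init.text : {
        _sinittext = .;
        *(.init.text)
        _einittext = .;
        /* replacement insns are code, but no longer inside
         * [_sinittext, _einittext) */
        *(.altinstr_replacement)
} :text
```

Keeping the replacements in the same output section, just after _einittext, leaves them adjacent to the code they patch (so cross-section relocations stay short) while taking them out of the range that active-text checks consider.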



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:45:54 2021
Subject: Re: [PATCH 1/5] x86/platform: Improve MSR permission handling for
 XENPF_resource_op
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3d2c7c2d-be0d-b65f-fce4-402ca4e95a64@suse.com>
Date: Mon, 14 Jun 2021 14:45:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210611163627.4878-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.06.2021 18:36, Andrew Cooper wrote:
> The logic to disallow writes to the TSC is out-of-place, and should be in
> check_resource_access() rather than in resource_access().
> 
> Split the existing allow_access_msr() into two - msr_{read,write}_allowed() -
> and move all permissions checks here.
> 
> Furthermore, guard access to MSR_IA32_CMT_{EVTSEL,CTR} to prohibit their use
> on hardware which is lacking the QoS Monitoring feature.  Introduce
> cpu_has_pqe to help with the logic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:46:43 2021
Subject: Re: [PATCH 2/5] x86/platform: Permit reading the TSX control MSRs via
 XENPF_resource_op
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <44de7fb0-ee20-0d54-5417-9964593b9e0d@suse.com>
Date: Mon, 14 Jun 2021 14:46:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210611163627.4878-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.06.2021 18:36, Andrew Cooper wrote:
> We are going to want this to write some tests with.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:47:39 2021
References: <20210611095528.9230-1-roman_skakun@epam.com> <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
In-Reply-To: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
From: Roman Skakun <rm.skakun@gmail.com>
Date: Mon, 14 Jun 2021 15:47:25 +0300
Message-ID: <CADu_u-MqALJkG8RJHrr65vC_sHu-UyvCGwwUfaBong0eir5+hQ@mail.gmail.com>
Subject: Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, 
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Roman Skakun <roman_skakun@epam.com>, 
	Andrii Anisov <andrii_anisov@epam.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey, Boris!
Thanks for the review.

I have an additional question about the current implementation that bothers me.
I think we can have cases where the mapped memory is not
physically contiguous.
In order to handle this correctly, we need to apply some additional steps
to the current implementation as well.

In mmap():
1. Is the memory region physically contiguous?
2. Remap multiple ranges if it is not.

In get_sgtable():
1. Is the memory region physically contiguous?
2. Create an sgt that includes multiple contiguous ranges if it is not.

What do you think about it?

Cheers!
Roman


On Fri, 11 Jun 2021 at 18:20, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>
>
> On 6/11/21 5:55 AM, Roman Skakun wrote:
> >
> > +static int
> > +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> > +             void *cpu_addr, dma_addr_t dma_addr, size_t size,
> > +             unsigned long attrs)
> > +{
> > +     unsigned long user_count = vma_pages(vma);
> > +     unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > +     unsigned long off = vma->vm_pgoff;
> > +     struct page *page;
> > +     int ret;
> > +
> > +     if (is_vmalloc_addr(cpu_addr))
> > +             page = vmalloc_to_page(cpu_addr);
> > +     else
> > +             page = virt_to_page(cpu_addr);
> > +
> > +     vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
> > +
> > +     if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
> > +             return ret;
> > +
> > +     if (off >= count || user_count > count - off)
> > +             return -ENXIO;
> > +
> > +     return remap_pfn_range(vma, vma->vm_start,
> > +                     page_to_pfn(page) + vma->vm_pgoff,
> > +                     user_count << PAGE_SHIFT, vma->vm_page_prot);
> > +}
>
>
> I suggest you create a helper for computing the page value and then revert 922659ea771b3fd728149262c5ea15608fab9719 and pass the result of the helper instead of cpu_addr. Here and in xen_swiotlb_dma_get_sgtable().
>
>
> And use this new helper in xen_swiotlb_free_coherent() too. I am curious though why this was not a problem when Stefano was looking at the problem that introduced this vmalloc check (i.e. 8b1e868f66076490189a36d984fcce286cdd6295). Stefano?
>
>
> -boris



-- 
Best Regards, Roman.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:57:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141430.261241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsm98-00084z-8D; Mon, 14 Jun 2021 12:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141430.261241; Mon, 14 Jun 2021 12:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsm98-00084s-4v; Mon, 14 Jun 2021 12:56:54 +0000
Received: by outflank-mailman (input) for mailman id 141430;
 Mon, 14 Jun 2021 12:56:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsm97-00084m-6t
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 12:56:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsm97-0004TQ-2r; Mon, 14 Jun 2021 12:56:53 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsm96-0001mV-R0; Mon, 14 Jun 2021 12:56:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XCbiOCDvlqziCIM+eMWN9Mc+pioEDSMSh5sz9jv2GW0=; b=J95Je1q+TddQhwFzfqzuTnn62V
	mV3fqq+758Nts6XzUcofJz8WnzKgatJf++NVA54f3F2oeDnHhC9j1qifjSuOCqm1nDOKxrf0wiWB1
	06YfUdNRkssUFoVnQmD+Vwj7BU5KSgzVdygmxbw9Ku7EXZdH1hzn6hUJpKob2JSbN+Ds=;
Subject: Re: [PATCH] Arm: avoid .init.data to be marked as executable
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6837f903-14f6-4438-ed05-b373149984f3@suse.com>
 <b7e76787-cdae-ed1a-a741-e5db146fc87e@xen.org>
 <8c5ec03f-5ea1-99f8-a521-3552d0015ac4@suse.com>
 <1b44cb6d-dda6-5297-893b-a53fe7d123d9@xen.org>
 <919ddc45-c6a4-20b3-e1ab-7a16fe1c48d2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e08a7113-672c-81fc-ff7b-5f58bdf52bb7@xen.org>
Date: Mon, 14 Jun 2021 14:56:50 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <919ddc45-c6a4-20b3-e1ab-7a16fe1c48d2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 14/06/2021 14:17, Jan Beulich wrote:
> On 14.06.2021 12:32, Julien Grall wrote:
>>
>>
>> On 14/06/2021 12:02, Jan Beulich wrote:
>>> On 14.06.2021 11:41, Julien Grall wrote:
>>>> On 11/06/2021 11:39, Jan Beulich wrote:
>>>>> This confuses disassemblers, at the very least. Move
>>>>> .altinstr_replacement to .init.text,
>>>>
>>>> The alternative code was borrowed from Linux. The code has now changed
>>>> to cater for very large kernels. They used to keep the .altinstr_replacement
>>>> and .altinstructions close to each other (albeit they were both in
>>>> .init.text).
>>>>
>>>> I am not entirely sure why, but I am a bit worried about separating them.
>>>> What sort of testing did you do?
>>>
>>> Well, just build tests, on the assumption that relocation overflows
>>> would be reported by the linker if the sections ended up too far
>>> apart.
>>
>> Hmmm, fair point. They should also not be further than the original
>> instruction. So there ought to be fine.
>>
>>>
>>>>> dropping the redundant ALIGN().
>>>>>
>>>>> Also, to have .altinstr_replacement have consistent attributes in the
>>>>> object files, add "x" to the one instance where it was missing.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> ---
>>>>> I'm uncertain whether having .altinstr_replacement inside or outside the
>>>>> [_sinittext,_einittext) region is better; I simply followed what we have
>>>>> on the x86 side right now.
>>>>
>>>> This means the altinstructions will be marked executable in the
>>>> page tables. They technically should not be executable, so I would move
>>>> them outside _einittext and make sure the section is aligned to PAGE_SIZE.
>>>
>>> Hmm, are you saying you bother getting attributes right for .init.*
>>> in the page tables? I ask because we don't on x86, and because it
>>> would seem wasteful to me to pad to PAGE_SIZE just for this. But
>>> you're the maintainer, i.e. I'm merely double checking ...
>>
>> So this is defense in depth. Your assumption is that .init.text is going to
>> disappear after boot. However, if there is a bug that would leave
>> .init.text present, then this may add more attack surface. So I think it
>> is good practice to keep the permissions correct.
>>
>> However... looking at the alternative code again, there is another reason
>> to move this section out of the range _sinittext - _einittext. The
>> function branch_insn_requires_update() will forbid a branch target inside
>> another alternative instruction.
>>
>> It first checks that the target is part of the active text. With
>> this change, this will return true because the alternative instruction
>> replacement will be between _sinittext and _einittext.
>>
>> So .altinstr_replacement must be outside of the region [_sinittext,
>> _einittext).
> 
> I see. But I'm not sure about the defense-in-depth aspect: By putting
> it outside [_sinittext,_einittext) it'll get mapped r/w, while I think
> you were implying that it would become r/o. Not even .init.rodata gets
> mapped r/o.

Yes, it is not r/o and that should be fixed at some point. However, I feel 
that r/w is better than allowing execution, because some of the instructions 
can lead to a DoS if executed on a platform not supporting them.

But that's a matter of opinion and I think this confused the messaging here.

> 
> As a result I'm not convinced yet that you really want me to make the
> change.

I wrote "must", so I am not sure what else I could say to convince you 
that I really want you to make this change...

To re-iterate, this code will break a runtime check in the alternative 
patching code. So .altinstr_replacement **must** be placed 
after _einittext.
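For illustration only, such a placement could look roughly like the following linker-script fragment. The symbol and section names follow Xen's conventions, but the surrounding layout is an assumption here, not the actual xen.lds.S:

```
        . = ALIGN(PAGE_SIZE);
        _einittext = .;                 /* end of the executable init region */

        .init.data : {
                . = ALIGN(PAGE_SIZE);   /* fresh page, so it need not be X */
                *(.altinstr_replacement)
                *(.altinstructions)
        } :text
```

Keeping .altinstr_replacement after _einittext both removes it from the executable mapping and keeps branch_insn_requires_update() from mistaking replacement instructions for active text.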

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 12:57:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 12:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141433.261252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsm9R-00007X-NY; Mon, 14 Jun 2021 12:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141433.261252; Mon, 14 Jun 2021 12:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsm9R-00007Q-JR; Mon, 14 Jun 2021 12:57:13 +0000
Received: by outflank-mailman (input) for mailman id 141433;
 Mon, 14 Jun 2021 12:57:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsm9Q-000070-Dg
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 12:57:12 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58b77765-4781-4ac4-82fc-f5560a254cbb;
 Mon, 14 Jun 2021 12:57:11 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2177.outbound.protection.outlook.com [104.47.17.177])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-UlbjW3WlP5aZkb8YpkqmCw-1; Mon, 14 Jun 2021 14:57:09 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2335.eurprd04.prod.outlook.com (2603:10a6:800:2e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Mon, 14 Jun
 2021 12:57:06 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 12:57:06 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR10CA0100.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:e6::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 12:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58b77765-4781-4ac4-82fc-f5560a254cbb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623675430;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=E88rxmzha68DnX7MDCCamoXuD+jSA9NYX9LK2SOH100=;
	b=mEnEHJKdblzuNPHdJB2WjMoi/fpPUBcwC4mqWMn6b0nLOZfFBiv8K+VVu5IyAPh5u+V6Mo
	74sy9ZfvUVDIIhMXKaPfJNgjGHL2HsOdR+SmBA7HsxyHJrzoUTOMc5HnOXXTjaFydLGHPz
	OxwCergDVNz2QuIdViRs4uJl5mVIXOo=
X-MC-Unique: UlbjW3WlP5aZkb8YpkqmCw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TQIhAwYUQ0++TmX4+yYQGaax+xmgOU8zk6uV+6KOcwhQEFTmT/qiRc2GCEB3I0aJKNmu8dWoM+H+vr+ZhiCCaNQp7MgxTpd4dckIZzCirQiziY5qxHb4BKNflION0gr8taXWmgwa1QoxxdfRI8bNRfdwcriSWGspu1EMAeUxtwHsF2MIVGH+mhteJgfpH05QbTv+6O6YR7R3nu1rPA6Olda1+uGIAGNjMtcrUV7CnZ8P5moLbcLu3hhwtZ+ZGKYEzRncCQx39Z6Y32a1TG4QhsBkh11HCDsNjN9bL+aM+4ugEA8P3CMH2G9z3NYofE+dqoAqlG99DuEp0sCxEytHkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E88rxmzha68DnX7MDCCamoXuD+jSA9NYX9LK2SOH100=;
 b=oWU2G/RhwJjCbjUxnTOJJ6v9nvgMijXvwC0SwS35zbMEWA/NYLxXzA0tudZDMVx1m4MW4e7kAY8QwECaDCDxy9x8CWO87Dj8AlpJcU/0tKE3Rnwzq+28kvuWWc1OU5IP6blWCjAtY87QzlLK/0OgJ6mmQRN9cIGF9EqpnfEdgzYBMTlPv+L9sS/JUG6D1b+KxylsWtFlDCq/ZHXZdhpW7IdrfukuPm2PueIz4W78ByFqprryOLqTxeahVRScOa6pyEH+rmPPM2WzR/hHdoK6dBNbzGw3DD1r1YKktcPs50nVzUOF7sEga5McrAI3ifgqyIxLc6mbN6xKKmvmgKqfAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 3/5] x86/msr: Expose MSR_ARCH_CAPS in the raw and host
 policies
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bcb7cacf-f18d-ed74-00b4-854b403bef2e@suse.com>
Date: Mon, 14 Jun 2021 14:57:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210611163627.4878-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR10CA0100.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:e6::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d2c7c21c-6493-4e7e-fae9-08d92f33e9ef
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2335:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB233528F7B46E203A6D7B9507B3319@VI1PR0401MB2335.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	42KPm/14Xk1TUJgx6zlOsyQiCn8gAXtv33+BCggpCzbQy95J9bdHWDzrzCSQCD9R474gYYVYIcUDmPZFNlI5zbd000q3sXdjYUaPDMkvsTDXE5TgeaJmkLEf3jtQm5yD5Cg8qLPXfsZ6XVO32w6Q3Rx5aDznbGEAUUCRX7TYVvZ3rnMoA8hFVbQonRmClG9kUWFq8I2jXK23Ml4w+UzpduQeYRdUbJzG2bOFgrHEsVfPTlyueTATlZYVm5jy7MD6A8sL5NpUeFbJgN82/qMhb+45jYNcX7OtiWpHtyXy740WdnXW16zHmX3q3j9zwTJTHHIzLQLIqlvrqS8/+J9YhSUwtfs8CHytdMsP6dk+v7YPChQbjhvVF51EVkGV8QPyZ5gSxJxM65qCWK0w5oPj9hWPZ/3uwH+gY+duMG0nTPTfwypMt31U0A1i84wogR6PUTTF0evuBQelG9kituiddc8N4/EHYFSymL1yk+cJjnyS5SvjWWgftQARButAU+kL2TeGxOCM9XJWOBcg85e1U936d85buPgIJmugdGa82s0ocu6IJQWdrNCyxUilU/WUfM0YzsoHGncPYzCD2WMZ9NSvKnUPMC5tyUCOuaDxlXlRPsNEhMbd7tz7Qr8EjZTmZoNymIwlrS10d60lAHSIx1C7lXasum2DR99ykiifQ7HglMsbkfzt3LCYs6BjL43G
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(39840400004)(376002)(346002)(366004)(478600001)(6916009)(4326008)(83380400001)(36756003)(6486002)(31686004)(2616005)(956004)(8676002)(2906002)(8936002)(186003)(38100700002)(53546011)(5660300002)(54906003)(6666004)(16576012)(31696002)(316002)(86362001)(16526019)(26005)(66476007)(66556008)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dWQrVjJaVmZCQjBOVnZJcE5rNFg2QktGbGh4cFVkSzZsWWZDMXc0dnR6T0Uz?=
 =?utf-8?B?cDdPdm9FNDdvM0pJR2YyYzl0SEI2ZXZIaTVmT3JLUTRTTlhZY1BFQ1cyT3gz?=
 =?utf-8?B?V1J1S0N6eUthYkx0YkN4WmFNbWttMDRWZ0dVc0I5RGErV2ZtZXZNdnZiZjFO?=
 =?utf-8?B?R0pSY0VUZkN1aUZ5TTlQYUF4NUN6Q01GMlN6b2Q2RDBWMHVUdkJ4dmFTUU52?=
 =?utf-8?B?Zi83N2Z4RmpaUGd1blFlWHlKbGxDSHdoS3dLbWJLcGhLODNubTVQeW5zWnJ5?=
 =?utf-8?B?eWdZVWNzeFRaRmtPL0d4cDBCQituSURSMWl2NVFUbEpDb2VHV2RiSG5EQ2I5?=
 =?utf-8?B?SmMwTDArR254enZkUlErV1oreFpaUlNad1VSQjVNL21tdE1scUNyVWJIaG9F?=
 =?utf-8?B?OUVkeDZ6Z0dVeCtjZWRWaGYza2E4dUVFY2IrdllhMGNHamljWXJpaWVkSWpx?=
 =?utf-8?B?THh5YVN6bGZtYjhkN3V5VjJ0V00ybGxuWU5tZmRBeUNvZFlmdXVLc2pNZlJh?=
 =?utf-8?B?bHVRaUEwYmZDR3IwUWo2eXl5VzcwN0V4dWRvK3AweWRTRENHY3AydFA0cHJO?=
 =?utf-8?B?TVF6My96YlFnczUvY3d6MlVrcmFhRWNUcENWbnVtYzVYdkxqZks3bk43Y2pV?=
 =?utf-8?B?RFpSUWRONFBLT0FNOXJDdGF1TzRiWlZWQXV4QlhXa05FQmNkUVdpUXJBcmFG?=
 =?utf-8?B?OWdWbTVhU1NsSTlZTkM2UzhzL0d5dVE4WDc0RDNpSkdxQTVlLzFYVzJjNUdo?=
 =?utf-8?B?c1ZxV3JQeUpTaUNBblpISzJwMWRJR2RNaFlMOGRSTFN5dkFZOWhqMzVxMDFK?=
 =?utf-8?B?eEdJbGpNMnpkMWhUUi9YUkNWb21VeUwvRmxlSmtySkI3dWw0Ynl5YzF2MU84?=
 =?utf-8?B?cU5RK0h0T05QQ21ldjZweWlGZzAyYlM4WWdsWno4dC9scW5aeElLQXhnMWZk?=
 =?utf-8?B?RGpoSkdVUGtpd1RGMnJpUEZJbmFFZlpmdCtNb0laRzJRVUJ5U2ZWUUhMaHNH?=
 =?utf-8?B?Y1NzVk9WYko3ZG1MVXNWMlZCNzRubzBIcklHMGtBWkdEZTM0SGNxTHY1SWZl?=
 =?utf-8?B?MllQNGNibHpQSlZISHFDMGRheDBWRnFMOVVvL1Q5N3lKNENnQVBXRFY2QzI1?=
 =?utf-8?B?SktKbi9yWVQvTG5KcExyM2FTVS9VSTdRZG1uODkxdkdsUGh4SGVEY2Y3cVJF?=
 =?utf-8?B?ejhCMmtjWlJ4ajdUUWtxMGU0c2ZIQzRuWlBjeVJOOVljYTlUZStYY2Q4bkU4?=
 =?utf-8?B?eEZsbXQrdHNYSHd6bkRncUd5OVZuM1p3b1pTa0xhSDI5K0dNN2wzeTRZLytl?=
 =?utf-8?B?ZXVZTVYxVUNQOWd4WnVuYnRXVkc2ZElFUk5DVER4bWRuS1BqMTFya0srUFpV?=
 =?utf-8?B?L3pwdFRocklzc2t2UExtRnhLak91alRCdHI0YU04b3BUd1p1a3FoaGZ1QklK?=
 =?utf-8?B?RW9NczhIYUUrWU5TYmx5N1gvQTcwaUdSdzcrWFZzMm9FK1hJUHlLRHA3WHI3?=
 =?utf-8?B?TnBqeC9Xd3kzSU5kQ204UWUxUGxZb2lNRDlxNU1wQW1nVERjQlJwT1k0bVcy?=
 =?utf-8?B?WkxWeHk2MEF3aWlrUVlpRnNoZXY2UFQxdjRZNjhIODVHbWozZTl1TWpxMmo4?=
 =?utf-8?B?bGFiem11RGczNnJxcHNnb2lINVpZUXRGMzJDb21JUU9odzF3NGFLVy9zcnQ5?=
 =?utf-8?B?Z1JOcnVKbENkWVc4MWVlT0NyT1I1VHlkbjI3cnYvZUp2WGVUZk1qMzR0d0wz?=
 =?utf-8?Q?FOOiuXqr0/CFRKymZNAR2IJcRJBwMn/trMs0lIt?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d2c7c21c-6493-4e7e-fae9-08d92f33e9ef
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 12:57:06.5251
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: phbzAA+QL8aHQRJt7+OQToV/00TcbwHYyNLZiyk2uTZyahC69DFfYWXBXZVtu9VDwdiJ9pwJa63REmCQEtWWxQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2335

On 11.06.2021 18:36, Andrew Cooper wrote:
> @@ -60,6 +65,11 @@ static void __init calculate_host_policy(void)
>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>      /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>      mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
> +
> +    mp->arch_caps.raw &=
> +        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
> +         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
> +         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO);
>  }

Isn't this a little too simple? For CPUID we consider the host policy
to be what Xen is using. Taking ARCH_CAPS_SKIP_L1DFL as an example,
we're not using it unconditionally (depending on opt_md_clear_hvm and
opt_l1d_flush), i.e. there's command line control over its use just
like there is over the CPUID bits. Or take ARCH_CAPS_RDCL_NO, which
we set unilaterally for AMD/Hygon.

I don't mind it remaining this simple for the moment, but then at
least the commit message should state that this is currently
over-simplifying things. If you agree, then with suitable wording added:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:00:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141443.261263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmCy-0001kK-9u; Mon, 14 Jun 2021 13:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141443.261263; Mon, 14 Jun 2021 13:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmCy-0001kD-50; Mon, 14 Jun 2021 13:00:52 +0000
Received: by outflank-mailman (input) for mailman id 141443;
 Mon, 14 Jun 2021 13:00:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsmCx-0001k7-Ac
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:00:51 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0da18d7-8e4f-4398-8ec9-f674d39d56f8;
 Mon, 14 Jun 2021 13:00:49 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2171.outbound.protection.outlook.com [104.47.17.171])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-14-SHsC5XotOue2vDPVfAlQew-1; Mon, 14 Jun 2021 15:00:47 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6669.eurprd04.prod.outlook.com (2603:10a6:803:125::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Mon, 14 Jun
 2021 13:00:46 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 13:00:46 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0024.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.22 via Frontend Transport; Mon, 14 Jun 2021 13:00:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0da18d7-8e4f-4398-8ec9-f674d39d56f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623675649;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+OheBH9cKJlYpGZrszMCo07i6IKzpXkQmFnGftDV4eU=;
	b=HRtcee215JWlKHHJwDGBWdyAvR5K6dC/qXq1PSdN826zHMHzRsfwvx8pac1mFXtBMkMzJb
	30rCACqBzLX5AgCsQqAM19YQNHoQcVJtOqfBXJgr7G2eW7eX1fgUmOPr1HVmZaPF+egDa1
	twsSkW7RPb7x4mbtOYxewX55tS29jok=
X-MC-Unique: SHsC5XotOue2vDPVfAlQew-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ihgAd1ARe1ryqBOLwBgEW9oxaqD4Jpv42J7EfhgOm1z+RR8YFQu5xF2bnhzh7NwVAmuxrYE4Y79X4rtDrOWYFVM9ZhhBMfPKFXlJ2f+0Td2W2qa0r2YspleVeO6dYA+qej49YIvf5qc64IKWLgIVICRXIZfPb4a+W4wqUSWEMQS8aljhDPwJji+dQDtJuYAp27cgtMGPVXMWV8glEIWPyqGVCTvXRBv99JLM6gnow9xc0j6d612VtkpctH4xp5jjP0RtaWVRKeAEtkXU+hnulT+pk1BNJFZLWs4Q7uodkWILto/vZKlAWK22mM+tW8f3JCrSnhPLv6iVRilsTcACfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+OheBH9cKJlYpGZrszMCo07i6IKzpXkQmFnGftDV4eU=;
 b=Py4aLMMDi/+/R7xr3HxSHO747JfDANxOWOsTs1c2xVl61zuZEW2gabTUug+RJCUEYskpt3OSQbF+IYJ982+6Q8+VF74wfVvy105jhRngwq202ENlakjqaeXUJ2hd5rvNrk7rT96wI1ExpBzAprKX+2gq/U+wLFjSrza2V6YN/ou0sfCv8z9XzcWKxbmchyMB0esrZf6Ber7aOSkGmWH3RfAd7NLkZq8gFQQH2aFLYXUTn8lktPRpsXSwKYYE/DfED+uogbbO+KWzJdVsaG14iTaerFvMQ8cMeZit6907ghlCTiCuHrBk78C1TaCLuJUfp3MBweFCPqPS3/25ZaTGng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into
 xg_private.h
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb85a8ea-c78f-1b94-6d83-224137f21500@suse.com>
Date: Mon, 14 Jun 2021 15:00:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210611163627.4878-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0024.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8e780695-b3f8-49c1-df69-08d92f346ca7
X-MS-TrafficTypeDiagnostic: VE1PR04MB6669:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB66692A7E4B12DF6105BBA663B3319@VE1PR04MB6669.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cfBFWe7lzUbZgGdl21ZDBHruRj9LE8AMJ4zkqo0vYyI9NVyh8EkQBmg7C7y7T1jldXqJ2FT2LxqswIjrXM1pnHBSA8Kfpq7d9N/JX9sqT/Kqj6V/jRxZ7qagVPKTm4tmLzPnN1v/BepsOvl3LDTCzZ406xspNfJeGH+6GKlPHkHG2KLHMnpdYlB+RxHl6HK7n3Ksmbxghw7G6kqysv+Kf1H0cxbAIx1tDh1N/nXf2RqUwgrgnWJ7MijjCObkhj54goK0zy1Vx1s4xcZhlbWBTQWAXujHl6+1a1bwOu8kKC008tD19HC4hs0CDm27ivVGR5K/bof4oGZLh9KXdT6IgneNUWuwEY9OL3tlK+oVAUYGBwdteVISzZ5acSdAMF+knVMCso8AeCcvylFtduEJHMqJSvJn5hRcVbo3vEfURovsTFi90/EU4Cwk0G2hZpl9aPnwK99hHMeKxJg+d6BPhnfid9sa18H6RNkxaHQ1yOCe8OfDXUllM8+cW6D7OInM1Ytx/8hI+oyji+15ndKwtmu7zh0TuC8J3TkQUb5XriJXIzcCjf1RD/M38W8fPFIoRY7TUFzjZuAcv3ETLoqwqbref9djjxbww8Nip8uDp6pRIX/bcfovoOcY9Q0J+GzSV0uueir2FckGuNQKamT1OgjV9ZQCHiXe/JEY70m8tyGkd/HFEImYBh9EVGrmDzlX
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(136003)(366004)(396003)(346002)(39830400003)(2616005)(2906002)(66556008)(16576012)(956004)(8936002)(31696002)(316002)(86362001)(4744005)(6486002)(66946007)(4326008)(66476007)(36756003)(38100700002)(31686004)(8676002)(26005)(5660300002)(186003)(16526019)(54906003)(6916009)(53546011)(478600001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?U1dQK3ZpK3hLL2FzV29VRGU2U2pCZisrRFM5RmpZbXhjYWdHN0RvNFp3UUNw?=
 =?utf-8?B?UkIyZmpDSFIwN2FOSnZ4L044TlRhemtjcmVXdk1NN3RCTEdvdk5kc0pPaVQw?=
 =?utf-8?B?TmRlbHphNU9lQ3pCOGdKS0pGT084Ym5LQkZXWDJRTlBQbUJvNldjc3h1Y0I2?=
 =?utf-8?B?YzBWUm8xN1hFZnRVNTVNZmVvNzFXRHU5SytqOHh0bWNSQ2lJQVJZdTZ1cmEy?=
 =?utf-8?B?V0xOQWUwaVZwelkxeGwrWGZiY0ZHRnByOVZkMmNreS9uWmpPTFFrelFvdEx1?=
 =?utf-8?B?MVVreFowamlUcUE2ZUtrQStGN2tDTnJkWUszOUR0V1M3NlE3VnlxYnZwOFVW?=
 =?utf-8?B?R0JBSkszMEtKb0FxblJxTWk0NXUvVkNSL3dYNjZOWm5HV3pjc0lWRmUrMEtl?=
 =?utf-8?B?aTFQM0JrNGtsU2swMUN6OWZmc2xCMnZnSnpobXVsUlFrQkhlN3BtcFNSUml3?=
 =?utf-8?B?NzdSTStHeHl1UTg5VzkrNHVHK1gzMDM4VStjUTVWUTBPYVo5T2o2NU43NGtG?=
 =?utf-8?B?cUxvRGdjZTRrTVVHK015YXU3VnlBaXljS29sS1MrdW53YXJ5TVFFcVAwSzl4?=
 =?utf-8?B?aUhnRStPbGJOL0hNUXJjbDdRbkVxczk5dmNpMGZzNENHTWZlWng1Rm4wRGpB?=
 =?utf-8?B?UnpOejAvNXZRRzlyaHl4MDlDa0V6RDBMejRFTmRnbjV4OTV3RnlXUDFVUEh3?=
 =?utf-8?B?VmlSUDhCL0RVSXhKS0NjdDB4ekVTNDJJNGxzTHZFdXJkdFJRN2ZEOEdpYWlT?=
 =?utf-8?B?NS9aeXYyRnlRM1BaQjVjdWZVV1k1ZkZySHVnSDlsTkNpMnBIZ2J6YUhvNDZX?=
 =?utf-8?B?OWFpNS9kdUFPZHZ1TDZGRXU0YmViVUxZOWlTUjJ0UmoxdHNvMC9tUFY2b2wr?=
 =?utf-8?B?aUNyU3dySlJqVWtVTXN6NXpXQkw1Mi9XRFNXQ0NYZFF3M3pYWWk0MG5XVDhh?=
 =?utf-8?B?WEFmOE1SWUJYc3gzWEx1UmRld0lLdzdLU25RVklSbHpYUkYvdDFwVnJiLzVO?=
 =?utf-8?B?WUxpMzF2cDVvV1FhMzZpNk11OGQ2T1l5WWtzb0pqejBodkhBUEJUSHN3d3dR?=
 =?utf-8?B?YXdFTGttdms5aERJUnl3VjcwMklPWk95SXRicmJOKzlVT2VKeS9xUWNoWFJD?=
 =?utf-8?B?b2VWR1JNbUxIOWdDTDlOeXFQZUMxQnB4ZVJ1NDR3YlR1RHFBTEVxVGV4Q0F3?=
 =?utf-8?B?ekZBUjhDWjNoVGNRZ05PVmsrbm5jQ0hnMlJSelozWXBkcjUvSmVNdWs5Rk03?=
 =?utf-8?B?eGlXWGxXaU9INFFZaGZPMVcxeWllQW5qM1V2NDgycXBIRllxYXQ3MERPd29Z?=
 =?utf-8?B?WjNIMTM2R01jZytOQVpSQWRqZnQ0c2lzS3U3Q21sK2JSUkRZMTAraDN2c0lS?=
 =?utf-8?B?TzQ4UDdQcHJHcWUvRmJmYzcraTc4TFZGSnQzNXB2eCt0VkZUQ1JiYjlzVWtG?=
 =?utf-8?B?Mi9mOU9LRnNBVDB3d0k4Z0RzbHNtOU90cTh4QkxQWm9qUnFtbXRmc3liRmp4?=
 =?utf-8?B?MURPMFFnTnErSnplQXpTYVRiVGU2ZW5yc0tDUVVRaERidERXK1lJTEk0elBP?=
 =?utf-8?B?cnlPaTRxNGdaMFlGV1RTRkIxSHZZVXdXTTF5RGpkTjBKRkR6TmFKSVd2N1JW?=
 =?utf-8?B?VGY3QVVKdU5XOWR1Zm1TUGoxZWtUQlgwUm9iRlZaWE83MGhpZlJwZWhYTEdO?=
 =?utf-8?B?MmlURExFcGVHWUpMRGtPMkl5THFJai8yWExHdFEyK1haUDU1TlQ3bXFzTUs5?=
 =?utf-8?Q?GvFgeBli0fH3QQAWYX3/B2kLNFT/sbikjNt1ZXD?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e780695-b3f8-49c1-df69-08d92f346ca7
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:00:45.9211
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +r7AAajFJIDvIA7ISnGisS3VetTab20pychMq5l+Ip1U8VSvHL1+PgLXpjbcZFRVG5V5/OA+MPcMuzhE226DnQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6669

On 11.06.2021 18:36, Andrew Cooper wrote:
> ... so tests can peek at the internals, without the structure being generally
> available to users of the library.

I'm not sure whether this slight over-exposure is tolerable in the tools code,
so I'd prefer leaving the ack-ing of this change to the tools folks.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:01:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:01:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141447.261274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmDa-0002Hb-I8; Mon, 14 Jun 2021 13:01:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141447.261274; Mon, 14 Jun 2021 13:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmDa-0002HS-F9; Mon, 14 Jun 2021 13:01:30 +0000
Received: by outflank-mailman (input) for mailman id 141447;
 Mon, 14 Jun 2021 13:01:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsmDZ-0002Fx-5K; Mon, 14 Jun 2021 13:01:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsmDZ-0004an-0U; Mon, 14 Jun 2021 13:01:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsmDY-0003V3-OK; Mon, 14 Jun 2021 13:01:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsmDY-0001E5-Ns; Mon, 14 Jun 2021 13:01:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=21C3Zhzo6eBCK/6j/J/OdRX4sBPxRhIqSd2rf1+SVGE=; b=we3l/Ix+P3xveoWpNRGocP4K73
	rZUKyi9i39pwrB3EIzcCg1A9tH4+YWK6MZt0T6uk9wiv7xRADjIuABPxmKgkhy6/j1xNwbGxk4SVa
	WK4OfLJi8nUhLzoHsqTvBdpyQ5WYSx2ufER4NMgOp+4CYCtRsAIR+fOfnMb7Xzgsa1xY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162800-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162800: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4f1858763b7b1aeb79fa7c818eca98c96943aa69
X-Osstest-Versions-That:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 13:01:28 +0000

flight 162800 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162800/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4f1858763b7b1aeb79fa7c818eca98c96943aa69
baseline version:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433

Last test of basis   162674  2021-06-12 02:01:29 Z    2 days
Testing same since   162800  2021-06-14 11:01:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93031fbe9f..4f1858763b  4f1858763b7b1aeb79fa7c818eca98c96943aa69 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:10:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141460.261288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmLq-0003KE-G0; Mon, 14 Jun 2021 13:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141460.261288; Mon, 14 Jun 2021 13:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmLq-0003Jj-C0; Mon, 14 Jun 2021 13:10:02 +0000
Received: by outflank-mailman (input) for mailman id 141460;
 Mon, 14 Jun 2021 13:10:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsmLp-00039K-D9
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:10:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsmLp-0004jy-Au; Mon, 14 Jun 2021 13:10:01 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsmLp-0003BN-3Q; Mon, 14 Jun 2021 13:10:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DcXKWpWetdS7J5+taxThxsmttgI+TvgUtlFz/dUH0yM=; b=ilVF6CGnSGZnUJmJDAcLo8Kqcn
	DCxTYHkzkxx8XD2CHooKYYglgaHv8kquEXm1WVYVIeo0riIJXW8r6Q1UCB3mJlh7W0afbh02PcZA4
	2eXMmLjVoz+95/asastAQkSt8n4+0ARLpOzKEJxm4eR4eSaQAB2s8Z7a/Fst4Z868sAc=;
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
 <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
 <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
 <7a57d3df-94d0-5ee6-1ceb-bf4eddec1392@xen.org>
 <666fdb88-94c0-be05-f4d5-d755b0326dad@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <55a201ef-728e-dc59-1f9f-d269e1c5989e@xen.org>
Date: Mon, 14 Jun 2021 15:09:59 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <666fdb88-94c0-be05-f4d5-d755b0326dad@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 14/06/2021 14:02, Jan Beulich wrote:
> On 14.06.2021 12:54, Julien Grall wrote:
>> On 14/06/2021 12:40, Jan Beulich wrote:
>>> On 14.06.2021 11:57, Julien Grall wrote:
>>>> On 11/06/2021 11:19, Jan Beulich wrote:
>>>>> This confuses disassemblers, at the very least. When this data still
>>>>> lived in .init.*, this probably didn't matter much, albeit the
>>>>> "#execinstr" would have been suspicious to me already then. But at
>>>>> the latest, with their movement to .rodata, these attributes should
>>>>> have been dropped.
>>>>
>>>> I don't quite understand why this wasn't really a problem for .init.data
>>>> but it is a problem for .rodata. Can you expand your thought?
>>>
>>> I've said "probably" for a reason, and my thinking here goes along
>>> the lines of what I've said on the other patch regarding .init.*:
>>> There's perhaps not overly much reason to be picky about the
>>> attributes of .init.*, and at least on x86 there is also a case
>>> (the EFI binary) where we fold all .init.* into just .init anyway.
>>
>> Makes sense. Thanks for the explanation.
>>
>>>
>>> The alternative to the present description that I see would be to
>>> go with just the 1st sentence. But I would be afraid in such a
>>> case that you would come back and tell me this is too little of a
>>> description.
>>
>> How about:
>>
>> "xen/arm: .proc.info doesn't need to be executable
>>
>> The section .proc.info lives in .rodata as it doesn't contain any
>> executable code. However, the section is still marked as executable;
>> as a consequence, .rodata will also be marked executable.
>>
>> Xen doesn't use the ELF permissions to decide the page-table mapping
>> permissions. However, this will confuse disassemblers.
>>
>> #execinstr is now removed from all the pushsection directives dealing
>> with .proc.info".
>>
>> I can update the commit message on commit.
> 
> I'm fine with the new commit message, but I'd prefer the title to
> remain as is, as that aspect is what triggered me to make this change.

Sure. I will keep your commit title and update the commit message.
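
For readers not familiar with the code in question, the kind of change
under discussion looks roughly like this. This is a sketch of ARM GNU
assembler directives only, not the exact Xen source; section name and
entry contents are illustrative:

```
@ Before (sketch): #execinstr marks the section executable, and since
@ .proc.info is folded into .rodata, all of .rodata inherits the
@ executable flag in the output image.
.pushsection ".proc.info", #alloc, #execinstr
        @ ... table entries (data, not code) ...
.popsection

@ After (sketch): allocatable only; the table is pure data, so
@ disassemblers no longer try to decode it as instructions.
.pushsection ".proc.info", #alloc
        @ ... table entries ...
.popsection
```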

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:15:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:15:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141468.261298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmQs-0004cr-6r; Mon, 14 Jun 2021 13:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141468.261298; Mon, 14 Jun 2021 13:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmQs-0004ck-3p; Mon, 14 Jun 2021 13:15:14 +0000
Received: by outflank-mailman (input) for mailman id 141468;
 Mon, 14 Jun 2021 13:15:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsmQq-0004cd-Ew
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:15:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsmQq-0004pU-9o; Mon, 14 Jun 2021 13:15:12 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsmQq-0003eV-2n; Mon, 14 Jun 2021 13:15:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=MexwEiLb6DO00HfVmCl9lSsQxuKTdncX1Efn8m1NmVk=; b=g4qV2/w1UFArtw+vrLfR35LzXG
	jqvJOtmrkN+pw0g7uJ7TkoGxjNMGcIQr8Bdj0Aa4X2qCtDVGaT9wMZs0yVL/VCqzhA3vaEcoiL5A/
	YATRPEENTfMN9OSwpLb4+6soeH8Af5/Wq1FE7BgyDKYVVWqrYos8pbYQwdNhCYheoZAc=;
Subject: Re: [PATCH] Arm32: avoid .rodata to be marked as executable
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <25f1b0d2-9270-ba42-d110-2bf14e45b7b8@suse.com>
 <5b819c5e-587b-4eec-b873-73892503c3e2@xen.org>
 <4143bdfd-ca78-d7ce-4ed0-2b6271c48ecf@suse.com>
 <7a57d3df-94d0-5ee6-1ceb-bf4eddec1392@xen.org>
 <666fdb88-94c0-be05-f4d5-d755b0326dad@suse.com>
 <55a201ef-728e-dc59-1f9f-d269e1c5989e@xen.org>
Message-ID: <cb3120de-e06f-7002-d57a-e2b560205b6e@xen.org>
Date: Mon, 14 Jun 2021 15:15:10 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <55a201ef-728e-dc59-1f9f-d269e1c5989e@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/06/2021 15:09, Julien Grall wrote:
>> I'm fine with the new commit message, but I'd prefer the title to
>> remain as is, as that aspect is what triggered me to make this change.
> 
> Sure. I will keep your commit title and update the commit message.

Committed with:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:27:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141476.261314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmcb-0006BO-DX; Mon, 14 Jun 2021 13:27:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141476.261314; Mon, 14 Jun 2021 13:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmcb-0006BH-9g; Mon, 14 Jun 2021 13:27:21 +0000
Received: by outflank-mailman (input) for mailman id 141476;
 Mon, 14 Jun 2021 13:27:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uT58=LI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lsmcZ-0006BA-M1
 for xen-devel@lists.xen.org; Mon, 14 Jun 2021 13:27:20 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00a04c8e-89f7-402c-8a26-14ef5179b3ad;
 Mon, 14 Jun 2021 13:27:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00a04c8e-89f7-402c-8a26-14ef5179b3ad
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623677238;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=yKCEAH5lOjEYnguxQtxus7zIqrbkWXfvSyPchxPZ7JI=;
  b=V5AiLBNsfDQ5U5Va7/GQdD0uj1tO0wSibFsGHsJWavur1a1SAJ7ErDtl
   scWmQjdpYG1ow71G5+iAbTljD/NEOZfSJI2zSnlibItMddmAHbg3XlHON
   mH4SFpAG+eUaKvhTaZ9LX3HgaIgF2a7gCitvASM1WHFrKGyASXB1D7GJO
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: pHH1VnMaN4nuqADj2zW8eH7Tlv1xIdoiO8NQM3E0OeVZ3wpxEffxgj9UNV0NMRGwKzRiVjHMqz
 MzKh3VNPU2DAZ6intfbV7cCUOoDgzSVT368taFEjI1ClNULpwmecLWCYxbPjUOnLqaZvd7Bm/d
 Xr3rwd1L8xsBXYQgksYPH/vo2qPhnRH+f7bSZKvgzq3+J/qkq1mHvDwX9vaLGIMJ9mojOZuBhP
 V8OWIJ58IznK3bCs4jQcq83AeYIuQvM9xX6nUiER71my5zDr7mU5y7+20IVQJ8yY1RBn+jAMld
 VgM=
X-SBRS: 5.1
X-MesageID: 46068031
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:UXorJK66K4fHRzelkgPXwAzXdLJyesId70hD6qkQc3Fom62j5q
 WTdZEgvyMc5wx/ZJhNo7690cq7MBHhHPxOgbX5VI3KNGXbUQOTR72KhrGSoAEIdReeygZcv5
 0QCZSXCrfLfCVHZRCR2njFLz4iquP3j5xBnY3lvhNQpZkBUdAZ0+9+YDzrdXFedU19KrcSMo
 GT3cZDryrIQwVtUizqbkN1OdQqvrfw5evbXSI=
X-IronPort-AV: E=Sophos;i="5.83,273,1616472000"; 
   d="scan'208";a="46068031"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J1HiC75DkKAuySJpHDX8rPPRTJB745X4Z8vhqtuSYQMp0a7MT1z1xMNVRdp9cuAlyPyK4AYg48B3yOYGC5aiR+/cKa3vcjFQbKFskGIMAcyc8GOue445LE+jcsKmxmHUqXJ7Hm1GgwXcqt/lYUvnm3eHFLtEQFk49m1pc9RpNGTWoeti24bySg5Eykq0GRXbTPTBueY9/14mCd5GZH5otPUMry0FpE3YSGHLILp15Q6hufoj8vqlzTSW+RgNpZ0fsMkfcCUKbWupPJ6z3GqGDtbW+ie/8uOzfmJ55t/trlD76K9GNyXGgDGlhLQinSXQg9mI4REgS/V1lac/Oe6r8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Eke6UwnoHxePLsUmocmHA2wYq0orFxM+FVW3L87kxls=;
 b=gyWENriLCi5CpMiaIaZLeSGvgDdlPyoqo5yEQbaUVRGmgwadPGo/Dt4IxfAdX9M62g0m7JE5CEoDUG/lrs58sbUPvGj+0G8IR0I2DYvgToo6qIXdzOjjbBJL6oaIek6WFHWGB5IwuLUV2+aQZvQkoiK+JncdMcc9pgZtA+B6QsaN46sAA69BWYewscbJxGw6u56A6VEFZigb1CTP+oImOQOHvf5V0eEamgllpvkcCxVJAvIDJmMG7Zx5PjKTPQj2/uTGjCyr7wC3nPJ8loFIt75AwxBfZpD2Nmih9tBQggLldMJe3rUjJ5n/f5+XgaucOREqJpPwGkiPGOJWWl2xzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Eke6UwnoHxePLsUmocmHA2wYq0orFxM+FVW3L87kxls=;
 b=WMoSNmmi/BYjU133lAU7zceZLOHEQeWQHBZqBEVle9zhzPx+LBmz6ruwIUV3bZBQMJHbH6aI0HD3Epyl1SBqA2oYCet025kVAntke2Xcbq7BuoCblIqcSlSFYT/JU2lyw2Mw5SDlV2rvSEZLChBnYswCsfX6LlwpyJxcmpZWM2Y=
Date: Mon, 14 Jun 2021 15:27:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, <xen-devel@lists.xen.org>,
	<boris.ostrovsky@oracle.com>, <stephen.s.brennan@oracle.com>
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
Message-ID: <YMdZKuKOnFKpQ3sg@Air-de-Roger>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
X-ClientProxiedBy: MRXP264CA0011.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 838d7080-e982-4ee4-1001-08d92f381e74
X-MS-TrafficTypeDiagnostic: DM6PR03MB4683:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4683A0391EB899517D6392DD8F319@DM6PR03MB4683.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: M+I603CkvA4HuAOJIZLsDgTvt2HGz8HpuaVwKGGZITZJbVTQvsy8rxQeeUL2guKD8hRICmaXKjpbVVzmy/halkthVuGsAz7YdHaVZHnejvGdDO6ZWa4MkEauTMbk1xFTrZc/QiqoDE7xCUARqAfobkG1/64MDqH4JSEqmcyBEagURqRMC9xjowLWnl+1RGhU4HaOWWT67iRsrD3IgK69mSFcBUUX4k8f1LX0/jfFiGNeJOK4l1T1O8bp1dpLsGqLomsfSqqPeixrW8bq3LPAISfqlVEtlOW/WHwvQRWgSrzUaekfIjPVCfMmAKzUwQgzzhSRVrV/SLkdx0zyutan1TyYt4T2l2eKWmaHpvlgUSwo/nuPKbypQSP0a/l8CQo1sbgrdECN7FAQbFiAgv8z7F6PTP/tLKZQ9LS6n6KMC52LOn84gplbgYPncTC9M7LfOa0vJ63u3I+ndI431xMrx0oQh+Q3VLtY6efIiEI5/ATIi7QpbJo/N4t/hbyfEOTzbtfEFwkiMvWEdRszN5gb7lgNkpneEMK9MLqLcYV1j1z5FpYVXthhQGyiBlSqSRc95ydSAokpN1z0ReICzvZuJw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(396003)(346002)(376002)(366004)(39860400002)(86362001)(6496006)(66946007)(53546011)(6916009)(83380400001)(478600001)(8936002)(33716001)(5660300002)(66556008)(66476007)(6486002)(9686003)(316002)(26005)(6666004)(4326008)(956004)(2906002)(38100700002)(8676002)(85182001)(16526019)(186003);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 838d7080-e982-4ee4-1001-08d92f381e74
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:27:12.7950
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EhWEo0eeqdWCYH4DRvJGH5T+ksswTUTpR1dseGsQ8tJeksEvLAX4eecDKPIg+GLlhvTyoRCbml7Ra7WDVccjhg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4683
X-OriginatorOrg: citrix.com

On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
> On 14.06.2021 13:15, Igor Druzhinin wrote:
> > Hi, Boris, Stephen, Roger,
> > 
> > We have stress-tested recent changes on staging-4.13, which include a
> > backport of the subject patch. Since the backport is identical to the
> > one on the master branch and all of the pre-reqs are in place, we have
> > no reason to believe the issue is not the same on master.
> > 
> > Here is what we got by running heavy stress testing including multiple
> > repeated VM lifecycle operations with storage and network load:
> > 
> > 
> > Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> > ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
> > CPU:    17
> > RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
> > RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
> > rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
> > rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
> > rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
> > r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
> > r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
> > r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
> > cr3: 00000013c1a32000   cr2: 0000000000000000
> > fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> > ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> > Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
> >   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
> > Xen stack trace from rsp=ffff83303fff7cf8:
> >     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
> >     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
> >     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
> >     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
> >     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
> >     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
> >     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
> >     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
> >     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
> >     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
> >     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
> >     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
> >     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
> >     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
> >     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
> >     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
> >     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
> >     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
> >     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
> >     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
> > Xen call trace:
> >     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
> >     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
> >     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
> >     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
> >     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
> >     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
> >     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
> >     [<ffff82d08024324a>] F do_softirq+0x13/0x15
> >     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
> > 
> > ****************************************
> > Panic on CPU 17:
> > Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> > ****************************************
> 
> Since this suggests a timer was found on the list without ever having been
> initialized, I've spotted a case where this indeed could now happen. Could
> you give the patch below a try?
> 
> Jan
> 
> x86/vpt: fully init timers before putting onto list
> 
> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
> iterating the list and acting on the timers of the list entries will no
> longer be kept from entering their loops by create_periodic_time()'s
> holding of that lock. Therefore at least init_timer() needs calling
> ahead of list insertion, but keep this and set_timer() together.
> 
> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Thanks for looking into this so quickly, and sorry for not realizing it
myself when relaxing the locking. Adding the timer to the list without
it being fully initialized was a latent issue even when it was still
protected by the lock initially.

Provided testing shows the issue is fixed:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:29:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:29:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141482.261325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmeT-0006oF-Pk; Mon, 14 Jun 2021 13:29:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141482.261325; Mon, 14 Jun 2021 13:29:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmeT-0006o8-MO; Mon, 14 Jun 2021 13:29:17 +0000
Received: by outflank-mailman (input) for mailman id 141482;
 Mon, 14 Jun 2021 13:29:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DfkY=LI=arm.com=robin.murphy@srs-us1.protection.inumbo.net>)
 id 1lsmeS-0006o0-Ga
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:29:16 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f37c60d6-52d8-43ec-8134-16a3df064ad9;
 Mon, 14 Jun 2021 13:29:15 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 259ED6D;
 Mon, 14 Jun 2021 06:29:15 -0700 (PDT)
Received: from [10.57.9.136] (unknown [10.57.9.136])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A18ED3F719;
 Mon, 14 Jun 2021 06:29:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f37c60d6-52d8-43ec-8134-16a3df064ad9
Subject: Re: Regression in at least 5.10.y and mainline: Firewire audio
 interface fails to work properly (when booted under Xen)
To: Salvatore Bonaccorso <carnil@debian.org>, =?UTF-8?B?5bCP5aSq?=
 <nospam@kota.moe>, Jianxiong Gao <jxgao@google.com>,
 Christoph Hellwig <hch@lst.de>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 Marek Szyprowski <m.szyprowski@samsung.com>, xen-devel@lists.xenproject.org
Cc: 989778-maintonly@bugs.debian.org
References: <162352833546.2353.230557992597997974.reportbug@home.kota.moe>
 <YMWl4UnFBAVRDnys@eldamar.lan>
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <2f7c7d36-b6f4-f8ab-756e-a563fa03b9e4@arm.com>
Date: Mon, 14 Jun 2021 14:29:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YMWl4UnFBAVRDnys@eldamar.lan>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 2021-06-13 07:29, Salvatore Bonaccorso wrote:
> A user in Debian reported the above issue, which was reproducible with
> 5.13-rc5 and 5.10.y as packaged in Debian, and found that 85a5a6875ca9
> ("swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single")
> introduced the issue.

Sounds like it's probably the same thing that's being discussed over here:

https://lore.kernel.org/linux-iommu/2e899de2-4b69-c4b6-33a6-09fb8949d2fd@nxp.com/

Robin.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:29:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141485.261336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmeq-0007MJ-3B; Mon, 14 Jun 2021 13:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141485.261336; Mon, 14 Jun 2021 13:29:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmep-0007MC-W4; Mon, 14 Jun 2021 13:29:39 +0000
Received: by outflank-mailman (input) for mailman id 141485;
 Mon, 14 Jun 2021 13:29:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6qZl=LI=amd.com=thomas.lendacky@srs-us1.protection.inumbo.net>)
 id 1lsmen-0007Jk-TR
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:29:38 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e88::61f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cd485c7-edc5-4b46-b1a2-b5fee25ed63a;
 Mon, 14 Jun 2021 13:29:35 +0000 (UTC)
Received: from DM5PR12MB1355.namprd12.prod.outlook.com (2603:10b6:3:6e::7) by
 DM6PR12MB3369.namprd12.prod.outlook.com (2603:10b6:5:117::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.23; Mon, 14 Jun 2021 13:29:31 +0000
Received: from DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::6437:2e87:f7dc:a686]) by DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::6437:2e87:f7dc:a686%12]) with mapi id 15.20.4219.025; Mon, 14 Jun
 2021 13:29:31 +0000
Received: from office-ryzen.texastahm.com (67.79.209.213) by
 SA9PR13CA0156.namprd13.prod.outlook.com (2603:10b6:806:28::11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Mon, 14 Jun 2021 13:29:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cd485c7-edc5-4b46-b1a2-b5fee25ed63a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TssKbYwq4JWlNirEuKAXIYFbhBum6dE3vzgfeTkaYzrFB2ioxzfCDymBenqjoIxsqQPYP0ra6zqPSmmYq7WYxyvB+OzR/MAuRcbIg3oEgSuBJ/RapBVR1QTpsbkSlOzDbY8GiQcK715dwRURWXGuF77ieoSaQWpl5z8aUlDoVOIpLBtGWv1NPO/p9dO4Pu3ag2UtYfvlKh8M2ku0gqXCevHduCdlU5RtJj/Ku5pnp0nvGTc+DJabdFMUh1y28+6XTetqezJtddMY9QEJmqQnFxz8xxyAx9sOK0x6+9gdDNeyGSqSwuHw6mNhOdB/F04oRTiXvDH/axh0KXsylHquzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4tEwxi4hVyxOZ0eN1UAOCZfBuAaTWIOrLO/RC/MnHdI=;
 b=R4DLhS/2GP2HTh8B08DBVpbcfhE849XDYc3nE2CaVJs0p3ux8lNYleOiKQxzNm9HWzev8MyrrNoOpeHll69FSbZxjuqzWb8XxMIty5PCj42rTbhi/+AVTPQPMmvk5Tgy0hyh3cbIqcHI1E35kQg36ek+XERq5x2WsxdURc52lgwVWy35mXDp5Jx4+en5upDtAFvZ8wuraOJerrkoxtuDzuGuUNW4rbSC2lrF2/GMwP4V3GNTkpDMx/bTAq9bg2r7Z201maQfivlEjZyQB8621MlkDsj/03zP4ClfxrkubKNg47IX208lklIbOXqzjr6lKMicoe5c4ijWdIE34TyjGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4tEwxi4hVyxOZ0eN1UAOCZfBuAaTWIOrLO/RC/MnHdI=;
 b=ZzpgrFF/sG/4Nosw4z7DxYtRE3dX36+KREkrQTnaqtOGDEEDGAnJlUIbQeIwixqknnSmvJryf5PqfFrsjID1VgKioG1ac5FQcg9mASJqfmBLr8Raj7FQ1A9PH8WqnX6GWNuNIY6B//1YzYneknadUbCKQUzBPwZoqomkTbhc8aU=
Authentication-Results: microsoft.com; dkim=none (message not signed)
 header.d=none;microsoft.com; dmarc=none action=none header.from=amd.com;
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
To: Christoph Hellwig <hch@lst.de>, Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
 <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
 <20210614071223.GA30171@lst.de>
From: Tom Lendacky <thomas.lendacky@amd.com>
Message-ID: <e76644b9-8eda-8e9c-8837-42299b0754d5@amd.com>
Date: Mon, 14 Jun 2021 08:29:27 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210614071223.GA30171@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [67.79.209.213]
X-ClientProxiedBy: SA9PR13CA0156.namprd13.prod.outlook.com
 (2603:10b6:806:28::11) To DM5PR12MB1355.namprd12.prod.outlook.com
 (2603:10b6:3:6e::7)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6e989d99-af08-4a06-4c51-08d92f38711e
X-MS-TrafficTypeDiagnostic: DM6PR12MB3369:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<DM6PR12MB3369A3BCC0CC33B7CD0B7E7FEC319@DM6PR12MB3369.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e989d99-af08-4a06-4c51-08d92f38711e
X-MS-Exchange-CrossTenant-AuthSource: DM5PR12MB1355.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:29:31.5169
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WI31GdrXDga81nCxIOcSfQ8NG9RmlJ/RvBfGoXtWFVMP2J3ukSn6BXcu+zCQcToiNLLEwGct1WP42ocHUZLDKQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB3369

On 6/14/21 2:12 AM, Christoph Hellwig wrote:
> On Mon, Jun 07, 2021 at 10:56:47PM +0800, Tianyu Lan wrote:
>> These addresses in the extra address space work as a system memory mirror.
>> The shared memory with the host in an Isolation VM needs to be accessed via
>> the extra address space, which is above the shared gpa boundary.
> 
> Why?
> 

IIUC, this is using the vTOM feature of SEV-SNP. When this feature is
enabled for a VMPL level, any physical memory addresses below vTOM are
considered private/encrypted and any physical memory addresses above vTOM
are considered shared/unencrypted. With this option, you don't need a
fully enlightened guest that sets and clears page table encryption bits.
You just need the DMA buffers to be allocated in the proper range above vTOM.

See the section on "Virtual Machine Privilege Levels" in
https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf.

Thanks,
Tom


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:31:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141493.261346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmgc-0000Nl-Jp; Mon, 14 Jun 2021 13:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141493.261346; Mon, 14 Jun 2021 13:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmgc-0000Ne-Gg; Mon, 14 Jun 2021 13:31:30 +0000
Received: by outflank-mailman (input) for mailman id 141493;
 Mon, 14 Jun 2021 13:31:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsmga-0000NY-K4
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:31:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d6793f5-3fec-4eb4-92c4-c61592fa5c14;
 Mon, 14 Jun 2021 13:31:27 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2110.outbound.protection.outlook.com [104.47.18.110])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-MbkfgPg5OZuoRrKLm6__Rw-1; Mon, 14 Jun 2021 15:31:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3776.eurprd04.prod.outlook.com (2603:10a6:803:18::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Mon, 14 Jun
 2021 13:31:23 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 13:31:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM8P190CA0004.EURP190.PROD.OUTLOOK.COM (2603:10a6:20b:219::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 13:31:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d6793f5-3fec-4eb4-92c4-c61592fa5c14
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623677486;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=75yDB8dNxubumROkHf+V4tVCUw57HtyIaAiia6Di6FI=;
	b=daB028z4kVLEu8PJycmyRyAXEdLBgeINBNj8mQlh5+6lCow74NJOs2cVZQKTHJ5lPaYEVj
	dxWgnrtPf5WuNTZwdKc91ebtVB5M+J1jzzyY+MkJ119a4JnNJIstTMxaMS2XzPioQTEisQ
	0TrgXtVh93sQjjqkOwkWf4TMZ/pu6k8=
X-MC-Unique: MbkfgPg5OZuoRrKLm6__Rw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JO9Uml3sUGMLBRf2Dc/RvuDvCrRYpkAN6KYFq2JnWHJii5rslx72krkPDZMyMg2D6Hl0+yTd495z4heBFaWTOHWRplx6cZmRLuWIgvvrOC/Cui7nlrEm31eFzmJuBq5vY5wys8nnP9cRMn2UTao/xVeWVaU/onFAxAthetrlUOW36+aVLFpox97K+NDMOFT756QS07s9xKJGNvdTpTSyjR0NVX2MUiKWSzeDn0D3FqT74iP49hp4FLTajr5mjM+yNH7XD2oQlmbycs1MewalxBD6fzzBxdXQtOb4yqTzOcWWHEbgOTHEN8wBAuhIVs1kKRpXNMeGO65EDT+/a2OEbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75yDB8dNxubumROkHf+V4tVCUw57HtyIaAiia6Di6FI=;
 b=nUWTEtfmx6t3Tw2jN46ish+6PCcjj7JCswiJsaxcEUkVDafIk7BmxGgrCKkZhNNNDX3aHt9FkT+QkVbZOPNUM+6fnpPKMxWhSoJ92/QFlUVR4YmjNHWjje/528JV596UyEfR3wxLJlB/hoQMnTJd/zBYTf7mHlNIjOd1frgivS+wl3CrTZnkulPfcMXDzfCnSG6yZWdVMrw4RiCx8rkVXekFBuDdO6PvndlJ76PRvSrCXve8vE80+ahBGbvTzo8LDYLsCmOLfGhq8fWWMkUfpLD9GsemD19Hn1FfoM4Q83dBXoGN18oQl58jbE2/LrQPEJSzv5uXPIEiWjebGODJHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v1.1 5/5] tests: Introduce a TSX test
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614104716.23405-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9257fd40-65cb-8b08-7639-00b15dd0aba4@suse.com>
Date: Mon, 14 Jun 2021 15:31:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210614104716.23405-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM8P190CA0004.EURP190.PROD.OUTLOOK.COM
 (2603:10a6:20b:219::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a3e1599a-936a-4df6-7b1b-08d92f38b3bd
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3776:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB37768BEFEF3FE5EA5C605D3DB3319@VI1PR0402MB3776.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2733;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a3e1599a-936a-4df6-7b1b-08d92f38b3bd
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:31:23.0848
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4bBhy7YdCBYk1HhCiZxlPOC4gJnPkutDiysv2RDpLVZcXh3k9It7Topjap+OH4ZqjkcjRJEtXhckTYysuTEwWw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3776

On 14.06.2021 12:47, Andrew Cooper wrote:
> --- /dev/null
> +++ b/tools/tests/tsx/Makefile
> @@ -0,0 +1,43 @@
> +XEN_ROOT = $(CURDIR)/../../..
> +include $(XEN_ROOT)/tools/Rules.mk
> +
> +TARGET := test-tsx
> +
> +.PHONY: all
> +all: $(TARGET)
> +
> +.PHONY: run
> +run: $(TARGET)
> +	./$(TARGET)
> +
> +.PHONY: clean
> +clean:
> +	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)

I'm surprised this is necessary, but indeed I can see it elsewhere too.

> +.PHONY: distclean
> +distclean: clean
> +	$(RM) -f -- *~
> +
> +.PHONY: install
> +install: all
> +
> +.PHONY: uninstall
> +uninstall:
> +
> +CFLAGS += -Werror -std=gnu11

Is this strictly necessary? It excludes a fair share of the gcc
versions that we claim the tree can be built with. If it is
necessary, then I think we need to arrange for the tools/ build as
a whole not to fail just because this test doesn't build. We do
something along these lines for the x86 emulator harness, for
example.

> +CFLAGS += $(CFLAGS_xeninclude)
> +CFLAGS += $(CFLAGS_libxenctrl)
> +CFLAGS += $(CFLAGS_libxenguest)
> +CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
> +CFLAGS += $(APPEND_CFLAGS)
> +
> +LDFLAGS += $(LDLIBS_libxenctrl)
> +LDFLAGS += $(LDLIBS_libxenguest)
> +LDFLAGS += $(APPEND_LDFLAGS)
> +
> +test-tsx.o: Makefile
> +
> +test-tsx: test-tsx.o

Wouldn't you want to use $(TARGET) at least here?

> +/*
> + * Test a specific TSX MSR for consistency across the system, taking into
> + * account whether it ought to be accessable or not.
> + *
> + * We can't query offline CPUs, so skip those if encountered.  We don't care
> + * particularly for the exact MSR value, but we do care that it is the same
> + * everywhere.
> + */
> +static void test_tsx_msr_consistency(unsigned int msr, bool accessable)

Isn't it "accessible"?

> +{
> +    uint64_t cpu0_val = ~0;
> +
> +    for ( unsigned int cpu = 0; cpu <= physinfo.max_cpu_id; ++cpu )
> +    {
> +        xc_resource_entry_t ent = {
> +            .u.cmd = XEN_RESOURCE_OP_MSR_READ,
> +            .idx = msr,
> +        };
> +        xc_resource_op_t op = {
> +            .cpu = cpu,
> +            .entries = &ent,
> +            .nr_entries = 1,
> +        };
> +        int rc = xc_resource_op(xch, 1, &op);
> +
> +        if ( rc < 0 )
> +        {
> +            /* Don't emit a message for offline CPUs */
> +            if ( errno != ENODEV )
> +                fail("  xc_resource_op() for CPU%u failed: rc %d, errno %d - %s\n",
> +                     cpu, rc, errno, strerror(errno));
> +            continue;
> +        }
> +
> +        if ( accessable )
> +        {
> +            if ( rc != 1 )
> +            {
> +                fail("  Expected 1 result, got %u\n", rc);

%d

> +                continue;
> +            }
> +            if ( ent.u.ret != 0 )
> +            {
> +                fail("  Expected ok, got %d\n", ent.u.ret);
> +                continue;
> +            }
> +        }
> +        else
> +        {
> +            if ( rc != 0 )
> +                fail("  Expected 0 results, got %u\n", rc);
> +            else if ( ent.u.ret != -EPERM )
> +                fail("  Expected -EPERM, got %d\n", ent.u.ret);
> +            continue;
> +        }
> +
> +        if ( cpu == 0 )
> +        {
> +            cpu0_val = ent.val;
> +            printf("  CPU0 val %#"PRIx64"\n", cpu0_val);
> +        }
> +        else if ( ent.val != cpu0_val )
> +            fail("  CPU%u val %#"PRIx64" differes from CPU0 %#"PRIx64"\n",

Nit: differs?

> +/*
> + * Probe for how RTM behaves, deliberately not inspecting CPUID.
> + * Distinguishes between "no support at all" (i.e. XBEGIN suffers #UD),
> + * working ok, and appearing to always abort.
> + */
> +static enum rtm_behaviour probe_rtm_behaviour(void)
> +{
> +    for ( int i = 0; i < 1000; ++i )
> +    {
> +        /*
> +         * Opencoding the RTM infrastructure from immintrin.h, because we
> +         * still support older versions of GCC.  ALso so we can include #UD
> +         * detection logic.
> +         */
> +#define XBEGIN_STARTED -1
> +#define XBEGIN_UD      -2
> +        unsigned int status = XBEGIN_STARTED;
> +
> +        asm volatile (".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN 1f; 1: */
> +                      : "+a" (status) :: "memory");
> +        if ( status == XBEGIN_STARTED )
> +        {
> +            asm volatile (".byte 0x0f,0x01,0xd5" ::: "memory"); /* XEND */

Nit: since this otherwise follows hypervisor style, the asm()s want more
blanks added (also again further down).

> +            return RTM_OK;
> +        }
> +        else if ( status == XBEGIN_UD )
> +            return RTM_UD;
> +    }
> +
> +    return RTM_ABORT;
> +}
> +
> +static struct sigaction old_sigill;
> +
> +static void sigill_handler(int signo, siginfo_t *info, void *extra)
> +{
> +    extern char xbegin_label[] asm(".Lxbegin");

Perhaps add const? I'm also not sure about .L names used for extern-s.

> +    if ( info->si_addr == xbegin_label ||
> +         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )

Why the || here? I could see you use && if you really wanted to be on
the safe side, but the way you have it I don't understand the
intentions.

> +    {
> +        ucontext_t *context = extra;
> +
> +        /*
> +         * Found the XBEGIN instruction.  Step over it, and update `status` to
> +         * signal #UD.
> +         */
> +#ifdef __x86_64__
> +        context->uc_mcontext.gregs[REG_RIP] += 6;
> +        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
> +#else
> +        context->uc_mcontext.gregs[REG_EIP] += 6;
> +        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
> +#endif

At the very least for this, don't you need to constrain the test to
just Linux?

> +static void test_tsx(void)
> +{
> +    int rc;
> +
> +    /* Read all policies except raw. */
> +    for ( int i = XEN_SYSCTL_cpu_policy_host;

To avoid having this as bad precedent, even though it's "just" testing
code: unsigned int? (I first spotted this here, but later found more
places elsewhere.)

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:37:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141503.261358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmmE-00018Y-8T; Mon, 14 Jun 2021 13:37:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141503.261358; Mon, 14 Jun 2021 13:37:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmmE-00018R-5T; Mon, 14 Jun 2021 13:37:18 +0000
Received: by outflank-mailman (input) for mailman id 141503;
 Mon, 14 Jun 2021 13:37:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Ibu=LI=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lsmmD-00018L-EH
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:37:17 +0000
Received: from mail-pj1-x102b.google.com (unknown [2607:f8b0:4864:20::102b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d68f023-6deb-44cf-bcfa-7b06f16bdb20;
 Mon, 14 Jun 2021 13:37:16 +0000 (UTC)
Received: by mail-pj1-x102b.google.com with SMTP id
 k22-20020a17090aef16b0290163512accedso10429061pjz.0
 for <xen-devel@lists.xenproject.org>; Mon, 14 Jun 2021 06:37:16 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 125sm12375806pfg.52.2021.06.14.06.37.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Jun 2021 06:37:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d68f023-6deb-44cf-bcfa-7b06f16bdb20
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=L2zqO1GmHtYJk6YxBI1sWe6jLyKPpb1AfJC8K4R6e5k=;
        b=Cxb1/imPkH0J0mCr4lfArmxBVSP9MZn7j5y/xPOvK408iNpZ/trCx7iTSB97b+6h2r
         GH8FYPrOC4Fgm9Z7GYXw2Q6TAfu1pePiEZj/1/rkvJ46B3Wt5yANQsUnBs0W0Zpx2yUg
         FUCFHCSdOksAFWye+3qd6MfK6evFhwMm5txRVoHom2eNUq2iAZevoB1ovrJLeR2+pngA
         zIKmOSyFmyM+dI38qtVe8XxjSsIv9gPiitEg81toKRhlb1r2KrcyYfURTl4AM29GboLz
         j4D3UMpwPezOe7xh7mP/Tz2Ele5K3oJYUbWnr9/m79e2/FDBSwGRnqztvf0n1OEzcoie
         ma5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=L2zqO1GmHtYJk6YxBI1sWe6jLyKPpb1AfJC8K4R6e5k=;
        b=a/B3N3ZZdFBF7C3uEiiZlW/ENlMFeKfBEV6YZSLwO4UzewLBTasaxyghx+h06ExBR7
         KR3u8cigkjUNboCMEDgsqiPpy8SLitPGiysWEkOC8BQgrAmKb8PEC3QaPGL7qvAov/I1
         DkQ6IMbIjT076JfqtHbPKRGImE+SODRK5sQ5xn4RKUbZFyZdtz3P9tfy1w7aIQ3OHPhM
         pz5eYArWVBSRPO4r0xd8Y3JjkaMcnk6v6h0UmARNphqfrgp9w1M1w/Mh6cy+RSbZ10xD
         BdlhRqcIbJ7IWduDiTR4wBpJC5i+xNLVy1dWKN8PGcmFKlAIoWUuHMmBrbBS97YgnDDc
         ZrAA==
X-Gm-Message-State: AOAM5328s+ieQgBzgE8e2qcJ3h5YjZQqKTlggtYIdSfSTy+MG1MPvoo5
	DHEM5Ig20HYfrYQz4xC18Tk=
X-Google-Smtp-Source: ABdhPJzc3ULYnZUK4PYU2Lj8UKWkUEgb6rd9+REx3GFM6TekjZ3ClxH5FkBxfIYNNF665iBfi2dSTw==
X-Received: by 2002:a17:902:b585:b029:f6:5cd5:f128 with SMTP id a5-20020a170902b585b02900f65cd5f128mr16662490pls.43.1623677835908;
        Mon, 14 Jun 2021 06:37:15 -0700 (PDT)
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
 <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
 <20210614071223.GA30171@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <3e64e59b-7440-69a5-75c5-43225f3d6c0a@gmail.com>
Date: Mon, 14 Jun 2021 21:37:01 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210614071223.GA30171@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 6/14/2021 3:12 PM, Christoph Hellwig wrote:
> On Mon, Jun 07, 2021 at 10:56:47PM +0800, Tianyu Lan wrote:
>> These addresses in extra address space works as system memory mirror. The
>> shared memory with host in Isolation VM needs to be accessed via extra
>> address space which is above shared gpa boundary.
> 
> Why?
> 

The shared_gpa_boundary in the AMD SEV-SNP spec is called the virtual top
of memory (vTOM). Memory addresses below vTOM are automatically treated
as private, while memory above vTOM is treated as shared. Using vTOM to
separate memory in this way avoids the need to augment the standard x86
page tables with C-bit markings, simplifying guest OS software.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:42:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:42:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141511.261369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmr2-0002W3-Ry; Mon, 14 Jun 2021 13:42:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141511.261369; Mon, 14 Jun 2021 13:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmr2-0002Vw-Nw; Mon, 14 Jun 2021 13:42:16 +0000
Received: by outflank-mailman (input) for mailman id 141511;
 Mon, 14 Jun 2021 13:42:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Ibu=LI=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lsmr1-0002Vq-9W
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:42:15 +0000
Received: from mail-pg1-x52c.google.com (unknown [2607:f8b0:4864:20::52c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id deadc58d-8e76-4385-90e2-726bcddf4c1a;
 Mon, 14 Jun 2021 13:42:14 +0000 (UTC)
Received: by mail-pg1-x52c.google.com with SMTP id t17so8669625pga.5
 for <xen-devel@lists.xenproject.org>; Mon, 14 Jun 2021 06:42:14 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 s13sm13014226pgi.36.2021.06.14.06.42.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Jun 2021 06:42:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: deadc58d-8e76-4385-90e2-726bcddf4c1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=BOH7Vu+778+ln6ekCo+GQvvDaGAsmqLU6p5AWEG4zZQ=;
        b=UL15DRdkb+5FCOorYR2hP1xNvGKF+hhVpp6Zgl0ZEc/kynMSrJYTv2B24Mo4Twd4OC
         vrOGM4BGqqMhwnmUjnn22As3rfqICUuSLDNUfN4x055EQkDEP6U8uykZ1ageGPFza6dy
         jotUZIYfU7sXGQAZHr4kuOdz1HN1DauLgWNH57sKiZSzR0dkndXAb/EFiLKvDgTKoc5n
         CF/y9pIWqcDDuK8B74SpZxdcaDkimajNnHxGveNvr7r1vPb4MaK8pI1+WDcP5irZNQIO
         K/ubQwcXeNSCfSAQihAGciweSNRMmdfdrpjGhRR3Ol9hGp7Cim/3jPpHV8NR+c4DzoMz
         2CfQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=BOH7Vu+778+ln6ekCo+GQvvDaGAsmqLU6p5AWEG4zZQ=;
        b=p9lhIZWdIFc6DT1u/BQJolvEwlezJU8vyMcJWWVuDO9/Lj9Gp6lDcw4I2AVu2iDsX8
         8micOLf1hLuCJAgxhDJdgJclMLmMI8SePF9g0trWtU3OSMsAVzaKrU92rWiTvtcfkfnB
         JsbD7ufJVIfDKXNIt6NKqKdMkFBc4mmBz25m7FhRP7l1SVnxEKPd8EEayrGyY1sMJpJs
         Zl8AuYqzCWNsKWHbm24h6lqd/M42vCyZLfAXo140shU8SBsU/AQsrT7a1eUWcKChbc/u
         HQeqKUZ93PlehyrLrALamd8afc4MYDF91yIMKQB6M3lPGAQwKf0MEv36IZVV0e0nASCU
         MsjA==
X-Gm-Message-State: AOAM532hBrnrpeprYCs0pKjz0/xk4G2pkKFPVGtECaNgLZOuxXzF8qCY
	n7DV2EAyKw7ZGjjtf+b2qNs=
X-Google-Smtp-Source: ABdhPJzSD5SSODv3ZNJBk7/EJwQiektVm8x7/VFzAUKIi7xqvpG8L/erou4acHYRKMj3nTXmR0zAfQ==
X-Received: by 2002:a63:f13:: with SMTP id e19mr16837169pgl.112.1623678133793;
        Mon, 14 Jun 2021 06:42:13 -0700 (PDT)
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
From: Tianyu Lan <ltykernel@gmail.com>
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
 <48516ce3-564c-419e-b355-0ce53794dcb1@gmail.com>
 <20210614071223.GA30171@lst.de>
 <3e64e59b-7440-69a5-75c5-43225f3d6c0a@gmail.com>
Message-ID: <ee3d79ea-f4f9-b886-e1ee-e26b42a88530@gmail.com>
Date: Mon, 14 Jun 2021 21:42:00 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <3e64e59b-7440-69a5-75c5-43225f3d6c0a@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/14/2021 9:37 PM, Tianyu Lan wrote:
> 
> 
> On 6/14/2021 3:12 PM, Christoph Hellwig wrote:
>> On Mon, Jun 07, 2021 at 10:56:47PM +0800, Tianyu Lan wrote:
>>> These addresses in extra address space works as system memory mirror. 
>>> The
>>> shared memory with host in Isolation VM needs to be accessed via extra
>>> address space which is above shared gpa boundary.
>>
>> Why?
>>
> 
> The shared_gpa_boundary in the AMD SEV SNP spec is called virtual top of
> memory(vTOM). Memory addresses below vTOM are automatically treated as
> private while memory above vTOM is treated as shared. Using vTOM to
> separate memory in this way avoids the need to augment the standard x86
> page tables with C-bit markings, simplifying guest OS software.

Here is the spec link; the vTOM description is on page 14.
https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf

Thanks.



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:49:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:49:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141523.261379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmyP-0003Eo-Lj; Mon, 14 Jun 2021 13:49:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141523.261379; Mon, 14 Jun 2021 13:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmyP-0003Eh-Ii; Mon, 14 Jun 2021 13:49:53 +0000
Received: by outflank-mailman (input) for mailman id 141523;
 Mon, 14 Jun 2021 13:49:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmyN-0003Eb-OK
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:49:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmyN-0005Rq-MJ
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:49:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmyN-0006IM-La
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:49:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lsmyC-0006UC-FL; Mon, 14 Jun 2021 14:49:40 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=7KlJg5qmLnYl1LtfWn96biimdtbJh1Th9pa+aSmotHM=; b=WG3wCI0cOBW/U/gRb2IrCbfo5z
	VU7DsHy/IRT4L93YsTNplWMrlq2fC84Mc/vVhedF+NWjgJ2bttDCUubE0dBK2IBHtzZ6usGUnC98+
	1PPA9OwERy5GHgcAkFQRuVjfOGxgk12AbgciVGzeRb5gobofJsmfa3SE6SLWnUF0c0vE=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24775.24180.199869.133786@mariner.uk.xensource.com>
Date: Mon, 14 Jun 2021 14:49:40 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Igor Druzhinin <igor.druzhinin@citrix.com>,
    Edwin Torok <edvin.torok@citrix.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?=  <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into
 xg_private.h
In-Reply-To: <bb85a8ea-c78f-1b94-6d83-224137f21500@suse.com>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
	<20210611163627.4878-5-andrew.cooper3@citrix.com>
	<bb85a8ea-c78f-1b94-6d83-224137f21500@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into xg_private.h"):
> On 11.06.2021 18:36, Andrew Cooper wrote: ... so tests can peek at
> > the internals, without the structure being generally available to
> > users of the library.
> 
> I'm not sure whether this slight over-exposure is tolerable in the tools code,
> so I'd prefer leaving the ack-ing of this change to the tools folks.

I am fine with the change described in the Subject.

But I haven't reviewed the patch, which wasn't CC'd to me AFAICT.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:50:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141526.261391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmyh-0004N3-4d; Mon, 14 Jun 2021 13:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141526.261391; Mon, 14 Jun 2021 13:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmyg-0004Mw-Vd; Mon, 14 Jun 2021 13:50:10 +0000
Received: by outflank-mailman (input) for mailman id 141526;
 Mon, 14 Jun 2021 13:50:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DfkY=LI=arm.com=robin.murphy@srs-us1.protection.inumbo.net>)
 id 1lsmyf-00041e-PD
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:50:09 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c607a8f9-1d77-4ce7-8aed-6cfbb2e81a63;
 Mon, 14 Jun 2021 13:50:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 294C36D;
 Mon, 14 Jun 2021 06:50:04 -0700 (PDT)
Received: from [10.57.9.136] (unknown [10.57.9.136])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4A2A83F70D;
 Mon, 14 Jun 2021 06:49:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c607a8f9-1d77-4ce7-8aed-6cfbb2e81a63
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
To: Christoph Hellwig <hch@lst.de>, Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
 davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
 martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
 linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 netdev@vger.kernel.org, vkuznets@redhat.com, thomas.lendacky@amd.com,
 brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <94038087-a33c-93c5-27bf-7ec1f6f5f0e3@arm.com>
Date: Mon, 14 Jun 2021 14:49:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210607064312.GB24478@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 2021-06-07 07:43, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> For Hyper-V isolation VM with AMD SEV SNP, the bounce buffer(shared memory)
>> needs to be accessed via extra address space(e.g address above bit39).
>> Hyper-V code may remap extra address space outside of swiotlb. swiotlb_
>> bounce() needs to use remap virtual address to copy data from/to bounce
>> buffer. Add new interface swiotlb_set_bounce_remap() to do that.
> 
> Why can't you use the bus_dma_region ranges to remap to your preferred
> address?

FWIW, I think a better generalisation for this would be allowing 
set_memory_decrypted() to return an address rather than implicitly 
operating in-place, and hiding all the various hypervisor hooks behind that.

Robin.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:50:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:50:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141533.261401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmzM-00058Q-CQ; Mon, 14 Jun 2021 13:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141533.261401; Mon, 14 Jun 2021 13:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsmzM-00058J-8u; Mon, 14 Jun 2021 13:50:52 +0000
Received: by outflank-mailman (input) for mailman id 141533;
 Mon, 14 Jun 2021 13:50:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmzL-000587-3P
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:50:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmzL-0005Sv-2d
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:50:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lsmzL-0006Ly-1m
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:50:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lsmzF-0006Uk-HL; Mon, 14 Jun 2021 14:50:45 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=lmsjuCqy01w2J3x7XLhXB8nl/HjAhLx07qnDGRbWeVk=; b=QNQlzLU70gy0ZGv82BSsayl6kQ
	K7liR1/Gfg8dzs+z8K2nuG5sI4E5CPqvH3h/NY3T/glnca+DuupnUHVRbdCR8Av+IPyhsWXiOYg6r
	441l0mcdJZu9kQ0RL0nPOMHvitBPbB8DqxDIDcAISg9cnsOvaiZVIXM6AUuKyxUEG43g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24775.24245.273501.946833@mariner.uk.xensource.com>
Date: Mon, 14 Jun 2021 14:50:45 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
    xen-devel@lists.xenproject.org
Subject: Re: [OSSTEST PATCH] ts-xen-build: Turn on CONFIG_PV32 again
In-Reply-To: <f45a45c0-b847-d6f8-d613-f2111c390929@suse.com>
References: <20210611170230.20195-1-iwj@xenproject.org>
	<f45a45c0-b847-d6f8-d613-f2111c390929@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [OSSTEST PATCH] ts-xen-build: Turn on CONFIG_PV32 again"):
> On 11.06.2021 19:02, Ian Jackson wrote:
> > CC: George Dunlap <george.dunlap@citrix.com>
> > Suggested-by: Jan Beulich <jbeulich@suse.com>
> > Signed-off-by: Ian Jackson <iwj@xenproject.org>
> 
> FWIW:
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.  FTR, I pushed this on Friday and it is now in production, as
you can no doubt tell.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:52:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141542.261413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn1C-0005qX-Ou; Mon, 14 Jun 2021 13:52:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141542.261413; Mon, 14 Jun 2021 13:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn1C-0005qQ-L2; Mon, 14 Jun 2021 13:52:46 +0000
Received: by outflank-mailman (input) for mailman id 141542;
 Mon, 14 Jun 2021 13:52:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsn1B-0005qJ-0O
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:52:45 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ee17a86-bcb2-4d8a-8f54-4ca86d920634;
 Mon, 14 Jun 2021 13:52:44 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2053.outbound.protection.outlook.com [104.47.0.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-30-016OOoopOMeL7U5QbbjGBw-1; Mon, 14 Jun 2021 15:52:42 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4848.eurprd04.prod.outlook.com (2603:10a6:803:55::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Mon, 14 Jun
 2021 13:52:40 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 13:52:40 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0068.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:b4::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 13:52:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ee17a86-bcb2-4d8a-8f54-4ca86d920634
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623678763;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=uHxN8njyAPbAT4wEv5MVDJ50g6OUZutBTJuMLi18sp0=;
	b=TI47cXuyyQTJ3kHP1fYLzfvRWjTR07wDez+cDvn1bnrNOd9wCwcjpt2Li0md6xpETIzyYU
	QXlvqa/Nc/K+ru+AZvM9kghhFL106Hr1d0eiF4WKbIIYene2FYa4f3poMdmagr5ulcir95
	dJLc0Ai1GZrIOzMHMA+3sdjwfqh4wkg=
X-MC-Unique: 016OOoopOMeL7U5QbbjGBw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fuivG3+pMRNI7/2IRkGMiKkkh3g6S7ofGoh3FAsA27KW4y6GM4kSSo4zDhAVh6ntipctTYV+agNesoA6wZB2Yk7FNNyTKCm2TF7NNaqY9/lZP/zqEFMiA1X9J4KZ3dPM0eeb/6PCpg3lBDO1UNJrI1sX9M8xi8E8VkZGqaccVLsdgzVgDolFVlT4M7hsFpGaJlJnu97/fDF+KJGwyOz3/+UKs0urSDibMPNw9F2w7cD1CqVeLzkWTwj9oNVygz/3J+sGMtlVjYwtJeM0lnctrfGn9XzRvFXjmRZhlGqLYQsSEFLPsTeTpWi/WxG2/o48COhS66gvQccqw+SZma/sTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uHxN8njyAPbAT4wEv5MVDJ50g6OUZutBTJuMLi18sp0=;
 b=D8/dHsAVr/d71EjHnzhp1TlodLXqmjGpuRPH9+gdd7+v726sjjRWtaPhtsTOkyi8qRZKFRvf+j/dTCc5vWjNtRky3HoUokJNDAhkEdfli7MC0+rxDvYhr9Csg7d1LG3JletjhgWWHANrmt7R4Ublm7Bf3V2Gf7nAlmpLNXag9ltaqdts3FM6q0LyimPVgk6nF4Z9bG00fJsl2bpCPmdlw231FF1ovg2U9BaSnpnil9oCHe3b3usw8vne0hu4n/7WnsMz7/z/rwaXcI0tfoRQKpp0Cw8Zx2SxYtcIvHUHPICSwzrCTCOmyhPHZmjBG3zHLHsuot45GP/rtCFBwQO9FQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] Arm: avoid .init.data being marked as executable
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <5c173e92-f615-c95a-21a2-5c894727414d@suse.com>
Date: Mon, 14 Jun 2021 15:52:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0068.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::13) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a0b7af54-b7e8-45b8-d3cd-08d92f3bacf2
X-MS-TrafficTypeDiagnostic: VI1PR04MB4848:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4848CEE5AB322257F2A5F82BB3319@VI1PR04MB4848.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0b7af54-b7e8-45b8-d3cd-08d92f3bacf2
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:52:40.1826
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CxDhiwj1iWoKM8Zy4+fbbSzViqmLGoO/MkmPDAWAmo2aHadSJCzMXPsreWLK2V66XEDeCtGmDCG0SvvXXdG5pQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4848

This confuses disassemblers, at the very least. Move
.altinstr_replacement to .init.text. The previously redundant ALIGN()
now gets converted to page alignment, such that the hypervisor mapping
won't have this as executable (it'll instead get mapped r/w, which I'm
told is intended to be adjusted at some point).

Note that for the actual patching logic's purposes this part of
.init.text _has_ to live after _einittext (or before _sinittext), or
else branch_insn_requires_update() would produce wrong results.

Also, to give .altinstr_replacement consistent attributes across the
object files, add "x" to the one instance where it was missing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Put past _einittext.

--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -148,6 +148,8 @@ SECTIONS
        _sinittext = .;
        *(.init.text)
        _einittext = .;
+       . = ALIGN(PAGE_SIZE);        /* Avoid mapping alt insns executable */
+       *(.altinstr_replacement)
   } :text
   . = ALIGN(PAGE_SIZE);
   .init.data : {
@@ -169,8 +171,6 @@ SECTIONS
        __alt_instructions = .;
        *(.altinstructions)
        __alt_instructions_end = .;
-       . = ALIGN(4);
-       *(.altinstr_replacement)
 
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
        . = ALIGN(POINTER_ALIGN);
--- a/xen/include/asm-arm/alternative.h
+++ b/xen/include/asm-arm/alternative.h
@@ -67,7 +67,7 @@ int apply_alternatives(const struct alt_
 	ALTINSTR_ENTRY(feature,cb)					\
 	".popsection\n"							\
 	" .if " __stringify(cb) " == 0\n"				\
-	".pushsection .altinstr_replacement, \"a\"\n"			\
+	".pushsection .altinstr_replacement, \"ax\"\n"			\
 	"663:\n\t"							\
 	newinstr "\n"							\
 	"664:\n\t"							\



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:53:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:53:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141549.261424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn2J-0006St-2h; Mon, 14 Jun 2021 13:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141549.261424; Mon, 14 Jun 2021 13:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn2I-0006Sm-Vi; Mon, 14 Jun 2021 13:53:54 +0000
Received: by outflank-mailman (input) for mailman id 141549;
 Mon, 14 Jun 2021 13:53:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsn2H-0006Sa-Cy
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:53:53 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39cdf45d-8500-400f-b461-5031f4456551;
 Mon, 14 Jun 2021 13:53:52 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2055.outbound.protection.outlook.com [104.47.13.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-27-qQa5LXm9OSKUwqy-z1y1aw-1; Mon, 14 Jun 2021 15:53:50 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3119.eurprd04.prod.outlook.com (2603:10a6:802:10::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Mon, 14 Jun
 2021 13:53:47 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 13:53:47 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2PR09CA0013.eurprd09.prod.outlook.com (2603:10a6:101:16::25) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Mon, 14 Jun 2021 13:53:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39cdf45d-8500-400f-b461-5031f4456551
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623678831;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=MmjRk9izpB0nnTeZQGZQ903HcGexrQTGFs+AxEdyMUI=;
	b=IfB+UetWsj3VM6FOQbYhrSlTsA7u/PjUnqHIOVzwZ2o+6kgV3oW551Rx/dDwBLPnJHbTV6
	W7R8gbSKEhlv0jywFLPvJ2+l2w+4HkRLijJTeZsbOo87wJ90ZJkSAAITBsPAtBoYc3WHKz
	dqF3kP6sKn+B0MNtia+NApr6VuplUYM=
X-MC-Unique: qQa5LXm9OSKUwqy-z1y1aw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CAGgDoUn0zghCFbLNl6UJsUL2z2hFQRwlaClh85kpEdHleFJzzaFfoHfLUCK+qTYRcFML7jDyqLfgCvG3Gn5sOQiAZ5iWcr1LQN3Q4tTKMegFVLlEfofHdBW6e7nB9cj9+V96FwLYOfzM9LThFy7BVN10GTz/Y5urwlTXLGEE7d5rnazUaogbw9kOH2TgoHFTR+0IhOrJgVhXPsU1ZvzsdovA6Xl614/VXBeUXTX6lFzCLWI7FVQ1R5kwgBKagAgffg7n9HCZcig/cGJtPykUUjRiZzjlwh1VLGdTijVNhPuVj9zObVvBUxKUMrD44GZoMyjiYpidFdcnuE0/HPPJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MmjRk9izpB0nnTeZQGZQ903HcGexrQTGFs+AxEdyMUI=;
 b=eBrfUReXSXga97oPAC20xscm6wRTzsxMhfz8xbOYe7FOSE/AGm1u+PrA4RSLGaWEdI57a+tfBj4CzTJhB2pBnwwjHrgsB4Bkn+PcTTMPD8kxK+SFUlZl3cw3ujtaQQG55rIBw4wiY88baVrOE65xlgz/oFy9fSG0Q8maWMcZZGAHb5DmlaIwEKnZSIo3ylsWy72mzA82YgRtQujV96MG/+pn1cp0bhjBRV35ovZPG/yhJ8Gell0PbTdITGj6zYFOn0+9E8gtg0DmKFb9XOfsdq0YHN8Vl5FB5wqoQLTXN20qJIMdrBLXQZaGb0jzYu+Mv0woMCzH6QS7ZVIJkzD40A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: move .altinstr_replacement past _einittext
Message-ID: <759595e2-808a-46c0-7a93-fadecfeb991b@suse.com>
Date: Mon, 14 Jun 2021 15:53:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2PR09CA0013.eurprd09.prod.outlook.com
 (2603:10a6:101:16::25) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6fcc1563-ca70-4190-5cff-08d92f3bd521
X-MS-TrafficTypeDiagnostic: VI1PR04MB3119:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB3119513E084E8B985DF2CC40B3319@VI1PR04MB3119.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8480KIYf6zDgALnhoUtWzFmjyJg+LJUTqe/kbKhPAVizPfWmy/JdNOJZ3vg6y0M3h8xR9pqPUXb4HGpBKQBfomxF8FHTaRA6k2hLTjm8ZK6XqQt9FX0pfUB0isWXoQPcWENZGpvS+5GCEc4aLZZsQt6oj0x6T8IM9BvOBmP9oHbhgW1R1jjnILsb0XzmM1D8Bd2j/hxtwbSXo/K6/l3fPj48wjel/j2zHgCYMoVS84rSFMh+jO//BQLAbkAad+1hcR4AQM36QKu0CETbuStyhOYFw6sTVKYDahJvdFrlnqXLNzMOteWeWLqsumf003km568Se9vrhcm+pdz9JlwU9vE6L2P5z4d9cJbRChP95da87rvq8thF2+iKLh99SyVsswr4aDUjNzOWhxr8PfQnCgkJUhUU5ftmewyfuvagG5umVmMTcUBw+VEBMAfKKSN4O/e8PjS5dRnrClG36wUDduTvApuAqlgf40feviB2C4sL038JzEC4hDupdy7btMS+WJipm8gEf6sbQPfBEZMtzgS5keq2sZJ9n3eNmDoIPTrwhiOujSX9y+194Z1BJ0tu2Q4KeKwT/sRLSLJHddO917BphPE6h+WcPNexRqCsqWdTRG+QsolkDmSWQOlUdB8FwXRZBPlet27VPpFeEi2PLKLO1+EKjtfI4OQfbh8RImh37o95dI40HlFob+hxBv9b7MiPIDphI5unQEfBBvlSbQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(366004)(39850400004)(396003)(346002)(136003)(66476007)(66556008)(5660300002)(38100700002)(2616005)(956004)(8936002)(66946007)(2906002)(31696002)(31686004)(4744005)(316002)(16576012)(54906003)(86362001)(36756003)(478600001)(8676002)(16526019)(186003)(6486002)(26005)(4326008)(6916009)(142923001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SzlBZVB6RmRzMXhvbzl5RnptSTBKYVp5Zmh1Nll3ZEVHd2k1dHd1MEZvS09Y?=
 =?utf-8?B?eW16T1BXQ1kwd3NzZWdzOEhaZzdNRjdmazBUaHhvRUh6L3dtbHU1RkE2TDJs?=
 =?utf-8?B?RVpuL3QyZStDNjdIUUg4OVlJU2pzcStVNGx3enZadzAwemxLN1BMUVl5VHlo?=
 =?utf-8?B?bkdXblFRNmhwZkpjdlVsMjk3UHBDbkpNendUVzdFYktseEhHZVVqZC90aGtK?=
 =?utf-8?B?YVZ3R0Y3c25XeXdLN0pnR2IyVm5iZHJvdm9jNkRWUTlONkF2WDQyTUorSzIw?=
 =?utf-8?B?RHc0a3pqS3RKcFZwRHNKTDA1MWFZSTFrZFUvZW5VRmlwQkxyZnBoVnFkSXRO?=
 =?utf-8?B?Y05KWXpWM1RRVjc5b29LUVJXQy9QcGJ4bWNCdHdyQ01Kb0tWanoxVUpsWkg0?=
 =?utf-8?B?VldudnM2WUIvSTBJVWl5eFFzUGFpeWw4eWQ0a290SkZsTDA0V3dJYUZzZUQr?=
 =?utf-8?B?ZitEbWpwZm1rNHlwdUJnZ2ZIQlNsaU1LOGNvTFl2T2syVXkvTHBtcGtXNVIz?=
 =?utf-8?B?d0Y5eFFIdGxaOTFRdEdqQUt4TnZJbmtuTm5oVG1veWZ6WWtNdnhvMGJYb2Uw?=
 =?utf-8?B?dEd0QkxYSUllMHNlOHB2dFh4cmxmejV2VVR1QVVwdXZJWEJaMjRsREY1Ykdk?=
 =?utf-8?B?dEpqR2RLdHR3YjdXRFdpV29nbkdSVzJTVGprTDdBQktReFA1T0FjRTdyMzFF?=
 =?utf-8?B?SkNKMzk0cFlEWVorNFZLZGdCeTJDTkZiWXVGNzR6OGRmMjlxZCtHRWlBMDdC?=
 =?utf-8?B?UGZldHpaOWVyc215THVhUEllUDdVT25qaXdQUFdid3duZWxJTWQraSszOGRm?=
 =?utf-8?B?dEMxbXNDWTcvc2Rxb0Z2aC94NWFGMzQvWitTaXdkWkE2cm1EcTBvOFNNQTUz?=
 =?utf-8?B?VEEwSitkMUtqcnp4bzhpYytZb1JRbFVUd2tuRnVvUElzd05UL2puTHhqbDNh?=
 =?utf-8?B?ekxZV05pWTd4ZWJFZjQ3aUZLTWoyUDdmSDl3d2hGbmtZcG1nSGVkOVJrcVll?=
 =?utf-8?B?akZrK0lEbm9UMUYwbXFwTG1kZ3ExYzQ3Q0NZZEpiRWpEVkp4UDRzUHdBQWNr?=
 =?utf-8?B?L1ZtWjhDdndYKzZyMTZaaFhWTWxuOVVQRXdVZEJVUVlseGZLN0FucU1mWE5k?=
 =?utf-8?B?TWhjYnB0VlN1WWNsdWNJcTBYUlgrTHZyN0JKRHU2c2tGTVE0YmFnTXRSYmVE?=
 =?utf-8?B?UVJHcWlQNmlOa1U1SjJiNjJ6TjZJdXIxT1hIeDVLMTMwM3NMdUNwT29RZmdh?=
 =?utf-8?B?ODUraDhCenlCeFdobmNDSVhDNnBsb2FjTm4zcHFCT2hHYldHdlU3Vm05bURv?=
 =?utf-8?B?SGxJcDVpNjVvdExMWnR1b2tmaGRERk5UR2ptc0ROaDB4T0NWUHh2MzZPZDc1?=
 =?utf-8?B?czVkQ2plV0pjRWJjOUVzTlIzdzBSajJvd3BZSGVyekhKLytUUy9hVlBRSDR6?=
 =?utf-8?B?N2Y0Q0g0SDdraFBsWFlpYlBDNHhFV0Z0dEg5SG02VDRZTjZXd3F4ZVdlaU13?=
 =?utf-8?B?NmNUMVphcGtvWC95bnNyM0x1V293VW9jT3ZRcjFPWlZkU1Q2SVBzNGN3UGg5?=
 =?utf-8?B?U2taZU5iYkJROU9BU3Y4TmxIMzVWbXhoU1Bpb0VydVlHTzhRM3doUnlPOVBz?=
 =?utf-8?B?VVpRZy91YjlHWDEydm4zSVZwRjFBYThzN0ZzVXZCQ1lMVERQSWJYbndIZ2lp?=
 =?utf-8?B?NXI4a2ZtMGNHWHpybGZFUXZaaDk3K2FhSVhCY1B2aDhrSFpDa0UzQWF2ME5T?=
 =?utf-8?Q?zFFZyIEKpXK8p2hh2MTxYGo+RcSf+a0on+qlek4?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6fcc1563-ca70-4190-5cff-08d92f3bd521
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 13:53:47.5784
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YMgRrTjwMROxAv1kue79OAKY2JL4Ii8UUVFtOngeQlP1eVtGQP6FrSU22RPHlE7SQiFKFL9Ohgnc3McEZZuRJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3119

This section's contents are not part of the actual hypervisor text, so
they shouldn't be included in what is_kernel_inittext() or (while still
booting) is_active_kernel_text() reports "true" for. Keep them in
.init.text though, as there's no real reason to have a separate section
for this in the final binary.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -185,13 +185,13 @@ SECTIONS
 #endif
        _sinittext = .;
        *(.init.text)
+       _einittext = .;
        /*
         * Here are the replacement instructions. The linker sticks them
         * as binary blobs. The .altinstructions has enough data to get
         * the address and the length of them to patch the kernel safely.
         */
        *(.altinstr_replacement)
-       _einittext = .;
 
 #ifdef EFI /* EFI wants to merge all of .init.*  ELF doesn't. */
        . = ALIGN(SMP_CACHE_BYTES);



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:54:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141550.261434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn2S-0006nn-Bj; Mon, 14 Jun 2021 13:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141550.261434; Mon, 14 Jun 2021 13:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn2S-0006ng-8L; Mon, 14 Jun 2021 13:54:04 +0000
Received: by outflank-mailman (input) for mailman id 141550;
 Mon, 14 Jun 2021 13:54:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsn2Q-0006me-EA
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:54:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsn2Q-0005Wl-CJ; Mon, 14 Jun 2021 13:54:02 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsn2Q-0006Wh-5d; Mon, 14 Jun 2021 13:54:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BE6+p6K/3wf7CiK2RQ+Z6ZYQ04bSmw2gSzRvIDaF94Q=; b=GXXG+vxyP540n1vLcjRxxJSGsF
	iGvfdpn8RRZes2hx88REHnCVWZ7/IUQL4c0KJ1m9mRkP63BBawcUcq9Py6gwHPh1lgY5JDtDcUIgh
	NYTxjhAZanXZuzkYFHGFH5wmiLPI0PPRvwyUGUk6uPYO865G11KFAAkZBL0B3nCy1TUY=;
Subject: Re: [PATCH v2] Arm: avoid .init.data being marked as executable
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <5c173e92-f615-c95a-21a2-5c894727414d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <74fbf731-59f2-0b2e-8707-142091a5876d@xen.org>
Date: Mon, 14 Jun 2021 15:54:00 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <5c173e92-f615-c95a-21a2-5c894727414d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 14/06/2021 15:52, Jan Beulich wrote:
> This confuses disassemblers, at the very least. Move
> .altinstr_replacement to .init.text. The previously redundant ALIGN()
> now gets converted to page alignment, such that the hypervisor mapping
> won't have this as executable (it'll instead get mapped r/w, which I'm
> told is intended to be adjusted at some point).
> 
> Note that for the actual patching logic's purposes this part of
> .init.text _has_ to live after _einittext (or before _sinittext), or
> else branch_insn_requires_update() would produce wrong results.
> 
> Also, to give .altinstr_replacement consistent attributes across the
> object files, add "x" to the one instance where it was missing.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> v2: Put past _einittext.
> 
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -148,6 +148,8 @@ SECTIONS
>          _sinittext = .;
>          *(.init.text)
>          _einittext = .;
> +       . = ALIGN(PAGE_SIZE);        /* Avoid mapping alt insns executable */
> +       *(.altinstr_replacement)
>     } :text
>     . = ALIGN(PAGE_SIZE);
>     .init.data : {
> @@ -169,8 +171,6 @@ SECTIONS
>          __alt_instructions = .;
>          *(.altinstructions)
>          __alt_instructions_end = .;
> -       . = ALIGN(4);
> -       *(.altinstr_replacement)
>   
>   #ifdef CONFIG_DEBUG_LOCK_PROFILE
>          . = ALIGN(POINTER_ALIGN);
> --- a/xen/include/asm-arm/alternative.h
> +++ b/xen/include/asm-arm/alternative.h
> @@ -67,7 +67,7 @@ int apply_alternatives(const struct alt_
>   	ALTINSTR_ENTRY(feature,cb)					\
>   	".popsection\n"							\
>   	" .if " __stringify(cb) " == 0\n"				\
> -	".pushsection .altinstr_replacement, \"a\"\n"			\
> +	".pushsection .altinstr_replacement, \"ax\"\n"			\
>   	"663:\n\t"							\
>   	newinstr "\n"							\
>   	"664:\n\t"							\
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 13:56:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 13:56:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141565.261445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn4b-0007ld-Ux; Mon, 14 Jun 2021 13:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141565.261445; Mon, 14 Jun 2021 13:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsn4b-0007lW-Ra; Mon, 14 Jun 2021 13:56:17 +0000
Received: by outflank-mailman (input) for mailman id 141565;
 Mon, 14 Jun 2021 13:56:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsn4a-0007lQ-Ui
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 13:56:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c708129e-2d91-463c-ad09-e0518637bb9f;
 Mon, 14 Jun 2021 13:56:16 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2105.outbound.protection.outlook.com [104.47.18.105])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-16-alet17oxN_SYiDmNPNYKzg-1; Mon, 14 Jun 2021 15:56:14 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB2958.eurprd04.prod.outlook.com (2603:10a6:802:a::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Mon, 14 Jun
 2021 13:56:12 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 13:56:12 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0078.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:b4::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 13:56:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c708129e-2d91-463c-ad09-e0518637bb9f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623678975;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1xsCjtvTCIbMdR4RMF9+oPSu85gzSIhR7OD1ao9vaiw=;
	b=mnIj+XaR2POCzfpDKr2Xvb4ucslauCPGoa6YMDyUf9sv9qMeas3++nKLc0Yw7Xrj1lYkUw
	rvDdetZ9ddgfpqnA1YY1i2NJsZsOPzlA2+evBOLtHHUjFHj72k3ClTAtdphwQdIcIxvSqN
	rFvQAZ5n0SUTWHRYzqFpfJT1+5IxWng=
X-MC-Unique: alet17oxN_SYiDmNPNYKzg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C1KGc5iJIPp+UAWZxnLEp19SNq4KbwlPIIQ4z46mjLTfvWGcTK1lY36sxbpXLaOgJ1P728skRVuRFcD1LizkkqlDwz4olHPblbmj2N6DvOyCUFSExa72eNy5wK5BknBjjeZNbP/eeFivKW3UHOpB45wT597DUzc+5s38kcV5HIbRdGcjbmR9W0jnKwvzeYLGoo8OnZsZo87lkrz0euvsts2lz0kMDjsAzIF0k3M4R8DvAzzzHCsDJRXY2+vu6PXz9WoS+gXQEiwJiHBKjutLYTP6pfwVYyAWIE33Bo6iRL4u8OuEgmHvGYTPhdutb/F2z+Kw3SU3bTPgcaTF997Zjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1xsCjtvTCIbMdR4RMF9+oPSu85gzSIhR7OD1ao9vaiw=;
 b=F2g3k4MC0IkyualppfF6ENPPMLC5RUWEDxGuPS64S4hzWBUMhR1rkrhGNV3IX0ZQIYyZ/tZWWuyRkF5owmfQUuVng6TNerCpM291vPhf1WrcOAc8Ffh2beGACgZwFfyToeDsx3i5cYQ/bsS2WPWM9r0goE8eUEevjIR/gQwqQ1CHpYPcxV+k0Jn8GnnVWMahdqdHDCO8n+aebAEd8BOewEKZ3R7tmPzZceymsENgZfKrBdVlKT5FL5ln1g6NmnrG51TEoO6yDt/8vweufL4R3EK/4zrlfTv23jTjPiRKNIa76S87a6YfSPPGnTApfJf9ddJjLRcGHgPA5+lkbHfCXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into
 xg_private.h
To: Ian Jackson <iwj@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-5-andrew.cooper3@citrix.com>
 <bb85a8ea-c78f-1b94-6d83-224137f21500@suse.com>
 <24775.24180.199869.133786@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4e35d57-2a61-6faf-71cf-c82da7499488@suse.com>
Date: Mon, 14 Jun 2021 15:56:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <24775.24180.199869.133786@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0078.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::23) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 14.06.2021 15:49, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH 4/5] libs/guest: Move struct xc_cpu_policy into xg_private.h"):
>> On 11.06.2021 18:36, Andrew Cooper wrote:
>>> ... so tests can peek at
>>> the internals, without the structure being generally available to
>>> users of the library.
>>
>> I'm not sure whether this slight over-exposure is tolerable in the tools code,
>> so I'd prefer leaving the ack-ing of this change to the tools folks.
> 
> I am fine with the change described in the Subject.

In which case I'm happy to give
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan

From xen-devel-bounces@lists.xenproject.org Mon Jun 14 14:04:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 14:04:34 +0000
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-11-ltykernel@gmail.com>
 <20210607065007.GE24478@lst.de>
 <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com>
 <20210614070903.GA29976@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <e10c2696-23c3-befe-4f4d-25e18918132f@gmail.com>
Date: Mon, 14 Jun 2021 22:04:06 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210614070903.GA29976@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit


On 6/14/2021 3:09 PM, Christoph Hellwig wrote:
> On Mon, Jun 07, 2021 at 11:21:20PM +0800, Tianyu Lan wrote:
>>> dma_map_single can only be used on page backed memory, and if this is
>>> using page backed memory you wouldn't need to do these phys_to_virt
>>> tricks.  Can someone explain the mess here in more detail?
>>
>> Sorry, could you elaborate on the issue? The pages in the pb array are not
>> allocated by the DMA API; dma_map_single() is used here to map these pages'
>> addresses to the bounce buffer's physical addresses.
> 
> dma_map_single just calls dma_map_page using virt_to_page.  So this
> can't work on addresses not in the kernel linear mapping.
> 

The pages in the hv_page_buffer array here are in the kernel linear
mapping. The packet sent to the host contains an array holding the
transaction data. In an isolation VM, the data in these pages needs to
be copied into the bounce buffer, so dma_map_single() is called here to
map these data pages to the bounce buffer. The vmbus has a ring buffer
to/from which the send/receive packets are copied. The ring buffer has
already been remapped to the extra space above the shared GPA
boundary/vTOM while probing the netvsc driver, so the DMA map function
is not called for the vmbus ring buffer.

From xen-devel-bounces@lists.xenproject.org Mon Jun 14 14:10:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 14:10:21 +0000
To: Jan Beulich <jbeulich@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-4-andrew.cooper3@citrix.com>
 <bcb7cacf-f18d-ed74-00b4-854b403bef2e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/5] x86/msr: Expose MSR_ARCH_CAPS in the raw and host
 policies
Message-ID: <9105e14e-3a3e-4e25-e809-702f72207f11@citrix.com>
Date: Mon, 14 Jun 2021 15:10:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <bcb7cacf-f18d-ed74-00b4-854b403bef2e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 14/06/2021 13:57, Jan Beulich wrote:
> On 11.06.2021 18:36, Andrew Cooper wrote:
>> @@ -60,6 +65,11 @@ static void __init calculate_host_policy(void)
>>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>>      /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>>      mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>> +
>> +    mp->arch_caps.raw &=
>> +        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
>> +         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
>> +         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO);
>>  }
> Isn't this a little too simple? For CPUID we consider the host policy
> to be what Xen is using. Taking ARCH_CAPS_SKIP_L1DFL as an example,
> we're not using it unconditionally (depending on opt_md_clear_hvm and
> opt_l1d_flush), i.e. there's command line control over its use just
> like there is over the CPUID bits.

But we don't go clearing CPUID bits for features we choose not to use.

ARCH_CAPS_SKIP_L1DFL, despite its name, is a statement of how hardware
(and/or our outer hypervisor) behaves.

It means "you don't need to flush the L1D on VMEntry to mitigate L1TF",
whether or not we employ fine tuning to change what Xen does.

>  Or take ARCH_CAPS_RDCL_NO, which
> we set unilaterally for AMD/Hygon.

That is local to spec_ctrl.c, and a mistake in hindsight.  It was
written at a point in time when it wasn't clear whether AMD were going
to implement MSR_ARCH_CAPS or not.

The logic in spec_ctrl.c will change substantially when we load
microcode and collect the raw/host policies at the start of boot.

> I don't mind it remaining this simple for the moment, but then at
> least the commit message should state that this is currently over-
> simplifying things. If you agree, then with suitable wording added:
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

This is "mask all features not known to Xen".  For CPUID bits, it's
done by the masking against known_features[] (autogenerated by
gen-cpuid.py), but we have no equivalent for MSRs yet.

We're definitely going to have to invent something (VT-x is going to be
a total nightmare without it), but I haven't got any clever ideas right now.

I'm happy to insert a comment saying that this is a substitute for not
having known_features[] for MSR bits yet.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 14:51:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 14:51:04 +0000
To: Jan Beulich <jbeulich@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614104716.23405-1-andrew.cooper3@citrix.com>
 <9257fd40-65cb-8b08-7639-00b15dd0aba4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v1.1 5/5] tests: Introduce a TSX test
Message-ID: <cecc837c-e261-17d8-a77e-044256d8bc0b@citrix.com>
Date: Mon, 14 Jun 2021 15:50:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9257fd40-65cb-8b08-7639-00b15dd0aba4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0
 =?utf-8?B?TlJhQ3JOYjltS0RXcm9IOUxGUGg1bWZRcVpWTjZjYkthdEJnSzhFWFJEQ1NP?=
 =?utf-8?B?WDFFOTVYR0hSczRJOVJlTjVnU0RaODJ3T2g3K1VUQ01aWWtyNThUY3pFVFVY?=
 =?utf-8?B?aUZJdDRabC9KSCtiazhPbHllRlJGd3hKOXI1WlUvdjZQZjA3ZTBzcVVySGtE?=
 =?utf-8?B?WmtJR1hrZWxBcEV2ZUZWdUJqbWZ5b0x4SFhkTkgxVzVRTkpDRFVLWktoa0ho?=
 =?utf-8?B?NlNURG9YbzBpOUtlakJ4RlZHR2krY0dlZGRFa0FTZCs5V0dJbi9HanArNktF?=
 =?utf-8?B?SzJBV24xZTZ6TUM0Z1BEb0ZDMnhteUo0WlJvTVZqSDJqdit4NUxObnFHNUNP?=
 =?utf-8?B?REo2NWliTmNBWm9CdENKcERzaG0wYnVlclk2RnhJV3pzbldJa3RhQUwvSUY2?=
 =?utf-8?B?TUN1MW5qZ2k2VDlCbmg4NjFJNEdlNm10YXA5dUYyV2hHQkx3UWVmSGVXZFVy?=
 =?utf-8?B?VkxKR0RzcksvYnpTWHN0TzZkQ1lpZk1TMHMwaE95YlhPZ0RtZytnZ2N3aTc0?=
 =?utf-8?B?WDIyZ2kwQ0N5ZFJTNWxuYm5aTmc5R3IxeUNJMm5mZXlVTnhaenFxRHNWcDd5?=
 =?utf-8?Q?9Fb6MvKZwsnJilLFFFikAjX9uLilVovxwjP3ozV?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f75b76c-2b0b-48c3-6f27-08d92f43cdd1
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 14:50:51.5972
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1u1OYaLXedUTIBR9Qhlx8IF3hbRvOpyFqE2HyW5LUnp0GcB/izqGb2PxbgwoaQ6koh+hEAFFf6W1eeUe0Qu8hBiryUPaCB0o9kfphNYvQ1w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5774
X-OriginatorOrg: citrix.com

On 14/06/2021 14:31, Jan Beulich wrote:
> On 14.06.2021 12:47, Andrew Cooper wrote:
>> +.PHONY: distclean
>> +distclean: clean
>> +	$(RM) -f -- *~
>> +
>> +.PHONY: install
>> +install: all
>> +
>> +.PHONY: uninstall
>> +uninstall:
>> +
>> +CFLAGS += -Werror -std=gnu11
> Is this strictly necessary?

Appears not.  Dropped.

>
>> +            return RTM_OK;
>> +        }
>> +        else if ( status == XBEGIN_UD )
>> +            return RTM_UD;
>> +    }
>> +
>> +    return RTM_ABORT;
>> +}
>> +
>> +static struct sigaction old_sigill;
>> +
>> +static void sigill_handler(int signo, siginfo_t *info, void *extra)
>> +{
>> +    extern char xbegin_label[] asm(".Lxbegin");
> Perhaps add const? I'm also not sure about .L names used for extern-s.

Well - they work perfectly fine even with the Clang integrated assembler.

>
>> +    if ( info->si_addr == xbegin_label ||
>> +         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
> Why the || here? I could see you use && if you really wanted to be on
> the safe side, but the way you have it I don't understand the
> intentions.

That should have been &&, and I also appear to have lost a noclone
attribute.

>
>> +    {
>> +        ucontext_t *context = extra;
>> +
>> +        /*
>> +         * Found the XBEGIN instruction.  Step over it, and update `status` to
>> +         * signal #UD.
>> +         */
>> +#ifdef __x86_64__
>> +        context->uc_mcontext.gregs[REG_RIP] += 6;
>> +        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
>> +#else
>> +        context->uc_mcontext.gregs[REG_EIP] += 6;
>> +        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
>> +#endif
> At the very least for this, don't you need to constrain the test to
> just Linux?

I guess it was too much to hope that this would be compatible across the
BSDs too.

And the FreeBSD CI did notice it, but apparently didn't email me...

I'll try to make it BSD compatible.

>
>> +static void test_tsx(void)
>> +{
>> +    int rc;
>> +
>> +    /* Read all policies except raw. */
>> +    for ( int i = XEN_SYSCTL_cpu_policy_host;
> To avoid having this as bad precedent, even though it's "just" testing
> code: unsigned int? (I've first spotted this here, but later I've
> found more places elsewhere.)

Well - I question whether it even is "bad" precedent.

For array bounds which are constants, the compiler can (and does) do
better than anything we can write in C here, as it is arch-dependent
whether signed or unsigned is better to use.

Beyond that, it's just code volume.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 14:54:52 2021
Subject: Re: [PATCH 3/5] x86/msr: Expose MSR_ARCH_CAPS in the raw and host
 policies
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, Roger Pau Monné
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-1-andrew.cooper3@citrix.com>
 <20210611163627.4878-4-andrew.cooper3@citrix.com>
 <bcb7cacf-f18d-ed74-00b4-854b403bef2e@suse.com>
 <9105e14e-3a3e-4e25-e809-702f72207f11@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5a7e671d-a44d-7734-92f5-0bde2e56fca0@suse.com>
Date: Mon, 14 Jun 2021 16:54:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9105e14e-3a3e-4e25-e809-702f72207f11@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

On 14.06.2021 16:10, Andrew Cooper wrote:
> On 14/06/2021 13:57, Jan Beulich wrote:
>> On 11.06.2021 18:36, Andrew Cooper wrote:
>>> @@ -60,6 +65,11 @@ static void __init calculate_host_policy(void)
>>>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>>>      /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>>>      mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>> +
>>> +    mp->arch_caps.raw &=
>>> +        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
>>> +         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
>>> +         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO);
>>>  }
>> Isn't this a little too simple? For CPUID we consider the host policy
>> to be what Xen is using. Taking ARCH_CAPS_SKIP_L1DFL as an example,
>> we're not using it unconditionally (depending on opt_md_clear_hvm and
>> opt_l1d_flush), i.e. there's command line control over its use just
>> like there is over the CPUID bits.
>
> But we don't go clearing CPUID bits for features we choose not to use.
>
> ARCH_CAPS_SKIP_L1DFL, despite its name, is a statement of how hardware
> (and/or our outer hypervisor) behaves.
>
> It means "you don't need to flush the L1D on VMEntry to mitigate L1TF",
> whether or not we employ fine tuning to change what Xen does.
>
>>  Or take ARCH_CAPS_RDCL_NO, which
>> we set unilaterally for AMD/Hygon.
>
> That is local to spec_ctrl.c, and a mistake in hindsight.  It was
> written at a point in time when it wasn't clear whether AMD were going
> to implement MSR_ARCH_CAPS or not.
>
> The logic in spec_ctrl.c will change substantially when we load
> microcode and collect the raw/host policies at the start of boot.
>
>> I don't mind it remaining this simple for the moment, but then at
>> least the commit message should state that this is currently over-
>> simplifying things. If you agree, then with suitable wording added:
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> This is "mask all features not known by Xen".  For CPUID bits, it's
> done by the masking against known_features[] (autogenerated by
> gen-cpuid.py), but we have no equivalent for MSRs yet.
>
> We're definitely going to have to invent something (VT-x is going to be
> a total nightmare without it), but I haven't got any clever ideas right now.
>
> I'm happy to insert a comment saying that this is a substitute for not
> having known_features[] for MSR bits yet.

Please do, and then I'm fine with it.

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 14:56:32 2021
Subject: Re: [PATCH] x86: move .altinstr_replacement past _einittext
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné
	<roger.pau@citrix.com>
References: <759595e2-808a-46c0-7a93-fadecfeb991b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8c0ecb59-67b3-2e3d-590c-0f55d805ae54@citrix.com>
Date: Mon, 14 Jun 2021 15:56:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <759595e2-808a-46c0-7a93-fadecfeb991b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 14/06/2021 14:53, Jan Beulich wrote:
> This section's contents do not represent part of actual hypervisor text,
> so shouldn't be included in what is_kernel_inittext() or (while still
> booting) is_active_kernel_text() report "true" for. Keep them in
> .init.text though, as there's no real reason to have a separate section
> for this in the final binary.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wpD3cR1gk7HASzyY+f5N3PorKTwg02z3WvEFIYP4MRw=;
 b=AC0ucYPOPAR6/ejvNFTzl0uKaExRFnA1M1s9Go1uu7+B8iuWeUVZ8fQkiLu82QA9EkK7NhTED1yRilrE1dfb7mklLyanN5+ATVCw4qRuowDKmW1m+IX2+agOQqyqRHlKE539JtqMwBoFEFHs7hSoqHM/rksC+q3e069mLrv0s5XvVzAz0i4nGbHDUAKppmbSoizrekD8fT+0ToL1BCqMIaYKrUDmvhozGBLbfKfpZ4ufzzd9LfQYKIG1n6CcoEWTURo9lvDv4NajS4+XDFacNH4pQWjcT92hAQiJgUWHRkO8OJFYzH/KRPnqC5b8CZgd27UbEJ85RgSAtl/eEVFDEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v1.1 5/5] tests: Introduce a TSX test
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614104716.23405-1-andrew.cooper3@citrix.com>
 <9257fd40-65cb-8b08-7639-00b15dd0aba4@suse.com>
 <cecc837c-e261-17d8-a77e-044256d8bc0b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d7c7959c-6842-7305-2aef-77cb65883324@suse.com>
Date: Mon, 14 Jun 2021 16:59:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <cecc837c-e261-17d8-a77e-044256d8bc0b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0022.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bba61029-8163-4345-0ac1-08d92f4507da
X-MS-TrafficTypeDiagnostic: VI1PR04MB4381:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB438165DEAA506DC24B4B661AB3319@VI1PR04MB4381.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bba61029-8163-4345-0ac1-08d92f4507da
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 14:59:38.1824
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VETmXgttV8a0BaQCDbYonjspYCBOSOeh9eYZ7mIDWn/xo6YK416asq/fZ0D+E5g4MbnOkla4u3UbFdmixa7V8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4381

On 14.06.2021 16:50, Andrew Cooper wrote:
> On 14/06/2021 14:31, Jan Beulich wrote:
>> On 14.06.2021 12:47, Andrew Cooper wrote:
>>> +static void test_tsx(void)
>>> +{
>>> +    int rc;
>>> +
>>> +    /* Read all policies except raw. */
>>> +    for ( int i = XEN_SYSCTL_cpu_policy_host;
>> To avoid having this as bad precedent, even though it's "just" testing
>> code: unsigned int? (I've first spotted this here, but later I've
>> found more places elsewhere.)
> 
> Well - I question if it even is "bad" precedent.
> 
> For array bounds which are constants, the compiler can (and does) do
> better than anything we can write in C here, as it is arch-dependent
> whether signed or unsigned is better to use.
> 
> Beyond that, it's just code volume.

Well, no, I disagree. Any use of a variable for array indexing,
when negative indexes aren't also intentionally meant, would
better use an unsigned variable. This is simply so that, in
cases where it does matter, people won't end up cloning from an
instance where it may not be important because of, as you say,
e.g. constant loop bounds.

As to the compiler doing better - if it can when the induction
variable is (implicitly) signed, why would it not be able to
when the variable is (explicitly) unsigned?

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:18:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:18:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141621.261528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoM8-0002NB-0z; Mon, 14 Jun 2021 15:18:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141621.261528; Mon, 14 Jun 2021 15:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoM7-0002N4-Tw; Mon, 14 Jun 2021 15:18:27 +0000
Received: by outflank-mailman (input) for mailman id 141621;
 Mon, 14 Jun 2021 15:18:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoM6-0002Mu-Ia; Mon, 14 Jun 2021 15:18:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoM6-00073Q-CT; Mon, 14 Jun 2021 15:18:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoM6-0000BI-2O; Mon, 14 Jun 2021 15:18:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoM6-0007RD-1x; Mon, 14 Jun 2021 15:18:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zk4JEY3TC6sGZlmnoigdKzD6z1lWi12lKKQzy866acE=; b=hZdMzfcO/Wd3BnxgnQD1KYwlAc
	skZ+hzfyvVpADZkQtZyph9XrUUg2ETmsKirvxcmvLTGz9N81zo7w9LWYa8L3FOUBjRxPlMUKKug1S
	TCCZq2/Q4xuCCtUGjPQ4TCQKJ4qGoG3Syxe3o5CwYcKs3g4eqREBb8qOfYKdqPrS1sH0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162790-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162790: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93031fbe9f4c341a2e7950a088025ea550291433
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 15:18:26 +0000

flight 162790 xen-unstable real [real]
flight 162802 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162790/
http://logs.test-lab.xenproject.org/osstest/logs/162802/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    6 days
Failing since        162556  2021-06-08 22:39:08 Z    5 days    8 attempts
Testing same since   162696  2021-06-12 11:59:49 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 724 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:33:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141631.261548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoaC-0004bs-FL; Mon, 14 Jun 2021 15:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141631.261548; Mon, 14 Jun 2021 15:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoaC-0004bl-CY; Mon, 14 Jun 2021 15:33:00 +0000
Received: by outflank-mailman (input) for mailman id 141631;
 Mon, 14 Jun 2021 15:32:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsoaA-0004bV-Hb
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 15:32:58 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3303be7-2a60-4b62-bf20-b255421617d5;
 Mon, 14 Jun 2021 15:32:56 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2740F68AFE; Mon, 14 Jun 2021 17:32:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3303be7-2a60-4b62-bf20-b255421617d5
Date: Mon, 14 Jun 2021 17:32:52 +0200
From: Christoph Hellwig <hch@lst.de>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>, Tianyu Lan <ltykernel@gmail.com>,
	kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, rppt@kernel.org,
	hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
	saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	m.szyprowski@samsung.com, boris.ostrovsky@oracle.com,
	jgross@suse.com, sstabellini@kernel.org, joro@8bytes.org,
	will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
Message-ID: <20210614153252.GA1741@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-9-ltykernel@gmail.com> <20210607064312.GB24478@lst.de> <94038087-a33c-93c5-27bf-7ec1f6f5f0e3@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <94038087-a33c-93c5-27bf-7ec1f6f5f0e3@arm.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 14, 2021 at 02:49:51PM +0100, Robin Murphy wrote:
> FWIW, I think a better generalisation for this would be allowing 
> set_memory_decrypted() to return an address rather than implicitly 
> operating in-place, and hide all the various hypervisor hooks behind that.

Yes, something like that would be a good idea.  As-is,
set_memory_decrypted() is a pretty horrible API anyway, due to passing
the address as void and taking a size parameter while it works in
units of pages.  So I'd very much welcome a major overhaul of this API.


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:33:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141632.261554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoaC-0004f9-Pi; Mon, 14 Jun 2021 15:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141632.261554; Mon, 14 Jun 2021 15:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoaC-0004eW-L3; Mon, 14 Jun 2021 15:33:00 +0000
Received: by outflank-mailman (input) for mailman id 141632;
 Mon, 14 Jun 2021 15:32:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoaB-0004bb-89; Mon, 14 Jun 2021 15:32:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoaB-0007Ig-3v; Mon, 14 Jun 2021 15:32:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoaA-0000YK-Te; Mon, 14 Jun 2021 15:32:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsoaA-0004KF-TB; Mon, 14 Jun 2021 15:32:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1BiYg5F5zWzB66ewZG5IJIhqSiOtixZ0wTrjoEZLyLQ=; b=6dnIJmTqiSwTF2PrJcTRzjPouH
	1PbpRsmbugWN0RL/eEJGl5NpqU0U5b4go3D88/kFH7sEWvIKrKmUOTVRudv+4BsATZPmyxocRIZf9
	bTKKR5WwNYl6bj9vE1JVgDagoNji+gfM3xF8OZhxL2L+RZgttjSkxTTbEknBcVPgljyU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162799-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162799: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 15:32:58 +0000

flight 162799 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162799/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   10 days
Failing since        162368  2021-06-04 15:42:59 Z    9 days   18 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    4 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:33:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:33:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141640.261574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoat-0005mV-8W; Mon, 14 Jun 2021 15:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141640.261574; Mon, 14 Jun 2021 15:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsoat-0005mO-53; Mon, 14 Jun 2021 15:33:43 +0000
Received: by outflank-mailman (input) for mailman id 141640;
 Mon, 14 Jun 2021 15:33:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aFdS=LI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lsoar-0005mG-I4
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 15:33:41 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04af035a-0bae-479a-9225-1ea44978ae1b;
 Mon, 14 Jun 2021 15:33:41 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4E32868AFE; Mon, 14 Jun 2021 17:33:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04af035a-0bae-479a-9225-1ea44978ae1b
Date: Mon, 14 Jun 2021 17:33:39 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>, kys@microsoft.com,
	haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
	krish.sadhukhan@oracle.com, saravanand@fb.com,
	Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
	m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
Message-ID: <20210614153339.GB1741@lst.de>
References: <20210530150628.2063957-1-ltykernel@gmail.com> <20210530150628.2063957-11-ltykernel@gmail.com> <20210607065007.GE24478@lst.de> <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com> <20210614070903.GA29976@lst.de> <e10c2696-23c3-befe-4f4d-25e18918132f@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e10c2696-23c3-befe-4f4d-25e18918132f@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 14, 2021 at 10:04:06PM +0800, Tianyu Lan wrote:
> The pages in the hv_page_buffer array here are in the kernel linear 
> mapping. The packet sent to the host will contain an array holding 
> transaction data. In an isolation VM, the data in these pages needs to 
> be copied to the bounce buffer, so dma_map_single() is called here to 
> map these data pages with the bounce buffer. The vmbus has a ring 
> buffer to/from which the send/receive packets are copied. The ring 
> buffer was already remapped to the extra space above the shared gpa 
> boundary/vTom while probing the netvsc driver, so the dma map function 
> is not called for the vmbus ring buffer.

So why do we have all that PFN magic instead of using struct page or
the usual kernel I/O buffers that contain a page pointer?


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:46:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:46:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141654.261585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsomn-0007Lz-Ao; Mon, 14 Jun 2021 15:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141654.261585; Mon, 14 Jun 2021 15:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsomn-0007Ls-7h; Mon, 14 Jun 2021 15:46:01 +0000
Received: by outflank-mailman (input) for mailman id 141654;
 Mon, 14 Jun 2021 15:46:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EgT/=LI=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lsomm-0007Lm-3o
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 15:46:00 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c032e94a-c4a9-4662-b23c-137f6b3f1adf;
 Mon, 14 Jun 2021 15:45:59 +0000 (UTC)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15EFh660015718; Mon, 14 Jun 2021 15:45:55 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 395t6ug90j-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 14 Jun 2021 15:45:54 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15EFiOkR028132;
 Mon, 14 Jun 2021 15:45:54 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2168.outbound.protection.outlook.com [104.47.56.168])
 by aserp3030.oracle.com with ESMTP id 3959cjh3jc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 14 Jun 2021 15:45:53 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB3012.namprd10.prod.outlook.com (2603:10b6:208:72::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Mon, 14 Jun
 2021 15:45:51 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 15:45:51 +0000
Received: from [10.74.101.149] (138.3.201.21) by
 SN4PR0201CA0011.namprd02.prod.outlook.com (2603:10b6:803:2b::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Mon, 14 Jun 2021 15:45:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c032e94a-c4a9-4662-b23c-137f6b3f1adf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=t1qtCFv42Ehv8s+NNj+MyFNBqYrN7lrL8g47Ata/JRM=;
 b=KCUvgy6ayIKB6DfZAfEGR4lqztmC+cdXDzzGhPTX6PejPNQXFZl5SQdddDffC+Z62Wfh
 7RWGAMFTmwmuGeq6rpJs/yU6cdDik+waC8bmOsIMDZ//wLRiznepSXbKJKFamA6jZOhI
 ioW4JYiwVqMajv/5LilE2Q1tT9FHCGm6NiBDN1SUktCUEE+Zb6YuZ3tbVw4JlOtvWn5T
 QZ/PanAGrSvCmtwSyl6BI2q0UfxzTLT0bdiuxONvGkjcT+jbd27bEo7t2FVr9qskqX9D
 61MVJu7KK8AtfCwsOEMWYztcH346btIao8lP3XypOX/b8OAT+EVNRkCx7O8z5hu0zlp9 sg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hjP2co2IpBTx8GIrp208SOnXKoembSdA20V3JR6K8E0ZOhbBEs4hXxV9eqFUmUhg7yNFIEFMN4B9kmXg5oN+BeG+JDExm8ShJ6cWvfVpQSV+NrKOdYwD9X+fdj3bid2I8Lidq/0vdTZaNERREHmPZWZPWB9c73mehEb2gZachhX+dfuzDz73rmWPFzuNT+tSF0baV1IiC1EP5F+AXulL+iM95KKDhQ1A9yCfgP1+DbTJjey0cqqSU9bttoNWWOfj19rgDFgB+3mrV5vCR5OcdLJIygAU+OUSI5m01bRoydokcC19HLD4YSgoS45BH6hEwt3pvtpMVOTPAn05oe96+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t1qtCFv42Ehv8s+NNj+MyFNBqYrN7lrL8g47Ata/JRM=;
 b=jkv1zkU2bTisN/HS2HSgti5Ueu/houq800F2ZV8u2efs3r+Z/yHDYu10O7jH/9Gp2ToHuc1mLF176RFBNa0LZb1wjKUgi0XfXwpA599AUzzIy0ZGNLtHtYgDKWR9zumgWKYXKJBUdKQ4LoeLEMpCRPfDxKmeFX7EhXXuqB2aGlGPc52fzDjp6cIgIR8arGEWrpoQSDwz9wR4W1L5D4e8oDXh/NvnBd6gsetxLaVRXgHZQzqaLqx4R6fMp5ojNA+wOCKViAIeT3V04r7nh8eIzDCN1FT/DvqylIYpXXghBkdH9RtITmyBPPPfduTpVyLV5cUqeqbyP19rRlu4EaQQyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t1qtCFv42Ehv8s+NNj+MyFNBqYrN7lrL8g47Ata/JRM=;
 b=NeJYS48pM9EGXk0Zg6FynSAU3Rut451t3QxtXeC6w6XPbtrDGaxHIo6ODUF/f0FXGyZGyTmJJ3ZAhj/6yCxqyvqO9Y2V+RgragE3OGgeGYxE/TyNg7m9LKNaWQ/0IdJcmOq9/mnudbCdZiO72jVCRrSGcuCDIxGrOOItBbj7y0g=
Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops
To: Roman Skakun <rm.skakun@gmail.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
        Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
        Roman Skakun <roman_skakun@epam.com>,
        Andrii Anisov <andrii_anisov@epam.com>
References: <20210611095528.9230-1-roman_skakun@epam.com>
 <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <CADu_u-MqALJkG8RJHrr65vC_sHu-UyvCGwwUfaBong0eir5+hQ@mail.gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <fbaeaad5-ea8a-ff2d-2e62-d27b4d234e8e@oracle.com>
Date: Mon, 14 Jun 2021 11:45:46 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <CADu_u-MqALJkG8RJHrr65vC_sHu-UyvCGwwUfaBong0eir5+hQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.21]
X-ClientProxiedBy: SN4PR0201CA0011.namprd02.prod.outlook.com
 (2603:10b6:803:2b::21) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c73d10f5-f84e-4ce3-76da-08d92f4b7cc8
X-MS-TrafficTypeDiagnostic: BL0PR10MB3012:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BL0PR10MB3012E0D08B2AF5A6AF0840388A319@BL0PR10MB3012.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	ShpcMXAD8stzTvaXdEy4f2R5h6rHIPkC2l15hiXvbVbbrBVoqcvbZmzh2/EXQeBHXfEG/MU4CZKY56zZSlveJa55XvgB6VhJENBA3Qy3SyFsr8Y4vtvGMcYEqeue3UAjbTFZJQaCz8K69U+3alh64i4wVlutgryqs64iyuCcLlIEJ+/u53j6fkAreY3V2i5fTTd28yU8XAa8KRNZf/mHWokiN5saqVyHKm8sOpGHaBPJFKwGO3yLmEQgzkkaktMG6n5RsLQ63LWNyd068egQKt0HaLG6MD7xzGbnnzxOoCIBSSsqqG9CYDgt+YVvlHH9K6vw2ly5PYou/TC9Vd8lTUPn3R+ZIcGdKYbIDGHEdbhb7MJ2j4TQqim8dm6DyUjsPh8Y8E4aybykloJ5GuNqZQT0YyvL9fK3DtS/a/ErqQZvRh1y6TqFwS/Ce6lq0Am0ErBKahgjpf8WhQT1EQ+7XKg5iMEWGA12fws/rOaiuqMX1KEmUs/JucWOgLYryPUihlabrhwKyWNrvxpUZH8T8y1t/B+inJAS8W0XBosGxY3VZZ7i3913Iya4snHJqtGvrrm/uTKlRi9afkY09vrexDgk2f6RySaZ24632PL1VDTgxbvDBA7A8rpsYJuYJJ/oLnr8PAloJg419MDCpUBLV7NvG1YsBav1DxTCdGyhSRk7njwa43M+XZdOD6IFunH6
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(376002)(396003)(366004)(136003)(39860400002)(53546011)(66556008)(66476007)(478600001)(38100700002)(16526019)(4326008)(31686004)(6666004)(31696002)(186003)(36756003)(86362001)(956004)(8936002)(316002)(2906002)(44832011)(83380400001)(5660300002)(26005)(66946007)(54906003)(6486002)(4744005)(16576012)(2616005)(6916009)(7416002)(8676002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?SytyNWRNYi9FUDBnTjZPaHRxdWJnQWxQaFF0R2tHR1Q4S3VYWXlOY3AzTmJw?=
 =?utf-8?B?bnM0U0FuczBVd2NuVFdkM0s5aDI2ZXVuS2ZKR2dianM2cE5BRWdJVFB6eW9a?=
 =?utf-8?B?SDF0aEJLRDBBbEV3bk9zWXN6OVBaUG03L1FKQ2pLY1JXTHRqY1lTQ2NHaW1I?=
 =?utf-8?B?RGFiV2V5YWV0Yll4cllNdThVN2lCODVaSTVhZENSenNIenIrZkZ4K1pKRUxW?=
 =?utf-8?B?b05CQS95dzFlVUoza01oR01UUUN1RGJMdnMweG96QzcxbnpYVm5sZzJzT2tp?=
 =?utf-8?B?ekpqaW92M09oYmlBamt3QTdQL2w4a1Vmb3hBWW5FQXEyZk9GTUlHTDV0M2wx?=
 =?utf-8?B?bnZYUXEzN2tXamJTUkYxbEdBVGRIMTNsWDlqdHl1VFJOakJzOWswUUt1NE1h?=
 =?utf-8?B?d0FVNzZqYzlvK0hkTGo5LzdXTy85N1NoVlNyM0RhYVcwcmxWSzVPcld4WFA4?=
 =?utf-8?B?Vm5YcktFekRkdysra1l4RVRrRmdVLzRrNHFMSDVlOUdsWVd3UWtNaWFncGhS?=
 =?utf-8?B?RE1pWTNaS1h1UWdJd1RKaWxQMEpNUXVnVUpCdGlrRFNUaTZ6ZTJ1NjM2bXpi?=
 =?utf-8?B?NkZEdVJGVVJBZXcyYXFSRFQ1S1FyU01qM2dHOEJpMXk0bDdRaTk5TWcrYzR3?=
 =?utf-8?B?c0djcFdITlM2dXhmY3hJa2U5YXBteVdNS3EvSmwrbmFrZEU0ZzdqUFordXFS?=
 =?utf-8?B?RWQrRnZIOTkyTzJzK0NyRGVVUFZWWkdPM0diUzJYVS9KTkoxWGNRTW15MHcr?=
 =?utf-8?B?anV1RS93TkZpOEZXUDM3WEZUajJEMVc5aTdCTzBEc2VRYnRFWERWaXRwQ1M0?=
 =?utf-8?B?SW5tSm9oOWF2SEl1eURJclBCbjJmblZKQ3VCZG51VVEyU1JXT0VmS1pFNFBS?=
 =?utf-8?B?RWdHNWNMTitIY0lHYzVqS1NVVVp5TXpNSmlDV2E3Qi9QK3JnNGJKY0xYRjhE?=
 =?utf-8?B?UmpjQU9OcFZLZEJ1SlVuRGNXbmJCN3lDM0pURFY3RUVFWXVpZnkzRi85dW1K?=
 =?utf-8?B?K29iYythcjQzSGc3eWRxcm1YRFdRR2RTa2toYW1leW9tMml5TkJtYTU3SENt?=
 =?utf-8?B?Z1JheUFWakppTmJLWm13dHpvNXZTVG9PdjZ0anRGczJlSm1IVzkwREJCQzAr?=
 =?utf-8?B?ZzU5VjBZMTlCdHZoS0s1KzVzanBEbHpsRUxHUmN3ZXJnQmVrR2gxdUtzTExx?=
 =?utf-8?B?NTJqRUhlZmczWHI1TEp6OUs2UmFwOEttTEFOT0JLRS9RbTJqbHBmdTkzSTYw?=
 =?utf-8?B?bG84bGE3SFRJOGgyYkthdFpVVDhSVXpTakdtQnVNOG82ZFV2UFRvWC9QWWky?=
 =?utf-8?B?dldpamo3dUt6eWtPbFlqMWl1b2ZaQmRCTmVTSytlU1MzVm9DSGJFV01EOENa?=
 =?utf-8?B?VlJndktaMmRka2liWWFveFFyTzlibnZQbzVaVi9HS1R4UnVScGVpcndYTjFr?=
 =?utf-8?B?NmtZVkJoOU5raWxKQ0l2ZU5Zc1FoOFAvYTVWN2JvbVpsK0JFZjVPN0dmTFBp?=
 =?utf-8?B?bStYdkxUMmpQcEJvRmJKd1pJOWpyb1p0Y0pkV2l6eFJmdDFzMlBvbFlPTG5r?=
 =?utf-8?B?T2M1cWVSMUN3aEVWTjZsYldMMHVoRzdsMGd6YnBpa2liczNXcXBxWlRhWlpR?=
 =?utf-8?B?emxGUUtqazBkbU5GWm9tMWJoUnFCUHQ0RytXTTBSa0VyR1piSHhaRTRlVUFR?=
 =?utf-8?B?WXJLTUxjVzFSdWVmNGlMekMvdUZZbHJxV1NDVEVNbjBQdjNoVEVrS2o2bTNh?=
 =?utf-8?Q?th+CK8KAQIDqv8OWWjqSxre8Vy40OVFDmjSA2r7?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c73d10f5-f84e-4ce3-76da-08d92f4b7cc8
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 15:45:51.4630
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sAZtYjKeywxXLISzbKKdBtadpe4JDoIVIEwgOaOtjs9hGgGYbVhoEAOsCoMkqCADwLEMvk80dIOQ1ddaz0mwfLOlIK9bhqORByrQkb8RFtk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR10MB3012
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10015 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 phishscore=0 mlxscore=0
 adultscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106140100
X-Proofpoint-ORIG-GUID: EGjVKAP4kx5qywM8t1DC1JU7_YyRKcqv
X-Proofpoint-GUID: EGjVKAP4kx5qywM8t1DC1JU7_YyRKcqv


On 6/14/21 8:47 AM, Roman Skakun wrote:
> Hey, Boris!
> Thanks for the review.
>
> I have an additional question about the current implementation that 
> bothers me. I think we can have cases where the mapped memory is not 
> physically contiguous. To handle this correctly, some additional steps 
> need to be applied to the current implementation as well.
>
> In mmap():
> 1. Is the memory region physically contiguous?
> 2. Remap multiple ranges if it is not.
>
> In get_sgtable():
> 1. Is the memory region physically contiguous?
> 2. Create an sgt that includes multiple contiguous ranges if it is not.
>
> What do you think about it?


We make sure that we allocate contiguous memory in xen_swiotlb_alloc_coherent().


-boris




From xen-devel-bounces@lists.xenproject.org Mon Jun 14 15:56:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 15:56:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141677.261617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsowS-0001QD-Oh; Mon, 14 Jun 2021 15:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141677.261617; Mon, 14 Jun 2021 15:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsowS-0001Q6-Jk; Mon, 14 Jun 2021 15:56:00 +0000
Received: by outflank-mailman (input) for mailman id 141677;
 Mon, 14 Jun 2021 15:56:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JnBr=LI=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lsowS-0001Q0-0O
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 15:56:00 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c48e5d27-5297-4af4-acaf-edc78d0706cb;
 Mon, 14 Jun 2021 15:55:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c48e5d27-5297-4af4-acaf-edc78d0706cb
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45816706
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,273,1616472000"; 
   d="scan'208";a="45816706"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Edwin Torok <edvin.torok@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, "JBeulich@suse.com" <JBeulich@suse.com>, "wl@xen.org"
	<wl@xen.org>
Subject: Re: [PATCH v1.1 5/5] tests: Introduce a TSX test
Thread-Topic: [PATCH v1.1 5/5] tests: Introduce a TSX test
Thread-Index: AQHXYQq4I95Qq+/sPkip2c9aO/6G4qsTqWqA
Date: Mon, 14 Jun 2021 15:55:51 +0000
Message-ID: <3b9a4b575108a2043b2c61ade6f7389e3d6f7ad6.camel@citrix.com>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
	 <20210614104716.23405-1-andrew.cooper3@citrix.com>
In-Reply-To: <20210614104716.23405-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.36.4-0ubuntu1 
Content-Type: text/plain; charset="utf-8"
Content-ID: <500A94ECDD4D8B42BA0F7171F5E1887B@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c6aa72a-edf8-47e0-d8dc-08d92f4ce2f4
X-MS-Exchange-CrossTenant-originalarrivaltime: 14 Jun 2021 15:55:51.9789
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: S74VWfoQed6l+MoK9r5YfmY1kgYk2kQlq8OlkvRwihCDo7Cq/+NU3zQuo2LOmszzQP2umATD5knPbKEcFujtnQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4935
X-OriginatorOrg: citrix.com

On Mon, 2021-06-14 at 11:47 +0100, Andrew Cooper wrote:
> See the comment at the top of test-tsx.c for details.
> 
> This covers various complexities encountered while trying to address
> the
> recent TSX deprecation on client parts.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> v1.1:
>  * Set alternative guest policy, and check.
>  * Cope with !HAP configurations.
>  * Complete the comment at the top of test-tsx.c
> ---
>  tools/tests/Makefile       |   1 +
>  tools/tests/tsx/.gitignore |   1 +
>  tools/tests/tsx/Makefile   |  43 ++++
>  tools/tests/tsx/test-tsx.c | 515
> +++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 560 insertions(+)
>  create mode 100644 tools/tests/tsx/.gitignore
>  create mode 100644 tools/tests/tsx/Makefile
>  create mode 100644 tools/tests/tsx/test-tsx.c
> 
> diff --git a/tools/tests/Makefile b/tools/tests/Makefile
> index 8746aabe6b..25531a984a 100644
> --- a/tools/tests/Makefile
> +++ b/tools/tests/Makefile
> @@ -5,6 +5,7 @@ SUBDIRS-y :=
>  SUBDIRS-y += resource
>  SUBDIRS-$(CONFIG_X86) += cpu-policy
>  SUBDIRS-$(CONFIG_X86) += mce-test
> +SUBDIRS-$(CONFIG_X86) += tsx
>  ifneq ($(clang),y)
>  SUBDIRS-$(CONFIG_X86) += x86_emulator
>  endif
> diff --git a/tools/tests/tsx/.gitignore b/tools/tests/tsx/.gitignore
> new file mode 100644
> index 0000000000..97ec4db7ff
> --- /dev/null
> +++ b/tools/tests/tsx/.gitignore
> @@ -0,0 +1 @@
> +test-tsx
> diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
> new file mode 100644
> index 0000000000..7381a4f5a4
> --- /dev/null
> +++ b/tools/tests/tsx/Makefile
> @@ -0,0 +1,43 @@
> +XEN_ROOT = $(CURDIR)/../../..
> +include $(XEN_ROOT)/tools/Rules.mk
> +
> +TARGET := test-tsx
> +
> +.PHONY: all
> +all: $(TARGET)
> +
> +.PHONY: run
> +run: $(TARGET)
> +	./$(TARGET)
> +
> +.PHONY: clean
> +clean:
> +	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
> +
> +.PHONY: distclean
> +distclean: clean
> +	$(RM) -f -- *~
> +
> +.PHONY: install
> +install: all
> +
> +.PHONY: uninstall
> +uninstall:
> +
> +CFLAGS += -Werror -std=gnu11
> +CFLAGS += $(CFLAGS_xeninclude)
> +CFLAGS += $(CFLAGS_libxenctrl)
> +CFLAGS += $(CFLAGS_libxenguest)
> +CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl
> -I$(XEN_ROOT)/tools/libs/guest
> +CFLAGS += $(APPEND_CFLAGS)
> +
> +LDFLAGS += $(LDLIBS_libxenctrl)
> +LDFLAGS += $(LDLIBS_libxenguest)
> +LDFLAGS += $(APPEND_LDFLAGS)
> +
> +test-tsx.o: Makefile
> +
> +test-tsx: test-tsx.o
> +	$(CC) -o $@ $< $(LDFLAGS)
> +
> +-include $(DEPS_INCLUDE)
> diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
> new file mode 100644
> index 0000000000..036b36e797
> --- /dev/null
> +++ b/tools/tests/tsx/test-tsx.c
> @@ -0,0 +1,515 @@
> +/*
> + * TSX settings and consistency tests
> + *
> + * This tests various behaviours and invariants with regards to
> TSX.  It
> + * ideally wants running for several microcode versions, and all
> applicable
> + * tsx= commandline settings, on a single CPU, including after an S3
> + * suspend/resume event.
> + *
> + * It tests specifically:
> + *  - The consistency of MSR_TSX_CTRL/MSR_TSX_FORCE_ABORT values
> across the
> + *    system, and their accessibility WRT data in the host CPU
> policy.
> + *  - The actual behaviour of RTM on the system.
> + *  - Cross-check the default/max policies based on the actual RTM
> behaviour.
> + *  - Create some guests, check their defaults, and check that the
> defaults
> + *    can be changed.
> + */
> +
> +#define _GNU_SOURCE
> +
> +#include <err.h>
> +#include <errno.h>
> +#include <inttypes.h>
> +#include <signal.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <sys/ucontext.h>
> +
> +#include <xenctrl.h>
> +#include <xenguest.h>
> +#include <xen-tools/libs.h>
> +
> +#include "xg_private.h"
> +
> +enum {
> +#define XEN_CPUFEATURE(name, value) X86_FEATURE_##name = value,
> +#include <xen/arch-x86/cpufeatureset.h>
> +};
> +#define bitmaskof(idx)      (1u << ((idx) & 31))
> +
> +#define MSR_ARCH_CAPABILITIES               0x0000010a
> +#define  ARCH_CAPS_TSX_CTRL                 (1 <<  7)
> +#define MSR_TSX_FORCE_ABORT                 0x0000010f
> +#define MSR_TSX_CTRL                        0x00000122
> +
> +static unsigned int nr_failures;
> +#define fail(fmt, ...)                          \
> +({                                              \
> +    nr_failures++;                              \
> +    (void)printf(fmt, ##__VA_ARGS__);           \
> +})
> +
> +static xc_interface *xch;
> +
> +/*
> + * Policies, arranged as an array for easy collection of all of
> them.  We
> + * don't care about the raw policy (index 0) so reuse that for the
> guest
> + * policy.
> + */
> +static struct xc_cpu_policy policies[6];
> +#define guest_policy policies[0]
> +#define host         policies[XEN_SYSCTL_cpu_policy_host]
> +#define pv_max       policies[XEN_SYSCTL_cpu_policy_pv_max]
> +#define hvm_max      policies[XEN_SYSCTL_cpu_policy_hvm_max]
> +#define pv_default   policies[XEN_SYSCTL_cpu_policy_pv_default]
> +#define hvm_default  policies[XEN_SYSCTL_cpu_policy_hvm_default]
> +
> +static bool xen_has_pv = true, xen_has_hvm = true;
> +
> +static xc_physinfo_t physinfo;
> +
> +static enum rtm_behaviour {
> +    RTM_UD,
> +    RTM_OK,
> +    RTM_ABORT,
> +} rtm_behaviour;
> +
> +/*
> + * Test a specific TSX MSR for consistency across the system, taking
> into
> + * account whether it ought to be accessable or not.
> + *
> + * We can't query offline CPUs, so skip those if encountered.  We
> don't care
> + * particularly for the exact MSR value, but we do care that it is
> the same
> + * everywhere.
> + */
> +static void test_tsx_msr_consistency(unsigned int msr, bool
> accessable)
> +{
> +    uint64_t cpu0_val = ~0;
> +
> +    for ( unsigned int cpu = 0; cpu <= physinfo.max_cpu_id; ++cpu )
> +    {
> +        xc_resource_entry_t ent = {
> +            .u.cmd = XEN_RESOURCE_OP_MSR_READ,
> +            .idx = msr,
> +        };
> +        xc_resource_op_t op = {
> +            .cpu = cpu,
> +            .entries = &ent,
> +            .nr_entries = 1,
> +        };
> +        int rc = xc_resource_op(xch, 1, &op);
> +
> +        if ( rc < 0 )
> +        {
> +            /* Don't emit a message for offline CPUs */
> +            if ( errno != ENODEV )
> +                fail("  xc_resource_op() for CPU%u failed: rc %d,
> errno %d - %s\n",
> +                     cpu, rc, errno, strerror(errno));
> +            continue;
> +        }
> +
> +        if ( accessable )
> +        {
> +            if ( rc != 1 )
> +            {
> +                fail("  Expected 1 result, got %u\n", rc);
> +                continue;
> +            }
> +            if ( ent.u.ret != 0 )
> +            {
> +                fail("  Expected ok, got %d\n", ent.u.ret);
> +                continue;
> +            }
> +        }
> +        else
> +        {
> +            if ( rc != 0 )
> +                fail("  Expected 0 results, got %u\n", rc);
> +            else if ( ent.u.ret != -EPERM )
> +                fail("  Expected -EPERM, got %d\n", ent.u.ret);
> +            continue;
> +        }
> +
> +        if ( cpu == 0 )
> +        {
> +            cpu0_val = ent.val;
> +            printf("  CPU0 val %#"PRIx64"\n", cpu0_val);
> +        }
> +        else if ( ent.val != cpu0_val )
> +            fail("  CPU%u val %#"PRIx64" differes from CPU0
> %#"PRIx64"\n",

Typo: differs?

> +                 cpu, ent.val, cpu0_val);
> +    }
> +}
> +
> +/*
> + * Check all TSX MSRs, and in particular that their accessibility
> matches what
> + * is expressed in the host CPU policy.
> + */
> +static void test_tsx_msrs(void)
> +{
> +    printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
> +    test_tsx_msr_consistency(
> +        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
> +
> +    printf("Testing MSR_TSX_CTRL consistency\n");
> +    test_tsx_msr_consistency(
> +        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
> +}


This is great, could we extend the test to all MSRs that Xen knows
about and are expected to be identical? Particularly
MSR_SPEC_CTRL, MSR_MCU_OPT_CTRL, and I see some MSRs used for errata
workarounds like MSR_MCU_OPT_CTRL, possibly more.

> +
> +/*
> + * Probe for how RTM behaves, deliberately not inspecting CPUID.
> + * Distinguishes between "no support at all" (i.e. XBEGIN suffers
> #UD),
> + * working ok, and appearing to always abort.
> + */
> +static enum rtm_behaviour probe_rtm_behaviour(void)
> +{
> +    for ( int i = 0; i < 1000; ++i )
> +    {
> +        /*
> +         * Opencoding the RTM infrastructure from immintrin.h,
> because we
> +         * still support older versions of GCC.  ALso so we can
> include #UD
> +         * detection logic.
> +         */
> +#define XBEGIN_STARTED -1
> +#define XBEGIN_UD      -2
> +        unsigned int status = XBEGIN_STARTED;
> +
> +        asm volatile (".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN
> 1f; 1: */
> +                      : "+a" (status) :: "memory");
> +        if ( status == XBEGIN_STARTED )
> +        {
> +            asm volatile (".byte 0x0f,0x01,0xd5" ::: "memory"); /*
> XEND */
> +            return RTM_OK;
> +        }
> +        else if ( status == XBEGIN_UD )
> +            return RTM_UD;
> +    }
> +
> +    return RTM_ABORT;
> +}
> +
> +static struct sigaction old_sigill;
> +
> +static void sigill_handler(int signo, siginfo_t *info, void *extra)
> +{
> +    extern char xbegin_label[] asm(".Lxbegin");
> +
> +    if ( info->si_addr == xbegin_label ||
> +         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
> +    {
> +        ucontext_t *context = extra;
> +
> +        /*
> +         * Found the XBEGIN instruction.  Step over it, and update
> `status` to
> +         * signal #UD.
> +         */
> +#ifdef __x86_64__
> +        context->uc_mcontext.gregs[REG_RIP] += 6;
> +        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
> +#else
> +        context->uc_mcontext.gregs[REG_EIP] += 6;
> +        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
> +#endif
> +    }
> +    else
> +    {
> +        /*
> +         * Not the SIGILL we're looking for...  Restore the old
> handler and
> +         * try again.  Will likely coredump as a result.
> +         */
> +        sigaction(SIGILL, &old_sigill, NULL);
> +    }
> +}
> +
> +static void test_rtm_behaviour(void)
> +{
> +    struct sigaction new_sigill = {
> +        .sa_flags = SA_SIGINFO,
> +        .sa_sigaction = sigill_handler,
> +    };
> +    const char *str;
> +
> +    printf("Testing RTM behaviour\n");
> +
> +    /*
> +     * Install a custom SIGILL handler while probing for RTM
> behaviour, as the
> +     * XBEGIN instruction might suffer #UD.
> +     */
> +    sigaction(SIGILL, &new_sigill, &old_sigill);
> +    rtm_behaviour = probe_rtm_behaviour();
> +    sigaction(SIGILL, &old_sigill, NULL);
> +
> +    switch ( rtm_behaviour )
> +    {
> +    case RTM_UD:    str = "#UD";   break;
> +    case RTM_OK:    str = "OK";    break;
> +    case RTM_ABORT: str = "Abort"; break;
> +    default:        str = NULL;    break;
> +    }
> +
> +    if ( str )
> +        printf("  Got %s\n", str);
> +    else
> +        return fail("  Got unexpected behaviour %d\n",
> rtm_behaviour);
> +
> +    if ( host.cpuid.feat.rtm )
> +    {
> +        if ( rtm_behaviour == RTM_UD )
> +            fail("  Host reports RTM, but appears unavailable\n");
> +    }
> +    else
> +    {
> +        if ( rtm_behaviour != RTM_UD )
> +            fail("  Host reports no RTM, but appears available\n");
> +    }
> +}
> +
> +static void dump_tsx_details(const struct xc_cpu_policy *p, const
> char *pref)
> +{
> +    printf("  %s RTM %u, HLE %u, TSX_FORCE_ABORT %u,
> RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
> +           pref,
> +           p->cpuid.feat.rtm,
> +           p->cpuid.feat.hle,
> +           p->cpuid.feat.tsx_force_abort,
> +           p->cpuid.feat.rtm_always_abort,
> +           p->msr.arch_caps.tsx_ctrl);
> +}
> +
> +/* Sanity test various invariants we expect in the default/max
> policies. */
> +static void test_guest_policies(const struct xc_cpu_policy *max,
> +                                const struct xc_cpu_policy *def)
> +{
> +    const struct cpuid_policy *cm = &max->cpuid;
> +    const struct cpuid_policy *cd = &def->cpuid;
> +    const struct msr_policy *mm = &max->msr;
> +    const struct msr_policy *md = &def->msr;
> +
> +    dump_tsx_details(max, "Max:");
> +    dump_tsx_details(def, "Def:");
> +
> +    if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
> +          (bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
> +           bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT))) ||
> +         ((mm->arch_caps.raw | md->arch_caps.raw) &
> ARCH_CAPS_TSX_CTRL) )
> +        fail("  Xen-only TSX controls offered to guest\n");
> +
> +    switch ( rtm_behaviour )
> +    {
> +    case RTM_UD:
> +        if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
> +             (bitmaskof(X86_FEATURE_HLE) |
> bitmaskof(X86_FEATURE_RTM)) )
> +             fail("  HLE/RTM offered to guests despite not being
> available\n");
> +        break;
> +
> +    case RTM_ABORT:
> +        if ( cd->feat.raw[0].b &
> +             (bitmaskof(X86_FEATURE_HLE) |
> bitmaskof(X86_FEATURE_RTM)) )
> +             fail("  HLE/RTM offered to guests by default despite
> not being usable\n");
> +        break;
> +
> +    case RTM_OK:
> +        if ( !cm->feat.rtm || !cd->feat.rtm )
> +             fail("  RTM not offered to guests despite being
> available\n");
> +        break;
> +    }
> +
> +    if ( cd->feat.hle )
> +        fail("  Fail: HLE offered in default policy\n");
> +}
> +
> +static void test_def_max_policies(void)
> +{
> +    if ( xen_has_pv )
> +    {
> +        printf("Testing PV default/max policies\n");
> +        test_guest_policies(&pv_max, &pv_default);
> +    }
> +
> +    if ( xen_has_hvm )
> +    {
> +        printf("Testing HVM default/max policies\n");
> +        test_guest_policies(&hvm_max, &hvm_default);
> +    }
> +}
> +
> +static void test_guest(struct xen_domctl_createdomain *c)
> +{
> +    uint32_t domid = 0;
> +    int rc;
> +
> +    rc = xc_domain_create(xch, &domid, c);
> +    if ( rc )
> +        return fail("  Domain create failure: %d - %s\n",
> +                    errno, strerror(errno));
> +
> +    printf("  Created d%u\n", domid);
> +
> +    rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
> +    if ( rc )
> +    {
> +        fail("  Failed to obtain domain policy: %d - %s\n",
> +             errno, strerror(errno));
> +        goto out;
> +    }
> +
> +    dump_tsx_details(&guest_policy, "Cur:");
> +
> +    /*
> +     * Check defaults given to the guest.
> +     */
> +    if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
> +        fail("  RTM %u in guest, despite rtm behaviour\n",
> +             guest_policy.cpuid.feat.rtm);
> +
> +    if ( guest_policy.cpuid.feat.hle ||
> +         guest_policy.cpuid.feat.tsx_force_abort ||
> +         guest_policy.cpuid.feat.rtm_always_abort ||
> +         guest_policy.msr.arch_caps.tsx_ctrl )
> +        fail("  Unexpected features advertised\n");
> +
> +    if ( host.cpuid.feat.rtm )
> +    {
> +        unsigned int _7b0;
> +
> +        /*
> +         * If host RTM is available, all combinations of guest flags
> should be
> +         * possible.  Flip both HLE/RTM to check non-default
> settings.
> +         */
> +        _7b0 = (guest_policy.cpuid.feat.raw[0].b ^=
> +                (bitmaskof(X86_FEATURE_HLE) |
> bitmaskof(X86_FEATURE_RTM)));
> +
> +        /* Set the new policy. */
> +        rc = xc_cpu_policy_set_domain(xch, domid, &guest_policy);
> +        if ( rc )
> +        {
> +            fail("  Failed to set domain policy: %d - %s\n",
> +                 errno, strerror(errno));
> +            goto out;
> +        }
> +
> +        /* Re-get the new policy. */
> +        rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
> +        if ( rc )
> +        {
> +            fail("  Failed to obtain domain policy: %d - %s\n",
> +                 errno, strerror(errno));
> +            goto out;
> +        }
> +
> +        dump_tsx_details(&guest_policy, "Cur:");
> +
> +        if ( guest_policy.cpuid.feat.raw[0].b != _7b0 )
> +        {
> +            fail("  Expected CPUID.7[1].b 0x%08x differes from
> actual 0x%08x\n",
> +                 _7b0, guest_policy.cpuid.feat.raw[0].b);
> +            goto out;
> +        }
> +    }
> +
> + out:
> +    rc = xc_domain_destroy(xch, domid);
> +    if ( rc )
> +        fail("  Failed to destroy domain: %d - %s\n",
> +             errno, strerror(errno));
> +}
> +
> +static void test_guests(void)
> +{
> +    if ( xen_has_pv )
> +    {
> +        struct xen_domctl_createdomain c = {
> +            .max_vcpus = 1,
> +            .max_grant_frames = 1,
> +        };
> +
> +        printf("Testing PV guest\n");
> +        test_guest(&c);
> +    }
> +
> +    if ( xen_has_hvm )
> +    {
> +        struct xen_domctl_createdomain c = {
> +            .flags = XEN_DOMCTL_CDF_hvm,
> +            .max_vcpus = 1,
> +            .max_grant_frames = 1,
> +            .arch = {
> +                .emulation_flags = XEN_X86_EMU_LAPIC,
> +            },
> +        };
> +
> +        if ( physinfo.capabilities & XEN_SYSCTL_PHYSCAP_hap )
> +            c.flags |= XEN_DOMCTL_CDF_hap;
> +        else if ( !(physinfo.capabilities &
> XEN_SYSCTL_PHYSCAP_shadow) )
> +            return fail("  HVM available, but neither HAP nor
> Shadow\n");
> +
> +        printf("Testing HVM guest\n");
> +        test_guest(&c);
> +    }
> +}
> +
> +/* Obtain some general data, then run the tests. */
> +static void test_tsx(void)
> +{
> +    int rc;
> +
> +    /* Read all policies except raw. */
> +    for ( int i = XEN_SYSCTL_cpu_policy_host;
> +          i <= XEN_SYSCTL_cpu_policy_hvm_default; ++i )
> +    {
> +        rc = xc_cpu_policy_get_system(xch, i, &policies[i]);
> +
> +        if ( rc == -1 && errno == EOPNOTSUPP )
> +        {
> +            /*
> +             * Use EOPNOTSUPP to spot Xen missing CONFIG_{PV,HVM},
> and adjust
> +             * later testing accordingly.
> +             */
> +            switch ( i )
> +            {
> +            case XEN
X1NZU0NUTF9jcHVfcG9saWN5X3B2X21heDoNCj4gKyAgICAgICAgICAgIGNhc2UgWEVOX1NZU0NU
TF9jcHVfcG9saWN5X3B2X2RlZmF1bHQ6DQo+ICsgICAgICAgICAgICAgICAgaWYgKCB4ZW5faGFz
X3B2ICkNCj4gKyAgICAgICAgICAgICAgICAgICAgcHJpbnRmKCIgIFhlbiBkb2Vzbid0IHN1cHBv
cnQgUFZcbiIpOw0KPiArICAgICAgICAgICAgICAgIHhlbl9oYXNfcHYgPSBmYWxzZTsNCj4gKyAg
ICAgICAgICAgICAgICBjb250aW51ZTsNCj4gKw0KPiArICAgICAgICAgICAgY2FzZSBYRU5fU1lT
Q1RMX2NwdV9wb2xpY3lfaHZtX21heDoNCj4gKyAgICAgICAgICAgIGNhc2UgWEVOX1NZU0NUTF9j
cHVfcG9saWN5X2h2bV9kZWZhdWx0Og0KPiArICAgICAgICAgICAgICAgIGlmICggeGVuX2hhc19o
dm0gKQ0KPiArICAgICAgICAgICAgICAgICAgICBwcmludGYoIiAgWGVuIGRvZXNuJ3Qgc3VwcG9y
dCBIVk1cbiIpOw0KPiArICAgICAgICAgICAgICAgIHhlbl9oYXNfaHZtID0gZmFsc2U7DQo+ICsg
ICAgICAgICAgICAgICAgY29udGludWU7DQo+ICsgICAgICAgICAgICB9DQo+ICsgICAgICAgIH0N
Cj4gKyAgICAgICAgaWYgKCByYyApDQo+ICsgICAgICAgICAgICByZXR1cm4gZmFpbCgiRmFpbGVk
IHRvIG9idGFpbiBwb2xpY3lbJXVdOiAlZCAtICVzXG4iLA0KPiArICAgICAgICAgICAgICAgICAg
ICAgICAgaSwgZXJybm8sIHN0cmVycm9yKGVycm5vKSk7DQo+ICsgICAgfQ0KPiArDQo+ICsgICAg
cmMgPSB4Y19waHlzaW5mbyh4Y2gsICZwaHlzaW5mbyk7DQo+ICsgICAgaWYgKCByYyApDQo+ICsg
ICAgICAgIHJldHVybiBmYWlsKCJGYWlsZWQgdG8gb2J0YWluIHBoeXNpbmZvOiAlZCAtICVzXG4i
LA0KPiArICAgICAgICAgICAgICAgICAgICBlcnJubywgc3RyZXJyb3IoZXJybm8pKTsNCj4gKw0K
PiArICAgIHByaW50ZigiICBHb3QgJXUgQ1BVc1xuIiwgcGh5c2luZm8ubWF4X2NwdV9pZCArIDEp
Ow0KPiArDQo+ICsgICAgdGVzdF90c3hfbXNycygpOw0KPiArICAgIHRlc3RfcnRtX2JlaGF2aW91
cigpOw0KPiArICAgIHRlc3RfZGVmX21heF9wb2xpY2llcygpOw0KPiArICAgIHRlc3RfZ3Vlc3Rz
KCk7DQo+ICt9DQo+ICsNCj4gK2ludCBtYWluKGludCBhcmdjLCBjaGFyICoqYXJndikNCj4gK3sN
Cj4gKyAgICBwcmludGYoIlRTWCB0ZXN0c1xuIik7DQo+ICsNCj4gKyAgICB4Y2ggPSB4Y19pbnRl
cmZhY2Vfb3BlbihOVUxMLCBOVUxMLCAwKTsNCj4gKw0KPiArICAgIGlmICggIXhjaCApDQo+ICsg
ICAgICAgIGVycigxLCAieGNfaW50ZXJmYWNlX29wZW4iKTsNCj4gKw0KPiArICAgIHRlc3RfdHN4
KCk7DQo+ICsNCj4gKyAgICByZXR1cm4gISFucl9mYWlsdXJlczsNCj4gK30NCg==


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:01:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:01:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141685.261630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsp1i-0003ON-HD; Mon, 14 Jun 2021 16:01:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141685.261630; Mon, 14 Jun 2021 16:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsp1i-0003OG-EE; Mon, 14 Jun 2021 16:01:26 +0000
Received: by outflank-mailman (input) for mailman id 141685;
 Mon, 14 Jun 2021 16:01:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XszW=LI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lsp1h-0003OA-Bl
 for xen-devel@lists.xen.org; Mon, 14 Jun 2021 16:01:25 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0c67e3f-ef28-490a-ac69-54ad03371dae;
 Mon, 14 Jun 2021 16:01:23 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2112.outbound.protection.outlook.com [104.47.18.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-4-6cUzbsp2NsieWGN6WuRrmw-1; Mon, 14 Jun 2021 18:01:21 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7152.eurprd04.prod.outlook.com (2603:10a6:800:12b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20; Mon, 14 Jun
 2021 16:01:19 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Mon, 14 Jun 2021
 16:01:19 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2P264CA0014.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::26) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.20 via Frontend Transport; Mon, 14 Jun 2021 16:01:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0c67e3f-ef28-490a-ac69-54ad03371dae
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, xen-devel@lists.xen.org,
 boris.ostrovsky@oracle.com, stephen.s.brennan@oracle.com
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
Date: Mon, 14 Jun 2021 18:01:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YMdZKuKOnFKpQ3sg@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0014.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::26)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 14.06.2021 15:27, Roger Pau Monné wrote:
> On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
>> On 14.06.2021 13:15, Igor Druzhinin wrote:
>>> Hi, Boris, Stephen, Roger,
>>>
>>> We have stress tested recent changes on staging-4.13 which includes a
>>> backport of the subject. Since the backport is identical to the
>>> master branch and all of the pre-reqs are in place - we have no reason
>>> to believe the issue is not the same on master.
>>>
>>> Here is what we got by running heavy stress testing including multiple
>>> repeated VM lifecycle operations with storage and network load:
>>>
>>>
>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
>>> CPU:    17
>>> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
>>> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
>>> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
>>> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
>>> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
>>> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
>>> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
>>> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
>>> cr3: 00000013c1a32000   cr2: 0000000000000000
>>> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
>>>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
>>> Xen stack trace from rsp=ffff83303fff7cf8:
>>>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
>>>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
>>>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
>>>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
>>>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
>>>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
>>>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
>>>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
>>>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
>>>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
>>>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
>>>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
>>>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
>>>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
>>>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
>>>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
>>>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
>>>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
>>>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
>>>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
>>> Xen call trace:
>>>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
>>>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
>>>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
>>>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
>>>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
>>>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
>>>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
>>>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
>>>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
>>>
>>> ****************************************
>>> Panic on CPU 17:
>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>> ****************************************
>>
>> Since this suggests a timer was found on the list without ever having been
>> initialized, I've spotted a case where this indeed could now happen. Could
>> you give the patch below a try?
>>
>> Jan
>>
>> x86/vpt: fully init timers before putting onto list
>>
>> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
>> iterating the list and acting on the timers of the list entries will no
>> longer be kept from entering their loops by create_periodic_time()'s
>> holding of that lock. Therefore at least init_timer() needs calling
>> ahead of list insertion, but keep this and set_timer() together.
>>
>> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Thanks for looking into this so quickly, and sorry for not realizing
> myself when relaxing the locking. Adding the timer to the list without
> it being fully initialized was a latent issue even if protected by the
> lock initially.
>
> Provided testing shows the issue is fixed:

I guess the change here is needed anyway, even if testing finds there's
still something amiss?

> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:11:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141693.261642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspBF-0004qV-Ek; Mon, 14 Jun 2021 16:11:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141693.261642; Mon, 14 Jun 2021 16:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspBF-0004qO-BY; Mon, 14 Jun 2021 16:11:17 +0000
Received: by outflank-mailman (input) for mailman id 141693;
 Mon, 14 Jun 2021 16:11:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGyB=LI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lspBE-0004qI-B9
 for xen-devel@lists.xen.org; Mon, 14 Jun 2021 16:11:16 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f54a521f-288c-484f-8468-108234d9dc30;
 Mon, 14 Jun 2021 16:11:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f54a521f-288c-484f-8468-108234d9dc30
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, <xen-devel@lists.xen.org>,
	<boris.ostrovsky@oracle.com>, <stephen.s.brennan@oracle.com>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
 <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a7ec2d98-465a-9691-ab73-bef5b45a6cfd@citrix.com>
Date: Mon, 14 Jun 2021 17:10:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0206.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 14/06/2021 17:01, Jan Beulich wrote:
> On 14.06.2021 15:27, Roger Pau Monné wrote:
>> On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
>>> On 14.06.2021 13:15, Igor Druzhinin wrote:
>>>> Hi, Boris, Stephen, Roger,
>>>>
>>>> We have stress tested recent changes on staging-4.13 which includes a
>>>> backport of the subject. Since the backport is identical to the
>>>> master branch and all of the pre-reqs are in place - we have no reason
>>>> to believe the issue is not the same on master.
>>>>
>>>> Here is what we got by running heavy stress testing including multiple
>>>> repeated VM lifecycle operations with storage and network load:
>>>>
>>>>
>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
>>>> CPU:    17
>>>> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
>>>> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
>>>> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
>>>> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
>>>> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
>>>> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
>>>> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
>>>> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
>>>> cr3: 00000013c1a32000   cr2: 0000000000000000
>>>> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>>> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>>> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
>>>>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
>>>> Xen stack trace from rsp=ffff83303fff7cf8:
>>>>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
>>>>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
>>>>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
>>>>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
>>>>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
>>>>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
>>>>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
>>>>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
>>>>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
>>>>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
>>>>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
>>>>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
>>>>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
>>>>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
>>>>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
>>>>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
>>>>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
>>>>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
>>>>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
>>>>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
>>>> Xen call trace:
>>>>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
>>>>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
>>>>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
>>>>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
>>>>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
>>>>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
>>>>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
>>>>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
>>>>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
>>>>
>>>> ****************************************
>>>> Panic on CPU 17:
>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>> ****************************************
>>> Since this suggests a timer was found on the list without ever having been
>>> initialized, I've spotted a case where this indeed could now happen. Could
>>> you give the patch below a try?
>>>
>>> Jan
>>>
>>> x86/vpt: fully init timers before putting onto list
>>>
>>> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
>>> iterating the list and acting on the timers of the list entries will no
>>> longer be kept from entering their loops by create_periodic_time()'s
>>> holding of that lock. Therefore at least init_timer() needs calling
>>> ahead of list insertion, but keep this and set_timer() together.
>>>
>>> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
>>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Thanks for looking into this so quickly, and sorry for not realizing it
>> myself when relaxing the locking. Adding the timer to the list without it
>> being fully initialized was a latent issue, even if it was initially
>> protected by the lock.
>>
>> Provided testing shows the issue is fixed:
> I guess the change here is needed anyway, even if testing finds there's
> still something amiss?

We've put this patch in for testing, but results will take a while,
because it only showed up in our weekend stress testing.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:13:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141698.261653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspDO-0005UW-T5; Mon, 14 Jun 2021 16:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141698.261653; Mon, 14 Jun 2021 16:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspDO-0005UP-Pq; Mon, 14 Jun 2021 16:13:30 +0000
Received: by outflank-mailman (input) for mailman id 141698;
 Mon, 14 Jun 2021 16:13:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGyB=LI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lspDN-0005UJ-2N
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 16:13:29 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f61eb937-8f6f-4980-9f41-75c846ff4db0;
 Mon, 14 Jun 2021 16:13:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f61eb937-8f6f-4980-9f41-75c846ff4db0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623687206;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=U6nTTiuWDOF0zOMLhu5s+oRTjwLaLCyfZOTDhRb6sCA=;
  b=QAP38MjN82p/EA1fM4zC33BIFpc1i4o64KyuoFig9qKQRH2qbZG1w+4Y
   Ixe4n2Aqoosw51SGhyRFOQ23AMvibwqJoCA5cfhdh2cvhZF6F3VOKuBic
   GjN9zy7mW/aT6PDpdbxEhLRo32I1JPlpJH3/9g9YKrvaGFQ2qaaz4K19m
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: tsYCfwGFWn1ro4rbk1X18TXcCR9eHcRIlZDdPckR4ybmP/50t7rBO/L4cmg8AinH1xSU5E7yoO
 hH1ZLq1xKCiILPIhs9Ctym0FZIXyVaGAXdAIkjyoEPX38pnQ21vKjR5fUl6S+vxGqkzxIBUPoq
 EmnK1Ha7ATqNzWD5hwOSDqvKwLG0ei4tF73NJtNFzqAv5IlAyfg8rEayemzSBrtHqTdm7kj6KU
 1PUJG5u/+LlMMYZIAYtrba++rcPFjTy8rAzcGrKfjfrdRxrqnVwjch9u8yragTG2uHbB0M3unK
 wYU=
X-SBRS: 5.1
X-MesageID: 46087621
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:hZ30Pa4HaBd0Lkov7gPXwDLXdLJyesId70hD6qkQc3FomwKj9/
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBOPZqAEIcBeOlNK1u5
 0AT0B/YueAcGSTj6zBkXWF+wBL+qj5zEiq792usUuEVWtRGsZdB58SMHfhLqVxLjM2Y6YRJd
 6nyedsgSGvQngTZtTTPAh+YwCSz+e77a4PeHQ9dmYa1DU=
X-IronPort-AV: E=Sophos;i="5.83,273,1616472000"; 
   d="scan'208";a="46087621"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 5/5] tests: Introduce a TSX test
Date: Mon, 14 Jun 2021 17:13:17 +0100
Message-ID: <20210614161317.31481-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210611163627.4878-6-andrew.cooper3@citrix.com>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See the comment at the top of test-tsx.c for details.

This covers various complexities encountered while trying to address the
recent TSX deprecation on client parts.

A sample run on KabyLake with latest microcode and default tsx= looks like
this:

  root@host# ./test-tsx
  TSX tests
    Got 8 CPUs
  Testing MSR_TSX_FORCE_ABORT consistency
    CPU0 val 0x3
  Testing MSR_TSX_CTRL consistency
  Testing RTM behaviour
    Got Abort
  Testing PV default/max policies
    Max: RTM 1, HLE 1, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
    Def: RTM 0, HLE 0, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
  Testing HVM default/max policies
    Max: RTM 1, HLE 1, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
    Def: RTM 0, HLE 0, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
  Testing PV guest
    Created d7
    Cur: RTM 0, HLE 0, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
    Cur: RTM 1, HLE 1, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
  Testing HVM guest
    Created d8
    Cur: RTM 0, HLE 0, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0
    Cur: RTM 1, HLE 1, TSX_FORCE_ABORT 0, RTM_ALWAYS_ABORT 0, TSX_CTRL 0

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Address all comments.  Fix build with the BSDs.
v1.1:
 * Set alternative guest policy, and check.
 * Cope with !HAP configurations.
 * Complete the comment at the top of test-tsx.c
---
 tools/tests/Makefile       |   1 +
 tools/tests/tsx/.gitignore |   1 +
 tools/tests/tsx/Makefile   |  43 ++++
 tools/tests/tsx/test-tsx.c | 538 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 583 insertions(+)
 create mode 100644 tools/tests/tsx/.gitignore
 create mode 100644 tools/tests/tsx/Makefile
 create mode 100644 tools/tests/tsx/test-tsx.c

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 8746aabe6b..25531a984a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -5,6 +5,7 @@ SUBDIRS-y :=
 SUBDIRS-y += resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
 SUBDIRS-$(CONFIG_X86) += mce-test
+SUBDIRS-$(CONFIG_X86) += tsx
 ifneq ($(clang),y)
 SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
diff --git a/tools/tests/tsx/.gitignore b/tools/tests/tsx/.gitignore
new file mode 100644
index 0000000000..97ec4db7ff
--- /dev/null
+++ b/tools/tests/tsx/.gitignore
@@ -0,0 +1 @@
+test-tsx
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
new file mode 100644
index 0000000000..c065a18346
--- /dev/null
+++ b/tools/tests/tsx/Makefile
@@ -0,0 +1,43 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+TARGET := test-tsx
+
+.PHONY: all
+all: $(TARGET)
+
+.PHONY: run
+run: $(TARGET)
+	./$(TARGET)
+
+.PHONY: clean
+clean:
+	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+	$(RM) -f -- *~
+
+.PHONY: install
+install: all
+
+.PHONY: uninstall
+uninstall:
+
+CFLAGS += -Werror
+CFLAGS += $(CFLAGS_xeninclude)
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenguest)
+CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenctrl)
+LDFLAGS += $(LDLIBS_libxenguest)
+LDFLAGS += $(APPEND_LDFLAGS)
+
+test-tsx.o: Makefile
+
+$(TARGET): test-tsx.o
+	$(CC) -o $@ $< $(LDFLAGS)
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
new file mode 100644
index 0000000000..adbbd70eee
--- /dev/null
+++ b/tools/tests/tsx/test-tsx.c
@@ -0,0 +1,538 @@
+/*
+ * TSX settings and consistency tests
+ *
+ * This tests various behaviours and invariants with regard to TSX.  It
+ * ideally wants running for several microcode versions, and all applicable
+ * tsx= commandline settings, on a single CPU, including after an S3
+ * suspend/resume event.
+ *
+ * It tests specifically:
+ *  - The consistency of MSR_TSX_CTRL/MSR_TSX_FORCE_ABORT values across the
+ *    system, and their accessibility WRT data in the host CPU policy.
+ *  - The actual behaviour of RTM on the system.
+ *  - Cross-check the default/max policies based on the actual RTM behaviour.
+ *  - Create some guests, check their defaults, and check that the defaults
+ *    can be changed.
+ */
+
+#define _GNU_SOURCE
+
+#include <err.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/ucontext.h>
+
+#include <xenctrl.h>
+#include <xenguest.h>
+#include <xen-tools/libs.h>
+
+#include "xg_private.h"
+
+enum {
+#define XEN_CPUFEATURE(name, value) X86_FEATURE_##name = value,
+#include <xen/arch-x86/cpufeatureset.h>
+};
+#define bitmaskof(idx)      (1u << ((idx) & 31))
+
+#define MSR_ARCH_CAPABILITIES               0x0000010a
+#define  ARCH_CAPS_TSX_CTRL                 (1 <<  7)
+#define MSR_TSX_FORCE_ABORT                 0x0000010f
+#define MSR_TSX_CTRL                        0x00000122
+
+static unsigned int nr_failures;
+#define fail(fmt, ...)                          \
+({                                              \
+    nr_failures++;                              \
+    (void)printf(fmt, ##__VA_ARGS__);           \
+})
+
+static xc_interface *xch;
+
+/*
+ * Policies, arranged as an array for easy collection of all of them.  We
+ * don't care about the raw policy (index 0) so reuse that for the guest
+ * policy.
+ */
+static struct xc_cpu_policy policies[6];
+#define guest_policy policies[0]
+#define host         policies[XEN_SYSCTL_cpu_policy_host]
+#define pv_max       policies[XEN_SYSCTL_cpu_policy_pv_max]
+#define hvm_max      policies[XEN_SYSCTL_cpu_policy_hvm_max]
+#define pv_default   policies[XEN_SYSCTL_cpu_policy_pv_default]
+#define hvm_default  policies[XEN_SYSCTL_cpu_policy_hvm_default]
+
+static bool xen_has_pv = true, xen_has_hvm = true;
+
+static xc_physinfo_t physinfo;
+
+static enum rtm_behaviour {
+    RTM_UD,
+    RTM_OK,
+    RTM_ABORT,
+} rtm_behaviour;
+
+/*
+ * Test a specific TSX MSR for consistency across the system, taking into
+ * account whether it ought to be accessible or not.
+ *
+ * We can't query offline CPUs, so skip those if encountered.  We don't care
+ * particularly about the exact MSR value, but we do care that it is the same
+ * everywhere.
+ */
+static void test_tsx_msr_consistency(unsigned int msr, bool accessible)
+{
+    uint64_t cpu0_val = ~0;
+
+    for ( unsigned int cpu = 0; cpu <= physinfo.max_cpu_id; ++cpu )
+    {
+        xc_resource_entry_t ent = {
+            .u.cmd = XEN_RESOURCE_OP_MSR_READ,
+            .idx = msr,
+        };
+        xc_resource_op_t op = {
+            .cpu = cpu,
+            .entries = &ent,
+            .nr_entries = 1,
+        };
+        int rc = xc_resource_op(xch, 1, &op);
+
+        if ( rc < 0 )
+        {
+            /* Don't emit a message for offline CPUs */
+            if ( errno != ENODEV )
+                fail("  xc_resource_op() for CPU%u failed: rc %d, errno %d - %s\n",
+                     cpu, rc, errno, strerror(errno));
+            continue;
+        }
+
+        if ( accessible )
+        {
+            if ( rc != 1 )
+            {
+                fail("  Expected 1 result, got %d\n", rc);
+                continue;
+            }
+            if ( ent.u.ret != 0 )
+            {
+                fail("  Expected ok, got %d\n", ent.u.ret);
+                continue;
+            }
+        }
+        else
+        {
+            if ( rc != 0 )
+                fail("  Expected 0 results, got %u\n", rc);
+            else if ( ent.u.ret != -EPERM )
+                fail("  Expected -EPERM, got %d\n", ent.u.ret);
+            continue;
+        }
+
+        if ( cpu == 0 )
+        {
+            cpu0_val = ent.val;
+            printf("  CPU0 val %#"PRIx64"\n", cpu0_val);
+        }
+        else if ( ent.val != cpu0_val )
+            fail("  CPU%u val %#"PRIx64" differs from CPU0 %#"PRIx64"\n",
+                 cpu, ent.val, cpu0_val);
+    }
+}
+
+/*
+ * Check all TSX MSRs, and in particular that their accessibility matches what
+ * is expressed in the host CPU policy.
+ */
+static void test_tsx_msrs(void)
+{
+    printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
+
+    printf("Testing MSR_TSX_CTRL consistency\n");
+    test_tsx_msr_consistency(
+        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
+}
+
+/*
+ * Probe for how RTM behaves, deliberately not inspecting CPUID.
+ * Distinguishes between "no support at all" (i.e. XBEGIN suffers #UD),
+ * working ok, and appearing to always abort.
+ */
+static enum rtm_behaviour __attribute__((noclone)) probe_rtm_behaviour(void)
+{
+    for ( unsigned int i = 0; i < 1000; ++i )
+    {
+        /*
+         * Opencoding the RTM infrastructure from immintrin.h, because we
+         * still support older versions of GCC.  Also so we can include #UD
+         * detection logic.
+         */
+#define XBEGIN_STARTED -1
+#define XBEGIN_UD      -2
+        unsigned int status = XBEGIN_STARTED;
+
+        asm volatile ( ".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN 1f; 1: */
+                       : "+a" (status) :: "memory" );
+        if ( status == XBEGIN_STARTED )
+        {
+            asm volatile ( ".byte 0x0f,0x01,0xd5" ::: "memory" ); /* XEND */
+            return RTM_OK;
+        }
+        else if ( status == XBEGIN_UD )
+            return RTM_UD;
+    }
+
+    return RTM_ABORT;
+}
+
+static struct sigaction old_sigill;
+
+static void sigill_handler(int signo, siginfo_t *info, void *extra)
+{
+    extern const char xbegin_label[] asm(".Lxbegin");
+
+    if ( info->si_addr == xbegin_label &&
+         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
+    {
+        ucontext_t *context = extra;
+
+        /*
+         * Found the XBEGIN instruction.  Step over it, and update `status` to
+         * signal #UD.
+         */
+#if defined(__linux__)
+# ifdef __x86_64__
+        context->uc_mcontext.gregs[REG_RIP] += 6;
+        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
+# else
+        context->uc_mcontext.gregs[REG_EIP] += 6;
+        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
+# endif
+
+#elif defined(__FreeBSD__)
+# ifdef __x86_64__
+        context->uc_mcontext.mc_rip += 6;
+        context->uc_mcontext.mc_rax = XBEGIN_UD;
+# else
+        context->uc_mcontext.mc_eip += 6;
+        context->uc_mcontext.mc_eax = XBEGIN_UD;
+# endif
+
+#elif defined(__NetBSD__)
+# ifdef __x86_64__
+        context->uc_mcontext.__gregs[_REG_RIP] += 6;
+        context->uc_mcontext.__gregs[_REG_RAX] = XBEGIN_UD;
+# else
+        context->uc_mcontext.__gregs[_REG_EIP] += 6;
+        context->uc_mcontext.__gregs[_REG_EAX] = XBEGIN_UD;
+# endif
+
+#else
+# error Unknown environment - please adjust
+#endif
+    }
+    else
+    {
+        /*
+         * Not the SIGILL we're looking for...  Restore the old handler and
+         * try again.  Will likely coredump as a result.
+         */
+        sigaction(SIGILL, &old_sigill, NULL);
+    }
+}
+
+static void test_rtm_behaviour(void)
+{
+    struct sigaction new_sigill = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = sigill_handler,
+    };
+    const char *str;
+
+    printf("Testing RTM behaviour\n");
+
+    /*
+     * Install a custom SIGILL handler while probing for RTM behaviour, as the
+     * XBEGIN instruction might suffer #UD.
+     */
+    sigaction(SIGILL, &new_sigill, &old_sigill);
+    rtm_behaviour = probe_rtm_behaviour();
+    sigaction(SIGILL, &old_sigill, NULL);
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:    str = "#UD";   break;
+    case RTM_OK:    str = "OK";    break;
+    case RTM_ABORT: str = "Abort"; break;
+    default:        str = NULL;    break;
+    }
+
+    if ( str )
+        printf("  Got %s\n", str);
+    else
+        return fail("  Got unexpected behaviour %d\n", rtm_behaviour);
+
+    if ( host.cpuid.feat.rtm )
+    {
+        if ( rtm_behaviour == RTM_UD )
+            fail("  Host reports RTM, but appears unavailable\n");
+    }
+    else
+    {
+        if ( rtm_behaviour != RTM_UD )
+            fail("  Host reports no RTM, but appears available\n");
+    }
+}
+
+static void dump_tsx_details(const struct xc_cpu_policy *p, const char *pref)
+{
+    printf("  %s RTM %u, HLE %u, TSX_FORCE_ABORT %u, RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
+           pref,
+           p->cpuid.feat.rtm,
+           p->cpuid.feat.hle,
+           p->cpuid.feat.tsx_force_abort,
+           p->cpuid.feat.rtm_always_abort,
+           p->msr.arch_caps.tsx_ctrl);
+}
+
+/* Sanity test various invariants we expect in the default/max policies. */
+static void test_guest_policies(const struct xc_cpu_policy *max,
+                                const struct xc_cpu_policy *def)
+{
+    const struct cpuid_policy *cm = &max->cpuid;
+    const struct cpuid_policy *cd = &def->cpuid;
+    const struct msr_policy *mm = &max->msr;
+    const struct msr_policy *md = &def->msr;
+
+    dump_tsx_details(max, "Max:");
+    dump_tsx_details(def, "Def:");
+
+    if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
+          (bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
+           bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT))) ||
+         ((mm->arch_caps.raw | md->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
+        fail("  Xen-only TSX controls offered to guest\n");
+
+    switch ( rtm_behaviour )
+    {
+    case RTM_UD:
+        if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests despite not being available\n");
+        break;
+
+    case RTM_ABORT:
+        if ( cd->feat.raw[0].b &
+             (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
+             fail("  HLE/RTM offered to guests by default despite not being usable\n");
+        break;
+
+    case RTM_OK:
+        if ( !cm->feat.rtm || !cd->feat.rtm )
+             fail("  RTM not offered to guests despite being available\n");
+        break;
+    }
+
+    if ( cd->feat.hle )
+        fail("  Fail: HLE offered in default policy\n");
+}
+
+static void test_def_max_policies(void)
+{
+    if ( xen_has_pv )
+    {
+        printf("Testing PV default/max policies\n");
+        test_guest_policies(&pv_max, &pv_default);
+    }
+
+    if ( xen_has_hvm )
+    {
+        printf("Testing HVM default/max policies\n");
+        test_guest_policies(&hvm_max, &hvm_default);
+    }
+}
+
+static void test_guest(struct xen_domctl_createdomain *c)
+{
+    uint32_t domid = 0;
+    int rc;
+
+    rc = xc_domain_create(xch, &domid, c);
+    if ( rc )
+        return fail("  Domain create failure: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("  Created d%u\n", domid);
+
+    rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
+    if ( rc )
+    {
+        fail("  Failed to obtain domain policy: %d - %s\n",
+             errno, strerror(errno));
+        goto out;
+    }
+
+    dump_tsx_details(&guest_policy, "Cur:");
+
+    /*
+     * Check defaults given to the guest.
+     */
+    if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
+        fail("  RTM %u in guest, despite RTM behaviour %d\n",
+             guest_policy.cpuid.feat.rtm, rtm_behaviour);
+
+    if ( guest_policy.cpuid.feat.hle ||
+         guest_policy.cpuid.feat.tsx_force_abort ||
+         guest_policy.cpuid.feat.rtm_always_abort ||
+         guest_policy.msr.arch_caps.tsx_ctrl )
+        fail("  Unexpected features advertised\n");
+
+    if ( host.cpuid.feat.rtm )
+    {
+        unsigned int _7b0;
+
+        /*
+         * If host RTM is available, all combinations of guest flags should be
+         * possible.  Flip both HLE/RTM to check non-default settings.
+         */
+        _7b0 = (guest_policy.cpuid.feat.raw[0].b ^=
+                (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)));
+
+        /* Set the new policy. */
+        rc = xc_cpu_policy_set_domain(xch, domid, &guest_policy);
+        if ( rc )
+        {
+            fail("  Failed to set domain policy: %d - %s\n",
+                 errno, strerror(errno));
+            goto out;
+        }
+
+        /* Re-get the new policy. */
+        rc = xc_cpu_policy_get_domain(xch, domid, &guest_policy);
+        if ( rc )
+        {
+            fail("  Failed to obtain domain policy: %d - %s\n",
+                 errno, strerror(errno));
+            goto out;
+        }
+
+        dump_tsx_details(&guest_policy, "Cur:");
+
+        if ( guest_policy.cpuid.feat.raw[0].b != _7b0 )
+        {
+            fail("  Expected CPUID.7[0].b 0x%08x differs from actual 0x%08x\n",
+                 _7b0, guest_policy.cpuid.feat.raw[0].b);
+            goto out;
+        }
+    }
+
+ out:
+    rc = xc_domain_destroy(xch, domid);
+    if ( rc )
+        fail("  Failed to destroy domain: %d - %s\n",
+             errno, strerror(errno));
+}
+
+static void test_guests(void)
+{
+    if ( xen_has_pv )
+    {
+        struct xen_domctl_createdomain c = {
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+        };
+
+        printf("Testing PV guest\n");
+        test_guest(&c);
+    }
+
+    if ( xen_has_hvm )
+    {
+        struct xen_domctl_createdomain c = {
+            .flags = XEN_DOMCTL_CDF_hvm,
+            .max_vcpus = 1,
+            .max_grant_frames = 1,
+            .arch = {
+                .emulation_flags = XEN_X86_EMU_LAPIC,
+            },
+        };
+
+        if ( physinfo.capabilities & XEN_SYSCTL_PHYSCAP_hap )
+            c.flags |= XEN_DOMCTL_CDF_hap;
+        else if ( !(physinfo.capabilities & XEN_SYSCTL_PHYSCAP_shadow) )
+            return fail("  HVM available, but neither HAP nor Shadow\n");
+
+        printf("Testing HVM guest\n");
+        test_guest(&c);
+    }
+}
+
+/* Obtain some general data, then run the tests. */
+static void test_tsx(void)
+{
+    int rc;
+
+    /* Read all policies except raw. */
+    for ( unsigned int i = XEN_SYSCTL_cpu_policy_host;
+          i <= XEN_SYSCTL_cpu_policy_hvm_default; ++i )
+    {
+        rc = xc_cpu_policy_get_system(xch, i, &policies[i]);
+
+        if ( rc == -1 && errno == EOPNOTSUPP )
+        {
+            /*
+             * Use EOPNOTSUPP to spot Xen missing CONFIG_{PV,HVM}, and adjust
+             * later testing accordingly.
+             */
+            switch ( i )
+            {
+            case XEN_SYSCTL_cpu_policy_pv_max:
+            case XEN_SYSCTL_cpu_policy_pv_default:
+                if ( xen_has_pv )
+                    printf("  Xen doesn't support PV\n");
+                xen_has_pv = false;
+                continue;
+
+            case XEN_SYSCTL_cpu_policy_hvm_max:
+            case XEN_SYSCTL_cpu_policy_hvm_default:
+                if ( xen_has_hvm )
+                    printf("  Xen doesn't support HVM\n");
+                xen_has_hvm = false;
+                continue;
+            }
+        }
+        if ( rc )
+            return fail("Failed to obtain policy[%u]: %d - %s\n",
+                        i, errno, strerror(errno));
+    }
+
+    rc = xc_physinfo(xch, &physinfo);
+    if ( rc )
+        return fail("Failed to obtain physinfo: %d - %s\n",
+                    errno, strerror(errno));
+
+    printf("  Got %u CPUs\n", physinfo.max_cpu_id + 1);
+
+    test_tsx_msrs();
+    test_rtm_behaviour();
+    test_def_max_policies();
+    test_guests();
+}
+
+int main(int argc, char **argv)
+{
+    printf("TSX tests\n");
+
+    xch = xc_interface_open(NULL, NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+
+    test_tsx();
+
+    return !!nr_failures;
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:40:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141713.261670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspdR-0000EZ-9j; Mon, 14 Jun 2021 16:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141713.261670; Mon, 14 Jun 2021 16:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspdR-0000ES-6a; Mon, 14 Jun 2021 16:40:25 +0000
Received: by outflank-mailman (input) for mailman id 141713;
 Mon, 14 Jun 2021 16:40:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lspdP-0000EI-Or; Mon, 14 Jun 2021 16:40:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lspdP-0000Y5-EN; Mon, 14 Jun 2021 16:40:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lspdP-0002N7-6O; Mon, 14 Jun 2021 16:40:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lspdP-0005TV-5v; Mon, 14 Jun 2021 16:40:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Cpu7sDY5JdpjoSC8tWvxpx/UlV/3/NAAju/paNgmRsQ=; b=kD4eX1gF3C7tsiPwrorASD/DWZ
	zj8ALBbxrcaGRwlFSRQw4rmUSEDvpwSDFhYt5r3vvh4+F0GaIjf3t8Rg8USu3RtsdhRRAa3eXBfiV
	SKXTaVXbinezWhdR6OJFYFrHL9Xim6qGCA5Ffq4HJzJJOMQk+Vu1sYX8mmmzBs6+Jw3s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow
Message-Id: <E1lspdP-0005TV-5v@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 16:40:23 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161107/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
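The commit message above notes that 'query-cpus-fast' avoids side effects on guest execution and renames some fields relative to the removed 'query-cpus'. As a hedged illustration (the key names below come from my recollection of the QMP schema, not from this report), a management tool coping with the removal might translate a 'query-cpus-fast' reply back to the legacy key names like this:

```python
def fast_to_legacy(cpu):
    """Map one 'query-cpus-fast' entry onto the old 'query-cpus' key names.

    Illustrative sketch only -- not QEMU source.  The legacy reply used
    underscored keys ("qom_path", "thread_id") and an integer "CPU" index;
    'query-cpus-fast' uses hyphenated keys ("qom-path", "thread-id",
    "cpu-index").
    """
    return {
        "CPU": cpu["cpu-index"],
        "qom_path": cpu["qom-path"],
        "thread_id": cpu["thread-id"],
        # 'halted' is deliberately absent: reading it required interrupting
        # the vCPU, which is the side effect on guest execution that
        # 'query-cpus-fast' was introduced to avoid.
    }

# Example shape of a (hypothetical) 'query-cpus-fast' return value:
reply = [{"cpu-index": 0, "qom-path": "/machine/unattached/device[0]",
          "thread-id": 1234, "target": "x86_64"}]
print([fast_to_legacy(c) for c in reply])
```

A client relying on the old underscored field names (or on 'halted') would break when the command was removed, which is consistent with a guest-saverestore toolstack test regressing at this changeset.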


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore --summary-out=tmp/162806.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow guest-saverestore
Searching for failure / basis pass:
 162778 fail [host=pinot0] / 160125 [host=fiano0] 160119 [host=godello1] 160113 [host=huxelrebe1] 160104 [host=fiano1] 160097 [host=pinot1] 160091 [host=godello0] 160088 [host=elbling0] 160082 [host=godello1] 160079 [host=albana1] 160070 [host=chardonnay1] 160066 [host=chardonnay0] 160002 [host=huxelrebe1] 159947 [host=godello0] 159926 [host=elbling0] 159911 [host=fiano0] 159898 [host=albana0] 159888 [host=godello1] 159878 [host=godello0] 159869 [host=godello1] 159860 [host=albana1] 159853 [host=elbling1] 159848 [host=huxelrebe1] 159842 [host=elbling0] 159834 [host=godello0] 159828 ok.
Failure / basis pass flights: 162778 / 159828
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ef91b07388e1c0a50c604e5350eeda98428ccea6-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#cb90ecf9349198558569f6c86c4c27d215406095-894fc4fd670aaf04a67dc7507739f914ff4bacf2 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#243036df0d55673de59c214e240b9b914d278b65-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 31671 nodes in revision graph
Searching for test results:
 162778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162796 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 162806 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 159828 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 159834 [host=godello0]
 159842 [host=elbling0]
 159848 [host=huxelrebe1]
 159853 [host=elbling1]
 159860 [host=albana1]
 159869 [host=godello1]
 159878 [host=godello0]
 159888 [host=godello1]
 159898 [host=albana0]
 159911 [host=fiano0]
 159926 [host=elbling0]
 159947 [host=godello0]
 160002 [host=huxelrebe1]
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=chardonnay0]
 160070 [host=chardonnay1]
 160079 [host=albana1]
 160082 [host=godello1]
 160088 [host=elbling0]
 160091 [host=godello0]
 160097 [host=pinot1]
 160104 [host=fiano1]
 160113 [host=huxelrebe1]
 160119 [host=godello1]
 160125 [host=fiano0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 161018 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 161035 fail irrelevant
 161038 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161040 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161041 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161043 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161044 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160980 fail irrelevant
 161047 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161052 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0693602a23276b076a679b1e7ed9125a444336b6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161055 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161057 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161059 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161062 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 51204c2f188ec1e2a38f14718d38a3772f850a4b b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161063 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161068 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6bc6cdc82d45f203bc9fc4342c0452214c74fe b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161069 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161070 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9abda42bf2f5aa6ef403d3140fd3d7d88e8064e9 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 893103e286ac1c500d2ad113f55c41edb35e047c
 161072 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161075 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a557b00469bca61a058fc1db4855503cac1c3219 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 4e01c48886d9fbfee3bf7e481c4529a176331c78
 161076 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1941858448e76f83eb00614c4f34ac29e9a8e792 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161077 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 65a9d3807e9a0ffd9f9719416a07be41b6f39e94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161080 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 94fa95c8746c553324e8b69ea4a74af670075324 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e4341623a3b87e7eca87d42b7b88da967cd21c49 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 60c0444fae2148452f9ed0b7c49af1fa41f8f522
 161082 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d1929069e355afb809a50a7f6b6affdea399cc8c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 368096b9c4a273be58dd897e996e3e010bcfc21b
 161083 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b6d5996706ddb6082e3ea8de79849bfecf2aaa15 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161084 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161050 fail irrelevant
 161087 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161089 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 161091 fail irrelevant
 161092 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161095 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161097 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161098 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161101 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161103 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161105 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161106 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161107 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162762 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162676 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 159828 (pass), for basis pass
 Result found: flight 162650 (fail), for basis failure
 Repro found: flight 162796 (pass), for basis pass
 Repro found: flight 162806 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161098 (pass), for last pass
 Result found: flight 161101 (fail), for first failure
 Repro found: flight 161103 (pass), for last pass
 Repro found: flight 161105 (fail), for first failure
 Repro found: flight 161106 (pass), for last pass
 Repro found: flight 161107 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161107/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.295752 to fit
pnmtopng: 240 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162806: tolerable ALL FAIL

flight 162806 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162806/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:42:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:42:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141718.261684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspfF-0000uy-Rc; Mon, 14 Jun 2021 16:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141718.261684; Mon, 14 Jun 2021 16:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspfF-0000ur-Oj; Mon, 14 Jun 2021 16:42:17 +0000
Received: by outflank-mailman (input) for mailman id 141718;
 Mon, 14 Jun 2021 16:42:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGyB=LI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lspfE-0000ua-5t
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 16:42:16 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de7b7b0a-bd50-4957-9dae-b783964c599b;
 Mon, 14 Jun 2021 16:42:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de7b7b0a-bd50-4957-9dae-b783964c599b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623688934;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=YxjuSUkcpbdBTj2DdKCqim+neFqI116FXn0pmFQyPNQ=;
  b=XMYFTaFT/kWSqdjXGkrkZMDdKteFZKVWea2ztqCPn7Ioe1peo0q6Xcny
   F9GzQSss/0TjYPNHDZ0XAAoQXZdk8WTR4VfALz5+DpZPBjbD5CSzOUFjf
   5a0j/vpPffS9B9BF5HAPbODKUPbgyOv7LHxt+r81+bclCEEgIPVNyG0sd
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: pwAFhFVnoDb8Qn65g2gJMq8zaGKxeq4VPJUT73pPaYkn9+mr58kuoogA60XHQrMCVzszeC4rMK
 qiQF5tYXp+Q966tggI+6x9BV4wNyf1DHK9ZiZZO/6Hb/2ZHtY4VT1Qpd9wJcbWNvvCclXPExpp
 8L9gIEVjRBO75/in3bVNuaEaYxg+NK7U3GACzSC91UknPywRGcUn3T5tZiPphCkz24oXlSzExS
 cISzkyftuTmJSiUUus6FYD66tcNi3t+Yc/Gi4xfgnopAOtlGIbHh0o5Anzr6KD+dFtn6zPJ9Kg
 fz4=
X-SBRS: 5.1
X-MesageID: 46089939
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:/C0ed6NuUkKdJcBcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-IronPort-AV: E=Sophos;i="5.83,273,1616472000"; 
   d="scan'208";a="46089939"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jtBEKvz5QVWEifbMg2dpdDrOL+ImH/pJ+Lg4LssJ6maiULKA9q8xSZvmz+c3EXzXuKkyfgD76aqIXPdbkdoLoo6zOeXsrEtz5YoEK9uJLCfko/0b78XYbTvEFmqcQIN2fY7QKKr1eJ48wQbNYzn2UbY1Cbydd5UJ3QF401zK+Giw6ukwT2Q1VyXJHCzHqP04EcbPigyIRJv5iDz5ydg8zrM+qE2cENfQ69VTckTvcdQUjbiMnIL/+WT/1yt4Radhi0GQ4BJS8/qpb1ywmC7i8E/nXZxeHNAgS6M3IcYCptNn4JB5ouw5nPxtQnWFMexSyaOfAfMY4dG6Yjl19i+sDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hJFBPrmc5Rl4w3EL0oHertCC54rQcxAQVNjvK5mj840=;
 b=VdkQznLpiol3vBfDFsdVswatgxbeLWEAtN4MnZ8trBvOE3gAtwc/HYCaez0vG58DVbBnS2kqGXr9n0HrFRURtqE1A+7DrF0ZgzqCRsfe7D2JhxHX0E1PnjBzALhKTIvAm5EDRbsS6DuHpNtnY8FFbV3Z13+5zqy5hRUXpYwQUTmqlEdBI4MlHEnU4FUre+Guo/ZhtOvX3DLyOwunVZLrYzR41HrLXQ0/UDtKAHuLB02NJqPZAHp8kZk4aJ1ah8DKzBaDMdPh4EFWuTiA3pg1nbqPGW3YE9pLc8idv2CEuobA2riIiz7ESpaZuYR7J5EKqgxQ9urQshcFlGJXuy4B6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hJFBPrmc5Rl4w3EL0oHertCC54rQcxAQVNjvK5mj840=;
 b=AFoXz8Pdl8OoBAjiMPy5O7LptllLw5P+5ajkNVrgPN3tRJ9a2qHpe5moKPHs0zyGwGRnwlX66Ld2OhYm6xCnni/6cdDuB0G7eE5kgWjUL2u1TOnEsaw6jhVFnibRl3HXhpi/ehF/F1ALZUt0N50dBnKlJSL1ksMwivhxT3/i1go=
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, "JBeulich@suse.com" <JBeulich@suse.com>, "wl@xen.org"
	<wl@xen.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614104716.23405-1-andrew.cooper3@citrix.com>
 <3b9a4b575108a2043b2c61ade6f7389e3d6f7ad6.camel@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v1.1 5/5] tests: Introduce a TSX test
Message-ID: <22439f3c-e41b-3fd4-7865-41d6821c443e@citrix.com>
Date: Mon, 14 Jun 2021 17:32:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3b9a4b575108a2043b2c61ade6f7389e3d6f7ad6.camel@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0366.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18e::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f8d4fd4-1ce3-4d47-b1ea-08d92f520cf5
X-MS-TrafficTypeDiagnostic: BYAPR03MB4614:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB46144D8EC83C0C5CF8C452C9BA319@BYAPR03MB4614.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: R8eE6Xms+nzZYtBP9+YGdT9CuPQgUxV5s5faJX0iFm4+U1l9q+QdRS8bOX/kiBM2K46xAqlw/l1+5Vuaa8E7Hf/+NulZ9L7WWDL5AVjYzWlCSuSC7vXndm0kYHL8Qan0xCvZLVzvAbAxAIl9spEpZFFKkoo0tmtNZ1Y+j1ZJv5dJBDhNRKr189lwocLCPUMHqC0Lmp7FNw2yQ9hus9l94XeFLlbBLibCQyIqyZ71k4JX+l6pu7bKr45zJhnaGYb6J2eryMyPkBpG/mttSRRBzw5fTR9Wfmx4o3S1mTbp6plyuVqQG6HGYNMI7QyqkKHIoLNob8xb5Nou8QtnmoVPZqOk4Kxm9b+cz+T1uIrYoO4NgCcxZZMgpOSRzrUQZ9HEBykYD4EYETthyH6GxfI/sHNMI2VioYfBPYsd8eVPD7K4OAz4acWWyFUyy7OmpamkxUG0VfXQhoTbcWXIFY/KrglyWg9HbK90c7DaeN8WJKc7RC34sFh1d17qHnYDjwDTlFSVE86cfBBtETr/v5VkbmTBcEOnVKeRYYYpAO9zDM5ADL3DqzTCpre9N/AxjwVaP6t/k+x48g27niSZ0GMPzbbK7QfHKHtDCdc64Gzh9jXIxYsofS8b7GQZgUAFJjPhqHb3E5PR48RuIgVWQ8DmrkS7QAaASG7XWfBqPWKKUKU24/oH51gdYpB5lftsWXT6
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(346002)(376002)(366004)(4326008)(2906002)(5660300002)(8676002)(186003)(8936002)(6486002)(16526019)(83380400001)(110136005)(54906003)(26005)(31696002)(2616005)(956004)(38100700002)(31686004)(6666004)(53546011)(66476007)(66556008)(36756003)(478600001)(16576012)(66946007)(316002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?cXBIVXlFeDRUMjR0WHA2ZE9LZEJYcFM2dUI0dWpvOVZyL0JCU1NPNklLOGFC?=
 =?utf-8?B?NzJJby9GaURtTEZVdTVMaXdVeWJvVVhKYkdYcnVnZkhtcElnM3p0ZzdjVit6?=
 =?utf-8?B?RDhSMzNWOHJMdE9QVjBrOVhnU1dTKzRLeHlQNlRVd2xLbnNVdlA3eTZzOUFm?=
 =?utf-8?B?Y3NDeXV0YzhFdU5HUWZHWVQ3QnF4SXB5dTZsNDQwQWlhRDNhZXFWTDFNMnR6?=
 =?utf-8?B?aEhpUHVjekpzY3h2d3BNQ2kxVCs0U3Ryc2pYMDN4UU9SeEhJNHFydkpDcTlr?=
 =?utf-8?B?YUJpc1NlM1RndVpZUXl4NDMwUVRLYVFKZVRTMXFMNTYyM2xrUVNUUGt5bVoy?=
 =?utf-8?B?bTNaZ1Vjd1NEbDVkeFJRL282ZzZUUjJhZHdzWTFnUnFIc01JbXAzZFBUWFk5?=
 =?utf-8?B?SUlRMmxvMjg3Q1ozMnlJdnJjeUJFTUNzM2xWaUZNMkxZWTJWZjZnSnUwSE5I?=
 =?utf-8?B?M2R4M2NLbGlWV0xHYytNclF5TzFxQk5GMUVWMHRsdEpsNnhMT25WamtSQTd4?=
 =?utf-8?B?UzVzbnNienAvZzBxRXV1eFB0MWpVc3ptcVZ1ZmRlM0pjYWxEUml2Rllqbi9C?=
 =?utf-8?B?a2txUHA1KzlsVmFScGZjT2JqOXBXMWVEdDFxWmVjZUR2OU5WN0F0MjdxaWlW?=
 =?utf-8?B?S2ZlUTg3Tm5xOHhFdzh0VlBoa0pJNTl6VmY3SWw0RTNDbElrVzdQdU1yaGpv?=
 =?utf-8?B?NldaQURBVkNTRUVRZEF6akllVm05Y1hNd2loNUtBeWdhUGdBZ3FCZ1VvbmRT?=
 =?utf-8?B?cG9uR2NKTG1HUHBLbEtGRmpZcmNmUWtCNDh4dkJqWiswMWNlQm5SMElQRmRL?=
 =?utf-8?B?eE1Nb3VoQ2hmc0RMQWN5aDBwQzlWVXBYRlZjTWFzckFTbU5XbTRZZkRXT3dR?=
 =?utf-8?B?K0FLS2JKaC9Sbll5UGd1N0JsNVFtaGNWT2ZTTE9HeGgzVHhTVnVTcGZ6cFRE?=
 =?utf-8?B?WVM4aDN1andLbFplalhZdkZScURETC9kTGRISjlidUZtZ2RZeFJsSTZRSjZR?=
 =?utf-8?B?eGozdFE4SXFad0QzeWdIaE1CaGpTSENESGE1SWRyM09odmdsb3hvMDlzWUgx?=
 =?utf-8?B?QXVRL0pxR0xGTENoRDJ2T2J2M25LbnVjelkrWUhPNWoyQ2tPTmlPNkQ5Zkdk?=
 =?utf-8?B?WVFkZFlIdkJaejhKWUUweVhCSFp6WDJSQmtjazIxU2MrUzVFcjJldHFSTWNR?=
 =?utf-8?B?UFJlQWtKZEQ3VXVvNm9WRGhxNlVWZlFVc1Jmd1RBQkxqSFZXT1VERC9XaENz?=
 =?utf-8?B?ZlBTRCtkcTR2TSs0VU4yYmlPdUVrWGpwaS91elIzWFpIdlRsWmVXZnI0azRQ?=
 =?utf-8?B?UWlsNmljbFlUNnhGV3FtcmNDRUltWExRRE9iYUVrVmsrZnRNYk1vU3c0dVVy?=
 =?utf-8?B?ZmVUald3T3ArUjBDZzFRYkU3K0ZkMk8zVVl3KzUrQzVxSWx2L2pBVG40bnpC?=
 =?utf-8?B?V3ZVMVB5VnM2UmIzWExuM0FJWW90cnl0Um5yUlk5b1NHMEM5RWMyM0VqVlQ1?=
 =?utf-8?B?UkNzN1NQdmJFN1A1QVh5Tm1ZNG1IcHo1QXJWTnJpVkkvbUJ6U2dzS1dYeFhn?=
 =?utf-8?B?TVQzOUJhUnVaU3BqVWVBNlNPdnBBQUd1cE1aa1RSeDJOZU5MSS82ZjEyT2Vu?=
 =?utf-8?B?a3UyTk1pdlUwRGRzSnM2U1JJLzJHcGV1OU9pOHh0UFBKSTdlUmsvNHdoZGt2?=
 =?utf-8?B?dWlWOFh3U3ZIYTN0SjZFWlNicVJKWnUrbGU4eDBOSENENkxVQTF2REliRFkv?=
 =?utf-8?Q?lJtI1KmevRLXQgenxsklWA0BzojR2bvOgebpUCm?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f8d4fd4-1ce3-4d47-b1ea-08d92f520cf5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 16:32:50.3416
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +7SFPKzp6gxfI9yLMW3nPW1AOil58wlNGBZ5GUTRvdRjqV+rwVlrVrmQ6Afe3wbwHBfaDzcoOCOPmnF6AXwhX8tnZGlC/E6IDulTbp7A5/M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4614
X-OriginatorOrg: citrix.com

On 14/06/2021 16:55, Edwin Torok wrote:
> On Mon, 2021-06-14 at 11:47 +0100, Andrew Cooper wrote:
>> +/*
>> + * Check all TSX MSRs, and in particular that their accessibility
>> matches what
>> + * is expressed in the host CPU policy.
>> + */
>> +static void test_tsx_msrs(void)
>> +{
>> +    printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
>> +    test_tsx_msr_consistency(
>> +        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
>> +
>> +    printf("Testing MSR_TSX_CTRL consistency\n");
>> +    test_tsx_msr_consistency(
>> +        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
>> +}
>
> This is great, could we extend the test to all MSRs that Xen knows
> about and are expected to be identical? Particularly
> MSR_SPEC_CTRL, MSR_MCU_OPT_CTRL, and I see some MSRs used for errata
> workarounds like MSR_MCU_OPT_CTRL, possibly more.

MSR_SPEC_CTRL, no.  Its value is influenced by the guest kernel in
context, and we would not expect it to be consistent across the system
at an arbitrary point in time.

MSR_MCU_OPT_CTRL might be a good candidate for a future change, but it's
not related to TSX.  (That said, it is actually how I spotted XSA-377).

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:54:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141729.261695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsprG-0002Qq-W1; Mon, 14 Jun 2021 16:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141729.261695; Mon, 14 Jun 2021 16:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsprG-0002Qj-Su; Mon, 14 Jun 2021 16:54:42 +0000
Received: by outflank-mailman (input) for mailman id 141729;
 Mon, 14 Jun 2021 16:54:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lsprG-0002Qd-31
 for xen-devel@lists.xenproject.org; Mon, 14 Jun 2021 16:54:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsprF-0000nQ-UV; Mon, 14 Jun 2021 16:54:41 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lsprF-0007UN-NO; Mon, 14 Jun 2021 16:54:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=urDkAB80A1heHPQyL7ADYzlwiCB93nhY3SG2xm4m4h8=; b=zm+UbbJrSgBidipqLPHMJPE3z5
	e3IXlvPQfVeFK916vaSXHozBTWFSOFNp5bDun1yGgVC5BaNwv4GUY1n7gT+OcwIDB15N0kCLDkHtC
	WjM0ver3vYAHZcWbKL0JRp7CwPntwjlFu6w6qnc3a7o73CjH7Xjhm2qGyyr4uRNesCFI=;
Subject: Re: [PATCH v2] Arm: avoid .init.data to be marked as executable
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <5c173e92-f615-c95a-21a2-5c894727414d@suse.com>
 <74fbf731-59f2-0b2e-8707-142091a5876d@xen.org>
Message-ID: <d4406c77-040f-d70d-b356-0020c82bc624@xen.org>
Date: Mon, 14 Jun 2021 18:54:39 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <74fbf731-59f2-0b2e-8707-142091a5876d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/06/2021 15:54, Julien Grall wrote:
> Hi Jan,
> 
> On 14/06/2021 15:52, Jan Beulich wrote:
>> This confuses disassemblers, at the very least. Move
>> .altinstr_replacement to .init.text. The previously redundant ALIGN()
>> now gets converted to page alignment, such that the hypervisor mapping
>> won't have this as executable (it'll instead get mapped r/w, which I'm
>> told is intended to be adjusted at some point).
>>
>> Note that for the actual patching logic's purposes this part of
>> .init.text _has_ to live after _einittext (or before _sinittext), or
>> else branch_insn_requires_update() would produce wrong results.
>>
>> Also, to have .altinstr_replacement have consistent attributes in the
>> object files, add "x" to the one instance where it was missing.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 16:55:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 16:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141734.261709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspru-00030J-AA; Mon, 14 Jun 2021 16:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141734.261709; Mon, 14 Jun 2021 16:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lspru-00030C-7G; Mon, 14 Jun 2021 16:55:22 +0000
Received: by outflank-mailman (input) for mailman id 141734;
 Mon, 14 Jun 2021 16:55:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsprs-0002zy-VZ; Mon, 14 Jun 2021 16:55:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsprs-0000o6-O1; Mon, 14 Jun 2021 16:55:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsprs-0003II-Gc; Mon, 14 Jun 2021 16:55:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsprs-00011U-G6; Mon, 14 Jun 2021 16:55:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jZS4LIInUo3gwSpUSyR0006/lJBrZKrfJ22h3a7p+vY=; b=1PHqIzfQOkb/io8Gxn62Y4xVTU
	DtPwuAg611uyo9XA1pr0qlS/youslbsriHuA1K73War0csQTcJpiHIJ8oyv6ShWa75vSBZ/aticDw
	Pq7COPrPUUy9faPU5JGXXD8ekN650LZVnoO21hqip8xXYPnc/ZOCUwVz/jwR/JKQg708=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162804-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162804: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=163f47c14737cfa9dfb3240deea356b08caf7614
X-Osstest-Versions-That:
    xen=4f1858763b7b1aeb79fa7c818eca98c96943aa69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 16:55:20 +0000

flight 162804 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162804/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  163f47c14737cfa9dfb3240deea356b08caf7614
baseline version:
 xen                  4f1858763b7b1aeb79fa7c818eca98c96943aa69

Last test of basis   162800  2021-06-14 11:01:36 Z    0 days
Testing same since   162804  2021-06-14 14:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4f1858763b..163f47c147  163f47c14737cfa9dfb3240deea356b08caf7614 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 17:01:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 17:01:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162793-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162793: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=009c9aa5be652675a06d5211e1640e02bbb1c33d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 17:01:06 +0000

flight 162793 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162793/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                009c9aa5be652675a06d5211e1640e02bbb1c33d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  317 days
Failing since        152366  2020-08-01 20:49:34 Z  316 days  540 attempts
Testing same since   162793  2021-06-14 03:59:52 Z    0 days    1 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680413 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 17:06:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 17:06:30 +0000
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: Andrew Cooper <andrew.cooper3@citrix.com>,
        Jan Beulich <jbeulich@suse.com>,
        Roger Pau Monné <roger.pau@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, xen-devel@lists.xen.org,
        stephen.s.brennan@oracle.com
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
 <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
 <a7ec2d98-465a-9691-ab73-bef5b45a6cfd@citrix.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <45cd516b-326f-a0e6-325a-fc0debd48571@oracle.com>
Date: Mon, 14 Jun 2021 13:06:12 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <a7ec2d98-465a-9691-ab73-bef5b45a6cfd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
MIME-Version: 1.0
 =?utf-8?B?QXk2VXhlRUpRVFNURHNqVk5VZVdmYVJDdjRIcDh1ZTJOamx1R0UzUmVjV3Nx?=
 =?utf-8?B?M0x1bms0YWRGQzlkbGdkeU0zRDFTam9wVHNwclBxRVdkbnFNRWZaN25wZzJk?=
 =?utf-8?B?cGYweHZUTGFHQ2hnRWQwSXl6VFJjVmNzc3prbmNQN1dZL3Fyb2hVQ1FNMVow?=
 =?utf-8?B?T3ZGZEJ5UE1vdjV3N1RCbWpWR0dtemc5SGxDNlZSMmY1RjJsaDJ2MUdZWXp3?=
 =?utf-8?B?dXNxQVJnMmFkRWtTcWQrL0FrSXl2akd2YUd1OEROWDhiQjdvS2QyczJ0ZHAv?=
 =?utf-8?B?SWROZTJjTWpUcjdSS3l5MVJTdXk1djRtWlduTy9zZHcwNmRhdERTOXMrUDlB?=
 =?utf-8?B?Wmp0Q0E0MkZRb09ZU1lHbDlaWXZvWVNKdy9jM3ZLQ3hYQ0F3ZVI1dWNZV3Za?=
 =?utf-8?B?QVpQUTBqcmpxbmowT3hiUjh6RkFhb3FpK1RmdVNMdjN2YnBHem9MS2dOUFZJ?=
 =?utf-8?B?VXRDUm12a3U2SFJHdkFzOTdubkZ0d3NOTGZOY0xCZ242MGRqbXJMd0p4dXlW?=
 =?utf-8?B?SFM3U3RLTVEyOG03WDV6T3FaMVI5SVZzTk9kbzdZa1NsOFRxYUVlV3lLQk5S?=
 =?utf-8?B?eTNtMlY2bDk4TmZWSnR5Q2swQTNUb3FnRktKYS9leVlyZmEwcS93cFdYeFZG?=
 =?utf-8?Q?u8ZMYGzi70nlxk42YyTKNWuO3IFgXzYH/tJmvXD?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 943b8ecd-145f-4bc8-485d-08d92f56b9ae
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2021 17:06:18.0566
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uLcWn4S7DEExEVBozerTBKEVAXxk6R7Ifbq3fuB67uzyMEsydLddvsKKEbsm6IYBlEyxjoeTNJlTNIMjSxjNfjIFt66s2hLdW3fJ9DO+1RU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4224
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10015 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxlogscore=999 mlxscore=0
 malwarescore=0 spamscore=0 phishscore=0 suspectscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106140108
X-Proofpoint-ORIG-GUID: AgmiMJrp3skUPeKHQaePqYe0CObT7eqU
X-Proofpoint-GUID: AgmiMJrp3skUPeKHQaePqYe0CObT7eqU
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10015 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 priorityscore=1501 phishscore=0
 bulkscore=0 clxscore=1011 mlxlogscore=999 adultscore=0 malwarescore=0
 spamscore=0 lowpriorityscore=0 impostorscore=0 mlxscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106140108


On 6/14/21 12:10 PM, Andrew Cooper wrote:
> On 14/06/2021 17:01, Jan Beulich wrote:
>> On 14.06.2021 15:27, Roger Pau Monné wrote:
>>> On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
>>>> On 14.06.2021 13:15, Igor Druzhinin wrote:
>>>>> Hi, Boris, Stephen, Roger,
>>>>>
>>>>> We have stress tested the recent changes on staging-4.13, which include a
>>>>> backport of the patch in the subject line. Since the backport is identical
>>>>> to what is on the master branch and all of the prerequisites are in place,
>>>>> we have no reason to believe the issue is any different on master.
>>>>>
>>>>> Here is what we got by running heavy stress testing including multiple
>>>>> repeated VM lifecycle operations with storage and network load:
>>>>>
>>>>>
>>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>>> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
>>>>> CPU:    17
>>>>> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
>>>>> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
>>>>> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
>>>>> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
>>>>> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
>>>>> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
>>>>> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
>>>>> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
>>>>> cr3: 00000013c1a32000   cr2: 0000000000000000
>>>>> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>>>> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>>>> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
>>>>>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
>>>>> Xen stack trace from rsp=ffff83303fff7cf8:
>>>>>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
>>>>>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
>>>>>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
>>>>>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
>>>>>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
>>>>>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
>>>>>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
>>>>>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
>>>>>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
>>>>>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
>>>>>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
>>>>>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
>>>>>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
>>>>>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
>>>>>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
>>>>>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
>>>>>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
>>>>>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
>>>>>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
>>>>>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
>>>>> Xen call trace:
>>>>>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
>>>>>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
>>>>>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
>>>>>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
>>>>>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
>>>>>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
>>>>>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
>>>>>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
>>>>>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
>>>>>
>>>>> ****************************************
>>>>> Panic on CPU 17:
>>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>>> ****************************************
>>>> Since this suggests a timer was found on the list without ever having been
>>>> initialized, I've spotted a case where this indeed could now happen. Could
>>>> you give the patch below a try?
>>>>
>>>> Jan
>>>>
>>>> x86/vpt: fully init timers before putting onto list
>>>>
>>>> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
>>>> iterating the list and acting on the timers of the list entries will no
>>>> longer be kept from entering their loops by create_periodic_time()'s
>>>> holding of that lock. Therefore at least init_timer() needs calling
>>>> ahead of list insertion, but keep this and set_timer() together.
>>>>
>>>> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
>>>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Thanks for looking into this so quickly, and sorry for not realizing
>>> myself when relaxing the locking. Adding the timer to the list without
>>> it being fully initialized was a latent issue even if protected by the
>>> lock initially.
>>>
>>> Provided testing shows the issue is fixed:
>> I guess the change here is needed anyway, even if testing finds there's
>> still something amiss?


Yes, I think so. Thanks for finding this.



Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


> We've put this patch in for testing, but results will take a while,
> because it only showed up in our weekend stress testing.
>
> ~Andrew
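The ordering bug the patch above addresses boils down to publishing a timer on a list before initialising it. A minimal standalone sketch of the two orderings (enum values, struct layout and function names are illustrative stand-ins, not Xen's actual timer internals):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for Xen's timer states. */
enum timer_status {
    TIMER_STATUS_invalid = 0,   /* memory as found before init_timer() */
    TIMER_STATUS_inactive,      /* initialised, but not armed */
    TIMER_STATUS_active,        /* armed */
};

struct timer {
    enum timer_status status;
    struct timer *next;
};

static struct timer *pt_list;   /* the per-vCPU list other paths iterate */

/* Buggy ordering: the timer becomes visible on the list while its status
 * is still TIMER_STATUS_invalid.  A concurrent stop_timer() walking the
 * list between the two statements trips the reported assertion. */
static void publish_then_init(struct timer *t)
{
    t->next = pt_list;
    pt_list = t;
    t->status = TIMER_STATUS_inactive;
}

/* Fixed ordering (what the patch enforces): fully initialise the timer
 * before list insertion, so any concurrent walker sees a valid status. */
static void init_then_publish(struct timer *t)
{
    t->status = TIMER_STATUS_inactive;
    t->next = pt_list;
    pt_list = t;
}
```

In the real code the list walker runs on another CPU, so this single-threaded sketch can only show the window; the check that fired, 'timer->status >= TIMER_STATUS_inactive', is exactly what a walker performs on each entry.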


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 17:37:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 17:37:14 +0000
Subject: Re: [PATCH v2 5/5] tests: Introduce a TSX test
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, Edwin Torok
	<edvin.torok@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614161317.31481-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2288287b-a4bf-119a-1391-80afe203fa6e@citrix.com>
Date: Mon, 14 Jun 2021 18:21:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210614161317.31481-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 14/06/2021 17:13, Andrew Cooper wrote:
> +/*
> + * Probe for how RTM behaves, deliberately not inspecting CPUID.
> + * Distinguishes between "no support at all" (i.e. XBEGIN suffers #UD),
> + * working ok, and appearing to always abort.
> + */
> +static enum rtm_behaviour __attribute__((noclone)) probe_rtm_behaviour(void)

This doesn't compile, because Clang doesn't understand noclone.

With it dropped, https://cirrus-ci.com/build/6399801072812032 is the
FreeBSD build, confirming that sigill_handler() below is seemingly ok.

~Andrew

> +{
> +    for ( unsigned int i = 0; i < 1000; ++i )
> +    {
> +        /*
> +         * Opencoding the RTM infrastructure from immintrin.h, because we
> +         * still support older versions of GCC.  Also so we can include #UD
> +         * detection logic.
> +         */
> +#define XBEGIN_STARTED -1
> +#define XBEGIN_UD      -2
> +        unsigned int status = XBEGIN_STARTED;
> +
> +        asm volatile ( ".Lxbegin: .byte 0xc7,0xf8,0,0,0,0" /* XBEGIN 1f; 1: */
> +                       : "+a" (status) :: "memory" );
> +        if ( status == XBEGIN_STARTED )
> +        {
> +            asm volatile ( ".byte 0x0f,0x01,0xd5" ::: "memory" ); /* XEND */
> +            return RTM_OK;
> +        }
> +        else if ( status == XBEGIN_UD )
> +            return RTM_UD;
> +    }
> +
> +    return RTM_ABORT;
> +}
> +
> +static struct sigaction old_sigill;
> +
> +static void sigill_handler(int signo, siginfo_t *info, void *extra)
> +{
> +    extern const char xbegin_label[] asm(".Lxbegin");
> +
> +    if ( info->si_addr == xbegin_label &&
> +         memcmp(info->si_addr, "\xc7\xf8\x00\x00\x00\x00", 6) == 0 )
> +    {
> +        ucontext_t *context = extra;
> +
> +        /*
> +         * Found the XBEGIN instruction.  Step over it, and update `status` to
> +         * signal #UD.
> +         */
> +#if defined(__linux__)
> +# ifdef __x86_64__
> +        context->uc_mcontext.gregs[REG_RIP] += 6;
> +        context->uc_mcontext.gregs[REG_RAX] = XBEGIN_UD;
> +# else
> +        context->uc_mcontext.gregs[REG_EIP] += 6;
> +        context->uc_mcontext.gregs[REG_EAX] = XBEGIN_UD;
> +# endif
> +
> +#elif defined(__FreeBSD__)
> +# ifdef __x86_64__
> +        context->uc_mcontext.mc_rip += 6;
> +        context->uc_mcontext.mc_rax = XBEGIN_UD;
> +# else
> +        context->uc_mcontext.mc_eip += 6;
> +        context->uc_mcontext.mc_eax = XBEGIN_UD;
> +# endif
> +
> +#elif defined(__NetBSD__)
> +# ifdef __x86_64__
> +        context->uc_mcontext.__gregs[_REG_RIP] += 6;
> +        context->uc_mcontext.__gregs[_REG_RAX] = XBEGIN_UD;
> +# else
> +        context->uc_mcontext.__gregs[_REG_EIP] += 6;
> +        context->uc_mcontext.__gregs[_REG_EAX] = XBEGIN_UD;
> +# endif
> +
> +#else
> +# error Unknown environment - please adjust
> +#endif
> +    }
> +    else
> +    {
> +        /*
> +         * Not the SIGILL we're looking for...  Restore the old handler and
> +         * try again.  Will likely coredump as a result.
> +         */
> +        sigaction(SIGILL, &old_sigill, NULL);
> +    }
> +}
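One way to keep the attribute for GCC while letting Clang build is to guard it behind a compiler check; this is a sketch under the assumption that 'noinline' alone is enough to keep Clang from specialising the probe (the macro name and stub function are illustrative):

```c
#include <assert.h>

/* 'noclone' is a GCC-only attribute; Clang rejects it outright.
 * Guard it so the probe builds with both compilers. */
#if defined(__GNUC__) && !defined(__clang__)
# define attr_noclone __attribute__((noclone))
#else
# define attr_noclone /* Clang has no equivalent; rely on noinline */
#endif

/* Stand-in for the real RTM probe, carrying the same attributes. */
static int __attribute__((noinline)) attr_noclone probe_stub(void)
{
    return 1;
}
```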



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 19:18:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 19:18:44 +0000
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] iommu/arm: ipmmu-vmsa: Add compatible for Renesas R-Car M3-W+ SoC
Date: Mon, 14 Jun 2021 22:18:12 +0300
Message-Id: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Since Linux commit 9c9f7891093b02eb64ca4e1c7ab776a4296c058f ("soc: renesas:
Identify R-Car M3-W+"), the "renesas,r8a77961" compatible string identifies
M3-W+ (aka M3-W ES3.0) instead of "renesas,r8a7796".
Add the new compatible string to the Xen driver.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
 xen/drivers/passthrough/arm/ipmmu-vmsa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 8b8e3a0..1255b0d 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -1316,6 +1316,7 @@ static const struct dt_device_match ipmmu_dt_match[] __initconst =
     DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7795"),
     DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77965"),
     DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7796"),
+    DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77961"),
     { /* sentinel */ },
 };
 
-- 
2.7.4
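For context, a compatible string in such a table takes effect when the driver's match table is scanned against a device tree node's "compatible" property. A simplified standalone sketch of that scan (struct and helper names are illustrative, not Xen's exact internals):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for a dt_device_match-style table entry. */
struct dt_match {
    const char *compatible;
};

static const struct dt_match ipmmu_dt_match[] = {
    { "renesas,ipmmu-r8a7795" },
    { "renesas,ipmmu-r8a77965" },
    { "renesas,ipmmu-r8a7796" },
    { "renesas,ipmmu-r8a77961" },   /* the new M3-W+ entry from this patch */
    { NULL },                       /* sentinel */
};

/* Walk the table until the sentinel, looking for an exact match. */
static int dt_matches(const struct dt_match *tbl, const char *compat)
{
    for ( ; tbl->compatible; tbl++ )
        if ( strcmp(tbl->compatible, compat) == 0 )
            return 1;
    return 0;
}
```

Without the added entry, an M3-W+ board advertising only "renesas,ipmmu-r8a77961" would not bind the driver at all.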



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 19:34:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 19:34:38 +0000
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: bootfdt: Always sort memory banks
Date: Mon, 14 Jun 2021 22:34:27 +0300
Message-Id: <1623699267-9475-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

At the moment, Xen expects the memory banks to be ordered.
Unfortunately, a device tree updated by firmware may contain
unordered banks. In that case Xen will panic when setting up
the xenheap mappings for a subsequent bank whose start address
is below xenheap_mfn_start (the start address of the first
bank).

As there is no clear requirement regarding bank ordering in the
device tree, update the code to cope with this by sorting the
memory banks when there is more than one.

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---

The proposed commit fixes booting Xen on the R-Car M3-W+ SoC:

Starting kernel ...
- UART enabled -
- Boot CPU booting -
- Current EL 00000008 -
- Initialize CPU -
- Turning on paging -
- Zero BSS -
- Ready -
(XEN) Checking for initrd in /chosen
(XEN) Initrd 0000000084000040-0000000085dbc32a
(XEN) RAM: 0000000480000000 - 00000004ffffffff
(XEN) RAM: 0000000048000000 - 00000000bfffffff
(XEN) RAM: 0000000600000000 - 00000006ffffffff

...

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) cannot add xenheap mapping at 48000 below heap start 480000
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...
---
 xen/arch/arm/bootfdt.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index dcff512..3ef63b3 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -13,6 +13,7 @@
 #include <xen/init.h>
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
+#include <xen/sort.h>
 #include <xsm/xsm.h>
 #include <asm/setup.h>
 
@@ -395,6 +396,21 @@ static void __init early_print_info(void)
     printk("\n");
 }
 
+/* This function assumes that memory regions do not overlap */
+static int __init cmp_memory_node(const void *key, const void *elem)
+{
+    const struct membank *handler0 = key;
+    const struct membank *handler1 = elem;
+
+    if ( handler0->start < handler1->start )
+        return -1;
+
+    if ( handler0->start >= (handler1->start + handler1->size) )
+        return 1;
+
+    return 0;
+}
+
 /**
  * boot_fdt_info - initialize bootinfo from a DTB
  * @fdt: flattened device tree binary
@@ -412,6 +428,12 @@ size_t __init boot_fdt_info(const void *fdt, paddr_t paddr)
     add_boot_module(BOOTMOD_FDT, paddr, fdt_totalsize(fdt), false);
 
     device_tree_for_each_node((void *)fdt, 0, early_scan_node, NULL);
+    if ( bootinfo.mem.nr_banks > 1 )
+    {
+        /* Some DTs may describe unordered banks; sort them in ascending order */
+        sort(bootinfo.mem.bank, bootinfo.mem.nr_banks, sizeof(struct membank),
+             cmp_memory_node, NULL);
+    }
     early_print_info();
 
     return fdt_totalsize(fdt);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 14 19:40:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 19:40:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141784.261788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lssRG-0003m9-K8; Mon, 14 Jun 2021 19:40:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141784.261788; Mon, 14 Jun 2021 19:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lssRG-0003lz-FA; Mon, 14 Jun 2021 19:40:02 +0000
Received: by outflank-mailman (input) for mailman id 141784;
 Mon, 14 Jun 2021 19:40:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lssRF-0003dH-BA; Mon, 14 Jun 2021 19:40:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lssRF-0003ap-36; Mon, 14 Jun 2021 19:40:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lssRE-00020J-Nn; Mon, 14 Jun 2021 19:40:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lssRE-0006Z6-NH; Mon, 14 Jun 2021 19:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pOiJ6elC8ScY1bGVXVAZGawfew250FtMQyEt7RT6YBQ=; b=3/LmRI4k+NnJ0/70qt1n2f14Bh
	5yMVELLMEGlJPppcJkNaefylK3d/7XB4UwGgQh7iCvNXxlHmSsCJcKOFv9wata+Q93XKbFJihWcTC
	qoqAzal97aS+yWEHANN3x5Hkt6ktrvDOmWx25nwpZ7T2FcG4DRLN01pb/30c3p6VcRY8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162811-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162811: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
X-Osstest-Versions-That:
    xen=163f47c14737cfa9dfb3240deea356b08caf7614
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 19:40:00 +0000

flight 162811 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162811/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
baseline version:
 xen                  163f47c14737cfa9dfb3240deea356b08caf7614

Last test of basis   162804  2021-06-14 14:00:25 Z    0 days
Testing same since   162811  2021-06-14 17:03:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   163f47c147..8c9ed86373  8c9ed863738ff9e8b91975d6aa4464e7e8324eb7 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 22:11:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 22:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141816.261838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsunG-0001sR-4t; Mon, 14 Jun 2021 22:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141816.261838; Mon, 14 Jun 2021 22:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsunG-0001sK-1Z; Mon, 14 Jun 2021 22:10:54 +0000
Received: by outflank-mailman (input) for mailman id 141816;
 Mon, 14 Jun 2021 22:10:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsunF-0001sA-3o; Mon, 14 Jun 2021 22:10:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsunE-0006Dr-Qp; Mon, 14 Jun 2021 22:10:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsunE-0005MG-Gl; Mon, 14 Jun 2021 22:10:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsunE-0000ZU-GG; Mon, 14 Jun 2021 22:10:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4UyvStaVOCSLd/31/ZpNWwpGnzFibi50Dg4CeeaRG20=; b=BiE5w1zWikXtOHwLwUO0HwhOrV
	+V5aqqARgiBl/5jk3pBMN8xnPAOE0js5awlwPZN02oIDZzXvZRMLWRxN+VrECyY8TqCwIlf9Co674
	Xz4M/Pts780OYkksBZ0M7/no4xptfKB7zA57MZYkcVpXfxoYIiWgjM5ZOeF9emCFKlek=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162795-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162795: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=894fc4fd670aaf04a67dc7507739f914ff4bacf2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 22:10:52 +0000

flight 162795 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162795/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail in 162778 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install     fail pass in 162778

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 162778 like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                894fc4fd670aaf04a67dc7507739f914ff4bacf2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  298 days
Failing since        152659  2020-08-21 14:07:39 Z  297 days  549 attempts
Testing same since   162650  2021-06-11 15:02:16 Z    3 days    6 attempts

------------------------------------------------------------
531 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 170840 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 14 23:16:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Jun 2021 23:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141843.261876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsvoW-0000LY-AB; Mon, 14 Jun 2021 23:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141843.261876; Mon, 14 Jun 2021 23:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsvoW-0000LR-6y; Mon, 14 Jun 2021 23:16:16 +0000
Received: by outflank-mailman (input) for mailman id 141843;
 Mon, 14 Jun 2021 23:16:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsvoV-0000LH-6U; Mon, 14 Jun 2021 23:16:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsvoU-0007HX-Ui; Mon, 14 Jun 2021 23:16:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lsvoU-000801-Nw; Mon, 14 Jun 2021 23:16:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lsvoU-00036O-NR; Mon, 14 Jun 2021 23:16:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uBu29vDMTy2j8MzNYxHFOtzIw0L6vBbUcx9G9mtE4l8=; b=MoZHMLHGgD9TOUxUjDoUfTw7rQ
	l8jEmGLPIpRcTEt3aZVabViLi/M/Lc2diuN2S2EK6ukaMhBxVXx/LYTbLwwaZdLGk/c60v8kuFgIq
	ZFPSe45JRjRldty2koO923lQ2VyWGydaZY760HbjFRPqCsXkulJUhMTm4Pqqcz+lsbXE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162808-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162808: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Jun 2021 23:16:14 +0000

flight 162808 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162808/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   10 days
Failing since        162368  2021-06-04 15:42:59 Z   10 days   19 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    4 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 02:21:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 02:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141865.261920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsyhp-0006ex-6y; Tue, 15 Jun 2021 02:21:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141865.261920; Tue, 15 Jun 2021 02:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lsyhp-0006ec-0q; Tue, 15 Jun 2021 02:21:33 +0000
Received: by outflank-mailman (input) for mailman id 141865;
 Tue, 15 Jun 2021 02:21:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1z08=LJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lsyhn-0006eW-E9
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 02:21:31 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c7c155a-4ccf-49fe-bb8e-e1216e3dfc2c;
 Tue, 15 Jun 2021 02:21:30 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id C32956141F;
 Tue, 15 Jun 2021 02:21:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c7c155a-4ccf-49fe-bb8e-e1216e3dfc2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623723689;
	bh=q8FvFYQTeIR+MSlA/co1DP9IuOmoQAg9GycL2ujW264=;
	h=Date:From:To:cc:Subject:From;
	b=Jk45WC52doJo39RmxtcuzkczICr8Zn8C33IeUK9LVi+BIqY3wEoUjRwJmsL1zR549
	 MlmcL2Scw8DE+8HtzXoBCNWzjjxq2HYBWF87DN9coPijIWKcFiIv9bME3Vd1E9eJZD
	 rUa6jaH6MVjPVjzx1EI1DhMIJ67Wr0KtAWXwjrUwYfjyWfBUdBoZZVqQDqr56JIwAv
	 Y+PX6dB1f9KvW4wIN3KQVWCJ5hiDng7IIVbYU5sStr+TbFMuq4IKl19yGUfsOGBwsF
	 gR/qnDxWS1di62rWUlmcz3DB0AZlC0ac0TFf/6Glc9S/IGhCCMIKlwRwGSPUsvuNrN
	 3ZLmg0u+uxroQ==
Date: Mon, 14 Jun 2021 19:21:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: sstabellini@kernel.org, edgar.iglesias@xilinx.com, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, julien@xen.org, 
    fnuv@xilinx.com
Subject: smmuv1 breakage
Message-ID: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1483615782-1623723347=:24906"
Content-ID: <alpine.DEB.2.21.2106141915560.24906@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1483615782-1623723347=:24906
Content-Type: text/plain; CHARSET=US-ASCII
Content-ID: <alpine.DEB.2.21.2106141915561.24906@sstabellini-ThinkPad-T480s>

Hi Rahul,

Unfortunately, after bisecting, I discovered a few more breakages due to
your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
attached the DTB for reference. Please note that I made sure to
cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
the S2CR" during bisection, so the errors are also present on staging.

The first breakage is an error at boot time in smmu.c#find_smmu_master,
see log1. I think it is caused by the old smmu driver being unable to
parse the new smmu bindings.

After removing all the "smmus" and "#stream-id-cells" properties in the
device tree, I get past the previous error and everything seems to be OK
at early boot, but I then get SMMU errors as soon as dom0 starts using
devices:

(XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
(XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
[   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
[   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

Do you think you'll be able to help fix them?


You should be able to reproduce the two issues using Xilinx QEMU (but to
be honest I haven't tested it on QEMU yet, I was testing on real
hardware):
- clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
  ./configure  --target-list=aarch64-softmmu
  make
- clone and build git://github.com/Xilinx/qemu-devicetrees.git
- use the attached script to run it
    - kernel can be upstream defconfig 5.10
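The build steps above, consolidated into a shell sketch (untested here; the run script and DTB come from the attachments, the kernel from an upstream v5.10 defconfig build, and the `make` invocations are just the obvious defaults):

```shell
# Build the Xilinx fork of QEMU, aarch64 system emulation only
git clone https://github.com/Xilinx/qemu.git
cd qemu
./configure --target-list=aarch64-softmmu
make -j"$(nproc)"
cd ..

# Build the matching Xilinx device trees consumed by the run script
git clone https://github.com/Xilinx/qemu-devicetrees.git
make -C qemu-devicetrees
```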

Cheers,

Stefano
--8323329-1483615782-1623723347=:24906
Content-Type: application/octet-stream; NAME=xen.dtb
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106141915470.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=xen.dtb

0A3+7QAAmmEAAAA4AACQWAAAACgAAAARAAAAEAAAAAAAAAoJAACQIAAAAAAA
AAAAAAAAAAAAAAAAAAABAAAAAAAAAAMAAAA5AAAAAHhsbngsenlucW1wLXpj
dTEwMi1yZXYxLjAAeGxueCx6eW5xbXAtemN1MTAyAHhsbngsenlucW1wAAAA
AAAAAAMAAAAEAAAACwAAAAIAAAADAAAABAAAABoAAAACAAAAAwAAABUAAAAm
WnlucU1QIFpDVTEwMiBSZXYxLjAAAAAAAAAAAWNwdXMAAAAAAAAAAwAAAAQA
AAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAABY3B1QDAAAAAAAAADAAAADwAA
AABhcm0sY29ydGV4LWE1MwAAAAAAAwAAAAQAAAAsY3B1AAAAAAMAAAAFAAAA
OHBzY2kAAAAAAAAAAwAAAAQAAABGAAAAAQAAAAMAAAAEAAAAWgAAAAAAAAAD
AAAABAAAAF4AAAACAAAAAwAAAAgAAABuAAAAAwAAAAoAAAACAAAAAWNwdUAx
AAAAAAAAAwAAAA8AAAAAYXJtLGNvcnRleC1hNTMAAAAAAAMAAAAEAAAALGNw
dQAAAAADAAAABQAAADhwc2NpAAAAAAAAAAMAAAAEAAAAWgAAAAEAAAADAAAA
BAAAAEYAAAABAAAAAwAAAAQAAABeAAAAAgAAAAIAAAABY3B1QDIAAAAAAAAD
AAAADwAAAABhcm0sY29ydGV4LWE1MwAAAAAAAwAAAAQAAAAsY3B1AAAAAAMA
AAAFAAAAOHBzY2kAAAAAAAAAAwAAAAQAAABaAAAAAgAAAAMAAAAEAAAARgAA
AAEAAAADAAAABAAAAF4AAAACAAAAAgAAAAFjcHVAMwAAAAAAAAMAAAAPAAAA
AGFybSxjb3J0ZXgtYTUzAAAAAAADAAAABAAAACxjcHUAAAAAAwAAAAUAAAA4
cHNjaQAAAAAAAAADAAAABAAAAFoAAAADAAAAAwAAAAQAAABGAAAAAQAAAAMA
AAAEAAAAXgAAAAIAAAACAAAAAWlkbGUtc3RhdGVzAAAAAAMAAAAFAAAAdXBz
Y2kAAAAAAAAAAWNwdS1zbGVlcC0wAAAAAAMAAAAPAAAAAGFybSxpZGxlLXN0
YXRlAAAAAAADAAAABAAAAIJAAAAAAAAAAwAAAAAAAACZAAAAAwAAAAQAAACq
AAABLAAAAAMAAAAEAAAAuwAAAlgAAAADAAAABAAAAMsAACcQAAAAAwAAAAQA
AADcAAAAAgAAAAIAAAACAAAAAgAAAAFjcHUtb3BwLXRhYmxlAAAAAAAAAwAA
ABQAAAAAb3BlcmF0aW5nLXBvaW50cy12MgAAAAADAAAAAAAAAOQAAAADAAAA
BAAAANwAAAABAAAAAW9wcDAwAAAAAAAAAwAAAAgAAADvAAAAAEeGi/QAAAAD
AAAABAAAAPYAD0JAAAAAAwAAAAQAAAEEAAehIAAAAAIAAAABb3BwMDEAAAAA
AAADAAAACAAAAO8AAAAAI8NF+gAAAAMAAAAEAAAA9gAPQkAAAAADAAAABAAA
AQQAB6EgAAAAAgAAAAFvcHAwMgAAAAAAAAMAAAAIAAAA7wAAAAAX14P8AAAA
AwAAAAQAAAD2AA9CQAAAAAMAAAAEAAABBAAHoSAAAAACAAAAAW9wcDAzAAAA
AAAAAwAAAAgAAADvAAAAABHhov0AAAADAAAABAAAAPYAD0JAAAAAAwAAAAQA
AAEEAAehIAAAAAIAAAACAAAAAXp5bnFtcF9pcGkAAAAAAAMAAAAAAAABFQAA
AAMAAAAYAAAAAHhsbngsenlucW1wLWlwaS1tYWlsYm94AAAAAAMAAAAEAAAB
KQAAAAQAAAADAAAADAAAAToAAAAAAAAAIwAAAAQAAAADAAAABAAAAUUAAAAA
AAAAAwAAAAQAAAALAAAAAgAAAAMAAAAEAAAAGgAAAAIAAAADAAAAAAAAAVEA
AAABbWFpbGJveEBmZjk5MDQwMAAAAAAAAAADAAAAAAAAARUAAAADAAAAQAAA
AFoAAAAA/5kFwAAAAAAAAAAgAAAAAP+ZBeAAAAAAAAAAIAAAAAD/mQ6AAAAA
AAAAACAAAAAA/5kOoAAAAAAAAAAgAAAAAwAAAFgAAAFYbG9jYWxfcmVxdWVz
dF9yZWdpb24AbG9jYWxfcmVzcG9uc2VfcmVnaW9uAHJlbW90ZV9yZXF1ZXN0
X3JlZ2lvbgByZW1vdGVfcmVzcG9uc2VfcmVnaW9uAAAAAAMAAAAEAAABYgAA
AAEAAAADAAAABAAAAUUAAAAEAAAAAwAAAAQAAADcAAAABQAAAAIAAAACAAAA
AWRjYwAAAAADAAAACAAAAABhcm0sZGNjAAAAAAMAAAAJAAABbmRpc2FibGVk
AAAAAAAAAAMAAAAAAAABFQAAAAIAAAABcG11AAAAAAMAAAAQAAAAAGFybSxh
cm12OC1wbXV2MwAAAAADAAAABAAAASkAAAAEAAAAAwAAADAAAAE6AAAAAAAA
AI8AAAAEAAAAAAAAAJAAAAAEAAAAAAAAAJEAAAAEAAAAAAAAAJIAAAAEAAAA
AgAAAAFwc2NpAAAAAAAAAAMAAAANAAAAAGFybSxwc2NpLTAuMgAAAAAAAAAD
AAAABAAAAD9zbWMAAAAAAgAAAAFmaXJtd2FyZQAAAAAAAAABenlucW1wLWZp
cm13YXJlAAAAAAMAAAAVAAAAAHhsbngsenlucW1wLWZpcm13YXJlAAAAAAAA
AAMAAAAAAAABFQAAAAMAAAAEAAAAP3NtYwAAAAADAAAABAAAAXUAAAABAAAA
AwAAAAQAAADcAAAAJgAAAAFwY2FwAAAAAAAAAAMAAAAWAAAAAHhsbngsenlu
cW1wLXBjYXAtZnBnYQAAAAAAAAMAAAAIAAABiXJlZl9jbGsAAAAAAwAAAAgA
AABuAAAAAwAAACkAAAADAAAABAAAANwAAAALAAAAAgAAAAF6eW5xbXAtcG93
ZXIAAAAAAAAAAwAAAAAAAAEVAAAAAwAAABIAAAAAeGxueCx6eW5xbXAtcG93
ZXIAAAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAACMAAAAE
AAAAAwAAABAAAAGVAAAABQAAAAAAAAAFAAAAAQAAAAMAAAAGAAABnHR4AHJ4
AAAAAAAAAgAAAAFyZXNldC1jb250cm9sbGVyAAAAAAAAAAMAAAASAAAAAHhs
bngsenlucW1wLXJlc2V0AAAAAAAAAwAAAAQAAAGnAAAAAQAAAAMAAAAEAAAA
3AAAADQAAAACAAAAAXBpbmN0cmwAAAAAAwAAABQAAAAAeGxueCx6eW5xbXAt
cGluY3RybAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAFpMmMwLWRlZmF1bHQA
AAAAAAAAAwAAAAQAAADcAAAALAAAAAFtdXgAAAAAAwAAAAsAAAG0aTJjMF8z
X2dycAAAAAAAAwAAAAUAAAG7aTJjMAAAAAAAAAACAAAAAWNvbmYAAAAAAAAA
AwAAAAsAAAG0aTJjMF8zX2dycAAAAAAAAwAAAAAAAAHEAAAAAwAAAAQAAAHR
AAAAAQAAAAMAAAAEAAAB2wAAAAEAAAACAAAAAgAAAAFpMmMwLWdwaW8AAAAA
AAADAAAABAAAANwAAAAtAAAAAW11eAAAAAADAAAAGgAAAbRncGlvMF8xNF9n
cnAAZ3BpbzBfMTVfZ3JwAAAAAAAAAwAAAAYAAAG7Z3BpbzAAAAAAAAACAAAA
AWNvbmYAAAAAAAAAAwAAABoAAAG0Z3BpbzBfMTRfZ3JwAGdwaW8wXzE1X2dy
cAAAAAAAAAMAAAAEAAAB0QAAAAEAAAADAAAABAAAAdsAAAABAAAAAgAAAAIA
AAABaTJjMS1kZWZhdWx0AAAAAAAAAAMAAAAEAAAA3AAAAC8AAAABbXV4AAAA
AAMAAAALAAABtGkyYzFfNF9ncnAAAAAAAAMAAAAFAAABu2kyYzEAAAAAAAAA
AgAAAAFjb25mAAAAAAAAAAMAAAALAAABtGkyYzFfNF9ncnAAAAAAAAMAAAAA
AAABxAAAAAMAAAAEAAAB0QAAAAEAAAADAAAABAAAAdsAAAABAAAAAgAAAAIA
AAABaTJjMS1ncGlvAAAAAAAAAwAAAAQAAADcAAAAMAAAAAFtdXgAAAAAAwAA
ABoAAAG0Z3BpbzBfMTZfZ3JwAGdwaW8wXzE3X2dycAAAAAAAAAMAAAAGAAAB
u2dwaW8wAAAAAAAAAgAAAAFjb25mAAAAAAAAAAMAAAAaAAABtGdwaW8wXzE2
X2dycABncGlvMF8xN19ncnAAAAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQA
AAHbAAAAAQAAAAIAAAACAAAAAXVhcnQwLWRlZmF1bHQAAAAAAAADAAAABAAA
ANwAAAA3AAAAAW11eAAAAAADAAAADAAAAbR1YXJ0MF80X2dycAAAAAADAAAA
BgAAAbt1YXJ0MAAAAAAAAAIAAAABY29uZgAAAAAAAAADAAAADAAAAbR1YXJ0
MF80X2dycAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIA
AAABY29uZi1yeAAAAAADAAAABgAAAedNSU8xOAAAAAAAAAMAAAAAAAAB7AAA
AAIAAAABY29uZi10eAAAAAADAAAABgAAAedNSU8xOQAAAAAAAAMAAAAAAAAC
AAAAAAIAAAACAAAAAXVhcnQxLWRlZmF1bHQAAAAAAAADAAAABAAAANwAAAA4
AAAAAW11eAAAAAADAAAADAAAAbR1YXJ0MV81X2dycAAAAAADAAAABgAAAbt1
YXJ0MQAAAAAAAAIAAAABY29uZgAAAAAAAAADAAAADAAAAbR1YXJ0MV81X2dy
cAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIAAAABY29u
Zi1yeAAAAAADAAAABgAAAedNSU8yMQAAAAAAAAMAAAAAAAAB7AAAAAIAAAAB
Y29uZi10eAAAAAADAAAABgAAAedNSU8yMAAAAAAAAAMAAAAAAAACAAAAAAIA
AAACAAAAAXVzYjAtZGVmYXVsdAAAAAAAAAADAAAABAAAANwAAAA5AAAAAW11
eAAAAAADAAAACwAAAbR1c2IwXzBfZ3JwAAAAAAADAAAABQAAAbt1c2IwAAAA
AAAAAAIAAAABY29uZgAAAAAAAAADAAAACwAAAbR1c2IwXzBfZ3JwAAAAAAAD
AAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIAAAABY29uZi1yeAAA
AAADAAAAEgAAAedNSU81MgBNSU81MwBNSU81NQAAAAAAAAMAAAAAAAAB7AAA
AAIAAAABY29uZi10eAAAAAADAAAANgAAAedNSU81NABNSU81NgBNSU81NwBN
SU81OABNSU81OQBNSU82MABNSU82MQBNSU82MgBNSU82MwAAAAAAAAMAAAAA
AAACAAAAAAIAAAACAAAAAWdlbTMtZGVmYXVsdAAAAAAAAAADAAAABAAAANwA
AAAqAAAAAW11eAAAAAADAAAACgAAAbtldGhlcm5ldDMAAAAAAAADAAAAEAAA
AbRldGhlcm5ldDNfMF9ncnAAAAAAAgAAAAFjb25mAAAAAAAAAAMAAAAQAAAB
tGV0aGVybmV0M18wX2dycAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHb
AAAAAQAAAAIAAAABY29uZi1yeAAAAAADAAAAJAAAAedNSU83MABNSU83MQBN
SU83MgBNSU83MwBNSU83NABNSU83NQAAAAADAAAAAAAAAewAAAADAAAAAAAA
Ag0AAAACAAAAAWNvbmYtdHgAAAAAAwAAACQAAAHnTUlPNjQATUlPNjUATUlP
NjYATUlPNjcATUlPNjgATUlPNjkAAAAAAwAAAAAAAAIAAAAAAwAAAAAAAAIf
AAAAAgAAAAFtdXgtbWRpbwAAAAAAAAADAAAABgAAAbttZGlvMwAAAAAAAAMA
AAAMAAABtG1kaW8zXzBfZ3JwAAAAAAIAAAABY29uZi1tZGlvAAAAAAAAAwAA
AAwAAAG0bWRpbzNfMF9ncnAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB
2wAAAAEAAAADAAAAAAAAAgAAAAACAAAAAgAAAAFjYW4xLWRlZmF1bHQAAAAA
AAAAAwAAAAQAAADcAAAAJwAAAAFtdXgAAAAAAwAAAAUAAAG7Y2FuMQAAAAAA
AAADAAAACwAAAbRjYW4xXzZfZ3JwAAAAAAACAAAAAWNvbmYAAAAAAAAAAwAA
AAsAAAG0Y2FuMV82X2dycAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB
2wAAAAEAAAACAAAAAWNvbmYtcngAAAAAAwAAAAYAAAHnTUlPMjUAAAAAAAAD
AAAAAAAAAewAAAACAAAAAWNvbmYtdHgAAAAAAwAAAAYAAAHnTUlPMjQAAAAA
AAADAAAAAAAAAgAAAAACAAAAAgAAAAFzZGhjaTEtZGVmYXVsdAAAAAAAAwAA
AAQAAADcAAAANgAAAAFtdXgAAAAAAwAAAAwAAAG0c2RpbzFfMF9ncnAAAAAA
AwAAAAYAAAG7c2RpbzEAAAAAAAACAAAAAWNvbmYAAAAAAAAAAwAAAAwAAAG0
c2RpbzFfMF9ncnAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAADAAAAAAAAAgAAAAACAAAAAW11eC1jZAAAAAAAAwAAAA8AAAG0c2RpbzFf
Y2RfMF9ncnAAAAAAAAMAAAAJAAABu3NkaW8xX2NkAAAAAAAAAAIAAAABY29u
Zi1jZAAAAAADAAAADwAAAbRzZGlvMV9jZF8wX2dycAAAAAAAAwAAAAAAAAHs
AAAAAwAAAAAAAAHEAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAACAAAAAW11eC13cAAAAAAAAwAAAA8AAAG0c2RpbzFfd3BfMF9ncnAAAAAA
AAMAAAAJAAABu3NkaW8xX3dwAAAAAAAAAAIAAAABY29uZi13cAAAAAADAAAA
DwAAAbRzZGlvMV93cF8wX2dycAAAAAAAAwAAAAAAAAHsAAAAAwAAAAAAAAHE
AAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEAAAACAAAAAgAAAAFn
cGlvLWRlZmF1bHQAAAAAAAAAAwAAAAQAAADcAAAAKwAAAAFtdXgtc3cAAAAA
AAMAAAAGAAABu2dwaW8wAAAAAAAAAwAAABoAAAG0Z3BpbzBfMjJfZ3JwAGdw
aW8wXzIzX2dycAAAAAAAAAIAAAABY29uZi1zdwAAAAADAAAAGgAAAbRncGlv
MF8yMl9ncnAAZ3BpbzBfMjNfZ3JwAAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMA
AAAEAAAB2wAAAAEAAAACAAAAAW11eC1tc3AAAAAAAwAAAAYAAAG7Z3BpbzAA
AAAAAAADAAAAGgAAAbRncGlvMF8xM19ncnAAZ3BpbzBfMzhfZ3JwAAAAAAAA
AgAAAAFjb25mLW1zcAAAAAAAAAADAAAAGgAAAbRncGlvMF8xM19ncnAAZ3Bp
bzBfMzhfZ3JwAAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAACAAAAAWNvbmYtcHVsbC11cAAAAAAAAAADAAAADAAAAedNSU8yMgBNSU8y
MwAAAAADAAAAAAAAAcQAAAACAAAAAWNvbmYtcHVsbC1ub25lAAAAAAADAAAA
DAAAAedNSU8xMwBNSU8zOAAAAAADAAAAAAAAAgAAAAACAAAAAgAAAAIAAAAB
Y2xvY2stY29udHJvbGxlcgAAAAAAAAADAAAAAAAAARUAAAADAAAABAAAAjAA
AAABAAAAAwAAABAAAAAAeGxueCx6eW5xbXAtY2xrAAAAAAMAAAAUAAAAbgAA
AAYAAAAHAAAACAAAAAkAAAAKAAAAAwAAAEEAAAGJcHNzX3JlZl9jbGsAdmlk
ZW9fY2xrAHBzc19hbHRfcmVmX2NsawBhdXhfcmVmX2NsawBndF9jcnhfcmVm
X2NsawAAAAAAAAADAAAABAAAANwAAAADAAAAAgAAAAIAAAACAAAAAXRpbWVy
AAAAAAAAAwAAABAAAAAAYXJtLGFybXY4LXRpbWVyAAAAAAMAAAAEAAABKQAA
AAQAAAADAAAAMAAAAToAAAABAAAADQAADwgAAAABAAAADgAADwgAAAABAAAA
CwAADwgAAAABAAAACgAADwgAAAACAAAAAWVkYWMAAAAAAAAAAwAAABQAAAAA
YXJtLGNvcnRleC1hNTMtZWRhYwAAAAACAAAAAWZwZ2EtZnVsbAAAAAAAAAMA
AAAMAAAAAGZwZ2EtcmVnaW9uAAAAAAMAAAAEAAACPQAAAAsAAAADAAAABAAA
AAsAAAACAAAAAwAAAAQAAAAaAAAAAgAAAAMAAAAAAAABUQAAAAIAAAABbnZt
ZW1fZmlybXdhcmUAAAAAAAMAAAAVAAAAAHhsbngsenlucW1wLW52bWVtLWZ3
AAAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAABAAAAAXNvY19y
ZXZpc2lvbkAwAAAAAAADAAAACAAAAFoAAAAAAAAABAAAAAMAAAAEAAAA3AAA
ADMAAAACAAAAAWVmdXNlX2RuYUBjAAAAAAMAAAAIAAAAWgAAAAwAAAAMAAAA
AgAAAAFlZnVzZV91c3IwQDIwAAAAAAAAAwAAAAgAAABaAAAAIAAAAAQAAAAC
AAAAAWVmdXNlX3VzcjFAMjQAAAAAAAADAAAACAAAAFoAAAAkAAAABAAAAAIA
AAABZWZ1c2VfdXNyMkAyOAAAAAAAAAMAAAAIAAAAWgAAACgAAAAEAAAAAgAA
AAFlZnVzZV91c3IzQDJjAAAAAAAAAwAAAAgAAABaAAAALAAAAAQAAAACAAAA
AWVmdXNlX3VzcjRAMzAAAAAAAAADAAAACAAAAFoAAAAwAAAABAAAAAIAAAAB
ZWZ1c2VfdXNyNUAzNAAAAAAAAAMAAAAIAAAAWgAAADQAAAAEAAAAAgAAAAFl
ZnVzZV91c3I2QDM4AAAAAAAAAwAAAAgAAABaAAAAOAAAAAQAAAACAAAAAWVm
dXNlX3VzcjdAM2MAAAAAAAADAAAACAAAAFoAAAA8AAAABAAAAAIAAAABZWZ1
c2VfbWlzY3VzckA0MAAAAAAAAAADAAAACAAAAFoAAABAAAAABAAAAAIAAAAB
ZWZ1c2VfY2hhc2hANTAAAAAAAAMAAAAIAAAAWgAAAFAAAAAEAAAAAgAAAAFl
ZnVzZV9wdWZtaXNjQDU0AAAAAAAAAAMAAAAIAAAAWgAAAFQAAAAEAAAAAgAA
AAFlZnVzZV9zZWNANTgAAAAAAAAAAwAAAAgAAABaAAAAWAAAAAQAAAACAAAA
AWVmdXNlX3Nwa2lkQDVjAAAAAAADAAAACAAAAFoAAABcAAAABAAAAAIAAAAB
ZWZ1c2VfcHBrMGhhc2hAYTAAAAAAAAADAAAACAAAAFoAAACgAAAAMAAAAAIA
AAABZWZ1c2VfcHBrMWhhc2hAZDAAAAAAAAADAAAACAAAAFoAAADQAAAAMAAA
AAIAAAACAAAAAXp5bnFtcF9yc2EAAAAAAAMAAAAQAAAAAHhsbngsenlucW1w
LXJzYQAAAAACAAAAAXNoYTM4NAAAAAAAAwAAABcAAAAAeGxueCx6eW5xbXAt
a2VjY2FrLTM4NAAAAAAAAgAAAAF6eW5xbXBfYWVzAAAAAAADAAAAEAAAAAB4
bG54LHp5bnFtcC1hZXMAAAAAAgAAAAFhbWJhLWFwdUAwAAAAAAADAAAACwAA
AABzaW1wbGUtYnVzAAAAAAADAAAABAAAAAsAAAACAAAAAwAAAAQAAAAaAAAA
AQAAAAMAAAAUAAABUQAAAAAAAAAAAAAAAAAAAAD/////AAAAAWludGVycnVw
dC1jb250cm9sbGVyQGY5MDEwMDAwAAAAAAAAAwAAAAwAAAAAYXJtLGdpYy00
MDAAAAAAAwAAAAQAAAJGAAAAAwAAAAMAAAAwAAAAWgAAAAD5AQAAAAEAAAAA
AAD5AgAAAAIAAAAAAAD5BAAAAAIAAAAAAAD5BgAAAAIAAAAAAAMAAAAAAAAC
VwAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAABAAAACQAADwQAAAAD
AAAABAAAAmwAAAACAAAAAwAAAAQAAAJ1AAAAYAAAAAMAAAAEAAAA3AAAAAQA
AAACAAAAAgAAAAFzbW11QGZkODAwMDAwAAAAAAAAAwAAAAwAAAAAYXJtLG1t
dS01MDAAAAAAAwAAABAAAABaAAAAAP2AAAAAAAAAAAIAAAAAAAMAAAAEAAAC
hAAAAAEAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAACkQAAAAEAAAAD
AAAABAAAASkAAAAEAAAAAwAAAMwAAAE6AAAAAAAAAJsAAAAEAAAAAAAAAJsA
AAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAA
AJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAA
AAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAE
AAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsA
AAAEAAAAAwAAANAAAAKkAAAADAAACHQAAAANAAAIdQAAAA4AAAh2AAAADwAA
CHcAAAAQAAAIYAAAABEAAAhhAAAAEgAACHMAAAATAAAIaAAAABQAAAhpAAAA
FQAACGoAAAAWAAAIawAAABcAAAhsAAAAGAAACG0AAAAZAAAIbgAAABoAAAhv
AAAAGwAAFOgAAAAcAAAU6QAAAB0AABTqAAAAHgAAFOsAAAAfAAAU7AAAACAA
ABTtAAAAIQAAFO4AAAAiAAAU7wAAACMAAAhwAAAAJAAACHEAAAAlAAAIcgAA
AAMAAAAEAAAA3AAAACgAAAACAAAAAWFtYmEAAAAAAAAAAwAAAAsAAAAAc2lt
cGxlLWJ1cwAAAAAAAwAAAAAAAAEVAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAE
AAAAGgAAAAIAAAADAAAAAAAAAVEAAAABY2FuQGZmMDYwMDAwAAAAAAAAAAMA
AAASAAAAAHhsbngsenlucS1jYW4tMS4wAAAAAAAAAwAAAAkAAAFuZGlzYWJs
ZWQAAAAAAAAAAwAAAA0AAAGJY2FuX2NsawBwY2xrAAAAAAAAAAMAAAAQAAAA
WgAAAAD/BgAAAAAAAAAAEAAAAAADAAAADAAAAToAAAAAAAAAFwAAAAQAAAAD
AAAABAAAASkAAAAEAAAAAwAAAAQAAAKwAAAAQAAAAAMAAAAEAAACvgAAAEAA
AAADAAAACAAAAswAAAAmAAAALwAAAAMAAAAQAAAAbgAAAAMAAAA/AAAAAwAA
AB8AAAACAAAAAWNhbkBmZjA3MDAwMAAAAAAAAAADAAAAEgAAAAB4bG54LHp5
bnEtY2FuLTEuMAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAA0AAAGJ
Y2FuX2NsawBwY2xrAAAAAAAAAAMAAAAQAAAAWgAAAAD/BwAAAAAAAAAAEAAA
AAADAAAADAAAAToAAAAAAAAAGAAAAAQAAAADAAAABAAAASkAAAAEAAAAAwAA
AAQAAAKwAAAAQAAAAAMAAAAEAAACvgAAAEAAAAADAAAACAAAAswAAAAmAAAA
MAAAAAMAAAAQAAAAbgAAAAMAAABAAAAAAwAAAB8AAAADAAAACAAAAtpkZWZh
dWx0AAAAAAMAAAAEAAAC6AAAACcAAAACAAAAAWNjaUBmZDZlMDAwMAAAAAAA
AAADAAAADAAAAABhcm0sY2NpLTQwMAAAAAADAAAAEAAAAFoAAAAA/W4AAAAA
AAAAAJAAAAAAAwAAABAAAAFRAAAAAAAAAAD9bgAAAAEAAAAAAAMAAAAEAAAA
CwAAAAEAAAADAAAABAAAABoAAAABAAAAAXBtdUA5MDAwAAAAAAAAAAMAAAAT
AAAAAGFybSxjY2ktNDAwLXBtdSxyMQAAAAAAAwAAAAgAAABaAACQAAAAUAAA
AAADAAAABAAAASkAAAAEAAAAAwAAADwAAAE6AAAAAAAAAHsAAAAEAAAAAAAA
AHsAAAAEAAAAAAAAAHsAAAAEAAAAAAAAAHsAAAAEAAAAAAAAAHsAAAAEAAAA
AgAAAAIAAAABZG1hQGZkNTAwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAA
AAAAAwAAABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoA
AAAA/VAAAAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAA
AAAAAAB8AAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAA
AwAAAAQAAALyAAAAgAAAAAMAAAAIAAACzAAAACYAAAAqAAAAAwAAABAAAABu
AAAAAwAAABMAAAADAAAAHwAAAAMAAAAEAAAA3AAAABsAAAACAAAAAWRtYUBm
ZDUxMDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhs
bngsenlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP1RAAAAAAAAAAAQ
AAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAfQAAAAQAAAAD
AAAAEQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAIAA
AAADAAAACAAAAswAAAAmAAAAKgAAAAMAAAAQAAAAbgAAAAMAAAATAAAAAwAA
AB8AAAADAAAABAAAANwAAAAcAAAAAgAAAAFkbWFAZmQ1MjAwMDAAAAAAAAAA
AwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1kbWEt
MS4wAAAAAAMAAAAQAAAAWgAAAAD9UgAAAAAAAAAAEAAAAAADAAAABAAAASkA
AAAEAAAAAwAAAAwAAAE6AAAAAAAAAH4AAAAEAAAAAwAAABEAAAGJY2xrX21h
aW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAACAAAAAAwAAAAgAAALMAAAA
JgAAACoAAAADAAAAEAAAAG4AAAADAAAAEwAAAAMAAAAfAAAAAwAAAAQAAADc
AAAAHQAAAAIAAAABZG1hQGZkNTMwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkA
AAAAAAAAAwAAABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAA
AFoAAAAA/VMAAAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAAB
OgAAAAAAAAB/AAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAA
AAAAAwAAAAQAAALyAAAAgAAAAAMAAAAIAAACzAAAACYAAAAqAAAAAwAAABAA
AABuAAAAAwAAABMAAAADAAAAHwAAAAMAAAAEAAAA3AAAAB4AAAACAAAAAWRt
YUBmZDU0MDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAA
AHhsbngsenlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP1UAAAAAAAA
AAAQAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAgAAAAAQA
AAADAAAAEQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAA
AIAAAAADAAAACAAAAswAAAAmAAAAKgAAAAMAAAAQAAAAbgAAAAMAAAATAAAA
AwAAAB8AAAADAAAABAAAANwAAAAfAAAAAgAAAAFkbWFAZmQ1NTAwMDAAAAAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1k
bWEtMS4wAAAAAAMAAAAQAAAAWgAAAAD9VQAAAAAAAAAAEAAAAAADAAAABAAA
ASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAIEAAAAEAAAAAwAAABEAAAGJY2xr
X21haW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAACAAAAAAwAAAAgAAALM
AAAAJgAAACoAAAADAAAAEAAAAG4AAAADAAAAEwAAAAMAAAAfAAAAAwAAAAQA
AADcAAAAIAAAAAIAAAABZG1hQGZkNTYwMDAwAAAAAAAAAAMAAAAFAAABbm9r
YXkAAAAAAAAAAwAAABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAA
EAAAAFoAAAAA/VYAAAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAM
AAABOgAAAAAAAACCAAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIA
AAAAAAAAAwAAAAQAAALyAAAAgAAAAAMAAAAIAAACzAAAACYAAAAqAAAAAwAA
ABAAAABuAAAAAwAAABMAAAADAAAAHwAAAAMAAAAEAAAA3AAAACEAAAACAAAA
AWRtYUBmZDU3MDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAU
AAAAAHhsbngsenlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP1XAAAA
AAAAAAAQAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAgwAA
AAQAAAADAAAAEQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC
8gAAAIAAAAADAAAACAAAAswAAAAmAAAAKgAAAAMAAAAQAAAAbgAAAAMAAAAT
AAAAAwAAAB8AAAADAAAABAAAANwAAAAiAAAAAgAAAAFncHVAZmQ0YjAwMDAA
AAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAHQAAAABhcm0sbWFsaS00
MDAAYXJtLG1hbGktdXRnYXJkAAAAAAAAAAMAAAAQAAAAWgAAAAD9SwAAAAAA
AAABAAAAAAADAAAABAAAASkAAAAEAAAAAwAAAEgAAAE6AAAAAAAAAIQAAAAE
AAAAAAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAAAAAAAIQA
AAAEAAAAAAAAAIQAAAAEAAAAAwAAADEAAAMBSVJRR1AASVJRR1BNTVUASVJR
UFAwAElSUVBQTU1VMABJUlFQUDEASVJRUFBNTVUxAAAAAAAAAAMAAAAUAAAB
iWdwdQBncHVfcHAwAGdwdV9wcDEAAAAAAwAAAAgAAALMAAAAJgAAADoAAAAD
AAAAGAAAAG4AAAADAAAAGAAAAAMAAAAZAAAAAwAAABoAAAADAAAABAAAAxEA
AAABAAAAAgAAAAFkbWFAZmZhODAwMDAAAAAAAAAAAwAAAAUAAAFub2theQAA
AAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1kbWEtMS4wAAAAAAMAAAAQAAAA
WgAAAAD/qAAAAAAAAAAAEAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6
AAAAAAAAAE0AAAAEAAAAAwAAABEAAAGJY2xrX21haW4AY2xrX2FwYgAAAAAA
AAADAAAABAAAAvIAAABAAAAAAwAAAAgAAALMAAAAJgAAACsAAAADAAAAEAAA
AG4AAAADAAAARAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAEwAAAAIAAAABZG1h
QGZmYTkwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABQAAAAA
eGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/6kAAAAAAAAA
ABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABOAAAABAAA
AAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQAAALyAAAA
QAAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAAAABuAAAAAwAAAEQAAAAD
AAAAHwAAAAMAAAAEAAAA3AAAABQAAAACAAAAAWRtYUBmZmFhMDAwMAAAAAAA
AAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngsenlucW1wLWRt
YS0xLjAAAAAAAwAAABAAAABaAAAAAP+qAAAAAAAAAAAQAAAAAAMAAAAEAAAB
KQAAAAQAAAADAAAADAAAAToAAAAAAAAATwAAAAQAAAADAAAAEQAAAYljbGtf
bWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAEAAAAADAAAACAAAAswA
AAAmAAAAKwAAAAMAAAAQAAAAbgAAAAMAAABEAAAAAwAAAB8AAAADAAAABAAA
ANwAAAAVAAAAAgAAAAFkbWFAZmZhYjAwMDAAAAAAAAAAAwAAAAUAAAFub2th
eQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1kbWEtMS4wAAAAAAMAAAAQ
AAAAWgAAAAD/qwAAAAAAAAAAEAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwA
AAE6AAAAAAAAAFAAAAAEAAAAAwAAABEAAAGJY2xrX21haW4AY2xrX2FwYgAA
AAAAAAADAAAABAAAAvIAAABAAAAAAwAAAAgAAALMAAAAJgAAACsAAAADAAAA
EAAAAG4AAAADAAAARAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAFgAAAAIAAAAB
ZG1hQGZmYWMwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABQA
AAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/6wAAAAA
AAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABRAAAA
BAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQAAALy
AAAAQAAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAAAABuAAAAAwAAAEQA
AAADAAAAHwAAAAMAAAAEAAAA3AAAABcAAAACAAAAAWRtYUBmZmFkMDAwMAAA
AAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngsenlucW1w
LWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP+tAAAAAAAAAAAQAAAAAAMAAAAE
AAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAUgAAAAQAAAADAAAAEQAAAYlj
bGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAEAAAAADAAAACAAA
AswAAAAmAAAAKwAAAAMAAAAQAAAAbgAAAAMAAABEAAAAAwAAAB8AAAADAAAA
BAAAANwAAAAYAAAAAgAAAAFkbWFAZmZhZTAwMDAAAAAAAAAAAwAAAAUAAAFu
b2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1kbWEtMS4wAAAAAAMA
AAAQAAAAWgAAAAD/rgAAAAAAAAAAEAAAAAADAAAABAAAASkAAAAEAAAAAwAA
AAwAAAE6AAAAAAAAAFMAAAAEAAAAAwAAABEAAAGJY2xrX21haW4AY2xrX2Fw
YgAAAAAAAAADAAAABAAAAvIAAABAAAAAAwAAAAgAAALMAAAAJgAAACsAAAAD
AAAAEAAAAG4AAAADAAAARAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAGQAAAAIA
AAABZG1hQGZmYWYwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAA
ABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/68A
AAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABU
AAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQA
AALyAAAAQAAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAAAABuAAAAAwAA
AEQAAAADAAAAHwAAAAMAAAAEAAAA3AAAABoAAAACAAAAAW1lbW9yeS1jb250
cm9sbGVyQGZkMDcwMDAwAAAAAAADAAAAFwAAAAB4bG54LHp5bnFtcC1kZHJj
LTIuNDBhAAAAAAADAAAAEAAAAFoAAAAA/QcAAAAAAAAAAwAAAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABwAAAABAAAAAIAAAABbmFuZEBm
ZjEwMDAwMAAAAAAAAAMAAAARAAAAAGFyYXNhbixuZmMtdjNwMTAAAAAAAAAA
AwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAAABAAAABaAAAAAP8QAAAAAAAA
AAAQAAAAAAMAAAASAAABiWNsa19zeXMAY2xrX2ZsYXNoAAAAAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAAOAAAABAAAAAMAAAAEAAAACwAA
AAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAgAAALMAAAAJgAAACwAAAADAAAA
EAAAAG4AAAADAAAAPAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAJQAAAAIAAAAB
ZXRoZXJuZXRAZmYwYjAwMDAAAAAAAAADAAAAGQAAAABjZG5zLHp5bnFtcC1n
ZW0AY2RucyxnZW0AAAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAA
AAQAAAEpAAAABAAAAAMAAAAYAAABOgAAAAAAAAA5AAAABAAAAAAAAAA5AAAA
BAAAAAMAAAAQAAAAWgAAAAD/CwAAAAAAAAAAEAAAAAADAAAAIAAAAYlwY2xr
AGhjbGsAdHhfY2xrAHJ4X2NsawB0c3VfY2xrAAAAAAMAAAAEAAAACwAAAAEA
AAADAAAABAAAABoAAAAAAAAAAwAAAAgAAALMAAAAJgAAAB0AAAADAAAAKAAA
AG4AAAADAAAAHwAAAAMAAABoAAAAAwAAAC0AAAADAAAAMQAAAAMAAAAsAAAA
AwAAAAQAAADcAAAADAAAAAIAAAABZXRoZXJuZXRAZmYwYzAwMDAAAAAAAAAD
AAAAGQAAAABjZG5zLHp5bnFtcC1nZW0AY2RucyxnZW0AAAAAAAAAAwAAAAkA
AAFuZGlzYWJsZWQAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAYAAABOgAA
AAAAAAA7AAAABAAAAAAAAAA7AAAABAAAAAMAAAAQAAAAWgAAAAD/DAAAAAAA
AAAAEAAAAAADAAAAIAAAAYlwY2xrAGhjbGsAdHhfY2xrAHJ4X2NsawB0c3Vf
Y2xrAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAgA
AALMAAAAJgAAAB4AAAADAAAAKAAAAG4AAAADAAAAHwAAAAMAAABpAAAAAwAA
AC4AAAADAAAAMgAAAAMAAAAsAAAAAwAAAAQAAADcAAAADQAAAAIAAAABZXRo
ZXJuZXRAZmYwZDAwMDAAAAAAAAADAAAAGQAAAABjZG5zLHp5bnFtcC1nZW0A
Y2RucyxnZW0AAAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAYAAABOgAAAAAAAAA9AAAABAAAAAAAAAA9AAAABAAA
AAMAAAAQAAAAWgAAAAD/DQAAAAAAAAAAEAAAAAADAAAAIAAAAYlwY2xrAGhj
bGsAdHhfY2xrAHJ4X2NsawB0c3VfY2xrAAAAAAMAAAAEAAAACwAAAAEAAAAD
AAAABAAAABoAAAAAAAAAAwAAAAgAAALMAAAAJgAAAB8AAAADAAAAKAAAAG4A
AAADAAAAHwAAAAMAAABqAAAAAwAAAC8AAAADAAAAMwAAAAMAAAAsAAAAAwAA
AAQAAADcAAAADgAAAAIAAAABZXRoZXJuZXRAZmYwZTAwMDAAAAAAAAADAAAA
GQAAAABjZG5zLHp5bnFtcC1nZW0AY2RucyxnZW0AAAAAAAAAAwAAAAUAAAFu
b2theQAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABgAAAE6AAAAAAAAAD8A
AAAEAAAAAAAAAD8AAAAEAAAAAwAAABAAAABaAAAAAP8OAAAAAAAAAAAQAAAA
AAMAAAAgAAABiXBjbGsAaGNsawB0eF9jbGsAcnhfY2xrAHRzdV9jbGsAAAAA
AwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAACAAAAswAAAAm
AAAAIAAAAAMAAAAoAAAAbgAAAAMAAAAfAAAAAwAAAGsAAAADAAAAMAAAAAMA
AAA0AAAAAwAAACwAAAADAAAABAAAAyMAAAApAAAAAwAAAAgAAALaZGVmYXVs
dAAAAAADAAAABAAAAugAAAAqAAAAAwAAAAkAAAMucmdtaWktaWQAAAAAAAAA
AwAAAAQAAAM3AAAAAAAAAAMAAAAGAAADSwAKNQAiAQAAAAAAAwAAAAQAAADc
AAAADwAAAAFldGhlcm5ldC1waHlAYwAAAAAAAwAAAAQAAABaAAAADAAAAAMA
AAAEAAADXQAAAAgAAAADAAAABAAAA3IAAAAKAAAAAwAAAAQAAAOHAAAAAQAA
AAMAAAAAAAADlQAAAAMAAAAEAAAA3AAAACkAAAACAAAAAgAAAAFncGlvQGZm
MGEwMDAwAAAAAAAAAwAAABUAAAAAeGxueCx6eW5xbXAtZ3Bpby0xLjAAAAAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAA7MAAAACAAAAAwAAAAAA
AAO/AAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAAQAAAABAAA
AAMAAAAAAAACVwAAAAMAAAAEAAACRgAAAAIAAAADAAAAEAAAAFoAAAAA/woA
AAAAAAAAABAAAAAAAwAAAAgAAALMAAAAJgAAAC4AAAADAAAACAAAAG4AAAAD
AAAAHwAAAAMAAAAIAAAC2mRlZmF1bHQAAAAAAwAAAAQAAALoAAAAKwAAAAMA
AAAEAAADzwAAACAAAAADAAAABAAAA98AAAAAAAAAAwAAAAQAAAPuAABWAAAA
AAMAAAAEAAAA3AAAAC4AAAACAAAAAWkyY0BmZjAyMDAwMAAAAAAAAAADAAAA
HgAAAABjZG5zLGkyYy1yMXAxNABjZG5zLGkyYy1yMXAxMAAAAAAAAAMAAAAF
AAABbm9rYXkAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAA
AAARAAAABAAAAAMAAAAQAAAAWgAAAAD/AgAAAAAAAAAAEAAAAAADAAAABAAA
AAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAIAAACzAAAACYAAAAlAAAA
AwAAAAgAAABuAAAAAwAAAD0AAAADAAAADQAAAtpkZWZhdWx0AGdwaW8AAAAA
AAAAAwAAAAQAAALoAAAALAAAAAMAAAAEAAAD/AAAAC0AAAADAAAADAAABAYA
AAAuAAAADgAAAAAAAAADAAAADAAABBAAAAAuAAAADwAAAAAAAAADAAAABAAA
BBoABhqAAAAAAWdwaW9AMjAAAAAAAwAAAAsAAAAAdGksdGNhNjQxNgAAAAAA
AwAAAAQAAABaAAAAIAAAAAMAAAAAAAADvwAAAAMAAAAEAAADswAAAAIAAAAD
AAAAegAABCpQU19HVFJfTEFOX1NFTDAAUFNfR1RSX0xBTl9TRUwxAFBTX0dU
Ul9MQU5fU0VMMgBQU19HVFJfTEFOX1NFTDMAUENJX0NMS19ESVJfU0VMAElJ
Q19NVVhfUkVTRVRfQgBHRU0zX0VYUF9SRVNFVF9CAAAAAAAAAAAAAAAAAAAA
AgAAAAFncGlvQDIxAAAAAAMAAAALAAAAAHRpLHRjYTY0MTYAAAAAAAMAAAAE
AAAAWgAAACEAAAADAAAAAAAAA78AAAADAAAABAAAA7MAAAACAAAAAwAAAO0A
AAQqVkNDUFNQTExfRU4ATUdUUkFWQ0NfRU4ATUdUUkFWVFRfRU4AVkNDUFNE
RFJQTExfRU4ATUlPMjZfUE1VX0lOUFVUX0xTAFBMX1BNQlVTX0FMRVJUAFBT
X1BNQlVTX0FMRVJUAE1BWElNX1BNQlVTX0FMRVJUAFBMX0REUjRfVlRFUk1f
RU4AUExfRERSNF9WUFBfMlY1X0VOAFBTX0RJTU1fVkREUV9UT19QU1ZDQ09f
T04AUFNfRElNTV9TVVNQRU5EX0VOAFBTX0REUjRfVlRFUk1fRU4AUFNfRERS
NF9WUFBfMlY1X0VOAAAAAAAAAAAAAgAAAAFpMmMtbXV4QDc1AAAAAAADAAAA
DAAAAABueHAscGNhOTU0NAAAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAa
AAAAAAAAAAMAAAAEAAAAWgAAAHUAAAABaTJjQDAAAAAAAAADAAAABAAAAAsA
AAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAAAAAABaW5hMjI2
QDQwAAAAAAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAA
AQAAAAMAAAALAAAETGluYTIyNi11NzYAAAAAAAMAAAAEAAAAWgAAAEAAAAAD
AAAABAAABFIAABOIAAAAAwAAAAQAAADcAAAAQQAAAAIAAAABaW5hMjI2QDQx
AAAAAAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAA
AAMAAAALAAAETGluYTIyNi11NzcAAAAAAAMAAAAEAAAAWgAAAEEAAAADAAAA
BAAABFIAABOIAAAAAwAAAAQAAADcAAAAQgAAAAIAAAABaW5hMjI2QDQyAAAA
AAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMA
AAALAAAETGluYTIyNi11NzgAAAAAAAMAAAAEAAAAWgAAAEIAAAADAAAABAAA
BFIAABOIAAAAAwAAAAQAAADcAAAAQwAAAAIAAAABaW5hMjI2QDQzAAAAAAAA
AwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAAL
AAAETGluYTIyNi11ODcAAAAAAAMAAAAEAAAAWgAAAEMAAAADAAAABAAABFIA
ABOIAAAAAwAAAAQAAADcAAAARAAAAAIAAAABaW5hMjI2QDQ0AAAAAAAAAwAA
AAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAE
TGluYTIyNi11ODUAAAAAAAMAAAAEAAAAWgAAAEQAAAADAAAABAAABFIAABOI
AAAAAwAAAAQAAADcAAAARQAAAAIAAAABaW5hMjI2QDQ1AAAAAAAAAwAAAAoA
AAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAETGlu
YTIyNi11ODYAAAAAAAMAAAAEAAAAWgAAAEUAAAADAAAABAAABFIAABOIAAAA
AwAAAAQAAADcAAAARgAAAAIAAAABaW5hMjI2QDQ2AAAAAAAAAwAAAAoAAAAA
dGksaW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAETGluYTIy
Ni11OTMAAAAAAAMAAAAEAAAAWgAAAEYAAAADAAAABAAABFIAABOIAAAAAwAA
AAQAAADcAAAARwAAAAIAAAABaW5hMjI2QDQ3AAAAAAAAAwAAAAoAAAAAdGks
aW5hMjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAETGluYTIyNi11
ODgAAAAAAAMAAAAEAAAAWgAAAEcAAAADAAAABAAABFIAABOIAAAAAwAAAAQA
AADcAAAASAAAAAIAAAABaW5hMjI2QDRhAAAAAAAAAwAAAAoAAAAAdGksaW5h
MjI2AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAETGluYTIyNi11MTUA
AAAAAAMAAAAEAAAAWgAAAEoAAAADAAAABAAABFIAABOIAAAAAwAAAAQAAADc
AAAASQAAAAIAAAABaW5hMjI2QDRiAAAAAAAAAwAAAAoAAAAAdGksaW5hMjI2
AAAAAAAAAwAAAAQAAAQ6AAAAAQAAAAMAAAALAAAETGluYTIyNi11OTIAAAAA
AAMAAAAEAAAAWgAAAEsAAAADAAAABAAABFIAABOIAAAAAwAAAAQAAADcAAAA
SgAAAAIAAAACAAAAAWkyY0AxAAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAE
AAAAGgAAAAAAAAADAAAABAAAAFoAAAABAAAAAWluYTIyNkA0MAAAAAAAAAMA
AAAKAAAAAHRpLGluYTIyNgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAA
BExpbmEyMjYtdTc5AAAAAAADAAAABAAAAFoAAABAAAAAAwAAAAQAAARSAAAH
0AAAAAMAAAAEAAAA3AAAAEsAAAACAAAAAWluYTIyNkA0MQAAAAAAAAMAAAAK
AAAAAHRpLGluYTIyNgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExp
bmEyMjYtdTgxAAAAAAADAAAABAAAAFoAAABBAAAAAwAAAAQAAARSAAATiAAA
AAMAAAAEAAAA3AAAAEwAAAACAAAAAWluYTIyNkA0MgAAAAAAAAMAAAAKAAAA
AHRpLGluYTIyNgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEy
MjYtdTgwAAAAAAADAAAABAAAAFoAAABCAAAAAwAAAAQAAARSAAATiAAAAAMA
AAAEAAAA3AAAAE0AAAACAAAAAWluYTIyNkA0MwAAAAAAAAMAAAAKAAAAAHRp
LGluYTIyNgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEyMjYt
dTg0AAAAAAADAAAABAAAAFoAAABDAAAAAwAAAAQAAARSAAATiAAAAAMAAAAE
AAAA3AAAAE4AAAACAAAAAWluYTIyNkA0NAAAAAAAAAMAAAAKAAAAAHRpLGlu
YTIyNgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEyMjYtdTE2
AAAAAAADAAAABAAAAFoAAABEAAAAAwAAAAQAAARSAAATiAAAAAMAAAAEAAAA
3AAAAE8AAAACAAAAAWluYTIyNkA0NQAAAAAAAAMAAAAKAAAAAHRpLGluYTIy
NgAAAAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEyMjYtdTY1AAAA
AAADAAAABAAAAFoAAABFAAAAAwAAAAQAAARSAAATiAAAAAMAAAAEAAAA3AAA
AFAAAAACAAAAAWluYTIyNkA0NgAAAAAAAAMAAAAKAAAAAHRpLGluYTIyNgAA
AAAAAAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEyMjYtdTc0AAAAAAAD
AAAABAAAAFoAAABGAAAAAwAAAAQAAARSAAATiAAAAAMAAAAEAAAA3AAAAFEA
AAACAAAAAWluYTIyNkA0NwAAAAAAAAMAAAAKAAAAAHRpLGluYTIyNgAAAAAA
AAMAAAAEAAAEOgAAAAEAAAADAAAACwAABExpbmEyMjYtdTc1AAAAAAADAAAA
BAAAAFoAAABHAAAAAwAAAAQAAARSAAATiAAAAAMAAAAEAAAA3AAAAFIAAAAC
AAAAAgAAAAFpMmNAMgAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoA
AAAAAAAAAwAAAAQAAABaAAAAAgAAAAFtYXgxNTMwMUBhAAAAAAADAAAADwAA
AABtYXhpbSxtYXgxNTMwMQAAAAAAAwAAAAQAAABaAAAACgAAAAIAAAABbWF4
MTUzMDNAYgAAAAAAAwAAAA8AAAAAbWF4aW0sbWF4MTUzMDMAAAAAAAMAAAAE
AAAAWgAAAAsAAAACAAAAAW1heDE1MzAzQDEwAAAAAAMAAAAPAAAAAG1heGlt
LG1heDE1MzAzAAAAAAADAAAABAAAAFoAAAAQAAAAAgAAAAFtYXgxNTMwMUAx
MwAAAAADAAAADwAAAABtYXhpbSxtYXgxNTMwMQAAAAAAAwAAAAQAAABaAAAA
EwAAAAIAAAABbWF4MTUzMDNAMTQAAAAAAwAAAA8AAAAAbWF4aW0sbWF4MTUz
MDMAAAAAAAMAAAAEAAAAWgAAABQAAAACAAAAAW1heDE1MzAzQDE1AAAAAAMA
AAAPAAAAAG1heGltLG1heDE1MzAzAAAAAAADAAAABAAAAFoAAAAVAAAAAgAA
AAFtYXgxNTMwM0AxNgAAAAADAAAADwAAAABtYXhpbSxtYXgxNTMwMwAAAAAA
AwAAAAQAAABaAAAAFgAAAAIAAAABbWF4MTUzMDNAMTcAAAAAAwAAAA8AAAAA
bWF4aW0sbWF4MTUzMDMAAAAAAAMAAAAEAAAAWgAAABcAAAACAAAAAW1heDE1
MzAxQDE4AAAAAAMAAAAPAAAAAG1heGltLG1heDE1MzAxAAAAAAADAAAABAAA
AFoAAAAYAAAAAgAAAAFtYXgxNTMwM0AxYQAAAAADAAAADwAAAABtYXhpbSxt
YXgxNTMwMwAAAAAAAwAAAAQAAABaAAAAGgAAAAIAAAABbWF4MTUzMDNAMWIA
AAAAAwAAAA8AAAAAbWF4aW0sbWF4MTUzMDMAAAAAAAMAAAAEAAAAWgAAABsA
AAACAAAAAW1heDE1MzAzQDFkAAAAAAMAAAAPAAAAAG1heGltLG1heDE1MzAz
AAAAAAADAAAABAAAAFoAAAAdAAAAAgAAAAFtYXgyMDc1MUA3MgAAAAADAAAA
DwAAAABtYXhpbSxtYXgyMDc1MQAAAAAAAwAAAAQAAABaAAAAcgAAAAIAAAAB
bWF4MjA3NTFANzMAAAAAAwAAAA8AAAAAbWF4aW0sbWF4MjA3NTEAAAAAAAMA
AAAEAAAAWgAAAHMAAAACAAAAAgAAAAIAAAACAAAAAWkyY0BmZjAzMDAwMAAA
AAAAAAADAAAAHgAAAABjZG5zLGkyYy1yMXAxNABjZG5zLGkyYy1yMXAxMAAA
AAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAM
AAABOgAAAAAAAAASAAAABAAAAAMAAAAQAAAAWgAAAAD/AwAAAAAAAAAAEAAA
AAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAIAAACzAAA
ACYAAAAmAAAAAwAAAAgAAABuAAAAAwAAAD4AAAADAAAADQAAAtpkZWZhdWx0
AGdwaW8AAAAAAAAAAwAAAAQAAALoAAAALwAAAAMAAAAEAAAD/AAAADAAAAAD
AAAADAAABAYAAAAuAAAAEAAAAAAAAAADAAAADAAABBAAAAAuAAAAEQAAAAAA
AAADAAAABAAABBoABhqAAAAAAWkyYy1tdXhANzQAAAAAAAMAAAAMAAAAAG54
cCxwY2E5NTQ4AAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAA
AwAAAAQAAABaAAAAdAAAAAFpMmNAMAAAAAAAAAMAAAAEAAAACwAAAAEAAAAD
AAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAAAAAAAAFlZXByb21ANTQAAAAA
AAADAAAADAAAAABhdG1lbCwyNGMwOAAAAAADAAAABAAAAFoAAABUAAAAAwAA
AAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAEAAAABYm9hcmQtc25AMAAAAAAA
AwAAAAgAAABaAAAAAAAAABQAAAACAAAAAWV0aC1tYWNAMjAAAAAAAAMAAAAI
AAAAWgAAACAAAAAGAAAAAgAAAAFib2FyZC1uYW1lQGQwAAAAAAAAAwAAAAgA
AABaAAAA0AAAAAYAAAACAAAAAWJvYXJkLXJldmlzaW9uQGUwAAAAAAAAAwAA
AAgAAABaAAAA4AAAAAMAAAACAAAAAgAAAAIAAAABaTJjQDEAAAAAAAADAAAA
BAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAEAAAAB
Y2xvY2stZ2VuZXJhdG9yQDM2AAAAAAADAAAADgAAAABzaWxhYnMsc2k1MzQx
AAAAAAAAAwAAAAQAAABaAAAANgAAAAIAAAACAAAAAWkyY0AyAAAAAAAAAwAA
AAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAACAAAA
AWNsb2NrLWdlbmVyYXRvckA1ZAAAAAAAAwAAAAQAAAIwAAAAAAAAAAMAAAAN
AAAAAHNpbGFicyxzaTU3MAAAAAAAAAADAAAABAAAAFoAAABdAAAAAwAAAAQA
AARhAAAAMgAAAAMAAAAEAAAEdxHhowAAAAADAAAABAAABBoR4aMAAAAAAwAA
AAsAAASEc2k1NzBfdXNlcgAAAAAAAgAAAAIAAAABaTJjQDMAAAAAAAADAAAA
BAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAMAAAAB
Y2xvY2stZ2VuZXJhdG9yQDVkAAAAAAADAAAABAAAAjAAAAAAAAAAAwAAAA0A
AAAAc2lsYWJzLHNpNTcwAAAAAAAAAAMAAAAEAAAAWgAAAF0AAAADAAAABAAA
BGEAAAAyAAAAAwAAAAQAAAR3CVAvkAAAAAMAAAAEAAAEGgjZ7iAAAAADAAAA
CgAABIRzaTU3MF9tZ3QAAAAAAAACAAAAAgAAAAFpMmNANAAAAAAAAAMAAAAE
AAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAABAAAAAFj
bG9jay1nZW5lcmF0b3JANjkAAAAAAAMAAAAOAAAAAHNpbGFicyxzaTUzMjgA
AAAAAAADAAAABAAAAFoAAABpAAAAAgAAAAIAAAACAAAAAWkyYy1tdXhANzUA
AAAAAAMAAAAMAAAAAG54cCxwY2E5NTQ4AAAAAAMAAAAEAAAACwAAAAEAAAAD
AAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAAdQAAAAFpMmNAMAAAAAAAAAMA
AAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAAAAAA
AAIAAAABaTJjQDEAAAAAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAA
AAAAAAMAAAAEAAAAWgAAAAEAAAACAAAAAWkyY0AyAAAAAAAAAwAAAAQAAAAL
AAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAACAAAAAgAAAAFp
MmNAMwAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAA
AAQAAABaAAAAAwAAAAIAAAABaTJjQDQAAAAAAAADAAAABAAAAAsAAAABAAAA
AwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAQAAAACAAAAAWkyY0A1AAAA
AAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoA
AAAFAAAAAgAAAAFpMmNANgAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAA
ABoAAAAAAAAAAwAAAAQAAABaAAAABgAAAAIAAAABaTJjQDcAAAAAAAADAAAA
BAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAcAAAAC
AAAAAgAAAAIAAAABbWVtb3J5LWNvbnRyb2xsZXJAZmY5NjAwMDAAAAAAAAMA
AAAVAAAAAHhsbngsenlucW1wLW9jbWMtMS4wAAAAAAAAAAMAAAAQAAAAWgAA
AAD/lgAAAAAAAAAAEAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAA
AAAAAAoAAAAEAAAAAgAAAAFwZXJmLW1vbml0b3JAZmZhMDAwMDAAAAAAAAAD
AAAAFgAAAAB4bG54LGF4aS1wZXJmLW1vbml0b3IAAAAAAAADAAAAEAAAAFoA
AAAA/6AAAAAAAAAAAQAAAAAAAwAAAAwAAAE6AAAAAAAAABkAAAAEAAAAAwAA
AAQAAAEpAAAABAAAAAMAAAAEAAAElwAAAAAAAAADAAAABAAABKsAAAAAAAAA
AwAAAAQAAAS9AAAAAQAAAAMAAAAEAAAE1AAAAAEAAAADAAAABAAABOwAAAAB
AAAAAwAAAAQAAAUCAAAAAQAAAAMAAAAEAAAFHwAAAAgAAAADAAAABAAABTQA
AAAgAAAAAwAAAAQAAAVMAAAAIAAAAAMAAAAEAAAFbAAAACAAAAADAAAABAAA
BYQAAAABAAAAAwAAAAgAAABuAAAAAwAAAB8AAAACAAAAAXBlcmYtbW9uaXRv
ckBmZDBiMDAwMAAAAAAAAAMAAAAWAAAAAHhsbngsYXhpLXBlcmYtbW9uaXRv
cgAAAAAAAAMAAAAQAAAAWgAAAAD9CwAAAAAAAAABAAAAAAADAAAADAAAAToA
AAAAAAAAewAAAAQAAAADAAAABAAAASkAAAAEAAAAAwAAAAQAAASXAAAAAAAA
AAMAAAAEAAAEqwAAAAAAAAADAAAABAAABL0AAAAGAAAAAwAAAAQAAATUAAAA
AQAAAAMAAAAEAAAE7AAAAAAAAAADAAAABAAABQIAAAABAAAAAwAAAAQAAAUf
AAAACgAAAAMAAAAEAAAFNAAAACAAAAADAAAABAAABUwAAAAgAAAAAwAAAAQA
AAVsAAAAIAAAAAMAAAAEAAAFhAAAAAEAAAADAAAACAAAAG4AAAADAAAAHAAA
AAIAAAABcGVyZi1tb25pdG9yQGZkNDkwMDAwAAAAAAAAAwAAABYAAAAAeGxu
eCxheGktcGVyZi1tb25pdG9yAAAAAAAAAwAAABAAAABaAAAAAP1JAAAAAAAA
AAEAAAAAAAMAAAAMAAABOgAAAAAAAAB7AAAABAAAAAMAAAAEAAABKQAAAAQA
AAADAAAABAAABJcAAAAAAAAAAwAAAAQAAASrAAAAAAAAAAMAAAAEAAAEvQAA
AAEAAAADAAAABAAABNQAAAABAAAAAwAAAAQAAATsAAAAAAAAAAMAAAAEAAAF
AgAAAAEAAAADAAAABAAABR8AAAAIAAAAAwAAAAQAAAU0AAAAIAAAAAMAAAAE
AAAFTAAAACAAAAADAAAABAAABWwAAAAgAAAAAwAAAAQAAAWEAAAAAQAAAAMA
AAAIAAAAbgAAAAMAAAAcAAAAAgAAAAFwZXJmLW1vbml0b3JAZmZhMTAwMDAA
AAAAAAADAAAAFgAAAAB4bG54LGF4aS1wZXJmLW1vbml0b3IAAAAAAAADAAAA
EAAAAFoAAAAA/6EAAAAAAAAAAQAAAAAAAwAAAAwAAAE6AAAAAAAAABkAAAAE
AAAAAwAAAAQAAAEpAAAABAAAAAMAAAAEAAAElwAAAAAAAAADAAAABAAABKsA
AAAAAAAAAwAAAAQAAAS9AAAAAQAAAAMAAAAEAAAE1AAAAAEAAAADAAAABAAA
BOwAAAABAAAAAwAAAAQAAAUCAAAAAQAAAAMAAAAEAAAFHwAAAAgAAAADAAAA
BAAABTQAAAAgAAAAAwAAAAQAAAVMAAAAIAAAAAMAAAAEAAAFbAAAACAAAAAD
AAAABAAABYQAAAABAAAAAwAAAAgAAABuAAAAAwAAAB8AAAACAAAAAXBjaWVA
ZmQwZTAwMDAAAAAAAAADAAAAEwAAAAB4bG54LG53bC1wY2llLTIuMTEAAAAA
AAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQAAAALAAAAAwAAAAMAAAAEAAAA
GgAAAAIAAAADAAAABAAAAkYAAAABAAAAAwAAAAAAAAWcAAAAAwAAAAQAAAAs
cGNpAAAAAAMAAAAEAAABKQAAAAQAAAADAAAAPAAAAToAAAAAAAAAdgAAAAQA
AAAAAAAAdQAAAAQAAAAAAAAAdAAAAAQAAAAAAAAAcwAAAAQAAAAAAAAAcgAA
AAQAAAADAAAAGgAAAwFtaXNjAGR1bW15AGludHgAbXNpMQBtc2kwAAAAAAAA
AwAAAAQAAAWrAAAAMQAAAAMAAAAwAAAAWgAAAAD9DgAAAAAAAAAAEAAAAAAA
/UgAAAAAAAAAABAAAAAAgAAAAAAAAAAAAQAAAAAAAAMAAAAQAAABWGJyZWcA
cGNpcmVnAGNmZwAAAAADAAAAOAAAAVECAAAAAAAAAOAAAAAAAAAA4AAAAAAA
AAAQAAAAQwAAAAAAAAYAAAAAAAAABgAAAAAAAAACAAAAAAAAAAMAAAAQAAAF
tgAAAAAAAAAAAAAAAAAAAAcAAAADAAAACAAABckAAAAAAAAA/wAAAAMAAABg
AAAF0wAAAAAAAAAAAAAAAAAAAAEAAAAyAAAAAQAAAAAAAAAAAAAAAAAAAAIA
AAAyAAAAAgAAAAAAAAAAAAAAAAAAAAMAAAAyAAAAAwAAAAAAAAAAAAAAAAAA
AAQAAAAyAAAABAAAAAMAAAAIAAACzAAAACYAAAA7AAAAAwAAAAgAAABuAAAA
AwAAABcAAAADAAAABAAABeEAAAAAAAAAAwAAAAQAAAXyAAAAAAAAAAMAAAAE
AAAGAwAAAAAAAAADAAAABAAABhQAAAAAAAAAAwAAAAQAAAYlAAAAAAAAAAMA
AAAEAAAGNgAAAAAAAAADAAAACgAABkdSb290IFBvcnQAAAAAAAADAAAABAAA
AxEAAAAAAAAAAwAAAAQAAADcAAAAMQAAAAFsZWdhY3ktaW50ZXJydXB0LWNv
bnRyb2xsZXIAAAAAAwAAAAAAAAJXAAAAAwAAAAQAAAALAAAAAAAAAAMAAAAE
AAACRgAAAAEAAAADAAAABAAAANwAAAAyAAAAAgAAAAIAAAABc3BpQGZmMGYw
MDAwAAAAAAAAAAMAAAAAAAABFQAAAAMAAAAVAAAAAHhsbngsenlucW1wLXFz
cGktMS4wAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAA0AAAGJcmVm
X2NsawBwY2xrAAAAAAAAAAMAAAAMAAABOgAAAAAAAAAPAAAABAAAAAMAAAAE
AAABKQAAAAQAAAADAAAABAAABlYAAAABAAAAAwAAACAAAABaAAAAAP8PAAAA
AAAAAAAQAAAAAADAAAAAAAAAAAgAAAAAAAADAAAABAAAAAsAAAABAAAAAwAA
AAQAAAAaAAAAAAAAAAMAAAAIAAACzAAAACYAAAAtAAAAAwAAABAAAABuAAAA
AwAAADUAAAADAAAAHwAAAAMAAAAEAAAGXQAAAAEAAAADAAAABAAABmUAAAAE
AAAAAwAAAAQAAAZ2AAAABAAAAAMAAAAEAAAA3AAAABIAAAABZmxhc2hAMAAA
AAADAAAAFQAAAABtMjVwODAAamVkZWMsc3BpLW5vcgAAAAAAAAADAAAABAAA
AAsAAAABAAAAAwAAAAQAAAAaAAAAAQAAAAMAAAAEAAAAWgAAAAAAAAADAAAA
BAAABnYAAAABAAAAAwAAAAQAAAZlAAAABAAAAAMAAAAEAAAGhwZv8wAAAAAB
cGFydGl0aW9uQDAAAAAAAwAAAAUAAARMYm9vdAAAAAAAAAADAAAACAAAAFoA
AAAAAeAAAAAAAAIAAAABcGFydGl0aW9uQDEAAAAAAwAAAAgAAARMYm9vdGVu
dgAAAAADAAAACAAAAFoB4AAAAAQAAAAAAAIAAAABcGFydGl0aW9uQDIAAAAA
AwAAAAcAAARMa2VybmVsAAAAAAADAAAACAAAAFoB5AAAAkAAAAAAAAIAAAAC
AAAAAgAAAAFydGNAZmZhNjAwMDAAAAAAAAAAAwAAABAAAAAAeGxueCx6eW5x
bXAtcnRjAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABAAAABaAAAAAP+m
AAAAAAAAAAABAAAAAAMAAAAEAAABKQAAAAQAAAADAAAAGAAAAToAAAAAAAAA
GgAAAAQAAAAAAAAAGwAAAAQAAAADAAAACgAAAwFhbGFybQBzZWMAAAAAAAAD
AAAABAAABpkAAIAAAAAAAgAAAAF6eW5xbXBfcGh5QGZkNDAwMDAwAAAAAAMA
AAAXAAAAAHhsbngsenlucW1wLXBzZ3RyLXYxLjEAAAAAAAMAAAAFAAABbm9r
YXkAAAAAAAAAAwAAACAAAABaAAAAAP1AAAAAAAAAAAQAAAAAAAD9PQAAAAAA
AAAAEAAAAAADAAAADAAAAVhzZXJkZXMAc2lvdQAAAAADAAAABAAABqUAAAAz
AAAAAwAAAA0AAAaxc29jX3JldmlzaW9uAAAAAAAAAAMAAABgAAAGwgAAADQA
AAAQAAAANAAAADsAAAA0AAAAPAAAADQAAAA9AAAANAAAAD4AAAA0AAAAPwAA
ADQAAABAAAAANAAAAAMAAAA0AAAAHQAAADQAAAAeAAAANAAAAB8AAAA0AAAA
IAAAAAMAAAB4AAAGyXNhdGFfcnN0AHVzYjBfY3JzdAB1c2IxX2Nyc3QAdXNi
MF9oaWJyc3QAdXNiMV9oaWJyc3QAdXNiMF9hcGJyc3QAdXNiMV9hcGJyc3QA
ZHBfcnN0AGdlbTBfcnN0AGdlbTFfcnN0AGdlbTJfcnN0AGdlbTNfcnN0AAAA
AAFsYW5lMAAAAAAAAAMAAAAEAAAG1QAAAAQAAAACAAAAAWxhbmUxAAAAAAAA
AwAAAAQAAAbVAAAABAAAAAMAAAAEAAAA3AAAADwAAAACAAAAAWxhbmUyAAAA
AAAAAwAAAAQAAAbVAAAABAAAAAMAAAAEAAAA3AAAADoAAAACAAAAAWxhbmUz
AAAAAAAAAwAAAAQAAAbVAAAABAAAAAMAAAAEAAAA3AAAADUAAAACAAAAAgAA
AAFhaGNpQGZkMGMwMDAwAAAAAAAAAwAAAA8AAAAAY2V2YSxhaGNpLTF2ODQA
AAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABAAAABaAAAAAP0MAAAAAAAA
AAAgAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAhQAAAAQA
AAADAAAACAAAAswAAAAmAAAAHAAAAAMAAAAIAAAAbgAAAAMAAAAWAAAAAwAA
AAQAAAbgGEAYKAAAAAMAAAAEAAAG9wYUCA4AAAADAAAABAAABw4TCEoGAAAA
AwAAAAQAAAcjlqQ//AAAAAMAAAAEAAAHOBhAGCgAAAADAAAABAAAB08GFAgO
AAAAAwAAAAQAAAdmEwhKBgAAAAMAAAAEAAAHe5akP/wAAAADAAAACQAAB5Bz
YXRhLXBoeQAAAAAAAAADAAAAFAAAB5oAAAA1AAAAAQAAAAEAAAABB3NZQAAA
AAMAAAAEAAAHnwAAAAAAAAADAAAABAAAB7cAAAAAAAAAAgAAAAFtbWNAZmYx
NjAwMDAAAAAAAAAAAwAAAAAAAAEVAAAAAwAAACMAAAAAeGxueCx6eW5xbXAt
OC45YQBhcmFzYW4sc2RoY2ktOC45YQAAAAAAAwAAAAkAAAFuZGlzYWJsZWQA
AAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAAwAAAABAAA
AAMAAAAQAAAAWgAAAAD/FgAAAAAAAAAAEAAAAAADAAAAEAAAAYljbGtfeGlu
AGNsa19haGIAAAAAAwAAAAQAAAfPAAAAAAAAAAMAAAAIAAACzAAAACYAAAAn
AAAAAwAAAAQAAAalAAAAMwAAAAMAAAANAAAGsXNvY19yZXZpc2lvbgAAAAAA
AAADAAAABAAAAjAAAAABAAAAAwAAABcAAASEY2xrX291dF9zZDAAY2xrX2lu
X3NkMAAAAAAAAwAAABAAAABuAAAAAwAAADYAAAADAAAAHwAAAAMAAAAEAAAA
3AAAACMAAAACAAAAAW1tY0BmZjE3MDAwMAAAAAAAAAADAAAAAAAAARUAAAAD
AAAAIwAAAAB4bG54LHp5bnFtcC04LjlhAGFyYXNhbixzZGhjaS04LjlhAAAA
AAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAA
AToAAAAAAAAAMQAAAAQAAAADAAAAEAAAAFoAAAAA/xcAAAAAAAAAABAAAAAA
AwAAABAAAAGJY2xrX3hpbgBjbGtfYWhiAAAAAAMAAAAEAAAHzwAAAAEAAAAD
AAAACAAAAswAAAAmAAAAKAAAAAMAAAAEAAAGpQAAADMAAAADAAAADQAABrFz
b2NfcmV2aXNpb24AAAAAAAAAAwAAAAQAAAIwAAAAAQAAAAMAAAAXAAAEhGNs
a19vdXRfc2QxAGNsa19pbl9zZDEAAAAAAAMAAAAQAAAAbgAAAAMAAAA3AAAA
AwAAAB8AAAADAAAACAAAAtpkZWZhdWx0AAAAAAMAAAAEAAAC6AAAADYAAAAD
AAAAAAAAB94AAAADAAAABAAABBoLLLyuAAAAAwAAAAQAAAfnAAAAAQAAAAMA
AAAEAAAA3AAAACQAAAACAAAAAXNwaUBmZjA0MDAwMAAAAAAAAAADAAAADgAA
AABjZG5zLHNwaS1yMXA2AAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAA
AwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAATAAAABAAAAAMAAAAQ
AAAAWgAAAAD/BAAAAAAAAAAAEAAAAAADAAAADQAAAYlyZWZfY2xrAHBjbGsA
AAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAACAAA
AswAAAAmAAAAIwAAAAMAAAAQAAAAbgAAAAMAAAA6AAAAAwAAAB8AAAACAAAA
AXNwaUBmZjA1MDAwMAAAAAAAAAADAAAADgAAAABjZG5zLHNwaS1yMXA2AAAA
AAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMA
AAAMAAABOgAAAAAAAAAUAAAABAAAAAMAAAAQAAAAWgAAAAD/BQAAAAAAAAAA
EAAAAAADAAAADQAAAYlyZWZfY2xrAHBjbGsAAAAAAAAAAwAAAAQAAAALAAAA
AQAAAAMAAAAEAAAAGgAAAAAAAAADAAAACAAAAswAAAAmAAAAJAAAAAMAAAAQ
AAAAbgAAAAMAAAA7AAAAAwAAAB8AAAACAAAAAXRpbWVyQGZmMTEwMDAwAAAA
AAADAAAACQAAAABjZG5zLHR0YwAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAA
AAAAAAADAAAABAAAASkAAAAEAAAAAwAAACQAAAE6AAAAAAAAACQAAAAEAAAA
AAAAACUAAAAEAAAAAAAAACYAAAAEAAAAAwAAABAAAABaAAAAAP8RAAAAAAAA
AAAQAAAAAAMAAAAEAAAH9QAAACAAAAADAAAACAAAAswAAAAmAAAAGAAAAAMA
AAAIAAAAbgAAAAMAAAAfAAAAAgAAAAF0aW1lckBmZjEyMDAwMAAAAAAAAwAA
AAkAAAAAY2Rucyx0dGMAAAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAA
AwAAAAQAAAEpAAAABAAAAAMAAAAkAAABOgAAAAAAAAAnAAAABAAAAAAAAAAo
AAAABAAAAAAAAAApAAAABAAAAAMAAAAQAAAAWgAAAAD/EgAAAAAAAAAAEAAA
AAADAAAABAAAB/UAAAAgAAAAAwAAAAgAAALMAAAAJgAAABkAAAADAAAACAAA
AG4AAAADAAAAHwAAAAIAAAABdGltZXJAZmYxMzAwMDAAAAAAAAMAAAAJAAAA
AGNkbnMsdHRjAAAAAAAAAAMAAAAJAAABbmRpc2FibGVkAAAAAAAAAAMAAAAE
AAABKQAAAAQAAAADAAAAJAAAAToAAAAAAAAAKgAAAAQAAAAAAAAAKwAAAAQA
AAAAAAAALAAAAAQAAAADAAAAEAAAAFoAAAAA/xMAAAAAAAAAABAAAAAAAwAA
AAQAAAf1AAAAIAAAAAMAAAAIAAACzAAAACYAAAAaAAAAAwAAAAgAAABuAAAA
AwAAAB8AAAACAAAAAXRpbWVyQGZmMTQwMDAwAAAAAAADAAAACQAAAABjZG5z
LHR0YwAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAABAAAASkA
AAAEAAAAAwAAACQAAAE6AAAAAAAAAC0AAAAEAAAAAAAAAC4AAAAEAAAAAAAA
AC8AAAAEAAAAAwAAABAAAABaAAAAAP8UAAAAAAAAAAAQAAAAAAMAAAAEAAAH
9QAAACAAAAADAAAACAAAAswAAAAmAAAAGwAAAAMAAAAIAAAAbgAAAAMAAAAf
AAAAAgAAAAFzZXJpYWxAZmYwMDAwMDAAAAAAAwAAAAAAAAEVAAAAAwAAAB0A
AAAAY2Rucyx1YXJ0LXIxcDEyAHhsbngseHVhcnRwcwAAAAAAAAADAAAABQAA
AW5va2F5AAAAAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAA
FQAAAAQAAAADAAAAEAAAAFoAAAAA/wAAAAAAAAAAABAAAAAAAwAAAA4AAAGJ
dWFydF9jbGsAcGNsawAAAAAAAAMAAAAIAAACzAAAACYAAAAhAAAAAwAAABAA
AABuAAAAAwAAADgAAAADAAAAHwAAAAMAAAAIAAAC2mRlZmF1bHQAAAAAAwAA
AAQAAALoAAAANwAAAAMAAAAAAAAIAQAAAAMAAAAHAAAALHNlcmlhbAAAAAAA
AwAAAAQAAAgOAAAAAAAAAAIAAAABc2VyaWFsQGZmMDEwMDAwAAAAAAMAAAAA
AAABFQAAAAMAAAAdAAAAAGNkbnMsdWFydC1yMXAxMgB4bG54LHh1YXJ0cHMA
AAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAA
AAwAAAE6AAAAAAAAABYAAAAEAAAAAwAAABAAAABaAAAAAP8BAAAAAAAAAAAQ
AAAAAAMAAAAOAAABiXVhcnRfY2xrAHBjbGsAAAAAAAADAAAACAAAAswAAAAm
AAAAIgAAAAMAAAAQAAAAbgAAAAMAAAA5AAAAAwAAAB8AAAADAAAACAAAAtpk
ZWZhdWx0AAAAAAMAAAAEAAAC6AAAADgAAAADAAAAAAAACAEAAAADAAAABwAA
ACxzZXJpYWwAAAAAAAMAAAAEAAAIDgAAAAEAAAACAAAAAXVzYjBAZmY5ZDAw
MDAAAAAAAAADAAAABAAAAAsAAAACAAAAAwAAAAQAAAAaAAAAAgAAAAMAAAAF
AAABbm9rYXkAAAAAAAAAAwAAABEAAAAAeGxueCx6eW5xbXAtZHdjMwAAAAAA
AAADAAAAEAAAAFoAAAAA/50AAAAAAAAAAAEAAAAAAwAAABAAAAGJYnVzX2Ns
awByZWZfY2xrAAAAAAMAAAAIAAACzAAAACYAAAAWAAAAAwAAAAAAAAFRAAAA
AwAAAAQAAAalAAAAMwAAAAMAAAANAAAGsXNvY19yZXZpc2lvbgAAAAAAAAAD
AAAAEAAAAG4AAAADAAAAIAAAAAMAAAAiAAAAAwAAAAgAAALaZGVmYXVsdAAA
AAADAAAABAAAAugAAAA5AAAAAwAAAAQAAAMRAAAAAQAAAAMAAAAEAAAIGgAA
AAAAAAADAAAABAAACCwAAAAAAAAAAWR3YzNAZmUyMDAwMDAAAAAAAAADAAAA
CgAAAABzbnBzLGR3YzMAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAQ
AAAAWgAAAAD+IAAAAAAAAAAEAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABMA
AAMBZHdjX3VzYjMAb3RnAGhpYmVyAAAAAAADAAAAJAAAAToAAAAAAAAAQQAA
AAQAAAAAAAAARQAAAAQAAAAAAAAASwAAAAQAAAADAAAABAAACEAAAAAgAAAA
AwAAAAAAAAhjAAAAAwAAAAAAAAh1AAAAAwAAAAAAAAiVAAAAAwAAAAAAAAiy
AAAAAwAAAAUAAAjJaG9zdAAAAAAAAAADAAAAAAAACNEAAAADAAAACQAAB5B1
c2IzLXBoeQAAAAAAAAADAAAAFAAAB5oAAAA6AAAABAAAAAAAAAACAYy6gAAA
AAMAAAAMAAAI53N1cGVyLXNwZWVkAAAAAAMAAAAEAAAA3AAAABAAAAACAAAA
AgAAAAF1c2IxQGZmOWUwMDAwAAAAAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAE
AAAAGgAAAAIAAAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAAEQAAAAB4
bG54LHp5bnFtcC1kd2MzAAAAAAAAAAMAAAAQAAAAWgAAAAD/ngAAAAAAAAAA
AQAAAAADAAAAEAAAAYlidXNfY2xrAHJlZl9jbGsAAAAAAwAAAAgAAALMAAAA
JgAAABcAAAADAAAAAAAAAVEAAAADAAAABAAABqUAAAAzAAAAAwAAAA0AAAax
c29jX3JldmlzaW9uAAAAAAAAAAMAAAAQAAAAbgAAAAMAAAAhAAAAAwAAACIA
AAABZHdjM0BmZTMwMDAwMAAAAAAAAAMAAAAKAAAAAHNucHMsZHdjMwAAAAAA
AAMAAAAJAAABbmRpc2FibGVkAAAAAAAAAAMAAAAQAAAAWgAAAAD+MAAAAAAA
AAAEAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABMAAAMBZHdjX3VzYjMAb3Rn
AGhpYmVyAAAAAAADAAAAJAAAAToAAAAAAAAARgAAAAQAAAAAAAAASgAAAAQA
AAAAAAAATAAAAAQAAAADAAAABAAACEAAAAAgAAAAAwAAAAAAAAhjAAAAAwAA
AAAAAAh1AAAAAwAAAAAAAAiVAAAAAwAAAAAAAAiyAAAAAwAAAAQAAADcAAAA
EQAAAAIAAAACAAAAAXdhdGNoZG9nQGZkNGQwMDAwAAAAAAAAAwAAAA4AAAAA
Y2Rucyx3ZHQtcjFwMgAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABxAAAAAQAAAAMAAAAQAAAAWgAA
AAD9TQAAAAAAAAAAEAAAAAADAAAABAAACPUAAAA8AAAAAwAAAAAAAAkBAAAA
AwAAAAgAAABuAAAAAwAAAEsAAAACAAAAAXdhdGNoZG9nQGZmMTUwMDAwAAAA
AAAAAwAAAA4AAAAAY2Rucyx3ZHQtcjFwMgAAAAAAAAMAAAAFAAABbm9rYXkA
AAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAA0AAAAAQAA
AAMAAAAQAAAAWgAAAAD/FQAAAAAAAAAAEAAAAAADAAAABAAACPUAAAAKAAAA
AwAAAAgAAABuAAAAAwAAAHAAAAACAAAAAWFtc0BmZmE1MDAwMAAAAAAAAAAD
AAAAEAAAAAB4bG54LHp5bnFtcC1hbXMAAAAAAwAAAAUAAAFub2theQAAAAAA
AAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAADgAAAAEAAAAAwAA
AAgAAAMBYW1zLWlycQAAAAADAAAAEAAAAFoAAAAA/6UAAAAAAAAAAAgAAAAA
AwAAAAkAAAFYYW1zLWJhc2UAAAAAAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAE
AAAAGgAAAAIAAAADAAAABAAABDoAAAABAAAAAwAAAAAAAAFRAAAAAwAAAAgA
AABuAAAAAwAAAEYAAAABYW1zX3BzQGZmYTUwODAwAAAAAAMAAAATAAAAAHhs
bngsenlucW1wLWFtcy1wcwAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAA
EAAAAFoAAAAA/6UIAAAAAAAAAAQAAAAAAgAAAAFhbXNfcGxAZmZhNTBjMDAA
AAAAAwAAABMAAAAAeGxueCx6eW5xbXAtYW1zLXBsAAAAAAADAAAABQAAAW5v
a2F5AAAAAAAAAAMAAAAQAAAAWgAAAAD/pQwAAAAAAAAABAAAAAACAAAAAgAA
AAFkbWFAZmQ0YzAwMDAAAAAAAAAAAwAAAAsAAAAAeGxueCxkcGRtYQAAAAAA
AwAAAAUAAAFub2theQAAAAAAAAADAAAAEAAAAFoAAAAA/UwAAAAAAAAAABAA
AAAAAwAAAAwAAAE6AAAAAAAAAHoAAAAEAAAAAwAAAAQAAAEpAAAABAAAAAMA
AAAIAAABiWF4aV9jbGsAAAAAAwAAAAgAAALMAAAAJgAAACkAAAADAAAABAAA
CRIAAAAGAAAAAwAAAAQAAAkfAAAAAQAAAAMAAAAIAAAAbgAAAAMAAAAUAAAA
AwAAAAQAAADcAAAAPQAAAAFkbWEtdmlkZW8wY2hhbm5lbAAAAAAAAAMAAAAM
AAAAAHhsbngsdmlkZW8wAAAAAAIAAAABZG1hLXZpZGVvMWNoYW5uZWwAAAAA
AAADAAAADAAAAAB4bG54LHZpZGVvMQAAAAACAAAAAWRtYS12aWRlbzJjaGFu
bmVsAAAAAAAAAwAAAAwAAAAAeGxueCx2aWRlbzIAAAAAAgAAAAFkbWEtZ3Jh
cGhpY3NjaGFubmVsAAAAAAMAAAAOAAAAAHhsbngsZ3JhcGhpY3MAAAAAAAAC
AAAAAWRtYS1hdWRpbzBjaGFubmVsAAAAAAAAAwAAAAwAAAAAeGxueCxhdWRp
bzAAAAAAAgAAAAFkbWEtYXVkaW8xY2hhbm5lbAAAAAAAAAMAAAAMAAAAAHhs
bngsYXVkaW8xAAAAAAIAAAACAAAAAXp5bnFtcC1kaXNwbGF5QGZkNGEwMDAw
AAAAAAMAAAAWAAAAAHhsbngsenlucW1wLWRwc3ViLTEuNwAAAAAAAAMAAAAF
AAABbm9rYXkAAAAAAAAAAwAAAEAAAABaAAAAAP1KAAAAAAAAAAAQAAAAAAD9
SqAAAAAAAAAAEAAAAAAA/UqwAAAAAAAAABAAAAAAAP1KwAAAAAAAAAAQAAAA
AAMAAAAUAAABWGRwAGJsZW5kAGF2X2J1ZgBhdWQAAAAAAwAAAAwAAAE6AAAA
AAAAAHcAAAAEAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAqAAABiWRwX2FwYl9j
bGsAZHBfYXVkX2NsawBkcF92dGNfcGl4ZWxfY2xrX2luAAAAAAAAAwAAAAgA
AALMAAAAJgAAACkAAAADAAAAFAAAAG4AAAA7AAAAAwAAABEAAAADAAAAEAAA
AAMAAAAIAAAHkGRwLXBoeTAAAAAAAwAAABQAAAeaAAAAPAAAAAYAAAAAAAAA
AwGb/MAAAAADAAAABAAACSoAAAABAAAAAXZpZC1sYXllcgAAAAAAAAMAAAAP
AAAJOXZpZDAAdmlkMQB2aWQyAAAAAAADAAAAGAAACUMAAAA9AAAAAAAAAD0A
AAABAAAAPQAAAAIAAAACAAAAAWdmeC1sYXllcgAAAAAAAAMAAAAFAAAJOWdm
eDAAAAAAAAAAAwAAAAgAAAlDAAAAPQAAAAMAAAACAAAAAWkyYy1idXMAAAAA
AgAAAAF6eW5xbXBfZHBfc25kX2NvZGVjMAAAAAAAAAADAAAAEgAAAAB4bG54
LGRwLXNuZC1jb2RlYwAAAAAAAAMAAAAIAAABiWF1ZF9jbGsAAAAAAwAAAAgA
AABuAAAAAwAAABEAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAAA3AAA
AEAAAAACAAAAAXp5bnFtcF9kcF9zbmRfcGNtMAAAAAAAAwAAABAAAAAAeGxu
eCxkcC1zbmQtcGNtAAAAAAMAAAAIAAAJQwAAAD0AAAAEAAAAAwAAAAMAAAk5
dHgAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQAAADcAAAAPgAAAAIA
AAABenlucW1wX2RwX3NuZF9wY20xAAAAAAADAAAAEAAAAAB4bG54LGRwLXNu
ZC1wY20AAAAAAwAAAAgAAAlDAAAAPQAAAAUAAAADAAAAAwAACTl0eAAAAAAA
AwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAANwAAAA/AAAAAgAAAAF6eW5x
bXBfZHBfc25kX2NhcmQAAAAAAAMAAAARAAAAAHhsbngsZHAtc25kLWNhcmQA
AAAAAAAAAwAAAAgAAAlIAAAAPgAAAD8AAAADAAAABAAACVgAAABAAAAAAwAA
AAUAAAFub2theQAAAAAAAAACAAAAAgAAAAIAAAABZmNsazAAAAAAAAADAAAA
BQAAAW5va2F5AAAAAAAAAAMAAAAKAAAAAHhsbngsZmNsawAAAAAAAAMAAAAI
AAAAbgAAAAMAAABHAAAAAgAAAAFmY2xrMQAAAAAAAAMAAAAFAAABbm9rYXkA
AAAAAAAAAwAAAAoAAAAAeGxueCxmY2xrAAAAAAAAAwAAAAgAAABuAAAAAwAA
AEgAAAACAAAAAWZjbGsyAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAA
CgAAAAB4bG54LGZjbGsAAAAAAAADAAAACAAAAG4AAAADAAAASQAAAAIAAAAB
ZmNsazMAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAKAAAAAHhsbngs
ZmNsawAAAAAAAAMAAAAIAAAAbgAAAAMAAABKAAAAAgAAAAFwc3NfcmVmX2Ns
awAAAAADAAAAAAAAARUAAAADAAAADAAAAABmaXhlZC1jbG9jawAAAAADAAAA
BAAAAjAAAAAAAAAAAwAAAAQAAAQaAfyTUAAAAAMAAAAEAAAA3AAAAAYAAAAC
AAAAAXZpZGVvX2NsawAAAAAAAAMAAAAAAAABFQAAAAMAAAAMAAAAAGZpeGVk
LWNsb2NrAAAAAAMAAAAEAAACMAAAAAAAAAADAAAABAAABBoBm/zAAAAAAwAA
AAQAAADcAAAABwAAAAIAAAABcHNzX2FsdF9yZWZfY2xrAAAAAAMAAAAAAAAB
FQAAAAMAAAAMAAAAAGZpeGVkLWNsb2NrAAAAAAMAAAAEAAACMAAAAAAAAAAD
AAAABAAABBoAAAAAAAAAAwAAAAQAAADcAAAACAAAAAIAAAABZ3RfY3J4X3Jl
Zl9jbGsAAAAAAAMAAAAAAAABFQAAAAMAAAAMAAAAAGZpeGVkLWNsb2NrAAAA
AAMAAAAEAAACMAAAAAAAAAADAAAABAAABBoGb/MAAAAAAwAAAAQAAADcAAAA
CgAAAAIAAAABYXV4X3JlZl9jbGsAAAAAAwAAAAAAAAEVAAAAAwAAAAwAAAAA
Zml4ZWQtY2xvY2sAAAAAAwAAAAQAAAIwAAAAAAAAAAMAAAAEAAAEGgGb/MAA
AAADAAAABAAAANwAAAAJAAAAAgAAAAFkcF9hY2xrAAAAAAMAAAAMAAAAAGZp
eGVkLWNsb2NrAAAAAAMAAAAEAAACMAAAAAAAAAADAAAABAAABBoF9eEAAAAA
AwAAAAQAAAlqAAAAZAAAAAMAAAAEAAAA3AAAADsAAAACAAAAAWdwaW8ta2V5
cwAAAAAAAAMAAAAKAAAAAGdwaW8ta2V5cwAAAAAAAAMAAAAEAAAACwAAAAEA
AAADAAAABAAAABoAAAAAAAAAAwAAAAAAAAl5AAAAAXN3MTkAAAAAAAAAAwAA
AAUAAARMc3cxOQAAAAAAAAADAAAADAAABAoAAAAuAAAAFgAAAAAAAAADAAAA
BAAACYQAAABsAAAAAwAAAAAAAAmPAAAAAwAAAAAAAAl5AAAAAgAAAAIAAAAB
bGVkcwAAAAAAAAADAAAACgAAAABncGlvLWxlZHMAAAAAAAABaGVhcnRiZWF0
LWxlZAAAAAAAAAMAAAAKAAAETGhlYXJ0YmVhdAAAAAAAAAMAAAAMAAAECgAA
AC4AAAAXAAAAAAAAAAMAAAAKAAAJnWhlYXJ0YmVhdAAAAAAAAAIAAAACAAAA
AWNob3NlbgAAAAAAAwAAAC4AAAmzL2FtYmEvaTJjQGZmMDMwMDAwL2kyYy1t
dXhANzQvaTJjQDAvZWVwcm9tQDU0AAAAAAAAAwAAAEQAAAm/IGVhcmx5Y29u
IGNvbnNvbGU9dHR5UFMwLDExNTIwMCBjbGtfaWdub3JlX3VudXNlZCByb290
PS9kZXYvcmFtMCBydwAAAAADAAAAEQAACchzZXJpYWwwOjExNTIwMG44AAAA
AAAAAAIAAAABaW5hMjI2LXU3NgAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAA
AAAAAwAAACAAAAnUAAAAQQAAAAAAAABBAAAAAQAAAEEAAAACAAAAQQAAAAMA
AAACAAAAAWluYTIyNi11NzcAAAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAA
AAMAAAAgAAAJ1AAAAEIAAAAAAAAAQgAAAAEAAABCAAAAAgAAAEIAAAADAAAA
AgAAAAFpbmEyMjYtdTc4AAAAAAADAAAACgAAAABpaW8taHdtb24AAAAAAAAD
AAAAIAAACdQAAABDAAAAAAAAAEMAAAABAAAAQwAAAAIAAABDAAAAAwAAAAIA
AAABaW5hMjI2LXU4NwAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAA
ACAAAAnUAAAARAAAAAAAAABEAAAAAQAAAEQAAAACAAAARAAAAAMAAAACAAAA
AWluYTIyNi11ODUAAAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAg
AAAJ1AAAAEUAAAAAAAAARQAAAAEAAABFAAAAAgAAAEUAAAADAAAAAgAAAAFp
bmEyMjYtdTg2AAAAAAADAAAACgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAA
CdQAAABGAAAAAAAAAEYAAAABAAAARgAAAAIAAABGAAAAAwAAAAIAAAABaW5h
MjI2LXU5MwAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnU
AAAARwAAAAAAAABHAAAAAQAAAEcAAAACAAAARwAAAAMAAAACAAAAAWluYTIy
Ni11ODgAAAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAgAAAJ1AAA
AEgAAAAAAAAASAAAAAEAAABIAAAAAgAAAEgAAAADAAAAAgAAAAFpbmEyMjYt
dTE1AAAAAAADAAAACgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAACdQAAABJ
AAAAAAAAAEkAAAABAAAASQAAAAIAAABJAAAAAwAAAAIAAAABaW5hMjI2LXU5
MgAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnUAAAASgAA
AAAAAABKAAAAAQAAAEoAAAACAAAASgAAAAMAAAACAAAAAWluYTIyNi11NzkA
AAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAgAAAJ1AAAAEsAAAAA
AAAASwAAAAEAAABLAAAAAgAAAEsAAAADAAAAAgAAAAFpbmEyMjYtdTgxAAAA
AAADAAAACgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAACdQAAABMAAAAAAAA
AEwAAAABAAAATAAAAAIAAABMAAAAAwAAAAIAAAABaW5hMjI2LXU4MAAAAAAA
AwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnUAAAATQAAAAAAAABN
AAAAAQAAAE0AAAACAAAATQAAAAMAAAACAAAAAWluYTIyNi11ODQAAAAAAAMA
AAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAgAAAJ1AAAAE4AAAAAAAAATgAA
AAEAAABOAAAAAgAAAE4AAAADAAAAAgAAAAFpbmEyMjYtdTE2AAAAAAADAAAA
CgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAACdQAAABPAAAAAAAAAE8AAAAB
AAAATwAAAAIAAABPAAAAAwAAAAIAAAABaW5hMjI2LXU2NQAAAAAAAwAAAAoA
AAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnUAAAAUAAAAAAAAABQAAAAAQAA
AFAAAAACAAAAUAAAAAMAAAACAAAAAWluYTIyNi11NzQAAAAAAAMAAAAKAAAA
AGlpby1od21vbgAAAAAAAAMAAAAgAAAJ1AAAAFEAAAAAAAAAUQAAAAEAAABR
AAAAAgAAAFEAAAADAAAAAgAAAAFpbmEyMjYtdTc1AAAAAAADAAAACgAAAABp
aW8taHdtb24AAAAAAAADAAAAIAAACdQAAABSAAAAAAAAAFIAAAABAAAAUgAA
AAIAAABSAAAAAwAAAAIAAAABYWxpYXNlcwAAAAADAAAAGAAACeAvYW1iYS9l
dGhlcm5ldEBmZjBlMDAwMAAAAAADAAAAEwAACeovYW1iYS9pMmNAZmYwMjAw
MDAAAAAAAAMAAAATAAAJ7y9hbWJhL2kyY0BmZjAzMDAwMAAAAAAAAwAAABYA
AAn0L2FtYmEvc2VyaWFsQGZmMDAwMDAwAAAAAAAAAwAAABYAAAn8L2FtYmEv
c2VyaWFsQGZmMDEwMDAwAAAAAAAAAwAAABMAAAoEL2FtYmEvc3BpQGZmMGYw
MDAwAAAAAAACAAAAAW1lbW9yeQAAAAAAAwAAAAcAAAAsbWVtb3J5AAAAAAAD
AAAAIAAAAFoAAAAAAAAAAAAAAAB/8AAAAAAACAAAAAAAAAAAgAAAAAAAAAIA
AAACAAAACWNvbXBhdGlibGUAI2FkZHJlc3MtY2VsbHMAI3NpemUtY2VsbHMA
bW9kZWwAZGV2aWNlX3R5cGUAZW5hYmxlLW1ldGhvZABvcGVyYXRpbmctcG9p
bnRzLXYyAHJlZwBjcHUtaWRsZS1zdGF0ZXMAY2xvY2tzAGVudHJ5LW1ldGhv
ZABhcm0scHNjaS1zdXNwZW5kLXBhcmFtAGxvY2FsLXRpbWVyLXN0b3AAZW50
cnktbGF0ZW5jeS11cwBleGl0LWxhdGVuY3ktdXMAbWluLXJlc2lkZW5jeS11
cwBwaGFuZGxlAG9wcC1zaGFyZWQAb3BwLWh6AG9wcC1taWNyb3ZvbHQAY2xv
Y2stbGF0ZW5jeS1ucwB1LWJvb3QsZG0tcHJlLXJlbG9jAGludGVycnVwdC1w
YXJlbnQAaW50ZXJydXB0cwB4bG54LGlwaS1pZAByYW5nZXMAcmVnLW5hbWVz
ACNtYm94LWNlbGxzAHN0YXR1cwAjcG93ZXItZG9tYWluLWNlbGxzAGNsb2Nr
LW5hbWVzAG1ib3hlcwBtYm94LW5hbWVzACNyZXNldC1jZWxscwBncm91cHMA
ZnVuY3Rpb24AYmlhcy1wdWxsLXVwAHNsZXctcmF0ZQBpby1zdGFuZGFyZABw
aW5zAGJpYXMtaGlnaC1pbXBlZGFuY2UAYmlhcy1kaXNhYmxlAGxvdy1wb3dl
ci1kaXNhYmxlAGxvdy1wb3dlci1lbmFibGUAI2Nsb2NrLWNlbGxzAGZwZ2Et
bWdyACNpbnRlcnJ1cHQtY2VsbHMAaW50ZXJydXB0LWNvbnRyb2xsZXIAbnVt
X2NwdXMAbnVtX2ludGVycnVwdHMAI2lvbW11LWNlbGxzACNnbG9iYWwtaW50
ZXJydXB0cwBtbXUtbWFzdGVycwB0eC1maWZvLWRlcHRoAHJ4LWZpZm8tZGVw
dGgAcG93ZXItZG9tYWlucwBwaW5jdHJsLW5hbWVzAHBpbmN0cmwtMAB4bG54
LGJ1cy13aWR0aABpbnRlcnJ1cHQtbmFtZXMAeGxueCx0ei1ub25zZWN1cmUA
cGh5LWhhbmRsZQBwaHktbW9kZQB4bG54LHB0cC1lbmV0LWNsb2NrAGxvY2Fs
LW1hYy1hZGRyZXNzAHRpLHJ4LWludGVybmFsLWRlbGF5AHRpLHR4LWludGVy
bmFsLWRlbGF5AHRpLGZpZm8tZGVwdGgAdGksZHA4Mzg2Ny1yeGN0cmwtc3Ry
YXAtcXVpcmsAI2dwaW8tY2VsbHMAZ3Bpby1jb250cm9sbGVyAGVtaW8tZ3Bp
by13aWR0aABncGlvLW1hc2staGlnaABncGlvLW1hc2stbG93AHBpbmN0cmwt
MQBzY2wtZ3Bpb3MAc2RhLWdwaW9zAGNsb2NrLWZyZXF1ZW5jeQBncGlvLWxp
bmUtbmFtZXMAI2lvLWNoYW5uZWwtY2VsbHMAbGFiZWwAc2h1bnQtcmVzaXN0
b3IAdGVtcGVyYXR1cmUtc3RhYmlsaXR5AGZhY3RvcnktZm91dABjbG9jay1v
dXRwdXQtbmFtZXMAeGxueCxlbmFibGUtcHJvZmlsZQB4bG54LGVuYWJsZS10
cmFjZQB4bG54LG51bS1tb25pdG9yLXNsb3RzAHhsbngsZW5hYmxlLWV2ZW50
LWNvdW50AHhsbngsZW5hYmxlLWV2ZW50LWxvZwB4bG54LGhhdmUtc2FtcGxl
ZC1tZXRyaWMtY250AHhsbngsbnVtLW9mLWNvdW50ZXJzAHhsbngsbWV0cmlj
LWNvdW50LXdpZHRoAHhsbngsbWV0cmljcy1zYW1wbGUtY291bnQtd2lkdGgA
eGxueCxnbG9iYWwtY291bnQtd2lkdGgAeGxueCxtZXRyaWMtY291bnQtc2Nh
bGUAbXNpLWNvbnRyb2xsZXIAbXNpLXBhcmVudABpbnRlcnJ1cHQtbWFwLW1h
c2sAYnVzLXJhbmdlAGludGVycnVwdC1tYXAAeGxueCxiYXIwLWVuYWJsZQB4
bG54LGJhcjEtZW5hYmxlAHhsbngsYmFyMi1lbmFibGUAeGxueCxiYXIzLWVu
YWJsZQB4bG54LGJhcjQtZW5hYmxlAHhsbngsYmFyNS1lbmFibGUAeGxueCxw
Y2llLW1vZGUAbnVtLWNzAGlzLWR1YWwAc3BpLXJ4LWJ1cy13aWR0aABzcGkt
dHgtYnVzLXdpZHRoAHNwaS1tYXgtZnJlcXVlbmN5AGNhbGlicmF0aW9uAG52
bWVtLWNlbGxzAG52bWVtLWNlbGwtbmFtZXMAcmVzZXRzAHJlc2V0LW5hbWVz
ACNwaHktY2VsbHMAY2V2YSxwMC1jb21pbml0LXBhcmFtcwBjZXZhLHAwLWNv
bXdha2UtcGFyYW1zAGNldmEscDAtYnVyc3QtcGFyYW1zAGNldmEscDAtcmV0
cnktcGFyYW1zAGNldmEscDEtY29taW5pdC1wYXJhbXMAY2V2YSxwMS1jb213
YWtlLXBhcmFtcwBjZXZhLHAxLWJ1cnN0LXBhcmFtcwBjZXZhLHAxLXJldHJ5
LXBhcmFtcwBwaHktbmFtZXMAcGh5cwB4bG54LHR6LW5vbnNlY3VyZS1zYXRh
MAB4bG54LHR6LW5vbnNlY3VyZS1zYXRhMQB4bG54LGRldmljZV9pZABuby0x
LTgtdgB4bG54LG1pb19iYW5rAHRpbWVyLXdpZHRoAGN0cy1vdmVycmlkZQBw
b3J0LW51bWJlcgB4bG54LHVzYi1wb2xhcml0eQB4bG54LHVzYi1yZXNldC1t
b2RlAHNucHMscXVpcmstZnJhbWUtbGVuZ3RoLWFkanVzdG1lbnQAc25wcyxy
ZWZjbGtfZmxhZGoAc25wcyxlbmFibGVfZ3VjdGwxX3Jlc3VtZV9xdWlyawBz
bnBzLGVuYWJsZV9ndWN0bDFfaXBkX3F1aXJrAHNucHMseGhjaS1zdHJlYW0t
cXVpcmsAZHJfbW9kZQBzbnBzLHVzYjNfbHBtX2NhcGFibGUAbWF4aW11bS1z
cGVlZAB0aW1lb3V0LXNlYwByZXNldC1vbi10aW1lb3V0AGRtYS1jaGFubmVs
cwAjZG1hLWNlbGxzAHhsbngsbWF4LWxhbmVzAGRtYS1uYW1lcwBkbWFzAHhs
bngsZHAtc25kLXBjbQB4bG54LGRwLXNuZC1jb2RlYwBjbG9jay1hY2N1cmFj
eQBhdXRvcmVwZWF0AGxpbnV4LGNvZGUAd2FrZXVwLXNvdXJjZQBsaW51eCxk
ZWZhdWx0LXRyaWdnZXIAeGxueCxlZXByb20AYm9vdGFyZ3MAc3Rkb3V0LXBh
dGgAaW8tY2hhbm5lbHMAZXRoZXJuZXQwAGkyYzAAaTJjMQBzZXJpYWwwAHNl
cmlhbDEAc3BpMAA=

--8323329-1483615782-1623723347=:24906
Content-Type: text/plain; CHARSET=US-ASCII; NAME=log1
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106141915471.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=log1

KFhFTikgc21tdTogL3NtbXVAZmQ4MDAwMDA6IGQwOiBwMm1hZGRyIDB4MDAw
MDAwMDg3YmY5YTAwMA0KKFhFTikgRGF0YSBBYm9ydCBUcmFwLiBTeW5kcm9t
ZT0weDQNCihYRU4pIFdhbGtpbmcgSHlwZXJ2aXNvciBWQSAweDE0ZWQwMDAw
ZmJmOWZkMjAgb24gQ1BVMCB2aWEgVFRCUiAweDAwMDAwMDAwMDBmM2YwMDAN
CihYRU4pIDBUSFsweDBdID0gMHgwMDAwMDAwMDAwZjQyZjdmDQooWEVOKSAx
U1RbMHgzXSA9IDB4MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgQ1BVMDogVW5l
eHBlY3RlZCBUcmFwOiBEYXRhIEFib3J0DQooWEVOKSAtLS0tWyBYZW4tNC4x
Ni11bnN0YWJsZSAgYXJtNjQgIGRlYnVnPXkgIE5vdCB0YWludGVkIF0tLS0t
DQooWEVOKSBDUFU6ICAgIDANCihYRU4pIFBDOiAgICAgMDAwMDAwMDAwMDI0
Y2ZhNCBzbW11LmMjZmluZF9zbW11X21hc3RlcisweDgvMHgzYw0KKFhFTikg
TFI6ICAgICAwMDAwMDAwMDAwMjRkMTg4DQooWEVOKSBTUDogICAgIDAwMDAw
MDAwMDAzMDcxZjANCihYRU4pIENQU1I6ICAgMjAwMDAyNDkgTU9ERTo2NC1i
aXQgRUwyaCAoSHlwZXJ2aXNvciwgaGFuZGxlcikNCihYRU4pICAgICAgWDA6
IDE0ZWQwMDAwZmJmOWZkMjggIFgxOiAwMDAwODAwMGZiZmNiMDQwICBYMjog
MDAwMDgwMDBmYmZjYjJiMA0KKFhFTikgICAgICBYMzogMDAwMDAwMDAwMDJi
Mzg3MCAgWDQ6IDAwMDAwMDAwMDAwMDAwMDAgIFg1OiAwMDAwMDAwMDAwMDAw
MDAxDQooWEVOKSAgICAgIFg2OiAwMDAwMDAwMDAwMDAwMDAwICBYNzogMDAw
MDAwMDAwMDAwMDAwMCAgWDg6IDAwMDA4MDAwZmJmODFjMzANCihYRU4pICAg
ICAgWDk6IGZmZmZmZmZmZmZmZmZmZmEgWDEwOiAwMTAxMDEwMTAxMDEwMTAx
IFgxMTogMDAwMDAwMDAwMDAwMDAyMA0KKFhFTikgICAgIFgxMjogMDAwMDAw
MDAwMDAwMDAyMCBYMTM6IGZmMDAwMDAwMDAwMDAwMDAgWDE0OiAwMDAwMDAw
MDAwMDAwMDMwDQooWEVOKSAgICAgWDE1OiAwMDAwMDAwMDAwMDAwMDAwIFgx
NjogMDAwMDAwMDAwMDJiNTAwMCBYMTc6IDAwMDAwMDAwMDAyYjUwMDANCihY
RU4pICAgICBYMTg6IDAwMDAwMDAwMDAyYjYwMDAgWDE5OiAwMDAwODAwMGZi
ZmZjYmYwIFgyMDogMDAwMDAwMDAwMDJiMzg3OA0KKFhFTikgICAgIFgyMTog
MDAwMDgwMDBmYmZjYjA0MCBYMjI6IDAwMDA4MDAwZmJmOWZmZDAgWDIzOiAw
MDAwODAwMGZiZmNiMGQwDQooWEVOKSAgICAgWDI0OiAwMDAwODAwMGZiZjlk
MDAwIFgyNTogMDAwMDAwMDAwMDAwMDAwMSBYMjY6IDAwMDAwMDAwMDAwMDAw
MDENCihYRU4pICAgICBYMjc6IDAwMDAwMDAwMDAwMDAwMDAgWDI4OiAwMDAw
MDAwMDAwMDAwMDAwICBGUDogMDAwMDAwMDAwMDMwNzFmMA0KKFhFTikgDQoo
WEVOKSAgIFZUQ1JfRUwyOiA4MDAyMzU1OA0KKFhFTikgIFZUVEJSX0VMMjog
MDAwMDAwMDg3YmY5YTAwMA0KKFhFTikgDQooWEVOKSAgU0NUTFJfRUwyOiAz
MGNkMTgzZA0KKFhFTikgICAgSENSX0VMMjogMDAwMDAwMDAwMDAwMDAzYQ0K
KFhFTikgIFRUQlIwX0VMMjogMDAwMDAwMDAwMGYzZjAwMA0KKFhFTikgDQoo
WEVOKSAgICBFU1JfRUwyOiA5NjAwMDAwNA0KKFhFTikgIEhQRkFSX0VMMjog
MDAwMDAwMDAwMGY5MDEwMA0KKFhFTikgICAgRkFSX0VMMjogMTRlZDAwMDBm
YmY5ZmQyMA0KKFhFTikgDQooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJvbSBz
cD0wMDAwMDAwMDAwMzA3MWYwOg0KKFhFTikgICAgMDAwMDAwMDAwMDMwNzIy
MCAwMDAwMDAwMDAwMjRkN2IwIDAwMDA4MDAwZmJmOWQwMDAgMDAwMDAwMDBm
ZmZmZmZmMA0KKFhFTikgICAgMDAwMDgwMDBmYmZjYjBiMCAwMDAwMDAwODAw
MDAwMDAxIDAwMDAwMDAwMDAzMDcyYTAgMDAwMDAwMDAwMDI0ZWRjOA0KKFhF
TikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwMDAwMGZmZmZmZmYwIDAwMDA4
MDAwZmJmY2IwNDAgMDAwMDgwMDBmYmZjYjBhMA0KKFhFTikgICAgMDAwMDgw
MDBmYmZjYjBkMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDEg
MDAwMDAwMDAwMDAwMDAwMQ0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzMDcyYTAgMDAwMDAwMDAwMDI0
ZWQ5OA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwMDAwMDAwMzA3
NTUwIDAwMDAwMDAwMDAzMDcyZDAgMDAwMDAwMDAwMDJjOWY3Yw0KKFhFTikg
ICAgMDAwMDgwMDBmYmZjYjA0MCAwMDAwMDAwMDAwMzA3NTUwIDAwMDA4MDAw
ZmJmOWQwMDAgMDAwMDAwMDAwMDAwMDAwNQ0KKFhFTikgICAgMDAwMDAwMDAw
MDMwNzM5MCAwMDAwMDAwMDAwMmNhNDBjIDAwMDA4MDAwZmJmYzgwMzggMDAw
MDAwMDAwMDMwNzU1MA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAw
MDAwMDAwMDAwMDA1IDAwMDA4MDAwZmJmY2IwNDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgICAgMDAwMDgwMDBmYmZlMjEzMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAg
MDAwMDAwMDAwMDJkYThlOCAwMDAwMDAwMGZiZjZhMDkwIDAwMDAwMDAwMDAy
ZGE4ZDggMDAwMDAwMDAwMDJkOWI4MA0KKFhFTikgICAgMDAwMDAwMDAwMDMw
NzM4MCAwMDAwMDAwMGZkMDcwMDAwIDAwMDA4MDAwZmJmYzgwMzggMDAwMDAw
MDAwMDAzMDAwMA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwODAw
MGZiZjlkMDAwIDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAwMDJjYTMwOA0K
KFhFTikgICAgMDAwMDAwMDAwMDMwNzQ1MCAwMDAwMDAwMDAwMmNhNDBjIDAw
MDA4MDAwZmJmYzAwMDAgMDAwMDAwMDAwMDMwNzU1MA0KKFhFTikgICAgMDAw
MDgwMDBmYmY5ZDAwMCAwMDAwMDAwMDAwMDAwMDA1IDAwMDA4MDAwZmJmYzgw
MzggMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDgwMDBmYmZlMDBj
NCAwMDAwMDAwMDAwMDAwMDE1IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYThlOCAwMDAwMDAwMGZi
ZjZhMDkwIDAwMDAwMDAwMDAyZGE4ZDggMDAwMDAwMDAwMDJkOWI4MA0KKFhF
TikgICAgMDAwMDAwMDAwMDMwNzQ0MCAwMDAwMDAwMDAwMjAzZWM0IDAwMDA4
MDAwZmJmYzAwMDAgMDAwMDAwMDAwMDMwNzU1MA0KKFhFTikgICAgMDAwMDgw
MDBmYmY5ZDAwMCAwMDAwODAwMGZiZjlkMDAwIDAwMDAwMDAwMDAwMDAwMDUg
MDAwMDAwMDAwMDJjYTMwOA0KKFhFTikgICAgMDAwMDAwMDAwMDMwNzUxMCAw
MDAwMDAwMDAwMmNhYzcwIDAwMDAwMDAwMDAwMGEwOTAgMDAwMDAwMDAwMGUw
MDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYWFlOCAwMDAwODAwMGZiZjlk
MDAwIDAwMDAwMDAwMDAwMDAwMGYgMDAwMDAwMDAwMDAwMDAwNA0KKFhFTikg
ICAgMDAwMDAwMDAwMDJlODYwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDA4
ODAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMg0KKFhFTikgICAgMDAwMDAwMDAw
MDJkYThlOCAwMDAwMDAwMDAwMjJjZTUwIDAwMDAwMDAwMDAyZGE4ZDggMDAw
MDAwMDAwMDJkOWI4MA0KKFhFTikgICAgMDAwMDAwMDAwMDMwNzUwMCAwMDAw
MDAwMDAwMmJiY2M4IDAwMDAwMDAwMDAwMGEwOTAgMDAwMDAwMDAwMGUwMDAw
MA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYWFlOCAwMDAwODAwMGZiZjlkMDAw
IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAwMDJjYWM1OA0KKFhFTikgICAg
MDAwMDAwMDAwMDMwN2RmMCAwMDAwMDAwMDAwMmNlZjY0IDAwMDA4MDAwZmJm
OWQwMDAgMDAwMDAwMDAwMDJiNDc4MA0KKFhFTikgICAgMDAwMDAwMDAwMDM0
ODQzMCAwMDAwMDAwMDAwMDAwMDA0IDAwMDAwMDAwMDAyYTg0ZTAgMDAwMDAw
MDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwODAw
MGZiZjlkMDAwIDAwMDA4MDAwZmJmNjAwMDAgMDAwMDAwMDAwMDAwMDAwMA0K
KFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDIwMDAwMDAwIDAw
MDAwMDAwNDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhF
TikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgWGVu
IGNhbGwgdHJhY2U6DQooWEVOKSAgICBbPDAwMDAwMDAwMDAyNGNmYTQ+XSBz
bW11LmMjZmluZF9zbW11X21hc3RlcisweDgvMHgzYyAoUEMpDQooWEVOKSAg
ICBbPDAwMDAwMDAwMDAyNGQxODg+XSBzbW11LmMjZmluZF9zbW11X2Zvcl9k
ZXZpY2UrMHg0OC8weDk0IChMUikNCihYRU4pICAgIFs8MDAwMDAwMDAwMDI0
ZDdiMD5dIHNtbXUuYyNhcm1fc21tdV9hc3NpZ25fZGV2KzB4NTgvMHhiNDgN
CihYRU4pICAgIFs8MDAwMDAwMDAwMDI0ZWRjOD5dIGlvbW11X2Fzc2lnbl9k
dF9kZXZpY2UrMHg2NC8weGMwDQooWEVOKSAgICBbPDAwMDAwMDAwMDAyYzlm
N2M+XSBkb21haW5fYnVpbGQuYyNoYW5kbGVfbm9kZSsweDMxMC8weDllYw0K
KFhFTikgICAgWzwwMDAwMDAwMDAwMmNhNDBjPl0gZG9tYWluX2J1aWxkLmMj
aGFuZGxlX25vZGUrMHg3YTAvMHg5ZWMNCihYRU4pICAgIFs8MDAwMDAwMDAw
MDJjYTQwYz5dIGRvbWFpbl9idWlsZC5jI2hhbmRsZV9ub2RlKzB4N2EwLzB4
OWVjDQooWEVOKSAgICBbPDAwMDAwMDAwMDAyY2FjNzA+XSBjb25zdHJ1Y3Rf
ZG9tMCsweDQxMC8weDRiYw0KKFhFTikgICAgWzwwMDAwMDAwMDAwMmNlZjY0
Pl0gc3RhcnRfeGVuKzB4YmU4LzB4Y2QwDQooWEVOKSAgICBbPDAwMDAwMDAw
MDAyMDAxYTA+XSBhcm02NC9oZWFkLm8jcHJpbWFyeV9zd2l0Y2hlZCsweGMv
MHgxYw0KKFhFTikgDQooWEVOKSANCihYRU4pICoqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIFBhbmljIG9uIENQVSAw
Og0KKFhFTikgQ1BVMDogVW5leHBlY3RlZCBUcmFwOiBEYXRhIEFib3J0DQoo
WEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
DQooWEVOKSANCihYRU4pIFJlYm9vdCBpbiBmaXZlIHNlY29uZHMuLi4NCg==

--8323329-1483615782-1623723347=:24906
Content-Type: application/x-sh; name=qemu-run-zynqmp-xilinx-xen.sh
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106141921270.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: attachment; filename=qemu-run-zynqmp-xilinx-xen.sh

IyEvYmluL3NoCiMKIyBCdWlsZCBYaWxpbnggUUVNVS4KIyBCdWlsZCBYaWxp
bnggUUVNVSBkZXZpY2UtdHJlZXMuCiMgRWRpdCB0aGUgUUVNVSBhbmQgSFdf
RFRCIHZhcmlhbGJlcyBpbiB0aGlzIHNjcmlwdC4KIwojIEJ1aWxkIHRoZSBr
ZXJuZWwKIyBDcmVhdGUgeW91ciByb290ZnMgbmFtZWQgeGVuLXJvb3Rmcy5j
cGlvLmd6CiMgUnVuIHRoaXMgc2NyaXB0CiMKCnNldCAteAoKUUVNVT0vbG9j
YWwvcmVwb3MvcWVtdS14aWxpbngvYWFyY2g2NC1zb2Z0bW11L3FlbXUtc3lz
dGVtLWFhcmNoNjQKSFdfRFRCPS9sb2NhbC9yZXBvcy9xZW11LWRldmljZXRy
ZWVzL0xBVEVTVC9TSU5HTEVfQVJDSC96Y3UxMDItYXJtLmR0YgoKWEVOPS9w
YXRoL3hlbgpLRVJORUw9L3BhdGgveGVuLUltYWdlCkRUQj0vcGF0aC94ZW4u
ZHRiCklOSVRSRD0vcGF0aC94ZW4tcm9vdGZzLmNwaW8uZ3oKCiMgUGVuZGlu
ZyBhIGZpeCBpbiBRRU1VLgpHSUNfU0VUVVA9Ii1kZXZpY2UgbG9hZGVyLGFk
ZHI9MHhmOTAyZjAwMCxkYXRhPTB4MDAwMDAxZTksZGF0YS1sZW49NCxhdHRy
cy1zZWN1cmU9b24iClJFU0VUX0FQVT0iLWRldmljZSBsb2FkZXIsYWRkcj0w
eGZkMWEwMTA0LGRhdGE9MHg4MDAwMDAwZSxkYXRhLWxlbj00IgoKJHtRRU1V
fSAtTSBhcm0tZ2VuZXJpYy1mZHQsbGludXg9b24gLW0gMkcgLWh3LWR0YiAk
e0hXX0RUQn0JXAoJLWR0YiAke0RUQn0JCQkJCQlcCgktc2VyaWFsIG1vbjpz
dGRpbwkJCQkJXAoJLWRpc3BsYXkgbm9uZQkJCQkJCVwKCS1rZXJuZWwgJHtY
RU59CQkJCQkJXAoJLWRldmljZSBsb2FkZXIsZmlsZT0ke0tFUk5FTH0sYWRk
cj0weDQwMDAwMDAwCQlcCgktZGV2aWNlIGxvYWRlcixmaWxlPSR7SU5JVFJE
fSxhZGRyPTB4NjAwMDAwMDAJCVwKCS1uaWMgdXNlciAtbmljIHVzZXIgLW5p
YyB1c2VyIC1uaWMgdXNlcgkJCVwKCSR7R0lDX1NFVFVQfQkJCQkJCVwKCSR7
UkVTRVRfQVBVfQkJCQkJCVwKCSQqCg==

--8323329-1483615782-1623723347=:24906--


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 04:12:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 04:12:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141879.261949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt0R7-0008Fx-Lk; Tue, 15 Jun 2021 04:12:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141879.261949; Tue, 15 Jun 2021 04:12:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt0R7-0008Fq-I0; Tue, 15 Jun 2021 04:12:25 +0000
Received: by outflank-mailman (input) for mailman id 141879;
 Tue, 15 Jun 2021 04:12:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt0R5-0008ED-7C
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 04:12:23 +0000
Received: from mail-qk1-x732.google.com (unknown [2607:f8b0:4864:20::732])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05d75952-88cc-43bc-96f8-5116b466ac55;
 Tue, 15 Jun 2021 04:12:22 +0000 (UTC)
Received: by mail-qk1-x732.google.com with SMTP id d196so35582471qkg.12
 for <xen-devel@lists.xenproject.org>; Mon, 14 Jun 2021 21:12:22 -0700 (PDT)
Received: from mail-qt1-f181.google.com (mail-qt1-f181.google.com.
 [209.85.160.181])
 by smtp.gmail.com with ESMTPSA id h12sm11014027qtn.44.2021.06.14.21.12.20
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Jun 2021 21:12:21 -0700 (PDT)
Received: by mail-qt1-f181.google.com with SMTP id 93so10257961qtc.10
 for <xen-devel@lists.xenproject.org>; Mon, 14 Jun 2021 21:12:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05d75952-88cc-43bc-96f8-5116b466ac55
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=YDA/6rYFbIzGwN3DJJ3v30Ilg60B464lccTuq9RZZyA=;
        b=fMG07OU4hKhOT0Ejr3qOHcrU08EN3G1+RpniZ8liKpsJESj8tYmvpPMQZbf905Arqa
         jTNjk7/cw8NTc05aXCrTOb4W4H63TAdnEqedfWvkUqde8aQCNzpKVfF+WTvNV5ptv2qS
         TsIum45Bdbkm1pBXJPvJDobZEXMxEtOKj24Zc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=YDA/6rYFbIzGwN3DJJ3v30Ilg60B464lccTuq9RZZyA=;
        b=LxcUUqD0dkI3B55dkNuH608U9NIZyLZrASDEna64bSYRHE1WpnIC712fipHgYf6Vsq
         A8z+yOtqa5qo5xTmPgcKRdy00zD0SevDagxgdEuutdc6AMhZfL9bRju5lp3nsX0t8OXX
         tCxUgSUMAQ3iVVTtQMiEafvPxkGsWhRpK3WGtRay4Qd4/b9CoOXoCLrqkfp4/GTTOco/
         rn28ybVBrxYIhxsKQz1ycCC7Er2hXlSF8DSPNlIIy9xfjki/2CP31GWL+vVTDgTXpMC0
         EwzbtbunSttX/k1kg9iek6Ycu/m4wAQK8XGkK3DAf0vbsJM6RlgaYGYGxpKQvQg3QV4B
         2PZw==
X-Gm-Message-State: AOAM531LSRgxqCsiw8olkPVg274OqOOaBNtP/Bn1kT7JgdBiYhgrRT/R
	Qr1lb9ybJq2PAYg3pqJslmABaE3ZU3nMhg==
X-Google-Smtp-Source: ABdhPJw1tsthZg+CGVRrKfhuqpknIqA2AIzt9ELS+wdJnbKf9WsA8tZO3gF8aSYMvGcgHTHXpNFcYw==
X-Received: by 2002:ae9:c218:: with SMTP id j24mr20030695qkg.94.1623730341641;
        Mon, 14 Jun 2021 21:12:21 -0700 (PDT)
X-Received: by 2002:a02:384b:: with SMTP id v11mr19686288jae.90.1623729977741;
 Mon, 14 Jun 2021 21:06:17 -0700 (PDT)
MIME-Version: 1.0
References: <20210611152659.2142983-1-tientzu@chromium.org>
 <20210611152659.2142983-2-tientzu@chromium.org> <20210614061644.GA28343@lst.de>
In-Reply-To: <20210614061644.GA28343@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 15 Jun 2021 12:06:06 +0800
X-Gmail-Original-Message-ID: <CALiNf29cE-T7xf+nUZF2pjT8osaXj+wb4MibtdSkAU_K13wuMw@mail.gmail.com>
Message-ID: <CALiNf29cE-T7xf+nUZF2pjT8osaXj+wb4MibtdSkAU_K13wuMw@mail.gmail.com>
Subject: Re: [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun 14, 2021 at 2:16 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> > +     spin_lock_init(&mem->lock);
> > +     for (i = 0; i < mem->nslabs; i++) {
> > +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +             mem->slots[i].alloc_size = 0;
> > +     }
> > +
> > +     if (memory_decrypted)
> > +             set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> > +     memset(vaddr, 0, bytes);
>
> We don't really need to do this call before the memset.  Which means we
> can just move it to the callers that care instead of having a bool
> argument.
>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks for the review. I will wait a few more days for other reviews, then
send v10 addressing the comments on this and the other patches.
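For context, the refactor Christoph suggests above — dropping the bool argument and letting the callers that care do the decryption themselves, before calling the common init path — can be sketched in a standalone way roughly as below. This is only an illustration of the shape of the change: the struct layout, `io_tlb_offset()`, and the caller name are simplified stand-ins, not the actual kernel definitions, and the real `set_memory_decrypted()` call is shown as a comment since it has no userspace equivalent.

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Illustrative stand-ins for the kernel types; not the real swiotlb layout. */
struct io_tlb_slot {
	unsigned long orig_addr;
	size_t alloc_size;
	int list;
};

struct io_tlb_mem {
	size_t nslabs;
	struct io_tlb_slot *slots;
};

#define IO_TLB_SEGSIZE    128
#define INVALID_PHYS_ADDR (~0UL)

/* Simplified offset helper, standing in for the kernel's io_tlb_offset(). */
static int io_tlb_offset(size_t i)
{
	return (int)(i % IO_TLB_SEGSIZE);
}

/*
 * Common init path: initializes the slot bookkeeping and zeroes the
 * buffer.  Note there is no "memory_decrypted" bool threaded through.
 */
static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, void *vaddr,
				    size_t bytes)
{
	for (size_t i = 0; i < mem->nslabs; i++) {
		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
		mem->slots[i].alloc_size = 0;
	}
	memset(vaddr, 0, bytes);
}

/*
 * A caller that needs decrypted memory performs that step itself, up
 * front, instead of passing a flag down.  (Hypothetical caller name.)
 */
static void swiotlb_init_decrypted_caller(struct io_tlb_mem *mem, void *vaddr,
					  size_t bytes)
{
	/* In the kernel this would be:
	 *   set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
	 * and it can safely precede the memset, as noted in the review.
	 */
	swiotlb_init_io_tlb_mem(mem, vaddr, bytes);
}
```

The point of the shape is that callers without an encrypted-memory concern call `swiotlb_init_io_tlb_mem()` directly, so the common path stays free of a rarely-used bool parameter.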


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 05:39:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 05:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141890.261971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt1ml-00085j-4C; Tue, 15 Jun 2021 05:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141890.261971; Tue, 15 Jun 2021 05:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt1ml-00085c-1I; Tue, 15 Jun 2021 05:38:51 +0000
Received: by outflank-mailman (input) for mailman id 141890;
 Tue, 15 Jun 2021 05:38:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt1mk-00085R-9u; Tue, 15 Jun 2021 05:38:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt1mk-0003TK-34; Tue, 15 Jun 2021 05:38:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt1mj-0007VM-S4; Tue, 15 Jun 2021 05:38:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt1mj-0007Om-Rb; Tue, 15 Jun 2021 05:38:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P03ToIk9ICWQya5YiwNxl1k1wTGXn7rYw+vvBBAUUM0=; b=EWY+W/mWrA/Z6y6zju1pa2lS+a
	NZQ1sy3c/orKgjrW2PXotFWhR5CeFT9m85b00P179DAKMXApNVq6VUXOqPMfWV8PKTlWkE6KPeBvC
	9tw0JFp+r9VppAmLACw8lVtLIMGDcYHUUMBmc5LvzPV9hZCYRowid4vF3T0BmW01OHxg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162821-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162821: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 05:38:49 +0000

flight 162821 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162821/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   11 days
Failing since        162368  2021-06-04 15:42:59 Z   10 days   20 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    5 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 06:08:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 06:08:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141898.261986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2FN-000312-HB; Tue, 15 Jun 2021 06:08:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141898.261986; Tue, 15 Jun 2021 06:08:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2FN-00030v-DG; Tue, 15 Jun 2021 06:08:25 +0000
Received: by outflank-mailman (input) for mailman id 141898;
 Tue, 15 Jun 2021 06:08:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fd+K=LJ=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lt2FL-00030o-O0
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 06:08:23 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.78]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 906bb7b1-cf1e-4f24-9bff-6a39cd948d07;
 Tue, 15 Jun 2021 06:08:22 +0000 (UTC)
Received: from DBBPR09CA0006.eurprd09.prod.outlook.com (2603:10a6:10:c0::18)
 by DB9PR08MB7024.eurprd08.prod.outlook.com (2603:10a6:10:2cc::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Tue, 15 Jun
 2021 06:08:15 +0000
Received: from DB5EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:c0:cafe::8b) by DBBPR09CA0006.outlook.office365.com
 (2603:10a6:10:c0::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.20 via Frontend
 Transport; Tue, 15 Jun 2021 06:08:15 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT052.mail.protection.outlook.com (10.152.21.82) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Tue, 15 Jun 2021 06:08:14 +0000
Received: ("Tessian outbound 9d3d496fabe8:v93");
 Tue, 15 Jun 2021 06:08:14 +0000
Received: from 8ef54cfc1d8d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6B30C6FE-8B39-46B9-A80D-39D0E1048434.1; 
 Tue, 15 Jun 2021 06:08:06 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8ef54cfc1d8d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 15 Jun 2021 06:08:06 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB3728.eurprd08.prod.outlook.com (2603:10a6:803:bf::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.23; Tue, 15 Jun
 2021 06:08:00 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918%7]) with mapi id 15.20.4219.025; Tue, 15 Jun 2021
 06:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 906bb7b1-cf1e-4f24-9bff-6a39cd948d07
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kmvLMnUyXKkeR1agEijox6oairFBDgNT7zNDdvTaZss=;
 b=S623XQ7tp26Lc0+X/oBVj0v8VGN53WzOW31MXXmFVQvy+U6tIY1NHLHeToaeWyVomunmtpfndfA5q5umlM3/s6us+84MMKuIY4QSaPAvTsUAvHLNyVeTTy4RIOJ2beGFfnpzsExizi0mfN2yQokGGeA0v5evdQj2Ql1lbaO/k6A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R5FgR+lT2ndvnlB70+E7UDahJhpAuf1FQQrDQQoHkPTVjKfmBacI/gWo7s6U79QfyvfZMioevm3lz4bqrU9irI9annkajcPbaupfEDA79/9zJmAJD+pagUGCSLg/x16WheH4VwOkNsbPEB4kDGq4OwzphID7jWCSeNGtGwJ9bl8/IDSqsIwIie6ep5kA9Ikn70XTXd1wHBB5BR3IUkYjJEPPMAR2ptMt+WtEFM4h/BFNWVThKfXmw5Zcsi2VJrVCVFwjItJYB7ga7YCW0Pq9Ji1pPKxBzD0X44MA+0WmdP5T80mk8RLDvg4UKXuLi9QxY6aGc0sbXHyyZhIByOp84Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kmvLMnUyXKkeR1agEijox6oairFBDgNT7zNDdvTaZss=;
 b=Pj1lTERb8d1E+kUlMx7ILnLDZGB0lhZBgWwFNhN6kNEmbo4ZLEEFV7S9a8yamm/cGcbInsidSAkMjZlqO6Ea1B0kfO9RsNQgfWOCkR2CTF7oHoqT1IONdOBpiKUv9uP5ZwB+R5+CaP356TtaOdfXnhBmHB9a7z9MrDD9q3xsrYWpbwfyHllEnCC9c89nSGGU8tvST/XKwAEIym6o2wqnjFW1yiDlZFfHXvaix4XtiIU2IPWx05gx5muoSeblgKCmzFWT2RIcKbRxlC4/IoE84byqGyQpokADiD6PVUU2PfIrY5kNn05r0AKNbIq7n80c7HGq4WniEDenf1/4V3gXwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kmvLMnUyXKkeR1agEijox6oairFBDgNT7zNDdvTaZss=;
 b=S623XQ7tp26Lc0+X/oBVj0v8VGN53WzOW31MXXmFVQvy+U6tIY1NHLHeToaeWyVomunmtpfndfA5q5umlM3/s6us+84MMKuIY4QSaPAvTsUAvHLNyVeTTy4RIOJ2beGFfnpzsExizi0mfN2yQokGGeA0v5evdQj2Ql1lbaO/k6A=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien.grall.oss@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, nd
	<nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Topic: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Date: Tue, 15 Jun 2021 06:08:00 +0000
Message-ID:
 <VE1PR08MB52152038F1366DA9B8A7D3D8F7309@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
 <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>
 <BAC8BC8D-9CD6-4857-88C0-7DCE9267EF0E@arm.com>
 <e3a81b21-fd11-852c-aed7-25e71e4b5539@xen.org>
In-Reply-To: <e3a81b21-fd11-852c-aed7-25e71e4b5539@xen.org>
Accept-Language: en-US
Content-Language: en-US

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Wednesday, June 9, 2021 6:47 PM
> To: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall.oss@gmail.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>
> On 09/06/2021 10:56, Bertrand Marquis wrote:
> > Hi All,
>
> Hi,
>
> >> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org
> >> <mailto:julien@xen.org>> wrote:
> >> Feel free to propose one. I suggested to use /reserved-memory because
> >> this is the approach that makes the most sense to me (see my reply above).
> >>
> >> TBH, even after your explanation, I am still a bit puzzled into why
> >> /reserved-memory cannot be leveraged to exclude domain region from
> >> the hypervisor allocator.
> >
> > I really tend to think that the original solution from Penny is for
> > now the easiest and simplest to document.
>
> I can live with Penny's solution so long we don't duplicate the parsing and we
> don't create new datastructure in Xen for the new type of reserved memory.
> However...
>

Just to confirm my understanding: you are worried not only about the code
duplication introduced in dt_unreserved_regions, but also about having to
introduce another path in early_scan_node to parse my first implementation,
"xen,static-mem = <...>", right?

On the code duplication, I can think of a way to extract the common code to
fix it. But as for introducing another parsing path, as far as I can tell it
is inevitable if we do not re-use /reserved-memory. ;/

> > In the long term, using directly memory and giving in it the address
> > range directly is the most natural solution but that would clash with
> > the current usage for it.
>
> ... we are already going to have quite some churn to support the system
> device-tree. So I don't want yet another binding to be invented in a few
> months time.
>
> IOW, the new binding should be a long term solution rather than a temporary
> one to fill the gap until we agree on what you call "domain v2".
>
> Cheers,
>
> --
> Julien Grall

Cheers,

Penny


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 06:12:34 2021
From: Wei Chen <Wei.Chen@arm.com>
To: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
CC: "will@kernel.org" <will@kernel.org>, "jean-philippe@linaro.org"
	<jean-philippe@linaro.org>, Julien Grall <julien@xen.org>, Andre Przywara
	<Andre.Przywara@arm.com>, Marc Zyngier <maz@kernel.org>,
	"julien.thierry.kdev@gmail.com" <julien.thierry.kdev@gmail.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>
Subject: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
Thread-Topic: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
Date: Tue, 15 Jun 2021 06:12:08 +0000
Message-ID:
 <DB9PR08MB6857B375207376D8320AFBA89E309@DB9PR08MB6857.eurprd08.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US

Hi,

I have some thoughts on using the kvmtool Virtio implementation
for Xen. I have copied my markdown file into this email. If you have
time, could you please help me review it?

Any feedback is welcome!

# Some thoughts on using kvmtool Virtio for Xen
## Background

The Xen community is working on adding VIRTIO capability to Xen, and we are
working on a VIRTIO backend for Xen. Apart from QEMU, which supports
virtio-net for x86 Xen, there is currently no VIRTIO backend that supports
Xen. Because of the community's strong preference for a solution outside of
QEMU, we want to find a lightweight VIRTIO backend to support Xen.

Our idea is to reuse the virtio implementation of kvmtool for Xen. We know
there was some agreement that kvmtool will not try to be a full QEMU
alternative, so we have written up the following two proposals for the
communities to discuss in public:

## Proposals
### 1. Introduce a new "dm-only" command
1. Introduce a new "dm-only" command to provide a pure device model mode. In
   this mode, kvmtool only handles IO requests; VM creation and initialization
   are bypassed.

    * We will rework the interface between the virtio code and the rest of
    kvmtool to use just the minimal set of information. In the end, there
    would be MMIO accesses and shared memory that control the device model,
    so that could be abstracted to do away with any KVM specifics at all. If
    this is workable, we will send a first set of patches to introduce this
    interface and adapt the existing kvmtool to it. Later we can then add
    Xen support on top of it.

    For the Xen support, we will detect the presence of the Xen libraries,
    and also allow people to ignore them, as kvmtool does with optional
    features like libz or libaio.

    Ideally, we want to move all code relying on the Xen libraries into a
    set of new files, so that these files are only compiled when the Xen
    libraries are detected. If we cannot decouple this code completely, we
    may introduce a few #ifdefs to protect it.

    If KVM or other VMMs do not need the "dm-only" mode, or if "dm-only"
    cannot work without the Xen libraries, we will make the "dm-only"
    command depend on the presence of the Xen libraries.

    A normal build (without the Xen libraries installed) would then create
    a binary as close as possible to the current code, and only people who
    have the Xen libraries installed would ever generate a "dm-only"
    capable kvmtool.
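A minimal sketch of the guarding described above. `CONFIG_HAS_XEN` and
`xen_dm_attach` are hypothetical names, standing in for whatever the kvmtool
build system and the Xen glue code would actually define when the Xen
libraries are detected:

```c
#include <stdio.h>

/* CONFIG_HAS_XEN is a placeholder: the build system would define it
 * when the Xen libraries are detected. */
#ifdef CONFIG_HAS_XEN
int dm_only_cmd(int argc, const char **argv)
{
    /* Real code would attach to an existing Xen domain and serve IO;
     * xen_dm_attach is a hypothetical helper in the Xen-only files. */
    return xen_dm_attach(argc, argv);
}
#else
int dm_only_cmd(int argc, const char **argv)
{
    (void)argc;
    (void)argv;
    fprintf(stderr, "dm-only: kvmtool was built without Xen support\n");
    return -1;
}
#endif
```

Built without the Xen libraries, the stub simply refuses to run, so the
default binary stays as close as possible to the current code.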

### 2. Abstract kvmtool virtio implementation as a library
1. Add a kvmtool Makefile target to generate a virtio library. In this
   scenario, not just Xen but any other project that wants to provide a
   userspace virtio backend service can link against this virtio library.
   These users would benefit from kvmtool's VIRTIO implementation and would
   participate in improvements, upgrades, and maintenance of the VIRTIO
   library.

    * In this case, the Xen-specific code would not be upstreamed to the
      kvmtool repo; it would instead be a natural part of the Xen repo,
      in xen/tools, or maintained in some other repo.

      We would have a completely separate VIRTIO backend for Xen that
      just links against kvmtool's VIRTIO library.

    * The main changes to kvmtool would be:
        1. Rework the interface between the virtio code and the rest of
           kvmtool, to abstract the whole virtio implementation into a
           library.
        2. Modify the current build system to add a new virtio library
           target.

## Reworking the interface is the common work for both proposals
**In kvmtool, a virtual device can be separated into three layers:**

- A device type layer to provide an abstraction
    - Provides an interface to collect and store device configuration.
      Using the block device as an example, kvmtool uses disk_image to
      collect and store disk parameters such as:
        - backend image format: raw, qcow or a block device
        - backend block device or file image path
        - readonly, direct, etc.
    - Provides operations to interact with real backend devices or services:
        - backend device operations:
            - block device operations
            - raw image operations
            - qcow image operations
- Hypervisor interfaces
    - Guest memory mapping and unmapping interfaces
    - Virtual device registration interfaces
        - MMIO/PIO space registration
        - IRQ registration
    - Virtual IRQ injection interface
    - Hypervisor eventfd interface
- An implementation layer to handle guest IO requests
    - Kvmtool provides virtual devices for the guest. Some virtual devices
      have two kinds of implementations:
        - VIRTIO implementation
        - Real hardware emulation

For example, the kvmtool console has two kinds of implementations: a virtio
console and an emulated 8250 serial port. These implementations depend on
the device type parameters to create devices, and on the device type ops to
forward data from/to the real device. The implementations invoke the
hypervisor interfaces to map/unmap resources and to notify the guest.
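As a rough illustration of this three-layer split (all names below are
hypothetical, not actual kvmtool symbols), the implementation layer only
ever reaches the backend through a device-type ops table, so the backend can
be swapped without touching the virtio framing code:

```c
#include <stddef.h>
#include <string.h>

/* Device type layer: an opaque backend handle plus an ops table. */
struct console_type_ops {
    size_t (*out)(void *backend, const char *buf, size_t len); /* to terminal */
    size_t (*in)(void *backend, char *buf, size_t len);        /* from terminal */
};

struct console_type {
    void *backend;                 /* opaque backend handle */
    struct console_type_ops ops;
};

/* Implementation layer: knows virtio framing, not the backend details.
 * A real implementation would pop a virtqueue buffer here; this sketch
 * just forwards the payload through the device-type ops. */
size_t virtio_console_tx(struct console_type *con, const char *data, size_t len)
{
    return con->ops.out(con->backend, data, len);
}

/* A trivial in-memory backend standing in for a real terminal. */
struct mem_term {
    char buf[128];
    size_t used;
};

size_t mem_term_out(void *backend, const char *buf, size_t len)
{
    struct mem_term *t = backend;

    if (len > sizeof(t->buf) - t->used)
        len = sizeof(t->buf) - t->used;
    memcpy(t->buf + t->used, buf, len);
    t->used += len;
    return len;
}
```

Wiring `mem_term_out` into `console_type.ops.out` is all it takes to retarget
the device; the virtio side never sees the backend type.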

In the current kvmtool code, the boundaries between these three layers are
relatively clear, but there are a few pieces of code that are somewhat
interleaved, for example:
- The virtio_blk__init(...) function uses disk_image directly. This data is
  kvmtool specific. If we want to make the VIRTIO implementation hypervisor
  agnostic, this kind of code should be moved elsewhere. Alternatively, we
  keep the code from virtio_blk__init_one(...) in the virtio block
  implementation, but keep virtio_blk__init(...) in the kvmtool-specific
  part of the code.

However, in the current VIRTIO device creation and data handling paths, both
the device type and the hypervisor API used are exclusive to kvmtool and
KVM. If we tried to use the current VIRTIO implementation with other device
models and hypervisors, it would be unlikely to work properly.

So the major part of reworking the interface is decoupling the VIRTIO
implementation from kvmtool and KVM.

**Introduce some intermediate data structures to do the decoupling:**
1. Introduce intermediate type data structures such as `virtio_disk_type`,
   `virtio_net_type`, `virtio_console_type`, etc. These data structures
   will be the standard device type interfaces between the virtio device
   implementation and the hypervisor. Using virtio_disk_type as an example:
    ~~~~
    struct virtio_disk_type {
        /*
         * Essential configuration for a virtio block device, which in
         * kvmtool can be taken from disk_image. Other hypervisors' device
         * models can also use this data structure to pass the parameters
         * necessary to create a virtio block device.
         */
        struct virtio_blk_cfg vblk_cfg;
        /*
         * Virtio block device MMIO address and IRQ line. These two members
         * are optional. If the hypervisor provides allocate_mmio_space and
         * allocate_irq_line capabilities and the device model does not set
         * these two fields, the virtio block implementation will use the
         * hypervisor APIs to allocate the MMIO address and IRQ line. If
         * these two fields are configured, the virtio block implementation
         * will use them directly.
         */
        paddr_t addr;
        uint32_t irq;
        /*
         * In kvmtool, these ops connect to the disk_image APIs. Other
         * hypervisors' device models should provide similar APIs for these
         * ops to interact with the real backend device.
         */
        struct disk_type_ops {
            ssize_t (*read)(/* ... */);
            ssize_t (*write)(/* ... */);
            int (*flush)(/* ... */);
            int (*wait)(/* ... */);
        } ops;
    };
    ~~~~

2. Introduce an intermediate hypervisor data structure that provides a set
   of standard hypervisor API interfaces. In the virtio implementation, the
   KVM-specific APIs, like kvm__register_mmio, will not be invoked directly;
   the virtio implementation will use these interfaces to access the
   hypervisor-specific APIs. For example, `struct vmm_impl`:
    ~~~~
    struct vmm_impl {
        /*
         * Pointer to the real hypervisor handle, e.g. `struct kvm *kvm`.
         * This pointer will be passed to the vmm ops.
         */
        void *vmm;
        int      (*allocate_irq_line)(void *vmm, ...);
        paddr_t  (*allocate_mmio_space)(void *vmm, ...);
        int      (*register_mmio)(void *vmm, ...);
        void    *(*map_guest_page)(void *vmm, ...);
        void     (*unmap_guest_page)(void *vmm, ...);
        void     (*virtual_irq_inject)(void *vmm, ...);
    };
    ~~~~

3. Once decoupled from kvmtool, any hypervisor can use the standard
   `vmm_impl` and `virtio_xxxx_type` interfaces to invoke the standard
   virtio implementation interfaces and create virtio devices.
    ~~~~
    /* Prepare the VMM interface */
    struct vmm_impl *vmm = ...;
    vmm->register_mmio = kvm__register_mmio;
    /* kvm__map_guest_page is a wrapper around guest_flat_to_host */
    vmm->map_guest_page = kvm__map_guest_page;
    ...

    /* Prepare the virtio_disk_type */
    struct virtio_disk_type *vdisk_type = ...;
    vdisk_type->vblk_cfg.capacity = disk_image->size / SECTOR_SIZE;
    ...
    vdisk_type->ops->read = disk_image__read;
    vdisk_type->ops->write = disk_image__write;
    ...

    /* Invoke the VIRTIO implementation API to create a virtio block device */
    virtio_blk__init_one(vmm, vdisk_type);
    ~~~~
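The ops table in step 1 is what lets the generic virtio-blk code stay
backend-agnostic. Below is a minimal, compilable sketch of that
indirection; the toy backend and every name in it are invented
stand-ins for illustration, not the proposed API:

```c
#include <stddef.h>
#include <string.h>

/* Toy backend standing in for kvmtool's disk_image. */
struct toy_backend {
    char data[16];
};

/* Simplified disk_type_ops: only read is shown. */
struct disk_type_ops {
    long (*read)(void *backend, void *buf, size_t len, size_t off);
};

/* Backend-specific read, analogous to disk_image__read. */
static long toy_read(void *backend, void *buf, size_t len, size_t off)
{
    struct toy_backend *b = backend;

    if (off + len > sizeof(b->data))
        return -1;
    memcpy(buf, b->data + off, len);
    return (long)len;
}

/*
 * Generic virtio-blk request path: it knows nothing about the
 * backend, only the ops table it is handed.
 */
static long virtio_blk_handle_read(struct disk_type_ops *ops,
                                   void *backend, void *buf,
                                   size_t len, size_t off)
{
    return ops->read(backend, buf, len, off);
}
```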
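The dispatch through `vmm_impl` in step 2 works the same way on the
hypervisor side: generic virtio code only ever calls through the
table, never a KVM symbol directly. A toy sketch of that dispatch,
with two invented handlers standing in for kvm__register_mmio and a
hypothetical Xen equivalent:

```c
/* Simplified vmm_impl: one op is enough to show the dispatch. */
struct vmm_impl {
    void *vmm;                    /* real hypervisor handle */
    int (*register_mmio)(void *vmm, unsigned long addr);
};

/* Two toy "hypervisor" backends (names are illustrative only). */
static int fake_kvm_register_mmio(void *vmm, unsigned long addr)
{
    (void)vmm;
    return addr != 0 ? 0 : -1;
}

static int fake_xen_register_mmio(void *vmm, unsigned long addr)
{
    (void)vmm;
    return addr != 0 ? 0 : -1;
}

/* Generic virtio code: hypervisor-agnostic, calls through the table. */
static int virtio_mmio_setup(struct vmm_impl *vmm, unsigned long addr)
{
    return vmm->register_mmio(vmm->vmm, addr);
}
```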
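Step 3's `virtio_blk__init_one` can then be purely generic. The sketch
below shows what its body might do under this proposal, trimmed to the
addr/irq fallback described in step 1; the field set, the allocator
signatures, and the toy allocators are assumptions for illustration:

```c
#include <stdint.h>

typedef uint64_t paddr_t;

/* Trimmed-down versions of the two interface structures: only the
 * fields this sketch touches are kept. */
struct vmm_impl {
    void *vmm;
    paddr_t (*allocate_mmio_space)(void *vmm);
    uint32_t (*allocate_irq_line)(void *vmm);
};

struct virtio_disk_type {
    paddr_t addr;   /* 0 means "not configured by the device model" */
    uint32_t irq;   /* 0 means "not configured by the device model" */
};

/* Hypothetical body of virtio_blk__init_one: honour values set by the
 * device model, fall back to the hypervisor allocators otherwise. */
static int virtio_blk__init_one(struct vmm_impl *vmm,
                                struct virtio_disk_type *disk)
{
    if (disk->addr == 0)
        disk->addr = vmm->allocate_mmio_space(vmm->vmm);
    if (disk->irq == 0)
        disk->irq = vmm->allocate_irq_line(vmm->vmm);
    return (disk->addr != 0 && disk->irq != 0) ? 0 : -1;
}

/* Toy allocators standing in for real hypervisor capabilities. */
static paddr_t toy_alloc_mmio(void *vmm) { (void)vmm; return 0xd0000000; }
static uint32_t toy_alloc_irq(void *vmm) { (void)vmm; return 33; }
```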

VIRTIO block device simple flow before reworking interface:
https://drive.google.com/file/d/1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX/view?usp=sharing
![image](https://drive.google.com/uc?export=view&id=1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX)

VIRTIO block device simple flow after reworking interface:
https://drive.google.com/file/d/1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL/view?usp=sharing
![image](https://drive.google.com/uc?export=view&id=1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL)


Thanks,
Wei Chen


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 06:23:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 06:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141915.262013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2TZ-00061O-AM; Tue, 15 Jun 2021 06:23:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141915.262013; Tue, 15 Jun 2021 06:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2TZ-00061H-7U; Tue, 15 Jun 2021 06:23:05 +0000
Received: by outflank-mailman (input) for mailman id 141915;
 Tue, 15 Jun 2021 06:23:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2TY-000617-Ct; Tue, 15 Jun 2021 06:23:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2TY-0004Io-7P; Tue, 15 Jun 2021 06:23:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2TX-0000YM-VC; Tue, 15 Jun 2021 06:23:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2TX-0005Xc-Ui; Tue, 15 Jun 2021 06:23:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cqQ5Pf6RxA8CgntilXMjtN3qyqAboQD2ki5zFRJt6hQ=; b=B7CdfC/ZrletZb1yotfw8pRVxw
	w2dqujk1w5XFUkAH58QIX6Lxc10akKX7n6/NPTuk2lyqAOeG7NyTVYgpOEK28CK6fQGMVSeBDPZhE
	BWmJW63w+qmAs7+tcVopUPGgxoqGssFcel5wKOoau4Z1hRGrMbX9XSXG04qbrvt+0tXM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162829-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162829: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f14ca48ef42e552d97cac096968e95680b3c75b4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 06:23:03 +0000

flight 162829 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162829/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f14ca48ef42e552d97cac096968e95680b3c75b4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  340 days
Failing since        151818  2020-07-11 04:18:52 Z  339 days  332 attempts
Testing same since   162829  2021-06-15 04:18:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61375 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 06:45:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 06:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141923.262027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2pN-0008Ra-8R; Tue, 15 Jun 2021 06:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141923.262027; Tue, 15 Jun 2021 06:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2pN-0008RT-4M; Tue, 15 Jun 2021 06:45:37 +0000
Received: by outflank-mailman (input) for mailman id 141923;
 Tue, 15 Jun 2021 06:45:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2pL-0008RI-Du; Tue, 15 Jun 2021 06:45:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2pL-0004eq-7c; Tue, 15 Jun 2021 06:45:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2pK-00014s-Vc; Tue, 15 Jun 2021 06:45:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2pK-0002Su-V8; Tue, 15 Jun 2021 06:45:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HLnRuCUU+O0GZPJR3xgOdMzYc+TjuUevNYbg9MH+kCI=; b=3HVUtvzoipenawj1f1i7NnSwBB
	d7hp+OkKTxX11fp5p3Zk/4/ie5LbRmS5ttP0+yaz3JrHPlz3yA4NF4FLgKZi7KVEuCvoOszDaR99u
	EnHLGxQOGulfUQuF8PF/7b301WpTMOUYZTOk/uM5P8XbI3KuCpFGc3ph3a9FYNKjEOAA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162807-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162807: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4f1858763b7b1aeb79fa7c818eca98c96943aa69
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 06:45:34 +0000

flight 162807 xen-unstable real [real]
flight 162831 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162807/
http://logs.test-lab.xenproject.org/osstest/logs/162831/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-amd64-i386-pvgrub 17 guest-localmigrate       fail REGR. vs. 162533
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append  fail pass in 162831-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4f1858763b7b1aeb79fa7c818eca98c96943aa69
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    7 days
Failing since        162556  2021-06-08 22:39:08 Z    6 days    9 attempts
Testing same since   162807  2021-06-14 15:36:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 752 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 06:53:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 06:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141933.262044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2x7-0001ZR-6q; Tue, 15 Jun 2021 06:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141933.262044; Tue, 15 Jun 2021 06:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt2x7-0001ZK-3m; Tue, 15 Jun 2021 06:53:37 +0000
Received: by outflank-mailman (input) for mailman id 141933;
 Tue, 15 Jun 2021 06:53:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2x5-0001Z9-6a; Tue, 15 Jun 2021 06:53:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2x5-0004o5-1b; Tue, 15 Jun 2021 06:53:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2x4-0001LB-P3; Tue, 15 Jun 2021 06:53:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt2x4-00069v-OW; Tue, 15 Jun 2021 06:53:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xGBh85Ym5Hu39z0VcFUOMy4Alku6Igd+6E0GrfJVGco=; b=qp0/9G6Iv5EUbgJ3HY+ka6HCAa
	w8+Wa2f7oOmuKzBDwvWL8sNgJ1zp5wd+mXTLN5m0wUiCpyC5beFwYqGRyvUkDGHCm5hyFRYSJG4Pu
	p5gzcVNybMt7S7aU1B6D7p/EC29FDhphsWR5TAKj4wmnuDfCQ8dXgbBGLSPmL1GkGoN0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162833-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162833: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 06:53:34 +0000

flight 162833 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162833/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   11 days
Failing since        162368  2021-06-04 15:42:59 Z   10 days   21 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    5 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 07:20:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 07:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141942.262062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt3Mo-0004qb-Ex; Tue, 15 Jun 2021 07:20:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141942.262062; Tue, 15 Jun 2021 07:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt3Mo-0004qU-Bv; Tue, 15 Jun 2021 07:20:10 +0000
Received: by outflank-mailman (input) for mailman id 141942;
 Tue, 15 Jun 2021 07:20:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt3Mn-0004pj-Ns; Tue, 15 Jun 2021 07:20:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt3Mn-0005GJ-IP; Tue, 15 Jun 2021 07:20:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt3Mn-0001zI-7J; Tue, 15 Jun 2021 07:20:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt3Mn-0004Ic-6q; Tue, 15 Jun 2021 07:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PR1UjiCw+vu8QE/XkBzy7Ck/Jq+BQLj004N7heDj9gw=; b=gIDhFOjfalzE+wVYgBckXx9huu
	HzVmZ3mhc0qifCM3+nDeVCaxXBau7cX2aZGoFNTbNHU5OCEteOBRuR40Lw4S7SxamQU6hSRZ6KB7A
	5klcHRddAK+LRyqRT3UxVZzt7JCua97gdcFYYmeAOFeIwZUbRMFTCZV66CKyubxn2it4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162812-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162812: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=009c9aa5be652675a06d5211e1640e02bbb1c33d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 07:20:09 +0000

flight 162812 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162812/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1  13 debian-fixup     fail in 162793 pass in 162812
 test-amd64-amd64-examine    4 memdisk-try-append fail in 162793 pass in 162812
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 162793

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                009c9aa5be652675a06d5211e1640e02bbb1c33d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  318 days
Failing since        152366  2020-08-01 20:49:34 Z  317 days  541 attempts
Testing same since   162793  2021-06-14 03:59:52 Z    1 days    2 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680413 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 08:12:52 2021
Date: Tue, 15 Jun 2021 10:12:24 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, <xen-devel@lists.xen.org>,
	<boris.ostrovsky@oracle.com>, <stephen.s.brennan@oracle.com>
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
Message-ID: <YMhg6OclYQ9AS+wD@Air-de-Roger>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
 <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>

On Mon, Jun 14, 2021 at 06:01:17PM +0200, Jan Beulich wrote:
> On 14.06.2021 15:27, Roger Pau Monné wrote:
> > On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
> >> On 14.06.2021 13:15, Igor Druzhinin wrote:
> >>> Hi, Boris, Stephen, Roger,
> >>>
> >>> We have stress tested recent changes on staging-4.13, which include a
> >>> backport of the subject commit. Since the backport is identical to the
> >>> one on the master branch and all of the prerequisites are in place, we
> >>> have no reason to believe the issue is absent on master.
> >>>
> >>> Here is what we got by running heavy stress testing including multiple
> >>> repeated VM lifecycle operations with storage and network load:
> >>>
> >>>
> >>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> >>> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
> >>> CPU:    17
> >>> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
> >>> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
> >>> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
> >>> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
> >>> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
> >>> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
> >>> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
> >>> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
> >>> cr3: 00000013c1a32000   cr2: 0000000000000000
> >>> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> >>> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> >>> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
> >>>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
> >>> Xen stack trace from rsp=ffff83303fff7cf8:
> >>>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
> >>>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
> >>>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
> >>>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
> >>>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
> >>>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
> >>>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
> >>>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
> >>>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
> >>>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
> >>>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
> >>>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
> >>>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
> >>>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
> >>>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
> >>>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
> >>>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
> >>>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
> >>>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
> >>>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
> >>> Xen call trace:
> >>>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
> >>>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
> >>>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
> >>>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
> >>>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
> >>>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
> >>>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
> >>>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
> >>>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
> >>>
> >>> ****************************************
> >>> Panic on CPU 17:
> >>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
> >>> ****************************************
> >>
> >> Since this suggests a timer was found on the list without ever having been
> >> initialized, I've spotted a case where this indeed could now happen. Could
> >> you give the patch below a try?
> >>
> >> Jan
> >>
> >> x86/vpt: fully init timers before putting onto list
> >>
> >> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
> >> iterating the list and acting on the timers of the list entries will no
> >> longer be kept from entering their loops by create_periodic_time()'s
> >> holding of that lock. Therefore at least init_timer() needs calling
> >> ahead of list insertion, but keep this and set_timer() together.
> >>
> >> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
> >> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Thanks for looking into this so quickly, and sorry for not realizing
> > myself when relaxing the locking. Adding the timer to the list without
> > it being fully initialized was a latent issue even if protected by the
> > lock initially.
> > 
> > Provided testing shows the issue is fixed:
> 
> I guess the change here is needed anyway, even if testing finds there's
> still something amiss?

Indeed, I just wondered whether there might be other instances of a
similar pattern, but I'm not able to spot any.

It might even be better to fix any other issues (if there are any) in a
separate commit.

Thanks.
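[Editor's note: the ordering issue Jan's patch addresses, a timer becoming
reachable through the shared list before init_timer() has run, can be
sketched in plain C. The structures, enum values, and function names below
are illustrative assumptions for a single-threaded sketch, not Xen's real
timer API.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only -- loosely modelled on Xen's struct timer. */
enum timer_status {
    TIMER_STATUS_invalid = 0,   /* memory not yet initialized */
    TIMER_STATUS_inactive,      /* "init_timer()" has run */
    TIMER_STATUS_in_list        /* "set_timer()" has queued it */
};

struct timer {
    enum timer_status status;
    struct timer *next;         /* intrusive list link */
};

/* A list walker asserts, as Xen's active_timer() does, that every
 * timer it finds has at least been initialized. */
static void check_timers(const struct timer *head)
{
    for (const struct timer *t = head; t != NULL; t = t->next)
        assert(t->status >= TIMER_STATUS_inactive);
}

/* Fixed ordering: finish initializing the timer *before* linking it
 * into the shared list. With the pt_migrate lock no longer taken by
 * readers, a walker may run as soon as the timer is reachable, so
 * publishing an uninitialized entry trips the assertion above. */
static void create_timer(struct timer *t, struct timer **head)
{
    t->status = TIMER_STATUS_inactive;  /* init first...          */
    t->next = *head;                    /* ...only then insert    */
    *head = t;
    t->status = TIMER_STATUS_in_list;   /* arm after publication  */
}
```

In real SMP code the insertion would additionally need a write barrier (or
release store) so the initialized fields are visible before the link, but
the basic invariant, never publish a partially constructed object, is the
same one the patch restores.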


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 09:17:57 2021
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, <xen-devel@lists.xen.org>,
	<boris.ostrovsky@oracle.com>, <stephen.s.brennan@oracle.com>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
 <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
 <YMhg6OclYQ9AS+wD@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3aaed845-b273-0688-4cac-3d440e3d58d3@citrix.com>
Date: Tue, 15 Jun 2021 10:17:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YMhg6OclYQ9AS+wD@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-OriginatorOrg: citrix.com

On 15/06/2021 09:12, Roger Pau Monné wrote:
> On Mon, Jun 14, 2021 at 06:01:17PM +0200, Jan Beulich wrote:
>> On 14.06.2021 15:27, Roger Pau Monné wrote:
>>> On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
>>>> On 14.06.2021 13:15, Igor Druzhinin wrote:
>>>>> Hi, Boris, Stephen, Roger,
>>>>>
>>>>> We have stress tested recent changes on staging-4.13 which includes a
>>>>> backport of the subject. Since the backport is identical to the
>>>>> master branch and all of the pre-reqs are in place - we have no reason
>>>>> to believe the issue is not the same on master.
>>>>>
>>>>> Here is what we got by running heavy stress testing including multiple
>>>>> repeated VM lifecycle operations with storage and network load:
>>>>>
>>>>>
>>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>>> ----[ Xen-4.13.3-10.7-d  x86_64  debug=y   Not tainted ]----
>>>>> CPU:    17
>>>>> RIP:    e008:[<ffff82d080246b65>] common/timer.c#active_timer+0xc/0x1b
>>>>> RFLAGS: 0000000000010046   CONTEXT: hypervisor (d675v0)
>>>>> rax: 0000000000000000   rbx: ffff83137a8ed300   rcx: 0000000000000000
>>>>> rdx: ffff83303fff7fff   rsi: ffff83303fff2549   rdi: ffff83137a8ed300
>>>>> rbp: ffff83303fff7cf8   rsp: ffff83303fff7cf8   r8:  0000000000000001
>>>>> r9:  0000000000000000   r10: 0000000000000011   r11: 0000168b0cc08083
>>>>> r12: 0000000000000000   r13: ffff82d0805cf300   r14: ffff82d0805cf300
>>>>> r15: 0000000000000292   cr0: 0000000080050033   cr4: 00000000000426e0
>>>>> cr3: 00000013c1a32000   cr2: 0000000000000000
>>>>> fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>>>> ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>>>> Xen code around <ffff82d080246b65> (common/timer.c#active_timer+0xc/0x1b):
>>>>>   0f b6 47 2a 84 c0 75 02 <0f> 0b 3c 04 76 02 0f 0b 3c 02 0f 97 c0 5d c3 55
>>>>> Xen stack trace from rsp=ffff83303fff7cf8:
>>>>>     ffff83303fff7d48 ffff82d0802479f1 0000168b0192b846 ffff83137a8ed328
>>>>>     000000001d0776eb ffff83137a8ed2c0 ffff83133ee47568 ffff83133ee47000
>>>>>     ffff83133ee47560 ffff832b1a0cd000 ffff83303fff7d78 ffff82d08031e74e
>>>>>     ffff83102d898000 ffff83133ee47000 ffff83102db8d000 0000000000000011
>>>>>     ffff83303fff7dc8 ffff82d08027df19 0000000000000000 ffff83133ee47060
>>>>>     ffff82d0805d0088 ffff83102d898000 ffff83133ee47000 0000000000000011
>>>>>     0000000000000001 0000000000000011 ffff83303fff7e08 ffff82d0802414e0
>>>>>     ffff83303fff7df8 0000168b0192b846 ffff83102d8a4660 0000168b0192b846
>>>>>     ffff83102d8a4720 0000000000000011 ffff83303fff7ea8 ffff82d080241d6c
>>>>>     ffff83133ee47000 ffff831244137a50 ffff83303fff7e48 ffff82d08031b5b8
>>>>>     ffff83133ee47000 ffff832b1a0cd000 ffff83303fff7e68 ffff82d080312b65
>>>>>     ffff83133ee47000 0000000000000000 ffff83303fff7ee8 ffff83102d8a4678
>>>>>     ffff83303fff7ee8 ffff82d0805d6380 ffff82d0805d5b00 ffffffffffffffff
>>>>>     ffff83303fff7fff 0000000000000000 ffff83303fff7ed8 ffff82d0802431f5
>>>>>     ffff83133ee47000 0000000000000000 0000000000000000 0000000000000000
>>>>>     ffff83303fff7ee8 ffff82d08024324a 00007ccfc00080e7 ffff82d08033930b
>>>>>     ffffffffb0ebd5a0 000000000000000d 0000000000000062 0000000000000097
>>>>>     000000000000001e 000000000000001f ffffffffb0ebd5ad 0000000000000000
>>>>>     0000000000000005 000000000003d91d 0000000000000000 0000000000000000
>>>>>     00000000000003d5 000000000000001e 0000000000000000 0000beef0000beef
>>>>> Xen call trace:
>>>>>     [<ffff82d080246b65>] R common/timer.c#active_timer+0xc/0x1b
>>>>>     [<ffff82d0802479f1>] F stop_timer+0xf5/0x188
>>>>>     [<ffff82d08031e74e>] F pt_save_timer+0x45/0x8a
>>>>>     [<ffff82d08027df19>] F context_switch+0xf9/0xee0
>>>>>     [<ffff82d0802414e0>] F common/schedule.c#sched_context_switch+0x146/0x151
>>>>>     [<ffff82d080241d6c>] F common/schedule.c#schedule+0x28a/0x299
>>>>>     [<ffff82d0802431f5>] F common/softirq.c#__do_softirq+0x85/0x90
>>>>>     [<ffff82d08024324a>] F do_softirq+0x13/0x15
>>>>>     [<ffff82d08033930b>] F vmx_asm_do_vmentry+0x2b/0x30
>>>>>
>>>>> ****************************************
>>>>> Panic on CPU 17:
>>>>> Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:287
>>>>> ****************************************
>>>> Since this suggests a timer was found on the list without ever having been
>>>> initialized, I've spotted a case where this indeed could now happen. Could
>>>> you give the patch below a try?
>>>>
>>>> Jan
>>>>
>>>> x86/vpt: fully init timers before putting onto list
>>>>
>>>> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
>>>> iterating the list and acting on the timers of the list entries will no
>>>> longer be kept from entering their loops by create_periodic_time()'s
>>>> holding of that lock. Therefore at least init_timer() needs calling
>>>> ahead of list insertion, but keep this and set_timer() together.
>>>>
>>>> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
>>>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Thanks for looking into this so quickly, and sorry for not realizing
>>> myself when relaxing the locking. Adding the timer to the list without
>>> it being fully initialized was a latent issue even if protected by the
>>> lock initially.
>>>
>>> Provided testing shows the issue is fixed:
>> I guess the change here is needed anyway, even if testing finds there's
>> still something amiss?
> Indeed, just wondered whether there might be other instances using a
> similar pattern, but I'm not able to spot any.
>
> It might even be better to fix other issues (if any) on a different
> commit.

To be honest, this change is clearly good, and necessary.  I'd be
tempted to commit it now, as is, irrespective of whether there are
further bugs in this area.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 09:24:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 09:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141967.262101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt5Ip-0000jy-5R; Tue, 15 Jun 2021 09:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141967.262101; Tue, 15 Jun 2021 09:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt5Io-0000jr-Vc; Tue, 15 Jun 2021 09:24:11 +0000
Received: by outflank-mailman (input) for mailman id 141967;
 Tue, 15 Jun 2021 09:13:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDHg=LJ=garbo.alstadheim.priv.no=hakon@srs-us1.protection.inumbo.net>)
 id 1lt58V-0007i8-1N
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 09:13:31 +0000
Received: from asav21.altibox.net (unknown [109.247.116.8])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e15d45cc-0496-45ed-82c5-a0eccb0fafe0;
 Tue, 15 Jun 2021 09:13:26 +0000 (UTC)
Received: from postfix-relay.alstadheim.priv.no
 (148-252-98.210.3p.ntebredband.no [148.252.98.210])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 (Authenticated sender: hakon.alstadheim@ntebb.no)
 by asav21.altibox.net (Postfix) with ESMTPSA id 7A3558005E
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 11:13:25 +0200 (CEST)
Received: from smtps.alstadheim.priv.no (localhost [127.0.0.1])
 by postfix-relay.alstadheim.priv.no (Postfix) with ESMTP id 0DA007AD0A54
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 11:13:25 +0200 (CEST)
Received: from [192.168.2.201] (unknown [192.168.2.201])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: hakon)
 by smtps.alstadheim.priv.no (Postfix) with ESMTPSA id CEBFD7A52309
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 11:13:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e15d45cc-0496-45ed-82c5-a0eccb0fafe0
To: xen-devel@lists.xenproject.org
From: =?UTF-8?Q?H=c3=a5kon_Alstadheim?= <hakon@garbo.alstadheim.priv.no>
Subject: [BUG] xen net scatter-gather hang
Message-ID: <707e3cf9-4270-f04d-b409-5798a3d3c41e@garbo.alstadheim.priv.no>
Date: Tue, 15 Jun 2021 11:13:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US

I get these in a domU:

---

[  174.883188][    C4] net eth0: rx->offset: 0, size: -1
[  174.883198][    C4] net eth0: rx->offset: 0, size: -1
[  174.883202][    C4] net eth0: rx->offset: 0, size: -1
[  174.883205][    C4] net eth0: rx->offset: 0, size: -1
[  174.883207][    C4] net eth0: rx->offset: 0, size: -1
[  174.883211][    C4] net eth0: rx->offset: 0, size: -1
[  174.883214][    C4] net eth0: rx->offset: 0, size: -1
[  174.883217][    C4] net eth0: rx->offset: 0, size: -1
[  174.883220][    C4] net eth0: rx->offset: 0, size: -1
[  174.883471][    C4] 
==================================================================
[  174.883474][    C4] BUG: KASAN: use-after-free in xennet_poll+0x28dc/0x3c80
[  174.883484][    C4] Write of size 8 at addr ffff88812fba1040 by task X/2988

---

The crash is easily reproducible: just start the domU and fire up 
firefox. The crash does not happen if I do "ethtool -K vif${domid}.0 
sg off tso off" in dom0. The domU has a USB card and a graphics card 
passed through. The config is far too maximalist; please let me know which 
specific settings I should tighten up to get better debug info.
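For reference, the dom0 workaround above can be sketched as below. This is an ops fragment to run in dom0, not a fix; the guest name "gt" is inferred from the /etc/xen/gt.hvm config path, and the vif<domid>.0 backend naming follows the report:

```shell
# Sketch of the reported workaround (run in dom0; assumes the xl
# toolstack and a guest named "gt" as in /etc/xen/gt.hvm).
domname=gt
domid=$(xl domid "$domname")      # resolve the numeric domain id
vif="vif${domid}.0"               # backend interface naming from the report

# Disable scatter-gather and TSO on the backend vif, which avoids the
# netfront use-after-free per the report:
ethtool -K "$vif" sg off tso off
```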

--- Details: ---

# xl info
host                   : gentoo
release                : 5.13.0-rc6
version                : #2 SMP Mon Jun 14 12:40:43 CEST 2021
machine                : x86_64
nr_cpus                : 12
max_cpu_id             : 11
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 2399.982
hw_caps                : bfebfbff:77fef3ff:2c100800:00000021:00000001:000037ab:00000000:00000100
virt_caps              : pv hvm hvm_directio pv_directio hap iommu_hap_pt_share
total_memory           : 130953
free_memory            : 9936
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 15
xen_extra              : .1-pre
xen_version            : 4.15.1-pre
xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Tue Jun 8 18:34:36 2021 +0100 git:a339ceaa8f-dirty
xen_commandline        : xen.cfg xen-marker-203 console_timestamps=date iommu=1 com1=115200,8n1 console=com1 conswitch=lx cpufreq=xen:performance,verbose smt=0 core_parking=power nmi=dom0 gnttab_max_frames=512 gnttab_max_maptrack_frames=2048 vcpu_migration_delay=2000 tickle_one_idle_cpu=1 sched=credit2 timer_slop=5000 max_cstate=2 dom0_mem=16G,max:16G dom0_max_vcpus=8 loglvl=error/all guest_loglvl=error/all
cc_compiler            : gcc (Gentoo 10.3.0 p1) 10.3.0
cc_compile_by          : hakon
cc_compile_domain      : alstadheim.priv.no
cc_compile_date        : Sat Jun 12 23:39:03 CEST 2021
build_id               : 32581c9edcce5110d37abc1490e91c9b
xend_config_format     : 4

--- xl info ends ---

The "dirty" is from this:

---

git diff
diff --git a/.gitignore b/.gitignore
index 1c2fa1530b..cddcf26db5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -435,3 +435,4 @@ tools/xl/xl
  docs/txt/misc/*.txt
  docs/txt/man/*.txt
  docs/figs/*.png
+install.log
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 1a1c263080..31cf54807c 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1503,7 +1503,7 @@ static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned
      char *reset;
      int fd, rc;

-    reset = GCSPRINTF("%s/do_flr", SYSFS_PCIBACK_DRIVER);
+    reset = GCSPRINTF("%s/reset", SYSFS_PCIBACK_DRIVER);
      fd = open(reset, O_WRONLY);
      if (fd >= 0) {
          char *buf = GCSPRINTF(PCI_BDF, domain, bus, dev, func);

--- diff ends ---

The console log is rather long. I have an automatic trigger of a 
shut-down of the domU after the "rx->offset" messages, so part of the 
log shows an attempt to shut down the domU.

Below the domU log comes a dump of everything from dom0 xen console. 
Search for "ends" to find it.

Full console log from domU:

---

Parsing config from /etc/xen/gt.hvm
WARNING: msr_relaxed will be removed in future versions.
If it fixes an issue you are having please report to 
xen-devel@lists.xenproject.org.
[    0.000000][    T0] Linux version 5.13.0-rc6-x86_64 (root@gt) (x86_64-pc-linux-gnu-gcc (Gentoo 11.1.0-r1 p2) 11.1.0, GNU ld (Gentoo 2.36.1 p3) 2.36.1) #1 SMP Mon Jun 14 14:48:44 CEST 2021
[    0.000000][    T0] Command line: real_root=LABEL=RAID-GT  ro intel_iommu=on init=/lib/systemd/systemd net.ifnames=0 xen_blkfront.max_indirect_segments=64 udev.log-priority=3 xen_netfront.max_queues=4  scsi_mod.use_blk_mq=1 elevator=mq-deadline pti=off amdgpu.si_support=1 amdgpu.dpm=1 amdgpu.msi=1 nohz=off xen_timer_slop=5000 console=ttyS0 nowatchdog ignore_loglevel
[    0.000000][    T0] KERNEL supported cpus:
[    0.000000][    T0]   Intel GenuineIntel
[    0.000000][    T0] x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers'
[    0.000000][    T0] x86/fpu: Supporting XSAVE feature 0x002: 'SSE 
registers'
[    0.000000][    T0] x86/fpu: Supporting XSAVE feature 0x004: 'AVX 
registers'
[    0.000000][    T0] x86/fpu: xstate_offset[2]:  576, 
xstate_sizes[2]:  256
[    0.000000][    T0] x86/fpu: Enabled xstate features 0x7, context 
size is 832 bytes, using 'standard' format.
[    0.000000][    T0] BIOS-provided physical RAM map:
[    0.000000][    T0] BIOS-e820: [mem 
0x0000000000000000-0x000000000009fbff] usable
[    0.000000][    T0] BIOS-e820: [mem 
0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000][    T0] BIOS-e820: [mem 
0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000][    T0] BIOS-e820: [mem 
0x0000000000100000-0x000000003ffecfff] usable
[    0.000000][    T0] BIOS-e820: [mem 
0x000000003ffed000-0x000000003fffffff] reserved
[    0.000000][    T0] BIOS-e820: [mem 
0x000000007db85000-0x000000007db94fff] reserved
[    0.000000][    T0] BIOS-e820: [mem 
0x00000000fc000000-0x00000000fc00afff] ACPI NVS
[    0.000000][    T0] BIOS-e820: [mem 
0x00000000fc00b000-0x00000000ffffffff] reserved
[    0.000000][    T0] BIOS-e820: [mem 
0x0000000100000000-0x000000057f7fffff] usable
[    0.000000][    T0] printk: debug: ignoring loglevel setting.
[    0.000000][    T0] NX (Execute Disable) protection: active
[    0.000000][    T0] SMBIOS 2.4 present.
[    0.000000][    T0] DMI: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[    0.000000][    T0] Hypervisor detected: Xen HVM
[    0.000000][    T0] Xen version 4.15.
[    0.000000][    T0] Xen Platform PCI: I/O protocol version 1
[    0.000000][    T0] Netfront and the Xen platform PCI driver have 
been compiled for this kernel: unplug emulated NICs.
[    0.000000][    T0] Blkfront and the Xen platform PCI driver have 
been compiled for this kernel: unplug emulated disks.
[    0.000000][    T0] You might have to change the root device
[    0.000000][    T0] from /dev/hd[a-d] to /dev/xvd[a-d]
[    0.000000][    T0] in your root= kernel command line option
[    0.000023][    T0] HVMOP_pagetable_dying not supported
[    0.000493][    T0] tsc: Detected 2399.996 MHz processor
[    0.001648][    T0] e820: update [mem 0x00000000-0x00000fff] usable 
==> reserved
[    0.001664][    T0] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.001678][    T0] last_pfn = 0x57f800 max_arch_pfn = 0x400000000
[    0.004147][    T0] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  
WP  UC- WT
[    0.010831][    T0] last_pfn = 0x3ffed max_arch_pfn = 0x400000000
[    0.016025][    T0] found SMP MP-table at [mem 0x000f5a90-0x000f5a9f]
[    0.016070][    T0] Kernel/User page tables isolation: disabled on 
command line.
[    0.016078][    T0] Using GB pages for direct mapping
[    0.016821][    T0] RAMDISK: [mem 0x3fa30000-0x3ffdffff]
[    0.016851][    T0] ACPI: Early table checksum verification disabled
[    0.016871][    T0] ACPI: RSDP 0x00000000000F59E0 000024 (v02 Xen   )
[    0.016894][    T0] ACPI: XSDT 0x00000000FC00A670 000054 (v01 Xen    
HVM      00000000 HVML 00000000)
[    0.016918][    T0] ACPI: FACP 0x00000000FC00A370 0000F4 (v04 Xen    
HVM      00000000 HVML 00000000)
[    0.016961][    T0] ACPI: DSDT 0x00000000FC001040 0092A3 (v02 Xen    
HVM      00000000 INTL 20200326)
[    0.016973][    T0] ACPI: FACS 0x00000000FC001000 000040
[    0.016983][    T0] ACPI: FACS 0x00000000FC001000 000040
[    0.016994][    T0] ACPI: APIC 0x00000000FC00A470 000090 (v02 Xen    
HVM      00000000 HVML 00000000)
[    0.017006][    T0] ACPI: HPET 0x00000000FC00A580 000038 (v01 Xen    
HVM      00000000 HVML 00000000)
[    0.017017][    T0] ACPI: WAET 0x00000000FC00A5C0 000028 (v01 Xen    
HVM      00000000 HVML 00000000)
[    0.017028][    T0] ACPI: SSDT 0x00000000FC00A5F0 000031 (v02 Xen    
HVM      00000000 INTL 20200326)
[    0.017040][    T0] ACPI: SSDT 0x00000000FC00A630 000031 (v02 Xen    
HVM      00000000 INTL 20200326)
[    0.017049][    T0] ACPI: Reserving FACP table memory at [mem 
0xfc00a370-0xfc00a463]
[    0.017055][    T0] ACPI: Reserving DSDT table memory at [mem 
0xfc001040-0xfc00a2e2]
[    0.017060][    T0] ACPI: Reserving FACS table memory at [mem 
0xfc001000-0xfc00103f]
[    0.017065][    T0] ACPI: Reserving FACS table memory at [mem 
0xfc001000-0xfc00103f]
[    0.017069][    T0] ACPI: Reserving APIC table memory at [mem 
0xfc00a470-0xfc00a4ff]
[    0.017074][    T0] ACPI: Reserving HPET table memory at [mem 
0xfc00a580-0xfc00a5b7]
[    0.017078][    T0] ACPI: Reserving WAET table memory at [mem 
0xfc00a5c0-0xfc00a5e7]
[    0.017083][    T0] ACPI: Reserving SSDT table memory at [mem 
0xfc00a5f0-0xfc00a620]
[    0.017088][    T0] ACPI: Reserving SSDT table memory at [mem 
0xfc00a630-0xfc00a660]
[    0.017193][    T0] ACPI: Local APIC address 0xfee00000
[    0.017331][    T0] No NUMA configuration found
[    0.017336][    T0] Faking a node at [mem 
0x0000000000000000-0x000000057f7fffff]
[    0.017346][    T0] NODE_DATA(0) allocated [mem 0x57f7fb000-0x57f7fefff]
[    0.017430][    T0] Zone ranges:
[    0.017434][    T0]   DMA      [mem 
0x0000000000001000-0x0000000000ffffff]
[    0.017443][    T0]   DMA32    [mem 
0x0000000001000000-0x00000000ffffffff]
[    0.017450][    T0]   Normal   [mem 
0x0000000100000000-0x000000057f7fffff]
[    0.017457][    T0] Movable zone start for each node
[    0.017479][    T0] Early memory node ranges
[    0.017483][    T0]   node   0: [mem 
0x0000000000001000-0x000000000009efff]
[    0.017489][    T0]   node   0: [mem 
0x0000000000100000-0x000000003ffecfff]
[    0.017493][    T0]   node   0: [mem 
0x0000000100000000-0x000000057f7fffff]
[    0.017500][    T0] Initmem setup node 0 [mem 
0x0000000000001000-0x000000057f7fffff]
[    0.017507][    T0] On node 0 totalpages: 4978571
[    0.017513][    T0]   DMA zone: 64 pages used for memmap
[    0.017519][    T0]   DMA zone: 158 pages reserved
[    0.017523][    T0]   DMA zone: 3998 pages, LIFO batch:0
[    0.018807][    T0]   DMA zone: 28770 pages in unavailable ranges
[    0.018813][    T0]   DMA32 zone: 4032 pages used for memmap
[    0.018818][    T0]   DMA32 zone: 258029 pages, LIFO batch:63
[    0.026615][    T0]   DMA32 zone: 19 pages in unavailable ranges
[    0.026673][    T0]   Normal zone: 73696 pages used for memmap
[    0.026681][    T0]   Normal zone: 4716544 pages, LIFO batch:63
[    0.171473][    T0]   Normal zone: 2048 pages in unavailable ranges
[    0.770557][    T0] kasan: KernelAddressSanitizer initialized
[    0.773375][    T0] ACPI: PM-Timer IO Port: 0xb008
[    0.773395][    T0] ACPI: Local APIC address 0xfee00000
[    0.773535][    T0] IOAPIC[0]: apic_id 1, version 17, address 
0xfec00000, GSI 0-47
[    0.773546][    T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 
dfl dfl)
[    0.773553][    T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 
low level)
[    0.773560][    T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 
low level)
[    0.773566][    T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 
low level)
[    0.773575][    T0] ACPI: IRQ0 used by override.
[    0.773580][    T0] ACPI: IRQ5 used by override.
[    0.773584][    T0] ACPI: IRQ9 used by override.
[    0.773588][    T0] ACPI: IRQ10 used by override.
[    0.773592][    T0] ACPI: IRQ11 used by override.
[    0.773601][    T0] Using ACPI (MADT) for SMP configuration information
[    0.773606][    T0] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.773615][    T0] TSC deadline timer available
[    0.773621][    T0] smpboot: Allowing 6 CPUs, 0 hotplug CPUs
[    0.773650][    T0] PM: hibernation: Registered nosave memory: [mem 
0x00000000-0x00000fff]
[    0.773658][    T0] PM: hibernation: Registered nosave memory: [mem 
0x0009f000-0x0009ffff]
[    0.773663][    T0] PM: hibernation: Registered nosave memory: [mem 
0x000a0000-0x000effff]
[    0.773667][    T0] PM: hibernation: Registered nosave memory: [mem 
0x000f0000-0x000fffff]
[    0.773673][    T0] PM: hibernation: Registered nosave memory: [mem 
0x3ffed000-0x3fffffff]
[    0.773678][    T0] PM: hibernation: Registered nosave memory: [mem 
0x40000000-0x7db84fff]
[    0.773682][    T0] PM: hibernation: Registered nosave memory: [mem 
0x7db85000-0x7db94fff]
[    0.773686][    T0] PM: hibernation: Registered nosave memory: [mem 
0x7db95000-0xfbffffff]
[    0.773691][    T0] PM: hibernation: Registered nosave memory: [mem 
0xfc000000-0xfc00afff]
[    0.773695][    T0] PM: hibernation: Registered nosave memory: [mem 
0xfc00b000-0xffffffff]
[    0.773703][    T0] [mem 0x7db95000-0xfbffffff] available for PCI 
devices
[    0.773708][    T0] Booting paravirtualized kernel on Xen HVM
[    0.773726][    T0] clocksource: refined-jiffies: mask: 0xffffffff 
max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    0.773754][    T0] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 
nr_cpu_ids:6 nr_node_ids:1
[    0.774498][    T0] percpu: Embedded 65 pages/cpu s226832 r8192 
d31216 u524288
[    0.774513][    T0] pcpu-alloc: s226832 r8192 d31216 u524288 
alloc=1*2097152
[    0.774522][    T0] pcpu-alloc: [0] 0 1 2 3 [0] 4 5 - -
[    0.774597][    T0] xen: PV spinlocks enabled
[    0.774605][    T0] PV qspinlock hash table entries: 256 (order: 0, 
4096 bytes, linear)
[    0.774617][    T0] Built 1 zonelists, mobility grouping on. Total 
pages: 4900621
[    0.774622][    T0] Policy zone: Normal
[    0.774627][    T0] Kernel command line: real_root=LABEL=RAID-GT  ro 
intel_iommu=on init=/lib/systemd/systemd net.ifnames=0 
xen_blkfront.max_indirect_segments=64 udev.log-priority=3 
xen_netfront.max_queues=4  scsi_mod.use_blk_mq=1 elevator=mq-deadline 
pti=off amdgpu.si_support=1 amdgpu.dpm=1 amdgpu.msi=1 nohz=off 
xen_timer_slop=5000 console=ttyS0 nowatchdog ignore_loglevel
[    0.774722][    T0] DMAR: IOMMU enabled
[    0.774857][    T0] Kernel parameter elevator= does not have any 
effect anymore.
[    0.774857][    T0] Please use sysfs to set IO scheduler for 
individual devices.
[    0.779459][    T0] Dentry cache hash table entries: 4194304 (order: 
13, 33554432 bytes, linear)
[    0.781700][    T0] Inode-cache hash table entries: 2097152 (order: 
12, 16777216 bytes, linear)
[    0.781797][    T0] mem auto-init: stack:off, heap alloc:on, heap 
free:off
[    1.473275][    T0] Memory: 16880560K/19914284K available (49173K 
kernel code, 14692K rwdata, 11088K rodata, 2692K init, 7008K bss, 
3033468K reserved, 0K cma-reserved)
[    1.473314][    T0] random: get_random_u64 called from 
__kmem_cache_create+0x2e/0x5e0 with crng_init=0
[    1.473668][    T0] SLUB: HWalign=64, Order=0-3, MinObjects=0, 
CPUs=6, Nodes=1
[    1.473712][    T0] ftrace: allocating 52352 entries in 205 pages
[    1.504020][    T0] ftrace: allocated 205 pages with 5 groups
[    1.504440][    T0] rcu: Hierarchical RCU implementation.
[    1.504446][    T0] rcu:     RCU event tracing is enabled.
[    1.504450][    T0] rcu:     RCU restricting CPUs from NR_CPUS=64 to 
nr_cpu_ids=6.
[    1.504455][    T0]     Rude variant of Tasks RCU enabled.
[    1.504459][    T0]     Tracing variant of Tasks RCU enabled.
[    1.504462][    T0] rcu: RCU calculated value of scheduler-enlistment 
delay is 100 jiffies.
[    1.504466][    T0] rcu: Adjusting geometry for rcu_fanout_leaf=16, 
nr_cpu_ids=6
[    1.521458][    T0] NR_IRQS: 4352, nr_irqs: 880, preallocated irqs: 16
[    1.521605][    T0] xen:events: Using FIFO-based ABI
[    1.521654][    T0] xen:events: Xen HVM callback vector for event 
delivery is enabled
[    1.521917][    T0] random: crng done (trusting CPU's manufacturer)
[    1.600531][    T0] Console: colour VGA+ 80x25
[    3.042529][    T0] printk: console [ttyS0] enabled
[    3.049101][    T0] Lock dependency validator: Copyright (c) 2006 Red 
Hat, Inc., Ingo Molnar
[    3.060197][    T0] ... MAX_LOCKDEP_SUBCLASSES:  8
[    3.069900][    T0] ... MAX_LOCK_DEPTH:          48
[    3.079641][    T0] ... MAX_LOCKDEP_KEYS:        8192
[    3.089886][    T0] ... CLASSHASH_SIZE:          4096
[    3.097288][    T0] ... MAX_LOCKDEP_ENTRIES:     32768
[    3.102165][    T0] ... MAX_LOCKDEP_CHAINS:      65536
[    3.106783][    T0] ... CHAINHASH_SIZE:          32768
[    3.111947][    T0]  memory used by lock dependency info: 3637 kB
[    3.118567][    T0]  per task-struct memory footprint: 1920 bytes
[    3.125150][    T0] ACPI: Core revision 20210331
[    3.134142][    T0] clocksource: hpet: mask: 0xffffffff max_cycles: 
0xffffffff, max_idle_ns: 30580167144 ns
[    3.146769][    T0] APIC: Switch to symmetric I/O mode setup
[    3.154372][    T0] x2apic: IRQ remapping doesn't support X2APIC mode
[    3.167629][    T0] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0
[    3.180765][    T0] clocksource: tsc-early: mask: 0xffffffffffffffff 
max_cycles: 0x229833f6470, max_idle_ns: 440795327230 ns
[    3.192960][    T0] Calibrating delay loop (skipped), value 
calculated using timer frequency.. 4799.99 BogoMIPS (lpj=2399996)
[    3.193966][    T0] pid_max: default: 32768 minimum: 301
[    3.196000][    T0] LSM: Security Framework initializing
[    3.197987][    T0] Mount-cache hash table entries: 65536 (order: 7, 
524288 bytes, linear)
[    3.199044][    T0] Mountpoint-cache hash table entries: 65536 
(order: 7, 524288 bytes, linear)
[    3.202200][    T0] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 
1024
[    3.202965][    T0] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 
1024, 1GB 4
[    3.203975][    T0] Spectre V1 : Mitigation: usercopy/swapgs barriers 
and __user pointer sanitization
[    3.204967][    T0] Spectre V2 : Mitigation: Full generic retpoline
[    3.205962][    T0] Spectre V2 : Spectre v2 / SpectreRSB mitigation: 
Filling RSB on context switch
[    3.206961][    T0] Spectre V2 : Enabling Restricted Speculation for 
firmware calls
[    3.207969][    T0] Spectre V2 : mitigation: Enabling conditional 
Indirect Branch Prediction Barrier
[    3.208963][    T0] Spectre V2 : User space: Mitigation: STIBP via 
seccomp and prctl
[    3.209967][    T0] Speculative Store Bypass: Mitigation: Speculative 
Store Bypass disabled via prctl and seccomp
[    3.210972][    T0] MDS: Mitigation: Clear CPU buffers
[    3.218634][    T0] Freeing SMP alternatives memory: 44K
[    3.220211][    T1] clocksource: xen: mask: 0xffffffffffffffff 
max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    3.220981][    T1] Xen: using vcpuop timer interface
[    3.221991][    T1] installing Xen timer for CPU 0
[    3.223167][    T1] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v3 @ 
2.40GHz (family: 0x6, model: 0x3f, stepping: 0x2)
[    3.224136][    T1] cpu 0 spinlock event irq 52
[    3.226018][    T1] Performance Events: unsupported p6 CPU model 63 
no PMU driver, software events only.
[    3.227207][    T1] rcu: Hierarchical SRCU implementation.
[    3.230482][    T1] NMI watchdog: Perf NMI watchdog permanently disabled
[    3.231524][    T1] smp: Bringing up secondary CPUs ...
[    3.232639][    T1] installing Xen timer for CPU 1
[    3.233194][    T1] x86: Booting SMP configuration:
[    3.233975][    T1] .... node  #0, CPUs:      #1
[    3.234972][   T17] cpu 1 spinlock event irq 57
[    3.247177][    T1] installing Xen timer for CPU 2
[    3.248211][    T1]  #2
[    3.249090][   T23] cpu 2 spinlock event irq 62
[    3.259712][    T1] installing Xen timer for CPU 3
[    3.260221][    T1]  #3
[    3.261162][   T29] cpu 3 spinlock event irq 67
[    3.273213][    T1] installing Xen timer for CPU 4
[    3.274190][    T1]  #4
[    3.275092][   T35] cpu 4 spinlock event irq 72
[    3.284642][    T1] installing Xen timer for CPU 5
[    3.285250][    T1]  #5
[    3.286100][   T41] cpu 5 spinlock event irq 77
[    3.304261][    T1] smp: Brought up 1 node, 6 CPUs
[    3.304978][    T1] smpboot: Max logical packages: 1
[    3.305979][    T1] smpboot: Total of 6 processors activated 
(28799.95 BogoMIPS)
[    3.310964][    T1] devtmpfs: initialized
[    3.319303][    T1] PM: Registering ACPI NVS region [mem 
0xfc000000-0xfc00afff] (45056 bytes)
[    3.320391][    T1] clocksource: jiffies: mask: 0xffffffff 
max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    3.321033][    T1] futex hash table entries: 2048 (order: 6, 262144 
bytes, linear)
[    3.322361][    T1] pinctrl core: initialized pinctrl subsystem
[    3.323909][    T1]
[    3.323970][    T1] *************************************************************
[    3.324975][    T1] **     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE    **
[    3.325977][    T1] **                                                         **
[    3.326978][    T1] **  IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL  **
[    3.327978][    T1] **                                                         **
[    3.328977][    T1] ** This means that this kernel is built to expose internal **
[    3.329980][    T1] ** IOMMU data structures, which may compromise security on **
[    3.330977][    T1] ** your system.                                            **
[    3.331976][    T1] **                                                         **
[    3.332973][    T1] ** If you see this message and you are not debugging the   **
[    3.333972][    T1] ** kernel, report this immediately to your vendor!         **
[    3.334974][    T1] **                                                         **
[    3.335973][    T1] **     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE    **
[    3.336968][    T1] *************************************************************
[    3.338082][    T1] PM: RTC time: 23:00:23, date: 2021-06-14
[    3.339859][    T1] NET: Registered protocol family 16
[    3.341695][    T1] audit: initializing netlink subsys (disabled)
[    3.342192][   T50] audit: type=2000 audit(1623711622.829:1): 
state=initialized audit_enabled=0 res=1
[    3.343461][    T1] thermal_sys: Registered thermal governor 
'fair_share'
[    3.343976][    T1] thermal_sys: Registered thermal governor 'bang_bang'
[    3.344972][    T1] thermal_sys: Registered thermal governor 'step_wise'
[    3.345983][    T1] thermal_sys: Registered thermal governor 
'user_space'
[    3.346984][    T1] thermal_sys: Registered thermal governor 
'power_allocator'
[    3.348114][    T1] cpuidle: using governor ladder
[    3.350301][    T1] ACPI: bus type PCI registered
[    3.350978][    T1] acpiphp: ACPI Hot Plug PCI Controller Driver 
version: 0.5
[    3.353006][    T1] PCI: Using configuration type 1 for base access
[    3.372862][    T1] Kprobes globally optimized
[    3.373217][    T1] HugeTLB registered 1.00 GiB page size, 
pre-allocated 0 pages
[    3.374976][    T1] HugeTLB registered 2.00 MiB page size, 
pre-allocated 0 pages
[    3.379083][    T1] cryptd: max_cpu_qlen set to 1000
[    3.390301][    T1] ACPI: Added _OSI(Module Device)
[    3.390975][    T1] ACPI: Added _OSI(Processor Device)
[    3.391978][    T1] ACPI: Added _OSI(3.0 _SCP Extensions)
[    3.392981][    T1] ACPI: Added _OSI(Processor Aggregator Device)
[    3.393992][    T1] ACPI: Added _OSI(Linux-Dell-Video)
[    3.394994][    T1] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    3.395993][    T1] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    3.557052][    T1] ACPI: 3 ACPI AML tables successfully acquired and 
loaded
[    3.562045][    T1] xen: --> pirq=16 -> irq=9 (gsi=9)
[    3.615936][    T1] ACPI: Interpreter enabled
[    3.616106][    T1] ACPI: (supports S0 S3 S4 S5)
[    3.616991][    T1] ACPI: Using IOAPIC for interrupt routing
[    3.618202][    T1] PCI: Using host bridge windows from ACPI; if 
necessary, use "pci=nocrs" and report a bug
[    3.622294][    T1] ACPI: Enabled 2 GPEs in block 00 to 0F
[    3.739892][    T1] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 
00-ff])
[    3.740000][    T1] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM 
Segments MSI HPX-Type3]
[    3.741182][    T1] acpi PNP0A03:00: fail to add MMCONFIG 
information, can't access extended PCI configuration space under this 
bridge.
[    3.749653][    T1] acpiphp: Slot [3] registered
[    3.750246][    T1] acpiphp: Slot [4] registered
[    3.751235][    T1] acpiphp: Slot [5] registered
[    3.752296][    T1] acpiphp: Slot [6] registered
[    3.753433][    T1] acpiphp: Slot [7] registered
[    3.754260][    T1] acpiphp: Slot [8] registered
[    3.755218][    T1] acpiphp: Slot [9] registered
[    3.756170][    T1] acpiphp: Slot [10] registered
[    3.757170][    T1] acpiphp: Slot [11] registered
[    3.758163][    T1] acpiphp: Slot [12] registered
[    3.759208][    T1] acpiphp: Slot [13] registered
[    3.760186][    T1] acpiphp: Slot [14] registered
[    3.761247][    T1] acpiphp: Slot [15] registered
[    3.762170][    T1] acpiphp: Slot [16] registered
[    3.763219][    T1] acpiphp: Slot [17] registered
[    3.764201][    T1] acpiphp: Slot [18] registered
[    3.765225][    T1] acpiphp: Slot [19] registered
[    3.766184][    T1] acpiphp: Slot [20] registered
[    3.767210][    T1] acpiphp: Slot [21] registered
[    3.768209][    T1] acpiphp: Slot [22] registered
[    3.769178][    T1] acpiphp: Slot [23] registered
[    3.770194][    T1] acpiphp: Slot [24] registered
[    3.771229][    T1] acpiphp: Slot [25] registered
[    3.772229][    T1] acpiphp: Slot [26] registered
[    3.773287][    T1] acpiphp: Slot [27] registered
[    3.774221][    T1] acpiphp: Slot [28] registered
[    3.775251][    T1] acpiphp: Slot [29] registered
[    3.776224][    T1] acpiphp: Slot [30] registered
[    3.777219][    T1] acpiphp: Slot [31] registered
[    3.778123][    T1] PCI host bridge to bus 0000:00
[    3.778985][    T1] pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window]
[    3.779989][    T1] pci_bus 0000:00: root bus resource [io 
0x0d00-0xffff window]
[    3.780987][    T1] pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window]
[    3.781990][    T1] pci_bus 0000:00: root bus resource [mem 
0x40000000-0xfbffffff window]
[    3.782998][    T1] pci_bus 0000:00: root bus resource [bus 00-ff]
[    3.784406][    T1] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    3.790293][    T1] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    3.797457][    T1] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    3.800911][    T1] pci 0000:00:01.1: reg 0x20: [io 0xc200-0xc20f]
[    3.801970][    T1] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: 
[io  0x01f0-0x01f7]
[    3.802979][    T1] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: 
[io  0x03f6]
[    3.803983][    T1] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: 
[io  0x0170-0x0177]
[    3.804983][    T1] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: 
[io  0x0376]
[    3.808393][    T1] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    3.814430][    T1] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] 
claimed by PIIX4 ACPI
[    3.815110][    T1] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] 
claimed by PIIX4 SMB
[    3.820467][    T1] pci 0000:00:02.0: [5853:0001] type 00 class 0xff8000
[    3.822968][    T1] pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc0ff]
[    3.825970][    T1] pci 0000:00:02.0: reg 0x14: [mem 
0x52000000-0x52ffffff pref]
[    3.838421][    T1] pci 0000:00:03.0: [1013:00b8] type 00 class 0x030000
[    3.840971][    T1] pci 0000:00:03.0: reg 0x10: [mem 
0x50000000-0x51ffffff pref]
[    3.843839][    T1] pci 0000:00:03.0: reg 0x14: [mem 
0x530dc000-0x530dcfff]
[    3.851395][    T1] pci 0000:00:03.0: reg 0x30: [mem 
0x530c0000-0x530cffff pref]
[    3.857623][    T1] pci 0000:00:05.0: [1b21:1242] type 00 class 0x0c0330
[    3.862225][    T1] pci 0000:00:05.0: reg 0x10: [mem 
0x530d0000-0x530d7fff 64bit]
[    3.873555][    T1] pci 0000:00:05.0: enabling Extended Tags
[    3.882289][    T1] pci 0000:00:06.0: [1002:6811] type 00 class 0x030000
[    3.885012][    T1] pci 0000:00:06.0: reg 0x10: [mem 
0x3ff40000000-0x3ff4fffffff 64bit pref]
[    3.888011][    T1] pci 0000:00:06.0: reg 0x18: [mem 
0x53040000-0x5307ffff 64bit]
[    3.891015][    T1] pci 0000:00:06.0: reg 0x20: [io 0xc100-0xc1ff]
[    3.896003][    T1] pci 0000:00:06.0: reg 0x30: [mem 
0x530a0000-0x530bffff pref]
[    3.897367][    T1] pci 0000:00:06.0: enabling Extended Tags
[    3.902368][    T1] pci 0000:00:06.0: supports D1 D2
[    3.907673][    T1] pci 0000:00:07.0: [1002:aab0] type 00 class 0x040300
[    3.910018][    T1] pci 0000:00:07.0: reg 0x10: [mem 
0x530d8000-0x530dbfff 64bit]
[    3.921411][    T1] pci 0000:00:07.0: enabling Extended Tags
[    3.925516][    T1] pci 0000:00:07.0: supports D1 D2
[    3.946743][    T1] ACPI: PCI: Interrupt link LNKA configured for IRQ 5
[    3.948327][    T1] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[    3.950375][    T1] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
[    3.952370][    T1] ACPI: PCI: Interrupt link LNKD configured for IRQ 5
[    3.988605][    T1] xen:balloon: Initialising balloon driver
[    3.989590][    T1] iommu: Default domain type: Translated
[    3.990146][    T1] pci 0000:00:03.0: vgaarb: setting as boot VGA device
[    3.990954][    T1] pci 0000:00:03.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none
[    3.991103][    T1] pci 0000:00:06.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none
[    3.991979][    T1] pci 0000:00:03.0: vgaarb: no bridge control possible
[    3.992973][    T1] pci 0000:00:06.0: vgaarb: bridge control possible
[    3.993976][    T1] vgaarb: loaded
[    3.996847][    T1] SCSI subsystem initialized
[    3.997147][    T1] libata version 3.00 loaded.
[    3.998287][    T1] mc: Linux media interface: v0.10
[    3.999030][    T1] videodev: Linux video capture interface: v2.00
[    4.000139][    T1] pps_core: LinuxPPS API ver. 1 registered
[    4.000973][    T1] pps_core: Software ver. 5.3.6 - Copyright 
2005-2007 Rodolfo Giometti <giometti@linux.it>
[    4.002028][    T1] PTP clock support registered
[    4.003244][    T1] EDAC MC: Ver: 3.0.0
[    4.008465][    T1] NetLabel: Initializing
[    4.008974][    T1] NetLabel:  domain hash size = 128
[    4.009978][    T1] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    4.011140][    T1] NetLabel:  unlabeled traffic allowed by default
[    4.012174][    T1] PCI: Using ACPI for IRQ routing
[    4.012975][    T1] PCI: pci_cache_line_size set to 64 bytes
[    4.014954][    T1] pci 0000:00:06.0: can't claim BAR 0 [mem 
0x3ff40000000-0x3ff4fffffff 64bit pref]: no compatible bridge window
[    4.016534][    T1] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[    4.016982][    T1] e820: reserve RAM buffer [mem 0x3ffed000-0x3fffffff]
[    4.017980][    T1] e820: reserve RAM buffer [mem 
0x57f800000-0x57fffffff]
[    4.019316][    T1] hpet: 3 channels of 0 reserved for per-cpu timers
[    4.020008][    T1] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    4.020973][    T1] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    4.026479][    T1] clocksource: Switched to clocksource xen
[    4.230189][    T1] VFS: Disk quotas dquot_6.6.0
[    4.239949][    T1] VFS: Dquot-cache hash table entries: 512 (order 
0, 4096 bytes)
[    4.259610][    T1] FS-Cache: Loaded
[    4.270420][    T1] pnp: PnP ACPI init
[    4.278947][    T1] system 00:00: [mem 0x00000000-0x0009ffff] could 
not be reserved
[    4.294081][    T1] system 00:00: Plug and Play ACPI device, IDs 
PNP0c02 (active)
[    4.306565][    T1] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    4.317586][    T1] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    4.327656][    T1] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    4.337691][    T1] system 00:01: Plug and Play ACPI device, IDs 
PNP0c02 (active)
[    4.350838][    T1] xen: --> pirq=17 -> irq=8 (gsi=8)
[    4.360348][    T1] pnp 00:02: Plug and Play ACPI device, IDs PNP0b00 
(active)
[    4.373174][    T1] xen: --> pirq=18 -> irq=12 (gsi=12)
[    4.383488][    T1] pnp 00:03: Plug and Play ACPI device, IDs PNP0f13 
(active)
[    4.397222][    T1] xen: --> pirq=19 -> irq=1 (gsi=1)
[    4.407775][    T1] pnp 00:04: Plug and Play ACPI device, IDs PNP0303 
PNP030b (active)
[    4.435180][    T1] xen: --> pirq=20 -> irq=6 (gsi=6)
[    4.453694][    T1] pnp 00:05: [dma 2]
[    4.464419][    T1] pnp 00:05: Plug and Play ACPI device, IDs PNP0700 
(active)
[    4.479430][    T1] xen: --> pirq=21 -> irq=4 (gsi=4)
[    4.487704][    T1] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 
(active)
[    4.500486][    T1] system 00:07: [io  0xae00-0xae0f] has been reserved
[    4.510464][    T1] system 00:07: [io  0xb044-0xb047] has been reserved
[    4.519524][    T1] system 00:07: Plug and Play ACPI device, IDs 
PNP0c02 (active)
[    4.538009][    T1] pnp: PnP ACPI: found 8 devices
[    4.564424][    T1] clocksource: acpi_pm: mask: 0xffffff max_cycles: 
0xffffff, max_idle_ns: 2085701024 ns
[    4.577109][    T1] NET: Registered protocol family 2
[    4.584098][    T1] IP idents hash table entries: 262144 (order: 9, 
2097152 bytes, linear)
[    4.597552][    T1] tcp_listen_portaddr_hash hash table entries: 
16384 (order: 8, 1310720 bytes, linear)
[    4.613104][    T1] TCP established hash table entries: 262144 
(order: 9, 2097152 bytes, linear)
[    4.631136][    T1] TCP bind hash table entries: 65536 (order: 10, 
4718592 bytes, vmalloc)
[    4.646758][    T1] TCP: Hash tables configured (established 262144 
bind 65536)
[    4.658919][    T1] MPTCP token hash table entries: 32768 (order: 9, 
2883584 bytes, linear)
[    4.672747][    T1] UDP hash table entries: 16384 (order: 9, 2621440 
bytes, linear)
[    4.684508][    T1] UDP-Lite hash table entries: 16384 (order: 9, 
2621440 bytes, linear)
[    4.696264][    T1] NET: Registered protocol family 1
[    4.704132][    T1] RPC: Registered named UNIX socket transport module.
[    4.712545][    T1] RPC: Registered udp transport module.
[    4.719577][    T1] RPC: Registered tcp transport module.
[    4.726671][    T1] RPC: Registered tcp NFSv4.1 backchannel transport 
module.
[    4.735900][    T1] NET: Registered protocol family 44
[    4.742537][    T1] pci 0000:00:06.0: BAR 0: assigned [mem 
0x40000000-0x4fffffff 64bit pref]
[    4.774063][    T1] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 
window]
[    4.783210][    T1] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff 
window]
[    4.792051][    T1] pci_bus 0000:00: resource 6 [mem 
0x000a0000-0x000bffff window]
[    4.801478][    T1] pci_bus 0000:00: resource 7 [mem 
0x40000000-0xfbffffff window]
[    4.812601][    T1] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    4.821801][    T1] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    4.832079][    T1] pci 0000:00:00.0: quirk_natoma+0x0/0x60 took 
10036 usecs
[    4.843114][    T1] pci 0000:00:01.0: Activating ISA DMA hang 
workarounds
[    4.853830][    T1] pci 0000:00:01.0: quirk_isa_dma_hangs+0x0/0x60 
took 10464 usecs
[    4.864873][    T1] pci 0000:00:03.0: Video device with shadowed ROM 
at [mem 0x000c0000-0x000dffff]
[    4.876550][    T1] pci 0000:00:03.0: pci_fixup_video+0x0/0x220 took 
11652 usecs
[    4.890576][    T1] xen: --> pirq=34 -> irq=36 (gsi=36)
[    4.902384][    T1] pci 0000:00:05.0: 
quirk_usb_early_handoff+0x0/0xb80 took 16008 usecs
[    4.912847][    T1] PCI: CLS 0 bytes, default 64
[    4.918823][    T1] PCI-DMA: Using software bounce buffering for IO 
(SWIOTLB)
[    4.919297][  T120] Unpacking initramfs...
[    4.928050][    T1] software IO TLB: mapped [mem 
0x000000003ba30000-0x000000003fa30000] (64MB)
[    5.113717][    T1] clocksource: tsc: mask: 0xffffffffffffffff 
max_cycles: 0x229833f6470, max_idle_ns: 440795327230 ns
[    5.132532][    T1] Initialise system trusted keyrings
[    5.140184][    T1] Key type blacklist registered
[    5.146824][    T1] workingset: timestamp_bits=40 max_order=23 
bucket_order=0
[    5.174550][    T1] zbud: loaded
[    5.184144][    T1] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    5.195204][    T1] FS-Cache: Netfs 'nfs' registered for caching
[    5.204637][    T1] NFS: Registering the id_resolver key type
[    5.216693][    T1] Key type id_resolver registered
[    5.231875][    T1] Key type id_legacy registered
[    5.251095][    T1] nfs4filelayout_init: NFSv4 File Layout Driver 
Registering...
[    5.284410][    T1] FS-Cache: Netfs 'cifs' registered for caching
[    5.300623][    T1] Key type cifs.spnego registered
[    5.312321][    T1] Key type cifs.idmap registered
[    5.324312][    T1] fuse: init (API version 7.33)
[    5.334441][    T1] SGI XFS with ACLs, security attributes, realtime, 
quota, no debug enabled
[    5.356882][    T1] integrity: Platform Keyring initialized
[    5.380863][    T1] NET: Registered protocol family 38
[    5.392013][    T1] Key type asymmetric registered
[    5.400035][    T1] Asymmetric key parser 'x509' registered
[    5.409516][    T1] Block layer SCSI generic (bsg) driver version 0.4 
loaded (major 240)
[    5.435525][    T1] io scheduler mq-deadline registered
[    5.453004][    T1] start plist test
[    5.467282][    T1] end plist test
[    5.478298][    T1] shpchp: Standard Hot Plug PCI Controller Driver 
version: 0.4
[    5.494018][    T1] cirrusfb 0000:00:03.0: Cirrus Logic chipset on 
PCI bus, RAM (4096 kB) at 0x50000000
[    5.509345][    T1] fbcon: CL Picasso4 (fb0) is primary device
[    5.829551][    T1] Console: switching to colour frame buffer device 
80x30
[    5.898177][    T1] input: Power Button as 
/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    5.916426][    T1] ACPI: button: Power Button [PWRF]
[    5.923863][    T1] input: Sleep Button as 
/devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
[    5.937414][    T1] ACPI: button: Sleep Button [SLPF]
[    5.970460][    T1] xen:xen_evtchn: Event-channel device installed
[    5.997427][    T1] xen: --> pirq=22 -> irq=24 (gsi=24)
[    6.013583][    T1] xen:grant_table: Grant tables using version 1 layout
[    6.032896][    T1] Grant table initialized
[    6.047797][    T1] Initialising Xen pvcalls frontend driver
[    6.059880][    T1] Serial: 8250/16550 driver, 32 ports, IRQ sharing 
enabled
[    6.113463][    T1] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A
[    6.145250][    T1] Linux agpgart interface v0.103
[    6.177360][  T144] FDC 0 is a S82078B
[    6.191043][    T1] loop: module loaded
[    6.204117][    T1] ata_piix 0000:00:01.1: version 2.13
[    6.214001][    T1] ata_piix 0000:00:01.1: enabling device (0000 -> 
0001)
[    6.282718][  T147] blkfront: xvda: flush diskcache: enabled; 
persistent grants: enabled; indirect descriptors: enabled;
[    6.477649][    T1] scsi host0: ata_piix
[    6.497944][    T1] scsi host1: ata_piix
[    6.499563][  T120] Freeing initrd memory: 5824K
[    6.511572][    T1] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 
0xc200 irq 14
[    6.539466][  T147] blkfront: xvdb: flush diskcache: enabled; 
persistent grants: enabled; indirect descriptors: enabled;
[    6.542648][    T1] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 
0xc208 irq 15
[    6.589290][    T1] libphy: Fixed MDIO Bus: probed
[    6.602930][    T1] tun: Universal TUN/TAP device driver, 1.6
[    6.607681][  T147] blkfront: xvdc: flush diskcache: enabled; 
persistent grants: enabled; indirect descriptors: enabled;
[    6.630800][    T1] PPP generic driver version 2.4.2
[    6.642655][    T1] xen_netfront: Initialising Xen virtual ethernet 
driver
[    6.660660][    T1] VFIO - User Level meta-driver version: 0.3
[    6.674310][  T147] xen_netfront: backend supports XDP headroom
[    6.677139][    T1] i8042: PNP: PS/2 Controller 
[PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    6.737672][    T1] serio: i8042 KBD port at 0x60,0x64 irq 1
[    6.750315][    T1] serio: i8042 AUX port at 0x60,0x64 irq 12
[    6.761326][    T1] mousedev: PS/2 mouse device common for all mice
[    6.770325][    T1] input: PC Speaker as 
/devices/platform/pcspkr/input/input2
[    6.782407][    T1] rtc_cmos 00:02: registered as rtc0
[    6.788866][    T1] rtc_cmos 00:02: setting system clock to 
2021-06-14T23:00:28 UTC (1623711628)
[    6.799948][    T1] rtc_cmos 00:02: alarms up to one day, 114 bytes 
nvram, hpet irqs
[    6.810064][    T1] i2c /dev entries driver
[    6.815754][    T1] piix4_smbus 0000:00:01.3: SMBus Host Controller 
not enabled!
[    6.826304][    T1] device-mapper: uevent: version 1.0.3
[    6.833858][    T1] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) 
initialised: dm-devel@redhat.com
[    6.845487][    T1] EDAC sbridge: Seeking for: PCI ID 8086:2fa0
[    6.853202][    T1] EDAC sbridge:  Ver: 1.1.2
[    6.860789][    T1] intel_pstate: CPU model not supported
[    6.872344][    T1] ledtrig-cpu: registered to indicate activity on CPUs
[    6.888920][    T1] NET: Registered protocol family 10
[    6.904591][    T1] Segment Routing with IPv6
[    6.913080][    T1] NET: Registered protocol family 17
[    6.920930][    T1] Key type dns_resolver registered
[    6.931929][    T1] IPI shorthand broadcast: enabled
[    6.940569][    T1] AVX2 version of gcm_enc/dec engaged.
[    6.947999][    T1] AES CTR mode by8 optimization enabled
[    6.960698][    T1] sched_clock: Marking stable (5325535054, 
1635099995)->(8958580926, -1997945877)
[    6.985319][    T1] registered taskstats version 1
[    6.995883][    T1] Loading compiled-in X.509 certificates
[    7.006689][    T1] Loaded X.509 cert 'Build time autogenerated 
kernel key: 4855da498a739ec59685b03b61a943f1fee5db94'
[    7.029613][    T1] Key type ._fscrypt registered
[    7.038119][    T1] Key type .fscrypt registered
[    7.050737][    T1] Key type fscrypt-provisioning registered
[    7.076264][    T1] Key type encrypted registered
[    7.096338][    T1] xenbus_probe_frontend: Device with no driver: 
device/pci/0
[    7.115726][    T1] xenbus_probe_frontend: Device with no driver: 
device/vkbd/0
[    7.129174][    T1] PM:   Magic number: 9:454:49
[    7.139569][    T1] RAS: Correctable Errors collector initialized.
[    7.166109][    T1] Freeing unused kernel image (initmem) memory: 2692K
[    7.187097][    T1] Write protecting the kernel read-only data: 63488k
[    7.204674][    T1] Freeing unused kernel image (text/rodata gap) 
memory: 2024K
[    7.217515][    T1] Freeing unused kernel image (rodata/data gap) 
memory: 1200K
[    7.233657][    T1] Run /init as init process
[    7.249063][    T1]   with arguments:
[    7.262046][    T1]     /init
[    7.275373][    T1]   with environment:
[    7.286303][    T1]     HOME=/
[    7.297475][    T1]     TERM=linux
[    7.309446][    T1]     real_root=LABEL=RAID-GT
[    7.316690][    T1]     intel_iommu=on
[    7.323916][    T1]     pti=off
[32;1m>>[0;39m[1m Genkernel 4.2.1 (2021-06-14 13:58:02 UTC). Linux 
kernel 5.13.0-rc6-x86_64 [0;39m
[    8.041945][  T102] input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input3
[32;1m>>[0;39m[1m Activating udev ... [0;39m
[   10.155010][ T2645] udevd[2645]: starting eudev-3.2.10
[   10.375852][ T2663] ACPI: bus type USB registered
[   10.385577][ T2663] usbcore: registered new interface driver usbfs
[   10.405777][ T2663] usbcore: registered new interface driver hub
[   10.417795][ T2663] usbcore: registered new device driver usb
[   10.484195][ T2663] xhci_hcd 0000:00:05.0: xHCI Host Controller
[   10.496895][ T2663] xhci_hcd 0000:00:05.0: new USB bus registered, 
assigned bus number 1
[   10.575406][ T2663] xhci_hcd 0000:00:05.0: hcc params 0x0200eec0 hci 
version 0x110 quirks 0x0000000000800010
[   10.610013][ T2663] usb usb1: New USB device found, idVendor=1d6b, 
idProduct=0002, bcdDevice= 5.13
[   10.638595][ T2663] usb usb1: New USB device strings: Mfr=3, 
Product=2, SerialNumber=1
[   10.655835][ T2663] usb usb1: Product: xHCI Host Controller
[   10.664769][ T2663] usb usb1: Manufacturer: Linux 5.13.0-rc6-x86_64 
xhci-hcd
[   10.673950][ T2663] usb usb1: SerialNumber: 0000:00:05.0
[   10.683186][ T2663] hub 1-0:1.0: USB hub found
[   10.690075][ T2663] hub 1-0:1.0: 2 ports detected
[   10.698791][ T2663] xhci_hcd 0000:00:05.0: xHCI Host Controller
[   10.708191][ T2663] xhci_hcd 0000:00:05.0: new USB bus registered, 
assigned bus number 2
[   10.719887][ T2663] xhci_hcd 0000:00:05.0: Host supports USB 3.1 
Enhanced SuperSpeed
[   10.733401][ T2663] usb usb2: We don't know the algorithms for LPM 
for this host, disabling LPM.
[   10.745074][ T2663] usb usb2: New USB device found, idVendor=1d6b, 
idProduct=0003, bcdDevice= 5.13
[   10.755991][ T2663] usb usb2: New USB device strings: Mfr=3, 
Product=2, SerialNumber=1
[   10.765660][ T2663] usb usb2: Product: xHCI Host Controller
[   10.772771][ T2663] usb usb2: Manufacturer: Linux 5.13.0-rc6-x86_64 
xhci-hcd
[   10.784533][ T2663] usb usb2: SerialNumber: 0000:00:05.0
[   10.799548][ T2663] hub 2-0:1.0: USB hub found
[   10.810746][ T2663] hub 2-0:1.0: 2 ports detected
[32;1m>>[0;39m[1m Determining root device (trying LABEL=RAID-GT) ...
[32;1m>>[0;39m[1m Root device detected as /dev/xvda! [0;39m
[32;1m>>[0;39m[1m Mounting /dev/xvda as root ... [0;39m
[32;1m>>[0;39m[1m Using mount -t xfs -o ro /dev/xvda /newroot [0;39m
[   10.933758][ T2707] XFS (xvda): Mounting V5 Filesystem
[   11.006090][  T102] usb 1-1: new high-speed USB device number 2 using 
xhci_hcd
[   11.103723][ T2707] XFS (xvda): Ending clean mount
[   11.211807][  T102] usb 1-1: New USB device found, idVendor=1a40, 
idProduct=0101, bcdDevice= 1.11
[   11.225866][  T102] usb 1-1: New USB device strings: Mfr=0, 
Product=1, SerialNumber=0
[   11.237347][  T102] usb 1-1: Product: USB 2.0 Hub
[   11.249932][  T102] hub 1-1:1.0: USB hub found
[   11.259712][  T102] hub 1-1:1.0: 4 ports detected
[32;1m>>[0;39m[1m Restoring console log level (7) ... [0;39m
[32;1m>>[0;39m[1m Switching to real root: switch_root /newroot 
/lib/systemd/systemd  [0;39m
[   11.653028][  T102] usb 1-1.1: new high-speed USB device number 3 
using xhci_hcd
[   11.857879][  T102] usb 1-1.1: New USB device found, idVendor=1a40, 
idProduct=0201, bcdDevice= 1.00
[   11.873158][  T102] usb 1-1.1: New USB device strings: Mfr=0, 
Product=1, SerialNumber=0
[   11.888408][  T102] usb 1-1.1: Product: USB 2.0 Hub [MTT]
[   11.904923][  T102] hub 1-1.1:1.0: USB hub found
[   11.915361][  T102] hub 1-1.1:1.0: 7 ports detected
[   12.218120][   T64] usb 1-1.1.3: new full-speed USB device number 4 
using xhci_hcd
[   12.494108][   T64] usb 1-1.1.3: New USB device found, idVendor=1d50, 
idProduct=6122, bcdDevice= 1.01
[   12.507285][   T64] usb 1-1.1.3: New USB device strings: Mfr=1, 
Product=2, SerialNumber=0
[   12.519926][   T64] usb 1-1.1.3: Product: Ultimate Hacking Keyboard
[   12.529243][   T64] usb 1-1.1.3: Manufacturer: Ultimate Gadget 
Laboratories
[   12.947814][    T1] systemd[1]: systemd 248 running in system mode. 
(+PAM -AUDIT -SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS 
+OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD 
-LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -BZIP2 +LZ4 
+XZ -ZLIB -ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=hybrid)
[   13.007356][    T1] systemd[1]: Detected virtualization xen.
[   13.024478][    T1] systemd[1]: Detected architecture x86-64.

Welcome to [1;32mGentoo/Linux[0m!

[   15.137251][    T1] systemd[1]: 
/lib/systemd/system/lightdm.service:8: Standard output type syslog is 
obsolete, automatically updating to journal. Please update your unit 
file, and consider removing the setting altogether.
[   15.187556][    T1] systemd[1]: Queued start job for default target 
Graphical Interface.
[   15.259644][    T1] systemd[1]: Created slice system-getty.slice.
[[0;32m  OK  [0m] Created slice [0;1;39msystem-getty.slice[0m.
[   15.285724][    T1] systemd[1]: Created slice system-modprobe.slice.
[[0;32m  OK  [0m] Created slice [0;1;39msystem-modprobe.slice[0m.
[   15.312934][    T1] systemd[1]: Created slice 
system-serial\x2dgetty.slice.
[[0;32m  OK  [0m] Created slice 
[0;1;39msystem-serial\x2dgetty.slice[0m.
[   15.340276][    T1] systemd[1]: Created slice system-syslog\x2dng.slice.
[[0;32m  OK  [0m] Created slice [0;1;39msystem-syslog\x2dng.slice[0m.
[   15.373562][    T1] systemd[1]: Created slice 
system-systemd\x2dfsck.slice.
[[0;32m  OK  [0m] Created slice 
[0;1;39msystem-systemd\x2dfsck.slice[0m.
[   15.406079][    T1] systemd[1]: Created slice User and Session Slice.
[[0;32m  OK  [0m] Created slice [0;1;39mUser and Session Slice[0m.
[   15.423218][    T1] systemd[1]: Started Dispatch Password Requests to 
Console Directory Watch.
[[0;32m  OK  [0m] Started [0;1;39mDispatch Password …ts to Console 
Directory Watch[0m.
[   15.445670][    T1] systemd[1]: Started Forward Password Requests to 
Wall Directory Watch.
[[0;32m  OK  [0m] Started [0;1;39mForward Password R…uests to Wall 
Directory Watch[0m.
[   15.465687][    T1] systemd[1]: Condition check resulted in Arbitrary 
Executable File Formats File System Automount Point being skipped.
[   15.483892][    T1] systemd[1]: Reached target Paths.
[[0;32m  OK  [0m] Reached target [0;1;39mPaths[0m.
[   15.496104][    T1] systemd[1]: Reached target Slices.
[[0;32m  OK  [0m] Reached target [0;1;39mSlices[0m.
[   15.521323][    T1] systemd[1]: Listening on Process Core Dump Socket.
[   15.547519][    T1] systemd[1]: Listening on initctl Compatibility Named Pipe.
[   15.586114][    T1] systemd[1]: Listening on Journal Audit Socket.
[   15.609791][    T1] systemd[1]: Listening on Journal Socket (/dev/log).
[   15.632269][    T1] systemd[1]: Listening on Journal Socket.
[   15.674676][    T1] systemd[1]: Listening on Network Service Netlink Socket.
[   15.712060][    T1] systemd[1]: Listening on udev Control Socket.
[   15.735675][    T1] systemd[1]: Listening on udev Kernel Socket.
[   15.763745][    T1] systemd[1]: Mounting Huge Pages File System...
[   15.786328][    T1] systemd[1]: Mounting POSIX Message Queue File System...
[   15.807103][    T1] systemd[1]: Mounting Kernel Debug File System...
[   15.829477][    T1] systemd[1]: Mounting Kernel Trace File System...
[   15.850653][    T1] systemd[1]: Starting Create list of static device nodes for the current kernel...
[   15.874516][    T1] systemd[1]: Starting Load Kernel Module configfs...
[   15.898604][    T1] systemd[1]: Starting Load Kernel Module drm...
[   15.934381][    T1] systemd[1]: Starting Load Kernel Module fuse...
[   15.968942][    T1] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped.
[   15.993591][    T1] systemd[1]: Starting File System Check on Root Device...
[   16.026529][    T1] systemd[1]: Starting Journal Service...
[   16.059228][    T1] systemd[1]: Starting Load Kernel Modules...
[   16.083609][    T1] systemd[1]: Starting Coldplug All udev Devices...
[   16.114369][    T1] systemd[1]: Mounted Huge Pages File System.
[   16.148798][    T1] systemd[1]: Mounted POSIX Message Queue File System.
[   16.180552][    T1] systemd[1]: Mounted Kernel Debug File System.
[   16.205187][    T1] systemd[1]: Mounted Kernel Trace File System.
[   16.232991][    T1] systemd[1]: Finished Create list of static device nodes for the current kernel.
[   16.269094][    T1] systemd[1]: modprobe@configfs.service: Deactivated successfully.
[   16.295407][    T1] systemd[1]: Finished Load Kernel Module configfs.
[   16.310922][    T1] systemd[1]: Started Journal Service.
[  OK  ] Finished Load Kernel Module drm.
[  OK  ] Finished Load Kernel Module fuse.
         Mounting FUSE Control File System...
         Mounting Kernel Configuration File System...
[  OK  ] Finished File System Check on Root Device.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Mounted Kernel Configuration File System.
         Starting Remount Root and Kernel File Systems...
[  OK  ] Finished Coldplug All udev Devices.
[   16.818156][ T2804] xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
[  OK  ] Finished Remount Root and Kernel File Systems.
         Starting Flush Journal to Persistent Storage...
         Starting Load/Save Random Seed...
         Starting Create Static Device Nodes in /dev...
[   16.919087][ T2795] systemd-journald[2795]: Received client request to flush runtime journal.
[  OK  ] Finished Load/Save Random Seed.
[   17.303339][ T2797] [drm] amdgpu kernel modesetting enabled.
[   17.317088][ T2797] amdgpu: CRAT table disabled by module option
[   17.328266][ T2797] amdgpu: Virtual CRAT table created for CPU
[  OK  ] Finished Create Static Device Nodes in /dev.
[   17.338547][ T2797] amdgpu: Topology: Add CPU node
[  OK  ] Reached target Local File Systems (Pre).
[   17.358475][ T2797] xen: --> pirq=64 -> irq=40 (gsi=40)
[   17.370634][ T2797] [drm] initializing kernel modesetting (PITCAIRN 0x1002:0x6811 0x1787:0x2337 0x00).
[   17.388573][ T2797] amdgpu 0000:00:06.0: amdgpu: Trusted Memory Zone (TMZ) feature not supported
[   17.412516][ T2797] [drm] register mmio base: 0x53040000
[   17.421615][ T2797] [drm] register mmio size: 262144
[  OK  ] Reached target Containers.
[   17.429451][ T2797] [drm] add ip block number 0 <si_common>
[   17.440046][ T2797] [drm] add ip block number 1 <gmc_v6_0>
[   17.450850][ T2797] [drm] add ip block number 2 <si_ih>
[   17.459704][ T2797] [drm] add ip block number 3 <gfx_v6_0>
[   17.469800][ T2797] [drm] add ip block number 4 <si_dma>
[   17.481630][ T2797] [drm] add ip block number 5 <si_dpm>
[   17.491880][ T2797] [drm] add ip block number 6 <dce_v6_0>
[   17.504557][ T2797] [drm] add ip block number 7 <uvd_v3_1>
[   17.517377][ T2797] kfd kfd: amdgpu: PITCAIRN  not supported in kfd
         Starting Rule-based Manage…for Device Events and Files...
[   17.820407][ T2797] amdgpu 0000:00:06.0: amdgpu: Fetched VBIOS from 
ROM BAR
[   17.838104][ T2797] amdgpu: ATOM BIOS: 113-C6300100-R27
[   17.848202][ T2797] [drm] GPU posting now...
[   17.870519][ T2797] [drm] vm size is 64 GB, 2 levels, block size is 
10-bit, fragment size is 9-bit
[   17.894837][ T2797] amdgpu 0000:00:06.0: amdgpu: VRAM: 2048M 
0x000000F400000000 - 0x000000F47FFFFFFF (2048M used)
[   17.908213][ T2797] amdgpu 0000:00:06.0: amdgpu: GART: 1024M 
0x000000FF00000000 - 0x000000FF3FFFFFFF
[   17.920376][ T2797] [drm] Detected VRAM RAM=2048M, BAR=256M
[   17.928787][ T2797] [drm] RAM width 256bits GDDR5
[   17.956799][ T2797] [drm] amdgpu: 2048M of VRAM memory ready
[   17.964555][ T2797] [drm] amdgpu: 3072M of GTT memory ready.
[   17.971939][ T2797] [drm] GART: num cpu pages 262144, num gpu pages 
262144
[   17.984049][ T2797] amdgpu 0000:00:06.0: amdgpu: PCIE GART of 1024M 
enabled (table at 0x000000F400000000).
[   18.035491][ T2797] [drm] Internal thermal controller with fan control
[   18.049827][ T2797]     ui class: none
[   18.056300][ T2797]     internal class: boot
[   18.062643][ T2797]     caps:
[   18.067278][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   18.074867][ T2797] [drm]         power level 0    sclk: 15000 mclk: 
15000 vddc: 900 vddci: 950 pcie gen: 3
[   18.090020][ T2797]     status: c r b
[   18.097197][ T2797]     ui class: performance
[   18.107288][ T2797]     internal class: none
[   18.113878][ T2797]     caps:
[   18.118430][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   18.124781][ T2797] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   18.138324][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   18.151343][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   18.164373][ T2797] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   18.177297][ T2797]     status:
[   18.181304][ T2797]     ui class: none
[   18.186258][ T2797]     internal class: uvd
[   18.191745][ T2797]     caps: video
[   18.196155][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   18.203724][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   18.216835][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   18.229721][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   18.242063][ T2797]     status:
[   18.246771][ T2797]     ui class: none
[   18.253158][ T2797]     internal class: ulv
[   18.260493][ T2797]     caps:
[   18.267011][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   18.267018][ T2797] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   18.267023][ T2797] [drm]         power level 1    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   18.267026][ T2797] [drm]         power level 2    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  OK  ] Started Rule-based Manager for Device Events and Files.
[   18.267029][ T2797]     status:
[   18.267035][ T2797] [drm] amdgpu: dpm initialized
[   18.267741][ T2797] [drm] AMDGPU Display Connectors
[   18.370005][ T2797] [drm] Connector 0:
[   18.377128][ T2797] [drm]   DP-1
[   18.383200][ T2797] [drm]   HPD5
[   18.392889][ T2797] [drm]   DDC: 0x1950 0x1950 0x1951 0x1951 0x1952 
0x1952 0x1953 0x1953
[   18.411775][ T2797] [drm]   Encoders:
[   18.417685][ T2797] [drm]     DFP1: INTERNAL_UNIPHY2
[   18.426558][ T2797] [drm] Connector 1:
[   18.435561][ T2797] [drm]   HDMI-A-1
[   18.451407][ T2797] [drm]   HPD1
[   18.468892][ T2797] [drm]   DDC: 0x1954 0x1954 0x1955 0x1955 0x1956 
0x1956 0x1957 0x1957
[   18.494327][ T2797] [drm]   Encoders:
[   18.502168][ T2797] [drm]     DFP2: INTERNAL_UNIPHY1
[   18.510053][ T2797] [drm] Connector 2:
[   18.519890][ T2797] [drm]   DVI-I-1
[   18.530288][ T2797] [drm]   HPD6
[   18.538796][ T2797] [drm]   DDC: 0x1960 0x1960 0x1961 0x1961 0x1962 
0x1962 0x1963 0x1963
[   18.556927][ T2797] [drm]   Encoders:
[   18.568999][ T2797] [drm]     DFP3: INTERNAL_UNIPHY
[   18.569004][ T2797] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
[   18.596517][ T2797] [drm] Found UVD firmware Version: 64.0 Family ID: 13
[  OK  ] Found device /dev/ttyS0.
[   18.761364][  T102] input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input4
[  OK  ] Found device /dev/hvc0.
[  OK  ] Listening on Load/Save RF …itch Status /dev/rfkill Watch.
[   18.954908][ T2845] input: Xen Virtual Keyboard as 
/devices/virtual/input/input5
[  OK  ] Found device /dev/disk/by-label/RAID-GT-SWAP.
[   19.003182][ T2845] input: Xen Virtual Pointer as 
/devices/virtual/input/input6
         Activating swap /dev/disk/by-label/RAID-GT-SWAP...
[  OK  ] Found device /dev/disk/by-label/RAID-GT-PT.
         Starting File System Check…ev/disk/by-label/RAID-GT-PT...
[  OK  ] Finished File System Check…/dev/disk/by-label/RAID-GT-PT.
         Mounting /pt...
[   19.227265][ T2865] XFS (xvdc): Mounting V5 Filesystem
[   19.252685][ T2832] hid: raw HID events driver (C) Jiri Kosina
[   19.311433][ T2832] usbcore: registered new interface driver usbhid
[   19.330800][ T2832] usbhid: USB HID core driver
[   19.345140][ T2797] switching from power state:
[   19.361215][ T2797]     ui class: none
[   19.367430][ T2797]     internal class: boot
[   19.375710][ T2797]     caps:
[   19.381495][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   19.390691][ T2797] [drm]         power level 0    sclk: 15000 mclk: 
15000 vddc: 900 vddci: 950 pcie gen: 3
[   19.407695][ T2797]     status: c b
[   19.413402][ T2797] switching to power state:
[   19.419706][ T2797]     ui class: performance
[   19.426595][ T2797]     internal class: none
[   19.433893][ T2797]     caps:
[   19.436061][ T2832] hid-generic 0003:1D50:6122.0001: hiddev0,hidraw0: 
USB HID v1.10 Device [Ultimate Gadget Laboratories Ultimate Hacking 
Keyboard] on usb-0000:00:05.0-1.1.3/input0
[   19.440181][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   19.469256][ T2832] input: Ultimate Gadget Laboratories Ultimate 
Hacking Keyboard as 
/devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1.1/1-1.1.3/1-1.1.3:1.1/0003:1D50:6122.0002/input/input7 

[   19.478750][ T2797] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   19.478765][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   19.560381][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   19.563440][ T2832] hid-generic 0003:1D50:6122.0002: input,hidraw1: 
USB HID v1.10 Keyboard [Ultimate Gadget Laboratories Ultimate Hacking 
Keyboard] on usb-0000:00:05.0-1.1.3/input1
[   19.572986][ T2797] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   19.573000][ T2797]     status: r
[   19.599278][ T2832] input: Ultimate Gadget Laboratories Ultimate 
Hacking Keyboard as 
/devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1.1/1-1.1.3/1-1.1.3:1.2/0003:1D50:6122.0003/input/input8 

[   19.624115][  T102] switching from power state:
[   19.665310][  T102]     ui class: performance
[   19.672703][  T102]     internal class: none
[   19.678905][  T102]     caps:
[   19.682074][ T2832] hid-generic 0003:1D50:6122.0003: input,hidraw2: 
USB HID v1.10 Device [Ultimate Gadget Laboratories Ultimate Hacking 
Keyboard] on usb-0000:00:05.0-1.1.3/input2
[   19.684988][  T102] [drm]     uvd    vclk: 0 dclk: 0
[   19.738144][ T2832] input: Ultimate Gadget Laboratories Ultimate 
Hacking Keyboard as 
/devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1.1/1-1.1.3/1-1.1.3:1.3/0003:1D50:6122.0004/input/input9 

[   19.754397][  T102] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   19.820490][  T102] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   19.841931][  T102] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   19.843255][ T2865] XFS (xvdc): Ending clean mount
[   19.864342][  T102] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   19.864363][  T102]     status:
[   19.866318][ T2832] hid-generic 0003:1D50:6122.0004: input,hidraw3: 
USB HID v1.10 Device [Ultimate Gadget Laboratories Ultimate Hacking 
Keyboard] on usb-0000:00:05.0-1.1.3/input3
[   19.868918][ T2832] input: Ultimate Gadget Laboratories Ultimate 
Hacking Keyboard as 
/devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1.1/1-1.1.3/1-1.1.3:1.4/0003:1D50:6122.0005/input/input10 

[   19.870054][ T2832] hid-generic 0003:1D50:6122.0005: input,hidraw4: 
USB HID v1.10 Mouse [Ultimate Gadget Laboratories Ultimate Hacking 
Keyboard] on usb-0000:00:05.0-1.1.3/input4
[   20.011355][  T102]  c
[   20.015666][  T102] switching to power state:
[   20.023234][  T102]     ui class: performance
[   20.030345][  T102]     internal class: none
[   20.037912][  T102]     caps:
[   20.042646][  T102] [drm]     uvd    vclk: 0 dclk: 0
[   20.050858][  T102] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   20.066462][  T102] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.078796][  T102] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   20.092617][  T102] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.113424][  T102]     status: r
[   20.124362][ T2797] switching from power state:
[   20.141252][ T2797]     ui class: performance
[   20.155644][ T2797]     internal class: none
[   20.163503][ T2797]     caps:
[   20.171363][ T2797] [drm]     uvd    vclk: 0 dclk: 0
[   20.181686][ T2797] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   20.199096][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.211841][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.224918][ T2797] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.237144][ T2797]     status: c
[   20.241731][ T2797] switching to power state:
[   20.248193][ T2797]     ui class: none
[   20.252742][ T2797]     internal class: uvd
[   20.258404][ T2797]     caps: video
[   20.262427][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   20.269326][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.283300][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.295992][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   20.316014][ T2797]     status: r
[   20.465354][ T2865] xfs filesystem being mounted at /pt supports 
timestamps until 2038 (0x7fffffff)
[   20.480237][ T2797] [drm] UVD initialized successfully.
[   20.498794][ T2797] amdgpu 0000:00:06.0: amdgpu: SE 2, SH per SE 2, 
CU per SH 5, active_cu_number 20
[  OK  ] Mounted /pt.
[   20.535331][ T2797] switching from power state:
[   20.542439][ T2860] Adding 7813116k swap on /dev/xvdb. Priority:-2 
extents:1 across:7813116k SSFS
[   20.543669][ T2848] xen: --> pirq=68 -> irq=45 (gsi=45)
[   20.544040][ T2848] snd_hda_intel 0000:00:07.0: Force to non-snoop mode
[   20.547092][ T2797]     ui class: none
[   20.624498][ T2797]     internal class: uvd
[   20.624527][ T2797]     caps: video
[   20.624531][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   20.624541][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  OK  ] Activated swap /dev/disk/by-label/RAID-GT-SWAP.
[   20.624546][ T2797] [drm]         power level 1    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.624550][ T2797] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.624553][ T2797]     status: c
[   20.624559][ T2797] switching to power state:
[   20.624562][ T2797]     ui class: none
[   20.624566][ T2797]     internal class: uvd
[   20.624571][ T2797]     caps: video
[   20.624575][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   20.624578][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.624582][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   20.624585][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   20.624588][ T2797]     status: r
[   20.624920][ T2797] switching from power state:
[   20.868574][ T2797]     ui class: none
[   20.868579][ T2797]     internal class: uvd
[   20.868585][ T2797]     caps: video
[   20.868588][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   20.868592][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.868596][ T2797] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  OK  ] Reached target Swap.
[   20.948992][ T2797] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   20.970505][ T2797]     status: c
[   20.980935][ T2797] switching to power state:
[   20.992730][ T2797]     ui class: none
[   21.000855][ T2797]     internal class: uvd
[   21.013359][ T2797]     caps: video
[   21.023259][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.035066][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.053816][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.053823][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   21.053828][ T2797]     status: r
[   21.083582][ T2797] switching from power state:
[   21.083587][ T2797]     ui class: none
[   21.083589][ T2797]     internal class: uvd
[   21.083594][ T2797]     caps: video
[   21.083598][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083602][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083606][ T2797] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083609][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083612][ T2797]     status: c
[   21.083616][ T2797] switching to power state:
[   21.083617][ T2797]     ui class: none
[   21.083619][ T2797]     internal class: uvd
[   21.083622][ T2797]     caps: video
[   21.083636][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083638][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
         Mounting /tmp...
[   21.083641][ T2797] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.083644][ T2797] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   21.083647][ T2797]     status: r
[   21.083761][ T2797] switching from power state:
[   21.083763][ T2797]     ui class: none
[   21.083765][ T2797]     internal class: uvd
[   21.083768][ T2797]     caps: video
[   21.083772][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083774][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083777][ T2797] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083780][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083782][ T2797]     status: c
[   21.083786][ T2797] switching to power state:
[   21.083787][ T2797]     ui class: none
[   21.083788][ T2797]     internal class: uvd
[   21.083802][ T2797]     caps: video
[   21.083805][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083807][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.083809][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.083812][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   21.083815][ T2797]     status: r
[   21.083894][ T2797] switching from power state:
[   21.083896][ T2797]     ui class: none
[   21.083898][ T2797]     internal class: uvd
[   21.083901][ T2797]     caps: video
[   21.083904][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083906][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083909][ T2797] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083911][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.083914][ T2797]     status: c
[   21.083927][ T2797] switching to power state:
[   21.083928][ T2797]     ui class: none
[   21.083930][ T2797]     internal class: uvd
[   21.083933][ T2797]     caps: video
[   21.083936][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.083938][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.083941][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.083944][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   21.083946][ T2797]     status: r
[   21.084063][ T2797] switching from power state:
[   21.084065][ T2797]     ui class: none
[   21.084067][ T2797]     internal class: uvd
[   21.084070][ T2797]     caps: video
[   21.084074][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.084076][ T2797] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.084078][ T2797] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.084081][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.084084][ T2797]     status: c
[   21.084087][ T2797] switching to power state:
[   21.084088][ T2797]     ui class: none
[   21.084090][ T2797]     internal class: uvd
[   21.084105][ T2797]     caps: video
[   21.084108][ T2797] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.084110][ T2797] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.084112][ T2797] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   21.084115][ T2797] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   21.084118][ T2797]     status: r
[   21.268402][ T2797] [drm] fb mappable at 0x4049F000
[   21.494320][  T125] switching from power state:
[   21.505337][ T2797] [drm] vram apper at 0x40000000
[   21.529295][  T125]     ui class: none
[   21.529329][  T125]     internal class: uvd
[   21.559587][ T2797] [drm] size 8294400
[   21.589893][  T125]     caps: video
[   21.598231][ T2797] [drm] fb depth is 24
[   21.607010][  T125]
[   21.607020][  T125] [drm]     uvd    vclk: 72000 dclk: 56000
[   21.607026][  T125] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.607030][  T125] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.607034][  T125] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   21.607037][  T125]     status: c
[   21.614126][ T2797] [drm]    pitch is 7680
[   22.063166][  T125] switching to power state:
[   22.063174][  T125]     ui class: performance
         Mounting /var/tmp/portage...
[   22.063178][  T125]     internal class: none
[   22.063184][  T125]     caps:
[   22.063187][  T125] [drm]     uvd    vclk: 0 dclk: 0
[   22.063191][  T125] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   22.063195][  T125] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.063199][  T125] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   22.063202][  T125] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.063221][  T125]     status: r
[   22.066179][ T2675] switching from power state:
[   22.190757][    T7] input: HDA ATI HDMI HDMI/DP,pcm=3 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input11
[   22.205742][ T2675]     ui class: performance
[   22.205748][ T2675]     internal class: none
[   22.205754][ T2675]     caps:
[   22.205757][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.205760][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   22.205764][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.205768][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.217260][    T7] input: HDA ATI HDMI HDMI/DP,pcm=7 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input12
[   22.217729][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.236644][    T7] input: HDA ATI HDMI HDMI/DP,pcm=8 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input13
[   22.240916][ T2675]     status: c
[   22.240926][ T2675] switching to power state:
[   22.240928][ T2675]     ui class: performance
[   22.240931][ T2675]     internal class: none
[   22.240935][ T2675]     caps:
[   22.240937][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.240941][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   22.240945][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.240948][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   22.250306][    T7] input: HDA ATI HDMI HDMI/DP,pcm=9 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input14
[   22.253256][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.253264][ T2675]     status: r
[   22.265608][ T2675] switching from power state:
[   22.283058][    T7] input: HDA ATI HDMI HDMI/DP,pcm=10 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input15
[   22.298166][ T2675]     ui class: performance
[   22.298172][ T2675]     internal class: none
[   22.298178][ T2675]     caps:
[   22.298180][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.298184][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   22.317108][    T7] input: HDA ATI HDMI HDMI/DP,pcm=11 as 
/devices/pci0000:00/0000:00:07.0/sound/card0/input16
[   22.342210][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.342221][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.342224][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.342228][ T2675]     status: c
[   22.342235][ T2675] switching to power state:
[   22.342237][ T2675]     ui class: performance
[   22.342240][ T2675]     internal class: none
[   22.342243][ T2675]     caps:
[   22.342246][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.342249][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   22.342252][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.342255][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   22.342258][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.342261][ T2675]     status: r
[   22.343815][ T2675] switching from power state:
[   22.731779][ T2675]     ui class: performance
[   22.731785][ T2675]     internal class: none
[   22.731791][ T2675]     caps:
[   22.731793][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.731797][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   22.731801][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.731804][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.731807][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.731810][ T2675]     status: c
[   22.731813][ T2675] switching to power state:
[   22.731815][ T2675]     ui class: performance
[   22.731817][ T2675]     internal class: none
[   22.731820][ T2675]     caps:
[   22.731823][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.731825][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   22.731828][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.731830][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   22.731833][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   22.731836][ T2675]     status: r
[   22.732093][ T2675] switching from power state:
[   22.744686][ T2797] amdgpu 0000:00:06.0: [drm] fb1: amdgpudrmfb frame 
buffer device
[   22.755361][ T2675]     ui class: performance
[   22.755367][ T2675]     internal class: none
[   22.755373][ T2675]     caps:
[   22.755375][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   22.755378][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   22.755383][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   22.755386][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.017318][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.017328][ T2675]     status: c
[   23.017334][ T2675] switching to power state:
[   23.017337][ T2675]     ui class: performance
[  OK  ] Finished Flush Journal to Persistent Storage.
[   23.017339][ T2675]     internal class: none
[   23.017343][ T2675]     caps:
[   23.017345][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   23.017348][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   23.017352][ T2675] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   23.017355][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   23.017358][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.017361][ T2675]     status: r
[   23.017589][ T2675] switching from power state:
[   23.182177][ T2675]     ui class: performance
[   23.182182][ T2675]     internal class: none
[   23.182188][ T2675]     caps:
[   23.182190][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[[0;32m  OK  [[   23.182193][ T2675] [drm]         power level 0    
sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
0m] Mounted [0;[   23.182198][ T2675] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
1;39m/tmp[0m.[   23.182201][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.182204][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3

[   23.182207][ T2675]     status: c
[   23.182211][ T2675] switching to power state:
[   23.182212][ T2675]     ui class: performance
[   23.182214][ T2675]     internal class: none
[   23.182218][ T2675]     caps:
[   23.182220][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   23.182222][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   23.391065][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   23.405447][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   23.419055][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.431643][ T2675]     status: r
[   23.431849][ T2675] switching from power state:
[   23.445936][ T2675]     ui class: performance
[[0;32m  OK  [[   23.452080][ T2675]     internal class: none
0m] Mounted [0;[   23.458954][ T2675]     caps:
1;39m/var/tmp/po[   23.464338][ T2675] [drm]     uvd    vclk: 0 dclk: 0
rtage[0m.[   23.472420][ T2675] [drm]         power level 0 sclk: 30000 
mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3

[   23.485258][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   23.500797][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.521450][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.542111][ T2675]     status: c
[   23.542123][ T2675] switching to power state:
[   23.542125][ T2675]     ui class: performance
[   23.542128][ T2675]     internal class: none
[[0;32m  OK  [[   23.542132][ T2675]     caps:
0m] Reached targ[   23.542134][ T2675] [drm]     uvd    vclk: 0 dclk: 0
et [0;1;39mLoca[   23.542139][ T2675] [drm]         power level 0    
sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   23.542143][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
l File Systems[[   23.542146][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
0m.
[   23.542149][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.542152][ T2675]     status: r
[   23.542388][ T2675] switching from power state:
[   23.698543][ T2675]     ui class: performance
[   23.698548][ T2675]     internal class: none
[   23.698554][ T2675]     caps:
         Starting Create Volatile Files and Directories...
[   23.698556][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   23.698560][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   23.698563][ T2675] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   23.698567][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.698570][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.698572][ T2675]     status: c
[   23.830684][ T2675] switching to power state:
[   23.838840][ T2675]     ui class: performance
[   23.847149][ T2675]     internal class: none
[   23.852715][ T2797] [drm] Initialized amdgpu 3.41.0 20150101 for 
0000:00:06.0 on minor 0
[   23.856983][ T2675]     caps:
[   23.856990][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   23.856994][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   23.916632][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   23.932606][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   23.949306][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   23.962636][ T2675]     status: r
[   23.967096][  T102] switching from power state:
[   23.980643][  T102]     ui class: performance
[   23.991446][  T102]     internal class: none
[   23.999549][  T102]     caps:
[   24.007038][  T102] [drm]     uvd    vclk: 0 dclk: 0
[   24.015980][  T102] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.037299][  T102] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.051754][  T102] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.069148][  T102] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.086106][  T102]     status: c
[   24.091638][  T102] switching to power state:
[   24.099441][  T102]     ui class: performance
[   24.109180][  T102]     internal class: none
[   24.118614][  T102]     caps:
[   24.125326][  T102] [drm]     uvd    vclk: 0 dclk: 0
[   24.136458][  T102] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.154602][  T102] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.169410][  T102] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.183031][  T102] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.194934][  T102]     status: r
[   24.198901][ T2675] switching from power state:
[   24.206626][ T2675]     ui class: performance
[   24.212745][ T2675]     internal class: none
[   24.219308][ T2675]     caps:
[   24.223759][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.232206][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.232212][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.232216][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.232219][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[[0;32m  OK  [[   24.232222][ T2675]     status: c
0m] Finished [0[   24.232227][ T2675] switching to power state:
;1;39mLoad Kerne[   24.232229][ T2675]     ui class: performance
[   24.232231][ T2675]     internal class: none
l Modules[0m.
[   24.232235][ T2675]     caps:
[   24.355105][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.363939][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.383813][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.398739][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.398746][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.398750][ T2675]     status: r
[   24.411364][ T2675] switching from power state:
[   24.452834][ T2675]     ui class: performance
[   24.465425][ T2675]     internal class: none
[   24.472258][ T2675]     caps:
[[0;32m  OK  [[   24.479230][ T2675] [drm]     uvd    vclk: 0 dclk: 0
0m] Finished [0[   24.489117][ T2675] [drm]         power level 0    
sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
;1;39mCreate Vol[   24.512912][ T2675] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
atile Files and [   24.535167][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
Directories[0m.[   24.559315][ T2675] [drm]         power level 3    
sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3

[   24.575576][ T2675]     status: c
[   24.581118][ T2675] switching to power state:
[   24.588728][ T2675]     ui class: performance
[   24.595638][ T2675]     internal class: none
[   24.602357][ T2675]     caps:
[   24.602366][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.602373][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.602377][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
         Starting Apply Kernel Variables...
[   24.602381][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.602384][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.602387][ T2675]     status: r
[   24.608553][ T2675] switching from power state:
[   24.608562][ T2675]     ui class: performance
[   24.608565][ T2675]     internal class: none
[   24.608572][ T2675]     caps:
[   24.608575][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.608578][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.608583][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.608586][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608589][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608592][ T2675]     status: c
[   24.608596][ T2675] switching to power state:
[   24.608598][ T2675]     ui class: performance
[   24.608599][ T2675]     internal class: none
[   24.608603][ T2675]     caps:
[   24.608606][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.608608][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.608611][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.608614][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.608617][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608620][ T2675]     status: r
[   24.608807][ T2675] switching from power state:
[   24.608809][ T2675]     ui class: performance
[   24.608811][ T2675]     internal class: none
[   24.608815][ T2675]     caps:
[   24.608817][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.608819][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.608822][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.608825][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608828][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608831][ T2675]     status: c
[   24.608835][ T2675] switching to power state:
[   24.608836][ T2675]     ui class: performance
[   24.608837][ T2675]     internal class: none
[   24.608840][ T2675]     caps:
[   24.608843][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.608845][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.608848][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.608850][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.608853][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.608856][ T2675]     status: r
[   24.609056][ T2675] switching from power state:
[   24.609059][ T2675]     ui class: performance
[   24.609061][ T2675]     internal class: none
[   24.609064][ T2675]     caps:
[   24.609066][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609068][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.609071][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609074][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609077][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609080][ T2675]     status: c
[   24.609083][ T2675] switching to power state:
[   24.609084][ T2675]     ui class: performance
[   24.609086][ T2675]     internal class: none
[   24.609089][ T2675]     caps:
[   24.609091][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609093][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.609096][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609098][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.609101][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609103][ T2675]     status: r
[   24.609330][ T2675] switching from power state:
[   24.609333][ T2675]     ui class: performance
[   24.609334][ T2675]     internal class: none
[   24.609338][ T2675]     caps:
[   24.609340][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609342][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.609345][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609347][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609350][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609353][ T2675]     status: c
[   24.609356][ T2675] switching to power state:
[   24.609357][ T2675]     ui class: performance
[   24.609358][ T2675]     internal class: none
[   24.609361][ T2675]     caps:
[   24.609364][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609365][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.609368][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609370][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.609373][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609376][ T2675]     status: r
[   24.609527][ T2675] switching from power state:
[   24.609529][ T2675]     ui class: performance
[   24.609530][ T2675]     internal class: none
[   24.609534][ T2675]     caps:
[   24.609536][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609538][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.609541][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609543][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609546][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609549][ T2675]     status: c
[   24.609552][ T2675] switching to power state:
[   24.609553][ T2675]     ui class: performance
[   24.609555][ T2675]     internal class: none
[   24.609558][ T2675]     caps:
[   24.609560][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609562][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.609565][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609568][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.609571][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609573][ T2675]     status: r
[   24.609698][ T2675] switching from power state:
[   24.609701][ T2675]     ui class: performance
[   24.609702][ T2675]     internal class: none
[   24.609706][ T2675]     caps:
[   24.609708][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609710][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   24.609713][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609715][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609718][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609721][ T2675]     status: c
[   24.609724][ T2675] switching to power state:
[   24.609725][ T2675]     ui class: performance
[   24.609727][ T2675]     internal class: none
[   24.609730][ T2675]     caps:
[   24.609732][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   24.609733][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   24.609736][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   24.609739][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   24.609741][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   24.609744][ T2675]     status: r
[   24.609867][ T2675] switching from power state:
[   26.004045][ T2675]     ui class: performance
[   26.004053][ T2675]     internal class: none
[   26.004060][ T2675]     caps:
         Starting Update UTMP about System Boot/Shutdown...
[   26.004062][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   26.004067][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   26.004072][ T2675] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.004075][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.004079][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.004082][ T2675]     status: c
[   26.107188][ T2675] switching to power state:
[   26.114243][ T2675]     ui class: performance
[   26.114257][ T2675]     internal class: none
[   26.114265][ T2675]     caps:
[[0;32m  OK  [[   26.114269][ T2675] [drm]     uvd    vclk: 0 dclk: 0
0m] Finished [0[   26.114275][ T2675] [drm]         power level 0    
sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
;1;39mApply Kern[   26.168989][ T2675] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
el Variables[0m[   26.187803][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
.
[   26.207730][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.222667][ T2675]     status: r
[   26.227755][    T7] switching from power state:
[   26.234639][    T7]     ui class: performance
[   26.234647][    T7]     internal class: none
[   26.234655][    T7]     caps:
         Starting Network Service...
[   26.234657][    T7] [drm]     uvd    vclk: 0 dclk: 0
[   26.234662][    T7] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   26.234666][    T7] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.234670][    T7] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.234677][    T7] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.331885][    T7]     status: c
[   26.336148][    T7] switching to power state:
[   26.341864][    T7]     ui class: none
[   26.348125][    T7]     internal class: uvd
[   26.353875][    T7]     caps: video
[   26.353885][    T7] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.353890][    T7] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.353894][    T7] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.353897][    T7] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   26.353901][    T7]     status: r
[[0;32m  OK  [0m] Finished [0;1;39mUpdate UTMP about System 
Boot/Shutdown[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mSystem Initialization[0m.
[[0;32m  OK  [0m] Started [0;1;39mDaily Cleanup of Temporary 
Directories[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mTimers[0m.
[[0;32m  OK  [0m] Listening on [0;1;39mACPID Listen Socket[0m.
[[0;32m  OK  [0m] Listening on [0;1;39mAvahi mDNS/DNS-SD Stack 
Activation Socket[0m.
[[0;32m  OK  [0m] Listening on [0;1;39mCUPS S[   26.532759][ T2675] 
switching from power state:
[   26.541552][ T2675]     ui class: none
cheduler[0m.
[   26.547731][ T2675]     internal class: uvd
[   26.555550][ T2675]     caps: video
[   26.560604][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.569568][ T2675] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.584803][ T2675] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.584812][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.584816][ T2675]     status: c
[   26.584824][ T2675] switching to power state:
[   26.584827][ T2675]     ui class: none
[   26.584829][ T2675]     internal class: uvd
[[0;32m  OK  [[   26.584833][ T2675]     caps: video
0m] Listening on[   26.584836][ T2675] [drm]     uvd    vclk: 72000 
dclk: 56000
[   26.584840][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
  [0;1;39mD-Bus [   26.584843][ T2675] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
System Message B[   26.584846][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   26.584849][ T2675]     status: r
us Socket[0m.
[   26.602682][ T2675] switching from power state:
[   26.602688][ T2675]     ui class: none
[   26.602691][ T2675]     internal class: uvd
[   26.602698][ T2675]     caps: video
[   26.602702][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.602706][ T2675] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602721][ T2675] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602725][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602728][ T2675]     status: c
[   26.602731][ T2675] switching to power state:
[   26.602733][ T2675]     ui class: none
[   26.602734][ T2675]     internal class: uvd
[   26.602738][ T2675]     caps: video
[   26.602742][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.602744][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.602747][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.602750][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   26.602753][ T2675]     status: r
[   26.602912][ T2675] switching from power state:
[   26.602914][ T2675]     ui class: none
[   26.602916][ T2675]     internal class: uvd
[   26.602920][ T2675]     caps: video
[   26.602923][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.602925][ T2675] [drm]         power level 0    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602928][ T2675] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602932][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   26.602935][ T2675]     status: c
[   26.602938][ T2675] switching to power state:
[   26.602940][ T2675]     ui class: none
[   26.602941][ T2675]     internal class: uvd
[   26.602945][ T2675]     caps: video
[   26.602948][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   26.602949][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.602952][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   26.602997][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   26.603000][ T2675]     status: r
[   26.621334][ T2675] switching from power state:
[   27.170889][ T2675]     ui class: none
[   27.170897][ T2675]     internal class: uvd
[   27.170903][ T2675]     caps: video
[[0;32m  OK  [[   27.170907][ T2675] [drm]     uvd    vclk: 72000 
dclk: 56000
0m] Reached targ[   27.170912][ T2675] [drm]         power level 0    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.170918][ T2675] [drm]         power level 1    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
et [0;1;39mSock[   27.170921][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
ets[0m.
[   27.170925][ T2675]     status: c
[   27.170928][ T2675] switching to power state:
[   27.170930][ T2675]     ui class: none
[   27.170932][ T2675]     internal class: uvd
[   27.170935][ T2675]     caps: video
[   27.170939][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   27.170941][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.170945][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.170948][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   27.170951][ T2675]     status: r
[   27.171240][ T2675] switching from power state:
[   27.384522][ T2675]     ui class: none
[   27.384533][ T2675]     internal class: uvd
[   27.384545][ T2675]     caps: video
[   27.384550][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[[0;32m  OK  [[   27.384559][ T2675] [drm]         power level 0    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
0m] Reached targ[   27.384564][ T2675] [drm]         power level 1    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
et [0;1;39mBasi[   27.384568][ T2675] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.384572][ T2675]     status: c
c System[0m.
[   27.384576][ T2675] switching to power state:
[   27.384579][ T2675]     ui class: none
[   27.384580][ T2675]     internal class: uvd
[   27.384584][ T2675]     caps: video
[   27.384587][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   27.384591][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.384595][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.384598][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   27.384601][ T2675]     status: r
[   27.384931][ T2675] switching from power state:
[   27.588304][ T2675]     ui class: none
[   27.588311][ T2675]     internal class: uvd
[   27.588317][ T2675]     caps: video
[[0;32m  OK  [[   27.588321][ T2675] [drm]     uvd    vclk: 72000 
dclk: 56000
0m] Started [0;[   27.588328][ T2675] [drm]         power level 0    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
1;39mACPI event [   27.588332][ T2675] [drm]         power level 1    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
daemon[0m.
[   27.588336][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.588339][ T2675]     status: c
[   27.588343][ T2675] switching to power state:
[   27.588345][ T2675]     ui class: none
[   27.588346][ T2675]     internal class: uvd
[   27.588350][ T2675]     caps: video
[   27.588353][ T2675] [drm]     uvd    vclk: 72000 dclk: 56000
[   27.588356][ T2675] [drm]         power level 0    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.588359][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.588362][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   27.588365][ T2675]     status: r
[   27.588493][    T7] switching from power state:
[   27.832641][    T7]     ui class: none
[   27.832648][    T7]     internal class: uvd
[   27.832656][    T7]     caps: video
         Starting Save/Restore Sound Card State...
[   27.832660][    T7] [drm]     uvd    vclk: 72000 dclk: 56000
[   27.832666][    T7] [drm]         power level 0    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.832671][    T7] [drm]         power level 1    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.832674][    T7] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   27.832678][    T7]     status: c
[   27.832682][    T7] switching to power state:
[   27.832684][    T7]     ui class: performance
[   27.832686][    T7]     internal class: none
[   27.946928][    T7]     caps:
[   27.946946][    T7] [drm]     uvd    vclk: 0 dclk: 0
[   27.946980][    T7] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   27.946986][    T7] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   27.946989][    T7] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   27.946993][    T7] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
         Starting Avahi mDNS/DNS-SD Stack...
[   27.946996][    T7]     status: r
[   27.967396][ T2675] switching from power state:
[   28.077933][ T2675]     ui class: performance
[   28.087671][ T2675]     internal class: none
[   28.098642][ T2675]     caps:
[   28.104986][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.114477][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   28.114497][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.114501][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.114505][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[[0;32m  OK  [[   28.114508][ T2675]     status: c
0m] Started [0;[   28.114519][ T2675] switching to power state:
1;39mD-Bus Syste[   28.114521][ T2675]     ui class: performance
[   28.114524][ T2675]     internal class: none
m Message Bus[0[   28.114528][ T2675]     caps:
m.
[   28.114531][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.114533][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   28.277230][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.293803][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   28.293819][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.293825][ T2675]     status: r
[   28.294117][ T2675] switching from power state:
[  OK  ] Started Turn off segmentation offloading for eth0.
[   28.337995][ T2675]     ui class: performance
[   28.347115][ T2675]     internal class: none
[   28.359330][ T2675]     caps:
[   28.370448][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.389580][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   28.420716][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.441515][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.459397][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.477933][ T2675]     status: c
[   28.482524][ T2675] switching to power state:
[   28.490844][ T2675]     ui class: performance
[   28.499195][ T2675]     internal class: none
[   28.507032][ T2675]     caps:
[   28.512190][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.519530][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   28.534919][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.554026][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   28.583441][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.583451][ T2675]     status: r
[   28.583780][ T2675] switching from power state:
[   28.633284][ T2675]     ui class: performance
         Starting User Login Management...
[   28.642513][ T2675]     internal class: none
[   28.653295][ T2675]     caps:
[   28.662731][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.671593][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   28.688842][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.706836][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.706862][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.706867][ T2675]     status: c
[   28.706875][ T2675] switching to power state:
[  OK  ] Started Network Service.
[   28.706879][ T2675]     ui class: performance
[   28.706881][ T2675]     internal class: none
[   28.706885][ T2675]     caps:
[   28.706888][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.706891][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   28.706895][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.706898][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   28.706901][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.706904][ T2675]     status: r
[   28.707177][ T2675] switching from power state:
[   28.928008][ T2675]     ui class: performance
[   28.928015][ T2675]     internal class: none
[   28.928033][ T2675]     caps:
[   28.928035][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.928040][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   28.928044][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.928048][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.928051][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.928054][ T2675]     status: c
[   28.928058][ T2675] switching to power state:
[   28.928060][ T2675]     ui class: performance
[   28.928062][ T2675]     internal class: none
[   28.928066][ T2675]     caps:
[   28.928068][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   28.928071][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   28.928074][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   28.928077][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   28.928080][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   28.928083][ T2675]     status: r
[   29.233319][ T2675] switching from power state:
[   29.254047][ T2675]     ui class: performance
[   29.268924][ T2675]     internal class: none
[   29.282021][ T2675]     caps:
[   29.289907][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[  OK  ] Finished Save/Restore Sound Card State.
[   29.302718][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   29.326786][ T2675] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   29.348210][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.367502][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.386626][ T2675]     status: c
[   29.396126][ T2675] switching to power state:
[   29.408535][ T2675]     ui class: performance
[   29.422304][ T2675]     internal class: none
[   29.437922][ T2675]     caps:
[   29.450429][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   29.450448][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   29.450456][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   29.450460][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  OK  ] Reached target Sound Card.
[   29.450463][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.450467][ T2675]     status: r
[   29.451056][ T2675] switching from power state:
[   29.557996][ T2675]     ui class: performance
[   29.568152][ T2675]     internal class: none
[   29.578429][ T2675]     caps:
[   29.585991][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   29.599302][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   29.599328][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   29.599333][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.599337][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
         Starting Wait for Network to be Configured...
[   29.599340][ T2675]     status: c
[   29.599348][ T2675] switching to power state:
[   29.599351][ T2675]     ui class: performance
[   29.599353][ T2675]     internal class: none
[   29.599357][ T2675]     caps:
[   29.599360][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   29.599362][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   29.599366][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   29.599368][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   29.599371][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.599374][ T2675]     status: r
[   29.599630][ T2675] switching from power state:
[   29.824571][ T2675]     ui class: performance
[   29.833462][ T2675]     internal class: none
[   29.845266][ T2675]     caps:
[   29.854526][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   29.854536][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   29.854541][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   29.854544][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
         Starting Network Name Resolution...
[   29.854547][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   29.854550][ T2675]     status: c
[   29.854571][ T2675] switching to power state:
[   29.854574][ T2675]     ui class: performance
[   29.854586][ T2675]     internal class: none
[   29.991246][ T2675]     caps:
[   29.998101][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.007420][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   30.024878][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.051909][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   30.080930][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.107449][ T2675]     status: r
[   30.114164][ T2675] switching from power state:
[   30.124897][ T2675]     ui class: performance
[   30.131319][ T2675]     internal class: none
[   30.137315][ T2675]     caps:
[   30.142026][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.148668][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   30.167036][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.186208][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.204702][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.219749][ T2675]     status: c
[   30.225218][ T2675] switching to power state:
[   30.233779][ T2675]     ui class: performance
[   30.240397][ T2675]     internal class: none
[   30.248058][ T2675]     caps:
[   30.255891][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.271786][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   30.271803][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.271807][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   30.271811][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.271814][ T2675]     status: r
[   30.272180][ T2675] switching from power state:
[   30.375665][ T2675]     ui class: performance
[   30.375675][ T2675]     internal class: none
[   30.375685][ T2675]     caps:
[   30.375688][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.375692][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   30.375697][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.375701][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.375705][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.375708][ T2675]     status: c
[   30.375712][ T2675] switching to power state:
[   30.375715][ T2675]     ui class: performance
[   30.375716][ T2675]     internal class: none
[   30.375720][ T2675]     caps:
[   30.375722][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.375725][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   30.375728][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.375731][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   30.586906][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.586924][ T2675]     status: r
[   30.587259][ T2675] switching from power state:
[   30.587263][ T2675]     ui class: performance
[   30.587268][ T2675]     internal class: none
[   30.587272][ T2675]     caps:
[   30.587274][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.587277][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   30.587281][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.587285][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.587287][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.587291][ T2675]     status: c
[   30.587294][ T2675] switching to power state:
[   30.587295][ T2675]     ui class: performance
[   30.587297][ T2675]     internal class: none
[   30.587300][ T2675]     caps:
[   30.587302][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.587304][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   30.587307][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.587310][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   30.587313][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.587315][ T2675]     status: r
[   30.732851][ T2675] switching from power state:
[   30.907762][ T2675]     ui class: performance
[   30.907775][ T2675]     internal class: none
[   30.907782][ T2675]     caps:
[   30.907785][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.907790][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   30.907795][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.907798][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.907802][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   30.907805][ T2675]     status: c
[   30.907809][ T2675] switching to power state:
[   30.907811][ T2675]     ui class: performance
[   30.907813][ T2675]     internal class: none
[   30.907816][ T2675]     caps:
[   30.907819][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   30.907821][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   30.907825][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   30.907828][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.172410][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172425][ T2675]     status: r
[   31.172720][ T2675] switching from power state:
[   31.172724][ T2675]     ui class: performance
[   31.172727][ T2675]     internal class: none
[   31.172731][ T2675]     caps:
[   31.172733][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.172737][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.172740][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.172744][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172747][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172750][ T2675]     status: c
[   31.172753][ T2675] switching to power state:
[   31.172755][ T2675]     ui class: performance
[   31.172757][ T2675]     internal class: none
[   31.172760][ T2675]     caps:
[   31.172762][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.172764][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.172767][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.172770][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.172773][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172776][ T2675]     status: r
[   31.172929][ T2675] switching from power state:
[   31.172931][ T2675]     ui class: performance
[   31.172933][ T2675]     internal class: none
[   31.172937][ T2675]     caps:
[   31.172939][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.172941][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.172944][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.172947][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172950][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.172953][ T2675]     status: c
[   31.173013][ T2675] switching to power state:
[   31.173015][ T2675]     ui class: performance
[   31.173016][ T2675]     internal class: none
[   31.173020][ T2675]     caps:
[   31.173022][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173024][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.173027][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173030][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.173033][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173036][ T2675]     status: r
[   31.173168][ T2675] switching from power state:
[   31.173171][ T2675]     ui class: performance
[   31.173172][ T2675]     internal class: none
[   31.173176][ T2675]     caps:
[   31.173178][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173180][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.173183][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173187][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173190][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173193][ T2675]     status: c
[   31.173196][ T2675] switching to power state:
[   31.173198][ T2675]     ui class: performance
[   31.173200][ T2675]     internal class: none
[   31.173203][ T2675]     caps:
[   31.173205][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173207][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.173210][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173212][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.173215][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173217][ T2675]     status: r
[   31.173427][ T2675] switching from power state:
[   31.173431][ T2675]     ui class: performance
[   31.173434][ T2675]     internal class: none
[   31.173439][ T2675]     caps:
[   31.173441][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173444][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.173448][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173451][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173454][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173456][ T2675]     status: c
[   31.173463][ T2675] switching to power state:
[   31.173465][ T2675]     ui class: performance
[   31.173467][ T2675]     internal class: none
[   31.173471][ T2675]     caps:
[   31.173473][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173475][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.173478][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173481][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.173484][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173487][ T2675]     status: r
[   31.173908][ T2675] switching from power state:
[   31.173914][ T2675]     ui class: performance
[   31.173917][ T2675]     internal class: none
[   31.173922][ T2675]     caps:
[   31.173924][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173927][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.173931][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.173935][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173938][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.173941][ T2675]     status: c
[   31.173945][ T2675] switching to power state:
[   31.173946][ T2675]     ui class: performance
[   31.173949][ T2675]     internal class: none
[   31.173952][ T2675]     caps:
[   31.173995][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.173998][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.174001][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174004][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.174007][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174011][ T2675]     status: r
[   31.174235][ T2675] switching from power state:
[   31.174239][ T2675]     ui class: performance
[   31.174241][ T2675]     internal class: none
[   31.174246][ T2675]     caps:
[   31.174248][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.174251][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.174255][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174258][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174261][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174264][ T2675]     status: c
[   31.174268][ T2675] switching to power state:
[   31.174269][ T2675]     ui class: performance
[   31.174271][ T2675]     internal class: none
[   31.174274][ T2675]     caps:
[   31.174276][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.174278][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.174281][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174284][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.174286][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174289][ T2675]     status: r
[   31.174554][ T2675] switching from power state:
[   31.174557][ T2675]     ui class: performance
[   31.174559][ T2675]     internal class: none
[   31.174564][ T2675]     caps:
[   31.174566][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.174568][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.174571][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174574][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174577][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174579][ T2675]     status: c
[   31.174583][ T2675] switching to power state:
[   31.174584][ T2675]     ui class: performance
[   31.174586][ T2675]     internal class: none
[   31.174589][ T2675]     caps:
[   31.174591][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.174593][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.174595][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174598][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.174601][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174603][ T2675]     status: r
[   31.174915][ T2675] switching from power state:
[   31.174917][ T2675]     ui class: performance
[   31.174920][ T2675]     internal class: none
[   31.174925][ T2675]     caps:
[   31.174927][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.174930][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   31.174933][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.174937][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174940][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.174943][ T2675]     status: c
[   31.174946][ T2675] switching to power state:
[   31.174948][ T2675]     ui class: performance
[   31.174950][ T2675]     internal class: none
[   31.174953][ T2675]     caps:
[   31.174999][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   31.175001][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   31.175004][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   31.175007][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   31.175010][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   31.175013][ T2675]     status: r
[   31.175303][ T2675] switching from power state:
[   33.125724][ T2675]     ui class: performance
[   33.125733][ T2675]     internal class: none
[   33.125741][ T2675]     caps:
[  OK  ] Started Avahi mDNS/DNS-SD Stack.
[   33.125744][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.125749][ T2675] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.125753][ T2675] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.125757][ T2675] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.125760][ T2675] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.125763][ T2675]     status: c
[   33.125767][ T2675] switching to power state:
[   33.125768][ T2675]     ui class: performance
[   33.125770][ T2675]     internal class: none
[   33.125774][ T2675]     caps:
[   33.125777][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.125780][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.350477][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.350496][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.350500][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.350505][ T2675]     status: r
[   33.351873][ T2675] switching from power state:
[   33.439182][ T2675]     ui class: performance
[   33.439190][ T2675]     internal class: none
[   33.439197][ T2675]     caps:
[   33.439199][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.439204][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.439209][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.439212][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.439216][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.439219][ T2675]     status: c
[   33.439223][ T2675] switching to power state:
[   33.439224][ T2675]     ui class: performance
[   33.439226][ T2675]     internal class: none
[   33.439230][ T2675]     caps:
[   33.439232][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.439235][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.439238][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.439240][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.439243][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.439245][ T2675]     status: r
[   33.451718][ T2675] switching from power state:
[   33.451738][ T2675]     ui class: performance
[   33.451742][ T2675]     internal class: none
[   33.451751][ T2675]     caps:
[   33.451754][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.451761][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.451766][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.451770][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.451773][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.451776][ T2675]     status: c
[   33.451781][ T2675] switching to power state:
[   33.451783][ T2675]     ui class: performance
[   33.451785][ T2675]     internal class: none
[   33.451789][ T2675]     caps:
[   33.451791][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.451793][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.451870][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.451875][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.451878][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.451881][ T2675]     status: r
[   33.466727][ T2675] switching from power state:
[   33.466737][ T2675]     ui class: performance
[   33.466741][ T2675]     internal class: none
[   33.466748][ T2675]     caps:
[   33.466751][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.466756][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.466761][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.466764][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.466768][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.466770][ T2675]     status: c
[   33.466774][ T2675] switching to power state:
[   33.466776][ T2675]     ui class: performance
[   33.466778][ T2675]     internal class: none
[   33.466781][ T2675]     caps:
[   33.466784][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.466786][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.466789][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.466792][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.466795][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.466906][ T2675]     status: r
[   33.476544][ T2675] switching from power state:
[   33.476554][ T2675]     ui class: performance
[   33.476558][ T2675]     internal class: none
[   33.476565][ T2675]     caps:
[   33.476568][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.476573][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.476578][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.476582][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.476585][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.476588][ T2675]     status: c
[   33.476592][ T2675] switching to power state:
[   33.476594][ T2675]     ui class: performance
[   33.476596][ T2675]     internal class: none
[   33.476599][ T2675]     caps:
[   33.476601][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.476604][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.476607][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.476609][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.476612][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.476615][ T2675]     status: r
[   33.490402][ T2675] switching from power state:
[   33.490412][ T2675]     ui class: performance
[   33.490416][ T2675]     internal class: none
[   33.490424][ T2675]     caps:
[   33.490427][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.490433][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.490439][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.490442][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.490446][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.490449][ T2675]     status: c
[   33.490453][ T2675] switching to power state:
[   33.490454][ T2675]     ui class: performance
[   33.490456][ T2675]     internal class: none
[   33.490459][ T2675]     caps:
[   33.490462][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.490464][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.490467][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.490470][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.490475][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.490477][ T2675]     status: r
[   33.504295][ T2675] switching from power state:
[   33.504303][ T2675]     ui class: performance
[   33.504306][ T2675]     internal class: none
[   33.504312][ T2675]     caps:
[   33.504315][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.504319][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.504323][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.504326][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.504329][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.504332][ T2675]     status: c
[   33.504336][ T2675] switching to power state:
[   33.504337][ T2675]     ui class: performance
[   33.504339][ T2675]     internal class: none
[   33.504343][ T2675]     caps:
[   33.504345][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.504347][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.504351][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.504353][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.504356][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.504358][ T2675]     status: r
[   33.518182][ T2675] switching from power state:
[   33.518189][ T2675]     ui class: performance
[   33.518192][ T2675]     internal class: none
[   33.518198][ T2675]     caps:
[   33.518201][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.518204][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.518209][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.518212][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.518216][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.518219][ T2675]     status: c
[   33.518222][ T2675] switching to power state:
[   33.518224][ T2675]     ui class: performance
[   33.518226][ T2675]     internal class: none
[   33.518229][ T2675]     caps:
[   33.518232][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.518234][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.518237][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.518239][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.518242][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.518245][ T2675]     status: r
[   33.532361][ T2675] switching from power state:
[   33.532369][ T2675]     ui class: performance
[   33.532372][ T2675]     internal class: none
[   33.532378][ T2675]     caps:
[   33.532380][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.532384][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.532388][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.532392][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.532395][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.532398][ T2675]     status: c
[   33.532402][ T2675] switching to power state:
[   33.532403][ T2675]     ui class: performance
[   33.532405][ T2675]     internal class: none
[   33.532409][ T2675]     caps:
[   33.532411][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.532413][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.532416][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.532418][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.532421][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.532424][ T2675]     status: r
[   33.546094][ T2675] switching from power state:
[   33.546113][ T2675]     ui class: performance
[   33.546119][ T2675]     internal class: none
[   33.546129][ T2675]     caps:
[   33.546131][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.546137][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.546143][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.546146][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.546150][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.546153][ T2675]     status: c
[   33.546158][ T2675] switching to power state:
[   33.546159][ T2675]     ui class: performance
[   33.546161][ T2675]     internal class: none
[   33.546164][ T2675]     caps:
[   33.546167][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.546169][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.546172][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.546175][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.546178][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.546181][ T2675]     status: r
[   33.550781][ T2675] switching from power state:
[   33.550786][ T2675]     ui class: performance
[   33.550789][ T2675]     internal class: none
[   33.550794][ T2675]     caps:
[   33.550796][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.550800][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.550804][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.550808][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.550811][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.550814][ T2675]     status: c
[   33.550817][ T2675] switching to power state:
[   33.550819][ T2675]     ui class: performance
[   33.550821][ T2675]     internal class: none
[   33.550824][ T2675]     caps:
[   33.550827][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.550829][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.550832][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.550835][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.550838][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.550841][ T2675]     status: r
[   33.557688][ T2675] switching from power state:
[   33.557695][ T2675]     ui class: performance
[   33.557698][ T2675]     internal class: none
[   33.557712][ T2675]     caps:
[   33.557714][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.557718][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.557722][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.557728][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.557731][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.557734][ T2675]     status: c
[   33.557737][ T2675] switching to power state:
[   33.557739][ T2675]     ui class: performance
[   33.557741][ T2675]     internal class: none
[   33.557744][ T2675]     caps:
[   33.557746][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.557749][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.557752][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.557755][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.557758][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.557762][ T2675]     status: r
[   33.563751][ T2675] switching from power state:
[   33.563756][ T2675]     ui class: performance
[   33.563759][ T2675]     internal class: none
[   33.563764][ T2675]     caps:
[   33.563767][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.563770][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.563774][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.563778][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.563781][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.563784][ T2675]     status: c
[   33.563787][ T2675] switching to power state:
[   33.563789][ T2675]     ui class: performance
[   33.563791][ T2675]     internal class: none
[   33.563802][ T2675]     caps:
[   33.563804][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.563807][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.563809][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.563812][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.563815][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.563821][ T2675]     status: r
[   33.573022][ T2675] switching from power state:
[   33.573032][ T2675]     ui class: performance
[   33.573035][ T2675]     internal class: none
[   33.573042][ T2675]     caps:
[   33.573045][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.573049][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.573054][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.573058][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.573061][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.573064][ T2675]     status: c
[   33.573067][ T2675] switching to power state:
[   33.573069][ T2675]     ui class: performance
[   33.573070][ T2675]     internal class: none
[   33.573074][ T2675]     caps:
[   33.573076][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.573079][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.573082][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.573085][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.573087][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.573090][ T2675]     status: r
[   33.580388][ T2675] switching from power state:
[   33.580394][ T2675]     ui class: performance
[   33.580397][ T2675]     internal class: none
[   33.580402][ T2675]     caps:
[   33.580405][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.580408][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.580412][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.580416][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.580419][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.580422][ T2675]     status: c
[   33.580425][ T2675] switching to power state:
[   33.580427][ T2675]     ui class: performance
[   33.580429][ T2675]     internal class: none
[   33.580432][ T2675]     caps:
[   33.580435][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.580437][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.580439][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.580442][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.580445][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.580447][ T2675]     status: r
[   33.594667][ T2675] switching from power state:
[   33.594675][ T2675]     ui class: performance
[   33.594679][ T2675]     internal class: none
[   33.594685][ T2675]     caps:
[   33.594688][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.594692][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   33.594697][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.594700][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.594703][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.594706][ T2675]     status: c
[   33.594710][ T2675] switching to power state:
[   33.594711][ T2675]     ui class: performance
[   33.594713][ T2675]     internal class: none
[   33.594716][ T2675]     caps:
[   33.594719][ T2675] [drm]     uvd    vclk: 0 dclk: 0
[   33.594721][ T2675] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   33.594724][ T2675] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   33.594726][ T2675] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   33.594729][ T2675] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   33.594732][ T2675]     status: r
[[0;32m  OK  [0m] Started [0;1;39mNetwork Name Resolution[0m.
[[0;32m  OK  [0m] Finished [0;1;39mWait for Network to be 
Configured[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mNetwork[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mNetwork is Online[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mHost and Network Name 
Lookups[0m.
          Mounting [0;1;39m/home[0m...
          Mounting [0;1;39m/mnt/gentoo-share[0m...
          Mounting [0;1;39m/usr/local/portage[0m...
          Mounting [0;1;39m/usr/portage[0m...
[[0;32m  OK  [0m] Started [0;1;39mBacula File Daemon service[0m.
          Starting [0;1;39mCUPS Scheduler[0m...
          Starting [0;1;39mOpenSSH server daemon[0m...
[   37.268427][ T2944] FS-Cache: Duplicate cookie detected
[   37.269774][ T2946] CIFS: No dialect specified on mount. Default has 
changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1), from 
CIFS (SMB1). To use the less secure SMB1 dialect to access old servers 
which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 
on mount.
[   37.284812][ T2944] FS-Cache: O-cookie c=0000000040dffb02 
[p=0000000073984546 fl=222 nc=0 na=1]
[   37.367829][ T2946] CIFS: Attempting to mount \\gentoo\overlay
[   37.391763][ T2944] FS-Cache: O-cookie d=000000001d33de81 
n=0000000094d6da4d
[   37.413919][ T2944] FS-Cache: O-key=[16] 
'040000000200000002000801c0a80201'
[   37.413948][ T2944] FS-Cache: N-cookie c=00000000e8522cae 
[p=0000000073984546 fl=2 nc=0 na=1]
[   37.413999][ T2944] FS-Cache: N-cookie d=000000001d33de81 
n=00000000e8d4e0ce
[   37.414002][ T2944] FS-Cache: N-key=[16] 
'040000000200000002000801c0a80201'
          Starting [0;1;39mSystem Logger Daemon "default" instance[0m...
[[0;32m  OK  [0m] Mounted [0;1;39m/usr/local/portage[0m.
[[0;32m  OK  [0m] Started [0;1;39mUser Login Management[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mRemote File Systems[0m.
[[0;32m  OK  [0m] Mounted [0;1;39m/home[0m.
[[0;32m  OK  [0m] Mounted [0;1;39m/mnt/gentoo-share[0m.
[[0;32m  OK  [0m] Mounted [0;1;39m/usr/portage[0m.
[[0;32m  OK  [0m] Started [0;1;39mOpenSSH server daemon[0m.
          Starting [0;1;39mPermit User Sessions[0m...
[[0;32m  OK  [0m] Finished [0;1;39mPermit User Sessions[0m.
          Starting [0;1;39mCommand Scheduler[0m...
[[0;32m  OK  [0m] Started [0;1;39mGetty on tty1[0m.
          Starting [0;1;39mLight Display Manager[0m...
[[0;32m  OK  [0m] Started [0;1;39mSerial Getty on hvc0[0m.
[[0;32m  OK  [0m] Started [0;1;39mSerial Getty on ttyS0[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mLogin Prompts[0m.
[[0;32m  OK  [0m] Started [0;1;39mCommand Scheduler[0m.
[[0;32m  OK  [0m] Started [0;1;39mLight Display Manager[0m.
[[0;32m  OK  [0m] Reached target [0;1;39mUser and Group Name 
Lookups[0m.
          Starting [0;1;39mAccounts Service[0m...
[[0;32m  OK  [0m] Created slice [0;1;39mUser Slice of UID 0[0m.
          Starting [0;1;39mUser Runtime Directory /run/user/0[0m...
[[0;32m  OK  [0m] Finished [0;1;39mUser Runtime Directory 
/run/user/0[0m.
          Starting [0;1;39mUser Manager for UID 0[0m...
          Starting [0;1;39mManage, Install and Generate Color 
Profiles[0m...
          Starting [0;1;39mAuthorization Manager[0m...
[   39.079167][ T2988] switching from power state:
[   39.086986][ T2988]     ui class: performance
[   39.094240][ T2988]     internal class: none
[   39.100577][ T2988]     caps:
[   39.105470][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.114111][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   39.128594][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.140886][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.153951][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.171375][ T2988]     status: c
[   39.177739][ T2988] switching to power state:
[   39.185298][ T2988]     ui class: performance
[   39.194659][ T2988]     internal class: none
[   39.201982][ T2988]     caps:
[   39.207133][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.218058][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   39.237414][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.261747][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   39.280100][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.304889][ T2988]     status: r
[   39.315534][ T2988] switching from power state:
[   39.330014][ T2988]     ui class: performance
[   39.343479][ T2988]     internal class: none
[   39.356892][ T2988]     caps:
[   39.365133][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.376328][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   39.399560][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.421422][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.450887][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.481740][ T2988]     status: c
[   39.489673][ T2988] switching to power state:
[   39.502007][ T2988]     ui class: performance
[   39.512326][ T2988]     internal class: none
[   39.522164][ T2988]     caps:
[   39.529459][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.539435][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   39.562153][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.584355][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   39.603382][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.618946][ T2988]     status: r
[   39.625269][ T2988] switching from power state:
[   39.635593][ T2988]     ui class: performance
[   39.647210][ T2988]     internal class: none
[   39.659760][ T2988]     caps:
[   39.667833][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.679092][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   39.696757][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.708980][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.722003][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.735834][ T2988]     status: c
[   39.740813][ T2988] switching to power state:
[   39.747979][ T2988]     ui class: performance
[   39.753369][ T2988]     internal class: none
[   39.759024][ T2988]     caps:
[   39.763397][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.770354][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   39.783022][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.797893][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   39.814150][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.833900][ T2988]     status: r
[   39.840907][ T2988] switching from power state:
[   39.848480][ T2988]     ui class: performance
[   39.858842][ T2988]     internal class: none
[   39.866828][ T2988]     caps:
[   39.871337][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.880314][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   39.901746][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.901755][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.901759][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.901762][ T2988]     status: c
[[0;32m  OK  [[   39.901769][ T2988] switching to power state:
0m] Started [0;1;39mAuthorization Manager[0m.
[   39.901771][ T2988]     ui class: performance
[   39.901773][ T2988]     internal class: none
[   39.901777][ T2988]     caps:
[   39.901780][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   39.901782][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   39.901785][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   39.901788][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   39.901792][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   39.901794][ T2988]     status: r
[   39.902078][ T2988] switching from power state:
[   40.121025][ T2988]     ui class: performance
[   40.121031][ T2988]     internal class: none
[   40.121037][ T2988]     caps:
[[0;32m  OK  [[   40.121040][ T2988] [drm]     uvd    vclk: 0 dclk: 0
0m] Started [0;1;39mAccounts Service[0m.
[   40.121045][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   40.121050][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   40.121053][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.121056][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.222915][ T2988]     status: c
[   40.227563][ T2988] switching to power state:
[   40.233536][ T2988]     ui class: performance
[   40.239900][ T2988]     internal class: none
[   40.247703][ T2988]     caps:
[   40.252673][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   40.260537][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   40.279761][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   40.297249][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   40.297258][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.297262][ T2988]     status: r
[   40.297583][ T2988] switching from power state:
[[0;32m  OK  [[   40.350987][ T2988]     ui class: performance
0m] Started [0;1;39mUser Manager for UID 0[0m.
[   40.369597][ T2988]     internal class: none
[   40.379273][ T2988]     caps:
[   40.379279][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   40.379283][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   40.379288][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[[0;32m  OK  [[   40.379291][ T2988] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
0m] Started [0;1;39mSession 1 of user root[0m.
[   40.379294][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.379297][ T2988]     status: c
[   40.379302][ T2988] switching to power state:
[   40.379304][ T2988]     ui class: performance
[   40.379307][ T2988]     internal class: none
[   40.529834][ T2988]     caps:
[   40.535980][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   40.548568][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   40.570141][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   40.591885][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   40.616828][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.639054][ T2988]     status: r
[   40.647412][ T2988] switching from power state:
[   40.661183][ T2988]     ui class: performance
[   40.674869][ T2988]     internal class: none
[   40.690287][ T2988]     caps:
[   40.698926][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   40.713989][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   40.733415][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   40.756410][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.776282][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.793580][ T2988]     status: c
[   40.799758][ T2988] switching to power state:
[   40.809845][ T2988]     ui class: performance
[   40.820309][ T2988]     internal class: none
[   40.828718][ T2988]     caps:
[   40.834531][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   40.847800][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   40.869116][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   40.893781][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   40.893790][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   40.893794][ T2988]     status: r
[   40.894422][ T2988] switching from power state:
[[0;32m  OK  [[   40.944575][ T2988]     ui class: performance
0m] Started [0;[   40.955244][ T2988]     internal class: none
1;39mSession 3 o[   40.966209][ T2988]     caps:
f user root[0m.[   40.973563][ T2988] [drm]     uvd    vclk: 0 dclk: 0

[   40.984866][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   41.001211][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.016243][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.031878][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.050307][ T2988]     status: c
[   41.057854][ T2988] switching to power state:
[   41.071032][ T2988]     ui class: performance
[   41.084812][ T2988]     internal class: none
[   41.099211][ T2988]     caps:
[   41.108548][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.125567][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   41.165998][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.202078][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   41.235107][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.264624][ T2988]     status: r
[   41.273905][ T2988] switching from power state:
[   41.288128][ T2988]     ui class: performance
[   41.301826][ T2988]     internal class: none
[   41.315921][ T2988]     caps:
[   41.323730][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.337046][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   41.357991][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.374188][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.394410][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.411502][ T2988]     status: c
[   41.417409][ T2988] switching to power state:
[   41.427690][ T2988]     ui class: performance
[   41.436010][ T2988]     internal class: none
[   41.443439][ T2988]     caps:
[   41.447913][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.457491][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   41.472460][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.492375][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   41.518050][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.544749][ T2988]     status: r
[   41.553666][ T2988] switching from power state:
[   41.563742][ T2988]     ui class: performance
[   41.571805][ T2988]     internal class: none
[   41.579870][ T2988]     caps:
[   41.584863][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.594561][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   41.614402][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.637723][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.658350][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.674317][ T2988]     status: c
[   41.679599][ T2988] switching to power state:
[   41.688043][ T2988]     ui class: performance
[   41.695597][ T2988]     internal class: none
[   41.705135][ T2988]     caps:
[   41.713839][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.727853][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   41.747787][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.768526][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   41.785384][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.802017][ T2988]     status: r
[   41.807542][ T2988] switching from power state:
[   41.817495][ T2988]     ui class: performance
[   41.826334][ T2988]     internal class: none
[   41.835439][ T2988]     caps:
[   41.840706][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   41.849364][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   41.869533][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   41.887519][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.912995][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   41.937903][ T2988]     status: c
[   41.946487][ T2988] switching to power state:
[   41.957554][ T2988]     ui class: performance
[   41.968562][ T2988]     internal class: none
[   41.978721][ T2988]     caps:
[   41.986834][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.001140][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.022777][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.041563][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.060946][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.077930][ T2988]     status: r
[   42.085339][ T2988] switching from power state:
[   42.096473][ T2988]     ui class: performance
[   42.105908][ T2988]     internal class: none
[   42.114808][ T2988]     caps:
[   42.120219][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.129861][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   42.151685][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.173046][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.187322][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.202469][ T2988]     status: c
[   42.206361][ T2988] switching to power state:
[   42.214250][ T2988]     ui class: performance
[   42.220679][ T2988]     internal class: none
[   42.227454][ T2988]     caps:
[   42.231550][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.239815][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.257674][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.271901][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.287626][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.304684][ T2988]     status: r
[   42.311669][ T2988] switching from power state:
[   42.324153][ T2988]     ui class: performance
[   42.324159][ T2988]     internal class: none
[   42.324165][ T2988]     caps:
[   42.324168][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[[0;32m  OK  [[   42.324171][ T2988] [drm]         power level 0    
sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
0m] Started [0;[   42.324176][ T2988] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
1;39mSession 4 o[   42.324179][ T2988] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.324183][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
f user root[0m.[   42.324186][ T2988]     status: c

[   42.324190][ T2988] switching to power state:
[   42.324192][ T2988]     ui class: performance
[   42.324193][ T2988]     internal class: none
[   42.324197][ T2988]     caps:
[   42.324199][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.324202][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.324204][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.324207][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.324210][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.324213][ T2988]     status: r
[   42.324371][ T2988] switching from power state:
[   42.565240][ T2988]     ui class: performance
[   42.572381][ T2988]     internal class: none
[   42.580130][ T2988]     caps:
[   42.584319][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.591249][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   42.604910][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.619367][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.635455][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.652094][ T2988]     status: c
[   42.657996][ T2988] switching to power state:
[   42.665474][ T2988]     ui class: performance
[   42.671812][ T2988]     internal class: none
[   42.686901][ T2988]     caps:
[   42.686910][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.686914][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.686919][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.686922][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.686925][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.686928][ T2988]     status: r
[   42.687445][ T2988] switching from power state:
[   42.687448][ T2988]     ui class: performance
[   42.687451][ T2988]     internal class: none
[   42.687455][ T2988]     caps:
[   42.687458][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687460][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   42.687464][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687467][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687470][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687473][ T2988]     status: c
[   42.687476][ T2988] switching to power state:
[   42.687478][ T2988]     ui class: performance
[   42.687479][ T2988]     internal class: none
[   42.687483][ T2988]     caps:
[   42.687485][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687487][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.687490][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687493][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.687495][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687498][ T2988]     status: r
[   42.687662][ T2988] switching from power state:
[   42.687664][ T2988]     ui class: performance
[   42.687666][ T2988]     internal class: none
[   42.687670][ T2988]     caps:
[   42.687672][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687674][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   42.687677][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687679][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687682][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687685][ T2988]     status: c
[   42.687688][ T2988] switching to power state:
[   42.687689][ T2988]     ui class: performance
[   42.687691][ T2988]     internal class: none
[   42.687694][ T2988]     caps:
[   42.687696][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687698][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.687701][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687703][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.687706][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687709][ T2988]     status: r
[   42.687796][ T2988] switching from power state:
[   42.687798][ T2988]     ui class: performance
[   42.687799][ T2988]     internal class: none
[   42.687802][ T2988]     caps:
[   42.687804][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687806][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   42.687809][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687812][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687815][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687818][ T2988]     status: c
[   42.687821][ T2988] switching to power state:
[   42.687822][ T2988]     ui class: performance
[   42.687823][ T2988]     internal class: none
[   42.687826][ T2988]     caps:
[   42.687828][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   42.687830][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   42.687833][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   42.687835][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   42.687838][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   42.687841][ T2988]     status: r
[   42.687925][ T2988] switching from power state:
[   43.371213][ T3044] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[   43.380309][ T2988]     ui class: performance
[   43.380315][ T2988]     internal class: none
[   43.380329][ T2988]     caps:
[   43.380332][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.380336][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.380341][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.380344][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.380347][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.380350][ T2988]     status: c
[   43.380353][ T2988] switching to power state:
[   43.380355][ T2988]     ui class: performance
[   43.380357][ T2988]     internal class: none
[   43.380361][ T2988]     caps:
[   43.380363][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.380366][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.380369][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.380371][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.380374][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.380377][ T2988]     status: r
[   43.380541][ T2988] switching from power state:
[   43.390278][ T3044] vhci_hcd vhci_hcd.0: new USB bus registered, 
assigned bus number 3
[   43.390407][ T3044] vhci_hcd: created sysfs vhci_hcd.0
[   43.399449][ T2988]     ui class: performance
[   43.399455][ T2988]     internal class: none
[   43.399462][ T2988]     caps:
[   43.399464][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.399468][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.399473][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.413173][ T3044] usb usb3: New USB device found, idVendor=1d6b, 
idProduct=0002, bcdDevice= 5.13
[   43.426779][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.426789][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.426794][ T2988]     status: c
[   43.451643][ T3044] usb usb3: New USB device strings: Mfr=3, 
Product=2, SerialNumber=1
[   43.469479][ T2988] switching to power state:
[   43.469483][ T2988]     ui class: performance
[   43.469485][ T2988]     internal class: none
[   43.469489][ T2988]     caps:
[   43.469491][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.469495][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.469499][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.469502][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.469505][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.469508][ T2988]     status: r
[   43.469657][ T2988] switching from power state:
[   43.488953][ T3044] usb usb3: Product: USB/IP Virtual Host Controller
[   43.489002][ T3044] usb usb3: Manufacturer: Linux 5.13.0-rc6-x86_64 
vhci_hcd
[   43.489007][ T3044] usb usb3: SerialNumber: vhci_hcd.0
[   43.505133][ T2988]     ui class: performance
[   43.505140][ T2988]     internal class: none
[   43.505146][ T2988]     caps:
[   43.505148][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.505152][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.505157][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.505160][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505163][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505166][ T2988]     status: c
[   43.505170][ T2988] switching to power state:
[   43.505171][ T2988]     ui class: performance
[   43.505173][ T2988]     internal class: none
[   43.505177][ T2988]     caps:
[   43.505179][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.505181][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.505184][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.505187][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.505189][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505192][ T2988]     status: r
[   43.505403][ T2988] switching from power state:
[   43.505405][ T2988]     ui class: performance
[   43.505407][ T2988]     internal class: none
[   43.505411][ T2988]     caps:
[   43.505413][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.505415][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.505418][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.505421][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505424][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505427][ T2988]     status: c
[   43.505430][ T2988] switching to power state:
[   43.505432][ T2988]     ui class: performance
[   43.505433][ T2988]     internal class: none
[   43.505436][ T2988]     caps:
[   43.505438][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.505440][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.505443][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.505446][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.505448][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.505451][ T2988]     status: r
[   43.505541][ T2988] switching from power state:
[   43.541281][ T3044] hub 3-0:1.0: USB hub found
[   43.544238][ T2988]     ui class: performance
[   43.563286][ T3044] hub 3-0:1.0: 8 ports detected
[   43.589251][ T2988]     internal class: none
[   43.623167][ T3044] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[   43.643610][ T2988]     caps:
[   43.643613][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.643618][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.658220][ T3044] vhci_hcd vhci_hcd.0: new USB bus registered, 
assigned bus number 4
[   43.666081][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.666091][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.666094][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.666097][ T2988]     status: c
[   43.666104][ T2988] switching to power state:
[   43.666107][ T2988]     ui class: performance
[   43.666109][ T2988]     internal class: none
[   43.666113][ T2988]     caps:
[   43.666115][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.666118][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.666121][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.666124][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.666127][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.666129][ T2988]     status: r
[   43.666323][ T2988] switching from power state:
[   43.673498][ T3044] usb usb4: We don't know the algorithms for LPM 
for this host, disabling LPM.
[   43.690000][ T2988]     ui class: performance
[   43.690007][ T2988]     internal class: none
[   43.690014][ T2988]     caps:
[   43.690016][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.690020][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.690024][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.690027][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.690031][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.690033][ T2988]     status: c
[   43.696749][ T3044] usb usb4: New USB device found, idVendor=1d6b, 
idProduct=0003, bcdDevice= 5.13
[   43.705580][ T2988] switching to power state:
[   43.705586][ T2988]     ui class: performance
[   43.705590][ T2988]     internal class: none
[   43.705595][ T2988]     caps:
[   43.705598][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.705602][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.705606][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.705610][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.705613][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.712940][ T3044] usb usb4: New USB device strings: Mfr=3, 
Product=2, SerialNumber=1
[   43.712999][ T3044] usb usb4: Product: USB/IP Virtual Host Controller
[   43.723185][ T2988]     status: r
[   43.741867][ T3044] usb usb4: Manufacturer: Linux 5.13.0-rc6-x86_64 
vhci_hcd
[   43.741879][ T3044] usb usb4: SerialNumber: vhci_hcd.0
[   43.743258][ T3044] hub 4-0:1.0: USB hub found
[   43.760934][ T2988] switching from power state:
[   43.779162][ T3044] hub 4-0:1.0: 8 ports detected
[   43.805011][ T2988]     ui class: performance
[   43.805017][ T2988]     internal class: none
[   43.805023][ T2988]     caps:
[   43.805026][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.805030][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.805035][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.805038][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805041][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805044][ T2988]     status: c
[   43.805048][ T2988] switching to power state:
[   43.805050][ T2988]     ui class: performance
[   43.805051][ T2988]     internal class: none
[   43.805055][ T2988]     caps:
[   43.805057][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.805059][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.805062][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.805065][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.805068][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805070][ T2988]     status: r
[   43.805239][ T2988] switching from power state:
[   43.805241][ T2988]     ui class: performance
[   43.805242][ T2988]     internal class: none
[   43.805246][ T2988]     caps:
[   43.805248][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.805250][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   43.805253][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.805256][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805259][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805261][ T2988]     status: c
[   43.805265][ T2988] switching to power state:
[   43.805266][ T2988]     ui class: performance
[   43.805268][ T2988]     internal class: none
[   43.805271][ T2988]     caps:
[   43.805274][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   43.805276][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   43.805278][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   43.805281][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   43.805283][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   43.805286][ T2988]     status: r
[   45.689391][ T2988] amdgpu 0000:00:06.0: vgaarb: changed VGA decodes: 
olddecodes=io+mem,decodes=none:owns=io+mem


This is gt.alstadheim.priv.no (Linux x86_64 5.13.0-rc6-x86_64) 01:01:08

gt login: 
[   48.032816][ T2988] switching from power state:
[   48.041076][ T2988]     ui class: performance
[   48.048649][ T2988]     internal class: none
[   48.058047][ T2988]     caps:
[   48.063801][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   48.074998][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   48.089994][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   48.108626][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.129618][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.147348][ T2988]     status: c
[   48.153752][ T2988] switching to power state:
[   48.167500][ T2988]     ui class: performance
[   48.183477][ T2988]     internal class: none
[   48.198842][ T2988]     caps:
[   48.205904][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   48.214559][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   48.230109][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   48.244814][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   48.258299][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.270593][ T2988]     status: r
[   48.280501][ T2988] switching from power state:
[   48.286788][ T2988]     ui class: performance
[   48.292581][ T2988]     internal class: none
[   48.298288][ T2988]     caps:
[   48.302095][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   48.308845][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[   48.321442][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   48.334913][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.355640][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.379144][ T2988]     status: c
[   48.384196][ T2988] switching to power state:
[   48.390933][ T2988]     ui class: performance
[   48.399953][ T2988]     internal class: none
[   48.408547][ T2988]     caps:
[   48.413988][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   48.421329][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[   48.434862][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[   48.448842][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   48.462921][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   48.478626][ T2988]     status: r
[   49.034368][ T2988] switching from power state:
[   49.041292][ T2988]     ui class: performance
[   49.047437][ T2988]     internal class: none
[   49.053882][ T2988]     caps:
[   49.058805][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.067902][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   49.084998][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.103008][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.118712][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.134622][ T2988]     status: c
[   49.143198][ T2988] switching to power state:
[   49.150537][ T2988]     ui class: performance
[   49.161984][ T2988]     internal class: none
[   49.171664][ T2988]     caps:
[   49.178121][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.186945][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   49.206745][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.221552][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   49.237213][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.253733][ T2988]     status: r
[   49.261825][ T2988] switching from power state:
[   49.270462][ T2988]     ui class: performance
[   49.278845][ T2988]     internal class: none
[   49.287814][ T2988]     caps:
[   49.294241][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.303926][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   49.321612][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.336497][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.351678][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.367259][ T2988]     status: c
[   49.372153][ T2988] switching to power state:
[   49.379519][ T2988]     ui class: performance
[   49.386865][ T2988]     internal class: none
[   49.391979][ T2988]     caps:
[   49.395742][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.402048][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   49.414436][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.426730][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   49.440250][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.456336][ T2988]     status: r
[   49.461785][ T2988] switching from power state:
[   49.469891][ T2988]     ui class: performance
[   49.476628][ T2988]     internal class: none
[   49.484859][ T2988]     caps:
[   49.491777][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.499459][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   49.513797][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.528987][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.544650][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.557229][ T2988]     status: c
[   49.561672][ T2988] switching to power state:
[   49.567910][ T2988]     ui class: performance
[   49.573516][ T2988]     internal class: none
[   49.580405][ T2988]     caps:
[   49.584918][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.593057][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   49.609009][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.630226][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   49.652790][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.676944][ T2988]     status: r
[   49.682619][ T2988] switching from power state:
[   49.689899][ T2988]     ui class: performance
[   49.696589][ T2988]     internal class: none
[   49.702927][ T2988]     caps:
[   49.707435][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.713599][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   49.726402][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.742338][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.758288][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.773441][ T2988]     status: c
[   49.779112][ T2988] switching to power state:
[   49.786506][ T2988]     ui class: performance
[   49.794639][ T2988]     internal class: none
[   49.802805][ T2988]     caps:
[   49.809514][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.820993][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   49.842420][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.860225][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   49.874683][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.888932][ T2988]     status: r
[   49.893911][ T2988] switching from power state:
[   49.901561][ T2988]     ui class: performance
[   49.908815][ T2988]     internal class: none
[   49.915359][ T2988]     caps:
[   49.919540][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   49.926312][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   49.940092][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   49.952644][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.965919][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   49.989070][ T2988]     status: c
[   49.997376][ T2988] switching to power state:
[   50.009370][ T2988]     ui class: performance
[   50.020269][ T2988]     internal class: none
[   50.027856][ T2988]     caps:
[   50.031951][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.040188][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   50.056578][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.070907][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   50.082287][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.096354][ T2988]     status: r
[   50.101918][ T2988] switching from power state:
[   50.111005][ T2988]     ui class: performance
[   50.118580][ T2988]     internal class: none
[   50.126465][ T2988]     caps:
[   50.132400][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.142496][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   50.161069][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.182601][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.202597][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.225937][ T2988]     status: c
[   50.229828][ T2988] switching to power state:
[   50.236708][ T2988]     ui class: performance
[   50.244843][ T2988]     internal class: none
[   50.251869][ T2988]     caps:
[   50.256883][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.267618][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   50.281997][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.298349][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   50.318122][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.334276][ T2988]     status: r
[   50.339526][ T2988] switching from power state:
[   50.346501][ T2988]     ui class: performance
[   50.352365][ T2988]     internal class: none
[   50.358358][ T2988]     caps:
[   50.361984][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.372588][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   50.399403][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.428084][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.444330][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.458770][ T2988]     status: c
[   50.463015][ T2988] switching to power state:
[   50.472439][ T2988]     ui class: performance
[   50.481182][ T2988]     internal class: none
[   50.489982][ T2988]     caps:
[   50.494159][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.502269][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   50.518564][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.541542][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   50.561988][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.592554][ T2988]     status: r
[   50.603951][ T2988] switching from power state:
[   50.617631][ T2988]     ui class: performance
[   50.629508][ T2988]     internal class: none
[   50.641057][ T2988]     caps:
[   50.645487][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.653570][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   50.668806][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.688428][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.705637][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.725643][ T2988]     status: c
[   50.730344][ T2988] switching to power state:
[   50.737954][ T2988]     ui class: performance
[   50.744236][ T2988]     internal class: none
[   50.751596][ T2988]     caps:
[   50.757683][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.764904][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   50.780878][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.798621][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   50.815217][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.834532][ T2988]     status: r
[   50.840101][ T2988] switching from power state:
[   50.846571][ T2988]     ui class: performance
[   50.851994][ T2988]     internal class: none
[   50.857365][ T2988]     caps:
[   50.861402][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.867670][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   50.880073][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.893018][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.906996][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   50.921286][ T2988]     status: c
[   50.925899][ T2988] switching to power state:
[   50.931410][ T2988]     ui class: performance
[   50.937817][ T2988]     internal class: none
[   50.944038][ T2988]     caps:
[   50.948115][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   50.956013][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   50.972574][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   50.994598][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   51.013321][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.032573][ T2988]     status: r
[   51.038151][ T2988] switching from power state:
[   51.045392][ T2988]     ui class: performance
[   51.052391][ T2988]     internal class: none
[   51.059034][ T2988]     caps:
[   51.063469][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.071888][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   51.086047][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.099927][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.118072][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.146645][ T2988]     status: c
[   51.155288][ T2988] switching to power state:
[   51.167835][ T2988]     ui class: performance
[   51.179474][ T2988]     internal class: none
[   51.192557][ T2988]     caps:
[   51.201312][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.218612][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   51.246104][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.264456][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   51.286401][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.311294][ T2988]     status: r
[   51.317797][ T2988] switching from power state:
[   51.329402][ T2988]     ui class: performance
[   51.338917][ T2988]     internal class: none
[   51.345126][ T2988]     caps:
[   51.348535][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.354690][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   51.367990][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.380289][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.395365][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.411029][ T2988]     status: c
[   51.415580][ T2988] switching to power state:
[   51.423418][ T2988]     ui class: performance
[   51.430393][ T2988]     internal class: none
[   51.436164][ T2988]     caps:
[   51.439859][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.448245][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   51.460466][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.472484][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   51.485544][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.497192][ T2988]     status: r
[   51.501473][ T2988] switching from power state:
[   51.508454][ T2988]     ui class: performance
[   51.514380][ T2988]     internal class: none
[   51.520281][ T2988]     caps:
[   51.525981][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.533504][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   51.548263][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.563761][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.584027][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.603100][ T2988]     status: c
[   51.607292][ T2988] switching to power state:
[   51.614235][ T2988]     ui class: performance
[   51.619993][ T2988]     internal class: none
[   51.625349][ T2988]     caps:
[   51.628733][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.636733][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   51.648307][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.659756][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   51.672274][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.684201][ T2988]     status: r
[   51.688465][ T2988] switching from power state:
[   51.694306][ T2988]     ui class: performance
[   51.699785][ T2988]     internal class: none
[   51.705507][ T2988]     caps:
[   51.709873][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.716498][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   51.728765][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.740914][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.752603][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.765120][ T2988]     status: c
[   51.769792][ T2988] switching to power state:
[   51.775983][ T2988]     ui class: performance
[   51.784072][ T2988]     internal class: none
[   51.791476][ T2988]     caps:
[   51.798010][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.806450][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   51.820851][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.835549][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   51.851746][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.869598][ T2988]     status: r
[   51.874601][ T2988] switching from power state:
[   51.880539][ T2988]     ui class: performance
[   51.886381][ T2988]     internal class: none
[   51.891858][ T2988]     caps:
[   51.895376][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   51.901402][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   51.915614][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   51.930367][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.944665][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   51.962389][ T2988]     status: c
[   51.967180][ T2988] switching to power state:
[   51.974312][ T2988]     ui class: performance
[   51.980987][ T2988]     internal class: none
[   51.987707][ T2988]     caps:
[   51.994676][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.001348][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   52.013246][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.026406][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   52.041410][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.055902][ T2988]     status: r
[   52.060845][ T2988] switching from power state:
[   52.068193][ T2988]     ui class: performance
[   52.074636][ T2988]     internal class: none
[   52.081123][ T2988]     caps:
[   52.084706][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.090983][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   52.102174][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.114261][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.128809][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.145312][ T2988]     status: c
[   52.151770][ T2988] switching to power state:
[   52.159885][ T2988]     ui class: performance
[   52.167398][ T2988]     internal class: none
[   52.174692][ T2988]     caps:
[   52.178952][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.186203][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   52.201208][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.214355][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   52.230199][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.244465][ T2988]     status: r
[   52.249682][ T2988] switching from power state:
[   52.257001][ T2988]     ui class: performance
[   52.263994][ T2988]     internal class: none
[   52.269935][ T2988]     caps:
[   52.274333][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.280367][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   52.294763][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.310535][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.338199][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.362555][ T2988]     status: c
[   52.369556][ T2988] switching to power state:
[   52.377248][ T2988]     ui class: performance
[   52.383716][ T2988]     internal class: none
[   52.390403][ T2988]     caps:
[   52.395560][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.403050][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   52.418941][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.433479][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   52.450252][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.464904][ T2988]     status: r
[   52.470040][ T2988] switching from power state:
[   52.477213][ T2988]     ui class: performance
[   52.483504][ T2988]     internal class: none
[   52.489869][ T2988]     caps:
[   52.494459][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.502478][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   52.522066][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.534675][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.551013][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.566447][ T2988]     status: c
[   52.571048][ T2988] switching to power state:
[   52.579688][ T2988]     ui class: performance
[   52.586773][ T2988]     internal class: none
[   52.593774][ T2988]     caps:
[   52.598884][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.606989][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   52.620636][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.634626][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   52.649530][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.666632][ T2988]     status: r
[   52.671610][ T2988] switching from power state:
[   52.680317][ T2988]     ui class: performance
[   52.688286][ T2988]     internal class: none
[   52.696042][ T2988]     caps:
[   52.701053][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.710833][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   52.732312][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.751301][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.766846][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.780286][ T2988]     status: c
[   52.784646][ T2988] switching to power state:
[   52.791133][ T2988]     ui class: performance
[   52.797078][ T2988]     internal class: none
[   52.802544][ T2988]     caps:
[   52.806377][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.813006][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   52.826620][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.840519][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   52.854898][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.871399][ T2988]     status: r
[   52.877357][ T2988] switching from power state:
[   52.885718][ T2988]     ui class: performance
[   52.892826][ T2988]     internal class: none
[   52.898709][ T2988]     caps:
[   52.903535][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   52.910777][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   52.927721][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   52.943075][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.958703][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   52.972487][ T2988]     status: c
[   52.976410][ T2988] switching to power state:
[   52.983166][ T2988]     ui class: performance
[   52.989292][ T2988]     internal class: none
[   52.995342][ T2988]     caps:
[   52.998693][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.005210][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.017650][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.033148][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   53.052452][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.067238][ T2988]     status: r
[   53.072057][ T2988] switching from power state:
[   53.080014][ T2988]     ui class: performance
[   53.089692][ T2988]     internal class: none
[   53.097617][ T2988]     caps:
[   53.104229][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.114633][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   53.134993][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.157591][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.176198][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.194511][ T2988]     status: c
[   53.199333][ T2988] switching to power state:
[   53.205505][ T2988]     ui class: performance
[   53.212231][ T2988]     internal class: none
[   53.218161][ T2988]     caps:
[   53.223317][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.233004][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.246568][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.260107][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   53.278949][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.300534][ T2988]     status: r
[   53.308065][ T2988] switching from power state:
[   53.316766][ T2988]     ui class: performance
[   53.324025][ T2988]     internal class: none
[   53.330057][ T2988]     caps:
[   53.336703][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.348285][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   53.364024][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.379383][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.396566][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.410890][ T2988]     status: c
[   53.414902][ T2988] switching to power state:
[   53.422397][ T2988]     ui class: performance
[   53.428015][ T2988]     internal class: none
[   53.433485][ T2988]     caps:
[   53.437663][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.445486][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.458247][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.471189][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   53.482367][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.496406][ T2988]     status: r
[   53.501136][ T2988] switching from power state:
[   53.507315][ T2988]     ui class: performance
[   53.512239][ T2988]     internal class: none
[   53.517741][ T2988]     caps:
[   53.521019][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.526783][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   53.537613][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.549485][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.561983][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.575308][ T2988]     status: c
[   53.579470][ T2988] switching to power state:
[   53.584943][ T2988]     ui class: performance
[   53.590451][ T2988]     internal class: none
[   53.595794][ T2988]     caps:
[   53.599393][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.605600][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.616791][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.629764][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   53.643803][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.658182][ T2988]     status: r
[   53.663452][ T2988] switching from power state:
[   53.671492][ T2988]     ui class: performance
[   53.678954][ T2988]     internal class: none
[   53.685434][ T2988]     caps:
[   53.690055][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.697607][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   53.714018][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.729730][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.746212][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.762455][ T2988]     status: c
[   53.767499][ T2988] switching to power state:
[   53.774392][ T2988]     ui class: performance
[   53.781225][ T2988]     internal class: none
[   53.787385][ T2988]     caps:
[   53.791914][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.799553][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.813752][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.830811][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   53.847724][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.863984][ T2988]     status: r
[   53.868210][ T2988] switching from power state:
[   53.873907][ T2988]     ui class: performance
[   53.879633][ T2988]     internal class: none
[   53.884989][ T2988]     caps:
[   53.888607][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.894658][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   53.905814][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   53.917337][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.929246][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   53.940616][ T2988]     status: c
[   53.944545][ T2988] switching to power state:
[   53.950473][ T2988]     ui class: performance
[   53.956986][ T2988]     internal class: none
[   53.963041][ T2988]     caps:
[   53.967727][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   53.974924][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   53.990405][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   54.007564][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   54.027702][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   54.048105][ T2988]     status: r
[   54.054209][ T2988] switching from power state:
[   54.064523][ T2988]     ui class: performance
[   54.072401][ T2988]     internal class: none
[   54.081251][ T2988]     caps:
[   54.087859][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   54.099306][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[   54.120358][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   54.138669][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   54.158616][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   54.175846][ T2988]     status: c
[   54.181744][ T2988] switching to power state:
[   54.188211][ T2988]     ui class: performance
[   54.196419][ T2988]     internal class: none
[   54.202864][ T2988]     caps:
[   54.206627][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[   54.212773][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[   54.227275][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[   54.244267][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[   54.257389][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[   54.270212][ T2988]     status: r
[  114.350871][    C4] net eth0: rx->offset: 0, size: -1
[  114.359479][    C4] net eth0: rx->offset: 0, size: -1
[  114.374273][    C4] net eth0: rx->offset: 0, size: -1
[  114.392255][    C4] net eth0: rx->offset: 0, size: -1
[  114.409994][    C4] net eth0: rx->offset: 0, size: -1
[  114.432371][    C4] net eth0: rx->offset: 0, size: -1
[  114.444757][    C4] net eth0: rx->offset: 0, size: -1
[  114.456661][    C4] net eth0: rx->offset: 0, size: -1
[  114.468947][    C4] net eth0: rx->offset: 0, size: -1
[  114.479537][    C4] net eth0: rx->offset: 0, size: -1
[  119.771214][    C0] net_ratelimit: 10 callbacks suppressed
[  119.771234][    C0] net eth0: rx->offset: 0, size: -1
[  122.267645][    C4] net eth0: rx->offset: 0, size: -1
[  129.349111][    C0] net eth0: rx->offset: 0, size: -1
[  143.047835][    C0] net eth0: rx->offset: 0, size: -1
[  144.817545][    C2] net eth0: rx->offset: 0, size: -1
[  144.839804][    C2] net eth0: rx->offset: 0, size: -1
[  144.862807][    C2] net eth0: rx->offset: 0, size: -1
[  144.885508][    C2] net eth0: rx->offset: 0, size: -1
[  144.901521][    C2] net eth0: rx->offset: 0, size: -1
[  144.922625][    C2] net eth0: rx->offset: 0, size: -1
[  144.945898][    C2] net eth0: rx->offset: 0, size: -1
[  144.959000][    C2] net eth0: rx->offset: 0, size: -1
[  144.971690][    C2] net eth0: rx->offset: 0, size: -1
[  151.170951][    C4] net_ratelimit: 34 callbacks suppressed
[  151.184198][    C4] net eth0: rx->offset: 0, size: -1
[  157.893995][    C0] net eth0: rx->offset: 0, size: -1
[  170.029166][ T3390] vhci_hcd vhci_hcd.0: remove, state 4
[  170.038098][ T3390] usb usb4: USB disconnect, device number 1
[  170.056878][  T144] vhci_hcd: stop threads
[  170.068642][  T144] vhci_hcd: release socket
[  170.079709][  T144] vhci_hcd: disconnect device
[  170.091313][  T144] vhci_hcd: stop threads
[  170.101492][  T144] vhci_hcd: release socket
[  170.116792][  T144] vhci_hcd: disconnect device
[  170.131894][  T144] vhci_hcd: stop threads
[  170.146247][  T144] vhci_hcd: release socket
[  170.157459][  T144] vhci_hcd: disconnect device
[  170.167452][  T144] vhci_hcd: stop threads
[  170.174531][  T144] vhci_hcd: release socket
[  170.181446][  T144] vhci_hcd: disconnect device
[  170.189136][  T144] vhci_hcd: stop threads
[  170.197586][  T144] vhci_hcd: release socket
[  170.205616][  T144] vhci_hcd: disconnect device
[  170.214846][  T144] vhci_hcd: stop threads
[  170.222260][  T144] vhci_hcd: release socket
[  170.230262][  T144] vhci_hcd: disconnect device
[  170.240362][  T144] vhci_hcd: stop threads
[  170.247683][  T144] vhci_hcd: release socket
[  170.256942][  T144] vhci_hcd: disconnect device
[  170.265135][  T144] vhci_hcd: stop threads
[  170.274894][  T144] vhci_hcd: release socket
[  170.282406][  T144] vhci_hcd: disconnect device
[  170.291643][ T3390] vhci_hcd vhci_hcd.0: USB bus 4 deregistered
[  170.304527][ T3390] vhci_hcd vhci_hcd.0: remove, state 4
[  170.318038][ T3390] usb usb3: USB disconnect, device number 1
[  170.337944][  T144] vhci_hcd: stop threads
[  170.349623][  T144] vhci_hcd: release socket
[  170.361032][  T144] vhci_hcd: disconnect device
[  170.373074][  T144] vhci_hcd: stop threads
[  170.381444][  T144] vhci_hcd: release socket
[  170.392380][  T144] vhci_hcd: disconnect device
[  170.400150][  T144] vhci_hcd: stop threads
[  170.407365][  T144] vhci_hcd: release socket
[  170.414146][  T144] vhci_hcd: disconnect device
[  170.422140][  T144] vhci_hcd: stop threads
[  170.429377][  T144] vhci_hcd: release socket
[  170.437002][  T144] vhci_hcd: disconnect device
[  170.444469][  T144] vhci_hcd: stop threads
[  170.450820][  T144] vhci_hcd: release socket
[  170.458211][  T144] vhci_hcd: disconnect device
[  170.465724][  T144] vhci_hcd: stop threads
[  170.474392][  T144] vhci_hcd: release socket
[  170.482202][  T144] vhci_hcd: disconnect device
[  170.489682][  T144] vhci_hcd: stop threads
[  170.496369][  T144] vhci_hcd: release socket
[  170.504703][  T144] vhci_hcd: disconnect device
[  170.512016][  T144] vhci_hcd: stop threads
[  170.519139][  T144] vhci_hcd: release socket
[  170.529409][  T144] vhci_hcd: disconnect device
[  170.542301][ T3390] vhci_hcd vhci_hcd.0: USB bus 3 deregistered
[  170.679838][    C2] net eth0: rx->offset: 0, size: -1
          Stopping [0;1;39mSession 11 of user hakon[0m.
[[0;32m  OK  [0m] Removed slice [0;1;39msystem-modprobe.slice[0m.
[[0;32m  OK  [0m] Stopped target [0;1;39mHost and Network Name 
Lookups[0m.
[[0;32m  OK  [0m] Stopped target [0;1;39mSound Card[0m.
[[0;32m  OK  [0m] Stopped target [0;1;39mTimers[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mDaily Cleanup of Temporary 
Directories[0m.
[[0;32m  OK  [0m] Closed [0;1;39mProcess Core Dump Socket[0m.
[[0;32m  OK  [0m] Closed [0;1;39mLoad/Save RF Kill Switch Status 
/dev/rfkill Watch[0m.
          Unmounting [0;1;39m/mnt/gentoo-share[0m...
          Unmounting [0;1;39m/usr/portage[0m...
          Stopping [0;1;39mAccounts Service[0m...
          Stopping [0;1;39mSave/Restore Sound Card State[0m...
          Stopping [0;1;39mManage, Install and Generate Color 
Profiles[0m...
          Stopping [0;1;39mService for 
local.d/ftrace_dump_on_oops.*[0m...
          Stopping [0;1;39mService for local.d/radeon-bus-id.*[0m...
          Stopping [0;1;39mAuthorization Manager[0m...
          Stopping [0;1;39mLoad/Save Random Seed[0m...
          Stopping [0;1;39mUser Manager for UID 0[0m...
[[0;32m  OK  [0m] Stopped [0;1;39mService for 
local.d/ftrace_dump_on_oops.*[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mService for 
local.d/radeon-bus-id.*[0m.
          Stopping [0;1;39mService for local.d/disk-tune.*[0m...
[[0;32m  OK  [0m] Stopped [0;1;39mService for local.d/disk-tune.*[0m.
          Stopping [0;1;39mService for local.d/config-record.*[0m...
[[0;32m  OK         Stopping [0;1;39mOpenSSH server daemon[0m...
          Stopping [0;1;39mSystem Logger Daemon "default" instance[0m...
[[0;32m  OK  [0m] Stopped [0;1;39mBacula File Daemon service[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mOpenSSH server daemon[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mGetty on tty1[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mSerial Getty on hvc0[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mCommand Scheduler[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mSerial Getty on ttyS0[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mSystem Logger Daemon "default" 
instance[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mManage, Install and Generate Color 
Profiles[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mAuthorization Manager[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mUser Manager for UID 0[0m.
[[0;32m  OK  [0m] Unmounted [0;1;39m/mnt/gentoo-share[0m.
[[0;32m  OK  [0m] Unmounted [0;1;39m/usr/portage[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mSave/Restore Sound Card State[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mACPI event daemon[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mLoad/Save Random Seed[0m.
[[0;32m  OK  [0m] Removed slice [0;1;39msystem-getty.slice[0m.
[[0;32m  OK  [0m] Removed slice [0;1;39msyste[  171.767105][ T105] 
switching from power state:
m-serial\x2dgett[  171.779176][  T105]     ui class: performance
[  171.793756][  T105]     internal class: none
y.slice[0m.
[  171.806940][  T105]     caps:
[  171.821269][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  171.836987][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  171.836996][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  171.837000][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  171.837003][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[[0;32m  OK  [[  171.837006][  T105]     status: c
0m] Removed slic[  171.837013][  T105] switching to power state:
e [0;1;39msyste[  171.837015][  T105]     ui class: performance
[  171.837017][  T105]     internal class: none
m-syslog\x2dng.s[  171.837021][  T105]     caps:
lice[0m.
[  171.837024][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  171.837026][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  171.837029][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  171.837033][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  171.837036][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  171.837038][  T105]     status: r
[  171.837556][  T105] switching from power state:
[  172.068590][  T105]     ui class: performance
[  172.068597][  T105]     internal class: none
[  172.068603][  T105]     caps:
         Stopping User Runtime Directory /run/user/0...
[  172.068605][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.068609][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  172.068613][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.068617][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.068620][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.068623][  T105]     status: c
[  172.068626][  T105] switching to power state:
[  172.068628][  T105]     ui class: performance
[  172.068630][  T105]     internal class: none
[  172.068633][  T105]     caps:
[  172.068635][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.068638][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  172.260460][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.260471][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  172.260475][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.260479][  T105]     status: r
[[0;32m  OK  [[  172.283298][  T105] switching from power state:
0m] Unmounted [[  172.334718][  T105]     ui class: performance
0;1;39m/run/user[  172.346929][  T105]     internal class: none
[  172.356983][  T105]     caps:
/0[0m.
[  172.364520][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.375849][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  172.395753][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.395763][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.395766][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.395770][  T105]     status: c
[[0;32m  OK  [[  172.395776][  T105] switching to power state:
0m] Stopped [0;[  172.395779][  T105]     ui class: performance
1;39mMake remote[  172.395781][  T105]     internal class: none
[  172.395785][  T105]     caps:
  CUPS printers a[  172.395787][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.395789][  T105] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
vailable locally[  172.395792][  T105] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[0m.
[  172.395795][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  172.395799][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.395802][  T105]     status: r
[  172.581477][  T105] switching from power state:
[  172.587999][  T105]     ui class: performance
[  172.588004][  T105]     internal class: none
[  172.588010][  T105]     caps:
[  172.588012][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.588016][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[[0;32m  OK  [[  172.588020][  T105] [drm]         power level 1    
sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
0m] Stopped [0;[  172.588023][  T105] [drm]         power level 2    
sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
1;39mUser Runtim[  172.588027][  T105] [drm]         power level 3    
sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.588030][  T105]     status: c
e Directory /run[  172.588034][  T105] switching to power state:
/user/0[0m.
[  172.588035][  T105]     ui class: performance
[  172.734695][  T105]     internal class: none
[  172.743153][  T105]     caps:
[  172.747863][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.754606][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  172.754613][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.754617][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  172.754619][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[[0;32m  OK  [[  172.754622][  T105]     status: r
0m] Removed slic[  172.754767][  T105] switching from power state:
[  172.860991][  T105]     ui class: performance
e [0;1;39mUser [  172.867089][  T105]     internal class: none
Slice of UID 0[[  172.875203][  T105]     caps:
[  172.881470][  T105] [drm]     uvd    vclk: 0 dclk: 0
0m.
[  172.889330][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  172.906734][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.918557][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.932338][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.932346][  T105]     status: c
[  172.932352][  T105] switching to power state:
         Stopping Avahi mDNS/DNS-SD Stack...
[  172.932354][  T105]     ui class: performance
[  172.932356][  T105]     internal class: none
[  172.932360][  T105]     caps:
[  172.932362][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  172.932365][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  172.932368][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  172.932371][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  172.932374][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  172.932377][  T105]     status: r
[  172.932513][  T105] switching from power state:
[  173.069200][  T105]     ui class: performance
[  173.069205][  T105]     internal class: none
[  173.069211][  T105]     caps:
         Stopping CUPS Scheduler...
[  173.069213][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  173.069217][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  173.069222][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  173.069225][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.069228][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.069231][  T105]     status: c
[  173.069235][  T105] switching to power state:
[  173.069236][  T105]     ui class: performance
[  173.069238][  T105]     internal class: none
[  173.069241][  T105]     caps:
[  173.069244][  T105] [drm]     uvd    vclk: 0 dclk: 0
[  173.226364][  T105] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  173.226374][  T105] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  173.226380][  T105] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  173.226383][  T105] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.226386][  T105]     status: r
[[0;32m  OK  [0m] Stopped [0;1;39mAvahi mDNS/DNS-SD Stack[0m.
[[0;32m  OK  [0m] Stopped [0;1;39mCUPS Scheduler[0m.
[  173.676996][ T2988] switching from power state:
[  173.685608][ T2988]     ui class: performance
[  173.692890][ T2988]     internal class: none
[  173.701716][ T2988]     caps:
[  173.708552][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  173.720330][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  173.739404][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  173.756798][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.772673][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.772683][ T2988]     status: c
[  173.772689][ T2988] switching to power state:
[  173.772691][ T2988]     ui class: performance
[[0;32m  OK  [[  173.772693][ T2988]     internal class: none
0m] Stopped [0;[  173.772697][ T2988]     caps:
1;39mAccounts Se[  173.772700][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  173.772703][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
rvice[0m.
[  173.772706][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  173.772709][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  173.772712][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  173.772715][ T2988]     status: r
[  173.933675][ T2988] switching from power state:
[  173.940210][ T2988]     ui class: performance
[  173.946592][ T2988]     internal class: none
[  173.952362][ T2988]     caps:
[  173.956396][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  173.962898][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  173.975939][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  173.994698][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.021216][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.042619][ T2988]     status: c
[  174.047717][ T2988] switching to power state:
[  174.054857][ T2988]     ui class: performance
[  174.060556][ T2988]     internal class: none
[  174.066060][ T2988]     caps:
[  174.069652][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.075980][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  174.088457][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.101182][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  174.116446][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.130094][ T2988]     status: r
[  174.151718][ T2988] switching from power state:
[  174.160065][ T2988]     ui class: performance
[  174.166732][ T2988]     internal class: none
[  174.174221][ T2988]     caps:
[  174.180681][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.194451][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  174.213891][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.227004][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.239056][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.254858][ T2988]     status: c
[  174.259900][ T2988] switching to power state:
[  174.269540][ T2988]     ui class: performance
[  174.280677][ T2988]     internal class: none
[  174.290226][ T2988]     caps:
[  174.294917][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.303289][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  174.319685][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.336738][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  174.353186][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.369386][ T2988]     status: r
[  174.375124][ T2988] switching from power state:
[  174.381298][ T2988]     ui class: performance
[  174.387061][ T2988]     internal class: none
[  174.392532][ T2988]     caps:
[  174.396184][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.402932][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  174.417510][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.443560][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.474364][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.496914][ T2988]     status: c
[  174.502184][ T2988] switching to power state:
[  174.510823][ T2988]     ui class: performance
[  174.518029][ T2988]     internal class: none
[  174.523845][ T2988]     caps:
[  174.527730][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.533809][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  174.546381][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.559560][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  174.573071][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.585596][ T2988]     status: r
[  174.589865][ T2988] switching from power state:
[  174.595665][ T2988]     ui class: performance
[  174.601790][ T2988]     internal class: none
[  174.607443][ T2988]     caps:
[  174.611988][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.621070][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  174.636439][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.650333][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.669586][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.684982][ T2988]     status: c
[  174.690780][ T2988] switching to power state:
[  174.697952][ T2988]     ui class: performance
[  174.704527][ T2988]     internal class: none
[  174.711512][ T2988]     caps:
[  174.717113][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.724947][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  174.741625][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.755981][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  174.771761][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  174.788205][ T2988]     status: r
[  174.793560][ T2988] switching from power state:
[  174.801709][ T2988]     ui class: performance
[  174.811023][ T2988]     internal class: none
[  174.824278][ T2988]     caps:
[  174.830405][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  174.843774][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  174.865656][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  174.883188][    C4] net eth0: rx->offset: 0, size: -1
[  174.883198][    C4] net eth0: rx->offset: 0, size: -1
[  174.883202][    C4] net eth0: rx->offset: 0, size: -1
[  174.883205][    C4] net eth0: rx->offset: 0, size: -1
[  174.883207][    C4] net eth0: rx->offset: 0, size: -1
[  174.883211][    C4] net eth0: rx->offset: 0, size: -1
[  174.883214][    C4] net eth0: rx->offset: 0, size: -1
[  174.883217][    C4] net eth0: rx->offset: 0, size: -1
[  174.883220][    C4] net eth0: rx->offset: 0, size: -1
[  174.883471][    C4] 
==================================================================
[  174.883474][    C4] BUG: KASAN: use-after-free in xennet_poll+0x28dc/0x3c80
[  174.883484][    C4] Write of size 8 at addr ffff88812fba1040 by task X/2988
[  174.883488][    C4]
[  174.883490][    C4] CPU: 4 PID: 2988 Comm: X Not tainted 5.13.0-rc6-x86_64 #1
[  174.883494][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  174.883497][    C4] Call Trace:
[  174.883500][    C4]  <IRQ>
[  174.883503][    C4]  dump_stack+0xa1/0xd3
[  174.883510][    C4] print_address_description.constprop.0+0x1d/0x140
[  174.883518][    C4]  ? xennet_poll+0x28dc/0x3c80
[  174.883524][    C4]  kasan_report.cold+0x7b/0xd4
[  174.883532][    C4]  ? xennet_poll+0x28dc/0x3c80
[  174.883539][    C4]  __asan_report_store8_noabort+0x17/0x20
[  174.883545][    C4]  xennet_poll+0x28dc/0x3c80
[  174.883555][    C4]  ? xen_vcpuop_set_next_event+0x124/0x1c0
[  174.883568][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  174.883574][    C4]  ? __kasan_check_read+0x11/0x20
[  174.883579][    C4]  ? __raise_softirq_irqoff+0x36/0xe0
[  174.883586][    C4]  ? __napi_schedule+0x1b5/0x260
[  174.883596][    C4]  ? __kasan_check_read+0x11/0x20
[  174.883601][    C4]  ? __lock_acquire.constprop.0+0x494/0xe40
[  174.883609][    C4]  ? lock_release+0x205/0x820
[  174.883613][    C4]  ? do_raw_spin_lock+0x13e/0x280
[  174.883631][    C4]  ? handle_edge_irq+0x35e/0xb60
[  174.883642][    C4]  ? handle_irq_for_port+0x192/0x4c0
[  174.883650][    C4]  ? __kasan_check_write+0x14/0x20
[  174.883657][    C4]  __napi_poll+0xb2/0x4c0
[  174.883665][    C4]  net_rx_action+0x2d7/0xa20
[  174.883673][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  174.883677][    C4]  ? __xen_evtchn_do_upcall+0x99/0x180
[  174.883684][    C4]  ? __kasan_check_write+0x14/0x20
[  174.883690][    C4]  ? _raw_read_unlock+0x23/0x40
[  174.883698][    C4]  ? __xen_evtchn_do_upcall+0x107/0x180
[  174.883705][    C4]  __do_softirq+0x1cd/0x66d
[  174.883730][    C4]  irq_exit_rcu+0x12c/0x1c0
[  174.883739][    C4]  sysvec_xen_hvm_callback+0x79/0xa0
[  174.883745][    C4]  </IRQ>
[  174.883748][    C4]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  174.883753][    C4] RIP: 0010:console_unlock+0x78f/0x940
[  174.883759][    C4] Code: 28 ff ff ff 4c 29 e6 49 8d bc 24 60 6f 2f 86 e8 47 d7 ff ff 4c 01 e0 48 89 85 c0 fe ff ff e9 d5 fa ff ff fb 66 0f 1f 44 00 00 <e9> 61 fe ff ff c7 05 82 3d e3 04 00 00 00 00 48 8b 7d 08 e8 d9 e1
[  174.883763][    C4] RSP: 0018:ffffc90000617438 EFLAGS: 00000206
[  174.883769][    C4] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff1102279ab9f
[  174.883772][    C4] RDX: 1ffff1102279ab70 RSI: 0000000000000000 RDI: ffff888113cd5b80
[  174.883775][    C4] RBP: ffffc900006175a8 R08: 0000000000000000 R09: 000000000000000a
[  174.883777][    C4] R10: ffff888113cd5b88 R11: 0000000000080000 R12: 0000000000000200
[  174.883780][    C4] R13: ffffc900006174a0 R14: dffffc0000000000 R15: ffffc90000617580
[  174.883792][    C4]  ? console_unlock+0x5ec/0x940
[  174.883801][    C4]  ? devkmsg_read+0x6e0/0x6e0
[  174.883808][    C4]  ? zap_class+0x162/0x740
[  174.883817][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.883823][    C4]  ? vprintk_default+0x1d/0x20
[  174.883832][    C4]  vprintk_emit+0xea/0x260
[  174.883840][    C4]  vprintk_default+0x1d/0x20
[  174.883844][    C4]  vprintk+0x4c/0xe0
[  174.883849][    C4]  printk+0xb2/0xe3
[  174.883853][    C4]  ? record_print_text.cold+0x11/0x11
[  174.883856][    C4]  ? record_print_text.cold+0x11/0x11
[  174.883861][    C4]  ? vprintk+0x4c/0xe0
[  174.883866][    C4]  ? printk+0xb2/0xe3
[  174.883874][    C4]  si_dpm_print_power_state.cold+0x15b/0x1d7 [amdgpu]
[  174.884295][    C4]  ? si_dpm_vblank_too_short+0x60/0x60 [amdgpu]
[  174.884619][    C4]  amdgpu_pm_compute_clocks.part.0.cold+0x16d/0x339 [amdgpu]
[  174.884949][    C4]  ? amdgpu_dpm_get_vrefresh+0x200/0x200 [amdgpu]
[  174.885269][    C4]  ? amdgpu_atombios_crtc_lock+0x180/0x180 [amdgpu]
[  174.885566][    C4]  ? amdgpu_atombios_crtc_blank+0x180/0x180 [amdgpu]
[  174.885827][    C4]  amdgpu_pm_compute_clocks+0x58/0x80 [amdgpu]
[  174.886146][    C4]  dce_v6_0_crtc_dpms+0xe4/0x220 [amdgpu]
[  174.886422][    C4]  dce_v6_0_crtc_disable+0x9c/0xce0 [amdgpu]
[  174.886733][    C4]  ? drm_helper_encoder_in_use+0x222/0x2e0
[  174.886741][    C4]  ? drm_helper_force_disable_all+0x1e0/0x1e0
[  174.886747][    C4]  ? dce_v6_0_resume+0x1e0/0x1e0 [amdgpu]
[  174.887020][    C4]  ? __kasan_check_read+0x11/0x20
[  174.887026][    C4]  ? mutex_is_locked+0x17/0x60
[  174.887036][    C4] __drm_helper_disable_unused_functions+0xfb/0x2a0
[  174.887043][    C4]  drm_crtc_helper_set_config+0x14a8/0x28a0
[  174.887048][    C4]  ? find_held_lock+0x35/0x140
[  174.887065][    C4]  ? drm_connector_get_single_encoder+0x220/0x220
[  174.887069][    C4]  ? do_raw_spin_unlock+0x159/0x200
[  174.887075][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.887085][    C4]  amdgpu_display_crtc_set_config+0xb7/0x460 [amdgpu]
[  174.887361][    C4]  __drm_mode_set_config_internal+0x298/0x6c0
[  174.887369][    C4]  ? drm_warn_on_modeset_not_all_locked.part.0+0x97/0xe0
[  174.887378][    C4]  drm_mode_set_config_internal+0x12e/0x180
[  174.887385][    C4] drm_client_modeset_commit_locked+0x30c/0x4e0
[  174.887389][    C4]  ? mutex_lock_nested+0x1b/0x20
[  174.887397][    C4]  drm_client_modeset_commit+0x3f/0x80
[  174.887402][    C4]  __drm_fb_helper_restore_fbdev_mode_unlocked+0x15a/0x1a0
[  174.887412][    C4]  drm_fb_helper_lastclose+0x39/0x60
[  174.887416][    C4]  amdgpu_driver_lastclose_kms+0xe/0x20 [amdgpu]
[  174.887671][    C4]  drm_release+0x3b8/0x4c0
[  174.887679][    C4]  __fput+0x1b7/0x780
[  174.887690][    C4]  ____fput+0xe/0x20
[  174.887695][    C4]  task_work_run+0xd4/0x160
[  174.887704][    C4]  do_exit+0x9d3/0x2380
[  174.887708][    C4]  ? __up_read+0x1d4/0x8c0
[  174.887711][    C4]  ? lock_release+0x205/0x820
[  174.887720][    C4]  ? mm_update_next_owner+0x6a0/0x6a0
[  174.887731][    C4]  ? up_read+0x23/0x40
[  174.887738][    C4]  do_group_exit+0xfd/0x2c0
[  174.887745][    C4]  __x64_sys_exit_group+0x43/0x60
[  174.887750][    C4]  do_syscall_64+0x40/0xc0
[  174.887756][    C4]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  174.887761][    C4] RIP: 0033:0x7f0f4e5097c9
[  174.887765][    C4] Code: Unable to access opcode bytes at RIP 0x7f0f4e50979f.
[  174.887767][    C4] RSP: 002b:00007fffa554fed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[  174.887772][    C4] RAX: ffffffffffffffda RBX: 00007f0f4e604800 RCX: 00007f0f4e5097c9
[  174.887775][    C4] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[  174.887777][    C4] RBP: 00007f0f4e604800 R08: fffffffffffffd78 R09: 000055cf14e37180
[  174.887780][    C4] R10: fffffffffffff9bc R11: 0000000000000246 R12: 0000000000000000
[  174.887782][    C4] R13: 0000000000000000 R14: 00000000000005dd R15: 0000000000000000
[  174.887795][    C4]
[  174.887797][    C4] Allocated by task 0:
[  174.887799][    C4]  kasan_save_stack+0x23/0x60
[  174.887804][    C4]  __kasan_slab_alloc+0x68/0x80
[  174.887808][    C4]  kmem_cache_alloc_node+0x242/0x380
[  174.887811][    C4]  __alloc_skb+0x156/0x280
[  174.887816][    C4]  __netdev_alloc_skb+0x46/0x320
[  174.887820][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  174.887825][    C4]  xennet_poll+0x1e8b/0x3c80
[  174.887829][    C4]  __napi_poll+0xb2/0x4c0
[  174.887833][    C4]  net_rx_action+0x2d7/0xa20
[  174.887836][    C4]  __do_softirq+0x1cd/0x66d
[  174.887839][    C4]
[  174.887840][    C4] Freed by task 0:
[  174.887842][    C4]  kasan_save_stack+0x23/0x60
[  174.887846][    C4]  kasan_set_track+0x20/0x40
[  174.887849][    C4]  kasan_set_free_info+0x24/0x40
[  174.887854][    C4]  __kasan_slab_free+0xf1/0x140
[  174.887857][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.887861][    C4]  kmem_cache_free+0x10b/0x520
[  174.887864][    C4]  kfree_skbmem+0x9a/0x140
[  174.887868][    C4]  kfree_skb+0x10d/0x240
[  174.887871][    C4]  xennet_poll+0x1ab2/0x3c80
[  174.887875][    C4]  __napi_poll+0xb2/0x4c0
[  174.887878][    C4]  net_rx_action+0x2d7/0xa20
[  174.887881][    C4]  __do_softirq+0x1cd/0x66d
[  174.887884][    C4]
[  174.887886][    C4] The buggy address belongs to the object at ffff88812fba1040
[  174.887886][    C4]  which belongs to the cache skbuff_head_cache of size 216
[  174.887889][    C4] The buggy address is located 0 bytes inside of
[  174.887889][    C4]  216-byte region [ffff88812fba1040, ffff88812fba1118)
[  174.887892][    C4] The buggy address belongs to the page:
[  174.887894][    C4] page:00000000d03f6c30 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x12fba0
[  174.887899][    C4] head:00000000d03f6c30 order:1 compound_mapcount:0
[  174.887902][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  174.887910][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff8881002a3e00
[  174.887914][    C4] raw: 0000000000000000 0000000000190019 00000001ffffffff 0000000000000000
[  174.887916][    C4] page dumped because: kasan: bad access detected
[  174.887918][    C4]
[  174.887919][    C4] Memory state around the buggy address:
[  174.887921][    C4]  ffff88812fba0f00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.887924][    C4]  ffff88812fba0f80: fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc
[  174.887926][    C4] >ffff88812fba1000: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[  174.887928][    C4]                                            ^
[  174.887930][    C4]  ffff88812fba1080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.887932][    C4]  ffff88812fba1100: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.887934][    C4] ==================================================================
[  174.887936][    C4] Disabling lock debugging due to kernel taint
[  174.888006][    C4] ==================================================================
[  174.888008][    C4] BUG: KASAN: double-free or invalid-free in kfree+0xd7/0x560
[  174.888012][    C4]
[  174.888014][    C4] CPU: 4 PID: 2988 Comm: X Tainted: G B             5.13.0-rc6-x86_64 #1
[  174.888018][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  174.888020][    C4] Call Trace:
[  174.888021][    C4]  <IRQ>
[  174.888023][    C4]  dump_stack+0xa1/0xd3
[  174.888027][    C4] print_address_description.constprop.0+0x1d/0x140
[  174.888032][    C4]  ? kfree+0xd7/0x560
[  174.888036][    C4]  kasan_report_invalid_free+0x56/0x80
[  174.888040][    C4]  ? kfree+0xd7/0x560
[  174.888082][    C4]  __kasan_slab_free+0x110/0x140
[  174.888087][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.888092][    C4]  kfree+0xd7/0x560
[  174.888095][    C4]  ? skb_release_data+0x41c/0x520
[  174.888099][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.888104][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.888109][    C4]  skb_release_data+0x41c/0x520
[  174.888113][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.888118][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.888122][    C4]  kfree_skb+0x105/0x240
[  174.888126][    C4]  xennet_poll+0x15d9/0x3c80
[  174.888135][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  174.888139][    C4]  ? __kasan_check_read+0x11/0x20
[  174.888144][    C4]  ? __raise_softirq_irqoff+0x36/0xe0
[  174.888149][    C4]  ? __napi_schedule+0x1b5/0x260
[  174.888156][    C4]  ? __kasan_check_read+0x11/0x20
[  174.888160][    C4]  ? __lock_acquire.constprop.0+0x494/0xe40
[  174.888165][    C4]  ? lock_release+0x205/0x820
[  174.888168][    C4]  ? do_raw_spin_lock+0x13e/0x280
[  174.888176][    C4]  ? handle_edge_irq+0x35e/0xb60
[  174.888182][    C4]  ? handle_irq_for_port+0x192/0x4c0
[  174.888187][    C4]  ? __kasan_check_write+0x14/0x20
[  174.888193][    C4]  __napi_poll+0xb2/0x4c0
[  174.888197][    C4]  net_rx_action+0x2d7/0xa20
[  174.888202][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  174.888205][    C4]  ? __xen_evtchn_do_upcall+0x99/0x180
[  174.888210][    C4]  ? __kasan_check_write+0x14/0x20
[  174.888214][    C4]  ? _raw_read_unlock+0x23/0x40
[  174.888219][    C4]  ? __xen_evtchn_do_upcall+0x107/0x180
[  174.888224][    C4]  __do_softirq+0x1cd/0x66d
[  174.888229][    C4]  irq_exit_rcu+0x12c/0x1c0
[  174.888234][    C4]  sysvec_xen_hvm_callback+0x79/0xa0
[  174.888238][    C4]  </IRQ>
[  174.888240][    C4]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  174.888245][    C4] RIP: 0010:console_unlock+0x78f/0x940
[  174.888249][    C4] Code: 28 ff ff ff 4c 29 e6 49 8d bc 24 60 6f 2f 86 e8 47 d7 ff ff 4c 01 e0 48 89 85 c0 fe ff ff e9 d5 fa ff ff fb 66 0f 1f 44 00 00 <e9> 61 fe ff ff c7 05 82 3d e3 04 00 00 00 00 48 8b 7d 08 e8 d9 e1
[  174.888252][    C4] RSP: 0018:ffffc90000617438 EFLAGS: 00000206
[  174.888256][    C4] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff1102279ab9f
[  174.888258][    C4] RDX: 1ffff1102279ab70 RSI: 0000000000000000 RDI: ffff888113cd5b80
[  174.888260][    C4] RBP: ffffc900006175a8 R08: 0000000000000000 R09: 000000000000000a
[  174.888262][    C4] R10: ffff888113cd5b88 R11: 0000000000080000 R12: 0000000000000200
[  174.888265][    C4] R13: ffffc900006174a0 R14: dffffc0000000000 R15: ffffc90000617580
[  174.888270][    C4]  ? console_unlock+0x5ec/0x940
[  174.888275][    C4]  ? devkmsg_read+0x6e0/0x6e0
[  174.888280][    C4]  ? zap_class+0x162/0x740
[  174.888284][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.888289][    C4]  ? vprintk_default+0x1d/0x20
[  174.888294][    C4]  vprintk_emit+0xea/0x260
[  174.888299][    C4]  vprintk_default+0x1d/0x20
[  174.888302][    C4]  vprintk+0x4c/0xe0
[  174.888306][    C4]  printk+0xb2/0xe3
[  174.888309][    C4]  ? record_print_text.cold+0x11/0x11
[  174.888312][    C4]  ? record_print_text.cold+0x11/0x11
[  174.888316][    C4]  ? vprintk+0x4c/0xe0
[  174.888319][    C4]  ? printk+0xb2/0xe3
[  174.888324][    C4]  si_dpm_print_power_state.cold+0x15b/0x1d7 [amdgpu]
[  174.888653][    C4]  ? si_dpm_vblank_too_short+0x60/0x60 [amdgpu]
[  174.888979][    C4]  amdgpu_pm_compute_clocks.part.0.cold+0x16d/0x339 [amdgpu]
[  174.889306][    C4]  ? amdgpu_dpm_get_vrefresh+0x200/0x200 [amdgpu]
[  174.889661][    C4]  ? amdgpu_atombios_crtc_lock+0x180/0x180 [amdgpu]
[  174.889937][    C4]  ? amdgpu_atombios_crtc_blank+0x180/0x180 [amdgpu]
[  174.890195][    C4]  amdgpu_pm_compute_clocks+0x58/0x80 [amdgpu]
[  174.890512][    C4]  dce_v6_0_crtc_dpms+0xe4/0x220 [amdgpu]
[  174.890787][    C4]  dce_v6_0_crtc_disable+0x9c/0xce0 [amdgpu]
[  174.891227][    C4]  ? drm_helper_encoder_in_use+0x222/0x2e0
[  174.891234][    C4]  ? drm_helper_force_disable_all+0x1e0/0x1e0
[  174.891239][    C4]  ? dce_v6_0_resume+0x1e0/0x1e0 [amdgpu]
[  174.891509][    C4]  ? __kasan_check_read+0x11/0x20
[  174.891515][    C4]  ? mutex_is_locked+0x17/0x60
[  174.891522][    C4] __drm_helper_disable_unused_functions+0xfb/0x2a0
[  174.891528][    C4]  drm_crtc_helper_set_config+0x14a8/0x28a0
[  174.891532][    C4]  ? find_held_lock+0x35/0x140
[  174.891539][    C4]  ? drm_connector_get_single_encoder+0x220/0x220
[  174.891544][    C4]  ? do_raw_spin_unlock+0x159/0x200
[  174.891548][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.891555][    C4]  amdgpu_display_crtc_set_config+0xb7/0x460 [amdgpu]
[  174.891813][    C4]  __drm_mode_set_config_internal+0x298/0x6c0
[  174.891819][    C4]  ? drm_warn_on_modeset_not_all_locked.part.0+0x97/0xe0
[  174.891826][    C4]  drm_mode_set_config_internal+0x12e/0x180
[  174.891831][    C4] drm_client_modeset_commit_locked+0x30c/0x4e0
[  174.891835][    C4]  ? mutex_lock_nested+0x1b/0x20
[  174.891840][    C4]  drm_client_modeset_commit+0x3f/0x80
[  174.891844][    C4]  __drm_fb_helper_restore_fbdev_mode_unlocked+0x15a/0x1a0
[  174.891852][    C4]  drm_fb_helper_lastclose+0x39/0x60
[  174.891855][    C4]  amdgpu_driver_lastclose_kms+0xe/0x20 [amdgpu]
[  174.892130][    C4]  drm_release+0x3b8/0x4c0
[  174.892136][    C4]  __fput+0x1b7/0x780
[  174.892143][    C4]  ____fput+0xe/0x20
[  174.892147][    C4]  task_work_run+0xd4/0x160
[  174.892153][    C4]  do_exit+0x9d3/0x2380
[  174.892157][    C4]  ? __up_read+0x1d4/0x8c0
[  174.892160][    C4]  ? lock_release+0x205/0x820
[  174.892165][    C4]  ? mm_update_next_owner+0x6a0/0x6a0
[  174.892171][    C4]  ? up_read+0x23/0x40
[  174.892175][    C4]  do_group_exit+0xfd/0x2c0
[  174.892180][    C4]  __x64_sys_exit_group+0x43/0x60
[  174.892184][    C4]  do_syscall_64+0x40/0xc0
[  174.892189][    C4]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  174.892194][    C4] RIP: 0033:0x7f0f4e5097c9
[  174.892198][    C4] Code: Unable to access opcode bytes at RIP 0x7f0f4e50979f.
[  174.892199][    C4] RSP: 002b:00007fffa554fed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[  174.892205][    C4] RAX: ffffffffffffffda RBX: 00007f0f4e604800 RCX: 00007f0f4e5097c9
[  174.892208][    C4] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[  174.892210][    C4] RBP: 00007f0f4e604800 R08: fffffffffffffd78 R09: 000055cf14e37180
[  174.892212][    C4] R10: fffffffffffff9bc R11: 0000000000000246 R12: 0000000000000000
[  174.892215][    C4] R13: 0000000000000000 R14: 00000000000005dd R15: 0000000000000000
[  174.892220][    C4]
[  174.892222][    C4] Allocated by task 0:
[  174.892224][    C4] (stack is not available)
[  174.892225][    C4]
[  174.892227][    C4] Freed by task 0:
[  174.892229][    C4]  kasan_save_stack+0x23/0x60
[  174.892234][    C4]  kasan_set_track+0x20/0x40
[  174.892237][    C4]  kasan_set_free_info+0x24/0x40
[  174.892241][    C4]  __kasan_slab_free+0xf1/0x140
[  174.892245][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.892248][    C4]  kfree+0xd7/0x560
[  174.892251][    C4]  skb_release_data+0x41c/0x520
[  174.892255][    C4]  kfree_skb+0x105/0x240
[  174.892258][    C4]  xennet_poll+0x1ab2/0x3c80
[  174.892263][    C4]  __napi_poll+0xb2/0x4c0
[  174.892266][    C4]  net_rx_action+0x2d7/0xa20
[  174.892269][    C4]  __do_softirq+0x1cd/0x66d
[  174.892272][    C4]
[  174.892273][    C4] The buggy address belongs to the object at ffff88812fabd800
[  174.892273][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  174.892276][    C4] The buggy address is located 0 bytes inside of
[  174.892276][    C4]  1024-byte region [ffff88812fabd800, ffff88812fabdc00)
[  174.892280][    C4] The buggy address belongs to the page:
[  174.892282][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  174.892286][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 compound_pincount:0
[  174.892290][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  174.892297][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff888100042dc0
[  174.892300][    C4] raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
[  174.892302][    C4] page dumped because: kasan: bad access detected
[  174.892304][    C4]
[  174.892305][    C4] Memory state around the buggy address:
[  174.892307][    C4]  ffff88812fabd700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.892309][    C4]  ffff88812fabd780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.892311][    C4] >ffff88812fabd800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.892313][    C4]                    ^
[  174.892315][    C4]  ffff88812fabd880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.892317][    C4]  ffff88812fabd900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.892319][    C4] ==================================================================
[  174.892348][    C4] ==================================================================
[  174.892350][    C4] BUG: KASAN: double-free or invalid-free in kmem_cache_free+0x10b/0x520
[  174.892354][    C4]
[  174.892356][    C4] CPU: 4 PID: 2988 Comm: X Tainted: G B             5.13.0-rc6-x86_64 #1
[  174.892360][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  174.892362][    C4] Call Trace:
[  174.892364][    C4]  <IRQ>
[  174.892366][    C4]  dump_stack+0xa1/0xd3
[  174.892371][    C4] print_address_description.constprop.0+0x1d/0x140
[  174.892385][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.892389][    C4]  kasan_report_invalid_free+0x56/0x80
[  174.892394][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.892397][    C4]  __kasan_slab_free+0x110/0x140
[  174.892402][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.892406][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.892411][    C4]  kmem_cache_free+0x10b/0x520
[  174.892415][    C4]  ? kfree_skbmem+0x9a/0x140
[  174.892420][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.892424][    C4]  kfree_skbmem+0x9a/0x140
[  174.892428][    C4]  kfree_skb+0x10d/0x240
[  174.892432][    C4]  xennet_poll+0x15d9/0x3c80
[  174.892441][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  174.892445][    C4]  ? __kasan_check_read+0x11/0x20
[  174.892450][    C4]  ? __raise_softirq_irqoff+0x36/0xe0
[  174.892455][    C4]  ? __napi_schedule+0x1b5/0x260
[  174.892461][    C4]  ? __kasan_check_read+0x11/0x20
[  174.892466][    C4]  ? __lock_acquire.constprop.0+0x494/0xe40
[  174.892470][    C4]  ? lock_release+0x205/0x820
[  174.892473][    C4]  ? do_raw_spin_lock+0x13e/0x280
[  174.892481][    C4]  ? handle_edge_irq+0x35e/0xb60
[  174.892487][    C4]  ? handle_irq_for_port+0x192/0x4c0
[  174.892493][    C4]  ? __kasan_check_write+0x14/0x20
[  174.892498][    C4]  __napi_poll+0xb2/0x4c0
[  174.892503][    C4]  net_rx_action+0x2d7/0xa20
[  174.892507][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  174.892510][    C4]  ? __xen_evtchn_do_upcall+0x99/0x180
[  174.892515][    C4]  ? __kasan_check_write+0x14/0x20
[  174.892520][    C4]  ? _raw_read_unlock+0x23/0x40
[  174.892524][    C4]  ? __xen_evtchn_do_upcall+0x107/0x180
[  174.892529][    C4]  __do_softirq+0x1cd/0x66d
[  174.892534][    C4]  irq_exit_rcu+0x12c/0x1c0
[  174.892538][    C4]  sysvec_xen_hvm_callback+0x79/0xa0
[  174.892543][    C4]  </IRQ>
[  174.892545][    C4]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  174.892549][    C4] RIP: 0010:console_unlock+0x78f/0x940
[  174.892553][    C4] Code: 28 ff ff ff 4c 29 e6 49 8d bc 24 60 6f 2f 86 e8 47 d7 ff ff 4c 01 e0 48 89 85 c0 fe ff ff e9 d5 fa ff ff fb 66 0f 1f 44 00 00 <e9> 61 fe ff ff c7 05 82 3d e3 04 00 00 00 00 48 8b 7d 08 e8 d9 e1
[  174.892557][    C4] RSP: 0018:ffffc90000617438 EFLAGS: 00000206
[  174.892560][    C4] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff1102279ab9f
[  174.892563][    C4] RDX: 1ffff1102279ab70 RSI: 0000000000000000 RDI: ffff888113cd5b80
[  174.892565][    C4] RBP: ffffc900006175a8 R08: 0000000000000000 R09: 000000000000000a
[  174.892567][    C4] R10: ffff888113cd5b88 R11: 0000000000080000 R12: 0000000000000200
[  174.892569][    C4] R13: ffffc900006174a0 R14: dffffc0000000000 R15: ffffc90000617580
[  174.892575][    C4]  ? console_unlock+0x5ec/0x940
[  174.892580][    C4]  ? devkmsg_read+0x6e0/0x6e0
[  174.892585][    C4]  ? zap_class+0x162/0x740
[  174.892589][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.892594][    C4]  ? vprintk_default+0x1d/0x20
[  174.892600][    C4]  vprintk_emit+0xea/0x260
[  174.892604][    C4]  vprintk_default+0x1d/0x20
[  174.892608][    C4]  vprintk+0x4c/0xe0
[  174.892611][    C4]  printk+0xb2/0xe3
[  174.892615][    C4]  ? record_print_text.cold+0x11/0x11
[  174.892618][    C4]  ? record_print_text.cold+0x11/0x11
[  174.892621][    C4]  ? vprintk+0x4c/0xe0
[  174.892625][    C4]  ? printk+0xb2/0xe3
[  174.892629][    C4]  si_dpm_print_power_state.cold+0x15b/0x1d7 [amdgpu]
[  174.892961][    C4]  ? si_dpm_vblank_too_short+0x60/0x60 [amdgpu]
[  174.893336][    C4]  amdgpu_pm_compute_clocks.part.0.cold+0x16d/0x339 [amdgpu]
[  174.893710][    C4]  ? amdgpu_dpm_get_vrefresh+0x200/0x200 [amdgpu]
[  174.894026][    C4]  ? amdgpu_atombios_crtc_lock+0x180/0x180 [amdgpu]
[  174.894293][    C4]  ? amdgpu_atombios_crtc_blank+0x180/0x180 [amdgpu]
[  174.894542][    C4]  amdgpu_pm_compute_clocks+0x58/0x80 [amdgpu]
[  174.894865][    C4]  dce_v6_0_crtc_dpms+0xe4/0x220 [amdgpu]
[  174.895175][    C4]  dce_v6_0_crtc_disable+0x9c/0xce0 [amdgpu]
[  174.895446][    C4]  ? drm_helper_encoder_in_use+0x222/0x2e0
[  174.895451][    C4]  ? drm_helper_force_disable_all+0x1e0/0x1e0
[  174.895456][    C4]  ? dce_v6_0_resume+0x1e0/0x1e0 [amdgpu]
[  174.895727][    C4]  ? __kasan_check_read+0x11/0x20
[  174.895732][    C4]  ? mutex_is_locked+0x17/0x60
[  174.895738][    C4] __drm_helper_disable_unused_functions+0xfb/0x2a0
[  174.895743][    C4]  drm_crtc_helper_set_config+0x14a8/0x28a0
[  174.895747][    C4]  ? find_held_lock+0x35/0x140
[  174.895755][    C4]  ? drm_connector_get_single_encoder+0x220/0x220
[  174.895759][    C4]  ? do_raw_spin_unlock+0x159/0x200
[  174.895763][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.895769][    C4]  amdgpu_display_crtc_set_config+0xb7/0x460 [amdgpu]
[  174.896030][    C4]  __drm_mode_set_config_internal+0x298/0x6c0
[  174.896036][    C4]  ? drm_warn_on_modeset_not_all_locked.part.0+0x97/0xe0
[  174.896042][    C4]  drm_mode_set_config_internal+0x12e/0x180
[  174.896047][    C4] drm_client_modeset_commit_locked+0x30c/0x4e0
[  174.896051][    C4]  ? mutex_lock_nested+0x1b/0x20
[  174.896055][    C4]  drm_client_modeset_commit+0x3f/0x80
[  174.896059][    C4]  __drm_fb_helper_restore_fbdev_mode_unlocked+0x15a/0x1a0
[  174.896066][    C4]  drm_fb_helper_lastclose+0x39/0x60
[  174.896069][    C4]  amdgpu_driver_lastclose_kms+0xe/0x20 [amdgpu]
[  174.896343][    C4]  drm_release+0x3b8/0x4c0
[  174.896347][    C4]  __fput+0x1b7/0x780
[  174.896353][    C4]  ____fput+0xe/0x20
[  174.896357][    C4]  task_work_run+0xd4/0x160
[  174.896362][    C4]  do_exit+0x9d3/0x2380
[  174.896405][    C4]  ? __up_read+0x1d4/0x8c0
[  174.896407][    C4]  ? lock_release+0x205/0x820
[  174.896412][    C4]  ? mm_update_next_owner+0x6a0/0x6a0
[  174.896418][    C4]  ? up_read+0x23/0x40
[  174.896421][    C4]  do_group_exit+0xfd/0x2c0
[  174.896426][    C4]  __x64_sys_exit_group+0x43/0x60
[  174.896430][    C4]  do_syscall_64+0x40/0xc0
[  174.896434][    C4]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  174.896438][    C4] RIP: 0033:0x7f0f4e5097c9
[  174.896441][    C4] Code: Unable to access opcode bytes at RIP 0x7f0f4e50979f.
[  174.896443][    C4] RSP: 002b:00007fffa554fed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[  174.896447][    C4] RAX: ffffffffffffffda RBX: 00007f0f4e604800 RCX: 00007f0f4e5097c9
[  174.896450][    C4] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[  174.896452][    C4] RBP: 00007f0f4e604800 R08: fffffffffffffd78 R09: 000055cf14e37180
[  174.896454][    C4] R10: fffffffffffff9bc R11: 0000000000000246 R12: 0000000000000000
[  174.896456][    C4] R13: 0000000000000000 R14: 00000000000005dd R15: 0000000000000000
[  174.896462][    C4]
[  174.896463][    C4] Allocated by task 0:
[  174.896466][    C4]  kasan_save_stack+0x23/0x60
[  174.896470][    C4]  __kasan_slab_alloc+0x68/0x80
[  174.896473][    C4]  kmem_cache_alloc_node+0x242/0x380
[  174.896476][    C4]  __alloc_skb+0x156/0x280
[  174.896479][    C4]  __netdev_alloc_skb+0x46/0x320
[  174.896482][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  174.896486][    C4]  xennet_poll+0x1e8b/0x3c80
[  174.896490][    C4]  __napi_poll+0xb2/0x4c0
[  174.896492][    C4]  net_rx_action+0x2d7/0xa20
[  174.896495][    C4]  __do_softirq+0x1cd/0x66d
[  174.896497][    C4]
[  174.896498][    C4] Freed by task 0:
[  174.896500][    C4] (stack is not available)
[  174.896501][    C4]
[  174.896502][    C4] The buggy address belongs to the object at ffff88812fba1040
[  174.896502][    C4]  which belongs to the cache skbuff_head_cache of size 216
[  174.896505][    C4] The buggy address is located 0 bytes inside of
[  174.896505][    C4]  216-byte region [ffff88812fba1040, ffff88812fba1118)
[  174.896508][    C4] The buggy address belongs to the page:
[  174.896509][    C4] page:00000000d03f6c30 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x12fba0
[  174.896513][    C4] head:00000000d03f6c30 order:1 compound_mapcount:0
[  174.896515][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  174.896520][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff8881002a3e00
[  174.896523][    C4] raw: 0000000000000000 0000000000190019 00000001ffffffff 0000000000000000
[  174.896525][    C4] page dumped because: kasan: bad access detected
[  174.896526][    C4]
[  174.896527][    C4] Memory state around the buggy address:
[  174.896529][    C4]  ffff88812fba0f00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.896532][    C4]  ffff88812fba0f80: fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc
[  174.896534][    C4] >ffff88812fba1000: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[  174.896535][    C4]                                            ^
[  174.896537][    C4]  ffff88812fba1080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.896539][    C4]  ffff88812fba1100: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.896541][    C4] ==================================================================
[  174.896568][    C4] ==================================================================
[  174.896570][    C4] BUG: KASAN: double-free or invalid-free in kfree+0xd7/0x560
[  174.896573][    C4]
[  174.896575][    C4] CPU: 4 PID: 2988 Comm: X Tainted: G B             5.13.0-rc6-x86_64 #1
[  174.896579][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  174.896580][    C4] Call Trace:
[  174.896582][    C4]  <IRQ>
[  174.896584][    C4]  dump_stack+0xa1/0xd3
[  174.896588][    C4] print_address_description.constprop.0+0x1d/0x140
[  174.896592][    C4]  ? kfree+0xd7/0x560
[  174.896595][    C4]  kasan_report_invalid_free+0x56/0x80
[  174.896599][    C4]  ? kfree+0xd7/0x560
[  174.896603][    C4]  __kasan_slab_free+0x110/0x140
[  174.896607][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.896612][    C4]  kfree+0xd7/0x560
[  174.896615][    C4]  ? skb_release_data+0x41c/0x520
[  174.896619][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.896623][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.896627][    C4]  skb_release_data+0x41c/0x520
[  174.896631][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.896635][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.896639][    C4]  kfree_skb+0x105/0x240
[  174.896643][    C4]  xennet_poll+0x15d9/0x3c80
[  174.896651][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  174.896655][    C4]  ? __kasan_check_read+0x11/0x20
[  174.896659][    C4]  ? __raise_softirq_irqoff+0x36/0xe0
[  174.896664][    C4]  ? __napi_schedule+0x1b5/0x260
[  174.896669][    C4]  ? __kasan_check_read+0x11/0x20
[  174.896673][    C4]  ? __lock_acquire.constprop.0+0x494/0xe40
[  174.896677][    C4]  ? lock_release+0x205/0x820
[  174.896680][    C4]  ? do_raw_spin_lock+0x13e/0x280
[  174.896687][    C4]  ? handle_edge_irq+0x35e/0xb60
[  174.896692][    C4]  ? handle_irq_for_port+0x192/0x4c0
[  174.896697][    C4]  ? __kasan_check_write+0x14/0x20
[  174.896702][    C4]  __napi_poll+0xb2/0x4c0
[  174.896706][    C4]  net_rx_action+0x2d7/0xa20
[  174.896711][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  174.896713][    C4]  ? __xen_evtchn_do_upcall+0x99/0x180
[  174.896718][    C4]  ? __kasan_check_write+0x14/0x20
[  174.896722][    C4]  ? _raw_read_unlock+0x23/0x40
[  174.896727][    C4]  ? __xen_evtchn_do_upcall+0x107/0x180
[  174.896731][    C4]  __do_softirq+0x1cd/0x66d
[  174.896736][    C4]  irq_exit_rcu+0x12c/0x1c0
[  174.896740][    C4]  sysvec_xen_hvm_callback+0x79/0xa0
[  174.896744][    C4]  </IRQ>
[  174.896746][    C4]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  174.896750][    C4] RIP: 0010:console_unlock+0x78f/0x940
[  174.896754][    C4] Code: 28 ff ff ff 4c 29 e6 49 8d bc 24 60 6f 2f 86 e8 47 d7 ff ff 4c 01 e0 48 89 85 c0 fe ff ff e9 d5 fa ff ff fb 66 0f 1f 44 00 00 <e9> 61 fe ff ff c7 05 82 3d e3 04 00 00 00 00 48 8b 7d 08 e8 d9 e1
[  174.896757][    C4] RSP: 0018:ffffc90000617438 EFLAGS: 00000206
[  174.896760][    C4] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff1102279ab9f
[  174.896762][    C4] RDX: 1ffff1102279ab70 RSI: 0000000000000000 RDI: ffff888113cd5b80
[  174.896764][    C4] RBP: ffffc900006175a8 R08: 0000000000000000 R09: 000000000000000a
[  174.896766][    C4] R10: ffff888113cd5b88 R11: 0000000000080000 R12: 0000000000000200
[  174.896768][    C4] R13: ffffc900006174a0 R14: dffffc0000000000 R15: ffffc90000617580
[  174.896773][    C4]  ? console_unlock+0x5ec/0x940
[  174.896778][    C4]  ? devkmsg_read+0x6e0/0x6e0
[  174.896782][    C4]  ? zap_class+0x162/0x740
[  174.896786][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.896791][    C4]  ? vprintk_default+0x1d/0x20
[  174.896795][    C4]  vprintk_emit+0xea/0x260
[  174.896799][    C4]  vprintk_default+0x1d/0x20
[  174.896803][    C4]  vprintk+0x4c/0xe0
[  174.896806][    C4]  printk+0xb2/0xe3
[  174.896809][    C4]  ? record_print_text.cold+0x11/0x11
[  174.896812][    C4]  ? record_print_text.cold+0x11/0x11
[  174.896815][    C4]  ? vprintk+0x4c/0xe0
[  174.896818][    C4]  ? printk+0xb2/0xe3
[  174.896822][    C4]  si_dpm_print_power_state.cold+0x15b/0x1d7 [amdgpu]
[  174.897155][    C4]  ? si_dpm_vblank_too_short+0x60/0x60 [amdgpu]
[  174.897478][    C4]  amdgpu_pm_compute_clocks.part.0.cold+0x16d/0x339 [amdgpu]
[  174.897821][    C4]  ? amdgpu_dpm_get_vrefresh+0x200/0x200 [amdgpu]
[  174.898193][    C4]  ? amdgpu_atombios_crtc_lock+0x180/0x180 [amdgpu]
[  174.898491][    C4]  ? amdgpu_atombios_crtc_blank+0x180/0x180 [amdgpu]
[  174.898808][    C4]  amdgpu_pm_compute_clocks+0x58/0x80 [amdgpu]
[  174.899130][    C4]  dce_v6_0_crtc_dpms+0xe4/0x220 [amdgpu]
[  174.899401][    C4]  dce_v6_0_crtc_disable+0x9c/0xce0 [amdgpu]
[  174.899673][    C4]  ? drm_helper_encoder_in_use+0x222/0x2e0
[  174.899678][    C4]  ? drm_helper_force_disable_all+0x1e0/0x1e0
[  174.899683][    C4]  ? dce_v6_0_resume+0x1e0/0x1e0 [amdgpu]
[  174.899951][    C4]  ? __kasan_check_read+0x11/0x20
[  174.899956][    C4]  ? mutex_is_locked+0x17/0x60
[  174.899962][    C4] __drm_helper_disable_unused_functions+0xfb/0x2a0
[  174.899967][    C4]  drm_crtc_helper_set_config+0x14a8/0x28a0
[  174.899971][    C4]  ? find_held_lock+0x35/0x140
[  174.899978][    C4]  ? drm_connector_get_single_encoder+0x220/0x220
[  174.899982][    C4]  ? do_raw_spin_unlock+0x159/0x200
[  174.899986][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.899993][    C4]  amdgpu_display_crtc_set_config+0xb7/0x460 [amdgpu]
[  174.900251][    C4]  __drm_mode_set_config_internal+0x298/0x6c0
[  174.900256][    C4]  ? drm_warn_on_modeset_not_all_locked.part.0+0x97/0xe0
[  174.900262][    C4]  drm_mode_set_config_internal+0x12e/0x180
[  174.900278][    C4] drm_client_modeset_commit_locked+0x30c/0x4e0
[  174.900282][    C4]  ? mutex_lock_nested+0x1b/0x20
[  174.900286][    C4]  drm_client_modeset_commit+0x3f/0x80
[  174.900290][    C4]  __drm_fb_helper_restore_fbdev_mode_unlocked+0x15a/0x1a0
[  174.900296][    C4]  drm_fb_helper_lastclose+0x39/0x60
[  174.900310][    C4]  amdgpu_driver_lastclose_kms+0xe/0x20 [amdgpu]
[  174.900546][    C4]  drm_release+0x3b8/0x4c0
[  174.900550][    C4]  __fput+0x1b7/0x780
[  174.900556][    C4]  ____fput+0xe/0x20
[  174.900559][    C4]  task_work_run+0xd4/0x160
[  174.900564][    C4]  do_exit+0x9d3/0x2380
[  174.900568][    C4]  ? __up_read+0x1d4/0x8c0
[  174.900570][    C4]  ? lock_release+0x205/0x820
[  174.900575][    C4]  ? mm_update_next_owner+0x6a0/0x6a0
[  174.900581][    C4]  ? up_read+0x23/0x40
[  174.900584][    C4]  do_group_exit+0xfd/0x2c0
[  174.900589][    C4]  __x64_sys_exit_group+0x43/0x60
[  174.900593][    C4]  do_syscall_64+0x40/0xc0
[  174.900597][    C4]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  174.900601][    C4] RIP: 0033:0x7f0f4e5097c9
[  174.900604][    C4] Code: Unable to access opcode bytes at RIP 0x7f0f4e50979f.
[  174.900606][    C4] RSP: 002b:00007fffa554fed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[  174.900611][    C4] RAX: ffffffffffffffda RBX: 00007f0f4e604800 RCX: 00007f0f4e5097c9
[  174.900613][    C4] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[  174.900615][    C4] RBP: 00007f0f4e604800 R08: fffffffffffffd78 R09: 000055cf14e37180
[  174.900617][    C4] R10: fffffffffffff9bc R11: 0000000000000246 R12: 0000000000000000
[  174.900619][    C4] R13: 0000000000000000 R14: 00000000000005dd R15: 0000000000000000
[  174.900625][    C4]
[  174.900626][    C4] Allocated by task 0:
[  174.900628][    C4] (stack is not available)
[  174.900629][    C4]
[  174.900630][    C4] Freed by task 0:
[  174.900632][    C4]  kasan_save_stack+0x23/0x60
[  174.900636][    C4]  kasan_set_track+0x20/0x40
[  174.900639][    C4]  kasan_set_free_info+0x24/0x40
[  174.900643][    C4]  __kasan_slab_free+0xf1/0x140
[  174.900646][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.900649][    C4]  kfree+0xd7/0x560
[  174.900652][    C4]  skb_release_data+0x41c/0x520
[  174.900655][    C4]  kfree_skb+0x105/0x240
[  174.900658][    C4]  xennet_poll+0x1ab2/0x3c80
[  174.900662][    C4]  __napi_poll+0xb2/0x4c0
[  174.900664][    C4]  net_rx_action+0x2d7/0xa20
[  174.900667][    C4]  __do_softirq+0x1cd/0x66d
[  174.900670][    C4]
[  174.900670][    C4] The buggy address belongs to the object at ffff88812fabe800
[  174.900670][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  174.900673][    C4] The buggy address is located 0 bytes inside of
[  174.900673][    C4]  1024-byte region [ffff88812fabe800, ffff88812fabec00)
[  174.900676][    C4] The buggy address belongs to the page:
[  174.900678][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  174.900682][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 compound_pincount:0
[  174.900685][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  174.900690][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff888100042dc0
[  174.900693][    C4] raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
[  174.900695][    C4] page dumped because: kasan: bad access detected
[  174.900696][    C4]
[  174.900697][    C4] Memory state around the buggy address:
[  174.900699][    C4]  ffff88812fabe700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.900702][    C4]  ffff88812fabe780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.900704][    C4] >ffff88812fabe800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.900705][    C4]                    ^
[  174.900707][    C4]  ffff88812fabe880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.900709][    C4]  ffff88812fabe900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.900710][    C4] ==================================================================
[  174.900736][    C4] ==================================================================
[  174.900738][    C4] BUG: KASAN: double-free or invalid-free in kmem_cache_free+0x10b/0x520
[  174.900742][    C4]
[  174.900743][    C4] CPU: 4 PID: 2988 Comm: X Tainted: G B             5.13.0-rc6-x86_64 #1
[  174.900747][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  174.900748][    C4] Call Trace:
[  174.900750][    C4]  <IRQ>
[  174.900752][    C4]  dump_stack+0xa1/0xd3
[  174.900755][    C4] print_address_description.constprop.0+0x1d/0x140
[  174.900760][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.900763][    C4]  kasan_report_invalid_free+0x56/0x80
[  174.900767][    C4]  ? kmem_cache_free+0x10b/0x520
[  174.900771][    C4]  __kasan_slab_free+0x110/0x140
[  174.900775][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  174.900779][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.900783][    C4]  kmem_cache_free+0x10b/0x520
[  174.900787][    C4]  ? kfree_skbmem+0x9a/0x140
[  174.900792][    C4]  ? xennet_poll+0x15d9/0x3c80
[  174.900796][    C4]  kfree_skbmem+0x9a/0x140
[  174.900800][    C4]  kfree_skb+0x10d/0x240
[  174.900803][    C4]  xennet_poll+0x15d9/0x3c80
[  174.900812][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  174.900828][    C4]  ? __kasan_check_read+0x11/0x20
[  174.900832][    C4]  ? __raise_softirq_irqoff+0x36/0xe0
[  174.900836][    C4]  ? __napi_schedule+0x1b5/0x260
[  174.900841][    C4]  ? __kasan_check_read+0x11/0x20
[  174.900845][    C4]  ? __lock_acquire.constprop.0+0x494/0xe40
[  174.900850][    C4]  ? lock_release+0x205/0x820
[  174.900852][    C4]  ? do_raw_spin_lock+0x13e/0x280
[  174.900859][    C4]  ? handle_edge_irq+0x35e/0xb60
[  174.900864][    C4]  ? handle_irq_for_port+0x192/0x4c0
[  174.900869][    C4]  ? __kasan_check_write+0x14/0x20
[  174.900874][    C4]  __napi_poll+0xb2/0x4c0
[  174.900878][    C4]  net_rx_action+0x2d7/0xa20
[  174.900883][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  174.900885][    C4]  ? __xen_evtchn_do_upcall+0x99/0x180
[  174.900890][    C4]  ? __kasan_check_write+0x14/0x20
[  174.900894][    C4]  ? _raw_read_unlock+0x23/0x40
[  174.900898][    C4]  ? __xen_evtchn_do_upcall+0x107/0x180
[  174.900903][    C4]  __do_softirq+0x1cd/0x66d
[  174.900907][    C4]  irq_exit_rcu+0x12c/0x1c0
[  174.900911][    C4]  sysvec_xen_hvm_callback+0x79/0xa0
[  174.900915][    C4]  </IRQ>
[  174.900917][    C4]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  174.900921][    C4] RIP: 0010:console_unlock+0x78f/0x940
[  174.900925][    C4] Code: 28 ff ff ff 4c 29 e6 49 8d bc 24 60 6f 2f 86 e8 47 d7 ff ff 4c 01 e0 48 89 85 c0 fe ff ff e9 d5 fa ff ff fb 66 0f 1f 44 00 00 <e9> 61 fe ff ff c7 05 82 3d e3 04 00 00 00 00 48 8b 7d 08 e8 d9 e1
[  174.900928][    C4] RSP: 0018:ffffc90000617438 EFLAGS: 00000206
[  174.900931][    C4] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff1102279ab9f
[  174.900933][    C4] RDX: 1ffff1102279ab70 RSI: 0000000000000000 RDI: ffff888113cd5b80
[  174.900935][    C4] RBP: ffffc900006175a8 R08: 0000000000000000 R09: 000000000000000a
[  174.900937][    C4] R10: ffff888113cd5b88 R11: 0000000000080000 R12: 0000000000000200
[  174.900939][    C4] R13: ffffc900006174a0 R14: dffffc0000000000 R15: ffffc90000617580
[  174.900944][    C4]  ? console_unlock+0x5ec/0x940
[  174.900949][    C4]  ? devkmsg_read+0x6e0/0x6e0
[  174.900953][    C4]  ? zap_class+0x162/0x740
[  174.900962][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.900978][    C4]  ? vprintk_default+0x1d/0x20
[  174.900983][    C4]  vprintk_emit+0xea/0x260
[  174.900988][    C4]  vprintk_default+0x1d/0x20
[  174.900991][    C4]  vprintk+0x4c/0xe0
[  174.900995][    C4]  printk+0xb2/0xe3
[  174.900999][    C4]  ? record_print_text.cold+0x11/0x11
[  174.901002][    C4]  ? record_print_text.cold+0x11/0x11
[  174.901005][    C4]  ? vprintk+0x4c/0xe0
[  174.901009][    C4]  ? printk+0xb2/0xe3
[  174.901014][    C4]  si_dpm_print_power_state.cold+0x15b/0x1d7 [amdgpu]
[  174.901360][    C4]  ? si_dpm_vblank_too_short+0x60/0x60 [amdgpu]
[  174.901652][    C4]  amdgpu_pm_compute_clocks.part.0.cold+0x16d/0x339 [amdgpu]
[  174.901986][    C4]  ? amdgpu_dpm_get_vrefresh+0x200/0x200 [amdgpu]
[  174.902310][    C4]  ? amdgpu_atombios_crtc_lock+0x180/0x180 [amdgpu]
[  174.902557][    C4]  ? amdgpu_atombios_crtc_blank+0x180/0x180 [amdgpu]
[  174.902794][    C4]  amdgpu_pm_compute_clocks+0x58/0x80 [amdgpu]
[  174.903106][    C4]  dce_v6_0_crtc_dpms+0xe4/0x220 [amdgpu]
[  174.903391][    C4]  dce_v6_0_crtc_disable+0x9c/0xce0 [amdgpu]
[  174.903639][    C4]  ? drm_helper_encoder_in_use+0x222/0x2e0
[  174.903644][    C4]  ? drm_helper_force_disable_all+0x1e0/0x1e0
[  174.903648][    C4]  ? dce_v6_0_resume+0x1e0/0x1e0 [amdgpu]
[  174.903907][    C4]  ? __kasan_check_read+0x11/0x20
[  174.903912][    C4]  ? mutex_is_locked+0x17/0x60
[  174.903917][    C4] __drm_helper_disable_unused_functions+0xfb/0x2a0
[  174.903922][    C4]  drm_crtc_helper_set_config+0x14a8/0x28a0
[  174.903925][    C4]  ? find_held_lock+0x35/0x140
[  174.903932][    C4]  ? drm_connector_get_single_encoder+0x220/0x220
[  174.903936][    C4]  ? do_raw_spin_unlock+0x159/0x200
[  174.903940][    C4]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  174.903945][    C4]  amdgpu_display_crtc_set_config+0xb7/0x460 [amdgpu]
[  174.904213][    C4] __drm_mode_set_config_internal+0x298/0x6c0
[  174.904218][    C4]  ? drm_warn_on_modeset_not_all_locked.part.0+0x97/0xe0
[  174.904224][    C4]  drm_mode_set_config_internal+0x12e/0x180
[  174.904229][    C4] drm_client_modeset_commit_locked+0x30c/0x4e0
[  174.904233][    C4]  ? mutex_lock_nested+0x1b/0x20
[  174.904237][    C4]  drm_client_modeset_commit+0x3f/0x80
[  174.904241][    C4]  __drm_fb_helper_restore_fbdev_mode_unlocked+0x15a/0x1a0
[  174.904247][    C4]  drm_fb_helper_lastclose+0x39/0x60
[  174.904251][    C4]  amdgpu_driver_lastclose_kms+0xe/0x20 [amdgpu]
[  174.904578][    C4]  drm_release+0x3b8/0x4c0
[  174.904583][    C4]  __fput+0x1b7/0x780
[  174.904589][    C4]  ____fput+0xe/0x20
[  174.904593][    C4]  task_work_run+0xd4/0x160
[  174.904598][    C4]  do_exit+0x9d3/0x2380
[  174.904602][    C4]  ? __up_read+0x1d4/0x8c0
[  174.904605][    C4]  ? lock_release+0x205/0x820
[  174.904609][    C4]  ? mm_update_next_owner+0x6a0/0x6a0
[  174.904616][    C4]  ? up_read+0x23/0x40
[  174.904619][    C4]  do_group_exit+0xfd/0x2c0
[  174.904624][    C4]  __x64_sys_exit_group+0x43/0x60
[  174.904628][    C4]  do_syscall_64+0x40/0xc0
[  174.904633][    C4]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  174.904637][    C4] RIP: 0033:0x7f0f4e5097c9
[  174.904640][    C4] Code: Unable to access opcode bytes at RIP 0x7f0f4e50979f.
[  174.904642][    C4] RSP: 002b:00007fffa554fed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[  174.904646][    C4] RAX: ffffffffffffffda RBX: 00007f0f4e604800 RCX: 00007f0f4e5097c9
[  174.904648][    C4] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[  174.904650][    C4] RBP: 00007f0f4e604800 R08: fffffffffffffd78 R09: 000055cf14e37180
[  174.904653][    C4] R10: fffffffffffff9bc R11: 0000000000000246 R12: 0000000000000000
[  174.904655][    C4] R13: 0000000000000000 R14: 00000000000005dd R15: 0000000000000000
[  174.904660][    C4]
[  174.904661][    C4] Allocated by task 0:
[  174.904664][    C4]  kasan_save_stack+0x23/0x60
[  174.904668][    C4]  __kasan_slab_alloc+0x68/0x80
[  174.904671][    C4]  kmem_cache_alloc_node+0x242/0x380
[  174.904675][    C4]  __alloc_skb+0x156/0x280
[  174.904678][    C4]  __netdev_alloc_skb+0x46/0x320
[  174.904681][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  174.904685][    C4]  xennet_poll+0x1e8b/0x3c80
[  174.904689][    C4]  __napi_poll+0xb2/0x4c0
[  174.904692][    C4]  net_rx_action+0x2d7/0xa20
[  174.904694][    C4]  __do_softirq+0x1cd/0x66d
[  174.904697][    C4]
[  174.904698][    C4] Freed by task 0:
[  174.904700][    C4] (stack is not available)
[  174.904701][    C4]
[  174.904701][    C4] The buggy address belongs to the object at ffff88812fba0dc0
[  174.904701][    C4]  which belongs to the cache skbuff_head_cache of size 216
[  174.904704][    C4] The buggy address is located 0 bytes inside of
[  174.904704][    C4]  216-byte region [ffff88812fba0dc0, ffff88812fba0e98)
[  174.904708][    C4] The buggy address belongs to the page:
[  174.904709][    C4] page:00000000d03f6c30 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x12fba0
[  174.904713][    C4] head:00000000d03f6c30 order:1 compound_mapcount:0
[  174.904715][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  174.904720][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff8881002a3e00
[  174.904723][    C4] raw: 0000000000000000 0000000000190019 00000001ffffffff 0000000000000000
[  174.904725][    C4] page dumped because: kasan: bad access detected
[  174.904727][    C4]
[  174.904728][    C4] Memory state around the buggy address:
[  174.904729][    C4]  ffff88812fba0c80: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.904732][    C4]  ffff88812fba0d00: fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc
[  174.904734][    C4] >ffff88812fba0d80: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[  174.904735][    C4]                                            ^
[  174.904737][    C4]  ffff88812fba0e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  174.904739][    C4]  ffff88812fba0e80: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
[  174.904741][    C4] ==================================================================
[  180.087703][    C0] net_ratelimit: 28 callbacks suppressed
[  180.087737][    C0] net eth0: rx->offset: 0, size: -1
[  180.089133][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  180.089142][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  180.089146][ T2988]     status: c
[  180.089152][ T2988] switching to power state:
[  180.089154][ T2988]     ui class: performance
[  180.089156][ T2988]     internal class: none
[  182.140560][ T2988]     caps:
[  182.140563][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.140567][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.140571][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.140574][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.140577][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140579][ T2988]     status: r
[  182.140701][    C4] ==================================================================
[  182.140787][ T2988] switching from power state:
[  182.140790][ T2988]     ui class: performance
[  182.140792][ T2988]     internal class: none
[  182.140796][ T2988]     caps:
[  182.140798][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.140801][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.140805][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.140808][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140810][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140813][ T2988]     status: c
[  182.140816][ T2988] switching to power state:
[  182.140817][ T2988]     ui class: performance
[  182.140818][ T2988]     internal class: none
[  182.140821][ T2988]     caps:
[  182.140823][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.140824][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.140827][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.140830][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.140832][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140835][ T2988]     status: r
[  182.140912][ T2988] switching from power state:
[  182.140913][ T2988]     ui class: performance
[  182.140915][ T2988]     internal class: none
[  182.140917][ T2988]     caps:
[  182.140919][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.140921][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.140923][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.140926][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140929][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140931][ T2988]     status: c
[  182.140934][ T2988] switching to power state:
[  182.140935][ T2988]     ui class: performance
[  182.140936][ T2988]     internal class: none
[  182.140938][ T2988]     caps:
[  182.140940][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.140941][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.140944][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.140946][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.140948][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.140950][ T2988]     status: r
[  182.141118][ T2988] switching from power state:
[  182.141120][ T2988]     ui class: performance
[  182.141121][ T2988]     internal class: none
[  182.141124][ T2988]     caps:
[  182.141126][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141127][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141130][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141132][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141135][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141137][ T2988]     status: c
[  182.141139][ T2988] switching to power state:
[  182.141141][ T2988]     ui class: performance
[  182.141142][ T2988]     internal class: none
[  182.141144][ T2988]     caps:
[  182.141146][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141148][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141150][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141152][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141155][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141157][ T2988]     status: r
[  182.141224][ T2988] switching from power state:
[  182.141225][ T2988]     ui class: performance
[  182.141227][ T2988]     internal class: none
[  182.141229][ T2988]     caps:
[  182.141231][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141232][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141235][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141237][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141239][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141241][ T2988]     status: c
[  182.141244][ T2988] switching to power state:
[  182.141245][ T2988]     ui class: performance
[  182.141246][ T2988]     internal class: none
[  182.141249][ T2988]     caps:
[  182.141250][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141252][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141254][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141257][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141259][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141262][ T2988]     status: r
[  182.141325][ T2988] switching from power state:
[  182.141326][ T2988]     ui class: performance
[  182.141327][ T2988]     internal class: none
[  182.141330][ T2988]     caps:
[  182.141332][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141333][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141336][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141338][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141341][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141343][ T2988]     status: c
[  182.141346][ T2988] switching to power state:
[  182.141347][ T2988]     ui class: performance
[  182.141348][ T2988]     internal class: none
[  182.141351][ T2988]     caps:
[  182.141352][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141354][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141356][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141359][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141361][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141363][ T2988]     status: r
[  182.141491][ T2988] switching from power state:
[  182.141494][ T2988]     ui class: performance
[  182.141496][ T2988]     internal class: none
[  182.141500][ T2988]     caps:
[  182.141502][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141504][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141507][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141510][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141512][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141515][ T2988]     status: c
[  182.141518][ T2988] switching to power state:
[  182.141519][ T2988]     ui class: performance
[  182.141520][ T2988]     internal class: none
[  182.141523][ T2988]     caps:
[  182.141525][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141526][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141529][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141531][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141534][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141536][ T2988]     status: r
[  182.141620][ T2988] switching from power state:
[  182.141622][ T2988]     ui class: performance
[  182.141624][ T2988]     internal class: none
[  182.141626][ T2988]     caps:
[  182.141628][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141630][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141632][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141635][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141637][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141639][ T2988]     status: c
[  182.141642][ T2988] switching to power state:
[  182.141643][ T2988]     ui class: performance
[  182.141644][ T2988]     internal class: none
[  182.141647][ T2988]     caps:
[  182.141648][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141650][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141652][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141654][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141657][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141659][ T2988]     status: r
[  182.141828][ T2988] switching from power state:
[  182.141830][ T2988]     ui class: performance
[  182.141832][ T2988]     internal class: none
[  182.141835][ T2988]     caps:
[  182.141837][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141838][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141841][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141843][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141846][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141849][ T2988]     status: c
[  182.141851][ T2988] switching to power state:
[  182.141852][ T2988]     ui class: performance
[  182.141853][ T2988]     internal class: none
[  182.141856][ T2988]     caps:
[  182.141857][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141859][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141861][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141864][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.141866][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141868][ T2988]     status: r
[  182.141933][ T2988] switching from power state:
[  182.141934][ T2988]     ui class: performance
[  182.141935][ T2988]     internal class: none
[  182.141938][ T2988]     caps:
[  182.141940][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141941][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.141944][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141946][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141949][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.141951][ T2988]     status: c
[  182.141954][ T2988] switching to power state:
[  182.141984][ T2988]     ui class: performance
[  182.141986][ T2988]     internal class: none
[  182.141988][ T2988]     caps:
[  182.141990][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.141992][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.141995][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.141998][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142001][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142003][ T2988]     status: r
[  182.142070][ T2988] switching from power state:
[  182.142071][ T2988]     ui class: performance
[  182.142072][ T2988]     internal class: none
[  182.142075][ T2988]     caps:
[  182.142077][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142078][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142081][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142084][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142086][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142089][ T2988]     status: c
[  182.142092][ T2988] switching to power state:
[  182.142093][ T2988]     ui class: performance
[  182.142094][ T2988]     internal class: none
[  182.142096][ T2988]     caps:
[  182.142098][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142100][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142102][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142105][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142107][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142109][ T2988]     status: r
[  182.142253][ T2988] switching from power state:
[  182.142255][ T2988]     ui class: performance
[  182.142258][ T2988]     internal class: none
[  182.142262][ T2988]     caps:
[  182.142264][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142266][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142270][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142273][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142276][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142278][ T2988]     status: c
[  182.142281][ T2988] switching to power state:
[  182.142283][ T2988]     ui class: performance
[  182.142284][ T2988]     internal class: none
[  182.142287][ T2988]     caps:
[  182.142289][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142291][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142293][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142296][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142298][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142301][ T2988]     status: r
[  182.142371][ T2988] switching from power state:
[  182.142372][ T2988]     ui class: performance
[  182.142388][ T2988]     internal class: none
[  182.142391][ T2988]     caps:
[  182.142393][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142395][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142397][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142401][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142403][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142406][ T2988]     status: c
[  182.142408][ T2988] switching to power state:
[  182.142409][ T2988]     ui class: performance
[  182.142410][ T2988]     internal class: none
[  182.142413][ T2988]     caps:
[  182.142415][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142417][ T2988] [drm]         power level 0    sclk: 30000 mclk: 15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142419][ T2988] [drm]         power level 1    sclk: 45000 mclk: 140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142422][ T2988] [drm]         power level 2    sclk: 93000 mclk: 140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142424][ T2988] [drm]         power level 3    sclk: 95500 mclk: 140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142427][ T2988]     status: r
[  182.142598][ T2988] switching from power state:
[  182.142600][ T2988]     ui class: performance
[  182.142602][ T2988]     internal class: none
[  182.142605][ T2988]     caps:
[  182.142607][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142609][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142612][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142615][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142618][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142620][ T2988]     status: c
[  182.142623][ T2988] switching to power state:
[  182.142624][ T2988]     ui class: performance
[  182.142625][ T2988]     internal class: none
[  182.142628][ T2988]     caps:
[  182.142630][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142631][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142634][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142636][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142639][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142641][ T2988]     status: r
[  182.142710][ T2988] switching from power state:
[  182.142711][ T2988]     ui class: performance
[  182.142712][ T2988]     internal class: none
[  182.142715][ T2988]     caps:
[  182.142717][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142719][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142721][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142724][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142726][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142729][ T2988]     status: c
[  182.142731][ T2988] switching to power state:
[  182.142732][ T2988]     ui class: performance
[  182.142734][ T2988]     internal class: none
[  182.142736][ T2988]     caps:
[  182.142738][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142739][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142742][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142745][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142747][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142750][ T2988]     status: r
[  182.142813][ T2988] switching from power state:
[  182.142814][ T2988]     ui class: performance
[  182.142815][ T2988]     internal class: none
[  182.142818][ T2988]     caps:
[  182.142820][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142822][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.142824][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142827][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142830][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142832][ T2988]     status: c
[  182.142834][ T2988] switching to power state:
[  182.142836][ T2988]     ui class: performance
[  182.142837][ T2988]     internal class: none
[  182.142839][ T2988]     caps:
[  182.142841][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.142843][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.142845][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.142848][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.142851][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.142853][ T2988]     status: r
[  182.143124][ T2988] switching from power state:
[  182.143127][ T2988]     ui class: performance
[  182.143129][ T2988]     internal class: none
[  182.143133][ T2988]     caps:
[  182.143134][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143136][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143139][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143142][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143144][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143147][ T2988]     status: c
[  182.143149][ T2988] switching to power state:
[  182.143150][ T2988]     ui class: performance
[  182.143151][ T2988]     internal class: none
[  182.143154][ T2988]     caps:
[  182.143156][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143157][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143160][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143162][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143166][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143168][ T2988]     status: r
[  182.143243][ T2988] switching from power state:
[  182.143245][ T2988]     ui class: performance
[  182.143246][ T2988]     internal class: none
[  182.143249][ T2988]     caps:
[  182.143251][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143252][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143255][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143257][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143268][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143270][ T2988]     status: c
[  182.143273][ T2988] switching to power state:
[  182.143274][ T2988]     ui class: performance
[  182.143275][ T2988]     internal class: none
[  182.143278][ T2988]     caps:
[  182.143279][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143281][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143283][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143286][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143289][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143291][ T2988]     status: r
[  182.143390][ T2988] switching from power state:
[  182.143391][ T2988]     ui class: performance
[  182.143393][ T2988]     internal class: none
[  182.143395][ T2988]     caps:
[  182.143397][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143399][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143401][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143404][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143525][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143529][ T2988]     status: c
[  182.143533][ T2988] switching to power state:
[  182.143534][ T2988]     ui class: performance
[  182.143536][ T2988]     internal class: none
[  182.143539][ T2988]     caps:
[  182.143541][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143543][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143546][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143549][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143551][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143554][ T2988]     status: r
[  182.143642][ T2988] switching from power state:
[  182.143644][ T2988]     ui class: performance
[  182.143645][ T2988]     internal class: none
[  182.143648][ T2988]     caps:
[  182.143650][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143652][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143655][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143658][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143661][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143663][ T2988]     status: c
[  182.143666][ T2988] switching to power state:
[  182.143667][ T2988]     ui class: performance
[  182.143668][ T2988]     internal class: none
[  182.143671][ T2988]     caps:
[  182.143673][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143674][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143677][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143680][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143682][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143685][ T2988]     status: r
[  182.143750][ T2988] switching from power state:
[  182.143751][ T2988]     ui class: performance
[  182.143753][ T2988]     internal class: none
[  182.143755][ T2988]     caps:
[  182.143757][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143759][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143761][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143764][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143766][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143769][ T2988]     status: c
[  182.143771][ T2988] switching to power state:
[  182.143772][ T2988]     ui class: performance
[  182.143774][ T2988]     internal class: none
[  182.143776][ T2988]     caps:
[  182.143778][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143779][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143782][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143784][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143787][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143789][ T2988]     status: r
[  182.143852][ T2988] switching from power state:
[  182.143854][ T2988]     ui class: performance
[  182.143855][ T2988]     internal class: none
[  182.143857][ T2988]     caps:
[  182.143859][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143861][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.143863][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143865][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143868][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143870][ T2988]     status: c
[  182.143873][ T2988] switching to power state:
[  182.143874][ T2988]     ui class: performance
[  182.143875][ T2988]     internal class: none
[  182.143878][ T2988]     caps:
[  182.143880][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.143882][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.143884][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.143887][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.143890][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.143892][ T2988]     status: r
[  182.144047][ T2988] switching from power state:
[  182.144050][ T2988]     ui class: performance
[  182.144052][ T2988]     internal class: none
[  182.144055][ T2988]     caps:
[  182.144057][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144059][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144062][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144065][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144068][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144070][ T2988]     status: c
[  182.144073][ T2988] switching to power state:
[  182.144074][ T2988]     ui class: performance
[  182.144076][ T2988]     internal class: none
[  182.144078][ T2988]     caps:
[  182.144080][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144082][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144084][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144087][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144090][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144092][ T2988]     status: r
[  182.144320][ T2988] switching from power state:
[  182.144323][ T2988]     ui class: performance
[  182.144324][ T2988]     internal class: none
[  182.144327][ T2988]     caps:
[  182.144329][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144331][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144334][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144337][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144339][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144342][ T2988]     status: c
[  182.144344][ T2988] switching to power state:
[  182.144345][ T2988]     ui class: performance
[  182.144347][ T2988]     internal class: none
[  182.144349][ T2988]     caps:
[  182.144351][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144352][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144355][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144358][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144360][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144363][ T2988]     status: r
[  182.144438][ T2988] switching from power state:
[  182.144440][ T2988]     ui class: performance
[  182.144441][ T2988]     internal class: none
[  182.144444][ T2988]     caps:
[  182.144445][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144447][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144450][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144453][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144455][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144458][ T2988]     status: c
[  182.144460][ T2988] switching to power state:
[  182.144461][ T2988]     ui class: performance
[  182.144463][ T2988]     internal class: none
[  182.144465][ T2988]     caps:
[  182.144467][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144469][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144472][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144474][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144476][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144479][ T2988]     status: r
[  182.144601][ T2988] switching from power state:
[  182.144604][ T2988]     ui class: performance
[  182.144606][ T2988]     internal class: none
[  182.144610][ T2988]     caps:
[  182.144612][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144615][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144619][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144622][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144624][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144627][ T2988]     status: c
[  182.144630][ T2988] switching to power state:
[  182.144631][ T2988]     ui class: performance
[  182.144633][ T2988]     internal class: none
[  182.144636][ T2988]     caps:
[  182.144638][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144639][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144642][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144645][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144648][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144650][ T2988]     status: r
[  182.144736][ T2988] switching from power state:
[  182.144738][ T2988]     ui class: performance
[  182.144739][ T2988]     internal class: none
[  182.144742][ T2988]     caps:
[  182.144744][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144746][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144749][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144751][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144754][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144756][ T2988]     status: c
[  182.144759][ T2988] switching to power state:
[  182.144760][ T2988]     ui class: performance
[  182.144762][ T2988]     internal class: none
[  182.144765][ T2988]     caps:
[  182.144766][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144768][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144771][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144773][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144776][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144778][ T2988]     status: r
[  182.144900][ T2988] switching from power state:
[  182.144903][ T2988]     ui class: performance
[  182.144905][ T2988]     internal class: none
[  182.144909][ T2988]     caps:
[  182.144911][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144914][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 900 vddci: 850 pcie gen: 3
[  182.144918][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144921][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144924][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144927][ T2988]     status: c
[  182.144930][ T2988] switching to power state:
[  182.144931][ T2988]     ui class: performance
[  182.144933][ T2988]     internal class: none
[  182.144935][ T2988]     caps:
[  182.144937][ T2988] [drm]     uvd    vclk: 0 dclk: 0
[  182.144939][ T2988] [drm]         power level 0    sclk: 30000 mclk: 
15000 vddc: 875 vddci: 850 pcie gen: 3
[  182.144942][ T2988] [drm]         power level 1    sclk: 45000 mclk: 
140000 vddc: 950 vddci: 1000 pcie gen: 3
[  182.144945][ T2988] [drm]         power level 2    sclk: 93000 mclk: 
140000 vddc: 1150 vddci: 1000 pcie gen: 3
[  182.144947][ T2988] [drm]         power level 3    sclk: 95500 mclk: 
140000 vddc: 1188 vddci: 1000 pcie gen: 3
[  182.144950][ T2988]     status: r
[  185.527647][    C0] net eth0: rx->offset: 0, size: -1
[  185.531541][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  185.531556][    C4]
[  185.531559][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  185.531565][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  185.531568][    C4] Call Trace:
[  185.531575][    C4]  dump_stack+0xa1/0xd3
[  185.531581][    C4]  print_address_description.constprop.0+0x1d/0x140
[  185.531587][    C4]  ? kfree+0xd7/0x560
[  187.570466][    C4]  kasan_report_invalid_free+0x56/0x80
[  187.577260][    C4]  ? kfree+0xd7/0x560
[  187.583133][    C4]  __kasan_slab_free+0x110/0x140
[  187.591285][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  187.598652][    C4]  kfree+0xd7/0x560
[  187.605044][    C4]  ? skb_release_data+0x41c/0x520
[  187.613266][    C4]  ? xennet_poll+0x15d9/0x3c80
[  187.623212][    C4]  ? kmem_cache_free+0x10b/0x520
[  187.632575][    C4]  skb_release_data+0x41c/0x520
[  187.642833][    C4]  ? xennet_poll+0x15d9/0x3c80
[  187.650333][    C4]  ? xennet_poll+0x15d9/0x3c80
[  187.657687][    C4]  kfree_skb+0x105/0x240
[  187.664488][    C4]  xennet_poll+0x15d9/0x3c80
[  187.672773][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  187.680320][    C4]  ? kasan_set_track+0x20/0x40
[  187.686723][    C4]  ? kasan_set_free_info+0x24/0x40
[  187.693925][    C4]  ? __kasan_slab_free+0xf1/0x140
[  187.700244][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  187.708223][    C4]  ? kfree+0xd7/0x560
[  187.714096][    C4]  ? skb_release_data+0x41c/0x520
[  187.720630][    C4]  ? __kfree_skb_defer+0x45/0x60
[  187.727349][    C4]  ? net_tx_action+0x1d9/0x860
[  187.734188][    C4]  ? __do_softirq+0x1cd/0x66d
[  187.741564][    C4]  ? run_ksoftirqd+0x2b/0x40
[  187.747522][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  187.754698][    C4]  ? kthread+0x32d/0x400
[  187.760406][    C4]  ? ret_from_fork+0x22/0x30
[  187.766552][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  187.773595][    C4]  ? __kasan_check_read+0x11/0x20
[  187.780511][    C4]  ? lock_release+0xa3/0x820
[  187.788568][    C4]  ? verify_cpu+0x100/0x100
[  187.796166][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  187.805455][    C4]  __napi_poll+0xb2/0x4c0
[  187.811803][    C4]  ? kfree+0xd7/0x560
[  187.818652][    C4]  net_rx_action+0x2d7/0xa20
[  187.827884][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  187.837312][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  187.845354][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  187.851581][    C4]  ? net_tx_action+0x1d9/0x860
[  187.857669][    C4]  __do_softirq+0x1cd/0x66d
[  187.863353][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  187.872406][    C4]  run_ksoftirqd+0x2b/0x40
[  187.879344][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  187.887566][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  187.897300][    C4]  ? __kthread_parkme+0x8d/0x120
[  187.905923][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  187.916290][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  187.925479][    C4]  kthread+0x32d/0x400
[  187.930727][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  187.938540][    C4]  ret_from_fork+0x22/0x30
[  187.944582][    C4]
[  187.947585][    C4] Allocated by task 0:
[  187.952774][    C4] (stack is not available)
[  187.958881][    C4]
[  187.961747][    C4] Freed by task 38:
[  187.966292][    C4]  kasan_save_stack+0x23/0x60
[  187.972898][    C4]  kasan_set_track+0x20/0x40
[  187.979780][    C4]  kasan_set_free_info+0x24/0x40
[  187.988647][    C4]  __kasan_slab_free+0xf1/0x140
[  187.999368][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  188.011072][    C4]  kfree+0xd7/0x560
[  188.020674][    C4]  skb_release_data+0x41c/0x520
[  188.030220][    C4]  kfree_skb+0x105/0x240
[  188.039313][    C4]  xennet_poll+0x1ab2/0x3c80
[  188.046713][    C4]  __napi_poll+0xb2/0x4c0
[  188.053726][    C4]  net_rx_action+0x2d7/0xa20
[  188.062236][    C4]  __do_softirq+0x1cd/0x66d
[  188.069114][    C4]
[  188.072904][    C4] The buggy address belongs to the object at 
ffff88812fab9800
[  188.072904][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  188.095297][    C4] The buggy address is located 0 bytes inside of
[  188.095297][    C4]  1024-byte region [ffff88812fab9800, 
ffff88812fab9c00)
[  188.115461][    C4] The buggy address belongs to the page:
[  188.124069][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  188.139339][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  188.149736][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  188.161665][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  188.178368][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  188.197185][    C4] page dumped because: kasan: bad access detected
[  188.214106][    C4]
[  188.219630][    C4] Memory state around the buggy address:
[  188.232720][    C4]  ffff88812fab9700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  188.248129][    C4]  ffff88812fab9780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  188.261517][    C4] >ffff88812fab9800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  188.275313][    C4]                    ^
[  188.283263][    C4]  ffff88812fab9880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  188.296708][    C4]  ffff88812fab9900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  188.310336][    C4] 
==================================================================
(1 of 2) A stop job is running for …n 11 of user hakon (4s / 1min 29s)
[  188.331574][    C4] 
==================================================================
[  188.343256][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  188.343272][    C4]
[  188.343275][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  188.343281][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  188.343284][    C4] Call Trace:
[  188.343291][    C4]  dump_stack+0xa1/0xd3
[  188.343298][    C4]  print_address_description.constprop.0+0x1d/0x140
M [K[[0;1;31m[  188.343303][    C4]  ? kmem_cache_free+0x10b/0x520
[  188.343307][    C4]  kasan_report_invalid_free+0x56/0x80
[  188.343312][    C4]  ? kmem_cache_free+0x10b/0x520
[  188.343315][    C4]  __kasan_slab_free+0x110/0x140
[  188.343321][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  188.343326][    C4]  ? xennet_poll+0x15d9/0x3c80
[  188.343333][    C4]  kmem_cache_free+0x10b/0x520
[  188.343337][    C4]  ? kfree_skbmem+0x9a/0x140
[  188.496310][    C4]  ? xennet_poll+0x15d9/0x3c80
[  188.504346][    C4]  kfree_skbmem+0x9a/0x140
[  188.512339][    C4]  kfree_skb+0x10d/0x240
[  188.519196][    C4]  xennet_poll+0x15d9/0x3c80
[  188.527146][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  188.534266][    C4]  ? kasan_set_track+0x20/0x40
[  188.542131][    C4]  ? kasan_set_free_info+0x24/0x40
[  188.550717][    C4]  ? __kasan_slab_free+0xf1/0x140
[  188.558140][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  188.565215][    C4]  ? kfree+0xd7/0x560
[  188.570966][    C4]  ? skb_release_data+0x41c/0x520
[  188.577784][    C4]  ? __kfree_skb_defer+0x45/0x60
[  188.584215][    C4]  ? net_tx_action+0x1d9/0x860
[  188.591062][    C4]  ? __do_softirq+0x1cd/0x66d
[  188.599608][    C4]  ? run_ksoftirqd+0x2b/0x40
[  188.609078][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  188.618220][    C4]  ? kthread+0x32d/0x400
[  188.624431][    C4]  ? ret_from_fork+0x22/0x30
[  188.630900][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  188.637538][    C4]  ? __kasan_check_read+0x11/0x20
[  188.644170][    C4]  ? lock_release+0xa3/0x820
[  188.649986][    C4]  ? verify_cpu+0x100/0x100
[  188.655871][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  188.662804][    C4]  __napi_poll+0xb2/0x4c0
[  188.668213][    C4]  ? kfree+0xd7/0x560
[  188.673485][    C4]  net_rx_action+0x2d7/0xa20
[  188.680200][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  188.687175][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  188.694712][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  188.701730][    C4]  ? net_tx_action+0x1d9/0x860
[  188.708527][    C4]  __do_softirq+0x1cd/0x66d
[  188.714309][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  188.723060][    C4]  run_ksoftirqd+0x2b/0x40
[  188.728937][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  188.735220][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  188.743952][    C4]  ? __kthread_parkme+0x8d/0x120
[  188.751294][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  188.761075][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  188.772278][    C4]  kthread+0x32d/0x400
[  188.779681][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  188.790615][    C4]  ret_from_fork+0x22/0x30
[  188.799164][    C4]
[  188.803667][    C4] Allocated by task 0:
[  188.811476][    C4]  kasan_save_stack+0x23/0x60
[  188.819980][    C4]  __kasan_slab_alloc+0x68/0x80
[  188.828416][    C4]  kmem_cache_alloc_node+0x242/0x380
[  188.836253][    C4]  __alloc_skb+0x156/0x280
[  188.842393][    C4]  __netdev_alloc_skb+0x46/0x320
[  188.849358][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  188.857641][    C4]  xennet_poll+0x1e8b/0x3c80
[  188.864632][    C4]  __napi_poll+0xb2/0x4c0
[  188.871804][    C4]  net_rx_action+0x2d7/0xa20
[  188.879287][    C4]  __do_softirq+0x1cd/0x66d
[  188.886616][    C4]
[  188.890504][    C4] Freed by task 0:
[  188.897276][    C4] (stack is not available)
[  188.904648][    C4]
[  188.908527][    C4] The buggy address belongs to the object at 
ffff88812fba0f00
[  188.908527][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  188.932865][    C4] The buggy address is located 0 bytes inside of
[  188.932865][    C4]  216-byte region [ffff88812fba0f00, 
ffff88812fba0fd8)
[  188.954232][    C4] The buggy address belongs to the page:
[  188.963240][    C4] page:00000000d03f6c30 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fba0
[  188.981472][    C4] head:00000000d03f6c30 order:1 compound_mapcount:0
[  188.992485][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  189.006217][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  189.020528][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  189.033607][    C4] page dumped because: kasan: bad access detected
[  189.042337][    C4]
[  189.045171][    C4] Memory state around the buggy address:
[  189.052429][    C4]  ffff88812fba0e00: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  189.063175][    C4]  ffff88812fba0e80: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  189.074094][    C4] >ffff88812fba0f00: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  189.084859][    C4]                    ^
[  189.090295][    C4]  ffff88812fba0f80: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  189.101715][    C4]  ffff88812fba1000: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  189.112722][    C4] 
==================================================================
*[0m[0;31m*    [0m] (1 of 2) A stop job is running for … 11 of user 
hakon (16s / 1min 29s)
[  189.133240][    C4] 
==================================================================
[  189.144030][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  189.144042][    C4]
[  189.144045][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  189.144051][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  189.144054][    C4] Call Trace:
[  189.144058][    C4]  dump_stack+0xa1/0xd3
M [K[[0;31m*[  189.144065][    C4] 
print_address_description.constprop.0+0x1d/0x140
[  189.144072][    C4]  ? kfree+0xd7/0x560
[  189.144076][    C4]  kasan_report_invalid_free+0x56/0x80
[  189.144080][    C4]  ? kfree+0xd7/0x560
[  189.269416][    C4]  __kasan_slab_free+0x110/0x140
[  189.279730][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  189.290023][    C4]  kfree+0xd7/0x560
[  189.298437][    C4]  ? skb_release_data+0x41c/0x520
[  189.311044][    C4]  ? xennet_poll+0x15d9/0x3c80
[  189.319183][    C4]  ? kmem_cache_free+0x10b/0x520
[  189.327195][    C4]  skb_release_data+0x41c/0x520
[  189.334963][    C4]  ? xennet_poll+0x15d9/0x3c80
[  189.342751][    C4]  ? xennet_poll+0x15d9/0x3c80
[  189.349919][    C4]  kfree_skb+0x105/0x240
[  189.356569][    C4]  xennet_poll+0x15d9/0x3c80
[  189.363499][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  189.372826][    C4]  ? kasan_set_track+0x20/0x40
[  189.381380][    C4]  ? kasan_set_free_info+0x24/0x40
[  189.393415][    C4]  ? __kasan_slab_free+0xf1/0x140
[  189.407897][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  189.416753][    C4]  ? kfree+0xd7/0x560
[  189.423628][    C4]  ? skb_release_data+0x41c/0x520
[  189.431384][    C4]  ? __kfree_skb_defer+0x45/0x60
[  189.441583][    C4]  ? net_tx_action+0x1d9/0x860
[  189.449886][    C4]  ? __do_softirq+0x1cd/0x66d
[  189.456673][    C4]  ? run_ksoftirqd+0x2b/0x40
[  189.462434][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  189.469311][    C4]  ? kthread+0x32d/0x400
[  189.474809][    C4]  ? ret_from_fork+0x22/0x30
[  189.481299][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  189.490230][    C4]  ? __kasan_check_read+0x11/0x20
[  189.496365][    C4]  ? lock_release+0xa3/0x820
[  189.502544][    C4]  ? verify_cpu+0x100/0x100
[  189.509204][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  189.516768][    C4]  __napi_poll+0xb2/0x4c0
[  189.523611][    C4]  ? kfree+0xd7/0x560
[  189.529707][    C4]  net_rx_action+0x2d7/0xa20
[  189.535703][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  189.542355][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  189.549592][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  189.556280][    C4]  ? net_tx_action+0x1d9/0x860
[  189.562250][    C4]  __do_softirq+0x1cd/0x66d
[  189.567885][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  189.575866][    C4]  run_ksoftirqd+0x2b/0x40
[  189.581338][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  189.592577][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  189.607677][    C4]  ? __kthread_parkme+0x8d/0x120
[  189.617187][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  189.628365][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  189.639384][    C4]  kthread+0x32d/0x400
[  189.647862][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  189.654722][    C4]  ret_from_fork+0x22/0x30
[  189.660688][    C4]
[  189.663845][    C4] Allocated by task 0:
[  189.669816][    C4] (stack is not available)
[  189.675861][    C4]
[  189.680231][    C4] Freed by task 38:
[  189.686949][    C4]  kasan_save_stack+0x23/0x60
[  189.694724][    C4]  kasan_set_track+0x20/0x40
[  189.702323][    C4]  kasan_set_free_info+0x24/0x40
[  189.710505][    C4]  __kasan_slab_free+0xf1/0x140
[  189.718200][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  189.727993][    C4]  kfree+0xd7/0x560
[  189.734471][    C4]  skb_release_data+0x41c/0x520
[  189.743592][    C4]  kfree_skb+0x105/0x240
[  189.749712][    C4]  xennet_poll+0x1ab2/0x3c80
[  189.756715][    C4]  __napi_poll+0xb2/0x4c0
[  189.763497][    C4]  net_rx_action+0x2d7/0xa20
[  189.770829][    C4]  __do_softirq+0x1cd/0x66d
[  189.777570][    C4]
[  189.781026][    C4] The buggy address belongs to the object at 
ffff88812fabd000
[  189.781026][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  189.813652][    C4] The buggy address is located 0 bytes inside of
[  189.813652][    C4]  1024-byte region [ffff88812fabd000, 
ffff88812fabd400)
[  189.843331][    C4] The buggy address belongs to the page:
[  189.852905][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  189.869090][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  189.882204][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  189.894830][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  189.908717][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  189.921429][    C4] page dumped because: kasan: bad access detected
[  189.930898][    C4]
[  189.934887][    C4] Memory state around the buggy address:
[  189.943863][    C4]  ffff88812fabcf00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  189.956857][    C4]  ffff88812fabcf80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  189.970095][    C4] >ffff88812fabd000: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  189.984139][    C4]                    ^
[  189.995979][    C4]  ffff88812fabd080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  190.018090][    C4]  ffff88812fabd100: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  190.040822][    C4] 
==================================================================
[0;1;31m*[0m[0;31m*   [0m] (1 of 2) A stop job is running for … 11 of 
user hakon (17s / 1min 29s)
[  190.065443][    C4] 
==================================================================
[  190.077767][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  190.077780][    C4]
[  190.077783][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  190.077789][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  190.077791][    C4] Call Trace:
[  190.077797][    C4]  dump_stack+0xa1/0xd3
[  190.077803][    C4]  print_address_description.constprop.0+0x1d/0x140
[  190.077808][    C4]  ? kmem_cache_free+0x10b/0x520
[  190.077813][    C4]  kasan_report_invalid_free+0x56/0x80
[  190.077817][    C4]  ? kmem_cache_free+0x10b/0x520
[  190.077821][    C4]  __kasan_slab_free+0x110/0x140
[  190.077826][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  190.077831][    C4]  ? xennet_poll+0x15d9/0x3c80
[  190.077837][    C4]  kmem_cache_free+0x10b/0x520
[  190.077842][    C4]  ? kfree_skbmem+0x9a/0x140
[  190.205538][    C4]  ? xennet_poll+0x15d9/0x3c80
[  190.213620][    C4]  kfree_skbmem+0x9a/0x140
[  190.224354][    C4]  kfree_skb+0x10d/0x240
[  190.235434][    C4]  xennet_poll+0x15d9/0x3c80
[  190.242643][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  190.250056][    C4]  ? kasan_set_track+0x20/0x40
[  190.258334][    C4]  ? kasan_set_free_info+0x24/0x40
[  190.266160][    C4]  ? __kasan_slab_free+0xf1/0x140
[  190.273945][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  190.281939][    C4]  ? kfree+0xd7/0x560
[  190.288041][    C4]  ? skb_release_data+0x41c/0x520
[  190.295976][    C4]  ? __kfree_skb_defer+0x45/0x60
[  190.304060][    C4]  ? net_tx_action+0x1d9/0x860
[  190.311897][    C4]  ? __do_softirq+0x1cd/0x66d
[  190.319757][    C4]  ? run_ksoftirqd+0x2b/0x40
[  190.328694][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  190.338408][    C4]  ? kthread+0x32d/0x400
[  190.345775][    C4]  ? ret_from_fork+0x22/0x30
[  190.355313][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  190.364526][    C4]  ? __kasan_check_read+0x11/0x20
[  190.374923][    C4]  ? lock_release+0xa3/0x820
[  190.382764][    C4]  ? verify_cpu+0x100/0x100
[  190.394163][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  190.407978][    C4]  __napi_poll+0xb2/0x4c0
[  190.417021][    C4]  ? kfree+0xd7/0x560
[  190.429405][    C4]  net_rx_action+0x2d7/0xa20
[  190.441660][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  190.449233][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  190.458420][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  190.464872][    C4]  ? net_tx_action+0x1d9/0x860
[  190.471471][    C4]  __do_softirq+0x1cd/0x66d
[  190.476915][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  190.484517][    C4]  run_ksoftirqd+0x2b/0x40
[  190.490331][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  190.497020][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  190.506932][    C4]  ? __kthread_parkme+0x8d/0x120
[  190.514554][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  190.526021][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  190.536669][    C4]  kthread+0x32d/0x400
[  190.543225][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  190.551447][    C4]  ret_from_fork+0x22/0x30
[  190.558672][    C4]
[  190.562759][    C4] Allocated by task 0:
[  190.569853][    C4]  kasan_save_stack+0x23/0x60
[  190.579652][    C4]  __kasan_slab_alloc+0x68/0x80
[  190.592327][    C4]  kmem_cache_alloc_node+0x242/0x380
[  190.604175][    C4]  __alloc_skb+0x156/0x280
[  190.613857][    C4]  __netdev_alloc_skb+0x46/0x320
[  190.626686][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  190.636565][    C4]  xennet_poll+0x1e8b/0x3c80
[  190.644828][    C4]  __napi_poll+0xb2/0x4c0
[  190.652205][    C4]  net_rx_action+0x2d7/0xa20
[  190.660822][    C4]  __do_softirq+0x1cd/0x66d
[  190.667941][    C4]
[  190.671911][    C4] Freed by task 0:
[  190.677828][    C4] (stack is not available)
[  190.684930][    C4]
[  190.688641][    C4] The buggy address belongs to the object at 
ffff88812fba08c0
[  190.688641][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  190.710807][    C4] The buggy address is located 0 bytes inside of
[  190.710807][    C4]  216-byte region [ffff88812fba08c0, 
ffff88812fba0998)
[  190.727292][    C4] The buggy address belongs to the page:
[  190.735058][    C4] page:00000000d03f6c30 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fba0
[  190.749323][    C4] head:00000000d03f6c30 order:1 compound_mapcount:0
[  190.758581][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  190.770848][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  190.783878][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  190.801150][    C4] page dumped because: kasan: bad access detected
[  190.814745][    C4]
[  190.818818][    C4] Memory state around the buggy address:
[  190.829341][    C4]  ffff88812fba0780: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  190.841686][    C4]  ffff88812fba0800: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  190.852792][    C4] >ffff88812fba0880: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  190.863771][    C4]                                            ^
[  190.871906][    C4]  ffff88812fba0900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  190.882218][    C4]  ffff88812fba0980: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  190.893098][    C4] 
==================================================================
[0;1;31m*[0m[0;31m*  [0m] (2 of 2) A stop job is running for …t 
Display Manager (18s / 1min 30s)
[  190.912422][    C4] 
==================================================================
[  190.926244][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  190.926258][    C4]
[  190.926261][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  190.926268][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  190.926271][    C4] Call Trace:
[  190.926276][    C4]  dump_stack+0xa1/0xd3
[  190.926283][    C4]  print_address_description.constprop.0+0x1d/0x140
[  190.926290][    C4]  ? kfree+0xd7/0x560
[  190.926293][    C4]  kasan_report_invalid_free+0x56/0x80
[  191.029215][    C4]  ? kfree+0xd7/0x560
[  191.036381][    C4]  __kasan_slab_free+0x110/0x140
[  191.045501][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  191.055804][    C4]  kfree+0xd7/0x560
[  191.062892][    C4]  ? skb_release_data+0x41c/0x520
[  191.071770][    C4]  ? xennet_poll+0x15d9/0x3c80
[  191.079163][    C4]  ? kmem_cache_free+0x10b/0x520
[  191.087657][    C4]  skb_release_data+0x41c/0x520
[  191.095521][    C4]  ? xennet_poll+0x15d9/0x3c80
[  191.105701][    C4]  ? xennet_poll+0x15d9/0x3c80
[  191.114513][    C4]  kfree_skb+0x105/0x240
[  191.122540][    C4]  xennet_poll+0x15d9/0x3c80
[  191.133127][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  191.141172][    C4]  ? kasan_set_track+0x20/0x40
[  191.149491][    C4]  ? kasan_set_free_info+0x24/0x40
[  191.160318][    C4]  ? __kasan_slab_free+0xf1/0x140
[  191.172281][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  191.191738][    C4]  ? kfree+0xd7/0x560
[  191.202612][    C4]  ? skb_release_data+0x41c/0x520
[  191.213975][    C4]  ? __kfree_skb_defer+0x45/0x60
[  191.224021][    C4]  ? net_tx_action+0x1d9/0x860
[  191.232076][    C4]  ? __do_softirq+0x1cd/0x66d
[  191.240112][    C4]  ? run_ksoftirqd+0x2b/0x40
[  191.246447][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  191.253797][    C4]  ? kthread+0x32d/0x400
[  191.259553][    C4]  ? ret_from_fork+0x22/0x30
[  191.266890][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  191.274770][    C4]  ? __kasan_check_read+0x11/0x20
[  191.282395][    C4]  ? lock_release+0xa3/0x820
[  191.290706][    C4]  ? verify_cpu+0x100/0x100
[  191.297634][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  191.306794][    C4]  __napi_poll+0xb2/0x4c0
[  191.314410][    C4]  ? kfree+0xd7/0x560
[  191.321519][    C4]  net_rx_action+0x2d7/0xa20
[  191.329187][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  191.339638][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  191.350101][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  191.358947][    C4]  ? net_tx_action+0x1d9/0x860
[  191.368811][    C4]  __do_softirq+0x1cd/0x66d
[  191.375214][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  191.385397][    C4]  run_ksoftirqd+0x2b/0x40
[  191.398719][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  191.413800][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  191.439665][    C4]  ? __kthread_parkme+0x8d/0x120
[  191.453471][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  191.463496][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  191.476027][    C4]  kthread+0x32d/0x400
[  191.481496][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  191.490285][    C4]  ret_from_fork+0x22/0x30
[  191.497026][    C4]
[  191.500352][    C4] Allocated by task 0:
[  191.506505][    C4] (stack is not available)
[  191.512027][    C4]
[  191.514781][    C4] Freed by task 38:
[  191.519770][    C4]  kasan_save_stack+0x23/0x60
[  191.525683][    C4]  kasan_set_track+0x20/0x40
[  191.531240][    C4]  kasan_set_free_info+0x24/0x40
[  191.537536][    C4]  __kasan_slab_free+0xf1/0x140
[  191.543765][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  191.550559][    C4]  kfree+0xd7/0x560
[  191.555703][    C4]  skb_release_data+0x41c/0x520
[  191.561966][    C4]  kfree_skb+0x105/0x240
[  191.567409][    C4]  xennet_poll+0x1ab2/0x3c80
[  191.573814][    C4]  __napi_poll+0xb2/0x4c0
[  191.580046][    C4]  net_rx_action+0x2d7/0xa20
[  191.588873][    C4]  __do_softirq+0x1cd/0x66d
[  191.597835][    C4]
[  191.602315][    C4] The buggy address belongs to the object at 
ffff88812faba000
[  191.602315][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  191.632813][    C4] The buggy address is located 0 bytes inside of
[  191.632813][    C4]  1024-byte region [ffff88812faba000, 
ffff88812faba400)
[  191.658283][    C4] The buggy address belongs to the page:
[  191.667306][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  191.683502][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  191.696623][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  191.710183][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  191.723799][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  191.738303][    C4] page dumped because: kasan: bad access detected
[  191.748151][    C4]
[  191.751685][    C4] Memory state around the buggy address:
[  191.761888][    C4]  ffff88812fab9f00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  191.773812][    C4]  ffff88812fab9f80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  191.784889][    C4] >ffff88812faba000: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  191.798495][    C4]                    ^
[  191.807306][    C4]  ffff88812faba080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  191.820657][    C4]  ffff88812faba100: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  191.834386][    C4] 
==================================================================
[*** ] (2 of 2) A stop job is running for …t Display Manager (19s / 1min 30s)
[  191.857787][    C4] 
==================================================================
[  191.870725][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  191.870739][    C4]
[  191.870742][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  191.870748][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  191.870751][    C4] Call Trace:
[  191.870757][    C4]  dump_stack+0xa1/0xd3
M [K[   [0;31[  191.870764][    C4] 
print_address_description.constprop.0+0x1d/0x140
[  191.870770][    C4]  ? kmem_cache_free+0x10b/0x520
[  191.870775][    C4]  kasan_report_invalid_free+0x56/0x80
[  191.946982][    C4]  ? kmem_cache_free+0x10b/0x520
[  191.953344][    C4]  __kasan_slab_free+0x110/0x140
[  191.960326][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  191.967293][    C4]  ? xennet_poll+0x15d9/0x3c80
[  191.973739][    C4]  kmem_cache_free+0x10b/0x520
[  191.980126][    C4]  ? kfree_skbmem+0x9a/0x140
[  191.989537][    C4]  ? xennet_poll+0x15d9/0x3c80
[  191.996560][    C4]  kfree_skbmem+0x9a/0x140
[  192.004530][    C4]  kfree_skb+0x10d/0x240
[  192.011599][    C4]  xennet_poll+0x15d9/0x3c80
[  192.017949][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  192.025262][    C4]  ? kasan_set_track+0x20/0x40
[  192.031628][    C4]  ? kasan_set_free_info+0x24/0x40
[  192.038991][    C4]  ? __kasan_slab_free+0xf1/0x140
[  192.045590][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  192.052534][    C4]  ? kfree+0xd7/0x560
[  192.057698][    C4]  ? skb_release_data+0x41c/0x520
[  192.063825][    C4]  ? __kfree_skb_defer+0x45/0x60
[  192.070304][    C4]  ? net_tx_action+0x1d9/0x860
[  192.076508][    C4]  ? __do_softirq+0x1cd/0x66d
[  192.082675][    C4]  ? run_ksoftirqd+0x2b/0x40
[  192.088867][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  192.096301][    C4]  ? kthread+0x32d/0x400
[  192.101878][    C4]  ? ret_from_fork+0x22/0x30
[  192.108017][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  192.114145][    C4]  ? __kasan_check_read+0x11/0x20
[  192.121139][    C4]  ? lock_release+0xa3/0x820
[  192.127107][    C4]  ? verify_cpu+0x100/0x100
[  192.132763][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  192.140496][    C4]  __napi_poll+0xb2/0x4c0
[  192.145915][    C4]  ? kfree+0xd7/0x560
[  192.150800][    C4]  net_rx_action+0x2d7/0xa20
[  192.157637][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  192.164579][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  192.173916][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  192.181394][    C4]  ? net_tx_action+0x1d9/0x860
[  192.190426][    C4]  __do_softirq+0x1cd/0x66d
[  192.197029][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  192.211223][    C4]  run_ksoftirqd+0x2b/0x40
[  192.219314][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  192.229962][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  192.241103][    C4]  ? __kthread_parkme+0x8d/0x120
[  192.248498][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  192.257517][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  192.266728][    C4]  kthread+0x32d/0x400
[  192.273553][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  192.280919][    C4]  ret_from_fork+0x22/0x30
[  192.289640][    C4]
[  192.294567][    C4] Allocated by task 0:
[  192.301935][    C4]  kasan_save_stack+0x23/0x60
[  192.309426][    C4]  __kasan_slab_alloc+0x68/0x80
[  192.316824][    C4]  kmem_cache_alloc_node+0x242/0x380
[  192.326266][    C4]  __alloc_skb+0x156/0x280
[  192.332690][    C4]  __netdev_alloc_skb+0x46/0x320
[  192.340298][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  192.348233][    C4]  xennet_poll+0x1e8b/0x3c80
[  192.355483][    C4]  __napi_poll+0xb2/0x4c0
[  192.361084][    C4]  net_rx_action+0x2d7/0xa20
[  192.367630][    C4]  __do_softirq+0x1cd/0x66d
[  192.375731][    C4]
[  192.379474][    C4] Freed by task 0:
[  192.385746][    C4] (stack is not available)
[  192.397419][    C4]
[  192.403609][    C4] The buggy address belongs to the object at 
ffff888123766a00
[  192.403609][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  192.432808][    C4] The buggy address is located 0 bytes inside of
[  192.432808][    C4]  216-byte region [ffff888123766a00, 
ffff888123766ad8)
[  192.454469][    C4] The buggy address belongs to the page:
[  192.462678][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  192.477744][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  192.487458][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  192.501120][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  192.515115][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  192.530822][    C4] page dumped because: kasan: bad access detected
[  192.540913][    C4]
[  192.546652][    C4] Memory state around the buggy address:
[  192.561075][    C4]  ffff888123766900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  192.579298][    C4]  ffff888123766980: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  192.597861][    C4] >ffff888123766a00: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  192.614236][    C4]                    ^
[  192.621747][    C4]  ffff888123766a80: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  192.635403][    C4]  ffff888123766b00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  192.649548][    C4] 
==================================================================
m*[0;1;31m*[0m[0;31m*[0m] (2 of 2) A stop job is running for …t 
Display Manager (20s / 1min 30s)
[  192.676598][    C4] 
==================================================================
[  192.689893][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  192.689904][    C4]
[  192.689907][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  192.689913][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  192.689915][    C4] Call Trace:
M [K[    [0;3[  192.689921][    C4]  dump_stack+0xa1/0xd3
[  192.689928][    C4] print_address_description.constprop.0+0x1d/0x140
[  192.689933][    C4]  ? kfree+0xd7/0x560
[  192.689937][    C4]  kasan_report_invalid_free+0x56/0x80
[  192.689942][    C4]  ? kfree+0xd7/0x560
[  192.689945][    C4]  __kasan_slab_free+0x110/0x140
[  192.689950][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  192.689955][    C4]  kfree+0xd7/0x560
[  192.689959][    C4]  ? skb_release_data+0x41c/0x520
[  192.689966][    C4]  ? xennet_poll+0x15d9/0x3c80
[  192.689972][    C4]  ? kmem_cache_free+0x10b/0x520
[  192.689977][    C4]  skb_release_data+0x41c/0x520
[  192.689981][    C4]  ? xennet_poll+0x15d9/0x3c80
[  192.689986][    C4]  ? xennet_poll+0x15d9/0x3c80
[  192.689991][    C4]  kfree_skb+0x105/0x240
[  192.689995][    C4]  xennet_poll+0x15d9/0x3c80
[  192.690004][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  192.690009][    C4]  ? kasan_set_track+0x20/0x40
[  192.690013][    C4]  ? kasan_set_free_info+0x24/0x40
[  192.690017][    C4]  ? __kasan_slab_free+0xf1/0x140
[  192.690021][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  192.921491][    C4]  ? kfree+0xd7/0x560
[  192.926607][    C4]  ? skb_release_data+0x41c/0x520
[  192.932756][    C4]  ? __kfree_skb_defer+0x45/0x60
[  192.942952][    C4]  ? net_tx_action+0x1d9/0x860
[  192.949829][    C4]  ? __do_softirq+0x1cd/0x66d
[  192.958638][    C4]  ? run_ksoftirqd+0x2b/0x40
[  192.965016][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  192.973042][    C4]  ? kthread+0x32d/0x400
[  192.978590][    C4]  ? ret_from_fork+0x22/0x30
[  192.986109][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  192.995438][    C4]  ? __kasan_check_read+0x11/0x20
[  193.001581][    C4]  ? lock_release+0xa3/0x820
[  193.007532][    C4]  ? verify_cpu+0x100/0x100
[  193.013170][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  193.020537][    C4]  __napi_poll+0xb2/0x4c0
[  193.026094][    C4]  ? kfree+0xd7/0x560
[  193.031203][    C4]  net_rx_action+0x2d7/0xa20
[  193.037689][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  193.045865][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  193.054672][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  193.062498][    C4]  ? net_tx_action+0x1d9/0x860
[  193.069666][    C4]  __do_softirq+0x1cd/0x66d
[  193.076902][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  193.087875][    C4]  run_ksoftirqd+0x2b/0x40
[  193.095450][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  193.105045][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.116404][    C4]  ? __kthread_parkme+0x8d/0x120
[  193.126301][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.139579][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.154934][    C4]  kthread+0x32d/0x400
[  193.163333][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  193.181208][    C4]  ret_from_fork+0x22/0x30
[  193.193675][    C4]
[  193.198961][    C4] Allocated by task 0:
[  193.205212][    C4] (stack is not available)
[  193.211612][    C4]
[  193.214900][    C4] Freed by task 38:
[  193.220075][    C4]  kasan_save_stack+0x23/0x60
[  193.226387][    C4]  kasan_set_track+0x20/0x40
[  193.232659][    C4]  kasan_set_free_info+0x24/0x40
[  193.239812][    C4]  __kasan_slab_free+0xf1/0x140
[  193.247596][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  193.258294][    C4]  kfree+0xd7/0x560
[  193.264416][    C4]  skb_release_data+0x41c/0x520
[  193.272496][    C4]  kfree_skb+0x105/0x240
[  193.279610][    C4]  xennet_poll+0x1ab2/0x3c80
[  193.286843][    C4]  __napi_poll+0xb2/0x4c0
[  193.293498][    C4]  net_rx_action+0x2d7/0xa20
[  193.302317][    C4]  __do_softirq+0x1cd/0x66d
[  193.308548][    C4]
[  193.311701][    C4] The buggy address belongs to the object at 
ffff88812fabc000
[  193.311701][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  193.331743][    C4] The buggy address is located 0 bytes inside of
[  193.331743][    C4]  1024-byte region [ffff88812fabc000, 
ffff88812fabc400)
[  193.362661][    C4] The buggy address belongs to the page:
[  193.378134][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  193.402344][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  193.415323][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  193.427764][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  193.441720][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  193.455326][    C4] page dumped because: kasan: bad access detected
[  193.465247][    C4]
[  193.468798][    C4] Memory state around the buggy address:
[  193.477749][    C4]  ffff88812fabbf00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  193.490661][    C4]  ffff88812fabbf80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  193.503351][    C4] >ffff88812fabc000: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  193.517517][    C4]                    ^
[  193.523690][    C4]  ffff88812fabc080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  193.541440][    C4]  ffff88812fabc100: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  193.554356][    C4] 
==================================================================
1m*[0;1;31m*[0m] (1 of 2) A stop job is running for … 11 of user hakon 
(20s / 1min 29s)
[  193.579371][    C4] 
==================================================================
[  193.590634][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  193.590649][    C4]
[  193.590651][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  193.590657][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  193.590660][    C4] Call Trace:
[  193.590665][    C4]  dump_stack+0xa1/0xd3
[  193.590671][    C4]  print_address_description.constprop.0+0x1d/0x140
[  193.590677][    C4]  ? kmem_cache_free+0x10b/0x520
[  193.590681][    C4]  kasan_report_invalid_free+0x56/0x80
[  193.590686][    C4]  ? kmem_cache_free+0x10b/0x520
[  193.590690][    C4]  __kasan_slab_free+0x110/0x140
[  193.590695][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  193.590700][    C4]  ? xennet_poll+0x15d9/0x3c80
[  193.590705][    C4]  kmem_cache_free+0x10b/0x520
[  193.590710][    C4]  ? kfree_skbmem+0x9a/0x140
[  193.590717][    C4]  ? xennet_poll+0x15d9/0x3c80
[  193.590721][    C4]  kfree_skbmem+0x9a/0x140
[  193.590725][    C4]  kfree_skb+0x10d/0x240
[  193.590730][    C4]  xennet_poll+0x15d9/0x3c80
[  193.590739][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  193.759413][    C4]  ? kasan_set_track+0x20/0x40
[  193.766891][    C4]  ? kasan_set_free_info+0x24/0x40
[  193.774780][    C4]  ? __kasan_slab_free+0xf1/0x140
[  193.782288][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  193.791163][    C4]  ? kfree+0xd7/0x560
[  193.796963][    C4]  ? skb_release_data+0x41c/0x520
[  193.803786][    C4]  ? __kfree_skb_defer+0x45/0x60
[  193.810216][    C4]  ? net_tx_action+0x1d9/0x860
[  193.816171][    C4]  ? __do_softirq+0x1cd/0x66d
[  193.822204][    C4]  ? run_ksoftirqd+0x2b/0x40
[  193.827934][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  193.834446][    C4]  ? kthread+0x32d/0x400
[  193.840558][    C4]  ? ret_from_fork+0x22/0x30
[  193.846188][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  193.852719][    C4]  ? __kasan_check_read+0x11/0x20
[  193.860040][    C4]  ? lock_release+0xa3/0x820
[  193.866183][    C4]  ? verify_cpu+0x100/0x100
[  193.872025][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  193.878726][    C4]  __napi_poll+0xb2/0x4c0
[  193.884022][    C4]  ? kfree+0xd7/0x560
[  193.889382][    C4]  net_rx_action+0x2d7/0xa20
[  193.895136][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  193.903878][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  193.912675][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  193.922058][    C4]  ? net_tx_action+0x1d9/0x860
[  193.930306][    C4]  __do_softirq+0x1cd/0x66d
[  193.937941][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  193.948762][    C4]  run_ksoftirqd+0x2b/0x40
[  193.956111][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  193.962339][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.970112][    C4]  ? __kthread_parkme+0x8d/0x120
[  193.976308][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.983953][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  193.991765][    C4]  kthread+0x32d/0x400
[  193.997006][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  194.003485][    C4]  ret_from_fork+0x22/0x30
[  194.008936][    C4]
[  194.011855][    C4] Allocated by task 0:
[  194.016626][    C4]  kasan_save_stack+0x23/0x60
[  194.023223][    C4]  __kasan_slab_alloc+0x68/0x80
[  194.029017][    C4]  kmem_cache_alloc_node+0x242/0x380
[  194.035490][    C4]  __alloc_skb+0x156/0x280
[  194.041115][    C4]  __netdev_alloc_skb+0x46/0x320
[  194.047473][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  194.054664][    C4]  xennet_poll+0x1e8b/0x3c80
[  194.060751][    C4]  __napi_poll+0xb2/0x4c0
[  194.066324][    C4]  net_rx_action+0x2d7/0xa20
[  194.072528][    C4]  __do_softirq+0x1cd/0x66d
[  194.079476][    C4]
[  194.082951][    C4] Freed by task 0:
[  194.092537][    C4] (stack is not available)
[  194.101414][    C4]
[  194.106966][    C4] The buggy address belongs to the object at 
ffff888123767900
[  194.106966][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  194.138368][    C4] The buggy address is located 0 bytes inside of
[  194.138368][    C4]  216-byte region [ffff888123767900, 
ffff8881237679d8)
[  194.160147][    C4] The buggy address belongs to the page:
[  194.169144][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  194.182806][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  194.191704][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  194.201972][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  194.212411][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  194.223703][    C4] page dumped because: kasan: bad access detected
[  194.231960][    C4]
[  194.234708][    C4] Memory state around the buggy address:
[  194.241865][    C4]  ffff888123767800: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  194.251707][    C4]  ffff888123767880: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  194.261749][    C4] >ffff888123767900: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  194.274864][    C4]                    ^
[  194.280701][    C4]  ffff888123767980: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  194.295529][    C4]  ffff888123767a00: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  194.310960][    C4] 
==================================================================
31m*[0m] (1 of 2) A stop job is running for … 11 of user hakon (21s / 
1min 29s)
[  194.333042][    C4] 
==================================================================
[  194.343415][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  194.343427][    C4]
[  194.343429][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  194.343435][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  194.343438][    C4] Call Trace:
[  194.343443][    C4]  dump_stack+0xa1/0xd3
[  194.343449][    C4] print_address_description.constprop.0+0x1d/0x140
M [K[    [0;3[  194.343456][    C4]  ? kfree+0xd7/0x560
[  194.343460][    C4]  kasan_report_invalid_free+0x56/0x80
[  194.412666][    C4]  ? kfree+0xd7/0x560
[  194.417749][    C4]  __kasan_slab_free+0x110/0x140
[  194.424363][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  194.431058][    C4]  kfree+0xd7/0x560
[  194.436257][    C4]  ? skb_release_data+0x41c/0x520
[  194.442790][    C4]  ? __kasan_check_read+0x11/0x20
[  194.448911][    C4]  skb_release_data+0x41c/0x520
[  194.455519][    C4]  ? xennet_poll+0x15d9/0x3c80
[  194.462340][    C4]  ? xennet_poll+0x15d9/0x3c80
[  194.469568][    C4]  kfree_skb+0x105/0x240
[  194.477947][    C4]  xennet_poll+0x15d9/0x3c80
[  194.485526][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  194.492917][    C4]  ? kasan_set_track+0x20/0x40
[  194.499161][    C4]  ? kasan_set_free_info+0x24/0x40
[  194.505599][    C4]  ? __kasan_slab_free+0xf1/0x140
[  194.511947][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  194.518706][    C4]  ? kfree+0xd7/0x560
[  194.523713][    C4]  ? skb_release_data+0x41c/0x520
[  194.529857][    C4]  ? __kfree_skb_defer+0x45/0x60
[  194.535810][    C4]  ? net_tx_action+0x1d9/0x860
[  194.541593][    C4]  ? __do_softirq+0x1cd/0x66d
[  194.547419][    C4]  ? run_ksoftirqd+0x2b/0x40
[  194.555667][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  194.563042][    C4]  ? kthread+0x32d/0x400
[  194.569126][    C4]  ? ret_from_fork+0x22/0x30
[  194.575247][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  194.581695][    C4]  ? __kasan_check_read+0x11/0x20
[  194.588381][    C4]  ? lock_release+0xa3/0x820
[  194.594616][    C4]  ? verify_cpu+0x100/0x100
[  194.600193][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  194.607292][    C4]  __napi_poll+0xb2/0x4c0
[  194.612546][    C4]  ? kfree+0xd7/0x560
[  194.617550][    C4]  net_rx_action+0x2d7/0xa20
[  194.623691][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  194.630566][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  194.638300][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  194.644860][    C4]  ? net_tx_action+0x1d9/0x860
[  194.651164][    C4]  __do_softirq+0x1cd/0x66d
[  194.657199][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  194.664416][    C4]  run_ksoftirqd+0x2b/0x40
[  194.669948][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  194.676515][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  194.684424][    C4]  ? __kthread_parkme+0x8d/0x120
[  194.690830][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  194.701145][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  194.711862][    C4]  kthread+0x32d/0x400
[  194.718919][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  194.727532][    C4]  ret_from_fork+0x22/0x30
[  194.734526][    C4]
[  194.738752][    C4] Allocated by task 0:
[  194.746283][    C4] (stack is not available)
[  194.752453][    C4]
[  194.755749][    C4] Freed by task 38:
[  194.760505][    C4]  kasan_save_stack+0x23/0x60
[  194.766447][    C4]  kasan_set_track+0x20/0x40
[  194.772392][    C4]  kasan_set_free_info+0x24/0x40
[  194.778548][    C4]  __kasan_slab_free+0xf1/0x140
[  194.784840][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  194.792154][    C4]  kfree+0xd7/0x560
[  194.797154][    C4]  skb_release_data+0x41c/0x520
[  194.803316][    C4]  kfree_skb+0x105/0x240
[  194.808515][    C4]  xennet_poll+0x1ab2/0x3c80
[  194.814265][    C4]  __napi_poll+0xb2/0x4c0
[  194.819908][    C4]  net_rx_action+0x2d7/0xa20
[  194.825744][    C4]  __do_softirq+0x1cd/0x66d
[  194.831408][    C4]
[  194.834239][    C4] The buggy address belongs to the object at 
ffff88812fabc800
[  194.834239][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  194.852293][    C4] The buggy address is located 0 bytes inside of
[  194.852293][    C4]  1024-byte region [ffff88812fabc800, 
ffff88812fabcc00)
[  194.868775][    C4] The buggy address belongs to the page:
[  194.876277][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  194.891368][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  194.911581][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  194.929413][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  194.947348][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  194.960589][    C4] page dumped because: kasan: bad access detected
[  194.971494][    C4]
[  194.975515][    C4] Memory state around the buggy address:
[  194.985374][    C4]  ffff88812fabc700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  194.999195][    C4]  ffff88812fabc780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  195.014320][    C4] >ffff88812fabc800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  195.029523][    C4]                    ^
[  195.037121][    C4]  ffff88812fabc880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  195.050239][    C4]  ffff88812fabc900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  195.063046][    C4] 
==================================================================
1m*[0;1;31m*[0m] (1 of 2) A stop job is running for … 11 of user hakon 
(22s / 1min 29s)
[  195.092525][    C4] 
==================================================================
[  195.113160][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  195.113174][    C4]
[  195.113178][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  195.113184][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  195.113186][    C4] Call Trace:
[  195.113193][    C4]  dump_stack+0xa1/0xd3
[  195.113200][    C4] print_address_description.constprop.0+0x1d/0x140
M [K[   [0;31[  195.113205][    C4]  ? kmem_cache_free+0x10b/0x520
[  195.113209][    C4]  kasan_report_invalid_free+0x56/0x80
[  195.113214][    C4]  ? kmem_cache_free+0x10b/0x520
[  195.113218][    C4]  __kasan_slab_free+0x110/0x140
[  195.113223][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  195.113227][    C4]  ? xennet_poll+0x15d9/0x3c80
[  195.113234][    C4]  kmem_cache_free+0x10b/0x520
[  195.113238][    C4]  ? kfree_skbmem+0x9a/0x140
[  195.113245][    C4]  ? xennet_poll+0x15d9/0x3c80
[  195.113249][    C4]  kfree_skbmem+0x9a/0x140
[  195.113253][    C4]  kfree_skb+0x10d/0x240
[  195.113258][    C4]  xennet_poll+0x15d9/0x3c80
[  195.289108][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  195.296657][    C4]  ? kasan_set_track+0x20/0x40
[  195.305649][    C4]  ? kasan_set_free_info+0x24/0x40
[  195.315898][    C4]  ? __kasan_slab_free+0xf1/0x140
[  195.324427][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  195.333952][    C4]  ? kfree+0xd7/0x560
[  195.343453][    C4]  ? skb_release_data+0x41c/0x520
[  195.351519][    C4]  ? __kfree_skb_defer+0x45/0x60
[  195.358404][    C4]  ? net_tx_action+0x1d9/0x860
[  195.364312][    C4]  ? __do_softirq+0x1cd/0x66d
[  195.370190][    C4]  ? run_ksoftirqd+0x2b/0x40
[  195.375914][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  195.381978][    C4]  ? kthread+0x32d/0x400
[  195.387970][    C4]  ? ret_from_fork+0x22/0x30
[  195.394623][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  195.400496][    C4]  ? __kasan_check_read+0x11/0x20
[  195.407179][    C4]  ? lock_release+0xa3/0x820
[  195.412943][    C4]  ? verify_cpu+0x100/0x100
[  195.418918][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  195.427172][    C4]  __napi_poll+0xb2/0x4c0
[  195.433648][    C4]  ? kfree+0xd7/0x560
[  195.439112][    C4]  net_rx_action+0x2d7/0xa20
[  195.444732][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  195.451806][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  195.459019][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  195.465287][    C4]  ? net_tx_action+0x1d9/0x860
[  195.471188][    C4]  __do_softirq+0x1cd/0x66d
[  195.476694][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  195.487573][    C4]  run_ksoftirqd+0x2b/0x40
[  195.495732][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  195.508193][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  195.528025][    C4]  ? __kthread_parkme+0x8d/0x120
[  195.538723][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  195.547930][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  195.557802][    C4]  kthread+0x32d/0x400
[  195.563684][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  195.572394][    C4]  ret_from_fork+0x22/0x30
[  195.579716][    C4]
[  195.583433][    C4] Allocated by task 0:
[  195.590074][    C4]  kasan_save_stack+0x23/0x60
[  195.596654][    C4]  __kasan_slab_alloc+0x68/0x80
[  195.603344][    C4]  kmem_cache_alloc_node+0x242/0x380
[  195.610679][    C4]  __alloc_skb+0x156/0x280
[  195.616970][    C4]  __netdev_alloc_skb+0x46/0x320
[  195.625584][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  195.633530][    C4]  xennet_poll+0x1e8b/0x3c80
[  195.641114][    C4]  __napi_poll+0xb2/0x4c0
[  195.647723][    C4]  net_rx_action+0x2d7/0xa20
[  195.654256][    C4]  __do_softirq+0x1cd/0x66d
[  195.660524][    C4]
[  195.664251][    C4] Freed by task 0:
[  195.673597][    C4] (stack is not available)
[  195.682949][    C4]
[  195.690702][    C4] The buggy address belongs to the object at 
ffff888123767180
[  195.690702][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  195.724289][    C4] The buggy address is located 0 bytes inside of
[  195.724289][    C4]  216-byte region [ffff888123767180, 
ffff888123767258)
[  195.748575][    C4] The buggy address belongs to the page:
[  195.761612][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  195.785269][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  195.798209][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  195.808803][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  195.819803][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  195.830992][    C4] page dumped because: kasan: bad access detected
[  195.840133][    C4]
[  195.842954][    C4] Memory state around the buggy address:
[  195.849774][    C4]  ffff888123767080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  195.860723][    C4]  ffff888123767100: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  195.874295][    C4] >ffff888123767180: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  195.889217][    C4]                    ^
[  195.897347][    C4]  ffff888123767200: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  195.909563][    C4]  ffff888123767280: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  195.920752][    C4] 
==================================================================
m*[0;1;31m*[0m[0;31m*[0m] (2 of 2) A stop job is running for …t 
Display Manager (23s / 1min 30s)
[  195.940670][    C4] 
==================================================================
[  195.951611][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  195.951623][    C4]
[  195.951626][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  195.951632][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  195.951635][    C4] Call Trace:
[  195.951640][    C4]  dump_stack+0xa1/0xd3
[  195.951647][    C4] print_address_description.constprop.0+0x1d/0x140
M [K[  [0;31m[  195.951652][    C4]  ? kfree+0xd7/0x560
[  195.951656][    C4]  kasan_report_invalid_free+0x56/0x80
[  195.951660][    C4]  ? kfree+0xd7/0x560
[  195.951664][    C4]  __kasan_slab_free+0x110/0x140
[  195.951668][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  195.951674][    C4]  kfree+0xd7/0x560
[  195.951677][    C4]  ? skb_release_data+0x41c/0x520
[  195.951683][    C4]  ? xennet_poll+0x15d9/0x3c80
[  195.951690][    C4]  ? kmem_cache_free+0x10b/0x520
[  195.951696][    C4]  skb_release_data+0x41c/0x520
[  195.951700][    C4]  ? xennet_poll+0x15d9/0x3c80
[  195.951705][    C4]  ? xennet_poll+0x15d9/0x3c80
[  195.951710][    C4]  kfree_skb+0x105/0x240
[  195.951714][    C4]  xennet_poll+0x15d9/0x3c80
[  196.111605][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  196.117624][    C4]  ? kasan_set_track+0x20/0x40
[  196.124382][    C4]  ? kasan_set_free_info+0x24/0x40
[  196.130787][    C4]  ? __kasan_slab_free+0xf1/0x140
[  196.137780][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  196.145236][    C4]  ? kfree+0xd7/0x560
[  196.150231][    C4]  ? skb_release_data+0x41c/0x520
[  196.156766][    C4]  ? __kfree_skb_defer+0x45/0x60
[  196.162824][    C4]  ? net_tx_action+0x1d9/0x860
[  196.168851][    C4]  ? __do_softirq+0x1cd/0x66d
[  196.174843][    C4]  ? run_ksoftirqd+0x2b/0x40
[  196.180729][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  196.187601][    C4]  ? kthread+0x32d/0x400
[  196.192969][    C4]  ? ret_from_fork+0x22/0x30
[  196.199359][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  196.206065][    C4]  ? __kasan_check_read+0x11/0x20
[  196.212475][    C4]  ? lock_release+0xa3/0x820
[  196.218861][    C4]  ? verify_cpu+0x100/0x100
[  196.225346][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  196.232450][    C4]  __napi_poll+0xb2/0x4c0
[  196.238146][    C4]  ? kfree+0xd7/0x560
[  196.244573][    C4]  net_rx_action+0x2d7/0xa20
[  196.252582][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  196.262681][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  196.272598][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  196.279570][    C0] net eth0: rx->offset: 0, size: -1
[  196.282546][    C4]  ? net_tx_action+0x1d9/0x860
[  196.282560][    C4]  __do_softirq+0x1cd/0x66d
[  196.282567][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  196.282575][    C4]  run_ksoftirqd+0x2b/0x40
[  196.282580][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  196.332331][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  196.341826][    C4]  ? __kthread_parkme+0x8d/0x120
[  196.349207][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  196.357673][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  196.365514][    C4]  kthread+0x32d/0x400
[  196.370816][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  196.377379][    C4]  ret_from_fork+0x22/0x30
[  196.382628][    C4]
[  196.385851][    C4] Allocated by task 0:
[  196.390913][    C4] (stack is not available)
[  196.396287][    C4]
[  196.399365][    C4] Freed by task 38:
[  196.403934][    C4]  kasan_save_stack+0x23/0x60
[  196.409647][    C4]  kasan_set_track+0x20/0x40
[  196.415064][    C4]  kasan_set_free_info+0x24/0x40
[  196.421504][    C4]  __kasan_slab_free+0xf1/0x140
[  196.427786][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  196.434235][    C4]  kfree+0xd7/0x560
[  196.439262][    C4]  skb_release_data+0x41c/0x520
[  196.447168][    C4]  kfree_skb+0x105/0x240
[  196.453911][    C4]  xennet_poll+0x1ab2/0x3c80
[  196.462466][    C4]  __napi_poll+0xb2/0x4c0
[  196.470083][    C4]  net_rx_action+0x2d7/0xa20
[  196.478533][    C4]  __do_softirq+0x1cd/0x66d
[  196.485217][    C4]
[  196.490054][    C4] The buggy address belongs to the object at 
ffff88812fabf800
[  196.490054][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  196.510349][    C4] The buggy address is located 0 bytes inside of
[  196.510349][    C4]  1024-byte region [ffff88812fabf800, 
ffff88812fabfc00)
[  196.527581][    C4] The buggy address belongs to the page:
[  196.534468][    C4] page:00000000d0c67a5b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x12fab8
[  196.547382][    C4] head:00000000d0c67a5b order:3 compound_mapcount:0 
compound_pincount:0
[  196.558036][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  196.568257][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  196.579266][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  196.590714][    C4] page dumped because: kasan: bad access detected
[  196.598771][    C4]
[  196.601943][    C4] Memory state around the buggy address:
[  196.609513][    C4]  ffff88812fabf700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  196.619789][    C4]  ffff88812fabf780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  196.632708][    C4] >ffff88812fabf800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  196.647484][    C4]                    ^
[  196.656085][    C4]  ffff88812fabf880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  196.670904][    C4]  ffff88812fabf900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  196.683414][    C4] 
==================================================================
*[0;1;31m*[0m[0;31m* [0m] (2 of 2) A stop job is running for …t 
Display Manager (24s / 1min 30s)
[  196.703793][    C4] 
==================================================================
[  196.714438][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  196.714451][    C4]
[  196.714454][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  196.714460][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  196.714463][    C4] Call Trace:
[  196.714469][    C4]  dump_stack+0xa1/0xd3
[  196.714475][    C4] print_address_description.constprop.0+0x1d/0x140
M [K[ [0;31m*[  196.714480][    C4]  ? kmem_cache_free+0x10b/0x520
[  196.714484][    C4]  kasan_report_invalid_free+0x56/0x80
[  196.714489][    C4]  ? kmem_cache_free+0x10b/0x520
[  196.714493][    C4]  __kasan_slab_free+0x110/0x140
[  196.714498][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  196.714503][    C4]  ? xennet_poll+0x15d9/0x3c80
[  196.714509][    C4]  kmem_cache_free+0x10b/0x520
[  196.714513][    C4]  ? kfree_skbmem+0x9a/0x140
[  196.845236][    C4]  ? xennet_poll+0x15d9/0x3c80
[  196.854355][    C4]  kfree_skbmem+0x9a/0x140
[  196.862950][    C4]  kfree_skb+0x10d/0x240
[  196.871478][    C4]  xennet_poll+0x15d9/0x3c80
[  196.878631][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  196.884679][    C4]  ? kasan_set_track+0x20/0x40
[  196.891339][    C4]  ? kasan_set_free_info+0x24/0x40
[  196.898334][    C4]  ? __kasan_slab_free+0xf1/0x140
[  196.904762][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  196.912166][    C4]  ? kfree+0xd7/0x560
[  196.917244][    C4]  ? skb_release_data+0x41c/0x520
[  196.923785][    C4]  ? __kfree_skb_defer+0x45/0x60
[  196.929875][    C4]  ? net_tx_action+0x1d9/0x860
[  196.936132][    C4]  ? __do_softirq+0x1cd/0x66d
[  196.942227][    C4]  ? run_ksoftirqd+0x2b/0x40
[  196.948024][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  196.954476][    C4]  ? kthread+0x32d/0x400
[  196.959747][    C4]  ? ret_from_fork+0x22/0x30
[  196.965344][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  196.971744][    C4]  ? __kasan_check_read+0x11/0x20
[  196.979012][    C4]  ? lock_release+0xa3/0x820
[  196.986470][    C4]  ? verify_cpu+0x100/0x100
[  196.993755][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  197.003250][    C4]  __napi_poll+0xb2/0x4c0
[  197.013533][    C4]  ? kfree+0xd7/0x560
[  197.025698][    C4]  net_rx_action+0x2d7/0xa20
[  197.035740][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  197.050115][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  197.067516][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  197.080702][    C4]  ? net_tx_action+0x1d9/0x860
[  197.089643][    C4]  __do_softirq+0x1cd/0x66d
[  197.098385][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  197.110771][    C4]  run_ksoftirqd+0x2b/0x40
[  197.118624][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  197.128315][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  197.139082][    C4]  ? __kthread_parkme+0x8d/0x120
[  197.147677][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  197.163858][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  197.179423][    C4]  kthread+0x32d/0x400
[  197.188145][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  197.197398][    C4]  ret_from_fork+0x22/0x30
[  197.205246][    C4]
[  197.209632][    C4] Allocated by task 0:
[  197.221411][    C4]  kasan_save_stack+0x23/0x60
[  197.236830][    C4]  __kasan_slab_alloc+0x68/0x80
[  197.251335][    C4]  kmem_cache_alloc_node+0x242/0x380
[  197.262631][    C4]  __alloc_skb+0x156/0x280
[  197.272539][    C4]  __netdev_alloc_skb+0x46/0x320
[  197.280669][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  197.289944][    C4]  xennet_poll+0x1e8b/0x3c80
[  197.297216][    C4]  __napi_poll+0xb2/0x4c0
[  197.304057][    C4]  net_rx_action+0x2d7/0xa20
[  197.311158][    C4]  __do_softirq+0x1cd/0x66d
[  197.318365][    C4]
[  197.321891][    C4] Freed by task 0:
[  197.327427][    C4] (stack is not available)
[  197.333346][    C4]
[  197.336752][    C4] The buggy address belongs to the object at 
ffff888123766dc0
[  197.336752][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  197.357131][    C4] The buggy address is located 0 bytes inside of
[  197.357131][    C4]  216-byte region [ffff888123766dc0, 
ffff888123766e98)
[  197.374016][    C4] The buggy address belongs to the page:
[  197.380785][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  197.396021][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  197.405669][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  197.416059][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  197.428976][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  197.444956][    C4] page dumped because: kasan: bad access detected
[  197.455874][    C4]
[  197.459469][    C4] Memory state around the buggy address:
[  197.467122][    C4]  ffff888123766c80: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  197.479361][    C4]  ffff888123766d00: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  197.489920][    C4] >ffff888123766d80: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  197.499918][    C4]                                             ^
[  197.508461][    C4]  ffff888123766e00: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  197.517910][    C4]  ffff888123766e80: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  197.528159][    C4] 
==================================================================
[0;1;31m*[0m[0;31m*  [0m] (2 of 2) A stop job is running for …t 
Display Manager (25s / 1min 30s)
[  197.546305][    C4] 
==================================================================
[  197.556565][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  197.556576][    C4]
[  197.556579][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  197.556584][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  197.556587][    C4] Call Trace:
[  197.556591][    C4]  dump_stack+0xa1/0xd3
[  197.556597][    C4] print_address_description.constprop.0+0x1d/0x140
M [K[[0;31m*[  197.556604][    C4]  ? kfree+0xd7/0x560
[  197.556607][    C4]  kasan_report_invalid_free+0x56/0x80
[  197.556612][    C4]  ? kfree+0xd7/0x560
[  197.556615][    C4]  __kasan_slab_free+0x110/0x140
[  197.556620][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  197.556625][    C4]  kfree+0xd7/0x560
[  197.556629][    C4]  ? skb_release_data+0x41c/0x520
[  197.674534][    C4]  ? xennet_poll+0x15d9/0x3c80
[  197.680401][    C4]  ? kmem_cache_free+0x10b/0x520
[  197.687281][    C4]  skb_release_data+0x41c/0x520
[  197.694759][    C4]  ? xennet_poll+0x15d9/0x3c80
[  197.701585][    C4]  ? xennet_poll+0x15d9/0x3c80
[  197.708515][    C4]  kfree_skb+0x105/0x240
[  197.714227][    C4]  xennet_poll+0x15d9/0x3c80
[  197.720814][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  197.727034][    C4]  ? kasan_set_track+0x20/0x40
[  197.733603][    C4]  ? kasan_set_free_info+0x24/0x40
[  197.740929][    C4]  ? __kasan_slab_free+0xf1/0x140
[  197.747228][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  197.754523][    C4]  ? kfree+0xd7/0x560
[  197.760490][    C4]  ? skb_release_data+0x41c/0x520
[  197.766985][    C4]  ? __kfree_skb_defer+0x45/0x60
[  197.775549][    C4]  ? net_tx_action+0x1d9/0x860
[  197.781746][    C4]  ? __do_softirq+0x1cd/0x66d
[  197.792668][    C4]  ? run_ksoftirqd+0x2b/0x40
[  197.801889][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  197.815558][    C4]  ? kthread+0x32d/0x400
[  197.825950][    C4]  ? ret_from_fork+0x22/0x30
[  197.835734][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  197.843951][    C4]  ? __kasan_check_read+0x11/0x20
[  197.856137][    C4]  ? lock_release+0xa3/0x820
[  197.863900][    C4]  ? verify_cpu+0x100/0x100
[  197.872511][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  197.882326][    C4]  __napi_poll+0xb2/0x4c0
[  197.889238][    C4]  ? kfree+0xd7/0x560
[  197.895752][    C4]  net_rx_action+0x2d7/0xa20
[  197.902953][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  197.911100][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  197.921278][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  197.928642][    C4]  ? net_tx_action+0x1d9/0x860
[  197.936443][    C4]  __do_softirq+0x1cd/0x66d
[  197.943758][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  197.954181][    C4]  run_ksoftirqd+0x2b/0x40
[  197.961474][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  197.969116][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  197.978440][    C4]  ? __kthread_parkme+0x8d/0x120
[  197.986917][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  198.002581][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  198.023172][    C4]  kthread+0x32d/0x400
[  198.030482][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  198.039450][    C4]  ret_from_fork+0x22/0x30
[  198.046425][    C4]
[  198.049662][    C4] Allocated by task 0:
[  198.054694][    C4] (stack is not available)
[  198.060348][    C4]
[  198.063533][    C4] Freed by task 38:
[  198.068393][    C4]  kasan_save_stack+0x23/0x60
[  198.074675][    C4]  kasan_set_track+0x20/0x40
[  198.080501][    C4]  kasan_set_free_info+0x24/0x40
[  198.087458][    C4]  __kasan_slab_free+0xf1/0x140
[  198.095680][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  198.103660][    C4]  kfree+0xd7/0x560
[  198.109553][    C4]  skb_release_data+0x41c/0x520
[  198.118584][    C4]  kfree_skb+0x105/0x240
[  198.126056][    C4]  xennet_poll+0x1ab2/0x3c80
[  198.133346][    C4]  __napi_poll+0xb2/0x4c0
[  198.141173][    C4]  net_rx_action+0x2d7/0xa20
[  198.148043][    C4]  __do_softirq+0x1cd/0x66d
[  198.154647][    C4]
[  198.158311][    C4] The buggy address belongs to the object at 
ffff8881237d7800
[  198.158311][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  198.179577][    C4] The buggy address is located 0 bytes inside of
[  198.179577][    C4]  1024-byte region [ffff8881237d7800, 
ffff8881237d7c00)
[  198.208651][    C4] The buggy address belongs to the page:
[  198.219422][    C4] page:000000003b068a4d refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x1237d0
[  198.243057][    C4] head:000000003b068a4d order:3 compound_mapcount:0 
compound_pincount:0
[  198.256730][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  198.271380][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  198.286563][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  198.301277][    C4] page dumped because: kasan: bad access detected
[  198.314275][    C4]
[  198.318929][    C4] Memory state around the buggy address:
[  198.329036][    C4]  ffff8881237d7700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  198.343366][    C4]  ffff8881237d7780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  198.355693][    C4] >ffff8881237d7800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  198.368527][    C4]                    ^
[  198.377311][    C4]  ffff8881237d7880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  198.396567][    C4]  ffff8881237d7900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  198.416094][    C4] 
==================================================================
[0;1;31m*[0m[0;31m*   [0m] (1 of 2) A stop job is running for … 11 of 
user hakon (25s / 1min 29s)
[  198.446859][    C4] 
==================================================================
[  198.457043][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  198.457057][    C4]
[  198.457061][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  198.457067][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  198.457070][    C4] Call Trace:
[  198.457076][    C4]  dump_stack+0xa1/0xd3
[  198.457083][    C4]  print_address_description.constprop.0+0x1d/0x140
M [K[[0;1;31m[  198.457088][    C4]  ? kmem_cache_free+0x10b/0x520
[  198.457093][    C4]  kasan_report_invalid_free+0x56/0x80
[  198.457097][    C4]  ? kmem_cache_free+0x10b/0x520
[  198.457101][    C4]  __kasan_slab_free+0x110/0x140
[  198.543457][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  198.549847][    C4]  ? xennet_poll+0x15d9/0x3c80
[  198.556326][    C4]  kmem_cache_free+0x10b/0x520
[  198.562217][    C4]  ? kfree_skbmem+0x9a/0x140
[  198.567755][    C4]  ? xennet_poll+0x15d9/0x3c80
[  198.574692][    C4]  kfree_skbmem+0x9a/0x140
[  198.580641][    C4]  kfree_skb+0x10d/0x240
[  198.588351][    C4]  xennet_poll+0x15d9/0x3c80
[  198.595136][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  198.601827][    C4]  ? kasan_set_track+0x20/0x40
[  198.611153][    C4]  ? kasan_set_free_info+0x24/0x40
[  198.618921][    C4]  ? __kasan_slab_free+0xf1/0x140
[  198.628439][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  198.635785][    C4]  ? kfree+0xd7/0x560
[  198.640802][    C4]  ? skb_release_data+0x41c/0x520
[  198.647003][    C4]  ? __kfree_skb_defer+0x45/0x60
[  198.652860][    C4]  ? net_tx_action+0x1d9/0x860
[  198.658706][    C4]  ? __do_softirq+0x1cd/0x66d
[  198.664974][    C4]  ? run_ksoftirqd+0x2b/0x40
[  198.670892][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  198.677345][    C4]  ? kthread+0x32d/0x400
[  198.682661][    C4]  ? ret_from_fork+0x22/0x30
[  198.688808][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  198.695353][    C4]  ? __kasan_check_read+0x11/0x20
[  198.702288][    C4]  ? lock_release+0xa3/0x820
[  198.708759][    C4]  ? verify_cpu+0x100/0x100
[  198.714958][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  198.721860][    C4]  __napi_poll+0xb2/0x4c0
[  198.727152][    C4]  ? kfree+0xd7/0x560
[  198.732356][    C4]  net_rx_action+0x2d7/0xa20
[  198.738954][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  198.746318][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  198.753460][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  198.760415][    C4]  ? net_tx_action+0x1d9/0x860
[  198.770977][    C4]  __do_softirq+0x1cd/0x66d
[  198.779252][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  198.792733][    C4]  run_ksoftirqd+0x2b/0x40
[  198.801933][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  198.813520][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  198.824765][    C4]  ? __kthread_parkme+0x8d/0x120
[  198.832027][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  198.841767][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  198.850357][    C4]  kthread+0x32d/0x400
[  198.857849][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  198.866497][    C4]  ret_from_fork+0x22/0x30
[  198.873199][    C4]
[  198.876566][    C4] Allocated by task 38:
[  198.882819][    C4]  kasan_save_stack+0x23/0x60
[  198.890008][    C4]  __kasan_slab_alloc+0x68/0x80
[  198.896700][    C4]  kmem_cache_alloc_node+0x242/0x380
[  198.904306][    C4]  __alloc_skb+0x156/0x280
[  198.910690][    C4]  __netdev_alloc_skb+0x46/0x320
[  198.917455][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  198.926010][    C4]  xennet_poll+0x1e8b/0x3c80
[  198.932245][    C4]  __napi_poll+0xb2/0x4c0
[  198.939041][    C4]  net_rx_action+0x2d7/0xa20
[  198.945988][    C4]  __do_softirq+0x1cd/0x66d
[  198.953151][    C4]
[  198.957672][    C4] Freed by task 0:
[  198.965667][    C4] (stack is not available)
[  198.975841][    C4]
[  198.980300][    C4] The buggy address belongs to the object at 
ffff888123767e00
[  198.980300][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  199.011205][    C4] The buggy address is located 0 bytes inside of
[  199.011205][    C4]  216-byte region [ffff888123767e00, 
ffff888123767ed8)
[  199.032900][    C4] The buggy address belongs to the page:
[  199.042711][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  199.059163][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  199.069123][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  199.082894][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  199.096611][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  199.110232][    C4] page dumped because: kasan: bad access detected
[  199.121213][    C4]
[  199.126160][    C4] Memory state around the buggy address:
[  199.134442][    C4]  ffff888123767d00: 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00
[  199.147976][    C4]  ffff888123767d80: 00 00 00 fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  199.168211][    C4] >ffff888123767e00: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  199.188885][    C4]                    ^
[  199.197861][    C4]  ffff888123767e80: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  199.221841][    C4]  ffff888123767f00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  199.241048][    C4] 
==================================================================
*[0m[0;31m*    [0m] (1 of 2) A stop job is running for … 11 of user 
hakon (26s / 1min 29s)
[  199.273299][    C4] 
==================================================================
[  199.290407][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  199.290422][    C4]
[  199.290425][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  199.290431][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  199.290433][    C4] Call Trace:
[  199.290439][    C4]  dump_stack+0xa1/0xd3
M [K[[0m[0;3[  199.290446][    C4] 
print_address_description.constprop.0+0x1d/0x140
[  199.290453][    C4]  ? kfree+0xd7/0x560
[  199.290457][    C4]  kasan_report_invalid_free+0x56/0x80
[  199.290461][    C4]  ? kfree+0xd7/0x560
[  199.415962][    C4]  __kasan_slab_free+0x110/0x140
[  199.430627][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  199.441098][    C4]  kfree+0xd7/0x560
[  199.447766][    C4]  ? skb_release_data+0x41c/0x520
[  199.457633][    C4]  ? xennet_poll+0x15d9/0x3c80
[  199.465678][    C4]  ? kmem_cache_free+0x10b/0x520
[  199.473870][    C4]  skb_release_data+0x41c/0x520
[  199.480854][    C4]  ? xennet_poll+0x15d9/0x3c80
[  199.488138][    C4]  ? xennet_poll+0x15d9/0x3c80
[  199.494851][    C4]  kfree_skb+0x105/0x240
[  199.500662][    C4]  xennet_poll+0x15d9/0x3c80
[  199.507585][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  199.513901][    C4]  ? kasan_set_track+0x20/0x40
[  199.521157][    C4]  ? kasan_set_free_info+0x24/0x40
[  199.528393][    C4]  ? __kasan_slab_free+0xf1/0x140
[  199.535531][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  199.543327][    C4]  ? kfree+0xd7/0x560
[  199.549296][    C4]  ? skb_release_data+0x41c/0x520
[  199.556785][    C4]  ? __kfree_skb_defer+0x45/0x60
[  199.566972][    C4]  ? net_tx_action+0x1d9/0x860
[  199.578739][    C4]  ? __do_softirq+0x1cd/0x66d
[  199.589654][    C4]  ? run_ksoftirqd+0x2b/0x40
[  199.598032][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  199.605483][    C4]  ? kthread+0x32d/0x400
[  199.610742][    C4]  ? ret_from_fork+0x22/0x30
[  199.616429][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  199.622798][    C4]  ? __kasan_check_read+0x11/0x20
[  199.629070][    C4]  ? lock_release+0xa3/0x820
[  199.634951][    C4]  ? verify_cpu+0x100/0x100
[  199.640679][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  199.648403][    C4]  __napi_poll+0xb2/0x4c0
[  199.657408][    C4]  ? kfree+0xd7/0x560
[  199.663410][    C4]  net_rx_action+0x2d7/0xa20
[  199.669041][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  199.675647][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  199.682562][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  199.689323][    C4]  ? net_tx_action+0x1d9/0x860
[  199.694945][    C4]  __do_softirq+0x1cd/0x66d
[  199.700237][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  199.707495][    C4]  run_ksoftirqd+0x2b/0x40
[  199.712819][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  199.719146][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  199.727121][    C4]  ? __kthread_parkme+0x8d/0x120
[  199.733630][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  199.741407][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  199.748961][    C4]  kthread+0x32d/0x400
[  199.754246][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  199.761147][    C4]  ret_from_fork+0x22/0x30
[  199.766482][    C4]
[  199.769492][    C4] Allocated by task 0:
[  199.774770][    C4] (stack is not available)
[  199.780466][    C4]
[  199.783493][    C4] Freed by task 0:
[  199.788930][    C4]  kasan_save_stack+0x23/0x60
[  199.796576][    C4]  kasan_set_track+0x20/0x40
[  199.805992][    C4]  kasan_set_free_info+0x24/0x40
[  199.818320][    C4]  __kasan_slab_free+0xf1/0x140
[  199.830701][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  199.841487][    C4]  kfree+0xd7/0x560
[  199.850715][    C4]  skb_release_data+0x41c/0x520
[  199.859704][    C4]  kfree_skb+0x105/0x240
[  199.868666][    C4]  xennet_poll+0x1ab2/0x3c80
[  199.877842][    C4]  __napi_poll+0xb2/0x4c0
[  199.885704][    C4]  net_rx_action+0x2d7/0xa20
[  199.892631][    C4]  __do_softirq+0x1cd/0x66d
[  199.898162][    C4]
[  199.901144][    C4] The buggy address belongs to the object at 
ffff8881237d7000
[  199.901144][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  199.918816][    C4] The buggy address is located 0 bytes inside of
[  199.918816][    C4]  1024-byte region [ffff8881237d7000, 
ffff8881237d7400)
[  199.935613][    C4] The buggy address belongs to the page:
[  199.943031][    C4] page:000000003b068a4d refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x1237d0
[  199.956223][    C4] head:000000003b068a4d order:3 compound_mapcount:0 
compound_pincount:0
[  199.966591][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  199.977938][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  199.993518][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  200.011059][    C4] page dumped because: kasan: bad access detected
[  200.024150][    C4]
[  200.030019][    C4] Memory state around the buggy address:
[  200.040154][    C4]  ffff8881237d6f00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  200.051802][    C4]  ffff8881237d6f80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  200.065291][    C4] >ffff8881237d7000: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  200.078728][    C4]                    ^
[  200.084830][    C4]  ffff8881237d7080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  200.095971][    C4]  ffff8881237d7100: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  200.109416][    C4] 
==================================================================
[*     ] (1 of 2) A stop job is running for … 11 of user hakon (27s / 1min 29s)
[  200.131324][    C4] 
==================================================================
[  200.143892][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  200.143904][    C4]
[  200.143907][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  200.143912][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  200.143915][    C4] Call Trace:
[  200.143919][    C4]  dump_stack+0xa1/0xd3
M [K[[0;1;31m[  200.143926][    C4] 
print_address_description.constprop.0+0x1d/0x140
[  200.143931][    C4]  ? kmem_cache_free+0x10b/0x520
[  200.143935][    C4]  kasan_report_invalid_free+0x56/0x80
[  200.143940][    C4]  ? kmem_cache_free+0x10b/0x520
[  200.143943][    C4]  __kasan_slab_free+0x110/0x140
[  200.143948][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  200.143954][    C4]  ? xennet_poll+0x15d9/0x3c80
[  200.143960][    C4]  kmem_cache_free+0x10b/0x520
[  200.143964][    C4]  ? kfree_skbmem+0x9a/0x140
[  200.143971][    C4]  ? xennet_poll+0x15d9/0x3c80
[  200.143976][    C4]  kfree_skbmem+0x9a/0x140
[  200.143979][    C4]  kfree_skb+0x10d/0x240
[  200.143984][    C4]  xennet_poll+0x15d9/0x3c80
[  200.143993][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  200.143997][    C4]  ? kasan_set_track+0x20/0x40
[  200.144001][    C4]  ? kasan_set_free_info+0x24/0x40
[  200.144006][    C4]  ? __kasan_slab_free+0xf1/0x140
[  200.144010][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  200.144013][    C4]  ? kfree+0xd7/0x560
[  200.144017][    C4]  ? skb_release_data+0x41c/0x520
[  200.422499][    C4]  ? __kfree_skb_defer+0x45/0x60
[  200.433632][    C4]  ? net_tx_action+0x1d9/0x860
[  200.441630][    C4]  ? __do_softirq+0x1cd/0x66d
[  200.450496][    C4]  ? run_ksoftirqd+0x2b/0x40
[  200.458579][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  200.466961][    C4]  ? kthread+0x32d/0x400
[  200.474814][    C4]  ? ret_from_fork+0x22/0x30
[  200.481359][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  200.489666][    C4]  ? __kasan_check_read+0x11/0x20
[  200.499137][    C4]  ? lock_release+0xa3/0x820
[  200.508374][    C4]  ? verify_cpu+0x100/0x100
[  200.515303][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  200.524964][    C4]  __napi_poll+0xb2/0x4c0
[  200.531389][    C4]  ? kfree+0xd7/0x560
[  200.537937][    C4]  net_rx_action+0x2d7/0xa20
[  200.547205][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  200.560233][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  200.574181][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  200.584740][    C4]  ? net_tx_action+0x1d9/0x860
[  200.594485][    C4]  __do_softirq+0x1cd/0x66d
[  200.601066][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  200.609972][    C4]  run_ksoftirqd+0x2b/0x40
[  200.615619][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  200.622112][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  200.629669][    C4]  ? __kthread_parkme+0x8d/0x120
[  200.637946][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  200.647237][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  200.655370][    C4]  kthread+0x32d/0x400
[  200.660504][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  200.666800][    C4]  ret_from_fork+0x22/0x30
[  200.672481][    C4]
[  200.675423][    C4] Allocated by task 38:
[  200.680480][    C4]  kasan_save_stack+0x23/0x60
[  200.686959][    C4]  __kasan_slab_alloc+0x68/0x80
[  200.693435][    C4]  kmem_cache_alloc_node+0x242/0x380
[  200.700359][    C4]  __alloc_skb+0x156/0x280
[  200.706086][    C4]  __netdev_alloc_skb+0x46/0x320
[  200.712833][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  200.719924][    C4]  xennet_poll+0x1e8b/0x3c80
[  200.726178][    C4]  __napi_poll+0xb2/0x4c0
[  200.733488][    C4]  net_rx_action+0x2d7/0xa20
[  200.745385][    C4]  __do_softirq+0x1cd/0x66d
[  200.756439][    C4]
[  200.762721][    C4] Freed by task 0:
[  200.774206][    C4] (stack is not available)
[  200.785775][    C4]
[  200.789925][    C4] The buggy address belongs to the object at 
ffff888123767b80
[  200.789925][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  200.811445][    C4] The buggy address is located 0 bytes inside of
[  200.811445][    C4]  216-byte region [ffff888123767b80, 
ffff888123767c58)
[  200.831756][    C4] The buggy address belongs to the page:
[  200.839630][    C4] page:0000000033082a5e refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123766
[  200.854616][    C4] head:0000000033082a5e order:1 compound_mapcount:0
[  200.863765][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  200.875050][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  200.888159][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  200.900494][    C4] page dumped because: kasan: bad access detected
[  200.911374][    C4]
[  200.915042][    C4] Memory state around the buggy address:
[  200.926805][    C4]  ffff888123767a80: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  200.946444][    C4]  ffff888123767b00: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  200.964582][    C4] >ffff888123767b80: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  200.981548][    C4]                    ^
[  200.988422][    C4]  ffff888123767c00: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  201.000428][    C4]  ffff888123767c80: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  201.012154][    C4] 
==================================================================
*[0m[0;31m*    [0m] (2 of 2) A stop job is running for …t Display 
Manager (29s / 1min 30s)
[  201.034635][    C4] net eth0: rx->offset: 0, size: -1
[  201.044199][    C4] net eth0: rx->offset: 0, size: -1
[  201.044209][    C4] net eth0: rx->offset: 0, size: -1
[  201.044215][    C4] net eth0: rx->offset: 0, size: -1
[  201.044218][    C4] net eth0: rx->offset: 0, size: -1
[  201.044221][    C4] net eth0: rx->offset: 0, size: -1
[  201.044230][    C4] 
==================================================================
[  201.044232][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
M [K[[0;31m*[  201.044241][    C4]
[  201.044244][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  201.044250][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  201.044252][    C4] Call Trace:
[  201.044258][    C4]  dump_stack+0xa1/0xd3
[  201.204927][    C4]  print_address_description.constprop.0+0x1d/0x140
[  201.214952][    C4]  ? kfree+0xd7/0x560
[  201.220385][    C4]  kasan_report_invalid_free+0x56/0x80
[  201.229299][    C4]  ? kfree+0xd7/0x560
[  201.235873][    C4]  __kasan_slab_free+0x110/0x140
[  201.244242][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  201.253371][    C4]  kfree+0xd7/0x560
[  201.259751][    C4]  ? skb_release_data+0x41c/0x520
[  201.267861][    C4]  ? free_unref_page+0x145/0x1e0
[  201.275864][    C4]  skb_release_data+0x41c/0x520
[  201.283416][    C4]  ? xennet_poll+0x1ab2/0x3c80
[  201.290765][    C4]  kfree_skb+0x105/0x240
[  201.297392][    C4]  xennet_poll+0x1ab2/0x3c80
[  201.304453][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  201.312220][    C4]  ? kasan_set_track+0x20/0x40
[  201.322286][    C4]  ? kasan_set_free_info+0x24/0x40
[  201.333738][    C4]  ? __kasan_slab_free+0xf1/0x140
[  201.348056][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  201.363040][    C4]  ? kfree+0xd7/0x560
[  201.372620][    C4]  ? skb_release_data+0x41c/0x520
[  201.380319][    C4]  ? __kfree_skb_defer+0x45/0x60
[  201.387731][    C4]  ? net_tx_action+0x1d9/0x860
[  201.394518][    C4]  ? __do_softirq+0x1cd/0x66d
[  201.400588][    C4]  ? run_ksoftirqd+0x2b/0x40
[  201.406355][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  201.413038][    C4]  ? kthread+0x32d/0x400
[  201.418635][    C4]  ? ret_from_fork+0x22/0x30
[  201.424830][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  201.430972][    C4]  ? __kasan_check_read+0x11/0x20
[  201.438317][    C4]  ? lock_release+0xa3/0x820
[  201.444286][    C4]  ? verify_cpu+0x100/0x100
[  201.450645][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  201.458911][    C4]  __napi_poll+0xb2/0x4c0
[  201.464608][    C4]  ? kfree+0xd7/0x560
[  201.470120][    C4]  net_rx_action+0x2d7/0xa20
[  201.475957][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  201.482070][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  201.489324][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  201.495730][    C4]  ? net_tx_action+0x1d9/0x860
[  201.501456][    C4]  __do_softirq+0x1cd/0x66d
[  201.509042][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  201.518079][    C4]  run_ksoftirqd+0x2b/0x40
[  201.527063][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  201.535289][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  201.546472][    C4]  ? __kthread_parkme+0x8d/0x120
[  201.554707][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  201.564453][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  201.573892][    C4]  kthread+0x32d/0x400
[  201.579346][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  201.585741][    C4]  ret_from_fork+0x22/0x30
[  201.591733][    C4]
[  201.594684][    C4] Allocated by task 0:
[  201.599723][    C4] (stack is not available)
[  201.605438][    C4]
[  201.608370][    C4] Freed by task 0:
[  201.612857][    C4]  kasan_save_stack+0x23/0x60
[  201.618594][    C4]  kasan_set_track+0x20/0x40
[  201.624417][    C4]  kasan_set_free_info+0x24/0x40
[  201.630679][    C4]  __kasan_slab_free+0xf1/0x140
[  201.637181][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  201.643906][    C4]  kfree+0xd7/0x560
[  201.648754][    C4]  skb_release_data+0x41c/0x520
[  201.654662][    C4]  kfree_skb+0x105/0x240
[  201.659850][    C4]  xennet_poll+0x1ab2/0x3c80
[  201.665373][    C4]  __napi_poll+0xb2/0x4c0
[  201.670668][    C4]  net_rx_action+0x2d7/0xa20
[  201.676261][    C4]  __do_softirq+0x1cd/0x66d
[  201.681791][    C4]
[  201.684696][    C4] The buggy address belongs to the object at 
ffff8881237d1800
[  201.684696][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  201.704499][    C4] The buggy address is located 0 bytes inside of
[  201.704499][    C4]  1024-byte region [ffff8881237d1800, 
ffff8881237d1c00)
[  201.725900][    C4] The buggy address belongs to the page:
[  201.740734][    C4] page:000000003b068a4d refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x1237d0
[  201.759714][    C4] head:000000003b068a4d order:3 compound_mapcount:0 
compound_pincount:0
[  201.773377][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  201.786864][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  201.800669][    C4] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  201.814845][    C4] page dumped because: kasan: bad access detected
[  201.827112][    C4]
[  201.830619][    C4] Memory state around the buggy address:
[  201.839451][    C4]  ffff8881237d1700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  201.852340][    C4]  ffff8881237d1780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  201.866765][    C4] >ffff8881237d1800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  201.880917][    C4]                    ^
[  201.888498][    C4]  ffff8881237d1880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  201.907115][    C4]  ffff8881237d1900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  201.926265][    C4] 
==================================================================
[**   ] (2 of 2) A stop job is running for …t Display Manager (29s / 1min 30s)
[  201.958556][    C4] 
==================================================================
[  201.970440][    C4] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  201.970452][    C4]
[  201.970455][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  201.970461][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  201.970464][    C4] Call Trace:
[  201.970468][    C4]  dump_stack+0xa1/0xd3
M [K[ [0;31m*[  201.970475][    C4] 
print_address_description.constprop.0+0x1d/0x140
[  201.970481][    C4]  ? kmem_cache_free+0x10b/0x520
[  201.970485][    C4]  kasan_report_invalid_free+0x56/0x80
[  201.970489][    C4]  ? kmem_cache_free+0x10b/0x520
[  201.970493][    C4]  __kasan_slab_free+0x110/0x140
[  201.970498][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  201.970503][    C4]  ? xennet_poll+0x1ab2/0x3c80
[  201.970509][    C4]  kmem_cache_free+0x10b/0x520
[  201.970513][    C4]  ? kfree_skbmem+0x9a/0x140
[  201.970521][    C4]  ? xennet_poll+0x1ab2/0x3c80
[  202.135371][    C4]  kfree_skbmem+0x9a/0x140
[  202.143159][    C4]  kfree_skb+0x10d/0x240
[  202.148745][    C4]  xennet_poll+0x1ab2/0x3c80
[  202.155712][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  202.162503][    C4]  ? kasan_set_track+0x20/0x40
[  202.170606][    C4]  ? kasan_set_free_info+0x24/0x40
[  202.178868][    C4]  ? __kasan_slab_free+0xf1/0x140
[  202.185116][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  202.192385][    C4]  ? kfree+0xd7/0x560
[  202.197348][    C4]  ? skb_release_data+0x41c/0x520
[  202.203862][    C4]  ? __kfree_skb_defer+0x45/0x60
[  202.210196][    C4]  ? net_tx_action+0x1d9/0x860
[  202.215748][    C4]  ? __do_softirq+0x1cd/0x66d
[  202.221557][    C4]  ? run_ksoftirqd+0x2b/0x40
[  202.227224][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  202.233261][    C4]  ? kthread+0x32d/0x400
[  202.239029][    C4]  ? ret_from_fork+0x22/0x30
[  202.245304][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  202.251647][    C4]  ? __kasan_check_read+0x11/0x20
[  202.260266][    C4]  ? lock_release+0xa3/0x820
[  202.266719][    C4]  ? verify_cpu+0x100/0x100
[  202.275659][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  202.285720][    C4]  __napi_poll+0xb2/0x4c0
[  202.291882][    C4]  ? kfree+0xd7/0x560
[  202.297122][    C4]  net_rx_action+0x2d7/0xa20
[  202.304174][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  202.311862][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  202.319657][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  202.326067][    C4]  ? net_tx_action+0x1d9/0x860
[  202.331836][    C4]  __do_softirq+0x1cd/0x66d
[  202.337788][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  202.345855][    C4]  run_ksoftirqd+0x2b/0x40
[  202.351265][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  202.357538][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  202.365458][    C4]  ? __kthread_parkme+0x8d/0x120
[  202.371909][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  202.379724][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  202.387815][    C4]  kthread+0x32d/0x400
[  202.392923][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  202.398902][    C4]  ret_from_fork+0x22/0x30
[  202.404496][    C4]
[  202.407696][    C4] Allocated by task 38:
[  202.412716][    C4]  kasan_save_stack+0x23/0x60
[  202.418604][    C4]  __kasan_slab_alloc+0x68/0x80
[  202.424853][    C4]  kmem_cache_alloc_node+0x242/0x380
[  202.431392][    C4]  __alloc_skb+0x156/0x280
[  202.437864][    C4]  __netdev_alloc_skb+0x46/0x320
[  202.445954][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  202.454631][    C4]  xennet_poll+0x1e8b/0x3c80
[  202.461691][    C4]  __napi_poll+0xb2/0x4c0
[  202.468156][    C4]  net_rx_action+0x2d7/0xa20
[  202.475068][    C4]  __do_softirq+0x1cd/0x66d
[  202.482262][    C4]
[  202.485790][    C4] Freed by task 0:
[  202.493736][    C4] (stack is not available)
[  202.499070][    C4]
[  202.502177][    C4] The buggy address belongs to the object at 
ffff888123302b40
[  202.502177][    C4]  which belongs to the cache skbuff_head_cache of 
size 216
[  202.522328][    C4] The buggy address is located 0 bytes inside of
[  202.522328][    C4]  216-byte region [ffff888123302b40, 
ffff888123302c18)
[  202.540081][    C4] The buggy address belongs to the page:
[  202.547265][    C4] page:00000000d73f6df1 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x123302
[  202.560308][    C4] head:00000000d73f6df1 order:1 compound_mapcount:0
[  202.568107][    C4] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  202.578064][    C4] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  202.589336][    C4] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  202.600427][    C4] page dumped because: kasan: bad access detected
[  202.611098][    C4]
[  202.614471][    C4] Memory state around the buggy address:
[  202.623856][    C4]  ffff888123302a00: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  202.640742][    C4]  ffff888123302a80: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  202.660948][    C4] >ffff888123302b00: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  202.682574][    C4]                    ^
[  202.694285][    C4]  ffff888123302b80: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  202.708306][    C4]  ffff888123302c00: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  202.720595][    C4] 
==================================================================
[**  ] (2 of 2) A stop job is running for …t Display Manager (30s / 1min 30s)
[  202.744830][    C4] 
==================================================================
[  202.758306][    C4] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  202.769743][    C4]
[  202.773580][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    
B             5.13.0-rc6-x86_64 #1
[  202.790871][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  202.801795][    C4] Call Trace:
[  202.807362][    C4]  dump_stack+0xa1/0xd3
[  202.814819][    C4]  print_address_description.constprop.0+0x1d/0x140
[  202.825948][    C4]  ? kfree+0xd7/0x560
[  202.831634][    C4]  kasan_report_invalid_free+0x56/0x80
[  202.842265][    C4]  ? kfree+0xd7/0x560
[  202.848832][    C4]  __kasan_slab_free+0x110/0x140
[  202.857816][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  202.865259][    C4]  kfree+0xd7/0x560
[  202.870643][    C4]  ? skb_release_data+0x41c/0x520
[  202.877259][    C4]  skb_release_data+0x41c/0x520
[  202.882829][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  202.891484][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  202.900106][    C4]  kfree_skb+0x105/0x240
[  202.905705][    C4]  __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  202.914050][    C4]  ? ret_from_fork+0x22/0x30
[  202.920070][    C4]  ? generic_xdp_tx+0x4c0/0x4c0
[  202.926264][    C4]  ? kasan_save_stack+0x42/0x60
[  202.932178][    C4]  ? kasan_save_stack+0x23/0x60
[  202.938449][    C4]  ? __kasan_slab_alloc+0x68/0x80
[  202.944548][    C4]  ? kmem_cache_alloc_node+0x242/0x380
[  202.951203][    C4]  ? __alloc_skb+0x156/0x280
[  202.957215][    C4]  ? __netdev_alloc_skb+0x46/0x320
[  202.963442][    C4]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  202.970591][    C4]  ? xennet_poll+0x1e8b/0x3c80
[  202.976674][    C4]  ? __napi_poll+0xb2/0x4c0
[  202.982170][    C4]  ? net_rx_action+0x2d7/0xa20
[  202.988453][    C4]  ? __do_softirq+0x1cd/0x66d
[  202.995589][    C4]  ? run_ksoftirqd+0x2b/0x40
[  203.004005][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  203.011871][    C4]  ? kthread+0x32d/0x400
[  203.019708][    C4]  ? ret_from_fork+0x22/0x30
[  203.029612][    C4]  ? get_partial_node.part.0+0x116/0x300
[  203.039588][    C4]  __netif_receive_skb_list_core+0x316/0xa20
[  203.048290][    C4]  ? __netif_receive_skb_list_core+0x316/0xa20
[  203.057196][    C4]  ? __netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  203.065784][    C4]  ? __kasan_check_read+0x11/0x20
[  203.072161][    C4]  ? lock_acquire+0x4b/0x260
[  203.077826][    C4]  netif_receive_skb_list_internal+0x616/0xd80
[  203.085323][    C4]  ? __kasan_check_write+0x14/0x20
[  203.092068][    C4]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  203.099831][    C4]  ? __alloc_skb+0xce/0x280
[  203.105944][    C4]  ? gnttab_grant_foreign_access_ref+0x50/0x80
[  203.113711][    C4]  ? xennet_alloc_rx_buffers+0x15e/0xae0
[  203.120775][    C4]  napi_complete_done+0x189/0x600
[  203.127731][    C4]  xennet_poll+0x2447/0x3c80
[  203.134046][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  203.139927][    C4]  ? kasan_set_track+0x20/0x40
[  203.145804][    C4]  ? kasan_set_free_info+0x24/0x40
[  203.152146][    C4]  ? __kasan_slab_free+0xf1/0x140
[  203.161383][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  203.168556][    C4]  ? kfree+0xd7/0x560
[  203.174194][    C4]  ? skb_release_data+0x41c/0x520
[  203.188842][    C4]  ? __kfree_skb_defer+0x45/0x60
[  203.199970][    C4]  ? net_tx_action+0x1d9/0x860
[  203.210335][    C4]  ? __do_softirq+0x1cd/0x66d
[  203.224241][    C4]  ? run_ksoftirqd+0x2b/0x40
[  203.237098][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  203.249586][    C4]  ? kthread+0x32d/0x400
[  203.256532][    C4]  ? ret_from_fork+0x22/0x30
[  203.263640][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  203.271145][    C4]  ? run_timer_softirq+0x100/0x100
[  203.279449][    C4]  ? __kasan_check_read+0x11/0x20
[  203.287784][    C4]  ? lock_release+0xa3/0x820
[  203.295657][    C4]  ? verify_cpu+0x100/0x100
[  203.302828][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  203.311488][    C4]  __napi_poll+0xb2/0x4c0
[  203.318228][    C4]  ? kfree+0xd7/0x560
[  203.324472][    C4]  net_rx_action+0x2d7/0xa20
[  203.331587][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  203.340312][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  203.348612][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  203.356768][    C4]  ? net_tx_action+0x1d9/0x860
[  203.364189][    C4]  __do_softirq+0x1cd/0x66d
[  203.372995][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  203.387407][    C4]  run_ksoftirqd+0x2b/0x40
[  203.397672][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  203.411192][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  203.425652][    C4]  ? __kthread_parkme+0x8d/0x120
[  203.431699][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  203.439896][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  203.447504][    C4]  kthread+0x32d/0x400
[  203.452427][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  203.459371][    C4]  ret_from_fork+0x22/0x30
[  203.464697][    C4]
[  203.467925][    C4] Allocated by task 0:
[  203.473767][    C4] (stack is not available)
[  203.479458][    C4]
[  203.482421][    C4] Freed by task 0:
[  203.487269][    C4]  kasan_save_stack+0x23/0x60
[  203.494875][    C4]  kasan_set_track+0x20/0x40
[  203.501611][    C4]  kasan_set_free_info+0x24/0x40
[  203.509449][    C4]  __kasan_slab_free+0xf1/0x140
[  203.516244][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  203.525600][    C4]  kfree+0xd7/0x560
[  203.531492][    C4]  skb_release_data+0x41c/0x520
[  203.541822][    C4]  kfree_skb+0x105/0x240
[  203.549323][    C4]  xennet_poll+0x1ab2/0x3c80
[  203.561585][    C4]  __napi_poll+0xb2/0x4c0
[  203.574616][    C4]  net_rx_action+0x2d7/0xa20
[  203.584877][    C4]  __do_softirq+0x1cd/0x66d
[  203.594306][    C4]
[  203.598432][    C4] The buggy address belongs to the object at ffff8881237ca800
[  203.598432][    C4]  which belongs to the cache kmalloc-1k of size 1024
[  203.624247][    C4] The buggy address is located 0 bytes inside of
[  203.624247][    C4]  1024-byte region [ffff8881237ca800, ffff8881237cac00)
[  203.640861][    C4] The buggy address belongs to the page:
[  203.647614][    C4] page:0000000027182cce refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1237c8
[  203.660080][    C4] head:0000000027182cce order:3 compound_mapcount:0 compound_pincount:0
[  203.670639][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  203.680770][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff888100042dc0
[  203.692182][    C4] raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
[  203.702422][    C4] page dumped because: kasan: bad access detected
[  203.710003][    C4]
[  203.712674][    C4] Memory state around the buggy address:
[  203.719827][    C4]  ffff8881237ca700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  203.729864][    C4]  ffff8881237ca780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  203.744176][    C4] >ffff8881237ca800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  203.761633][    C4]                    ^
[  203.769491][    C4]  ffff8881237ca880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  203.786483][    C4]  ffff8881237ca900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  203.800667][    C4] ==================================================================
[**    ] (1 of 2) A stop job is running for … 11 of user hakon (31s / 1min 29s)
[  203.829231][    C4] ==================================================================
[  203.844545][    C4] BUG: KASAN: double-free or invalid-free in kmem_cache_free+0x10b/0x520
[  203.844559][    C4]
[  203.844562][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    B             5.13.0-rc6-x86_64 #1
[  203.844568][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  203.844570][    C4] Call Trace:
[  203.844577][    C4]  dump_stack+0xa1/0xd3
[  203.844583][    C4]  print_address_description.constprop.0+0x1d/0x140
[  203.844596][    C4]  ? kmem_cache_free+0x10b/0x520
[  203.844600][    C4]  kasan_report_invalid_free+0x56/0x80
[  203.844605][    C4]  ? kmem_cache_free+0x10b/0x520
[  203.844609][    C4]  __kasan_slab_free+0x110/0x140
[  203.966488][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  203.982910][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  203.996192][    C4]  kmem_cache_free+0x10b/0x520
[  204.003596][    C4]  ? kfree_skbmem+0x9a/0x140
[  204.010595][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  204.020746][    C4]  kfree_skbmem+0x9a/0x140
[  204.028285][    C4]  kfree_skb+0x10d/0x240
[  204.034362][    C4] __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  204.044977][    C4]  ? ret_from_fork+0x22/0x30
[  204.052000][    C4]  ? generic_xdp_tx+0x4c0/0x4c0
[  204.059282][    C4]  ? kasan_save_stack+0x42/0x60
[  204.065962][    C4]  ? kasan_save_stack+0x23/0x60
[  204.073322][    C4]  ? __kasan_slab_alloc+0x68/0x80
[  204.080810][    C4]  ? kmem_cache_alloc_node+0x242/0x380
[  204.088710][    C4]  ? __alloc_skb+0x156/0x280
[  204.094268][    C4]  ? __netdev_alloc_skb+0x46/0x320
[  204.100598][    C4]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  204.109256][    C4]  ? xennet_poll+0x1e8b/0x3c80
[  204.117524][    C4]  ? __napi_poll+0xb2/0x4c0
[  204.126228][    C4]  ? net_rx_action+0x2d7/0xa20
[  204.135678][    C4]  ? __do_softirq+0x1cd/0x66d
[  204.145742][    C4]  ? run_ksoftirqd+0x2b/0x40
[  204.155376][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  204.165497][    C4]  ? kthread+0x32d/0x400
[  204.172533][    C4]  ? ret_from_fork+0x22/0x30
[  204.178721][    C4]  ? get_partial_node.part.0+0x116/0x300
[  204.186258][    C4]  __netif_receive_skb_list_core+0x316/0xa20
[  204.194136][    C4]  ? __netif_receive_skb_list_core+0x316/0xa20
[  204.201974][    C4]  ? __netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  204.210614][    C4]  ? __kasan_check_read+0x11/0x20
[  204.216867][    C4]  ? lock_acquire+0x4b/0x260
[  204.223181][    C4] netif_receive_skb_list_internal+0x616/0xd80
[  204.230868][    C4]  ? __kasan_check_write+0x14/0x20
[  204.237282][    C4]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  204.244828][    C4]  ? __alloc_skb+0xce/0x280
[  204.250732][    C4]  ? gnttab_grant_foreign_access_ref+0x50/0x80
[  204.260105][    C4]  ? xennet_alloc_rx_buffers+0x15e/0xae0
[  204.267251][    C4]  napi_complete_done+0x189/0x600
[  204.273397][    C4]  xennet_poll+0x2447/0x3c80
[  204.279196][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  204.284689][    C4]  ? kasan_set_track+0x20/0x40
[  204.291897][    C4]  ? kasan_set_free_info+0x24/0x40
[  204.298053][    C4]  ? __kasan_slab_free+0xf1/0x140
[  204.305806][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  204.314325][    C4]  ? kfree+0xd7/0x560
[  204.320792][    C4]  ? skb_release_data+0x41c/0x520
[  204.329328][    C4]  ? __kfree_skb_defer+0x45/0x60
[  204.338765][    C4]  ? net_tx_action+0x1d9/0x860
[  204.347523][    C4]  ? __do_softirq+0x1cd/0x66d
[  204.355677][    C4]  ? run_ksoftirqd+0x2b/0x40
[  204.362473][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  204.369031][    C4]  ? kthread+0x32d/0x400
[  204.374509][    C4]  ? ret_from_fork+0x22/0x30
[  204.379922][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  204.385894][    C4]  ? run_timer_softirq+0x100/0x100
[  204.392418][    C4]  ? __kasan_check_read+0x11/0x20
[  204.398180][    C4]  ? lock_release+0xa3/0x820
[  204.403698][    C4]  ? verify_cpu+0x100/0x100
[  204.409319][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  204.415917][    C4]  __napi_poll+0xb2/0x4c0
[  204.421454][    C4]  ? kfree+0xd7/0x560
[  204.426659][    C4]  net_rx_action+0x2d7/0xa20
[  204.432065][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  204.438778][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  204.446014][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  204.452476][    C4]  ? net_tx_action+0x1d9/0x860
[  204.458658][    C4]  __do_softirq+0x1cd/0x66d
[  204.463901][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  204.471840][    C4]  run_ksoftirqd+0x2b/0x40
[  204.477064][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  204.483031][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  204.492084][    C4]  ? __kthread_parkme+0x8d/0x120
[  204.499181][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  204.510237][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  204.521905][    C4]  kthread+0x32d/0x400
[  204.528474][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  204.536525][    C4]  ret_from_fork+0x22/0x30
[  204.544403][    C4]
[  204.547919][    C4] Allocated by task 0:
[  204.553934][    C4]  kasan_save_stack+0x23/0x60
[  204.561961][    C4]  __kasan_slab_alloc+0x68/0x80
[  204.569456][    C4]  kmem_cache_alloc_node+0x242/0x380
[  204.578390][    C4]  __alloc_skb+0x156/0x280
[  204.584299][    C4]  __netdev_alloc_skb+0x46/0x320
[  204.591699][    C4]  xennet_alloc_rx_buffers+0x237/0xae0
[  204.599173][    C4]  xennet_poll+0x1e8b/0x3c80
[  204.605884][    C4]  __napi_poll+0xb2/0x4c0
[  204.611685][    C4]  net_rx_action+0x2d7/0xa20
[  204.617153][    C4]  __do_softirq+0x1cd/0x66d
[  204.623591][    C4]
[  204.626848][    C4] Freed by task 269043760:
[  204.632842][    C4] ------------[ cut here ]------------
[  204.639936][    C4] slab index 2066561 out of bounds (277) for stack id ffff8881
[  204.649682][    C4] WARNING: CPU: 4 PID: 38 at lib/stackdepot.c:236 stack_depot_fetch+0x71/0xa0
[  204.661358][    C4] Modules linked in: xen_front_pgdir_shbuf xen_scsifront snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd soundcore hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd usbcore [last unloaded: usbip_core]
[  204.713101][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    B             5.13.0-rc6-x86_64 #1
[  204.728930][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  204.742772][    C4] RIP: 0010:stack_depot_fetch+0x71/0xa0
[  204.750732][    C4] Code: 48 c1 e0 04 25 f0 3f 00 00 48 01 c2 48 8d 42 18 48 89 03 48 8b 5d f8 8b 42 0c c9 c3 89 f9 48 c7 c7 48 85 bb 84 e8 2d bf 3b 01 <0f> 0b 31 c0 48 8b 5d f8 c9 c3 48 8b 5d f8 31 c0 c9 c3 48 c7 c7 a0
[  204.777841][    C4] RSP: 0018:ffffc900002df200 EFLAGS: 00010082
[  204.786886][    C4] RAX: 0000000000000000 RBX: ffffc900002df228 RCX: 0000000000000000
[  204.801293][    C4] RDX: 0000000000000027 RSI: 0000000000000004 RDI: fffff5200005be32
[  204.812429][    C4] RBP: ffffc900002df218 R08: 0000000000000001 R09: ffff8884d3a2091b
[  204.825238][    C4] R10: ffffed109a744123 R11: 646e692062616c73 R12: ffff8881239692c0
[  204.838641][    C4] R13: ffffea00048e5a00 R14: ffff8881239692c0 R15: ffff888123969398
[  204.848504][    C4] FS:  0000000000000000(0000) GS:ffff8884d3a00000(0000) knlGS:0000000000000000
[  204.861513][    C4] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  204.870334][    C4] CR2: 00007f0f44da7948 CR3: 0000000004e2a003 CR4: 00000000001706e0
[  204.882828][    C4] Call Trace:
[  204.889788][    C4]  print_stack+0xe/0x1d
[  204.898939][    C4]  print_track+0x22/0x33
[  204.906432][    C4]  print_address_description.constprop.0.cold+0x3c/0x179
[  204.918578][    C4]  ? kmem_cache_free+0x10b/0x520
[  204.928978][    C4]  kasan_report_invalid_free+0x56/0x80
[  204.939519][    C4]  ? kmem_cache_free+0x10b/0x520
[  204.945753][    C4]  __kasan_slab_free+0x110/0x140
[  204.952047][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  204.959530][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  204.967933][    C4]  kmem_cache_free+0x10b/0x520
[  204.974157][    C4]  ? kfree_skbmem+0x9a/0x140
[  204.979557][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  204.992011][    C4]  kfree_skbmem+0x9a/0x140
[  204.997927][    C4]  kfree_skb+0x10d/0x240
[  205.003942][    C4] __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  205.014001][    C4]  ? ret_from_fork+0x22/0x30
[  205.022103][    C4]  ? generic_xdp_tx+0x4c0/0x4c0
[  205.031111][    C4]  ? kasan_save_stack+0x42/0x60
[  205.039632][    C4]  ? kasan_save_stack+0x23/0x60
[  205.047171][    C4]  ? __kasan_slab_alloc+0x68/0x80
[  205.056057][    C4]  ? kmem_cache_alloc_node+0x242/0x380
[  205.072434][    C4]  ? __alloc_skb+0x156/0x280
[  205.083823][    C4]  ? __netdev_alloc_skb+0x46/0x320
[  205.097450][    C4]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  205.112375][    C4]  ? xennet_poll+0x1e8b/0x3c80
[  205.122121][    C4]  ? __napi_poll+0xb2/0x4c0
[  205.130540][    C4]  ? net_rx_action+0x2d7/0xa20
[  205.139272][    C4]  ? __do_softirq+0x1cd/0x66d
[  205.148547][    C4]  ? run_ksoftirqd+0x2b/0x40
[  205.163313][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  205.176671][    C4]  ? kthread+0x32d/0x400
[  205.187660][    C4]  ? ret_from_fork+0x22/0x30
[  205.196351][    C4]  ? get_partial_node.part.0+0x116/0x300
[  205.205474][    C4]  __netif_receive_skb_list_core+0x316/0xa20
[  205.215499][    C4]  ? __netif_receive_skb_list_core+0x316/0xa20
[  205.225547][    C4]  ? __netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  205.236442][    C4]  ? __kasan_check_read+0x11/0x20
[  205.244359][    C4]  ? lock_acquire+0x4b/0x260
[  205.251829][    C4] netif_receive_skb_list_internal+0x616/0xd80
[  205.268504][    C4]  ? __kasan_check_write+0x14/0x20
[  205.282183][    C4]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  205.296254][    C4]  ? __alloc_skb+0xce/0x280
[  205.304164][    C4]  ? gnttab_grant_foreign_access_ref+0x50/0x80
[  205.315892][    C4]  ? xennet_alloc_rx_buffers+0x15e/0xae0
[  205.323890][    C4]  napi_complete_done+0x189/0x600
[  205.330378][    C4]  xennet_poll+0x2447/0x3c80
[  205.336781][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  205.342822][    C4]  ? kasan_set_track+0x20/0x40
[  205.348360][    C4]  ? kasan_set_free_info+0x24/0x40
[  205.355670][    C4]  ? __kasan_slab_free+0xf1/0x140
[  205.362149][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  205.369053][    C4]  ? kfree+0xd7/0x560
[  205.375028][    C4]  ? skb_release_data+0x41c/0x520
[  205.380899][    C4]  ? __kfree_skb_defer+0x45/0x60
[  205.387383][    C4]  ? net_tx_action+0x1d9/0x860
[  205.393808][    C4]  ? __do_softirq+0x1cd/0x66d
[  205.399671][    C4]  ? run_ksoftirqd+0x2b/0x40
[  205.406455][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  205.414000][    C4]  ? kthread+0x32d/0x400
[  205.420426][    C4]  ? ret_from_fork+0x22/0x30
[  205.428156][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  205.435256][    C4]  ? run_timer_softirq+0x100/0x100
[  205.443322][    C4]  ? __kasan_check_read+0x11/0x20
[  205.454004][    C4]  ? lock_release+0xa3/0x820
[  205.467919][    C4]  ? verify_cpu+0x100/0x100
[  205.479146][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  205.493467][    C4]  __napi_poll+0xb2/0x4c0
[  205.503502][    C4]  ? kfree+0xd7/0x560
[  205.509924][    C4]  net_rx_action+0x2d7/0xa20
[  205.517693][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  205.526828][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  205.537216][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  205.545459][    C4]  ? net_tx_action+0x1d9/0x860
[  205.554201][    C4]  __do_softirq+0x1cd/0x66d
[  205.561493][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  205.573580][    C4]  run_ksoftirqd+0x2b/0x40
[  205.581255][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  205.590443][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  205.601743][    C4]  ? __kthread_parkme+0x8d/0x120
[  205.609458][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  205.619228][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  205.629418][    C4]  kthread+0x32d/0x400
[  205.637919][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  205.648072][    C4]  ret_from_fork+0x22/0x30
[  205.658024][    C4] ---[ end trace 3841665df6b692d1 ]---
[  205.671039][    C4] ------------[ cut here ]------------
[  205.685394][    C4] WARNING: CPU: 4 PID: 38 at kernel/stacktrace.c:28 stack_trace_print+0xf/0x40
[  205.704998][    C4] Modules linked in: xen_front_pgdir_shbuf xen_scsifront snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd soundcore hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd usbcore [last unloaded: usbip_core]
[  205.754802][    C4] CPU: 4 PID: 38 Comm: ksoftirqd/4 Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  205.768683][    C4] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  205.779478][    C4] RIP: 0010:stack_trace_print+0xf/0x40
[  205.787792][    C4] Code: 48 8b 55 f0 65 48 2b 14 25 28 00 00 00 75 06 48 8b 5d f8 c9 c3 e8 e1 3e 84 02 90 0f 1f 44 00 00 48 85 ff 74 05 85 f6 75 04 c3 <0f> 0b c3 55 48 89 e5 41 56 41 55 41 54 53 e9 66 c8 71 02 66 66 2e
[  205.819850][    C4] RSP: 0018:ffffc900002df220 EFLAGS: 00010046
[  205.833238][    C4] RAX: 0000000000000000 RBX: ffff8881239692c8 RCX: 0000000000000000
[  205.850669][    C4] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[  205.871680][    C4] RBP: ffffc900002df230 R08: 0000000000000001 R09: ffff8884d3a2091b
[  205.886006][    C4] R10: ffffed109a744123 R11: 646e692062616c73 R12: ffff8881239692c0
[  205.897677][    C4] R13: ffffea00048e5a00 R14: ffff8881239692c0 R15: ffff888123969398
[  205.909911][    C4] FS:  0000000000000000(0000) GS:ffff8884d3a00000(0000) knlGS:0000000000000000
[  205.923372][    C4] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  205.933416][    C4] CR2: 00007f0f44da7948 CR3: 0000000004e2a003 CR4: 00000000001706e0
[  205.946764][    C4] Call Trace:
[  205.951903][    C4]  ? print_stack+0x1b/0x1d
[  205.958813][    C4]  print_track+0x22/0x33
[  205.964776][    C4]  print_address_description.constprop.0.cold+0x3c/0x179
[  205.975411][    C4]  ? kmem_cache_free+0x10b/0x520
[  205.982200][    C4]  kasan_report_invalid_free+0x56/0x80
[  205.990264][    C4]  ? kmem_cache_free+0x10b/0x520
[  205.996296][    C4]  __kasan_slab_free+0x110/0x140
[  206.004025][    C4]  slab_free_freelist_hook+0x8c/0x1c0
[  206.012567][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  206.023774][    C4]  kmem_cache_free+0x10b/0x520
[  206.030258][    C4]  ? kfree_skbmem+0x9a/0x140
[  206.038175][    C4]  ? __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  206.047834][    C4]  kfree_skbmem+0x9a/0x140
[  206.055121][    C4]  kfree_skb+0x10d/0x240
[  206.060109][    C4] __netif_receive_skb_core.constprop.0+0x42d/0x31c0
[  206.068046][    C4]  ? ret_from_fork+0x22/0x30
[  206.073507][    C4]  ? generic_xdp_tx+0x4c0/0x4c0
[  206.079362][    C4]  ? kasan_save_stack+0x42/0x60
[  206.085534][    C4]  ? kasan_save_stack+0x23/0x60
[  206.091558][    C4]  ? __kasan_slab_alloc+0x68/0x80
[  206.098262][    C4]  ? kmem_cache_alloc_node+0x242/0x380
[  206.106001][    C4]  ? __alloc_skb+0x156/0x280
[  206.112267][    C4]  ? __netdev_alloc_skb+0x46/0x320
[  206.120627][    C4]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  206.129599][    C4]  ? xennet_poll+0x1e8b/0x3c80
[  206.137721][    C4]  ? __napi_poll+0xb2/0x4c0
[  206.145399][    C4]  ? net_rx_action+0x2d7/0xa20
[  206.151448][    C4]  ? __do_softirq+0x1cd/0x66d
[  206.157540][    C4]  ? run_ksoftirqd+0x2b/0x40
[  206.163171][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  206.169456][    C4]  ? kthread+0x32d/0x400
[  206.175006][    C4]  ? ret_from_fork+0x22/0x30
[  206.180549][    C4]  ? get_partial_node.part.0+0x116/0x300
[  206.190952][    C4]  __netif_receive_skb_list_core+0x316/0xa20
[  206.199112][    C4]  ? __netif_receive_skb_list_core+0x316/0xa20
[  206.210534][    C4]  ? __netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  206.222538][    C4]  ? __kasan_check_read+0x11/0x20
[  206.230892][    C4]  ? lock_acquire+0x4b/0x260
[  206.239928][    C4] netif_receive_skb_list_internal+0x616/0xd80
[  206.248858][    C4]  ? __kasan_check_write+0x14/0x20
[  206.255261][    C4]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  206.262582][    C4]  ? __alloc_skb+0xce/0x280
[  206.268089][    C4]  ? gnttab_grant_foreign_access_ref+0x50/0x80
[  206.275346][    C4]  ? xennet_alloc_rx_buffers+0x15e/0xae0
[  206.281970][    C4]  napi_complete_done+0x189/0x600
[  206.288447][    C4]  xennet_poll+0x2447/0x3c80
[  206.293679][    C4]  ? xennet_xdp+0x7e0/0x7e0
[  206.299198][    C4]  ? kasan_set_track+0x20/0x40
[  206.305406][    C4]  ? kasan_set_free_info+0x24/0x40
[  206.311715][    C4]  ? __kasan_slab_free+0xf1/0x140
[  206.317609][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  206.324540][    C4]  ? kfree+0xd7/0x560
[  206.329432][    C4]  ? skb_release_data+0x41c/0x520
[  206.335534][    C4]  ? __kfree_skb_defer+0x45/0x60
[  206.341532][    C4]  ? net_tx_action+0x1d9/0x860
[  206.347102][    C4]  ? __do_softirq+0x1cd/0x66d
[  206.352695][    C4]  ? run_ksoftirqd+0x2b/0x40
[  206.358165][    C4]  ? smpboot_thread_fn+0x2fb/0x6e0
[  206.364124][    C4]  ? kthread+0x32d/0x400
[  206.370295][    C4]  ? ret_from_fork+0x22/0x30
[  206.377466][    C4]  ? lock_downgrade+0x7e0/0x7e0
[  206.384708][    C4]  ? run_timer_softirq+0x100/0x100
[  206.394604][    C4]  ? __kasan_check_read+0x11/0x20
[  206.402177][    C4]  ? lock_release+0xa3/0x820
[  206.410652][    C4]  ? verify_cpu+0x100/0x100
[  206.416971][    C4]  ? slab_free_freelist_hook+0x8c/0x1c0
[  206.424555][    C4]  __napi_poll+0xb2/0x4c0
[  206.429893][    C4]  ? kfree+0xd7/0x560
[  206.435174][    C4]  net_rx_action+0x2d7/0xa20
[  206.441688][    C4]  ? napi_threaded_poll+0x2c0/0x2c0
[  206.448184][    C4]  ? __kasan_poison_object_data+0x23/0x40
[  206.455769][    C4]  ? napi_skb_cache_put+0x3b/0x160
[  206.463011][    C4]  ? net_tx_action+0x1d9/0x860
[  206.468894][    C4]  __do_softirq+0x1cd/0x66d
[  206.475980][    C4]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  206.483157][    C4]  run_ksoftirqd+0x2b/0x40
[  206.489319][    C4]  smpboot_thread_fn+0x2fb/0x6e0
[  206.496891][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  206.509007][    C4]  ? __kthread_parkme+0x8d/0x120
[  206.516487][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  206.528201][    C4]  ? smpboot_register_percpu_thread+0x360/0x360
[  206.537820][    C4]  kthread+0x32d/0x400
[  206.543920][    C4]  ? __kthread_bind_mask+0xa0/0xa0
[  206.551009][    C4]  ret_from_fork+0x22/0x30
[  206.557448][    C4] ---[ end trace 3841665df6b692d2 ]---
[  206.565734][    C4]
[  206.573168][    C4] The buggy address belongs to the object at ffff8881239692c0
[  206.573168][    C4]  which belongs to the cache skbuff_head_cache of size 216
[  206.605429][    C4] The buggy address is located 0 bytes inside of
[  206.605429][    C4]  216-byte region [ffff8881239692c0, ffff888123969398)
[  206.628957][    C4] The buggy address belongs to the page:
[  206.638768][    C4] page:00000000ecb5bb98 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x123968
[  206.654438][    C4] head:00000000ecb5bb98 order:1 compound_mapcount:0
[  206.662760][    C4] memcg:ffff888136ceca01
[  206.667965][    C4] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  206.677805][    C4] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff8881002a3e00
[  206.689429][    C4] raw: 0000000000000000 0000000000190019 00000001ffffffff ffff888136ceca01
[  206.700611][    C4] page dumped because: kasan: bad access detected
[  206.711046][    C4]
[  206.714637][    C4] Memory state around the buggy address:
[  206.723667][    C4]  ffff888123969180: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  206.736048][    C4]  ffff888123969200: fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc
[  206.754852][    C4] >ffff888123969280: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[  206.775194][    C4]                                            ^
[  206.790116][    C4]  ffff888123969300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  206.804612][    C4]  ffff888123969380: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
[  206.814985][    C4] ==================================================================
[  206.826314][ T2887] ==================================================================
[  206.840778][ T2887] BUG: KASAN: double-free or invalid-free in kfree+0xd7/0x560
[  206.859279][ T2887]
[  206.867326][ T2887] CPU: 3 PID: 2887 Comm: kworker/3:3 Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  206.886178][ T2887] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  206.900235][ T2887] Workqueue: events delayed_fput
[  206.909632][ T2887] Call Trace:
[  206.916561][ T2887]  dump_stack+0xa1/0xd3
[  206.926338][ T2887]  print_address_description.constprop.0+0x1d/0x140
[  206.937917][ T2887]  ? kfree+0xd7/0x560
[  206.946838][ T2887]  kasan_report_invalid_free+0x56/0x80
[  206.963194][ T2887]  ? kfree+0xd7/0x560
[  206.975693][ T2887]  __kasan_slab_free+0x110/0x140
[  206.986639][ T2887]  slab_free_freelist_hook+0x8c/0x1c0
[  206.997208][ T2887]  kfree+0xd7/0x560
[  207.003641][ T2887]  ? skb_release_data+0x41c/0x520
[  207.012515][ T2887]  ? free_unref_page+0x145/0x1e0
[  207.012533][ T2887]  skb_release_data+0x41c/0x520
[  207.012542][ T2887]  ? skb_rbtree_purge+0x74/0xa0
[  207.012548][ T2887]  kfree_skb+0x105/0x240
[  207.012552][ T2887]  skb_rbtree_purge+0x74/0xa0
[  207.012557][ T2887]  tcp_v4_destroy_sock+0xaf/0x5a0
[  207.012565][ T2887]  inet_csk_destroy_sock+0x16a/0x300
M [K[    [0;3[  207.012570][ T2887]  __tcp_close+0xaa4/0xf60
[  207.012576][ T2887]  tcp_close+0x25/0xa0
1m*[0;1;31m*[0[  207.076522][ T2887]  inet_release+0x104/0x240
[  207.085117][ T2887]  __sock_release+0xcc/0x280
m] (1 of 2) A st[  207.092466][ T2887]  sock_close+0x18/0x20
[  207.100508][ T2887]  __fput+0x1b7/0x780
op job is runnin[  207.106611][ T2887]  delayed_fput+0x54/0x80
[  207.114453][ T2887]  process_one_work+0x801/0x1360
g for … 11 of [  207.122417][ T2887]  ? pwq_dec_nr_in_flight+0x300/0x300
[  207.135671][ T2887]  ? do_raw_spin_lock+0x13e/0x280
user hakon (35s [  207.151167][ T2887]  ? __kasan_check_read+0x11/0x20
[  207.173220][ T2887]  ? lock_acquire+0x4b/0x260
/ 1min 29s)
[  207.190651][ T2887]  worker_thread+0x54f/0xf40
[  207.212411][ T2887]  ? process_one_work+0x1360/0x1360
[  207.226712][ T2887]  kthread+0x32d/0x400
[  207.234266][ T2887]  ? __kthread_bind_mask+0xa0/0xa0
[  207.243513][ T2887]  ret_from_fork+0x22/0x30
[  207.251578][ T2887]
[  207.255697][ T2887] Allocated by task 0:
[  207.263882][ T2887] (stack is not available)
[  207.271761][ T2887]
[  207.276634][ T2887] Freed by task 0:
[  207.283017][ T2887]  kasan_save_stack+0x23/0x60
[  207.290966][ T2887]  kasan_set_track+0x20/0x40
[  207.297224][ T2887]  kasan_set_free_info+0x24/0x40
[  207.303949][ T2887]  __kasan_slab_free+0xf1/0x140
[  207.311016][ T2887]  slab_free_freelist_hook+0x8c/0x1c0
[  207.318415][ T2887]  kfree+0xd7/0x560
[  207.323738][ T2887]  skb_release_data+0x41c/0x520
[  207.330683][ T2887]  kfree_skb+0x105/0x240
[  207.336971][ T2887]  xennet_poll+0x1ab2/0x3c80
[  207.343287][ T2887]  __napi_poll+0xb2/0x4c0
[  207.348569][ T2887]  net_rx_action+0x2d7/0xa20
[  207.356261][ T2887]  __do_softirq+0x1cd/0x66d
[  207.363552][ T2887]
[  207.367077][ T2887] The buggy address belongs to the object at ffff888110428800
[  207.367077][ T2887]  which belongs to the cache kmalloc-1k of size 1024
[  207.397251][ T2887] The buggy address is located 0 bytes inside of
[  207.397251][ T2887]  1024-byte region [ffff888110428800, ffff888110428c00)
[  207.418402][ T2887] The buggy address belongs to the page:
[  207.425555][ T2887] page:00000000fb9805c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x110428
[  207.439438][ T2887] head:00000000fb9805c0 order:3 compound_mapcount:0 compound_pincount:0
[  207.450995][ T2887] flags: 0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  207.462554][ T2887] raw: 02fffc0000010200 dead000000000100 dead000000000122 ffff888100042dc0
[  207.476170][ T2887] raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
[  207.491631][ T2887] page dumped because: kasan: bad access detected
[  207.501562][ T2887]
[  207.505434][ T2887] Memory state around the buggy address:
[  207.513976][ T2887]  ffff888110428700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  207.529172][ T2887]  ffff888110428780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  207.545443][ T2887] >ffff888110428800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  207.557475][ T2887]                    ^
[  207.565183][ T2887]  ffff888110428880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  207.589724][ T2887]  ffff888110428900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  207.612955][ T2887] ==================================================================
[  207.629370][ T2887] ==================================================================
[  207.646407][ T2887] BUG: KASAN: double-free or invalid-free in kmem_cache_free+0x10b/0x520
[  207.660545][ T2887]
[  207.663729][ T2887] CPU: 3 PID: 2887 Comm: kworker/3:3 Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  207.677685][ T2887] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 06/12/2021
[  207.688497][ T2887] Workqueue: events delayed_fput
[  207.696352][ T2887] Call Trace:
[  207.702105][ T2887]  dump_stack+0xa1/0xd3
[  207.708960][ T2887] print_address_description.constprop.0+0x1d/0x140
[  207.719059][ T2887]  ? kmem_cache_free+0x10b/0x520
[  207.727435][ T2887]  kasan_report_invalid_free+0x56/0x80
[  207.735863][ T2887]  ? kmem_cache_free+0x10b/0x520
[  207.743762][ T2887]  __kasan_slab_free+0x110/0x140
[  207.753399][ T2887]  slab_free_freelist_hook+0x8c/0x1c0
[  207.769581][ T2887]  ? skb_rbtree_purge+0x74/0xa0
[  207.787841][ T2887]  kmem_cache_free+0x10b/0x520
[  207.802480][ T2887]  ? kfree_skbmem+0x9a/0x140
[  207.813977][ T2887]  ? skb_rbtree_purge+0x74/0xa0
[  207.826233][ T2887]  kfree_skbmem+0x9a/0x140
[  207.832977][ T2887]  kfree_skb+0x10d/0x240
[  207.840241][ T2887]  skb_rbtree_purge+0x74/0xa0
[  207.847185][ T2887]  tcp_v4_destroy_sock+0xaf/0x5a0
[  207.855111][ T2887]  inet_csk_destroy_sock+0x16a/0x300
[  207.862743][ T2887]  __tcp_close+0xaa4/0xf60
[  207.869278][ T2887]  tcp_close+0x25/0xa0
[  207.875585][ T2887]  inet_release+0x104/0x240
[  207.882238][ T2887]  __sock_release+0xcc/0x280
[  207.891526][ T2887]  sock_close+0x18/0x20
[  207.899190][ T2887]  __fput+0x1b7/0x780
[  207.907568][ T2887]  delayed_fput+0x54/0x80
[  207.916222][ T2887]  process_one_work+0x801/0x1360
[  207.927573][ T2887]  ? pwq_dec_nr_in_flight+0x300/0x300
[  207.938854][ T2887]  ? do_raw_spin_lock+0x13e/0x280
[  207.948172][ T2887]  ? __kasan_check_read+0x11/0x20
[  207.959240][ T2887]  ? lock_acquire+0x4b/0x260
[  207.967970][ T2887]  worker_thread+0x54f/0xf40
[  207.978365][ T2887]  ? process_one_work+0x1360/0x1360
[  207.987930][ T2887]  kthread+0x32d/0x400
[  207.996026][ T2887]  ? __kthread_bind_mask+0xa0/0xa0
[  208.005603][ T2887]  ret_from_fork+0x22/0x30
[  208.012446][ T2887]
[  208.015750][ T2887] Allocated by task 147:
[  208.022479][ T2887]  kasan_save_stack+0x23/0x60
[  208.029369][ T2887]  __kasan_slab_alloc+0x68/0x80
[  208.035575][ T2887]  kmem_cache_alloc_node+0x242/0x380
[  208.042207][ T2887]  __alloc_skb+0x156/0x280
[  208.049395][ T2887]  __netdev_alloc_skb+0x46/0x320
[  208.059450][ T2887]  xennet_alloc_rx_buffers+0x237/0xae0
[  208.070396][ T2887]  netback_changed+0x1c9a/0x28a0
[  208.080144][ T2887]  xenbus_otherend_changed+0x140/0x260
[  208.094503][ T2887]  backend_changed+0x13/0x20
[  208.105084][ T2887]  xenwatch_thread+0x224/0x480
[  208.113857][ T2887]  kthread+0x32d/0x400
[  208.122116][ T2887]  ret_from_fork+0x22/0x30
[  208.129759][ T2887]
[  208.133534][ T2887] Freed by task 0:
[  208.140881][ T2887] (stack is not available)
[  208.148237][ T2887]
[  208.156239][ T2887] The buggy address belongs to the object at 
ffff888110414280
[  208.156239][ T2887]  which belongs to the cache skbuff_head_cache of 
size 216
[  208.198610][ T2887] The buggy address is located 0 bytes inside of
[  208.198610][ T2887]  216-byte region [ffff888110414280, 
ffff888110414358)
[  208.226974][ T2887] The buggy address belongs to the page:
[  208.235679][ T2887] page:0000000079160a0d refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x110414
[  208.252583][ T2887] head:0000000079160a0d order:1 compound_mapcount:0
[  208.262706][ T2887] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  208.276446][ T2887] raw: 02fffc0000010200 ffffea0004411d00 
0000000a0000000a ffff8881002a3e00
[  208.293310][ T2887] raw: 0000000000000000 0000000000190019 
00000001ffffffff 0000000000000000
[  208.308186][ T2887] page dumped because: kasan: bad access detected
[  208.321613][ T2887]
[  208.325986][ T2887] Memory state around the buggy address:
[  208.335481][ T2887]  ffff888110414180: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  208.354158][ T2887]  ffff888110414200: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  208.371078][ T2887] >ffff888110414280: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  208.391859][ T2887]                    ^
[  208.405309][ T2887]  ffff888110414300: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  208.419984][ T2887]  ffff888110414380: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  208.435515][ T2887] 
==================================================================
[  208.449571][    C0] 
==================================================================
[  208.464302][    C0] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  208.476236][    C0]
[  208.479456][    C0] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G B   
W         5.13.0-rc6-x86_64 #1
[  208.496737][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  208.509194][    C0] Call Trace:
[  208.515282][    C0]  <IRQ>
[  208.521392][    C0]  dump_stack+0xa1/0xd3
[  208.529263][    C0] print_address_description.constprop.0+0x1d/0x140
[  208.542219][    C0]  ? kfree+0xd7/0x560
[  208.551125][    C0]  kasan_report_invalid_free+0x56/0x80
[  208.566871][    C0]  ? kfree+0xd7/0x560
[  208.577076][    C0]  __kasan_slab_free+0x110/0x140
[  208.591366][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  208.603354][    C0]  kfree+0xd7/0x560
[  208.610395][    C0]  ? skb_release_data+0x41c/0x520
[  208.618380][    C0]  ? __kasan_check_read+0x11/0x20
[  208.626479][    C0]  skb_release_data+0x41c/0x520
[  208.636025][    C0]  ? xennet_poll+0x15d9/0x3c80
[  208.645979][    C0]  ? xennet_poll+0x15d9/0x3c80
[  208.655275][    C0]  kfree_skb+0x105/0x240
[  208.662297][    C0]  xennet_poll+0x15d9/0x3c80
[  208.670795][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  208.678436][    C0]  ? __kasan_check_read+0x11/0x20
[  208.688315][    C0]  ? __raise_softirq_irqoff+0x36/0xe0
[  208.698421][    C0]  ? __napi_schedule+0x1b5/0x260
[  208.708356][    C0]  ? __kasan_check_read+0x11/0x20
[  208.716840][    C0]  ? lock_release+0xa3/0x820
[  208.726527][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  208.739288][    C0]  ? handle_edge_irq+0x35e/0xb60
[  208.751543][    C0]  ? handle_irq_for_port+0x192/0x4c0
[  208.764200][    C0]  ? __kasan_check_write+0x14/0x20
[  208.777943][    C0]  __napi_poll+0xb2/0x4c0
[  208.787877][    C0]  net_rx_action+0x2d7/0xa20
[  208.797957][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  208.807335][    C0]  ? __kasan_check_read+0x11/0x20
[  208.815899][    C0]  ? lock_acquire+0x4b/0x260
[  208.824622][    C0]  ? __kasan_check_write+0x14/0x20
[  208.832709][    C0]  ? _raw_read_unlock+0x23/0x40
[  208.842626][    C0]  ? __xen_evtchn_do_upcall+0x107/0x180
[  208.851806][    C0]  __do_softirq+0x1cd/0x66d
[  208.859903][    C0]  irq_exit_rcu+0x12c/0x1c0
[  208.867469][    C0]  sysvec_xen_hvm_callback+0x79/0xa0
[  208.877306][    C0]  </IRQ>
[  208.882737][    C0]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  208.899300][    C0] RIP: 0010:native_safe_halt+0xe/0x20
[  208.911350][    C0] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  208.964123][    C0] RSP: 0018:ffffffff84e07ce8 EFLAGS: 00000246
[  208.982138][    C0] RAX: 0000000000004000 RBX: ffff88810122d865 RCX: 
ffffffff83dd94c2
[  208.999803][    C0] RDX: 1ffffffff09c73d0 RSI: 0000000000000008 RDI: 
ffffffff84e39e80
[  209.015915][    C0] RBP: ffffffff84e07cf8 R08: 0000000000000000 R09: 
ffffffff84e39e87
[  209.031761][    C0] R10: fffffbfff09c73d0 R11: 0000000000000000 R12: 
ffffffff84e39e80
[  209.046682][    C0] R13: ffff88810a146000 R14: ffff88810122d864 R15: 
ffffffff8576a6c0
[  209.061586][    C0]  ? acpi_idle_do_entry+0x142/0x1e0
[  209.071461][    C0]  ? acpi_idle_do_entry+0x166/0x1e0
[  209.082887][    C0]  acpi_idle_enter+0x2ca/0x500
[  209.093822][    C0]  ? __kasan_check_write+0x14/0x20
[  209.105379][    C0]  cpuidle_enter_state+0x17c/0xe00
[  209.115312][    C0]  cpuidle_enter+0x4f/0xa0
[  209.128514][    C0]  do_idle+0x407/0x5c0
[  209.145729][    C0]  ? arch_cpu_idle_exit+0x60/0x60
[  209.166679][    C0]  cpu_startup_entry+0x20/0x40
[  209.183899][    C0]  rest_init+0x152/0x187
[  209.194747][    C0]  arch_call_rest_init+0xe/0x1b
[  209.204633][    C0]  start_kernel+0x3be/0x3db
[  209.213562][    C0]  x86_64_start_reservations+0x29/0x2b
[  209.224637][    C0]  x86_64_start_kernel+0x72/0x76
[  209.235052][    C0]  secondary_startup_64_no_verify+0xb0/0xbb
[  209.246555][    C0]
[  209.252426][    C0] Allocated by task 0:
[  209.263944][    C0] (stack is not available)
[  209.274194][    C0]
[  209.278531][    C0] Freed by task 0:
[  209.285685][    C0]  kasan_save_stack+0x23/0x60
[  209.293496][    C0]  kasan_set_track+0x20/0x40
[  209.302369][    C0]  kasan_set_free_info+0x24/0x40
[  209.313356][    C0]  __kasan_slab_free+0xf1/0x140
[  209.328610][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  209.345457][    C0]  kfree+0xd7/0x560
[  209.355449][    C0]  skb_release_data+0x41c/0x520
[  209.366346][    C0]  kfree_skb+0x105/0x240
[  209.376835][    C0]  xennet_poll+0x1ab2/0x3c80
[  209.384617][    C0]  __napi_poll+0xb2/0x4c0
[  209.392353][    C0]  net_rx_action+0x2d7/0xa20
[  209.392366][    C0]  __do_softirq+0x1cd/0x66d
[  209.392371][    C0]
[  209.392373][    C0] The buggy address belongs to the object at 
ffff888134b70800
[  209.392373][    C0]  which belongs to the cache kmalloc-1k of size 1024
[  209.392378][    C0] The buggy address is located 0 bytes inside of
[  209.392378][    C0]  1024-byte region [ffff888134b70800, 
ffff888134b70c00)
[  209.392382][    C0] The buggy address belongs to the page:
[  209.392385][    C0] page:00000000027e9922 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x134b70
M [K[     [0;[  209.392391][    C0] head:00000000027e9922 order:3 
compound_mapcount:0 compound_pincount:0
[  209.392394][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  209.392403][    C0] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  209.392406][    C0] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  209.392408][    C0] page dumped because: kasan: bad access detected
[  209.392410][    C0]
[  209.392411][    C0] Memory state around the buggy address:
[  209.392413][    C0]  ffff888134b70700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  209.392416][    C0]  ffff888134b70780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  209.392418][    C0] >ffff888134b70800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  209.392420][    C0]                    ^
[  209.392422][    C0]  ffff888134b70880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  209.392424][    C0]  ffff888134b70900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  209.392427][    C0] 
==================================================================
[  209.392435][    C2] 
==================================================================
[  209.723211][    C2] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  209.735839][    C2]
[  209.739291][    C2] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G B   
W         5.13.0-rc6-x86_64 #1
[  209.753237][    C2] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  209.763595][    C2] Call Trace:
[  209.768509][    C2]  <IRQ>
[  209.774166][    C2]  dump_stack+0xa1/0xd3
[  209.780668][    C2] print_address_description.constprop.0+0x1d/0x140
[  209.790619][    C2]  ? kfree+0xd7/0x560
[  209.796664][    C2]  kasan_report_invalid_free+0x56/0x80
[  209.805874][    C2]  ? kfree+0xd7/0x560
[  209.812209][    C2]  __kasan_slab_free+0x110/0x140
[  209.819732][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  209.827333][    C2]  kfree+0xd7/0x560
[  209.832693][    C2]  ? skb_release_data+0x41c/0x520
[  209.840455][    C2]  ? __kasan_check_read+0x11/0x20
[  209.847858][    C2]  skb_release_data+0x41c/0x520
[  209.856438][    C2]  ? xennet_poll+0x15d9/0x3c80
[  209.863296][    C2]  ? xennet_poll+0x15d9/0x3c80
[  209.872193][    C2]  kfree_skb+0x105/0x240
[  209.879332][    C2]  xennet_poll+0x15d9/0x3c80
[  209.886475][    C2]  ? xennet_xdp+0x7e0/0x7e0
[  209.886495][    C2]  ? __kasan_check_read+0x11/0x20
[  209.886506][    C2]  ? __raise_softirq_irqoff+0x36/0xe0
[  209.886517][    C2]  ? __napi_schedule+0x1b5/0x260
[  209.929053][    C2]  ? __kasan_check_read+0x11/0x20
M [K[    [0;3[  209.942487][    C2]  ? lock_release+0xa3/0x820
1m*[0;1;31m*[0[  209.954395][    C2]  ? do_raw_spin_lock+0x13e/0x280
m] (2 of 2) A st[  209.965557][    C2]  ? handle_edge_irq+0x35e/0xb60
[  209.977169][    C2]  ? handle_irq_for_port+0x192/0x4c0
op job is runnin[  209.986360][    C2]  ? __kasan_check_write+0x14/0x20
[  209.995306][    C2]  __napi_poll+0xb2/0x4c0
g for …t Displ[  210.000778][    C2]  net_rx_action+0x2d7/0xa20
ay Manager (38s [  210.009730][    C2]  ? napi_threaded_poll+0x2c0/0x2c0
[  210.019173][    C2]  ? __kasan_check_read+0x11/0x20
/ 1min 30s)
[  210.026064][    C2]  ? lock_acquire+0x4b/0x260
[  210.034041][    C2]  ? __kasan_check_write+0x14/0x20
[  210.041450][    C2]  ? _raw_read_unlock+0x23/0x40
[  210.047673][    C2]  ? __xen_evtchn_do_upcall+0x107/0x180
[  210.056466][    C2]  __do_softirq+0x1cd/0x66d
[  210.062785][    C2]  irq_exit_rcu+0x12c/0x1c0
[  210.069726][    C2]  sysvec_xen_hvm_callback+0x79/0xa0
[  210.078151][    C2]  </IRQ>
[  210.082531][    C2]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  210.091301][    C2] RIP: 0010:native_safe_halt+0xe/0x20
[  210.098756][    C2] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  210.140907][    C2] RSP: 0018:ffffc90000127cf0 EFLAGS: 00000246
[  210.157122][    C2] RAX: 0000000000004000 RBX: ffff88810122b865 RCX: 
ffffffff83dd94c2
[  210.171291][    C2] RDX: 1ffff1102011ea50 RSI: 0000000000000008 RDI: 
ffff8881008f5280
[  210.185050][    C2] RBP: ffffc90000127d00 R08: 0000000000000000 R09: 
ffff8881008f5287
[  210.197207][    C2] R10: ffffed102011ea50 R11: 0000000000000000 R12: 
ffff8881008f5280
[  210.209698][    C2] R13: ffff88810a142000 R14: ffff88810122b864 R15: 
ffffffff8576a6c0
[  210.221749][    C2]  ? acpi_idle_do_entry+0x142/0x1e0
[  210.229396][    C2]  ? acpi_idle_do_entry+0x166/0x1e0
[  210.237494][    C2]  acpi_idle_enter+0x2ca/0x500
[  210.244663][    C2]  ? __kasan_check_write+0x14/0x20
[  210.253335][    C2]  cpuidle_enter_state+0x17c/0xe00
[  210.261603][    C2]  cpuidle_enter+0x4f/0xa0
[  210.269706][    C2]  do_idle+0x407/0x5c0
[  210.276449][    C2]  ? arch_cpu_idle_exit+0x60/0x60
[  210.285936][    C2]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  210.297273][    C2]  ? complete+0x5c/0x80
[  210.306846][    C2]  cpu_startup_entry+0x20/0x40
[  210.315490][    C2]  start_secondary+0x27f/0x360
[  210.325569][    C2]  ? set_cpu_sibling_map+0x2040/0x2040
[  210.335835][    C2]  secondary_startup_64_no_verify+0xb0/0xbb
[  210.345022][    C2]
[  210.348214][    C2] Allocated by task 0:
[  210.354275][    C2] (stack is not available)
[  210.360984][    C2]
[  210.364400][    C2] Freed by task 0:
[  210.370640][    C2]  kasan_save_stack+0x23/0x60
[  210.377930][    C2]  kasan_set_track+0x20/0x40
[  210.384664][    C2]  kasan_set_free_info+0x24/0x40
[  210.392901][    C2]  __kasan_slab_free+0xf1/0x140
[  210.392909][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  210.392913][    C2]  kfree+0xd7/0x560
[  210.392917][    C2]  skb_release_data+0x41c/0x520
[  210.392924][    C2]  kfree_skb+0x105/0x240
[  210.392928][    C2]  xennet_poll+0x1ab2/0x3c80
M [K[   [0;31[  210.436675][    C2]  __napi_poll+0xb2/0x4c0
m*[0;1;31m*[0m[  210.445183][    C2]  net_rx_action+0x2d7/0xa20
[  210.455633][    C2]  __do_softirq+0x1cd/0x66d
[0;31m*[0m] (2[  210.462739][    C2]
  of 2) A stop jo[  210.470757][    C2] The buggy address belongs to the 
object at ffff888125adc800
[  210.470757][    C2]  which belongs to the cache kmalloc-1k of size 1024
[  210.503650][    C2] The buggy address is located 0 bytes inside of
[  210.503650][    C2]  1024-byte region [ffff888125adc800, 
ffff888125adcc00)
[  210.528221][    C2] The buggy address belongs to the page:
[  210.539424][    C2] page:00000000bf52ba57 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x125ad8
[  210.558012][    C2] head:00000000bf52ba57 order:3 compound_mapcount:0 
compound_pincount:0
[  210.574544][    C2] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  210.590226][    C2] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  210.604743][    C2] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  210.617840][    C2] page dumped because: kasan: bad access detected
[  210.627837][    C2]
[  210.631220][    C2] Memory state around the buggy address:
[  210.640094][    C2]  ffff888125adc700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  210.652858][    C2]  ffff888125adc780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  210.674645][    C2] >ffff888125adc800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  210.694027][    C2]                    ^
[  210.704353][    C2]  ffff888125adc880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  210.720622][    C2]  ffff888125adc900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  210.732779][    C2] 
==================================================================
[  210.746566][    C0] 
==================================================================
[  210.761375][    C0] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  210.761388][    C0]
[  210.761391][    C0] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G B   
W         5.13.0-rc6-x86_64 #1
[  210.761396][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  210.761399][    C0] Call Trace:
[  210.761401][    C0]  <IRQ>
[  210.761405][    C0]  dump_stack+0xa1/0xd3
M [K[  [0;31m[  210.761411][    C0] 
print_address_description.constprop.0+0x1d/0x140
[  210.761416][    C0]  ? kmem_cache_free+0x10b/0x520
*[0;1;31m*[0m[  210.761421][    C0] kasan_report_invalid_free+0x56/0x80
[  210.761426][    C0]  ? kmem_cache_free+0x10b/0x520
[0;31m* [0m] (1[  210.874322][    C0] __kasan_slab_free+0x110/0x140
[  210.889200][    C0]  slab_free_freelist_hook+0x8c/0x1c0
  of 2) A stop jo[  210.904751][    C0]  ? xennet_poll+0x15d9/0x3c80
[  210.917858][    C0]  kmem_cache_free+0x10b/0x520
b is running for[  210.925862][    C0]  ? kfree_skbmem+0x9a/0x140
[  210.933250][    C0]  ? xennet_poll+0x15d9/0x3c80
  … 11 of user [  210.939639][    C0]  kfree_skbmem+0x9a/0x140
[  210.946523][    C0]  kfree_skb+0x10d/0x240
hakon (39s / 1mi[  210.953376][    C0]  xennet_poll+0x15d9/0x3c80
n 29s)
[  210.961463][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  210.968085][    C0]  ? __kasan_check_read+0x11/0x20
[  210.974655][    C0]  ? __raise_softirq_irqoff+0x36/0xe0
[  210.981605][    C0]  ? __napi_schedule+0x1b5/0x260
[  210.987743][    C0]  ? __kasan_check_read+0x11/0x20
[  210.993625][    C0]  ? lock_release+0xa3/0x820
[  210.998840][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  211.005792][    C0]  ? handle_edge_irq+0x35e/0xb60
[  211.012005][    C0]  ? handle_irq_for_port+0x192/0x4c0
[  211.018925][    C0]  ? __kasan_check_write+0x14/0x20
[  211.025343][    C0]  __napi_poll+0xb2/0x4c0
[  211.030615][    C0]  net_rx_action+0x2d7/0xa20
[  211.036506][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  211.045235][    C0]  ? __kasan_check_read+0x11/0x20
[  211.056296][    C0]  ? lock_acquire+0x4b/0x260
[  211.066446][    C0]  ? __kasan_check_write+0x14/0x20
[  211.075910][    C0]  ? _raw_read_unlock+0x23/0x40
[  211.084589][    C0]  ? __xen_evtchn_do_upcall+0x107/0x180
[  211.097182][    C0]  __do_softirq+0x1cd/0x66d
[  211.106840][    C0]  irq_exit_rcu+0x12c/0x1c0
[  211.112859][    C0]  sysvec_xen_hvm_callback+0x79/0xa0
[  211.122834][    C0]  </IRQ>
[  211.129663][    C0]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  211.142050][    C0] RIP: 0010:native_safe_halt+0xe/0x20
[  211.142065][    C0] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  211.142070][    C0] RSP: 0018:ffffffff84e07ce8 EFLAGS: 00000246
[  211.142076][    C0] RAX: 0000000000004000 RBX: ffff88810122d865 RCX: 
ffffffff83dd94c2
[  211.142079][    C0] RDX: 1ffffffff09c73d0 RSI: 0000000000000008 RDI: 
ffffffff84e39e80
[  211.142082][    C0] RBP: ffffffff84e07cf8 R08: 0000000000000000 R09: 
ffffffff84e39e87
[  211.142085][    C0] R10: fffffbfff09c73d0 R11: 0000000000000000 R12: 
ffffffff84e39e80
M [K[ [0;31m*[  211.142099][    C0] R13: ffff88810a146000 R14: 
ffff88810122d864 R15: ffffffff8576a6c0
[  211.142106][    C0]  ? acpi_idle_do_entry+0x142/0x1e0
[0;1;31m*[0m[[  211.308124][    C0]  ? acpi_idle_do_entry+0x166/0x1e0
0;31m*  [0m] (1[  211.326171][    C0] acpi_idle_enter+0x2ca/0x500
[  211.336383][    C0]  ? __kasan_check_write+0x14/0x20
  of 2) A stop jo[  211.347603][    C0] cpuidle_enter_state+0x17c/0xe00
[  211.358971][    C0]  cpuidle_enter+0x4f/0xa0
b is running for[  211.368444][    C0]  do_idle+0x407/0x5c0
[  211.378416][    C0]  ? arch_cpu_idle_exit+0x60/0x60
  … 11 of user [  211.390845][    C0]  cpu_startup_entry+0x20/0x40
hakon (39s / 1mi[  211.401724][    C0]  rest_init+0x152/0x187
[  211.412281][    C0]  arch_call_rest_init+0xe/0x1b
n 29s)
[  211.420974][    C0]  start_kernel+0x3be/0x3db
[  211.429753][    C0]  x86_64_start_reservations+0x29/0x2b
[  211.439153][    C0]  x86_64_start_kernel+0x72/0x76
[  211.448251][    C0]  secondary_startup_64_no_verify+0xb0/0xbb
[  211.471472][    C0]
[  211.481961][    C0] Allocated by task 0:
[  211.493371][    C0]  kasan_save_stack+0x23/0x60
[  211.505202][    C0]  __kasan_slab_alloc+0x68/0x80
[  211.514553][    C0]  kmem_cache_alloc_node+0x242/0x380
[  211.526945][    C0]  __alloc_skb+0x156/0x280
[  211.534557][    C0]  __netdev_alloc_skb+0x46/0x320
[  211.543107][    C0]  xennet_alloc_rx_buffers+0x237/0xae0
[  211.552982][    C0]  xennet_poll+0x1e8b/0x3c80
[  211.560710][    C0]  __napi_poll+0xb2/0x4c0
[  211.567319][    C0]  net_rx_action+0x2d7/0xa20
[  211.575739][    C0]  __do_softirq+0x1cd/0x66d
[  211.582454][    C0]
[  211.587943][    C0] Freed by task 0:
[  211.597660][    C0] (stack is not available)
[  211.606923][    C0]
[  211.610236][    C0] The buggy address belongs to the object at 
ffff888125a90dc0
[  211.610236][    C0]  which belongs to the cache skbuff_head_cache of 
size 216
[  211.630373][    C0] The buggy address is located 0 bytes inside of
[  211.630373][    C0]  216-byte region [ffff888125a90dc0, 
ffff888125a90e98)
[  211.654078][    C0] The buggy address belongs to the page:
[  211.654085][    C0] page:0000000052c94be1 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x125a90
[  211.654094][    C0] head:0000000052c94be1 order:1 compound_mapcount:0
[  211.654097][    C0] memcg:ffff8881115b4401
[  211.654099][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  211.654108][    C0] raw: 02fffc0000010200 0000000000000000 
0000000c00000001 ffff8881002a3e00
[  211.654112][    C0] raw: 0000000000000000 0000000000190019 
00000001ffffffff ffff8881115b4401
M [K[[0;31m*[  211.654113][    C0] page dumped because: kasan: bad 
access detected
[0;1;31m*[0m[0[  211.654116][    C0]
;31m*   [0m] (1[  211.654117][    C0] Memory state around the buggy 
address:
[  211.654120][    C0]  ffff888125a90c80: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  211.654123][    C0]  ffff888125a90d00: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
  of 2) A stop jo[  211.654125][    C0] >ffff888125a90d80: fc fc fc fc 
fc fc fc fc fa fb fb fb fb fb fb fb
b is running for[  211.654127][ 
C0]                                            ^
[  211.654130][    C0]  ffff888125a90e00: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
  … 11 of user [  211.654133][    C0]  ffff888125a90e80: fb fb fb fc fc 
fc fc fc fc fc fc fc fc fc fc fc
[  211.654135][    C0] 
==================================================================
[  211.654175][    C2] 
==================================================================
[  211.945630][    C2] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  211.961624][    C2]
[  211.965593][    C2] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G B   
W         5.13.0-rc6-x86_64 #1
[  211.983256][    C2] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  211.995655][    C2] Call Trace:
[  212.000794][    C2]  <IRQ>
[  212.005346][    C2]  dump_stack+0xa1/0xd3
[  212.011845][    C2] print_address_description.constprop.0+0x1d/0x140
[  212.023189][    C2]  ? kmem_cache_free+0x10b/0x520
[  212.030709][    C2]  kasan_report_invalid_free+0x56/0x80
[  212.039717][    C2]  ? kmem_cache_free+0x10b/0x520
[  212.047575][    C2]  __kasan_slab_free+0x110/0x140
[  212.056328][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  212.065484][    C2]  ? xennet_poll+0x15d9/0x3c80
[  212.074590][    C2]  kmem_cache_free+0x10b/0x520
[  212.081702][    C2]  ? kfree_skbmem+0x9a/0x140
[  212.091997][    C2]  ? xennet_poll+0x15d9/0x3c80
[  212.105711][    C2]  kfree_skbmem+0x9a/0x140
[  212.114895][    C2]  kfree_skb+0x10d/0x240
[  212.121485][    C2]  xennet_poll+0x15d9/0x3c80
[  212.130639][    C2]  ? xennet_xdp+0x7e0/0x7e0
[  212.138735][    C2]  ? __kasan_check_read+0x11/0x20
[  212.138758][    C2]  ? __raise_softirq_irqoff+0x36/0xe0
[  212.138768][    C2]  ? __napi_schedule+0x1b5/0x260
[  212.138782][    C2]  ? __kasan_check_read+0x11/0x20
[  212.175834][    C2]  ? lock_release+0xa3/0x820
M [K[[0;1;31m[  212.184431][    C2]  ? do_raw_spin_lock+0x13e/0x280
*[0m[0;31m*   [  212.194290][    C2]  ? handle_edge_irq+0x35e/0xb60
  [0m] (2 of 2) [  212.204038][    C2]  ? handle_irq_for_port+0x192/0x4c0
[  212.215209][    C2]  ? __kasan_check_write+0x14/0x20
A stop job is ru[  212.223033][    C2]  __napi_poll+0xb2/0x4c0
[  212.231220][    C2]  net_rx_action+0x2d7/0xa20
nning for …t D[  212.240755][    C2]  ? napi_threaded_poll+0x2c0/0x2c0
isplay Manager ([  212.256030][    C2]  ? __kasan_check_read+0x11/0x20
[  212.274202][    C2]  ? lock_acquire+0x4b/0x260
41s / 1min 30s) [  212.288779][    C2]  ? __kasan_check_write+0x14/0x20

[  212.304377][    C2]  ? _raw_read_unlock+0x23/0x40
[  212.315612][    C2]  ? __xen_evtchn_do_upcall+0x107/0x180
[  212.325949][    C2]  __do_softirq+0x1cd/0x66d
[  212.332525][    C2]  irq_exit_rcu+0x12c/0x1c0
[  212.340302][    C2]  sysvec_xen_hvm_callback+0x79/0xa0
[  212.348774][    C2]  </IRQ>
[  212.354141][    C2]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  212.363985][    C2] RIP: 0010:native_safe_halt+0xe/0x20
[  212.373586][    C2] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  212.407196][    C2] RSP: 0018:ffffc90000127cf0 EFLAGS: 00000246
[  212.417979][    C2] RAX: 0000000000004000 RBX: ffff88810122b865 RCX: 
ffffffff83dd94c2
[  212.431089][    C2] RDX: 1ffff1102011ea50 RSI: 0000000000000008 RDI: 
ffff8881008f5280
[  212.449669][    C2] RBP: ffffc90000127d00 R08: 0000000000000000 R09: 
ffff8881008f5287
[  212.472308][    C2] R10: ffffed102011ea50 R11: 0000000000000000 R12: 
ffff8881008f5280
[  212.492965][    C2] R13: ffff88810a142000 R14: ffff88810122b864 R15: 
ffffffff8576a6c0
[  212.510149][    C2]  ? acpi_idle_do_entry+0x142/0x1e0
[  212.519161][    C2]  ? acpi_idle_do_entry+0x166/0x1e0
[  212.527234][    C2]  acpi_idle_enter+0x2ca/0x500
[  212.535328][    C2]  ? __kasan_check_write+0x14/0x20
[  212.543623][    C2]  cpuidle_enter_state+0x17c/0xe00
[  212.552231][    C2]  cpuidle_enter+0x4f/0xa0
[  212.560965][    C2]  do_idle+0x407/0x5c0
[  212.567318][    C2]  ? arch_cpu_idle_exit+0x60/0x60
[  212.574826][    C2]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  212.582124][    C2]  ? complete+0x5c/0x80
[  212.589136][    C2]  cpu_startup_entry+0x20/0x40
[  212.597135][    C2]  start_secondary+0x27f/0x360
[  212.604838][    C2]  ? set_cpu_sibling_map+0x2040/0x2040
[  212.613401][    C2]  secondary_startup_64_no_verify+0xb0/0xbb
[  212.629384][    C2]
[  212.634899][    C2] Allocated by task 0:
[  212.645849][    C2]  kasan_save_stack+0x23/0x60
[  212.645876][    C2]  __kasan_slab_alloc+0x68/0x80
[  212.645880][    C2]  kmem_cache_alloc_node+0x242/0x380
[  212.645885][    C2]  __alloc_skb+0x156/0x280
[  212.645894][    C2]  __netdev_alloc_skb+0x46/0x320
[  212.645899][    C2]  xennet_alloc_rx_buffers+0x237/0xae0
[  212.645907][    C2]  xennet_poll+0x1e8b/0x3c80
M [K[[0m[0;3[  212.645911][    C2]  __napi_poll+0xb2/0x4c0
[  212.645918][    C2]  net_rx_action+0x2d7/0xa20
1m*     [0m] (2[  212.645921][    C2]  __do_softirq+0x1cd/0x66d
[  212.756834][    C2]
[  212.762513][    C2] Freed by task 0:
[  212.773864][    C2] (stack is not available)
[  212.783526][    C2]
[  212.791849][    C2] The buggy address belongs to the object at 
ffff888126662f00
[  212.791849][    C2]  which belongs to the cache skbuff_head_cache of 
size 216
[  212.828293][    C2] The buggy address is located 0 bytes inside of
[  212.828293][    C2]  216-byte region [ffff888126662f00, 
ffff888126662fd8)
[  212.869431][    C2] The buggy address belongs to the page:
[  212.879568][    C2] page:00000000dd43c559 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x126662
[  212.904761][    C2] head:00000000dd43c559 order:1 compound_mapcount:0
[  212.916950][    C2] memcg:ffff88811066a601
[  212.926878][    C2] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  212.941169][    C2] raw: 02fffc0000010200 0000000000000000 
0000000300000001 ffff8881002a3e00
[  212.954864][    C2] raw: 0000000000000000 0000000000190019 
00000001ffffffff ffff88811066a601
[  212.969455][    C2] page dumped because: kasan: bad access detected
[  212.979424][    C2]
[  212.983332][    C2] Memory state around the buggy address:
[  212.992570][    C2]  ffff888126662e00: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  213.011597][    C2]  ffff888126662e80: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  213.030020][    C2] >ffff888126662f00: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  213.047418][    C2]                    ^
[  213.056353][    C2]  ffff888126662f80: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  213.066534][    C2]  ffff888126663000: fc fc fc fc fc fc fc fc 00 00 
00 00 00 00 00 00
[  213.078057][    C2] 
==================================================================
[  213.089615][    C0] 
==================================================================
[  213.107373][    C0] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  213.119450][    C0]
[  213.125407][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  213.150160][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  213.150167][    C0] Call Trace:
[  213.150174][    C0]  dump_stack+0xa1/0xd3
[  213.150184][    C0] print_address_description.constprop.0+0x1d/0x140
[  213.150191][    C0]  ? kfree+0xd7/0x560
[  213.150196][    C0]  kasan_report_invalid_free+0x56/0x80
[  213.150200][    C0]  ? kfree+0xd7/0x560
M [K[[0;1;31m[  213.150204][    C0] __kasan_slab_free+0x110/0x140
[  213.150209][    C0]  slab_free_freelist_hook+0x8c/0x1c0
*[0m[0;31m*   [  213.150214][    C0]  kfree+0xd7/0x560
[  213.150218][    C0]  ? skb_release_data+0x41c/0x520
  [0m] (2 of 2) [  213.150226][    C0] skb_release_data+0x41c/0x520
[  213.150231][    C0]  ? icmpv6_rcv+0x477/0xda0
A stop job is ru[  213.150238][    C0]  ? icmpv6_rcv+0x477/0xda0
[  213.310846][    C0]  kfree_skb+0x105/0x240
nning for …t D[  213.320810][    C0]  icmpv6_rcv+0x477/0xda0
isplay Manager ([  213.329854][    C0] ip6_protocol_deliver_rcu+0xaa6/0xfe0
[  213.340642][    C0]  ip6_input+0x200/0x260
42s / 1min 30s) [  213.347584][    C0]  ? ip6_input+0x1d1/0x260
[  213.357391][    C0]  ? ip6_input_finish+0x80/0x80

[  213.365347][    C0]  ? tcp_v4_early_demux+0x571/0x840
[  213.374679][    C0]  ip6_sublist_rcv_finish+0x9e/0x480
[  213.382025][    C0]  ip6_sublist_rcv+0x64b/0xba0
[  213.391069][    C0]  ? ip6_sublist_rcv_finish+0x480/0x480
[  213.405589][    C0]  ? __kasan_check_read+0x11/0x20
[  213.416755][    C0]  ? ip6_rcv_core+0xa3f/0x16a0
[  213.426805][    C0]  ipv6_list_rcv+0x2e3/0x460
[  213.434481][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  213.447714][    C0]  ? ip6_sublist_rcv+0xba0/0xba0
[  213.457451][    C0]  __netif_receive_skb_list_core+0x184/0xa20
[  213.464703][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  213.475906][    C0]  ? __kasan_check_read+0x11/0x20
[  213.482374][    C0]  ? lock_acquire+0x4b/0x260
[  213.488761][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  213.496362][    C0] netif_receive_skb_list_internal+0x616/0xd80
[  213.506817][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  213.515250][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  213.524133][    C0]  ? __kasan_check_write+0x14/0x20
[  213.530159][    C0]  ? napi_gro_flush+0x298/0x3c0
[  213.535922][    C0]  napi_complete_done+0x189/0x600
[  213.542171][    C0]  xennet_poll+0x2447/0x3c80
[  213.548171][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  213.553923][    C0]  ? __kasan_check_read+0x11/0x20
[  213.561362][    C0]  ? lock_release+0xa3/0x820
[  213.567346][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  213.573431][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  213.579499][    C0]  ? __kasan_check_read+0x11/0x20
[  213.586116][    C0]  ? lock_release+0xa3/0x820
[  213.591801][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  213.599418][    C0]  __napi_poll+0xb2/0x4c0
[  213.611864][    C0]  net_rx_action+0x2d7/0xa20
[  213.619909][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  213.629818][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  213.639803][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  213.639820][    C0]  __do_softirq+0x1cd/0x66d
[  213.639827][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  213.639834][    C0]  run_ksoftirqd+0x2b/0x40
[  213.639839][    C0]  smpboot_thread_fn+0x2fb/0x6e0
[  213.639846][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  213.639850][    C0]  ? __kthread_parkme+0x8d/0x120
M [K[[0;31m*[  213.639855][    C0]  ? 
smpboot_register_percpu_thread+0x360/0x360
[  213.639859][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  213.639864][    C0]  kthread+0x32d/0x400
[  213.639871][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  213.639876][    C0]  ret_from_fork+0x22/0x30
[  213.639886][    C0]
[  213.639889][    C0] Allocated by task 0:
[  213.742620][    C0] (stack is not available)
[  213.750590][    C0]
[  213.756686][    C0] Freed by task 0:
[  213.762345][    C0]  kasan_save_stack+0x23/0x60
[  213.769826][    C0]  kasan_set_track+0x20/0x40
[  213.779170][    C0]  kasan_set_free_info+0x24/0x40
[  213.794646][    C0]  __kasan_slab_free+0xf1/0x140
[  213.807720][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  213.827148][    C0]  kfree+0xd7/0x560
[  213.837365][    C0]  skb_release_data+0x41c/0x520
[  213.845656][    C0]  kfree_skb+0x105/0x240
[  213.852281][    C0]  xennet_poll+0x1ab2/0x3c80
[  213.862540][    C0]  __napi_poll+0xb2/0x4c0
[  213.869496][    C0]  net_rx_action+0x2d7/0xa20
[  213.877186][    C0]  __do_softirq+0x1cd/0x66d
[  213.885465][    C0]
[  213.891111][    C0] The buggy address belongs to the object at 
ffff888126d43000
[  213.891111][    C0]  which belongs to the cache kmalloc-1k of size 1024
[  213.918845][    C0] The buggy address is located 0 bytes inside of
[  213.918845][    C0]  1024-byte region [ffff888126d43000, 
ffff888126d43400)
[  213.944923][    C0] The buggy address belongs to the page:
[  213.956132][    C0] page:000000005fd2a6d8 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x126d40
[  213.973591][    C0] head:000000005fd2a6d8 order:3 compound_mapcount:0 
compound_pincount:0
[  213.987881][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  214.001677][    C0] raw: 02fffc0000010200 0000000000000000 
0000000200000001 ffff888100042dc0
[  214.018218][    C0] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  214.038729][    C0] page dumped because: kasan: bad access detected
[  214.050148][    C0]
[  214.053014][    C0] Memory state around the buggy address:
[  214.060651][    C0]  ffff888126d42f00: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  214.071773][    C0]  ffff888126d42f80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  214.084691][    C0] >ffff888126d43000: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  214.097836][    C0]                    ^
[  214.103556][    C0]  ffff888126d43080: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  214.117473][    C0]  ffff888126d43100: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  214.129253][    C0] 
==================================================================
[  214.140478][    C2] 
==================================================================
[  214.140503][    C2] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  214.170341][    C2]
M [K[ [0;31m*[  214.174510][    C2] CPU: 2 PID: 0 Comm: swapper/2 
Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  214.193822][    C2] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  214.215513][    C2] Call Trace:
[  214.228707][    C2]  <IRQ>
[  214.241152][    C2]  dump_stack+0xa1/0xd3
[  214.249599][    C2]  print_address_description.constprop.0+0x1d/0x140
[  214.265137][    C2]  ? kfree+0xd7/0x560
[  214.274501][    C2]  kasan_report_invalid_free+0x56/0x80
[  214.284193][    C2]  ? kfree+0xd7/0x560
[  214.293567][    C2]  __kasan_slab_free+0x110/0x140
[  214.305209][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  214.315271][    C2]  kfree+0xd7/0x560
[  214.324662][    C2]  ? skb_release_data+0x41c/0x520
[  214.332942][    C2]  ? xennet_poll+0x15d9/0x3c80
[  214.343682][    C2]  ? kmem_cache_free+0x10b/0x520
[  214.351197][    C2]  skb_release_data+0x41c/0x520
[  214.359785][    C2]  ? xennet_poll+0x15d9/0x3c80
[  214.365826][    C2]  ? xennet_poll+0x15d9/0x3c80
[  214.371891][    C2]  kfree_skb+0x105/0x240
[  214.377112][    C2]  xennet_poll+0x15d9/0x3c80
[  214.382620][    C2]  ? xennet_xdp+0x7e0/0x7e0
[  214.388425][    C2]  ? __kasan_check_read+0x11/0x20
[  214.394572][    C2]  ? __raise_softirq_irqoff+0x36/0xe0
[  214.403160][    C2]  ? __napi_schedule+0x1b5/0x260
[  214.410851][    C2]  ? __kasan_check_read+0x11/0x20
[  214.418281][    C2]  ? lock_release+0xa3/0x820
[  214.425596][    C2]  ? do_raw_spin_lock+0x13e/0x280
[  214.432445][    C2]  ? handle_edge_irq+0x35e/0xb60
[  214.442766][    C2]  ? handle_irq_for_port+0x192/0x4c0
[  214.452876][    C2]  ? __kasan_check_write+0x14/0x20
[  214.460186][    C2]  __napi_poll+0xb2/0x4c0
[  214.468318][    C2]  net_rx_action+0x2d7/0xa20
[  214.476943][    C2]  ? napi_threaded_poll+0x2c0/0x2c0
[  214.487543][    C2]  ? __kasan_check_read+0x11/0x20
[  214.494720][    C2]  ? lock_acquire+0x4b/0x260
[  214.500909][    C2]  ? __kasan_check_write+0x14/0x20
[  214.508863][    C2]  ? _raw_read_unlock+0x23/0x40
[  214.515506][    C2]  ? __xen_evtchn_do_upcall+0x107/0x180
[  214.522839][    C2]  __do_softirq+0x1cd/0x66d
[  214.529817][    C2]  irq_exit_rcu+0x12c/0x1c0
[  214.535491][    C2]  sysvec_xen_hvm_callback+0x79/0xa0
[  214.542078][    C2]  </IRQ>
[  214.545845][    C2]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  214.553437][    C2] RIP: 0010:native_safe_halt+0xe/0x20
[  214.562580][    C2] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  214.591969][    C2] RSP: 0018:ffffc90000127cf0 EFLAGS: 00000246
[  214.602248][    C2] RAX: 0000000000004000 RBX: ffff88810122b865 RCX: 
ffffffff83dd94c2
[  214.615196][    C2] RDX: 1ffff1102011ea50 RSI: 0000000000000008 RDI: 
ffff8881008f5280
[  214.628021][    C2] RBP: ffffc90000127d00 R08: 0000000000000000 R09: 
ffff8881008f5287
[  214.640448][    C2] R10: ffffed102011ea50 R11: 0000000000000000 R12: 
ffff8881008f5280
[  214.640456][    C2] R13: ffff88810a142000 R14: ffff88810122b864 R15: 
ffffffff8576a6c0
[  214.640463][    C2]  ? acpi_idle_do_entry+0x142/0x1e0
[  214.640477][    C2]  ? acpi_idle_do_entry+0x166/0x1e0
[  214.640481][    C2]  acpi_idle_enter+0x2ca/0x500
[  214.640487][    C2]  ? __kasan_check_write+0x14/0x20
[  214.640495][    C2]  cpuidle_enter_state+0x17c/0xe00
M [K[  [0;31m[  214.640503][    C2]  cpuidle_enter+0x4f/0xa0
[  214.640508][    C2]  do_idle+0x407/0x5c0
[  214.640513][    C2]  ? arch_cpu_idle_exit+0x60/0x60
[  214.640517][    C2]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  214.640522][    C2]  ? complete+0x5c/0x80
[  214.640527][    C2]  cpu_startup_entry+0x20/0x40
[  214.640530][    C2]  start_secondary+0x27f/0x360
[  214.640537][    C2]  ? set_cpu_sibling_map+0x2040/0x2040
[  214.640543][    C2]  secondary_startup_64_no_verify+0xb0/0xbb
[  214.640553][    C2]
[  214.640556][    C2] Allocated by task 0:
[  214.640558][    C2] (stack is not available)
[  214.640560][    C2]
[  214.640561][    C2] Freed by task 0:
[  214.828228][    C2]  kasan_save_stack+0x23/0x60
[  214.835012][    C2]  kasan_set_track+0x20/0x40
[  214.840676][    C2]  kasan_set_free_info+0x24/0x40
[  214.846752][    C2]  __kasan_slab_free+0xf1/0x140
[  214.852817][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  214.859715][    C2]  kfree+0xd7/0x560
[  214.864312][    C2]  skb_release_data+0x41c/0x520
[  214.871639][    C2]  kfree_skb+0x105/0x240
[  214.877031][    C2]  xennet_poll+0x1ab2/0x3c80
[  214.883335][    C2]  __napi_poll+0xb2/0x4c0
[  214.889487][    C2]  net_rx_action+0x2d7/0xa20
[  214.894961][    C2]  __do_softirq+0x1cd/0x66d
[  214.900203][    C2]
[  214.902965][    C2] The buggy address belongs to the object at 
ffff888125adf800
[  214.902965][    C2]  which belongs to the cache kmalloc-1k of size 1024
[  214.920162][    C2] The buggy address is located 0 bytes inside of
[  214.920162][    C2]  1024-byte region [ffff888125adf800, 
ffff888125adfc00)
[  214.936626][    C2] The buggy address belongs to the page:
[  214.943436][    C2] page:00000000bf52ba57 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x125ad8
[  214.955780][    C2] head:00000000bf52ba57 order:3 compound_mapcount:0 
compound_pincount:0
[  214.965745][    C2] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  214.976188][    C2] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  214.986474][    C2] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  214.996687][    C2] page dumped because: kasan: bad access detected
[  215.004325][    C2]
[  215.007489][    C2] Memory state around the buggy address:
[  215.015918][    C2]  ffff888125adf700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  215.027162][    C2]  ffff888125adf780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  215.043947][    C2] >ffff888125adf800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  215.056172][    C2]                    ^
[  215.063167][    C2]  ffff888125adf880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  215.076950][    C2]  ffff888125adf900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  215.087822][    C2] 
==================================================================
[  215.097640][    C0] 
==================================================================
[  215.122447][    C0] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  215.138627][    C0]
[  215.138632][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  215.138638][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  215.138640][    C0] Call Trace:
[  215.138645][    C0]  dump_stack+0xa1/0xd3
[  215.138654][    C0] print_address_description.constprop.0+0x1d/0x140
[  215.138661][    C0]  ? kmem_cache_free+0x10b/0x520
M [K[   [0;31[  215.138666][    C0] kasan_report_invalid_free+0x56/0x80
[  215.138670][    C0]  ? kmem_cache_free+0x10b/0x520
m*[0;1;31m*[0m[  215.138674][    C0] __kasan_slab_free+0x110/0x140
[  215.138679][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  215.138684][    C0]  ? icmpv6_rcv+0x477/0xda0
[  215.138691][    C0]  kmem_cache_free+0x10b/0x520
[  215.138695][    C0]  ? kfree_skbmem+0x9a/0x140
[  215.138702][    C0]  ? icmpv6_rcv+0x477/0xda0
[  215.307343][    C0]  kfree_skbmem+0x9a/0x140
[  215.315286][    C0]  kfree_skb+0x10d/0x240
[  215.323574][    C0]  icmpv6_rcv+0x477/0xda0
[  215.331952][    C0]  ip6_protocol_deliver_rcu+0xaa6/0xfe0
[  215.342623][    C0]  ip6_input+0x200/0x260
[  215.351255][    C0]  ? ip6_input+0x1d1/0x260
[  215.360782][    C0]  ? ip6_input_finish+0x80/0x80
[  215.368718][    C0]  ? tcp_v4_early_demux+0x571/0x840
[  215.376461][    C0]  ip6_sublist_rcv_finish+0x9e/0x480
[  215.385701][    C0]  ip6_sublist_rcv+0x64b/0xba0
[  215.393017][    C0]  ? ip6_sublist_rcv_finish+0x480/0x480
[  215.402260][    C0]  ? __kasan_check_read+0x11/0x20
[  215.411695][    C0]  ? ip6_rcv_core+0xa3f/0x16a0
[  215.424529][    C0]  ipv6_list_rcv+0x2e3/0x460
[  215.433422][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  215.444570][    C0]  ? ip6_sublist_rcv+0xba0/0xba0
[  215.456256][    C0]  __netif_receive_skb_list_core+0x184/0xa20
[  215.470330][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  215.481119][    C0]  ? __kasan_check_read+0x11/0x20
[  215.492056][    C0]  ? lock_acquire+0x4b/0x260
[  215.499267][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  215.508151][    C0] netif_receive_skb_list_internal+0x616/0xd80
[  215.517243][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  215.526579][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  215.539190][    C0]  ? __kasan_check_write+0x14/0x20
[  215.546170][    C0]  ? napi_gro_flush+0x298/0x3c0
[  215.552659][    C0]  napi_complete_done+0x189/0x600
[  215.558684][    C0]  xennet_poll+0x2447/0x3c80
[  215.564367][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  215.570707][    C0]  ? __kasan_check_read+0x11/0x20
[  215.576965][    C0]  ? lock_release+0xa3/0x820
[  215.582588][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  215.589202][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  215.596146][    C0]  ? __kasan_check_read+0x11/0x20
[  215.606544][    C0]  ? lock_release+0xa3/0x820
[  215.616358][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  215.629194][    C0]  __napi_poll+0xb2/0x4c0
[  215.639002][    C0]  net_rx_action+0x2d7/0xa20
[  215.639015][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  215.639020][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  215.639028][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  215.639035][    C0]  __do_softirq+0x1cd/0x66d
[  215.639040][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  215.639047][    C0]  run_ksoftirqd+0x2b/0x40
M [K[    [0;3[  215.639051][    C0] smpboot_thread_fn+0x2fb/0x6e0
[  215.639057][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
1m*[0;1;31m*[0[  215.639062][    C0]  ? __kthread_parkme+0x8d/0x120
[  215.639067][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  215.639071][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  215.750261][    C0]  kthread+0x32d/0x400
[  215.758696][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  215.767203][    C0]  ret_from_fork+0x22/0x30
[  215.775045][    C0]
[  215.778331][    C0] Allocated by task 0:
[  215.786085][    C0]  kasan_save_stack+0x23/0x60
[  215.799771][    C0]  __kasan_slab_alloc+0x68/0x80
[  215.813570][    C0]  kmem_cache_alloc_node+0x242/0x380
[  215.830973][    C0]  __alloc_skb+0x156/0x280
[  215.845319][    C0]  __netdev_alloc_skb+0x46/0x320
[  215.853729][    C0]  xennet_alloc_rx_buffers+0x237/0xae0
[  215.862762][    C0]  xennet_poll+0x1e8b/0x3c80
[  215.870390][    C0]  __napi_poll+0xb2/0x4c0
[  215.877137][    C0]  net_rx_action+0x2d7/0xa20
[  215.884576][    C0]  __do_softirq+0x1cd/0x66d
[  215.891573][    C0]
[  215.894440][    C0] Freed by task 881776:
[  215.899812][    C0] ------------[ cut here ]------------
[  215.907252][    C0] slab index 2083072 out of bounds (278) for stack 
id ffffc900
[  215.916653][    C0] WARNING: CPU: 0 PID: 12 at lib/stackdepot.c:236 
stack_depot_fetch+0x71/0xa0
[  215.928558][    C0] Modules linked in: xen_front_pgdir_shbuf 
xen_scsifront snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg 
snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd soundcore 
hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd 
intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd 
usbcore [last unloaded: usbip_core]
[  215.971549][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  215.983765][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  215.994491][    C0] RIP: 0010:stack_depot_fetch+0x71/0xa0
[  216.003735][    C0] Code: 48 c1 e0 04 25 f0 3f 00 00 48 01 c2 48 8d 
42 18 48 89 03 48 8b 5d f8 8b 42 0c c9 c3 89 f9 48 c7 c7 48 85 bb 84 e8 
2d bf 3b 01 <0f> 0b 31 c0 48 8b 5d f8 c9 c3 48 8b 5d f8 31 c0 c9 c3 48 
c7 c7 a0
[  216.043019][    C0] RSP: 0018:ffffc900000d6f60 EFLAGS: 00010082
[  216.051129][    C0] RAX: 0000000000000000 RBX: ffffc900000d6f88 RCX: 
0000000000000000
[  216.060758][    C0] RDX: 0000000000000027 RSI: 0000000000000004 RDI: 
fffff5200001adde
[  216.070391][    C0] RBP: ffffc900000d6f78 R08: 0000000000000001 R09: 
ffff8884d382091b
[  216.079961][    C0] R10: ffffed109a704123 R11: 646e692062616c73 R12: 
ffff888125a13b80
[  216.089627][    C0] R13: ffffea0004968480 R14: ffff888125a13b80 R15: 
ffff888125a13c58
[  216.099205][    C0] FS:  0000000000000000(0000) 
GS:ffff8884d3800000(0000) knlGS:0000000000000000
[  216.110109][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  216.119389][    C0] CR2: 000056104ce6e020 CR3: 0000000121a0e002 CR4: 
00000000001706f0
[  216.129292][    C0] Call Trace:
[  216.133376][    C0]  print_stack+0xe/0x1d
[  216.138764][    C0]  print_track+0x22/0x33
[  216.138776][    C0] 
print_address_description.constprop.0.cold+0x3c/0x179
[  216.138781][    C0]  ? kmem_cache_free+0x10b/0x520
[  216.138789][    C0]  kasan_report_invalid_free+0x56/0x80
[  216.138793][    C0]  ? kmem_cache_free+0x10b/0x520
[  216.138797][    C0]  __kasan_slab_free+0x110/0x140
[  216.138802][    C0]  slab_free_freelist_hook+0x8c/0x1c0
M [K[     [0;[  216.138807][    C0]  ? icmpv6_rcv+0x477/0xda0
[  216.138814][    C0]  kmem_cache_free+0x10b/0x520
31m*[0m] (2 of [  216.138819][    C0]  ? kfree_skbmem+0x9a/0x140
[  216.138827][    C0]  ? icmpv6_rcv+0x477/0xda0
[  216.237529][    C0]  kfree_skbmem+0x9a/0x140
[  216.244576][    C0]  kfree_skb+0x10d/0x240
[  216.250113][    C0]  icmpv6_rcv+0x477/0xda0
[  216.257968][    C0]  ip6_protocol_deliver_rcu+0xaa6/0xfe0
[  216.266989][    C0]  ip6_input+0x200/0x260
[  216.273285][    C0]  ? ip6_input+0x1d1/0x260
[  216.279630][    C0]  ? ip6_input_finish+0x80/0x80
[  216.288422][    C0]  ? tcp_v4_early_demux+0x571/0x840
[  216.294400][    C0]  ip6_sublist_rcv_finish+0x9e/0x480
[  216.300849][    C0]  ip6_sublist_rcv+0x64b/0xba0
[  216.309689][    C0]  ? ip6_sublist_rcv_finish+0x480/0x480
[  216.318640][    C0]  ? __kasan_check_read+0x11/0x20
[  216.327501][    C0]  ? ip6_rcv_core+0xa3f/0x16a0
[  216.335494][    C0]  ipv6_list_rcv+0x2e3/0x460
[  216.343953][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  216.356727][    C0]  ? ip6_sublist_rcv+0xba0/0xba0
[  216.364941][    C0]  __netif_receive_skb_list_core+0x184/0xa20
[  216.378967][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  216.394349][    C0]  ? __kasan_check_read+0x11/0x20
[  216.401453][    C0]  ? lock_acquire+0x4b/0x260
[  216.406910][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  216.413867][    C0] netif_receive_skb_list_internal+0x616/0xd80
[  216.421442][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  216.429617][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  216.441183][    C0]  ? __kasan_check_write+0x14/0x20
[  216.450645][    C0]  ? napi_gro_flush+0x298/0x3c0
[  216.461268][    C0]  napi_complete_done+0x189/0x600
[  216.472973][    C0]  xennet_poll+0x2447/0x3c80
[  216.481678][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  216.488874][    C0]  ? __kasan_check_read+0x11/0x20
[  216.497616][    C0]  ? lock_release+0xa3/0x820
[  216.507491][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  216.515381][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  216.527679][    C0]  ? __kasan_check_read+0x11/0x20
[  216.539625][    C0]  ? lock_release+0xa3/0x820
[  216.551138][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  216.559985][    C0]  __napi_poll+0xb2/0x4c0
[  216.567549][    C0]  net_rx_action+0x2d7/0xa20
[  216.575303][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  216.583501][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  216.593207][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  216.601972][    C0]  __do_softirq+0x1cd/0x66d
[  216.608974][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  216.616645][    C0]  run_ksoftirqd+0x2b/0x40
[  216.622231][    C0]  smpboot_thread_fn+0x2fb/0x6e0
[  216.628379][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  216.637151][    C0]  ? __kthread_parkme+0x8d/0x120
[  216.637161][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  216.637166][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  216.637171][    C0]  kthread+0x32d/0x400
[  216.637175][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  216.637180][    C0]  ret_from_fork+0x22/0x30
[  216.637190][    C0] ---[ end trace 3841665df6b692d3 ]---
M [K[    [0;3[  216.637201][    C0] ------------[ cut here ]------------
[  216.637202][    C0] WARNING: CPU: 0 PID: 12 at kernel/stacktrace.c:28 
stack_trace_print+0xf/0x40
1m*[0;1;31m*[0[  216.714523][    C0] Modules linked in: 
xen_front_pgdir_shbuf xen_scsifront snd_hda_codec_hdmi snd_hda_intel 
snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer 
snd soundcore hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd 
intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd 
usbcore [last unloaded: usbip_core]
[  216.789867][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 
Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  216.812451][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  216.829665][    C0] RIP: 0010:stack_trace_print+0xf/0x40
[  216.840786][    C0] Code: 48 8b 55 f0 65 48 2b 14 25 28 00 00 00 75 
06 48 8b 5d f8 c9 c3 e8 e1 3e 84 02 90 0f 1f 44 00 00 48 85 ff 74 05 85 
f6 75 04 c3 <0f> 0b c3 55 48 89 e5 41 56 41 55 41 54 53 e9 66 c8 71 02 
66 66 2e
[  216.888966][    C0] RSP: 0018:ffffc900000d6f80 EFLAGS: 00010046
[  216.902647][    C0] RAX: 0000000000000000 RBX: ffff888125a13b88 RCX: 
0000000000000000
[  216.922758][    C0] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[  216.943070][    C0] RBP: ffffc900000d6f90 R08: 0000000000000001 R09: 
ffff8884d382091b
[  216.958315][    C0] R10: ffffed109a704123 R11: 646e692062616c73 R12: 
ffff888125a13b80
[  216.971693][    C0] R13: ffffea0004968480 R14: ffff888125a13b80 R15: 
ffff888125a13c58
[  216.987211][    C0] FS:  0000000000000000(0000) 
GS:ffff8884d3800000(0000) knlGS:0000000000000000
[  216.999672][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  217.011641][    C0] CR2: 000056104ce6e020 CR3: 0000000121a0e002 CR4: 
00000000001706f0
[  217.023442][    C0] Call Trace:
[  217.027889][    C0]  ? print_stack+0x1b/0x1d
[  217.033606][    C0]  print_track+0x22/0x33
[  217.039151][    C0] 
print_address_description.constprop.0.cold+0x3c/0x179
[  217.048929][    C0]  ? kmem_cache_free+0x10b/0x520
[  217.055699][    C0]  kasan_report_invalid_free+0x56/0x80
[  217.062352][    C0]  ? kmem_cache_free+0x10b/0x520
[  217.069311][    C0]  __kasan_slab_free+0x110/0x140
[  217.075568][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  217.082039][    C0]  ? icmpv6_rcv+0x477/0xda0
[  217.088175][    C0]  kmem_cache_free+0x10b/0x520
[  217.094235][    C0]  ? kfree_skbmem+0x9a/0x140
[  217.100650][    C0]  ? icmpv6_rcv+0x477/0xda0
[  217.106422][    C0]  kfree_skbmem+0x9a/0x140
[  217.116248][    C0]  kfree_skb+0x10d/0x240
[  217.126269][    C0]  icmpv6_rcv+0x477/0xda0
[  217.135788][    C0]  ip6_protocol_deliver_rcu+0xaa6/0xfe0
[  217.135804][    C0]  ip6_input+0x200/0x260
[  217.135807][    C0]  ? ip6_input+0x1d1/0x260
[  217.135811][    C0]  ? ip6_input_finish+0x80/0x80
[  217.135815][    C0]  ? tcp_v4_early_demux+0x571/0x840
[  217.135822][    C0]  ip6_sublist_rcv_finish+0x9e/0x480
[  217.135827][    C0]  ip6_sublist_rcv+0x64b/0xba0
M [K[   [0;31[  217.135832][    C0]  ? 
ip6_sublist_rcv_finish+0x480/0x480
[  217.135836][    C0]  ? __kasan_check_read+0x11/0x20
m*[0;1;31m*[0m[  217.135844][    C0]  ? ip6_rcv_core+0xa3f/0x16a0
[  217.135849][    C0]  ipv6_list_rcv+0x2e3/0x460
[  217.135852][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  217.277878][    C0]  ? ip6_sublist_rcv+0xba0/0xba0
[  217.287550][    C0]  __netif_receive_skb_list_core+0x184/0xa20
[  217.302209][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  217.315924][    C0]  ? __kasan_check_read+0x11/0x20
[  217.327322][    C0]  ? lock_acquire+0x4b/0x260
[  217.337370][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  217.355881][    C0] netif_receive_skb_list_internal+0x616/0xd80
[  217.377431][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  217.390232][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  217.401164][    C0]  ? __kasan_check_write+0x14/0x20
[  217.407921][    C0]  ? napi_gro_flush+0x298/0x3c0
[  217.413740][    C0]  napi_complete_done+0x189/0x600
[  217.419828][    C0]  xennet_poll+0x2447/0x3c80
[  217.425350][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  217.430522][    C0]  ? __kasan_check_read+0x11/0x20
[  217.437388][    C0]  ? lock_release+0xa3/0x820
[  217.443088][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  217.449160][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  217.455182][    C0]  ? __kasan_check_read+0x11/0x20
[  217.461414][    C0]  ? lock_release+0xa3/0x820
[  217.467271][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  217.475102][    C0]  __napi_poll+0xb2/0x4c0
[  217.481098][    C0]  net_rx_action+0x2d7/0xa20
[  217.488627][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  217.495530][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  217.503530][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  217.511310][    C0]  __do_softirq+0x1cd/0x66d
[  217.519280][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  217.530552][    C0]  run_ksoftirqd+0x2b/0x40
[  217.540646][    C0]  smpboot_thread_fn+0x2fb/0x6e0
[  217.549181][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  217.563674][    C0]  ? __kthread_parkme+0x8d/0x120
[  217.572618][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  217.580729][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  217.590732][    C0]  kthread+0x32d/0x400
[  217.596142][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  217.604763][    C0]  ret_from_fork+0x22/0x30
[  217.610662][    C0] ---[ end trace 3841665df6b692d4 ]---
[  217.617982][    C0]
[  217.620935][    C0] The buggy address belongs to the object at 
ffff888125a13b80
[  217.620935][    C0]  which belongs to the cache skbuff_head_cache of 
size 216
[  217.642368][    C0] The buggy address is located 0 bytes inside of
[  217.642368][    C0]  216-byte region [ffff888125a13b80, 
ffff888125a13c58)
[  217.642378][    C0] The buggy address belongs to the page:
[  217.642381][    C0] page:000000004ad1a198 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x125a12
[  217.642387][    C0] head:000000004ad1a198 order:1 compound_mapcount:0
[  217.642390][    C0] memcg:ffff888135827401
[  217.642392][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  217.642401][    C0] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
M [K[  [0;31m[  217.642404][    C0] raw: 0000000000000000 
0000000000190019 00000001ffffffff ffff888135827401
[  217.642407][    C0] page dumped because: kasan: bad access detected
[  217.642409][    C0]
[  217.642410][    C0] Memory state around the buggy address:
[  217.642413][    C0]  ffff888125a13a80: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  217.642415][    C0]  ffff888125a13b00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  217.642417][    C0] >ffff888125a13b80: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  217.642419][    C0]                    ^
[  217.642422][    C0]  ffff888125a13c00: fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc
[  217.642424][    C0]  ffff888125a13c80: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[  217.642426][    C0] 
==================================================================
[  217.642474][    C2] 
==================================================================
[  217.940416][    C2] BUG: KASAN: double-free or invalid-free in 
kmem_cache_free+0x10b/0x520
[  217.964246][    C2]
[  217.970287][    C2] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  217.986547][    C2] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  217.998701][    C2] Call Trace:
[  218.004200][    C2]  <IRQ>
[  218.010125][    C2]  dump_stack+0xa1/0xd3
[  218.017424][    C2] print_address_description.constprop.0+0x1d/0x140
[  218.028711][    C2]  ? kmem_cache_free+0x10b/0x520
[  218.037705][    C2]  kasan_report_invalid_free+0x56/0x80
[  218.047460][    C2]  ? kmem_cache_free+0x10b/0x520
[  218.056377][    C2]  __kasan_slab_free+0x110/0x140
[  218.064648][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  218.074625][    C2]  ? xennet_poll+0x15d9/0x3c80
[  218.083198][    C2]  kmem_cache_free+0x10b/0x520
[  218.091655][    C2]  ? kfree_skbmem+0x9a/0x140
[  218.100065][    C2]  ? xennet_poll+0x15d9/0x3c80
[  218.110259][    C2]  kfree_skbmem+0x9a/0x140
[  218.118567][    C2]  kfree_skb+0x10d/0x240
[  218.129506][    C2]  xennet_poll+0x15d9/0x3c80
[  218.140660][    C2]  ? xennet_xdp+0x7e0/0x7e0
[  218.140676][    C2]  ? __kasan_check_read+0x11/0x20
[  218.140683][    C2]  ? __raise_softirq_irqoff+0x36/0xe0
[  218.140690][    C2]  ? __napi_schedule+0x1b5/0x260
[  218.140699][    C2]  ? __kasan_check_read+0x11/0x20
[  218.140704][    C2]  ? lock_release+0xa3/0x820
[  218.140708][    C2]  ? do_raw_spin_lock+0x13e/0x280
M [K[ [0;31m*[  218.140718][    C2]  ? handle_edge_irq+0x35e/0xb60
[  218.140724][    C2]  ? handle_irq_for_port+0x192/0x4c0
[  218.140731][    C2]  ? __kasan_check_write+0x14/0x20
[  218.140736][    C2]  __napi_poll+0xb2/0x4c0
[  218.140742][    C2]  net_rx_action+0x2d7/0xa20
[  218.140747][    C2]  ? napi_threaded_poll+0x2c0/0x2c0
[  218.140751][    C2]  ? __kasan_check_read+0x11/0x20
[  218.277434][    C2]  ? lock_acquire+0x4b/0x260
[  218.284846][    C2]  ? __kasan_check_write+0x14/0x20
[  218.294970][    C2]  ? _raw_read_unlock+0x23/0x40
[  218.304685][    C2]  ? __xen_evtchn_do_upcall+0x107/0x180
[  218.320824][    C2]  __do_softirq+0x1cd/0x66d
[  218.333423][    C2]  irq_exit_rcu+0x12c/0x1c0
[  218.348881][    C2]  sysvec_xen_hvm_callback+0x79/0xa0
[  218.362961][    C2]  </IRQ>
[  218.368424][    C2]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  218.378278][    C2] RIP: 0010:native_safe_halt+0xe/0x20
[  218.387553][    C2] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  218.422923][    C2] RSP: 0018:ffffc90000127cf0 EFLAGS: 00000246
[  218.430618][    C2] RAX: 0000000000004000 RBX: ffff88810122b865 RCX: 
ffffffff83dd94c2
[  218.443367][    C2] RDX: 1ffff1102011ea50 RSI: 0000000000000008 RDI: 
ffff8881008f5280
[  218.458767][    C2] RBP: ffffc90000127d00 R08: 0000000000000000 R09: 
ffff8881008f5287
[  218.471518][    C2] R10: ffffed102011ea50 R11: 0000000000000000 R12: 
ffff8881008f5280
[  218.484528][    C2] R13: ffff88810a142000 R14: ffff88810122b864 R15: 
ffffffff8576a6c0
[  218.494235][    C2]  ? acpi_idle_do_entry+0x142/0x1e0
[  218.505888][    C2]  ? acpi_idle_do_entry+0x166/0x1e0
[  218.516420][    C2]  acpi_idle_enter+0x2ca/0x500
[  218.530813][    C2]  ? __kasan_check_write+0x14/0x20
[  218.543552][    C2]  cpuidle_enter_state+0x17c/0xe00
[  218.552802][    C2]  cpuidle_enter+0x4f/0xa0
[  218.559185][    C2]  do_idle+0x407/0x5c0
[  218.564474][    C2]  ? arch_cpu_idle_exit+0x60/0x60
[  218.571237][    C2]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  218.579190][    C2]  ? complete+0x5c/0x80
[  218.584533][    C2]  cpu_startup_entry+0x20/0x40
[  218.590222][    C2]  start_secondary+0x27f/0x360
[  218.596167][    C2]  ? set_cpu_sibling_map+0x2040/0x2040
[  218.603131][    C2]  secondary_startup_64_no_verify+0xb0/0xbb
[  218.610245][    C2]
[  218.612872][    C2] Allocated by task 0:
[  218.617758][    C2]  kasan_save_stack+0x23/0x60
[  218.623215][    C2]  __kasan_slab_alloc+0x68/0x80
[  218.629156][    C2]  kmem_cache_alloc_node+0x242/0x380
[  218.635366][    C2]  __alloc_skb+0x156/0x280
[  218.640622][    C2]  __netdev_alloc_skb+0x46/0x320
[  218.640631][    C2]  xennet_alloc_rx_buffers+0x237/0xae0
[  218.640638][    C2]  xennet_poll+0x1e8b/0x3c80
[  218.640642][    C2]  __napi_poll+0xb2/0x4c0
[  218.640645][    C2]  net_rx_action+0x2d7/0xa20
[  218.640648][    C2]  __do_softirq+0x1cd/0x66d
[  218.640653][    C2]
M [K[[0;31m*[  218.640655][    C2] Freed by task 0:
[  218.640658][    C2] (stack is not available)
[  218.640659][    C2]
[  218.640660][    C2] The buggy address belongs to the object at 
ffff888126663b80
[  218.640660][    C2]  which belongs to the cache skbuff_head_cache of 
size 216
[  218.640664][    C2] The buggy address is located 0 
bytes inside of
[  218.640664][    C2]  216-byte region [ffff888126663b80, 
ffff888126663c58)
[  218.640668][    C2] The buggy address belongs to the page:
[  218.640671][    C2] page:00000000dd43c559 refcount:1 
mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x126662
[  218.640676][    C2] head:00000000dd43c559 order:1 compound_mapcount:0
[  218.640678][    C2] memcg:ffff88811066a601
[  218.640680][    C2] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  218.640689][    C2] raw: 02fffc0000010200 
0000000000000000 0000000300000001 ffff8881002a3e00
[  218.640693][    C2] raw: 0000000000000000 0000000000190019 
00000001ffffffff ffff88811066a601
[  218.640695][    C2] page dumped because: kasan: bad 
access detected
[  218.640697][    C2]
[  218.640698][    C2] Memory state around the buggy address:
[  218.640700][    C2]  ffff888126663a80: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  218.640703][    C2]  ffff888126663b00: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  218.640705][    C2] >ffff888126663b80: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  218.640707][    C2]                    ^
[  218.640709][    C2]  ffff888126663c00: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  218.988653][    C2]  ffff888126663c80: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  219.000772][    C2] 
==================================================================
[  219.013377][    C0] 
==================================================================
[  219.027503][    C0] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  219.036920][    C0]
[  219.040141][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  219.052977][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  219.067630][    C0] Call Trace:
[  219.073619][    C0]  dump_stack+0xa1/0xd3
[  219.082313][    C0]  print_address_description.constprop.0+0x1d/0x140
[  219.099444][    C0]  ? kfree+0xd7/0x560
[  219.113304][    C0]  kasan_report_invalid_free+0x56/0x80
[  219.128637][    C0]  ? kfree+0xd7/0x560
[  219.139426][    C0]  __kasan_slab_free+0x110/0x140
[  219.139442][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  219.139448][    C0]  kfree+0xd7/0x560
[  219.139452][    C0]  ? skb_release_data+0x41c/0x520
[  219.139461][    C0]  skb_release_data+0x41c/0x520
[  219.139466][    C0]  __kfree_skb+0x45/0x60
[  219.139471][    C0]  tcp_validate_incoming+0x543/0x1e40
M [K[[0;1;31m[  219.139478][    C0]  ? __kasan_check_write+0x14/0x20
[  219.139484][    C0]  ? tcp_reset+0x2c0/0x2c0
[  219.139491][    C0]  ? 
xen_clocksource_get_cycles+0x15/0x20
[  219.139495][    C0]  ? ktime_get+0x6b/0x100
[  219.139501][    C0]  ? __kasan_check_write+0x14/0x20
[  219.139506][    C0]  tcp_rcv_established+0x51f/0x20e0
[  219.139511][    C0]  ? 
__asan_report_load2_noabort+0x14/0x20
[  219.139516][    C0]  ? tcp_parse_md5sig_option+0x105/0x120
[  219.297882][    C0]  ? tcp_data_queue+0x5740/0x5740
[  219.313725][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  219.329893][    C0]  ? __kasan_check_read+0x11/0x20
[  219.342893][    C0]  tcp_v4_do_rcv+0x4fa/0x760
[  219.353893][    C0]  tcp_v4_rcv+0x2643/0x3640
[  219.362318][    C0]  ? tcp_v4_early_demux+0x840/0x840
[  219.371313][    C0]  ? __kasan_check_read+0x11/0x20
[  219.380240][    C0]  ip_protocol_deliver_rcu+0x2c/0x1c0
[  219.389736][    C0]  ip_local_deliver_finish+0x1df/0x260
[  219.397933][    C0]  ip_local_deliver+0x1b4/0x3c0
[  219.407219][    C0]  ? ip_local_deliver_finish+0x260/0x260
[  219.415900][    C0]  ? __asan_report_store8_noabort+0x17/0x20
[  219.423642][    C0]  ? tcp_v4_early_demux+0x7ec/0x840
[  219.430968][    C0]  ip_sublist_rcv_finish+0x219/0x480
[  219.438182][    C0]  ip_sublist_rcv+0x4b9/0x820
[  219.443854][    C0]  ? ip_sublist_rcv_finish+0x480/0x480
[  219.450525][    C0]  ? kasan_check_range+0x14f/0x1a0
[  219.456596][    C0]  ? __asan_report_load8_noabort+0x14/0x20
[  219.463212][    C0]  ? ip_rcv_core+0xab8/0xac0
[  219.468590][    C0]  ip_list_rcv+0x2f0/0x4e0
[  219.473707][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  219.480245][    C0]  ? ip_rcv+0x5e0/0x5e0
[  219.485638][    C0]  __netif_receive_skb_list_core+0x6ae/0xa20
[  219.493448][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  219.505070][    C0]  ? __kasan_check_read+0x11/0x20
[  219.512154][    C0]  ? lock_acquire+0x4b/0x260
[  219.518352][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  219.527057][    C0]  netif_receive_skb_list_internal+0x616/0xd80
[  219.535403][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  219.542848][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  219.552848][    C0]  ? __kasan_check_write+0x14/0x20
[  219.560714][    C0]  ? napi_gro_flush+0x298/0x3c0
[  219.567818][    C0]  napi_complete_done+0x189/0x600
[  219.575787][    C0]  xennet_poll+0x2447/0x3c80
[  219.582475][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  219.590414][    C0]  ? __kasan_check_read+0x11/0x20
[  219.597687][    C0]  ? lock_release+0xa3/0x820
[  219.604770][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  219.610235][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  219.615892][    C0]  ? __kasan_check_read+0x11/0x20
[  219.621826][    C0]  ? lock_release+0xa3/0x820
[  219.627184][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  219.633183][    C0]  __napi_poll+0xb2/0x4c0
[  219.638404][    C0]  net_rx_action+0x2d7/0xa20
[  219.638413][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  219.638418][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  219.638426][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  219.638433][    C0]  __do_softirq+0x1cd/0x66d
[  219.638437][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  219.638444][    C0]  run_ksoftirqd+0x2b/0x40
M [K[[0m[0;3[  219.638448][    C0] smpboot_thread_fn+0x2fb/0x6e0
[  219.638454][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  219.638459][    C0]  ? __kthread_parkme+0x8d/0x120
[  219.638464][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  219.638468][    C0]  ? 
smpboot_register_percpu_thread+0x360/0x360
[  219.790573][    C0]  kthread+0x32d/0x400
[  219.799037][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  219.808029][    C0]  ret_from_fork+0x22/0x30
[  219.814353][    C0]
[  219.820408][    C0] Allocated by task 0:
[  219.829480][    C0] (stack is not available)
[  219.837635][    C0]
[  219.841663][    C0] Freed by task 0:
[  219.848159][    C0]  kasan_save_stack+0x23/0x60
[  219.856773][    C0]  kasan_set_track+0x20/0x40
[  219.862659][    C0]  kasan_set_free_info+0x24/0x40
[  219.869510][    C0]  __kasan_slab_free+0xf1/0x140
[  219.876560][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  219.885463][    C0]  kfree+0xd7/0x560
[  219.890413][    C0]  skb_release_data+0x41c/0x520
[  219.896494][    C0]  kfree_skb+0x105/0x240
[  219.902822][    C0]  xennet_poll+0x1ab2/0x3c80
[  219.910253][    C0]  __napi_poll+0xb2/0x4c0
[  219.916214][    C0]  net_rx_action+0x2d7/0xa20
[  219.923268][    C0]  __do_softirq+0x1cd/0x66d
[  219.930926][    C0]
[  219.934353][    C0] The buggy address belongs to the object at 
ffff888134b77800
[  219.934353][    C0]  which belongs to the cache kmalloc-1k of size 1024
[  219.959545][    C0] The buggy address is located 0 bytes inside of
[  219.959545][    C0]  1024-byte region [ffff888134b77800, 
ffff888134b77c00)
[  219.988126][    C0] The buggy address belongs to the page:
[  220.001334][    C0] page:00000000027e9922 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x134b70
[  220.018854][    C0] head:00000000027e9922 order:3 compound_mapcount:0 
compound_pincount:0
[  220.032958][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  220.048545][    C0] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  220.059184][    C0] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  220.072393][    C0] page dumped because: kasan: bad access detected
[  220.080501][    C0]
[  220.083922][    C0] Memory state around the buggy address:
[  220.091428][    C0]  ffff888134b77700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  220.102532][    C0]  ffff888134b77780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  220.112722][    C0] >ffff888134b77800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  220.124660][    C0]                    ^
[  220.130466][    C0]  ffff888134b77880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  220.144409][    C0]  ffff888134b77900: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  220.144416][    C0] 
==================================================================
[  220.144456][    C2] 
==================================================================
[  220.144461][    C2] BUG: KASAN: double-free or invalid-free in 
kfree+0xd7/0x560
[  220.144470][    C2]
[  220.144473][    C2] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G B   
W         5.13.0-rc6-x86_64 #1
[  220.144478][    C2] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
M [K[[0;1;31m[  220.144482][    C2] Call Trace:
[  220.144485][    C2]  <IRQ>
[  220.144489][    C2]  dump_stack+0xa1/0xd3
[  220.283702][    C2]  print_address_description.constprop.0+0x1d/0x140
[  220.299859][    C2]  ? kfree+0xd7/0x560
[  220.311791][    C2]  kasan_report_invalid_free+0x56/0x80
[  220.324061][    C2]  ? kfree+0xd7/0x560
[  220.334193][    C2]  __kasan_slab_free+0x110/0x140
[  220.345917][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  220.358159][    C2]  kfree+0xd7/0x560
[  220.365856][    C2]  ? skb_release_data+0x41c/0x520
[  220.376324][    C2]  ? xennet_poll+0x15d9/0x3c80
[  220.388157][    C2]  ? kmem_cache_free+0x10b/0x520
[  220.395273][    C2]  skb_release_data+0x41c/0x520
[  220.402187][    C2]  ? xennet_poll+0x15d9/0x3c80
[  220.412827][    C2]  ? xennet_poll+0x15d9/0x3c80
[  220.420663][    C2]  kfree_skb+0x105/0x240
[  220.426526][    C2]  xennet_poll+0x15d9/0x3c80
[  220.433255][    C2]  ? xennet_xdp+0x7e0/0x7e0
[  220.440736][    C2]  ? __kasan_check_read+0x11/0x20
[  220.447887][    C2]  ? __raise_softirq_irqoff+0x36/0xe0
[  220.456181][    C2]  ? __napi_schedule+0x1b5/0x260
[  220.465146][    C2]  ? __kasan_check_read+0x11/0x20
[  220.476879][    C2]  ? lock_release+0xa3/0x820
[  220.484656][    C2]  ? do_raw_spin_lock+0x13e/0x280
[  220.491521][    C2]  ? handle_edge_irq+0x35e/0xb60
[  220.498336][    C2]  ? handle_irq_for_port+0x192/0x4c0
[  220.506738][    C2]  ? __kasan_check_write+0x14/0x20
[  220.513630][    C2]  __napi_poll+0xb2/0x4c0
[  220.519833][    C2]  net_rx_action+0x2d7/0xa20
[  220.526776][    C2]  ? napi_threaded_poll+0x2c0/0x2c0
[  220.533872][    C2]  ? __kasan_check_read+0x11/0x20
[  220.540293][    C2]  ? lock_acquire+0x4b/0x260
[  220.545608][    C2]  ? __kasan_check_write+0x14/0x20
[  220.551638][    C2]  ? _raw_read_unlock+0x23/0x40
[  220.557360][    C2]  ? __xen_evtchn_do_upcall+0x107/0x180
[  220.563934][    C2]  __do_softirq+0x1cd/0x66d
[  220.569544][    C2]  irq_exit_rcu+0x12c/0x1c0
[  220.575980][    C2]  sysvec_xen_hvm_callback+0x79/0xa0
[  220.585387][    C2]  </IRQ>
[  220.590639][    C2]  asm_sysvec_xen_hvm_callback+0x12/0x20
[  220.600500][    C2] RIP: 0010:native_safe_halt+0xe/0x20
[  220.608971][    C2] Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc e9 07 00 00 00 0f 00 2d b4 f4 
49 00 fb f4 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 66 0f 1f 44 00 00 e9 
07 00 00
[  220.648521][    C2] RSP: 0018:ffffc90000127cf0 EFLAGS: 00000246
[  220.648533][    C2] RAX: 0000000000004000 RBX: ffff88810122b865 RCX: 
ffffffff83dd94c2
[  220.648537][    C2] RDX: 1ffff1102011ea50 RSI: 0000000000000008 RDI: 
ffff8881008f5280
[  220.648540][    C2] RBP: ffffc90000127d00 R08: 0000000000000000 R09: 
ffff8881008f5287
[  220.648543][    C2] R10: ffffed102011ea50 R11: 0000000000000000 R12: 
ffff8881008f5280
[  220.648546][    C2] R13: ffff88810a142000 R14: ffff88810122b864 R15: 
ffffffff8576a6c0
[  220.648554][    C2]  ? acpi_idle_do_entry+0x142/0x1e0
M [K[[0;31m*[  220.648568][    C2]  ? acpi_idle_do_entry+0x166/0x1e0
[  220.648573][    C2]  acpi_idle_enter+0x2ca/0x500
[  220.648579][    C2]  ? __kasan_check_write+0x14/0x20
[  220.648586][    C2]  cpuidle_enter_state+0x17c/0xe00
[  220.648594][    C2]  cpuidle_enter+0x4f/0xa0
[  220.648599][    C2]  do_idle+0x407/0x5c0
[  220.648606][    C2]  ? arch_cpu_idle_exit+0x60/0x60
[  220.823219][    C2]  ? _raw_spin_unlock_irqrestore+0x27/0x40
[  220.838640][    C2]  ? complete+0x5c/0x80
[  220.851481][    C2]  cpu_startup_entry+0x20/0x40
[  220.866481][    C2]  start_secondary+0x27f/0x360
[  220.880151][    C2]  ? set_cpu_sibling_map+0x2040/0x2040
[  220.896471][    C2]  secondary_startup_64_no_verify+0xb0/0xbb
[  220.908473][    C2]
[  220.912851][    C2] Allocated by task 0:
[  220.919678][    C2] (stack is not available)
[  220.925463][    C2]
[  220.928665][    C2] Freed by task 0:
[  220.933070][    C2]  kasan_save_stack+0x23/0x60
[  220.938743][    C2]  kasan_set_track+0x20/0x40
[  220.944169][    C2]  kasan_set_free_info+0x24/0x40
[  220.949943][    C2]  __kasan_slab_free+0xf1/0x140
[  220.955639][    C2]  slab_free_freelist_hook+0x8c/0x1c0
[  220.962237][    C2]  kfree+0xd7/0x560
[  220.966769][    C2]  skb_release_data+0x41c/0x520
[  220.972706][    C2]  kfree_skb+0x105/0x240
[  220.977795][    C2]  xennet_poll+0x1ab2/0x3c80
[  220.983566][    C2]  __napi_poll+0xb2/0x4c0
[  220.988924][    C2]  net_rx_action+0x2d7/0xa20
[  220.996268][    C2]  __do_softirq+0x1cd/0x66d
[  221.003358][    C2]
[  221.006612][    C2] The buggy address belongs to the object at 
ffff8881263bf800
[  221.006612][    C2]  which belongs to the cache kmalloc-1k of size 1024
[  221.029471][    C2] The buggy address is located 0 bytes inside of
[  221.029471][    C2]  1024-byte region [ffff8881263bf800, 
ffff8881263bfc00)
[  221.063503][    C2] The buggy address belongs to the page:
[  221.080451][    C2] page:0000000032a6c71b refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x1263b8
[  221.098138][    C2] head:0000000032a6c71b order:3 compound_mapcount:0 
compound_pincount:0
[  221.114383][    C2] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  221.127696][    C2] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff888100042dc0
[  221.141318][    C2] raw: 0000000000000000 0000000000100010 
00000001ffffffff 0000000000000000
[  221.141324][    C2] page dumped because: kasan: bad access detected
[  221.141327][    C2]
[  221.141329][    C2] Memory state around the buggy address:
[  221.141332][    C2]  ffff8881263bf700: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  221.141335][    C2]  ffff8881263bf780: fc fc fc fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  221.141338][    C2] >ffff8881263bf800: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
M [K[ [0;31m*[  221.141340][    C2]                    ^
[  221.141343][    C2]  ffff8881263bf880: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  221.141346][    C2]  ffff8881263bf900: fb fb fb fb fb 
fb fb fb fb fb fb fb fb fb fb fb
[  221.141348][    C2] 
==================================================================
[  221.141389][    C0] 
==================================================================
[  221.329937][    C0] BUG: KASAN: double-free or 
invalid-free in kmem_cache_free+0x10b/0x520
[  221.341736][    C0]
[  221.344711][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 
Tainted: G    B   W         5.13.0-rc6-x86_64 #1
[  221.357250][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  221.368680][    C0] Call Trace:
[  221.377150][    C0]  dump_stack+0xa1/0xd3
[  221.384252][    C0]  print_address_description.constprop.0+0x1d/0x140
[  221.396446][    C0]  ? kmem_cache_free+0x10b/0x520
[  221.407453][    C0]  kasan_report_invalid_free+0x56/0x80
[  221.419965][    C0]  ? kmem_cache_free+0x10b/0x520
[  221.428553][    C0]  __kasan_slab_free+0x110/0x140
[  221.435422][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  221.443433][    C0]  kmem_cache_free+0x10b/0x520
[  221.450874][    C0]  ? kfree_skbmem+0x9a/0x140
[  221.458747][    C0]  kfree_skbmem+0x9a/0x140
[  221.464587][    C0]  __kfree_skb+0x4d/0x60
[  221.474434][    C0]  tcp_validate_incoming+0x543/0x1e40
[  221.485158][    C0]  ? __kasan_check_write+0x14/0x20
[  221.492922][    C0]  ? tcp_reset+0x2c0/0x2c0
[  221.499125][    C0]  ? xen_clocksource_get_cycles+0x15/0x20
[  221.506601][    C0]  ? ktime_get+0x6b/0x100
[  221.511967][    C0]  ? __kasan_check_write+0x14/0x20
[  221.519218][    C0]  tcp_rcv_established+0x51f/0x20e0
[  221.527972][    C0]  ? __asan_report_load2_noabort+0x14/0x20
[  221.536085][    C0]  ? tcp_parse_md5sig_option+0x105/0x120
[  221.545260][    C0]  ? tcp_data_queue+0x5740/0x5740
[  221.552573][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  221.560259][    C0]  ? __kasan_check_read+0x11/0x20
[  221.569101][    C0]  tcp_v4_do_rcv+0x4fa/0x760
[  221.578037][    C0]  tcp_v4_rcv+0x2643/0x3640
[  221.585539][    C0]  ? tcp_v4_early_demux+0x840/0x840
[  221.594367][    C0]  ? __kasan_check_read+0x11/0x20
[  221.601892][    C0]  ip_protocol_deliver_rcu+0x2c/0x1c0
[  221.609841][    C0]  ip_local_deliver_finish+0x1df/0x260
[  221.618530][    C0]  ip_local_deliver+0x1b4/0x3c0
[  221.628604][    C0]  ? ip_local_deliver_finish+0x260/0x260
[  221.641821][    C0]  ? __asan_report_store8_noabort+0x17/0x20
[  221.641837][    C0]  ? tcp_v4_early_demux+0x7ec/0x840
[  221.641846][    C0]  ip_sublist_rcv_finish+0x219/0x480
[  221.641854][    C0]  ip_sublist_rcv+0x4b9/0x820
[  221.641860][    C0]  ? ip_sublist_rcv_finish+0x480/0x480
[  221.641865][    C0]  ? kasan_check_range+0x14f/0x1a0
[  221.641871][    C0]  ? __asan_report_load8_noabort+0x14/0x20
M [K[  [0;31m[  221.641876][    C0]  ? ip_rcv_core+0xab8/0xac0
[  221.641882][    C0]  ip_list_rcv+0x2f0/0x4e0
[  221.641886][    C0]  ? 
xennet_alloc_rx_buffers+0x237/0xae0
[  221.641894][    C0]  ? ip_rcv+0x5e0/0x5e0
[0;31m* [0m] (2[  221.641900][    C0] 
__netif_receive_skb_list_core+0x6ae/0xa20
[  221.641908][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  221.641913][    C0]  ? __kasan_check_read+0x11/0x20
[  221.641918][    C0]  ? lock_acquire+0x4b/0x260
[  221.788087][    C0]  ? 
skb_defer_rx_timestamp+0x2d3/0x380
[  221.796345][    C0]  netif_receive_skb_list_internal+0x616/0xd80
[  221.804650][    C0]  ? 
__netif_receive_skb_list_core+0xa20/0xa20
[  221.816777][    C0]  ? 
napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  221.832765][    C0]  ? __kasan_check_write+0x14/0x20
[  221.845992][    C0]  ? napi_gro_flush+0x298/0x3c0
[  221.856509][    C0]  napi_complete_done+0x189/0x600
[  221.864674][    C0]  xennet_poll+0x2447/0x3c80
[  221.872260][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  221.877709][    C0]  ? __kasan_check_read+0x11/0x20
[  221.883665][    C0]  ? lock_release+0xa3/0x820
[  221.889528][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  221.895396][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  221.901237][    C0]  ? __kasan_check_read+0x11/0x20
[  221.907269][    C0]  ? lock_release+0xa3/0x820
[  221.912657][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  221.918827][    C0]  __napi_poll+0xb2/0x4c0
[  221.923961][    C0]  net_rx_action+0x2d7/0xa20
[  221.929505][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  221.935254][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  221.941511][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  221.949956][    C0]  __do_softirq+0x1cd/0x66d
[  221.956378][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  221.965444][    C0]  run_ksoftirqd+0x2b/0x40
[  221.972307][    C0]  smpboot_thread_fn+0x2fb/0x6e0
[  221.979773][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  221.993144][    C0]  ? __kthread_parkme+0x8d/0x120
[  222.004406][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  222.018319][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  222.034503][    C0]  kthread+0x32d/0x400
[  222.044564][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  222.051098][    C0]  ret_from_fork+0x22/0x30
[  222.056623][    C0]
[  222.059448][    C0] Allocated by task 0:
[  222.065563][    C0]  kasan_save_stack+0x23/0x60
[  222.071945][    C0]  __kasan_slab_alloc+0x68/0x80
[  222.078283][    C0]  kmem_cache_alloc_node+0x242/0x380
[  222.084876][    C0]  __alloc_skb+0x156/0x280
[  222.090710][    C0]  __netdev_alloc_skb+0x46/0x320
[  222.097522][    C0]  xennet_alloc_rx_buffers+0x237/0xae0
[  222.105505][    C0]  xennet_poll+0x1e8b/0x3c80
[  222.111276][    C0]  __napi_poll+0xb2/0x4c0
[  222.116938][    C0]  net_rx_action+0x2d7/0xa20
[  222.123107][    C0]  __do_softirq+0x1cd/0x66d
[  222.128925][    C0]
[  222.132287][    C0] Freed by task 881776:
[  222.138150][    C0] ------------[ cut here ]------------
[  222.138153][    C0] slab index 2083072 out of bounds (278) for stack 
id ffffc900
[  222.138175][    C0] WARNING: CPU: 0 PID: 12 at lib/stackdepot.c:236 
stack_depot_fetch+0x71/0xa0
[  222.138186][    C0] Modules linked in: xen_front_pgdir_shbuf 
xen_scsifront snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg 
snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd soundcore 
hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd 
intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd 
usbcore [last unloaded: usbip_core]
[  222.224375][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  222.244580][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  222.253384][    C0] RIP: 0010:stack_depot_fetch+0x71/0xa0
M [K[   [0;31[  222.260465][    C0] Code: 48 c1 e0 04 25 f0 3f 00 00 
48 01 c2 48 8d 42 18 48 89 03 48 8b 5d f8 8b 42 0c c9 c3 89 f9 48 c7 c7 
48 85 bb 84 e8 2d bf 3b 01 <0f> 0b 31 c0 48 8b 5d f8 c9 c3 48 8b 5d f8 
31 c0 c9 c3 48 c7 c7 a0
[  222.285208][    C0] RSP: 0018:ffffc900000d6c80 EFLAGS: 00010086
m*[0;1;31m*[0m[  222.293281][    C0] RAX: 0000000000000000 RBX: 
ffffc900000d6ca8 RCX: 0000000000000000
[  222.306257][    C0] RDX: 0000000000000027 RSI: 0000000000000004 RDI: 
fffff5200001ad82
[0;31m*[0m] (2[  222.317154][    C0] RBP: ffffc900000d6c98 R08: 
0000000000000001 R09: ffff8884d382091b
[  222.331541][    C0] R10: ffffed109a704123 R11: 0000000000000007 R12: 
ffff888125a123c0
[  222.345035][    C0] R13: 
ffffea0004968480 R14: ffff888125a123c0 R15: ffff888125a12498
[  222.360409][    C0] FS:  0000000000000000(0000) 
GS:ffff8884d3800000(0000) knlGS:0000000000000000
[  222.373111][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 
0000000080050033
[  222.390751][    C0] CR2: 000056104ce6e020 CR3: 0000000121a0e002 CR4: 
00000000001706f0
[  222.410775][    C0] Call Trace:
[  222.421021][    C0]  print_stack+0xe/0x1d
[  222.427927][    C0]  print_track+0x22/0x33
[  222.435329][    C0] 
print_address_description.constprop.0.cold+0x3c/0x179
[  222.446384][    C0]  ? kmem_cache_free+0x10b/0x520
[  222.453019][    C0]  kasan_report_invalid_free+0x56/0x80
[  222.461900][    C0]  ? kmem_cache_free+0x10b/0x520
[  222.470151][    C0]  __kasan_slab_free+0x110/0x140
[  222.478185][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  222.487662][    C0]  kmem_cache_free+0x10b/0x520
[  222.496046][    C0]  ? kfree_skbmem+0x9a/0x140
[  222.504214][    C0]  kfree_skbmem+0x9a/0x140
[  222.511698][    C0]  __kfree_skb+0x4d/0x60
[  222.519142][    C0]  tcp_validate_incoming+0x543/0x1e40
[  222.528397][    C0]  ? __kasan_check_write+0x14/0x20
[  222.537268][    C0]  ? tcp_reset+0x2c0/0x2c0
[  222.544138][    C0]  ? xen_clocksource_get_cycles+0x15/0x20
[  222.554239][    C0]  ? ktime_get+0x6b/0x100
[  222.562508][    C0]  ? __kasan_check_write+0x14/0x20
[  222.573110][    C0]  tcp_rcv_established+0x51f/0x20e0
[  222.587242][    C0]  ? __asan_report_load2_noabort+0x14/0x20
[  222.604554][    C0]  ? tcp_parse_md5sig_option+0x105/0x120
[  222.620484][    C0]  ? tcp_data_queue+0x5740/0x5740
[  222.635582][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  222.635600][    C0]  ? __kasan_check_read+0x11/0x20
[  222.635608][    C0]  tcp_v4_do_rcv+0x4fa/0x760
[  222.635616][    C0]  tcp_v4_rcv+0x2643/0x3640
[  222.635653][    C0]  ? tcp_v4_early_demux+0x840/0x840
[  222.682168][    C0]  ? __kasan_check_read+0x11/0x20
[  222.690315][    C0]  ip_protocol_deliver_rcu+0x2c/0x1c0
M [K[    [0;3[  222.700572][    C0] ip_local_deliver_finish+0x1df/0x260
[  222.712227][    C0]  ip_local_deliver+0x1b4/0x3c0
[  222.722843][    C0]  ? 
ip_local_deliver_finish+0x260/0x260
[  222.735549][    C0]  ? __asan_report_store8_noabort+0x17/0x20
[  222.744743][    C0]  ? tcp_v4_early_demux+0x7ec/0x840
[  222.756120][    C0]  ip_sublist_rcv_finish+0x219/0x480
[  222.771322][    C0]  ip_sublist_rcv+0x4b9/0x820
[  222.786919][    C0]  ? ip_sublist_rcv_finish+0x480/0x480
[  222.808808][    C0]  ? kasan_check_range+0x14f/0x1a0
[  222.829216][    C0]  ? __asan_report_load8_noabort+0x14/0x20
[  222.843889][    C0]  ? ip_rcv_core+0xab8/0xac0
[  222.854497][    C0]  ip_list_rcv+0x2f0/0x4e0
[  222.864301][    C0]  ? xennet_alloc_rx_buffers+0x237/0xae0
[  222.877084][    C0]  ? ip_rcv+0x5e0/0x5e0
[  222.888175][    C0]  __netif_receive_skb_list_core+0x6ae/0xa20
[  222.902120][    C0]  ? 
__netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  222.920250][    C0]  ? __kasan_check_read+0x11/0x20
[  222.930324][    C0]  ? lock_acquire+0x4b/0x260
[  222.937572][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  222.946548][    C0]  netif_receive_skb_list_internal+0x616/0xd80
[  222.957698][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  222.971138][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  222.985056][    C0]  ? __kasan_check_write+0x14/0x20
[  222.998333][    C0]  ? napi_gro_flush+0x298/0x3c0
[  223.008145][    C0]  napi_complete_done+0x189/0x600
[  223.019542][    C0]  xennet_poll+0x2447/0x3c80
[  223.033816][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  223.047087][    C0]  ? __kasan_check_read+0x11/0x20
[  223.060573][    C0]  ? lock_release+0xa3/0x820
[  223.069336][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  223.077566][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  223.088149][    C0]  ? __kasan_check_read+0x11/0x20
[  223.100391][    C0]  ? lock_release+0xa3/0x820
[  223.111366][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  223.123314][    C0]  __napi_poll+0xb2/0x4c0
[  223.132415][    C0]  net_rx_action+0x2d7/0xa20
[  223.142731][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  223.142745][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  223.142754][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  223.142763][    C0]  __do_softirq+0x1cd/0x66d
[  223.142769][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  223.142776][    C0]  run_ksoftirqd+0x2b/0x40
[  223.142780][    C0]  smpboot_thread_fn+0x2fb/0x6e0
M [K[     [0;[  223.142787][    C0]  ? 
smpboot_register_percpu_thread+0x360/0x360
[  223.142791][    C0]  ? __kthread_parkme+0x8d/0x120
[  223.142796][    C0]  ? 
smpboot_register_percpu_thread+0x360/0x360
[  223.142800][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  223.142806][    C0]  kthread+0x32d/0x400
[  223.142810][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  223.142815][    C0]  ret_from_fork+0x22/0x30
[  223.142824][    C0] ---[ end trace 3841665df6b692d5 ]---
[  223.354520][    C0] ------------[ cut here 
]------------
[  223.368512][    C0] WARNING: CPU: 0 PID: 12 at kernel/stacktrace.c:28 
stack_trace_print+0xf/0x40
[  223.394289][    C0] Modules linked in: 
xen_front_pgdir_shbuf xen_scsifront snd_hda_codec_hdmi snd_hda_intel 
snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer 
snd soundcore hid_generic usbhid hid xen_kbdfront intel_rapl_msr atkbd 
intel_rapl_common amdgpu drm_ttm_helper ttm gpu_sched xhci_pci xhci_hcd 
usbcore [last unloaded: usbip_core]
[  223.488775][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Tainted: G    
B   W         5.13.0-rc6-x86_64 #1
[  223.513458][    C0] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  223.527568][    C0] RIP: 0010:stack_trace_print+0xf/0x40
[  223.537471][    C0] Code: 48 8b 55 f0 65 48 2b 14 25 28 00 00 00 75 
06 48 8b 5d f8 c9 c3 e8 e1 3e 84 02 90 0f 1f 44 00 00 48 85 ff 74 05 85 
f6 75 04 c3 <0f> 0b c3 55 48 89 e5 41 56 41 55 41 54 53 e9 66 c8 71 02 
66 66 2e
[  223.579298][    C0] RSP: 0018:ffffc900000d6ca0 EFLAGS: 00010046
[  223.593278][    C0] RAX: 0000000000000000 RBX: ffff888125a123c8 RCX: 
0000000000000000
[  223.610232][    C0] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 
0000000000000000
[  223.627455][    C0] RBP: ffffc900000d6cb0 R08: 0000000000000001 R09: 
ffff8884d382091b
[  223.648295][    C0] R10: ffffed109a704123 R11: 0000000000000007 R12: 
ffff888125a123c0
[  223.648304][    C0] R13: ffffea0004968480 R14: ffff888125a123c0 R15: 
ffff888125a12498
[  223.648307][    C0] FS:  0000000000000000(0000) 
GS:ffff8884d3800000(0000) knlGS:0000000000000000
[  223.648312][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  223.648315][    C0] CR2: 000056104ce6e020 CR3: 0000000121a0e002 CR4: 
00000000001706f0
[  223.648322][    C0] Call Trace:
[  223.648326][    C0]  ? print_stack+0x1b/0x1d
M [K[    [0;3[  223.648339][    C0]  print_track+0x22/0x33
[  223.648344][    C0] 
print_address_description.constprop.0.cold+0x3c/0x179
[  223.648350][    C0]  ? 
kmem_cache_free+0x10b/0x520
[  223.804629][    C0]  kasan_report_invalid_free+0x56/0x80
[  223.814500][    C0]  ? kmem_cache_free+0x10b/0x520
[  223.826720][    C0]  __kasan_slab_free+0x110/0x140
[  223.837437][    C0]  slab_free_freelist_hook+0x8c/0x1c0
[  223.847922][    C0]  kmem_cache_free+0x10b/0x520
[  223.868161][    C0]  ? kfree_skbmem+0x9a/0x140
[  223.884154][    C0]  kfree_skbmem+0x9a/0x140
[  223.900176][    C0]  __kfree_skb+0x4d/0x60
[  223.912885][    C0]  tcp_validate_incoming+0x543/0x1e40
[  223.926020][    C0]  ? __kasan_check_write+0x14/0x20
[  223.934915][    C0]  ? tcp_reset+0x2c0/0x2c0
[  223.943623][    C0]  ? xen_clocksource_get_cycles+0x15/0x20
[  223.954404][    C0]  ? ktime_get+0x6b/0x100
[  223.962118][    C0]  ? __kasan_check_write+0x14/0x20
[  223.971530][    C0]  tcp_rcv_established+0x51f/0x20e0
[  223.980749][    C0]  ? __asan_report_load2_noabort+0x14/0x20
[  223.992645][    C0]  ? tcp_parse_md5sig_option+0x105/0x120
[  224.003425][    C0]  ? tcp_data_queue+0x5740/0x5740
[  224.013820][    C0]  ? do_raw_spin_lock+0x13e/0x280
[  224.025448][    C0]  ? __kasan_check_read+0x11/0x20
[  224.035386][    C0]  tcp_v4_do_rcv+0x4fa/0x760
[  224.044517][    C0]  tcp_v4_rcv+0x2643/0x3640
[  224.053684][    C0]  ? tcp_v4_early_demux+0x840/0x840
[  224.067321][    C0]  ? __kasan_check_read+0x11/0x20
[  224.082495][    C0]  ip_protocol_deliver_rcu+0x2c/0x1c0
[  224.098714][    C0]  ip_local_deliver_finish+0x1df/0x260
[  224.114575][    C0]  ip_local_deliver+0x1b4/0x3c0
[  224.128460][    C0]  ? ip_local_deliver_finish+0x260/0x260
[  224.139707][    C0]  ? __asan_report_store8_noabort+0x17/0x20
[  224.139724][    C0]  ? tcp_v4_early_demux+0x7ec/0x840
[  224.139732][    C0]  ip_sublist_rcv_finish+0x219/0x480
[  224.139740][    C0]  ip_sublist_rcv+0x4b9/0x820
[  224.139747][    C0]  ? ip_sublist_rcv_finish+0x480/0x480
[  224.139752][    C0]  ? kasan_check_range+0x14f/0x1a0
[  224.139758][    C0]  ? __asan_report_load8_noabort+0x14/0x20
M [K[   [0;31[  224.139763][    C0]  ? ip_rcv_core+0xab8/0xac0
[  224.139769][    C0]  ip_list_rcv+0x2f0/0x4e0
m*[0;1;31m*[0m[  224.139773][    C0]  ? 
xennet_alloc_rx_buffers+0x237/0xae0
[  224.139781][    C0]  ? ip_rcv+0x5e0/0x5e0
[0;31m*[0m] (2[  224.139787][    C0] 
__netif_receive_skb_list_core+0x6ae/0xa20
[  224.139795][    C0]  ? __netif_receive_skb_core.constprop.0+0x31c0/0x31c0
[  224.139800][    C0]  ? __kasan_check_read+0x11/0x20
[  224.317090][    C0]  ? lock_acquire+0x4b/0x260
[  224.332689][    C0]  ? skb_defer_rx_timestamp+0x2d3/0x380
[  224.346416][    C0]  netif_receive_skb_list_internal+0x616/0xd80
[  224.362780][    C0]  ? __netif_receive_skb_list_core+0xa20/0xa20
[  224.379090][    C0]  ? napi_gro_complete.constprop.0.isra.0+0x18f/0x380
[  224.395485][    C0]  ? __kasan_check_write+0x14/0x20
[  224.406125][    C0]  ? napi_gro_flush+0x298/0x3c0
[  224.418245][    C0]  napi_complete_done+0x189/0x600
[  224.429697][    C0]  xennet_poll+0x2447/0x3c80
[  224.439312][    C0]  ? xennet_xdp+0x7e0/0x7e0
[  224.446881][    C0]  ? __kasan_check_read+0x11/0x20
[  224.457506][    C0]  ? lock_release+0xa3/0x820
[  224.465342][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  224.475513][    C0]  ? lock_downgrade+0x7e0/0x7e0
[  224.483278][    C0]  ? __kasan_check_read+0x11/0x20
[  224.494173][    C0]  ? lock_release+0xa3/0x820
[  224.505905][    C0]  ? do_raw_spin_unlock+0x159/0x200
[  224.518074][    C0]  __napi_poll+0xb2/0x4c0
[  224.533333][    C0]  net_rx_action+0x2d7/0xa20
[  224.547891][    C0]  ? call_timer_fn+0x3c0/0x3c0
[  224.559371][    C0]  ? _raw_spin_unlock_irq+0x23/0x40
[  224.568377][    C0]  ? napi_threaded_poll+0x2c0/0x2c0
[  224.578984][    C0]  __do_softirq+0x1cd/0x66d
[  224.588373][    C0]  ? perf_trace_irq_handler_entry+0x4a0/0x4a0
[  224.598207][    C0]  run_ksoftirqd+0x2b/0x40
[  224.605831][    C0]  smpboot_thread_fn+0x2fb/0x6e0
[  224.614088][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  224.625115][    C0]  ? __kthread_parkme+0x8d/0x120
[  224.633499][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  224.645226][    C0]  ? smpboot_register_percpu_thread+0x360/0x360
[  224.645242][    C0]  kthread+0x32d/0x400
[  224.645249][    C0]  ? __kthread_bind_mask+0xa0/0xa0
[  224.645254][    C0]  ret_from_fork+0x22/0x30
[  224.645264][    C0] ---[ end trace 3841665df6b692d6 ]---
[  224.645268][    C0]
[  224.645271][    C0] The buggy address belongs to the object at 
ffff888125a123c0
[  224.645271][    C0]  which belongs to the cache skbuff_head_cache of 
size 216
[  224.645275][    C0] The buggy address is located 0 bytes inside of
[  224.645275][    C0]  216-byte region [ffff888125a123c0, 
ffff888125a12498)
[  224.645279][    C0] The buggy address belongs to the page:
[  224.645282][    C0] page:000000004ad1a198 refcount:1 mapcount:0 
mapping:0000000000000000 index:0x0 pfn:0x125a12
[  224.645287][    C0] head:000000004ad1a198 order:1 compound_mapcount:0
[  224.645290][    C0] memcg:ffff888135827401
[  224.645292][    C0] flags: 
0x2fffc0000010200(slab|head|node=0|zone=2|lastcpupid=0x3fff)
[  224.645300][    C0] raw: 02fffc0000010200 dead000000000100 
dead000000000122 ffff8881002a3e00
[  224.645304][    C0] raw: 0000000000000000 0000000000190019 
00000001ffffffff ffff888135827401
[  224.645306][    C0] page dumped because: kasan: bad access detected
[  224.645308][    C0]
[  224.645309][    C0] Memory state around the buggy address:
[  224.645312][    C0]  ffff888125a12280: fa fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  224.645315][    C0]  ffff888125a12300: fb fb fb fb fb fb fb fb fb fb 
fb fc fc fc fc fc
[  224.645317][    C0] >ffff888125a12380: fc fc fc fc fc fc fc fc fa fb 
fb fb fb fb fb fb
[  224.645319][    C0] ^
[  224.645322][    C0]  ffff888125a12400: fb fb fb fb fb fb fb fb fb fb 
fb fb fb fb fb fb
[  224.645325][    C0]  ffff888125a12480: fb fb fb fc fc fc fc fc fc fc 
fc fc fc fc fc fc
[  224.645327][    C0] 
==================================================================
[  224.645359][    C2] 
==================================================================
[  224.649156][ T2950] usercopy: Kernel memory exposure attempt detected 
from SLUB object 'kmalloc-64' (offset 0, size 2706)!
[  224.649175][ T2950] ------------[ cut here ]------------
[  224.649177][ T2950] kernel BUG at mm/usercopy.c:99!
[  224.649185][ T2950] invalid opcode: 0000 [#1] SMP KASAN NOPTI
[  224.649193][ T2950] CPU: 5 PID: 2950 Comm: kworker/u13:1 Tainted: 
G    B   W         5.13.0-rc6-x86_64 #1
[  224.649198][ T2950] Hardware name: Xen HVM domU, BIOS 4.15.1-pre 
06/12/2021
[  224.649201][ T2950] Workqueue: xprtiod xs_stream_data_receive_workfn
[  224.649213][ T2950] RIP: 0010:usercopy_abort+0x7b/0x7d
[  224.649228][ T2950] Code: 35 84 51 48 0f 44 d6 49 c7 c3 00 96 35 84 
50 4c 89 d1 48 c7 c6 80 95 35 84 57 48 c7 c7 20 97 35 84 49 0f 44 f3 e8 
c7 33 fe ff <0f> 0b e8 01 cb d9 fd 4c 89 e1 49 89 d8 44 89 ea 48 81 e9 
00 00 00
[  224.649233][ T2950] RSP: 0018:ffffc90000f8f588 EFLAGS: 00010282
[  224.649237][ T2950] RAX: 0000000000000066 RBX: ffff888100042640 RCX: 
0000000000000000
[  224.649244][ T2950] RDX: 0000000000000004 RSI: 0000000000000008 RDI: 
fffff520001f1ea4
[  224.649247][ T2950] RBP: ffffc90000f8f5a0 R08: 0000000000000066 R09: 
ffff8884d3ab0d47
[  224.649250][ T2950] R10: ffffed109a7561a8 R11: 000000000000006a R12: 
0000000000000001
[  224.649253][ T2950] R13: 0000000000000040 R14: 0000000000000001 R15: 
ffffea000441bf80
[  224.649255][ T2950] FS:  0000000000000000(0000) 
GS:ffff8884d3a80000(0000) knlGS:0000000000000000
[  224.649259][ T2950] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  224.649262][ T2950] CR2: 00007fd56ff179d8 CR3: 0000000111ca8001 CR4: 
00000000001706e0
[  224.649268][ T2950] Call Trace:
[  224.649271][ T2950]  __check_heap_object+0x122/0x160
[  224.649277][ T2950]  ? kmem_cache_free+0x10b/0x520
[  224.649282][ T2950]  __check_object_size+0x1e5/0x280
[  224.649289][ T2950]  simple_copy_to_iter+0x2b/0x60
[  224.649296][ T2950]  __skb_datagram_iter+0x315/0x720
[  224.649302][ T2950]  ? zerocopy_sg_from_iter+0x100/0x100
[  224.649308][ T2950]  skb_copy_datagram_iter+0x92/0x160
[  224.649313][ T2950]  tcp_recvmsg_locked+0xcd9/0x2300
[  224.649323][ T2950]  ? tcp_splice_read+0x920/0x920
[  224.649327][ T2950]  ? __kasan_check_read+0x11/0x20
[  224.649332][ T2950]  ? lock_acquire+0x4b/0x260
[  224.649339][ T2950]  tcp_recvmsg+0x11c/0x4a0
[  224.649344][ T2950]  ? tcp_recvmsg_locked+0x2300/0x2300
[  224.649349][ T2950]  ? do_raw_spin_unlock+0x159/0x200
[  224.649353][ T2950]  ? _raw_spin_unlock_bh+0x31/0x40
[  224.649360][ T2950]  ? tcp_recvmsg+0x3c0/0x4a0
[  224.649365][ T2950]  inet_recvmsg+0x111/0x4e0
[  224.649370][ T2950]  ? inet_sendpage+0x140/0x140
[  224.649375][ T2950]  ? lock_release+0xa3/0x820
[  224.649378][ T2950]  ? lock_release+0xa3/0x820
[  224.649382][ T2950]  ? inet_sendpage+0x140/0x140
[  224.649386][ T2950]  sock_recvmsg+0xf0/0x140
[  224.649392][ T2950] xs_read_stream_request.constprop.0+0x5fd/0xe80
[  224.649397][ T2950]  ? lock_downgrade+0x7e0/0x7e0
[  224.649404][ T2950]  xs_read_stream.constprop.0+0x4fb/0xea0
[  224.649410][ T2950]  ? xs_setup_local+0x840/0x840
[  224.649414][ T2950]  ? __kasan_check_read+0x11/0x20
[  224.649421][ T2950]  ? lock_downgrade+0x7e0/0x7e0
[  224.649426][ T2950]  xs_stream_data_receive_workfn+0xfd/0x340
[  224.649432][ T2950]  process_one_work+0x801/0x1360
[  224.649438][ T2950]  ? pwq_dec_nr_in_flight+0x300/0x300
[  224.649441][ T2950]  ? do_raw_spin_lock+0x13e/0x280
[  224.649446][ T2950]  ? __kasan_check_read+0x11/0x20
[  224.649450][ T2950]  ? lock_acquire+0x4b/0x260
[  224.649455][ T2950]  worker_thread+0x54f/0xf40
[  224.649461][ T2950]  ? process_one_work+0x1360/0x1360
[  224.649465][ T2950]  kthread+0x32d/0x400


--- console log ends ---

dom0 log: (the domU in question has to be d7, I believe)

---


2021-06-15 00:37:01 ---MARK---
(XEN) [2021-06-14 23:00:01] HVM d7v0 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7v1 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7v2 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7v3 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7v4 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7v5 save: CPU
(XEN) [2021-06-14 23:00:01] HVM d7 save: PIC
(XEN) [2021-06-14 23:00:01] HVM d7 save: IOAPIC
(XEN) [2021-06-14 23:00:01] HVM d7v0 save: LAPIC
(XEN) [2021-06-14 23:00:01] HVM d7v1 save: LAPIC
(XEN) [2021-06-14 23:00:06] printk: 55 messages suppressed.
(XEN) [2021-06-14 23:00:06] [VT-D] It's risky to assign 0000:04:00.0 
with shared RMRR at 7db85000 for d7.
(XEN) [2021-06-14 23:00:20] printk: 158 messages suppressed.
(XEN) [2021-06-14 23:00:20] Dom7 callback via changed to Direct Vector 0xf3
(XEN) [2021-06-14 23:00:24] memory_map:remove: dom7 gfn=530d0 mfn=c6b00 
nr=2
(XEN) [2021-06-14 23:00:24] memory_map:remove: dom7 gfn=530d3 mfn=c6b03 
nr=5
(XEN) [2021-06-14 23:00:26] printk: 124 messages suppressed.
(XEN) [2021-06-14 23:00:26] stdvga.c:178:d7v0 leaving stdvga mode
(XEN) [2021-06-14 23:00:32] printk: 6 messages suppressed.
(XEN) [2021-06-14 23:00:32] grant_table.c:1861:d7v2 Expanding d7 grant 
table from 5 to 6 frames
(XEN) [2021-06-14 23:00:38] memory_map:add: dom7 gfn=530a0 mfn=fbd40 nr=20
change evtmask to 0xffffff
change cpumask to 0xffff
(XEN) [2021-06-14 23:02:08] printk: 8 messages suppressed.
(XEN) [2021-06-14 23:02:08] xentrace: requesting 1 t_info pages for 32 
trace pages on 12 cpus
(XEN) [2021-06-14 23:02:08] xentrace: p0 mfn 160cbfc offset 129
(XEN) [2021-06-14 23:02:08] xentrace: p1 mfn 1600c44 offset 161
(XEN) [2021-06-14 23:02:08] xentrace: p2 mfn 1600a70 offset 193
(XEN) [2021-06-14 23:02:08] xentrace: p3 mfn 160c988 offset 225
(XEN) [2021-06-14 23:02:08] xentrace: p4 mfn 160cba0 offset 257
(XEN) [2021-06-14 23:02:08] xentrace: p5 mfn 160c990 offset 289
(XEN) [2021-06-14 23:02:08] xentrace: p6 mfn 160c940 offset 321
(XEN) [2021-06-14 23:02:08] xentrace: p7 mfn 160cb90 offset 353
(XEN) [2021-06-14 23:02:08] xentrace: p8 mfn 160cd90 offset 385
Use of uninitialized value in numeric eq (==) at 
/usr/lib64/perl5/vendor_perl/5.32/File/Tail.pm line 391.
(XEN) [2021-06-14 23:02:15] printk: 4 messages suppressed.
(XEN) [2021-06-14 23:02:15] grant_table.c:803:d0v3 Bad flags (0) or dom 
(0); expected d0
watch-xen-hypervisor-console: sending  debug-key
change evtmask to 0xffffff
change cpumask to 0xffff
(XEN) [2021-06-14 23:02:18] '*' pressed -> firing all diagnostic 
keyhandlers
(XEN) [2021-06-14 23:02:18] [d: dump registers]
(XEN) [2021-06-14 23:02:18] 'd' pressed -> dumping registers
(XEN) [2021-06-14 23:02:18]
(XEN) [2021-06-14 23:02:18] *** Dumping CPU9 guest state (d0v4): ***
(XEN) [2021-06-14 23:02:18] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:18] CPU:    9
(XEN) [2021-06-14 23:02:18] RIP: e033:[<ffffffff8100246a>]
(XEN) [2021-06-14 23:02:18] RFLAGS: 0000000000000286   EM: 0 CONTEXT: pv 
guest (d0v4)
(XEN) [2021-06-14 23:02:18] rax: 0000000000000023   rbx: 
ffff88810263a300   rcx: ffffffff8100246a
(XEN) [2021-06-14 23:02:18] rdx: 0000000000000000   rsi: 
0000000000000000   rdi: 00007f2d7b27e010
(XEN) [2021-06-14 23:02:18] rbp: ffffc90040c97ef8   rsp: 
ffffc90040c97e10   r8:  0000000000000000
(XEN) [2021-06-14 23:02:18] r9:  0000000000000000   r10: 
0000000000000000   r11: 0000000000000286
(XEN) [2021-06-14 23:02:18] r12: 0000000000000005   r13: 
00007ffd3b70fd90   r14: 00007ffd3b70fd90
(XEN) [2021-06-14 23:02:18] r15: ffff88810263a300   cr0: 
0000000080050033   cr4: 0000000000172660
(XEN) [2021-06-14 23:02:18] cr3: 0000000984805000   cr2: 0000558758755708
(XEN) [2021-06-14 23:02:18] fsb: 00007f2d7acbbc80   gsb: 
ffff888487300000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:18] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:18] Guest stack trace from rsp=ffffc90040c97e10:
(XEN) [2021-06-14 23:02:18]    0000000000000000 0000000000000000 
ffffffff81859e07 ffffc90040c97e80
(XEN) [2021-06-14 23:02:18]    ffff888103cf1980 ffffc90040c97ec0 
ffffffff81204825 ffffc90040c97e78
(XEN) [2021-06-14 23:02:18]    ffffc90040c97e80 ffff888103cf19f8 
0000000000000000 0000000000001000
(XEN) [2021-06-14 23:02:18]    00007f2d7b27e000 0000000000000000 
0000000000000000 ffffc90040c97e80
(XEN) [2021-06-14 23:02:18]    78c0a401614c7200 0000000000000023 
00007f2d7b27e010 0000000000000000
(XEN) [2021-06-14 23:02:18]    0000000000000000 0000000000000000 
0000000000000000 78c0a401614c7200
(XEN) [2021-06-14 23:02:18]    ffff88810263a300 0000000000000005 
0000000000305000 00007ffd3b70fd90
(XEN) [2021-06-14 23:02:18]    ffff88810263a300 ffffc90040c97f30 
ffffffff812b4b49 0000000000000000
(XEN) [2021-06-14 23:02:18]    ffffc90040c97f58 0000000000000000 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:18]    ffffc90040c97f48 ffffffff81f2f77a 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:18]    ffffffff82000068 0000558758740d30 
0000558758740d40 00007ffd3b70fe10
(XEN) [2021-06-14 23:02:18]    0000000000000001 0000558758741b20 
00007ffd3b70fde0 0000000000000246
(XEN) [2021-06-14 23:02:19]    0000000000000001 0000000000000000 
0000000000000006 ffffffffffffffda
(XEN) [2021-06-14 23:02:19]    00007f2d7b06d417 00007ffd3b70fd90 
0000000000305000 0000000000000005
(XEN) [2021-06-14 23:02:19]    0000000000000010 00007f2d7b06d417 
0000000000000033 0000000000000246
(XEN) [2021-06-14 23:02:19]    00007ffd3b70fd88 000000000000002b
(XEN) [2021-06-14 23:02:19]
(XEN) [2021-06-14 23:02:19] *** Dumping CPU0 host state: ***
(XEN) [2021-06-14 23:02:19] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:19] CPU:    0
(XEN) [2021-06-14 23:02:19] RIP: e008:[<ffff82d0402860d6>] 
on_selected_cpus+0x156/0x180
(XEN) [2021-06-14 23:02:19] RFLAGS: 0000000000000246   CONTEXT: 
hypervisor (d6v0)
(XEN) [2021-06-14 23:02:19] rax: 0000000000000000   rbx: 
ffff82d040d5b478   rcx: 0000000000000100
(XEN) [2021-06-14 23:02:19] rdx: 0000000000000000   rsi: 
000000000000000c   rdi: ffff82d040d5b478
(XEN) [2021-06-14 23:02:19] rbp: ffff83007263fe98   rsp: 
ffff83007263fe70   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:19] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:19] r12: ffff83207fa12000   r13: 
ffff82d0403d3200   r14: 0000000000000001
(XEN) [2021-06-14 23:02:19] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:19] cr3: 000000207f3f8000   cr2: 00007f180f1a6500
(XEN) [2021-06-14 23:02:19] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:19] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:19] Xen code around <ffff82d0402860d6> 
(on_selected_cpus+0x156/0x180):
(XEN) [2021-06-14 23:02:19]  f3 90 8b 35 32 a0 77 00 <48> 89 df e8 a2 fa 
f7 ff 85 c0 74 ec e9 4d ff ff
(XEN) [2021-06-14 23:02:19] Xen stack trace from rsp=ffff83007263fe70:
(XEN) [2021-06-14 23:02:19]    ffff82d04060ed00 ffff83207fa12000 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:19]    ffff83207f3fa000 ffff83007263fec8 
ffff82d0403d3b73 ffff83207fa12000
(XEN) [2021-06-14 23:02:19]    0000000000000001 ffff83207f3fa160 
ffff83103ffe0000 ffff83007263fef0
(XEN) [2021-06-14 23:02:19]    ffff82d0403d9bcf ffff82d0403d9ac0 
ffff83007263fef8 ffff82d040d49098
(XEN) [2021-06-14 23:02:19]    ffff83007263fd60 0000000000000000 
0000000000000001 ffffffffaebc8640
(XEN) [2021-06-14 23:02:19]    ffff9976543aa400 ffff99765434f864 
0000000000000001 ffff9977ea42a364
(XEN) [2021-06-14 23:02:19]    ffff9977ea42a384 ffff99765434f800 
0000000000000001 0000000000004000
(XEN) [2021-06-14 23:02:19]    ffff9977ea42c100 0000000000000001 
ffffffffaebc8640 ffff99765434f864
(XEN) [2021-06-14 23:02:19]    0000beef0000beef ffffffffadc7c76e 
000000bf0000beef 0000000000000246
(XEN) [2021-06-14 23:02:19]    ffffffffaea03e30 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:19]    000000000000beef 000000000000beef 
0000e01000000000 ffff83207fa12000
(XEN) [2021-06-14 23:02:19]    0000000000000000 00000000001526e0 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:19]    0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:19] Xen call trace:
(XEN) [2021-06-14 23:02:19]    [<ffff82d0402860d6>] R 
on_selected_cpus+0x156/0x180
(XEN) [2021-06-14 23:02:19]    [<ffff82d0403d3b73>] F 
arch/x86/hvm/vmx/vmcs.c#vmx_clear_vmcs+0x113/0x120
(XEN) [2021-06-14 23:02:19]    [<ffff82d0403d9bcf>] F 
vmx_do_resume+0x10f/0x560
(XEN) [2021-06-14 23:02:19]
(XEN) [2021-06-14 23:02:19] *** Dumping CPU0 guest state (d6v0): ***
(XEN) [2021-06-14 23:02:19] Segment register inaccessible for d6v0
(XEN) [2021-06-14 23:02:19] (If you see this outside of debugging 
activity, please report to xen-devel@lists.xenproject.org)
(XEN) [2021-06-14 23:02:19] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:19] CPU:    0
(XEN) [2021-06-14 23:02:19] RIP: 0000:[<ffffffffadc7c76e>]
(XEN) [2021-06-14 23:02:19] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d6v0)
(XEN) [2021-06-14 23:02:19] rax: 0000000000004000   rbx: 
0000000000000001   rcx: ffff9977ea42c100
(XEN) [2021-06-14 23:02:19] rdx: 0000000000000001   rsi: 
ffffffffaebc8640   rdi: ffff99765434f864
(XEN) [2021-06-14 23:02:19] rbp: ffff99765434f864   rsp: 
ffffffffaea03e30   r8:  0000000000000001
(XEN) [2021-06-14 23:02:19] r9:  ffff99765434f800   r10: 
ffff9977ea42a384   r11: ffff9977ea42a364
(XEN) [2021-06-14 23:02:19] r12: ffff9976543aa400   r13: 
ffffffffaebc8640   r14: 0000000000000001
(XEN) [2021-06-14 23:02:19] r15: 0000000000000000   cr0: 
0000000080050033   cr4: 00000000001706b0
(XEN) [2021-06-14 23:02:19] cr3: 000000012648a003   cr2: 00007f1aee119010
(XEN) [2021-06-14 23:02:19] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:19] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: 0000
(XEN) [2021-06-14 23:02:19]
(XEN) [2021-06-14 23:02:19] *** Dumping CPU1 host state: ***
(XEN) [2021-06-14 23:02:19] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:19] CPU:    1
(XEN) [2021-06-14 23:02:19] RIP: e008:[<ffff82d04028696a>] 
_spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:19] RFLAGS: 0000000000000202   CONTEXT: 
hypervisor (d7v5)
(XEN) [2021-06-14 23:02:19] rax: 0000000043af43ab   rbx: 
ffff82d040a1c0e4   rcx: 0000000000000001
(XEN) [2021-06-14 23:02:19] rdx: 00000000000043af   rsi: 
0000000000000000   rdi: ffff82d040a1c0ea
(XEN) [2021-06-14 23:02:19] rbp: ffff83103ff37e60   rsp: 
ffff83103ff37e40   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:19] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:19] r12: 0000000043af43ab   r13: 
ffff82d040a1c0ea   r14: 00000000000043af
(XEN) [2021-06-14 23:02:19] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:19] cr3: 0000000fbb822000   cr2: 00007f180d1214b0
(XEN) [2021-06-14 23:02:19] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:19] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:19] Xen code around <ffff82d04028696a> 
(_spin_lock+0xaa/0x140):
(XEN) [2021-06-14 23:02:19]  c1 ea 10 f3 90 66 8b 03 <66> 39 c2 75 f6 4c 
89 ef e8 09 fc ff ff 5b 41 5c
(XEN) [2021-06-14 23:02:19] Xen stack trace from rsp=ffff83103ff37e40:
(XEN) [2021-06-14 23:02:19]    ffff82d04060ed00 ffff8315c1503000 
ffff82d0403d3200 0000000000000001
(XEN) [2021-06-14 23:02:19]    ffff83103ff37e98 ffff82d040285fea 
ffff82d04060ed00 ffff8315c1503000
(XEN) [2021-06-14 23:02:19]    0000000000000000 0000000000000000 
ffff83157a990000 ffff83103ff37ec8
(XEN) [2021-06-14 23:02:19]    ffff82d0403d3b73 ffff8315c1503000 
0000000000000001 ffff83157a990160
(XEN) [2021-06-14 23:02:19]    ffff83103ff4c000 ffff83103ff37ef0 
ffff82d0403d9bcf ffff82d0403d9ac0
(XEN) [2021-06-14 23:02:19]    ffff83103ff37ef8 ffff83103ff42098 
ffff83103ff37d60 ffffffff8576a6c0
(XEN) [2021-06-14 23:02:19]    ffff8881020b1864 ffff88810b58e800 
ffff8881008fd280 ffffc90000157d00
(XEN) [2021-06-14 23:02:19]    ffff8881020b1865 0000000000000000 
ffffed102011fa50 ffff8881008fd287
(XEN) [2021-06-14 23:02:19]    0000000000000000 0000000000004000 
ffffffff83dd94c2 1ffff1102011fa50
(XEN) [2021-06-14 23:02:19]    0000000000000008 ffff8881008fd280 
0000beef0000beef ffffffff83dd8cce
(XEN) [2021-06-14 23:02:19]    000000bf0000beef 0000000000000246 
ffffc90000157cf0 000000000000beef
(XEN) [2021-06-14 23:02:19]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:19]    0000e01000000001 ffff8315c1503000 
0000003fff1f9000 00000000001526e0
(XEN) [2021-06-14 23:02:19]    0000000000000000 0000000000000000 
0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:19] Xen call trace:
(XEN) [2021-06-14 23:02:19]    [<ffff82d04028696a>] R _spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:19]    [<ffff82d040285fea>] F 
on_selected_cpus+0x6a/0x180
(XEN) [2021-06-14 23:02:19]    [<ffff82d0403d3b73>] F 
arch/x86/hvm/vmx/vmcs.c#vmx_clear_vmcs+0x113/0x120
(XEN) [2021-06-14 23:02:19]    [<ffff82d0403d9bcf>] F 
vmx_do_resume+0x10f/0x560
(XEN) [2021-06-14 23:02:19]
(XEN) [2021-06-14 23:02:19] grant_table.c:803:d0v1 Bad flags (0) or dom 
(0); expected d0
(XEN) [2021-06-14 23:02:19] *** Dumping CPU1 guest state (d7v5): ***
(XEN) [2021-06-14 23:02:19] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:19] CPU:    1
(XEN) [2021-06-14 23:02:19] RIP: 0000:[<ffffffff83dd8cce>]
(XEN) [2021-06-14 23:02:19] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d7v5)
(XEN) [2021-06-14 23:02:19] rax: 0000000000004000   rbx: 
ffff8881020b1865   rcx: ffffffff83dd94c2
(XEN) [2021-06-14 23:02:19] rdx: 1ffff1102011fa50   rsi: 
0000000000000008   rdi: ffff8881008fd280
(XEN) [2021-06-14 23:02:19] rbp: ffffc90000157d00   rsp: 
ffffc90000157cf0   r8:  0000000000000000
(XEN) [2021-06-14 23:02:19] r9:  ffff8881008fd287   r10: 
ffffed102011fa50   r11: 0000000000000000
(XEN) [2021-06-14 23:02:19] r12: ffff8881008fd280   r13: 
ffff88810b58e800   r14: ffff8881020b1864
(XEN) [2021-06-14 23:02:19] r15: ffffffff8576a6c0   cr0: 
0000000080050033   cr4: 00000000001706e0
(XEN) [2021-06-14 23:02:19] cr3: 00000001297ba001   cr2: 00007f1808d35dd6
(XEN) [2021-06-14 23:02:19] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:19] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: 0000
(XEN) [2021-06-14 23:02:19]
(XEN) [2021-06-14 23:02:19] *** Dumping CPU2 host state: ***
(XEN) [2021-06-14 23:02:19] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:19] CPU:    2
(XEN) [2021-06-14 23:02:19] RIP: e008:[<ffff82d040286965>] 
_spin_lock+0xa5/0x140
(XEN) [2021-06-14 23:02:19] RFLAGS: 0000000000000216   CONTEXT: 
hypervisor (d6v3)
(XEN) [2021-06-14 23:02:19] rax: 0000000043b243af   rbx: 
ffff82d040a1c0e4   rcx: 0000000000000001
(XEN) [2021-06-14 23:02:19] rdx: 00000000000043b2   rsi: 
0000000000000000   rdi: ffff82d040a1c0ea
(XEN) [2021-06-14 23:02:19] rbp: ffff83103ff27e60   rsp: 
ffff83103ff27e40   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:19] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:19] r12: 0000000043b243af   r13: 
ffff82d040a1c0ea   r14: 00000000000043b2
(XEN) [2021-06-14 23:02:19] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:19] cr3: 00000009bb594000   cr2: 00007f0f19a85000
(XEN) [2021-06-14 23:02:19] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:19] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:20] Xen code around <ffff82d040286965> 
(_spin_lock+0xa5/0x140):
(XEN) [2021-06-14 23:02:20]  74 10 44 89 e2 c1 ea 10 <f3> 90 66 8b 03 66 
39 c2 75 f6 4c 89 ef e8 09 fc
(XEN) [2021-06-14 23:02:20] Xen stack trace from rsp=ffff83103ff27e40:
(XEN) [2021-06-14 23:02:20]    ffff82d04060ec80 ffff8315c13b1000 
ffff82d0403d3200 0000000000000001
(XEN) [2021-06-14 23:02:20]    ffff83103ff27e98 ffff82d040285fea 
ffff82d04060ec80 ffff8315c13b1000
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
ffff83207f3fa000 ffff83103ff27ec8
(XEN) [2021-06-14 23:02:20]    ffff82d0403d3b73 ffff8315c13b1000 
0000000000000001 ffff83207f3fa160
(XEN) [2021-06-14 23:02:20]    ffff83103ff2d000 ffff83103ff27ef0 
ffff82d0403d9bcf ffff82d0403d9ac0
(XEN) [2021-06-14 23:02:20]    ffff83103ff27ef8 ffff83103ff2a098 
ffff83103ff27d60 0000000000000000
(XEN) [2021-06-14 23:02:20]    0000000000000001 ffffffffaebc8640 
ffff996ffb184800 ffff996fc0c37864
(XEN) [2021-06-14 23:02:20]    0000000000000001 ffff9977ea4ea364 
ffff9977ea4ea384 ffff996fc0c37800
(XEN) [2021-06-14 23:02:20]    0000000000000001 0000000000004000 
ffff9977ea4ec100 0000000000000001
(XEN) [2021-06-14 23:02:20]    ffffffffaebc8640 ffff996fc0c37864 
0000beef0000beef ffffffffadc7c76e
(XEN) [2021-06-14 23:02:20]    000000bf0000beef 0000000000000246 
ffffac780009fe58 000000000000beef
(XEN) [2021-06-14 23:02:20]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:20]    0000e01000000002 ffff8315c13b1000 
0000003fff1e1000 00000000001526e0
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:20] Xen call trace:
(XEN) [2021-06-14 23:02:20]    [<ffff82d040286965>] R _spin_lock+0xa5/0x140
(XEN) [2021-06-14 23:02:20]    [<ffff82d040285fea>] F 
on_selected_cpus+0x6a/0x180
(XEN) [2021-06-14 23:02:20]    [<ffff82d0403d3b73>] F 
arch/x86/hvm/vmx/vmcs.c#vmx_clear_vmcs+0x113/0x120
(XEN) [2021-06-14 23:02:20]    [<ffff82d0403d9bcf>] F 
vmx_do_resume+0x10f/0x560
(XEN) [2021-06-14 23:02:20]
(XEN) [2021-06-14 23:02:20] *** Dumping CPU2 guest state (d6v3): ***
(XEN) [2021-06-14 23:02:20] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:20] CPU:    2
(XEN) [2021-06-14 23:02:20] RIP: 0000:[<ffffffffadc7c76e>]
(XEN) [2021-06-14 23:02:20] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d6v3)
(XEN) [2021-06-14 23:02:20] rax: 0000000000004000   rbx: 
0000000000000001   rcx: ffff9977ea4ec100
(XEN) [2021-06-14 23:02:20] rdx: 0000000000000001   rsi: 
ffffffffaebc8640   rdi: ffff996fc0c37864
(XEN) [2021-06-14 23:02:20] rbp: ffff996fc0c37864   rsp: 
ffffac780009fe58   r8:  0000000000000001
(XEN) [2021-06-14 23:02:20] r9:  ffff996fc0c37800   r10: 
ffff9977ea4ea384   r11: ffff9977ea4ea364
(XEN) [2021-06-14 23:02:20] r12: ffff996ffb184800   r13: 
ffffffffaebc8640   r14: 0000000000000001
(XEN) [2021-06-14 23:02:20] r15: 0000000000000000   cr0: 
0000000080050033   cr4: 00000000001706a0
(XEN) [2021-06-14 23:02:20] cr3: 0000000123008006   cr2: 00007f26678167c5
(XEN) [2021-06-14 23:02:20] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:20] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: 0000
(XEN) [2021-06-14 23:02:20]
(XEN) [2021-06-14 23:02:20] *** Dumping CPU3 guest state (d0v2): ***
(XEN) [2021-06-14 23:02:20] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:20] CPU:    3
(XEN) [2021-06-14 23:02:20] RIP: e033:[<ffffffff811412af>]
(XEN) [2021-06-14 23:02:20] RFLAGS: 0000000000000202   EM: 0 CONTEXT: pv 
guest (d0v2)
(XEN) [2021-06-14 23:02:20] rax: 0000000000000011   rbx: 
ffff888487328900   rcx: 0000000000000004
(XEN) [2021-06-14 23:02:20] rdx: ffffffff82c532d0   rsi: 
ffff8884872a4488   rdi: 0000000000000003
(XEN) [2021-06-14 23:02:20] rbp: ffffc900403bfcd0   rsp: 
ffffc900403bfc20   r8:  0000000000000004
(XEN) [2021-06-14 23:02:20] r9:  ffffffff82af5ae8   r10: 
deadbeefdeadf00d   r11: 0000000000000017
(XEN) [2021-06-14 23:02:20] r12: 0000000000000007   r13: 
0000000000000004   r14: ffff8884872a4480
(XEN) [2021-06-14 23:02:20] r15: 0000000000000004   cr0: 
0000000080050033   cr4: 0000000000172660
(XEN) [2021-06-14 23:02:20] cr3: 0000001032a22000   cr2: 00007fd4c3484248
(XEN) [2021-06-14 23:02:20] fsb: 0000000000000000   gsb: 
ffff888487280000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:20] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:20] Guest stack trace from rsp=ffffc900403bfc20:
(XEN) [2021-06-14 23:02:20]    ffffc900403bfc68 ffff8884872a4480 
ffffffff81078f47 ffff8884872a4490
(XEN) [2021-06-14 23:02:20]    0000000700000007 0000000000024440 
0000000101000001 ffff8884872a4480
(XEN) [2021-06-14 23:02:20]    ffffffff81037560 ffff8884872a4488 
0000000200000003 0000000000028900
(XEN) [2021-06-14 23:02:20]    0000000000000000 ffff8884872a4488 
ffffc900403bfcf7 ffff888100076208
(XEN) [2021-06-14 23:02:20]    3a63221a0770e600 ffffffff82e78750 
ffffffff82e786e9 ffffffff82e786e0
(XEN) [2021-06-14 23:02:20]    0000000000000060 0000000000000000 
ffffc900403bfce0 ffffffff8114227d
(XEN) [2021-06-14 23:02:20]    ffffc900403bfd40 ffffffff81037aae 
cc00000000000005 ffffffff82e786e0
(XEN) [2021-06-14 23:02:20]    0000000100000007 0000000000000000 
3a63221a0770e600 ffffffff8284c970
(XEN) [2021-06-14 23:02:20]    ffffffff8284f640 ffffffff8284c968 
ffffffff82ec4220 0000000000000000
(XEN) [2021-06-14 23:02:20]    ffffc900403bfd50 ffffffff8103865a 
ffffc900403bfd60 ffffffff81034895
(XEN) [2021-06-14 23:02:20]    ffffc900403bfd98 ffffffff811de963 
0000000000000002 ffffffff82ec4220
(XEN) [2021-06-14 23:02:20]    ffff888100058100 0000000000000000 
ffff888100050000 ffffc900403bfdb8
(XEN) [2021-06-14 23:02:20]    ffffffff811dea3a ffffffff82ec4220 
0000000000000001 ffffc900403bfdd8
(XEN) [2021-06-14 23:02:20]    ffffffff811df0ea ffff88810632da80 
ffffffff82ec4220 ffffc900403bfdf0
(XEN) [2021-06-14 23:02:20]    ffffffff811df136 0000000000000001 
ffffc900403bfe68 ffffffff8126f234
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
3a63221a0770e600 ffff88810632da80
(XEN) [2021-06-14 23:02:20]    ffffffff82b1b7c0 ffffc900403bfeb8 
ffffffff810b1865 ffff88810632daec
(XEN) [2021-06-14 23:02:20]    ffffffff82b1b7c8 000000000632da80 
ffff88810632daa8 ffff888100050000
(XEN) [2021-06-14 23:02:20]
(XEN) [2021-06-14 23:02:20] *** Dumping CPU4 host state: ***
(XEN) [2021-06-14 23:02:20] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:20] CPU:    4
(XEN) [2021-06-14 23:02:20] RIP: e008:[<ffff82d04028696a>] 
_spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:20] RFLAGS: 0000000000000206   CONTEXT: 
hypervisor (d2v3)
(XEN) [2021-06-14 23:02:20] rax: 0000000043b443b1   rbx: 
ffff82d040a1c0e4   rcx: 0000000000000001
(XEN) [2021-06-14 23:02:20] rdx: 00000000000043b4   rsi: 
0000000000000000   rdi: ffff82d040a1c0ea
(XEN) [2021-06-14 23:02:20] rbp: ffff831037cf7e60   rsp: 
ffff831037cf7e40   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:20] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:20] r12: 0000000043b443af   r13: 
ffff82d040a1c0ea   r14: 00000000000043b4
(XEN) [2021-06-14 23:02:20] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:20] cr3: 0000000b40bea000   cr2: 00007f1aee0f55b8
(XEN) [2021-06-14 23:02:20] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:20] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:20] Xen code around <ffff82d04028696a> 
(_spin_lock+0xaa/0x140):
(XEN) [2021-06-14 23:02:20]  c1 ea 10 f3 90 66 8b 03 <66> 39 c2 75 f6 4c 
89 ef e8 09 fc ff ff 5b 41 5c
(XEN) [2021-06-14 23:02:20] Xen stack trace from rsp=ffff831037cf7e40:
(XEN) [2021-06-14 23:02:20]    ffff82d04060eca0 ffff83207f327000 
ffff82d0403d3200 0000000000000001
(XEN) [2021-06-14 23:02:20]    ffff831037cf7e98 ffff82d040285fea 
ffff82d04060eca0 ffff83207f327000
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
ffff830b9af2f000 ffff831037cf7ec8
(XEN) [2021-06-14 23:02:20]    ffff82d0403d3b73 ffff83207f327000 
0000000000000001 ffff830b9af2f160
(XEN) [2021-06-14 23:02:20]    ffff831037cff000 ffff831037cf7ef0 
ffff82d0403d9bcf ffff82d0403d9ac0
(XEN) [2021-06-14 23:02:20]    ffff831037cf7ef8 ffff83103ff02098 
ffff831037cf7d60 0000000000000000
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
0000000000000000 0000000000000003
(XEN) [2021-06-14 23:02:20]    ffff888568b62200 0000000000000001 
0000000000000001 000000000000001c
(XEN) [2021-06-14 23:02:20]    00000000ffffffff 0000000000000003 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:20]    ffffffff8229231a ffffffff82298a3c 
0000beef0000beef ffffffff81ac19ae
(XEN) [2021-06-14 23:02:20]    000000bf0000beef 0000000000000286 
ffffc900031c3eb0 000000000000beef
(XEN) [2021-06-14 23:02:20]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:20]    0000e01000000004 ffff83207f327000 
0000003fff1b9000 00000000001526e0
(XEN) [2021-06-14 23:02:20]    0000000000000000 0000000000000000 
0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:20] Xen call trace:
(XEN) [2021-06-14 23:02:20]    [<ffff82d04028696a>] R _spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:20]    [<ffff82d040285fea>] F 
on_selected_cpus+0x6a/0x180
(XEN) [2021-06-14 23:02:20]    [<ffff82d0403d3b73>] F 
arch/x86/hvm/vmx/vmcs.c#vmx_clear_vmcs+0x113/0x120
(XEN) [2021-06-14 23:02:20]    [<ffff82d0403d9bcf>] F 
vmx_do_resume+0x10f/0x560
(XEN) [2021-06-14 23:02:20]
(XEN) [2021-06-14 23:02:20] *** Dumping CPU4 guest state (d2v3): ***
(XEN) [2021-06-14 23:02:20] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:20] CPU:    4
(XEN) [2021-06-14 23:02:20] RIP: 0000:[<ffffffff81ac19ae>]
(XEN) [2021-06-14 23:02:20] RFLAGS: 0000000000000286   CONTEXT: hvm 
guest (d2v3)
(XEN) [2021-06-14 23:02:20] rax: 0000000000000003   rbx: 
ffff888568b62200   rcx: 0000000000000000
(XEN) [2021-06-14 23:02:20] rdx: 0000000000000000   rsi: 
ffffffff8229231a   rdi: ffffffff82298a3c
(XEN) [2021-06-14 23:02:20] rbp: 0000000000000003   rsp: 
ffffc900031c3eb0   r8:  00000000ffffffff
(XEN) [2021-06-14 23:02:20] r9:  000000000000001c   r10: 
0000000000000001   r11: 0000000000000001
(XEN) [2021-06-14 23:02:20] r12: 0000000000000000   r13: 
0000000000000000   r14: 0000000000000000
(XEN) [2021-06-14 23:02:21] r15: 0000000000000000   cr0: 
0000000080050033   cr4: 00000000001606e0
(XEN) [2021-06-14 23:02:21] cr3: 0000000002421003   cr2: 00007f19e8a8b160
(XEN) [2021-06-14 23:02:21] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:21] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: 0000
(XEN) [2021-06-14 23:02:21]
(XEN) [2021-06-14 23:02:21] *** Dumping CPU5 host state: ***
(XEN) [2021-06-14 23:02:21] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:21] CPU:    5
(XEN) [2021-06-14 23:02:21] RIP: e008:[<ffff82d0403ac8d1>] 
arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960
(XEN) [2021-06-14 23:02:21] RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) [2021-06-14 23:02:21] rax: 0000000000000001   rbx: 
ffff831037cee8b0   rcx: 00000f5cd348ea6c
(XEN) [2021-06-14 23:02:21] rdx: ffff831037ce7fd0   rsi: 
0000000000000000   rdi: ffff831037cee8e0
(XEN) [2021-06-14 23:02:21] rbp: ffff831037ce7e98   rsp: 
ffff831037ce7e20   r8:  00000000000106ac
(XEN) [2021-06-14 23:02:21] r9:  ffff831037ce7e40   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:21] r12: 0000000000000002   r13: 
00000f5cd348ea6c   r14: 0000000000000060
(XEN) [2021-06-14 23:02:21] r15: ffff831037cee948   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:21] cr3: 000000157a7b0000   cr2: 00007fd234ee3000
(XEN) [2021-06-14 23:02:21] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:21] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:21] Xen code around <ffff82d0403ac8d1> 
(arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960):
(XEN) [2021-06-14 23:02:21]  fd ff fb 45 0f b6 67 01 <b8> 1f 00 00 00 89 
c2 44 29 e2 b8 01 00 00 00 c4
(XEN) [2021-06-14 23:02:21] Xen stack trace from rsp=ffff831037ce7e20:
(XEN) [2021-06-14 23:02:21]    0000000000000028 ffff82d040d6b068 
0000000537ce7e88 ffff831037cee948
(XEN) [2021-06-14 23:02:21]    d348ea6c00000002 00000000000000ff 
0000000000000000 00000000000000ff
(XEN) [2021-06-14 23:02:21]    0000000000000000 000003990000038c 
0000000000000005 ffff82d040d53400
(XEN) [2021-06-14 23:02:21]    ffff82d040d53680 0000000000007fff 
ffff82d040d48930 ffff831037ce7ef0
(XEN) [2021-06-14 23:02:21]    ffff82d0404d6609 ffff82d040d6b068 
0000000000000028 0000000000000030
(XEN) [2021-06-14 23:02:21]    ffff831037ce9930 ffff82d0404d62c0 
ffff831037ce7ef8 0000000000000028
(XEN) [2021-06-14 23:02:21]    ffff83157a66d000 ffff83103ffe1000 
ffff831037ce7db0 0000000000000001
(XEN) [2021-06-14 23:02:21]    0000000000000001 ffff888560a03064 
ffff888560a03000 ffffffff82a03d90
(XEN) [2021-06-14 23:02:21]    0000000000000001 ffff888564e2b3e4 
00000000000003ab 00000000000003a8
(XEN) [2021-06-14 23:02:21]    00000f1963680fd4 0000000000004000 
00000000ffffffff 4ec4ec4ec4ec4ec5
(XEN) [2021-06-14 23:02:21]    ffffffff82b78d40 ffff88855f2c8400 
0000beef0000beef ffffffff81beb36e
(XEN) [2021-06-14 23:02:21]    000000bf0000beef 0000000000000246 
ffffffff82a03d88 000000000000beef
(XEN) [2021-06-14 23:02:21]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:21]    0000e01000000005 ffff831037cec000 
0000003ff6fa1000 00000000001526e0
(XEN) [2021-06-14 23:02:21]    0000000000000000 0000000000000000 
0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:21] Xen call trace:
(XEN) [2021-06-14 23:02:21]    [<ffff82d0403ac8d1>] R 
arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960
(XEN) [2021-06-14 23:02:21]    [<ffff82d0404d6609>] F 
arch/x86/domain.c#idle_loop+0x349/0x360
(XEN) [2021-06-14 23:02:21]
(XEN) [2021-06-14 23:02:21] *** Dumping CPU6 host state: ***
(XEN) [2021-06-14 23:02:21] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:21] CPU:    6
(XEN) [2021-06-14 23:02:21] RIP: e008:[<ffff82d0403ac8d1>] 
arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960
(XEN) [2021-06-14 23:02:21] RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) [2021-06-14 23:02:21] rax: 0000000000000001   rbx: 
ffff831037cd3610   rcx: 00000f5ce5a6182e
(XEN) [2021-06-14 23:02:21] rdx: ffff83207ca0ffd0   rsi: 
0000000000000000   rdi: ffff831037cd3640
(XEN) [2021-06-14 23:02:21] rbp: ffff83207ca0fe98   rsp: 
ffff83207ca0fe20   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:21] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:21] r12: 0000000000000001   r13: 
00000f5ce5a6182e   r14: 0000000000000020
(XEN) [2021-06-14 23:02:21] r15: ffff831037cd3668   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:21] cr3: 000000157a7af000   cr2: 0000074f9a672c18
(XEN) [2021-06-14 23:02:21] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:21] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:21] Xen code around <ffff82d0403ac8d1> 
(arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960):
(XEN) [2021-06-14 23:02:21]  fd ff fb 45 0f b6 67 01 <b8> 1f 00 00 00 89 
c2 44 29 e2 b8 01 00 00 00 c4
(XEN) [2021-06-14 23:02:21] Xen stack trace from rsp=ffff83207ca0fe20:
(XEN) [2021-06-14 23:02:21]    0000000000000030 ffff82d040d6b070 
000000067ca0fe88 ffff831037cd3668
(XEN) [2021-06-14 23:02:21]    e5a6182e00000001 00000000000000ff 
0000000000000000 00000000000000ff
(XEN) [2021-06-14 23:02:21]    0000000000000000 00001d020000004e 
0000000000000006 ffff82d040d53400
(XEN) [2021-06-14 23:02:21]    ffff82d040d53700 0000000000007fff 
ffff82d040d48930 ffff83207ca0fef0
(XEN) [2021-06-14 23:02:21]    ffff82d0404d6609 ffff82d040d6b070 
0000000000000030 0000000000000038
(XEN) [2021-06-14 23:02:21]    ffff831037cd5930 ffff82d0404d62c0 
ffff83207ca0fef8 ffff831037cd8000
(XEN) [2021-06-14 23:02:21]    ffff831037cd57ac ffff831037cd3010 
ffff83207ca0fd60 0000000000000001
(XEN) [2021-06-14 23:02:21]    0000000000000001 ffff888560a00464 
ffff888560a00400 ffffc90000087df8
(XEN) [2021-06-14 23:02:21]    0000000000000001 ffff888564e6b3e4 
00000000000077a1 00000000ffffffff
(XEN) [2021-06-14 23:02:21]    00000f1975576a0d 0000000000004000 
00000000ffffffff 4ec4ec4ec4ec4ec5
(XEN) [2021-06-14 23:02:21]    ffffffff82b78d40 ffff88855f2c9800 
0000beef0000beef ffffffff81beb36e
(XEN) [2021-06-14 23:02:21]    000000bf0000beef 0000000000000246 
ffffc90000087df0 000000000000beef
(XEN) [2021-06-14 23:02:21]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:21]    0000e01000000006 ffff831037cd8000 
0000003ff6f8d000 00000000001526e0
(XEN) [2021-06-14 23:02:21]    0000000000000000 0000000000000000 
0000060000000000 0000000000000000
(XEN) [2021-06-14 23:02:21] Xen call trace:
(XEN) [2021-06-14 23:02:21]    [<ffff82d0403ac8d1>] R 
arch/x86/cpu/mwait-idle.c#mwait_idle+0x5b1/0x960
(XEN) [2021-06-14 23:02:21]    [<ffff82d0404d6609>] F 
arch/x86/domain.c#idle_loop+0x349/0x360
(XEN) [2021-06-14 23:02:21]
(XEN) [2021-06-14 23:02:21] *** Dumping CPU7 host state: ***
(XEN) [2021-06-14 23:02:21] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:21] CPU:    7
(XEN) [2021-06-14 23:02:21] RIP: e008:[<ffff82d04053ed16>] 
flush_area_mask+0x336/0x360
(XEN) [2021-06-14 23:02:21] RFLAGS: 0000000000000246   CONTEXT: 
hypervisor (d5v5)
(XEN) [2021-06-14 23:02:21] rax: 0000000000000000   rbx: 
ffff82d040d6b8e0   rcx: 0000000000000830
(XEN) [2021-06-14 23:02:21] rdx: 0000000000000000   rsi: 
000000000000000c   rdi: ffff82d040d6b8e0
(XEN) [2021-06-14 23:02:21] rbp: ffff83207ca1fe40   rsp: 
ffff83207ca1fe08   r8:  0000000000006c88
(XEN) [2021-06-14 23:02:21] r9:  ffff83207ca1fe58   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:21] r12: ffff82d04060ed40   r13: 
ffff83207ca1fe50   r14: 0000000000000007
(XEN) [2021-06-14 23:02:21] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:21] cr3: 000000157a7ab000   cr2: 00007fd239d41000
(XEN) [2021-06-14 23:02:21] fsb: 0000000000000000   gsb: 
ffff888487200000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:21] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: e010   cs: e008
(XEN) [2021-06-14 23:02:21] Xen code around <ffff82d04053ed16> 
(flush_area_mask+0x336/0x360):
(XEN) [2021-06-14 23:02:21]  f3 90 8b 35 f2 13 4c 00 <48> 89 df e8 62 6e 
cc ff 85 c0 74 ec 48 8d 3d 97
(XEN) [2021-06-14 23:02:21] Xen stack trace from rsp=ffff83207ca1fe08:
(XEN) [2021-06-14 23:02:21]    0000000000000000 0000000000000000 
ffff82d04060ed40 ffff83207ca1fe50
(XEN) [2021-06-14 23:02:21]    0000000000000007 ffff82d040d53780 
ffff83207ca1fef8 ffff83207ca1fe80
(XEN) [2021-06-14 23:02:21]    ffff82d04053ee94 0000000000000f7f 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:21]    0000000000000000 0000000000000004 
ffff82d040d53780 ffff83207ca1fed8
(XEN) [2021-06-14 23:02:21]    ffff82d040284d00 ffff83207ca1ffc0 
00007d2fbf2acc04 0000000000000000
(XEN) [2021-06-14 23:02:21]    0000000000000000 ffff83157a627000 
00000000001526e0 0000000000000000
(XEN) [2021-06-14 23:02:21]    ffff831037cc9000 ffff83157a7c1000 
ffff83207ca1fee8 ffff82d040285013
(XEN) [2021-06-14 23:02:21]    ffff83207ca1fef0 ffff82d0403f73ab 
0000000000000001 0000000000000001
(XEN) [2021-06-14 23:02:21]    ffff888560a03864 ffff888560a03800 
ffffc900000a7df8 0000000000000001
(XEN) [2021-06-14 23:02:21]    ffff888564f6b3e4 00000000000003c6 
0000000000000390 00000f1966cd8cc2
(XEN) [2021-06-14 23:02:21]    0000000000004000 00000000ffffffff 
4ec4ec4ec4ec4ec5 ffffffff82b78d40
(XEN) [2021-06-14 23:02:21]    ffff88855f2ca400 0000beef0000beef 
ffffffff81beb36e 000000bf0000beef
(XEN) [2021-06-14 23:02:21]    0000000000000246 ffffc900000a7df0 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:21]    000000000000beef 000000000000beef 
000000000000beef 0000e01000000007
(XEN) [2021-06-14 23:02:21]    ffff83157a627000 0000003ff6f7d000 
00000000001526e0 0000000000000000
(XEN) [2021-06-14 23:02:21]    0000000000000000 0000060000000000 
0000000000000000
(XEN) [2021-06-14 23:02:21] Xen call trace:
(XEN) [2021-06-14 23:02:21]    [<ffff82d04053ed16>] R 
flush_area_mask+0x336/0x360
(XEN) [2021-06-14 23:02:21]    [<ffff82d04053ee94>] F 
new_tlbflush_clock_period+0x154/0x1c0
(XEN) [2021-06-14 23:02:21]    [<ffff82d040284d00>] F 
common/softirq.c#__do_softirq+0x200/0x340
(XEN) [2021-06-14 23:02:21]    [<ffff82d040285013>] F do_softirq+0x13/0x20
(XEN) [2021-06-14 23:02:21]    [<ffff82d0403f73ab>] F 
vmx_asm_do_vmentry+0x2b/0x30
(XEN) [2021-06-14 23:02:21]
(XEN) [2021-06-14 23:02:21] *** Dumping CPU7 guest state (d5v5): ***
(XEN) [2021-06-14 23:02:21] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:22] CPU:    7
(XEN) [2021-06-14 23:02:22] RIP: 0010:[<ffffffff81beb36e>]
(XEN) [2021-06-14 23:02:22] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d5v5)
(XEN) [2021-06-14 23:02:22] rax: 0000000000004000   rbx: 
0000000000000001   rcx: 00000000ffffffff
(XEN) [2021-06-14 23:02:22] rdx: 4ec4ec4ec4ec4ec5   rsi: 
ffffffff82b78d40   rdi: ffff88855f2ca400
(XEN) [2021-06-14 23:02:22] rbp: ffffc900000a7df8   rsp: 
ffffc900000a7df0   r8:  00000f1966cd8cc2
(XEN) [2021-06-14 23:02:22] r9:  0000000000000390   r10: 
00000000000003c6   r11: ffff888564f6b3e4
(XEN) [2021-06-14 23:02:22] r12: ffff888560a03800   r13: 
ffff888560a03864   r14: 0000000000000001
(XEN) [2021-06-14 23:02:22] r15: 0000000000000001   cr0: 
0000000080050033   cr4: 00000000001606e0
(XEN) [2021-06-14 23:02:22] cr3: 000000054a0ca004   cr2: 00007fd239d41000
(XEN) [2021-06-14 23:02:22] fsb: 0000000000000000   gsb: 
ffff888564f40000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:22] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0018   cs: 0010
(XEN) [2021-06-14 23:02:22]
(XEN) [2021-06-14 23:02:22] *** Dumping CPU8 host state: ***
(XEN) [2021-06-14 23:02:22] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:22] CPU:    8
(XEN) [2021-06-14 23:02:22] RIP: e008:[<ffff82d04028696a>] 
_spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:22] RFLAGS: 0000000000000202   CONTEXT: 
hypervisor (d7v5)
(XEN) [2021-06-14 23:02:22] rax: 0000000030cf30cd   rbx: 
ffff82d040a33bc0   rcx: 0000000000000000
(XEN) [2021-06-14 23:02:22] rdx: 00000000000030cf   rsi: 
0000000000000000   rdi: ffff82d040a33bc6
(XEN) [2021-06-14 23:02:22] rbp: ffff83207ca17d00   rsp: 
ffff83207ca17ce0   r8:  000000000001a900
(XEN) [2021-06-14 23:02:22] r9:  ffff83207ca17d4c   r10: 
0000000000000000   r11: 0000000000000000
(XEN) [2021-06-14 23:02:22] r12: 0000000030cf30cc   r13: 
ffff82d040a33bc6   r14: 00000000000030cf
(XEN) [2021-06-14 23:02:22] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:22] cr3: 0000000fbb822000   cr2: 00007f1808d35dd6
(XEN) [2021-06-14 23:02:22] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:22] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:22] Xen code around <ffff82d04028696a> 
(_spin_lock+0xaa/0x140):
(XEN) [2021-06-14 23:02:22]  c1 ea 10 f3 90 66 8b 03 <66> 39 c2 75 f6 4c 
89 ef e8 09 fc ff ff 5b 41 5c
(XEN) [2021-06-14 23:02:22] Xen stack trace from rsp=ffff83207ca17ce0:
(XEN) [2021-06-14 23:02:22]    0000000000001000 ffff82d04060ed60 
ffff82d04060ec80 0000000000000008
(XEN) [2021-06-14 23:02:22]    ffff83207ca17d48 ffff82d04053ec09 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:22]    ffff83207f340000 ffff82d04060ec80 
ffff83157a990000 ffff8315c1503000
(XEN) [2021-06-14 23:02:22]    ffff830b9af2f000 ffff83207ca17db0 
ffff82d0404db85c 0000000000000282
(XEN) [2021-06-14 23:02:22]    ffff83103ff29a00 0000000000000002 
ffff83207ca17db0 0000000000000000
(XEN) [2021-06-14 23:02:22]    ffff82d000000008 ffff83207f340000 
ffff83207ca17ef8 ffff8315c1503000
(XEN) [2021-06-14 23:02:22]    ffff831037cb97ac 0000000000000008 
ffff83207ca17df8 ffff82d04031c329
(XEN) [2021-06-14 23:02:22]    0000000500000007 ffff830b9af64470 
ffff830b9af2f000 ffff83207f340000
(XEN) [2021-06-14 23:02:22]    ffff8315c151df20 ffff83157a990000 
0000000000000008 ffff83207ca17e80
(XEN) [2021-06-14 23:02:22]    ffff82d04031faff ffff82d040d6b080 
00007d2fbf294fc8 ffff82d040d49a98
(XEN) [2021-06-14 23:02:22]    ffff82d040d49a98 00000f5ce8b53c6b 
ffff82d000000001 ffff8315c1503000
(XEN) [2021-06-14 23:02:22]    0000000000000040 0000000000000048 
ffff831037c89570 0000000000000003
(XEN) [2021-06-14 23:02:22]    ffff82d040d53800 0000000000000008 
ffff82d040d53800 ffff83207ca17ef8
(XEN) [2021-06-14 23:02:22]    ffff83207ca17ed8 ffff82d040284d00 
ffff83207ca17fc0 00007d2fbf2acc04
(XEN) [2021-06-14 23:02:22]    0000000000000000 0000000000000000 
ffff8315c1503000 0000000000000000
(XEN) [2021-06-14 23:02:22]    0000000000000000 0000000000000000 
0000000000000000 ffff83207ca17ee8
(XEN) [2021-06-14 23:02:22]    ffff82d040285013 00007cdf835e80e7 
ffff82d0403f73ab ffffffff8576a6c0
(XEN) [2021-06-14 23:02:22]    ffff8881020b1864 ffff88810b58e800 
ffff8881008fd280 ffffc90000157d00
(XEN) [2021-06-14 23:02:22]    ffff8881020b1865 0000000000000000 
ffffed102011fa50 ffff8881008fd287
(XEN) [2021-06-14 23:02:22]    0000000000000000 0000000000004000 
ffffffff83dd94c2 1ffff1102011fa50
(XEN) [2021-06-14 23:02:22] Xen call trace:
(XEN) [2021-06-14 23:02:22]    [<ffff82d04028696a>] R _spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:22]    [<ffff82d04053ec09>] F 
flush_area_mask+0x229/0x360
(XEN) [2021-06-14 23:02:22]    [<ffff82d0404db85c>] F 
context_switch+0x37c/0x2160
(XEN) [2021-06-14 23:02:22]    [<ffff82d04031c329>] F 
common/sched/core.c#sched_context_switch+0x1c9/0x940
(XEN) [2021-06-14 23:02:22]    [<ffff82d04031faff>] F 
common/sched/core.c#schedule+0xd1f/0x1080
(XEN) [2021-06-14 23:02:22]    [<ffff82d040284d00>] F 
common/softirq.c#__do_softirq+0x200/0x340
(XEN) [2021-06-14 23:02:22]    [<ffff82d040285013>] F do_softirq+0x13/0x20
(XEN) [2021-06-14 23:02:22]    [<ffff82d0403f73ab>] F 
vmx_asm_do_vmentry+0x2b/0x30
(XEN) [2021-06-14 23:02:22]
(XEN) [2021-06-14 23:02:22] *** Dumping CPU8 guest state (d7v5): ***
(XEN) [2021-06-14 23:02:22] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:22] CPU:    8
(XEN) [2021-06-14 23:02:22] RIP: 0010:[<ffffffff83dd8cce>]
(XEN) [2021-06-14 23:02:22] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d7v5)
(XEN) [2021-06-14 23:02:22] rax: 0000000000004000   rbx: 
ffff8881020b1865   rcx: ffffffff83dd94c2
(XEN) [2021-06-14 23:02:22] rdx: 1ffff1102011fa50   rsi: 
0000000000000008   rdi: ffff8881008fd280
(XEN) [2021-06-14 23:02:22] rbp: ffffc90000157d00   rsp: 
ffffc90000157cf0   r8:  0000000000000000
(XEN) [2021-06-14 23:02:22] r9:  ffff8881008fd287   r10: 
ffffed102011fa50   r11: 0000000000000000
(XEN) [2021-06-14 23:02:22] r12: ffff8881008fd280   r13: 
ffff88810b58e800   r14: ffff8881020b1864
(XEN) [2021-06-14 23:02:22] r15: ffffffff8576a6c0   cr0: 
0000000080050033   cr4: 00000000001706e0
(XEN) [2021-06-14 23:02:22] cr3: 00000001297ba001   cr2: 00007f1808d35dd6
(XEN) [2021-06-14 23:02:22] fsb: 0000000000000000   gsb: 
ffff8884d3a80000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:22] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0018   cs: 0010
(XEN) [2021-06-14 23:02:22]
(XEN) [2021-06-14 23:02:22] *** Dumping CPU10 host state: ***
(XEN) [2021-06-14 23:02:22] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:22] CPU:    10
(XEN) [2021-06-14 23:02:22] RIP: e008:[<ffff82d04028696a>] 
_spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:22] RFLAGS: 0000000000000212   CONTEXT: 
hypervisor (d6v1)
(XEN) [2021-06-14 23:02:22] rax: 0000000030d330cf   rbx: 
ffff82d040a33bc0   rcx: 0000000000000000
(XEN) [2021-06-14 23:02:22] rdx: 00000000000030d3   rsi: 
0000000000000000   rdi: ffff82d040a33bc6
(XEN) [2021-06-14 23:02:22] rbp: ffff83207cba7df8   rsp: 
ffff83207cba7dd8   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:22] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:22] r12: 0000000030d330cf   r13: 
ffff82d040a33bc6   r14: 00000000000030d3
(XEN) [2021-06-14 23:02:22] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:22] cr3: 00000009bbbc9000   cr2: 00007f1aec175000
(XEN) [2021-06-14 23:02:22] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:22] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:22] Xen code around <ffff82d04028696a> 
(_spin_lock+0xaa/0x140):
(XEN) [2021-06-14 23:02:22]  c1 ea 10 f3 90 66 8b 03 <66> 39 c2 75 f6 4c 
89 ef e8 09 fc ff ff 5b 41 5c
(XEN) [2021-06-14 23:02:22] Xen stack trace from rsp=ffff83207cba7dd8:
(XEN) [2021-06-14 23:02:22]    0000000000000100 ffff82d04060eda0 
ffff83207cba7e50 000000000000000a
(XEN) [2021-06-14 23:02:22]    ffff83207cba7e40 ffff82d04053ec09 
0000000000000000 0000000000000000
(XEN) [2021-06-14 23:02:22]    ffff82d04060eda0 ffff83207cba7e50 
000000000000000a ffff82d040d53900
(XEN) [2021-06-14 23:02:22]    ffff83207cba7ef8 ffff83207cba7e80 
ffff82d04053ee94 0000000000000bff
(XEN) [2021-06-14 23:02:22]    0000000000000000 0000000000000000 
0000000000000000 0000000000000004
(XEN) [2021-06-14 23:02:22]    ffff82d040d53900 ffff83207cba7ed8 
ffff82d040284d00 ffff83207cba7fc0
(XEN) [2021-06-14 23:02:22]    00007d2fbf2acc04 0000000000000000 
0000000000000000 ffff83207fa0a000
(XEN) [2021-06-14 23:02:22]    00000000001526e0 0000000000000000 
ffff831037ca2000 ffff83207f3fa000
(XEN) [2021-06-14 23:02:22]    ffff83207cba7ee8 ffff82d040285013 
ffff83207cba7ef0 ffff82d0403f73ab
(XEN) [2021-06-14 23:02:22]    0000000000000000 0000000000000001 
ffffffffaebc8640 ffff996ffb187800
(XEN) [2021-06-14 23:02:22]    ffff996fc0c35064 0000000000000001 
ffff9977ea46a364 ffff9977ea46a384
(XEN) [2021-06-14 23:02:22]    ffff996fc0c35000 0000000000000001 
0000000000004000 ffff9977ea46c100
(XEN) [2021-06-14 23:02:22]    0000000000000001 ffffffffaebc8640 
ffff996fc0c35064 0000beef0000beef
(XEN) [2021-06-14 23:02:22]    ffffffffadc7c76e 000000bf0000beef 
0000000000000246 ffffac780008fe58
(XEN) [2021-06-14 23:02:22]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:22]    000000000000beef 0000e0100000000a 
ffff83207fa0a000 0000003ff6f55000
(XEN) [2021-06-14 23:02:22]    00000000001526e0 0000000000000000 
0000000000000000 0000060000000000
(XEN) [2021-06-14 23:02:22]    0000000000000000
(XEN) [2021-06-14 23:02:22] Xen call trace:
(XEN) [2021-06-14 23:02:22]    [<ffff82d04028696a>] R _spin_lock+0xaa/0x140
(XEN) [2021-06-14 23:02:22]    [<ffff82d04053ec09>] F 
flush_area_mask+0x229/0x360
(XEN) [2021-06-14 23:02:22]    [<ffff82d04053ee94>] F 
new_tlbflush_clock_period+0x154/0x1c0
(XEN) [2021-06-14 23:02:22]    [<ffff82d040284d00>] F 
common/softirq.c#__do_softirq+0x200/0x340
(XEN) [2021-06-14 23:02:22]    [<ffff82d040285013>] F do_softirq+0x13/0x20
(XEN) [2021-06-14 23:02:22]    [<ffff82d0403f73ab>] F 
vmx_asm_do_vmentry+0x2b/0x30
(XEN) [2021-06-14 23:02:22]
(XEN) [2021-06-14 23:02:22] *** Dumping CPU10 guest state (d6v1): ***
(XEN) [2021-06-14 23:02:22] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:22] CPU:    10
(XEN) [2021-06-14 23:02:22] RIP: 0010:[<ffffffffadc7c76e>]
(XEN) [2021-06-14 23:02:22] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d6v1)
(XEN) [2021-06-14 23:02:23] rax: 0000000000004000   rbx: 
0000000000000001   rcx: ffff9977ea46c100
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000001   rsi: 
ffffffffaebc8640   rdi: ffff996fc0c35064
(XEN) [2021-06-14 23:02:23] rbp: ffff996fc0c35064   rsp: 
ffffac780008fe58   r8:  0000000000000001
(XEN) [2021-06-14 23:02:23] r9:  ffff996fc0c35000   r10: 
ffff9977ea46a384   r11: ffff9977ea46a364
(XEN) [2021-06-14 23:02:23] r12: ffff996ffb187800   r13: 
ffffffffaebc8640   r14: 0000000000000001
(XEN) [2021-06-14 23:02:23] r15: 0000000000000000   cr0: 
0000000080050033   cr4: 00000000001706a0
(XEN) [2021-06-14 23:02:23] cr3: 000000012492a006   cr2: 00007f1aec175000
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: 
ffff9977ea440000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0018   cs: 0010
(XEN) [2021-06-14 23:02:23]
(XEN) [2021-06-14 23:02:23] *** Dumping CPU11 host state: ***
(XEN) [2021-06-14 23:02:23] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:23] CPU:    11
(XEN) [2021-06-14 23:02:23] RIP: e008:[<ffff82d040205c6a>] 
__bitmap_empty+0xea/0x1a0
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000286   CONTEXT: 
hypervisor (d5v2)
(XEN) [2021-06-14 23:02:23] rax: 0000000000000001   rbx: 
0000000000000000   rcx: 0000000000000100
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000000   rsi: 
000000000000000c   rdi: ffff82d040d5b478
(XEN) [2021-06-14 23:02:23] rbp: ffff83207ca37e60   rsp: 
ffff83207ca37e38   r8:  1234567890abcdef
(XEN) [2021-06-14 23:02:23] r9:  1234567890abcdef   r10: 
1234567890abcdef   r11: 1234567890abcdef
(XEN) [2021-06-14 23:02:23] r12: ffff83157a650000   r13: 
ffff82d040d5b478   r14: 0000000000000000
(XEN) [2021-06-14 23:02:23] r15: 000000000000000c   cr0: 
0000000080050033   cr4: 00000000001526e0
(XEN) [2021-06-14 23:02:23] cr3: 000000157a7ae000   cr2: 00007f0f19a85000
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: e008
(XEN) [2021-06-14 23:02:23] Xen code around <ffff82d040205c6a> 
(__bitmap_empty+0xea/0x1a0):
(XEN) [2021-06-14 23:02:23]  85 ff 74 34 49 8b 5d 00 <44> 89 fa c1 fa 1f 
c1 ea 1a 41 8d 04 17 83 e0 3f
(XEN) [2021-06-14 23:02:23] Xen stack trace from rsp=ffff83207ca37e38:
(XEN) [2021-06-14 23:02:23]    ffff82d040d5b478 ffff83157a650000 
ffff82d0403d3200 0000000000000001
(XEN) [2021-06-14 23:02:23]    000000000000000c ffff83207ca37e98 
ffff82d0402860de ffff82d04060eda0
(XEN) [2021-06-14 23:02:23]    ffff83157a650000 0000000000000000 
0000000000000000 ffff83157a7c1000
(XEN) [2021-06-14 23:02:23]    ffff83207ca37ec8 ffff82d0403d3b73 
ffff83157a650000 0000000000000001
(XEN) [2021-06-14 23:02:23]    ffff83157a7c1160 ffff831037c95000 
ffff83207ca37ef0 ffff82d0403d9bcf
(XEN) [2021-06-14 23:02:23]    ffff82d0403d9ac0 ffff83207ca37ef8 
ffff831037c92098 ffff83207ca37d60
(XEN) [2021-06-14 23:02:23]    0000000000000001 0000000000000001 
ffff888560a02864 ffff888560a02800
(XEN) [2021-06-14 23:02:23]    ffffc9000008fdf8 0000000000000001 
ffff888564eab3e4 00000000000003b3
(XEN) [2021-06-14 23:02:23]    00000000ffffffff 00000f19a2a78e02 
0000000000004000 00000000ffffffff
(XEN) [2021-06-14 23:02:23]    4ec4ec4ec4ec4ec5 ffffffff82b78d40 
ffff88855f2cb800 0000beef0000beef
(XEN) [2021-06-14 23:02:23]    ffffffff81beb36e 000000bf0000beef 
0000000000000246 ffffc9000008fdf0
(XEN) [2021-06-14 23:02:23]    000000000000beef 000000000000beef 
000000000000beef 000000000000beef
(XEN) [2021-06-14 23:02:23]    000000000000beef 0000e0100000000b 
ffff83157a650000 0000003ff6f49000
(XEN) [2021-06-14 23:02:23]    00000000001526e0 0000000000000000 
0000000000000000 0000060000000000
(XEN) [2021-06-14 23:02:23]    0000000000000000
(XEN) [2021-06-14 23:02:23] Xen call trace:
(XEN) [2021-06-14 23:02:23]    [<ffff82d040205c6a>] R 
__bitmap_empty+0xea/0x1a0
(XEN) [2021-06-14 23:02:23]    [<ffff82d0402860de>] F 
on_selected_cpus+0x15e/0x180
(XEN) [2021-06-14 23:02:23]    [<ffff82d0403d3b73>] F 
arch/x86/hvm/vmx/vmcs.c#vmx_clear_vmcs+0x113/0x120
(XEN) [2021-06-14 23:02:23]    [<ffff82d0403d9bcf>] F 
vmx_do_resume+0x10f/0x560
(XEN) [2021-06-14 23:02:23]
(XEN) [2021-06-14 23:02:23] *** Dumping CPU11 guest state (d5v2): ***
(XEN) [2021-06-14 23:02:23] ----[ Xen-4.15.1-pre  x86_64  debug=y 
ubsan=y  Not tainted ]----
(XEN) [2021-06-14 23:02:23] CPU:    11
(XEN) [2021-06-14 23:02:23] RIP: 0000:[<ffffffff81beb36e>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000246   CONTEXT: hvm 
guest (d5v2)
(XEN) [2021-06-14 23:02:23] rax: 0000000000004000   rbx: 
0000000000000001   rcx: 00000000ffffffff
(XEN) [2021-06-14 23:02:23] rdx: 4ec4ec4ec4ec4ec5   rsi: 
ffffffff82b78d40   rdi: ffff88855f2cb800
(XEN) [2021-06-14 23:02:23] rbp: ffffc9000008fdf8   rsp: 
ffffc9000008fdf0   r8:  00000f19a2a78e02
(XEN) [2021-06-14 23:02:23] r9:  00000000ffffffff   r10: 
00000000000003b3   r11: ffff888564eab3e4
(XEN) [2021-06-14 23:02:23] r12: ffff888560a02800   r13: 
ffff888560a02864   r14: 0000000000000001
(XEN) [2021-06-14 23:02:23] r15: 0000000000000001   cr0: 
0000000080050033   cr4: 00000000001606e0
(XEN) [2021-06-14 23:02:23] cr3: 00000004948f8006   cr2: 00007fd240899000
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: 
0000000000000000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: 0000   cs: 0000
(XEN) [2021-06-14 23:02:23]
(XEN) [2021-06-14 23:02:23] [0: dump Dom0 registers]
(XEN) [2021-06-14 23:02:23] '0' pressed -> dumping Dom0's registers
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#0 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff810d2100>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000246   EM: 1 CONTEXT: pv 
guest (d0v0)
(XEN) [2021-06-14 23:02:23] rax: 00000f4525a0a280   rbx: 
ffff88813b602c00   rcx: 0000000000000001
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000000   rsi: 
ffff88816f701e80   rdi: ffff88816f701f40
(XEN) [2021-06-14 23:02:23] rbp: ffffc900400036f8   rsp: 
ffffc900400036b0   r8:  0000000000000000
(XEN) [2021-06-14 23:02:23] r9:  0000000000b71b00   r10: 
ffff88816f701f50   r11: ffff8884872e3570
(XEN) [2021-06-14 23:02:23] r12: ffff88816f701f40   r13: 
ffff88813bdbdb80   r14: ffff8884872e34c0
(XEN) [2021-06-14 23:02:23] r15: 0000000000000008   cr0: 
0000000000000000   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 00000009ac4ad000   cr2: 000055f224bcbf80
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: 
ffff888487200000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   
ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc900400036b0:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e 
[arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#1 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff81851080>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000282   EM: 0 CONTEXT: pv guest (d0v1)
(XEN) [2021-06-14 23:02:23] rax: ffffffff81851080   rbx: ffffc900401dd150   rcx: 0000000000000002
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000056   rsi: 00000009b04d18c0   rdi: ffff8881013650c8
(XEN) [2021-06-14 23:02:23] rbp: ffffc9004001edd8   rsp: ffffc9004001edd0   r8:  ffff8881013650c8
(XEN) [2021-06-14 23:02:23] r9:  0000000000000002   r10: 0000000000000056   r11: 0000000000000040
(XEN) [2021-06-14 23:02:23] r12: ffff8881019a0b40   r13: ffff88810c6040e0   r14: 0000000000000000
(XEN) [2021-06-14 23:02:23] r15: ffff8881019a0b40   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 0000000940515000   cr2: 0000559342580000
(XEN) [2021-06-14 23:02:23] fsb: 00007f6ebdcfd140   gsb: ffff888487240000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc9004001edd0:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#2 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff811412ac>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000202   EM: 0 CONTEXT: pv guest (d0v2)
(XEN) [2021-06-14 23:02:23] rax: 0000000000000011   rbx: ffff888487328900   rcx: 0000000000000004
(XEN) [2021-06-14 23:02:23] rdx: ffffffff82c532d0   rsi: ffff8884872a4488   rdi: 0000000000000003
(XEN) [2021-06-14 23:02:23] rbp: ffffc900403bfcd0   rsp: ffffc900403bfc20   r8:  0000000000000004
(XEN) [2021-06-14 23:02:23] r9:  ffffffff82af5ae8   r10: deadbeefdeadf00d   r11: 0000000000000017
(XEN) [2021-06-14 23:02:23] r12: 0000000000000007   r13: 0000000000000004   r14: ffff8884872a4480
(XEN) [2021-06-14 23:02:23] r15: 0000000000000004   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 0000001032a22000   cr2: 00007fd4c3484248
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: ffff888487280000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc900403bfc20:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#3 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff810f8d2f>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000206   EM: 1 CONTEXT: pv guest (d0v3)
(XEN) [2021-06-14 23:02:23] rax: 0000000000000003   rbx: ffff8884872e41c0   rcx: 0000000000000000
(XEN) [2021-06-14 23:02:23] rdx: 0000000000006813   rsi: 000000000000005f   rdi: 0000000000000018
(XEN) [2021-06-14 23:02:23] rbp: ffffc90040190c80   rsp: ffffc90040190c48   r8:  0000000000000000
(XEN) [2021-06-14 23:02:23] r9:  ffffffff82af5ae8   r10: 0000000100fc9c34   r11: ffffc90040190ff8
(XEN) [2021-06-14 23:02:23] r12: ffff8884872e34c0   r13: 0000000000000000   r14: 0000000000000001
(XEN) [2021-06-14 23:02:23] r15: 0000000000000100   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 0000000984da6000   cr2: 00007f82d79f39b0
(XEN) [2021-06-14 23:02:23] fsb: 00007fc3662a5c00   gsb: ffff8884872c0000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc90040190c48:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#4 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff810f8e0f>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000202   EM: 1 CONTEXT: pv guest (d0v4)
(XEN) [2021-06-14 23:02:23] rax: 000000000000011f   rbx: ffff8884873241c0   rcx: 0000000000000001
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000140000
(XEN) [2021-06-14 23:02:23] rbp: ffffc900401bca08   rsp: ffffc900401bc9d0   r8:  0000000000140000
(XEN) [2021-06-14 23:02:23] r9:  0000000000000003   r10: 0000000000000001   r11: 0000000000000020
(XEN) [2021-06-14 23:02:23] r12: ffff8884872e34c0   r13: ffff8884872e41c0   r14: ffff8884873241d4
(XEN) [2021-06-14 23:02:23] r15: 0000000000000001   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 0000000984805000   cr2: 0000558758755708
(XEN) [2021-06-14 23:02:23] fsb: 00007f2d7acbbc80   gsb: ffff888487300000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc900401bc9d0:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#5 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff81736eee>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000010206   EM: 0 CONTEXT: pv guest (d0v5)
(XEN) [2021-06-14 23:02:23] rax: 00007f6aca3baa44   rbx: 0000000000001000   rcx: 0000000000000b00
(XEN) [2021-06-14 23:02:23] rdx: 0000000000001000   rsi: 00007f6aca3b9f44   rdi: ffff8881c3a7f500
(XEN) [2021-06-14 23:02:23] rbp: ffffc90045537ca8   rsp: ffffc90045537c18   r8:  0000000000000000
(XEN) [2021-06-14 23:02:23] r9:  ffff888000000000   r10: ffffea0000000000   r11: 000000000000142b
(XEN) [2021-06-14 23:02:23] r12: ffff8881c3a7f000   r13: ffffc90045537e18   r14: 0000000000001000
(XEN) [2021-06-14 23:02:23] r15: 0000000000008818   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 0000000940d59000   cr2: 0000559867294f80
(XEN) [2021-06-14 23:02:23] fsb: 00007f6aca51c400   gsb: ffff888487340000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc90045537c18:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#6 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff81028e5c>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000246   EM: 1 CONTEXT: pv guest (d0v6)
(XEN) [2021-06-14 23:02:23] rax: 0000000000000000   rbx: 0000000000000006   rcx: 00000000c0000102
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000000   rsi: ffffc90042fefb80   rdi: 00000000c0000102
(XEN) [2021-06-14 23:02:23] rbp: ffffc90042fefbd0   rsp: ffffc90042fefbb8   r8:  0000000000000000
(XEN) [2021-06-14 23:02:23] r9:  ffff88816f65dc68   r10: 00000000000003bb   r11: 0000000000000000
(XEN) [2021-06-14 23:02:23] r12: 00000000c0000102   r13: ffff88810017db80   r14: ffff88810017f5c0
(XEN) [2021-06-14 23:02:23] r15: ffff88813be11a40   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 00000009bc448000   cr2: 00007fa4910dbd08
(XEN) [2021-06-14 23:02:23] fsb: 0000000000000000   gsb: ffff888487380000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc90042fefbb8:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] *** Dumping Dom0 vcpu#7 state: ***
(XEN) [2021-06-14 23:02:23] RIP: e033:[<ffffffff81002468>]
(XEN) [2021-06-14 23:02:23] RFLAGS: 0000000000000286   EM: 0 CONTEXT: pv guest (d0v7)
(XEN) [2021-06-14 23:02:23] rax: 0000000000000023   rbx: ffff88810b9ded01   rcx: ffffffff8100246a
(XEN) [2021-06-14 23:02:23] rdx: 0000000000000000   rsi: 0000000000000000   rdi: 00007f434a04f010
(XEN) [2021-06-14 23:02:23] rbp: ffffc90040c17ef8   rsp: ffffc90040c17e10   r8:  0000000000000000
(XEN) [2021-06-14 23:02:23] r9:  0000000000000004   r10: 0000000000000000   r11: 0000000000000286
(XEN) [2021-06-14 23:02:23] r12: 0000000000000004   r13: 00007ffcc2ebf0d0   r14: 00007ffcc2ebf0d0
(XEN) [2021-06-14 23:02:23] r15: ffff88810b9ded00   cr0: 0000000080050033   cr4: 0000000000050660
(XEN) [2021-06-14 23:02:23] cr3: 00000009a7e4b000   cr2: 00007f6aca74b000
(XEN) [2021-06-14 23:02:23] fsb: 00007f4349df1000   gsb: ffff8884873c0000   gss: 0000000000000000
(XEN) [2021-06-14 23:02:23] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) [2021-06-14 23:02:23] Guest stack trace from rsp=ffffc90040c17e10:
(XEN) [2021-06-14 23:02:23]   <G><1>Fixup #GP[0000]: ffff82d04055064e [arch/x86/traps.c#show_guest_stack+0x60e/0x6c0] -> ffff82d0405ca947
(XEN) [2021-06-14 23:02:23] Fault while accessing guest memory.
(XEN) [2021-06-14 23:02:23] [H: dump heap info]
(XEN) [2021-06-14 23:02:23] 'H' pressed -> dumping heap info (now = 16893840499559)
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=0] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=1] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=2] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=3] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=4] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=5] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=6] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=7] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=8] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=9] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=10] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=11] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=12] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=13] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=14] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=15] -> 16128 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=16] -> 32768 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=17] -> 65536 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=18] -> 131072 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=19] -> 229409 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=20] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=21] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=22] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=23] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=24] -> 1572720 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=25] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=26] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=27] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=28] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=29] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=30] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=31] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=32] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=33] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=34] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=35] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=36] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=37] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=38] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=39] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=0][zone=40] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=0] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=1] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=2] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=3] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=4] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=5] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=6] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=7] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=8] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=9] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=10] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=11] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=12] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=13] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=14] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=15] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=16] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=17] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=18] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=19] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=20] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=21] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=22] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=23] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=24] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=25] -> 495793 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=26] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=27] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=28] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=29] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=30] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=31] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=32] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=33] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=34] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=35] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=36] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=37] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=38] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=39] -> 0 pages
(XEN) [2021-06-14 23:02:23] heap[node=1][zone=40] -> 0 pages
(XEN) [2021-06-14 23:02:23] [I: dump HVM irq info]
(XEN) [2021-06-14 23:02:23] 'I' pressed -> dumping HVM irq info
(XEN) [2021-06-14 23:02:23] Domain 2:
(XEN) [2021-06-14 23:02:23] PCI 0x00000000000000000000000000000000 ISA 0x00000001 ROUTE 5 10 11 5
(XEN) [2021-06-14 23:02:23] grant_table.c:803:d0v3 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:23] GSI [0 - 7] 00 00 01 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [8 - f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [10 - 17] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [18 - 1f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [20 - 27] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [28 - 2f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] Link 00 00 00 00
(XEN) [2021-06-14 23:02:23] Callback via 3:0xf3, not asserted
(XEN) [2021-06-14 23:02:23] Domain 3:
(XEN) [2021-06-14 23:02:23] PCI 0x00000000000000000000000000000000 ISA 0x00000100 ROUTE 0 0 0 0
(XEN) [2021-06-14 23:02:23] GSI [0 - 7] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [8 - f] 01 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [10 - 17] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [18 - 1f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [20 - 27] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [28 - 2f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] Link 00 00 00 00
(XEN) [2021-06-14 23:02:23] Callback via 3:0xf3, not asserted
(XEN) [2021-06-14 23:02:23] Domain 5:
(XEN) [2021-06-14 23:02:23] PCI 0x00000000000000000000000000000000 ISA 0x00000100 ROUTE 0 0 0 0
(XEN) [2021-06-14 23:02:23] GSI [0 - 7] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [8 - f] 01 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [10 - 17] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [18 - 1f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [20 - 27] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [28 - 2f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] Link 00 00 00 00
(XEN) [2021-06-14 23:02:23] Callback via 3:0xf3, not asserted
(XEN) [2021-06-14 23:02:23] Domain 6:
(XEN) [2021-06-14 23:02:23] PCI 0x00000000000000000000000000000000 ISA 0x00000000 ROUTE 0 0 0 0
(XEN) [2021-06-14 23:02:23] GSI [0 - 7] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [8 - f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [10 - 17] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [18 - 1f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [20 - 27] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [28 - 2f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] Link 00 00 00 00
(XEN) [2021-06-14 23:02:23] Callback via 3:0xf3, not asserted
(XEN) [2021-06-14 23:02:23] Domain 7:
(XEN) [2021-06-14 23:02:23] PCI 0x00000000000000000000000000000000 ISA 0x00000100 ROUTE 0 0 0 0
(XEN) [2021-06-14 23:02:23] GSI [0 - 7] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [8 - f] 01 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [10 - 17] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [18 - 1f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [20 - 27] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] GSI [28 - 2f] 00 00 00 00 00 00 00 00
(XEN) [2021-06-14 23:02:23] Link 00 00 00 00
(XEN) [2021-06-14 23:02:23] Callback via 3:0xf3, not asserted
(XEN) [2021-06-14 23:02:23] [M: dump MSI state]
(XEN) [2021-06-14 23:02:23] MSI information:
(XEN) [2021-06-14 23:02:23]  IOMMU   72 vec=b0 lowest  edge assert  log lowest dest=00010555 mask=1/  /?
(XEN) [2021-06-14 23:02:23]  IOMMU   73 vec=38 lowest  edge assert  log lowest dest=00000001 mask=1/  /?
(XEN) [2021-06-14 23:02:23]  MSI     74 vec=49 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     75 vec=61 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     76 vec=79 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     77 vec=91 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     78 vec=b1 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     79 vec=c1 lowest  edge assert  log lowest dest=00000555 mask=0/  /?
(XEN) [2021-06-14 23:02:23]  MSI     80 vec=c9 lowest  edge assert  log lowest dest=00000555 mask=0/  /?
(XEN) [2021-06-14 23:02:23]  MSI     81 vec=d1 lowest  edge assert  log lowest dest=00000555 mask=0/  /?
(XEN) [2021-06-14 23:02:23]  MSI     82 vec=d9 lowest  edge assert  log lowest dest=00000555 mask=0/  /?
(XEN) [2021-06-14 23:02:23]  MSI     83 vec=e1 lowest  edge assert  log lowest dest=00000555 mask=0/  /?
(XEN) [2021-06-14 23:02:23]  MSI     84 vec=42 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI     85 vec=62 lowest  edge assert  log lowest dest=00000555 mask=1/H /1
(XEN) [2021-06-14 23:02:23]  MSI-X   86 vec=ec lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:23]  MSI-X   87 vec=b2 lowest  edge assert  log lowest dest=00010010 mask=1/  /0
(XEN) [2021-06-14 23:02:23]  MSI-X   88 vec=2d lowest  edge assert  log lowest dest=00010010 mask=1/  /0
(XEN) [2021-06-14 23:02:23]  MSI-X   89 vec=9a lowest  edge assert  log lowest dest=00010001 mask=1/  /0
(XEN) [2021-06-14 23:02:23]  MSI-X   90 vec=35 lowest  edge assert  log lowest dest=00010001 mask=1/  /0
(XEN) [2021-06-14 23:02:23]  MSI-X   91 vec=cf lowest  edge assert  log lowest dest=00000010 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X   92 vec=4d lowest  edge assert  log lowest dest=00000040 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X   93 vec=45 lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI     94 vec=3d lowest  edge assert  log lowest dest=00010100 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI     95 vec=d7 lowest  edge assert  log lowest dest=00010040 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI     96 vec=df lowest  edge assert  log lowest dest=00010040 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI-X   97 vec=8d lowest  edge assert  log lowest dest=00010004 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X   98 vec=65 lowest  edge assert  log lowest dest=00010400 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X   99 vec=bc lowest  edge assert  log lowest dest=00010100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  100 vec=c7 lowest  edge assert  log lowest dest=00000010 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  101 vec=e4 lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  102 vec=6a lowest  edge assert  log lowest dest=00000400 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  103 vec=cc lowest  edge assert  log lowest dest=00000010 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  104 vec=3c lowest  edge assert  log lowest dest=00010010 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  105 vec=84 lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  106 vec=d4 lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI    107 vec=9c lowest  edge assert  log lowest dest=00000400 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI    108 vec=5d lowest  edge assert  log lowest dest=00010100 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI-X  109 vec=64 lowest  edge assert  log lowest dest=00000040 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  110 vec=9b lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  111 vec=a3 lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  112 vec=ab lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  113 vec=b3 lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  114 vec=d3 lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  115 vec=db lowest  edge assert  log lowest dest=00000100 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  116 vec=71 lowest  edge assert  log lowest dest=00010004 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  117 vec=63 lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  118 vec=6b lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  119 vec=73 lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  120 vec=7b lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  121 vec=83 lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI-X  122 vec=8b lowest  edge assert  log lowest dest=00000001 mask=1/  /0
(XEN) [2021-06-14 23:02:24]  MSI    123 vec=23 lowest  edge assert  log lowest dest=00010100 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI    124 vec=5c lowest  edge assert  log lowest dest=00010400 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI    125 vec=4c lowest  edge assert  log lowest dest=00000400 mask=0/  /?
(XEN) [2021-06-14 23:02:24]  MSI    126 vec=6d lowest  edge assert  log lowest dest=00010004 mask=0/  /?
(XEN) [2021-06-14 23:02:24] [Q: dump PCI devices]
(XEN) [2021-06-14 23:02:24] ==== PCI devices ====
(XEN) [2021-06-14 23:02:24] ==== segment 0000 ====
(XEN) [2021-06-14 23:02:24] 0000:ff:1f.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1f.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1e.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1e.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1e.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1e.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:1e.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:17.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:17.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:17.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:17.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:17.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:16.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:16.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:16.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:15.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:15.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:15.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:15.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:14.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:13.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:12.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:12.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:10.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:10.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:10.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:10.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:10.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0f.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0f.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0f.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0f.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0f.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0c.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0b.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0b.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:0b.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:09.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:09.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:09.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:08.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:08.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:ff:08.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:82:00.1 - d7 - node 1   - MSIs < 125 >
(XEN) [2021-06-14 23:02:24] 0000:82:00.0 - d7 - node 1   - MSIs < 124 >
(XEN) [2021-06-14 23:02:24] 0000:81:00.1 - d5 - node 1   - MSIs < 126 >
(XEN) [2021-06-14 23:02:24] 0000:81:00.0 - d5 - node 1   - MSIs < 123 >
(XEN) [2021-06-14 23:02:24] 0000:80:05.4 - d0 - node 1   - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:80:05.2 - d0 - node 1   - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:80:05.1 - d0 - node 1   - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:80:05.0 - d0 - node 1   - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:80:03.0 - d0 - node 1   - MSIs < 85 >
(XEN) [2021-06-14 23:02:24] 0000:80:02.0 - d0 - node 1   - MSIs < 84 >
(XEN) [2021-06-14 23:02:24] 0000:7f:1f.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1f.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1e.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1e.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1e.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1e.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:1e.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:17.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:17.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:17.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:17.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:17.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:16.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:16.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:16.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:15.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:15.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:15.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:15.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:14.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:13.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:12.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:12.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:24] 0000:7f:10.7 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:10.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:10.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:10.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:10.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0f.6 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0f.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0f.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0f.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0f.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.5 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.4 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0c.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0b.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0b.1 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:0b.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:09.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:09.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:09.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:08.3 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:08.2 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:7f:08.0 - d0 - node -1  - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:0b:00.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:0a:00.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:09:00.0 - d0 - node 0   - MSIs < 96 >
(XEN) [2021-06-14 23:02:25] 0000:08:00.0 - d6 - node 0   - MSIs < 102 103 104 105 106 >
(XEN) [2021-06-14 23:02:25] 0000:07:00.0 - d0 - node 0   - MSIs < 97 98 99 100 101 >
(XEN) [2021-06-14 23:02:25] 0000:05:00.0 - d0 - node 0   - MSIs < 86 87 88 89 90 91 92 93 >
(XEN) [2021-06-14 23:02:25] 0000:04:00.0 - d7 - node 0   - MSIs < 109 110 111 112 113 114 115 >
(XEN) [2021-06-14 23:02:25] 0000:03:00.0 - d2 - node 0   - MSIs < 108 >
(XEN) [2021-06-14 23:02:25] 0000:02:00.0 - d5 - node 0   - MSIs < 116 117 118 119 120 121 122 >
(XEN) [2021-06-14 23:02:25] 0000:00:1f.6 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:1f.3 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:1f.2 - d0 - node 0   - MSIs < 95 >
(XEN) [2021-06-14 23:02:25] 0000:00:1f.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:1d.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:1c.7 - d0 - node 0   - MSIs < 83 >
(XEN) [2021-06-14 23:02:25] 0000:00:1c.4 - d0 - node 0   - MSIs < 82 >
(XEN) [2021-06-14 23:02:25] 0000:00:1c.3 - d0 - node 0   - MSIs < 81 >
(XEN) [2021-06-14 23:02:25] 0000:00:1c.2 - d0 - node 0   - MSIs < 80 >
(XEN) [2021-06-14 23:02:25] 0000:00:1c.0 - d0 - node 0   - MSIs < 79 >
(XEN) [2021-06-14 23:02:25] 0000:00:1b.0 - d0 - node 0   - MSIs < 107 >
(XEN) [2021-06-14 23:02:25] 0000:00:1a.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:16.1 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:16.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:14.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:11.4 - d0 - node 0   - MSIs < 94 >
(XEN) [2021-06-14 23:02:25] 0000:00:11.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:05.4 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:05.2 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:05.1 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:05.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] 0000:00:03.0 - d0 - node 0   - MSIs < 78 >
(XEN) [2021-06-14 23:02:25] 0000:00:02.2 - d0 - node 0   - MSIs < 77 >
(XEN) [2021-06-14 23:02:25] 0000:00:02.0 - d0 - node 0   - MSIs < 76 >
(XEN) [2021-06-14 23:02:25] 0000:00:01.1 - d0 - node 0   - MSIs < 75 >
(XEN) [2021-06-14 23:02:25] 0000:00:01.0 - d0 - node 0   - MSIs < 74 >
(XEN) [2021-06-14 23:02:25] 0000:00:00.0 - d0 - node 0   - MSIs < >
(XEN) [2021-06-14 23:02:25] [V: dump iommu info]
(XEN) [2021-06-14 23:02:25]
(XEN) [2021-06-14 23:02:25] iommu 0: nr_pt_levels = 4.
(XEN) [2021-06-14 23:02:25]   Queued Invalidation: supported and enabled.
(XEN) [2021-06-14 23:02:25]   Interrupt Remapping: supported and enabled.
(XEN) [2021-06-14 23:02:25]   Interrupt Posting: not supported.
(XEN) [2021-06-14 23:02:25]   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN) [2021-06-14 23:02:25] R means remapped format, P means posted format.
(XEN) [2021-06-14 23:02:25] R:       SVT  SQ   SID  V  AVL FPD      DST DLM TM RH DM P
(XEN) [2021-06-14 23:02:25] P:       SVT  SQ   SID  V  AVL FPD              PDA  URG P
(XEN) [2021-06-14 23:02:25] R:  0000:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0001:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0002:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0003:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0004:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0005:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0006:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0007:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0008:  1   0  802c 37    0   0 00010001   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0009:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  000a:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  000b:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  000c:  1   0  802c 3f    0   0 00010001   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000d:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  000e:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  000f:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0010:  1   0  802c ee    0   0 00010400   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0011:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0012:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0013:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0014:  1   0  802c 27    0   0 00010400   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0015:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0016:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0017:  1   0  802c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0018:  1   0  8010 42    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0019:  1   0  8018 62    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001a:  1   0  8100 23    0   0 00010100   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001b:  1   0  8200 5c    0   0 00010400   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001c:  1   0  8201 4c    0   0 00000400   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001d:  1   0  8101 6d    0   0 00010010   1  0  1  1 1
(XEN) [2021-06-14 23:02:25]
(XEN) [2021-06-14 23:02:25] iommu 1: nr_pt_levels = 4.
(XEN) [2021-06-14 23:02:25]   Queued Invalidation: supported and enabled.
(XEN) [2021-06-14 23:02:25]   Interrupt Remapping: supported and enabled.
(XEN) [2021-06-14 23:02:25]   Interrupt Posting: not supported.
(XEN) [2021-06-14 23:02:25]   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN) [2021-06-14 23:02:25] R means remapped format, P means posted format.
(XEN) [2021-06-14 23:02:25] R:       SVT  SQ   SID  V  AVL FPD      DST DLM TM RH DM P
(XEN) [2021-06-14 23:02:25] P:       SVT  SQ   SID  V  AVL FPD              PDA  URG P
(XEN) [2021-06-14 23:02:25] R:  0000:  1   0  f0ff 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0001:  1   0  f0ff a3    0   0 00010004   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0002:  1   0  f0ff f0    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0003:  1   0  f0ff 47    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0004:  1   0  f0ff f1    0   0 00010555   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0005:  1   0  f0ff 50    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0006:  1   0  f0ff 58    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0007:  1   0  f0ff 60    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0008:  1   0  f0ff 8c    0   0 00000040   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0009:  1   0  f0ff c0    0   0 00000010   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000a:  1   0  f0ff 78    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000b:  1   0  f0ff 88    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000c:  1   0  f0ff 90    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000d:  1   0  f0ff 98    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000e:  1   0  f0ff a0    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  000f:  1   0  f0ff a8    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0010:  1   0  f0ff b9    0   0 00000555   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0011:  1   0  f0ff 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0012:  1   0  f0ff c4    0   0 00000010   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0013:  1   0  f0ff 7d    0   0 00010001   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0014:  1   0  f0ff 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0015:  1   0  f0ff 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0016:  1   0  f0ff bb    0   0 00000555   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0017:  1   0  f0ff 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0018:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0019:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  001a:  1   0  002c d8    0   0 00000555   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001b:  1   0  002c 2f    0   0 00010001   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  001c:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  001d:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  001e:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  001f:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0020:  1   0  002c e8    0   0 00000010   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0021:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0022:  1   0  002c e6    0   0 00010400   1  1  1  1 1
(XEN) [2021-06-14 23:02:25] R:  0023:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:25] R:  0024:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  0025:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  0026:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  0027:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  0028:  1   0  002c 99    0   0 00000555   1  1  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0029:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002a:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002b:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002c:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002d:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002e:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  002f:  1   0  002c 00    0   0 00000000   0  0  0  0 1
(XEN) [2021-06-14 23:02:26] R:  0030:  1   0  0008 49    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0031:  1   0  0009 61    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0032:  1   0  0010 79    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0033:  1   0  0012 91    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0034:  1   0  0018 b1    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0035:  1   0  00e0 c1    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0036:  1   0  00e2 c9    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0037:  1   0  00e3 d1    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0038:  1   0  00e4 d9    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0039:  1   0  00e7 e1    0   0 00000555   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003a:  1   0  0500 ec    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003b:  1   0  0500 b2    0   0 00010010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003c:  1   0  0500 2d    0   0 00010010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003d:  1   0  0500 9a    0   0 00010001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003e:  1   0  0500 35    0   0 00010400   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  003f:  1   0  0500 cf    0   0 00000010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0040:  1   0  0500 4d    0   0 00000400   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0041:  1   0  0500 45    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0042:  1   0  008c 3d    0   0 00010100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0043:  1   0  00fa d7    0   0 00010040   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0044:  1   0  0900 df    0   0 00010040   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0045:  1   0  0700 8d    0   0 00010004   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0046:  1   0  0700 75    0   0 00000040   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0047:  1   0  0700 bc    0   0 00010001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0048:  1   0  0700 c7    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0049:  1   0  0700 e4    0   0 00000010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004a:  1   0  0800 6a    0   0 00000400   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004b:  1   0  0800 cc    0   0 00000010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004c:  1   0  0800 3c    0   0 00010010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004d:  1   0  0800 84    0   0 00000010   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004e:  1   0  0800 d4    0   0 00000040   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  004f:  1   0  00d8 9c    0   0 00000400   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0050:  1   0  0300 5d    0   0 00010400   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0051:  1   0  0400 64    0   0 00000040   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0052:  1   0  0400 9b    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0053:  1   0  0400 a3    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0054:  1   0  0400 ab    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0055:  1   0  0400 b3    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0056:  1   0  0400 d3    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0057:  1   0  0400 db    0   0 00000100   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0058:  1   0  0200 71    0   0 00010004   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  0059:  1   0  0200 63    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  005a:  1   0  0200 6b    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  005b:  1   0  0200 73    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  005c:  1   0  0200 7b    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  005d:  1   0  0200 83    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26] R:  005e:  1   0  0200 8b    0   0 00000001   1  0  1  1 1
(XEN) [2021-06-14 23:02:26]
(XEN) [2021-06-14 23:02:26] Redirection table of IOAPIC 0:
(XEN) [2021-06-14 23:02:26]   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN) [2021-06-14 23:02:26]    00:  0000   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    01:  0001   1    0   0   0   0 0    0     a3
(XEN) [2021-06-14 23:02:26]    02:  0002   1    0   0   0   0 0    0     f0
(XEN) [2021-06-14 23:02:26]    03:  0003   1    0   0   0   0 0    0     47
(XEN) [2021-06-14 23:02:26]    04:  0004   1    0   0   0   0 0    0     f1
(XEN) [2021-06-14 23:02:26]    05:  0005   1    0   0   0   0 0    0     50
(XEN) [2021-06-14 23:02:26]    06:  0006   1    0   0   0   0 0    0     58
(XEN) [2021-06-14 23:02:26]    07:  0007   1    0   0   0   0 0    0     60
(XEN) [2021-06-14 23:02:26]    08:  0008   1    0   0   0   0 0    0     8c
(XEN) [2021-06-14 23:02:26]    09:  0009   1    0   1   0   0 0    0     c0
(XEN) [2021-06-14 23:02:26]    0a:  000a   1    0   0   0   0 0    0     78
(XEN) [2021-06-14 23:02:26]    0b:  000b   1    0   0   0   0 0    0     88
(XEN) [2021-06-14 23:02:26]    0c:  000c   1    0   0   0   0 0    0     90
(XEN) [2021-06-14 23:02:26]    0d:  000d   1    1   0   0   0 0    0     98
(XEN) [2021-06-14 23:02:26]    0e:  000e   1    0   0   0   0 0    0     a0
(XEN) [2021-06-14 23:02:26]    0f:  000f   1    0   0   0   0 0    0     a8
(XEN) [2021-06-14 23:02:26]    10:  0010   1    1   1   0   1 0    0     b9
(XEN) [2021-06-14 23:02:26]    11:  0011   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    12:  0012   1    0   1   0   1 0    0     c4
(XEN) [2021-06-14 23:02:26]    13:  0013   1    0   1   0   1 0    0     7d
(XEN) [2021-06-14 23:02:26]    14:  0014   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    15:  0015   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    16:  0016   1    1   1   0   1 0    0     bb
(XEN) [2021-06-14 23:02:26]    17:  0017   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]
(XEN) [2021-06-14 23:02:26] Redirection table of IOAPIC 1:
(XEN) [2021-06-14 23:02:26]   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN) [2021-06-14 23:02:26]    00:  0018   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    01:  0019   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    02:  001a   1    1   1   0   1 0    0     d8
(XEN) [2021-06-14 23:02:26]    03:  001b   1    0   1   0   1 0    0     2f
(XEN) [2021-06-14 23:02:26]    04:  001c   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    05:  001d   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    06:  001e   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    07:  001f   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    08:  0020   1    0   1   0   1 0    0     e8
(XEN) [2021-06-14 23:02:26]    09:  0021   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0a:  0022   1    0   1   0   1 0    0     e6
(XEN) [2021-06-14 23:02:26]    0b:  0023   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0c:  0024   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0d:  0025   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0e:  0026   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0f:  0027   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    10:  0028   1    1   1   0   1 0    0     99
(XEN) [2021-06-14 23:02:26]    11:  0029   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    12:  002a   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    13:  002b   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    14:  002c   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    15:  002d   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    16:  002e   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    17:  002f   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]
(XEN) [2021-06-14 23:02:26] Redirection table of IOAPIC 2:
(XEN) [2021-06-14 23:02:26]   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN) [2021-06-14 23:02:26]    00:  0000   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    01:  0001   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    02:  0002   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    03:  0003   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    04:  0004   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    05:  0005   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    06:  0006   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    07:  0007   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    08:  0008   1    0   1   0   1 0    0     37
(XEN) [2021-06-14 23:02:26]    09:  0009   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0a:  000a   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0b:  000b   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0c:  000c   1    0   1   0   1 0    0     3f
(XEN) [2021-06-14 23:02:26]    0d:  000d   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0e:  000e   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    0f:  000f   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    10:  0010   1    0   1   0   1 0    0     ee
(XEN) [2021-06-14 23:02:26]    11:  0011   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    12:  0012   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    13:  0013   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    14:  0014   1    0   1   0   1 0    0     27
(XEN) [2021-06-14 23:02:26]    15:  0015   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    16:  0016   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26]    17:  0017   1    1   0   0   0 0    0     00
(XEN) [2021-06-14 23:02:26] [a: dump timer queues]
(XEN) [2021-06-14 23:02:26] Dumping timer queues:
(XEN) [2021-06-14 23:02:27] CPU00:
(XEN) [2021-06-14 23:02:27]   ex=        9007us timer=ffff83103ffe2f10 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=      542407us timer=ffff82d040d7cca0 cb=arch/x86/time.c#time_calibration(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=       45381us timer=ffff830b1e05b068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff830b1e05b000)
(XEN) [2021-06-14 23:02:27]   ex=    12633206us timer=ffff82d040d63e40 cb=arch/x86/cpu/mcheck/non-fatal.c#mce_work_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=    56011703us timer=ffff82d040d7cc00 cb=arch/x86/time.c#plt_overflow(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=       67982us timer=ffff831037c34068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c34000)
(XEN) [2021-06-14 23:02:27] CPU01:
(XEN) [2021-06-14 23:02:27]   ex=       77746us timer=ffff831dd0829068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831dd0829000)
(XEN) [2021-06-14 23:02:27]   ex=       78079us timer=ffff83103ff4eca0 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=      174852us timer=ffff831037c31068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c31000)
(XEN) [2021-06-14 23:02:27]   ex=       81494us timer=ffff83207fa12068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83207fa12000)
(XEN) [2021-06-14 23:02:27]   ex=       78483us timer=ffff83157a627068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a627000)
(XEN) [2021-06-14 23:02:27] CPU02:
(XEN) [2021-06-14 23:02:27]   ex=      145400us timer=ffff83103ff3fa20 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27] CPU03:
(XEN) [2021-06-14 23:02:27]   ex=      154300us timer=ffff83103ff1a780 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=      175853us timer=ffff831037c34068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c34000)
(XEN) [2021-06-14 23:02:27]   ex=      174839us timer=ffff831037c3f068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c3f000)
(XEN) [2021-06-14 23:02:27]   ex=      498204us timer=ffff830b1e052068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff830b1e052000)
(XEN) [2021-06-14 23:02:27]   ex=  3452003430us timer=ffff83157a6bb0b8 cb=arch/x86/hvm/rtc.c#rtc_alarm_cb(ffff83157a6bb010)
(XEN) [2021-06-14 23:02:27]   ex=      183482us timer=ffff83157a66d068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a66d000)
(XEN) [2021-06-14 23:02:27] CPU04:
(XEN) [2021-06-14 23:02:27]   ex=      226746us timer=ffff831dd0829068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831dd0829000)
(XEN) [2021-06-14 23:02:27]   ex=      230375us timer=ffff83207f327068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83207f327000)
(XEN) [2021-06-14 23:02:27]   ex=      227979us timer=ffff831037c3f068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c3f000)
(XEN) [2021-06-14 23:02:27]   ex=      275979us timer=ffff831037c38068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c38000)
(XEN) [2021-06-14 23:02:27] CPU05:
(XEN) [2021-06-14 23:02:27]   ex=      277473us timer=ffff831de8277068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831de8277000)
(XEN) [2021-06-14 23:02:27]   ex=      285438us timer=ffff83103ff05ee0 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=      277746us timer=ffff831de2cab068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831de2cab000)
(XEN) [2021-06-14 23:02:27]   ex=  3453012205us timer=ffff831ef0a72828 cb=arch/x86/hvm/rtc.c#rtc_alarm_cb(ffff831ef0a72780)
(XEN) [2021-06-14 23:02:27]   ex=      322836us timer=ffff831037c31068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c31000)
(XEN) [2021-06-14 23:02:27]   ex=      323497us timer=ffff830b1e05b068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff830b1e05b000)
(XEN) [2021-06-14 23:02:27] CPU06:
(XEN) [2021-06-14 23:02:27]   ex=      349479us timer=ffff83157a650068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a650000)
(XEN) [2021-06-14 23:02:27]   ex=      349494us timer=ffff8315c1380068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff8315c1380000)
(XEN) [2021-06-14 23:02:27]   ex=      373835us timer=ffff831037c2e068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c2e000)
(XEN) [2021-06-14 23:02:27]   ex=      349578us timer=ffff831037ceef60 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27]   ex=   116590840us timer=ffff830b1e112b68 cb=arch/x86/hvm/pmtimer.c#pmt_timer_callback(ffff830b1e112b48)
(XEN) [2021-06-14 23:02:27]   ex=   322275007us timer=ffff8309bbbc8b68 cb=arch/x86/hvm/pmtimer.c#pmt_timer_callback(ffff8309bbbc8b48)
(XEN) [2021-06-14 23:02:27]   ex=  3452661639us timer=ffff830b1e112828 cb=arch/x86/hvm/rtc.c#rtc_alarm_cb(ffff830b1e112780)
(XEN) [2021-06-14 23:02:27]   ex=      411978us timer=ffff831037c44068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c44000)
(XEN) [2021-06-14 23:02:27] CPU07:
(XEN) [2021-06-14 23:02:27]   ex=      445389us timer=ffff830b1e112560 cb=arch/x86/irq.c#irq_guest_eoi_timer_fn(ffff831037d06c00)
(XEN) [2021-06-14 23:02:27]   ex=      445478us timer=ffff83157a650068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a650000)
(XEN) [2021-06-14 23:02:27]   ex=      446837us timer=ffff831037c2e068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c2e000)
(XEN) [2021-06-14 23:02:27]   ex=      446837us timer=ffff831037c2a068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c2a000)
(XEN) [2021-06-14 23:02:27]   ex=      666216us timer=ffff830b1e063068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff830b1e063000)
(XEN) [2021-06-14 23:02:27]   ex=  3452661639us timer=ffff830b1e112828 cb=arch/x86/hvm/rtc.c#rtc_alarm_cb(ffff830b1e112780)
(XEN) [2021-06-14 23:02:27]   ex=      451976us timer=ffff831037c44068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c44000)
(XEN) [2021-06-14 23:02:27] CPU08:
(XEN) [2021-06-14 23:02:27]   ex=      531477us timer=ffff83157a65d068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a65d000)
(XEN) [2021-06-14 23:02:27]   ex=      531643us timer=ffff831de82593a0 cb=arch/x86/irq.c#irq_guest_eoi_timer_fn(ffff831037d07b00)
(XEN) [2021-06-14 23:02:27]   ex=      531746us timer=ffff831dcffae068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831dcffae000)
(XEN) [2021-06-14 23:02:27]   ex=      533494us timer=ffff8315c1380068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff8315c1380000)
(XEN) [2021-06-14 23:02:27]   ex=   161890142us timer=ffff83157a6bb3f8 cb=arch/x86/hvm/pmtimer.c#pmt_timer_callback(ffff83157a6bb3d8)
(XEN) [2021-06-14 23:02:27]   ex=   453760526us timer=ffff831ef0a72b68 cb=arch/x86/hvm/pmtimer.c#pmt_timer_callback(ffff831ef0a72b48)
(XEN) [2021-06-14 23:02:27]   ex=      534440us timer=ffff831037cc8a70 cb=common/sched/core.c#s_timer_fn(0000000000000000)
(XEN) [2021-06-14 23:02:27] CPU09:
(XEN) [2021-06-14 23:02:27] CPU10:
(XEN) [2021-06-14 23:02:27]   ex=      618476us timer=ffff83157a650068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83157a650000)
(XEN) [2021-06-14 23:02:27]   ex=      618746us timer=ffff831dcffae068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831dcffae000)
(XEN) [2021-06-14 23:02:27]   ex=      635974us timer=ffff831037c2a068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831037c2a000)
(XEN) [2021-06-14 23:02:27] CPU11:
(XEN) [2021-06-14 23:02:27]   ex=      657746us timer=ffff831dcffae068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff831dcffae000)
(XEN) [2021-06-14 23:02:27]   ex=   115708650us timer=ffff830b9acc7f38 cb=arch/x86/hvm/pmtimer.c#pmt_timer_callback(ffff830b9acc7f18)
(XEN) [2021-06-14 23:02:27]   ex=      660375us timer=ffff83207f32f068 cb=common/sched/core.c#vcpu_singleshot_timer_fn(ffff83207f32f000)
(XEN) [2021-06-14 23:02:27] [c: dump ACPI Cx structures]
(XEN) [2021-06-14 23:02:27] 'c' pressed -> printing ACPI Cx structures
(XEN) [2021-06-14 23:02:27] max state: C2
(XEN) [2021-06-14 23:02:27] max sub-state: unlimited
(XEN) [2021-06-14 23:02:27] ==cpu0==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 
3426144] method[  FFH] duration[371768197476]
(XEN) [2021-06-14 23:02:27]    *C2:    type[C1] latency[ 10] 
usage[11545884] method[  FFH] duration[2337207039237]
(XEN) [2021-06-14 23:02:27]     C3:    type[C2] latency[ 33] usage[ 
8756511] method[  FFH] duration[7923787648997]
(XEN) [2021-06-14 23:02:27]     C0:    usage[23728539] 
duration[6265262031629]
(XEN) [2021-06-14 23:02:27] PC2[703725452375] PC3[17119109027] PC6[0] 
PC7[0]
(XEN) [2021-06-14 23:02:27] CC3[5953375858208] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:27] ==cpu1==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 
3570237] method[  FFH] duration[382958279577]
(XEN) [2021-06-14 23:02:27]     C2:    type[C1] latency[ 10] 
usage[11566597] method[  FFH] duration[2342061994564]
(XEN) [2021-06-14 23:02:27]     C3:    type[C2] latency[ 33] usage[ 
8776908] method[  FFH] duration[7902026301186]
(XEN) [2021-06-14 23:02:27]    *C0:    usage[23913743] 
duration[6271029653205]
(XEN) [2021-06-14 23:02:27] PC2[703725452375] PC3[17119109027] PC6[0] 
PC7[0]
(XEN) [2021-06-14 23:02:27] CC3[5930337774335] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:27] ==cpu2==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 
3532489] method[  FFH] duration[381081386922]
(XEN) [2021-06-14 23:02:27]     C2:    type[C1] latency[ 10] 
usage[11531666] method[  FFH] duration[2336109536840]
(XEN) [2021-06-14 23:02:27]     C3:    type[C2] latency[ 33] usage[ 
8760630] method[  FFH] duration[7898938042796]
(XEN) [2021-06-14 23:02:27]    *C0:    usage[23824786] 
duration[6281998565349]
(XEN) [2021-06-14 23:02:27] PC2[703725452375] PC3[17119109027] PC6[0] 
PC7[0]
(XEN) [2021-06-14 23:02:27] CC3[5928995416057] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:27] ==cpu3==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 3513938] method[  FFH] duration[382322236882]
(XEN) [2021-06-14 23:02:27]     C2:    type[C1] latency[ 10] usage[11507842] method[  FFH] duration[2327504609721]
(XEN) [2021-06-14 23:02:27]     C3:    type[C2] latency[ 33] usage[ 8742823] method[  FFH] duration[7921059076817]
(XEN) [2021-06-14 23:02:27]    *C0:    usage[23764604] duration[6267294297699]
(XEN) [2021-06-14 23:02:27] PC2[703725452375] PC3[17119109027] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:27] CC3[5955702111954] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:27] ==cpu4==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 3504026] method[  FFH] duration[379301525189]
(XEN) [2021-06-14 23:02:27]     C2:    type[C1] latency[ 10] usage[11449296] method[  FFH] duration[2321668430555]
(XEN) [2021-06-14 23:02:27]    *C3:    type[C2] latency[ 33] usage[ 8778474] method[  FFH] duration[7923701916711]
(XEN) [2021-06-14 23:02:27]     C0:    usage[23731796] duration[6273559656198]
(XEN) [2021-06-14 23:02:27] PC2[703725452375] PC3[17119109027] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:27] CC3[5948358136364] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:27] ==cpu5==
(XEN) [2021-06-14 23:02:27]     C1:    type[C1] latency[  2] usage[ 3346717] method[  FFH] duration[367393559993]
(XEN) [2021-06-14 23:02:27]     C2:    type[C1] latency[ 10] usage[11416612] method[  FFH] duration[2315153547936]
(XEN) [2021-06-14 23:02:27]     C3:    type[C2] latency[ 33] usage[ 8754056] method[  FFH] duration[7970306537097]
(XEN) [2021-06-14 23:02:27]    *C0:    usage[23517386] duration[6245429192876]
(XEN) [2021-06-14 23:02:28] PC2[703725452375] PC3[17119109027] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5998026335697] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu6==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 3887672] method[  FFH] duration[433191645867]
(XEN) [2021-06-14 23:02:28]    *C2:    type[C1] latency[ 10] usage[10961983] method[  FFH] duration[2522238826658]
(XEN) [2021-06-14 23:02:28]     C3:    type[C2] latency[ 33] usage[10030297] method[  FFH] duration[7919171146331]
(XEN) [2021-06-14 23:02:28]     C0:    usage[24879952] duration[6023732523007]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5741657782574] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu7==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 4021070] method[  FFH] duration[440368003202]
(XEN) [2021-06-14 23:02:28]     C2:    type[C1] latency[ 10] usage[11085989] method[  FFH] duration[2546685703928]
(XEN) [2021-06-14 23:02:28]    *C3:    type[C2] latency[ 33] usage[ 9942440] method[  FFH] duration[7788189808600]
(XEN) [2021-06-14 23:02:28]     C0:    usage[25049499] duration[6123141931172]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5640497960003] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu8==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 4042005] method[  FFH] duration[442690629564]
(XEN) [2021-06-14 23:02:28]    *C2:    type[C1] latency[ 10] usage[11060456] method[  FFH] duration[2545513103793]
(XEN) [2021-06-14 23:02:28]     C3:    type[C2] latency[ 33] usage[ 9977426] method[  FFH] duration[7815044830327]
(XEN) [2021-06-14 23:02:28]     C0:    usage[25079887] duration[6095188189641]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5657706161497] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu9==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 4084466] method[  FFH] duration[446429271343]
(XEN) [2021-06-14 23:02:28]     C2:    type[C1] latency[ 10] usage[11093858] method[  FFH] duration[2544473818767]
(XEN) [2021-06-14 23:02:28]     C3:    type[C2] latency[ 33] usage[ 9928199] method[  FFH] duration[7773031250146]
(XEN) [2021-06-14 23:02:28]    *C0:    usage[25106524] duration[6134553719433]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5628625979110] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu10==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 4098103] method[  FFH] duration[445198283764]
(XEN) [2021-06-14 23:02:28]    *C2:    type[C1] latency[ 10] usage[11080613] method[  FFH] duration[2548222295395]
(XEN) [2021-06-14 23:02:28]     C3:    type[C2] latency[ 33] usage[ 9939904] method[  FFH] duration[7790556316873]
(XEN) [2021-06-14 23:02:28]     C0:    usage[25118620] duration[6114562470621]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5641603513898] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] ==cpu11==
(XEN) [2021-06-14 23:02:28]     C1:    type[C1] latency[  2] usage[ 3981766] method[  FFH] duration[434566740039]
(XEN) [2021-06-14 23:02:28]    *C2:    type[C1] latency[ 10] usage[11098192] method[  FFH] duration[2551206590336]
(XEN) [2021-06-14 23:02:28]     C3:    type[C2] latency[ 33] usage[ 9991849] method[  FFH] duration[7801522846435]
(XEN) [2021-06-14 23:02:28]     C0:    usage[25071807] duration[6111294494814]
(XEN) [2021-06-14 23:02:28] PC2[814910544708] PC3[16927767044] PC6[0] PC7[0]
(XEN) [2021-06-14 23:02:28] CC3[5644586579492] CC6[0] CC7[0]
(XEN) [2021-06-14 23:02:28] [e: dump evtchn info]
(XEN) [2021-06-14 23:02:28] 'e' pressed -> dumping event-channel info
(XEN) [2021-06-14 23:02:28] Event channel information for domain 0:
(XEN) [2021-06-14 23:02:28] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:28]     port [p/m/s]
(XEN) [2021-06-14 23:02:28]        1 [0/0/  -   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:28]        2 [1/1/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:28]        3 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:28]        4 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:28]        5 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:28]        6 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:28]        7 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:28]        8 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:28]        9 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:28]       10 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:28]       11 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:28]       12 [1/1/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:28]       13 [0/0/  -   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:28]       14 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:28]       15 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:28]       16 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:28]       17 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:28]       18 [1/1/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:28]       19 [0/0/  -   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:28]       20 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:28]       21 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:28]       22 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:28]       23 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:28]       24 [1/1/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:28]       25 [0/0/  -   ]: s=5 n=4 x=0 v=0
(XEN) [2021-06-14 23:02:28]       26 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:28]       27 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:28]       28 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:28]       29 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:28]       30 [1/1/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:28]       31 [1/0/  0   ]: s=5 n=5 x=0 v=0
(XEN) [2021-06-14 23:02:28]       32 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:28]       33 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:28]       34 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:28]       35 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:28]       36 [1/1/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:28]       37 [1/0/  0   ]: s=5 n=6 x=0 v=0
(XEN) [2021-06-14 23:02:28]       38 [0/0/  -   ]: s=6 n=6 x=0
(XEN) [2021-06-14 23:02:28]       39 [0/0/  -   ]: s=6 n=6 x=0
(XEN) [2021-06-14 23:02:28]       40 [0/0/  -   ]: s=6 n=6 x=0
(XEN) [2021-06-14 23:02:28]       41 [0/0/  -   ]: s=6 n=6 x=0
(XEN) [2021-06-14 23:02:28]       42 [1/1/  -   ]: s=6 n=6 x=0
(XEN) [2021-06-14 23:02:28]       43 [1/0/  0   ]: s=5 n=7 x=0 v=0
(XEN) [2021-06-14 23:02:28]       44 [0/0/  -   ]: s=6 n=7 x=0
(XEN) [2021-06-14 23:02:28]       45 [0/0/  -   ]: s=6 n=7 x=0
(XEN) [2021-06-14 23:02:28]       46 [0/0/  -   ]: s=6 n=7 x=0
(XEN) [2021-06-14 23:02:28]       47 [0/0/  -   ]: s=6 n=7 x=0
(XEN) [2021-06-14 23:02:28]       48 [1/1/  -   ]: s=6 n=7 x=0
(XEN) [2021-06-14 23:02:28]       49 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=93
(XEN) [2021-06-14 23:02:28]       50 [0/0/  -   ]: s=5 n=2 x=0 v=9
(XEN) [2021-06-14 23:02:28]       51 [0/0/  -   ]: s=4 n=3 x=0 p=9 i=9
(XEN) [2021-06-14 23:02:28]       52 [0/0/  -   ]: s=5 n=4 x=0 v=16
(XEN) [2021-06-14 23:02:28]       53 [0/0/  -   ]: s=5 n=0 x=0 v=2
(XEN) [2021-06-14 23:02:28]       54 [0/0/  -   ]: s=4 n=5 x=0 p=891 i=86
(XEN) [2021-06-14 23:02:28]       55 [0/0/  -   ]: s=4 n=6 x=0 p=890 i=87
(XEN) [2021-06-14 23:02:28]       56 [0/0/  -   ]: s=4 n=7 x=0 p=889 i=88
(XEN) [2021-06-14 23:02:28]       57 [0/0/  -   ]: s=4 n=1 x=0 p=888 i=89
(XEN) [2021-06-14 23:02:28]       58 [0/0/  -   ]: s=4 n=2 x=0 p=887 i=90
(XEN) [2021-06-14 23:02:28]       59 [0/0/  -   ]: s=4 n=3 x=0 p=886 i=91
(XEN) [2021-06-14 23:02:28]       60 [0/0/  -   ]: s=4 n=4 x=0 p=885 i=92
(XEN) [2021-06-14 23:02:28]       61 [0/0/  -   ]: s=4 n=5 x=0 p=884 i=93
(XEN) [2021-06-14 23:02:28]       62 [0/0/  -   ]: s=4 n=6 x=0 p=883 i=94
(XEN) [2021-06-14 23:02:28]       63 [0/0/  -   ]: s=4 n=7 x=0 p=882 i=95
(XEN) [2021-06-14 23:02:28]       64 [0/0/  -   ]: s=4 n=0 x=0 p=881 i=96
(XEN) [2021-06-14 23:02:28]       65 [0/0/  -   ]: s=4 n=1 x=0 p=1 i=1
(XEN) [2021-06-14 23:02:28]       66 [0/0/  -   ]: s=4 n=2 x=0 p=8 i=8
(XEN) [2021-06-14 23:02:28]       67 [0/0/  -   ]: s=4 n=3 x=0 p=18 i=18
(XEN) [2021-06-14 23:02:28]       68 [0/0/  -   ]: s=3 n=2 x=0 d=6 p=1
(XEN) [2021-06-14 23:02:28]       69 [1/0/  0   ]: s=4 n=5 x=0 p=870 i=107
(XEN) [2021-06-14 23:02:28]       70 [0/0/  -   ]: s=3 n=4 x=0 d=6 p=2
(XEN) [2021-06-14 23:02:28]       71 [0/0/  -   ]: s=4 n=7 x=0 p=19 i=19
(XEN) [2021-06-14 23:02:28]       72 [0/0/  -   ]: s=3 n=6 x=0 d=6 p=3
(XEN) [2021-06-14 23:02:28]       73 [0/0/  -   ]: s=3 n=0 x=0 d=6 p=5
(XEN) [2021-06-14 23:02:28]       74 [0/0/  -   ]: s=3 n=1 x=0 d=6 p=6
(XEN) [2021-06-14 23:02:28]       75 [0/0/  -   ]: s=3 n=2 x=0 d=6 p=7
(XEN) [2021-06-14 23:02:28]       76 [0/0/  -   ]: s=3 n=3 x=0 d=6 p=8
(XEN) [2021-06-14 23:02:28]       77 [0/0/  -   ]: s=3 n=4 x=0 d=6 p=9
(XEN) [2021-06-14 23:02:28]       78 [1/1/  -   ]: s=3 n=5 x=0 d=6 p=4
(XEN) [2021-06-14 23:02:28]       79 [0/0/  -   ]: s=3 n=6 x=0 d=6 p=45
(XEN) [2021-06-14 23:02:28]       80 [0/0/  -   ]: s=3 n=7 x=0 d=6 p=46
(XEN) [2021-06-14 23:02:28]       81 [0/0/  -   ]: s=3 n=0 x=0 d=6 p=47
(XEN) [2021-06-14 23:02:28]       82 [0/0/  -   ]: s=3 n=1 x=0 d=6 p=48
(XEN) [2021-06-14 23:02:28]       83 [0/0/  -   ]: s=3 n=2 x=0 d=6 p=49
(XEN) [2021-06-14 23:02:28]       84 [0/0/  -   ]: s=3 n=3 x=0 d=6 p=50
(XEN) [2021-06-14 23:02:28]       85 [0/0/  -   ]: s=3 n=4 x=0 d=6 p=51
(XEN) [2021-06-14 23:02:28]       86 [0/0/  -   ]: s=3 n=5 x=0 d=6 p=52
(XEN) [2021-06-14 23:02:28]       87 [0/0/  -   ]: s=3 n=6 x=0 d=6 p=53
(XEN) [2021-06-14 23:02:28]       88 [0/0/  -   ]: s=4 n=0 x=0 p=880 i=97
(XEN) [2021-06-14 23:02:28]       89 [0/0/  -   ]: s=4 n=1 x=0 p=879 i=98
(XEN) [2021-06-14 23:02:28]       90 [0/0/  -   ]: s=4 n=2 x=0 p=878 i=99
(XEN) [2021-06-14 23:02:28]       91 [1/0/  0   ]: s=4 n=3 x=0 p=877 i=100
(XEN) [2021-06-14 23:02:28]       92 [0/0/  -   ]: s=4 n=4 x=0 p=876 i=101
(XEN) [2021-06-14 23:02:28]       93 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=49
(XEN) [2021-06-14 23:02:28]       94 [0/0/  -   ]: s=5 n=6 x=0 v=3
(XEN) [2021-06-14 23:02:28]       95 [0/0/  -   ]: s=5 n=7 x=0 v=8
(XEN) [2021-06-14 23:02:28]       96 [0/0/  -   ]: s=3 n=7 x=0 d=6 p=54
(XEN) [2021-06-14 23:02:28]       97 [0/0/  -   ]: s=3 n=0 x=0 d=6 p=55
(XEN) [2021-06-14 23:02:28]       98 [0/0/  -   ]: s=3 n=1 x=0 d=6 p=56
(XEN) [2021-06-14 23:02:28]       99 [0/0/  -   ]: s=3 n=2 x=0 d=6 p=57
(XEN) [2021-06-14 23:02:28]      100 [0/0/  -   ]: s=3 n=3 x=0 d=6 p=58
(XEN) [2021-06-14 23:02:28]      101 [0/0/  -   ]: s=3 n=4 x=0 d=6 p=59
(XEN) [2021-06-14 23:02:28]      102 [0/0/  -   ]: s=3 n=5 x=0 d=6 p=60
(XEN) [2021-06-14 23:02:28]      103 [0/0/  -   ]: s=3 n=6 x=0 d=6 p=61
(XEN) [2021-06-14 23:02:28]      104 [0/0/  -   ]: s=3 n=7 x=0 d=6 p=62
(XEN) [2021-06-14 23:02:28]      105 [0/0/  -   ]: s=3 n=0 x=0 d=6 p=63
(XEN) [2021-06-14 23:02:28]      106 [0/0/  -   ]: s=3 n=1 x=0 d=6 p=64
(XEN) [2021-06-14 23:02:28]      107 [0/0/  -   ]: s=3 n=3 x=0 d=3 p=1
(XEN) [2021-06-14 23:02:28]      108 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=1
(XEN) [2021-06-14 23:02:29]      109 [0/0/  -   ]: s=3 n=5 x=0 d=3 p=2
(XEN) [2021-06-14 23:02:29]      110 [0/0/  -   ]: s=3 n=6 x=0 d=3 p=3
(XEN) [2021-06-14 23:02:29]      111 [0/0/  -   ]: s=3 n=7 x=0 d=3 p=5
(XEN) [2021-06-14 23:02:29]      112 [0/0/  -   ]: s=3 n=0 x=0 d=3 p=6
(XEN) [2021-06-14 23:02:29]      113 [0/0/  -   ]: s=3 n=1 x=0 d=3 p=7
(XEN) [2021-06-14 23:02:29]      114 [0/1/  -   ]: s=3 n=2 x=0 d=3 p=4
(XEN) [2021-06-14 23:02:29]      115 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=2
(XEN) [2021-06-14 23:02:29]      116 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=3
(XEN) [2021-06-14 23:02:29]      117 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=5
(XEN) [2021-06-14 23:02:29]      118 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=6
(XEN) [2021-06-14 23:02:29]      119 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=7
(XEN) [2021-06-14 23:02:29]      120 [1/1/  -   ]: s=3 n=0 x=0 d=2 p=4
(XEN) [2021-06-14 23:02:29]      121 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=32
(XEN) [2021-06-14 23:02:29]      122 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=33
(XEN) [2021-06-14 23:02:29]      123 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=34
(XEN) [2021-06-14 23:02:29]      124 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=35
(XEN) [2021-06-14 23:02:29]      125 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=36
(XEN) [2021-06-14 23:02:29]      126 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=37
(XEN) [2021-06-14 23:02:29]      127 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=38
(XEN) [2021-06-14 23:02:29]      128 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=39
(XEN) [2021-06-14 23:02:29]      129 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=40
(XEN) [2021-06-14 23:02:29]      130 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=41
(XEN) [2021-06-14 23:02:29]      131 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=42
(XEN) [2021-06-14 23:02:29]      132 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=43
(XEN) [2021-06-14 23:02:29]      133 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=44
(XEN) [2021-06-14 23:02:29]      134 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=45
(XEN) [2021-06-14 23:02:29]      135 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=46
(XEN) [2021-06-14 23:02:29]      136 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=47
(XEN) [2021-06-14 23:02:29]      137 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=48
(XEN) [2021-06-14 23:02:29]      138 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=49
(XEN) [2021-06-14 23:02:29]      139 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=50
(XEN) [2021-06-14 23:02:29]      140 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=51
(XEN) [2021-06-14 23:02:29]      141 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=52
(XEN) [2021-06-14 23:02:29]      142 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=53
(XEN) [2021-06-14 23:02:29]      143 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=54
(XEN) [2021-06-14 23:02:29]      144 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=55
(XEN) [2021-06-14 23:02:29]      145 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=58
(XEN) [2021-06-14 23:02:29]      146 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=59
(XEN) [2021-06-14 23:02:29]      147 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=60
(XEN) [2021-06-14 23:02:29]      148 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=61
(XEN) [2021-06-14 23:02:29]      149 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=62
(XEN) [2021-06-14 23:02:29]      150 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=63
(XEN) [2021-06-14 23:02:29]      151 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=64
(XEN) [2021-06-14 23:02:29]      152 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=65
(XEN) [2021-06-14 23:02:29]      153 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=66
(XEN) [2021-06-14 23:02:29]      154 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=67
(XEN) [2021-06-14 23:02:29]      155 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=68
(XEN) [2021-06-14 23:02:29]      156 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=69
(XEN) [2021-06-14 23:02:29]      157 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=70
(XEN) [2021-06-14 23:02:29]      158 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=71
(XEN) [2021-06-14 23:02:29]      159 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=72
(XEN) [2021-06-14 23:02:29]      160 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=73
(XEN) [2021-06-14 23:02:29]      161 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=74
(XEN) [2021-06-14 23:02:29]      162 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=75
(XEN) [2021-06-14 23:02:29]      163 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=76
(XEN) [2021-06-14 23:02:29]      164 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=77
(XEN) [2021-06-14 23:02:29]      165 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=78
(XEN) [2021-06-14 23:02:29]      166 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=79
(XEN) [2021-06-14 23:02:29]      167 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=80
(XEN) [2021-06-14 23:02:29]      168 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=81
(XEN) [2021-06-14 23:02:29]      169 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=82
(XEN) [2021-06-14 23:02:29]      170 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=83
(XEN) [2021-06-14 23:02:29]      171 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=84
(XEN) [2021-06-14 23:02:29]      172 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=85
(XEN) [2021-06-14 23:02:29]      173 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=86
(XEN) [2021-06-14 23:02:29]      174 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=87
(XEN) [2021-06-14 23:02:29]      175 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=88
(XEN) [2021-06-14 23:02:29]      176 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=89
(XEN) [2021-06-14 23:02:29]      177 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=90
(XEN) [2021-06-14 23:02:29]      178 [0/0/  -   ]: s=3 n=2 x=0 d=3 p=30
(XEN) [2021-06-14 23:02:29]      179 [1/0/  91  ]: s=3 n=3 x=0 d=3 p=34
(XEN) [2021-06-14 23:02:29]      180 [0/0/  -   ]: s=3 n=4 x=0 d=3 p=35
(XEN) [2021-06-14 23:02:29]      181 [0/0/  -   ]: s=3 n=5 x=0 d=3 p=36
(XEN) [2021-06-14 23:02:29]      182 [0/0/  -   ]: s=3 n=6 x=0 d=3 p=37
(XEN) [2021-06-14 23:02:29]      183 [0/0/  -   ]: s=3 n=7 x=0 d=3 p=38
(XEN) [2021-06-14 23:02:29]      184 [0/0/  -   ]: s=3 n=0 x=0 d=3 p=39
(XEN) [2021-06-14 23:02:29]      185 [0/0/  -   ]: s=3 n=1 x=0 d=3 p=40
(XEN) [2021-06-14 23:02:29]      186 [0/0/  -   ]: s=3 n=2 x=0 d=3 p=41
(XEN) [2021-06-14 23:02:29]      187 [0/0/  -   ]: s=3 n=3 x=0 d=3 p=42
(XEN) [2021-06-14 23:02:29]      188 [0/0/  -   ]: s=3 n=4 x=0 d=3 p=43
(XEN) [2021-06-14 23:02:29]      189 [0/0/  -   ]: s=3 n=5 x=0 d=2 p=95
(XEN) [2021-06-14 23:02:29]      190 [0/0/  -   ]: s=3 n=6 x=0 d=2 p=96
(XEN) [2021-06-14 23:02:29]      191 [0/0/  -   ]: s=3 n=7 x=0 d=2 p=97
(XEN) [2021-06-14 23:02:29]      192 [0/0/  -   ]: s=3 n=0 x=0 d=2 p=98
(XEN) [2021-06-14 23:02:29]      193 [0/0/  -   ]: s=3 n=1 x=0 d=2 p=99
(XEN) [2021-06-14 23:02:29]      194 [0/0/  -   ]: s=3 n=2 x=0 d=2 p=100
(XEN) [2021-06-14 23:02:29]      195 [0/0/  -   ]: s=3 n=3 x=0 d=2 p=101
(XEN) [2021-06-14 23:02:29]      196 [0/0/  -   ]: s=3 n=4 x=0 d=2 p=102
(XEN) [2021-06-14 23:02:29]      197 [0/0/  -   ]: s=3 n=0 x=0 d=7 p=1
(XEN) [2021-06-14 23:02:29]      198 [0/0/  -   ]: s=3 n=6 x=0 d=5 p=1
(XEN) [2021-06-14 23:02:29]      199 [0/0/  -   ]: s=3 n=2 x=0 d=7 p=2
(XEN) [2021-06-14 23:02:29]      200 [0/0/  -   ]: s=3 n=3 x=0 d=7 p=3
(XEN) [2021-06-14 23:02:29]      201 [0/0/  -   ]: s=3 n=5 x=0 d=7 p=5
(XEN) [2021-06-14 23:02:29]      202 [0/0/  -   ]: s=3 n=0 x=0 d=7 p=6
(XEN) [2021-06-14 23:02:29]      203 [0/0/  -   ]: s=3 n=1 x=0 d=7 p=7
(XEN) [2021-06-14 23:02:29]      204 [0/0/  -   ]: s=3 n=2 x=0 d=7 p=8
(XEN) [2021-06-14 23:02:29]      205 [0/0/  -   ]: s=3 n=3 x=0 d=7 p=9
(XEN) [2021-06-14 23:02:29]      206 [1/1/  -   ]: s=3 n=4 x=0 d=7 p=4
(XEN) [2021-06-14 23:02:29]      207 [0/0/  -   ]: s=3 n=7 x=0 d=5 p=2
(XEN) [2021-06-14 23:02:29]      208 [0/0/  -   ]: s=3 n=0 x=0 d=5 p=3
(XEN) [2021-06-14 23:02:29]      209 [0/0/  -   ]: s=3 n=1 x=0 d=5 p=5
(XEN) [2021-06-14 23:02:29]      210 [0/0/  -   ]: s=3 n=2 x=0 d=5 p=6
(XEN) [2021-06-14 23:02:29]      211 [0/0/  -   ]: s=3 n=3 x=0 d=5 p=7
(XEN) [2021-06-14 23:02:29]      212 [0/0/  -   ]: s=3 n=4 x=0 d=5 p=8
(XEN) [2021-06-14 23:02:29]      213 [0/0/  -   ]: s=3 n=5 x=0 d=5 p=9
(XEN) [2021-06-14 23:02:29]      214 [1/1/  -   ]: s=3 n=6 x=0 d=5 p=4
(XEN) [2021-06-14 23:02:29]      215 [0/0/  -   ]: s=4 n=7 x=0 p=3 i=3
(XEN) [2021-06-14 23:02:29]      216 [0/0/  -   ]: s=3 n=5 x=0 d=7 p=42
(XEN) [2021-06-14 23:02:29]      217 [0/0/  -   ]: s=3 n=6 x=0 d=7 p=43
(XEN) [2021-06-14 23:02:29]      218 [0/0/  -   ]: s=3 n=7 x=0 d=7 p=44
(XEN) [2021-06-14 23:02:29]      219 [0/0/  -   ]: s=3 n=0 x=0 d=7 p=45
(XEN) [2021-06-14 23:02:29]      220 [0/0/  -   ]: s=3 n=1 x=0 d=7 p=46
(XEN) [2021-06-14 23:02:29]      221 [0/0/  -   ]: s=3 n=2 x=0 d=7 p=47
(XEN) [2021-06-14 23:02:29]      222 [0/0/  -   ]: s=3 n=3 x=0 d=7 p=48
(XEN) [2021-06-14 23:02:29]      223 [0/0/  -   ]: s=3 n=4 x=0 d=7 p=49
(XEN) [2021-06-14 23:02:29]      224 [0/0/  -   ]: s=3 n=5 x=0 d=7 p=50
(XEN) [2021-06-14 23:02:29]      225 [0/0/  -   ]: s=3 n=6 x=0 d=7 p=51
(XEN) [2021-06-14 23:02:29]      226 [0/0/  -   ]: s=3 n=7 x=0 d=7 p=52
(XEN) [2021-06-14 23:02:29]      227 [0/0/  -   ]: s=3 n=0 x=0 d=7 p=53
(XEN) [2021-06-14 23:02:29]      228 [0/0/  -   ]: s=3 n=1 x=0 d=7 p=54
(XEN) [2021-06-14 23:02:29]      229 [0/0/  -   ]: s=3 n=2 x=0 d=7 p=55
(XEN) [2021-06-14 23:02:29]      230 [0/0/  -   ]: s=3 n=3 x=0 d=7 p=56
(XEN) [2021-06-14 23:02:29]      231 [0/0/  -   ]: s=3 n=4 x=0 d=7 p=57
(XEN) [2021-06-14 23:02:29]      232 [0/0/  -   ]: s=3 n=5 x=0 d=7 p=58
(XEN) [2021-06-14 23:02:29]      233 [0/0/  -   ]: s=3 n=6 x=0 d=7 p=59
(XEN) [2021-06-14 23:02:29]      234 [0/0/  -   ]: s=3 n=7 x=0 d=7 p=60
(XEN) [2021-06-14 23:02:29]      235 [0/0/  -   ]: s=3 n=0 x=0 d=7 p=61
(XEN) [2021-06-14 23:02:29]      236 [0/0/  -   ]: s=3 n=4 x=0 d=5 p=47
(XEN) [2021-06-14 23:02:29]      237 [0/0/  -   ]: s=3 n=5 x=0 d=5 p=48
(XEN) [2021-06-14 23:02:29]      238 [0/0/  -   ]: s=3 n=6 x=0 d=5 p=49
(XEN) [2021-06-14 23:02:29]      239 [0/0/  -   ]: s=3 n=7 x=0 d=5 p=50
(XEN) [2021-06-14 23:02:29]      240 [0/0/  -   ]: s=3 n=0 x=0 d=5 p=51
(XEN) [2021-06-14 23:02:29]      241 [0/0/  -   ]: s=3 n=1 x=0 d=5 p=52
(XEN) [2021-06-14 23:02:29]      242 [0/0/  -   ]: s=3 n=2 x=0 d=5 p=53
(XEN) [2021-06-14 23:02:29]      243 [0/0/  -   ]: s=3 n=3 x=0 d=5 p=54
(XEN) [2021-06-14 23:02:29]      244 [0/0/  -   ]: s=3 n=4 x=0 d=5 p=57
(XEN) [2021-06-14 23:02:29]      245 [0/0/  -   ]: s=3 n=5 x=0 d=5 p=58
(XEN) [2021-06-14 23:02:29]      246 [0/0/  -   ]: s=3 n=6 x=0 d=5 p=59
(XEN) [2021-06-14 23:02:29]      247 [0/0/  -   ]: s=3 n=7 x=0 d=5 p=60
(XEN) [2021-06-14 23:02:29]      248 [0/0/  -   ]: s=3 n=0 x=0 d=5 p=61
(XEN) [2021-06-14 23:02:29]      249 [0/0/  -   ]: s=3 n=1 x=0 d=5 p=62
(XEN) [2021-06-14 23:02:29]      250 [0/0/  -   ]: s=3 n=2 x=0 d=5 p=63
(XEN) [2021-06-14 23:02:29]      251 [0/0/  -   ]: s=3 n=3 x=0 d=5 p=64
(XEN) [2021-06-14 23:02:29]      252 [0/0/  -   ]: s=3 n=4 x=0 d=5 p=65
(XEN) [2021-06-14 23:02:29]      253 [0/0/  -   ]: s=3 n=5 x=0 d=5 p=66
(XEN) [2021-06-14 23:02:29]      254 [0/0/  -   ]: s=3 n=6 x=0 d=5 p=67
(XEN) [2021-06-14 23:02:29]      255 [0/0/  -   ]: s=3 n=7 x=0 d=5 p=68
(XEN) [2021-06-14 23:02:29]      256 [0/0/  -   ]: s=3 n=1 x=0 d=7 p=66
(XEN) [2021-06-14 23:02:29]      257 [0/0/  -   ]: s=3 n=1 x=0 d=5 p=73
(XEN) [2021-06-14 23:02:29]      258 [1/0/  90  ]: s=5 n=2 x=0 v=4
(XEN) [2021-06-14 23:02:29] Event channel information for domain 2:
(XEN) [2021-06-14 23:02:29] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:29]     port [p/m/s]
(XEN) [2021-06-14 23:02:29]        1 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=108
(XEN) [2021-06-14 23:02:29]        2 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=115
(XEN) [2021-06-14 23:02:29]        3 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=116
(XEN) [2021-06-14 23:02:29]        4 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=120
(XEN) [2021-06-14 23:02:29]        5 [0/1/  -   ]: s=3 n=1 x=1 d=0 p=117
(XEN) [2021-06-14 23:02:29]        6 [0/1/  -   ]: s=3 n=2 x=1 d=0 p=118
(XEN) [2021-06-14 23:02:29]        7 [0/1/  -   ]: s=3 n=3 x=1 d=0 p=119
(XEN) [2021-06-14 23:02:30]        8 [1/0/  0   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:30]        9 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       10 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       11 [0/0/  -   ]: s=5 n=0 x=0 v=1
(XEN) [2021-06-14 23:02:30]       12 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       13 [1/1/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       14 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:30]       15 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       16 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       17 [0/0/  -   ]: s=5 n=1 x=0 v=1
(XEN) [2021-06-14 23:02:30]       18 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       19 [1/1/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       20 [0/0/  -   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:30]       21 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       22 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       23 [0/0/  -   ]: s=5 n=2 x=0 v=1
(XEN) [2021-06-14 23:02:30]       24 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       25 [1/1/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       26 [1/0/  0   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:30]       27 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       28 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       29 [0/0/  -   ]: s=5 n=3 x=0 v=1
(XEN) [2021-06-14 23:02:30]       30 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       31 [1/1/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       32 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=121
(XEN) [2021-06-14 23:02:30]       33 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=122
(XEN) [2021-06-14 23:02:30]       34 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=123
(XEN) [2021-06-14 23:02:30]       35 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=124
(XEN) [2021-06-14 23:02:30]       36 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=125
(XEN) [2021-06-14 23:02:30]       37 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=126
(XEN) [2021-06-14 23:02:30]       38 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=127
(XEN) [2021-06-14 23:02:30]       39 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=128
(XEN) [2021-06-14 23:02:30]       40 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=129
(XEN) [2021-06-14 23:02:30]       41 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=130
(XEN) [2021-06-14 23:02:30]       42 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=131
(XEN) [2021-06-14 23:02:30]       43 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=132
(XEN) [2021-06-14 23:02:30]       44 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=133
(XEN) [2021-06-14 23:02:30]       45 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=134
(XEN) [2021-06-14 23:02:30]       46 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=135
(XEN) [2021-06-14 23:02:30]       47 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=136
(XEN) [2021-06-14 23:02:30]       48 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=137
(XEN) [2021-06-14 23:02:30]       49 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=138
(XEN) [2021-06-14 23:02:30]       50 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=139
(XEN) [2021-06-14 23:02:30]       51 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=140
(XEN) [2021-06-14 23:02:30]       52 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=141
(XEN) [2021-06-14 23:02:30]       53 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=142
(XEN) [2021-06-14 23:02:30]       54 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=143
(XEN) [2021-06-14 23:02:30]       55 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=144
(XEN) [2021-06-14 23:02:30]       56 [0/0/  -   ]: s=4 n=0 x=0 p=20 i=0
(XEN) [2021-06-14 23:02:30]       57 [0/0/  -   ]: s=4 n=0 x=0 p=21 i=0
(XEN) [2021-06-14 23:02:30]       58 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=145
(XEN) [2021-06-14 23:02:30]       59 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=146
(XEN) [2021-06-14 23:02:30]       60 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=147
(XEN) [2021-06-14 23:02:30]       61 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=148
(XEN) [2021-06-14 23:02:30]       62 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=149
(XEN) [2021-06-14 23:02:30]       63 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=150
(XEN) [2021-06-14 23:02:30]       64 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=151
(XEN) [2021-06-14 23:02:30]       65 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=152
(XEN) [2021-06-14 23:02:30]       66 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=153
(XEN) [2021-06-14 23:02:30]       67 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=154
(XEN) [2021-06-14 23:02:30]       68 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=155
(XEN) [2021-06-14 23:02:30]       69 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=156
(XEN) [2021-06-14 23:02:30]       70 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=157
(XEN) [2021-06-14 23:02:30]       71 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=158
(XEN) [2021-06-14 23:02:30]       72 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=159
(XEN) [2021-06-14 23:02:30]       73 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=160
(XEN) [2021-06-14 23:02:30]       74 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=161
(XEN) [2021-06-14 23:02:30]       75 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=162
(XEN) [2021-06-14 23:02:30]       76 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=163
(XEN) [2021-06-14 23:02:30]       77 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=164
(XEN) [2021-06-14 23:02:30]       78 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=165
(XEN) [2021-06-14 23:02:30]       79 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=166
(XEN) [2021-06-14 23:02:30]       80 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=167
(XEN) [2021-06-14 23:02:30]       81 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=168
(XEN) [2021-06-14 23:02:30]       82 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=169
(XEN) [2021-06-14 23:02:30]       83 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=170
(XEN) [2021-06-14 23:02:30]       84 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=171
(XEN) [2021-06-14 23:02:30]       85 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=172
(XEN) [2021-06-14 23:02:30]       86 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=173
(XEN) [2021-06-14 23:02:30]       87 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=174
(XEN) [2021-06-14 23:02:30]       88 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=175
(XEN) [2021-06-14 23:02:30]       89 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=176
(XEN) [2021-06-14 23:02:30]       90 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=177
(XEN) [2021-06-14 23:02:30]       91 [0/0/  -   ]: s=4 n=0 x=0 p=23 i=0
(XEN) [2021-06-14 23:02:30]       92 [0/0/  -   ]: s=4 n=0 x=0 p=19 i=0
(XEN) [2021-06-14 23:02:30]       93 [0/0/  -   ]: s=4 n=0 x=0 p=17 i=0
(XEN) [2021-06-14 23:02:30]       94 [0/0/  -   ]: s=4 n=0 x=0 p=22 i=0
(XEN) [2021-06-14 23:02:30]       95 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=189
(XEN) [2021-06-14 23:02:30]       96 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=190
(XEN) [2021-06-14 23:02:30]       97 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=191
(XEN) [2021-06-14 23:02:30]       98 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=192
(XEN) [2021-06-14 23:02:30]       99 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=193
(XEN) [2021-06-14 23:02:30]      100 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=194
(XEN) [2021-06-14 23:02:30]      101 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=195
(XEN) [2021-06-14 23:02:30]      102 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=196
(XEN) [2021-06-14 23:02:30] Event channel information for domain 3:
(XEN) [2021-06-14 23:02:30] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:30]     port [p/m/s]
(XEN) [2021-06-14 23:02:30] grant_table.c:803:d0v3 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:30]        1 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=107
(XEN) [2021-06-14 23:02:30]        2 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=109
(XEN) [2021-06-14 23:02:30]        3 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=110
(XEN) [2021-06-14 23:02:30]        4 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=114
(XEN) [2021-06-14 23:02:30]        5 [0/1/  -   ]: s=3 n=1 x=1 d=0 p=111
(XEN) [2021-06-14 23:02:30]        6 [0/1/  -   ]: s=3 n=2 x=1 d=0 p=112
(XEN) [2021-06-14 23:02:30]        7 [0/1/  -   ]: s=3 n=3 x=1 d=0 p=113
(XEN) [2021-06-14 23:02:30]        8 [0/0/  -   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:30]        9 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       10 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       11 [0/0/  -   ]: s=5 n=0 x=0 v=1
(XEN) [2021-06-14 23:02:30]       12 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       13 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:30]       14 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       15 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       16 [0/0/  -   ]: s=5 n=1 x=0 v=1
(XEN) [2021-06-14 23:02:30]       17 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       18 [0/0/  -   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:30]       19 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       20 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       21 [0/0/  -   ]: s=5 n=2 x=0 v=1
(XEN) [2021-06-14 23:02:30]       22 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:30]       23 [0/0/  -   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:30]       24 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       25 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       26 [0/0/  -   ]: s=5 n=3 x=0 v=1
(XEN) [2021-06-14 23:02:30]       27 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:30]       28 [0/0/  -   ]: s=4 n=0 x=0 p=17 i=0
(XEN) [2021-06-14 23:02:30]       29 [0/0/  -   ]: s=4 n=0 x=0 p=18 i=0
(XEN) [2021-06-14 23:02:30]       30 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=178
(XEN) [2021-06-14 23:02:30]       31 [0/0/  -   ]: s=4 n=0 x=0 p=16 i=0
(XEN) [2021-06-14 23:02:30]       32 [0/0/  -   ]: s=4 n=0 x=0 p=20 i=0
(XEN) [2021-06-14 23:02:30]       33 [0/0/  -   ]: s=4 n=0 x=0 p=19 i=0
(XEN) [2021-06-14 23:02:30]       34 [1/0/  0   ]: s=3 n=1 x=0 d=0 p=179
(XEN) [2021-06-14 23:02:30]       35 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=180
(XEN) [2021-06-14 23:02:30]       36 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=181
(XEN) [2021-06-14 23:02:30]       37 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=182
(XEN) [2021-06-14 23:02:30]       38 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=183
(XEN) [2021-06-14 23:02:30]       39 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=184
(XEN) [2021-06-14 23:02:30]       40 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=185
(XEN) [2021-06-14 23:02:30]       41 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=186
(XEN) [2021-06-14 23:02:30]       42 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=187
(XEN) [2021-06-14 23:02:30]       43 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=188
(XEN) [2021-06-14 23:02:30] Event channel information for domain 5:
(XEN) [2021-06-14 23:02:30] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:30]     port [p/m/s]
(XEN) [2021-06-14 23:02:30]        1 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=198
(XEN) [2021-06-14 23:02:30]        2 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=207
(XEN) [2021-06-14 23:02:30]        3 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=208
(XEN) [2021-06-14 23:02:30]        4 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=214
(XEN) [2021-06-14 23:02:30]        5 [0/1/  -   ]: s=3 n=1 x=1 d=0 p=209
(XEN) [2021-06-14 23:02:30]        6 [0/1/  -   ]: s=3 n=2 x=1 d=0 p=210
(XEN) [2021-06-14 23:02:30]        7 [0/1/  -   ]: s=3 n=3 x=1 d=0 p=211
(XEN) [2021-06-14 23:02:30]        8 [0/1/  -   ]: s=3 n=4 x=1 d=0 p=212
(XEN) [2021-06-14 23:02:30]        9 [0/1/  -   ]: s=3 n=5 x=1 d=0 p=213
(XEN) [2021-06-14 23:02:30]       10 [1/0/  0   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:30]       11 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       12 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       13 [0/0/  -   ]: s=5 n=0 x=0 v=1
(XEN) [2021-06-14 23:02:30]       14 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       15 [1/1/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:30]       16 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:30]       17 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:30]       18 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       19 [0/0/  -   ]: s=5 n=1 x=0 v=1
(XEN) [2021-06-14 23:02:31]       20 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       21 [1/1/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       22 [1/0/  0   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:31]       23 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       24 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       25 [0/0/  -   ]: s=5 n=2 x=0 v=1
(XEN) [2021-06-14 23:02:31]       26 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       27 [1/1/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       28 [0/0/  -   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:31]       29 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       30 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       31 [0/0/  -   ]: s=5 n=3 x=0 v=1
(XEN) [2021-06-14 23:02:31]       32 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       33 [1/1/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       34 [0/0/  -   ]: s=5 n=4 x=0 v=0
(XEN) [2021-06-14 23:02:31]       35 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       36 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       37 [0/0/  -   ]: s=5 n=4 x=0 v=1
(XEN) [2021-06-14 23:02:31]       38 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       39 [1/1/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       40 [1/0/  0   ]: s=5 n=5 x=0 v=0
(XEN) [2021-06-14 23:02:31]       41 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       42 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       43 [0/0/  -   ]: s=5 n=5 x=0 v=1
(XEN) [2021-06-14 23:02:31]       44 [1/0/  0   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       45 [1/1/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       46 [0/0/  -   ]: s=4 n=0 x=0 p=16 i=0
(XEN) [2021-06-14 23:02:31]       47 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=236
(XEN) [2021-06-14 23:02:31]       48 [1/0/  0   ]: s=3 n=3 x=0 d=0 p=237
(XEN) [2021-06-14 23:02:31]       49 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=238
(XEN) [2021-06-14 23:02:31]       50 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=239
(XEN) [2021-06-14 23:02:31]       51 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=240
(XEN) [2021-06-14 23:02:31]       52 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=241
(XEN) [2021-06-14 23:02:31]       53 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=242
(XEN) [2021-06-14 23:02:31]       54 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=243
(XEN) [2021-06-14 23:02:31]       55 [0/0/  -   ]: s=4 n=0 x=0 p=18 i=0
(XEN) [2021-06-14 23:02:31]       56 [0/0/  -   ]: s=4 n=0 x=0 p=19 i=0
(XEN) [2021-06-14 23:02:31]       57 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=244
(XEN) [2021-06-14 23:02:31]       58 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=245
(XEN) [2021-06-14 23:02:31]       59 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=246
(XEN) [2021-06-14 23:02:31]       60 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=247
(XEN) [2021-06-14 23:02:31]       61 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=248
(XEN) [2021-06-14 23:02:31]       62 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=249
(XEN) [2021-06-14 23:02:31]       63 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=250
(XEN) [2021-06-14 23:02:31]       64 [1/0/  0   ]: s=3 n=0 x=0 d=0 p=251
(XEN) [2021-06-14 23:02:31]       65 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=252
(XEN) [2021-06-14 23:02:31]       66 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=253
(XEN) [2021-06-14 23:02:31]       67 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=254
(XEN) [2021-06-14 23:02:31]       68 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=255
(XEN) [2021-06-14 23:02:31]       69 [1/1/  -   ]: s=4 n=0 x=0 p=17 i=0
(XEN) [2021-06-14 23:02:31]       70 [0/0/  -   ]: s=4 n=0 x=0 p=21 i=0
(XEN) [2021-06-14 23:02:31]       71 [0/0/  -   ]: s=4 n=0 x=0 p=20 i=0
(XEN) [2021-06-14 23:02:31]       72 [0/0/  -   ]: s=4 n=0 x=0 p=23 i=0
(XEN) [2021-06-14 23:02:31]       73 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=257
(XEN) [2021-06-14 23:02:31] Event channel information for domain 6:
(XEN) [2021-06-14 23:02:31] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:31]     port [p/m/s]
(XEN) [2021-06-14 23:02:31]        1 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=68
(XEN) [2021-06-14 23:02:31]        2 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=70
(XEN) [2021-06-14 23:02:31]        3 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=72
(XEN) [2021-06-14 23:02:31]        4 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=78
(XEN) [2021-06-14 23:02:31]        5 [0/1/  -   ]: s=3 n=1 x=1 d=0 p=73
(XEN) [2021-06-14 23:02:31]        6 [0/1/  -   ]: s=3 n=2 x=1 d=0 p=74
(XEN) [2021-06-14 23:02:31]        7 [0/1/  -   ]: s=3 n=3 x=1 d=0 p=75
(XEN) [2021-06-14 23:02:31]        8 [0/1/  -   ]: s=3 n=4 x=1 d=0 p=76
(XEN) [2021-06-14 23:02:31]        9 [0/1/  -   ]: s=3 n=5 x=1 d=0 p=77
(XEN) [2021-06-14 23:02:31]       10 [1/0/  0   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:31]       11 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       12 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       13 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       14 [1/1/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       15 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:31]       16 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       17 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       18 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       19 [1/1/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       20 [1/0/  0   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:31]       21 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       22 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       23 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       24 [1/1/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       25 [0/0/  -   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:31]       26 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       27 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       28 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       29 [1/1/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       30 [1/0/  0   ]: s=5 n=4 x=0 v=0
(XEN) [2021-06-14 23:02:31]       31 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       32 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       33 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       34 [1/1/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       35 [0/0/  -   ]: s=5 n=5 x=0 v=0
(XEN) [2021-06-14 23:02:31]       36 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       37 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       38 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       39 [1/1/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       40 [0/0/  -   ]: s=4 n=0 x=0 p=16 i=0
(XEN) [2021-06-14 23:02:31]       41 [0/0/  -   ]: s=4 n=0 x=0 p=18 i=0
(XEN) [2021-06-14 23:02:31]       42 [0/0/  -   ]: s=4 n=0 x=0 p=20 i=0
(XEN) [2021-06-14 23:02:31]       43 [0/0/  -   ]: s=4 n=0 x=0 p=17 i=0
(XEN) [2021-06-14 23:02:31]       44 [0/0/  -   ]: s=4 n=0 x=0 p=22 i=0
(XEN) [2021-06-14 23:02:31]       45 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=79
(XEN) [2021-06-14 23:02:31]       46 [1/0/  0   ]: s=3 n=5 x=0 d=0 p=80
(XEN) [2021-06-14 23:02:31]       47 [1/0/  0   ]: s=3 n=0 x=0 d=0 p=81
(XEN) [2021-06-14 23:02:31]       48 [1/0/  0   ]: s=3 n=2 x=0 d=0 p=82
(XEN) [2021-06-14 23:02:31]       49 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=83
(XEN) [2021-06-14 23:02:31]       50 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=84
(XEN) [2021-06-14 23:02:31]       51 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=85
(XEN) [2021-06-14 23:02:31]       52 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=86
(XEN) [2021-06-14 23:02:31]       53 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=87
(XEN) [2021-06-14 23:02:31]       54 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=96
(XEN) [2021-06-14 23:02:31]       55 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=97
(XEN) [2021-06-14 23:02:31]       56 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=98
(XEN) [2021-06-14 23:02:31]       57 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=99
(XEN) [2021-06-14 23:02:31]       58 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=100
(XEN) [2021-06-14 23:02:31]       59 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=101
(XEN) [2021-06-14 23:02:31]       60 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=102
(XEN) [2021-06-14 23:02:31]       61 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=103
(XEN) [2021-06-14 23:02:31]       62 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=104
(XEN) [2021-06-14 23:02:31]       63 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=105
(XEN) [2021-06-14 23:02:31]       64 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=106
(XEN) [2021-06-14 23:02:31] Event channel information for domain 7:
(XEN) [2021-06-14 23:02:31] Polling vCPUs: {}
(XEN) [2021-06-14 23:02:31]     port [p/m/s]
(XEN) [2021-06-14 23:02:31]        1 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=197
(XEN) [2021-06-14 23:02:31]        2 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=199
(XEN) [2021-06-14 23:02:31]        3 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=200
(XEN) [2021-06-14 23:02:31]        4 [0/1/  -   ]: s=3 n=0 x=1 d=0 p=206
(XEN) [2021-06-14 23:02:31]        5 [0/1/  -   ]: s=3 n=1 x=1 d=0 p=201
(XEN) [2021-06-14 23:02:31]        6 [0/1/  -   ]: s=3 n=2 x=1 d=0 p=202
(XEN) [2021-06-14 23:02:31]        7 [0/1/  -   ]: s=3 n=3 x=1 d=0 p=203
(XEN) [2021-06-14 23:02:31]        8 [0/1/  -   ]: s=3 n=4 x=1 d=0 p=204
(XEN) [2021-06-14 23:02:31]        9 [0/1/  -   ]: s=3 n=5 x=1 d=0 p=205
(XEN) [2021-06-14 23:02:31]       10 [1/0/  0   ]: s=5 n=0 x=0 v=0
(XEN) [2021-06-14 23:02:31]       11 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       12 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       13 [0/0/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       14 [1/1/  -   ]: s=6 n=0 x=0
(XEN) [2021-06-14 23:02:31]       15 [1/0/  0   ]: s=5 n=1 x=0 v=0
(XEN) [2021-06-14 23:02:31]       16 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       17 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       18 [0/0/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       19 [1/1/  -   ]: s=6 n=1 x=0
(XEN) [2021-06-14 23:02:31]       20 [1/0/  0   ]: s=5 n=2 x=0 v=0
(XEN) [2021-06-14 23:02:31]       21 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       22 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       23 [0/0/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       24 [0/1/  -   ]: s=6 n=2 x=0
(XEN) [2021-06-14 23:02:31]       25 [1/0/  0   ]: s=5 n=3 x=0 v=0
(XEN) [2021-06-14 23:02:31]       26 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       27 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       28 [0/0/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       29 [1/1/  -   ]: s=6 n=3 x=0
(XEN) [2021-06-14 23:02:31]       30 [1/0/  0   ]: s=5 n=4 x=0 v=0
(XEN) [2021-06-14 23:02:31]       31 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       32 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       33 [0/0/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       34 [1/1/  -   ]: s=6 n=4 x=0
(XEN) [2021-06-14 23:02:31]       35 [1/0/  0   ]: s=5 n=5 x=0 v=0
(XEN) [2021-06-14 23:02:31]       36 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       37 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       38 [0/0/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       39 [1/1/  -   ]: s=6 n=5 x=0
(XEN) [2021-06-14 23:02:31]       40 [0/0/  -   ]: s=4 n=1 x=0 p=16 i=0
(XEN) [2021-06-14 23:02:31]       41 [0/0/  -   ]: s=4 n=3 x=0 p=20 i=0
(XEN) [2021-06-14 23:02:31]       42 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=216
(XEN) [2021-06-14 23:02:31]       43 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=217
(XEN) [2021-06-14 23:02:31]       44 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=218
(XEN) [2021-06-14 23:02:32]       45 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=219
(XEN) [2021-06-14 23:02:32]       46 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=220
(XEN) [2021-06-14 23:02:32]       47 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=221
(XEN) [2021-06-14 23:02:32]       48 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=222
(XEN) [2021-06-14 23:02:32]       49 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=223
(XEN) [2021-06-14 23:02:32]       50 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=224
(XEN) [2021-06-14 23:02:32]       51 [0/0/  -   ]: s=3 n=2 x=0 d=0 p=225
(XEN) [2021-06-14 23:02:32]       52 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=226
(XEN) [2021-06-14 23:02:32]       53 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=227
(XEN) [2021-06-14 23:02:32]       54 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=228
(XEN) [2021-06-14 23:02:32]       55 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=229
(XEN) [2021-06-14 23:02:32]       56 [0/0/  -   ]: s=3 n=1 x=0 d=0 p=230
(XEN) [2021-06-14 23:02:32]       57 [1/0/  0   ]: s=3 n=2 x=0 d=0 p=231
(XEN) [2021-06-14 23:02:32]       58 [0/0/  -   ]: s=3 n=3 x=0 d=0 p=232
(XEN) [2021-06-14 23:02:32]       59 [0/0/  -   ]: s=3 n=4 x=0 d=0 p=233
(XEN) [2021-06-14 23:02:32]       60 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=234
(XEN) [2021-06-14 23:02:32]       61 [0/0/  -   ]: s=3 n=0 x=0 d=0 p=235
(XEN) [2021-06-14 23:02:32]       62 [0/0/  -   ]: s=4 n=1 x=0 p=18 i=0
(XEN) [2021-06-14 23:02:32]       63 [0/0/  -   ]: s=4 n=2 x=0 p=19 i=0
(XEN) [2021-06-14 23:02:32]       64 [0/0/  -   ]: s=4 n=3 x=0 p=17 i=0
(XEN) [2021-06-14 23:02:32]       65 [0/0/  -   ]: s=4 n=4 x=0 p=21 i=0
(XEN) [2021-06-14 23:02:32]       66 [0/0/  -   ]: s=3 n=5 x=0 d=0 p=256
(XEN) [2021-06-14 23:02:32] [g: print grant table usage]
(XEN) [2021-06-14 23:02:32] gnttab_usage_print_all [ key 'g' pressed
(XEN) [2021-06-14 23:02:32]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:32] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:32] grant-table for remote d0 (v1)
(XEN) [2021-06-14 23:02:32]   1 frames (512 max), 70 maptrack frames (2048 max)
(XEN) [2021-06-14 23:02:32] no active grant table entries
(XEN) [2021-06-14 23:02:32]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:32] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:32] grant-table for remote d2 (v1)
(XEN) [2021-06-14 23:02:32]   22 frames (512 max), 0 maptrack frames (2048 max)
(XEN) [2021-06-14 23:02:32] [0x000]      0 0xb1e03b 0x00000002          0 0x0fefff 0x19
(XEN) [2021-06-14 23:02:32] [0x008]      0 0x141cfa5 0x00000001          0 0x566fa5 0x19
(XEN) [2021-06-14 23:02:32] [0x009]      0 0x141cfa8 0x00000001          0 0x566fa8 0x19
(XEN) [2021-06-14 23:02:32] [0x00a]      0 0x141cfaa 0x00000001          0 0x566faa 0x19
(XEN) [2021-06-14 23:02:32] [0x00b]      0 0x141cfac 0x00000001          0 0x566fac 0x19
(XEN) [2021-06-14 23:02:32] [0x00c]      0 0x141cfaf 0x00000001          0 0x566faf 0x19
(XEN) [2021-06-14 23:02:32] [0x00d]      0 0x141cfb1 0x00000001          0 0x566fb1 0x19
(XEN) [2021-06-14 23:02:32] [0x00e]      0 0x141d218 0x00000001          0 0x566a18 0x19
(XEN) [2021-06-14 23:02:32] [0x00f]      0 0x141d21a 0x00000001          0 0x566a1a 0x19
(XEN) [2021-06-14 23:02:32] [0x010]      0 0x141d21c 0x00000001          0 0x566a1c 0x19
(XEN) [2021-06-14 23:02:32] [0x011]      0 0x141d21d 0x00000001          0 0x566a1d 0x19
(XEN) [2021-06-14 23:02:32] [0x012]      0 0x141d270 0x00000001          0 0x566a70 0x19
(XEN) [2021-06-14 23:02:32] [0x013]      0 0x141d271 0x00000001          0 0x566a71 0x19
(XEN) [2021-06-14 23:02:32] [0x014]      0 0x141d274 0x00000001          0 0x566a74 0x19
(XEN) [2021-06-14 23:02:32] [0x015]      0 0x141d275 0x00000001          0 0x566a75 0x19
(XEN) [2021-06-14 23:02:32] [0x016]      0 0x141d277 0x00000001          0 0x566a77 0x19
(XEN) [2021-06-14 23:02:32] [0x017]      0 0x141d27a 0x00000001          0 0x566a7a 0x19
(XEN) [2021-06-14 23:02:32] [0x018]      0 0x141d27c 0x00000001          0 0x566a7c 0x19
(XEN) [2021-06-14 23:02:32] [0x019]      0 0x141d27e 0x00000001          0 0x566a7e 0x19
(XEN) [2021-06-14 23:02:32] [0x01a]      0 0x141d380 0x00000001          0 0x566b80 0x19
(XEN) [2021-06-14 23:02:32] [0x01b]      0 0x141d382 0x00000001          0 0x566b82 0x19
(XEN) [2021-06-14 23:02:32] [0x01c]      0 0x141d383 0x00000001          0 0x566b83 0x19
(XEN) [2021-06-14 23:02:32] [0x01d]      0 0x141d384 0x00000001          0 0x566b84 0x19
(XEN) [2021-06-14 23:02:32] [0x01e]      0 0x141d387 0x00000001          0 0x566b87 0x19
(XEN) [2021-06-14 23:02:32] [0x01f]      0 0x141d38a 0x00000001          0 0x566b8a 0x19
(XEN) [2021-06-14 23:02:32] [0x020]      0 0x141d394 0x00000001          0 0x566b94 0x19
(XEN) [2021-06-14 23:02:32] [0x021]      0 0x141d395 0x00000001          0 0x566b95 0x19
(XEN) [2021-06-14 23:02:32] [0x022]      0 0x141d398 0x00000001          0 0x566b98 0x19
(XEN) [2021-06-14 23:02:32] [0x023]      0 0x141d399 0x00000001          0 0x566b99 0x19
(XEN) [2021-06-14 23:02:32] [0x024]      0 0x141d39a 0x00000001          0 0x566b9a 0x19
(XEN) [2021-06-14 23:02:32] [0x025]      0 0x141d39c 0x00000001          0 0x566b9c 0x19
(XEN) [2021-06-14 23:02:32] [0x026]      0 0x141d39e 0x00000001          0 0x566b9e 0x19
(XEN) [2021-06-14 23:02:32] [0x027]      0 0x141d3a1 0x00000001          0 0x566ba1 0x19
(XEN) [2021-06-14 23:02:32] [0x028]      0 0x141d3a2 0x00000001          0 0x566ba2 0x19
(XEN) [2021-06-14 23:02:32] [0x029]      0 0x141d3a4 0x00000001          0 0x566ba4 0x19
(XEN) [2021-06-14 23:02:32] [0x02a]      0 0x141d3a6 0x00000001          0 0x566ba6 0x19
(XEN) [2021-06-14 23:02:32] [0x02b]      0 0x141d3f0 0x00000001          0 0x566bf0 0x19
(XEN) [2021-06-14 23:02:32] [0x02c]      0 0x141d3f4 0x00000001          0 0x566bf4 0x19
(XEN) [2021-06-14 23:02:32] [0x02d]      0 0x141d3f5 0x00000001          0 0x566bf5 0x19
(XEN) [2021-06-14 23:02:32] [0x02e]      0 0x141d3f8 0x00000001          0 0x566bf8 0x19
(XEN) [2021-06-14 23:02:32] [0x02f]      0 0x141d3fa 0x00000001          0 0x566bfa 0x19
(XEN) [2021-06-14 23:02:32] [0x032]      0 0x141ecdc 0x00000001          0 0x5650dc 0x19
(XEN) [2021-06-14 23:02:32] [0x033]      0 0x141ecde 0x00000001          0 0x5650de 0x19
(XEN) [2021-06-14 23:02:32] [0x034]      0 0x141ecdf 0x00000001          0 0x5650df 0x19
(XEN) [2021-06-14 23:02:32] [0x035]      0 0x141ece2 0x00000001          0 0x5650e2 0x19
(XEN) [2021-06-14 23:02:32] [0x036]      0 0x141ece5 0x00000001          0 0x5650e5 0x19
(XEN) [2021-06-14 23:02:32] [0x037]      0 0x141ece7 0x00000001          0 0x5650e7 0x19
(XEN) [2021-06-14 23:02:32] [0x038]      0 0x141ece8 0x00000001          0 0x5650e8 0x19
(XEN) [2021-06-14 23:02:32] [0x039]      0 0x141ece9 0x00000001          0 0x5650e9 0x19
(XEN) [2021-06-14 23:02:32] [0x03a]      0 0x141ed03 0x00000001          0 0x565103 0x19
(XEN) [2021-06-14 23:02:32] [0x03b]      0 0x141ed04 0x00000001          0 0x565104 0x19
(XEN) [2021-06-14 23:02:32] [0x03c]      0 0x141ed06 0x00000001          0 0x565106 0x19
(XEN) [2021-06-14 23:02:32] [0x03d]      0 0x141ed08 0x00000001          0 0x565108 0x19
(XEN) [2021-06-14 23:02:32] [0x03e]      0 0x141ed0b 0x00000001          0 0x56510b 0x19
(XEN) [2021-06-14 23:02:32] [0x03f]      0 0x141ed0c 0x00000001          0 0x56510c 0x19
(XEN) [2021-06-14 23:02:32] [0x040]      0 0x141ed0d 0x00000001          0 0x56510d 0x19
(XEN) [2021-06-14 23:02:32] [0x041]      0 0x141ed0f 0x00000001          0 0x56510f 0x19
(XEN) [2021-06-14 23:02:32] [0x049]      0 0x141ddeb 0x00000001          0 0x5661eb 0x19
(XEN) [2021-06-14 23:02:32] [0x079]      0 0x141dbd7 0x00000001          0 0x5663d7 0x19
(XEN) [2021-06-14 23:02:32] [0x0a0]      0 0x141e0ec 0x00000001          0 0x565cec 0x19
(XEN) [2021-06-14 23:02:32] [0x0be]      0 0x141e107 0x00000001          0 0x565d07 0x19
(XEN) [2021-06-14 23:02:32] [0x0db]      0 0x141d7f3 0x00000001          0 0x5667f3 0x19
(XEN) [2021-06-14 23:02:32] [0x113]      0 0x141e10f 0x00000001          0 0x565d0f 0x19
(XEN) [2021-06-14 23:02:32] [0x119]      0 0x141dd83 0x00000001          0 0x566183 0x19
(XEN) [2021-06-14 23:02:32] [0x130]      0 0x141dba9 0x00000001          0 0x5663a9 0x19
(XEN) [2021-06-14 23:02:32] [0x18d]      0 0x141da51 0x00000001          0 0x566251 0x19
(XEN) [2021-06-14 23:02:32] [0x191]      0 0x141dd1b 0x00000001          0 0x56611b 0x19
(XEN) [2021-06-14 23:02:32] [0x1c1]      0 0x141dbe7 0x00000001          0 0x5663e7 0x19
(XEN) [2021-06-14 23:02:32] [0x1d5]      0 0x141dbae 0x00000001          0 0x5663ae 0x19
(XEN) [2021-06-14 23:02:32] [0x1f6]      0 0x141db62 0x00000001          0 0x566362 0x19
(XEN) [2021-06-14 23:02:32] [0x22e]      0 0x141e1c4 0x00000001          0 0x565dc4 0x19
(XEN) [2021-06-14 23:02:32] [0x239]      0 0x141e1d2 0x00000001          0 0x565dd2 0x19
(XEN) [2021-06-14 23:02:32] [0x23a]      0 0x141e442 0x00000001          0 0x565842 0x19
(XEN) [2021-06-14 23:02:32] [0x23d]      0 0x141e073 0x00000001          0 0x565c73 0x19
(XEN) [2021-06-14 23:02:32] [0x23e]      0 0x141da1c 0x00000001          0 0x56621c 0x19
(XEN) [2021-06-14 23:02:32] [0x2a7]      0 0x141da8e 0x00000001          0 0x56628e 0x19
(XEN) [2021-06-14 23:02:32] [0x2b3]      0 0x141db00 0x00000001          0 0x566300 0x19
(XEN) [2021-06-14 23:02:32] [0x2ef]      0 0x141e1de 0x00000001          0 0x565dde 0x19
(XEN) [2021-06-14 23:02:32] [0x2f4]      0 0x141db4c 0x00000001          0 0x56634c 0x19
(XEN) [2021-06-14 23:02:32] [0x2f6]      0 0x141dafc 0x00000001          0 0x5662fc 0x19
(XEN) [2021-06-14 23:02:32] [0x2f7]      0 0x141dac5 0x00000001          0 0x5662c5 0x19
(XEN) [2021-06-14 23:02:32] [0x309]      0 0x141df18 0x00000001          0 0x565f18 0x19
(XEN) [2021-06-14 23:02:32] [0x30d]      0 0x141da2e 0x00000001          0 0x56622e 0x19
(XEN) [2021-06-14 23:02:32] [0x30e]      0 0x141dd69 0x00000001          0 0x566169 0x19
(XEN) [2021-06-14 23:02:32] [0x310]      0 0x141ddd0 0x00000001          0 0x5661d0 0x19
(XEN) [2021-06-14 23:02:32] [0x311]      0 0x141e0e4 0x00000001          0 0x565ce4 0x19
(XEN) [2021-06-14 23:02:32] [0x314]      0 0x141dfe3 0x00000001          0 0x565fe3 0x19
(XEN) [2021-06-14 23:02:32] [0x318]      0 0x141e1ca 0x00000001          0 0x565dca 0x19
(XEN) [2021-06-14 23:02:32] [0x319]      0 0x141df7d 0x00000001          0 0x565f7d 0x19
(XEN) [2021-06-14 23:02:32] [0x324]      0 0x141da37 0x00000001          0 0x566237 0x19
(XEN) [2021-06-14 23:02:32] [0x328]      0 0x141dec9 0x00000001          0 0x565ec9 0x19
(XEN) [2021-06-14 23:02:32] [0x32c]      0 0x141dbcd 0x00000001          0 0x5663cd 0x19
(XEN) [2021-06-14 23:02:32] [0x32d]      0 0x141de25 0x00000001          0 0x565e25 0x19
(XEN) [2021-06-14 23:02:32] [0x338]      0 0x141ddc7 0x00000001          0 0x5661c7 0x19
(XEN) [2021-06-14 23:02:32] [0x35e]      0 0x141e082 0x00000001          0 0x565c82 0x19
(XEN) [2021-06-14 23:02:32] [0x370]      0 0x141e40f 0x00000001          0 0x56580f 0x19
(XEN) [2021-06-14 23:02:32] [0x37b]      0 0x141dde2 0x00000001          0 0x5661e2 0x19
(XEN) [2021-06-14 23:02:32] [0x398]      0 0x141dd34 0x00000001          0 0x566134 0x19
(XEN) [2021-06-14 23:02:32] [0x3ac]      0 0x141e40a 0x00000001          0 0x56580a 0x19
(XEN) [2021-06-14 23:02:32] [0x3d4]      0 0x141db93 0x00000001          0 0x566393 0x19
(XEN) [2021-06-14 23:02:32] [0x41a]      0 0x141e110 0x00000001          0 0x565d10 0x19
(XEN) [2021-06-14 23:02:33] [0x41e]      0 0x141e15d 0x00000001          0 0x565d5d 0x19
(XEN) [2021-06-14 23:02:33] [0x54f]      0 0x141df2b 0x00000001          0 0x565f2b 0x19
(XEN) [2021-06-14 23:02:33] [0x550]      0 0x141daf1 0x00000001          0 0x5662f1 0x19
(XEN) [2021-06-14 23:02:33] [0x552]      0 0x141dc9a 0x00000001          0 0x56609a 0x19
(XEN) [2021-06-14 23:02:33] [0x553]      0 0x141da4b 0x00000001          0 0x56624b 0x19
(XEN) [2021-06-14 23:02:33] [0x554]      0 0x141dd75 0x00000001          0 0x566175 0x19
(XEN) [2021-06-14 23:02:33] [0x55b]      0 0x141d787 0x00000001          0 0x566787 0x19
(XEN) [2021-06-14 23:02:33] [0x55f]      0 0x141df6a 0x00000001          0 0x565f6a 0x19
(XEN) [2021-06-14 23:02:33] [0x560]      0 0x141dabf 0x00000001          0 0x5662bf 0x19
(XEN) [2021-06-14 23:02:33] [0x563]      0 0x141d7be 0x00000001          0 0x5667be 0x19
(XEN) [2021-06-14 23:02:33] [0x566]      0 0x141de3e 0x00000001          0 0x565e3e 0x19
(XEN) [2021-06-14 23:02:33] [0x57c]      0 0x141dee3 0x00000001          0 0x565ee3 0x19
(XEN) [2021-06-14 23:02:33] [0x57d]      0 0x141df90 0x00000001          0 0x565f90 0x19
(XEN) [2021-06-14 23:02:33] [0x57e]      0 0x141dbc8 0x00000001          0 0x5663c8 0x19
(XEN) [2021-06-14 23:02:33] [0x583]      0 0x141db5d 0x00000001          0 0x56635d 0x19
(XEN) [2021-06-14 23:02:33] [0x596]      0 0x141da5e 0x00000001          0 0x56625e 0x19
(XEN) [2021-06-14 23:02:33] [0x5a8]      0 0x141db67 0x00000001          0 0x566367 0x19
(XEN) [2021-06-14 23:02:33] [0x5aa]      0 0x141e44e 0x00000001          0 0x56584e 0x19
(XEN) [2021-06-14 23:02:33] [0x5c2]      0 0x141dc8f 0x00000001          0 0x56608f 0x19
(XEN) [2021-06-14 23:02:33] [0x5d9]      0 0x141dcb5 0x00000001          0 0x5660b5 0x19
(XEN) [2021-06-14 23:02:33] [0xd00]      0 0x78ca43 0x00000001          0 0x551643 0x19
(XEN) [2021-06-14 23:02:33] [0xd01]      0 0x78ca44 0x00000001          0 0x551644 0x19
(XEN) [2021-06-14 23:02:33] [0xd02]      0 0x78ca46 0x00000001          0 0x551646 0x19
(XEN) [2021-06-14 23:02:33] [0xd03]      0 0x78ca47 0x00000001          0 0x551647 0x19
(XEN) [2021-06-14 23:02:33] [0xd04]      0 0x78ca4b 0x00000001          0 0x55164b 0x19
(XEN) [2021-06-14 23:02:33] [0xd05]      0 0x78ca4c 0x00000001          0 0x55164c 0x19
(XEN) [2021-06-14 23:02:33] [0xd06]      0 0x78ca4e 0x00000001          0 0x55164e 0x19
(XEN) [2021-06-14 23:02:33] [0xd07]      0 0x78ca4f 0x00000001          0 0x55164f 0x19
(XEN) [2021-06-14 23:02:33] [0xd3b]      0 0x141e0ac 0x00000001          0 0x565cac 0x19
(XEN) [2021-06-14 23:02:33] [0xd59]      0 0x141daf4 0x00000001          0 0x5662f4 0x19
(XEN) [2021-06-14 23:02:33] [0xd6c]      0 0x141e0a8 0x00000001          0 0x565ca8 0x19
(XEN) [2021-06-14 23:02:33] [0xd72]      0 0x141dbd5 0x00000001          0 0x5663d5 0x19
(XEN) [2021-06-14 23:02:33] [0xd73]      0 0x141e116 0x00000001          0 0x565d16 0x19
(XEN) [2021-06-14 23:02:33] [0xd74]      0 0x141db43 0x00000001          0 0x566343 0x19
(XEN) [2021-06-14 23:02:33] [0xd75]      0 0x141dbbe 0x00000001          0 0x5663be 0x19
(XEN) [2021-06-14 23:02:33] [0xd77]      0 0x141da6d 0x00000001          0 0x56626d 0x19
(XEN) [2021-06-14 23:02:33] [0xd7e]      0 0x141daa5 0x00000001          0 0x5662a5 0x19
(XEN) [2021-06-14 23:02:33] [0xd8b]      0 0x141e420 0x00000001          0 0x565820 0x19
(XEN) [2021-06-14 23:02:33] [0xdc8]      0 0x141e1e7 0x00000001          0 0x565de7 0x19
(XEN) [2021-06-14 23:02:33] [0xdf8]      0 0x141db10 0x00000001          0 0x566310 0x19
(XEN) [2021-06-14 23:02:33] [0xe00]      0 0x141dd48 0x00000001          0 0x566148 0x19
(XEN) [2021-06-14 23:02:33] [0xe03]      0 0x141d7bc 0x00000001          0 0x5667bc 0x19
(XEN) [2021-06-14 23:02:33] [0xe1c]      0 0x141dca4 0x00000001          0 0x5660a4 0x19
(XEN) [2021-06-14 23:02:33] [0xe1f]      0 0x141de62 0x00000001          0 0x565e62 0x19
(XEN) [2021-06-14 23:02:33] [0xe20]      0 0x141db81 0x00000001          0 0x566381 0x19
(XEN) [2021-06-14 23:02:33] [0xe2a]      0 0x141df86 0x00000001          0 0x565f86 0x19
(XEN) [2021-06-14 23:02:33] [0xe2d]      0 0x141e0f8 0x00000001          0 0x565cf8 0x19
(XEN) [2021-06-14 23:02:33] [0xe2e]      0 0x141e011 0x00000001          0 0x565c11 0x19
(XEN) [2021-06-14 23:02:33] [0xe38]      0 0x141dcf5 0x00000001          0 0x5660f5 0x19
(XEN) [2021-06-14 23:02:33] [0xe63]      0 0x141e108 0x00000001          0 0x565d08 0x19
(XEN) [2021-06-14 23:02:33] [0xe64]      0 0x141de56 0x00000001          0 0x565e56 0x19
(XEN) [2021-06-14 23:02:33] [0xe68]      0 0x141da4d 0x00000001          0 0x56624d 0x19
(XEN) [2021-06-14 23:02:33] [0xe69]      0 0x141df4e 0x00000001          0 0x565f4e 0x19
(XEN) [2021-06-14 23:02:33] [0xe6b]      0 0x141dd4f 0x00000001          0 0x56614f 0x19
(XEN) [2021-06-14 23:02:33] [0xe6d]      0 0x141e046 0x00000001          0 0x565c46 0x19
(XEN) [2021-06-14 23:02:33] [0xe6e]      0 0x141db56 0x00000001          0 0x566356 0x19
(XEN) [2021-06-14 23:02:33] [0xe70]      0 0x141e0fa 0x00000001          0 0x565cfa 0x19
(XEN) [2021-06-14 23:02:33] [0xe71]      0 0x141ddfa 0x00000001          0 0x5661fa 0x19
(XEN) [2021-06-14 23:02:33] [0xe74]      0 0x141dcec 0x00000001          0 0x5660ec 0x19
(XEN) [2021-06-14 23:02:33] [0xe77]      0 0x141d7b6 0x00000001          0 0x5667b6 0x19
(XEN) [2021-06-14 23:02:33] [0xe78]      0 0x141dddc 0x00000001          0 0x5661dc 0x19
(XEN) [2021-06-14 23:02:33] [0xe79]      0 0x141db09 0x00000001          0 0x566309 0x19
(XEN) [2021-06-14 23:02:33] [0xe7a]      0 0x141dee8 0x00000001          0 0x565ee8 0x19
(XEN) [2021-06-14 23:02:33] [0xe7f]      0 0x141d7a3 0x00000001          0 0x5667a3 0x19
(XEN) [2021-06-14 23:02:33] [0xe84]      0 0x141dcd0 0x00000001          0 0x5660d0 0x19
(XEN) [2021-06-14 23:02:33] [0xe8e]      0 0x141df52 0x00000001          0 0x565f52 0x19
(XEN) [2021-06-14 23:02:33] [0xe98]      0 0x141dfdf 0x00000001          0 0x565fdf 0x19
(XEN) [2021-06-14 23:02:33] [0xe9f]      0 0x141e0f4 0x00000001          0 0x565cf4 0x19
(XEN) [2021-06-14 23:02:33] [0xeb5]      0 0x141db11 0x00000001          0 0x566311 0x19
(XEN) [2021-06-14 23:02:33] [0xeb6]      0 0x141dd31 0x00000001          0 0x566131 0x19
(XEN) [2021-06-14 23:02:33] [0xed0]      0 0x141e01d 0x00000001          0 0x565c1d 0x19
(XEN) [2021-06-14 23:02:33] [0xed1]      0 0x141dd86 0x00000001          0 0x566186 0x19
(XEN) [2021-06-14 23:02:33] [0xed2]      0 0x141db49 0x00000001          0 0x566349 0x19
(XEN) [2021-06-14 23:02:33] [0xeff]      0 0x141df81 0x00000001          0 0x565f81 0x19
(XEN) [2021-06-14 23:02:33] [0xf00]      0 0x141d79a 0x00000001          0 0x56679a 0x19
(XEN) [2021-06-14 23:02:33] [0xf05]      0 0x141e0ba 0x00000001          0 0x565cba 0x19
(XEN) [2021-06-14 23:02:33] [0xf08]      0 0x141ddb3 0x00000001          0 0x5661b3 0x19
(XEN) [2021-06-14 23:02:33] [0xf0e]      0 0x141e06d 0x00000001          0 0x565c6d 0x19
(XEN) [2021-06-14 23:02:33] [0xf17]      0 0x141e041 0x00000001          0 0x565c41 0x19
(XEN) [2021-06-14 23:02:33] [0xf30]      0 0x141da38 0x00000001          0 0x566238 0x19
(XEN) [2021-06-14 23:02:33] [0xf42]      0 0x141ddc0 0x00000001          0 0x5661c0 0x19
(XEN) [2021-06-14 23:02:33] [0xfb2]      0 0x141e19f 0x00000001          0 0x565d9f 0x19
(XEN) [2021-06-14 23:02:33] [0xfb5]      0 0x141deba 0x00000001          0 0x565eba 0x19
(XEN) [2021-06-14 23:02:33] [0xfb7]      0 0x141e1d0 0x00000001          0 0x565dd0 0x19
(XEN) [2021-06-14 23:02:33] [0xfda]      0 0x141e03e 0x00000001          0 0x565c3e 0x19
(XEN) [2021-06-14 23:02:33] [0x1121]      0 0x141d7b3 
0x00000001          0 0x5667b3 0x19
(XEN) [2021-06-14 23:02:33] [0x1123]      0 0x141e0b7 
0x00000001          0 0x565cb7 0x19
(XEN) [2021-06-14 23:02:33] [0x11e1]      0 0x141dda5 
0x00000001          0 0x5661a5 0x19
(XEN) [2021-06-14 23:02:33] [0x11e5]      0 0x141da7a 
0x00000001          0 0x56627a 0x19
(XEN) [2021-06-14 23:02:33] [0x121e]      0 0x141dfab 
0x00000001          0 0x565fab 0x19
(XEN) [2021-06-14 23:02:33] [0x1223]      0 0x141da62 
0x00000001          0 0x566262 0x19
(XEN) [2021-06-14 23:02:33] [0x123f]      0 0x141da21 
0x00000001          0 0x566221 0x19
(XEN) [2021-06-14 23:02:33] [0x125f]      0 0x141df7b 
0x00000001          0 0x565f7b 0x19
(XEN) [2021-06-14 23:02:33] [0x1276]      0 0x141e056 
0x00000001          0 0x565c56 0x19
(XEN) [2021-06-14 23:02:33] [0x1277]      0 0x141dd57 
0x00000001          0 0x566157 0x19
(XEN) [2021-06-14 23:02:33] [0x1299]      0 0x141df70 
0x00000001          0 0x565f70 0x19
(XEN) [2021-06-14 23:02:33] [0x12bb]      0 0x141e069 
0x00000001          0 0x565c69 0x19
(XEN) [2021-06-14 23:02:33] [0x1307]      0 0x141e685 
0x00000001          0 0x565685 0x19
(XEN) [2021-06-14 23:02:33] [0x130b]      0 0x141e12f 
0x00000001          0 0x565d2f 0x19
(XEN) [2021-06-14 23:02:33] [0x1344]      0 0x141deb5 
0x00000001          0 0x565eb5 0x19
(XEN) [2021-06-14 23:02:33] [0x1387]      0 0x141db5e 
0x00000001          0 0x56635e 0x19
(XEN) [2021-06-14 23:02:33] [0x138c]      0 0x141da2d 
0x00000001          0 0x56622d 0x19
(XEN) [2021-06-14 23:02:33] [0x138d]      0 0x141dbf3 
0x00000001          0 0x5663f3 0x19
(XEN) [2021-06-14 23:02:33] [0x138e]      0 0x141e020 
0x00000001          0 0x565c20 0x19
(XEN) [2021-06-14 23:02:33] [0x13a7]      0 0x141e07c 
0x00000001          0 0x565c7c 0x19
(XEN) [2021-06-14 23:02:33] [0x13ad]      0 0x141db86 
0x00000001          0 0x566386 0x19
(XEN) [2021-06-14 23:02:33] [0x13c8]      0 0x141d7cf 
0x00000001          0 0x5667cf 0x19
(XEN) [2021-06-14 23:02:33] [0x13d1]      0 0x141dfa4 
0x00000001          0 0x565fa4 0x19
(XEN) [2021-06-14 23:02:33] [0x1404]      0 0x141e109 
0x00000001          0 0x565d09 0x19
(XEN) [2021-06-14 23:02:33] [0x1406]      0 0x141ddf9 
0x00000001          0 0x5661f9 0x19
(XEN) [2021-06-14 23:02:33] [0x141c]      0 0x141dbc7 
0x00000001          0 0x5663c7 0x19
(XEN) [2021-06-14 23:02:33] [0x141d]      0 0x141dd3b 
0x00000001          0 0x56613b 0x19
(XEN) [2021-06-14 23:02:33] [0x1427]      0 0x141dffc 
0x00000001          0 0x565ffc 0x19
(XEN) [2021-06-14 23:02:33] [0x1439]      0 0x141db7b 
0x00000001          0 0x56637b 0x19
(XEN) [2021-06-14 23:02:33] [0x1442]      0 0x141df7c 
0x00000001          0 0x565f7c 0x19
(XEN) [2021-06-14 23:02:33] [0x144d]      0 0x141d78b 
0x00000001          0 0x56678b 0x19
(XEN) [2021-06-14 23:02:33] [0x14ad]      0 0x141da94 
0x00000001          0 0x566294 0x19
(XEN) [2021-06-14 23:02:33] [0x14f5]      0 0x141ddea 
0x00000001          0 0x5661ea 0x19
(XEN) [2021-06-14 23:02:33] [0x1501]      0 0x141e14d 
0x00000001          0 0x565d4d 0x19
(XEN) [2021-06-14 23:02:33] [0x1506]      0 0x141dabb 
0x00000001          0 0x5662bb 0x19
(XEN) [2021-06-14 23:02:33] [0x1512]      0 0x141e051 
0x00000001          0 0x565c51 0x19
(XEN) [2021-06-14 23:02:33] [0x151e]      0 0x141dbbd 
0x00000001          0 0x5663bd 0x19
(XEN) [2021-06-14 23:02:33] [0x1530]      0 0x141df1d 
0x00000001          0 0x565f1d 0x19
(XEN) [2021-06-14 23:02:33] [0x1539]      0 0x141da8f 
0x00000001          0 0x56628f 0x19
(XEN) [2021-06-14 23:02:33] [0x1547]      0 0x141debd 
0x00000001          0 0x565ebd 0x19
(XEN) [2021-06-14 23:02:33] [0x1552]      0 0x141d7f1 
0x00000001          0 0x5667f1 0x19
(XEN) [2021-06-14 23:02:33] [0x1555]      0 0x141db36 
0x00000001          0 0x566336 0x19
(XEN) [2021-06-14 23:02:33] [0x1562]      0 0x141daa6 
0x00000001          0 0x5662a6 0x19
(XEN) [2021-06-14 23:02:34] [0x15b3]      0 0x141e01b 
0x00000001          0 0x565c1b 0x19
(XEN) [2021-06-14 23:02:34] [0x15e3]      0 0x141e0e5 
0x00000001          0 0x565ce5 0x19
(XEN) [2021-06-14 23:02:34] [0x161d]      0 0x141dd2c 
0x00000001          0 0x56612c 0x19
(XEN) [2021-06-14 23:02:34] [0x161f]      0 0x141da4f 
0x00000001          0 0x56624f 0x19
(XEN) [2021-06-14 23:02:34] [0x16ae]      0 0x141db74 
0x00000001          0 0x566374 0x19
(XEN) [2021-06-14 23:02:34] [0x16b3]      0 0x141dfec 
0x00000001          0 0x565fec 0x19
(XEN) [2021-06-14 23:02:34] [0x16d6]      0 0x141ddbb 
0x00000001          0 0x5661bb 0x19
(XEN) [2021-06-14 23:02:34] [0x16ed]      0 0x141e0b1 
0x00000001          0 0x565cb1 0x19
(XEN) [2021-06-14 23:02:34] [0x16f3]      0 0x141dabc 
0x00000001          0 0x5662bc 0x19
(XEN) [2021-06-14 23:02:34] [0x16f5]      0 0x141dda1 
0x00000001          0 0x5661a1 0x19
(XEN) [2021-06-14 23:02:34] [0x174e]      0 0x141dd8f 
0x00000001          0 0x56618f 0x19
(XEN) [2021-06-14 23:02:34] [0x1750]      0 0x141dacd 
0x00000001          0 0x5662cd 0x19
(XEN) [2021-06-14 23:02:34] [0x1751]      0 0x141da33 
0x00000001          0 0x566233 0x19
(XEN) [2021-06-14 23:02:34] [0x1758]      0 0x141e096 
0x00000001          0 0x565c96 0x19
(XEN) [2021-06-14 23:02:34] [0x175b]      0 0x141dd6d 
0x00000001          0 0x56616d 0x19
(XEN) [2021-06-14 23:02:34] [0x17a3]      0 0x141daf0 
0x00000001          0 0x5662f0 0x19
(XEN) [2021-06-14 23:02:34] [0x17a5]      0 0x141e118 
0x00000001          0 0x565d18 0x19
(XEN) [2021-06-14 23:02:34] [0x17a7]      0 0x141deda 
0x00000001          0 0x565eda 0x19
(XEN) [2021-06-14 23:02:34] [0x17e1]      0 0x141d3fc 
0x00000001          0 0x566bfc 0x19
(XEN) [2021-06-14 23:02:34] [0x1874]      0 0x141dd07 
0x00000001          0 0x566107 0x19
(XEN) [2021-06-14 23:02:34] [0x18a0]      0 0x141ddc4 
0x00000001          0 0x5661c4 0x19
(XEN) [2021-06-14 23:02:34] [0x18c2]      0 0x141e053 
0x00000001          0 0x565c53 0x19
(XEN) [2021-06-14 23:02:34] [0x18ca]      0 0x141dfc5 
0x00000001          0 0x565fc5 0x19
(XEN) [2021-06-14 23:02:34] [0x18e2]      0 0x141dea1 
0x00000001          0 0x565ea1 0x19
(XEN) [2021-06-14 23:02:34] [0x18eb]      0 0x141dd85 
0x00000001          0 0x566185 0x19
(XEN) [2021-06-14 23:02:34] [0x18f9]      0 0x141dd30 
0x00000001          0 0x566130 0x19
(XEN) [2021-06-14 23:02:34] [0x1900]      0 0x141df73 
0x00000001          0 0x565f73 0x19
(XEN) [2021-06-14 23:02:34] [0x1902]      0 0x141db69 
0x00000001          0 0x566369 0x19
(XEN) [2021-06-14 23:02:34] [0x192a]      0 0x141da2b 
0x00000001          0 0x56622b 0x19
(XEN) [2021-06-14 23:02:34] [0x195e]      0 0x141e750 
0x00000001          0 0x565750 0x19
(XEN) [2021-06-14 23:02:34] [0x1961]      0 0x141db80 
0x00000001          0 0x566380 0x19
(XEN) [2021-06-14 23:02:34] [0x1963]      0 0x141db40 
0x00000001          0 0x566340 0x19
(XEN) [2021-06-14 23:02:34] [0x1976]      0 0x141e17c 
0x00000001          0 0x565d7c 0x19
(XEN) [2021-06-14 23:02:34] [0x19b7]      0 0x141d7a0 
0x00000001          0 0x5667a0 0x19
(XEN) [2021-06-14 23:02:34] [0x19d5]      0 0x141dbf0 
0x00000001          0 0x5663f0 0x19
(XEN) [2021-06-14 23:02:34] [0x19e8]      0 0x141e42a 
0x00000001          0 0x56582a 0x19
(XEN) [2021-06-14 23:02:34] [0x1a00]      0 0x141df4c 
0x00000001          0 0x565f4c 0x19
(XEN) [2021-06-14 23:02:34] [0x1a09]      0 0x141dbbc 
0x00000001          0 0x5663bc 0x19
(XEN) [2021-06-14 23:02:34] [0x1a14]      0 0x141e0d2 
0x00000001          0 0x565cd2 0x19
(XEN) [2021-06-14 23:02:34] [0x1a3d]      0 0x141dea6 
0x00000001          0 0x565ea6 0x19
(XEN) [2021-06-14 23:02:34] [0x1a5e]      0 0x141e41f 
0x00000001          0 0x56581f 0x19
(XEN) [2021-06-14 23:02:34] [0x1a65]      0 0x141da15 
0x00000001          0 0x566215 0x19
(XEN) [2021-06-14 23:02:34] [0x1a80]      0 0x141ddec 
0x00000001          0 0x5661ec 0x19
(XEN) [2021-06-14 23:02:34] [0x1aa7]      0 0x141da9f 
0x00000001          0 0x56629f 0x19
(XEN) [2021-06-14 23:02:34] [0x1acd]      0 0x141e031 
0x00000001          0 0x565c31 0x19
(XEN) [2021-06-14 23:02:34] [0x1acf]      0 0x141d7e9 
0x00000001          0 0x5667e9 0x19
(XEN) [2021-06-14 23:02:34] [0x1ad0]      0 0x141ddc5 
0x00000001          0 0x5661c5 0x19
(XEN) [2021-06-14 23:02:34] [0x1ad1]      0 0x141da58 
0x00000001          0 0x566258 0x19
(XEN) [2021-06-14 23:02:34] [0x1aed]      0 0x141de47 
0x00000001          0 0x565e47 0x19
(XEN) [2021-06-14 23:02:34] [0x1b6d]      0 0x141dc96 
0x00000001          0 0x566096 0x19
(XEN) [2021-06-14 23:02:34] [0x1c25]      0 0x141da7f 
0x00000001          0 0x56627f 0x19
(XEN) [2021-06-14 23:02:34] [0x1c72]      0 0x141dbf9 
0x00000001          0 0x5663f9 0x19
(XEN) [2021-06-14 23:02:34] [0x1cb5]      0 0x141e018 
0x00000001          0 0x565c18 0x19
(XEN) [2021-06-14 23:02:34] [0x1cb7]      0 0x141dd2f 
0x00000001          0 0x56612f 0x19
(XEN) [2021-06-14 23:02:34] [0x1cc3]      0 0x141de28 
0x00000001          0 0x565e28 0x19
(XEN) [2021-06-14 23:02:34] [0x1d25]      0 0x141e1bd 
0x00000001          0 0x565dbd 0x19
(XEN) [2021-06-14 23:02:34] [0x1d32]      0 0x141d7ce 
0x00000001          0 0x5667ce 0x19
(XEN) [2021-06-14 23:02:34] [0x1d5a]      0 0x141df65 
0x00000001          0 0x565f65 0x19
(XEN) [2021-06-14 23:02:34] [0x1d73]      0 0x141dce9 
0x00000001          0 0x5660e9 0x19
(XEN) [2021-06-14 23:02:34] [0x1dc8]      0 0x141e11f 
0x00000001          0 0x565d1f 0x19
(XEN) [2021-06-14 23:02:34] [0x1dda]      0 0x141e0df 
0x00000001          0 0x565cdf 0x19
(XEN) [2021-06-14 23:02:34] [0x1df4]      0 0x141dfda 
0x00000001          0 0x565fda 0x19
(XEN) [2021-06-14 23:02:34] [0x1df7]      0 0x141dbf1 
0x00000001          0 0x5663f1 0x19
(XEN) [2021-06-14 23:02:34] [0x1df8]      0 0x141e001 
0x00000001          0 0x565c01 0x19
(XEN) [2021-06-14 23:02:34] [0x1dfa]      0 0x141ddef 
0x00000001          0 0x5661ef 0x19
(XEN) [2021-06-14 23:02:34] [0x1e12]      0 0x141dec1 
0x00000001          0 0x565ec1 0x19
(XEN) [2021-06-14 23:02:34] [0x1e44]      0 0x141e1fe 
0x00000001          0 0x565dfe 0x19
(XEN) [2021-06-14 23:02:34] [0x1e4d]      0 0x141dfb2 
0x00000001          0 0x565fb2 0x19
(XEN) [2021-06-14 23:02:34] [0x1e5e]      0 0x141da05 
0x00000001          0 0x566205 0x19
(XEN) [2021-06-14 23:02:34] [0x1e63]      0 0x141dc87 
0x00000001          0 0x566087 0x19
(XEN) [2021-06-14 23:02:34] [0x1e7b]      0 0x141e43c 
0x00000001          0 0x56583c 0x19
(XEN) [2021-06-14 23:02:34] [0x1ed1]      0 0x141dd82 
0x00000001          0 0x566182 0x19
(XEN) [2021-06-14 23:02:34] [0x1ed4]      0 0x141df83 
0x00000001          0 0x565f83 0x19
(XEN) [2021-06-14 23:02:34] [0x1ed5]      0 0x141db28 
0x00000001          0 0x566328 0x19
(XEN) [2021-06-14 23:02:34] [0x1ed6]      0 0x141e035 
0x00000001          0 0x565c35 0x19
(XEN) [2021-06-14 23:02:34] [0x1ed7]      0 0x141e180 
0x00000001          0 0x565d80 0x19
(XEN) [2021-06-14 23:02:34] [0x1edc]      0 0x141e064 
0x00000001          0 0x565c64 0x19
(XEN) [2021-06-14 23:02:34] [0x1ef6]      0 0x141db1c 
0x00000001          0 0x56631c 0x19
(XEN) [2021-06-14 23:02:34] [0x1efa]      0 0x141daa9 
0x00000001          0 0x5662a9 0x19
(XEN) [2021-06-14 23:02:34] [0x1f05]      0 0x141e049 
0x00000001          0 0x565c49 0x19
(XEN) [2021-06-14 23:02:34] [0x1f13]      0 0x141df8e 
0x00000001          0 0x565f8e 0x19
(XEN) [2021-06-14 23:02:34] [0x1f17]      0 0x141dce2 
0x00000001          0 0x5660e2 0x19
(XEN) [2021-06-14 23:02:34] [0x1f26]      0 0x141dd7d 
0x00000001          0 0x56617d 0x19
(XEN) [2021-06-14 23:02:34] [0x1f4f]      0 0x141df91 
0x00000001          0 0x565f91 0x19
(XEN) [2021-06-14 23:02:34] [0x1f62]      0 0x141e0d3 
0x00000001          0 0x565cd3 0x19
(XEN) [2021-06-14 23:02:34] [0x1f6d]      0 0x141dde6 
0x00000001          0 0x5661e6 0x19
(XEN) [2021-06-14 23:02:34] [0x1f7d]      0 0x141dd18 
0x00000001          0 0x566118 0x19
(XEN) [2021-06-14 23:02:34] [0x1fab]      0 0x141e6ec 
0x00000001          0 0x5656ec 0x19
(XEN) [2021-06-14 23:02:34] [0x1fae]      0 0x141dd97 
0x00000001          0 0x566197 0x19
(XEN) [2021-06-14 23:02:34] [0x1fb7]      0 0x141de4e 
0x00000001          0 0x565e4e 0x19
(XEN) [2021-06-14 23:02:34] [0x1fbd]      0 0x141e188 
0x00000001          0 0x565d88 0x19
(XEN) [2021-06-14 23:02:34] [0x2005]      0 0x141e03c 
0x00000001          0 0x565c3c 0x19
(XEN) [2021-06-14 23:02:34] [0x2006]      0 0x141ddf5 
0x00000001          0 0x5661f5 0x19
(XEN) [2021-06-14 23:02:34] [0x2012]      0 0x141e156 
0x00000001          0 0x565d56 0x19
(XEN) [2021-06-14 23:02:34] [0x2017]      0 0x141dd1a 
0x00000001          0 0x56611a 0x19
(XEN) [2021-06-14 23:02:34] [0x2018]      0 0x141dce5 
0x00000001          0 0x5660e5 0x19
(XEN) [2021-06-14 23:02:34] [0x201a]      0 0x141e007 
0x00000001          0 0x565c07 0x19
(XEN) [2021-06-14 23:02:34] [0x201c]      0 0x141d7ec 
0x00000001          0 0x5667ec 0x19
(XEN) [2021-06-14 23:02:34] [0x201f]      0 0x141dfe8 
0x00000001          0 0x565fe8 0x19
(XEN) [2021-06-14 23:02:34] [0x2021]      0 0x141da8c 
0x00000001          0 0x56628c 0x19
(XEN) [2021-06-14 23:02:34] [0x2022]      0 0x141db60 
0x00000001          0 0x566360 0x19
(XEN) [2021-06-14 23:02:34] [0x2025]      0 0x141daaf 
0x00000001          0 0x5662af 0x19
(XEN) [2021-06-14 23:02:34] [0x2026]      0 0x141e477 
0x00000001          0 0x565877 0x19
(XEN) [2021-06-14 23:02:34] [0x2027]      0 0x141dae2 
0x00000001          0 0x5662e2 0x19
(XEN) [2021-06-14 23:02:34] [0x2034]      0 0x141e4f5 
0x00000001          0 0x5658f5 0x19
(XEN) [2021-06-14 23:02:34] [0x2036]      0 0x141da63 
0x00000001          0 0x566263 0x19
(XEN) [2021-06-14 23:02:34] [0x20fe]      0 0x141dd88 
0x00000001          0 0x566188 0x19
(XEN) [2021-06-14 23:02:34] [0x2100]      0 0x141dbb6 
0x00000001          0 0x5663b6 0x19
(XEN) [2021-06-14 23:02:34] [0x2101]      0 0x141da3a 
0x00000001          0 0x56623a 0x19
(XEN) [2021-06-14 23:02:34] [0x2104]      0 0x141e503 
0x00000001          0 0x565903 0x19
(XEN) [2021-06-14 23:02:34] [0x2105]      0 0x141e0d8 
0x00000001          0 0x565cd8 0x19
(XEN) [2021-06-14 23:02:34] [0x214f]      0 0x141e669 
0x00000001          0 0x565669 0x19
(XEN) [2021-06-14 23:02:34] [0x21bc]      0 0x141dcc1 
0x00000001          0 0x5660c1 0x19
(XEN) [2021-06-14 23:02:34] [0x21bd]      0 0x141da27 
0x00000001          0 0x566227 0x19
(XEN) [2021-06-14 23:02:34] [0x21ff]      0 0x141dab7 
0x00000001          0 0x5662b7 0x19
(XEN) [2021-06-14 23:02:34] [0x2226]      0 0x141da46 
0x00000001          0 0x566246 0x19
(XEN) [2021-06-14 23:02:34] [0x228b]      0 0x141e08c 
0x00000001          0 0x565c8c 0x19
(XEN) [2021-06-14 23:02:34] [0x22ef]      0 0x141e015 
0x00000001          0 0x565c15 0x19
(XEN) [2021-06-14 23:02:34] [0x22ff]      0 0x141daab 
0x00000001          0 0x5662ab 0x19
(XEN) [2021-06-14 23:02:34] [0x230f]      0 0x141e055 
0x00000001          0 0x565c55 0x19
(XEN) [2021-06-14 23:02:34] [0x2317]      0 0x141dcc9 
0x00000001          0 0x5660c9 0x19
(XEN) [2021-06-14 23:02:34] [0x232e]      0 0x141dbef 
0x00000001          0 0x5663ef 0x19
(XEN) [2021-06-14 23:02:34] [0x2335]      0 0x141d792 
0x00000001          0 0x566792 0x19
(XEN) [2021-06-14 23:02:34] [0x2377]      0 0x141dfbc 
0x00000001          0 0x565fbc 0x19
(XEN) [2021-06-14 23:02:34] [0x2384]      0 0x141e08b 
0x00000001          0 0x565c8b 0x19
(XEN) [2021-06-14 23:02:34] [0x23c4]      0 0x141db66 
0x00000001          0 0x566366 0x19
(XEN) [2021-06-14 23:02:34] [0x240e]      0 0x141ddae 
0x00000001          0 0x5661ae 0x19
(XEN) [2021-06-14 23:02:35] [0x2467]      0 0x141d796 
0x00000001          0 0x566796 0x19
(XEN) [2021-06-14 23:02:35] [0x246b]      0 0x141db8e 
0x00000001          0 0x56638e 0x19
(XEN) [2021-06-14 23:02:35] [0x247b]      0 0x141d7f4 
0x00000001          0 0x5667f4 0x19
(XEN) [2021-06-14 23:02:35] [0x247c]      0 0x141dd6c 
0x00000001          0 0x56616c 0x19
(XEN) [2021-06-14 23:02:35] [0x247d]      0 0x141db65 
0x00000001          0 0x566365 0x19
(XEN) [2021-06-14 23:02:35] [0x247e]      0 0x141db25 
0x00000001          0 0x566325 0x19
(XEN) [2021-06-14 23:02:35] [0x249a]      0 0x141e045 
0x00000001          0 0x565c45 0x19
(XEN) [2021-06-14 23:02:35] [0x24da]      0 0x141dba1 
0x00000001          0 0x5663a1 0x19
(XEN) [2021-06-14 23:02:35] [0x24dc]      0 0x141db82 
0x00000001          0 0x566382 0x19
(XEN) [2021-06-14 23:02:35] [0x2507]      0 0x141dbfa 
0x00000001          0 0x5663fa 0x19
(XEN) [2021-06-14 23:02:35] [0x250e]      0 0x141e115 
0x00000001          0 0x565d15 0x19
(XEN) [2021-06-14 23:02:35] [0x2515]      0 0x141de9c 
0x00000001          0 0x565e9c 0x19
(XEN) [2021-06-14 23:02:35] [0x251b]      0 0x141de7d 
0x00000001          0 0x565e7d 0x19
(XEN) [2021-06-14 23:02:35] [0x251d]      0 0x141df3e 
0x00000001          0 0x565f3e 0x19
(XEN) [2021-06-14 23:02:35] [0x2520]      0 0x141dcdb 
0x00000001          0 0x5660db 0x19
(XEN) [2021-06-14 23:02:35] [0x2535]      0 0x141dab4 
0x00000001          0 0x5662b4 0x19
(XEN) [2021-06-14 23:02:35] [0x254b]      0 0x141dd22 
0x00000001          0 0x566122 0x19
(XEN) [2021-06-14 23:02:35] [0x255f]      0 0x141da13 
0x00000001          0 0x566213 0x19
(XEN) [2021-06-14 23:02:35] [0x2560]      0 0x141df2e 
0x00000001          0 0x565f2e 0x19
(XEN) [2021-06-14 23:02:35] [0x2578]      0 0x141ddce 
0x00000001          0 0x5661ce 0x19
(XEN) [2021-06-14 23:02:35] [0x2583]      0 0x141e0d4 
0x00000001          0 0x565cd4 0x19
(XEN) [2021-06-14 23:02:35] [0x2586]      0 0x141da69 
0x00000001          0 0x566269 0x19
(XEN) [2021-06-14 23:02:35] [0x259e]      0 0x141da12 
0x00000001          0 0x566212 0x19
(XEN) [2021-06-14 23:02:35] [0x25b7]      0 0x141dbc6 
0x00000001          0 0x5663c6 0x19
(XEN) [2021-06-14 23:02:35] [0x25bb]      0 0x141e1d1 
0x00000001          0 0x565dd1 0x19
(XEN) [2021-06-14 23:02:35] [0x25db]      0 0x141db05 
0x00000001          0 0x566305 0x19
(XEN) [2021-06-14 23:02:35] [0x2614]      0 0x141e10c 
0x00000001          0 0x565d0c 0x19
(XEN) [2021-06-14 23:02:35] [0x2615]      0 0x141dfc3 
0x00000001          0 0x565fc3 0x19
(XEN) [2021-06-14 23:02:35] [0x2616]      0 0x141db34 
0x00000001          0 0x566334 0x19
(XEN) [2021-06-14 23:02:35] [0x26b2]      0 0x141e0eb 
0x00000001          0 0x565ceb 0x19
(XEN) [2021-06-14 23:02:35] [0x2a44]      0 0x141d789 
0x00000001          0 0x566789 0x19
(XEN) [2021-06-14 23:02:35] [0x2a4e]      0 0x141e125 
0x00000001          0 0x565d25 0x19
(XEN) [2021-06-14 23:02:35] [0x2a55]      0 0x141e075 
0x00000001          0 0x565c75 0x19
(XEN) [2021-06-14 23:02:35] [0x2a56]      0 0x141e0f2 
0x00000001          0 0x565cf2 0x19
(XEN) [2021-06-14 23:02:35] [0x2a58]      0 0x141e0b6 
0x00000001          0 0x565cb6 0x19
(XEN) [2021-06-14 23:02:35] [0x2a5b]      0 0x141df6d 
0x00000001          0 0x565f6d 0x19
(XEN) [2021-06-14 23:02:35]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:35] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:35] grant-table for remote d3 (v1)
(XEN) [2021-06-14 23:02:35]   5 frames (64 max), 0 maptrack frames (1024 max)
(XEN) [2021-06-14 23:02:35] [0x000]      0 0x2031d90 0x00000002          
0 0x0fefff 0x19
(XEN) [2021-06-14 23:02:35] [0x009]      0 0x13bf1c7 0x00000001          
0 0x07f1c7 0x19
(XEN) [2021-06-14 23:02:35] [0x00a]      0 0x142eddd 0x00000001          
0 0x186fdd 0x19
(XEN) [2021-06-14 23:02:35] [0x900]      0 0x143c5dd 0x00000001          
0 0x0e97dd 0x19
(XEN) [2021-06-14 23:02:35] [0x901]      0 0x143c5de 0x00000001          
0 0x0e97de 0x19
(XEN) [2021-06-14 23:02:35] [0x902]      0 0x13bfae7 0x00000001          
0 0x07fae7 0x19
(XEN) [2021-06-14 23:02:35] [0x903]      0 0x13bf0cf 0x00000001          
0 0x07f0cf 0x19
(XEN) [2021-06-14 23:02:35] [0x904]      0 0x13bfb37 0x00000001          
0 0x07fb37 0x19
(XEN) [2021-06-14 23:02:35] [0x905]      0 0x143c5e1 0x00000001          
0 0x0e97e1 0x19
(XEN) [2021-06-14 23:02:35] [0x906]      0 0x13bf0d1 0x00000001          
0 0x07f0d1 0x19
(XEN) [2021-06-14 23:02:35] [0x907]      0 0x13bfb26 0x00000001          
0 0x07fb26 0x19
(XEN) [2021-06-14 23:02:35] [0x94e]      0 0x143c401 0x00000001          
0 0x0e9601 0x19
(XEN) [2021-06-14 23:02:35] [0x974]      0 0x143c7db 0x00000001          
0 0x0e95db 0x19
(XEN) [2021-06-14 23:02:35] [0x979]      0 0x143c7d6 0x00000001          
0 0x0e95d6 0x19
(XEN) [2021-06-14 23:02:35] [0x984]      0 0x143c7ca 0x00000001          
0 0x0e95ca 0x19
(XEN) [2021-06-14 23:02:35] [0x98a]      0 0x143c7c4 0x00000001          
0 0x0e95c4 0x19
(XEN) [2021-06-14 23:02:35] [0x990]      0 0x143c7be 0x00000001          
0 0x0e95be 0x19
(XEN) [2021-06-14 23:02:35] [0x991]      0 0x143c7bd 0x00000001          
0 0x0e95bd 0x19
(XEN) [2021-06-14 23:02:35] [0x9b8]      0 0x143c796 0x00000001          
0 0x0e9596 0x19
(XEN) [2021-06-14 23:02:35] [0x9d7]      0 0x143c777 0x00000001          
0 0x0e9577 0x19
(XEN) [2021-06-14 23:02:35] [0x9dc]      0 0x143c772 0x00000001          
0 0x0e9572 0x19
(XEN) [2021-06-14 23:02:35]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:35] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:35] grant-table for remote d5 (v1)
(XEN) [2021-06-14 23:02:35]   10 frames (64 max), 0 maptrack frames (1024 max)
(XEN) [2021-06-14 23:02:35] [0x000]      0 0x15b0922 0x00000002          
0 0x0fefff 0x19
(XEN) [2021-06-14 23:02:35] [0x008]      0 0x1dd00e3 0x00000001          
0 0x5610e3 0x19
(XEN) [2021-06-14 23:02:35] [0x009]      0 0x1dd00e8 0x00000001          
0 0x5610e8 0x19
(XEN) [2021-06-14 23:02:35] [0x00a]      0 0x1de2ef0 0x00000001          
0 0x54e2f0 0x19
(XEN) [2021-06-14 23:02:35] [0x00b]      0 0x1de2ef5 0x00000001          
0 0x54e2f5 0x19
(XEN) [2021-06-14 23:02:35] [0x00c]      0 0x1de2f18 0x00000001          
0 0x54e318 0x19
(XEN) [2021-06-14 23:02:35] [0x00d]      0 0x1de2f1d 0x00000001          
0 0x54e31d 0x19
(XEN) [2021-06-14 23:02:35] [0x00e]      0 0x1de2f22 0x00000001          
0 0x54e322 0x19
(XEN) [2021-06-14 23:02:35] [0x00f]      0 0x1de2f28 0x00000001          
0 0x54e328 0x19
(XEN) [2021-06-14 23:02:35] [0x010]      0 0x1de4d82 0x00000001          
0 0x54c582 0x19
(XEN) [2021-06-14 23:02:35] [0x112]      0 0x1de4a92 0x00000001          
0 0x54c692 0x19
(XEN) [2021-06-14 23:02:35] [0x114]      0 0x1de5000 0x00000001          
0 0x54c000 0x19
(XEN) [2021-06-14 23:02:35] [0x116]      0 0x1de4cdc 0x00000001          
0 0x54c4dc 0x19
(XEN) [2021-06-14 23:02:35] [0x11b]      0 0x1de46df 0x00000001          
0 0x54cadf 0x19
(XEN) [2021-06-14 23:02:35] [0x11c]      0 0x1de47ee 0x00000001          
0 0x54cbee 0x19
(XEN) [2021-06-14 23:02:35] [0x11f]      0 0x1de4739 0x00000001          
0 0x54cb39 0x19
(XEN) [2021-06-14 23:02:35] [0x120]      0 0x1de5110 0x00000001          
0 0x54c110 0x19
(XEN) [2021-06-14 23:02:35] [0x121]      0 0x1de4e94 0x00000001          
0 0x54c294 0x19
(XEN) [2021-06-14 23:02:35] [0x123]      0 0x1de544e 0x00000001          
0 0x54bc4e 0x19
(XEN) [2021-06-14 23:02:35] [0x126]      0 0x1de51a2 0x00000001          
0 0x54c1a2 0x19
(XEN) [2021-06-14 23:02:35] [0x127]      0 0x1de5564 0x00000001          
0 0x54bd64 0x19
(XEN) [2021-06-14 23:02:35] [0x128]      0 0x1de5585 0x00000001          
0 0x54bd85 0x19
(XEN) [2021-06-14 23:02:35] [0x129]      0 0x1de47e0 0x00000001          
0 0x54cbe0 0x19
(XEN) [2021-06-14 23:02:35] [0x12a]      0 0x1de5519 0x00000001          
0 0x54bd19 0x19
(XEN) [2021-06-14 23:02:35] [0x12b]      0 0x1de4a0b 0x00000001          
0 0x54c60b 0x19
(XEN) [2021-06-14 23:02:35] [0x12c]      0 0x1de51c0 0x00000001          
0 0x54c1c0 0x19
(XEN) [2021-06-14 23:02:35] [0x131]      0 0x1de51a6 0x00000001          
0 0x54c1a6 0x19
(XEN) [2021-06-14 23:02:35] [0x132]      0 0x1de4cd5 0x00000001          
0 0x54c4d5 0x19
(XEN) [2021-06-14 23:02:35] [0x136]      0 0x1de4cbd 0x00000001          
0 0x54c4bd 0x19
(XEN) [2021-06-14 23:02:35] [0x137]      0 0x1de4dfa 0x00000001          
0 0x54c5fa 0x19
(XEN) [2021-06-14 23:02:35] [0x138]      0 0x1de465e 0x00000001          
0 0x54ca5e 0x19
(XEN) [2021-06-14 23:02:35] [0x139]      0 0x1de4f3c 0x00000001          
0 0x54c33c 0x19
(XEN) [2021-06-14 23:02:35] [0x13a]      0 0x1de4a15 0x00000001          
0 0x54c615 0x19
(XEN) [2021-06-14 23:02:35] [0x13c]      0 0x1de5195 0x00000001          
0 0x54c195 0x19
(XEN) [2021-06-14 23:02:35] [0x13e]      0 0x1de4c74 0x00000001          
0 0x54c474 0x19
(XEN) [2021-06-14 23:02:35] [0x140]      0 0x1de5145 0x00000001          
0 0x54c145 0x19
(XEN) [2021-06-14 23:02:35] [0x141]      0 0x1de4aab 0x00000001          
0 0x54c6ab 0x19
(XEN) [2021-06-14 23:02:35] [0x142]      0 0x1de4b51 0x00000001          
0 0x54c751 0x19
(XEN) [2021-06-14 23:02:35] [0x144]      0 0x1de5187 0x00000001          
0 0x54c187 0x19
(XEN) [2021-06-14 23:02:35] [0x146]      0 0x1de50c7 0x00000001          
0 0x54c0c7 0x19
(XEN) [2021-06-14 23:02:35] [0x147]      0 0x1de4abf 0x00000001          
0 0x54c6bf 0x19
(XEN) [2021-06-14 23:02:35] [0x148]      0 0x1de4d3c 0x00000001          
0 0x54c53c 0x19
(XEN) [2021-06-14 23:02:35] [0x14b]      0 0x1de50e4 0x00000001          
0 0x54c0e4 0x19
(XEN) [2021-06-14 23:02:35] [0x14c]      0 0x1de55a6 0x00000001          
0 0x54bda6 0x19
(XEN) [2021-06-14 23:02:35] [0x14d]      0 0x1de4b5b 0x00000001          
0 0x54c75b 0x19
(XEN) [2021-06-14 23:02:35] [0x14e]      0 0x1de4d01 0x00000001          
0 0x54c501 0x19
(XEN) [2021-06-14 23:02:35] [0x14f]      0 0x1de5445 0x00000001          
0 0x54bc45 0x19
(XEN) [2021-06-14 23:02:35] [0x150]      0 0x1de4dbc 0x00000001          
0 0x54c5bc 0x19
(XEN) [2021-06-14 23:02:35] [0x154]      0 0x1de4d8c 0x00000001          
0 0x54c58c 0x19
(XEN) [2021-06-14 23:02:35] [0x156]      0 0x1de5228 0x00000001          
0 0x54be28 0x19
(XEN) [2021-06-14 23:02:35] [0x157]      0 0x1de49fc 0x00000001          
0 0x54c9fc 0x19
(XEN) [2021-06-14 23:02:35] [0x158]      0 0x1de54df 0x00000001          
0 0x54bcdf 0x19
(XEN) [2021-06-14 23:02:35] [0x159]      0 0x1de4ea2 0x00000001          
0 0x54c2a2 0x19
(XEN) [2021-06-14 23:02:35] [0x15b]      0 0x1de4f07 0x00000001          
0 0x54c307 0x19
(XEN) [2021-06-14 23:02:35] [0x15c]      0 0x1de507b 0x00000001          
0 0x54c07b 0x19
(XEN) [2021-06-14 23:02:35] [0x15d]      0 0x1de51fb 0x00000001          
0 0x54c1fb 0x19
(XEN) [2021-06-14 23:02:35] [0x15e]      0 0x1de4c54 0x00000001          
0 0x54c454 0x19
(XEN) [2021-06-14 23:02:35] [0x161]      0 0x1de4fbc 0x00000001          
0 0x54c3bc 0x19
(XEN) [2021-06-14 23:02:35] [0x163]      0 0x1de5043 0x00000001          
0 0x54c043 0x19
(XEN) [2021-06-14 23:02:35] [0x167]      0 0x1de5208 0x00000001          
0 0x54be08 0x19
(XEN) [2021-06-14 23:02:35] [0x168]      0 0x1de4aad 0x00000001          
0 0x54c6ad 0x19
(XEN) [2021-06-14 23:02:35] [0x169]      0 0x1de5410 0x00000001          
0 0x54bc10 0x19
(XEN) [2021-06-14 23:02:35] [0x16c]      0 0x1de4bd1 0x00000001          
0 0x54c7d1 0x19
(XEN) [2021-06-14 23:02:36] [0x16d]      0 0x1de4f37 0x00000001          
0 0x54c337 0x19
(XEN) [2021-06-14 23:02:36] [0x173]      0 0x1de545b 0x00000001          
0 0x54bc5b 0x19
(XEN) [2021-06-14 23:02:36] [0x174]      0 0x1de4be1 0x00000001          
0 0x54c7e1 0x19
(XEN) [2021-06-14 23:02:36] [0x175]      0 0x1de4b03 0x00000001          
0 0x54c703 0x19
(XEN) [2021-06-14 23:02:36] [0x176]      0 0x1de4c52 0x00000001          
0 0x54c452 0x19
(XEN) [2021-06-14 23:02:36] [0x177]      0 0x1de50ee 0x00000001          
0 0x54c0ee 0x19
(XEN) [2021-06-14 23:02:36] [0x179]      0 0x1de51b9 0x00000001          
0 0x54c1b9 0x19
(XEN) [2021-06-14 23:02:36] [0x17b]      0 0x1de51c5 0x00000001          
0 0x54c1c5 0x19
(XEN) [2021-06-14 23:02:36] [0x17c]      0 0x1de4be8 0x00000001          
0 0x54c7e8 0x19
(XEN) [2021-06-14 23:02:36] [0x17d]      0 0x1de5059 0x00000001          
0 0x54c059 0x19
(XEN) [2021-06-14 23:02:36] [0x181]      0 0x1de4dad 0x00000001          
0 0x54c5ad 0x19
(XEN) [2021-06-14 23:02:36] [0x182]      0 0x1de4a8f 0x00000001          
0 0x54c68f 0x19
(XEN) [2021-06-14 23:02:36] [0x183]      0 0x1de5531 0x00000001          0 0x54bd31 0x19
(XEN) [2021-06-14 23:02:36] [0x185]      0 0x1de4650 0x00000001          0 0x54ca50 0x19
(XEN) [2021-06-14 23:02:36] [0x186]      0 0x1de5515 0x00000001          0 0x54bd15 0x19
(XEN) [2021-06-14 23:02:36] [0x187]      0 0x1de51d1 0x00000001          0 0x54c1d1 0x19
(XEN) [2021-06-14 23:02:36] [0x18a]      0 0x1de4cb3 0x00000001          0 0x54c4b3 0x19
(XEN) [2021-06-14 23:02:36] [0x18b]      0 0x1de508f 0x00000001          0 0x54c08f 0x19
(XEN) [2021-06-14 23:02:36] [0x18c]      0 0x1de55b8 0x00000001          0 0x54bdb8 0x19
(XEN) [2021-06-14 23:02:36] [0x18d]      0 0x1de5598 0x00000001          0 0x54bd98 0x19
(XEN) [2021-06-14 23:02:36] [0x193]      0 0x1de55c1 0x00000001          0 0x54bdc1 0x19
(XEN) [2021-06-14 23:02:36] [0x194]      0 0x1de4ccc 0x00000001          0 0x54c4cc 0x19
(XEN) [2021-06-14 23:02:36] [0x195]      0 0x1de5457 0x00000001          0 0x54bc57 0x19
(XEN) [2021-06-14 23:02:36] [0x199]      0 0x1de4a11 0x00000001          0 0x54c611 0x19
(XEN) [2021-06-14 23:02:36] [0x19b]      0 0x1de5087 0x00000001          0 0x54c087 0x19
(XEN) [2021-06-14 23:02:36] [0x19c]      0 0x1de4efe 0x00000001          0 0x54c2fe 0x19
(XEN) [2021-06-14 23:02:36] [0x19d]      0 0x1de5002 0x00000001          0 0x54c002 0x19
(XEN) [2021-06-14 23:02:36] [0x19e]      0 0x1de4daf 0x00000001          0 0x54c5af 0x19
(XEN) [2021-06-14 23:02:36] [0x1a0]      0 0x1de540c 0x00000001          0 0x54bc0c 0x19
(XEN) [2021-06-14 23:02:36] [0x1a2]      0 0x1de4784 0x00000001          0 0x54cb84 0x19
(XEN) [2021-06-14 23:02:36] [0x1a3]      0 0x1de5255 0x00000001          0 0x54be55 0x19
(XEN) [2021-06-14 23:02:36] [0x1a6]      0 0x1de4be0 0x00000001          0 0x54c7e0 0x19
(XEN) [2021-06-14 23:02:36] [0x1a7]      0 0x1de4f1a 0x00000001          0 0x54c31a 0x19
(XEN) [2021-06-14 23:02:36] [0x1a8]      0 0x1de4624 0x00000001          0 0x54ca24 0x19
(XEN) [2021-06-14 23:02:36] [0x1a9]      0 0x1de5573 0x00000001          0 0x54bd73 0x19
(XEN) [2021-06-14 23:02:36] [0x1ac]      0 0x1de54f5 0x00000001          0 0x54bcf5 0x19
(XEN) [2021-06-14 23:02:36] [0x1ae]      0 0x1de472a 0x00000001          0 0x54cb2a 0x19
(XEN) [2021-06-14 23:02:36] [0x1af]      0 0x1de4756 0x00000001          0 0x54cb56 0x19
(XEN) [2021-06-14 23:02:36] [0x1b0]      0 0x1de4b3a 0x00000001          0 0x54c73a 0x19
(XEN) [2021-06-14 23:02:36] [0x1b1]      0 0x1de46e7 0x00000001          0 0x54cae7 0x19
(XEN) [2021-06-14 23:02:36] [0x1b3]      0 0x1de5136 0x00000001          0 0x54c136 0x19
(XEN) [2021-06-14 23:02:36] [0x1b6]      0 0x1de54dd 0x00000001          0 0x54bcdd 0x19
(XEN) [2021-06-14 23:02:36] [0x1b7]      0 0x1de4609 0x00000001          0 0x54ca09 0x19
(XEN) [2021-06-14 23:02:36] [0x1b8]      0 0x1de4f26 0x00000001          0 0x54c326 0x19
(XEN) [2021-06-14 23:02:36] [0x1b9]      0 0x1de58db 0x00000001          0 0x54b8db 0x19
(XEN) [2021-06-14 23:02:36] [0x1bd]      0 0x1de4bde 0x00000001          0 0x54c7de 0x19
(XEN) [2021-06-14 23:02:36] [0x1c1]      0 0x1de4fa2 0x00000001          0 0x54c3a2 0x19
(XEN) [2021-06-14 23:02:36] [0x1c2]      0 0x1de564e 0x00000001          0 0x54ba4e 0x19
(XEN) [2021-06-14 23:02:36] [0x1c4]      0 0x1de474c 0x00000001          0 0x54cb4c 0x19
(XEN) [2021-06-14 23:02:36] [0x1c8]      0 0x1de4798 0x00000001          0 0x54cb98 0x19
(XEN) [2021-06-14 23:02:36] [0x1c9]      0 0x1de4681 0x00000001          0 0x54ca81 0x19
(XEN) [2021-06-14 23:02:36] [0x1cb]      0 0x1de518e 0x00000001          0 0x54c18e 0x19
(XEN) [2021-06-14 23:02:36] [0x1cc]      0 0x1de4a99 0x00000001          0 0x54c699 0x19
(XEN) [2021-06-14 23:02:36] [0x1cd]      0 0x1de4679 0x00000001          0 0x54ca79 0x19
(XEN) [2021-06-14 23:02:36] [0x1ce]      0 0x1de4703 0x00000001          0 0x54cb03 0x19
(XEN) [2021-06-14 23:02:36] [0x1d1]      0 0x1de4a60 0x00000001          0 0x54c660 0x19
(XEN) [2021-06-14 23:02:36] [0x1d2]      0 0x1de55e8 0x00000001          0 0x54bde8 0x19
(XEN) [2021-06-14 23:02:36] [0x1d3]      0 0x1de470d 0x00000001          0 0x54cb0d 0x19
(XEN) [2021-06-14 23:02:36] [0x1d4]      0 0x1de46fc 0x00000001          0 0x54cafc 0x19
(XEN) [2021-06-14 23:02:36] [0x1d5]      0 0x1de4b16 0x00000001          0 0x54c716 0x19
(XEN) [2021-06-14 23:02:36] [0x1d6]      0 0x1de4ee5 0x00000001          0 0x54c2e5 0x19
(XEN) [2021-06-14 23:02:36] [0x1d8]      0 0x1de4dda 0x00000001          0 0x54c5da 0x19
(XEN) [2021-06-14 23:02:36] [0x1da]      0 0x1de479f 0x00000001          0 0x54cb9f 0x19
(XEN) [2021-06-14 23:02:36] [0x1db]      0 0x1de5146 0x00000001          0 0x54c146 0x19
(XEN) [2021-06-14 23:02:36] [0x1de]      0 0x1de4dd7 0x00000001          0 0x54c5d7 0x19
(XEN) [2021-06-14 23:02:36] [0x1df]      0 0x1de4a09 0x00000001          0 0x54c609 0x19
(XEN) [2021-06-14 23:02:36] [0x1e0]      0 0x1de5005 0x00000001          0 0x54c005 0x19
(XEN) [2021-06-14 23:02:36] [0x1e4]      0 0x1de4ffc 0x00000001          0 0x54c3fc 0x19
(XEN) [2021-06-14 23:02:36] [0x1e5]      0 0x1de4788 0x00000001          0 0x54cb88 0x19
(XEN) [2021-06-14 23:02:36] [0x1e7]      0 0x1de4e78 0x00000001          0 0x54c278 0x19
(XEN) [2021-06-14 23:02:36] [0x1e8]      0 0x1de4c28 0x00000001          0 0x54c428 0x19
(XEN) [2021-06-14 23:02:36] [0x1e9]      0 0x1de4a08 0x00000001          0 0x54c608 0x19
(XEN) [2021-06-14 23:02:36] [0x1ea]      0 0x1de4db0 0x00000001          0 0x54c5b0 0x19
(XEN) [2021-06-14 23:02:36] [0x1eb]      0 0x1de46a3 0x00000001          0 0x54caa3 0x19
(XEN) [2021-06-14 23:02:36] [0x1ef]      0 0x1de502e 0x00000001          0 0x54c02e 0x19
(XEN) [2021-06-14 23:02:36] [0x1f1]      0 0x1de4f0e 0x00000001          0 0x54c30e 0x19
(XEN) [2021-06-14 23:02:36] [0x1f3]      0 0x1de51b1 0x00000001          0 0x54c1b1 0x19
(XEN) [2021-06-14 23:02:36] [0x1f8]      0 0x1de5072 0x00000001          0 0x54c072 0x19
(XEN) [2021-06-14 23:02:36] [0xd00]      0 0x1de5697 0x00000001          0 0x54ba97 0x19
(XEN) [2021-06-14 23:02:36] [0xd01]      0 0x1de5698 0x00000001          0 0x54ba98 0x19
(XEN) [2021-06-14 23:02:36] [0xd02]      0 0x1de569e 0x00000001          0 0x54ba9e 0x19
(XEN) [2021-06-14 23:02:36] [0xd03]      0 0x1de569f 0x00000001          0 0x54ba9f 0x19
(XEN) [2021-06-14 23:02:36] [0xd04]      0 0x1de56ac 0x00000001          0 0x54baac 0x19
(XEN) [2021-06-14 23:02:36] [0xd05]      0 0x1de56ad 0x00000001          0 0x54baad 0x19
(XEN) [2021-06-14 23:02:36] [0xd06]      0 0x1de56b8 0x00000001          0 0x54bab8 0x19
(XEN) [2021-06-14 23:02:36] [0xd07]      0 0x1de56b9 0x00000001          0 0x54bab9 0x19
(XEN) [2021-06-14 23:02:36] [0xd08]      0 0x1de56c4 0x00000001          0 0x54bac4 0x19
(XEN) [2021-06-14 23:02:36] [0xd09]      0 0x1de6324 0x00000001          0 0x54af24 0x19
(XEN) [2021-06-14 23:02:36] [0xd0a]      0 0x1de634d 0x00000001          0 0x54af4d 0x19
(XEN) [2021-06-14 23:02:36] [0xd0b]      0 0x1de634e 0x00000001          0 0x54af4e 0x19
(XEN) [2021-06-14 23:02:36] [0xd0f]      0 0x1de4d02 0x00000001          0 0x54c502 0x19
(XEN) [2021-06-14 23:02:36] [0xd10]      0 0x1de46e3 0x00000001          0 0x54cae3 0x19
(XEN) [2021-06-14 23:02:36] [0xd11]      0 0x1de4a50 0x00000001          0 0x54c650 0x19
(XEN) [2021-06-14 23:02:36] [0xd12]      0 0x1de46c8 0x00000001          0 0x54cac8 0x19
(XEN) [2021-06-14 23:02:36] [0xd13]      0 0x1de4b70 0x00000001          0 0x54c770 0x19
(XEN) [2021-06-14 23:02:36] [0xd14]      0 0x1de4cad 0x00000001          0 0x54c4ad 0x19
(XEN) [2021-06-14 23:02:36] [0xd15]      0 0x1de5594 0x00000001          0 0x54bd94 0x19
(XEN) [2021-06-14 23:02:36] [0xd16]      0 0x1de464c 0x00000001          0 0x54ca4c 0x19
(XEN) [2021-06-14 23:02:36] [0xd17]      0 0x1de4657 0x00000001          0 0x54ca57 0x19
(XEN) [2021-06-14 23:02:36] [0xd18]      0 0x1de4e06 0x00000001          0 0x54c206 0x19
(XEN) [2021-06-14 23:02:36] [0xd19]      0 0x1de51cf 0x00000001          0 0x54c1cf 0x19
(XEN) [2021-06-14 23:02:36] [0xd1a]      0 0x1de4d1d 0x00000001          0 0x54c51d 0x19
(XEN) [2021-06-14 23:02:36] [0xd1b]      0 0x1de552c 0x00000001          0 0x54bd2c 0x19
(XEN) [2021-06-14 23:02:36] [0xd1c]      0 0x1de4fe4 0x00000001          0 0x54c3e4 0x19
(XEN) [2021-06-14 23:02:36] [0xd1e]      0 0x1de46a5 0x00000001          0 0x54caa5 0x19
(XEN) [2021-06-14 23:02:36] [0xd21]      0 0x1de4ddc 0x00000001          0 0x54c5dc 0x19
(XEN) [2021-06-14 23:02:36] [0xd22]      0 0x1de4fc1 0x00000001          0 0x54c3c1 0x19
(XEN) [2021-06-14 23:02:36] [0xd23]      0 0x1de471e 0x00000001          0 0x54cb1e 0x19
(XEN) [2021-06-14 23:02:36] [0xd24]      0 0x1de50ed 0x00000001          0 0x54c0ed 0x19
(XEN) [2021-06-14 23:02:36] [0xd26]      0 0x1de4e00 0x00000001          0 0x54c200 0x19
(XEN) [2021-06-14 23:02:36] [0xd2b]      0 0x1de473a 0x00000001          0 0x54cb3a 0x19
(XEN) [2021-06-14 23:02:36] [0xd2c]      0 0x1de4ade 0x00000001          0 0x54c6de 0x19
(XEN) [2021-06-14 23:02:36] [0xd2d]      0 0x1de4cc9 0x00000001          0 0x54c4c9 0x19
(XEN) [2021-06-14 23:02:36] [0xd2e]      0 0x1de4e4a 0x00000001          0 0x54c24a 0x19
(XEN) [2021-06-14 23:02:36] [0xd30]      0 0x1de4ed9 0x00000001          0 0x54c2d9 0x19
(XEN) [2021-06-14 23:02:36] [0xd33]      0 0x1de469e 0x00000001          0 0x54ca9e 0x19
(XEN) [2021-06-14 23:02:36] [0xd34]      0 0x1de4baf 0x00000001          0 0x54c7af 0x19
(XEN) [2021-06-14 23:02:36] [0xd35]      0 0x1de4713 0x00000001          0 0x54cb13 0x19
(XEN) [2021-06-14 23:02:36] [0xd36]      0 0x1de4dd2 0x00000001          0 0x54c5d2 0x19
(XEN) [2021-06-14 23:02:36] [0xd37]      0 0x1de4e8b 0x00000001          0 0x54c28b 0x19
(XEN) [2021-06-14 23:02:36] [0xd38]      0 0x1de4dcc 0x00000001          0 0x54c5cc 0x19
(XEN) [2021-06-14 23:02:36] [0xd39]      0 0x1de50c3 0x00000001          0 0x54c0c3 0x19
(XEN) [2021-06-14 23:02:36] [0xd3a]      0 0x1de4ec4 0x00000001          0 0x54c2c4 0x19
(XEN) [2021-06-14 23:02:36] [0xd3b]      0 0x1de4725 0x00000001          0 0x54cb25 0x19
(XEN) [2021-06-14 23:02:36] [0xd3d]      0 0x1de548f 0x00000001          0 0x54bc8f 0x19
(XEN) [2021-06-14 23:02:36] [0xd3e]      0 0x1de4654 0x00000001          0 0x54ca54 0x19
(XEN) [2021-06-14 23:02:36] [0xd3f]      0 0x1de4cb4 0x00000001          0 0x54c4b4 0x19
(XEN) [2021-06-14 23:02:36] [0xd40]      0 0x1de462d 0x00000001          0 0x54ca2d 0x19
(XEN) [2021-06-14 23:02:36] [0xd41]      0 0x1de4ece 0x00000001          0 0x54c2ce 0x19
(XEN) [2021-06-14 23:02:37] [0xd44]      0 0x1de4f2a 0x00000001          0 0x54c32a 0x19
(XEN) [2021-06-14 23:02:37] [0xd45]      0 0x1de47f1 0x00000001          0 0x54cbf1 0x19
(XEN) [2021-06-14 23:02:37] [0xd46]      0 0x1de4b4b 0x00000001          0 0x54c74b 0x19
(XEN) [2021-06-14 23:02:37] [0xd47]      0 0x1de4c67 0x00000001          0 0x54c467 0x19
(XEN) [2021-06-14 23:02:37] [0xd48]      0 0x1de5037 0x00000001          0 0x54c037 0x19
(XEN) [2021-06-14 23:02:37] [0xd49]      0 0x1de4a77 0x00000001          0 0x54c677 0x19
(XEN) [2021-06-14 23:02:37] [0xd4d]      0 0x1de5423 0x00000001          0 0x54bc23 0x19
(XEN) [2021-06-14 23:02:37] [0xd4f]      0 0x1de5511 0x00000001          0 0x54bd11 0x19
(XEN) [2021-06-14 23:02:37] [0xd51]      0 0x1de4dc8 0x00000001          0 0x54c5c8 0x19
(XEN) [2021-06-14 23:02:37] [0xd52]      0 0x1de5189 0x00000001          0 0x54c189 0x19
(XEN) [2021-06-14 23:02:37] [0xd53]      0 0x1de4a65 0x00000001          0 0x54c665 0x19
(XEN) [2021-06-14 23:02:37] [0xd54]      0 0x1de5149 0x00000001          0 0x54c149 0x19
(XEN) [2021-06-14 23:02:37] [0xd55]      0 0x1de4d80 0x00000001          0 0x54c580 0x19
(XEN) [2021-06-14 23:02:37] [0xd57]      0 0x1de4a6d 0x00000001          0 0x54c66d 0x19
(XEN) [2021-06-14 23:02:37] [0xd5a]      0 0x1de4771 0x00000001          0 0x54cb71 0x19
(XEN) [2021-06-14 23:02:37] [0xd5b]      0 0x1de4f47 0x00000001          0 0x54c347 0x19
(XEN) [2021-06-14 23:02:37] [0xd5c]      0 0x1de4e0b 0x00000001          0 0x54c20b 0x19
(XEN) [2021-06-14 23:02:37] [0xd60]      0 0x1de4e0c 0x00000001          0 0x54c20c 0x19
(XEN) [2021-06-14 23:02:37] [0xd62]      0 0x1de468f 0x00000001          0 0x54ca8f 0x19
(XEN) [2021-06-14 23:02:37] [0xd64]      0 0x1de4f9e 0x00000001          0 0x54c39e 0x19
(XEN) [2021-06-14 23:02:37] [0xd65]      0 0x1de54b6 0x00000001          0 0x54bcb6 0x19
(XEN) [2021-06-14 23:02:37] [0xd67]      0 0x1de4bae 0x00000001          0 0x54c7ae 0x19
(XEN) [2021-06-14 23:02:37] [0xd69]      0 0x1de47fc 0x00000001          0 0x54cbfc 0x19
(XEN) [2021-06-14 23:02:37] [0xd6a]      0 0x1de54eb 0x00000001          0 0x54bceb 0x19
(XEN) [2021-06-14 23:02:37] [0xd6c]      0 0x1de5468 0x00000001          0 0x54bc68 0x19
(XEN) [2021-06-14 23:02:37] [0xd6d]      0 0x1de2f65 0x00000001          0 0x54e365 0x19
(XEN) [2021-06-14 23:02:37] [0xd6f]      0 0x1de46a1 0x00000001          0 0x54caa1 0x19
(XEN) [2021-06-14 23:02:37] [0xd70]      0 0x1de4ba1 0x00000001          0 0x54c7a1 0x19
(XEN) [2021-06-14 23:02:37] [0xd72]      0 0x1de4e05 0x00000001          0 0x54c205 0x19
(XEN) [2021-06-14 23:02:37] [0xd73]      0 0x1de4f46 0x00000001          0 0x54c346 0x19
(XEN) [2021-06-14 23:02:37] [0xd74]      0 0x1de4a48 0x00000001          0 0x54c648 0x19
(XEN) [2021-06-14 23:02:37] [0xd75]      0 0x1de4f15 0x00000001          0 0x54c315 0x19
(XEN) [2021-06-14 23:02:37] [0xd76]      0 0x1de5553 0x00000001          0 0x54bd53 0x19
(XEN) [2021-06-14 23:02:37] [0xd78]      0 0x1de5199 0x00000001          0 0x54c199 0x19
(XEN) [2021-06-14 23:02:37] [0xd7a]      0 0x1de549c 0x00000001          0 0x54bc9c 0x19
(XEN) [2021-06-14 23:02:37] [0xd7b]      0 0x1de46bb 0x00000001          0 0x54cabb 0x19
(XEN) [2021-06-14 23:02:37] [0xd85]      0 0x1de4a4a 0x00000001          0 0x54c64a 0x19
(XEN) [2021-06-14 23:02:37] [0xd86]      0 0x1de51a3 0x00000001          0 0x54c1a3 0x19
(XEN) [2021-06-14 23:02:37] [0xd87]      0 0x1de4ca9 0x00000001          0 0x54c4a9 0x19
(XEN) [2021-06-14 23:02:37] [0xd8d]      0 0x1de514f 0x00000001          0 0x54c14f 0x19
(XEN) [2021-06-14 23:02:37] [0xd8e]      0 0x1de4e39 0x00000001          0 0x54c239 0x19
(XEN) [2021-06-14 23:02:37] [0xd90]      0 0x1de4e4b 0x00000001          0 0x54c24b 0x19
(XEN) [2021-06-14 23:02:37] [0xd95]      0 0x1de54f4 0x00000001          0 0x54bcf4 0x19
(XEN) [2021-06-14 23:02:37] [0xd96]      0 0x1de4ae7 0x00000001          0 0x54c6e7 0x19
(XEN) [2021-06-14 23:02:37] [0xd98]      0 0x1de517b 0x00000001          0 0x54c17b 0x19
(XEN) [2021-06-14 23:02:37] [0xd99]      0 0x1de4e08 0x00000001          0 0x54c208 0x19
(XEN) [2021-06-14 23:02:37] [0xd9a]      0 0x1de4eaa 0x00000001          0 0x54c2aa 0x19
(XEN) [2021-06-14 23:02:37] [0xd9c]      0 0x1de4d13 0x00000001          0 0x54c513 0x19
(XEN) [2021-06-14 23:02:37] [0xd9d]      0 0x1de4ab3 0x00000001          0 0x54c6b3 0x19
(XEN) [2021-06-14 23:02:37] [0xd9e]      0 0x1de55ce 0x00000001          0 0x54bdce 0x19
(XEN) [2021-06-14 23:02:37] [0xda0]      0 0x1de4ec8 0x00000001          0 0x54c2c8 0x19
(XEN) [2021-06-14 23:02:37] [0xda1]      0 0x1de4656 0x00000001          0 0x54ca56 0x19
(XEN) [2021-06-14 23:02:37] [0xda2]      0 0x1de5548 0x00000001          0 0x54bd48 0x19
(XEN) [2021-06-14 23:02:37] [0xda3]      0 0x1de4666 0x00000001          0 0x54ca66 0x19
(XEN) [2021-06-14 23:02:37] [0xda4]      0 0x1de555a 0x00000001          0 0x54bd5a 0x19
(XEN) [2021-06-14 23:02:37] [0xda5]      0 0x1de4e96 0x00000001          0 0x54c296 0x19
(XEN) [2021-06-14 23:02:37] [0xda8]      0 0x1de4fad 0x00000001          0 0x54c3ad 0x19
(XEN) [2021-06-14 23:02:37] [0xda9]      0 0x1de54c0 0x00000001          0 0x54bcc0 0x19
(XEN) [2021-06-14 23:02:37] [0xdaa]      0 0x1de51ad 0x00000001          0 0x54c1ad 0x19
(XEN) [2021-06-14 23:02:37] [0xdab]      0 0x1de4c51 0x00000001          0 0x54c451 0x19
(XEN) [2021-06-14 23:02:37] [0xdac]      0 0x1de5470 0x00000001          0 0x54bc70 0x19
(XEN) [2021-06-14 23:02:37] [0xdad]      0 0x1de4778 0x00000001          0 0x54cb78 0x19
(XEN) [2021-06-14 23:02:37] [0xdb2]      0 0x1de5040 0x00000001          0 0x54c040 0x19
(XEN) [2021-06-14 23:02:37] [0xdb3]      0 0x1de4653 0x00000001          0 0x54ca53 0x19
(XEN) [2021-06-14 23:02:37] [0xdb6]      0 0x1de513e 0x00000001          0 0x54c13e 0x19
(XEN) [2021-06-14 23:02:37] [0xdb8]      0 0x1de4b9d 0x00000001          0 0x54c79d 0x19
(XEN) [2021-06-14 23:02:37] [0xdb9]      0 0x1de4792 0x00000001          0 0x54cb92 0x19
(XEN) [2021-06-14 23:02:37] [0xdba]      0 0x1de5082 0x00000001          0 0x54c082 0x19
(XEN) [2021-06-14 23:02:37] [0xdbc]      0 0x1de47ac 0x00000001          0 0x54cbac 0x19
(XEN) [2021-06-14 23:02:37] [0xdbf]      0 0x1de4eb0 0x00000001          0 0x54c2b0 0x19
(XEN) [2021-06-14 23:02:37] [0xdc0]      0 0x1de467b 0x00000001          0 0x54ca7b 0x19
(XEN) [2021-06-14 23:02:37] [0xdc1]      0 0x1de55ec 0x00000001          0 0x54bdec 0x19
(XEN) [2021-06-14 23:02:37] [0xdc2]      0 0x1de4ffe 0x00000001          0 0x54c3fe 0x19
(XEN) [2021-06-14 23:02:37] [0xdc3]      0 0x1de4c7e 0x00000001          0 0x54c47e 0x19
(XEN) [2021-06-14 23:02:37] [0xdc5]      0 0x1de5092 0x00000001          0 0x54c092 0x19
(XEN) [2021-06-14 23:02:37] [0xdc6]      0 0x1de4d3b 0x00000001          0 0x54c53b 0x19
(XEN) [2021-06-14 23:02:37] [0xdc8]      0 0x1de4d31 0x00000001          0 0x54c531 0x19
(XEN) [2021-06-14 23:02:37] [0xdc9]      0 0x1de54c9 0x00000001          0 0x54bcc9 0x19
(XEN) [2021-06-14 23:02:37] [0xdca]      0 0x1de5583 0x00000001          0 0x54bd83 0x19
(XEN) [2021-06-14 23:02:37] [0xdcc]      0 0x1de50c1 0x00000001          0 0x54c0c1 0x19
(XEN) [2021-06-14 23:02:37] [0xdce]      0 0x1de4bbe 0x00000001          0 0x54c7be 0x19
(XEN) [2021-06-14 23:02:37] [0xdd0]      0 0x1de4b14 0x00000001          0 0x54c714 0x19
(XEN) [2021-06-14 23:02:37] [0xdd1]      0 0x1de554c 0x00000001          0 0x54bd4c 0x19
(XEN) [2021-06-14 23:02:37] [0xdd2]      0 0x1de503e 0x00000001          0 0x54c03e 0x19
(XEN) [2021-06-14 23:02:37] [0xdd3]      0 0x1de517e 0x00000001          0 0x54c17e 0x19
(XEN) [2021-06-14 23:02:37] [0xdd5]      0 0x1de4ac4 0x00000001          0 0x54c6c4 0x19
(XEN) [2021-06-14 23:02:37] [0xdd6]      0 0x1de4ad4 0x00000001          0 0x54c6d4 0x19
(XEN) [2021-06-14 23:02:37] [0xdd8]      0 0x1de4d33 0x00000001          0 0x54c533 0x19
(XEN) [2021-06-14 23:02:37] [0xdd9]      0 0x1de5514 0x00000001          0 0x54bd14 0x19
(XEN) [2021-06-14 23:02:37] [0xdda]      0 0x1de5555 0x00000001          0 0x54bd55 0x19
(XEN) [2021-06-14 23:02:37] [0xdde]      0 0x1de55f6 0x00000001          0 0x54bdf6 0x19
(XEN) [2021-06-14 23:02:37] [0xde1]      0 0x1de46f7 0x00000001          0 0x54caf7 0x19
(XEN) [2021-06-14 23:02:37] [0xde3]      0 0x1de4e2e 0x00000001          0 0x54c22e 0x19
(XEN) [2021-06-14 23:02:37] [0xde6]      0 0x1de5513 0x00000001          0 0x54bd13 0x19
(XEN) [2021-06-14 23:02:37] [0xde8]      0 0x1de522f 0x00000001          0 0x54be2f 0x19
(XEN) [2021-06-14 23:02:37] [0xdeb]      0 0x1de50f9 0x00000001          0 0x54c0f9 0x19
(XEN) [2021-06-14 23:02:37] [0xdec]      0 0x1de4ca4 0x00000001          0 0x54c4a4 0x19
(XEN) [2021-06-14 23:02:37] [0xded]      0 0x1de46b3 0x00000001          0 0x54cab3 0x19
(XEN) [2021-06-14 23:02:37] [0xdee]      0 0x1de470b 0x00000001          0 0x54cb0b 0x19
(XEN) [2021-06-14 23:02:37] [0xdef]      0 0x1de4c2b 0x00000001          0 0x54c42b 0x19
(XEN) [2021-06-14 23:02:37] [0xdf0]      0 0x1de4fb0 0x00000001          0 0x54c3b0 0x19
(XEN) [2021-06-14 23:02:37] [0xdf2]      0 0x1de50ce 0x00000001          0 0x54c0ce 0x19
(XEN) [2021-06-14 23:02:37] [0xdf3]      0 0x1de4ba9 0x00000001          0 0x54c7a9 0x19
(XEN) [2021-06-14 23:02:37] [0xdf6]      0 0x1de46cd 0x00000001          0 0x54cacd 0x19
(XEN) [2021-06-14 23:02:37] [0xdf7]      0 0x1de4e1f 0x00000001          0 0x54c21f 0x19
(XEN) [2021-06-14 23:02:37] [0xdf9]      0 0x1de4c1a 0x00000001          0 0x54c41a 0x19
(XEN) [2021-06-14 23:02:37] [0xdfd]      0 0x1de51a8 0x00000001          0 0x54c1a8 0x19
(XEN) [2021-06-14 23:02:37] [0xdfe]      0 0x1de4d17 0x00000001          0 0x54c517 0x19
(XEN) [2021-06-14 23:02:37] [0xe00]      0 0x1de5095 0x00000001          0 0x54c095 0x19
(XEN) [2021-06-14 23:02:37] [0xe02]      0 0x1de4a89 0x00000001          0 0x54c689 0x19
(XEN) [2021-06-14 23:02:37] [0xe03]      0 0x1de47e4 0x00000001          0 0x54cbe4 0x19
(XEN) [2021-06-14 23:02:37] [0xe08]      0 0x1de4c0e 0x00000001          0 0x54c40e 0x19
(XEN) [2021-06-14 23:02:37] [0xe09]      0 0x1de4f66 0x00000001          0 0x54c366 0x19
(XEN) [2021-06-14 23:02:37] [0xe0a]      0 0x1de5230 0x00000001          0 0x54be30 0x19
(XEN) [2021-06-14 23:02:37] [0xe0b]      0 0x1de4dfe 0x00000001          0 0x54c5fe 0x19
(XEN) [2021-06-14 23:02:37] [0xe0f]      0 0x1de51c6 0x00000001          0 0x54c1c6 0x19
(XEN) [2021-06-14 23:02:37] [0xe11]      0 0x1de501b 0x00000001          0 0x54c01b 0x19
(XEN) [2021-06-14 23:02:37] [0xe12]      0 0x1de46a2 0x00000001          0 0x54caa2 0x19
(XEN) [2021-06-14 23:02:37] [0xe14]      0 0x1de4d9f 0x00000001          0 0x54c59f 0x19
(XEN) [2021-06-14 23:02:37] [0xe1a]      0 0x1de463b 0x00000001          0 0x54ca3b 0x19
(XEN) [2021-06-14 23:02:37] [0xe1c]      0 0x1de4ec3 0x00000001          0 0x54c2c3 0x19
(XEN) [2021-06-14 23:02:37] [0xe1f]      0 0x1de4d71 0x00000001          0 0x54c571 0x19
(XEN) [2021-06-14 23:02:37] [0xe21]      0 0x1de463a 0x00000001          0 0x54ca3a 0x19
(XEN) [2021-06-14 23:02:37] [0xe23]      0 0x1de472f 0x00000001          0 0x54cb2f 0x19
(XEN) [2021-06-14 23:02:37] [0xe25]      0 0x1de46ea 0x00000001          0 0x54caea 0x19
(XEN) [2021-06-14 23:02:37] [0xe26]      0 0x1de47ed 0x00000001          0 0x54cbed 0x19
(XEN) [2021-06-14 23:02:37] [0xe27]      0 0x1de4aba 0x00000001          0 0x54c6ba 0x19
(XEN) [2021-06-14 23:02:37] [0xe2a]      0 0x1de5172 0x00000001          0 0x54c172 0x19
(XEN) [2021-06-14 23:02:38] [0xe2b]      0 0x1de5093 0x00000001          0 0x54c093 0x19
(XEN) [2021-06-14 23:02:38] [0xe2d]      0 0x1de4c0d 0x00000001          0 0x54c40d 0x19
(XEN) [2021-06-14 23:02:38] [0xe2e]      0 0x1de47d1 0x00000001          0 0x54cbd1 0x19
(XEN) [2021-06-14 23:02:38] [0xe2f]      0 0x1de55e2 0x00000001          0 0x54bde2 0x19
(XEN) [2021-06-14 23:02:38] [0xe30]      0 0x1de46b9 0x00000001          0 0x54cab9 0x19
(XEN) [2021-06-14 23:02:38] [0xe33]      0 0x1de4dca 0x00000001          0 0x54c5ca 0x19
(XEN) [2021-06-14 23:02:38] [0xe35]      0 0x1de50d0 0x00000001          0 0x54c0d0 0x19
(XEN) [2021-06-14 23:02:38] [0xe36]      0 0x1de51a5 0x00000001          0 0x54c1a5 0x19
(XEN) [2021-06-14 23:02:38] [0xe37]      0 0x1de461d 0x00000001          0 0x54ca1d 0x19
(XEN) [2021-06-14 23:02:38] [0xe38]      0 0x1de465c 0x00000001          0 0x54ca5c 0x19
(XEN) [2021-06-14 23:02:38] [0xe39]      0 0x1de4d04 0x00000001          0 0x54c504 0x19
(XEN) [2021-06-14 23:02:38] [0xe3a]      0 0x1de471b 0x00000001          0 0x54cb1b 0x19
(XEN) [2021-06-14 23:02:38] [0xe3b]      0 0x1de4a31 0x00000001          0 0x54c631 0x19
(XEN) [2021-06-14 23:02:38] [0xe3c]      0 0x1de4bd5 0x00000001          0 0x54c7d5 0x19
(XEN) [2021-06-14 23:02:38] [0xe3d]      0 0x1de46bc 0x00000001          0 0x54cabc 0x19
(XEN) [2021-06-14 23:02:38] [0xe3e]      0 0x1de49fa 0x00000001          0 0x54c9fa 0x19
(XEN) [2021-06-14 23:02:38] [0xe3f]      0 0x1de5550 0x00000001          0 0x54bd50 0x19
(XEN) [2021-06-14 23:02:38] [0xe41]      0 0x1de4a7b 0x00000001          0 0x54c67b 0x19
(XEN) [2021-06-14 23:02:38] [0xe44]      0 0x1de4b8e 0x00000001          0 0x54c78e 0x19
(XEN) [2021-06-14 23:02:38] [0xe45]      0 0x1de55d7 0x00000001          0 0x54bdd7 0x19
(XEN) [2021-06-14 23:02:38] [0xe46]      0 0x1de55b0 0x00000001          0 0x54bdb0 0x19
(XEN) [2021-06-14 23:02:38] [0xe47]      0 0x1de551f 0x00000001          0 0x54bd1f 0x19
(XEN) [2021-06-14 23:02:38] [0xe48]      0 0x1de50c4 0x00000001          0 0x54c0c4 0x19
(XEN) [2021-06-14 23:02:38] [0xe49]      0 0x1de55c4 0x00000001          0 0x54bdc4 0x19
(XEN) [2021-06-14 23:02:38] [0xe4a]      0 0x1de4ca2 0x00000001          0 0x54c4a2 0x19
(XEN) [2021-06-14 23:02:38] [0xe4c]      0 0x1de4c7d 0x00000001          0 0x54c47d 0x19
(XEN) [2021-06-14 23:02:38] [0xe4f]      0 0x1de501f 0x00000001          0 0x54c01f 0x19
(XEN) [2021-06-14 23:02:38] [0xe51]      0 0x1de4aa0 0x00000001          0 0x54c6a0 0x19
(XEN) [2021-06-14 23:02:38] [0xe53]      0 0x1de4e2f 0x00000001          0 0x54c22f 0x19
(XEN) [2021-06-14 23:02:38] [0xe57]      0 0x1de550c 0x00000001          0 0x54bd0c 0x19
(XEN) [2021-06-14 23:02:38] [0xe58]      0 0x1de5140 0x00000001          0 0x54c140 0x19
(XEN) [2021-06-14 23:02:38] [0xe5b]      0 0x1de4b0e 0x00000001          0 0x54c70e 0x19
(XEN) [2021-06-14 23:02:38] [0xe5c]      0 0x1de5100 0x00000001          0 0x54c100 0x19
(XEN) [2021-06-14 23:02:38] [0xe5d]      0 0x1de467e 0x00000001          0 0x54ca7e 0x19
(XEN) [2021-06-14 23:02:38] [0xe60]      0 0x1de4f53 0x00000001          0 0x54c353 0x19
(XEN) [2021-06-14 23:02:38] [0xe63]      0 0x1de559b 0x00000001          0 0x54bd9b 0x19
(XEN) [2021-06-14 23:02:38] [0xe66]      0 0x1de4fb2 0x00000001          0 0x54c3b2 0x19
(XEN) [2021-06-14 23:02:38] [0xe67]      0 0x1de4d5e 0x00000001          0 0x54c55e 0x19
(XEN) [2021-06-14 23:02:38] [0xe69]      0 0x1de51e9 0x00000001          0 0x54c1e9 0x19
(XEN) [2021-06-14 23:02:38] [0xe6a]      0 0x1de5580 0x00000001          0 0x54bd80 0x19
(XEN) [2021-06-14 23:02:38] [0xe6b]      0 0x1de4f2d 0x00000001          0 0x54c32d 0x19
(XEN) [2021-06-14 23:02:38] [0xe6d]      0 0x1de4d3d 0x00000001          0 0x54c53d 0x19
(XEN) [2021-06-14 23:02:38] [0xe6e]      0 0x1de4a54 0x00000001          0 0x54c654 0x19
(XEN) [2021-06-14 23:02:38] [0xe6f]      0 0x1de4d52 0x00000001          0 0x54c552 0x19
(XEN) [2021-06-14 23:02:38] [0xe70]      0 0x1de4b93 0x00000001          0 0x54c793 0x19
(XEN) [2021-06-14 23:02:38] [0xe71]      0 0x1de2f6e 0x00000001          0 0x54e36e 0x19
(XEN) [2021-06-14 23:02:38] [0xe73]      0 0x1de5561 0x00000001          0 0x54bd61 0x19
(XEN) [2021-06-14 23:02:38] [0xe74]      0 0x1de5214 0x00000001          0 0x54be14 0x19
(XEN) [2021-06-14 23:02:38] [0xe75]      0 0x1de4b67 0x00000001          0 0x54c767 0x19
(XEN) [2021-06-14 23:02:38] [0xe76]      0 0x1de2f71 0x00000001          0 0x54e371 0x19
(XEN) [2021-06-14 23:02:38] [0xe77]      0 0x1de2f56 0x00000001          0 0x54e356 0x19
(XEN) [2021-06-14 23:02:38] [0xe78]      0 0x1de5442 0x00000001          0 0x54bc42 0x19
(XEN) [2021-06-14 23:02:38] [0xe79]      0 0x1de4f98 0x00000001          0 0x54c398 0x19
(XEN) [2021-06-14 23:02:38] [0xe7c]      0 0x1de4d7b 0x00000001          0 0x54c57b 0x19
(XEN) [2021-06-14 23:02:38] [0xe7f]      0 0x1de5522 0x00000001          0 0x54bd22 0x19
(XEN) [2021-06-14 23:02:38] [0xe82]      0 0x1de4a3d 0x00000001          0 0x54c63d 0x19
(XEN) [2021-06-14 23:02:38] [0xe83]      0 0x1de557c 0x00000001          0 0x54bd7c 0x19
(XEN) [2021-06-14 23:02:38] [0xe88]      0 0x1de4bc0 0x00000001          0 0x54c7c0 0x19
(XEN) [2021-06-14 23:02:38] [0xe89]      0 0x1de4f10 0x00000001          0 0x54c310 0x19
(XEN) [2021-06-14 23:02:38] [0xe8a]      0 0x1de4a46 0x00000001          0 0x54c646 0x19
(XEN) [2021-06-14 23:02:38] [0xe8b]      0 0x1de4648 0x00000001          0 0x54ca48 0x19
(XEN) [2021-06-14 23:02:38] [0xe8d]      0 0x1de4d3e 0x00000001          0 0x54c53e 0x19
(XEN) [2021-06-14 23:02:38] [0xe8f]      0 0x1de4ee7 0x00000001          0 0x54c2e7 0x19
(XEN) [2021-06-14 23:02:38] [0xe90]      0 0x1de462f 0x00000001          0 0x54ca2f 0x19
(XEN) [2021-06-14 23:02:38] [0xe91]      0 0x1de4d88 0x00000001          0 0x54c588 0x19
(XEN) [2021-06-14 23:02:38] [0xe93]      0 0x1de501a 0x00000001          0 0x54c01a 0x19
(XEN) [2021-06-14 23:02:38] [0xe97]      0 0x1de4726 0x00000001          0 0x54cb26 0x19
(XEN) [2021-06-14 23:02:38] [0xe98]      0 0x1de5216 0x00000001          0 0x54be16 0x19
(XEN) [2021-06-14 23:02:38] [0xe99]      0 0x1de519e 0x00000001          0 0x54c19e 0x19
(XEN) [2021-06-14 23:02:38] [0xe9d]      0 0x1de4698 0x00000001          0 0x54ca98 0x19
(XEN) [2021-06-14 23:02:38] [0xe9e]      0 0x1de510b 0x00000001          0 0x54c10b 0x19
(XEN) [2021-06-14 23:02:38] [0xea1]      0 0x1de4c17 0x00000001          0 0x54c417 0x19
(XEN) [2021-06-14 23:02:38] [0xea3]      0 0x1de4f97 0x00000001          0 0x54c397 0x19
(XEN) [2021-06-14 23:02:38] [0xea4]      0 0x1de4c6d 0x00000001          0 0x54c46d 0x19
(XEN) [2021-06-14 23:02:38] [0xea7]      0 0x1de548c 0x00000001          0 0x54bc8c 0x19
(XEN) [2021-06-14 23:02:38] [0xea8]      0 0x1de4b7e 0x00000001          0 0x54c77e 0x19
(XEN) [2021-06-14 23:02:38] [0xea9]      0 0x1de5581 0x00000001          0 0x54bd81 0x19
(XEN) [2021-06-14 23:02:38] [0xead]      0 0x1de4773 0x00000001          0 0x54cb73 0x19
(XEN) [2021-06-14 23:02:38] [0xeb0]      0 0x1de4794 0x00000001          0 0x54cb94 0x19
(XEN) [2021-06-14 23:02:38] [0xeb1]      0 0x1de4f9f 0x00000001          0 0x54c39f 0x19
(XEN) [2021-06-14 23:02:38] [0xeb3]      0 0x1de51ac 0x00000001          0 0x54c1ac 0x19
(XEN) [2021-06-14 23:02:38] [0xeb4]      0 0x1de4aa6 0x00000001          0 0x54c6a6 0x19
(XEN) [2021-06-14 23:02:38] [0xeb5]      0 0x1de46ee 0x00000001          0 0x54caee 0x19
(XEN) [2021-06-14 23:02:38] [0xeb7]      0 0x1de500c 0x00000001          0 0x54c00c 0x19
(XEN) [2021-06-14 23:02:38] [0xeba]      0 0x1de4fb6 0x00000001          0 0x54c3b6 0x19
(XEN) [2021-06-14 23:02:38] [0xebc]      0 0x1de4750 0x00000001          0 0x54cb50 0x19
(XEN) [2021-06-14 23:02:38] [0xebe]      0 0x1de4c36 0x00000001          0 0x54c436 0x19
(XEN) [2021-06-14 23:02:38] [0xec0]      0 0x1de55e3 0x00000001          0 0x54bde3 0x19
(XEN) [2021-06-14 23:02:38] [0xec1]      0 0x1de5463 0x00000001          0 0x54bc63 0x19
(XEN) [2021-06-14 23:02:38] [0xec3]      0 0x1de5174 0x00000001          0 0x54c174 0x19
(XEN) [2021-06-14 23:02:38] [0xec5]      0 0x1de472c 0x00000001          0 0x54cb2c 0x19
(XEN) [2021-06-14 23:02:38] [0xec7]      0 0x1de5231 0x00000001          0 0x54be31 0x19
(XEN) [2021-06-14 23:02:38] [0xec8]      0 0x1de2f68 0x00000001          0 0x54e368 0x19
(XEN) [2021-06-14 23:02:38] [0xecb]      0 0x1de4a9b 0x00000001          0 0x54c69b 0x19
(XEN) [2021-06-14 23:02:38] [0xece]      0 0x1de51a0 0x00000001          0 0x54c1a0 0x19
(XEN) [2021-06-14 23:02:38] [0xed2]      0 0x1de4f09 0x00000001          0 0x54c309 0x19
(XEN) [2021-06-14 23:02:38] [0xed3]      0 0x1de54a0 0x00000001          0 0x54bca0 0x19
(XEN) [2021-06-14 23:02:38] [0xed4]      0 0x1de507c 0x00000001          0 0x54c07c 0x19
(XEN) [2021-06-14 23:02:38] [0xed5]      0 0x1de4d4f 0x00000001          0 0x54c54f 0x19
(XEN) [2021-06-14 23:02:38] [0xed6]      0 0x1de51f1 0x00000001          0 0x54c1f1 0x19
(XEN) [2021-06-14 23:02:38] [0xed7]      0 0x1de5103 0x00000001          0 0x54c103 0x19
(XEN) [2021-06-14 23:02:38] [0xed9]      0 0x1de4ea6 0x00000001          0 0x54c2a6 0x19
(XEN) [2021-06-14 23:02:38] [0xedb]      0 0x1de4751 0x00000001          0 0x54cb51 0x19
(XEN) [2021-06-14 23:02:38] [0xede]      0 0x1de5052 0x00000001          0 0x54c052 0x19
(XEN) [2021-06-14 23:02:38] [0xedf]      0 0x1de51cc 0x00000001          0 0x54c1cc 0x19
(XEN) [2021-06-14 23:02:38] [0xee2]      0 0x1de4a5e 0x00000001          0 0x54c65e 0x19
(XEN) [2021-06-14 23:02:38] [0xee3]      0 0x1de4aaa 0x00000001          0 0x54c6aa 0x19
(XEN) [2021-06-14 23:02:38] [0xee4]      0 0x1de520b 0x00000001          0 0x54be0b 0x19
(XEN) [2021-06-14 23:02:38] [0xee5]      0 0x1de4c5d 0x00000001          0 0x54c45d 0x19
(XEN) [2021-06-14 23:02:38] [0xee7]      0 0x1de4de3 0x00000001          0 0x54c5e3 0x19
(XEN) [2021-06-14 23:02:38] [0xeeb]      0 0x1de4ae6 0x00000001          0 0x54c6e6 0x19
(XEN) [2021-06-14 23:02:38] [0xeef]      0 0x1de5472 0x00000001          0 0x54bc72 0x19
(XEN) [2021-06-14 23:02:38] [0xef2]      0 0x1de4ad0 0x00000001          0 0x54c6d0 0x19
(XEN) [2021-06-14 23:02:38] [0xef4]      0 0x1de518c 0x00000001          0 0x54c18c 0x19
(XEN) [2021-06-14 23:02:38] [0xef5]      0 0x1de5588 0x00000001          0 0x54bd88 0x19
(XEN) [2021-06-14 23:02:38] [0xef9]      0 0x1de4c20 0x00000001          0 0x54c420 0x19
(XEN) [2021-06-14 23:02:38] [0xefa]      0 0x1de2f5c 0x00000001          0 0x54e35c 0x19
(XEN) [2021-06-14 23:02:38] [0xefd]      0 0x1de475b 0x00000001          0 0x54cb5b 0x19
(XEN) [2021-06-14 23:02:38] [0xeff]      0 0x1de4cc5 0x00000001          0 0x54c4c5 0x19
(XEN) [2021-06-14 23:02:38] [0xf00]      0 0x1de470e 0x00000001          0 0x54cb0e 0x19
(XEN) [2021-06-14 23:02:38] [0xf01]      0 0x1de549a 0x00000001          0 0x54bc9a 0x19
(XEN) [2021-06-14 23:02:38] [0xf02]      0 0x1de4ea3 0x00000001          0 0x54c2a3 0x19
(XEN) [2021-06-14 23:02:38] [0xf03]      0 0x1de4c07 0x00000001          0 0x54c407 0x19
(XEN) [2021-06-14 23:02:38] [0xf06]      0 0x1de47c0 0x00000001          0 0x54cbc0 0x19
(XEN) [2021-06-14 23:02:38] [0xf07]      0 0x1de4be9 0x00000001          0 0x54c7e9 0x19
(XEN) [2021-06-14 23:02:38] [0xf0a]      0 0x1de478b 0x00000001          0 0x54cb8b 0x19
(XEN) [2021-06-14 23:02:38] [0xf0b]      0 0x1de4a79 0x00000001          0 0x54c679 0x19
(XEN) [2021-06-14 23:02:38] [0xf0f]      0 0x1de518d 0x00000001          0 0x54c18d 0x19
(XEN) [2021-06-14 23:02:39] [0xf12]      0 0x1de4b3e 0x00000001          0 0x54c73e 0x19
(XEN) [2021-06-14 23:02:39] [0xf1a]      0 0x1de4660 0x00000001          0 0x54ca60 0x19
(XEN) [2021-06-14 23:02:39] [0xf1b]      0 0x1de55ae 0x00000001          0 0x54bdae 0x19
(XEN) [2021-06-14 23:02:39] [0xf1c]      0 0x1de4e30 0x00000001          0 0x54c230 0x19
(XEN) [2021-06-14 23:02:39] [0xf1d]      0 0x1de4acd 0x00000001          0 0x54c6cd 0x19
(XEN) [2021-06-14 23:02:39] [0xf21]      0 0x1de5218 0x00000001          0 0x54be18 0x19
(XEN) [2021-06-14 23:02:39] [0xf23]      0 0x1de47ba 0x00000001          0 0x54cbba 0x19
(XEN) [2021-06-14 23:02:39] [0xf26]      0 0x1de4f0a 0x00000001          0 0x54c30a 0x19
(XEN) [2021-06-14 23:02:39] [0xf27]      0 0x1de5482 0x00000001          0 0x54bc82 0x19
(XEN) [2021-06-14 23:02:39] [0xf28]      0 0x1de54ee 0x00000001          0 0x54bcee 0x19
(XEN) [2021-06-14 23:02:39] [0xf29]      0 0x1de4c44 0x00000001          0 0x54c444 0x19
(XEN) [2021-06-14 23:02:39] [0xf2b]      0 0x1de46ba 0x00000001          0 0x54caba 0x19
(XEN) [2021-06-14 23:02:39] [0xf2d]      0 0x1de4be4 0x00000001          0 0x54c7e4 0x19
(XEN) [2021-06-14 23:02:39] [0xf2e]      0 0x1de4ef1 0x00000001          0 0x54c2f1 0x19
(XEN) [2021-06-14 23:02:39] [0xf30]      0 0x1de4bb5 0x00000001          0 0x54c7b5 0x19
(XEN) [2021-06-14 23:02:39] [0xf33]      0 0x1de4a06 0x00000001          0 0x54c606 0x19
(XEN) [2021-06-14 23:02:39] [0xf34]      0 0x1de5070 0x00000001          0 0x54c070 0x19
(XEN) [2021-06-14 23:02:39] [0xf36]      0 0x1de47fa 0x00000001          0 0x54cbfa 0x19
(XEN) [2021-06-14 23:02:39] [0xf37]      0 0x1de4fe6 0x00000001          0 0x54c3e6 0x19
(XEN) [2021-06-14 23:02:39] [0xf38]      0 0x1de4d55 0x00000001          0 0x54c555 0x19
(XEN) [2021-06-14 23:02:39] [0xf3c]      0 0x1de517d 0x00000001          0 0x54c17d 0x19
(XEN) [2021-06-14 23:02:39] [0xf3f]      0 0x1de4f36 0x00000001          0 0x54c336 0x19
(XEN) [2021-06-14 23:02:39] [0xf40]      0 0x1de4f0f 0x00000001          0 0x54c30f 0x19
(XEN) [2021-06-14 23:02:39] [0xf43]      0 0x1de4e18 0x00000001          0 0x54c218 0x19
(XEN) [2021-06-14 23:02:39] [0xf44]      0 0x1de5139 0x00000001          0 0x54c139 0x19
(XEN) [2021-06-14 23:02:39] [0xf45]      0 0x1de46d9 0x00000001          0 0x54cad9 0x19
(XEN) [2021-06-14 23:02:39] [0xf46]      0 0x1de515f 0x00000001          0 0x54c15f 0x19
(XEN) [2021-06-14 23:02:39] [0xf47]      0 0x1de4711 0x00000001          0 0x54cb11 0x19
(XEN) [2021-06-14 23:02:39] [0xf49]      0 0x1de5469 0x00000001          
0 0x54bc69 0x19
(XEN) [2021-06-14 23:02:39] [0xf4a]      0 0x1de47b2 0x00000001          
0 0x54cbb2 0x19
(XEN) [2021-06-14 23:02:39] [0xf4b]      0 0x1de5494 0x00000001          
0 0x54bc94 0x19
(XEN) [2021-06-14 23:02:39] [0xf4e]      0 0x1de4d37 0x00000001          
0 0x54c537 0x19
(XEN) [2021-06-14 23:02:39] [0xf51]      0 0x1de4799 0x00000001          
0 0x54cb99 0x19
(XEN) [2021-06-14 23:02:39] [0xf52]      0 0x1de4e63 0x00000001          
0 0x54c263 0x19
(XEN) [2021-06-14 23:02:39] [0xf54]      0 0x1de5593 0x00000001          
0 0x54bd93 0x19
(XEN) [2021-06-14 23:02:39] [0xf57]      0 0x1de50ec 0x00000001          
0 0x54c0ec 0x19
(XEN) [2021-06-14 23:02:39] [0xf5b]      0 0x1de47c7 0x00000001          
0 0x54cbc7 0x19
(XEN) [2021-06-14 23:02:39] [0xf5c]      0 0x1de4da5 0x00000001          
0 0x54c5a5 0x19
(XEN) [2021-06-14 23:02:39] [0xf5e]      0 0x1de55a4 0x00000001          
0 0x54bda4 0x19
(XEN) [2021-06-14 23:02:39] [0xf5f]      0 0x1de50a5 0x00000001          
0 0x54c0a5 0x19
(XEN) [2021-06-14 23:02:39] [0xf60]      0 0x1de4daa 0x00000001          
0 0x54c5aa 0x19
(XEN) [2021-06-14 23:02:39] [0xf61]      0 0x1de4c7b 0x00000001          
0 0x54c47b 0x19
(XEN) [2021-06-14 23:02:39] [0xf64]      0 0x1de5008 0x00000001          
0 0x54c008 0x19
(XEN) [2021-06-14 23:02:39] [0xf66]      0 0x1de46d3 0x00000001          
0 0x54cad3 0x19
(XEN) [2021-06-14 23:02:39] [0xf67]      0 0x1de5120 0x00000001          
0 0x54c120 0x19
(XEN) [2021-06-14 23:02:39] [0xf69]      0 0x1de4e37 0x00000001          
0 0x54c237 0x19
(XEN) [2021-06-14 23:02:39] [0xf6c]      0 0x1de556c 0x00000001          
0 0x54bd6c 0x19
(XEN) [2021-06-14 23:02:39] [0xf6e]      0 0x1de4a83 0x00000001          
0 0x54c683 0x19
(XEN) [2021-06-14 23:02:39] [0xf6f]      0 0x1de46fa 0x00000001          
0 0x54cafa 0x19
(XEN) [2021-06-14 23:02:39] [0xf70]      0 0x1de5250 0x00000001          
0 0x54be50 0x19
(XEN) [2021-06-14 23:02:39] [0xf72]      0 0x1de4e64 0x00000001          
0 0x54c264 0x19
(XEN) [2021-06-14 23:02:39] [0xf73]      0 0x1de5426 0x00000001          
0 0x54bc26 0x19
(XEN) [2021-06-14 23:02:39] [0xf74]      0 0x1de46b5 0x00000001          
0 0x54cab5 0x19
(XEN) [2021-06-14 23:02:39] [0xf77]      0 0x1de4a96 0x00000001          
0 0x54c696 0x19
(XEN) [2021-06-14 23:02:39] [0xf79]      0 0x1de4c47 0x00000001          
0 0x54c447 0x19
(XEN) [2021-06-14 23:02:39] [0xf7b]      0 0x1de4c3c 0x00000001          
0 0x54c43c 0x19
(XEN) [2021-06-14 23:02:39] [0xf7d]      0 0x1de4b11 0x00000001          
0 0x54c711 0x19
(XEN) [2021-06-14 23:02:39] [0xf7e]      0 0x1de544a 0x00000001          
0 0x54bc4a 0x19
(XEN) [2021-06-14 23:02:39] [0xf83]      0 0x1de4ee0 0x00000001          
0 0x54c2e0 0x19
(XEN) [2021-06-14 23:02:39] [0xf84]      0 0x1de54cc 0x00000001          
0 0x54bccc 0x19
(XEN) [2021-06-14 23:02:39] [0xf85]      0 0x1de4c9b 0x00000001          
0 0x54c49b 0x19
(XEN) [2021-06-14 23:02:39] [0xf87]      0 0x1de4736 0x00000001          
0 0x54cb36 0x19
(XEN) [2021-06-14 23:02:39] [0xf8b]      0 0x1de5079 0x00000001          
0 0x54c079 0x19
(XEN) [2021-06-14 23:02:39] [0xf8c]      0 0x1de54b9 0x00000001          
0 0x54bcb9 0x19
(XEN) [2021-06-14 23:02:39] [0xf8d]      0 0x1de5023 0x00000001          
0 0x54c023 0x19
(XEN) [2021-06-14 23:02:39] [0xf90]      0 0x1de521d 0x00000001          
0 0x54be1d 0x19
(XEN) [2021-06-14 23:02:39] [0xf91]      0 0x1de4bac 0x00000001          
0 0x54c7ac 0x19
(XEN) [2021-06-14 23:02:39] [0xf94]      0 0x1de5116 0x00000001          
0 0x54c116 0x19
(XEN) [2021-06-14 23:02:39] [0xf95]      0 0x1de5179 0x00000001          
0 0x54c179 0x19
(XEN) [2021-06-14 23:02:39] [0xf96]      0 0x1de5527 0x00000001          
0 0x54bd27 0x19
(XEN) [2021-06-14 23:02:39] [0xf97]      0 0x1de5528 0x00000001          
0 0x54bd28 0x19
(XEN) [2021-06-14 23:02:39] [0xf99]      0 0x1de5422 0x00000001          
0 0x54bc22 0x19
(XEN) [2021-06-14 23:02:39] [0xf9a]      0 0x1de4622 0x00000001          
0 0x54ca22 0x19
(XEN) [2021-06-14 23:02:39] [0xf9d]      0 0x1de460d 0x00000001          
0 0x54ca0d 0x19
(XEN) [2021-06-14 23:02:39] [0xf9f]      0 0x1de5491 0x00000001          
0 0x54bc91 0x19
(XEN) [2021-06-14 23:02:39] [0xfa2]      0 0x1de5089 0x00000001          
0 0x54c089 0x19
(XEN) [2021-06-14 23:02:39] [0xfa3]      0 0x1de540a 0x00000001          
0 0x54bc0a 0x19
(XEN) [2021-06-14 23:02:39] [0xfa4]      0 0x1de4e87 0x00000001          
0 0x54c287 0x19
(XEN) [2021-06-14 23:02:39] [0xfa5]      0 0x1de470a 0x00000001          
0 0x54cb0a 0x19
(XEN) [2021-06-14 23:02:39] [0xfa6]      0 0x1de466b 0x00000001          
0 0x54ca6b 0x19
(XEN) [2021-06-14 23:02:39] [0xfaa]      0 0x1de4d97 0x00000001          
0 0x54c597 0x19
(XEN) [2021-06-14 23:02:39] [0xfae]      0 0x1de4d42 0x00000001          
0 0x54c542 0x19
(XEN) [2021-06-14 23:02:39] [0xfb0]      0 0x1de4ad1 0x00000001          
0 0x54c6d1 0x19
(XEN) [2021-06-14 23:02:39] [0xfb2]      0 0x1de463c 0x00000001          
0 0x54ca3c 0x19
(XEN) [2021-06-14 23:02:39] [0xfb4]      0 0x1de4c73 0x00000001          
0 0x54c473 0x19
(XEN) [2021-06-14 23:02:39] [0xfb5]      0 0x1de4a16 0x00000001          
0 0x54c616 0x19
(XEN) [2021-06-14 23:02:39] [0xfb6]      0 0x1de47e9 0x00000001          
0 0x54cbe9 0x19
(XEN) [2021-06-14 23:02:39] [0xfb7]      0 0x1de4fbe 0x00000001          
0 0x54c3be 0x19
(XEN) [2021-06-14 23:02:39] [0xfb8]      0 0x1de4ff1 0x00000001          
0 0x54c3f1 0x19
(XEN) [2021-06-14 23:02:39] [0xfb9]      0 0x1de4b90 0x00000001          
0 0x54c790 0x19
(XEN) [2021-06-14 23:02:39] [0xfbc]      0 0x1de4ce9 0x00000001          
0 0x54c4e9 0x19
(XEN) [2021-06-14 23:02:39] [0xfbd]      0 0x1de4f17 0x00000001          
0 0x54c317 0x19
(XEN) [2021-06-14 23:02:39] [0xfc1]      0 0x1de4f8f 0x00000001          
0 0x54c38f 0x19
(XEN) [2021-06-14 23:02:39] [0xfc2]      0 0x1de4ce5 0x00000001          
0 0x54c4e5 0x19
(XEN) [2021-06-14 23:02:39] [0xfc3]      0 0x1de4eb2 0x00000001          
0 0x54c2b2 0x19
(XEN) [2021-06-14 23:02:39] [0xfc4]      0 0x1de5194 0x00000001          
0 0x54c194 0x19
(XEN) [2021-06-14 23:02:39] [0xfc9]      0 0x1de512c 0x00000001          
0 0x54c12c 0x19
(XEN) [2021-06-14 23:02:39] [0xfcc]      0 0x1de54e9 0x00000001          
0 0x54bce9 0x19
(XEN) [2021-06-14 23:02:39] [0xfce]      0 0x1de4b4d 0x00000001          
0 0x54c74d 0x19
(XEN) [2021-06-14 23:02:39] [0xfcf]      0 0x1de546b 0x00000001          
0 0x54bc6b 0x19
(XEN) [2021-06-14 23:02:39] [0xfd0]      0 0x1de4f03 0x00000001          
0 0x54c303 0x19
(XEN) [2021-06-14 23:02:39] [0xfd2]      0 0x1de50d3 0x00000001          
0 0x54c0d3 0x19
(XEN) [2021-06-14 23:02:39] [0xfd5]      0 0x1de4c80 0x00000001          
0 0x54c480 0x19
(XEN) [2021-06-14 23:02:39] [0xfd6]      0 0x1de5590 0x00000001          
0 0x54bd90 0x19
(XEN) [2021-06-14 23:02:39] [0xfd8]      0 0x1de4dc7 0x00000001          
0 0x54c5c7 0x19
(XEN) [2021-06-14 23:02:39] [0xfdd]      0 0x1de5213 0x00000001          
0 0x54be13 0x19
(XEN) [2021-06-14 23:02:39] [0xfde]      0 0x1de4fec 0x00000001          
0 0x54c3ec 0x19
(XEN) [2021-06-14 23:02:39] [0xfdf]      0 0x1de5479 0x00000001          
0 0x54bc79 0x19
(XEN) [2021-06-14 23:02:39] [0xfe0]      0 0x1de5461 0x00000001          
0 0x54bc61 0x19
(XEN) [2021-06-14 23:02:39] [0xfe1]      0 0x1de4af1 0x00000001          
0 0x54c6f1 0x19
(XEN) [2021-06-14 23:02:39] [0xfe2]      0 0x1de50d4 0x00000001          
0 0x54c0d4 0x19
(XEN) [2021-06-14 23:02:39] [0xfe3]      0 0x1de4b6d 0x00000001          
0 0x54c76d 0x19
(XEN) [2021-06-14 23:02:39] [0xfe4]      0 0x1de4f23 0x00000001          
0 0x54c323 0x19
(XEN) [2021-06-14 23:02:39] [0xfe6]      0 0x1de4d05 0x00000001          
0 0x54c505 0x19
(XEN) [2021-06-14 23:02:39] [0xfe7]      0 0x1de4e2b 0x00000001          
0 0x54c22b 0x19
(XEN) [2021-06-14 23:02:39] [0xfe8]      0 0x1de478f 0x00000001          
0 0x54cb8f 0x19
(XEN) [2021-06-14 23:02:39] [0xfe9]      0 0x1de4fc3 0x00000001          
0 0x54c3c3 0x19
(XEN) [2021-06-14 23:02:39] [0xfea]      0 0x1de4d38 0x00000001          
0 0x54c538 0x19
(XEN) [2021-06-14 23:02:39] [0xfeb]      0 0x1de4f6a 0x00000001          
0 0x54c36a 0x19
(XEN) [2021-06-14 23:02:39] [0xfed]      0 0x1de54bf 0x00000001          
0 0x54bcbf 0x19
(XEN) [2021-06-14 23:02:39] [0xfef]      0 0x1de4a2a 0x00000001          
0 0x54c62a 0x19
(XEN) [2021-06-14 23:02:39] [0xff5]      0 0x1de46af 0x00000001          
0 0x54caaf 0x19
(XEN) [2021-06-14 23:02:39] [0xff6]      0 0x1de4f27 0x00000001          
0 0x54c327 0x19
(XEN) [2021-06-14 23:02:39] [0xff8]      0 0x1de4646 0x00000001          
0 0x54ca46 0x19
(XEN) [2021-06-14 23:02:39] [0xff9]      0 0x1de2f58 0x00000001          
0 0x54e358 0x19
(XEN) [2021-06-14 23:02:39] [0xffa]      0 0x1de51df 0x00000001          
0 0x54c1df 0x19
(XEN) [2021-06-14 23:02:39] [0xffc]      0 0x1de4fbf 0x00000001          
0 0x54c3bf 0x19
(XEN) [2021-06-14 23:02:39] [0xffe]      0 0x1de51cb 0x00000001          
0 0x54c1cb 0x19
(XEN) [2021-06-14 23:02:40] [0x1002]      0 0x1de50ef 
0x00000001          0 0x54c0ef 0x19
(XEN) [2021-06-14 23:02:40] [0x1004]      0 0x1de4d5c 
0x00000001          0 0x54c55c 0x19
(XEN) [2021-06-14 23:02:40] [0x1005]      0 0x1de4b0a 
0x00000001          0 0x54c70a 0x19
(XEN) [2021-06-14 23:02:40] [0x1006]      0 0x1de47c8 
0x00000001          0 0x54cbc8 0x19
(XEN) [2021-06-14 23:02:40] [0x1008]      0 0x1de55ed 
0x00000001          0 0x54bded 0x19
(XEN) [2021-06-14 23:02:40] [0x1009]      0 0x1de4a19 
0x00000001          0 0x54c619 0x19
(XEN) [2021-06-14 23:02:40] [0x100a]      0 0x1de4e29 
0x00000001          0 0x54c229 0x19
(XEN) [2021-06-14 23:02:40] [0x100b]      0 0x1de515c 
0x00000001          0 0x54c15c 0x19
(XEN) [2021-06-14 23:02:40] [0x100c]      0 0x1de4eed 
0x00000001          0 0x54c2ed 0x19
(XEN) [2021-06-14 23:02:40] [0x100d]      0 0x1de4fd8 
0x00000001          0 0x54c3d8 0x19
(XEN) [2021-06-14 23:02:40] [0x1010]      0 0x1de4d93 
0x00000001          0 0x54c593 0x19
(XEN) [2021-06-14 23:02:40] [0x1011]      0 0x1de4d4b 
0x00000001          0 0x54c54b 0x19
(XEN) [2021-06-14 23:02:40] [0x1012]      0 0x1de5535 
0x00000001          0 0x54bd35 0x19
(XEN) [2021-06-14 23:02:40] [0x1014]      0 0x1de5254 
0x00000001          0 0x54be54 0x19
(XEN) [2021-06-14 23:02:40] [0x1015]      0 0x1de4c82 
0x00000001          0 0x54c482 0x19
(XEN) [2021-06-14 23:02:40] [0x1016]      0 0x1de4b04 
0x00000001          0 0x54c704 0x19
(XEN) [2021-06-14 23:02:40] [0x1017]      0 0x1de51f7 
0x00000001          0 0x54c1f7 0x19
(XEN) [2021-06-14 23:02:40] [0x101c]      0 0x1de5105 
0x00000001          0 0x54c105 0x19
(XEN) [2021-06-14 23:02:40] [0x101d]      0 0x1de4ccf 
0x00000001          0 0x54c4cf 0x19
(XEN) [2021-06-14 23:02:40] [0x1021]      0 0x1de54ff 
0x00000001          0 0x54bcff 0x19
(XEN) [2021-06-14 23:02:40] [0x1022]      0 0x1de513d 
0x00000001          0 0x54c13d 0x19
(XEN) [2021-06-14 23:02:40] [0x1023]      0 0x1de47a6 
0x00000001          0 0x54cba6 0x19
(XEN) [2021-06-14 23:02:40] [0x1025]      0 0x1de46b8 
0x00000001          0 0x54cab8 0x19
(XEN) [2021-06-14 23:02:40] [0x1027]      0 0x1de5415 
0x00000001          0 0x54bc15 0x19
(XEN) [2021-06-14 23:02:40] [0x1028]      0 0x1de4783 
0x00000001          0 0x54cb83 0x19
(XEN) [2021-06-14 23:02:40] [0x1029]      0 0x1de4ce2 
0x00000001          0 0x54c4e2 0x19
(XEN) [2021-06-14 23:02:40] [0x102a]      0 0x1de4621 
0x00000001          0 0x54ca21 0x19
(XEN) [2021-06-14 23:02:40] [0x102c]      0 0x1de4fb5 
0x00000001          0 0x54c3b5 0x19
(XEN) [2021-06-14 23:02:40] [0x102d]      0 0x1de4617 
0x00000001          0 0x54ca17 0x19
(XEN) [2021-06-14 23:02:40] [0x102e]      0 0x1de5579 
0x00000001          0 0x54bd79 0x19
(XEN) [2021-06-14 23:02:40] [0x102f]      0 0x1de4755 
0x00000001          0 0x54cb55 0x19
(XEN) [2021-06-14 23:02:40] [0x1030]      0 0x1de5448 
0x00000001          0 0x54bc48 0x19
(XEN) [2021-06-14 23:02:40] [0x1031]      0 0x1de4c8f 
0x00000001          0 0x54c48f 0x19
(XEN) [2021-06-14 23:02:40] [0x1032]      0 0x1de50c8 
0x00000001          0 0x54c0c8 0x19
(XEN) [2021-06-14 23:02:40] [0x1033]      0 0x1de4c30 
0x00000001          0 0x54c430 0x19
(XEN) [2021-06-14 23:02:40] [0x1036]      0 0x1de50ab 
0x00000001          0 0x54c0ab 0x19
(XEN) [2021-06-14 23:02:40] [0x1038]      0 0x1de5552 
0x00000001          0 0x54bd52 0x19
(XEN) [2021-06-14 23:02:40] [0x1039]      0 0x1de461f 
0x00000001          0 0x54ca1f 0x19
(XEN) [2021-06-14 23:02:40] [0x103a]      0 0x1de4cdb 
0x00000001          0 0x54c4db 0x19
(XEN) [2021-06-14 23:02:40] [0x103b]      0 0x1de4fd1 
0x00000001          0 0x54c3d1 0x19
(XEN) [2021-06-14 23:02:40] [0x103c]      0 0x1de5439 
0x00000001          0 0x54bc39 0x19
(XEN) [2021-06-14 23:02:40] [0x103d]      0 0x1de4d56 
0x00000001          0 0x54c556 0x19
(XEN) [2021-06-14 23:02:40] [0x103e]      0 0x1de4f0d 
0x00000001          0 0x54c30d 0x19
(XEN) [2021-06-14 23:02:40] [0x103f]      0 0x1de54af 
0x00000001          0 0x54bcaf 0x19
(XEN) [2021-06-14 23:02:40] [0x1040]      0 0x1de5582 
0x00000001          0 0x54bd82 0x19
(XEN) [2021-06-14 23:02:40] [0x1041]      0 0x1de54c8 
0x00000001          0 0x54bcc8 0x19
(XEN) [2021-06-14 23:02:40] [0x1044]      0 0x1de4a2c 
0x00000001          0 0x54c62c 0x19
(XEN) [2021-06-14 23:02:40] [0x1045]      0 0x1de4bc5 
0x00000001          0 0x54c7c5 0x19
(XEN) [2021-06-14 23:02:40] [0x1046]      0 0x1de558e 
0x00000001          0 0x54bd8e 0x19
(XEN) [2021-06-14 23:02:40] [0x104c]      0 0x1de4763 
0x00000001          0 0x54cb63 0x19
(XEN) [2021-06-14 23:02:40] [0x104e]      0 0x1de4ee8 
0x00000001          0 0x54c2e8 0x19
(XEN) [2021-06-14 23:02:40] [0x104f]      0 0x1de47f6 
0x00000001          0 0x54cbf6 0x19
(XEN) [2021-06-14 23:02:40] [0x1051]      0 0x1de4b72 
0x00000001          0 0x54c772 0x19
(XEN) [2021-06-14 23:02:40] [0x1052]      0 0x1de51eb 
0x00000001          0 0x54c1eb 0x19
(XEN) [2021-06-14 23:02:40] [0x1053]      0 0x1de4c64 
0x00000001          0 0x54c464 0x19
(XEN) [2021-06-14 23:02:40] [0x1054]      0 0x1de4ec9 
0x00000001          0 0x54c2c9 0x19
(XEN) [2021-06-14 23:02:40] [0x1056]      0 0x1de4752 
0x00000001          0 0x54cb52 0x19
(XEN) [2021-06-14 23:02:40] [0x1058]      0 0x1de4df8 
0x00000001          0 0x54c5f8 0x19
(XEN) [2021-06-14 23:02:40] [0x1059]      0 0x1de557d 
0x00000001          0 0x54bd7d 0x19
(XEN) [2021-06-14 23:02:40] [0x105a]      0 0x1de4b1b 
0x00000001          0 0x54c71b 0x19
(XEN) [2021-06-14 23:02:40] [0x105c]      0 0x1de508a 
0x00000001          0 0x54c08a 0x19
(XEN) [2021-06-14 23:02:40] [0x105d]      0 0x1de50f5 
0x00000001          0 0x54c0f5 0x19
(XEN) [2021-06-14 23:02:40] [0x105e]      0 0x1de5402 
0x00000001          0 0x54bc02 0x19
(XEN) [2021-06-14 23:02:40] [0x105f]      0 0x1de4d86 
0x00000001          0 0x54c586 0x19
(XEN) [2021-06-14 23:02:40] [0x1063]      0 0x1de4dc9 
0x00000001          0 0x54c5c9 0x19
(XEN) [2021-06-14 23:02:40] [0x1064]      0 0x1de5229 
0x00000001          0 0x54be29 0x19
(XEN) [2021-06-14 23:02:40] [0x1065]      0 0x1de5417 
0x00000001          0 0x54bc17 0x19
(XEN) [2021-06-14 23:02:40] [0x1066]      0 0x1de46bd 
0x00000001          0 0x54cabd 0x19
(XEN) [2021-06-14 23:02:40] [0x1067]      0 0x1de2f6c 
0x00000001          0 0x54e36c 0x19
(XEN) [2021-06-14 23:02:40] [0x106b]      0 0x1de5126 
0x00000001          0 0x54c126 0x19
(XEN) [2021-06-14 23:02:40] [0x106d]      0 0x1de4c5e 
0x00000001          0 0x54c45e 0x19
(XEN) [2021-06-14 23:02:40] [0x106f]      0 0x1de5562 
0x00000001          0 0x54bd62 0x19
(XEN) [2021-06-14 23:02:40] [0x1070]      0 0x1de4eab 
0x00000001          0 0x54c2ab 0x19
(XEN) [2021-06-14 23:02:40] [0x1071]      0 0x1de4bb8 
0x00000001          0 0x54c7b8 0x19
(XEN) [2021-06-14 23:02:40] [0x1072]      0 0x1de4b8a 
0x00000001          0 0x54c78a 0x19
(XEN) [2021-06-14 23:02:40] [0x1073]      0 0x1de503a 
0x00000001          0 0x54c03a 0x19
(XEN) [2021-06-14 23:02:40] [0x1075]      0 0x1de47f0 
0x00000001          0 0x54cbf0 0x19
(XEN) [2021-06-14 23:02:40] [0x1079]      0 0x1de4c48 
0x00000001          0 0x54c448 0x19
(XEN) [2021-06-14 23:02:40] [0x107a]      0 0x1de4e10 
0x00000001          0 0x54c210 0x19
(XEN) [2021-06-14 23:02:40] [0x107b]      0 0x1de555e 
0x00000001          0 0x54bd5e 0x19
(XEN) [2021-06-14 23:02:40] [0x107c]      0 0x1de4b62 
0x00000001          0 0x54c762 0x19
(XEN) [2021-06-14 23:02:40] [0x107d]      0 0x1de54ca 
0x00000001          0 0x54bcca 0x19
(XEN) [2021-06-14 23:02:40] [0x107f]      0 0x1de4f5d 
0x00000001          0 0x54c35d 0x19
(XEN) [2021-06-14 23:02:40] [0x1080]      0 0x1de5049 
0x00000001          0 0x54c049 0x19
(XEN) [2021-06-14 23:02:40] [0x1081]      0 0x1de5038 
0x00000001          0 0x54c038 0x19
(XEN) [2021-06-14 23:02:40] [0x1083]      0 0x1de5444 
0x00000001          0 0x54bc44 0x19
(XEN) [2021-06-14 23:02:40] [0x1084]      0 0x1de553d 
0x00000001          0 0x54bd3d 0x19
(XEN) [2021-06-14 23:02:40] [0x1085]      0 0x1de46dc 
0x00000001          0 0x54cadc 0x19
(XEN) [2021-06-14 23:02:40] [0x1088]      0 0x1de46b7 
0x00000001          0 0x54cab7 0x19
(XEN) [2021-06-14 23:02:40] [0x1089]      0 0x1de4ef2 
0x00000001          0 0x54c2f2 0x19
(XEN) [2021-06-14 23:02:40] [0x108b]      0 0x1de4671 
0x00000001          0 0x54ca71 0x19
(XEN) [2021-06-14 23:02:40] [0x108c]      0 0x1de4cc4 
0x00000001          0 0x54c4c4 0x19
(XEN) [2021-06-14 23:02:40] [0x108e]      0 0x1de506a 
0x00000001          0 0x54c06a 0x19
(XEN) [2021-06-14 23:02:40] [0x1093]      0 0x1de4cbf 
0x00000001          0 0x54c4bf 0x19
(XEN) [2021-06-14 23:02:40] [0x1094]      0 0x1de50b6 
0x00000001          0 0x54c0b6 0x19
(XEN) [2021-06-14 23:02:40] [0x1095]      0 0x1de4f8e 
0x00000001          0 0x54c38e 0x19
(XEN) [2021-06-14 23:02:40] [0x1098]      0 0x1de4fdf 
0x00000001          0 0x54c3df 0x19
(XEN) [2021-06-14 23:02:40] [0x1099]      0 0x1de4b44 
0x00000001          0 0x54c744 0x19
(XEN) [2021-06-14 23:02:40] [0x109b]      0 0x1de520c 
0x00000001          0 0x54be0c 0x19
(XEN) [2021-06-14 23:02:40] [0x109c]      0 0x1de49f6 
0x00000001          0 0x54c9f6 0x19
(XEN) [2021-06-14 23:02:40] [0x109d]      0 0x1de50fd 
0x00000001          0 0x54c0fd 0x19
(XEN) [2021-06-14 23:02:40] [0x109f]      0 0x1de5436 
0x00000001          0 0x54bc36 0x19
(XEN) [2021-06-14 23:02:40] [0x10a0]      0 0x1de4720 
0x00000001          0 0x54cb20 0x19
(XEN) [2021-06-14 23:02:40] [0x10a1]      0 0x1de4d36 
0x00000001          0 0x54c536 0x19
(XEN) [2021-06-14 23:02:40] [0x10a6]      0 0x1de4d6d 
0x00000001          0 0x54c56d 0x19
(XEN) [2021-06-14 23:02:40] [0x10a7]      0 0x1de508c 
0x00000001          0 0x54c08c 0x19
(XEN) [2021-06-14 23:02:40] [0x10a8]      0 0x1de460e 
0x00000001          0 0x54ca0e 0x19
(XEN) [2021-06-14 23:02:40] [0x10a9]      0 0x1de5135 
0x00000001          0 0x54c135 0x19
(XEN) [2021-06-14 23:02:40] [0x10aa]      0 0x1de47ae 
0x00000001          0 0x54cbae 0x19
(XEN) [2021-06-14 23:02:40] [0x10ab]      0 0x1de500f 
0x00000001          0 0x54c00f 0x19
(XEN) [2021-06-14 23:02:40] [0x10ac]      0 0x1de4e75 
0x00000001          0 0x54c275 0x19
(XEN) [2021-06-14 23:02:40] [0x10af]      0 0x1de4bee 
0x00000001          0 0x54c7ee 0x19
(XEN) [2021-06-14 23:02:40] [0x10b0]      0 0x1de4cbe 
0x00000001          0 0x54c4be 0x19
(XEN) [2021-06-14 23:02:40] [0x10b1]      0 0x1de4c23 
0x00000001          0 0x54c423 0x19
(XEN) [2021-06-14 23:02:40] [0x10b4]      0 0x1de5414 
0x00000001          0 0x54bc14 0x19
(XEN) [2021-06-14 23:02:40] [0x10b5]      0 0x1de4d29 
0x00000001          0 0x54c529 0x19
(XEN) [2021-06-14 23:02:40] [0x10b6]      0 0x1de51ca 
0x00000001          0 0x54c1ca 0x19
(XEN) [2021-06-14 23:02:40] [0x10b7]      0 0x1de54e3 
0x00000001          0 0x54bce3 0x19
(XEN) [2021-06-14 23:02:40] [0x10b8]      0 0x1de4fde 
0x00000001          0 0x54c3de 0x19
(XEN) [2021-06-14 23:02:40] [0x10b9]      0 0x1de554f 
0x00000001          0 0x54bd4f 0x19
(XEN) [2021-06-14 23:02:40] [0x10bb]      0 0x1de4b3c 
0x00000001          0 0x54c73c 0x19
(XEN) [2021-06-14 23:02:40] [0x10bc]      0 0x1de4d72 
0x00000001          0 0x54c572 0x19
(XEN) [2021-06-14 23:02:40] [0x10c1]      0 0x1de47f4 
0x00000001          0 0x54cbf4 0x19
(XEN) [2021-06-14 23:02:40] [0x10c4]      0 0x1de4efd 
0x00000001          0 0x54c2fd 0x19
(XEN) [2021-06-14 23:02:40] [0x10c6]      0 0x1de4c71 
0x00000001          0 0x54c471 0x19
(XEN) [2021-06-14 23:02:40] [0x10c8]      0 0x1de5151 
0x00000001          0 0x54c151 0x19
(XEN) [2021-06-14 23:02:40] [0x10ca]      0 0x1de2f60 
0x00000001          0 0x54e360 0x19
(XEN) [2021-06-14 23:02:41] [0x10cb]      0 0x1de47b6 
0x00000001          0 0x54cbb6 0x19
(XEN) [2021-06-14 23:02:41] [0x10cc]      0 0x1de4cfe 
0x00000001          0 0x54c4fe 0x19
(XEN) [2021-06-14 23:02:41] [0x10cd]      0 0x1de5597 
0x00000001          0 0x54bd97 0x19
(XEN) [2021-06-14 23:02:41] [0x10ce]      0 0x1de4f28 
0x00000001          0 0x54c328 0x19
(XEN) [2021-06-14 23:02:41] [0x10d0]      0 0x1de51ab 
0x00000001          0 0x54c1ab 0x19
(XEN) [2021-06-14 23:02:41] [0x10d3]      0 0x1de4df2 
0x00000001          0 0x54c5f2 0x19
(XEN) [2021-06-14 23:02:41] [0x10d4]      0 0x1de4668 
0x00000001          0 0x54ca68 0x19
(XEN) [2021-06-14 23:02:41] [0x10d5]      0 0x1de4a3e 
0x00000001          0 0x54c63e 0x19
(XEN) [2021-06-14 23:02:41] [0x10d6]      0 0x1de4faf 
0x00000001          0 0x54c3af 0x19
(XEN) [2021-06-14 23:02:41] [0x10d7]      0 0x1de4c01 
0x00000001          0 0x54c401 0x19
(XEN) [2021-06-14 23:02:41] [0x10d8]      0 0x1de5063 
0x00000001          0 0x54c063 0x19
(XEN) [2021-06-14 23:02:41] [0x10d9]      0 0x1de5101 
0x00000001          0 0x54c101 0x19
(XEN) [2021-06-14 23:02:41] [0x10da]      0 0x1de507d 
0x00000001          0 0x54c07d 0x19
(XEN) [2021-06-14 23:02:41] [0x10db]      0 0x1de4aa1 
0x00000001          0 0x54c6a1 0x19
(XEN) [2021-06-14 23:02:41] [0x10dc]      0 0x1de51e5 
0x00000001          0 0x54c1e5 0x19
(XEN) [2021-06-14 23:02:41] [0x10dd]      0 0x1de4bf7 
0x00000001          0 0x54c7f7 0x19
(XEN) [2021-06-14 23:02:41] [0x10de]      0 0x1de547c 
0x00000001          0 0x54bc7c 0x19
(XEN) [2021-06-14 23:02:41] [0x10e0]      0 0x1de2f5a 
0x00000001          0 0x54e35a 0x19
(XEN) [2021-06-14 23:02:41] [0x10e5]      0 0x1de5467 
0x00000001          0 0x54bc67 0x19
(XEN) [2021-06-14 23:02:41] [0x10e9]      0 0x1de5409 
0x00000001          0 0x54bc09 0x19
(XEN) [2021-06-14 23:02:41] [0x10eb]      0 0x1de4a36 
0x00000001          0 0x54c636 0x19
(XEN) [2021-06-14 23:02:41] [0x10ec]      0 0x1de5473 
0x00000001          0 0x54bc73 0x19
(XEN) [2021-06-14 23:02:41] [0x10ed]      0 0x1de4ed7 
0x00000001          0 0x54c2d7 0x19
(XEN) [2021-06-14 23:02:41] [0x10ee]      0 0x1de5034 
0x00000001          0 0x54c034 0x19
(XEN) [2021-06-14 23:02:41] [0x10ef]      0 0x1de4e53 
0x00000001          0 0x54c253 0x19
(XEN) [2021-06-14 23:02:41] [0x10f0]      0 0x1de4e4e 
0x00000001          0 0x54c24e 0x19
(XEN) [2021-06-14 23:02:41] [0x10f1]      0 0x1de4c3d 
0x00000001          0 0x54c43d 0x19
(XEN) [2021-06-14 23:02:41] [0x10f2]      0 0x1de4685 
0x00000001          0 0x54ca85 0x19
(XEN) [2021-06-14 23:02:41] [0x10f3]      0 0x1de51c9 
0x00000001          0 0x54c1c9 0x19
(XEN) [2021-06-14 23:02:41] [0x10f4]      0 0x1de54d2 
0x00000001          0 0x54bcd2 0x19
(XEN) [2021-06-14 23:02:41] [0x10f5]      0 0x1de4dc5 
0x00000001          0 0x54c5c5 0x19
(XEN) [2021-06-14 23:02:41] [0x10f7]      0 0x1de4cc2 
0x00000001          0 0x54c4c2 0x19
(XEN) [2021-06-14 23:02:41] [0x10fb]      0 0x1de4d6f 
0x00000001          0 0x54c56f 0x19
(XEN) [2021-06-14 23:02:41] [0x10fd]      0 0x1de55c9 
0x00000001          0 0x54bdc9 0x19
(XEN) [2021-06-14 23:02:41] [0x10fe]      0 0x1de5502 
0x00000001          0 0x54bd02 0x19
(XEN) [2021-06-14 23:02:41] [0x10ff]      0 0x1de478a 
0x00000001          0 0x54cb8a 0x19
(XEN) [2021-06-14 23:02:41] [0x1103]      0 0x1de4c15 
0x00000001          0 0x54c415 0x19
(XEN) [2021-06-14 23:02:41] [0x1106]      0 0x1de4f00 
0x00000001          0 0x54c300 0x19
(XEN) [2021-06-14 23:02:41] [0x1107]      0 0x1de4d9d 
0x00000001          0 0x54c59d 0x19
(XEN) [2021-06-14 23:02:41] [0x1108]      0 0x1de5170 
0x00000001          0 0x54c170 0x19
(XEN) [2021-06-14 23:02:41] [0x1109]      0 0x1de4df3 
0x00000001          0 0x54c5f3 0x19
(XEN) [2021-06-14 23:02:41] [0x110a]      0 0x1de4fd0 
0x00000001          0 0x54c3d0 0x19
(XEN) [2021-06-14 23:02:41] [0x110b]      0 0x1de4d98 
0x00000001          0 0x54c598 0x19
(XEN) [2021-06-14 23:02:41] [0x110d]      0 0x1de4b10 
0x00000001          0 0x54c710 0x19
(XEN) [2021-06-14 23:02:41] [0x110f]      0 0x1de4da9 
0x00000001          0 0x54c5a9 0x19
(XEN) [2021-06-14 23:02:41] [0x1111]      0 0x1de49f5 
0x00000001          0 0x54c9f5 0x19
(XEN) [2021-06-14 23:02:41] [0x1114]      0 0x1de50a0 
0x00000001          0 0x54c0a0 0x19
(XEN) [2021-06-14 23:02:41] [0x1116]      0 0x1de4d27 
0x00000001          0 0x54c527 0x19
(XEN) [2021-06-14 23:02:41] [0x1117]      0 0x1de4a1f 
0x00000001          0 0x54c61f 0x19
(XEN) [2021-06-14 23:02:41] [0x1118]      0 0x1de51f2 
0x00000001          0 0x54c1f2 0x19
(XEN) [2021-06-14 23:02:41] [0x111c]      0 0x1de543d 
0x00000001          0 0x54bc3d 0x19
(XEN) [2021-06-14 23:02:41] [0x1122]      0 0x1de479e 
0x00000001          0 0x54cb9e 0x19
(XEN) [2021-06-14 23:02:41] [0x1123]      0 0x1de540b 
0x00000001          0 0x54bc0b 0x19
(XEN) [2021-06-14 23:02:41] [0x1126]      0 0x1de4a87 
0x00000001          0 0x54c687 0x19
(XEN) [2021-06-14 23:02:41] [0x1129]      0 0x1de50be 
0x00000001          0 0x54c0be 0x19
(XEN) [2021-06-14 23:02:41] [0x112b]      0 0x1de4789 
0x00000001          0 0x54cb89 0x19
(XEN) [2021-06-14 23:02:41] [0x112c]      0 0x1de5559 
0x00000001          0 0x54bd59 0x19
(XEN) [2021-06-14 23:02:41] [0x112d]      0 0x1de4616 
0x00000001          0 0x54ca16 0x19
(XEN) [2021-06-14 23:02:41] [0x112f]      0 0x1de4688 
0x00000001          0 0x54ca88 0x19
(XEN) [2021-06-14 23:02:41] [0x1133]      0 0x1de55cf 
0x00000001          0 0x54bdcf 0x19
(XEN) [2021-06-14 23:02:41] [0x1134]      0 0x1de4e9d 
0x00000001          0 0x54c29d 0x19
(XEN) [2021-06-14 23:02:41] [0x1135]      0 0x1de558d 
0x00000001          0 0x54bd8d 0x19
(XEN) [2021-06-14 23:02:41] [0x1136]      0 0x1de4a12 
0x00000001          0 0x54c612 0x19
(XEN) [2021-06-14 23:02:41] [0x1137]      0 0x1de4e03 
0x00000001          0 0x54c203 0x19
(XEN) [2021-06-14 23:02:41] [0x1138]      0 0x1de5210 
0x00000001          0 0x54be10 0x19
(XEN) [2021-06-14 23:02:41] [0x113a]      0 0x1de4bd8 
0x00000001          0 0x54c7d8 0x19
(XEN) [2021-06-14 23:02:41] [0x1140]      0 0x1de4608 
0x00000001          0 0x54ca08 0x19
(XEN) [2021-06-14 23:02:41] [0x1142]      0 0x1de4bdc 
0x00000001          0 0x54c7dc 0x19
(XEN) [2021-06-14 23:02:41] [0x1146]      0 0x1de4cda 
0x00000001          0 0x54c4da 0x19
(XEN) [2021-06-14 23:02:41] [0x1147]      0 0x1de4f21 
0x00000001          0 0x54c321 0x19
(XEN) [2021-06-14 23:02:41] [0x1148]      0 0x1de4a38 
0x00000001          0 0x54c638 0x19
(XEN) [2021-06-14 23:02:41] [0x1149]      0 0x1de4c6e 
0x00000001          0 0x54c46e 0x19
(XEN) [2021-06-14 23:02:41] [0x114a]      0 0x1de5403 
0x00000001          0 0x54bc03 0x19
(XEN) [2021-06-14 23:02:41] [0x114b]      0 0x1de4a82 
0x00000001          0 0x54c682 0x19
(XEN) [2021-06-14 23:02:41] [0x114d]      0 0x1de55ee 
0x00000001          0 0x54bdee 0x19
(XEN) [2021-06-14 23:02:41] [0x114e]      0 0x1de4f01 
0x00000001          0 0x54c301 0x19
(XEN) [2021-06-14 23:02:41] [0x1150]      0 0x1de4d39 
0x00000001          0 0x54c539 0x19
(XEN) [2021-06-14 23:02:41] [0x1151]      0 0x1de55d1 
0x00000001          0 0x54bdd1 0x19
(XEN) [2021-06-14 23:02:41] [0x1152]      0 0x1de4bb1 
0x00000001          0 0x54c7b1 0x19
(XEN) [2021-06-14 23:02:41] [0x1154]      0 0x1de4bf8 
0x00000001          0 0x54c7f8 0x19
(XEN) [2021-06-14 23:02:41] [0x1157]      0 0x1de4b61 
0x00000001          0 0x54c761 0x19
(XEN) [2021-06-14 23:02:41] [0x1158]      0 0x1de54a4 
0x00000001          0 0x54bca4 0x19
(XEN) [2021-06-14 23:02:41] [0x1159]      0 0x1de4e66 0x00000001          0 0x54c266 0x19
(XEN) [2021-06-14 23:02:41] [0x115a]      0 0x1de54cd 0x00000001          0 0x54bccd 0x19
(XEN) [2021-06-14 23:02:41] [0x115b]      0 0x1de4fe5 0x00000001          0 0x54c3e5 0x19
(XEN) [2021-06-14 23:02:41] [0x115c]      0 0x1de4f59 0x00000001          0 0x54c359 0x19
(XEN) [2021-06-14 23:02:41] [0x115d]      0 0x1de5060 0x00000001          0 0x54c060 0x19
(XEN) [2021-06-14 23:02:41] [0x1161]      0 0x1de4690 0x00000001          0 0x54ca90 0x19
(XEN) [2021-06-14 23:02:41] [0x1162]      0 0x1de4ca7 0x00000001          0 0x54c4a7 0x19
(XEN) [2021-06-14 23:02:41] [0x1163]      0 0x1de4ce7 0x00000001          0 0x54c4e7 0x19
(XEN) [2021-06-14 23:02:41] [0x1166]      0 0x1de4b5d 0x00000001          0 0x54c75d 0x19
(XEN) [2021-06-14 23:02:41] [0x116a]      0 0x1de4f79 0x00000001          0 0x54c379 0x19
(XEN) [2021-06-14 23:02:41] [0x116c]      0 0x1de4f65 0x00000001          0 0x54c365 0x19
(XEN) [2021-06-14 23:02:41] [0x1170]      0 0x1de4e6b 0x00000001          0 0x54c26b 0x19
(XEN) [2021-06-14 23:02:41] [0x1171]      0 0x1de4a90 0x00000001          0 0x54c690 0x19
(XEN) [2021-06-14 23:02:41] [0x1172]      0 0x1de55f4 0x00000001          0 0x54bdf4 0x19
(XEN) [2021-06-14 23:02:41] [0x1174]      0 0x1de4f71 0x00000001          0 0x54c371 0x19
(XEN) [2021-06-14 23:02:41] [0x1176]      0 0x1de50b5 0x00000001          0 0x54c0b5 0x19
(XEN) [2021-06-14 23:02:41] [0x1177]      0 0x1de4fd4 0x00000001          0 0x54c3d4 0x19
(XEN) [2021-06-14 23:02:41] [0x1179]      0 0x1de4dff 0x00000001          0 0x54c5ff 0x19
(XEN) [2021-06-14 23:02:41] [0x117b]      0 0x1de4631 0x00000001          0 0x54ca31 0x19
(XEN) [2021-06-14 23:02:41] [0x117c]      0 0x1de4a93 0x00000001          0 0x54c693 0x19
(XEN) [2021-06-14 23:02:41] [0x117d]      0 0x1de54ba 0x00000001          0 0x54bcba 0x19
(XEN) [2021-06-14 23:02:41] [0x117e]      0 0x1de5432 0x00000001          0 0x54bc32 0x19
(XEN) [2021-06-14 23:02:41] [0x117f]      0 0x1de5420 0x00000001          0 0x54bc20 0x19
(XEN) [2021-06-14 23:02:41] [0x1181]      0 0x1de548b 0x00000001          0 0x54bc8b 0x19
(XEN) [2021-06-14 23:02:41] [0x1182]      0 0x1de46eb 0x00000001          0 0x54caeb 0x19
(XEN) [2021-06-14 23:02:41] [0x1183]      0 0x1de4c8d 0x00000001          0 0x54c48d 0x19
(XEN) [2021-06-14 23:02:41] [0x1185]      0 0x1de4d94 0x00000001          0 0x54c594 0x19
(XEN) [2021-06-14 23:02:41] [0x1187]      0 0x1de4ce1 0x00000001          0 0x54c4e1 0x19
(XEN) [2021-06-14 23:02:41] [0x1188]      0 0x1de4df6 0x00000001          0 0x54c5f6 0x19
(XEN) [2021-06-14 23:02:41] [0x1189]      0 0x1de5021 0x00000001          0 0x54c021 0x19
(XEN) [2021-06-14 23:02:41] [0x118c]      0 0x1de54c7 0x00000001          0 0x54bcc7 0x19
(XEN) [2021-06-14 23:02:41] [0x118e]      0 0x1de559e 0x00000001          0 0x54bd9e 0x19
(XEN) [2021-06-14 23:02:41] [0x1191]      0 0x1de4d91 0x00000001          0 0x54c591 0x19
(XEN) [2021-06-14 23:02:41] [0x1194]      0 0x1de5412 0x00000001          0 0x54bc12 0x19
(XEN) [2021-06-14 23:02:41] [0x1199]      0 0x1de47d2 0x00000001          0 0x54cbd2 0x19
(XEN) [2021-06-14 23:02:41] [0x119a]      0 0x1de51ee 0x00000001          0 0x54c1ee 0x19
(XEN) [2021-06-14 23:02:41] [0x119c]      0 0x1de4f82 0x00000001          0 0x54c382 0x19
(XEN) [2021-06-14 23:02:41] [0x119d]      0 0x1de508d 0x00000001          0 0x54c08d 0x19
(XEN) [2021-06-14 23:02:41] [0x119e]      0 0x1de46c4 0x00000001          0 0x54cac4 0x19
(XEN) [2021-06-14 23:02:41] [0x119f]      0 0x1de4ca6 0x00000001          0 0x54c4a6 0x19
(XEN) [2021-06-14 23:02:41] [0x11a0]      0 0x1de4f33 0x00000001          0 0x54c333 0x19
(XEN) [2021-06-14 23:02:41] [0x11a1]      0 0x1de469b 0x00000001          0 0x54ca9b 0x19
(XEN) [2021-06-14 23:02:41] [0x11a4]      0 0x1de46cf 0x00000001          0 0x54cacf 0x19
(XEN) [2021-06-14 23:02:41] [0x11a5]      0 0x1de47a9 0x00000001          0 0x54cba9 0x19
(XEN) [2021-06-14 23:02:41] [0x11a6]      0 0x1de54cb 0x00000001          0 0x54bccb 0x19
(XEN) [2021-06-14 23:02:42] [0x11a8]      0 0x1de5542 0x00000001          0 0x54bd42 0x19
(XEN) [2021-06-14 23:02:42] [0x11a9]      0 0x1de4c5a 0x00000001          0 0x54c45a 0x19
(XEN) [2021-06-14 23:02:42] [0x11aa]      0 0x1de4c79 0x00000001          0 0x54c479 0x19
(XEN) [2021-06-14 23:02:42] [0x11ac]      0 0x1de5437 0x00000001          0 0x54bc37 0x19
(XEN) [2021-06-14 23:02:42] [0x11af]      0 0x1de4c56 0x00000001          0 0x54c456 0x19
(XEN) [2021-06-14 23:02:42] [0x11b1]      0 0x1de50b9 0x00000001          0 0x54c0b9 0x19
(XEN) [2021-06-14 23:02:42] [0x11b3]      0 0x1de4f81 0x00000001          0 0x54c381 0x19
(XEN) [2021-06-14 23:02:42] [0x11b5]      0 0x1de4c0c 0x00000001          0 0x54c40c 0x19
(XEN) [2021-06-14 23:02:42] [0x11b6]      0 0x1de4f95 0x00000001          0 0x54c395 0x19
(XEN) [2021-06-14 23:02:42] [0x11b7]      0 0x1de4f13 0x00000001          0 0x54c313 0x19
(XEN) [2021-06-14 23:02:42] [0x11ba]      0 0x1de4dcf 0x00000001          0 0x54c5cf 0x19
(XEN) [2021-06-14 23:02:42] [0x11bc]      0 0x1de512e 0x00000001          0 0x54c12e 0x19
(XEN) [2021-06-14 23:02:42] [0x11be]      0 0x1de5207 0x00000001          0 0x54be07 0x19
(XEN) [2021-06-14 23:02:42] [0x11c1]      0 0x1de5144 0x00000001          0 0x54c144 0x19
(XEN) [2021-06-14 23:02:42] [0x11c3]      0 0x1de46d1 0x00000001          0 0x54cad1 0x19
(XEN) [2021-06-14 23:02:42] [0x11c4]      0 0x1de4606 0x00000001          0 0x54ca06 0x19
(XEN) [2021-06-14 23:02:42] [0x11c6]      0 0x1de4ae3 0x00000001          0 0x54c6e3 0x19
(XEN) [2021-06-14 23:02:42] [0x11c7]      0 0x1de4dc6 0x00000001          0 0x54c5c6 0x19
(XEN) [2021-06-14 23:02:42] [0x11c9]      0 0x1de55ef 0x00000001          0 0x54bdef 0x19
(XEN) [2021-06-14 23:02:42] [0x11cb]      0 0x1de552e 0x00000001          0 0x54bd2e 0x19
(XEN) [2021-06-14 23:02:42] [0x11cd]      0 0x1de469f 0x00000001          0 0x54ca9f 0x19
(XEN) [2021-06-14 23:02:42] [0x11cf]      0 0x1de4f73 0x00000001          0 0x54c373 0x19
(XEN) [2021-06-14 23:02:42] [0x11d0]      0 0x1de4ccd 0x00000001          0 0x54c4cd 0x19
(XEN) [2021-06-14 23:02:42] [0x11d1]      0 0x1de5051 0x00000001          0 0x54c051 0x19
(XEN) [2021-06-14 23:02:42] [0x11d3]      0 0x1de4f32 0x00000001          0 0x54c332 0x19
(XEN) [2021-06-14 23:02:42] [0x11d5]      0 0x1de520e 0x00000001          0 0x54be0e 0x19
(XEN) [2021-06-14 23:02:42] [0x11d7]      0 0x1de4ffa 0x00000001          0 0x54c3fa 0x19
(XEN) [2021-06-14 23:02:42] [0x11d8]      0 0x1de4691 0x00000001          0 0x54ca91 0x19
(XEN) [2021-06-14 23:02:42] [0x11d9]      0 0x1de4ea8 0x00000001          0 0x54c2a8 0x19
(XEN) [2021-06-14 23:02:42] [0x11dd]      0 0x1de4f60 0x00000001          0 0x54c360 0x19
(XEN) [2021-06-14 23:02:42] [0x11df]      0 0x1de4bc7 0x00000001          0 0x54c7c7 0x19
(XEN) [2021-06-14 23:02:42] [0x11e0]      0 0x1de4655 0x00000001          0 0x54ca55 0x19
(XEN) [2021-06-14 23:02:42] [0x11e1]      0 0x1de473c 0x00000001          0 0x54cb3c 0x19
(XEN) [2021-06-14 23:02:42] [0x11e3]      0 0x1de54aa 0x00000001          0 0x54bcaa 0x19
(XEN) [2021-06-14 23:02:42] [0x11e4]      0 0x1de5028 0x00000001          0 0x54c028 0x19
(XEN) [2021-06-14 23:02:42] [0x11e5]      0 0x1de4f7a 0x00000001          0 0x54c37a 0x19
(XEN) [2021-06-14 23:02:42] [0x11e7]      0 0x1de47b3 0x00000001          0 0x54cbb3 0x19
(XEN) [2021-06-14 23:02:42] [0x11e8]      0 0x1de54ef 0x00000001          0 0x54bcef 0x19
(XEN) [2021-06-14 23:02:42] [0x11eb]      0 0x1de4601 0x00000001          0 0x54ca01 0x19
(XEN) [2021-06-14 23:02:42] [0x11ef]      0 0x1de55bf 0x00000001          0 0x54bdbf 0x19
(XEN) [2021-06-14 23:02:42] [0x11f1]      0 0x1de4ec5 0x00000001          0 0x54c2c5 0x19
(XEN) [2021-06-14 23:02:42] [0x11f3]      0 0x1de558a 0x00000001          0 0x54bd8a 0x19
(XEN) [2021-06-14 23:02:42] [0x1200]      0 0x1de5453 0x00000001          0 0x54bc53 0x19
(XEN) [2021-06-14 23:02:42] [0x1201]      0 0x1de5016 0x00000001          0 0x54c016 0x19
(XEN) [2021-06-14 23:02:42] [0x1202]      0 0x1de4ec2 0x00000001          0 0x54c2c2 0x19
(XEN) [2021-06-14 23:02:42] [0x1205]      0 0x1de4fd7 0x00000001          0 0x54c3d7 0x19
(XEN) [2021-06-14 23:02:42] [0x1207]      0 0x1de5180 0x00000001          0 0x54c180 0x19
(XEN) [2021-06-14 23:02:42] [0x1209]      0 0x1de5421 0x00000001          0 0x54bc21 0x19
(XEN) [2021-06-14 23:02:42] [0x120b]      0 0x1de4bba 0x00000001          0 0x54c7ba 0x19
(XEN) [2021-06-14 23:02:42] [0x120c]      0 0x1de2f64 0x00000001          0 0x54e364 0x19
(XEN) [2021-06-14 23:02:42] [0x120d]      0 0x1de46e5 0x00000001          0 0x54cae5 0x19
(XEN) [2021-06-14 23:02:42] [0x120e]      0 0x1de4b4f 0x00000001          0 0x54c74f 0x19
(XEN) [2021-06-14 23:02:42] [0x120f]      0 0x1de49f2 0x00000001          0 0x54c9f2 0x19
(XEN) [2021-06-14 23:02:42] [0x1211]      0 0x1de461e 0x00000001          0 0x54ca1e 0x19
(XEN) [2021-06-14 23:02:42] [0x1213]      0 0x1de4ae4 0x00000001          0 0x54c6e4 0x19
(XEN) [2021-06-14 23:02:42] [0x1214]      0 0x1de4d1f 0x00000001          0 0x54c51f 0x19
(XEN) [2021-06-14 23:02:42] [0x1215]      0 0x1de50a9 0x00000001          0 0x54c0a9 0x19
(XEN) [2021-06-14 23:02:42] [0x1216]      0 0x1de477a 0x00000001          0 0x54cb7a 0x19
(XEN) [2021-06-14 23:02:42] [0x1218]      0 0x1de50e3 0x00000001          0 0x54c0e3 0x19
(XEN) [2021-06-14 23:02:42] [0x1219]      0 0x1de47bf 0x00000001          0 0x54cbbf 0x19
(XEN) [2021-06-14 23:02:42] [0x121c]      0 0x1de4704 0x00000001          0 0x54cb04 0x19
(XEN) [2021-06-14 23:02:42] [0x121d]      0 0x1de4e6e 0x00000001          0 0x54c26e 0x19
(XEN) [2021-06-14 23:02:42] [0x1220]      0 0x1de4dcb 0x00000001          0 0x54c5cb 0x19
(XEN) [2021-06-14 23:02:42] [0x1221]      0 0x1de47f7 0x00000001          0 0x54cbf7 0x19
(XEN) [2021-06-14 23:02:42] [0x1223]      0 0x1de4d14 0x00000001          0 0x54c514 0x19
(XEN) [2021-06-14 23:02:42] [0x1225]      0 0x1de5080 0x00000001          0 0x54c080 0x19
(XEN) [2021-06-14 23:02:42] [0x1229]      0 0x1de4dbb 0x00000001          0 0x54c5bb 0x19
(XEN) [2021-06-14 23:02:42] [0x122a]      0 0x1de5401 0x00000001          0 0x54bc01 0x19
(XEN) [2021-06-14 23:02:42] [0x122b]      0 0x1de4adf 0x00000001          0 0x54c6df 0x19
(XEN) [2021-06-14 23:02:42] [0x1230]      0 0x1de51d6 0x00000001          0 0x54c1d6 0x19
(XEN) [2021-06-14 23:02:42] [0x1231]      0 0x1de4745 0x00000001          0 0x54cb45 0x19
(XEN) [2021-06-14 23:02:42] [0x1232]      0 0x1de51cd 0x00000001          0 0x54c1cd 0x19
(XEN) [2021-06-14 23:02:42] [0x1236]      0 0x1de47cd 0x00000001          0 0x54cbcd 0x19
(XEN) [2021-06-14 23:02:42] [0x1238]      0 0x1de47ca 0x00000001          0 0x54cbca 0x19
(XEN) [2021-06-14 23:02:42] [0x123a]      0 0x1de50fb 0x00000001          0 0x54c0fb 0x19
(XEN) [2021-06-14 23:02:42] [0x123b]      0 0x1de4ea0 0x00000001          0 0x54c2a0 0x19
(XEN) [2021-06-14 23:02:42] [0x123d]      0 0x1de5131 0x00000001          0 0x54c131 0x19
(XEN) [2021-06-14 23:02:42] [0x123e]      0 0x1de4b63 0x00000001          0 0x54c763 0x19
(XEN) [2021-06-14 23:02:42] [0x123f]      0 0x1de54f6 0x00000001          0 0x54bcf6 0x19
(XEN) [2021-06-14 23:02:42] [0x1243]      0 0x1de511c 0x00000001          0 0x54c11c 0x19
(XEN) [2021-06-14 23:02:42] [0x1244]      0 0x1de4e68 0x00000001          0 0x54c268 0x19
(XEN) [2021-06-14 23:02:42] [0x1245]      0 0x1de4e1e 0x00000001          0 0x54c21e 0x19
(XEN) [2021-06-14 23:02:42] [0x1246]      0 0x1de4c9f 0x00000001          0 0x54c49f 0x19
(XEN) [2021-06-14 23:02:42] [0x1247]      0 0x1de519b 0x00000001          0 0x54c19b 0x19
(XEN) [2021-06-14 23:02:42] [0x1248]      0 0x1de4efc 0x00000001          0 0x54c2fc 0x19
(XEN) [2021-06-14 23:02:42] [0x1249]      0 0x1de4bf1 0x00000001          0 0x54c7f1 0x19
(XEN) [2021-06-14 23:02:42] [0x124a]      0 0x1de4a2f 0x00000001          0 0x54c62f 0x19
(XEN) [2021-06-14 23:02:42] [0x124b]      0 0x1de46ac 0x00000001          0 0x54caac 0x19
(XEN) [2021-06-14 23:02:42] [0x124c]      0 0x1de4f70 0x00000001          0 0x54c370 0x19
(XEN) [2021-06-14 23:02:42] [0x124f]      0 0x1de4c2d 0x00000001          0 0x54c42d 0x19
(XEN) [2021-06-14 23:02:42] [0x1250]      0 0x1de4dec 0x00000001          0 0x54c5ec 0x19
(XEN) [2021-06-14 23:02:42] [0x1252]      0 0x1de5477 0x00000001          0 0x54bc77 0x19
(XEN) [2021-06-14 23:02:42] [0x1253]      0 0x1de47bd 0x00000001          0 0x54cbbd 0x19
(XEN) [2021-06-14 23:02:42] [0x1254]      0 0x1de4d79 0x00000001          0 0x54c579 0x19
(XEN) [2021-06-14 23:02:42] [0x1258]      0 0x1de2f67 0x00000001          0 0x54e367 0x19
(XEN) [2021-06-14 23:02:42] [0x1259]      0 0x1de46f3 0x00000001          0 0x54caf3 0x19
(XEN) [2021-06-14 23:02:42] [0x125b]      0 0x1de5492 0x00000001          0 0x54bc92 0x19
(XEN) [2021-06-14 23:02:42] [0x125e]      0 0x1de4bd0 0x00000001          0 0x54c7d0 0x19
(XEN) [2021-06-14 23:02:42] [0x125f]      0 0x1de4ed6 0x00000001          0 0x54c2d6 0x19
(XEN) [2021-06-14 23:02:42] [0x1260]      0 0x1de5496 0x00000001          0 0x54bc96 0x19
(XEN) [2021-06-14 23:02:42] [0x1261]      0 0x1de4ebb 0x00000001          0 0x54c2bb 0x19
(XEN) [2021-06-14 23:02:42] [0x1262]      0 0x1de554a 0x00000001          0 0x54bd4a 0x19
(XEN) [2021-06-14 23:02:42] [0x1264]      0 0x1de50d6 0x00000001          0 0x54c0d6 0x19
(XEN) [2021-06-14 23:02:42] [0x1267]      0 0x1de471a 0x00000001          0 0x54cb1a 0x19
(XEN) [2021-06-14 23:02:42] [0x1269]      0 0x1de4a98 0x00000001          0 0x54c698 0x19
(XEN) [2021-06-14 23:02:42] [0x126a]      0 0x1de4b60 0x00000001          0 0x54c760 0x19
(XEN) [2021-06-14 23:02:42] [0x126c]      0 0x1de47a2 0x00000001          0 0x54cba2 0x19
(XEN) [2021-06-14 23:02:42] [0x126e]      0 0x1de4bbb 0x00000001          0 0x54c7bb 0x19
(XEN) [2021-06-14 23:02:42] [0x126f]      0 0x1de5524 0x00000001          0 0x54bd24 0x19
(XEN) [2021-06-14 23:02:42] [0x1270]      0 0x1de51af 0x00000001          0 0x54c1af 0x19
(XEN) [2021-06-14 23:02:42] [0x1271]      0 0x1de4b00 0x00000001          0 0x54c700 0x19
(XEN) [2021-06-14 23:02:42] [0x1272]      0 0x1de5202 0x00000001          0 0x54be02 0x19
(XEN) [2021-06-14 23:02:42] [0x1273]      0 0x1de4b43 0x00000001          0 0x54c743 0x19
(XEN) [2021-06-14 23:02:42] [0x1274]      0 0x1de4659 0x00000001          0 0x54ca59 0x19
(XEN) [2021-06-14 23:02:42] [0x1276]      0 0x1de4ef9 0x00000001          0 0x54c2f9 0x19
(XEN) [2021-06-14 23:02:42] [0x1277]      0 0x1de4e70 0x00000001          0 0x54c270 0x19
(XEN) [2021-06-14 23:02:42] [0x1278]      0 0x1de4adc 0x00000001          0 0x54c6dc 0x19
(XEN) [2021-06-14 23:02:42] [0x1279]      0 0x1de4604 0x00000001          0 0x54ca04 0x19
(XEN) [2021-06-14 23:02:42] [0x127b]      0 0x1de4d2a 0x00000001          0 0x54c52a 0x19
(XEN) [2021-06-14 23:02:42] [0x127c]      0 0x1de4f84 0x00000001          0 0x54c384 0x19
(XEN) [2021-06-14 23:02:42] [0x127d]      0 0x1de50a7 0x00000001          0 0x54c0a7 0x19
(XEN) [2021-06-14 23:02:42] [0x127e]      0 0x1de462a 0x00000001          0 0x54ca2a 0x19
(XEN) [2021-06-14 23:02:42] [0x127f]      0 0x1de4ead 0x00000001          0 0x54c2ad 0x19
(XEN) [2021-06-14 23:02:42] [0x1280]      0 0x1de543e 0x00000001          0 0x54bc3e 0x19
(XEN) [2021-06-14 23:02:42] [0x1283]      0 0x1de508b 0x00000001          0 0x54c08b 0x19
(XEN) [2021-06-14 23:02:42] [0x1285]      0 0x1de51fe 0x00000001          0 0x54c1fe 0x19
(XEN) [2021-06-14 23:02:42] [0x1286]      0 0x1de51e6 0x00000001          0 0x54c1e6 0x19
(XEN) [2021-06-14 23:02:43] [0x1287]      0 0x1de4e7c 0x00000001          0 0x54c27c 0x19
(XEN) [2021-06-14 23:02:43] [0x1289]      0 0x1de4d7d 0x00000001          0 0x54c57d 0x19
(XEN) [2021-06-14 23:02:43] [0x128a]      0 0x1de4f35 0x00000001          0 0x54c335 0x19
(XEN) [2021-06-14 23:02:43] [0x128c]      0 0x1de4eae 0x00000001          0 0x54c2ae 0x19
(XEN) [2021-06-14 23:02:43] [0x128d]      0 0x1de4fc6 0x00000001          0 0x54c3c6 0x19
(XEN) [2021-06-14 23:02:43] [0x128e]      0 0x1de5251 0x00000001          0 0x54be51 0x19
(XEN) [2021-06-14 23:02:43] [0x1291]      0 0x1de50d5 0x00000001          0 0x54c0d5 0x19
(XEN) [2021-06-14 23:02:43] [0x1293]      0 0x1de54fe 0x00000001          0 0x54bcfe 0x19
(XEN) [2021-06-14 23:02:43] [0x1294]      0 0x1de4ff7 0x00000001          0 0x54c3f7 0x19
(XEN) [2021-06-14 23:02:43] [0x1295]      0 0x1de4be5 0x00000001          0 0x54c7e5 0x19
(XEN) [2021-06-14 23:02:43] [0x1296]      0 0x1de55e9 0x00000001          0 0x54bde9 0x19
(XEN) [2021-06-14 23:02:43] [0x1297]      0 0x1de5031 0x00000001          0 0x54c031 0x19
(XEN) [2021-06-14 23:02:43] [0x1298]      0 0x1de4f11 0x00000001          0 0x54c311 0x19
(XEN) [2021-06-14 23:02:43] [0x1299]      0 0x1de4f7c 0x00000001          0 0x54c37c 0x19
(XEN) [2021-06-14 23:02:43] [0x129b]      0 0x1de46ce 0x00000001          0 0x54cace 0x19
(XEN) [2021-06-14 23:02:43] [0x129c]      0 0x1de4b4a 0x00000001          0 0x54c74a 0x19
(XEN) [2021-06-14 23:02:43] [0x129d]      0 0x1de2f53 0x00000001          0 0x54e353 0x19
(XEN) [2021-06-14 23:02:43] [0x129f]      0 0x1de4614 0x00000001          0 0x54ca14 0x19
(XEN) [2021-06-14 23:02:43] [0x12a0]      0 0x1de2f54 0x00000001          0 0x54e354 0x19
(XEN) [2021-06-14 23:02:43] [0x12a1]      0 0x1de46d5 0x00000001          0 0x54cad5 0x19
(XEN) [2021-06-14 23:02:43] [0x12a2]      0 0x1de5499 0x00000001          0 0x54bc99 0x19
(XEN) [2021-06-14 23:02:43] [0x12a3]      0 0x1de509e 0x00000001          0 0x54c09e 0x19
(XEN) [2021-06-14 23:02:43] [0x12a5]      0 0x1de55f5 0x00000001          0 0x54bdf5 0x19
(XEN) [2021-06-14 23:02:43] [0x12a6]      0 0x1de549d 0x00000001          0 0x54bc9d 0x19
(XEN) [2021-06-14 23:02:43] [0x12a8]      0 0x1de4d8b 0x00000001          0 0x54c58b 0x19
(XEN) [2021-06-14 23:02:43] [0x12a9]      0 0x1de4cf1 0x00000001          0 0x54c4f1 0x19
(XEN) [2021-06-14 23:02:43] [0x12aa]      0 0x1de4f8a 0x00000001          0 0x54c38a 0x19
(XEN) [2021-06-14 23:02:43] [0x12ac]      0 0x1de4731 0x00000001          0 0x54cb31 0x19
(XEN) [2021-06-14 23:02:43] [0x12ad]      0 0x1de4dc3 0x00000001          0 0x54c5c3 0x19
(XEN) [2021-06-14 23:02:43] [0x12af]      0 0x1de2f57 0x00000001          0 0x54e357 0x19
(XEN) [2021-06-14 23:02:43] [0x12b2]      0 0x1de4b37 0x00000001          0 0x54c737 0x19
(XEN) [2021-06-14 23:02:43] [0x12b3]      0 0x1de4f88 0x00000001          0 0x54c388 0x19
(XEN) [2021-06-14 23:02:43] [0x12b4]      0 0x1de5020 0x00000001          0 0x54c020 0x19
(XEN) [2021-06-14 23:02:43] [0x12b5]      0 0x1de4c4d 0x00000001          0 0x54c44d 0x19
(XEN) [2021-06-14 23:02:43] [0x12b6]      0 0x1de4a58 0x00000001          0 0x54c658 0x19
(XEN) [2021-06-14 23:02:43] [0x12b7]      0 0x1de5036 0x00000001          0 0x54c036 0x19
(XEN) [2021-06-14 23:02:43] [0x12b8]      0 0x1de511b 0x00000001          0 0x54c11b 0x19
(XEN) [2021-06-14 23:02:43] [0x12b9]      0 0x1de5483 0x00000001          0 0x54bc83 0x19
(XEN) [2021-06-14 23:02:43] [0x12ba]      0 0x1de553f 0x00000001          0 0x54bd3f 0x19
(XEN) [2021-06-14 23:02:43] [0x12bb]      0 0x1de4791 0x00000001          0 0x54cb91 0x19
(XEN) [2021-06-14 23:02:43] [0x12bd]      0 0x1de4e57 0x00000001          0 0x54c257 0x19
(XEN) [2021-06-14 23:02:43] [0x12c0]      0 0x1de46a6 0x00000001          0 0x54caa6 0x19
(XEN) [2021-06-14 23:02:43] [0x12c3]      0 0x1de4ca0 0x00000001          0 0x54c4a0 0x19
(XEN) [2021-06-14 23:02:43] [0x12c4]      0 0x1de46a0 0x00000001          0 0x54caa0 0x19
(XEN) [2021-06-14 23:02:43] [0x12c6]      0 0x1de5407 0x00000001          0 0x54bc07 0x19
(XEN) [2021-06-14 23:02:43] [0x12c8]      0 0x1de5012 0x00000001          0 0x54c012 0x19
(XEN) [2021-06-14 23:02:43] [0x12c9]      0 0x1de5447 0x00000001          0 0x54bc47 0x19
(XEN) [2021-06-14 23:02:43] [0x12cd]      0 0x1de4a37 0x00000001          0 0x54c637 0x19
(XEN) [2021-06-14 23:02:43] [0x12ce]      0 0x1de4a91 0x00000001          0 0x54c691 0x19
(XEN) [2021-06-14 23:02:43] [0x12cf]      0 0x1de4ab9 0x00000001          0 0x54c6b9 0x19
(XEN) [2021-06-14 23:02:43] [0x12d0]      0 0x1de4eff 0x00000001          0 0x54c2ff 0x19
(XEN) [2021-06-14 23:02:43] [0x12d1]      0 0x1de51e1 0x00000001          0 0x54c1e1 0x19
(XEN) [2021-06-14 23:02:43] [0x12d2]      0 0x1de5551 0x00000001          0 0x54bd51 0x19
(XEN) [2021-06-14 23:02:43] [0x12d3]      0 0x1de47eb 0x00000001          0 0x54cbeb 0x19
(XEN) [2021-06-14 23:02:43] [0x12d4]      0 0x1de5176 0x00000001          0 0x54c176 0x19
(XEN) [2021-06-14 23:02:43] [0x12d7]      0 0x1de4e01 0x00000001          0 0x54c201 0x19
(XEN) [2021-06-14 23:02:43] [0x12d8]      0 0x1de5554 0x00000001          0 0x54bd54 0x19
(XEN) [2021-06-14 23:02:43] [0x12d9]      0 0x1de4ea4 0x00000001          0 0x54c2a4 0x19
(XEN) [2021-06-14 23:02:43] [0x12da]      0 0x1de5489 0x00000001          0 0x54bc89 0x19
(XEN) [2021-06-14 23:02:43] [0x12db]      0 0x1de4f22 0x00000001          0 0x54c322 0x19
(XEN) [2021-06-14 23:02:43] [0x12dc]      0 0x1de4c69 0x00000001          0 0x54c469 0x19
(XEN) [2021-06-14 23:02:43] [0x12dd]      0 0x1de5462 0x00000001          0 0x54bc62 0x19
(XEN) [2021-06-14 23:02:43] [0x12e0]      0 0x1de51ff 0x00000001          0 0x54c1ff 0x19
(XEN) [2021-06-14 23:02:43] [0x12e1]      0 0x1de4b50 0x00000001          0 0x54c750 0x19
(XEN) [2021-06-14 23:02:43] [0x12e2]      0 0x1de4b92 0x00000001          0 0x54c792 0x19
(XEN) [2021-06-14 23:02:43] [0x12e3]      0 0x1de4df0 0x00000001          0 0x54c5f0 0x19
(XEN) [2021-06-14 23:02:43] [0x12e5]      0 0x1de55da 0x00000001          0 0x54bdda 0x19
(XEN) [2021-06-14 23:02:43] [0x12e6]      0 0x1de4e14 0x00000001          0 0x54c214 0x19
(XEN) [2021-06-14 23:02:43] [0x12e7]      0 0x1de49f8 0x00000001          0 0x54c9f8 0x19
(XEN) [2021-06-14 23:02:43] [0x12eb]      0 0x1de4ced 0x00000001          0 0x54c4ed 0x19
(XEN) [2021-06-14 23:02:43] [0x12ec]      0 0x1de4764 0x00000001          0 0x54cb64 0x19
(XEN) [2021-06-14 23:02:43] [0x12ed]      0 0x1de4c29 0x00000001          0 0x54c429 0x19
(XEN) [2021-06-14 23:02:43] [0x12ef]      0 0x1de5497 0x00000001          0 0x54bc97 0x19
(XEN) [2021-06-14 23:02:43] [0x12f0]      0 0x1de5132 0x00000001          0 0x54c132 0x19
(XEN) [2021-06-14 23:02:43] [0x12f1]      0 0x1de50f0 0x00000001          0 0x54c0f0 0x19
(XEN) [2021-06-14 23:02:43] [0x12f2]      0 0x1de4f64 0x00000001          0 0x54c364 0x19
(XEN) [2021-06-14 23:02:43] [0x12f7]      0 0x1de4758 0x00000001          0 0x54cb58 0x19
(XEN) [2021-06-14 23:02:43] [0x12fb]      0 0x1de5449 0x00000001          0 0x54bc49 0x19
(XEN) [2021-06-14 23:02:43] [0x12fd]      0 0x1de5520 0x00000001          0 0x54bd20 0x19
(XEN) [2021-06-14 23:02:43] [0x12fe]      0 0x1de4625 0x00000001          0 0x54ca25 0x19
(XEN) [2021-06-14 23:02:43] [0x1300]      0 0x1de4de9 0x00000001          0 0x54c5e9 0x19
(XEN) [2021-06-14 23:02:43] [0x1301]      0 0x1de4f77 0x00000001          0 0x54c377 0x19
(XEN) [2021-06-14 23:02:43] [0x1303]      0 0x1de4e95 0x00000001          0 0x54c295 0x19
(XEN) [2021-06-14 23:02:43] [0x1304]      0 0x1de4d1e 0x00000001          0 0x54c51e 0x19
(XEN) [2021-06-14 23:02:43] [0x1305]      0 0x1de5074 0x00000001          0 0x54c074 0x19
(XEN) [2021-06-14 23:02:43] [0x1306]      0 0x1de4b0c 0x00000001          0 0x54c70c 0x19
(XEN) [2021-06-14 23:02:43] [0x1307]      0 0x1de51d3 0x00000001          0 0x54c1d3 0x19
(XEN) [2021-06-14 23:02:43] [0x1308]      0 0x1de4651 0x00000001          0 0x54ca51 0x19
(XEN) [2021-06-14 23:02:43] [0x130a]      0 0x1de4b75 0x00000001          0 0x54c775 0x19
(XEN) [2021-06-14 23:02:43] [0x130b]      0 0x1de4cee 0x00000001          0 0x54c4ee 0x19
(XEN) [2021-06-14 23:02:43] [0x130d]      0 0x1de50d7 0x00000001          0 0x54c0d7 0x19
(XEN) [2021-06-14 23:02:43] [0x130f]      0 0x1de4e9f 0x00000001          0 0x54c29f 0x19
(XEN) [2021-06-14 23:02:43] [0x1310]      0 0x1de4a01 0x00000001          0 0x54c601 0x19
(XEN) [2021-06-14 23:02:43] [0x1311]      0 0x1de4f41 0x00000001          0 0x54c341 0x19
(XEN) [2021-06-14 23:02:43] [0x1314]      0 0x1de4c60 0x00000001          0 0x54c460 0x19
(XEN) [2021-06-14 23:02:43] [0x1315]      0 0x1de4b58 0x00000001          0 0x54c758 0x19
(XEN) [2021-06-14 23:02:43] [0x1318]      0 0x1de5237 0x00000001          0 0x54be37 0x19
(XEN) [2021-06-14 23:02:43] [0x131b]      0 0x1de4eea 0x00000001          0 0x54c2ea 0x19
(XEN) [2021-06-14 23:02:43] [0x131e]      0 0x1de55d0 0x00000001          0 0x54bdd0 0x19
(XEN) [2021-06-14 23:02:43] [0x131f]      0 0x1de5464 0x00000001          0 0x54bc64 0x19
(XEN) [2021-06-14 23:02:43] [0x1321]      0 0x1de4662 0x00000001          0 0x54ca62 0x19
(XEN) [2021-06-14 23:02:43] [0x1324]      0 0x1de4e9b 0x00000001          0 0x54c29b 0x19
(XEN) [2021-06-14 23:02:43] [0x1325]      0 0x1de4aaf 0x00000001          0 0x54c6af 0x19
(XEN) [2021-06-14 23:02:43] [0x1327]      0 0x1de50eb 0x00000001          0 0x54c0eb 0x19
(XEN) [2021-06-14 23:02:43] [0x1328]      0 0x1de54e2 0x00000001          0 0x54bce2 0x19
(XEN) [2021-06-14 23:02:43] [0x1329]      0 0x1de5538 0x00000001          0 0x54bd38 0x19
(XEN) [2021-06-14 23:02:43] [0x132b]      0 0x1de479c 0x00000001          0 0x54cb9c 0x19
(XEN) [2021-06-14 23:02:43] [0x132f]      0 0x1de46aa 0x00000001          0 0x54caaa 0x19
(XEN) [2021-06-14 23:02:43] [0x1335]      0 0x1de5232 0x00000001          0 0x54be32 0x19
(XEN) [2021-06-14 23:02:43] [0x1336]      0 0x1de5102 0x00000001          0 0x54c102 0x19
(XEN) [2021-06-14 23:02:43] [0x1337]      0 0x1de4c21 0x00000001          0 0x54c421 0x19
(XEN) [2021-06-14 23:02:43] [0x1338]      0 0x1de4c86 0x00000001          0 0x54c486 0x19
(XEN) [2021-06-14 23:02:43] [0x133a]      0 0x1de4c97 0x00000001          0 0x54c497 0x19
(XEN) [2021-06-14 23:02:43] [0x133b]      0 0x1de5165 0x00000001          0 0x54c165 0x19
(XEN) [2021-06-14 23:02:43] [0x133c]      0 0x1de4a28 0x00000001          0 0x54c628 0x19
(XEN) [2021-06-14 23:02:43] [0x133e]      0 0x1de4d26 0x00000001          0 0x54c526 0x19
(XEN) [2021-06-14 23:02:43] [0x1340]      0 0x1de4d64 0x00000001          0 0x54c564 0x19
(XEN) [2021-06-14 23:02:43] [0x1343]      0 0x1de4ef5 0x00000001          0 0x54c2f5 0x19
(XEN) [2021-06-14 23:02:43] [0x1344]      0 0x1de4a9d 0x00000001          0 0x54c69d 0x19
(XEN) [2021-06-14 23:02:43] [0x1345]      0 0x1de4636 0x00000001          0 0x54ca36 0x19
(XEN) [2021-06-14 23:02:43] [0x1346]      0 0x1de46d4 0x00000001          0 0x54cad4 0x19
(XEN) [2021-06-14 23:02:43] [0x1347]      0 0x1de54c4 0x00000001          0 0x54bcc4 0x19
(XEN) [2021-06-14 23:02:43] [0x1348]      0 0x1de4d1c 0x00000001          0 0x54c51c 0x19
(XEN) [2021-06-14 23:02:43] [0x134c]      0 0x1de4eb6 0x00000001          0 0x54c2b6 0x19
(XEN) [2021-06-14 23:02:43] [0x134f]      0 0x1de4a0a 0x00000001          0 0x54c60a 0x19
(XEN) [2021-06-14 23:02:43] [0x1350]      0 0x1de4d0b 0x00000001          0 0x54c50b 0x19
(XEN) [2021-06-14 23:02:44] [0x1353]      0 0x1de475f 0x00000001          0 0x54cb5f 0x19
(XEN) [2021-06-14 23:02:44] [0x1355]      0 0x1de4ee3 0x00000001          0 0x54c2e3 0x19
(XEN) [2021-06-14 23:02:44] [0x1357]      0 0x1de4638 0x00000001          0 0x54ca38 0x19
(XEN) [2021-06-14 23:02:44] [0x1359]      0 0x1de46ca 0x00000001          0 0x54caca 0x19
(XEN) [2021-06-14 23:02:44] [0x135b]      0 0x1de4c12 0x00000001          0 0x54c412 0x19
(XEN) [2021-06-14 23:02:44] [0x135c]      0 0x1de4fe7 0x00000001          0 0x54c3e7 0x19
(XEN) [2021-06-14 23:02:44] [0x135d]      0 0x1de5239 0x00000001          0 0x54be39 0x19
(XEN) [2021-06-14 23:02:44] [0x1360]      0 0x1de5121 0x00000001          0 0x54c121 0x19
(XEN) [2021-06-14 23:02:44] [0x1362]      0 0x1de4e3c 0x00000001          0 0x54c23c 0x19
(XEN) [2021-06-14 23:02:44] [0x1363]      0 0x1de4c1e 0x00000001          0 0x54c41e 0x19
(XEN) [2021-06-14 23:02:44] [0x1364]      0 0x1de4b71 0x00000001          0 0x54c771 0x19
(XEN) [2021-06-14 23:02:44] [0x1365]      0 0x1de4fc5 0x00000001          0 0x54c3c5 0x19
(XEN) [2021-06-14 23:02:44] [0x1366]      0 0x1de4781 0x00000001          0 0x54cb81 0x19
(XEN) [2021-06-14 23:02:44] [0x1367]      0 0x1de5404 0x00000001          0 0x54bc04 0x19
(XEN) [2021-06-14 23:02:44] [0x136b]      0 0x1de547d 0x00000001          0 0x54bc7d 0x19
(XEN) [2021-06-14 23:02:44] [0x136d]      0 0x1de4bef 0x00000001          0 0x54c7ef 0x19
(XEN) [2021-06-14 23:02:44] [0x1371]      0 0x1de4acb 0x00000001          0 0x54c6cb 0x19
(XEN) [2021-06-14 23:02:44] [0x1372]      0 0x1de4a04 0x00000001          0 0x54c604 0x19
(XEN) [2021-06-14 23:02:44] [0x1377]      0 0x1de4e93 0x00000001          0 0x54c293 0x19
(XEN) [2021-06-14 23:02:44] [0x1378]      0 0x1de4be3 0x00000001          0 0x54c7e3 0x19
(XEN) [2021-06-14 23:02:44] [0x1379]      0 0x1de5455 0x00000001          0 0x54bc55 0x19
(XEN) [2021-06-14 23:02:44] [0x137b]      0 0x1de4fd6 0x00000001          0 0x54c3d6 0x19
(XEN) [2021-06-14 23:02:44] [0x137d]      0 0x1de4c57 0x00000001          0 0x54c457 0x19
(XEN) [2021-06-14 23:02:44] [0x137e]      0 0x1de5058 0x00000001          0 0x54c058 0x19
(XEN) [2021-06-14 23:02:44] [0x1382]      0 0x1de5226 0x00000001          0 0x54be26 0x19
(XEN) [2021-06-14 23:02:44] [0x1383]      0 0x1de5225 0x00000001          0 0x54be25 0x19
(XEN) [2021-06-14 23:02:44] [0x1384]      0 0x1de54d0 0x00000001          0 0x54bcd0 0x19
(XEN) [2021-06-14 23:02:44] [0x1385]      0 0x1de521b 0x00000001          0 0x54be1b 0x19
(XEN) [2021-06-14 23:02:44] [0x1386]      0 0x1de5054 0x00000001          0 0x54c054 0x19
(XEN) [2021-06-14 23:02:44] [0x1388]      0 0x1de5424 0x00000001          0 0x54bc24 0x19
(XEN) [2021-06-14 23:02:44] [0x138b]      0 0x1de5222 0x00000001          0 0x54be22 0x19
(XEN) [2021-06-14 23:02:44] [0x138e]      0 0x1de51be 0x00000001          0 0x54c1be 0x19
(XEN) [2021-06-14 23:02:44]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:44] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:44] grant-table for remote d6 (v1)
(XEN) [2021-06-14 23:02:44]   7 frames (64 max), 0 maptrack frames (1024 max)
(XEN) [2021-06-14 23:02:44] [0x000]      0 0xb476b0 0x00000002          0 0x0fefff 0x19
(XEN) [2021-06-14 23:02:44] [0x12f]      0 0xfbfd15 0x00000001          0 0x13fd15 0x19
(XEN) [2021-06-14 23:02:44] [0x133]      0 0xfbfd0c 0x00000001          0 0x13fd0c 0x19
(XEN) [2021-06-14 23:02:44] [0x1fe]      0 0xfbdb48 0x00000001          0 0x13db48 0x19
(XEN) [2021-06-14 23:02:44] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:44] [0x500]      0 0xfbc85c 0x00000001          0 0x13c85c 0x19
(XEN) [2021-06-14 23:02:44] [0x501]      0 0xfbc85b 0x00000001          0 0x13c85b 0x19
(XEN) [2021-06-14 23:02:44] [0x502]      0 0xfbc858 0x00000001          0 0x13c858 0x19
(XEN) [2021-06-14 23:02:44] [0x503]      0 0xfbc857 0x00000001          0 0x13c857 0x19
(XEN) [2021-06-14 23:02:44] [0x504]      0 0xfbc9d8 0x00000001          0 0x13c9d8 0x19
(XEN) [2021-06-14 23:02:44] [0x505]      0 0xfbc9d9 0x00000001          0 0x13c9d9 0x19
(XEN) [2021-06-14 23:02:44] [0x506]      0 0xfbc9dc 0x00000001          0 0x13c9dc 0x19
(XEN) [2021-06-14 23:02:44] [0x507]      0 0xfbc9dd 0x00000001          0 0x13c9dd 0x19
(XEN) [2021-06-14 23:02:44] [0x508]      0 0xfbc9de 0x00000001          0 0x13c9de 0x19
(XEN) [2021-06-14 23:02:44] [0x509]      0 0xfbca98 0x00000001          0 0x13ca98 0x19
(XEN) [2021-06-14 23:02:44] [0x50a]      0 0xfbca9a 0x00000001          0 0x13ca9a 0x19
(XEN) [2021-06-14 23:02:44] [0x50b]      0 0xfbca9c 0x00000001          0 0x13ca9c 0x19
(XEN) [2021-06-14 23:02:44] [0x50c]      0 0xfbca9e 0x00000001          0 0x13ca9e 0x19
(XEN) [2021-06-14 23:02:44] [0x50d]      0 0xfbcf48 0x00000001          0 0x13cf48 0x19
(XEN) [2021-06-14 23:02:44] [0x50e]      0 0xfbcf4a 0x00000001          0 0x13cf4a 0x19
(XEN) [2021-06-14 23:02:44] [0x50f]      0 0xfbcf4b 0x00000001          0 0x13cf4b 0x19
(XEN) [2021-06-14 23:02:44] [0x510]      0 0xfbcf4d 0x00000001          0 0x13cf4d 0x19
(XEN) [2021-06-14 23:02:44] [0x511]      0 0xfbcf4e 0x00000001          0 0x13cf4e 0x19
(XEN) [2021-06-14 23:02:44] [0x512]      0 0xfbcf69 0x00000001          0 0x13cf69 0x19
(XEN) [2021-06-14 23:02:44] [0x513]      0 0xfbcf6a 0x00000001          0 0x13cf6a 0x19
(XEN) [2021-06-14 23:02:44] [0x5b9]      0 0xfbdb42 0x00000001          0 0x13db42 0x19
(XEN) [2021-06-14 23:02:44] [0x5c7]      0 0xfbdb41 0x00000001          0 0x13db41 0x19
(XEN) [2021-06-14 23:02:44] [0x5e1]      0 0xfbfd17 0x00000001          0 0x13fd17 0x19
(XEN) [2021-06-14 23:02:44] [0x64b]      0 0xfbfe45 0x00000001          0 0x13fe45 0x19
(XEN) [2021-06-14 23:02:44] [0x658]      0 0xfbf495 0x00000001          0 0x13f495 0x19
(XEN) [2021-06-14 23:02:44] [0x659]      0 0xfbfd27 0x00000001          0 0x13fd27 0x19
(XEN) [2021-06-14 23:02:44] [0x65f]      0 0xfbfd1c 0x00000001          0 0x13fd1c 0x19
(XEN) [2021-06-14 23:02:44] [0x681]      0 0xfbf496 0x00000001          0 0x13f496 0x19
(XEN) [2021-06-14 23:02:44] [0x6a2]      0 0xfbfd19 0x00000001          0 0x13fd19 0x19
(XEN) [2021-06-14 23:02:44] [0x6a5]      0 0xfbfd1a 0x00000001          0 0x13fd1a 0x19
(XEN) [2021-06-14 23:02:44] [0x6bd]      0 0xfbfd18 0x00000001          0 0x13fd18 0x19
(XEN) [2021-06-14 23:02:44] [0x6cd]      0 0xfbdb49 0x00000001          0 0x13db49 0x19
(XEN) [2021-06-14 23:02:44] [0x721]      0 0xfbfe42 0x00000001          0 0x13fe42 0x19
(XEN) [2021-06-14 23:02:44] [0x773]      0 0xfbfd1e 0x00000001          0 0x13fd1e 0x19
(XEN) [2021-06-14 23:02:44] [0x81c]      0 0xfbf7a5 0x00000001          0 0x13f7a5 0x19
(XEN) [2021-06-14 23:02:44] [0x81f]      0 0xfbfd28 0x00000001          0 0x13fd28 0x19
(XEN) [2021-06-14 23:02:44] [0x868]      0 0xfbf492 0x00000001          0 0x13f492 0x19
(XEN) [2021-06-14 23:02:44] [0x8fc]      0 0xfbfe41 0x00000001          0 0x13fe41 0x19
(XEN) [2021-06-14 23:02:44] [0x951]      0 0xfbf3b0 0x00000001          0 0x13f3b0 0x19
(XEN) [2021-06-14 23:02:44] [0x95f]      0 0xfbfd1d 0x00000001          0 0x13fd1d 0x19
(XEN) [2021-06-14 23:02:44] [0x974]      0 0xfbfd20 0x00000001          0 0x13fd20 0x19
(XEN) [2021-06-14 23:02:44] [0x975]      0 0xfbfd22 0x00000001          0 0x13fd22 0x19
(XEN) [2021-06-14 23:02:44] [0x97e]      0 0xfbfe43 0x00000001          0 0x13fe43 0x19
(XEN) [2021-06-14 23:02:44] [0x980]      0 0xfbfd5e 0x00000001          0 0x13fd5e 0x19
(XEN) [2021-06-14 23:02:44] [0x9d8]      0 0xfbf72e 0x00000001          0 0x13f72e 0x19
(XEN) [2021-06-14 23:02:44] [0x9de]      0 0xfbf491 0x00000001          0 0x13f491 0x19
(XEN) [2021-06-14 23:02:44] [0x9ef]      0 0xfbfd5f 0x00000001          0 0x13fd5f 0x19
(XEN) [2021-06-14 23:02:44] [0xa04]      0 0xfbfe7c 0x00000001          0 0x13fe7c 0x19
(XEN) [2021-06-14 23:02:44] [0xa95]      0 0xfbf7a6 0x00000001          0 0x13f7a6 0x19
(XEN) [2021-06-14 23:02:44] [0xaa3]      0 0xfbfd21 0x00000001          0 0x13fd21 0x19
(XEN) [2021-06-14 23:02:44] [0xabf]      0 0xfbfc70 0x00000001          0 0x13fc70 0x19
(XEN) [2021-06-14 23:02:44] [0xaed]      0 0xfbfd16 0x00000001          0 0x13fd16 0x19
(XEN) [2021-06-14 23:02:44] [0xaf3]      0 0xfbf48c 0x00000001          0 0x13f48c 0x19
(XEN) [2021-06-14 23:02:44] [0xb03]      0 0xfbf48f 0x00000001          0 0x13f48f 0x19
(XEN) [2021-06-14 23:02:44] [0xb19]      0 0xfbdb40 0x00000001          0 0x13db40 0x19
(XEN) [2021-06-14 23:02:44] [0xb2a]      0 0xfbf48d 0x00000001          0 0x13f48d 0x19
(XEN) [2021-06-14 23:02:44] [0xb60]      0 0xfbfd14 0x00000001          0 0x13fd14 0x19
(XEN) [2021-06-14 23:02:44] [0xb64]      0 0xfbfd7f 0x00000001          0 0x13fd7f 0x19
(XEN) [2021-06-14 23:02:44] [0xb75]      0 0xfbfd1f 0x00000001          0 0x13fd1f 0x19
(XEN) [2021-06-14 23:02:44] [0xbc3]      0 0xfbf7aa 0x00000001          
0 0x13f7aa 0x19
(XEN) [2021-06-14 23:02:44] [0xbd3]      0 0xfbf7a8 0x00000001          
0 0x13f7a8 0x19
(XEN) [2021-06-14 23:02:44] [0xbd8]      0 0xfbf7a7 0x00000001          
0 0x13f7a7 0x19
(XEN) [2021-06-14 23:02:44] [0xbe0]      0 0xfbdb47 0x00000001          
0 0x13db47 0x19
(XEN) [2021-06-14 23:02:44] [0xbe1]      0 0xfbf490 0x00000001          
0 0x13f490 0x19
(XEN) [2021-06-14 23:02:44] [0xbec]      0 0xfbf7ff 0x00000001          
0 0x13f7ff 0x19
(XEN) [2021-06-14 23:02:44] [0xc07]      0 0xfbf800 0x00000001          
0 0x13f800 0x19
(XEN) [2021-06-14 23:02:44]       -------- active -------- -------- shared --------
(XEN) [2021-06-14 23:02:44] [ref] localdom mfn      pin localdom gmfn     flags
(XEN) [2021-06-14 23:02:44] grant-table for remote d7 (v1)
(XEN) [2021-06-14 23:02:44]   9 frames (512 max), 0 maptrack frames (2048 max)
(XEN) [2021-06-14 23:02:44] [0x000]      0 0x15ba8cd 0x00000002          
0 0x0fefff 0x19
(XEN) [2021-06-14 23:02:44] [0x008]      0 0x16012cf 0x00000001          
0 0x1012cf 0x19
(XEN) [2021-06-14 23:02:44] [0x009]      0 0x16012c2 0x00000001          
0 0x1012c2 0x19
(XEN) [2021-06-14 23:02:44] [0x00a]      0 0x16012c1 0x00000001          
0 0x1012c1 0x19
(XEN) [2021-06-14 23:02:44] [0x00b]      0 0x16012bf 0x00000001          
0 0x1012bf 0x19
(XEN) [2021-06-14 23:02:44] [0x00c]      0 0x16012bd 0x00000001          
0 0x1012bd 0x19
(XEN) [2021-06-14 23:02:44] [0x00d]      0 0x16012bb 0x00000001          
0 0x1012bb 0x19
(XEN) [2021-06-14 23:02:44] [0x00e]      0 0x16012ae 0x00000001          
0 0x1012ae 0x19
(XEN) [2021-06-14 23:02:44] [0x00f]      0 0x16012ac 0x00000001          
0 0x1012ac 0x19
(XEN) [2021-06-14 23:02:44] [0x010]      0 0x16012a8 0x00000001          
0 0x1012a8 0x19
(XEN) [2021-06-14 23:02:44] [0x011]      0 0x16012a4 0x00000001          
0 0x1012a4 0x19
(XEN) [2021-06-14 23:02:44] [0x012]      0 0x16012a3 0x00000001          
0 0x1012a3 0x19
(XEN) [2021-06-14 23:02:44] [0x013]      0 0x16012a1 0x00000001          
0 0x1012a1 0x19
(XEN) [2021-06-14 23:02:44] [0x014]      0 0x16070e8 0x00000001          
0 0x1070e8 0x19
(XEN) [2021-06-14 23:02:44] [0x016]      0 0x1607062 0x00000001          
0 0x107062 0x19
(XEN) [2021-06-14 23:02:44] [0x14a]      0 0x160e30c 0x00000001          
0 0x10e30c 0x19
(XEN) [2021-06-14 23:02:44] [0x14b]      0 0x160e30b 0x00000001          
0 0x10e30b 0x19
(XEN) [2021-06-14 23:02:44] [0x14c]      0 0x160e30a 0x00000001          
0 0x10e30a 0x19
(XEN) [2021-06-14 23:02:44] [0x14e]      0 0x160e308 0x00000001          
0 0x10e308 0x19
(XEN) [2021-06-14 23:02:45] [0x150]      0 0x160e306 0x00000001          
0 0x10e306 0x19
(XEN) [2021-06-14 23:02:45] [0x152]      0 0x160e304 0x00000001          
0 0x10e304 0x19
(XEN) [2021-06-14 23:02:45] [0x153]      0 0x160e303 0x00000001          
0 0x10e303 0x19
(XEN) [2021-06-14 23:02:45] [0x157]      0 0x160e2ff 0x00000001          
0 0x10e2ff 0x19
(XEN) [2021-06-14 23:02:45] [0x159]      0 0x160e2fd 0x00000001          
0 0x10e2fd 0x19
(XEN) [2021-06-14 23:02:45] [0x15f]      0 0x160e2f7 0x00000001          
0 0x10e2f7 0x19
(XEN) [2021-06-14 23:02:45] [0x160]      0 0x160e2f6 0x00000001          
0 0x10e2f6 0x19
(XEN) [2021-06-14 23:02:45] [0x161]      0 0x160e2f5 0x00000001          
0 0x10e2f5 0x19
(XEN) [2021-06-14 23:02:45] [0x162]      0 0x160e2f4 0x00000001          
0 0x10e2f4 0x19
(XEN) [2021-06-14 23:02:45] [0x163]      0 0x160e2f3 0x00000001          
0 0x10e2f3 0x19
(XEN) [2021-06-14 23:02:45] [0x165]      0 0x160e2f0 0x00000001          
0 0x10e2f0 0x19
(XEN) [2021-06-14 23:02:45] [0x166]      0 0x160e2ef 0x00000001          
0 0x10e2ef 0x19
(XEN) [2021-06-14 23:02:45] [0x168]      0 0x160e2ed 0x00000001          
0 0x10e2ed 0x19
(XEN) [2021-06-14 23:02:45] [0x169]      0 0x160e2ec 0x00000001          
0 0x10e2ec 0x19
(XEN) [2021-06-14 23:02:45] [0x16a]      0 0x160e2eb 0x00000001          
0 0x10e2eb 0x19
(XEN) [2021-06-14 23:02:45] [0x16b]      0 0x160e2ea 0x00000001          
0 0x10e2ea 0x19
(XEN) [2021-06-14 23:02:45] [0x16c]      0 0x160e2e9 0x00000001          
0 0x10e2e9 0x19
(XEN) [2021-06-14 23:02:45] [0x16d]      0 0x160e2e8 0x00000001          
0 0x10e2e8 0x19
(XEN) [2021-06-14 23:02:45] [0x16e]      0 0x160e2e7 0x00000001          
0 0x10e2e7 0x19
(XEN) [2021-06-14 23:02:45] [0x1cb]      0 0x1605b36 0x00000001          
0 0x105b36 0x19
(XEN) [2021-06-14 23:02:45] [0x1cd]      0 0x1605b34 0x00000001          
0 0x105b34 0x19
(XEN) [2021-06-14 23:02:45] [0x1cf]      0 0x1605b32 0x00000001          
0 0x105b32 0x19
(XEN) [2021-06-14 23:02:45] [0x1d0]      0 0x1605b31 0x00000001          
0 0x105b31 0x19
(XEN) [2021-06-14 23:02:45] [0x1d1]      0 0x1605b30 0x00000001          
0 0x105b30 0x19
(XEN) [2021-06-14 23:02:45] [0x1d2]      0 0x1605b2f 0x00000001          
0 0x105b2f 0x19
(XEN) [2021-06-14 23:02:45] [0x1d3]      0 0x1605b2e 0x00000001          
0 0x105b2e 0x19
(XEN) [2021-06-14 23:02:45] [0x1d4]      0 0x1605b2d 0x00000001          
0 0x105b2d 0x19
(XEN) [2021-06-14 23:02:45] [0x1d5]      0 0x1605b2c 0x00000001          
0 0x105b2c 0x19
(XEN) [2021-06-14 23:02:45] [0x1d6]      0 0x1605b2b 0x00000001          
0 0x105b2b 0x19
(XEN) [2021-06-14 23:02:45] [0x1d7]      0 0x1605b2a 0x00000001          
0 0x105b2a 0x19
(XEN) [2021-06-14 23:02:45] [0x1d8]      0 0x1605b29 0x00000001          
0 0x105b29 0x19
(XEN) [2021-06-14 23:02:45] [0x1da]      0 0x1605b27 0x00000001          
0 0x105b27 0x19
(XEN) [2021-06-14 23:02:45] [0x1db]      0 0x1605b26 0x00000001          
0 0x105b26 0x19
(XEN) [2021-06-14 23:02:45] [0x1dc]      0 0x1605b25 0x00000001          
0 0x105b25 0x19
(XEN) [2021-06-14 23:02:45] [0x1de]      0 0x1605b23 0x00000001          
0 0x105b23 0x19
(XEN) [2021-06-14 23:02:45] [0x1df]      0 0x1605b22 0x00000001          
0 0x105b22 0x19
(XEN) [2021-06-14 23:02:45] [0x1e0]      0 0x1605b21 0x00000001          
0 0x105b21 0x19
(XEN) [2021-06-14 23:02:45] [0x1e1]      0 0x1605b1f 0x00000001          
0 0x105b1f 0x19
(XEN) [2021-06-14 23:02:45] [0x1e2]      0 0x1605b1e 0x00000001          
0 0x105b1e 0x19
(XEN) [2021-06-14 23:02:45] [0x1e4]      0 0x1605b1c 0x00000001          
0 0x105b1c 0x19
(XEN) [2021-06-14 23:02:45] [0x1e5]      0 0x1605b1b 0x00000001          
0 0x105b1b 0x19
(XEN) [2021-06-14 23:02:45] [0x1e7]      0 0x1605b19 0x00000001          
0 0x105b19 0x19
(XEN) [2021-06-14 23:02:45] [0x1e8]      0 0x1605b18 0x00000001          
0 0x105b18 0x19
(XEN) [2021-06-14 23:02:45] [0x1e9]      0 0x1605b17 0x00000001          
0 0x105b17 0x19
(XEN) [2021-06-14 23:02:45] [0x1ea]      0 0x1605b16 0x00000001          
0 0x105b16 0x19
(XEN) [2021-06-14 23:02:45] [0x1eb]      0 0x1605b15 0x00000001          
0 0x105b15 0x19
(XEN) [2021-06-14 23:02:45] [0x1ec]      0 0x1605b14 0x00000001          
0 0x105b14 0x19
(XEN) [2021-06-14 23:02:45] [0x1ed]      0 0x1605b13 0x00000001          
0 0x105b13 0x19
(XEN) [2021-06-14 23:02:45] [0x1ee]      0 0x1605b12 0x00000001          
0 0x105b12 0x19
(XEN) [2021-06-14 23:02:45] [0x1ef]      0 0x1605b11 0x00000001          
0 0x105b11 0x19
(XEN) [2021-06-14 23:02:45] [0x1f0]      0 0x1605b10 0x00000001          
0 0x105b10 0x19
(XEN) [2021-06-14 23:02:45] [0x1f1]      0 0x1605b0f 0x00000001          
0 0x105b0f 0x19
(XEN) [2021-06-14 23:02:45] [0x1f2]      0 0x1605b0e 0x00000001          
0 0x105b0e 0x19
(XEN) [2021-06-14 23:02:45] [0x1f3]      0 0x1605b0d 0x00000001          
0 0x105b0d 0x19
(XEN) [2021-06-14 23:02:45] [0x1f4]      0 0x1605b0c 0x00000001          
0 0x105b0c 0x19
(XEN) [2021-06-14 23:02:45] [0x1f6]      0 0x1605b0a 0x00000001          
0 0x105b0a 0x19
(XEN) [2021-06-14 23:02:45] [0x1f9]      0 0x1605b07 0x00000001          
0 0x105b07 0x19
(XEN) [2021-06-14 23:02:45] [0x1fa]      0 0x1605b06 0x00000001          
0 0x105b06 0x19
(XEN) [2021-06-14 23:02:45] [0x1fc]      0 0x1605b04 0x00000001          
0 0x105b04 0x19
(XEN) [2021-06-14 23:02:45] [0x1fd]      0 0x1605b03 0x00000001          
0 0x105b03 0x19
(XEN) [2021-06-14 23:02:45] [0x1fe]      0 0x1605b02 0x00000001          
0 0x105b02 0x19
(XEN) [2021-06-14 23:02:45] [0x900]      0 0x161002e 0x00000001          
0 0x11002e 0x19
(XEN) [2021-06-14 23:02:45] [0x901]      0 0x161002f 0x00000001          
0 0x11002f 0x19
(XEN) [2021-06-14 23:02:45] [0x902]      0 0x1610031 0x00000001          
0 0x110031 0x19
(XEN) [2021-06-14 23:02:45] [0x903]      0 0x1610032 0x00000001          
0 0x110032 0x19
(XEN) [2021-06-14 23:02:45] [0x904]      0 0x1610035 0x00000001          
0 0x110035 0x19
(XEN) [2021-06-14 23:02:45] [0x905]      0 0x1610036 0x00000001          
0 0x110036 0x19
(XEN) [2021-06-14 23:02:45] [0x906]      0 0x1610039 0x00000001          
0 0x110039 0x19
(XEN) [2021-06-14 23:02:45] [0x907]      0 0x161003a 0x00000001          
0 0x11003a 0x19
(XEN) [2021-06-14 23:02:45] [0x909]      0 0x16070e7 0x00000001          
0 0x1070e7 0x19
(XEN) [2021-06-14 23:02:45] [0x90a]      0 0x16070e6 0x00000001          
0 0x1070e6 0x19
(XEN) [2021-06-14 23:02:45] [0x90b]      0 0x16070e5 0x00000001          
0 0x1070e5 0x19
(XEN) [2021-06-14 23:02:45] [0x90c]      0 0x16070e4 0x00000001          
0 0x1070e4 0x19
(XEN) [2021-06-14 23:02:45] [0x90d]      0 0x16070e3 0x00000001          
0 0x1070e3 0x19
(XEN) [2021-06-14 23:02:45] [0x90e]      0 0x16070e2 0x00000001          
0 0x1070e2 0x19
(XEN) [2021-06-14 23:02:45] [0x90f]      0 0x16070e1 0x00000001          
0 0x1070e1 0x19
(XEN) [2021-06-14 23:02:45] [0x910]      0 0x16070e0 0x00000001          
0 0x1070e0 0x19
(XEN) [2021-06-14 23:02:45] [0x911]      0 0x16070df 0x00000001          
0 0x1070df 0x19
(XEN) [2021-06-14 23:02:45] [0x912]      0 0x16070de 0x00000001          
0 0x1070de 0x19
(XEN) [2021-06-14 23:02:45] [0x913]      0 0x16070dd 0x00000001          
0 0x1070dd 0x19
(XEN) [2021-06-14 23:02:45] [0x914]      0 0x16070dc 0x00000001          
0 0x1070dc 0x19
(XEN) [2021-06-14 23:02:45] [0x915]      0 0x16070db 0x00000001          
0 0x1070db 0x19
(XEN) [2021-06-14 23:02:45] [0x916]      0 0x16070da 0x00000001          
0 0x1070da 0x19
(XEN) [2021-06-14 23:02:45] [0x917]      0 0x16070d9 0x00000001          
0 0x1070d9 0x19
(XEN) [2021-06-14 23:02:45] [0x918]      0 0x16070d8 0x00000001          
0 0x1070d8 0x19
(XEN) [2021-06-14 23:02:45] [0x919]      0 0x16070d7 0x00000001          
0 0x1070d7 0x19
(XEN) [2021-06-14 23:02:45] [0x91a]      0 0x16070d6 0x00000001          
0 0x1070d6 0x19
(XEN) [2021-06-14 23:02:45] [0x91b]      0 0x16070d5 0x00000001          
0 0x1070d5 0x19
(XEN) [2021-06-14 23:02:45] [0x91c]      0 0x16070d4 0x00000001          
0 0x1070d4 0x19
(XEN) [2021-06-14 23:02:45] [0x91d]      0 0x16070d3 0x00000001          
0 0x1070d3 0x19
(XEN) [2021-06-14 23:02:45] [0x91e]      0 0x16070d2 0x00000001          
0 0x1070d2 0x19
(XEN) [2021-06-14 23:02:45] [0x91f]      0 0x16070d1 0x00000001          
0 0x1070d1 0x19
(XEN) [2021-06-14 23:02:45] [0x920]      0 0x16070d0 0x00000001          
0 0x1070d0 0x19
(XEN) [2021-06-14 23:02:45] [0x921]      0 0x16070cf 0x00000001          
0 0x1070cf 0x19
(XEN) [2021-06-14 23:02:45] [0x922]      0 0x16070ce 0x00000001          
0 0x1070ce 0x19
(XEN) [2021-06-14 23:02:45] [0x923]      0 0x16070cd 0x00000001          
0 0x1070cd 0x19
(XEN) [2021-06-14 23:02:45] [0x924]      0 0x16070cc 0x00000001          
0 0x1070cc 0x19
(XEN) [2021-06-14 23:02:45] [0x925]      0 0x16070cb 0x00000001          
0 0x1070cb 0x19
(XEN) [2021-06-14 23:02:45] [0x926]      0 0x16070ca 0x00000001          
0 0x1070ca 0x19
(XEN) [2021-06-14 23:02:45] [0x928]      0 0x16070c8 0x00000001          
0 0x1070c8 0x19
(XEN) [2021-06-14 23:02:45] [0x929]      0 0x16070c7 0x00000001          
0 0x1070c7 0x19
(XEN) [2021-06-14 23:02:45] [0x92a]      0 0x16070c6 0x00000001          
0 0x1070c6 0x19
(XEN) [2021-06-14 23:02:45] [0x92b]      0 0x16070c5 0x00000001          
0 0x1070c5 0x19
(XEN) [2021-06-14 23:02:45] [0x92c]      0 0x16070c4 0x00000001          
0 0x1070c4 0x19
(XEN) [2021-06-14 23:02:45] [0x92d]      0 0x16070c3 0x00000001          
0 0x1070c3 0x19
(XEN) [2021-06-14 23:02:45] [0x92e]      0 0x16070c2 0x00000001          
0 0x1070c2 0x19
(XEN) [2021-06-14 23:02:45] [0x92f]      0 0x16070c1 0x00000001          
0 0x1070c1 0x19
(XEN) [2021-06-14 23:02:45] [0x930]      0 0x16070c0 0x00000001          
0 0x1070c0 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x931]      0 0x16070bf 0x00000001          0 0x1070bf 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x932]      0 0x16070be 0x00000001          0 0x1070be 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x933]      0 0x16070bd 0x00000001          0 0x1070bd 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x934]      0 0x16070bc 0x00000001          0 0x1070bc 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x935]      0 0x16070bb 0x00000001          0 0x1070bb 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x937]      0 0x16070b9 0x00000001          0 0x1070b9 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x938]      0 0x16070b8 0x00000001          0 0x1070b8 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x939]      0 0x16070b7 0x00000001          0 0x1070b7 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x93a]      0 0x16070b6 0x00000001          0 0x1070b6 0x19
(XEN) [2021-06-14 23:02:45] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:45] [0x93b]      0 0x16070b5 0x00000001          0 0x1070b5 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x93c]      0 0x16070b4 0x00000001          0 0x1070b4 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x93d]      0 0x16070b3 0x00000001          0 0x1070b3 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x93e]      0 0x16070b2 0x00000001          0 0x1070b2 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x93f]      0 0x16070b1 0x00000001          0 0x1070b1 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x9a7]      0 0x16070b0 0x00000001          0 0x1070b0 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0x9a8]      0 0x16070af 0x00000001          0 0x1070af 0x19
(XEN) [2021-06-14 23:02:46] [0x9a9]      0 0x16070ae 0x00000001          0 0x1070ae 0x19
(XEN) [2021-06-14 23:02:46] [0x9aa]      0 0x16070ad 0x00000001          0 0x1070ad 0x19
(XEN) [2021-06-14 23:02:46] [0x9ab]      0 0x16070ac 0x00000001          0 0x1070ac 0x19
(XEN) [2021-06-14 23:02:46] [0x9ac]      0 0x16070ab 0x00000001          0 0x1070ab 0x19
(XEN) [2021-06-14 23:02:46] [0x9ad]      0 0x16070a9 0x00000001          0 0x1070a9 0x19
(XEN) [2021-06-14 23:02:46] [0x9ae]      0 0x16070a8 0x00000001          0 0x1070a8 0x19
(XEN) [2021-06-14 23:02:46] [0x9d5]      0 0x1609365 0x00000001          0 0x109365 0x19
(XEN) [2021-06-14 23:02:46] [0x9ee]      0 0x160935c 0x00000001          0 0x10935c 0x19
(XEN) [2021-06-14 23:02:46] [0xa41]      0 0x160e249 0x00000001          0 0x10e249 0x19
(XEN) [2021-06-14 23:02:46] [0xa42]      0 0x160e248 0x00000001          0 0x10e248 0x19
(XEN) [2021-06-14 23:02:46] [0xa43]      0 0x160e247 0x00000001          0 0x10e247 0x19
(XEN) [2021-06-14 23:02:46] [0xa44]      0 0x160e246 0x00000001          0 0x10e246 0x19
(XEN) [2021-06-14 23:02:46] [0xa45]      0 0x160e245 0x00000001          0 0x10e245 0x19
(XEN) [2021-06-14 23:02:46] [0xa46]      0 0x160e244 0x00000001          0 0x10e244 0x19
(XEN) [2021-06-14 23:02:46] [0xa48]      0 0x160e242 0x00000001          0 0x10e242 0x19
(XEN) [2021-06-14 23:02:46] [0xa49]      0 0x160e241 0x00000001          0 0x10e241 0x19
(XEN) [2021-06-14 23:02:46] [0xa4a]      0 0x160e240 0x00000001          0 0x10e240 0x19
(XEN) [2021-06-14 23:02:46] [0xa4c]      0 0x160e23e 0x00000001          0 0x10e23e 0x19
(XEN) [2021-06-14 23:02:46] [0xa4d]      0 0x160e23d 0x00000001          0 0x10e23d 0x19
(XEN) [2021-06-14 23:02:46] [0xa4f]      0 0x160e23b 0x00000001          0 0x10e23b 0x19
(XEN) [2021-06-14 23:02:46] [0xa50]      0 0x160e23a 0x00000001          0 0x10e23a 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa51]      0 0x160e239 0x00000001          0 0x10e239 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa52]      0 0x160e238 0x00000001          0 0x10e238 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa53]      0 0x160e237 0x00000001          0 0x10e237 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa55]      0 0x160e235 0x00000001          0 0x10e235 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa57]      0 0x160e233 0x00000001          0 0x10e233 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa58]      0 0x160e232 0x00000001          0 0x10e232 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa5a]      0 0x160e230 0x00000001          0 0x10e230 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa5b]      0 0x160e22e 0x00000001          0 0x10e22e 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa5d]      0 0x160e22c 0x00000001          0 0x10e22c 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa60]      0 0x160e229 0x00000001          0 0x10e229 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa61]      0 0x160e228 0x00000001          0 0x10e228 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa64]      0 0x160e225 0x00000001          0 0x10e225 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa65]      0 0x160e224 0x00000001          0 0x10e224 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa66]      0 0x160e223 0x00000001          0 0x10e223 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa68]      0 0x160e221 0x00000001          0 0x10e221 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa69]      0 0x160e220 0x00000001          0 0x10e220 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa6b]      0 0x160e21e 0x00000001          0 0x10e21e 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa6c]      0 0x160e21d 0x00000001          0 0x10e21d 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa6d]      0 0x160e21c 0x00000001          0 0x10e21c 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa6e]      0 0x160e21b 0x00000001          0 0x10e21b 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa6f]      0 0x160e21a 0x00000001          0 0x10e21a 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa70]      0 0x160e219 0x00000001          0 0x10e219 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa73]      0 0x160e216 0x00000001          0 0x10e216 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa74]      0 0x160e215 0x00000001          0 0x10e215 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa75]      0 0x160e214 0x00000001          0 0x10e214 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa77]      0 0x160e212 0x00000001          0 0x10e212 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa7c]      0 0x160e20d 0x00000001          0 0x10e20d 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa7e]      0 0x160e20b 0x00000001          0 0x10e20b 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa7f]      0 0x160e20a 0x00000001          0 0x10e20a 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xa81]      0 0x160e208 0x00000001          0 0x10e208 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb45]      0 0x16093b9 0x00000001          0 0x1093b9 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb46]      0 0x16093b8 0x00000001          0 0x1093b8 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb47]      0 0x16093b7 0x00000001          0 0x1093b7 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb48]      0 0x16093b6 0x00000001          0 0x1093b6 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb49]      0 0x1605b7f 0x00000001          0 0x105b7f 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb4c]      0 0x1605b7c 0x00000001          0 0x105b7c 0x19
(XEN) [2021-06-14 23:02:46] [0xb4d]      0 0x16093b5 0x00000001          0 0x1093b5 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb4e]      0 0x16093b4 0x00000001          0 0x1093b4 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb4f]      0 0x16093b3 0x00000001          0 0x1093b3 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb50]      0 0x16093b2 0x00000001          0 0x1093b2 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb51]      0 0x16093b1 0x00000001          0 0x1093b1 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb52]      0 0x16093b0 0x00000001          0 0x1093b0 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb53]      0 0x16093af 0x00000001          0 0x1093af 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb54]      0 0x16093ae 0x00000001          0 0x1093ae 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb55]      0 0x16093ad 0x00000001          0 0x1093ad 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb56]      0 0x16093ac 0x00000001          0 0x1093ac 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb57]      0 0x16093ab 0x00000001          0 0x1093ab 0x19
(XEN) [2021-06-14 23:02:46] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:46] [0xb58]      0 0x16093aa 0x00000001          0 0x1093aa 0x19
(XEN) [2021-06-14 23:02:47] grant_table.c:803:d0v5 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:47] [0xb59]      0 0x16093a9 0x00000001          0 0x1093a9 0x19
(XEN) [2021-06-14 23:02:47] [0xb5a]      0 0x16093a8 0x00000001          
0 0x1093a8 0x19
(XEN) [2021-06-14 23:02:47] [0xb5b]      0 0x16093a7 0x00000001          
0 0x1093a7 0x19
(XEN) [2021-06-14 23:02:47] [0xb5c]      0 0x16093a6 0x00000001          
0 0x1093a6 0x19
(XEN) [2021-06-14 23:02:47] [0xb5d]      0 0x16093a5 0x00000001          
0 0x1093a5 0x19
(XEN) [2021-06-14 23:02:47] [0xb5e]      0 0x16093a4 0x00000001          
0 0x1093a4 0x19
(XEN) [2021-06-14 23:02:47] [0xb5f]      0 0x16093a3 0x00000001          
0 0x1093a3 0x19
(XEN) [2021-06-14 23:02:47] [0xb60]      0 0x16093a2 0x00000001          
0 0x1093a2 0x19
(XEN) [2021-06-14 23:02:47] [0xb61]      0 0x16093a1 0x00000001          
0 0x1093a1 0x19
(XEN) [2021-06-14 23:02:47] [0xb62]      0 0x16093a0 0x00000001          
0 0x1093a0 0x19
(XEN) [2021-06-14 23:02:47] [0xb63]      0 0x160939f 0x00000001          
0 0x10939f 0x19
(XEN) [2021-06-14 23:02:47] [0xb64]      0 0x160939e 0x00000001          
0 0x10939e 0x19
(XEN) [2021-06-14 23:02:47] [0xb65]      0 0x160939d 0x00000001          
0 0x10939d 0x19
(XEN) [2021-06-14 23:02:47] [0xb66]      0 0x160939c 0x00000001          
0 0x10939c 0x19
(XEN) [2021-06-14 23:02:47] [0xb67]      0 0x160939b 0x00000001          
0 0x10939b 0x19
(XEN) [2021-06-14 23:02:47] [0xb68]      0 0x160939a 0x00000001          
0 0x10939a 0x19
(XEN) [2021-06-14 23:02:47] [0xb69]      0 0x1609399 0x00000001          
0 0x109399 0x19
(XEN) [2021-06-14 23:02:47] [0xb6a]      0 0x1609398 0x00000001          
0 0x109398 0x19
(XEN) [2021-06-14 23:02:47] [0xb6b]      0 0x1609397 0x00000001          
0 0x109397 0x19
(XEN) [2021-06-14 23:02:47] [0xb6c]      0 0x1609396 0x00000001          
0 0x109396 0x19
(XEN) [2021-06-14 23:02:47] [0xb6d]      0 0x1609395 0x00000001          
0 0x109395 0x19
(XEN) [2021-06-14 23:02:47] [0xb6e]      0 0x1609394 0x00000001          
0 0x109394 0x19
(XEN) [2021-06-14 23:02:47] [0xb6f]      0 0x1609393 0x00000001          
0 0x109393 0x19
(XEN) [2021-06-14 23:02:47] [0xb70]      0 0x1609392 0x00000001          
0 0x109392 0x19
(XEN) [2021-06-14 23:02:47] [0xb71]      0 0x1609391 0x00000001          
0 0x109391 0x19
(XEN) [2021-06-14 23:02:47] [0xb72]      0 0x1609390 0x00000001          
0 0x109390 0x19
(XEN) [2021-06-14 23:02:47] [0xb73]      0 0x160938f 0x00000001          
0 0x10938f 0x19
(XEN) [2021-06-14 23:02:47] [0xb74]      0 0x160938e 0x00000001          
0 0x10938e 0x19
(XEN) [2021-06-14 23:02:47] [0xb75]      0 0x160938d 0x00000001          
0 0x10938d 0x19
(XEN) [2021-06-14 23:02:47] [0xb76]      0 0x160938c 0x00000001          
0 0x10938c 0x19
(XEN) [2021-06-14 23:02:47] [0xb77]      0 0x160938b 0x00000001          
0 0x10938b 0x19
(XEN) [2021-06-14 23:02:47] [0xb78]      0 0x160938a 0x00000001          
0 0x10938a 0x19
(XEN) [2021-06-14 23:02:47] [0xb79]      0 0x1609389 0x00000001          
0 0x109389 0x19
(XEN) [2021-06-14 23:02:47] [0xb7a]      0 0x1609388 0x00000001          
0 0x109388 0x19
(XEN) [2021-06-14 23:02:47] [0xb7b]      0 0x1609387 0x00000001          
0 0x109387 0x19
(XEN) [2021-06-14 23:02:47] [0xb7c]      0 0x1609386 0x00000001          
0 0x109386 0x19
(XEN) [2021-06-14 23:02:47] [0xb7d]      0 0x1609385 0x00000001          
0 0x109385 0x19
(XEN) [2021-06-14 23:02:47] [0xb7e]      0 0x1609384 0x00000001          
0 0x109384 0x19
(XEN) [2021-06-14 23:02:47] [0xb7f]      0 0x1609383 0x00000001          
0 0x109383 0x19
(XEN) [2021-06-14 23:02:47] [0xb80]      0 0x1609382 0x00000001          
0 0x109382 0x19
(XEN) [2021-06-14 23:02:47] [0xb81]      0 0x1609381 0x00000001          
0 0x109381 0x19
(XEN) [2021-06-14 23:02:47] [0xb82]      0 0x1609380 0x00000001          
0 0x109380 0x19
(XEN) [2021-06-14 23:02:47] [0xb83]      0 0x160937f 0x00000001          
0 0x10937f 0x19
(XEN) [2021-06-14 23:02:47] [0xb84]      0 0x160937e 0x00000001          
0 0x10937e 0x19
(XEN) [2021-06-14 23:02:47] [0xb85]      0 0x160937d 0x00000001          
0 0x10937d 0x19
(XEN) [2021-06-14 23:02:47] [0xb86]      0 0x160937c 0x00000001          
0 0x10937c 0x19
(XEN) [2021-06-14 23:02:47] [0xb87]      0 0x160937a 0x00000001          
0 0x10937a 0x19
(XEN) [2021-06-14 23:02:47] [0xb88]      0 0x1609379 0x00000001          
0 0x109379 0x19
(XEN) [2021-06-14 23:02:47] [0xb89]      0 0x1609378 0x00000001          
0 0x109378 0x19
(XEN) [2021-06-14 23:02:47] [0xb8a]      0 0x1609377 0x00000001          
0 0x109377 0x19
(XEN) [2021-06-14 23:02:47] [0xb8b]      0 0x16070a7 0x00000001          
0 0x1070a7 0x19
(XEN) [2021-06-14 23:02:47] [0xb8c]      0 0x16070a6 0x00000001          
0 0x1070a6 0x19
(XEN) [2021-06-14 23:02:47] [0xb8d]      0 0x16070a5 0x00000001          
0 0x1070a5 0x19
(XEN) [2021-06-14 23:02:47] [0xb8f]      0 0x16070a3 0x00000001          
0 0x1070a3 0x19
(XEN) [2021-06-14 23:02:47] [0xb90]      0 0x16070a2 0x00000001          
0 0x1070a2 0x19
(XEN) [2021-06-14 23:02:47] [0xb91]      0 0x16070a1 0x00000001          
0 0x1070a1 0x19
(XEN) [2021-06-14 23:02:47] [0xb92]      0 0x16070a0 0x00000001          
0 0x1070a0 0x19
(XEN) [2021-06-14 23:02:47] [0xb93]      0 0x160709f 0x00000001          
0 0x10709f 0x19
(XEN) [2021-06-14 23:02:47] [0xb94]      0 0x160709e 0x00000001          
0 0x10709e 0x19
(XEN) [2021-06-14 23:02:47] [0xb95]      0 0x160709d 0x00000001          
0 0x10709d 0x19
(XEN) [2021-06-14 23:02:47] [0xb96]      0 0x160709c 0x00000001          
0 0x10709c 0x19
(XEN) [2021-06-14 23:02:47] [0xb97]      0 0x160709b 0x00000001          
0 0x10709b 0x19
(XEN) [2021-06-14 23:02:47] [0xb98]      0 0x160709a 0x00000001          
0 0x10709a 0x19
(XEN) [2021-06-14 23:02:47] [0xb99]      0 0x1607099 0x00000001          
0 0x107099 0x19
(XEN) [2021-06-14 23:02:47] [0xb9a]      0 0x1607098 0x00000001          
0 0x107098 0x19
(XEN) [2021-06-14 23:02:47] [0xb9b]      0 0x1607097 0x00000001          
0 0x107097 0x19
(XEN) [2021-06-14 23:02:47] [0xb9c]      0 0x1607096 0x00000001          
0 0x107096 0x19
(XEN) [2021-06-14 23:02:47] [0xb9d]      0 0x1607095 0x00000001          
0 0x107095 0x19
(XEN) [2021-06-14 23:02:47] [0xb9e]      0 0x1607094 0x00000001          
0 0x107094 0x19
(XEN) [2021-06-14 23:02:47] [0xb9f]      0 0x1607093 0x00000001          
0 0x107093 0x19
(XEN) [2021-06-14 23:02:47] [0xba0]      0 0x1607092 0x00000001          
0 0x107092 0x19
(XEN) [2021-06-14 23:02:47] [0xba2]      0 0x1607090 0x00000001          
0 0x107090 0x19
(XEN) [2021-06-14 23:02:47] [0xba3]      0 0x160708f 0x00000001          
0 0x10708f 0x19
(XEN) [2021-06-14 23:02:47] [0xba4]      0 0x160708e 0x00000001          
0 0x10708e 0x19
(XEN) [2021-06-14 23:02:47] [0xba5]      0 0x160708d 0x00000001          
0 0x10708d 0x19
(XEN) [2021-06-14 23:02:47] [0xba6]      0 0x160708c 0x00000001          
0 0x10708c 0x19
(XEN) [2021-06-14 23:02:47] [0xba7]      0 0x160708b 0x00000001          
0 0x10708b 0x19
(XEN) [2021-06-14 23:02:47] [0xba8]      0 0x160708a 0x00000001          
0 0x10708a 0x19
(XEN) [2021-06-14 23:02:47] [0xbaa]      0 0x1607088 0x00000001          
0 0x107088 0x19
(XEN) [2021-06-14 23:02:47] [0xbab]      0 0x1607087 0x00000001          
0 0x107087 0x19
(XEN) [2021-06-14 23:02:47] [0xbac]      0 0x1607086 0x00000001          
0 0x107086 0x19
(XEN) [2021-06-14 23:02:47] [0xbad]      0 0x1607085 0x00000001          
0 0x107085 0x19
(XEN) [2021-06-14 23:02:47] [0xbae]      0 0x1607084 0x00000001          
0 0x107084 0x19
(XEN) [2021-06-14 23:02:47] [0xbaf]      0 0x1607083 0x00000001          
0 0x107083 0x19
(XEN) [2021-06-14 23:02:47] [0xbb0]      0 0x1607082 0x00000001          
0 0x107082 0x19
(XEN) [2021-06-14 23:02:47] [0xbb1]      0 0x1607081 0x00000001          
0 0x107081 0x19
(XEN) [2021-06-14 23:02:47] [0xbb2]      0 0x1607080 0x00000001          
0 0x107080 0x19
(XEN) [2021-06-14 23:02:47] [0xbb3]      0 0x160707f 0x00000001          
0 0x10707f 0x19
(XEN) [2021-06-14 23:02:47] [0xbb4]      0 0x160707e 0x00000001          
0 0x10707e 0x19
(XEN) [2021-06-14 23:02:47] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:47] [0xbb5]      0 0x160707d 0x00000001          
0 0x10707d 0x19
(XEN) [2021-06-14 23:02:47] [0xbb6]      0 0x160707c 0x00000001          
0 0x10707c 0x19
(XEN) [2021-06-14 23:02:47] [0xbb7]      0 0x160707b 0x00000001          
0 0x10707b 0x19
(XEN) [2021-06-14 23:02:47] [0xbb8]      0 0x160707a 0x00000001          
0 0x10707a 0x19
(XEN) [2021-06-14 23:02:47] [0xbb9]      0 0x1607079 0x00000001          
0 0x107079 0x19
(XEN) [2021-06-14 23:02:47] [0xbba]      0 0x1607078 0x00000001          
0 0x107078 0x19
(XEN) [2021-06-14 23:02:47] [0xbbb]      0 0x1607077 0x00000001          
0 0x107077 0x19
(XEN) [2021-06-14 23:02:47] [0xbbc]      0 0x1607076 0x00000001          
0 0x107076 0x19
(XEN) [2021-06-14 23:02:47] [0xbbd]      0 0x1607075 0x00000001          
0 0x107075 0x19
(XEN) [2021-06-14 23:02:47] [0xbbe]      0 0x1605b7b 0x00000001          
0 0x105b7b 0x19
(XEN) [2021-06-14 23:02:47] [0xbbf]      0 0x1605b7a 0x00000001          
0 0x105b7a 0x19
(XEN) [2021-06-14 23:02:47] [0xbc0]      0 0x1605b79 0x00000001          
0 0x105b79 0x19
(XEN) [2021-06-14 23:02:47] [0xbc1]      0 0x1605b78 0x00000001          
0 0x105b78 0x19
(XEN) [2021-06-14 23:02:47] [0xbc3]      0 0x1605b76 0x00000001          
0 0x105b76 0x19
(XEN) [2021-06-14 23:02:47] [0xbc4]      0 0x1605b75 0x00000001          
0 0x105b75 0x19
(XEN) [2021-06-14 23:02:47] [0xbc5]      0 0x160936c 0x00000001          
0 0x10936c 0x19
(XEN) [2021-06-14 23:02:47] [0xbc6]      0 0x1605b73 0x00000001          
0 0x105b73 0x19
(XEN) [2021-06-14 23:02:47] [0xbc7]      0 0x1605b72 0x00000001          
0 0x105b72 0x19
(XEN) [2021-06-14 23:02:47] [0xbc8]      0 0x1605b71 0x00000001          
0 0x105b71 0x19
(XEN) [2021-06-14 23:02:47] [0xbc9]      0 0x1605b70 0x00000001          
0 0x105b70 0x19
(XEN) [2021-06-14 23:02:47] [0xbca]      0 0x1605b6f 0x00000001          
0 0x105b6f 0x19
(XEN) [2021-06-14 23:02:47] [0xbcc]      0 0x1605b6d 0x00000001          
0 0x105b6d 0x19
(XEN) [2021-06-14 23:02:47] [0xbcd]      0 0x1605b6c 0x00000001          
0 0x105b6c 0x19
(XEN) [2021-06-14 23:02:47] [0xbce]      0 0x1605b6b 0x00000001          
0 0x105b6b 0x19
(XEN) [2021-06-14 23:02:47] [0xbcf]      0 0x1605b6a 0x00000001          
0 0x105b6a 0x19
(XEN) [2021-06-14 23:02:47] [0xbd0]      0 0x1605b69 0x00000001          
0 0x105b69 0x19
(XEN) [2021-06-14 23:02:47] [0xbd2]      0 0x1605b67 0x00000001          
0 0x105b67 0x19
(XEN) [2021-06-14 23:02:47] [0xbd3]      0 0x1605b66 0x00000001          
0 0x105b66 0x19
(XEN) [2021-06-14 23:02:47] [0xbd5]      0 0x1605b64 0x00000001          
0 0x105b64 0x19
(XEN) [2021-06-14 23:02:47] [0xbd9]      0 0x1605b5f 0x00000001          
0 0x105b5f 0x19
(XEN) [2021-06-14 23:02:47] [0xbda]      0 0x1605b5e 0x00000001          
0 0x105b5e 0x19
(XEN) [2021-06-14 23:02:47] [0xbdb]      0 0x1605b5d 0x00000001          
0 0x105b5d 0x19
(XEN) [2021-06-14 23:02:47] [0xbdc]      0 0x1605b5c 0x00000001          
0 0x105b5c 0x19
(XEN) [2021-06-14 23:02:47] [0xbdf]      0 0x1605b59 0x00000001          
0 0x105b59 0x19
(XEN) [2021-06-14 23:02:47] [0xbe0]      0 0x1605b58 0x00000001          
0 0x105b58 0x19
(XEN) [2021-06-14 23:02:47] [0xbe1]      0 0x1605b57 0x00000001          
0 0x105b57 0x19
(XEN) [2021-06-14 23:02:47] [0xbe2]      0 0x1605b56 0x00000001          
0 0x105b56 0x19
(XEN) [2021-06-14 23:02:48] [0xbe4]      0 0x1605b54 0x00000001          
0 0x105b54 0x19
(XEN) [2021-06-14 23:02:48] [0xbe5]      0 0x1605b53 0x00000001          
0 0x105b53 0x19
(XEN) [2021-06-14 23:02:48] [0xbe6]      0 0x1605b52 0x00000001          
0 0x105b52 0x19
(XEN) [2021-06-14 23:02:48] [0xbe7]      0 0x1605b51 0x00000001          
0 0x105b51 0x19
(XEN) [2021-06-14 23:02:48] [0xbe8]      0 0x1605b50 0x00000001          
0 0x105b50 0x19
(XEN) [2021-06-14 23:02:48] [0xbe9]      0 0x1605b4f 0x00000001          
0 0x105b4f 0x19
(XEN) [2021-06-14 23:02:48] [0xbea]      0 0x1605b4e 0x00000001          
0 0x105b4e 0x19
(XEN) [2021-06-14 23:02:48] [0xbeb]      0 0x1605b4d 0x00000001          
0 0x105b4d 0x19
(XEN) [2021-06-14 23:02:48] [0xbec]      0 0x1605b4c 0x00000001          
0 0x105b4c 0x19
(XEN) [2021-06-14 23:02:48] [0xbed]      0 0x1605b4b 0x00000001          
0 0x105b4b 0x19
(XEN) [2021-06-14 23:02:48] [0xbee]      0 0x1605b4a 0x00000001          
0 0x105b4a 0x19
(XEN) [2021-06-14 23:02:48] [0xbef]      0 0x1605b49 0x00000001          
0 0x105b49 0x19
(XEN) [2021-06-14 23:02:48] [0xbf0]      0 0x1605b48 0x00000001          
0 0x105b48 0x19
(XEN) [2021-06-14 23:02:48] [0xbf1]      0 0x1605b47 0x00000001          
0 0x105b47 0x19
(XEN) [2021-06-14 23:02:48] [0xbf2]      0 0x1605b46 0x00000001          
0 0x105b46 0x19
(XEN) [2021-06-14 23:02:48] [0xbf3]      0 0x1605b45 0x00000001          
0 0x105b45 0x19
(XEN) [2021-06-14 23:02:48] [0xbf4]      0 0x1605b44 0x00000001          
0 0x105b44 0x19
(XEN) [2021-06-14 23:02:48] [0xbf5]      0 0x1605b43 0x00000001          
0 0x105b43 0x19
(XEN) [2021-06-14 23:02:48] [0xbf6]      0 0x1605b42 0x00000001          
0 0x105b42 0x19
(XEN) [2021-06-14 23:02:48] [0xbf7]      0 0x1605b41 0x00000001          
0 0x105b41 0x19
(XEN) [2021-06-14 23:02:48] [0xbf8]      0 0x1605b40 0x00000001          
0 0x105b40 0x19
(XEN) [2021-06-14 23:02:48] [0xbf9]      0 0x1605b3f 0x00000001          
0 0x105b3f 0x19
(XEN) [2021-06-14 23:02:48] [0xbfa]      0 0x1605b3e 0x00000001          
0 0x105b3e 0x19
(XEN) [2021-06-14 23:02:48] [0xbfb]      0 0x1605b3d 0x00000001          
0 0x105b3d 0x19
(XEN) [2021-06-14 23:02:48] [0xbfc]      0 0x1605b3c 0x00000001          
0 0x105b3c 0x19
(XEN) [2021-06-14 23:02:48] [0xbfd]      0 0x1605b3b 0x00000001          
0 0x105b3b 0x19
(XEN) [2021-06-14 23:02:48] [0xbfe]      0 0x1605b3a 0x00000001          
0 0x105b3a 0x19
(XEN) [2021-06-14 23:02:48] [0xbff]      0 0x1605b39 0x00000001          
0 0x105b39 0x19
(XEN) [2021-06-14 23:02:48] [0xc00]      0 0x1607074 0x00000001          
0 0x107074 0x19
(XEN) [2021-06-14 23:02:48] [0xc01]      0 0x1607073 0x00000001          
0 0x107073 0x19
(XEN) [2021-06-14 23:02:48] [0xc02]      0 0x1607072 0x00000001          
0 0x107072 0x19
(XEN) [2021-06-14 23:02:48] [0xc03]      0 0x1607071 0x00000001          
0 0x107071 0x19
(XEN) [2021-06-14 23:02:48] [0xc04]      0 0x1607070 0x00000001          
0 0x107070 0x19
(XEN) [2021-06-14 23:02:48] [0xc05]      0 0x160706f 0x00000001          
0 0x10706f 0x19
(XEN) [2021-06-14 23:02:48] [0xc06]      0 0x160706e 0x00000001          
0 0x10706e 0x19
(XEN) [2021-06-14 23:02:48] [0xc07]      0 0x160706d 0x00000001          
0 0x10706d 0x19
(XEN) [2021-06-14 23:02:48] [0xc08]      0 0x160706c 0x00000001          
0 0x10706c 0x19
(XEN) [2021-06-14 23:02:48] [0xc09]      0 0x160706b 0x00000001          
0 0x10706b 0x19
(XEN) [2021-06-14 23:02:48] [0xc0a]      0 0x160706a 0x00000001          
0 0x10706a 0x19
(XEN) [2021-06-14 23:02:48] [0xc0b]      0 0x1607068 0x00000001          
0 0x107068 0x19
(XEN) [2021-06-14 23:02:48] [0xc0d]      0 0x1607066 0x00000001          
0 0x107066 0x19
(XEN) [2021-06-14 23:02:48] [0xc0e]      0 0x1607065 0x00000001          
0 0x107065 0x19
(XEN) [2021-06-14 23:02:48] [0xc11]      0 0x160934e 0x00000001          
0 0x10934e 0x19
(XEN) [2021-06-14 23:02:48] [0xc12]      0 0x160934f 0x00000001          
0 0x10934f 0x19
(XEN) [2021-06-14 23:02:48] [0xc13]      0 0x1609350 0x00000001          
0 0x109350 0x19
(XEN) [2021-06-14 23:02:48] [0xc14]      0 0x1609351 0x00000001          
0 0x109351 0x19
(XEN) [2021-06-14 23:02:48] [0xc15]      0 0x1609352 0x00000001          
0 0x109352 0x19
(XEN) [2021-06-14 23:02:48] [0xc16]      0 0x1609353 0x00000001          
0 0x109353 0x19
(XEN) [2021-06-14 23:02:48] [0xc17]      0 0x1609354 0x00000001          
0 0x109354 0x19
(XEN) [2021-06-14 23:02:48] [0xc18]      0 0x1609355 0x00000001          
0 0x109355 0x19
(XEN) [2021-06-14 23:02:48] [0xc19]      0 0x1609356 0x00000001          
0 0x109356 0x19
(XEN) [2021-06-14 23:02:48] [0xc1a]      0 0x1609357 0x00000001          
0 0x109357 0x19
(XEN) [2021-06-14 23:02:48] [0xc1b]      0 0x1609358 0x00000001          
0 0x109358 0x19
(XEN) [2021-06-14 23:02:48] [0xc1c]      0 0x1609359 0x00000001          
0 0x109359 0x19
(XEN) [2021-06-14 23:02:48] [0xc1d]      0 0x160935a 0x00000001          
0 0x10935a 0x19
(XEN) [2021-06-14 23:02:48] [0xc1e]      0 0x160935b 0x00000001          
0 0x10935b 0x19
(XEN) [2021-06-14 23:02:48] [0xc1f]      0 0x160934d 0x00000001          
0 0x10934d 0x19
(XEN) [2021-06-14 23:02:48] [0xc20]      0 0x1609366 0x00000001          
0 0x109366 0x19
(XEN) [2021-06-14 23:02:48] [0xc21]      0 0x1609367 0x00000001          
0 0x109367 0x19
(XEN) [2021-06-14 23:02:48] [0xc22]      0 0x1609368 0x00000001          
0 0x109368 0x19
(XEN) [2021-06-14 23:02:48] [0xc23]      0 0x1609369 0x00000001          
0 0x109369 0x19
(XEN) [2021-06-14 23:02:48] [0xc24]      0 0x160936a 0x00000001          
0 0x10936a 0x19
(XEN) [2021-06-14 23:02:48] [0xc25]      0 0x160936b 0x00000001          
0 0x10936b 0x19
(XEN) [2021-06-14 23:02:48] [0xc26]      0 0x160935d 0x00000001          
0 0x10935d 0x19
(XEN) [2021-06-14 23:02:48] [0xc27]      0 0x160935e 0x00000001          
0 0x10935e 0x19
(XEN) [2021-06-14 23:02:48] [0xc28]      0 0x160935f 0x00000001          
0 0x10935f 0x19
(XEN) [2021-06-14 23:02:48] [0xc29]      0 0x1609360 0x00000001          
0 0x109360 0x19
(XEN) [2021-06-14 23:02:48] [0xc2a]      0 0x1609361 0x00000001          
0 0x109361 0x19
(XEN) [2021-06-14 23:02:48] [0xc2b]      0 0x1609362 0x00000001          
0 0x109362 0x19
(XEN) [2021-06-14 23:02:48] [0xc2c]      0 0x1609363 0x00000001          
0 0x109363 0x19
(XEN) [2021-06-14 23:02:48] [0xc2d]      0 0x1609364 0x00000001          
0 0x109364 0x19
(XEN) [2021-06-14 23:02:48] [0x1041]      0 0x1607064 
0x00000001          0 0x107064 0x19
(XEN) [2021-06-14 23:02:48] [0x1042]      0 0x1607063 
0x00000001          0 0x107063 0x19
(XEN) [2021-06-14 23:02:48] [0x1043]      0 0x1609376 
0x00000001          0 0x109376 0x19
(XEN) [2021-06-14 23:02:48] [0x1044]      0 0x1609375 
0x00000001          0 0x109375 0x19
(XEN) [2021-06-14 23:02:48] [0x1045]      0 0x1609374 
0x00000001          0 0x109374 0x19
(XEN) [2021-06-14 23:02:48] [0x1046]      0 0x1609373 
0x00000001          0 0x109373 0x19
(XEN) [2021-06-14 23:02:48] [0x1047]      0 0x1609372 
0x00000001          0 0x109372 0x19
(XEN) [2021-06-14 23:02:48] [0x1048]      0 0x1609371 
0x00000001          0 0x109371 0x19
(XEN) [2021-06-14 23:02:48] [0x1049]      0 0x1609370 
0x00000001          0 0x109370 0x19
(XEN) [2021-06-14 23:02:48] [0x104a]      0 0x160936f 
0x00000001          0 0x10936f 0x19
(XEN) [2021-06-14 23:02:48] [0x104b]      0 0x160936e 
0x00000001          0 0x10936e 0x19
(XEN) [2021-06-14 23:02:48] [0x104c]      0 0x160936d 
0x00000001          0 0x10936d 0x19
(XEN) [2021-06-14 23:02:48] [0x104d]      0 0x1607061 
0x00000001          0 0x107061 0x19
(XEN) [2021-06-14 23:02:48] [0x104e]      0 0x1607060 
0x00000001          0 0x107060 0x19
(XEN) [2021-06-14 23:02:48] [0x104f]      0 0x160705f 
0x00000001          0 0x10705f 0x19
(XEN) [2021-06-14 23:02:48] [0x1050]      0 0x160705e 
0x00000001          0 0x10705e 0x19
(XEN) [2021-06-14 23:02:48] [0x1051]      0 0x160705d 
0x00000001          0 0x10705d 0x19
(XEN) [2021-06-14 23:02:48] gnttab_usage_print_all ] done
(XEN) [2021-06-14 23:02:48] [i: dump interrupt bindings]
(XEN) [2021-06-14 23:02:48] IRQ information:
(XEN) [2021-06-14 23:02:48]    IRQ:   0 vec:f0 IO-APIC-edge status=000 
aff:{0}/{0} arch/x86/time.c#timer_interrupt()
(XEN) [2021-06-14 23:02:48]    IRQ:   1 vec:a3 IO-APIC-edge status=030 
aff:{7}/{6-11} in-flight=0 d0:  1(---)
(XEN) [2021-06-14 23:02:48]    IRQ:   3 vec:47 IO-APIC-edge status=030 
aff:{4}/{0-5} in-flight=0 d0:  3(---)
(XEN) [2021-06-14 23:02:48]    IRQ:   4 vec:f1 IO-APIC-edge status=000 
aff:{0-11}/{0-11} drivers/char/ns16550.c#ns16550_interrupt()
(XEN) [2021-06-14 23:02:48]    IRQ:   5 vec:50 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:   6 vec:58 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:   7 vec:60 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:   8 vec:8c IO-APIC-edge status=030 
aff:{3}/{0-5} in-flight=0 d0:  8(---)
(XEN) [2021-06-14 23:02:48]    IRQ:   9 vec:c0 IO-APIC-level status=030 
aff:{2}/{0-5} in-flight=0 d0:  9(---)
(XEN) [2021-06-14 23:02:48]    IRQ:  10 vec:78 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  11 vec:88 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  12 vec:90 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  13 vec:98 IO-APIC-edge status=002 
aff:{0-11}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  14 vec:a0 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  15 vec:a8 IO-APIC-edge status=002 
aff:{0}/{0} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  16 vec:b9 IO-APIC-level status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  18 vec:c4 IO-APIC-level status=030 
aff:{2}/{0-5} in-flight=0 d0: 18(---)
(XEN) [2021-06-14 23:02:48]    IRQ:  19 vec:ad IO-APIC-level status=030 
aff:{1}/{0-5} in-flight=0 d0: 19(---),d6: 19(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  22 vec:bb IO-APIC-level status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  26 vec:d8 IO-APIC-level status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  27 vec:2f IO-APIC-level status=010 
aff:{6}/{6-11} in-flight=0 d5: 27(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  32 vec:e8 IO-APIC-level status=010 
aff:{2}/{0-5} in-flight=0 d2: 32(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  34 vec:e6 IO-APIC-level status=010 
aff:{11}/{6-11} in-flight=0 d7: 34(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  40 vec:99 IO-APIC-level status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:48]    IRQ:  56 vec:37 IO-APIC-level status=010 
aff:{6}/{6-11} in-flight=0 d5: 56(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  60 vec:3f IO-APIC-level status=010 
aff:{6}/{6-11} in-flight=0 d5: 60(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  64 vec:ee IO-APIC-level status=010 
aff:{11}/{6-11} in-flight=0 d7: 64(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  68 vec:27 IO-APIC-level status=010 
aff:{11}/{6-11} in-flight=0 d7: 68(-M-)
(XEN) [2021-06-14 23:02:48]    IRQ:  72 vec:b0 DMA_MSI status=000 
aff:{6-11}/{6-11} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN) [2021-06-14 23:02:48]    IRQ:  73 vec:38 DMA_MSI status=000 
aff:{0-5}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN) [2021-06-14 23:02:48]    IRQ:  74 vec:49 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  75 vec:61 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  76 vec:79 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  77 vec:91 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  78 vec:b1 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  79 vec:c1 PCI-MSI status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  80 vec:c9 PCI-MSI status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  81 vec:d1 PCI-MSI status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  82 vec:d9 PCI-MSI status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  83 vec:e1 PCI-MSI status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  84 vec:42 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  85 vec:62 PCI-MSI/-X status=002 
aff:{0-11}/{0-5} mapped, unbound
(XEN) [2021-06-14 23:02:49]    IRQ:  86 vec:bd PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:891(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  87 vec:b2 PCI-MSI/-X status=030 
aff:{7}/{6-11} in-flight=0 d0:890(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  88 vec:a5 PCI-MSI/-X status=030 
aff:{3}/{0-5} in-flight=0 d0:889(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  89 vec:d5 PCI-MSI/-X status=030 
aff:{5}/{0-5} in-flight=0 d0:888(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  90 vec:36 PCI-MSI/-X status=010 
aff:{0}/{0-5} in-flight=0 d0:887(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  91 vec:9d PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:886(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  92 vec:dd PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:885(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  93 vec:b5 PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:884(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  94 vec:3d PCI-MSI status=030 
aff:{6}/{6-11} in-flight=0 d0:883(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  95 vec:d7 PCI-MSI status=030 
aff:{9}/{6-11} in-flight=0 d0:882(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  96 vec:df PCI-MSI status=030 
aff:{9}/{6-11} in-flight=0 d0:881(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  97 vec:8d PCI-MSI/-X status=030 
aff:{7}/{6-11} in-flight=0 d0:880(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  98 vec:75 PCI-MSI/-X status=030 
aff:{1}/{0-5} in-flight=0 d0:879(---)
(XEN) [2021-06-14 23:02:49]    IRQ:  99 vec:e5 PCI-MSI/-X status=030 
aff:{5}/{0-5} in-flight=0 d0:878(---)
(XEN) [2021-06-14 23:02:49]    IRQ: 100 vec:ed PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:877(---)
(XEN) [2021-06-14 23:02:49]    IRQ: 101 vec:2e PCI-MSI/-X status=030 
aff:{10}/{6-11} in-flight=0 d0:876(---)
(XEN) [2021-06-14 23:02:49]    IRQ: 102 vec:6a PCI-MSI/-X status=030 
aff:{5}/{0-5} in-flight=0 d6:103(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 103 vec:26 PCI-MSI/-X status=030 
aff:{11}/{6-11} in-flight=0 d6:102(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 104 vec:85 PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d6:101(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 105 vec:84 PCI-MSI/-X status=030 
aff:{2}/{0-5} in-flight=0 d6:100(-M-)
(XEN) [2021-06-14 23:02:49] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:49]    IRQ: 106 vec:46 PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d6: 99(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 107 vec:c5 PCI-MSI status=030 
aff:{10}/{6-11} in-flight=0 d0:870(---)
(XEN) [2021-06-14 23:02:49]    IRQ: 108 vec:5d PCI-MSI status=010 
aff:{11}/{6-11} in-flight=0 d2:103(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 109 vec:64 PCI-MSI/-X status=030 
aff:{3}/{0-5} in-flight=0 d7:103(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 110 vec:9b PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7:102(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 111 vec:a3 PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7:101(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 112 vec:ab PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7:100(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 113 vec:b3 PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7: 99(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 114 vec:d3 PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7: 98(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 115 vec:db PCI-MSI/-X status=030 
aff:{4}/{0-5} in-flight=0 d7: 97(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 116 vec:71 PCI-MSI/-X status=030 
aff:{7}/{6-11} in-flight=0 d5:103(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 117 vec:63 PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5:102(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 118 vec:6b PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5:101(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 119 vec:73 PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5:100(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 120 vec:7b PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5: 99(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 121 vec:83 PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5: 98(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 122 vec:8b PCI-MSI/-X status=030 
aff:{0}/{0-5} in-flight=0 d5: 97(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 123 vec:4e PCI-MSI status=030 
aff:{5}/{0-5} in-flight=0 d5: 96(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 124 vec:5c PCI-MSI status=030 
aff:{6}/{6-11} in-flight=0 d7: 96(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 125 vec:4c PCI-MSI status=030 
aff:{5}/{0-5} in-flight=0 d7: 95(-M-)
(XEN) [2021-06-14 23:02:49]    IRQ: 126 vec:6d PCI-MSI status=030 
aff:{8}/{6-11} in-flight=0 d5: 95(-M-)
(XEN) [2021-06-14 23:02:49] Direct vector information:
(XEN) [2021-06-14 23:02:49]    0x22 -> irq_move_cleanup_interrupt()
(XEN) [2021-06-14 23:02:49]    0xf2 -> 
arch/x86/cpu/mcheck/mce_intel.c#cmci_interrupt()
(XEN) [2021-06-14 23:02:49]    0xf3 -> 
arch/x86/cpu/mcheck/mce_intel.c#intel_thermal_interrupt()
(XEN) [2021-06-14 23:02:49]    0xf4 -> 
arch/x86/hvm/vmx/vmx.c#pi_notification_interrupt()
(XEN) [2021-06-14 23:02:49]    0xf9 -> pmu_apic_interrupt()
(XEN) [2021-06-14 23:02:49]    0xfa -> apic_timer_interrupt()
(XEN) [2021-06-14 23:02:49]    0xfb -> call_function_interrupt()
(XEN) [2021-06-14 23:02:49]    0xfc -> event_check_interrupt()
(XEN) [2021-06-14 23:02:49]    0xfd -> invalidate_interrupt()
(XEN) [2021-06-14 23:02:49]    0xfe -> error_interrupt()
(XEN) [2021-06-14 23:02:49]    0xff -> spurious_interrupt()
(XEN) [2021-06-14 23:02:49] IO-APIC interrupt information:
(XEN) [2021-06-14 23:02:49]     IRQ  0 Vec240:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  2: vec=f0 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ  1 Vec163:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  1: vec=a3 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00010004
(XEN) [2021-06-14 23:02:49]     IRQ  3 Vec 71:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  3: vec=47 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000100
(XEN) [2021-06-14 23:02:49]     IRQ  4 Vec241:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  4: vec=f1 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00010555
(XEN) [2021-06-14 23:02:49]     IRQ  5 Vec 80:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  5: vec=50 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ  6 Vec 88:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  6: vec=58 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ  7 Vec 96:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  7: vec=60 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ  8 Vec140:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  8: vec=8c 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000040
(XEN) [2021-06-14 23:02:49]     IRQ  9 Vec192:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin  9: vec=c0 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 
dest_id:00000010
(XEN) [2021-06-14 23:02:49]     IRQ 10 Vec120:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 10: vec=78 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 11 Vec136:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 11: vec=88 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 12 Vec144:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 12: vec=90 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 13 Vec152:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 13: vec=98 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 14 Vec160:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 14: vec=a0 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 15 Vec168:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 15: vec=a8 
delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 
dest_id:00000001
(XEN) [2021-06-14 23:02:49]     IRQ 16 Vec185:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 16: vec=b9 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 
dest_id:00000555
(XEN) [2021-06-14 23:02:49]     IRQ 18 Vec196:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 18: vec=c4 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00000010
(XEN) [2021-06-14 23:02:49]     IRQ 19 Vec173:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 19: vec=ad 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00000004
(XEN) [2021-06-14 23:02:49]     IRQ 22 Vec187:
(XEN) [2021-06-14 23:02:49]       Apic 0x00, Pin 22: vec=bb 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 
dest_id:00000555
(XEN) [2021-06-14 23:02:49]     IRQ 26 Vec216:
(XEN) [2021-06-14 23:02:49]       Apic 0x01, Pin  2: vec=d8 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 
dest_id:00000555
(XEN) [2021-06-14 23:02:49]     IRQ 27 Vec 47:
(XEN) [2021-06-14 23:02:49]       Apic 0x01, Pin  3: vec=2f 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010001
(XEN) [2021-06-14 23:02:49]     IRQ 32 Vec232:
(XEN) [2021-06-14 23:02:49]       Apic 0x01, Pin  8: vec=e8 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00000010
(XEN) [2021-06-14 23:02:49]     IRQ 34 Vec230:
(XEN) [2021-06-14 23:02:49]       Apic 0x01, Pin 10: vec=e6 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010400
(XEN) [2021-06-14 23:02:49]     IRQ 40 Vec153:
(XEN) [2021-06-14 23:02:49]       Apic 0x01, Pin 16: vec=99 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 
dest_id:00000555
(XEN) [2021-06-14 23:02:49]     IRQ 56 Vec 55:
(XEN) [2021-06-14 23:02:49]       Apic 0x02, Pin  8: vec=37 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010001
(XEN) [2021-06-14 23:02:50]     IRQ 60 Vec 63:
(XEN) [2021-06-14 23:02:50]       Apic 0x02, Pin 12: vec=3f 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010001
(XEN) [2021-06-14 23:02:50]     IRQ 64 Vec238:
(XEN) [2021-06-14 23:02:50]       Apic 0x02, Pin 16: vec=ee 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010400
(XEN) [2021-06-14 23:02:50]     IRQ 68 Vec 39:
(XEN) [2021-06-14 23:02:50]       Apic 0x02, Pin 20: vec=27 
delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 
dest_id:00010400
(XEN) [2021-06-14 23:02:50] [m: memory info]
(XEN) [2021-06-14 23:02:50] Physical memory information:
(XEN) [2021-06-14 23:02:50]     Xen heap: 0kB free
(XEN) [2021-06-14 23:02:50]     heap[15]: 64512kB free
(XEN) [2021-06-14 23:02:50]     heap[16]: 131072kB free
(XEN) [2021-06-14 23:02:50]     heap[17]: 262144kB free
(XEN) [2021-06-14 23:02:50]     heap[18]: 524288kB free
(XEN) [2021-06-14 23:02:50]     heap[19]: 917636kB free
(XEN) [2021-06-14 23:02:50]     DMA heap: 1899652kB free
(XEN) [2021-06-14 23:02:50]     heap[24]: 6290880kB free
(XEN) [2021-06-14 23:02:50]     heap[25]: 1983172kB free
(XEN) [2021-06-14 23:02:50]     Dom heap: 8274052kB free
(XEN) [2021-06-14 23:02:50] [n: NMI statistics]
(XEN) [2021-06-14 23:02:50] CPU    NMI
(XEN) [2021-06-14 23:02:50]   0      2
(XEN) [2021-06-14 23:02:50]   1      0
(XEN) [2021-06-14 23:02:50]   2      0
(XEN) [2021-06-14 23:02:50]   3      0
(XEN) [2021-06-14 23:02:50]   4      0
(XEN) [2021-06-14 23:02:50]   5      0
(XEN) [2021-06-14 23:02:50]   6      0
(XEN) [2021-06-14 23:02:50]   7      0
(XEN) [2021-06-14 23:02:50]   8      0
(XEN) [2021-06-14 23:02:50]   9      0
(XEN) [2021-06-14 23:02:50]  10      0
(XEN) [2021-06-14 23:02:50]  11      0
(XEN) [2021-06-14 23:02:50] d0v0: NMI neither pending nor masked
(XEN) [2021-06-14 23:02:50] [p: print performance counters]
(XEN) [2021-06-14 23:02:50] Xen performance counters SHOW  (now = 
16920481776666)
(XEN) [2021-06-14 23:02:50] exceptions TOTAL[    79357368]
(XEN) [2021-06-14 23:02:50]                   ARR00[         0] 
ARR01[         0]  ARR02[         0]  ARR03[    492405]
(XEN) [2021-06-14 23:02:50]                   ARR04[         0] 
ARR05[         0]  ARR06[      1201]  ARR07[         0]
(XEN) [2021-06-14 23:02:50]                   ARR08[         0] 
ARR09[         0]  ARR10[         0]  ARR11[         0]
(XEN) [2021-06-14 23:02:50]                   ARR12[         1] ARR13[  
69905172]  ARR14[   8958657]  ARR15[         0]
(XEN) [2021-06-14 23:02:50]                   ARR16[         0] 
ARR17[         0]  ARR18[         0]  ARR19[         0]
(XEN) [2021-06-14 23:02:50]                   ARR20[         0] 
ARR21[         0]  ARR22[         0]  ARR23[         0]
(XEN) [2021-06-14 23:02:50]                   ARR24[         0] 
ARR25[         0]  ARR26[         0]  ARR27[         0]
(XEN) [2021-06-14 23:02:50]                   ARR28[         0] 
ARR29[         0]  ARR30[         0]  ARR31[         0]
(XEN) [2021-06-14 23:02:50] vmexits TOTAL[  1185247649]
(XEN) [2021-06-14 23:02:50]                   ARR00[         0] ARR01[ 
135632490]  ARR02[         0]  ARR03[         0]
(XEN) [2021-06-14 23:02:50]                   ARR04[         0] 
ARR05[         0]  ARR06[         0]  ARR07[ 304121397]
(XEN) [2021-06-14 23:02:50]                   ARR08[         0] 
ARR09[         0]  ARR10[  24393537]  ARR11[         0]
(XEN) [2021-06-14 23:02:50]                   ARR12[ 251667860] 
ARR13[         0]  ARR14[         0]  ARR15[         0]
(XEN) [2021-06-14 23:02:50]                   ARR16[         0] 
ARR17[         0]  ARR18[ 425866442]  ARR19[         0]
(XEN) [2021-06-14 23:02:50]                   ARR20[         0] 
ARR21[         0]  ARR22[         0]  ARR23[         0]
(XEN) [2021-06-14 23:02:50]                   ARR24[         0] 
ARR25[         0]  ARR26[         0]  ARR27[         0]
(XEN) [2021-06-14 23:02:50]                   ARR28[   1609038] 
ARR29[        38]  ARR30[   3185995]  ARR31[    369562]
(XEN) [2021-06-14 23:02:50]                   ARR32[      1280] 
ARR33[         0]  ARR34[         0]  ARR35[         0]
(XEN) [2021-06-14 23:02:50]                   ARR36[         0] 
ARR37[         0]  ARR38[         0]  ARR39[         0]
(XEN) [2021-06-14 23:02:50]                   ARR40[  38129033] 
ARR41[         0]  ARR42[         0]  ARR43[         0]
(XEN) [2021-06-14 23:02:50]                   ARR44[         1] 
ARR45[       195]  ARR46[         0]  ARR47[         0]
(XEN) [2021-06-14 23:02:50]                   ARR48[    255810] 
ARR49[     18291]  ARR50[         0]  ARR51[         0]
(XEN) [2021-06-14 23:02:50]                   ARR52[         0] 
ARR53[         0]  ARR54[        55]  ARR55[        44]
(XEN) [2021-06-14 23:02:50] cause vector TOTAL[           0]
(XEN) [2021-06-14 23:02:50] SVMexits TOTAL[           0]
(XEN) [2021-06-14 23:02:50] segmentation fixups TOTAL[           0]
(XEN) [2021-06-14 23:02:50] apic timer interrupts TOTAL[   261296221]  
CPU00[  22015639]  CPU01[  21983485]  CPU02[ 21960573]  CPU03[  21918261]
(XEN) [2021-06-14 23:02:50] CPU04[  21901034]  CPU05[  21827405]  
CPU06[  21586232]  CPU07[ 21652276]
(XEN) [2021-06-14 23:02:50] CPU08[  21625113]  CPU09[  21601078]  
CPU10[  21596158]  CPU11[ 21629255]
(XEN) [2021-06-14 23:02:50] domain page tlb flushes TOTAL[      965705]  
CPU00[     61953]  CPU01[     63968] CPU02[     64600]  CPU03[     65705]
(XEN) [2021-06-14 23:02:50] CPU04[     66724]  CPU05[     62779]  
CPU06[     95203] CPU07[     98425]
(XEN) [2021-06-14 23:02:50] CPU08[     97532]  CPU09[     96406]  
CPU10[     96401] CPU11[     96011]
(XEN) [2021-06-14 23:02:50] calls to mmuext_op TOTAL[    16010940]  
CPU00[    730114]  CPU01[    805748] CPU02[    804784]  CPU03[    794956]
(XEN) [2021-06-14 23:02:50] CPU04[    794844]  CPU05[    702558]  
CPU06[   1754133]  CPU07[ 1971466]
(XEN) [2021-06-14 23:02:50] CPU08[   1919259]  CPU09[   1948422]  
CPU10[   1925783]  CPU11[ 1858873]
(XEN) [2021-06-14 23:02:50] mmuext ops TOTAL[    29773197]  CPU00[   
1356353]  CPU01[   1440142] CPU02[   1451254]  CPU03[   1419663]
(XEN) [2021-06-14 23:02:50] CPU04[   1408412]  CPU05[   1312110]  
CPU06[   3320591]  CPU07[ 3661687]
(XEN) [2021-06-14 23:02:50] CPU08[   3607938]  CPU09[   3645486]  
CPU10[   3614747]  CPU11[ 3534814]
(XEN) [2021-06-14 23:02:50] calls to mmu_update TOTAL[    39123648]  
CPU00[   1529289]  CPU01[   2264527] CPU02[   2013124]  CPU03[   2222015]
(XEN) [2021-06-14 23:02:50] CPU04[   2345658]  CPU05[   1390582]  
CPU06[   3222160]  CPU07[ 9154270]
(XEN) [2021-06-14 23:02:50] CPU08[   3854395]  CPU09[   3992600]  
CPU10[   3845809]  CPU11[ 3289305]
(XEN) [2021-06-14 23:02:50] page updates TOTAL[    44724861]  CPU00[   
1832850]  CPU01[   2705012] CPU02[   2433149]  CPU03[   2748417]
(XEN) [2021-06-14 23:02:50] CPU04[   2748751]  CPU05[   1681499]  
CPU06[   3639691]  CPU07[ 9848678]
(XEN) [2021-06-14 23:02:50] CPU08[   4441804]  CPU09[   4526026]  
CPU10[   4405489]  CPU11[ 3713495]
(XEN) [2021-06-14 23:02:50] mmu_updates of writable pages TOTAL[    
10110502]  CPU00[    319725]  CPU01[    467396] CPU02[    450769]  
CPU03[    527031]
(XEN) [2021-06-14 23:02:50] CPU04[    393785]  CPU05[    306907]  
CPU06[    445877]  CPU07[ 4924700]
(XEN) [2021-06-14 23:02:50] CPU08[    628516]  CPU09[    566336]  
CPU10[    627740]  CPU11[ 451720]
(XEN) [2021-06-14 23:02:50] calls to update_va_map TOTAL[     1475863]  
CPU00[     41121]  CPU01[     78848] CPU02[     69922]  CPU03[     71323]
(XEN) [2021-06-14 23:02:50] CPU04[     70903]  CPU05[     36540]  
CPU06[     63489]  CPU07[ 726400]
(XEN) [2021-06-14 23:02:50] CPU08[     81299]  CPU09[     91297]  
CPU10[     83875] CPU11[     60846]
(XEN) [2021-06-14 23:02:50] page faults TOTAL[     8958657]  CPU00[    
425172]  CPU01[    678626] CPU02[    598376]  CPU03[    670915]
(XEN) [2021-06-14 23:02:50] CPU04[    786008]  CPU05[    433004]  
CPU06[    732211]  CPU07[ 1054067]
(XEN) [2021-06-14 23:02:50] CPU08[    948596]  CPU09[    959282]  
CPU10[    916811]  CPU11[ 755589]
(XEN) [2021-06-14 23:02:50] copy_user faults TOTAL[      806766]  
CPU00[     59277]  CPU01[     59769] CPU02[     59739]  CPU03[     59760]
(XEN) [2021-06-14 23:02:50] CPU04[     59859]  CPU05[     59436]  
CPU06[     72312] CPU07[     76341]
(XEN) [2021-06-14 23:02:50] CPU08[     74886]  CPU09[     74916]  
CPU10[     74838] CPU11[     75633]
(XEN) [2021-06-14 23:02:50] map_domain_page count TOTAL[   181146772]  
CPU00[  14920770]  CPU01[  18860993] CPU02[   8383635]  CPU03[   9392496]
(XEN) [2021-06-14 23:02:50] CPU04[  13098013]  CPU05[   7602456]  
CPU06[  15744774]  CPU07[ 26119453]
(XEN) [2021-06-14 23:02:50] CPU08[  14569150]  CPU09[  14918259]  
CPU10[  16992285]  CPU11[ 20544488]
(XEN) [2021-06-14 23:02:50] writable pt emulations TOTAL[     3264636]  
CPU00[    138773]  CPU01[    230514] CPU02[    198911]  CPU03[    238590]
(XEN) [2021-06-14 23:02:50] CPU04[    298843]  CPU05[    167623]  
CPU06[    268141]  CPU07[ 406902]
(XEN) [2021-06-14 23:02:50] CPU08[    360233]  CPU09[    358088]  
CPU10[    324152]  CPU11[ 273866]
(XEN) [2021-06-14 23:02:50] mmio ro emulations TOTAL[         303]  
CPU00[         0]  CPU01[         3] CPU02[        23]  CPU03[        21]
(XEN) [2021-06-14 23:02:50] CPU04[        14]  CPU05[        25]  
CPU06[         8] CPU07[        54]
(XEN) [2021-06-14 23:02:50] CPU08[        42]  CPU09[        16]  
CPU10[        53] CPU11[        44]
(XEN) [2021-06-14 23:02:50] pre-exception fixed TOTAL[           0]
(XEN) [2021-06-14 23:02:50] guest pagetable walks TOTAL[  2242301107]  
CPU00[ 190015803]  CPU01[ 190010080]  CPU02[ 189552946]  CPU03[ 189379165]
(XEN) [2021-06-14 23:02:51] CPU04[ 188958240]  CPU05[ 188635765]  CPU06[ 
184108499]  CPU07[ 184688294]
(XEN) [2021-06-14 23:02:51] CPU08[ 184266295]  CPU09[ 184190111]  CPU10[ 
184188074]  CPU11[ 184311018]
(XEN) [2021-06-14 23:02:51] calls to shadow_alloc TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_alloc flushed TLBs TOTAL[           0]
(XEN) [2021-06-14 23:02:51] number of shadow pages in use 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_free TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow recycles old shadows TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow recycles in-use shadows 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow hit read-only linear map 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow A bit update TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow A&D bit update TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_fault TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fast path n/p TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fast path mmio TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fast path error 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault guest bad gfn TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault really guest fault 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault emulates a read 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault emulates a write 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault emulator fails TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault emulate stack write 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault emulate for CR0.WP=0 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fast emulate TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fast emulate failed 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault handled as mmio 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_fault fixed fault TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow causes ptwr to emulate 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_validate_gl1e 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_validate_gl2e 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_validate_gl3e 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_validate_gl4e 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_hash_lookup TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow hash hit in bucket head 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow hash misses TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to get_shadow_status TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_hash_insert TOTAL[           0]
(XEN) [2021-06-14 23:02:51] calls to shadow_hash_delete TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow removes write access TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: 32b w2k3 TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: 32pae w2k3 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: 64b w2k3 TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: linux low/solaris 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: linux high 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: FreeBSD TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: sl1p TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable: sl1p failed 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable brute-force 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow writeable resync bf TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow removes all mappings TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow rm-mappings brute-force 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow unshadows for fork/exit 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow unshadows a page TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow unshadow by up-pointer 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow unshadow brute-force TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow_get_page_from_l1e failed 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow checks gwalk TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow check inconsistent gwalk 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow flush tlb by removing write perm  
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow emulates invlpg TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow invlpg faults TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow extra pt write TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow extra non-pt-write op 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow extra emulation failed 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow OOS fixup adds TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow OOS fixup evictions TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow OOS unsyncs TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow OOS evictions TOTAL[           0]
(XEN) [2021-06-14 23:02:51] shadow OOS resyncs TOTAL[           0]
(XEN) [2021-06-14 23:02:51] realmode instructions emulated 
TOTAL[           0]
(XEN) [2021-06-14 23:02:51] vmexits from realmode TOTAL[           0]
(XEN) [2021-06-14 23:02:51] vmexits from Pause-Loop Detection TOTAL[    
38131907]  CPU00[   3795163]  CPU01[   3785589] CPU02[   3885311]  
CPU03[   3799393]
(XEN) [2021-06-14 23:02:51] CPU04[   3856715]  CPU05[   3827047]  
CPU06[   2412338]  CPU07[ 2515533]
(XEN) [2021-06-14 23:02:51] CPU08[   2587634]  CPU09[   2585950]  
CPU10[   2526618]  CPU11[ 2554632]
(XEN) [2021-06-14 23:02:51] hypercalls TOTAL[   501159413]
(XEN) [2021-06-14 23:02:51]                   ARR00[ 501159476] 
ARR01[         0]  ARR02[         0]  ARR03[         0]
(XEN) [2021-06-14 23:02:51]                   ARR04[         0] 
ARR05[         0]  ARR06[         0]  ARR07[         0]
(XEN) [2021-06-14 23:02:51]                   ARR08[         0] 
ARR09[         0]  ARR10[         0]  ARR11[         0]
(XEN) [2021-06-14 23:02:51]                   ARR12[         0] 
ARR13[         0]  ARR14[         0]  ARR15[         0]
(XEN) [2021-06-14 23:02:51]                   ARR16[         0] 
ARR17[         0]  ARR18[         0]  ARR19[         0]
(XEN) [2021-06-14 23:02:51]                   ARR20[         0] 
ARR21[         0]  ARR22[         0]  ARR23[         0]
(XEN) [2021-06-14 23:02:51]                   ARR24[         0] 
ARR25[         0]  ARR26[         0]  ARR27[         0]
(XEN) [2021-06-14 23:02:51]                   ARR28[         0] 
ARR29[         0]  ARR30[         0]  ARR31[         0]
(XEN) [2021-06-14 23:02:51]                   ARR32[         0] 
ARR33[         0]  ARR34[         0]  ARR35[         0]
(XEN) [2021-06-14 23:02:51]                   ARR36[         0] 
ARR37[         0]  ARR38[         0]  ARR39[         0]
(XEN) [2021-06-14 23:02:51]                   ARR40[         0] 
ARR41[         0]  ARR42[         0]  ARR43[         0]
(XEN) [2021-06-14 23:02:51]                   ARR44[         0] 
ARR45[         0]  ARR46[         0]  ARR47[         0]
(XEN) [2021-06-14 23:02:51]                   ARR48[         0] 
ARR49[         0]  ARR50[         0]  ARR51[         0]
(XEN) [2021-06-14 23:02:51]                   ARR52[         0] 
ARR53[         0]  ARR54[         0]  ARR55[         0]
(XEN) [2021-06-14 23:02:51]                   ARR56[         0] 
ARR57[         0]  ARR58[         0]  ARR59[         0]
(XEN) [2021-06-14 23:02:51]                   ARR60[         0] 
ARR61[         0]  ARR62[         0]  ARR63[         0]
(XEN) [2021-06-14 23:02:51] calls to multicall TOTAL[      158784]  
CPU00[      7188]  CPU01[     13189] CPU02[     11477]  CPU03[     12631]
(XEN) [2021-06-14 23:02:51] CPU04[     12982]  CPU05[      6298]  
CPU06[     11880] CPU07[     23971]
(XEN) [2021-06-14 23:02:51] CPU08[     15217]  CPU09[     17148]  
CPU10[     15343] CPU11[     11460]
(XEN) [2021-06-14 23:02:51] calls from multicall TOTAL[     1309908]  
CPU00[     60337]  CPU01[    114716] CPU02[    101643]  CPU03[    102915]
(XEN) [2021-06-14 23:02:51] CPU04[    102546]  CPU05[     53810]  
CPU06[     94148]  CPU07[ 210569]
(XEN) [2021-06-14 23:02:51] CPU08[    120736]  CPU09[    134245]  
CPU10[    123707] CPU11[     90536]
(XEN) [2021-06-14 23:02:51] #interrupts TOTAL[   421232512]  CPU00[  
33606254]  CPU01[  33732584]  CPU02[ 33727434]  CPU03[  33635764]
(XEN) [2021-06-14 23:02:51] CPU04[  33648007]  CPU05[  33331794]  
CPU06[  36490358]  CPU07[ 36672312]
(XEN) [2021-06-14 23:02:51] CPU08[  36702950]  CPU09[  36563772]  
CPU10[  36657941]  CPU11[ 36463931]
(XEN) [2021-06-14 23:02:51] #IPIs TOTAL[   143261504]  CPU00[  
10448762]  CPU01[  10612484]  CPU02[ 10637290]  CPU03[  10579645]
(XEN) [2021-06-14 23:02:51] CPU04[  10614803]  CPU05[  10367581]  
CPU06[  13203728]  CPU07[ 13385253]
(XEN) [2021-06-14 23:02:51] CPU08[  13450223]  CPU09[  13336402]  
CPU10[  13419437]  CPU11[ 13206179]
(XEN) [2021-06-14 23:02:51] RCU: idle_timer TOTAL[           1]  
CPU00[         0]  CPU01[         0] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:51] CPU04[         0]  CPU05[         0]  
CPU06[         1] CPU07[         0]
(XEN) [2021-06-14 23:02:51] CPU08[         0]  CPU09[         0]  
CPU10[         0] CPU11[         0]
(XEN) [2021-06-14 23:02:51] sched: timer TOTAL[     8895485]  CPU00[    
735221]  CPU01[    742084] CPU02[    739670]  CPU03[    743799]
(XEN) [2021-06-14 23:02:51] CPU04[    739681]  CPU05[    738291]  
CPU06[    721516]  CPU07[ 749070]
(XEN) [2021-06-14 23:02:51] CPU08[    742962]  CPU09[    747636]  
CPU10[    746520]  CPU11[ 749067]
(XEN) [2021-06-14 23:02:51] sched: runs through scheduler TOTAL[   
643776030]  CPU00[  52571647]  CPU01[  52705913]  CPU02[ 52635664]  
CPU03[  52414855]
(XEN) [2021-06-14 23:02:52] CPU04[  52453016]  CPU05[  52219792]  
CPU06[  54455766]  CPU07[ 54856069]
(XEN) [2021-06-14 23:02:52] CPU08[  54884308]  CPU09[  54830043]  
CPU10[  54821732]  CPU11[ 54927856]
(XEN) [2021-06-14 23:02:52] sched: context switches TOTAL[   594262846]  
CPU00[  47869202]  CPU01[  48008372]  CPU02[ 47839549]  CPU03[  47700399]
(XEN) [2021-06-14 23:02:52] CPU04[  47686462]  CPU05[  47484179]  
CPU06[  51083705]  CPU07[ 51345205]
(XEN) [2021-06-14 23:02:52] CPU08[  51307551]  CPU09[  51255232]  
CPU10[  51304768]  CPU11[ 51378804]
(XEN) [2021-06-14 23:02:52] sched: specific scheduler TOTAL[   
643779363]  CPU00[  52572005]  CPU01[  52706324]  CPU02[ 52635866]  
CPU03[  52415201]
(XEN) [2021-06-14 23:02:52] CPU04[  52453290]  CPU05[  52220213]  
CPU06[  54456037]  CPU07[ 54856352]
(XEN) [2021-06-14 23:02:52] CPU08[  54884524]  CPU09[  54830043]  
CPU10[  54822058]  CPU11[ 54928133]
(XEN) [2021-06-14 23:02:52] sched: dom_init TOTAL[           8]  
CPU00[         3]  CPU01[         1] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         0]  CPU05[         1]  
CPU06[         0] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         1]  CPU09[         1]  
CPU10[         0] CPU11[         1]
(XEN) [2021-06-14 23:02:52] sched: dom_destroy TOTAL[           2]  
CPU00[         0]  CPU01[         0] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         0]  CPU05[         0]  
CPU06[         1] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         0]  CPU09[         1]  
CPU10[         0] CPU11[         0]
(XEN) [2021-06-14 23:02:52] sched: vcpu_yield TOTAL[    38151639]  
CPU00[   3796348]  CPU01[   3786870] CPU02[   3886675]  CPU03[   3800747]
(XEN) [2021-06-14 23:02:52] CPU04[   3858025]  CPU05[   3828495]  
CPU06[   2414065]  CPU07[ 2518419]
(XEN) [2021-06-14 23:02:52] CPU08[   2589443]  CPU09[   2587682]  
CPU10[   2528510]  CPU11[ 2556420]
(XEN) [2021-06-14 23:02:52] sched: unit_alloc TOTAL[          70]  
CPU00[        42]  CPU01[         6] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         0]  CPU05[         0]  
CPU06[         6] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         6]  CPU09[         0]  
CPU10[         4] CPU11[         6]
(XEN) [2021-06-14 23:02:52] sched: unit_insert TOTAL[          46]  
CPU00[        18]  CPU01[         6] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         0]  CPU05[         0]  
CPU06[         6] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         6]  CPU09[         0]  
CPU10[         4] CPU11[         6]
(XEN) [2021-06-14 23:02:52] sched: unit_remove TOTAL[          12]  
CPU00[         0]  CPU01[         0] CPU02[         0]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         0]  CPU05[         0]  
CPU06[         6] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         0]  CPU09[         6]  
CPU10[         0] CPU11[         0]
(XEN) [2021-06-14 23:02:52] sched: unit_sleep TOTAL[        2441]  
CPU00[       275]  CPU01[       297] CPU02[       142]  CPU03[       191]
(XEN) [2021-06-14 23:02:52] CPU04[       189]  CPU05[       217]  
CPU06[       211] CPU07[       191]
(XEN) [2021-06-14 23:02:52] CPU08[       142]  CPU09[       133]  
CPU10[       265] CPU11[       188]
(XEN) [2021-06-14 23:02:52] sched: unit_wake_running TOTAL[      
903117]  CPU00[     74109]  CPU01[     74002] CPU02[     72786]  
CPU03[     72722]
(XEN) [2021-06-14 23:02:52] CPU04[     74048]  CPU05[     72840]  
CPU06[     75887] CPU07[     78247]
(XEN) [2021-06-14 23:02:52] grant_table.c:803:d0v3 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:02:52] CPU08[     77385]  CPU09[     76683]  
CPU10[     76318] CPU11[     78093]
(XEN) [2021-06-14 23:02:52] sched: unit_wake_onrunq TOTAL[           8]  
CPU00[         0]  CPU01[         0] CPU02[         0]  CPU03[         1]
(XEN) [2021-06-14 23:02:52] CPU04[         1]  CPU05[         0]  
CPU06[         2] CPU07[         1]
(XEN) [2021-06-14 23:02:52] CPU08[         0]  CPU09[         1]  
CPU10[         1] CPU11[         1]
(XEN) [2021-06-14 23:02:52] sched: unit_wake_runnable TOTAL[   
298128686]  CPU00[  23954632]  CPU01[  23970796]  CPU02[ 23885891]  
CPU03[  23827323]
(XEN) [2021-06-14 23:02:52] CPU04[  23851130]  CPU05[  23696608]  
CPU06[  25763246]  CPU07[ 25856640]
(XEN) [2021-06-14 23:02:52] CPU08[  25820279]  CPU09[  25826194]  
CPU10[  25830418]  CPU11[ 25845874]
(XEN) [2021-06-14 23:02:52] sched: unit_wake_not_runnable 
TOTAL[           0]
(XEN) [2021-06-14 23:02:52] sched: tickled_no_cpu TOTAL[    17059211]  
CPU00[   1082939]  CPU01[   1080884] CPU02[   1077255]  CPU03[   1075032]
(XEN) [2021-06-14 23:02:52] CPU04[   1070366]  CPU05[   1066321]  
CPU06[   1735355]  CPU07[ 1768423]
(XEN) [2021-06-14 23:02:52] CPU08[   1779619]  CPU09[   1764720]  
CPU10[   1778835]  CPU11[ 1779555]
(XEN) [2021-06-14 23:02:52] sched: tickled_idle_cpu TOTAL[   279331815]  
CPU00[  22736094]  CPU01[  22783920]  CPU02[ 22679538]  CPU03[  22646476]
(XEN) [2021-06-14 23:02:52] CPU04[  22634755]  CPU05[  22483830]  
CPU06[  23836361]  CPU07[ 23947563]
(XEN) [2021-06-14 23:02:52] CPU08[  23850059]  CPU09[  23942807]  
CPU10[  23886968]  CPU11[ 23903684]
(XEN) [2021-06-14 23:02:52] sched: tickled_idle_cpu_exclusive 
TOTAL[           0]
(XEN) [2021-06-14 23:02:52] sched: tickled_busy_cpu TOTAL[    18564667]  
CPU00[   1323898]  CPU01[   1292014] CPU02[   1308502]  CPU03[   1281682]
(XEN) [2021-06-14 23:02:52] CPU04[   1311544]  CPU05[   1306912]  
CPU06[   1794248]  CPU07[ 1783710]
(XEN) [2021-06-14 23:02:52] CPU08[   1824565]  CPU09[   1752398]  
CPU10[   1796374]  CPU11[ 1788924]
(XEN) [2021-06-14 23:02:52] sched: unit_check TOTAL[  1287614382]  
CPU00[ 105149848]  CPU01[ 105417756]  CPU02[ 105278102]  CPU03[ 104835850]
(XEN) [2021-06-14 23:02:52] CPU04[ 104912160]  CPU05[ 104445250]  CPU06[ 
108916512]  CPU07[ 109717314]
(XEN) [2021-06-14 23:02:52] CPU08[ 109773719]  CPU09[ 109660086]  CPU10[ 
109648730]  CPU11[ 109860704]
(XEN) [2021-06-14 23:02:52] csched: delay TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_run TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_no_work TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_balance TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_reorder TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_min_credit TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_unit_active TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: acct_unit_idle TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: unit_boost TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: unit_park TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: unit_unpark TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: load_balance_idle TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: load_balance_over TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: load_balance_other TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: steal_trylock TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: steal_trylock_failed 
TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: steal_peer_idle TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: migrate_queued TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: migrate_running TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: migrate_kicked_away TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched: unit_hot TOTAL[           0]
(XEN) [2021-06-14 23:02:52] csched2: burn_credits_t2c TOTAL[   
425609151]  CPU00[  34941338]  CPU01[  34921858]  CPU02[ 34961033]  
CPU03[  34723803]
(XEN) [2021-06-14 23:02:52] CPU04[  34817134]  CPU05[  34657671]  
CPU06[  35838948]  CPU07[ 36150523]
(XEN) [2021-06-14 23:02:52] CPU08[  36247870]  CPU09[  36071593]  
CPU10[  36113983]  CPU11[ 36163771]
(XEN) [2021-06-14 23:02:52] csched2: acct_load_balance TOTAL[        
1096]  CPU00[       107]  CPU01[       100] CPU02[       103]  
CPU03[        90]
(XEN) [2021-06-14 23:02:52] CPU04[        88]  CPU05[        81]  
CPU06[        82] CPU07[        95]
(XEN) [2021-06-14 23:02:52] CPU08[        84]  CPU09[        73]  
CPU10[        89] CPU11[       104]
(XEN) [2021-06-14 23:02:52] csched2: update_max_weight_quick 
TOTAL[           9]  CPU00[         3]  CPU01[         1] CPU02[         
1]  CPU03[         0]
(XEN) [2021-06-14 23:02:52] CPU04[         2]  CPU05[         0]  
CPU06[         1] CPU07[         0]
(XEN) [2021-06-14 23:02:52] CPU08[         0]  CPU09[         0]  
CPU10[         1] CPU11[         0]
(XEN) [2021-06-14 23:02:52] csched2: update_max_weight_full 
TOTAL[         426]  CPU00[        44]  CPU01[        41] CPU02[        
53]  CPU03[        32]
(XEN) [2021-06-14 23:02:53] CPU04[        26]  CPU05[        31]  
CPU06[        29] CPU07[        37]
(XEN) [2021-06-14 23:02:53] CPU08[        30]  CPU09[        26]  
CPU10[        44] CPU11[        33]
(XEN) [2021-06-14 23:02:53] csched2: migrate_requested TOTAL[         
824]  CPU00[        81]  CPU01[        75] CPU02[        77]  
CPU03[        58]
(XEN) [2021-06-14 23:02:53] CPU04[        72]  CPU05[        64]  
CPU06[        61] CPU07[        71]
(XEN) [2021-06-14 23:02:53] CPU08[        73]  CPU09[        49]  
CPU10[        65] CPU11[        78]
(XEN) [2021-06-14 23:02:53] csched2: migrate_on_runq TOTAL[          
83]  CPU00[         5]  CPU01[         7] CPU02[        10]  
CPU03[         7]
(XEN) [2021-06-14 23:02:53] CPU04[         6]  CPU05[         6]  
CPU06[         7] CPU07[        10]
(XEN) [2021-06-14 23:02:53] CPU08[         6]  CPU09[         4]  
CPU10[        10] CPU11[         5]
(XEN) [2021-06-14 23:02:53] csched2: migrate_no_runq TOTAL[        
2017]  CPU00[       200]  CPU01[       180] CPU02[       175]  
CPU03[       169]
(XEN) [2021-06-14 23:02:53] CPU04[       170]  CPU05[       150]  
CPU06[       155] CPU07[       167]
(XEN) [2021-06-14 23:02:53] CPU08[       150]  CPU09[       150]  
CPU10[       170] CPU11[       181]
(XEN) [2021-06-14 23:02:53] csched2: runtime_min_timer TOTAL[    
72280957]  CPU00[   4978109]  CPU01[   5487769] CPU02[   5196454]  
CPU03[   5215313]
(XEN) [2021-06-14 23:02:53] CPU04[   4711060]  CPU05[   4759081]  
CPU06[   6955494]  CPU07[ 7400382]
(XEN) [2021-06-14 23:02:53] CPU08[   6891188]  CPU09[   7165317]  
CPU10[   6798065]  CPU11[ 6722856]
(XEN) [2021-06-14 23:02:53] csched2: runtime_max_timer TOTAL[    
27380270]  CPU00[   1844873]  CPU01[   1829180] CPU02[   1855250]  
CPU03[   1839474]
(XEN) [2021-06-14 23:02:53] CPU04[   1840844]  CPU05[   1839481]  
CPU06[   2696125]  CPU07[ 2706080]
(XEN) [2021-06-14 23:02:53] CPU08[   2732319]  CPU09[   2714485]  
CPU10[   2737640]  CPU11[ 2744527]
(XEN) [2021-06-14 23:02:53] csched2: pick_resource TOTAL[         909]  
CPU00[       103]  CPU01[        85] CPU02[        67]  CPU03[        64]
(XEN) [2021-06-14 23:02:53] CPU04[        78]  CPU05[        66]  
CPU06[        76] CPU07[        68]
(XEN) [2021-06-14 23:02:53] CPU08[        78]  CPU09[        65]  
CPU10[        80] CPU11[        79]
(XEN) [2021-06-14 23:02:53] csched2: need_fallback_cpu TOTAL[           0]
(XEN) [2021-06-14 23:02:53] csched2: migrated TOTAL[    74432913]  
CPU00[   4953449]  CPU01[   4943252] CPU02[   4944818]  CPU03[   4926708]
(XEN) [2021-06-14 23:02:53] CPU04[   4923933]  CPU05[   4908607]  
CPU06[   7440838]  CPU07[ 7484221]
(XEN) [2021-06-14 23:02:53] CPU08[   7502586]  CPU09[   7442285]  
CPU10[   7491230]  CPU11[ 7471088]
(XEN) [2021-06-14 23:02:53] csched2: migrate_resisted TOTAL[     
3047248]  CPU00[    261206]  CPU01[    259510] CPU02[    260772]  
CPU03[    263325]
(XEN) [2021-06-14 23:02:53] CPU04[    258056]  CPU05[    265342]  
CPU06[    225253]  CPU07[ 244738]
(XEN) [2021-06-14 23:02:53] CPU08[    308578]  CPU09[    228671]  
CPU10[    227100]  CPU11[ 244697]
(XEN) [2021-06-14 23:02:53] csched2: credit_reset TOTAL[     3852877]  
CPU00[    318074]  CPU01[    322881] CPU02[    322653]  CPU03[    321017]
(XEN) [2021-06-14 23:02:53] CPU04[    320985]  CPU05[    317008]  
CPU06[    312510]  CPU07[ 326068]
(XEN) [2021-06-14 23:02:53] CPU08[    321385]  CPU09[    325287]  
CPU10[    322423]  CPU11[ 322589]
(XEN) [2021-06-14 23:02:53] csched2: deferred_to_tickled_cpu TOTAL[    
52041253]  CPU00[   3414118]  CPU01[   4156735] CPU02[   3659778]  
CPU03[   3710606]
(XEN) [2021-06-14 23:02:53] CPU04[   3057353]  CPU05[   3142181]  
CPU06[   5307453]  CPU07[ 5636271]
(XEN) [2021-06-14 23:02:53] CPU08[   4930845]  CPU09[   5346693]  
CPU10[   4915708]  CPU11[ 4763568]
(XEN) [2021-06-14 23:02:53] csched2: tickled_cpu_overwritten 
TOTAL[          11]  CPU00[         1]  CPU01[         2] CPU02[         
1]  CPU03[         1]
(XEN) [2021-06-14 23:02:53] CPU04[         2]  CPU05[         0]  
CPU06[         0] CPU07[         1]
(XEN) [2021-06-14 23:02:53] CPU08[         0]  CPU09[         0]  
CPU10[         1] CPU11[         2]
(XEN) [2021-06-14 23:02:53] csched2: tickled_cpu_overridden TOTAL[     
3151147]  CPU00[    182704]  CPU01[    182825] CPU02[    182161]  
CPU03[    182090]
(XEN) [2021-06-14 23:02:53] CPU04[    181397]  CPU05[    181895]  
CPU06[    334256]  CPU07[ 345981]
(XEN) [2021-06-14 23:02:53] CPU08[    345437]  CPU09[    344212]  
CPU10[    343746]  CPU11[ 344444]
(XEN) [2021-06-14 23:02:53] PG_need_flush tlb flushes TOTAL[      
549864]  CPU00[     25455]  CPU01[     45111] CPU02[     41430]  
CPU03[     43741]
(XEN) [2021-06-14 23:02:53] CPU04[     43584]  CPU05[     22757]  
CPU06[     40993] CPU07[     85853]
(XEN) [2021-06-14 23:02:53] CPU08[     51541]  CPU09[     57473]  
CPU10[     52934] CPU11[     38992]
(XEN) [2021-06-14 23:02:53] [q: dump domain (and guest debug) info]
(XEN) [2021-06-14 23:02:53] 'q' pressed -> dumping domain info (now = 
16923891556598)
(XEN) [2021-06-14 23:02:53] General information for domain 0:
(XEN) [2021-06-14 23:02:53]     refcnt=8 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:53]     nr_pages=4194304 xenheap_pages=2 
shared_pages=0 paged_pages=0 dirty_cpus={2} max_pages=4194304
(XEN) [2021-06-14 23:02:53] handle=00000000-0000-0000-0000-000000000000 
vm_assist=0000002d
(XEN) [2021-06-14 23:02:53] Rangesets belonging to domain 0:
(XEN) [2021-06-14 23:02:53]     Interrupts { 1-71, 74-126 }
(XEN) [2021-06-14 23:02:53]     I/O Memory { 0-c7ffb, c7ffd-fbffb, 
fbffd-fedff, fef00-3ffffffff }
(XEN) [2021-06-14 23:02:53]     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, 
a2-3f7, 400-407, 40c-cfb, d00-ffff }
(XEN) [2021-06-14 23:02:53]     log-dirty  { }
(XEN) [2021-06-14 23:02:53] Memory pages belonging to domain 0:
(XEN) [2021-06-14 23:02:53]     DomPage list too long to display
(XEN) [2021-06-14 23:02:53]     XenPage 000000000007dffc: 
caf=c000000000000002, taf=e400000000000002
(XEN) [2021-06-14 23:02:53]     XenPage 0000000001037c45: 
caf=c000000000000002, taf=e400000000000002
(XEN) [2021-06-14 23:02:53] NODE affinity for domain 0: [0-1]
(XEN) [2021-06-14 23:02:53] VCPU information and callbacks for domain 0:
(XEN) [2021-06-14 23:02:53]   UNIT0 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU0: CPU4 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT1 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU1: CPU5 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT2 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU2: CPU5 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=0
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT3 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU3: CPU6 [has=F] poll=0 
upcall_pend=01 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT4 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU4: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT5 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU5: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT6 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU6: CPU7 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53]   UNIT7 affinities: hard={0-11} soft={0-11}
(XEN) [2021-06-14 23:02:53]     VCPU7: CPU0 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:53]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:53] No periodic timer
(XEN) [2021-06-14 23:02:53] General information for domain 2:
(XEN) [2021-06-14 23:02:53]     refcnt=3 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:53]     nr_pages=4978763 xenheap_pages=23 
shared_pages=0 paged_pages=0 dirty_cpus={2,10} max_pages=4980992
(XEN) [2021-06-14 23:02:53] handle=39adc800-c94e-409a-83df-1ca7eb320ed7 
vm_assist=00000020
(XEN) [2021-06-14 23:02:53]     paging assistance: hap refcounts 
translate external
(XEN) [2021-06-14 23:02:53] Rangesets belonging to domain 2:
(XEN) [2021-06-14 23:02:53]     ioreq_server 0 pci { 0, 8-9, b, 10, 18, 
20, 30, 38 }
(XEN) [2021-06-14 23:02:53]     ioreq_server 0 memory { a0000-bffff, 
40000000-407fffff, 41000000-413fffff, 42000000-42ffffff, 
43040000-4307ffff, 43090000-4309500f, 43095020-4309507b, 
fec00000-fec00fff, fed00000-fed003ff, fee00000-feefffff }
(XEN) [2021-06-14 23:02:53]     ioreq_server 0 port { 0-1f, 60, 64, 
70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1f0-1f7, 
376, 3b0-3df, 3f1-3ff, 510-511, cf8-cff, ae00-ae13, af00-af1f, 
afe0-afe3, b000-b005, b008-b00b, c000-c0ff, c200-c20f }
(XEN) [2021-06-14 23:02:53]     Interrupts { 32, 108 }
(XEN) [2021-06-14 23:02:53]     I/O Memory { c6c00-c6c3f }
(XEN) [2021-06-14 23:02:53]     I/O Ports  { }
(XEN) [2021-06-14 23:02:53]     log-dirty  { }
(XEN) [2021-06-14 23:02:53] Memory pages belonging to domain 2:
(XEN) [2021-06-14 23:02:53]     DomPage list too long to display
(XEN) [2021-06-14 23:02:54]     PoD entries=0 cachesize=0
(XEN) [2021-06-14 23:02:54]     XenPage 000000000007dff4: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000000b9acaf: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b38c: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c7eb: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001404104: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b39d: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b396: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b3af: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b3a7: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c7fd: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c7f4: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf9b: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf94: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf8b: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf84: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf7b: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf72: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf69: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf60: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141cf55: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001de56e5: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001de3bae: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001de2f77: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000000b9af65: 
caf=8040000000000002, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000000b1e031: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000000b1e030: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54] NODE affinity for domain 2: [0]
(XEN) [2021-06-14 23:02:54] VCPU information and callbacks for domain 2:
(XEN) [2021-06-14 23:02:54]   UNIT0 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:54]     VCPU0: CPU8 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT1 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:54]     VCPU1: CPU6 [has=T] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=7
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT2 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:54]     VCPU2: CPU2 [has=T] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=5
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT3 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:54]     VCPU3: CPU1 [has=T] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54] General information for domain 3:
(XEN) [2021-06-14 23:02:54]     refcnt=3 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:54]     nr_pages=1572923 xenheap_pages=6 
shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=1573120
(XEN) [2021-06-14 23:02:54] handle=303cfd3a-6f50-42f6-92ae-c389b52435cd 
vm_assist=00000000
(XEN) [2021-06-14 23:02:54]     paging assistance: hap refcounts 
translate external
(XEN) [2021-06-14 23:02:54] Rangesets belonging to domain 3:
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 pci { 0, 8-9, b, 10 }
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 memory { 
f0000000-f0ffffff, fec00000-fec00fff, fed00000-fed003ff, 
fee00000-feefffff }
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 port { 0-1f, 60, 64, 
70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1f0-1f7, 
376, 3f1-3ff, cf8-cff, ae00-ae13, af00-af1f, afe0-afe3, b000-b005, 
b008-b00b, c000-c10f }
(XEN) [2021-06-14 23:02:54]     Interrupts { }
(XEN) [2021-06-14 23:02:54]     I/O Memory { }
(XEN) [2021-06-14 23:02:54]     I/O Ports  { }
(XEN) [2021-06-14 23:02:54]     log-dirty  { }
(XEN) [2021-06-14 23:02:54] Memory pages belonging to domain 3:
(XEN) [2021-06-14 23:02:54]     DomPage list too long to display
(XEN) [2021-06-14 23:02:54]     PoD entries=0 cachesize=0
(XEN) [2021-06-14 23:02:54]     XenPage 00000000000785fe: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000000b1e064: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c715: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141b3b2: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c713: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000141c7e4: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000000b1e070: 
caf=8040000000000002, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000002031d87: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000002031d86: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54] NODE affinity for domain 3: [1]
(XEN) [2021-06-14 23:02:54] VCPU information and callbacks for domain 3:
(XEN) [2021-06-14 23:02:54]   UNIT0 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU0: CPU7 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT1 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU1: CPU3 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT2 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU2: CPU0 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT3 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU3: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54] General information for domain 5:
(XEN) [2021-06-14 23:02:54]     refcnt=3 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:54]     nr_pages=4948045 xenheap_pages=11 
shared_pages=0 paged_pages=0 dirty_cpus={0-1,4,6,10} max_pages=4980992
(XEN) [2021-06-14 23:02:54] handle=f70f2a85-5027-4a70-9aa2-0d5ebff5c7e5 
vm_assist=00000020
(XEN) [2021-06-14 23:02:54]     paging assistance: hap refcounts 
translate external
(XEN) [2021-06-14 23:02:54] Rangesets belonging to domain 5:
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 pci { 0, 8-9, b, 10, 18, 
28, 30, 38 }
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 memory { a0000-bffff, 
40000000-58ffffff, 59040000-5907ffff, 590d0000-590ddfff, 
fec00000-fec00fff, fed00000-fed003ff, fee00000-feefffff }
(XEN) [2021-06-14 23:02:54]     ioreq_server 0 port { 0-1f, 60, 64, 
70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1ce-1d1, 
1f0-1f7, 376, 3b4-3b5, 3ba, 3c0-3cf, 3d4-3d5, 3da, 3f1-3ff, cf8-cff, 
ae00-ae13, af00-af1f, afe0-afe3, b000-b005, b008-b00b, c000-c20f, 
c240-c25f }
(XEN) [2021-06-14 23:02:54]     Interrupts { 27, 56, 60, 116-123, 126 }
(XEN) [2021-06-14 23:02:54]     I/O Memory { 0-2, c6d00-c6d07, 
e0000-effff, fbe00-fbe63 }
(XEN) [2021-06-14 23:02:54]     I/O Ports  { f000-f0ff }
(XEN) [2021-06-14 23:02:54]     log-dirty  { }
(XEN) [2021-06-14 23:02:54] Memory pages belonging to domain 5:
(XEN) [2021-06-14 23:02:54]     DomPage list too long to display
(XEN) [2021-06-14 23:02:54]     PoD entries=0 cachesize=0
(XEN) [2021-06-14 23:02:54]     XenPage 000000000007dd10: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 000000000157a66e: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dd1f98: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dd080e: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dd1ef8: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001de2ee1: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001de2ce4: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dd00c5: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dd0809: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 0000000001dcffaa: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     XenPage 00000000015c150d: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 000000000157a67a: 
caf=8040000000000002, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000001ef8ac7: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54]     ExtraPage 0000000001ef8ac6: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:54] NODE affinity for domain 5: [1]
(XEN) [2021-06-14 23:02:54] VCPU information and callbacks for domain 5:
(XEN) [2021-06-14 23:02:54]   UNIT0 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU0: CPU10 [has=T] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:54]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:54]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:54] No periodic timer
(XEN) [2021-06-14 23:02:54]   UNIT1 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:54]     VCPU1: CPU0 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=2
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT2 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU2: CPU7 [has=T] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT3 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU3: CPU0 [has=T] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT4 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU4: CPU0 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT5 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU5: CPU8 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=7
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55] General information for domain 6:
(XEN) [2021-06-14 23:02:55]     refcnt=3 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:55]     nr_pages=9697355 xenheap_pages=8 
shared_pages=0 paged_pages=0 dirty_cpus={3,5} max_pages=9699584
(XEN) [2021-06-14 23:02:55] handle=e7958404-b10a-434b-8f28-80506871a4a0 
vm_assist=00000020
(XEN) [2021-06-14 23:02:55]     paging assistance: hap refcounts 
translate external
(XEN) [2021-06-14 23:02:55] Rangesets belonging to domain 6:
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 pci { 0, 8-9, b, 10, 18, 
28 }
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 memory { a0000-bffff, 
f0000000-f07fffff, f1000000-f13fffff, f2000000-f307ffff, 
f30f0000-f30f4fff, fec00000-fec00fff, fed00000-fed003ff, 
fee00000-feefffff }
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 port { 0-1f, 60, 64, 
70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1f0-1f7, 
376, 3b0-3df, 3f1-3ff, 510-511, cf8-cff, ae00-ae13, af00-af1f, 
afe0-afe3, b000-b005, b008-b00b, c000-c10f, c140-c15f }
(XEN) [2021-06-14 23:02:55]     Interrupts { 19, 102-106 }
(XEN) [2021-06-14 23:02:55]     I/O Memory { c6900-c6983 }
(XEN) [2021-06-14 23:02:55]     I/O Ports  { 4000-401f }
(XEN) [2021-06-14 23:02:55]     log-dirty  { }
(XEN) [2021-06-14 23:02:55] Memory pages belonging to domain 6:
(XEN) [2021-06-14 23:02:55]     DomPage list too long to display
(XEN) [2021-06-14 23:02:55]     PoD entries=0 cachesize=0
(XEN) [2021-06-14 23:02:55]     XenPage 000000000007dd9c: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 000000000207fa13: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb9e8: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb783: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001de6f70: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb876: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb58f: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb768: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 0000000000b9af72: 
caf=8040000000000002, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 0000000000b476a7: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 0000000000b476a6: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:55] NODE affinity for domain 6: [0]
(XEN) [2021-06-14 23:02:55] VCPU information and callbacks for domain 6:
(XEN) [2021-06-14 23:02:55]   UNIT0 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU0: CPU2 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT1 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU1: CPU4 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT2 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU2: CPU2 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT3 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU3: CPU5 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT4 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU4: CPU3 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=3
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT5 affinities: hard={0-11} soft={0-5}
(XEN) [2021-06-14 23:02:55]     VCPU5: CPU7 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55] General information for domain 7:
(XEN) [2021-06-14 23:02:55]     refcnt=3 dying=0 pause_count=0
(XEN) [2021-06-14 23:02:55]     nr_pages=4978763 xenheap_pages=10 
shared_pages=0 paged_pages=0 dirty_cpus={3-4} max_pages=4980992
(XEN) [2021-06-14 23:02:55] handle=86a83579-0463-4248-a894-78df22c57374 
vm_assist=00000020
(XEN) [2021-06-14 23:02:55]     paging assistance: hap refcounts 
translate external
(XEN) [2021-06-14 23:02:55] Rangesets belonging to domain 7:
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 pci { 0, 8-9, b, 10, 18, 
28, 30, 38 }
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 memory { a0000-bffff, 
40000000-507fffff, 51000000-513fffff, 52000000-52ffffff, 
53040000-5307ffff, 530d0000-530dcfff, fec00000-fec00fff, 
fed00000-fed003ff, fee00000-feefffff }
(XEN) [2021-06-14 23:02:55]     ioreq_server 0 port { 0-1f, 60, 64, 
70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1f0-1f7, 
376, 3b0-3df, 3f1-3ff, 510-511, cf8-cff, ae00-ae13, af00-af1f, 
afe0-afe3, b000-b005, b008-b00b, c000-c20f }
(XEN) [2021-06-14 23:02:55]     Interrupts { 34, 64, 68, 109-115, 124-125 }
(XEN) [2021-06-14 23:02:55]     I/O Memory { c6b00-c6b07, fbd00-fbd63, 
3fff0000-3fffffff }
(XEN) [2021-06-14 23:02:55]     I/O Ports  { e000-e0ff }
(XEN) [2021-06-14 23:02:55]     log-dirty  { }
(XEN) [2021-06-14 23:02:55] Memory pages belonging to domain 7:
(XEN) [2021-06-14 23:02:55]     DomPage list too long to display
(XEN) [2021-06-14 23:02:55]     PoD entries=0 cachesize=0
(XEN) [2021-06-14 23:02:55]     XenPage 00000000000787c6: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001de2a50: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 000000000160cbb9: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001609afc: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001600c43: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 000000000160152e: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001601500: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000001600a75: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 000000000160cbfe: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     XenPage 0000000000fbb810: 
caf=c000000000000001, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 0000000000f870c2: 
caf=8040000000000002, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 00000000015ba8c4: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:55]     ExtraPage 00000000015ba8c3: 
caf=8040000000000003, taf=e400000000000001
(XEN) [2021-06-14 23:02:55] NODE affinity for domain 7: [1]
(XEN) [2021-06-14 23:02:55] VCPU information and callbacks for domain 7:
(XEN) [2021-06-14 23:02:55]   UNIT0 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU0: CPU5 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=5
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT1 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU1: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=6
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT2 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU2: CPU0 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT3 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU3: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT4 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU4: CPU8 [has=F] poll=0 
upcall_pend=00 upcall_mask=00 dirty_cpu=7
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=0
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55]   UNIT5 affinities: hard={0-11} soft={6-11}
(XEN) [2021-06-14 23:02:55]     VCPU5: CPU6 [has=F] poll=0 
upcall_pend=00 upcall_mask=00
(XEN) [2021-06-14 23:02:55]     pause_count=0 pause_flags=1
(XEN) [2021-06-14 23:02:55]     paging assistance: hap, 4 levels
(XEN) [2021-06-14 23:02:55] No periodic timer
(XEN) [2021-06-14 23:02:55] Notifying guest 0:0 (virq 1, port 0)
(XEN) [2021-06-14 23:02:55] Notifying guest 0:1 (virq 1, port 0)
(XEN) [2021-06-14 23:02:55] Notifying guest 0:2 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 0:3 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 0:4 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 0:5 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 0:6 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 0:7 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 2:0 (virq 1, port 11)
(XEN) [2021-06-14 23:02:56] Notifying guest 2:1 (virq 1, port 17)
(XEN) [2021-06-14 23:02:56] Notifying guest 2:2 (virq 1, port 23)
(XEN) [2021-06-14 23:02:56] Notifying guest 2:3 (virq 1, port 29)
(XEN) [2021-06-14 23:02:56] Notifying guest 3:0 (virq 1, port 11)
(XEN) [2021-06-14 23:02:56] Notifying guest 3:1 (virq 1, port 16)
(XEN) [2021-06-14 23:02:56] Notifying guest 3:2 (virq 1, port 21)
(XEN) [2021-06-14 23:02:56] Notifying guest 3:3 (virq 1, port 26)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:0 (virq 1, port 13)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:1 (virq 1, port 19)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:2 (virq 1, port 25)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:3 (virq 1, port 31)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:4 (virq 1, port 37)
(XEN) [2021-06-14 23:02:56] Notifying guest 5:5 (virq 1, port 43)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:0 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:1 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:2 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:3 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:4 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 6:5 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:0 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:1 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:2 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:3 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:4 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Notifying guest 7:5 (virq 1, port 0)
(XEN) [2021-06-14 23:02:56] Shared frames 0 -- Saved frames 0
(XEN) [2021-06-14 23:02:56] [r: dump run queues]
(XEN) [2021-06-14 23:02:56] sched_smt_power_savings: disabled
(XEN) [2021-06-14 23:02:56] NOW=16926506786025
(XEN) [2021-06-14 23:02:56] Online Cpus: 0-11
(XEN) [2021-06-14 23:02:56] Cpupool 0:
(XEN) [2021-06-14 23:02:56] Cpus: 0-11
(XEN) [2021-06-14 23:02:56] Scheduling granularity: cpu, 1 CPU per 
sched-resource
(XEN) [2021-06-14 23:02:56] Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) [2021-06-14 23:02:56] Active queues: 2
(XEN) [2021-06-14 23:02:56]     default-weight     = 256
(XEN) [2021-06-14 23:02:56] Runqueue 0:
(XEN) [2021-06-14 23:02:56]     ncpus              = 6
(XEN) [2021-06-14 23:02:56]     cpus               = 0-5
(XEN) [2021-06-14 23:02:56]     max_weight         = 8192
(XEN) [2021-06-14 23:02:56]     pick_bias          = 1
(XEN) [2021-06-14 23:02:56]     instload           = 6
(XEN) [2021-06-14 23:02:56]     aveload            = 922129 (~351%)
(XEN) [2021-06-14 23:02:56]     idlers: 010
(XEN) [2021-06-14 23:02:56]     tickled: 000
(XEN) [2021-06-14 23:02:56]     fully idle cores: 010
(XEN) [2021-06-14 23:02:56] Runqueue 1:
(XEN) [2021-06-14 23:02:56]     ncpus              = 6
(XEN) [2021-06-14 23:02:56]     cpus               = 6-11
(XEN) [2021-06-14 23:02:56]     max_weight         = 8192
(XEN) [2021-06-14 23:02:56]     pick_bias          = 9
(XEN) [2021-06-14 23:02:56]     instload           = 5
(XEN) [2021-06-14 23:02:56]     aveload            = 948262 (~361%)
(XEN) [2021-06-14 23:02:56]     idlers: 100
(XEN) [2021-06-14 23:02:56]     tickled: 800
(XEN) [2021-06-14 23:02:56]     fully idle cores: 100
(XEN) [2021-06-14 23:02:56] Domain info:
(XEN) [2021-06-14 23:02:56]     Domain: 0 w 8192 c 0 v 8
(XEN) [2021-06-14 23:02:56]       1: [0.0] flags=0 cpu=4 credit=10500000 
[w=8192] load=18377 (~7%)
(XEN) [2021-06-14 23:02:56]       2: [0.1] flags=0 cpu=0 credit=10500000 
[w=8192] load=15789 (~6%)
(XEN) [2021-06-14 23:02:56]       3: [0.2] flags=2 cpu=5 credit=-6474099 
[w=8192] load=19757 (~7%)
(XEN) [2021-06-14 23:02:56]       4: [0.3] flags=2 cpu=7 
credit=-10000000 [w=8192] load=21744 (~8%)
(XEN) [2021-06-14 23:02:56]       5: [0.4] flags=0 cpu=10 credit=9806067 
[w=8192] load=6582 (~2%)
(XEN) [2021-06-14 23:02:56]       6: [0.5] flags=2 cpu=6 credit=6065370 
[w=8192] load=17286 (~6%)
(XEN) [2021-06-14 23:02:56]       7: [0.6] flags=0 cpu=7 credit=10433976 
[w=8192] load=21940 (~8%)
(XEN) [2021-06-14 23:02:56]       8: [0.7] flags=0 cpu=0 credit=617262 
[w=8192] load=12078 (~4%)
(XEN) [2021-06-14 23:02:56]     Domain: 2 w 6144 c 0 v 4
(XEN) [2021-06-14 23:02:56]       9: [2.0] flags=0 cpu=10 
credit=-10000000 [w=6144] load=121976 (~46%)
(XEN) [2021-06-14 23:02:56]      10: [2.1] flags=0 cpu=7 
credit=-10000000 [w=6144] load=169292 (~64%)
(XEN) [2021-06-14 23:02:56]      11: [2.2] flags=0 cpu=4 
credit=-10000000 [w=6144] load=119004 (~45%)
(XEN) [2021-06-14 23:02:56]      12: [2.3] flags=0 cpu=0 
credit=-10000000 [w=6144] load=119251 (~45%)
(XEN) [2021-06-14 23:02:56]     Domain: 3 w 256 c 0 v 4
(XEN) [2021-06-14 23:02:56]      13: [3.0] flags=0 cpu=10 credit=7495200 
[w=256] load=15751 (~6%)
(XEN) [2021-06-14 23:02:56]      14: [3.1] flags=2 cpu=2 
credit=-10000000 [w=256] load=741 (~0%)
(XEN) [2021-06-14 23:02:56]      15: [3.2] flags=12 cpu=3 
credit=-10000000 [w=256] load=128 (~0%)
(XEN) [2021-06-14 23:02:56]      16: [3.3] flags=2 cpu=6 
credit=-10000000 [w=256] load=46 (~0%)
(XEN) [2021-06-14 23:02:56]     Domain: 5 w 4096 c 0 v 6
(XEN) [2021-06-14 23:02:56]      17: [5.0] flags=2 cpu=11 credit=9870780 
[w=4096] load=163724 (~62%)
(XEN) [2021-06-14 23:02:56]      18: [5.1] flags=0 cpu=1 
credit=-10000000 [w=4096] load=169315 (~64%)
(XEN) [2021-06-14 23:02:56]      19: [5.2] flags=0 cpu=7 
credit=-10000000 [w=4096] load=125433 (~47%)
(XEN) [2021-06-14 23:02:56]      20: [5.3] flags=0 cpu=0 credit=10500000 
[w=4096] load=86081 (~32%)
(XEN) [2021-06-14 23:02:56]      21: [5.4] flags=0 cpu=0 
credit=-10000000 [w=4096] load=111423 (~42%)
(XEN) [2021-06-14 23:02:56]      22: [5.5] flags=2 cpu=8 
credit=-10000000 [w=4096] load=94772 (~36%)
(XEN) [2021-06-14 23:02:56]     Domain: 6 w 7168 c 0 v 6
(XEN) [2021-06-14 23:02:56]      23: [6.0] flags=0 cpu=1 credit=7868149 
[w=7168] load=11321 (~4%)
(XEN) [2021-06-14 23:02:56]      24: [6.1] flags=0 cpu=2 credit=10488692 
[w=7168] load=11481 (~4%)
(XEN) [2021-06-14 23:02:56]      25: [6.2] flags=0 cpu=3 credit=-2788456 
[w=7168] load=6278 (~2%)
(XEN) [2021-06-14 23:02:56]      26: [6.3] flags=0 cpu=4 credit=10500000 
[w=7168] load=8498 (~3%)
(XEN) [2021-06-14 23:02:56]      27: [6.4] flags=0 cpu=5 credit=10500000 
[w=7168] load=70661 (~26%)
(XEN) [2021-06-14 23:02:56]      28: [6.5] flags=2 cpu=10 
credit=-2571937 [w=7168] load=23950 (~9%)
(XEN) [2021-06-14 23:02:56]     Domain: 7 w 4096 c 0 v 6
(XEN) [2021-06-14 23:02:56]      29: [7.0] flags=0 cpu=2 
credit=-10000000 [w=4096] load=32708 (~12%)
(XEN) [2021-06-14 23:02:56]      30: [7.1] flags=0 cpu=7 
credit=-10000000 [w=4096] load=64408 (~24%)
(XEN) [2021-06-14 23:02:56]      31: [7.2] flags=0 cpu=5 credit=10500000 
[w=4096] load=25520 (~9%)
(XEN) [2021-06-14 23:02:56]      32: [7.3] flags=0 cpu=8 
credit=-10000000 [w=4096] load=28079 (~10%)
(XEN) [2021-06-14 23:02:56]      33: [7.4] flags=0 cpu=10 
credit=-10000000 [w=4096] load=22690 (~8%)
(XEN) [2021-06-14 23:02:56]      34: [7.5] flags=0 cpu=11 
credit=10500000 [w=4096] load=26152 (~9%)
(XEN) [2021-06-14 23:02:56] Runqueue 0:
(XEN) [2021-06-14 23:02:56] CPU[00] runq=0, sibling={0}, core={0-5}
(XEN) [2021-06-14 23:02:56]     run: [0.1] flags=2 cpu=0 
credit=-10000000 [w=8192] load=37296 (~14%)
(XEN) [2021-06-14 23:02:56] CPU[01] runq=0, sibling={1}, core={0-5}
(XEN) [2021-06-14 23:02:56] CPU[02] runq=0, sibling={2}, core={0-5}
(XEN) [2021-06-14 23:02:56]     run: [3.1] flags=2 cpu=2 
credit=-10000000 [w=256] load=741 (~0%)
(XEN) [2021-06-14 23:02:56] CPU[03] runq=0, sibling={3}, core={0-5}
(XEN) [2021-06-14 23:02:56]     run: [3.2] flags=2 cpu=3 
credit=-10000000 [w=256] load=128 (~0%)
(XEN) [2021-06-14 23:02:56] CPU[04] runq=0, sibling={4}, core={0-5}
(XEN) [2021-06-14 23:02:56]     run: [0.0] flags=2 cpu=4 
credit=-10000000 [w=8192] load=21202 (~8%)
(XEN) [2021-06-14 23:02:56] CPU[05] runq=0, sibling={5}, core={0-5}
(XEN) [2021-06-14 23:02:56]     run: [0.2] flags=2 cpu=5 credit=-6474099 
[w=8192] load=19757 (~7%)
(XEN) [2021-06-14 23:02:56] RUNQ:
(XEN) [2021-06-14 23:02:56]       0: [5.3] flags=0 cpu=0 credit=10500000 
[w=4096] load=86081 (~32%)
(XEN) [2021-06-14 23:02:56]       1: [2.2] flags=0 cpu=4 
credit=-10000000 [w=6144] load=119004 (~45%)
(XEN) [2021-06-14 23:02:56] Runqueue 1:
(XEN) [2021-06-14 23:02:56] CPU[06] runq=1, sibling={6}, core={6-11}
(XEN) [2021-06-14 23:02:56]     run: [3.3] flags=2 cpu=6 
credit=-10000000 [w=256] load=46 (~0%)
(XEN) [2021-06-14 23:02:56] CPU[07] runq=1, sibling={7}, core={6-11}
(XEN) [2021-06-14 23:02:56]     run: [0.3] flags=2 cpu=7 
credit=-10000000 [w=8192] load=21744 (~8%)
(XEN) [2021-06-14 23:02:56] CPU[08] runq=1, sibling={8}, core={6-11}
(XEN) [2021-06-14 23:02:56] CPU[09] runq=1, sibling={9}, core={6-11}
(XEN) [2021-06-14 23:02:56] CPU[10] runq=1, sibling={10}, core={6-11}
(XEN) [2021-06-14 23:02:56] CPU[11] runq=1, sibling={11}, core={6-11}
(XEN) [2021-06-14 23:02:56]     run: [5.0] flags=2 cpu=11 credit=9870780 
[w=4096] load=163724 (~62%)
(XEN) [2021-06-14 23:02:56] RUNQ:
(XEN) [2021-06-14 23:02:56]       0: [0.6] flags=0 cpu=7 credit=10433976 
[w=8192] load=21940 (~8%)
(XEN) [2021-06-14 23:02:56]       1: [2.1] flags=0 cpu=7 
credit=-10000000 [w=6144] load=178689 (~68%)
(XEN) [2021-06-14 23:02:56]       2: [7.1] flags=0 cpu=7 
credit=-10000000 [w=4096] load=64408 (~24%)
(XEN) [2021-06-14 23:02:56]       3: [5.2] flags=0 cpu=7 
credit=-10000000 [w=4096] load=126185 (~48%)
(XEN) [2021-06-14 23:02:56] CPUs info:
(XEN) [2021-06-14 23:02:56] CPU[00] current=d0v1, curr=d0v1, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[01] current=d6v4, curr=d6v4, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[02] current=d5v4, curr=d5v4, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[03] current=d2v3, curr=d2v3, prev=d0v2
(XEN) [2021-06-14 23:02:56] CPU[04] current=d6v1, curr=d6v1, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[05] current=d7v0, curr=d7v0, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[06] current=d5v5, curr=d5v5, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[07] current=d3v0, curr=d3v0, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[08] current=d2v0, curr=d5v2, prev=d2v0
(XEN) [2021-06-14 23:02:56] CPU[09] current=d[IDLE]v9, curr=d[IDLE]v9, 
prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[10] current=d0v4, curr=d0v4, prev=NULL
(XEN) [2021-06-14 23:02:56] CPU[11] current=d7v5, curr=d7v5, prev=d0v6
(XEN) [2021-06-14 23:02:56] [s: dump softtsc stats]
(XEN) [2021-06-14 23:02:56] TSC marked as reliable, warp = 0 (count=2)
(XEN) [2021-06-14 23:02:56] dom2(hvm): 
mode=0,ofs=0x323f78f3d9,khz=2399996,inc=1
(XEN) [2021-06-14 23:02:56] dom3(hvm): 
mode=0,ofs=0x3250b0bd9e,khz=2399996,inc=1
(XEN) [2021-06-14 23:02:56] dom5(hvm): 
mode=0,ofs=0x3cfe10b4a9,khz=2399996,inc=1
(XEN) [2021-06-14 23:02:56] dom6(hvm): 
mode=0,ofs=0xa34e58b7b78,khz=2399996,inc=1
(XEN) [2021-06-14 23:02:56] dom7(hvm): 
mode=0,ofs=0xf3c2e70dc08,khz=2399996,inc=1
(XEN) [2021-06-14 23:02:56] [t: display multi-cpu clock info]
(XEN) [2021-06-14 23:02:57] Synced stime skew: max=379ns avg=379ns 
samples=1 current=379ns
(XEN) [2021-06-14 23:02:57] Synced cycles skew: max=582 avg=582 
samples=1 current=582
(XEN) [2021-06-14 23:02:57] [u: dump NUMA info]
(XEN) [2021-06-14 23:02:57] 'u' pressed -> dumping numa info (now = 
16927332435516)
(XEN) [2021-06-14 23:02:57] NODE0 start->0 size->17301504 free->2047633
(XEN) [2021-06-14 23:02:57] NODE1 start->17301504 size->16777216 
free->495793
(XEN) [2021-06-14 23:02:57] CPU0...5 -> NODE0
(XEN) [2021-06-14 23:02:57] CPU6...11 -> NODE1
(XEN) [2021-06-14 23:02:57] Memory location of each domain:
(XEN) [2021-06-14 23:02:57] Domain 0 (total: 4194304):
(XEN) [2021-06-14 23:02:57]     Node 0: 2037218
(XEN) [2021-06-14 23:02:57]     Node 1: 2157086
(XEN) [2021-06-14 23:02:57] Domain 2 (total: 4978763):
(XEN) [2021-06-14 23:02:57]     Node 0: 4317759
(XEN) [2021-06-14 23:02:57]     Node 1: 661004
(XEN) [2021-06-14 23:02:57] Domain 3 (total: 1572923):
(XEN) [2021-06-14 23:02:57]     Node 0: 0
(XEN) [2021-06-14 23:02:57]     Node 1: 1572923
(XEN) [2021-06-14 23:02:57] Domain 5 (total: 4948045):
(XEN) [2021-06-14 23:02:57]     Node 0: 0
(XEN) [2021-06-14 23:02:57]     Node 1: 4948045
(XEN) [2021-06-14 23:02:57] Domain 6 (total: 9697355):
(XEN) [2021-06-14 23:02:57]     Node 0: 7862347
(XEN) [2021-06-14 23:02:57]     Node 1: 1835008
(XEN) [2021-06-14 23:02:57] Domain 7 (total: 4978763):
(XEN) [2021-06-14 23:02:57]     Node 0: 262144
(XEN) [2021-06-14 23:02:57]     Node 1: 4716619
(XEN) [2021-06-14 23:02:57] [v: dump VT-x VMCSs]
(XEN) [2021-06-14 23:02:57] *********** VMCS Areas **************
(XEN) [2021-06-14 23:02:57]
(XEN) [2021-06-14 23:02:57] >>> Domain 2 <<<
(XEN) [2021-06-14 23:02:57]     VCPU 0
(XEN) [2021-06-14 23:02:57] *** Guest State ***
(XEN) [2021-06-14 23:02:57] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:57] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:57] CR3 = 0x00000005161ea003
(XEN) [2021-06-14 23:02:57] RSP = 0xffffc9000bcf3ab0 
(0xffffc9000bcf3ab0)  RIP = 0xffffffff8196085e (0xffffffff8196085e)
(XEN) [2021-06-14 23:02:57] RFLAGS=0x00000206 (0x00000206)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:57] Sysenter RSP=fffffe0000003000 
CS:RIP=0010:ffffffff81c013a0
(XEN) [2021-06-14 23:02:57]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:57]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   FS: 0000 1c000 ffffffff 00007f04815f4740
(XEN) [2021-06-14 23:02:57]   GS: 0000 1c000 ffffffff ffff88856c200000
(XEN) [2021-06-14 23:02:57] GDTR:            0000007f fffffe0000001000
(XEN) [2021-06-14 23:02:57] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:02:57]   TR: 0040 0008b 0000206f fffffe0000003000
(XEN) [2021-06-14 23:02:57] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:02:57] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:57] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:57] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:57] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:57] *** Host State ***
(XEN) [2021-06-14 23:02:57] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca17f70
(XEN) [2021-06-14 23:02:57] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:57] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037cb8040
(XEN) [2021-06-14 23:02:57] GDTBase=ffff83207cba5000 
IDTBase=ffff83207cbb2000
(XEN) [2021-06-14 23:02:57] CR0=0000000080050033 CR3=000000207f3db000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:57] Sysenter RSP=ffff83207ca17fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:57] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:57] *** Control State ***
(XEN) [2021-06-14 23:02:57] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:57] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:57] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:57] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:57] VMExit: intr_info=800000fc errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:02:57]         reason=00000001 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:57] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:57] TSC Offset = 0xfffd824c37f81ebb  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:57] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:57] EPT pointer = 0x0000000b9acc501e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:57] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:57] Virtual processor ID = 0x15eb VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:57]     VCPU 1
(XEN) [2021-06-14 23:02:57] *** Guest State ***
(XEN) [2021-06-14 23:02:57] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:57] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:57] CR3 = 0x0000000002421005
(XEN) [2021-06-14 23:02:57] RSP = 0xffffc900031b3eb0 
(0xffffc900031b3eb0)  RIP = 0xffffffff81ac19ad (0xffffffff81ac19ae)
(XEN) [2021-06-14 23:02:57] RFLAGS=0x00000286 (0x00000286)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:57] Sysenter RSP=fffffe000002f000 
CS:RIP=0010:ffffffff81c013a0
(XEN) [2021-06-14 23:02:57]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:57]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   GS: 0000 1c000 ffffffff ffff88856c280000
(XEN) [2021-06-14 23:02:57] GDTR:            0000007f fffffe000002d000
(XEN) [2021-06-14 23:02:57] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:02:57]   TR: 0040 0008b 0000206f fffffe000002f000
(XEN) [2021-06-14 23:02:57] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:02:57] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:57] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:57] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:57] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:57] *** Host State ***
(XEN) [2021-06-14 23:02:57] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca0ff70
(XEN) [2021-06-14 23:02:57] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:57] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037cd4040
(XEN) [2021-06-14 23:02:57] GDTBase=ffff83207cba6000 
IDTBase=ffff83207ca07000
(XEN) [2021-06-14 23:02:57] CR0=0000000080050033 CR3=0000000b40beb000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:57] Sysenter RSP=ffff83207ca0ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:57] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:57] *** Control State ***
(XEN) [2021-06-14 23:02:57] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:57] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:57] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:57] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:57] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:02:57]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:57] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:57] TSC Offset = 0xfffd824c37f81ebb  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:57] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:57] EPT pointer = 0x0000000b9acc501e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:57] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:57] Virtual processor ID = 0x3ab2 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:57]     VCPU 2
(XEN) [2021-06-14 23:02:57] *** Guest State ***
(XEN) [2021-06-14 23:02:57] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:57] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:57] CR3 = 0x0000000002421002
(XEN) [2021-06-14 23:02:57] RSP = 0xffffc900031bbeb0 
(0xffffc900031bbeb0)  RIP = 0xffffffff81ac19ad (0xffffffff81ac19ae)
(XEN) [2021-06-14 23:02:57] RFLAGS=0x00000286 (0x00000286)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:57] Sysenter RSP=fffffe000005b000 
CS:RIP=0010:ffffffff81c013a0
(XEN) [2021-06-14 23:02:57]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:57]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57]   GS: 0000 1c000 ffffffff ffff88856c300000
(XEN) [2021-06-14 23:02:57] GDTR:            0000007f fffffe0000059000
(XEN) [2021-06-14 23:02:57] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:57] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:02:57]   TR: 0040 0008b 0000206f fffffe000005b000
(XEN) [2021-06-14 23:02:57] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:02:57] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:57] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:57] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:57] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:57] *** Host State ***
(XEN) [2021-06-14 23:02:57] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff27f70
(XEN) [2021-06-14 23:02:57] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:57] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff28040
(XEN) [2021-06-14 23:02:57] GDTBase=ffff83103ff1e000 
IDTBase=ffff83103ff1f000
(XEN) [2021-06-14 23:02:57] CR0=0000000080050033 CR3=000000207f3da000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:57] Sysenter RSP=ffff83103ff27fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:57] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:57] *** Control State ***
(XEN) [2021-06-14 23:02:57] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:57] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:58] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:58] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:58] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:02:58]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:58] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:58] TSC Offset = 0xfffd824c37f81ebb  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:58] EPT pointer = 0x0000000b9acc501e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:58] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:58] Virtual processor ID = 0x6df4 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:58]     VCPU 3
(XEN) [2021-06-14 23:02:58] *** Guest State ***
(XEN) [2021-06-14 23:02:58] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:58] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:58] CR3 = 0x0000000002421003
(XEN) [2021-06-14 23:02:58] RSP = 0xffffc900031c3eb0 
(0xffffc900031c3eb0)  RIP = 0xffffffff81ac19ad (0xffffffff81ac19ae)
(XEN) [2021-06-14 23:02:58] RFLAGS=0x00000286 (0x00000286)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:58] Sysenter RSP=fffffe0000087000 
CS:RIP=0010:ffffffff81c013a0
(XEN) [2021-06-14 23:02:58]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:58]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   GS: 0000 1c000 ffffffff ffff88856c380000
(XEN) [2021-06-14 23:02:58] GDTR:            0000007f fffffe0000085000
(XEN) [2021-06-14 23:02:58] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:02:58]   TR: 0040 0008b 0000206f fffffe0000087000
(XEN) [2021-06-14 23:02:58] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:02:58] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:58] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:58] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:58] *** Host State ***
(XEN) [2021-06-14 23:02:58] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff0ff70
(XEN) [2021-06-14 23:02:58] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:58] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff14040
(XEN) [2021-06-14 23:02:58] GDTBase=ffff83103ff05000 
IDTBase=ffff83103ff12000
(XEN) [2021-06-14 23:02:58] CR0=0000000080050033 CR3=0000000b40bea000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:58] Sysenter RSP=ffff83103ff0ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:58] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:58] *** Control State ***
(XEN) [2021-06-14 23:02:58] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:58] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:58] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:58] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:58] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:02:58]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:58] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:58] TSC Offset = 0xfffd824c37f81ebb  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:58] EPT pointer = 0x0000000b9acc501e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:58] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:58] Virtual processor ID = 0x1688 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:58]
(XEN) [2021-06-14 23:02:58] >>> Domain 3 <<<
(XEN) [2021-06-14 23:02:58]     VCPU 0
(XEN) [2021-06-14 23:02:58] *** Guest State ***
(XEN) [2021-06-14 23:02:58] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:58] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:58] CR3 = 0x00000001873cb000
(XEN) [2021-06-14 23:02:58] RSP = 0xffff88018fc03cb0 
(0xffff88018fc03cb0)  RIP = 0xffffffff813a3082 (0xffffffff813a3083)
(XEN) [2021-06-14 23:02:58] RFLAGS=0x00000006 (0x00000006)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:58] Sysenter RSP=0000000000000000 
CS:RIP=0010:ffffffff8153c570
(XEN) [2021-06-14 23:02:58]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:58]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   GS: 0000 1c000 ffffffff ffff88018fc00000
(XEN) [2021-06-14 23:02:58] GDTR:            0000007f ffff88018fc0c000
(XEN) [2021-06-14 23:02:58] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58] IDTR:            00000fff ffffffffff57c000
(XEN) [2021-06-14 23:02:58]   TR: 0040 0008b 00002087 ffff88018fc04480
(XEN) [2021-06-14 23:02:58] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0007010600070106
(XEN) [2021-06-14 23:02:58] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:58] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:58] InterruptStatus = 00f6
(XEN) [2021-06-14 23:02:58] *** Host State ***
(XEN) [2021-06-14 23:02:58] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca1ff70
(XEN) [2021-06-14 23:02:58] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:58] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037cc4040
(XEN) [2021-06-14 23:02:58] GDTBase=ffff83207c9f7000 
IDTBase=ffff83207ca04000
(XEN) [2021-06-14 23:02:58] CR0=0000000080050033 CR3=000000207f319000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:58] Sysenter RSP=ffff83207ca1ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:58] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:58] *** Control State ***
(XEN) [2021-06-14 23:02:58] PinBased=000000bf CPUBased=b6a065fe 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:58] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:58] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:58] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:58] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:02:58]         reason=0000001e 
qualification=0000000003f90000
(XEN) [2021-06-14 23:02:58] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:58] TSC Offset = 0xfffd824c638b8730  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:58] EPT pointer = 0x0000000b1e11101e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:58] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:58] Virtual processor ID = 0xebc9 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:58]     VCPU 1
(XEN) [2021-06-14 23:02:58] *** Guest State ***
(XEN) [2021-06-14 23:02:58] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:58] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:58] CR3 = 0x00000000ea652000
(XEN) [2021-06-14 23:02:58] RSP = 0xffff88018fc83e28 
(0xffff88018fc83e28)  RIP = 0xffffffff81539480 (0xffffffff81539480)
(XEN) [2021-06-14 23:02:58] RFLAGS=0x00000006 (0x00000006)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:58] Sysenter RSP=0000000000000000 
CS:RIP=0010:ffffffff8153c570
(XEN) [2021-06-14 23:02:58]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:58]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58]   GS: 0000 1c000 ffffffff ffff88018fc80000
(XEN) [2021-06-14 23:02:58] GDTR:            0000007f ffff88018fc8c000
(XEN) [2021-06-14 23:02:58] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:58] IDTR:            00000fff ffffffffff57c000
(XEN) [2021-06-14 23:02:58]   TR: 0040 0008b 00002087 ffff88018fc84480
(XEN) [2021-06-14 23:02:58] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0007010600070106
(XEN) [2021-06-14 23:02:58] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:58] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:58] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:58] *** Host State ***
(XEN) [2021-06-14 23:02:58] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff37f70
(XEN) [2021-06-14 23:02:58] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:58] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff40040
(XEN) [2021-06-14 23:02:58] GDTBase=ffff83103ff2e000 
IDTBase=ffff83103ff3b000
(XEN) [2021-06-14 23:02:58] CR0=0000000080050033 CR3=0000000b1e10b000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:58] Sysenter RSP=ffff83103ff37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:58] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:58] *** Control State ***
(XEN) [2021-06-14 23:02:58] PinBased=000000bf CPUBased=b6a065fe 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:58] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:58] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:58] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:58] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:02:58]         reason=00000028 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:58] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:58] TSC Offset = 0xfffd824c638b8730  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:58] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:58] EPT pointer = 0x0000000b1e11101e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:58] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:59] Virtual processor ID = 0x1c1f VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:59]     VCPU 2
(XEN) [2021-06-14 23:02:59] *** Guest State ***
(XEN) [2021-06-14 23:02:59] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:59] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:59] CR3 = 0x000000007fa56000
(XEN) [2021-06-14 23:02:59] RSP = 0xffff88018fd03e28 
(0xffff88018fd03e28)  RIP = 0xffffffff81539480 (0xffffffff81539480)
(XEN) [2021-06-14 23:02:59] RFLAGS=0x00000002 (0x00000002)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:59] Sysenter RSP=0000000000000000 
CS:RIP=0010:ffffffff8153c570
(XEN) [2021-06-14 23:02:59]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:59]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   GS: 0000 1c000 ffffffff ffff88018fd00000
(XEN) [2021-06-14 23:02:59] GDTR:            0000007f ffff88018fd0c000
(XEN) [2021-06-14 23:02:59] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59] IDTR:            00000fff ffffffffff57c000
(XEN) [2021-06-14 23:02:59]   TR: 0040 0008b 00002087 ffff88018fd04480
(XEN) [2021-06-14 23:02:59] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0007010600070106
(XEN) [2021-06-14 23:02:59] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:59] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] grant_table.c:803:d0v6 Bad flags (0) or dom 
(0); expected d0
(XEN) [2021-06-14 23:02:59] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:59] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:59] *** Host State ***
(XEN) [2021-06-14 23:02:59] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff831037cf7f70
(XEN) [2021-06-14 23:02:59] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:59] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff00040
(XEN) [2021-06-14 23:02:59] GDTBase=ffff831037cef000 
IDTBase=ffff831037cfc000
(XEN) [2021-06-14 23:02:59] CR0=0000000080050033 CR3=000000207f318000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:59] Sysenter RSP=ffff831037cf7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:59] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:59] *** Control State ***
(XEN) [2021-06-14 23:02:59] PinBased=000000bf CPUBased=b6a065fe 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:59] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:59] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:59] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:59] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:02:59]         reason=00000028 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:59] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:59] TSC Offset = 0xfffd824c638b8730  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:59] EPT pointer = 0x0000000b1e11101e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:59] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:59] Virtual processor ID = 0x7886 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:59]     VCPU 3
(XEN) [2021-06-14 23:02:59] *** Guest State ***
(XEN) [2021-06-14 23:02:59] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:59] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:59] CR3 = 0x00000000ea652000
(XEN) [2021-06-14 23:02:59] RSP = 0xffff88018fd83e28 
(0xffff88018fd83e28)  RIP = 0xffffffff81539480 (0xffffffff81539480)
(XEN) [2021-06-14 23:02:59] RFLAGS=0x00000002 (0x00000002)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:59] Sysenter RSP=0000000000000000 
CS:RIP=0010:ffffffff8153c570
(XEN) [2021-06-14 23:02:59]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:59]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   GS: 0000 1c000 ffffffff ffff88018fd80000
(XEN) [2021-06-14 23:02:59] GDTR:            0000007f ffff88018fd8c000
(XEN) [2021-06-14 23:02:59] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59] IDTR:            00000fff ffffffffff57c000
(XEN) [2021-06-14 23:02:59]   TR: 0040 0008b 00002087 ffff88018fd84480
(XEN) [2021-06-14 23:02:59] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0007010600070106
(XEN) [2021-06-14 23:02:59] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:59] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:59] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:59] *** Host State ***
(XEN) [2021-06-14 23:02:59] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca1ff70
(XEN) [2021-06-14 23:02:59] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:59] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037cc4040
(XEN) [2021-06-14 23:02:59] GDTBase=ffff83207c9f7000 
IDTBase=ffff83207ca04000
(XEN) [2021-06-14 23:02:59] CR0=0000000080050033 CR3=0000000b1e10a000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:59] Sysenter RSP=ffff83207ca1ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:59] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:59] *** Control State ***
(XEN) [2021-06-14 23:02:59] PinBased=000000bf CPUBased=b6a065fe 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:59] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:59] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:59] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:59] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:02:59]         reason=00000028 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:59] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:59] TSC Offset = 0xfffd824c638b8730  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:59] EPT pointer = 0x0000000b1e11101e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:59] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:59] Virtual processor ID = 0xf840 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:59]
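[Editorial note: the `attr` column in the segment tables above (e.g. `0a09b` for a 64-bit kernel CS, `1c000` for a nulled selector) is the raw VMX segment access-rights word. A minimal sketch of how to unpack it, following the Intel SDM field layout (type in bits 3:0, S bit 4, DPL bits 6:5, P bit 7, AVL bit 12, L bit 13, D/B bit 14, G bit 15, "unusable" bit 16) — the helper name `decode_ar` is my own, not from Xen:]

```python
def decode_ar(ar):
    """Unpack a VMX segment access-rights word as printed in Xen's
    'attr' column (Intel SDM Vol. 3 guest segment AR field layout)."""
    return {
        "type":     ar & 0xf,          # segment type (e.g. 0xb = code, exec/read, accessed)
        "s":        (ar >> 4) & 1,     # descriptor type: 1 = code/data
        "dpl":      (ar >> 5) & 3,     # descriptor privilege level
        "present":  (ar >> 7) & 1,     # P bit
        "avl":      (ar >> 12) & 1,    # available for software use
        "long":     (ar >> 13) & 1,    # L bit: 64-bit code segment
        "db":       (ar >> 14) & 1,    # D/B bit
        "gran":     (ar >> 15) & 1,    # granularity (limit in 4K units)
        "unusable": (ar >> 16) & 1,    # VMX "segment unusable" bit
    }

# e.g. the CS attr 0a09b above decodes to a present, DPL-0, long-mode
# code segment, while DS/ES/FS attr 1c000 decodes as unusable.
```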
(XEN) [2021-06-14 23:02:59] >>> Domain 5 <<<
(XEN) [2021-06-14 23:02:59]     VCPU 0
(XEN) [2021-06-14 23:02:59] *** Guest State ***
(XEN) [2021-06-14 23:02:59] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:59] CR4: actual=0x00000000001626f0, 
shadow=0x0000000000160670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:59] CR3 = 0x000000046094f804
(XEN) [2021-06-14 23:02:59] RSP = 0x00007fd23cdfeae8 
(0x00007fd23cdfeae8)  RIP = 0x00007fd3cbe94c51 (0x00007fd3cbe94c51)
(XEN) [2021-06-14 23:02:59] RFLAGS=0x00000202 (0x00000202)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:02:59] Sysenter RSP=fffffe0000003000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:02:59]        sel  attr  limit   base
(XEN) [2021-06-14 23:02:59]   CS: 0033 0a0fb ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   SS: 002b 0c0f3 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59]   FS: 0000 1c000 ffffffff 00007fd23cdff640
(XEN) [2021-06-14 23:02:59]   GS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59] GDTR:            0000007f fffffe0000001000
(XEN) [2021-06-14 23:02:59] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:02:59] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:02:59]   TR: 0040 0008b 00004087 fffffe0000003000
(XEN) [2021-06-14 23:02:59] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:02:59] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:02:59] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:02:59] InterruptStatus = 0000
(XEN) [2021-06-14 23:02:59] *** Host State ***
(XEN) [2021-06-14 23:02:59] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207cba7f70
(XEN) [2021-06-14 23:02:59] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:02:59] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c9c040
(XEN) [2021-06-14 23:02:59] GDTBase=ffff83207ca31000 
IDTBase=ffff83207ca3e000
(XEN) [2021-06-14 23:02:59] CR0=0000000080050033 CR3=000000157a7b0000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:02:59] Sysenter RSP=ffff83207cba7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:02:59] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:02:59] *** Control State ***
(XEN) [2021-06-14 23:02:59] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:02:59] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:02:59] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:02:59] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:02:59] VMExit: intr_info=800000fc errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:02:59]         reason=00000001 
qualification=0000000000000000
(XEN) [2021-06-14 23:02:59] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:02:59] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:02:59] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:02:59] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:02:59] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:02:59] Virtual processor ID = 0x24f3 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:02:59]     VCPU 1
(XEN) [2021-06-14 23:02:59] *** Guest State ***
(XEN) [2021-06-14 23:02:59] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:02:59] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:02:59] CR3 = 0x000000054a0cb805
(XEN) [2021-06-14 23:02:59] RSP = 0x00007fe2f2b62500 
(0x00007fe2f2b62500)  RIP = 0x00007fe309b81291 (0x00007fe309b81291)
(XEN) [2021-06-14 23:02:59] RFLAGS=0x00000206 (0x00000206)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:00] Sysenter RSP=fffffe0000034000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:03:00]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:00]   CS: 0033 0a0fb ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   SS: 002b 0c0f3 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   FS: 0000 1c000 ffffffff 00007fe2f2b63640
(XEN) [2021-06-14 23:03:00]   GS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00] GDTR:            0000007f fffffe0000032000
(XEN) [2021-06-14 23:03:00] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:00]   TR: 0040 0008b 00004087 fffffe0000034000
(XEN) [2021-06-14 23:03:00] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:00] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:00] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:00] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:00] *** Host State ***
(XEN) [2021-06-14 23:03:00] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff0ff70
(XEN) [2021-06-14 23:03:00] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:00] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff14040
(XEN) [2021-06-14 23:03:00] GDTBase=ffff83103ff05000 
IDTBase=ffff83103ff12000
(XEN) [2021-06-14 23:03:00] CR0=0000000080050033 CR3=000000157a7af000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:00] Sysenter RSP=ffff83103ff0ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:00] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:00] *** Control State ***
(XEN) [2021-06-14 23:03:00] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:00] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:00] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:00] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:00] VMExit: intr_info=800000fc errcode=00000000 
ilen=00000002
(XEN) [2021-06-14 23:03:00]         reason=00000001 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:00] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:00] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:00] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:00] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:00] Virtual processor ID = 0x268e VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:00]     VCPU 2
(XEN) [2021-06-14 23:03:00] *** Guest State ***
(XEN) [2021-06-14 23:03:00] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:00] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:00] CR3 = 0x00000004f25b2002
(XEN) [2021-06-14 23:03:00] RSP = 0xffffc9000008fdf0 
(0xffffc9000008fdf0)  RIP = 0xffffffff81beb36d (0xffffffff81beb36e)
(XEN) [2021-06-14 23:03:00] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:00] Sysenter RSP=fffffe0000065000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:03:00]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:00]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   GS: 0000 1c000 ffffffff ffff888564e80000
(XEN) [2021-06-14 23:03:00] GDTR:            0000007f fffffe0000063000
(XEN) [2021-06-14 23:03:00] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:00]   TR: 0040 0008b 00004087 fffffe0000065000
(XEN) [2021-06-14 23:03:00] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:00] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:00] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:00] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:00] *** Host State ***
(XEN) [2021-06-14 23:03:00] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:00] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:00] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:00] GDTBase=ffff83207ca2f000 
IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:00] CR0=0000000080050033 CR3=000000157a7ae000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:00] Sysenter RSP=ffff83207ca37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:00] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:00] *** Control State ***
(XEN) [2021-06-14 23:03:00] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:00] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:00] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:00] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:00] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:00]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:00] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:00] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:00] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:00] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:00] Virtual processor ID = 0x3cfd VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:00]     VCPU 3
(XEN) [2021-06-14 23:03:00] *** Guest State ***
(XEN) [2021-06-14 23:03:00] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:00] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:00] CR3 = 0x000000046094e005
(XEN) [2021-06-14 23:03:00] RSP = 0xffffc90000097df0 
(0xffffc90000097df0)  RIP = 0xffffffff81beb36d (0xffffffff81beb36e)
(XEN) [2021-06-14 23:03:00] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:00] Sysenter RSP=fffffe0000096000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:03:00]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:00]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   GS: 0000 1c000 ffffffff ffff888564ec0000
(XEN) [2021-06-14 23:03:00] GDTR:            0000007f fffffe0000094000
(XEN) [2021-06-14 23:03:00] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:00]   TR: 0040 0008b 00004087 fffffe0000096000
(XEN) [2021-06-14 23:03:00] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:00] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:00] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:00] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:00] *** Host State ***
(XEN) [2021-06-14 23:03:00] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff27f70
(XEN) [2021-06-14 23:03:00] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:00] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff28040
(XEN) [2021-06-14 23:03:00] GDTBase=ffff83103ff1e000 
IDTBase=ffff83103ff1f000
(XEN) [2021-06-14 23:03:00] CR0=0000000080050033 CR3=000000157a7ad000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:00] Sysenter RSP=ffff83103ff27fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:00] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:00] *** Control State ***
(XEN) [2021-06-14 23:03:00] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:00] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:00] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:00] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:00] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:00]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:00] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:00] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:00] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:00] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:00] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:00] Virtual processor ID = 0x8923 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:00]     VCPU 4
(XEN) [2021-06-14 23:03:00] *** Guest State ***
(XEN) [2021-06-14 23:03:00] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:00] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:00] CR3 = 0x00000004f4204005
(XEN) [2021-06-14 23:03:00] RSP = 0xffffc9000009fda8 
(0xffffc9000009fda8)  RIP = 0xffffffff81138e7c (0xffffffff81138e7c)
(XEN) [2021-06-14 23:03:00] RFLAGS=0x00000046 (0x00000046)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:00] Sysenter RSP=fffffe00000c7000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:03:00]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:00]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00]   GS: 0000 1c000 ffffffff ffff888564f00000
(XEN) [2021-06-14 23:03:00] GDTR:            0000007f fffffe00000c5000
(XEN) [2021-06-14 23:03:00] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:00] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:00]   TR: 0040 0008b 00004087 fffffe00000c7000
(XEN) [2021-06-14 23:03:01] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:01] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:01] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:01] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:01] *** Host State ***
(XEN) [2021-06-14 23:03:01] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff831037cf7f70
(XEN) [2021-06-14 23:03:01] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:01] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff00040
(XEN) [2021-06-14 23:03:01] GDTBase=ffff831037cef000 
IDTBase=ffff831037cfc000
(XEN) [2021-06-14 23:03:01] CR0=0000000080050033 CR3=000000157a7ac000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:01] Sysenter RSP=ffff831037cf7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:01] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:01] *** Control State ***
(XEN) [2021-06-14 23:03:01] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:01] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:01] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:01] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:01] VMExit: intr_info=800000fc errcode=00000000 
ilen=00000003
(XEN) [2021-06-14 23:03:01]         reason=00000001 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:01] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:01] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:01] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:01] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:01] Virtual processor ID = 0x8938 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:01]     VCPU 5
(XEN) [2021-06-14 23:03:01] *** Guest State ***
(XEN) [2021-06-14 23:03:01] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:01] CR4: actual=0x00000000001626e0, 
shadow=0x0000000000160660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:01] CR3 = 0x000000046094e003
(XEN) [2021-06-14 23:03:01] RSP = 0xffffc900000a7df0 
(0xffffc900000a7df0)  RIP = 0xffffffff81beb36d (0xffffffff81beb36e)
(XEN) [2021-06-14 23:03:01] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:01] Sysenter RSP=fffffe00000f8000 
CS:RIP=0010:ffffffff81c01560
(XEN) [2021-06-14 23:03:01]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:01]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   GS: 0000 1c000 ffffffff ffff888564f40000
(XEN) [2021-06-14 23:03:01] GDTR:            0000007f fffffe00000f6000
(XEN) [2021-06-14 23:03:01] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:01]   TR: 0040 0008b 00004087 fffffe00000f8000
(XEN) [2021-06-14 23:03:01] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:01] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:01] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:01] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:01] *** Host State ***
(XEN) [2021-06-14 23:03:01] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca0ff70
(XEN) [2021-06-14 23:03:01] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:01] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037cd4040
(XEN) [2021-06-14 23:03:01] GDTBase=ffff83207cba6000 
IDTBase=ffff83207ca07000
(XEN) [2021-06-14 23:03:01] CR0=0000000080050033 CR3=000000157a7ab000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:01] Sysenter RSP=ffff83207ca0ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:01] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:01] *** Control State ***
(XEN) [2021-06-14 23:03:01] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:01] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:01] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:01] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:01] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:01]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:01] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:01] TSC Offset = 0xfffd823273e28d22  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:01] EPT pointer = 0x000000157a7bb01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:01] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:01] Virtual processor ID = 0x64f8 VMfunc 
controls = 0000000000000000
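[Editorial note: the `VMExit: ... reason=...` lines in the dumps above use the basic exit-reason numbers from Intel SDM Vol. 3 Appendix C; the ones occurring in this log are 0x1 (external interrupt), 0xc (HLT) and 0x28 (PAUSE). A small decoding sketch — the table is a subset I filled in from the SDM, and `decode_exit` is an illustrative name, not a Xen function:]

```python
# Subset of Intel SDM Vol. 3 Appendix C basic exit reasons,
# covering the values seen in this dump.
EXIT_REASONS = {
    0x00: "EXCEPTION_NMI",
    0x01: "EXTERNAL_INTERRUPT",
    0x0c: "HLT",
    0x28: "PAUSE_INSTRUCTION",
}

def decode_exit(reason_hex, intr_info_hex):
    """Decode the reason= and VMExit intr_info= fields printed by Xen."""
    reason = int(reason_hex, 16) & 0xffff        # low 16 bits = basic exit reason
    info = int(intr_info_hex, 16)
    out = {"reason": EXIT_REASONS.get(reason, hex(reason))}
    if info & (1 << 31):                         # bit 31: intr_info is valid
        out["vector"] = info & 0xff              # bits 7:0: interrupt vector
        out["intr_type"] = (info >> 8) & 7       # bits 10:8: interruption type
    return out

# e.g. reason=00000028 above is a PAUSE exit; reason=00000001 with
# intr_info=800000fc is an external interrupt on vector 0xfc.
```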
(XEN) [2021-06-14 23:03:01]
(XEN) [2021-06-14 23:03:01] >>> Domain 6 <<<
(XEN) [2021-06-14 23:03:01]     VCPU 0
(XEN) [2021-06-14 23:03:01] *** Guest State ***
(XEN) [2021-06-14 23:03:01] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:01] CR4: actual=0x00000000001726f0, 
shadow=0x0000000000170630, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:01] CR3 = 0x00000001210f8006
(XEN) [2021-06-14 23:03:01] RSP = 0xffffffffaea03e30 
(0xffffffffaea03e30)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:01] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:01] Sysenter RSP=fffffe0000003000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:01]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:01]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   GS: 0000 1c000 ffffffff ffff9977ea400000
(XEN) [2021-06-14 23:03:01] GDTR:            0000007f fffffe0000001000
(XEN) [2021-06-14 23:03:01] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:01]   TR: 0040 0008b 00004087 fffffe0000003000
(XEN) [2021-06-14 23:03:01] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:01] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:01] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:01] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:01] *** Host State ***
(XEN) [2021-06-14 23:03:01] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff831037cf7f70
(XEN) [2021-06-14 23:03:01] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:01] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff00040
(XEN) [2021-06-14 23:03:01] GDTBase=ffff831037cef000 
IDTBase=ffff831037cfc000
(XEN) [2021-06-14 23:03:01] CR0=0000000080050033 CR3=000000207f3f8000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:01] Sysenter RSP=ffff831037cf7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:01] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:01] *** Control State ***
(XEN) [2021-06-14 23:03:01] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:01] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:01] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:01] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:01] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:01]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:01] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:01] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:01] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:01] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:01] Virtual processor ID = 0x918b VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:01]     VCPU 1
(XEN) [2021-06-14 23:03:01] *** Guest State ***
(XEN) [2021-06-14 23:03:01] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:01] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170620, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:01] CR3 = 0x0000000123008001
(XEN) [2021-06-14 23:03:01] RSP = 0xffffac780008fe58 
(0xffffac780008fe58)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:01] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:01] Sysenter RSP=fffffe0000038000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:01]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:01]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01]   GS: 0000 1c000 ffffffff ffff9977ea440000
(XEN) [2021-06-14 23:03:01] GDTR:            0000007f fffffe0000036000
(XEN) [2021-06-14 23:03:01] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:01] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:01]   TR: 0040 0008b 00004087 fffffe0000038000
(XEN) [2021-06-14 23:03:01] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:01] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:01] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:01] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:01] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:01] *** Host State ***
(XEN) [2021-06-14 23:03:01] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff27f70
(XEN) [2021-06-14 23:03:01] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:01] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff28040
(XEN) [2021-06-14 23:03:02] GDTBase=ffff83103ff1e000 
IDTBase=ffff83103ff1f000
(XEN) [2021-06-14 23:03:02] CR0=0000000080050033 CR3=00000009bbbc9000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:02] Sysenter RSP=ffff83103ff27fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:02] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:02] *** Control State ***
(XEN) [2021-06-14 23:03:02] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:02] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:02] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:02] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:02] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:02]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:02] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:02] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:02] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:02] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:02] Virtual processor ID = 0x96f9 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:02]     VCPU 2
(XEN) [2021-06-14 23:03:02] *** Guest State ***
(XEN) [2021-06-14 23:03:02] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:02] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170620, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:02] CR3 = 0x000000012492a004
(XEN) [2021-06-14 23:03:02] RSP = 0xffffac7800097e58 
(0xffffac7800097e58)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:02] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:02] Sysenter RSP=fffffe000006d000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:02]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:02]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   GS: 0000 1c000 ffffffff ffff9977ea480000
(XEN) [2021-06-14 23:03:02] GDTR:            0000007f fffffe000006b000
(XEN) [2021-06-14 23:03:02] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:02]   TR: 0040 0008b 00004087 fffffe000006d000
(XEN) [2021-06-14 23:03:02] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:02] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:02] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:02] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:02] *** Host State ***
(XEN) [2021-06-14 23:03:02] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83007263ff70
(XEN) [2021-06-14 23:03:02] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:02] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff82d040d47040
(XEN) [2021-06-14 23:03:02] GDTBase=ffff82d040a09000 
IDTBase=ffff82d040d45000
(XEN) [2021-06-14 23:03:02] CR0=0000000080050033 CR3=000000207f3f1000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:02] Sysenter RSP=ffff83007263ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:02] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:02] *** Control State ***
(XEN) [2021-06-14 23:03:02] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:02] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:02] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:02] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:02] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:02]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:02] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:02] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:02] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:02] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:02] Virtual processor ID = 0x7b99 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:02]     VCPU 3
(XEN) [2021-06-14 23:03:02] *** Guest State ***
(XEN) [2021-06-14 23:03:02] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:02] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170620, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:02] CR3 = 0x00000001210f8006
(XEN) [2021-06-14 23:03:02] RSP = 0xffffac780009fe58 
(0xffffac780009fe58)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:02] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:02] Sysenter RSP=fffffe00000a2000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:02]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:02]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   GS: 0000 1c000 ffffffff ffff9977ea4c0000
(XEN) [2021-06-14 23:03:02] GDTR:            0000007f fffffe00000a0000
(XEN) [2021-06-14 23:03:02] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:02]   TR: 0040 0008b 00004087 fffffe00000a2000
(XEN) [2021-06-14 23:03:02] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:02] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:02] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:02] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:02] *** Host State ***
(XEN) [2021-06-14 23:03:02] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83103ff27f70
(XEN) [2021-06-14 23:03:02] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:02] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff28040
(XEN) [2021-06-14 23:03:02] GDTBase=ffff83103ff1e000 
IDTBase=ffff83103ff1f000
(XEN) [2021-06-14 23:03:02] CR0=0000000080050033 CR3=00000009bb594000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:02] Sysenter RSP=ffff83103ff27fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:02] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:02] *** Control State ***
(XEN) [2021-06-14 23:03:02] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:02] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:02] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:02] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:02] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:02]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:02] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:02] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:02] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:02] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:02] Virtual processor ID = 0x9d38 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:02]     VCPU 4
(XEN) [2021-06-14 23:03:02] *** Guest State ***
(XEN) [2021-06-14 23:03:02] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:02] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170620, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:02] CR3 = 0x000000012492a003
(XEN) [2021-06-14 23:03:02] RSP = 0xffffac78000a7e58 
(0xffffac78000a7e58)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:02] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:02] Sysenter RSP=fffffe00000d7000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:02]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:02]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02]   GS: 0000 1c000 ffffffff ffff9977ea500000
(XEN) [2021-06-14 23:03:02] GDTR:            0000007f fffffe00000d5000
(XEN) [2021-06-14 23:03:02] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:02] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:02]   TR: 0040 0008b 00004087 fffffe00000d7000
(XEN) [2021-06-14 23:03:02] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:02] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:02] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:02] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:02] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:02] *** Host State ***
(XEN) [2021-06-14 23:03:02] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83007263ff70
(XEN) [2021-06-14 23:03:02] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:02] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff82d040d47040
(XEN) [2021-06-14 23:03:02] GDTBase=ffff82d040a09000 
IDTBase=ffff82d040d45000
(XEN) [2021-06-14 23:03:02] CR0=0000000080050033 CR3=000000207f3f0000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:02] Sysenter RSP=ffff83007263ffa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:02] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:02] *** Control State ***
(XEN) [2021-06-14 23:03:02] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:02] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:02] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:02] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:02] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:03]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:03] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:03] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:03] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:03] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:03] Virtual processor ID = 0x822e VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:03]     VCPU 5
(XEN) [2021-06-14 23:03:03] *** Guest State ***
(XEN) [2021-06-14 23:03:03] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:03] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170620, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:03] CR3 = 0x000000012492a002
(XEN) [2021-06-14 23:03:03] RSP = 0xffffac78000afe58 
(0xffffac78000afe58)  RIP = 0xffffffffadc7c76d (0xffffffffadc7c76e)
(XEN) [2021-06-14 23:03:03] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:03] Sysenter RSP=fffffe000010c000 
CS:RIP=0010:ffffffffade01420
(XEN) [2021-06-14 23:03:03]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:03]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   GS: 0000 1c000 ffffffff ffff9977ea540000
(XEN) [2021-06-14 23:03:03] GDTR:            0000007f fffffe000010a000
(XEN) [2021-06-14 23:03:03] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:03]   TR: 0040 0008b 00004087 fffffe000010c000
(XEN) [2021-06-14 23:03:03] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:03] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:03] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:03] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:03] *** Host State ***
(XEN) [2021-06-14 23:03:03] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207cba7f70
(XEN) [2021-06-14 23:03:03] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:03] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c9c040
(XEN) [2021-06-14 23:03:03] GDTBase=ffff83207ca31000 
IDTBase=ffff83207ca3e000
(XEN) [2021-06-14 23:03:03] CR0=0000000080050033 CR3=00000009bc0a1000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:03] Sysenter RSP=ffff83207cba7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:03] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:03] *** Control State ***
(XEN) [2021-06-14 23:03:03] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:03] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:03] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:03] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:03] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:03]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:03] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:03] TSC Offset = 0xfffd6a453b49069e  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:03] EPT pointer = 0x0000000a2772901e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:03] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:03] Virtual processor ID = 0x4e81 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:03]
(XEN) [2021-06-14 23:03:03] >>> Domain 7 <<<
(XEN) [2021-06-14 23:03:03]     VCPU 0
(XEN) [2021-06-14 23:03:03] *** Guest State ***
(XEN) [2021-06-14 23:03:03] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:03] CR4: actual=0x00000000001726f0, 
shadow=0x0000000000170670, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:03] CR3 = 0x000000013000c006
(XEN) [2021-06-14 23:03:03] RSP = 0xffffffff84e07ce8 
(0xffffffff84e07ce8)  RIP = 0xffffffff83dd8ccd (0xffffffff83dd8cce)
(XEN) [2021-06-14 23:03:03] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:03] Sysenter RSP=fffffe0000003000 
CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:03]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:03]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   GS: 0000 1c000 ffffffff ffff8884d3800000
(XEN) [2021-06-14 23:03:03] GDTR:            0000007f fffffe0000001000
(XEN) [2021-06-14 23:03:03] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:03]   TR: 0040 0008b 00004087 fffffe0000003000
(XEN) [2021-06-14 23:03:03] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:03] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:03] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:03] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:03] *** Host State ***
(XEN) [2021-06-14 23:03:03] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff831037cf7f70
(XEN) [2021-06-14 23:03:03] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:03] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff83103ff00040
(XEN) [2021-06-14 23:03:03] GDTBase=ffff831037cef000 
IDTBase=ffff831037cfc000
(XEN) [2021-06-14 23:03:03] CR0=0000000080050033 CR3=00000014040d7000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:03] Sysenter RSP=ffff831037cf7fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:03] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:03] *** Control State ***
(XEN) [2021-06-14 23:03:03] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:03] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:03] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:03] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:03] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:03]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:03] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:03] TSC Offset = 0xfffd5e346725fa5b  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:03] EPT pointer = 0x0000000f870ae01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:03] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:03] Virtual processor ID = 0xa377 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:03]     VCPU 1
(XEN) [2021-06-14 23:03:03] *** Guest State ***
(XEN) [2021-06-14 23:03:03] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:03] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:03] CR3 = 0x0000000117484006
(XEN) [2021-06-14 23:03:03] RSP = 0xffffc90000117cf0 
(0xffffc90000117cf0)  RIP = 0xffffffff83dd8ccd (0xffffffff83dd8cce)
(XEN) [2021-06-14 23:03:03] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:03] Sysenter RSP=fffffe000003e000 
CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:03]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:03]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03]   GS: 0000 1c000 ffffffff ffff8884d3880000
(XEN) [2021-06-14 23:03:03] GDTR:            0000007f fffffe000003c000
(XEN) [2021-06-14 23:03:03] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:03] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:03]   TR: 0040 0008b 00004087 fffffe000003e000
(XEN) [2021-06-14 23:03:03] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:03] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:03] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:03] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:03] *** Host State ***
(XEN) [2021-06-14 23:03:03] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:03] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:03] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:03] GDTBase=ffff83207ca2f000 
IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:03] CR0=0000000080050033 CR3=0000000fbb824000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:03] Sysenter RSP=ffff83207ca37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:03] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:03] *** Control State ***
(XEN) [2021-06-14 23:03:03] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:03] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:03] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:03] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:03] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:03]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:03] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:03] TSC Offset = 0xfffd5e346725fa5b  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:03] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:03] EPT pointer = 0x0000000f870ae01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:03] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:03] Virtual processor ID = 0x689f VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:03]     VCPU 2
(XEN) [2021-06-14 23:03:03] *** Guest State ***
(XEN) [2021-06-14 23:03:03] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:04] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:04] CR3 = 0x00000001440aa004
(XEN) [2021-06-14 23:03:04] RSP = 0xffffc90000127cf0 
(0xffffc90000127cf0)  RIP = 0xffffffff83dd8ccd (0xffffffff83dd8cce)
(XEN) [2021-06-14 23:03:04] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:04] Sysenter RSP=fffffe0000079000 
CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:04]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:04]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   GS: 0000 1c000 ffffffff ffff8884d3900000
(XEN) [2021-06-14 23:03:04] GDTR:            0000007f fffffe0000077000
(XEN) [2021-06-14 23:03:04] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:04]   TR: 0040 0008b 00004087 fffffe0000079000
(XEN) [2021-06-14 23:03:04] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:04] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:04] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:04] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:04] *** Host State ***
(XEN) [2021-06-14 23:03:04] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:04] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:04] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:04] GDTBase=ffff83207ca2f000 
IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:04] CR0=0000000080050033 CR3=00000014040de000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:04] Sysenter RSP=ffff83207ca37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:04] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:04] *** Control State ***
(XEN) [2021-06-14 23:03:04] PinBased=000000bf CPUBased=b6a065fa 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:04] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:04] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:04] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:04] VMExit: intr_info=00000000 errcode=00000000 
ilen=00000001
(XEN) [2021-06-14 23:03:04]         reason=0000000c 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:04] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:04] TSC Offset = 0xfffd5e346725fa5b  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:04] EPT pointer = 0x0000000f870ae01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:04] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:04] Virtual processor ID = 0x6b74 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:04]     VCPU 3
(XEN) [2021-06-14 23:03:04] *** Guest State ***
(XEN) [2021-06-14 23:03:04] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:04] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:04] CR3 = 0x00000001297ba004
(XEN) [2021-06-14 23:03:04] RSP = 0xffffc900002a0ec8 
(0xffffc900002a0ec8)  RIP = 0xffffffff812e8693 (0xffffffff812e8693)
(XEN) [2021-06-14 23:03:04] RFLAGS=0x00000046 (0x00000046)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:04] Sysenter RSP=fffffe00000b4000 
CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:04]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:04]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   GS: 0000 1c000 ffffffff ffff8884d3980000
(XEN) [2021-06-14 23:03:04] GDTR:            0000007f fffffe00000b2000
(XEN) [2021-06-14 23:03:04] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:04]   TR: 0040 0008b 00004087 fffffe00000b4000
(XEN) [2021-06-14 23:03:04] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:04] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:04] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:04] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:04] *** Host State ***
(XEN) [2021-06-14 23:03:04] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:04] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:04] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:04] GDTBase=ffff83207ca2f000 
IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:04] CR0=0000000080050033 CR3=0000000fbb823000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:04] Sysenter RSP=ffff83207ca37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:04] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:04] *** Control State ***
(XEN) [2021-06-14 23:03:04] PinBased=000000bf CPUBased=b6a065fe 
SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:04] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:04] ExceptionBitmap=00060002 PFECmask=00000000 
PFECmatch=00000000
(XEN) [2021-06-14 23:03:04] VMEntry: intr_info=000000f3 errcode=00000000 
ilen=00000000
(XEN) [2021-06-14 23:03:04] VMExit: intr_info=800000fc errcode=00000000 
ilen=00000007
(XEN) [2021-06-14 23:03:04]         reason=00000001 
qualification=0000000000000000
(XEN) [2021-06-14 23:03:04] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:04] TSC Offset = 0xfffd5e346725fa5b  TSC 
Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:04] EPT pointer = 0x0000000f870ae01e  EPTP index 
= 0x0000
(XEN) [2021-06-14 23:03:04] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:04] Virtual processor ID = 0x7092 VMfunc 
controls = 0000000000000000
(XEN) [2021-06-14 23:03:04]     VCPU 4
(XEN) [2021-06-14 23:03:04] *** Guest State ***
(XEN) [2021-06-14 23:03:04] CR0: actual=0x0000000080050033, 
shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:04] CR4: actual=0x00000000001726e0, 
shadow=0x0000000000170660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:04] CR3 = 0x000000012c468001
(XEN) [2021-06-14 23:03:04] RSP = 0xffffc90000147cf0 
(0xffffc90000147cf0)  RIP = 0xffffffff83dd8ccd (0xffffffff83dd8cce)
(XEN) [2021-06-14 23:03:04] RFLAGS=0x00000246 (0x00000246)  DR7 = 
0x0000000000000400
(XEN) [2021-06-14 23:03:04] Sysenter RSP=fffffe00000ef000 
CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:04]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:04]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   GS: 0000 1c000 ffffffff ffff8884d3a00000
(XEN) [2021-06-14 23:03:04] GDTR:            0000007f fffffe00000ed000
(XEN) [2021-06-14 23:03:04] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:04]   TR: 0040 0008b 00004087 fffffe00000ef000
(XEN) [2021-06-14 23:03:04] EFER(VMCS) = 0x0000000000000d01  PAT = 
0x0407050600070106
(XEN) [2021-06-14 23:03:04] PreemptionTimer = 0x00000000  SM Base = 
0x00000000
(XEN) [2021-06-14 23:03:04] DebugCtl = 0x0000000000000000 
DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] Interruptibility = 00000000 ActivityState = 
00000000
(XEN) [2021-06-14 23:03:04] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:04] *** Host State ***
(XEN) [2021-06-14 23:03:04] RIP = 0xffff82d0403f7110 
(vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:04] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 
GS=0000 TR=e040
(XEN) [2021-06-14 23:03:04] FSBase=0000000000000000 
GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:04] GDTBase=ffff83207ca2f000 
IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:04] CR0=0000000080050033 CR3=00000015c151b000 
CR4=00000000001526e0
(XEN) [2021-06-14 23:03:04] Sysenter RSP=ffff83207ca37fa0 
CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:04] EFER = 0x0000000000000d01  PAT = 
0x0000050100070406
(XEN) [2021-06-14 23:03:04] *** Control State ***
(XEN) [2021-06-14 23:03:04] PinBased=000000bf CPUBased=b6a065fa SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:04] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:04] ExceptionBitmap=00060002 PFECmask=00000000 PFECmatch=00000000
(XEN) [2021-06-14 23:03:04] VMEntry: intr_info=000000f3 errcode=00000000 ilen=00000000
(XEN) [2021-06-14 23:03:04] VMExit: intr_info=00000000 errcode=00000000 ilen=00000001
(XEN) [2021-06-14 23:03:04]         reason=0000000c qualification=0000000000000000
(XEN) [2021-06-14 23:03:04] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:04] TSC Offset = 0xfffd5e346725fa5b  TSC Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:04] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:04] EPT pointer = 0x0000000f870ae01e  EPTP index = 0x0000
(XEN) [2021-06-14 23:03:04] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:04] Virtual processor ID = 0x74d6 VMfunc controls = 0000000000000000
(XEN) [2021-06-14 23:03:04]     VCPU 5
(XEN) [2021-06-14 23:03:04] *** Guest State ***
(XEN) [2021-06-14 23:03:04] CR0: actual=0x0000000080050033, shadow=0x0000000080050033, gh_mask=ffffffffffffffff
(XEN) [2021-06-14 23:03:04] CR4: actual=0x00000000001726e0, shadow=0x0000000000170660, gh_mask=ffffffffffe8f860
(XEN) [2021-06-14 23:03:04] CR3 = 0x0000000126bce005
(XEN) [2021-06-14 23:03:04] RSP = 0xffffc90000157cf0 (0xffffc90000157cf0)  RIP = 0xffffffff83dd8ccd (0xffffffff83dd8cce)
(XEN) [2021-06-14 23:03:04] RFLAGS=0x00000246 (0x00000246)  DR7 = 0x0000000000000400
(XEN) [2021-06-14 23:03:04] Sysenter RSP=fffffe000012a000 CS:RIP=0010:ffffffff83e01540
(XEN) [2021-06-14 23:03:04]        sel  attr  limit   base
(XEN) [2021-06-14 23:03:04]   CS: 0010 0a09b ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   DS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   SS: 0018 0c093 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   ES: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:04]   FS: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:05]   GS: 0000 1c000 ffffffff ffff8884d3a80000
(XEN) [2021-06-14 23:03:05] GDTR:            0000007f fffffe0000128000
(XEN) [2021-06-14 23:03:05] LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) [2021-06-14 23:03:05] IDTR:            00000fff fffffe0000000000
(XEN) [2021-06-14 23:03:05]   TR: 0040 0008b 00004087 fffffe000012a000
(XEN) [2021-06-14 23:03:05] EFER(VMCS) = 0x0000000000000d01  PAT = 0x0407050600070106
(XEN) [2021-06-14 23:03:05] PreemptionTimer = 0x00000000  SM Base = 0x00000000
(XEN) [2021-06-14 23:03:05] DebugCtl = 0x0000000000000000 DebugExceptions = 0x0000000000000000
(XEN) [2021-06-14 23:03:05] Interruptibility = 00000000 ActivityState = 00000000
(XEN) [2021-06-14 23:03:05] InterruptStatus = 0000
(XEN) [2021-06-14 23:03:05] *** Host State ***
(XEN) [2021-06-14 23:03:05] RIP = 0xffff82d0403f7110 (vmx_asm_vmexit_handler)  RSP = 0xffff83207ca37f70
(XEN) [2021-06-14 23:03:05] CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 GS=0000 TR=e040
(XEN) [2021-06-14 23:03:05] FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff831037c90040
(XEN) [2021-06-14 23:03:05] GDTBase=ffff83207ca2f000 IDTBase=ffff83207ca3c000
(XEN) [2021-06-14 23:03:05] CR0=0000000080050033 CR3=0000000fbb822000 CR4=00000000001526e0
(XEN) [2021-06-14 23:03:05] Sysenter RSP=ffff83207ca37fa0 CS:RIP=e008:ffff82d0405c8440
(XEN) [2021-06-14 23:03:05] EFER = 0x0000000000000d01  PAT = 0x0000050100070406
(XEN) [2021-06-14 23:03:05] *** Control State ***
(XEN) [2021-06-14 23:03:05] PinBased=000000bf CPUBased=b6a065fa SecondaryExec=000017eb
(XEN) [2021-06-14 23:03:05] EntryControls=0000d3ff ExitControls=002fefff
(XEN) [2021-06-14 23:03:05] ExceptionBitmap=00060002 PFECmask=00000000 PFECmatch=00000000
(XEN) [2021-06-14 23:03:05] VMEntry: intr_info=000000f3 errcode=00000000 ilen=00000000
(XEN) [2021-06-14 23:03:05] VMExit: intr_info=00000000 errcode=00000000 ilen=00000001
(XEN) [2021-06-14 23:03:05]         reason=0000000c qualification=0000000000000000
(XEN) [2021-06-14 23:03:05] IDTVectoring: info=00000000 errcode=00000000
(XEN) [2021-06-14 23:03:05] TSC Offset = 0xfffd5e346725fa5b  TSC Multiplier = 0x0000000000000000
(XEN) [2021-06-14 23:03:05] TPR Threshold = 0x00  PostedIntrVec = 0xf4
(XEN) [2021-06-14 23:03:05] EPT pointer = 0x0000000f870ae01e  EPTP index = 0x0000
(XEN) [2021-06-14 23:03:05] PLE Gap=00000080 Window=00001000
(XEN) [2021-06-14 23:03:05] Virtual processor ID = 0x7916 VMfunc controls = 0000000000000000
(XEN) [2021-06-14 23:03:05] **************************************
(XEN) [2021-06-14 23:03:05] [z: dump IOAPIC info]
(XEN) [2021-06-14 23:03:05] number of MP IRQ sources: 15.
(XEN) [2021-06-14 23:03:05] number of IO-APIC #1 registers: 24.
(XEN) [2021-06-14 23:03:05] number of IO-APIC #2 registers: 24.
(XEN) [2021-06-14 23:03:05] number of IO-APIC #3 registers: 24.
(XEN) [2021-06-14 23:03:05] testing the IO APIC.......................
(XEN) [2021-06-14 23:03:05] IO APIC #1......
(XEN) [2021-06-14 23:03:05] .... register #00: 01000000
(XEN) [2021-06-14 23:03:05] .......    : physical APIC id: 01
(XEN) [2021-06-14 23:03:05] .......    : Delivery Type: 0
(XEN) [2021-06-14 23:03:05] .......    : LTS          : 0
(XEN) [2021-06-14 23:03:05] .... register #01: 00170020
(XEN) [2021-06-14 23:03:05] .......     : max redirection entries: 0017
(XEN) [2021-06-14 23:03:05] .......     : PRQ implemented: 0
(XEN) [2021-06-14 23:03:05] .......     : IO APIC version: 0020
(XEN) [2021-06-14 23:03:05] .... IRQ redirection table:
(XEN) [2021-06-14 23:03:05]  NR  DestID Msk Trg IRR Pol Stat DstM DelM Vec
(XEN) [2021-06-14 23:03:05]  00 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  01 00010004 0   0   0   0   0    1 1    A3
(XEN) [2021-06-14 23:03:05]  02 00000001 0   0   0   0   0    1 1    F0
(XEN) [2021-06-14 23:03:05]  03 00000100 0   0   0   0   0    1 1    47
(XEN) [2021-06-14 23:03:05]  04 00010555 0   0   0   0   0    1 1    F1
(XEN) [2021-06-14 23:03:05]  05 00000001 0   0   0   0   0    1 1    50
(XEN) [2021-06-14 23:03:05]  06 00000001 0   0   0   0   0    1 1    58
(XEN) [2021-06-14 23:03:05]  07 00000001 0   0   0   0   0    1 1    60
(XEN) [2021-06-14 23:03:05]  08 00000040 0   0   0   0   0    1 1    8C
(XEN) [2021-06-14 23:03:05]  09 00000010 0   1   0   0   0    1 1    C0
(XEN) [2021-06-14 23:03:05]  0a 00000001 0   0   0   0   0    1 1    78
(XEN) [2021-06-14 23:03:05]  0b 00000001 0   0   0   0   0    1 1    88
(XEN) [2021-06-14 23:03:05]  0c 00000001 0   0   0   0   0    1 1    90
(XEN) [2021-06-14 23:03:05]  0d 00000001 1   0   0   0   0    1 1    98
(XEN) [2021-06-14 23:03:05]  0e 00000001 0   0   0   0   0    1 1    A0
(XEN) [2021-06-14 23:03:05]  0f 00000001 0   0   0   0   0    1 1    A8
(XEN) [2021-06-14 23:03:05]  10 00000555 1   1   0   1   0    1 1    B9
(XEN) [2021-06-14 23:03:05]  11 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  12 00000010 0   1   0   1   0    1 1    C4
(XEN) [2021-06-14 23:03:05]  13 00000004 0   1   0   1   0    1 1    AD
(XEN) [2021-06-14 23:03:05]  14 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  15 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  16 00000555 1   1   0   1   0    1 1    BB
(XEN) [2021-06-14 23:03:05]  17 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05] IO APIC #2......
(XEN) [2021-06-14 23:03:05] .... register #00: 02000000
(XEN) [2021-06-14 23:03:05] .......    : physical APIC id: 02
(XEN) [2021-06-14 23:03:05] .......    : Delivery Type: 0
(XEN) [2021-06-14 23:03:05] .......    : LTS          : 0
(XEN) [2021-06-14 23:03:05] .... register #01: 00170020
(XEN) [2021-06-14 23:03:05] .......     : max redirection entries: 0017
(XEN) [2021-06-14 23:03:05] .......     : PRQ implemented: 0
(XEN) [2021-06-14 23:03:05] .......     : IO APIC version: 0020
(XEN) [2021-06-14 23:03:05] .... register #02: 00000000
(XEN) [2021-06-14 23:03:05] .......     : arbitration: 00
(XEN) [2021-06-14 23:03:05] .... register #03: 00000001
(XEN) [2021-06-14 23:03:05] .......     : Boot DT    : 1
(XEN) [2021-06-14 23:03:05] .... IRQ redirection table:
(XEN) [2021-06-14 23:03:05]  NR  DestID Msk Trg IRR Pol Stat DstM DelM Vec
(XEN) [2021-06-14 23:03:05]  00 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  01 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  02 00000555 1   1   0   1   0    1 1    D8
(XEN) [2021-06-14 23:03:05]  03 00010001 0   1   0   1   0    1 1    2F
(XEN) [2021-06-14 23:03:05]  04 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  05 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  06 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  07 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  08 00000010 0   1   0   1   0    1 1    E8
(XEN) [2021-06-14 23:03:05]  09 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0a 00010400 0   1   0   1   0    1 1    E6
(XEN) [2021-06-14 23:03:05]  0b 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0c 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0d 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0e 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0f 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  10 00000555 1   1   0   1   0    1 1    99
(XEN) [2021-06-14 23:03:05]  11 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  12 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  13 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  14 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  15 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  16 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  17 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05] IO APIC #3......
(XEN) [2021-06-14 23:03:05] .... register #00: 03000000
(XEN) [2021-06-14 23:03:05] .......    : physical APIC id: 03
(XEN) [2021-06-14 23:03:05] .......    : Delivery Type: 0
(XEN) [2021-06-14 23:03:05] .......    : LTS          : 0
(XEN) [2021-06-14 23:03:05] .... register #01: 00170020
(XEN) [2021-06-14 23:03:05] .......     : max redirection entries: 0017
(XEN) [2021-06-14 23:03:05] .......     : PRQ implemented: 0
(XEN) [2021-06-14 23:03:05] .......     : IO APIC version: 0020
(XEN) [2021-06-14 23:03:05] .... register #02: 00000000
(XEN) [2021-06-14 23:03:05] .......     : arbitration: 00
(XEN) [2021-06-14 23:03:05] .... register #03: 00000001
(XEN) [2021-06-14 23:03:05] .......     : Boot DT    : 1
(XEN) [2021-06-14 23:03:05] .... IRQ redirection table:
(XEN) [2021-06-14 23:03:05]  NR  DestID Msk Trg IRR Pol Stat DstM DelM Vec
(XEN) [2021-06-14 23:03:05]  00 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  01 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  02 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  03 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  04 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  05 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  06 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  07 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  08 00010001 0   1   0   1   0    1 1    37
(XEN) [2021-06-14 23:03:05]  09 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0a 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0b 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0c 00010001 0   1   0   1   0    1 1    3F
(XEN) [2021-06-14 23:03:05]  0d 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0e 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  0f 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  10 00010400 0   1   0   1   0    1 1    EE
(XEN) [2021-06-14 23:03:05]  11 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  12 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  13 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  14 00010400 0   1   0   1   0    1 1    27
(XEN) [2021-06-14 23:03:05]  15 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  16 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05]  17 00000000 1   0   0   0   0    0 0    00
(XEN) [2021-06-14 23:03:05] Using vector-based indexing
(XEN) [2021-06-14 23:03:05] IRQ to pin mappings:
(XEN) [2021-06-14 23:03:05] IRQ240 -> 0:2
(XEN) [2021-06-14 23:03:05] IRQ163 -> 0:1
(XEN) [2021-06-14 23:03:05] IRQ71 -> 0:3
(XEN) [2021-06-14 23:03:05] IRQ241 -> 0:4
(XEN) [2021-06-14 23:03:05] IRQ80 -> 0:5
(XEN) [2021-06-14 23:03:05] IRQ88 -> 0:6
(XEN) [2021-06-14 23:03:05] IRQ96 -> 0:7
(XEN) [2021-06-14 23:03:05] IRQ140 -> 0:8
(XEN) [2021-06-14 23:03:05] IRQ192 -> 0:9
(XEN) [2021-06-14 23:03:05] IRQ120 -> 0:10
(XEN) [2021-06-14 23:03:05] IRQ136 -> 0:11
(XEN) [2021-06-14 23:03:05] IRQ144 -> 0:12
(XEN) [2021-06-14 23:03:06] IRQ152 -> 0:13
(XEN) [2021-06-14 23:03:06] IRQ160 -> 0:14
(XEN) [2021-06-14 23:03:06] IRQ168 -> 0:15
(XEN) [2021-06-14 23:03:06] IRQ185 -> 0:16
(XEN) [2021-06-14 23:03:06] IRQ196 -> 0:18
(XEN) [2021-06-14 23:03:06] IRQ173 -> 0:19
(XEN) [2021-06-14 23:03:06] IRQ187 -> 0:22
(XEN) [2021-06-14 23:03:06] IRQ216 -> 1:2
(XEN) [2021-06-14 23:03:06] IRQ47 -> 1:3
(XEN) [2021-06-14 23:03:06] IRQ232 -> 1:8
(XEN) [2021-06-14 23:03:06] IRQ230 -> 1:10
(XEN) [2021-06-14 23:03:06] IRQ153 -> 1:16
(XEN) [2021-06-14 23:03:06] IRQ55 -> 2:8
(XEN) [2021-06-14 23:03:06] IRQ63 -> 2:12
(XEN) [2021-06-14 23:03:06] IRQ238 -> 2:16
(XEN) [2021-06-14 23:03:06] IRQ39 -> 2:20
(XEN) [2021-06-14 23:03:06] .................................... done.
(XEN) [2021-06-14 23:03:11] printk: 26 messages suppressed.
(XEN) [2021-06-14 23:03:11] grant_table.c:803:d0v2 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:16] grant_table.c:803:d0v0 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:17] printk: 54 messages suppressed.
(XEN) [2021-06-14 23:03:17] grant_table.c:803:d0v1 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:26] printk: 2 messages suppressed.
(XEN) [2021-06-14 23:03:26] grant_table.c:803:d0v6 Bad flags (0) or dom (0); expected d0
(XEN) [2021-06-14 23:03:37] grant_table.c:803:d0v4 Bad flags (0) or dom (0); expected d0
2021-06-15 01:07:01 ---MARK---


--- dom0 log ends ---

--

Regards, Håkon




From xen-devel-bounces@lists.xenproject.org Tue Jun 15 09:55:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 09:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.141983.262112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt5mn-0003up-Dn; Tue, 15 Jun 2021 09:55:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 141983.262112; Tue, 15 Jun 2021 09:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt5mn-0003ui-AH; Tue, 15 Jun 2021 09:55:09 +0000
Received: by outflank-mailman (input) for mailman id 141983;
 Tue, 15 Jun 2021 09:55:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt5mm-0003uY-3o; Tue, 15 Jun 2021 09:55:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt5ml-0008NF-3V; Tue, 15 Jun 2021 09:55:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt5mk-00011D-Ot; Tue, 15 Jun 2021 09:55:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt5mk-0003Fh-OP; Tue, 15 Jun 2021 09:55:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aPaDJHRhItA9/cIEPn4j3YOb3ZtCq5E/5vMg5BXB+Kw=; b=XcsFvznmqvgA3jOf600K4/aHLk
	5H3bhJvDna/qGHAoQJ79Scrco4TQwZt27SJCUuRStxDjbSSCFJx3FtI80qMFiVzagO8qfAdrsPtne
	YBbkfr2kZZQONvaiYEsuBVy3iK0//F+LfER7QtEVMSbVDSr9DOYezHC0MtwZorq2FvEg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162818-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162818: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 09:55:06 +0000

flight 162818 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162818/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  299 days
Failing since        152659  2020-08-21 14:07:39 Z  297 days  550 attempts
Testing same since   162818  2021-06-14 22:38:18 Z    0 days    1 attempts

------------------------------------------------------------
532 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 171564 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142001.262126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt8iV-0003uL-6S; Tue, 15 Jun 2021 13:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142001.262126; Tue, 15 Jun 2021 13:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt8iV-0003uE-12; Tue, 15 Jun 2021 13:02:55 +0000
Received: by outflank-mailman (input) for mailman id 142001;
 Tue, 15 Jun 2021 13:02:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lt8iT-0003u4-Pu
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:02:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lt8iS-00035Z-Be; Tue, 15 Jun 2021 13:02:52 +0000
Received: from [54.239.6.184] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lt8iS-00044l-2s; Tue, 15 Jun 2021 13:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=m1jyjRgu2Dt6wTFkFiDBGeSzM27snggXNYe3KLa+a1E=; b=OiTpbn1rDvAT6ARZPu/UoanYeM
	GRNPf+92Gnn9w9hwTJY5o07PA37t3EJzUih0kWfLhLcdMGjyb5yqD3TtvMnr787FeKBm5Q0/El5jr
	rGDF1SEzm5JIHDVoPggGCzs5HeOnxX1VuTs5iNSi9SClkyZWYobPnHRMM4Mivu0I4qGo=;
Subject: Re: [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more
 deprecated usages.
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <402a865e-972f-d0c4-78af-4be32894bfe9@xen.org>
Date: Tue, 15 Jun 2021 15:02:49 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Anthony,

On 11/05/2021 11:28, Anthony PERARD wrote:
> Patch series available in this git branch:
> https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.deprecated-qemu-qmp-and-cmd-v2
> 
> v2:
> - fix coding style in patch 3
> - all reviewed
> 
> The Xen 4.15 release that went out just before QEMU 6.0 won't be compatible
> with the latter. This patch series fixes libxl to replace its use of QMP
> commands that have been removed from QEMU, and to fix its use of deprecated
> commands and parameters that will be removed from QEMU in the future.
> 
> All of the series should be backported to at least Xen 4.15, or it won't be
> possible to migrate, hotplug CPUs, or change the CD-ROM on HVM guests when
> QEMU 6.0 or newer is used. QEMU 6.0 is about to be released, within a week.
> 
> Backport: 4.15
> 
> Anthony PERARD (8):
>    libxl: Replace deprecated QMP command by "query-cpus-fast"
>    libxl: Replace QEMU's command line short-form boolean option
>    libxl: Replace deprecated "cpu-add" QMP command by "device_add"
>    libxl: Use -device for cd-rom drives
>    libxl: Assert qmp_ev's state in qmp_ev_qemu_compare_version
>    libxl: Export libxl__qmp_ev_qemu_compare_version
>    libxl: Use `id` with the "eject" QMP command
>    libxl: Replace QMP command "change" by "blockdev-change-media"
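The first patch above amounts to swapping one QMP command for another on the
wire. A hedged illustration of the exchange (request followed by reply; the
field values here are made up, and the real "query-cpus-fast" response carries
additional fields such as "qom-path"):

```json
{ "execute": "query-cpus-fast" }
{ "return": [ { "cpu-index": 0, "target": "x86_64", "thread-id": 1234 },
              { "cpu-index": 1, "target": "x86_64", "thread-id": 1235 } ] }
```

Unlike the deprecated "query-cpus", "query-cpus-fast" does not interrupt the
guest vCPUs to gather its answer, which is part of why the old command was
removed from QEMU.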

I have committed the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:17:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:17:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142016.262168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt8wb-0005na-NT; Tue, 15 Jun 2021 13:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142016.262168; Tue, 15 Jun 2021 13:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt8wb-0005nT-Ka; Tue, 15 Jun 2021 13:17:29 +0000
Received: by outflank-mailman (input) for mailman id 142016;
 Tue, 15 Jun 2021 13:17:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=stKz=LJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lt8wa-0005nN-QY
 for xen-devel@lists.xen.org; Tue, 15 Jun 2021 13:17:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f66fa6dd-dced-478b-815f-eefcc0b382ea;
 Tue, 15 Jun 2021 13:17:27 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2056.outbound.protection.outlook.com [104.47.14.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-34-uoUbLgYzMtaGEI9M3AvL2w-1; Tue, 15 Jun 2021 15:17:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6861.eurprd04.prod.outlook.com (2603:10a6:803:13c::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Tue, 15 Jun
 2021 13:17:24 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Tue, 15 Jun 2021
 13:17:24 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0040.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4a::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.9 via Frontend Transport; Tue, 15 Jun 2021 13:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f66fa6dd-dced-478b-815f-eefcc0b382ea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623763046;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QKFKFdfFqqT/C3iPfS0rQqjeIekAZIMfQUKmdR4wnMg=;
	b=O7hpsFFeabU5kHzcqkG0IApL4L2pQTQpwoeJ7tpqAOuOODW3SvvKNyX4tCKshqvV0LTLZ2
	AMYxFQHa5wejJDYYuWtYRfViCwoX/S4k0X2apWXCTpuo5LkMZDDMXWHs8z0SlBxsxeIWk1
	UgWjCRVJ8d2xhJ7SRjv28Bi8iYaO3Ao=
X-MC-Unique: uoUbLgYzMtaGEI9M3AvL2w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VaJ4qP5l5OqYfYSCKRwgu39GNw41W4SckKrfXferUAEBcmrs2dOLCtt6nldu1xxIhPt8x0SHyYNHn3eni2730Xf8RKUN0vf6X42NFHF07amfXv9uVJ6IsNCYsUjQIhIG7oG4P7DARyBAdUlvBgu4m6TKFANh02nPpywAgC5R+dFEja5NIdwcDkbwqIPh0Aa+BZabo3d1cjV2UGwu091DZeshHzmQ6SHuak4LGpx6NTdHsly5X5vF8FuPM+YIArrvqpjKaUnIgzMH0wy8C/TZ/YnijnAfapyiugMmH+E3NVYKgHIG4b1xNtkycvyHH3MKwHOQyS4VLjHkjYOUR+3iZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+2wv8m0t6FtTzNKuCaRqkZ+24yZa54s3jsZPKWq2ub4=;
 b=enBckFUi/9zEedkjVhL9cc41LFvkJR/Qwj8BlV3OweWVYr+/B3rs4ILgqbsd/bNNkH9Y3spr8LbH8MBIu1H8KnWkGRjl6gHzAIOx+Mo08FH7mMrFvT4Ve012sp8mPk0skENRn9ENG4xJL0z24aBkrvh9ZrWSowSsL2VWz/m5GH70IvAl1uXbgTwTUJgDiJfwCha8uwZgn/JynXrfJPJhXVpmSL+EgxIEYRLwg8UHEPY/o2PtFYXmynb0ff/a3LksGINEeyPqOJYXXtHlnDgdZOk2xT1UjVKHDfNJL5JGNRlSPvsH+6LLbi9990krHdUFuz2KdhjdmWUQ0F39Jmw5Fw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in
 some cases")
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>, xen-devel@lists.xen.org,
 boris.ostrovsky@oracle.com, stephen.s.brennan@oracle.com,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <0efd0099-49bb-e20c-fe8d-fb97c9de2b63@citrix.com>
 <74378af9-5d04-f95e-3957-918cf5c81ded@suse.com>
 <YMdZKuKOnFKpQ3sg@Air-de-Roger>
 <3e9f4ea8-2fca-bf66-6345-0b73b960cba7@suse.com>
 <YMhg6OclYQ9AS+wD@Air-de-Roger>
 <3aaed845-b273-0688-4cac-3d440e3d58d3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e93fefae-a178-5fb9-1747-45d45818d66d@suse.com>
Date: Tue, 15 Jun 2021 15:17:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3aaed845-b273-0688-4cac-3d440e3d58d3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0040.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::8) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b8219028-e037-4658-69c2-08d92fffea32
X-MS-TrafficTypeDiagnostic: VI1PR04MB6861:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB68611FAB500190D73DA85D16B3309@VI1PR04MB6861.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jRgMXq5pdWPNBan+XdNz1xdNsbAbVC/KWpQdHkgTaDFKAdBsj/BAKCEQ5OEu9msOga6PgHf+Vhl0K3Gl3m5RnXw5WqxV/V/XqQCg8V8USile9HUGOgwxcAPTgRbfrL/0ozyhDKkNisg8JTM3bpcEnHDH+8KtX7QBFYk2ZVpxpUdW13zI/Cmavg0+wDxpf1QjZ+U9WWqQ7Sy5RlPzse3HE2NPpRQM7x3NudCi++yQhrwFcvjjz3asXKp6pitgRYHLNTvH76ho1wXoT5DLhuNMmUZl3B2h8IMMB33BZsxd0JtpN+kxaR2Oz0dMFuomUo9SF0ZzqdyZ8DuWig38zhuaF0F7GNA6hQmzKJ54O06zWNG/QY6VBo5BJxtepFilepyZbXhIoUWI3Wo+AL8dcbA98sQJA9JVpVJukCdOkTNp8PKXTHULiaMV81fyMjiuGyRcNPqkxLyOecL6hWOgldRzq5edY/4QwUUCvGVC3wJfGSnyShSGSyl3sTzfzHRslByGG5PH8UwWl/+VIcni5gi6hmezeDRXxM/kEmnCEz8UUh+CQAbJymYxZwq/3nA2ER6uTCLa28wqBWmtQJ8rYgPsaNFCHORDBTKNkyVD8JLkTiQ3PMlmOoLnscXBGItHxghl0u8VB9/bC0q+tO9Jc1gxfznczTcm3zR+QRZYgE8Ip3vp8zNi0o52bkJxQQaFZSuJ
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(366004)(396003)(136003)(346002)(376002)(8936002)(8676002)(478600001)(31696002)(54906003)(5660300002)(53546011)(6916009)(36756003)(83380400001)(4326008)(2616005)(16576012)(316002)(6486002)(956004)(66556008)(86362001)(66476007)(186003)(2906002)(38100700002)(31686004)(66946007)(26005)(16526019)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?h0++9GtSRHIqIk+Fm34uJ2m3ToIUXoeJ7ymmgKVNFGo3zc6d8nB37BIuz7r/?=
 =?us-ascii?Q?/BN+jDpF9Le/fEqr4sctkeO+sdKAe4GNWcRCZ+A7WX5afHAI4qCeVDAcDa3A?=
 =?us-ascii?Q?PYkNcNmm2habjIa99ejESdH0Eat+drd5mNYQ6mF0rMpJwnTYK++Ia6Cpsoef?=
 =?us-ascii?Q?jZoXCBmpXISYOAfa8746PLGTjlOzh3rdw+ppViJam9BOh9y0seKIa2BnFxdi?=
 =?us-ascii?Q?Wctgi/URZtj1z1mLTgH2u+Xniqwh7sArijz+rhLQqB6b9PMTofiDcyF4Szzb?=
 =?us-ascii?Q?2OOVr1DRVNq0B5vrwHYkYONSTnLBC8J0W+fCm4YVuCq8YIpqJog2y7Vr9WYz?=
 =?us-ascii?Q?BPpQW0/8IZ2nYM1zyr+50rjq9hJx1wH7v77zpho5GmO9sbw0qEJCmIlVtP7z?=
 =?us-ascii?Q?U1rtOxZtSZFcdCa5AT3Y4RButd/uloZPsNMyw8203nRqwK8GmRrKVo903TBf?=
 =?us-ascii?Q?U1CuOhnguOLtQZjOLkzBZDsX7sX6GG7SC8u92dQNdwb/+layFGFZej4fJ71c?=
 =?us-ascii?Q?9coKbUBMMvxAjUzBjWaBC7EDx3vdo0YNT+xJ7rClHgMflv4gTYGGkngZTuMu?=
 =?us-ascii?Q?b00MjBPcWXGQRnWYYv6YrlmZkg0XOWX5NiWkvtyuCmn4F9Afn+dQOW9E9u/w?=
 =?us-ascii?Q?I3+GdHb/KVflahKB6a7xNLdvTb6IXS/RcYfCft0+Z0eM1YQGYRqqb4VuFknT?=
 =?us-ascii?Q?H9wYb8DUu6Wi7aaLlm08SiW1Gkdi58RCdPLiQz3K0b8nlJiqSEfGbZD06H/w?=
 =?us-ascii?Q?GwFkNgsssj4lqKs+m4XoK3nCT5i1Pj0bVdyY692cLqV1gE+5OM7s1ldW+OKG?=
 =?us-ascii?Q?4XIs5bZyqNCIpFwnDnDNEBz0RtU4sASItvgkl01YhZXuWnJk5JwoGlm8RMOx?=
 =?us-ascii?Q?Xw/BpT2MkUFZkywVg3nbJaK7VsKfcbvpQcFjjLbtIaUP5GOfzFuIyRWmxf6B?=
 =?us-ascii?Q?QPu/FcwJNBIfCavvuMzZOXq9RlflXcLvJckdL1KvKFSRI6YF2E2RvoR5XKe+?=
 =?us-ascii?Q?faLGAYq1zGQG+FVSqmY6FNrg+0lIg03fbzr1CzbNXHLfYNre7KxwHYVTrSKu?=
 =?us-ascii?Q?N54PrxUmmSj5KmQVZ9g7vKLziIeXmpw0ZF2jtitL01EVTo4qA+fZlGM3uHgF?=
 =?us-ascii?Q?LJ8PNktai5Vj/nteh/88yb5DYsrffQfZ9OtyeuhUotwufo+qSGQzBiLDMaGY?=
 =?us-ascii?Q?df9BHYxUhjO2IrdM9sdpP//LxI7VTFlDAt890pwHxfbDoeLr7mdZliwzZccP?=
 =?us-ascii?Q?O6qFNhlKlZgghrOaE6XSqj5/Iryih5Z0JZX8HFiNGp9+bRX5n5cfUeWstL1g?=
 =?us-ascii?Q?uRj2+ARxXQnQ1FmhbtavUSlO?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b8219028-e037-4658-69c2-08d92fffea32
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2021 13:17:24.3213
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zPDMK6imvRJxXW4sN3Q9M6ec8rcY9HLPfCZOQvuzjvNVLHj6+006yqJRxcWzMgTH1mpJsTzUk/U/7YlZHGzTrA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6861

On 15.06.2021 11:17, Andrew Cooper wrote:
> On 15/06/2021 09:12, Roger Pau Monné wrote:
>> On Mon, Jun 14, 2021 at 06:01:17PM +0200, Jan Beulich wrote:
>>> On 14.06.2021 15:27, Roger Pau Monné wrote:
>>>> On Mon, Jun 14, 2021 at 01:53:09PM +0200, Jan Beulich wrote:
>>>>> x86/vpt: fully init timers before putting onto list
>>>>>
>>>>> With pt_vcpu_lock() no longer acquiring the pt_migrate lock, parties
>>>>> iterating the list and acting on the timers of the list entries will no
>>>>> longer be kept from entering their loops by create_periodic_time()'s
>>>>> holding of that lock. Therefore at least init_timer() needs calling
>>>>> ahead of list insertion, but keep this and set_timer() together.
>>>>>
>>>>> Fixes: 8113b02f0bf8 ("x86/vpt: do not take pt_migrate rwlock in some cases")
>>>>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> Thanks for looking into this so quickly, and sorry for not realizing
>>>> myself when relaxing the locking. Adding the timer to the list without
>>>> it being fully initialized was a latent issue even if protected by the
>>>> lock initially.
>>>>
>>>> Provided testing shows the issue is fixed:
>>> I guess the change here is needed anyway, even if testing finds there's
>>> still something amiss?
>> Indeed, just wondered whether there might be other instances using a
>> similar pattern, but I'm not able to spot any.
>>
>> It might even be better to fix other issues (if any) on a different
>> commit.
>
> To be honest, this change is clearly good, and necessary. I'd be
> tempted to commit it now, as is, irrespective of whether there are
> further bugs in this area.

Done.

Jan
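The bug fixed above is a publish-before-init ordering problem: the timer was
placed on a list that other parties may already be iterating before it was
fully initialized. A toy C sketch of the corrected ordering (the types and
names below are simplified stand-ins, not Xen's actual vpt structures, and
real code additionally needs the appropriate locking or barriers around the
insertion):

```c
#include <assert.h>
#include <stddef.h>

/* Miniature stand-in for a periodic timer. */
struct timer {
    int initialized;
};

struct node {
    struct timer t;
    struct node *next;
};

/* List that concurrent readers may iterate without the writer's lock. */
static struct node *list_head;

static void init_timer(struct timer *t)
{
    t->initialized = 1;
}

static void list_insert(struct node *n)
{
    n->next = list_head;
    list_head = n;
}

/*
 * Correct order: fully initialize first, then publish on the list.
 * Any iterator that observes the node then sees it initialized.
 * (In the buggy ordering, list_insert() ran before init_timer().)
 */
static void create_periodic_time(struct node *n)
{
    init_timer(&n->t);  /* must happen before the node becomes visible */
    list_insert(n);
}
```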



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:27:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142025.262188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96C-0007KJ-Oz; Tue, 15 Jun 2021 13:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142025.262188; Tue, 15 Jun 2021 13:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96C-0007KC-L2; Tue, 15 Jun 2021 13:27:24 +0000
Received: by outflank-mailman (input) for mailman id 142025;
 Tue, 15 Jun 2021 13:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt96B-0007In-0F
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:27:23 +0000
Received: from mail-pj1-x102a.google.com (unknown [2607:f8b0:4864:20::102a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 153df43e-e4c4-4cfe-afa2-84235cb6fb9c;
 Tue, 15 Jun 2021 13:27:21 +0000 (UTC)
Received: by mail-pj1-x102a.google.com with SMTP id
 22-20020a17090a0c16b0290164a5354ad0so1806485pjs.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:27:21 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id k70sm16257566pgd.41.2021.06.15.06.27.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:27:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 153df43e-e4c4-4cfe-afa2-84235cb6fb9c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ju6S8lgCxf3St5TjBpM/zWBg3zY1k2oOfdLAzP6ctcg=;
        b=S0KOTmQ1dyMFUgjraAGrRI0c2Mihsv8RUfx+tMl8U8V5wwFpZrXslfg71+0gtaQ4MS
         920rIvXLA/w2InlV9cjhFj31GrTJmfYtMvDEkxHMkIU3x8/eylAN0MxowVG16KZdI3xd
         R6EZBcTiAFbROkN0bkdkSLOWCWkPs3n3Q835U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ju6S8lgCxf3St5TjBpM/zWBg3zY1k2oOfdLAzP6ctcg=;
        b=WvZT+bFdZF6Dk9QtXQ6sTwL0WayKPS2jDKDmxD1vMBEGqOi9ijLnNIH/gPUIBl/FSF
         a24r4YXP+N+HIoY+14uULaktKKAAUzPeWPmQJfWUwnwW+GBjaoEXHDPUen8MmRzaP7bl
         gTChWEgz0dtZaIoe5aBpzDXOoS3p6Ho+2u+J6fqA7PTWm1i5oS5fzvpHX/1LAZvER1Sd
         MEW53KAyEe0fIPyMbNZJms/Xun5c+qDaZvCWVVXS5ec08OV2MMq1ZXQHHJ88Aatat8N4
         m6W+Go1F4+1WlhgEGucF/YktubjCLWRzQI8LwtaJwAyLBKjBDIPMrmIOVSF6RnFJucMM
         Sgow==
X-Gm-Message-State: AOAM530jALzCatHPmlPLGoISZrF+aKk1oEzP4/TfuX/qQXPSNPCqdRCp
	th4Mua2Yq+/3IOhwDFEmHcDt5Q==
X-Google-Smtp-Source: ABdhPJxr9A5pl6/ZdjjYY9mlXXb/uiYKpVxDCD4IZ9OsbKJOMyQWEYEFuItILAhw43oMKDFA7z0SMA==
X-Received: by 2002:a17:90a:be0b:: with SMTP id a11mr25279126pjs.197.1623763641154;
        Tue, 15 Jun 2021 06:27:21 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 00/12] Restricted DMA
Date: Tue, 15 Jun 2021 21:26:59 +0800
Message-Id: <20210615132711.553451-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b], which show a
full chain of exploits; [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
On its own, the feature provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. the MPU in ATF on some ARM
platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
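The approach above is wired up through a device-tree binding added later in
the series ("dt-bindings: of: Add restricted DMA pool"). A minimal,
hypothetical fragment of what a user of the binding might look like (the node
names, addresses, and sizes here are invented for illustration):

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* Region the restricted device is allowed to DMA into; streaming
     * DMA is bounced through it by the swiotlb. */
    wifi_pool: restricted-dma-pool@50000000 {
        compatible = "restricted-dma-pool";
        reg = <0x0 0x50000000 0x0 0x400000>;  /* 4 MiB, address made up */
    };
};

pcie_wifi {
    /* The device is tied to its pool via memory-region. */
    memory-region = <&wifi_pool>;
};
```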

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/


Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add restricted DMA alloc/free support
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  40 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  60 +++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 255 +++++++++++++-----
 16 files changed, 380 insertions(+), 103 deletions(-)

-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:27:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142026.262199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96M-0007du-0Y; Tue, 15 Jun 2021 13:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142026.262199; Tue, 15 Jun 2021 13:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96L-0007dl-TH; Tue, 15 Jun 2021 13:27:33 +0000
Received: by outflank-mailman (input) for mailman id 142026;
 Tue, 15 Jun 2021 13:27:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt96J-0007cp-R1
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:27:31 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28851870-1fc2-46db-be05-0bfe71541fec;
 Tue, 15 Jun 2021 13:27:31 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id s14so13248020pfd.9
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:27:31 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id h28sm15722759pfr.10.2021.06.15.06.27.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:27:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28851870-1fc2-46db-be05-0bfe71541fec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=6Lex2qj4ya+gw4pq8ddXA1pOmW2WVbY6zj6BxcRRBt8=;
        b=ZLl3oH9HuXkZeTNYiDjmclcvlU1fHG6FEW/+EtgRMzMPns4DtPDPQudaVvva/BDuGC
         zGtHJJpxxCsxlT5RjT/7fLZ7EGq8AQYBsptRgMAZu90A/crqTTwQ5XbsIgcgVUdC3B5B
         +vI80sQbIyAPOwHeiiuB3ERZd8tb8uM0R7nRI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=6Lex2qj4ya+gw4pq8ddXA1pOmW2WVbY6zj6BxcRRBt8=;
        b=o115MfqzjoyYXfOJgwHkSkt4xryBBVkfiWgmtlFTrK/s5JVylaZdCTqhVFAOAVQqGF
         3wzTEJFPJ5pmjYiXY2UyrFi25vxqjGIVT0+yWsh1hRny5c4sf28a+6AJE4DKGe1DumGH
         wVo+Gp+11G38aNgof06+GkX78tKmJ0ReF7jbo+GddiW6gxE4lO0U+Na3w2/oO7QDrCI/
         tmRf1GEfQDv5DNGNdMZwV/f7LohLYSsrfH4rlh+rByNUIL/xESPYZFZuQ7ssXe+1Tn4a
         0qKqczrxc0ipQWjUuJUK4flAIdag4TSwE6JsnwOEr1mmhD5pqq0CKKWWJmy5LAB5qZIH
         3LMg==
X-Gm-Message-State: AOAM530zXmUEZPjrCtjU8DwxuA0xC82hY/pnktSnqdmivOozFn8L/Z5u
	W6qRprg4dQBM99q2C1VEP7g0XA==
X-Google-Smtp-Source: ABdhPJyP5TuiWqaeeXHxX9/tePPfMBb6ZUa8aOcCKUq7KZnuuyAqkWTH454AhakZnHXq7vRhTx7pnw==
X-Received: by 2002:a62:1ec4:0:b029:2fb:53cd:1dcb with SMTP id e187-20020a621ec40000b02902fb53cd1dcbmr4017356pfe.16.1623763650269;
        Tue, 15 Jun 2021 06:27:30 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 01/12] swiotlb: Refactor swiotlb init functions
Date: Tue, 15 Jun 2021 21:27:00 +0800
Message-Id: <20210615132711.553451-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, that performs the io_tlb_mem
struct initialization, so the code can be reused.
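The shared initialization that this patch hoists into one helper can be illustrated with a standalone sketch. The struct layout is simplified and the constant values (IO_TLB_SHIFT, IO_TLB_SEGSIZE, io_tlb_offset's arithmetic) are assumed stand-ins, not taken from this patch; the sketch also skips the virtual-address zeroing since it has no real mapping to clear:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed values and simplified types, not the kernel's definitions. */
#define IO_TLB_SHIFT   11
#define IO_TLB_SEGSIZE 128
#define INVALID_PHYS_ADDR (~(unsigned long long)0)

struct io_tlb_slot {
	unsigned long long orig_addr;
	size_t alloc_size;
	unsigned int list;
};

struct io_tlb_mem {
	unsigned long nslabs;
	unsigned long long start, end;
	unsigned int index;
	int late_alloc;
	struct io_tlb_slot *slots;
};

/* Mirrors the logic hoisted into swiotlb_init_io_tlb_mem(): fill the
 * bookkeeping fields once, then reset every slot to the "free" state. */
static void init_io_tlb_mem(struct io_tlb_mem *mem, unsigned long long start,
			    unsigned long nslabs, int late_alloc)
{
	unsigned long i;

	mem->nslabs = nslabs;
	mem->start = start;
	mem->end = start + ((unsigned long long)nslabs << IO_TLB_SHIFT);
	mem->index = 0;
	mem->late_alloc = late_alloc;
	for (i = 0; i < nslabs; i++) {
		/* i % IO_TLB_SEGSIZE stands in for io_tlb_offset(i). */
		mem->slots[i].list = IO_TLB_SEGSIZE - (i % IO_TLB_SEGSIZE);
		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
		mem->slots[i].alloc_size = 0;
	}
}
```

Both the early (memblock) and late (page-allocator) init paths can then call this one helper, differing only in the late_alloc flag and how the backing memory was obtained.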

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 49 ++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..c64298e416c8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,8 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:27:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142028.262210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96U-00084d-EE; Tue, 15 Jun 2021 13:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142028.262210; Tue, 15 Jun 2021 13:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96U-00084W-Aa; Tue, 15 Jun 2021 13:27:42 +0000
Received: by outflank-mailman (input) for mailman id 142028;
 Tue, 15 Jun 2021 13:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt96T-00083S-5J
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:27:41 +0000
Received: from mail-pf1-x42b.google.com (unknown [2607:f8b0:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6d67b89-e23b-4fd9-b3c6-2dcfcb792f4b;
 Tue, 15 Jun 2021 13:27:40 +0000 (UTC)
Received: by mail-pf1-x42b.google.com with SMTP id c12so13276006pfl.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:27:40 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id p6sm217209pjh.24.2021.06.15.06.27.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:27:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6d67b89-e23b-4fd9-b3c6-2dcfcb792f4b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=9+wXpY/H82TQMYJceCCK5GarrRWjIymFsSn79ihfI/w=;
        b=Ityiyn/6rZ6eEobs+2lnH4TKbbZk+VdtN2w2T43meIIZNh9+72wW8S0VDK8goTPM0v
         VD9NihJYeGETrBdORYc12PZdn+8m79SSBJ6F1LfLiwMAUohS+C8HkRvwkY21VvJTZM8O
         JQ8rzrFr7pberm347Pyh+hg1b2fiB3MgVNUFQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=9+wXpY/H82TQMYJceCCK5GarrRWjIymFsSn79ihfI/w=;
        b=dL57QCZiEeoyp18iPNlW0dYqXDXDJykUCoeFX9dWMowk63sjxyeAIeSy80lYxcfobs
         IFzagHi4lGokaANOD0sRCh90m4k65XALpnGrttTBxXQTfsj4rKJd0phNG99j+pBSDnYE
         z9j9dX1niJ+lcvdZJBJwFvafmcip+eThpTtxv8Pl6cWJgMmDjp0zkpsp216c4jDd443j
         thVpvfJoY0G9ihRWsHxGk75T9Ph3RRKGs8TbUhRBGqmVZU1LVE/zARMbCYyXNRj7+gce
         Ss/4XueRt/eSIpGnZDKOotOaohEPSj4Qu83jYc24NL2KrlCEQWDPtyvqo4Vgk7eUCOFF
         T9kQ==
X-Gm-Message-State: AOAM5321K0dzyxax5KT7SzODyWTcJvoKjP74VvedEFMIl2ID1hsYrFRu
	gzUN3qeMVohT5xcJxO/pmlwgvg==
X-Google-Smtp-Source: ABdhPJx713+ejHrsLud4m0g5wl75An7KsmYTto2sbhWgatvTfcJwFg19W074/BIx2abKYG48hp2c6g==
X-Received: by 2002:a63:a805:: with SMTP id o5mr22403221pgf.328.1623763659827;
        Tue, 15 Jun 2021 06:27:39 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Tue, 15 Jun 2021 21:27:01 +0800
Message-Id: <20210615132711.553451-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c64298e416c8..97c6ad50fdc2 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -661,19 +661,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:27:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:27:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142031.262221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96d-0000Cj-Mg; Tue, 15 Jun 2021 13:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142031.262221; Tue, 15 Jun 2021 13:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96d-0000Bg-JC; Tue, 15 Jun 2021 13:27:51 +0000
Received: by outflank-mailman (input) for mailman id 142031;
 Tue, 15 Jun 2021 13:27:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt96c-00008I-0w
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:27:50 +0000
Received: from mail-pl1-x633.google.com (unknown [2607:f8b0:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb5c12c0-b778-4d80-ad69-7c15a331c80b;
 Tue, 15 Jun 2021 13:27:49 +0000 (UTC)
Received: by mail-pl1-x633.google.com with SMTP id e1so8454597pld.13
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:27:49 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id k18sm2754133pff.63.2021.06.15.06.27.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:27:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb5c12c0-b778-4d80-ad69-7c15a331c80b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=tVRlHSBvojaNtd1pcEKRrz7NPHsOzWEMDgZ+1zbmGdU=;
        b=Cms1BF5oSGt10F2UP+9O24JLfXZ4Lgyb9VQETesDzBTKgGS2FhYMDpDInrIK+BxGXi
         O42ERCktMhFmjKYBGQo46u5TM2rojL1WmkdAsDK33i17jwJ2EqHwyU915h0pGnIwGy1f
         lh1qyRzNXBWb9F0PmNUmShWK2hwpwgmn4zTDg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=tVRlHSBvojaNtd1pcEKRrz7NPHsOzWEMDgZ+1zbmGdU=;
        b=aonaMKAV0EOW+7IrZHIgiUjnIlmPOkFfg468g0DO9Ctk/ceXs3TGjwLG5ho99AsGBB
         ObPlU/YtI8vLIHVU7YV3XcnYfD6YyhGGJ6YXeHc8SxvYb+0LinFZBk810w7c17W3aUqm
         m70vwpVThCK/xLPdEcCDjfQtkCUwFtyGXsUyRmllhnFj73kKefyXv5+1tKasu7Coefyv
         2t/hXsvfCUGfsqM2BVYxC9nfuk/3HLCEGQwHh/Un91m87V11Cv+aVbFAXBf8x7Q4gyL0
         UITd+7Zue0Kix/EVZv7bIPIi4XATm9+/S9wQlFKB43mvddtZCC87V1ZmVwm8yotZx5Qb
         I2mQ==
X-Gm-Message-State: AOAM530m2fMuFy6BgFcJCeUqKpw338lTaxaRdlP3ORWnPXNpyOEtanTL
	MQx8p2mji8qR8OGXrAYNk00Omg==
X-Google-Smtp-Source: ABdhPJwk8KMv/p2fIKu5kDhwGYYH+3Bkx30q2cCSNaBqyz4utMpWcyKvcqe+i2VzWvJcpDMj+Kp1ZA==
X-Received: by 2002:a17:90b:3ecb:: with SMTP id rm11mr24584966pjb.95.1623763668526;
        Tue, 15 Jun 2021 06:27:48 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Tue, 15 Jun 2021 21:27:02 +0800
Message-Id: <20210615132711.553451-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always keep a pointer to the swiotlb pool in use in struct device. This
helps simplify the code for supporting other pools.
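The pattern in the device_initialize() hunk below can be sketched in miniature: every device starts out pointing at the default pool, and all pool lookups then go through the device, so a restricted pool only has to override one pointer. Types and values here are simplified stand-ins, not the kernel's:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures. */
struct io_tlb_mem { unsigned long nslabs; };

static struct io_tlb_mem global_pool = { .nslabs = 1024 };
static struct io_tlb_mem *io_tlb_default_mem = &global_pool;

struct device { struct io_tlb_mem *dma_io_tlb_mem; };

/* As in the patch's device_initialize() hunk: default to the global
 * pool; a restricted pool can replace the pointer later. */
static void device_initialize(struct device *dev)
{
	dev->dma_io_tlb_mem = io_tlb_default_mem;
}

/* Callers consult the device's pool instead of the global one. */
static unsigned long pool_slabs(const struct device *dev)
{
	return dev->dma_io_tlb_mem->nslabs;
}
```

This is why the swiotlb.c hunks below can replace io_tlb_default_mem with dev->dma_io_tlb_mem mechanically: for ordinary devices the two are the same object.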

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index b8a8c96dca58..eeb2d49d3aa3 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2846,6 +2847,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index 4443e12238a0..2e9a378c9100 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -432,6 +432,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -540,6 +541,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 97c6ad50fdc2..949a6bb21343 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -339,7 +339,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -421,7 +421,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -498,7 +498,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -549,7 +549,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:28:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142036.262232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96o-0000oJ-00; Tue, 15 Jun 2021 13:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142036.262232; Tue, 15 Jun 2021 13:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt96n-0000oC-So; Tue, 15 Jun 2021 13:28:01 +0000
Received: by outflank-mailman (input) for mailman id 142036;
 Tue, 15 Jun 2021 13:28:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt96m-0000bS-M3
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:28:00 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4fe8306-f163-44d0-a54a-0ce1cfe9df43;
 Tue, 15 Jun 2021 13:27:58 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id ei4so11781805pjb.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:27:58 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id q4sm16299955pfh.18.2021.06.15.06.27.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:27:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4fe8306-f163-44d0-a54a-0ce1cfe9df43
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=PL5BpOXQsaKo+GbzXNjJkzxAIqTXayil+SaEElw08ww=;
        b=gXf3FI+MKQU9QcRMF4+GEgHu9Yr6jqOOYhv/2BQubzcCzxA9yug58HjuKeTKJqpiIm
         xMh+J75/TSxDUNIZVLMAe9Xc3O6yu0HsySKJFmfbZ3ERdlsu5haZ2zNpnwIp0kTsy19X
         R0zXSGZvUZd7EQlrbLKK/nNYd0EAZ9VG3m5/Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=PL5BpOXQsaKo+GbzXNjJkzxAIqTXayil+SaEElw08ww=;
        b=DMNCcb8M5/VSABtl6BbJ2Hvs2FraxkUbQKj+8iNvzVvlJXhy7Eww7odJnyF4lHTDBL
         pQfAw5FBKZTC5G6ukc3sKTlQ75xZkDz1cjs3CLYojB1Tt+srKkXbViNqGad32iV5523d
         +NDXlKAhumcyqfjev4UDPswuM8aHwS3n809V/AzkrC6ZwfaA1gXSM7Ysx/Z/4VqiQQHM
         NmYFW8b7KaB9XG0yOHFVM+4y1u6MuOVm3I88vfrSjSgQr4Jyd8iD+R2VnCaWZp6o6tJ3
         lFKXkRrOCCAZKMjEtOjKnWzr08ngeKEybGwCFdxFgj8CyBnb0yQxlN9H36ivkw1ohMLP
         z5OA==
X-Gm-Message-State: AOAM533DHNaIoml6Sp/Q5O6JTBfVHheDry+E+/mvXJb1LktgVPsvd4zC
	LBGGV1VWD5YqSkUlSITiuyanvg==
X-Google-Smtp-Source: ABdhPJxRaXtg70rcLF2bfdnN2wxG8oMnLK9/SnXHkatnp219vaood2+iHVkIasptG0LQd+cbZ0cLtQ==
X-Received: by 2002:a17:90b:4c52:: with SMTP id np18mr4976518pjb.186.1623763677411;
        Tue, 15 Jun 2021 06:27:57 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Tue, 15 Jun 2021 21:27:03 +0800
Message-Id: <20210615132711.553451-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.
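
For context, a minimal compilable sketch of what the reworked helper does
(the struct definitions below are mocks invented for illustration, not the
kernel's real ones): the check becomes a range test against the io_tlb_mem
reached through the device, rather than the global io_tlb_default_mem.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock stand-ins for the kernel structures, for illustration only. */
typedef unsigned long phys_addr_t;

struct io_tlb_mem {
	phys_addr_t start;
	phys_addr_t end;
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* Mirrors the reworked helper: the pool is looked up through the
 * device instead of the global io_tlb_default_mem, so different
 * devices can later point at different pools. */
static bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
{
	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

	return mem && paddr >= mem->start && paddr < mem->end;
}
```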

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5d96fcc45fec..1a6a08908245 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -506,7 +506,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -577,7 +577,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -783,7 +783,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -796,7 +796,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -817,7 +817,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -834,7 +834,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..0c4fb34f11ab 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:28:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142048.262243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt97M-0001nV-9L; Tue, 15 Jun 2021 13:28:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142048.262243; Tue, 15 Jun 2021 13:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt97M-0001nM-63; Tue, 15 Jun 2021 13:28:36 +0000
Received: by outflank-mailman (input) for mailman id 142048;
 Tue, 15 Jun 2021 13:28:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt97L-0000bS-N8
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:28:35 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f93d018d-f236-4fc4-8b68-b2649dbc47df;
 Tue, 15 Jun 2021 13:28:06 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id t17so11445911pga.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:06 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id b21sm15684682pfp.134.2021.06.15.06.27.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f93d018d-f236-4fc4-8b68-b2649dbc47df
X-Received: by 2002:a63:6982:: with SMTP id e124mr21961725pgc.439.1623763686137;
        Tue, 15 Jun 2021 06:28:06 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Tue, 15 Jun 2021 21:27:04 +0800
Message-Id: <20210615132711.553451-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.
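
A minimal sketch of what the reworked helper reduces to (the struct
definitions here are mocks invented for illustration, not the kernel's):
whether SWIOTLB is "active" becomes a per-device question, answered by
whether the device has an io_tlb_mem attached.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock stand-ins for the kernel structures, for illustration only. */
struct io_tlb_mem { int unused; };

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* Mirrors the reworked helper: activity is decided by the device's
 * own pool pointer rather than the global io_tlb_default_mem. */
static bool is_swiotlb_active(struct device *dev)
{
	return dev->dma_io_tlb_mem != NULL;
}
```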

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..89a894354263 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index f4c2e46b6fe1..2ca9d9a9e5d5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -276,7 +276,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 949a6bb21343..d07e32020edf 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -654,9 +654,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142061.262254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AO-0003OW-T4; Tue, 15 Jun 2021 13:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142061.262254; Tue, 15 Jun 2021 13:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AO-0003OP-Q0; Tue, 15 Jun 2021 13:31:44 +0000
Received: by outflank-mailman (input) for mailman id 142061;
 Tue, 15 Jun 2021 13:31:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lt9AN-0003OJ-T5
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:31:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lt9AL-0003bn-Hs; Tue, 15 Jun 2021 13:31:41 +0000
Received: from [54.239.6.184] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lt9AL-00065V-9R; Tue, 15 Jun 2021 13:31:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: PING Re: [XEN PATCH] libs/foreignmemory: Fix
 osdep_xenforeignmemory_map prototype
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20210601154147.55799-1-anthony.perard@citrix.com>
 <a5d4f4ae-21b9-9798-5501-2c288a70e7b4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d80b6904-ff5c-33d6-b0e3-6882fe3e8e89@xen.org>
Date: Tue, 15 Jun 2021 15:31:38 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <a5d4f4ae-21b9-9798-5501-2c288a70e7b4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ian & Wei,

On 02/06/2021 10:25, Jan Beulich wrote:
> On 01.06.2021 17:41, Anthony PERARD wrote:
>> Commit cf8c4d3d13b8 made some preparation to have one day
>> variable-length-array argument, but didn't declare the array in the
>> function prototype the same way as in the function definition. And now
>> GCC 11 complains about it.
>>
>> Fixes: cf8c4d3d13b8 ("tools/libs/foreignmemory: pull array length argument to map forward")
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Ian - this (or whichever alternative might be chosen to address gcc11's
> valid complaint) also will want backporting.

I was about to commit this patch and noticed that it is still missing an 
ack from the tools maintainers. @Ian, @Wei, can you provide one?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142066.262265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AS-0003go-57; Tue, 15 Jun 2021 13:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142066.262265; Tue, 15 Jun 2021 13:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AS-0003gf-1U; Tue, 15 Jun 2021 13:31:48 +0000
Received: by outflank-mailman (input) for mailman id 142066;
 Tue, 15 Jun 2021 13:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt984-0000bS-OW
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:29:20 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d262b5c-53d2-4f64-9ee2-c21b38f94fba;
 Tue, 15 Jun 2021 13:28:51 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id
 o10-20020a17090aac0ab029016e92770073so2207456pjq.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:51 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id k63sm2609312pjh.13.2021.06.15.06.28.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d262b5c-53d2-4f64-9ee2-c21b38f94fba
X-Received: by 2002:a17:90a:10d0:: with SMTP id b16mr15680796pje.23.1623763730811;
        Tue, 15 Jun 2021 06:28:50 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 10/12] swiotlb: Add restricted DMA alloc/free support
Date: Tue, 15 Jun 2021 21:27:09 +0800
Message-Id: <20210615132711.553451-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} to support memory allocation
from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, for atomic coherent
allocations one must set up a separate device coherent pool via
shared-dma-pool and use dma_alloc_from_dev_coherent instead.
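
As a rough illustration of the allocation order (every function and struct
below is a mock invented for this sketch; the real logic lives in
__dma_direct_alloc_pages() in kernel/dma/direct.c): the restricted pool is
tried first and preferred, and the contiguous allocator is used only as a
fallback when it yields nothing.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock page and allocators, for illustration only. */
struct page { int from_restricted_pool; };

static struct page restricted_page = { 1 };
static struct page contiguous_page = { 0 };

static bool restricted_pool_available;

/* Stand-in for swiotlb_alloc(): returns a page from the restricted
 * DMA pool when one is configured for the device, NULL otherwise. */
static struct page *swiotlb_alloc_mock(size_t size)
{
	(void)size;
	return restricted_pool_available ? &restricted_page : NULL;
}

/* Stand-in for dma_alloc_contiguous(). */
static struct page *dma_alloc_contiguous_mock(size_t size)
{
	(void)size;
	return &contiguous_page;
}

/* Sketch of the fallback order: restricted pool first, contiguous
 * allocation only if the pool produced nothing.  (The coherency
 * check on the returned page is omitted here.) */
static struct page *alloc_pages_sketch(size_t size)
{
	struct page *page = swiotlb_alloc_mock(size);

	if (!page)
		page = dma_alloc_contiguous_mock(size);
	return page;
}
```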

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 15 +++++++++++++
 kernel/dma/direct.c     | 50 ++++++++++++++++++++++++++++++-----------
 kernel/dma/swiotlb.c    | 42 ++++++++++++++++++++++++++++++++--
 3 files changed, 92 insertions(+), 15 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index e76ac46ffff9..9616346b727f 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -157,4 +157,19 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 3713461d6fe0..da0e09621230 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,7 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+	}
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -142,7 +160,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +173,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * instead set up a device coherent pool via the shared-dma-pool
+	 * binding and allocate from it with dma_alloc_from_dev_coherent.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +260,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +270,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +296,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +306,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +334,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +353,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ef1ccd63534d..5e277eb65f92 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -460,8 +460,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -702,6 +703,43 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	/*
+	 * Skip io_tlb_default_mem since swiotlb_alloc doesn't support atomic
+	 * coherent allocation, which might otherwise break existing devices.
+	 * For atomic coherent allocation, one must set up another device
+	 * coherent pool via the shared-dma-pool binding and use
+	 * dma_alloc_from_dev_coherent instead, to avoid memory remapping.
+	 */
+	if (!mem || mem == io_tlb_default_mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142067.262270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AS-0003j9-J2; Tue, 15 Jun 2021 13:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142067.262270; Tue, 15 Jun 2021 13:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AS-0003iG-9C; Tue, 15 Jun 2021 13:31:48 +0000
Received: by outflank-mailman (input) for mailman id 142067;
 Tue, 15 Jun 2021 13:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt97p-0000bS-Nr
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:29:05 +0000
Received: from mail-pj1-x1034.google.com (unknown [2607:f8b0:4864:20::1034])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f14f3368-a6d4-4c4f-82e3-4192547ab217;
 Tue, 15 Jun 2021 13:28:42 +0000 (UTC)
Received: by mail-pj1-x1034.google.com with SMTP id
 z3-20020a17090a3983b029016bc232e40bso2207566pjb.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id d8sm16002774pfq.198.2021.06.15.06.28.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f14f3368-a6d4-4c4f-82e3-4192547ab217
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=r+B+WmWHIHMs91nx15AVM2itHN+i1YzQ775bO9+kHdU=;
        b=WzvexR7E+ALC8UHpMEnA+Cx3vlr1A1Cr5s3TW1YH4PposTtGKLhFpA/R/qQszzGPqE
         ordIiKc6mP07Rj60smVk6CJmnBRs8OEvLe0owSLDWKCjVANuxKyE3xs4HLfFTCYWWVik
         a6KT5EvKiBmg9I5cDjlQBmab8/PKoLLvYg/3Y=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=r+B+WmWHIHMs91nx15AVM2itHN+i1YzQ775bO9+kHdU=;
        b=N+VuhTjAUWcldZiiuYUIZCQggpOTkhdXek2vfqdvCHMgLg8YtRyrIBcNusoiSRJu54
         mWwAOStcGxNJmA+eTR/EhXx+T9ZPHW2pZX1ut96IXdXkAC3zxg0XnaudfZlPOVeRtkOy
         hwToddxBtxiW+ux7lGqM/T0cfJgK2MaXX6COfphV5H1cyQ/ilYyfGkkSMDUAoziOPZcB
         3f4TpvztJx31v/UvnGVslWlUQTr8W/9sDyQ9FcDF2cZdqIrX++8xQierxh6j7f6pus02
         P6IelToOymYkC5pKsYsU54su3FwYPnGFq+QQJJ7Ae8jIP8fGiUNcL74L2qqNm708iHR4
         ewnw==
X-Gm-Message-State: AOAM532NqofGkKrkg3NUYC7WNfzxq3mhb2unzkJ2KMJyDlvWZFj07KN4
	CzaToNoydiAk1/By7Ntd3kPJxg==
X-Google-Smtp-Source: ABdhPJxXGen/VsbHb/d7x2+yJI3KCk0Nmq9xnzsLjk06QOONO3j9/EsgGWLcwS/xwhPSaKB97/qqqw==
X-Received: by 2002:a17:90b:19cd:: with SMTP id nm13mr5085576pjb.226.1623763722158;
        Tue, 15 Jun 2021 06:28:42 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 09/12] swiotlb: Add restricted DMA pool initialization
Date: Tue, 15 Jun 2021 21:27:08 +0800
Message-Id: <20210615132711.553451-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of the global swiotlb setting, the restricted DMA pool is
preferred if one is available.

The restricted DMA pools provide a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
still needs a way to lock down memory access, e.g., an MPU.
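
For reference, a reserved-memory node matching the compatible string this
patch registers might look like the following devicetree sketch. The node
name, addresses, sizes, and the memory-region consumer linkage are
illustrative assumptions, not part of this patch:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Hypothetical 4 MiB pool; must be in the linear mapping and
	 * must not carry "reusable" or "no-map" (rejected by
	 * rmem_swiotlb_setup). */
	restricted_dma_pool: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

/* A hypothetical consumer device attached to the pool. */
wifi@10000000 {
	memory-region = <&restricted_dma_pool>;
};
```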

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 78 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index efcd56e3a16c..e76ac46ffff9 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3c01162c400b..ef1ccd63534d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -693,3 +700,74 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force = true;
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.\n");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142069.262284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AT-0004Bl-NH; Tue, 15 Jun 2021 13:31:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142069.262284; Tue, 15 Jun 2021 13:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AT-0004AN-Ib; Tue, 15 Jun 2021 13:31:49 +0000
Received: by outflank-mailman (input) for mailman id 142069;
 Tue, 15 Jun 2021 13:31:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt97V-0000bS-NN
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:28:45 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25319a37-1ef3-495c-9766-927262ffd713;
 Tue, 15 Jun 2021 13:28:15 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id
 s17-20020a17090a8811b029016e89654f93so1812886pjn.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:15 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id q145sm8796577pfc.60.2021.06.15.06.28.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25319a37-1ef3-495c-9766-927262ffd713
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=gECPOPsCtEK5Wckt8RcLyHfozwB/E+EP94LARsjIoQM=;
        b=mPVHkLymWWoJ5FXza4lDdJkjRkkGCCIqZROcG/jPVhk85ennw8Lh8afIhfaPbKo5fw
         xPoOamL8LmyrImoT77PgcPoa1udPeAWbdwayu1G1kJioGMIeK2BCidzwj4454R/AYVbe
         p31PisuhVJxV7c4fEx4O7jr/Zump68KiUpmW0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=gECPOPsCtEK5Wckt8RcLyHfozwB/E+EP94LARsjIoQM=;
        b=cgSM6pAXJP2WI/Hg9W/3vFgYn0Q8YYU/yM9jzNcZJ5wFyjFdXAtjP1wu3d0mvl8lxm
         67uOacaMKxMAYHSoVTdkE66t573TZhNMJFPskfFFJdPnhRMEwJ/ayT1OjfKnNlweiNt/
         jSBq6dlcISGbz6RUsXiG78sgNxtd33gJnV3IkzT4UWaO1yOXPrY8bfFgdErHE0rN2zTp
         EuwJ5hya4h9z/lJ3VpA/7oVojC/THYRxIayZRzEElIhOgFrRlzbkOSpBwk/QNwzE2eJg
         aXSHNbEl/LS3/qf/X5u2yDO+6LX32a4lviSWsXl3bRvA9JdeohbNDV/tB/me/1pSmFEv
         H1HA==
X-Gm-Message-State: AOAM530x8lTPr65eOAlxI42Q6LEHv0L3xnlr7wxD8F+4LnQkkMOozP3a
	kXx/qynynNPrsg3aGrSbGUGdWA==
X-Google-Smtp-Source: ABdhPJwap5O4zLmEMY+RUjCuo1vguR5XAd/2u88H1tpc55JUzosxwbsWkllIYJMBsZxK3z6SxqvnHg==
X-Received: by 2002:a17:90a:fe18:: with SMTP id ck24mr5167618pjb.158.1623763695284;
        Tue, 15 Jun 2021 06:28:15 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 06/12] swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
Date: Tue, 15 Jun 2021 21:27:05 +0800
Message-Id: <20210615132711.553451-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate the swiotlb_force setting into io_tlb_default_mem->force and
use it to determine whether to bounce the data or not. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 kernel/dma/direct.c     |  2 +-
 kernel/dma/direct.h     |  2 +-
 kernel/dma/swiotlb.c    |  4 ++++
 4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..efcd56e3a16c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force:      %true if swiotlb is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..3713461d6fe0 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..6c4d13caceb1 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d07e32020edf..5af47a8f68b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142070.262291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AU-0004Lv-DQ; Tue, 15 Jun 2021 13:31:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142070.262291; Tue, 15 Jun 2021 13:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AU-0004K6-6f; Tue, 15 Jun 2021 13:31:50 +0000
Received: by outflank-mailman (input) for mailman id 142070;
 Tue, 15 Jun 2021 13:31:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt97f-0000bS-NV
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:28:55 +0000
Received: from mail-pj1-x102e.google.com (unknown [2607:f8b0:4864:20::102e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e168b282-b877-436d-99ba-e8f4e047ac20;
 Tue, 15 Jun 2021 13:28:25 +0000 (UTC)
Received: by mail-pj1-x102e.google.com with SMTP id
 s17-20020a17090a8811b029016e89654f93so1813180pjn.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:25 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id b6sm15444521pfv.16.2021.06.15.06.28.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e168b282-b877-436d-99ba-e8f4e047ac20
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=G2K/m+Zr/C3L9kjElXXnw2MqplvvJistlV9RBDCxxbA=;
        b=S+0WgFWRpS+XVuyOr6ge+m8GeoamDCl8MImPmxPownLMaRnPddIW/4jWtOzq7baMS4
         2cYnPw3kqWnOh0beyRy8sCyEGJJQnDgQZsawvLBFKtk8G7k3bIkUl48o16Ce4bkzeIE8
         42Td6UN7L2/FOv7HufjRoYmlw39AIEDY4BEEo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=G2K/m+Zr/C3L9kjElXXnw2MqplvvJistlV9RBDCxxbA=;
        b=luJygGixtfdeDh93lhTDWSjgs6YHIykYAx28LYiXM6z48fJT5+ytbiwYZ8KyG3r8Bd
         y2b6UOcRna0ZEiGjRdRDlDNOq3m26jKDtUsVrrr+u6d8yZfth/6W5wb2RfrZbV5jaXy3
         9eXqsnWAlfgFupZlVRMyiTRKcArWRA49hC/F7+4HB49MVSLX6Kpfzg+PHZxvMlshuCKp
         lfzCfXjKM0ydSV+BIzBybthBAn7DeOmufwtIY7PZ9BsHTZKwejedSEifA1aiJbtya3QX
         He5EK8SIw+PmvItMTHw2o1qXCZO60pRcwCy6OrXeeCcd6mqzAgu1vTXJaI27sXP7+gSu
         VBKQ==
X-Gm-Message-State: AOAM5337WjV4RYD2/upgQLgsUrJaDvpFVQgICYC+4VS78rZT4N0O/B18
	UFPaG0fWSNhsOXWUnbDvtbvgag==
X-Google-Smtp-Source: ABdhPJzZB1FWHQkznzmBmSBFzh8JRoY7MD6aobVhux7mtkc928qojTPDdCJJ8YwZ2pCQlZ/ERmu5LQ==
X-Received: by 2002:a17:90a:5b07:: with SMTP id o7mr10771533pji.35.1623763704484;
        Tue, 15 Jun 2021 06:28:24 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Tue, 15 Jun 2021 21:27:06 +0800
Message-Id: <20210615132711.553451-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size to it for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 5af47a8f68b8..e498f11e150e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -422,8 +422,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -478,8 +478,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -520,7 +523,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -534,11 +537,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
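To illustrate the bookkeeping this patch moves into swiotlb_find_slots, here is a standalone userspace sketch (not the kernel code itself): each IO TLB slot records how many bytes of the mapping remain from that slot onward, so an unmap starting at any slot index can recover the allocation size. `struct slot` is a hypothetical stand-in for the kernel's mem->slots entries; IO_TLB_SHIFT matches the kernel's 2 KiB slot size.

```c
#include <stddef.h>

#define IO_TLB_SHIFT 11	/* 2 KiB per slot, as in the kernel */

struct slot {
	size_t alloc_size;	/* bytes remaining from this slot onward */
};

/*
 * Mirror of the loop this patch adds to swiotlb_find_slots: slot k
 * (counting from index) stores alloc_size minus k whole slots' worth
 * of bytes.
 */
static void record_alloc_sizes(struct slot *slots, int index, int nslots,
			       size_t alloc_size)
{
	for (int i = index; i < index + nslots; i++)
		slots[i].alloc_size = alloc_size - ((i - index) << IO_TLB_SHIFT);
}
```

For example, a 5000-byte mapping placed at slot 2 spans three slots, which record 5000, 2952, and 904 remaining bytes respectively.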
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142072.262309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AX-0004wi-M3; Tue, 15 Jun 2021 13:31:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142072.262309; Tue, 15 Jun 2021 13:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AX-0004wP-H6; Tue, 15 Jun 2021 13:31:53 +0000
Received: by outflank-mailman (input) for mailman id 142072;
 Tue, 15 Jun 2021 13:31:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt97k-0000bS-Ni
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:29:00 +0000
Received: from mail-pg1-x52e.google.com (unknown [2607:f8b0:4864:20::52e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 372e2d18-11d2-416e-af8b-075cb120ea65;
 Tue, 15 Jun 2021 13:28:34 +0000 (UTC)
Received: by mail-pg1-x52e.google.com with SMTP id v7so2696927pgl.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:28:34 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id g17sm7117013pgh.61.2021.06.15.06.28.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 372e2d18-11d2-416e-af8b-075cb120ea65
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Cc+9fFskPok5yanU+KBwhrDF032POUgNIPrJFjWpHUY=;
        b=UF5JJjf+p8IMYhkYNM8dISjQquNFfkf4iaq0k+iYxP9vTIwxyaJqsigdQrx1iycxm+
         rc3T9NaAGvNbYtm5GTXK1mA8pEMzIKnRY9iBgjBDFHGR7ylhLUjvmUFcFhLyT/dM51lt
         TiESxojr4yxy8lbSUjpOEANfg7K8Lf7k8QEBQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Cc+9fFskPok5yanU+KBwhrDF032POUgNIPrJFjWpHUY=;
        b=SbQM+Jmhf3P1pU88QyGQ2roLYsiQ6tDRbLMNTFnAt5iopE7zOeQF3UuL8pSk4nqqF7
         dZp0V7LnkfjUNkiJ+5ckVIrpXOVSmixtNhf3FO2Nr/4ZcwpkuRKYNwfJZIyMUdBbsEAx
         6YrCGURKwzVRFTCv1R9dM5W1E2EFoWyp8aQ6A1LSWvKCln2Su28yBXbr/KuwbStn9gNf
         7acyv2vHZk8mmQkIFXcJYCQUcFIO86cU0tAGsORRhi5ky4TKiTuA/pswOO6sL4Anhv70
         HTnh44FR7ttDXwBOkHXhH/ohev45nrV3HFIybSiDc46lzJ3QxrbgGJSR7AwYmxBJCjMK
         ToRQ==
X-Gm-Message-State: AOAM532rSp+r/pACK6zaaUimJZ9LV10Z2wple2egXA02rZcefDWlBxGm
	MKAknQk2wEHLcs8NDi3nidD3ZQ==
X-Google-Smtp-Source: ABdhPJzTf6sN3+RhgHcg3uPiCnP3J9xzXkoCN0kkeRq24lHy9+yaQTUPTjqMetXgaRZFbgh3lWQO0g==
X-Received: by 2002:a05:6a00:148e:b029:2fb:9761:eb8a with SMTP id v14-20020a056a00148eb02902fb9761eb8amr4420599pfu.48.1623763713478;
        Tue, 15 Jun 2021 06:28:33 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Tue, 15 Jun 2021 21:27:07 +0800
Message-Id: <20210615132711.553451-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e498f11e150e..3c01162c400b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -546,27 +546,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -601,6 +589,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142074.262314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AY-00051G-6W; Tue, 15 Jun 2021 13:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142074.262314; Tue, 15 Jun 2021 13:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9AX-0004zm-Sr; Tue, 15 Jun 2021 13:31:53 +0000
Received: by outflank-mailman (input) for mailman id 142074;
 Tue, 15 Jun 2021 13:31:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt98Y-0000bS-Pi
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:29:50 +0000
Received: from mail-pj1-x1031.google.com (unknown [2607:f8b0:4864:20::1031])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9017cfd7-4910-4b08-ad0c-1b384d7524de;
 Tue, 15 Jun 2021 13:29:08 +0000 (UTC)
Received: by mail-pj1-x1031.google.com with SMTP id ei4so11783760pjb.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:29:08 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id w27sm16447396pfq.117.2021.06.15.06.29.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:29:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9017cfd7-4910-4b08-ad0c-1b384d7524de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=JTiG8Mo/Nik+wIMq7cgtIy70t0SMnfIMs1UdiPtMcT8=;
        b=eMMTD49WILvEJwNkbyXll1fUBPjpxlVSOhGaqL9TwmPYonPep8XftjIi5HAggU5srI
         CLNpua3pQ7CwLVSW3dv0uDNKq24FS4175dYppaWlWO3XqF6T9hZPazFJ1he0aV5H6ErX
         dgTORgnp7aFTod/O8rpuewIUUw+E6SucoIApk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=JTiG8Mo/Nik+wIMq7cgtIy70t0SMnfIMs1UdiPtMcT8=;
        b=fYYXZ5sBZGRwdSC05TDzn3+DpWmHHo22pCeW6BmmAR1rtLCKVBnS4HfkV0Bi8HE/zb
         GrFDv0zfDc15Z6Ie+w/GilwhhEmLszk+gb6b4JwS65Tqc1rIbdbBCFDIXTr/+J3YhrKZ
         CbDJV5Lsqir6zp5Uk8YH066INAG8HOHs92CMYIBRwrOD/LHnNoCZFT+KTR797ajcrJuR
         rqHjnC7ghtmP9PglDMmulFu6FnvpPXpdPmB062Z+S31H6x+rgPUQUUBh8Hj3wosZF0Q7
         bimtbKSJtC0NpW97WQlOG6l9ILjzS/y/oYMas5zir752SwFW3cQ9nOQlUYrWa1g11S9z
         8IeQ==
X-Gm-Message-State: AOAM532O9wJEne+Kdm/BEG6mEQUH2YPeKEgSr6NhFy/bWZ+z7ti3f0xV
	pWPcMcOxNMSSIL4KcJwEbSZkhg==
X-Google-Smtp-Source: ABdhPJzXEtYDRC2fz4fUhVok40FY/FvrMnNmlVHfb+TKFwVcW+8fDTaQF2I0RwRDZ7J7BoI70NFDeg==
X-Received: by 2002:a17:902:e289:b029:10c:97e9:2c74 with SMTP id o9-20020a170902e289b029010c97e92c74mr4206776plc.34.1623763748216;
        Tue, 15 Jun 2021 06:29:08 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 12/12] of: Add plumbing for restricted DMA pool
Date: Tue, 15 Jun 2021 21:27:11 +0800
Message-Id: <20210615132711.553451-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
restricted DMA when a restricted-dma-pool memory region is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 3b2acca7e363..c8066d95ff0e 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1001,6 +1002,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..2defdca418ec 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 631489f7f8c0..376462798f7e 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 void fdt_init_reserved_mem(void);
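The selection rule in of_dma_set_restricted_buffer can be modelled in isolation: among the device's memory-region entries, the first one that is both compatible with "restricted-dma-pool" and available wins. The sketch below is a hypothetical userspace model, with `struct region` standing in for a parsed device-tree node and `pick_restricted_pool` standing in for the phandle-iteration loop in the patch.

```c
#include <stdbool.h>
#include <string.h>

/* Stand-in for a parsed device-tree memory-region node. */
struct region {
	const char *compatible;
	bool available;
};

/*
 * Return the index of the first region that is an available
 * restricted-dma-pool, or -1 if none qualifies (mirroring the patch,
 * which falls through and returns success without setting anything up).
 */
static int pick_restricted_pool(const struct region *regions, int count)
{
	for (int i = 0; i < count; i++)
		if (regions[i].available &&
		    !strcmp(regions[i].compatible, "restricted-dma-pool"))
			return i;
	return -1;
}
```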
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:31:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142076.262331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9Ab-0005f3-E0; Tue, 15 Jun 2021 13:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142076.262331; Tue, 15 Jun 2021 13:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9Ab-0005em-A6; Tue, 15 Jun 2021 13:31:57 +0000
Received: by outflank-mailman (input) for mailman id 142076;
 Tue, 15 Jun 2021 13:31:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt98O-0000bS-PH
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:29:40 +0000
Received: from mail-pj1-x102a.google.com (unknown [2607:f8b0:4864:20::102a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a923bd6-c1a6-4d10-a60f-47eb06fa6abb;
 Tue, 15 Jun 2021 13:29:00 +0000 (UTC)
Received: by mail-pj1-x102a.google.com with SMTP id g24so11780066pji.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:29:00 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:1846:5274:e444:139e])
 by smtp.gmail.com with UTF8SMTPSA id y205sm15379907pfb.53.2021.06.15.06.28.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:28:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a923bd6-c1a6-4d10-a60f-47eb06fa6abb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=WOU8IVzg+iYvk6N5b67KD5CKh5Wgn355zu5PCMe6mPVyyUQjES9RmEtfXkhuQMaXmA
         hHjDKIazD9xu9BtywBAgsYVdsMG3DqbHBxwCuAdZqvctaQSD9keTHyqXNfZ2Km99muyW
         YJc+kW7TLNcpm9kWU0l/wPI+emgUC6NSvuBJc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=fDFB/OtSWGheeEYv48Cd2meyG0HKuGXoqG6Jyvqb4DitAM7GvL6/84M9Twv6q6FHDB
         0KIbBcRRDjwa7WIICMcKelt4IKW5kTmcB0o12KwVD1sXOghqtpr+FayYNsygF+TXN2v1
         ILCXP6OmbTmeLLOOp5hWdHP/cDGh70qpI3j8bhzJQdEHMIdbcGoZn1NOUBaEjOWt4Qj5
         ZESiy24qkbR/JL/bQr7P2UGpksTVENFQMXJlSmBBgKMr0UfDF4qKJk2LOH1Iaq9NGQxG
         vmxbzpucLE4yct+6SvmaxSACQneXg5jMQE1ar/z2iI8FPkG/R+CcWMfIbMTr73utVUuN
         Vhbw==
X-Gm-Message-State: AOAM53030CDENpt8PNcP+ksgkj1w6m4mWFjbgSodXIyDNxy6vWeKBp/o
	OrCQytzURfy2OKigXqgBO/fkZg==
X-Google-Smtp-Source: ABdhPJwFBj2pKeWU+1SEbsBI+u3Jw3SyjbFfiN066dPovK3+ci3+q1oZRASaC88XjKL/ShDd5jcc2w==
X-Received: by 2002:a17:90a:a116:: with SMTP id s22mr24962655pjp.218.1623763739540;
        Tue, 15 Jun 2021 06:28:59 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v10 11/12] dt-bindings: of: Add restricted DMA pool
Date: Tue, 15 Jun 2021 21:27:10 +0800
Message-Id: <20210615132711.553451-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
References: <20210615132711.553451-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region with a restricted-dma-pool child node of the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose of restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock
+          down the memory access, e.g., an MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved, 64MiB at 0x50000000).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
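The binding text above explains why no-map must not be set: the device can only reach the restricted region, so streaming DMA from an arbitrary kernel buffer has to bounce through the pool, and the CPU needs a virtual mapping of the pool to copy data in and out for synchronization. A minimal illustrative sketch of that bounce step follows; the names (`restricted_pool`, `bounce_map`) are hypothetical and are not kernel API.

```c
#include <stddef.h>
#include <string.h>

/* Toy model of a restricted DMA pool: the only memory the device sees. */
struct restricted_pool {
	unsigned char buf[4096];
	size_t used;
};

/*
 * "Map" src for device access: copy it into the pool and return the
 * offset the device would DMA to/from, or -1 if the pool is exhausted.
 * The CPU-side memcpy is exactly what requires the pool to be mapped,
 * hence the binding's prohibition of no-map.
 */
static long bounce_map(struct restricted_pool *pool,
		       const unsigned char *src, size_t len)
{
	if (pool->used + len > sizeof(pool->buf))
		return -1;
	memcpy(pool->buf + pool->used, src, len);
	pool->used += len;
	return (long)(pool->used - len);
}
```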
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:34:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142107.262342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9DA-0008PT-DA; Tue, 15 Jun 2021 13:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142107.262342; Tue, 15 Jun 2021 13:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9DA-0008PK-9s; Tue, 15 Jun 2021 13:34:36 +0000
Received: by outflank-mailman (input) for mailman id 142107;
 Tue, 15 Jun 2021 13:34:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt9D9-0008Oo-DH; Tue, 15 Jun 2021 13:34:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt9D9-0003gh-9X; Tue, 15 Jun 2021 13:34:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lt9D9-0005ne-2t; Tue, 15 Jun 2021 13:34:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lt9D9-00058M-2K; Tue, 15 Jun 2021 13:34:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2MyD1/pQCSGAa1AK1FYevQUyFAod08BK5il3N7iG82Q=; b=vayfm7oVkRZN8HrLr5T62src3k
	Ak5T5S0rjC84qnecHKY+jDfO/NYSG3ujF6tH8f2OwCO1HNtmID1rRRRK5zx1LkrKZUUi3rOeu1sP4
	5G4qSLTQSQV+hV247Yy23zBgmGIUeScml3hPQeckt7+ScEtwdOtqEsA0xHhjEJM4fDcM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162837-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162837: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b8649cf2a3e673a4a8cb6c255e394b354b771550
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 13:34:35 +0000

flight 162837 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162837/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b8649cf2a3e673a4a8cb6c255e394b354b771550
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   11 days
Failing since        162368  2021-06-04 15:42:59 Z   10 days   22 attempts
Testing same since   162583  2021-06-09 23:44:58 Z    5 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1717 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:38:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142115.262356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9GY-0000gF-U7; Tue, 15 Jun 2021 13:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142115.262356; Tue, 15 Jun 2021 13:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9GY-0000g8-R6; Tue, 15 Jun 2021 13:38:06 +0000
Received: by outflank-mailman (input) for mailman id 142115;
 Tue, 15 Jun 2021 13:38:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9GX-0000g0-73
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:38:05 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c07f8ee3-1499-4aa0-8b4e-8a769df93f32;
 Tue, 15 Jun 2021 13:38:03 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4EF4667373; Tue, 15 Jun 2021 15:37:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c07f8ee3-1499-4aa0-8b4e-8a769df93f32
Date: Tue, 15 Jun 2021 15:37:58 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 01/12] swiotlb: Refactor swiotlb init functions
Message-ID: <20210615133758.GA20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-2-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-2-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:00PM +0800, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:38:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142116.262367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9Gj-0000zn-6F; Tue, 15 Jun 2021 13:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142116.262367; Tue, 15 Jun 2021 13:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9Gj-0000zg-2k; Tue, 15 Jun 2021 13:38:17 +0000
Received: by outflank-mailman (input) for mailman id 142116;
 Tue, 15 Jun 2021 13:38:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9Gh-0000z0-Sx
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:38:15 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37ee2dfa-b77c-47ea-b800-73a825c0616e;
 Tue, 15 Jun 2021 13:38:15 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id CC6DF68AFE; Tue, 15 Jun 2021 15:38:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37ee2dfa-b77c-47ea-b800-73a825c0616e
Date: Tue, 15 Jun 2021 15:38:12 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 02/12] swiotlb: Refactor swiotlb_create_debugfs
Message-ID: <20210615133812.GB20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-3-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-3-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:01PM +0800, Claire Chang wrote:
> Split the debugfs creation to make the code reusable for supporting
> different bounce buffer pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:38:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142123.262378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9H8-0001lz-Dt; Tue, 15 Jun 2021 13:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142123.262378; Tue, 15 Jun 2021 13:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9H8-0001ls-Ai; Tue, 15 Jun 2021 13:38:42 +0000
Received: by outflank-mailman (input) for mailman id 142123;
 Tue, 15 Jun 2021 13:38:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9H7-0001iZ-5l
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:38:41 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06bf06c1-4d25-42f8-bd58-cc20a42abdfe;
 Tue, 15 Jun 2021 13:38:34 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B7C1668B05; Tue, 15 Jun 2021 15:38:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06bf06c1-4d25-42f8-bd58-cc20a42abdfe
Date: Tue, 15 Jun 2021 15:38:32 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 03/12] swiotlb: Set dev->dma_io_tlb_mem to the
 swiotlb pool used
Message-ID: <20210615133831.GC20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-4-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-4-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:38:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:38:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142127.262389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HK-0002Bz-ME; Tue, 15 Jun 2021 13:38:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142127.262389; Tue, 15 Jun 2021 13:38:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HK-0002Bs-JG; Tue, 15 Jun 2021 13:38:54 +0000
Received: by outflank-mailman (input) for mailman id 142127;
 Tue, 15 Jun 2021 13:38:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9HI-00029U-TT
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:38:52 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b33e683b-054b-4a16-91f6-8bf3c1e67257;
 Tue, 15 Jun 2021 13:38:52 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 1FDA367373; Tue, 15 Jun 2021 15:38:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b33e683b-054b-4a16-91f6-8bf3c1e67257
Date: Tue, 15 Jun 2021 15:38:49 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 04/12] swiotlb: Update is_swiotlb_buffer to add a
 struct device argument
Message-ID: <20210615133849.GD20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-5-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-5-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:03PM +0800, Claire Chang wrote:
> Update is_swiotlb_buffer to add a struct device argument. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:39:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142129.262400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HP-0002aB-WC; Tue, 15 Jun 2021 13:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142129.262400; Tue, 15 Jun 2021 13:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HP-0002a4-SN; Tue, 15 Jun 2021 13:38:59 +0000
Received: by outflank-mailman (input) for mailman id 142129;
 Tue, 15 Jun 2021 13:38:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=62sT=LJ=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lt9HO-0002YN-KC
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:38:58 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c1adcf2-88d9-43e5-b18b-ae43ba116575;
 Tue, 15 Jun 2021 13:38:57 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id 11so8468032plk.12
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:38:57 -0700 (PDT)
Received: from mail-pg1-f173.google.com (mail-pg1-f173.google.com.
 [209.85.215.173])
 by smtp.gmail.com with ESMTPSA id o1sm15292214pjf.56.2021.06.15.06.38.56
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 06:38:56 -0700 (PDT)
Received: by mail-pg1-f173.google.com with SMTP id g22so6386632pgk.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 06:38:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c1adcf2-88d9-43e5-b18b-ae43ba116575
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=UDZeAXhtUGNxVigPWnCAmqvhZ5xmOZkpwoxu/kotpBc=;
        b=BkGuougKdpvJFSjPDCYcuDM+EfC9+Z6Iw8t63OKFUyvGcy9WEOaTZOyzUjDtlqxxWx
         b+f6+GiSSKCn4V8zlclen8CemZkaZyLB8FJKB0g3go6na2Mkb6J76IEsPQg2PB5YFP++
         oJX8/uFYscqhRRWgaBEXbZGZq7TD88YFqCsf4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=UDZeAXhtUGNxVigPWnCAmqvhZ5xmOZkpwoxu/kotpBc=;
        b=JnWWfxBtLo036nf7cRtgRPT/c2uHL+WXrwClMIYjTnFuRrQgFNbLCLkihDn+2txY7E
         uMryeR5B5Mczi4xuBJuOcHPkXs5FKerTviGjbKzDicvByNWqcGH97lksDArXe8uZGA0L
         hh/T6VGtTQNP+XPyBbFtKzS0Ze8H5cWI8B9BHV8LDcMGJtJFwLmiuVP90R7DV5xrM3bx
         qsljRTG8QJKwiDP0uLN76Zf0COpqUOtsQjo/Hkdj+qpoZ+aCr/Mgqh6IVtXU0ZgjnoiF
         wXK5yTqwduvjBKZPMjedaXEqZHu6P0bu1Y2ppYp8iBiPY8ZC4SbtfPjY6iTVs+di//FU
         NbUw==
X-Gm-Message-State: AOAM532dLB+gT9vxHOve9U0HSPgd2WZVWhnhoSnZDgWxRL9JCat114XI
	UrqWRU/4f0jSd30aRKrevFz8HSY0rOHoWQ==
X-Google-Smtp-Source: ABdhPJzREdBjVFcuSEDrKD4wi73XLP8IRPuoOXW1TWztJPM2WUNXdujWw31IKaeHAzIF3hzfWEYeaw==
X-Received: by 2002:a17:90a:9488:: with SMTP id s8mr13783063pjo.236.1623764336931;
        Tue, 15 Jun 2021 06:38:56 -0700 (PDT)
X-Received: by 2002:a05:6e02:219d:: with SMTP id j29mr17936278ila.64.1623763835517;
 Tue, 15 Jun 2021 06:30:35 -0700 (PDT)
MIME-Version: 1.0
References: <20210611152659.2142983-1-tientzu@chromium.org>
In-Reply-To: <20210611152659.2142983-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 15 Jun 2021 21:30:24 +0800
X-Gmail-Original-Message-ID: <CALiNf28fb4rZ0Afun8wAWRYJY4gqc+-vRvDBZT3x2JgSPL_iVQ@mail.gmail.com>
Message-ID: <CALiNf28fb4rZ0Afun8wAWRYJY4gqc+-vRvDBZT3x2JgSPL_iVQ@mail.gmail.com>
Subject: Re: [PATCH v9 00/14] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v10 here: https://lore.kernel.org/patchwork/cover/1446882/


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:39:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142133.262411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HY-00037I-8X; Tue, 15 Jun 2021 13:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142133.262411; Tue, 15 Jun 2021 13:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9HY-00037B-4m; Tue, 15 Jun 2021 13:39:08 +0000
Received: by outflank-mailman (input) for mailman id 142133;
 Tue, 15 Jun 2021 13:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9HW-00031k-Ky
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:39:06 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83cbd895-b7c6-45d5-9be5-8994ff405de2;
 Tue, 15 Jun 2021 13:39:05 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3CF0668AFE; Tue, 15 Jun 2021 15:39:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83cbd895-b7c6-45d5-9be5-8994ff405de2
Date: Tue, 15 Jun 2021 15:39:02 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 05/12] swiotlb: Update is_swiotlb_active to add a
 struct device argument
Message-ID: <20210615133902.GE20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-6-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-6-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:04PM +0800, Claire Chang wrote:
> Update is_swiotlb_active to add a struct device argument. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:39:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142145.262421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9I3-00048O-Ia; Tue, 15 Jun 2021 13:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142145.262421; Tue, 15 Jun 2021 13:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9I3-00048G-FX; Tue, 15 Jun 2021 13:39:39 +0000
Received: by outflank-mailman (input) for mailman id 142145;
 Tue, 15 Jun 2021 13:39:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9I1-00047i-V9
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:39:37 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ea7b9aa-cb32-4ee9-8a94-6b7a5d415d36;
 Tue, 15 Jun 2021 13:39:37 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 7B81967373; Tue, 15 Jun 2021 15:39:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ea7b9aa-cb32-4ee9-8a94-6b7a5d415d36
Date: Tue, 15 Jun 2021 15:39:35 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 06/12] swiotlb: Use is_dev_swiotlb_force for
 swiotlb data bouncing
Message-ID: <20210615133935.GF20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-7-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-7-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:05PM +0800, Claire Chang wrote:
> Propagate the swiotlb_force setting into io_tlb_default_mem->force and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:40:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142149.262433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9IO-0004lF-0D; Tue, 15 Jun 2021 13:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142149.262433; Tue, 15 Jun 2021 13:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9IN-0004l6-SG; Tue, 15 Jun 2021 13:39:59 +0000
Received: by outflank-mailman (input) for mailman id 142149;
 Tue, 15 Jun 2021 13:39:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9IL-0004UO-R4
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:39:57 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52787674-8f59-4c0c-921f-f00d917576de;
 Tue, 15 Jun 2021 13:39:48 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4BA0568AFE; Tue, 15 Jun 2021 15:39:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52787674-8f59-4c0c-921f-f00d917576de
Date: Tue, 15 Jun 2021 15:39:45 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 07/12] swiotlb: Move alloc_size to
 swiotlb_find_slots
Message-ID: <20210615133945.GG20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-8-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-8-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:06PM +0800, Claire Chang wrote:
> Rename find_slots to swiotlb_find_slots and move the maintenance of
> alloc_size to it for better code reusability later.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:40:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:40:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142152.262444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9IX-0005rj-7u; Tue, 15 Jun 2021 13:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142152.262444; Tue, 15 Jun 2021 13:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9IX-0005ra-4S; Tue, 15 Jun 2021 13:40:09 +0000
Received: by outflank-mailman (input) for mailman id 142152;
 Tue, 15 Jun 2021 13:40:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9IV-0005Hn-QR
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:40:07 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51c98c86-56f0-427f-81cc-c239ff729a50;
 Tue, 15 Jun 2021 13:40:04 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id BE0D967373; Tue, 15 Jun 2021 15:40:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51c98c86-56f0-427f-81cc-c239ff729a50
Date: Tue, 15 Jun 2021 15:40:02 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Message-ID: <20210615134002.GH20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-9-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-9-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:40:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142160.262455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9J0-0006iZ-I6; Tue, 15 Jun 2021 13:40:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142160.262455; Tue, 15 Jun 2021 13:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9J0-0006iS-Di; Tue, 15 Jun 2021 13:40:38 +0000
Received: by outflank-mailman (input) for mailman id 142160;
 Tue, 15 Jun 2021 13:40:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9Iz-0006gN-45
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:40:37 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d45c0e8-9428-4c24-938d-56fd036f3652;
 Tue, 15 Jun 2021 13:40:34 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C138F67373; Tue, 15 Jun 2021 15:40:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d45c0e8-9428-4c24-938d-56fd036f3652
Date: Tue, 15 Jun 2021 15:40:32 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 09/12] swiotlb: Add restricted DMA pool
 initialization
Message-ID: <20210615134032.GI20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-10-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-10-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:08PM +0800, Claire Chang wrote:
> Add the initialization function to create restricted DMA pools from
> matching reserved-memory nodes.
> 
> Regardless of swiotlb setting, the restricted DMA pool is preferred if
> available.
> 
> The restricted DMA pools provide a basic level of protection against the
> DMA overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system
> needs to provide a way to lock down the memory access, e.g., MPU.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:42:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142194.262465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9KW-0007aj-UY; Tue, 15 Jun 2021 13:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142194.262465; Tue, 15 Jun 2021 13:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9KW-0007ac-Ra; Tue, 15 Jun 2021 13:42:12 +0000
Received: by outflank-mailman (input) for mailman id 142194;
 Tue, 15 Jun 2021 13:42:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lt9KV-0007aS-NQ
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:42:11 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2c02bb2-2b9f-4051-a1dd-ef6a4f5c5bf3;
 Tue, 15 Jun 2021 13:42:11 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3D5CB67373; Tue, 15 Jun 2021 15:42:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2c02bb2-2b9f-4051-a1dd-ef6a4f5c5bf3
Date: Tue, 15 Jun 2021 15:42:08 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 10/12] swiotlb: Add restricted DMA alloc/free
 support
Message-ID: <20210615134208.GJ20389@lst.de>
References: <20210615132711.553451-1-tientzu@chromium.org> <20210615132711.553451-11-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-11-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 09:27:09PM +0800, Claire Chang wrote:
> Add the functions, swiotlb_{alloc,free} to support the memory allocation
> from restricted DMA pool.
> 
> The restricted DMA pool is preferred if available.
> 
> Note that since coherent allocation needs remapping, one must set up
> another device coherent pool by shared-dma-pool and use
> dma_alloc_from_dev_coherent instead for atomic coherent allocation.

Note: when applied this should go before the next patch to make sure
bisection works fine.

>  #ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +	phys_addr_t tlb_addr;
> +	int index;
> +
> +	/*
> +	 * Skip io_tlb_default_mem since swiotlb_alloc doesn't support atomic
> +	 * coherent allocation. Otherwise might break existing devices.
> +	 * One must set up another device coherent pool by shared-dma-pool and
> +	 * use dma_alloc_from_dev_coherent instead for atomic coherent
> +	 * allocation to avoid mempry remapping.

s/mempry/memory/g

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:49:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:49:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142214.262476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9RZ-0008NV-Mb; Tue, 15 Jun 2021 13:49:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142214.262476; Tue, 15 Jun 2021 13:49:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9RZ-0008NO-Jk; Tue, 15 Jun 2021 13:49:29 +0000
Received: by outflank-mailman (input) for mailman id 142214;
 Tue, 15 Jun 2021 13:49:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=stKz=LJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lt9RY-0008NI-5w
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:49:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03ab4646-a3cc-4c42-9fb8-d1cf54a9922d;
 Tue, 15 Jun 2021 13:49:27 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2051.outbound.protection.outlook.com [104.47.9.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-JQPXKHijOhiNFV5HlvdraA-1; Tue, 15 Jun 2021 15:49:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2608.eurprd04.prod.outlook.com (2603:10a6:800:4f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Tue, 15 Jun
 2021 13:49:23 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4219.025; Tue, 15 Jun 2021
 13:49:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2PR09CA0010.eurprd09.prod.outlook.com (2603:10a6:101:16::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Tue, 15 Jun 2021 13:49:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ab4646-a3cc-4c42-9fb8-d1cf54a9922d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623764966;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L7Ef1FjiN7x1QXO4WGuChpDpELEzNJEDR5HVvEYj5Ps=;
	b=SR3Ec3L4As7MMQiIHHmII7MCpDL0/7UuN+qqTv8QvQ2XfM2dwY6g7C0kzNfo9YByct7udm
	kBBgyxeCTjwpev9TeI+EPxtfSe+KsdkM4RKxKWKwLaK5hzHStueJ6aDv3WlhOhfQ8oAcFm
	qgGer5coEXtRr4dEDwmGpbgJxOREE/A=
X-MC-Unique: JQPXKHijOhiNFV5HlvdraA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jaZAaf/1k8x23YV5XgdVCSzALrmT8DfMOMLMo2M4A5pyRWnlxapu5EbHiUvx5nMC4twz0iNIcGm7SDkQSgpWAiYQt40xKJvHMeIblpw1OALcNKm5gdiTpmwP85T/hoOyKJ/7BoEkGvlSJvhTJKo99chHRvNG1X0pAqnuAV7WF1vkYA2PwxZpl6qUlLu7J9Yv69R+vANUKGT9FUrBk89jByjQg9acQcTTE30CLWXlUc7grA5MetueUed6MAckJ7XGVSAKu9xd2DqrefRVG59JOS1kh98gkkJ5HABwZB5nBvh4lGFhNhF3mz/tfqmNIHHdSz8q5sOIzh/45DLWhKW+Sw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L7Ef1FjiN7x1QXO4WGuChpDpELEzNJEDR5HVvEYj5Ps=;
 b=dKrCqU8qwc6EaNp0IzIaWN8aVCmbaYChXlx6QY4qMZuOXDnqaDmbkOi1+pstY7pzGWb4dRzFEhY/n26RirivL4WzHvIrOqzBevg1QFsKhKZDc2Il/gUVAdOCEx/9zlKtg1LuhOzKhH9FEaO8hPc6kkd8YDS3FcFSTkw7ZvoLd86GGu3LIfHutJ79glJmlgCBwEw916K1sG4o0xI89Zj75VfvMVBmgBtLBXtCSpCJlFy8Cu9hxuSLVD6i+nqt8z6tNHtv2SzqKxz3NB5gD5VDDkFfo0ZtqL9MA0kkFotA//zrhxODdGUR+nVVHOJQbMKIHxrxnxFEY6Y8SDOXPCi7Lg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2 5/5] tests: Introduce a TSX test
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Edwin Torok <edvin.torok@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210611163627.4878-6-andrew.cooper3@citrix.com>
 <20210614161317.31481-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e48f143-5a9d-11a6-be98-37e67628c708@suse.com>
Date: Tue, 15 Jun 2021 15:49:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210614161317.31481-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.06.2021 18:13, Andrew Cooper wrote:
> --- /dev/null
> +++ b/tools/tests/tsx/test-tsx.c
> @@ -0,0 +1,538 @@
> +/*
> + * TSX settings and consistency tests
> + *
> + * This tests various behaviours and invariants with regards to TSX.  It
> + * ideally wants running for several microcode versions, and all applicable
> + * tsx= commandline settings, on a single CPU, including after an S3
> + * suspend/resume event.
> + *
> + * It tests specifically:
> + *  - The consistency of MSR_TSX_CTRL/MSR_TSX_FORCE_ABORT values across the
> + *    system, and their accessibility WRT data in the host CPU policy.
> + *  - The actual behaviour of RTM on the system.
> + *  - Cross-check the default/max policies based on the actual RTM behaviour.
> + *  - Create some guests, check their defaults, and check that the defaults
> + *    can be changed.
> + */
> +
> +#define _GNU_SOURCE
> +
> +#include <err.h>
> +#include <errno.h>
> +#include <inttypes.h>
> +#include <signal.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <sys/ucontext.h>
> +
> +#include <xenctrl.h>
> +#include <xenguest.h>
> +#include <xen-tools/libs.h>
> +
> +#include "xg_private.h"
> +
> +enum {
> +#define XEN_CPUFEATURE(name, value) X86_FEATURE_##name = value,
> +#include <xen/arch-x86/cpufeatureset.h>
> +};
> +#define bitmaskof(idx)      (1u << ((idx) & 31))
> +
> +#define MSR_ARCH_CAPABILITIES               0x0000010a
> +#define  ARCH_CAPS_TSX_CTRL                 (1 <<  7)
> +#define MSR_TSX_FORCE_ABORT                 0x0000010f
> +#define MSR_TSX_CTRL                        0x00000122
> +
> +static unsigned int nr_failures;
> +#define fail(fmt, ...)                          \
> +({                                              \
> +    nr_failures++;                              \
> +    (void)printf(fmt, ##__VA_ARGS__);           \

fprintf(stderr, ...)?
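For illustration, a minimal standalone sketch of what the suggested adjustment could look like; the `fail()` macro and `nr_failures` counter are taken from the quoted patch context, and any invocation shown is hypothetical, not from the actual test:

```c
#include <stdio.h>

/* Sketch only: the fail() macro from the quoted patch, adjusted to write
 * to stderr as suggested above.  The statement-expression form ({ ... })
 * matches the original and is a GCC/Clang extension. */
static unsigned int nr_failures;

#define fail(fmt, ...)                          \
({                                              \
    nr_failures++;                              \
    (void)fprintf(stderr, fmt, ##__VA_ARGS__);  \
})
```

A call such as `fail("Unexpected RTM behaviour: %d\n", rc);` then bumps the failure count while keeping diagnostics off stdout.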

Either way (and with the adjustment you yourself pointed out in reply)
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 13:50:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 13:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142218.262488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9SL-0001DS-W7; Tue, 15 Jun 2021 13:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142218.262488; Tue, 15 Jun 2021 13:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lt9SL-0001DL-T2; Tue, 15 Jun 2021 13:50:17 +0000
Received: by outflank-mailman (input) for mailman id 142218;
 Tue, 15 Jun 2021 13:50:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gB8e=LJ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lt9SL-0001DB-BR
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 13:50:17 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc9030fe-bfc1-4a70-ba33-c68a7cecc3a2;
 Tue, 15 Jun 2021 13:50:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc9030fe-bfc1-4a70-ba33-c68a7cecc3a2
Date: Tue, 15 Jun 2021 15:37:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>
Subject: Re: [PATCH 0/3] x86/ept: force WB to foreign and grant mappings
Message-ID: <YMitM0QaLTf4d2fF@Air-de-Roger>
References: <20210528173935.29919-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210528173935.29919-1-roger.pau@citrix.com>
MIME-Version: 1.0

Ping?

This is missing an Ack or otherwise from the Intel maintainers.

Thanks, Roger.

On Fri, May 28, 2021 at 07:39:32PM +0200, Roger Pau Monne wrote:
> 
> Hello,
> 
> The aim of this series is to force the cache attribute of foreign and
> grant mappings to WB for HVM/PVH guests. This is required because those
> mappings will likely be using unpopulated memory ranges in the p2m,
> and those are usually UC in the MTRR state.
> 
> Having the guest set the correct MTRR attributes is also unlikely,
> because the number of MTRR ranges is finite.
> 
> Roger Pau Monne (3):
>   x86/mtrr: remove stale function prototype
>   x86/mtrr: move epte_get_entry_emt to p2m-ept.c
>   x86/ept: force WB cache attributes for grant and foreign maps
> 
>  xen/arch/x86/hvm/mtrr.c           | 107 +---------------------
>  xen/arch/x86/hvm/vmx/vmx.c        |   6 +-
>  xen/arch/x86/mm/p2m-ept.c         | 145 ++++++++++++++++++++++++++++--
>  xen/include/asm-x86/hvm/vmx/vmx.h |   2 +
>  xen/include/asm-x86/mtrr.h        |   7 +-
>  5 files changed, 147 insertions(+), 120 deletions(-)
> 
> -- 
> 2.31.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 14:32:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 14:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142232.262501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltA6W-0005Ur-Eq; Tue, 15 Jun 2021 14:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142232.262501; Tue, 15 Jun 2021 14:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltA6W-0005Uk-Bm; Tue, 15 Jun 2021 14:31:48 +0000
Received: by outflank-mailman (input) for mailman id 142232;
 Tue, 15 Jun 2021 14:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SZqV=LJ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1ltA6U-0005Ue-QJ
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 14:31:46 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87f78395-1978-4cdd-b3f0-dd56133b5ec4;
 Tue, 15 Jun 2021 14:31:45 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id z26so13400531pfj.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 07:31:45 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 fy16sm2711030pjb.49.2021.06.15.07.31.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 07:31:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87f78395-1978-4cdd-b3f0-dd56133b5ec4
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
 netvsc driver
To: Christoph Hellwig <hch@lst.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 robin.murphy@arm.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
 xen-devel@lists.xenproject.org, davem@davemloft.net, kuba@kernel.org,
 jejb@linux.ibm.com, martin.petersen@oracle.com,
 iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-scsi@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
 thomas.lendacky@amd.com, brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-11-ltykernel@gmail.com>
 <20210607065007.GE24478@lst.de>
 <279cb4bf-c5b6-6db9-0f1e-9238e902c8f2@gmail.com>
 <20210614070903.GA29976@lst.de>
 <e10c2696-23c3-befe-4f4d-25e18918132f@gmail.com>
 <20210614153339.GB1741@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <7d86307f-83ff-03ad-c6e9-87b455c559b8@gmail.com>
Date: Tue, 15 Jun 2021 22:31:31 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210614153339.GB1741@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/14/2021 11:33 PM, Christoph Hellwig wrote:
> On Mon, Jun 14, 2021 at 10:04:06PM +0800, Tianyu Lan wrote:
>> The pages in the hv_page_buffer array here are in the kernel linear
>> mapping. The packet sent to the host will contain an array holding the
>> transaction data. In an isolation VM, the data in these pages needs to
>> be copied to the bounce buffer, so dma_map_single() is called here to
>> map these data pages with the bounce buffer. The vmbus has a ring
>> buffer to/from which the send/receive packets are copied. The ring
>> buffer was already remapped to the extra space above the shared gpa
>> boundary/vTOM when the netvsc driver was probed, so the dma map
>> function is not called for the vmbus ring buffer.
> 
> So why do we have all that PFN magic instead of using struct page or
> the usual kernel I/O buffers that contain a page pointer?
> 

These PFNs are originally part of the Hyper-V protocol data and are sent
to the host. The host accepts these GFNs and copies data from/to guest
memory. The translation from VA to PA is done by the caller that
populates the hv_page_buffer array. I will try calling the dma map
function before populating struct hv_page_buffer, which should avoid the
redundant translation between PA and VA.
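The copy-to-bounce-buffer step described above can be sketched as a purely illustrative userspace mock; none of these names (mock_dma_map_single, bounce_pool) come from the actual Hyper-V/netvsc code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative mock: "mapping" a data page copies its payload into a
 * host-visible bounce pool and returns the pool offset that would be
 * handed to the host in place of the original guest PFN. */
#define PAGE_SIZE 4096

static uint8_t bounce_pool[4 * PAGE_SIZE];
static size_t bounce_used;

static size_t mock_dma_map_single(const void *vaddr, size_t len)
{
    size_t offset = bounce_used;

    assert(bounce_used + len <= sizeof(bounce_pool));
    memcpy(bounce_pool + offset, vaddr, len);  /* copy into shared memory */
    bounce_used += len;
    return offset;  /* host-visible address within the bounce pool */
}
```

The point of the sketch is only the contract: data must be copied into memory the host can see before the descriptor is sent, and the descriptor must carry the bounce address, not the original one.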


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 15:25:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 15:25:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142239.262513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltAwC-0001uw-Ed; Tue, 15 Jun 2021 15:25:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142239.262513; Tue, 15 Jun 2021 15:25:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltAwC-0001up-BR; Tue, 15 Jun 2021 15:25:12 +0000
Received: by outflank-mailman (input) for mailman id 142239;
 Tue, 15 Jun 2021 15:25:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SZqV=LJ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1ltAwA-0001uj-RQ
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 15:25:10 +0000
Received: from mail-pg1-x529.google.com (unknown [2607:f8b0:4864:20::529])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 119af839-ed87-4940-a875-137ffc7c5022;
 Tue, 15 Jun 2021 15:25:10 +0000 (UTC)
Received: by mail-pg1-x529.google.com with SMTP id t17so11754615pga.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 08:25:10 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 u2sm15258266pfg.67.2021.06.15.08.24.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 08:25:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 119af839-ed87-4940-a875-137ffc7c5022
Subject: Re: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address
 setting function
To: Christoph Hellwig <hch@lst.de>, Robin Murphy <robin.murphy@arm.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 arnd@arndb.de, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, akpm@linux-foundation.org,
 kirill.shutemov@linux.intel.com, rppt@kernel.org, hannes@cmpxchg.org,
 cai@lca.pw, krish.sadhukhan@oracle.com, saravanand@fb.com,
 Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, m.szyprowski@samsung.com,
 boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
 davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
 martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
 linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 netdev@vger.kernel.org, vkuznets@redhat.com, thomas.lendacky@amd.com,
 brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-9-ltykernel@gmail.com>
 <20210607064312.GB24478@lst.de>
 <94038087-a33c-93c5-27bf-7ec1f6f5f0e3@arm.com> <20210614153252.GA1741@lst.de>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <9e347c4c-d4b9-129c-10d2-0d7ff1b917cc@gmail.com>
Date: Tue, 15 Jun 2021 23:24:57 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210614153252.GA1741@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/14/2021 11:32 PM, Christoph Hellwig wrote:
> On Mon, Jun 14, 2021 at 02:49:51PM +0100, Robin Murphy wrote:
>> FWIW, I think a better generalisation for this would be allowing
>> set_memory_decrypted() to return an address rather than implicitly
>> operating in-place, and hide all the various hypervisor hooks behind that.
> 
> Yes, something like that would be a good idea.  As-is
> set_memory_decrypted is a pretty horrible API anyway due to passing
> the address as void, and taking a size parameter while it works in units
> of pages.  So I'd very much welcome a major overhaul of this API.
> 

Hi Christoph and Robin:
	Thanks for your suggestion. I will try this idea in the next version.
Besides moving the address translation into set_memory_decrypted() and
returning the new address, would you like any other changes to the API
to make it more reasonable (e.g. the size parameter)?

Thanks


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 15:48:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 15:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142246.262524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBIB-0004EF-CL; Tue, 15 Jun 2021 15:47:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142246.262524; Tue, 15 Jun 2021 15:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBIB-0004E8-9M; Tue, 15 Jun 2021 15:47:55 +0000
Received: by outflank-mailman (input) for mailman id 142246;
 Tue, 15 Jun 2021 15:47:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PsWs=LJ=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1ltBI9-0004Dy-6D
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 15:47:53 +0000
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8c8a821-dea1-488e-ac0d-5cf2f3287418;
 Tue, 15 Jun 2021 15:47:49 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20210615154748euoutp025393b0ef1c05766c8964ff08effcb737~IzBfLeirf0708407084euoutp02U;
 Tue, 15 Jun 2021 15:47:48 +0000 (GMT)
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTP id
 20210615154747eucas1p10fc0c489f6b2b99ec3ee3c0d1d182386~IzBerCabW2052020520eucas1p1O;
 Tue, 15 Jun 2021 15:47:47 +0000 (GMT)
Received: from eucas1p1.samsung.com ( [182.198.249.206]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 25.05.09439.3ABC8C06; Tue, 15
 Jun 2021 16:47:47 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTPA id
 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61~IzBeAseUp3082930829eucas1p1T;
 Tue, 15 Jun 2021 15:47:46 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20210615154746eusmtrp2ae9e870f68d5a24f27850c8ddaf7c3b5~IzBd-erfP0310203102eusmtrp2N;
 Tue, 15 Jun 2021 15:47:46 +0000 (GMT)
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id 42.91.08705.2ABC8C06; Tue, 15
 Jun 2021 16:47:46 +0100 (BST)
Received: from [106.210.134.192] (unknown [106.210.134.192]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20210615154744eusmtip131f3ea0c677d37e629dba0cc864eb86e~IzBb_60KM0958409584eusmtip1b;
 Tue, 15 Jun 2021 15:47:44 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8c8a821-dea1-488e-ac0d-5cf2f3287418
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.w1.samsung.com 20210615154748euoutp025393b0ef1c05766c8964ff08effcb737~IzBfLeirf0708407084euoutp02U
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
	s=mail20170921; t=1623772068;
	bh=X0hoPpP4PG9rXwI/rCRN/9SSp3Kx8sxGj31M98xjvfU=;
	h=Subject:To:Cc:From:Date:In-Reply-To:References:From;
	b=qJZOgWIWUXl8lnnt9+IxTIfAKH+yJbdmf6G9NWvHS3NKPvKBD8A1zhdMrs0dBTpCw
	 TTJcqahuR4hMHCgWD3qx7YlIWRXqLQmQ4kQliMh/gkWXrGBG6YhpegstqWuOg6QjHs
	 zghIbtOv7f2Sl4wFEzFd3MNi3/gfanhjgq/L45XU=
X-AuditID: cbfec7f5-c03ff700000024df-5c-60c8cba39819
Subject: Re: [PATCH 09/30] mtd_blkdevs: use blk_mq_alloc_disk
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>, Geoff
	Levand <geoff@infradead.org>, Ilya Dryomov <idryomov@gmail.com>, "Md. Haris
 Iqbal" <haris.iqbal@ionos.com>, Jack Wang <jinpu.wang@ionos.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Mike Snitzer <snitzer@redhat.com>, Maxim Levitsky
	<maximlevitsky@gmail.com>, Alex Dubov <oakad@yahoo.com>, Miquel Raynal
	<miquel.raynal@bootlin.com>, Richard Weinberger <richard@nod.at>, Vignesh
	Raghavendra <vigneshr@ti.com>, Heiko Carstens <hca@linux.ibm.com>, Vasily
	Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org, Bartlomiej Zolnierkiewicz
	<b.zolnierkie@samsung.com>
From: Marek Szyprowski <m.szyprowski@samsung.com>
Message-ID: <13b21a07-b7c7-37db-fdc9-77bf174b6f8f@samsung.com>
Date: Tue, 15 Jun 2021 17:47:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0)
	Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210602065345.355274-10-hch@lst.de>
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Brightmail-Tracker: H4sIAAAAAAAAA02Sa0xTZxjH855zenrapXgoKK/KhqvTZGWCzC2+CQ4vY3DiB0S2+IEt0gZO
	AIVqWphuy5SAE22GVEqkazcucimTi1puAlOkjJtAZVAmVhgpgiLCBnLbEOgohY1vv+f/f573
	ef7JS+FCM7mFipbFsXKZNEZE8onKpn9Mu3LbWiS7Uy57oKL+VBLd1tzkoIzCKgJNPE7D0N0/
	dRxkflALUN4LP9SpneWgyrZbXPQy8wZAPxc1YmggM5lEBV1TGBo3pBNoweqDzAk7UUHuMEB3
	LZ7oV1sqQLWJBVykyknC0eu5JQ4aa73ORQ3J9zjINj/KQa+HSzBkbSrCkbp6DCDNw1wOunh7
	BqDBfD2ObB0TXKTvyCTRXLkaOyBiMou/YV6qVYBpfdFOMI0N9VymftBIMtXafi5TVihm2sb6
	MKa7I54ZMmdgjOr6fcCU5Z1n1L16wDzU5ACmYcJMMLWPE8jgzaH8fRFsTPSXrNzbT8KPSsu6
	T5x+6nlW89vWBGDYrgQ8CtIfQMtIO1ACPiWkCwFsNtlwRzENYNGUcrWYWnbSdWBtpGAxHbez
	kNYD+HuH1NE0CWDxcB9mN1zoj+BgqYljZ1f6ANTdyiftTTidx4OPTImE3SBpH6gcVy4bFCWg
	/WBKjbtdJugdcMx6lbTzRjoc/pWpWXlHQDvD1h+GVkZ59PvwQZ1thXHaAyZV6HAHu0HLUBZm
	3wXpIT68ZOnFHFf7Q7M+lXSwCxxtLuc62B3aqtcGkgC0mkq4juJ7ALsTNauZfWGfaX7lUpx+
	F96s8XbIB+EF5R3cLkPaCfaOOzuOcIJplRmrsgBeuih0dO+E2ubS/9bWd3bhKiDSroumXRdH
	uy6O9v+92YC4AdzYeEVsJKvYI2PPeCmksYp4WaRX+KlYA1j++G1LzTN3QOHopJcRYBQwAkjh
	IlfBLkWLRCiIkH71NSs/FSaPj2EVRrCVIkRugpqK4jAhHSmNY0+y7GlWvuZiFG9LAiZNenOA
	91nluZmjE82fuxiajs+W+Zs/je/6IzrLemK3qTfkO3d5NRUaZ9nb8/yLeZPR3+TtNiIJrko7
	UTHicczvSonhvSD3zopXruqze62zXYq6HZ7RsQPeHPMbf9/ruXDcqWnTj7KynpDAKnVIpa86
	Oyk/efJg8DXLtvyRw6Pftsqm0o4U6558eDJIwVzpf7Kg5UdiAZuf/zQksUU9Dd0n1B9++61p
	7wiLpj3gWGjgonYpfSFAnnL+qFg19/EnYs2zwP3Xlq6eyfY9FFjX8mjD9ukw57xt4tCmd+a4
	3ZJfgjz3h5/jPts4sFjaIDF6ZFQX5ZSVHyIbDSXDG46o6Fd7XC/PbxIRiiipjxiXK6T/AgrX
	tBBnBAAA
X-Brightmail-Tracker: H4sIAAAAAAAAA02Sa0xTdxjG+Z9bC6Tk0JZ4hpiZLkMDo9AC9c8C1C/bzrIsUaaJUxxt8HAR
	KK6Xpduy2YiKdi5UWgYrrlxbqsBALjpuU6l1AkOGY2YSmFBgC1gUr4xx6WjZEr798r7P70ne
	5GWj3Bk8lJ2tUDNKhTxXQARgA2s/jUdVD9yWxXT/FQvrx4sIeLmsCYel9qsYXLhfjMCeR+U4
	HOnvArB2Nhn+Yn6JwysDzSz40HIJwIv1TgQ+sBQS0Hb3GQLnW0wYXJkUwRFdOLTVzADYMxoJ
	b3qKAOw6YWNBQ1UBCpcX13Do7qtmQUfhjzj0/DOHw+WZRgRO3qpHobHDDWDZUA0OT19+AaDL
	WodCz+ACC9YNWgi42GZEdgtoS8Pn9EOjAdB9sz9jtNNxg0XfcPUSdId5nEW32iPoAfcYQv86
	qKGnR0oR2lB9HdCttcdp4+91gB4qqwK0Y2EEo7vu64g9rxwUJirzNWpme1a+Sp0kOCSCYqEo
	AQrFcQlCUeyuw2+K4wXRyYlHmNzsTxhldLJMmFVccR07NhWpLRveqgMtr+mBP5si4yjbqgn1
	Mpe0Aur5hP/GPIzq+0aHbzCPWrmnJ/QgYD3zGFC/TX1FeBc8MolyfX/HF+KTu6nyZqsvhJI2
	f2p0eJy10XqAWio45WOCFFH6eW8Tm80hk6mvO8O8Y4x8nXJPnvd1hpDp1FJToY85ZDDV9+00
	5mV/Ukz1X/P4GCUllKV1Et3gV6mC9vL/eAs1Ol2BGADXvEk3b1LMmxTzJqUSYJcAn9Go8jLz
	VCKhSp6n0igyhen5eS1g/d+u3Fpq/QFY5p4IewHCBr2AYqMCPidKdVvG5RyRf/oZo8xPU2py
	GVUviF+/5zwaGpKev/6wCnWaSBITL4qTJMTEJ0hiBVs47d81pHHJTLmayWGYY4zyfw9h+4fq
	kNS43HOz8asBOc8aX3QWHdI2B9szLmRI91hXnU/3HnVlVa2IU6TbwAcmtd8ux0SkNkJnr8zo
	3Pruny9Loi0V147ST2oJSebQO4q7N7e5+NgfbfwKrTnxcXRgqeSjKb+9y3/Lz+akLrIP0MPZ
	jtTjYcXZ+/AvwkOGrMbnNU1BTNC8VvrxNO9DnnQqiCdlC9z7A9faL8pOTtRuX+wcHtP3Ne54
	dLjh5E6TM5A01r5ve3vsTkp/lMMkqzI0vReZ4GfYGZKUcoLz5VvwzOmOC0/5SzOERRlc2Xau
	lMc45SV27X7rG0EZptUzJVcPeuDZsHqsslvnyuspnet4EH6K6Q6/J8BUWXJRBKpUyf8FChkx
	W/gDAAA=
X-CMS-MailID: 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61
References: <20210602065345.355274-1-hch@lst.de>
	<20210602065345.355274-10-hch@lst.de>
	<CGME20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61@eucas1p1.samsung.com>

Hi,

On 02.06.2021 08:53, Christoph Hellwig wrote:
> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
> allocation.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

This patch landed in linux-next as commit 6966bb921def ("mtd_blkdevs: 
use blk_mq_alloc_disk"). It causes the following regression on my QEMU 
arm64 setup:

  Using buffer write method
  Concatenating MTD devices:
  (0): "0.flash"
  (1): "0.flash"
  into device "0.flash"
  Unable to handle kernel NULL pointer dereference at virtual address 0000000000000068
  Mem abort info:
    ESR = 0x96000004
    EC = 0x25: DABT (current EL), IL = 32 bits
    SET = 0, FnV = 0
    EA = 0, S1PTW = 0
  Data abort info:
    ISV = 0, ISS = 0x00000004
    CM = 0, WnR = 0
  [0000000000000068] user address but active_mm is swapper
  Internal error: Oops: 96000004 [#1] PREEMPT SMP
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-rc3+ #10492
  Hardware name: linux,dummy-virt (DT)
  pstate: 00000005 (nzcv daif -PAN -UAO -TCO BTYPE=--)
  pc : blk_finish_plug+0x5c/0x268
  lr : blk_queue_write_cache+0x28/0x70
...
  Call trace:
   blk_finish_plug+0x5c/0x268
   add_mtd_blktrans_dev+0x270/0x420
   mtdblock_add_mtd+0x68/0x98
   blktrans_notify_add+0x44/0x70
   add_mtd_device+0x41c/0x490
   mtd_device_parse_register+0xf4/0x1c8
   physmap_flash_probe+0x44c/0x780
   platform_probe+0x90/0xd8
   really_probe+0x108/0x3c0
   driver_probe_device+0x60/0xc0
   device_driver_attach+0x6c/0x78
   __driver_attach+0xc0/0x100
   bus_for_each_dev+0x68/0xc8
   driver_attach+0x20/0x28
   bus_add_driver+0x168/0x1f8
   driver_register+0x60/0x110
   __platform_driver_register+0x24/0x30
   physmap_init+0x18/0x20
   do_one_initcall+0x84/0x450
   kernel_init_freeable+0x2dc/0x334
   kernel_init+0x10/0x110
   ret_from_fork+0x10/0x18
  Code: 88027c01 35ffffa2 17fff079 f9800031 (c85f7c22)
  ---[ end trace b774518e0766cc92 ]---
  Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
  SMP: stopping secondary CPUs
  Kernel Offset: 0x594d1fa00000 from 0xffff800010000000
  PHYS_OFFSET: 0xffffea7300000000
  CPU features: 0x11000671,00000846
  Memory Limit: none
  ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

> ---
>   drivers/mtd/mtd_blkdevs.c | 48 ++++++++++++++++++---------------------
>   1 file changed, 22 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
> index fb8e12d590a1..5dc4c966ea73 100644
> --- a/drivers/mtd/mtd_blkdevs.c
> +++ b/drivers/mtd/mtd_blkdevs.c
> @@ -30,11 +30,9 @@ static void blktrans_dev_release(struct kref *kref)
>   	struct mtd_blktrans_dev *dev =
>   		container_of(kref, struct mtd_blktrans_dev, ref);
>   
> -	dev->disk->private_data = NULL;
> -	blk_cleanup_queue(dev->rq);
> +	blk_cleanup_disk(dev->disk);
>   	blk_mq_free_tag_set(dev->tag_set);
>   	kfree(dev->tag_set);
> -	put_disk(dev->disk);
>   	list_del(&dev->list);
>   	kfree(dev);
>   }
> @@ -354,7 +352,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>   	if (new->devnum > (MINORMASK >> tr->part_bits) ||
>   	    (tr->part_bits && new->devnum >= 27 * 26)) {
>   		mutex_unlock(&blktrans_ref_mutex);
> -		goto error1;
> +		return ret;
>   	}
>   
>   	list_add_tail(&new->list, &tr->devs);
> @@ -366,17 +364,28 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>   	if (!tr->writesect)
>   		new->readonly = 1;
>   
> -	/* Create gendisk */
>   	ret = -ENOMEM;
> -	gd = alloc_disk(1 << tr->part_bits);
> +	new->tag_set = kzalloc(sizeof(*new->tag_set), GFP_KERNEL);
> +	if (!new->tag_set)
> +		goto out_list_del;
>   
> -	if (!gd)
> -		goto error2;
> +	ret = blk_mq_alloc_sq_tag_set(new->tag_set, &mtd_mq_ops, 2,
> +			BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
> +	if (ret)
> +		goto out_kfree_tag_set;
> +
> +	/* Create gendisk */
> +	gd = blk_mq_alloc_disk(new->tag_set, new);
> +	if (IS_ERR(gd)) {
> +		ret = PTR_ERR(gd);
> +		goto out_free_tag_set;
> +	}
>   
>   	new->disk = gd;
>   	gd->private_data = new;
>   	gd->major = tr->major;
>   	gd->first_minor = (new->devnum) << tr->part_bits;
> +	gd->minors = 1 << tr->part_bits;
>   	gd->fops = &mtd_block_ops;
>   
>   	if (tr->part_bits)
> @@ -398,22 +407,9 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>   	spin_lock_init(&new->queue_lock);
>   	INIT_LIST_HEAD(&new->rq_list);
>   
> -	new->tag_set = kzalloc(sizeof(*new->tag_set), GFP_KERNEL);
> -	if (!new->tag_set)
> -		goto error3;
> -
> -	new->rq = blk_mq_init_sq_queue(new->tag_set, &mtd_mq_ops, 2,
> -				BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING);
> -	if (IS_ERR(new->rq)) {
> -		ret = PTR_ERR(new->rq);
> -		new->rq = NULL;
> -		goto error4;
> -	}
> -
>   	if (tr->flush)
>   		blk_queue_write_cache(new->rq, true, false);
>   
> -	new->rq->queuedata = new;
>   	blk_queue_logical_block_size(new->rq, tr->blksize);
>   
>   	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
> @@ -437,13 +433,13 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>   		WARN_ON(ret);
>   	}
>   	return 0;
> -error4:
> +
> +out_free_tag_set:
> +	blk_mq_free_tag_set(new->tag_set);
> +out_kfree_tag_set:
>   	kfree(new->tag_set);
> -error3:
> -	put_disk(new->disk);
> -error2:
> +out_list_del:
>   	list_del(&new->list);
> -error1:
>   	return ret;
>   }
>   

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 15:58:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 15:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142253.262534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBSJ-0005g4-Bt; Tue, 15 Jun 2021 15:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142253.262534; Tue, 15 Jun 2021 15:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBSJ-0005fx-91; Tue, 15 Jun 2021 15:58:23 +0000
Received: by outflank-mailman (input) for mailman id 142253;
 Tue, 15 Jun 2021 15:58:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0zBy=LJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltBSI-0005fr-ID
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 15:58:22 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 859362b2-a941-4111-8521-36d730b248c3;
 Tue, 15 Jun 2021 15:58:21 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 757AD67373; Tue, 15 Jun 2021 17:58:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 859362b2-a941-4111-8521-36d730b248c3
Date: Tue, 15 Jun 2021 17:58:17 +0200
From: Christoph Hellwig <hch@lst.de>
To: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Justin Sanders <justin@coraid.com>,
	Denis Efremov <efremov@linux.com>,
	Josef Bacik <josef@toxicpanda.com>, Tim Waugh <tim@cyberelk.net>,
	Geoff Levand <geoff@infradead.org>,
	Ilya Dryomov <idryomov@gmail.com>,
	"Md. Haris Iqbal" <haris.iqbal@ionos.com>,
	Jack Wang <jinpu.wang@ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Maxim Levitsky <maximlevitsky@gmail.com>,
	Alex Dubov <oakad@yahoo.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
	linux-block@vger.kernel.org, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Subject: Re: [PATCH 09/30] mtd_blkdevs: use blk_mq_alloc_disk
Message-ID: <20210615155817.GA31047@lst.de>
References: <20210602065345.355274-1-hch@lst.de> <20210602065345.355274-10-hch@lst.de> <CGME20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61@eucas1p1.samsung.com> <13b21a07-b7c7-37db-fdc9-77bf174b6f8f@samsung.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <13b21a07-b7c7-37db-fdc9-77bf174b6f8f@samsung.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 15, 2021 at 05:47:44PM +0200, Marek Szyprowski wrote:
> Hi,
> 
> On 02.06.2021 08:53, Christoph Hellwig wrote:
> > Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
> > allocation.
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> This patch landed in linux-next as commit 6966bb921def ("mtd_blkdevs: 
> use blk_mq_alloc_disk"). It causes the following regression on my QEMU 
> arm64 setup:

Please try the patch below:

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 5dc4c966ea73..6ce4bc57f919 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -382,6 +382,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	}
 
 	new->disk = gd;
+	new->rq = new->disk->queue;
 	gd->private_data = new;
 	gd->major = tr->major;
 	gd->first_minor = (new->devnum) << tr->part_bits;


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:19:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142260.262545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBmx-0008Ul-5K; Tue, 15 Jun 2021 16:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142260.262545; Tue, 15 Jun 2021 16:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBmx-0008Ue-2K; Tue, 15 Jun 2021 16:19:43 +0000
Received: by outflank-mailman (input) for mailman id 142260;
 Tue, 15 Jun 2021 16:19:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBmv-0008UY-7h
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:19:41 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de181325-28a1-4c2d-9140-6e4d015bd390;
 Tue, 15 Jun 2021 16:19:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de181325-28a1-4c2d-9140-6e4d015bd390
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623773977;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=bEMZpVBmeRQ9/504L1DN5a8NyA+wTFLwDvRaI1x9uWw=;
  b=Z5FQL1dyO9qusrQ5mbQQAsv8I8dQt55Z7z4qlkoS/f3o8LoALv+d2lxM
   9vSPTW/N9FoJ7+Y7qzAazOlhP5SA08IXLofPlddA3zdwCKm3zo1/coPGd
   wiDwATG7m4h+dk5QjE9flQHWQRWXPomx14tITkxAjnTSscbIED6nFy6jt
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: gORyeNmDcweyhEWXVVae+k+mCTL1IDKF0C5mhgHy2YCVrmR8M2kRceTAcqbotumP4q8+DnSxSG
 dTuyGh4rwdZZsxqy9Jctg4ty5elfc7+ZJBsCkCN/wVNEb30DoNCXHZMKr08VeTd/fuq6bbRBYn
 0EbGeE1N5wUB4zXM9mF8NVk3VtPPAl+pHcmGN3XYdR7jHpBjs6VJ8GCqEVIQvKRwFeBAms2KZt
 SCWxfV4Dbn0B1mcxlHRAx0UKn6125hRvPXeJmETQKLVXZEssoeE2AMmi3as5io7I2a438Q7UQF
 QZM=
X-SBRS: None
X-MesageID: 46552105
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:EtEDYqprNmCLDzU1qfkfo8oaV5oTeYIsimQD101hICG8cqSj+f
 xG+85rrCMc6QxhPk3I9urhBEDtex/hHNtOkOws1NSZLW7bUQmTXeJfBOLZqlWKcUDDH6xmpM
 NdmsBFeaXN5DNB7PoSjjPWLz9Z+qjkzJyV
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46552105"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 1/5] tools/tests: Drop obsolete mce-test infrastructure
Date: Tue, 15 Jun 2021 17:19:01 +0100
Message-ID: <20210615161905.9831-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

mce-test has a test suite, but it depends on xend, needs to run in-tree,
requires manual setup of at least one guest, and needs manual parameters
passed into the cases.  Drop the test infrastructure.

Move the one useful remaining item, xen-mceinj, into misc/, fixing some minor
style issues as it goes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 .gitignore                                         |   1 -
 tools/misc/.gitignore                              |   1 +
 tools/misc/Makefile                                |   4 +
 tools/{tests/mce-test/tools => misc}/xen-mceinj.c  |  32 +--
 tools/tests/Makefile                               |   1 -
 tools/tests/mce-test/Makefile                      |  12 -
 tools/tests/mce-test/README                        |  75 ------
 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_llc/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_llc/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_mem/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_mem/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh  |  72 ------
 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh |  92 --------
 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh   |  68 ------
 tools/tests/mce-test/config/setup.conf             |  24 --
 tools/tests/mce-test/lib/xen-mceinj-tool.sh        | 260 ---------------------
 tools/tests/mce-test/tools/Makefile                |  24 --
 tools/tests/mce-test/tools/README                  |  24 --
 20 files changed, 24 insertions(+), 1138 deletions(-)
 rename tools/{tests/mce-test/tools => misc}/xen-mceinj.c (97%)
 delete mode 100644 tools/tests/mce-test/Makefile
 delete mode 100644 tools/tests/mce-test/README
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/config/setup.conf
 delete mode 100644 tools/tests/mce-test/lib/xen-mceinj-tool.sh
 delete mode 100644 tools/tests/mce-test/tools/Makefile
 delete mode 100644 tools/tests/mce-test/tools/README

diff --git a/.gitignore b/.gitignore
index 38a085e398..d4b90303b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -276,7 +276,6 @@ tools/tests/x86_emulator/test_x86_emulator
 tools/tests/x86_emulator/x86_emulate
 tools/tests/x86_emulator/xop*.[ch]
 tools/tests/xenstore/xs-test
-tools/tests/mce-test/tools/xen-mceinj
 tools/tests/vpci/list.h
 tools/tests/vpci/vpci.[hc]
 tools/tests/vpci/test_vpci
diff --git a/tools/misc/.gitignore b/tools/misc/.gitignore
index ce6f937d0c..73ce95e6d7 100644
--- a/tools/misc/.gitignore
+++ b/tools/misc/.gitignore
@@ -1,4 +1,5 @@
 xen-access
+xen-mceinj
 xen-memshare
 xen-ucode
 xen-vmtrace
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 2b683819d4..1a07191d83 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -22,6 +22,7 @@ INSTALL_SBIN-$(CONFIG_MIGRATE) += xen-hptool
 INSTALL_SBIN-$(CONFIG_X86)     += xen-hvmcrash
 INSTALL_SBIN-$(CONFIG_X86)     += xen-hvmctx
 INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
+INSTALL_SBIN-$(CONFIG_X86)     += xen-mceinj
 INSTALL_SBIN-$(CONFIG_X86)     += xen-memshare
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
@@ -97,6 +98,9 @@ xen-memshare: xen-memshare.o
 xen-vmtrace: xen-vmtrace.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
 
+xen-mceinj: xen-mceinj.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
+
 xenperf: xenperf.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
diff --git a/tools/tests/mce-test/tools/xen-mceinj.c b/tools/misc/xen-mceinj.c
similarity index 97%
rename from tools/tests/mce-test/tools/xen-mceinj.c
rename to tools/misc/xen-mceinj.c
index 1187d01e5f..df55eefbac 100644
--- a/tools/tests/mce-test/tools/xen-mceinj.c
+++ b/tools/misc/xen-mceinj.c
@@ -137,7 +137,7 @@ static void err(xc_interface *xc_handle, const char *fmt, ...)
     va_list args;
 
     va_start(args, fmt);
-    if (vasprintf(&buf, fmt, args) < 0)
+    if ( vasprintf(&buf, fmt, args) < 0 )
         abort();
     perror(buf);
     va_end(args);
@@ -173,7 +173,7 @@ static unsigned int mca_cpuinfo(xc_interface *xc_handle)
     mc.cmd = XEN_MC_physcpuinfo;
     mc.interface_version = XEN_MCA_INTERFACE_VERSION;
 
-    if (!xc_mca_op(xc_handle, &mc))
+    if ( !xc_mca_op(xc_handle, &mc) )
         return mc.u.mc_physcpuinfo.ncpus;
     else
         return 0;
@@ -187,9 +187,9 @@ static int inject_cmci(xc_interface *xc_handle, unsigned int cpu_nr)
     memset(&mc, 0, sizeof(struct xen_mc));
 
     nr_cpus = mca_cpuinfo(xc_handle);
-    if (!nr_cpus)
+    if ( !nr_cpus )
         err(xc_handle, "Failed to get mca_cpuinfo");
-    if (cpu_nr >= nr_cpus)
+    if ( cpu_nr >= nr_cpus )
         err(xc_handle, "-c %u is larger than %u", cpu_nr, nr_cpus - 1);
 
     mc.cmd = XEN_MC_inject_v2;
@@ -284,7 +284,7 @@ static int add_msr_intpose(xc_interface *xc_handle,
         flush_msr_inj(xc_handle);
         init_msr_inj();
     }
-    count= msr_inj.mcinj_count;
+    count = msr_inj.mcinj_count;
 
     if ( !count )
     {
@@ -422,7 +422,7 @@ static long xs_get_dom_mem(int domid)
     if (!memstr || !plen)
         return -1;
 
-    mem = atoll(memstr)*1024;
+    mem = atoll(memstr) * 1024;
     free(memstr);
 
     return mem;
@@ -474,17 +474,20 @@ int main(int argc, char *argv[])
     cpu_nr = 0;
 
     init_msr_inj();
-    xc_handle = xc_interface_open(0, 0, 0);
-    if ( !xc_handle ) {
+    xc_handle = xc_interface_open(NULL, NULL, 0);
+    if ( !xc_handle )
+    {
         Lprintf("Failed to get xc interface");
         exit(EXIT_FAILURE);
     }
 
-    while ( 1 ) {
+    while ( 1 )
+    {
         c = getopt_long(argc, argv, "c:Dd:t:hp:l", opts, &opt_index);
         if ( c == -1 )
             break;
-        switch ( c ) {
+        switch ( c )
+        {
         case 'D':
             dump=1;
             break;
@@ -516,7 +519,8 @@ int main(int argc, char *argv[])
         }
     }
 
-    if ( domid != DOMID_XEN ) {
+    if ( domid != DOMID_XEN )
+    {
         max_gpa = xs_get_dom_mem(domid);
         Lprintf("get domain %d max gpa is: 0x%lx", domid, max_gpa);
         if ( gaddr >= max_gpa )
@@ -524,7 +528,8 @@ int main(int argc, char *argv[])
     }
     Lprintf("get gaddr of error inject is: 0x%lx", gaddr);
 
-    if ( dump ) {
+    if ( dump )
+    {
         if ( domid == DOMID_XEN )
             Lprintf("Xen: gaddr=0x%lx", gaddr);
         else
@@ -532,7 +537,8 @@ int main(int argc, char *argv[])
         goto out;
     }
 
-    if ( type < 0 || type >= MCE_TABLE_SIZE ) {
+    if ( type < 0 || type >= MCE_TABLE_SIZE )
+    {
         err(xc_handle, "Unsupported error type");
         goto out;
     }
diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 8746aabe6b..f76feb0b3a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 SUBDIRS-y :=
 SUBDIRS-y += resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
-SUBDIRS-$(CONFIG_X86) += mce-test
 ifneq ($(clang),y)
 SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
diff --git a/tools/tests/mce-test/Makefile b/tools/tests/mce-test/Makefile
deleted file mode 100644
index 1395df38ac..0000000000
--- a/tools/tests/mce-test/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-.PHONY: all clean distclean
-
-all: 
-	$(MAKE) -C tools
-
-clean:
-	$(MAKE) -C tools clean
-
-distclean:
-	$(MAKE) -C tools distclean
-
-install uninstall:
diff --git a/tools/tests/mce-test/README b/tools/tests/mce-test/README
deleted file mode 100644
index 65e6d1b045..0000000000
--- a/tools/tests/mce-test/README
+++ /dev/null
@@ -1,75 +0,0 @@
-Xen MCE test suite
----------------
-
-The Xen MCE test suite is a collection of tools and test scripts for
-testing the Xen MCE processing features. The goal is to cover
-most Xen MCE processing code paths and features with automation tests.
-
-
-In the Package
---------------
-
-Here is a short description of what is included in the package
-
-README
-	This is document
-
-Makefile
-	For compile
-
-cases/*
-	Contains all test cases, which may be organized in sub-directories, 
-	the interface of test case is a shell script under cases/, such as:
-	   -- cases/srao_mem/dom0/cases.sh
-
-config/*
-	Contains test configuration files, which specifies the parameters 
-	for test cases, etc.
-
-lib/*
-	Contains some shell scripts, in which some common shell
-	functions and variable definitions are defined to be used by
-	test cases.
-
-tools/*
-	Tools used by MCE test suites, now only xen-mceinj tool.
-
-results/
-	When test is done, the test result will be placed in this
-	directory, test results	of various cases may be in corresponding 
-	directory. 
-	For example, files in
-	    results/srao_mem_dom0/result
-	is the result for test case cases/srao_mem/dom0/cases.sh, there will
-	be 3 result conditions: PASSED/FAILED/NORESULT.
-		results/<test_case>/testlog   #the test log during testing
-		results/<test_case>/mcelog    #mcelog output during testing
-		results/<test_case>/xenlog    #Xen log during testing
-		results/<test_case>/gklog     #VM guest kernel log during testing
-		results/<test_case>/guest_config   #config file used to create guest
-
-
-Test Instruction
-----------------
-
-1.	make sure you have a dom0 with mce support
-	CONFIG_X86_MCE=y
-	CONFIG_X86_MCE_INTEL=y
-	CONFIG_X86_MCE_AMD=y
-	CONFIG_X86_MCE_THRESHOLD=y
-	CONFIG_X86_MCE_INJECT=y
-
-2.	run system at xen and start xend. A installed guest image is
-	necessary when do guest MCE error injection.
-3.	compile tools that used to test. in mce-test, $make.
-	Note: make sure compile xen/tools before do this step
-4.	run test cases that you want.
-	e.g. $sh cases/srao_mem/dom0/cases.sh -d 0 -p 0x0200 -c 2 -t 1
-5.	get test result in results directory
-
-
-Notes
-----------------
-All test cases fake a error and inject this error in 0x180020, For Xen
-test cases(e.g. cases/srao_mem/xen/cases.sh), error happen on every page 
-may cause a Xen panic. 
diff --git a/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh b/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
deleted file mode 100644
index c540f64998..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_llc/guest/cases.sh b/tools/tests/mce-test/cases/srao_llc/guest/cases.sh
deleted file mode 100644
index 47a7ee4ab9..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/guest/cases.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    guest_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_llc/xen/cases.sh b/tools/tests/mce-test/cases/srao_llc/xen/cases.sh
deleted file mode 100644
index 1d8e02ff65..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/xen/cases.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh b/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
deleted file mode 100644
index 22d4a00960..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/guest/cases.sh b/tools/tests/mce-test/cases/srao_mem/guest/cases.sh
deleted file mode 100644
index 7ab4523096..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/guest/cases.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    guest_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/xen/cases.sh b/tools/tests/mce-test/cases/srao_mem/xen/cases.sh
deleted file mode 100644
index 7ae49a82ac..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/xen/cases.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh b/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
deleted file mode 100644
index 808f007708..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh b/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
deleted file mode 100644
index 0ca4e2c961..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh b/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
deleted file mode 100644
index c73a2f6c16..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
+++ /dev/null
@@ -1,68 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/config/setup.conf b/tools/tests/mce-test/config/setup.conf
deleted file mode 100644
index 05f754dfd6..0000000000
--- a/tools/tests/mce-test/config/setup.conf
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-#
-# Software injection based test cases: test cases are triggered via
-# mce-inject tool.
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-export MCE_SRAO_MEM=0
-export MCE_SRAO_LLC=1
-export CMCI_UCNA_LLC=2
diff --git a/tools/tests/mce-test/lib/xen-mceinj-tool.sh b/tools/tests/mce-test/lib/xen-mceinj-tool.sh
deleted file mode 100644
index c0a3b293c5..0000000000
--- a/tools/tests/mce-test/lib/xen-mceinj-tool.sh
+++ /dev/null
@@ -1,260 +0,0 @@
-#!/bin/bash
-#
-# Software injection based test cases: test cases are triggered via
-# mce-inject tool.
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-. $ROOT/config/setup.conf
-
-#Guest Image Preparation
-hvm_image_prepare()
-{
-    local image=$1
-    local tmpdir=`mktemp -d`
-    local tmpfile=`mktemp`
-    local offset=`kpartx -l $image | awk '{print $NF*512}'`
-    mount -oloop,offset=$offset $image $tmpdir && echo "mount image to $tmpdir"
-    local g_grub=$tmpdir/boot/grub/grub.conf
-    if [ $? -ne 0 ]; then
-        show "  Mount image failed!"
-        return 1
-    fi
-
-    if ! grep FLAG_CONSOLE $g_grub; then
-        sed -e '/kernel/s/$/ console=ttyS0,115200,8n1 console=tty0/g' \
-            $g_grub > $tmpfile
-        mv -f $tmpfile $g_grub
-        rm -f $tmpfile
-        echo "
-#### FLAG_CONSOLE #### " >> $g_grub
-    fi
-    umount $tmpdir
-    rm -fr $tmpdir
-
-    return 0
-}
-
-create_hvm_guest()
-{
-    local image=$1
-    local originconfig="/etc/xen/xmexample.hvm"
-    local TF=`mktemp`
-    local case_dir=$ROOT/results/$this_case
-    local config=$case_dir/guest_config
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $logfile ] || touch $logfile
-    local File=`echo $image|sed "s/\//\\\\\\\\\\//g"`
-    local g_name="`basename $image`_`date +%H%M%S`"
-
-    hvm_image_prepare $image
-
-    while getopts ":u:m:" Option
-    do
-        case $Option in
-            u ) vcpus=$OPTARG;;
-            m ) memory=$OPTARG;;
-            e ) bridge_name=$OPTARG;;
-            * ) ;;
-        esac
-    done
-
-    cp $originconfig $config -f
-
-    if [ -z $image ]; then
-        show "Image file $image does not exist, Please input one valid file"
-        return 1
-    fi
-
-    sed -e "/^disk/s/file:.*,\(hda\)/file:${File},\1/" $config \
-          | sed -e "/^disk/s/phy:.*,\(hda\)/file:${File},\1/" >$TF
-    mv -f $TF $config
-
-    [ -z $memory ] || sed -i "/^memory/s/^.*$/memory = $memory/" $config
-    [ -z $vcpus ] || sed -i "1,/^#vcpus/s/^#vcpus.*$/vcpus=$vcpus/;1d" $config
-    sed -i "/^vif/s/vif/#vif/" $config
-    sed -i "/^name/s/^.*$/name = \"$g_name\"/" $config
-
-    string1=$(ls /dev/pts | sort)
-    xm cr $config
-    [ $? -eq 0 ] && domid=`xm list $g_name | tail -n1 | awk '{print $2}'`
-    if [ -z $domid ]; then
-        show "  Guest can not boot up"
-        return 1
-    fi
-    
-    sleep 10
-
-    string2=$(ls /dev/pts | sort)
-
-    get_guest_klog
-    sleep 40
-
-    return 0
-}
-
-get_guest_klog()
-{
-    local case_dir=$ROOT/results/$this_case
-    gklog=$case_dir/gklog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $gklog ] || touch $gklog
-    for fo in $string2; do
-        echo $string1 | grep $fo -wq
-        [ $? -eq 1 ] && num=$fo
-    done
-    cat /dev/pts/$num > $gklog &
-}
-
-mce_inject_trigger()
-{
-    local errtype=$1
-    local append=""
-    while getopts ":d:u:p:" Option
-    do
-        case $Option in
-            d ) domid=$OPTARG;;
-            u ) cpu=$OPTARG;;
-            p ) pageaddr=$OPTARG;;
-            * ) ;;
-        esac
-    done
-
-    [ -z $domid ] || append=$append" -d $domid"
-    [ -z $cpu ] || append=$append" -c $cpu"
-    [ -z $pageaddr ] || append=$append" -p $pageaddr"
-
-    [ -f $ROOT/tools/xen-mceinj ]
-    if [ $? -eq 0 ]; then
-        xm dmesg -c
-        $ROOT/tools/xen-mceinj -t $errtype $append
-        if [ $? -ne 0 ]; then
-            show "  Failed: Maybe the memory addr is out of range. \
-                      Please check whether used xen-mceinj tool correctlly"
-            return 1
-        fi
-    else
-        show "  Failed: please compile xen-mce inject tool firstly"
-        return 1
-    fi
-    return 0
-}
-
-xen_verify()
-{
-    local case_dir=$ROOT/results/$this_case
-    local xenlog=$case_dir/xenlog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $xenlog ] || touch $xenlog
-    xm dmesg > $xenlog
-    grep "Error is successfully recovered" $xenlog > /dev/null
-    if [ $? -eq 0 ]; then
-        show "  Passed: Xen handle this MCE error successfully"
-    else
-        show "  Failed: Xen does not handle MCE error correctly !!"
-        return 1
-    fi
-    return 0
-}
-
-guest_verify()
-{
-    grep "kernel page recovery" $gklog > /dev/null
-    if [ $? -eq 0 ]; then
-        show "  Passed: Guest recive MCE error and solved correctly"
-    else
-        show "  Failed: Guest fail to solve MCE error"
-        return 1
-    fi
-    return 0
-}
-
-mcelog_verify()
-{
-    local err_type=$1
-    local ret=0
-    local case_dir=$ROOT/results/$this_case
-    local mcelog=$case_dir/mcelog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $mcelog ] || touch $mcelog
-    mcelog > $mcelog
-    if [ -z $mcelog ]; then
-        show "  Failed: MCELOG does not catch anything"
-        return 1
-    else
-        if [ $err_type -eq 0 ]; then
-            grep "MEMORY CONTROLLER MS_CHANNELunspecified_ERR" $mcelog \
-                > /dev/null
-            ret=$?
-        elif [ $err_type -eq 1 ]; then
-            grep "Generic CACHE Level-2 Eviction Error" $mcelog > /dev/null
-            ret=$?
-        elif [ $err_type -eq 2 ]; then
-            grep "Data CACHE Level-2 Data-Read Error" $mcelog > /dev/null
-            ret=$?
-        fi
-
-        if [ $ret -eq 0 ]; then
-            show "  Passed: MCElog catch a correct error"
-        else 
-            show "  Failed: MCE log catch a incorrect error !!"
-            return 1
-        fi
-    fi
-
-    return 0
-}
-
-function des_guest()
-{
-    xm des $domid    
-}
-
-function clean_env()
-{
-    [ -d $ROOT/results ] || mkdir $ROOT/results
-    # clean logs and results of last test for this case
-    rm -fr $ROOT/results/$this_case/*
-}
-
-function show()
-{
-    local case_dir=$ROOT/results/$this_case
-    local logfile=$case_dir/testlog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $logfile ] || touch $logfile
-    echo -e $* | tee -a $logfile > /dev/null
-}
-
-function gen_result()
-{
-    local ret=$1
-    local case_dir=$ROOT/results/$this_case
-    local result=$case_dir/result
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $result ] || touch $result
-    
-    if [ $ret -eq 0 ]; then
-        echo "PASSED" > $result
-    elif [ $ret -eq 1 ]; then
-        echo "FAILED" > $result
-        echo "   Please check testlog for details!!! " >> $result
-    else
-        echo "NORESULT" > $result
-        echo "   Please check testlog for details!!! " >> $result
-    fi
-}
diff --git a/tools/tests/mce-test/tools/Makefile b/tools/tests/mce-test/tools/Makefile
deleted file mode 100644
index 0e92ac2977..0000000000
--- a/tools/tests/mce-test/tools/Makefile
+++ /dev/null
@@ -1,24 +0,0 @@
-XEN_ROOT=$(CURDIR)/../../../..
-include $(XEN_ROOT)/tools/Rules.mk
-
-CFLAGS += -Werror
-CFLAGS += $(CFLAGS_libxenctrl)
-CFLAGS += $(CFLAGS_libxenguest)
-CFLAGS += $(CFLAGS_libxenstore)
-CFLAGS += $(CFLAGS_xeninclude)
-
-.PHONY: all
-all: xen-mceinj
-
-install: xen-mceinj
-	$(INSTALL_PROG) xen-mceinj $(DESTDIR)$(sbindir)
-
-.PHONY: clean
-clean:
-	$(RM) *.o xen-mceinj
-
-.PHONY: distclean
-distclean: clean
-
-xen-mceinj: xen-mceinj.o Makefile
-	$(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore)
diff --git a/tools/tests/mce-test/tools/README b/tools/tests/mce-test/tools/README
deleted file mode 100644
index bd0d442bae..0000000000
--- a/tools/tests/mce-test/tools/README
+++ /dev/null
@@ -1,24 +0,0 @@
-Xen Machine Check Exception(MCE) error inject tool
-----------------------------------------------
-
-xen-mceinj is a software MCE injection tool, which is based on Xen 
-MCE injection mechanism. It allows to inject machine check errors on the
-software level into a running Xen/dom0/VM. This is intended for
-validation of the Xen machine check handler.
-
-With the help of the Makefile, it is possible to compile a binary file 
-named "xen-mceinj".
-
-Usage
------
-$make (make install) --Note: make sure compile xen/tools before do this step
-$./xen-mceinj [OPTION]...
-
-OPTION arguments can be:
-  -D, --dump           dump addr info without error injection
-  -c, --cpu=CPU_ID     target CPU, the default is CPU0
-  -d, --domain=DomID   target domain, the default is Xen itself
-  -p, --page           physical page address, the default is 0x180020
-  -t, --type=error     error type
-
-For detail help, please refer to "./xen-mceinj -h"
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:20:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142263.262557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBnU-0001JJ-KP; Tue, 15 Jun 2021 16:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142263.262557; Tue, 15 Jun 2021 16:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBnU-0001IL-G6; Tue, 15 Jun 2021 16:20:16 +0000
Received: by outflank-mailman (input) for mailman id 142263;
 Tue, 15 Jun 2021 16:20:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBnT-0001I3-9X
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:20:15 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fdb62d00-d648-4f72-aed7-cbfa368f4b73;
 Tue, 15 Jun 2021 16:20:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdb62d00-d648-4f72-aed7-cbfa368f4b73
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623774013;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=F5Mje3VKXtUyGE4h1vE6y2c0mTHqiWniCE7C39mKUeA=;
  b=eXB0iBdiIRRaVmyydaENHFqVULnp0b5lGOg2eyDqC8SWkMwhNtddwZsV
   /aT9WPslym35rNEOaCCEkc4aCUuVMKSUPRCO4IGiPTj17W6AJgqvPlSUZ
   z8/pnh4rfTu8Gt0pf3J0B5bHP2dQqU0uoO99bnnyNOe+zMZOo5PubFBpA
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 5wWe7+wJYhbgc3ZA/tjz/BPbwzUU6BsTgBZ/ylAsbtojZw81FYJKt4P5MpT0mI7bEJjlxxvS0g
 fmUeWU9OVZzln/mq00OxvASNsVhn2cZY5qawOCsv/Am8v0nepNzkq1Bir5JzdKp5SB0VUlig9n
 TGxDgjBpS0HUjEdd++cp837SOziPflpi1yHIfQ95/sCBWVSZzpP+ETLfWAXLv92peDKxHlEneR
 do8/Y1sAvwhpusB4Zy/ROvqgQ83VnNqje6B4Zh9WzYJd/4Ss/BcVF9M0ggrWJyl2MSVNxbx4Dr
 6Fw=
X-SBRS: 5.1
X-MesageID: 46185786
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:JTKuIaABeTjyqPrlHemU55DYdb4zR+YMi2TC1yhKJyC9Ffbo7v
 xG/c5rsyMc5wxwZJhNo7y90ey7MBbhHP1OkO4s1NWZLWrbUQKTRekIh+bfKn/baknDH4ZmpN
 9dmsNFaeEYY2IUsS+D2njbL+od
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46185786"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 0/5] tools/tests: More cleanup for automation improvements
Date: Tue, 15 Jun 2021 17:19:00 +0100
Message-ID: <20210615161905.9831-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This series takes us one step closer towards the "autotests" plan for
simplifying the running of unit and low level system tests.

With this series in place, all the tests which need running in dom0 have
working install/uninstall targets, so they can be packaged suitably.

Some questions, concerning future changes.

Ian: Presumably OSSTest is going to complain if I stick a test- prefix on
depriv-fd-checker?  I've left that test alone for now, as it was the only
preexisting test with working install runes.

Jan/Roger: x86_emulator and vpci use $(HOSTCC) not $(CC).  While they are unit
tests, we still potentially want to run them in dom0 rather than the build
environment - particularly for x86_emulator which is heavily CPUID based and
wants to run on a wide set of hardware.  Any issues moving them off $(HOSTCC)?

Roger: vhpet isn't even wired into the build system, and seems non-trivial to
run in the first place.  How should "success" be judged?

Andrew Cooper (5):
  tools/tests: Drop obsolete mce-test infrastructure
  tools/tests: Drop run runes
  tests/resource: Rework Makefile
  tests/cpu-policy: Rework Makefile
  tests/xenstore: Rework Makefile

 .gitignore                                         |   2 -
 tools/misc/.gitignore                              |   1 +
 tools/misc/Makefile                                |   4 +
 tools/{tests/mce-test/tools => misc}/xen-mceinj.c  |  32 +--
 tools/tests/Makefile                               |   1 -
 tools/tests/cpu-policy/Makefile                    |  33 +--
 tools/tests/mce-test/Makefile                      |  12 -
 tools/tests/mce-test/README                        |  75 ------
 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_llc/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_llc/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_mem/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_mem/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh  |  72 ------
 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh |  92 --------
 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh   |  68 ------
 tools/tests/mce-test/config/setup.conf             |  24 --
 tools/tests/mce-test/lib/xen-mceinj-tool.sh        | 260 ---------------------
 tools/tests/mce-test/tools/Makefile                |  24 --
 tools/tests/mce-test/tools/README                  |  24 --
 tools/tests/resource/Makefile                      |  11 +-
 tools/tests/vpci/Makefile                          |   4 -
 tools/tests/x86_emulator/Makefile                  |   4 -
 tools/tests/xenstore/.gitignore                    |   1 +
 tools/tests/xenstore/Makefile                      |  31 ++-
 .../tests/xenstore/{xs-test.c => test-xenstore.c}  |   0
 27 files changed, 71 insertions(+), 1176 deletions(-)
 rename tools/{tests/mce-test/tools => misc}/xen-mceinj.c (97%)
 delete mode 100644 tools/tests/mce-test/Makefile
 delete mode 100644 tools/tests/mce-test/README
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/config/setup.conf
 delete mode 100644 tools/tests/mce-test/lib/xen-mceinj-tool.sh
 delete mode 100644 tools/tests/mce-test/tools/Makefile
 delete mode 100644 tools/tests/mce-test/tools/README
 create mode 100644 tools/tests/xenstore/.gitignore
 rename tools/tests/xenstore/{xs-test.c => test-xenstore.c} (100%)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:20:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142264.262561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBnU-0001Nk-TS; Tue, 15 Jun 2021 16:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142264.262561; Tue, 15 Jun 2021 16:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBnU-0001Mt-P4; Tue, 15 Jun 2021 16:20:16 +0000
Received: by outflank-mailman (input) for mailman id 142264;
 Tue, 15 Jun 2021 16:20:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBnU-0001I3-1Q
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:20:16 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97388bdc-2743-4f93-8aaa-34c51520979e;
 Tue, 15 Jun 2021 16:20:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97388bdc-2743-4f93-8aaa-34c51520979e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623774014;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=AvUFbf3FSMcV+rluVuwGLyq8UftlOj+vpVc2Zx0Ih3c=;
  b=fG/JNQxiBlnVHxBaYAr10OFENtr0/fogTkbQuzalC5rVQJPHpVa7Til5
   7DmA/ZIHaP4GqSsDqtLNyillujqrxkiLEMBluGJSo4fcMWk0WmXzPxgAy
   T2eW66uZfRRLdFPJYdVtRhumPqNblyDwpDUmYWXrNInnnWzY4aISAZi5j
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: q5xrZZJri4rT2UW6TqANu9YkT7bgIzgR/i3TaOP9zZjx4NWlRnzChVlUfTvKm18qpmxKXuooe9
 +COybCnVGTXoJL/QcxIOY7AN6QPxVjw5iFOAXei2Cqg7M6h0HRQ79oaHDGmrAv9jPLcDsEGYSS
 w/X3PjGkvaSWGCZQLhg84drRu7NWNfsQ+NHy/xP5gcpORzEFS1t4+yMq12azvg8kaB5DIiwIqd
 zLguIR0wfOUZuxoxLPdE36gHfAgJ0z5IJ64BJ8+633Kr3OzGoW6J561QIeU6DpuTx3bpK5FE75
 K0Y=
X-SBRS: 5.1
X-MesageID: 46185793
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:0lTbQq31AEh6IFNLlqwSTAqjBIokLtp133Aq2lEZdPRUGvb3qy
 nIpoVj6faUskd2ZJhOo7C90cW7LU80sKQFhLX5Xo3SOzUO2lHYT72KhLGKq1aLdhEWtNQtsZ
 uIG5IOcOEYZmIasS+V2maF+q4bsbu6zJw=
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46185793"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 5/5] tests/xenstore: Rework Makefile
Date: Tue, 15 Jun 2021 17:19:05 +0100
Message-ID: <20210615161905.9831-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged to be automated sensibly.

Rename xs-test to test-xenstore to be consistent with other tests.  Honour
the APPEND_*FLAGS variables too.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 .gitignore                                         |  1 -
 tools/tests/xenstore/.gitignore                    |  1 +
 tools/tests/xenstore/Makefile                      | 31 +++++++++++++++-------
 .../tests/xenstore/{xs-test.c => test-xenstore.c}  |  0
 4 files changed, 23 insertions(+), 10 deletions(-)
 create mode 100644 tools/tests/xenstore/.gitignore
 rename tools/tests/xenstore/{xs-test.c => test-xenstore.c} (100%)

diff --git a/.gitignore b/.gitignore
index d4b90303b2..8ebb51b6c5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -275,7 +275,6 @@ tools/tests/x86_emulator/*sse*.[ch]
 tools/tests/x86_emulator/test_x86_emulator
 tools/tests/x86_emulator/x86_emulate
 tools/tests/x86_emulator/xop*.[ch]
-tools/tests/xenstore/xs-test
 tools/tests/vpci/list.h
 tools/tests/vpci/vpci.[hc]
 tools/tests/vpci/test_vpci
diff --git a/tools/tests/xenstore/.gitignore b/tools/tests/xenstore/.gitignore
new file mode 100644
index 0000000000..4b44f5dd60
--- /dev/null
+++ b/tools/tests/xenstore/.gitignore
@@ -0,0 +1 @@
+test-xenstore
diff --git a/tools/tests/xenstore/Makefile b/tools/tests/xenstore/Makefile
index a367d88803..3c7b7073fd 100644
--- a/tools/tests/xenstore/Makefile
+++ b/tools/tests/xenstore/Makefile
@@ -1,11 +1,7 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
-
-CFLAGS += $(CFLAGS_libxenstore)
-
-TARGETS-y := xs-test
+TARGETS-y := test-xenstore
 TARGETS := $(TARGETS-y)
 
 .PHONY: all
@@ -16,14 +12,31 @@ build: $(TARGETS)
 
 .PHONY: clean
 clean:
-	$(RM) *.o $(TARGETS) *~ $(DEPS_RM)
+	$(RM) -f -- *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
+	$(RM) -f -- *~
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(if $(TARGETS),$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN))
+
+.PHONY: uninstall
+uninstall:
+	$(RM) -f -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
+
+CFLAGS += -Werror
+CFLAGS += $(CFLAGS_libxenstore)
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenstore)
+LDFLAGS += $(APPEND_LDFLAGS)
 
-xs-test: xs-test.o Makefile
-	$(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenstore)
+%.o: Makefile
 
-install uninstall:
+test-xenstore: test-xenstore.o
+	$(CC) -o $@ $< $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
diff --git a/tools/tests/xenstore/xs-test.c b/tools/tests/xenstore/test-xenstore.c
similarity index 100%
rename from tools/tests/xenstore/xs-test.c
rename to tools/tests/xenstore/test-xenstore.c
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:24:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:24:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142282.262579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBr7-0002j4-F5; Tue, 15 Jun 2021 16:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142282.262579; Tue, 15 Jun 2021 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBr7-0002ix-BN; Tue, 15 Jun 2021 16:24:01 +0000
Received: by outflank-mailman (input) for mailman id 142282;
 Tue, 15 Jun 2021 16:23:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBr5-0002ir-Nd
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:23:59 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f270937-ec10-4169-a767-df2b5c9094d9;
 Tue, 15 Jun 2021 16:23:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f270937-ec10-4169-a767-df2b5c9094d9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623774238;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=6YdOaL5KylmZkNkd/LcAJqbWCEoLcUzkQwf5yIAr7AQ=;
  b=JklyLH2WiZDBboUO3a2gOOBLg261dxVE/IV8anYccAqfE8d3RJyVvfV4
   ymrsSHBsCJVRNSXUlL22/cT/ktjjhVmQL+d1qereFXktZoaYhctYFtZUE
   pzhXw74Bh/5hUpXXObdqBdB3oQrubnY3dGLq/VA4+tY5peV8VMv+zcvyR
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: gs87eUqGl0M/9UU75WbIMd38VSiTKcLnG/iMD30RY2ShXsd7o3Dl2ddIIudeT3xO9Yd8P3ak5c
 0f9eWO9D2KSl7C21WfQPpAmaODsJvSa8FAcPC91FuVvJObpoCf0KxA3uo8R90/fdz+PsuqgC9J
 YiQMN2Jjc8Rn4ugcoafVL3ovBSX1c9ZRrJsTAAGBUZQAprbV4W6JWYgnfw0yzUulD4zR+htd+M
 kucDi99EPNGOKApW3Ou0naDCaoEjBe9b8R2qNP68fQhLzol+Uss73ZyP0VBsCh3wg5u6vhNE1f
 qYs=
X-SBRS: 5.1
X-MesageID: 46186151
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:w6P3k6yQrTL3u2mWTtreKrPwFL1zdoMgy1knxilNoRw8SKKlfq
 eV7Y0mPH7P+VAssR4b+exoVJPtfZqYz+8R3WBzB8bEYOCFghrKEGgK1+KLqFeMJ8S9zJ846U
 4JSdkHNDSaNzlHZKjBjzVQa+xQouW6zA==
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46186151"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 4/5] tests/cpu-policy: Rework Makefile
Date: Tue, 15 Jun 2021 17:19:04 +0100
Message-ID: <20210615161905.9831-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged to be automated sensibly.

Rework TARGET-y to be TARGETS, drop the unconditional -O3 and use the default
instead, and drop CFLAGS from the link line but honour APPEND_LDFLAGS.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/tests/cpu-policy/Makefile | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 53b4f28b2a..ab3e0ffde3 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -1,25 +1,23 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-TARGET-y := test-cpu-policy
+TARGETS :=
 
 # For brevity, these tests make extensive use of designated initialisers in
 # anonymous unions, but GCCs older than 4.6 can't cope.  Ignore the test in
 # this case.
-ifneq ($(clang),y)
-TARGET-$(call cc-ver,$(CC),lt,0x040600) :=
-endif
-
-ifeq ($(TARGET-y),)
+ifneq ($(gcc)$(call cc-ver,$(CC),lt,0x040600),yy)
+TARGETS += test-cpu-policy
+else
 $(warning Test harness not built, use newer compiler than "$(CC)" (version $(shell $(CC) -dumpversion)))
 endif
 
 .PHONY: all
-all: $(TARGET-y)
+all: $(TARGETS)
 
 .PHONY: clean
 clean:
-	$(RM) -f -- *.o .*.d .*.d2 test-cpu-policy
+	$(RM) -f -- *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
@@ -27,15 +25,24 @@ distclean: clean
 
 .PHONY: install
 install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(if $(TARGETS),$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN))
 
 .PHONY: uninstall
+uninstall:
+	$(RM) -f -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -O3
+CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(APPEND_CFLAGS)
 
-vpath %.c ../../../xen/lib/x86
+LDFLAGS += $(APPEND_LDFLAGS)
+
+vpath %.c $(XEN_ROOT)/xen/lib/x86
+
+*.o: Makefile
 
 test-cpu-policy: test-cpu-policy.o msr.o cpuid.o policy.o
-	$(CC) $(CFLAGS) $^ -o $@
+	$(CC) $^ -o $@ $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:25:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142286.262590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBs7-0003Im-Of; Tue, 15 Jun 2021 16:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142286.262590; Tue, 15 Jun 2021 16:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBs7-0003If-LF; Tue, 15 Jun 2021 16:25:03 +0000
Received: by outflank-mailman (input) for mailman id 142286;
 Tue, 15 Jun 2021 16:25:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBs6-0003IV-6U
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:25:02 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a890bea2-b883-4064-a3c5-4f17961b9e3e;
 Tue, 15 Jun 2021 16:25:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a890bea2-b883-4064-a3c5-4f17961b9e3e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623774301;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SDBx1Aretrpn/nNAfujB3dI0OwxDJgEZecrfpSikAR8=;
  b=CwSzTEF89YJ2lApoWmXyOpa8SRjvyFmL++rMa+PC+hwUsRqQlRUQHpde
   RR4y+gc2f5sKsw4kKPBGLu1R4aZ7/3oeiaXhxj3IV6iXYc1wOSAxC/O8M
   nVv0GwhXmwsyFPZcrRuwnaHwkUSWSvPWtqxEAodx3fiQgzY/byuOAUdAe
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cOMLRAZrh4TGYSiccUyZQmJRNeWGCWYTmomE3MYu4tnO/LwNnPFF2qUij/6iSskp0Njvqjrpxr
 tVLih9Ta5IB/lzBw5jIcBGvdTIj+CjDbzJ0HtlihjFvQlISDn1oPhzA1yzX3A+9M0cu7OGShL+
 Vw02kCSflV/jqU731ySCU+bSt05jJUZp7lodESMtC6jGVHe1OxAeQeocl5UZhBpv0RWqwnE2bD
 8bvEOSmYeSkUBVk5SJ0R3aL9+QqDAzf3OU/Y4dagUmseijWHxQib8JfFuqtjhPr7fqT84EgbVh
 Bss=
X-SBRS: 5.1
X-MesageID: 46186252
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:3GXDyaoys+EsuLjbOmgImPgaV5rdeYIsimQD101hICG9Evb0qy
 lhppQmPH7P+VAssRQb8+xoV5PufZqxz/BICMwqTNWftWrdyQyVxeNZnOjfKlTbckWTygce79
 YET0EXMrbN5DNB/KLHCWeDcurJwLO8gd+VbeW19QYScem9AZsQnjuQCWygYz1LrBEtP+tBKH
 IFjPA32gZJfx4sH7yGL0hAZcfvjfvRmqnrZBYXbiRXlDVn3VuTmcXH+wHz5GZlbw9y
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46186252"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 3/5] tests/resource: Rework Makefile
Date: Tue, 15 Jun 2021 17:19:03 +0100
Message-ID: <20210615161905.9831-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged to be automated sensibly.

Make all object files depend on the Makefile, and use $(TARGET) when
appropriate.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/tests/resource/Makefile | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index da5e2a4f9b..b22eb6fc21 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -16,9 +16,12 @@ distclean: clean
 
 .PHONY: install
 install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(LIBEXEC_BIN)
 
 .PHONY: uninstall
 uninstall:
+	$(RM) -f -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
 
 CFLAGS += -Werror
 CFLAGS += $(CFLAGS_xeninclude)
@@ -30,7 +33,9 @@ LDFLAGS += $(LDLIBS_libxenctrl)
 LDFLAGS += $(LDLIBS_libxenforeignmemory)
 LDFLAGS += $(APPEND_LDFLAGS)
 
-test-resource: test-resource.o
+*.o: Makefile
+
+$(TARGET): test-resource.o
 	$(CC) -o $@ $< $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:25:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:25:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142288.262601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBsH-0003dK-Vz; Tue, 15 Jun 2021 16:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142288.262601; Tue, 15 Jun 2021 16:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBsH-0003dD-Ss; Tue, 15 Jun 2021 16:25:13 +0000
Received: by outflank-mailman (input) for mailman id 142288;
 Tue, 15 Jun 2021 16:25:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6ws=LJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltBsG-0003IV-4t
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:25:12 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 430cf9d0-84ca-47da-9e24-22f579c47ed3;
 Tue, 15 Jun 2021 16:25:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 430cf9d0-84ca-47da-9e24-22f579c47ed3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623774302;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=zXwBttlq/sUctChlltDPrS3/9kYAFsNJN4Yl+m3Z5I0=;
  b=Uv9WOl155jbybyXWFKG/HpmXnY1Vc/f51nKqaLPArvvfWXRW0yb0uDbf
   5JXrQh25ZAqXuXYKh7JaFoXmJXNDyCiSF6m6DDvEKL4GLw5igwQAgiTi4
   454c9+zYnh2SsGAijq4sloLn84Xt5oOGFrXZQ0YRr4dAbrm0GilL+bGsv
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 46552554
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,275,1616472000"; 
   d="scan'208";a="46552554"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 2/5] tools/tests: Drop run runes
Date: Tue, 15 Jun 2021 17:19:02 +0100
Message-ID: <20210615161905.9831-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These aren't implemented consistently.  The one in resource/ is useless as the
binary needs running in dom0, and the layout in cpu-policy/ demonstrates the
weakness of this approach with multiple binaries.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/tests/cpu-policy/Makefile   | 4 ----
 tools/tests/resource/Makefile     | 4 ----
 tools/tests/vpci/Makefile         | 4 ----
 tools/tests/x86_emulator/Makefile | 4 ----
 4 files changed, 16 deletions(-)

diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 70ff154da6..53b4f28b2a 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -17,10 +17,6 @@ endif
 .PHONY: all
 all: $(TARGET-y)
 
-.PHONY: run
-run: $(TARGET-y)
-	./$(TARGET-y)
-
 .PHONY: clean
 clean:
 	$(RM) -f -- *.o .*.d .*.d2 test-cpu-policy
diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index 4bef482966..da5e2a4f9b 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -6,10 +6,6 @@ TARGET := test-resource
 .PHONY: all
 all: $(TARGET)
 
-.PHONY: run
-run: $(TARGET)
-	./$(TARGET)
-
 .PHONY: clean
 clean:
 	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
diff --git a/tools/tests/vpci/Makefile b/tools/tests/vpci/Makefile
index 5075bc2be2..f172cefd3d 100644
--- a/tools/tests/vpci/Makefile
+++ b/tools/tests/vpci/Makefile
@@ -6,10 +6,6 @@ TARGET := test_vpci
 .PHONY: all
 all: $(TARGET)
 
-.PHONY: run
-run: $(TARGET)
-	./$(TARGET)
-
 $(TARGET): vpci.c vpci.h list.h main.c emul.h
 	$(HOSTCC) -g -o $@ vpci.c main.c
 
diff --git a/tools/tests/x86_emulator/Makefile b/tools/tests/x86_emulator/Makefile
index 7b07c31bbd..7b3f58b7a2 100644
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -7,10 +7,6 @@ TARGET := test_x86_emulator
 .PHONY: all
 all:
 
-.PHONY: run
-run: $(TARGET)
-	./$(TARGET)
-
 # Add libx86 to the build
 vpath %.c $(XEN_ROOT)/xen/lib/x86
 
-- 
2.11.0
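[Editorial note] The runes being dropped were thin `run: ./$(TARGET)` wrappers; the pattern can be demonstrated standalone. Everything below (the temp directory, the `test-demo` name, the Makefile) is hypothetical, not part of the Xen tree:

```shell
# Standalone sketch of the "run" rune pattern removed above.
dir=$(mktemp -d)
cd "$dir"
# Stand-in for a built test binary:
printf '#!/bin/sh\necho demo-ok\n' > test-demo
chmod +x test-demo
# The rune: a phony "run" target that merely executes the binary.
printf 'TARGET := test-demo\n\n.PHONY: run\nrun: $(TARGET)\n\t./$(TARGET)\n' > Makefile
make -s run   # equivalent to just executing ./test-demo directly
```

After this change, the test binaries are simply executed directly in the relevant directory (or on the relevant host) instead of through a per-directory `make run` target.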



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:28:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:28:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142307.262630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBvq-0005Mz-Vn; Tue, 15 Jun 2021 16:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142307.262630; Tue, 15 Jun 2021 16:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBvq-0005Ms-Rf; Tue, 15 Jun 2021 16:28:54 +0000
Received: by outflank-mailman (input) for mailman id 142307;
 Tue, 15 Jun 2021 16:28:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QkLx=LJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1ltBvp-0005Md-0t
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:28:53 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9458951-44d2-40ce-a4cc-0d454079171a;
 Tue, 15 Jun 2021 16:28:50 +0000 (UTC)
Received: from AM6P195CA0051.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:87::28)
 by DB9PR08MB6974.eurprd08.prod.outlook.com (2603:10a6:10:2c1::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Tue, 15 Jun
 2021 16:28:42 +0000
Received: from AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:87:cafe::3c) by AM6P195CA0051.outlook.office365.com
 (2603:10a6:209:87::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21 via Frontend
 Transport; Tue, 15 Jun 2021 16:28:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT023.mail.protection.outlook.com (10.152.16.169) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Tue, 15 Jun 2021 16:28:42 +0000
Received: ("Tessian outbound 94919dbe50f5:v93");
 Tue, 15 Jun 2021 16:28:41 +0000
Received: from 091b633b91c0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F854707A-4E9D-431C-AC9D-35FC1DA88686.1; 
 Tue, 15 Jun 2021 16:28:30 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 091b633b91c0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 15 Jun 2021 16:28:30 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com (2603:10a6:20b:39e::10)
 by AS8PR08MB6679.eurprd08.prod.outlook.com (2603:10a6:20b:393::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21; Tue, 15 Jun
 2021 16:28:27 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5]) by AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5%7]) with mapi id 15.20.4219.025; Tue, 15 Jun 2021
 16:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9458951-44d2-40ce-a4cc-0d454079171a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HJ1jRUoVYT4kD8AIqpCF4g42R3wx0bDOHXejiemdO7c=;
 b=itw+LU3BgtrqnMoo/AF+YI4QHXOTGriFakHaZzdhrX3mAGEPUsdl5pskR0UE9vYN7juW8m7H0lgWmMtjKOkmf+UAbHvGncNq8fAWruQmGGvN+adjuF6Gupoe7/ER3e9MB9iApi/AoQQDxq7PEiikIif6UeTWjoeIp9gJEru9ZHQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 003f35ef77bd73bf
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GVMnJj3WAwP7t/RHi2okrVGJpdZIjKO0PCFXlmB+e/boURkHkK9kbdaIjIsegnxhSmz4b49ls44+3aCucbWH+hbNmPYLK7Y7qEee/wSjlqooCffkPwbO8QgVAqWyyHi9GZjaUH00Tz0pZcVrbPtRFtoeZNMEIL3SMGBEDa+2dmlQREJDnefMfJ/AuC1NE55azGzEwbPNAWps+h1PiImJgyQzfqoaXroo/dGw8ciCOj8eWLmfGkecJ1J6cC1jSgL0xojHkqmzgEfrhN7x6Yyu+lKPGL8HPgJMe5Rs3TZZicfa8BgkXdP0U6h/Cyhg7GfCnRR/DU/2bYUMWke5QYLdPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HJ1jRUoVYT4kD8AIqpCF4g42R3wx0bDOHXejiemdO7c=;
 b=ZlfMdFeVPAgffAbVNSlJ84AkeZgSkatWENDQTimen47we0oIA/6GWGuAx6+6/Cu883GYGycHgCr42SCRnb6tGUnQxBNUBj9SfD9tKMLORtBsXgBm/nHW+H9F02uxdYSwIjN7rdas2f/92BqbUKSyyAfChGQCADmfAZVC32ZSTJadzWmdqfa4nsNfb3rKsQdOLaO+JWwCHH+PM7jdBo+T7SNEbnAFbZFmWJ+adpncOCDO+0hny644yMZ6vkaaKeA5lRmU7Xqp9teGzX4RMaLDHLbUOd8JeLy675NxFWS8Fzou4pmIUXfYgH/rk7sEjHOAcyPSlsfz+TmMEmqLh6F2rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HJ1jRUoVYT4kD8AIqpCF4g42R3wx0bDOHXejiemdO7c=;
 b=itw+LU3BgtrqnMoo/AF+YI4QHXOTGriFakHaZzdhrX3mAGEPUsdl5pskR0UE9vYN7juW8m7H0lgWmMtjKOkmf+UAbHvGncNq8fAWruQmGGvN+adjuF6Gupoe7/ER3e9MB9iApi/AoQQDxq7PEiikIif6UeTWjoeIp9gJEru9ZHQ=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, "fnuv@xilinx.com"
	<fnuv@xilinx.com>
Subject: Re: smmuv1 breakage
Thread-Topic: smmuv1 breakage
Thread-Index: AQHXYY0v3XVdqEOvqESHjaQGLRWMRKsVQ9IA
Date: Tue, 15 Jun 2021 16:28:24 +0000
Message-ID: <791BFC00-6A50-48D2-A208-E529B887441F@arm.com>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 5230cf41-55f1-4ec4-c725-08d9301aa391
x-ms-traffictypediagnostic: AS8PR08MB6679:|DB9PR08MB6974:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB9PR08MB6974E0AC872E72305661C553FC309@DB9PR08MB6974.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <783BFDF8A47EA24C920B359F49768EAA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6679
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4a398a40-aa32-44a3-3134-08d9301a9a4f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2021 16:28:42.0584
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5230cf41-55f1-4ec4-c725-08d9301aa391
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6974

Hi Stefano

> On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> Hi Rahul,
> 
> Unfortunately, after bisecting, I discovered a few more breakages due to
> your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
> attached the DTB as reference. Please note that I made sure to
> cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
> the S2CR" during bisection. So the errors are present also on staging.
> 
> The first breakage is an error at boot time in smmu.c#find_smmu_master,
> see log1. I think it is due to the lack of ability to parse the new smmu
> bindings in the old smmu driver.
> 
> After removing all the "smmus" and "#stream-id-cells" properties in
> device tree, I get past the previous error, everything seems to be OK at
> early boot, but I actually get SMMU errors as soon as dom0 starting
> using devices:
> 
> (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
> (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000

This fault is an "Unidentified stream fault" for StreamID 0x877, which means
the SMMU SMR is not configured for StreamID 0x877.

> [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> 
> Do you think you'll be able to help fix them?
> 
> 
> You should be able to reproduce the two issues using Xilinx QEMU (but to
> be honest I haven't tested it on QEMU yet, I was testing on real
> hardware):
> - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
>  ./configure  --target-list=aarch64-softmmu
>  make
> - clone and build git://github.com/Xilinx/qemu-devicetrees.git
> - use the attached script to run it
>    - kernel can be upstream defconfig 5.10
> 

I tried to reproduce the issue on Xilinx QEMU as per the steps shared above,
but I am not observing any issue on Xilinx QEMU.

I also tested and confirmed on QEMU that the SMMU is configured correctly,
specifically for StreamID 0x877 and for the other StreamIDs.

I checked the xen.dtb you shared and found that there is no "#stream-id-cells"
property in the master device, but the "mmu-masters" property is present in the
smmu node. For the legacy smmu binding we need both "#stream-id-cells" and
"mmu-masters". If you want to use the new smmu binding, please add the
"#iommu-cells" property in the smmu node and the "iommus" property in the
master device.

Can you please share the Xen boot logs with me so that I can debug further
why the error is observed?

Regards,
Rahul


> Cheers,
> 
> Stefano<xen.dtb><log1.txt><qemu-run-zynqmp-xilinx-xen.sh>
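[Editorial note] The binding requirements described in this message map onto device-tree fragments like the ones below. The node names, unit addresses, and the 0x877 StreamID are illustrative only; the property names follow the Arm SMMU bindings. The two options are alternatives and cannot coexist for the same master:

```dts
/* Option A: legacy Arm SMMU binding.  "mmu-masters" lives in the smmu
 * node; "#stream-id-cells" lives in each master device node. */
smmu@fd800000 {
        compatible = "arm,mmu-500";
        mmu-masters = <&eth 0x877>;
};

eth: ethernet@ff0e0000 {
        #stream-id-cells = <1>;
};

/* Option B: generic IOMMU binding.  "#iommu-cells" lives in the smmu
 * node; "iommus" lives in each master device node. */
smmu: smmu@fd800000 {
        compatible = "arm,mmu-500";
        #iommu-cells = <1>;
};

ethernet@ff0e0000 {
        iommus = <&smmu 0x877>;
};
```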


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:29:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 16:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142308.262641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBvw-0005h3-BW; Tue, 15 Jun 2021 16:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142308.262641; Tue, 15 Jun 2021 16:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltBvw-0005gs-8P; Tue, 15 Jun 2021 16:29:00 +0000
Received: by outflank-mailman (input) for mailman id 142308;
 Tue, 15 Jun 2021 16:28:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PsWs=LJ=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1ltBvt-0005fe-LG
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 16:28:58 +0000
Received: from mailout1.w1.samsung.com (unknown [210.118.77.11])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fdd7d665-77c5-44f5-82bf-cca72d91274c;
 Tue, 15 Jun 2021 16:28:56 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout1.w1.samsung.com (KnoxPortal) with ESMTP id
 20210615162855euoutp01ad6e86ce042d5d9a8c1d0cc8b7f986dd~IzlYzcF-E1226212262euoutp01G;
 Tue, 15 Jun 2021 16:28:55 +0000 (GMT)
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTP id
 20210615162854eucas1p16fe3d1cb9712f22c61caf1af21eda61c~IzlYcZM1U2673626736eucas1p1h;
 Tue, 15 Jun 2021 16:28:54 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 78.59.09439.645D8C06; Tue, 15
 Jun 2021 17:28:54 +0100 (BST)
Received: from eusmtrp1.samsung.com (unknown [182.198.249.138]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTPA id
 20210615162854eucas1p12f7975e37d474ac5ffdb532fa21ef58b~IzlX3jd6f1146211462eucas1p19;
 Tue, 15 Jun 2021 16:28:54 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20210615162854eusmtrp1de18538b70433169cba9c508ea4d3c0d~IzlX2F6VP0369503695eusmtrp1k;
 Tue, 15 Jun 2021 16:28:54 +0000 (GMT)
Received: from eusmtip2.samsung.com ( [203.254.199.222]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id 5B.55.08705.645D8C06; Tue, 15
 Jun 2021 17:28:54 +0100 (BST)
Received: from [106.210.134.192] (unknown [106.210.134.192]) by
 eusmtip2.samsung.com (KnoxPortal) with ESMTPA id
 20210615162852eusmtip2560d33f2ab723e29cbde1a513bd9cbf2~IzlWL851d0102301023eusmtip2f;
 Tue, 15 Jun 2021 16:28:52 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdd7d665-77c5-44f5-82bf-cca72d91274c
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout1.w1.samsung.com 20210615162855euoutp01ad6e86ce042d5d9a8c1d0cc8b7f986dd~IzlYzcF-E1226212262euoutp01G
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
	s=mail20170921; t=1623774535;
	bh=xlgLokA1E+ZL2R6HWaDfu9TCOox/AjjaAxkG29EvyOY=;
	h=Subject:To:Cc:From:Date:In-Reply-To:References:From;
	b=eXGCngLhbKBWvmZD1q//dCrYCLvpvjCmxVxNpM5R2hkZhZSuHLYtvDq66CpJGuZ3t
	 +vz3mAnsk7oXEfcQSrZKNgm3jjHeiaxibNQPz/dGglAcqD9fcYrhpOs5eMKI9Lc5/z
	 b6NF33kQ0+nbbTLJD9nkxi3x7ulcBsg/5Oruv6pA=
X-AuditID: cbfec7f5-c1bff700000024df-f1-60c8d5467e58
Subject: Re: [PATCH 09/30] mtd_blkdevs: use blk_mq_alloc_disk
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, Denis
	Efremov <efremov@linux.com>, Josef Bacik <josef@toxicpanda.com>, Tim Waugh
	<tim@cyberelk.net>, Geoff Levand <geoff@infradead.org>, Ilya Dryomov
	<idryomov@gmail.com>, "Md. Haris Iqbal" <haris.iqbal@ionos.com>, Jack Wang
	<jinpu.wang@ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang
	<jasowang@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Mike Snitzer
	<snitzer@redhat.com>, Maxim Levitsky <maximlevitsky@gmail.com>, Alex Dubov
	<oakad@yahoo.com>, Miquel Raynal <miquel.raynal@bootlin.com>, Richard
	Weinberger <richard@nod.at>, Vignesh Raghavendra <vigneshr@ti.com>, Heiko
	Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, Christian
	Borntraeger <borntraeger@de.ibm.com>, dm-devel@redhat.com,
	linux-block@vger.kernel.org, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-s390@vger.kernel.org, Bartlomiej Zolnierkiewicz
	<b.zolnierkie@samsung.com>
From: Marek Szyprowski <m.szyprowski@samsung.com>
Message-ID: <7f98a37c-281c-bff6-6126-a65feadcb6ca@samsung.com>
Date: Tue, 15 Jun 2021 18:28:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0)
	Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210615155817.GA31047@lst.de>
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-CMS-MailID: 20210615162854eucas1p12f7975e37d474ac5ffdb532fa21ef58b
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61
References: <20210602065345.355274-1-hch@lst.de>
	<20210602065345.355274-10-hch@lst.de>
	<CGME20210615154746eucas1p1321b6f1cf38d21899632e132cf025e61@eucas1p1.samsung.com>
	<13b21a07-b7c7-37db-fdc9-77bf174b6f8f@samsung.com>
	<20210615155817.GA31047@lst.de>

Hi Christoph,

On 15.06.2021 17:58, Christoph Hellwig wrote:
> On Tue, Jun 15, 2021 at 05:47:44PM +0200, Marek Szyprowski wrote:
>> On 02.06.2021 08:53, Christoph Hellwig wrote:
>>> Use the blk_mq_alloc_disk API to simplify the gendisk and request_queue
>>> allocation.
>>>
>>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>> This patch landed in linux-next as commit 6966bb921def ("mtd_blkdevs:
>> use blk_mq_alloc_disk"). It causes the following regression on my QEMU
>> arm64 setup:
> Please try the patch below:
>
> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
> index 5dc4c966ea73..6ce4bc57f919 100644
> --- a/drivers/mtd/mtd_blkdevs.c
> +++ b/drivers/mtd/mtd_blkdevs.c
> @@ -382,6 +382,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>   	}
>   
>   	new->disk = gd;
> +	new->rq = new->disk->queue;
>   	gd->private_data = new;
>   	gd->major = tr->major;
>   	gd->first_minor = (new->devnum) << tr->part_bits;

Right, this fixes the issue, thanks. Feel free to add:

Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>

Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



From xen-devel-bounces@lists.xenproject.org Tue Jun 15 16:50:06 2021
Subject: Re: [PATCH 3/5] tests/resource: Rework Makefile
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-4-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0478f0b6-b811-c77a-afe6-b893bcb5c0d5@citrix.com>
Date: Tue, 15 Jun 2021 17:49:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615161905.9831-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 15/06/2021 17:19, Andrew Cooper wrote:
> In particular, fill in the install/uninstall rules so this test can be
> packaged to be automated sensibly.
>
> Make all object files depend on the Makefile, and use $(TARGET) when
> appropriate.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> ---
>  tools/tests/resource/Makefile | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
> index da5e2a4f9b..b22eb6fc21 100644
> --- a/tools/tests/resource/Makefile
> +++ b/tools/tests/resource/Makefile
> @@ -16,9 +16,12 @@ distclean: clean
>  
>  .PHONY: install
>  install: all
> +	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
> +	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(LIBEXEC_BIN)
>  
>  .PHONY: uninstall
>  uninstall:
> +	$(RM) -f -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)

I've finally figured out where $(RM) comes from.  It's a GNU Make
default, and has `-f` included.
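
For reference, GNU Make's built-in variable database defines `RM = rm -f`, so a recipe using `$(RM)` already tolerates missing files without an extra `-f` (a minimal illustration, not part of the patch; `$(TARGET)` as used in the series):

```make
# GNU Make's default database already contains:
#   RM = rm -f
# so this clean rule does not fail when the files are absent:
clean:
	$(RM) -- $(TARGET) *.o
```

Running `make -p -f /dev/null` prints the default database, where the `RM = rm -f` entry can be confirmed.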

I've altered this and later patches in the series to take this into account.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 17:01:12 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162842-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162842: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93c5f98296fc78de79d621418a1e62fd413e73d1
X-Osstest-Versions-That:
    xen=8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 17:01:06 +0000

flight 162842 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162842/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93c5f98296fc78de79d621418a1e62fd413e73d1
baseline version:
 xen                  8c9ed863738ff9e8b91975d6aa4464e7e8324eb7

Last test of basis   162811  2021-06-14 17:03:29 Z    0 days
Testing same since   162842  2021-06-15 14:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c9ed86373..93c5f98296  93c5f98296fc78de79d621418a1e62fd413e73d1 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 17:12:44 2021
Subject: Re: PING Re: [XEN PATCH] libs/foreignmemory: Fix
 osdep_xenforeignmemory_map prototype
From: Julien Grall <julien@xen.org>
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20210601154147.55799-1-anthony.perard@citrix.com>
 <a5d4f4ae-21b9-9798-5501-2c288a70e7b4@suse.com>
 <d80b6904-ff5c-33d6-b0e3-6882fe3e8e89@xen.org>
Message-ID: <1eed05e4-958c-c724-67dc-266cc0011c94@xen.org>
Date: Tue, 15 Jun 2021 19:12:30 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <d80b6904-ff5c-33d6-b0e3-6882fe3e8e89@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 15/06/2021 15:31, Julien Grall wrote:
> Hi Ian & Wei,
> 
> On 02/06/2021 10:25, Jan Beulich wrote:
>> On 01.06.2021 17:41, Anthony PERARD wrote:
>>> Commit cf8c4d3d13b8 made some preparation to have one day
>>> variable-length-array argument, but didn't declare the array in the
>>> function prototype the same way as in the function definition. And now
>>> GCC 11 complains about it.
>>>
>>> Fixes: cf8c4d3d13b8 ("tools/libs/foreignmemory: pull array length 
>>> argument to map forward")
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> Ian - this (or whichever alternative might be chosen to address gcc11's
>> valid complaint) also will want backporting.
> 
> I was about to commit this patch and noticed that there is still an 
> ack missing from the tools maintainers. @Ian, @Wei, can you provide one?

 From an IRC discussion, Ian was happy to follow the rule:

"Commit something without the necessary acks if you're reasonably 
certain that the maintainer would have approved it"

This doesn't seem to be part of MAINTAINERS, but I will use it for this 
patch as Ian gave his assent.

This is committed now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 17:38:03 2021
Subject: Re: [PATCH 5/5] tests/xenstore: Rework Makefile
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-6-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1b786e0d-bdb5-9d10-15ef-688b09fedbd5@citrix.com>
Date: Tue, 15 Jun 2021 18:37:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615161905.9831-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0297.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:196::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 464c26b4-9622-4759-a219-08d930244a55
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6270:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB627078AB7FCB8C8A69E9A5BFBA309@SJ0PR03MB6270.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 464c26b4-9622-4759-a219-08d930244a55
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2021 17:37:47.5466
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6270
X-OriginatorOrg: citrix.com

On 15/06/2021 17:19, Andrew Cooper wrote:
> In particular, fill in the install/uninstall rules so this test can be
> packaged to be automated sensibly.
>
> Rename xs-test to test-xenstore to be consistent with other tests.  Honour
> APPEND_FLAGS too.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

It turns out that the absence of an install rule previously meant that
this wasn't covered by CI.

On newer GCCs it triggers:

  test-xenstore.c:486:5: error: ignoring return value of 'asprintf',
  declared with attribute warn_unused_result [-Werror=unused-result]

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 19:14:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 19:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142369.262703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltEW6-0006rQ-E9; Tue, 15 Jun 2021 19:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142369.262703; Tue, 15 Jun 2021 19:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltEW6-0006rJ-Ax; Tue, 15 Jun 2021 19:14:30 +0000
Received: by outflank-mailman (input) for mailman id 142369;
 Tue, 15 Jun 2021 19:14:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEW4-0006r9-SN; Tue, 15 Jun 2021 19:14:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEW4-0001oi-KY; Tue, 15 Jun 2021 19:14:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEW4-0003qo-9X; Tue, 15 Jun 2021 19:14:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEW4-0006ML-91; Tue, 15 Jun 2021 19:14:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7PF8etcj96viilqbYkYXRiMHyh1rm8MSYkEM1YAz6HU=; b=ihpLKXyzEBE+jCFVrvLUfknisa
	v9gXPzE6GrC5M+9vcm0n2FL5SiH6Rjztwx+FuWOZDjdReAIJVWnDEBFG+CcAqpcha4jIjDJqn+pPJ
	HDzO/Qh/NFH5IntmnTGQx9BNAbywD7ROAHoMytkVzN1NwMJOlS0b/5N8oVwHDZb131RY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162835-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162835: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 19:14:28 +0000

flight 162835 xen-unstable real [real]
flight 162843 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162835/
http://logs.test-lab.xenproject.org/osstest/logs/162843/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    7 days
Failing since        162556  2021-06-08 22:39:08 Z    6 days   10 attempts
Testing same since   162835  2021-06-15 06:47:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 826 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 19:34:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 19:34:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142377.262717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltEpA-0000io-3o; Tue, 15 Jun 2021 19:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142377.262717; Tue, 15 Jun 2021 19:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltEpA-0000ih-0m; Tue, 15 Jun 2021 19:34:12 +0000
Received: by outflank-mailman (input) for mailman id 142377;
 Tue, 15 Jun 2021 19:34:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEp8-0000iW-Tn; Tue, 15 Jun 2021 19:34:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEp8-000297-Fo; Tue, 15 Jun 2021 19:34:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEp8-0004dZ-70; Tue, 15 Jun 2021 19:34:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltEp8-0004M7-6S; Tue, 15 Jun 2021 19:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7QT8hYcYzY6mOw+mUgwramD/Mr0crrIbvXOVkkzvXdw=; b=G9E4/akKFe1/D/Pb1BBXxGJyLC
	Bvbi/WXwZ2jdP59iZzw0RHBkbuK7GG8zlCrl4phz+w6CYQujVO8iRkql8i4S2Jx0xdGhlemgcivu7
	sWDCpPmIUd6i2XLECZJ8A+cXTmiONANd2kksBO6BeM4khke1EDpv8UYy2sDXiurMtu5w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162841: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 19:34:10 +0000

flight 162841 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   11 days
Failing since        162368  2021-06-04 15:42:59 Z   11 days   23 attempts
Testing same since   162841  2021-06-15 13:41:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1779 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 20:12:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 20:12:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142390.262750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltFQ0-0004x9-Ez; Tue, 15 Jun 2021 20:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142390.262750; Tue, 15 Jun 2021 20:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltFQ0-0004x2-Bt; Tue, 15 Jun 2021 20:12:16 +0000
Received: by outflank-mailman (input) for mailman id 142390;
 Tue, 15 Jun 2021 20:12:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFPy-0004ws-JE; Tue, 15 Jun 2021 20:12:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFPy-0002qU-6b; Tue, 15 Jun 2021 20:12:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFPx-0005io-ST; Tue, 15 Jun 2021 20:12:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFPx-0000RV-Ry; Tue, 15 Jun 2021 20:12:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=89H9Sj6D4TH6YiI5J75jXUwtEojAvN52UZ9G1vK8BqY=; b=zVzQrBgTF1YsGCVMcaEOiASCTu
	y92MfSGJUVxVcHggnWulZZ/3Yfgu/O9Bm85UNOOqSRcmZa3Ixtl6oP5o+2JnIcdnnoFa/mMmlw38H
	lYFx7PKx8k4QxC/mQD8/zwI2GEMiG19CEmiw2Yq6afveSHkT3jjH4KU4Y7IRtOfBb4co=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162844-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162844: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5d3e4ebb5c71477d74a0c503438545a0126d3863
X-Osstest-Versions-That:
    xen=93c5f98296fc78de79d621418a1e62fd413e73d1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 20:12:13 +0000

flight 162844 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162844/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5d3e4ebb5c71477d74a0c503438545a0126d3863
baseline version:
 xen                  93c5f98296fc78de79d621418a1e62fd413e73d1

Last test of basis   162842  2021-06-15 14:00:28 Z    0 days
Testing same since   162844  2021-06-15 18:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93c5f98296..5d3e4ebb5c  5d3e4ebb5c71477d74a0c503438545a0126d3863 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 20:31:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 20:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142420.262783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltFis-0008VG-TJ; Tue, 15 Jun 2021 20:31:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142420.262783; Tue, 15 Jun 2021 20:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltFis-0008V9-Pj; Tue, 15 Jun 2021 20:31:46 +0000
Received: by outflank-mailman (input) for mailman id 142420;
 Tue, 15 Jun 2021 20:31:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFir-0008Uz-3Z; Tue, 15 Jun 2021 20:31:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFiq-0003BR-Rv; Tue, 15 Jun 2021 20:31:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFiq-0006FG-Ii; Tue, 15 Jun 2021 20:31:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltFiq-0002lb-IE; Tue, 15 Jun 2021 20:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=k35FUHGlMlrhdIKUTWluNf+7QGvrxKYgw+RWqbkXa6o=; b=09sqFQ5xKEnpBJvNtW03QW2u3g
	CCqsKDboV6P+hcV5ZGJZU+o62g4syw/B9YES4n3NEL2gUjtF/0LOeVgylv4FFVumdn3lx2MoY5hMk
	L9JtcBESJFLBGWHXt95DSmF0tt8KxIDeCFRmi6k+YLeK9v+l8jY0MlAf6FYLkWfquoF4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162838-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162838: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=009c9aa5be652675a06d5211e1640e02bbb1c33d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 20:31:44 +0000

flight 162838 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162838/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 162812 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail in 162812 pass in 162838
 test-arm64-arm64-xl-credit1  13 debian-fixup               fail pass in 162812

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                009c9aa5be652675a06d5211e1640e02bbb1c33d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  319 days
Failing since        152366  2020-08-01 20:49:34 Z  317 days  542 attempts
Testing same since   162793  2021-06-14 03:59:52 Z    1 days    3 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680413 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 21:26:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 21:26:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142428.262797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltGZk-0004yT-3Z; Tue, 15 Jun 2021 21:26:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142428.262797; Tue, 15 Jun 2021 21:26:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltGZk-0004yM-0Q; Tue, 15 Jun 2021 21:26:24 +0000
Received: by outflank-mailman (input) for mailman id 142428;
 Tue, 15 Jun 2021 21:26:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nfts=LJ=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltGZj-0004yD-Fu
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 21:26:23 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74e8cbc4-8f7d-451a-a1d5-6e848a20dcf8;
 Tue, 15 Jun 2021 21:26:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5FLQFqdO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 15 Jun 2021 23:26:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74e8cbc4-8f7d-451a-a1d5-6e848a20dcf8
ARC-Seal: i=1; a=rsa-sha256; t=1623792375; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=YCj6V2bBvLybAgGly3kStX/G47iASckJmeoDFE0l14Jryi9T9YWS5gGHudtDH2j49O
    3f1R5TjIdwjzPWcWw3tEzEzc3EOAFBwCElBaoWk72PgVDSYqksmb3Xh+QwBbIfMtLCQa
    7B2zHboTBq9UgdG0i4Tge2MZeZnMhesaU8TQdPnBXCH4ILStRk0KGVJdNkAitGiiKlVM
    vFKly/lD7hlLsQqJWa6l/eIp+vrcxIMH4KmlzgIIs7ApPZpS3ZyXtk22xvS8dzRstoe+
    S5ZNfheA3akCX0MjYjX6EUTpP9cJZleKPfehoThIfOYD2eSJjKmCv/AGYfKRrHZhPvE+
    yOgQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623792375;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=0k8SVgqxgX6+F1SCHObvluVDx8AaoJl27Fziwhwu2Ig=;
    b=RS92+QSqFLrBnzswgKCnqL7Z8DodUma601iK1TamfX56pIEYQSRql3pMwu2mprfPA+
    SDyyVizTqVZBS7xqEgMAJhEfcz9R/BGTJZGMEOkEyILfBNdYf4soIuTvfMKvNpJMblY9
    T2k8wIgN86iDP47WJrvmYMlOxKNu1TLH9+nuLUJPyDUmFY1cH6YvZAP1TIxnH0TCa4/3
    32fWBmmOud7w3IN4cGKAG4Nae7eHyrNYqcRhV9qFizg5iIu0AqKye3SgjeKvkEZ7rIcf
    6eu2Jh+dQfOgBqr9UTQIJau3OXA2N4YJ4yOityJGAxc4k1JSpvTfPfSi/wWXwx4xMSBs
    Q+JA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623792375;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=0k8SVgqxgX6+F1SCHObvluVDx8AaoJl27Fziwhwu2Ig=;
    b=Djy5DLMQfV8yAkzlTiYKdrisy14Xf81pUjMCWaPlNk3+JIG0F8RjkOqt452PXHktMB
    NUwabTMLH9wJXNu5ddlZvyXaqfNCseC7+Old3pVXryoxgftbVMalbbM/1ARPW7cF8Ube
    9OlT6sdZBDvEI7AYhpQLfcdZWv3WVuQKhYHeEZSgE9cTqOEz6MRKmtoo2hEO2VAmzdN6
    Wd/1D+jM0wngxU6EwY8VfQMvzY1gArANiceU7ti3tCqCNt8mjvyJ8MPqXkEGM4F0/JuJ
    dNnFo3c7kZL55PBIqBm51zjoGoprYul7RJCu03SUB0CJNx273wlQCGbnhLmsB94JeLAE
    nC+g==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: ipxe: update for fixing build with GCC11
Date: Tue, 15 Jun 2021 23:26:13 +0200
Message-Id: <20210615212613.6270-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use a snapshot which includes commit
f3f568e382a5f19824b3bfc6081cde39eee661e8 ("[crypto] Add
memory output constraints for big-integer inline assembly"),
which fixes the build with GCC 11.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/firmware/etherboot/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
index ed9e11305f..23b3f6ca9d 100644
--- a/tools/firmware/etherboot/Makefile
+++ b/tools/firmware/etherboot/Makefile
@@ -10,7 +10,8 @@ else
 IPXE_GIT_URL ?= git://git.ipxe.org/ipxe.git
 endif
 
-IPXE_GIT_TAG := 988d2c13cdf0f0b4140685af35ced70ac5b3283c
+# put an updated tar.gz on xenbits after changes to this variable
+IPXE_GIT_TAG := bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e
 
 IPXE_TARBALL_URL ?= $(XEN_EXTFILES_URL)/ipxe-git-$(IPXE_GIT_TAG).tar.gz
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 22:23:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 22:23:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142435.262808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltHSU-00029z-Av; Tue, 15 Jun 2021 22:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142435.262808; Tue, 15 Jun 2021 22:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltHSU-00029s-7t; Tue, 15 Jun 2021 22:22:58 +0000
Received: by outflank-mailman (input) for mailman id 142435;
 Tue, 15 Jun 2021 22:22:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hleM=LJ=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1ltHSS-00029m-Ur
 for xen-devel@lists.xenproject.org; Tue, 15 Jun 2021 22:22:57 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d30e4e0d-1c0c-417a-8ac0-9147846e8115;
 Tue, 15 Jun 2021 22:22:55 +0000 (UTC)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15FMCFLr031533; Tue, 15 Jun 2021 22:22:02 GMT
Received: from oracle.com (userp3030.oracle.com [156.151.31.80])
 by mx0b-00069f02.pphosted.com with ESMTP id 395x06hgrj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 15 Jun 2021 22:22:01 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15FMM0fF177365;
 Tue, 15 Jun 2021 22:22:00 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2107.outbound.protection.outlook.com [104.47.70.107])
 by userp3030.oracle.com with ESMTP id 396wan2uq2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 15 Jun 2021 22:21:59 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BY5PR10MB3764.namprd10.prod.outlook.com (2603:10b6:a03:1f9::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.23; Tue, 15 Jun
 2021 22:21:56 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4219.025; Tue, 15 Jun 2021
 22:21:56 +0000
Received: from char.us.oracle.com (138.3.200.0) by
 SN4PR0801CA0019.namprd08.prod.outlook.com (2603:10b6:803:29::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16 via Frontend
 Transport; Tue, 15 Jun 2021 22:21:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d30e4e0d-1c0c-417a-8ac0-9147846e8115
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=Ewm2kPzkQXJo0QWHDfmpQUI+ONPvdfpRUyKyuic890s=;
 b=p9mJd6PS1dtlQ8F8vvfEP4dxRcC4JhBZvdxEX50hBkUl4ZBanGC491pLxBqW0zTtC50m
 ZQ/StQp8s6rBTSRosKg2cxueparkh2/RpFFNOfksGyhoM5jZ8OKm/DG4x5QrATvxL+pY
 Y+jIy8ndjCNyy88sS4ZxWhcmaac2dbS3SkDcYKI2IpIpXfRS9RSxpaTbhCY+huHW6r7J
 NwK4MluAUpAwvCgShkDepUawqfpj+/0vCn6GsbAxfpfyOa6IqYJRhXhsndYDpx3dzrXL
 Yr4aMuQtBskxrjSN+rbUpgxesyWRaezjaxefbEza9e+RWZW1ApDn0Ike2qvilpmpc70P HQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CjD5IZWjw6xLx8cXH9X5sGRYBkzibgo9DwaaGCnuTRBXFQ7gzbZI6TG2wl+rZpYDyMuAQA9SIbHVTSmC4erudeIrngYgfp5tQU28Tfn67E4k9k0i9LUdKJ/YxaEUrVwCSpKDd8D0dC1ruLWvFJIvWwxrje6QpMFzQIDsLeSiEFOjC433NUjQBRFvaY1X3iHg1AMRiVV6bnNQhV0T3S/51709QW3eePX3MFNCZrtznBZt8F9Gv27JYlgPX/NMlAlUdFkGvisf7JsUp/anlNiSpPp5QDB/uM2uqXyiroBd9UhPQVlKeBAGEmFwcMwsMwCPm80KOI43LpD7aLKS5lvDBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ewm2kPzkQXJo0QWHDfmpQUI+ONPvdfpRUyKyuic890s=;
 b=QvY2wxrghSsAxqlgUAUPoHUiS/FaHfr7Ky8yvkaFgP/AcgAEn1W6GEzBVyGpXg9lnu1Tx062pdqsgDm7p8dA2RCGNre8aOhPBh0OcAf6/ca5/WxFjjvMa6PBlJGnNwIm17nKu8HfQxkaa5ZuzYAwfQe2NsmFCvRjF1KdFy2EYmRbLWTUrKe3Pegb1J7B3oqmRe64wfYecYOmo+m95BeV283pI3YdeDGXBoBG7F1kwXEuCkb9pm+ACWXghBV/zg330hlCNsZfc6iZ5xQaFYDC3QHk6Lck7x3NALD+XbEF1kIQgIODeHtJ+CmA7p/FcspsjA7XY7VOoRiDek05y5mReQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ewm2kPzkQXJo0QWHDfmpQUI+ONPvdfpRUyKyuic890s=;
 b=rzJh0+Ud8oWUbFWOJwf8IhtTX7ACTfqPmY5tDMMLcBrbhfKjzDVD7LQ4nZWiAm8Y+RBdXJfWZdRu3F6fXqDSqOWYweNGugnPdGDds+LOb8x/XvqwDEIO8f7dbNWQXyd1oUpNfnxZXFPU6/2aDaLLozF71KvVy0frh2J1jBCGVWA=
Authentication-Results: chromium.org; dkim=none (message not signed)
 header.d=none;chromium.org; dmarc=none action=none header.from=oracle.com;
Date: Tue, 15 Jun 2021 18:21:41 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
        Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
        jgross@suse.com, Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
        bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
        intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
        jxgao@google.com, joonas.lahtinen@linux.intel.com,
        linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
        matthew.auld@intel.com, rodrigo.vivi@intel.com,
        thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v10 03/12] swiotlb: Set dev->dma_io_tlb_mem to the
 swiotlb pool used
Message-ID: <YMkn9YIJqKxelVRI@char.us.oracle.com>
References: <20210615132711.553451-1-tientzu@chromium.org>
 <20210615132711.553451-4-tientzu@chromium.org>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210615132711.553451-4-tientzu@chromium.org>
X-Originating-IP: [138.3.200.0]
X-ClientProxiedBy: SN4PR0801CA0019.namprd08.prod.outlook.com
 (2603:10b6:803:29::29) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a84e40bf-de30-4457-be50-08d9304bfc37
X-MS-TrafficTypeDiagnostic: BY5PR10MB3764:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB3764EA085563323CCCF5D9FF89309@BY5PR10MB3764.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a84e40bf-de30-4457-be50-08d9304bfc37
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2021 22:21:56.4536
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gqDAXHWgJql7g0jEA30LQdjzpX2coNygHrx1GTLbl89x8Z5+jOR2M1o7u9p+TRdkzzgjCPG584j1WtU+wXlolA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB3764
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10016 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 suspectscore=0
 mlxlogscore=999 spamscore=0 adultscore=0 bulkscore=0 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106150137
X-Proofpoint-ORIG-GUID: vAX6Oun_7Zlja7oqlv8C5OTIAkc2TKkM
X-Proofpoint-GUID: vAX6Oun_7Zlja7oqlv8C5OTIAkc2TKkM

On Tue, Jun 15, 2021 at 09:27:02PM +0800, Claire Chang wrote:
> Always have the pointer to the swiotlb pool used in struct device. This
> could help simplify the code for other pools.

Applying: swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
error: patch failed: kernel/dma/swiotlb.c:339
error: kernel/dma/swiotlb.c: patch does not apply
..

Would you be OK rebasing this against devel/for-linus-5.14 please?
(And please send out with the Reviewed-by from Christopher)

Thank you!
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  drivers/base/core.c    | 4 ++++
>  include/linux/device.h | 4 ++++
>  kernel/dma/swiotlb.c   | 8 ++++----
>  3 files changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index b8a8c96dca58..eeb2d49d3aa3 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -27,6 +27,7 @@
>  #include <linux/netdevice.h>
>  #include <linux/sched/signal.h>
>  #include <linux/sched/mm.h>
> +#include <linux/swiotlb.h>
>  #include <linux/sysfs.h>
>  #include <linux/dma-map-ops.h> /* for dma_default_coherent */
>  
> @@ -2846,6 +2847,9 @@ void device_initialize(struct device *dev)
>      defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
>  	dev->dma_coherent = dma_default_coherent;
>  #endif
> +#ifdef CONFIG_SWIOTLB
> +	dev->dma_io_tlb_mem = io_tlb_default_mem;
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(device_initialize);
>  
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 4443e12238a0..2e9a378c9100 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -432,6 +432,7 @@ struct dev_links_info {
>   * @dma_pools:	Dma pools (if dma'ble device).
>   * @dma_mem:	Internal for coherent mem override.
>   * @cma_area:	Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
>   * @archdata:	For arch-specific additions.
>   * @of_node:	Associated device tree node.
>   * @fwnode:	Associated device node supplied by platform firmware.
> @@ -540,6 +541,9 @@ struct device {
>  #ifdef CONFIG_DMA_CMA
>  	struct cma *cma_area;		/* contiguous memory area for dma
>  					   allocations */
> +#endif
> +#ifdef CONFIG_SWIOTLB
> +	struct io_tlb_mem *dma_io_tlb_mem;
>  #endif
>  	/* arch specific additions */
>  	struct dev_archdata	archdata;
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 97c6ad50fdc2..949a6bb21343 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -339,7 +339,7 @@ void __init swiotlb_exit(void)
>  static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
>  			   enum dma_data_direction dir)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
>  	phys_addr_t orig_addr = mem->slots[index].orig_addr;
>  	size_t alloc_size = mem->slots[index].alloc_size;
> @@ -421,7 +421,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
>  static int find_slots(struct device *dev, phys_addr_t orig_addr,
>  		size_t alloc_size)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned long boundary_mask = dma_get_seg_boundary(dev);
>  	dma_addr_t tbl_dma_addr =
>  		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
> @@ -498,7 +498,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  		size_t mapping_size, size_t alloc_size,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int i;
>  	int index;
> @@ -549,7 +549,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>  			      size_t mapping_size, enum dma_data_direction dir,
>  			      unsigned long attrs)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
>  	unsigned long flags;
>  	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
>  	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> -- 
> 2.32.0.272.g935e593368-goog
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 15 23:32:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Jun 2021 23:32:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142443.262819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltIXM-0000Fw-Do; Tue, 15 Jun 2021 23:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142443.262819; Tue, 15 Jun 2021 23:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltIXM-0000Fo-AY; Tue, 15 Jun 2021 23:32:04 +0000
Received: by outflank-mailman (input) for mailman id 142443;
 Tue, 15 Jun 2021 23:32:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltIXL-0000Fe-CE; Tue, 15 Jun 2021 23:32:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltIXL-0006A6-1s; Tue, 15 Jun 2021 23:32:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltIXK-0004dw-NH; Tue, 15 Jun 2021 23:32:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltIXK-0004Dg-MM; Tue, 15 Jun 2021 23:32:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lN90fEenkghTia7zzPqWLuUuo7Od45eq1iIhHASOS8s=; b=JaUs0IwmBiry36BZlbhXv8f5gG
	J8kFT9n1xLq8tJHg2ExHlmQog9HARmvX7Kd1bbSncqqvl7xYHMzd/wyqyGdZUNYICV4bZsR0jFzkh
	jBUXoiD7AUT8k+vW8DZ3zYxlmne8gR1DhoysYExUGkxZ9VE4U+0DEcY9CNkTm1gSzeUU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162848-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162848: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
X-Osstest-Versions-That:
    xen=5d3e4ebb5c71477d74a0c503438545a0126d3863
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Jun 2021 23:32:02 +0000

flight 162848 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162848/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac
baseline version:
 xen                  5d3e4ebb5c71477d74a0c503438545a0126d3863

Last test of basis   162844  2021-06-15 18:00:25 Z    0 days
Testing same since   162848  2021-06-15 21:01:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5d3e4ebb5c..4bcf6433ee  4bcf6433eed3d9cbc00865ec62380a33ca832dac -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 00:41:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 00:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142451.262833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltJc2-0007HP-B3; Wed, 16 Jun 2021 00:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142451.262833; Wed, 16 Jun 2021 00:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltJc2-0007HI-7p; Wed, 16 Jun 2021 00:40:58 +0000
Received: by outflank-mailman (input) for mailman id 142451;
 Wed, 16 Jun 2021 00:40:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltJc0-0007H8-K4; Wed, 16 Jun 2021 00:40:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltJc0-0007sl-AY; Wed, 16 Jun 2021 00:40:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltJbz-0001W4-VB; Wed, 16 Jun 2021 00:40:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltJbz-000134-Uh; Wed, 16 Jun 2021 00:40:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WlfprEZgksSXMvgns4aUcMQgwGBGE1IAkLJ8MdZSG9w=; b=Ynl1MaBA2fNPGgqTrYD05IyiA3
	4rD7YByQ0Q2z+sfY/RLmvEGgfFpErSny6X+1qZiaevWoeIEhkDLoHK1oJhE4q9BejPXDa2vtPezxw
	IKcH8u4GyMctceMALRhDaY6oRx4TqQjj+qfIHdIUGVK+WLURs7xV3ok3p7HZG4yUfvFs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162840: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 00:40:55 +0000

flight 162840 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162840/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  299 days
Failing since        152659  2020-08-21 14:07:39 Z  298 days  551 attempts
Testing same since   162818  2021-06-14 22:38:18 Z    1 days    2 attempts

------------------------------------------------------------
532 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 171564 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 01:20:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 01:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142460.262850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltKE2-00005H-FX; Wed, 16 Jun 2021 01:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142460.262850; Wed, 16 Jun 2021 01:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltKE2-000059-CJ; Wed, 16 Jun 2021 01:20:14 +0000
Received: by outflank-mailman (input) for mailman id 142460;
 Wed, 16 Jun 2021 01:20:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GKl/=LK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ltKE0-000050-R8
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 01:20:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88d464dd-fd23-490b-8518-6ddc70483d5b;
 Wed, 16 Jun 2021 01:20:11 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6847A610A0;
 Wed, 16 Jun 2021 01:20:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88d464dd-fd23-490b-8518-6ddc70483d5b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623806410;
	bh=F0L0ERwsmDV2ns2rbfMEpBp38QMOCUNuZRDTxgEGh7M=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bAfZzBAS6ghN/56Zj5zLcQfFtVsZT0dNNhxKRO0cjg7Kr3Oi5wxmbjdgPlZgdjdxI
	 NW6xPCoFpFRCEkhrrJMP+cAMkPHM1v6M84pj7T4Os5r7k2vep9qCTpnc+TJ4mHl0IG
	 zQi0FUffgLquVaG0bUfdD3yHcgNflEjOtPfQj3zAOy4dZc1NfPBWiOtzeow62UcYZt
	 1+9gngID0bW9+L5mWBNRqGoWgQ/goS+MCPjrHonPBlG6NXhzsxKDm5Sf2IOT+4reIo
	 gE6LO6KM3V1iz03zjNjWVGvxyqx/wODT02Yy6DPlE5YXifUMkpaNyr0Bve8mExWmgA
	 1AnAXvKWwzU9g==
Date: Tue, 15 Jun 2021 18:20:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, 
    "fnuv@xilinx.com" <fnuv@xilinx.com>
Subject: Re: smmuv1 breakage
In-Reply-To: <791BFC00-6A50-48D2-A208-E529B887441F@arm.com>
Message-ID: <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s> <791BFC00-6A50-48D2-A208-E529B887441F@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-468115872-1623805343=:24906"
Content-ID: <alpine.DEB.2.21.2106151805320.24906@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-468115872-1623805343=:24906
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2106151805321.24906@sstabellini-ThinkPad-T480s>

On Tue, 15 Jun 2021, Rahul Singh wrote:
> Hi Stefano
> 
> > On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > Hi Rahul,
> > 
> > Unfortunately, after bisecting, I discovered a few more breakages due to
> > your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
> > attached the DTB for reference. Please note that I made sure to
> > cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
> > the S2CR" during bisection, so the errors are also present on staging.
> > 
> > The first breakage is an error at boot time in smmu.c#find_smmu_master,
> > see log1. I think it is because the old smmu driver is unable to parse
> > the new smmu bindings.
> > 
> > After removing all the "smmus" and "#stream-id-cells" properties in the
> > device tree, I get past the previous error and everything seems to be OK at
> > early boot, but I actually get SMMU errors as soon as dom0 starts
> > using devices:
> > 
> > (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
> > (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
> 
>  This fault is an "Unidentified stream fault" for StreamID 0x877, which means the SMMU SMR is not configured for StreamID 0x877.
> 
> 
> > [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> > 
> > Do you think you'll be able to help fix them?
> > 
> > 
> > You should be able to reproduce the two issues using Xilinx QEMU (but to
> > be honest I haven't tested it on QEMU yet, I was testing on real
> > hardware):
> > - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
> >  ./configure  --target-list=aarch64-softmmu
> >  make
> > - clone and build git://github.com/Xilinx/qemu-devicetrees.git
> > - use the attached script to run it
> >    - kernel can be upstream defconfig 5.10
> > 
> 
> I tried to reproduce the issue on Xilinx QEMU following the steps shared
> above, but I am not observing the issue there.

I tried on QEMU and it doesn't reproduce. I cannot explain why it works
on QEMU but fails on real hardware.


> I also tested and confirmed on QEMU that the SMMU is configured correctly
> for StreamID 0x877 specifically, as well as for the other StreamIDs.
> 
> I checked the xen.dtb you shared and found that there is no "#stream-id-cells"
> property in the master device, although the "mmu-masters" property is present in
> the smmu node. For the legacy smmu binding we need both "#stream-id-cells" and "mmu-masters".
> If you need to use the new smmu binding instead, please add the "#iommu-cells"
> property in the smmu node and the "iommus" property in the master device.
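As a rough illustration of the two alternative binding styles described above (the node names, unit addresses, and labels below are made up; only the 0x877 StreamID comes from this thread, and the two smmu fragments are alternatives, not meant to coexist in one tree):

```dts
/* Legacy binding: the smmu node lists its masters via "mmu-masters",
 * and each master node carries "#stream-id-cells". */
smmu: smmu@fd800000 {
        compatible = "arm,mmu-500";
        mmu-masters = <&eth0 0x877>;
};

eth0: ethernet@ff0e0000 {
        #stream-id-cells = <1>;
};

/* New (generic IOMMU) binding: the smmu node carries "#iommu-cells",
 * and each master references it via an "iommus" property instead. */
smmu2: smmu@fd800000 {
        compatible = "arm,mmu-500";
        #iommu-cells = <1>;
};

eth1: ethernet@ff0e0000 {
        iommus = <&smmu2 0x877>;
};
```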

Regarding the missing "#stream-id-cells" property: I shared the wrong
dtb before, sorry. I was running a number of tests and I might have
picked the wrong file. The proper dtb does include "#stream-id-cells" for
the 0x877 device; see attached.



> Can you please share the xen boot logs with me so that I can debug further why the error is observed? 

See attached. I did some debugging and discovered that it crashes while
accessing master->of_node in find_smmu_master. If I revert your series,
the crash goes away. It is very strange because your patches don't touch
find_smmu_master or insert_smmu_master directly.

I did a git reset --hard on the commit "xen/arm: smmuv1: Add a stream
map entry iterator" and it worked, which points to "xen/arm: smmuv1:
Intelligent SMR allocation" being the problem, even though I have the revert
cherry-picked on top. Maybe the revert is not reverting enough?

After this test, I switched back to staging and did:
git revert 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
git revert 0435784cc75dcfef3b5f59c29deb1dbb84265ddb

And it worked! So the issue truly is that
9f6cd4983715cb31f0ea540e6bbb63f799a35d8a doesn't revert "enough".
See "full-revert" for the patch reverting the remaining code. That on
top of staging fixes boot for me.
--8323329-468115872-1623805343=:24906
Content-Type: text/plain; NAME=konsole.txt
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106151802230.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=konsole.txt

U3RhcnRpbmcga2VybmVsIC4uLg0KDQogWGVuIDQuMTYtdW5zdGFibGUNCihY
RU4pIFhlbiB2ZXJzaW9uIDQuMTYtdW5zdGFibGUgKHNzdGFiZWxsaW5pQCkg
KGFhcmNoNjQtbGludXgtZ251LWdjYyAoTGluYXJvIEdDQyA1LjMtMjAxNi4w
NSkgNS4zLjEgMjAxNjA0MTIpIGRlYnVnPXkgVHVlIEp1biAxNSAxNzo1Njow
MyBQRFQgMjAyMQ0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogRnJpIEFwciAx
NiAxMjoyNTowMiAyMDIxICswMTAwIGdpdDozZTYwNDdkZGZhDQooWEVOKSBi
dWlsZC1pZDogNzdiMzdlNTQzMzYwMzUyOTIwYWY0YzQ1N2ZjNWMyOTAwNDEz
OTM1Nw0KKFhFTikgUHJvY2Vzc29yOiAwMDAwMDAwMDQxMGZkMDM0OiAiQVJN
IExpbWl0ZWQiLCB2YXJpYW50OiAweDAsIHBhcnQgMHhkMDMscmV2IDB4NA0K
KFhFTikgNjQtYml0IEV4ZWN1dGlvbjoNCihYRU4pICAgUHJvY2Vzc29yIEZl
YXR1cmVzOiAwMDAwMDAwMDAwMDAyMjIyIDAwMDAwMDAwMDAwMDAwMDANCihY
RU4pICAgICBFeGNlcHRpb24gTGV2ZWxzOiBFTDM6NjQrMzIgRUwyOjY0KzMy
IEVMMTo2NCszMiBFTDA6NjQrMzINCihYRU4pICAgICBFeHRlbnNpb25zOiBG
bG9hdGluZ1BvaW50IEFkdmFuY2VkU0lNRA0KKFhFTikgICBEZWJ1ZyBGZWF0
dXJlczogMDAwMDAwMDAxMDMwNTEwNiAwMDAwMDAwMDAwMDAwMDAwDQooWEVO
KSAgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwDQooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczog
MDAwMDAwMDAwMDAwMTEyMiAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgIElT
QSBGZWF0dXJlczogIDAwMDAwMDAwMDAwMTExMjAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgMzItYml0IEV4ZWN1dGlvbjoNCihYRU4pICAgUHJvY2Vzc29y
IEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAwMTMxOjAwMDAwMDAwMDAwMTEwMTEN
CihYRU4pICAgICBJbnN0cnVjdGlvbiBTZXRzOiBBQXJjaDMyIEEzMiBUaHVt
YiBUaHVtYi0yIEphemVsbGUNCihYRU4pICAgICBFeHRlbnNpb25zOiBHZW5l
cmljVGltZXIgU2VjdXJpdHkNCihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAw
MDAwMDAwMDMwMTAwNjYNCihYRU4pICAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAw
MDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJl
czogMDAwMDAwMDAxMDIwMTEwNSAwMDAwMDAwMDQwMDAwMDAwDQooWEVOKSAg
ICAgICAgICAgICAgICAgICAgICAgICAgMDAwMDAwMDAwMTI2MDAwMCAwMDAw
MDAwMDAyMTAyMjExDQooWEVOKSAgIElTQSBGZWF0dXJlczogMDAwMDAwMDAw
MjEwMTExMCAwMDAwMDAwMDEzMTEyMTExIDAwMDAwMDAwMjEyMzIwNDINCihY
RU4pICAgICAgICAgICAgICAgICAwMDAwMDAwMDAxMTEyMTMxIDAwMDAwMDAw
MDAwMTExNDIgMDAwMDAwMDAwMDAxMTEyMQ0KKFhFTikgVXNpbmcgU01DIENh
bGxpbmcgQ29udmVudGlvbiB2MS4xDQooWEVOKSBVc2luZyBQU0NJIHYxLjEN
CihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzDQooWEVOKSBHZW5lcmljIFRp
bWVyIElSUTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVxOiA5OTk5MCBL
SHoNCihYRU4pIEdJQ3YyIGluaXRpYWxpemF0aW9uOg0KKFhFTikgICAgICAg
ICBnaWNfZGlzdF9hZGRyPTAwMDAwMDAwZjkwMTAwMDANCihYRU4pICAgICAg
ICAgZ2ljX2NwdV9hZGRyPTAwMDAwMDAwZjkwMjAwMDANCihYRU4pICAgICAg
ICAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAwZjkwNDAwMDANCihYRU4pICAgICAg
ICAgZ2ljX3ZjcHVfYWRkcj0wMDAwMDAwMGY5MDYwMDAwDQooWEVOKSAgICAg
ICAgIGdpY19tYWludGVuYW5jZV9pcnE9MjUNCihYRU4pIEdJQ3YyOiBBZGp1
c3RpbmcgQ1BVIGludGVyZmFjZSBiYXNlIHRvIDB4ZjkwMmYwMDANCihYRU4p
IEdJQ3YyOiAxOTIgbGluZXMsIDQgY3B1cywgc2VjdXJlIChJSUQgMDIwMDE0
M2IpLg0KKFhFTikgWFNNIEZyYW1ld29yayB2MS4wLjAgaW5pdGlhbGl6ZWQN
CihYRU4pIEluaXRpYWxpc2luZyBYU00gU0lMTyBtb2RlDQooWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IG51bGwgU2NoZWR1bGVyIChudWxsKQ0KKFhFTikgSW5p
dGlhbGl6aW5nIG51bGwgc2NoZWR1bGVyDQooWEVOKSBXQVJOSU5HOiBUaGlz
IGlzIGV4cGVyaW1lbnRhbCBzb2Z0d2FyZSBpbiBkZXZlbG9wbWVudC4NCihY
RU4pIFVzZSBhdCB5b3VyIG93biByaXNrLg0KKFhFTikgQWxsb2NhdGVkIGNv
bnNvbGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBDUFUwOiBHdWVzdCBhdG9t
aWNzIHdpbGwgdHJ5IDEyIHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21h
aW4NCihYRU4pIEJyaW5naW5nIHVwIENQVTENCihYRU4pIENQVTE6IEd1ZXN0
IGF0b21pY3Mgd2lsbCB0cnkgMTMgdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhl
IGRvbWFpbg0KKFhFTikgQ1BVIDEgYm9vdGVkLg0KKFhFTikgQnJpbmdpbmcg
dXAgQ1BVMg0KKFhFTikgQ1BVMjogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAx
MyB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluDQooWEVOKSBDUFUg
MiBib290ZWQuDQooWEVOKSBCcmluZ2luZyB1cCBDUFUzDQooWEVOKSBDUFUz
OiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJlZm9yZSBwYXVz
aW5nIHRoZSBkb21haW4NCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVO
KSBDUFUgMyBib290ZWQuDQooWEVOKSBzbW11OiAvc21tdUBmZDgwMDAwMDog
cHJvYmluZyBoYXJkd2FyZSBjb25maWd1cmF0aW9uLi4uDQooWEVOKSBzbW11
OiAvc21tdUBmZDgwMDAwMDogU01NVXYyIHdpdGg6DQooWEVOKSBzbW11OiAv
c21tdUBmZDgwMDAwMDogICAgIHN0YWdlIDIgdHJhbnNsYXRpb24NCihYRU4p
IHNtbXU6IC9zbW11QGZkODAwMDAwOiAgICAgc3RyZWFtIG1hdGNoaW5nIHdp
dGggNDggcmVnaXN0ZXIgZ3JvdXBzLCBtYXNrIDB4N2ZmZjwyPnNtbXU6IC9z
bW11QGZkODAwMDAwOiAgICAxNiBjb250ZXh0IGJhbmtzICgwIHN0YWdlLTIg
b25seSkNCihYRU4pIHNtbXU6IC9zbW11QGZkODAwMDAwOiAgICAgU3RhZ2Ut
MjogNDgtYml0IElQQSAtPiA0OC1iaXQgUEENCihYRU4pIHNtbXU6IC9zbW11
QGZkODAwMDAwOiByZWdpc3RlcmVkIDI2IG1hc3RlciBkZXZpY2VzDQooWEVO
KSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZA0KKFhFTikgIC0gRG9tMCBt
b2RlOiBSZWxheGVkDQooWEVOKSBQMk06IDQwLWJpdCBJUEEgd2l0aCA0MC1i
aXQgUEEgYW5kIDgtYml0IFZNSUQNCihYRU4pIFAyTTogMyBsZXZlbHMgd2l0
aCBvcmRlci0xIHJvb3QsIFZUQ1IgMHg4MDAyMzU1OA0KKFhFTikgU2NoZWR1
bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3Vy
Y2UNCihYRU4pIGFsdGVybmF0aXZlczogUGF0Y2hpbmcgd2l0aCBhbHQgdGFi
bGUgMDAwMDAwMDAwMDJkYzNiOCAtPiAwMDAwMDAwMDAwMmRjYmUwDQooWEVO
KSAqKiogTE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIExvYWRpbmcgZDAg
a2VybmVsIGZyb20gYm9vdCBtb2R1bGUgQCAwMDAwMDAwMDAxMDAwMDAwDQoo
WEVOKSBMb2FkaW5nIHJhbWRpc2sgZnJvbSBib290IG1vZHVsZSBAIDAwMDAw
MDAwMDE4MDAwMDANCihYRU4pIEFsbG9jYXRpbmcgMToxIG1hcHBpbmdzIHRv
dGFsbGluZyAxMDI0TUIgZm9yIGRvbTA6DQooWEVOKSBCQU5LWzBdIDB4MDAw
MDAwMjAwMDAwMDAtMHgwMDAwMDA2MDAwMDAwMCAoMTAyNE1CKQ0KKFhFTikg
R3JhbnQgdGFibGUgcmFuZ2U6IDB4MDAwMDAwMDBlMDAwMDAtMHgwMDAwMDAw
MGU0MDAwMA0KKFhFTikgc21tdTogL3NtbXVAZmQ4MDAwMDA6IGQwOiBwMm1h
ZGRyIDB4MDAwMDAwMDg3YmY5YTAwMA0KKFhFTikgRGF0YSBBYm9ydCBUcmFw
LiBTeW5kcm9tZT0weDQNCihYRU4pIFdhbGtpbmcgSHlwZXJ2aXNvciBWQSAw
eDE0ZWQwMDAwZmJmOWZkMjAgb24gQ1BVMCB2aWEgVFRCUiAweDAwMDAwMDAw
MDBmM2YwMDANCihYRU4pIDBUSFsweDBdID0gMHgwMDAwMDAwMDAwZjQyZjdm
DQooWEVOKSAxU1RbMHgzXSA9IDB4MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
Q1BVMDogVW5leHBlY3RlZCBUcmFwOiBEYXRhIEFib3J0DQooWEVOKSAtLS0t
WyBYZW4tNC4xNi11bnN0YWJsZSAgYXJtNjQgIGRlYnVnPXkgIE5vdCB0YWlu
dGVkIF0tLS0tDQooWEVOKSBDUFU6ICAgIDANCihYRU4pIFBDOiAgICAgMDAw
MDAwMDAwMDI0Y2ZhNCBzbW11LmMjZmluZF9zbW11X21hc3RlcisweDgvMHgz
Yw0KKFhFTikgTFI6ICAgICAwMDAwMDAwMDAwMjRkMTg4DQooWEVOKSBTUDog
ICAgIDAwMDAwMDAwMDAzMDcxZjANCihYRU4pIENQU1I6ICAgMjAwMDAyNDkg
TU9ERTo2NC1iaXQgRUwyaCAoSHlwZXJ2aXNvciwgaGFuZGxlcikNCihYRU4p
ICAgICAgWDA6IDE0ZWQwMDAwZmJmOWZkMjggIFgxOiAwMDAwODAwMGZiZmNh
ZjQwICBYMjogMDAwMDgwMDBmYmZjYjE5MA0KKFhFTikgICAgICBYMzogMDAw
MDAwMDAwMDJiMzg3MCAgWDQ6IDAwMDAwMDAwMDAwMDAwMDAgIFg1OiAwMDY1
MWYxMTAzMGMwNzE5DQooWEVOKSAgICAgIFg2OiAwMDAwMDAwMDAwMDAwMDAw
ICBYNzogMDAwMDAwMDAwMDAwMDAwMCAgWDg6IDAwMDA4MDAwZjkwN2VjMzAN
CihYRU4pICAgICAgWDk6IGZmZmZmZmZmZmZmZmZmZmUgWDEwOiAwMTAxMDEw
MTAxMDEwMTAxIFgxMTogMDAwMDAwMDAwMDAwMDAwMw0KKFhFTikgICAgIFgx
MjogMDAwMDAwMDAwMDAwMDAyMCBYMTM6IGZmMDAwMDAwMDAwMDAwMDAgWDE0
OiAwMDAwMDAwMDAwMDAwMDMwDQooWEVOKSAgICAgWDE1OiAwMDAwMDAwMDAw
MDAwMDAwIFgxNjogMDAwMDAwMDAwMDJiNTAwMCBYMTc6IDAwMDAwMDAwMDAy
YjUwMDANCihYRU4pICAgICBYMTg6IDAwMDAwMDAwMDAyYjYwMDAgWDE5OiAw
MDAwODAwMGZiZmZjYmYwIFgyMDogMDAwMDAwMDAwMDJiMzg3OA0KKFhFTikg
ICAgIFgyMTogMDAwMDgwMDBmYmZjYWY0MCBYMjI6IDAwMDA4MDAwZmJmOWZm
ZDAgWDIzOiAwMDAwODAwMGZiZmNhZmQwDQooWEVOKSAgICAgWDI0OiAwMDAw
ODAwMGZiZjlkMDAwIFgyNTogMDAwMDAwMDAwMDAwMDAwMSBYMjY6IDAwMDAw
MDAwMDAwMDAwMDENCihYRU4pICAgICBYMjc6IDAwMDAwMDAwMDAwMDAwMDAg
WDI4OiAwMDAwMDAwMDAwMDAwMDAwICBGUDogMDAwMDAwMDAwMDMwNzFmMA0K
KFhFTikgDQooWEVOKSAgIFZUQ1JfRUwyOiA4MDAyMzU1OA0KKFhFTikgIFZU
VEJSX0VMMjogMDAwMDAwMDg3YmY5YTAwMA0KKFhFTikgDQooWEVOKSAgU0NU
TFJfRUwyOiAzMGNkMTgzZA0KKFhFTikgICAgSENSX0VMMjogMDAwMDAwMDAw
MDAwMDAzYQ0KKFhFTikgIFRUQlIwX0VMMjogMDAwMDAwMDAwMGYzZjAwMA0K
KFhFTikgDQooWEVOKSAgICBFU1JfRUwyOiA5NjAwMDAwNA0KKFhFTikgIEhQ
RkFSX0VMMjogMDAwMDAwMDAwMGY5MDEwMA0KKFhFTikgICAgRkFSX0VMMjog
MTRlZDAwMDBmYmY5ZmQyMA0KKFhFTikgDQooWEVOKSBYZW4gc3RhY2sgdHJh
Y2UgZnJvbSBzcD0wMDAwMDAwMDAwMzA3MWYwOg0KKFhFTikgICAgMDAwMDAw
MDAwMDMwNzIyMCAwMDAwMDAwMDAwMjRkN2IwIDAwMDA4MDAwZmJmOWQwMDAg
MDAwMDAwMDBmZmZmZmZmMA0KKFhFTikgICAgMDAwMDgwMDBmYmZjYWZiMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzMDcyYTAgMDAwMDAwMDAwMDI0
ZWRjOA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwMDAwMGZmZmZm
ZmYwIDAwMDA4MDAwZmJmY2FmNDAgMDAwMDgwMDBmYmZjYWZhMA0KKFhFTikg
ICAgMDAwMDgwMDBmYmZjYWZkMCAwMDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAw
MDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMQ0KKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzMDcyYTAgMDAw
MDAwMDAwMDI0ZWQ5OA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAw
MDAwMDAwMzA3NTUwIDAwMDAwMDAwMDAzMDcyZDAgMDAwMDAwMDAwMDJjOWY3
Yw0KKFhFTikgICAgMDAwMDgwMDBmYmZjYWY0MCAwMDAwMDAwMDAwMzA3NTUw
IDAwMDA4MDAwZmJmOWQwMDAgMDAwMDAwMDAwMDAwMDAwNQ0KKFhFTikgICAg
MDAwMDAwMDAwMDMwNzM5MCAwMDAwMDAwMDAwMmNhNDBjIDAwMDA4MDAwZmJm
YzgwMzggMDAwMDAwMDAwMDMwNzU1MA0KKFhFTikgICAgMDAwMDgwMDBmYmY5
ZDAwMCAwMDAwMDAwMDAwMDAwMDA1IDAwMDA4MDAwZmJmY2FmNDAgMDAwMDAw
MDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDgwMDBmYmZlMjEzMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0K
KFhFTikgICAgMDAwMDAwMDAwMDJkYThlOCAwMDAwMDAwMGY5MDdhMDkwIDAw
MDAwMDAwMDAyZGE4ZDggMDAwMDAwMDAwMDJkOWI4MA0KKFhFTikgICAgMDAw
MDAwMDAwMDMwNzM4MCAwMDAwMDAwMGZkMDcwMDAwIDAwMDA4MDAwZmJmYzgw
MzggMDAwMDAwMDAwMDAzMDAwMA0KKFhFTikgICAgMDAwMDgwMDBmYmY5ZDAw
MCAwMDAwODAwMGZiZjlkMDAwIDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAw
MDJjYTMwOA0KKFhFTikgICAgMDAwMDAwMDAwMDMwNzQ1MCAwMDAwMDAwMDAw
MmNhNDBjIDAwMDA4MDAwZmJmYzAwMDAgMDAwMDAwMDAwMDMwNzU1MA0KKFhF
TikgICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwMDAwMDAwMDAwMDA1IDAwMDA4
MDAwZmJmYzgwMzggMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDgw
MDBmYmZlMDBjNCAwMDAwMDAwMDAwMDAwMDE1IDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYThlOCAw
MDAwMDAwMGY5MDdhMDkwIDAwMDAwMDAwMDAyZGE4ZDggMDAwMDAwMDAwMDJk
OWI4MA0KKFhFTikgICAgMDAwMDAwMDAwMDMwNzQ0MCAwMDAwMDAwMDAwMjAz
ZWM0IDAwMDA4MDAwZmJmYzAwMDAgMDAwMDAwMDAwMDMwNzU1MA0KKFhFTikg
ICAgMDAwMDgwMDBmYmY5ZDAwMCAwMDAwODAwMGZiZjlkMDAwIDAwMDAwMDAw
MDAwMDAwMDUgMDAwMDAwMDAwMDJjYTMwOA0KKFhFTikgICAgMDAwMDAwMDAw
MDMwNzUxMCAwMDAwMDAwMDAwMmNhYzcwIDAwMDAwMDAwMDAwMGEwOTAgMDAw
MDAwMDAwMGUwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYWFlOCAwMDAw
ODAwMGZiZjlkMDAwIDAwMDAwMDAwMDAwMDAwMGYgMDAwMDAwMDAwMDAwMDAw
NA0KKFhFTikgICAgMDAwMDAwMDAwMDJlODYwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDA4ODAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMg0KKFhFTikgICAg
MDAwMDAwMDAwMDJkYThlOCAwMDAwMDAwMDAwMjJjZTUwIDAwMDAwMDAwMDAy
ZGE4ZDggMDAwMDAwMDAwMDJkOWI4MA0KKFhFTikgICAgMDAwMDAwMDAwMDMw
NzUwMCAwMDAwMDAwMDAwMmJiY2M4IDAwMDAwMDAwMDAwMGEwOTAgMDAwMDAw
MDAwMGUwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDJkYWFlOCAwMDAwODAw
MGZiZjlkMDAwIDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAwMDJjYWM1OA0K
KFhFTikgICAgMDAwMDAwMDAwMDMwN2RmMCAwMDAwMDAwMDAwMmNlZjY0IDAw
MDA4MDAwZmJmOWQwMDAgMDAwMDAwMDAwMDJiNDc4MA0KKFhFTikgICAgMDAw
MDAwMDAwMDM0ODQzMCAwMDAwMDAwMDAwMDAwMDA0IDAwMDAwMDAwMDAyYTg0
ZTAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwODAwMGZiZjlkMDAwIDAwMDA4MDAwZjkwNzAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDIw
MDAwMDAwIDAwMDAwMDAwNDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhF
TikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMA0KKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0K
KFhFTikgWGVuIGNhbGwgdHJhY2U6DQooWEVOKSAgICBbPDAwMDAwMDAwMDAy
NGNmYTQ+XSBzbW11LmMjZmluZF9zbW11X21hc3RlcisweDgvMHgzYyAoUEMp
DQooWEVOKSAgICBbPDAwMDAwMDAwMDAyNGQxODg+XSBzbW11LmMjZmluZF9z
bW11X2Zvcl9kZXZpY2UrMHg0OC8weDk0IChMUikNCihYRU4pICAgIFs8MDAw
MDAwMDAwMDI0ZDdiMD5dIHNtbXUuYyNhcm1fc21tdV9hc3NpZ25fZGV2KzB4
NTgvMHhiNDgNCihYRU4pICAgIFs8MDAwMDAwMDAwMDI0ZWRjOD5dIGlvbW11
X2Fzc2lnbl9kdF9kZXZpY2UrMHg2NC8weGMwDQooWEVOKSAgICBbPDAwMDAw
MDAwMDAyYzlmN2M+XSBkb21haW5fYnVpbGQuYyNoYW5kbGVfbm9kZSsweDMx
MC8weDllYw0KKFhFTikgICAgWzwwMDAwMDAwMDAwMmNhNDBjPl0gZG9tYWlu
X2J1aWxkLmMjaGFuZGxlX25vZGUrMHg3YTAvMHg5ZWMNCihYRU4pICAgIFs8
MDAwMDAwMDAwMDJjYTQwYz5dIGRvbWFpbl9idWlsZC5jI2hhbmRsZV9ub2Rl
KzB4N2EwLzB4OWVjDQooWEVOKSAgICBbPDAwMDAwMDAwMDAyY2FjNzA+XSBj
b25zdHJ1Y3RfZG9tMCsweDQxMC8weDRiYw0KKFhFTikgICAgWzwwMDAwMDAw
MDAwMmNlZjY0Pl0gc3RhcnRfeGVuKzB4YmU4LzB4Y2QwDQooWEVOKSAgICBb
PDAwMDAwMDAwMDAyMDAxYTA+XSBhcm02NC9oZWFkLm8jcHJpbWFyeV9zd2l0
Y2hlZCsweGMvMHgxYw0KKFhFTikgDQooWEVOKSANCihYRU4pICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIFBhbmlj
IG9uIENQVSAwOg0KKFhFTikgQ1BVMDogVW5leHBlY3RlZCBUcmFwOiBEYXRh
IEFib3J0DQooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqDQooWEVOKSANCihYRU4pIFJlYm9vdCBpbiBmaXZlIHNlY29u
ZHMuLi4NCg0K

--8323329-468115872-1623805343=:24906
Content-Type: application/octet-stream; NAME=xen.dtb
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106151802231.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=xen.dtb

0A3+7QAAnZEAAAA4AACTcAAAACgAAAARAAAAEAAAAAAAAAohAACTOAAAAAAA
AAAAAAAAAAAAAAAAAAABAAAAAAAAAAMAAAA5AAAAAHhsbngsenlucW1wLXpj
dTEwMi1yZXYxLjAAeGxueCx6eW5xbXAtemN1MTAyAHhsbngsenlucW1wAAAA
AAAAAAMAAAAEAAAACwAAAAIAAAADAAAABAAAABoAAAACAAAAAwAAABUAAAAm
WnlucU1QIFpDVTEwMiBSZXYxLjAAAAAAAAAAAWNwdXMAAAAAAAAAAwAAAAQA
AAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAABY3B1QDAAAAAAAAADAAAADwAA
AABhcm0sY29ydGV4LWE1MwAAAAAAAwAAAAQAAAAsY3B1AAAAAAMAAAAFAAAA
OHBzY2kAAAAAAAAAAwAAAAQAAABGAAAAAQAAAAMAAAAEAAAAWgAAAAAAAAAD
AAAABAAAAF4AAAACAAAAAwAAAAgAAABuAAAAAwAAAAoAAAACAAAAAWNwdUAx
AAAAAAAAAwAAAA8AAAAAYXJtLGNvcnRleC1hNTMAAAAAAAMAAAAEAAAALGNw
dQAAAAADAAAABQAAADhwc2NpAAAAAAAAAAMAAAAEAAAAWgAAAAEAAAADAAAA
BAAAAEYAAAABAAAAAwAAAAQAAABeAAAAAgAAAAIAAAABY3B1QDIAAAAAAAAD
AAAADwAAAABhcm0sY29ydGV4LWE1MwAAAAAAAwAAAAQAAAAsY3B1AAAAAAMA
AAAFAAAAOHBzY2kAAAAAAAAAAwAAAAQAAABaAAAAAgAAAAMAAAAEAAAARgAA
AAEAAAADAAAABAAAAF4AAAACAAAAAgAAAAFjcHVAMwAAAAAAAAMAAAAPAAAA
AGFybSxjb3J0ZXgtYTUzAAAAAAADAAAABAAAACxjcHUAAAAAAwAAAAUAAAA4
cHNjaQAAAAAAAAADAAAABAAAAFoAAAADAAAAAwAAAAQAAABGAAAAAQAAAAMA
AAAEAAAAXgAAAAIAAAACAAAAAWlkbGUtc3RhdGVzAAAAAAMAAAAFAAAAdXBz
Y2kAAAAAAAAAAWNwdS1zbGVlcC0wAAAAAAMAAAAPAAAAAGFybSxpZGxlLXN0
YXRlAAAAAAADAAAABAAAAIJAAAAAAAAAAwAAAAAAAACZAAAAAwAAAAQAAACq
AAABLAAAAAMAAAAEAAAAuwAAAlgAAAADAAAABAAAAMsAACcQAAAAAwAAAAQA
AADcAAAAAgAAAAIAAAACAAAAAgAAAAFjcHUtb3BwLXRhYmxlAAAAAAAAAwAA
ABQAAAAAb3BlcmF0aW5nLXBvaW50cy12MgAAAAADAAAAAAAAAOQAAAADAAAA
BAAAANwAAAABAAAAAW9wcDAwAAAAAAAAAwAAAAgAAADvAAAAAEeGi/QAAAAD
AAAABAAAAPYAD0JAAAAAAwAAAAQAAAEEAAehIAAAAAIAAAABb3BwMDEAAAAA
AAADAAAACAAAAO8AAAAAI8NF+gAAAAMAAAAEAAAA9gAPQkAAAAADAAAABAAA
AQQAB6EgAAAAAgAAAAFvcHAwMgAAAAAAAAMAAAAIAAAA7wAAAAAX14P8AAAA
AwAAAAQAAAD2AA9CQAAAAAMAAAAEAAABBAAHoSAAAAACAAAAAW9wcDAzAAAA
AAAAAwAAAAgAAADvAAAAABHhov0AAAADAAAABAAAAPYAD0JAAAAAAwAAAAQA
AAEEAAehIAAAAAIAAAACAAAAAXp5bnFtcF9pcGkAAAAAAAMAAAAAAAABFQAA
AAMAAAAYAAAAAHhsbngsenlucW1wLWlwaS1tYWlsYm94AAAAAAMAAAAEAAAB
KQAAAAQAAAADAAAADAAAAToAAAAAAAAAIwAAAAQAAAADAAAABAAAAUUAAAAA
AAAAAwAAAAQAAAALAAAAAgAAAAMAAAAEAAAAGgAAAAIAAAADAAAAAAAAAVEA
AAABbWFpbGJveEBmZjk5MDQwMAAAAAAAAAADAAAAAAAAARUAAAADAAAAQAAA
AFoAAAAA/5kFwAAAAAAAAAAgAAAAAP+ZBeAAAAAAAAAAIAAAAAD/mQ6AAAAA
AAAAACAAAAAA/5kOoAAAAAAAAAAgAAAAAwAAAFgAAAFYbG9jYWxfcmVxdWVz
dF9yZWdpb24AbG9jYWxfcmVzcG9uc2VfcmVnaW9uAHJlbW90ZV9yZXF1ZXN0
X3JlZ2lvbgByZW1vdGVfcmVzcG9uc2VfcmVnaW9uAAAAAAMAAAAEAAABYgAA
AAEAAAADAAAABAAAAUUAAAAEAAAAAwAAAAQAAADcAAAABQAAAAIAAAACAAAA
AWRjYwAAAAADAAAACAAAAABhcm0sZGNjAAAAAAMAAAAJAAABbmRpc2FibGVk
AAAAAAAAAAMAAAAAAAABFQAAAAIAAAABcG11AAAAAAMAAAAQAAAAAGFybSxh
cm12OC1wbXV2MwAAAAADAAAABAAAASkAAAAEAAAAAwAAADAAAAE6AAAAAAAA
AI8AAAAEAAAAAAAAAJAAAAAEAAAAAAAAAJEAAAAEAAAAAAAAAJIAAAAEAAAA
AgAAAAFwc2NpAAAAAAAAAAMAAAANAAAAAGFybSxwc2NpLTAuMgAAAAAAAAAD
AAAABAAAAD9zbWMAAAAAAgAAAAFmaXJtd2FyZQAAAAAAAAABenlucW1wLWZp
cm13YXJlAAAAAAMAAAAVAAAAAHhsbngsenlucW1wLWZpcm13YXJlAAAAAAAA
AAMAAAAAAAABFQAAAAMAAAAEAAAAP3NtYwAAAAADAAAABAAAAXUAAAABAAAA
AwAAAAQAAADcAAAAJgAAAAFwY2FwAAAAAAAAAAMAAAAWAAAAAHhsbngsenlu
cW1wLXBjYXAtZnBnYQAAAAAAAAMAAAAIAAABiXJlZl9jbGsAAAAAAwAAAAgA
AABuAAAAAwAAACkAAAADAAAABAAAANwAAAALAAAAAgAAAAF6eW5xbXAtcG93
ZXIAAAAAAAAAAwAAAAAAAAEVAAAAAwAAABIAAAAAeGxueCx6eW5xbXAtcG93
ZXIAAAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAACMAAAAE
AAAAAwAAABAAAAGVAAAABQAAAAAAAAAFAAAAAQAAAAMAAAAGAAABnHR4AHJ4
AAAAAAAAAgAAAAFyZXNldC1jb250cm9sbGVyAAAAAAAAAAMAAAASAAAAAHhs
bngsenlucW1wLXJlc2V0AAAAAAAAAwAAAAQAAAGnAAAAAQAAAAMAAAAEAAAA
3AAAADQAAAACAAAAAXBpbmN0cmwAAAAAAwAAABQAAAAAeGxueCx6eW5xbXAt
cGluY3RybAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAFpMmMwLWRlZmF1bHQA
AAAAAAAAAwAAAAQAAADcAAAALAAAAAFtdXgAAAAAAwAAAAsAAAG0aTJjMF8z
X2dycAAAAAAAAwAAAAUAAAG7aTJjMAAAAAAAAAACAAAAAWNvbmYAAAAAAAAA
AwAAAAsAAAG0aTJjMF8zX2dycAAAAAAAAwAAAAAAAAHEAAAAAwAAAAQAAAHR
AAAAAQAAAAMAAAAEAAAB2wAAAAEAAAACAAAAAgAAAAFpMmMwLWdwaW8AAAAA
AAADAAAABAAAANwAAAAtAAAAAW11eAAAAAADAAAAGgAAAbRncGlvMF8xNF9n
cnAAZ3BpbzBfMTVfZ3JwAAAAAAAAAwAAAAYAAAG7Z3BpbzAAAAAAAAACAAAA
AWNvbmYAAAAAAAAAAwAAABoAAAG0Z3BpbzBfMTRfZ3JwAGdwaW8wXzE1X2dy
cAAAAAAAAAMAAAAEAAAB0QAAAAEAAAADAAAABAAAAdsAAAABAAAAAgAAAAIA
AAABaTJjMS1kZWZhdWx0AAAAAAAAAAMAAAAEAAAA3AAAAC8AAAABbXV4AAAA
AAMAAAALAAABtGkyYzFfNF9ncnAAAAAAAAMAAAAFAAABu2kyYzEAAAAAAAAA
AgAAAAFjb25mAAAAAAAAAAMAAAALAAABtGkyYzFfNF9ncnAAAAAAAAMAAAAA
AAABxAAAAAMAAAAEAAAB0QAAAAEAAAADAAAABAAAAdsAAAABAAAAAgAAAAIA
AAABaTJjMS1ncGlvAAAAAAAAAwAAAAQAAADcAAAAMAAAAAFtdXgAAAAAAwAA
ABoAAAG0Z3BpbzBfMTZfZ3JwAGdwaW8wXzE3X2dycAAAAAAAAAMAAAAGAAAB
u2dwaW8wAAAAAAAAAgAAAAFjb25mAAAAAAAAAAMAAAAaAAABtGdwaW8wXzE2
X2dycABncGlvMF8xN19ncnAAAAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQA
AAHbAAAAAQAAAAIAAAACAAAAAXVhcnQwLWRlZmF1bHQAAAAAAAADAAAABAAA
ANwAAAA3AAAAAW11eAAAAAADAAAADAAAAbR1YXJ0MF80X2dycAAAAAADAAAA
BgAAAbt1YXJ0MAAAAAAAAAIAAAABY29uZgAAAAAAAAADAAAADAAAAbR1YXJ0
MF80X2dycAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIA
AAABY29uZi1yeAAAAAADAAAABgAAAedNSU8xOAAAAAAAAAMAAAAAAAAB7AAA
AAIAAAABY29uZi10eAAAAAADAAAABgAAAedNSU8xOQAAAAAAAAMAAAAAAAAC
AAAAAAIAAAACAAAAAXVhcnQxLWRlZmF1bHQAAAAAAAADAAAABAAAANwAAAA4
AAAAAW11eAAAAAADAAAADAAAAbR1YXJ0MV81X2dycAAAAAADAAAABgAAAbt1
YXJ0MQAAAAAAAAIAAAABY29uZgAAAAAAAAADAAAADAAAAbR1YXJ0MV81X2dy
cAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIAAAABY29u
Zi1yeAAAAAADAAAABgAAAedNSU8yMQAAAAAAAAMAAAAAAAAB7AAAAAIAAAAB
Y29uZi10eAAAAAADAAAABgAAAedNSU8yMAAAAAAAAAMAAAAAAAACAAAAAAIA
AAACAAAAAXVzYjAtZGVmYXVsdAAAAAAAAAADAAAABAAAANwAAAA5AAAAAW11
eAAAAAADAAAACwAAAbR1c2IwXzBfZ3JwAAAAAAADAAAABQAAAbt1c2IwAAAA
AAAAAAIAAAABY29uZgAAAAAAAAADAAAACwAAAbR1c2IwXzBfZ3JwAAAAAAAD
AAAABAAAAdEAAAABAAAAAwAAAAQAAAHbAAAAAQAAAAIAAAABY29uZi1yeAAA
AAADAAAAEgAAAedNSU81MgBNSU81MwBNSU81NQAAAAAAAAMAAAAAAAAB7AAA
AAIAAAABY29uZi10eAAAAAADAAAANgAAAedNSU81NABNSU81NgBNSU81NwBN
SU81OABNSU81OQBNSU82MABNSU82MQBNSU82MgBNSU82MwAAAAAAAAMAAAAA
AAACAAAAAAIAAAACAAAAAWdlbTMtZGVmYXVsdAAAAAAAAAADAAAABAAAANwA
AAAqAAAAAW11eAAAAAADAAAACgAAAbtldGhlcm5ldDMAAAAAAAADAAAAEAAA
AbRldGhlcm5ldDNfMF9ncnAAAAAAAgAAAAFjb25mAAAAAAAAAAMAAAAQAAAB
tGV0aGVybmV0M18wX2dycAAAAAADAAAABAAAAdEAAAABAAAAAwAAAAQAAAHb
AAAAAQAAAAIAAAABY29uZi1yeAAAAAADAAAAJAAAAedNSU83MABNSU83MQBN
SU83MgBNSU83MwBNSU83NABNSU83NQAAAAADAAAAAAAAAewAAAADAAAAAAAA
Ag0AAAACAAAAAWNvbmYtdHgAAAAAAwAAACQAAAHnTUlPNjQATUlPNjUATUlP
NjYATUlPNjcATUlPNjgATUlPNjkAAAAAAwAAAAAAAAIAAAAAAwAAAAAAAAIf
AAAAAgAAAAFtdXgtbWRpbwAAAAAAAAADAAAABgAAAbttZGlvMwAAAAAAAAMA
AAAMAAABtG1kaW8zXzBfZ3JwAAAAAAIAAAABY29uZi1tZGlvAAAAAAAAAwAA
AAwAAAG0bWRpbzNfMF9ncnAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB
2wAAAAEAAAADAAAAAAAAAgAAAAACAAAAAgAAAAFjYW4xLWRlZmF1bHQAAAAA
AAAAAwAAAAQAAADcAAAAJwAAAAFtdXgAAAAAAwAAAAUAAAG7Y2FuMQAAAAAA
AAADAAAACwAAAbRjYW4xXzZfZ3JwAAAAAAACAAAAAWNvbmYAAAAAAAAAAwAA
AAsAAAG0Y2FuMV82X2dycAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB
2wAAAAEAAAACAAAAAWNvbmYtcngAAAAAAwAAAAYAAAHnTUlPMjUAAAAAAAAD
AAAAAAAAAewAAAACAAAAAWNvbmYtdHgAAAAAAwAAAAYAAAHnTUlPMjQAAAAA
AAADAAAAAAAAAgAAAAACAAAAAgAAAAFzZGhjaTEtZGVmYXVsdAAAAAAAAwAA
AAQAAADcAAAANgAAAAFtdXgAAAAAAwAAAAwAAAG0c2RpbzFfMF9ncnAAAAAA
AwAAAAYAAAG7c2RpbzEAAAAAAAACAAAAAWNvbmYAAAAAAAAAAwAAAAwAAAG0
c2RpbzFfMF9ncnAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAADAAAAAAAAAgAAAAACAAAAAW11eC1jZAAAAAAAAwAAAA8AAAG0c2RpbzFf
Y2RfMF9ncnAAAAAAAAMAAAAJAAABu3NkaW8xX2NkAAAAAAAAAAIAAAABY29u
Zi1jZAAAAAADAAAADwAAAbRzZGlvMV9jZF8wX2dycAAAAAAAAwAAAAAAAAHs
AAAAAwAAAAAAAAHEAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAACAAAAAW11eC13cAAAAAAAAwAAAA8AAAG0c2RpbzFfd3BfMF9ncnAAAAAA
AAMAAAAJAAABu3NkaW8xX3dwAAAAAAAAAAIAAAABY29uZi13cAAAAAADAAAA
DwAAAbRzZGlvMV93cF8wX2dycAAAAAAAAwAAAAAAAAHsAAAAAwAAAAAAAAHE
AAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEAAAACAAAAAgAAAAFn
cGlvLWRlZmF1bHQAAAAAAAAAAwAAAAQAAADcAAAAKwAAAAFtdXgtc3cAAAAA
AAMAAAAGAAABu2dwaW8wAAAAAAAAAwAAABoAAAG0Z3BpbzBfMjJfZ3JwAGdw
aW8wXzIzX2dycAAAAAAAAAIAAAABY29uZi1zdwAAAAADAAAAGgAAAbRncGlv
MF8yMl9ncnAAZ3BpbzBfMjNfZ3JwAAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMA
AAAEAAAB2wAAAAEAAAACAAAAAW11eC1tc3AAAAAAAwAAAAYAAAG7Z3BpbzAA
AAAAAAADAAAAGgAAAbRncGlvMF8xM19ncnAAZ3BpbzBfMzhfZ3JwAAAAAAAA
AgAAAAFjb25mLW1zcAAAAAAAAAADAAAAGgAAAbRncGlvMF8xM19ncnAAZ3Bp
bzBfMzhfZ3JwAAAAAAAAAwAAAAQAAAHRAAAAAQAAAAMAAAAEAAAB2wAAAAEA
AAACAAAAAWNvbmYtcHVsbC11cAAAAAAAAAADAAAADAAAAedNSU8yMgBNSU8y
MwAAAAADAAAAAAAAAcQAAAACAAAAAWNvbmYtcHVsbC1ub25lAAAAAAADAAAA
DAAAAedNSU8xMwBNSU8zOAAAAAADAAAAAAAAAgAAAAACAAAAAgAAAAIAAAAB
Y2xvY2stY29udHJvbGxlcgAAAAAAAAADAAAAAAAAARUAAAADAAAABAAAAjAA
AAABAAAAAwAAABAAAAAAeGxueCx6eW5xbXAtY2xrAAAAAAMAAAAUAAAAbgAA
AAYAAAAHAAAACAAAAAkAAAAKAAAAAwAAAEEAAAGJcHNzX3JlZl9jbGsAdmlk
ZW9fY2xrAHBzc19hbHRfcmVmX2NsawBhdXhfcmVmX2NsawBndF9jcnhfcmVm
X2NsawAAAAAAAAADAAAABAAAANwAAAADAAAAAgAAAAIAAAACAAAAAXRpbWVy
AAAAAAAAAwAAABAAAAAAYXJtLGFybXY4LXRpbWVyAAAAAAMAAAAEAAABKQAA
AAQAAAADAAAAMAAAAToAAAABAAAADQAADwgAAAABAAAADgAADwgAAAABAAAA
CwAADwgAAAABAAAACgAADwgAAAACAAAAAWVkYWMAAAAAAAAAAwAAABQAAAAA
YXJtLGNvcnRleC1hNTMtZWRhYwAAAAACAAAAAWZwZ2EtZnVsbAAAAAAAAAMA
AAAMAAAAAGZwZ2EtcmVnaW9uAAAAAAMAAAAEAAACPQAAAAsAAAADAAAABAAA
AAsAAAACAAAAAwAAAAQAAAAaAAAAAgAAAAMAAAAAAAABUQAAAAIAAAABbnZt
ZW1fZmlybXdhcmUAAAAAAAMAAAAVAAAAAHhsbngsenlucW1wLW52bWVtLWZ3
AAAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAABAAAAAXNvY19y
ZXZpc2lvbkAwAAAAAAADAAAACAAAAFoAAAAAAAAABAAAAAMAAAAEAAAA3AAA
ADMAAAACAAAAAWVmdXNlX2RuYUBjAAAAAAMAAAAIAAAAWgAAAAwAAAAMAAAA
AgAAAAFlZnVzZV91c3IwQDIwAAAAAAAAAwAAAAgAAABaAAAAIAAAAAQAAAAC
AAAAAWVmdXNlX3VzcjFAMjQAAAAAAAADAAAACAAAAFoAAAAkAAAABAAAAAIA
AAABZWZ1c2VfdXNyMkAyOAAAAAAAAAMAAAAIAAAAWgAAACgAAAAEAAAAAgAA
AAFlZnVzZV91c3IzQDJjAAAAAAAAAwAAAAgAAABaAAAALAAAAAQAAAACAAAA
AWVmdXNlX3VzcjRAMzAAAAAAAAADAAAACAAAAFoAAAAwAAAABAAAAAIAAAAB
ZWZ1c2VfdXNyNUAzNAAAAAAAAAMAAAAIAAAAWgAAADQAAAAEAAAAAgAAAAFl
ZnVzZV91c3I2QDM4AAAAAAAAAwAAAAgAAABaAAAAOAAAAAQAAAACAAAAAWVm
dXNlX3VzcjdAM2MAAAAAAAADAAAACAAAAFoAAAA8AAAABAAAAAIAAAABZWZ1
c2VfbWlzY3VzckA0MAAAAAAAAAADAAAACAAAAFoAAABAAAAABAAAAAIAAAAB
ZWZ1c2VfY2hhc2hANTAAAAAAAAMAAAAIAAAAWgAAAFAAAAAEAAAAAgAAAAFl
ZnVzZV9wdWZtaXNjQDU0AAAAAAAAAAMAAAAIAAAAWgAAAFQAAAAEAAAAAgAA
AAFlZnVzZV9zZWNANTgAAAAAAAAAAwAAAAgAAABaAAAAWAAAAAQAAAACAAAA
AWVmdXNlX3Nwa2lkQDVjAAAAAAADAAAACAAAAFoAAABcAAAABAAAAAIAAAAB
ZWZ1c2VfcHBrMGhhc2hAYTAAAAAAAAADAAAACAAAAFoAAACgAAAAMAAAAAIA
AAABZWZ1c2VfcHBrMWhhc2hAZDAAAAAAAAADAAAACAAAAFoAAADQAAAAMAAA
AAIAAAACAAAAAXp5bnFtcF9yc2EAAAAAAAMAAAAQAAAAAHhsbngsenlucW1w
LXJzYQAAAAACAAAAAXNoYTM4NAAAAAAAAwAAABcAAAAAeGxueCx6eW5xbXAt
a2VjY2FrLTM4NAAAAAAAAgAAAAF6eW5xbXBfYWVzAAAAAAADAAAAEAAAAAB4
bG54LHp5bnFtcC1hZXMAAAAAAgAAAAFhbWJhLWFwdUAwAAAAAAADAAAACwAA
AABzaW1wbGUtYnVzAAAAAAADAAAABAAAAAsAAAACAAAAAwAAAAQAAAAaAAAA
AQAAAAMAAAAUAAABUQAAAAAAAAAAAAAAAAAAAAD/////AAAAAWludGVycnVw
dC1jb250cm9sbGVyQGY5MDEwMDAwAAAAAAAAAwAAAAwAAAAAYXJtLGdpYy00
MDAAAAAAAwAAAAQAAAJGAAAAAwAAAAMAAAAwAAAAWgAAAAD5AQAAAAEAAAAA
AAD5AgAAAAIAAAAAAAD5BAAAAAIAAAAAAAD5BgAAAAIAAAAAAAMAAAAAAAAC
VwAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAABAAAACQAADwQAAAAD
AAAABAAAAmwAAAACAAAAAwAAAAQAAAJ1AAAAYAAAAAMAAAAEAAAA3AAAAAQA
AAACAAAAAgAAAAFzbW11QGZkODAwMDAwAAAAAAAAAwAAAAwAAAAAYXJtLG1t
dS01MDAAAAAAAwAAABAAAABaAAAAAP2AAAAAAAAAAAIAAAAAAAMAAAAEAAAC
hAAAAAEAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAACkQAAAAEAAAAD
AAAABAAAASkAAAAEAAAAAwAAAMwAAAE6AAAAAAAAAJsAAAAEAAAAAAAAAJsA
AAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAA
AJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAA
AAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAE
AAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsAAAAEAAAAAAAAAJsA
AAAEAAAAAwAAANAAAAKkAAAADAAACHQAAAANAAAIdQAAAA4AAAh2AAAADwAA
CHcAAAAQAAAIYAAAABEAAAhhAAAAEgAACHMAAAATAAAIaAAAABQAAAhpAAAA
FQAACGoAAAAWAAAIawAAABcAAAhsAAAAGAAACG0AAAAZAAAIbgAAABoAAAhv
AAAAGwAAFOgAAAAcAAAU6QAAAB0AABTqAAAAHgAAFOsAAAAfAAAU7AAAACAA
ABTtAAAAIQAAFO4AAAAiAAAU7wAAACMAAAhwAAAAJAAACHEAAAAlAAAIcgAA
AAMAAAAEAAAA3AAAACgAAAACAAAAAWFtYmEAAAAAAAAAAwAAAAsAAAAAc2lt
cGxlLWJ1cwAAAAAAAwAAAAAAAAEVAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAE
AAAAGgAAAAIAAAADAAAAAAAAAVEAAAABY2FuQGZmMDYwMDAwAAAAAAAAAAMA
AAASAAAAAHhsbngsenlucS1jYW4tMS4wAAAAAAAAAwAAAAkAAAFuZGlzYWJs
ZWQAAAAAAAAAAwAAAA0AAAGJY2FuX2NsawBwY2xrAAAAAAAAAAMAAAAQAAAA
WgAAAAD/BgAAAAAAAAAAEAAAAAADAAAADAAAAToAAAAAAAAAFwAAAAQAAAAD
AAAABAAAASkAAAAEAAAAAwAAAAQAAAKwAAAAQAAAAAMAAAAEAAACvgAAAEAA
AAADAAAACAAAAswAAAAmAAAALwAAAAMAAAAQAAAAbgAAAAMAAAA/AAAAAwAA
AB8AAAACAAAAAWNhbkBmZjA3MDAwMAAAAAAAAAADAAAAEgAAAAB4bG54LHp5
bnEtY2FuLTEuMAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAA0AAAGJ
Y2FuX2NsawBwY2xrAAAAAAAAAAMAAAAQAAAAWgAAAAD/BwAAAAAAAAAAEAAA
AAADAAAADAAAAToAAAAAAAAAGAAAAAQAAAADAAAABAAAASkAAAAEAAAAAwAA
AAQAAAKwAAAAQAAAAAMAAAAEAAACvgAAAEAAAAADAAAACAAAAswAAAAmAAAA
MAAAAAMAAAAQAAAAbgAAAAMAAABAAAAAAwAAAB8AAAADAAAACAAAAtpkZWZh
dWx0AAAAAAMAAAAEAAAC6AAAACcAAAACAAAAAWNjaUBmZDZlMDAwMAAAAAAA
AAADAAAADAAAAABhcm0sY2NpLTQwMAAAAAADAAAAEAAAAFoAAAAA/W4AAAAA
AAAAAJAAAAAAAwAAABAAAAFRAAAAAAAAAAD9bgAAAAEAAAAAAAMAAAAEAAAA
CwAAAAEAAAADAAAABAAAABoAAAABAAAAAXBtdUA5MDAwAAAAAAAAAAMAAAAT
AAAAAGFybSxjY2ktNDAwLXBtdSxyMQAAAAAAAwAAAAgAAABaAACQAAAAUAAA
AAADAAAABAAAASkAAAAEAAAAAwAAADwAAAE6AAAAAAAAAHsAAAAEAAAAAAAA
AHsAAAAEAAAAAAAAAHsAAAAEAAAAAAAAAHsAAAAEAAAAAAAAAHsAAAAEAAAA
AgAAAAIAAAABZG1hQGZkNTAwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAA
AAAAAwAAABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoA
AAAA/VAAAAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAA
AAAAAAB8AAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAA
AwAAAAQAAALyAAAAgAAAAAMAAAAEAAADAQAAAAEAAAADAAAACAAAAxIAAAAo
AAAU6AAAAAMAAAAIAAACzAAAACYAAAAqAAAAAwAAABAAAABuAAAAAwAAABMA
AAADAAAAHwAAAAMAAAAEAAAA3AAAABsAAAACAAAAAWRtYUBmZDUxMDAwMAAA
AAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngsenlucW1w
LWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP1RAAAAAAAAAAAQAAAAAAMAAAAE
AAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAfQAAAAQAAAADAAAAEQAAAYlj
bGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAIAAAAADAAAABAAA
AwEAAAABAAAAAwAAAAgAAAMSAAAAKAAAFOkAAAADAAAACAAAAswAAAAmAAAA
KgAAAAMAAAAQAAAAbgAAAAMAAAATAAAAAwAAAB8AAAADAAAABAAAANwAAAAc
AAAAAgAAAAFkbWFAZmQ1MjAwMDAAAAAAAAAAAwAAAAUAAAFub2theQAAAAAA
AAADAAAAFAAAAAB4bG54LHp5bnFtcC1kbWEtMS4wAAAAAAMAAAAQAAAAWgAA
AAD9UgAAAAAAAAAAEAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAA
AAAAAH4AAAAEAAAAAwAAABEAAAGJY2xrX21haW4AY2xrX2FwYgAAAAAAAAAD
AAAABAAAAvIAAACAAAAAAwAAAAQAAAMBAAAAAQAAAAMAAAAIAAADEgAAACgA
ABTqAAAAAwAAAAgAAALMAAAAJgAAACoAAAADAAAAEAAAAG4AAAADAAAAEwAA
AAMAAAAfAAAAAwAAAAQAAADcAAAAHQAAAAIAAAABZG1hQGZkNTMwMDAwAAAA
AAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABQAAAAAeGxueCx6eW5xbXAt
ZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/VMAAAAAAAAAABAAAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAB/AAAABAAAAAMAAAARAAABiWNs
a19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQAAALyAAAAgAAAAAMAAAAEAAAD
AQAAAAEAAAADAAAACAAAAxIAAAAoAAAU6wAAAAMAAAAIAAACzAAAACYAAAAq
AAAAAwAAABAAAABuAAAAAwAAABMAAAADAAAAHwAAAAMAAAAEAAAA3AAAAB4A
AAACAAAAAWRtYUBmZDU0MDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAA
AAMAAAAUAAAAAHhsbngsenlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAA
AP1UAAAAAAAAAAAQAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAA
AAAAgAAAAAQAAAADAAAAEQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMA
AAAEAAAC8gAAAIAAAAADAAAABAAAAwEAAAABAAAAAwAAAAgAAAMSAAAAKAAA
FOwAAAADAAAACAAAAswAAAAmAAAAKgAAAAMAAAAQAAAAbgAAAAMAAAATAAAA
AwAAAB8AAAADAAAABAAAANwAAAAfAAAAAgAAAAFkbWFAZmQ1NTAwMDAAAAAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1k
bWEtMS4wAAAAAAMAAAAQAAAAWgAAAAD9VQAAAAAAAAAAEAAAAAADAAAABAAA
ASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAIEAAAAEAAAAAwAAABEAAAGJY2xr
X21haW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAACAAAAAAwAAAAQAAAMB
AAAAAQAAAAMAAAAIAAADEgAAACgAABTtAAAAAwAAAAgAAALMAAAAJgAAACoA
AAADAAAAEAAAAG4AAAADAAAAEwAAAAMAAAAfAAAAAwAAAAQAAADcAAAAIAAA
AAIAAAABZG1hQGZkNTYwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAA
AwAAABQAAAAAeGxueCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA
/VYAAAAAAAAAABAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAA
AACCAAAABAAAAAMAAAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAA
AAQAAALyAAAAgAAAAAMAAAAEAAADAQAAAAEAAAADAAAACAAAAxIAAAAoAAAU
7gAAAAMAAAAIAAACzAAAACYAAAAqAAAAAwAAABAAAABuAAAAAwAAABMAAAAD
AAAAHwAAAAMAAAAEAAAA3AAAACEAAAACAAAAAWRtYUBmZDU3MDAwMAAAAAAA
AAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngsenlucW1wLWRt
YS0xLjAAAAAAAwAAABAAAABaAAAAAP1XAAAAAAAAAAAQAAAAAAMAAAAEAAAB
KQAAAAQAAAADAAAADAAAAToAAAAAAAAAgwAAAAQAAAADAAAAEQAAAYljbGtf
bWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAIAAAAADAAAABAAAAwEA
AAABAAAAAwAAAAgAAAMSAAAAKAAAFO8AAAADAAAACAAAAswAAAAmAAAAKgAA
AAMAAAAQAAAAbgAAAAMAAAATAAAAAwAAAB8AAAADAAAABAAAANwAAAAiAAAA
AgAAAAFncHVAZmQ0YjAwMDAAAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAAD
AAAAHQAAAABhcm0sbWFsaS00MDAAYXJtLG1hbGktdXRnYXJkAAAAAAAAAAMA
AAAQAAAAWgAAAAD9SwAAAAAAAAABAAAAAAADAAAABAAAASkAAAAEAAAAAwAA
AEgAAAE6AAAAAAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAA
AAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAAAAAAAIQAAAAEAAAAAwAAADEAAAMZ
SVJRR1AASVJRR1BNTVUASVJRUFAwAElSUVBQTU1VMABJUlFQUDEASVJRUFBN
TVUxAAAAAAAAAAMAAAAUAAABiWdwdQBncHVfcHAwAGdwdV9wcDEAAAAAAwAA
AAgAAALMAAAAJgAAADoAAAADAAAAGAAAAG4AAAADAAAAGAAAAAMAAAAZAAAA
AwAAABoAAAADAAAABAAAAykAAAABAAAAAgAAAAFkbWFAZmZhODAwMDAAAAAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5bnFtcC1k
bWEtMS4wAAAAAAMAAAAQAAAAWgAAAAD/qAAAAAAAAAAAEAAAAAADAAAABAAA
ASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAE0AAAAEAAAAAwAAABEAAAGJY2xr
X21haW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAABAAAAAAwAAAAQAAAMB
AAAAAQAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAAAABuAAAAAwAAAEQA
AAADAAAAHwAAAAMAAAAEAAAA3AAAABMAAAACAAAAAWRtYUBmZmE5MDAwMAAA
AAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngsenlucW1w
LWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP+pAAAAAAAAAAAQAAAAAAMAAAAE
AAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAATgAAAAQAAAADAAAAEQAAAYlj
bGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAEAAAAADAAAABAAA
AwEAAAABAAAAAwAAAAgAAALMAAAAJgAAACsAAAADAAAAEAAAAG4AAAADAAAA
RAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAFAAAAAIAAAABZG1hQGZmYWEwMDAw
AAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABQAAAAAeGxueCx6eW5x
bXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/6oAAAAAAAAAABAAAAAAAwAA
AAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABPAAAABAAAAAMAAAARAAAB
iWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQAAALyAAAAQAAAAAMAAAAE
AAADAQAAAAEAAAADAAAACAAAAswAAAAmAAAAKwAAAAMAAAAQAAAAbgAAAAMA
AABEAAAAAwAAAB8AAAADAAAABAAAANwAAAAVAAAAAgAAAAFkbWFAZmZhYjAw
MDAAAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4bG54LHp5
bnFtcC1kbWEtMS4wAAAAAAMAAAAQAAAAWgAAAAD/qwAAAAAAAAAAEAAAAAAD
AAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAFAAAAAEAAAAAwAAABEA
AAGJY2xrX21haW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAABAAAAAAwAA
AAQAAAMBAAAAAQAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAAAABuAAAA
AwAAAEQAAAADAAAAHwAAAAMAAAAEAAAA3AAAABYAAAACAAAAAWRtYUBmZmFj
MDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAAAHhsbngs
enlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP+sAAAAAAAAAAAQAAAA
AAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAUQAAAAQAAAADAAAA
EQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAAAEAAAAAD
AAAABAAAAwEAAAABAAAAAwAAAAgAAALMAAAAJgAAACsAAAADAAAAEAAAAG4A
AAADAAAARAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAFwAAAAIAAAABZG1hQGZm
YWQwMDAwAAAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAABQAAAAAeGxu
eCx6eW5xbXAtZG1hLTEuMAAAAAADAAAAEAAAAFoAAAAA/60AAAAAAAAAABAA
AAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAABSAAAABAAAAAMA
AAARAAABiWNsa19tYWluAGNsa19hcGIAAAAAAAAAAwAAAAQAAALyAAAAQAAA
AAMAAAAEAAADAQAAAAEAAAADAAAACAAAAswAAAAmAAAAKwAAAAMAAAAQAAAA
bgAAAAMAAABEAAAAAwAAAB8AAAADAAAABAAAANwAAAAYAAAAAgAAAAFkbWFA
ZmZhZTAwMDAAAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAFAAAAAB4
bG54LHp5bnFtcC1kbWEtMS4wAAAAAAMAAAAQAAAAWgAAAAD/rgAAAAAAAAAA
EAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAFMAAAAEAAAA
AwAAABEAAAGJY2xrX21haW4AY2xrX2FwYgAAAAAAAAADAAAABAAAAvIAAABA
AAAAAwAAAAQAAAMBAAAAAQAAAAMAAAAIAAACzAAAACYAAAArAAAAAwAAABAA
AABuAAAAAwAAAEQAAAADAAAAHwAAAAMAAAAEAAAA3AAAABkAAAACAAAAAWRt
YUBmZmFmMDAwMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAUAAAA
AHhsbngsenlucW1wLWRtYS0xLjAAAAAAAwAAABAAAABaAAAAAP+vAAAAAAAA
AAAQAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAAVAAAAAQA
AAADAAAAEQAAAYljbGtfbWFpbgBjbGtfYXBiAAAAAAAAAAMAAAAEAAAC8gAA
AEAAAAADAAAABAAAAwEAAAABAAAAAwAAAAgAAALMAAAAJgAAACsAAAADAAAA
EAAAAG4AAAADAAAARAAAAAMAAAAfAAAAAwAAAAQAAADcAAAAGgAAAAIAAAAB
bWVtb3J5LWNvbnRyb2xsZXJAZmQwNzAwMDAAAAAAAAMAAAAXAAAAAHhsbngs
enlucW1wLWRkcmMtMi40MGEAAAAAAAMAAAAQAAAAWgAAAAD9BwAAAAAAAAAD
AAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAHAAAAAEAAAA
AgAAAAFuYW5kQGZmMTAwMDAwAAAAAAAAAwAAABEAAAAAYXJhc2FuLG5mYy12
M3AxMAAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAAEAAAAFoA
AAAA/xAAAAAAAAAAABAAAAAAAwAAABIAAAGJY2xrX3N5cwBjbGtfZmxhc2gA
AAAAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAAAAAAAA4AAAAEAAAA
AwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAwEAAAAB
AAAAAwAAAAgAAAMSAAAAKAAACHIAAAADAAAACAAAAswAAAAmAAAALAAAAAMA
AAAQAAAAbgAAAAMAAAA8AAAAAwAAAB8AAAADAAAABAAAANwAAAAlAAAAAgAA
AAFldGhlcm5ldEBmZjBiMDAwMAAAAAAAAAMAAAAZAAAAAGNkbnMsenlucW1w
LWdlbQBjZG5zLGdlbQAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAAD
AAAABAAAASkAAAAEAAAAAwAAABgAAAE6AAAAAAAAADkAAAAEAAAAAAAAADkA
AAAEAAAAAwAAABAAAABaAAAAAP8LAAAAAAAAAAAQAAAAAAMAAAAgAAABiXBj
bGsAaGNsawB0eF9jbGsAcnhfY2xrAHRzdV9jbGsAAAAAAwAAAAQAAAALAAAA
AQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAwEAAAABAAAAAwAAAAgAAAMS
AAAAKAAACHQAAAADAAAACAAAAswAAAAmAAAAHQAAAAMAAAAoAAAAbgAAAAMA
AAAfAAAAAwAAAGgAAAADAAAALQAAAAMAAAAxAAAAAwAAACwAAAADAAAABAAA
ANwAAAAMAAAAAgAAAAFldGhlcm5ldEBmZjBjMDAwMAAAAAAAAAMAAAAZAAAA
AGNkbnMsenlucW1wLWdlbQBjZG5zLGdlbQAAAAAAAAADAAAACQAAAW5kaXNh
YmxlZAAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABgAAAE6AAAAAAAAADsA
AAAEAAAAAAAAADsAAAAEAAAAAwAAABAAAABaAAAAAP8MAAAAAAAAAAAQAAAA
AAMAAAAgAAABiXBjbGsAaGNsawB0eF9jbGsAcnhfY2xrAHRzdV9jbGsAAAAA
AwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAwEAAAAB
AAAAAwAAAAgAAAMSAAAAKAAACHUAAAADAAAACAAAAswAAAAmAAAAHgAAAAMA
AAAoAAAAbgAAAAMAAAAfAAAAAwAAAGkAAAADAAAALgAAAAMAAAAyAAAAAwAA
ACwAAAADAAAABAAAANwAAAANAAAAAgAAAAFldGhlcm5ldEBmZjBkMDAwMAAA
AAAAAAMAAAAZAAAAAGNkbnMsenlucW1wLWdlbQBjZG5zLGdlbQAAAAAAAAAD
AAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABgA
AAE6AAAAAAAAAD0AAAAEAAAAAAAAAD0AAAAEAAAAAwAAABAAAABaAAAAAP8N
AAAAAAAAAAAQAAAAAAMAAAAgAAABiXBjbGsAaGNsawB0eF9jbGsAcnhfY2xr
AHRzdV9jbGsAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAAD
AAAABAAAAwEAAAABAAAAAwAAAAgAAAMSAAAAKAAACHYAAAADAAAACAAAAswA
AAAmAAAAHwAAAAMAAAAoAAAAbgAAAAMAAAAfAAAAAwAAAGoAAAADAAAALwAA
AAMAAAAzAAAAAwAAACwAAAADAAAABAAAANwAAAAOAAAAAgAAAAFldGhlcm5l
dEBmZjBlMDAwMAAAAAAAAAMAAAAZAAAAAGNkbnMsenlucW1wLWdlbQBjZG5z
LGdlbQAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAABKQAAAAQA
AAADAAAAGAAAAToAAAAAAAAAPwAAAAQAAAAAAAAAPwAAAAQAAAADAAAAEAAA
AFoAAAAA/w4AAAAAAAAAABAAAAAAAwAAACAAAAGJcGNsawBoY2xrAHR4X2Ns
awByeF9jbGsAdHN1X2NsawAAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAa
AAAAAAAAAAMAAAAEAAADAQAAAAEAAAADAAAACAAAAxIAAAAoAAAIdwAAAAMA
AAAIAAACzAAAACYAAAAgAAAAAwAAACgAAABuAAAAAwAAAB8AAAADAAAAawAA
AAMAAAAwAAAAAwAAADQAAAADAAAALAAAAAMAAAAEAAADOwAAACkAAAADAAAA
CAAAAtpkZWZhdWx0AAAAAAMAAAAEAAAC6AAAACoAAAADAAAACQAAA0ZyZ21p
aS1pZAAAAAAAAAADAAAABAAAA08AAAAAAAAAAwAAAAYAAANjAAo1ACIBAAAA
AAADAAAABAAAANwAAAAPAAAAAWV0aGVybmV0LXBoeUBjAAAAAAADAAAABAAA
AFoAAAAMAAAAAwAAAAQAAAN1AAAACAAAAAMAAAAEAAADigAAAAoAAAADAAAA
BAAAA58AAAABAAAAAwAAAAAAAAOtAAAAAwAAAAQAAADcAAAAKQAAAAIAAAAC
AAAAAWdwaW9AZmYwYTAwMDAAAAAAAAADAAAAFQAAAAB4bG54LHp5bnFtcC1n
cGlvLTEuMAAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAADywAA
AAIAAAADAAAAAAAAA9cAAAADAAAABAAAASkAAAAEAAAAAwAAAAwAAAE6AAAA
AAAAABAAAAAEAAAAAwAAAAAAAAJXAAAAAwAAAAQAAAJGAAAAAgAAAAMAAAAQ
AAAAWgAAAAD/CgAAAAAAAAAAEAAAAAADAAAACAAAAswAAAAmAAAALgAAAAMA
AAAIAAAAbgAAAAMAAAAfAAAAAwAAAAgAAALaZGVmYXVsdAAAAAADAAAABAAA
AugAAAArAAAAAwAAAAQAAAPnAAAAIAAAAAMAAAAEAAAD9wAAAAAAAAADAAAA
BAAABAYAAFYAAAAAAwAAAAQAAADcAAAALgAAAAIAAAABaTJjQGZmMDIwMDAw
AAAAAAAAAAMAAAAeAAAAAGNkbnMsaTJjLXIxcDE0AGNkbnMsaTJjLXIxcDEw
AAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAA
AAwAAAE6AAAAAAAAABEAAAAEAAAAAwAAABAAAABaAAAAAP8CAAAAAAAAAAAQ
AAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAgAAALM
AAAAJgAAACUAAAADAAAACAAAAG4AAAADAAAAPQAAAAMAAAANAAAC2mRlZmF1
bHQAZ3BpbwAAAAAAAAADAAAABAAAAugAAAAsAAAAAwAAAAQAAAQUAAAALQAA
AAMAAAAMAAAEHgAAAC4AAAAOAAAAAAAAAAMAAAAMAAAEKAAAAC4AAAAPAAAA
AAAAAAMAAAAEAAAEMgAGGoAAAAABZ3Bpb0AyMAAAAAADAAAACwAAAAB0aSx0
Y2E2NDE2AAAAAAADAAAABAAAAFoAAAAgAAAAAwAAAAAAAAPXAAAAAwAAAAQA
AAPLAAAAAgAAAAMAAAB6AAAEQlBTX0dUUl9MQU5fU0VMMABQU19HVFJfTEFO
X1NFTDEAUFNfR1RSX0xBTl9TRUwyAFBTX0dUUl9MQU5fU0VMMwBQQ0lfQ0xL
X0RJUl9TRUwASUlDX01VWF9SRVNFVF9CAEdFTTNfRVhQX1JFU0VUX0IAAAAA
AAAAAAAAAAAAAAACAAAAAWdwaW9AMjEAAAAAAwAAAAsAAAAAdGksdGNhNjQx
NgAAAAAAAwAAAAQAAABaAAAAIQAAAAMAAAAAAAAD1wAAAAMAAAAEAAADywAA
AAIAAAADAAAA7QAABEJWQ0NQU1BMTF9FTgBNR1RSQVZDQ19FTgBNR1RSQVZU
VF9FTgBWQ0NQU0REUlBMTF9FTgBNSU8yNl9QTVVfSU5QVVRfTFMAUExfUE1C
VVNfQUxFUlQAUFNfUE1CVVNfQUxFUlQATUFYSU1fUE1CVVNfQUxFUlQAUExf
RERSNF9WVEVSTV9FTgBQTF9ERFI0X1ZQUF8yVjVfRU4AUFNfRElNTV9WRERR
X1RPX1BTVkNDT19PTgBQU19ESU1NX1NVU1BFTkRfRU4AUFNfRERSNF9WVEVS
TV9FTgBQU19ERFI0X1ZQUF8yVjVfRU4AAAAAAAAAAAACAAAAAWkyYy1tdXhA
NzUAAAAAAAMAAAAMAAAAAG54cCxwY2E5NTQ0AAAAAAMAAAAEAAAACwAAAAEA
AAADAAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAAdQAAAAFpMmNAMAAAAAAA
AAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAA
AAAAAAFpbmEyMjZANDAAAAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAAD
AAAABAAABFIAAAABAAAAAwAAAAsAAARkaW5hMjI2LXU3NgAAAAAAAwAAAAQA
AABaAAAAQAAAAAMAAAAEAAAEagAAE4gAAAADAAAABAAAANwAAABBAAAAAgAA
AAFpbmEyMjZANDEAAAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAA
BAAABFIAAAABAAAAAwAAAAsAAARkaW5hMjI2LXU3NwAAAAAAAwAAAAQAAABa
AAAAQQAAAAMAAAAEAAAEagAAE4gAAAADAAAABAAAANwAAABCAAAAAgAAAAFp
bmEyMjZANDIAAAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAA
BFIAAAABAAAAAwAAAAsAAARkaW5hMjI2LXU3OAAAAAAAAwAAAAQAAABaAAAA
QgAAAAMAAAAEAAAEagAAE4gAAAADAAAABAAAANwAAABDAAAAAgAAAAFpbmEy
MjZANDMAAAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIA
AAABAAAAAwAAAAsAAARkaW5hMjI2LXU4NwAAAAAAAwAAAAQAAABaAAAAQwAA
AAMAAAAEAAAEagAAE4gAAAADAAAABAAAANwAAABEAAAAAgAAAAFpbmEyMjZA
NDQAAAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAAB
AAAAAwAAAAsAAARkaW5hMjI2LXU4NQAAAAAAAwAAAAQAAABaAAAARAAAAAMA
AAAEAAAEagAAE4gAAAADAAAABAAAANwAAABFAAAAAgAAAAFpbmEyMjZANDUA
AAAAAAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAABAAAA
AwAAAAsAAARkaW5hMjI2LXU4NgAAAAAAAwAAAAQAAABaAAAARQAAAAMAAAAE
AAAEagAAE4gAAAADAAAABAAAANwAAABGAAAAAgAAAAFpbmEyMjZANDYAAAAA
AAADAAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAABAAAAAwAA
AAsAAARkaW5hMjI2LXU5MwAAAAAAAwAAAAQAAABaAAAARgAAAAMAAAAEAAAE
agAAE4gAAAADAAAABAAAANwAAABHAAAAAgAAAAFpbmEyMjZANDcAAAAAAAAD
AAAACgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAABAAAAAwAAAAsA
AARkaW5hMjI2LXU4OAAAAAAAAwAAAAQAAABaAAAARwAAAAMAAAAEAAAEagAA
E4gAAAADAAAABAAAANwAAABIAAAAAgAAAAFpbmEyMjZANGEAAAAAAAADAAAA
CgAAAAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAABAAAAAwAAAAsAAARk
aW5hMjI2LXUxNQAAAAAAAwAAAAQAAABaAAAASgAAAAMAAAAEAAAEagAAE4gA
AAADAAAABAAAANwAAABJAAAAAgAAAAFpbmEyMjZANGIAAAAAAAADAAAACgAA
AAB0aSxpbmEyMjYAAAAAAAADAAAABAAABFIAAAABAAAAAwAAAAsAAARkaW5h
MjI2LXU5MgAAAAAAAwAAAAQAAABaAAAASwAAAAMAAAAEAAAEagAAE4gAAAAD
AAAABAAAANwAAABKAAAAAgAAAAIAAAABaTJjQDEAAAAAAAADAAAABAAAAAsA
AAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAAAAEAAAABaW5hMjI2
QDQwAAAAAAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAA
AQAAAAMAAAALAAAEZGluYTIyNi11NzkAAAAAAAMAAAAEAAAAWgAAAEAAAAAD
AAAABAAABGoAAAfQAAAAAwAAAAQAAADcAAAASwAAAAIAAAABaW5hMjI2QDQx
AAAAAAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAA
AAMAAAALAAAEZGluYTIyNi11ODEAAAAAAAMAAAAEAAAAWgAAAEEAAAADAAAA
BAAABGoAABOIAAAAAwAAAAQAAADcAAAATAAAAAIAAAABaW5hMjI2QDQyAAAA
AAAAAwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMA
AAALAAAEZGluYTIyNi11ODAAAAAAAAMAAAAEAAAAWgAAAEIAAAADAAAABAAA
BGoAABOIAAAAAwAAAAQAAADcAAAATQAAAAIAAAABaW5hMjI2QDQzAAAAAAAA
AwAAAAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMAAAAL
AAAEZGluYTIyNi11ODQAAAAAAAMAAAAEAAAAWgAAAEMAAAADAAAABAAABGoA
ABOIAAAAAwAAAAQAAADcAAAATgAAAAIAAAABaW5hMjI2QDQ0AAAAAAAAAwAA
AAoAAAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMAAAALAAAE
ZGluYTIyNi11MTYAAAAAAAMAAAAEAAAAWgAAAEQAAAADAAAABAAABGoAABOI
AAAAAwAAAAQAAADcAAAATwAAAAIAAAABaW5hMjI2QDQ1AAAAAAAAAwAAAAoA
AAAAdGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMAAAALAAAEZGlu
YTIyNi11NjUAAAAAAAMAAAAEAAAAWgAAAEUAAAADAAAABAAABGoAABOIAAAA
AwAAAAQAAADcAAAAUAAAAAIAAAABaW5hMjI2QDQ2AAAAAAAAAwAAAAoAAAAA
dGksaW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMAAAALAAAEZGluYTIy
Ni11NzQAAAAAAAMAAAAEAAAAWgAAAEYAAAADAAAABAAABGoAABOIAAAAAwAA
AAQAAADcAAAAUQAAAAIAAAABaW5hMjI2QDQ3AAAAAAAAAwAAAAoAAAAAdGks
aW5hMjI2AAAAAAAAAwAAAAQAAARSAAAAAQAAAAMAAAALAAAEZGluYTIyNi11
NzUAAAAAAAMAAAAEAAAAWgAAAEcAAAADAAAABAAABGoAABOIAAAAAwAAAAQA
AADcAAAAUgAAAAIAAAACAAAAAWkyY0AyAAAAAAAAAwAAAAQAAAALAAAAAQAA
AAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAACAAAAAW1heDE1MzAxQGEA
AAAAAAMAAAAPAAAAAG1heGltLG1heDE1MzAxAAAAAAADAAAABAAAAFoAAAAK
AAAAAgAAAAFtYXgxNTMwM0BiAAAAAAADAAAADwAAAABtYXhpbSxtYXgxNTMw
MwAAAAAAAwAAAAQAAABaAAAACwAAAAIAAAABbWF4MTUzMDNAMTAAAAAAAwAA
AA8AAAAAbWF4aW0sbWF4MTUzMDMAAAAAAAMAAAAEAAAAWgAAABAAAAACAAAA
AW1heDE1MzAxQDEzAAAAAAMAAAAPAAAAAG1heGltLG1heDE1MzAxAAAAAAAD
AAAABAAAAFoAAAATAAAAAgAAAAFtYXgxNTMwM0AxNAAAAAADAAAADwAAAABt
YXhpbSxtYXgxNTMwMwAAAAAAAwAAAAQAAABaAAAAFAAAAAIAAAABbWF4MTUz
MDNAMTUAAAAAAwAAAA8AAAAAbWF4aW0sbWF4MTUzMDMAAAAAAAMAAAAEAAAA
WgAAABUAAAACAAAAAW1heDE1MzAzQDE2AAAAAAMAAAAPAAAAAG1heGltLG1h
eDE1MzAzAAAAAAADAAAABAAAAFoAAAAWAAAAAgAAAAFtYXgxNTMwM0AxNwAA
AAADAAAADwAAAABtYXhpbSxtYXgxNTMwMwAAAAAAAwAAAAQAAABaAAAAFwAA
AAIAAAABbWF4MTUzMDFAMTgAAAAAAwAAAA8AAAAAbWF4aW0sbWF4MTUzMDEA
AAAAAAMAAAAEAAAAWgAAABgAAAACAAAAAW1heDE1MzAzQDFhAAAAAAMAAAAP
AAAAAG1heGltLG1heDE1MzAzAAAAAAADAAAABAAAAFoAAAAaAAAAAgAAAAFt
YXgxNTMwM0AxYgAAAAADAAAADwAAAABtYXhpbSxtYXgxNTMwMwAAAAAAAwAA
AAQAAABaAAAAGwAAAAIAAAABbWF4MTUzMDNAMWQAAAAAAwAAAA8AAAAAbWF4
aW0sbWF4MTUzMDMAAAAAAAMAAAAEAAAAWgAAAB0AAAACAAAAAW1heDIwNzUx
QDcyAAAAAAMAAAAPAAAAAG1heGltLG1heDIwNzUxAAAAAAADAAAABAAAAFoA
AAByAAAAAgAAAAFtYXgyMDc1MUA3MwAAAAADAAAADwAAAABtYXhpbSxtYXgy
MDc1MQAAAAAAAwAAAAQAAABaAAAAcwAAAAIAAAACAAAAAgAAAAIAAAABaTJj
QGZmMDMwMDAwAAAAAAAAAAMAAAAeAAAAAGNkbnMsaTJjLXIxcDE0AGNkbnMs
aTJjLXIxcDEwAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAASkA
AAAEAAAAAwAAAAwAAAE6AAAAAAAAABIAAAAEAAAAAwAAABAAAABaAAAAAP8D
AAAAAAAAAAAQAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAA
AwAAAAgAAALMAAAAJgAAACYAAAADAAAACAAAAG4AAAADAAAAPgAAAAMAAAAN
AAAC2mRlZmF1bHQAZ3BpbwAAAAAAAAADAAAABAAAAugAAAAvAAAAAwAAAAQA
AAQUAAAAMAAAAAMAAAAMAAAEHgAAAC4AAAAQAAAAAAAAAAMAAAAMAAAEKAAA
AC4AAAARAAAAAAAAAAMAAAAEAAAEMgAGGoAAAAABaTJjLW11eEA3NAAAAAAA
AwAAAAwAAAAAbnhwLHBjYTk1NDgAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAE
AAAAGgAAAAAAAAADAAAABAAAAFoAAAB0AAAAAWkyY0AwAAAAAAAAAwAAAAQA
AAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAAAAAAAAWVl
cHJvbUA1NAAAAAAAAAMAAAAMAAAAAGF0bWVsLDI0YzA4AAAAAAMAAAAEAAAA
WgAAAFQAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAQAAAAFib2Fy
ZC1zbkAwAAAAAAADAAAACAAAAFoAAAAAAAAAFAAAAAIAAAABZXRoLW1hY0Ay
MAAAAAAAAwAAAAgAAABaAAAAIAAAAAYAAAACAAAAAWJvYXJkLW5hbWVAZDAA
AAAAAAADAAAACAAAAFoAAADQAAAABgAAAAIAAAABYm9hcmQtcmV2aXNpb25A
ZTAAAAAAAAADAAAACAAAAFoAAADgAAAAAwAAAAIAAAACAAAAAgAAAAFpMmNA
MQAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQA
AABaAAAAAQAAAAFjbG9jay1nZW5lcmF0b3JAMzYAAAAAAAMAAAAOAAAAAHNp
bGFicyxzaTUzNDEAAAAAAAADAAAABAAAAFoAAAA2AAAAAgAAAAIAAAABaTJj
QDIAAAAAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAE
AAAAWgAAAAIAAAABY2xvY2stZ2VuZXJhdG9yQDVkAAAAAAADAAAABAAAAjAA
AAAAAAAAAwAAAA0AAAAAc2lsYWJzLHNpNTcwAAAAAAAAAAMAAAAEAAAAWgAA
AF0AAAADAAAABAAABHkAAAAyAAAAAwAAAAQAAASPEeGjAAAAAAMAAAAEAAAE
MhHhowAAAAADAAAACwAABJxzaTU3MF91c2VyAAAAAAACAAAAAgAAAAFpMmNA
MwAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQA
AABaAAAAAwAAAAFjbG9jay1nZW5lcmF0b3JANWQAAAAAAAMAAAAEAAACMAAA
AAAAAAADAAAADQAAAABzaWxhYnMsc2k1NzAAAAAAAAAAAwAAAAQAAABaAAAA
XQAAAAMAAAAEAAAEeQAAADIAAAADAAAABAAABI8JUC+QAAAAAwAAAAQAAAQy
CNnuIAAAAAMAAAAKAAAEnHNpNTcwX21ndAAAAAAAAAIAAAACAAAAAWkyY0A0
AAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAA
AFoAAAAEAAAAAWNsb2NrLWdlbmVyYXRvckA2OQAAAAAAAwAAAA4AAAAAc2ls
YWJzLHNpNTMyOAAAAAAAAAMAAAAEAAAAWgAAAGkAAAACAAAAAgAAAAIAAAAB
aTJjLW11eEA3NQAAAAAAAwAAAAwAAAAAbnhwLHBjYTk1NDgAAAAAAwAAAAQA
AAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAB1AAAAAWky
Y0AwAAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAA
BAAAAFoAAAAAAAAAAgAAAAFpMmNAMQAAAAAAAAMAAAAEAAAACwAAAAEAAAAD
AAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAAAQAAAAIAAAABaTJjQDIAAAAA
AAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAAAAMAAAAEAAAAWgAA
AAIAAAACAAAAAWkyY0AzAAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAA
GgAAAAAAAAADAAAABAAAAFoAAAADAAAAAgAAAAFpMmNANAAAAAAAAAMAAAAE
AAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQAAABaAAAABAAAAAIA
AAABaTJjQDUAAAAAAAADAAAABAAAAAsAAAABAAAAAwAAAAQAAAAaAAAAAAAA
AAMAAAAEAAAAWgAAAAUAAAACAAAAAWkyY0A2AAAAAAAAAwAAAAQAAAALAAAA
AQAAAAMAAAAEAAAAGgAAAAAAAAADAAAABAAAAFoAAAAGAAAAAgAAAAFpMmNA
NwAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQA
AABaAAAABwAAAAIAAAACAAAAAgAAAAFtZW1vcnktY29udHJvbGxlckBmZjk2
MDAwMAAAAAAAAwAAABUAAAAAeGxueCx6eW5xbXAtb2NtYy0xLjAAAAAAAAAA
AwAAABAAAABaAAAAAP+WAAAAAAAAAAAQAAAAAAMAAAAEAAABKQAAAAQAAAAD
AAAADAAAAToAAAAAAAAACgAAAAQAAAACAAAAAXBlcmYtbW9uaXRvckBmZmEw
MDAwMAAAAAAAAAMAAAAWAAAAAHhsbngsYXhpLXBlcmYtbW9uaXRvcgAAAAAA
AAMAAAAQAAAAWgAAAAD/oAAAAAAAAAABAAAAAAADAAAADAAAAToAAAAAAAAA
GQAAAAQAAAADAAAABAAAASkAAAAEAAAAAwAAAAQAAASvAAAAAAAAAAMAAAAE
AAAEwwAAAAAAAAADAAAABAAABNUAAAABAAAAAwAAAAQAAATsAAAAAQAAAAMA
AAAEAAAFBAAAAAEAAAADAAAABAAABRoAAAABAAAAAwAAAAQAAAU3AAAACAAA
AAMAAAAEAAAFTAAAACAAAAADAAAABAAABWQAAAAgAAAAAwAAAAQAAAWEAAAA
IAAAAAMAAAAEAAAFnAAAAAEAAAADAAAACAAAAG4AAAADAAAAHwAAAAIAAAAB
cGVyZi1tb25pdG9yQGZkMGIwMDAwAAAAAAAAAwAAABYAAAAAeGxueCxheGkt
cGVyZi1tb25pdG9yAAAAAAAAAwAAABAAAABaAAAAAP0LAAAAAAAAAAEAAAAA
AAMAAAAMAAABOgAAAAAAAAB7AAAABAAAAAMAAAAEAAABKQAAAAQAAAADAAAA
BAAABK8AAAAAAAAAAwAAAAQAAATDAAAAAAAAAAMAAAAEAAAE1QAAAAYAAAAD
AAAABAAABOwAAAABAAAAAwAAAAQAAAUEAAAAAAAAAAMAAAAEAAAFGgAAAAEA
AAADAAAABAAABTcAAAAKAAAAAwAAAAQAAAVMAAAAIAAAAAMAAAAEAAAFZAAA
ACAAAAADAAAABAAABYQAAAAgAAAAAwAAAAQAAAWcAAAAAQAAAAMAAAAIAAAA
bgAAAAMAAAAcAAAAAgAAAAFwZXJmLW1vbml0b3JAZmQ0OTAwMDAAAAAAAAAD
AAAAFgAAAAB4bG54LGF4aS1wZXJmLW1vbml0b3IAAAAAAAADAAAAEAAAAFoA
AAAA/UkAAAAAAAAAAQAAAAAAAwAAAAwAAAE6AAAAAAAAAHsAAAAEAAAAAwAA
AAQAAAEpAAAABAAAAAMAAAAEAAAErwAAAAAAAAADAAAABAAABMMAAAAAAAAA
AwAAAAQAAATVAAAAAQAAAAMAAAAEAAAE7AAAAAEAAAADAAAABAAABQQAAAAA
AAAAAwAAAAQAAAUaAAAAAQAAAAMAAAAEAAAFNwAAAAgAAAADAAAABAAABUwA
AAAgAAAAAwAAAAQAAAVkAAAAIAAAAAMAAAAEAAAFhAAAACAAAAADAAAABAAA
BZwAAAABAAAAAwAAAAgAAABuAAAAAwAAABwAAAACAAAAAXBlcmYtbW9uaXRv
ckBmZmExMDAwMAAAAAAAAAMAAAAWAAAAAHhsbngsYXhpLXBlcmYtbW9uaXRv
cgAAAAAAAAMAAAAQAAAAWgAAAAD/oQAAAAAAAAABAAAAAAADAAAADAAAAToA
AAAAAAAAGQAAAAQAAAADAAAABAAAASkAAAAEAAAAAwAAAAQAAASvAAAAAAAA
AAMAAAAEAAAEwwAAAAAAAAADAAAABAAABNUAAAABAAAAAwAAAAQAAATsAAAA
AQAAAAMAAAAEAAAFBAAAAAEAAAADAAAABAAABRoAAAABAAAAAwAAAAQAAAU3
AAAACAAAAAMAAAAEAAAFTAAAACAAAAADAAAABAAABWQAAAAgAAAAAwAAAAQA
AAWEAAAAIAAAAAMAAAAEAAAFnAAAAAEAAAADAAAACAAAAG4AAAADAAAAHwAA
AAIAAAABcGNpZUBmZDBlMDAwMAAAAAAAAAMAAAATAAAAAHhsbngsbndsLXBj
aWUtMi4xMQAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAAAsAAAAD
AAAAAwAAAAQAAAAaAAAAAgAAAAMAAAAEAAACRgAAAAEAAAADAAAAAAAABbQA
AAADAAAABAAAACxwY2kAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAA8AAABOgAA
AAAAAAB2AAAABAAAAAAAAAB1AAAABAAAAAAAAAB0AAAABAAAAAAAAABzAAAA
BAAAAAAAAAByAAAABAAAAAMAAAAaAAADGW1pc2MAZHVtbXkAaW50eABtc2kx
AG1zaTAAAAAAAAADAAAABAAABcMAAAAxAAAAAwAAADAAAABaAAAAAP0OAAAA
AAAAAAAQAAAAAAD9SAAAAAAAAAAAEAAAAACAAAAAAAAAAAABAAAAAAAAAwAA
ABAAAAFYYnJlZwBwY2lyZWcAY2ZnAAAAAAMAAAA4AAABUQIAAAAAAAAA4AAA
AAAAAADgAAAAAAAAABAAAABDAAAAAAAABgAAAAAAAAAGAAAAAAAAAAIAAAAA
AAAAAwAAABAAAAXOAAAAAAAAAAAAAAAAAAAABwAAAAMAAAAIAAAF4QAAAAAA
AAD/AAAAAwAAAGAAAAXrAAAAAAAAAAAAAAAAAAAAAQAAADIAAAABAAAAAAAA
AAAAAAAAAAAAAgAAADIAAAACAAAAAAAAAAAAAAAAAAAAAwAAADIAAAADAAAA
AAAAAAAAAAAAAAAABAAAADIAAAAEAAAAAwAAAAgAAALMAAAAJgAAADsAAAAD
AAAACAAAAG4AAAADAAAAFwAAAAMAAAAEAAAF+QAAAAAAAAADAAAABAAABgoA
AAAAAAAAAwAAAAQAAAYbAAAAAAAAAAMAAAAEAAAGLAAAAAAAAAADAAAABAAA
Bj0AAAAAAAAAAwAAAAQAAAZOAAAAAAAAAAMAAAAKAAAGX1Jvb3QgUG9ydAAA
AAAAAAMAAAAEAAADKQAAAAAAAAADAAAABAAAANwAAAAxAAAAAWxlZ2FjeS1p
bnRlcnJ1cHQtY29udHJvbGxlcgAAAAADAAAAAAAAAlcAAAADAAAABAAAAAsA
AAAAAAAAAwAAAAQAAAJGAAAAAQAAAAMAAAAEAAAA3AAAADIAAAACAAAAAgAA
AAFzcGlAZmYwZjAwMDAAAAAAAAAAAwAAAAAAAAEVAAAAAwAAABUAAAAAeGxu
eCx6eW5xbXAtcXNwaS0xLjAAAAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAAD
AAAADQAAAYlyZWZfY2xrAHBjbGsAAAAAAAAAAwAAAAwAAAE6AAAAAAAAAA8A
AAAEAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAEAAAGbgAAAAEAAAADAAAAIAAA
AFoAAAAA/w8AAAAAAAAAABAAAAAAAMAAAAAAAAAACAAAAAAAAAMAAAAEAAAA
CwAAAAEAAAADAAAABAAAABoAAAAAAAAAAwAAAAQAAAMBAAAAAQAAAAMAAAAI
AAADEgAAACgAAAhzAAAAAwAAAAgAAALMAAAAJgAAAC0AAAADAAAAEAAAAG4A
AAADAAAANQAAAAMAAAAfAAAAAwAAAAQAAAZ1AAAAAQAAAAMAAAAEAAAGfQAA
AAQAAAADAAAABAAABo4AAAAEAAAAAwAAAAQAAADcAAAAEgAAAAFmbGFzaEAw
AAAAAAMAAAAVAAAAAG0yNXA4MABqZWRlYyxzcGktbm9yAAAAAAAAAAMAAAAE
AAAACwAAAAEAAAADAAAABAAAABoAAAABAAAAAwAAAAQAAABaAAAAAAAAAAMA
AAAEAAAGjgAAAAEAAAADAAAABAAABn0AAAAEAAAAAwAAAAQAAAafBm/zAAAA
AAFwYXJ0aXRpb25AMAAAAAADAAAABQAABGRib290AAAAAAAAAAMAAAAIAAAA
WgAAAAAB4AAAAAAAAgAAAAFwYXJ0aXRpb25AMQAAAAADAAAACAAABGRib290
ZW52AAAAAAMAAAAIAAAAWgHgAAAABAAAAAAAAgAAAAFwYXJ0aXRpb25AMgAA
AAADAAAABwAABGRrZXJuZWwAAAAAAAMAAAAIAAAAWgHkAAACQAAAAAAAAgAA
AAIAAAACAAAAAXJ0Y0BmZmE2MDAwMAAAAAAAAAADAAAAEAAAAAB4bG54LHp5
bnFtcC1ydGMAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAEAAAAFoAAAAA
/6YAAAAAAAAAAAEAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAYAAABOgAAAAAA
AAAaAAAABAAAAAAAAAAbAAAABAAAAAMAAAAKAAADGWFsYXJtAHNlYwAAAAAA
AAMAAAAEAAAGsQAAgAAAAAACAAAAAXp5bnFtcF9waHlAZmQ0MDAwMDAAAAAA
AwAAABcAAAAAeGxueCx6eW5xbXAtcHNndHItdjEuMQAAAAAAAwAAAAUAAAFu
b2theQAAAAAAAAADAAAAIAAAAFoAAAAA/UAAAAAAAAAABAAAAAAAAP09AAAA
AAAAAAAQAAAAAAMAAAAMAAABWHNlcmRlcwBzaW91AAAAAAMAAAAEAAAGvQAA
ADMAAAADAAAADQAABslzb2NfcmV2aXNpb24AAAAAAAAAAwAAAGAAAAbaAAAA
NAAAABAAAAA0AAAAOwAAADQAAAA8AAAANAAAAD0AAAA0AAAAPgAAADQAAAA/
AAAANAAAAEAAAAA0AAAAAwAAADQAAAAdAAAANAAAAB4AAAA0AAAAHwAAADQA
AAAgAAAAAwAAAHgAAAbhc2F0YV9yc3QAdXNiMF9jcnN0AHVzYjFfY3JzdAB1
c2IwX2hpYnJzdAB1c2IxX2hpYnJzdAB1c2IwX2FwYnJzdAB1c2IxX2FwYnJz
dABkcF9yc3QAZ2VtMF9yc3QAZ2VtMV9yc3QAZ2VtMl9yc3QAZ2VtM19yc3QA
AAAAAWxhbmUwAAAAAAAAAwAAAAQAAAbtAAAABAAAAAIAAAABbGFuZTEAAAAA
AAADAAAABAAABu0AAAAEAAAAAwAAAAQAAADcAAAAPAAAAAIAAAABbGFuZTIA
AAAAAAADAAAABAAABu0AAAAEAAAAAwAAAAQAAADcAAAAOgAAAAIAAAABbGFu
ZTMAAAAAAAADAAAABAAABu0AAAAEAAAAAwAAAAQAAADcAAAANQAAAAIAAAAC
AAAAAWFoY2lAZmQwYzAwMDAAAAAAAAADAAAADwAAAABjZXZhLGFoY2ktMXY4
NAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAEAAAAFoAAAAA/QwAAAAA
AAAAACAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAACFAAAA
BAAAAAMAAAAIAAACzAAAACYAAAAcAAAAAwAAAAQAAAMBAAAABAAAAAMAAAAI
AAAAbgAAAAMAAAAWAAAAAwAAAAQAAAb4GEAYKAAAAAMAAAAEAAAHDwYUCA4A
AAADAAAABAAAByYTCEoGAAAAAwAAAAQAAAc7lqQ//AAAAAMAAAAEAAAHUBhA
GCgAAAADAAAABAAAB2cGFAgOAAAAAwAAAAQAAAd+EwhKBgAAAAMAAAAEAAAH
k5akP/wAAAADAAAACQAAB6hzYXRhLXBoeQAAAAAAAAADAAAAFAAAB7IAAAA1
AAAAAQAAAAEAAAABB3NZQAAAAAMAAAAEAAAHtwAAAAAAAAADAAAABAAAB88A
AAAAAAAAAgAAAAFtbWNAZmYxNjAwMDAAAAAAAAAAAwAAAAAAAAEVAAAAAwAA
ACMAAAAAeGxueCx6eW5xbXAtOC45YQBhcmFzYW4sc2RoY2ktOC45YQAAAAAA
AwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAM
AAABOgAAAAAAAAAwAAAABAAAAAMAAAAQAAAAWgAAAAD/FgAAAAAAAAAAEAAA
AAADAAAAEAAAAYljbGtfeGluAGNsa19haGIAAAAAAwAAAAQAAAfnAAAAAAAA
AAMAAAAEAAADAQAAAAEAAAADAAAACAAAAxIAAAAoAAAIcAAAAAMAAAAIAAAC
zAAAACYAAAAnAAAAAwAAAAQAAAa9AAAAMwAAAAMAAAANAAAGyXNvY19yZXZp
c2lvbgAAAAAAAAADAAAABAAAAjAAAAABAAAAAwAAABcAAAScY2xrX291dF9z
ZDAAY2xrX2luX3NkMAAAAAAAAwAAABAAAABuAAAAAwAAADYAAAADAAAAHwAA
AAMAAAAEAAAA3AAAACMAAAACAAAAAW1tY0BmZjE3MDAwMAAAAAAAAAADAAAA
AAAAARUAAAADAAAAIwAAAAB4bG54LHp5bnFtcC04LjlhAGFyYXNhbixzZGhj
aS04LjlhAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAEAAABKQAAAAQA
AAADAAAADAAAAToAAAAAAAAAMQAAAAQAAAADAAAAEAAAAFoAAAAA/xcAAAAA
AAAAABAAAAAAAwAAABAAAAGJY2xrX3hpbgBjbGtfYWhiAAAAAAMAAAAEAAAH
5wAAAAEAAAADAAAABAAAAwEAAAABAAAAAwAAAAgAAAMSAAAAKAAACHEAAAAD
AAAACAAAAswAAAAmAAAAKAAAAAMAAAAEAAAGvQAAADMAAAADAAAADQAABslz
b2NfcmV2aXNpb24AAAAAAAAAAwAAAAQAAAIwAAAAAQAAAAMAAAAXAAAEnGNs
a19vdXRfc2QxAGNsa19pbl9zZDEAAAAAAAMAAAAQAAAAbgAAAAMAAAA3AAAA
AwAAAB8AAAADAAAACAAAAtpkZWZhdWx0AAAAAAMAAAAEAAAC6AAAADYAAAAD
AAAAAAAAB/YAAAADAAAABAAABDILLLyuAAAAAwAAAAQAAAf/AAAAAQAAAAMA
AAAEAAAA3AAAACQAAAACAAAAAXNwaUBmZjA0MDAwMAAAAAAAAAADAAAADgAA
AABjZG5zLHNwaS1yMXA2AAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAA
AwAAAAQAAAEpAAAABAAAAAMAAAAMAAABOgAAAAAAAAATAAAABAAAAAMAAAAQ
AAAAWgAAAAD/BAAAAAAAAAAAEAAAAAADAAAADQAAAYlyZWZfY2xrAHBjbGsA
AAAAAAAAAwAAAAQAAAALAAAAAQAAAAMAAAAEAAAAGgAAAAAAAAADAAAACAAA
AswAAAAmAAAAIwAAAAMAAAAQAAAAbgAAAAMAAAA6AAAAAwAAAB8AAAACAAAA
AXNwaUBmZjA1MDAwMAAAAAAAAAADAAAADgAAAABjZG5zLHNwaS1yMXA2AAAA
AAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMA
AAAMAAABOgAAAAAAAAAUAAAABAAAAAMAAAAQAAAAWgAAAAD/BQAAAAAAAAAA
EAAAAAADAAAADQAAAYlyZWZfY2xrAHBjbGsAAAAAAAAAAwAAAAQAAAALAAAA
AQAAAAMAAAAEAAAAGgAAAAAAAAADAAAACAAAAswAAAAmAAAAJAAAAAMAAAAQ
AAAAbgAAAAMAAAA7AAAAAwAAAB8AAAACAAAAAXRpbWVyQGZmMTEwMDAwAAAA
AAADAAAACQAAAABjZG5zLHR0YwAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAA
AAAAAAADAAAABAAAASkAAAAEAAAAAwAAACQAAAE6AAAAAAAAACQAAAAEAAAA
AAAAACUAAAAEAAAAAAAAACYAAAAEAAAAAwAAABAAAABaAAAAAP8RAAAAAAAA
AAAQAAAAAAMAAAAEAAAIDQAAACAAAAADAAAACAAAAswAAAAmAAAAGAAAAAMA
AAAIAAAAbgAAAAMAAAAfAAAAAgAAAAF0aW1lckBmZjEyMDAwMAAAAAAAAwAA
AAkAAAAAY2Rucyx0dGMAAAAAAAAAAwAAAAkAAAFuZGlzYWJsZWQAAAAAAAAA
AwAAAAQAAAEpAAAABAAAAAMAAAAkAAABOgAAAAAAAAAnAAAABAAAAAAAAAAo
AAAABAAAAAAAAAApAAAABAAAAAMAAAAQAAAAWgAAAAD/EgAAAAAAAAAAEAAA
AAADAAAABAAACA0AAAAgAAAAAwAAAAgAAALMAAAAJgAAABkAAAADAAAACAAA
AG4AAAADAAAAHwAAAAIAAAABdGltZXJAZmYxMzAwMDAAAAAAAAMAAAAJAAAA
AGNkbnMsdHRjAAAAAAAAAAMAAAAJAAABbmRpc2FibGVkAAAAAAAAAAMAAAAE
AAABKQAAAAQAAAADAAAAJAAAAToAAAAAAAAAKgAAAAQAAAAAAAAAKwAAAAQA
AAAAAAAALAAAAAQAAAADAAAAEAAAAFoAAAAA/xMAAAAAAAAAABAAAAAAAwAA
AAQAAAgNAAAAIAAAAAMAAAAIAAACzAAAACYAAAAaAAAAAwAAAAgAAABuAAAA
AwAAAB8AAAACAAAAAXRpbWVyQGZmMTQwMDAwAAAAAAADAAAACQAAAABjZG5z
LHR0YwAAAAAAAAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAABAAAASkA
AAAEAAAAAwAAACQAAAE6AAAAAAAAAC0AAAAEAAAAAAAAAC4AAAAEAAAAAAAA
AC8AAAAEAAAAAwAAABAAAABaAAAAAP8UAAAAAAAAAAAQAAAAAAMAAAAEAAAI
DQAAACAAAAADAAAACAAAAswAAAAmAAAAGwAAAAMAAAAIAAAAbgAAAAMAAAAf
AAAAAgAAAAFzZXJpYWxAZmYwMDAwMDAAAAAAAwAAAAAAAAEVAAAAAwAAAB0A
AAAAY2Rucyx1YXJ0LXIxcDEyAHhsbngseHVhcnRwcwAAAAAAAAADAAAABQAA
AW5va2F5AAAAAAAAAAMAAAAEAAABKQAAAAQAAAADAAAADAAAAToAAAAAAAAA
FQAAAAQAAAADAAAAEAAAAFoAAAAA/wAAAAAAAAAAABAAAAAAAwAAAA4AAAGJ
dWFydF9jbGsAcGNsawAAAAAAAAMAAAAIAAACzAAAACYAAAAhAAAAAwAAABAA
AABuAAAAAwAAADgAAAADAAAAHwAAAAMAAAAIAAAC2mRlZmF1bHQAAAAAAwAA
AAQAAALoAAAANwAAAAMAAAAAAAAIGQAAAAMAAAAHAAAALHNlcmlhbAAAAAAA
AwAAAAQAAAgmAAAAAAAAAAIAAAABc2VyaWFsQGZmMDEwMDAwAAAAAAMAAAAA
AAABFQAAAAMAAAAdAAAAAGNkbnMsdWFydC1yMXAxMgB4bG54LHh1YXJ0cHMA
AAAAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAASkAAAAEAAAAAwAA
AAwAAAE6AAAAAAAAABYAAAAEAAAAAwAAABAAAABaAAAAAP8BAAAAAAAAAAAQ
AAAAAAMAAAAOAAABiXVhcnRfY2xrAHBjbGsAAAAAAAADAAAACAAAAswAAAAm
AAAAIgAAAAMAAAAQAAAAbgAAAAMAAAA5AAAAAwAAAB8AAAADAAAACAAAAtpk
ZWZhdWx0AAAAAAMAAAAEAAAC6AAAADgAAAADAAAAAAAACBkAAAADAAAABwAA
ACxzZXJpYWwAAAAAAAMAAAAEAAAIJgAAAAEAAAACAAAAAXVzYjBAZmY5ZDAw
MDAAAAAAAAADAAAABAAAAAsAAAACAAAAAwAAAAQAAAAaAAAAAgAAAAMAAAAF
AAABbm9rYXkAAAAAAAAAAwAAABEAAAAAeGxueCx6eW5xbXAtZHdjMwAAAAAA
AAADAAAAEAAAAFoAAAAA/50AAAAAAAAAAAEAAAAAAwAAABAAAAGJYnVzX2Ns
awByZWZfY2xrAAAAAAMAAAAIAAACzAAAACYAAAAWAAAAAwAAAAAAAAFRAAAA
AwAAAAQAAAa9AAAAMwAAAAMAAAANAAAGyXNvY19yZXZpc2lvbgAAAAAAAAAD
AAAAEAAAAG4AAAADAAAAIAAAAAMAAAAiAAAAAwAAAAgAAALaZGVmYXVsdAAA
AAADAAAABAAAAugAAAA5AAAAAwAAAAQAAAMpAAAAAQAAAAMAAAAEAAAIMgAA
AAAAAAADAAAABAAACEQAAAAAAAAAAWR3YzNAZmUyMDAwMDAAAAAAAAADAAAA
CgAAAABzbnBzLGR3YzMAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAQ
AAAAWgAAAAD+IAAAAAAAAAAEAAAAAAADAAAABAAAASkAAAAEAAAAAwAAABMA
AAMZZHdjX3VzYjMAb3RnAGhpYmVyAAAAAAADAAAAJAAAAToAAAAAAAAAQQAA
AAQAAAAAAAAARQAAAAQAAAAAAAAASwAAAAQAAAADAAAABAAAAwEAAAABAAAA
AwAAAAgAAAMSAAAAKAAACGAAAAADAAAABAAACFgAAAAgAAAAAwAAAAAAAAh7
AAAAAwAAAAAAAAiNAAAAAwAAAAAAAAitAAAAAwAAAAAAAAjKAAAAAwAAAAUA
AAjhaG9zdAAAAAAAAAADAAAAAAAACOkAAAADAAAACQAAB6h1c2IzLXBoeQAA
AAAAAAADAAAAFAAAB7IAAAA6AAAABAAAAAAAAAACAYy6gAAAAAMAAAAMAAAI
/3N1cGVyLXNwZWVkAAAAAAMAAAAEAAAA3AAAABAAAAACAAAAAgAAAAF1c2Ix
QGZmOWUwMDAwAAAAAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAEAAAAGgAAAAIA
AAADAAAACQAAAW5kaXNhYmxlZAAAAAAAAAADAAAAEQAAAAB4bG54LHp5bnFt
cC1kd2MzAAAAAAAAAAMAAAAQAAAAWgAAAAD/ngAAAAAAAAAAAQAAAAADAAAA
EAAAAYlidXNfY2xrAHJlZl9jbGsAAAAAAwAAAAgAAALMAAAAJgAAABcAAAAD
AAAAAAAAAVEAAAADAAAABAAABr0AAAAzAAAAAwAAAA0AAAbJc29jX3Jldmlz
aW9uAAAAAAAAAAMAAAAQAAAAbgAAAAMAAAAhAAAAAwAAACIAAAABZHdjM0Bm
ZTMwMDAwMAAAAAAAAAMAAAAKAAAAAHNucHMsZHdjMwAAAAAAAAMAAAAJAAAB
bmRpc2FibGVkAAAAAAAAAAMAAAAQAAAAWgAAAAD+MAAAAAAAAAAEAAAAAAAD
AAAABAAAASkAAAAEAAAAAwAAABMAAAMZZHdjX3VzYjMAb3RnAGhpYmVyAAAA
AAADAAAAJAAAAToAAAAAAAAARgAAAAQAAAAAAAAASgAAAAQAAAAAAAAATAAA
AAQAAAADAAAABAAAAwEAAAABAAAAAwAAAAgAAAMSAAAAKAAACGEAAAADAAAA
BAAACFgAAAAgAAAAAwAAAAAAAAh7AAAAAwAAAAAAAAiNAAAAAwAAAAAAAAit
AAAAAwAAAAAAAAjKAAAAAwAAAAQAAADcAAAAEQAAAAIAAAACAAAAAXdhdGNo
ZG9nQGZkNGQwMDAwAAAAAAAAAwAAAA4AAAAAY2Rucyx3ZHQtcjFwMgAAAAAA
AAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAMAAAB
OgAAAAAAAABxAAAAAQAAAAMAAAAQAAAAWgAAAAD9TQAAAAAAAAAAEAAAAAAD
AAAABAAACQ0AAAA8AAAAAwAAAAAAAAkZAAAAAwAAAAgAAABuAAAAAwAAAEsA
AAACAAAAAXdhdGNoZG9nQGZmMTUwMDAwAAAAAAAAAwAAAA4AAAAAY2Rucyx3
ZHQtcjFwMgAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAQAAAEpAAAA
BAAAAAMAAAAMAAABOgAAAAAAAAA0AAAAAQAAAAMAAAAQAAAAWgAAAAD/FQAA
AAAAAAAAEAAAAAADAAAABAAACQ0AAAAKAAAAAwAAAAgAAABuAAAAAwAAAHAA
AAACAAAAAWFtc0BmZmE1MDAwMAAAAAAAAAADAAAAEAAAAAB4bG54LHp5bnFt
cC1hbXMAAAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAABAAAASkAAAAEAAAA
AwAAAAwAAAE6AAAAAAAAADgAAAAEAAAAAwAAAAgAAAMZYW1zLWlycQAAAAAD
AAAAEAAAAFoAAAAA/6UAAAAAAAAAAAgAAAAAAwAAAAkAAAFYYW1zLWJhc2UA
AAAAAAAAAwAAAAQAAAALAAAAAgAAAAMAAAAEAAAAGgAAAAIAAAADAAAABAAA
BFIAAAABAAAAAwAAAAAAAAFRAAAAAwAAAAgAAABuAAAAAwAAAEYAAAABYW1z
X3BzQGZmYTUwODAwAAAAAAMAAAATAAAAAHhsbngsenlucW1wLWFtcy1wcwAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAAEAAAAFoAAAAA/6UIAAAAAAAA
AAQAAAAAAgAAAAFhbXNfcGxAZmZhNTBjMDAAAAAAAwAAABMAAAAAeGxueCx6
eW5xbXAtYW1zLXBsAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMAAAAQAAAA
WgAAAAD/pQwAAAAAAAAABAAAAAACAAAAAgAAAAFkbWFAZmQ0YzAwMDAAAAAA
AAAAAwAAAAsAAAAAeGxueCxkcGRtYQAAAAAAAwAAAAUAAAFub2theQAAAAAA
AAADAAAAEAAAAFoAAAAA/UwAAAAAAAAAABAAAAAAAwAAAAwAAAE6AAAAAAAA
AHoAAAAEAAAAAwAAAAQAAAEpAAAABAAAAAMAAAAIAAABiWF4aV9jbGsAAAAA
AwAAAAgAAALMAAAAJgAAACkAAAADAAAABAAACSoAAAAGAAAAAwAAAAQAAAk3
AAAAAQAAAAMAAAAIAAAAbgAAAAMAAAAUAAAAAwAAAAQAAADcAAAAPQAAAAFk
bWEtdmlkZW8wY2hhbm5lbAAAAAAAAAMAAAAMAAAAAHhsbngsdmlkZW8wAAAA
AAIAAAABZG1hLXZpZGVvMWNoYW5uZWwAAAAAAAADAAAADAAAAAB4bG54LHZp
ZGVvMQAAAAACAAAAAWRtYS12aWRlbzJjaGFubmVsAAAAAAAAAwAAAAwAAAAA
eGxueCx2aWRlbzIAAAAAAgAAAAFkbWEtZ3JhcGhpY3NjaGFubmVsAAAAAAMA
AAAOAAAAAHhsbngsZ3JhcGhpY3MAAAAAAAACAAAAAWRtYS1hdWRpbzBjaGFu
bmVsAAAAAAAAAwAAAAwAAAAAeGxueCxhdWRpbzAAAAAAAgAAAAFkbWEtYXVk
aW8xY2hhbm5lbAAAAAAAAAMAAAAMAAAAAHhsbngsYXVkaW8xAAAAAAIAAAAC
AAAAAXp5bnFtcC1kaXNwbGF5QGZkNGEwMDAwAAAAAAMAAAAWAAAAAHhsbngs
enlucW1wLWRwc3ViLTEuNwAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAA
AEAAAABaAAAAAP1KAAAAAAAAAAAQAAAAAAD9SqAAAAAAAAAAEAAAAAAA/Uqw
AAAAAAAAABAAAAAAAP1KwAAAAAAAAAAQAAAAAAMAAAAUAAABWGRwAGJsZW5k
AGF2X2J1ZgBhdWQAAAAAAwAAAAwAAAE6AAAAAAAAAHcAAAAEAAAAAwAAAAQA
AAEpAAAABAAAAAMAAAAqAAABiWRwX2FwYl9jbGsAZHBfYXVkX2NsawBkcF92
dGNfcGl4ZWxfY2xrX2luAAAAAAAAAwAAAAgAAALMAAAAJgAAACkAAAADAAAA
FAAAAG4AAAA7AAAAAwAAABEAAAADAAAAEAAAAAMAAAAIAAAHqGRwLXBoeTAA
AAAAAwAAABQAAAeyAAAAPAAAAAYAAAAAAAAAAwGb/MAAAAADAAAABAAACUIA
AAABAAAAAXZpZC1sYXllcgAAAAAAAAMAAAAPAAAJUXZpZDAAdmlkMQB2aWQy
AAAAAAADAAAAGAAACVsAAAA9AAAAAAAAAD0AAAABAAAAPQAAAAIAAAACAAAA
AWdmeC1sYXllcgAAAAAAAAMAAAAFAAAJUWdmeDAAAAAAAAAAAwAAAAgAAAlb
AAAAPQAAAAMAAAACAAAAAWkyYy1idXMAAAAAAgAAAAF6eW5xbXBfZHBfc25k
X2NvZGVjMAAAAAAAAAADAAAAEgAAAAB4bG54LGRwLXNuZC1jb2RlYwAAAAAA
AAMAAAAIAAABiWF1ZF9jbGsAAAAAAwAAAAgAAABuAAAAAwAAABEAAAADAAAA
BQAAAW5va2F5AAAAAAAAAAMAAAAEAAAA3AAAAEAAAAACAAAAAXp5bnFtcF9k
cF9zbmRfcGNtMAAAAAAAAwAAABAAAAAAeGxueCxkcC1zbmQtcGNtAAAAAAMA
AAAIAAAJWwAAAD0AAAAEAAAAAwAAAAMAAAlRdHgAAAAAAAMAAAAFAAABbm9r
YXkAAAAAAAAAAwAAAAQAAADcAAAAPgAAAAIAAAABenlucW1wX2RwX3NuZF9w
Y20xAAAAAAADAAAAEAAAAAB4bG54LGRwLXNuZC1wY20AAAAAAwAAAAgAAAlb
AAAAPQAAAAUAAAADAAAAAwAACVF0eAAAAAAAAwAAAAUAAAFub2theQAAAAAA
AAADAAAABAAAANwAAAA/AAAAAgAAAAF6eW5xbXBfZHBfc25kX2NhcmQAAAAA
AAMAAAARAAAAAHhsbngsZHAtc25kLWNhcmQAAAAAAAAAAwAAAAgAAAlgAAAA
PgAAAD8AAAADAAAABAAACXAAAABAAAAAAwAAAAUAAAFub2theQAAAAAAAAAC
AAAAAgAAAAIAAAABZmNsazAAAAAAAAADAAAABQAAAW5va2F5AAAAAAAAAAMA
AAAKAAAAAHhsbngsZmNsawAAAAAAAAMAAAAIAAAAbgAAAAMAAABHAAAAAgAA
AAFmY2xrMQAAAAAAAAMAAAAFAAABbm9rYXkAAAAAAAAAAwAAAAoAAAAAeGxu
eCxmY2xrAAAAAAAAAwAAAAgAAABuAAAAAwAAAEgAAAACAAAAAWZjbGsyAAAA
AAAAAwAAAAUAAAFub2theQAAAAAAAAADAAAACgAAAAB4bG54LGZjbGsAAAAA
AAADAAAACAAAAG4AAAADAAAASQAAAAIAAAABZmNsazMAAAAAAAADAAAABQAA
AW5va2F5AAAAAAAAAAMAAAAKAAAAAHhsbngsZmNsawAAAAAAAAMAAAAIAAAA
bgAAAAMAAABKAAAAAgAAAAFwc3NfcmVmX2NsawAAAAADAAAAAAAAARUAAAAD
AAAADAAAAABmaXhlZC1jbG9jawAAAAADAAAABAAAAjAAAAAAAAAAAwAAAAQA
AAQyAfyTUAAAAAMAAAAEAAAA3AAAAAYAAAACAAAAAXZpZGVvX2NsawAAAAAA
AAMAAAAAAAABFQAAAAMAAAAMAAAAAGZpeGVkLWNsb2NrAAAAAAMAAAAEAAAC
MAAAAAAAAAADAAAABAAABDIBm/zAAAAAAwAAAAQAAADcAAAABwAAAAIAAAAB
cHNzX2FsdF9yZWZfY2xrAAAAAAMAAAAAAAABFQAAAAMAAAAMAAAAAGZpeGVk
LWNsb2NrAAAAAAMAAAAEAAACMAAAAAAAAAADAAAABAAABDIAAAAAAAAAAwAA
AAQAAADcAAAACAAAAAIAAAABZ3RfY3J4X3JlZl9jbGsAAAAAAAMAAAAAAAAB
FQAAAAMAAAAMAAAAAGZpeGVkLWNsb2NrAAAAAAMAAAAEAAACMAAAAAAAAAAD
AAAABAAABDIGb/MAAAAAAwAAAAQAAADcAAAACgAAAAIAAAABYXV4X3JlZl9j
bGsAAAAAAwAAAAAAAAEVAAAAAwAAAAwAAAAAZml4ZWQtY2xvY2sAAAAAAwAA
AAQAAAIwAAAAAAAAAAMAAAAEAAAEMgGb/MAAAAADAAAABAAAANwAAAAJAAAA
AgAAAAFkcF9hY2xrAAAAAAMAAAAMAAAAAGZpeGVkLWNsb2NrAAAAAAMAAAAE
AAACMAAAAAAAAAADAAAABAAABDIF9eEAAAAAAwAAAAQAAAmCAAAAZAAAAAMA
AAAEAAAA3AAAADsAAAACAAAAAWdwaW8ta2V5cwAAAAAAAAMAAAAKAAAAAGdw
aW8ta2V5cwAAAAAAAAMAAAAEAAAACwAAAAEAAAADAAAABAAAABoAAAAAAAAA
AwAAAAAAAAmRAAAAAXN3MTkAAAAAAAAAAwAAAAUAAARkc3cxOQAAAAAAAAAD
AAAADAAABCIAAAAuAAAAFgAAAAAAAAADAAAABAAACZwAAABsAAAAAwAAAAAA
AAmnAAAAAwAAAAAAAAmRAAAAAgAAAAIAAAABbGVkcwAAAAAAAAADAAAACgAA
AABncGlvLWxlZHMAAAAAAAABaGVhcnRiZWF0LWxlZAAAAAAAAAMAAAAKAAAE
ZGhlYXJ0YmVhdAAAAAAAAAMAAAAMAAAEIgAAAC4AAAAXAAAAAAAAAAMAAAAK
AAAJtWhlYXJ0YmVhdAAAAAAAAAIAAAACAAAAAWNob3NlbgAAAAAAAwAAAC4A
AAnLL2FtYmEvaTJjQGZmMDMwMDAwL2kyYy1tdXhANzQvaTJjQDAvZWVwcm9t
QDU0AAAAAAAAAwAAAEQAAAnXIGVhcmx5Y29uIGNvbnNvbGU9dHR5UFMwLDEx
NTIwMCBjbGtfaWdub3JlX3VudXNlZCByb290PS9kZXYvcmFtMCBydwAAAAAD
AAAAEQAACeBzZXJpYWwwOjExNTIwMG44AAAAAAAAAAIAAAABaW5hMjI2LXU3
NgAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnsAAAAQQAA
AAAAAABBAAAAAQAAAEEAAAACAAAAQQAAAAMAAAACAAAAAWluYTIyNi11NzcA
AAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAgAAAJ7AAAAEIAAAAA
AAAAQgAAAAEAAABCAAAAAgAAAEIAAAADAAAAAgAAAAFpbmEyMjYtdTc4AAAA
AAADAAAACgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAACewAAABDAAAAAAAA
AEMAAAABAAAAQwAAAAIAAABDAAAAAwAAAAIAAAABaW5hMjI2LXU4NwAAAAAA
AwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnsAAAARAAAAAAAAABE
AAAAAQAAAEQAAAACAAAARAAAAAMAAAACAAAAAWluYTIyNi11ODUAAAAAAAMA
AAAKAAAAAGlpby1od21vbgAAAAAAAAMAAAAgAAAJ7AAAAEUAAAAAAAAARQAA
AAEAAABFAAAAAgAAAEUAAAADAAAAAgAAAAFpbmEyMjYtdTg2AAAAAAADAAAA
CgAAAABpaW8taHdtb24AAAAAAAADAAAAIAAACewAAABGAAAAAAAAAEYAAAAB
AAAARgAAAAIAAABGAAAAAwAAAAIAAAABaW5hMjI2LXU5MwAAAAAAAwAAAAoA
AAAAaWlvLWh3bW9uAAAAAAAAAwAAACAAAAnsAAAARwAAAAAAAABHAAAAAQAA
AEcAAAACAAAARwAAAAMAAAACAAAAAWluYTIyNi11ODgAAAAAAAMAAAAKAAAA
AGlpby1od21vbgAAAAAAAAMAAAAgAAAJ7AAAAEgAAAAAAAAASAAAAAEAAABI
AAAAAgAAAEgAAAADAAAAAgAAAAFpbmEyMjYtdTE1AAAAAAADAAAACgAAAABp
aW8taHdtb24AAAAAAAADAAAAIAAACewAAABJAAAAAAAAAEkAAAABAAAASQAA
AAIAAABJAAAAAwAAAAIAAAABaW5hMjI2LXU5MgAAAAAAAwAAAAoAAAAAaWlv
LWh3bW9uAAAAAAAAAwAAACAAAAnsAAAASgAAAAAAAABKAAAAAQAAAEoAAAAC
AAAASgAAAAMAAAACAAAAAWluYTIyNi11NzkAAAAAAAMAAAAKAAAAAGlpby1o
d21vbgAAAAAAAAMAAAAgAAAJ7AAAAEsAAAAAAAAASwAAAAEAAABLAAAAAgAA
AEsAAAADAAAAAgAAAAFpbmEyMjYtdTgxAAAAAAADAAAACgAAAABpaW8taHdt
b24AAAAAAAADAAAAIAAACewAAABMAAAAAAAAAEwAAAABAAAATAAAAAIAAABM
AAAAAwAAAAIAAAABaW5hMjI2LXU4MAAAAAAAAwAAAAoAAAAAaWlvLWh3bW9u
AAAAAAAAAwAAACAAAAnsAAAATQAAAAAAAABNAAAAAQAAAE0AAAACAAAATQAA
AAMAAAACAAAAAWluYTIyNi11ODQAAAAAAAMAAAAKAAAAAGlpby1od21vbgAA
AAAAAAMAAAAgAAAJ7AAAAE4AAAAAAAAATgAAAAEAAABOAAAAAgAAAE4AAAAD
AAAAAgAAAAFpbmEyMjYtdTE2AAAAAAADAAAACgAAAABpaW8taHdtb24AAAAA
AAADAAAAIAAACewAAABPAAAAAAAAAE8AAAABAAAATwAAAAIAAABPAAAAAwAA
AAIAAAABaW5hMjI2LXU2NQAAAAAAAwAAAAoAAAAAaWlvLWh3bW9uAAAAAAAA
AwAAACAAAAnsAAAAUAAAAAAAAABQAAAAAQAAAFAAAAACAAAAUAAAAAMAAAAC
AAAAAWluYTIyNi11NzQAAAAAAAMAAAAKAAAAAGlpby1od21vbgAAAAAAAAMA
AAAgAAAJ7AAAAFEAAAAAAAAAUQAAAAEAAABRAAAAAgAAAFEAAAADAAAAAgAA
AAFpbmEyMjYtdTc1AAAAAAADAAAACgAAAABpaW8taHdtb24AAAAAAAADAAAA
IAAACewAAABSAAAAAAAAAFIAAAABAAAAUgAAAAIAAABSAAAAAwAAAAIAAAAB
YWxpYXNlcwAAAAADAAAAGAAACfgvYW1iYS9ldGhlcm5ldEBmZjBlMDAwMAAA
AAADAAAAEwAACgIvYW1iYS9pMmNAZmYwMjAwMDAAAAAAAAMAAAATAAAKBy9h
bWJhL2kyY0BmZjAzMDAwMAAAAAAAAwAAABYAAAoML2FtYmEvc2VyaWFsQGZm
MDAwMDAwAAAAAAAAAwAAABYAAAoUL2FtYmEvc2VyaWFsQGZmMDEwMDAwAAAA
AAAAAwAAABMAAAocL2FtYmEvc3BpQGZmMGYwMDAwAAAAAAACAAAAAW1lbW9y
eQAAAAAAAwAAAAcAAAAsbWVtb3J5AAAAAAADAAAAIAAAAFoAAAAAAAAAAAAA
AAB/8AAAAAAACAAAAAAAAAAAgAAAAAAAAAIAAAACAAAACWNvbXBhdGlibGUA
I2FkZHJlc3MtY2VsbHMAI3NpemUtY2VsbHMAbW9kZWwAZGV2aWNlX3R5cGUA
ZW5hYmxlLW1ldGhvZABvcGVyYXRpbmctcG9pbnRzLXYyAHJlZwBjcHUtaWRs
ZS1zdGF0ZXMAY2xvY2tzAGVudHJ5LW1ldGhvZABhcm0scHNjaS1zdXNwZW5k
LXBhcmFtAGxvY2FsLXRpbWVyLXN0b3AAZW50cnktbGF0ZW5jeS11cwBleGl0
LWxhdGVuY3ktdXMAbWluLXJlc2lkZW5jeS11cwBwaGFuZGxlAG9wcC1zaGFy
ZWQAb3BwLWh6AG9wcC1taWNyb3ZvbHQAY2xvY2stbGF0ZW5jeS1ucwB1LWJv
b3QsZG0tcHJlLXJlbG9jAGludGVycnVwdC1wYXJlbnQAaW50ZXJydXB0cwB4
bG54LGlwaS1pZAByYW5nZXMAcmVnLW5hbWVzACNtYm94LWNlbGxzAHN0YXR1
cwAjcG93ZXItZG9tYWluLWNlbGxzAGNsb2NrLW5hbWVzAG1ib3hlcwBtYm94
LW5hbWVzACNyZXNldC1jZWxscwBncm91cHMAZnVuY3Rpb24AYmlhcy1wdWxs
LXVwAHNsZXctcmF0ZQBpby1zdGFuZGFyZABwaW5zAGJpYXMtaGlnaC1pbXBl
ZGFuY2UAYmlhcy1kaXNhYmxlAGxvdy1wb3dlci1kaXNhYmxlAGxvdy1wb3dl
ci1lbmFibGUAI2Nsb2NrLWNlbGxzAGZwZ2EtbWdyACNpbnRlcnJ1cHQtY2Vs
bHMAaW50ZXJydXB0LWNvbnRyb2xsZXIAbnVtX2NwdXMAbnVtX2ludGVycnVw
dHMAI2lvbW11LWNlbGxzACNnbG9iYWwtaW50ZXJydXB0cwBtbXUtbWFzdGVy
cwB0eC1maWZvLWRlcHRoAHJ4LWZpZm8tZGVwdGgAcG93ZXItZG9tYWlucwBw
aW5jdHJsLW5hbWVzAHBpbmN0cmwtMAB4bG54LGJ1cy13aWR0aAAjc3RyZWFt
LWlkLWNlbGxzAGlvbW11cwBpbnRlcnJ1cHQtbmFtZXMAeGxueCx0ei1ub25z
ZWN1cmUAcGh5LWhhbmRsZQBwaHktbW9kZQB4bG54LHB0cC1lbmV0LWNsb2Nr
AGxvY2FsLW1hYy1hZGRyZXNzAHRpLHJ4LWludGVybmFsLWRlbGF5AHRpLHR4
LWludGVybmFsLWRlbGF5AHRpLGZpZm8tZGVwdGgAdGksZHA4Mzg2Ny1yeGN0
cmwtc3RyYXAtcXVpcmsAI2dwaW8tY2VsbHMAZ3Bpby1jb250cm9sbGVyAGVt
aW8tZ3Bpby13aWR0aABncGlvLW1hc2staGlnaABncGlvLW1hc2stbG93AHBp
bmN0cmwtMQBzY2wtZ3Bpb3MAc2RhLWdwaW9zAGNsb2NrLWZyZXF1ZW5jeQBn
cGlvLWxpbmUtbmFtZXMAI2lvLWNoYW5uZWwtY2VsbHMAbGFiZWwAc2h1bnQt
cmVzaXN0b3IAdGVtcGVyYXR1cmUtc3RhYmlsaXR5AGZhY3RvcnktZm91dABj
bG9jay1vdXRwdXQtbmFtZXMAeGxueCxlbmFibGUtcHJvZmlsZQB4bG54LGVu
YWJsZS10cmFjZQB4bG54LG51bS1tb25pdG9yLXNsb3RzAHhsbngsZW5hYmxl
LWV2ZW50LWNvdW50AHhsbngsZW5hYmxlLWV2ZW50LWxvZwB4bG54LGhhdmUt
c2FtcGxlZC1tZXRyaWMtY250AHhsbngsbnVtLW9mLWNvdW50ZXJzAHhsbngs
bWV0cmljLWNvdW50LXdpZHRoAHhsbngsbWV0cmljcy1zYW1wbGUtY291bnQt
d2lkdGgAeGxueCxnbG9iYWwtY291bnQtd2lkdGgAeGxueCxtZXRyaWMtY291
bnQtc2NhbGUAbXNpLWNvbnRyb2xsZXIAbXNpLXBhcmVudABpbnRlcnJ1cHQt
bWFwLW1hc2sAYnVzLXJhbmdlAGludGVycnVwdC1tYXAAeGxueCxiYXIwLWVu
YWJsZQB4bG54LGJhcjEtZW5hYmxlAHhsbngsYmFyMi1lbmFibGUAeGxueCxi
YXIzLWVuYWJsZQB4bG54LGJhcjQtZW5hYmxlAHhsbngsYmFyNS1lbmFibGUA
eGxueCxwY2llLW1vZGUAbnVtLWNzAGlzLWR1YWwAc3BpLXJ4LWJ1cy13aWR0
aABzcGktdHgtYnVzLXdpZHRoAHNwaS1tYXgtZnJlcXVlbmN5AGNhbGlicmF0
aW9uAG52bWVtLWNlbGxzAG52bWVtLWNlbGwtbmFtZXMAcmVzZXRzAHJlc2V0
LW5hbWVzACNwaHktY2VsbHMAY2V2YSxwMC1jb21pbml0LXBhcmFtcwBjZXZh
LHAwLWNvbXdha2UtcGFyYW1zAGNldmEscDAtYnVyc3QtcGFyYW1zAGNldmEs
cDAtcmV0cnktcGFyYW1zAGNldmEscDEtY29taW5pdC1wYXJhbXMAY2V2YSxw
MS1jb213YWtlLXBhcmFtcwBjZXZhLHAxLWJ1cnN0LXBhcmFtcwBjZXZhLHAx
LXJldHJ5LXBhcmFtcwBwaHktbmFtZXMAcGh5cwB4bG54LHR6LW5vbnNlY3Vy
ZS1zYXRhMAB4bG54LHR6LW5vbnNlY3VyZS1zYXRhMQB4bG54LGRldmljZV9p
ZABuby0xLTgtdgB4bG54LG1pb19iYW5rAHRpbWVyLXdpZHRoAGN0cy1vdmVy
cmlkZQBwb3J0LW51bWJlcgB4bG54LHVzYi1wb2xhcml0eQB4bG54LHVzYi1y
ZXNldC1tb2RlAHNucHMscXVpcmstZnJhbWUtbGVuZ3RoLWFkanVzdG1lbnQA
c25wcyxyZWZjbGtfZmxhZGoAc25wcyxlbmFibGVfZ3VjdGwxX3Jlc3VtZV9x
dWlyawBzbnBzLGVuYWJsZV9ndWN0bDFfaXBkX3F1aXJrAHNucHMseGhjaS1z
dHJlYW0tcXVpcmsAZHJfbW9kZQBzbnBzLHVzYjNfbHBtX2NhcGFibGUAbWF4
aW11bS1zcGVlZAB0aW1lb3V0LXNlYwByZXNldC1vbi10aW1lb3V0AGRtYS1j
aGFubmVscwAjZG1hLWNlbGxzAHhsbngsbWF4LWxhbmVzAGRtYS1uYW1lcwBk
bWFzAHhsbngsZHAtc25kLXBjbQB4bG54LGRwLXNuZC1jb2RlYwBjbG9jay1h
Y2N1cmFjeQBhdXRvcmVwZWF0AGxpbnV4LGNvZGUAd2FrZXVwLXNvdXJjZQBs
aW51eCxkZWZhdWx0LXRyaWdnZXIAeGxueCxlZXByb20AYm9vdGFyZ3MAc3Rk
b3V0LXBhdGgAaW8tY2hhbm5lbHMAZXRoZXJuZXQwAGkyYzAAaTJjMQBzZXJp
YWwwAHNlcmlhbDEAc3BpMAA=

--8323329-468115872-1623805343=:24906
Content-Type: text/plain; charset=US-ASCII; name=full-revert
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2106151820090.24906@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: attachment; filename=full-revert

ZGlmZiAtLWdpdCBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FybS9zbW11
LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vc21tdS5jDQppbmRl
eCAxYTY4YzJhYjNiLi4wN2I4Nzg1MzgwIDEwMDY0NA0KLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvYXJtL3NtbXUuYw0KKysrIGIveGVuL2RyaXZl
cnMvcGFzc3Rocm91Z2gvYXJtL3NtbXUuYw0KQEAgLTU5Nyw3ICs1OTcsNiBA
QCBlbnVtIGFybV9zbW11X2FyY2hfdmVyc2lvbiB7DQogfTsNCiANCiBzdHJ1
Y3QgYXJtX3NtbXVfczJjciB7DQotCWludAkJCQljb3VudDsNCiAJZW51bSBh
cm1fc21tdV9zMmNyX3R5cGUJCXR5cGU7DQogCWVudW0gYXJtX3NtbXVfczJj
cl9wcml2Y2ZnCXByaXZjZmc7DQogCXU4CQkJCWNibmR4Ow0KQEAgLTYxNCw3
ICs2MTMsNiBAQCBzdHJ1Y3QgYXJtX3NtbXVfc21yIHsNCiB9Ow0KIA0KIHN0
cnVjdCBhcm1fc21tdV9tYXN0ZXJfY2ZnIHsNCi0Jc3RydWN0IGFybV9zbW11
X2RldmljZQkJKnNtbXU7DQogCWludAkJCQludW1fc3RyZWFtaWRzOw0KIAl1
MTYJCQkJc3RyZWFtaWRzW01BWF9NQVNURVJfU1RSRUFNSURTXTsNCiAJczE2
CQkJCXNtZW5keFtNQVhfTUFTVEVSX1NUUkVBTUlEU107DQpAQCAtNjU3LDcg
KzY1NSw2IEBAIHN0cnVjdCBhcm1fc21tdV9kZXZpY2Ugew0KIAl1MTYJCQkJ
c21yX21hc2tfbWFzazsNCiAJc3RydWN0IGFybV9zbW11X3NtcgkJKnNtcnM7
DQogCXN0cnVjdCBhcm1fc21tdV9zMmNyCQkqczJjcnM7DQotCXNwaW5sb2Nr
X3QJCQlzdHJlYW1fbWFwX2xvY2s7DQogDQogCXVuc2lnbmVkIGxvbmcJCQlz
MV9pbnB1dF9zaXplOw0KIAl1bnNpZ25lZCBsb25nCQkJczFfb3V0cHV0X3Np
emU7DQpAQCAtMTQxMCw2ICsxNDA3LDIzIEBAIHN0YXRpYyB2b2lkIGFybV9z
bW11X2RvbWFpbl9kZXN0cm95KHN0cnVjdCBpb21tdV9kb21haW4gKmRvbWFp
bikNCiAJa2ZyZWUoc21tdV9kb21haW4pOw0KIH0NCiANCitzdGF0aWMgaW50
IGFybV9zbW11X2FsbG9jX3NtcihzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpz
bW11KQ0KK3sNCisJaW50IGk7DQorDQorCWZvciAoaSA9IDA7IGkgPCBzbW11
LT5udW1fbWFwcGluZ19ncm91cHM7IGkrKykNCisJCWlmICghY21weGNoZygm
c21tdS0+c21yc1tpXS52YWxpZCwgZmFsc2UsIHRydWUpKQ0KKwkJCXJldHVy
biBpOw0KKw0KKwlyZXR1cm4gSU5WQUxJRF9TTUVORFg7DQorfQ0KKw0KK3N0
YXRpYyB2b2lkIGFybV9zbW11X2ZyZWVfc21yKHN0cnVjdCBhcm1fc21tdV9k
ZXZpY2UgKnNtbXUsIGludCBpZHgpDQorew0KKwl3cml0ZWxfcmVsYXhlZCh+
U01SX1ZBTElELCBBUk1fU01NVV9HUjAoc21tdSkgKyBBUk1fU01NVV9HUjBf
U01SKGlkeCkpOw0KKwl3cml0ZV9hdG9taWMoJnNtbXUtPnNtcnNbaWR4XS52
YWxpZCwgZmFsc2UpOw0KK30NCisNCiBzdGF0aWMgdm9pZCBhcm1fc21tdV93
cml0ZV9zbXIoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSwgaW50IGlk
eCkNCiB7DQogCXN0cnVjdCBhcm1fc21tdV9zbXIgKnNtciA9IHNtbXUtPnNt
cnMgKyBpZHg7DQpAQCAtMTQzOCwxMzIgKzE0NTIsOTggQEAgc3RhdGljIHZv
aWQgYXJtX3NtbXVfd3JpdGVfc21lKHN0cnVjdCBhcm1fc21tdV9kZXZpY2Ug
KnNtbXUsIGludCBpZHgpDQogCQlhcm1fc21tdV93cml0ZV9zbXIoc21tdSwg
aWR4KTsNCiB9DQogDQotc3RhdGljIGludCBhcm1fc21tdV9maW5kX3NtZShz
dHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11LCB1MTYgaWQsIHUxNiBtYXNr
KQ0KK3N0YXRpYyBpbnQgYXJtX3NtbXVfbWFzdGVyX2FsbG9jX3NtZXMoc3Ry
dWN0IGFybV9zbW11X2RldmljZSAqc21tdSwNCisJCQkJICAgICAgc3RydWN0
IGFybV9zbW11X21hc3Rlcl9jZmcgKmNmZykNCiB7DQogCXN0cnVjdCBhcm1f
c21tdV9zbXIgKnNtcnMgPSBzbW11LT5zbXJzOw0KLQlpbnQgaSwgZnJlZV9p
ZHggPSAtRU5PU1BDOw0KKwlpbnQgaSwgaWR4Ow0KIA0KLQkvKiBTdHJlYW0g
aW5kZXhpbmcgaXMgYmxpc3NmdWxseSBlYXN5ICovDQotCWlmICghc21ycykN
Ci0JCXJldHVybiBpZDsNCisJLyogQWxsb2NhdGUgdGhlIFNNUnMgb24gdGhl
IFNNTVUgKi8NCisJZm9yX2VhY2hfY2ZnX3NtZShjZmcsIGksIGlkeCkgew0K
KwkJaWYgKGlkeCAhPSBJTlZBTElEX1NNRU5EWCkNCisJCQlyZXR1cm4gLUVF
WElTVDsNCiANCi0JLyogVmFsaWRhdGluZyBTTVJzIGlzLi4uIGxlc3Mgc28g
Ki8NCi0JZm9yIChpID0gMDsgaSA8IHNtbXUtPm51bV9tYXBwaW5nX2dyb3Vw
czsgKytpKSB7DQotCQlpZiAoIXNtcnNbaV0udmFsaWQpIHsNCi0JCQkvKg0K
LQkJCSAqIE5vdGUgdGhlIGZpcnN0IGZyZWUgZW50cnkgd2UgY29tZSBhY3Jv
c3MsIHdoaWNoDQotCQkJICogd2UnbGwgY2xhaW0gaW4gdGhlIGVuZCBpZiBu
b3RoaW5nIGVsc2UgbWF0Y2hlcy4NCi0JCQkgKi8NCi0JCQlpZiAoZnJlZV9p
ZHggPCAwKQ0KLQkJCQlmcmVlX2lkeCA9IGk7DQorCQkvKiAuLi5leGNlcHQg
b24gc3RyZWFtIGluZGV4aW5nIGhhcmR3YXJlLCBvZiBjb3Vyc2UgKi8NCisJ
CWlmICghc21ycykgew0KKwkJCWNmZy0+c21lbmR4W2ldID0gY2ZnLT5zdHJl
YW1pZHNbaV07DQogCQkJY29udGludWU7DQogCQl9DQotCQkvKg0KLQkJICog
SWYgdGhlIG5ldyBlbnRyeSBpcyBfZW50aXJlbHlfIG1hdGNoZWQgYnkgYW4g
ZXhpc3RpbmcgZW50cnksDQotCQkgKiB0aGVuIHJldXNlIHRoYXQsIHdpdGgg
dGhlIGd1YXJhbnRlZSB0aGF0IHRoZXJlIGFsc28gY2Fubm90DQotCQkgKiBi
ZSBhbnkgc3Vic2VxdWVudCBjb25mbGljdGluZyBlbnRyaWVzLiBJbiBub3Jt
YWwgdXNlIHdlJ2QNCi0JCSAqIGV4cGVjdCBzaW1wbHkgaWRlbnRpY2FsIGVu
dHJpZXMgZm9yIHRoaXMgY2FzZSwgYnV0IHRoZXJlJ3MNCi0JCSAqIG5vIGhh
cm0gaW4gYWNjb21tb2RhdGluZyB0aGUgZ2VuZXJhbGlzYXRpb24uDQotCQkg
Ki8NCi0JCWlmICgobWFzayAmIHNtcnNbaV0ubWFzaykgPT0gbWFzayAmJg0K
LQkJICAgICEoKGlkIF4gc21yc1tpXS5pZCkgJiB+c21yc1tpXS5tYXNrKSkN
Ci0JCQlyZXR1cm4gaTsNCi0JCS8qDQotCQkgKiBJZiB0aGUgbmV3IGVudHJ5
IGhhcyBhbnkgb3RoZXIgb3ZlcmxhcCB3aXRoIGFuIGV4aXN0aW5nIG9uZSwN
Ci0JCSAqIHRob3VnaCwgdGhlbiB0aGVyZSBhbHdheXMgZXhpc3RzIGF0IGxl
YXN0IG9uZSBzdHJlYW0gSUQNCi0JCSAqIHdoaWNoIHdvdWxkIGNhdXNlIGEg
Y29uZmxpY3QsIGFuZCB3ZSBjYW4ndCBhbGxvdyB0aGF0IHJpc2suDQotCQkg
Ki8NCi0JCWlmICghKChpZCBeIHNtcnNbaV0uaWQpICYgfihzbXJzW2ldLm1h
c2sgfCBtYXNrKSkpDQotCQkJcmV0dXJuIC1FSU5WQUw7DQotCX0NCiANCi0J
cmV0dXJuIGZyZWVfaWR4Ow0KLX0NCi0NCi1zdGF0aWMgYm9vbCBhcm1fc21t
dV9mcmVlX3NtZShzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11LCBpbnQg
aWR4KQ0KLXsNCi0JaWYgKC0tc21tdS0+czJjcnNbaWR4XS5jb3VudCkNCi0J
CXJldHVybiBmYWxzZTsNCi0NCi0Jc21tdS0+czJjcnNbaWR4XSA9IHMyY3Jf
aW5pdF92YWw7DQotCWlmIChzbW11LT5zbXJzKQ0KLQkJc21tdS0+c21yc1tp
ZHhdLnZhbGlkID0gZmFsc2U7DQotDQotCXJldHVybiB0cnVlOw0KLX0NCi0N
Ci1zdGF0aWMgaW50IGFybV9zbW11X21hc3Rlcl9hbGxvY19zbWVzKHN0cnVj
dCBkZXZpY2UgKmRldikNCi17DQotCXN0cnVjdCBhcm1fc21tdV9tYXN0ZXJf
Y2ZnICpjZmcgPSBmaW5kX3NtbXVfbWFzdGVyX2NmZyhkZXYpOw0KLQlzdHJ1
Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gY2ZnLT5zbW11Ow0KLQlzdHJ1
Y3QgYXJtX3NtbXVfc21yICpzbXJzID0gc21tdS0+c21yczsNCi0JaW50IGks
IGlkeCwgcmV0Ow0KLQ0KLQlzcGluX2xvY2soJnNtbXUtPnN0cmVhbV9tYXBf
bG9jayk7DQotCS8qIEZpZ3VyZSBvdXQgYSB2aWFibGUgc3RyZWFtIG1hcCBl
bnRyeSBhbGxvY2F0aW9uICovDQotCWZvcl9lYWNoX2NmZ19zbWUoY2ZnLCBp
LCBpZHgpIHsNCi0JCWlmIChpZHggIT0gSU5WQUxJRF9TTUVORFgpIHsNCi0J
CQlyZXQgPSAtRUVYSVNUOw0KLQkJCWdvdG8gb3V0X2VycjsNCisJCWlkeCA9
IGFybV9zbW11X2FsbG9jX3NtcihzbW11KTsNCisJCWlmIChJU19FUlJfVkFM
VUUoaWR4KSkgew0KKwkJCWRldl9lcnIoc21tdS0+ZGV2LCAiZmFpbGVkIHRv
IGFsbG9jYXRlIGZyZWUgU01SXG4iKTsNCisJCQlnb3RvIGVycl9mcmVlX3Nt
cnM7DQogCQl9DQorCQljZmctPnNtZW5keFtpXSA9IGlkeDsNCiANCi0JCXJl
dCA9IGFybV9zbW11X2ZpbmRfc21lKHNtbXUsIGNmZy0+c3RyZWFtaWRzW2ld
LCAwKTsNCi0JCWlmIChyZXQgPCAwKQ0KLQkJCWdvdG8gb3V0X2VycjsNCi0N
Ci0JCWlkeCA9IHJldDsNCi0JCWlmIChzbXJzICYmIHNtbXUtPnMyY3JzW2lk
eF0uY291bnQgPT0gMCkgew0KLQkJCXNtcnNbaWR4XS5pZCA9IGNmZy0+c3Ry
ZWFtaWRzW2ldOw0KLQkJCXNtcnNbaWR4XS5tYXNrID0gMDsgLyogV2UgZG9u
J3QgY3VycmVudGx5IHNoYXJlIFNNUnMgKi8NCi0JCQlzbXJzW2lkeF0udmFs
aWQgPSB0cnVlOw0KLQkJfQ0KLQkJc21tdS0+czJjcnNbaWR4XS5jb3VudCsr
Ow0KLQkJY2ZnLT5zbWVuZHhbaV0gPSAoczE2KWlkeDsNCisJCXNtcnNbaWR4
XS5pZCA9IGNmZy0+c3RyZWFtaWRzW2ldOw0KKwkJc21yc1tpZHhdLm1hc2sg
PSAwOyAvKiBXZSBkb24ndCBjdXJyZW50bHkgc2hhcmUgU01ScyAqLw0KIAl9
DQogDQorCWlmICghc21ycykNCisJCXJldHVybiAwOw0KKw0KIAkvKiBJdCB3
b3JrZWQhIE5vdywgcG9rZSB0aGUgYWN0dWFsIGhhcmR3YXJlICovDQotCWZv
cl9lYWNoX2NmZ19zbWUoY2ZnLCBpLCBpZHgpIHsNCi0JCWFybV9zbW11X3dy
aXRlX3NtZShzbW11LCBpZHgpOw0KLQl9DQorCWZvcl9lYWNoX2NmZ19zbWUo
Y2ZnLCBpLCBpZHgpDQorCQlhcm1fc21tdV93cml0ZV9zbXIoc21tdSwgaWR4
KTsNCiANCi0Jc3Bpbl91bmxvY2soJnNtbXUtPnN0cmVhbV9tYXBfbG9jayk7
DQogCXJldHVybiAwOw0KIA0KLW91dF9lcnI6DQorZXJyX2ZyZWVfc21yczoN
CiAJd2hpbGUgKGktLSkgew0KLQkJYXJtX3NtbXVfZnJlZV9zbWUoc21tdSwg
Y2ZnLT5zbWVuZHhbaV0pOw0KKwkJYXJtX3NtbXVfZnJlZV9zbXIoc21tdSwg
Y2ZnLT5zbWVuZHhbaV0pOw0KIAkJY2ZnLT5zbWVuZHhbaV0gPSBJTlZBTElE
X1NNRU5EWDsNCiAJfQ0KLQlzcGluX3VubG9jaygmc21tdS0+c3RyZWFtX21h
cF9sb2NrKTsNCi0JcmV0dXJuIHJldDsNCisJcmV0dXJuIC1FTk9TUEM7DQog
fQ0KIA0KLXN0YXRpYyB2b2lkIGFybV9zbW11X21hc3Rlcl9mcmVlX3NtZXMo
c3RydWN0IGFybV9zbW11X21hc3Rlcl9jZmcgKmNmZykNCitzdGF0aWMgdm9p
ZCBhcm1fc21tdV9tYXN0ZXJfZnJlZV9zbWVzKHN0cnVjdCBhcm1fc21tdV9k
ZXZpY2UgKnNtbXUsDQorCQkJCSAgICAgIHN0cnVjdCBhcm1fc21tdV9tYXN0
ZXJfY2ZnICpjZmcpDQogew0KLSAgICBzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNl
ICpzbW11ID0gY2ZnLT5zbW11Ow0KIAlpbnQgaSwgaWR4Ow0KIA0KLQlzcGlu
X2xvY2soJnNtbXUtPnN0cmVhbV9tYXBfbG9jayk7DQorCS8qDQorCSAqIFdl
ICptdXN0KiBjbGVhciB0aGUgUzJDUiBmaXJzdCwgYmVjYXVzZSBmcmVlaW5n
IHRoZSBTTVIgbWVhbnMNCisJICogdGhhdCBpdCBjYW4gYmUgcmUtYWxsb2Nh
dGVkIGltbWVkaWF0ZWx5Lg0KKwkgKi8NCiAJZm9yX2VhY2hfY2ZnX3NtZShj
ZmcsIGksIGlkeCkgew0KLQkJaWYgKGFybV9zbW11X2ZyZWVfc21lKHNtbXUs
IGlkeCkpDQotCQkJYXJtX3NtbXVfd3JpdGVfc21lKHNtbXUsIGlkeCk7DQor
CQkvKiBBbiBJT01NVSBncm91cCBpcyB0b3JuIGRvd24gYnkgdGhlIGZpcnN0
IGRldmljZSB0byBiZSByZW1vdmVkICovDQorCQlpZiAoaWR4ID09IElOVkFM
SURfU01FTkRYKQ0KKwkJCXJldHVybjsNCisNCisJCXNtbXUtPnMyY3JzW2lk
eF0gPSBzMmNyX2luaXRfdmFsOw0KKwkJYXJtX3NtbXVfd3JpdGVfczJjcihz
bW11LCBpZHgpOw0KKwl9DQorCS8qIFN5bmMgUzJDUiB1cGRhdGVzIGJlZm9y
ZSB0b3VjaGluZyBhbnl0aGluZyBlbHNlICovDQorCV9faW93bWIoKTsNCisN
CisJLyogSW52YWxpZGF0ZSB0aGUgU01ScyBiZWZvcmUgZnJlZWluZyBiYWNr
IHRvIHRoZSBhbGxvY2F0b3IgKi8NCisJZm9yX2VhY2hfY2ZnX3NtZShjZmcs
IGksIGlkeCkgew0KKwkJaWYgKHNtbXUtPnNtcnMpDQorCQkJYXJtX3NtbXVf
ZnJlZV9zbXIoc21tdSwgaWR4KTsNCisNCiAJCWNmZy0+c21lbmR4W2ldID0g
SU5WQUxJRF9TTUVORFg7DQogCX0NCi0Jc3Bpbl91bmxvY2soJnNtbXUtPnN0
cmVhbV9tYXBfbG9jayk7DQogfQ0KIA0KIHN0YXRpYyBpbnQgYXJtX3NtbXVf
ZG9tYWluX2FkZF9tYXN0ZXIoc3RydWN0IGFybV9zbW11X2RvbWFpbiAqc21t
dV9kb21haW4sDQogCQkJCSAgICAgIHN0cnVjdCBhcm1fc21tdV9tYXN0ZXJf
Y2ZnICpjZmcpDQogew0KKwlpbnQgaSwgaWR4LCByZXQgPSAwOw0KIAlzdHJ1
Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gc21tdV9kb21haW4tPnNtbXU7
DQogCXN0cnVjdCBhcm1fc21tdV9zMmNyICpzMmNyID0gc21tdS0+czJjcnM7
DQogCWVudW0gYXJtX3NtbXVfczJjcl90eXBlIHR5cGUgPSBTMkNSX1RZUEVf
VFJBTlM7DQogCXU4IGNibmR4ID0gc21tdV9kb21haW4tPmNmZy5jYm5keDsN
Ci0JaW50IGksIGlkeDsNCisNCisJaWYgKGNmZy0+c21lbmR4WzBdID09IElO
VkFMSURfU01FTkRYKQ0KKwkJcmV0ID0gYXJtX3NtbXVfbWFzdGVyX2FsbG9j
X3NtZXMoc21tdSwgY2ZnKTsNCisJaWYgKHJldCkNCisJCXJldHVybiByZXQ7
DQogDQogCWZvcl9lYWNoX2NmZ19zbWUoY2ZnLCBpLCBpZHgpIHsNCisJCS8q
IERldmljZXMgaW4gYW4gSU9NTVUgZ3JvdXAgbWF5IGFscmVhZHkgYmUgY29u
ZmlndXJlZCAqLw0KIAkJaWYgKHR5cGUgPT0gczJjcltpZHhdLnR5cGUgJiYg
Y2JuZHggPT0gczJjcltpZHhdLmNibmR4KQ0KLQkJCWNvbnRpbnVlOw0KKwkJ
CWJyZWFrOw0KIA0KIAkJczJjcltpZHhdLnR5cGUgPSB0eXBlIDsNCiAJCXMy
Y3JbaWR4XS5wcml2Y2ZnID0gUzJDUl9QUklWQ0ZHX1VOUFJJVjsNCkBAIC0x
NjIyLDEwICsxNjAyLDExIEBAIHN0YXRpYyBpbnQgYXJtX3NtbXVfYXR0YWNo
X2RldihzdHJ1Y3QgaW9tbXVfZG9tYWluICpkb21haW4sIHN0cnVjdCBkZXZp
Y2UgKmRldikNCiANCiBzdGF0aWMgdm9pZCBhcm1fc21tdV9kZXRhY2hfZGV2
KHN0cnVjdCBpb21tdV9kb21haW4gKmRvbWFpbiwgc3RydWN0IGRldmljZSAq
ZGV2KQ0KIHsNCisJc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSA9IGZp
bmRfc21tdV9mb3JfZGV2aWNlKGRldik7DQogCXN0cnVjdCBhcm1fc21tdV9t
YXN0ZXJfY2ZnICpjZmcgPSBmaW5kX3NtbXVfbWFzdGVyX2NmZyhkZXYpOw0K
IA0KLQlpZiAoY2ZnKQ0KLQkJYXJtX3NtbXVfbWFzdGVyX2ZyZWVfc21lcyhj
ZmcpOw0KKwlpZiAoc21tdSAmJiBjZmcpDQorCQlhcm1fc21tdV9tYXN0ZXJf
ZnJlZV9zbWVzKHNtbXUsIGNmZyk7DQogDQogfQ0KIA0KQEAgLTE5NjAsMTcg
KzE5NDEsMjUgQEAgc3RhdGljIGludCBhcm1fc21tdV9hZGRfZGV2aWNlKHN0
cnVjdCBkZXZpY2UgKmRldikNCiAJc3RydWN0IGFybV9zbW11X21hc3Rlcl9j
ZmcgKmNmZzsNCiAJc3RydWN0IGlvbW11X2dyb3VwICpncm91cDsNCiAJdm9p
ZCAoKnJlbGVhc2Vmbikodm9pZCAqKSA9IE5VTEw7DQorCWludCByZXQ7DQog
DQogCXNtbXUgPSBmaW5kX3NtbXVfZm9yX2RldmljZShkZXYpOw0KIAlpZiAo
IXNtbXUpDQogCQlyZXR1cm4gLUVOT0RFVjsNCiANCisJZ3JvdXAgPSBpb21t
dV9ncm91cF9hbGxvYygpOw0KKwlpZiAoSVNfRVJSKGdyb3VwKSkgew0KKwkJ
ZGV2X2VycihkZXYsICJGYWlsZWQgdG8gYWxsb2NhdGUgSU9NTVUgZ3JvdXBc
biIpOw0KKwkJcmV0dXJuIFBUUl9FUlIoZ3JvdXApOw0KKwl9DQorDQogCWlm
IChkZXZfaXNfcGNpKGRldikpIHsNCiAJCXN0cnVjdCBwY2lfZGV2ICpwZGV2
ID0gdG9fcGNpX2RldihkZXYpOw0KIA0KIAkJY2ZnID0ga3phbGxvYyhzaXpl
b2YoKmNmZyksIEdGUF9LRVJORUwpOw0KIAkJaWYgKCFjZmcpIHsNCi0JCQly
ZXR1cm4gLUVOT01FTTsNCisJCQlyZXQgPSAtRU5PTUVNOw0KKwkJCWdvdG8g
b3V0X3B1dF9ncm91cDsNCiAJCX0NCiANCiAJCWNmZy0+bnVtX3N0cmVhbWlk
cyA9IDE7DQpAQCAtMTk4MSwzMCArMTk3MCwyNCBAQCBzdGF0aWMgaW50IGFy
bV9zbW11X2FkZF9kZXZpY2Uoc3RydWN0IGRldmljZSAqZGV2KQ0KIAkJcGNp
X2Zvcl9lYWNoX2RtYV9hbGlhcyhwZGV2LCBfX2FybV9zbW11X2dldF9wY2lf
c2lkLA0KIAkJCQkgICAgICAgJmNmZy0+c3RyZWFtaWRzWzBdKTsNCiAJCXJl
bGVhc2VmbiA9IF9fYXJtX3NtbXVfcmVsZWFzZV9wY2lfaW9tbXVkYXRhOw0K
LQkJY2ZnLT5zbW11ID0gc21tdTsNCiAJfSBlbHNlIHsNCiAJCXN0cnVjdCBh
cm1fc21tdV9tYXN0ZXIgKm1hc3RlcjsNCiANCiAJCW1hc3RlciA9IGZpbmRf
c21tdV9tYXN0ZXIoc21tdSwgZGV2LT5vZl9ub2RlKTsNCiAJCWlmICghbWFz
dGVyKSB7DQotCQkJcmV0dXJuIC1FTk9ERVY7DQorCQkJcmV0ID0gLUVOT0RF
VjsNCisJCQlnb3RvIG91dF9wdXRfZ3JvdXA7DQogCQl9DQogDQogCQljZmcg
PSAmbWFzdGVyLT5jZmc7DQotCQljZmctPnNtbXUgPSBzbW11Ow0KLQl9DQot
DQotCWdyb3VwID0gaW9tbXVfZ3JvdXBfYWxsb2MoKTsNCi0JaWYgKElTX0VS
Uihncm91cCkpIHsNCi0JCWRldl9lcnIoZGV2LCAiRmFpbGVkIHRvIGFsbG9j
YXRlIElPTU1VIGdyb3VwXG4iKTsNCi0JCXJldHVybiBQVFJfRVJSKGdyb3Vw
KTsNCiAJfQ0KIA0KIAlpb21tdV9ncm91cF9zZXRfaW9tbXVkYXRhKGdyb3Vw
LCBjZmcsIHJlbGVhc2Vmbik7DQotCWlvbW11X2dyb3VwX2FkZF9kZXZpY2Uo
Z3JvdXAsIGRldik7DQotCWlvbW11X2dyb3VwX3B1dChncm91cCk7DQorCXJl
dCA9IGlvbW11X2dyb3VwX2FkZF9kZXZpY2UoZ3JvdXAsIGRldik7DQogDQot
CXJldHVybiBhcm1fc21tdV9tYXN0ZXJfYWxsb2Nfc21lcyhkZXYpOw0KK291
dF9wdXRfZ3JvdXA6DQorCWlvbW11X2dyb3VwX3B1dChncm91cCk7DQorCXJl
dHVybiByZXQ7DQogfQ0KIA0KICNpZiAwIC8qIFhlbjogV2UgZG9uJ3Qgc3Vw
cG9ydCByZW1vdmUgZGV2aWNlIGZvciBub3cuIFdpbGwgYmUgdXNlZnVsIGZv
ciBQQ0kgKi8NCkBAIC0yMjM3LDcgKzIyMjAsNiBAQCBzdGF0aWMgaW50IGFy
bV9zbW11X2RldmljZV9jZmdfcHJvYmUoc3RydWN0IGFybV9zbW11X2Rldmlj
ZSAqc21tdSkNCiAJCXNtbXUtPnMyY3JzW2ldID0gczJjcl9pbml0X3ZhbDsN
CiANCiAJc21tdS0+bnVtX21hcHBpbmdfZ3JvdXBzID0gc2l6ZTsNCi0Jc3Bp
bl9sb2NrX2luaXQoJnNtbXUtPnN0cmVhbV9tYXBfbG9jayk7DQogDQogCS8q
IElEMSAqLw0KIAlpZCA9IHJlYWRsX3JlbGF4ZWQoZ3IwX2Jhc2UgKyBBUk1f
U01NVV9HUjBfSUQxKTsNCg==

--8323329-468115872-1623805343=:24906--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 02:46:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 02:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142470.262861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltLZN-00084a-1h; Wed, 16 Jun 2021 02:46:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142470.262861; Wed, 16 Jun 2021 02:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltLZM-00084R-S9; Wed, 16 Jun 2021 02:46:20 +0000
Received: by outflank-mailman (input) for mailman id 142470;
 Wed, 16 Jun 2021 02:46:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltLZL-00084H-Bk; Wed, 16 Jun 2021 02:46:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltLZL-0007NN-32; Wed, 16 Jun 2021 02:46:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltLZK-0000F0-QQ; Wed, 16 Jun 2021 02:46:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltLZK-0005me-Pi; Wed, 16 Jun 2021 02:46:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8F4sLqzI6z7jInmD4dwMOuUlO+sLBGrCOwawDLbYGac=; b=Sov+Rvi/QQleAH6yE+xK0kveW/
	oK2Q4nI/MNx88rXsRGQqLj2BGkYvoa0PQ9Tp5jbzYnteCoeblxd0OA8rct2vAigrtpATOgVo+IRCy
	mX5w/vXJBHRNzXpqfuCIVZkP6wi+iVl8HgKKJuEWASZKt+H+cb9V/tMsrRTmcLlT2WQI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162846-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162846: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 02:46:18 +0000

flight 162846 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162846/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   11 days
Failing since        162368  2021-06-04 15:42:59 Z   11 days   24 attempts
Testing same since   162841  2021-06-15 13:41:16 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1779 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142480.262886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMbw-0006QA-OY; Wed, 16 Jun 2021 03:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142480.262886; Wed, 16 Jun 2021 03:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMbw-0006Q1-LR; Wed, 16 Jun 2021 03:53:04 +0000
Received: by outflank-mailman (input) for mailman id 142480;
 Wed, 16 Jun 2021 03:53:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMbv-0006PR-6p
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:03 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fddd4143-0f99-4c56-902e-82a37ea01a04;
 Wed, 16 Jun 2021 03:53:02 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id l184so848753pgd.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:02 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id j7sm5188658pjf.0.2021.06.15.20.52.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fddd4143-0f99-4c56-902e-82a37ea01a04
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=w8jpaGfzoLiwzikPo4cNB/z3ajmfRt4xSbdunsHd/ZI=;
        b=e09qxnrBWof91JKHEkUW/giVJtTvZmH8jr9Dd4Su1RWDV3XwXARSSQRgkr9HNYU7+F
         xfWT4ubcXBP6WIhPaOX3vFLQCLFhqmxhHXvBHccvrEGKsvHnieOM35s2iYF6pLJaXyJ/
         2e11UL+so6Eg69fq0bTkeYvB01cyhy65RZc2k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=w8jpaGfzoLiwzikPo4cNB/z3ajmfRt4xSbdunsHd/ZI=;
        b=IzOmDhNWV0FwEN1qBt2n5+uKst1KzEslSVVIg7fn/WPbEPfyx2XmUTbV7vCHcMCZqJ
         he2e41dDFSGQSwmITPF+lwtJ+y5WntNtosZYqrgpXZf+Wott8sZ7Mh8h+YkkGSXpqMVK
         fc1BE8cvYqmiX0iPLTu2yGGJX38ZxNbb9ub/qgSGQaSSIsbNUoeeoE79T6bF0jnHlFQv
         DoWqqynGynsrNQw3JtlzyxvAQJJnrClVotfHco+Dr5BZ/b2rGyrPv1SXEdkLpeL52eqX
         V3sXzUK9rev6kh2zdSgg95MvKoZa5KapyKm7k0B90kZlyHMpasTskJyXTr6fttfGyiLB
         FOYw==
X-Gm-Message-State: AOAM530G8MFslnyVKL37duF94GU4gW/LROF3Eb/cZiKPanUL3JqxZLjT
	CAwNhaqHAoNkt+sOGAyekgfx0A==
X-Google-Smtp-Source: ABdhPJwa+ZmgeINp/qzrrMwMjmV1PlQ+238tMPYnN9cw69LWPLnWlkpTpgH9AKKMvgYOHqDOuIhj8A==
X-Received: by 2002:a63:5c4e:: with SMTP id n14mr2900002pgm.192.1623815581647;
        Tue, 15 Jun 2021 20:53:01 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 01/12] swiotlb: Refactor swiotlb init functions
Date: Wed, 16 Jun 2021 11:52:29 +0800
Message-Id: <20210616035240.840463-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, to initialize the io_tlb_mem
struct, so that the initialization code can be reused.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 49 ++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 52e2ac526757..3ba0f08a39a1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,8 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142479.262875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMbo-00068T-GI; Wed, 16 Jun 2021 03:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142479.262875; Wed, 16 Jun 2021 03:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMbo-00068M-D7; Wed, 16 Jun 2021 03:52:56 +0000
Received: by outflank-mailman (input) for mailman id 142479;
 Wed, 16 Jun 2021 03:52:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMbm-00068G-TJ
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:52:55 +0000
Received: from mail-pl1-x62c.google.com (unknown [2607:f8b0:4864:20::62c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b7d5508-4884-46b2-a779-5e04c8da0f89;
 Wed, 16 Jun 2021 03:52:53 +0000 (UTC)
Received: by mail-pl1-x62c.google.com with SMTP id e1so422165plh.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:52:53 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id w79sm544897pff.21.2021.06.15.20.52.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:52:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b7d5508-4884-46b2-a779-5e04c8da0f89
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=BK1Fjj4uNMqRvyKyQHoMxYZCY4M7rkwqjSwMu/mf3rE=;
        b=YJrMpyhc8a7v7yFw01D1w/oyJp/uNq9Mu0PDNl/Rpc9G7L3NOGvaiU4YGtOGkCU491
         FzwVx5N0BLbo7kNZwYVvms/H2anCXFDdF95mizDApuQd4dOurVy6qFg+tHdYqCbUAKCm
         LUBBGGajEOKceB4fblNnofH28kIcKzjJ6EeYY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=BK1Fjj4uNMqRvyKyQHoMxYZCY4M7rkwqjSwMu/mf3rE=;
        b=Z2AvpOJ8qIQGp4iGlHbEUBAx+dzg+/dF6EwhbbTUVKPdGF0XKM/LScf9MZe9eaQqvn
         yTsG2wKHRr27T42C778lRW4sqedQhMasR5kpDmr9459JUliFpVmRqVqPLVxIKcq3HL15
         wpCLZ9hl4BBp/IEzAP8LZNnGsdzpS78VsreS77ize7BjkQhDU6ZRSPiTmpEiUHK9TB1D
         zLELxiO9fseiObbCBqxUW8DdLtO1oSvdmUHNwngqWRe93Avwqx2UCggf7ZBU3OlHEMdP
         Skm3XAz31QPJq+cwTVn3J5JM1EG6M9OP61ZA0kfGB0dEJgZOhY+L5VNEeCI8cwr8OIBk
         flkA==
X-Gm-Message-State: AOAM530puLvuopfDIpG3Srq6CFkcihudkqfxM06ZRKpYONbSJH5mqfW8
	3oUcGgzlLWPRdXbGJBRLj6xOig==
X-Google-Smtp-Source: ABdhPJyuUgdK//ECEfuYPbG7vWbWXerg9Upjmj0SmYUObaHKldxIhs0ZAtoDQcFdKTUdWv5k2A0xNg==
X-Received: by 2002:a17:902:d915:b029:119:a1ab:d2c4 with SMTP id c21-20020a170902d915b0290119a1abd2c4mr7084084plz.63.1623815572593;
        Tue, 15 Jun 2021 20:52:52 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 00/12] Restricted DMA
Date: Wed, 16 Jun 2021 11:52:28 +0800
Message-Id: <20210616035240.840463-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a
full chain of exploits; see also [2], [3]).

To mitigate these security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
On its own, the feature provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect against
general data leakage and system memory corruption, the system needs to
provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. the MPU in ATF on some ARM
platforms [4]).
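For reference, patches 11-12 expose the restricted pool through the device
tree as a reserved-memory node that a device references via memory-region.
A minimal sketch of such a binding follows; the node names, addresses and
sizes are illustrative only, not taken from this series:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Region the Wi-Fi device's DMA is restricted to
		 * (base address and size are placeholders) */
		wifi_pool: restricted-dma-pool@c0000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0xc0000000 0x0 0x4000000>;
		};
	};

	pcie_wifi {
		/* Streaming DMA bounces through, and coherent
		 * allocations come from, wifi_pool */
		memory-region = <&wifi_pool>;
	};
};
```

The firmware-level restriction described above would then be configured to
allow the device access only to this region.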

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132

v11:
- Rebase against swiotlb devel/for-linus-5.14
- s/mempry/memory/g
- exchange the order of patch 09/12 and 10/12
https://lore.kernel.org/patchwork/cover/1446882/

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Add restricted DMA pool initialization
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  40 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  60 +++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 255 +++++++++++++-----
 16 files changed, 380 insertions(+), 103 deletions(-)

-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142481.262897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMc5-0006ms-2s; Wed, 16 Jun 2021 03:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142481.262897; Wed, 16 Jun 2021 03:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMc4-0006mh-W4; Wed, 16 Jun 2021 03:53:12 +0000
Received: by outflank-mailman (input) for mailman id 142481;
 Wed, 16 Jun 2021 03:53:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMc3-0006l9-LN
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:11 +0000
Received: from mail-pf1-x42d.google.com (unknown [2607:f8b0:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5c076c7-9fe0-4354-9d61-0b06ba03f23e;
 Wed, 16 Jun 2021 03:53:10 +0000 (UTC)
Received: by mail-pf1-x42d.google.com with SMTP id k6so1069150pfk.12
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:10 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id k19sm557774pji.32.2021.06.15.20.53.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5c076c7-9fe0-4354-9d61-0b06ba03f23e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=U6ZYMbmpZtiKwFDcQ7JFrtewUpbfgXD5Ls9ZoD2Zrzg=;
        b=RZd9rxgKkVABYLQTSq6F5uS2C29CsSdg36ip3FvrlN8FR4TLMNlm3ETn/gcglxg0VH
         +1WxkUy7ZqMrM1JYwzaBTnnUjGVHlY6H2dMNBKQlvyd/XPL2EbpKZBPFjCBcKqXScjgL
         sO86XIdkFZpgufpRs2QMpcNH/rozerebhjh1Y=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=U6ZYMbmpZtiKwFDcQ7JFrtewUpbfgXD5Ls9ZoD2Zrzg=;
        b=Px/H6RUKo+T2gEADW/PuevQFh/Xlpm75HH4dEwvEV7egQzKh+MguGVayMbUQd7jliZ
         6Q1FD1na1Id3pZHii5KDN+oz+Lw+QJm+VmQu9wfka4NEnybsphNk1M872y49t5utWojl
         J17GHf4UcKTUPJkmigiH7b2vQmH0RY5lWf2hGJEbpE8/dFO++8d7WI2T/zzhongGb10/
         sbHdgtguSRsKZgEvSj2wy2H1bn5GqLCBSY7q6D863AHfdgzdOBi3fBVNB8HWD5lqxIxG
         3wmWwYyt/NA/PiAk3EfKexoKUIz5mv+hsmqgC6td5mqqpcNETENun1xKY2ax/khguyJl
         1B5w==
X-Gm-Message-State: AOAM531kgSKpc3YzrhgXXLmFc3YEt+z4q2JM4/mq6lHv64ipmb+tKICh
	m3jvj6AYHgOwx9oJ4WCuYUKOYw==
X-Google-Smtp-Source: ABdhPJwmduDhMHaz4gp1KWw9pFkGmBZM/DTXojEyWC2D7eYdCuhZ3NXD60A0zg3fj181Q33R/6IAmA==
X-Received: by 2002:a63:a805:: with SMTP id o5mr2939730pgf.328.1623815590191;
        Tue, 15 Jun 2021 20:53:10 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Wed, 16 Jun 2021 11:52:30 +0800
Message-Id: <20210616035240.840463-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3ba0f08a39a1..af416bcd1914 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -670,19 +670,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142483.262908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcD-0007Fm-Hx; Wed, 16 Jun 2021 03:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142483.262908; Wed, 16 Jun 2021 03:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcD-0007FU-Do; Wed, 16 Jun 2021 03:53:21 +0000
Received: by outflank-mailman (input) for mailman id 142483;
 Wed, 16 Jun 2021 03:53:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMcC-0007ES-Ec
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:20 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65597984-4682-415c-bd02-e76178fd9897;
 Wed, 16 Jun 2021 03:53:19 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id a127so1073580pfa.10
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:19 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id i22sm535607pfq.6.2021.06.15.20.53.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65597984-4682-415c-bd02-e76178fd9897
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=pEdYN/CfqowQHo6LUUVSljtyJhX5z4kEbgyPSIqEKts=;
        b=j4fyQ5dBTYRVJhHpA0F9OiYHPXJFX5kS91Ph2He/dfZaHGKkDCV9cL++RJDhWPSpip
         MsOxHMrpDgJX5sQnmg+q4a563ftMcIxzmA5UIc9dV7acTBoKEleCubikYuGRoJtFGjGg
         c8YfFkzWFnZZ6w46d3MTIrA13qj9Tfy7u2N5Y=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=pEdYN/CfqowQHo6LUUVSljtyJhX5z4kEbgyPSIqEKts=;
        b=o+PgEc3MuqZttgPplO+aEZe9WVi+YQdAu5gwhVqaUFoNNf6/XkkDPzyl/zGgJX5yDK
         40e6pGJdU+fkphsJSij5N0v1w0VoWYwQPkD3SBlXD3QZE1ZylfggHG5dwNOD2dF8dZe1
         LuD198X/EWdGHXHOae4HaL9C4tKmZRFhRlWez9xMLPlIbVu4gl9t4rXzNB3dx7XoY7NK
         MiW382Q04y4Cwj7o5uePje63kiJt54EPDEcn2bHOKXy8ZcFKvqa2V7Luyso3OnksrlB+
         FTPPai+zlFX42e0zDxqmsBIBQN75a6avlGCRJq6lXvhd2zOYhZchokV4B+AX6fqIVJND
         eEBg==
X-Gm-Message-State: AOAM530aF0Rq72MMqbu30xEXOjmhIzhd8LXJn9u2V114fr+I/7bjKj5L
	0JL+qAPUWV7K9LxNQ+BTRHbuFA==
X-Google-Smtp-Source: ABdhPJxNUP/3Z3HU7AuvIP0sGwF+mZ9TKA3lFyzscXS+2nZ/cVBIJ32UuUpbtOSr91cWCSPR70W+aQ==
X-Received: by 2002:a63:2dc5:: with SMTP id t188mr2886831pgt.406.1623815598800;
        Tue, 15 Jun 2021 20:53:18 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Wed, 16 Jun 2021 11:52:31 +0800
Message-Id: <20210616035240.840463-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always keep a pointer to the swiotlb pool in use in struct device. This
helps simplify the code for supporting other pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f29839382f81..cb3123e3954d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index ba660731bd25..240d652a0696 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -518,6 +519,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af416bcd1914..a9f5c08dd94a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -339,7 +339,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
@@ -430,7 +430,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -507,7 +507,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -558,7 +558,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142488.262919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcN-0007oI-Sm; Wed, 16 Jun 2021 03:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142488.262919; Wed, 16 Jun 2021 03:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcN-0007nP-Ph; Wed, 16 Jun 2021 03:53:31 +0000
Received: by outflank-mailman (input) for mailman id 142488;
 Wed, 16 Jun 2021 03:53:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMcM-0007ES-Dx
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:30 +0000
Received: from mail-pf1-x431.google.com (unknown [2607:f8b0:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a679df1-67bc-42c2-93d0-7259f0e4675b;
 Wed, 16 Jun 2021 03:53:28 +0000 (UTC)
Received: by mail-pf1-x431.google.com with SMTP id k15so1095719pfp.6
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:28 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id o139sm569527pfd.96.2021.06.15.20.53.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a679df1-67bc-42c2-93d0-7259f0e4675b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=qgVN2ZlJMrK0ZQ+KxIYsV5GmxXVfYXxXGP7J+tRHaos=;
        b=Bc5+LAaIgDCpWm2RgwGyAYizLnzUJxeVXdP8YQZc+b5vm5Y4cGdC75qJ7ZjBBOkgIP
         IbuablDIBYtKBMAKYiCxi72B2J79PlEusTGi1VLBr/KVWklPqIaYOv8NzdHPTxawpXsU
         M9agE+7NxReHtheDlbt0Wv1TlPEJQWSpngliE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=qgVN2ZlJMrK0ZQ+KxIYsV5GmxXVfYXxXGP7J+tRHaos=;
        b=jKYuW2BaFmtGYgNOHQZz+eT88RJWSyS8wIAyTS+z6tQQuN7t0yVbRicgRkAXplWnDk
         0uXLWb9wKtGHBGmiUZ6J6z0uaZomzy4DEojgzYXlyKCBJT9kqc7+jkGS+mOyrHprb7QS
         mQmx60P3MzkCJoFyrUVfWxH5NM6TVskg16y/JYq12OoJYG8zAUY+HaTR3Dedob5mFZVt
         27iGu9LQ4mnqeg9cr0ywx3rmE6BjKEAPYkLC4DNdQEzcZALzUbQ3KzP2T/3SU6yrgosI
         aBr4YzkIjK12X7nkVFRn+zvlZlmQVRGpPqk960w9NZLzukJEr/Jfj8TvPTjXVFQ7vYsF
         CU3w==
X-Gm-Message-State: AOAM530VRkhqYQpSOU29RHG9nf2ET1tIUS0cczjmuaVK5mr0kA7EM5lP
	NTKgblbzkivX9XusjyaU4tz6mg==
X-Google-Smtp-Source: ABdhPJxc07phkSxtKJ7m5Ir5GZbcb2WHx/fVCTd7f9ZMcX82YEzVuLY9OCNSvSYwPYTtNEI/suYjiw==
X-Received: by 2002:a63:3704:: with SMTP id e4mr2906426pga.310.1623815607521;
        Tue, 15 Jun 2021 20:53:27 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Wed, 16 Jun 2021 11:52:32 +0800
Message-Id: <20210616035240.840463-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3087d9fa6065..10997ef541f8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142491.262929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcY-0008Uy-59; Wed, 16 Jun 2021 03:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142491.262929; Wed, 16 Jun 2021 03:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcY-0008Up-29; Wed, 16 Jun 2021 03:53:42 +0000
Received: by outflank-mailman (input) for mailman id 142491;
 Wed, 16 Jun 2021 03:53:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMcW-0007ES-EM
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:40 +0000
Received: from mail-pj1-x1034.google.com (unknown [2607:f8b0:4864:20::1034])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3a10276-021d-4e71-be4d-b8e204760881;
 Wed, 16 Jun 2021 03:53:36 +0000 (UTC)
Received: by mail-pj1-x1034.google.com with SMTP id
 22-20020a17090a0c16b0290164a5354ad0so3149748pjs.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:36 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id o3sm527673pfd.41.2021.06.15.20.53.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3a10276-021d-4e71-be4d-b8e204760881
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fDjj9RwGiHMHYf08qnwDA+ssC0u7zUi8VvqNHSt/pLY=;
        b=FzCXiZfAhSJ3qX/rErpJapsVCsBpVDc9iuXYgyxno3488wRdEUKpp/qVOBiDDBan7c
         BEumfOfxg3H0YCtDSi2Iz6vbUBA5o+CSPzejm09KNEkizOKzAyOWBcwOyE/MOTN9UWpE
         2basEbSEUBTBzMfOfBu31zhNd3mdVFfiBZ3rg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fDjj9RwGiHMHYf08qnwDA+ssC0u7zUi8VvqNHSt/pLY=;
        b=Rpc1TTqTc8qhS+4sQx6MowNaDvldadSzDqOdz3Dr/EQEGUxzA+J6CYhnz5uVdj8ORt
         oxiMyRQYd4BxjWVuKcRLZt85TKT+l7I0W47WusaT2MvHrP1cOYApmslDpI1zf4jKXRfr
         LEgTte/s8gW/Bz4MU8mJiTtTLbcusO3MGxQx9Um0y0XcBkXT81dNg2wldKV13QpwPbb6
         N6Y2Y8pVP+bNGwWlkrOQEVP1ukYHs2C/z2KFmJSqkUSQdBZo8HcQSxGNubC56PqYfAdB
         QgzUcbcAmm871mSMezzhzHe0aTLAYT9urY3YIKlP8ae5WkJfbfugQlqg8+p4IJq6rd9q
         7MMQ==
X-Gm-Message-State: AOAM532WwEEMKy3qWyNKwmNpYBN8yV83llpIbxPeclEpgSJI9AloIhT1
	JdibwG4sJcAPQ27o0PfMJx+uNA==
X-Google-Smtp-Source: ABdhPJyO8H2c3Ch0V77jNbDhALTnA/67SzulpbGnNJkV6XZyUq79vak+n4u0ahyPR90TVphDjxcKRQ==
X-Received: by 2002:a17:90b:8c4:: with SMTP id ds4mr2775014pjb.65.1623815616105;
        Tue, 15 Jun 2021 20:53:36 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Wed, 16 Jun 2021 11:52:33 +0800
Message-Id: <20210616035240.840463-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index a9d65fc8aa0e..4b7afa0fc85d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 9662522aa066..be15bfd9e0ee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a9f5c08dd94a..101abeb0a57d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -663,9 +663,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142496.262941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMce-0000a3-F4; Wed, 16 Jun 2021 03:53:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142496.262941; Wed, 16 Jun 2021 03:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMce-0000Zs-BO; Wed, 16 Jun 2021 03:53:48 +0000
Received: by outflank-mailman (input) for mailman id 142496;
 Wed, 16 Jun 2021 03:53:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMcc-0000Vk-8K
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:46 +0000
Received: from mail-pl1-x62d.google.com (unknown [2607:f8b0:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4d7e68a-5d96-4673-8bdd-2bda6edd07a7;
 Wed, 16 Jun 2021 03:53:45 +0000 (UTC)
Received: by mail-pl1-x62d.google.com with SMTP id e7so427138plj.7
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:45 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id i21sm523349pfd.219.2021.06.15.20.53.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4d7e68a-5d96-4673-8bdd-2bda6edd07a7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=e9Vw6CwKftyWUkkMr8FpV3VZb4mbe4UuYsuH+xmCJ+M=;
        b=jc3T7c7VZDi587k2gT/eeg1/s25EMO4Lxy72AFhkTEa79olBru5uzce93B9VJJc7mb
         tv4xxblNNZSpK3BVn4J++3E61DthzfhVz93hD3eVM82+a9MHX0s1kWeLhR0ckmjZau/d
         yaE/e99lxczNFRLPelQucPShcgrTyXCG38jN8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=e9Vw6CwKftyWUkkMr8FpV3VZb4mbe4UuYsuH+xmCJ+M=;
        b=LT8QDZFvFSHsWMkwgGAJHGr1P/+hMawJmitWQ2+arjAGIa3tT7kGETWBUR+SEB/qCF
         EoSaGdfDuvMrwveIMgTxkG6I49FS9PEijhCzZ3uT9NXYNDKJfl/dUvHRlRevEoWF2/RD
         DceAvzSIBz/wN58XJ5UZLWTG2aB6eItIA0PFPWihYMgu8n76DExlOBvtt0T98jMHWcT+
         2dBLtSVwnaUJ0jpdf4fiGazZLBo3e5VWm9DR1i8kv0H0P2MpKc3w9rieWXyOBjx4Ax6g
         CdR4YdaNdqOC/QiVLHGDzprHlKRX4wFSw7CfMRxfS2JYnbDhnet/9yxMsiLYsn+G7Kap
         Ls6Q==
X-Gm-Message-State: AOAM530+c/6NGR0xq1kI/0VJEAen5Xatg1tfiTfIiVQaeUzvAWO5Qr5A
	ET1JqkvrfIXb5eiU2zq4pLM5HA==
X-Google-Smtp-Source: ABdhPJzC2GZu70JiqqjaeNpM5+wD1r3IbZYNkxwRBj90E6QvUGdzry0LzgTDbMYz2ZJVNaBcQldqkQ==
X-Received: by 2002:a17:90b:318:: with SMTP id ay24mr8150844pjb.150.1623815624668;
        Tue, 15 Jun 2021 20:53:44 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 06/12] swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
Date: Wed, 16 Jun 2021 11:52:34 +0800
Message-Id: <20210616035240.840463-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate the swiotlb_force setting into io_tlb_default_mem->force and
use it to determine whether to bounce the data or not. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h | 11 +++++++++++
 kernel/dma/direct.c     |  2 +-
 kernel/dma/direct.h     |  2 +-
 kernel/dma/swiotlb.c    |  4 ++++
 4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..efcd56e3a16c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force:      %true if swiotlb is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..3713461d6fe0 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..6c4d13caceb1 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 101abeb0a57d..a9907ac262fc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:53:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142500.262952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcm-00019v-U9; Wed, 16 Jun 2021 03:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142500.262952; Wed, 16 Jun 2021 03:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMcm-00019j-Ox; Wed, 16 Jun 2021 03:53:56 +0000
Received: by outflank-mailman (input) for mailman id 142500;
 Wed, 16 Jun 2021 03:53:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMcl-0000Vk-Em
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:53:55 +0000
Received: from mail-pj1-x1029.google.com (unknown [2607:f8b0:4864:20::1029])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e0dd94a-302e-402a-9669-981d55814215;
 Wed, 16 Jun 2021 03:53:53 +0000 (UTC)
Received: by mail-pj1-x1029.google.com with SMTP id
 22-20020a17090a0c16b0290164a5354ad0so3150078pjs.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:53:53 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id x22sm557553pjp.37.2021.06.15.20.53.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:53:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e0dd94a-302e-402a-9669-981d55814215
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=eU61psdxuacInDbxGN0V8O121x7LcVp3cBkwTdMQ+YI=;
        b=G+MHCjWefSWSsBn+zFsqavx79K0f1OgfsPHA3pIJ7ZUrc94j/9bKUmi3fYlnxX6sx7
         MpFnLUq23wOmkqNgWIGDoG5ljrkEFDbKSDT6O1jD22wjvWPKV2kuzmbOLBWoI6lHX43m
         0UxMGANvZTFecxtxMJ6BmCdD5qQlDyW8Evmz8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=eU61psdxuacInDbxGN0V8O121x7LcVp3cBkwTdMQ+YI=;
        b=Uj922dLZPlkXSDOosNGecM8q+ABjWkRSIHy1i+G+ocgkDmImnAIk3s9nUd4CPfyMD6
         NDzjRiAcB2jDqWB+i/G+nFX1dllGYz0TVnaMevAjTO9zrpMN1e9nTd3l+mFWxUkYNU07
         yr+u9ozjrNovVizLQbD7xeYfYdO9qwB1AKFYBUk1IXr35XFT5THKJmgpiuNiKqcWBTFd
         98XpSVeX0lJnpbYGch/8t7NE1x8mt4M38prEkhg0tOf8vklmnv/OtUPRrRsYJSTXAK2Z
         jq2tit7tE9E+zmwI9az6dBd3wvYL/YOZM6SVeGr2U7kDiXWOZo7CfzPjWB53iLYbIXzx
         bgfw==
X-Gm-Message-State: AOAM533UpAKGcoKj1hY7sUo3RilgGKLKxo6UPGQKqpv/yzT5WXkGSGkz
	SJprsYs/oHZxvvcfRN0yhG+TkQ==
X-Google-Smtp-Source: ABdhPJz+/n8SXXk477Y7KP8AZrN6dcjdprvaWXKVwnDO1qXcju51BlhAASYnAFCrwZ+85V6bo0r6Hw==
X-Received: by 2002:a17:902:b409:b029:114:afa6:7f4a with SMTP id x9-20020a170902b409b0290114afa67f4amr7247326plr.56.1623815633242;
        Tue, 15 Jun 2021 20:53:53 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Wed, 16 Jun 2021 11:52:35 +0800
Message-Id: <20210616035240.840463-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size to it for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a9907ac262fc..037772724b3c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -431,8 +431,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -487,8 +487,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -529,7 +532,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -543,11 +546,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:54:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:54:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142512.262964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMdC-0002H4-BN; Wed, 16 Jun 2021 03:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142512.262964; Wed, 16 Jun 2021 03:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMdC-0002Gv-4n; Wed, 16 Jun 2021 03:54:22 +0000
Received: by outflank-mailman (input) for mailman id 142512;
 Wed, 16 Jun 2021 03:54:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMdA-0000Vk-FW
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:54:20 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3393e6f-521a-497a-a9a3-f9af1af1deb1;
 Wed, 16 Jun 2021 03:54:02 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id
 s17-20020a17090a8811b029016e89654f93so3154090pjn.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:54:02 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id o186sm531682pfb.59.2021.06.15.20.53.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:54:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3393e6f-521a-497a-a9a3-f9af1af1deb1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=oRb0kCzQtLL1ODc5XeMJxnNfFZ/YZDLs649wILFum/w=;
        b=fDeLRHQuzvZl/O2NTctSqXJKMWXTxqlo16QC4RXzz6Fvxm73sl/3M8yfUKRM/Xfzv2
         sPwjA2S4EPwgw1F4xRrjz17pkm8IuR4J9KX6wisU/p3khyvTs1wWM44nrtHddPEv3UvM
         iBGwg92Cx7crarCTSI8jG8z24tV7ZZ8Qr4q3E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=oRb0kCzQtLL1ODc5XeMJxnNfFZ/YZDLs649wILFum/w=;
        b=p7BJmVOfWh89dunXPpUqduGTuPCzIcRY4cAECebJtFGhd1eHX7ox/EO/wiOb6IIRzQ
         BhliEdVo4OwBfJRZh9iQGtQkQDnYpTtXJxqmEcIFTV5tWXKjpxchGmPb/wXu/iXpL/Qk
         3lHjAn0ey0KdmOlxN4aiXfoTJDYnCFWC6YxbdNx5559GFKxtiP/0f+euyoZ82fu3t+Fz
         i254SCbsZCELnMVs7n+ZmfsL60CHRqvMHJqMcXDbB5YFnOch+rdKWHeiIrXvSUhVLayj
         y622+vRWflDnq7t0tw02d54mR2pdfsvAKoUp1R759f49gCkF1HcFA2uZR97uKZSzBn11
         bsfw==
X-Gm-Message-State: AOAM533kOWYfmVgbCh713F5TpWp2x22StcnSFqbrf9Tb2apYJqFSFtlN
	tmJ/sf+G/saM6AhY1MaA6f8/kg==
X-Google-Smtp-Source: ABdhPJz2SpFmFttTbKmua4yfCKw+rNX4RUtDeeiYBVc41CWQZ990iBLac4Laog8hhJ7q/QRKPHHVEg==
X-Received: by 2002:a17:902:bf02:b029:11e:89a0:8694 with SMTP id bi2-20020a170902bf02b029011e89a08694mr1859083plb.83.1623815641807;
        Tue, 15 Jun 2021 20:54:01 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Wed, 16 Jun 2021 11:52:36 +0800
Message-Id: <20210616035240.840463-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the slot-freeing logic out of swiotlb_tbl_unmap_single into a new
function, swiotlb_release_slots, so that the code can be reused by
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 037772724b3c..fec4934b9926 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -555,27 +555,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -610,6 +598,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 03:56:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 03:56:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142523.262973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMf1-0003Cy-LF; Wed, 16 Jun 2021 03:56:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142523.262973; Wed, 16 Jun 2021 03:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMf1-0003Cp-IF; Wed, 16 Jun 2021 03:56:15 +0000
Received: by outflank-mailman (input) for mailman id 142523;
 Wed, 16 Jun 2021 03:56:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMf0-0003CF-3z
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:56:14 +0000
Received: from mail-io1-xd2d.google.com (unknown [2607:f8b0:4864:20::d2d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ef1ff15-9336-411c-9784-9877bede5cae;
 Wed, 16 Jun 2021 03:56:11 +0000 (UTC)
Received: by mail-io1-xd2d.google.com with SMTP id h5so1558593iok.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:56:11 -0700 (PDT)
Received: from mail-io1-f49.google.com (mail-io1-f49.google.com.
 [209.85.166.49])
 by smtp.gmail.com with ESMTPSA id h7sm559671ilr.44.2021.06.15.20.56.08
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:56:09 -0700 (PDT)
Received: by mail-io1-f49.google.com with SMTP id b14so1511605iow.13
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:56:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ef1ff15-9336-411c-9784-9877bede5cae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=WOmuV0PXC9+eL+TwRgVOkvAj15bXYGA8vCpEhrD+JbE=;
        b=g059DIGHeCANFnvnns2yBVMKxVdn/0FkxtkHgCR2ddz4Y/AFKg+NKqmMaDLYlB4qj1
         BsArZxpOBsco7NEiA/zyZqql53JEfmdG9ukr1Ds/Ds48YXG4RGyj9judJY+lLSITQifa
         sC5ELG83uDJxvbEIx4gQavxy9fc+cCcP1s2B4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=WOmuV0PXC9+eL+TwRgVOkvAj15bXYGA8vCpEhrD+JbE=;
        b=i7HEWmIOLt6ET+Oe1CQ7k3Zpk4l0DHDL3F9yQVkeUr9CnRyplwrDIbWXhChhOmNVCS
         FjGD2PHy/yLLTTd0adwPMh8MnXLWBn7YfvqcmcofY6EC1Z6jg9S+KTOZ7NiIqrI6BJjk
         LAOpnwMcP8FaSWXBujxkiyV+fkslNWrB7kFtCx8bswGfwRbga+0IRY/w2X3TL8t9Niim
         FWWiHRhtUJ/uHhjlgEW9GblppGjWDr/FJzGXoxcFXJ/XAiN8APAy6xUntI/9foN2UeJA
         AGfiDwFIXNu4FC+m4B+WUplScIF5/5Lzlv0LzmQBTtbq7JhWXECVkUX0gnl+z3dxeJCA
         L83w==
X-Gm-Message-State: AOAM532gDsSkS56SnW57/p/Nzv4WZo6Ld3bwIKU8FjO1ZosgkwHxbgyi
	cNcT/tiZvMSn12qHbRcTdNCyDv/Pensydw==
X-Google-Smtp-Source: ABdhPJxv9aeeEm/8NfhAdHDw5foWpFTwdDe92diaXTYp0l4yUwoUUi0zAyvaphZOFB74XPeLAEC9TQ==
X-Received: by 2002:a05:6602:54:: with SMTP id z20mr1994277ioz.25.1623815770328;
        Tue, 15 Jun 2021 20:56:10 -0700 (PDT)
X-Received: by 2002:a05:6638:151:: with SMTP id y17mr2223209jao.128.1623815758013;
 Tue, 15 Jun 2021 20:55:58 -0700 (PDT)
MIME-Version: 1.0
References: <20210615132711.553451-1-tientzu@chromium.org>
In-Reply-To: <20210615132711.553451-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 16 Jun 2021 11:55:47 +0800
X-Gmail-Original-Message-ID: <CALiNf29_cCNmfx7NBQQRtTGOk78VNS+fb_Ljf-fC1q1w3FRizA@mail.gmail.com>
Message-ID: <CALiNf29_cCNmfx7NBQQRtTGOk78VNS+fb_Ljf-fC1q1w3FRizA@mail.gmail.com>
Subject: Re: [PATCH v10 00/12] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v11 https://lore.kernel.org/patchwork/cover/1447216/

On Tue, Jun 15, 2021 at 9:27 PM Claire Chang <tientzu@chromium.org> wrote:
>
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
>
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote Wi-Fi exploits: [1a] and [1b], which show a
> full chain of exploits; [2], [3]).
>
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at the firmware level, e.g. the MPU in ATF on some ARM platforms [4]).
>
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
>
> v10:
> Address the comments in v9 to
>   - fix the dev->dma_io_tlb_mem assignment
>   - propagate swiotlb_force setting into io_tlb_default_mem->force
>   - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
>   - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
>   - add swiotlb_ prefix to find_slots and release_slots
>   - merge the 3 alloc/free related patches
>   - move the CONFIG_DMA_RESTRICTED_POOL later
>
> v9:
> Address the comments in v7 to
>   - set swiotlb active pool to dev->dma_io_tlb_mem
>   - get rid of get_io_tlb_mem
>   - dig out the device struct for is_swiotlb_active
>   - move debugfs_create_dir out of swiotlb_create_debugfs
>   - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
>   - use IS_ENABLED in kernel/dma/direct.c
>   - fix redefinition of 'of_dma_set_restricted_buffer'
> https://lore.kernel.org/patchwork/cover/1445081/
>
> v8:
> - Fix reserved-memory.txt and add the reg property in example.
> - Fix sizeof for of_property_count_elems_of_size in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Apply Will's suggestion to try the OF node having DMA configuration in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
> - Add error message for PageHighMem in
>   kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
>   rmem_swiotlb_setup.
> - Fix the message string in rmem_swiotlb_setup.
> https://lore.kernel.org/patchwork/cover/1437112/
>
> v7:
> Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
> https://lore.kernel.org/patchwork/cover/1431031/
>
> v6:
> Address the comments in v5
> https://lore.kernel.org/patchwork/cover/1423201/
>
> v5:
> Rebase on latest linux-next
> https://lore.kernel.org/patchwork/cover/1416899/
>
> v4:
> - Fix spinlock bad magic
> - Use rmem->name for debugfs entry
> - Address the comments in v3
> https://lore.kernel.org/patchwork/cover/1378113/
>
> v3:
> Using only one reserved memory region for both streaming DMA and memory
> allocation.
> https://lore.kernel.org/patchwork/cover/1360992/
>
> v2:
> Building on top of swiotlb.
> https://lore.kernel.org/patchwork/cover/1280705/
>
> v1:
> Using dma_map_ops.
> https://lore.kernel.org/patchwork/cover/1271660/
>
>
> Claire Chang (12):
>   swiotlb: Refactor swiotlb init functions
>   swiotlb: Refactor swiotlb_create_debugfs
>   swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
>   swiotlb: Update is_swiotlb_buffer to add a struct device argument
>   swiotlb: Update is_swiotlb_active to add a struct device argument
>   swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
>   swiotlb: Move alloc_size to swiotlb_find_slots
>   swiotlb: Refactor swiotlb_tbl_unmap_single
>   swiotlb: Add restricted DMA pool initialization
>   swiotlb: Add restricted DMA alloc/free support
>   dt-bindings: of: Add restricted DMA pool
>   of: Add plumbing for restricted DMA pool
>
>  .../reserved-memory/reserved-memory.txt       |  36 ++-
>  drivers/base/core.c                           |   4 +
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
>  drivers/iommu/dma-iommu.c                     |  12 +-
>  drivers/of/address.c                          |  33 +++
>  drivers/of/device.c                           |   3 +
>  drivers/of/of_private.h                       |   6 +
>  drivers/pci/xen-pcifront.c                    |   2 +-
>  drivers/xen/swiotlb-xen.c                     |   2 +-
>  include/linux/device.h                        |   4 +
>  include/linux/swiotlb.h                       |  40 ++-
>  kernel/dma/Kconfig                            |  14 +
>  kernel/dma/direct.c                           |  60 +++--
>  kernel/dma/direct.h                           |   8 +-
>  kernel/dma/swiotlb.c                          | 255 +++++++++++++-----
>  16 files changed, 380 insertions(+), 103 deletions(-)
>
> --
> 2.32.0.272.g935e593368-goog
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:01:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142542.262985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkA-0004mu-Aq; Wed, 16 Jun 2021 04:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142542.262985; Wed, 16 Jun 2021 04:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkA-0004mn-6k; Wed, 16 Jun 2021 04:01:34 +0000
Received: by outflank-mailman (input) for mailman id 142542;
 Wed, 16 Jun 2021 04:01:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMdZ-0000Vk-GW
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:54:45 +0000
Received: from mail-pg1-x52d.google.com (unknown [2607:f8b0:4864:20::52d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c10d58a-d753-4cfc-b2f9-9e89ce0045e1;
 Wed, 16 Jun 2021 03:54:28 +0000 (UTC)
Received: by mail-pg1-x52d.google.com with SMTP id t9so863582pgn.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:54:28 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id w18sm599561pjg.50.2021.06.15.20.54.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:54:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c10d58a-d753-4cfc-b2f9-9e89ce0045e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=bKNX+61KY+497C18UMf9tzB54BN9JmhN17dgRMarCuH6/YI06HcZUO1rsG8qU0UaJg
         9iacuS6rTIiJCommP10+VQN+2EY3YjRv+hgL0/TXA9Ap9/T8W18MYpuKAZHM60A9JzTy
         NcemwkgEoNzTX8rMbFHfcKVugBejDTMzKi1yI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=IH18YKkStAo3AfirP1EsU4gQ8jVAckq9n6lUg2ZdJbvv/fflnCYgC8uyK0hW9F13Bs
         xrad4il2xIiC4mX0ePM3P7a4yu1IOn8/zQCC1FOj3S/AjiPTyvPKro7Kelv5itCC6NMc
         1FuAIETmGSu1TXlML8XZ3hp9cuIzXcsgbi73dStD7RjTuEaGoMomUOyNCEEtjdgvhly9
         uyf56xu++xpSG6avN5V/hfne5shPTh8Vx+k3qNMrGculZM0auRLeeNJO/qAFoZmvq1Ze
         l8Dl/8oYdvTWkRuCoT3JbSlDRIxz7SN5AwOJ6KbXB+UTwecW2GMmb/YDHruQ9pXJy/GM
         fcmA==
X-Gm-Message-State: AOAM531jN3prBxsnLXU0tWCWuO76pFTmmnNv9VSGamfmj5gHt499MwvE
	T3Q6fd3gNIgrciM4et8dj46SMA==
X-Google-Smtp-Source: ABdhPJyE12O0mP5M3Jhr5nm1IKCwmz9NUZqWaZOhZUHV0/ZLS5Hyi/5IPzyx8nJy3XXN3OzKpotPIA==
X-Received: by 2002:a65:58c1:: with SMTP id e1mr2877486pgu.200.1623815667680;
        Tue, 15 Jun 2021 20:54:27 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 11/12] dt-bindings: of: Add restricted DMA pool
Date: Wed, 16 Jun 2021 11:52:39 +0800
Message-Id: <20210616035240.840463-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool child node of the reserved-memory
node.
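
As a minimal usage sketch (node names, addresses and sizes here are
illustrative, not taken from any particular platform), a pool is declared
under reserved-memory and then referenced from the device node it serves:

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* 64 MiB pool that bounces and backs all DMA for its users. */
	restricted_dma_reserved: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x4000000>;
	};
};

wifi@10000000 {
	/* This device's DMA is confined to the pool above. */
	memory-region = <&restricted_dma_reserved>;
};
```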

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:01:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142543.262996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkE-00055A-NX; Wed, 16 Jun 2021 04:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142543.262996; Wed, 16 Jun 2021 04:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkE-000553-KN; Wed, 16 Jun 2021 04:01:38 +0000
Received: by outflank-mailman (input) for mailman id 142543;
 Wed, 16 Jun 2021 04:01:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMdU-0000Vk-GH
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:54:40 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8e82e96-cf10-44e0-b0e2-19f4771421fd;
 Wed, 16 Jun 2021 03:54:19 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id v12so421250plo.10
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:54:19 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id 65sm520959pfu.159.2021.06.15.20.54.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:54:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8e82e96-cf10-44e0-b0e2-19f4771421fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=DPLvw3+EdPjUpi9a0TyEMJVhnK8+AJG3DzRiP5BtscM=;
        b=mP9OES9rpyWNpVFyVCvjbhItqfqOCRbdjxwc3KJ3X06OSq+cTAcrpZ/aexZ9COgrCU
         2qqEGuS6PrCPpsiRki/Eik25TmSyeCDSre2DzqSqCUzXpsuY+KYlrzlw4CBdZ/xelHWW
         9SqDFfmypS+tW8WqWzUdM35uw6ow+BJWFjA5Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=DPLvw3+EdPjUpi9a0TyEMJVhnK8+AJG3DzRiP5BtscM=;
        b=UjRx+ufi56+ZeXbG/JMGpj10WbRqsKmMb253vRBmrq63JurAfiJwfYGEYoIp7oACrU
         KF+Q5V6NJ0fQ2xAg/sHuChaFEpTqVHpbe9l9aCEeQyovjU+wHuvk6VRE7NlJISsaRGVX
         BuA/pWejjkk4ENiDiwcHc3U22kGX/FJEebiwx7wky8tZTSbbHLJhzbMgfSSLv+0adVSX
         uiFHRJHTiGcDt+U5K6LstvKaLAGh4IQBUwh2V4W7dmTCOSLWVUsISFc3oWblyO9rCTsF
         6CD7Q8MJyKKulLWihWbou7VCTApsBHcRa920jhU919fOUoDyPSr60cwHMURH631pXWR9
         oEQg==
X-Gm-Message-State: AOAM531Ddr+5yv95EO4xBSYeUk921ueU8i9NsI/n1ch0MEGtf9msWbLq
	kD5hzcVbqmJzm067mtDb9Abrkg==
X-Google-Smtp-Source: ABdhPJy034QImhRnUwmNLV0FnjZxbsERhUc/r2tZZ9Z6RsabHN+NDjXW85B7lWN6LThYtY+4gRVOCw==
X-Received: by 2002:a17:902:7b98:b029:11d:455b:a70b with SMTP id w24-20020a1709027b98b029011d455ba70bmr6991978pll.35.1623815659058;
        Tue, 15 Jun 2021 20:54:19 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 10/12] swiotlb: Add restricted DMA pool initialization
Date: Wed, 16 Jun 2021 11:52:38 +0800
Message-Id: <20210616035240.840463-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

Restricted DMA pools provide a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.
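
For reference, a minimal sketch of a reserved-memory node that this
setup function would match (node names, addresses, and sizes below are
purely illustrative, not taken from this series):

```dts
/* Illustrative only: a pool that rmem_swiotlb_setup() would claim via
 * the "restricted-dma-pool" compatible string. The region must be
 * accessible within the linear mapping and must not carry "no-map",
 * "reusable", or the linux,*-default properties. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>; /* 4 MiB */
	};
};
```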

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 75 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2d5ec670e064..9616346b727f 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6ad85b48f101..f3f271f7e272 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -742,4 +749,72 @@ bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 	return true;
 }
 
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force = true;
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:01:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142545.263007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkK-0005R6-1s; Wed, 16 Jun 2021 04:01:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142545.263007; Wed, 16 Jun 2021 04:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkJ-0005Qz-TJ; Wed, 16 Jun 2021 04:01:43 +0000
Received: by outflank-mailman (input) for mailman id 142545;
 Wed, 16 Jun 2021 04:01:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMdj-0000Vk-Gy
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:54:55 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 476a1967-7599-4c5e-8952-72aaad3958f4;
 Wed, 16 Jun 2021 03:54:37 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id
 fy24-20020a17090b0218b029016c5a59021fso3175536pjb.0
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:54:37 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id bv3sm3688867pjb.1.2021.06.15.20.54.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:54:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 476a1967-7599-4c5e-8952-72aaad3958f4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4psftO+uH/TVnoki1qOI27Xe5kV3FYGMoE+zQAlSb7o=;
        b=Li4AEy2JF5gPy9ukpQUJjDPB8rq/Zch13XpWyeO7bHztoc5ctZfpP5asNBSNDWCRJ9
         ebBl8RVKocmQh3z5hITYqIYweSnFDje1ai0AgaL9yWsFuJH6pOsddoNur+1F3lLEmeDF
         LM382R5+plRtaFQ3A0pI0SFPAyiAdN6Elek1g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4psftO+uH/TVnoki1qOI27Xe5kV3FYGMoE+zQAlSb7o=;
        b=lbQgORlVLTIdkMaNt+GCuJx9JAVzPK9Q8nOc+20F97Kt54ijZdCIGCQF4X8hrp0mcQ
         ulf9mw3q+YuhxQlT9sIxczcblQdW5SzA+vTW6OQm1ibBVaKkOrR6HAk4qS3RAHn6eZ3X
         hl3QSzPy9Bm1GbBcIGqRYV9zf4L9XEYDEQp5HElQ/ml/3lZ3CJ1QpkatSS+qkCDO9PGT
         YF/9+wSTMJ0X4AWpkUpvfkjYOqCPEvxLQWwatzLxXxKQ46FPNB4nO3tNsUvmFW1ZVjZg
         hXJ/z/5Mrht/18qAyZlrB9LcNU9dejP5UMdQ5rzQc5oCVv/EaPMXBZ7M4Auvx/f0aiKJ
         /SQw==
X-Gm-Message-State: AOAM5313BfdmMWoqe35evNg2+JNNr2+wvh1S/KacFgmHuxOVmkrV5XwZ
	SwuNnkKiRIrmV07sX7XvOUzJmtCAYKHzZw==
X-Google-Smtp-Source: ABdhPJxIA/TwmqjYIYiJDhgzfrw1oBecMGqUYPCqotzqYdZPBQm2bHhvFmGRXaCctCumXjdHnaXVuA==
X-Received: by 2002:a17:902:d645:b029:11d:d075:cf43 with SMTP id y5-20020a170902d645b029011dd075cf43mr4991840plh.14.1623815676876;
        Tue, 15 Jun 2021 20:54:36 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 12/12] of: Add plumbing for restricted DMA pool
Date: Wed, 16 Jun 2021 11:52:40 +0800
Message-Id: <20210616035240.840463-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
restricted DMA when a restricted-dma-pool is present.
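
As an illustrative sketch of the lookup this patch performs, a device
opts in by listing a restricted-dma-pool node in its memory-region
property (all node names, compatibles, and addresses below are
hypothetical):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

/* of_dma_set_restricted_buffer() walks this device's "memory-region"
 * entries and attaches the first available "restricted-dma-pool". */
example_dev: example-device@f0000000 {
	compatible = "vendor,example-device"; /* hypothetical */
	reg = <0x0 0xf0000000 0x0 0x1000>;
	memory-region = <&restricted_dma>;
};
```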

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..cdf700fba5c4 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1022,6 +1023,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 6cb86de404f1..e68316836a7a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..25cebbed5f02 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:01:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142546.263012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkK-0005Uh-FQ; Wed, 16 Jun 2021 04:01:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142546.263012; Wed, 16 Jun 2021 04:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMkK-0005TO-6K; Wed, 16 Jun 2021 04:01:44 +0000
Received: by outflank-mailman (input) for mailman id 142546;
 Wed, 16 Jun 2021 04:01:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMdP-0000Vk-G0
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 03:54:35 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df05fa54-7f7a-4bb8-91b5-b8b0a92732ec;
 Wed, 16 Jun 2021 03:54:11 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id i34so848555pgl.9
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 20:54:11 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id b80sm527132pfb.151.2021.06.15.20.54.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 20:54:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df05fa54-7f7a-4bb8-91b5-b8b0a92732ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ZXmU6yzP+hOGy6a5w7xUx+op0fIxrN6dQVFWjI9Fg98=;
        b=H3J253tStSd+Sippm5y1kLb/sxBHtsc81GVn7xqCMuznl+uIejEja5CpvAgNOkDgUV
         rW2jSd2SeApcedRqrCHthelTfcKnVhlQe6ePU9+UnSNNyaqurkjBrgXlhQZ38oVzAhmS
         uYETYDjjS42EMVE+V8ROo6uxeafX55HJgDW3s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ZXmU6yzP+hOGy6a5w7xUx+op0fIxrN6dQVFWjI9Fg98=;
        b=OQthr9ASk4lt2sAvDFN2A5Xv2A+Skx5hjsALH7ZKBZfVEhZAmo7WJhV0GH64NVST7p
         rZyo5JpPae1gmRaNnfE9SzGOHJr3j74yn2tfeS178OjpL7Qwaw+4+UiZLpgEewCDnOpc
         NXhlfow69gnyFjxEj65IKHGTfX9LWXtZXcSBZyMbKBqnWhmEJLMIWmbxszwSGTXKzfE1
         J3Z5NIpPLMn1IroF4BDUH+6rVNFH/r7RKKkU16/6JMXQ2EVpvtqY/EsKpYyZ7d77vfYr
         Pz/eLZXxQqx+ni2Dnn46Evfda98/nkYJNuZ4eSzWAsec/NEGUvjU74PnZcxfCh2q+qQj
         5ngQ==
X-Gm-Message-State: AOAM533ajV2j57uz90RFGluW7vevnzbsWMLkyw3FBB5/XqALI9s377z3
	z/uVYkBltHQE/5H7fRKCqPITdA==
X-Google-Smtp-Source: ABdhPJweQmr2eF6Xr1UvMmJWyzNtto6ofAzJzKlojyKwdI6zZ2VnCta/4IZcR5xAxNSgDQaSW5LnWQ==
X-Received: by 2002:a62:b50b:0:b029:2fc:db53:a56a with SMTP id y11-20020a62b50b0000b02902fcdb53a56amr2501590pfe.30.1623815650449;
        Tue, 15 Jun 2021 20:54:10 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Wed, 16 Jun 2021 11:52:37 +0800
Message-Id: <20210616035240.840463-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
References: <20210616035240.840463-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} to support memory allocation
from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool by shared-dma-pool and use
dma_alloc_from_dev_coherent instead for atomic coherent allocation.
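
To illustrate the note above (a hypothetical layout, not part of this
patch): a device that needs atomic coherent allocations alongside
restricted streaming DMA would pair the restricted pool with a separate
shared-dma-pool coherent pool:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Streaming DMA bounces through this pool (swiotlb_alloc/free). */
	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};

	/* Atomic coherent allocations come from a device coherent pool
	 * (dma_alloc_from_dev_coherent), since swiotlb_alloc does not
	 * support the remapping coherent allocation requires. */
	coherent_dma: coherent-dma@50400000 {
		compatible = "shared-dma-pool";
		reg = <0x0 0x50400000 0x0 0x100000>;
		no-map;
	};
};
```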

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h | 15 +++++++++++++
 kernel/dma/direct.c     | 50 ++++++++++++++++++++++++++++++-----------
 kernel/dma/swiotlb.c    | 45 +++++++++++++++++++++++++++++++++++--
 3 files changed, 95 insertions(+), 15 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index efcd56e3a16c..2d5ec670e064 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -156,4 +156,19 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 3713461d6fe0..da0e09621230 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,7 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+	}
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -142,7 +160,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +173,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +260,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +270,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +296,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +306,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +334,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +353,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fec4934b9926..6ad85b48f101 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -462,8 +462,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -702,3 +703,43 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	/*
+	 * Skip io_tlb_default_mem since swiotlb_alloc doesn't support atomic
+	 * coherent allocation; otherwise existing devices might break.
+	 * One must set up another device coherent pool by shared-dma-pool and
+	 * use dma_alloc_from_dev_coherent instead for atomic coherent
+	 * allocation to avoid memory remapping.
+	 */
+	if (!mem || mem == io_tlb_default_mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:10:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142574.263029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMt0-0008Fm-Gz; Wed, 16 Jun 2021 04:10:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142574.263029; Wed, 16 Jun 2021 04:10:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltMt0-0008Fd-Dz; Wed, 16 Jun 2021 04:10:42 +0000
Received: by outflank-mailman (input) for mailman id 142574;
 Wed, 16 Jun 2021 04:10:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltMsy-0008FX-RZ
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 04:10:40 +0000
Received: from mail-pl1-x630.google.com (unknown [2607:f8b0:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e203b4b-9854-4888-ab43-ca32fd1a0f39;
 Wed, 16 Jun 2021 04:10:39 +0000 (UTC)
Received: by mail-pl1-x630.google.com with SMTP id b12so434473plg.11
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 21:10:39 -0700 (PDT)
Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com.
 [209.85.210.171])
 by smtp.gmail.com with ESMTPSA id q3sm574733pfj.89.2021.06.15.21.10.37
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 21:10:38 -0700 (PDT)
Received: by mail-pf1-f171.google.com with SMTP id k6so1097700pfk.12
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 21:10:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e203b4b-9854-4888-ab43-ca32fd1a0f39
MIME-Version: 1.0
References: <20210616035240.840463-1-tientzu@chromium.org> <20210616035240.840463-10-tientzu@chromium.org>
In-Reply-To: <20210616035240.840463-10-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 16 Jun 2021 12:04:16 +0800
X-Gmail-Original-Message-ID: <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com>
Message-ID: <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com>
Subject: Re: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free support
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 16, 2021 at 11:54 AM Claire Chang <tientzu@chromium.org> wrote:
>
> Add the functions swiotlb_{alloc,free} to support memory allocation
> from the restricted DMA pool.
>
> The restricted DMA pool is preferred if available.
>
> Note that since coherent allocation needs remapping, one must set up
> another device coherent pool by shared-dma-pool and use
> dma_alloc_from_dev_coherent instead for atomic coherent allocation.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/swiotlb.h | 15 +++++++++++++
>  kernel/dma/direct.c     | 50 ++++++++++++++++++++++++++++++-----------
>  kernel/dma/swiotlb.c    | 45 +++++++++++++++++++++++++++++++++++--
>  3 files changed, 95 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index efcd56e3a16c..2d5ec670e064 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -156,4 +156,19 @@ static inline void swiotlb_adjust_size(unsigned long size)
>  extern void swiotlb_print_info(void);
>  extern void swiotlb_set_max_segment(unsigned int);
>
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size);
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size);
> +#else
> +static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +       return NULL;
> +}
> +static inline bool swiotlb_free(struct device *dev, struct page *page,
> +                               size_t size)
> +{
> +       return false;
> +}
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +
>  #endif /* __LINUX_SWIOTLB_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 3713461d6fe0..da0e09621230 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>                 min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
>  }
>
> +static void __dma_direct_free_pages(struct device *dev, struct page *page,
> +                                   size_t size)
> +{
> +       if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> +           swiotlb_free(dev, page, size))
> +               return;
> +       dma_free_contiguous(dev, page, size);
> +}
> +
>  static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>                 gfp_t gfp)
>  {
> @@ -86,7 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>
>         gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
>                                            &phys_limit);
> -       page = dma_alloc_contiguous(dev, size, gfp);
> +       if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL)) {
> +               page = swiotlb_alloc(dev, size);
> +               if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> +                       __dma_direct_free_pages(dev, page, size);
> +                       return NULL;
> +               }
> +       }
> +
> +       if (!page)
> +               page = dma_alloc_contiguous(dev, size, gfp);
>         if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
>                 dma_free_contiguous(dev, page, size);
>                 page = NULL;
> @@ -142,7 +160,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>                 gfp |= __GFP_NOWARN;
>
>         if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -           !force_dma_unencrypted(dev)) {
> +           !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
>                 page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
>                 if (!page)
>                         return NULL;
> @@ -155,18 +173,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>         }
>
>         if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -           !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -           !dev_is_dma_coherent(dev))
> +           !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +           !is_dev_swiotlb_force(dev))
>                 return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);

Just noticed that after propagating swiotlb_force setting into
io_tlb_default_mem->force, the memory allocation behavior for
swiotlb_force will change (i.e. always skipping arch_dma_alloc and
dma_direct_alloc_from_pool).

>
>         /*
>          * Remapping or decrypting memory may block. If either is required and
>          * we can't block, allocate the memory from the atomic pools.
> +        * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
> +        * set up another device coherent pool by shared-dma-pool and use
> +        * dma_alloc_from_dev_coherent instead.
>          */
>         if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
>             !gfpflags_allow_blocking(gfp) &&
>             (force_dma_unencrypted(dev) ||
> -            (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> +            (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> +             !dev_is_dma_coherent(dev))) &&
> +           !is_dev_swiotlb_force(dev))
>                 return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

And here.

>
>         /* we always manually zero the memory once we are done */
> @@ -237,7 +260,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>                         return NULL;
>         }
>  out_free_pages:
> -       dma_free_contiguous(dev, page, size);
> +       __dma_direct_free_pages(dev, page, size);
>         return NULL;
>  }
>
> @@ -247,15 +270,15 @@ void dma_direct_free(struct device *dev, size_t size,
>         unsigned int page_order = get_order(size);
>
>         if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -           !force_dma_unencrypted(dev)) {
> +           !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
>                 /* cpu_addr is a struct page cookie, not a kernel address */
>                 dma_free_contiguous(dev, cpu_addr, size);
>                 return;
>         }
>
>         if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -           !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -           !dev_is_dma_coherent(dev)) {
> +           !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +           !is_dev_swiotlb_force(dev)) {
>                 arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
>                 return;
>         }
> @@ -273,7 +296,7 @@ void dma_direct_free(struct device *dev, size_t size,
>         else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>                 arch_dma_clear_uncached(cpu_addr, size);
>
> -       dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
> +       __dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
>  }
>
>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> @@ -283,7 +306,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>         void *ret;
>
>         if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> -           force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> +           force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> +           !is_dev_swiotlb_force(dev))
>                 return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
>         page = __dma_direct_alloc_pages(dev, size, gfp);
> @@ -310,7 +334,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>         *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
>         return page;
>  out_free_pages:
> -       dma_free_contiguous(dev, page, size);
> +       __dma_direct_free_pages(dev, page, size);
>         return NULL;
>  }
>
> @@ -329,7 +353,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
>         if (force_dma_unencrypted(dev))
>                 set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
>
> -       dma_free_contiguous(dev, page, size);
> +       __dma_direct_free_pages(dev, page, size);
>  }
>
>  #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index fec4934b9926..6ad85b48f101 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -462,8 +462,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
>
>         index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
>         do {
> -               if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> -                   (orig_addr & iotlb_align_mask)) {
> +               if (orig_addr &&
> +                   (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> +                           (orig_addr & iotlb_align_mask)) {
>                         index = wrap_index(mem, index + 1);
>                         continue;
>                 }
> @@ -702,3 +703,43 @@ static int __init swiotlb_create_default_debugfs(void)
>  late_initcall(swiotlb_create_default_debugfs);
>
>  #endif
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +       phys_addr_t tlb_addr;
> +       int index;
> +
> +       /*
> +        * Skip io_tlb_default_mem since swiotlb_alloc doesn't support atomic
> +        * coherent allocation; otherwise existing devices might break.
> +        * One must set up another device coherent pool by shared-dma-pool and
> +        * use dma_alloc_from_dev_coherent instead for atomic coherent
> +        * allocation to avoid memory remapping.
> +        */
> +       if (!mem || mem == io_tlb_default_mem)
> +               return NULL;
> +
> +       index = swiotlb_find_slots(dev, 0, size);
> +       if (index == -1)
> +               return NULL;
> +
> +       tlb_addr = slot_addr(mem->start, index);
> +
> +       return pfn_to_page(PFN_DOWN(tlb_addr));
> +}
> +
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size)
> +{
> +       phys_addr_t tlb_addr = page_to_phys(page);
> +
> +       if (!is_swiotlb_buffer(dev, tlb_addr))
> +               return false;
> +
> +       swiotlb_release_slots(dev, tlb_addr);
> +
> +       return true;
> +}
> +
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> --
> 2.32.0.272.g935e593368-goog
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 04:59:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 04:59:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142582.263040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNeA-00048h-6s; Wed, 16 Jun 2021 04:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142582.263040; Wed, 16 Jun 2021 04:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNeA-00048a-3d; Wed, 16 Jun 2021 04:59:26 +0000
Received: by outflank-mailman (input) for mailman id 142582;
 Wed, 16 Jun 2021 04:59:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltNe8-00048U-8O
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 04:59:24 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3f4d367-b013-4d51-aa9b-050c7182e9ed;
 Wed, 16 Jun 2021 04:59:22 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9F8EA68AFE; Wed, 16 Jun 2021 06:59:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3f4d367-b013-4d51-aa9b-050c7182e9ed
Date: Wed, 16 Jun 2021 06:59:18 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free
 support
Message-ID: <20210616045918.GA27537@lst.de>
References: <20210616035240.840463-1-tientzu@chromium.org> <20210616035240.840463-10-tientzu@chromium.org> <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 12:04:16PM +0800, Claire Chang wrote:
> Just noticed that after propagating swiotlb_force setting into
> io_tlb_default_mem->force, the memory allocation behavior for
> swiotlb_force will change (i.e. always skipping arch_dma_alloc and
> dma_direct_alloc_from_pool).

Yes, I think we need to split a "use_for_alloc" flag from the force flag.
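[Editor's note: a minimal userspace sketch of the flag split suggested above. The field names `force` and `use_for_alloc` follow the discussion; the struct layout and helper are illustrative only, not the actual kernel code.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: the real struct io_tlb_mem has many more fields. */
struct io_tlb_mem {
	bool force;		/* swiotlb=force: bounce all streaming DMA */
	bool use_for_alloc;	/* restricted pool: also serve coherent allocs */
};

/*
 * Decide whether dma_direct_alloc() should skip its fallback paths
 * (arch_dma_alloc(), the atomic pools) for this device. With a single
 * "force" flag, swiotlb=force devices would skip them too; gating on a
 * separate use_for_alloc flag preserves their old allocation behavior.
 */
static bool skip_fallback_paths(const struct io_tlb_mem *mem)
{
	return mem && mem->use_for_alloc;
}
```

With this split, a `swiotlb=force` boot parameter only affects bouncing, while only devices bound to a restricted pool intended for allocation bypass `arch_dma_alloc` and `dma_direct_alloc_from_pool`.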


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 05:10:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 05:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142589.263051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNoq-0006cP-6S; Wed, 16 Jun 2021 05:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142589.263051; Wed, 16 Jun 2021 05:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNoq-0006cI-3N; Wed, 16 Jun 2021 05:10:28 +0000
Received: by outflank-mailman (input) for mailman id 142589;
 Wed, 16 Jun 2021 05:10:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltNon-0006c7-Td
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 05:10:26 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98a75f98-a159-457e-a61d-0feb4d5e27d1;
 Wed, 16 Jun 2021 05:10:25 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id j184so1381837qkd.6
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 22:10:25 -0700 (PDT)
Received: from mail-qk1-f172.google.com (mail-qk1-f172.google.com.
 [209.85.222.172])
 by smtp.gmail.com with ESMTPSA id v28sm882503qkg.22.2021.06.15.22.10.23
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 22:10:24 -0700 (PDT)
Received: by mail-qk1-f172.google.com with SMTP id c5so1395814qka.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 22:10:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98a75f98-a159-457e-a61d-0feb4d5e27d1
MIME-Version: 1.0
References: <20210616035240.840463-1-tientzu@chromium.org> <20210616035240.840463-10-tientzu@chromium.org>
 <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com> <20210616045918.GA27537@lst.de>
In-Reply-To: <20210616045918.GA27537@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 16 Jun 2021 13:10:02 +0800
X-Gmail-Original-Message-ID: <CALiNf2-+vL8rw5fi=DcR=V7d55Ls3-OXoxC87Pvrf1Kz14D_+A@mail.gmail.com>
Message-ID: <CALiNf2-+vL8rw5fi=DcR=V7d55Ls3-OXoxC87Pvrf1Kz14D_+A@mail.gmail.com>
Subject: Re: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free support
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 16, 2021 at 12:59 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Wed, Jun 16, 2021 at 12:04:16PM +0800, Claire Chang wrote:
> > Just noticed that after propagating swiotlb_force setting into
> > io_tlb_default_mem->force, the memory allocation behavior for
> > swiotlb_force will change (i.e. always skipping arch_dma_alloc and
> > dma_direct_alloc_from_pool).
>
> Yes, I think we need to split a "use_for_alloc" flag from the force flag.

How about splitting is_dev_swiotlb_force into is_swiotlb_force_bounce
(io_tlb_mem->force_bounce) and is_swiotlb_force_alloc
(io_tlb_mem->force_alloc)?
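[Editor's note: a userspace sketch of the two helpers proposed above. The helper and field names (`force_bounce`, `force_alloc`) come from the proposal; the struct layout and `dev->dma_io_tlb_mem` shape are assumptions for illustration, not the actual kernel code.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for the real kernel structures. */
struct io_tlb_mem {
	bool force_bounce;	/* bounce-buffer all streaming DMA */
	bool force_alloc;	/* serve coherent allocations from this pool */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* Would replace the single is_dev_swiotlb_force() check in the
 * streaming-DMA (bounce) paths. */
static bool is_swiotlb_force_bounce(const struct device *dev)
{
	return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
}

/* Would replace it in the dma_direct_alloc()/dma_direct_free() paths. */
static bool is_swiotlb_force_alloc(const struct device *dev)
{
	return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_alloc;
}
```

Each caller then picks exactly the semantic it needs, so propagating `swiotlb_force` into `force_bounce` no longer changes the allocation behavior.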


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 05:18:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 05:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142595.263062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNws-0007Lf-0j; Wed, 16 Jun 2021 05:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142595.263062; Wed, 16 Jun 2021 05:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltNwr-0007LY-Tq; Wed, 16 Jun 2021 05:18:45 +0000
Received: by outflank-mailman (input) for mailman id 142595;
 Wed, 16 Jun 2021 05:18:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltNwr-0007LS-8i
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 05:18:45 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d7f71cb-0e37-435b-b809-5e9e5642ff32;
 Wed, 16 Jun 2021 05:18:44 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id EC67D68AFE; Wed, 16 Jun 2021 07:18:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d7f71cb-0e37-435b-b809-5e9e5642ff32
Date: Wed, 16 Jun 2021 07:18:39 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free
 support
Message-ID: <20210616051839.GA27982@lst.de>
References: <20210616035240.840463-1-tientzu@chromium.org> <20210616035240.840463-10-tientzu@chromium.org> <CALiNf28=3vqAs+8HsjyBGOiPNR2F3yT6OGnLpZH_AkWqgTqgOA@mail.gmail.com> <20210616045918.GA27537@lst.de> <CALiNf2-+vL8rw5fi=DcR=V7d55Ls3-OXoxC87Pvrf1Kz14D_+A@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2-+vL8rw5fi=DcR=V7d55Ls3-OXoxC87Pvrf1Kz14D_+A@mail.gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 01:10:02PM +0800, Claire Chang wrote:
> On Wed, Jun 16, 2021 at 12:59 PM Christoph Hellwig <hch@lst.de> wrote:
> >
> > On Wed, Jun 16, 2021 at 12:04:16PM +0800, Claire Chang wrote:
> > > Just noticed that after propagating swiotlb_force setting into
> > > io_tlb_default_mem->force, the memory allocation behavior for
> > > swiotlb_force will change (i.e. always skipping arch_dma_alloc and
> > > dma_direct_alloc_from_pool).
> >
> > Yes, I think we need to split a "use_for_alloc" flag from the force flag.
> 
> How about splitting is_dev_swiotlb_force into is_swiotlb_force_bounce
> (io_tlb_mem->force_bounce) and is_swiotlb_force_alloc
> (io_tlb_mem->force_alloc)?

Yes, something like that.  I'd probably not use "force" for the alloc side,
given that we otherwise never allocate from the swiotlb buffer.


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142604.263073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwG-0005SV-Mi; Wed, 16 Jun 2021 06:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142604.263073; Wed, 16 Jun 2021 06:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwG-0005SO-JL; Wed, 16 Jun 2021 06:22:12 +0000
Received: by outflank-mailman (input) for mailman id 142604;
 Wed, 16 Jun 2021 06:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwF-0005SI-4T
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:11 +0000
Received: from mail-pg1-x52c.google.com (unknown [2607:f8b0:4864:20::52c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4236e6e1-f329-411c-965e-5998b5c2ccf4;
 Wed, 16 Jun 2021 06:22:09 +0000 (UTC)
Received: by mail-pg1-x52c.google.com with SMTP id e22so1103274pgv.10
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:09 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id u23sm1279056pgk.38.2021.06.15.23.22.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4236e6e1-f329-411c-965e-5998b5c2ccf4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A32xrBsbPeovZovK0BkQExsY3Uwla9z4m9ISmWRwQ4Q=;
        b=j4wZwZXG0R+C6l3S1lrI7hGfaXnSn19RNS7vjtBQrNkqQDWqcJYOFHNXcmfHG27B/8
         9aIXslwTNUvPH8Zu9ziKFpRmStSai2lGKBLvJ5nxUEUYOJhpYqgSmlnaKMnwWUxuddAC
         y3ulIbRnmS1EEccw/JXXUr9ZpvGNblmdteF58=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A32xrBsbPeovZovK0BkQExsY3Uwla9z4m9ISmWRwQ4Q=;
        b=WXzDNTwzBCAaC3P2IaGIoQtqCg9F1/MRcKrBvDIFmYF/g435yBOHzCW7F34zwiP5Sf
         Y5BziO+g1Y31EE1oplzG6NDW7q/odlKEUaTuhsx3O52fSsZfmK+Bm2w7u8c3KBAVLHHo
         /y0Y/3C+6FaK9P8hiZeNLfvVFoPoOsAuPBoIXzj7tquOVYkeF3uf/+5O48PCXO8aBQaa
         tvMLXwnff2dfK7Wl2d8oAY4WbQ+ohN6B6nX2M/v8J+y+Qa63wFvp3JRxCFoWJwZ3z2x2
         nhQ58U0q9Qj5aO3oLqJXsTNHFBhFkW0ARyUrkUinq6ieV9VjJDD4TnIJALHl7uq95pDx
         X3Sw==
X-Gm-Message-State: AOAM530GQevjS7BpI75/CY+d5n0en+VXJ3iqMlEnQ+ayFPOO9KnfxwtZ
	ULftIRyD6OMm5Mu3M9Dsjey7GQ==
X-Google-Smtp-Source: ABdhPJw+JiP7kSBYYLgC3gcIAQAiyi2/aeKU3DP+AZEyvB31JMim//viADi8ocyrGRpTYSK2qvoyug==
X-Received: by 2002:a63:e245:: with SMTP id y5mr3404254pgj.171.1623824529065;
        Tue, 15 Jun 2021 23:22:09 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 00/12] Restricted DMA
Date: Wed, 16 Jun 2021 14:21:45 +0800
Message-Id: <20210616062157.953777-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. Since PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a full
chain of exploits; see also [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
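
With the binding added later in this series (patch 11/12), such a pool is
declared as a reserved-memory node and tied to the device via the standard
memory-region property. A minimal fragment might look like this; the
addresses, sizes, and node labels are placeholders, not values from any
real platform:

```dts
/* Hypothetical fragment: addresses and labels are placeholders. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	wifi_restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;	/* 4 MiB pool */
	};
};

/* The Wi-Fi device then points at the pool: */
&wifi {
	memory-region = <&wifi_restricted_dma>;
};
```

All streaming DMA for that device is then bounced through, and coherent
allocations served from, the declared region.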

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132

v12:
Split is_dev_swiotlb_force into is_swiotlb_force_bounce (patch 06/12) and
is_swiotlb_for_alloc (patch 09/12)

v11:
- Rebase against swiotlb devel/for-linus-5.14
- s/mempry/memory/g
- exchange the order of patch 09/12 and 10/12
https://lore.kernel.org/patchwork/cover/1447216/

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later
https://lore.kernel.org/patchwork/cover/1446882/

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/


Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Add restricted DMA pool initialization
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  51 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  59 +++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 249 +++++++++++++-----
 16 files changed, 385 insertions(+), 102 deletions(-)

-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142605.263084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwO-0005jk-Uh; Wed, 16 Jun 2021 06:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142605.263084; Wed, 16 Jun 2021 06:22:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwO-0005jd-Rh; Wed, 16 Jun 2021 06:22:20 +0000
Received: by outflank-mailman (input) for mailman id 142605;
 Wed, 16 Jun 2021 06:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwN-0005j0-5e
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:19 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e29148ab-aaad-4f83-9ea7-8c09b98a8e37;
 Wed, 16 Jun 2021 06:22:18 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id q15so1101429pgg.12
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:18 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id b21sm1100157pgj.74.2021.06.15.23.22.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e29148ab-aaad-4f83-9ea7-8c09b98a8e37
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=w8jpaGfzoLiwzikPo4cNB/z3ajmfRt4xSbdunsHd/ZI=;
        b=jTt9XUv4c1VjZCWjqcVf0OaKHSTM9qmeLPQt+E1mw44/Li4ZuKLF18RfGpTBv+Tid+
         nNOVbaYyhWI150qfN2S9ojFKh1FavQDj6Z6EHE2L+OIk+zrqnwXrhwMp/ayMHUZgSc6S
         YgAnH6iJxwdJ6kj8kmg0G1HcyFcGfORXwT+L0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=w8jpaGfzoLiwzikPo4cNB/z3ajmfRt4xSbdunsHd/ZI=;
        b=lsQB1GfnBKdVKsPNh/UZvM2s8kqgP/VITDtL0uamKbWqEGPSVey7VaPuAlnxBzRi7w
         Eq6db/NqEH82n0oYM75EgqZv1CkEdA/wjfHx/VylRYDGwAefAhpAAKvpaItF+3ykBolg
         md6AN2DVEdJFdnFYOI4f5IDoU8252lPlRuPvepnvpBgj7OubT/2NhdNwh6KYwXDHAd5f
         pFiiDLascAIwL2zA+6+g5C8GN4qTO7nWFdq+lg3C4bBsozZ2j6pE1Vo5wv4WNgVFOtqJ
         iVtUsQFolUg8E1WLLJaZ1PXnhyjOksj/VEKqi9IHxK18iUv6JQI0fG1FXjp9xDowAoeh
         QwYw==
X-Gm-Message-State: AOAM533Ji2jMbvgv6ccImOH6KhXeo0y0WkrAhXRaLyHaBLbu/O1VBa4v
	AXyaCDj43WV09VfrjXCue2Pq2A==
X-Google-Smtp-Source: ABdhPJxpe+iBf1VRMAYcdNKErtktFJS8QnINRpm76Af1BX1/Se6xXKJpZTxwC+DgLqnfRBuhk3siqg==
X-Received: by 2002:a63:4c8:: with SMTP id 191mr3477163pge.217.1623824537639;
        Tue, 15 Jun 2021 23:22:17 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 01/12] swiotlb: Refactor swiotlb init functions
Date: Wed, 16 Jun 2021 14:21:46 +0800
Message-Id: <20210616062157.953777-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 49 ++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 52e2ac526757..3ba0f08a39a1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,8 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142606.263095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwX-00066k-73; Wed, 16 Jun 2021 06:22:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142606.263095; Wed, 16 Jun 2021 06:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwX-00066X-38; Wed, 16 Jun 2021 06:22:29 +0000
Received: by outflank-mailman (input) for mailman id 142606;
 Wed, 16 Jun 2021 06:22:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwV-00065R-Ow
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:27 +0000
Received: from mail-pj1-x1031.google.com (unknown [2607:f8b0:4864:20::1031])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e131413-abd5-4670-85c2-77d3c2be2ad7;
 Wed, 16 Jun 2021 06:22:27 +0000 (UTC)
Received: by mail-pj1-x1031.google.com with SMTP id
 i11-20020a17090a2acbb029016f0cce7c3fso912348pjg.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:27 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id 21sm1024194pfy.92.2021.06.15.23.22.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e131413-abd5-4670-85c2-77d3c2be2ad7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=U6ZYMbmpZtiKwFDcQ7JFrtewUpbfgXD5Ls9ZoD2Zrzg=;
        b=PJvHRcLLv4fhlWeXvSfKMQcWBxKOg+Fvvldtj3Y2Rwy0NG+fV8esg1TcKHxfLuji8x
         fALYR4BBGda9gTcNxGVF5jiXo1hdqXLQ9/EojERkZznhsp+lq9qt4VGs9UqStorOUfPe
         QcYH5iC4/YfxuhCJrGU8L8Q0LpoEXtWiU0umg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=U6ZYMbmpZtiKwFDcQ7JFrtewUpbfgXD5Ls9ZoD2Zrzg=;
        b=XFyrsUgnoDjKxePY9JvFfTnuPHu5aI3NwxkhcnumHTvR3lD9G5laVHUZxjOyBdXint
         vcpnMpGaafBT8k3EaxIJldQ/jzysMROKiQz+UDl2nEdmlOLv2gxpzGvhwo8E3mMR1lgP
         hg1Eygk/dOgFIP/1C9sDt1ttJ/F38WJwFL+TPExw2U6vsi+BLkqxX5AmPP0CToWmDLbM
         1pSGxbHbfRgtHO9xDHR1eghmidEngevz3wghmcA/54bHZrZF9GV4anKLCIVJtCjD9+yd
         Nypn0kIW0hbPLtDTxplP42Xqug/Ah4FSLkPE5YBzVefxMgEY3Iqo/gHAbzVN+ImM50wA
         CIHw==
X-Gm-Message-State: AOAM5333YX66TUefkJNy9SSUMJcahxl+kJutMgUa27apDIMZBHGKpPGl
	V4Y4y3Gaj3EoH7kLx2ExRSAERQ==
X-Google-Smtp-Source: ABdhPJyVyel5b3yfzmvkqrYXS5Eq1yNF15/6agY067MF9HfP0+RZa2tjHWPks9M0csBX136+z0kj6g==
X-Received: by 2002:a17:902:860b:b029:103:b23b:f1c3 with SMTP id f11-20020a170902860bb0290103b23bf1c3mr7842702plo.34.1623824546376;
        Tue, 15 Jun 2021 23:22:26 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Wed, 16 Jun 2021 14:21:47 +0800
Message-Id: <20210616062157.953777-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3ba0f08a39a1..af416bcd1914 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -670,19 +670,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142610.263106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwi-0006j8-Ft; Wed, 16 Jun 2021 06:22:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142610.263106; Wed, 16 Jun 2021 06:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwi-0006ix-Cl; Wed, 16 Jun 2021 06:22:40 +0000
Received: by outflank-mailman (input) for mailman id 142610;
 Wed, 16 Jun 2021 06:22:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwg-0006SS-CI
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:38 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c449626-2437-4825-8057-472082ff1899;
 Wed, 16 Jun 2021 06:22:35 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id y15so1354825pfl.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:35 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id m2sm1096854pjf.24.2021.06.15.23.22.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c449626-2437-4825-8057-472082ff1899
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=pEdYN/CfqowQHo6LUUVSljtyJhX5z4kEbgyPSIqEKts=;
        b=EVy8q8LRlBwXjbv5ZgdM2So19WcSXgvUoCIgbciZgrT3bQ/Raz5EfpUl8QY1EeXxlR
         AtchZXRhTiVu8fzyfGqQsWJ/9eTD+e6JzMhRxq8qv1qy08hGq8Fg+qotTCOaejp1tcgi
         Xzx8bRHUlFhvQX+18c32FXdPsodjGeZnyhYIM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=pEdYN/CfqowQHo6LUUVSljtyJhX5z4kEbgyPSIqEKts=;
        b=ClUbMH1YFw0q7NWEIviWqXMF0Utcnjm1jdNWsCC/UZVtnwnvcAfEL2vi69CIu5CmGq
         741fl/06nlt0GqViKobsFEShjPmrpXKX4N0upc4m7pXK/EcNlWpRuXqwD0ZZy/RVt5wS
         IHbc5tlmKJ/xoad8hPWsWmAFaxP+4oTIlNzpsl1+INfp6uDaE3JF5u/0oIlkjrApY2Yf
         YtjTpa2XA0Wh9IqyfJb+sfe0x3YdPRGe4FJVWlc+FsDj0QNeLbDsv9UC316fLC0YX+PJ
         qx9V2f2nnQTeBBRTtcsv7iqshml92FracLefyWa8YkYn7ToKx25UCLeoaUSknldmfci3
         bwYA==
X-Gm-Message-State: AOAM530sQHqFKNwzYbOuY1oc/sru+2lRMdlIGdSUpf4Ikn/rLLnl0INA
	4rSaZU5d9hm3+3dtt8oLt5UZ3g==
X-Google-Smtp-Source: ABdhPJwSkx5TMyTilGiVCcnPAeSvfxyOuh/e9rZIyr4xzr0620kbXFmZJ3BjZw8BvaIHoMpQs8L2Sw==
X-Received: by 2002:a62:2785:0:b029:2ec:b165:db1f with SMTP id n127-20020a6227850000b02902ecb165db1fmr7950026pfn.34.1623824555026;
        Tue, 15 Jun 2021 23:22:35 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Wed, 16 Jun 2021 14:21:48 +0800
Message-Id: <20210616062157.953777-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always keep a pointer in struct device to the swiotlb pool being used.
This will help simplify the code once other pools are supported.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f29839382f81..cb3123e3954d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index ba660731bd25..240d652a0696 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -518,6 +519,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af416bcd1914..a9f5c08dd94a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -339,7 +339,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
@@ -430,7 +430,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -507,7 +507,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -558,7 +558,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142613.263116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwo-0007Es-U8; Wed, 16 Jun 2021 06:22:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142613.263116; Wed, 16 Jun 2021 06:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwo-0007Eg-R3; Wed, 16 Jun 2021 06:22:46 +0000
Received: by outflank-mailman (input) for mailman id 142613;
 Wed, 16 Jun 2021 06:22:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwn-0007DA-Bn
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:45 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59979d36-ebd0-4c2f-a375-45dc75bc0ceb;
 Wed, 16 Jun 2021 06:22:44 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id g24so1088006pji.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:44 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id q23sm987071pff.175.2021.06.15.23.22.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59979d36-ebd0-4c2f-a375-45dc75bc0ceb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=qgVN2ZlJMrK0ZQ+KxIYsV5GmxXVfYXxXGP7J+tRHaos=;
        b=PZld1QjN9cC6qyAiQlWMGq58ojGyKGD0OvWirRWxGXNHoIwzW2but0Gm8wH4sIBEJn
         aIXZU6okb1sX7duIF8MvFO8dGYEYDukRHcVhEvcFXGI5C3vTDq+fxOKJevmGSJ8Rdjpe
         be2NRnHaX3ZOOzTU6z4ZH7BWle9WiAyXD4TMA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=qgVN2ZlJMrK0ZQ+KxIYsV5GmxXVfYXxXGP7J+tRHaos=;
        b=fYkA1nfybhUfhcZRoxucwXEZBwdVIbHJbmnRGjMewMYEG0yqxUDAf7SFXCYQBQrz16
         bnP67Z0STNV+udQfZq/cWxs1OzCuGECITRDgNMmZX2sjE9nJLt4WhU/fYzWY8fF4tyZJ
         78odp+leBBj11TGP1ufM2Wrzx38MsNZ2R7xVviZjAe+4ZR5JP/lgpgwjTJnclVw5+cSW
         /715Sg8OZVNLm6QAGpsaU1Pl15rlCkCxXDmq2Nh5yOH/2w/uZNbuZqoBtCEswLPxEuTO
         r17QAAj7P9x1YWnLXkDS6/WaI9boJoqvYdh67MCcnm+JBDTrBlJVrnOiExXwPWDau0sh
         btBg==
X-Gm-Message-State: AOAM531cbhxlYusczzLG/qD6TNGhWWUOfgJyVsx09ITbkaHpZj6bR0hk
	OM6tcnXSHt5N4ir6xIJD5/FHKA==
X-Google-Smtp-Source: ABdhPJzpzgCOKivn8rY3IxOSAbfDCKTaPReTRSBtzedOlZ2uVd6q3v0yAaSKsTkNrOCfP2FTDsiAsA==
X-Received: by 2002:a17:90a:540d:: with SMTP id z13mr3445560pjh.159.1623824563630;
        Tue, 15 Jun 2021 23:22:43 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Wed, 16 Jun 2021 14:21:49 +0800
Message-Id: <20210616062157.953777-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3087d9fa6065..10997ef541f8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:22:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:22:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142617.263128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwx-0007k0-8i; Wed, 16 Jun 2021 06:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142617.263128; Wed, 16 Jun 2021 06:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOwx-0007jn-4s; Wed, 16 Jun 2021 06:22:55 +0000
Received: by outflank-mailman (input) for mailman id 142617;
 Wed, 16 Jun 2021 06:22:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOwv-0007gS-NB
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:22:53 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fbb8b62-c3cd-4d8d-b5bc-2911fc723675;
 Wed, 16 Jun 2021 06:22:52 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id y15so1355305pfl.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:22:52 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id q23sm987609pff.175.2021.06.15.23.22.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:22:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fbb8b62-c3cd-4d8d-b5bc-2911fc723675
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fDjj9RwGiHMHYf08qnwDA+ssC0u7zUi8VvqNHSt/pLY=;
        b=bPjT67wPmPfunMVw5qx31ETGuTCm2RR/+eH0A8svDKwhKwfx0BYl7eocGETox1lRoT
         977YMxN4+A7BXPEtxRs1B0Lo85Jh14VIgB3QM4CD7Ib4/z6BrdB8jp0tztp/W6uPGqok
         edQ+sMvcRtQUwsPAFLHWSIBpT2O036z/Szk4w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fDjj9RwGiHMHYf08qnwDA+ssC0u7zUi8VvqNHSt/pLY=;
        b=FrmKhhuCwJg9k27OJmFkXBqzJU431E5Q2ou70Bmm4SL5dzahUyzXZ0zX/bNLFrUXlV
         GQ8gew47FPbq9A1+8wD2yoZjc1EcgbS4cNsypN5D6E4F6cU73yYrpRXgib3kXF1iYk5Q
         bA0wDBijlWSCrkmzXtTptBVBzhcn5GiCc79NiIqvaKRtOZiiQmnTes/MbWQoI1X3GpZz
         lDhAfNoRYaXT+wRxxFIpPPP/Z+RfavD0MrVuRPd3iPUAOo8mHajWvi4X8cH3RcyQxeq9
         tJZvrNC7i5IXWQBQZJaCK2djAJJo+KWr4QW9TSgNQ0cC6cLjEj0OUGTFlB20aG9nb3dk
         2xJw==
X-Gm-Message-State: AOAM533Fqhtd+YZwn214tLJTi5mhGLs+XdmhNdkNYjaszsRJuXhhjp/S
	sTQmSg9KWEmawm/0tzyxaarGJg==
X-Google-Smtp-Source: ABdhPJx/kCYyXn4Tu5RgYPFt+4XicnJPGmv9fsFjoKCS/8liFbzjjxoVt1Ap3EAoVNJt5dkstWejkQ==
X-Received: by 2002:a63:5d52:: with SMTP id o18mr3460733pgm.440.1623824572179;
        Tue, 15 Jun 2021 23:22:52 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Wed, 16 Jun 2021 14:21:50 +0800
Message-Id: <20210616062157.953777-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index a9d65fc8aa0e..4b7afa0fc85d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 9662522aa066..be15bfd9e0ee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a9f5c08dd94a..101abeb0a57d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -663,9 +663,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:23:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142624.263139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxC-0000Cw-IB; Wed, 16 Jun 2021 06:23:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142624.263139; Wed, 16 Jun 2021 06:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxC-0000Co-FG; Wed, 16 Jun 2021 06:23:10 +0000
Received: by outflank-mailman (input) for mailman id 142624;
 Wed, 16 Jun 2021 06:23:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxA-0007gS-HC
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:08 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7906c1e4-39f2-4265-ab58-03d7d3f436a9;
 Wed, 16 Jun 2021 06:23:01 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id t13so1105359pgu.11
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:01 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id f18sm4233834pjq.48.2021.06.15.23.22.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7906c1e4-39f2-4265-ab58-03d7d3f436a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=yx1sBUKg2SZ/J5oPuuLxuIdituaRXED8so7Vw8sLIws=;
        b=Q4K6eTywavWHh5xBRTV2Sn1QN0pqm21rAHVV9HUXv/UP1w8xNe4Z5udU+NX1vgnB1+
         c3YMRhLko40t5PNbJ2OEaPKxiaHUl4VjZBfqy96tFVH3Qfa8f64z0y648a814QEkXJ8w
         S/rgJ3x7L88xW9cuGs4qk0A1P9NST1SKpuLUA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=yx1sBUKg2SZ/J5oPuuLxuIdituaRXED8so7Vw8sLIws=;
        b=Kkjp/2hKp+A6/6EHhR1lpb2n7IDjOJdovWDudj/E4Xhx8j8Rr48HJQCoBPGnPaycIF
         4K6NioejY88aPlO87wOLTEDSH+J0/XXkoylPW0nbm6HyJ7Y6vKYBs9rnfXOCoP8kePe7
         cgW6ivWROcu9zE0Pib+DiDlSDJKykwmbDKaMYIH2qLUYIpOXvcowNoMNUE9oUEJsuKy9
         u2UloizxfBw+Bjmi1yQIzST1TM5KvWxf/cmbB54b5ijNS+8M7oW0Gj5eE2IZhDOJh7BV
         3IGq2oGz2Vjkv7+ciZqyYh+Zr8UIDYNajpQrdayv8q/t1CaQI3lZuIIMa2bjwgj+NLFD
         cjbA==
X-Gm-Message-State: AOAM532ig/d+LX1gPfNjZSbfYRoEPBxAFN0cl7UoEcrM7lJ1uO6BRsqy
	AZbu4kDxAW/bO8UTVkuMfAnK+w==
X-Google-Smtp-Source: ABdhPJw3qSZ6gJG41Gxh6T5HUfWmBlX+79o8LgLEBQMF8QRY4VhLx/+veYL/HvERe1FdH5EyrXhyxw==
X-Received: by 2002:a63:4719:: with SMTP id u25mr3506046pga.193.1623824580858;
        Tue, 15 Jun 2021 23:23:00 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
Date: Wed, 16 Jun 2021 14:21:51 +0800
Message-Id: <20210616062157.953777-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate swiotlb_force into io_tlb_default_mem->force_bounce and
use it to determine whether to bounce the data. This will be useful
later when adding support for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 kernel/dma/direct.c     |  2 +-
 kernel/dma/direct.h     |  2 +-
 kernel/dma/swiotlb.c    |  4 ++++
 4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..8d8855c77d9a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_bounce: %true if swiotlb bouncing is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_bounce;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_bounce;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..a92465b4eb12 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..4632b0f4f72e 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force_bounce(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 101abeb0a57d..b5a9c4c0b4db 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force_bounce = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:23:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:23:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142633.263150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxV-0000xI-S1; Wed, 16 Jun 2021 06:23:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142633.263150; Wed, 16 Jun 2021 06:23:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxV-0000xB-OF; Wed, 16 Jun 2021 06:23:29 +0000
Received: by outflank-mailman (input) for mailman id 142633;
 Wed, 16 Jun 2021 06:23:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxU-0007gS-I7
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:28 +0000
Received: from mail-pj1-x1034.google.com (unknown [2607:f8b0:4864:20::1034])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da795035-0c40-493b-9931-90ae26866678;
 Wed, 16 Jun 2021 06:23:10 +0000 (UTC)
Received: by mail-pj1-x1034.google.com with SMTP id
 z3-20020a17090a3983b029016bc232e40bso1127147pjb.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:10 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id f8sm960978pfv.73.2021.06.15.23.23.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da795035-0c40-493b-9931-90ae26866678
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=cTgFKnn3cLtvPfjI+55Cezw2EcnHDtp9/QQ3R2B3hUE=;
        b=cyJuCSilvPYniiPuEKQywgAZ2h40B6myx9TbQoS+O9qIihERcGr8FMpcEdjS0/tshr
         T6oF/Jqx7uEfUaJBG88Ds+6w77XhG2zyZ5Tn4UAWzJqH5XWwEFNMsFYg6ebrjJXldgCo
         SKPhXn/2f6RY0GqzhxLpDEImP7+pN5t8KYNM0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=cTgFKnn3cLtvPfjI+55Cezw2EcnHDtp9/QQ3R2B3hUE=;
        b=nChgnxE84+OhuWIzzK2Cz8tbKPdPN1su9PtluNo1Op2KMgkbj5/xd+Bdz/+M4qBgxZ
         MkAbvTGnM5tcbzfqnMdzhXKl0HxMV09wkGakcz9lCDwfxmzkLx6ePdWAjD+pK83/I6pQ
         nh+BOghMla3jYgNyhV1YQl6y+CG679YsbJMFVUFtzE7M6tuwB+cqO+4ftyWKt+qf//6e
         SsvutwI6HbHlqBIVrUgORGBPwOEaHlO6nQfCfPPohdWBM1em8hThH2XQpSGTz8Fi+6Ei
         HHzCBewew2SG0CREYRiSkRQBFVy88Or4/WWD8oU/wbM3B/Qq5BkgykQ73lmtgPIiewlj
         zxZA==
X-Gm-Message-State: AOAM533/Hz7tWk+VYO7oUQRQMNDr6EeuJTuej9MZpH1UgNOMgp9E5N2e
	CM7fC8BKJPkptx/5KZcz6o8UyA==
X-Google-Smtp-Source: ABdhPJxAvAAxvtMv6OUMS0EcucXOxwEcajpljVPpcHfCCwVFyghupOkDxlHVu6RexNkHSd0Tgg6RHw==
X-Received: by 2002:a17:90a:9511:: with SMTP id t17mr3458123pjo.108.1623824589397;
        Tue, 15 Jun 2021 23:23:09 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Wed, 16 Jun 2021 14:21:52 +0800
Message-Id: <20210616062157.953777-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size into it, so the code can be reused later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b5a9c4c0b4db..b59e689aa79d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -431,8 +431,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -487,8 +487,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -529,7 +532,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -543,11 +546,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:23:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:23:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142635.263161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxb-0001LB-8t; Wed, 16 Jun 2021 06:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142635.263161; Wed, 16 Jun 2021 06:23:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOxb-0001L2-5n; Wed, 16 Jun 2021 06:23:35 +0000
Received: by outflank-mailman (input) for mailman id 142635;
 Wed, 16 Jun 2021 06:23:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxZ-0007gS-IA
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:33 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1493b031-134f-4c35-9ab8-397669532dde;
 Wed, 16 Jun 2021 06:23:18 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id i34so1112831pgl.9
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:18 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id a187sm943813pfb.66.2021.06.15.23.23.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1493b031-134f-4c35-9ab8-397669532dde
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4uQkI0iLfUCijuOepwUkiZcSLaj5vr9UchvXk27nRw0=;
        b=i8vpfgPJ9bPz7Z1Al2l+J6+mN/Xl35Xu1XE2A22X8XR/Zq8Z0ZAIZW1TRjKf3ok7nZ
         cVehop1oa/YF7qhuElQsqUmYkYcuOfcjTbNEd4KCXAUWVnRRY2/VoP7cICHlXtOyLHuZ
         HdMm+p5xlQYTdItRv/hg6efoNr1CAhA7jHk00=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4uQkI0iLfUCijuOepwUkiZcSLaj5vr9UchvXk27nRw0=;
        b=XIyZkHGMzspX8s4ctM1gXa/pldRiLtTirsrtyVQ7nD65/C5q5ttkWWhoRN2Q39QrYw
         5YGjIPu8VqXDErdzbvtxBwKZ3vjVZh5ICrlbNSNuyFBvQczeikaDgT/t4SBX17YC5Eso
         38VLbka6QbKFHTNwPMkmfyl+w/C5q5V4rtARTTHatX3tLsTpzAtnnd7NDLZj0p6HPm2y
         bMoyityIZJ0kEaKqHROiFzL3uW1Ckk29IPOpIH1hwSjFMcrnuhLBUGK8mi4ZdO6WYrMd
         yBvguoyhxpDjdvc1ikL1Lpxw8h1zfvCu2WzUZHbU/SC15cb5C91jTXMjh+mssQGNprAV
         syZg==
X-Gm-Message-State: AOAM533/Z3fljTm8Zy0UITbWstE/5iKMGwomoY52Yv/2Ru1zZ6EqUNfw
	sn3kQbYaffQDmqJ4cof8IpVB7A==
X-Google-Smtp-Source: ABdhPJzlTcTJITW8roEO1Qs6Oi6y2lI0FUouCTwChc7sW4UDPIgbdbK3IINTmKh9YLKWu+Lc6AZqFA==
X-Received: by 2002:aa7:9118:0:b029:2eb:2ef3:f197 with SMTP id 24-20020aa791180000b02902eb2ef3f197mr7844722pfh.27.1623824597932;
        Tue, 15 Jun 2021 23:23:17 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Wed, 16 Jun 2021 14:21:53 +0800
Message-Id: <20210616062157.953777-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b59e689aa79d..688c6e0c43ff 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -555,27 +555,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -610,6 +598,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:25:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:25:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142647.263172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOzl-0002ZH-NZ; Wed, 16 Jun 2021 06:25:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142647.263172; Wed, 16 Jun 2021 06:25:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltOzl-0002Z8-KG; Wed, 16 Jun 2021 06:25:49 +0000
Received: by outflank-mailman (input) for mailman id 142647;
 Wed, 16 Jun 2021 06:25:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOzk-0002Yn-2V
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:25:48 +0000
Received: from mail-pj1-x1033.google.com (unknown [2607:f8b0:4864:20::1033])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 197afbae-3df1-4720-9cb0-2b310e29bca2;
 Wed, 16 Jun 2021 06:25:47 +0000 (UTC)
Received: by mail-pj1-x1033.google.com with SMTP id g4so1129462pjk.0
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:25:47 -0700 (PDT)
Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com.
 [209.85.210.170])
 by smtp.gmail.com with ESMTPSA id h8sm1043401pjf.7.2021.06.15.23.25.45
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:25:45 -0700 (PDT)
Received: by mail-pf1-f170.google.com with SMTP id x73so1334622pfc.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:25:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 197afbae-3df1-4720-9cb0-2b310e29bca2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ahYpHO9B/+cUbYyFJd5R0IQOMPNRU7XLvRaVxvXYrwA=;
        b=jKYxRf5BLwVd+aOILdWyEQ9NEe6v5aYFQD8W2wYKnYxbx/kY+mKyD67t59FT+p6nN5
         Mpk6mfOcBOKrQiveQedh4qw5JCQtFiqVzV2RVlhONm9wznK7lpmoPFuDVcf3/tc5Ov2+
         CFj22Re2YUg+UI7p9tNWP8f+4QqCG/oIc1mEo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ahYpHO9B/+cUbYyFJd5R0IQOMPNRU7XLvRaVxvXYrwA=;
        b=LfCVCymEFEqX7ufHmUJtRRdFmh+6T2dSB7MBo8JYhlRFEnAU2kwL05+mQAFSE7P04I
         Ronov2rGQIxoBnYFkdwKvrYlsCAQz1j6elkUr5wF43/DAPHlvr3C0EI6kwZyCCtRd8Su
         gdWNpw/oM42IcOyQ8MkvPMRbwwoR63eUUWDsqUOsNODR/PHI384avEbs76D9j0FXZbyp
         waf1a0I94uihUACWERf3ziWrYdVphgwJLzdt2QMJrWc12obcAXij1MKkyxtk1EEFXSfM
         AEilw1kqAOrbZyg6wBmIhpuaLjhaVsxz1a9bcZfpvmjn9rI+Iaw+qhdNeIfCX1mmJ5Gv
         YXvg==
X-Gm-Message-State: AOAM5311qYh4G9+jIIxfgcwmIwQ6rUGnTHtf6lfPvONCYHRalX5TUAd1
	Bh2vQunqghp8RrfnkCOHC0pB5OIhS8tafw==
X-Google-Smtp-Source: ABdhPJzdn+oOicDAVZT9CcFpLd/p7mr/e8PuXgCdr/OvFZ6ZJssuxCG704KYcLY9kYrI1s/mNI80GQ==
X-Received: by 2002:a17:902:6a87:b029:ef:2942:89fc with SMTP id n7-20020a1709026a87b02900ef294289fcmr7601588plk.36.1623824746099;
        Tue, 15 Jun 2021 23:25:46 -0700 (PDT)
X-Received: by 2002:a05:6e02:e8d:: with SMTP id t13mr2425681ilj.189.1623824734590;
 Tue, 15 Jun 2021 23:25:34 -0700 (PDT)
MIME-Version: 1.0
References: <20210616035240.840463-1-tientzu@chromium.org>
In-Reply-To: <20210616035240.840463-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 16 Jun 2021 14:25:23 +0800
X-Gmail-Original-Message-ID: <CALiNf29qdqmk4Uzysz3VfGd=QcQse8Hu0MajcMeOauykxMyqXg@mail.gmail.com>
Message-ID: <CALiNf29qdqmk4Uzysz3VfGd=QcQse8Hu0MajcMeOauykxMyqXg@mail.gmail.com>
Subject: Re: [PATCH v11 00/12] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v12: https://lore.kernel.org/patchwork/cover/1447254/

On Wed, Jun 16, 2021 at 11:52 AM Claire Chang <tientzu@chromium.org> wrote:
>
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
>
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (see the remote Wi-Fi exploits in [1a] and [1b],
> which show a full exploit chain, and also [2] and [3]).
>
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
>
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
>
> v11:
> - Rebase against swiotlb devel/for-linus-5.14
> - s/mempry/memory/g
> - exchange the order of patch 09/12 and 10/12
> https://lore.kernel.org/patchwork/cover/1446882/
>
> v10:
> Address the comments in v9 to
>   - fix the dev->dma_io_tlb_mem assignment
>   - propagate swiotlb_force setting into io_tlb_default_mem->force
>   - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
>   - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
>   - add swiotlb_ prefix to find_slots and release_slots
>   - merge the 3 alloc/free related patches
>   - move the CONFIG_DMA_RESTRICTED_POOL later
>
> v9:
> Address the comments in v7 to
>   - set swiotlb active pool to dev->dma_io_tlb_mem
>   - get rid of get_io_tlb_mem
>   - dig out the device struct for is_swiotlb_active
>   - move debugfs_create_dir out of swiotlb_create_debugfs
>   - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
>   - use IS_ENABLED in kernel/dma/direct.c
>   - fix redefinition of 'of_dma_set_restricted_buffer'
> https://lore.kernel.org/patchwork/cover/1445081/
>
> v8:
> - Fix reserved-memory.txt and add the reg property in example.
> - Fix sizeof for of_property_count_elems_of_size in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Apply Will's suggestion to try the OF node having DMA configuration in
>   drivers/of/address.c#of_dma_set_restricted_buffer.
> - Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
> - Add error message for PageHighMem in
>   kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
>   rmem_swiotlb_setup.
> - Fix the message string in rmem_swiotlb_setup.
> https://lore.kernel.org/patchwork/cover/1437112/
>
> v7:
> Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
> https://lore.kernel.org/patchwork/cover/1431031/
>
> v6:
> Address the comments in v5
> https://lore.kernel.org/patchwork/cover/1423201/
>
> v5:
> Rebase on latest linux-next
> https://lore.kernel.org/patchwork/cover/1416899/
>
> v4:
> - Fix spinlock bad magic
> - Use rmem->name for debugfs entry
> - Address the comments in v3
> https://lore.kernel.org/patchwork/cover/1378113/
>
> v3:
> Using only one reserved memory region for both streaming DMA and memory
> allocation.
> https://lore.kernel.org/patchwork/cover/1360992/
>
> v2:
> Building on top of swiotlb.
> https://lore.kernel.org/patchwork/cover/1280705/
>
> v1:
> Using dma_map_ops.
> https://lore.kernel.org/patchwork/cover/1271660/
>
> Claire Chang (12):
>   swiotlb: Refactor swiotlb init functions
>   swiotlb: Refactor swiotlb_create_debugfs
>   swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
>   swiotlb: Update is_swiotlb_buffer to add a struct device argument
>   swiotlb: Update is_swiotlb_active to add a struct device argument
>   swiotlb: Use is_dev_swiotlb_force for swiotlb data bouncing
>   swiotlb: Move alloc_size to swiotlb_find_slots
>   swiotlb: Refactor swiotlb_tbl_unmap_single
>   swiotlb: Add restricted DMA alloc/free support
>   swiotlb: Add restricted DMA pool initialization
>   dt-bindings: of: Add restricted DMA pool
>   of: Add plumbing for restricted DMA pool
>
>  .../reserved-memory/reserved-memory.txt       |  36 ++-
>  drivers/base/core.c                           |   4 +
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
>  drivers/iommu/dma-iommu.c                     |  12 +-
>  drivers/of/address.c                          |  33 +++
>  drivers/of/device.c                           |   3 +
>  drivers/of/of_private.h                       |   6 +
>  drivers/pci/xen-pcifront.c                    |   2 +-
>  drivers/xen/swiotlb-xen.c                     |   2 +-
>  include/linux/device.h                        |   4 +
>  include/linux/swiotlb.h                       |  40 ++-
>  kernel/dma/Kconfig                            |  14 +
>  kernel/dma/direct.c                           |  60 +++--
>  kernel/dma/direct.h                           |   8 +-
>  kernel/dma/swiotlb.c                          | 255 +++++++++++++-----
>  16 files changed, 380 insertions(+), 103 deletions(-)
>
> --
> 2.32.0.272.g935e593368-goog
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:31:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142665.263183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5O-00042d-CB; Wed, 16 Jun 2021 06:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142665.263183; Wed, 16 Jun 2021 06:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5O-00042W-8y; Wed, 16 Jun 2021 06:31:38 +0000
Received: by outflank-mailman (input) for mailman id 142665;
 Wed, 16 Jun 2021 06:31:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxj-0007gS-Ii
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:43 +0000
Received: from mail-pf1-x42c.google.com (unknown [2607:f8b0:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c28d7f57-0d72-49dc-ba48-629a3b062300;
 Wed, 16 Jun 2021 06:23:27 +0000 (UTC)
Received: by mail-pf1-x42c.google.com with SMTP id k15so1341786pfp.6
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:27 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id w23sm962726pfi.220.2021.06.15.23.23.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c28d7f57-0d72-49dc-ba48-629a3b062300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=//XvPeeos9rDreXGanBmItfF/L7P6KDOKno/Q971SkI=;
        b=GTtz78AqdP/K84KwrCCWY+yBsCK9twtSZHRyXSvV/h2opQBzsZMnn60wDFOZINUrOZ
         0URxUFIrHTlHymU+oZhs7U+vJo/L/4H0gb8jFa6zVJwXW+SAx8XnjdNmFzoaBe1BaNN9
         vTj8mjzgSlRGdibtD+PBwbIuvuowKhfhtnBHE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=//XvPeeos9rDreXGanBmItfF/L7P6KDOKno/Q971SkI=;
        b=W3Jux/0rqQMoRkenm4oPvso7HKZ70xd92EeZcPe0PgwKOG17dr9w/uy0mxxoe4JYDa
         3M7THn5rSIHlZX4gnJByEs5kMYwW+C5JLjHFDV6JEu2dz98ehCezMzeIYhrfk0AZAQ9O
         seR2kHgvu4Sbw8srswi08Gi26CPQOP8RXFZZNMCKWYqMxRVlvYR4nULtNk0TQn5UZURa
         ijtZBKDj/vWuMV9JsyNc10BITMf3/dRKoqmyocwqY0u4NuLnqFQGzkUI+SdjyM/+rvlk
         PrvUSBUp+W0Njs/mntR8Wi3lO30Nqvt66/dlASQAMvwn9V6QX7oTNiefQ/7V/EGlVJmk
         eJjg==
X-Gm-Message-State: AOAM532O4nkAFbk4/l+4CAsnSZKHn+n3QCht5EbgN0Rcy3RxdyyutpR3
	TChm/vLqRt9CCUjgM/nlR3P6Ww==
X-Google-Smtp-Source: ABdhPJz6EHrLqZ49j895TpaXeqXynHRoBE7uCDjKORaP6OUYcVW6K9wplRvx6YQFZ25ZL8MPI1QCfA==
X-Received: by 2002:a65:49c3:: with SMTP id t3mr3542154pgs.150.1623824606539;
        Tue, 15 Jun 2021 23:23:26 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Wed, 16 Jun 2021 14:21:54 +0800
Message-Id: <20210616062157.953777-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to
support memory allocation from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent for atomic coherent allocation instead.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 26 ++++++++++++++++++++++
 kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
 kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
 3 files changed, 99 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8d8855c77d9a..a73fad460162 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
+ * @for_alloc:  %true if the pool is used for memory allocation
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -96,6 +97,7 @@ struct io_tlb_mem {
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
+	bool for_alloc;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -156,4 +158,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->for_alloc;
+}
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a92465b4eb12..2de33e5d302b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    is_swiotlb_for_alloc(dev)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+		return page;
+	}
+
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
@@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 688c6e0c43ff..d3d4f1a25fee 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -462,8 +462,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -702,3 +703,36 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:31:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142667.263193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5Q-0004K5-NU; Wed, 16 Jun 2021 06:31:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142667.263193; Wed, 16 Jun 2021 06:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5Q-0004Jy-KP; Wed, 16 Jun 2021 06:31:40 +0000
Received: by outflank-mailman (input) for mailman id 142667;
 Wed, 16 Jun 2021 06:31:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxy-0007gS-J3
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:58 +0000
Received: from mail-pg1-x533.google.com (unknown [2607:f8b0:4864:20::533])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecd20f7a-d8a9-4f05-ae44-1afb6b9ba497;
 Wed, 16 Jun 2021 06:23:44 +0000 (UTC)
Received: by mail-pg1-x533.google.com with SMTP id l184so1116931pgd.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:44 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id s37sm1040984pfg.90.2021.06.15.23.23.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecd20f7a-d8a9-4f05-ae44-1afb6b9ba497
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=U3eun+ENVyocWxLJj29G6RCpfONi9/U6hPEIyZyKw371lsv5ZvHOuZIFA51JbRET1+
         Op95gA7j695YxlxlvZL3RhiG4sat3KCnGzJWEyj6V60Tnw5upJgOTBhRDMBdHFtlUqmv
         mWPc8X3tmbnhBe3Nq+XG0xr87l9VopnL/TOas=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GymVnULBeJ03m5EPo7c7VEX7DSqoBwMT6L2fMWoeeEc=;
        b=lm5BINg7YSrP52/1PXnvJGtF2QfgKiZ5Bpkrdjty4KOMb2BnjKL1Nj5UisJQNG7QdX
         saFGwUIR9qSikmM3RQBaHOSaaJx85dX4pXTVbqg2J+lbgXdOIUuXHE0RQrN1AVcReNDD
         oSEI+fLWVJ9RDij4Uq5qnoUwaxYIgnoz/IL5Ic3V0LXH0Z3IWwsjMGhKl1qdYeImTqeC
         enL1vGBFbxsEZihT0wWxcDCxUkOlJRNcKUGV6ByWszYEkpWaC9/vrRipMjfWBt74g+Vt
         UeXkUTp0encULsZ/YwYh+e/AenqJDFFxT++odYm8gvsCxu3bAB11M5FE+ROMCeobCdle
         b7oQ==
X-Gm-Message-State: AOAM532JKjaR2VCBuxl7Yttf2TSRKE0C4GkTZAJ6IyMJqgrOuHdPCG8n
	+8iihbqxLb3wkfanPaVGnfSHqw==
X-Google-Smtp-Source: ABdhPJzos0AE316eqwE5bukVSiSBXorK5IVepm+pS66FYVRIq/CQ/YuE4vEFUrrRi3iREp0s0X/Ddw==
X-Received: by 2002:a63:6e87:: with SMTP id j129mr3484297pgc.45.1623824623702;
        Tue, 15 Jun 2021 23:23:43 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 11/12] dt-bindings: of: Add restricted DMA pool
Date: Wed, 16 Jun 2021 14:21:56 +0800
Message-Id: <20210616062157.953777-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool child of the reserved-memory node.
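For reference, a minimal reserved-memory fragment using the new compatible
might look like the following (the base address and size here are
illustrative only):

```
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x4000000>;	/* 64 MiB */
	};
};
```

A device then opts in by pointing its memory-region property at the pool's
label.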

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:31:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:31:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142671.263205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5W-0004gg-0H; Wed, 16 Jun 2021 06:31:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142671.263205; Wed, 16 Jun 2021 06:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5V-0004gM-Su; Wed, 16 Jun 2021 06:31:45 +0000
Received: by outflank-mailman (input) for mailman id 142671;
 Wed, 16 Jun 2021 06:31:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOxo-0007gS-Im
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:23:48 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bb72469-fe6a-4543-a218-52917a668c0c;
 Wed, 16 Jun 2021 06:23:35 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id x73so1330778pfc.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:35 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id f18sm4236365pjq.48.2021.06.15.23.23.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bb72469-fe6a-4543-a218-52917a668c0c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fvqOnnEtZvf+5rDIUrGZxBrY9gGEpTlWY7ChHAnSGjQ=;
        b=CRDjTJny+g1kqwDU7SWfTmY39w2Rvq9MmzvXnaKADK16TTP156eE2b3ioWfQJOZ3tJ
         gXx3vsP7yjSxzXHOZ8RPJ4uBTExKTI64V9ahdlNBwF+jrYbdanOBblfWP2yVFyPYF/xm
         VSljuQBpG1I2SjGNTlKM4aUniu14NDyAl5pgE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fvqOnnEtZvf+5rDIUrGZxBrY9gGEpTlWY7ChHAnSGjQ=;
        b=B5oKFNoXC6zfVeXkvWCP4nmQFd/zlm/g+G2uNkCXzeDi+wtQ95phpvXHRqK5meqrnf
         2aRl5zpQl3VEBadCnn0kquwgNcnW1EdqlqHdgXtEfaGP+gzn/HoyNUiWXT4KUqoNc/wE
         zUq/ad5Kwl98UkOsZaN2+Uc9MfLCFxYeTHF7g3l0+3nYbQPcKp/TJa7Koh+zLN/CJdp6
         hV0s6mIavQ4QY214NuaIXbrRnxZ+Xvir7ta5v/x56astIEJhxipbmzQ8kdjsOYyMkZoe
         VyiPzf/6HbnlLobqOBkXF4YZEHoHrOOYbubs4RHc9LpU3FntUrQmnJyeFIkoe8JQ3BDb
         M/eA==
X-Gm-Message-State: AOAM530kpdI3g2t0lp9TLUwVMs1vCnDGGUBz5C0BxXfkpAoJV8C/n63b
	/Pbqze5UjOJmPpQYg2H+C7sz2w==
X-Google-Smtp-Source: ABdhPJwB8iJBY8gdPjxo1XSBiCow9YJsHV8sERx41OovNyQ1fGlSSsxchqWWNEC/J5WyR15S1lTtNA==
X-Received: by 2002:a63:3246:: with SMTP id y67mr3510828pgy.244.1623824615136;
        Tue, 15 Jun 2021 23:23:35 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 10/12] swiotlb: Add restricted DMA pool initialization
Date: Wed, 16 Jun 2021 14:21:55 +0800
Message-Id: <20210616062157.953777-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of swiotlb setting, the restricted DMA pool is preferred if
available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.
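
As a reminder of how such a pool is wired up, here is a hedged device-tree
sketch (node names, labels, and addresses are illustrative; the
reserved-memory binding patch earlier in this series has the full example):

```dts
/* Illustrative sketch: a reserved-memory node provides the restricted
 * pool, and a device claims it via a memory-region phandle. */
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	restricted_dma_reserved: restricted_dma_reserved@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x4000000>;
	};
};

some_device: some_device@0 {
	/* all DMA for this device is bounced through the pool above */
	memory-region = <&restricted_dma_reserved>;
};
```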

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index a73fad460162..175b6c113ed8 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3d4f1a25fee..8a4d4ad4335e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -735,4 +742,73 @@ bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 	return true;
 }
 
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force_bounce = true;
+		mem->for_alloc = true;
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.\n");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %lu MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:31:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142678.263216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5h-0005LZ-7S; Wed, 16 Jun 2021 06:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142678.263216; Wed, 16 Jun 2021 06:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltP5h-0005Kp-4K; Wed, 16 Jun 2021 06:31:57 +0000
Received: by outflank-mailman (input) for mailman id 142678;
 Wed, 16 Jun 2021 06:31:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qR6m=LK=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltOy8-0007gS-JC
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:24:08 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b1491c1-09a5-4755-8e83-7151d1949dba;
 Wed, 16 Jun 2021 06:23:53 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id h12so1364045pfe.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Jun 2021 23:23:53 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:3d52:f252:7393:1992])
 by smtp.gmail.com with UTF8SMTPSA id j13sm1144475pgp.29.2021.06.15.23.23.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 15 Jun 2021 23:23:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b1491c1-09a5-4755-8e83-7151d1949dba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4psftO+uH/TVnoki1qOI27Xe5kV3FYGMoE+zQAlSb7o=;
        b=V/ahoq2vs8tTy7w+Y5OEwDE5hShbh8znTC5ewffraAkNPn/O4nZhX/k5O/Q4vn+BYw
         sVVqtxW5hQdZQN9GpVNrLf5hjnu+aW1hXGdh6T0neJbe4VzN1V/QXoB+hgAnMu1VsF+r
         sglWPBXIWMNhL2qZeTlT+lltwdWi+fBX8GceE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4psftO+uH/TVnoki1qOI27Xe5kV3FYGMoE+zQAlSb7o=;
        b=T9a7k/fArYc7db8jismCo8ZfdIYWi1h6jF5NYjL6cJS4DKQkIlIGgmjbKdPHamBxJF
         2MKid/S/DpCnLRrekcBOax+p3peGmBzkbGMqZdNZrOqcHAmD+PMNcmoaJ0bdqyilxBgX
         w+DKF2/OUnfmjcKL3ve5cOTiiqUF9fJ1KpGV7ilUAAUhLKo8syr+H5Nu55qVMBCyqzlv
         0N/eUkjfJ9jPA6xPz/qAfg+e939+vv8IpbzkHBs3te7lNP+yOAMwvJx7gWYyzmmzSoHh
         fcrWIjNS/7PJ2jzQbVECMxrRoBfA0CLoBhiWkfX8Ezr6Bc0jadhilUqKmaSJvqmBSGxX
         3psw==
X-Gm-Message-State: AOAM530zx047OZmtwOI0XrH8fQ7ZmozitofZpifemDzebqkxfZWAI/pa
	SKg4o31Pe6G1xtn/ffEGNw7cMw==
X-Google-Smtp-Source: ABdhPJzGZYWYJboxtRbRv1n6aZlIOVWrHtwOTnE26briGp5Jug1JkMDb00dOZitCigU2dddBDVGa/Q==
X-Received: by 2002:a62:ae03:0:b029:2f8:b04f:c012 with SMTP id q3-20020a62ae030000b02902f8b04fc012mr8079076pff.62.1623824632264;
        Tue, 15 Jun 2021 23:23:52 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v12 12/12] of: Add plumbing for restricted DMA pool
Date: Wed, 16 Jun 2021 14:21:57 +0800
Message-Id: <20210616062157.953777-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.272.g935e593368-goog
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when a restricted-dma-pool region is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..cdf700fba5c4 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1022,6 +1023,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 6cb86de404f1..e68316836a7a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..25cebbed5f02 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.32.0.272.g935e593368-goog



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:39:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142698.263227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPCT-0006f4-0F; Wed, 16 Jun 2021 06:38:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142698.263227; Wed, 16 Jun 2021 06:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPCS-0006ex-TG; Wed, 16 Jun 2021 06:38:56 +0000
Received: by outflank-mailman (input) for mailman id 142698;
 Wed, 16 Jun 2021 06:38:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltPCR-0006er-FE
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:38:55 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d4cd851-01ed-49b5-8c0f-e188d152c837;
 Wed, 16 Jun 2021 06:38:54 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2054.outbound.protection.outlook.com [104.47.4.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-1-jwuixr1vP9uY2q6s3zHBOw-1;
 Wed, 16 Jun 2021 08:38:52 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7149.eurprd04.prod.outlook.com (2603:10a6:800:12e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Wed, 16 Jun
 2021 06:38:51 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 06:38:50 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P195CA0028.EURP195.PROD.OUTLOOK.COM (2603:10a6:102:b6::33) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 06:38:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d4cd851-01ed-49b5-8c0f-e188d152c837
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623825533;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7o62CH8tMP29dDg5bCfHgvNzv32UOojgxqPbsnjUHHE=;
	b=YBeS0+X/0b7AqcKqHjHhBxhTQMrVW8Mpm59WQ2ofzUBoqub+Jr/ieIEMeek033vWczM9f4
	iYPS6uL1xgB5pmRslkk+o2eoKOhuvzD+nkaByGbM1HoJ58RzXPBTfBAQ7Jaq/Cw1lK6icH
	mODiNsFSlwTxKReKk9LAA3nXPZ7YRgI=
X-MC-Unique: jwuixr1vP9uY2q6s3zHBOw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VfSixjsENXfEWhVAwvR2ap5Y06f5snoYqPIsr9QmOHhb9p+1oE/w0jxBXPk2pJBDnslnHHPxlOpXlT+VYyE0OxOA7Q5DS2EIj3IyTKOaQvp/qT/8yWDqQUrUfB/xKjgtn1JgwYMmBQwtaiD345UGzFhcdyz/eUI0GbqDZUy/4jTNpPrVq0mkHrujY+NvMdjHMTZb9sg5M1m1C+z1i4SHKepuVmOZK1bt9n6bsnwSe7X9rTS6QrcTvorNhP9U06piUQbBu2h3sb81I4zQeShSGxI9xxrlg+0SVS0SCNEQRU2LH2lNO6PLgfrI5Gx3qlfMqy2/QLyxmuizNzRmYR2Vkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7o62CH8tMP29dDg5bCfHgvNzv32UOojgxqPbsnjUHHE=;
 b=HLDHR5eQUV8QlhnOYDR1J+v/L4NfKR35FpyYhzu8ba0Wgt496iTroNZhS8rprWzEx0xbEIqN+DgQIQuZIgW+CN6dhAm41BT2NkfSLzkt2zLHsq9ea9FEUZkxFNHnhcUVe2/nDfww8GqlOVsDlysI1giaHGM4OdEk+8bvdA/oB9Ztyu6kpWJTy/6tsUL9Qxt8cYGpAZnQxqY1Fb3Evw3I/qHBht1vOo+02/ac085vVc8KsOa2a5ywTG37jHGSokCZjYAJdzFPcBEhnHGDRZNJlvDT18K7mQdXDtzwn9qlPwTu6BqV3w5Qk6VTr5HSdBjIdT/7jy2eHcD9hb/9phhj6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 0/5] tools/tests: More cleanup for automation improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5b0de61c-0276-cf06-4eeb-cda9ca990940@suse.com>
Date: Wed, 16 Jun 2021 08:38:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615161905.9831-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P195CA0028.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::33) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: df56e2b0-d882-4c84-51d3-08d9309166ef
X-MS-TrafficTypeDiagnostic: VI1PR04MB7149:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: df56e2b0-d882-4c84-51d3-08d9309166ef
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 06:38:50.6346
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EbDsX3afsl/wnnpLN2FTu1kapauJ2lud8NIBrBC8s+Uah8wCA8TNUue+0A/gCwzrEatRgea/83Wm5+8gPAUIUw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7149

On 15.06.2021 18:19, Andrew Cooper wrote:
> Jan/Roger: x86_emulator and vpci use $(HOSTCC) not $(CC).  While they are unit
> tests, we still potentially want to run them in dom0 rather than the build
> environment - particularly for x86_emulator which is heavily CPUID based and
> wants to run on a wide set of hardware.  Any issues moving them off $(HOSTCC)?

Well, yes, I'm afraid: if anything, we may need to build two binaries,
or build the one binary in two different ways: the "run" (and, for the
emulator harness, "run32") target wants a binary built with HOSTCC,
while the install target (which prior to your series does nothing)
indeed wants building with CC. So maybe we want something like

install: HOSTCC := $(CC)

plus suitable detection of whether the opposite set of objects is
currently present in the build area, requiring a full rebuild. (Of
course this will work only as long as HOSTCC isn't used for any
build-time helper binaries. See "x86emul: test AMX insns" for when this
stops being the case for the emulator harness; we'd then need yet
another variable to express this distinction.)
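
A minimal sketch of that arrangement, purely illustrative (the
`.compiler-stamp` name and the rules are assumptions for the sake of
example, not part of the actual Xen makefiles): a target-specific
variable switches the compiler for `install`, and a stamp file records
which compiler produced the current objects so a compiler change forces
a full rebuild rather than mixing HOSTCC- and CC-built objects.

```make
# Build test binaries with HOSTCC by default; for "install", force
# HOSTCC to the target compiler so the installed binary runs in dom0.
install: HOSTCC := $(CC)

# Objects depend on a stamp recording the compiler in use.
%.o: %.c .compiler-stamp
	$(HOSTCC) $(CFLAGS) -c -o $@ $<

# The stamp's recipe always runs (via FORCE), but the file is only
# rewritten when the compiler actually changed, so dependents are
# rebuilt exactly when HOSTCC flips between host and target compiler.
.compiler-stamp: FORCE
	@echo "$(HOSTCC)" | cmp -s - $@ || echo "$(HOSTCC)" > $@

.PHONY: FORCE
```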

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:44:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142705.263238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPI2-000899-Nh; Wed, 16 Jun 2021 06:44:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142705.263238; Wed, 16 Jun 2021 06:44:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPI2-000892-KV; Wed, 16 Jun 2021 06:44:42 +0000
Received: by outflank-mailman (input) for mailman id 142705;
 Wed, 16 Jun 2021 06:44:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltPI1-00088w-Lm
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:44:41 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5753d0f7-a4b1-4c19-afdb-251b160b9c34;
 Wed, 16 Jun 2021 06:44:40 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2171.outbound.protection.outlook.com [104.47.17.171])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-4-NGgyfhXWO5KTRI1Dql3Heg-1; Wed, 16 Jun 2021 08:44:38 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6669.eurprd04.prod.outlook.com (2603:10a6:803:125::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Wed, 16 Jun
 2021 06:44:35 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 06:44:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0024.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.22 via Frontend Transport; Wed, 16 Jun 2021 06:44:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5753d0f7-a4b1-4c19-afdb-251b160b9c34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623825879;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6no39c4ymfMgk1XxNnp9RMGfuzkFM/lh1ZHatcdZTZ4=;
	b=SdLST7/zj8B10JD+V6kCdtIgsFBXchMg0e7S4gGxxfEBjPt5q0pZm0wvmJenJUNTpkYhiV
	aXkGLAYFuTk9yKA63dPylR3t/KzpvYJvQ3jcrd7Fl2lDn0w5O/Pp5SP5311NnqsmpuqWNG
	I8Q8ft8TO9OGy1tnUsbyFOIRST17yNQ=
X-MC-Unique: NGgyfhXWO5KTRI1Dql3Heg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jt1wTM0y9cQzpBi2fBMoBVJEvVKZHKVJfM6MXUAgNZwx2tSgdviAogAeDuy/xFwKLvN/IckGJ5jWiCBuZGoHZLT+eOxk6zDEPCH1mkffU5Les8lniwiMVEUjA/IZQOaV4jUkK/GDzU8kyOeGXiXMT6GIJUr5Cgdd2A1mfuH/z6SR4fLRWvqRY6iC1ouhyJ70kYFRoa+cFmtqfaQBBWbdk1lvuao6IYGpELWmz/x1Vr2t8M6wrDZfFnhXz8ugZGb44X6NP0OBdNu1tiVaBrr8rHHTplthpmliBQzyGctudwN5bnK04/Oja53TWj98HZZ96/hwXa1m69vYe2Bo+lDQkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6no39c4ymfMgk1XxNnp9RMGfuzkFM/lh1ZHatcdZTZ4=;
 b=KWIIloMhhyCuo245oqCOZ/Emu3P8tFv3+eKdz+7GYjscC1a32xQxWh8p8mG5h+kpjDEh9zhzwTHTqqSJt2777G6cO0uOyHHmO6y9q17YcT+mtRzvH/JP211yDTVfY0UrvIAUgsJFX3zrBYtqo9MAxsXXjZGdpvADFSI82Gp9jPx9EUyCVtVoGOlPjCjbqCbk4Z2LCVLXsVr7NFZvR1bUS0/ebgnn/2rIgIlQqUypmYyiH7q55HI72nWHSpSIjExMHoYteAZYCP4OhFqptwVvyIAXvjzGlASxFh4aYA/ZHeadcHxkoZwYIX3q+fxB/25ZhdueVG/DfCWaSbc7lro3lg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/5] tools/tests: Drop run runes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c59dae19-2a88-9449-468a-ab22d38fd0e7@suse.com>
Date: Wed, 16 Jun 2021 08:44:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615161905.9831-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0024.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6fdcd30f-6eab-4b6e-00ff-08d9309234a4
X-MS-TrafficTypeDiagnostic: VE1PR04MB6669:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB666914552742F862F80D1D2EB30F9@VE1PR04MB6669.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yEWFnkFyfRI0RhWz0FBF1Bo4GAt6IedNF5ypY1NqQnd3YNR115tvU3VP4jh6wITpDCOO3GmmQDsGiRs6RU1J1KiYXuX1rz4JbAajCcXb9hSU4MHBh2Gz0yMyTBbF6p0u0yvscz7DOPUf5qakTd12sq3R/la71sOuxJTtaPqZ041GPY/OJ100NIrF2rgEjJPNWsLnKPpBoh+YZS8Zqfs3F38o25Gp0QALwUbKEeYvHyGuc10CtTvItw+YaW2R46FvH35HKrlma2GO+3euLGDYMZ3/pAdp73Dln9BUdi8c/KhXB+WJqtsSlAU1UmslIM74ExXvldkTTqp4XNYeysIZFRK9LWN5D4SVhau4YRi4389JCGp3vgl9yEeKptzYyrz/4qTrc9xSV8Oxqmk6r29G21zhaW+qzLXwmeqpnMTR6hsgIPVu+pOHh70vLwzBcLT7Hj+mbAZcqz0NFfpTnpAdn32d+ou6QiitPxa0wP/OfGx4dhSoWNsF8q6mA2RIw8NlqC/P8fthuFcCQjOfFzJdYPP7THRRynkI3hCWFrZJv9BVi0ETYnlWZk82OokZLcslcgEDm5uWm+B5xCvReBfG2ET168lYHelgBTQrLHCPs3rM94P3lBR7KWDuhK3aIH09l5dLXXDX7dlyl218aXLMLzulhuduGf9OVP6aHsOE7AUdN0BaQ+xNOu2RV9BbMPMM
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(346002)(396003)(366004)(136003)(376002)(26005)(5660300002)(478600001)(31686004)(186003)(8676002)(53546011)(6916009)(54906003)(316002)(2906002)(4744005)(8936002)(16526019)(66556008)(86362001)(16576012)(6486002)(2616005)(956004)(38100700002)(36756003)(66476007)(66946007)(4326008)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?b3psY2xDL3c2WDhUOVhya3ErYUo5RXNDQXRKeUlsRlQ2VGNCSHVWMmhPRUkz?=
 =?utf-8?B?dHMxS0NOeG9ld2tPTkVGL1BFZ0l6VkJRMXBwUG1RZjMvSHc3ZGRuNUNkUU5B?=
 =?utf-8?B?bVBhcWRXaFVPSUhBTjdkMExRbTBuakdhSEFwcVdHU1ZaNk80R3l5cFA5NUhv?=
 =?utf-8?B?YkVseWtwNlBKMHpCdzZGV2NmYW9ZN2hLU3lTUWRyS05ybUFpTkpUV3hBcnp4?=
 =?utf-8?B?MVVYNE5XZ0gwL3Jjc3ViYTM5enErU0FncFRiaFhRcnZrVjB1MTFtcHBITGRG?=
 =?utf-8?B?eGl4MEFYNk9CVkM2TFduMk9yNWZNR1JwSzVFZ1Z0M3B6WGlJSmRHTUg5STNh?=
 =?utf-8?B?N08wdmFGYW5tRzM0bm92SERyZWszYVFUdFNmdVc3eHowMUFOTHJMVnJWYkx1?=
 =?utf-8?B?bnBybzN2QXo4QkRpZS91YTNPck9VSTE5L0VpUkJQeVQvQTFzZWlsMjV6SFgw?=
 =?utf-8?B?di95d3AzcG9GSmE5SmhQNTVpTTBRVzBXdFlCYVFSN1I3NTdiU3c2dHk4UVJu?=
 =?utf-8?B?eXNhUlcxZVdaVU85R28ybnpvSW1XSFpYS2pySWxRZjFMbUN6UElEaUR5OCtL?=
 =?utf-8?B?WCtiaFAzT0Z2aU83dVVvbExabE9IdVc1SGVyM29qOC9helN2KzdQZ0plWC9Q?=
 =?utf-8?B?Yk1LSUlkb2dvS1hCR3B0dm5JMWdRbldkT0w3VXpVTUpqNUhPNXozL3B0c2xo?=
 =?utf-8?B?VjA0Y3FJNXlCeDlOUmVFVlpyaWNxYlBZbjlZRDEydndScWR0S1BPcmg3ajBH?=
 =?utf-8?B?cGVGTW9ON2NuV3Jzb2RmbTZuWmIwanBoUWRGdlpsWldhbWhuYmtEaHcrVVBY?=
 =?utf-8?B?WFBqdytFY3kyUE53NWZQMU1VUUhQU0JGTGxVQ3AzV3Z3bTBZYWRrN3JvRnNp?=
 =?utf-8?B?bDdjVXBPT0pBcVJ0TjlJTnNPNzhGb05DY0YvYVdLekl0SFByMndUVWp5ZXJ2?=
 =?utf-8?B?bGtSUXp0VW5VVFRna1VNRTY2TjBkRHF3ek1MYU04c3dMVjFyOWZrTGtoMmtV?=
 =?utf-8?B?MVhDM1ArU0NDdXZjL3V4V3Y4b2QxRnpMNlhJN0VpalNDOThvRDVPTC81Y3c3?=
 =?utf-8?B?aTRNYWJrWFVXa0tQb0htV0FWelRTbFRqVXgrdk1Fcm96RnVVRlVjbzBzUUtI?=
 =?utf-8?B?Q05WaWdvUGRPQkpPWnVmQzhmSEJQYkozLzY3alhBaEFPbjZqNm9TdmNCMFR3?=
 =?utf-8?B?WlpYVG9ZcFVDUzhlVTZjRDBCQzZIZkxBZUdpMHBQN1UxZVEySzl2MWZTSjV3?=
 =?utf-8?B?aVRpUXorMTdmQ09QMXpJY2UyQ1Z3enlqK09uY05VV1Q3RTgxQU05NXo0VlRo?=
 =?utf-8?B?SmJ4ZnNma0c4QnZIQ3h2ZGQwM08vM29zbWFZdEFzK1RPbk5ueGtWMTU5TG5O?=
 =?utf-8?B?ZExla1FQMERiNUlvRGFDYm80SW4xYU8yWWZ4cEJKeUZBbTJIZGo4dzlDblFs?=
 =?utf-8?B?WnFpVTh4TmNaK3Zla3gvN3puTy9BUzZSSUx0SkpNL1ltTU11NzdiSU1jMXBs?=
 =?utf-8?B?NUM2aGJFNjEvQU9CTnJQZFByemxQU3ZybG5lL3NNTUJmVzlmaXpIdVZZc20w?=
 =?utf-8?B?UlVqMVFGSVQ1ZWk1ZWpOY3RzUE1ZZFVHQ25SUnlyQWdBbGdCTkNFYURyZDRx?=
 =?utf-8?B?L0ZkbmhsQ0FWYVNsOTdFa2JzMDltbGppNEJ0bUlKOUZCakJuUm5YazFMcHEr?=
 =?utf-8?B?UTJ6bEZaOWh3a0QrVUFMZXEvZFdXVmVIY2Y1N0RLL3VwYWg4ZXE4WUJuNE1V?=
 =?utf-8?Q?0jMF5F56l6G/dxaW2ARy6FVtn1N0X0ytAFGC/Ou?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6fdcd30f-6eab-4b6e-00ff-08d9309234a4
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 06:44:35.6927
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HhTZELM2Le8V4GYSgTDdSKv4kR68A9xbA3xUNSTBFLCCT7Gg0IoXEEGRAK2fbvUc51q/vm+BoOxlpwoFrfX9PQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6669

On 15.06.2021 18:19, Andrew Cooper wrote:
> --- a/tools/tests/x86_emulator/Makefile
> +++ b/tools/tests/x86_emulator/Makefile
> @@ -7,10 +7,6 @@ TARGET := test_x86_emulator
>  .PHONY: all
>  all:
>  
> -.PHONY: run
> -run: $(TARGET)
> -	./$(TARGET)
> -
>  # Add libx86 to the build
>  vpath %.c $(XEN_ROOT)/xen/lib/x86
>  

This is not only incomplete, but actively harmful as long as you don't
introduce an alternative (specifically here for my own frequent use of
this rune, but the same goes for other tests I run occasionally, in
the same way). Note that the top-level Makefile makes use of these
rules, and note also the run32 companion here.
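
A hedged sketch of how the run runes could coexist with an install
target (illustrative only; the target names follow the quoted Makefile,
but the `$(TARGET)-32`, `$(INSTALL_PROG)` and `$(LIBEXEC_BIN)` usages
here are assumptions, not the actual tree):

```make
TARGET := test_x86_emulator

.PHONY: all
all: $(TARGET)

# Keep the run runes: the top-level Makefile (and developers) invoke
# these to execute the harness in the build environment.
.PHONY: run run32
run: $(TARGET)
	./$(TARGET)
run32: $(TARGET)-32
	./$(TARGET)-32

# Installation instead wants a binary built for the target system.
.PHONY: install
install: HOSTCC := $(CC)
install: $(TARGET)
	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(LIBEXEC_BIN)
```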

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:46:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142711.263249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPJp-0000Js-4Z; Wed, 16 Jun 2021 06:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142711.263249; Wed, 16 Jun 2021 06:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPJp-0000Jl-1N; Wed, 16 Jun 2021 06:46:33 +0000
Received: by outflank-mailman (input) for mailman id 142711;
 Wed, 16 Jun 2021 06:46:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltPJo-0000Jd-Ds
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 06:46:32 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63406904-ae5b-45be-86ce-149877e04309;
 Wed, 16 Jun 2021 06:46:31 +0000 (UTC)
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2057.outbound.protection.outlook.com [104.47.2.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5-Ijwcf9FsMHygkbiuRPma8A-2;
 Wed, 16 Jun 2021 08:46:29 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5166.eurprd04.prod.outlook.com (2603:10a6:803:53::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Wed, 16 Jun
 2021 06:46:26 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 06:46:26 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0009.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 16 Jun 2021 06:46:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63406904-ae5b-45be-86ce-149877e04309
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623825990;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jI8GS4dXvPHwc4fa10PkyQkqApx/gKJ7eRbOO1jQzKs=;
	b=JbKoz/RsA0wfn/3WQTG7zk5HdXLs/J1VWhj+LV+2i8p1S/3twq0gG0weNhCADPtdqUVZLR
	W0o3lXOJ/xvQQgqJCE/N/SG5atPsAbOUs2qieJoeEWUuNC7omvqjGfOPaIWSyfPQt3da5T
	3HNqDAIhHiCljHenAaitJK4/E3vt4QI=
X-MC-Unique: Ijwcf9FsMHygkbiuRPma8A-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mlGaNqaSqPSiZuJpaPSjaxEvkRn1sDcJwwgjgMNRGP+dm1Pw/FM2q2zGQjWRZdTD18gCFhkn+EYytStpcD3dMJMbJJLLd/F3IFjtg5IWBjVC9fkvkw3w1hno0/mUPHOjaP/qLWhHqm6kf5vs2MY2Y9S5Lq1U/uPJqZ4O9LMgtEDx5ArTlzMqkuAEQqp25Y+f/VIi83u6IW5HtKygmPFdt/UYlvqtWwDW3BHq39ezrZXp/5QhhnrpEKPJouyWb+oIsjpRBC4Zre4xlwbeYYoQ23B2gA4gp2rA6AWeEuNkpliTmhyVC6PO3pIbFMWfPwLstxMp8XlPzvG+BYDuUjW5Pw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jI8GS4dXvPHwc4fa10PkyQkqApx/gKJ7eRbOO1jQzKs=;
 b=jgpXWquSR8Ikil8yKm6RgxYyEp3nXLCknC5W54YOj9l1mVZH+zgqJ538p4lbAbAFDLBYVTCs0xQcx7QfrKKamj+R+Q6bG5G+WyHy2NUtdKUe9teN8tVef022ngSJIFLu+ehhuXTPdUIlLgzj8YXryKqCOGyWU8Q93xWIZp3/+qksZ31mjClGIXDdJuygbWjfWrsMOoLDYCFEHwE6TnND96U8KIOZRYGJBsaE1oWmgVkJ8tavEmWt4UXi4TxmJA1jlq/ASIepwY5e75pqU7ZKxpZHDcXNqUZe8U4XJxrUuMN5GFGR0NO1bLs+cI/K+GjI4p0QMybXEVmN6zVqsichWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 1/5] tools/tests: Drop obsolete mce-test infrastructure
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <04f641e6-04bd-8884-4b08-4c8f9a418b0b@suse.com>
Date: Wed, 16 Jun 2021 08:46:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615161905.9831-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0009.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6202f005-df73-41a9-83bc-08d9309276b6
X-MS-TrafficTypeDiagnostic: VI1PR04MB5166:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5166CCF8C4D6CD6C05F02F7BB30F9@VI1PR04MB5166.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nv6FrYBvRJ9HntRsmE5xTjEBmc5ynmI287/G2QQdL4x2IhQwxMAYYOxjcE0eQzcGYPwnL7z3dhs7SeYizqovPp0iEFp+Brhlu+EElZq/i6rsGG4URolJFJvj9IdyFkAq/RxAFYSSsqAsimc98X8Ua84mDMB7xSSVP+oTBNRgaF8/QlZ/scXKPBNa6Gxn6XgZKZLChJ6JmbWALfAhf1TZPrPbDykX0BM25Q60TLvQBpE2N3nbVyFKdRQ9kCnrcgJ19qL4+HrQhVxE58KsgenbT2UsoMUGWQgEKABnEceWG5++iy9sjJpEUDsFL0BdT/SoLW44piZsrIlHhKpbiNJUq2i1hJ58d9WBttJvuZuPKv9C06ymhosDNspBvFO3f8al6sQ5TXVrX4ulqpm+FRlgnP9LHNvamKswb8m39RgY94ndwZh3DPTqe+wTvKc4xOMO46wCKqa29euB1/LTWqWqUXNaNuuwu3nUJzcb114ZPTwwwNZ1C1DQeyNqr0wNFNYkmKFlC08o+qIWVx2hyuiaKBWRTVGzmHkjgtju3nBxyBYLwJLLo9sw/+tc1slOepf1Gstu7mrQ7iRKbHG+5WLeJNdGEVkGwkju0cjfE2kyZcvL7NhCuD0TIKAyM3C6BWKcDO4GOO2tPxCL39BbOn5AThCeD/4Wmu0l5kLlqpFI5gD7Bsr8leFTV/WncApr2Ncn
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(366004)(396003)(136003)(376002)(346002)(38100700002)(6916009)(26005)(316002)(54906003)(66946007)(31686004)(186003)(8676002)(66556008)(16576012)(66476007)(16526019)(2906002)(956004)(86362001)(2616005)(8936002)(36756003)(6486002)(31696002)(53546011)(4326008)(5660300002)(478600001)(4744005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ajg1Ukp1K0IwcDhEM2huQm16RTk4eG1hZkNONnZTK09KeGFkSzRDbVc1T3Nj?=
 =?utf-8?B?bTd3U1hxaUMzN09UdlloendJdkpKQUhob3BIM3RIRHdVd0JsbzRlaDdXemJM?=
 =?utf-8?B?WEU2N3F3SFFOSFlBU1pkcm5RdlZKZW5uVmFrMU9vL2VzK3dMUkNaR2svSklQ?=
 =?utf-8?B?eDEwdU5yaUpxZ0toeTdpZ21KYmgwTWY1Y29HMGp6WjAxemZqMTBQdUx0dklP?=
 =?utf-8?B?d0R1N1JDTExWKytyWUtGeElFRDZ6K2R0dS8ybXp0R1hqZVVCVnp3d1doZC83?=
 =?utf-8?B?U0dxTG93WVJKQVg1ZVN5aHRGcUNoN25FZEg5YmdZVXFuaXpOZjZuRkV5b2wy?=
 =?utf-8?B?Z3JKNGFXUzJHaXVlbFhkZ0lva05lY3JFMjMxMDdvZzR0RzVTZUltek1Ya0ND?=
 =?utf-8?B?VEQrV1ZFVkJTUVZOVEtBcm5zSUtSeHFpN050akJ4ckxtWWN4N3FaTkM5QTM0?=
 =?utf-8?B?V2RHVDl5eExmaUttYTNyajREUDg1TXloMUU4eCtFVnVwWGFDMkhNNnZFTStN?=
 =?utf-8?B?cGRrV0wvM2FZMXpkOGptTWhwZW1EcFBLdlZjYU02UG1idzZXS0RaL0N6RTlW?=
 =?utf-8?B?V05scEUyUkluVGJmZ3JMU1Y0eVI1bGJvV1VzUGJCZVowVGxrSkJVNmt0dTRw?=
 =?utf-8?B?eDZicTZqNUlxQ1UvWmo5ZzdwdUs1dXR5VFpYaG1ZNm5aTjhPVUNZQnNJckNP?=
 =?utf-8?B?dDI5bkhqMEZwcWpsalkvZXVpMEQzRmZFN1h0RkVpL3hCNit4UXhFRWk2cGJ1?=
 =?utf-8?B?Um14bzNrb1R3YlJxMjNPYTB4OHV3TGx1TVpUT1VaMlFpdXZpY3AxdW52R014?=
 =?utf-8?B?Rmk2MDBJZ3lZd3NLM1R1dkM2YlRmbXZXYzFpOFIvWjJjZHkvbGg2R2VBMzR0?=
 =?utf-8?B?eDRBZkE2UFE5L2pXNy9yZ2d6a0w1LzlaTGM4bFJBdy9wWC9FUTExeDlnUFl6?=
 =?utf-8?B?UnNyVWhPeHdpVVd1blNwOEcybzV3NDQ1NHNzUVBUQk9EUGQ0eHZLd3hERGRY?=
 =?utf-8?B?UFpLMDYxVm9JcHVoSnIrZ0tiOU40eFNZN2hJMGFWWmF5NFJCUHY2bUh5QkdZ?=
 =?utf-8?B?STZaVE0vRjFGWVJVWjRDL2QzbGozSVZzTjljbTdKV1g5Y0hQVk9udXYvSEMx?=
 =?utf-8?B?WU93ZDZuZi9wa0VjYVY2MzdJU1h5TjgrWS84SmlqeGczSGlPMlRnYmVuR05x?=
 =?utf-8?B?S3FJb1BtMGU3aGh0bkdMbTBIR3IrMEF3dVlWakpTczVMSFdJNXg1Qy9SZk9P?=
 =?utf-8?B?Y0ttYndBTm95RjZ0WElDN3pkREJwYWxkWm92eDhRUHJrQllNRzByKzJhR2R4?=
 =?utf-8?B?OWpoR2ZQaUlMZmFQZEpzQkJJbGl0RThGMFNidE9JRkZXS2tiampiK2dFREVo?=
 =?utf-8?B?SER0KzZ3N2J3WG9zVTZURkxaU2pZUFV2bmJ0S1QzZW1rQ01TMmd5WExUbXpj?=
 =?utf-8?B?V0pvamNPd3FEZzN2NzdHRkowWlpRbDREUWdpNDhZLzY5UDloSkY3ODAwMzBa?=
 =?utf-8?B?OGJZNDFNNWxTeFZPZ2R4WTdIbk9RaHpKRHJXazVQbUVyYnhwVlVBOWoxRVJq?=
 =?utf-8?B?c2QwcmFZOFNwaVUxeGNoeDRlZTJWczVQZU1VQVFvSGgvM0FINnBmUDVwSHoy?=
 =?utf-8?B?Y3FuMkl4VFBYN1ZjaElPdkhSWlNXL01Fbmt5blVsSnFzTnZVZHRtVTFHMXFG?=
 =?utf-8?B?UUZlTmRrMmlSMys2RkFLN09NT0tWYTc5aENWOUF1UmRqVDlkeEVINEIxbHBB?=
 =?utf-8?Q?lN2ITeM8MKCtlDzvzf38YPfodPYqOvF0XUpzp8b?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6202f005-df73-41a9-83bc-08d9309276b6
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 06:46:26.6006
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AV95Q6oR/M4UhIOWAyhT6cjBufUaRxH1tE+FtRmHKo0I++9Y7/lQrn/KOCTw4L9TIgonKOLSXHM7F6L/cFBbCA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5166

On 15.06.2021 18:19, Andrew Cooper wrote:
> mce-test has a test suite, but it depends on xend, needs to run in-tree, and
> requires manual setup of at least one guest, and manual parameters to pass
> into cases.  Drop the test infrastructure.
> 
> Move the one useful remaining item, xen-mceinj, into misc/, fixing some minor
> style issues as it goes.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

While I'm not generally in favor of dropping testing code, given the
constraints you mention:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 06:54:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 06:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142719.263260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPRd-0001ng-VV; Wed, 16 Jun 2021 06:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142719.263260; Wed, 16 Jun 2021 06:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPRd-0001nZ-S7; Wed, 16 Jun 2021 06:54:37 +0000
Received: by outflank-mailman (input) for mailman id 142719;
 Wed, 16 Jun 2021 06:54:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltPRd-0001nP-2h; Wed, 16 Jun 2021 06:54:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltPRc-0003jY-O7; Wed, 16 Jun 2021 06:54:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltPRb-0002s2-Km; Wed, 16 Jun 2021 06:54:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltPRb-0006dH-K2; Wed, 16 Jun 2021 06:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3PgvB6sqlISyc2s1tZxOG0mCwjuqUpdUeVBfwkNx2WA=; b=s//m7oPcdTZbdejxsCJsy79zqC
	St88MbL/tonCBqVcwyOnaBoiFjPBxackgazTAWa2gLRSzH37GWtg2awMShjM7blBzSyipPKR4B1IP
	yJ8e9uaPQdL/oBfEVnDXax9N+0VgA+jManF4LInWqvElprHPwlFoYI9tPcu2iLubP4oY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162845-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162845: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93c5f98296fc78de79d621418a1e62fd413e73d1
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 06:54:35 +0000

flight 162845 xen-unstable real [real]
flight 162853 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162845/
http://logs.test-lab.xenproject.org/osstest/logs/162853/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162853-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93c5f98296fc78de79d621418a1e62fd413e73d1
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    8 days
Failing since        162556  2021-06-08 22:39:08 Z    7 days   11 attempts
Testing same since   162845  2021-06-15 19:37:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1003 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:13:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:13:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142728.263274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPjS-0004EJ-MF; Wed, 16 Jun 2021 07:13:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142728.263274; Wed, 16 Jun 2021 07:13:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltPjS-0004EC-If; Wed, 16 Jun 2021 07:13:02 +0000
Received: by outflank-mailman (input) for mailman id 142728;
 Wed, 16 Jun 2021 07:13:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltPjQ-0004Dp-N6
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:13:00 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd96ad3a-f0a3-4e4a-a50a-1b93ac9ffc58;
 Wed, 16 Jun 2021 07:12:59 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2110.outbound.protection.outlook.com [104.47.18.110])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-ZsQq0u88PeSZP-64LMqb2A-1; Wed, 16 Jun 2021 09:12:57 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3293.eurprd04.prod.outlook.com (2603:10a6:802:11::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4195.23; Wed, 16 Jun
 2021 07:12:55 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 07:12:55 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR06CA0130.eurprd06.prod.outlook.com (2603:10a6:208:ab::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.15 via Frontend Transport; Wed, 16 Jun 2021 07:12:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd96ad3a-f0a3-4e4a-a50a-1b93ac9ffc58
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623827578;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uu42hhXTJxWfqXPKGA+KoO/ktXYcsraGGukZ2r/IfFg=;
	b=NckYuRFquKo5BTbyATZoCvPxNOgzoAiR8dKsmqSo5ZKXRgLOo+6X8apd5nN1ijVwwlO+3+
	9+EIiNk/rXD/+v13zaWUf+/TuhewipzGH/z06k1JVgBSwu6DIxLbyS5SBaassjjiA5eP9v
	UFl/+ABMOkxL55cdgBI/tMoKI7kd4lU=
X-MC-Unique: ZsQq0u88PeSZP-64LMqb2A-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
To: Ian Jackson <iwj@xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <osstest-162845-mainreport@xen.org>
Cc: xen-devel@lists.xenproject.org,
 osstest service owner <osstest-admin@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com>
Date: Wed, 16 Jun 2021 09:12:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <osstest-162845-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR06CA0130.eurprd06.prod.outlook.com
 (2603:10a6:208:ab::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 99ee267e-458a-4e36-f51e-08d9309629a4
X-MS-TrafficTypeDiagnostic: VI1PR04MB3293:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 99ee267e-458a-4e36-f51e-08d9309629a4
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 07:12:55.3029
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JebKgzJWdKdfh1i+YUo5g5bYj4gcYvWod7l+CL97v5YDzFJTRj02wMwewuW5cjrNWCsTnFpOY+M8/bI+VNPElg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3293

On 16.06.2021 08:54, osstest service owner wrote:
> flight 162845 xen-unstable real [real]
> flight 162853 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/162845/
> http://logs.test-lab.xenproject.org/osstest/logs/162853/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>  test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

There still looks to be an issue with the ovmf version used. I'm
puzzled to find this flight reporting

built_revision_ovmf	e1999b264f1f9d7230edf2448f757c73da567832

which isn't the commit the tree was recently rewound to, but one
about two dozen commits older. I hope one of you has a clue as to
what is going on here.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:30:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:30:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142736.263296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ07-0006iE-Di; Wed, 16 Jun 2021 07:30:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142736.263296; Wed, 16 Jun 2021 07:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ07-0006i7-AF; Wed, 16 Jun 2021 07:30:15 +0000
Received: by outflank-mailman (input) for mailman id 142736;
 Wed, 16 Jun 2021 07:30:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=my07=LK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltQ05-0006d2-Pk
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:30:13 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcca34ec-6c56-4ee8-928c-4016de194043;
 Wed, 16 Jun 2021 07:30:11 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0071A1FD64;
 Wed, 16 Jun 2021 07:30:11 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id A8DBC118DD;
 Wed, 16 Jun 2021 07:30:10 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id GDcVKIKoyWBfZAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 16 Jun 2021 07:30:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcca34ec-6c56-4ee8-928c-4016de194043
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623828611; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B5lCFKLS4SrkKe1lJQmspLTLsY+q2DCNZBeEteAkDQg=;
	b=kBk7upbTudELxn7yk841qgv/oTl4rY+JGrnXx/ubDi7or2hFGGLcCc/8IfvwqRjNXUmSs8
	DvdYGZHd+Nl/PdO99VHbDn9iHrQb8dHkoOHCudS/c0YIIBoEVMTyWWWTMF6kIviO54rDko
	riExKLBpi2seQw6jy11YdqeHGxnRf+I=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623828610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B5lCFKLS4SrkKe1lJQmspLTLsY+q2DCNZBeEteAkDQg=;
	b=eZR6UrR0IN/1H5lwf+BJwP8nIqhw0G0LC3nS3EHasCUSZUHuKQtAyH9P4xUtVldEvTR4+n
	TvEL48QavumVOcMtRU9RbokYzMc6wbMcBITMtPW4k48qrxHo53UDD6xeF7CaHeQbxnlpnt
	HO6SlWPPONAx2ISAbOkkl56EZ1vSlGs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH 2/2] xen: rename wrongly named pfn-related variables
Date: Wed, 16 Jun 2021 09:30:07 +0200
Message-Id: <20210616073007.5215-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616073007.5215-1-jgross@suse.com>
References: <20210616073007.5215-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are some variables in Xen-specific code whose names imply that
they hold a PFN, while in reality they contain a number of PFNs.

Rename them accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/xen/page.h |  4 ++--
 arch/x86/xen/p2m.c              | 31 ++++++++++++++++---------------
 arch/x86/xen/setup.c            |  2 +-
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 1a162e559753..3590d6bf42c7 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -51,7 +51,7 @@ extern unsigned long *machine_to_phys_mapping;
 extern unsigned long  machine_to_phys_nr;
 extern unsigned long *xen_p2m_addr;
 extern unsigned long  xen_p2m_size;
-extern unsigned long  xen_max_p2m_pfn;
+extern unsigned long  xen_p2m_max_size;
 
 extern int xen_alloc_p2m_entry(unsigned long pfn);
 
@@ -144,7 +144,7 @@ static inline unsigned long __pfn_to_mfn(unsigned long pfn)
 
 	if (pfn < xen_p2m_size)
 		mfn = xen_p2m_addr[pfn];
-	else if (unlikely(pfn < xen_max_p2m_pfn))
+	else if (unlikely(pfn < xen_p2m_max_size))
 		return get_phys_to_machine(pfn);
 	else
 		return IDENTITY_FRAME(pfn);
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 5e6e236977c7..babdc18dbef7 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -95,8 +95,8 @@ unsigned long *xen_p2m_addr __read_mostly;
 EXPORT_SYMBOL_GPL(xen_p2m_addr);
 unsigned long xen_p2m_size __read_mostly;
 EXPORT_SYMBOL_GPL(xen_p2m_size);
-unsigned long xen_max_p2m_pfn __read_mostly;
-EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
+unsigned long xen_p2m_max_size __read_mostly;
+EXPORT_SYMBOL_GPL(xen_p2m_max_size);
 
 #ifdef CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
 #define P2M_LIMIT CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
@@ -121,7 +121,7 @@ static pte_t *p2m_identity_pte;
  * can avoid scanning the whole P2M (which may be sized to account for
  * hotplugged memory).
  */
-static unsigned long xen_p2m_last_pfn;
+static unsigned long xen_p2m_pfn_limit;
 
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
@@ -239,7 +239,7 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
 	}
 
-	for (pfn = 0; pfn < xen_max_p2m_pfn && pfn < MAX_P2M_PFN;
+	for (pfn = 0; pfn < xen_p2m_max_size && pfn < MAX_P2M_PFN;
 	     pfn += P2M_PER_PAGE) {
 		topidx = p2m_top_index(pfn);
 		mididx = p2m_mid_index(pfn);
@@ -284,7 +284,7 @@ void xen_setup_mfn_list_list(void)
 	else
 		HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
 			virt_to_mfn(p2m_top_mfn);
-	HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
+	HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_pfn_limit;
 	HYPERVISOR_shared_info->arch.p2m_generation = 0;
 	HYPERVISOR_shared_info->arch.p2m_vaddr = (unsigned long)xen_p2m_addr;
 	HYPERVISOR_shared_info->arch.p2m_cr3 =
@@ -302,7 +302,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	for (pfn = xen_start_info->nr_pages; pfn < xen_p2m_size; pfn++)
 		xen_p2m_addr[pfn] = INVALID_P2M_ENTRY;
 
-	xen_max_p2m_pfn = xen_p2m_size;
+	xen_p2m_max_size = xen_p2m_size;
 }
 
 #define P2M_TYPE_IDENTITY	0
@@ -353,7 +353,7 @@ static void __init xen_rebuild_p2m_list(unsigned long *p2m)
 			pfn_pte(PFN_DOWN(__pa(p2m_identity)), PAGE_KERNEL_RO));
 	}
 
-	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += chunk) {
+	for (pfn = 0; pfn < xen_p2m_max_size; pfn += chunk) {
 		/*
 		 * Try to map missing/identity PMDs or p2m-pages if possible.
 		 * We have to respect the structure of the mfn_list_list
@@ -413,21 +413,22 @@ void __init xen_vmalloc_p2m_tree(void)
 	static struct vm_struct vm;
 	unsigned long p2m_limit;
 
-	xen_p2m_last_pfn = xen_max_p2m_pfn;
+	xen_p2m_pfn_limit = xen_p2m_max_size;
 
 	p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE;
 	vm.flags = VM_ALLOC;
-	vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit),
+	vm.size = ALIGN(sizeof(unsigned long) *
+			max(xen_p2m_max_size, p2m_limit),
 			PMD_SIZE * PMDS_PER_MID_PAGE);
 	vm_area_register_early(&vm, PMD_SIZE * PMDS_PER_MID_PAGE);
 	pr_notice("p2m virtual area at %p, size is %lx\n", vm.addr, vm.size);
 
-	xen_max_p2m_pfn = vm.size / sizeof(unsigned long);
+	xen_p2m_max_size = vm.size / sizeof(unsigned long);
 
 	xen_rebuild_p2m_list(vm.addr);
 
 	xen_p2m_addr = vm.addr;
-	xen_p2m_size = xen_max_p2m_pfn;
+	xen_p2m_size = xen_p2m_max_size;
 
 	xen_inv_extra_mem();
 }
@@ -438,7 +439,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned int level;
 
 	if (unlikely(pfn >= xen_p2m_size)) {
-		if (pfn < xen_max_p2m_pfn)
+		if (pfn < xen_p2m_max_size)
 			return xen_chk_extra_mem(pfn);
 
 		return IDENTITY_FRAME(pfn);
@@ -618,9 +619,9 @@ int xen_alloc_p2m_entry(unsigned long pfn)
 	}
 
 	/* Expanded the p2m? */
-	if (pfn >= xen_p2m_last_pfn) {
-		xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
-		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
+	if (pfn >= xen_p2m_pfn_limit) {
+		xen_p2m_pfn_limit = ALIGN(pfn + 1, P2M_PER_PAGE);
+		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_pfn_limit;
 	}
 
 	return 0;
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 8bfc10330107..1caddd3fa0e1 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -816,7 +816,7 @@ char * __init xen_memory_setup(void)
 				n_pfns = PFN_DOWN(addr + chunk_size) - pfn_s;
 				extra_pages -= n_pfns;
 				xen_add_extra_mem(pfn_s, n_pfns);
-				xen_max_p2m_pfn = pfn_s + n_pfns;
+				xen_p2m_max_size = pfn_s + n_pfns;
 			} else
 				discard = true;
 		}
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:30:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:30:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142737.263307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ0A-000701-MS; Wed, 16 Jun 2021 07:30:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142737.263307; Wed, 16 Jun 2021 07:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ0A-0006zm-JK; Wed, 16 Jun 2021 07:30:18 +0000
Received: by outflank-mailman (input) for mailman id 142737;
 Wed, 16 Jun 2021 07:30:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=my07=LK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltQ09-0006Rp-2F
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:30:17 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 857ad28a-75a5-4017-a0fc-28718caeb760;
 Wed, 16 Jun 2021 07:30:11 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A261A21A24;
 Wed, 16 Jun 2021 07:30:10 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 53A0511A98;
 Wed, 16 Jun 2021 07:30:10 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id kOVJE4KoyWBfZAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 16 Jun 2021 07:30:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 857ad28a-75a5-4017-a0fc-28718caeb760
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623828610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S1r1xotAufgSrJxoO6eVl3GT9o9kmdrAP0fb2+09k/Y=;
	b=rrtxG4a3oZRTeuM96PmrvNJ9mDw4s3FGu3Lt679kjarEpkWxfo+hWYdS4X+u3TqylaZlJr
	V3FSb3/rS3kXEauOsaW7c5pXjxS4O9eIryIAQc0InNovNDrA6RZm3/l8q7+i88EsnXV6QY
	Gm6m0edAK5fEt0Xzp2IcFUvrAYJKM+w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	stable@vger.kernel.org
Subject: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
Date: Wed, 16 Jun 2021 09:30:06 +0200
Message-Id: <20210616073007.5215-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616073007.5215-1-jgross@suse.com>
References: <20210616073007.5215-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests specify the highest used PFN via the max_pfn field in
shared_info. The Xen tools use this value when saving or migrating
the guest.

Unfortunately this field is misnamed: it actually specifies the
guest's number of pages (including any memory holes), i.e. the
highest used PFN + 1. Renaming is not possible, as this is a public
Xen hypervisor interface which must be kept stable.

The kernel sets the value correctly at boot time, but when more pages
are added (e.g. due to memory hotplug or ballooning) a plain PFN is
stored in max_pfn instead. This happens when expanding the p2m array,
and the stored PFN may even be wrong: it should be the last possible
PFN of the just-added P2M frame, not the PFN which led to the P2M
expansion.

Fix that by setting shared_info->max_pfn to the last possible PFN + 1.

Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/p2m.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index ac06ca32e9ef..5e6e236977c7 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -618,8 +618,8 @@ int xen_alloc_p2m_entry(unsigned long pfn)
 	}
 
 	/* Expanded the p2m? */
-	if (pfn > xen_p2m_last_pfn) {
-		xen_p2m_last_pfn = pfn;
+	if (pfn >= xen_p2m_last_pfn) {
+		xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
 		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
 	}
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:30:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:30:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142735.263285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ05-0006S2-5l; Wed, 16 Jun 2021 07:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142735.263285; Wed, 16 Jun 2021 07:30:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ05-0006Rv-2i; Wed, 16 Jun 2021 07:30:13 +0000
Received: by outflank-mailman (input) for mailman id 142735;
 Wed, 16 Jun 2021 07:30:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=my07=LK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltQ04-0006Rp-80
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:30:12 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b025cbfe-4804-42be-a6fe-ade48f632676;
 Wed, 16 Jun 2021 07:30:11 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 538CE1FD47;
 Wed, 16 Jun 2021 07:30:10 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 0658A118DD;
 Wed, 16 Jun 2021 07:30:09 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id S9dVO4GoyWBfZAAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 16 Jun 2021 07:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b025cbfe-4804-42be-a6fe-ade48f632676
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623828610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=ZIhury5ftO1rCyrI+1BJ+XupQSloUA5hjiMSaC8bUjc=;
	b=ucl+Y4z0TKifrQyD9WE/MlO6t3brYmjm8sEy7S2xiYkp9TKMEnNQoia7hROaajuttppBMM
	To/W7nAXMyuWTS0lz7aUfTW/F1lReXMCZAKCuFhIklT7Kjn/6peAFR9Ao0idHpnKbM7inl
	UTV9wIFj38OuUamOxFEy6iYkjsQaCJM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	stable@vger.kernel.org
Subject: [PATCH 0/2] xen: fix max_pfn handling for pv guests
Date: Wed, 16 Jun 2021 09:30:05 +0200
Message-Id: <20210616073007.5215-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fix some bad naming and the incorrect setting of max_pfn-related
variables.

Juergen Gross (2):
  xen: fix setting of max_pfn in shared_info
  xen: rename wrong named pfn related variables

 arch/x86/include/asm/xen/page.h |  4 ++--
 arch/x86/xen/p2m.c              | 31 ++++++++++++++++---------------
 arch/x86/xen/setup.c            |  2 +-
 3 files changed, 19 insertions(+), 18 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:39:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142754.263318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ93-0008P3-KQ; Wed, 16 Jun 2021 07:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142754.263318; Wed, 16 Jun 2021 07:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ93-0008Ow-HC; Wed, 16 Jun 2021 07:39:29 +0000
Received: by outflank-mailman (input) for mailman id 142754;
 Wed, 16 Jun 2021 07:39:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltQ92-0008Oq-D3
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:39:28 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 554d7ce9-f35c-4699-b8fc-20a360ed843a;
 Wed, 16 Jun 2021 07:39:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9C8526736F; Wed, 16 Jun 2021 09:39:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 554d7ce9-f35c-4699-b8fc-20a360ed843a
Date: Wed, 16 Jun 2021 09:39:22 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210616073922.GA2326@lst.de>
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616062157.953777-7-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210616062157.953777-7-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 02:21:51PM +0800, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:39:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:39:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142755.263329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ9L-0000Pi-1f; Wed, 16 Jun 2021 07:39:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142755.263329; Wed, 16 Jun 2021 07:39:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQ9K-0000Pb-Tl; Wed, 16 Jun 2021 07:39:46 +0000
Received: by outflank-mailman (input) for mailman id 142755;
 Wed, 16 Jun 2021 07:39:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltQ9J-0000P6-Qn
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 07:39:45 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66164618-eb18-4de4-8f40-005f4cc140bc;
 Wed, 16 Jun 2021 07:39:44 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 329CC68B05; Wed, 16 Jun 2021 09:39:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66164618-eb18-4de4-8f40-005f4cc140bc
Date: Wed, 16 Jun 2021 09:39:42 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 09/12] swiotlb: Add restricted DMA alloc/free
 support
Message-ID: <20210616073942.GB2326@lst.de>
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616062157.953777-10-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210616062157.953777-10-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 02:21:54PM +0800, Claire Chang wrote:
> Add the functions, swiotlb_{alloc,free} and is_swiotlb_for_alloc to
> support the memory allocation from restricted DMA pool.
> 
> The restricted DMA pool is preferred if available.
> 
> Note that since coherent allocation needs remapping, one must set up
> another device coherent pool by shared-dma-pool and use
> dma_alloc_from_dev_coherent instead for atomic coherent allocation.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 07:59:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 07:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142770.263340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQS2-0002w5-M9; Wed, 16 Jun 2021 07:59:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142770.263340; Wed, 16 Jun 2021 07:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQS2-0002vy-Ig; Wed, 16 Jun 2021 07:59:06 +0000
Received: by outflank-mailman (input) for mailman id 142770;
 Wed, 16 Jun 2021 07:59:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQS1-0002vl-79; Wed, 16 Jun 2021 07:59:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQS1-0004qR-1b; Wed, 16 Jun 2021 07:59:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQS0-00050w-KF; Wed, 16 Jun 2021 07:59:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQS0-00042r-Jm; Wed, 16 Jun 2021 07:59:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z8ub5LU4hS7Tg1Jio9RxK4OeyeHDk+YRNNZAxo9oskM=; b=1a83l/7lzWuIqPFgyR8DjF+0j1
	pNY/J+kcUM66wyDZ0qhzZtGNHc2YfNIgPoUqP6hz8/9xapecDRM7lQ6sdS7bHihuUV7PpxJot/SLI
	BrQAqID2o+fVVrUi7XLccjcwl1e0RmoQArv30FbLBwdH0Z+qzcQn1RmEHdjLFrF6k8lc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162852: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=9a51edebf88556823e05c3710ec378c976ce3386
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 07:59:04 +0000

flight 162852 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162852/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              9a51edebf88556823e05c3710ec378c976ce3386
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  341 days
Failing since        151818  2020-07-11 04:18:52 Z  340 days  333 attempts
Testing same since   162852  2021-06-16 04:19:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 62338 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 08:18:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 08:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142780.263354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQkV-0005lU-Eu; Wed, 16 Jun 2021 08:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142780.263354; Wed, 16 Jun 2021 08:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltQkV-0005lN-Bs; Wed, 16 Jun 2021 08:18:11 +0000
Received: by outflank-mailman (input) for mailman id 142780;
 Wed, 16 Jun 2021 08:18:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQkT-0005lD-DM; Wed, 16 Jun 2021 08:18:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQkT-0005hD-6y; Wed, 16 Jun 2021 08:18:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQkT-00060Z-0B; Wed, 16 Jun 2021 08:18:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltQkS-0004tU-W1; Wed, 16 Jun 2021 08:18:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WMEXjIjuYOewfjkff/n4xJdVQZ5OzWZVXqVl03Eq5K8=; b=dIdhzLnqmKaikh32bKZuwyM4qF
	vhC4zlkTEBGSzxvAJEIJvA93l9UiPva7GCPuuKHsCkw0PB+a0vt2qeo4CV6FB8uf8KBCdc0k2mxoE
	cOEjF9N4KgNqVIi17g66lQaPIb79oQhVocRpREPQ4JLDhoQ69HxHi6SCXCteAwRE+gpQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162851-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162851: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 08:18:08 +0000

flight 162851 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162851/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 11b1c1d4b98bc1b5eaaaf9eaa94ecd34eeaba5f9
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   12 days
Failing since        162368  2021-06-04 15:42:59 Z   11 days   25 attempts
Testing same since   162841  2021-06-15 13:41:16 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1779 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 08:48:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 08:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142788.263368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltRDo-0000Re-RI; Wed, 16 Jun 2021 08:48:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142788.263368; Wed, 16 Jun 2021 08:48:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltRDo-0000RX-O3; Wed, 16 Jun 2021 08:48:28 +0000
Received: by outflank-mailman (input) for mailman id 142788;
 Wed, 16 Jun 2021 08:48:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltRDn-0000RR-Ay
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 08:48:27 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06ff9c3f-b992-412f-8cff-7411d05fa289;
 Wed, 16 Jun 2021 08:48:26 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2108.outbound.protection.outlook.com [104.47.17.108])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-26-SFWEWEdiO5iJiGe-QcSyYA-1; Wed, 16 Jun 2021 10:48:24 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6669.eurprd04.prod.outlook.com (2603:10a6:803:125::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Wed, 16 Jun
 2021 08:48:22 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 08:48:22 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR06CA0101.eurprd06.prod.outlook.com (2603:10a6:208:fa::42) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 08:48:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06ff9c3f-b992-412f-8cff-7411d05fa289
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623833305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qtDY/q4ql3JvXVo+L74NaHVybmOZeuTtX/csX77Q1hs=;
	b=QabxUPVGNPhkoXGjvyy92BTDYbU/DKi91Iz4/HKiK1EZPYLgB9papecnYIITWyDLziWBHm
	WDE36HnPG3xmcpqv83DUNENMLALuI0gcTqU6BR7/mv2/sV1BlO4sPt+p6YI/fX8oet/v9j
	6jEz38Xhk0+gPWA9RSj9tuj4G4Yi1sU=
X-MC-Unique: SFWEWEdiO5iJiGe-QcSyYA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
CC: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, "committers@xenproject.org"
 <committers@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
Date: Wed, 16 Jun 2021 10:48:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR06CA0101.eurprd06.prod.outlook.com
 (2603:10a6:208:fa::42) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f5042a44-dd51-46c7-4a83-08d930a37f24
X-MS-TrafficTypeDiagnostic: VE1PR04MB6669:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6669639C42233037024EC181B30F9@VE1PR04MB6669.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f5042a44-dd51-46c7-4a83-08d930a37f24
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 08:48:22.1412
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SBt12K8/5jr8E1PjyNbIJrNV7yC1LCWPFMqRTvrdEIC5hi3kvSYTE+Q7kSc0lZSDfP4UNtkM7mOSy35eKkcfSw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6669

On 13.05.2021 22:15, Andrew Cooper wrote:
> On 13/05/2021 04:56, osstest service owner wrote:
>> flight 161917 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/161917/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
>>  test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
>>  test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
>>  test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
>>  test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898
>
> I reported these on IRC, and Julien/Stefano have already committed a fix.
>
>> Tests which are failing intermittently (not blocking):
>>  test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917
>
> While noticing the ARM issue above, I also spotted this one by chance.
> There are two issues.
>
> First, I have reverted bed7e6cad30 and edcfce55917.  The XTF test is
> correct, and they really do reintroduce XSA-286.  It is a miracle of
> timing that we don't need an XSA/CVE against Xen 4.15.

As I already expressed at the time, I view this revert you made, without
there being any emergency and without you having gathered any acks or
allowed for objections, as overstepping your competencies. I did post a
patch to the XTF test, which I believe is wrong, without having had any
feedback there either. Unless I hear back by the end of this week with
substantial arguments as to why I am wrong (which would also need to cover
the fact that an issue was found with 32-bit PAE only, in turn supporting
my view on the overall state), I intend to revert your revert early next
week.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 09:05:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 09:05:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142799.263379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltRTx-0002lv-FN; Wed, 16 Jun 2021 09:05:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142799.263379; Wed, 16 Jun 2021 09:05:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltRTx-0002lo-Bc; Wed, 16 Jun 2021 09:05:09 +0000
Received: by outflank-mailman (input) for mailman id 142799;
 Wed, 16 Jun 2021 09:05:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltRTw-0002le-LJ; Wed, 16 Jun 2021 09:05:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltRTw-0006Tm-DR; Wed, 16 Jun 2021 09:05:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltRTw-000089-0G; Wed, 16 Jun 2021 09:05:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltRTv-0007E3-Vz; Wed, 16 Jun 2021 09:05:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tIlUYPtqifTu1TgzAaL34o6JzoTl/ykWZiLS2UUY7i4=; b=k6E49xt/w30+ZyoSCb2pT+faNM
	jANEiLXu6ApmeAOXO5zSML/Jkwt3mIu19DnwSYZ8vKE5dzc8/ivK6PWCo0jTCcY8IXR1wCs9T2l+t
	6au4Jel3eGmdo2Q7a0d9EaKRgAz/mBlr6C6oVKLPsLXHx8HKx8yMiOZgI9xcdmchVFIE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162847-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162847: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=94f0b2d4a1d0c52035aef425da5e022bd2cb1c71
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 09:05:07 +0000

flight 162847 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162847/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt    20 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                94f0b2d4a1d0c52035aef425da5e022bd2cb1c71
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  319 days
Failing since        152366  2020-08-01 20:49:34 Z  318 days  543 attempts
Testing same since   162847  2021-06-15 20:43:14 Z    0 days    1 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680457 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 09:52:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 09:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142809.263398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltSDa-0007XJ-34; Wed, 16 Jun 2021 09:52:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142809.263398; Wed, 16 Jun 2021 09:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltSDa-0007XC-07; Wed, 16 Jun 2021 09:52:18 +0000
Received: by outflank-mailman (input) for mailman id 142809;
 Wed, 16 Jun 2021 09:52:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltSDY-0007X6-Jd
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 09:52:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0bc8cf1-0a06-49b4-aeec-ad9c50f0263d;
 Wed, 16 Jun 2021 09:52:14 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2054.outbound.protection.outlook.com [104.47.1.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-vGbijrSvOhCCfgHVyeH-1Q-1; Wed, 16 Jun 2021 11:52:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4384.eurprd04.prod.outlook.com (2603:10a6:803:6f::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 16 Jun
 2021 09:52:09 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 09:52:09 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM3PR04CA0128.eurprd04.prod.outlook.com (2603:10a6:207::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 09:52:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0bc8cf1-0a06-49b4-aeec-ad9c50f0263d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623837133;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nvqSXTc9qRreDkQ/pbmsZdVQqBPgRdx/Z2HSpqNP6N0=;
	b=Znz5vEftehY0hTPCn427w2Zzu6IXIjahsxGLJGYh8bW9IvAiMhornKEfOt7vCY1kFWWDeG
	hlwpCUSCbATOZ3n5YpEPVFl8RfLvF73UvNH5OBSnVQoE5l++RxNUwnNDMvcEumHiWVuUxw
	RBhdVbZp0EnWn2v238HLsxKEQ0z+dbM=
X-MC-Unique: vGbijrSvOhCCfgHVyeH-1Q-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XC10Br6y36hBvWKoUo/oarDNIq4lTQGETRDr/OgjPC7aa+RqkszowagqHmLbvw00b/Tj396BLwvtCSZesaO3nSs2lFw9Lir4rsgQ4UdLcVz4oYXU5ZxyTbIHORtCsrS/1Ts4PsIE4CUoCMhADtV4tD8CB3apwwKYam+WlIWBaOzW5ccC4XWAT7kIGWRfa8Rx3KmZiqDuKuuVsHDG5Thn1acCGunze4YOnzadro3Q8GyEJqY4uGbwc6AXvu/GRpwSjedO8dLMEU7UlhUpYGgtlPhltOdyxRPvxb4Moiy4KGcSM/lA54RwxkjAN7BOK86zL5qiRE7liKvL2XKug1vd9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nvqSXTc9qRreDkQ/pbmsZdVQqBPgRdx/Z2HSpqNP6N0=;
 b=R72IzlYS5Ei2XM5dOmMz8lXcjSD39HqCmIgAnWtg9ZB4Ahy0f0iER5gK2xy7dST8M1i43nXY8rIn/A98UT/RYaMnVaxf8AYxzmCUGBzYCFneqZyhDD3kn+dandaIcsAhwya4pSTy8oRC/bhjTnbRHFwDd0PTqqBLz9qRlYGYI4kQpru0+xBQT5aWrFET2u62aCt9I6MN0br/Ydb0Ra040jOzpvxAgwo8/J1v9lc8/hsGLNhsIaipgckAA4+x5ms/E5F48QhC8Nn9ORR0yOVjMd1MZHjJL2uEA6SVuK8M++zVyPIb5ahuL+Py3ndbTqd2kPmQnUa5r4xBHbSQVfKopg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 stable@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>
Date: Wed, 16 Jun 2021 11:52:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210616073007.5215-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM3PR04CA0128.eurprd04.prod.outlook.com (2603:10a6:207::12)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7c1cb976-7611-4f2d-cf91-08d930ac682d
X-MS-TrafficTypeDiagnostic: VI1PR04MB4384:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB438425BF93D1ADF601327226B30F9@VI1PR04MB4384.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nCI6gW+Oih2tEzdwvAsFCoc//7HZKyxDSefAaWpGEffrCeJrQV0zqL6ufRrpV3ONJjQjejRThkXXn4FNq9GWPnnSxKJFMukSzCz2esNHJn2737gmQ20gMjUWHWSqjAGXhO1SsSS5xh1OGDFX1Az0pUFVRkYrFWj/5/bxjqx5YKG6xuiw1+8zjj7Z97/EaqUE6aujK5RtArN200BRarcFAFTRyVieqC6ax6AihLhXvFrDVhn+Lydyt26IkmVZYKwHnOAnU0DGbLOJ+Y01gKRoEvvx+Bly0d0k3FxjIeQW4tgQf7WNrijqqe5WRCZ6IwmY2FEUKjDvBiMbo+TPcY1OZQFVkera3rjBvFTaKTyVtQlJ5dQ09lZRnxbecVAqaavYc/b5jFfHg4kSv7QFnwl9THVJfQPskQwhzZcK8+mRS47NEAf0BtRmOTCu2iXgDrVLmgbC42EcvNwOdM3fQ6eYi8R310fvyxL1AW1aQgT7iJLie7W3LnN6gfmEgZWvVb/Yu+WqeMPqS03pZkRMzdD/nICuij3Oo8g6bu449rPFWVZBo07OsgFEKi6mO/Gtr8aZHbQ7BMrqoNty/x4McGRx8ZBKG6jxaG2DmOuBDklkwpo3XABT2btAo/QqxzHnvBXtOgXwyksypPL+6hoaX7MYDTsR9xLHDbjozxNHw+g4jCqp/6rdCL3cnh2JQkecH7Lb
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(39850400004)(376002)(396003)(346002)(5660300002)(31696002)(16526019)(186003)(26005)(6636002)(478600001)(86362001)(2906002)(31686004)(6862004)(4326008)(8676002)(8936002)(6486002)(66946007)(38100700002)(7416002)(66476007)(66556008)(2616005)(16576012)(956004)(316002)(54906003)(36756003)(53546011)(37006003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SFdLVW9TOFhucnFNdS9pOURUMis3bXVzaCsyL1BXMzBTN1NINnFqZDU3VkhI?=
 =?utf-8?B?NGZ1bjV3MUtnbUNkV1M0UzNXcG40WTlIbUNwalZOcGlmbDJlUy9xVFpCaC9y?=
 =?utf-8?B?R1lzQzZwZTRaRmVqNFVtWUYzV21DeFhVSXUxc1VML1Z4ak5HSFVNVFROdmhK?=
 =?utf-8?B?SmJ0MHZ4d2laWkZZbzVHZXNTeXE4dG1lOUVvRmt2aEREWENMVk8xMXVmZlBW?=
 =?utf-8?B?NzFEYTB5dE15OXk0ZDZYLzhWNHVBajZzbWFBOWVnWVR1TUk2RGFtK2I0bEcw?=
 =?utf-8?B?elp0SmpXb2RPWkxQVlErNnhnRzlXenIvdXdxRnpPdVNmVi9vbUN3SDlKTmt6?=
 =?utf-8?B?WExBSUtxYk1nV2tvV1BWZ0I3QmxZVXFMSEhWVitmRzZnbGNNMmQ5RTlzUjNN?=
 =?utf-8?B?QytjdVZHeVJ4QmYxbXNaUlV1L2l3TG5iN3RUNlZSemRyVndnSTNvbHh0SkVQ?=
 =?utf-8?B?cHFzL2ZKSkkvUGhjNGY1ODFvelplOTIrc2x6RnpUQmlrcTZpTmxaSXpqK2d6?=
 =?utf-8?B?YWV2VGM0NWVrVXd4NkdUSW9HcmNiMmRSMmlDdzBNWkE0bzNTS2w2VWVKWksw?=
 =?utf-8?B?NUZFaHJvazR4cS9FR3VWVDEraCtOYUorNE5QZkxEanpuSExEaVFEbVVSem9X?=
 =?utf-8?B?NFFNSlRuNHNrV2syR00xdDA4RklCRW40TVZxbUo2MXMwWDBGVXliZnhWYVhl?=
 =?utf-8?B?ODRkdUdOWjVjRFoydUFrdUdwcjQrbk8ycEkvUzZCdE9Rc3BXdDRtSVdXV1Rk?=
 =?utf-8?B?V0ppZThnUHpXTmVDbmtkOEdnTis2eWM3QW5LNUlHYnFFTWFNNmRyOHVlVlBO?=
 =?utf-8?B?QjgwVkp6UXgvSVFUeGxwRmk5MFZiYkZvZ2Z1RXBFZXN3anI1Yy9heDV4U1hI?=
 =?utf-8?B?UTYxUm80dGtKNVkyN1UyazRKSU5weGRheExhemJKM1A5TENPNDE0N3lFYW1i?=
 =?utf-8?B?NThPdlNVc2xWOXlyaS9ZQlJSY2dWYnQzd1ZUR3dHTzlZM3E3SGlpOHpPRURV?=
 =?utf-8?B?aG0xV01qMlFHYlJHR3ljNkpleHJicmpjUFl1dlo3M3lQcEVBM29NYXIrSjlD?=
 =?utf-8?B?WTlHenEzUjg4WURWWFV5d0JNWWovSlNlU0hFYzZmVnFqd0ZvbVcwYyt5ZGh1?=
 =?utf-8?B?TlhsOWpoZGlPU0FrK3hzK08yZDJJbXpacitqV1MxRWtqcUxnOVRoOUpQNDhR?=
 =?utf-8?B?K2ZYRTg1cFV5eXcrSnk4MC8zTGJqSk9takJrYWI5WjNnOTNCcjRuV1M1T0t2?=
 =?utf-8?B?QkxRT0YvbmliZUgvTkIyTzIzVlNiMTBwNXloaWdyNm1RZU1kOFpRdStibklw?=
 =?utf-8?B?eS9ibUxtTy9JUXAyQTlvUWJQR09kOVRxakE5SmVGY2xzQ3V6amE1WjR5MStG?=
 =?utf-8?B?WUs5OVZLS0kvUVFMMHIvQitGNUxsQ1MxTGY0aHpwLzBzYUoyKzBHT3ByVXRY?=
 =?utf-8?B?TWpKc1RFL3A1U0xGRDBURHhlM1QxdzZncE8weEhPbm8waUU4M01JdWVoOFN3?=
 =?utf-8?B?a0dJL1FPcStPS1BBTzlIU1FzQXhJb2VMVy9wVzdtb2FmS1pRd1JJZUY3QjA1?=
 =?utf-8?B?SlFBd1F0RklPRVlZRVFROGZGaEhvVTJXRGpXTXU2U2p4ZE5kMlNKMVFCVW5R?=
 =?utf-8?B?dGxDd3JvVkJHaDdWREZDS3pXN2RnWlUzY1lXV3Q5VG10b24yMWFUek4xV09r?=
 =?utf-8?B?eHJXVGJHZGk5L0YycGRuV2toTWEvRXA2ak5vWTYybnhmZlNaa0ZFU2xKWDdP?=
 =?utf-8?Q?1/E+t7pKBBIhHF+TFAmyGLfsXnaLHKLYhcOn1Ar?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7c1cb976-7611-4f2d-cf91-08d930ac682d
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 09:52:09.1915
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: v22s9K57WDLOl7ynke65U0dVXI3993OHUQFbSHSVvXtIh3s6nFDVFhLIwrsdKQiR+9FZF6UUc5uGiCbIHkkxAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4384

On 16.06.2021 09:30, Juergen Gross wrote:
> Xen PV guests specify the highest used PFN via the max_pfn
> field in shared_info. This value is used by the Xen tools when saving
> or migrating the guest.
> 
> Unfortunately this field is misnamed: in reality it specifies the
> number of pages (including any memory holes) of the guest, i.e. the
> highest used PFN + 1. Renaming isn't possible, as this is a
> public Xen hypervisor interface which needs to be kept stable.
> 
> The kernel sets the value correctly at boot time, but when adding
> more pages (e.g. due to memory hotplug or ballooning) a plain PFN
> number is stored in max_pfn. This happens when expanding the
> p2m array, and even that PFN may be wrong: it should be the last
> possible PFN of the just-added P2M frame, not the PFN which
> triggered the P2M expansion.
> 
> Fix that by setting shared_info->max_pfn to the last possible PFN + 1.
> 
> Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
> Cc: stable@vger.kernel.org
> Signed-off-by: Juergen Gross <jgross@suse.com>

The code change is fine, so
Reviewed-by: Jan Beulich <jbeulich@suse.com>

But I think even before the rename you would want to clarify the comment
next to the variable's definition, to make clear what it really holds.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 09:57:07 2021
Subject: Re: [PATCH 2/2] xen: rename wrong named pfn related variables
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8dbeb9ea-56c9-de30-4d5f-fc9c0ced6ac4@suse.com>
Date: Wed, 16 Jun 2021 11:56:52 +0200
In-Reply-To: <20210616073007.5215-3-jgross@suse.com>

On 16.06.2021 09:30, Juergen Gross wrote:
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -95,8 +95,8 @@ unsigned long *xen_p2m_addr __read_mostly;
>  EXPORT_SYMBOL_GPL(xen_p2m_addr);
>  unsigned long xen_p2m_size __read_mostly;
>  EXPORT_SYMBOL_GPL(xen_p2m_size);
> -unsigned long xen_max_p2m_pfn __read_mostly;
> -EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
> +unsigned long xen_p2m_max_size __read_mostly;
> +EXPORT_SYMBOL_GPL(xen_p2m_max_size);

Instead of renaming the exported variable (which will break consumers
anyway), how about dropping the apparently unneeded export at this
occasion? Further it looks to me as if xen_p2m_size and this variable
were actually always kept in sync, so I'd like to put up the question
of dropping one of the two.

> @@ -121,7 +121,7 @@ static pte_t *p2m_identity_pte;
>   * can avoid scanning the whole P2M (which may be sized to account for
>   * hotplugged memory).
>   */
> -static unsigned long xen_p2m_last_pfn;
> +static unsigned long xen_p2m_pfn_limit;

As to the comment remark in patch 1: You don't alter the comment
here either, and "limit" still doesn't make clear whether that's an
inclusive or exclusive limit.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 10:13:47 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162857-mainreport@xen.org>
Subject: [xen-unstable-coverity test] 162857: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 10:13:40 +0000

flight 162857 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162857/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac
baseline version:
 xen                  93031fbe9f4c341a2e7950a088025ea550291433

Last test of basis   162765  2021-06-13 09:18:26 Z    3 days
Testing same since   162857  2021-06-16 09:18:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93031fbe9f..4bcf6433ee  4bcf6433eed3d9cbc00865ec62380a33ca832dac -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 10:37:46 2021
Subject: Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 stable@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-2-jgross@suse.com>
 <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <97de842a-f095-3a12-ab16-beca0f97ba67@suse.com>
Date: Wed, 16 Jun 2021 12:37:35 +0200
In-Reply-To: <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>

On 16.06.21 11:52, Jan Beulich wrote:
> On 16.06.2021 09:30, Juergen Gross wrote:
>> Xen PV guests are specifying the highest used PFN via the max_pfn
>> field in shared_info. This value is used by the Xen tools when saving
>> or migrating the guest.
>>
>> Unfortunately this field is misnamed, as in reality it is specifying
>> the number of pages (including any memory holes) of the guest, so it
>> is the highest used PFN + 1. Renaming isn't possible, as this is a
>> public Xen hypervisor interface which needs to be kept stable.
>>
>> The kernel will set the value correctly initially at boot time, but
>> when adding more pages (e.g. due to memory hotplug or ballooning) a
>> real PFN number is stored in max_pfn. This is done when expanding the
>> p2m array, and the PFN stored there is even possibly wrong, as it
>> should be the last possible PFN of the just added P2M frame, and not
>> one which led to the P2M expansion.
>>
>> Fix that by setting shared_info->max_pfn to the last possible PFN + 1.
>>
>> Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> The code change is fine, so
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> But I think even before the rename you would want to clarify the comment
> next to the variable's definition, to make clear what it really holds.

It already says: "Number of valid entries in the p2m table(s) ..."
What do you think is unclear about that? Or do you mean another
variable?


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 10:43:31 2021
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-3-jgross@suse.com>
 <8dbeb9ea-56c9-de30-4d5f-fc9c0ced6ac4@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 2/2] xen: rename wrong named pfn related variables
Message-ID: <79434ec4-4543-97ad-b010-3f2c1b6a55ad@suse.com>
Date: Wed, 16 Jun 2021 12:43:22 +0200
In-Reply-To: <8dbeb9ea-56c9-de30-4d5f-fc9c0ced6ac4@suse.com>

On 16.06.21 11:56, Jan Beulich wrote:
> On 16.06.2021 09:30, Juergen Gross wrote:
>> --- a/arch/x86/xen/p2m.c
>> +++ b/arch/x86/xen/p2m.c
>> @@ -95,8 +95,8 @@ unsigned long *xen_p2m_addr __read_mostly;
>>   EXPORT_SYMBOL_GPL(xen_p2m_addr);
>>   unsigned long xen_p2m_size __read_mostly;
>>   EXPORT_SYMBOL_GPL(xen_p2m_size);
>> -unsigned long xen_max_p2m_pfn __read_mostly;
>> -EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
>> +unsigned long xen_p2m_max_size __read_mostly;
>> +EXPORT_SYMBOL_GPL(xen_p2m_max_size);
>
> Instead of renaming the exported variable (which will break consumers
> anyway), how about dropping the apparently unneeded export at this
> occasion?

Why do you think it isn't needed? It is being referenced via the inline
function __pfn_to_mfn() in arch/x86/include/asm/xen/page.h. And
__pfn_to_mfn() is used via lots of other inline functions and macros.
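
That dependency can be sketched in a minimal, non-authoritative way (names and the
fallback value are simplified; the real __pfn_to_mfn() in
arch/x86/include/asm/xen/page.h also handles identity and foreign entries):

```c
#include <assert.h>

/* Because the lookup helper is a static inline in a header, every
 * translation unit (including modules) that uses it references the p2m
 * variables directly -- which is what the EXPORT_SYMBOL_GPL covers.
 * Names below are illustrative, not the kernel's exact code. */

#define INVALID_P2M_ENTRY (~0UL)

unsigned long *xen_p2m_addr;	/* EXPORT_SYMBOL_GPL in the real code */
unsigned long xen_p2m_size;	/* EXPORT_SYMBOL_GPL in the real code */

/* Simplified stand-in for __pfn_to_mfn(): a direct array lookup,
 * bounded by the current p2m size. */
static inline unsigned long pfn_to_mfn_sketch(unsigned long pfn)
{
	if (pfn < xen_p2m_size)
		return xen_p2m_addr[pfn];
	return INVALID_P2M_ENTRY;
}
```

Dropping the export would therefore break any module using one of the many
pfn/mfn conversion helpers built on top of this inline.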

> Further it looks to me as if xen_p2m_size and this variable
> were actually always kept in sync, so I'd like to put up the question
> of dropping one of the two.

Hmm, should be possible, yes.

>
>> @@ -121,7 +121,7 @@ static pte_t *p2m_identity_pte;
>>    * can avoid scanning the whole P2M (which may be sized to account for
>>    * hotplugged memory).
>>    */
>> -static unsigned long xen_p2m_last_pfn;
>> +static unsigned long xen_p2m_pfn_limit;
>
> As to the comment remark in patch 1: You don't alter the comment
> here either, and "limit" still doesn't make clear whether that's an
> inclusive or exclusive limit.

Oh, indeed. Will fix that.


Juergen

--------------4AEF556428C2D08DB245AC5A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4AEF556428C2D08DB245AC5A--

--yn2w8Vnh1nN6dHx4ICehFvVeJJ4q41bwl--

--xuNDAfV0ocaoXYJt1qoHabXYUrAOASd1N
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDJ1coFAwAAAAAACgkQsN6d1ii/Ey+E
xAf+JOLQ5Gtcr4+HH7tJx/skcMTr4v4NFPbV4rcwXsYKzElIbBDOoTQY0X3nndWsvOyYJS0WLjPd
cwxP5RhuTdgwKi5ReJ0+M25tL/T/5E9WRovxXHi5AyYw8IXJ47XzXQYHIods7ZtlCH9VJwSLblaF
UfkkBd8jgoCInCkszeD5nYmpm+zV0H0loCLK0QcJCSiHrZy2Hw4w1F5S1v6FZ4/MT/MpOx3OGB/j
bW0SssPOmlwnXO05w8gLw0TvoJBu2sUVdYDZIePFfxfZkjR1PV8XB//slMiCpoB67XYWidEZ77Sa
D6jQ3ep2sbqZgTnW+Lx895wXqcneGxSqq51n+beLzQ==
=cvhV
-----END PGP SIGNATURE-----

--xuNDAfV0ocaoXYJt1qoHabXYUrAOASd1N--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 10:56:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 10:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142853.263457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTDx-0007Ra-Va; Wed, 16 Jun 2021 10:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142853.263457; Wed, 16 Jun 2021 10:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTDx-0007RT-Rs; Wed, 16 Jun 2021 10:56:45 +0000
Received: by outflank-mailman (input) for mailman id 142853;
 Wed, 16 Jun 2021 10:56:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltTDw-0007RN-Or
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 10:56:44 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9df3a922-1330-4431-89e8-2868bc88626a;
 Wed, 16 Jun 2021 10:56:43 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2059.outbound.protection.outlook.com [104.47.13.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-18-DRxD3Vd6NDeS2PRc23dUgQ-1; Wed, 16 Jun 2021 12:56:42 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7040.eurprd04.prod.outlook.com (2603:10a6:800:121::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 16 Jun
 2021 10:56:39 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 10:56:39 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0094.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:18::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 10:56:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9df3a922-1330-4431-89e8-2868bc88626a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623841003;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l4T+Y4Bmfu4hOBK10YfuO2pb8j7kiTpMxBLpw8+PnyI=;
	b=QehCBG57AA8mvgIowLX3sp60oKnW2GJZkXZkK+OO61yZe8UcfohmnidYlEMltn/glVxK+n
	pB32kISH/I975iOc8V+r0YzZG/k5kOMhOSuifuY5FPS/zzHc2sxBj4OQXDtCUKGi6APnd0
	wk/C+qLXSyVdN4bOR7ET+mHE4F2kqo8=
X-MC-Unique: DRxD3Vd6NDeS2PRc23dUgQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JqampmMQAY3EyMlwGWYh2woIyNC9CcybqTx4slcegCJIisw56wU2zFpjUjFWLE0+KDY72yRe1JetJoG10U6XnOFTiteigTGCNzkVnZKZKiFIfHZdInkLATdkwrbkFgrajIoxQDh5T0nZ6OwV7b9T55i64B4T8cbsqJi9IshIVR93K6pTgg3Ru3UvCCmkrH7zRaTZ8mlk/c5+uK6N699lWqJJTDvjErCrDglBRumBR0wYDjCvp/v2uxsWVL51kUx/SfL/jPrA4fenPIXl+BwS73ZfhCk6d1q+/EP5qkVOGlhY4K8tnlTGUH2wXVI90bH5rlfObBgKLwfrL762B1xKCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l4T+Y4Bmfu4hOBK10YfuO2pb8j7kiTpMxBLpw8+PnyI=;
 b=C518WbVDfLd3/Rd0BhutK76ssuHKlm60SaBbuIniW9JhNcgP/00xhCxnsuSkLv2E2X/v1oWoq4bz+vSkwHI2BFnH/8UCb7r6ba5LbSOZJ9aTU4lUbMPFBvswJy784+bnzKrXU1AiJKVqwf9WoY7MnDGHEy0V6elKCQLTBBrRciTTPQ+2F79gz8ssQ6vmlNfVTZZ4Q/N7mjdx8OpZsXxTHfwIuZ/Xy02d+EKyfdr3lRlgYtyKjX0HxjXrMgp3ordYFVq54YtdDk7frQ7u2x0WaMffCFc2CVyCAj1mtsAuQwKPPVf29rmJtPb6gefp5QgHoU5bDM7dWb7CESyRCSriwg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 stable@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-2-jgross@suse.com>
 <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>
 <97de842a-f095-3a12-ab16-beca0f97ba67@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a5747064-cf3b-4ccb-5b46-3b6e069e7202@suse.com>
Date: Wed, 16 Jun 2021 12:56:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <97de842a-f095-3a12-ab16-beca0f97ba67@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0094.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:18::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b1f2dc08-82b8-4575-e304-08d930b56b36
X-MS-TrafficTypeDiagnostic: VI1PR04MB7040:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7040A929C61582A619CFC443B30F9@VI1PR04MB7040.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r27tALQEcW+UccV1NrjecLgyuTmf9w9Xf+1PIBCjl5K0rhkLyi3vdzBauzgUG68zNPaSumI5dZrebjkvv7SfhxlqnecIS+wcuoHODiliNPzH5kpw9TrlNjySKiuWZORvk28GoYDnp6Y+6vh6u3a9THqOERbyGJUNavk6cZn5jaQp6tXco77izfbPVV1hqNpPK96u+4ic/kPRiM6NBya/FK+ixq/B/62g5AEuImtVm+7GwRuembOzRo62AkEMn/mrfsu7M2gKIrvCM7DatQUwXXCFAsoxuoxSQ2X8AozPw8U/wepnNJIAB69DCrLYXprdLJNlvRhNyVGGNgXAGt9G2LaBCOoxVHmiabk0rOHM9Iow8SqnJXzNKXLnHAlaMkzdPG+ukjQkhGm+HW4mtWjAWlbNdGG4TIZWr0xkuMajyusaztGhCynxlkSkFhVq3U+vUjTb4hJSJuhUMdCDRH8a5if3xFYQY5JUrnjkJJV3SRcdnT+vHaU5Wew/4LbC146p6wnZlO8QDZKdnbWLQyxqqmuQdNM9137wmrWQWknWkqzFR9ws2UUEFrMs0B0VG/aKmP6zNy3+6JYttx09s9d+JiuExZV/zqgbOcpjpKdJSUpBtvTFWryV73f1emT0e0uuulxvwkoRBtD7QfsGeNiyF7XGK3OvV3fKLwd7GCPvHs+3Yf9Qp7APaeNbRxdaTbt1
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(376002)(39850400004)(136003)(346002)(366004)(66476007)(66946007)(7416002)(956004)(66556008)(37006003)(38100700002)(8676002)(2616005)(53546011)(5660300002)(54906003)(478600001)(83380400001)(86362001)(4326008)(186003)(8936002)(26005)(316002)(36756003)(6486002)(6862004)(16526019)(6636002)(31696002)(16576012)(2906002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Wk5WZ3VzWmh5NXY4dVVuSW5hOENhQXROTDYrb2IreTdCWnk1VmJTRUlvL1cr?=
 =?utf-8?B?aldTVHZ1UmVoc3hIN3I3UTlvZHlxOFZjbi9FWTZwNU9RSnBvQVJ6UmhnWm93?=
 =?utf-8?B?OTQrS1ZGb3U5eUkwVTc4OU16S0tYZ2VXOWd4Y1Q0UUMvYWt2ZTJ1UDd0WjIx?=
 =?utf-8?B?a1RXL0ZXVGJkVXp3U3gzMjJsaFFNM0ZzYXdUbGoyVWJ4Z0gyUkMwMFBCbDhZ?=
 =?utf-8?B?Rm9oMU5ER2Vpc29Ic2FsaE1iRHRJd2dBL1UyeWVSaUhIYWZ1TFJwVFVGTHlr?=
 =?utf-8?B?M284c0YyR1NuUTNaSTJ6SGNkT09uTGVvNms3MVVNVHJuME5KT2ZkTEZwNFhE?=
 =?utf-8?B?TFVkL1RWY2pxRjFFaXRaTFFYYmhtQTdCQ0lUaGM3Z2tSbGJFTEplWXdMcGx0?=
 =?utf-8?B?aDZzWEkyQnB1VktBNW5OV3dYSTZaK2xqSWI1OUVuU1hwc1hIT0JVd2hPZ2lL?=
 =?utf-8?B?TEFCcGY2U1E2UnlRZVF1OWFzejBRL1RyWEtwRzZUMlNBNGllZ0FaUDNRWTh3?=
 =?utf-8?B?UFpvamp4Y2h1dVhxZUtkVVFhamVCMGE5YnZKSG1CcFBNZk1CcEZzTzhLVjVC?=
 =?utf-8?B?UjZZMzdHSy9xblpLOUpIWFd2MHlqN25kNW1NUWxrWUM1Q3o0bWI1TWJDbno1?=
 =?utf-8?B?L0Y3N1JqaXE2RUxGQmtGem5HeXlkcERjV2luMWJWeTJ2eWpadWZKbTJ3R0hY?=
 =?utf-8?B?LzJCcDRUWng3Uld2U3ljNnpLSG5oczgxWm9SdjlUT0tIYy9vK0ZIQXRHTHEz?=
 =?utf-8?B?ZXpPcTZmYjJYb01EWTYvd0lPdXI3YjJHc20vMGRNVmpBdWRXeE4yTVp2SzRV?=
 =?utf-8?B?d2g1bWRLbW5adFowNTdwNURleURMak1oSzRsRkJHcjhVTW1BQlBzakpDdEZM?=
 =?utf-8?B?dm9KVGVHbURzOVdVQjhlOWdhZG00OUgvT0VmR3RCNzdiS2xDTFJHc2kxMEsy?=
 =?utf-8?B?QkFLTFVCUzdlcW5rQklQYWVUT25vVlA4ZjZ4OEJyVG93NS90ZVM4aFlrQnJV?=
 =?utf-8?B?UEY1ZEM0S2JDYkdxMXpwVmRrdFNQUlVETUJicHE5YmFmTHpOSVVPR3g4aDRG?=
 =?utf-8?B?S2RWcGFVSHBoSU1lM3BrQnZtcWtWaFV5a1B6RVNQZUQ1RWVHQTJIMERGcFpD?=
 =?utf-8?B?ZThBNFhmZXJjV3BYbWYvSzlCZmM1b21lM0pvUVd6S1JDUjhGSFg2N21kZkRH?=
 =?utf-8?B?eElQRzNjKzdNS1BRdWVDRExWY2hiTkRsYkpVbWEreGFSZm1KNXVCRkxHK1Za?=
 =?utf-8?B?cUQySGR3aStncG1mWWhFRlhPYWEweDh0Zmhja1dXM01EbTZPM0JtL2F4K0lh?=
 =?utf-8?B?dk1oL2tLWUw4VVpjREJmeEc0TWprZ2wxaXlCNEluNng2L1JFZmFEVExwcG5D?=
 =?utf-8?B?WVp2L1NpZDlPc3JCYXVzSXk2dmJDNGtKUkFNTkhMdkl3dmd1VVZNc2VpWGdk?=
 =?utf-8?B?akxZQTQ3Qk5nei8rTDB2Y2duWnFoMG1lK0E4QlVSR3NxUkNON2JkckhjRC9i?=
 =?utf-8?B?VndpcjVIWWhsbFM3K05qQzVaMFdBSW1ZVktVaGlsd2VUOCtzU0szMlJrbTZa?=
 =?utf-8?B?UlFmZWQxQnV6QnNzd0J0SStqQ3hBOTZTZWIrT1I0VEx5czFMTzkxTlFQcXNI?=
 =?utf-8?B?QTlTTnFZZzdIM0ppMEtRREFPQytCLzJqY1g0akYyK1p1MVN2TnY1d2QzNzQ2?=
 =?utf-8?B?ZjY1bkJYYTMzbzQ4L21VNkNuUVkzcURvbWVDbTRmYkQ2eXJSbExzdHFSTGdv?=
 =?utf-8?Q?VN1qYCk0UCqiqDMNNliXToGtlXwM/rrzT2UfuXn?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b1f2dc08-82b8-4575-e304-08d930b56b36
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 10:56:39.8177
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BsBXyF1UkiLS8mAKsjsrayNDh95tDeO3K3mu/H3RlUWtbJZM632/lyWKNsJbwPdv1sIeRBrO9x/PYPlvD54UKQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7040

On 16.06.2021 12:37, Juergen Gross wrote:
> On 16.06.21 11:52, Jan Beulich wrote:
>> On 16.06.2021 09:30, Juergen Gross wrote:
>>> Xen PV guests specify the highest used PFN via the max_pfn field in
>>> shared_info. This value is used by the Xen tools when saving or
>>> migrating the guest.
>>>
>>> Unfortunately this field is misnamed, as in reality it specifies the
>>> number of pages (including any memory holes) of the guest, i.e. the
>>> highest used PFN + 1. Renaming isn't possible, as this is a public
>>> Xen hypervisor interface which needs to be kept stable.
>>>
>>> The kernel sets the value correctly at boot time, but when adding
>>> more pages (e.g. due to memory hotplug or ballooning) a plain PFN is
>>> stored in max_pfn instead. This happens when expanding the p2m array,
>>> and the PFN stored there may even be wrong, as it should be the last
>>> possible PFN of the just-added P2M frame, not the one which triggered
>>> the P2M expansion.
>>>
>>> Fix that by setting shared_info->max_pfn to the last possible PFN + 1.
>>>
>>> Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> The code change is fine, so
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> But I think even before the rename you would want to clarify the comment
>> next to the variable's definition, to make clear what it really holds.
> 
> It already says: "Number of valid entries in the p2m table(s) ..."
> What do you think is unclear about that? Or do you mean another
> variable?

I mean the variable the value of which the patch corrects, i.e.
xen_p2m_last_pfn. What I see in current source is

/*
 * Hint at last populated PFN.
 *
 * Used to set HYPERVISOR_shared_info->arch.max_pfn so the toolstack
 * can avoid scanning the whole P2M (which may be sized to account for
 * hotplugged memory).
 */
static unsigned long xen_p2m_last_pfn;
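
The off-by-one being corrected here can be sketched as follows
(p2m_max_after_expand and the constant are illustrative names, not the
kernel's; on 64-bit x86 a P2M frame holds PAGE_SIZE / sizeof(unsigned long)
= 512 entries):

```c
#include <assert.h>

/* shared_info's "max_pfn" really holds a page count, i.e. the highest
 * covered PFN + 1. After expanding the P2M to cover 'pfn', the value to
 * advertise to the toolstack is therefore the end of the newly added
 * frame, not 'pfn' itself (which merely triggered the expansion). */

#define P2M_PER_PAGE 512UL	/* entries per P2M frame (illustrative) */

unsigned long p2m_max_after_expand(unsigned long pfn)
{
	/* Round up to the boundary of the frame that now covers 'pfn':
	 * last possible PFN of that frame, plus 1. */
	return (pfn / P2M_PER_PAGE + 1) * P2M_PER_PAGE;
}
```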

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:01:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142858.263467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTIr-0000QP-IK; Wed, 16 Jun 2021 11:01:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142858.263467; Wed, 16 Jun 2021 11:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTIr-0000QI-F7; Wed, 16 Jun 2021 11:01:49 +0000
Received: by outflank-mailman (input) for mailman id 142858;
 Wed, 16 Jun 2021 11:01:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltTIp-0000QC-LO
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 11:01:47 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e01c74c9-67f9-4b3d-8433-c19d125cac31;
 Wed, 16 Jun 2021 11:01:46 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2057.outbound.protection.outlook.com [104.47.12.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-Lgo-8QzIONmqVrB3j2VoFA-1; Wed, 16 Jun 2021 13:01:44 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7037.eurprd04.prod.outlook.com (2603:10a6:800:125::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 16 Jun
 2021 11:01:43 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 11:01:43 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FRYP281CA0001.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Wed, 16 Jun 2021 11:01:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e01c74c9-67f9-4b3d-8433-c19d125cac31
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623841305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P4Qd2KX4Ezz3bmQaG3UyE0GNrbB+QN0FC8EFB3mW1zc=;
	b=icfYihTsPMpm/K5zqlkwio+aXEVWOlPdMJmQjguu5r9rM5l6HM8hn61ed3M9FDdqSAHIi1
	rMLxOx7uLHBlJyadUoUNFjYNWoHwljsVRsYgY4SHJPFpg0m94Bh4h3KZzimyrbMIp1zKWb
	OrWetJFPN1OLClNWsA4c3CrLrPZfrnI=
X-MC-Unique: Lgo-8QzIONmqVrB3j2VoFA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=goLh89kPEN+DH5mu+Q1OEIsgNuLMusl000kPUUeFJujS7KxDWDju8GI/IUyVqgbZQupZKoL//ZFkZofc7HbruzVUntx6rKgCvtqL1nDaBfOzqNWWiysIGSqWl8Mohs181q0SVERiUxFZlxRCCfyKrMDI3esg0R07wKBPfJhryf7bZTWXcQ7HGASNDl7Lt3B/4mL4b+hsGqo6SC8pD/CihoOOjfTJjXDNbmDNGwOBAeIIyDfIxYHrl85FAGEKatcPSQLTlFYFLY1UmPwzTGHPdoRSlbEziCwZeyDc16UzRONC0wzCiuzrovTomNxeYC+CrUl/5PgNWjJlvug2JnQiVw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P4Qd2KX4Ezz3bmQaG3UyE0GNrbB+QN0FC8EFB3mW1zc=;
 b=iUGlSOSLGP2akuSzhGQX05Cwp9me7MXFqcPIzI7umeYppExWf8icgT3DNY0cUujCRjvRk7WAuIEsA18j3EA+2746Cl9FxrWlSYF2OHXfYP/afE5Ip4zS/MYhn93qQqGMlwZGEqI4tfG4/a465g5vOuf26FyHuAeIQVJvP2PMtqnuf0KtiwDq45K3Rzs3kBEeolJ+B1SLRTjYgp+q7JCcUOCcWHi3fFbbHVlHhAxYSSjit3pgCubUahdaYs59g5Xcd9z2/XEqwvYrldBgonC6jsu/bIZcNrHJMAcsfsmzKI/mLR2wvDQJyQtJcNWDRvLTcjkmlfH4B7P+OAXlWTsq5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 2/2] xen: rename wrong named pfn related variables
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-3-jgross@suse.com>
 <8dbeb9ea-56c9-de30-4d5f-fc9c0ced6ac4@suse.com>
 <79434ec4-4543-97ad-b010-3f2c1b6a55ad@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f2328359-06ac-dbde-4afb-9be2a7b26e42@suse.com>
Date: Wed, 16 Jun 2021 13:01:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <79434ec4-4543-97ad-b010-3f2c1b6a55ad@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FRYP281CA0001.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::11)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 707bdb7d-0294-4f02-2eb0-08d930b6200a
X-MS-TrafficTypeDiagnostic: VI1PR04MB7037:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7037009A945EABDA4B6DFC61B30F9@VI1PR04MB7037.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8o1qIriXw+Uz5PiwAEl1/f/r06R7V2GBc97VXm2vVBRg4NzSX19X5HtzKBIUKoREc+G0chfepS4y67jonRB8483V779tIK5VuZES4YiWEIbPhBwz9S2oaIx5CjI3q3FVpfRY3k25Ssnca353OTYYxY2GdV54pffF8i7qLsHM21ORavrwoBHYPcSpZZjTYZ7wjIQm+0grecb9uEQRtw7tqxK89p9ilOXnekeZdLl2zAMgiP9piZDLqVveemx07BmClZ5qJ1I9lDzcUTQiSkvJg/gPEP6TrnL9856IxBX0Ph9zSQnJZc/jsEF/MM1OEkGhaB05F3ZzQnPLFWb///Vt2jVb3a3Z369RKHy2g4Fhf/jxhB30e9I6qTvdzeRbcsRfD1p/xRslw6IIS4Rmo/rmtfT9LJMwYjFNJlS2yzLIp5G5ZNa8cTM6qYOLxCYNLdusLO5HXRbYO2z7QQFzCraMwdm0ANAtDvh/HKoqDeC3JYsF40M6pEYWP7gJdrs2q9EsZDuJ9EVgDY4AZpX7c4e4P3YhZ3sLM2j0vNhPb0Y5akDG2ERQ0gLT/9L17VhaMmwTCuqAk7gYZMDSHWftt8x/YqSVRb3yMTdf9+M/1Lj3HysaHAYNpzZTmD59zbVTwTSHBEk+jVS0USy41vyg3v9G9ReH4NkPX8C51nYQS0r0hn4mm29H1rAtKs40ZGCoL5iD
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39850400004)(366004)(376002)(346002)(396003)(136003)(6636002)(53546011)(31686004)(66476007)(66556008)(6486002)(6862004)(4326008)(5660300002)(956004)(2616005)(66946007)(478600001)(2906002)(54906003)(16576012)(37006003)(8936002)(8676002)(31696002)(316002)(38100700002)(26005)(16526019)(186003)(36756003)(86362001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?enJnZ2doM25heU5zeVliQ1NaRHVMOEZHaVFRM000Y3duRUpPUEZGd0pWNTlF?=
 =?utf-8?B?V2VXTDdMbFV5SVI3QXZHVjFPOGt4Y3ArZmhqeGVVejczZzFVYm5XOWZXWHAv?=
 =?utf-8?B?ZmpFcVhyakY3WjZrRGtwMGkrNVliQ1lUdkpMVkFnUFhaNG5oYldmU29pZ1Ez?=
 =?utf-8?B?Q2pXekJkS05jUTYvZGU3clo4NzRtVTJIN0g0YkxhMlR1R2FkaXM2ckxrUm1a?=
 =?utf-8?B?Y2NCenlUbXpBdk1ud05rKzF3VHNVYmlZQ0dOZ0VKVDJzRGFDdUtXKzVKNnNQ?=
 =?utf-8?B?dEp3c08wTUh2V05LeGNTRU11ZHZNSHBkMlJNNVFLelN6WnJ1YjFrMXlvaDJK?=
 =?utf-8?B?L0xycEgvQU92bUNCbm9lUnlLNHJVaHhZbXJKVkRPZHRTam5ydFVndnVpbTBp?=
 =?utf-8?B?bmxaN2VEUmhzblRKWWh6UEczNzh1c2IyejYwZkkxM3Y1M1FTc2Yrbm1HQyty?=
 =?utf-8?B?T3cyNllYK2dKL1NNUXVXNS9hb0tOOE0wZDZDOXp6d3VxYTNHZ2pDSmpicjli?=
 =?utf-8?B?LzlQV1BWS0l3bGZhWWxFQUFrRDc3cURHcXM0U205UmQ1VHdWNFRqV0ExQzJH?=
 =?utf-8?B?QmJMak4vdnh2SVZILy93UGxMc1BmbzZvWUVmVlVPSFV2MENUaWtZUHNDZUdO?=
 =?utf-8?B?b0JIOUxZODVKUnh0d3RheDBuK25Cc0tFajFSallQT1d5ekVBOVBYQWordzZC?=
 =?utf-8?B?N1loZjhEOVFoenZpTnB3VkFlc3Znalg1dGo1MjFpOWtHK0ZwZm5DdVQvSzJH?=
 =?utf-8?B?L0dvb2ZvVWVvdnByTEFzUmpCcU1HejRWcXB0Sm1NVFJDODZhU2Fha0VUdDY4?=
 =?utf-8?B?cERJUkVSRXptVlhxbkdZZ3BHS24zaVBSaEJsVEJUNDJQWE5yUkNvSmZFdytl?=
 =?utf-8?B?bjVsVnowSXc1eUNmZ29vZG1xY2lJSyt6RHdDVVhmeTViWnNPcE9JWi9WdXQr?=
 =?utf-8?B?OW1yMExQcjJuVUU5ajdTWnVnZlR2OENnZ2Zjc2NrSHpodWZzQ3pZWndQeDBR?=
 =?utf-8?B?NjJZcDFmeTBtZ3c5MGpneExnZ202Qys2UlRSeUdQZlZIL1RTK2QwWTU2QUJ4?=
 =?utf-8?B?dXdGNTNhYVJDUVlWTUJrV1dSRFIxQXNKZG11ZjFxRUVPWGQ2TEZCSlVtZ2dn?=
 =?utf-8?B?R0c4WWhLSmI3eTNuOGJRZGorY1ZFemYwNHJDbko2dUNQNUUyUGNPdTlmdG1v?=
 =?utf-8?B?RkVYcUVKN05XZXpUdkpBdUhBc1hrUGtET2dQYzZZUnh3bS9sKytYNUxVT21G?=
 =?utf-8?B?OHhVM2FBZ2dVc0VpbFczd2VHbmI5RHY2YUdRMGFWRVE0QjQzdGlaNVRiYWNZ?=
 =?utf-8?B?REF5cmxqZ1BqS2xnTzdUa2E0Rk1PMHZ3VS9UeVZndk5pZ1VSV0tEb3dQME5X?=
 =?utf-8?B?K1VXcEkzV3laWHgrY3ZHQlZLZmJWblhGUlg0V3ZGN0I1L2tZTTVlK093YWdU?=
 =?utf-8?B?OFRVU3VEWGhiR2dNWmdzSWQvYW5DQUg4RHdKeXg2NTR0c1JBRzllU0lUVmd1?=
 =?utf-8?B?NFRBZkhZMU84UGt3Y2ZNN1pHcHkyOUErSjlpVGFla1Vqa0EzT0xHWFk5Tkt4?=
 =?utf-8?B?eTk0ZzV2elZsTjdoKzEvVFZGRk5NSXlUMFlaa0ZzejFMcDlWalRUWnFSZnBK?=
 =?utf-8?B?d0ZLSWhsT0YzblZXVTFDTUJKNkMrRG9uQVhGelNWNkZpcVhRMk9tcDV2ZjRl?=
 =?utf-8?B?aVl6VmhnU0ZrQ2FUVytrT3l4TWtaOSt4Mkl2WkdPWW9nankxR2dUNXNpUDh6?=
 =?utf-8?Q?9YedW+zZdeuFqQI1XJDDkovymnkynzEAuODGe0l?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 707bdb7d-0294-4f02-2eb0-08d930b6200a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 11:01:43.0084
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GdcOza78rHSP8UbioDdGhUGS2Mm/UniLe4/AuDyrXtae80uQh/Wus/3Zjc138lQktOAqqnDXy0aKXShAz2GLuA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7037

On 16.06.2021 12:43, Juergen Gross wrote:
> On 16.06.21 11:56, Jan Beulich wrote:
>> On 16.06.2021 09:30, Juergen Gross wrote:
>>> --- a/arch/x86/xen/p2m.c
>>> +++ b/arch/x86/xen/p2m.c
>>> @@ -95,8 +95,8 @@ unsigned long *xen_p2m_addr __read_mostly;
>>>   EXPORT_SYMBOL_GPL(xen_p2m_addr);
>>>   unsigned long xen_p2m_size __read_mostly;
>>>   EXPORT_SYMBOL_GPL(xen_p2m_size);
>>> -unsigned long xen_max_p2m_pfn __read_mostly;
>>> -EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
>>> +unsigned long xen_p2m_max_size __read_mostly;
>>> +EXPORT_SYMBOL_GPL(xen_p2m_max_size);
>>
>> Instead of renaming the exported variable (which will break consumers
>> anyway), how about dropping the apparently unneeded export at this
>> occasion?
> 
> Why do you think it isn't needed? It is being referenced via the inline
> function __pfn_to_mfn() in arch/x86/include/asm/xen/page.h. And
> __pfn_to_mfn() is used via lots of other inline functions and macros.

Oh, sorry. Since I no longer work that much with the Linux sources,
I didn't pay attention to the include/ changes being listed ahead of
the *.c ones, and inferred from the last file touched being a *.c one
that no headers were changed by the patch.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:11:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:11:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142866.263478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTS7-0001sR-GX; Wed, 16 Jun 2021 11:11:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142866.263478; Wed, 16 Jun 2021 11:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTS7-0001sK-DU; Wed, 16 Jun 2021 11:11:23 +0000
Received: by outflank-mailman (input) for mailman id 142866;
 Wed, 16 Jun 2021 11:11:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltTS6-0001sA-HW; Wed, 16 Jun 2021 11:11:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltTS6-0000Ib-BA; Wed, 16 Jun 2021 11:11:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltTS6-0006B3-53; Wed, 16 Jun 2021 11:11:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltTS6-00042y-4T; Wed, 16 Jun 2021 11:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=sUC7q7almMuaviM69SvNNqZYK3zAGs6GbVhicC9NZFA=; b=3x4O5MNjDvtKCGpoDYuBpfjpFQ
	OZJR0eSc9vZEOScQ8j4D0XQLFXVqfa66jKd/FnGbmKGtOGuulO4TYmrJiQhDSL7m0LClUx4vMxHif
	d/ZxW3AUMxYsSa2bvN4KoU9cAkORL2FmQ+X6PYkI5Kd3BBSvw/LoNfHOG1CJQHS1qFnI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-debianhvm-amd64
Message-Id: <E1ltTS6-00042y-4T@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 11:11:22 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-debianhvm-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161183/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.guest-saverestore --summary-out=tmp/162858.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-debianhvm-amd64 guest-saverestore
Searching for failure / basis pass:
 162840 fail [host=pinot0] / 160125 [host=fiano0] 160119 [host=godello1] 160113 [host=chardonnay0] 160104 [host=fiano1] 160097 [host=pinot1] 160091 [host=godello0] 160088 [host=elbling0] 160082 [host=albana0] 160079 [host=albana1] 160070 [host=chardonnay1] 160066 [host=chardonnay0] 160002 [host=huxelrebe1] 159947 [host=godello0] 159926 [host=elbling0] 159911 [host=fiano0] 159898 [host=albana0] 159888 [host=godello1] 159878 [host=godello0] 159869 [host=chardonnay1] 159860 [host=albana1] 159853 [host=elbling1] 159848 [host=huxelrebe1] 159842 [host=elbling0] 159834 [host=godello0] 159828 ok.
Failure / basis pass flights: 162840 / 159828
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ef91b07388e1c0a50c604e5350eeda98428ccea6-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#cb90ecf9349198558569f6c86c4c27d215406095-1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#243036df0d55673de59c214e240b9b914d278b65-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From git://cache:9419/git://xenbits.xen.org/xen
   93031fbe9f..4bcf6433ee  coverity-tested/smoke -> origin/coverity-tested/smoke
Loaded 31671 nodes in revision graph
Searching for test results:
 162778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162795 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162818 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162840 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162849 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 162858 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 159828 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 159834 [host=godello0]
 159842 [host=elbling0]
 159848 [host=huxelrebe1]
 159853 [host=elbling1]
 159860 [host=albana1]
 159869 [host=chardonnay1]
 159878 [host=godello0]
 159888 [host=godello1]
 159898 [host=albana0]
 159911 [host=fiano0]
 159926 [host=elbling0]
 159947 [host=godello0]
 160002 [host=huxelrebe1]
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=chardonnay0]
 160070 [host=chardonnay1]
 160079 [host=albana1]
 160082 [host=albana0]
 160088 [host=elbling0]
 160091 [host=godello0]
 160097 [host=pinot1]
 160104 [host=fiano1]
 160113 [host=chardonnay0]
 160119 [host=godello1]
 160125 [host=fiano0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161111 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 161112 fail irrelevant
 161115 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161118 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161088 fail irrelevant
 161120 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161122 fail irrelevant
 161126 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161127 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161130 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161132 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0693602a23276b076a679b1e7ed9125a444336b6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161133 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161135 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161137 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161138 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 51204c2f188ec1e2a38f14718d38a3772f850a4b b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161142 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161143 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6bc6cdc82d45f203bc9fc4342c0452214c74fe b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161144 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161145 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9abda42bf2f5aa6ef403d3140fd3d7d88e8064e9 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 893103e286ac1c500d2ad113f55c41edb35e047c
 161121 fail irrelevant
 161146 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161148 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 161149 fail irrelevant
 161152 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a557b00469bca61a058fc1db4855503cac1c3219 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 4e01c48886d9fbfee3bf7e481c4529a176331c78
 161155 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1941858448e76f83eb00614c4f34ac29e9a8e792 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161156 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 65a9d3807e9a0ffd9f9719416a07be41b6f39e94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161158 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 94fa95c8746c553324e8b69ea4a74af670075324 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e4341623a3b87e7eca87d42b7b88da967cd21c49 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 60c0444fae2148452f9ed0b7c49af1fa41f8f522
 161160 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d1929069e355afb809a50a7f6b6affdea399cc8c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 368096b9c4a273be58dd897e996e3e010bcfc21b
 161162 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b6d5996706ddb6082e3ea8de79849bfecf2aaa15 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161163 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161164 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161165 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161166 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161168 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161147 fail irrelevant
 161170 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161172 fail irrelevant
 161174 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161178 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161181 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161182 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161183 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 []
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162762 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162676 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 159828 (pass), for basis pass
 Result found: flight 162818 (fail), for basis failure
 Repro found: flight 162849 (pass), for basis pass
 Repro found: flight 162858 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161170 (pass), for last pass
 Result found: flight 161174 (fail), for first failure
 Repro found: flight 161178 (pass), for last pass
 Repro found: flight 161181 (fail), for first failure
 Repro found: flight 161182 (pass), for last pass
 Repro found: flight 161183 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161183/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.29022 to fit
pnmtopng: 236 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162858: tolerable FAIL

flight 162858 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162858/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:18:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142874.263492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTZ0-0002fq-Gq; Wed, 16 Jun 2021 11:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142874.263492; Wed, 16 Jun 2021 11:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTZ0-0002fj-Dq; Wed, 16 Jun 2021 11:18:30 +0000
Received: by outflank-mailman (input) for mailman id 142874;
 Wed, 16 Jun 2021 11:18:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=my07=LK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltTYz-0002fd-BB
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 11:18:29 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc2ed7f2-86db-41c6-ba9a-07a3cb725144;
 Wed, 16 Jun 2021 11:18:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 414AD1FD49;
 Wed, 16 Jun 2021 11:18:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id F2AA5118DD;
 Wed, 16 Jun 2021 11:18:26 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id zkYeOgLeyWAbagAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 16 Jun 2021 11:18:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc2ed7f2-86db-41c6-ba9a-07a3cb725144
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623842307; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vQnhy7epSoxBF0Q33jPWHaptTxsHKqQgS9prFbITaw8=;
	b=IS2BO4p9AeNfBtCzmJKOtn1kQfXZttWvhazm9fxd7yfRfJjGYxJqXvQYMTvOF6Bggiu1VJ
	33PDTgYPxLPNj59gnZqo2/t7sGQT225s34IfjqzDoZt4SdxySEWeCHCKFZYKRqmnpxKfdm
	t5Zl1X5Iy73Pg7UQqVSlz7NCuncGnAM=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623842307; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vQnhy7epSoxBF0Q33jPWHaptTxsHKqQgS9prFbITaw8=;
	b=IS2BO4p9AeNfBtCzmJKOtn1kQfXZttWvhazm9fxd7yfRfJjGYxJqXvQYMTvOF6Bggiu1VJ
	33PDTgYPxLPNj59gnZqo2/t7sGQT225s34IfjqzDoZt4SdxySEWeCHCKFZYKRqmnpxKfdm
	t5Zl1X5Iy73Pg7UQqVSlz7NCuncGnAM=
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 stable@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, x86@kernel.org
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-2-jgross@suse.com>
 <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>
 <97de842a-f095-3a12-ab16-beca0f97ba67@suse.com>
 <a5747064-cf3b-4ccb-5b46-3b6e069e7202@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
Message-ID: <38b6ffe0-e737-b5ad-292e-4567169da2cd@suse.com>
Date: Wed, 16 Jun 2021 13:18:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <a5747064-cf3b-4ccb-5b46-3b6e069e7202@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="IQylznpjmVJzclnnrAPlCY9QqLxMCmrJS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--IQylznpjmVJzclnnrAPlCY9QqLxMCmrJS
Content-Type: multipart/mixed; boundary="OLR98vrvW4TVmJmg6eTytLC9R4elmdYdY";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 stable@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, x86@kernel.org
Message-ID: <38b6ffe0-e737-b5ad-292e-4567169da2cd@suse.com>
Subject: Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info
References: <20210616073007.5215-1-jgross@suse.com>
 <20210616073007.5215-2-jgross@suse.com>
 <a3674ab9-40d8-c365-d48c-0e1c88814942@suse.com>
 <97de842a-f095-3a12-ab16-beca0f97ba67@suse.com>
 <a5747064-cf3b-4ccb-5b46-3b6e069e7202@suse.com>
In-Reply-To: <a5747064-cf3b-4ccb-5b46-3b6e069e7202@suse.com>

--OLR98vrvW4TVmJmg6eTytLC9R4elmdYdY
Content-Type: multipart/mixed;
 boundary="------------9763D726AC0B9F598954F6C1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9763D726AC0B9F598954F6C1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.06.21 12:56, Jan Beulich wrote:
> On 16.06.2021 12:37, Juergen Gross wrote:
>> On 16.06.21 11:52, Jan Beulich wrote:
>>> On 16.06.2021 09:30, Juergen Gross wrote:
>>>> Xen PV guests are specifying the highest used PFN via the max_pfn
>>>> field in shared_info. This value is used by the Xen tools when saving
>>>> or migrating the guest.
>>>>
>>>> Unfortunately this field is misnamed, as in reality it is specifying
>>>> the number of pages (including any memory holes) of the guest, so it
>>>> is the highest used PFN + 1. Renaming isn't possible, as this is a
>>>> public Xen hypervisor interface which needs to be kept stable.
>>>>
>>>> The kernel will set the value correctly initially at boot time, but
>>>> when adding more pages (e.g. due to memory hotplug or ballooning) a
>>>> real PFN number is stored in max_pfn. This is done when expanding the
>>>> p2m array, and the PFN stored there is even possibly wrong, as it
>>>> should be the last possible PFN of the just added P2M frame, and not
>>>> one which led to the P2M expansion.
>>>>
>>>> Fix that by setting shared_info->max_pfn to the last possible PFN + 1.
>>>>
>>>> Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
>>>> Cc: stable@vger.kernel.org
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> The code change is fine, so
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> But I think even before the rename you would want to clarify the comment
>>> next to the variable's definition, to make clear what it really holds.
>>
>> It already says: "Number of valid entries in the p2m table(s) ..."
>> What do you think is unclear about that? Or do you mean another
>> variable?
>
> I mean the variable the value of which the patch corrects, i.e.
> xen_p2m_last_pfn. What I see in current source is
>
> /*
>   * Hint at last populated PFN.
>   *
>   * Used to set HYPERVISOR_shared_info->arch.max_pfn so the toolstack
>   * can avoid scanning the whole P2M (which may be sized to account for
>   * hotplugged memory).
>   */
> static unsigned long xen_p2m_last_pfn;

Ah, okay.

I think only changing the comment without renaming the variable isn't
the way to go.

In order to keep the to-be-backported patch small, I'd rather do the
comment adjustment and variable renaming in a follow-up patch.


Juergen

--------------9763D726AC0B9F598954F6C1
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9763D726AC0B9F598954F6C1--

--OLR98vrvW4TVmJmg6eTytLC9R4elmdYdY--

--IQylznpjmVJzclnnrAPlCY9QqLxMCmrJS
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDJ3gIFAwAAAAAACgkQsN6d1ii/Ey9a
TQf/bibt2AbowpHgThmNOjzsPvI0Fp3dN7Jl7jetnuYN4jCYRmV2dqwUYxoNfiHKsk7Kxzy1mg5P
QuVkjPoYKxYRGFveAhHsowsMaB1Hx6jMvZiYBa3GdxK1Ii74j0u0RSC2xqwbsS+ra04xQ8h3s/6w
UHmni0j+SvJMgbUBpW/jvwd+fE6zaidjhRgw9sPFwMyfk3CLc0SPErT+xD6pmwu5hkwl/nLN3QQl
DIRCetps5EQvTgXTjEeR7dqShhC09obcBtdHZASZKGkC2ToSkqB/u1ysLwwgx2+WYGPKVfDatA6y
purErzPLK7l0ABvIrF1p+E0IVrnXsKOR6qiK+EIkeg==
=2F1H
-----END PGP SIGNATURE-----

--IQylznpjmVJzclnnrAPlCY9QqLxMCmrJS--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:43:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142890.263515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTwz-00061O-PG; Wed, 16 Jun 2021 11:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142890.263515; Wed, 16 Jun 2021 11:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTwz-00061H-M9; Wed, 16 Jun 2021 11:43:17 +0000
Received: by outflank-mailman (input) for mailman id 142890;
 Wed, 16 Jun 2021 11:43:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=igu3=LK=gmail.com=rm.skakun@srs-us1.protection.inumbo.net>)
 id 1ltTwy-00060l-9z
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 11:43:16 +0000
Received: from mail-lj1-x22b.google.com (unknown [2a00:1450:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a84bb722-96ed-4cdc-8182-d5ca1c592074;
 Wed, 16 Jun 2021 11:43:15 +0000 (UTC)
Received: by mail-lj1-x22b.google.com with SMTP id d2so3347420ljj.11
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 04:43:15 -0700 (PDT)
Received: from localhost ([178.151.124.169])
 by smtp.gmail.com with ESMTPSA id m18sm239190ljg.105.2021.06.16.04.43.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 04:43:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a84bb722-96ed-4cdc-8182-d5ca1c592074
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=7nQgkz7v7kjXkgg5q9M0yPzBFdiLLV6nxJ8RkjNgpeA=;
        b=aDPrdhEfh1kFSt/STcDHzzby2mSMyRVLn2/IHyVKerpOr7l762ZRsl4B8m+uIlcDi3
         YarS0JqZSnkfgGWTjhBS33bwVvWwsN9nS6HTbF06IdrjyF0I+4ErUV0d4n3oQwSBWmRP
         dj+YSl8fn58koBvmMomHtj+y335nmWnHTHraIFK6AntfuXyCET5STmdVe2XbvZl7nFhp
         BO6DQqYEqPgl4aaK7NM2AXkY++fkunJiXeaavy0hWBed6lC73frJYMnYO1aVCTLtlrF1
         +S4ovABSl4rLVsvoQQEbRdYYJjfqE1/WP2IX9ovrmrc8rEP1iDam+gsllSJ/H1DQHFmm
         O1ZA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=7nQgkz7v7kjXkgg5q9M0yPzBFdiLLV6nxJ8RkjNgpeA=;
        b=g1oMw62aYEEmUIJXAs/R6lIOBXSXZMNE1WKnQV6z8tRwXaZ58VjgMJdkJQQ6EqgWwJ
         1h5hkhRF7ssmUllreGdvSmb4C+esAyt3BYeYqU9B6r/P/FK/7TDfHzTMVMICoQ0L6f4H
         9vuatgjDLXbo+GQfhBqxVMKdhFQIzRbtV8BVJJ6ubdTyJz1U55eShNYeEQxYDf9sqPRl
         +xIT9cyniCF9dVfNxKTG37uwdCEP/5y1adZKRF637JQCz3VJen3q8kB3W9XyWr+rUS55
         wihgol/0e0MzRwpIDLzsDn5x3zpMGDdh5s4nDX+UUf8HmY/qSt5y42NNnrqdifqP5LjD
         oL0g==
X-Gm-Message-State: AOAM530E7JqgyOVFEOb24x1om8vsX1oUbvZIHKK5Laaq55FJSdp1CRka
	qMcBrCZZDWoV5Azh4P8nAVI=
X-Google-Smtp-Source: ABdhPJy8y+YAcCkoYKQg66iG8xEkfr5244XcQiy8PJossc/WgLE8Gd946VbZxMMUHzTiblQoWh+b7A==
X-Received: by 2002:a2e:b0eb:: with SMTP id h11mr2037701ljl.350.1623843794328;
        Wed, 16 Jun 2021 04:43:14 -0700 (PDT)
From: Roman Skakun <rm.skakun@gmail.com>
X-Google-Original-From: Roman Skakun <roman_skakun@epam.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <rm.skakun@gmail.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops
Date: Wed, 16 Jun 2021 14:42:05 +0300
Message-Id: <20210616114205.38902-2-roman_skakun@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210616114205.38902-1-roman_skakun@epam.com>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <20210616114205.38902-1-roman_skakun@epam.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit fixes an incorrect conversion from cpu_addr to a page
address in the case where the virtual address was allocated through
xen_swiotlb_alloc_coherent() and may be mapped in the vmalloc range.
In that case, virt_to_page() cannot convert the address properly and
returns a wrong page address.

Detect such cases and obtain the page address using vmalloc_to_page()
instead.

The reference code for mmap() and get_sgtable() was copied from
kernel/dma/ops_helpers.c and modified to add the detection described
above.

To simplify the code, a new cpu_addr_to_page() helper was added.
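For reference, the dispatch the new helper performs can be sketched in
plain userspace C. The FAKE_VMALLOC_* bounds and both function names
below are made up for illustration only; in the kernel the real
counterparts are is_vmalloc_addr(), vmalloc_to_page() and
virt_to_page(), and the vmalloc window comes from the architecture's
memory layout:

```c
#include <stdint.h>

/* Illustrative bounds for a pretend vmalloc window; the real kernel
 * derives its vmalloc range from the architecture's memory layout,
 * so these exact values are an assumption made for this sketch. */
#define FAKE_VMALLOC_START 0xffff800000000000ULL
#define FAKE_VMALLOC_END   0xffff900000000000ULL

/* Stand-in for is_vmalloc_addr(): does the address fall inside the
 * (pretend) vmalloc range? */
int fake_is_vmalloc_addr(const void *addr)
{
	uint64_t a = (uint64_t)(uintptr_t)addr;

	return a >= FAKE_VMALLOC_START && a < FAKE_VMALLOC_END;
}

/* Mirrors the shape of cpu_addr_to_page() in the patch: dispatch to a
 * different converter depending on where the address lives.  Returns
 * 1 where the kernel helper would call vmalloc_to_page() and 0 where
 * it would call virt_to_page(), so the branch taken is observable. */
int cpu_addr_kind(const void *cpu_addr)
{
	if (fake_is_vmalloc_addr(cpu_addr))
		return 1;	/* vmalloc mapping: vmalloc_to_page() path */
	return 0;		/* linear mapping: virt_to_page() path */
}
```

The point of the dispatch is that virt_to_page() assumes a
linear-mapped address, which does not hold for vmalloc allocations.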

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
 drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 90bc5fc321bc..9331a8500547 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 
+static struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 static int
 xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 {
@@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
-	struct page *page;
+	struct page *page = cpu_addr_to_page(vaddr);
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (is_vmalloc_addr(vaddr))
-		page = vmalloc_to_page(vaddr);
-	else
-		page = virt_to_page(vaddr);
-
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
 	    TestClearPageXenRemapped(page))
@@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		     unsigned long attrs)
 {
-	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long off = vma->vm_pgoff;
+	struct page *page = cpu_addr_to_page(cpu_addr);
+	int ret;
+
+	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+
+	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off >= count || user_count > count - off)
+		return -ENXIO;
+
+	return remap_pfn_range(vma, vma->vm_start,
+			page_to_pfn(page) + vma->vm_pgoff,
+			user_count << PAGE_SHIFT, vma->vm_page_prot);
 }
 
 /*
@@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
 			void *cpu_addr, dma_addr_t handle, size_t size,
 			unsigned long attrs)
 {
-	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs);
+	struct page *page = cpu_addr_to_page(cpu_addr);
+	int ret;
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (!ret)
+		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+
+	return ret;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:43:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142889.263504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTwu-0005kT-Ic; Wed, 16 Jun 2021 11:43:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142889.263504; Wed, 16 Jun 2021 11:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTwu-0005kM-EE; Wed, 16 Jun 2021 11:43:12 +0000
Received: by outflank-mailman (input) for mailman id 142889;
 Wed, 16 Jun 2021 11:43:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=igu3=LK=gmail.com=rm.skakun@srs-us1.protection.inumbo.net>)
 id 1ltTws-0005kF-Vk
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 11:43:11 +0000
Received: from mail-lf1-x136.google.com (unknown [2a00:1450:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28481f55-5f02-4a9e-bded-afe8bdc3180d;
 Wed, 16 Jun 2021 11:43:10 +0000 (UTC)
Received: by mail-lf1-x136.google.com with SMTP id p7so3814561lfg.4
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 04:43:10 -0700 (PDT)
Received: from localhost ([178.151.124.169])
 by smtp.gmail.com with ESMTPSA id bp28sm222612lfb.188.2021.06.16.04.43.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 04:43:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28481f55-5f02-4a9e-bded-afe8bdc3180d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TRz4KFdLmerhp38zcm9b3IR/zrRQXeHz9t6YRT8nYG4=;
        b=BQ1EiQqyrVqVHYO1YJE1WkJqCisfV4zCqAT3TpOECMypGYpV2HVPzoVU78QyxUTGAf
         aAH1SaSFCB7L2Zwql0oaykhTd5SIZvsqRiyKGnR6Kxog148iIqumKXrnh0/VJ/veU9Ux
         wvSgoT6qHRxR8lAsqujwyx9M+PQupGHXXWZpxTPZG0r4b8vCndoHVlDaSmhzeTE2Manq
         3vxiQEtDa7fUD49JhUcZzcevSLWhTCg706FlcM3EuG69A56qczdMbi5WsoKwXdU2sXaJ
         ze1zdv4c9B+QJXP88ExqlT0wV7mh1mHgraR8vV7NhcZ77jHpxAgJjsi/6obs4dwP4k4B
         Da3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TRz4KFdLmerhp38zcm9b3IR/zrRQXeHz9t6YRT8nYG4=;
        b=fmUI/8oLWjJbxccEsAqHUtk7WcGpRC5d49YoBlSRDxYUrB7SI1t1s7CMBqiANmsthY
         moqy8+8KyjmsqllvnQ6Ct8LXzJLqYn6JFD19VaiNuTjnKQcZd00dt1Jn2oyIl1q9SZ88
         44BiqdvtKAY3dx+pqPk6dsl3xFg3z8sHGBRExP+mUdwOIfKiGcyA/38MndgpKLPIbXGD
         t1uXtENj00kmdV7O1bSf6VKpIck1ppS+KH3dzuYyOykGnFyaPHf8n1MIzgSQm3EXlAY8
         EtUq3aa5yMymyIPsf45vxoj5a5EEe+PUw7DcA/yU1BwzhbDQAJKwdEyD+fadIYhA0iLj
         miHA==
X-Gm-Message-State: AOAM530DKQEjBn8ws2DCWmQWUochB6RfCTfhNX11QPhMjt0uE+XoOOnI
	D2iNjDfKelqchXjnmQ7UEeI=
X-Google-Smtp-Source: ABdhPJwAYLX/JWWa/WTgHFPNNwbAi3jjfzTIsIUGQh6s1o9tPoHMdzNk1Uqk89bpuFeRS3vRXZ3c8w==
X-Received: by 2002:ac2:58e3:: with SMTP id v3mr3388446lfo.339.1623843788885;
        Wed, 16 Jun 2021 04:43:08 -0700 (PDT)
From: Roman Skakun <rm.skakun@gmail.com>
X-Google-Original-From: Roman Skakun <roman_skakun@epam.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <rm.skakun@gmail.com>,
	Roman Skakun <roman_skakun@epam.com>
Subject: [PATCH 1/2] Revert "swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable"
Date: Wed, 16 Jun 2021 14:42:04 +0300
Message-Id: <20210616114205.38902-1-roman_skakun@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This reverts commit 922659ea771b3fd728149262c5ea15608fab9719.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
---
 drivers/xen/swiotlb-xen.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..90bc5fc321bc 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -563,6 +563,31 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
 }
 
+/*
+ * Create userspace mapping for the DMA-coherent memory.
+ * This function should be called with the pages from the current domain only,
+ * passing pages mapped from other domains would lead to memory corruption.
+ */
+static int
+xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
+		     unsigned long attrs)
+{
+	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+}
+
+/*
+ * This function should be called with the pages from the current domain only,
+ * passing pages mapped from other domains would lead to memory corruption.
+ */
+static int
+xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
+			void *cpu_addr, dma_addr_t handle, size_t size,
+			unsigned long attrs)
+{
+	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs);
+}
+
 const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.alloc = xen_swiotlb_alloc_coherent,
 	.free = xen_swiotlb_free_coherent,
@@ -575,8 +600,8 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.map_page = xen_swiotlb_map_page,
 	.unmap_page = xen_swiotlb_unmap_page,
 	.dma_supported = xen_swiotlb_dma_supported,
-	.mmap = dma_common_mmap,
-	.get_sgtable = dma_common_get_sgtable,
+	.mmap = xen_swiotlb_dma_mmap,
+	.get_sgtable = xen_swiotlb_get_sgtable,
 	.alloc_pages = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 };
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 11:45:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 11:45:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142900.263526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTzN-0006xR-8B; Wed, 16 Jun 2021 11:45:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142900.263526; Wed, 16 Jun 2021 11:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltTzN-0006xK-4v; Wed, 16 Jun 2021 11:45:45 +0000
Received: by outflank-mailman (input) for mailman id 142900;
 Wed, 16 Jun 2021 11:45:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=igu3=LK=gmail.com=rm.skakun@srs-us1.protection.inumbo.net>)
 id 1ltTzM-0006xE-EF
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 11:45:44 +0000
Received: from mail-lj1-x232.google.com (unknown [2a00:1450:4864:20::232])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51577879-0491-4a3f-82ff-b34a26e2ec2d;
 Wed, 16 Jun 2021 11:45:43 +0000 (UTC)
Received: by mail-lj1-x232.google.com with SMTP id b37so3332313ljr.13
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 04:45:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51577879-0491-4a3f-82ff-b34a26e2ec2d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=dtd/r4kQdNTFgMzEtuMDxRln8AeazBFMNuFE+IwxLQ0=;
        b=tM9YyD9dbmrU6QAIak9v1L1Hze3NeFQPPNBEtD4omgZjEyhQcUg1q3wkLUmDpwXXOT
         6u+FMb976YVcaypY+NkjNsra9tUi2NzMsuaXdS253tuTX+l569fNkUgrvOzggDbWTp8W
         ENky2qq1vTSb11/DZgVTmvzbf9N+SWxrHqqLmE895cQUdLDT00v5gJHuqst+RLC6DELx
         AZubyEXZLMybvYei/eZAvnHzFQnU3mWT2NbfPe9zTz+GpRTN5egfyB5q61LhPml0mNjt
         9xsP8j3AJPQS9llNh712rLD/eGmKOg1uNQpb4kiRypPtttLcmaoFcMAzMHxwwR+iDCTV
         h9lw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=dtd/r4kQdNTFgMzEtuMDxRln8AeazBFMNuFE+IwxLQ0=;
        b=hjGG4WkHoenU8KfGsYdL8Km3zgxM4GlodKf0fALGqsDRN+hmPagNeTfhGUkOAY3IFS
         +NFSPPYVGk5AeVZr7N0hstKs/O642BJb04YWQ7xvIPis4frBW5lcHP1MfgR6olr8j35Y
         t+bm407g3/ChUm7KJWQ63mtDki9mZ6AG7jXFbTJl6ueL8tGezuowP0DcQpUN1QTo8xhI
         BygEbTgQdZthDBvnpJXKXQZKRViUFkYwAmlDxiVUXaN3ogPWpuBiBW8mgmmZF+sPjThY
         TQJWGYU6cXTwiWjnkSBkUvxr/KF6Wd+BltrU65fTDPFjst6IlIpBCwDIuox4/ta++/es
         J1DA==
X-Gm-Message-State: AOAM532enZTvopB1K2VpN56fXxLFwJaNWHyVv3o8v1jdm0ugL5BWWZN3
	5yESG2vz95boKRSViaBeTlQuzMaTVIVT7YFkPqbiVGXjKqp89Q==
X-Google-Smtp-Source: ABdhPJzI40Mx409ixL6A8VaGSABYZuuy5E1EPuBErhnFhAQBqUvi8LwmZwSBohmrfaadIq0f8nBwGp02vfH8jjD6rDc=
X-Received: by 2002:a05:651c:291:: with SMTP id b17mr2735729ljo.497.1623843942771;
 Wed, 16 Jun 2021 04:45:42 -0700 (PDT)
MIME-Version: 1.0
References: <20210611095528.9230-1-roman_skakun@epam.com> <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <CADu_u-MqALJkG8RJHrr65vC_sHu-UyvCGwwUfaBong0eir5+hQ@mail.gmail.com> <fbaeaad5-ea8a-ff2d-2e62-d27b4d234e8e@oracle.com>
In-Reply-To: <fbaeaad5-ea8a-ff2d-2e62-d27b4d234e8e@oracle.com>
From: Roman Skakun <rm.skakun@gmail.com>
Date: Wed, 16 Jun 2021 14:45:32 +0300
Message-ID: <CADu_u-MgdJYH-sf57AL_Fg3AnjpHoZ1Bk1nxytmoupJc=hJDfw@mail.gmail.com>
Subject: Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, 
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Roman Skakun <roman_skakun@epam.com>, 
	Andrii Anisov <andrii_anisov@epam.com>
Content-Type: text/plain; charset="UTF-8"

> We make sure that we allocate contiguous memory in xen_swiotlb_alloc_coherent().

Understood.
Thanks!

-- 
Best Regards, Roman.


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:09:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:09:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142910.263537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltULk-000130-Aq; Wed, 16 Jun 2021 12:08:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142910.263537; Wed, 16 Jun 2021 12:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltULk-00012t-5q; Wed, 16 Jun 2021 12:08:52 +0000
Received: by outflank-mailman (input) for mailman id 142910;
 Wed, 16 Jun 2021 12:08:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EZWs=LK=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1ltULj-00012n-7V
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:08:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43c8636c-5b13-4f09-ad0c-2f17bcfa809b;
 Wed, 16 Jun 2021 12:08:50 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CD51A61245;
 Wed, 16 Jun 2021 12:08:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43c8636c-5b13-4f09-ad0c-2f17bcfa809b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623845329;
	bh=QlkI38WT3hL5S6bnBl68cYI3mNCvqwOg2BtN6RClOD8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=gNoqbf4BoEB+mMSkF0LbPACcE8hfdhWziBuc3On0yCSF1OPpARGrphJEZzi5R8zhj
	 EQBk9eSasfJ21LDYEs5fQgad/5i5kQ4hvjwr2VzkCT5qsBkbBa2YuO73damjDS9tPT
	 gEWv6ceU59Ud8WyKcV32BLXCIyoDzCKKiI1Qx1b1D7h7TGQnS6Yc8Hh4jwg0vvdXHU
	 1ESCA6NJ9d+xly0Ts4XCXJSqpVE+kwtWKvwUzczVJ12VkVaF8vrrLoIIMNmcUpnUot
	 CLEWTlb2HNfBG1sx2OnHpHCbJjguomfMTK9NnEnMModJku7GBFKFrxYfN57naGCL26
	 kwPvkXDDNtLyg==
Date: Wed, 16 Jun 2021 13:08:38 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 00/12] Restricted DMA
Message-ID: <20210616120837.GA22783@willie-the-truck>
References: <20210616062157.953777-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210616062157.953777-1-tientzu@chromium.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

Hi Claire,

On Wed, Jun 16, 2021 at 02:21:45PM +0800, Claire Chang wrote:
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
> 
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
> full chain of exploits; [2], [3]).
> 
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> 
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
> 
> v12:
> Split is_dev_swiotlb_force into is_swiotlb_force_bounce (patch 06/12) and
> is_swiotlb_for_alloc (patch 09/12)

I took this for a spin in an arm64 KVM guest with virtio devices using the
DMA API and it works as expected on top of swiotlb devel/for-linus-5.14, so:

Tested-by: Will Deacon <will@kernel.org>

Thanks!

Will


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:34:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142919.263548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltUkA-0004B3-D9; Wed, 16 Jun 2021 12:34:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142919.263548; Wed, 16 Jun 2021 12:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltUkA-0004Aw-9k; Wed, 16 Jun 2021 12:34:06 +0000
Received: by outflank-mailman (input) for mailman id 142919;
 Wed, 16 Jun 2021 12:34:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOo1=LK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltUk8-0004AS-I9
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:34:04 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f2f0a5e-baf2-43c6-b8bf-69578b065bc0;
 Wed, 16 Jun 2021 12:34:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f2f0a5e-baf2-43c6-b8bf-69578b065bc0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623846842;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Q6wRThENTrA6WiGjPmxl70N/Hn0Hfd0Ejyik4wKHFvU=;
  b=K2Hu57x/tGDzzp9LEGT3QypE4MMCLAe47UXz5i936lLhLDSD/Nw0Nz7c
   a9e+ak6j+nwVRsCZ1aagWsFkAGBMQ6YfTa28OQF0YWro3zpIE6GlABVLh
   53Tld9QixN7xyE7w4WMgK4LCKlosAp3GNpoPDTr1rC1sYy0wtL450NGE2
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: p7161nYTEOC4m0lgqqlWzc5pHB7ONGNQk51GvKvjnK09DFHgeou5eWTlmqwTvFIytTMIHSLySi
 ORUXQevn189nTmqjG/TFSF9qC3xZYW0jDb/NiU9rWcB9mHg+Lh8LsmAuOXIAocDGEnO3e3y0qg
 8jeYWQhcDapW7L+DQCTrdS6mh8/+yXHDNE2wNhqOwui/sMZQOXeIcI6SEHPP1MSEenJoAEF4La
 l6JEhQIPC/mAx4IL5rVz9pdnSP1w3iUTr0zNP9i8i728TzvTBCEDWci36JzfGJYMTFbM6VumbZ
 zvU=
X-SBRS: 5.1
X-MesageID: 47834587
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:tG+lQa1ELjsnmHZDLX0s+wqjBV9yeYIsimQD101hICG9Lfb3qy
 n+ppsmPEHP5Ar5OEtBpTiBUJPwJk80hqQFn7X5Wo3SIzUO2VHYUL2KiLGC/9SOIVyEygcw79
 YYT0E6MqyMMbEYt7eJ3ODbKadZ/DDvysnB7o2yvhQdL3AfV0gj1XYeNu/yKDwEeOAsP+tdKH
 Pz3Lsim9PtQwVsUiztbUN1L9Qr6ue7267OUFojPVoK+QOOhTSn5PrTFAWZ5A4XV3dqza05+W
 bIvgTl7uH72svLiyP05iv21dB7idHhwtxMCIiljdUUECzljkKNaJ56U7OPkTgpqKWE6Uoskv
 PLvxA8Vv4DpU/5TyWQm1/AygPg2DEh5zvJ0lmDm0bupsT/WXYTF9dBrZgxSGqa12MQ+PVHlI
 5b1WOQsJRaSTnamj7m2tTOXxZ20mKpvHsZl/IJhXA3a/pcVFZol/1awKppKuZGIMqjg7pXVt
 WGTfuspMq+SGnqKkww5QJUsYWRth1ZJGbyfqAA0vblmQS+0koJl3fw//Zv6EvowqhNAKWs19
 60RZiAq4s+B/P+TZgNSdvpEvHHRlAkf3r3QSqvyAPcZdA60jT22sXK3Ik=
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="47834587"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ftQkgq9o/ElLvgTX7wIuuAwiMzSDaY29tsItiD3eiJ9Jd0zT2O8ud492cRsplk2J5wAHHBv76i8RusQ9dxMOd/oOx2EUin1XzwtblmZ4UrXwYYIlgUE6xEHsDEaIWhpsqOps1HSyzMKBnIZNXEtpCUUya4pZnwcKHQVD7Y1N2/edaZLVFnjbs4oLyU/9DN/KcT7P6mCd/QWPcEwjNFQZA8Ou7qsXu5LhTh5yk+Ha2xAtaO9nQkNdP2DkHlFnW4jhWgpo/+nUb/8zr/GC1qGBRcEKRe2RhI2N/3j7QegMy8qZfoa3D35AioHgpmRSymO2tAedTBBT6Qmkdib+7akjQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iYxgsDO0MXLgHa8Lpb26RFkwNjpPFEK7vJHhmqcMMcE=;
 b=fOBTcUN2IJF1CEzApJy27UpZbwbIx8VuSHAasZcF4+vUGkYGPsFWjyQPaAghcubS3+XZrnkbiAS3S7dZ5+8sV2nFIIuwTW2xZcVWekvnSGW56+BFh2OiOIfT7c6Qh6OyuyVNEnp9AK+U67h5J0iZ9EWYbxqVQA6g8s6WNEqlI7clB2m0ASerYckWKsi9BfCfzrkonEMmnpwchPxa7WBVmQDmcUD+ko9D0/L1cuk5tseeOj1cCxZdRYkRRs294XfpmWxaixq6YCEIcaW1TGPVx4rQjCzybLUGxk8GW9ax6on2NKoYFVtzGcOrGoyAFM+JVhLDFQYBGtfadkGdoIHNng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iYxgsDO0MXLgHa8Lpb26RFkwNjpPFEK7vJHhmqcMMcE=;
 b=qzKKimd85OPiHjxGkaHcGjcYHbwFq6iCerzk6p7+VRfUYtyqTmES2ywMSJFUlPvik8mbdh4lQm8CiIHLxwxozSkemlH31tgi6679wIhkCiLpPnm24PkKxng/uuYq+iMZ5tAfe39AZA1WZK7NxRc+CYspvQ0PLh7idSlW2IdWvo4=
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC11
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210615212613.6270-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
Date: Wed, 16 Jun 2021 13:33:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210615212613.6270-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0076.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1485aab1-7e3e-4688-c789-08d930c3034a
X-MS-TrafficTypeDiagnostic: BYAPR03MB3989:
X-Microsoft-Antispam-PRVS: <BYAPR03MB3989D43AA7292677E7FC5210BA0F9@BYAPR03MB3989.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:312;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 1485aab1-7e3e-4688-c789-08d930c3034a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 12:33:58.4402
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pXxwQloa2I3saMSj5m+S6aVoPPsGkhZ4xbcwq9wVrhXkTPDmcda+B1H6n+m47AW+KqIz7bNRzADPQvJaMH0Gc0zZd9HNJWNtYKlS8P653pc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3989
X-OriginatorOrg: citrix.com

On 15/06/2021 22:26, Olaf Hering wrote:
> Use a snapshot which includes commit
> f3f568e382a5f19824b3bfc6081cde39eee661e8 ("[crypto] Add
> memory output constraints for big-integer inline assembly"),
> which fixes build with gcc11.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  tools/firmware/etherboot/Makefile | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
> index ed9e11305f..23b3f6ca9d 100644
> --- a/tools/firmware/etherboot/Makefile
> +++ b/tools/firmware/etherboot/Makefile
> @@ -10,7 +10,8 @@ else
>  IPXE_GIT_URL ?= git://git.ipxe.org/ipxe.git
>  endif
>  
> -IPXE_GIT_TAG := 988d2c13cdf0f0b4140685af35ced70ac5b3283c
> +# put an updated tar.gz on xenbits after changes to this variable
> +IPXE_GIT_TAG := bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e

CI says no.

Gitlab CI is currently fairly red because of a clang build fix which
hasn't made its way into master yet, but this job:

  https://gitlab.com/xen-project/patchew/xen/-/jobs/1349871230

shows a real failure on CentOS 7.

...
  [VERSION] bin/version.rtl8139.rom.o
  [AR] bin/blib.a
ar: creating bin/blib.a
objcopy: invalid option -- 'D'
Usage: objcopy [option(s)] in-file [out-file]
...
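The failure is an old binutils rejecting a flag the new iPXE snapshot passes to objcopy. One way to avoid tripping over this class of problem is to probe the tool's `--help` output before relying on the option. The sketch below is purely illustrative and not part of the iPXE or Xen build systems; the helper name and sample help strings are made up for the example:

```shell
# Hypothetical probe: decide whether a given objcopy understands -D
# (deterministic archives) by inspecting its --help text.
supports_deterministic() {
    # Newer binutils list "-D" / "--enable-deterministic-archives" in --help.
    echo "$1" | grep -q -- '-D.*deterministic\|--enable-deterministic-archives'
}

# Sample help excerpts standing in for new and old binutils.
help_new='-D --enable-deterministic-archives  Produce deterministic output'
help_old='-g --strip-debug                    Remove all debug symbols'

supports_deterministic "$help_new" && echo "new binutils: -D ok" || echo "new binutils: no -D"
supports_deterministic "$help_old" && echo "old binutils: -D ok" || echo "old binutils: no -D"
```

In a Makefile this kind of probe would typically gate the flag, e.g. only appending `-D` when the probe succeeds, so the build degrades gracefully on CentOS 7-era toolchains.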

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142932.263562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV11-0006Th-00; Wed, 16 Jun 2021 12:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142932.263562; Wed, 16 Jun 2021 12:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV10-0006Ta-TK; Wed, 16 Jun 2021 12:51:30 +0000
Received: by outflank-mailman (input) for mailman id 142932;
 Wed, 16 Jun 2021 12:51:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOo1=LK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltV10-0006TU-41
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:30 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3455a608-2e4b-4d91-9e3a-95f5a5e1aacc;
 Wed, 16 Jun 2021 12:51:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3455a608-2e4b-4d91-9e3a-95f5a5e1aacc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623847889;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DWEAnpv/NQlwUw+QpENGwlmp+K0qmMQIa0yQLbbWvc4=;
  b=DqKltO6cG42swTJtvJJXPFG24gz0KJxd+zmOO6sC9BCeE8Z0DAahRCSx
   zCUOcbEHAeqESL4OudAPIMeqTIdcojLU5qcVYUZrI2IAw1yZ3qyR7WKvE
   MtIT93Td00uMkxLwXzbM8PUde7wHhNo2TB3eRRxichDBEGh65cpnCCoqR
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46626772
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46626772"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ED9eCKfnovD1BHLiwJ6UVo9PxzMflz869q9jrQtBe0iKWZ3ai8u/2oJMaLQFMZOG5AfOKUs14cbPXBlw0aWc3f3Rj1OFdae8MRhncDwmEeYDZF8nfluzRr6Je61G7iZVrENAbmdPhzy3uoK9IzhCPERYfqQI7AJZ6yrOq07cSl+F69pir5iDfXOgWawS5yhyaVNHZsSZS7OSOfsZzaTu9Ziy+fj0dbPmwbWG5df922tMVwnmpDXXu/XM5TLRjGE4JUYoZVqa9Vw45DqIs4xLHhgB5ylsKvQUPlSvZXPMFmbpLeYn/3TKBGnPRgMndEYg/R3pcMs2ioJKZSP2dWxjUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bL1/xi8M78gk7WGmiYauEb85Z9lHkvAkyZCdZIgmTnw=;
 b=bkRW7lDS8ogRTyuVx7Ak2GzLfjGfo7zeWtehvTXzb8EfMYcygboK5icegTw6QBRGalYt/eOjqq21iqzCtYg5tg5cDV/XxwimTSECC6DsbJj1jnia5eukJ5j4O0QxCgkE9zAlviYG9FBkZh01oKi0agCVT9rdHC3XyU9BxdIVXYcEmqajK7nE+QJxJVJcfg0PDLX0clBbHWlEvFGdL6MKuNnjBPBbvrSGXplgEvLc16qn2usbxDrcyx83uzXeiKR4h44uxq9t7YHJG/Wcnnet0s4mv4Y9QusHU8MDHTYmnG+gxSNi66IF983adKqBCQNheDsHe9fzUBoEYoY+ZydMBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bL1/xi8M78gk7WGmiYauEb85Z9lHkvAkyZCdZIgmTnw=;
 b=PRmPZvNKWm5ZLsmqIXcg6FkZFsFkspcbWJDfOrfr3+WXkP4/ACYcMDM0S3oA1C68msPAJvL8WsQ34zPSy/jztH4U9Zo9cFbpDtkG95T7Oh7SHV2nccbr7s2aTW1fY5O5KpnCais5p/TemGeR0cgqjUofdZ9aE6jaMwMfVC5Tx+A=
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-2-andrew.cooper3@citrix.com>
 <04f641e6-04bd-8884-4b08-4c8f9a418b0b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/5] tools/tests: Drop obsolete mce-test infrastructure
Message-ID: <ba55ec10-73f0-8e1f-f0a5-6fb9a1155515@citrix.com>
Date: Wed, 16 Jun 2021 13:51:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <04f641e6-04bd-8884-4b08-4c8f9a418b0b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO3P265CA0003.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 90cc1049-5215-49ec-7d72-08d930c571e7
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5885:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB588556B2B1AC995BC182530ABA0F9@SJ0PR03MB5885.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 90cc1049-5215-49ec-7d72-08d930c571e7
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 12:51:22.8881
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uP9GGcxIWzdwWsnddS7NHfShKT2dUay2tWKyeLXh1BeYJMsoewvC9xzENkg8eT99RiMogzLpVNKMYllvFjp2Ze0J4kJ7YOaiTdolRkekZFU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5885
X-OriginatorOrg: citrix.com

On 16/06/2021 07:46, Jan Beulich wrote:
> On 15.06.2021 18:19, Andrew Cooper wrote:
>> mce-test has a test suite, but it depends on xend, needs to run in-tree,
>> and requires manual setup of at least one guest, and manual parameters
>> to pass into cases.  Drop the test infrastructure.
>>
>> Move the one useful remaining item, xen-mceinj, into misc/, fixing some
>> minor style issues as it goes.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> While I'm not generally in favor of dropping testing code, given the
> constraints you mention
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Testing at this level is really in the OSSTest/XenRT realm, as it
involves coordination between guests and checking logs after the fact.
However, the need to pass raw addresses around to make any of it work in
the first place makes this manual in practice.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142933.263574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV17-0006mT-8z; Wed, 16 Jun 2021 12:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142933.263574; Wed, 16 Jun 2021 12:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV17-0006mM-5L; Wed, 16 Jun 2021 12:51:37 +0000
Received: by outflank-mailman (input) for mailman id 142933;
 Wed, 16 Jun 2021 12:51:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV16-0006lZ-1h
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:36 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50c90775-fee9-4da9-8daa-1e23d6d55a8e;
 Wed, 16 Jun 2021 12:51:34 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpWtln
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50c90775-fee9-4da9-8daa-1e23d6d55a8e
ARC-Seal: i=1; a=rsa-sha256; t=1623847893; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=SpoGYODFxJ/5rDv5XIvz1RqNoqFifqHZwD6DLa4DNHnd/lo3KVLAD+BiLqnbgoT9hL
    oRfHvLbVJgxtnCEUshlXhPJY1QHDZY6xU0kGR8+IYPP13Dfz/Zqqi6tZhZkOK/P7pcGJ
    PIr1eTNNKXXGvvyXMi8TolFvUjhXpv0amWLuGHHzCgvwAYfnkeYCZ97Kww3ulYS9aGli
    TRTuiEmogda+iOSj/QhhdlRIn5EEhIXVMVMbr4Yt0kuzc52oTZOSPW7c4RxxgUDITwmR
    GBAYzgnF4JPQow6k1p4KVzdZNF+6Ul6T3wIQXioBZok0K4F8glSMk1EgEQrYlyBWvrBG
    0GJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847893;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=FAEv0aYDzvB+OwgbknaQXf2VrLPGXUF/L40vFVnPX+s=;
    b=tXIcAeMbFHeCFS6CCX7DQQAmTKxKfqlKccfljrlmWM5kVwJ++lVIUfqL54QrLIeFQI
    RXvMhvSK8q61MkHO2eJh5UlSjCsw5VcRWASYwfS7zNaWFZTAqlZYp+Qvk6RW1xzQgcIc
    Y4EiQpvE0Kx19c2+pXOsOMGmweOsmmxeOCxPUO3DTAArsXq9uz+mFTlG2ymZ7FBydksp
    nBe2UqHf2NkhGEjqZySO0EImfoBEv5iQI/RrEQQcmtYheDA6cOTzost/mvSOuEw8iugE
    iGKngi3+8CvaLuOsM3NcDWwPCcnRJfFFvKMdRU/2MxbCKNC0npigTnSp643oS7a5JuWx
    BafQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847893;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=FAEv0aYDzvB+OwgbknaQXf2VrLPGXUF/L40vFVnPX+s=;
    b=MphKgNpmJvNkbN9WvF7ZtMOP5zZx20+IRAmEaA1JVzYN8asMzg4gbpzK708PHsmh5s
    kCtPjbsoIUYpibIeaeVSVNSCdwNE4JS5VRE6VuDACHtgQLRaCG0iwjpITc5FIbypowsq
    t7TxuM3tcZkLNpWj9ClW1GusLhtOjMHGHMqaWL1hvx/B1IF4NfFsKEgVJoNwaaFk73ic
    1CF1sAK2iLJQv7xRCFSGk+sbpsJWxplxYfUyTZIQ5idjJmi/2wzrVe+FzqNJ6yX5uyrE
    Jmteb896IDU4Vlp975P9KpMxftE3kFPTcG+dPwQ4deNz+LCbmRtXgO+ou878pVxLhwAR
    Cjcw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [PATCH v20210616 00/36] leftover from 2020
Date: Wed, 16 Jun 2021 14:50:53 +0200
Message-Id: <20210616125129.26563-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Various unreviewed changes, rebased to 4bcf6433ee.

Olaf Hering (36):
  hotplug/Linux: fix starting of xenstored with restarting systemd
  tools: add API to work with several bits at once
  xl: fix description of migrate --debug
  tools: create libxensaverestore
  MAINTAINERS: add myself as saverestore maintainer
  tools: add readv_exact to libxenctrl
  tools: add sr_is_known_page_type to libsaverestore
  tools: use sr_is_known_page_type
  tools: unify type checking for data pfns in migration stream
  tools: show migration transfer rate in send_dirty_pages
  tools: prepare to allocate saverestore arrays once
  tools: save: move mfns array
  tools: save: move types array
  tools: save: move errors array
  tools: save: move iov array
  tools: save: move rec_pfns array
  tools: save: move guest_data array
  tools: save: move local_pages array
  tools: restore: move types array
  tools: restore: move mfns array
  tools: restore: move map_errs array
  tools: restore: move mfns array in populate_pfns
  tools: restore: move pfns array in populate_pfns
  tools: restore: split record processing
  tools: restore: split handle_page_data
  tools: restore: write data directly into guest
  tools: recognize LIBXL_API_VERSION for 4.16
  tools: adjust libxl_domain_suspend to receive a struct props
  tools: change struct precopy_stats to precopy_stats_t
  tools: add callback to libxl for precopy_policy and precopy_stats_t
  tools: add --max_iters to libxl_domain_suspend
  tools: add --min_remaining to libxl_domain_suspend
  tools: add --abort_if_busy to libxl_domain_suspend
  tools: add API for expandable bitmaps
  tools: use sr_bitmap for populated_pfns
  tools: use superpages during restore of HVM guest

 .gitignore                                    |   2 +
 MAINTAINERS                                   |   6 +
 docs/man/xl.1.pod.in                          |  22 +-
 tools/hotplug/Linux/init.d/xencommons.in      |   2 +-
 tools/hotplug/Linux/launch-xenstore.in        |  40 +-
 .../Linux/systemd/xenstored.service.in        |   2 +-
 tools/include/libxl.h                         |  32 +-
 tools/include/xenguest.h                      | 186 -----
 tools/include/xensaverestore.h                | 207 ++++++
 tools/libs/Makefile                           |   1 +
 tools/libs/ctrl/xc_bitops.h                   |  28 +
 tools/libs/ctrl/xc_private.c                  |  57 +-
 tools/libs/ctrl/xc_private.h                  |   1 +
 tools/libs/guest/Makefile                     |  11 -
 tools/libs/guest/xg_dom_x86.c                 |   5 -
 tools/libs/guest/xg_offline_page.c            |   1 -
 tools/libs/guest/xg_private.h                 |   5 +
 tools/libs/guest/xg_sr_restore_x86_hvm.c      | 274 --------
 tools/libs/light/Makefile                     |   4 +-
 tools/libs/light/libxl_dom_save.c             |  24 +
 tools/libs/light/libxl_domain.c               |  10 +-
 tools/libs/light/libxl_internal.h             |   7 +
 tools/libs/light/libxl_save_helper.c          |   1 +
 tools/libs/light/libxl_save_msgs_gen.pl       |   5 +-
 tools/libs/light/libxl_stream_write.c         |   9 +-
 tools/libs/light/libxl_types.idl              |   1 +
 tools/libs/saverestore/Makefile               |  38 ++
 .../xg_sr_common.c => saverestore/common.c}   |  76 ++-
 .../xg_sr_common.h => saverestore/common.h}   | 253 ++++++-
 .../common_x86.c}                             |   2 +-
 .../common_x86.h}                             |   2 +-
 .../common_x86_pv.c}                          |   2 +-
 .../common_x86_pv.h}                          |   2 +-
 .../nomigrate.c}                              |   0
 .../xg_sr_restore.c => saverestore/restore.c} | 617 +++++++++--------
 tools/libs/saverestore/restore_x86_hvm.c      | 645 ++++++++++++++++++
 .../restore_x86_pv.c}                         |  70 +-
 .../xg_sr_save.c => saverestore/save.c}       | 165 ++---
 .../save_restore.h}                           |   2 -
 .../save_x86_hvm.c}                           |   7 +-
 .../save_x86_pv.c}                            |  33 +-
 .../stream_format.h}                          |   0
 tools/libs/uselibs.mk                         |   4 +-
 tools/ocaml/libs/xl/xenlight_stubs.c          |   3 +-
 tools/xl/xl_cmdtable.c                        |  26 +-
 tools/xl/xl_migrate.c                         |  54 +-
 tools/xl/xl_saverestore.c                     |   3 +-
 47 files changed, 2006 insertions(+), 941 deletions(-)
 create mode 100644 tools/include/xensaverestore.h
 delete mode 100644 tools/libs/guest/xg_sr_restore_x86_hvm.c
 create mode 100644 tools/libs/saverestore/Makefile
 rename tools/libs/{guest/xg_sr_common.c => saverestore/common.c} (72%)
 rename tools/libs/{guest/xg_sr_common.h => saverestore/common.h} (68%)
 rename tools/libs/{guest/xg_sr_common_x86.c => saverestore/common_x86.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86.h => saverestore/common_x86.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.c => saverestore/common_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.h => saverestore/common_x86_pv.h} (98%)
 rename tools/libs/{guest/xg_nomigrate.c => saverestore/nomigrate.c} (100%)
 rename tools/libs/{guest/xg_sr_restore.c => saverestore/restore.c} (66%)
 create mode 100644 tools/libs/saverestore/restore_x86_hvm.c
 rename tools/libs/{guest/xg_sr_restore_x86_pv.c => saverestore/restore_x86_pv.c} (94%)
 rename tools/libs/{guest/xg_sr_save.c => saverestore/save.c} (88%)
 rename tools/libs/{guest/xg_save_restore.h => saverestore/save_restore.h} (98%)
 rename tools/libs/{guest/xg_sr_save_x86_hvm.c => saverestore/save_x86_hvm.c} (96%)
 rename tools/libs/{guest/xg_sr_save_x86_pv.c => saverestore/save_x86_pv.c} (97%)
 rename tools/libs/{guest/xg_sr_stream_format.h => saverestore/stream_format.h} (100%)



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142934.263585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1B-00076S-LZ; Wed, 16 Jun 2021 12:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142934.263585; Wed, 16 Jun 2021 12:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1B-00076F-HT; Wed, 16 Jun 2021 12:51:41 +0000
Received: by outflank-mailman (input) for mailman id 142934;
 Wed, 16 Jun 2021 12:51:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1A-00075D-8n
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:40 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.164])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e989e65-495b-4d05-8988-fae5248b83ef;
 Wed, 16 Jun 2021 12:51:39 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpXtlp
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e989e65-495b-4d05-8988-fae5248b83ef
ARC-Seal: i=1; a=rsa-sha256; t=1623847894; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=FeGjylmrxkl+Ci1PpMJGW1Tax2bnjr7O4fv8cyrOoEZvjjd28zNOYElLMC/KY6smkZ
    MR/Ql/hggF3Op+FrIVRcBDIT0Xjd53gWafL8smW/Ze4Ais3Qn6WkRwNFyEV7rques3NI
    3kWV42VBxgxLKEKiTebKmgHaC8/ugSPPwlSZnGT7tsOtT2EvMIck+oxUMs7tQzPyi/QY
    cLpdl19/1WkfVvwKvuDQkZVQOgq/DSk3s1D9uGJ6qNgw6+e5RkncKEXiCFOfmzENnztA
    Ihg+zfDU3UIcjvZkxOpQVg2fn2lVcgZMlZUVXRf/z8BMGgmUgAp8h4wKkGFMd7kDq654
    kJ1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847894;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=DqV9Bza4xh4lmngXwj1J3LSRM3kpJQsPapKBO4Z/YvI=;
    b=kh+V1rAx7xgK2zFpG6lcaLCQakgeMdWrwOYnXxg8ivaAECy3YwjOPQ617eESscoi8d
    31+z2zGp4Nd8vEaoj8h00aVnVZTdPSzsBL4XWvGOac3xjFx4oIkfBuswAH83QrHL2pSm
    0EYkkAMZLD4ZpWms/eLy33qxc6oXwFCVdbuw6iACNwmOv/WKNLPCpVRVMgDYJV0TUCDe
    EeMggkNZ4Mby7iJw0gEYIVXQkPOSZYM167QgaD8F42jdndQSR82EYkEfg3oFlt7D2j8r
    UyluyDkkau2rCTNlgNnNmh/Z3pqihoJSEVChtoDm1BN2AmSLG2gUM6s3+DaM2vDUXf/f
    M0wQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847894;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=DqV9Bza4xh4lmngXwj1J3LSRM3kpJQsPapKBO4Z/YvI=;
    b=ChJfkLUGcpV13DyNrpdOMxO0gJbRtLCxPMzv0Spyvh1kCIfFls2vLalZ4Mdeg1GSDG
    mo4tAr7da5bWH5afDonQsTr7EAUfYdKNAqKktQS9YbRSVXp8EuNcqHQJBS2HgJoudS7l
    MHa5q/KfOoa0USuMnvUQKM/RdidnYaaoQ+DZjGw1KysmVq1zhujC5e/p382DutvOVQbZ
    H5u1fSJv7zJz7vIxIdzmRVp9hMFfoV3b6USJGzrIKD/j6hxles3STf8pYZMynfZ6VxjA
    /KIQHWTE5UFpFwaaOlAxLLv/jfVuGuSMPjRmWzxwvrAzlMxRJAOJ2KXvjFv6cp9YkB39
    Dm6g==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 02/36] tools: add API to work with several bits at once
Date: Wed, 16 Jun 2021 14:50:55 +0200
Message-Id: <20210616125129.26563-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new API to test whether a fixed number of bits is clear or
set, and to clear or set them all at once.

The caller has to make sure the input bit number is a multiple of
BITS_PER_LONG.

This API avoids looping over each bit in a known range just to see
whether all of them are either clear or set.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- change return type from int to bool (jgross)
---
 tools/libs/ctrl/xc_bitops.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index f0bac4a071..8e8c6efb45 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -3,6 +3,7 @@
 
 /* bitmap operations for single threaded access */
 
+#include <stdbool.h>
 #include <stdlib.h>
 #include <string.h>
 
@@ -77,4 +78,31 @@ static inline void bitmap_or(void *_dst, const void *_other,
         dst[i] |= other[i];
 }
 
+static inline bool test_bit_long_set(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+
+    return val == ~0;
+}
+
+static inline bool test_bit_long_clear(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+
+    return val == 0;
+}
+
+static inline void clear_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = 0;
+}
+
+static inline void set_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = ~0;
+}
 #endif  /* XC_BITOPS_H */


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142936.263600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1D-0007QE-2e; Wed, 16 Jun 2021 12:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142936.263600; Wed, 16 Jun 2021 12:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1C-0007P0-TD; Wed, 16 Jun 2021 12:51:42 +0000
Received: by outflank-mailman (input) for mailman id 142936;
 Wed, 16 Jun 2021 12:51:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1A-0006lZ-WD
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:41 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.220])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e6699dc-b1cf-43a2-81bb-a6bd85cbf143;
 Wed, 16 Jun 2021 12:51:39 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpXtlo
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e6699dc-b1cf-43a2-81bb-a6bd85cbf143
ARC-Seal: i=1; a=rsa-sha256; t=1623847893; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Cdxlo0nJQMRYmW173WlSYKFWf7/BTOjrKQt8Kj6rjd66akrj/b/65Gqx0zAZhprFEw
    b9MfAK4ql7VcFbOxknAtUQF4kP/445oU9FW6neSBS9rLYdduVeT1mV33QYzbp6RMZQcq
    dvjCcN0zAF0llJlX40JdmnW5SKgQoqqz/KjZPwsijKfIB2HBZHxQCaH8tYNAzBcbfokI
    AaHIBlHBHsE6aMZw6xKr/+jy2vAcsyvcsRe4MdAEKiA57KxzBBAcPuTMGGEmj3P7fNks
    zO+z2d2kPOr/J6iMstJD781c8pihaeG0mRHUAYh4csCzTSMQ61WmFebAJ2T/6oDUeaWh
    Ko/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847893;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=fttDAmpegUqwpOBB61SFRLpHBJB9wPmVo+V4iKyUqQw=;
    b=jwlIGgbZKKQpnjD9HSvE2RFElERcmHymT3lbI6MRI/8WtMQLgP/8QqVy9khLXRP9nx
    LaIIBB/sqeFglBzBuW43cCiCrWtb+fhF/LQ9ekSnMaPs7fOtugjuvJIcHD1RAeA7W0uc
    O+uViMr2ixMvHJEdeAhZFLyaXJQv5BYL4uKCiU5NiCv8o/teSTnJrYjYb81cySBD6drS
    7sBaxUX0Nxi93Dr8hltVhfqfNh1BZc+5I4l/xnTsfQM1wYdZeTOvchyWX7Ep3mYLBokp
    C6QtuPS+3pGYTn6+MvpTCjONG9a1k2DbhwmpvkBrN0k/TCt+rqPse4cf78Wc7nnzCArD
    orOw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847893;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=fttDAmpegUqwpOBB61SFRLpHBJB9wPmVo+V4iKyUqQw=;
    b=OjG0qsM2uRqMdqvr+J6VYuyfYZx+2bMTe2yoHEx5YW4o/f/enW5cCNozCLK+2rskLR
    91s0v4n92LNh/vJTe9y/0HSQeDR+x9soRnWkIbNawgmdSLT2p9ZqljbDEaMAOpsnF6nW
    5CRSYtkNGAiVQHy8w7wfM/8fNG/YpYfAXrh8QzJ/NsKRe/aRG1/ci6r7PcUTaxkFpDzl
    griQpPW8S6Kwkl7PxdQfQZhUM7WFoVERQODYRm08VHSufgKuaITlAE3K0dIsQmz68S+G
    xS7JSQy46bAsmICo3W0JcHagcUJPBWQJN4h2EX6wMSp04v3NjxuLBSksbyN7brLpE/SH
    OPKA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 01/36] hotplug/Linux: fix starting of xenstored with restarting systemd
Date: Wed, 16 Jun 2021 14:50:54 +0200
Message-Id: <20210616125129.26563-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A hard-to-trigger race between another, unrelated systemd service and
xenstored.service unveiled a bug in the way xenstored is launched
with systemd.

launch-xenstore may start either a daemon or a domain. In case a domain
is used, systemd-notify was called. If another service triggered a
restart of systemd while xenstored.service was being executed, systemd
may temporarily lose track of services with Type=notify. As a result,
xenstored.service would be marked as failed, and units that depend on it
would not be started. This breaks the entire Xen toolstack.

The chain of events is basically: xenstored.service sends the
notification to systemd; this is a one-way event. Then systemd may be
restarted by the other unit. During this time, xenstored.service finishes
and exits. Once systemd is done with its restart, it collects the pending
notifications and children. If it does not find the unit which sent the
notification, it declares that unit failed.

A workaround for this scenario is to leave the child processes running
for a short time after sending the "READY=1" notification. If systemd
happens to restart it will still find the unit it launched.

Adjust the callers of launch-xenstore to specify the init system:
do not fork xenstored with systemd, preserving the pid. This also avoids
the need for a sleep, because the process which sent the "READY=1" (the
previously forked child) is still alive.

Remove the --pid-file in the systemd case because the pid of the child
is known, and the file had probably little effect anyway due to lack of
PidFile= and Type=forking in the unit file.

Be verbose about xenstored startup only with sysv, to avoid interleaved
output in the systemd journal. Do the same for the domain case as well,
even if it is not strictly needed because init-xenstore-domain has no output.

The upstream systemd commit which is supposed to fix this:
575b300b795b6 ("pid1: rework how we dispatch SIGCHLD and other signals")

Signed-off-by: Olaf Hering <olaf@aepfle.de>

--
v04:
- do mkdir unconditionally because init-xenstore-domain writes the domid to
  xenstored.pid
v03:
- remove run_xenstored function, follow style of shell built-in test function
v02:
- preserve Type=notify
---
 tools/hotplug/Linux/init.d/xencommons.in      |  2 +-
 tools/hotplug/Linux/launch-xenstore.in        | 40 ++++++++++++++-----
 .../Linux/systemd/xenstored.service.in        |  2 +-
 3 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/tools/hotplug/Linux/init.d/xencommons.in b/tools/hotplug/Linux/init.d/xencommons.in
index 7fd6903b98..dcb0ce4b73 100644
--- a/tools/hotplug/Linux/init.d/xencommons.in
+++ b/tools/hotplug/Linux/init.d/xencommons.in
@@ -60,7 +60,7 @@ do_start () {
 	mkdir -m700 -p ${XEN_LOCK_DIR}
 	mkdir -p ${XEN_LOG_DIR}
 
-	@XEN_SCRIPT_DIR@/launch-xenstore || exit 1
+	@XEN_SCRIPT_DIR@/launch-xenstore 'sysv' || exit 1
 
 	echo Setting domain 0 name, domid and JSON config...
 	${LIBEXEC_BIN}/xen-init-dom0 ${XEN_DOM0_UUID}
diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index 019f9d6f4d..d40c66482a 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -15,6 +15,17 @@
 # License along with this library; If not, see <http://www.gnu.org/licenses/>.
 #
 
+initd=$1
+
+case "$initd" in
+	sysv) nonl='-n' ;;
+	systemd) nonl= ;;
+	*)
+	echo "first argument must be 'sysv' or 'systemd'"
+	exit 1
+	;;
+esac
+
 XENSTORED=@XENSTORED@
 
 . @XEN_SCRIPT_DIR@/hotplugpath.sh
@@ -44,14 +55,16 @@ timeout_xenstore () {
 	return 0
 }
 
-test_xenstore && exit 0
+mkdir -p @XEN_RUN_DIR@
+
+if test "$initd" = 'sysv' ; then
+	test_xenstore && exit 0
+fi
 
 test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 
 [ "$XENSTORETYPE" = "" ] && XENSTORETYPE=daemon
 
-/bin/mkdir -p @XEN_RUN_DIR@
-
 [ "$XENSTORETYPE" = "daemon" ] && {
 	[ -z "$XENSTORED_TRACE" ] || XENSTORED_ARGS="$XENSTORED_ARGS -T @XEN_LOG_DIR@/xenstored-trace.log"
 	[ -z "$XENSTORED" ] && XENSTORED=@XENSTORED@
@@ -59,13 +72,15 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 		echo "No xenstored found"
 		exit 1
 	}
+	[ "$initd" = 'sysv' ] && {
+		echo $nonl Starting $XENSTORED...
+		$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
+		timeout_xenstore $XENSTORED || exit 1
+		exit 0
+	}
 
-	echo -n Starting $XENSTORED...
-	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
-
-	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
-
-	exit 0
+	exec $XENSTORED -N $XENSTORED_ARGS
+	exit 1
 }
 
 [ "$XENSTORETYPE" = "domain" ] && {
@@ -75,9 +90,12 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 	XENSTORE_DOMAIN_ARGS="$XENSTORE_DOMAIN_ARGS --memory $XENSTORE_DOMAIN_SIZE"
 	[ -z "$XENSTORE_MAX_DOMAIN_SIZE" ] || XENSTORE_DOMAIN_ARGS="$XENSTORE_DOMAIN_ARGS --maxmem $XENSTORE_MAX_DOMAIN_SIZE"
 
-	echo -n Starting $XENSTORE_DOMAIN_KERNEL...
+	echo $nonl Starting $XENSTORE_DOMAIN_KERNEL...
 	${LIBEXEC_BIN}/init-xenstore-domain $XENSTORE_DOMAIN_ARGS || exit 1
-	systemd-notify --ready 2>/dev/null
+	[ "$initd" = 'systemd' ] && {
+		systemd-notify --ready
+		sleep 9
+	}
 
 	exit 0
 }
diff --git a/tools/hotplug/Linux/systemd/xenstored.service.in b/tools/hotplug/Linux/systemd/xenstored.service.in
index 80c1d408a5..c226eb3635 100644
--- a/tools/hotplug/Linux/systemd/xenstored.service.in
+++ b/tools/hotplug/Linux/systemd/xenstored.service.in
@@ -11,7 +11,7 @@ Type=notify
 NotifyAccess=all
 RemainAfterExit=true
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
-ExecStart=@XEN_SCRIPT_DIR@/launch-xenstore
+ExecStart=@XEN_SCRIPT_DIR@/launch-xenstore 'systemd'
 
 [Install]
 WantedBy=multi-user.target


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142937.263611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1E-0007hV-DW; Wed, 16 Jun 2021 12:51:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142937.263611; Wed, 16 Jun 2021 12:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1E-0007h6-7G; Wed, 16 Jun 2021 12:51:44 +0000
Received: by outflank-mailman (input) for mailman id 142937;
 Wed, 16 Jun 2021 12:51:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1D-00075D-3i
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:43 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71b8fcae-6fc2-487b-a5d1-2823a9646c44;
 Wed, 16 Jun 2021 12:51:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpatlu
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71b8fcae-6fc2-487b-a5d1-2823a9646c44
ARC-Seal: i=1; a=rsa-sha256; t=1623847896; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gg+/47bZpm4YB6I1hv7GH0kgAMdek5epu5Z7Cwt0HbMxD0mv2NJDtrvIS/eJUdH9Qd
    ZiybM/Yt17xfoQGSa6wYbBmqLbvB/pxNUvndEP6KTGsfIsS2ZG79Dk4RxrTmHibYjcg+
    8H/ZbGhHgEgAfQpexEViICbi8NheNQKlRTo6bnXMe5tL99sE109F2PIuLdKZ9tnBeqSi
    wMxb2h9te3iLP/zkRNGQmdCKG1H/zGt/P4NOCkz+VmpUTa83mfwcJ5HTYBZGmm7MaM2F
    qgOy+5DoQ4CjvBjEszbAb5COq8CDl7PDPyEsrSTM/78MGURuE2fdHQlt6M+Gshkh1Gn5
    o0RQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847896;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2reXKEzUM07y7ty9jLpkdpjD7wjbIcJf0B0ZkXQLZnc=;
    b=noUc6A2hE/RWnNf1+XuBctR6/Mytab/IRCwImeUw18mDKqqnrgPeu+7MzVAvTB03hN
    rEG688W0NlVcQRBaYy6Lx/ih5DbKkOm7LcTXbawrnGuboA2OvUYtJiSNv8rfA3773O3Q
    FG+lDUuyEyReBBaa4Wx0tQNf6HoKwYZRl5ftbwll1RvFF45WGuXOLWNDA7sDV0eOtB/1
    Nb3Ghmo2E8Hnz9q8F66AKHYAHzqT+ZyQbr3Rkh+UI31AP5Iwez/4yF/tGli4H5B66v0d
    uR4yhi3ZQNL0J6rwVlPstvmlnkuQr7yD9d1V0DbaGaPdXlwd1gR9PhLIhC23bd+t/b0c
    7qKw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847896;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2reXKEzUM07y7ty9jLpkdpjD7wjbIcJf0B0ZkXQLZnc=;
    b=oQ6Ua7JrtYUy0TVkX1AMIVu4rj1i2r4SR9AoWp9b1P2B8U13DXbBSJMBR1axLkAq/7
    u968g+YGpSbMeujc1bTK8kmN5rcz9zy5evUojInagHDIigYu2pFDVMZEJwCCsPexyo79
    gXg6U+cySyKwRQ8VCOD40ZcLDKp5skuPUBwzpVvMC5SJUgdR/jqlX0qfE+1Udjq1azjx
    MZZWpYrpNJ5HarkwBph81RsEAT8bNLapCjGooGmxCK6eSMdc4Uo+HoiSuw7wbrTCqayh
    48CKdAPnvWtLLIq96r6KB8UwyBMN2NeOjAPM6NbPzCdRskFRCfz8tI/c9HnhQUG9oXvA
    irHA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 06/36] tools: add readv_exact to libxenctrl
Date: Wed, 16 Jun 2021 14:50:59 +0200
Message-Id: <20210616125129.26563-7-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read a batch of iovecs.

Short reads are the common case; finish the trailing iov with read_exact.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v2:
- add comment to short-read handling
---
 tools/libs/ctrl/xc_private.c | 57 +++++++++++++++++++++++++++++++++++-
 tools/libs/ctrl/xc_private.h |  1 +
 2 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index d94f846686..da58c3d9ba 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -659,8 +659,23 @@ int write_exact(int fd, const void *data, size_t size)
 
 #if defined(__MINIOS__)
 /*
- * MiniOS's libc doesn't know about writev(). Implement it as multiple write()s.
+ * MiniOS's libc doesn't know about readv/writev().
+ * Implement it as multiple read/write()s.
  */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc, i;
+
+    for ( i = 0; i < iovcnt; ++i )
+    {
+        rc = read_exact(fd, iov[i].iov_base, iov[i].iov_len);
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     int rc, i;
@@ -675,6 +690,46 @@ int writev_exact(int fd, const struct iovec *iov, int iovcnt)
     return 0;
 }
 #else
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc = 0, idx = 0;
+    ssize_t len;
+
+    while ( idx < iovcnt )
+    {
+        len = readv(fd, &iov[idx], min(iovcnt - idx, IOV_MAX));
+        if ( len == -1 && errno == EINTR )
+            continue;
+        if ( len <= 0 )
+        {
+            rc = -1;
+            goto out;
+        }
+
+        /* Finish a potential short read in the last iov */
+        while ( len > 0 && idx < iovcnt )
+        {
+            if ( len >= iov[idx].iov_len )
+            {
+                len -= iov[idx].iov_len;
+            }
+            else
+            {
+                void *p = iov[idx].iov_base + len;
+                size_t l = iov[idx].iov_len - len;
+
+                rc = read_exact(fd, p, l);
+                if ( rc )
+                    goto out;
+                len = 0;
+            }
+            idx++;
+        }
+    }
+out:
+    return rc;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     struct iovec *local_iov = NULL;
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 3e299b943f..66086ef19f 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -410,6 +410,7 @@ int xc_flush_mmu_updates(xc_interface *xch, struct xc_mmu *mmu);
 
 /* Return 0 on success; -1 on error setting errno. */
 int read_exact(int fd, void *data, size_t size); /* EOF => -1, errno=0 */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt);
 int write_exact(int fd, const void *data, size_t size);
 int writev_exact(int fd, const struct iovec *iov, int iovcnt);
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142938.263622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1H-000881-Ly; Wed, 16 Jun 2021 12:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142938.263622; Wed, 16 Jun 2021 12:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1H-00087k-HT; Wed, 16 Jun 2021 12:51:47 +0000
Received: by outflank-mailman (input) for mailman id 142938;
 Wed, 16 Jun 2021 12:51:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1G-0006lZ-0K
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:46 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1943abb-3cfd-4d03-9314-0c71412d5751;
 Wed, 16 Jun 2021 12:51:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpYtlq
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1943abb-3cfd-4d03-9314-0c71412d5751
ARC-Seal: i=1; a=rsa-sha256; t=1623847894; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=soFrmfUzg9UNLI8yVxGJmyKafXyp1k0pP6Tz7C6qrjRjIjjInvmcIq7NDWzoXEpYpa
    is5J4weSbVdhj+Zv+exqhxzn9JCh70HsxVuioyvtd3/icKwIxal2s1iOhU6IiAqeshN4
    GE5cuALgLBx29/FjVhUoAtIn17oRmmHc9ygwS581gl/kQ3KK8otGIGOjZeyH+g1lf0Mf
    wM4lwrrI/6tKseCltsc33Y2MpZNXO/SNS6sdL7AazagYfsUSCfQMy1Rbv4tjcU0Fj6sO
    o0EsWuYJzd1QLdrcqzgL/jozmedbbylz7MjX5SaezjDDFoHX4XOh3T8nyMx6jn/333eg
    eoTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847894;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=WwrXv/hX+8hePyd4mOYFfrU0MUgrTtUZ8Gjzx55scrs=;
    b=CumTvA1YqjZBixzbp99+/N+2MDfv6RVYWAZjCKD6Rf5H1R1lNbsSY6YZObLLUE5Owt
    32biwzycgSL2YhhRgHJ8y1+pMc2ek0NExN0ijFfLlNqajcTRu1fINPOMMPCoW+4BSuJP
    GQnDgF9MWrzopCxL4ZfOuCwsZ4DOvxkHlt4PmuNoeOozH4WT/mrIzH2etYic/7of6/Gq
    TL8WTiBGP0mT6fz/Of+0rZJr2y0W6svOguW22N+B7dYQ/+JjX7kvrgdZTVk/pCGIlaQS
    cCJkTCU26Y1hbQZEZ3Xe3PF2xycaGsuyLBJ7EJVSBJJXLNkF16Hf/z5ZQE2XTRl0HZU/
    1cWw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847894;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=WwrXv/hX+8hePyd4mOYFfrU0MUgrTtUZ8Gjzx55scrs=;
    b=Hdyru/O388OBCl1cJ0YWnVg/2XeuMH7WAZa6zCOYBj2Y0mqdK1mO+sA1FldUjL2AL0
    b0snEqEad8XYecbuLL55rwYjFhNs5PeRbVqPttbvjWLEmb7JjURaPEvNMiVFVLuhnDYy
    kAv6XnK34DIhKUwixo5hwHPBdgrfMA//xNS2j7t+SxN0HEpP1Tlg2TS9g6RJPJAolpxg
    gzu14toyeV8UqfP8LLUhmnzfvLHgrzSRTvSIYdhpLMhjEHt2il73AVt63UUAQ4GHw8jc
    Bd28M4cLNrCzsnrkfRKdaoRXblmkSeQ8afmaRpQsF9ZgkaDy6XFOhDfSkIdXEWAw8FRd
    QY7Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210616 03/36] xl: fix description of migrate --debug
Date: Wed, 16 Jun 2021 14:50:56 +0200
Message-Id: <20210616125129.26563-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xl migrate --debug used to track every pfn in every batch of pages,
but those days are gone. The code in xc_domain_save is the consumer
of this knob, but it considers it only for the Remus and COLO cases.

Adjust the help text to reflect what --debug does today: nothing.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>

v02:
- the option has no effect anymore
---
 docs/man/xl.1.pod.in   | 2 +-
 tools/xl/xl_cmdtable.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index e2176bd696..70a6ebf438 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -481,7 +481,7 @@ domain.
 
 =item B<--debug>
 
-Display huge (!) amount of debug information during the migration process.
+This option has no effect. It is preserved for compatibility reasons.
 
 =item B<-p>
 
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 661323d488..ca1dfa3525 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -172,7 +172,7 @@ const struct cmd_spec cmd_table[] = {
       "                migrate-receive [-d -e]\n"
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
-      "--debug         Print huge (!) amount of debug during the migration process.\n"
+      "--debug         Ignored.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
     },


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142939.263627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1I-0008DI-4M; Wed, 16 Jun 2021 12:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142939.263627; Wed, 16 Jun 2021 12:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1H-0008BS-T8; Wed, 16 Jun 2021 12:51:47 +0000
Received: by outflank-mailman (input) for mailman id 142939;
 Wed, 16 Jun 2021 12:51:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1G-00075D-58
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:46 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.84])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f18633fe-d4db-43d5-bfab-8fd491179c21;
 Wed, 16 Jun 2021 12:51:45 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpdtm3
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f18633fe-d4db-43d5-bfab-8fd491179c21
ARC-Seal: i=1; a=rsa-sha256; t=1623847899; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=V9RtZbf8C1wCglPYTnqfkxIgmvIai9m1C0WragE6L8TdkzfjYumF7NNu+mG+JUHAk4
    QXWqeR2xquVh1uVG/VkbLL2j4nGs57IJ1xwFck5cvPZbKkNO0QvysEfAyIgbwT3l9Y4Z
    dJZPv4YXeDdDdhJhRIHptkE/HTeegGIfOaSX+wzIjdesa1YWoFjY+3GheapIdaOJJZL0
    d0Iq+se2UH0Br2RQBI73EsqjeK8WZJGg/sLxcpXGZBYND2PjovgbaIyL8nFcuIU3myOG
    J/Awj6LCZq5uCLxRdR/PEN+JTAa+MT6t3ZVSe1K7m30vtb4Dd6s2mK+qxTTEHLLFWtsX
    FDkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847899;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=6QhLsTHKnGLq8ej3+19WMGRz5FDjlHYnvJVyc5tVB4g=;
    b=h0z+eiF/8OC+LqIJRm8ILXtBw7ot3Z8f+f6U6rVifrn5AlWSjCOFCLemkoRi+37BfG
    698ydSzAHVBHMHx5PWd4EPzAmexuuhyA+6ED2c/HXOIhgSjGOmGmxFWwvhP8eZ/DbLyH
    BKoVPTcOLU5gDwVUnLjWmnognqBp8a3e7Ds7JRd9laVwwlrTR4xpaMFO8YixXK9tEVX/
    wXskzK7shBvG+54OXBGB3+85HBLpuJUvd4qTgpZRh0spnhDHbqDNLhnrTAaLZBRzeKRA
    xZzYc5GwZGdJW/c7Lri9BW/FWfzNBiPG/x7dy5ALMmS4yRC+GpqATtYl3FtaCn0LuiOE
    qsBQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847899;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=6QhLsTHKnGLq8ej3+19WMGRz5FDjlHYnvJVyc5tVB4g=;
    b=olx76e+wLmQTdJGVtYcWr7CZ+zPZc2Px6jO/FrjyPpGQpK3OA4lw2zvWycFRaokYNK
    3yeJKYqQj6FvNf/EQRtio/ooBjYBBvvwDpVrdp4oUc5uxMaIwbPpUakOmLR90585EF2Y
    CNIWA29se7WZKhW1tkj63t87JZiGLqWBQm9H9w3/d7h9PqF9IPvNPruSI32xXxqlVpLS
    ZoymyoBb/ZqTWzGG+F6zsZPsL07md2yjbkxiCkiSTDKYWIUzHG1PwYsmb9ju3gmkeyk8
    fDNJDQ0YVSqpicsIz70bRRhtDhju+GxQHXSGySp05PoULlFDcZfghtcSP5sCmFUenNPl
    oTcw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 13/36] tools: save: move types array
Date: Wed, 16 Jun 2021 14:51:06 +0200
Message-Id: <20210616125129.26563-14-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the types array into
preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 6129710a3f..1df684acb9 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -227,6 +227,8 @@ struct sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
     /* write_batch: Mfns of the batch pfns. */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    /* write_batch: Types of the batch pfns. */
+    xen_pfn_t types[MAX_BATCH_SIZE];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 6b09784be8..0883c1fac0 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Types of the batch pfns. */
-    types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
     errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
@@ -116,7 +114,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -274,7 +272,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(local_pages);
     free(guest_data);
     free(errors);
-    free(types);
 
     return rc;
 }
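The pattern used by this patch (and the neighbouring ones in the series) can be sketched in isolation. This is an illustration only: the struct and function names below are simplified stand-ins for the code in tools/libs/saverestore, not the actual Xen sources. The idea is to gather the per-batch scratch arrays into one struct that is allocated once at setup, so the hot path performs no allocation and needs no per-call ENOMEM handling.

```c
#include <stdint.h>
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024

/* All per-batch scratch arrays live together, sized for the
 * worst case, and are allocated exactly once. */
struct sr_save_arrays {
    uint64_t batch_pfns[MAX_BATCH_SIZE];
    uint64_t mfns[MAX_BATCH_SIZE];
    uint64_t types[MAX_BATCH_SIZE];   /* formerly malloc'd per write_batch() call */
};

struct ctx {
    struct sr_save_arrays *m;   /* preallocated at setup time */
};

static int ctx_setup(struct ctx *c)
{
    c->m = calloc(1, sizeof(*c->m));   /* the only allocation */
    return c->m ? 0 : -1;
}

/* Hot path: just borrow the preallocated array, nothing to free. */
static uint64_t *batch_types(struct ctx *c)
{
    return c->m->types;
}
```

Besides removing malloc/free from the batch loop, this also shrinks the error-handling surface: the hot path can no longer fail with ENOMEM.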


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142942.263644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1M-0000Wl-SY; Wed, 16 Jun 2021 12:51:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142942.263644; Wed, 16 Jun 2021 12:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1M-0000WM-Nr; Wed, 16 Jun 2021 12:51:52 +0000
Received: by outflank-mailman (input) for mailman id 142942;
 Wed, 16 Jun 2021 12:51:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1L-0006lZ-0A
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:51 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43aa6e05-a993-430c-9968-2ac1ce447547;
 Wed, 16 Jun 2021 12:51:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpbtlx
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43aa6e05-a993-430c-9968-2ac1ce447547
ARC-Seal: i=1; a=rsa-sha256; t=1623847897; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ODGNr+uLUxhmfATeLKkGIHOLU7hPqdY/15IH6h60god/q+pChYN/+cU/VtTjPuTghE
    jO3D94kvRZV6Ghcif00Qn7nAg5e6aZ24xw5UuRhJneWquP5pzoRrnlrLhyf5ef8fiFM7
    X00POAishpBGFDOoOnR+wLUfUfGm2WO7MbeDjCnwSQL5G9mm5rsYzh3FfUQv5NiBtmUz
    E3j2vwEWDaQlZRVeWPnZXFLDYnib22vpQML1mYppvhx/y6dJuAXSW2Bpizg+dgTJKmZF
    AV8db9yNq6tkbHzr/stXZB7a1DcJYEVGcHxh0E2fhDE6HdRFON18uBiaffB7DKIh3DIR
    m3oQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847897;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ex4r4d11Mg1k2Zn9qFLItgRI9LdfJn43Y1sVEepvWiU=;
    b=ElVLRXcEYGZ3OxVQGsKXyhDRMLIeRmWr/Lkb/L3jeDK4+X10Bth8quN5EG31PNmK6N
    MT/ImowTtg+rh41UDlCSHPGXI6AMt6Y4dC/zB1xQu5x/nAEg/0SXlxT3c2LfoTDZLSA5
    hD0cTPXB/a0xPaND1wObq1JViiEJTf1647uG+UON/Wd/YtqRXb5AYPgm1L9p7sa2KUXN
    4Cn+UgXWFAxZIGhvAKVkIOVJ5wR6M63A9NXClgqGJ1KYJm/LqrgcbOPUsylFop8qU3ah
    ZeC5i8Qzfd8H6UdET/GYRPRMXR/hbFYy0l6JgKx1+43KGE1PDry4SSMILYrtvgo91F/W
    zR7w==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847897;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ex4r4d11Mg1k2Zn9qFLItgRI9LdfJn43Y1sVEepvWiU=;
    b=rdPVMskwM4FKzs88imzG5wXXgdzUPInSgw0NFn1sl+Dzq3JBU3VUXKySfJ6qe2EzVz
    2ZDqzXJOgseB/2eka0fb4Z60lHbMp27kxYSX/UjquR6H8+1xEmwKM1FlI9d4cYQEYCB5
    IYZK6hnWD4msa5fEnx83JgfMkyLHZSDPrGuGW2y0quwR86rL7jztf+w+kb6X8+uE4E1/
    ELJWvIJmzCxcPiBOTeR9pR4nZ15S5JF2HEp057EUeAU/TMhUSrBljTT1wpb8NC7StN9u
    9qYjlaiREmgnFxRnwjvtGuq/LMMcr7FGHcOqx0S20QlqEgdQszkC+65poKZoonPinfQQ
    ocBQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 08/36] tools: use sr_is_known_page_type
Date: Wed, 16 Jun 2021 14:51:01 +0200
Message-Id: <20210616125129.26563-9-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Verify the pfn type on the sending side, and also verify the incoming batch of pfns on the receiving side.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>

v02:
- use sr_is_known_page_type instead of xc_is_known_page_type
---
 tools/libs/saverestore/restore.c | 3 +--
 tools/libs/saverestore/save.c    | 6 ++++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index be259a1c6b..324b9050e2 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -406,8 +406,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         }
 
         type = (pages->pfn[i] & PAGE_DATA_TYPE_MASK) >> 32;
-        if ( ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) >= 5) &&
-             ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) <= 8) )
+        if ( sr_is_known_page_type(type) == false )
         {
             ERROR("Invalid type %#"PRIx32" for pfn %#"PRIpfn" (index %u)",
                   type, pfn, i);
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index ae3e8797d0..6f820ea432 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -147,6 +147,12 @@ static int write_batch(struct xc_sr_context *ctx)
 
     for ( i = 0; i < nr_pfns; ++i )
     {
+        if ( sr_is_known_page_type(types[i]) == false )
+        {
+            ERROR("Wrong type %#"PRIpfn" for pfn %#"PRIpfn, types[i], mfns[i]);
+            goto err;
+        }
+
         switch ( types[i] )
         {
         case XEN_DOMCTL_PFINFO_BROKEN:
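The shape of the send-side check added above can be sketched with a placeholder predicate standing in for sr_is_known_page_type() (the names here are illustrative, not the real Xen code): fail the whole batch as soon as one pfn reports an unknown type, rather than letting it fall through the switch's default arm unnoticed.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder whitelist, NOT the real sr_is_known_page_type() test. */
static bool is_known(uint32_t type)
{
    return type <= 3;
}

/* Reject the batch on the first unknown type; the caller logs
 * ERROR and aborts, as the patch above does with goto err. */
static int validate_batch(const uint32_t *types, size_t n)
{
    for ( size_t i = 0; i < n; i++ )
        if ( !is_known(types[i]) )
            return -1;
    return 0;
}
```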


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142943.263655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1Q-00016I-DG; Wed, 16 Jun 2021 12:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142943.263655; Wed, 16 Jun 2021 12:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1Q-000166-70; Wed, 16 Jun 2021 12:51:56 +0000
Received: by outflank-mailman (input) for mailman id 142943;
 Wed, 16 Jun 2021 12:51:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1P-00075D-7R
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:55 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.103])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e9aa579-d54f-4978-9ff7-95a20453cfd6;
 Wed, 16 Jun 2021 12:51:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpftm8
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e9aa579-d54f-4978-9ff7-95a20453cfd6
ARC-Seal: i=1; a=rsa-sha256; t=1623847901; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=qCP7K0VPh2lHPt79phYQtfJ3EltvJWbyp0vzsiw+Fw+2qvCl8ffjTn5xcfLXt0xc1D
    QQa6oMySIG0UkhNDF/8UaRvKrzz/l3SOuJDrg1VnJRQc81kiazRCt7W4FdQdIFW062m7
    gWsqAHy7u5fy4HE4VJR/H3XYBkztAOBUtgcih/DXoe5KazWq+CDGvEPUMYn8/FiR9JEX
    KmarUltqJQJXmxsu27htAqEMsmIDhWZJzo34EmYkZyxI4l22B0w4kWszBiPLbqENv6mZ
    SGRNDG/mxdQDVizfJWYLYAYVU4Luvd0qFnkS7sEh1eMEPTplU1OvwjGyiM3Ns5YdV545
    xwWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847901;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=1Q2pJ96Q87Yp6O1o39i2QtRnldFx2Zb75m3igVfGdxc=;
    b=k4gvs2RDpyNND1f4GpB6oSepQQYDcTFhqb2RBKuk6bOCbCRTnyYsgOUhLEGNsQ6AwI
    SqQoXzr/oGmQUw6qps4VpbuuO4rCldWmBmGgLjucP0RA2p4GFSKqK9FiNRevnqzIQ7M1
    iHRygBWy+Q/kBtb0FStU36HO4kFotnlDFvJZi7455wKEPECB4n1YDKt2289j71VLFVLv
    DeqZAN7JlUk85Ld/ctIF0AA1sXdKw5Yi0muUGIqwOZqg8KJfSGQfRMn/yVHWNUcFIfC4
    lUw3yd9Gbpr2Nq+zJIx5jMlskDcod/4Qd3bpP6TNp+uqN3Dwtna3DSkIemOzR67iog4s
    2JKg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847901;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=1Q2pJ96Q87Yp6O1o39i2QtRnldFx2Zb75m3igVfGdxc=;
    b=QKkWE1Bzcmtf0jW7m9uSoGmmmOUeB8TXyU+CG6V7YfLCe5aRu7G2qU8FSbOr/RS+VL
    g5AcZrl6G7mBr13qHK1wNdPaB65vIkj18GJo8T5lA9f7/KKIMymaFDxMmpDEgFLJvGuO
    wUi1HTcaKS6pfaIvYISq4d+gi2yc+XyZW7zeAn8SPTYvbfCKljlhHh/fetlqc/C5tDCP
    8V30LgxIVgKukdE9UygjrTG2nRQPuIDSqjzF8ak4mz0TRUFGYzJKFOx38EjzBBdjAHQR
    DwomF2T4j1F0ehToUyHRYinv0J8JAEfs/lDfihm2A1LDt/MZIT15ysSngVk0AVTIsTZM
    P/Qw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 17/36] tools: save: move guest_data array
Date: Wed, 16 Jun 2021 14:51:10 +0200
Message-Id: <20210616125129.26563-18-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving the guest_data array into preallocated space.

Because the array was previously zeroed by calloc() on each call, adjust the loop to clear unused entries explicitly where required.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 11 ++++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 2950947f1d..c4ab843c77 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -235,6 +235,8 @@ struct sr_save_arrays {
     struct iovec iov[MAX_BATCH_SIZE + 4];
     /* write_batch */
     uint64_t rec_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Pointers to page data to send. Mapped gfns or local allocations. */
+    void *guest_data[MAX_BATCH_SIZE];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 0f02988ff9..ea04cb1a74 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -90,7 +90,7 @@ static int write_batch(struct xc_sr_context *ctx)
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
-    void **guest_data = NULL;
+    void **guest_data = ctx->save.m->guest_data;
     void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
@@ -105,12 +105,10 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to page data to send.  Mapped gfns or local allocations. */
-    guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
 
-    if ( !guest_data || !local_pages )
+    if ( !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -166,7 +164,10 @@ static int write_batch(struct xc_sr_context *ctx)
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
             if ( page_type_has_stream_data(types[i]) == false )
+            {
+                guest_data[i] = NULL;
                 continue;
+            }
 
             if ( errors[p] )
             {
@@ -183,6 +184,7 @@ static int write_batch(struct xc_sr_context *ctx)
 
             if ( rc )
             {
+                guest_data[i] = NULL;
                 if ( rc == -1 && errno == EAGAIN )
                 {
                     set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
@@ -256,7 +258,6 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
     free(local_pages);
-    free(guest_data);
 
     return rc;
 }
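The subtlety called out in this commit message can be shown in a minimal sketch with hypothetical names: guest_data used to be calloc'd fresh per batch, so skipped entries were implicitly NULL. Once the array is preallocated and reused, every path that skips an entry must clear it, or a stale pointer from the previous batch survives into this one.

```c
#include <stddef.h>

#define MAX_BATCH 8

/* Reused across batches; no longer zeroed by calloc() per call. */
static void *guest_data[MAX_BATCH];

static void fill_batch(void **pages, const int *has_data, size_t n)
{
    for ( size_t i = 0; i < n; i++ )
    {
        if ( !has_data[i] )
        {
            guest_data[i] = NULL;   /* was implicit with calloc() */
            continue;
        }
        guest_data[i] = pages[i];
    }
}
```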


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:51:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:51:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142944.263664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1S-0001WF-1i; Wed, 16 Jun 2021 12:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142944.263664; Wed, 16 Jun 2021 12:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1R-0001Tj-MP; Wed, 16 Jun 2021 12:51:57 +0000
Received: by outflank-mailman (input) for mailman id 142944;
 Wed, 16 Jun 2021 12:51:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1Q-0006lZ-0A
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:51:56 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cece85ed-2d27-4fdd-8717-480416772f0b;
 Wed, 16 Jun 2021 12:51:43 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpatlv
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cece85ed-2d27-4fdd-8717-480416772f0b
ARC-Seal: i=1; a=rsa-sha256; t=1623847896; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=dxOtudDpDUxA61h6q5gaxkGxLyyUZ0Qhf56ZBw5pvtCPzZyGAT+05ORT39AQQSr2GL
    cOJ6ZQ+jCQOg6eWlv4mIPMeZwPXlLrUjhw40KsAgIVPchaLrVNwOSsvZ26CanAJ5GDn1
    e3lyeeON91UPhwe+gTQpxToQRTj7Tu5bE7Z1CDguN+O3sjVy2cA7OqYNmgp0YIojewEN
    pXRkAItVbuHNTwsh0MO7jc+G6EnlEJUiFL0TYf+0rFGHHyJ1d60pzWPSjkMrrFRJwAks
    OqF2ASJoSouRjLhCM4oAeVO3vX123nYGdsINbfyx+CfEUdmh4lb0smVwYK+8JvrcDi3V
    OMcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847896;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ikZ54MUbm6J0+L4XW5ARSvIze4KQJX0wbR+SClf730w=;
    b=Li9M5NuoXwVzfog2OtC6dmydfnROU2UPs+KqlXq/sizYlcp2z8JA1F1Fa7HPKCTfQF
    Cdf6ROhmdEJtMN7UeEf/BOI/r6KYbHTTXegtQksvsO/OIHkAisdDcCZ0tiwPqpuxIs10
    RrRHP/OMaMfNq47rot0BJL7SU+pSMqi11mCd9W7zZyc8BhS6nSEdtS17beoy3vugwAGF
    NLEGRJbGP4AkEjtjiuV7PtjA4LPq0lXBRZcGXIRAhxt3zAn/d5vtfdQmU+ggh1AUnJIj
    x5nM0HhegNmyIaGQENmES70rr4jjGH8tLlPEEjUnN/qTQGzV6unni4r0I3xCKNmFfbNA
    4JiA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847896;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=ikZ54MUbm6J0+L4XW5ARSvIze4KQJX0wbR+SClf730w=;
    b=UIsPIfYTqJ11BJg4cVQSTqrMfNiTuot30qjQsOw8l2Zh0GFtCmMhXoLh+mS7NmkjR9
    KsOOPifUa9IHi4sxnQPt+CuIlzPk5Z2wpeBspqFbXJQ+T40WPYFOHvzYydzu/PJlU0w+
    vsqQvAW83UgmmIDfDCo5M5lMQKBF7i24W0tUQp350Omma930GaS4sTbUwQkamAEpnq2O
    GjjTvU3t8D/spCCn9oP7gPSiZmkjZbRAwnSjhGU9yRtrauVWgB9Ppg15cBkoWlid50jS
    r5Nfvbm3NyQgxiOdVBb3vHaWh25+ZlhHmP220QJTlJsn0mnlW+noM8fmZXTQhSOAJCVX
    3IAA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 07/36] tools: add sr_is_known_page_type to libsaverestore
Date: Wed, 16 Jun 2021 14:51:00 +0200
Message-Id: <20210616125129.26563-8-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Users of xc_get_pfn_type_batch may want to sanity-check the data
returned by Xen. Add a simple helper for this purpose.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- rename xc_is_known_page_type to sr_is_known_page_type
- move from ctrl/xc_private.h to saverestore/common.h (jgross)
---
 tools/libs/saverestore/common.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index ca2eb47a4f..c9cc4206e5 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -467,6 +467,39 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
+/* Sanity check for types returned by Xen */
+static inline bool sr_is_known_page_type(xen_pfn_t type)
+{
+    bool ret;
+
+    switch (type)
+    {
+    case XEN_DOMCTL_PFINFO_NOTAB:
+
+    case XEN_DOMCTL_PFINFO_L1TAB:
+    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L2TAB:
+    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L3TAB:
+    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L4TAB:
+    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = true;
+        break;
+    default:
+        ret = false;
+        break;
+    }
+    return ret;
+}
+
 #endif
 /*
  * Local variables:
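A self-contained model of the whitelist above may make the structure clearer. The PFINFO_* values here are stand-ins patterned after XEN_DOMCTL_PFINFO_* (the real definitions live in Xen's public/domctl.h, and the values below are illustrative assumptions); the point is the exhaustive switch: every valid type, including pagetable types with the pinned flag OR'd in, is listed explicitly, and everything else is rejected.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in constants, patterned after XEN_DOMCTL_PFINFO_*. */
#define PFINFO_SHIFT    28
#define PFINFO_NOTAB    (0x0U << PFINFO_SHIFT)
#define PFINFO_L1TAB    (0x1U << PFINFO_SHIFT)
#define PFINFO_LPINTAB  (0x8U << PFINFO_SHIFT)   /* pinned-table flag */
#define PFINFO_XTAB     (0xfU << PFINFO_SHIFT)

static bool is_known_page_type(uint32_t type)
{
    switch ( type )
    {
    case PFINFO_NOTAB:
    case PFINFO_L1TAB:
    case PFINFO_L1TAB | PFINFO_LPINTAB:   /* pinned variant listed explicitly */
    case PFINFO_XTAB:
        return true;
    default:
        return false;
    }
}
```

Listing each valid combination rather than range-testing the shifted value (as the old restore-side check did) makes the accepted set auditable at a glance.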


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142946.263677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1W-0002Ni-Da; Wed, 16 Jun 2021 12:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142946.263677; Wed, 16 Jun 2021 12:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1W-0002Mr-7m; Wed, 16 Jun 2021 12:52:02 +0000
Received: by outflank-mailman (input) for mailman id 142946;
 Wed, 16 Jun 2021 12:52:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1U-00075D-81
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:00 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21727138-bae6-4aea-a9cd-17eb42f3cbf2;
 Wed, 16 Jun 2021 12:51:48 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpgtmA
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21727138-bae6-4aea-a9cd-17eb42f3cbf2
ARC-Seal: i=1; a=rsa-sha256; t=1623847902; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=jSZ2Z9L7tplcMdULHYPib6jr8P3O27PjFRi3DaMtcyaivK632dkPPGhq0kdpV3n4Ke
    Ll9sPl0CkDZrFVZWnOKuyHz/G5EsWAHZpZGc24pW4iQ448b2y0LUGLcrGVoYy3QwK/ld
    MkhJ8fFHOemkCAoMSLkdr6LSLIIa4D23B8mjii49uUAIefMR0sq0Nd6lxkFk8nI8bKkJ
    bRnZmGamDGiBnG7I1R2i3mj0vvMQJTTaqYYpjUxs7jC2Ku7iHEvtcE3XrJQe8vDa8Z95
    xiYUTFiCcNR57gDc/1xRquzvncRhqw+S43VFkrZKTidJknSPW7nU7wXcJGfqYYJnVPGK
    C8Wg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847902;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=+zNYdRT311M2lTYh9UyGXJAkdHDCHae4YkiiVz2nJTk=;
    b=YMS9icgO95j0umh569k+Un5aJGs6dFWpsZCBYy3KmGCN/UW8obv+2y6cx/La7dD8EA
    j9xVhhfqAYPfxa4A5EWAKt58F72chToPGPOxqRIueqKyb/mWALz3tmA84i1BIU5FBFXy
    lKdInWBa7Ra8ZqLT0OWQX03fnqNgwb5WkaGqZY4zwsFL58xCIoaKXUaGzpe2RgurAafj
    KNx6APWzvA+8bnh9bfjyZk1WVJ7Q476iDHVoT1paw6qrzSCWzf/J9WPQ5kVuGcTmLC7a
    BvM91e0QYYJj46/XgXFwIVIRu8DcL5ymi2leA/7xBJ2msDIn7WT3V/RW2w9AepFWPpRi
    4YMA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847902;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=+zNYdRT311M2lTYh9UyGXJAkdHDCHae4YkiiVz2nJTk=;
    b=I565a8Paxp67rMuXqxcCzHUGgYOTV7Qh04/11K40ZE9TdvE5EZGK1CDOtZD1ZhJJJ0
    IRtgquJKznOCTY7oJjMdfm++zrGTTFCwgqk3iww144KlZPJqkf4py3rD52QldjONCo84
    Dyjjg04JTp9XmC2NITklkz8oIPs7FaUNv2OhCrJWzRZrXFcJ/jJ4IgVLAXhYFUBVCW6U
    UP3VvT1yX+BpZgmIgtqI1CN7vwASvmEzzyx0xEcuZYlVLoAZprvhhKnzRnAxQHgH5u1P
    YWBwFn4Ggkrw1ASXJi1sYOthJUYd+/2mTIPyAAZiXj/W4bdaMwgJgU7GDxYuAreEbyIl
    n8RQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 19/36] tools: restore: move types array
Date: Wed, 16 Jun 2021 14:51:12 +0200
Message-Id: <20210616125129.26563-20-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving the types array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 96ae0904fc..fe44302eac 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -238,6 +238,7 @@ struct sr_save_arrays {
 struct sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
+    uint32_t types[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index e18a03b381..d460a2b2b5 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -316,7 +316,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     int rc = -1;
 
     xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = NULL, type;
+    uint32_t *types = ctx->restore.m->types, type;
 
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
@@ -363,14 +363,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    types = malloc(pages->count * sizeof(*types));
-    if ( !types )
-    {
-        ERROR("Unable to allocate enough memory for %u pfns",
-              pages->count);
-        goto err;
-    }
-
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -410,8 +402,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     rc = process_page_data(ctx, pages->count, pfns, types,
                            &pages->pfn[pages->count]);
  err:
-    free(types);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142948.263683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1X-0002T6-3G; Wed, 16 Jun 2021 12:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142948.263683; Wed, 16 Jun 2021 12:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1W-0002SG-LF; Wed, 16 Jun 2021 12:52:02 +0000
Received: by outflank-mailman (input) for mailman id 142948;
 Wed, 16 Jun 2021 12:52:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1V-0006lZ-0M
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:01 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a13aee9a-ffdd-4c72-83ef-ec013995368e;
 Wed, 16 Jun 2021 12:51:43 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpbtly
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a13aee9a-ffdd-4c72-83ef-ec013995368e
ARC-Seal: i=1; a=rsa-sha256; t=1623847897; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=DYoFmxn0Dtbm+VbaZKlDCbmsImsyeyVIbmdVja/WYH7Yh7b3VxzyV+95Rv/od0I/cz
    yz5v+ihbmp6qtr+klmkkX01fY3hxfjDnHfHy82NnPZkcFtuTgOHOvwxSr3wEgXANEMIL
    9vano6TGgNPAjgJ6A26sqYCTr+FtzFAaLdLlib64ZsvtD8UJGPZ5jWklAtDkf8CqqWgI
    NhVoMJ9ZxY4oM2AKtBImnUrA5r8MOYK3gWUZgRHbM/zThMNEN5eklers7DqmTQ8MqZAh
    LEM+/N/41WuijmQ27TTLid+o5rNMAVqVa5eaTZL/7YYlDv/kPRbLuoJqqRObjVrNsn6d
    6eWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847897;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=VuUrMSQO4CJqxpoErSuXThYpl5ESMJog6Ej/du7Jhm8=;
    b=L7BIenlD2gLFJY33mpjNEtwdvigxvmiQu8kcfsKPRIaLNvBAF0MhNd3rn7RaNc4F+S
    Iwk3Iff43fgyXwqIo3e+JG/9f8eoWGSR1QVY44AiV42Tv9mn2D+AALn/110YsgRbJF/7
    HUbWydzTUDZQYMWhV86LHLRjJFdriIlXI5/62DRsel2b3g5ALbiohbzX8yDTbZuG0OYo
    2vjvgVJOSDRCm9nTNlngBefJc0QYwsrXZUPs5wY/HQYYz6WUNXxIkAB/Inb5b/rQkIHS
    prRgL18wVgHo1pcWpKYV8SrWVdQHa3RA/NZtYgotR/OEIhwa3QJdaTW+ax+2iOFnWcud
    Bu6g==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847897;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=VuUrMSQO4CJqxpoErSuXThYpl5ESMJog6Ej/du7Jhm8=;
    b=fgVuiq6lNo14eMaaSHNN964UZYKMmBml6lOvEjdDo157lZMXSwttn3KdVtF/ybsVr0
    V/1rNHX9ojTlroR5uczwa5SglFU2HT0cvO3VTN+xnF/kXb7/v3Y86wkExfAhM49LTU44
    naNCWgKSBgJqtYDjrzEzsPXh7BQnPXNELk2GhIN2qANRGy8XPKq9k/ZlUsPRcwo/WN43
    HnpOfBzpqr30GsKcBchylGKkbEgiaYju/NEaAGwGZhg4uZorNoAKz0yELULez/AUL/dQ
    Aa8yILmfNI6g5c4dDmOLKZ3c+S7OQZL7cJpgOAwgErSZoQVwo3IeahqISgUuZWtQ1tGA
    yZ1w==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 09/36] tools: unify type checking for data pfns in migration stream
Date: Wed, 16 Jun 2021 14:51:02 +0200
Message-Id: <20210616125129.26563-10-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a helper that decides whether a given pfn type carries
page data in the migration stream.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  | 17 ++++++++++++++++
 tools/libs/saverestore/restore.c | 34 +++++---------------------------
 tools/libs/saverestore/save.c    | 14 ++-----------
 3 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index c9cc4206e5..08bbe902b9 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -500,6 +500,23 @@ static inline bool sr_is_known_page_type(xen_pfn_t type)
     return ret;
 }
 
+static inline bool page_type_has_stream_data(uint32_t type)
+{
+    bool ret;
+
+    switch ( type )
+    {
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = false;
+        break;
+    default:
+        ret = true;
+        break;
+    }
+    return ret;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 324b9050e2..70c92eaadc 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0; i < count; ++i )
     {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
+        if ( (!types ||
+              page_type_has_stream_data(types[i])) &&
              !pfn_is_populated(ctx, original_pfns[i]) )
         {
             rc = pfn_set_populated(ctx, original_pfns[i]);
@@ -233,25 +232,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     {
         ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_NOTAB:
-
-        case XEN_DOMCTL_PFINFO_L1TAB:
-        case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L2TAB:
-        case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L3TAB:
-        case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L4TAB:
-        case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
+        if ( page_type_has_stream_data(types[i]) )
             mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-            break;
-        }
     }
 
     /* Nothing to do? */
@@ -271,14 +253,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0, j = 0; i < count; ++i )
     {
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_XTAB:
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-            /* No page data to deal with. */
+        if ( !page_type_has_stream_data(types[i]) )
             continue;
-        }
 
         if ( map_errs[j] )
         {
@@ -413,7 +389,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
             goto err;
         }
 
-        if ( type < XEN_DOMCTL_PFINFO_BROKEN )
+        if ( page_type_has_stream_data(type) )
             /* NOTAB and all L1 through L4 tables (including pinned) should
              * have a page worth of data in the record. */
             pages_of_data++;
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 6f820ea432..12598bd4e2 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -153,13 +153,8 @@ static int write_batch(struct xc_sr_context *ctx)
             goto err;
         }
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-        case XEN_DOMCTL_PFINFO_XTAB:
+        if ( !page_type_has_stream_data(types[i]) )
             continue;
-        }
 
         mfns[nr_pages++] = mfns[i];
     }
@@ -177,13 +172,8 @@ static int write_batch(struct xc_sr_context *ctx)
 
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
-            switch ( types[i] )
-            {
-            case XEN_DOMCTL_PFINFO_BROKEN:
-            case XEN_DOMCTL_PFINFO_XALLOC:
-            case XEN_DOMCTL_PFINFO_XTAB:
+            if ( !page_type_has_stream_data(types[i]) )
                 continue;
-            }
 
             if ( errors[p] )
             {


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142949.263699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1a-0003T6-QC; Wed, 16 Jun 2021 12:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142949.263699; Wed, 16 Jun 2021 12:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1a-0003Sk-JJ; Wed, 16 Jun 2021 12:52:06 +0000
Received: by outflank-mailman (input) for mailman id 142949;
 Wed, 16 Jun 2021 12:52:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1Z-00075D-82
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:05 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 440b148c-5a47-47e7-8268-4fec67cd4e3c;
 Wed, 16 Jun 2021 12:51:48 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpgtmC
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 440b148c-5a47-47e7-8268-4fec67cd4e3c
ARC-Seal: i=1; a=rsa-sha256; t=1623847903; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=opDqhO4a1Vq4ddVcj+k8tqaWyd7iEl35jDu6mAo2Qt4tu3vXYHrlskuG3lPRYff0ov
    y443MOvX2wXF0kBESl9kaSX8E/AF7MORqBMUI2k0vhRoSUnqrzB09kJ3F+y7YG7LhZ3o
    g0frzi7M7n7ZEhoqv3SccUbu1XdjqdvuS9taPrW3zCAJ/c0BXG0XJ7dU8fQq9S5/ZaEl
    4LJBD/B7XnT8w8kRlGdf3ozpGAu8cLr9nzTxqSFqvO9FKRS5rIQVfdi6WSx7rnqQWVrc
    RAGbuHfYazO1mmNG8JeWeb+JxeM2AqHnxL/uMXS1CeTRPwaV39GFpFVR6Z4WNNtudfQ6
    HCnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=Nst2DxeDxoQRaBpmHCGwCg/XwLbUQH8n2F7XGaSzYSY=;
    b=THuKMd0uiY8blfLD/vzBCt8CiQ7ynxcB7hOV8Oxqi8BWA7HBC3wUB7Kl7Z1sdXrzM0
    DWYoQvThi6ZX0oD/Yhjwi8w9huvWWniZP73zhjlvzG0vcErO9ANjbQyEDm86iZ6LDmtM
    7ZaI4Aq+d4RowYfo5U8oNd0kOd2i+felN+y7zJWEsXFAYaF+pTUGvRmG8dKdwq3R83Fk
    Xfs8mErtxq5Jx48BYee3841s/5wrSkzDl2U8Gc8qZ7NxmMdhL5JLXPDxSaqSMahq0Axt
    kbEBe1sYMKXH6P6gI2PasDzOlx9o3Zv1qMmXHSwHkHR8grFF2cFQworQ6GR8QvAGDvvu
    mweQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=Nst2DxeDxoQRaBpmHCGwCg/XwLbUQH8n2F7XGaSzYSY=;
    b=tX+SlobqcYRUw8DVNaMA6Zdbbft4TT8fAIUkIPGgXORmYpajCecrKedu0kgLaOCfds
    nKCIO6rQo2jOj2NwcV0xTUF08Rh6GKXcGFyYQdP6FgmgNWIsxSulJH2gwjtprAPSCqee
    ia8rlaZbuMoqapjc6ahFsXkBrKDqgjSnb4snFm0x80hoMN9ptzziYn2KREpVfmoYY/JH
    HDUa33N6zy4VAaGjArJuY1/BccqjgYIa888jm88kcXm/LaUtpgcVmCeC+l+8BNyGyELp
    McljlGSAN26T8QNOvwXKPYQDtf/T7URtuTQx0gByye+vSJ9/aSoyeCLlAN0A51UJk5XX
    +kmA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 20/36] tools: restore: move mfns array
Date: Wed, 16 Jun 2021 14:51:13 +0200
Message-Id: <20210616125129.26563-21-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove a per-batch allocation from the hot path by moving the mfns array
into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h  | 2 ++
 tools/libs/saverestore/restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index fe44302eac..54352f5427 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -239,6 +239,8 @@ struct sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
     uint32_t types[MAX_BATCH_SIZE];
+    /* process_page_data */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index d460a2b2b5..1a7cfbcd47 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -205,7 +205,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
                              xen_pfn_t *pfns, uint32_t *types, void *page_data)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns));
+    xen_pfn_t *mfns = ctx->restore.m->mfns;
     int *map_errs = malloc(count * sizeof(*map_errs));
     int rc;
     void *mapping = NULL, *guest_page = NULL;
@@ -213,7 +213,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !mfns || !map_errs )
+    if ( !map_errs )
     {
         rc = -1;
         ERROR("Failed to allocate %zu bytes to process page data",
@@ -299,7 +299,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
     free(map_errs);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142951.263704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1b-0003Zt-G9; Wed, 16 Jun 2021 12:52:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142951.263704; Wed, 16 Jun 2021 12:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1b-0003Xu-3l; Wed, 16 Jun 2021 12:52:07 +0000
Received: by outflank-mailman (input) for mailman id 142951;
 Wed, 16 Jun 2021 12:52:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1a-0006lZ-0S
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:06 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.84])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1862b455-a593-4217-a96b-2c8baf4754e0;
 Wed, 16 Jun 2021 12:51:44 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpctlz
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1862b455-a593-4217-a96b-2c8baf4754e0
ARC-Seal: i=1; a=rsa-sha256; t=1623847898; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ag0e/5sNW6+i/kXyXoqt1VWKzYhGkvW0AiLGi5eTmr7Em384H6YwHHk9NueZhAldqx
    Jyl+j2c36db5IiIN4u1QAyhX1KukuIMi3YGVKKJW5/1zg62cXvVrKjWONLG7vzXV2bdQ
    6eRBuxuXW/bLCVKD9MWId3jcIDrBrqAKxzBkE7GbTnMXSWcaapFr4nwqzUdfyPJy/f5c
    CL0ZuNrWYSbiz4BYzJLpf+fjqQreDGX6LWPbdoMluAgV8mCVF4QR+s3ibQPKwh3gYZVM
    74PiscTSFgHaThxQv4wobdcIX2qiqxHiLZAO550Ry7vp0jByztRWwz1d4VvxK4tjHXsn
    aVoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847898;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=Zji+2Vj186gOQDCiA8MR4SQwrN68K3CPojQMJJRtxVk=;
    b=Sqxqycp1IAHnBrceTsX6/60pWhzCIFxBPvyU4pzVlVcPo/G5vnjTvQIHTGcohglMzn
    tuhpSWpV7Rds8Tj389fql61styvu47nTWkAD5xwFmCEHbjYHUUEhVt0RL36cA3WEck1j
    hJYPlLlD0s9zwo25eLhgblPwK9xT1BOOEj2nr8/HUsZtayN13MUmivKNhhs8E7n0uzCR
    1yZ2p987jl/fJf6+t2Vjwx9ud92HyBPBMWdgwvhqQrghxirMy+LiS6b9B7oQ8u9QCpMb
    l79RzrF8f1GZz+qscsndw85QqQpYj7tIVWsLvrNYgl6jnqsUe2mLDjmx3gi8pQyDJDPi
    tJGg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847898;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=Zji+2Vj186gOQDCiA8MR4SQwrN68K3CPojQMJJRtxVk=;
    b=naAFTL1rKhbPf85zZPnuCga7UufnYhJaiCPJ0EROBqh0RuNZfq3kV28HyN1Gq0Q4V3
    pK70e3O5ZB1+IzWWrS1w0gtK+YlOhQz2NmlumxQ1gf+Vc781A7bK/2/yAkXQ7AqABPE+
    vY284A8IDgmtp5vNpzL2/+Wf1QFSyxXNU/wVUmZdl7PoSVX+uJ9/VmJ1Sq83UGrU5Qc7
    xtpHQ61QC65WVEEeDWyNp5VhzWSPqArVfwh6YrnE8VZ3Ftjl1mwFL0w5PmA8yUHwjfPw
    FRql2yVtjyYZg17Iuj0MZ8YlwqS4kHHR0NyL6xsAjG30HOpK5kbMYwVHdak7ZsTnqO8b
    l1uw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 10/36] tools: show migration transfer rate in send_dirty_pages
Date: Wed, 16 Jun 2021 14:51:03 +0200
Message-Id: <20210616125129.26563-11-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Show how fast domU pages are transferred in each iteration.

What matters is how fast the pfns themselves travel, not the protocol
overhead, so the reported MiB/sec covers page data only.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- rearrange MiB_sec calculation (jgross)
---
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 46 +++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 08bbe902b9..d61569e1a6 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -250,6 +250,8 @@ struct xc_sr_context
             bool debug;
 
             unsigned long p2m_size;
+            size_t pages_sent;
+            size_t overhead_sent;
 
             struct precopy_stats stats;
 
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 12598bd4e2..f8fbe7a742 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -1,5 +1,6 @@
 #include <assert.h>
 #include <arpa/inet.h>
+#include <time.h>
 
 #include "common.h"
 
@@ -238,6 +239,8 @@ static int write_batch(struct xc_sr_context *ctx)
     iov[3].iov_len = nr_pfns * sizeof(*rec_pfns);
 
     iovcnt = 4;
+    ctx->save.pages_sent += nr_pages;
+    ctx->save.overhead_sent += sizeof(rec) + sizeof(hdr) + nr_pfns * sizeof(*rec_pfns);
 
     if ( nr_pages )
     {
@@ -357,6 +360,42 @@ static int suspend_domain(struct xc_sr_context *ctx)
     return 0;
 }
 
+static void show_transfer_rate(struct xc_sr_context *ctx, struct timespec *start)
+{
+    xc_interface *xch = ctx->xch;
+    struct timespec end = {}, diff = {};
+    size_t ms, MiB_sec;
+
+    if ( !ctx->save.pages_sent )
+        return;
+
+    if ( clock_gettime(CLOCK_MONOTONIC, &end) )
+        PERROR("clock_gettime");
+
+    if ( (end.tv_nsec - start->tv_nsec) < 0 )
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec - 1;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec + (1000U*1000U*1000U);
+    }
+    else
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec;
+    }
+
+    ms = (diff.tv_nsec / (1000U*1000U));
+    ms += (diff.tv_sec * 1000U);
+    if ( !ms )
+        ms = 1;
+
+    MiB_sec = (ctx->save.pages_sent * PAGE_SIZE * 1000U) / ms / (1024U*1024U);
+
+    errno = 0;
+    IPRINTF("%s: %zu bytes + %zu pages in %ld.%09ld sec, %zu MiB/sec", __func__,
+            ctx->save.overhead_sent, ctx->save.pages_sent,
+            diff.tv_sec, diff.tv_nsec, MiB_sec);
+}
+
 /*
  * Send a subset of pages in the guests p2m, according to the dirty bitmap.
  * Used for each subsequent iteration of the live migration loop.
@@ -370,9 +409,15 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     xen_pfn_t p;
     unsigned long written;
     int rc;
+    struct timespec start = {};
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
+    ctx->save.pages_sent = 0;
+    ctx->save.overhead_sent = 0;
+    if ( clock_gettime(CLOCK_MONOTONIC, &start) )
+        PERROR("clock_gettime");
+
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
         if ( !test_bit(p, dirty_bitmap) )
@@ -396,6 +441,7 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     if ( written > entries )
         DPRINTF("Bitmap contained more entries than expected...");
 
+    show_transfer_rate(ctx, &start);
     xc_report_progress_step(xch, entries, entries);
 
     return ctx->save.ops.check_vm_state(ctx);


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142952.263720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1f-0004dN-V1; Wed, 16 Jun 2021 12:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142952.263720; Wed, 16 Jun 2021 12:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1f-0004d3-QC; Wed, 16 Jun 2021 12:52:11 +0000
Received: by outflank-mailman (input) for mailman id 142952;
 Wed, 16 Jun 2021 12:52:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1e-00075D-88
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:10 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3133b3a-05ca-476e-a4d8-03107f301f32;
 Wed, 16 Jun 2021 12:51:48 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCphtmF
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3133b3a-05ca-476e-a4d8-03107f301f32
ARC-Seal: i=1; a=rsa-sha256; t=1623847903; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ohS/KAgS1qnECx4K8vUIAIRYQKCo0QYE9XXnqRt6loKSUMq5Af3ZMwucSSyU4hidku
    mQlWc/m0IPMaMuS60Kz+vXTUjg6az/PvkNew1U0vwz05NTofNK5EHKCCtpKPQVQk+vGY
    SaBcqIg+XXdVL3rCg5mNFhX2VIvPZNswDCoKUNXzxqEsI1FWVpJAnS3ketn9gxYc58WR
    2jfMZUmrNSHS8rsMksgvsD+9fAKYBfV85U/QcJqk9teC6XQX19yHms13x5I5FfnV+3Yc
    uqu54dHC2WlJOaiulF6eXl65vw7Owj1joqeII3kqa5b+YL2m74W75yuyBeJARZvbtXYL
    MRbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=jECKf3sL2DTnpGqAznPVbUN7tLXZQbgjgWqGsEtA4iA=;
    b=GjR90vz9RCP0DIoOSuG1FugVmpRs1Gl2p0tn70f7eDGTIuc12jdOGUDAYVhAWPTWct
    x1eFWvAXGaZH33DWYP90UNrhm01LI7cjr7kjVNfZkB2rjo1q2Lj4y9A45gQweWNmJIKJ
    jWOqvrk4kokNbUhawJMK/k0CiTJAhRz2M7y3YjQZgd23lrtwvhnxxn6qDtPsFtHMUN8y
    bMYXJUWpd+EGBbQ+t6gC+WW3cq43iNCL3qywx1rILWsv6m6bGILhjHvvlzDq5Ec47Z8Y
    nVTqpFz4jcUOEL7sE7qJiYvwLFeumJWHAzkCdVNnCw3YWeqFij97Br6d+T5GFy+85ANr
    HFkA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=jECKf3sL2DTnpGqAznPVbUN7tLXZQbgjgWqGsEtA4iA=;
    b=F54RWxRU+WpmRT4Yk7Xy3BUNbHQQpm2uV1w+WnAQVsgZK2YZvBeQak043u5WfqVdTc
    XowzbQlDxibs0LBDp2jMjKmjI6Cg87lDGHxm1MuG95q0hhy8DLxeOsBYd08fTFsJu4MR
    Jh6/TOl9mx/CE7/Jkco/bY7/sy9zkNCp2chtOXjF7iB365k5LXIoR9DLNDDmIMsXaPNa
    xuQl6UlWTKuPKTHWZh0tzy7eLhLVsFmK2TS7jeAgFDZ9TR4+z01XryyEznJ2d321OS+S
    sV3xTazczZiYlWjY8lxyck5VSx56nCtowrNo+TRn11K7QjFoNnbG+v9/unXH3dIWSr0H
    ylNg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 21/36] tools: restore: move map_errs array
Date: Wed, 16 Jun 2021 14:51:14 +0200
Message-Id: <20210616125129.26563-22-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove a per-batch allocation from the hot path by moving the map_errs
array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 54352f5427..34042c2b90 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -241,6 +241,7 @@ struct sr_restore_arrays {
     uint32_t types[MAX_BATCH_SIZE];
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    int map_errs[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 1a7cfbcd47..6eb955423c 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -206,21 +206,13 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = malloc(count * sizeof(*map_errs));
+    int *map_errs = ctx->restore.m->map_errs;
     int rc;
     void *mapping = NULL, *guest_page = NULL;
     unsigned int i, /* i indexes the pfns from the record. */
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !map_errs )
-    {
-        rc = -1;
-        ERROR("Failed to allocate %zu bytes to process page data",
-              count * (sizeof(*mfns) + sizeof(*map_errs)));
-        goto err;
-    }
-
     rc = populate_pfns(ctx, count, pfns, types);
     if ( rc )
     {
@@ -298,8 +290,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     if ( mapping )
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
-    free(map_errs);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142954.263723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1g-0004jG-CQ; Wed, 16 Jun 2021 12:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142954.263723; Wed, 16 Jun 2021 12:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1g-0004hl-7T; Wed, 16 Jun 2021 12:52:12 +0000
Received: by outflank-mailman (input) for mailman id 142954;
 Wed, 16 Jun 2021 12:52:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1f-0006lZ-0a
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:11 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.80])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34cb16d1-5820-4808-abfe-5c8756b5df6d;
 Wed, 16 Jun 2021 12:51:44 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpctm2
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34cb16d1-5820-4808-abfe-5c8756b5df6d
ARC-Seal: i=1; a=rsa-sha256; t=1623847899; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ks+H/9Lq48itLgjkf43MKm2w6YqTU5BIgnXYJ95Akrg3HFrIyzoYuFeJAD8DuBVYlW
    gk/LNSYNDIbE2Ms4+0hEnvUIUBDwg6dRVLN97pBqIFh2BSlA4O/3sSRLjl7xEq+YcVAO
    I7KZNsDnWzgH3didlsLqY+3Ze9NjmrXitsiVCBfk9I4/eBl9Zb1HRf8T4L+TiROYvdbk
    VBF5Ice1XM9iA/V+FQrGuBcrOAmAEOhNLwnxjm1rLcm5MdBK7eT1dIknANUQtNw1H0SN
    CrmljYo5r/LJbgNxRIBpFK2+ZihzvovaQnrrQQLVDZwUB1dTBLeO3NFhhfKA08otRFV3
    I63g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847899;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aebmW30p8aSy8ZbiG8FVV1BV3qJw8YQmzxiu0pIwFxU=;
    b=c/MaKwTpb25k2Ii/WPKi5bRa3dhl2xMCIc44Gpim9vNbWGOCL1TnR1es7KF0X/Grbb
    6cumaaxEYSHr3bzlDNYvGvn53Zvjt65NE+KW3PHVEAq3tqdtEpiKs1UE8f8svC8ysRJB
    Q+9BToDThANxVxIU9Whc3+4qSegFTLSWwJvHMDgr2QufxEYRpW0LN+ohxkwUfCf0iyJC
    Dc4v1UQOnoyUzSLkaxDSybfjpC6xRN/QfIKLhNrJPs8kQYMC8O26AYVAEv3I0I+vqTTs
    MGIO0LXhyLdBkUWOO5xDlPONpeEeBkxfJZjDxe9T1iywjcgkykwD8ax/5jYi+nLY6Pvq
    YU8g==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847899;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=aebmW30p8aSy8ZbiG8FVV1BV3qJw8YQmzxiu0pIwFxU=;
    b=I0fK91BTFb+YLpjAxmw4SLrJF10jLgNtr2YzluulSr0rNp+lZvDkd/6A2v19c/5y3M
    iEHRslKvBIUfw94aXpnZfywrZay0EbLX04diucJ2ZaEmM+1WWNEREygAyyssI5Q4lHvi
    oDfXeE3reWqHLeN8tUJRam366HvfHempVythqpZWBQasaXmVfV8HfafGoR3aZqwXNVLT
    Cy35d9X3Ju3GDw44OzzAeanSobNQ5lgoJ7WuZhQsiw/remW0bS/Cuf+RpRJ/QsIoQLGy
    Ye2S3/J0nYPOP738PpZNlnvqcHBtcxlGAadMtUOAVEZ/R6ZgWbMDG9OX6hcy6ooiq6qf
    Jd5A==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 12/36] tools: save: move mfns array
Date: Wed, 16 Jun 2021 14:51:05 +0200
Message-Id: <20210616125129.26563-13-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving the mfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index b3941af537..6129710a3f 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -225,6 +225,8 @@ static inline int update_blob(struct xc_sr_blob *blob,
 
 struct sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Mfns of the batch pfns. */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index e29b6e1d66..6b09784be8 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = NULL, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Mfns of the batch pfns. */
-    mfns = malloc(nr_pfns * sizeof(*mfns));
     /* Types of the batch pfns. */
     types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
@@ -118,7 +116,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !mfns || !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !types || !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -277,7 +275,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(guest_data);
     free(errors);
     free(types);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142957.263743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1k-0005kx-Sw; Wed, 16 Jun 2021 12:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142957.263743; Wed, 16 Jun 2021 12:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1k-0005kQ-OG; Wed, 16 Jun 2021 12:52:16 +0000
Received: by outflank-mailman (input) for mailman id 142957;
 Wed, 16 Jun 2021 12:52:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1j-00075D-8G
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:15 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 369b540b-50dd-4914-92b8-e2f355492428;
 Wed, 16 Jun 2021 12:51:49 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCphtmH
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 369b540b-50dd-4914-92b8-e2f355492428
ARC-Seal: i=1; a=rsa-sha256; t=1623847903; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=N9SFQiycwUqMVroio62f217jPoVGGdQJPf+D4dC5ewl1zRi6GWZOq5/LTLwxJbNuF1
    9BfpbqPX6LQPB4vOcV5UPcNv4ZDrHaYcxfQhfG9hmP7QCeTMYYGIZVZQvtap80KRLHqX
    Jj4kVAMFAjF+X9iA05oIcmhnXZK1FqFzaoe5/j4wLIjeaNR/glhGnGrD0VQZESX+opfM
    GDAq4bwCedM48Jo8Na7rwEAciY81rkFfrUNAE78psUmav+Ruz1cWMZRS064C9wXrGOlL
    5eBChx/a46rzypa89VzkNt9S1hdPVtwm+OZn3v/VUCEEN2nGbI1CjOUSd/yg7mRIlG58
    /AUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2Gn1MT0Bi1Jqgmn6EP44HuJomJlyOc6/uRvNu601HFs=;
    b=TTyH7AS0EYebV694Lh3ak/7t7MpNELS967/d4eOPxKcUOU8PGLBzIivhH89OA/R3jC
    vz9f3/ke1SiOubqneeASXXLI/T9s4cH05Djx7giPr3m6x2keMKisCzwPZGEWxVQAR0Fd
    NQHBuTBuHu3HQhcb12hNY/WVcB3lfKHGzlR72v63f6hq6pQ0gGCCYC5XjIf255A6kAk2
    ytL2PW6LB09Uc2uWJPWcZ3FsbVFDvaezeOcN9MrP7w9Ix/yrFKelf8PzqLYx8m0GhFdE
    ivA3x4o+bXEFXA5KeJyyaDr+zpq3J24MmwHK3kvWHp+bIIVIy08xDg2K/iWox1hWyWMS
    y2eQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847903;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2Gn1MT0Bi1Jqgmn6EP44HuJomJlyOc6/uRvNu601HFs=;
    b=HsPEmMwcLZBl7ViPln50aU/C9dmPPsiFdX7Gu+BIHYlvtmUwrzRTVn5aUVIOotoJUa
    xY+D0/ZdQEMjoBTz+nMkMBKTcWgY2JwXMrbUImBIcgmN/sIo+/Tr66fychQ1WT/tqkjH
    G0tQzl3iZ849Ok4KZ9vi50aRFysBJ/6fmswu2BbCg/PeftVFLIkBUAytM4VgSAd07o00
    VKy9vYaxcL0Ay9/a33XngyJL0YFAHrVEprB392l6WRypprNQdUVdqCABDbDjIgHWJfbC
    GNj5Bjdn/QD0x5V32rlAjVN3DDMLQEsFv4NotZMHtCvQDsbCIQM+0gUO/N5zMqTgrUTL
    eoTQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 22/36] tools: restore: move mfns array in populate_pfns
Date: Wed, 16 Jun 2021 14:51:15 +0200
Message-Id: <20210616125129.26563-23-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving populate_pfns' mfns array into preallocated space.
Use a prefix to avoid a conflict with an array of the same name used in handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h  | 2 ++
 tools/libs/saverestore/restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 34042c2b90..3cfb23861f 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -242,6 +242,8 @@ struct sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    /* populate_pfns */
+    xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 6eb955423c..0c29478ccb 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -138,12 +138,12 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
+    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
         *pfns = malloc(count * sizeof(*pfns));
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !mfns || !pfns )
+    if ( !pfns )
     {
         ERROR("Failed to allocate %zu bytes for populating the physmap",
               2 * count * sizeof(*mfns));
@@ -191,7 +191,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
  err:
     free(pfns);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142958.263748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1l-0005tQ-P6; Wed, 16 Jun 2021 12:52:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142958.263748; Wed, 16 Jun 2021 12:52:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1l-0005r5-Bm; Wed, 16 Jun 2021 12:52:17 +0000
Received: by outflank-mailman (input) for mailman id 142958;
 Wed, 16 Jun 2021 12:52:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1k-0006lZ-0y
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:16 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60011629-2e09-443f-87df-86cf30994ee6;
 Wed, 16 Jun 2021 12:51:44 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpctm0
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60011629-2e09-443f-87df-86cf30994ee6
ARC-Seal: i=1; a=rsa-sha256; t=1623847898; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=pFTQeRYsr2MTd3019r4AEMFlQuer/MMi9ihfJb0tTG6q5Zpz0ChAs4/r+ClZFhRFrg
    EUuvnfONNvYhxxTMsXP9LkSTK74641BL60nFoHuC1h/nLTBB9bKvP9VYBanQx5rA4ghf
    nErv0iQ5I5UkWbGszs/xx3kU4TTqcEOY3WpEnM2bISjvq2meu8NUmcvO6x/D1WaUp9+B
    ZVdf9DNVr6wXgz7eK4wsWJDflVC+ba5WAl5dtQu5J0BGy/59T0RR+rWmwYmSx1aHSeND
    /5HyH/pxK2uiuAygJKCHWhesmKSqiQjSP/+4gZLn07KpA3DJ4RrGViwvPwPTzcTApXG3
    o+Xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847898;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=LAtpQH5Nzzg8VriDH1a+nFDqqWS5bmIvzEWNNDGz4ss=;
    b=an9UrxYlf/qKTISLIAosLyLVy24jAp/pJxHtf1nf8iB61a6GhgjQG1Utr8j8rEFi5Q
    IwdCQgVMzhWU7G6syZLblQQYfuHCkiJyqMUuCS3WOLShL9H6AvDTEIIFdmjI8ALheS1P
    ZjSx2P50ntV8z2uSOMf66XhTnwQ8N8sZUFnQZMZsb8tQxF146sVAIVO5SxFyrF2qyQcT
    VRprS/Taaf9ZBQUhND3ZIhP08lmQH4aByc1u4F/mUKQvRoV4xNRstFHMRVCWdNVEw/P1
    9JyMIw5WDSRCx/cAnx7Zv4hFkkzcOiM9MLYkm24/RRgqsRO+D8r98KHTdtumpl8wLIhG
    C+kw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847898;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=LAtpQH5Nzzg8VriDH1a+nFDqqWS5bmIvzEWNNDGz4ss=;
    b=DHAkqf8vlK/aDQZNR6eWescyrHhCQzm6nKxrdI0no+Bm59v6PO9S+Vx636nAgTyMs4
    tSfDujVUOiErBYIj6gMS/RrvtSfAFjWlj9I7we72tBiAWa3xfvTPZVV2RgAUxyXtpnRf
    vzv35+ZQ+q9KyIb4zgntvyrz6wkKgr18sl4C6Hcbm3mhN31030EVwEfuLoRnM3uW1GSC
    toAc0i38DKXSFwDl481uSTrYcXrnLgQT/CvZ6m1RUXmPm3ZzO6itHBC2KZcwMxkGcdO9
    AJHyYw8r0EtycmO7V01VfC2OHUeUCf9Iqs6yCl7oLanwuA08+9+kwMXhVa663sTfZVcP
    eaPw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 11/36] tools: prepare to allocate saverestore arrays once
Date: Wed, 16 Jun 2021 14:51:04 +0200
Message-Id: <20210616125129.26563-12-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The hot path 'send_dirty_pages' is supposed to do just one thing: sending.
The other end, 'handle_page_data', is supposed to do just receiving.

But instead both do other costly work, like memory allocations and data moving.
Do the allocations once; the array sizes are a compile-time constant.
Avoid unneeded copying of data by receiving it directly into mapped guest memory.

This patch is just preparation; subsequent changes will populate the arrays.
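
The allocate-once scheme can be sketched as follows (a minimal standalone illustration with simplified types; the struct and function names mirror the patch, but the MAX_BATCH_SIZE value and the save_ctx wrapper here are placeholders):

```c
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024           /* placeholder; a compile-time constant */

typedef unsigned long xen_pfn_t;      /* simplified stand-in */

/* All per-batch scratch arrays live in one struct whose size is known at
 * compile time, so a single malloc() at setup replaces a malloc()/free()
 * pair on every batch. */
struct sr_save_arrays {
    xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
};

struct save_ctx {
    struct sr_save_arrays *m;
};

static int setup(struct save_ctx *ctx)
{
    ctx->m = malloc(sizeof(*ctx->m));
    return ctx->m ? 0 : -1;
}

static void cleanup(struct save_ctx *ctx)
{
    free(ctx->m);
    ctx->m = NULL;
}

/* Hot path: no allocation, just index into the preallocated space. */
static void add_to_batch(struct save_ctx *ctx, unsigned int i, xen_pfn_t pfn)
{
    ctx->m->batch_pfns[i] = pfn;
}
```

The trade-off is a fixed MAX_BATCH_SIZE-sized allocation held for the whole migration instead of exact-size allocations per batch.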

Once all changes are applied, migration of a busy HVM domU changes as follows:

Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error

With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error

Note: the performance improvement depends on the network cards used,
the wire speed and the host:
- No improvement is expected with a 1G link.
- An improvement can be seen, as shown above, on a 10G link.
- Only a slight improvement can be seen on a 100G link.

This change also populates sr_save_arrays with "batch_pfns" and
sr_restore_arrays with "pfns", to make sure malloc is always called
with a non-zero size.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- rename xc_sr_save_arrays to sr_save_arrays
- rename xc_sr_restore_arrays to sr_restore_arrays
- merge handling of "batch_pfns" and "pfns" to make sure malloc is
  called with a non-zero size value (jgross)
---
 tools/libs/saverestore/common.h  | 12 +++++++++++-
 tools/libs/saverestore/restore.c | 14 ++++++++++----
 tools/libs/saverestore/save.c    | 27 +++++++++++++--------------
 3 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index d61569e1a6..b3941af537 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -223,6 +223,15 @@ static inline int update_blob(struct xc_sr_blob *blob,
     return 0;
 }
 
+struct sr_save_arrays {
+    xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
+};
+
+struct sr_restore_arrays {
+    /* handle_page_data */
+    xen_pfn_t pfns[MAX_BATCH_SIZE];
+};
+
 struct xc_sr_context
 {
     xc_interface *xch;
@@ -255,11 +264,11 @@ struct xc_sr_context
 
             struct precopy_stats stats;
 
-            xen_pfn_t *batch_pfns;
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
+            struct sr_save_arrays *m;
         } save;
 
         struct /* Restore data. */
@@ -311,6 +320,7 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            struct sr_restore_arrays *m;
         } restore;
     };
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 70c92eaadc..e18a03b381 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -315,7 +315,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     unsigned int i, pages_of_data = 0;
     int rc = -1;
 
-    xen_pfn_t *pfns = NULL, pfn;
+    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
     uint32_t *types = NULL, type;
 
     /*
@@ -363,9 +363,8 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    pfns = malloc(pages->count * sizeof(*pfns));
     types = malloc(pages->count * sizeof(*types));
-    if ( !pfns || !types )
+    if ( !types )
     {
         ERROR("Unable to allocate enough memory for %u pfns",
               pages->count);
@@ -412,7 +411,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
                            &pages->pfn[pages->count]);
  err:
     free(types);
-    free(pfns);
 
     return rc;
 }
@@ -739,6 +737,13 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    if ( !ctx->restore.m ) {
+        ERROR("Unable to allocate memory for arrays");
+        rc = -1;
+        goto err;
+    }
+
  err:
     return rc;
 }
@@ -757,6 +762,7 @@ static void cleanup(struct xc_sr_context *ctx)
         xc_hypercall_buffer_free_pages(
             xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->restore.p2m_size)));
 
+    free(ctx->restore.m);
     free(ctx->restore.buffered_records);
     free(ctx->restore.populated_pfns);
 
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index f8fbe7a742..e29b6e1d66 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -77,7 +77,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 
 /*
  * Writes a batch of memory as a PAGE_DATA record into the stream.  The batch
- * is constructed in ctx->save.batch_pfns.
+ * is constructed in ctx->save.m->batch_pfns.
  *
  * This function:
  * - gets the types for each pfn in the batch.
@@ -128,12 +128,12 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
-                                                      ctx->save.batch_pfns[i]);
+                                                      ctx->save.m->batch_pfns[i]);
 
         /* Likely a ballooned page. */
         if ( mfns[i] == INVALID_MFN )
         {
-            set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+            set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
             ++ctx->save.nr_deferred_pages;
         }
     }
@@ -179,7 +179,7 @@ static int write_batch(struct xc_sr_context *ctx)
             if ( errors[p] )
             {
                 ERROR("Mapping of pfn %#"PRIpfn" (mfn %#"PRIpfn") failed %d",
-                      ctx->save.batch_pfns[i], mfns[p], errors[p]);
+                      ctx->save.m->batch_pfns[i], mfns[p], errors[p]);
                 goto err;
             }
 
@@ -193,7 +193,7 @@ static int write_batch(struct xc_sr_context *ctx)
             {
                 if ( rc == -1 && errno == EAGAIN )
                 {
-                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+                    set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
                     ++ctx->save.nr_deferred_pages;
                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
                     --nr_pages;
@@ -224,7 +224,7 @@ static int write_batch(struct xc_sr_context *ctx)
     rec.length += nr_pages * PAGE_SIZE;
 
     for ( i = 0; i < nr_pfns; ++i )
-        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.batch_pfns[i];
+        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.m->batch_pfns[i];
 
     iov[0].iov_base = &rec.type;
     iov[0].iov_len = sizeof(rec.type);
@@ -296,9 +296,9 @@ static int flush_batch(struct xc_sr_context *ctx)
 
     if ( !rc )
     {
-        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.batch_pfns,
+        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.m->batch_pfns,
                                     MAX_BATCH_SIZE *
-                                    sizeof(*ctx->save.batch_pfns));
+                                    sizeof(*ctx->save.m->batch_pfns));
     }
 
     return rc;
@@ -315,7 +315,7 @@ static int add_to_batch(struct xc_sr_context *ctx, xen_pfn_t pfn)
         rc = flush_batch(ctx);
 
     if ( rc == 0 )
-        ctx->save.batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
+        ctx->save.m->batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
 
     return rc;
 }
@@ -849,13 +849,12 @@ static int setup(struct xc_sr_context *ctx)
 
     dirty_bitmap = xc_hypercall_buffer_alloc_pages(
         xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
-    ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
-                                  sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
+    ctx->save.m = malloc(sizeof(*ctx->save.m));
 
-    if ( !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.m || !dirty_bitmap || !ctx->save.deferred_pages )
     {
-        ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
+        ERROR("Unable to allocate memory for dirty bitmaps and"
               " deferred pages");
         rc = -1;
         errno = ENOMEM;
@@ -884,7 +883,7 @@ static void cleanup(struct xc_sr_context *ctx)
     xc_hypercall_buffer_free_pages(xch, dirty_bitmap,
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));
     free(ctx->save.deferred_pages);
-    free(ctx->save.batch_pfns);
+    free(ctx->save.m);
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:52:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:52:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142961.263764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1p-0006pg-UW; Wed, 16 Jun 2021 12:52:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142961.263764; Wed, 16 Jun 2021 12:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV1p-0006p1-Oa; Wed, 16 Jun 2021 12:52:21 +0000
Received: by outflank-mailman (input) for mailman id 142961;
 Wed, 16 Jun 2021 12:52:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1o-00075D-8h
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:20 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.103])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a55ae0b3-93b5-4fad-bf8e-247239aca60e;
 Wed, 16 Jun 2021 12:51:50 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpitmI
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a55ae0b3-93b5-4fad-bf8e-247239aca60e
ARC-Seal: i=1; a=rsa-sha256; t=1623847904; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=LTxVl9Oyg2idIwCSzLv4XMhSMg50poqnIqEXBORyknOrl2MMSI8beM3alR1r8dAiNq
    qQfKyOwHUDeig5sb4IxCh8JoJYfeG+8N8odQsng/yKsGvQqpo2ptViKYqtC/Xzo5G5nJ
    5rPBsCgXIsNDynCFvKqv9xIX/YbMCyuBZAXUeAXhABJPZ52VZOqtKnJLYd26Xzagk5rp
    GrkRAX8yk5p+ZOQr4/fqYvZmh3EJrfoEquOCGKFDnhps7jrTi+SHpFY/Mr9UDejr2Phi
    FMgXxfgo4wGBjdWZchBjwZ5wM1pbXDU/YG5JUvfGHbswerXw5v8Y7MkfGRcuvGFgxfNM
    lVvQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847904;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=pnLF16VUG8E4Jp5fWe4D439r7jQrrghaZKOdiE+JFI0=;
    b=t1ElC85hwloXfK5N6nrxainkrTPKhRZHdvPq6dL7JDtPKhDFDmDwz3Jdbpi33IVN+B
    ZHt7hvT3IcBtT/zhhe7WP2SuBbMhqEkYjShsDFuy8JzXQymbXn5pP14l+PMIOu5Mk8IE
    L2s4nWXoAO+1H3V/NwRbw3zUSOozMY2Xso2gyVL32e2MECs7C4UG0nRY+j16xh9RYsqD
    FBhieSIpl/MZciH0KcH3f+lGdZos3fpLTm8jCl4ZGR0ou2t84iSShLXhEjuPhMB+o/Zp
    2luAbl6KpFxhKQszOy/tZXz4JDtMJXI1HFNEfGOaxS2O1/bf1FPK5g1SEZuI8nkmIAmo
    9OxA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847904;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=pnLF16VUG8E4Jp5fWe4D439r7jQrrghaZKOdiE+JFI0=;
    b=gCyQuUTTqDyZMhlmYIN9tY4A0P4yNzjeUiLbPrNZBns0jWdbQR/4EIAVBkDiKWm6v8
    +z1bs71Tjm8IgH7dP9k/QuoUljLADS5U+d+ddz8KAqhkIAr2EM0kts06fWJPCuMigebL
    dSNE1S2V61S9fBxiFSRzcDSVsoE6QLr4/J6pZl25d+Wk9KJFGEUz9seNYkVFqSQ+iPpz
    4pZ1Ou9KKgvPt2LOwSndsnDCDNLwGLIKhag4y/LYr2TPWBRjBnpOQlI4u+BGiZa+/oXx
    RWEQH3fboZ+i1pDcrhdc3FyG1dW83flQhyrPENqhh4agYg3xlcQDOQFeIYmKHzDeqpxJ
    W5Sw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 23/36] tools: restore: move pfns array in populate_pfns
Date: Wed, 16 Jun 2021 14:51:16 +0200
Message-Id: <20210616125129.26563-24-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path by moving populate_pfns' pfns array into preallocated space.
Use a prefix to avoid a conflict with an array of the same name used in handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h  |  1 +
 tools/libs/saverestore/restore.c | 11 +----------
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 3cfb23861f..379887e149 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -244,6 +244,7 @@ struct sr_restore_arrays {
     int map_errs[MAX_BATCH_SIZE];
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
+    xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 0c29478ccb..f2234eac55 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -139,17 +139,10 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
-        *pfns = malloc(count * sizeof(*pfns));
+        *pfns = ctx->restore.m->pp_pfns;
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !pfns )
-    {
-        ERROR("Failed to allocate %zu bytes for populating the physmap",
-              2 * count * sizeof(*mfns));
-        goto err;
-    }
-
     for ( i = 0; i < count; ++i )
     {
         if ( (!types ||
@@ -190,8 +183,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     rc = 0;
 
  err:
-    free(pfns);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 12:59:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 12:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142972.263776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV8P-0001Tm-Q5; Wed, 16 Jun 2021 12:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142972.263776; Wed, 16 Jun 2021 12:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltV8P-0001Tf-MC; Wed, 16 Jun 2021 12:59:09 +0000
Received: by outflank-mailman (input) for mailman id 142972;
 Wed, 16 Jun 2021 12:59:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV8O-0001TZ-Iy
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:59:08 +0000
Received: from mo4-p04-ob.smtp.rzone.de (unknown [81.169.146.176])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b2af396-d96c-4737-963e-f6fd63c1ca17;
 Wed, 16 Jun 2021 12:59:06 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCwwtp4
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:58:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b2af396-d96c-4737-963e-f6fd63c1ca17
ARC-Seal: i=1; a=rsa-sha256; t=1623848339; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=mUVmRMntJsGRjZRwHGWeMiQoTiSeKxc+wsihjAkpFMeXty7uVslwQquXyJIHleUFjv
    B+COnkeLtXZ8MPVaaCJtFAeLRYxfFOaQXD/Edx/GZgiCHUN+gxXvsXSyzbo10KEab+B/
    kClxNjWYzJCPUbRaWC8re2069a7W6UWQSzO+XwN6H4JV73nt6F+fKWqEYxqOSaoU3XUD
    ve0JVCAC0D5P6bmRdAPIhMx+BHZ5Y2wxVQxoWzmkgZGQZPZ5kpstE7Odx04q+at/GUEv
    nd8Ry7Cp8/OVzo9IEY0YIoEBSPmrZEwlCxmN+2N7TaBWMGEfvblo299Pu5AnBgc2UvwK
    4j8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623848339;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=sOq0vE76oOysPkPAtwQ2V9NTvbVQfVVkGBXRa8pF8jM=;
    b=mWWdSsHYs9DDQoOtVQbkMDWWrh6z6PQrmr1h2alFm6bxgvETZMbNCjq7sC8LGBsW5S
    HlyBEdX/STLLF880+YEJRPl2BWvjI1QMVQ3EUTAZPL2gbH20YjlYvUNrOlQn98q9GXcK
    P/wGAegj9FnOUgResdaRootSSJG0/HMF9b1ukcnNuSfZ5D/mgnqH/yl3l4ztN8YvkeNz
    7nRNRRrWZDTAAQdU6BOAxBlCxcjvnX2IQZrEao9vpcD2JCcjVBGOlQIoWZbsdwXei3vN
    tW9Jum8o5Y8ACt4DWsGy6kYGhL97oxt0P1C+W6X1kGVLuBlqaFX6AR0lCggTe4jXPKsl
    VLEA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623848339;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=sOq0vE76oOysPkPAtwQ2V9NTvbVQfVVkGBXRa8pF8jM=;
    b=doUICSDiq2DM4BcvpF/+jFEI467WTKOCNfxeALhiX8hGovATYPevGqEm6ctYt1hFFQ
    Id3vIahGwZ4aFjvxFAAOPpywWmrWbTZKea4aVmw6XegGNe5DNXud6QfTawWXu4m8j+gl
    p5cH2bdg+yy9foCjUj6uXrbT4m2flQuLnZrd3Aujb2cR6cNmAN4KkVxqGhbWh+rLN2qp
    N8nbOQjxU2EdNcvC4/aS+EpMnqMhz9nD1CsQ/cOSb3Z4sZl34JP5PxCLtPtH0EMQnl+j
    IA/S5FtRgtLUqYcorrCGyP85D8WIqnvvq537Ftp7+/Z0Tqec3sL3UmeHsXBK0C+KRCVG
    hxEA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Wed, 16 Jun 2021 14:58:46 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Michael Brown <mcb30@ipxe.org>, "Bernhard M. Wiedemann"
 <bwiedemann@suse.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
 <wl@xen.org>
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC11
Message-ID: <20210616145846.305d3ce1.olaf@aepfle.de>
In-Reply-To: <b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
References: <20210615212613.6270-1-olaf@aepfle.de>
	<b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/wrElCStfx.hA2l3F1hAu_2R";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/wrElCStfx.hA2l3F1hAu_2R
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Please revert bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e in ipxe.git, CentOS 7
apparently fails to handle '-D'.

It worked in my testing with SLE12SP5 and SLE15SP3 as a base system.

See below.


I guess for xen.git, updating to just
bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e^ will be good enough.

Olaf

On Wed, 16 Jun 2021 13:33:52 +0100, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:

> On 15/06/2021 22:26, Olaf Hering wrote:
> > Use a snapshot which includes commit
> > f3f568e382a5f19824b3bfc6081cde39eee661e8 ("[crypto] Add
> > memory output constraints for big-integer inline assembly"),
> > which fixes build with gcc11.
> >
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> > ---
> >  tools/firmware/etherboot/Makefile | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
> > index ed9e11305f..23b3f6ca9d 100644
> > --- a/tools/firmware/etherboot/Makefile
> > +++ b/tools/firmware/etherboot/Makefile
> > @@ -10,7 +10,8 @@ else
> >  IPXE_GIT_URL ?= git://git.ipxe.org/ipxe.git
> >  endif
> >
> > -IPXE_GIT_TAG := 988d2c13cdf0f0b4140685af35ced70ac5b3283c
> > +# put an updated tar.gz on xenbits after changes to this variable
> > +IPXE_GIT_TAG := bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e
>
> CI says no.
>
> Gitlab CI is currently fairly red because of a clang build fix which
> hasn't made its way into master yet, but this job:
>
>   https://gitlab.com/xen-project/patchew/xen/-/jobs/1349871230
>
> shows a real failure on CentOS 7.
>
> ...
>   [VERSION] bin/version.rtl8139.rom.o
>   [AR] bin/blib.a
> ar: creating bin/blib.a
> objcopy: invalid option -- 'D'
> Usage: objcopy [option(s)] in-file [out-file]
> ...
>
> ~Andrew


--Sig_/wrElCStfx.hA2l3F1hAu_2R
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDJ9YYACgkQ86SN7mm1
DoBPrA/+Ktxxob0B0BOkxe4M/DZb5YAnFq1eEZUfbSgXloG3KhIt6Ozfw4vLiQp4
FSexyAlMY2vSZr9ws4LKmkRiL090tcf67Y3mRrp6Qnu7ZL3tVehzirqYSrGLnq08
4V8OFkI5Daxo+Ls4LNLx/ZhPRsB5dk7MhgrB2BlOhK2wkbiXgcXIx1m+fK5PHIUr
vBk0wDFsh0z68RKfOTDOva2CAscBqBzPzRUBkFid6qsbEj4kHTzVxxk76qkM92iY
RrNqkLEH0HKKN0Djl1dRa9pp6y8UXxK2cnWbjOTo72oInJ9JBsVjdTMLF7qpjmSx
XnLD0UzH9+/Hc5Sde9RJ+/38/HknEuLn/HeMDVWGRc2ixRzq0CzDpiMRftu3aBUt
KEPpZr/DDIuQxWTwm6ush2nVBFSJAn4HhcMkhmcPFdULiWhowqh/aNvAb2ZU+r1+
ywzXPh0a7hkjxYBRS/QnYNq05hZhKIBJMyOiHIZG1wmPzPx5XCk8BvJ5JTK/FKsC
J7a5y8Uf18i8zP1WQ0Idom1+PnrOreeJBoEUIbN7V8nRaWywiKy+AFxdsP/1vzTF
VjS9tTH+dtLd58aGTOR/HXFnALV5fl4v1Tv/oiit2p74Kbfj0KZ0tuhbyye+9Byk
WLUCmzY+Ymt+UkBY/K+25lQWUBkYF4kkLhKhmXQiLv6Wx2pYOuQ=
=9a8r
-----END PGP SIGNATURE-----

--Sig_/wrElCStfx.hA2l3F1hAu_2R--
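The CI log quoted above shows an objcopy from an older binutils rejecting `-D` (deterministic mode) during the iPXE build. One way to cope with such toolchain skew is to probe the flag before relying on it; the sketch below is a hedged illustration of that approach (the `probe_objcopy_D` helper is hypothetical, not part of the iPXE or Xen build system):

```shell
# Probe whether the local objcopy accepts -D before using it.
# Older binutils (e.g. the CentOS 7 toolchain in the report above)
# fail with "invalid option -- 'D'", which makes objcopy exit non-zero
# before it even looks at the input file.
probe_objcopy_D() {
    command -v objcopy >/dev/null 2>&1 || { echo unknown; return 0; }
    src=$(mktemp) || { echo unknown; return 0; }
    # Any input works for a pure option-parsing probe; use binary mode
    # so no real object file is required.
    if objcopy -I binary -O binary -D "$src" "$src.out" 2>/dev/null; then
        echo yes
    else
        echo no
    fi
    rm -f "$src" "$src.out"
}
probe_objcopy_D
```

The revert requested in the mail takes the simpler route of not emitting `-D` at all, which keeps the build working on the oldest supported distributions.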


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:06:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142983.263787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVFk-0002up-Jo; Wed, 16 Jun 2021 13:06:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142983.263787; Wed, 16 Jun 2021 13:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVFk-0002ui-FX; Wed, 16 Jun 2021 13:06:44 +0000
Received: by outflank-mailman (input) for mailman id 142983;
 Wed, 16 Jun 2021 13:06:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV23-00075D-90
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:35 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d52c234-137d-4e0e-94cb-eb330f4ac413;
 Wed, 16 Jun 2021 12:51:52 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpltmO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d52c234-137d-4e0e-94cb-eb330f4ac413
ARC-Seal: i=1; a=rsa-sha256; t=1623847907; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=LMTXMECnOOeKSJF1PJhPqO/25SF9NGK1THJsaFw/1J/70Y45J1kHvpSqyapwdqlfjS
    ipM49hs3lVxcpWlVpQCaMx1E1TVPLYp9rfnlTyD2pCNXF8+Vuw+1xzcx/52WMl/kfAGN
    OgP2Qz73BnsH4oRTZwDMCwjtrQJde4n0ufC7AJsXyUIJj9fYLbFgXK4whg1f09IH/qaK
    xKdt58Rs0VEXrFVKRPYARh9Z5lhA5skY6f7Q38KncPPagZtKKsc8O0/c9FFoeX/U4leC
    n8E57UTQjQOTsXRLzT5CkaJtgyPTbFviN4njGZ3F3D3x0PGKzwIaff3yNtfo3PQfgaJ/
    URrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847907;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=YGLyCDpmfEm63K5TIs+xnzmQnbcLd4l6eUuaqJ8OGLg=;
    b=jXRFafS3o7BXy7rTxn/pEYaFG0E8WY33NmNWTFSQaRBis0744oY8Pyv0hHr/3I2lOZ
    Gax4JSMzlmxlp04trR7+k6VAmxz6VUKsOZtPKAnIAA4OwzpgUoy13onu4FLzkHz9dPZK
    KctNTiLN4mKeUK4kJwVt6Q8B35icMQNhdGYzyLQ3hIX0zu3OhSt8Ljr+ccskJijWmSUo
    7WZfxSyaw6WQhqAmBS4LbMhwVxOX9f2p4CY/NLa+x5HSGqneJetffaJtOIG5LOBllmJJ
    7s0Go3Fwe9q1wm6WwFohapArhlnDBZ9bqIf+IijghCMYnh7dTWO2M/yLHqlSWVhaJTgC
    WTLg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847907;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=YGLyCDpmfEm63K5TIs+xnzmQnbcLd4l6eUuaqJ8OGLg=;
    b=q8THds8HGfoUSPajomJ9wIW1uH8sUibkJEb3bkBcdaOlZUohpfD7m5LcOugPhyFwog
    104Cp3Lts5G+EAu4WBi+LB4oMuJIeTTdLKuzsXZcWSZaZnzHDbD/bu6D9ivTGM5G8t54
    5Fxj3RSR5JxEsfQ96QmBUPPTV8fvuY5wALKZV6H7f2aftX7bIgdCeoiAJMMfFIx5Ajpd
    FLXgzOAsxM7hJ9PUIy9GgapNYw6NSa6sk1Oq3Fd/ICR80oMZdedzBquBzVTfruekinfg
    2p1op6igS7JFuwBPgKdH/ASiPY56snontUIX+7riJ3KBqSqNcPOF6qVVq9Q2oQCS4yJe
    R6Og==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 29/36] tools: change struct precopy_stats to precopy_stats_t
Date: Wed, 16 Jun 2021 14:51:22 +0200
Message-Id: <20210616125129.26563-30-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This will help libxl_save_msgs_gen.pl to copy the struct as a region of memory.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/include/xensaverestore.h  | 7 +++----
 tools/libs/saverestore/common.h | 2 +-
 tools/libs/saverestore/save.c   | 6 +++---
 3 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/tools/include/xensaverestore.h b/tools/include/xensaverestore.h
index 0410f0469e..dca0134605 100644
--- a/tools/include/xensaverestore.h
+++ b/tools/include/xensaverestore.h
@@ -23,18 +23,17 @@
 #define XCFLAGS_DEBUG     (1 << 1)
 
 /* For save's precopy_policy(). */
-struct precopy_stats
-{
+typedef struct {
     unsigned int iteration;
     unsigned long total_written;
     long dirty_count; /* -1 if unknown */
-};
+} precopy_stats_t;
 
 /*
  * A precopy_policy callback may not be running in the same address
  * space as libxc an so precopy_stats is passed by value.
  */
-typedef int (*precopy_policy_t)(struct precopy_stats, void *);
+typedef int (*precopy_policy_t)(precopy_stats_t, void *);
 
 /* callbacks provided by xc_domain_save */
 struct save_callbacks {
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 5c440f28ec..60bbba6aa9 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -283,7 +283,7 @@ struct xc_sr_context
             size_t pages_sent;
             size_t overhead_sent;
 
-            struct precopy_stats stats;
+            precopy_stats_t stats;
 
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index e486bce96f..537b977ba8 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -488,7 +488,7 @@ static int update_progress_string(struct xc_sr_context *ctx, char **str)
 #define SPP_MAX_ITERATIONS      5
 #define SPP_TARGET_DIRTY_COUNT 50
 
-static int simple_precopy_policy(struct precopy_stats stats, void *user)
+static int simple_precopy_policy(precopy_stats_t stats, void *user)
 {
     return ((stats.dirty_count >= 0 &&
              stats.dirty_count < SPP_TARGET_DIRTY_COUNT) ||
@@ -515,13 +515,13 @@ static int send_memory_live(struct xc_sr_context *ctx)
     precopy_policy_t precopy_policy = ctx->save.callbacks->precopy_policy;
     void *data = ctx->save.callbacks->data;
 
-    struct precopy_stats *policy_stats;
+    precopy_stats_t *policy_stats;
 
     rc = update_progress_string(ctx, &progress_str);
     if ( rc )
         goto out;
 
-    ctx->save.stats = (struct precopy_stats){
+    ctx->save.stats = (precopy_stats_t){
         .dirty_count = ctx->save.p2m_size,
     };
     policy_stats = &ctx->save.stats;
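Since the patch's point is that a typedef'd plain struct can be treated as one opaque region of memory when crossing the save-helper process boundary, here is a hedged, self-contained sketch: `simple_precopy_policy` mirrors the function in save.c, while `roundtrip()` is a hypothetical stand-in for the byte-wise marshalling that libxl_save_msgs_gen.pl generates.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Mirrors the typedef introduced by the patch. */
typedef struct {
    unsigned int iteration;
    unsigned long total_written;
    long dirty_count;              /* -1 if unknown */
} precopy_stats_t;

typedef int (*precopy_policy_t)(precopy_stats_t, void *);

#define SPP_MAX_ITERATIONS      5
#define SPP_TARGET_DIRTY_COUNT 50

/* Same shape as simple_precopy_policy() in save.c: stop the precopy
 * phase once the dirty set is small enough or iterations run out.
 * 1 stands in for XGS_POLICY_STOP_AND_COPY, 0 for CONTINUE_PRECOPY. */
static int simple_precopy_policy(precopy_stats_t stats, void *user)
{
    (void)user;
    return ((stats.dirty_count >= 0 &&
             stats.dirty_count < SPP_TARGET_DIRTY_COUNT) ||
            stats.iteration >= SPP_MAX_ITERATIONS) ? 1 : 0;
}

/* Marshalling stand-in: because precopy_stats_t is self-contained
 * (no pointers), it survives a byte-for-byte copy through a message
 * buffer, which is exactly what passing it by value relies on. */
static precopy_stats_t roundtrip(const precopy_stats_t *s)
{
    unsigned char buf[sizeof(*s)];
    precopy_stats_t out;
    memcpy(buf, s, sizeof(*s));
    memcpy(&out, buf, sizeof(out));
    return out;
}
```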


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:06:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.142987.263798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVFl-0003Bj-VM; Wed, 16 Jun 2021 13:06:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 142987.263798; Wed, 16 Jun 2021 13:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVFl-0003Bc-RT; Wed, 16 Jun 2021 13:06:45 +0000
Received: by outflank-mailman (input) for mailman id 142987;
 Wed, 16 Jun 2021 13:06:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV2D-00075D-9S
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:45 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0da998a0-f389-4c10-bd1c-49a8d0dc6afe;
 Wed, 16 Jun 2021 12:51:53 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpYtls
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0da998a0-f389-4c10-bd1c-49a8d0dc6afe
ARC-Seal: i=1; a=rsa-sha256; t=1623847895; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=nlXGn4eAj+/o8cz1jwkwxkYwqZj4Dr4F3XxtmexjzjhsiTBWvfgmgK5bomoEHqWlJa
    PiUMT8zfvgr6EAAanYx25PEI4Y15fF34UjD/SE+WbAr25BnE1Zz0UI2N5DmltSVsLw+N
    7YYCFN0OoBI1WuGBbNYWRNU97SysCiXg06NOUeSzPSJVy+Y6iWmFq3v5c6Uos0piDh+/
    jFK5g9f8tWZmcL7utME32WUv0Wo/Y2jt+AirQy0b2ri6RDuLVGE+nOvX6v+sQaFj6a5g
    nOd4/SZZ4Vaws2zrGV55QYp+uP1THIblELU8UDe8rjbS6tmI7EvZ8w6GzSqGU2ld5d9H
    sXQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847895;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=31z4r3p8KG3e4KFRiT8bYpR5AIuurjrWpdtWj4tqq5A=;
    b=b/BtIFTCY9djLKWONeenwbsHzlpRkls0TDQXBh5pl6Jeqz2Ehp9uG1rjZt2EL76E2f
    wSl9yFL7EtHLQmedHfXcWr0qgOLt1zedkSBpVa2r59xFwxmZD/0OHDsa7diHhzibCKlG
    w9VOeGbq3zd9TDMTAm7ANYl+5q3qMn26M4hB2Pyznv/w5kbYd0oyo4n1IlwsKBpxTr00
    /MBByCzdyvQOG2PzLXUKG/ecw4ZLUhooPsbYm1xgzbyE95VaZuoZVWnDXiiq60MnQ7xf
    beF4F8IP9WDj5ChW6Gf0zEQApcDJqXV+xMTrLy5K2WnNSCtD6t0DZUSehHFI65SuyOFl
    8tXQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847895;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=31z4r3p8KG3e4KFRiT8bYpR5AIuurjrWpdtWj4tqq5A=;
    b=bkwIWCRIKq4BG8o2IXVVrJLwM1/4ph7G66ofLvqqM/39E8EddtrHv5YUdtmkpGwQg9
    /AdYkgexVkhfKqCLwqDvhOCjnOcdHWIFWx1vmK9G1R8qb7yMHmcqzglPx+IQHWxFRvEd
    Afa/u574eJGlCwaBNGDEaO0jmcJ9J27Ua0THsUvx+K2LK9ilbgNFKcV9RLynQ+pAH2To
    GMnQvDbTprTESVfklHMl3vp9wdL0ts9gyBMRRH1cj8cMoHz8ddFZCMw6YTLDCOeIyJN/
    yC/eg+g63ctZ2AEXSkP3N+2IAkGxd1WJ6JrBJK7pR5YcNf5xfnWuYqzsb5JqB2y7ij/p
    3ybw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210616 04/36] tools: create libxensaverestore
Date: Wed, 16 Jun 2021 14:50:57 +0200
Message-Id: <20210616125129.26563-5-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move all save/restore related code from libxenguest.so into a separate
library libxensaverestore.so. The only consumer is libxl-save-helper.
There is no need to have the moved code mapped all the time in binaries
where libxenguest.so is used.

According to size(1), before the change:
   text	   data	    bss	    dec	    hex	filename
 187183	   4304	     48	 191535	  2ec2f	guest/libxenguest.so.4.15.0

and after:
 124106	   3376	     48	 127530	  1f22a	guest/libxenguest.so.4.15.0
  67841	   1872	      8	  69721	  11059	saverestore/libxensaverestore.so.4.15.0

While touching the files anyway, take the opportunity to drop the
redundant xg_sr_ filename prefix.
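As a quick sanity check on the size(1) numbers above, the text bytes that left libxenguest.so and the net growth from the split can be computed directly (plain shell arithmetic; the variable names are illustrative):

```shell
# text column from the size(1) output in the commit message
old_text=187183        # guest/libxenguest.so.4.15.0 before the split
new_guest_text=124106  # guest/libxenguest.so.4.15.0 after the split
sr_text=67841          # saverestore/libxensaverestore.so.4.15.0

moved=$((old_text - new_guest_text))             # text no longer in libxenguest
delta=$((new_guest_text + sr_text - old_text))   # combined overhead of two DSOs
echo "text moved out: $moved bytes; net text growth: $delta bytes"
```

So roughly a third of libxenguest's text moves out, at the cost of a few KiB of per-library overhead, which supports the rationale of not keeping the save/restore code mapped in every libxenguest consumer.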

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Wei Liu <wl@xen.org>

v5:
- fix spelling in description
v4:
- drop xg_ prefix from filenames (jgross)
- drop sr_ prefix from filenames (jbeulich)
v3:
- repost in time for 4.16
v2:
- copy also license header
- move xg_nomigrate.c
- add size(1) output to commit msg
- remove change from libxl_create.c
---
 .gitignore                                    |   2 +
 tools/include/xenguest.h                      | 186 ----------------
 tools/include/xensaverestore.h                | 208 ++++++++++++++++++
 tools/libs/Makefile                           |   1 +
 tools/libs/guest/Makefile                     |  11 -
 tools/libs/guest/xg_offline_page.c            |   1 -
 tools/libs/light/Makefile                     |   4 +-
 tools/libs/light/libxl_internal.h             |   1 +
 tools/libs/light/libxl_save_helper.c          |   1 +
 tools/libs/light/libxl_save_msgs_gen.pl       |   2 +-
 tools/libs/saverestore/Makefile               |  38 ++++
 .../xg_sr_common.c => saverestore/common.c}   |   2 +-
 .../xg_sr_common.h => saverestore/common.h}   |  16 +-
 .../common_x86.c}                             |   2 +-
 .../common_x86.h}                             |   2 +-
 .../common_x86_pv.c}                          |   2 +-
 .../common_x86_pv.h}                          |   2 +-
 .../nomigrate.c}                              |   0
 .../xg_sr_restore.c => saverestore/restore.c} |   2 +-
 .../restore_x86_hvm.c}                        |   2 +-
 .../restore_x86_pv.c}                         |   2 +-
 .../xg_sr_save.c => saverestore/save.c}       |   2 +-
 .../save_restore.h}                           |   2 -
 .../save_x86_hvm.c}                           |   2 +-
 .../save_x86_pv.c}                            |   2 +-
 .../stream_format.h}                          |   0
 tools/libs/uselibs.mk                         |   4 +-
 27 files changed, 282 insertions(+), 217 deletions(-)
 create mode 100644 tools/include/xensaverestore.h
 create mode 100644 tools/libs/saverestore/Makefile
 rename tools/libs/{guest/xg_sr_common.c => saverestore/common.c} (99%)
 rename tools/libs/{guest/xg_sr_common.h => saverestore/common.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86.c => saverestore/common_x86.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86.h => saverestore/common_x86.h} (98%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.c => saverestore/common_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_common_x86_pv.h => saverestore/common_x86_pv.h} (98%)
 rename tools/libs/{guest/xg_nomigrate.c => saverestore/nomigrate.c} (100%)
 rename tools/libs/{guest/xg_sr_restore.c => saverestore/restore.c} (99%)
 rename tools/libs/{guest/xg_sr_restore_x86_hvm.c => saverestore/restore_x86_hvm.c} (99%)
 rename tools/libs/{guest/xg_sr_restore_x86_pv.c => saverestore/restore_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_save.c => saverestore/save.c} (99%)
 rename tools/libs/{guest/xg_save_restore.h => saverestore/save_restore.h} (98%)
 rename tools/libs/{guest/xg_sr_save_x86_hvm.c => saverestore/save_x86_hvm.c} (99%)
 rename tools/libs/{guest/xg_sr_save_x86_pv.c => saverestore/save_x86_pv.c} (99%)
 rename tools/libs/{guest/xg_sr_stream_format.h => saverestore/stream_format.h} (100%)

diff --git a/.gitignore b/.gitignore
index 38a085e398..08a321e995 100644
--- a/.gitignore
+++ b/.gitignore
@@ -147,6 +147,8 @@ tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
 tools/libs/light/xenlight.pc
+tools/libs/saverestore/libxensaverestore.map
+tools/libs/saverestore/xensaverestore.pc
 tools/libs/stat/_paths.h
 tools/libs/stat/headers.chk
 tools/libs/stat/libxenstat.map
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 61d0a82f48..7417675b3b 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -24,9 +24,6 @@
 
 #define XC_NUMA_NO_NODE   (~0U)
 
-#define XCFLAGS_LIVE      (1 << 0)
-#define XCFLAGS_DEBUG     (1 << 1)
-
 #define X86_64_B_SIZE   64 
 #define X86_32_B_SIZE   32
 
@@ -433,189 +430,6 @@ static inline xen_pfn_t xc_dom_p2m(struct xc_dom_image *dom, xen_pfn_t pfn)
  */
 struct xenevtchn_handle;
 
-/* For save's precopy_policy(). */
-struct precopy_stats
-{
-    unsigned int iteration;
-    unsigned long total_written;
-    long dirty_count; /* -1 if unknown */
-};
-
-/*
- * A precopy_policy callback may not be running in the same address
- * space as libxc an so precopy_stats is passed by value.
- */
-typedef int (*precopy_policy_t)(struct precopy_stats, void *);
-
-/* callbacks provided by xc_domain_save */
-struct save_callbacks {
-    /*
-     * Called after expiration of checkpoint interval,
-     * to suspend the guest.
-     */
-    int (*suspend)(void *data);
-
-    /*
-     * Called before and after every batch of page data sent during
-     * the precopy phase of a live migration to ask the caller what
-     * to do next based on the current state of the precopy migration.
-     *
-     * Should return one of the values listed below:
-     */
-#define XGS_POLICY_ABORT          (-1) /* Abandon the migration entirely
-                                        * and tidy up. */
-#define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
-#define XGS_POLICY_STOP_AND_COPY    1  /* Immediately suspend and transmit the
-                                        * remaining dirty pages. */
-    precopy_policy_t precopy_policy;
-
-    /*
-     * Called after the guest's dirty pages have been
-     *  copied into an output buffer.
-     * Callback function resumes the guest & the device model,
-     *  returns to xc_domain_save.
-     * xc_domain_save then flushes the output buffer, while the
-     *  guest continues to run.
-     */
-    int (*postcopy)(void *data);
-
-    /*
-     * Called after the memory checkpoint has been flushed
-     * out into the network. Typical actions performed in this
-     * callback include:
-     *   (a) send the saved device model state (for HVM guests),
-     *   (b) wait for checkpoint ack
-     *   (c) release the network output buffer pertaining to the acked checkpoint.
-     *   (c) sleep for the checkpoint interval.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*checkpoint)(void *data);
-
-    /*
-     * Called after the checkpoint callback.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*wait_checkpoint)(void *data);
-
-    /* Enable qemu-dm logging dirty pages to xen */
-    int (*switch_qemu_logdirty)(uint32_t domid, unsigned enable, void *data); /* HVM only */
-
-    /* to be provided as the last argument to each callback function */
-    void *data;
-};
-
-/* Type of stream.  Plain, or using a continuous replication protocol? */
-typedef enum {
-    XC_STREAM_PLAIN,
-    XC_STREAM_REMUS,
-    XC_STREAM_COLO,
-} xc_stream_type_t;
-
-/**
- * This function will save a running domain.
- *
- * @param xch a handle to an open hypervisor interface
- * @param io_fd the file descriptor to save a domain to
- * @param dom the id of the domain
- * @param flags XCFLAGS_xxx
- * @param stream_type XC_STREAM_PLAIN if the far end of the stream
- *        doesn't use checkpointing
- * @param recv_fd Only used for XC_STREAM_COLO.  Contains backchannel from
- *        the destination side.
- * @return 0 on success, -1 on failure
- */
-int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
-                   uint32_t flags, struct save_callbacks *callbacks,
-                   xc_stream_type_t stream_type, int recv_fd);
-
-/* callbacks provided by xc_domain_restore */
-struct restore_callbacks {
-    /*
-     * Called once the STATIC_DATA_END record has been received/inferred.
-     *
-     * For compatibility with older streams, provides a list of static data
-     * expected to be found in the stream, which was missing.  A higher level
-     * toolstack is responsible for providing any necessary compatibiltiy.
-     */
-#define XGR_SDD_MISSING_CPUID (1 << 0)
-#define XGR_SDD_MISSING_MSR   (1 << 1)
-    int (*static_data_done)(unsigned int missing, void *data);
-
-    /* Called after a new checkpoint to suspend the guest. */
-    int (*suspend)(void *data);
-
-    /*
-     * Called after the secondary vm is ready to resume.
-     * Callback function resumes the guest & the device model,
-     * returns to xc_domain_restore.
-     */
-    int (*postcopy)(void *data);
-
-    /*
-     * A checkpoint record has been found in the stream.
-     * returns:
-     */
-#define XGR_CHECKPOINT_ERROR    0 /* Terminate processing */
-#define XGR_CHECKPOINT_SUCCESS  1 /* Continue reading more data from the stream */
-#define XGR_CHECKPOINT_FAILOVER 2 /* Failover and resume VM */
-    int (*checkpoint)(void *data);
-
-    /*
-     * Called after the checkpoint callback.
-     *
-     * returns:
-     * 0: terminate checkpointing gracefully
-     * 1: take another checkpoint
-     */
-    int (*wait_checkpoint)(void *data);
-
-    /*
-     * callback to send store gfn and console gfn to xl
-     * if we want to resume vm before xc_domain_save()
-     * exits.
-     */
-    void (*restore_results)(xen_pfn_t store_gfn, xen_pfn_t console_gfn,
-                            void *data);
-
-    /* to be provided as the last argument to each callback function */
-    void *data;
-};
-
-/**
- * This function will restore a saved domain.
- *
- * Domain is restored in a suspended state ready to be unpaused.
- *
- * @param xch a handle to an open hypervisor interface
- * @param io_fd the file descriptor to restore a domain from
- * @param dom the id of the domain
- * @param store_evtchn the xenstore event channel for this domain to use
- * @param store_mfn filled with the gfn of the store page
- * @param store_domid the backend domain for xenstore
- * @param console_evtchn the console event channel for this domain to use
- * @param console_mfn filled with the gfn of the console page
- * @param console_domid the backend domain for xenconsole
- * @param stream_type XC_STREAM_PLAIN if the far end of the stream is using
- *        checkpointing
- * @param callbacks non-NULL to receive a callback to restore toolstack
- *        specific data
- * @param send_back_fd Only used for XC_STREAM_COLO.  Contains backchannel to
- *        the source side.
- * @return 0 on success, -1 on failure
- */
-int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
-                      unsigned int store_evtchn, unsigned long *store_mfn,
-                      uint32_t store_domid, unsigned int console_evtchn,
-                      unsigned long *console_mfn, uint32_t console_domid,
-                      xc_stream_type_t stream_type,
-                      struct restore_callbacks *callbacks, int send_back_fd);
-
 /**
  * This function will create a domain for a paravirtualized Linux
  * using file names pointing to kernel and ramdisk
diff --git a/tools/include/xensaverestore.h b/tools/include/xensaverestore.h
new file mode 100644
index 0000000000..0410f0469e
--- /dev/null
+++ b/tools/include/xensaverestore.h
@@ -0,0 +1,208 @@
+/******************************************************************************
+ * A library for guest domain save/restore/migration in Xen.
+ *
+ * Copyright (c) 2003-2004, K A Fraser.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef XENSAVERESTORE_H
+#define XENSAVERESTORE_H
+
+#define XCFLAGS_LIVE      (1 << 0)
+#define XCFLAGS_DEBUG     (1 << 1)
+
+/* For save's precopy_policy(). */
+struct precopy_stats
+{
+    unsigned int iteration;
+    unsigned long total_written;
+    long dirty_count; /* -1 if unknown */
+};
+
+/*
+ * A precopy_policy callback may not be running in the same address
+ * space as libxc, and so precopy_stats is passed by value.
+ */
+typedef int (*precopy_policy_t)(struct precopy_stats, void *);
+
+/* callbacks provided by xc_domain_save */
+struct save_callbacks {
+    /*
+     * Called after expiration of checkpoint interval,
+     * to suspend the guest.
+     */
+    int (*suspend)(void *data);
+
+    /*
+     * Called before and after every batch of page data sent during
+     * the precopy phase of a live migration to ask the caller what
+     * to do next based on the current state of the precopy migration.
+     *
+     * Should return one of the values listed below:
+     */
+#define XGS_POLICY_ABORT          (-1) /* Abandon the migration entirely
+                                        * and tidy up. */
+#define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
+#define XGS_POLICY_STOP_AND_COPY    1  /* Immediately suspend and transmit the
+                                        * remaining dirty pages. */
+    precopy_policy_t precopy_policy;
+
+    /*
+     * Called after the guest's dirty pages have been
+     *  copied into an output buffer.
+     * Callback function resumes the guest & the device model,
+     *  returns to xc_domain_save.
+     * xc_domain_save then flushes the output buffer, while the
+     *  guest continues to run.
+     */
+    int (*postcopy)(void *data);
+
+    /*
+     * Called after the memory checkpoint has been flushed
+     * out into the network. Typical actions performed in this
+     * callback include:
+     *   (a) send the saved device model state (for HVM guests),
+     *   (b) wait for the checkpoint ack,
+     *   (c) release the network output buffer pertaining to the acked checkpoint,
+     *   (d) sleep for the checkpoint interval.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*checkpoint)(void *data);
+
+    /*
+     * Called after the checkpoint callback.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*wait_checkpoint)(void *data);
+
+    /* Enable qemu-dm logging dirty pages to xen */
+    int (*switch_qemu_logdirty)(uint32_t domid, unsigned enable, void *data); /* HVM only */
+
+    /* to be provided as the last argument to each callback function */
+    void *data;
+};
+
+/* Type of stream.  Plain, or using a continuous replication protocol? */
+typedef enum {
+    XC_STREAM_PLAIN,
+    XC_STREAM_REMUS,
+    XC_STREAM_COLO,
+} xc_stream_type_t;
+
+/**
+ * This function will save a running domain.
+ *
+ * @param xch a handle to an open hypervisor interface
+ * @param io_fd the file descriptor to save a domain to
+ * @param dom the id of the domain
+ * @param flags XCFLAGS_xxx
+ * @param callbacks non-NULL to receive callbacks during the save
+ * @param stream_type XC_STREAM_PLAIN if the far end of the stream
+ *        doesn't use checkpointing
+ * @param recv_fd Only used for XC_STREAM_COLO.  Contains backchannel from
+ *        the destination side.
+ * @return 0 on success, -1 on failure
+ */
+int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
+                   uint32_t flags, struct save_callbacks *callbacks,
+                   xc_stream_type_t stream_type, int recv_fd);
+
+/* callbacks provided by xc_domain_restore */
+struct restore_callbacks {
+    /*
+     * Called once the STATIC_DATA_END record has been received/inferred.
+     *
+     * For compatibility with older streams, provides a list of static data
+     * which was expected in the stream but found to be missing.  A higher
+     * level toolstack is responsible for providing any necessary compatibility.
+     */
+#define XGR_SDD_MISSING_CPUID (1 << 0)
+#define XGR_SDD_MISSING_MSR   (1 << 1)
+    int (*static_data_done)(unsigned int missing, void *data);
+
+    /* Called after a new checkpoint to suspend the guest. */
+    int (*suspend)(void *data);
+
+    /*
+     * Called after the secondary vm is ready to resume.
+     * Callback function resumes the guest & the device model,
+     * returns to xc_domain_restore.
+     */
+    int (*postcopy)(void *data);
+
+    /*
+     * A checkpoint record has been found in the stream.
+     * returns:
+     */
+#define XGR_CHECKPOINT_ERROR    0 /* Terminate processing */
+#define XGR_CHECKPOINT_SUCCESS  1 /* Continue reading more data from the stream */
+#define XGR_CHECKPOINT_FAILOVER 2 /* Failover and resume VM */
+    int (*checkpoint)(void *data);
+
+    /*
+     * Called after the checkpoint callback.
+     *
+     * returns:
+     * 0: terminate checkpointing gracefully
+     * 1: take another checkpoint
+     */
+    int (*wait_checkpoint)(void *data);
+
+    /*
+     * Callback to send the store gfn and console gfn to xl,
+     * should the VM need to be resumed before xc_domain_restore()
+     * exits.
+     */
+    void (*restore_results)(xen_pfn_t store_gfn, xen_pfn_t console_gfn,
+                            void *data);
+
+    /* to be provided as the last argument to each callback function */
+    void *data;
+};
+
+/**
+ * This function will restore a saved domain.
+ *
+ * Domain is restored in a suspended state ready to be unpaused.
+ *
+ * @param xch a handle to an open hypervisor interface
+ * @param io_fd the file descriptor to restore a domain from
+ * @param dom the id of the domain
+ * @param store_evtchn the xenstore event channel for this domain to use
+ * @param store_mfn filled with the gfn of the store page
+ * @param store_domid the backend domain for xenstore
+ * @param console_evtchn the console event channel for this domain to use
+ * @param console_mfn filled with the gfn of the console page
+ * @param console_domid the backend domain for xenconsole
+ * @param stream_type XC_STREAM_PLAIN if the far end of the stream
+ *        doesn't use checkpointing
+ * @param callbacks non-NULL to receive a callback to restore toolstack
+ *        specific data
+ * @param send_back_fd Only used for XC_STREAM_COLO.  Contains backchannel to
+ *        the source side.
+ * @return 0 on success, -1 on failure
+ */
+int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
+                      unsigned int store_evtchn, unsigned long *store_mfn,
+                      uint32_t store_domid, unsigned int console_evtchn,
+                      unsigned long *console_mfn, uint32_t console_domid,
+                      xc_stream_type_t stream_type,
+                      struct restore_callbacks *callbacks, int send_back_fd);
+
+#endif /* XENSAVERESTORE_H */
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 1afcd12e2b..ca43c66777 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -12,6 +12,7 @@ SUBDIRS-y += devicemodel
 SUBDIRS-y += ctrl
 SUBDIRS-y += guest
 SUBDIRS-y += hypfs
+SUBDIRS-y += saverestore
 SUBDIRS-y += store
 SUBDIRS-y += stat
 SUBDIRS-$(CONFIG_Linux) += vchan
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 2ce92d247e..4cf5459bb1 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -11,18 +11,7 @@ SRCS-y += xg_domain.c
 SRCS-y += xg_suspend.c
 SRCS-y += xg_resume.c
 ifeq ($(CONFIG_MIGRATE),y)
-SRCS-y += xg_sr_common.c
-SRCS-$(CONFIG_X86) += xg_sr_common_x86.c
-SRCS-$(CONFIG_X86) += xg_sr_common_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_restore_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_restore_x86_hvm.c
-SRCS-$(CONFIG_X86) += xg_sr_save_x86_pv.c
-SRCS-$(CONFIG_X86) += xg_sr_save_x86_hvm.c
-SRCS-y += xg_sr_restore.c
-SRCS-y += xg_sr_save.c
 SRCS-y += xg_offline_page.c
-else
-SRCS-y += xg_nomigrate.c
 endif
 SRCS-y       += xg_core.c
 SRCS-$(CONFIG_X86) += xg_core_x86.c
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index cfe0e2d537..92b65243b1 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -29,7 +29,6 @@
 
 #include "xc_private.h"
 #include "xg_private.h"
-#include "xg_save_restore.h"
 
 struct pte_backup_entry
 {
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 7d8c51d492..68e51dd13c 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -179,7 +179,7 @@ $(ACPI_OBJS) $(ACPI_PIC_OBJS): CFLAGS += -I. -DLIBACPI_STDUTILS=\"$(CURDIR)/libx
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
 libxl_dom.o libxl_dom.opic: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
 libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
-$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
+$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxensaverestore)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
 testidl.c: libxl_types.idl gentest.py $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
@@ -241,7 +241,7 @@ test_%: test_%.o test_common.o libxenlight_test.so
 	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) -lyajl $(APPEND_LDFLAGS)
 
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
-	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxensaverestore) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
 
 testidl: testidl.o libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ testidl.o $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 0b4671318c..439c654733 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -56,6 +56,7 @@
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenguest.h>
+#include <xensaverestore.h>
 #include <xenhypfs.h>
 
 #include <xen-tools/libs.h>
diff --git a/tools/libs/light/libxl_save_helper.c b/tools/libs/light/libxl_save_helper.c
index 65dff389bf..896e845a2f 100644
--- a/tools/libs/light/libxl_save_helper.c
+++ b/tools/libs/light/libxl_save_helper.c
@@ -48,6 +48,7 @@
 
 #include "xenctrl.h"
 #include "xenguest.h"
+#include "xensaverestore.h"
 #include "_libxl_save_msgs_helper.h"
 
 /*----- logger -----*/
diff --git a/tools/libs/light/libxl_save_msgs_gen.pl b/tools/libs/light/libxl_save_msgs_gen.pl
index 9d425b1dee..f263ee01bb 100755
--- a/tools/libs/light/libxl_save_msgs_gen.pl
+++ b/tools/libs/light/libxl_save_msgs_gen.pl
@@ -72,7 +72,7 @@ END_BOTH
 END_CALLOUT
 
 #include <xenctrl.h>
-#include <xenguest.h>
+#include <xensaverestore.h>
 #include "_libxl_save_msgs_${ah}.h"
 
 END_HELPER
diff --git a/tools/libs/saverestore/Makefile b/tools/libs/saverestore/Makefile
new file mode 100644
index 0000000000..48728b3be2
--- /dev/null
+++ b/tools/libs/saverestore/Makefile
@@ -0,0 +1,38 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+ifeq ($(CONFIG_MIGRATE),y)
+SRCS-y += common.c
+SRCS-$(CONFIG_X86) += common_x86.c
+SRCS-$(CONFIG_X86) += common_x86_pv.c
+SRCS-$(CONFIG_X86) += restore_x86_pv.c
+SRCS-$(CONFIG_X86) += restore_x86_hvm.c
+SRCS-$(CONFIG_X86) += save_x86_pv.c
+SRCS-$(CONFIG_X86) += save_x86_hvm.c
+SRCS-y += restore.c
+SRCS-y += save.c
+else
+SRCS-y += nomigrate.c
+endif
+
+CFLAGS += -I$(XEN_libxenctrl)
+CFLAGS += -I$(XEN_libxenguest)
+
+-include $(XEN_TARGET_ARCH)/Makefile
+
+CFLAGS   += -Werror -Wmissing-prototypes
+CFLAGS   += -I. -I./include $(CFLAGS_xeninclude)
+CFLAGS   += -D__XEN_TOOLS__
+CFLAGS   += -include $(XEN_ROOT)/tools/config.h
+# Needed for asprintf()
+CFLAGS-$(CONFIG_Linux) += -D_GNU_SOURCE
+
+LIBHEADER := xensaverestore.h
+
+NO_HEADERS_CHK := y
+
+include $(XEN_ROOT)/tools/libs/libs.mk
+
+.PHONY: cleanlocal
+cleanlocal:
+	rm -f libxensaverestore.map
diff --git a/tools/libs/guest/xg_sr_common.c b/tools/libs/saverestore/common.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common.c
rename to tools/libs/saverestore/common.c
index 17567ab133..77128bc747 100644
--- a/tools/libs/guest/xg_sr_common.c
+++ b/tools/libs/saverestore/common.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/saverestore/common.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common.h
rename to tools/libs/saverestore/common.h
index e2994e18ac..ca2eb47a4f 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/saverestore/common.h
@@ -1,13 +1,25 @@
 #ifndef __COMMON__H
 #define __COMMON__H
 
+#include <unistd.h>
+#include <errno.h>
 #include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+
+#include "xc_private.h"
+#include "xenguest.h"
+#include "xensaverestore.h"
 
 #include "xg_private.h"
-#include "xg_save_restore.h"
+#include "save_restore.h"
 #include "xc_bitops.h"
 
-#include "xg_sr_stream_format.h"
+#include "stream_format.h"
 
 /* String representation of Domain Header types. */
 const char *dhdr_type_to_str(uint32_t type);
diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/saverestore/common_x86.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common_x86.c
rename to tools/libs/saverestore/common_x86.c
index 563b4f0168..f1beb234ae 100644
--- a/tools/libs/guest/xg_sr_common_x86.c
+++ b/tools/libs/saverestore/common_x86.c
@@ -1,4 +1,4 @@
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 int write_x86_tsc_info(struct xc_sr_context *ctx)
 {
diff --git a/tools/libs/guest/xg_sr_common_x86.h b/tools/libs/saverestore/common_x86.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common_x86.h
rename to tools/libs/saverestore/common_x86.h
index b55758c96d..3a2d91dcb8 100644
--- a/tools/libs/guest/xg_sr_common_x86.h
+++ b/tools/libs/saverestore/common_x86.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86__H
 #define __COMMON_X86__H
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
  * Obtains a domains TSC information from Xen and writes a X86_TSC_INFO record
diff --git a/tools/libs/guest/xg_sr_common_x86_pv.c b/tools/libs/saverestore/common_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_common_x86_pv.c
rename to tools/libs/saverestore/common_x86_pv.c
index c0acf00f90..cfe1b24bed 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.c
+++ b/tools/libs/saverestore/common_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 xen_pfn_t mfn_to_pfn(struct xc_sr_context *ctx, xen_pfn_t mfn)
 {
diff --git a/tools/libs/guest/xg_sr_common_x86_pv.h b/tools/libs/saverestore/common_x86_pv.h
similarity index 98%
rename from tools/libs/guest/xg_sr_common_x86_pv.h
rename to tools/libs/saverestore/common_x86_pv.h
index 953b5bfb8d..a9f8c970e3 100644
--- a/tools/libs/guest/xg_sr_common_x86_pv.h
+++ b/tools/libs/saverestore/common_x86_pv.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86_PV_H
 #define __COMMON_X86_PV_H
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 /* Virtual address ranges reserved for hypervisor. */
 #define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
diff --git a/tools/libs/guest/xg_nomigrate.c b/tools/libs/saverestore/nomigrate.c
similarity index 100%
rename from tools/libs/guest/xg_nomigrate.c
rename to tools/libs/saverestore/nomigrate.c
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/saverestore/restore.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore.c
rename to tools/libs/saverestore/restore.c
index b57a787519..be259a1c6b 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -2,7 +2,7 @@
 
 #include <assert.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
  * Read and validate the Image and Domain headers.
diff --git a/tools/libs/guest/xg_sr_restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore_x86_hvm.c
rename to tools/libs/saverestore/restore_x86_hvm.c
index d6ea6f3012..bd63bd2818 100644
--- a/tools/libs/guest/xg_sr_restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 /*
  * Process an HVM_CONTEXT record from the stream.
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_restore_x86_pv.c
rename to tools/libs/saverestore/restore_x86_pv.c
index dc50b0f5a8..96608e5231 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 static xen_pfn_t pfn_to_mfn(const struct xc_sr_context *ctx, xen_pfn_t pfn)
 {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/saverestore/save.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save.c
rename to tools/libs/saverestore/save.c
index 2ba7c3200c..ae3e8797d0 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/saverestore/save.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xg_sr_common.h"
+#include "common.h"
 
 /*
  * Writes an Image header and Domain header into the stream.
diff --git a/tools/libs/guest/xg_save_restore.h b/tools/libs/saverestore/save_restore.h
similarity index 98%
rename from tools/libs/guest/xg_save_restore.h
rename to tools/libs/saverestore/save_restore.h
index 3dbbc8dcd2..20bd3d30a5 100644
--- a/tools/libs/guest/xg_save_restore.h
+++ b/tools/libs/saverestore/save_restore.h
@@ -15,8 +15,6 @@
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include "xc_private.h"
-
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 
diff --git a/tools/libs/guest/xg_sr_save_x86_hvm.c b/tools/libs/saverestore/save_x86_hvm.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save_x86_hvm.c
rename to tools/libs/saverestore/save_x86_hvm.c
index 1634a7bc43..91c2cb99ab 100644
--- a/tools/libs/guest/xg_sr_save_x86_hvm.c
+++ b/tools/libs/saverestore/save_x86_hvm.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xg_sr_common_x86.h"
+#include "common_x86.h"
 
 #include <xen/hvm/params.h>
 
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/saverestore/save_x86_pv.c
similarity index 99%
rename from tools/libs/guest/xg_sr_save_x86_pv.c
rename to tools/libs/saverestore/save_x86_pv.c
index 4964f1f7b8..92f77fad0f 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/saverestore/save_x86_pv.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <limits.h>
 
-#include "xg_sr_common_x86_pv.h"
+#include "common_x86_pv.h"
 
 /* Check a 64 bit virtual address for being canonical. */
 static inline bool is_canonical_address(xen_vaddr_t vaddr)
diff --git a/tools/libs/guest/xg_sr_stream_format.h b/tools/libs/saverestore/stream_format.h
similarity index 100%
rename from tools/libs/guest/xg_sr_stream_format.h
rename to tools/libs/saverestore/stream_format.h
diff --git a/tools/libs/uselibs.mk b/tools/libs/uselibs.mk
index efd7a475ba..62a2990b95 100644
--- a/tools/libs/uselibs.mk
+++ b/tools/libs/uselibs.mk
@@ -20,6 +20,8 @@ LIBS_LIBS += ctrl
 USELIBS_ctrl := toollog call evtchn gnttab foreignmemory devicemodel
 LIBS_LIBS += guest
 USELIBS_guest := evtchn ctrl
+LIBS_LIBS += saverestore
+USELIBS_saverestore := guest ctrl
 LIBS_LIBS += store
 USELIBS_store := toolcore
 LIBS_LIBS += vchan
@@ -27,7 +29,7 @@ USELIBS_vchan := toollog store gnttab evtchn
 LIBS_LIBS += stat
 USELIBS_stat := ctrl store
 LIBS_LIBS += light
-USELIBS_light := toollog evtchn toolcore ctrl store hypfs guest
+USELIBS_light := toollog evtchn toolcore ctrl store hypfs guest saverestore
 LIBS_LIBS += util
 USELIBS_util := light
 FILENAME_util := xlutil


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:01 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 14/36] tools: save: move errors array
Date: Wed, 16 Jun 2021 14:51:07 +0200
Message-Id: <20210616125129.26563-15-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the errors array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 1df684acb9..558b5fbf06 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -229,6 +229,8 @@ struct sr_save_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     /* write_batch: Types of the batch pfns. */
     xen_pfn_t types[MAX_BATCH_SIZE];
+    /* write_batch: Errors from attempting to map the gfns. */
+    int errors[MAX_BATCH_SIZE];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 0883c1fac0..9ebbf00ce7 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -92,7 +92,7 @@ static int write_batch(struct xc_sr_context *ctx)
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
-    int *errors = NULL, rc = -1;
+    int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Errors from attempting to map the gfns. */
-    errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
@@ -114,7 +112,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !errors || !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -271,7 +269,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(iov);
     free(local_pages);
     free(guest_data);
-    free(errors);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:12 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 26/36] tools: restore: write data directly into guest
Date: Wed, 16 Jun 2021 14:51:19 +0200
Message-Id: <20210616125129.26563-27-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read the incoming migration stream directly into guest memory.
This avoids the intermediate memory allocation and copying, and the
resulting performance penalty.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |   1 +
 tools/libs/saverestore/restore.c | 132 ++++++++++++++++++++++++++++++-
 2 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index d479f1a918..5c440f28ec 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -243,6 +243,7 @@ struct sr_restore_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
     void *guest_data[MAX_BATCH_SIZE];
+    struct iovec iov[MAX_BATCH_SIZE];
 
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 877fd19a9b..d0148606bf 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -392,6 +392,122 @@ err:
     return rc;
 }
 
+/*
+ * Handle PAGE_DATA record from the stream.
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and read
+ * the data from the stream directly into the guest.
+ */
+static int handle_incoming_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_rhdr *rhdr)
+{
+    xc_interface *xch = ctx->xch;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    struct xc_sr_rec_page_data_header *pages = &m->pages;
+    uint64_t *pfn_nums = m->pages.pfn;
+    uint32_t i;
+    int rc, iov_idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    rc = read_exact(ctx->fd, pages, sizeof(*pages));
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn header");
+        goto err;
+    }
+
+    if ( verify_rec_page_hdr(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the incoming pfn numbers */
+    rc = read_exact(ctx->fd, pfn_nums, sizeof(*pfn_nums) * pages->count);
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn data");
+        goto err;
+    }
+
+    if ( verify_rec_page_pfns(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Finally read and verify the incoming pfn data */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    /* Prepare read buffers: guest memory, or the verify buffer */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        m->iov[iov_idx].iov_len = PAGE_SIZE;
+        if ( ctx->restore.verify )
+            m->iov[iov_idx].iov_base = ctx->restore.verify_buf + i * PAGE_SIZE;
+        else
+            m->iov[iov_idx].iov_base = m->guest_data[i];
+        iov_idx++;
+    }
+
+    if ( !iov_idx )
+        goto done;
+
+    rc = readv_exact(ctx->fd, m->iov, iov_idx);
+    if ( rc )
+    {
+        PERROR("read of %d pages failed", iov_idx);
+        goto err;
+    }
+
+    /* Post-processing of pfn data */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], m->iov[iov_idx].iov_base);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], m->iov[iov_idx].iov_base, PAGE_SIZE) )
+            {
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            }
+        }
+
+        iov_idx++;
+    }
+
+done:
+    rc = 0;
+
+err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
+    return rc;
+}
+
 /*
  * Handle PAGE_DATA record from an existing buffer
  * Given a list of pfns, their types, and a block of page data from the
@@ -773,11 +889,19 @@ static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_s
     struct xc_sr_record rec;
     int rc;
 
-    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
-    if ( rc )
-        return rc;
+    switch ( rhdr->type )
+    {
+    case REC_TYPE_PAGE_DATA:
+        rc = handle_incoming_page_data(ctx, rhdr);
+        break;
+    default:
+        rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+        if ( rc == 0 )
+            rc = process_buffered_record(ctx, &rec);
+        break;
+    }
 
-    return process_buffered_record(ctx, &rec);
+    return rc;
 }
 
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143005.263825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGD-0004OP-0S; Wed, 16 Jun 2021 13:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143005.263825; Wed, 16 Jun 2021 13:07:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGC-0004Ne-QM; Wed, 16 Jun 2021 13:07:12 +0000
Received: by outflank-mailman (input) for mailman id 143005;
 Wed, 16 Jun 2021 13:07:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV2c-00075D-AK
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:53:10 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.101])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43676f1c-25e8-4adf-bb2b-2befd94fe2ff;
 Wed, 16 Jun 2021 12:51:57 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpntmT
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43676f1c-25e8-4adf-bb2b-2befd94fe2ff
ARC-Seal: i=1; a=rsa-sha256; t=1623847909; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=FzMJyk/OgO8egTJoevrQLeC8WIWy98hhXt9CyBmgw5d6lttT9+go9w/CwXmMyiisx7
    RFA12X8FSlTS+yQFTxgKeWk5AU3KNhvUxkUGj96tY8z1KfiwA4FwSXo09le0hFOTBMaZ
    W798aP3BgQ8j8WrYy7L0FHnL38vTCY55xzUsovUqhYY/xml+0jGb5fb9Y/QTvN5iXmS+
    5r92VmV+X7nedj474psHGPmaaTddCCrV02Oa4rFs+gfRBcsBGX2u7a6M+/YdfEqFgfyY
    E8t6WS8RpMW1QFWWs4Qp6LFaWptcQm0bAhl34m+em9Jph2wtu018XzDGBn1KHhTnkqzF
    hUCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847909;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=+vXVfBW28MRAf3KK7IPzTuDzNHY/m8g7e2aJIg9hVrs=;
    b=r7yfjeR8GYJC4Y94AfVV0JXweeAYZJl3+bjgVUlMUyUTk0R7TSPdYh66Ezw3Q11bsX
    5KzKtPiz4DYuQjLDWSlz7P0+YZw6LdQTejzpEsa/bSvlLztTPnRchvH6c2SFE+i/HKfR
    /IsMbm1VQ88am8IBIHJM34DvCUjq1sp8aqE9lgHXTfgJQV15OODXlnyBk3lEFGeKIQs6
    18vzH2cgYZ0mUV90Zt7Qm9S4KvqJYis30/5yIQ4vJn3UXu+9Jq/MHD3OuzeuyQwdCiAb
    2pH+JnxXlY59zw2CZJChz+5A3SUyLN3MVlAEiqmnopfqyIeqkW4VBJ7ppGQcYMtHixKJ
    PGFw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847909;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=+vXVfBW28MRAf3KK7IPzTuDzNHY/m8g7e2aJIg9hVrs=;
    b=bWXogRscBUHUFRe1lcJZLupNeRT7jS5RjUDObCHmXcy04k4cG/LIEthkDCF/tQdBOq
    YwRxLR5WT2/XRBhL/2/giudwdX/+xs+ZhczS5ca72DnUmhtmQ7lVRvV9113+ELVKM8iE
    O0Pp70opATjLgfjmaq/am/5RQs7dCOqBkrRfKUaYyspYa0CxT+CBr7XgaeDzs5bcRJsK
    elBWQv/xM1sBqP3V44JvTwpDyjXtp9+oU0EpYkD3Nr+C2L8w2YrD8qI1iwXOE0HehpQs
    BO8WUbpCO2qdKoi6TJxRwr5CkLvzSvap98KXYBDnbcvT3xl0qSAN5Dpu+wKcY+/Y1P/K
    8mfw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 33/36] tools: add --abort_if_busy to libxl_domain_suspend
Date: Wed, 16 Jun 2021 14:51:26 +0200
Message-Id: <20210616125129.26563-34-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide a knob for the host admin to abort the live migration of a
running domU if the downtime during the final transit would be too long
for the workload within the domU.

Adjust error reporting. Add ERROR_MIGRATION_ABORTED to allow callers of
libxl_domain_suspend to distinguish between errors and the requested
constraint.

Adjust precopy_policy to simplify reporting of remaining dirty pages.
The loop in send_memory_live populates ->dirty_count in a different
place than ->iteration. Let it proceed one more time to provide the
desired information before leaving the loop.

This patch adjusts xl(1) and the libxl API.
External users check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the availability
of the new .abort_if_busy property.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in                  |  8 +++++++
 tools/include/libxl.h                 |  1 +
 tools/libs/light/libxl_dom_save.c     |  7 ++++++-
 tools/libs/light/libxl_domain.c       |  1 +
 tools/libs/light/libxl_internal.h     |  2 ++
 tools/libs/light/libxl_stream_write.c |  9 +++++++-
 tools/libs/light/libxl_types.idl      |  1 +
 tools/xl/xl_cmdtable.c                |  6 +++++-
 tools/xl/xl_migrate.c                 | 30 ++++++++++++++++++++-------
 9 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 09e866ad87..37267c9171 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -506,6 +506,14 @@ low, the guest is suspended and the domU will finally be moved to I<host>.
 This allows the host admin to control for how long the domU will likely
 be suspended during transit.
 
+=item B<--abort_if_busy>
+
+Abort migration instead of doing final suspend/move/resume if the
+guest produced more than I<min_remaining> dirty pages during the number
+of I<max_iters> iterations.
+This avoids long periods during which the guest is suspended, which
+may confuse the workload within the domU.
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 28d70b1078..cc056ed627 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1719,6 +1719,7 @@ typedef struct {
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
+#define LIBXL_SUSPEND_ABORT_IF_BUSY 4
 
 int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
                          libxl_domain_suspend_props *props,
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index ad5df89b2c..1999a8997f 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -383,11 +383,16 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
          stats.iteration, stats.dirty_count, stats.total_written);
     if (stats.dirty_count >= 0 && stats.dirty_count < dss->min_remaining)
         goto stop_copy;
-    if (stats.iteration >= dss->max_iters)
+    if (stats.dirty_count >= 0 && stats.iteration >= dss->max_iters)
         goto stop_copy;
     return XGS_POLICY_CONTINUE_PRECOPY;
 
 stop_copy:
+    if (dss->abort_if_busy)
+    {
+        dss->remaining_dirty_pages = stats.dirty_count;
+        return XGS_POLICY_ABORT;
+    }
     return XGS_POLICY_STOP_AND_COPY;
 }
 
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 06ca7a7df6..e4740b063e 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -529,6 +529,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->type = type;
     dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
     dss->min_remaining = props->min_remaining ?: LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT;
+    dss->abort_if_busy = props->flags & LIBXL_SUSPEND_ABORT_IF_BUSY;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e4bfb34085..905d5179ba 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3648,9 +3648,11 @@ struct libxl__domain_save_state {
     libxl_domain_type type;
     int live;
     int debug;
+    int abort_if_busy;
     int checkpointed_stream;
     uint32_t max_iters;
     uint32_t min_remaining;
+    long remaining_dirty_pages;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/libs/light/libxl_stream_write.c b/tools/libs/light/libxl_stream_write.c
index 634f3240d1..1ab3943f3e 100644
--- a/tools/libs/light/libxl_stream_write.c
+++ b/tools/libs/light/libxl_stream_write.c
@@ -344,11 +344,18 @@ void libxl__xc_domain_save_done(libxl__egc *egc, void *dss_void,
         goto err;
 
     if (retval) {
+        if (dss->remaining_dirty_pages) {
+            LOGD(NOTICE, dss->domid, "saving domain: aborted,"
+                 " %ld remaining dirty pages.", dss->remaining_dirty_pages);
+        } else {
         LOGEVD(ERROR, errnoval, dss->domid, "saving domain: %s",
               dss->dsps.guest_responded ?
               "domain responded to suspend request" :
               "domain did not respond to suspend request");
-        if (!dss->dsps.guest_responded)
+        }
+        if (dss->remaining_dirty_pages)
+            rc = ERROR_MIGRATION_ABORTED;
+        else if (!dss->dsps.guest_responded)
             rc = ERROR_GUEST_TIMEDOUT;
         else if (dss->rc)
             rc = dss->rc;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index f45adddab0..b91769ee10 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -76,6 +76,7 @@ libxl_error = Enumeration("error", [
     (-30, "QMP_DEVICE_NOT_ACTIVE"), # a device has failed to be become active
     (-31, "QMP_DEVICE_NOT_FOUND"), # the requested device has not been found
     (-32, "QEMU_API"), # QEMU's replies don't contains expected members
+    (-33, "MIGRATION_ABORTED"),
     ], value_namespace = "")
 
 libxl_domain_type = Enumeration("domain_type", [
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2cb4980c80..322a47c2bc 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -176,7 +176,11 @@ const struct cmd_spec cmd_table[] = {
       "-p                Do not unpause domain after migrating it.\n"
       "-D                Preserve the domain id\n"
       "--max_iters N     Number of copy iterations before final stop+move\n"
-      "--min_remaining N Number of remaining dirty pages before final stop+move"
+      "--min_remaining N Number of remaining dirty pages before final stop+move\n"
+      "--abort_if_busy   Abort migration instead of doing final stop+move,\n"
+      "                  if the number of dirty pages is higher than <min_remaining>\n"
+      "                  after <max_iters> iterations. Otherwise transferring the\n"
+      "                  remaining dirty pages would exceed the allowed domU downtime."
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 14feb2b7ec..f523746e5b 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -177,7 +177,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 }
 
 static void migrate_domain(uint32_t domid, int preserve_domid,
-                           const char *rune, int debug,
+                           const char *rune, int debug, int abort_if_busy,
                            uint32_t max_iters,
                            uint32_t min_remaining,
                            const char *override_config_file)
@@ -213,14 +213,20 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
 
     if (debug)
         props.flags |= LIBXL_SUSPEND_DEBUG;
+    if (abort_if_busy)
+        props.flags |= LIBXL_SUSPEND_ABORT_IF_BUSY;
     rc = libxl_domain_suspend(ctx, domid, send_fd, &props, NULL);
     if (rc) {
         fprintf(stderr, "migration sender: libxl_domain_suspend failed"
                 " (rc=%d)\n", rc);
-        if (rc == ERROR_GUEST_TIMEDOUT)
-            goto failed_suspend;
-        else
-            goto failed_resume;
+        switch (rc) {
+            case ERROR_GUEST_TIMEDOUT:
+                goto failed_suspend;
+            case ERROR_MIGRATION_ABORTED:
+                goto failed_busy;
+            default:
+                goto failed_resume;
+        }
     }
 
     //fprintf(stderr, "migration sender: Transfer complete.\n");
@@ -302,6 +308,12 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     fprintf(stderr, "Migration failed, failed to suspend at sender.\n");
     exit(EXIT_FAILURE);
 
+ failed_busy:
+    close(send_fd);
+    migration_child_report(recv_fd);
+    fprintf(stderr, "Migration aborted as requested, domain is too busy.\n");
+    exit(EXIT_FAILURE);
+
  failed_resume:
     close(send_fd);
     migration_child_report(recv_fd);
@@ -545,13 +557,14 @@ int main_migrate(int argc, char **argv)
     char *rune = NULL;
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
-    int preserve_domid = 0;
+    int preserve_domid = 0, abort_if_busy = 0;
     uint32_t max_iters = 0;
     uint32_t min_remaining = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
         {"max_iters", 1, 0, 0x101},
         {"min_remaining", 1, 0, 0x102},
+        {"abort_if_busy", 0, 0, 0x103},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -585,6 +598,9 @@ int main_migrate(int argc, char **argv)
     case 0x102: /* --min_remaining */
         min_remaining = atoi(optarg);
         break;
+    case 0x103: /* --abort_if_busy */
+        abort_if_busy = 1;
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -619,7 +635,7 @@ int main_migrate(int argc, char **argv)
                   pause_after_migration ? " -p" : "");
     }
 
-    migrate_domain(domid, preserve_domid, rune, debug,
+    migrate_domain(domid, preserve_domid, rune, debug, abort_if_busy,
                    max_iters, min_remaining, config_filename);
     return EXIT_SUCCESS;
 }


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143007.263842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGH-00055l-BH; Wed, 16 Jun 2021 13:07:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143007.263842; Wed, 16 Jun 2021 13:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGH-00055a-6c; Wed, 16 Jun 2021 13:07:17 +0000
Received: by outflank-mailman (input) for mailman id 143007;
 Wed, 16 Jun 2021 13:07:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV2i-0006lZ-3Z
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:53:16 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.103])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05e4d266-6a6e-47b8-9f53-83a45b230a53;
 Wed, 16 Jun 2021 12:51:56 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpmtmS
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05e4d266-6a6e-47b8-9f53-83a45b230a53
ARC-Seal: i=1; a=rsa-sha256; t=1623847908; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ZbwVa8IxdPe/2U3g4S2BClLR/Cr7UXEm8XD35g25MdfwgOrqBZpW96bhOttC6rb6gK
    9HckDwzfn+BKqN+1vnCvN4CoobKCcGEphFzFv1VPU1zzS7Swmql2Fo1imdSWDaNZYvR0
    PlbQZY8/mdiFAyso+hVGBIlamek6TRRRp5/DSrdmbEfL1z76EmCJRi+MZe2g+W09//zr
    asgdVc5f9AChs1wOqHyUT9qoVFJI2FdPKADui8UsEfHSed1c8uiSMvgcbvnhmi+6VRHy
    LRJVYucZbDiweZu5r2ZXEUlJQTroq0z9KpU/gAY0JOkbse7D6jOEqi7HW/Jxv28f4QST
    el/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847908;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=qkM58ppJmICGaCKigNV7+3gfD18mpe/eRIOurb3atVw=;
    b=Z5AwzOS5tlCd6dyHSwzl3gKPr3NJ5sEnwkYAZ0wVtfyAezpPtqoI1xzm/YQ0wLvuR9
    ucTuuNxURAYY1+CIIiHwlB+8zwwKvFJryu08NSs85n3ZtWPuL4jhT0Sr8eU1rKOB36XZ
    JF+nwOnb5ZRa2/IjNJZHMnCH6U387/YI2rGi07RwjF7tOIlTGnpBVzn++SV4AgC9Hzok
    CPjPPrQ0DlqvMirP44BWXae5LHvspeFF0w0laPbW1R8QQLGaagla8hzyfDKTpPe629+G
    3akUkjx1soX91T0+IJBH+cm4sptvmvvzLH0HyDcd9I5xtAI0KVLOedGOZWdKyKE2XhUj
    mPCQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847908;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=qkM58ppJmICGaCKigNV7+3gfD18mpe/eRIOurb3atVw=;
    b=UPJ6B0D3F5GZ72Gb2NPrMvTFuZ2VqtnzA+jWaSQOCpM56nRClCnuDI3h7Zv2Xh3nQL
    kFOqtIWDllAFCnoLQaF9i2pdSLrlaxYXN9PYDIpda4oYaYJtGQ34F/HIOlGitFKg3G1b
    aLfYt+cg6qnuOBBQscePbVXGMuto1Nh1jwMjsG8CGmfJG4AnTn9hmCbBBhT/LKSN78Or
    R+JnyDxYw741UAFhj8CxcM2bMPwV6wwawXxwESHSoyVmzEblwb6JDVYjvNCw/6GuAWK8
    7EzxPRkHGHAo/t857KxO17FBg9Q2tP3UO/Dkl2D3aXETo9EQs1MtoYwuxVpXXwa6PTHP
    +adw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 32/36] tools: add --min_remaining to libxl_domain_suspend
Date: Wed, 16 Jun 2021 14:51:25 +0200
Message-Id: <20210616125129.26563-33-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The decision to stop+move a domU to the new host must be based on two factors:
- the available network bandwidth for the migration stream
- the maximum time a workload within a domU can be safely suspended

Both values define how many dirty pages a workload may produce prior to the
final stop+move.

The default value of 50 pages is much too low with today's network bandwidths.
On an idle 1Gbit/s link these 200KiB will be transferred within ~2ms.

Give the admin a knob to adjust the point at which the final stop+move is
done, so this decision can be based on the workload's needs.

This patch adjusts xl(1) and the libxl API.
External users check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the availability
of the new .min_remaining property.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in              |  8 ++++++++
 tools/include/libxl.h             |  1 +
 tools/libs/light/libxl_dom_save.c |  2 +-
 tools/libs/light/libxl_domain.c   |  1 +
 tools/libs/light/libxl_internal.h |  1 +
 tools/xl/xl_cmdtable.c            | 23 ++++++++++++-----------
 tools/xl/xl_migrate.c             |  9 ++++++++-
 7 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 594387bcf4..09e866ad87 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -498,6 +498,14 @@ possible to use this option for a 'localhost' migration.
 
 Number of copy iterations before final suspend+move (default: 5)
 
+=item B<--min_remaining> I<pages>
+
+Number of remaining dirty pages. If the number of dirty pages drops that
+low, the guest is suspended and the domU will finally be moved to I<host>.
+
+This allows the host admin to control for how long the domU will likely
+be suspended during transit.
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index bf77da0524..28d70b1078 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1715,6 +1715,7 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
 typedef struct {
     uint32_t flags; /* LIBXL_SUSPEND_* */
     uint32_t max_iters;
+    uint32_t min_remaining;
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 938c0127f3..ad5df89b2c 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -381,7 +381,7 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
 
     LOGD(DEBUG, shs->domid, "iteration %u dirty_count %ld total_written %lu",
          stats.iteration, stats.dirty_count, stats.total_written);
-    if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
+    if (stats.dirty_count >= 0 && stats.dirty_count < dss->min_remaining)
         goto stop_copy;
     if (stats.iteration >= dss->max_iters)
         goto stop_copy;
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 9f98cd7f2b..06ca7a7df6 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -528,6 +528,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->fd = fd;
     dss->type = type;
     dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
+    dss->min_remaining = props->min_remaining ?: LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 8cbcc5282c..e4bfb34085 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3650,6 +3650,7 @@ struct libxl__domain_save_state {
     int debug;
     int checkpointed_stream;
     uint32_t max_iters;
+    uint32_t min_remaining;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 9b6b3c99aa..2cb4980c80 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -165,17 +165,18 @@ const struct cmd_spec cmd_table[] = {
       &main_migrate, 0, 1,
       "Migrate a domain to another host",
       "[options] <Domain> <host>",
-      "-h              Print this help.\n"
-      "-C <config>     Send <config> instead of config file from creation.\n"
-      "-s <sshcommand> Use <sshcommand> instead of ssh.  String will be passed\n"
-      "                to sh. If empty, run <host> instead of ssh <host> xl\n"
-      "                migrate-receive [-d -e]\n"
-      "-e              Do not wait in the background (on <host>) for the death\n"
-      "                of the domain.\n"
-      "--debug         Ignored.\n"
-      "-p              Do not unpause domain after migrating it.\n"
-      "-D              Preserve the domain id\n"
-      "--max_iters N   Number of copy iterations before final stop+move"
+      "-h                Print this help.\n"
+      "-C <config>       Send <config> instead of config file from creation.\n"
+      "-s <sshcommand>   Use <sshcommand> instead of ssh.  String will be passed\n"
+      "                  to sh. If empty, run <host> instead of ssh <host> xl\n"
+      "                  migrate-receive [-d -e]\n"
+      "-e                Do not wait in the background (on <host>) for the death\n"
+      "                  of the domain.\n"
+      "--debug           Ignored.\n"
+      "-p                Do not unpause domain after migrating it.\n"
+      "-D                Preserve the domain id\n"
+      "--max_iters N     Number of copy iterations before final stop+move\n"
+      "--min_remaining N Number of remaining dirty pages before final stop+move"
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index af117d4d56..14feb2b7ec 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -179,6 +179,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 static void migrate_domain(uint32_t domid, int preserve_domid,
                            const char *rune, int debug,
                            uint32_t max_iters,
+                           uint32_t min_remaining,
                            const char *override_config_file)
 {
     pid_t child = -1;
@@ -191,6 +192,7 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     libxl_domain_suspend_props props = {
         .flags = LIBXL_SUSPEND_LIVE,
         .max_iters = max_iters,
+        .min_remaining = min_remaining,
         };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
@@ -545,9 +547,11 @@ int main_migrate(int argc, char **argv)
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
     int preserve_domid = 0;
     uint32_t max_iters = 0;
+    uint32_t min_remaining = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
         {"max_iters", 1, 0, 0x101},
+        {"min_remaining", 1, 0, 0x102},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -578,6 +582,9 @@ int main_migrate(int argc, char **argv)
     case 0x101: /* --max_iters */
         max_iters = atoi(optarg);
         break;
+    case 0x102: /* --min_remaining */
+        min_remaining = atoi(optarg);
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -613,7 +620,7 @@ int main_migrate(int argc, char **argv)
     }
 
     migrate_domain(domid, preserve_domid, rune, debug,
-                   max_iters, config_filename);
+                   max_iters, min_remaining, config_filename);
     return EXIT_SUCCESS;
 }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143023.263853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGT-0005zA-S1; Wed, 16 Jun 2021 13:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143023.263853; Wed, 16 Jun 2021 13:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGT-0005yz-Np; Wed, 16 Jun 2021 13:07:29 +0000
Received: by outflank-mailman (input) for mailman id 143023;
 Wed, 16 Jun 2021 13:07:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV29-0006lZ-21
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:41 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.104])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d81fa4ca-4f08-400e-94e6-11372cd20c32;
 Wed, 16 Jun 2021 12:51:48 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpftm9
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d81fa4ca-4f08-400e-94e6-11372cd20c32
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 18/36] tools: save: move local_pages array
Date: Wed, 16 Jun 2021 14:51:11 +0200
Message-Id: <20210616125129.26563-19-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove a per-batch allocation from the hot path by moving the
local_pages array into preallocated space.

Adjust the code to use the src page as-is in the HVM case. In the PV
case the page may need to be normalised; use a private memory area for
this purpose.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h       | 22 ++++++++++---------
 tools/libs/saverestore/save.c         | 25 +++------------------
 tools/libs/saverestore/save_x86_hvm.c |  5 +++--
 tools/libs/saverestore/save_x86_pv.c  | 31 ++++++++++++++++++---------
 4 files changed, 39 insertions(+), 44 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index c4ab843c77..96ae0904fc 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -45,16 +45,12 @@ struct xc_sr_save_ops
      * Optionally transform the contents of a page from being specific to the
      * sending environment, to being generic for the stream.
      *
-     * The page of data at the end of 'page' may be a read-only mapping of a
-     * running guest; it must not be modified.  If no transformation is
-     * required, the callee should leave '*pages' untouched.
+     * The page of data '*src' may be a read-only mapping of a running guest;
+     * it must not be modified. If no transformation is required, the callee
+     * should leave '*src' untouched, and return it via '**ptr'.
      *
-     * If a transformation is required, the callee should allocate themselves
-     * a local page using malloc() and return it via '*page'.
-     *
-     * The caller shall free() '*page' in all cases.  In the case that the
-     * callee encounters an error, it should *NOT* free() the memory it
-     * allocated for '*page'.
+     * If a transformation is required, the callee should provide the
+     * transformed page in a private buffer and return it via '**ptr'.
      *
      * It is valid to fail with EAGAIN if the transformation is not able to be
      * completed at this point.  The page shall be retried later.
@@ -62,7 +58,7 @@ struct xc_sr_save_ops
      * @returns 0 for success, -1 for failure, with errno appropriately set.
      */
     int (*normalise_page)(struct xc_sr_context *ctx, xen_pfn_t type,
-                          void **page);
+                          void *src, unsigned int idx, void **ptr);
 
     /**
      * Set up local environment to save a domain. (Typically querying
@@ -385,6 +381,12 @@ struct xc_sr_context
 
                 union
                 {
+                    struct
+                    {
+                        /* Used by write_batch for modified pages. */
+                        void *normalised_pages;
+                    } save;
+
                     struct
                     {
                         /* State machine for the order of received records. */
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index ea04cb1a74..fa83648f9a 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -91,11 +91,10 @@ static int write_batch(struct xc_sr_context *ctx)
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = ctx->save.m->guest_data;
-    void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
-    void *page, *orig_page;
+    void *src;
     uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
@@ -105,16 +104,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to locally allocated pages.  Need freeing. */
-    local_pages = calloc(nr_pfns, sizeof(*local_pages));
-
-    if ( !local_pages )
-    {
-        ERROR("Unable to allocate arrays for a batch of %u pages",
-              nr_pfns);
-        goto err;
-    }
-
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
@@ -176,11 +165,8 @@ static int write_batch(struct xc_sr_context *ctx)
                 goto err;
             }
 
-            orig_page = page = guest_mapping + (p * PAGE_SIZE);
-            rc = ctx->save.ops.normalise_page(ctx, types[i], &page);
-
-            if ( orig_page != page )
-                local_pages[i] = page;
+            src = guest_mapping + (p * PAGE_SIZE);
+            rc = ctx->save.ops.normalise_page(ctx, types[i], src, i, &guest_data[i]);
 
             if ( rc )
             {
@@ -195,8 +181,6 @@ static int write_batch(struct xc_sr_context *ctx)
                 else
                     goto err;
             }
-            else
-                guest_data[i] = page;
 
             rc = -1;
             ++p;
@@ -255,9 +239,6 @@ static int write_batch(struct xc_sr_context *ctx)
  err:
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
-    for ( i = 0; local_pages && i < nr_pfns; ++i )
-        free(local_pages[i]);
-    free(local_pages);
 
     return rc;
 }
diff --git a/tools/libs/saverestore/save_x86_hvm.c b/tools/libs/saverestore/save_x86_hvm.c
index 91c2cb99ab..26f49ee267 100644
--- a/tools/libs/saverestore/save_x86_hvm.c
+++ b/tools/libs/saverestore/save_x86_hvm.c
@@ -129,9 +129,10 @@ static xen_pfn_t x86_hvm_pfn_to_gfn(const struct xc_sr_context *ctx,
     return pfn;
 }
 
-static int x86_hvm_normalise_page(struct xc_sr_context *ctx,
-                                  xen_pfn_t type, void **page)
+static int x86_hvm_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
+                                  void *src, unsigned int idx, void **ptr)
 {
+    *ptr = src;
     return 0;
 }
 
diff --git a/tools/libs/saverestore/save_x86_pv.c b/tools/libs/saverestore/save_x86_pv.c
index 92f77fad0f..159ff59480 100644
--- a/tools/libs/saverestore/save_x86_pv.c
+++ b/tools/libs/saverestore/save_x86_pv.c
@@ -999,29 +999,31 @@ static xen_pfn_t x86_pv_pfn_to_gfn(const struct xc_sr_context *ctx,
  * save_ops function.  Performs pagetable normalisation on appropriate pages.
  */
 static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
-                                 void **page)
+                                  void *src, unsigned int idx, void **ptr)
 {
     xc_interface *xch = ctx->xch;
-    void *local_page;
     int rc;
+    void *dst;
 
     type &= XEN_DOMCTL_PFINFO_LTABTYPE_MASK;
 
     if ( type < XEN_DOMCTL_PFINFO_L1TAB || type > XEN_DOMCTL_PFINFO_L4TAB )
+    {
+        *ptr = src;
         return 0;
+    }
 
-    local_page = malloc(PAGE_SIZE);
-    if ( !local_page )
+    if ( idx >= MAX_BATCH_SIZE )
     {
-        ERROR("Unable to allocate scratch page");
-        rc = -1;
-        goto out;
+        ERROR("idx %u out of range", idx);
+        errno = ERANGE;
+        return -1;
     }
 
-    rc = normalise_pagetable(ctx, *page, local_page, type);
-    *page = local_page;
+    dst = ctx->x86.pv.save.normalised_pages + idx * PAGE_SIZE;
+    rc = normalise_pagetable(ctx, src, dst, type);
+    *ptr = dst;
 
- out:
     return rc;
 }
 
@@ -1031,8 +1033,16 @@ static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
  */
 static int x86_pv_setup(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     int rc;
 
+    ctx->x86.pv.save.normalised_pages = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+    if ( !ctx->x86.pv.save.normalised_pages )
+    {
+        PERROR("Failed to allocate normalised_pages");
+        return -1;
+    }
+
     rc = x86_pv_domain_info(ctx);
     if ( rc )
         return rc;
@@ -1118,6 +1128,7 @@ static int x86_pv_check_vm_state(struct xc_sr_context *ctx)
 
 static int x86_pv_cleanup(struct xc_sr_context *ctx)
 {
+    free(ctx->x86.pv.save.normalised_pages);
     free(ctx->x86.pv.p2m_pfns);
 
     if ( ctx->x86.pv.p2m )


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143025.263863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGV-0006JO-5H; Wed, 16 Jun 2021 13:07:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143025.263863; Wed, 16 Jun 2021 13:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVGV-0006IV-1B; Wed, 16 Jun 2021 13:07:31 +0000
Received: by outflank-mailman (input) for mailman id 143025;
 Wed, 16 Jun 2021 13:07:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV2h-00075D-AQ
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:53:15 +0000
Received: from mo4-p04-ob.smtp.rzone.de (unknown [81.169.146.178])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91e5a4a4-21aa-4503-93d0-c098ebfb81b5;
 Wed, 16 Jun 2021 12:51:59 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpotmW
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91e5a4a4-21aa-4503-93d0-c098ebfb81b5
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 36/36] tools: use superpages during restore of HVM guest
Date: Wed, 16 Jun 2021 14:51:29 +0200
Message-Id: <20210616125129.26563-37-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

During creation of an HVM domU, meminit_hvm() tries to map superpages.
After save/restore or migration this mapping is lost and everything is
allocated in single pages, which causes a performance degradation after
migration.

Add the necessary code to preallocate a superpage for an incoming chunk
of pfns. If a pfn was not populated on the sending side, it must be
freed on the receiving side to avoid over-allocation.

The existing code for x86_pv is moved unmodified into its own file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- remove xg_ prefix from called functions
---
 tools/libs/guest/xg_dom_x86.c            |   5 -
 tools/libs/guest/xg_private.h            |   5 +
 tools/libs/saverestore/common.c          |   1 -
 tools/libs/saverestore/common.h          |  28 +-
 tools/libs/saverestore/restore.c         |  62 +---
 tools/libs/saverestore/restore_x86_hvm.c | 370 ++++++++++++++++++++++-
 tools/libs/saverestore/restore_x86_pv.c  |  61 +++-
 7 files changed, 455 insertions(+), 77 deletions(-)

diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
index d2eb89ce01..ec0d18fd60 100644
--- a/tools/libs/guest/xg_dom_x86.c
+++ b/tools/libs/guest/xg_dom_x86.c
@@ -44,11 +44,6 @@
 
 #define SUPERPAGE_BATCH_SIZE 512
 
-#define SUPERPAGE_2MB_SHIFT   9
-#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
-#define SUPERPAGE_1GB_SHIFT   18
-#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
-
 #define X86_CR0_PE 0x01
 #define X86_CR0_ET 0x10
 
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 28441ee13f..b7372e6bd5 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -179,4 +179,9 @@ struct xc_cpu_policy {
 };
 #endif /* x86 */
 
+#define SUPERPAGE_2MB_SHIFT   9
+#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
+#define SUPERPAGE_1GB_SHIFT   18
+#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
+
 #endif /* XG_PRIVATE_H */
diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 8b4e402df5..5c659aa55b 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -1,5 +1,4 @@
 #include <assert.h>
-
 #include "common.h"
 
 #include <xen-tools/libs.h>
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 43a31f9aa5..8e67989bbf 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -219,6 +219,16 @@ struct xc_sr_restore_ops
      */
     int (*setup)(struct xc_sr_context *ctx);
 
+    /**
+     * Populate PFNs
+     *
+     * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+     * unpopulated subset.
+     */
+    int (*populate_pfns)(struct xc_sr_context *ctx, unsigned count,
+                         const xen_pfn_t *original_pfns, const uint32_t *types);
+
+
     /**
      * Process an individual record from the stream.  The caller shall take
      * care of processing common records (e.g. END, PAGE_DATA).
@@ -366,6 +376,8 @@ struct xc_sr_context
 
             int send_back_fd;
             unsigned long p2m_size;
+            unsigned long max_pages;
+            unsigned long tot_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
 
             /* From Image Header. */
@@ -503,6 +515,14 @@ struct xc_sr_context
                     {
                         /* HVM context blob. */
                         struct xc_sr_blob context;
+
+                        /* Bitmap of currently allocated PFNs during restore. */
+                        struct sr_bitmap attempted_1g;
+                        struct sr_bitmap attempted_2m;
+                        struct sr_bitmap allocated_pfns;
+                        xen_pfn_t prev_populated_pfn;
+                        xen_pfn_t iteration_tracker_pfn;
+                        unsigned long iteration;
                     } restore;
                 };
             } hvm;
@@ -567,14 +587,6 @@ int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhd
 int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
                      struct xc_sr_record *rec);
 
-/*
- * This would ideally be private in restore.c, but is needed by
- * x86_pv_localise_page() if we receive pagetables frames ahead of the
- * contents of the frames they point at.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types);
-
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 8f7bce2585..5ad3df49ba 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -71,63 +71,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Given a set of pfns, obtain memory from Xen to fill the physmap for the
- * unpopulated subset.  If types is NULL, no page type checking is performed
- * and all unpopulated pfns are populated.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types)
-{
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
-        *pfns = ctx->restore.m->pp_pfns;
-    unsigned int i, nr_pfns = 0;
-    int rc = -1;
-
-    for ( i = 0; i < count; ++i )
-    {
-        if ( (!types ||
-              (types && page_type_has_stream_data(types[i]) == true)) &&
-             !pfn_is_populated(ctx, original_pfns[i]) )
-        {
-            rc = pfn_set_populated(ctx, original_pfns[i]);
-            if ( rc )
-                goto err;
-            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
-            ++nr_pfns;
-        }
-    }
-
-    if ( nr_pfns )
-    {
-        rc = xc_domain_populate_physmap_exact(
-            xch, ctx->domid, nr_pfns, 0, 0, mfns);
-        if ( rc )
-        {
-            PERROR("Failed to populate physmap");
-            goto err;
-        }
-
-        for ( i = 0; i < nr_pfns; ++i )
-        {
-            if ( mfns[i] == INVALID_MFN )
-            {
-                ERROR("Populate physmap failed for pfn %u", i);
-                rc = -1;
-                goto err;
-            }
-
-            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
-        }
-    }
-
-    rc = 0;
-
- err:
-    return rc;
-}
-
 static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
     int rc = 0;
@@ -270,7 +213,7 @@ static int map_guest_pages(struct xc_sr_context *ctx,
     uint32_t i, p;
     int rc;
 
-    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    rc = ctx->restore.ops.populate_pfns(ctx, pages->count, m->pfns, m->types);
     if ( rc )
     {
         ERROR("Failed to populate pfns for batch of %u pages", pages->count);
@@ -1077,6 +1020,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         return -1;
     }
 
+    /* See xc_domain_getinfo */
+    ctx.restore.max_pages = ctx.dominfo.max_memkb >> (PAGE_SHIFT-10);
+    ctx.restore.tot_pages = ctx.dominfo.nr_pages;
     ctx.restore.p2m_size = nr_pfns;
     ctx.restore.ops = ctx.dominfo.hvm
         ? restore_ops_x86_hvm : restore_ops_x86_pv;
diff --git a/tools/libs/saverestore/restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
index 97e7e0f48c..7ed438e1be 100644
--- a/tools/libs/saverestore/restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -130,6 +130,25 @@ static int x86_hvm_localise_page(struct xc_sr_context *ctx,
     return 0;
 }
 
+static bool x86_hvm_expand_sp_bitmaps(struct xc_sr_context *ctx, unsigned long max_pfn)
+{
+    struct sr_bitmap *bm;
+
+    bm = &ctx->x86.hvm.restore.attempted_1g;
+    if ( !sr_bitmap_expand(bm, max_pfn >> SUPERPAGE_1GB_SHIFT) )
+        return false;
+
+    bm = &ctx->x86.hvm.restore.attempted_2m;
+    if ( !sr_bitmap_expand(bm, max_pfn >> SUPERPAGE_2MB_SHIFT) )
+        return false;
+
+    bm = &ctx->x86.hvm.restore.allocated_pfns;
+    if ( !sr_bitmap_expand(bm, max_pfn) )
+        return false;
+
+    return true;
+}
+
 /*
  * restore_ops function. Confirms the stream matches the domain.
  */
@@ -164,12 +183,21 @@ static int x86_hvm_setup(struct xc_sr_context *ctx)
 
     max_pfn = max(ctx->restore.p2m_size, ctx->dominfo.max_memkb >> (PAGE_SHIFT-10));
     if ( !sr_bitmap_expand(&ctx->restore.populated_pfns, max_pfn) )
-    {
-        PERROR("Unable to allocate memory for populated_pfns bitmap");
-        return -1;
-    }
+        goto out;
+
+    if ( !x86_hvm_expand_sp_bitmaps(ctx, max_pfn) )
+        goto out;
+
+    /* FIXME: distinguish between PVH and HVM */
+    /* No superpage in 1st 2MB due to VGA hole */
+    sr_set_bit(0, &ctx->x86.hvm.restore.attempted_1g);
+    sr_set_bit(0, &ctx->x86.hvm.restore.attempted_2m);
 
     return 0;
+
+out:
+    PERROR("Unable to allocate memory for pfn bitmaps");
+    return -1;
 }
 
 /*
@@ -250,6 +278,9 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
 static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
     sr_bitmap_free(&ctx->restore.populated_pfns);
+    sr_bitmap_free(&ctx->x86.hvm.restore.attempted_1g);
+    sr_bitmap_free(&ctx->x86.hvm.restore.attempted_2m);
+    sr_bitmap_free(&ctx->x86.hvm.restore.allocated_pfns);
     free(ctx->x86.hvm.restore.context.ptr);
 
     free(ctx->x86.restore.cpuid.ptr);
@@ -258,6 +289,336 @@ static int x86_hvm_cleanup(struct xc_sr_context *ctx)
     return 0;
 }
 
+/*
+ * Set a range of pfns as allocated
+ */
+static void pfn_set_long_allocated(struct xc_sr_context *ctx, xen_pfn_t base_pfn)
+{
+    sr_set_long_bit(base_pfn, &ctx->x86.hvm.restore.allocated_pfns);
+}
+
+static void pfn_set_allocated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    sr_set_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns);
+}
+
+struct x86_hvm_sp {
+    xen_pfn_t pfn;
+    xen_pfn_t base_pfn;
+    unsigned long index;
+    unsigned long count;
+};
+
+/*
+ * Try to allocate a 1GB page for this pfn, but avoid over-allocation.
+ * If this succeeds, mark the range of 2MB pages as busy.
+ */
+static bool x86_hvm_alloc_1g(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int i, done;
+    xen_pfn_t extent;
+
+    /* Only one attempt to avoid overlapping allocation */
+    if ( sr_test_and_set_bit(sp->index, &ctx->x86.hvm.restore.attempted_1g) )
+        return false;
+
+    order = SUPERPAGE_1GB_SHIFT;
+    sp->count = SUPERPAGE_1GB_NR_PFNS;
+
+    /* Allocate only if there is room for another superpage */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages )
+        return false;
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 )
+        return false;
+
+    DPRINTF("1G %" PRI_xen_pfn "\n", sp->base_pfn);
+
+    /* Mark all 2MB pages as done to avoid overlapping allocation */
+    for ( i = 0; i < (SUPERPAGE_1GB_NR_PFNS/SUPERPAGE_2MB_NR_PFNS); i++ )
+        sr_set_bit((sp->base_pfn >> SUPERPAGE_2MB_SHIFT) + i, &ctx->x86.hvm.restore.attempted_2m);
+
+    return true;
+}
+
+/* Allocate a 2MB page if x86_hvm_alloc_1g failed, avoid over-allocation. */
+static bool x86_hvm_alloc_2m(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int done;
+    xen_pfn_t extent;
+
+    /* Only one attempt to avoid overlapping allocation */
+    if ( sr_test_and_set_bit(sp->index, &ctx->x86.hvm.restore.attempted_2m) )
+        return false;
+
+    order = SUPERPAGE_2MB_SHIFT;
+    sp->count = SUPERPAGE_2MB_NR_PFNS;
+
+    /* Allocate only if there is room for another superpage */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages )
+        return false;
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 )
+        return false;
+
+    DPRINTF("2M %" PRI_xen_pfn "\n", sp->base_pfn);
+    return true;
+}
+
+/* Allocate a single page if x86_hvm_alloc_2m failed. */
+static bool x86_hvm_alloc_4k(struct xc_sr_context *ctx, struct x86_hvm_sp *sp)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int order;
+    int done;
+    xen_pfn_t extent;
+
+    order = 0;
+    sp->count = 1UL;
+
+    /* Allocate only if there is room for another page */
+    if ( ctx->restore.tot_pages + sp->count > ctx->restore.max_pages ) {
+        errno = E2BIG;
+        return false;
+    }
+
+    extent = sp->base_pfn = (sp->pfn >> order) << order;
+    done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extent);
+    if ( done < 0 ) {
+        PERROR("populate_physmap failed.");
+        return false;
+    }
+    if ( done == 0 ) {
+        errno = ENOMEM;
+        return false;
+    }
+
+    DPRINTF("4K %" PRI_xen_pfn "\n", sp->base_pfn);
+    return true;
+}
+/*
+ * Attempt to allocate a superpage where the pfn resides.
+ */
+static int x86_hvm_allocate_pfn(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    bool success;
+    unsigned long idx_1g, idx_2m;
+    struct x86_hvm_sp sp = {
+        .pfn = pfn
+    };
+
+    if ( sr_test_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns) )
+        return 0;
+
+    idx_1g = pfn >> SUPERPAGE_1GB_SHIFT;
+    idx_2m = pfn >> SUPERPAGE_2MB_SHIFT;
+
+    sp.index = idx_1g;
+    success = x86_hvm_alloc_1g(ctx, &sp);
+
+    if ( success == false ) {
+        sp.index = idx_2m;
+        success = x86_hvm_alloc_2m(ctx, &sp);
+    }
+
+    if ( success == false ) {
+        sp.index = 0;
+        success = x86_hvm_alloc_4k(ctx, &sp);
+    }
+
+    if ( success == false )
+        return -1;
+
+    do {
+        if ( sp.count >= BITS_PER_LONG ) {
+            sp.count -= BITS_PER_LONG;
+            ctx->restore.tot_pages += BITS_PER_LONG;
+            pfn_set_long_allocated(ctx, sp.base_pfn + sp.count);
+        } else {
+            sp.count--;
+            ctx->restore.tot_pages++;
+            pfn_set_allocated(ctx, sp.base_pfn + sp.count);
+        }
+    } while ( sp.count );
+
+    return 0;
+}
+
+/*
+ * Deallocate memory.
+ * There was likely an optimistic superpage allocation.
+ * This means more pages may have been allocated past gap_end.
+ * This range is not freed now. Incoming higher pfns will release it.
+ */
+static int x86_hvm_punch_hole(struct xc_sr_context *ctx,
+                               xen_pfn_t gap_start, xen_pfn_t gap_end)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t _pfn, pfn;
+    uint32_t domid, freed = 0;
+    int rc;
+
+    pfn = gap_start >> SUPERPAGE_1GB_SHIFT;
+    do
+    {
+        sr_set_bit(pfn, &ctx->x86.hvm.restore.attempted_1g);
+    } while (++pfn <= gap_end >> SUPERPAGE_1GB_SHIFT);
+
+    pfn = gap_start >> SUPERPAGE_2MB_SHIFT;
+    do
+    {
+        sr_set_bit(pfn, &ctx->x86.hvm.restore.attempted_2m);
+    } while (++pfn <= gap_end >> SUPERPAGE_2MB_SHIFT);
+
+    pfn = gap_start;
+
+    while ( pfn <= gap_end )
+    {
+        if ( sr_test_and_clear_bit(pfn, &ctx->x86.hvm.restore.allocated_pfns) )
+        {
+            domid = ctx->domid;
+            _pfn = pfn;
+            rc = xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &_pfn);
+            if ( rc )
+            {
+                PERROR("Failed to release pfn %" PRI_xen_pfn, pfn);
+                return -1;
+            }
+            ctx->restore.tot_pages--;
+            freed++;
+        }
+        pfn++;
+    }
+    if ( freed )
+        DPRINTF("freed %u between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
+                freed, gap_start, gap_end);
+    return 0;
+}
+
+static int x86_hvm_unpopulate_page(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    sr_clear_bit(pfn, &ctx->restore.populated_pfns);
+    return x86_hvm_punch_hole(ctx, pfn, pfn);
+}
+
+static int x86_hvm_populate_page(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xen_pfn_t gap_start, gap_end;
+    bool has_gap, first_iteration;
+    int rc;
+
+    /*
+     * Check for a gap between the previous populated pfn and this pfn.
+     * If a gap exists, a hole must be punched to release memory,
+     * starting after the previous pfn and before this pfn.
+     *
+     * But: this can be done only during the first iteration, which is the
+     * only place where superpage allocations are attempted. All following
+     * iterations lack the info to properly maintain prev_populated_pfn.
+     */
+    has_gap = ctx->x86.hvm.restore.prev_populated_pfn + 1 < pfn;
+    first_iteration = ctx->x86.hvm.restore.iteration == 0;
+    if ( has_gap && first_iteration )
+    {
+        gap_start = ctx->x86.hvm.restore.prev_populated_pfn + 1;
+        gap_end = pfn - 1;
+
+        rc = x86_hvm_punch_hole(ctx, gap_start, gap_end);
+        if ( rc )
+            goto err;
+    }
+
+    rc = x86_hvm_allocate_pfn(ctx, pfn);
+    if ( rc )
+        goto err;
+    pfn_set_populated(ctx, pfn);
+    ctx->x86.hvm.restore.prev_populated_pfn = pfn;
+
+    rc = 0;
+err:
+    return rc;
+}
+
+/*
+ * Try to allocate superpages.
+ * This works without a memory map because the pfns arrive in increasing order.
+ * All pfn numbers and their types are submitted.
+ * Only pfns with data will also have their content transmitted.
+ */
+static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                 const xen_pfn_t *original_pfns,
+                                 const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t pfn, min_pfn, max_pfn;
+    bool has_data, populated;
+    unsigned i;
+    int rc = 0;
+
+    min_pfn = count ? original_pfns[0] : 0;
+    max_pfn = count ? original_pfns[count - 1] : 0;
+    DPRINTF("batch of %u pfns between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
+            count, min_pfn, max_pfn);
+
+    if ( !x86_hvm_expand_sp_bitmaps(ctx, max_pfn) )
+    {
+        ERROR("Unable to allocate memory for pfn bitmaps");
+        return -1;
+    }
+
+    /*
+     * There is no indicator for a new iteration.
+     * Simulate it by checking if a lower pfn is coming in.
+     * In the end it matters only to know if this iteration is the first one.
+     */
+    if ( min_pfn < ctx->x86.hvm.restore.iteration_tracker_pfn )
+        ctx->x86.hvm.restore.iteration++;
+    ctx->x86.hvm.restore.iteration_tracker_pfn = min_pfn;
+
+    for ( i = 0; i < count; ++i )
+    {
+        pfn = original_pfns[i];
+
+        has_data = page_type_has_stream_data(types[i]);
+        populated = pfn_is_populated(ctx, pfn);
+
+        /*
+         * page has data, pfn populated: nothing to do
+         * page has data, pfn not populated: likely never seen before
+         * page has no data, pfn populated: likely ballooned out during migration
+         * page has no data, pfn not populated: nothing to do
+         */
+        if ( has_data && !populated )
+        {
+            rc = x86_hvm_populate_page(ctx, pfn);
+        } else if ( !has_data && populated )
+        {
+            rc = x86_hvm_unpopulate_page(ctx, pfn);
+        }
+        if ( rc )
+            break;
+    }
+
+    return rc;
+}
+
+
 struct xc_sr_restore_ops restore_ops_x86_hvm =
 {
     .pfn_is_valid    = x86_hvm_pfn_is_valid,
@@ -266,6 +627,7 @@ struct xc_sr_restore_ops restore_ops_x86_hvm =
     .set_page_type   = x86_hvm_set_page_type,
     .localise_page   = x86_hvm_localise_page,
     .setup           = x86_hvm_setup,
+    .populate_pfns   = x86_hvm_populate_pfns,
     .process_record  = x86_hvm_process_record,
     .static_data_complete = x86_static_data_complete,
     .stream_complete = x86_hvm_stream_complete,
diff --git a/tools/libs/saverestore/restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
index c73a3cd99f..244f1da218 100644
--- a/tools/libs/saverestore/restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -959,6 +959,64 @@ static void x86_pv_set_gfn(struct xc_sr_context *ctx, xen_pfn_t pfn,
         ((uint32_t *)ctx->x86.pv.p2m)[pfn] = mfn;
 }
 
+/*
+ * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+ * unpopulated subset.  If types is NULL, no page type checking is performed
+ * and all unpopulated pfns are populated.
+ */
+static int x86_pv_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                const xen_pfn_t *original_pfns,
+                                const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
+        *pfns = ctx->restore.m->pp_pfns;
+    unsigned int i, nr_pfns = 0;
+    int rc = -1;
+
+    for ( i = 0; i < count; ++i )
+    {
+        if ( (!types ||
+              page_type_has_stream_data(types[i])) &&
+             !pfn_is_populated(ctx, original_pfns[i]) )
+        {
+            rc = pfn_set_populated(ctx, original_pfns[i]);
+            if ( rc )
+                goto err;
+            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
+            ++nr_pfns;
+        }
+    }
+
+    if ( nr_pfns )
+    {
+        rc = xc_domain_populate_physmap_exact(
+            xch, ctx->domid, nr_pfns, 0, 0, mfns);
+        if ( rc )
+        {
+            PERROR("Failed to populate physmap");
+            goto err;
+        }
+
+        for ( i = 0; i < nr_pfns; ++i )
+        {
+            if ( mfns[i] == INVALID_MFN )
+            {
+                ERROR("Populate physmap failed for pfn %u", i);
+                rc = -1;
+                goto err;
+            }
+
+            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
+        }
+    }
+
+    rc = 0;
+
+ err:
+    return rc;
+}
+
 /*
  * restore_ops function.  Convert pfns back to mfns in pagetables.  Possibly
  * needs to populate new frames if a PTE is found referring to a frame which
@@ -1003,7 +1061,7 @@ static int x86_pv_localise_page(struct xc_sr_context *ctx,
         }
     }
 
-    if ( to_populate && populate_pfns(ctx, to_populate, pfns, NULL) )
+    if ( to_populate && x86_pv_populate_pfns(ctx, to_populate, pfns, NULL) )
         return -1;
 
     for ( i = 0; i < (PAGE_SIZE / sizeof(uint64_t)); ++i )
@@ -1200,6 +1258,7 @@ struct xc_sr_restore_ops restore_ops_x86_pv =
     .set_gfn         = x86_pv_set_gfn,
     .localise_page   = x86_pv_localise_page,
     .setup           = x86_pv_setup,
+    .populate_pfns   = x86_pv_populate_pfns,
     .process_record  = x86_pv_process_record,
     .static_data_complete = x86_static_data_complete,
     .stream_complete = x86_pv_stream_complete,


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:34 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 31/36] tools: add --max_iters to libxl_domain_suspend
Date: Wed, 16 Jun 2021 14:51:24 +0200
Message-Id: <20210616125129.26563-32-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Migrating a large, and potentially busy, domU will take more
time than necessary due to an excessive number of copying iterations.

Allow the host admin to control the number of iterations which
copy accumulated domU dirty pages to the target host.

The default remains 5, which means one initial iteration to copy the
entire domU memory, and up to 4 additional iterations to copy dirty
memory from the still running domU. After the given number of iterations
the domU is suspended, remaining dirty memory is copied and the domU is
finally moved to the target host.

This patch adjusts xl(1) and the libxl API.
External users check LIBXL_HAVE_DOMAIN_SUSPEND_PROPS for the availability
of the new .max_iters property.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in              |  4 ++++
 tools/include/libxl.h             |  1 +
 tools/libs/light/libxl_dom_save.c |  2 +-
 tools/libs/light/libxl_domain.c   |  1 +
 tools/libs/light/libxl_internal.h |  1 +
 tools/xl/xl_cmdtable.c            |  3 ++-
 tools/xl/xl_migrate.c             | 10 +++++++++-
 7 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 70a6ebf438..594387bcf4 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -494,6 +494,10 @@ such that it will be identical on the destination host, unless that
 configuration is overridden using the B<-C> option. Note that it is not
 possible to use this option for a 'localhost' migration.
 
+=item B<--max_iters> I<iterations>
+
+Number of copy iterations before the final suspend+move (default: 5).
+
 =back
 
 =item B<remus> [I<OPTIONS>] I<domain-id> I<host>
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 9a4d7514ed..bf77da0524 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1714,6 +1714,7 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
 
 typedef struct {
     uint32_t flags; /* LIBXL_SUSPEND_* */
+    uint32_t max_iters;
 } libxl_domain_suspend_props;
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 3f3cff0342..938c0127f3 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -383,7 +383,7 @@ static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
          stats.iteration, stats.dirty_count, stats.total_written);
     if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
         goto stop_copy;
-    if (stats.iteration >= LIBXL_XGS_POLICY_MAX_ITERATIONS)
+    if (stats.iteration >= dss->max_iters)
         goto stop_copy;
     return XGS_POLICY_CONTINUE_PRECOPY;
 
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 5dbd27900f..9f98cd7f2b 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -527,6 +527,7 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
     dss->domid = domid;
     dss->fd = fd;
     dss->type = type;
+    dss->max_iters = props->max_iters ?: LIBXL_XGS_POLICY_MAX_ITERATIONS;
     dss->live = props->flags & LIBXL_SUSPEND_LIVE;
     dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 57d7e4b4b8..8cbcc5282c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3649,6 +3649,7 @@ struct libxl__domain_save_state {
     int live;
     int debug;
     int checkpointed_stream;
+    uint32_t max_iters;
     const libxl_domain_remus_info *remus;
     /* private */
     int rc;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index ca1dfa3525..9b6b3c99aa 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -174,7 +174,8 @@ const struct cmd_spec cmd_table[] = {
       "                of the domain.\n"
       "--debug         Ignored.\n"
       "-p              Do not unpause domain after migrating it.\n"
-      "-D              Preserve the domain id"
+      "-D              Preserve the domain id\n"
+      "--max_iters N   Number of copy iterations before final stop+move"
     },
     { "restore",
       &main_restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 144890924f..af117d4d56 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -178,6 +178,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 
 static void migrate_domain(uint32_t domid, int preserve_domid,
                            const char *rune, int debug,
+                           uint32_t max_iters,
                            const char *override_config_file)
 {
     pid_t child = -1;
@@ -189,6 +190,7 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     int config_len;
     libxl_domain_suspend_props props = {
         .flags = LIBXL_SUSPEND_LIVE,
+        .max_iters = max_iters,
         };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
@@ -542,8 +544,10 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
     int preserve_domid = 0;
+    uint32_t max_iters = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
+        {"max_iters", 1, 0, 0x101},
         {"live", 0, 0, 0x200},
         COMMON_LONG_OPTS
     };
@@ -571,6 +575,9 @@ int main_migrate(int argc, char **argv)
     case 0x100: /* --debug */
         debug = 1;
         break;
+    case 0x101: /* --max_iters */
+        max_iters = atoi(optarg);
+        break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
         break;
@@ -605,7 +612,8 @@ int main_migrate(int argc, char **argv)
                   pause_after_migration ? " -p" : "");
     }
 
-    migrate_domain(domid, preserve_domid, rune, debug, config_filename);
+    migrate_domain(domid, preserve_domid, rune, debug,
+                   max_iters, config_filename);
     return EXIT_SUCCESS;
 }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:07:52 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 30/36] tools: add callback to libxl for precopy_policy and precopy_stats_t
Date: Wed, 16 Jun 2021 14:51:23 +0200
Message-Id: <20210616125129.26563-31-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This duplicates simple_precopy_policy. To recap its purpose:
- do up to 5 iterations of copying dirty domU memory to the target,
  including the initial copying of all domU memory, excluding
  the final copying while the domU is suspended
- do fewer iterations in case the domU dirtied less than 50 pages

Take the opportunity to also move xen_pfn_t into qw().

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/light/libxl_dom_save.c       | 19 +++++++++++++++++++
 tools/libs/light/libxl_internal.h       |  2 ++
 tools/libs/light/libxl_save_msgs_gen.pl |  3 ++-
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dom_save.c b/tools/libs/light/libxl_dom_save.c
index 32e3cb5a13..3f3cff0342 100644
--- a/tools/libs/light/libxl_dom_save.c
+++ b/tools/libs/light/libxl_dom_save.c
@@ -373,6 +373,24 @@ int libxl__save_emulator_xenstore_data(libxl__domain_save_state *dss,
     return rc;
 }
 
+static int libxl__domain_save_precopy_policy(precopy_stats_t stats, void *user)
+{
+    libxl__save_helper_state *shs = user;
+    libxl__domain_save_state *dss = shs->caller_state;
+    STATE_AO_GC(dss->ao);
+
+    LOGD(DEBUG, shs->domid, "iteration %u dirty_count %ld total_written %lu",
+         stats.iteration, stats.dirty_count, stats.total_written);
+    if (stats.dirty_count >= 0 && stats.dirty_count < LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT)
+        goto stop_copy;
+    if (stats.iteration >= LIBXL_XGS_POLICY_MAX_ITERATIONS)
+        goto stop_copy;
+    return XGS_POLICY_CONTINUE_PRECOPY;
+
+stop_copy:
+    return XGS_POLICY_STOP_AND_COPY;
+}
+
 /*----- main code for saving, in order of execution -----*/
 
 void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
@@ -430,6 +448,7 @@ void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
         callbacks->suspend = libxl__domain_suspend_callback;
 
     callbacks->switch_qemu_logdirty = libxl__domain_suspend_common_switch_qemu_logdirty;
+    callbacks->precopy_policy = libxl__domain_save_precopy_policy;
 
     dss->sws.ao  = dss->ao;
     dss->sws.dss = dss;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 439c654733..57d7e4b4b8 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -125,6 +125,8 @@
 #define DOMID_XS_PATH "domid"
 #define PVSHIM_BASENAME "xen-shim"
 #define PVSHIM_CMDLINE "pv-shim console=xen,pv"
+#define LIBXL_XGS_POLICY_MAX_ITERATIONS 5
+#define LIBXL_XGS_POLICY_TARGET_DIRTY_COUNT 50
 
 /* Size macros. */
 #define __AC(X,Y)   (X##Y)
diff --git a/tools/libs/light/libxl_save_msgs_gen.pl b/tools/libs/light/libxl_save_msgs_gen.pl
index f263ee01bb..ab55c81644 100755
--- a/tools/libs/light/libxl_save_msgs_gen.pl
+++ b/tools/libs/light/libxl_save_msgs_gen.pl
@@ -23,6 +23,7 @@ our @msgs = (
                                              STRING doing_what),
                                             'unsigned long', 'done',
                                             'unsigned long', 'total'] ],
+    [ 'scxW',   "precopy_policy", ['precopy_stats_t', 'stats'] ],
     [ 'srcxA',  "suspend", [] ],
     [ 'srcxA',  "postcopy", [] ],
     [ 'srcxA',  "checkpoint", [] ],
@@ -142,7 +143,7 @@ static void bytes_put(unsigned char *const buf, int *len,
 
 END
 
-foreach my $simpletype (qw(int uint16_t uint32_t unsigned), 'unsigned long', 'xen_pfn_t') {
+foreach my $simpletype (qw(int uint16_t uint32_t unsigned precopy_stats_t xen_pfn_t), 'unsigned long') {
     my $typeid = typeid($simpletype);
     $out_body{'callout'} .= <<END;
 static int ${typeid}_get(const unsigned char **msg,


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:05 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 35/36] tools: use sr_bitmap for populated_pfns
Date: Wed, 16 Jun 2021 14:51:28 +0200
Message-Id: <20210616125129.26563-36-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- remove xg_ prefix from called functions
---
 tools/libs/saverestore/common.h          | 21 +++++++-
 tools/libs/saverestore/restore.c         | 69 ------------------------
 tools/libs/saverestore/restore_x86_hvm.c |  9 ++++
 tools/libs/saverestore/restore_x86_pv.c  |  7 +++
 4 files changed, 35 insertions(+), 71 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 43aa1a7b86..43a31f9aa5 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -403,8 +403,7 @@ struct xc_sr_context
             uint32_t     xenstore_domid,  console_domid;
 
             /* Bitmap of currently populated PFNs during restore. */
-            unsigned long *populated_pfns;
-            xen_pfn_t max_populated_pfn;
+            struct sr_bitmap populated_pfns;
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
@@ -629,6 +628,24 @@ static inline bool page_type_has_stream_data(uint32_t type)
     }
     return ret;
 }
+
+static inline bool pfn_is_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    return sr_test_bit(pfn, &ctx->restore.populated_pfns);
+}
+
+static inline int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( sr_set_bit(pfn, &ctx->restore.populated_pfns) == false )
+    {
+        PERROR("Failed to realloc populated_pfns bitmap");
+        errno = ENOMEM;
+        return -1;
+    }
+    return 0;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index d0148606bf..8f7bce2585 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -71,64 +71,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Is a pfn populated?
- */
-static bool pfn_is_populated(const struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    if ( pfn > ctx->restore.max_populated_pfn )
-        return false;
-    return test_bit(pfn, ctx->restore.populated_pfns);
-}
-
-/*
- * Set a pfn as populated, expanding the tracking structures if needed. To
- * avoid realloc()ing too excessively, the size increased to the nearest power
- * of two large enough to contain the required pfn.
- */
-static int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    xc_interface *xch = ctx->xch;
-
-    if ( pfn > ctx->restore.max_populated_pfn )
-    {
-        xen_pfn_t new_max;
-        size_t old_sz, new_sz;
-        unsigned long *p;
-
-        /* Round up to the nearest power of two larger than pfn, less 1. */
-        new_max = pfn;
-        new_max |= new_max >> 1;
-        new_max |= new_max >> 2;
-        new_max |= new_max >> 4;
-        new_max |= new_max >> 8;
-        new_max |= new_max >> 16;
-#ifdef __x86_64__
-        new_max |= new_max >> 32;
-#endif
-
-        old_sz = bitmap_size(ctx->restore.max_populated_pfn + 1);
-        new_sz = bitmap_size(new_max + 1);
-        p = realloc(ctx->restore.populated_pfns, new_sz);
-        if ( !p )
-        {
-            ERROR("Failed to realloc populated bitmap");
-            errno = ENOMEM;
-            return -1;
-        }
-
-        memset((uint8_t *)p + old_sz, 0x00, new_sz - old_sz);
-
-        ctx->restore.populated_pfns    = p;
-        ctx->restore.max_populated_pfn = new_max;
-    }
-
-    assert(!test_bit(pfn, ctx->restore.populated_pfns));
-    set_bit(pfn, ctx->restore.populated_pfns);
-
-    return 0;
-}
-
 /*
  * Given a set of pfns, obtain memory from Xen to fill the physmap for the
  * unpopulated subset.  If types is NULL, no page type checking is performed
@@ -929,16 +871,6 @@ static int setup(struct xc_sr_context *ctx)
     if ( rc )
         goto err;
 
-    ctx->restore.max_populated_pfn = (32 * 1024 / 4) - 1;
-    ctx->restore.populated_pfns = bitmap_alloc(
-        ctx->restore.max_populated_pfn + 1);
-    if ( !ctx->restore.populated_pfns )
-    {
-        ERROR("Unable to allocate memory for populated_pfns bitmap");
-        rc = -1;
-        goto err;
-    }
-
     ctx->restore.buffered_records = malloc(
         DEFAULT_BUF_RECORDS * sizeof(struct xc_sr_record));
     if ( !ctx->restore.buffered_records )
@@ -977,7 +909,6 @@ static void cleanup(struct xc_sr_context *ctx)
 
     free(ctx->restore.m);
     free(ctx->restore.buffered_records);
-    free(ctx->restore.populated_pfns);
 
     if ( ctx->restore.ops.cleanup(ctx) )
         PERROR("Failed to clean up");
diff --git a/tools/libs/saverestore/restore_x86_hvm.c b/tools/libs/saverestore/restore_x86_hvm.c
index bd63bd2818..97e7e0f48c 100644
--- a/tools/libs/saverestore/restore_x86_hvm.c
+++ b/tools/libs/saverestore/restore_x86_hvm.c
@@ -136,6 +136,7 @@ static int x86_hvm_localise_page(struct xc_sr_context *ctx,
 static int x86_hvm_setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    unsigned long max_pfn;
 
     if ( ctx->restore.guest_type != DHDR_TYPE_X86_HVM )
     {
@@ -161,6 +162,13 @@ static int x86_hvm_setup(struct xc_sr_context *ctx)
     }
 #endif
 
+    max_pfn = max(ctx->restore.p2m_size, ctx->dominfo.max_memkb >> (PAGE_SHIFT-10));
+    if ( !sr_bitmap_expand(&ctx->restore.populated_pfns, max_pfn) )
+    {
+        PERROR("Unable to allocate memory for populated_pfns bitmap");
+        return -1;
+    }
+
     return 0;
 }
 
@@ -241,6 +249,7 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
 
 static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
+    sr_bitmap_free(&ctx->restore.populated_pfns);
     free(ctx->x86.hvm.restore.context.ptr);
 
     free(ctx->x86.restore.cpuid.ptr);
diff --git a/tools/libs/saverestore/restore_x86_pv.c b/tools/libs/saverestore/restore_x86_pv.c
index 96608e5231..c73a3cd99f 100644
--- a/tools/libs/saverestore/restore_x86_pv.c
+++ b/tools/libs/saverestore/restore_x86_pv.c
@@ -1060,6 +1060,12 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
+    if ( !sr_bitmap_expand(&ctx->restore.populated_pfns, 32 * 1024 / 4) )
+    {
+        PERROR("Unable to allocate memory for populated_pfns bitmap");
+        return -1;
+    }
+
     ctx->x86.pv.restore.nr_vcpus = ctx->dominfo.max_vcpu_id + 1;
     ctx->x86.pv.restore.vcpus = calloc(sizeof(struct xc_sr_x86_pv_restore_vcpu),
                                        ctx->x86.pv.restore.nr_vcpus);
@@ -1153,6 +1159,7 @@ static int x86_pv_stream_complete(struct xc_sr_context *ctx)
  */
 static int x86_pv_cleanup(struct xc_sr_context *ctx)
 {
+    sr_bitmap_free(&ctx->restore.populated_pfns);
     free(ctx->x86.pv.p2m);
     free(ctx->x86.pv.p2m_pfns);
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:06 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 34/36] tools: add API for expandable bitmaps
Date: Wed, 16 Jun 2021 14:51:27 +0200
Message-Id: <20210616125129.26563-35-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Since the incoming migration stream lacks information about what the highest
pfn will be, data structures cannot be allocated upfront.

Add an API for expandable bitmaps, loosely based on pfn_set_populated.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

v02:
- remove xg_ prefix from functions
---
 tools/libs/saverestore/common.c | 40 ++++++++++++++++++++
 tools/libs/saverestore/common.h | 67 +++++++++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)

diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 7da7fa4e2c..8b4e402df5 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -163,6 +163,46 @@ static void __attribute__((unused)) build_assertions(void)
     BUILD_BUG_ON(sizeof(struct xc_sr_rec_hvm_params)        != 8);
 }
 
+/*
+ * Expand the tracking structures as needed.
+ * To avoid realloc()ing too excessively, the size is increased to the nearest
+ * power of two large enough to contain the required number of bits.
+ */
+bool _sr_bitmap_expand(struct sr_bitmap *bm, unsigned long bits)
+{
+    size_t new_max;
+    size_t old_sz, new_sz;
+    void *p;
+
+    if (bits <= bm->bits)
+        return true;
+
+    /* Round up to the nearest power of two larger than bits, less 1. */
+    new_max = bits;
+    new_max |= new_max >> 1;
+    new_max |= new_max >> 2;
+    new_max |= new_max >> 4;
+    new_max |= new_max >> 8;
+    new_max |= new_max >> 16;
+    if ( sizeof(unsigned long) > 4 )
+        new_max |= new_max >> 32;
+
+    /* Allocate units of unsigned long */
+    new_max = (new_max + BITS_PER_LONG - 1) & ~(BITS_PER_LONG - 1);
+
+    old_sz = bitmap_size(bm->bits);
+    new_sz = bitmap_size(new_max);
+    p = realloc(bm->p, new_sz);
+    if (!p)
+        return false;
+
+    memset(p + old_sz, 0, new_sz - old_sz);
+    bm->p = p;
+    bm->bits = new_max;
+
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 60bbba6aa9..43aa1a7b86 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -30,6 +30,73 @@ const char *rec_type_to_str(uint32_t type);
 struct xc_sr_context;
 struct xc_sr_record;
 
+struct sr_bitmap
+{
+    void *p;
+    unsigned long bits;
+};
+
+extern bool _sr_bitmap_expand(struct sr_bitmap *bm, unsigned long bits);
+
+static inline bool sr_bitmap_expand(struct sr_bitmap *bm, unsigned long bits)
+{
+    if (bits > bm->bits)
+        return _sr_bitmap_expand(bm, bits);
+    return true;
+}
+
+static inline void sr_bitmap_free(struct sr_bitmap *bm)
+{
+    free(bm->p);
+    bm->p = NULL;
+}
+
+static inline bool sr_set_bit(unsigned long bit, struct sr_bitmap *bm)
+{
+    if (sr_bitmap_expand(bm, bit) == false)
+        return false;
+
+    set_bit(bit, bm->p);
+    return true;
+}
+
+static inline bool sr_test_bit(unsigned long bit, struct sr_bitmap *bm)
+{
+    if (bit > bm->bits)
+        return false;
+    return !!test_bit(bit, bm->p);
+}
+
+static inline void sr_clear_bit(unsigned long bit, struct sr_bitmap *bm)
+{
+    if (bit <= bm->bits)
+        clear_bit(bit, bm->p);
+}
+
+static inline bool sr_test_and_clear_bit(unsigned long bit, struct sr_bitmap *bm)
+{
+    if (bit > bm->bits)
+        return false;
+    return !!test_and_clear_bit(bit, bm->p);
+}
+
+/* No way to report a potential allocation error, bitmap must be expanded prior to usage */
+static inline bool sr_test_and_set_bit(unsigned long bit, struct sr_bitmap *bm)
+{
+    if (bit > bm->bits)
+        return false;
+    return !!test_and_set_bit(bit, bm->p);
+}
+
+static inline bool sr_set_long_bit(unsigned long base_bit, struct sr_bitmap *bm)
+{
+    if (sr_bitmap_expand(bm, base_bit + BITS_PER_LONG) == false)
+        return false;
+
+    set_bit_long(base_bit, bm->p);
+    return true;
+}
+
 /**
  * Save operations.  To be implemented for each type of guest, for use by the
  * common save algorithm.


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:18 2021
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-3-andrew.cooper3@citrix.com>
 <c59dae19-2a88-9449-468a-ab22d38fd0e7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/5] tools/tests: Drop run runes
Message-ID: <4a57467a-36ea-bd5b-7e6a-ed0dfaa33314@citrix.com>
Date: Wed, 16 Jun 2021 14:08:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c59dae19-2a88-9449-468a-ab22d38fd0e7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 16/06/2021 07:44, Jan Beulich wrote:
> On 15.06.2021 18:19, Andrew Cooper wrote:
>> --- a/tools/tests/x86_emulator/Makefile
>> +++ b/tools/tests/x86_emulator/Makefile
>> @@ -7,10 +7,6 @@ TARGET := test_x86_emulator
>>  .PHONY: all
>>  all:
>> 
>> -.PHONY: run
>> -run: $(TARGET)
>> -	./$(TARGET)
>> -
>>  # Add libx86 to the build
>>  vpath %.c $(XEN_ROOT)/xen/lib/x86
>> 
> This is not only incomplete, but actively (specifically here for my
> own frequent using of it, but other tests I do run occasionally as
> well, and then also that same way) harming things as long as you
> don't introduce an alternative way. Note the top-level Makefile
> making use of these rules, and note also the run32 companion here.

Honestly, this makefile is borderline impossible to follow. I failed to
make the install runes work, which is part of why I deferred the
unit-like tests for now.

But I'm taking this as a strong preference to keep the run target?

TBH, this patch is a little on the side of the rest of the series. I
stand by my commit message though - these are inconsistent, and buggy in
at least one case.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:08:23 +0000
    b3IPkUnj0nDY6OEfxPY/ETIXMQEgLZ5VeEyEB7KXOyjsuJ4eXgc+vJ+00in9mq5YjvKj
    V32FvQ666+Mnn8lJDAeGJLZPPSsaTbIwyZWucHpViSoSVWX1AoajBP2uY92DIEGTmofg
    fMSG53R9gDG4pTJkeKkBUc3vnG80eXabYSLyFXwKJct+1LKle9bJ13NVP0cO6Rx2xf5x
    o0g4ubXe+t6ngvjuds5X8u6iCRC/uY/r1ylFnid1Uke7pSPD6d4FUYqM3t63ttJuRewD
    IrUg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 16/36] tools: save: move rec_pfns array
Date: Wed, 16 Jun 2021 14:51:09 +0200
Message-Id: <20210616125129.26563-17-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hotpath by moving the rec_pfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h |  2 ++
 tools/libs/saverestore/save.c   | 11 +----------
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index ae87954364..2950947f1d 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -233,6 +233,8 @@ struct sr_save_arrays {
     int errors[MAX_BATCH_SIZE];
     /* write_batch: iovec[] for writev(). */
     struct iovec iov[MAX_BATCH_SIZE + 4];
+    /* write_batch */
+    uint64_t rec_pfns[MAX_BATCH_SIZE];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 1a5f3d29ea..0f02988ff9 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -96,7 +96,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
-    uint64_t *rec_pfns = NULL;
+    uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
@@ -201,14 +201,6 @@ static int write_batch(struct xc_sr_context *ctx)
         }
     }
 
-    rec_pfns = malloc(nr_pfns * sizeof(*rec_pfns));
-    if ( !rec_pfns )
-    {
-        ERROR("Unable to allocate %zu bytes of memory for page data pfn list",
-              nr_pfns * sizeof(*rec_pfns));
-        goto err;
-    }
-
     hdr.count = nr_pfns;
 
     rec.length = sizeof(hdr);
@@ -259,7 +251,6 @@ static int write_batch(struct xc_sr_context *ctx)
     rc = ctx->save.nr_batch_pfns = 0;
 
  err:
-    free(rec_pfns);
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:08:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143096.263941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHQ-0002Ls-NO; Wed, 16 Jun 2021 13:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143096.263941; Wed, 16 Jun 2021 13:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHQ-0002LZ-Jc; Wed, 16 Jun 2021 13:08:28 +0000
Received: by outflank-mailman (input) for mailman id 143096;
 Wed, 16 Jun 2021 13:08:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV2T-0006lZ-2q
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:53:01 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 046d4327-6b6c-4c35-882b-9d3651f4494a;
 Wed, 16 Jun 2021 12:51:51 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpitmJ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 046d4327-6b6c-4c35-882b-9d3651f4494a
ARC-Seal: i=1; a=rsa-sha256; t=1623847904; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=FBnmkbb9LZm/91iFiW2q8M9FboL1OUtmmvM7VGSguLzkiWperTNUhojbegLeR7lS3v
    Maix+C5WnemiZeKLPz27BPVfF0D/muHRgGPFQNrtVjL0AyykIlx+eAivdmDorzen2pTc
    istP1d73k1oVB9g+kwnc28I7ZYXqLOFpYiqk8go1coDS66n7LkkKPqmX4LENDlOdYPwD
    GFmtXQENh9IjOTm78CjORfHoLtVqpZjj6Qawe06TF//smjR8QMxAZ6OKgA6McR8TJLSW
    0pJJCeGi4EJWbXopCuRj+dIEYuSk0JnAFZBYc6CRthdLkjF0rbkhQ8o9fU4Au7WEYPhp
    XPww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847904;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2FAZoK8lmXx9wrot1ueh4V2OeQ0OeMkG5/FHU6cLwr4=;
    b=PzMQRnTORgigITH0zv8GFr0IbWYUNZbO01DzrX3il4hf47u5Y74Y+NyvWapCIbYmxd
    yHqz6xGkoFa4VlwiAsZyfBiFNudsijQ3IBiUT/0kK4gKRq85EXcSIUBWl95Bb3D3ZFTE
    FaeXX1pWamcEBq1Jc6WbnIo8YFVcyZHk9Bn6jkVVDBqnBVaNQjTMPDN4qZ4AxtHv8VZb
    i19bjwGYC5c1jd2xDJgF2xC04DLz7dEOIyfv/7jSSsdvNKuV590IIK9ErWASM2fpl82z
    iA0lKPAlMMq9gAUr8Pj4q9VvBThM4YgXso7nkjOEcPYD8BcU1AStArLyH3cIvx6pYLo8
    w74A==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847904;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=2FAZoK8lmXx9wrot1ueh4V2OeQ0OeMkG5/FHU6cLwr4=;
    b=T4A/lccgXKvjZLLHc/E21dQfvrcQjrAgebXECbhRInmR+Xb+JTwEteNz2nAXu1x9Q1
    1d0QDBugkXLKUmSjbbbtVt0Cwf/bX5bxQvKmkMyCgPvBa2uio/lmIo8fKcHFngjWBjIM
    M3oWLC1nXs6hfe9motmiDsd8JQO94yMCiZ4rUvwzrctp7LV4TfvsclsMTii6mUlSKmvB
    xAJ6/WLMXnHNS95AMSBz9/swblmq2iHhED8b/9A5jq5wBvgMqIVeTbyfrP7qxtZEw/Bj
    jnI09XkzIbFCemZk7aA7DZQVbQc4KJ/r8+cFv9plRj41ZlAucfDi0RTEpfeGv2ekQBwe
    ATUA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 24/36] tools: restore: split record processing
Date: Wed, 16 Jun 2021 14:51:17 +0200
Message-Id: <20210616125129.26563-25-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data which can be consumed verbatim.

Rearrange the code to allow decisions based on the incoming record.

This change is preparation for future changes in handle_page_data;
no change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.c  | 33 ++++++++++++---------
 tools/libs/saverestore/common.h  |  4 ++-
 tools/libs/saverestore/restore.c | 49 ++++++++++++++++++++++----------
 tools/libs/saverestore/save.c    |  7 ++++-
 4 files changed, 63 insertions(+), 30 deletions(-)

diff --git a/tools/libs/saverestore/common.c b/tools/libs/saverestore/common.c
index 77128bc747..7da7fa4e2c 100644
--- a/tools/libs/saverestore/common.c
+++ b/tools/libs/saverestore/common.c
@@ -91,26 +91,33 @@ int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
     return -1;
 }
 
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rhdr rhdr;
-    size_t datasz;
 
-    if ( read_exact(fd, &rhdr, sizeof(rhdr)) )
+    if ( read_exact(fd, rhdr, sizeof(*rhdr)) )
     {
         PERROR("Failed to read Record Header from stream");
         return -1;
     }
 
-    if ( rhdr.length > REC_LENGTH_MAX )
+    if ( rhdr->length > REC_LENGTH_MAX )
     {
-        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr.type,
-              rec_type_to_str(rhdr.type), rhdr.length, REC_LENGTH_MAX);
+        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr->type,
+              rec_type_to_str(rhdr->type), rhdr->length, REC_LENGTH_MAX);
         return -1;
     }
 
-    datasz = ROUNDUP(rhdr.length, REC_ALIGN_ORDER);
+    return 0;
+}
+
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    size_t datasz;
+
+    datasz = ROUNDUP(rhdr->length, REC_ALIGN_ORDER);
 
     if ( datasz )
     {
@@ -119,7 +126,7 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
         if ( !rec->data )
         {
             ERROR("Unable to allocate %zu bytes for record data (0x%08x, %s)",
-                  datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                  datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
 
@@ -128,18 +135,18 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
             free(rec->data);
             rec->data = NULL;
             PERROR("Failed to read %zu bytes of data for record (0x%08x, %s)",
-                   datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                   datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
     }
     else
         rec->data = NULL;
 
-    rec->type   = rhdr.type;
-    rec->length = rhdr.length;
+    rec->type   = rhdr->type;
+    rec->length = rhdr->length;
 
     return 0;
-};
+}
 
 static void __attribute__((unused)) build_assertions(void)
 {
diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 379887e149..2ced6f100d 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -487,7 +487,9 @@ static inline int write_record(struct xc_sr_context *ctx,
  *
  * On failure, the contents of the record structure are undefined.
  */
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr);
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec);
 
 /*
  * This would ideally be private in restore.c, but is needed by
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index f2234eac55..2409c8d603 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -471,7 +471,7 @@ static int send_checkpoint_dirty_pfn_list(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
 static int handle_checkpoint(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -510,7 +510,7 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
 
         for ( i = 0; i < ctx->restore.buffered_rec_num; i++ )
         {
-            rc = process_record(ctx, &ctx->restore.buffered_records[i]);
+            rc = process_buffered_record(ctx, &ctx->restore.buffered_records[i]);
             if ( rc )
                 goto err;
         }
@@ -571,10 +571,11 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
     unsigned int new_alloc_num;
+    struct xc_sr_record rec;
     struct xc_sr_record *p;
 
     if ( ctx->restore.buffered_rec_num >= ctx->restore.allocated_rec_num )
@@ -592,8 +593,13 @@ static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ctx->restore.allocated_rec_num = new_alloc_num;
     }
 
+    if ( read_record_data(ctx, ctx->fd, rhdr, &rec) )
+    {
+        return -1;
+    }
+
     memcpy(&ctx->restore.buffered_records[ctx->restore.buffered_rec_num++],
-           rec, sizeof(*rec));
+           &rec, sizeof(rec));
 
     return 0;
 }
@@ -624,7 +630,7 @@ int handle_static_data_end(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
     int rc = 0;
@@ -662,6 +668,19 @@ static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     return rc;
 }
 
+static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
+{
+    struct xc_sr_record rec;
+    int rc;
+
+    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+    if ( rc )
+        return rc;
+
+    return process_buffered_record(ctx, &rec);
+}
+
+
 static int setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -745,7 +764,7 @@ static void cleanup(struct xc_sr_context *ctx)
 static int restore(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_record rec;
+    struct xc_sr_rhdr rhdr;
     int rc, saved_rc = 0, saved_errno = 0;
 
     IPRINTF("Restoring domain");
@@ -756,7 +775,7 @@ static int restore(struct xc_sr_context *ctx)
 
     do
     {
-        rc = read_record(ctx, ctx->fd, &rec);
+        rc = read_record_header(ctx, ctx->fd, &rhdr);
         if ( rc )
         {
             if ( ctx->restore.buffer_all_records )
@@ -766,25 +785,25 @@ static int restore(struct xc_sr_context *ctx)
         }
 
         if ( ctx->restore.buffer_all_records &&
-             rec.type != REC_TYPE_END &&
-             rec.type != REC_TYPE_CHECKPOINT )
+             rhdr.type != REC_TYPE_END &&
+             rhdr.type != REC_TYPE_CHECKPOINT )
         {
-            rc = buffer_record(ctx, &rec);
+            rc = buffer_record(ctx, &rhdr);
             if ( rc )
                 goto err;
         }
         else
         {
-            rc = process_record(ctx, &rec);
+            rc = process_incoming_record_header(ctx, &rhdr);
             if ( rc == RECORD_NOT_PROCESSED )
             {
-                if ( rec.type & REC_TYPE_OPTIONAL )
+                if ( rhdr.type & REC_TYPE_OPTIONAL )
                     DPRINTF("Ignoring optional record %#x (%s)",
-                            rec.type, rec_type_to_str(rec.type));
+                            rhdr.type, rec_type_to_str(rhdr.type));
                 else
                 {
                     ERROR("Mandatory record %#x (%s) not handled",
-                          rec.type, rec_type_to_str(rec.type));
+                          rhdr.type, rec_type_to_str(rhdr.type));
                     rc = -1;
                     goto err;
                 }
@@ -795,7 +814,7 @@ static int restore(struct xc_sr_context *ctx)
                 goto err;
         }
 
-    } while ( rec.type != REC_TYPE_END );
+    } while ( rhdr.type != REC_TYPE_END );
 
  remus_failover:
     if ( ctx->stream_type == XC_STREAM_COLO )
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index fa83648f9a..e486bce96f 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -589,6 +589,7 @@ static int send_memory_live(struct xc_sr_context *ctx)
 static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    struct xc_sr_rhdr rhdr;
     struct xc_sr_record rec;
     uint64_t *pfns = NULL;
     uint64_t pfn;
@@ -597,7 +598,11 @@ static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
-    rc = read_record(ctx, ctx->save.recv_fd, &rec);
+    rc = read_record_header(ctx, ctx->save.recv_fd, &rhdr);
+    if ( rc )
+        goto err;
+
+    rc = read_record_data(ctx, ctx->save.recv_fd, &rhdr, &rec);
     if ( rc )
         goto err;
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143102.263948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHR-0002U7-DL; Wed, 16 Jun 2021 13:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143102.263948; Wed, 16 Jun 2021 13:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHR-0002Rg-47; Wed, 16 Jun 2021 13:08:29 +0000
Received: by outflank-mailman (input) for mailman id 143102;
 Wed, 16 Jun 2021 13:08:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV24-0006lZ-1j
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:36 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb714658-f09d-4355-abea-3ce246a2aade;
 Wed, 16 Jun 2021 12:51:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpZtlt
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb714658-f09d-4355-abea-3ce246a2aade
ARC-Seal: i=1; a=rsa-sha256; t=1623847895; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=qAkzC+afFgUE2AnwHG51nJsrR/LZDNYKYoEMoNM4kCUpKNtHmro7Np4B815L2h4/1g
    OeqgG/tiTpj2WQAwTgrLRBZ4aZZHL81XM36sgup0bRzZeKPPX8DUCTNMbRRpdJwDxEZV
    Nlq4v33jpLivhyAfEc7dSo+vrORLONwqD/s64oqCusBkrqhELfSZyqxm6gNjfsuRYpQA
    FDHjiup36JBAq4NKf3jC4muOjHk2qJKkyUDjwcKNc+4Lw7vnvxwhYXJI5bnEQ+NLyKaO
    0FFyjQJWuehqYfvIim9VPmTPIZF9XBngFkPKerw3Kl52eP4L/duQz/bnZy2rNRE+Ccz3
    wUAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847895;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5OMdhD/BLm6v061HzJKSHTlMezBek4cccWDJ4F+kDME=;
    b=PQ+LkxJnNdz6yZBR0CqNxUAh4SuxdOqYH1yC8YUESK0EbnzPzPjltUA7pNeWvMb8Iy
    L/dNpzTQGTCxLOHxPxNhQuDB2xCQthIFsmIa3kcfzSW45uqVUZStGTdiCRZI0QXJbmTg
    QQplmIyAnaww8FUQj8YoqgsweJ1GLHxZ3Vi+xBE47BTIVBxMXlBSvyIcwgHs7oZWyt+i
    zkO2sHjYW3tMhgZujijdPkRsYe8AIsoJbLGTbA0j5YkODZ8hU9DJu+//1YLTgssHHW13
    YH4K0+TCQJF591Vie2PXeyijbabry8ZuSmBEfEPE2JRIiMSrqaDRs60/l61Ejbv0ZeMw
    Qz/w==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847895;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=5OMdhD/BLm6v061HzJKSHTlMezBek4cccWDJ4F+kDME=;
    b=PbR5bZJ6nLqe9xCJmbIqduxFqGtJHgwz3s6xQG465ikfrxnVkdMmkbIAo8PvIuH6IL
    92LHUy9u0lYUp9FYSpTefqPdLKQa+1THa/qtlfu0PGlN95RBe3jeZZ2CCKlAInHG4bt+
    EwxrdqvSugw1uIKvr502hcjlfCsanzoDY8m/GzvsXASx+e7Y9Et0oSxZi4NY1jkmi4e2
    QS26aZNyMSpwiCnzztnDTVbtYorO3O0IaYvRJqAtum+qYXjKdmzgdDC2rEAq5YJlTB/g
    B078TaGMtHuw3Ffvb9gHtHKzFdV1aMmBxaR2nKLR3KQnWsLkc3EQFxgaPJJJ8iI/3btw
    84uw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 05/36] MAINTAINERS: add myself as saverestore maintainer
Date: Wed, 16 Jun 2021 14:50:58 +0200
Message-Id: <20210616125129.26563-6-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I touched it last.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 39750bb75d..dbb8f56ab3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -381,6 +381,12 @@ R:	Juergen Gross <jgross@suse.com>
 S:	Supported
 F:	tools/libs/
 
+LIBSAVERESTORE
+M:	Olaf Hering <olaf@aepfle.de>
+S:	Supported
+F:	tools/include/xensaverestore.h
+F:	tools/libs/saverestore/
+
 LIBXENLIGHT
 M:	Ian Jackson <iwj@xenproject.org>
 M:	Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:08:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143103.263963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHT-00034l-MJ; Wed, 16 Jun 2021 13:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143103.263963; Wed, 16 Jun 2021 13:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltVHT-00034c-H8; Wed, 16 Jun 2021 13:08:31 +0000
Received: by outflank-mailman (input) for mailman id 143103;
 Wed, 16 Jun 2021 13:08:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltV1u-0006lZ-1Q
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 12:52:26 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.80])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d87af7a-fcd5-438d-ab62-34ee038e5189;
 Wed, 16 Jun 2021 12:51:45 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GCpetm6
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 14:51:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d87af7a-fcd5-438d-ab62-34ee038e5189
ARC-Seal: i=1; a=rsa-sha256; t=1623847900; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=qqndapM6fq0DVojuSeYOeqUo244Ivu4gAa4rSpcedg8YERJY7/OLOi6XWeyI845Uxs
    zBNVMAcdUAhMd6qaxhh+NdNE9aHaxpZ/yVg0zLAzoilDVYcabOyCyJ4SY++Z8/y9UQyT
    loToNlG4zIBEAs24WvEu4K32b/t/1n+2dtiYP/uRptnyEbDXBSBqPsUTXrPnXcoOTsh+
    7o8BM2E12cQJFOKnv00bClK3RshwRq+wqM/CwNwGh//cxCHz1lUvok+q1DnrKHRFLnbh
    EYA63dif5twORXHM4K5DTPS3ruswNOpHA3eTc8ef7wDr8f+z5wzkJfRtEuOvhdQwW+A/
    42yQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847900;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=XGzxV8H0rtxBrYjtA99frZPaR/R3Z7Ebv988Uk2fYO0=;
    b=W22pArx+bx+xlzcrGw3A+cdqm96bzRnLx2oAT6Prjp36W5Ol69D4zNnOAYiBX1MMmy
    eFP93tipOYV9IjoT9DUsifnqz9j7wSRECx17AZWitUD3OMldAzlWRW/+B9ZH4DbO7P3e
    Na+g+TncdD9xx3yqFin3mtGg1eXeFAmx1Hwl3BOvgdqT7+mAvqNUjXJa7LL4eFoQNzg9
    Qeq9iqt48s1dEck9LA5s+3NBikUwuO753UDcUCaTJK0XLMRPdsmO1G+HWmMiFo0H5z3K
    m0Ot/vrEdZ4eFrDFFzmIWm8iR0WHBpNUF2FHxmoaK4Mqg156xJsQ4+6q8JT4KC6XWNfa
    LiuA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623847900;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=XGzxV8H0rtxBrYjtA99frZPaR/R3Z7Ebv988Uk2fYO0=;
    b=X8IO+QK3/HCFVRHjeCoMJqDkTfz+7zRmaDSFeq5Ll3aHQwW8Bw1dU7zILDykfzwXAP
    3FK3+F9Ba+uUAXuUc9hcDbQLz3Y5DHR2a6oXhgkN161wz3XXqU6V97ilZt1GbMG2x461
    qDPeQtbrMnXylsNaBavhbRE+tBlh5GzZA3a7L+QWL8LkrMiF/5wZZmJpK5t+jxfHvNZO
    u38EflqMHcHDHyJSJMQPlJinDy8aLCiRsPl7hZ4SXQpQEtTzqzpelTvLL91ZcISCF/ho
    anmnlZHdWGVSD6zTDMJQJkhAtWIqWv/XQuDmpIeiXBr2dpItR5DZNrHYvJV0Uxc+nf5B
    3e3w==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 15/36] tools: save: move iov array
Date: Wed, 16 Jun 2021 14:51:08 +0200
Message-Id: <20210616125129.26563-16-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hotpath by moving the iov array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/saverestore/common.h | 2 ++
 tools/libs/saverestore/save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 558b5fbf06..ae87954364 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -231,6 +231,8 @@ struct sr_save_arrays {
     xen_pfn_t types[MAX_BATCH_SIZE];
     /* write_batch: Errors from attempting to map the gfns. */
     int errors[MAX_BATCH_SIZE];
+    /* write_batch: iovec[] for writev(). */
+    struct iovec iov[MAX_BATCH_SIZE + 4];
 };
 
 struct sr_restore_arrays {
diff --git a/tools/libs/saverestore/save.c b/tools/libs/saverestore/save.c
index 9ebbf00ce7..1a5f3d29ea 100644
--- a/tools/libs/saverestore/save.c
+++ b/tools/libs/saverestore/save.c
@@ -97,7 +97,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
     uint64_t *rec_pfns = NULL;
-    struct iovec *iov = NULL; int iovcnt = 0;
+    struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
         .type = REC_TYPE_PAGE_DATA,
@@ -109,10 +109,8 @@ static int write_batch(struct xc_sr_context *ctx)
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
-    /* iovec[] for writev(). */
-    iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -266,7 +264,6 @@ static int write_batch(struct xc_sr_context *ctx)
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
-    free(iov);
     free(local_pages);
     free(guest_data);
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:39 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v20210616 28/36] tools: adjust libxl_domain_suspend to receive a struct props
Date: Wed, 16 Jun 2021 14:51:21 +0200
Message-Id: <20210616125129.26563-29-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Upcoming changes will pass more knobs down to xc_domain_save.
Adjust the libxl_domain_suspend API so that additional knobs can be added easily.

No change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
 tools/include/libxl.h                | 26 +++++++++++++++++++++++---
 tools/libs/light/libxl_domain.c      |  7 ++++---
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 ++-
 tools/xl/xl_migrate.c                |  9 ++++++---
 tools/xl/xl_saverestore.c            |  3 ++-
 5 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 29931626a2..9a4d7514ed 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1706,12 +1706,32 @@ static inline int libxl_retrieve_domain_configuration_0x041200(
     libxl_retrieve_domain_configuration_0x041200
 #endif
 
+/*
+ * LIBXL_HAVE_DOMAIN_SUSPEND_PROPS indicates that the
+ * libxl_domain_suspend_props() function takes a props struct.
+ */
+#define LIBXL_HAVE_DOMAIN_SUSPEND_PROPS 1
+
+typedef struct {
+    uint32_t flags; /* LIBXL_SUSPEND_* */
+} libxl_domain_suspend_props;
+#define LIBXL_SUSPEND_DEBUG 1
+#define LIBXL_SUSPEND_LIVE 2
+
 int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
-                         int flags, /* LIBXL_SUSPEND_* */
+                         libxl_domain_suspend_props *props,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
-#define LIBXL_SUSPEND_DEBUG 1
-#define LIBXL_SUSPEND_LIVE 2
+#if defined(LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x041600
+static inline int libxl_domain_suspend_0x041500(libxl_ctx *ctx, uint32_t domid,
+                         int fd, int flags, /* LIBXL_SUSPEND_* */
+                         const libxl_asyncop_how *ao_how)
+{
+    libxl_domain_suspend_props props = { .flags = flags, };
+    return libxl_domain_suspend(ctx, domid, fd, &props, ao_how);
+}
+#define libxl_domain_suspend libxl_domain_suspend_0x041500
+#endif
 
 /*
  * Only suspend domain, do not save its state to file, do not destroy it.
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index c00c36c928..5dbd27900f 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -505,7 +505,8 @@ static void domain_suspend_cb(libxl__egc *egc,
 
 }
 
-int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
+int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
+                         libxl_domain_suspend_props *props,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -526,8 +527,8 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
     dss->domid = domid;
     dss->fd = fd;
     dss->type = type;
-    dss->live = flags & LIBXL_SUSPEND_LIVE;
-    dss->debug = flags & LIBXL_SUSPEND_DEBUG;
+    dss->live = props->flags & LIBXL_SUSPEND_LIVE;
+    dss->debug = props->flags & LIBXL_SUSPEND_DEBUG;
     dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_NONE;
 
     rc = libxl__fd_flags_modify_save(gc, dss->fd,
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..eaf7bce35a 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -614,10 +614,11 @@ value stub_libxl_domain_suspend(value ctx, value domid, value fd, value async, v
 	int ret;
 	uint32_t c_domid = Int_val(domid);
 	int c_fd = Int_val(fd);
+    libxl_domain_suspend_props props = {};
 	libxl_asyncop_how *ao_how = aohow_val(async);
 
 	caml_enter_blocking_section();
-	ret = libxl_domain_suspend(CTX, c_domid, c_fd, 0, ao_how);
+	ret = libxl_domain_suspend(CTX, c_domid, c_fd, &props, ao_how);
 	caml_leave_blocking_section();
 
 	free(ao_how);
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index b8594f44a5..144890924f 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -186,7 +186,10 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     char *away_domname;
     char rc_buf;
     uint8_t *config_data;
-    int config_len, flags = LIBXL_SUSPEND_LIVE;
+    int config_len;
+    libxl_domain_suspend_props props = {
+        .flags = LIBXL_SUSPEND_LIVE,
+        };
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
                            &config_data, &config_len);
@@ -205,8 +208,8 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     xtl_stdiostream_adjust_flags(logger, XTL_STDIOSTREAM_HIDE_PROGRESS, 0);
 
     if (debug)
-        flags |= LIBXL_SUSPEND_DEBUG;
-    rc = libxl_domain_suspend(ctx, domid, send_fd, flags, NULL);
+        props.flags |= LIBXL_SUSPEND_DEBUG;
+    rc = libxl_domain_suspend(ctx, domid, send_fd, &props, NULL);
     if (rc) {
         fprintf(stderr, "migration sender: libxl_domain_suspend failed"
                 " (rc=%d)\n", rc);
diff --git a/tools/xl/xl_saverestore.c b/tools/xl/xl_saverestore.c
index 953d791d1a..476d4d9a6a 100644
--- a/tools/xl/xl_saverestore.c
+++ b/tools/xl/xl_saverestore.c
@@ -130,6 +130,7 @@ static int save_domain(uint32_t domid, int preserve_domid,
     int fd;
     uint8_t *config_data;
     int config_len;
+    libxl_domain_suspend_props props = {};
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
                            &config_data, &config_len);
@@ -146,7 +147,7 @@ static int save_domain(uint32_t domid, int preserve_domid,
 
     save_domain_core_writeconfig(fd, filename, config_data, config_len);
 
-    int rc = libxl_domain_suspend(ctx, domid, fd, 0, NULL);
+    int rc = libxl_domain_suspend(ctx, domid, fd, &props, NULL);
     close(fd);
 
     if (rc < 0) {


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:40 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v20210616 25/36] tools: restore: split handle_page_data
Date: Wed, 16 Jun 2021 14:51:18 +0200
Message-Id: <20210616125129.26563-26-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data that can be consumed verbatim.

Split the various steps of record processing:
- move processing to handle_buffered_page_data
- adjust xenforeignmemory_map to set errno in case of failure
- adjust verify mode to set errno in case of failure

This change is preparation for future changes to handle_page_data;
no change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |   9 +
 tools/libs/saverestore/restore.c | 343 ++++++++++++++++++++-----------
 2 files changed, 231 insertions(+), 121 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 2ced6f100d..d479f1a918 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -242,9 +242,14 @@ struct sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    void *guest_data[MAX_BATCH_SIZE];
+
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
     xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
+
+    /* Must be the last member */
+    struct xc_sr_rec_page_data_header pages;
 };
 
 struct xc_sr_context
@@ -335,7 +340,11 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            void *verify_buf;
+
             struct sr_restore_arrays *m;
+            void *guest_mapping;
+            uint32_t nr_mapped_pages;
         } restore;
     };
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 2409c8d603..877fd19a9b 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -186,123 +186,18 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     return rc;
 }
 
-/*
- * Given a list of pfns, their types, and a block of page data from the
- * stream, populate and record their types, map the relevant subset and copy
- * the data into the guest.
- */
-static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
-                             xen_pfn_t *pfns, uint32_t *types, void *page_data)
+static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = ctx->restore.m->map_errs;
-    int rc;
-    void *mapping = NULL, *guest_page = NULL;
-    unsigned int i, /* i indexes the pfns from the record. */
-        j,          /* j indexes the subset of pfns we decide to map. */
-        nr_pages = 0;
-
-    rc = populate_pfns(ctx, count, pfns, types);
-    if ( rc )
-    {
-        ERROR("Failed to populate pfns for batch of %u pages", count);
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
-
-        if ( page_type_has_stream_data(types[i]) == true )
-            mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-    }
-
-    /* Nothing to do? */
-    if ( nr_pages == 0 )
-        goto done;
-
-    mapping = guest_page = xenforeignmemory_map(
-        xch->fmem, ctx->domid, PROT_READ | PROT_WRITE,
-        nr_pages, mfns, map_errs);
-    if ( !mapping )
-    {
-        rc = -1;
-        PERROR("Unable to map %u mfns for %u pages of data",
-               nr_pages, count);
-        goto err;
-    }
-
-    for ( i = 0, j = 0; i < count; ++i )
-    {
-        if ( page_type_has_stream_data(types[i]) == false )
-            continue;
-
-        if ( map_errs[j] )
-        {
-            rc = -1;
-            ERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed with %d",
-                  pfns[i], mfns[j], types[i], map_errs[j]);
-            goto err;
-        }
-
-        /* Undo page normalisation done by the saver. */
-        rc = ctx->restore.ops.localise_page(ctx, types[i], page_data);
-        if ( rc )
-        {
-            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
-                  pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-            goto err;
-        }
-
-        if ( ctx->restore.verify )
-        {
-            /* Verify mode - compare incoming data to what we already have. */
-            if ( memcmp(guest_page, page_data, PAGE_SIZE) )
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-        }
-        else
-        {
-            /* Regular mode - copy incoming data into place. */
-            memcpy(guest_page, page_data, PAGE_SIZE);
-        }
-
-        ++j;
-        guest_page += PAGE_SIZE;
-        page_data += PAGE_SIZE;
-    }
-
- done:
-    rc = 0;
-
- err:
-    if ( mapping )
-        xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
-
-    return rc;
-}
+    int rc = 0;
 
-/*
- * Validate a PAGE_DATA record from the stream, and pass the results to
- * process_page_data() to actually perform the legwork.
- */
-static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
-{
+#if defined(__i386__) || defined(__x86_64__)
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rec_page_data_header *pages = rec->data;
-    unsigned int i, pages_of_data = 0;
-    int rc = -1;
-
-    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = ctx->restore.m->types, type;
-
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
      * bodge, but it is less bad than duplicating handle_page_data() between
      * different architectures.
      */
-#if defined(__i386__) || defined(__x86_64__)
+
     /* v2 compat.  Infer the position of STATIC_DATA_END. */
     if ( ctx->restore.format_version < 3 && !ctx->restore.seen_static_data_end )
     {
@@ -320,12 +215,26 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ERROR("No STATIC_DATA_END seen");
         goto err;
     }
+
+    rc = 0;
+err:
 #endif
 
-    if ( rec->length < sizeof(*pages) )
+    return rc;
+}
+
+static bool verify_rec_page_hdr(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    bool ret = false;
+
+    errno = EINVAL;
+
+    if ( rec_length < sizeof(*pages) )
     {
         ERROR("PAGE_DATA record truncated: length %u, min %zu",
-              rec->length, sizeof(*pages));
+              rec_length, sizeof(*pages));
         goto err;
     }
 
@@ -335,13 +244,35 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
+    if ( pages->count > MAX_BATCH_SIZE )
+    {
+        ERROR("pfn count %u in PAGE_DATA record too large", pages->count);
+        errno = E2BIG;
+        goto err;
+    }
+
+    if ( rec_length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
     {
         ERROR("PAGE_DATA record (length %u) too short to contain %u"
-              " pfns worth of information", rec->length, pages->count);
+              " pfns worth of information", rec_length, pages->count);
         goto err;
     }
 
+    ret = true;
+
+err:
+    return ret;
+}
+
+static bool verify_rec_page_pfns(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    uint32_t i, pages_of_data = 0;
+    xen_pfn_t pfn;
+    uint32_t type;
+    bool ret = false;
+
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -364,23 +295,183 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
              * have a page worth of data in the record. */
             pages_of_data++;
 
-        pfns[i] = pfn;
-        types[i] = type;
+        ctx->restore.m->pfns[i] = pfn;
+        ctx->restore.m->types[i] = type;
     }
 
-    if ( rec->length != (sizeof(*pages) +
+    if ( rec_length != (sizeof(*pages) +
                          (sizeof(uint64_t) * pages->count) +
                          (PAGE_SIZE * pages_of_data)) )
     {
         ERROR("PAGE_DATA record wrong size: length %u, expected "
-              "%zu + %zu + %lu", rec->length, sizeof(*pages),
+              "%zu + %zu + %lu", rec_length, sizeof(*pages),
               (sizeof(uint64_t) * pages->count), (PAGE_SIZE * pages_of_data));
         goto err;
     }
 
-    rc = process_page_data(ctx, pages->count, pfns, types,
-                           &pages->pfn[pages->count]);
+    ret = true;
+
+err:
+    return ret;
+}
+
+/*
+ * Populate pfns, if required
+ * Fill m->guest_data with either mapped address or NULL
+ * The caller must unmap guest_mapping
+ */
+static int map_guest_pages(struct xc_sr_context *ctx,
+                           struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    uint32_t i, p;
+    int rc;
+
+    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    if ( rc )
+    {
+        ERROR("Failed to populate pfns for batch of %u pages", pages->count);
+        goto err;
+    }
+
+    ctx->restore.nr_mapped_pages = 0;
+
+    for ( i = 0; i < pages->count; i++ )
+    {
+        ctx->restore.ops.set_page_type(ctx, m->pfns[i], m->types[i]);
+
+        if ( page_type_has_stream_data(m->types[i]) == false )
+        {
+            m->guest_data[i] = NULL;
+            continue;
+        }
+
+        m->mfns[ctx->restore.nr_mapped_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, m->pfns[i]);
+    }
+
+    /* Nothing to do? */
+    if ( ctx->restore.nr_mapped_pages == 0 )
+        goto done;
+
+    ctx->restore.guest_mapping = xenforeignmemory_map(xch->fmem, ctx->domid,
+            PROT_READ | PROT_WRITE, ctx->restore.nr_mapped_pages,
+            m->mfns, m->map_errs);
+    if ( !ctx->restore.guest_mapping )
+    {
+        rc = -1;
+        PERROR("Unable to map %u mfns for %u pages of data",
+               ctx->restore.nr_mapped_pages, pages->count);
+        goto err;
+    }
+
+    /* Verify mapping, and assign address to pfn data */
+    for ( i = 0, p = 0; i < pages->count; i++ )
+    {
+        if ( page_type_has_stream_data(m->types[i]) == false )
+            continue;
+
+        if ( m->map_errs[p] == 0 )
+        {
+            m->guest_data[i] = ctx->restore.guest_mapping + (p * PAGE_SIZE);
+            p++;
+            continue;
+        }
+
+        errno = m->map_errs[p];
+        rc = -1;
+        PERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed",
+              m->pfns[i], m->mfns[p], m->types[i]);
+        goto err;
+    }
+
+done:
+    rc = 0;
+
+err:
+    return rc;
+}
+
+/*
+ * Handle PAGE_DATA record from an existing buffer
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_buffered_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_rec_page_data_header *pages = rec->data;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    void *p;
+    uint32_t i;
+    int rc = -1, idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    if ( verify_rec_page_hdr(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the pfn numbers */
+    if ( verify_rec_page_pfns(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Map the target pfn */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    for ( i = 0, idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        p = &pages->pfn[pages->count] + (idx * PAGE_SIZE);
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], p);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], p, PAGE_SIZE) )
+            {
+                errno = EIO;
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+                goto err;
+            }
+        }
+        else
+        {
+            memcpy(m->guest_data[i], p, PAGE_SIZE);
+        }
+
+        idx++;
+    }
+
+    rc = 0;
+
  err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
     return rc;
 }
 
@@ -641,12 +732,21 @@ static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_recor
         break;
 
     case REC_TYPE_PAGE_DATA:
-        rc = handle_page_data(ctx, rec);
+        rc = handle_buffered_page_data(ctx, rec);
         break;
 
     case REC_TYPE_VERIFY:
         DPRINTF("Verify mode enabled");
         ctx->restore.verify = true;
+        if ( !ctx->restore.verify_buf )
+        {
+            ctx->restore.verify_buf = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+            if ( !ctx->restore.verify_buf )
+            {
+                rc = -1;
+                PERROR("Unable to allocate verify_buf");
+            }
+        }
         break;
 
     case REC_TYPE_CHECKPOINT:
@@ -725,7 +825,8 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
-    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m) +
+            (sizeof(*ctx->restore.m->pages.pfn) * MAX_BATCH_SIZE));
     if ( !ctx->restore.m ) {
         ERROR("Unable to allocate memory for arrays");
         rc = -1;


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:08:41 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210616 27/36] tools: recognize LIBXL_API_VERSION for 4.16
Date: Wed, 16 Jun 2021 14:51:20 +0200
Message-Id: <20210616125129.26563-28-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is required by upcoming API changes.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/include/libxl.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ae7fe27c1f..29931626a2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -729,7 +729,8 @@ typedef struct libxl__ctx libxl_ctx;
 #if LIBXL_API_VERSION != 0x040200 && LIBXL_API_VERSION != 0x040300 && \
     LIBXL_API_VERSION != 0x040400 && LIBXL_API_VERSION != 0x040500 && \
     LIBXL_API_VERSION != 0x040700 && LIBXL_API_VERSION != 0x040800 && \
-    LIBXL_API_VERSION != 0x041300 && LIBXL_API_VERSION != 0x041400
+    LIBXL_API_VERSION != 0x041300 && LIBXL_API_VERSION != 0x041400 && \
+    LIBXL_API_VERSION != 0x041600
 #error Unknown LIBXL_API_VERSION
 #endif
 #endif


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:14:55 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools: ipxe: update for fixing build with GCC11
Date: Wed, 16 Jun 2021 15:14:35 +0200
Message-Id: <20210616131435.27770-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use a snapshot which includes commit
f3f568e382a5f19824b3bfc6081cde39eee661e8 ("[crypto] Add
memory output constraints for big-integer inline assembly"),
which fixes build with gcc11.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/firmware/etherboot/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

v02:
- use parent of bf4ccd4265ac to allow build with CentOS 7

diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
index ed9e11305f..4bc3633ba3 100644
--- a/tools/firmware/etherboot/Makefile
+++ b/tools/firmware/etherboot/Makefile
@@ -10,7 +10,8 @@ else
 IPXE_GIT_URL ?= git://git.ipxe.org/ipxe.git
 endif
 
-IPXE_GIT_TAG := 988d2c13cdf0f0b4140685af35ced70ac5b3283c
+# put an updated tar.gz on xenbits after changes to this variable
+IPXE_GIT_TAG := 3c040ad387099483102708bb1839110bc788cefb
 
 IPXE_TARBALL_URL ?= $(XEN_EXTFILES_URL)/ipxe-git-$(IPXE_GIT_TAG).tar.gz
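
For illustration, the tarball name this change makes the build fetch can be reproduced as below; the XEN_EXTFILES_URL value is an assumption (the real default lives in xen.git's Config.mk):

```shell
# Reproduce the IPXE_TARBALL_URL expansion from the Makefile above.
# XEN_EXTFILES_URL here is a stand-in for the value from Config.mk.
XEN_EXTFILES_URL=https://xenbits.xen.org/xen-extfiles
IPXE_GIT_TAG=3c040ad387099483102708bb1839110bc788cefb
echo "$XEN_EXTFILES_URL/ipxe-git-$IPXE_GIT_TAG.tar.gz"
```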
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:22:40 2021
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <5b0de61c-0276-cf06-4eeb-cda9ca990940@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 0/5] tools/tests: More cleanup for automation improvements
Message-ID: <5f786fa7-80eb-ea66-3fbc-2b556c1d3a73@citrix.com>
Date: Wed, 16 Jun 2021 14:22:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <5b0de61c-0276-cf06-4eeb-cda9ca990940@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 16/06/2021 07:38, Jan Beulich wrote:
> On 15.06.2021 18:19, Andrew Cooper wrote:
>> Jan/Roger: x86_emulator and vpci use $(HOSTCC) not $(CC).  While they are
>> unit tests, we still potentially want to run them in dom0 rather than the
>> build environment - particularly for x86_emulator which is heavily CPUID
>> based and wants to run on a wide set of hardware.  Any issues moving them
>> off $(HOSTCC)?
> Well, yes, I'm afraid: If anything, we may need to build two binaries,
> or build the one binary two different ways: The "run" (and "run32" for
> the emulator harness) target wants a binary built with HOSTCC. The
> install target (which prior to your series does nothing) indeed wants
> building with CC. So maybe we want something like
>
> install: HOSTCC:=$(CC)
>
> plus suitable detection of whether the opposite set of objects are
> presently in the build area, requiring a full rebuild? (Of course this
> will work only as long as HOSTCC isn't used for any build time helper
> binaries. See "x86emul: test AMX insns" for when this starts not to be
> the case anymore for the emulator harness. So we'd need yet another
> variable to express this detail.)

Having slept on the problem overnight, I'm going to argue that HOSTCC is
conceptually wrong to use here in the first place.

In an arm64 environment, cross-compiling x86_64, this will explode
everywhere, and the fault is with using HOSTCC rather than CC.

HOSTCC is specifically for compiling utilities executed as part of the
build.  Tests, and particularly arch-specific ones like x86_emulate, are
not in this category.  Whether you happen to be able to run
test_x86_emulator in the build environment is a property of whether
you're cross-compiling.

For non-cross-compiled builds, HOSTCC and CC are largely
interchangeable, and won't impact the ability to run the binary in the
build environment.

~Andrew
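
The target-specific override Jan suggests in the quoted text can be sketched as below; the file names and the $(INSTALL_PROG)/$(bindir) variables are illustrative, not taken from the real x86_emulator Makefile:

```make
# Hypothetical fragment, not the real harness Makefile: build the test
# binary with HOSTCC for in-build-tree runs, but with the target
# compiler for "install".
HOSTCC ?= cc

test-prog: test-prog.c
	$(HOSTCC) -o $@ $<

run: test-prog
	./test-prog

# Target-specific variable: anything (re)built for "install" sees
# HOSTCC redefined to $(CC).  As Jan notes, this alone does not force
# a rebuild if a HOSTCC-built binary is already present in the tree.
install: HOSTCC := $(CC)
install: test-prog
	$(INSTALL_PROG) test-prog $(DESTDIR)$(bindir)
```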



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:34:04 2021
To: Olaf Hering <olaf@aepfle.de>, Michael Brown <mcb30@ipxe.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210615212613.6270-1-olaf@aepfle.de>
 <b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
 <20210616145846.305d3ce1.olaf@aepfle.de>
From: "Bernhard M. Wiedemann" <bwiedemann@suse.de>
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC11
Message-ID: <fe5ac73a-6026-6db6-6756-911f803adc5f@suse.de>
Date: Wed, 16 Jun 2021 15:33:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616145846.305d3ce1.olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

So this means CentOS 7 binutils has 9cb80f72d8b from 2011-12-21, but not
these two:

$ git blame binutils/objcopy.c | grep enable-determini
955d0b3bd75 (Roland McGrath       2013-01-07 17:40:59 +0000  549)   -D --enable-deterministic-archives\n\
2e30cb575a1 (Cary Coutant         2012-04-25 17:50:14 +0000  555)   -D --enable-deterministic-archives\n\

One way out could be to call: objcopy -D $PARAMS || objcopy $PARAMS




On 16/06/2021 14.58, Olaf Hering wrote:
> Please revert bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e in ipxe.git, CentOS 7 apparently fails to handle '-D'.
> 
> It worked in my testing with SLE12SP5 and SLE15SP3 as a base system.
> 
> See below.
> 
> 
> I guess for xen.git, updating to just bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e^ will be good enough.
> 
> Olaf
> 
> Am Wed, 16 Jun 2021 13:33:52 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
> 
>> On 15/06/2021 22:26, Olaf Hering wrote:
>>> Use a snapshot which includes commit
>>> f3f568e382a5f19824b3bfc6081cde39eee661e8 ("[crypto] Add
>>> memory output constraints for big-integer inline assembly"),
>>> which fixes build with gcc11.
>>>
>>> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>>> ---
>>>  tools/firmware/etherboot/Makefile | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
>>> index ed9e11305f..23b3f6ca9d 100644
>>> --- a/tools/firmware/etherboot/Makefile
>>> +++ b/tools/firmware/etherboot/Makefile
>>> @@ -10,7 +10,8 @@ else
>>>  IPXE_GIT_URL ?= git://git.ipxe.org/ipxe.git
>>>  endif
>>>  
>>> -IPXE_GIT_TAG := 988d2c13cdf0f0b4140685af35ced70ac5b3283c
>>> +# put an updated tar.gz on xenbits after changes to this variable
>>> +IPXE_GIT_TAG := bf4ccd4265ac614fbfa38bf168b6eeaf4c17d51e
>>
>> CI says no.
>>
>> Gitlab CI is currently fairly red because of a clang build fix which
>> hasn't made its way into master yet, but this job:
>>
>>   https://gitlab.com/xen-project/patchew/xen/-/jobs/1349871230
>>
>> shows a real failure on CentOS 7.
>>
>> ...
>>   [VERSION] bin/version.rtl8139.rom.o
>>   [AR] bin/blib.a
>> ar: creating bin/blib.a
>> objcopy: invalid option -- 'D'
>> Usage: objcopy [option(s)] in-file [out-file]
>> ...
>>
>> ~Andrew
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:42:22 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162850-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162850: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 13:42:15 +0000

flight 162850 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162850/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                1ea06abceec61b6f3ab33dadb0510b6e09fb61e2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  300 days
Failing since        152659  2020-08-21 14:07:39 Z  298 days  552 attempts
Testing same since   162818  2021-06-14 22:38:18 Z    1 days    3 attempts

------------------------------------------------------------
532 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 171564 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 13:59:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 13:59:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143244.264058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltW4h-00060P-9z; Wed, 16 Jun 2021 13:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143244.264058; Wed, 16 Jun 2021 13:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltW4h-00060I-6y; Wed, 16 Jun 2021 13:59:23 +0000
Received: by outflank-mailman (input) for mailman id 143244;
 Wed, 16 Jun 2021 13:59:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltW4g-00060B-BL
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 13:59:22 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b34d71d-1c3f-428f-a4c3-1dde60135ef6;
 Wed, 16 Jun 2021 13:59:21 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2052.outbound.protection.outlook.com [104.47.8.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-25-kNc61U6fMT2YWwDBVlcP0w-2; Wed, 16 Jun 2021 15:59:19 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3776.eurprd04.prod.outlook.com (2603:10a6:803:18::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16; Wed, 16 Jun
 2021 13:59:15 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 13:59:15 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0153.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1b::21) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 13:59:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b34d71d-1c3f-428f-a4c3-1dde60135ef6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623851960;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XUfjheHrqtrxHC7G41jU7BZSbA3Dgop0b7LfYa4UFUY=;
	b=gP/nOj/PrbQc1IfyvVcumJsFNNsZElrusGkrBA81x1f4Y/vH7l7vXrLHvnW3TrMgAUOdeT
	zvuS9tXIk+2OuKv0KXe1O2oUswDypSVS/9Q7h9YDjPekFMgFxuuu2wGGrs0+Gz50cHAKGn
	Iw2SYYtfqXs1kPygDCQuDiHuES1DIJw=
X-MC-Unique: kNc61U6fMT2YWwDBVlcP0w-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B2aMciyO58vpLw/AhOczfLoXNDHRDqe8y6+S2jeB/1QNLq5azpipVAA9ARcFsh2hsMlxDLnV0XtcIiphrjTx0kVkW7StnZhITpJUgdSuiXweVpBT3d2NJieJqITael4dOEuqir/Lu584zjvvhRlHChd1yZrbx/nmKup5mKQnw6MZAMEyRJRjhHiTqnlBjDIt33XLGiE6dJG39gMdrfHZXLQPZlhFLxbnEU7Wp8WaHMhxb0RSlP9Pqt/l3CxXiIxwEC1cGZk6YFfpcsoFAmFeciCrNV10JNZmXiu2OkjolE1W2SLh7WU8pYwjsDSz7O12ZTMl1SgOozyz9wx+vf3CPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wJuPjcrNXkrHMc34KKYfciYUx2XknK5hLSr7ZRq4h4Y=;
 b=GZV69zm4jwMIA3bJ/B/BDrK1+M0OYqao4R6gJnKWCaYMKsCRwuzlmDV37jmVh+/MhXlzlfxJouclbqNNWq5NWfrmIdGEp623wp0yPwIqymrIgGJzoXtjOCN+jrbTnG7gVvlA2o+/bgwyJUUkk3u88JOcbgN5t1/uKZ4J1R8606ZY/lbiRVc71KdRh7Q9Og/Gq659n+XJXAKdiEQT0dPK467zM2h58GRGu+Xxn7uD6vsdqvSnp2Yzl/nSgMl9gyJuPgIMbhihB3gWVDgVrjcJ+XDAziF+dhdUw/iStXVS/bYM1lQatAZ7Xv6PUPGdWKBfBobdqz1dUYXVlQIbh7ELhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/5] tools/tests: Drop run runes
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <20210615161905.9831-3-andrew.cooper3@citrix.com>
 <c59dae19-2a88-9449-468a-ab22d38fd0e7@suse.com>
 <4a57467a-36ea-bd5b-7e6a-ed0dfaa33314@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8fbb2de3-b2be-e767-346b-29df378eb8a3@suse.com>
Date: Wed, 16 Jun 2021 15:59:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <4a57467a-36ea-bd5b-7e6a-ed0dfaa33314@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0153.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1b::21) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d6ff09d4-ac48-4788-9f98-08d930ceed7b
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3776:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB377688BDE7D1F53233D84E66B30F9@VI1PR0402MB3776.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6ff09d4-ac48-4788-9f98-08d930ceed7b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 13:59:15.6903
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mc4TAJJXcdNesg4N70AIfAXBU7n2OWz7Q1mCrS1Jr+yJry9HcfQakoLRRmzptvYNgAaaX22jWG/yhTgoNmc/vA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3776

On 16.06.2021 15:08, Andrew Cooper wrote:
> On 16/06/2021 07:44, Jan Beulich wrote:
>> On 15.06.2021 18:19, Andrew Cooper wrote:
>>> --- a/tools/tests/x86_emulator/Makefile
>>> +++ b/tools/tests/x86_emulator/Makefile
>>> @@ -7,10 +7,6 @@ TARGET := test_x86_emulator
>>>  .PHONY: all
>>>  all:
>>>
>>> -.PHONY: run
>>> -run: $(TARGET)
>>> -	./$(TARGET)
>>> -
>>>  # Add libx86 to the build
>>>  vpath %.c $(XEN_ROOT)/xen/lib/x86
>>>
>> This is not only incomplete, but actively (specifically here for my
>> own frequent use of it, but other tests I do run occasionally as
>> well, and then also that same way) harming things as long as you
>> don't introduce an alternative way. Note the top-level Makefile
>> making use of these rules, and note also the run32 companion here.
>
> Honestly, this makefile is borderline impossible to follow.  I failed to
> make the install runes work, which is part of why I deferred the
> unit-like tests for now.

Well, yes, it's not pretty, but I lack ideas for a clear improvement.

> But I'm taking this as a strong preference to keep the run target?

Yes, and not just here. Any test that can be run directly in the build
tree would imo better have such a target, so that re-building and
running it can be done with a single make invocation. Having to
re-build the entire tools/ tree in order to then (re-)run one of these
tests adds, in particular, way too much latency.
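
For illustration, the kind of self-rebuilding run target under
discussion looks like this (a minimal sketch: the TARGET name and its
build rule are placeholders, not the actual
tools/tests/x86_emulator/Makefile contents; only the run rune itself
matches the patch quoted above):

```make
# Sketch only: TARGET and its build rule are illustrative placeholders.
TARGET := test_prog

$(TARGET): test_prog.c
	$(CC) -o $@ $<

# "make run" rebuilds the binary if its sources changed, then executes it,
# so one make invocation covers the edit/rebuild/rerun cycle without
# rebuilding the rest of tools/.
.PHONY: run
run: $(TARGET)
	./$(TARGET)
```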

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:13:20 2021
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma
 ops
To: Roman Skakun <rm.skakun@gmail.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org, Christoph Hellwig <hch@lst.de>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
        Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
        Roman Skakun <roman_skakun@epam.com>,
        Andrii Anisov <andrii_anisov@epam.com>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <20210616114205.38902-1-roman_skakun@epam.com>
 <20210616114205.38902-2-roman_skakun@epam.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com>
Date: Wed, 16 Jun 2021 10:12:55 -0400
In-Reply-To: <20210616114205.38902-2-roman_skakun@epam.com>


On 6/16/21 7:42 AM, Roman Skakun wrote:
> This commit fixes an incorrect conversion from cpu_addr to a page
> address in cases where we get a virtual address that was allocated
> through xen_swiotlb_alloc_coherent() and can be mapped in the vmalloc
> range. As a result, virt_to_page() cannot convert this address
> properly and returns an incorrect page address.
>
> We need to detect such cases and obtain the page address using
> vmalloc_to_page() instead.
>
> The reference code for mmap() and get_sgtable() was copied from
> kernel/dma/ops_helpers.c and modified to provide the additional
> detection described above.
>
> In order to simplify the code, a new dma_cpu_addr_to_page() helper
> was added.
>
> Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> ---
>  drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++--------
>  1 file changed, 34 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 90bc5fc321bc..9331a8500547 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  	return 0;
>  }
>  
> +static struct page *cpu_addr_to_page(void *cpu_addr)
> +{
> +	if (is_vmalloc_addr(cpu_addr))
> +		return vmalloc_to_page(cpu_addr);
> +	else
> +		return virt_to_page(cpu_addr);
> +}
> +
>  static int
>  xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
>  {
> @@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>  	int order = get_order(size);
>  	phys_addr_t phys;
>  	u64 dma_mask = DMA_BIT_MASK(32);
> -	struct page *page;
> +	struct page *page = cpu_addr_to_page(vaddr);
>  
>  	if (hwdev && hwdev->coherent_dma_mask)
>  		dma_mask = hwdev->coherent_dma_mask;
> @@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>  	/* Convert the size to actually allocated. */
>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>  
> -	if (is_vmalloc_addr(vaddr))
> -		page = vmalloc_to_page(vaddr);
> -	else
> -		page = virt_to_page(vaddr);
> -
>  	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>  		     range_straddles_page_boundary(phys, size)) &&
>  	    TestClearPageXenRemapped(page))
> @@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
>  		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		     unsigned long attrs)
>  {
> -	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
> +	unsigned long user_count = vma_pages(vma);
> +	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +	unsigned long off = vma->vm_pgoff;
> +	struct page *page = cpu_addr_to_page(cpu_addr);
> +	int ret;
> +
> +	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
> +
> +	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
> +		return ret;
> +
> +	if (off >= count || user_count > count - off)
> +		return -ENXIO;
> +
> +	return remap_pfn_range(vma, vma->vm_start,
> +			page_to_pfn(page) + vma->vm_pgoff,
> +			user_count << PAGE_SHIFT, vma->vm_page_prot);
>  }


I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there.


Christoph, would that work?  I.e. something like


diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..43411c2fa47b 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -12,7 +12,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
                 void *cpu_addr, dma_addr_t dma_addr, size_t size,
                 unsigned long attrs)
 {
-       struct page *page = virt_to_page(cpu_addr);
+       struct page *page = cpu_addr_to_page(cpu_addr);
        int ret;
 
        ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -43,7 +43,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
                return -ENXIO;
 
        return remap_pfn_range(vma, vma->vm_start,
-                       page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+                       page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
                        user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
        return -ENXIO;


-boris


>  
>  /*
> @@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
>  			void *cpu_addr, dma_addr_t handle, size_t size,
>  			unsigned long attrs)
>  {
> -	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs);
> +	struct page *page = cpu_addr_to_page(cpu_addr);
> +	int ret;
> +
> +	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> +	if (!ret)
> +		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
> +
> +	return ret;
>  }
>  
>  const struct dma_map_ops xen_swiotlb_dma_ops = {


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:14:27 2021
Subject: Re: [PATCH 0/5] tools/tests: More cleanup for automation improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210615161905.9831-1-andrew.cooper3@citrix.com>
 <5b0de61c-0276-cf06-4eeb-cda9ca990940@suse.com>
 <5f786fa7-80eb-ea66-3fbc-2b556c1d3a73@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5f44c856-635c-cf46-2504-75451b1abac4@suse.com>
Date: Wed, 16 Jun 2021 16:14:15 +0200
In-Reply-To: <5f786fa7-80eb-ea66-3fbc-2b556c1d3a73@citrix.com>

On 16.06.2021 15:22, Andrew Cooper wrote:
> On 16/06/2021 07:38, Jan Beulich wrote:
>> On 15.06.2021 18:19, Andrew Cooper wrote:
>>> Jan/Roger: x86_emulator and vpci use $(HOSTCC) not $(CC).  While they are
>>> unit tests, we still potentially want to run them in dom0 rather than the
>>> build environment - particularly for x86_emulator which is heavily CPUID
>>> based and wants to run on a wide set of hardware.  Any issues moving them
>>> off $(HOSTCC)?
>> Well, yes, I'm afraid: If anything, we may need to build two binaries,
>> or build the one binary two different ways: The "run" (and "run32" for
>> the emulator harness) target wants a binary built with HOSTCC. The
>> install target (which prior to your series does nothing) indeed wants
>> building with CC. So maybe we want something like
>>
>> install: HOSTCC:=$(CC)
>>
>> plus suitable detection of whether the opposite set of objects are
>> presently in the build area, requiring a full rebuild? (Of course this
>> will work only as long as HOSTCC isn't used for any build time helper
>> binaries. See "x86emul: test AMX insns" for when this starts not to be
>> the case anymore for the emulator harness. So we'd need yet another
>> variable to express this detail.)
>
> Having slept on the problem overnight, I'm going to argue that HOSTCC is
> conceptually wrong to use here in the first place.
>
> In an arm64 environment, cross-compiling x86_64, this will explode
> everywhere, and the fault is with using HOSTCC rather than CC.

In principle, if there wasn't the massive amount of inline assembly,
and if the emulator wasn't just re-executing the instructions it is
asked to emulate, building all of this on Arm ought to be possible.
But with the code we have this simply makes no sense.

> HOSTCC is specifically for compiling utilities executed as part of the
> build.  Tests, and particularly arch-specific ones like x86_emulate, are
> not in this category.

Hmm, right now they definitely are. Running them directly (which is
their only purpose right now, with the install targets doing nothing)
from the build tree puts them into this category. But they aren't
anymore as soon as you want to install them. Hence the need to have
two modes here.
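Jan's earlier `install: HOSTCC:=$(CC)` suggestion relies on GNU make's
target-specific variables: the override applies to the `install` goal and
everything built on its behalf. A sketch of the two modes (rule and
variable names are illustrative, not the actual tree's Makefile):

```make
# Build-tree mode: "run" uses the build machine's compiler.
$(TARGET): test.o
	$(HOSTCC) -o $@ $^

run: $(TARGET)
	./$(TARGET)

# Install mode: rebind HOSTCC to the target compiler, but only for the
# "install" goal and its prerequisites.  As noted above, objects already
# built the other way would still need to be detected and rebuilt.
install: HOSTCC := $(CC)
install: $(TARGET)
	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(bindir)
```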

> Whether you happen to be able to run
> test_x86_emulator in the build environment is a property of whether
> you're cross-compiling.

In a way, yes. I'd consider the run32 target to also be cross-like,
yet that binary can then still be run from the build tree (if the
distro supports 32-bit binaries).

> For a non-cross-compiled builds, HOSTCC and CC are largely
> interchangeable, and won't impact the ability to run the binary in the
> build environment.

Not exactly, I think. If I override CC but not HOSTCC (which I do
normally), I still don't need to worry about that other CC finding
all the bits and pieces it needs for building a host binary. For the
C compiler this may not mean much, but the C++ compiler (if we used
it and hence would need to treat it similarly) wants e.g. its own
copies of the headers, which may not be readily available. I will
admit that my environment may be pretty non-standard, as I run non-
default compilers (and binutils) also directly from their build
trees. But things have been working fine this way for all sorts of
projects, so I expect the test harness here to not break in this
regard.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:21:54 2021
Date: Wed, 16 Jun 2021 16:21:48 +0200
From: Christoph Hellwig <hch@lst.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Roman Skakun <rm.skakun@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable
 dma ops
Message-ID: <20210616142148.GA764@lst.de>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com> <20210616114205.38902-1-roman_skakun@epam.com> <20210616114205.38902-2-roman_skakun@epam.com> <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 10:12:55AM -0400, Boris Ostrovsky wrote:
> I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there.
> 
> 
> Christoph, would that work? I.e. something like

You should not duplicate the code at all, and just make the common
helpers work with vmalloc addresses.
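
As a rough illustration of that suggestion (a sketch only, not the actual
patch: the wrapper name dma_common_vaddr_to_page is hypothetical, though
is_vmalloc_addr(), vmalloc_to_page() and virt_to_page() are existing Linux
kernel helpers), the common code could resolve the backing page like this:

```c
/*
 * Sketch: a vmalloc-aware page lookup for the common DMA helpers, so
 * swiotlb-xen would not need its own mmap/get_sgtable copies. Kernel
 * code; not compilable outside the kernel tree.
 */
static struct page *dma_common_vaddr_to_page(void *cpu_addr)
{
	/* vmalloc memory is not physically contiguous; translate per page. */
	if (is_vmalloc_addr(cpu_addr))
		return vmalloc_to_page(cpu_addr);
	return virt_to_page(cpu_addr);
}
```

dma_common_mmap() and dma_common_get_sgtable() could then go through such a
wrapper instead of assuming a direct-mapped address.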


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:21:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143265.264090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWQK-00020g-Pb; Wed, 16 Jun 2021 14:21:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143265.264090; Wed, 16 Jun 2021 14:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWQK-00020Z-Mh; Wed, 16 Jun 2021 14:21:44 +0000
Received: by outflank-mailman (input) for mailman id 143265;
 Wed, 16 Jun 2021 14:21:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QzdS=LK=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ltWQJ-00020T-8J
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:21:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9b3c938-3dfe-4a50-a922-7f1ededa09f7;
 Wed, 16 Jun 2021 14:21:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9b3c938-3dfe-4a50-a922-7f1ededa09f7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623853301;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=S6oR3vmD89E5yDMHzIIjiuiUjUWHabH851embpyzv9k=;
  b=Q6KLxEa0kMs/P7PsEj5Jqy3S/X2St79Q8z4GfEskBRqFycKUI/NBej9a
   jJ3ByH8q5TJ2h8QiDV0/4TXoG43+v3WUMhfY+K+q7fuaaZMK/0UZwsto7
   NxPQKV/sWb6x4SfRZ0YqVUS3m7G62OGRN9T2IEXV/2CrGj7gwMvBZerq8
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: J8SVoTDNiCQt5iq6TEd7SB3RNRjD6UJhe9Xwk9NOxhz5ymEPMBVkFnrGglOzzSNT6SDc2dU9BZ
 y+ODT/w+msq0VVmKVUX0X6lok/5itG4br+ZGd3gR0pNe4fisOPGd+vr5FS9vmaL40MTMlw+Crk
 10hpkdTLG+JK2HvdVL0aVzobbP2/r0RWt+BDLgAQxKw4wWlZAL0y5o75tzvXM9FkVv3Q+bWfBI
 Fo70DdYsRoym2q2inKSwCg32ZSuUUFAoHp14HfYnMigx/6QEh6MojC2KQA+LUavmSPvsd4Kgm1
 5xw=
X-SBRS: 5.1
X-MesageID: 46270533
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:f9QSPKtXIsM4kTw002QKuXdC7skDdNV00zEX/kB9WHVpmszxra
 GTddAgpHjJYVcqKRUdcL+7VJVoLUmyyXcx2/h2AV7AZniChILLFvAA0WKK+VSJcEeSygce79
 YDT0EXMqyIMbEQt6bHCWeDfeod/A==
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46270533"
Date: Wed, 16 Jun 2021 15:21:37 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>,
	osstest service owner <osstest-admin@xenproject.org>
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
Message-ID: <YMoI8YZfOvogwOMY@perard>
References: <osstest-162845-mainreport@xen.org>
 <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com>

On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
> On 16.06.2021 08:54, osstest service owner wrote:
> > flight 162845 xen-unstable real [real]
> > flight 162853 xen-unstable real-retest [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/162845/
> > http://logs.test-lab.xenproject.org/osstest/logs/162853/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
> >  test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
> 
> There looks to still be an issue with the ovmf version used. I'm
> puzzled to find this flight reporting
> 
> built_revision_ovmf	e1999b264f1f9d7230edf2448f757c73da567832
> 
> which isn't what the tree recently was rewound to, but about two
> dozen commits older. I hope one of you has a clue at what is going
> on here.

So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git

master is what xen.git would have cloned. And "xen-tested-master" is the
commit that I was expecting osstest to pick up, but maybe that has been
set up only for stable trees?

Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
file is exist"), it isn't the same OVMF that is being used. We used to
use OvmfX64, but now we are going to use OvmfXen. (Xen support in
OvmfX64 has been removed, so it can't be used anymore.)


So there may be an issue with OvmfXen, which doesn't need to block
xen-unstable flights.


As for the failure, I can think of one thing that is different:
OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in
guest physical memory, in order to avoid creating a hole in the RAM, but
a call to XENMEM_remove_from_physmap is made as well. Could that actually
cause issues with save/restore?
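
A hedged sketch of the sequence described above (structure and constant
names are from Xen's public memory.h interface; `high_gpfn` is a
placeholder, and the actual OvmfXen code may well differ):

```c
/* Map the shared info page at a high GFN to avoid punching a hole in RAM. */
struct xen_add_to_physmap xatp = {
	.domid = DOMID_SELF,
	.space = XENMAPSPACE_shared_info,
	.idx   = 0,
	.gpfn  = high_gpfn,	/* placeholder: chosen as high as possible */
};
HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);

/* ... and later remove it from the physmap again, as noted above. */
struct xen_remove_from_physmap xrfp = {
	.domid = DOMID_SELF,
	.gpfn  = high_gpfn,
};
HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
```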

So maybe we can force-push in the meantime, if the OVMF tests are the
only failures.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143280.264124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlQ-0005GT-5g; Wed, 16 Jun 2021 14:43:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143280.264124; Wed, 16 Jun 2021 14:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlQ-0005GM-20; Wed, 16 Jun 2021 14:43:32 +0000
Received: by outflank-mailman (input) for mailman id 143280;
 Wed, 16 Jun 2021 14:43:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlO-000505-KG
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlN-0004Ce-C3; Wed, 16 Jun 2021 14:43:29 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlN-0007D0-2m; Wed, 16 Jun 2021 14:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=iF2lVk0Jwa+VHF6NCrh9tPc8M6wsYLGVp6TwRt8NLek=; b=udJarbdoYk7AMrC4FmVk4POvy
	bzqzvtUJQkdDGklBjlLj878ADH/RCHrO3d0917hfJI/L7vIfX1n6vNDQ3o+vKWsv3xthaN+QdhIj5
	KKQeHNIzVLOwKJaOM0yqKioyitnkKtSb707ANXl/BCdkbPebKTTuYCm1eTg3TIWkE8HCQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 01/10] MAINTAINERS: Add myself as reviewers for tools/xenstore
Date: Wed, 16 Jun 2021 15:43:15 +0100
Message-Id: <20210616144324.31652-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

I would like to help review Xenstored patches. It is more convenient
to find them if I am CCed.

Signed-off-by: Julien Grall <julien@xen.org>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 39750bb75db5..dd8c011456cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -628,6 +628,7 @@ XENSTORE
 M:	Ian Jackson <iwj@xenproject.org>
 M:	Wei Liu <wl@xen.org>
 M:	Juergen Gross <jgross@suse.com>
+R:	Julien Grall <julien@xen.org>
 S:	Supported
 F:	tools/xenstore/
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143279.264112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlO-00050I-SZ; Wed, 16 Jun 2021 14:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143279.264112; Wed, 16 Jun 2021 14:43:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlO-00050A-Pg; Wed, 16 Jun 2021 14:43:30 +0000
Received: by outflank-mailman (input) for mailman id 143279;
 Wed, 16 Jun 2021 14:43:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlN-0004zz-Ao
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlL-0004CW-Px; Wed, 16 Jun 2021 14:43:27 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlL-0007D0-Fu; Wed, 16 Jun 2021 14:43:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=fZA6YmLD+LYE4dV/8CG4YJOfv9OzLD5ZRPfM2lCkh+Q=; b=xd8PGOKz5NX1zZk2AA7OGBohAG
	KQ34FbO6P//4gZUvN1fDjPNp3As2KeTEmCniNx/n1MmGUQskevFqpZv4Cy/qZ04As8RUmUjUMg6t+
	C9LKzjME2NR13uAeXHm2eE8IIxrvZg/I6OVJiSGNXvkRdWD9XW7IrfPneYYKfbW8rZd0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 00/10] tools/xenstored: Bug fixes + Improve Live-Update
Date: Wed, 16 Jun 2021 15:43:14 +0100
Message-Id: <20210616144324.31652-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

At the moment, Live-Update will, by default, not proceed if there are
in-flight transactions. It is possible to force it by passing -F, but
this will break any connection with in-flight transactions.

There are PV drivers out there that may never terminate some
transactions. On hosts running such guests, we would need to use -F.
Unfortunately, this also risks breaking well-behaving guests (and even
dom0), because Live-Update will happen as soon as the timeout is hit.

This series aims to allow Live-Update to proceed more safely even when
the option -F is used.

The first part of the series contains a few fixes for bugs found while
testing Live-Update.

Cheers,

Julien Grall (10):
  MAINTAINERS: Add myself as reviewers for tools/xenstore
  tools/xenstored: Introduce lu_get_connection() and use it
  tools/xenstore: Don't assume conn->in points to the LU request
  tools/xenstored: Limit the number of requests a connection can delay
  tools/xenstored: xenstored_core.h should include fcntl.h
  tools/xenstored: Introduce a wrapper for conn->funcs->can_{read,
    write}
  tools/xenstored: delay_request: don't assume conn->in == in
  tools/xenstored: Extend restore code to handle multiple input buffer
  tools/xenstored: Dump delayed requests
  tools/xenstored: Delay new transaction while Live-Update is pending

 MAINTAINERS                        |   1 +
 tools/xenstore/xenstored_control.c |  66 +++++++++-
 tools/xenstore/xenstored_control.h |   7 ++
 tools/xenstore/xenstored_core.c    | 196 +++++++++++++++++++++++------
 tools/xenstore/xenstored_core.h    |   8 +-
 tools/xenstore/xenstored_domain.c  |  46 +++----
 tools/xenstore/xenstored_domain.h  |   6 +-
 7 files changed, 255 insertions(+), 75 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143282.264146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlS-0005o7-P6; Wed, 16 Jun 2021 14:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143282.264146; Wed, 16 Jun 2021 14:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlS-0005nt-Km; Wed, 16 Jun 2021 14:43:34 +0000
Received: by outflank-mailman (input) for mailman id 143282;
 Wed, 16 Jun 2021 14:43:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlR-0005fp-7T
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlQ-0004DK-36; Wed, 16 Jun 2021 14:43:32 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlP-0007D0-QY; Wed, 16 Jun 2021 14:43:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=P1+HaOfv4X7DVCks8z7Jhaa8GAvQxlzVkg9j3S037W8=; b=quBzH1JZOnInax6kNzeH3FNAI
	zbQZXo1G76WpD5d+HcsDZ6AHyMZ6J/2PCCik/5r6a0V2YUPwXGe2Em4vBSutgGFNH8nwsN3SdfjNM
	7dWkknRc/yifzOYru3RRYEpUxPII6ICpxUPwHd6kYQZwMTHXkuhCBuz0b3D690WnDTrng=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the LU request
Date: Wed, 16 Jun 2021 15:43:17 +0100
Message-Id: <20210616144324.31652-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

call_delayed() currently assumes that conn->in is NULL when handling a
delayed request. However, the connection is not paused, so new requests
can be processed and conn->in may be non-NULL if we have only received
a partial request.

Furthermore, as we overwrite conn->in, the current partial request
will not be transferred. This will result in corrupting the connection.

Rather than updating conn->in, stash the LU request in lu_status and
let each delayed-request callback update conn->in when necessary.

To keep a sane interface, the code writing the "OK" response to the
LU request is moved into xenstored_core.c.

Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live update")
Signed-off-by: Julien Grall <jgrall@amazon.com>

----

This is fixing bugs from two separate commits. I couldn't figure out
how to split it into two patches without breaking bisection.
---
 tools/xenstore/xenstored_control.c | 41 ++++++++++++++++++++++++++++--
 tools/xenstore/xenstored_control.h |  3 +++
 tools/xenstore/xenstored_core.c    | 17 +++----------
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index d08a2b961432..7acc2d134f9f 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -50,6 +50,9 @@ struct live_update {
 	/* For verification the correct connection is acting. */
 	struct connection *conn;
 
+	/* Pointer to the command used to request LU */
+	struct buffered_data *in;
+
 #ifdef __MINIOS__
 	void *kernel;
 	unsigned int kernel_size;
@@ -100,6 +103,7 @@ static const char *lu_begin(struct connection *conn)
 	if (!lu_status)
 		return "Allocation failure.";
 	lu_status->conn = conn;
+	lu_status->in = conn->in;
 	talloc_set_destructor(lu_status, lu_destroy);
 
 	return NULL;
@@ -110,11 +114,34 @@ struct connection *lu_get_connection(void)
 	return lu_status ? lu_status->conn : NULL;
 }
 
+unsigned int lu_write_response(FILE *fp)
+{
+	struct xsd_sockmsg msg;
+
+	assert(lu_status);
+
+	msg = lu_status->in->hdr.msg;
+
+	msg.len = sizeof("OK");
+	if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
+		return 0;
+	if (fp && fwrite("OK", msg.len, 1, fp) != 1)
+		return 0;
+
+	return sizeof(msg) + msg.len;
+}
+
 #else
 struct connection *lu_get_connection(void)
 {
 	return NULL;
 }
+
+unsigned int lu_write_response(FILE *fp)
+{
+	/* Unsupported */
+	return 0;
+}
 #endif
 
 struct cmd_s {
@@ -658,6 +685,8 @@ static bool do_lu_start(struct delayed_request *req)
 {
 	time_t now = time(NULL);
 	const char *ret;
+	struct buffered_data *saved_in;
+	struct connection *conn = lu_status->conn;
 
 	if (!lu_check_lu_allowed()) {
 		if (now < lu_status->started_at + lu_status->timeout)
@@ -668,8 +697,9 @@ static bool do_lu_start(struct delayed_request *req)
 		}
 	}
 
+	assert(req->in == lu_status->in);
 	/* Dump out internal state, including "OK" for live update. */
-	ret = lu_dump_state(req->in, lu_status->conn);
+	ret = lu_dump_state(req->in, conn);
 	if (!ret) {
 		/* Perform the activation of new binary. */
 		ret = lu_activate_binary(req->in);
@@ -677,7 +707,14 @@ static bool do_lu_start(struct delayed_request *req)
 
 	/* We will reach this point only in case of failure. */
  out:
-	send_reply(lu_status->conn, XS_CONTROL, ret, strlen(ret) + 1);
+	/*
+	 * send_reply() will send the response for conn->in. Save the current
+	 * conn->in and restore it afterwards.
+	 */
+	saved_in = conn->in;
+	conn->in = req->in;
+	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
+	conn->in = saved_in;
 	talloc_free(lu_status);
 
 	return true;
diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
index 6842b8d88760..27d7f19e4b7f 100644
--- a/tools/xenstore/xenstored_control.h
+++ b/tools/xenstore/xenstored_control.h
@@ -20,3 +20,6 @@ int do_control(struct connection *conn, struct buffered_data *in);
 void lu_read_state(void);
 
 struct connection *lu_get_connection(void);
+
+/* Write the "OK" response for the live-update command */
+unsigned int lu_write_response(FILE *fp);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 607187361d84..41b26d7094c8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -272,15 +272,10 @@ static int undelay_request(void *_req)
 
 static void call_delayed(struct connection *conn, struct delayed_request *req)
 {
-	assert(conn->in == NULL);
-	conn->in = req->in;
-
 	if (req->func(req)) {
 		undelay_request(req);
 		talloc_set_destructor(req, NULL);
 	}
-
-	conn->in = NULL;
 }
 
 int delay_request(struct connection *conn, struct buffered_data *in,
@@ -2375,7 +2370,7 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 	struct buffered_data *out, *in = c->in;
 	bool partial = true;
 
-	if (in && c != lu_get_connection()) {
+	if (in) {
 		len = in->inhdr ? in->used : sizeof(in->hdr);
 		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
 			return "Dump read data error";
@@ -2416,16 +2411,12 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 
 	/* Add "OK" for live-update command. */
 	if (c == lu_get_connection()) {
-		struct xsd_sockmsg msg = c->in->hdr.msg;
+		unsigned int rc = lu_write_response(fp);
 
-		msg.len = sizeof("OK");
-		if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
+		if (!rc)
 			return "Dump buffered data error";
-		len += sizeof(msg);
-		if (fp && fwrite("OK", msg.len, 1, fp) != 1)
 
-			return "Dump buffered data error";
-		len += msg.len;
+		len += rc;
 	}
 
 	if (sc)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143281.264131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlQ-0005JS-HL; Wed, 16 Jun 2021 14:43:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143281.264131; Wed, 16 Jun 2021 14:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlQ-0005IK-AI; Wed, 16 Jun 2021 14:43:32 +0000
Received: by outflank-mailman (input) for mailman id 143281;
 Wed, 16 Jun 2021 14:43:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlP-0005Fw-R6
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlO-0004Cu-NI; Wed, 16 Jun 2021 14:43:30 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlO-0007D0-EZ; Wed, 16 Jun 2021 14:43:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=NtHsRqFJ/9j0/O4AKrXX5mo/TDjv7sxBrn2GQynLUM8=; b=qy0Yl7SLc8nebkjHd6dzT+3G1
	HiAD+0b2NTEFeji3acrVAe2Er1QrWgZT+SNC8ysP/0aXBk06BVprwuINaQxS5DU09/MfPG9EETbDG
	YF16eBb8l9f1vbTl9ZM4QszPAkv/KsiPvfk8IZbmzb9ALDwU90mbTVsatqlQiDjC1LXPY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 02/10] tools/xenstored: Introduce lu_get_connection() and use it
Date: Wed, 16 Jun 2021 15:43:16 +0100
Message-Id: <20210616144324.31652-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, dump_state_buffered_data() takes two connections as
parameters (one is the connection to dump, the other is the connection
used to request LU). The naming (c vs conn) doesn't help to distinguish
them, and this has already led to several mistakes while modifying the
function.

To remove the confusion, introduce a helper lu_get_connection() that
returns the connection used to request LU, and use it in place of the
existing parameter.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 13 ++++++++++++-
 tools/xenstore/xenstored_control.h |  2 ++
 tools/xenstore/xenstored_core.c    |  7 +++----
 tools/xenstore/xenstored_core.h    |  1 -
 tools/xenstore/xenstored_domain.c  |  6 +++---
 tools/xenstore/xenstored_domain.h  |  2 +-
 6 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 0d57f9f9400d..d08a2b961432 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -104,6 +104,17 @@ static const char *lu_begin(struct connection *conn)
 
 	return NULL;
 }
+
+struct connection *lu_get_connection(void)
+{
+	return lu_status ? lu_status->conn : NULL;
+}
+
+#else
+struct connection *lu_get_connection(void)
+{
+	return NULL;
+}
 #endif
 
 struct cmd_s {
@@ -516,7 +527,7 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 	ret = dump_state_global(fp);
 	if (ret)
 		goto out;
-	ret = dump_state_connections(fp, conn);
+	ret = dump_state_connections(fp);
 	if (ret)
 		goto out;
 	ret = dump_state_special_nodes(fp);
diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
index aac61f05908f..6842b8d88760 100644
--- a/tools/xenstore/xenstored_control.h
+++ b/tools/xenstore/xenstored_control.h
@@ -18,3 +18,5 @@
 
 int do_control(struct connection *conn, struct buffered_data *in);
 void lu_read_state(void);
+
+struct connection *lu_get_connection(void);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 883a1a582a60..607187361d84 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2369,14 +2369,13 @@ const char *dump_state_global(FILE *fp)
 
 /* Called twice: first with fp == NULL to get length, then for writing data. */
 const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
-				     const struct connection *conn,
 				     struct xs_state_connection *sc)
 {
 	unsigned int len = 0, used;
 	struct buffered_data *out, *in = c->in;
 	bool partial = true;
 
-	if (in && c != conn) {
+	if (in && c != lu_get_connection()) {
 		len = in->inhdr ? in->used : sizeof(in->hdr);
 		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
 			return "Dump read data error";
@@ -2416,8 +2415,8 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 	}
 
 	/* Add "OK" for live-update command. */
-	if (c == conn) {
-		struct xsd_sockmsg msg = conn->in->hdr.msg;
+	if (c == lu_get_connection()) {
+		struct xsd_sockmsg msg = c->in->hdr.msg;
 
 		msg.len = sizeof("OK");
 		if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index bb36111ecc56..89ce155e755b 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -269,7 +269,6 @@ void set_tdb_key(const char *name, TDB_DATA *key);
 
 const char *dump_state_global(FILE *fp);
 const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
-				     const struct connection *conn,
 				     struct xs_state_connection *sc);
 const char *dump_state_nodes(FILE *fp, const void *ctx);
 const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 322b0dbca449..6d8d29cbe41c 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1183,7 +1183,7 @@ void wrl_apply_debit_trans_commit(struct connection *conn)
 	wrl_apply_debit_actual(conn->domain);
 }
 
-const char *dump_state_connections(FILE *fp, struct connection *conn)
+const char *dump_state_connections(FILE *fp)
 {
 	const char *ret = NULL;
 	unsigned int conn_id = 1;
@@ -1209,7 +1209,7 @@ const char *dump_state_connections(FILE *fp, struct connection *conn)
 			sc.spec.socket_fd = c->fd;
 		}
 
-		ret = dump_state_buffered_data(NULL, c, conn, &sc);
+		ret = dump_state_buffered_data(NULL, c, &sc);
 		if (ret)
 			return ret;
 		head.length += sc.data_in_len + sc.data_out_len;
@@ -1219,7 +1219,7 @@ const char *dump_state_connections(FILE *fp, struct connection *conn)
 		if (fwrite(&sc, offsetof(struct xs_state_connection, data),
 			   1, fp) != 1)
 			return "Dump connection state error";
-		ret = dump_state_buffered_data(fp, c, conn, NULL);
+		ret = dump_state_buffered_data(fp, c, NULL);
 		if (ret)
 			return ret;
 		ret = dump_state_align(fp);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index cc5147d7e747..62ee471ea6aa 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -101,7 +101,7 @@ void wrl_log_periodic(struct wrl_timestampt now);
 void wrl_apply_debit_direct(struct connection *conn);
 void wrl_apply_debit_trans_commit(struct connection *conn);
 
-const char *dump_state_connections(FILE *fp, struct connection *conn);
+const char *dump_state_connections(FILE *fp);
 const char *dump_state_special_nodes(FILE *fp);
 
 void read_state_connection(const void *ctx, const void *state);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143283.264157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlU-00067G-9A; Wed, 16 Jun 2021 14:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143283.264157; Wed, 16 Jun 2021 14:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlU-000673-4X; Wed, 16 Jun 2021 14:43:36 +0000
Received: by outflank-mailman (input) for mailman id 143283;
 Wed, 16 Jun 2021 14:43:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlS-0005nk-El
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlR-0004DX-Ey; Wed, 16 Jun 2021 14:43:33 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlR-0007D0-6H; Wed, 16 Jun 2021 14:43:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=JWVgF6QLGsEmGe+em+lI3NsrndUxxF4I6/J85EZgUFY=; b=h5Gm1UgtjuzByYBGo/uL1qERD
	AQ6/Ty3bvZNqIFGvz4uzKFSq678gloxDxx0aUTyDcS0wasJkoZU0LMceRM2U/xBk9hxhC1hrvrlYS
	a2Lxgoy+JUaD6DGl+LXH67uFsAXRmV118MeIXQDj1l/zvOfaxUIqSpS8RETriuWbq2R1I=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 04/10] tools/xenstored: Limit the number of requests a connection can delay
Date: Wed, 16 Jun 2021 15:43:18 +0100
Message-Id: <20210616144324.31652-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Currently, only the live-update request can be delayed. This request
can only be performed by a privileged connection (e.g. dom0), so it is
fine to have no limit.

In a follow-up patch we will want to delay requests for unprivileged
connections as well, so it is best to apply a limit.

For now and for simplicity, only a single request can be delayed
for a given unprivileged connection.

Take the opportunity to tweak the prototype to provide a way to
bypass the quota check. This will be useful when the function
is called from the restore code.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c |  2 +-
 tools/xenstore/xenstored_core.c    | 11 ++++++++++-
 tools/xenstore/xenstored_core.h    |  3 ++-
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 7acc2d134f9f..1c24d4869eab 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -737,7 +737,7 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 	lu_status->timeout = to;
 	lu_status->started_at = time(NULL);
 
-	errno = delay_request(conn, conn->in, do_lu_start, NULL);
+	errno = delay_request(conn, conn->in, do_lu_start, NULL, false);
 
 	return NULL;
 }
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 41b26d7094c8..51d210828922 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -279,10 +279,19 @@ static void call_delayed(struct connection *conn, struct delayed_request *req)
 }
 
 int delay_request(struct connection *conn, struct buffered_data *in,
-		  bool (*func)(struct delayed_request *), void *data)
+		  bool (*func)(struct delayed_request *), void *data,
+		  bool no_quota_check)
 {
 	struct delayed_request *req;
 
+	/*
+	 * Only one request may be delayed for an unprivileged
+	 * connection.
+	 */
+	if (!no_quota_check && domain_is_unprivileged(conn) &&
+	    !list_empty(&conn->delayed))
+		return ENOSPC;
+
 	req = talloc(in, struct delayed_request);
 	if (!req)
 		return ENOMEM;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 89ce155e755b..34839b34f6e9 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -213,7 +213,8 @@ char *get_parent(const void *ctx, const char *node);
 
 /* Delay a request. */
 int delay_request(struct connection *conn, struct buffered_data *in,
-		  bool (*func)(struct delayed_request *), void *data);
+		  bool (*func)(struct delayed_request *), void *data,
+		  bool no_quota_check);
 
 /* Tracing infrastructure. */
 void trace_create(const void *data, const char *type);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143284.264163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlU-0006Cl-Re; Wed, 16 Jun 2021 14:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143284.264163; Wed, 16 Jun 2021 14:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlU-0006BQ-IP; Wed, 16 Jun 2021 14:43:36 +0000
Received: by outflank-mailman (input) for mailman id 143284;
 Wed, 16 Jun 2021 14:43:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlT-00064H-Qb
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlS-0004Do-Qy; Wed, 16 Jun 2021 14:43:34 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlS-0007D0-IH; Wed, 16 Jun 2021 14:43:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=fKULMeUXfmVrCVmVDhYu2L9+CZLfHs4b31xT1ty4foo=; b=KXkJfhVMqSRa/+S2GRnCjEDdU
	HWFmyaOk78Yma8CB6ovb7dkMDNwYuZNUYC8WEO7NDWzue6ckQuGp7+KdibTHe0f8Qfn0fJw0+nshB
	O8l4YNMeVVboKM1Epm8CzaK1REWCwGrqTDcaCGRLnDiGRWFXZXKfxkP9dKRlsPg8TROGk=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 05/10] tools/xenstored: xenstored_core.h should include fcntl.h
Date: Wed, 16 Jun 2021 15:43:19 +0100
Message-Id: <20210616144324.31652-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

xenstored_core.h considers live-update to be unsupported if
O_CLOEXEC doesn't exist. However, the header doesn't include the one
defining O_CLOEXEC (i.e. fcntl.h). This means that, depending on
the headers included, some source files may think live-update is not
supported.

I am not aware of any issue with the existing code, so this is only
a latent bug so far.

Prevent any potential issue by including fcntl.h in xenstored_core.h.

Fixes: cd831ee438 ("tools/xenstore: handle CLOEXEC flag for local files and pipes")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 34839b34f6e9..dac517156993 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -24,6 +24,7 @@
 
 #include <sys/types.h>
 #include <dirent.h>
+#include <fcntl.h>
 #include <stdbool.h>
 #include <stdint.h>
 #include <errno.h>
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143285.264179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlX-0006kH-1N; Wed, 16 Jun 2021 14:43:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143285.264179; Wed, 16 Jun 2021 14:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlW-0006jk-SR; Wed, 16 Jun 2021 14:43:38 +0000
Received: by outflank-mailman (input) for mailman id 143285;
 Wed, 16 Jun 2021 14:43:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlV-0006Mk-5F
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlU-0004E6-6M; Wed, 16 Jun 2021 14:43:36 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlT-0007D0-U4; Wed, 16 Jun 2021 14:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ZvLpylj/UqcYoOOmC8VRLGxCMI8WwLt87nHoKc1byWA=; b=FYK8bYQfTjgnv0czNv4OhkHKe
	fPm5TAmEg1tQm63lBFvvY2P/mdoDDnRH6FLSvrv05vR38D2NXBtJ4nSrPYxW8I4anfx15txQMWTS5
	klUweyg5PW5A7yUraRaA0Qjd4mHPIcMDsq2hpu6Xo4882oi6ehJj45LJS4qxkH4Ac/tds=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 06/10] tools/xenstored: Introduce a wrapper for conn->funcs->can_{read, write}
Date: Wed, 16 Jun 2021 15:43:20 +0100
Message-Id: <20210616144324.31652-7-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Currently, the callbacks can_read and can_write are called directly. This
doesn't allow us to add generic checks and therefore leads to duplication.

At the moment, one check that would benefit from being common is whether
the connection should be ignored. The position of the check differs
slightly between domain and socket connections because, for the latter,
we want to check the state of the file descriptor first.

In follow-up patches, there will be more potential generic checks.

This patch provides wrappers to read/write a connection and moves
the ->is_ignored check after the callback for everyone.

This also requires replacing the direct calls to domain_can_read()
and domain_can_write() with the new wrappers. At the same time,
both functions can now be static. Note that the implementations need
to be moved earlier in the file xenstored_domain.c to avoid
having to declare forward prototypes.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 18 ++++++++++----
 tools/xenstore/xenstored_domain.c | 40 +++++++++++++------------------
 tools/xenstore/xenstored_domain.h |  4 ----
 3 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 51d210828922..2e5760fe4599 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -334,6 +334,16 @@ static int destroy_conn(void *_conn)
 	return 0;
 }
 
+static bool conn_can_read(struct connection *conn)
+{
+	return conn->funcs->can_read(conn) && !conn->is_ignored;
+}
+
+static bool conn_can_write(struct connection *conn)
+{
+	return conn->funcs->can_write(conn) && !conn->is_ignored;
+}
+
 /* This function returns index inside the array if succeed, -1 if fail */
 static int set_fd(int fd, short events)
 {
@@ -396,8 +406,8 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 	list_for_each_entry(conn, &connections, list) {
 		if (conn->domain) {
 			wrl_check_timeout(conn->domain, now, ptimeout);
-			if (domain_can_read(conn) ||
-			    (domain_can_write(conn) &&
+			if (conn_can_read(conn) ||
+			    (conn_can_write(conn) &&
 			     !list_empty(&conn->out_list)))
 				*ptimeout = 0;
 		} else {
@@ -2325,14 +2335,14 @@ int main(int argc, char *argv[])
 			if (&next->list != &connections)
 				talloc_increase_ref_count(next);
 
-			if (conn->funcs->can_read(conn))
+			if (conn_can_read(conn))
 				handle_input(conn);
 			if (talloc_free(conn) == 0)
 				continue;
 
 			talloc_increase_ref_count(conn);
 
-			if (conn->funcs->can_write(conn))
+			if (conn_can_write(conn))
 				handle_output(conn);
 			if (talloc_free(conn) == 0)
 				continue;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 6d8d29cbe41c..47e9107c144e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -172,6 +172,23 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 	return len;
 }
 
+static bool domain_can_write(struct connection *conn)
+{
+	struct xenstore_domain_interface *intf = conn->domain->interface;
+
+	return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
+}
+
+static bool domain_can_read(struct connection *conn)
+{
+	struct xenstore_domain_interface *intf = conn->domain->interface;
+
+	if (domain_is_unprivileged(conn) && conn->domain->wrl_credit < 0)
+		return false;
+
+	return (intf->req_cons != intf->req_prod);
+}
+
 static const struct interface_funcs domain_funcs = {
 	.write = writechn,
 	.read = readchn,
@@ -290,19 +307,6 @@ void handle_event(void)
 		barf_perror("Failed to write to event fd");
 }
 
-bool domain_can_read(struct connection *conn)
-{
-	struct xenstore_domain_interface *intf = conn->domain->interface;
-
-	if (domain_is_unprivileged(conn) && conn->domain->wrl_credit < 0)
-		return false;
-
-	if (conn->is_ignored)
-		return false;
-
-	return (intf->req_cons != intf->req_prod);
-}
-
 static bool domid_is_unprivileged(unsigned int domid)
 {
 	return domid != 0 && domid != priv_domid;
@@ -314,16 +318,6 @@ bool domain_is_unprivileged(struct connection *conn)
 	       domid_is_unprivileged(conn->domain->domid);
 }
 
-bool domain_can_write(struct connection *conn)
-{
-	struct xenstore_domain_interface *intf = conn->domain->interface;
-
-	if (conn->is_ignored)
-		return false;
-
-	return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
-}
-
 static char *talloc_domain_path(void *context, unsigned int domid)
 {
 	return talloc_asprintf(context, "/local/domain/%u", domid);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 62ee471ea6aa..1e929b8f8c6f 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -51,10 +51,6 @@ void domain_deinit(void);
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
 
-/* Can connection attached to domain read/write. */
-bool domain_can_read(struct connection *conn);
-bool domain_can_write(struct connection *conn);
-
 bool domain_is_unprivileged(struct connection *conn);
 
 /* Remove node permissions for no longer existing domains. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143286.264190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlY-00072v-Hf; Wed, 16 Jun 2021 14:43:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143286.264190; Wed, 16 Jun 2021 14:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlY-00071R-9W; Wed, 16 Jun 2021 14:43:40 +0000
Received: by outflank-mailman (input) for mailman id 143286;
 Wed, 16 Jun 2021 14:43:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlW-0006jG-Lk
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlV-0004EM-II; Wed, 16 Jun 2021 14:43:37 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlV-0007D0-9n; Wed, 16 Jun 2021 14:43:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ntUe4oRZZR/IFXSXsYlm7XZxb7Y2yIuX7QXLhkAF0hE=; b=IDKUeY/nN0OTprDr81ImN5AMo
	ppaCwTSJm8FAjz1yN1Oo8Xho+UoCUB5Hxqcr0Ow7Vl4VFyAQtyAQaf3r0w2i48mZ+1l/+2WnTN8pe
	EfoouNnL+UYE25y6EOdvJJKnh+KpSuVQbHnNXJUD3LCaXFXr51N0x84BnFJLFEGA0ELOA=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 07/10] tools/xenstored: delay_request: don't assume conn->in == in
Date: Wed, 16 Jun 2021 15:43:21 +0100
Message-Id: <20210616144324.31652-8-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

delay_request() currently assumes that the delayed request is
always conn->in. This is correct today, but it invites a latent bug,
as the function allows the caller to specify any request.

To prevent any future surprise, check whether the delayed request is
the current one before unlinking it from the connection.

Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 2e5760fe4599..a5084a5b173d 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -306,7 +306,9 @@ int delay_request(struct connection *conn, struct buffered_data *in,
 	delayed_requests++;
 	list_add(&req->list, &conn->delayed);
 
-	conn->in = NULL;
+	/* Unlink the request from conn if this is the current one */
+	if (conn->in == in)
+		conn->in = NULL;
 
 	return 0;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143287.264193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlZ-00079X-1Y; Wed, 16 Jun 2021 14:43:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143287.264193; Wed, 16 Jun 2021 14:43:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlY-00077z-SE; Wed, 16 Jun 2021 14:43:40 +0000
Received: by outflank-mailman (input) for mailman id 143287;
 Wed, 16 Jun 2021 14:43:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlX-0006uw-Mo
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlW-0004El-UA; Wed, 16 Jun 2021 14:43:38 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlW-0007D0-LZ; Wed, 16 Jun 2021 14:43:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Jt8gi0XN2RUq5+ySNQqwwiEayVXWhG3OzYqjKkFvJ1s=; b=RooQrwIt8jwfxA3Q7p226Z7KI
	sS65LowA7JvCVwNRGe5ajhcxHT3ymqNn9jwJGdC2Ir/o2DDWcz/i25+JbG1GldtTzVyh7MbrXa3Q9
	VuUWvfJdZz2rKoG0kis30MSplQuzUPrrkZkviOqTfqGT0w0EVWD1SIijLdVkDcx2rTSBQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 08/10] tools/xenstored: Extend restore code to handle multiple input buffer
Date: Wed, 16 Jun 2021 15:43:22 +0100
Message-Id: <20210616144324.31652-9-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Currently, the restore code assumes the stream will contain at
most one in-flight request per connection. In follow-up changes, we
will want to transfer multiple in-flight requests.

The function read_state_buffered_data() is now extended to restore
multiple in-flight requests. Complete requests will be queued as
delayed requests; if there is a partial request (only the last one
can be partial), it will be used as the current in-flight request.

Note that we want to bypass the quota check for delayed requests, as
the new Xenstore may have a lower limit.

Lastly, there is no need to change the specification, as it places
no restriction on the number of in-flight requests preserved.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 56 ++++++++++++++++++++++++++++-----
 1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index a5084a5b173d..5b7ab7f74013 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1486,6 +1486,10 @@ static void process_message(struct connection *conn, struct buffered_data *in)
 	enum xsd_sockmsg_type type = in->hdr.msg.type;
 	int ret;
 
+	/* At least send_error() and send_reply() expects conn->in == in */
+	assert(conn->in == in);
+	trace_io(conn, in, 0);
+
 	if ((unsigned int)type >= XS_TYPE_COUNT || !wire_funcs[type].func) {
 		eprintf("Client unknown operation %i", type);
 		send_error(conn, ENOSYS);
@@ -1515,6 +1519,23 @@ static void process_message(struct connection *conn, struct buffered_data *in)
 	conn->transaction = NULL;
 }
 
+static bool process_delayed_message(struct delayed_request *req)
+{
+	struct connection *conn = req->data;
+	struct buffered_data *saved_in = conn->in;
+
+	/*
+	 * Part of process_message() expects conn->in to contain the
+	 * request being processed. So save the current conn->in and restore it
+	 * afterwards.
+	 */
+	conn->in = req->in;
+	process_message(req->data, req->in);
+	conn->in = saved_in;
+
+	return true;
+}
+
 static void consider_message(struct connection *conn)
 {
 	if (verbose)
@@ -1582,7 +1603,6 @@ static void handle_input(struct connection *conn)
 	if (in->used != in->hdr.msg.len)
 		return;
 
-	trace_io(conn, in, 0);
 	consider_message(conn);
 	return;
 
@@ -2611,14 +2631,20 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
 	unsigned int len;
 	bool partial = sc->data_resp_len;
 
-	if (sc->data_in_len) {
+	for (data = sc->data; data < sc->data + sc->data_in_len; data += len) {
 		bdata = new_buffer(conn);
 		if (!bdata)
 			barf("error restoring read data");
-		if (sc->data_in_len < sizeof(bdata->hdr)) {
+
+		/*
+		 * We don't know yet whether there is more than one message
+		 * to process, so len is the size of the leftover data.
+		 */
+		len = sc->data_in_len - (data - sc->data);
+		if (len < sizeof(bdata->hdr)) {
 			bdata->inhdr = true;
-			memcpy(&bdata->hdr, sc->data, sc->data_in_len);
-			bdata->used = sc->data_in_len;
+			memcpy(&bdata->hdr, data, len);
+			bdata->used = len;
 		} else {
 			bdata->inhdr = false;
 			memcpy(&bdata->hdr, sc->data, sizeof(bdata->hdr));
@@ -2629,12 +2655,26 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
 							bdata->hdr.msg.len);
 			if (!bdata->buffer)
 				barf("Error allocating in buffer");
-			bdata->used = sc->data_in_len - sizeof(bdata->hdr);
-			memcpy(bdata->buffer, sc->data + sizeof(bdata->hdr),
+			bdata->used = min_t(unsigned int,
+					    len - sizeof(bdata->hdr),
+					    bdata->hdr.msg.len);
+			memcpy(bdata->buffer, data + sizeof(bdata->hdr),
 			       bdata->used);
+			/* Update len to match the size of the message. */
+			len = bdata->used + sizeof(bdata->hdr);
 		}
 
-		conn->in = bdata;
+		/*
+		 * If the message is not complete, then it means this was
+		 * the message currently being processed. All the other
+		 * messages will be queued to be handled after restoring.
+		 */
+		if (bdata->inhdr || bdata->used != bdata->hdr.msg.len) {
+			assert(conn->in == NULL);
+			conn->in = bdata;
+		} else if (delay_request(conn, bdata, process_delayed_message,
+					 conn, true))
+			barf("Unable to delay the request");
 	}
 
 	for (data = sc->data + sc->data_in_len;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:43:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:43:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143288.264213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlb-0007lT-Q9; Wed, 16 Jun 2021 14:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143288.264213; Wed, 16 Jun 2021 14:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWlb-0007js-Cb; Wed, 16 Jun 2021 14:43:43 +0000
Received: by outflank-mailman (input) for mailman id 143288;
 Wed, 16 Jun 2021 14:43:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWlZ-0007AL-2j
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:43:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlY-0004F5-9d; Wed, 16 Jun 2021 14:43:40 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlY-0007D0-1H; Wed, 16 Jun 2021 14:43:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=xS2Ezk6IKf2znuGkPh4eU4xq2n4H9YnKOyWflMAxdpI=; b=rTI2FWbDmwOU+ZmJRYSDmZq+x
	Urw587evNvAsBzxoNv4yyEvVa3DN3rzzXLATjbW2U+u0iQ4JPV5cEEapLQLirpv+pdqB1TciCzf+V
	E9c2zPhuhG61R7p/A2q6dZazkAPyr+HWWffWt1ynbjUBCaNgreQxRPZEiZXHy/DUcVGu8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 09/10] tools/xenstored: Dump delayed requests
Date: Wed, 16 Jun 2021 15:43:23 +0100
Message-Id: <20210616144324.31652-10-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Currently, only the Live-Update request can be delayed. In a follow-up,
we will want to delay more requests (e.g. transaction start).
Therefore we want to preserve delayed requests across Live-Update.

Delayed requests are just complete "in" buffers, so the code is
refactored to share the logic for dumping an "in" buffer.
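
As an illustration (using simplified stand-in types whose names and
fields are made up here, not the real tools/xenstore definitions), the
dump rule for a partially-read buffer can be sketched as:

```c
#include <stdbool.h>

/* Illustrative stand-ins for xenstored's wire header and read buffer. */
struct stub_sockmsg { unsigned int type, req_id, tx_id, len; };
struct stub_buffered_data {
	bool inhdr;               /* still receiving the header? */
	unsigned int used;        /* bytes received so far */
	struct stub_sockmsg hdr;
	char buffer[64];
};

/*
 * Mirror of the dump rule: a buffer still in its header phase
 * contributes only the header bytes read so far; otherwise it
 * contributes the full header plus the payload read so far.
 */
static unsigned int stub_dump_len(const struct stub_buffered_data *in)
{
	unsigned int len = in->inhdr ? in->used : sizeof(in->hdr);

	if (!in->inhdr)
		len += in->used;
	return len;
}
```

The same rule applies whether the buffer belongs to a delayed request
or to the connection's current incoming message, which is why the
dumping code can be shared.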

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 5b7ab7f74013..9eca58682b51 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
 	return NULL;
 }
 
+static const char *dump_input_buffered_data(FILE *fp,
+					    const struct buffered_data *in,
+					    unsigned int *total_len)
+{
+	unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
+
+	*total_len += hlen;
+	if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
+		return "Dump read data error";
+	if (!in->inhdr && in->used) {
+		*total_len += in->used;
+		if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
+			return "Dump read data error";
+	}
+
+	return NULL;
+}
+
 /* Called twice: first with fp == NULL to get length, then for writing data. */
 const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 				     struct xs_state_connection *sc)
 {
 	unsigned int len = 0, used;
-	struct buffered_data *out, *in = c->in;
+	struct buffered_data *out;
 	bool partial = true;
+	struct delayed_request *req;
+	const char *ret;
 
-	if (in) {
-		len = in->inhdr ? in->used : sizeof(in->hdr);
-		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
-			return "Dump read data error";
-		if (!in->inhdr && in->used) {
-			len += in->used;
-			if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
-				return "Dump read data error";
-		}
+	/* Dump any command that was delayed */
+	list_for_each_entry(req, &c->delayed, list) {
+		if (req->func != process_delayed_message)
+			continue;
+
+		assert(!req->in->inhdr);
+		if ((ret = dump_input_buffered_data(fp, req->in, &len)))
+			return ret;
 	}
 
+	if (c->in && (ret = dump_input_buffered_data(fp, c->in, &len)))
+		return ret;
+
 	if (sc) {
 		sc->data_in_len = len;
 		sc->data_resp_len = 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:49:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143318.264222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWrO-0002mW-SW; Wed, 16 Jun 2021 14:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143318.264222; Wed, 16 Jun 2021 14:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWrO-0002mP-P4; Wed, 16 Jun 2021 14:49:42 +0000
Received: by outflank-mailman (input) for mailman id 143318;
 Wed, 16 Jun 2021 14:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltWrM-0002mJ-RT
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:49:40 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1676ad9e-93b6-4a0e-9a7e-f6bb1a7b7629;
 Wed, 16 Jun 2021 14:49:39 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-12-P7_ZRSQoP1iF_uKzFfNjuw-1; Wed, 16 Jun 2021 16:49:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2335.eurprd04.prod.outlook.com (2603:10a6:800:2e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 16 Jun
 2021 14:49:35 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 14:49:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0179.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1c::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 14:49:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1676ad9e-93b6-4a0e-9a7e-f6bb1a7b7629
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623854978;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Rfx/z6Gn12LvWIDZev2ilo1Hf/f2BsBOP0o0auTiRL4=;
	b=RD+WzFQT7paSt5vf2hSdvIygCSkktqYnPxDRDtzAD8t3SlhNCGw91QFo6pHUnEER9jXGy+
	oUYdRFvAiZEK7jTw6BoPmzDRcSAITYFG/kbWGwnCIkE0SR1YqloWe47L7V81/RziB+dy+N
	jjMtKB7UqFBf+vd/RMNzB8pU9BN4LeM=
X-MC-Unique: P7_ZRSQoP1iF_uKzFfNjuw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org,
 osstest service owner <osstest-admin@xenproject.org>
References: <osstest-162845-mainreport@xen.org>
 <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com> <YMoI8YZfOvogwOMY@perard>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com>
Date: Wed, 16 Jun 2021 16:49:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YMoI8YZfOvogwOMY@perard>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0179.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1c::23) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 69769ea5-c012-4614-e9e7-08d930d5f54d
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2335:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2335E488A3569C3E5E55AAB7B30F9@VI1PR0401MB2335.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 69769ea5-c012-4614-e9e7-08d930d5f54d
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 14:49:35.2796
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fpjOZM8XB1mRdcAVcc4XovBZpycomitOl2NIbYbOm4QzuK4nOySPy9kKPRhVTpxQyGUOyKkZbw7PoxyerAHc3w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2335

On 16.06.2021 16:21, Anthony PERARD wrote:
> On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
>> On 16.06.2021 08:54, osstest service owner wrote:
>>> flight 162845 xen-unstable real [real]
>>> flight 162853 xen-unstable real-retest [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/162845/
>>> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>  test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>  test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>
>> There looks to still be an issue with the ovmf version used. I'm
>> puzzled to find this flight reporting
>>
>> built_revision_ovmf	e1999b264f1f9d7230edf2448f757c73da567832
>>
>> which isn't what the tree recently was rewound to, but about two
>> dozen commits older. I hope one of you has a clue about what is going
>> on here.
>
> So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
> rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git
>
> master is what xen.git would have cloned. And "xen-tested-master" is the
> commit that I was expecting osstest to pick up, but maybe that has been
> set up only for stable trees?
>
> Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
> file is exist"), it isn't the same OVMF that is being used. We used to
> use OvmfX64, but now we are going to use OvmfXen. (Xen support in
> OvmfX64 has been removed, so it can't be used anymore.)
>
>
> So there may be an issue with OvmfXen, which doesn't need to block
> xen-unstable flights.
>
>
> As for the failure, I can think of one thing that is different:
> OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in the
> guest physical memory, in order to avoid creating a hole in the RAM, but a
> call to XENMEM_remove_from_physmap is done as well. Could that actually
> cause issues with saverestore?

I don't think it should. But I now notice I should have looked at the
logs of these tests:

xc: info: Saving domain 2, type x86 HVM
xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
xc: error: Save failed (1 = Operation not permitted): Internal error

which looks suspiciously similar to the issue Jürgen's d21121685fac
("tools/libs/guest: fix save and restore of pv domains after 32-bit
de-support") took care of, just that here we're dealing with a HVM
guest. I'll have to go inspect what exactly the library is doing there,
and hence where in Xen the -EPERM may be coming from all of a
sudden (and only for OVMF).

Of course the behavior you describe above may play into this, since
aiui this might lead to an excessively large p2m (depending on what
exactly you mean by "as high as possible").

> So maybe we can force-push in the meantime if the tests with OVMF are
> the only failure.

I don't think I see a force push justified just yet.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:50:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:50:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143323.264234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWsI-00043l-7H; Wed, 16 Jun 2021 14:50:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143323.264234; Wed, 16 Jun 2021 14:50:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWsI-00043c-3y; Wed, 16 Jun 2021 14:50:38 +0000
Received: by outflank-mailman (input) for mailman id 143323;
 Wed, 16 Jun 2021 14:50:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOo1=LK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltWsH-00041z-CU
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:50:37 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03c52529-3755-4480-a5a8-5acfe8d5dca8;
 Wed, 16 Jun 2021 14:50:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03c52529-3755-4480-a5a8-5acfe8d5dca8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623855036;
  h=to:references:from:subject:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=j4HSTzgP+pmeWq0lADAI1AhbzdnY6QzVvcnb0xjihH0=;
  b=c2gVyNlmnF1D3jCov5KH9b4gScXQgR+565u2rkMCehVaY6+6sVXlZKlc
   8/QoDwmCZHxiUAjjiz7ZR2+d0ZFGYLYZCXFx1cwkoK5Q0dx6HkkoRLW//
   BVAbCgOV0YpyMz5jQVLq7OH5RVXCepVDVZKyKyHlSyUvErlwTXdyEvzws
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46640997
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46640997"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j4HSTzgP+pmeWq0lADAI1AhbzdnY6QzVvcnb0xjihH0=;
 b=uLlde0Z0NrXrTLCjhHyBwfShRP3gqYduLi5Gey2ekDjqIzXbVrNOiWInjCs1FcQTCxMCPUNcwCDOaYtKAD6Q4TImrKSAM5iP+jWp1GazJ3pF7UPHGo/oW0JWGyNHn36BAr0HA8NjrCP1i8/hH8nulcES148XeLwDTLKeD4UrMAA=
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
References: <20210616125129.26563-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
Date: Wed, 16 Jun 2021 15:50:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: base64
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0303.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::27) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 26186757-eb45-4650-58c6-08d930d61634
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5437:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5437E300BA342F6B8BAE2051BA0F9@SJ0PR03MB5437.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 26186757-eb45-4650-58c6-08d930d61634
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 14:50:30.5418
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: obQ9118q8YVwGTJ+pDIPdfdyqfmWsVF2BISI3k3dAMy1Yv+nOLf4YyM8B8QfvFD9KVGHJGvHsrMgAxm8O5VXmfxAOTrOlPKQhiOBsF4HWNA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5437
X-OriginatorOrg: citrix.com

On 16/06/2021 13:50, Olaf Hering wrote:
> Various unreviewed changes, rebase to 4bcf6433ee.

General CI run:
https://gitlab.com/xen-project/patchew/xen/-/pipelines/322032419

Some specific failures.
https://gitlab.com/xen-project/patchew/xen/-/jobs/1351977567 (32bit
toolstack build):

common.c: In function '_sr_bitmap_expand':
common.c:187:9: error: right shift count >= width of type [-Werror]
        new_max |= new_max >> 32;
        ^

https://gitlab.com/xen-project/patchew/xen/-/jobs/1351977708 (arm64)

nomigrate.c:25:20: error: unknown type name 'xc_stream_type_t'
   25 |                    xc_stream_type_t stream_type, int recv_fd)
      |                    ^~~~~~~~~~~~~~~~

I haven't looked through all the failures in the general run, but be
aware that there might still be some clang fallout in dom0_build.c in
Xen, and PV32 fallout for the smoke tests, which won't be from your series.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 14:53:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 14:53:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143346.264244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWue-0004qz-M1; Wed, 16 Jun 2021 14:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143346.264244; Wed, 16 Jun 2021 14:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltWue-0004qs-Ip; Wed, 16 Jun 2021 14:53:04 +0000
Received: by outflank-mailman (input) for mailman id 143346;
 Wed, 16 Jun 2021 14:53:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltWud-0004qk-Hr
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 14:53:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWuc-0004SR-Hr; Wed, 16 Jun 2021 14:53:02 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltWlZ-0007D0-DF; Wed, 16 Jun 2021 14:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Vb1oaoq7rCyl8ROu/TuBrNZhgyUJCaVoNnm3GNRQzS4=; b=p1H5ZlKe0o33n7Y6Jw6qdCJfi
	lCvnJmoLm7XvEURSpqGX7ozQUOxBvRPyBKO9y9zdBhJLGULKpSi7TlK5VROa9Jn3s2bnzFBc/F+oX
	38pKHjDTyjAWlZl4hrqjMKRl+VYTcckleX02VhdzhAB9+2icGQsW8t3Gfw5gvdTER27Iw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 10/10] tools/xenstored: Delay new transaction while Live-Update is pending
Date: Wed, 16 Jun 2021 15:43:24 +0100
Message-Id: <20210616144324.31652-11-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
References: <20210616144324.31652-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, Live-Update will, by default, not proceed if there are
in-flight transactions. It is possible to force it by passing -F, but
this will break any connection with in-flight transactions.

There are PV drivers out there that may never terminate some
transactions. On a host running such a guest, we would need to use -F.
Unfortunately, this also risks breaking well-behaved guests (and even
dom0), because Live-Update will happen as soon as the timeout is hit.

Ideally, we would want to preserve transactions, but this requires
some work and a lot of testing before it can be used in production.

As a stopgap, we want to limit the damage of -F. This patch delays
any transaction that is started after Live-Update has been requested.

If the request cannot be delayed, the connection will be stalled to
avoid losing requests.

If the connection already has a pending transaction before Live-Update,
then new transactions will not be delayed. This is to avoid stalling
the connection.

With this stopgap in place, domains with long-running transactions will
still break when using -F, but other domains which start a transaction
in the middle of Live-Update will continue to work.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 10 ++++++
 tools/xenstore/xenstored_control.h |  2 ++
 tools/xenstore/xenstored_core.c    | 49 +++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_core.h    |  3 ++
 4 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 1c24d4869eab..a045f102a420 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -131,6 +131,11 @@ unsigned int lu_write_response(FILE *fp)
 	return sizeof(msg) + msg.len;
 }
 
+bool lu_is_pending(void)
+{
+	return lu_status != NULL;
+}
+
 #else
 struct connection *lu_get_connection(void)
 {
@@ -142,6 +147,11 @@ unsigned int lu_write_response(FILE *fp)
 	/* Unsupported */
 	return 0;
 }
+
+bool lu_is_pending(void)
+{
+	return false;
+}
 #endif
 
 struct cmd_s {
diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
index 27d7f19e4b7f..98b6fbcea2b1 100644
--- a/tools/xenstore/xenstored_control.h
+++ b/tools/xenstore/xenstored_control.h
@@ -23,3 +23,5 @@ struct connection *lu_get_connection(void);
 
 /* Write the "OK" response for the live-update command */
 unsigned int lu_write_response(FILE *fp);
+
+bool lu_is_pending(void);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 9eca58682b51..10b53af76ac5 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -338,7 +338,20 @@ static int destroy_conn(void *_conn)
 
 static bool conn_can_read(struct connection *conn)
 {
-	return conn->funcs->can_read(conn) && !conn->is_ignored;
+	if (!conn->funcs->can_read(conn))
+		return false;
+
+	if (conn->is_ignored)
+		return false;
+
+	/*
+	 * For a stalled connection, we want to process the pending
+	 * command as soon as live-update has been aborted.
+	 */
+	if (conn->is_stalled)
+		return !lu_is_pending();
+
+	return true;
 }
 
 static bool conn_can_write(struct connection *conn)
@@ -417,6 +430,12 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 			if (!list_empty(&conn->out_list))
 				events |= POLLOUT;
 			conn->pollfd_idx = set_fd(conn->fd, events);
+			/*
+			 * For a stalled connection, we want to process the
+			 * pending command as soon as live-update has been aborted.
+			 */
+			if (conn->is_stalled && !lu_is_pending())
+				*ptimeout = 0;
 		}
 	}
 }
@@ -1524,6 +1543,9 @@ static bool process_delayed_message(struct delayed_request *req)
 	struct connection *conn = req->data;
 	struct buffered_data *saved_in = conn->in;
 
+	if (lu_is_pending())
+		return false;
+
 	/*
 	 * Part of process_message() expects conn->in to contains the
 	 * processed response. So save the current conn->in and restore it
@@ -1543,6 +1565,30 @@ static void consider_message(struct connection *conn)
 			sockmsg_string(conn->in->hdr.msg.type),
 			conn->in->hdr.msg.len, conn);
 
+	conn->is_stalled = false;
+	/*
+	 * Currently, Live-Update is not supported if there are active
+	 * transactions. In order to reduce the number of retries, delay
+	 * any new request to start a transaction if Live-Update is pending
+	 * and there are no transactions in-flight.
+	 *
+	 * If we can't delay the request, then mark the connection as
+	 * stalled. This will cause new requests to be ignored until
+	 * Live-Update has happened or been aborted.
+	 */
+	if (lu_is_pending() && conn->transaction_started == 0 &&
+	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
+		trace("Delaying transaction start for connection %p req_id %u\n",
+		      conn, conn->in->hdr.msg.req_id);
+
+		if (delay_request(conn, conn->in, process_delayed_message,
+				  conn, false) != 0) {
+			trace("Stalling connection %p\n", conn);
+			conn->is_stalled = true;
+		}
+		return;
+	}
+
 	process_message(conn, conn->in);
 
 	assert(conn->in == NULL);
@@ -1629,6 +1675,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->pollfd_idx = -1;
 	new->funcs = funcs;
 	new->is_ignored = false;
+	new->is_stalled = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->watches);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index dac517156993..258f6ff38279 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -110,6 +110,9 @@ struct connection
 	/* Is this connection ignored? */
 	bool is_ignored;
 
+	/* Is the connection stalled? */
+	bool is_stalled;
+
 	/* Buffered incoming data. */
 	struct buffered_data *in;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:01:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:01:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143354.264256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltX2i-0006Lu-LE; Wed, 16 Jun 2021 15:01:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143354.264256; Wed, 16 Jun 2021 15:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltX2i-0006Ln-HS; Wed, 16 Jun 2021 15:01:24 +0000
Received: by outflank-mailman (input) for mailman id 143354;
 Wed, 16 Jun 2021 15:01:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltX2h-0006Le-94
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:01:23 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad9bd4ce-de2c-489f-afb1-9ec5cdcc1bb1;
 Wed, 16 Jun 2021 15:01:21 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2104.outbound.protection.outlook.com [104.47.18.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-21fd_TuyOIm1h4jd0AZOMA-1; Wed, 16 Jun 2021 17:01:19 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6861.eurprd04.prod.outlook.com (2603:10a6:803:13c::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.24; Wed, 16 Jun
 2021 15:01:18 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 15:01:17 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0209.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 15:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad9bd4ce-de2c-489f-afb1-9ec5cdcc1bb1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623855681;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yVRTu0CfEeFesjRETUlB3Z+ftIEmd6Fu6G7zHSTMe9w=;
	b=lW4wtMgAmwRstwNkXvwSqlKM4TiKhiqWq6mb5vEUmio0RIwzMihqardHPkfu46Ob1T36LW
	umeoOl6Zz4ThFC6i2GxYIY7xd8VrururB7y10NR95hTDPK7onzhOqz1tkEHSbz48odtfyG
	LtjpVHaLitEMGGGVZrow9yLDaI4aV1M=
X-MC-Unique: 21fd_TuyOIm1h4jd0AZOMA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iXKbNE6I8vjYZ6lLSeuqxy7kB9HAilJdZqGwAWBj+lJmbFE94drAiQt1GpO/XhRwnHn24yyYwtgLd/3yf1nx3dRUg3jphIqyaiEn1jMTG3Bk6xe/akrfPeHyXbWHHFyHStCmz9WY/K00bPKmUiZawfcokfStdbibo7eZF3rywnGqaa1YR/bK1gMvhDXX1kIU/4/HlpnvE+wKRLz+ToAcYg9en7hdZ5afBJY0dQKodBKP6K2lljETwC5StwbMw5LAGyKY9Dn4gRofpt6AcN2FuC4ejfGN4FGZaq3jvbHDyXgNLbRkkT/d5eHWqe2JGft8ZOqabsvrS/1uhTuvieZX4A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3lFacDmEIodEaoxeJPMMmvxpxSTyv01Hwb2O7xxbCpc=;
 b=XXKxOYVVWlmCe6ODR8ymkc37QGGF/QnfpCQfBk36U82naQ2+ojgPkqNwNQW29VwUj+axRS0F9crbO9o1f2iyUar7STLp8TaiXSYXEAi0Tcq5bcaybbFPC6YEey8Ep7q+SF2Gg3lCBDI9JEgfSVQocE2lQViKwD7A/YEeCKU+a3tBRiXq61EBkQGstAvi8EBO2UcV9TwyIvg9CRCe/yO40VsWyv8xovHti8KA9VDXyeMrFw3kMIblRGm11cimlYECByePXV1JuzXNig5h4rxQDBd5jFmwnmjx7ioZ3m/Bvpwq/p2zWPDAHrrliccqDO3YKEoXVHg+S/6rG1NNMRMb0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
From: Jan Beulich <jbeulich@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org,
 osstest service owner <osstest-admin@xenproject.org>
References: <osstest-162845-mainreport@xen.org>
 <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com> <YMoI8YZfOvogwOMY@perard>
 <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com>
Message-ID: <52c8fd06-c60b-2765-b25a-47c28f98427f@suse.com>
Date: Wed, 16 Jun 2021 17:01:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0209.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::29) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c5d45577-52fa-4795-55ec-08d930d79803
X-MS-TrafficTypeDiagnostic: VI1PR04MB6861:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB686196F34B9D4C3B5A5FAD51B30F9@VI1PR04MB6861.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	rDtQ/YAAm6OZzqnCVIBWpIWZfM7jXg2GhqFUf9Az0BIsSrl52mZryZPbbve8osEd5hsqXbQLs78x9mkNy1XxQgW1TeIhBwm6KEReNuw2roXEYIMFNxl2KcmNvtFm4gy/UPQ4bTznYuDWAQYnBbTuvyJmYuzek6u0s/4YaWt4jA8eO/5pzNHdf9atao2zZjyfOUlbAw9IRfrqRufuieWFlAQMee/rqkJxzpsbt+4dQXqozWpx+R/k+B427QR4OTUo1DXykOhQnFfJFlEr/2xBEAGXzK3cQJsfk/VC1Ulb9a8xcK+/lexBqRMCc6f4YwcVLu1b/wPQ6Nd/KLT4H6Fu+NJtumjJ91FEQC2MrQGRrCVVw7Cgj/T/6dOCelNXo1HZXKgqU3rG5TpAtwrhOK/ugRAsqD9WRcnHirWyUs4Ja8OfPC8gPGTc4UBtKZktOrGa04hA1J+8lwFG+OWyvNlp9i3tL+3iBYKb6fKkSZloCnsrMbrawspxYyktBbpvzVBC6/2PQtPSueWvwS5XVaaVyPAaEReKph3+nH9hHdNKEw4ELLWhyQ+UXGWmKdQfV1Zu2bvLs/nbTMA9j+aObDP+6SVi/JymOChP7SXUPkhqsRp1tRUG5KQ3KPvTU8YAkqpJNqMsnQpFxWevn4U8W25/DRPyxrAFApiO2wQE8iifpwdhnnET1d/wTseBUrEuoqtBkiBHleay6eHDThkO+fldgMcsvPzg3epcnaO8fsgvjKfXoybvAnI0dtl52+KCJC8vQ+A678S06g22gK/DhiwdagCbW5RZTJ109bVbQKWbZBuREQbeJmRbtEuWVj+KyIwZaVfXVaQ70dlNIwZ+vCVt5Q/oGcxogvWtltTM6vaA3wk=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(366004)(39860400002)(346002)(376002)(136003)(966005)(8936002)(8676002)(31696002)(5660300002)(6916009)(54906003)(36756003)(478600001)(83380400001)(4326008)(53546011)(66574015)(38100700002)(2906002)(6486002)(956004)(66476007)(86362001)(66556008)(16526019)(16576012)(26005)(186003)(2616005)(316002)(31686004)(66946007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?SlvCYzGPbKndFUaQzeGu0m0UQKZHUQoBauVQ4ua+GQUI/juPXSmeHnPQFwf8?=
 =?us-ascii?Q?HkHLPSDuSDEvNp0JHnbWvwm6QFHdiDTuYKMPjTyXRzkEX6Ca65QJfl+Os2r8?=
 =?us-ascii?Q?SEsPPg/KFq0DpbFBEtrR31w5VRpPp9eU3vrSuHy54QULoSmx0iKBdf3ec9oZ?=
 =?us-ascii?Q?YbHbHYjhoLPezi4PuHjg0G+cUINA0rTHp4+A5uOxWGxVwLAiU6I/2ZfbDQwL?=
 =?us-ascii?Q?osvzN6lBsT9BLQ7mQE6WHpJUXUBVIAoCFLkhRzxp+FMFPzyXGkpMKKqqiKd2?=
 =?us-ascii?Q?rIpwEnXQFOX+XyQCTfhPorXXMSlpNV6UIT7IpsX39GLCf8bIYbWufWHxRv5Q?=
 =?us-ascii?Q?TkbFNhHCChydzYaChmWYqQiSSq720HwwNqE14StvKvOQ8jG3GZAHBVN5nboV?=
 =?us-ascii?Q?1KJJQU9JJipNrHmBVExkpR/FToWC2xjVOZk2EWyVENPQUgJiLbZhOOQpGF3Z?=
 =?us-ascii?Q?QRWdtMhL2TM1Lu3ziOGdbiIxdw/pTTZsoCVqK/mxBdEoOo3OT/0jjcaoiOtE?=
 =?us-ascii?Q?wian4kzSZw1p1ez1KUx5SrxO6z0hW1C7FHEDxAt7QHi0qtVdxCwweCzvMrj3?=
 =?us-ascii?Q?rySDktKvyIYquzhCTsqqrMYqASyeZPn+JRSphuwGgDtJvJYuRfiLqRqoE9Kq?=
 =?us-ascii?Q?e86OoC+FFFTvvTJ7DZS42qeXEmUOr0Ha2y8+bgJGeAj483d3BJlw4pmoet69?=
 =?us-ascii?Q?RdhuglAljz/Hdh8xhRh2mMYTPrIVPd2fz9n5qILvKcguIAcN/GEh3uh9YvUe?=
 =?us-ascii?Q?aB6lWdhs4JppdJF7sB+nGwDTkszr29Y40GuvR+xDq7qEj2izAz0z42FasAUK?=
 =?us-ascii?Q?8C2wyDchPX0fQ0195/keAQLlV3oc5ZJCU4v/m50cai2QMd5+MCPIW6BaK++l?=
 =?us-ascii?Q?++pLA0Fi7JzKK2U4m1ZTC8aG5EU7gEk0daNgN1u7NehaRvWQBiR2QyiVkJD5?=
 =?us-ascii?Q?q8z9VfT62d0bhRQRgpTwyvbjULRlqlUUB3svqful4oUy/PzGlCwFrm0iHl02?=
 =?us-ascii?Q?cY5OjyqWZR35wiRG6g+AL63phwyDv4RtppZY3LL7dwFzpm8+Szwvz0vgxgwB?=
 =?us-ascii?Q?n8ZVQseaxftKYQjDi8gs7aTtEHZ54VCWBX+4Uht5y1gcFXUOOv9x2bGTBJ9C?=
 =?us-ascii?Q?fcAzp1ntJjTZiWiTVqIWiz19qOwAjW7V2g4Ku/NHG1482sYm38YGSz4BQR1b?=
 =?us-ascii?Q?DPqy/IY/CHOerkdSkBib05WzCTaEJGvQWU+EKQ99HFXLxJZvqBBQCX0yyQrm?=
 =?us-ascii?Q?8bKuf90xjVUBvz2YG+rfGM08p4Kg5EM6Lzq/J6LHc12AhPCHseLGq0hJBGI4?=
 =?us-ascii?Q?hEshwUSnqZf6y92rXggmOGNM?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c5d45577-52fa-4795-55ec-08d930d79803
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 15:01:17.7442
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yslFqtSrRz2/ggtbIQDKbaXStHx+T+wjLiNINhGyluQ5iABP/nqnJEFFZf6lJ2xuy6p8hHzyZl/OV6g39N/tSA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6861

On 16.06.2021 16:49, Jan Beulich wrote:
> On 16.06.2021 16:21, Anthony PERARD wrote:
>> On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
>>> On 16.06.2021 08:54, osstest service owner wrote:
>>>> flight 162845 xen-unstable real [real]
>>>> flight 162853 xen-unstable real-retest [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162845/
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>  test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>>  test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>
>>> There looks to still be an issue with the ovmf version used. I'm
>>> puzzled to find this flight reporting
>>>
>>> built_revision_ovmf	e1999b264f1f9d7230edf2448f757c73da567832
>>>
>>> which isn't what the tree recently was rewound to, but about two
>>> dozen commits older. I hope one of you has a clue at what is going
>>> on here.
>>
>> So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
>> rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git
>>
>> master is what xen.git would have cloned. And "xen-tested-master" is the
>> commit that I was expecting osstest to pick up, but maybe that has been
>> set up only for stable trees?
>>
>> Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
>> file is exist"), it isn't the same OVMF that is being used. We used to
>> use OvmfX64, but now we are going to use OvmfXen. (Xen support in
>> OvmfX64 has been removed so it can't be used anymore.)
>>
>>
>> So there is maybe an issue with OvmfXen which doesn't need to block
>> xen-unstable flights.
>>
>>
>> As for the failure, I can think of one thing that is different:
>> OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in the
>> guest physical memory, in order to avoid creating a hole in the RAM, but
>> a call to XENMEM_remove_from_physmap is done as well. Could that actually
>> cause issues with saverestore?
> 
> I don't think it should. But I now notice I should have looked at the
> logs of these tests:
> 
> xc: info: Saving domain 2, type x86 HVM
> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
> xc: error: Save failed (1 = Operation not permitted): Internal error
> 
> which looks suspiciously similar to the issue Jürgen's d21121685fac
> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
> de-support") took care of, just that here we're dealing with an HVM
> guest. I'll have to go inspect what exactly the library is doing there,
> and hence where in Xen the -EPERM may be coming from all of a
> sudden (and only for OVMF).

The *-amd64-i386-* variant has

xc: info: Saving domain 2, type x86 HVM
xc: error: Cannot save this big a guest (7 = Argument list too long): Internal error

which to me hints at ...

> Of course the behavior you describe above may play into this, since
> aiui this might lead to an excessively large p2m (depending on what
> exactly you mean by "as high as possible").

... a connection, but I'm not sure at all. XENMEM_maximum_gpfn returns
its result as the hypercall return value, so huge values could be a
problem at least for 32-bit tool stacks.

What page number are you mapping the shared info page at in OVMF?

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:02:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:02:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143364.264267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltX46-000746-00; Wed, 16 Jun 2021 15:02:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143364.264267; Wed, 16 Jun 2021 15:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltX45-00073x-SW; Wed, 16 Jun 2021 15:02:49 +0000
Received: by outflank-mailman (input) for mailman id 143364;
 Wed, 16 Jun 2021 15:02:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltX44-00073p-Lt
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:02:48 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3101d6e1-cdf3-4913-ad6a-c3a5d47d3639;
 Wed, 16 Jun 2021 15:02:47 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GF2kuV7
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 17:02:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3101d6e1-cdf3-4913-ad6a-c3a5d47d3639
ARC-Seal: i=1; a=rsa-sha256; t=1623855766; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=KvW1hzRFySl5uI0enubpqYuKtCTNyuoEmbDMlpx7AGSdfH9QMeVrDvyi+69KtSfw7a
    3ILiwtbqCnFJHmWKYHQtvkdICJjRN0VXzctgUxn89iLibn3gQhtqDRij/ZDo5a2gAfXs
    R9WVkwL4i6R6gAEI6UBZSJo4YtFNHUFYaPXsTKzQOm4T2dIWB7EoAiO3Gi9pPBF8trjX
    7L88TCTEpeY+iLW++7RQ7xV4xAUw/X1UPiBR5tYzF5aHRQQNHiwO64doPoL02Ysg5sse
    iada6c+PxrSKHtuiAd7YyRzgxxRgiZRG3oBXNeeKPQAw5RqEDhZu+98KEfwNMN3DS/42
    e/AQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623855766;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=MxdCG8G3tmZS3gwuCaeIcp7C7dm8wsQt/HqzUW435yI=;
    b=RvjStZrCth4DBQ6WI4e16hH4CHpEwFv83hjbrIGAQPDpuUwaXXbNhSg9/kCoOEt82E
    OBgiQ4IF5W6Ai4iHeCPZQiYHuseIlgbngk6mwha8d0BpvEkRZMUHQIMMobOcWoZC4ng3
    qqdsW0lo6H7Ln6PJQYRfrr7Ru4J8oR1PY8MTbX7tSr1JNiOAdpzlsI2Cdo9NDOSbxVDw
    faCl0IhgxY6LzPV99M+Y2thxjGr2AeK46QWi+DR0zxCWMCgDxUY/c+y228q3GRC7WDzw
    C+9rRWLReIU2Aj5F7f87CgS9sCmAa8GiBsQhPje+qgOH+RZ5Jj+jdLlNnQgpUm9spyw5
    P9Uw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623855766;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=MxdCG8G3tmZS3gwuCaeIcp7C7dm8wsQt/HqzUW435yI=;
    b=ME610usUe3eedHJtHGdtW/ADg2yGHkZ1C8oNf+QOhH+uRNJMEx9d23N2q6YKYdzBX4
    xhPFVUwel5CtVMSqViTYOUaRBUlcvLgQpNyy9O3Z7EAi9jciq1vFA1Vi3U3sVRylsaj9
    GVLDGK2LJ1PhOw9/6M+eApAQLzUWezloMdWF87Jm2b9WJP/cee7MqE1LEKxcWFxfPTae
    LKrq6X94IRfMk7kpYzVDqFiJfiKfM10bZKH4CMXYIWxSVR1oj/b3VLhAW2oqZ08yS//x
    tHfVq8uIe8SP1F9njibmxiE5nNYRT+IjyvV+4R9aKIs4j1UhfLZ20rdYdiIbKOHUZOE8
    HU9Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Wed, 16 Jun 2021 17:02:38 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <20210616170238.376cb13d.olaf@aepfle.de>
In-Reply-To: <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
References: <20210616125129.26563-1-olaf@aepfle.de>
	<968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Boa920dlP4wAxY26fdCZPK+";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/Boa920dlP4wAxY26fdCZPK+
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, 16 Jun 2021 15:50:24 +0100,
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>          new_max |= new_max >> 32;

Lazy compiler? I had hoped this is a compile-time constant, which
evaluates to zero in 32bit builds:

    if ( sizeof(unsigned long) > 4 )

I guess an #ifdef, as done in the old code, will be needed.

Olaf

--Sig_/Boa920dlP4wAxY26fdCZPK+
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDKEo4ACgkQ86SN7mm1
DoCVew//dh8U8RNU6ewCoKH3Rif8Hd3AOJn1r8Y6EVtPcs6ju5+TxYZzIhHryqM/
P3K/Wu+LeFlZ4NWv7Y2jyFHmEu7H4Gw1albGxCDNa+DP3ldjd75FR5eAY8DtFhmU
/LSOT84lDDkEsY+ID17BfgYikCNP44brNMyWEG73/d0764cH5r2LQ/f10EslpUYG
n1VsiheNr06jL21Z/HD8d9i5BRdobSGq328vmL1nSPQ4ITz9kWhLvT5H8WpriiJV
1Om2a7ayl8eyJoWR3KkSvX+LJJBdg110kBlAO+mFAVxSNzyuJGnv7D/BjcmD7Ase
g6WlDL6Xlcx2Kn329Hb03sEAlLl/aaqOVdcj4i+/bmXb/BDlhWDIC4Jb43w/Igj5
VUO+/ALOdBoGJlOs+TjErfBqdj5uX0AV3mfEWNJugB0/hFssnFM2s3eBTOK1D3GI
3bJ/bEAk1fE0IfOszeZq4ZgMxCpjm+m3kjudYOPiR5crJJRai2c2pOlIPsuQaakL
mbWnO1L0WslI3ZxLhUXsvt4P9Fjd2XqmcJBjBrowOZG7xY8Fe6vPzTHZY+HC38Dz
DHXMvOP9SiIsrxRtOrcMPejqUnh5cIhVukFnKVVkmNc9KwSBcmfr6M6Zr74xb5GP
wg9tNg21sF5Cii1z3laeJ2qyWt8JlhB+SX2Gw3DGrg2OqbXyWnk=
=0WkU
-----END PGP SIGNATURE-----

--Sig_/Boa920dlP4wAxY26fdCZPK+--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:12:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143382.264277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXDg-0000AJ-T6; Wed, 16 Jun 2021 15:12:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143382.264277; Wed, 16 Jun 2021 15:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXDg-0000AB-Q5; Wed, 16 Jun 2021 15:12:44 +0000
Received: by outflank-mailman (input) for mailman id 143382;
 Wed, 16 Jun 2021 15:12:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QzdS=LK=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ltXDf-00009y-Dw
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:12:43 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe398a16-51bf-4ce6-9f81-1df8f429cb86;
 Wed, 16 Jun 2021 15:12:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe398a16-51bf-4ce6-9f81-1df8f429cb86
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623856362;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=4OHMNBsyC4VgF/ZEWfsxw0GmwEl9ZlrkDb/L+SKMo1g=;
  b=EnJ4wuxMFuzVYumkYi/OaL8nVChfvekb+NtU/8ml+XWBw92smrASC5cS
   2F6VHFA1fdtFSYp3Prq4HJxPNrHD1kZTJROdvpR2gJrEJuz4Gu08ux79k
   +1b3BKB9lZH7EwZoXLvK4oToabwKifaoDMXOtnDC6hEr6JDBgPdkoSHjh
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0MKoFemF0jnFkhsTgYcSxVl1bRzQJnsJFOXZ3ol5k5f4f0/b5ysRQ1ueN8/OoWuciJHgIcRg2R
 SrpOja0H81mSuohxwGnM6aOyKJTjOYepR3R2pfuGs4kTkqthGhWVq6yiF9Ovrvr+I9ddMqrNme
 Bi5rWvXIkkLS81QBKIgaSzCrMPhBkUtca416D/OYXaNZcFETolgF4PgAv7dvV5VjseGpOsCpUj
 lwl/tqlHQQEO2uAVxbdaHxxPRWcrUSCOKGo+WsDljl5z8adWTorPThNc0TRIFpL1CQAmSJ6GG3
 Qak=
X-SBRS: 5.1
X-MesageID: 47855158
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:9Ylkga0CLrFmu14Wv338sAqjBL4kLtp133Aq2lEZdPUCSL39qy
 ncppUmPH7P5wr5N0tNpTntAsO9qDbnhP1ICOoqVotKPjOHhILAFugL0WKh+UyDJ8SUzJ856U
 4PScVD4JSbNzZHsfo=
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="47855158"
Date: Wed, 16 Jun 2021 16:12:38 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>,
	osstest service owner <osstest-admin@xenproject.org>
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
Message-ID: <YMoU5gLQEVBkmnLC@perard>
References: <osstest-162845-mainreport@xen.org>
 <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com> <YMoI8YZfOvogwOMY@perard>
 <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com>

On Wed, Jun 16, 2021 at 04:49:33PM +0200, Jan Beulich wrote:
> I don't think it should. But I now notice I should have looked at the
> logs of these tests:
> 
> xc: info: Saving domain 2, type x86 HVM
> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
> xc: error: Save failed (1 = Operation not permitted): Internal error
> 
> which looks suspiciously similar to the issue Jürgen's d21121685fac
> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
> de-support") took care of, just that here we're dealing with a HVM
> guest. I'll have to go inspect what exactly the library is doing there,
> and hence where in Xen the -EPERM may be coming from all of the
> sudden (and only for OVMF).
> 
> Of course the behavior you describe above may play into this, since
> aiui this might lead to an excessively large p2m (depending what
> exactly you mean with "as high as possible").

The maximum physical address size as reported by cpuid 0x80000008
(or 1<<48 if above that) minus 1 page, or 1<<36 - 1 page.
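That sizing rule can be sketched as follows (an illustrative C sketch, not Xen's actual code; `max_guest_paddr` and the zero-argument fallback convention are hypothetical names for this example):

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Compute the maximum guest physical address per the rule above:
 * the CPUID 0x80000008 physical address width, clamped to 48 bits,
 * minus one page; or 1<<36 minus one page when the leaf is absent
 * (phys_bits == 0 stands in for "leaf unavailable" here). */
static uint64_t max_guest_paddr(unsigned int phys_bits)
{
    if (phys_bits == 0)          /* CPUID leaf unavailable: 36-bit fallback */
        return (1ULL << 36) - PAGE_SIZE;

    if (phys_bits > 48)          /* clamp to the 48-bit limit Anthony cites */
        phys_bits = 48;

    return (1ULL << phys_bits) - PAGE_SIZE;
}
```

For example, a CPU reporting 52 physical address bits would be clamped to 48, giving a limit of `(1 << 48) - 4096`.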

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:15:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:15:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143389.264289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXGf-0000nn-Bv; Wed, 16 Jun 2021 15:15:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143389.264289; Wed, 16 Jun 2021 15:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXGf-0000ng-8P; Wed, 16 Jun 2021 15:15:49 +0000
Received: by outflank-mailman (input) for mailman id 143389;
 Wed, 16 Jun 2021 15:15:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=my07=LK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltXGe-0000nW-1z
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:15:48 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 082c210b-e951-4567-8da5-a50e2d2947da;
 Wed, 16 Jun 2021 15:15:46 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C3977219C7;
 Wed, 16 Jun 2021 15:15:45 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 860C4118DD;
 Wed, 16 Jun 2021 15:15:45 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id Om+YH6EVymBadwAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 16 Jun 2021 15:15:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 082c210b-e951-4567-8da5-a50e2d2947da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623856545; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BprycyD/7aFkT8LZPN9lfjXfWwl5CozQZntJN4O3i9Y=;
	b=WYr88gkwYtFuG+Czlfy7VKwdTcUrd95+8eRKJp2T2mWB3/z8bIZ63rUeAoOJv1Sn1vEXcz
	RNhnatNsM6lU2Fgfq1T4j5XHz12K8MK+W0Z9FVWjxkSRUDkKdBsIXDIptfnt7GQpogk27+
	m71HPpl1X1XnhJv5g6nO4lsYaedmFWM=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623856545; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BprycyD/7aFkT8LZPN9lfjXfWwl5CozQZntJN4O3i9Y=;
	b=WYr88gkwYtFuG+Czlfy7VKwdTcUrd95+8eRKJp2T2mWB3/z8bIZ63rUeAoOJv1Sn1vEXcz
	RNhnatNsM6lU2Fgfq1T4j5XHz12K8MK+W0Z9FVWjxkSRUDkKdBsIXDIptfnt7GQpogk27+
	m71HPpl1X1XnhJv5g6nO4lsYaedmFWM=
Subject: Re: [PATCH 01/10] MAINTAINERS: Add myself as reviewers for
 tools/xenstore
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-2-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <fb8de6a8-0b94-18cf-9026-7b85b461fb50@suse.com>
Date: Wed, 16 Jun 2021 17:15:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-2-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ITlDbs2hApUg5AldH4i42oyeXiViPyzZp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ITlDbs2hApUg5AldH4i42oyeXiViPyzZp
Content-Type: multipart/mixed; boundary="6lCflRAI8L9zNb7jpakD7eZCMSrzrwVPU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <fb8de6a8-0b94-18cf-9026-7b85b461fb50@suse.com>
Subject: Re: [PATCH 01/10] MAINTAINERS: Add myself as reviewers for
 tools/xenstore
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-2-julien@xen.org>
In-Reply-To: <20210616144324.31652-2-julien@xen.org>

--6lCflRAI8L9zNb7jpakD7eZCMSrzrwVPU
Content-Type: multipart/mixed;
 boundary="------------EC3D59429543C86D6BD0572D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------EC3D59429543C86D6BD0572D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> I would like to help reviewing Xenstored patches. It is more convenient
> to find them if I am CCed.
> 
> Signed-off-by: Julien Grall <julien@xen.org>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen

--------------EC3D59429543C86D6BD0572D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------EC3D59429543C86D6BD0572D--

--6lCflRAI8L9zNb7jpakD7eZCMSrzrwVPU--

--ITlDbs2hApUg5AldH4i42oyeXiViPyzZp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDKFaEFAwAAAAAACgkQsN6d1ii/Ey8j
Ggf+JdspmkxMsKUfprFzdW/a0S5A5hZLDVZORJyYH+BXyY5Q/IXVqBOdsOZXCwiiEJg5ou6wJzDT
YoVXqH/sp4YcZ7cXB/2van/FNO5WxgzFQomkbjSAyPkbaHjAimE09XXbOVoltBISpS3VNQDqbQfh
ddZwJ4zl6wS8D7IjW0UrHtXteq+Yh6Be1iqYIryam8XE0oHegveDpB/r6k3uIsMQXTSkIWKPtHW5
+1dYKSIfEkcEGKX92kca4DVcOeuY9+Rh9ZCSDHWAB5BGy5LilSLSNBToIguj4hS2T0EwFEsFvGZ0
YkZsv8fL19SoKB/rLU7B5BwZbyniOqHEcysYsrQ5SQ==
=Voas
-----END PGP SIGNATURE-----

--ITlDbs2hApUg5AldH4i42oyeXiViPyzZp--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:17:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:17:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143396.264300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXIR-0001Ud-SG; Wed, 16 Jun 2021 15:17:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143396.264300; Wed, 16 Jun 2021 15:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXIR-0001UW-O2; Wed, 16 Jun 2021 15:17:39 +0000
Received: by outflank-mailman (input) for mailman id 143396;
 Wed, 16 Jun 2021 15:17:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltXIQ-0001UQ-PS
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:17:38 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b6824e6-4fc2-4acb-8872-109fa8468554;
 Wed, 16 Jun 2021 15:17:37 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GFHPuZF
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 17:17:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b6824e6-4fc2-4acb-8872-109fa8468554
ARC-Seal: i=1; a=rsa-sha256; t=1623856646; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=drJV+278tqiI6bSBzkMJO/6Q4r4m65SwApY2ONXqUs0L8escI6k+effrgLcfrXhrSW
    rDu1l6NeKzISB+1eBGVO6fXic+ifROMhu6wHZ8VmwwmjxiQ4qPx5nVr+5IPniVXHbZOx
    egQb1+BjRw81+LMrGcSRfhmRzqcEzTumR2LpBn/eCP7bbpoL3Qc7lAkAww0YPIB3FHwV
    fiuZqXDU9LUz/oiMMVoxExOLPtEneRsVCYf8kx/z0pzBz//ErpeXfEIQVI71a7uUGUez
    +M7BuZ+Njg73R1P09/NmPAD8RDirYNdnIi4BO/tqEZybZlF6Zbu8TiIRBt1tQ8ZNk7es
    nutQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623856646;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=rcR3fIihUPNHsD2xDx+zqWqCaDx02jTmSvzpKhcgqHM=;
    b=hF25v1cPFC7ylRsKrpApG/7ynky1ii3FCAdefe4xYqd3Is57l3DOPeona/5Vz/i5g2
    q6cTCBjhRcfqsSSKZ9f0VaXDHnq1B4fHrugAq8VEzOwARKf1zBXk4rVd1fQuWlw4fZdc
    BMYfSSR4tbbZdx6Sn3/Oox8K86RkkLti3NDHH7HDl5D3T0mNvazYL1Jf08kChWfNwtIZ
    Mdi93nyEXacT6fIp6L6edqqYH+NOOJ8OLy1BOmLSHZhS/TTKfjvzjLAufOptA512ofLi
    KrfZIzm2DBiABs6U+/Rpr0l0oPxPthZgUMB9/G0him2OZs5usYXmgA93pSy4aUj7+SfV
    fDjg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623856646;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=rcR3fIihUPNHsD2xDx+zqWqCaDx02jTmSvzpKhcgqHM=;
    b=BFOBHfge/qZfI1AmbQ/GJ+HXO7kKgML60qrxT/kQYWIeIoWbTHZz6RZrywumeEStIE
    0Om4ylhKg3NjPh2M8pGrRbpraPRG4oMxu+UIUQC1A+IuPrGLMG7qNng4KtTTURmsJy43
    HkTCMsPuO9umU3hytF32Ipj2q5Z3r4QBoR3Hiy+dh0dTNxKKqOpIJ9QaoZcmseOvzjke
    tYwv6SKdt1xo9gGNhCQ2bi1FlWfYb7dYSUHDmS/s1priebMwJ2YrFbLuINNAZXH20PFs
    Sm0yFS0ITjBocxjKUM6xoMigdWNSIFBB4XtVg65tjtV5nlvoGjyAA951U//JTBnMNeR8
    T7JA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Wed, 16 Jun 2021 17:17:18 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210616 04/36] tools: create libxensaverestore
Message-ID: <20210616171718.012fbb6a.olaf@aepfle.de>
In-Reply-To: <20210616125129.26563-5-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
	<20210616125129.26563-5-olaf@aepfle.de>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/xLZ=8GvyLqie6N7GGHDqxGe";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/xLZ=8GvyLqie6N7GGHDqxGe
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 16 Jun 2021 14:50:57 +0200
schrieb Olaf Hering <olaf@aepfle.de>:

> Move all save/restore related code from libxenguest.so into a separate
> library libxensaverestore.so.

This additional change is required to cover non-x86.

--- a/tools/libs/saverestore/nomigrate.c
+++ b/tools/libs/saverestore/nomigrate.c
@@ -17,8 +17,7 @@
 
 #include <inttypes.h>
 #include <errno.h>
-#include <xenctrl.h>
-#include <xenguest.h>
+#include <xensaverestore.h>
 
 int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t flags,
                    struct save_callbacks *callbacks,

Olaf

--Sig_/xLZ=8GvyLqie6N7GGHDqxGe
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDKFf4ACgkQ86SN7mm1
DoCRtA/9E6lx9RAuzknFAfSRQe9Iz2NyEjENU+bGkir2e7FdTvp8xnDEMhf1W/kb
GfycDdiXdFpRAzdhpD4PB6RfrNQL6+gOdASgJ5XT86myHmm+L/kEm07uOocsoVg3
28m9ePezS1itDa01DoiUQAscnZq+wdGM71uX3dCtj2Fa/c8MrFUtFd7ffnI2J1m6
ThuDBZqqnXCbblNQvq3zqjvZs47BMCH31PLcktzuXAO7ImsyUA4GuFkMJR+EXZo2
/p+yhVy4nrWS/YiUPYBayELB2H+v3JDQGSt4QsY7l4BkixWTqUDDgqOwTrGuGa3C
7HKIX6t+69C9ovbedIK9USR5SfaIesOuXpDdWea8fawawKaiSwt9zfCHfwmYRdwJ
eEnRSregh5vvVxbDKMPgYrtkfgLLCIUAgCjraFR6hwGi5JhzKyYZrB9/wJznQ7ZP
9GABSCK3D1i6xEqWhfFEvMrtuu+rQr34J9+4+uTFtt6w66e1Ta4sfhcIIcxVayJD
a+x9G9/CcjlHQsG5o1oM5OsrASkrHNiDdqV9drDW2V4xPzFnO5jra7IbxYYZAQ0c
NRMK8l/ThNbs+QsNdZn4mPH+iicp08Zi+cdhjg46bB+GrNJUfine8FB5VJQ1wcpX
VRGpfdXA5K8f+x/Uo6AP4RWM7do+dMaRgXbgbZXLods28f44ukc=
=UXx6
-----END PGP SIGNATURE-----

--Sig_/xLZ=8GvyLqie6N7GGHDqxGe--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:19:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143401.264311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXJh-00026E-5d; Wed, 16 Jun 2021 15:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143401.264311; Wed, 16 Jun 2021 15:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXJh-000267-2F; Wed, 16 Jun 2021 15:18:57 +0000
Received: by outflank-mailman (input) for mailman id 143401;
 Wed, 16 Jun 2021 15:18:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NJFi=LK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1ltXJf-00025t-Nu
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:18:55 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b4fa1b8-3a64-4a02-a784-63879ed1aa02;
 Wed, 16 Jun 2021 15:18:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b4fa1b8-3a64-4a02-a784-63879ed1aa02
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623856734;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=bCyomFsR6dtfqKURitc5WY2glFEaC3FgGkeWvNQe234=;
  b=EDLy1OH9H5LpG7ef8iZlKRXxm1RFt8olK4eryjujgd98w+PjP7vy0skX
   /BIEHMkq6nmKMA3G/AQ4uHVbLsTsLdkaSev2exb+G4Ht5WCEgMXrdHva/
   9gIcoL7ToWP+FUDmtRJRl/khIlbDiI7jZErWs55Wimg4p6VOEBnQl2nby
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: FrowePqB5Fx01UIm359GYqUJBPVN1GEy+GhVhpEi+jq8WLxZAm/4D4KAkt4T5xTpB6W6+5ec0v
 br7v8eatKp7c1EbBNKgiGblN2ecCGMcuOOuYmVDMfmRWRE8u3yPqo5CvqnGG0iE1yer4Wl1evF
 jADxRYV7sQrwMx+P36HkfrGn7aODhMV/yRwt73CqaL3VYgjd5jvMdQajOlTtW3jb7DoXK/DnB4
 tLSafMbl90TcmqRBIDLVnmir0Qfrh8LYihpTeaReDG7gKIQlS0wKI/H/MX/rng1TRjsuawGswX
 WAo=
X-SBRS: 5.1
X-MesageID: 46007795
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:MT5q/qFpBRyI86eYpLqE5seALOsnbusQ8zAXP0AYc31om6uj5q
 eTdZUgpGbJYVkqKRIdcLy7V5VoIkmskaKdg7NhX4tKNTOO0ADDQe1fBOPZslvd8kbFltK1u5
 0PT0EHMqyUMWRH
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46007795"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, "Paul
 Durrant" <paul@xen.org>, Owen Smith <owen.smith@citrix.com>, "Chandrika
 Srinivasan" <chandrika.srinivasan@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, Konstantina Chremmou
	<konstantina.chremmou@citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>, "Li Zhang
 (3P)" <Li.Zhang@citrix.com>
Subject: [PATCH] code-of-conduct.rst: Add Stefano Stabellini as a Conduct Team member
Date: Wed, 16 Jun 2021 16:18:26 +0100
Message-ID: <20210616151826.224793-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

With my upcoming leave, Ian will be the only person actively on the
Conduct Team.  Stefano has volunteered to join the team, so that there
are at least two members.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Membership of the Conduct Team is a global XenProject decision,
and so needs a vote of the leadership of all active projects.

Please vote by replying to this email with +2 / +1 / 0 / -1 / -2, in
accordance with https://xenproject.org/developers/governance/#project-decisions .

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Owen Smith <owen.smith@citrix.com>
CC: Chandrika Srinivasan <chandrika.srinivasan@citrix.com>,
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Konstantina Chremmou <konstantina.chremmou@citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
CC: "Li Zhang (3P)" <Li.Zhang@citrix.com>
---
 source/code-of-conduct.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/source/code-of-conduct.rst b/source/code-of-conduct.rst
index 4cb33da..963d605 100644
--- a/source/code-of-conduct.rst
+++ b/source/code-of-conduct.rst
@@ -81,6 +81,7 @@ sub-project. The current list of Conduct Team members is:
 
 * George Dunlap <george dot dunlap at citrix dot com>
 * Ian Jackson <ian dot jackson at citrix dot com>
+* Stefano Stabellini <sstabellini at kernel dot org>
 
 Conduct Team members are changed by proposing a change to this document,
 posted on all sub-project lists, followed by a formal global vote as outlined
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:20:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:20:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143406.264321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXKx-0003QO-F4; Wed, 16 Jun 2021 15:20:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143406.264321; Wed, 16 Jun 2021 15:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXKx-0003QH-CC; Wed, 16 Jun 2021 15:20:15 +0000
Received: by outflank-mailman (input) for mailman id 143406;
 Wed, 16 Jun 2021 15:20:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltXKw-0003Py-42
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:20:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltXKu-0004wd-Mc; Wed, 16 Jun 2021 15:20:12 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltXKu-0001dm-Dz; Wed, 16 Jun 2021 15:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=2fPoTx6uDZ5idqHNU3a1FlscP88oQauxf4LJ1kncppk=; b=r3HTH4FiJz+nlT00CNnZPTjLfj
	OyUMIQlNRVryk4xizL4wEFxDwM3gfhFu+Db3xVXZmzzUMtvt8OwjvvmJo1GMQ8////BUJkHwW6BME
	kX3awF9uEh3Z/GrtH2L66mVGgnmNcWZXeDkTXOkTHYIhY+fL/A8EomtjDjFuEbF8xxZ8=;
Subject: Re: [PATCH] code-of-conduct.rst: Add Stefano Stabellini as a Conduct
 Team member
To: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Owen Smith <owen.smith@citrix.com>,
 Chandrika Srinivasan <chandrika.srinivasan@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>,
 Konstantina Chremmou <konstantina.chremmou@citrix.com>,
 Rob Hoes <Rob.Hoes@citrix.com>, "Li Zhang (3P)" <Li.Zhang@citrix.com>
References: <20210616151826.224793-1-george.dunlap@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <64a77547-a942-7da6-5ab9-478c612e52e9@xen.org>
Date: Wed, 16 Jun 2021 17:20:08 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210616151826.224793-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi George,

On 16/06/2021 17:18, George Dunlap wrote:
> With my upcoming leave, Ian will be the only person actively on the
> Conduct Team.  Stefano has volunteered to join the team, so that there
> are at least two members.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Membership of the Conduct Team is a global XenProject decision,
> and so needs a vote of the leadership of all active projects.
> 
> Please vote by replying to this email with +2 / +1 / 0 / -1 / -2, in
> accordance with https://xenproject.org/developers/governance/#project-decisions .

+2

Acked-by: Julien Grall <julien@xen.org>

Cheers,

> 
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Owen Smith <owen.smith@citrix.com>
> CC: Chandrika Srinivasan <chandrika.srinivasan@citrix.com>,
> CC: Christian Lindig <christian.lindig@citrix.com>
> CC: Konstantina Chremmou <konstantina.chremmou@citrix.com>
> CC: Rob Hoes <Rob.Hoes@citrix.com>
> CC: "Li Zhang (3P)" <Li.Zhang@citrix.com>
> ---
>   source/code-of-conduct.rst | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/source/code-of-conduct.rst b/source/code-of-conduct.rst
> index 4cb33da..963d605 100644
> --- a/source/code-of-conduct.rst
> +++ b/source/code-of-conduct.rst
> @@ -81,6 +81,7 @@ sub-project. The current list of Conduct Team members is:
>   
>   * George Dunlap <george dot dunlap at citrix dot com>
>   * Ian Jackson <ian dot jackson at citrix dot com>
> +* Stefano Stabellini <sstabellini at kernel dot org>
>   
>   Conduct Team members are changed by proposing a change to this document,
>   posted on all sub-project lists, followed by a formal global vote as outlined
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:34:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:34:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143416.264332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXYN-00054x-NP; Wed, 16 Jun 2021 15:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143416.264332; Wed, 16 Jun 2021 15:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXYN-00054q-KW; Wed, 16 Jun 2021 15:34:07 +0000
Received: by outflank-mailman (input) for mailman id 143416;
 Wed, 16 Jun 2021 15:34:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D5mW=LK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ltXYN-00054k-1U
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:34:07 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3dee62f-71b2-446c-86b8-bc92f235e330;
 Wed, 16 Jun 2021 15:34:06 +0000 (UTC)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15GFWNBK009743; Wed, 16 Jun 2021 15:34:00 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 397h4bgenf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 16 Jun 2021 15:34:00 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15GFXw3m083559;
 Wed, 16 Jun 2021 15:33:58 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2107.outbound.protection.outlook.com [104.47.70.107])
 by userp3020.oracle.com with ESMTP id 396wawf7yw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 16 Jun 2021 15:33:58 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4302.namprd10.prod.outlook.com (2603:10b6:208:199::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16; Wed, 16 Jun
 2021 15:33:56 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 15:33:56 +0000
Received: from [10.74.102.136] (138.3.201.8) by
 SN4PR0201CA0056.namprd02.prod.outlook.com (2603:10b6:803:20::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.21 via Frontend
 Transport; Wed, 16 Jun 2021 15:33:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3dee62f-71b2-446c-86b8-bc92f235e330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=0UVoaecywuccJw/TpunixyYtBjn6fc61YbswWJ1qBDg=;
 b=xzYNaGlmS81YcWGLdCXWjDeqK3qHaVRTaZppPscrNcgiQKthPyTEvrLp12IleqbFchCn
 3fqHKLCzrP0WtIayRpQXHVxDEUs6DMX9Pub1WZG+/WPzDhOyei4Wu95RYOMHm9RKjet+
 0sKNFEO9vYUHY1sPPHz4dWPtGapHn8zbd/AKJ4NaCbeCvAwrQmw0CscTiTBZvywQA47b
 6Zcj0bwKMj5K3Mc8/3TNv3yKGKgXrEJHnZon39SIZe6XhhHG1eaqiy6eXdIvkPuDtQ9d
 oywuvdPaitRcty5oBqunOWGpME60nQMD0wFNBffuN5IO59VxsuG5nHePAPhSVD/7j/lf uA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Rg2n1noyrhwtCwVqdSSlxhvxsIrcvjeGHvnlXovhVl0qIPcq+tsEHWYnOGqA8lbum1UHUqjsmi7gDpo6zf86ArJk/KhUk+wdqXvt1DevePru21nrG67BRVaHg7W4jq53YnrUoz00zbnTMGL3HSCLDyg7BwCQhh2NLmdHBdSExhecJs8xvzHKOGdRhK1Na04e1puxSLOnmU0qsplwR2Bo9RL3CGZmQkotq9LhzQdArDX9ZIUrUE3FA8VitciAHhEa8IRzVs+CqZZe5W/S0bb6mxwW2Q8OADbt7ouBMHFd/MZdqigPw65/dQxC/LT7dNUsB3KwUv2IJQSeCs/YJvhZEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0UVoaecywuccJw/TpunixyYtBjn6fc61YbswWJ1qBDg=;
 b=HeNgp0assIB3YXP+Dlyn0Ha0p85OqyXh5tgI1WYn3tQyqqc5ON0fIXbD8EKJCUnCBX4zmI5MxIFQvAS2pvCqizgxM9yCB7UtNbTXIjZEK6nfbwqi8w4PvlStwwwyBAek+vHNR9N/pTjq6F8hCQzFCwC27wXG2kDzJ6sKQeDqkt1IogDXn2grnH3yxW6mR3hoPKR0/C5JS2eDCSGP2RfHuoOG0S4eJkn74k82adNar0MglDBmkhEM+5hpTyl4BUzu3vD8Z/hm5ohCF2r566IMAhXJd/52qgPwDxJ3Aa6gLpRV/jc+JKPhCxyciIPuLKseiZ0o084cfFAg6GM1cZ+nNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0UVoaecywuccJw/TpunixyYtBjn6fc61YbswWJ1qBDg=;
 b=WwYocbkNqIm7NTJS71Za0BtUV0/8JymR9QDiJEJqMn7AeGUDQKKzQyJ9YEoiC2+8hMmu6xlhkrlvY0Kaslwo6r2KfMcTE9AAUlbXFVj4q0UDWHSxDIVMS0/fu+z+m+j+9zxUgANfkmen8kyBj7cTDhbBZ9pdjlOuFzWELx8XFaQ=
Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma
 ops
To: Christoph Hellwig <hch@lst.de>
Cc: Roman Skakun <rm.skakun@gmail.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
        Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
        Roman Skakun <roman_skakun@epam.com>,
        Andrii Anisov <andrii_anisov@epam.com>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <20210616114205.38902-1-roman_skakun@epam.com>
 <20210616114205.38902-2-roman_skakun@epam.com>
 <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com>
 <20210616142148.GA764@lst.de>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <83353b34-2abb-9dfc-bed6-21d500abf49f@oracle.com>
Date: Wed, 16 Jun 2021 11:33:50 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <20210616142148.GA764@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.201.8]
X-ClientProxiedBy: SN4PR0201CA0056.namprd02.prod.outlook.com
 (2603:10b6:803:20::18) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ccfe135f-8924-4ce1-5686-08d930dc2757
X-MS-TrafficTypeDiagnostic: MN2PR10MB4302:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<MN2PR10MB4302657A2BD264A00DC6F4C28A0F9@MN2PR10MB4302.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3513;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ccfe135f-8924-4ce1-5686-08d930dc2757
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 15:33:56.2451
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: erxKNzPRbruHC/K18avd5iS39eZ0XqN14JHxHYH5TUulaKNdM4zBFHW58NtPXdqKdjmt7MQcNIfqJb+3L5RaqaknhiqjWpfrsOnXuNM0uJU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4302
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10016 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0
 suspectscore=0 spamscore=0 bulkscore=0 mlxlogscore=999 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106160089
X-Proofpoint-ORIG-GUID: Lv6eqeZvCVRLAZrZKufO7wDmYVG0EeVG
X-Proofpoint-GUID: Lv6eqeZvCVRLAZrZKufO7wDmYVG0EeVG


On 6/16/21 10:21 AM, Christoph Hellwig wrote:
> On Wed, Jun 16, 2021 at 10:12:55AM -0400, Boris Ostrovsky wrote:
>> I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there.
>>
>>
>> Christoph, would that work?  I.e. something like
> You should not duplicate the code at all, and just make the common
> helpers work with vmalloc addresses.


Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know).



-boris



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:34:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143418.264343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXYj-0005UH-WE; Wed, 16 Jun 2021 15:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143418.264343; Wed, 16 Jun 2021 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXYj-0005UA-T1; Wed, 16 Jun 2021 15:34:29 +0000
Received: by outflank-mailman (input) for mailman id 143418;
 Wed, 16 Jun 2021 15:34:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltXYi-0005PW-D7
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:34:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1939a9f6-3413-4085-81c2-5e58f8636553;
 Wed, 16 Jun 2021 15:34:26 +0000 (UTC)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2051.outbound.protection.outlook.com [104.47.6.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-29-KYGs_NoTMo-ajRnFbRz4Fw-1; Wed, 16 Jun 2021 17:34:24 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB2957.eurprd04.prod.outlook.com (2603:10a6:802:4::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.22; Wed, 16 Jun
 2021 15:34:22 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 15:34:22 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR0902CA0010.eurprd09.prod.outlook.com (2603:10a6:200:9b::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16 via Frontend
 Transport; Wed, 16 Jun 2021 15:34:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1939a9f6-3413-4085-81c2-5e58f8636553
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623857665;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=x8tpKxy8KlSHxMCtuG/eMRMNCstS9i53aq8u/RpJNYM=;
	b=bc1EhOGr5jPCK1PZVwJdVdDYTwuVD6ox1MbS+I43KfIuCYag/fmaVuopkGr+J6yYVu1FNf
	OBA3CqAUccSynq/Lxd9x/AuCwbWVkdXK9gbcNSpfeiqhsVL3ZcYxtTvbZ9B1H23uGY/E8D
	en3pUbcNFtY0CoLxQ7MYJQBNAHXknTE=
X-MC-Unique: KYGs_NoTMo-ajRnFbRz4Fw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YwXKCXdEz9Le/YFS5D/Yfx8kMKWO2AP4Q3TLV27TSC1KzbWIGnJbMgytfhNvDVh2XGZu7fNHd/uEYZQv8BY5lF7k/wFagNlh8UM8hBXdvehTaxYQ+t5JEtl8tnQhkQEXGZMggUs13k4j92CDaEVShwv0/aDEvkZ60V1jDRAxXpgEaK+0iYR4BYGQk2Q8amv55MU0nFvioSxbQzv0X4PaU46sDXhNcfK74CEqNtcs6a9dKrkDc3RZygxKpkuuWBIFyfrY8Vf25DwGbKhNu3/t+4fGC1s869oNZ3smZ5zrrV0CXrFwGQw3fxXil0F3ESqjpL2p+N8qJqVC6WaiC38WPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vaAwwp88N1zeKfF5XWWmXKzUDLfyl3gzUR91ONM+jNo=;
 b=Iqyoe6y3XA5LA7X5zsfctlRQqCAH6RY7HrXo7RGVSEm9Bwi8zoR8lPhM6GErS1UiTzYOEsNZyFTHvmtknOyVM6HO8zex8DGtM37RdCm9hKMDSrLVK5Qdpqf7IoyUqQttu5zV1t2EU7hu9tgNexIw/vPuV8sUv1YMx6nZiMhItb51Z+0h8j9ldnEmV/zlEPmEYc0cjgJ891FiI/VrVr1wXKzoqKNtnXLeUdDZNGwZG4/EoJr/GWR1qIo7Dx8zJK3CZJ8GjfN8rsc/20Gw47UupJIvvGFaBPrX9eRL3DINse1WV+8Lt1BQuNqqrVJ9d87lJYt5ZoVSJDuNLFT939UBgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [xen-unstable test] 162845: regressions - FAIL
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org,
 osstest service owner <osstest-admin@xenproject.org>
References: <osstest-162845-mainreport@xen.org>
 <8e39ca8f-3202-7d3a-d65d-7087634bd49e@suse.com> <YMoI8YZfOvogwOMY@perard>
 <f8c4151a-6dac-d87c-ef46-eb35ada07bd9@suse.com> <YMoU5gLQEVBkmnLC@perard>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8e4dc36b-10e3-9da6-eb4e-91644c2f417c@suse.com>
Date: Wed, 16 Jun 2021 17:34:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YMoU5gLQEVBkmnLC@perard>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR0902CA0010.eurprd09.prod.outlook.com
 (2603:10a6:200:9b::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c9ce8ddb-12a1-47b9-5b66-08d930dc36f1
X-MS-TrafficTypeDiagnostic: VI1PR04MB2957:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB2957194625E41F530153A6E0B30F9@VI1PR04MB2957.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c9ce8ddb-12a1-47b9-5b66-08d930dc36f1
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 15:34:22.3500
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IsGDBnS6Q3hJ8ChT4qvtof4zdiieVuvsaVSWy0RnO9o+REBSyuQ3HOzKex3JlipGaiGiA4Pm+u4/TqthezneSg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB2957

On 16.06.2021 17:12, Anthony PERARD wrote:
> On Wed, Jun 16, 2021 at 04:49:33PM +0200, Jan Beulich wrote:
>> I don't think it should. But I now notice I should have looked at the
>> logs of these tests:
>>
>> xc: info: Saving domain 2, type x86 HVM
>> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
>> xc: error: Save failed (1 = Operation not permitted): Internal error
>>
>> which looks suspiciously similar to the issue Jürgen's d21121685fac
>> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
>> de-support") took care of, just that here we're dealing with a HVM
>> guest. I'll have to go inspect what exactly the library is doing there,
>> and hence where in Xen the -EPERM may be coming from all of the
>> sudden (and only for OVMF).
>>
>> Of course the behavior you describe above may play into this, since
>> aiui this might lead to an excessively large p2m (depending what
>> exactly you mean with "as high as possible").
>
> The maximum physical address size as reported by cpuid 0x80000008
> (or 1<<48 if above that) minus 1 page, or 1<<36 - 1 page.

So this is very likely the problem, and not just for a 32-bit tool
stack right now. With ...

long do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len)
{
    DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    long ret = -1;
    ...
    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));

... I'm disappointed to find:

int xencall0(xencall_handle *xcall, unsigned int op);
int xencall1(xencall_handle *xcall, unsigned int op,
             uint64_t arg1);
int xencall2(xencall_handle *xcall, unsigned int op,
             uint64_t arg1, uint64_t arg2);
...

I'm sure we had the problem of a truncated memory-op hypercall
result already in the past, so there definitely was a known problem
that got re-introduced. Or wait, no - I've found that commit
(a27f1fb69d13), and it didn't really have any effect afaict:
Adjusting do_memory_op()'s return type wasn't sufficient, when
do_xen_hypercall() was returning only int. Now on to figuring a
not overly intrusive way of addressing this.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:35:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:35:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143427.264355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXZe-0006Hn-EO; Wed, 16 Jun 2021 15:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143427.264355; Wed, 16 Jun 2021 15:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXZe-0006Hg-B8; Wed, 16 Jun 2021 15:35:26 +0000
Received: by outflank-mailman (input) for mailman id 143427;
 Wed, 16 Jun 2021 15:35:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltXZd-0006HQ-0Q
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:35:25 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3a83026-2ede-4195-9356-24f270d9223d;
 Wed, 16 Jun 2021 15:35:23 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 1777768B05; Wed, 16 Jun 2021 17:35:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a83026-2ede-4195-9356-24f270d9223d
Date: Wed, 16 Jun 2021 17:35:19 +0200
From: Christoph Hellwig <hch@lst.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>, Roman Skakun <rm.skakun@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable
 dma ops
Message-ID: <20210616153519.GA6476@lst.de>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com> <20210616114205.38902-1-roman_skakun@epam.com> <20210616114205.38902-2-roman_skakun@epam.com> <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com> <20210616142148.GA764@lst.de> <83353b34-2abb-9dfc-bed6-21d500abf49f@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <83353b34-2abb-9dfc-bed6-21d500abf49f@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote:
> Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know).

Yes.  That is why I'd suggest to just do the vmalloc_to_page or
virt_to_page dance in ops_helpers.c and just continue using that.


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:38:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:38:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143436.264366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXco-000715-TJ; Wed, 16 Jun 2021 15:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143436.264366; Wed, 16 Jun 2021 15:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXco-00070y-QM; Wed, 16 Jun 2021 15:38:42 +0000
Received: by outflank-mailman (input) for mailman id 143436;
 Wed, 16 Jun 2021 15:38:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qy3q=LK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltXcn-00070s-Ej
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:38:41 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 02bcbec8-0e21-44d8-8b56-fcd97d9fe283;
 Wed, 16 Jun 2021 15:38:40 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.2 AUTH)
 with ESMTPSA id j0415bx5GFccueh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 16 Jun 2021 17:38:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02bcbec8-0e21-44d8-8b56-fcd97d9fe283
ARC-Seal: i=1; a=rsa-sha256; t=1623857918; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=rXHzJ8PefZYdVyW2zZc++KIaC/+smyKlvlKRsyM3bj2Zq53opkZ71rRHfVTqU7KCrt
    KLhgILi9mSubGpan0oGsxMtsm8kf4+r2ocMbhzDt7SRxXCK8eyNHR6y0k6srEiWvXHob
    k4RxEZbBAcuEBnLEhH9o6YlsE8499CQOx3lnE4s1hMogs2ClwscZ6mrU4XXeJRUMod26
    ZoRyn2CqjCpmfgHnQHtd2eq3btEOh/c2LCZWDG657jKrtkszIVtgjBVjeLdqR6JiAieu
    Kh+MOqPSJFj/Lg9zcQnRBCAM0P6jYHij/Gi169jBszYimZvLJqjz5xizDH+B3jumYUcO
    KSdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1623857918;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Yqvfuu9UZegYYzHJL+ypLT9elgylsIYFTThnKpnRGSQ=;
    b=cD8VGMjkfhjlCL89hB95brrBZEvTVUNJKLA6jXklCQVTQ7x53+Ls2x6JYQdjrS6uik
    vIq9QWk0tAVXIwDWm+w3t7r9YIXo5UI4jeuYhOypgyNaTZnPrH+yqZcyNx/ncODH3SCG
    RQ3wcyDpZDYO2v8XrnVmo2iNnEP/6W1DvlO+QV1rI+pMVfuoupdvbaKHyh4xIVeSU+vB
    dEUDYlVAlwd4WvIDjna0vFUGrn1/YonnHUPWK7mOLe/RNCXUPnqn4OHrnYuCs5+IaU22
    UMEaNRmLlQhpGNGuiuBu8S1MldBhy5k629GfKnA/fksFhXQo6fEVsNfhoD5PrJCsrK9M
    G3IQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623857918;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Yqvfuu9UZegYYzHJL+ypLT9elgylsIYFTThnKpnRGSQ=;
    b=fSoTL3VCFwRlI2o+fKMeOK0du/MGqRZDDbzr0OC4Z/y093/Mn6O3Kozni/G5zYr1vb
    VzTDqQUJWeTnucvwnGLz2Kto0bt/4eRS+xDdDeEPmY6trew8STXJY2d7mEUMzjTPqAQ4
    jfnT0I+/+cAu5eqBmFAQbSK+nFdWn0gfSz3gwBSnC/qSMOOTI/0sUnnvpr4Phanxfxjo
    nms7INPTHQ47Staq0DPqDZuBoESwiph+tlfDklbKPRb+8N0yTcoLuUE2mBbNJD5EFv+u
    fxysv8Ms0sCAqgjs98MXlG3scFGGJiZzPQQyPLmdhAFcRILfKagrPjXipna4wGHqzG+R
    CzSw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Wed, 16 Jun 2021 17:38:31 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <20210616173831.5e8214bc.olaf@aepfle.de>
In-Reply-To: <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
References: <20210616125129.26563-1-olaf@aepfle.de>
	<968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/PRMvHEXRp7n8BNSmXZ70aZi";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/PRMvHEXRp7n8BNSmXZ70aZi
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 16 Jun 2021 15:50:24 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> 32bit toolstack build

as in i386?
How is this used in practice?
I guess such build should be marked as CONFIG_MIGRATE=n in config/x86_32.mk?
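
For illustration, such a knob might look like the fragment below (hypothetical; whether config/x86_32.mk carries per-arch feature defaults in this form, and the exact CONFIG_MIGRATE spelling, are assumptions taken from the question above, not confirmed against the tree):

```make
# Hypothetical config/x86_32.mk fragment: opt the 32bit toolstack
# build out of the migration code.
CONFIG_MIGRATE := n
```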

Olaf

--Sig_/PRMvHEXRp7n8BNSmXZ70aZi
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDKGvcACgkQ86SN7mm1
DoA8+g//ZAagq6PreHucLtraLk0capVlSKgSR9Xs2LArbtVmP+FmH3bFZhMmLb2o
IgdTckM+1TvUPEuOKFtFQENiomDAoc8kHjUT+xv6XpaQIPmY7u5qKAz5YQjPznyf
43pgvs/+7nqzbrL7pdBJWE+B3V4SaqVcThCKXdG94sV7MMFMJHaMw6KB9zRytsI+
v7CiCa8bRD6zr9FdNKaDWEWJeLZiltWn7kaVx+3Zup/H0Vg65sH5j3yhViHRE+y/
6JZ0fP7bENxyWwaxpbdVdKmzUDwrKk6pbtNV55tFHA97EDlOQqb1PYSefymNTZcW
eaD+asQ+a+p7Egyo46M9uwCtRR4dwdJaGkyn4nI20/xMfA3Gx8Mjyc6MhEtiOaKo
w+ilpH8+x7nJjRA32go8WbQUIT0Hw8x4JEi9/NQxC1v53vEEv1QvR6cYBF2GKJDc
eD4vanJugL/g/9zXE6L64wGtFwpylscTUtcDCjaaZOHCWuVehBOmEhE0lmPxAbBj
02wGzSvKsTvv5ifmcJIIyWR8jnXSjMl5tLxl/zPpbLACMQCXS6oHnpEy6hyBGNzo
s16eLtXUCR0PZ4pp1r4n0iY8iGU6rbIl6IbRnYaiGo9ca5TssbZTijum7hfjU2nq
az37pf2oSnQXH5bs49XRFcBGinkraEUb9myPM1+43fAuauOs8Z4=
=HFW+
-----END PGP SIGNATURE-----

--Sig_/PRMvHEXRp7n8BNSmXZ70aZi--


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:39:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143441.264377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXdV-0007a0-6e; Wed, 16 Jun 2021 15:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143441.264377; Wed, 16 Jun 2021 15:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXdV-0007Zt-2w; Wed, 16 Jun 2021 15:39:25 +0000
Received: by outflank-mailman (input) for mailman id 143441;
 Wed, 16 Jun 2021 15:39:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D5mW=LK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ltXdU-0007Zl-FI
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:39:24 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 486d339b-d571-4513-8ecb-8a9a9bfdac96;
 Wed, 16 Jun 2021 15:39:23 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15GFVgrr000890; Wed, 16 Jun 2021 15:39:19 GMT
Received: from oracle.com (userp3030.oracle.com [156.151.31.80])
 by mx0b-00069f02.pphosted.com with ESMTP id 397jnqr7r6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 16 Jun 2021 15:39:18 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15GFYxr4079246;
 Wed, 16 Jun 2021 15:39:17 GMT
Received: from nam04-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam08lp2168.outbound.protection.outlook.com [104.47.73.168])
 by userp3030.oracle.com with ESMTP id 396wap0u7v-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 16 Jun 2021 15:39:17 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4288.namprd10.prod.outlook.com (2603:10b6:208:1dc::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 16 Jun
 2021 15:39:15 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 15:39:15 +0000
Received: from [10.74.102.136] (160.34.88.136) by
 SA9PR11CA0018.namprd11.prod.outlook.com (2603:10b6:806:6e::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4219.21 via Frontend Transport; Wed, 16 Jun 2021 15:39:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 486d339b-d571-4513-8ecb-8a9a9bfdac96
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=9o4zO9/76jPH5jhgzUljYo/DDXq6qELutUc+AEOe64w=;
 b=lgYqKRI9MPy4nW7kNY2WaylETH8CBbHQUO2m6cLQ2Lh/hkDjeDfAATY3PeHYndQpakKB
 K+79nVgf/tzE2YkF6QljrxMZFGBNws2CYzGsJ8JBixwuq8LsZcZ+QVzJ8W5032g0YtRH
 FBwa4zFI/fBZ7ltbjazO4JGrdhSEPxPYyOAtmYEl7Dg6z5dKp5qZuPXInsDVta5Rpe+z
 Et5nwHgbP9VteRFW7CA2lKGHxvKPaxiFSUJ7nH+AjYcBuFkVPEPumzWM+imbHLPu2lSP
 d6MMmnb1zIp25L8v+l+Gj9dALkOSdKLkBA++T6gHh5VJfgcf6yk5oSSXF6qS6dE7+cdT vg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZBJN0eTMw0U99qaOm8CFe0MiCTUETF/LmmFsaNSxyMvZMRNQmyZv/iYmBaW975rseVcHsE9Qwh0UQwWl/5tr8lGaP38z/04qgsd+Kl1CVZBvMWtyTf4Bq6gYGS8nh7pBaW8QgVpeZyDlE+oOQa7Pv9zlp/uW5E5tVo0HK9tD37pCdCjgMQouqaOhFkWXceAOBby+WLucg3u8Vqlbeb1gLt33j/kihXtVSKPRoN0Sw8cPDTfsErYF7wxXEPAiggaiUSj87QXhkab5i3koV7rEiD+dbt8Z/VltUDOKwe9RRY08jpaAtYzlt9DQBYin04yPVtYhwjXLtdblUErgZkzRTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9o4zO9/76jPH5jhgzUljYo/DDXq6qELutUc+AEOe64w=;
 b=NnOIjolSG58jDXOfo0uCfTDlONJRkNijR9OFKJgmMUAs+VLGPH97tcO+7PqS20+OOO2i6/AM/ieR08rV2wEo9X4KYsuaI3jEpsVjbrMVp2dyUkTi1W1dX8jMm7GW3z4Z2yh1uSjPmV7GKlOtaEYxnBquw5A0Gh9hlq9NpceUq97LwnNyw7zKUaVDf9R7PV0R1vnIWusR/+xkmgwj2Kec8o9F6KVxp2IEpg8uPxaXx5aQ0l/i6mttbKkICpwjol3oj06Y5orjJj93DMCq2aKuXoLjNhYezkRvUxMv8X5QIIFqaNJ5iq3fKxsm3aI6A6PhXaH9ZbqrqjYuK5//H5hBNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9o4zO9/76jPH5jhgzUljYo/DDXq6qELutUc+AEOe64w=;
 b=ZtI4ZxZR1MLhG4gtn/sdI3YMq5LF4Zz26C/dm6Xsl7b4CXcvrm+xafMe6q23t47NXOrp865vcQ8oZ9riXvT1RrHdWG+/KbuAhf0PS53bnbINWkDC0jp3D3O0JM0HgODJO309kXfa4VSc2IWZIg48dA3HjK2UsPiNdow0D/FrS2s=
Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma
 ops
To: Christoph Hellwig <hch@lst.de>
Cc: Roman Skakun <rm.skakun@gmail.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
        Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
        Roman Skakun <roman_skakun@epam.com>,
        Andrii Anisov <andrii_anisov@epam.com>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com>
 <20210616114205.38902-1-roman_skakun@epam.com>
 <20210616114205.38902-2-roman_skakun@epam.com>
 <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com>
 <20210616142148.GA764@lst.de>
 <83353b34-2abb-9dfc-bed6-21d500abf49f@oracle.com>
 <20210616153519.GA6476@lst.de>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <d4e8c822-3f92-d552-018c-611a44299e28@oracle.com>
Date: Wed, 16 Jun 2021 11:39:07 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <20210616153519.GA6476@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.136]
X-ClientProxiedBy: SA9PR11CA0018.namprd11.prod.outlook.com
 (2603:10b6:806:6e::23) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9e8f31f4-5250-47c7-72c5-08d930dce567
X-MS-TrafficTypeDiagnostic: MN2PR10MB4288:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<MN2PR10MB4288A68D4DCCA1623DEE0A078A0F9@MN2PR10MB4288.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	0UL7UXMMgLuE6DoiuUnfADfQhdLKp/OHK9PL2ed4eTI9RLPXu+rnsKMvUJlm2fFswTnDXVqXJTPlKCrTPNoa1koZXYWHPh6ugeqeQAwc4uS6P66gBmxBk9m3PBsOBc/O0yHSry4ql8PXePej88tPUVlQtECskpa02NGtmkDzKKt0We9DuvzPKBmrMjgMVaKatHaiXgdbCs1T9bPcftthwi0rLN9+5iqggZe34tsDgWsrXRW3WZsAb+9D2t+L5hgb3MR/QCyLOmzmI/so8m4eycaPCuiUaekdH8GVsUIGIXR5hAfESIlMK07WhxFSs3jPdo8ekXKOAjclG6BOwO/Wa0OndGqP2EboMrj0btndkDdCuPcXYPVeFCawQBL4OljFxJRcMYpPQd62NkHidPF+eUXA01GR+50t5T91Sv4NjfGMKEovFcvrsFLCWII+4eEffl05j7wtyLknoB4T9VyG9+11K2g0snUHuKRQBXqyoWX+A+YwDgCPaOVmqXFxsL2n3CuLy0+ZFNwpJyubLQ3smv5xegS3ytgyjf3SPaOXepP7m7DKFYZEQLR59ZncxEFYXoAiMfCEclTsCZZdLmNZiDU5lUv/ZeZ26vupBWK23vtSA1vuWAuHvorOJ7bGwxkjhGwkqSZxoVjpi/9T6w7MqZQd1gJFfc/r7J+1kWV2CNEpDgdEwSHWcdaI3fhszxR9
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39860400002)(376002)(346002)(366004)(136003)(66556008)(66476007)(66946007)(7416002)(956004)(38100700002)(2616005)(6916009)(53546011)(54906003)(5660300002)(6666004)(44832011)(8676002)(4744005)(4326008)(26005)(16526019)(86362001)(186003)(8936002)(316002)(36756003)(2906002)(31686004)(478600001)(31696002)(6486002)(16576012)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?Nkl2dTgwUnhqcjY4SEFSNFk2VFUvTFo0UkJ1UFJJVkhKTUFXU3ZtdUVSNURC?=
 =?utf-8?B?U1hwUk1NcnZZOG5DMlBvK2dhenpseFBYY1ZWdjBNQ25yb3p2c0VnWjVjcUMr?=
 =?utf-8?B?MjFKS3BtdU9DakJ0ZmdxZXFleUJ2Y0hLT0VpYWFKT3Fna2JhK1JYU2ZNRkhS?=
 =?utf-8?B?UkJRY0UweEpQRTdCT2lKVmpqSDhWbndVSXVIcDdzS1J5SE9XOU5QRWRtcW9J?=
 =?utf-8?B?LzVDa1dMcmdaYjBkUTl5NDlaeDZObS9SMVY3Znl2VGMrT245TG5GWnRCVnhs?=
 =?utf-8?B?ZXhoTUZpY010SEw4TThTZFNGbEtER1QyM0FSSng1Ny9HNWR3UXkzSTV5bzhS?=
 =?utf-8?B?czJEUWZaeEJ4bWNSR2Vyby84TWN3WFBJZDZ2QWRCVGppQzQyUWdoQ2h0d0ZS?=
 =?utf-8?B?TU43R3RKeTU2M0hVVHVRbDhYMGhFS1NDaWRhS2FPOTRNbC82VGhnTjhlVzBu?=
 =?utf-8?B?bVoxczg5VzBDL1VCbW4wdGVhUU4vaGI2MThNTXlzd3QyWU5GU1k3Z3I5b1dB?=
 =?utf-8?B?V2ZsWGR5U3g5anJDSXBscVhPaWxhbFhxQmF3QXVXODE3NUl4UERTQlkyYlhZ?=
 =?utf-8?B?Y1c0dEQ3Z2JwSFJUTCtvQTV0RG1CTXhobkhNU3hGOUxhVUlncXdZY0M2TUZW?=
 =?utf-8?B?ZnVnVzNxaHh5M2JWNHR2bU16Zlk4bVlMWmtibXQrSjdVRkNPeGE3aXNoSXlW?=
 =?utf-8?B?Sys4S2w2SGtsOGFJRXRGR3FwdFQ2RHVCeUY2RTV0bEZ0NUZaeFMyWGloRENE?=
 =?utf-8?B?ZDh6ZHVYanJ5QW5mRzRzOGdNamVNdm54MERyYmk4NWVxbldoNTYvR2RWZnlX?=
 =?utf-8?B?UWZtKzd3bE1pUHQyU3hHNWtIa0cxTXpnVXlCUE5hVWZqSjZGVlB2elp0RWdh?=
 =?utf-8?B?bXRLZTFhV1Y0Z2JjS1Azek9CNDhuNDJ5UkVlL2tpR0ZmT1VwSFN0bHcrNXVz?=
 =?utf-8?B?MlB1Z1pwdlRvVVZPWWJ1ODBkTFVCNWE5aEhjNEtxeVA0Y2xtN0dqWmtNb1Vy?=
 =?utf-8?B?L2ZUMjI1aTk0eVVhdG1Mam0wOXM0QUZmcTRwcTdTekVnaS83Ri85aWYxRG12?=
 =?utf-8?B?WjJvd0JsaTZXcExEYjBzdjZvTlBxVkNPZVNlNFRnOHh2dWlHbkF5ZUVlZDhM?=
 =?utf-8?B?K3E0d29CdTZlZEsvNitCTWVzYU1wc1IreGN2VVUvaUFYK283ckYvdjJwNGJU?=
 =?utf-8?B?MVg0RzRERTZKaW1KVGRvMWV0UEFZK2V1bkJnWHROaEllOTM0NExHdWhkK3A0?=
 =?utf-8?B?OXc2MmlHZklQd3laM1lHSzFXOE5HUTVuc3p1QVIreFRZdzdHcC92OHpVTjll?=
 =?utf-8?B?cXVRNHZua01sU2ZlQnNYcS93RkNod1RvWnFMRTFDcDRBd1VSQkYyVEhlVWRI?=
 =?utf-8?B?WmswcVdaMUNHU2lBNVpCZTZ0bk5BOEpjOFo0UnNndGtsOStmejRscy8xbWxZ?=
 =?utf-8?B?ZGdPdDd1OUwzQTRzWXpYUmVSUVNpUS9jTFc2MnBFUEN2QzQ2RmNSWllBR2FD?=
 =?utf-8?B?c09idEt2MTBJNU5Cbko2VDVvdExnSVBTeUs4aFZSZUNRdVdwbzZkZ3JoTWN4?=
 =?utf-8?B?R2lsSEZTcXlDUjVTdGI4SzFwUy9vODhCRGZtdlpReFB5Zm1rdzNXcDNRWFZN?=
 =?utf-8?B?NHBYYTBBK09jZ1dsMW14amVFT2ZsYytJWUt4dXk3Wk4zTjUvNE44aGI4TlE3?=
 =?utf-8?B?MktrOHpBSWh0aVppckY0aXFCdE1DRGo2bjBuNklYL3NVZDlqNU9mdDBmaFVN?=
 =?utf-8?Q?FeaCpjFqICWItRg11+Kq79sBQTi7tPQBowjFqzF?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e8f31f4-5250-47c7-72c5-08d930dce567
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 15:39:15.1267
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3ZeGRt4qcV4Kmmp09CJ4TuWSOOBYt6SRkKrTekZXPfkq/lHraD1a0QZoLKMYnIf6pyLGCAfIAOptWqZCfafWCYlURdIeTSPUlYYnmdHbsSs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4288
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10016 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 suspectscore=0
 mlxlogscore=999 spamscore=0 adultscore=0 bulkscore=0 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106160089
X-Proofpoint-GUID: XLkRvkGr5dscz3vWtyy9po6ZAzrd3rO9
X-Proofpoint-ORIG-GUID: XLkRvkGr5dscz3vWtyy9po6ZAzrd3rO9


On 6/16/21 11:35 AM, Christoph Hellwig wrote:
> On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote:
>> Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know).
> Yes.  Thus is why I'd suggest to just do the vmalloc_to_page or
> virt_to_page dance in ops_helpers.c and just continue using that.


Ah, OK, so something along the lines of what I suggested. (I thought by "helpers" you meant virt_to_page()).


-boris



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:43:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143449.264388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXhE-0000dY-Ol; Wed, 16 Jun 2021 15:43:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143449.264388; Wed, 16 Jun 2021 15:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXhE-0000dR-LC; Wed, 16 Jun 2021 15:43:16 +0000
Received: by outflank-mailman (input) for mailman id 143449;
 Wed, 16 Jun 2021 15:43:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOo1=LK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltXhD-0000dL-2k
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:43:15 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1431fd4-a4a5-4fd0-93cd-361daaf22b1f;
 Wed, 16 Jun 2021 15:43:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1431fd4-a4a5-4fd0-93cd-361daaf22b1f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623858193;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3qlFCcCVUZQNy7jAPqgTvE3VdNIeqftGoCfJPtivP3Q=;
  b=Xszm2k7YufyEAw9pLJ/WJXYPfzqX/XcR93aM+9vDCD9JG6NHBPyH1tsn
   N7YkGcHOIHR6jVZqeSM3hQtavLFOSaTUP3/C4FYAGmPMicgN7L2OEZr3c
   K+lgFjxe/fYnV+pVBGbJ8XWW1nyy1z/loMpspJZOgQeFTlOktSiNYylaK
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ohFkRoilwqXOXeg3BAoXD1Peau5PjDIjWrRgQv6k4UwpzIYxnZn7If1RN1AwJlkKlztv6C9p9c
 FQCqlfd6YIyfwOvKooaqGE8QKMRlCFuWnzFQBh2HVEVAbMLCXOjlskGPObFR5o6QF2dEFf3iw/
 AC0Hp/6BeN7bVkPGEP2I8OkN3Nmjr7ySdzhEQdKC0qzStYUKTPsnlrgdnoGnvbYagRC3Q9z/+U
 1LK2VTBa8Iciawz4/8sq4+EHfjVauNQxFhhRd6MZ6t51nmb7NP3zpiwRUb3jllWfUx9sTtmtlp
 YKI=
X-SBRS: 5.1
X-MesageID: 46280069
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:GAGdA60LDssbaF2gf3nyqgqjBEgkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7Ar5PUtLpTnuAsa9qB/nm6KdgrNhWItKPjOW21dARbsKheffKlXbcBEWndQtt5
 uIHZIeNDXxZ2IK8PoT4mODYqodKA/sytHWuQ/cpU0dMz2Dc8tbnmBE4p7wKDwMeOFBb6BJcq
 a01458iBeLX28YVci/DmltZZm4mzWa/KiWGCLvHnQcmXGzsQ8=
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46280069"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AoUFc12oDpbmTTca4EPdGqL4TCjVhGVmFC0UxqQiTFQaluJPCP8QuPs6rU7MYDhdok5rH+1LEgobAIYqXqRVaANZFD8hBJlAH9KMJ28MqLVrgy9QTXqyVBj0kTnQGI3+F+Hiw+nuqIY9oJabShsFlCcyz0zhPTp1NwsxNINViuueMNfdD/0pPntqj+vWSdcfFgDoI+Jt+ZDUWEB+O4meZ7yD8gr40gC2bUoH/D7OJrXBQc9cgWcVduabFGb8su6dAp5rFCj5y6G1/nMVrCxOOX0DB5ufhDKyN3HQyYNIUydbyIdPJDBOH4fGX882x+w7yXXPF82dlg7eEUvCgg5/Gg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0mZ1lbsU7yf9CgkDPRlTbyZCvhu9rRsQqF5qrUkQ2t8=;
 b=c3WqLsTSa1mojrHxL1C4JTAL01DbhJul38QqS91r5bsshZenkYnU1xolwNOl+TnkRPo64blZBWh4fdYW2mfYGQ1PhYyHacShdgLI+qVgJx1YjAQu39DuIk3e2iN150GIcLIS+3NMavX5Ob1Lb6kdHk4UI6pz/QD7e9vXDymqznENUoqO3t75You8ndSECcOtrgd+In/ENv1SexOrWDlwqZb0eZqzY0pkWbloCq3+Xy13JWqKgkRjvJn3ahsR07uNGI3FRYRLE6uZLHhA/jX/gfwzBCodxhCMoQyZpOpsV4S1HF2sJTHoI1wro0PPVcmHbaLHCj9xwPfWnSqBGem4kw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0mZ1lbsU7yf9CgkDPRlTbyZCvhu9rRsQqF5qrUkQ2t8=;
 b=EU7q3LiSBjkVFhBvgLve730nACAxYfGPU0K2YbpN/9v2fv753Voe8ZrxlVDNohWxetksxtttK+L65HiLOxZ8yxZnY1A2eCPF2sl/zF6Fk+aqh64HPyciG6wt2HMj1x8EztzHMIeUT/3mDUpccuwSGyfgbT/W7n4Ta6Si4qx2QyY=
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "committers@xenproject.org"
	<committers@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
Message-ID: <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
Date: Wed, 16 Jun 2021 16:43:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0419.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cbd24b8e-871b-480d-ada9-08d930dd6fa9
X-MS-TrafficTypeDiagnostic: BY5PR03MB4997:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB499763B94AA8178F7E14E454BA0F9@BY5PR03MB4997.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: N6Iv1TRRA0idh5HbYWXCeIcciq19CeC84qYsTpmBMcGVyGMApeh0boWrCD1pJFHhWkGFKt3Sv+xD3OVnS/vmqQgLzyOizXPOe65Lni10qckY8bXWhQkZiPhB1Xht0EI83YyZoku9TXNDr4oiRDHiJZZmxmrR5P+CJ2D2G8W6eTyOvUigjhDublOrDIRMFRvYcIONIsJwoEN0svtSNSPHeIVPsU9+U8UILcY14VRWVysDdkcI8P/8kZXfbVRy7+QmcZ2qOnQG+VSoIIUY8Z7J8BBBWXcLKs99d3s6dLaF7SxIB1HR4jk92VanjjmxlEeTm7R1rdETwFd9XeUUOnYH3YnIupy00lRijELHPTeGTxTr1ix94AfgX0+2Vp/UMPVAwKQlM0DgoHF7pchUyrJ030SqY7nkHcoyj28TEKWty78L1hBn3niM9zMo4FSiTUYdSuLjegEeWpAuV8aVUvN5FVVtWaSgd+HMVSSW+TvsMcfh1L7z8ldxC/kYAaxxtzNPgDxqU3oIjCqg9ecPkYk3ske5z3eh4AMrdtWfIz5avRPnD4asPMiPQJGc9CmodCGUx1pH8rkQr7KM+AFwDo/xYXsBdBakbGdgKcbiZwRi7fJCgdb5pFoPflfKcuAhoG1ztjTyWIWnswzuPDV193XdPgzUf6Z4le/mfTVQ0ZPwdQ8Hwd7WrNj6k7bgMfxm6qJumkWRxllhGkGd59+Mkm05PiEUyVslu618HXzkzgbdqQx1ch/RfU9VLcJWxwtHgZmEREB+lfXKxns4xp/oX5NuPkdibL5anjKewUi8DX5nUFQKSOze2wCrLnzcpUM9Aa4c
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(39860400002)(366004)(396003)(66556008)(66476007)(66946007)(4326008)(966005)(16526019)(956004)(478600001)(53546011)(26005)(8676002)(83380400001)(186003)(8936002)(2616005)(16576012)(6916009)(54906003)(5660300002)(2906002)(36756003)(6666004)(38100700002)(31696002)(86362001)(6486002)(316002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?YU4vVmN1ZHVqSHBQZlRObEptU0M2VW42TXNZclhyRHQ3MTdocWEzZnovRlFM?=
 =?utf-8?B?dWV2dStWaldVZFNxZHVBbkRWUEh6QUxHOGNHeDQvNDRQbHNXSzJELzN5NWpJ?=
 =?utf-8?B?WTZjaGRxWFVqcE1wY3lXTkpIK0xWclpHMy95MUhxN0dncVNxbUJ0OUp1MFpu?=
 =?utf-8?B?R2RPNkU2NjRoc3VIdFdWNFpyZnpjdjJOdmY1MmJZTHhYM3huVU5EK01tRmNl?=
 =?utf-8?B?T2FSNWMrUy84N2pKZFd3OFlEZExxV0p0bjhBVEllM3Y4Y0JJcFliNzZncStz?=
 =?utf-8?B?bENDR3MrdXREWk5rY1JBK29EcmtKWWlTWGNQWTJOZFZZWnJieU5PSWtPQnZS?=
 =?utf-8?B?UzdUTjBoNlBqV2t3bS9lRlY0bHBQSXZBd3JsQmNrK3A4ZXNvME9KbVdXRGVM?=
 =?utf-8?B?T0NLU2ExcHlRL1lkZUVkVkdSV2I3aTFEWDE1K0R0QjExdG5iTWxqM21DWkpV?=
 =?utf-8?B?WGxXN08xV1BJK3FNZ0FuVzFvMG4xUVZsd2ZJOUZUakErWTJzZ1pDZi82OXpr?=
 =?utf-8?B?MmtET3pvQkRuWFpCZ3BGRWwwVWJ2bVBNZXkweG4vejFnT1ZROG53OUVXYmJV?=
 =?utf-8?B?WVVod3FzOEpFMWM4UjVQN3hCRVc3OTU5WXhJU2diQVlBRE96RERGcG4wQ3Zw?=
 =?utf-8?B?OEtiMjExazBwNFkwV3NwRk9tMDIrckdMR0JkdUpxdFFZbDdiWWdiaHNtL1A3?=
 =?utf-8?B?akdNZXZ5OWxyWHBqbXNSbFNVU1NGY2h4NnpKVWRFMFRoVWw3bkN3L0prRFkv?=
 =?utf-8?B?dzdHMGlySU1DRFpGUllpYmlVc0N6cTU4cW9Xa0FId002WTBGTDJWSGxZUmlo?=
 =?utf-8?B?TzJMRldQYzdzNzR6VFJWSVRkeEFNNFZpRkpZb0dDUzAvQkV6SlVNdlFGajhm?=
 =?utf-8?B?VFd4QWx5dHdaaW15eVdaV3FjM2tqQjJDNkZ1eGlkNXZQR1RSVEJrZXJjTm0r?=
 =?utf-8?B?Zm40SGUrWnRLWGppZzhBaFNudTJIbEk0QWRtUEl2ZjVOWUI2RCtzd3dBb2Fm?=
 =?utf-8?B?NW1CZVMxN1BGZXo4RGlpWGVpRzN1blExMS85dEF5UmRKcjFyMytkbjJ4UXdZ?=
 =?utf-8?B?dzk5TGhNcWlXV2wrNVNUOXg1OU1Sb1d6NEVUSWRTSzRnVTBxSVA2L1dpQ21J?=
 =?utf-8?B?ZEtGb3I3SEZKSDRrZzNGVDNLdG1CS2hab2hzQ3NwWGRwbWJudE1SVHhCOGlK?=
 =?utf-8?B?QnZFUGd3WnFSa2QwcWt1dDlueUd5Nzd1N3h0MXEvTHNnazBqWE1MWUcreVNC?=
 =?utf-8?B?OEV1aW5BenlFbVkrcVk1V0FaT2RZVWplUmxNSC9ydzMzVEtRSkdTakZCWUky?=
 =?utf-8?B?Q2xqWG93TGp5T2dvandVekovY0lBUDQ5R3Q1aThCUXlSbUdwMDllU0dGSUox?=
 =?utf-8?B?RW1QU3hQcUdUbUQ2dnI3NVdXV1NTZmdZTnNaRTVNN2ZUZXovMTZqMlRiVjRl?=
 =?utf-8?B?QXNvUUNMSFZwR0tYYk91V1J2dU00TGpWM2M5YWFNdzZpeWk5dkh5cHgwQWFH?=
 =?utf-8?B?NlB3b3BGdnI3VW9lazNscEYydURaNnhORVJCcEhCajlKelBRZmRKeXJxWFZP?=
 =?utf-8?B?VUs4RkZ6VXRpZzdJUlVTT0pIUVNsUnlZNC9WK0lhWGgycWx0MHprclJBUVJ1?=
 =?utf-8?B?SlU1bFVPb3Irb3lRZFU5RS9oWWZhN1Z3RzNDM3pUVDF3QS8rNmxNRFV2ZzBL?=
 =?utf-8?B?WURNMkxuZS9ob25SR3VFRmxiYlV5TytBL1ZOeGlseEhpVURJclRqRHhYajBE?=
 =?utf-8?Q?iQng3nC83NLNjCVAfDz/7EIKFQqn0fFVml/bKi1?=
X-MS-Exchange-CrossTenant-Network-Message-Id: cbd24b8e-871b-480d-ada9-08d930dd6fa9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 15:43:07.2600
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MgX1+pwVAACVD0sTq9NulDYhqSs4ABIwGnP56zWIw1kq7IqxHQWpWtcQTdguJskiR7yirUqRuADY/oWEpEthWAl7ftx8BFNiH0eLHMPE1gQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4997
X-OriginatorOrg: citrix.com

On 16/06/2021 09:48, Jan Beulich wrote:
> On 13.05.2021 22:15, Andrew Cooper wrote:
>> On 13/05/2021 04:56, osstest service owner wrote:
>>> flight 161917 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/161917/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>  test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
>>>  test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
>>>  test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
>>>  test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
>>>  test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898
>> I reported these on IRC, and Julien/Stefano have already committed a fix.
>>
>>> Tests which are failing intermittently (not blocking):
>>>  test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917
>> While noticing the ARM issue above, I also spotted this one by chance.
>> There are two issues.
>>
>> First, I have reverted bed7e6cad30 and edcfce55917.  The XTF test is
>> correct, and they really do reintroduce XSA-286.  It is a miracle of
>> timing that we don't need an XSA/CVE against Xen 4.15.
> As expressed at the time already, I view this reverting you did, without
> there being any emergency and without you having gathered any acks or
> allowed for objections, as overstepping your competencies. I did post a
> patch to the XTF test, which I believe is wrong, without having had any
> feedback there either. Unless I hear back by the end of this week with
> substantial arguments of why I am wrong (which would need to also cover
> the fact that an issue was found with 32-bit PAE only, in turn supporting
> my view on the overall state), I intend to revert your revert early next
> week.

It has frankly taken a while to formulate a civil reply.

I am very irritated that you have *twice* recently introduced security
vulnerabilities by bypassing my reviews/objections on patches.

At the time, I had to drop work on an in-progress security issue to
urgently investigate why we'd regressed upstream, and why OSSTest hadn't
blocked it.

I am more generally irritated that you are constantly breaking things
which GitlabCI can tell you is broken, and that I'm having to drop work
I'm supposed to be doing to unbreak them.

In the case of this revert specifically, I did get agreement on IRC
before reverting.


In your proposed edit to the XTF test, you say

  L3 entry updates aren't specified to take immediate effect in PAE mode:

but this is not accurate.  It's what the Intel SDM says, but is
contradicted by the AMD APM which states that this behaviour is not true
under NPT under any circumstance, nor is it true on native.

Furthermore, any 32bit PV guest knowing it is running on a 64bit Xen
(even from simply checking Xen >= 4.3) can rely on the relaxed
behaviour, irrespective of what the unwritten PV ABI might want to say
on the matter, due to knowing that it is running on Long mode paging as
opposed to legacy PAE paging.

If these two technical reasons aren't good enough, then consider the
manifestation of the issue itself.  XSA-286 is specifically about Xen
editing the wrong PTE, because of the use of linear pagetables, in light
of the guest not flushing the TLB.

If you were to remove linear pagetables from Xen, the issue
(do_mmu_update() edits the wrong PTE) would cease to manifest even on
legacy PAE paging, demonstrating that the problem is with Xen's actions,
not with the guests.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 15:44:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 15:44:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143455.264402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXib-0001Mm-8I; Wed, 16 Jun 2021 15:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143455.264402; Wed, 16 Jun 2021 15:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltXib-0001Mf-55; Wed, 16 Jun 2021 15:44:41 +0000
Received: by outflank-mailman (input) for mailman id 143455;
 Wed, 16 Jun 2021 15:44:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IKif=LK=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1ltXia-0001MX-0f
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 15:44:40 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea112699-168d-4f67-8d30-9adf75dc741d;
 Wed, 16 Jun 2021 15:44:38 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 71B3B67357; Wed, 16 Jun 2021 17:44:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea112699-168d-4f67-8d30-9adf75dc741d
Date: Wed, 16 Jun 2021 17:44:36 +0200
From: Christoph Hellwig <hch@lst.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>, Roman Skakun <rm.skakun@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable
 dma ops
Message-ID: <20210616154436.GA7619@lst.de>
References: <855a58e2-1e03-4763-cb56-81367b73762c@oracle.com> <20210616114205.38902-1-roman_skakun@epam.com> <20210616114205.38902-2-roman_skakun@epam.com> <2834cdc0-534c-4f07-1901-e468a7713c1f@oracle.com> <20210616142148.GA764@lst.de> <83353b34-2abb-9dfc-bed6-21d500abf49f@oracle.com> <20210616153519.GA6476@lst.de> <d4e8c822-3f92-d552-018c-611a44299e28@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d4e8c822-3f92-d552-018c-611a44299e28@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 16, 2021 at 11:39:07AM -0400, Boris Ostrovsky wrote:
> 
> On 6/16/21 11:35 AM, Christoph Hellwig wrote:
> > On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote:
> >> Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know).
> > Yes.  That is why I'd suggest just doing the vmalloc_to_page or
> > virt_to_page dance in ops_helpers.c and just continue using that.
> 
> 
> Ah, OK, so something along the lines of what I suggested. (I thought by "helpers" you meant virt_to_page()).

Yes.  Just keeping it contained in the common code without duplicating it
into a xen-specific version.


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 16:04:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 16:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143465.264413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltY1U-0004Pt-Tq; Wed, 16 Jun 2021 16:04:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143465.264413; Wed, 16 Jun 2021 16:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltY1U-0004Pm-Qe; Wed, 16 Jun 2021 16:04:12 +0000
Received: by outflank-mailman (input) for mailman id 143465;
 Wed, 16 Jun 2021 16:04:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jola=LK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltY1S-0004Pf-UR
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 16:04:10 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4453dd97-5b39-42fd-99ae-29fb14628536;
 Wed, 16 Jun 2021 16:04:10 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2056.outbound.protection.outlook.com [104.47.9.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-YVSoROo_OjervzgSEg-58A-2; Wed, 16 Jun 2021 18:04:07 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Wed, 16 Jun
 2021 16:04:05 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Wed, 16 Jun 2021
 16:04:05 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0200.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Wed, 16 Jun 2021 16:04:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4453dd97-5b39-42fd-99ae-29fb14628536
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623859449;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=lo7dWu124mDXCjniLtxKg5FLgByRQJ2w3dohpW4Ret8=;
	b=d5uatcXmEd0GXlarOuwmcuHVP4vb7KOLMp+s3UXCk9Qu5ve74UDyvDm9uwkq7qF3jLqAMM
	On9bn76TipE8Xg3+svbmNehU8ngcHYehqqAJlFe/bRDH2wnfuHw3MD11K4yjIR8pQSY360
	p9s07ZI62wznBqjMHfNGhs5tbWjddxg=
X-MC-Unique: YVSoROo_OjervzgSEg-58A-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YpGySNvHjPKNWPuBFQWVtTCJTyfpGR8lZC+hqZq0at2dyVUoYWbBP0FHrDTpvDRTQefvzQxZ7+fu96g3jteVRCAS05CBkPyax8tmo5sEPaVa2SpTM/dDS4cEa5pDtjfT9nJDq3p51JyA8ZT6AIyHL/xaFGliblGQj+Llzt2JOT7Zy+Hm4mwRZMWC9J69FHnn/bIErhIAgYq6wRaP+v1viABy+Lq34b9APMyyMDI/szt3D7rO++srxTqd2DqnNMuw1ISs782YY7Yz91OcrXr9J81hiDSGGqCk0kk8mywY3WMihAUNmNYNv6P9FSqNFLeHuKnx4si6uGI/pZ3AtPVMqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lo7dWu124mDXCjniLtxKg5FLgByRQJ2w3dohpW4Ret8=;
 b=YG2LMNID6lR7wG3sIx9ii4y/uJycwWUkdoqWcDf0pzTK9OukqiHUMh8ikN5UFLfvoI2HMz/LdXgkvdCuoMvBttcceja7snWcLfzCMj+6CQBTOAR421oE9/okZQUJLSqsm6S/b91a1pMWBYiELZt3MKIMx4cXwYJnj0uVKSOrfaDg4sHToUf0X9+XULWo/8bl5YVzwQVSnsQ3EHJFNDpb6Uf/OJ/Dhg6FTET/68T3I1IcY4O3ig7ASApWQgWV9+i/zjuoy3SkyerV9p4+xtrztAJihhSsiF/+9MDmoEpKqjU/1+2qbDlyGO65DA3cQhqQ++3wDQHkZhe4DL5cyCL8LQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: hypercalls with 64-bit results
Message-ID: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
Date: Wed, 16 Jun 2021 18:04:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0200.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d9c6e6f2-d3fc-4a67-7f07-08d930e05dc0
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2445542FB23E933425D2A296B30F9@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7SJr3HhC29cU2BBXrT1R19NCJ+4uyt5YtObJtlDBE85IsJt22jlzEjCXqS7Tawo8m6A2et5MUOYJzcK68Hyi4I1rCkDYpEkwtCZDgSX/t7No2rqUHR3olXeikbMtl6zfm5NpWMFfCtBivDz+V7O6kzbKl4yUpKf1RTbqGt27ebUpBL5w18M1gP2ANGCyEkdFtOx3ClsKJrvUlB3nYJT+XGuQu/T/+4bhJnBdQbhNc0GmLgU0HyrANLcpaDaUM7uiUStFBkvoLEfr4thCbiWWAA4fdQ3JiGQ0eIHUJNBa+WPRqAhvwcc6ykJ7v+bEDu3oMcW68YmAwWln7WSqOFAugiGGvxIZ5mh78cRz+7Awh2Cxtz2qwdaNmmsJ3hBCaMMkHcX2Dj1SaNlKTTCiGLgwbMBBR8E+IQatY8l16/NiinO9K0Dpj7Did5N05JVERFuVr5gXMSXgMtjUU1Nao5Yugkj0RlKBqqXrgLlaqZpoZUbVI+Fv7GfhASJqQXyjuh8Dud+1ht/0a7cIe48rFq6lE4oaO0WeLRpy1D5QZIVcYQsgQ8LJC7CizwzPHnaE/wEgZosf9sVY9QshyMQOkHrbQ9v6JrLIDPW1gMO9OXrFL6rNMPy0vOHS93qhj7IeCeS3KkLXkobz5oMYhKygGrMCx25GPTHczsU0bZQY5iPi5v5/7qa9d8lBJTkZGEYP8YaS
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39850400004)(136003)(346002)(366004)(376002)(316002)(8676002)(16576012)(7416002)(38100700002)(83380400001)(5660300002)(110136005)(36756003)(86362001)(2906002)(31696002)(54906003)(4326008)(26005)(6486002)(8936002)(186003)(31686004)(16526019)(66946007)(956004)(66556008)(478600001)(2616005)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZlZRR1N5WGJwc1dOYzJaSzQ2cDFrKzQxdStHSEgxR3A1Mng5U3pOTFRHaHlH?=
 =?utf-8?B?SEF5dmFmM2NHUm5IK3p1WkpjRVI4R1VVZDJmc0lGWnBVTDViMFdubmxJVGd0?=
 =?utf-8?B?UjM4d2dYMjFQeGVrek9hZEJHMUpNbmU5SkdwNzcvbVB4ZnZRd1JZVlNLeVpn?=
 =?utf-8?B?ZXFlWEdVeitPSW95WW54UkJ2V3pKTGRldXhpa205WUtjaTJhcHFQUDNDUk5F?=
 =?utf-8?B?OUdieE5hTUE0YWo4RklVL2VhSmE5RDBvUWJPcXBGWG9SeTNlaGZzVmJxUlF6?=
 =?utf-8?B?eGVmWENkTDBQREZWQi9pcTVMYnpadzQzTU9vMWpvWGxkNUJ6V0NmeDRvaEpl?=
 =?utf-8?B?RTJpRmwvOTRiaE56bFdjM0tqTFAzeVVlNGRLT0ZtQitGUGY0UjRNdC9MWXl6?=
 =?utf-8?B?bWxNY1pnMFpvYjFGNVBaeVhhcVd5VjRWTHVRVm1jU2ExTmNDQ1YxcGdGS2NZ?=
 =?utf-8?B?cGJjNTAyUDZTQ1cvdmRpNXFWbnNyNVJGdTBZV3ZQMmxFeXMvenRCK01qMWxF?=
 =?utf-8?B?REc1K2RRUWRZNGxad0FqMWdRb2xELzBCazI1emFKM0pXTnZvcllZbktQY0Jk?=
 =?utf-8?B?QUR1aThYek9ub1BlYW4xTEtPTFJVL1NOL3RXZXE2dTVKbmdocjZobWQ3ZGJB?=
 =?utf-8?B?bGtmU1M1aldmM2cxUElJQVN3Tncxa2FEdGFBVVY0Z3p4WmxFSFRhekRjMWlO?=
 =?utf-8?B?Ykt2OU55cW5jd09JSjdrRUcrYU5DWGU1TW1PRE1ZTkJyZWlhd1czUGFRY010?=
 =?utf-8?B?aVMrT1FKUDl4aXpRZXlSOWFXK2ZmQ0hiODhyZ1pKQzZNR1JGc05pRzkra1ZH?=
 =?utf-8?B?WjgrVWhYa1ZpOEo0ZEN4QmprMFZrTENQeHM4YU40K1ZiOENXaEFTQ3BOd2Ra?=
 =?utf-8?B?ZnhpNEhzZHlKaFBTTS82WEh1QkJ2UTlaSDVGcWQ5T0hxWjBUMCtjVWVTY1Nq?=
 =?utf-8?B?NFhaSHNwV1h0am82TjhERkVpTC9rcVhLYlJSR2RBWjcvK1lxNytvMkhHdG40?=
 =?utf-8?B?czZQM3dKQmMrb0JrRVhyMWxyZktIajFwOUJrc2VhZzlRN2JNWnYrWnlRdElV?=
 =?utf-8?B?aTBqclNNZ05tVkhzbEtDMjAxSGdDb3ZoMDJIVE9ocXV0VDhYTHlTU0ZVeGJo?=
 =?utf-8?B?Q05aYnRxbDJNVWl0OE8rZnJDVXR4Z1ZuNlQyNUVXekpxNExCZGpKbzdiOEd2?=
 =?utf-8?B?WFhIU05yeVp6eVpiN3NiU0pNVzVMK3JjS0lmNExDQ1U2djNQdGNOK09MZEpy?=
 =?utf-8?B?QTNpd0xCT0dKY2M3QzBZTU8wcVpxbUxlK3c5NkRqQW5kei9KYi9IMXdFNUNB?=
 =?utf-8?B?UHB6Nk1ablJFVHR6c0JzbS9CclVmdkl0L0Q4TmVVZUNBVUphc21TcnhRa3NR?=
 =?utf-8?B?aWF1SmFFbjIzcTNVcEk0elo3T2VaOGZYRHJudXJEWHRtTUdDT29SdmVaSnpF?=
 =?utf-8?B?bVI4czFqVU5NaWtpTkhOTFhJNHlHMXJVcnUrbGpManlRTGhRd2swbGM2MGdu?=
 =?utf-8?B?dmJPYi9VT2ExSkROK3NXQ1RuZkNKTGNKZS9IdnhwQm4vNjBzd1BKYW1hZWli?=
 =?utf-8?B?RE9ZMW13VG9uNXRrMGpyc3psV29ta3JaL25kYmlVWUlOT2hnUFFNellWb29j?=
 =?utf-8?B?N29Yd1N1RmppanM5dEdTcWFZelJqZGN2QjlubDNBT0d2SEMvRkI2dzdhMzZP?=
 =?utf-8?B?c1VEVlU4Qk5WWVlKdFhXRXBrZTJEK3p2ME5UNU95Y3BKREQ2YzhTMnNMTlVT?=
 =?utf-8?Q?Ib9tLmPrQUQynSI730/qzISOVuXxcnEl6umzQQ9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d9c6e6f2-d3fc-4a67-7f07-08d930e05dc0
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 16:04:05.4819
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BEf79sirBh4C6LIimBdsdZvewZEyNkdYpe4CoY93yum0b8KsXouiPiEIQaaVQsTx7f9yQ5VB4Sx0fgjXKgkesQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

All,

Several years back, do_memory_op() in libxc was changed to have a
"long" return type. This is because some of the sub-ops return
potentially large values as the hypercall return value (i.e. not in an
argument structure field). From all I can tell, however, this change
didn't have the intended effect, which apparently manifests in the
present two remaining ovmf failures in the staging osstest flights.
Anthony tells me that ovmf, as of not very long ago, puts the shared
info page at a really high address, thus making the p2m of the guest
very large. Its size gets returned by XENMEM_maximum_gpfn as the
function return value.

Since hypercalls from the tool stack are based on ioctl(), and since
ioctl() has a return type of "int", I'm afraid there's no way we can
deal with this by adjusting function return types in the libraries.
Instead we appear to need either a new privcmd ioctl or new XENMEM_*
subops (for those cases where potentially large values get returned).

Until we manage to deal with this, I wonder whether we should suggest
that the ovmf folks undo that change. I'm anyway not really
convinced this aggressive enlarging of the p2m is a good idea. There
are a number of cases in the hypervisor where we try to reduce GFN
ranges based on this upper bound, and there is in particular a loop
in the mem-sharing code going all the way up to that limit. EPT p2m
dumping also has such a loop.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 16:05:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 16:05:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143470.264424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltY3B-000517-8z; Wed, 16 Jun 2021 16:05:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143470.264424; Wed, 16 Jun 2021 16:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltY3B-000510-65; Wed, 16 Jun 2021 16:05:57 +0000
Received: by outflank-mailman (input) for mailman id 143470;
 Wed, 16 Jun 2021 16:05:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltY39-00050m-Tq; Wed, 16 Jun 2021 16:05:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltY39-0006I6-Oc; Wed, 16 Jun 2021 16:05:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltY39-0005EK-Dx; Wed, 16 Jun 2021 16:05:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltY39-0006Pr-DW; Wed, 16 Jun 2021 16:05:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6Ye1y9b/gmY+S8VDyxZwau9tLA5WLeGJCc+dMW2KtvU=; b=floqTOkxrRhwhpDTvnrbLKUsai
	ohNJ/LbhbWzC7Dfv1ttqm1EwIByanLQm+RmEju9EZBI/PY3tr7aXMvu1VLY2hOXLc1nk1A/9RhU8J
	aj2D7xYeKDYU3QBqD8onC7A1AQjr0nopW6ZIrJacUUQfrWwoR3TaWTAOPK28+hwjGii8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162855-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162855: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=ab2b389e7afc19ca87574eb4594eaf26ca2d4135
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 16:05:55 +0000

flight 162855 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162855/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 ab2b389e7afc19ca87574eb4594eaf26ca2d4135
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   12 days
Failing since        162368  2021-06-04 15:42:59 Z   12 days   26 attempts
Testing same since   162855  2021-06-16 08:42:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2040 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 18:16:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 18:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143502.264439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lta5E-000151-PB; Wed, 16 Jun 2021 18:16:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143502.264439; Wed, 16 Jun 2021 18:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lta5E-00014u-Lw; Wed, 16 Jun 2021 18:16:12 +0000
Received: by outflank-mailman (input) for mailman id 143502;
 Wed, 16 Jun 2021 18:16:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOo1=LK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lta5D-00014o-Cm
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 18:16:11 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df8d5487-4cf7-4e2c-a0f3-2c4e903ff88a;
 Wed, 16 Jun 2021 18:16:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df8d5487-4cf7-4e2c-a0f3-2c4e903ff88a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623867370;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=sNpQzDIlnm7MsoEC/S7AiHE4ULsw2UwrHUkFA0POEGw=;
  b=IovncgtZWagR36jRrGvMV7BgRzt8deuiYXRTJ2de9H9g6mASoAbBAx3x
   CitTTkziJnpywg2PlSEqupQMqm3jbC7NUuJXhdRlcg2R+a65zB7knuRmB
   7+vJ/ZsAdqiKLHvY/z68MafibBTnNX7Vgse2Ivk7Eju5NEiT77h8a4I+7
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: tcPL1OS48t9WCPHElOZERB02mnYsCec8GziUNLcgrMv0lGAbw5ORGHdYjCWU3DtTHspRCLrlWY
 IWru/Y6Bjcxme8U94WuunZscC0LZdSi6wMonc4Je9qBZLFsIDqPqD8lrOkXRmJ2G0TXZXx9cdB
 96VWt7jFvwr2JVog2R7/3dpmbslmIU9Le97BsQZj/xr26+8cIhZVZ4pAdI2NoVyS3LqQIFU1g2
 treK9jRvLsaDXzebSrqBexx2YCbZ4fvFwp1+PtRRbjpYqqp/HsJkNwCzz4U6dqBCYYdXMKHtwe
 JZU=
X-SBRS: 5.1
X-MesageID: 46299123
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:LGaQka1wt1H47nllPsMdTgqjBVByeYIsimQD101hICG9Lfb2qy
 n+ppgmPEHP5Qr5OEtApTiBUJPwJE80hqQFnrX5Wo3SIDUO2VHYUb2KiLGN/9SOIVyHygcw79
 YGT0E6MqyLMbEYt7eI3ODbKadY/DDvysnB7o2/vhQdOD2CKZsQizuRYjzrYnGeLzM2Y6bReq
 DshPav6wDQAkj+Oa+Adwg4tqX41pL2vaOjRSRDKw8s6QGIgz/twLnmEyKA1hNbdz9U278t/U
 XMjgS8v8yYwrCG4y6Z81WWw4VdmdPnxNcGLMuQivINIjGprgqzfoxuV5CLoThwiuCy71QBls
 XKvn4bTopOwkKUWlvwjQrm2gHm3jprwWTl00WkjXzqptG8bC4mCuJa7LgpMCfx2g4FhpVRwa
 hL12WWu958FhXbhhnw4NDOSlVDile0m3w/iuQe5kYvErf2UIUh6bD3wXklV6vpREnBmcYa+a
 hVfYHhDc9tABanhyuzhBg3/DTENU5DbCtvQSA5y4aoOnZt7ShEJ+Zx/r1Xop46zuNLd3Bz3Z
 WODk1ZrsA7ciYoV9MKOA4ge7r7NoWfe2OBDIqtSW6XXJ3vbEi91aIfpo9Fv92XRA==
X-IronPort-AV: E=Sophos;i="5.83,278,1616472000"; 
   d="scan'208";a="46299123"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JnPrBs8mzxggplF32FEN80sBocLA7DQDv1g7MfQrtM1yHvI5j8ogHX6RKdo6+cV/JQitQpZD/z8IUeWmjcwBUaedXzQIdRyn9s0FFJK/fi2AqVYY7AxCtM+xlibgHrp8g1tUMBO+Gwt+srYQeWShL+cS2Hyc50DJOOHuuMAj9hlDSRk+R4LWZBmKNvC8lFp+IZ3IZZdTiKcKw+7AiSIjzjzh91idaq3wk6pE56xoipewbna39h/O3XNqrfJCfRX2QctIFUD3QwqpIney46JgeXXKHB+ZthygODFmvhmJb/dMAXu+FUNgXxGDgVAARoqOMh4IRLr+G3OfamTh0EJTzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sNpQzDIlnm7MsoEC/S7AiHE4ULsw2UwrHUkFA0POEGw=;
 b=IZHD3T2oIKqy7BeUrr/uIKfZcAr5yHhsaRcqhk6mebzaauGWlWIiWCLfIgDpHCTx/OsEfAnZwLFqjP/yfbtlzhsBZgMAWsX2K0gm9gTgq2ihsxy25erwu9aOhsKy2myDeAQFQHti+AXxQ9OknS0TRlyXdsb6NmyeNBjOmoNU8Thartes2Mn/OnptFSJF+FLaBCf0PvRXTjrnwaBqqSs0dtoPCp88IN68pJOI2VtyIQlR6l9d7nVZpzd/Hbi0Tru8/VBBOGhsl72fck8ru8cEzyRQYy/9cgkTjaLmdzWt7dL7rlGFnv3uFmdIdElKpxYGaJY529fRFi177yuifvsejw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sNpQzDIlnm7MsoEC/S7AiHE4ULsw2UwrHUkFA0POEGw=;
 b=bNngVdClWZaji1nl02/krQHD/Ru7I6bYmqxQER2z3ZoYGtEoc8C94dESEApULs+/J1HRbLweaB9VjVNSsDNs+11guDg0eW9TVZnjodWzrNzXircDHksqYWDaeL3umblZ1H048nTUTczBhRHljP8SFdkdav3iD4caYQs5GNMx2co=
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: hypercalls with 64-bit results
Message-ID: <d38cd3a3-6139-5ebd-6a78-debc20c3b2bf@citrix.com>
Date: Wed, 16 Jun 2021 19:15:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0351.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::27) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 48e04db2-f3f6-4957-b5d1-08d930f2ceea
X-MS-TrafficTypeDiagnostic: BYAPR03MB4119:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4119345B4842B6F9AFA58744BA0F9@BYAPR03MB4119.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 48e04db2-f3f6-4957-b5d1-08d930f2ceea
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2021 18:16:06.4371
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bPlleJjG/5W76SiJuApEQuBEzReZW3OMOBIxjCJ6iCk62cMPsjOhUFhT5AEk4ivUhKhe2NtMCxjaf95I7NnkUAtC4aYPL0SO8rVNrF6DiAk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4119
X-OriginatorOrg: citrix.com

On 16/06/2021 17:04, Jan Beulich wrote:
> All,
>
> several years back do_memory_op() in libxc was changed to have "long"
> return type. This is because some of the sub-ops return potentially
> large values as the hypercall return value (i.e. not in an argument
> structure field). This change, however, didn't have the intended
> effect from all I can tell, which apparently manifests in the present
> two remaining ovmf failures in the staging osstest flights. Anthony
> tells me that ovmf as of not very long ago puts the shared info page
> at a really high address, thus making the p2m of the guest very large.
> Its size gets returned by XENMEM_maximum_gpfn, as function return
> value.
>
> Since hypercalls from the tool stack are based on ioctl(), and since
> ioctl() has a return type of "int", I'm afraid there's no way we can
> deal with this by adjusting function return types in the libraries.
> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
> subops (for those cases where potentially large values get returned).
>
> Until we manage to deal with this I wonder whether we should suggest
> to the ovmf folks to undo that change. I'm anyway not really
> convinced this aggressive enlarging of the p2m is a good idea. There
> are a number of cases in the hypervisor where we try to reduce GFN
> ranges based on this upper bound, and there in particular is a loop
> in mem-sharing code going all the way up to that limit. EPT P2M
> dumping also has such a loop.

There are multiple things in here which are disappointing, but I think
they've mostly been known already.

But I do agree that this is very much another nail in the coffin of the
ioctl ABI.

For ABIv2, there are many changes needed, and this ioctl ABI was never
going to survive, for other reasons too.  Obviously, we can't wait for
ABIv2 to fix this immediate issue.

However, I think it might be reasonable to wait for ABIv2 until we can
reasonably support VMs larger than 8T(?).

For now, I'd agree with trying to undo the change in OVMF.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jun 16 19:43:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 19:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143518.264450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltbRb-0001DT-41; Wed, 16 Jun 2021 19:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143518.264450; Wed, 16 Jun 2021 19:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltbRb-0001DM-0r; Wed, 16 Jun 2021 19:43:23 +0000
Received: by outflank-mailman (input) for mailman id 143518;
 Wed, 16 Jun 2021 19:43:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GKl/=LK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ltbRa-0001DG-Ag
 for xen-devel@lists.xenproject.org; Wed, 16 Jun 2021 19:43:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60a7a4f4-7cae-44e4-878f-a8b8ad831aa8;
 Wed, 16 Jun 2021 19:43:21 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 34428610CA;
 Wed, 16 Jun 2021 19:43:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60a7a4f4-7cae-44e4-878f-a8b8ad831aa8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623872600;
	bh=Tv+VlQHMa9PphzjEUQvV8mYdb7RPlUZ7Ia9IIGJ92Zs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qRUXzEzhGlSZ3FtiDtK+a0JBksN6Fuvx2oXZbLBOi7wEgny6mxwAHdMrYeQlbay6S
	 2RRtcu+ehRGv6C/jdEj0LjiePdFHN/AUfgVoJNU0cd+apeW/dZ2/hmt2Faji3M0VO2
	 3HyslkSiTmVYvYBm9ktc9wnDYXFcyUcWTQjnLAxjeOdaHj5fRpqObS1hxgFKt70bTV
	 VW1iF/r5vF7NuNXQak05iJu/4XLWpMczV2p97GFPtXxUwzMhUWklV6MDyHUeoMcHSG
	 HEoGfFqyq6qlwaKIvF3pMcH+/ifNQGXRdTiwDE10E7flGpkg33YDNp7xzXAbx/+TyI
	 jjxbmAvI0BOEA==
Date: Wed, 16 Jun 2021 12:43:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Konstantina Chremmou <konstantina.chremmou@citrix.com>
cc: Ian Jackson <iwj@xenproject.org>, 
    Andrew Cooper <Andrew.Cooper3@citrix.com>, 
    George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Paul Durrant <paul@xen.org>, Owen Smith <owen.smith@citrix.com>, 
    Chandrika Srinivasan <chandrika.srinivasan@citrix.com>, 
    Christian Lindig <christian.lindig@citrix.com>, 
    Rob Hoes <Rob.Hoes@citrix.com>, "Li Zhang (3P)" <Li.Zhang@citrix.com>, 
    xen-devel@lists.xenproject.org
Subject: RE: [PATCH] code-of-conduct.rst: Add Stefano Stabellini as a Conduct
 Team member
In-Reply-To: <DM6PR03MB356233FB38691977790D2E0EF00F9@DM6PR03MB3562.namprd03.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2106161242300.24906@sstabellini-ThinkPad-T480s>
References: <20210616151430.224739-1-george.dunlap@citrix.com> <4df84ee7-6c79-03c3-dac4-32de3595e861@citrix.com> <24778.8917.652550.796651@mariner.uk.xensource.com> <DM6PR03MB356233FB38691977790D2E0EF00F9@DM6PR03MB3562.namprd03.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Thank you all for the support!


On Wed, 16 Jun 2021, Konstantina Chremmou wrote:
> +1
> 
> -----Original Message-----
> From: Ian Jackson <iwj@xenproject.org> 
> Sent: 16 June 2021 5:12 PM
> To: Andrew Cooper <Andrew.Cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@citrix.com>; xen-devel@umich.edu; Wei Liu <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Paul Durrant <paul@xen.org>; Owen Smith <owen.smith@citrix.com>; Chandrika Srinivasan <chandrika.srinivasan@citrix.com>; Christian Lindig <christian.lindig@citrix.com>; Konstantina Chremmou <konstantina.chremmou@citrix.com>; Rob Hoes <Rob.Hoes@citrix.com>; Li Zhang (3P) <Li.Zhang@citrix.com>
> Subject: Re: [PATCH] code-of-conduct.rst: Add Stefano Stabellini as a Conduct Team member
> 
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> Andrew Cooper writes ("Re: [PATCH] code-of-conduct.rst: Add Stefano Stabellini as a Conduct Team member"):
> > On 16/06/2021 16:14, George Dunlap wrote:
> > > With my upcoming leave, Ian will be the only person actively on the 
> > > Conduct Team.  Stefano has volunteered to join the team, so that 
> > > there are at least two members.
> ...
> > +1
> 
> +1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 20:49:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 20:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143528.264460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltcTF-000778-4q; Wed, 16 Jun 2021 20:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143528.264460; Wed, 16 Jun 2021 20:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltcTF-000771-1s; Wed, 16 Jun 2021 20:49:09 +0000
Received: by outflank-mailman (input) for mailman id 143528;
 Wed, 16 Jun 2021 20:49:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltcTD-00076r-Ch; Wed, 16 Jun 2021 20:49:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltcTD-0002gL-4l; Wed, 16 Jun 2021 20:49:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltcTC-0002EA-PX; Wed, 16 Jun 2021 20:49:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltcTC-0003uy-P2; Wed, 16 Jun 2021 20:49:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DGsfMoGInGanyDYE2mE0G2uJGVOYm+HJ4bZ1Lzu0I9g=; b=tWsPGVPzUzQ5QgPlwyfKqSeAnz
	q9zuTGGaVU7kzt878RGATX2vBemkyVTFK4h8/xvjQriexF2MiOUgXeeF4pINd9Ky3/Ufky3IQkKgB
	H2zohc3ebHV6e3y9ScIAmWaq9+RTctwoSfOwyfXBdrhRQGv8PFCt0A3iTNWIfFmx14Bo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162854-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162854: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 20:49:06 +0000

flight 162854 xen-unstable real [real]
flight 162866 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162854/
http://logs.test-lab.xenproject.org/osstest/logs/162866/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    8 days
Failing since        162556  2021-06-08 22:39:08 Z    7 days   12 attempts
Testing same since   162854  2021-06-16 06:56:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1115 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 22:00:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 22:00:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143551.264493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltda7-0006xr-Lb; Wed, 16 Jun 2021 22:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143551.264493; Wed, 16 Jun 2021 22:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltda7-0006xk-Ic; Wed, 16 Jun 2021 22:00:19 +0000
Received: by outflank-mailman (input) for mailman id 143551;
 Wed, 16 Jun 2021 22:00:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltda7-0006xa-2Q; Wed, 16 Jun 2021 22:00:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltda6-0003sb-Sg; Wed, 16 Jun 2021 22:00:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltda6-0005lI-HN; Wed, 16 Jun 2021 22:00:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltda6-0007i3-Gq; Wed, 16 Jun 2021 22:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Lfbm9JuzNKmVwXnujkRT9ZfX3ZRAsxXV6ALLYYaWlQ=; b=BSFHN3bPMZ0FTmQy1RX2gPp1Zz
	48nxJGRgEkrMxo2o7qZqEkMeknxY1Fx6IrwIKioFEXr98J+c9FQF1ajvBG58CvfE2mvrqphIARSFi
	dzvLA6TgucJoi+YuXDDjCmM+qqS4umyKNiGQ/HMuNVMM498Pk6BnWNYL+reWBX5hwTpg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162856-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162856: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=94f0b2d4a1d0c52035aef425da5e022bd2cb1c71
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 22:00:18 +0000

flight 162856 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162856/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                94f0b2d4a1d0c52035aef425da5e022bd2cb1c71
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  320 days
Failing since        152366  2020-08-01 20:49:34 Z  319 days  544 attempts
Testing same since   162847  2021-06-15 20:43:14 Z    1 days    2 attempts

------------------------------------------------------------
6167 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1680457 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 16 23:03:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Jun 2021 23:03:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143560.264506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lteYw-0004IQ-DB; Wed, 16 Jun 2021 23:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143560.264506; Wed, 16 Jun 2021 23:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lteYw-0004IJ-AG; Wed, 16 Jun 2021 23:03:10 +0000
Received: by outflank-mailman (input) for mailman id 143560;
 Wed, 16 Jun 2021 23:03:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lteYv-0004I9-DY; Wed, 16 Jun 2021 23:03:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lteYv-0004uN-53; Wed, 16 Jun 2021 23:03:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lteYu-0000xb-Sf; Wed, 16 Jun 2021 23:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lteYu-0003gk-SC; Wed, 16 Jun 2021 23:03:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wejewcKSJFdg9K6bLAwXqk6aWOXEbVncXHHMXLBWdug=; b=d3NDSTVGnucQv7e2xvE+tldb3G
	CL6fCIkdX4zFBTu7Ykp8usUw1JAyuzNCs8NIlu6XgXhCNgp7UfLKUhmlLBb2mSm9JVScIGD8bjj8H
	L36Ija1T9R7vR0bGS/1lVjFZOZPyUs0bz15gvK0egvM/9YFxvxjPdCa44r3rts3/wod4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162859-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162859: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ffe4d2a0684d4e6aa6ff066b1cd8663a62f2888c
X-Osstest-Versions-That:
    linux=3909e2374335335c9504467caabc906d3f7487e4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Jun 2021 23:03:08 +0000

flight 162859 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162859/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162604
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162604
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162604
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162604
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162604
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162604
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162604
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162604
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162604
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162604
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162604
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ffe4d2a0684d4e6aa6ff066b1cd8663a62f2888c
baseline version:
 linux                3909e2374335335c9504467caabc906d3f7487e4

Last test of basis   162604  2021-06-10 12:11:03 Z    6 days
Testing same since   162859  2021-06-16 10:12:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alaa Hleihel <alaa@nvidia.com>
  Alexander Kuznetsov <wwfq@yandex-team.ru>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandre GRIVEAUX <agriveaux@deutnet.info>
  Andrea Righi <andrea.righi@canonical.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com> # pxa2xx
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Chris Packham <chris.packham@alliedtelesis.co.nz>
  Christoph Hellwig <hch@lst.de>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Cornelia Huck <cohuck@redhat.com>
  Dai Ngo <dai.ngo@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Bogdanov <d.bogdanov@yadro.com>
  Dmitry Osipenko <digetx@gmail.com>
  Dmitry Yakunin <zeil@yandex-team.ru>
  Drew Fustini <drew@beagleboard.org>
  Eric Farman <farman@linux.ibm.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  George McCollister <george.mccollister@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jack Pham <jackp@codeaurora.org>
  James Wang <jnwang@linux.alibaba.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Self <jason@bluehome.net>
  Javed Hasan <jhasan@marvell.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jeimon <jjjinmeng.zhou@gmail.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiri Olsa <jolsa@redhat.com>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Garry <john.garry@huawei.com>
  John Keeping <john@metanate.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jun'ichi Nomura <junichi.nomura@nec.com>
  Kamal Heib <kamalheib1@gmail.com>
  Kees Cook <keescook@chromium.org>
  Kyle Tso <kyletso@google.com>
  Leo Yan <leo.yan@linaro.org>
  Leon Romanovsky <leonro@nvidia.com>
  Liangyan <liangyan.peng@linux.alibaba.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Linyu Yuan <linyyuan@codeaurora.com>
  Lorenzo Colitti <lorenzo@google.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Żenczykowski <maze@google.com>
  Marco Elver <elver@google.com>
  Marco Felsch <m.felsch@pengutronix.de>
  Marian-Cristian Rotariu <marian.c.rotariu@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Matt Wang <wwentao@vmware.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Mayank Rana <mrana@codeaurora.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Nathan Chancellor <nathan@kernel.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nick Desaulniers <ndesaulniers@google.com> # build
  Nikolay Borisov <nborisov@suse.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rao Shoaib <rao.shoaib@oracle.com>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sagi Grimberg <sagi@grimberg.me>
  Saravana Kannan <saravanak@google.com>
  Sasha Levin <sashal@kernel.org>
  Saubhik Mukherjee <saubhik.mukherjee@gmail.com>
  Sean Christopherson <seanjc@google.com>
  Sedat Dilek <sedat.dilek@gmail.com>
  Sergey Senozhatsky <senozhatsky@chromium.org>
  Shakeel Butt <shakeelb@google.com>
  Shay Drory <shayd@nvidia.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefan Agner <stefan@agner.ch>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Tejun Heo <tj@kernel.org>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Wenli Looi <wlooi@ucalgary.ca>
  Wesley Cheng <wcheng@codeaurora.org>
  Wolfram Sang <wsa@kernel.org>
  Yang Yingliang <yangyingliang@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Zong Li <zong.li@sifive.com>
  Zou Wei <zou_wei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   3909e2374335..ffe4d2a0684d  ffe4d2a0684d4e6aa6ff066b1cd8663a62f2888c -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 00:09:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 00:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143568.264521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltfbD-00028U-F4; Thu, 17 Jun 2021 00:09:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143568.264521; Thu, 17 Jun 2021 00:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltfbD-00028N-Bj; Thu, 17 Jun 2021 00:09:35 +0000
Received: by outflank-mailman (input) for mailman id 143568;
 Thu, 17 Jun 2021 00:09:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ltfbB-00028H-Qn
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 00:09:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10f35a75-3c88-423a-9fc1-eabb1c4dca26;
 Thu, 17 Jun 2021 00:09:33 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 70EFE61351;
 Thu, 17 Jun 2021 00:09:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10f35a75-3c88-423a-9fc1-eabb1c4dca26
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623888572;
	bh=mN2okVVdTICmc/8v91DABSXp7kK2mc41VelpLLMzraM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZYuvhN96769HVBf9HwXfEMrx5h2joXFESxJj/lRenrF7yXWLZjtrg1PWIka+vT/Q0
	 oFtXyOpOZ9yR7QPt6WZ9qiVwYGYEJ72zE5pEW7HiR62r0bK1cPffUlPeluSwwsGV++
	 8riMe4sA8jh8H05UnuJF58KJLPG9dNHhkmEpW63Symhvau9C6X5CTpmeRZXg03FBNA
	 YdmM71is91QiA8p/farR5qTSk/+kZjwaUjrHEt7/cAUyCA79A0OF17vUa8D0c+gd+H
	 nwIEeujLTdx4sfeAu0ptJEutsq4BmObXoP4A9/HlakW6ogsicBITdzMemlH7LIUWJw
	 GZfH7kl1LZ6XQ==
Date: Wed, 16 Jun 2021 17:09:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 11/12] dt-bindings: of: Add restricted DMA pool
In-Reply-To: <20210616062157.953777-12-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106161651290.24906@sstabellini-ThinkPad-T480s>
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616062157.953777-12-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 16 Jun 2021, Claire Chang wrote:
> Introduce the new compatible string, restricted-dma-pool, for restricted
> DMA. One can specify the address and length of the restricted DMA memory
> region by restricted-dma-pool in the reserved-memory node.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
>  1 file changed, 33 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> index e8d3096d922c..46804f24df05 100644
> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> @@ -51,6 +51,23 @@ compatible (optional) - standard definition
>            used as a shared pool of DMA buffers for a set of devices. It can
>            be used by an operating system to instantiate the necessary pool
>            management subsystem if necessary.
> +        - restricted-dma-pool: This indicates a region of memory meant to be
> +          used as a pool of restricted DMA buffers for a set of devices. The
> +          memory region would be the only region accessible to those devices.
> +          When using this, the no-map and reusable properties must not be set,
> +          so the operating system can create a virtual mapping that will be used
> +          for synchronization. The main purpose for restricted DMA is to
> +          mitigate the lack of DMA access control on systems without an IOMMU,
> +          which could result in the DMA accessing the system memory at
> +          unexpected times and/or unexpected addresses, possibly leading to data
> +          leakage or corruption. The feature on its own provides a basic level
> +          of protection against the DMA overwriting buffer contents at
> +          unexpected times. However, to protect against general data leakage and
> +          system memory corruption, the system needs to provide way to lock down
> +          the memory access, e.g., MPU. Note that since coherent allocation
> +          needs remapping, one must set up another device coherent pool by
> +          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
> +          coherent allocation.
>          - vendor specific string in the form <vendor>,[<device>-]<usage>
>  no-map (optional) - empty property
>      - Indicates the operating system must not create a virtual mapping
> @@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
>  
>  Example
>  -------
> -This example defines 3 contiguous regions are defined for Linux kernel:
> +This example defines 4 contiguous regions for Linux kernel:
>  one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
> -one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
> -one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> +one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
> +one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
> +one for restricted dma pool (named restricted_dma_reserved@0x50000000, 64MiB).
>  
>  / {
>  	#address-cells = <1>;
> @@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  			compatible = "acme,multimedia-memory";
>  			reg = <0x77000000 0x4000000>;
>  		};
> +
> +		restricted_dma_reserved: restricted_dma_reserved {
> +			compatible = "restricted-dma-pool";
> +			reg = <0x50000000 0x4000000>;
> +		};
>  	};
>  
>  	/* ... */
> @@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  		memory-region = <&multimedia_reserved>;
>  		/* ... */
>  	};
> +
> +	pcie_device: pcie_device@0,0 {
> +		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
> +		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
> +		memory-region = <&restricted_dma_mem_reserved>;

Shouldn't it be &restricted_dma_reserved?
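(For reference, a minimal hypothetical fragment illustrating the point: the label defined on the reserved-memory child node is what the consumer's memory-region phandle must name, so with the nodes as quoted above the reference would look like this. Node names and addresses are taken from the quoted example, not from a real board file.)

```dts
reserved-memory {
	/* Label as defined in the quoted example ... */
	restricted_dma_reserved: restricted_dma_reserved {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x4000000>;
	};
};

pcie_device: pcie_device@0,0 {
	/* ... so the consumer must reference that same label,
	 * not the unmatched restricted_dma_mem_reserved. */
	memory-region = <&restricted_dma_reserved>;
};
```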



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 00:47:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 00:47:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143577.264532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltgBc-00065K-Gt; Thu, 17 Jun 2021 00:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143577.264532; Thu, 17 Jun 2021 00:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltgBc-00065D-CF; Thu, 17 Jun 2021 00:47:12 +0000
Received: by outflank-mailman (input) for mailman id 143577;
 Thu, 17 Jun 2021 00:47:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ltgBb-000657-B8
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 00:47:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f0d39f3-49fa-44d1-855f-fff67277c825;
 Thu, 17 Jun 2021 00:47:10 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id C7A2B613B9;
 Thu, 17 Jun 2021 00:47:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f0d39f3-49fa-44d1-855f-fff67277c825
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623890829;
	bh=WCzGPQl/aym123sINb8QnvJYTwJNBRaVx2zLf3Bv1N8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rMohPe+tbKQa0AYXnI+IW3OaFUpO6B0eUu0gZForopqzfrCjPyFI1Ewjobgur6/dk
	 bLRtb8PIYvryWZweFMXTwAbMWtg0nctMUoV8kxk6uELmUKnBiVw5LeKCqL5SBhOByE
	 jiTek0nnewDrHmhCidXDSQ3WaahCsU5g4GDabRezUbSedVXgMEydAYEHuXFwzTS91g
	 cJMCyter7dx77I2E16IUKEfK8pcdYqTSbuwilDTS8T+uuTkXRJvXElaDgqvBEXf5YU
	 0wAWL6nOk87f0a1jJqezXJtxglL4nvT9peSjxOtxRa83ff4UqBDa3CGXSoXj4TqnFN
	 tHVw/jZbmu50w==
Date: Wed, 16 Jun 2021 17:47:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
In-Reply-To: <20210616062157.953777-7-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106161711030.24906@sstabellini-ThinkPad-T480s>
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616062157.953777-7-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 16 Jun 2021, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  include/linux/swiotlb.h | 11 +++++++++++
>  kernel/dma/direct.c     |  2 +-
>  kernel/dma/direct.h     |  2 +-
>  kernel/dma/swiotlb.c    |  4 ++++
>  4 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index dd1c30a83058..8d8855c77d9a 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_bounce: %true if swiotlb bouncing is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -94,6 +95,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_bounce;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	return mem && paddr >= mem->start && paddr < mem->end;
>  }
>  
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_bounce;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return false;
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..a92465b4eb12 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 13e9e7158d94..4632b0f4f72e 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
>  	phys_addr_t phys = page_to_phys(page) + offset;
>  	dma_addr_t dma_addr = phys_to_dma(dev, phys);
>  
> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_force_bounce(dev))
>  		return swiotlb_map(dev, phys, size, dir, attrs);
>
>  	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {

Should we also make the same change in
drivers/xen/swiotlb-xen.c:xen_swiotlb_map_page?
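
Concretely, I mean something along these lines (untested sketch; the
context lines are from my tree and may differ slightly in yours):

--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, ...)
 	if (dma_capable(dev, dev_addr, size, true) &&
 	    !range_straddles_page_boundary(phys, size) &&
 	    !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
-	    swiotlb_force != SWIOTLB_FORCE)
+	    !is_swiotlb_force_bounce(dev))
 		goto done;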

If I make that change, I can see that everything works as expected
for a restricted-dma device with Linux running as dom0 on Xen.
However, is_swiotlb_force_bounce returns true even for normal
devices without restricted-dma. That shouldn't happen, right?

It looks like the io_tlb_mem struct (including its io_tlb_slot array)
is not zeroed on allocation. Adding a memset(mem, 0x0, struct_size) in
swiotlb_late_init_with_tbl solves the issue.
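
For reference, the change I tested is roughly the following (in
swiotlb_late_init_with_tbl, right after the allocation; the
struct_size(mem, slots, nslabs) expression and the surrounding context
are from my tree and may not match yours exactly):

--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	mem = (void *)__get_free_pages(GFP_KERNEL,
 		get_order(struct_size(mem, slots, nslabs)));
 	if (!mem)
 		return -ENOMEM;
 
+	/* zero the whole io_tlb_mem, including the io_tlb_slot array */
+	memset(mem, 0, struct_size(mem, slots, nslabs));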

With those two changes, the series passes my tests and you can add my
tested-by.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 01:29:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 01:29:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143585.264542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltgqB-0007Lx-Mc; Thu, 17 Jun 2021 01:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143585.264542; Thu, 17 Jun 2021 01:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltgqB-0007Lq-Jg; Thu, 17 Jun 2021 01:29:07 +0000
Received: by outflank-mailman (input) for mailman id 143585;
 Thu, 17 Jun 2021 01:29:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltgqA-0007Lg-56; Thu, 17 Jun 2021 01:29:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltgq9-0004lA-VK; Thu, 17 Jun 2021 01:29:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltgq9-0007QP-LX; Thu, 17 Jun 2021 01:29:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltgq9-0005Ac-L3; Thu, 17 Jun 2021 01:29:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=15E0obmRTxPsm/hJLpaQMJG/6mIjdxBj5rycsMWcsVI=; b=g2W0zEIdiU5pcGsQvcKviqP1Yz
	xm7dg85lePv7UTPQ9NPoW4KifVPZLrGLtcRfCoVAoTCAcVGh3gxL4Esab8Iijm7qeO0H8xWQjMj4H
	CUq/BlVmZtK9ogtnTwV9nZawqisY4jMOHAh6mNdUU0PTq5oQqgNDUG4mi0MzUdlIx1OQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162865-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162865: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=04ddd1271e9518008bcd899bdaf29c1701f0f7a0
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 01:29:05 +0000

flight 162865 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162865/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 04ddd1271e9518008bcd899bdaf29c1701f0f7a0
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   12 days
Failing since        162368  2021-06-04 15:42:59 Z   12 days   27 attempts
Testing same since   162865  2021-06-16 16:11:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2122 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 04:29:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 04:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143593.264557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltjep-0006lJ-OP; Thu, 17 Jun 2021 04:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143593.264557; Thu, 17 Jun 2021 04:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltjep-0006lB-IP; Thu, 17 Jun 2021 04:29:35 +0000
Received: by outflank-mailman (input) for mailman id 143593;
 Thu, 17 Jun 2021 04:29:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltjeo-0006l1-4O; Thu, 17 Jun 2021 04:29:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltjen-0008Ks-Se; Thu, 17 Jun 2021 04:29:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltjen-0000EW-EN; Thu, 17 Jun 2021 04:29:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltjen-0003i0-D4; Thu, 17 Jun 2021 04:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P34ly50zpvHsTDNopMWa9gRibj9S8VbhO97sCHlYXjs=; b=m9KgFDcmwWmn+ykp83NBC36N+u
	tembr9mTAxqYGR4MzvUfvXBiTmzyjgYFNqR97w6offDtO554gLioHQGY5BXPQnklGMWk30RjX7kqa
	WvF/hMnJf74WCWj+HQRjpQwptcPF5XbLCVzWozBlA3N7o6m8kVSpZX2Q3H6AlONSztz8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162860: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 04:29:33 +0000

flight 162860 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162860/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  300 days
Failing since        152659  2020-08-21 14:07:39 Z  299 days  553 attempts
Testing same since   162860  2021-06-16 13:45:34 Z    0 days    1 attempts

------------------------------------------------------------
532 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 171690 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 04:56:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 04:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143603.264574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltk4N-0001Tt-3U; Thu, 17 Jun 2021 04:55:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143603.264574; Thu, 17 Jun 2021 04:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltk4N-0001Tm-09; Thu, 17 Jun 2021 04:55:59 +0000
Received: by outflank-mailman (input) for mailman id 143603;
 Thu, 17 Jun 2021 04:55:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bffI=LL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ltk4L-0001Tg-Fu
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 04:55:57 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4df4dea0-728e-4f2f-8bf1-00e70a38d7f9;
 Thu, 17 Jun 2021 04:55:56 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 63C40219E4;
 Thu, 17 Jun 2021 04:55:55 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 1E1C6118DD;
 Thu, 17 Jun 2021 04:55:55 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id dAQhBtvVymAwQwAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 17 Jun 2021 04:55:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4df4dea0-728e-4f2f-8bf1-00e70a38d7f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1623905755; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=E36MNh3UfUWCEgSQ7/x84RbS9i6ZwfJBT3hTXJiUfiE=;
	b=os8KGbJkFoFq6958tvzGjoa8yDE89Mj0ECefdLFJAtTVZ2szcSA+AkSPXHoN8p2RU/7V2c
	GqT67dx34sABU2GEcXZbR8AqfwF2DKlazAVlA2UbMvYT48pJYJgEUCXaCCjWiODgD5Fnjg
	QPZI1EnsoN6j5yNQFsVEmmIcUCNaJVg=
Subject: Re: hypercalls with 64-bit results
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
Date: Thu, 17 Jun 2021 06:55:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="zJNavzvWrmM7tdMNTIKi4bu2e92vxC4Jy"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--zJNavzvWrmM7tdMNTIKi4bu2e92vxC4Jy
Content-Type: multipart/mixed; boundary="Lh0wGOpypYfBtGM0AC9S2NBFy2DLGUZ5f";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
Subject: Re: hypercalls with 64-bit results
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
In-Reply-To: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>

--Lh0wGOpypYfBtGM0AC9S2NBFy2DLGUZ5f
Content-Type: multipart/mixed;
 boundary="------------0FEDB2DFDD21089219872CD1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0FEDB2DFDD21089219872CD1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 18:04, Jan Beulich wrote:
> All,
>
> several years back do_memory_op() in libxc was changed to have "long"
> return type. This is because some of the sub-ops return potentially
> large values as the hypercall return value (i.e. not in an argument
> structure field). This change, however, didn't have the intended
> effect from all I can tell, which apparently manifests in the present
> two remaining ovmf failures in the staging osstest flights. Anthony
> tells me that ovmf as of not very long ago puts the shared info page
> at a really high address, thus making the p2m of the guest very large.
> Its size gets returned by XENMEM_maximum_gpfn, as function return
> value.
>
> Since hypercalls from the tool stack are based on ioctl(), and since
> ioctl() has a return type of "int", I'm afraid there's no way we can
> deal with this by adjusting function return types in the libraries.
> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
> subops (for those cases where potentially large values get returned).

I think we can just use a multicall in libxc to wrap the affected
operations.


Juergen

--------------0FEDB2DFDD21089219872CD1
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0FEDB2DFDD21089219872CD1--

--Lh0wGOpypYfBtGM0AC9S2NBFy2DLGUZ5f--

--zJNavzvWrmM7tdMNTIKi4bu2e92vxC4Jy
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDK1doFAwAAAAAACgkQsN6d1ii/Ey/x
Kgf+KT2pv9nZvRlBIPad7YWCzH+ThXNhkR9c5U+KpJ43Ji56Tcx8Gj2FUkHvqZY+T3Jj4736wZMJ
9mj6hfL2pp1IROIkygiiyd/gMdqvDIOT2RiB2gvV7LHqY9H6imLyOXbZbqo3OmHszIl95a39DW8n
2Wpm+OVxA4nx8A5aGXIV0w5dFNyHqGA2CLHnwsWGVy0H3sGD/YB8OgCAulygSjeI4qiGpPb6onrg
xv7LFXPrXn/2zCSqNdK22HC1bKIgLO4ohf3fRibwgSwnsv0g8PvVpKArRmqe6pHP1RIDsBiBbMK8
x7xRgYtBOPAw8PFnWnEvGjZONjxnPOgxjqYr3UBJPQ==
=DhZc
-----END PGP SIGNATURE-----

--zJNavzvWrmM7tdMNTIKi4bu2e92vxC4Jy--


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143610.264584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUH-0001bY-FY; Thu, 17 Jun 2021 06:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143610.264584; Thu, 17 Jun 2021 06:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUH-0001bR-CX; Thu, 17 Jun 2021 06:26:49 +0000
Received: by outflank-mailman (input) for mailman id 143610;
 Thu, 17 Jun 2021 06:26:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlUG-0001bL-1G
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:26:48 +0000
Received: from mail-pl1-x62e.google.com (unknown [2607:f8b0:4864:20::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25e5427f-bf39-46da-855f-6051a1a5cccd;
 Thu, 17 Jun 2021 06:26:46 +0000 (UTC)
Received: by mail-pl1-x62e.google.com with SMTP id x22so836797pll.11
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:26:46 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id i128sm4164159pfc.142.2021.06.16.23.26.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:26:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25e5427f-bf39-46da-855f-6051a1a5cccd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=T2FafqXgYHBid0ZBe1+1JDcoq2ITMR3qeR/772URNKQ=;
        b=IVV/5fAzJUboV2IX/WhvyfcG+Tda0hKqJwIbJhxk8/euHe8sF9QY7sPB101n4t3pq0
         xCibs0rYhiNNqIGSKpxDHiRDbTJYqM6ZAdTvt7xU72VktjPhLVAhBUBErzgE8kE16i9D
         Mr8MI74zoNL4jiAa52zvQRyTf9JTWT/AWijSs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=T2FafqXgYHBid0ZBe1+1JDcoq2ITMR3qeR/772URNKQ=;
        b=NXf+46NnKps8yzgwNGy0xn0I2agWYRPepuDTEODIDATbkAmUKIoWf1sqEAVfUGcpGt
         qfLw2CCtnVBa9KeDwUv5xLZEPhYsbwYgWVvGgSixtJ70DRx5iffxlNIhUh2RYu5zCihA
         xtb5/+ZBq4EHJ2TKZBOExIWShVl+9xxs8XmQVvT5XSeeVJxHYWgaWdSJvMdw5O3dexWt
         C7D8IRHVhwbugXz7vC6Xcx81cppQJ0YZXRYoUsz+16KFvp002aecR/eepmHK+KGYinQu
         GY6ICFBqYyg6VWAPpB4S4Y2LH+Nr+LgScUnyVU8Ddd6T4lpJ5wOp18jDZ1L2CHSYMA/m
         VgUQ==
X-Gm-Message-State: AOAM530rLQg/mpXy8buDZoF+TUr/UhJIfau9fUS5nAvV2+Em6gMnvfX0
	4lk9zm/ZzS4oVZfyAaqqnSUGMA==
X-Google-Smtp-Source: ABdhPJy0c6CkeYmIknk2P7/qxll1LLLQMygbBdhbTcld1tRPD8L5OQgaDNVXxbwz0PNAcGxSRbzFDA==
X-Received: by 2002:a17:90b:502:: with SMTP id r2mr15134236pjz.18.1623911205862;
        Wed, 16 Jun 2021 23:26:45 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 00/12] Restricted DMA
Date: Thu, 17 Jun 2021 14:26:23 +0800
Message-Id: <20210617062635.1660944-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote wifi exploits: [1a], [1b], which show a
full chain of exploits; [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. via the MPU in ATF on some Arm
platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
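The dt-bindings patch in this series ties a device to such a reserved region via the "restricted-dma-pool" compatible. A sketch of a device-tree fragment (node names, addresses, and the 64 MiB size here are purely illustrative, not taken from the binding's example):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* pool that all streaming DMA and allocations bounce through */
	restricted_dma_reserved: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x4000000>;
	};
};

/* hypothetical non-IOMMU PCIe controller restricted to the pool */
pcie@0 {
	memory-region = <&restricted_dma_reserved>;
};
```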

v13:
- Fix xen-swiotlb issues
  - memset in patch 01/12
  - is_swiotlb_force_bounce in patch 06/12
- Fix the dts example typo in reserved-memory.txt
- Add Stefano and Will's Tested-by tag from v12

v12:
Split is_dev_swiotlb_force into is_swiotlb_force_bounce (patch 06/12) and
is_swiotlb_for_alloc (patch 09/12)
https://lore.kernel.org/patchwork/cover/1447254/

v11:
- Rebase against swiotlb devel/for-linus-5.14
- s/mempry/memory/g
- exchange the order of patch 09/12 and 10/12
https://lore.kernel.org/patchwork/cover/1447216/

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later
https://lore.kernel.org/patchwork/cover/1446882/

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Add restricted DMA pool initialization
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   4 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  51 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  59 +++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 250 +++++++++++++-----
 16 files changed, 387 insertions(+), 103 deletions(-)

-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143611.264596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUO-0001sg-P9; Thu, 17 Jun 2021 06:26:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143611.264596; Thu, 17 Jun 2021 06:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUO-0001sX-LS; Thu, 17 Jun 2021 06:26:56 +0000
Received: by outflank-mailman (input) for mailman id 143611;
 Thu, 17 Jun 2021 06:26:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlUO-0001sJ-3n
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:26:56 +0000
Received: from mail-pf1-x42c.google.com (unknown [2607:f8b0:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e30efba-a626-4c4a-8bf6-5a56d9214ed4;
 Thu, 17 Jun 2021 06:26:55 +0000 (UTC)
Received: by mail-pf1-x42c.google.com with SMTP id q25so4159622pfh.7
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:26:55 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id v4sm4315866pgr.65.2021.06.16.23.26.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:26:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e30efba-a626-4c4a-8bf6-5a56d9214ed4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=E+MUWhkl+Ru8Pk+7S8K4jPmnJS5eOQrmgvdL6fMLJbw=;
        b=mma1FOIPi3/YlNrFdBEt4nYURzXDczrPJ/dWyP51Gnu6yNJuHJmCKUAbOFtFw9T1s0
         CqnKQSlxkrNsSEoLmZQrTGgCcmovuvFMg3qeWbXVsey51cG7JnPh89ZNVCtuZcsYBGLu
         s0zPcptkHlVax9UTbEnKIF01g9/TavWHQPZhU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=E+MUWhkl+Ru8Pk+7S8K4jPmnJS5eOQrmgvdL6fMLJbw=;
        b=h8SDWQQ9NmTUq8qlUFIZC171aMb+BA7bH4C+mPFZ8AZ0QgjeDjkVi1U3mJ7Hc/UPfW
         mZ3NuMLWemjW2FQvVVwp5vbHJluMLn9W+B1s7cWVpxCwYvOWEtEtO2ZxMbEtkHCXfSqd
         7ezLPLR9HvaUQrU1LizfQJFin2QIRwAxsTjkLuopEo5oQF1I2mAhMVZ3HDmeNuR0YgKQ
         5VYQwMN5ANU9/zufL70tVNFEQlO3mFa0hkTOp34qsIQttRbm1iDHCJEeSHgRUJ63CU+a
         EiaxGvA/j0CpI7Fo2M3rMJ+QKDMVT50F4QBqEiKEgH0A/U8QIGDUptB8C7TAflqpGz6Y
         EsWQ==
X-Gm-Message-State: AOAM530RLHlCRJNY+jo+z+mCpcBOBOkLH5b30ih3ZGM1gnCDA4nRuSm0
	TGfFJ01nBeQWlue4vOOjgWj0BQ==
X-Google-Smtp-Source: ABdhPJzwzzctenRzDpMSjaIh11QwTfUKyGGsiCAh7Hps6A4qa+DU6rXx2ZvTsmXaTYG5juJhqHJp9Q==
X-Received: by 2002:a63:4915:: with SMTP id w21mr3470580pga.363.1623911214530;
        Wed, 16 Jun 2021 23:26:54 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
Date: Thu, 17 Jun 2021 14:26:24 +0800
Message-Id: <20210617062635.1660944-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, to perform the io_tlb_mem
struct initialization, making the code reusable.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 52e2ac526757..47bb2a766798 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	memset(mem, 0, sizeof(*mem));
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143612.264607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUZ-0002Hb-3Q; Thu, 17 Jun 2021 06:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143612.264607; Thu, 17 Jun 2021 06:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUY-0002HS-UN; Thu, 17 Jun 2021 06:27:06 +0000
Received: by outflank-mailman (input) for mailman id 143612;
 Thu, 17 Jun 2021 06:27:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlUX-0002Eu-AG
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:05 +0000
Received: from mail-pl1-x632.google.com (unknown [2607:f8b0:4864:20::632])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c37b477d-b997-4a16-b3ad-89158780fb1f;
 Thu, 17 Jun 2021 06:27:03 +0000 (UTC)
Received: by mail-pl1-x632.google.com with SMTP id h1so2390834plt.1
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:03 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id o16sm553563pjq.34.2021.06.16.23.26.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c37b477d-b997-4a16-b3ad-89158780fb1f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=KErEGZ0FKTYEit+r+b5b30lp04bWxqVkd7eLPHLFOPc=;
        b=fWSZPQqtkEFc0v1lsfv1E5zaTAKfy+POYliVtKgvQfSwPyhV1IIZne42xoVDVcWoKr
         GePyvrrQbPaghpGzJim6vL3T/YbInWmx2tOKWDB+LuhBVuMSYPGVTD5OuyMs4PtPNYYl
         8ImV7jol1DIg81iCYe8vTZsXuU1yA24ivydu8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=KErEGZ0FKTYEit+r+b5b30lp04bWxqVkd7eLPHLFOPc=;
        b=f+T1uek03Qz91xXh4IDUnGVT/XCQZpwzMo7p60hkb4oyubiCUbuBmIk05zqwi8vvpv
         pT/wFrDdiiOvg8Erc+GSIcYPbCiJVIljtqtgFKsVXrjApgWawq828Z215VKCgU4tMO7z
         tkXYbPZLqXYWSxoIvOmj+X6J4Ba3h1qYQiy6ZqEBX/hZybBY4PYMJAstyLJapLtS7VmA
         pfGKGbI3CW3P9hpfugmU4bHssC/IfmwUSruBPyUj/4MjNVQT/xxKh/xHgSyQxsbeIJ7j
         O0XP4T4HNuYMqQwwL+gkFAFWJihSy83NsnzqSb2/QRLNs0tw0tM73RfooyphCrOKOU9V
         hxsA==
X-Gm-Message-State: AOAM531zHjvkJ5bSJS2ClY4piiSQo2pJzFMUwQNVNqs8LliK+Bv6Rrsu
	zm6Hu6SxsUfEueIik/k7dvyFLw==
X-Google-Smtp-Source: ABdhPJwO+NycHbE4HtdqaCFDvde6JNjarbiAoIdD19GqTdP0e6NuKJYjpDc9SPpkCJD5g3bxI6kSOA==
X-Received: by 2002:a17:902:9004:b029:f0:b40d:38d with SMTP id a4-20020a1709029004b02900f0b40d038dmr3065888plp.85.1623911223159;
        Wed, 16 Jun 2021 23:27:03 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Thu, 17 Jun 2021 14:26:25 +0800
Message-Id: <20210617062635.1660944-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 47bb2a766798..2dba659a1e73 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -671,19 +671,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143615.264618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUg-0002mj-F6; Thu, 17 Jun 2021 06:27:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143615.264618; Thu, 17 Jun 2021 06:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUg-0002mZ-BB; Thu, 17 Jun 2021 06:27:14 +0000
Received: by outflank-mailman (input) for mailman id 143615;
 Thu, 17 Jun 2021 06:27:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlUf-0002Eu-6R
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:13 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bea1b01a-072b-40d0-aa07-f7c5a5c0ba7a;
 Thu, 17 Jun 2021 06:27:12 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id x73so4150093pfc.8
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:12 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id g18sm3933417pfi.199.2021.06.16.23.27.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bea1b01a-072b-40d0-aa07-f7c5a5c0ba7a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=sOkzSft4/PekQY0R4flE8dZ9jaJgeD5vzcmemGtg3cs=;
        b=PgkZf0dqtS9ToNJdUfOn4ouQrVj5xyDb0O2hfA2ZXQeoO+JV+cvuiqavT8vPvjQgKA
         uGeSMQI4OSn0CIUdv+4NKoUnDmfkrwa4J9Y2i2zrNHTx2K4IDo1nSWJLR1VqoLOEzqab
         AJGia54I/c58jg7PTAZHkzSODnw0UdhBZ9VbM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=sOkzSft4/PekQY0R4flE8dZ9jaJgeD5vzcmemGtg3cs=;
        b=nGITiXcIXrp2uL9QPxhOLLa5rhW3PvwRm18al79kzUQagB37QnKN3k1dFud8/XVLsR
         SvxUwArRGp2hxL0uclllvtJc1XxRHAO6f6sw7oLhZn0QG2GmX71tUbKhYlAKq/NpMZen
         hObGVSrRmq2GG2EryFJaq//FInDmCoyGBpYJqRyT2IvJeGqiQlxYVpNzh/i50lANhr9Q
         xDG4jFZmT+Ja6NcdXgSEDN8FEeogkOpM0uLmN8v1mFgo7hAk5Ggc87uBFDWzRIOIpeaH
         ZrzLXEJmaSn4sIE+YUISA4z/mPcQRnTQG1wzZ9zOfx1aoNcerHn4TBsPC7GtqIyYDwes
         jo5g==
X-Gm-Message-State: AOAM5306mYLOCLhaTzNrOj7R4MVon5WQWs5ExrdO7vUnUuUD4hHHbJlq
	YxZLwtkLEA7P3H96pZlfYBgkxA==
X-Google-Smtp-Source: ABdhPJy1KoFoI0QRNP1gDWx99G7HAJ3Ym8BDz2VDQJeU++xyAJ/MV7lswhON0zIpk8ViWjMSgKTtug==
X-Received: by 2002:a63:4554:: with SMTP id u20mr3474032pgk.23.1623911231703;
        Wed, 16 Jun 2021 23:27:11 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Thu, 17 Jun 2021 14:26:26 +0800
Message-Id: <20210617062635.1660944-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always have the pointer to the swiotlb pool used in struct device. This
helps simplify the code once other pools are introduced.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f29839382f81..cb3123e3954d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index ba660731bd25..240d652a0696 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -518,6 +519,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 2dba659a1e73..de79e9437030 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -340,7 +340,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
@@ -431,7 +431,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -508,7 +508,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -559,7 +559,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143620.264629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUq-0003RF-Mv; Thu, 17 Jun 2021 06:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143620.264629; Thu, 17 Jun 2021 06:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlUq-0003R8-JZ; Thu, 17 Jun 2021 06:27:24 +0000
Received: by outflank-mailman (input) for mailman id 143620;
 Thu, 17 Jun 2021 06:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlUp-0003Kj-6y
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:23 +0000
Received: from mail-pf1-x42b.google.com (unknown [2607:f8b0:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea7a6078-d95e-4b1c-8f9d-df3c664084e1;
 Thu, 17 Jun 2021 06:27:22 +0000 (UTC)
Received: by mail-pf1-x42b.google.com with SMTP id y15so4179918pfl.4
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:22 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id d15sm3950443pfd.35.2021.06.16.23.27.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea7a6078-d95e-4b1c-8f9d-df3c664084e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zt9ftl2GgG6zLHGKuFNKcb10jEqN4NFqjv4UdPB0c0s=;
        b=I9lZlzpZbxhq5xBhP7M+6kWMnnmbT4As/+TtYFRnm2RkjBuetycyYLTw15piFzRzmY
         pc7T619hVo2Xs7ezKavzjM6pcXqAYf2ybMqNdh8HJ/uXnc2Isjl3Agjergl9lcUGkfHV
         NtQtJptIhUFitpqIg8s6xtVxaSvrBp8LbMuaE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zt9ftl2GgG6zLHGKuFNKcb10jEqN4NFqjv4UdPB0c0s=;
        b=ObJe0p9KzKTu/cTzkN/KvcvLe5dsFGmVzBCmuOP70TqJCcVOCe0TVkCZmmwXBlxZh9
         /bpTlGnQvLE4STLpqCFWBFRx558voP7r0qjHDRmY+9TalJ7ihMLYDs9UgA+QILyA4dY/
         HNTMloc3i6VZpckMd5EgFVbBwzYasjgAStTo79zQrYtl7cMhGnsMoJLlsgW7u6Z3ZmM5
         hqQXDFUF/tuT/OiJPUb1Ps0c5/aM88ikSgfGrt1nlRFeOiVuD2ZjwGUP5Ufs9r+7qeax
         bJnu1E1Cwl6itQkzBUORAI8QdpCadBo95HOnhdOOqLngFQCzU3I80N0MOzsVtGFs8eTO
         8KOQ==
X-Gm-Message-State: AOAM530vGbOlTBuU0nM7jXahDg/g6CAOJ/5EgSrHKzl6kCyyRYD30ztu
	rYaTae58I4jVZTQGGlbKle7fvg==
X-Google-Smtp-Source: ABdhPJxGCqAZFc2OdjcM7HCpPJ/7nqm7w1lmyGtnWHSp15jnjjZqon+J4Vxv5k08FCH+83ciqwnwRQ==
X-Received: by 2002:a05:6a00:18a7:b029:2f2:b099:7b1a with SMTP id x39-20020a056a0018a7b02902f2b0997b1amr3582676pfh.59.1623911241486;
        Wed, 16 Jun 2021 23:27:21 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Thu, 17 Jun 2021 14:26:27 +0800
Message-Id: <20210617062635.1660944-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to allow different devices to use different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3087d9fa6065..10997ef541f8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143628.264639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlV2-00048k-1v; Thu, 17 Jun 2021 06:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143628.264639; Thu, 17 Jun 2021 06:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlV1-00048Z-VD; Thu, 17 Jun 2021 06:27:35 +0000
Received: by outflank-mailman (input) for mailman id 143628;
 Thu, 17 Jun 2021 06:27:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlV1-0003Kj-9z
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:35 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 786254ea-3782-4d71-82f4-0e59ed355864;
 Thu, 17 Jun 2021 06:27:30 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id v12so2372153plo.10
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:30 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id gk21sm7239729pjb.20.2021.06.16.23.27.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 786254ea-3782-4d71-82f4-0e59ed355864
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=KelymLiaUr2kAqb1cjw3xb5mAGn5EZZnSyDtFIgOmrQ=;
        b=PW7z41PFDNTlIGuzfq05csmDjr4qMLdCaE7tmGdf4gvlKN/znqHktRWbHssSA3dnOt
         FhN+nliA+Vw4JPIVDX6ymQbXExTG7M1+qNG2rsrEvRcC/Dq+1XMIQs/vlMESkfIVcSXG
         55tMikicnW50hUKlHGe6oPGyW7NPEGqY7+RII=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=KelymLiaUr2kAqb1cjw3xb5mAGn5EZZnSyDtFIgOmrQ=;
        b=if2J1fsuqWwMjBVmB3VI5giRXEkNReD4MBYmOc28YmQ1YAQsZvkykoaVWMLuhaTBMD
         lMsw9hG+h+6j9sUicCfzLsUKpgd2dSvoRZGfzJIL6rsSmo0qSrKFDgpzUlSmEq6OQI0F
         dFk2zE4+5eEBggb7l4SuivPPXVOC6mKVQDnKs81bUVWOV2RngiSDh7Zoj98riUtctR4j
         u3s9ZobWfv9B2HsixiOW8IXfs/wAOxtFBEim4fNIl8lsrg2uhioMSqmMXx1LQwgRQmJi
         Tujr7w6SN99nYv3WwacVWW4r+XgtmEmSH0cNy2iKCYm0N70b6aJb00cUbw0ioKcqNBwt
         pTkA==
X-Gm-Message-State: AOAM531bW7zJAYot/nofI9h/88T72DiOQGAR2PY9WnnTnIb+x1/a+syq
	W48BODiAG68P5nIfv7gBwZpgoQ==
X-Google-Smtp-Source: ABdhPJx2kV7GPTVnVnDwmspJJj04dwjp3LHB/TRZ3kQtKfgjzJx3d4GCcV0I6QuHxBDRWzUo37vpdA==
X-Received: by 2002:a17:90a:4e85:: with SMTP id o5mr15147777pjh.22.1623911250177;
        Wed, 16 Jun 2021 23:27:30 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Thu, 17 Jun 2021 14:26:28 +0800
Message-Id: <20210617062635.1660944-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index a9d65fc8aa0e..4b7afa0fc85d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 9662522aa066..be15bfd9e0ee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index de79e9437030..409694d7a8ad 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -664,9 +664,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.288.g62a8d224e6-goog
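[Editorial note, not part of the patch] The per-device semantics introduced above can be sketched in plain C. The struct layouts below are hypothetical stand-ins for the kernel types; only the field name `dma_io_tlb_mem` and the predicate's logic mirror the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures, illustration only. */
struct io_tlb_mem {
	unsigned long nslabs;	/* pool size, irrelevant to the predicate */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;	/* per-device I/O TLB pool */
};

/*
 * After this patch, "is swiotlb active?" becomes a per-device question:
 * a device counts as active exactly when it has an I/O TLB pool attached,
 * rather than consulting the single global io_tlb_default_mem.
 */
bool is_swiotlb_active(const struct device *dev)
{
	return dev->dma_io_tlb_mem != NULL;
}
```

This is what makes the later restricted-pool work possible: two devices can now disagree about whether swiotlb applies to them.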



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143632.264651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVC-0004hK-A6; Thu, 17 Jun 2021 06:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143632.264651; Thu, 17 Jun 2021 06:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVC-0004h8-74; Thu, 17 Jun 2021 06:27:46 +0000
Received: by outflank-mailman (input) for mailman id 143632;
 Thu, 17 Jun 2021 06:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlVB-0003Kj-AL
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:45 +0000
Received: from mail-pj1-x1033.google.com (unknown [2607:f8b0:4864:20::1033])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73a45940-e716-43a8-8f60-8e9b78dd7ec4;
 Thu, 17 Jun 2021 06:27:39 +0000 (UTC)
Received: by mail-pj1-x1033.google.com with SMTP id
 fy24-20020a17090b0218b029016c5a59021fso5511044pjb.0
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:39 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id 35sm3596288pjo.16.2021.06.16.23.27.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73a45940-e716-43a8-8f60-8e9b78dd7ec4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=M9QMk6wNnc5T8dsQ/UUh5z8ZmmRgMMkTubKMJRCpcCQ=;
        b=JnlqSxQ0Uf3OGBOxNwxNPZIoriAC1E5rya6kAaIHMCoU7r6zyVlQyQAs2Cf4khFFTc
         FnTwZMJShLQfAGP+cRvLvkERNMaZ59MsN4kjblSaWpyhFUBx++fS1mFP3+oMtxuFiXLi
         zmBgZVd1RthmZvsv20lFYrEIPwPYmEanEfCKU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=M9QMk6wNnc5T8dsQ/UUh5z8ZmmRgMMkTubKMJRCpcCQ=;
        b=YPCz1TP3D4jI918Uh7DpwB3vxpg082hUvbzXAcXVrBj7O1HMJQmOPxHoK5oWQyHD5F
         TZImo6XWtD6TIsoSY6+nsO9zNy3V/QCLQ2I+/Y0cuONDweiaCVkQ+I+j+C6W5NVVEfSE
         evPpiyji4vvVRDza8K6zV6o1syJl6vm7rQ8+BXgo3IJMWHmkLJ6yQjIwuOMcomBUC7E6
         3k68NQiKPBLnnEAgMOHVaWyOffHxYd5xy1HyCkpnP1fhYU+Ns7Aqo8oY2/P8Ajc/AP7z
         sDNuJ93iHL9RIgwla7ut3TVqxMoRMvlOZqdmQbLrVp41V2cIDdUbmJDXU6RGoo3qoV4u
         SwKw==
X-Gm-Message-State: AOAM530Bg2cNiTCNNlYvxknTiosrBd8f3Ip3EyGl5hpUdsLBj75LCZtm
	NTXUnZmml+WNX0tG6rk21dXzkg==
X-Google-Smtp-Source: ABdhPJztF3OTqYFYqSHpiBI0X201j37FByNBNjiUFjc1rLhaj0s7r4pHVp8888/DL4p7nv1mwIeNTA==
X-Received: by 2002:a17:90b:881:: with SMTP id bj1mr3933119pjb.119.1623911258726;
        Wed, 16 Jun 2021 23:27:38 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
Date: Thu, 17 Jun 2021 14:26:29 +0800
Message-Id: <20210617062635.1660944-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate swiotlb_force into io_tlb_default_mem->force_bounce and
use it to determine whether to bounce the data or not. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   | 11 +++++++++++
 kernel/dma/direct.c       |  2 +-
 kernel/dma/direct.h       |  2 +-
 kernel/dma/swiotlb.c      |  4 ++++
 5 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0c6ed09f8513..4730a146fa35 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	if (dma_capable(dev, dev_addr, size, true) &&
 	    !range_straddles_page_boundary(phys, size) &&
 		!xen_arch_need_swiotlb(dev, phys, dev_addr) &&
-		swiotlb_force != SWIOTLB_FORCE)
+		!is_swiotlb_force_bounce(dev))
 		goto done;
 
 	/*
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..8d8855c77d9a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_bounce: %true if swiotlb bouncing is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_bounce;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_bounce;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..a92465b4eb12 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..4632b0f4f72e 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force_bounce(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 409694d7a8ad..13891d5de8c9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force_bounce = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.288.g62a8d224e6-goog
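[Editorial note, not part of the patch] The propagation of the boot-time policy into the pool, and the per-device query replacing the `swiotlb_force == SWIOTLB_FORCE` checks, can be modeled as a standalone sketch. Types and helper names here are illustrative (the kernel's `is_swiotlb_force_bounce` dereferences the pool unconditionally; the NULL guard below is added only so the sketch is self-contained):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative models of the kernel types, not the real definitions. */
enum swiotlb_force { SWIOTLB_NORMAL, SWIOTLB_FORCE, SWIOTLB_NO_FORCE };

struct io_tlb_mem {
	bool force_bounce;	/* new field added by this patch */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* swiotlb_init_io_tlb_mem() records the global policy in the pool once. */
void init_pool(struct io_tlb_mem *mem, enum swiotlb_force force)
{
	mem->force_bounce = (force == SWIOTLB_FORCE);
}

/* Mapping paths then ask the device's pool instead of the global flag. */
bool is_swiotlb_force_bounce(const struct device *dev)
{
	return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
}
```

The payoff is the same as in the previous patch: once the flag lives in the pool, different pools (and hence different devices) can carry different bounce policies.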



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:27:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143636.264662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVM-0005Fk-ON; Thu, 17 Jun 2021 06:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143636.264662; Thu, 17 Jun 2021 06:27:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVM-0005FB-Jk; Thu, 17 Jun 2021 06:27:56 +0000
Received: by outflank-mailman (input) for mailman id 143636;
 Thu, 17 Jun 2021 06:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlVL-0003Kj-Ay
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:27:55 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3300be8-72b2-4530-8b42-58876100a5c0;
 Thu, 17 Jun 2021 06:27:48 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id k15so4168328pfp.6
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:48 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id b5sm802199pgh.41.2021.06.16.23.27.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3300be8-72b2-4530-8b42-58876100a5c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=vL93q1YbKGNvBEIa7UrABp5ApsQUTAD5UUYTnsdEB3Q=;
        b=CtB2mooENzTC4i0L5rpa1zNICqjYFPsIZqEbEkBKAJa8+qd9w2N9/n64NCFrt9j45u
         Ih26Uj+R6NzicgrImuSogNhtDWEP6supbvO5mJkr1dAyUpDH0cFI8DnqnJG9Y3c4jeK7
         STp7xwS/W8U1n8UjbDMotkZIyUJH71Vl6CqRE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=vL93q1YbKGNvBEIa7UrABp5ApsQUTAD5UUYTnsdEB3Q=;
        b=hrjoAg3MIQBh3+JM8bg4TIaGxNfsREKSMP+kNdlchlKUEe84DISoPsKAlqfHt79DmK
         jlkIIxZ8a3ZjrM+dNapDgrXtYoP9yz4DVF/s3+EUmQ3imTwQtGnQBc9l1tpUGha/ErK5
         a3NtB89VtKlRogD0UuEZL1lOXd+qH59rsOIqW/uSVn12Z90YWc++DISBjxhAORRHgQj9
         M4vtA/2Wshsupch7qVbDz21VtmdIsZ+ARM8H+16FE3p8xxwKNBnBHYDz+ZxVkxZasrjM
         nFaZKHvAZQ0trF+UyjhkTblASQ64zRCfQLBJtKYn8E7hXi1c+dziBkMt0+XoAdrOjilR
         5lnw==
X-Gm-Message-State: AOAM532888g9W36AKE427k2aoLKDHdsiYyMd33N0yhirtU6t6HEtdOBT
	vVTWbvs0gx7OnCfSkrWmBpwOXg==
X-Google-Smtp-Source: ABdhPJwOXzNBlzTLYb2sPRtjxMXEGiwV/XPAbxRS65Sr0Y1ZJRRWjSo71Ji+g4hEO7fWdP+p0wiGoQ==
X-Received: by 2002:a05:6a00:b8a:b029:2ec:761e:33e3 with SMTP id g10-20020a056a000b8ab02902ec761e33e3mr3577341pfj.35.1623911267361;
        Wed, 16 Jun 2021 23:27:47 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Thu, 17 Jun 2021 14:26:30 +0800
Message-Id: <20210617062635.1660944-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size to it for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 13891d5de8c9..89049d021d0d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -432,8 +432,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -488,8 +488,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -530,7 +533,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -544,11 +547,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.288.g62a8d224e6-goog
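[Editorial note, not part of the patch] The per-slot bookkeeping moved into `swiotlb_find_slots` follows a simple rule: each slot in a claimed run records how many bytes of the request remain from that slot onward. A minimal model of that arithmetic (`IO_TLB_SHIFT` of 11, i.e. 2 KiB slots, matches the kernel; the helper name is made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

#define IO_TLB_SHIFT 11	/* log2 of the 2 KiB slot size, as in the kernel */

/*
 * For a mapping of alloc_size bytes whose slot run starts at `index`,
 * slot `i` records alloc_size - ((i - index) << IO_TLB_SHIFT): the full
 * size in the first slot, 2 KiB less in each subsequent slot.
 */
size_t slot_alloc_size(size_t alloc_size, unsigned int index, unsigned int i)
{
	return alloc_size - ((size_t)(i - index) << IO_TLB_SHIFT);
}
```

For example, a 5000-byte mapping starting at slot 4 records 5000, 2952, and 904 bytes in slots 4, 5, and 6 respectively, so an unmap of any slot can recover how far the bounced data extends.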



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:28:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:28:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143642.264673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVW-0005tn-1f; Thu, 17 Jun 2021 06:28:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143642.264673; Thu, 17 Jun 2021 06:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlVV-0005tc-U7; Thu, 17 Jun 2021 06:28:05 +0000
Received: by outflank-mailman (input) for mailman id 143642;
 Thu, 17 Jun 2021 06:28:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlVV-0003Kj-B2
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:28:05 +0000
Received: from mail-pg1-x529.google.com (unknown [2607:f8b0:4864:20::529])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cabf9e5d-2f18-44fd-a019-e817878ad45b;
 Thu, 17 Jun 2021 06:27:56 +0000 (UTC)
Received: by mail-pg1-x529.google.com with SMTP id i34so4089468pgl.9
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:27:56 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id n72sm3947577pfd.8.2021.06.16.23.27.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:27:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cabf9e5d-2f18-44fd-a019-e817878ad45b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=EpbY4t1YqkHyM2/NW3gMAx4yclJU7uBbQcXCZDHmtKs=;
        b=e8/y/1pyNps/ZK6IHXKXnhI106tF8+sDmAp4xvjDP1dmGFb5bTSm6j6t7NqNrsMO3B
         4ffXhEra/SRTwvklP4KTuClY8LZOyfxrzvOiJDGX4UEO5rICKDNcf7JWVSUnF4DMCc0o
         WVIQGrd7NhuRQi9XYwZ7O9V8e1JGPFsrUx+Uw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=EpbY4t1YqkHyM2/NW3gMAx4yclJU7uBbQcXCZDHmtKs=;
        b=njja/+k1coXxuyEyhm6LrtYJtZ9+WnexkIHIrOySa5NhSeNyYnTSPpygRXw7hc8BTx
         eTdXxVNGiw6TA9Hk6ThdrnnSkKLvH9PsjaiT9LIJLtohFFJYzxtpTy8rXuW7/D21Ntmu
         kMgc2QopHvn/t+FZ/S3vPeMU9Z0j85mmXbToUFY9kZLRGBpHFlw0kSmViyVqLjUTLCYg
         6cRswAG5X9N+B17kWVl+gMHwnAncPDDAmVzFc9B2RmlZT8CTwLpO6hY2Tkrx7rShU6er
         nDu4awJDZtlZwU/G6MSUfh4ux9XQ8jRcPxe0STwPQJzFw4BmpaebQNt57PSdEPcV7g3K
         2thQ==
X-Gm-Message-State: AOAM531ji83X/kfpvfEZAbxInJkZtzOKrHVJh8sOFjmb1d+pPc1Uh/KC
	Yr+KYWmpOvoxUT7s0pHbYKcAQQ==
X-Google-Smtp-Source: ABdhPJxztQv+POqTC9QxexHqRq0v81bxjQesrgOuptpj83OtrETFUYQg5rDYRTVu/p7cUu8j7sfWFA==
X-Received: by 2002:a62:31c3:0:b029:2fe:b554:6746 with SMTP id x186-20020a6231c30000b02902feb5546746mr1602855pfx.66.1623911275926;
        Wed, 16 Jun 2021 23:27:55 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Thu, 17 Jun 2021 14:26:31 +0800
Message-Id: <20210617062635.1660944-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 89049d021d0d..ff09341bb9f5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -556,27 +556,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -611,6 +599,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:31:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:31:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143666.264684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYt-0007w4-Hj; Thu, 17 Jun 2021 06:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143666.264684; Thu, 17 Jun 2021 06:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYt-0007vx-EI; Thu, 17 Jun 2021 06:31:35 +0000
Received: by outflank-mailman (input) for mailman id 143666;
 Thu, 17 Jun 2021 06:31:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlWO-0003Kj-CF
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:29:00 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7dabbb9-993c-4f45-8155-19a764e18c85;
 Thu, 17 Jun 2021 06:28:31 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id u190so281586pgd.8
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:28:31 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id o186sm3871495pfb.59.2021.06.16.23.28.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:28:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7dabbb9-993c-4f45-8155-19a764e18c85
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=P0DcuqP8d/CAJT5H3BXTszPLFR8/POeZwYtbY1hzB0BZSzBVE99MQjVTdgEtJMNxza
         8C5V55pgNOpxdtHe4MvP5ZXgHC+nHMSC0PqmBhg9/ppxPOVo1PsZ+EwMp7aMo1+zOeWe
         KqOQ2EnOhWaUxUqrPUYZrFAZhB0ZZNObJmz4g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=OL1NzvCCjWAf5QIEGzrv85J5YltTJtW3z9eKdoi67Pf3v2lEoaf0UtbR7SDhVMOxO3
         3OaPNZSxmk5Kl9FfALytDKlPuUYCib2J7dg+CcNA2pxrzaZwUVsP/nUgQE/n7CVXM1+c
         xzK+e94BtRIhZ6WoA/B5fjNnnA0j2mXAFQbfZKNWwouIhKGA5/CDRxZbjrDzpSzrz+1E
         KCd9TglMFzL0Vy6udX8fBI3w9RLZ8tu1fiArJ8BnNtNJBQMC2wIB/HGpOgJg/DcYlm6Q
         fDH68mSUolgHdrvSdpkj+kVswHaKUSQIRbhbap+c0dISC2BSVy7eLLMu8N7/R1P16jfo
         pgSg==
X-Gm-Message-State: AOAM531XBaSPkApAP8EekcSARraMQDDNlJPzVEW4LZtW2GXpCouKVyRM
	zG3XNrBJB1Pzqgd7YwGaT8253w==
X-Google-Smtp-Source: ABdhPJxBUnvdEKrFvX9gwkhRYpe6iM3TMIr0SMX+wx5+SenBzwe41Fi6FDk/v+JnW0WePmCPk48T7g==
X-Received: by 2002:a63:f009:: with SMTP id k9mr3590765pgh.356.1623911310959;
        Wed, 16 Jun 2021 23:28:30 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 12/12] of: Add plumbing for restricted DMA pool
Date: Thu, 17 Jun 2021 14:26:35 +0800
Message-Id: <20210617062635.1660944-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, we look up its device node and set up
the restricted DMA pool when a restricted-dma-pool reserved-memory region
is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..cdf700fba5c4 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1022,6 +1023,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 6cb86de404f1..e68316836a7a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..25cebbed5f02 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:31:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143669.264695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYw-0008E8-Q4; Thu, 17 Jun 2021 06:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143669.264695; Thu, 17 Jun 2021 06:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYw-0008Dz-MW; Thu, 17 Jun 2021 06:31:38 +0000
Received: by outflank-mailman (input) for mailman id 143669;
 Thu, 17 Jun 2021 06:31:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlWE-0003Kj-C3
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:28:50 +0000
Received: from mail-pj1-x102c.google.com (unknown [2607:f8b0:4864:20::102c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9d90548-3429-4c6d-88ca-849efa7e8a33;
 Thu, 17 Jun 2021 06:28:22 +0000 (UTC)
Received: by mail-pj1-x102c.google.com with SMTP id h16so3186610pjv.2
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:28:22 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id o1sm3950751pjf.56.2021.06.16.23.28.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:28:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9d90548-3429-4c6d-88ca-849efa7e8a33
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=Myr0b5NJr1PyQ0pW9//q1QpUhnnMsBxVbgNFzqZKi6jRhYN3OXVXNzDbwwEnJ3vnIQ
         tGnTGpWMqJ5MxAv3CpCyi7gQ6p5P9yt5UwV7uci1/sAVn7FxhiIICuN7ffqDwVtmweoJ
         xnKMkVtNwRO3vARX46foyYNRbe69Lf2KTg2Lo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=ngtaioe0QxzhwLbOhTQnP2LPXbzoW/1twZ11a+EJUt1CIesvLYcilCK6tjvzPk7m7H
         yYMygl0W+8q7cx/nwuyvj1s0U7aDeGMEJQFeJApRbWcgvjoB3FDEk3CWOTMk4ZSSASCC
         xLYMpHkuOogeDukatSZxdej6fiuUaXrzHqS5ORwVTGkTbiamxoSUq4Li+UGwxOiUhUjB
         nF9MDKIBUoNUDAgnP2uhLPAbhTWy5F2OUtEHjjdh5HdNmoZ7AUCUyWYjKUug1wBUs8/e
         AaJE9reW8VbZobTEharKGvkl+Gbm5iGecK0/CvbTY5JhsfIHzUPZbp0BrrjfC1UhwQzs
         Lz5Q==
X-Gm-Message-State: AOAM533PGHqBsAIyhjkxN/wM9kAVnXHoVWlvRzDtOjbGfP3BwFXWLXgy
	FOrm7nUkq6C6wBjpFJboL9b03Q==
X-Google-Smtp-Source: ABdhPJy79i1TVMii//XQctad3ri2e3PEZ5kRZnre2T9swQRHDNRr2UXPLKrhAGeBBPSh8ySQDv1aNg==
X-Received: by 2002:a17:902:b609:b029:118:7acc:517a with SMTP id b9-20020a170902b609b02901187acc517amr3088068pls.67.1623911301959;
        Wed, 16 Jun 2021 23:28:21 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 11/12] dt-bindings: of: Add restricted DMA pool
Date: Thu, 17 Jun 2021 14:26:34 +0800
Message-Id: <20210617062635.1660944-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool child node of the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..39b5f4c5a511 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:31:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:31:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143670.264700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYx-0008GV-6V; Thu, 17 Jun 2021 06:31:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143670.264700; Thu, 17 Jun 2021 06:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlYw-0008G3-VR; Thu, 17 Jun 2021 06:31:38 +0000
Received: by outflank-mailman (input) for mailman id 143670;
 Thu, 17 Jun 2021 06:31:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlW4-0003Kj-Bl
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:28:40 +0000
Received: from mail-pj1-x1035.google.com (unknown [2607:f8b0:4864:20::1035])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a412e1c8-bb76-4822-a35d-98ee546d692d;
 Thu, 17 Jun 2021 06:28:14 +0000 (UTC)
Received: by mail-pj1-x1035.google.com with SMTP id
 m15-20020a17090a5a4fb029016f385ffad0so410336pji.0
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:28:14 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id w15sm3701122pjg.32.2021.06.16.23.28.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:28:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a412e1c8-bb76-4822-a35d-98ee546d692d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YcJPdW3IJKdu8Vg2GAKBiOP1MfG4JF6ejtcxzo1Zojw=;
        b=WLRzcZL+Sp/o8JxRFJy9NQzEr/G3yFoLjJpxsgkVdU9mIalPFmQgD7zkZHyjNgav1G
         t4ZHSy7CENJJjqq74/Nq6GDw0sr3iB9qBLUdGrpnfiVyc0+bDCIwUCBGA/hTAmTh6xBM
         onFKTV0nEVzcKHlXT23XylR9k1E7YfCcKx5qg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YcJPdW3IJKdu8Vg2GAKBiOP1MfG4JF6ejtcxzo1Zojw=;
        b=bLDqZxJ5DiB0RC1vDoSdV2mh34CHJG+Lt5KAcWBdN1/Eu1rxgrp5X5GQTYPyFhAGTL
         7YBUbSOKHyP35yiB4iss9rt3zAwVmIw8Rj7j/D+S8Cz8QmzTgBtLR0nYT+WylMt23k3v
         wmk7cI5NA/pbsFXgFdfK8l+WX4z6Wqkpzsoq9NXj/EmkG2v9zZWRx/zU77EHTuDWg2YL
         3rKHYH8OcCbXsZoBAksNfINECjD9vh5eVrkDplIeVgmKtsxD/sx1rBzs2TxVzLhizA4q
         IZlmUWtZW/c1VBKvVK+lcrDl55eFXHMWXl2eqiLLit+HeKlOFyvuIIScaHmobPcZO1C/
         2YdQ==
X-Gm-Message-State: AOAM533rNL5SxdtnZuSpc/8Z9lvPUqPWaN529oeKBlfMWVfLntm0Y6hN
	SrUnTPcKwC8s5S9Td/EJKI8n5w==
X-Google-Smtp-Source: ABdhPJw8I1qc1M5Vlv8eVJW74vPd1y0lVITad0P61ht+zfTIsbm4XCl1ZXk2aG0LNG9E7FsizxGSiQ==
X-Received: by 2002:a17:90a:7e0a:: with SMTP id i10mr3834074pjl.133.1623911293360;
        Wed, 16 Jun 2021 23:28:13 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 10/12] swiotlb: Add restricted DMA pool initialization
Date: Thu, 17 Jun 2021 14:26:33 +0800
Message-Id: <20210617062635.1660944-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of the swiotlb command-line settings, the restricted DMA pool is
preferred if available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index a73fad460162..175b6c113ed8 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6499cfbfe95f..d4099f03b2f0 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -736,4 +743,73 @@ bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 	return true;
 }
 
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force_bounce = true;
+		mem->for_alloc = true;
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:31:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:31:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143671.264716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlZ0-0000Ph-JI; Thu, 17 Jun 2021 06:31:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143671.264716; Thu, 17 Jun 2021 06:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlZ0-0000PT-EK; Thu, 17 Jun 2021 06:31:42 +0000
Received: by outflank-mailman (input) for mailman id 143671;
 Thu, 17 Jun 2021 06:31:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlVz-0003Kj-BZ
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:28:35 +0000
Received: from mail-pg1-x530.google.com (unknown [2607:f8b0:4864:20::530])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c52e1b07-d567-4af5-beb3-651326e28397;
 Thu, 17 Jun 2021 06:28:05 +0000 (UTC)
Received: by mail-pg1-x530.google.com with SMTP id v7so4120600pgl.2
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:28:05 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:e349:a6ae:d3d0:1621])
 by smtp.gmail.com with UTF8SMTPSA id m2sm2849224pgv.40.2021.06.16.23.27.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:28:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c52e1b07-d567-4af5-beb3-651326e28397
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=1kVT61e7FOIemS+OQDjJ59YjuPCYLuBoi6RHFjpPHNc=;
        b=BeDo/CLAwGt69GSevGZ+Jlur9FOYV0+JT00bUgMqV5zvCmf+Dt7QmcjNX+ehz6NyjI
         F04Xs2LTe97xscpsVqqr/XlmKi3kwAWAdD8VfuWX8St0+oP+/V8w3zQAC/Y40is3nZ2Q
         JLv+Djj6EkAyNlAAGVrAoK/NpQzcJb6TjJx1A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=1kVT61e7FOIemS+OQDjJ59YjuPCYLuBoi6RHFjpPHNc=;
        b=el5co+LeuRjyHH/zcecMg8KJ9yda9ixiewpuLkEZHXwh5vcPV92MTgwzOk2jm6RM/U
         iV90IKF7NQZqxGoZOni01TqiTHruGrZ9fyvGhpJUm8dWI83RUJp748Cg127Ex5ZeF0O2
         G9emduukYq2ugIuZ19twz9llyP9nW7NuMY9aWCnNTNUR6TckWK9j9deMr7MQZHYXmZX9
         8wsIYx0hkrjCB8g03+KdAMsOim6THfYBy8MofQ7OXx45HE8gjCG/5DC95q3Lb2mnmPl/
         gDtaXHQ7z3vfro82JfXv61j3hGfR6QHN9m6fSmGu6eUK9FIBlkgYBtjXFEQ81iczqs+c
         2CKg==
X-Gm-Message-State: AOAM532y6r6u2EFrrD1T03dbSuYSLIatzoxpgYSshrY9BKaylopDIYnX
	/yqfaGIhpK/dPLDEhh/aGLQAcQ==
X-Google-Smtp-Source: ABdhPJw4ZLie0RfzFHyNTNnSLgPVeY+nKtsiyTkWGQJcdSMRPIrJJv1FueHvqQhlVXurM+pnWdVimw==
X-Received: by 2002:a63:e343:: with SMTP id o3mr3477191pgj.416.1623911284706;
        Wed, 16 Jun 2021 23:28:04 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v13 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Thu, 17 Jun 2021 14:26:32 +0800
Message-Id: <20210617062635.1660944-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to
support memory allocation from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent instead for atomic coherent allocations.
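The allocation path this adds (swiotlb_find_slots feeding swiotlb_alloc) can be pictured with a small self-contained model; this is a hedged Python sketch of the idea, not the kernel code: search a free-slot array for a contiguous run, mark it used, and return the slot address, mirroring slot_addr() and swiotlb_release_slots() loosely.

```python
IO_TLB_SHIFT = 11               # kernel slot size: 2 KiB per IO TLB slot
IO_TLB_SIZE = 1 << IO_TLB_SHIFT

class RestrictedPoolModel:
    """Toy model of a restricted swiotlb pool (illustrative only)."""

    def __init__(self, base, nslabs):
        self.base = base                 # physical base of the pool
        self.free = [True] * nslabs      # one flag per slot

    def alloc(self, size):
        """Find a contiguous run of free slots; return its address or None."""
        nslots = -(-size // IO_TLB_SIZE)          # ceil(size / slot size)
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == nslots:
                start = i - nslots + 1
                for j in range(start, i + 1):
                    self.free[j] = False          # claim the run
                return self.base + (start << IO_TLB_SHIFT)  # cf. slot_addr()
        return None                               # pool exhausted

    def free_at(self, addr, size):
        """Release a previously allocated run (cf. swiotlb_release_slots)."""
        start = (addr - self.base) >> IO_TLB_SHIFT
        nslots = -(-size // IO_TLB_SIZE)
        for j in range(start, start + nslots):
            self.free[j] = True
```

The real code additionally honours alignment masks and wraps the search at the device's last-used index; the model keeps only the contiguous-run search that makes swiotlb_alloc/swiotlb_free work.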

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 include/linux/swiotlb.h | 26 ++++++++++++++++++++++
 kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
 kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
 3 files changed, 99 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8d8855c77d9a..a73fad460162 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
+ * @for_alloc:  %true if the pool is used for memory allocation
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -96,6 +97,7 @@ struct io_tlb_mem {
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
+	bool for_alloc;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -156,4 +158,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->for_alloc;
+}
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a92465b4eb12..2de33e5d302b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    is_swiotlb_for_alloc(dev)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+		return page;
+	}
+
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
@@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ff09341bb9f5..6499cfbfe95f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -463,8 +463,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -703,3 +704,36 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:32:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:32:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143695.264727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlZm-0001vc-TP; Thu, 17 Jun 2021 06:32:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143695.264727; Thu, 17 Jun 2021 06:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlZm-0001vV-QS; Thu, 17 Jun 2021 06:32:30 +0000
Received: by outflank-mailman (input) for mailman id 143695;
 Thu, 17 Jun 2021 06:32:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlZl-0000uz-UN
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:32:30 +0000
Received: from mail-il1-x130.google.com (unknown [2607:f8b0:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb68ce8f-68ba-47f5-88f7-095c4c4af3f3;
 Thu, 17 Jun 2021 06:32:24 +0000 (UTC)
Received: by mail-il1-x130.google.com with SMTP id b9so4482162ilr.2
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:32:24 -0700 (PDT)
Received: from mail-io1-f48.google.com (mail-io1-f48.google.com.
 [209.85.166.48])
 by smtp.gmail.com with ESMTPSA id r11sm2295272ilm.23.2021.06.16.23.32.22
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:32:22 -0700 (PDT)
Received: by mail-io1-f48.google.com with SMTP id s19so1916544ioc.3
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:32:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb68ce8f-68ba-47f5-88f7-095c4c4af3f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=IqsCA/yjikDS1aCYzllqiH9HoLVcQQhYIn5qSeonABE=;
        b=kDczImboY2eVAJ3+sYnRR25HrludaTlIxUHPV26t7zvADYlkn+EAot8vmNZFxS6ckk
         NyqXP6vgSOkhkXMigW38BKTSHGkozev95CT8sd0RyYFWHSEXQf1vzK9yA5xbK5UPbuEq
         Bn0NNhaz+a2zeBhstpYyx1baPQ8VKdD07y9nQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=IqsCA/yjikDS1aCYzllqiH9HoLVcQQhYIn5qSeonABE=;
        b=IhmNtFX1tIwU9tbSPxClx+seC0UWS8Jlll/Eu/2WdAwKArfgEjFety40FZZwZZuVzM
         4+q6C1sLow3SarBekhxnGj89u6b4uBGnNNzRvCtgbLfKEE/zeoFbtCCjY3NbAdt0OUCm
         lGQ/OHfdPzlwJyL7eLl/z869DYGdCLfKph6KK4GocHzhYxrDbPTSw1bK3Z2ktou0t03n
         bYWjHymvY7RD+BNG2AcOPqQ2s2f135fRPclxfbbfMY+KsbLGqyPI8HUVTBbWGKD5T7uQ
         C23T+iw8hhhouVofX37ABH2FgZfHVNKrJs2h1UqAT3I+ILz7zZzqAe9hQ1j0t4+tSZwh
         +jTw==
X-Gm-Message-State: AOAM532/nbPIbBZEJtptneu6uaJwsv+f4R85M9N60d8nUC0RE0o6YE89
	551uXtkFAzo98ubqAcV8MwsTCdwQ1wNCgQ==
X-Google-Smtp-Source: ABdhPJy+Ylvb07m6VZSwygcUX6MU05B4tTk3aXHsBN0R3Jdor4+k4Db742QIvSVsg7bzEzOuXW9mSg==
X-Received: by 2002:a05:6e02:1e0d:: with SMTP id g13mr2504706ila.178.1623911544015;
        Wed, 16 Jun 2021 23:32:24 -0700 (PDT)
X-Received: by 2002:a92:c852:: with SMTP id b18mr352877ilq.18.1623911531698;
 Wed, 16 Jun 2021 23:32:11 -0700 (PDT)
MIME-Version: 1.0
References: <20210617062635.1660944-1-tientzu@chromium.org>
In-Reply-To: <20210617062635.1660944-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 17 Jun 2021 14:32:00 +0800
X-Gmail-Original-Message-ID: <CALiNf2_qF7OY0LHToNYx0E79BWMt2n7=nepPPLf+7YV3=KFEyw@mail.gmail.com>
Message-ID: <CALiNf2_qF7OY0LHToNYx0E79BWMt2n7=nepPPLf+7YV3=KFEyw@mail.gmail.com>
Subject: Re: [PATCH v13 00/12] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v13: https://lore.kernel.org/patchwork/cover/1448001/


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 06:49:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 06:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143711.264739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlpo-0003dA-Bf; Thu, 17 Jun 2021 06:49:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143711.264739; Thu, 17 Jun 2021 06:49:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltlpo-0003d3-8U; Thu, 17 Jun 2021 06:49:04 +0000
Received: by outflank-mailman (input) for mailman id 143711;
 Thu, 17 Jun 2021 06:49:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V3wh=LL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ltlpn-0003cx-9t
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 06:49:03 +0000
Received: from mail-pl1-x634.google.com (unknown [2607:f8b0:4864:20::634])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32eff0be-56c6-4a8a-936c-0ce81a5f744b;
 Thu, 17 Jun 2021 06:49:02 +0000 (UTC)
Received: by mail-pl1-x634.google.com with SMTP id x10so2413863plg.3
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:49:02 -0700 (PDT)
Received: from mail-pg1-f169.google.com (mail-pg1-f169.google.com.
 [209.85.215.169])
 by smtp.gmail.com with ESMTPSA id o34sm759154pgm.6.2021.06.16.23.49.01
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 16 Jun 2021 23:49:01 -0700 (PDT)
Received: by mail-pg1-f169.google.com with SMTP id e22so4125998pgv.10
 for <xen-devel@lists.xenproject.org>; Wed, 16 Jun 2021 23:49:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32eff0be-56c6-4a8a-936c-0ce81a5f744b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=IqsCA/yjikDS1aCYzllqiH9HoLVcQQhYIn5qSeonABE=;
        b=GiFPgetacPTJVsGZ3MFWgSvQPnCdoTJhc4FxcdhnTR1mIKLWNYkC5qsclOh1Nu4T0t
         2JBHLcEHEvZiTaAAwPJzRshEva/oe1afpxIyCNXuS6Yc5SEiWBAid4ULBNFnG08bi+4b
         5Y3w62S7YQM/biDmcuEcWF0r86zBUuRahetbQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=IqsCA/yjikDS1aCYzllqiH9HoLVcQQhYIn5qSeonABE=;
        b=mPWP1Kc2tL77A8f7hWt5yRVzjRAMHN689evo9lxfp8Mjyne7cPNWGC7lgxauIGLB5Y
         X+otdIYTnEMjD88glFo9JHmaxt36LaL6IJK1TCxuAWjsmMqvyNuH82+MzOc9i4/tb/2v
         6P26uQI/3fvMVKFyG3sOEwW/ZmIzgai05eaH3fhL1P2PAzqNbxiT9tApK+akp7wnD8bo
         sL0E9g9Iz54qUmuzeQgxD/GjgH90tXtjvFKhsFixzow9ePU7NIPILgOS0WaKP+//RHAG
         YdDwhQvZhuyuz8mgNUihSQ3CFRIHAo3p98R6Y/qtoSzx5IYaAD3+TUFEqC9vY+Qy8/4b
         2Nqg==
X-Gm-Message-State: AOAM533vRMOEhYFlOZFFiHSUGHricsW4ZEsRWJMK6gvfKyux7d/YdYdl
	mKckziAImTfsg+SuXuD2jzrxWp+VB7Fung==
X-Google-Smtp-Source: ABdhPJxJXQ84cUseu++86IpuRVDPr4p/6SI3aPiykIMrIlz8fLyXTyibCL6Gcu8XdqmmrDvw5SfHhQ==
X-Received: by 2002:a17:90a:6e4d:: with SMTP id s13mr3944782pjm.29.1623912541680;
        Wed, 16 Jun 2021 23:49:01 -0700 (PDT)
X-Received: by 2002:a05:6602:2344:: with SMTP id r4mr2559955iot.69.1623912068770;
 Wed, 16 Jun 2021 23:41:08 -0700 (PDT)
MIME-Version: 1.0
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616120837.GA22783@willie-the-truck>
In-Reply-To: <20210616120837.GA22783@willie-the-truck>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 17 Jun 2021 14:40:57 +0800
X-Gmail-Original-Message-ID: <CALiNf28SSxhs_+9Oq=pyOc7OWWDyWrtZLUqXKQKin6dRyXwo=w@mail.gmail.com>
Message-ID: <CALiNf28SSxhs_+9Oq=pyOc7OWWDyWrtZLUqXKQKin6dRyXwo=w@mail.gmail.com>
Subject: Re: [PATCH v12 00/12] Restricted DMA
To: Will Deacon <will@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v13: https://lore.kernel.org/patchwork/cover/1448001/


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 08:00:38 2021
Subject: Re: hypercalls with 64-bit results
To: Juergen Gross <jgross@suse.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf751696-5c9b-b465-67d0-544245d8563f@suse.com>
Date: Thu, 17 Jun 2021 10:00:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>

On 17.06.2021 06:55, Juergen Gross wrote:
> On 16.06.21 18:04, Jan Beulich wrote:
>> Since hypercalls from the tool stack are based on ioctl(), and since
>> ioctl() has a return type of "int", I'm afraid there's no way we can
>> deal with this by adjusting function return types in the libraries.
>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>> subops (for those cases where potentially large values get returned).
> 
> I think we can just use a multicall in libxc to wrap the affected
> operations.

Hmm, we might, if we're happy for these to then not work in HVM domains
(PVH Dom0, which still is experimental only, or PVH/HVM DomU-s using
the libraries for some purpose), or if we finally wire up multicalls in
the HVM case (there ought to be a reason why they aren't, but I have no
idea what that is).

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 08:04:03 2021
Subject: Re: hypercalls with 64-bit results
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <d38cd3a3-6139-5ebd-6a78-debc20c3b2bf@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1adf28a8-a0fd-4ea4-bbd0-52734630d52b@suse.com>
Date: Thu, 17 Jun 2021 10:03:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <d38cd3a3-6139-5ebd-6a78-debc20c3b2bf@citrix.com>

On 16.06.2021 20:15, Andrew Cooper wrote:
> On 16/06/2021 17:04, Jan Beulich wrote:
>> All,
>>
>> several years back do_memory_op() in libxc was changed to have "long"
>> return type. This is because some of the sub-ops return potentially
>> large values as the hypercall return value (i.e. not in an argument
>> structure field). This change, however, didn't have the intended
>> effect from all I can tell, which apparently manifests in the present
>> two remaining ovmf failures in the staging osstest flights. Anthony
>> tells me that ovmf as of not very long ago puts the shared info page
>> at a really high address, thus making the p2m of the guest very large.
>> Its size gets returned by XENMEM_maximum_gpfn, as function return
>> value.
>>
>> Since hypercalls from the tool stack are based on ioctl(), and since
>> ioctl() has a return type of "int", I'm afraid there's no way we can
>> deal with this by adjusting function return types in the libraries.
>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>> subops (for those cases where potentially large values get returned).
>>
>> Until we manage to deal with this I wonder whether we should suggest
>> to the ovmf folks to undo that change. I'm anyway not really
>> convinced this aggressive enlarging of the p2m is a good idea. There
>> are a number of cases in the hypervisor where we try to reduce GFN
>> ranges based on this upper bound, and there in particular is a loop
>> in mem-sharing code going all the way up to that limit. EPT P2M
>> dumping also has such a loop.
> 
> There are multiple things in here which are disappointing, but I think
> they've mostly been known already.
> 
> But I do agree that this is very much another nail in the coffin of the
> ioctl ABI.
> 
> For ABIv2, there are many changes needed, and this ioctl ABI was never
> going to survive, for other reasons too. Obviously, we can't wait for
> ABIv2 to fix this immediate issue.
> 
> However, I think it might be reasonable to wait for ABIv2 until we can
> reasonably support VMs larger than 8T(?).

But it's not just XENMEM_maximum_gpfn that's affected; that's just the
one pointing out the underlying issue. Plus if so, shouldn't we avoid
returning values that are going to be truncated (and, as can be seen
here, then get perhaps recognized as error codes up the call chain)?

> For now, I'd agree with trying to undo the change in OVMF.

Anthony, thoughts?

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 08:05:13 2021
Subject: Re: hypercalls with 64-bit results
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
 <cf751696-5c9b-b465-67d0-544245d8563f@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <75d84a60-1857-f62c-23f5-eb3bfa3b93b2@suse.com>
Date: Thu, 17 Jun 2021 10:05:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <cf751696-5c9b-b465-67d0-544245d8563f@suse.com>

On 17.06.21 10:00, Jan Beulich wrote:
> On 17.06.2021 06:55, Juergen Gross wrote:
>> On 16.06.21 18:04, Jan Beulich wrote:
>>> Since hypercalls from the tool stack are based on ioctl(), and since
>>> ioctl() has a return type of "int", I'm afraid there's no way we can
>>> deal with this by adjusting function return types in the libraries.
>>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>>> subops (for those cases where potentially large values get returned).
>>
>> I think we can just use a multicall in libxc to wrap the affected
>> operations.
> 
> Hmm, we might, if we're happy for these to then not work in HVM domains
> (PVH Dom0, which still is experimental only, or PVH/HVM DomU-s using
> the libraries for some purpose), or if we finally wire up multicalls in
> the HVM case (there ought to be a reason why they aren't, but I have no
> idea what that is).

Me neither, especially as on Arm they are supported.

And TBH: PVH Dom0 without multicalls might be hard anyway.


Juergen
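
[Editorial aside: the multicall idea can be sketched as below, with a mocked hypervisor. The struct layout is illustrative, not the real `multicall_entry_t` ABI from Xen's public headers, and `mock_multicall` is a hypothetical stand-in for `HYPERVISOR_multicall()`. The point is that each batched call writes its result into a full-width field in the argument buffer, which privcmd copies back to userspace, so nothing is squeezed through the int-typed ioctl() return value.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative layout; the real multicall_entry_t lives in Xen's
 * public headers. */
struct multicall_entry {
    uint64_t op;        /* hypercall number */
    int64_t result;     /* full-width result slot written by Xen */
    uint64_t args[6];
};

/* Hypothetical stand-in for HYPERVISOR_multicall(): pretend each entry
 * is a XENMEM_maximum_gpfn yielding a GFN above the 32-bit range. */
static void mock_multicall(struct multicall_entry *calls, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        calls[i].result = INT64_C(0x1FFFFFFFF);
}

/* The library-side wrapper: issue one batched call and read the result
 * from the copied-back buffer rather than the ioctl() return value. */
int64_t wrapped_maximum_gpfn(void)
{
    struct multicall_entry call;

    memset(&call, 0, sizeof(call));
    mock_multicall(&call, 1);
    return call.result;   /* arrives intact, no int truncation */
}
```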

From xen-devel-bounces@lists.xenproject.org Thu Jun 17 08:08:33 2021
Subject: Re: hypercalls with 64-bit results
To: Juergen Gross <jgross@suse.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
 <cf751696-5c9b-b465-67d0-544245d8563f@suse.com>
 <75d84a60-1857-f62c-23f5-eb3bfa3b93b2@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c151faff-5ff2-af59-fe95-1675aeb5be33@suse.com>
Date: Thu, 17 Jun 2021 10:08:19 +0200
In-Reply-To: <75d84a60-1857-f62c-23f5-eb3bfa3b93b2@suse.com>

On 17.06.2021 10:05, Juergen Gross wrote:
> On 17.06.21 10:00, Jan Beulich wrote:
>> On 17.06.2021 06:55, Juergen Gross wrote:
>>> On 16.06.21 18:04, Jan Beulich wrote:
>>>> Since hypercalls from the tool stack are based on ioctl(), and since
>>>> ioctl() has a return type of "int", I'm afraid there's no way we can
>>>> deal with this by adjusting function return types in the libraries.
>>>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>>>> subops (for those cases where potentially large values get returned).
>>>
>>> I think we can just use a multicall in libxc to wrap the affected
>>> operations.
>>
>> Hmm, we might, if we're happy for these to then not work in HVM domains
>> (PVH Dom0, which still is experimental only, or PVH/HVM DomU-s using
>> the libraries for some purpose), or if we finally wire up multicalls in
>> the HVM case (there ought to be a reason why they aren't, but I have no
>> idea what that is).
> 
> Me neither, especially as on Arm they are supported.
> 
> And TBH: PVH Dom0 without multicalls might be hard anyway.

Okay, let me see whether, while trying to wire them up, I run into
particular issues.

Jan
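The truncation problem under discussion can be sketched as follows. This is an illustrative stand-alone sketch, not the libxc implementation: the types and constants are simplified stand-ins modelled on the public Xen interface headers, and do_multicall() here is a stub standing in for the real hypercall path (the privcmd ioctl in libxc). The point it shows is that ioctl()'s int return can only report success or failure, while the full 64-bit value is read back from the multicall entry's result field.

```c
#include <stdint.h>

/* Simplified stand-ins for the public Xen interface
 * (cf. xen/include/public/xen.h); not the actual libxc code. */
typedef uint64_t xen_ulong_t;
typedef int64_t  xen_long_t;

struct multicall_entry {
    xen_ulong_t op;
    xen_long_t  result;   /* full-width result, not squeezed through int */
    xen_ulong_t args[6];
};

#define __HYPERVISOR_memory_op  12
#define XENMEM_maximum_ram_page 2

/* Stub standing in for the real hypercall path.  It models a host with
 * more than 2^32 pages, so the value could not be carried in ioctl()'s
 * int-typed return. */
static int do_multicall(struct multicall_entry *calls, unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++) {
        if (calls[i].op == __HYPERVISOR_memory_op &&
            calls[i].args[0] == XENMEM_maximum_ram_page)
            calls[i].result = (xen_long_t)1 << 36;
        else
            return -1;   /* unrecognized operation in this sketch */
    }
    return 0;
}

/* Wrap the memory_op in a one-entry multicall: the call itself only
 * reports success/failure, and the 64-bit value is read back from the
 * entry's result field. */
static int64_t get_max_ram_page(void)
{
    struct multicall_entry mc = {
        .op   = __HYPERVISOR_memory_op,
        .args = { XENMEM_maximum_ram_page },
    };

    if (do_multicall(&mc, 1) < 0)
        return -1;

    return mc.result;
}
```

As noted in the thread, this workaround only helps where __HYPERVISOR_multicall is actually available to the calling domain.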



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 08:18:29 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162870-mainreport@xen.org>
Subject: [libvirt test] 162870: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=56dcdec1ac8104f94371c210585bab91eb36395d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 08:18:21 +0000

flight 162870 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              56dcdec1ac8104f94371c210585bab91eb36395d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  342 days
Failing since        151818  2020-07-11 04:18:52 Z  341 days  334 attempts
Testing same since   162870  2021-06-17 04:20:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 62434 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 09:15:56 2021
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c425919-9c43-4373-b874-6fe194e317e6
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, "Cooper, Andrew"
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>, George Dunlap <george.dunlap@citrix.com>
Subject: RE: [PATCH 2/3] x86/mtrr: move epte_get_entry_emt to p2m-ept.c
Thread-Topic: [PATCH 2/3] x86/mtrr: move epte_get_entry_emt to p2m-ept.c
Thread-Index: AQHXU+iAXrzV1nvK+0GBuqNHaWnAzasYCrog
Date: Thu, 17 Jun 2021 09:15:33 +0000
Message-ID: <MWHPR11MB1886DF80D81BAD5050A7DD8C8C0E9@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-3-roger.pau@citrix.com>
In-Reply-To: <20210528173935.29919-3-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.142.24]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 68bfa03f-a476-4c89-a62d-08d93170763f
x-ms-traffictypediagnostic: MWHPR1101MB2174:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR1101MB2174466D47A958244E1297948C0E9@MWHPR1101MB2174.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2582;
x-ms-exchange-senderadcheck: 1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68bfa03f-a476-4c89-a62d-08d93170763f
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jun 2021 09:15:33.7887
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: F6SZ4rIaQveYiIxpibKs3MdXjQ/vkP+GDNZJsRQhM2JCBX21DXlOUFVM79fyIhFyETzcCTn/2UntcMR5Z9H7aw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR1101MB2174
X-OriginatorOrg: intel.com

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Saturday, May 29, 2021 1:40 AM
> 
> This is an EPT specific function, so it shouldn't live in the generic
> mtrr file. Such movement is also needed for future work that will
> require passing a p2m_type_t parameter to epte_get_entry_emt, and
> making that type visible to the mtrr users is cumbersome and
> unneeded.
> 
> Moving epte_get_entry_emt out of mtrr.c requires making the helper to
> get the MTRR type of an address from the mtrr state public. While
> there rename the function to start with the mtrr prefix, like other
> mtrr related functions.
> 
> While there fix some of the types of the function parameters.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/mtrr.c           | 107 +---------------------------
>  xen/arch/x86/hvm/vmx/vmx.c        |   4 +-
>  xen/arch/x86/mm/p2m-ept.c         | 114 +++++++++++++++++++++++++--
>  xen/include/asm-x86/hvm/vmx/vmx.h |   2 +
>  xen/include/asm-x86/mtrr.h        |   5 +-
>  5 files changed, 118 insertions(+), 114 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index 82ded1635c..4a9f3177ed 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -194,8 +194,7 @@ void hvm_vcpu_cacheattr_destroy(struct vcpu *v)
>   * May return a negative value when order > 0, indicating to the caller
>   * that the respective mapping needs splitting.
>   */
> -static int get_mtrr_type(const struct mtrr_state *m,
> -                         paddr_t pa, unsigned int order)
> +int mtrr_get_type(const struct mtrr_state *m, paddr_t pa, unsigned int order)
>  {
>      uint8_t     overlap_mtrr = 0;
>      uint8_t     overlap_mtrr_pos = 0;
> @@ -323,7 +322,7 @@ static uint8_t effective_mm_type(struct mtrr_state *m,
>       * just use it
>       */
>      if ( gmtrr_mtype == NO_HARDCODE_MEM_TYPE )
> -        mtrr_mtype = get_mtrr_type(m, gpa, 0);
> +        mtrr_mtype = mtrr_get_type(m, gpa, 0);
>      else
>          mtrr_mtype = gmtrr_mtype;
> 
> @@ -350,7 +349,7 @@ uint32_t get_pat_flags(struct vcpu *v,
>      guest_eff_mm_type = effective_mm_type(g, pat, gpaddr,
>                                            gl1e_flags, gmtrr_mtype);
>      /* 2. Get the memory type of host physical address, with MTRR */
> -    shadow_mtrr_type = get_mtrr_type(&mtrr_state, spaddr, 0);
> +    shadow_mtrr_type = mtrr_get_type(&mtrr_state, spaddr, 0);
> 
>      /* 3. Find the memory type in PAT, with host MTRR memory type
>       * and guest effective memory type.
> @@ -789,106 +788,6 @@ void memory_type_changed(struct domain *d)
>      }
>  }
> 
> -int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
> -                       unsigned int order, uint8_t *ipat, bool_t direct_mmio)
> -{
> -    int gmtrr_mtype, hmtrr_mtype;
> -    struct vcpu *v = current;
> -    unsigned long i;
> -
> -    *ipat = 0;
> -
> -    if ( v->domain != d )
> -        v = d->vcpu ? d->vcpu[0] : NULL;
> -
> -    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
> -    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
> -                                 mfn_x(mfn) | ((1UL << order) - 1)) )
> -    {
> -        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
> -                                               mfn_x(mfn) | ((1UL << order) - 1)) )
> -        {
> -            *ipat = 1;
> -            return MTRR_TYPE_UNCACHABLE;
> -        }
> -        /* Force invalid memory type so resolve_misconfig() will split it */
> -        return -1;
> -    }
> -
> -    if ( !mfn_valid(mfn) )
> -    {
> -        *ipat = 1;
> -        return MTRR_TYPE_UNCACHABLE;
> -    }
> -
> -    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> -    {
> -        *ipat = 1;
> -        return MTRR_TYPE_WRBACK;
> -    }
> -
> -    for ( i = 0; i < (1ul << order); i++ )
> -    {
> -        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
> -        {
> -            if ( order )
> -                return -1;
> -            *ipat = 1;
> -            return MTRR_TYPE_WRBACK;
> -        }
> -    }
> -
> -    if ( direct_mmio )
> -        return MTRR_TYPE_UNCACHABLE;
> -
> -    gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
> -    if ( gmtrr_mtype >= 0 )
> -    {
> -        *ipat = 1;
> -        return gmtrr_mtype != PAT_TYPE_UC_MINUS ? gmtrr_mtype
> -                                                : MTRR_TYPE_UNCACHABLE;
> -    }
> -    if ( gmtrr_mtype == -EADDRNOTAVAIL )
> -        return -1;
> -
> -    gmtrr_mtype = v ? get_mtrr_type(&v->arch.hvm.mtrr, gfn << PAGE_SHIFT, order)
> -                    : MTRR_TYPE_WRBACK;
> -    hmtrr_mtype = get_mtrr_type(&mtrr_state, mfn_x(mfn) << PAGE_SHIFT, order);
> -    if ( gmtrr_mtype < 0 || hmtrr_mtype < 0 )
> -        return -1;
> -
> -    /* If both types match we're fine. */
> -    if ( likely(gmtrr_mtype == hmtrr_mtype) )
> -        return hmtrr_mtype;
> -
> -    /* If either type is UC, we have to go with that one. */
> -    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
> -         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
> -        return MTRR_TYPE_UNCACHABLE;
> -
> -    /* If either type is WB, we have to go with the other one. */
> -    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
> -        return hmtrr_mtype;
> -    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
> -        return gmtrr_mtype;
> -
> -    /*
> -     * At this point we have disagreeing WC, WT, or WP types. The only
> -     * combination that can be cleanly resolved is WT:WP. The ones involving
> -     * WC need to be converted to UC, both due to the memory ordering
> -     * differences and because WC disallows reads to be cached (WT and WP
> -     * permit this), while WT and WP require writes to go straight to memory
> -     * (WC can buffer them).
> -     */
> -    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
> -          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
> -         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
> -          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
> -        return MTRR_TYPE_WRPROT;
> -
> -    return MTRR_TYPE_UNCACHABLE;
> -}
> -
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 7e3e67fdc3..0d4b47681b 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -417,12 +417,12 @@ static int vmx_domain_initialise(struct domain *d)
>  static void domain_creation_finished(struct domain *d)
>  {
>      gfn_t gfn = gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE);
> -    uint8_t ipat;
> +    bool ipat;
> 
>      if ( !has_vlapic(d) || mfn_eq(apic_access_mfn, INVALID_MFN) )
>          return;
> 
> -    ASSERT(epte_get_entry_emt(d, gfn_x(gfn), apic_access_mfn, 0, &ipat,
> +    ASSERT(epte_get_entry_emt(d, gfn, apic_access_mfn, 0, &ipat,
>                                true) == MTRR_TYPE_WRBACK);
>      ASSERT(ipat);
> 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index a3beaf91e2..f1d1d07e92 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -20,6 +20,7 @@
>  #include <public/hvm/dm_op.h>
>  #include <asm/altp2m.h>
>  #include <asm/current.h>
> +#include <asm/iocap.h>
>  #include <asm/paging.h>
>  #include <asm/types.h>
>  #include <asm/domain.h>
> @@ -485,6 +486,108 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
>      return rc;
>  }
> 
> +int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> +                       unsigned int order, bool *ipat, bool direct_mmio)
> +{
> +    int gmtrr_mtype, hmtrr_mtype;
> +    struct vcpu *v = current;
> +    unsigned long i;
> +
> +    *ipat = false;
> +
> +    if ( v->domain != d )
> +        v = d->vcpu ? d->vcpu[0] : NULL;
> +
> +    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
> +    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
> +                                 mfn_x(mfn) | ((1UL << order) - 1)) )
> +    {
> +        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
> +                                               mfn_x(mfn) | ((1UL << order) - 1)) )
> +        {
> +            *ipat = true;
> +            return MTRR_TYPE_UNCACHABLE;
> +        }
> +        /* Force invalid memory type so resolve_misconfig() will split it */
> +        return -1;
> +    }
> +
> +    if ( !mfn_valid(mfn) )
> +    {
> +        *ipat = true;
> +        return MTRR_TYPE_UNCACHABLE;
> +    }
> +
> +    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> +    {
> +        *ipat = true;
> +        return MTRR_TYPE_WRBACK;
> +    }
> +
> +    for ( i = 0; i < (1ul << order); i++ )
> +    {
> +        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
> +        {
> +            if ( order )
> +                return -1;
> +            *ipat = true;
> +            return MTRR_TYPE_WRBACK;
> +        }
> +    }
> +
> +    if ( direct_mmio )
> +        return MTRR_TYPE_UNCACHABLE;
> +
> +    gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, gfn, order);
> +    if ( gmtrr_mtype >= 0 )
> +    {
> +        *ipat = true;
> +        return gmtrr_mtype != PAT_TYPE_UC_MINUS ? gmtrr_mtype
> +                                                : MTRR_TYPE_UNCACHABLE;
> +    }
> +    if ( gmtrr_mtype == -EADDRNOTAVAIL )
> +        return -1;
> +
> +    gmtrr_mtype = v ? mtrr_get_type(&v->arch.hvm.mtrr,
> +                                    gfn_x(gfn) << PAGE_SHIFT, order)
> +                    : MTRR_TYPE_WRBACK;
> +    hmtrr_mtype = mtrr_get_type(&mtrr_state, mfn_x(mfn) << PAGE_SHIFT,
> +                                order);
> +    if ( gmtrr_mtype < 0 || hmtrr_mtype < 0 )
> +        return -1;
> +
> +    /* If both types match we're fine. */
> +    if ( likely(gmtrr_mtype == hmtrr_mtype) )
> +        return hmtrr_mtype;
> +
> +    /* If either type is UC, we have to go with that one. */
> +    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
> +         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
> +        return MTRR_TYPE_UNCACHABLE;
> +
> +    /* If either type is WB, we have to go with the other one. */
> +    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
> +        return hmtrr_mtype;
> +    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
> +        return gmtrr_mtype;
> +
> +    /*
> +     * At this point we have disagreeing WC, WT, or WP types. The only
> +     * combination that can be cleanly resolved is WT:WP. The ones involving
> +     * WC need to be converted to UC, both due to the memory ordering
> +     * differences and because WC disallows reads to be cached (WT and WP
> +     * permit this), while WT and WP require writes to go straight to memory
> +     * (WC can buffer them).
> +     */
> +    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
> +          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
> +         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
> +          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
> +        return MTRR_TYPE_WRPROT;
> +
> +    return MTRR_TYPE_UNCACHABLE;
> +}
> +
>  /*
>   * Resolve deliberately mis-configured (EMT field set to an invalid value)
>   * entries in the page table hierarchy for the given GFN:
> @@ -519,7 +622,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
> 
>          if ( level == 0 || is_epte_superpage(&e) )
>          {
> -            uint8_t ipat = 0;
> +            bool ipat;
> 
>              if ( e.emt != MTRR_NUM_TYPES )
>                  break;
> @@ -535,7 +638,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                          e.emt = 0;
>                      if ( !is_epte_valid(&e) || !is_epte_present(&e) )
>                          continue;
> -                    e.emt = epte_get_entry_emt(p2m->domain, gfn + i,
> +                    e.emt = epte_get_entry_emt(p2m->domain, _gfn(gfn + i),
>                                                 _mfn(e.mfn), 0, &ipat,
>                                                 e.sa_p2mt == p2m_mmio_direct);
>                      e.ipat = ipat;
> @@ -553,7 +656,8 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>              }
>              else
>              {
> -                int emt = epte_get_entry_emt(p2m->domain, gfn, _mfn(e.mfn),
> +                int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn),
> +                                             _mfn(e.mfn),
>                                               level * EPT_TABLE_ORDER, &ipat,
>                                               e.sa_p2mt == p2m_mmio_direct);
>                  bool_t recalc = e.recalc;
> @@ -788,8 +892,8 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
> 
>      if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
>      {
> -        uint8_t ipat = 0;
> -        int emt = epte_get_entry_emt(p2m->domain, gfn, mfn,
> +        bool ipat;
> +        int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
>                                       i * EPT_TABLE_ORDER, &ipat,
>                                       p2mt == p2m_mmio_direct);
> 
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index 534e9fc221..f668ee1f09 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -599,6 +599,8 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
> 
>  void ept_walk_table(struct domain *d, unsigned long gfn);
>  bool_t ept_handle_misconfig(uint64_t gpa);
> +int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> +                       unsigned int order, bool *ipat, bool direct_mmio);
>  void setup_ept_dump(void);
>  void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
>  /* Locate an alternate p2m by its EPTP */
> diff --git a/xen/include/asm-x86/mtrr.h b/xen/include/asm-x86/mtrr.h
> index 24e5de5c22..e0fd1005ce 100644
> --- a/xen/include/asm-x86/mtrr.h
> +++ b/xen/include/asm-x86/mtrr.h
> @@ -72,12 +72,11 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
>                           unsigned int type, char increment);
>  extern int mtrr_del(int reg, unsigned long base, unsigned long size);
>  extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
> +extern int mtrr_get_type(const struct mtrr_state *m, paddr_t pa,
> +                         unsigned int order);
>  extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
>  extern u32 get_pat_flags(struct vcpu *v, u32 gl1e_flags, paddr_t gpaddr,
>                     paddr_t spaddr, uint8_t gmtrr_mtype);
> -extern int epte_get_entry_emt(struct domain *, unsigned long gfn, mfn_t mfn,
> -                              unsigned int order, uint8_t *ipat,
> -                              bool_t direct_mmio);
>  extern unsigned char pat_type_2_pte_flags(unsigned char pat_type);
>  extern int hold_mtrr_updates_on_aps;
>  extern void mtrr_aps_sync_begin(void);
> --
> 2.31.1


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 09:26:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 09:26:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143766.264818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltoHy-0005vn-4F; Thu, 17 Jun 2021 09:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143766.264818; Thu, 17 Jun 2021 09:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltoHy-0005vg-0Z; Thu, 17 Jun 2021 09:26:18 +0000
Received: by outflank-mailman (input) for mailman id 143766;
 Thu, 17 Jun 2021 09:26:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iG+e=LL=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1ltoHw-0005va-LJ
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 09:26:17 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3388a46f-a79f-4cbf-bae2-ca4d83a59f68;
 Thu, 17 Jun 2021 09:26:06 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:51060
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1ltoMN-00059G-9i; Thu, 17 Jun 2021 11:30:51 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3388a46f-a79f-4cbf-bae2-ca4d83a59f68
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Type:MIME-Version:Date:Message-ID:Cc:
	To:Subject:From:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
	Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
	:Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=v4joKe5jbm8AoLXI826WbACGrs8wnOzMSepikK2QlgQ=; b=cBoEe9J8qPkUdAcGRpwePKjz9X
	Wlj2aeeuxuCc8Yl0SXrBMT9+SDx828puDHeh/ey/auV08O+Y1pG8KDJqo01m7h6RQHsNpZWTRD6s5
	WYiCbrK08+l0eH34YlU7bmyOchWtDHadRmbtmBALHXauo4gKFDLBhhVkCEP/hXGtgQFE=;
From: Sander Eikelenboom <linux@eikelenboom.it>
Subject: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: linux-kernel <linux-kernel@vger.kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Linus Torvalds <torvalds@linux-foundation.org>
Message-ID: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
Date: Thu, 17 Jun 2021 11:26:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="------------12557986E89B73948A73857A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------12557986E89B73948A73857A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

L.S.,

I just tried to upgrade and test the Linux kernel on my home server with Xen, going from the 5.12 kernel series to 5.13-rc6, but ran into some trouble.

Some VMs (those with more than 256MB of memory assigned) boot fine, but the smaller (memory-wise) PVH ones crash during kernel boot due to OOM.
Booting the VMs with a 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (dom0 has more memory assigned, so that is not unexpected).
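For context, the affected guests are configured with a low memory assignment along these lines (a minimal illustrative sketch of an xl domain config; the names, paths and values here are placeholders, not my actual setup):

```
# Illustrative xl guest config for a small PVH VM (placeholder values)
name    = "smallvm"          # hypothetical guest name
type    = "pvh"              # PVH guest, as affected here
memory  = 256                # 256MB assigned -- the size that OOMs on 5.13-rc6
vcpus   = 1
kernel  = "/path/to/vmlinuz" # direct-booted guest kernel (5.13-rc6)
ramdisk = "/path/to/initrd"  # initramfs unpacked during early boot
disk    = [ "phy:/dev/vg0/smallvm,xvda,w" ]
```

Guests configured like this but with a larger `memory` value boot fine, which is what points at the available-memory accounting at 256MB.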

The 5.13-rc6-ish kernel is from a pull of today; I tried both with and without the latest AKPM patches, but that makes no difference.

Below are stacktraces from a few of the crashing VM's.

Attached is the kernel .config.

Any pointers?

--
Sander



[    0.986515] Bluetooth: HCI UART protocol Intel registered
[    0.986714] Bluetooth: HCI UART protocol AG6XX registered
[    0.986760] usbcore: registered new interface driver bcm203x
[    0.986812] usbcore: registered new interface driver bpa10x
[    0.986854] usbcore: registered new interface driver bfusb
[    0.986907] usbcore: registered new interface driver btusb
[    0.986955] usbcore: registered new interface driver ath3k
[    0.986998] hid: raw HID events driver (C) Jiri Kosina
[    0.987250] usbcore: registered new interface driver usbhid
[    0.987283] usbhid: USB HID core driver
[    0.988461] swapper/0 invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
[    0.988530] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    0.988572] Call Trace:
[    0.988587]  dump_stack+0x76/0x94
[    0.988609]  dump_header+0x45/0x1d4
[    0.988627]  out_of_memory.cold.44+0x39/0x7e
[    0.988649]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    0.988678]  ? idr_alloc_u32+0x8b/0xc0
[    0.988697]  __alloc_pages+0x318/0x330
[    0.988715]  alloc_page_interleave+0xe/0x70
[    0.988737]  allocate_slab+0x28d/0x330
[    0.988757]  ___slab_alloc+0x41e/0x5c0
[    0.988777]  ? bus_add_driver+0x48/0x1c0
[    0.988797]  ? call_usermodehelper_exec+0xed/0x160
[    0.988822]  ? bus_add_driver+0x48/0x1c0
[    0.988841]  __slab_alloc+0x17/0x30
[    0.988861]  kmem_cache_alloc_trace+0x403/0x440
[    0.988885]  ? si3054_driver_init+0x15/0x15
[    0.988907]  ? rdinit_setup+0x27/0x27
[    0.988925]  bus_add_driver+0x48/0x1c0
[    0.988944]  ? si3054_driver_init+0x15/0x15
[    0.988963]  driver_register+0x66/0xb0
[    0.988984]  ? si3054_driver_init+0x15/0x15
[    0.989003]  do_one_initcall+0x3f/0x1c0
[    0.989024]  kernel_init_freeable+0x21a/0x295
[    0.989049]  ? rest_init+0xa4/0xa4
[    0.989069]  kernel_init+0x5/0xfc
[    0.989086]  ret_from_fork+0x22/0x30
[    0.989105] Mem-Info:
[    0.989116] active_anon:0 inactive_anon:0 isolated_anon:0
[    0.989116]  active_file:0 inactive_file:0 isolated_file:0
[    0.989116]  unevictable:27090 dirty:0 writeback:0
[    0.989116]  slab_reclaimable:2960 slab_unreclaimable:3021
[    0.989116]  mapped:0 shmem:0 pagetables:3 bounce:0
[    0.989116]  free:783 free_pcp:16 free_cma:0
[    0.989244] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:108360kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:5776kB pagetables:12kB all_unreclaimable? no
[    1.178718] Node 0 DMA free:720kB min:148kB low:184kB high:220kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:80kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[    1.178780]  xvda: xvda1 xvda2
[    1.178825] lowmem_reserve[]: 0 146 146 146
[    1.178828] Node 0 DMA32 free:2208kB min:1472kB low:1840kB high:2208kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:108252kB writepending:0kB present:245760kB managed:149632kB mlocked:0kB bounce:0kB free_pcp:44kB local_pcp:32kB free_cma:0kB
[    1.178833] lowmem_reserve[]: 0 0 0 0
[    1.178836] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 1*64kB (U) 1*128kB (U) 0*256kB 1*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 704kB
[    1.179045] Node 0 DMA32: 2*4kB (ME) 3*8kB (M) 2*16kB (M) 3*32kB (UM) 1*64kB (M) 2*128kB (UE) 2*256kB (UE) 2*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 2016kB
[    1.179110] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[    1.179152] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[    1.179200] 27099 total pagecache pages
[    1.179227] 0 pages in swap cache
[    1.179244] Swap cache stats: add 0, delete 0, find 0/0
[    1.179267] Free swap  = 0kB
[    1.179282] Total swap = 0kB
[    1.179308] 65439 pages RAM
[    1.179320] 0 pages HighMem/MovableOnly
[    1.179336] 24191 pages reserved
[    1.179352] 0 pages cma reserved
[    1.179382] Tasks state (memory values in pages):
[    1.179403] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[    1.179470] Out of memory and no killable processes...
[    1.179494] Kernel panic - not syncing: System is deadlocked on memory
[    1.179521] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    1.179563] Call Trace:
[    1.179576]  dump_stack+0x76/0x94
[    1.179595]  panic+0xfc/0x2c0
[    1.179613]  out_of_memory.cold.44+0x5e/0x7e
[    1.179637]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    1.179669]  ? idr_alloc_u32+0x8b/0xc0
[    1.179686]  __alloc_pages+0x318/0x330
[    1.179704]  alloc_page_interleave+0xe/0x70
[    1.179722]  allocate_slab+0x28d/0x330
[    1.179739]  ___slab_alloc+0x41e/0x5c0
[    1.179755]  ? bus_add_driver+0x48/0x1c0
[    1.179773]  ? call_usermodehelper_exec+0xed/0x160
[    1.179794]  ? bus_add_driver+0x48/0x1c0
[    1.179823]  __slab_alloc+0x17/0x30
[    1.179838]  kmem_cache_alloc_trace+0x403/0x440
[    1.179858]  ? si3054_driver_init+0x15/0x15
[    1.179874]  ? rdinit_setup+0x27/0x27
[    1.179891]  bus_add_driver+0x48/0x1c0
[    1.179915]  ? si3054_driver_init+0x15/0x15
[    1.179950]  driver_register+0x66/0xb0
[    1.179968]  ? si3054_driver_init+0x15/0x15
[    1.179986]  do_one_initcall+0x3f/0x1c0
[    1.180005]  kernel_init_freeable+0x21a/0x295
[    1.180027]  ? rest_init+0xa4/0xa4
[    1.180045]  kernel_init+0x5/0xfc
[    1.180063]  ret_from_fork+0x22/0x30
[    1.180185] Kernel Offset: disabled



And another VM:

[    1.034613] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[    1.034633] Bluetooth: BNEP filters: protocol multicast
[    1.034653] Bluetooth: BNEP socket layer initialized
[    1.034671] Bluetooth: HIDP (Human Interface Emulation) ver 1.2
[    1.034696] Bluetooth: HIDP socket layer initialized
[    1.034716] 8021q: 802.1Q VLAN Support v1.8
[    1.034747] Key type dns_resolver registered
[    1.034768] Key type ceph registered
[    1.034865] libceph: loaded (mon/osd proto 15/24)
[    1.035030] IPI shorthand broadcast: enabled
[    1.035060] sched_clock: Marking stable (833426729, 200986153)->(1207636155, -173223273)
[    1.035143] registered taskstats version 1
[    1.035161] Loading compiled-in X.509 certificates
[    1.035350] Btrfs loaded, crc32c=crc32c-generic, zoned=no
[    1.167932] kworker/u2:0 invoked oom-killer: gfp_mask=0x100cc2(GFP_HIGHUSER), order=0, oom_score_adj=0
[    1.167989] CPU: 0 PID: 7 Comm: kworker/u2:0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    1.168026] Workqueue: events_unbound async_run_entry_fn
[    1.168051] Call Trace:
[    1.168064]  dump_stack+0x76/0x94
[    1.168082]  dump_header+0x45/0x1d4
[    1.168098]  out_of_memory.cold.44+0x39/0x7e
[    1.168119]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    1.168145]  ? __mod_memcg_lruvec_state+0x1d/0x100
[    1.168167]  __alloc_pages+0x318/0x330
[    1.168183]  pagecache_get_page+0x24b/0x400
[    1.168199]  grab_cache_page_write_begin+0x17/0x30
[    1.168220]  simple_write_begin+0x1e/0x1e0
[    1.168237]  generic_perform_write+0xef/0x1b0
[    1.168257]  __generic_file_write_iter+0x140/0x1b0
[    1.168279]  ? write_buffer+0x32/0x32
[    1.168296]  generic_file_write_iter+0x58/0xa0
[    1.168316]  __kernel_write+0x146/0x2c0
[    1.168333]  kernel_write+0x51/0xf0
[    1.168350]  xwrite+0x2c/0x5f
[    1.168366]  ? initrd_load+0x268/0x268
[    1.168382]  do_copy+0xc7/0x109
[    1.168397]  ? initrd_load+0x19e/0x268
[    1.168412]  ? do_name+0x11a/0x269
[    1.168427]  write_buffer+0x22/0x32
[    1.168443]  flush_buffer+0x2f/0x86
[    1.168458]  __gunzip+0x26e/0x315
[    1.168474]  ? bunzip2+0x397/0x397
[    1.168490]  ? initrd_load+0x268/0x268
[    1.168505]  gunzip+0xe/0x11
[    1.168520]  ? initrd_load+0x268/0x268
[    1.168537]  unpack_to_rootfs+0x159/0x28f
[    1.168554]  ? initrd_load+0x268/0x268
[    1.168571]  do_populate_rootfs+0x6c/0x160
[    1.168588]  async_run_entry_fn+0x1b/0xa0
[    1.168603]  process_one_work+0x1f6/0x390
[    1.168620]  worker_thread+0x28/0x3d0
[    1.168638]  ? process_one_work+0x390/0x390
[    1.168654]  kthread+0x111/0x130
[    1.168671]  ? kthread_park+0x80/0x80
[    1.168686]  ret_from_fork+0x22/0x30
[    1.168705] Mem-Info:
[    1.168716] active_anon:0 inactive_anon:0 isolated_anon:0
[    1.168716]  active_file:0 inactive_file:0 isolated_file:0
[    1.168716]  unevictable:28085 dirty:0 writeback:0
[    1.168716]  slab_reclaimable:2883 slab_unreclaimable:4055
[    1.168716]  mapped:0 shmem:0 pagetables:3 bounce:0
[    1.168716]  free:550 free_pcp:9 free_cma:0
[    1.168818] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:112340kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:6256kB pagetables:12kB all_unreclaimable? yes
[    1.168916] Node 0 DMA free:728kB min:148kB low:184kB high:220kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:14472kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[    1.169010] lowmem_reserve[]: 0 146 146 146
[    1.169028] Node 0 DMA32 free:1472kB min:1472kB low:1840kB high:2208kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:97860kB writepending:0kB present:245760kB managed:149852kB mlocked:0kB bounce:0kB free_pcp:36kB local_pcp:36kB free_cma:0kB
[    1.309647] lowmem_reserve[]: 0 0 0 0
[    1.309668] Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 1*64kB (U) 1*128kB (U) 0*256kB 1*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 728kB
[    1.309720] Node 0 DMA32: 0*4kB 4*8kB (UM) 2*16kB (M) 4*32kB (UME) 2*64kB (ME) 1*128kB (U) 2*256kB (UE) 1*512kB (E) 0*1024kB 0*2048kB 0*4096kB = 1472kB
[    1.309777] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[    1.309810] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[    1.309843] 28095 total pagecache pages
[    1.309858] 0 pages in swap cache
[    1.309874] Swap cache stats: add 0, delete 0, find 0/0
[    1.309894] Free swap  = 0kB
[    1.309908] Total swap = 0kB
[    1.309922] 65439 pages RAM
[    1.309932] 0 pages HighMem/MovableOnly
[    1.309947] 24136 pages reserved
[    1.309961] 0 pages cma reserved
[    1.309975] Tasks state (memory values in pages):
[    1.309993] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[    1.310035] Out of memory and no killable processes...
[    1.310054] Kernel panic - not syncing: System is deadlocked on memory
[    1.310077] CPU: 0 PID: 7 Comm: kworker/u2:0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    1.310112] Workqueue: events_unbound async_run_entry_fn
[    1.310133] Call Trace:
[    1.310144]  dump_stack+0x76/0x94
[    1.310159]  panic+0xfc/0x2c0
[    1.310176]  out_of_memory.cold.44+0x5e/0x7e
[    1.310195]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    1.310220]  ? __mod_memcg_lruvec_state+0x1d/0x100
[    1.310240]  __alloc_pages+0x318/0x330
[    1.310256]  pagecache_get_page+0x24b/0x400
[    1.310273]  grab_cache_page_write_begin+0x17/0x30
[    1.310293]  simple_write_begin+0x1e/0x1e0
[    1.310309]  generic_perform_write+0xef/0x1b0
[    1.310329]  __generic_file_write_iter+0x140/0x1b0
[    1.310349]  ? write_buffer+0x32/0x32
[    1.310364]  generic_file_write_iter+0x58/0xa0
[    1.310384]  __kernel_write+0x146/0x2c0
[    1.310400]  kernel_write+0x51/0xf0
[    1.310415]  xwrite+0x2c/0x5f
[    1.310430]  ? initrd_load+0x268/0x268
[    1.310446]  do_copy+0xc7/0x109
[    1.310461]  ? initrd_load+0x19e/0x268
[    1.310476]  ? do_name+0x11a/0x269
[    1.310491]  write_buffer+0x22/0x32
[    1.310507]  flush_buffer+0x2f/0x86
[    1.310522]  __gunzip+0x26e/0x315
[    1.310538]  ? bunzip2+0x397/0x397
[    1.310554]  ? initrd_load+0x268/0x268
[    1.310569]  gunzip+0xe/0x11
[    1.310584]  ? initrd_load+0x268/0x268
[    1.310600]  unpack_to_rootfs+0x159/0x28f
[    1.310616]  ? initrd_load+0x268/0x268
[    1.310632]  do_populate_rootfs+0x6c/0x160
[    1.310647]  async_run_entry_fn+0x1b/0xa0
[    1.310663]  process_one_work+0x1f6/0x390
[    1.310679]  worker_thread+0x28/0x3d0
[    1.510226]  ? process_one_work+0x390/0x390
[    1.510243]  kthread+0x111/0x130
[    1.510259]  ? kthread_park+0x80/0x80
[    1.510275]  ret_from_fork+0x22/0x30
[    1.510336] Kernel Offset: disabled


And another one:

[    0.772775] IPVS: ipvs loaded.
[    0.773014] NET: Registered protocol family 10
[    0.777428] Segment Routing with IPv6
[    0.777541] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.777687] NET: Registered protocol family 17
[    1.114018] Bridge firewalling registered
[    1.117325] Bluetooth: RFCOMM TTY layer initialized
[    1.117372] Bluetooth: RFCOMM socket layer initialized
[    1.117402] Bluetooth: RFCOMM ver 1.11
[    1.117421] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[    1.117448] Bluetooth: BNEP filters: protocol multicast
[    1.117471] Bluetooth: BNEP socket layer initialized
[    1.117493] Bluetooth: HIDP (Human Interface Emulation) ver 1.2
[    1.117527] Bluetooth: HIDP socket layer initialized
[    1.117550] 8021q: 802.1Q VLAN Support v1.8
[    1.117581] Key type dns_resolver registered
[    1.117605] Key type ceph registered
[    1.117725] libceph: loaded (mon/osd proto 15/24)
[    1.117931] IPI shorthand broadcast: enabled
[    1.117973] sched_clock: Marking stable (915120961, 202168371)->(1461715317, -344425985)
[    1.118063] registered taskstats version 1
[    1.118083] Loading compiled-in X.509 certificates
[    1.118305] Btrfs loaded, crc32c=crc32c-generic, zoned=no
[    1.187460] kworker/u2:0 invoked oom-killer: gfp_mask=0x100cc2(GFP_HIGHUSER), order=0, oom_score_adj=0
[    1.187513] CPU: 0 PID: 7 Comm: kworker/u2:0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    1.187555] Workqueue: events_unbound async_run_entry_fn
[    1.187582] Call Trace:
[    1.187597]  dump_stack+0x76/0x94
[    1.187617]  dump_header+0x45/0x1d4
[    1.187637]  out_of_memory.cold.44+0x39/0x7e
[    1.187660]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    1.187692]  ? __mod_memcg_lruvec_state+0x1d/0x100
[    1.187718]  __alloc_pages+0x318/0x330
[    1.187737]  pagecache_get_page+0x24b/0x400
[    1.187756]  grab_cache_page_write_begin+0x17/0x30
[    1.187779]  simple_write_begin+0x1e/0x1e0
[    1.187798]  generic_perform_write+0xef/0x1b0
[    1.187822]  __generic_file_write_iter+0x140/0x1b0
[    1.187848]  ? write_buffer+0x32/0x32
[    1.187866]  generic_file_write_iter+0x58/0xa0
[    1.187890]  __kernel_write+0x146/0x2c0
[    1.187908]  kernel_write+0x51/0xf0
[    1.187926]  xwrite+0x2c/0x5f
[    1.187946]  ? initrd_load+0x268/0x268
[    1.187964]  do_copy+0xc7/0x109
[    1.187982]  ? initrd_load+0x19e/0x268
[    1.187999]  ? do_name+0x11a/0x269
[    1.188017]  write_buffer+0x22/0x32
[    1.188034]  flush_buffer+0x2f/0x86
[    1.188052]  __gunzip+0x26e/0x315
[    1.188071]  ? bunzip2+0x397/0x397
[    1.188090]  ? initrd_load+0x268/0x268
[    1.188107]  gunzip+0xe/0x11
[    1.188125]  ? initrd_load+0x268/0x268
[    1.188143]  unpack_to_rootfs+0x159/0x28f
[    1.188161]  ? initrd_load+0x268/0x268
[    1.188178]  do_populate_rootfs+0x6c/0x160
[    1.188197]  async_run_entry_fn+0x1b/0xa0
[    1.188214]  process_one_work+0x1f6/0x390
[    1.188234]  worker_thread+0x28/0x3d0
[    1.188253]  ? process_one_work+0x390/0x390
[    1.188271]  kthread+0x111/0x130
[    1.188293]  ? kthread_park+0x80/0x80
[    1.188312]  ret_from_fork+0x22/0x30
[    1.188335] Mem-Info:
[    1.188347] active_anon:0 inactive_anon:0 isolated_anon:0
[    1.188347]  active_file:0 inactive_file:0 isolated_file:0
[    1.188347]  unevictable:28130 dirty:0 writeback:0
[    1.188347]  slab_reclaimable:2879 slab_unreclaimable:4027
[    1.188347]  mapped:0 shmem:0 pagetables:3 bounce:0
[    1.188347]  free:548 free_pcp:8 free_cma:0
[    1.188465] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:112520kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:6224kB pagetables:12kB all_unreclaimable? yes
[    1.188578] Node 0 DMA free:732kB min:148kB low:184kB high:220kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:10428kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[    1.188688] lowmem_reserve[]: 0 146 146 146
[    1.188711] Node 0 DMA32 free:1460kB min:1472kB low:1840kB high:2208kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:102080kB writepending:0kB present:245760kB managed:149852kB mlocked:0kB bounce:0kB free_pcp:32kB local_pcp:32kB free_cma:0kB
[    1.188819] lowmem_reserve[]: 0 0 0 0
[    1.188838] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 1*64kB (U) 1*128kB (U) 0*256kB 1*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 732kB
[    1.188899] Node 0 DMA32: 1*4kB (E) 4*8kB (UM) 3*16kB (ME) 3*32kB (ME) 2*64kB (UM) 1*128kB (U) 2*256kB (UE) 1*512kB (E) 0*1024kB 0*2048kB 0*4096kB = 1460kB
[    1.388548] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[    1.388596] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[    1.388634] 28140 total pagecache pages
[    1.388651] 0 pages in swap cache
[    1.388667] Swap cache stats: add 0, delete 0, find 0/0
[    1.388689] Free swap  = 0kB
[    1.388705] Total swap = 0kB
[    1.388721] 65439 pages RAM
[    1.388733] 0 pages HighMem/MovableOnly
[    1.388749] 24136 pages reserved
[    1.388764] 0 pages cma reserved
[    1.388781] Tasks state (memory values in pages):
[    1.388803] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[    1.388871] Out of memory and no killable processes...
[    1.388896] Kernel panic - not syncing: System is deadlocked on memory
[    1.388924] CPU: 0 PID: 7 Comm: kworker/u2:0 Not tainted 5.13.0-rc6-20210617-doflr-mac80211debug+ #1
[    1.388966] Workqueue: events_unbound async_run_entry_fn
[    1.388994] Call Trace:
[    1.389007]  dump_stack+0x76/0x94
[    1.389032]  panic+0xfc/0x2c0
[    1.389054]  out_of_memory.cold.44+0x5e/0x7e
[    1.389078]  __alloc_pages_slowpath.constprop.112+0xb9e/0xc80
[    1.389108]  ? __mod_memcg_lruvec_state+0x1d/0x100
[    1.389131]  __alloc_pages+0x318/0x330
[    1.389150]  pagecache_get_page+0x24b/0x400
[    1.389169]  grab_cache_page_write_begin+0x17/0x30
[    1.389192]  simple_write_begin+0x1e/0x1e0
[    1.389210]  generic_perform_write+0xef/0x1b0
[    1.389232]  __generic_file_write_iter+0x140/0x1b0
[    1.389256]  ? write_buffer+0x32/0x32
[    1.389274]  generic_file_write_iter+0x58/0xa0
[    1.389299]  __kernel_write+0x146/0x2c0
[    1.389318]  kernel_write+0x51/0xf0
[    1.389335]  xwrite+0x2c/0x5f
[    1.389353]  ? initrd_load+0x268/0x268
[    1.389370]  do_copy+0xc7/0x109
[    1.389388]  ? initrd_load+0x19e/0x268
[    1.389404]  ? do_name+0x11a/0x269
[    1.389421]  write_buffer+0x22/0x32
[    1.389450]  flush_buffer+0x2f/0x86
[    1.389465]  __gunzip+0x26e/0x315
[    1.389482]  ? bunzip2+0x397/0x397
[    1.389498]  ? initrd_load+0x268/0x268
[    1.389513]  gunzip+0xe/0x11
[    1.389544]  ? initrd_load+0x268/0x268
[    1.389562]  unpack_to_rootfs+0x159/0x28f
[    1.389581]  ? initrd_load+0x268/0x268
[    1.389599]  do_populate_rootfs+0x6c/0x160
[    1.389618]  async_run_entry_fn+0x1b/0xa0
[    1.389636]  process_one_work+0x1f6/0x390
[    1.389657]  worker_thread+0x28/0x3d0
[    1.389676]  ? process_one_work+0x390/0x390
[    1.389698]  kthread+0x111/0x130
[    1.389716]  ? kthread_park+0x80/0x80
[    1.389733]  ret_from_fork+0x22/0x30
[    1.389803] Kernel Offset: disabled

--------------12557986E89B73948A73857A
Content-Type: text/plain; charset=UTF-8;
 name="config-5.13.0-rc6-20210617-doflr-mac80211debug+"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="config-5.13.0-rc6-20210617-doflr-mac80211debug+"

IwojIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgojIExpbnV4
L3g4Nl82NCA1LjEzLjAtcmM2IEtlcm5lbCBDb25maWd1cmF0aW9uCiMKQ09ORklHX0NDX1ZF
UlNJT05fVEVYVD0iZ2NjIChEZWJpYW4gOC4zLjAtNikgOC4zLjAiCkNPTkZJR19DQ19JU19H
Q0M9eQpDT05GSUdfR0NDX1ZFUlNJT049ODAzMDAKQ09ORklHX0NMQU5HX1ZFUlNJT049MApD
T05GSUdfQVNfSVNfR05VPXkKQ09ORklHX0FTX1ZFUlNJT049MjMxMDEKQ09ORklHX0xEX0lT
X0JGRD15CkNPTkZJR19MRF9WRVJTSU9OPTIzMTAxCkNPTkZJR19MTERfVkVSU0lPTj0wCkNP
TkZJR19DQ19DQU5fTElOSz15CkNPTkZJR19DQ19DQU5fTElOS19TVEFUSUM9eQpDT05GSUdf
Q0NfSEFTX0FTTV9HT1RPPXkKQ09ORklHX0NDX0hBU19BU01fSU5MSU5FPXkKQ09ORklHX0lS
UV9XT1JLPXkKQ09ORklHX0JVSUxEVElNRV9UQUJMRV9TT1JUPXkKQ09ORklHX1RIUkVBRF9J
TkZPX0lOX1RBU0s9eQoKIwojIEdlbmVyYWwgc2V0dXAKIwpDT05GSUdfSU5JVF9FTlZfQVJH
X0xJTUlUPTMyCiMgQ09ORklHX0NPTVBJTEVfVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19MT0NB
TFZFUlNJT049IiIKIyBDT05GSUdfTE9DQUxWRVJTSU9OX0FVVE8gaXMgbm90IHNldApDT05G
SUdfQlVJTERfU0FMVD0iIgpDT05GSUdfSEFWRV9LRVJORUxfR1pJUD15CkNPTkZJR19IQVZF
X0tFUk5FTF9CWklQMj15CkNPTkZJR19IQVZFX0tFUk5FTF9MWk1BPXkKQ09ORklHX0hBVkVf
S0VSTkVMX1haPXkKQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15CkNPTkZJR19IQVZFX0tFUk5F
TF9MWjQ9eQpDT05GSUdfSEFWRV9LRVJORUxfWlNURD15CkNPTkZJR19LRVJORUxfR1pJUD15
CiMgQ09ORklHX0tFUk5FTF9CWklQMiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWk1B
IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VSTkVMX1haIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VS
TkVMX0xaTyBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWjQgaXMgbm90IHNldAojIENP
TkZJR19LRVJORUxfWlNURCBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX0lOSVQ9IiIKQ09O
RklHX0RFRkFVTFRfSE9TVE5BTUU9Iihub25lKSIKQ09ORklHX1NXQVA9eQpDT05GSUdfU1lT
VklQQz15CkNPTkZJR19TWVNWSVBDX1NZU0NUTD15CkNPTkZJR19QT1NJWF9NUVVFVUU9eQpD
T05GSUdfUE9TSVhfTVFVRVVFX1NZU0NUTD15CiMgQ09ORklHX1dBVENIX1FVRVVFIGlzIG5v
dCBzZXQKQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0g9eQojIENPTkZJR19VU0VMSUIgaXMg
bm90IHNldAojIENPTkZJR19BVURJVCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0FSQ0hfQVVE
SVRTWVNDQUxMPXkKCiMKIyBJUlEgc3Vic3lzdGVtCiMKQ09ORklHX0dFTkVSSUNfSVJRX1BS
T0JFPXkKQ09ORklHX0dFTkVSSUNfSVJRX1NIT1c9eQpDT05GSUdfR0VORVJJQ19JUlFfRUZG
RUNUSVZFX0FGRl9NQVNLPXkKQ09ORklHX0dFTkVSSUNfUEVORElOR19JUlE9eQpDT05GSUdf
R0VORVJJQ19JUlFfTUlHUkFUSU9OPXkKQ09ORklHX0dFTkVSSUNfSVJRX0lOSkVDVElPTj15
CkNPTkZJR19IQVJESVJRU19TV19SRVNFTkQ9eQpDT05GSUdfSVJRX0RPTUFJTj15CkNPTkZJ
R19JUlFfRE9NQUlOX0hJRVJBUkNIWT15CkNPTkZJR19HRU5FUklDX01TSV9JUlE9eQpDT05G
SUdfR0VORVJJQ19NU0lfSVJRX0RPTUFJTj15CkNPTkZJR19JUlFfTVNJX0lPTU1VPXkKQ09O
RklHX0dFTkVSSUNfSVJRX01BVFJJWF9BTExPQ0FUT1I9eQpDT05GSUdfR0VORVJJQ19JUlFf
UkVTRVJWQVRJT05fTU9ERT15CkNPTkZJR19JUlFfRk9SQ0VEX1RIUkVBRElORz15CkNPTkZJ
R19TUEFSU0VfSVJRPXkKIyBDT05GSUdfR0VORVJJQ19JUlFfREVCVUdGUyBpcyBub3Qgc2V0
CiMgZW5kIG9mIElSUSBzdWJzeXN0ZW0KCkNPTkZJR19DTE9DS1NPVVJDRV9XQVRDSERPRz15
CkNPTkZJR19BUkNIX0NMT0NLU09VUkNFX0lOSVQ9eQpDT05GSUdfQ0xPQ0tTT1VSQ0VfVkFM
SURBVEVfTEFTVF9DWUNMRT15CkNPTkZJR19HRU5FUklDX1RJTUVfVlNZU0NBTEw9eQpDT05G
SUdfR0VORVJJQ19DTE9DS0VWRU5UUz15CkNPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX0JS
T0FEQ0FTVD15CkNPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX01JTl9BREpVU1Q9eQpDT05G
SUdfR0VORVJJQ19DTU9TX1VQREFURT15CkNPTkZJR19IQVZFX1BPU0lYX0NQVV9USU1FUlNf
VEFTS19XT1JLPXkKQ09ORklHX1BPU0lYX0NQVV9USU1FUlNfVEFTS19XT1JLPXkKCiMKIyBU
aW1lcnMgc3Vic3lzdGVtCiMKQ09ORklHX1RJQ0tfT05FU0hPVD15CkNPTkZJR19OT19IWl9D
T01NT049eQojIENPTkZJR19IWl9QRVJJT0RJQyBpcyBub3Qgc2V0CkNPTkZJR19OT19IWl9J
RExFPXkKIyBDT05GSUdfTk9fSFpfRlVMTCBpcyBub3Qgc2V0CkNPTkZJR19OT19IWj15CkNP
TkZJR19ISUdIX1JFU19USU1FUlM9eQojIGVuZCBvZiBUaW1lcnMgc3Vic3lzdGVtCgpDT05G
SUdfQlBGPXkKQ09ORklHX0hBVkVfRUJQRl9KSVQ9eQpDT05GSUdfQVJDSF9XQU5UX0RFRkFV
TFRfQlBGX0pJVD15CgojCiMgQlBGIHN1YnN5c3RlbQojCiMgQ09ORklHX0JQRl9TWVNDQUxM
IGlzIG5vdCBzZXQKIyBDT05GSUdfQlBGX0pJVCBpcyBub3Qgc2V0CiMgZW5kIG9mIEJQRiBz
dWJzeXN0ZW0KCiMgQ09ORklHX1BSRUVNUFRfTk9ORSBpcyBub3Qgc2V0CkNPTkZJR19QUkVF
TVBUX1ZPTFVOVEFSWT15CiMgQ09ORklHX1BSRUVNUFQgaXMgbm90IHNldAoKIwojIENQVS9U
YXNrIHRpbWUgYW5kIHN0YXRzIGFjY291bnRpbmcKIwpDT05GSUdfVElDS19DUFVfQUNDT1VO
VElORz15CiMgQ09ORklHX1ZJUlRfQ1BVX0FDQ09VTlRJTkdfR0VOIGlzIG5vdCBzZXQKIyBD
T05GSUdfSVJRX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0CkNPTkZJR19CU0RfUFJPQ0VT
U19BQ0NUPXkKIyBDT05GSUdfQlNEX1BST0NFU1NfQUNDVF9WMyBpcyBub3Qgc2V0CkNPTkZJ
R19UQVNLU1RBVFM9eQpDT05GSUdfVEFTS19ERUxBWV9BQ0NUPXkKQ09ORklHX1RBU0tfWEFD
Q1Q9eQpDT05GSUdfVEFTS19JT19BQ0NPVU5USU5HPXkKIyBDT05GSUdfUFNJIGlzIG5vdCBz
ZXQKIyBlbmQgb2YgQ1BVL1Rhc2sgdGltZSBhbmQgc3RhdHMgYWNjb3VudGluZwoKQ09ORklH
X0NQVV9JU09MQVRJT049eQoKIwojIFJDVSBTdWJzeXN0ZW0KIwpDT05GSUdfVFJFRV9SQ1U9
eQojIENPTkZJR19SQ1VfRVhQRVJUIGlzIG5vdCBzZXQKQ09ORklHX1NSQ1U9eQpDT05GSUdf
VFJFRV9TUkNVPXkKQ09ORklHX1JDVV9TVEFMTF9DT01NT049eQpDT05GSUdfUkNVX05FRURf
U0VHQ0JMSVNUPXkKIyBlbmQgb2YgUkNVIFN1YnN5c3RlbQoKQ09ORklHX0lLQ09ORklHPXkK
IyBDT05GSUdfSUtDT05GSUdfUFJPQyBpcyBub3Qgc2V0CiMgQ09ORklHX0lLSEVBREVSUyBp
cyBub3Qgc2V0CkNPTkZJR19MT0dfQlVGX1NISUZUPTIwCkNPTkZJR19MT0dfQ1BVX01BWF9C
VUZfU0hJRlQ9MTMKQ09ORklHX1BSSU5US19TQUZFX0xPR19CVUZfU0hJRlQ9MTMKQ09ORklH
X0hBVkVfVU5TVEFCTEVfU0NIRURfQ0xPQ0s9eQoKIwojIFNjaGVkdWxlciBmZWF0dXJlcwoj
CiMgQ09ORklHX1VDTEFNUF9UQVNLIGlzIG5vdCBzZXQKIyBlbmQgb2YgU2NoZWR1bGVyIGZl
YXR1cmVzCgpDT05GSUdfQVJDSF9TVVBQT1JUU19OVU1BX0JBTEFOQ0lORz15CkNPTkZJR19B
UkNIX1dBTlRfQkFUQ0hFRF9VTk1BUF9UTEJfRkxVU0g9eQpDT05GSUdfQ0NfSEFTX0lOVDEy
OD15CkNPTkZJR19BUkNIX1NVUFBPUlRTX0lOVDEyOD15CiMgQ09ORklHX05VTUFfQkFMQU5D
SU5HIGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUFM9eQpDT05GSUdfUEFHRV9DT1VOVEVSPXkK
Q09ORklHX01FTUNHPXkKQ09ORklHX01FTUNHX1NXQVA9eQpDT05GSUdfTUVNQ0dfS01FTT15
CkNPTkZJR19CTEtfQ0dST1VQPXkKQ09ORklHX0NHUk9VUF9XUklURUJBQ0s9eQpDT05GSUdf
Q0dST1VQX1NDSEVEPXkKQ09ORklHX0ZBSVJfR1JPVVBfU0NIRUQ9eQpDT05GSUdfQ0ZTX0JB
TkRXSURUSD15CkNPTkZJR19SVF9HUk9VUF9TQ0hFRD15CkNPTkZJR19DR1JPVVBfUElEUz15
CkNPTkZJR19DR1JPVVBfUkRNQT15CkNPTkZJR19DR1JPVVBfRlJFRVpFUj15CkNPTkZJR19D
R1JPVVBfSFVHRVRMQj15CkNPTkZJR19DUFVTRVRTPXkKQ09ORklHX1BST0NfUElEX0NQVVNF
VD15CkNPTkZJR19DR1JPVVBfREVWSUNFPXkKQ09ORklHX0NHUk9VUF9DUFVBQ0NUPXkKIyBD
T05GSUdfQ0dST1VQX1BFUkYgaXMgbm90IHNldAojIENPTkZJR19DR1JPVVBfTUlTQyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NHUk9VUF9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19TT0NLX0NH
Uk9VUF9EQVRBPXkKQ09ORklHX05BTUVTUEFDRVM9eQpDT05GSUdfVVRTX05TPXkKQ09ORklH
X1RJTUVfTlM9eQpDT05GSUdfSVBDX05TPXkKIyBDT05GSUdfVVNFUl9OUyBpcyBub3Qgc2V0
CkNPTkZJR19QSURfTlM9eQpDT05GSUdfTkVUX05TPXkKIyBDT05GSUdfQ0hFQ0tQT0lOVF9S
RVNUT1JFIGlzIG5vdCBzZXQKQ09ORklHX1NDSEVEX0FVVE9HUk9VUD15CiMgQ09ORklHX1NZ
U0ZTX0RFUFJFQ0FURUQgaXMgbm90IHNldAojIENPTkZJR19SRUxBWSBpcyBub3Qgc2V0CkNP
TkZJR19CTEtfREVWX0lOSVRSRD15CkNPTkZJR19JTklUUkFNRlNfU09VUkNFPSIiCkNPTkZJ
R19SRF9HWklQPXkKQ09ORklHX1JEX0JaSVAyPXkKQ09ORklHX1JEX0xaTUE9eQpDT05GSUdf
UkRfWFo9eQpDT05GSUdfUkRfTFpPPXkKQ09ORklHX1JEX0xaND15CiMgQ09ORklHX1JEX1pT
VEQgaXMgbm90IHNldAojIENPTkZJR19CT09UX0NPTkZJRyBpcyBub3Qgc2V0CkNPTkZJR19D
Q19PUFRJTUlaRV9GT1JfUEVSRk9STUFOQ0U9eQojIENPTkZJR19DQ19PUFRJTUlaRV9GT1Jf
U0laRSBpcyBub3Qgc2V0CkNPTkZJR19MRF9PUlBIQU5fV0FSTj15CkNPTkZJR19TWVNDVEw9
eQpDT05GSUdfSEFWRV9VSUQxNj15CkNPTkZJR19TWVNDVExfRVhDRVBUSU9OX1RSQUNFPXkK
Q09ORklHX0hBVkVfUENTUEtSX1BMQVRGT1JNPXkKIyBDT05GSUdfRVhQRVJUIGlzIG5vdCBz
ZXQKQ09ORklHX1VJRDE2PXkKQ09ORklHX01VTFRJVVNFUj15CkNPTkZJR19TR0VUTUFTS19T
WVNDQUxMPXkKQ09ORklHX1NZU0ZTX1NZU0NBTEw9eQpDT05GSUdfRkhBTkRMRT15CkNPTkZJ
R19QT1NJWF9USU1FUlM9eQpDT05GSUdfUFJJTlRLPXkKQ09ORklHX1BSSU5US19OTUk9eQpD
T05GSUdfQlVHPXkKQ09ORklHX0VMRl9DT1JFPXkKQ09ORklHX1BDU1BLUl9QTEFURk9STT15
CkNPTkZJR19CQVNFX0ZVTEw9eQpDT05GSUdfRlVURVg9eQpDT05GSUdfRlVURVhfUEk9eQpD
T05GSUdfRVBPTEw9eQpDT05GSUdfU0lHTkFMRkQ9eQpDT05GSUdfVElNRVJGRD15CkNPTkZJ
R19FVkVOVEZEPXkKQ09ORklHX1NITUVNPXkKQ09ORklHX0FJTz15CkNPTkZJR19JT19VUklO
Rz15CkNPTkZJR19BRFZJU0VfU1lTQ0FMTFM9eQpDT05GSUdfTUVNQkFSUklFUj15CkNPTkZJ
R19LQUxMU1lNUz15CkNPTkZJR19LQUxMU1lNU19BTEw9eQpDT05GSUdfS0FMTFNZTVNfQUJT
T0xVVEVfUEVSQ1BVPXkKQ09ORklHX0tBTExTWU1TX0JBU0VfUkVMQVRJVkU9eQojIENPTkZJ
R19VU0VSRkFVTFRGRCBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX0hBU19NRU1CQVJSSUVSX1NZ
TkNfQ09SRT15CkNPTkZJR19LQ01QPXkKQ09ORklHX1JTRVE9eQojIENPTkZJR19FTUJFRERF
RCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX1BFUkZfRVZFTlRTPXkKCiMKIyBLZXJuZWwgUGVy
Zm9ybWFuY2UgRXZlbnRzIEFuZCBDb3VudGVycwojCkNPTkZJR19QRVJGX0VWRU5UUz15CiMg
Q09ORklHX0RFQlVHX1BFUkZfVVNFX1ZNQUxMT0MgaXMgbm90IHNldAojIGVuZCBvZiBLZXJu
ZWwgUGVyZm9ybWFuY2UgRXZlbnRzIEFuZCBDb3VudGVycwoKQ09ORklHX1ZNX0VWRU5UX0NP
VU5URVJTPXkKQ09ORklHX1NMVUJfREVCVUc9eQojIENPTkZJR19DT01QQVRfQlJLIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0xBQiBpcyBub3Qgc2V0CkNPTkZJR19TTFVCPXkKQ09ORklHX1NM
QUJfTUVSR0VfREVGQVVMVD15CiMgQ09ORklHX1NMQUJfRlJFRUxJU1RfUkFORE9NIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0xBQl9GUkVFTElTVF9IQVJERU5FRCBpcyBub3Qgc2V0CkNPTkZJ
R19TSFVGRkxFX1BBR0VfQUxMT0NBVE9SPXkKQ09ORklHX1NMVUJfQ1BVX1BBUlRJQUw9eQpD
T05GSUdfU1lTVEVNX0RBVEFfVkVSSUZJQ0FUSU9OPXkKQ09ORklHX1BST0ZJTElORz15CkNP
TkZJR19UUkFDRVBPSU5UUz15CiMgZW5kIG9mIEdlbmVyYWwgc2V0dXAKCkNPTkZJR182NEJJ
VD15CkNPTkZJR19YODZfNjQ9eQpDT05GSUdfWDg2PXkKQ09ORklHX0lOU1RSVUNUSU9OX0RF
Q09ERVI9eQpDT05GSUdfT1VUUFVUX0ZPUk1BVD0iZWxmNjQteDg2LTY0IgpDT05GSUdfTE9D
S0RFUF9TVVBQT1JUPXkKQ09ORklHX1NUQUNLVFJBQ0VfU1VQUE9SVD15CkNPTkZJR19NTVU9
eQpDT05GSUdfQVJDSF9NTUFQX1JORF9CSVRTX01JTj0yOApDT05GSUdfQVJDSF9NTUFQX1JO
RF9CSVRTX01BWD0zMgpDT05GSUdfQVJDSF9NTUFQX1JORF9DT01QQVRfQklUU19NSU49OApD
T05GSUdfQVJDSF9NTUFQX1JORF9DT01QQVRfQklUU19NQVg9MTYKQ09ORklHX0dFTkVSSUNf
SVNBX0RNQT15CkNPTkZJR19HRU5FUklDX0JVRz15CkNPTkZJR19HRU5FUklDX0JVR19SRUxB
VElWRV9QT0lOVEVSUz15CkNPTkZJR19BUkNIX01BWV9IQVZFX1BDX0ZEQz15CkNPTkZJR19H
RU5FUklDX0NBTElCUkFURV9ERUxBWT15CkNPTkZJR19BUkNIX0hBU19DUFVfUkVMQVg9eQpD
T05GSUdfQVJDSF9IQVNfRklMVEVSX1BHUFJPVD15CkNPTkZJR19IQVZFX1NFVFVQX1BFUl9D
UFVfQVJFQT15CkNPTkZJR19ORUVEX1BFUl9DUFVfRU1CRURfRklSU1RfQ0hVTks9eQpDT05G
SUdfTkVFRF9QRVJfQ1BVX1BBR0VfRklSU1RfQ0hVTks9eQpDT05GSUdfQVJDSF9ISUJFUk5B
VElPTl9QT1NTSUJMRT15CkNPTkZJR19BUkNIX1NVU1BFTkRfUE9TU0lCTEU9eQpDT05GSUdf
QVJDSF9XQU5UX0dFTkVSQUxfSFVHRVRMQj15CkNPTkZJR19aT05FX0RNQTMyPXkKQ09ORklH
X0FVRElUX0FSQ0g9eQpDT05GSUdfWDg2XzY0X1NNUD15CkNPTkZJR19BUkNIX1NVUFBPUlRT
X1VQUk9CRVM9eQpDT05GSUdfRklYX0VBUkxZQ09OX01FTT15CkNPTkZJR19QR1RBQkxFX0xF
VkVMUz00CkNPTkZJR19DQ19IQVNfU0FORV9TVEFDS1BST1RFQ1RPUj15CgojCiMgUHJvY2Vz
c29yIHR5cGUgYW5kIGZlYXR1cmVzCiMKQ09ORklHX1pPTkVfRE1BPXkKQ09ORklHX1NNUD15
CkNPTkZJR19YODZfRkVBVFVSRV9OQU1FUz15CkNPTkZJR19YODZfWDJBUElDPXkKIyBDT05G
SUdfWDg2X01QUEFSU0UgaXMgbm90IHNldAojIENPTkZJR19HT0xERklTSCBpcyBub3Qgc2V0
CkNPTkZJR19SRVRQT0xJTkU9eQojIENPTkZJR19YODZfQ1BVX1JFU0NUUkwgaXMgbm90IHNl
dAojIENPTkZJR19YODZfRVhURU5ERURfUExBVEZPUk0gaXMgbm90IHNldAojIENPTkZJR19Y
ODZfSU5URUxfTFBTUyBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9BTURfUExBVEZPUk1fREVW
SUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfSU9TRl9NQkkgaXMgbm90IHNldApDT05GSUdfWDg2
X1NVUFBPUlRTX01FTU9SWV9GQUlMVVJFPXkKQ09ORklHX1NDSEVEX09NSVRfRlJBTUVfUE9J
TlRFUj15CkNPTkZJR19IWVBFUlZJU09SX0dVRVNUPXkKQ09ORklHX1BBUkFWSVJUPXkKQ09O
RklHX1BBUkFWSVJUX1hYTD15CiMgQ09ORklHX1BBUkFWSVJUX0RFQlVHIGlzIG5vdCBzZXQK
Q09ORklHX1BBUkFWSVJUX1NQSU5MT0NLUz15CkNPTkZJR19YODZfSFZfQ0FMTEJBQ0tfVkVD
VE9SPXkKQ09ORklHX1hFTj15CkNPTkZJR19YRU5fUFY9eQpDT05GSUdfWEVOXzUxMkdCPXkK
Q09ORklHX1hFTl9QVl9TTVA9eQpDT05GSUdfWEVOX0RPTTA9eQpDT05GSUdfWEVOX1BWSFZN
PXkKQ09ORklHX1hFTl9QVkhWTV9TTVA9eQpDT05GSUdfWEVOX1BWSFZNX0dVRVNUPXkKQ09O
RklHX1hFTl9TQVZFX1JFU1RPUkU9eQojIENPTkZJR19YRU5fREVCVUdfRlMgaXMgbm90IHNl
dApDT05GSUdfWEVOX1BWSD15CkNPTkZJR19LVk1fR1VFU1Q9eQpDT05GSUdfQVJDSF9DUFVJ
RExFX0hBTFRQT0xMPXkKQ09ORklHX1BWSD15CiMgQ09ORklHX1BBUkFWSVJUX1RJTUVfQUND
T1VOVElORyBpcyBub3Qgc2V0CkNPTkZJR19QQVJBVklSVF9DTE9DSz15CiMgQ09ORklHX0pB
SUxIT1VTRV9HVUVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0FDUk5fR1VFU1QgaXMgbm90IHNl
dAojIENPTkZJR19NSzggaXMgbm90IHNldAojIENPTkZJR19NUFNDIGlzIG5vdCBzZXQKIyBD
T05GSUdfTUNPUkUyIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFUT00gaXMgbm90IHNldApDT05G
SUdfR0VORVJJQ19DUFU9eQpDT05GSUdfWDg2X0lOVEVSTk9ERV9DQUNIRV9TSElGVD02CkNP
TkZJR19YODZfTDFfQ0FDSEVfU0hJRlQ9NgpDT05GSUdfWDg2X1RTQz15CkNPTkZJR19YODZf
Q01QWENIRzY0PXkKQ09ORklHX1g4Nl9DTU9WPXkKQ09ORklHX1g4Nl9NSU5JTVVNX0NQVV9G
QU1JTFk9NjQKQ09ORklHX1g4Nl9ERUJVR0NUTE1TUj15CkNPTkZJR19JQTMyX0ZFQVRfQ1RM
PXkKQ09ORklHX1g4Nl9WTVhfRkVBVFVSRV9OQU1FUz15CkNPTkZJR19DUFVfU1VQX0lOVEVM
PXkKQ09ORklHX0NQVV9TVVBfQU1EPXkKQ09ORklHX0NQVV9TVVBfSFlHT049eQpDT05GSUdf
Q1BVX1NVUF9DRU5UQVVSPXkKQ09ORklHX0NQVV9TVVBfWkhBT1hJTj15CkNPTkZJR19IUEVU
X1RJTUVSPXkKQ09ORklHX0hQRVRfRU1VTEFURV9SVEM9eQpDT05GSUdfRE1JPXkKQ09ORklH
X0dBUlRfSU9NTVU9eQojIENPTkZJR19NQVhTTVAgaXMgbm90IHNldApDT05GSUdfTlJfQ1BV
U19SQU5HRV9CRUdJTj0yCkNPTkZJR19OUl9DUFVTX1JBTkdFX0VORD01MTIKQ09ORklHX05S
X0NQVVNfREVGQVVMVD02NApDT05GSUdfTlJfQ1BVUz02CkNPTkZJR19TQ0hFRF9TTVQ9eQpD
T05GSUdfU0NIRURfTUM9eQpDT05GSUdfU0NIRURfTUNfUFJJTz15CkNPTkZJR19YODZfTE9D
QUxfQVBJQz15CkNPTkZJR19YODZfSU9fQVBJQz15CkNPTkZJR19YODZfUkVST1VURV9GT1Jf
QlJPS0VOX0JPT1RfSVJRUz15CkNPTkZJR19YODZfTUNFPXkKIyBDT05GSUdfWDg2X01DRUxP
R19MRUdBQ1kgaXMgbm90IHNldApDT05GSUdfWDg2X01DRV9JTlRFTD15CkNPTkZJR19YODZf
TUNFX0FNRD15CkNPTkZJR19YODZfTUNFX1RIUkVTSE9MRD15CiMgQ09ORklHX1g4Nl9NQ0Vf
SU5KRUNUIGlzIG5vdCBzZXQKCiMKIyBQZXJmb3JtYW5jZSBtb25pdG9yaW5nCiMKQ09ORklH
X1BFUkZfRVZFTlRTX0lOVEVMX1VOQ09SRT15CkNPTkZJR19QRVJGX0VWRU5UU19JTlRFTF9S
QVBMPXkKQ09ORklHX1BFUkZfRVZFTlRTX0lOVEVMX0NTVEFURT15CiMgQ09ORklHX1BFUkZf
RVZFTlRTX0FNRF9QT1dFUiBpcyBub3Qgc2V0CiMgZW5kIG9mIFBlcmZvcm1hbmNlIG1vbml0
b3JpbmcKCkNPTkZJR19YODZfMTZCSVQ9eQpDT05GSUdfWDg2X0VTUEZJWDY0PXkKQ09ORklH
X1g4Nl9WU1lTQ0FMTF9FTVVMQVRJT049eQpDT05GSUdfWDg2X0lPUExfSU9QRVJNPXkKIyBD
T05GSUdfSThLIGlzIG5vdCBzZXQKIyBDT05GSUdfTUlDUk9DT0RFIGlzIG5vdCBzZXQKQ09O
RklHX1g4Nl9NU1I9eQpDT05GSUdfWDg2X0NQVUlEPXkKIyBDT05GSUdfWDg2XzVMRVZFTCBp
cyBub3Qgc2V0CkNPTkZJR19YODZfRElSRUNUX0dCUEFHRVM9eQojIENPTkZJR19YODZfQ1BB
X1NUQVRJU1RJQ1MgaXMgbm90IHNldAojIENPTkZJR19BTURfTUVNX0VOQ1JZUFQgaXMgbm90
IHNldApDT05GSUdfTlVNQT15CkNPTkZJR19BTURfTlVNQT15CkNPTkZJR19YODZfNjRfQUNQ
SV9OVU1BPXkKIyBDT05GSUdfTlVNQV9FTVUgaXMgbm90IHNldApDT05GSUdfTk9ERVNfU0hJ
RlQ9OApDT05GSUdfQVJDSF9TUEFSU0VNRU1fRU5BQkxFPXkKQ09ORklHX0FSQ0hfU1BBUlNF
TUVNX0RFRkFVTFQ9eQpDT05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZX01PREVMPXkKQ09ORklH
X0FSQ0hfUFJPQ19LQ09SRV9URVhUPXkKQ09ORklHX0lMTEVHQUxfUE9JTlRFUl9WQUxVRT0w
eGRlYWQwMDAwMDAwMDAwMDAKIyBDT05GSUdfWDg2X1BNRU1fTEVHQUNZIGlzIG5vdCBzZXQK
Q09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJVUFRJT049eQpDT05GSUdfWDg2X0JPT1RQQVJB
TV9NRU1PUllfQ09SUlVQVElPTl9DSEVDSz15CkNPTkZJR19YODZfUkVTRVJWRV9MT1c9NjQK
Q09ORklHX01UUlI9eQpDT05GSUdfTVRSUl9TQU5JVElaRVI9eQpDT05GSUdfTVRSUl9TQU5J
VElaRVJfRU5BQkxFX0RFRkFVTFQ9MApDT05GSUdfTVRSUl9TQU5JVElaRVJfU1BBUkVfUkVH
X05SX0RFRkFVTFQ9MQpDT05GSUdfWDg2X1BBVD15CkNPTkZJR19BUkNIX1VTRVNfUEdfVU5D
QUNIRUQ9eQpDT05GSUdfQVJDSF9SQU5ET009eQpDT05GSUdfWDg2X1NNQVA9eQpDT05GSUdf
WDg2X1VNSVA9eQpDT05GSUdfWDg2X0lOVEVMX01FTU9SWV9QUk9URUNUSU9OX0tFWVM9eQpD
T05GSUdfWDg2X0lOVEVMX1RTWF9NT0RFX09GRj15CiMgQ09ORklHX1g4Nl9JTlRFTF9UU1hf
TU9ERV9PTiBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9JTlRFTF9UU1hfTU9ERV9BVVRPIGlz
IG5vdCBzZXQKIyBDT05GSUdfWDg2X1NHWCBpcyBub3Qgc2V0CiMgQ09ORklHX0VGSSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0haXzEwMCBpcyBub3Qgc2V0CiMgQ09ORklHX0haXzI1MCBpcyBu
b3Qgc2V0CkNPTkZJR19IWl8zMDA9eQojIENPTkZJR19IWl8xMDAwIGlzIG5vdCBzZXQKQ09O
RklHX0haPTMwMApDT05GSUdfU0NIRURfSFJUSUNLPXkKIyBDT05GSUdfS0VYRUMgaXMgbm90
IHNldAojIENPTkZJR19LRVhFQ19GSUxFIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JBU0hfRFVN
UCBpcyBub3Qgc2V0CkNPTkZJR19QSFlTSUNBTF9TVEFSVD0weDEwMDAwMDAKQ09ORklHX1JF
TE9DQVRBQkxFPXkKIyBDT05GSUdfUkFORE9NSVpFX0JBU0UgaXMgbm90IHNldApDT05GSUdf
UEhZU0lDQUxfQUxJR049MHgxMDAwMDAwCkNPTkZJR19IT1RQTFVHX0NQVT15CiMgQ09ORklH
X0JPT1RQQVJBTV9IT1RQTFVHX0NQVTAgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19IT1RQ
TFVHX0NQVTAgaXMgbm90IHNldAojIENPTkZJR19DT01QQVRfVkRTTyBpcyBub3Qgc2V0CiMg
Q09ORklHX0xFR0FDWV9WU1lTQ0FMTF9FTVVMQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVH
QUNZX1ZTWVNDQUxMX1hPTkxZIGlzIG5vdCBzZXQKQ09ORklHX0xFR0FDWV9WU1lTQ0FMTF9O
T05FPXkKIyBDT05GSUdfQ01ETElORV9CT09MIGlzIG5vdCBzZXQKQ09ORklHX01PRElGWV9M
RFRfU1lTQ0FMTD15CkNPTkZJR19IQVZFX0xJVkVQQVRDSD15CiMgZW5kIG9mIFByb2Nlc3Nv
ciB0eXBlIGFuZCBmZWF0dXJlcwoKQ09ORklHX0FSQ0hfSEFTX0FERF9QQUdFUz15CkNPTkZJ
R19BUkNIX01IUF9NRU1NQVBfT05fTUVNT1JZX0VOQUJMRT15CkNPTkZJR19VU0VfUEVSQ1BV
X05VTUFfTk9ERV9JRD15CgojCiMgUG93ZXIgbWFuYWdlbWVudCBhbmQgQUNQSSBvcHRpb25z
CiMKIyBDT05GSUdfU1VTUEVORCBpcyBub3Qgc2V0CkNPTkZJR19ISUJFUk5BVEVfQ0FMTEJB
Q0tTPXkKIyBDT05GSUdfSElCRVJOQVRJT04gaXMgbm90IHNldApDT05GSUdfUE1fU0xFRVA9
eQpDT05GSUdfUE1fU0xFRVBfU01QPXkKIyBDT05GSUdfUE1fQVVUT1NMRUVQIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUE1fV0FLRUxPQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1BNPXkKIyBDT05G
SUdfUE1fREVCVUcgaXMgbm90IHNldApDT05GSUdfUE1fQ0xLPXkKIyBDT05GSUdfV1FfUE9X
RVJfRUZGSUNJRU5UX0RFRkFVTFQgaXMgbm90IHNldAojIENPTkZJR19FTkVSR1lfTU9ERUwg
aXMgbm90IHNldApDT05GSUdfQVJDSF9TVVBQT1JUU19BQ1BJPXkKQ09ORklHX0FDUEk9eQpD
T05GSUdfQUNQSV9MRUdBQ1lfVEFCTEVTX0xPT0tVUD15CkNPTkZJR19BUkNIX01JR0hUX0hB
VkVfQUNQSV9QREM9eQpDT05GSUdfQUNQSV9TWVNURU1fUE9XRVJfU1RBVEVTX1NVUFBPUlQ9
eQojIENPTkZJR19BQ1BJX0RFQlVHR0VSIGlzIG5vdCBzZXQKQ09ORklHX0FDUElfU1BDUl9U
QUJMRT15CiMgQ09ORklHX0FDUElfRlBEVCBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0xQSVQ9
eQpDT05GSUdfQUNQSV9SRVZfT1ZFUlJJREVfUE9TU0lCTEU9eQojIENPTkZJR19BQ1BJX0VD
X0RFQlVHRlMgaXMgbm90IHNldApDT05GSUdfQUNQSV9BQz15CkNPTkZJR19BQ1BJX0JBVFRF
Ulk9eQpDT05GSUdfQUNQSV9CVVRUT049eQpDT05GSUdfQUNQSV9WSURFTz15CkNPTkZJR19B
Q1BJX0ZBTj15CiMgQ09ORklHX0FDUElfVEFEIGlzIG5vdCBzZXQKIyBDT05GSUdfQUNQSV9E
T0NLIGlzIG5vdCBzZXQKQ09ORklHX0FDUElfQ1BVX0ZSRVFfUFNTPXkKQ09ORklHX0FDUElf
UFJPQ0VTU09SX0NTVEFURT15CkNPTkZJR19BQ1BJX1BST0NFU1NPUl9JRExFPXkKQ09ORklH
X0FDUElfQ1BQQ19MSUI9eQpDT05GSUdfQUNQSV9QUk9DRVNTT1I9eQpDT05GSUdfQUNQSV9I
T1RQTFVHX0NQVT15CkNPTkZJR19BQ1BJX1BST0NFU1NPUl9BR0dSRUdBVE9SPXkKQ09ORklH
X0FDUElfVEhFUk1BTD15CkNPTkZJR19BQ1BJX0NVU1RPTV9EU0RUX0ZJTEU9IiIKQ09ORklH
X0FSQ0hfSEFTX0FDUElfVEFCTEVfVVBHUkFERT15CkNPTkZJR19BQ1BJX1RBQkxFX1VQR1JB
REU9eQojIENPTkZJR19BQ1BJX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfQUNQSV9QQ0lf
U0xPVCBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0NPTlRBSU5FUj15CkNPTkZJR19BQ1BJX0hP
VFBMVUdfSU9BUElDPXkKIyBDT05GSUdfQUNQSV9TQlMgaXMgbm90IHNldApDT05GSUdfQUNQ
SV9IRUQ9eQojIENPTkZJR19BQ1BJX0NVU1RPTV9NRVRIT0QgaXMgbm90IHNldAojIENPTkZJ
R19BQ1BJX05GSVQgaXMgbm90IHNldApDT05GSUdfQUNQSV9OVU1BPXkKQ09ORklHX0FDUElf
SE1BVD15CkNPTkZJR19IQVZFX0FDUElfQVBFST15CkNPTkZJR19IQVZFX0FDUElfQVBFSV9O
TUk9eQojIENPTkZJR19BQ1BJX0FQRUkgaXMgbm90IHNldAojIENPTkZJR19BQ1BJX0RQVEYg
aXMgbm90IHNldAojIENPTkZJR19BQ1BJX0NPTkZJR0ZTIGlzIG5vdCBzZXQKIyBDT05GSUdf
UE1JQ19PUFJFR0lPTiBpcyBub3Qgc2V0CkNPTkZJR19YODZfUE1fVElNRVI9eQoKIwojIENQ
VSBGcmVxdWVuY3kgc2NhbGluZwojCkNPTkZJR19DUFVfRlJFUT15CkNPTkZJR19DUFVfRlJF
UV9HT1ZfQVRUUl9TRVQ9eQpDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTU1PTj15CiMgQ09ORklH
X0NQVV9GUkVRX1NUQVQgaXMgbm90IHNldAojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dP
Vl9QRVJGT1JNQU5DRSBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09W
X1BPV0VSU0FWRSBpcyBub3Qgc2V0CkNPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9VU0VS
U1BBQ0U9eQojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9TQ0hFRFVUSUwgaXMgbm90
IHNldApDT05GSUdfQ1BVX0ZSRVFfR09WX1BFUkZPUk1BTkNFPXkKIyBDT05GSUdfQ1BVX0ZS
RVFfR09WX1BPV0VSU0FWRSBpcyBub3Qgc2V0CkNPTkZJR19DUFVfRlJFUV9HT1ZfVVNFUlNQ
QUNFPXkKQ09ORklHX0NQVV9GUkVRX0dPVl9PTkRFTUFORD15CiMgQ09ORklHX0NQVV9GUkVR
X0dPVl9DT05TRVJWQVRJVkUgaXMgbm90IHNldApDT05GSUdfQ1BVX0ZSRVFfR09WX1NDSEVE
VVRJTD15CgojCiMgQ1BVIGZyZXF1ZW5jeSBzY2FsaW5nIGRyaXZlcnMKIwpDT05GSUdfWDg2
X0lOVEVMX1BTVEFURT15CkNPTkZJR19YODZfUENDX0NQVUZSRVE9eQpDT05GSUdfWDg2X0FD
UElfQ1BVRlJFUT15CkNPTkZJR19YODZfQUNQSV9DUFVGUkVRX0NQQj15CiMgQ09ORklHX1g4
Nl9QT1dFUk5PV19LOCBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9BTURfRlJFUV9TRU5TSVRJ
VklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9TUEVFRFNURVBfQ0VOVFJJTk8gaXMgbm90
IHNldAojIENPTkZJR19YODZfUDRfQ0xPQ0tNT0QgaXMgbm90IHNldAoKIwojIHNoYXJlZCBv
cHRpb25zCiMKIyBlbmQgb2YgQ1BVIEZyZXF1ZW5jeSBzY2FsaW5nCgojCiMgQ1BVIElkbGUK
IwpDT05GSUdfQ1BVX0lETEU9eQpDT05GSUdfQ1BVX0lETEVfR09WX0xBRERFUj15CkNPTkZJ
R19DUFVfSURMRV9HT1ZfTUVOVT15CiMgQ09ORklHX0NQVV9JRExFX0dPVl9URU8gaXMgbm90
IHNldAojIENPTkZJR19DUFVfSURMRV9HT1ZfSEFMVFBPTEwgaXMgbm90IHNldApDT05GSUdf
SEFMVFBPTExfQ1BVSURMRT15CiMgZW5kIG9mIENQVSBJZGxlCgojIENPTkZJR19JTlRFTF9J
RExFIGlzIG5vdCBzZXQKIyBlbmQgb2YgUG93ZXIgbWFuYWdlbWVudCBhbmQgQUNQSSBvcHRp
b25zCgojCiMgQnVzIG9wdGlvbnMgKFBDSSBldGMuKQojCkNPTkZJR19QQ0lfRElSRUNUPXkK
Q09ORklHX1BDSV9NTUNPTkZJRz15CkNPTkZJR19QQ0lfWEVOPXkKQ09ORklHX01NQ09ORl9G
QU0xMEg9eQpDT05GSUdfSVNBX0RNQV9BUEk9eQpDT05GSUdfQU1EX05CPXkKIyBDT05GSUdf
WDg2X1NZU0ZCIGlzIG5vdCBzZXQKIyBlbmQgb2YgQnVzIG9wdGlvbnMgKFBDSSBldGMuKQoK
IwojIEJpbmFyeSBFbXVsYXRpb25zCiMKQ09ORklHX0lBMzJfRU1VTEFUSU9OPXkKIyBDT05G
SUdfWDg2X1gzMiBpcyBub3Qgc2V0CkNPTkZJR19DT01QQVRfMzI9eQpDT05GSUdfQ09NUEFU
PXkKQ09ORklHX0NPTVBBVF9GT1JfVTY0X0FMSUdOTUVOVD15CkNPTkZJR19TWVNWSVBDX0NP
TVBBVD15CiMgZW5kIG9mIEJpbmFyeSBFbXVsYXRpb25zCgojCiMgRmlybXdhcmUgRHJpdmVy
cwojCiMgQ09ORklHX0VERCBpcyBub3Qgc2V0CkNPTkZJR19GSVJNV0FSRV9NRU1NQVA9eQpD
T05GSUdfRE1JSUQ9eQpDT05GSUdfRE1JX1NZU0ZTPXkKQ09ORklHX0RNSV9TQ0FOX01BQ0hJ
TkVfTk9OX0VGSV9GQUxMQkFDSz15CiMgQ09ORklHX0ZXX0NGR19TWVNGUyBpcyBub3Qgc2V0
CiMgQ09ORklHX0dPT0dMRV9GSVJNV0FSRSBpcyBub3Qgc2V0CgojCiMgVGVncmEgZmlybXdh
cmUgZHJpdmVyCiMKIyBlbmQgb2YgVGVncmEgZmlybXdhcmUgZHJpdmVyCiMgZW5kIG9mIEZp
cm13YXJlIERyaXZlcnMKCkNPTkZJR19IQVZFX0tWTT15CiMgQ09ORklHX1ZJUlRVQUxJWkFU
SU9OIGlzIG5vdCBzZXQKQ09ORklHX0FTX0FWWDUxMj15CkNPTkZJR19BU19TSEExX05JPXkK
Q09ORklHX0FTX1NIQTI1Nl9OST15CkNPTkZJR19BU19UUEFVU0U9eQoKIwojIEdlbmVyYWwg
YXJjaGl0ZWN0dXJlLWRlcGVuZGVudCBvcHRpb25zCiMKQ09ORklHX0NSQVNIX0NPUkU9eQpD
T05GSUdfSE9UUExVR19TTVQ9eQpDT05GSUdfR0VORVJJQ19FTlRSWT15CiMgQ09ORklHX0tQ
Uk9CRVMgaXMgbm90IHNldApDT05GSUdfSlVNUF9MQUJFTD15CiMgQ09ORklHX1NUQVRJQ19L
RVlTX1NFTEZURVNUIGlzIG5vdCBzZXQKIyBDT05GSUdfU1RBVElDX0NBTExfU0VMRlRFU1Qg
aXMgbm90IHNldApDT05GSUdfVVBST0JFUz15CkNPTkZJR19IQVZFX0VGRklDSUVOVF9VTkFM
SUdORURfQUNDRVNTPXkKQ09ORklHX0FSQ0hfVVNFX0JVSUxUSU5fQlNXQVA9eQpDT05GSUdf
SEFWRV9JT1JFTUFQX1BST1Q9eQpDT05GSUdfSEFWRV9LUFJPQkVTPXkKQ09ORklHX0hBVkVf
S1JFVFBST0JFUz15CkNPTkZJR19IQVZFX09QVFBST0JFUz15CkNPTkZJR19IQVZFX0tQUk9C
RVNfT05fRlRSQUNFPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fRVJST1JfSU5KRUNUSU9OPXkK
Q09ORklHX0hBVkVfTk1JPXkKQ09ORklHX0hBVkVfQVJDSF9UUkFDRUhPT0s9eQpDT05GSUdf
SEFWRV9ETUFfQ09OVElHVU9VUz15CkNPTkZJR19HRU5FUklDX1NNUF9JRExFX1RIUkVBRD15
CkNPTkZJR19BUkNIX0hBU19GT1JUSUZZX1NPVVJDRT15CkNPTkZJR19BUkNIX0hBU19TRVRf
TUVNT1JZPXkKQ09ORklHX0FSQ0hfSEFTX1NFVF9ESVJFQ1RfTUFQPXkKQ09ORklHX0hBVkVf
QVJDSF9USFJFQURfU1RSVUNUX1dISVRFTElTVD15CkNPTkZJR19BUkNIX1dBTlRTX0RZTkFN
SUNfVEFTS19TVFJVQ1Q9eQpDT05GSUdfSEFWRV9BU01fTU9EVkVSU0lPTlM9eQpDT05GSUdf
SEFWRV9SRUdTX0FORF9TVEFDS19BQ0NFU1NfQVBJPXkKQ09ORklHX0hBVkVfUlNFUT15CkNP
TkZJR19IQVZFX0ZVTkNUSU9OX0FSR19BQ0NFU1NfQVBJPXkKQ09ORklHX0hBVkVfSFdfQlJF
QUtQT0lOVD15CkNPTkZJR19IQVZFX01JWEVEX0JSRUFLUE9JTlRTX1JFR1M9eQpDT05GSUdf
SEFWRV9VU0VSX1JFVFVSTl9OT1RJRklFUj15CkNPTkZJR19IQVZFX1BFUkZfRVZFTlRTX05N
ST15CkNPTkZJR19IQVZFX0hBUkRMT0NLVVBfREVURUNUT1JfUEVSRj15CkNPTkZJR19IQVZF
X1BFUkZfUkVHUz15CkNPTkZJR19IQVZFX1BFUkZfVVNFUl9TVEFDS19EVU1QPXkKQ09ORklH
X0hBVkVfQVJDSF9KVU1QX0xBQkVMPXkKQ09ORklHX0hBVkVfQVJDSF9KVU1QX0xBQkVMX1JF
TEFUSVZFPXkKQ09ORklHX01NVV9HQVRIRVJfVEFCTEVfRlJFRT15CkNPTkZJR19NTVVfR0FU
SEVSX1JDVV9UQUJMRV9GUkVFPXkKQ09ORklHX0FSQ0hfSEFWRV9OTUlfU0FGRV9DTVBYQ0hH
PXkKQ09ORklHX0hBVkVfQUxJR05FRF9TVFJVQ1RfUEFHRT15CkNPTkZJR19IQVZFX0NNUFhD
SEdfTE9DQUw9eQpDT05GSUdfSEFWRV9DTVBYQ0hHX0RPVUJMRT15CkNPTkZJR19BUkNIX1dB
TlRfQ09NUEFUX0lQQ19QQVJTRV9WRVJTSU9OPXkKQ09ORklHX0FSQ0hfV0FOVF9PTERfQ09N
UEFUX0lQQz15CkNPTkZJR19IQVZFX0FSQ0hfU0VDQ09NUD15CkNPTkZJR19IQVZFX0FSQ0hf
U0VDQ09NUF9GSUxURVI9eQpDT05GSUdfU0VDQ09NUD15CkNPTkZJR19TRUNDT01QX0ZJTFRF
Uj15CiMgQ09ORklHX1NFQ0NPTVBfQ0FDSEVfREVCVUcgaXMgbm90IHNldApDT05GSUdfSEFW
RV9BUkNIX1NUQUNLTEVBSz15CkNPTkZJR19IQVZFX1NUQUNLUFJPVEVDVE9SPXkKQ09ORklH
X1NUQUNLUFJPVEVDVE9SPXkKQ09ORklHX1NUQUNLUFJPVEVDVE9SX1NUUk9ORz15CkNPTkZJ
R19BUkNIX1NVUFBPUlRTX0xUT19DTEFORz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX0xUT19D
TEFOR19USElOPXkKQ09ORklHX0xUT19OT05FPXkKQ09ORklHX0hBVkVfQVJDSF9XSVRISU5f
U1RBQ0tfRlJBTUVTPXkKQ09ORklHX0hBVkVfQ09OVEVYVF9UUkFDS0lORz15CkNPTkZJR19I
QVZFX0NPTlRFWFRfVFJBQ0tJTkdfT0ZGU1RBQ0s9eQpDT05GSUdfSEFWRV9WSVJUX0NQVV9B
Q0NPVU5USU5HX0dFTj15CkNPTkZJR19IQVZFX0lSUV9USU1FX0FDQ09VTlRJTkc9eQpDT05G
SUdfSEFWRV9NT1ZFX1BVRD15CkNPTkZJR19IQVZFX01PVkVfUE1EPXkKQ09ORklHX0hBVkVf
QVJDSF9UUkFOU1BBUkVOVF9IVUdFUEFHRT15CkNPTkZJR19IQVZFX0FSQ0hfVFJBTlNQQVJF
TlRfSFVHRVBBR0VfUFVEPXkKQ09ORklHX0hBVkVfQVJDSF9IVUdFX1ZNQVA9eQpDT05GSUdf
QVJDSF9XQU5UX0hVR0VfUE1EX1NIQVJFPXkKQ09ORklHX0hBVkVfQVJDSF9TT0ZUX0RJUlRZ
PXkKQ09ORklHX0hBVkVfTU9EX0FSQ0hfU1BFQ0lGSUM9eQpDT05GSUdfTU9EVUxFU19VU0Vf
RUxGX1JFTEE9eQpDT05GSUdfSEFWRV9JUlFfRVhJVF9PTl9JUlFfU1RBQ0s9eQpDT05GSUdf
SEFWRV9TT0ZUSVJRX09OX09XTl9TVEFDSz15CkNPTkZJR19BUkNIX0hBU19FTEZfUkFORE9N
SVpFPXkKQ09ORklHX0hBVkVfQVJDSF9NTUFQX1JORF9CSVRTPXkKQ09ORklHX0hBVkVfRVhJ
VF9USFJFQUQ9eQpDT05GSUdfQVJDSF9NTUFQX1JORF9CSVRTPTI4CkNPTkZJR19IQVZFX0FS
Q0hfTU1BUF9STkRfQ09NUEFUX0JJVFM9eQpDT05GSUdfQVJDSF9NTUFQX1JORF9DT01QQVRf
QklUUz04CkNPTkZJR19IQVZFX0FSQ0hfQ09NUEFUX01NQVBfQkFTRVM9eQpDT05GSUdfSEFW
RV9TVEFDS19WQUxJREFUSU9OPXkKQ09ORklHX0hBVkVfUkVMSUFCTEVfU1RBQ0tUUkFDRT15
CkNPTkZJR19PTERfU0lHU1VTUEVORDM9eQpDT05GSUdfQ09NUEFUX09MRF9TSUdBQ1RJT049
eQpDT05GSUdfQ09NUEFUXzMyQklUX1RJTUU9eQpDT05GSUdfSEFWRV9BUkNIX1ZNQVBfU1RB
Q0s9eQpDT05GSUdfVk1BUF9TVEFDSz15CkNPTkZJR19IQVZFX0FSQ0hfUkFORE9NSVpFX0tT
VEFDS19PRkZTRVQ9eQojIENPTkZJR19SQU5ET01JWkVfS1NUQUNLX09GRlNFVF9ERUZBVUxU
IGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfSEFTX1NUUklDVF9LRVJORUxfUldYPXkKQ09ORklH
X1NUUklDVF9LRVJORUxfUldYPXkKQ09ORklHX0FSQ0hfSEFTX1NUUklDVF9NT0RVTEVfUldY
PXkKQ09ORklHX1NUUklDVF9NT0RVTEVfUldYPXkKQ09ORklHX0hBVkVfQVJDSF9QUkVMMzJf
UkVMT0NBVElPTlM9eQojIENPTkZJR19MT0NLX0VWRU5UX0NPVU5UUyBpcyBub3Qgc2V0CkNP
TkZJR19BUkNIX0hBU19NRU1fRU5DUllQVD15CkNPTkZJR19IQVZFX1NUQVRJQ19DQUxMPXkK
Q09ORklHX0hBVkVfU1RBVElDX0NBTExfSU5MSU5FPXkKQ09ORklHX0hBVkVfUFJFRU1QVF9E
WU5BTUlDPXkKQ09ORklHX0FSQ0hfV0FOVF9MRF9PUlBIQU5fV0FSTj15CkNPTkZJR19BUkNI
X1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15CkNPTkZJR19BUkNIX0hBU19FTEZDT1JFX0NP
TVBBVD15CgojCiMgR0NPVi1iYXNlZCBrZXJuZWwgcHJvZmlsaW5nCiMKIyBDT05GSUdfR0NP
Vl9LRVJORUwgaXMgbm90IHNldApDT05GSUdfQVJDSF9IQVNfR0NPVl9QUk9GSUxFX0FMTD15
CiMgZW5kIG9mIEdDT1YtYmFzZWQga2VybmVsIHByb2ZpbGluZwoKQ09ORklHX0hBVkVfR0ND
X1BMVUdJTlM9eQojIGVuZCBvZiBHZW5lcmFsIGFyY2hpdGVjdHVyZS1kZXBlbmRlbnQgb3B0
aW9ucwoKQ09ORklHX1JUX01VVEVYRVM9eQpDT05GSUdfQkFTRV9TTUFMTD0wCkNPTkZJR19N
T0RVTEVTPXkKIyBDT05GSUdfTU9EVUxFX0ZPUkNFX0xPQUQgaXMgbm90IHNldApDT05GSUdf
TU9EVUxFX1VOTE9BRD15CiMgQ09ORklHX01PRFVMRV9GT1JDRV9VTkxPQUQgaXMgbm90IHNl
dAojIENPTkZJR19NT0RWRVJTSU9OUyBpcyBub3Qgc2V0CiMgQ09ORklHX01PRFVMRV9TUkNW
RVJTSU9OX0FMTCBpcyBub3Qgc2V0CiMgQ09ORklHX01PRFVMRV9TSUcgaXMgbm90IHNldAoj
IENPTkZJR19NT0RVTEVfQ09NUFJFU1NfTk9ORSBpcyBub3Qgc2V0CkNPTkZJR19NT0RVTEVf
Q09NUFJFU1NfR1pJUD15CiMgQ09ORklHX01PRFVMRV9DT01QUkVTU19YWiBpcyBub3Qgc2V0
CiMgQ09ORklHX01PRFVMRV9DT01QUkVTU19aU1REIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9E
VUxFX0FMTE9XX01JU1NJTkdfTkFNRVNQQUNFX0lNUE9SVFMgaXMgbm90IHNldApDT05GSUdf
TU9EUFJPQkVfUEFUSD0iL3NiaW4vbW9kcHJvYmUiCkNPTkZJR19NT0RVTEVTX1RSRUVfTE9P
S1VQPXkKQ09ORklHX0JMT0NLPXkKQ09ORklHX0JMS19TQ1NJX1JFUVVFU1Q9eQpDT05GSUdf
QkxLX0NHUk9VUF9SV1NUQVQ9eQpDT05GSUdfQkxLX0RFVl9CU0c9eQojIENPTkZJR19CTEtf
REVWX0JTR0xJQiBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX0lOVEVHUklUWT15CkNPTkZJ
R19CTEtfREVWX0lOVEVHUklUWV9UMTA9eQojIENPTkZJR19CTEtfREVWX1pPTkVEIGlzIG5v
dCBzZXQKQ09ORklHX0JMS19ERVZfVEhST1RUTElORz15CiMgQ09ORklHX0JMS19ERVZfVEhS
T1RUTElOR19MT1cgaXMgbm90IHNldAojIENPTkZJR19CTEtfQ01ETElORV9QQVJTRVIgaXMg
bm90IHNldAojIENPTkZJR19CTEtfV0JUIGlzIG5vdCBzZXQKQ09ORklHX0JMS19DR1JPVVBf
SU9MQVRFTkNZPXkKIyBDT05GSUdfQkxLX0NHUk9VUF9JT0NPU1QgaXMgbm90IHNldApDT05G
SUdfQkxLX0RFQlVHX0ZTPXkKIyBDT05GSUdfQkxLX1NFRF9PUEFMIGlzIG5vdCBzZXQKIyBD
T05GSUdfQkxLX0lOTElORV9FTkNSWVBUSU9OIGlzIG5vdCBzZXQKCiMKIyBQYXJ0aXRpb24g
VHlwZXMKIwpDT05GSUdfUEFSVElUSU9OX0FEVkFOQ0VEPXkKIyBDT05GSUdfQUNPUk5fUEFS
VElUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfQUlYX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNP
TkZJR19PU0ZfUEFSVElUSU9OPXkKQ09ORklHX0FNSUdBX1BBUlRJVElPTj15CiMgQ09ORklH
X0FUQVJJX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19NQUNfUEFSVElUSU9OPXkKQ09O
RklHX01TRE9TX1BBUlRJVElPTj15CkNPTkZJR19CU0RfRElTS0xBQkVMPXkKQ09ORklHX01J
TklYX1NVQlBBUlRJVElPTj15CkNPTkZJR19TT0xBUklTX1g4Nl9QQVJUSVRJT049eQpDT05G
SUdfVU5JWFdBUkVfRElTS0xBQkVMPXkKIyBDT05GSUdfTERNX1BBUlRJVElPTiBpcyBub3Qg
c2V0CkNPTkZJR19TR0lfUEFSVElUSU9OPXkKIyBDT05GSUdfVUxUUklYX1BBUlRJVElPTiBp
cyBub3Qgc2V0CkNPTkZJR19TVU5fUEFSVElUSU9OPXkKQ09ORklHX0tBUk1BX1BBUlRJVElP
Tj15CkNPTkZJR19FRklfUEFSVElUSU9OPXkKIyBDT05GSUdfU1lTVjY4X1BBUlRJVElPTiBp
cyBub3Qgc2V0CiMgQ09ORklHX0NNRExJTkVfUEFSVElUSU9OIGlzIG5vdCBzZXQKIyBlbmQg
b2YgUGFydGl0aW9uIFR5cGVzCgpDT05GSUdfQkxPQ0tfQ09NUEFUPXkKQ09ORklHX0JMS19N
UV9QQ0k9eQpDT05GSUdfQkxLX1BNPXkKCiMKIyBJTyBTY2hlZHVsZXJzCiMKQ09ORklHX01R
X0lPU0NIRURfREVBRExJTkU9eQpDT05GSUdfTVFfSU9TQ0hFRF9LWUJFUj15CkNPTkZJR19J
T1NDSEVEX0JGUT15CiMgQ09ORklHX0JGUV9HUk9VUF9JT1NDSEVEIGlzIG5vdCBzZXQKIyBl
bmQgb2YgSU8gU2NoZWR1bGVycwoKQ09ORklHX0FTTjE9eQpDT05GSUdfSU5MSU5FX1NQSU5f
VU5MT0NLX0lSUT15CkNPTkZJR19JTkxJTkVfUkVBRF9VTkxPQ0s9eQpDT05GSUdfSU5MSU5F
X1JFQURfVU5MT0NLX0lSUT15CkNPTkZJR19JTkxJTkVfV1JJVEVfVU5MT0NLPXkKQ09ORklH
X0lOTElORV9XUklURV9VTkxPQ0tfSVJRPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfQVRPTUlD
X1JNVz15CkNPTkZJR19NVVRFWF9TUElOX09OX09XTkVSPXkKQ09ORklHX1JXU0VNX1NQSU5f
T05fT1dORVI9eQpDT05GSUdfTE9DS19TUElOX09OX09XTkVSPXkKQ09ORklHX0FSQ0hfVVNF
X1FVRVVFRF9TUElOTE9DS1M9eQpDT05GSUdfUVVFVUVEX1NQSU5MT0NLUz15CkNPTkZJR19B
UkNIX1VTRV9RVUVVRURfUldMT0NLUz15CkNPTkZJR19RVUVVRURfUldMT0NLUz15CkNPTkZJ
R19BUkNIX0hBU19OT05fT1ZFUkxBUFBJTkdfQUREUkVTU19TUEFDRT15CkNPTkZJR19BUkNI
X0hBU19TWU5DX0NPUkVfQkVGT1JFX1VTRVJNT0RFPXkKQ09ORklHX0FSQ0hfSEFTX1NZU0NB
TExfV1JBUFBFUj15CkNPTkZJR19GUkVFWkVSPXkKCiMKIyBFeGVjdXRhYmxlIGZpbGUgZm9y
bWF0cwojCkNPTkZJR19CSU5GTVRfRUxGPXkKQ09ORklHX0NPTVBBVF9CSU5GTVRfRUxGPXkK
Q09ORklHX0VMRkNPUkU9eQpDT05GSUdfQ09SRV9EVU1QX0RFRkFVTFRfRUxGX0hFQURFUlM9
eQpDT05GSUdfQklORk1UX1NDUklQVD15CkNPTkZJR19CSU5GTVRfTUlTQz15CkNPTkZJR19D
T1JFRFVNUD15CiMgZW5kIG9mIEV4ZWN1dGFibGUgZmlsZSBmb3JtYXRzCgojCiMgTWVtb3J5
IE1hbmFnZW1lbnQgb3B0aW9ucwojCkNPTkZJR19TRUxFQ1RfTUVNT1JZX01PREVMPXkKQ09O
RklHX1NQQVJTRU1FTV9NQU5VQUw9eQpDT05GSUdfU1BBUlNFTUVNPXkKQ09ORklHX05FRURf
TVVMVElQTEVfTk9ERVM9eQpDT05GSUdfU1BBUlNFTUVNX0VYVFJFTUU9eQpDT05GSUdfU1BB
UlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkKQ09ORklHX1NQQVJTRU1FTV9WTUVNTUFQPXkKQ09O
RklHX0hBVkVfRkFTVF9HVVA9eQpDT05GSUdfTUVNT1JZX0lTT0xBVElPTj15CkNPTkZJR19B
UkNIX0VOQUJMRV9NRU1PUllfSE9UUExVRz15CiMgQ09ORklHX01FTU9SWV9IT1RQTFVHIGlz
IG5vdCBzZXQKQ09ORklHX1NQTElUX1BUTE9DS19DUFVTPTQKQ09ORklHX0FSQ0hfRU5BQkxF
X1NQTElUX1BNRF9QVExPQ0s9eQpDT05GSUdfQ09NUEFDVElPTj15CiMgQ09ORklHX1BBR0Vf
UkVQT1JUSU5HIGlzIG5vdCBzZXQKQ09ORklHX01JR1JBVElPTj15CkNPTkZJR19BUkNIX0VO
QUJMRV9IVUdFUEFHRV9NSUdSQVRJT049eQpDT05GSUdfQVJDSF9FTkFCTEVfVEhQX01JR1JB
VElPTj15CkNPTkZJR19DT05USUdfQUxMT0M9eQpDT05GSUdfUEhZU19BRERSX1RfNjRCSVQ9
eQpDT05GSUdfVklSVF9UT19CVVM9eQpDT05GSUdfTU1VX05PVElGSUVSPXkKIyBDT05GSUdf
S1NNIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfTU1BUF9NSU5fQUREUj00MDk2CkNPTkZJ
R19BUkNIX1NVUFBPUlRTX01FTU9SWV9GQUlMVVJFPXkKIyBDT05GSUdfTUVNT1JZX0ZBSUxV
UkUgaXMgbm90IHNldApDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0U9eQpDT05GSUdfVFJB
TlNQQVJFTlRfSFVHRVBBR0VfQUxXQVlTPXkKIyBDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBB
R0VfTUFEVklTRSBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX1dBTlRTX1RIUF9TV0FQPXkKQ09O
RklHX1RIUF9TV0FQPXkKIyBDT05GSUdfQ0xFQU5DQUNIRSBpcyBub3Qgc2V0CiMgQ09ORklH
X0ZST05UU1dBUCBpcyBub3Qgc2V0CkNPTkZJR19DTUE9eQojIENPTkZJR19DTUFfREVCVUcg
aXMgbm90IHNldAojIENPTkZJR19DTUFfREVCVUdGUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NN
QV9TWVNGUyBpcyBub3Qgc2V0CkNPTkZJR19DTUFfQVJFQVM9NwojIENPTkZJR19aUE9PTCBp
cyBub3Qgc2V0CiMgQ09ORklHX1pCVUQgaXMgbm90IHNldAojIENPTkZJR19aU01BTExPQyBp
cyBub3Qgc2V0CkNPTkZJR19HRU5FUklDX0VBUkxZX0lPUkVNQVA9eQojIENPTkZJR19ERUZF
UlJFRF9TVFJVQ1RfUEFHRV9JTklUIGlzIG5vdCBzZXQKIyBDT05GSUdfSURMRV9QQUdFX1RS
QUNLSU5HIGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfSEFTX0NBQ0hFX0xJTkVfU0laRT15CkNP
TkZJR19BUkNIX0hBU19QVEVfREVWTUFQPXkKQ09ORklHX0FSQ0hfVVNFU19ISUdIX1ZNQV9G
TEFHUz15CkNPTkZJR19BUkNIX0hBU19QS0VZUz15CiMgQ09ORklHX1BFUkNQVV9TVEFUUyBp
cyBub3Qgc2V0CiMgQ09ORklHX0dVUF9URVNUIGlzIG5vdCBzZXQKIyBDT05GSUdfUkVBRF9P
TkxZX1RIUF9GT1JfRlMgaXMgbm90IHNldApDT05GSUdfQVJDSF9IQVNfUFRFX1NQRUNJQUw9
eQojIGVuZCBvZiBNZW1vcnkgTWFuYWdlbWVudCBvcHRpb25zCgpDT05GSUdfTkVUPXkKQ09O
RklHX05FVF9JTkdSRVNTPXkKQ09ORklHX1NLQl9FWFRFTlNJT05TPXkKCiMKIyBOZXR3b3Jr
aW5nIG9wdGlvbnMKIwpDT05GSUdfUEFDS0VUPXkKQ09ORklHX1BBQ0tFVF9ESUFHPXkKQ09O
RklHX1VOSVg9eQpDT05GSUdfVU5JWF9TQ009eQpDT05GSUdfVU5JWF9ESUFHPXkKIyBDT05G
SUdfVExTIGlzIG5vdCBzZXQKQ09ORklHX1hGUk09eQpDT05GSUdfWEZSTV9BTEdPPXkKIyBD
T05GSUdfWEZSTV9VU0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfWEZSTV9JTlRFUkZBQ0UgaXMg
bm90IHNldAojIENPTkZJR19YRlJNX1NVQl9QT0xJQ1kgaXMgbm90IHNldAojIENPTkZJR19Y
RlJNX01JR1JBVEUgaXMgbm90IHNldAojIENPTkZJR19YRlJNX1NUQVRJU1RJQ1MgaXMgbm90
IHNldApDT05GSUdfWEZSTV9BSD15CkNPTkZJR19YRlJNX0VTUD15CkNPTkZJR19YRlJNX0lQ
Q09NUD15CiMgQ09ORklHX05FVF9LRVkgaXMgbm90IHNldApDT05GSUdfSU5FVD15CiMgQ09O
RklHX0lQX01VTFRJQ0FTVCBpcyBub3Qgc2V0CkNPTkZJR19JUF9BRFZBTkNFRF9ST1VURVI9
eQojIENPTkZJR19JUF9GSUJfVFJJRV9TVEFUUyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX01V
TFRJUExFX1RBQkxFUyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1JPVVRFX01VTFRJUEFUSCBp
cyBub3Qgc2V0CkNPTkZJR19JUF9ST1VURV9WRVJCT1NFPXkKQ09ORklHX0lQX1JPVVRFX0NM
QVNTSUQ9eQojIENPTkZJR19JUF9QTlAgaXMgbm90IHNldAojIENPTkZJR19ORVRfSVBJUCBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVF9JUEdSRV9ERU1VWCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfSVBfVFVOTkVMPXkKQ09ORklHX1NZTl9DT09LSUVTPXkKIyBDT05GSUdfTkVUX0lQVlRJ
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0ZPVSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9G
T1VfSVBfVFVOTkVMUyBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfQUggaXMgbm90IHNldAoj
IENPTkZJR19JTkVUX0VTUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfSVBDT01QIGlzIG5v
dCBzZXQKQ09ORklHX0lORVRfVFVOTkVMPXkKQ09ORklHX0lORVRfRElBRz15CkNPTkZJR19J
TkVUX1RDUF9ESUFHPXkKQ09ORklHX0lORVRfVURQX0RJQUc9eQpDT05GSUdfSU5FVF9SQVdf
RElBRz15CiMgQ09ORklHX0lORVRfRElBR19ERVNUUk9ZIGlzIG5vdCBzZXQKQ09ORklHX1RD
UF9DT05HX0FEVkFOQ0VEPXkKIyBDT05GSUdfVENQX0NPTkdfQklDIGlzIG5vdCBzZXQKQ09O
RklHX1RDUF9DT05HX0NVQklDPXkKIyBDT05GSUdfVENQX0NPTkdfV0VTVFdPT0QgaXMgbm90
IHNldAojIENPTkZJR19UQ1BfQ09OR19IVENQIGlzIG5vdCBzZXQKIyBDT05GSUdfVENQX0NP
TkdfSFNUQ1AgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19IWUJMQSBpcyBub3Qgc2V0
CiMgQ09ORklHX1RDUF9DT05HX1ZFR0FTIGlzIG5vdCBzZXQKIyBDT05GSUdfVENQX0NPTkdf
TlYgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19TQ0FMQUJMRSBpcyBub3Qgc2V0CiMg
Q09ORklHX1RDUF9DT05HX0xQIGlzIG5vdCBzZXQKIyBDT05GSUdfVENQX0NPTkdfVkVOTyBp
cyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX1lFQUggaXMgbm90IHNldAojIENPTkZJR19U
Q1BfQ09OR19JTExJTk9JUyBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX0RDVENQIGlz
IG5vdCBzZXQKIyBDT05GSUdfVENQX0NPTkdfQ0RHIGlzIG5vdCBzZXQKIyBDT05GSUdfVENQ
X0NPTkdfQkJSIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfQ1VCSUM9eQojIENPTkZJR19E
RUZBVUxUX1JFTk8gaXMgbm90IHNldApDT05GSUdfREVGQVVMVF9UQ1BfQ09ORz0iY3ViaWMi
CiMgQ09ORklHX1RDUF9NRDVTSUcgaXMgbm90IHNldApDT05GSUdfSVBWNj15CiMgQ09ORklH
X0lQVjZfUk9VVEVSX1BSRUYgaXMgbm90IHNldAojIENPTkZJR19JUFY2X09QVElNSVNUSUNf
REFEIGlzIG5vdCBzZXQKQ09ORklHX0lORVQ2X0FIPXkKQ09ORklHX0lORVQ2X0VTUD15CiMg
Q09ORklHX0lORVQ2X0VTUF9PRkZMT0FEIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5FVDZfRVNQ
SU5UQ1AgaXMgbm90IHNldApDT05GSUdfSU5FVDZfSVBDT01QPXkKIyBDT05GSUdfSVBWNl9N
SVA2IGlzIG5vdCBzZXQKIyBDT05GSUdfSVBWNl9JTEEgaXMgbm90IHNldApDT05GSUdfSU5F
VDZfWEZSTV9UVU5ORUw9eQpDT05GSUdfSU5FVDZfVFVOTkVMPXkKIyBDT05GSUdfSVBWNl9W
VEkgaXMgbm90IHNldApDT05GSUdfSVBWNl9TSVQ9eQpDT05GSUdfSVBWNl9TSVRfNlJEPXkK
Q09ORklHX0lQVjZfTkRJU0NfTk9ERVRZUEU9eQojIENPTkZJR19JUFY2X1RVTk5FTCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0lQVjZfTVVMVElQTEVfVEFCTEVTIGlzIG5vdCBzZXQKIyBDT05G
SUdfSVBWNl9NUk9VVEUgaXMgbm90IHNldAojIENPTkZJR19JUFY2X1NFRzZfTFdUVU5ORUwg
aXMgbm90IHNldAojIENPTkZJR19JUFY2X1NFRzZfSE1BQyBpcyBub3Qgc2V0CiMgQ09ORklH
X0lQVjZfUlBMX0xXVFVOTkVMIGlzIG5vdCBzZXQKIyBDT05GSUdfTVBUQ1AgaXMgbm90IHNl
dApDT05GSUdfTkVUV09SS19TRUNNQVJLPXkKQ09ORklHX05FVF9QVFBfQ0xBU1NJRlk9eQoj
IENPTkZJR19ORVRXT1JLX1BIWV9USU1FU1RBTVBJTkcgaXMgbm90IHNldApDT05GSUdfTkVU
RklMVEVSPXkKQ09ORklHX05FVEZJTFRFUl9BRFZBTkNFRD15CkNPTkZJR19CUklER0VfTkVU
RklMVEVSPXkKCiMKIyBDb3JlIE5ldGZpbHRlciBDb25maWd1cmF0aW9uCiMKQ09ORklHX05F
VEZJTFRFUl9JTkdSRVNTPXkKQ09ORklHX05FVEZJTFRFUl9ORVRMSU5LPXkKQ09ORklHX05F
VEZJTFRFUl9GQU1JTFlfQlJJREdFPXkKQ09ORklHX05FVEZJTFRFUl9GQU1JTFlfQVJQPXkK
Q09ORklHX05FVEZJTFRFUl9ORVRMSU5LX0FDQ1Q9eQpDT05GSUdfTkVURklMVEVSX05FVExJ
TktfUVVFVUU9eQpDT05GSUdfTkVURklMVEVSX05FVExJTktfTE9HPXkKQ09ORklHX05FVEZJ
TFRFUl9ORVRMSU5LX09TRj15CkNPTkZJR19ORl9DT05OVFJBQ0s9eQpDT05GSUdfTkZfTE9H
X1NZU0xPRz15CkNPTkZJR19ORVRGSUxURVJfQ09OTkNPVU5UPXkKQ09ORklHX05GX0NPTk5U
UkFDS19NQVJLPXkKQ09ORklHX05GX0NPTk5UUkFDS19TRUNNQVJLPXkKQ09ORklHX05GX0NP
Tk5UUkFDS19aT05FUz15CkNPTkZJR19ORl9DT05OVFJBQ0tfUFJPQ0ZTPXkKQ09ORklHX05G
X0NPTk5UUkFDS19FVkVOVFM9eQpDT05GSUdfTkZfQ09OTlRSQUNLX1RJTUVPVVQ9eQpDT05G
SUdfTkZfQ09OTlRSQUNLX1RJTUVTVEFNUD15CkNPTkZJR19ORl9DT05OVFJBQ0tfTEFCRUxT
PXkKQ09ORklHX05GX0NUX1BST1RPX0RDQ1A9eQpDT05GSUdfTkZfQ1RfUFJPVE9fR1JFPXkK
Q09ORklHX05GX0NUX1BST1RPX1NDVFA9eQpDT05GSUdfTkZfQ1RfUFJPVE9fVURQTElURT15
CiMgQ09ORklHX05GX0NPTk5UUkFDS19BTUFOREEgaXMgbm90IHNldApDT05GSUdfTkZfQ09O
TlRSQUNLX0ZUUD15CkNPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15CkNPTkZJR19ORl9DT05O
VFJBQ0tfSVJDPXkKIyBDT05GSUdfTkZfQ09OTlRSQUNLX05FVEJJT1NfTlMgaXMgbm90IHNl
dAojIENPTkZJR19ORl9DT05OVFJBQ0tfU05NUCBpcyBub3Qgc2V0CkNPTkZJR19ORl9DT05O
VFJBQ0tfUFBUUD15CiMgQ09ORklHX05GX0NPTk5UUkFDS19TQU5FIGlzIG5vdCBzZXQKQ09O
RklHX05GX0NPTk5UUkFDS19TSVA9eQojIENPTkZJR19ORl9DT05OVFJBQ0tfVEZUUCBpcyBu
b3Qgc2V0CkNPTkZJR19ORl9DVF9ORVRMSU5LPXkKIyBDT05GSUdfTkZfQ1RfTkVUTElOS19U
SU1FT1VUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkZfQ1RfTkVUTElOS19IRUxQRVIgaXMgbm90
IHNldApDT05GSUdfTkVURklMVEVSX05FVExJTktfR0xVRV9DVD15CkNPTkZJR19ORl9OQVQ9
eQpDT05GSUdfTkZfTkFUX0ZUUD15CkNPTkZJR19ORl9OQVRfSVJDPXkKQ09ORklHX05GX05B
VF9TSVA9eQpDT05GSUdfTkZfTkFUX1JFRElSRUNUPXkKQ09ORklHX05GX05BVF9NQVNRVUVS
QURFPXkKQ09ORklHX05FVEZJTFRFUl9TWU5QUk9YWT15CkNPTkZJR19ORl9UQUJMRVM9eQpD
T05GSUdfTkZfVEFCTEVTX0lORVQ9eQpDT05GSUdfTkZfVEFCTEVTX05FVERFVj15CkNPTkZJ
R19ORlRfTlVNR0VOPXkKQ09ORklHX05GVF9DVD15CiMgQ09ORklHX05GVF9GTE9XX09GRkxP
QUQgaXMgbm90IHNldApDT05GSUdfTkZUX0NPVU5URVI9eQpDT05GSUdfTkZUX0NPTk5MSU1J
VD15CkNPTkZJR19ORlRfTE9HPXkKQ09ORklHX05GVF9MSU1JVD15CkNPTkZJR19ORlRfTUFT
UT15CkNPTkZJR19ORlRfUkVESVI9eQpDT05GSUdfTkZUX05BVD15CkNPTkZJR19ORlRfVFVO
TkVMPXkKQ09ORklHX05GVF9PQkpSRUY9eQpDT05GSUdfTkZUX1FVRVVFPXkKQ09ORklHX05G
VF9RVU9UQT15CkNPTkZJR19ORlRfUkVKRUNUPXkKQ09ORklHX05GVF9SRUpFQ1RfSU5FVD15
CkNPTkZJR19ORlRfQ09NUEFUPXkKQ09ORklHX05GVF9IQVNIPXkKQ09ORklHX05GVF9GSUI9
eQojIENPTkZJR19ORlRfWEZSTSBpcyBub3Qgc2V0CkNPTkZJR19ORlRfU09DS0VUPXkKQ09O
RklHX05GVF9PU0Y9eQpDT05GSUdfTkZUX1RQUk9YWT15CkNPTkZJR19ORlRfU1lOUFJPWFk9
eQpDT05GSUdfTkZfRFVQX05FVERFVj15CkNPTkZJR19ORlRfRFVQX05FVERFVj15CkNPTkZJ
R19ORlRfRldEX05FVERFVj15CkNPTkZJR19ORlRfUkVKRUNUX05FVERFVj15CkNPTkZJR19O
Rl9GTE9XX1RBQkxFX0lORVQ9eQpDT05GSUdfTkZfRkxPV19UQUJMRT15CkNPTkZJR19ORVRG
SUxURVJfWFRBQkxFUz15CkNPTkZJR19ORVRGSUxURVJfWFRBQkxFU19DT01QQVQ9eQoKIwoj
IFh0YWJsZXMgY29tYmluZWQgbW9kdWxlcwojCkNPTkZJR19ORVRGSUxURVJfWFRfTUFSSz15
CkNPTkZJR19ORVRGSUxURVJfWFRfQ09OTk1BUks9eQpDT05GSUdfTkVURklMVEVSX1hUX1NF
VD15CgojCiMgWHRhYmxlcyB0YXJnZXRzCiMKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRf
Q0hFQ0tTVU09eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DTEFTU0lGWT15CkNPTkZJ
R19ORVRGSUxURVJfWFRfVEFSR0VUX0NPTk5NQVJLPXkKQ09ORklHX05FVEZJTFRFUl9YVF9U
QVJHRVRfQ09OTlNFQ01BUks9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DVD15CkNP
TkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0RTQ1A9eQpDT05GSUdfTkVURklMVEVSX1hUX1RB
UkdFVF9ITD15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0hNQVJLPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9UQVJHRVRfSURMRVRJTUVSPXkKIyBDT05GSUdfTkVURklMVEVSX1hUX1RB
UkdFVF9MRUQgaXMgbm90IHNldApDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9MT0c9eQpD
T05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9NQVJLPXkKQ09ORklHX05FVEZJTFRFUl9YVF9O
QVQ9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORVRNQVA9eQpDT05GSUdfTkVURklM
VEVSX1hUX1RBUkdFVF9ORkxPRz15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX05GUVVF
VUU9eQojIENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX05PVFJBQ0sgaXMgbm90IHNldApD
T05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9SQVRFRVNUPXkKQ09ORklHX05FVEZJTFRFUl9Y
VF9UQVJHRVRfUkVESVJFQ1Q9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9NQVNRVUVS
QURFPXkKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfVEVFPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfVFJBQ0U9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9TRUNNQVJL
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfVENQTVNTPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfVENQT1BUU1RSSVA9eQoKIwojIFh0YWJsZXMgbWF0Y2hlcwojCkNPTkZJ
R19ORVRGSUxURVJfWFRfTUFUQ0hfQUREUlRZUEU9eQpDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0JQRj15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ0dST1VQPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9DTFVTVEVSPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9D
T01NRU5UPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OQllURVM9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX0NPTk5MQUJFTD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFU
Q0hfQ09OTkxJTUlUPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTUFSSz15CkNP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09OTlRSQUNLPXkKQ09ORklHX05FVEZJTFRFUl9Y
VF9NQVRDSF9DUFU9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0RDQ1A9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX0RFVkdST1VQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRD
SF9EU0NQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9FQ049eQpDT05GSUdfTkVURklM
VEVSX1hUX01BVENIX0VTUD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEFTSExJTUlU
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9IRUxQRVI9eQpDT05GSUdfTkVURklMVEVS
X1hUX01BVENIX0hMPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9JUENPTVA9eQpDT05G
SUdfTkVURklMVEVSX1hUX01BVENIX0lQUkFOR0U9eQpDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0lQVlM9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0wyVFA9eQpDT05GSUdfTkVU
RklMVEVSX1hUX01BVENIX0xFTkdUSD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTElN
SVQ9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX01BQz15CkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfTUFSSz15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTVVMVElQT1JUPXkK
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9ORkFDQ1Q9eQpDT05GSUdfTkVURklMVEVSX1hU
X01BVENIX09TRj15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfT1dORVI9eQojIENPTkZJ
R19ORVRGSUxURVJfWFRfTUFUQ0hfUE9MSUNZIGlzIG5vdCBzZXQKQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9QSFlTREVWPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9QS1RUWVBF
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9RVU9UQT15CkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfUkFURUVTVD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfUkVBTE09eQpD
T05GSUdfTkVURklMVEVSX1hUX01BVENIX1JFQ0VOVD15CkNPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfU0NUUD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU09DS0VUPXkKQ09ORklH
X05FVEZJTFRFUl9YVF9NQVRDSF9TVEFURT15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
U1RBVElTVElDPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TVFJJTkc9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX1RDUE1TUz15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
VElNRT15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfVTMyPXkKIyBlbmQgb2YgQ29yZSBO
ZXRmaWx0ZXIgQ29uZmlndXJhdGlvbgoKQ09ORklHX0lQX1NFVD15CkNPTkZJR19JUF9TRVRf
TUFYPTI1NgpDT05GSUdfSVBfU0VUX0JJVE1BUF9JUD15CkNPTkZJR19JUF9TRVRfQklUTUFQ
X0lQTUFDPXkKQ09ORklHX0lQX1NFVF9CSVRNQVBfUE9SVD15CkNPTkZJR19JUF9TRVRfSEFT
SF9JUD15CkNPTkZJR19JUF9TRVRfSEFTSF9JUE1BUks9eQpDT05GSUdfSVBfU0VUX0hBU0hf
SVBQT1JUPXkKQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVElQPXkKQ09ORklHX0lQX1NFVF9I
QVNIX0lQUE9SVE5FVD15CkNPTkZJR19JUF9TRVRfSEFTSF9JUE1BQz15CkNPTkZJR19JUF9T
RVRfSEFTSF9NQUM9eQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVE5FVD15CkNPTkZJR19J
UF9TRVRfSEFTSF9ORVQ9eQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUTkVUPXkKQ09ORklHX0lQ
X1NFVF9IQVNIX05FVFBPUlQ9eQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUSUZBQ0U9eQpDT05G
SUdfSVBfU0VUX0xJU1RfU0VUPXkKQ09ORklHX0lQX1ZTPXkKQ09ORklHX0lQX1ZTX0lQVjY9
eQojIENPTkZJR19JUF9WU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19JUF9WU19UQUJfQklU
Uz0xMgoKIwojIElQVlMgdHJhbnNwb3J0IHByb3RvY29sIGxvYWQgYmFsYW5jaW5nIHN1cHBv
cnQKIwojIENPTkZJR19JUF9WU19QUk9UT19UQ1AgaXMgbm90IHNldAojIENPTkZJR19JUF9W
U19QUk9UT19VRFAgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19QUk9UT19FU1AgaXMgbm90
IHNldAojIENPTkZJR19JUF9WU19QUk9UT19BSCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZT
X1BST1RPX1NDVFAgaXMgbm90IHNldAoKIwojIElQVlMgc2NoZWR1bGVyCiMKIyBDT05GSUdf
SVBfVlNfUlIgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19XUlIgaXMgbm90IHNldAojIENP
TkZJR19JUF9WU19MQyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX1dMQyBpcyBub3Qgc2V0
CiMgQ09ORklHX0lQX1ZTX0ZPIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfVlNfT1ZGIGlzIG5v
dCBzZXQKIyBDT05GSUdfSVBfVlNfTEJMQyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX0xC
TENSIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfVlNfREggaXMgbm90IHNldAojIENPTkZJR19J
UF9WU19TSCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX01IIGlzIG5vdCBzZXQKIyBDT05G
SUdfSVBfVlNfU0VEIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfVlNfTlEgaXMgbm90IHNldAoj
IENPTkZJR19JUF9WU19UV09TIGlzIG5vdCBzZXQKCiMKIyBJUFZTIFNIIHNjaGVkdWxlcgoj
CkNPTkZJR19JUF9WU19TSF9UQUJfQklUUz04CgojCiMgSVBWUyBNSCBzY2hlZHVsZXIKIwpD
T05GSUdfSVBfVlNfTUhfVEFCX0lOREVYPTEyCgojCiMgSVBWUyBhcHBsaWNhdGlvbiBoZWxw
ZXIKIwpDT05GSUdfSVBfVlNfTkZDVD15CgojCiMgSVA6IE5ldGZpbHRlciBDb25maWd1cmF0
aW9uCiMKQ09ORklHX05GX0RFRlJBR19JUFY0PXkKQ09ORklHX05GX1NPQ0tFVF9JUFY0PXkK
Q09ORklHX05GX1RQUk9YWV9JUFY0PXkKQ09ORklHX05GX1RBQkxFU19JUFY0PXkKQ09ORklH
X05GVF9SRUpFQ1RfSVBWND15CkNPTkZJR19ORlRfRFVQX0lQVjQ9eQojIENPTkZJR19ORlRf
RklCX0lQVjQgaXMgbm90IHNldAojIENPTkZJR19ORl9UQUJMRVNfQVJQIGlzIG5vdCBzZXQK
Q09ORklHX05GX0ZMT1dfVEFCTEVfSVBWND15CkNPTkZJR19ORl9EVVBfSVBWND15CiMgQ09O
RklHX05GX0xPR19BUlAgaXMgbm90IHNldApDT05GSUdfTkZfTE9HX0lQVjQ9eQpDT05GSUdf
TkZfUkVKRUNUX0lQVjQ9eQpDT05GSUdfTkZfTkFUX1BQVFA9eQpDT05GSUdfTkZfTkFUX0gz
MjM9eQojIENPTkZJR19JUF9ORl9JUFRBQkxFUyBpcyBub3Qgc2V0CkNPTkZJR19JUF9ORl9B
UlBUQUJMRVM9eQpDT05GSUdfSVBfTkZfQVJQRklMVEVSPXkKQ09ORklHX0lQX05GX0FSUF9N
QU5HTEU9eQojIGVuZCBvZiBJUDogTmV0ZmlsdGVyIENvbmZpZ3VyYXRpb24KCiMKIyBJUHY2
OiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbgojCkNPTkZJR19ORl9TT0NLRVRfSVBWNj15CkNP
TkZJR19ORl9UUFJPWFlfSVBWNj15CkNPTkZJR19ORl9UQUJMRVNfSVBWNj15CkNPTkZJR19O
RlRfUkVKRUNUX0lQVjY9eQpDT05GSUdfTkZUX0RVUF9JUFY2PXkKQ09ORklHX05GVF9GSUJf
SVBWNj15CkNPTkZJR19ORl9GTE9XX1RBQkxFX0lQVjY9eQpDT05GSUdfTkZfRFVQX0lQVjY9
eQpDT05GSUdfTkZfUkVKRUNUX0lQVjY9eQpDT05GSUdfTkZfTE9HX0lQVjY9eQpDT05GSUdf
SVA2X05GX0lQVEFCTEVTPXkKQ09ORklHX0lQNl9ORl9NQVRDSF9BSD15CkNPTkZJR19JUDZf
TkZfTUFUQ0hfRVVJNjQ9eQpDT05GSUdfSVA2X05GX01BVENIX0ZSQUc9eQpDT05GSUdfSVA2
X05GX01BVENIX09QVFM9eQpDT05GSUdfSVA2X05GX01BVENIX0hMPXkKQ09ORklHX0lQNl9O
Rl9NQVRDSF9JUFY2SEVBREVSPXkKQ09ORklHX0lQNl9ORl9NQVRDSF9NSD15CkNPTkZJR19J
UDZfTkZfTUFUQ0hfUlBGSUxURVI9eQpDT05GSUdfSVA2X05GX01BVENIX1JUPXkKQ09ORklH
X0lQNl9ORl9NQVRDSF9TUkg9eQpDT05GSUdfSVA2X05GX1RBUkdFVF9ITD15CkNPTkZJR19J
UDZfTkZfRklMVEVSPXkKQ09ORklHX0lQNl9ORl9UQVJHRVRfUkVKRUNUPXkKQ09ORklHX0lQ
Nl9ORl9UQVJHRVRfU1lOUFJPWFk9eQpDT05GSUdfSVA2X05GX01BTkdMRT15CkNPTkZJR19J
UDZfTkZfUkFXPXkKQ09ORklHX0lQNl9ORl9OQVQ9eQpDT05GSUdfSVA2X05GX1RBUkdFVF9N
QVNRVUVSQURFPXkKQ09ORklHX0lQNl9ORl9UQVJHRVRfTlBUPXkKIyBlbmQgb2YgSVB2Njog
TmV0ZmlsdGVyIENvbmZpZ3VyYXRpb24KCkNPTkZJR19ORl9ERUZSQUdfSVBWNj15CiMgQ09O
RklHX05GX1RBQkxFU19CUklER0UgaXMgbm90IHNldApDT05GSUdfTkZfQ09OTlRSQUNLX0JS
SURHRT15CkNPTkZJR19CUklER0VfTkZfRUJUQUJMRVM9eQpDT05GSUdfQlJJREdFX0VCVF9C
Uk9VVEU9eQpDT05GSUdfQlJJREdFX0VCVF9UX0ZJTFRFUj15CkNPTkZJR19CUklER0VfRUJU
X1RfTkFUPXkKQ09ORklHX0JSSURHRV9FQlRfODAyXzM9eQpDT05GSUdfQlJJREdFX0VCVF9B
TU9ORz15CkNPTkZJR19CUklER0VfRUJUX0FSUD15CkNPTkZJR19CUklER0VfRUJUX0lQPXkK
Q09ORklHX0JSSURHRV9FQlRfSVA2PXkKQ09ORklHX0JSSURHRV9FQlRfTElNSVQ9eQpDT05G
SUdfQlJJREdFX0VCVF9NQVJLPXkKQ09ORklHX0JSSURHRV9FQlRfUEtUVFlQRT15CkNPTkZJ
R19CUklER0VfRUJUX1NUUD15CkNPTkZJR19CUklER0VfRUJUX1ZMQU49eQpDT05GSUdfQlJJ
REdFX0VCVF9BUlBSRVBMWT15CkNPTkZJR19CUklER0VfRUJUX0ROQVQ9eQpDT05GSUdfQlJJ
REdFX0VCVF9NQVJLX1Q9eQpDT05GSUdfQlJJREdFX0VCVF9SRURJUkVDVD15CkNPTkZJR19C
UklER0VfRUJUX1NOQVQ9eQpDT05GSUdfQlJJREdFX0VCVF9MT0c9eQpDT05GSUdfQlJJREdF
X0VCVF9ORkxPRz15CiMgQ09ORklHX0JQRklMVEVSIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBf
RENDUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1NDVFAgaXMgbm90IHNldAojIENPTkZJR19S
RFMgaXMgbm90IHNldAojIENPTkZJR19USVBDIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRNIGlz
IG5vdCBzZXQKIyBDT05GSUdfTDJUUCBpcyBub3Qgc2V0CkNPTkZJR19TVFA9eQpDT05GSUdf
QlJJREdFPXkKQ09ORklHX0JSSURHRV9JR01QX1NOT09QSU5HPXkKQ09ORklHX0JSSURHRV9W
TEFOX0ZJTFRFUklORz15CiMgQ09ORklHX0JSSURHRV9NUlAgaXMgbm90IHNldAojIENPTkZJ
R19CUklER0VfQ0ZNIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0RTQSBpcyBub3Qgc2V0CkNP
TkZJR19WTEFOXzgwMjFRPXkKIyBDT05GSUdfVkxBTl84MDIxUV9HVlJQIGlzIG5vdCBzZXQK
IyBDT05GSUdfVkxBTl84MDIxUV9NVlJQIGlzIG5vdCBzZXQKIyBDT05GSUdfREVDTkVUIGlz
IG5vdCBzZXQKQ09ORklHX0xMQz15CiMgQ09ORklHX0xMQzIgaXMgbm90IHNldAojIENPTkZJ
R19BVEFMSyBpcyBub3Qgc2V0CiMgQ09ORklHX1gyNSBpcyBub3Qgc2V0CiMgQ09ORklHX0xB
UEIgaXMgbm90IHNldAojIENPTkZJR19QSE9ORVQgaXMgbm90IHNldAojIENPTkZJR182TE9X
UEFOIGlzIG5vdCBzZXQKIyBDT05GSUdfSUVFRTgwMjE1NCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfU0NIRUQ9eQoKIwojIFF1ZXVlaW5nL1NjaGVkdWxpbmcKIwojIENPTkZJR19ORVRfU0NI
X0NCUSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfSFRCIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX1NDSF9IRlNDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9QUklPIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX1NDSF9NVUxUSVEgaXMgbm90IHNldAojIENPTkZJR19ORVRf
U0NIX1JFRCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfU0ZCIGlzIG5vdCBzZXQKIyBD
T05GSUdfTkVUX1NDSF9TRlEgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX1RFUUwgaXMg
bm90IHNldAojIENPTkZJR19ORVRfU0NIX1RCRiBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9T
Q0hfQ0JTIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9FVEYgaXMgbm90IHNldAojIENP
TkZJR19ORVRfU0NIX1RBUFJJTyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfR1JFRCBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfRFNNQVJLIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkVUX1NDSF9ORVRFTSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfRFJSIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1NDSF9NUVBSSU8gaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NI
X1NLQlBSSU8gaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX0NIT0tFIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1NDSF9RRlEgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX0NPREVM
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9GUV9DT0RFTCBpcyBub3Qgc2V0CiMgQ09O
RklHX05FVF9TQ0hfQ0FLRSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfRlEgaXMgbm90
IHNldAojIENPTkZJR19ORVRfU0NIX0hIRiBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hf
UElFIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9JTkdSRVNTIGlzIG5vdCBzZXQKIyBD
T05GSUdfTkVUX1NDSF9QTFVHIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9FVFMgaXMg
bm90IHNldAojIENPTkZJR19ORVRfU0NIX0RFRkFVTFQgaXMgbm90IHNldAoKIwojIENsYXNz
aWZpY2F0aW9uCiMKQ09ORklHX05FVF9DTFM9eQojIENPTkZJR19ORVRfQ0xTX0JBU0lDIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19UQ0lOREVYIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkVUX0NMU19ST1VURTQgaXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xTX0ZXIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX0NMU19VMzIgaXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xTX1JT
VlAgaXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xTX1JTVlA2IGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX0NMU19GTE9XIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19DR1JPVVAgaXMg
bm90IHNldAojIENPTkZJR19ORVRfQ0xTX0JQRiBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9D
TFNfRkxPV0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19NQVRDSEFMTCBpcyBub3Qg
c2V0CkNPTkZJR19ORVRfRU1BVENIPXkKQ09ORklHX05FVF9FTUFUQ0hfU1RBQ0s9MzIKIyBD
T05GSUdfTkVUX0VNQVRDSF9DTVAgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX05C
WVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0VNQVRDSF9VMzIgaXMgbm90IHNldAojIENP
TkZJR19ORVRfRU1BVENIX01FVEEgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX1RF
WFQgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX0lQU0VUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTkVUX0VNQVRDSF9JUFQgaXMgbm90IHNldApDT05GSUdfTkVUX0NMU19BQ1Q9eQoj
IENPTkZJR19ORVRfQUNUX1BPTElDRSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9BQ1RfR0FD
VCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9BQ1RfTUlSUkVEIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX0FDVF9TQU1QTEUgaXMgbm90IHNldApDT05GSUdfTkVUX0FDVF9JUFQ9eQojIENP
TkZJR19ORVRfQUNUX05BVCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9BQ1RfUEVESVQgaXMg
bm90IHNldAojIENPTkZJR19ORVRfQUNUX1NJTVAgaXMgbm90IHNldAojIENPTkZJR19ORVRf
QUNUX1NLQkVESVQgaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX0NTVU0gaXMgbm90IHNl
dAojIENPTkZJR19ORVRfQUNUX01QTFMgaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX1ZM
QU4gaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX0JQRiBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfQUNUX0NPTk5NQVJLPXkKQ09ORklHX05FVF9BQ1RfQ1RJTkZPPXkKIyBDT05GSUdfTkVU
X0FDVF9TS0JNT0QgaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX0lGRSBpcyBub3Qgc2V0
CiMgQ09ORklHX05FVF9BQ1RfVFVOTkVMX0tFWSBpcyBub3Qgc2V0CkNPTkZJR19ORVRfQUNU
X0NUPXkKIyBDT05GSUdfTkVUX0FDVF9HQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1RD
X1NLQl9FWFQgaXMgbm90IHNldApDT05GSUdfTkVUX1NDSF9GSUZPPXkKIyBDT05GSUdfRENC
IGlzIG5vdCBzZXQKQ09ORklHX0ROU19SRVNPTFZFUj15CiMgQ09ORklHX0JBVE1BTl9BRFYg
aXMgbm90IHNldAojIENPTkZJR19PUEVOVlNXSVRDSCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZT
T0NLRVRTIGlzIG5vdCBzZXQKQ09ORklHX05FVExJTktfRElBRz15CiMgQ09ORklHX01QTFMg
aXMgbm90IHNldAojIENPTkZJR19ORVRfTlNIIGlzIG5vdCBzZXQKIyBDT05GSUdfSFNSIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX1NXSVRDSERFViBpcyBub3Qgc2V0CiMgQ09ORklHX05F
VF9MM19NQVNURVJfREVWIGlzIG5vdCBzZXQKIyBDT05GSUdfUVJUUiBpcyBub3Qgc2V0CiMg
Q09ORklHX05FVF9OQ1NJIGlzIG5vdCBzZXQKQ09ORklHX1BDUFVfREVWX1JFRkNOVD15CkNP
TkZJR19SUFM9eQpDT05GSUdfUkZTX0FDQ0VMPXkKQ09ORklHX1NPQ0tfUlhfUVVFVUVfTUFQ
UElORz15CkNPTkZJR19YUFM9eQojIENPTkZJR19DR1JPVVBfTkVUX1BSSU8gaXMgbm90IHNl
dApDT05GSUdfQ0dST1VQX05FVF9DTEFTU0lEPXkKQ09ORklHX05FVF9SWF9CVVNZX1BPTEw9
eQpDT05GSUdfQlFMPXkKQ09ORklHX05FVF9GTE9XX0xJTUlUPXkKCiMKIyBOZXR3b3JrIHRl
c3RpbmcKIwojIENPTkZJR19ORVRfUEtUR0VOIGlzIG5vdCBzZXQKQ09ORklHX05FVF9EUk9Q
X01PTklUT1I9eQojIGVuZCBvZiBOZXR3b3JrIHRlc3RpbmcKIyBlbmQgb2YgTmV0d29ya2lu
ZyBvcHRpb25zCgojIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0CiMgQ09ORklHX0NBTiBp
cyBub3Qgc2V0CkNPTkZJR19CVD15CkNPTkZJR19CVF9CUkVEUj15CkNPTkZJR19CVF9SRkNP
TU09eQpDT05GSUdfQlRfUkZDT01NX1RUWT15CkNPTkZJR19CVF9CTkVQPXkKQ09ORklHX0JU
X0JORVBfTUNfRklMVEVSPXkKQ09ORklHX0JUX0JORVBfUFJPVE9fRklMVEVSPXkKQ09ORklH
X0JUX0hJRFA9eQpDT05GSUdfQlRfSFM9eQpDT05GSUdfQlRfTEU9eQojIENPTkZJR19CVF9M
RURTIGlzIG5vdCBzZXQKQ09ORklHX0JUX01TRlRFWFQ9eQojIENPTkZJR19CVF9BT1NQRVhU
IGlzIG5vdCBzZXQKQ09ORklHX0JUX0RFQlVHRlM9eQojIENPTkZJR19CVF9TRUxGVEVTVCBp
cyBub3Qgc2V0CiMgQ09ORklHX0JUX0ZFQVRVUkVfREVCVUcgaXMgbm90IHNldAoKIwojIEJs
dWV0b290aCBkZXZpY2UgZHJpdmVycwojCkNPTkZJR19CVF9JTlRFTD15CkNPTkZJR19CVF9C
Q009eQpDT05GSUdfQlRfUlRMPXkKQ09ORklHX0JUX0hDSUJUVVNCPXkKIyBDT05GSUdfQlRf
SENJQlRVU0JfQVVUT1NVU1BFTkQgaXMgbm90IHNldApDT05GSUdfQlRfSENJQlRVU0JfQkNN
PXkKQ09ORklHX0JUX0hDSUJUVVNCX01USz15CkNPTkZJR19CVF9IQ0lCVFVTQl9SVEw9eQpD
T05GSUdfQlRfSENJVUFSVD15CkNPTkZJR19CVF9IQ0lVQVJUX0g0PXkKQ09ORklHX0JUX0hD
SVVBUlRfQkNTUD15CkNPTkZJR19CVF9IQ0lVQVJUX0FUSDNLPXkKQ09ORklHX0JUX0hDSVVB
UlRfSU5URUw9eQpDT05GSUdfQlRfSENJVUFSVF9BRzZYWD15CkNPTkZJR19CVF9IQ0lCQ00y
MDNYPXkKQ09ORklHX0JUX0hDSUJQQTEwWD15CkNPTkZJR19CVF9IQ0lCRlVTQj15CkNPTkZJ
R19CVF9IQ0lWSENJPXkKQ09ORklHX0JUX01SVkw9eQpDT05GSUdfQlRfQVRIM0s9eQojIGVu
ZCBvZiBCbHVldG9vdGggZGV2aWNlIGRyaXZlcnMKCiMgQ09ORklHX0FGX1JYUlBDIGlzIG5v
dCBzZXQKIyBDT05GSUdfQUZfS0NNIGlzIG5vdCBzZXQKQ09ORklHX1dJUkVMRVNTPXkKQ09O
RklHX0NGRzgwMjExPXkKIyBDT05GSUdfTkw4MDIxMV9URVNUTU9ERSBpcyBub3Qgc2V0CiMg
Q09ORklHX0NGRzgwMjExX0RFVkVMT1BFUl9XQVJOSU5HUyBpcyBub3Qgc2V0CkNPTkZJR19D
Rkc4MDIxMV9SRVFVSVJFX1NJR05FRF9SRUdEQj15CkNPTkZJR19DRkc4MDIxMV9VU0VfS0VS
TkVMX1JFR0RCX0tFWVM9eQojIENPTkZJR19DRkc4MDIxMV9ERUZBVUxUX1BTIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQ0ZHODAyMTFfREVCVUdGUyBpcyBub3Qgc2V0CkNPTkZJR19DRkc4MDIx
MV9DUkRBX1NVUFBPUlQ9eQojIENPTkZJR19DRkc4MDIxMV9XRVhUIGlzIG5vdCBzZXQKQ09O
RklHX01BQzgwMjExPXkKQ09ORklHX01BQzgwMjExX0hBU19SQz15CkNPTkZJR19NQUM4MDIx
MV9SQ19NSU5TVFJFTD15CkNPTkZJR19NQUM4MDIxMV9SQ19ERUZBVUxUX01JTlNUUkVMPXkK
Q09ORklHX01BQzgwMjExX1JDX0RFRkFVTFQ9Im1pbnN0cmVsX2h0IgojIENPTkZJR19NQUM4
MDIxMV9NRVNIIGlzIG5vdCBzZXQKQ09ORklHX01BQzgwMjExX0xFRFM9eQojIENPTkZJR19N
QUM4MDIxMV9ERUJVR0ZTIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFDODAyMTFfTUVTU0FHRV9U
UkFDSU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFDODAyMTFfREVCVUdfTUVOVSBpcyBub3Qg
c2V0CkNPTkZJR19NQUM4MDIxMV9TVEFfSEFTSF9NQVhfU0laRT0wCiMgQ09ORklHX1JGS0lM
TCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF85UCBpcyBub3Qgc2V0CiMgQ09ORklHX0NBSUYg
aXMgbm90IHNldApDT05GSUdfQ0VQSF9MSUI9eQojIENPTkZJR19DRVBIX0xJQl9QUkVUVFlE
RUJVRyBpcyBub3Qgc2V0CiMgQ09ORklHX0NFUEhfTElCX1VTRV9ETlNfUkVTT0xWRVIgaXMg
bm90IHNldAojIENPTkZJR19ORkMgaXMgbm90IHNldAojIENPTkZJR19QU0FNUExFIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX0lGRSBpcyBub3Qgc2V0CiMgQ09ORklHX0xXVFVOTkVMIGlz
IG5vdCBzZXQKQ09ORklHX0RTVF9DQUNIRT15CkNPTkZJR19HUk9fQ0VMTFM9eQpDT05GSUdf
TkVUX1NFTEZURVNUUz15CkNPTkZJR19QQUdFX1BPT0w9eQojIENPTkZJR19GQUlMT1ZFUiBp
cyBub3Qgc2V0CkNPTkZJR19FVEhUT09MX05FVExJTks9eQoKIwojIERldmljZSBEcml2ZXJz
CiMKQ09ORklHX0hBVkVfRUlTQT15CiMgQ09ORklHX0VJU0EgaXMgbm90IHNldApDT05GSUdf
SEFWRV9QQ0k9eQpDT05GSUdfUENJPXkKQ09ORklHX1BDSV9ET01BSU5TPXkKQ09ORklHX1BD
SUVQT1JUQlVTPXkKQ09ORklHX1BDSUVBRVI9eQpDT05GSUdfUENJRUFFUl9JTkpFQ1Q9eQpD
T05GSUdfUENJRV9FQ1JDPXkKQ09ORklHX1BDSUVBU1BNPXkKQ09ORklHX1BDSUVBU1BNX0RF
RkFVTFQ9eQojIENPTkZJR19QQ0lFQVNQTV9QT1dFUlNBVkUgaXMgbm90IHNldAojIENPTkZJ
R19QQ0lFQVNQTV9QT1dFUl9TVVBFUlNBVkUgaXMgbm90IHNldAojIENPTkZJR19QQ0lFQVNQ
TV9QRVJGT1JNQU5DRSBpcyBub3Qgc2V0CkNPTkZJR19QQ0lFX1BNRT15CiMgQ09ORklHX1BD
SUVfRFBDIGlzIG5vdCBzZXQKIyBDT05GSUdfUENJRV9QVE0gaXMgbm90IHNldApDT05GSUdf
UENJX01TST15CkNPTkZJR19QQ0lfTVNJX0lSUV9ET01BSU49eQpDT05GSUdfUENJX1FVSVJL
Uz15CkNPTkZJR19QQ0lfREVCVUc9eQpDT05GSUdfUENJX1JFQUxMT0NfRU5BQkxFX0FVVE89
eQpDT05GSUdfUENJX1NUVUI9eQojIENPTkZJR19QQ0lfUEZfU1RVQiBpcyBub3Qgc2V0CkNP
TkZJR19YRU5fUENJREVWX0ZST05URU5EPXkKQ09ORklHX1BDSV9BVFM9eQpDT05GSUdfUENJ
X0xPQ0tMRVNTX0NPTkZJRz15CkNPTkZJR19QQ0lfSU9WPXkKQ09ORklHX1BDSV9QUkk9eQpD
T05GSUdfUENJX1BBU0lEPXkKQ09ORklHX1BDSV9MQUJFTD15CiMgQ09ORklHX0hPVFBMVUdf
UENJIGlzIG5vdCBzZXQKCiMKIyBQQ0kgY29udHJvbGxlciBkcml2ZXJzCiMKIyBDT05GSUdf
Vk1EIGlzIG5vdCBzZXQKCiMKIyBEZXNpZ25XYXJlIFBDSSBDb3JlIFN1cHBvcnQKIwpDT05G
SUdfUENJRV9EVz15CkNPTkZJR19QQ0lFX0RXX0hPU1Q9eQpDT05GSUdfUENJRV9EV19QTEFU
PXkKQ09ORklHX1BDSUVfRFdfUExBVF9IT1NUPXkKIyBDT05GSUdfUENJX01FU09OIGlzIG5v
dCBzZXQKIyBlbmQgb2YgRGVzaWduV2FyZSBQQ0kgQ29yZSBTdXBwb3J0CgojCiMgTW9iaXZl
aWwgUENJZSBDb3JlIFN1cHBvcnQKIwojIGVuZCBvZiBNb2JpdmVpbCBQQ0llIENvcmUgU3Vw
cG9ydAoKIwojIENhZGVuY2UgUENJZSBjb250cm9sbGVycyBzdXBwb3J0CiMKIyBlbmQgb2Yg
Q2FkZW5jZSBQQ0llIGNvbnRyb2xsZXJzIHN1cHBvcnQKIyBlbmQgb2YgUENJIGNvbnRyb2xs
ZXIgZHJpdmVycwoKIwojIFBDSSBFbmRwb2ludAojCiMgQ09ORklHX1BDSV9FTkRQT0lOVCBp
cyBub3Qgc2V0CiMgZW5kIG9mIFBDSSBFbmRwb2ludAoKIwojIFBDSSBzd2l0Y2ggY29udHJv
bGxlciBkcml2ZXJzCiMKIyBDT05GSUdfUENJX1NXX1NXSVRDSFRFQyBpcyBub3Qgc2V0CiMg
ZW5kIG9mIFBDSSBzd2l0Y2ggY29udHJvbGxlciBkcml2ZXJzCgojIENPTkZJR19DWExfQlVT
IGlzIG5vdCBzZXQKIyBDT05GSUdfUENDQVJEIGlzIG5vdCBzZXQKIyBDT05GSUdfUkFQSURJ
TyBpcyBub3Qgc2V0CgojCiMgR2VuZXJpYyBEcml2ZXIgT3B0aW9ucwojCkNPTkZJR19VRVZF
TlRfSEVMUEVSPXkKQ09ORklHX1VFVkVOVF9IRUxQRVJfUEFUSD0iL3NiaW4vaG90cGx1ZyIK
Q09ORklHX0RFVlRNUEZTPXkKQ09ORklHX0RFVlRNUEZTX01PVU5UPXkKIyBDT05GSUdfU1RB
TkRBTE9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJTEQgaXMg
bm90IHNldAoKIwojIEZpcm13YXJlIGxvYWRlcgojCkNPTkZJR19GV19MT0FERVI9eQpDT05G
SUdfRldfTE9BREVSX1BBR0VEX0JVRj15CkNPTkZJR19FWFRSQV9GSVJNV0FSRT0iIgpDT05G
SUdfRldfTE9BREVSX1VTRVJfSEVMUEVSPXkKQ09ORklHX0ZXX0xPQURFUl9VU0VSX0hFTFBF
Ul9GQUxMQkFDSz15CkNPTkZJR19GV19MT0FERVJfQ09NUFJFU1M9eQpDT05GSUdfRldfQ0FD
SEU9eQojIGVuZCBvZiBGaXJtd2FyZSBsb2FkZXIKCkNPTkZJR19BTExPV19ERVZfQ09SRURV
TVA9eQojIENPTkZJR19ERUJVR19EUklWRVIgaXMgbm90IHNldApDT05GSUdfREVCVUdfREVW
UkVTPXkKIyBDT05GSUdfREVCVUdfVEVTVF9EUklWRVJfUkVNT1ZFIGlzIG5vdCBzZXQKQ09O
RklHX0hNRU1fUkVQT1JUSU5HPXkKIyBDT05GSUdfVEVTVF9BU1lOQ19EUklWRVJfUFJPQkUg
aXMgbm90IHNldApDT05GSUdfU1lTX0hZUEVSVklTT1I9eQpDT05GSUdfR0VORVJJQ19DUFVf
QVVUT1BST0JFPXkKQ09ORklHX0dFTkVSSUNfQ1BVX1ZVTE5FUkFCSUxJVElFUz15CkNPTkZJ
R19SRUdNQVA9eQpDT05GSUdfUkVHTUFQX0kyQz15CkNPTkZJR19ETUFfU0hBUkVEX0JVRkZF
Uj15CiMgQ09ORklHX0RNQV9GRU5DRV9UUkFDRSBpcyBub3Qgc2V0CiMgZW5kIG9mIEdlbmVy
aWMgRHJpdmVyIE9wdGlvbnMKCiMKIyBCdXMgZGV2aWNlcwojCiMgQ09ORklHX01ISV9CVVMg
aXMgbm90IHNldAojIGVuZCBvZiBCdXMgZGV2aWNlcwoKQ09ORklHX0NPTk5FQ1RPUj15CkNP
TkZJR19QUk9DX0VWRU5UUz15CiMgQ09ORklHX0dOU1MgaXMgbm90IHNldAojIENPTkZJR19N
VEQgaXMgbm90IHNldAojIENPTkZJR19PRiBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX01JR0hU
X0hBVkVfUENfUEFSUE9SVD15CiMgQ09ORklHX1BBUlBPUlQgaXMgbm90IHNldApDT05GSUdf
UE5QPXkKQ09ORklHX1BOUF9ERUJVR19NRVNTQUdFUz15CgojCiMgUHJvdG9jb2xzCiMKQ09O
RklHX1BOUEFDUEk9eQpDT05GSUdfQkxLX0RFVj15CiMgQ09ORklHX0JMS19ERVZfTlVMTF9C
TEsgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX0ZEIGlzIG5vdCBzZXQKQ09ORklHX0NE
Uk9NPXkKIyBDT05GSUdfQkxLX0RFVl9QQ0lFU1NEX01USVAzMlhYIGlzIG5vdCBzZXQKQ09O
RklHX0JMS19ERVZfTE9PUD15CkNPTkZJR19CTEtfREVWX0xPT1BfTUlOX0NPVU5UPTgKIyBD
T05GSUdfQkxLX0RFVl9DUllQVE9MT09QIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9E
UkJEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9OQkQgaXMgbm90IHNldAojIENPTkZJ
R19CTEtfREVWX1NYOCBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX1JBTT15CkNPTkZJR19C
TEtfREVWX1JBTV9DT1VOVD0xNgpDT05GSUdfQkxLX0RFVl9SQU1fU0laRT0xNjM4NAojIENP
TkZJR19DRFJPTV9QS1RDRFZEIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRBX09WRVJfRVRIIGlz
IG5vdCBzZXQKQ09ORklHX1hFTl9CTEtERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX0JMS0RF
Vl9CQUNLRU5EPXkKIyBDT05GSUdfQkxLX0RFVl9SQkQgaXMgbm90IHNldAojIENPTkZJR19C
TEtfREVWX1JTWFggaXMgbm90IHNldAoKIwojIE5WTUUgU3VwcG9ydAojCiMgQ09ORklHX0JM
S19ERVZfTlZNRSBpcyBub3Qgc2V0CiMgQ09ORklHX05WTUVfRkMgaXMgbm90IHNldAojIENP
TkZJR19OVk1FX1RDUCBpcyBub3Qgc2V0CiMgZW5kIG9mIE5WTUUgU3VwcG9ydAoKIwojIE1p
c2MgZGV2aWNlcwojCiMgQ09ORklHX0FENTI1WF9EUE9UIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFVNTVlfSVJRIGlzIG5vdCBzZXQKIyBDT05GSUdfSUJNX0FTTSBpcyBub3Qgc2V0CiMgQ09O
RklHX1BIQU5UT00gaXMgbm90IHNldAojIENPTkZJR19USUZNX0NPUkUgaXMgbm90IHNldAoj
IENPTkZJR19JQ1M5MzJTNDAxIGlzIG5vdCBzZXQKIyBDT05GSUdfRU5DTE9TVVJFX1NFUlZJ
Q0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfSFBfSUxPIGlzIG5vdCBzZXQKIyBDT05GSUdfQVBE
Uzk4MDJBTFMgaXMgbm90IHNldAojIENPTkZJR19JU0wyOTAwMyBpcyBub3Qgc2V0CiMgQ09O
RklHX0lTTDI5MDIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19UU0wyNTUwIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19CSDE3NzAgaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX0FQRFM5OTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfSE1DNjM1MiBpcyBub3Qgc2V0CiMg
Q09ORklHX0RTMTY4MiBpcyBub3Qgc2V0CiMgQ09ORklHX0xBVFRJQ0VfRUNQM19DT05GSUcg
aXMgbm90IHNldAojIENPTkZJR19TUkFNIGlzIG5vdCBzZXQKIyBDT05GSUdfRFdfWERBVEFf
UENJRSBpcyBub3Qgc2V0CiMgQ09ORklHX1BDSV9FTkRQT0lOVF9URVNUIGlzIG5vdCBzZXQK
IyBDT05GSUdfWElMSU5YX1NERkVDIGlzIG5vdCBzZXQKIyBDT05GSUdfQzJQT1JUIGlzIG5v
dCBzZXQKCiMKIyBFRVBST00gc3VwcG9ydAojCiMgQ09ORklHX0VFUFJPTV9BVDI0IGlzIG5v
dCBzZXQKIyBDT05GSUdfRUVQUk9NX0FUMjUgaXMgbm90IHNldAojIENPTkZJR19FRVBST01f
TEVHQUNZIGlzIG5vdCBzZXQKIyBDT05GSUdfRUVQUk9NX01BWDY4NzUgaXMgbm90IHNldApD
T05GSUdfRUVQUk9NXzkzQ1g2PXkKIyBDT05GSUdfRUVQUk9NXzkzWFg0NiBpcyBub3Qgc2V0
CiMgQ09ORklHX0VFUFJPTV9JRFRfODlIUEVTWCBpcyBub3Qgc2V0CiMgQ09ORklHX0VFUFJP
TV9FRTEwMDQgaXMgbm90IHNldAojIGVuZCBvZiBFRVBST00gc3VwcG9ydAoKIyBDT05GSUdf
Q0I3MTBfQ09SRSBpcyBub3Qgc2V0CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgc2hhcmVkIHRy
YW5zcG9ydCBsaW5lIGRpc2NpcGxpbmUKIwojIENPTkZJR19USV9TVCBpcyBub3Qgc2V0CiMg
ZW5kIG9mIFRleGFzIEluc3RydW1lbnRzIHNoYXJlZCB0cmFuc3BvcnQgbGluZSBkaXNjaXBs
aW5lCgojIENPTkZJR19TRU5TT1JTX0xJUzNfSTJDIGlzIG5vdCBzZXQKQ09ORklHX0FMVEVS
QV9TVEFQTD15CiMgQ09ORklHX0lOVEVMX01FSSBpcyBub3Qgc2V0CiMgQ09ORklHX0lOVEVM
X01FSV9NRSBpcyBub3Qgc2V0CiMgQ09ORklHX0lOVEVMX01FSV9UWEUgaXMgbm90IHNldAoj
IENPTkZJR19WTVdBUkVfVk1DSSBpcyBub3Qgc2V0CiMgQ09ORklHX0dFTldRRSBpcyBub3Qg
c2V0CiMgQ09ORklHX0VDSE8gaXMgbm90IHNldAojIENPTkZJR19CQ01fVksgaXMgbm90IHNl
dAojIENPTkZJR19NSVNDX0FMQ09SX1BDSSBpcyBub3Qgc2V0CiMgQ09ORklHX01JU0NfUlRT
WF9QQ0kgaXMgbm90IHNldAojIENPTkZJR19NSVNDX1JUU1hfVVNCIGlzIG5vdCBzZXQKIyBD
T05GSUdfSEFCQU5BX0FJIGlzIG5vdCBzZXQKIyBDT05GSUdfVUFDQ0UgaXMgbm90IHNldAoj
IENPTkZJR19QVlBBTklDIGlzIG5vdCBzZXQKIyBlbmQgb2YgTWlzYyBkZXZpY2VzCgpDT05G
SUdfSEFWRV9JREU9eQojIENPTkZJR19JREUgaXMgbm90IHNldAoKIwojIFNDU0kgZGV2aWNl
IHN1cHBvcnQKIwpDT05GSUdfU0NTSV9NT0Q9eQojIENPTkZJR19SQUlEX0FUVFJTIGlzIG5v
dCBzZXQKQ09ORklHX1NDU0k9eQpDT05GSUdfU0NTSV9ETUE9eQpDT05GSUdfU0NTSV9QUk9D
X0ZTPXkKCiMKIyBTQ1NJIHN1cHBvcnQgdHlwZSAoZGlzaywgdGFwZSwgQ0QtUk9NKQojCkNP
TkZJR19CTEtfREVWX1NEPXkKIyBDT05GSUdfQ0hSX0RFVl9TVCBpcyBub3Qgc2V0CkNPTkZJ
R19CTEtfREVWX1NSPXkKQ09ORklHX0NIUl9ERVZfU0c9eQojIENPTkZJR19DSFJfREVWX1ND
SCBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJX0NPTlNUQU5UUz15CiMgQ09ORklHX1NDU0lfTE9H
R0lORyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfU0NBTl9BU1lOQyBpcyBub3Qgc2V0Cgoj
CiMgU0NTSSBUcmFuc3BvcnRzCiMKQ09ORklHX1NDU0lfU1BJX0FUVFJTPXkKIyBDT05GSUdf
U0NTSV9GQ19BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfSVNDU0lfQVRUUlMgaXMg
bm90IHNldAojIENPTkZJR19TQ1NJX1NBU19BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1ND
U0lfU0FTX0xJQlNBUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfU1JQX0FUVFJTIGlzIG5v
dCBzZXQKIyBlbmQgb2YgU0NTSSBUcmFuc3BvcnRzCgojIENPTkZJR19TQ1NJX0xPV0xFVkVM
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9ESCBpcyBub3Qgc2V0CiMgZW5kIG9mIFNDU0kg
ZGV2aWNlIHN1cHBvcnQKCkNPTkZJR19BVEE9eQpDT05GSUdfU0FUQV9IT1NUPXkKQ09ORklH
X1BBVEFfVElNSU5HUz15CkNPTkZJR19BVEFfVkVSQk9TRV9FUlJPUj15CkNPTkZJR19BVEFf
Rk9SQ0U9eQpDT05GSUdfQVRBX0FDUEk9eQojIENPTkZJR19TQVRBX1pQT0REIGlzIG5vdCBz
ZXQKQ09ORklHX1NBVEFfUE1QPXkKCiMKIyBDb250cm9sbGVycyB3aXRoIG5vbi1TRkYgbmF0
aXZlIGludGVyZmFjZQojCkNPTkZJR19TQVRBX0FIQ0k9eQpDT05GSUdfU0FUQV9NT0JJTEVf
TFBNX1BPTElDWT0wCkNPTkZJR19TQVRBX0FIQ0lfUExBVEZPUk09eQojIENPTkZJR19TQVRB
X0lOSUMxNjJYIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FUQV9BQ0FSRF9BSENJIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0FUQV9TSUwyNCBpcyBub3Qgc2V0CiMgQ09ORklHX0FUQV9TRkYgaXMg
bm90IHNldApDT05GSUdfTUQ9eQojIENPTkZJR19CTEtfREVWX01EIGlzIG5vdCBzZXQKQ09O
RklHX0JDQUNIRT15CiMgQ09ORklHX0JDQUNIRV9ERUJVRyBpcyBub3Qgc2V0CiMgQ09ORklH
X0JDQUNIRV9DTE9TVVJFU19ERUJVRyBpcyBub3Qgc2V0CiMgQ09ORklHX0JDQUNIRV9BU1lO
Q19SRUdJU1RSQVRJT04gaXMgbm90IHNldApDT05GSUdfQkxLX0RFVl9ETV9CVUlMVElOPXkK
Q09ORklHX0JMS19ERVZfRE09eQpDT05GSUdfRE1fREVCVUc9eQpDT05GSUdfRE1fQlVGSU89
eQojIENPTkZJR19ETV9ERUJVR19CTE9DS19NQU5BR0VSX0xPQ0tJTkcgaXMgbm90IHNldApD
T05GSUdfRE1fQklPX1BSSVNPTj15CkNPTkZJR19ETV9QRVJTSVNURU5UX0RBVEE9eQojIENP
TkZJR19ETV9VTlNUUklQRUQgaXMgbm90IHNldApDT05GSUdfRE1fQ1JZUFQ9eQpDT05GSUdf
RE1fU05BUFNIT1Q9eQojIENPTkZJR19ETV9USElOX1BST1ZJU0lPTklORyBpcyBub3Qgc2V0
CkNPTkZJR19ETV9DQUNIRT15CkNPTkZJR19ETV9DQUNIRV9TTVE9eQojIENPTkZJR19ETV9X
UklURUNBQ0hFIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1fRUJTIGlzIG5vdCBzZXQKIyBDT05G
SUdfRE1fRVJBIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1fQ0xPTkUgaXMgbm90IHNldApDT05G
SUdfRE1fTUlSUk9SPXkKIyBDT05GSUdfRE1fTE9HX1VTRVJTUEFDRSBpcyBub3Qgc2V0CiMg
Q09ORklHX0RNX1JBSUQgaXMgbm90IHNldApDT05GSUdfRE1fWkVSTz15CiMgQ09ORklHX0RN
X01VTFRJUEFUSCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX0RFTEFZIGlzIG5vdCBzZXQKIyBD
T05GSUdfRE1fRFVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX0lOSVQgaXMgbm90IHNldAoj
IENPTkZJR19ETV9VRVZFTlQgaXMgbm90IHNldAojIENPTkZJR19ETV9GTEFLRVkgaXMgbm90
IHNldAojIENPTkZJR19ETV9WRVJJVFkgaXMgbm90IHNldAojIENPTkZJR19ETV9TV0lUQ0gg
aXMgbm90IHNldAojIENPTkZJR19ETV9MT0dfV1JJVEVTIGlzIG5vdCBzZXQKIyBDT05GSUdf
RE1fSU5URUdSSVRZIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFSR0VUX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19GVVNJT04gaXMgbm90IHNldAoKIwojIElFRUUgMTM5NCAoRmlyZVdpcmUp
IHN1cHBvcnQKIwojIENPTkZJR19GSVJFV0lSRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZJUkVX
SVJFX05PU1kgaXMgbm90IHNldAojIGVuZCBvZiBJRUVFIDEzOTQgKEZpcmVXaXJlKSBzdXBw
b3J0CgojIENPTkZJR19NQUNJTlRPU0hfRFJJVkVSUyBpcyBub3Qgc2V0CkNPTkZJR19ORVRE
RVZJQ0VTPXkKQ09ORklHX05FVF9DT1JFPXkKIyBDT05GSUdfQk9ORElORyBpcyBub3Qgc2V0
CiMgQ09ORklHX0RVTU1ZIGlzIG5vdCBzZXQKIyBDT05GSUdfV0lSRUdVQVJEIGlzIG5vdCBz
ZXQKIyBDT05GSUdfRVFVQUxJWkVSIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0ZDIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUZCIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1RFQU0gaXMgbm90
IHNldAojIENPTkZJR19NQUNWTEFOIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBWTEFOIGlzIG5v
dCBzZXQKIyBDT05GSUdfVlhMQU4gaXMgbm90IHNldAojIENPTkZJR19HRU5FVkUgaXMgbm90
IHNldAojIENPTkZJR19CQVJFVURQIGlzIG5vdCBzZXQKIyBDT05GSUdfR1RQIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUFDU0VDIGlzIG5vdCBzZXQKQ09ORklHX05FVENPTlNPTEU9eQpDT05G
SUdfTkVUUE9MTD15CkNPTkZJR19ORVRfUE9MTF9DT05UUk9MTEVSPXkKQ09ORklHX1RVTj15
CiMgQ09ORklHX1RVTl9WTkVUX0NST1NTX0xFIGlzIG5vdCBzZXQKQ09ORklHX1ZFVEg9eQoj
IENPTkZJR19OTE1PTiBpcyBub3Qgc2V0CiMgQ09ORklHX0FSQ05FVCBpcyBub3Qgc2V0CkNP
TkZJR19FVEhFUk5FVD15CiMgQ09ORklHX05FVF9WRU5ET1JfM0NPTSBpcyBub3Qgc2V0CiMg
Q09ORklHX05FVF9WRU5ET1JfQURBUFRFQyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9S
X0FHRVJFPXkKIyBDT05GSUdfRVQxMzFYIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1Jf
QUxBQ1JJVEVDSD15CiMgQ09ORklHX1NMSUNPU1MgaXMgbm90IHNldAojIENPTkZJR19ORVRf
VkVORE9SX0FMVEVPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0FMVEVSQV9UU0UgaXMgbm90IHNl
dApDT05GSUdfTkVUX1ZFTkRPUl9BTUFaT049eQojIENPTkZJR19FTkFfRVRIRVJORVQgaXMg
bm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0FNRCBpcyBub3Qgc2V0CkNPTkZJR19ORVRf
VkVORE9SX0FRVUFOVElBPXkKIyBDT05GSUdfQVFUSU9OIGlzIG5vdCBzZXQKQ09ORklHX05F
VF9WRU5ET1JfQVJDPXkKIyBDT05GSUdfTkVUX1ZFTkRPUl9BVEhFUk9TIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1ZFTkRPUl9CUk9BRENPTSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9W
RU5ET1JfQlJPQ0FERSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfQ0FERU5DRSBp
cyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX0NBVklVTT15CiMgQ09ORklHX1RIVU5ERVJf
TklDX1BGIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhVTkRFUl9OSUNfVkYgaXMgbm90IHNldAoj
IENPTkZJR19USFVOREVSX05JQ19CR1ggaXMgbm90IHNldAojIENPTkZJR19USFVOREVSX05J
Q19SR1ggaXMgbm90IHNldApDT05GSUdfQ0FWSVVNX1BUUD15CiMgQ09ORklHX0xJUVVJRElP
IGlzIG5vdCBzZXQKIyBDT05GSUdfTElRVUlESU9fVkYgaXMgbm90IHNldAojIENPTkZJR19O
RVRfVkVORE9SX0NIRUxTSU8gaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0NJU0NP
IGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfQ09SVElOQT15CiMgQ09ORklHX0NYX0VD
QVQgaXMgbm90IHNldAojIENPTkZJR19ETkVUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZF
TkRPUl9ERUMgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0RMSU5LIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9FTVVMRVggaXMgbm90IHNldApDT05GSUdfTkVUX1ZF
TkRPUl9FWkNISVA9eQojIENPTkZJR19ORVRfVkVORE9SX0dPT0dMRSBpcyBub3Qgc2V0CkNP
TkZJR19ORVRfVkVORE9SX0hVQVdFST15CiMgQ09ORklHX0hJTklDIGlzIG5vdCBzZXQKQ09O
RklHX05FVF9WRU5ET1JfSTgyNVhYPXkKQ09ORklHX05FVF9WRU5ET1JfSU5URUw9eQojIENP
TkZJR19FMTAwIGlzIG5vdCBzZXQKQ09ORklHX0UxMDAwPXkKQ09ORklHX0UxMDAwRT15CkNP
TkZJR19FMTAwMEVfSFdUUz15CkNPTkZJR19JR0I9eQpDT05GSUdfSUdCX0hXTU9OPXkKQ09O
RklHX0lHQlZGPXkKIyBDT05GSUdfSVhHQiBpcyBub3Qgc2V0CiMgQ09ORklHX0lYR0JFIGlz
IG5vdCBzZXQKIyBDT05GSUdfSVhHQkVWRiBpcyBub3Qgc2V0CiMgQ09ORklHX0k0MEUgaXMg
bm90IHNldAojIENPTkZJR19JNDBFVkYgaXMgbm90IHNldAojIENPTkZJR19JQ0UgaXMgbm90
IHNldAojIENPTkZJR19GTTEwSyBpcyBub3Qgc2V0CiMgQ09ORklHX0lHQyBpcyBub3Qgc2V0
CkNPTkZJR19ORVRfVkVORE9SX01JQ1JPU09GVD15CiMgQ09ORklHX0pNRSBpcyBub3Qgc2V0
CiMgQ09ORklHX05FVF9WRU5ET1JfTUFSVkVMTCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9W
RU5ET1JfTUVMTEFOT1ggaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX01JQ1JFTCBp
cyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX01JQ1JPQ0hJUD15CiMgQ09ORklHX0VOQzI4
SjYwIGlzIG5vdCBzZXQKIyBDT05GSUdfRU5DWDI0SjYwMCBpcyBub3Qgc2V0CiMgQ09ORklH
X0xBTjc0M1ggaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRPUl9NSUNST1NFTUk9eQojIENP
TkZJR19ORVRfVkVORE9SX01ZUkkgaXMgbm90IHNldAojIENPTkZJR19GRUFMTlggaXMgbm90
IHNldAojIENPTkZJR19ORVRfVkVORE9SX05BVFNFTUkgaXMgbm90IHNldAojIENPTkZJR19O
RVRfVkVORE9SX05FVEVSSU9OIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfTkVUUk9O
T01FPXkKIyBDT05GSUdfTkZQIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfTkk9eQoj
IENPTkZJR19OSV9YR0VfTUFOQUdFTUVOVF9FTkVUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVU
X1ZFTkRPUl9OVklESUEgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX09LSSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0VUSE9DIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9Q
QUNLRVRfRU5HSU5FUyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfUEVOU0FORE8g
aXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1FMT0dJQyBpcyBub3Qgc2V0CkNPTkZJ
R19ORVRfVkVORE9SX1FVQUxDT01NPXkKIyBDT05GSUdfUUNPTV9FTUFDIGlzIG5vdCBzZXQK
IyBDT05GSUdfUk1ORVQgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1JEQyBpcyBu
b3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQojIENPTkZJR184MTM5Q1AgaXMg
bm90IHNldAojIENPTkZJR184MTM5VE9PIGlzIG5vdCBzZXQKQ09ORklHX1I4MTY5PXkKQ09O
RklHX05FVF9WRU5ET1JfUkVORVNBUz15CkNPTkZJR19ORVRfVkVORE9SX1JPQ0tFUj15CkNP
TkZJR19ORVRfVkVORE9SX1NBTVNVTkc9eQojIENPTkZJR19TWEdCRV9FVEggaXMgbm90IHNl
dApDT05GSUdfTkVUX1ZFTkRPUl9TRUVRPXkKQ09ORklHX05FVF9WRU5ET1JfU09MQVJGTEFS
RT15CiMgQ09ORklHX1NGQyBpcyBub3Qgc2V0CiMgQ09ORklHX1NGQ19GQUxDT04gaXMgbm90
IHNldApDT05GSUdfTkVUX1ZFTkRPUl9TSUxBTj15CiMgQ09ORklHX1NDOTIwMzEgaXMgbm90
IHNldAojIENPTkZJR19ORVRfVkVORE9SX1NJUyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9W
RU5ET1JfU01TQyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX1NPQ0lPTkVYVD15CiMg
Q09ORklHX05FVF9WRU5ET1JfU1RNSUNSTyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5E
T1JfU1VOIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfU1lOT1BTWVM9eQojIENPTkZJ
R19EV0NfWExHTUFDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9URUhVVEkgaXMg
bm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1RJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVU
X1ZFTkRPUl9WSUEgaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRPUl9XSVpORVQ9eQojIENP
TkZJR19XSVpORVRfVzUxMDAgaXMgbm90IHNldAojIENPTkZJR19XSVpORVRfVzUzMDAgaXMg
bm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1hJTElOWCBpcyBub3Qgc2V0CiMgQ09ORklH
X0ZEREkgaXMgbm90IHNldAojIENPTkZJR19ISVBQSSBpcyBub3Qgc2V0CiMgQ09ORklHX05F
VF9TQjEwMDAgaXMgbm90IHNldApDT05GSUdfUEhZTElCPXkKIyBDT05GSUdfTEVEX1RSSUdH
RVJfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfRklYRURfUEhZIGlzIG5vdCBzZXQKCiMKIyBN
SUkgUEhZIGRldmljZSBkcml2ZXJzCiMKIyBDT05GSUdfQU1EX1BIWSBpcyBub3Qgc2V0CiMg
Q09ORklHX0FESU5fUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQVFVQU5USUFfUEhZIGlzIG5v
dCBzZXQKIyBDT05GSUdfQVg4ODc5NkJfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJPQURD
T01fUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQkNNNTQxNDBfUEhZIGlzIG5vdCBzZXQKIyBD
T05GSUdfQkNNN1hYWF9QSFkgaXMgbm90IHNldAojIENPTkZJR19CQ004NDg4MV9QSFkgaXMg
bm90IHNldAojIENPTkZJR19CQ004N1hYX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0NJQ0FE
QV9QSFkgaXMgbm90IHNldAojIENPTkZJR19DT1JUSU5BX1BIWSBpcyBub3Qgc2V0CiMgQ09O
RklHX0RBVklDT01fUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfSUNQTFVTX1BIWSBpcyBub3Qg
c2V0CiMgQ09ORklHX0xYVF9QSFkgaXMgbm90IHNldAojIENPTkZJR19JTlRFTF9YV0FZX1BI
WSBpcyBub3Qgc2V0CiMgQ09ORklHX0xTSV9FVDEwMTFDX1BIWSBpcyBub3Qgc2V0CiMgQ09O
RklHX01BUlZFTExfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFSVkVMTF8xMEdfUEhZIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUFSVkVMTF84OFgyMjIyX1BIWSBpcyBub3Qgc2V0CiMgQ09O
RklHX01JQ1JFTF9QSFkgaXMgbm90IHNldAojIENPTkZJR19NSUNST0NISVBfUEhZIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUlDUk9DSElQX1QxX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX01J
Q1JPU0VNSV9QSFkgaXMgbm90IHNldAojIENPTkZJR19OQVRJT05BTF9QSFkgaXMgbm90IHNl
dAojIENPTkZJR19OWFBfQzQ1X1RKQTExWFhfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfTlhQ
X1RKQTExWFhfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfUVNFTUlfUEhZIGlzIG5vdCBzZXQK
Q09ORklHX1JFQUxURUtfUEhZPXkKIyBDT05GSUdfUkVORVNBU19QSFkgaXMgbm90IHNldAoj
IENPTkZJR19ST0NLQ0hJUF9QSFkgaXMgbm90IHNldAojIENPTkZJR19TTVNDX1BIWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NURTEwWFAgaXMgbm90IHNldAojIENPTkZJR19URVJBTkVUSUNT
X1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0RQODM4MjJfUEhZIGlzIG5vdCBzZXQKIyBDT05G
SUdfRFA4M1RDODExX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0RQODM4NDhfUEhZIGlzIG5v
dCBzZXQKIyBDT05GSUdfRFA4Mzg2N19QSFkgaXMgbm90IHNldAojIENPTkZJR19EUDgzODY5
X1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJVEVTU0VfUEhZIGlzIG5vdCBzZXQKIyBDT05G
SUdfWElMSU5YX0dNSUkyUkdNSUkgaXMgbm90IHNldAojIENPTkZJR19NSUNSRUxfS1M4OTk1
TUEgaXMgbm90IHNldApDT05GSUdfTURJT19ERVZJQ0U9eQpDT05GSUdfTURJT19CVVM9eQpD
T05GSUdfTURJT19ERVZSRVM9eQojIENPTkZJR19NRElPX0JJVEJBTkcgaXMgbm90IHNldAoj
IENPTkZJR19NRElPX0JDTV9VTklNQUMgaXMgbm90IHNldAojIENPTkZJR19NRElPX01WVVNC
IGlzIG5vdCBzZXQKIyBDT05GSUdfTURJT19NU0NDX01JSU0gaXMgbm90IHNldAojIENPTkZJ
R19NRElPX1RIVU5ERVIgaXMgbm90IHNldAoKIwojIE1ESU8gTXVsdGlwbGV4ZXJzCiMKCiMK
IyBQQ1MgZGV2aWNlIGRyaXZlcnMKIwojIENPTkZJR19QQ1NfWFBDUyBpcyBub3Qgc2V0CiMg
ZW5kIG9mIFBDUyBkZXZpY2UgZHJpdmVycwoKQ09ORklHX1BQUD15CkNPTkZJR19QUFBfQlNE
Q09NUD15CkNPTkZJR19QUFBfREVGTEFURT15CkNPTkZJR19QUFBfRklMVEVSPXkKQ09ORklH
X1BQUF9NUFBFPXkKQ09ORklHX1BQUF9NVUxUSUxJTks9eQpDT05GSUdfUFBQT0U9eQojIENP
TkZJR19QUFBfQVNZTkMgaXMgbm90IHNldApDT05GSUdfUFBQX1NZTkNfVFRZPXkKIyBDT05G
SUdfU0xJUCBpcyBub3Qgc2V0CkNPTkZJR19TTEhDPXkKQ09ORklHX1VTQl9ORVRfRFJJVkVS
Uz15CiMgQ09ORklHX1VTQl9DQVRDIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0tBV0VUSCBp
cyBub3Qgc2V0CiMgQ09ORklHX1VTQl9QRUdBU1VTIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNC
X1JUTDgxNTAgaXMgbm90IHNldAojIENPTkZJR19VU0JfUlRMODE1MiBpcyBub3Qgc2V0CiMg
Q09ORklHX1VTQl9MQU43OFhYIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1VTQk5FVCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1VTQl9JUEhFVEggaXMgbm90IHNldApDT05GSUdfV0xBTj15CiMg
Q09ORklHX1dMQU5fVkVORE9SX0FETVRFSyBpcyBub3Qgc2V0CkNPTkZJR19BVEhfQ09NTU9O
PXkKQ09ORklHX1dMQU5fVkVORE9SX0FUSD15CiMgQ09ORklHX0FUSF9ERUJVRyBpcyBub3Qg
c2V0CiMgQ09ORklHX0FUSDVLIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRINUtfUENJIGlzIG5v
dCBzZXQKQ09ORklHX0FUSDlLX0hXPXkKQ09ORklHX0FUSDlLX0NPTU1PTj15CiMgQ09ORklH
X0FUSDlLX0JUQ09FWF9TVVBQT1JUIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRIOUsgaXMgbm90
IHNldApDT05GSUdfQVRIOUtfSFRDPXkKIyBDT05GSUdfQVRIOUtfSFRDX0RFQlVHRlMgaXMg
bm90IHNldAojIENPTkZJR19DQVJMOTE3MCBpcyBub3Qgc2V0CiMgQ09ORklHX0FUSDZLTCBp
cyBub3Qgc2V0CiMgQ09ORklHX0FSNTUyMyBpcyBub3Qgc2V0CiMgQ09ORklHX1dJTDYyMTAg
aXMgbm90IHNldAojIENPTkZJR19BVEgxMEsgaXMgbm90IHNldAojIENPTkZJR19XQ04zNlhY
IGlzIG5vdCBzZXQKIyBDT05GSUdfV0xBTl9WRU5ET1JfQVRNRUwgaXMgbm90IHNldAojIENP
TkZJR19XTEFOX1ZFTkRPUl9CUk9BRENPTSBpcyBub3Qgc2V0CiMgQ09ORklHX1dMQU5fVkVO
RE9SX0NJU0NPIGlzIG5vdCBzZXQKIyBDT05GSUdfV0xBTl9WRU5ET1JfSU5URUwgaXMgbm90
IHNldAojIENPTkZJR19XTEFOX1ZFTkRPUl9JTlRFUlNJTCBpcyBub3Qgc2V0CiMgQ09ORklH
X1dMQU5fVkVORE9SX01BUlZFTEwgaXMgbm90IHNldAojIENPTkZJR19XTEFOX1ZFTkRPUl9N
RURJQVRFSyBpcyBub3Qgc2V0CiMgQ09ORklHX1dMQU5fVkVORE9SX01JQ1JPQ0hJUCBpcyBu
b3Qgc2V0CkNPTkZJR19XTEFOX1ZFTkRPUl9SQUxJTks9eQpDT05GSUdfUlQyWDAwPXkKIyBD
T05GSUdfUlQyNDAwUENJIGlzIG5vdCBzZXQKIyBDT05GSUdfUlQyNTAwUENJIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlQ2MVBDSSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUMjgwMFBDSSBpcyBu
b3Qgc2V0CkNPTkZJR19SVDI1MDBVU0I9eQpDT05GSUdfUlQ3M1VTQj15CkNPTkZJR19SVDI4
MDBVU0I9eQpDT05GSUdfUlQyODAwVVNCX1JUMzNYWD15CkNPTkZJR19SVDI4MDBVU0JfUlQz
NVhYPXkKQ09ORklHX1JUMjgwMFVTQl9SVDM1NzM9eQpDT05GSUdfUlQyODAwVVNCX1JUNTNY
WD15CkNPTkZJR19SVDI4MDBVU0JfUlQ1NVhYPXkKQ09ORklHX1JUMjgwMFVTQl9VTktOT1dO
PXkKQ09ORklHX1JUMjgwMF9MSUI9eQpDT05GSUdfUlQyWDAwX0xJQl9VU0I9eQpDT05GSUdf
UlQyWDAwX0xJQj15CkNPTkZJR19SVDJYMDBfTElCX0ZJUk1XQVJFPXkKQ09ORklHX1JUMlgw
MF9MSUJfQ1JZUFRPPXkKQ09ORklHX1JUMlgwMF9MSUJfTEVEUz15CiMgQ09ORklHX1JUMlgw
MF9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19XTEFOX1ZFTkRPUl9SRUFMVEVLPXkKQ09ORklH
X1JUTDgxODA9eQpDT05GSUdfUlRMODE4Nz15CkNPTkZJR19SVEw4MTg3X0xFRFM9eQpDT05G
SUdfUlRMX0NBUkRTPXkKQ09ORklHX1JUTDgxOTJDRT15CkNPTkZJR19SVEw4MTkyU0U9eQpD
T05GSUdfUlRMODE5MkRFPXkKIyBDT05GSUdfUlRMODcyM0FFIGlzIG5vdCBzZXQKQ09ORklH
X1JUTDg3MjNCRT15CkNPTkZJR19SVEw4MTg4RUU9eQpDT05GSUdfUlRMODE5MkVFPXkKQ09O
RklHX1JUTDg4MjFBRT15CkNPTkZJR19SVEw4MTkyQ1U9eQpDT05GSUdfUlRMV0lGST15CkNP
TkZJR19SVExXSUZJX1BDST15CkNPTkZJR19SVExXSUZJX1VTQj15CiMgQ09ORklHX1JUTFdJ
RklfREVCVUcgaXMgbm90IHNldApDT05GSUdfUlRMODE5MkNfQ09NTU9OPXkKQ09ORklHX1JU
TDg3MjNfQ09NTU9OPXkKQ09ORklHX1JUTEJUQ09FWElTVD15CkNPTkZJR19SVEw4WFhYVT15
CiMgQ09ORklHX1JUTDhYWFhVX1VOVEVTVEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRXODgg
aXMgbm90IHNldAojIENPTkZJR19XTEFOX1ZFTkRPUl9SU0kgaXMgbm90IHNldAojIENPTkZJ
R19XTEFOX1ZFTkRPUl9TVCBpcyBub3Qgc2V0CiMgQ09ORklHX1dMQU5fVkVORE9SX1RJIGlz
IG5vdCBzZXQKIyBDT05GSUdfV0xBTl9WRU5ET1JfWllEQVMgaXMgbm90IHNldApDT05GSUdf
V0xBTl9WRU5ET1JfUVVBTlRFTk5BPXkKIyBDT05GSUdfUVRORk1BQ19QQ0lFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUFDODAyMTFfSFdTSU0gaXMgbm90IHNldAojIENPTkZJR19VU0JfTkVU
X1JORElTX1dMQU4gaXMgbm90IHNldAojIENPTkZJR19WSVJUX1dJRkkgaXMgbm90IHNldAoj
IENPTkZJR19XQU4gaXMgbm90IHNldAojIENPTkZJR19XV0FOIGlzIG5vdCBzZXQKQ09ORklH
X1hFTl9ORVRERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX05FVERFVl9CQUNLRU5EPXkKIyBD
T05GSUdfVk1YTkVUMyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZVSklUU1VfRVMgaXMgbm90IHNl
dAojIENPTkZJR19ORVRERVZTSU0gaXMgbm90IHNldAojIENPTkZJR19ORVRfRkFJTE9WRVIg
aXMgbm90IHNldAojIENPTkZJR19JU0ROIGlzIG5vdCBzZXQKIyBDT05GSUdfTlZNIGlzIG5v
dCBzZXQKCiMKIyBJbnB1dCBkZXZpY2Ugc3VwcG9ydAojCkNPTkZJR19JTlBVVD15CkNPTkZJ
R19JTlBVVF9MRURTPXkKQ09ORklHX0lOUFVUX0ZGX01FTUxFU1M9eQpDT05GSUdfSU5QVVRf
U1BBUlNFS01BUD15CiMgQ09ORklHX0lOUFVUX01BVFJJWEtNQVAgaXMgbm90IHNldAoKIwoj
IFVzZXJsYW5kIGludGVyZmFjZXMKIwpDT05GSUdfSU5QVVRfTU9VU0VERVY9eQojIENPTkZJ
R19JTlBVVF9NT1VTRURFVl9QU0FVWCBpcyBub3Qgc2V0CkNPTkZJR19JTlBVVF9NT1VTRURF
Vl9TQ1JFRU5fWD0xMDI0CkNPTkZJR19JTlBVVF9NT1VTRURFVl9TQ1JFRU5fWT03NjgKIyBD
T05GSUdfSU5QVVRfSk9ZREVWIGlzIG5vdCBzZXQKQ09ORklHX0lOUFVUX0VWREVWPXkKIyBD
T05GSUdfSU5QVVRfRVZCVUcgaXMgbm90IHNldAoKIwojIElucHV0IERldmljZSBEcml2ZXJz
CiMKQ09ORklHX0lOUFVUX0tFWUJPQVJEPXkKIyBDT05GSUdfS0VZQk9BUkRfQURQNTU4OCBp
cyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX0FEUDU1ODkgaXMgbm90IHNldApDT05GSUdf
S0VZQk9BUkRfQVRLQkQ9eQojIENPTkZJR19LRVlCT0FSRF9RVDEwNTAgaXMgbm90IHNldAoj
IENPTkZJR19LRVlCT0FSRF9RVDEwNzAgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9R
VDIxNjAgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9ETElOS19ESVI2ODUgaXMgbm90
IHNldAojIENPTkZJR19LRVlCT0FSRF9MS0tCRCBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJP
QVJEX0dQSU8gaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9HUElPX1BPTExFRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1RDQTY0MTYgaXMgbm90IHNldAojIENPTkZJR19L
RVlCT0FSRF9UQ0E4NDE4IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTUFUUklYIGlz
IG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTE04MzIzIGlzIG5vdCBzZXQKIyBDT05GSUdf
S0VZQk9BUkRfTE04MzMzIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTUFYNzM1OSBp
cyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX01DUyBpcyBub3Qgc2V0CiMgQ09ORklHX0tF
WUJPQVJEX01QUjEyMSBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX05FV1RPTiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX09QRU5DT1JFUyBpcyBub3Qgc2V0CiMgQ09ORklH
X0tFWUJPQVJEX1NBTVNVTkcgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9TVE9XQVdB
WSBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1NVTktCRCBpcyBub3Qgc2V0CiMgQ09O
RklHX0tFWUJPQVJEX1RNMl9UT1VDSEtFWSBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJE
X1hUS0JEIGlzIG5vdCBzZXQKQ09ORklHX0lOUFVUX01PVVNFPXkKQ09ORklHX01PVVNFX1BT
Mj15CkNPTkZJR19NT1VTRV9QUzJfQUxQUz15CkNPTkZJR19NT1VTRV9QUzJfQllEPXkKQ09O
RklHX01PVVNFX1BTMl9MT0dJUFMyUFA9eQpDT05GSUdfTU9VU0VfUFMyX1NZTkFQVElDUz15
CkNPTkZJR19NT1VTRV9QUzJfU1lOQVBUSUNTX1NNQlVTPXkKQ09ORklHX01PVVNFX1BTMl9D
WVBSRVNTPXkKQ09ORklHX01PVVNFX1BTMl9MSUZFQk9PSz15CkNPTkZJR19NT1VTRV9QUzJf
VFJBQ0tQT0lOVD15CiMgQ09ORklHX01PVVNFX1BTMl9FTEFOVEVDSCBpcyBub3Qgc2V0CiMg
Q09ORklHX01PVVNFX1BTMl9TRU5URUxJQyBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1BT
Ml9UT1VDSEtJVCBpcyBub3Qgc2V0CkNPTkZJR19NT1VTRV9QUzJfRk9DQUxURUNIPXkKIyBD
T05GSUdfTU9VU0VfUFMyX1ZNTU9VU0UgaXMgbm90IHNldApDT05GSUdfTU9VU0VfUFMyX1NN
QlVTPXkKIyBDT05GSUdfTU9VU0VfU0VSSUFMIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9VU0Vf
QVBQTEVUT1VDSCBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX0JDTTU5NzQgaXMgbm90IHNl
dAojIENPTkZJR19NT1VTRV9DWUFQQSBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX0VMQU5f
STJDIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9VU0VfVlNYWFhBQSBpcyBub3Qgc2V0CiMgQ09O
RklHX01PVVNFX0dQSU8gaXMgbm90IHNldAojIENPTkZJR19NT1VTRV9TWU5BUFRJQ1NfSTJD
IGlzIG5vdCBzZXQKIyBDT05GSUdfTU9VU0VfU1lOQVBUSUNTX1VTQiBpcyBub3Qgc2V0CiMg
Q09ORklHX0lOUFVUX0pPWVNUSUNLIGlzIG5vdCBzZXQKQ09ORklHX0lOUFVUX1RBQkxFVD15
CiMgQ09ORklHX1RBQkxFVF9VU0JfQUNFQ0FEIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVU
X1VTQl9BSVBURUsgaXMgbm90IHNldAojIENPTkZJR19UQUJMRVRfVVNCX0hBTldBTkcgaXMg
bm90IHNldAojIENPTkZJR19UQUJMRVRfVVNCX0tCVEFCIGlzIG5vdCBzZXQKIyBDT05GSUdf
VEFCTEVUX1VTQl9QRUdBU1VTIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVUX1NFUklBTF9X
QUNPTTQgaXMgbm90IHNldApDT05GSUdfSU5QVVRfVE9VQ0hTQ1JFRU49eQojIENPTkZJR19U
T1VDSFNDUkVFTl9BRFM3ODQ2IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fQUQ3
ODc3IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fQUQ3ODc5IGlzIG5vdCBzZXQK
IyBDT05GSUdfVE9VQ0hTQ1JFRU5fQVRNRUxfTVhUIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9V
Q0hTQ1JFRU5fQVVPX1BJWENJUiBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0JV
MjEwMTMgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9CVTIxMDI5IGlzIG5vdCBz
ZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fQ0hJUE9ORV9JQ044NTA1IGlzIG5vdCBzZXQKIyBD
T05GSUdfVE9VQ0hTQ1JFRU5fQ1k4Q1RNQTE0MCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX0NZOENUTUcxMTAgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9DWVRU
U1BfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0NZVFRTUDRfQ09SRSBp
cyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0RZTkFQUk8gaXMgbm90IHNldAojIENP
TkZJR19UT1VDSFNDUkVFTl9IQU1QU0hJUkUgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFND
UkVFTl9FRVRJIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fRUdBTEFYX1NFUklB
TCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0VYQzMwMDAgaXMgbm90IHNldAoj
IENPTkZJR19UT1VDSFNDUkVFTl9GVUpJVFNVIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hT
Q1JFRU5fR09PRElYIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fSElERUVQIGlz
IG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fSFlDT05fSFk0NlhYIGlzIG5vdCBzZXQK
IyBDT05GSUdfVE9VQ0hTQ1JFRU5fSUxJMjEwWCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX0lMSVRFSyBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1M2U1k3NjEg
aXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9HVU5aRSBpcyBub3Qgc2V0CiMgQ09O
RklHX1RPVUNIU0NSRUVOX0VLVEYyMTI3IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JF
RU5fRUxBTiBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0VMTyBpcyBub3Qgc2V0
CiMgQ09ORklHX1RPVUNIU0NSRUVOX1dBQ09NX1c4MDAxIGlzIG5vdCBzZXQKIyBDT05GSUdf
VE9VQ0hTQ1JFRU5fV0FDT01fSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
TUFYMTE4MDEgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9NQ1M1MDAwIGlzIG5v
dCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fTU1TMTE0IGlzIG5vdCBzZXQKIyBDT05GSUdf
VE9VQ0hTQ1JFRU5fTUVMRkFTX01JUDQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVF
Tl9NU0cyNjM4IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fTVRPVUNIIGlzIG5v
dCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fSU5FWElPIGlzIG5vdCBzZXQKIyBDT05GSUdf
VE9VQ0hTQ1JFRU5fTUs3MTIgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9QRU5N
T1VOVCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0VEVF9GVDVYMDYgaXMgbm90
IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UT1VDSFJJR0hUIGlzIG5vdCBzZXQKIyBDT05G
SUdfVE9VQ0hTQ1JFRU5fVE9VQ0hXSU4gaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVF
Tl9QSVhDSVIgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9XRFQ4N1hYX0kyQyBp
cyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1VTQl9DT01QT1NJVEUgaXMgbm90IHNl
dAojIENPTkZJR19UT1VDSFNDUkVFTl9UT1VDSElUMjEzIGlzIG5vdCBzZXQKIyBDT05GSUdf
VE9VQ0hTQ1JFRU5fVFNDX1NFUklPIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
VFNDMjAwNCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1RTQzIwMDUgaXMgbm90
IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UU0MyMDA3IGlzIG5vdCBzZXQKIyBDT05GSUdf
VE9VQ0hTQ1JFRU5fUk1fVFMgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9TSUxF
QUQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9TSVNfSTJDIGlzIG5vdCBzZXQK
IyBDT05GSUdfVE9VQ0hTQ1JFRU5fU1QxMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hT
Q1JFRU5fU1RNRlRTIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fU1VSNDAgaXMg
bm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9TVVJGQUNFM19TUEkgaXMgbm90IHNldAoj
IENPTkZJR19UT1VDSFNDUkVFTl9TWDg2NTQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFND
UkVFTl9UUFM2NTA3WCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1pFVDYyMjMg
aXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9aRk9SQ0UgaXMgbm90IHNldAojIENP
TkZJR19UT1VDSFNDUkVFTl9ST0hNX0JVMjEwMjMgaXMgbm90IHNldAojIENPTkZJR19UT1VD
SFNDUkVFTl9JUVM1WFggaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9aSU5JVElY
IGlzIG5vdCBzZXQKQ09ORklHX0lOUFVUX01JU0M9eQojIENPTkZJR19JTlBVVF9BRDcxNFgg
aXMgbm90IHNldAojIENPTkZJR19JTlBVVF9CTUExNTAgaXMgbm90IHNldAojIENPTkZJR19J
TlBVVF9FM1gwX0JVVFRPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX1BDU1BLUiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0lOUFVUX01NQTg0NTAgaXMgbm90IHNldAojIENPTkZJR19JTlBV
VF9BUEFORUwgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9HUElPX0JFRVBFUiBpcyBub3Qg
c2V0CiMgQ09ORklHX0lOUFVUX0dQSU9fREVDT0RFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0lO
UFVUX0dQSU9fVklCUkEgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9BVExBU19CVE5TIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfQVRJX1JFTU9URTIgaXMgbm90IHNldAojIENPTkZJ
R19JTlBVVF9LRVlTUEFOX1JFTU9URSBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0tYVEo5
IGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfUE9XRVJNQVRFIGlzIG5vdCBzZXQKIyBDT05G
SUdfSU5QVVRfWUVBTElOSyBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0NNMTA5IGlzIG5v
dCBzZXQKIyBDT05GSUdfSU5QVVRfVUlOUFVUIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRf
UENGODU3NCBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0dQSU9fUk9UQVJZX0VOQ09ERVIg
aXMgbm90IHNldAojIENPTkZJR19JTlBVVF9EQTcyODBfSEFQVElDUyBpcyBub3Qgc2V0CiMg
Q09ORklHX0lOUFVUX0FEWEwzNFggaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9JTVNfUENV
IGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfSVFTMjY5QSBpcyBub3Qgc2V0CiMgQ09ORklH
X0lOUFVUX0lRUzYyNkEgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9DTUEzMDAwIGlzIG5v
dCBzZXQKQ09ORklHX0lOUFVUX1hFTl9LQkRERVZfRlJPTlRFTkQ9eQojIENPTkZJR19JTlBV
VF9JREVBUEFEX1NMSURFQkFSIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfRFJWMjYwWF9I
QVBUSUNTIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfRFJWMjY2NV9IQVBUSUNTIGlzIG5v
dCBzZXQKIyBDT05GSUdfSU5QVVRfRFJWMjY2N19IQVBUSUNTIGlzIG5vdCBzZXQKIyBDT05G
SUdfUk1JNF9DT1JFIGlzIG5vdCBzZXQKCiMKIyBIYXJkd2FyZSBJL08gcG9ydHMKIwpDT05G
SUdfU0VSSU89eQpDT05GSUdfQVJDSF9NSUdIVF9IQVZFX1BDX1NFUklPPXkKQ09ORklHX1NF
UklPX0k4MDQyPXkKQ09ORklHX1NFUklPX1NFUlBPUlQ9eQojIENPTkZJR19TRVJJT19DVDgy
QzcxMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklPX1BDSVBTMiBpcyBub3Qgc2V0CkNPTkZJ
R19TRVJJT19MSUJQUzI9eQojIENPTkZJR19TRVJJT19SQVcgaXMgbm90IHNldAojIENPTkZJ
R19TRVJJT19BTFRFUkFfUFMyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VSSU9fUFMyTVVMVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFUklPX0FSQ19QUzIgaXMgbm90IHNldAojIENPTkZJR19T
RVJJT19HUElPX1BTMiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTRVJJTyBpcyBub3Qgc2V0CiMg
Q09ORklHX0dBTUVQT1JUIGlzIG5vdCBzZXQKIyBlbmQgb2YgSGFyZHdhcmUgSS9PIHBvcnRz
CiMgZW5kIG9mIElucHV0IGRldmljZSBzdXBwb3J0CgojCiMgQ2hhcmFjdGVyIGRldmljZXMK
IwpDT05GSUdfVFRZPXkKQ09ORklHX1ZUPXkKQ09ORklHX0NPTlNPTEVfVFJBTlNMQVRJT05T
PXkKQ09ORklHX1ZUX0NPTlNPTEU9eQpDT05GSUdfVlRfQ09OU09MRV9TTEVFUD15CkNPTkZJ
R19IV19DT05TT0xFPXkKQ09ORklHX1ZUX0hXX0NPTlNPTEVfQklORElORz15CkNPTkZJR19V
TklYOThfUFRZUz15CiMgQ09ORklHX0xFR0FDWV9QVFlTIGlzIG5vdCBzZXQKQ09ORklHX0xE
SVNDX0FVVE9MT0FEPXkKCiMKIyBTZXJpYWwgZHJpdmVycwojCkNPTkZJR19TRVJJQUxfRUFS
TFlDT049eQpDT05GSUdfU0VSSUFMXzgyNTA9eQpDT05GSUdfU0VSSUFMXzgyNTBfREVQUkVD
QVRFRF9PUFRJT05TPXkKQ09ORklHX1NFUklBTF84MjUwX1BOUD15CiMgQ09ORklHX1NFUklB
TF84MjUwXzE2NTUwQV9WQVJJQU5UUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF84MjUw
X0ZJTlRFSyBpcyBub3Qgc2V0CkNPTkZJR19TRVJJQUxfODI1MF9DT05TT0xFPXkKQ09ORklH
X1NFUklBTF84MjUwX1BDST15CkNPTkZJR19TRVJJQUxfODI1MF9FWEFSPXkKQ09ORklHX1NF
UklBTF84MjUwX05SX1VBUlRTPTMyCkNPTkZJR19TRVJJQUxfODI1MF9SVU5USU1FX1VBUlRT
PTQKQ09ORklHX1NFUklBTF84MjUwX0VYVEVOREVEPXkKQ09ORklHX1NFUklBTF84MjUwX01B
TllfUE9SVFM9eQpDT05GSUdfU0VSSUFMXzgyNTBfU0hBUkVfSVJRPXkKQ09ORklHX1NFUklB
TF84MjUwX0RFVEVDVF9JUlE9eQpDT05GSUdfU0VSSUFMXzgyNTBfUlNBPXkKQ09ORklHX1NF
UklBTF84MjUwX0RXTElCPXkKIyBDT05GSUdfU0VSSUFMXzgyNTBfRFcgaXMgbm90IHNldAoj
IENPTkZJR19TRVJJQUxfODI1MF9SVDI4OFggaXMgbm90IHNldApDT05GSUdfU0VSSUFMXzgy
NTBfTFBTUz15CkNPTkZJR19TRVJJQUxfODI1MF9NSUQ9eQoKIwojIE5vbi04MjUwIHNlcmlh
bCBwb3J0IHN1cHBvcnQKIwojIENPTkZJR19TRVJJQUxfTUFYMzEwMCBpcyBub3Qgc2V0CiMg
Q09ORklHX1NFUklBTF9NQVgzMTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VSSUFMX1VBUlRM
SVRFIGlzIG5vdCBzZXQKQ09ORklHX1NFUklBTF9DT1JFPXkKQ09ORklHX1NFUklBTF9DT1JF
X0NPTlNPTEU9eQojIENPTkZJR19TRVJJQUxfSlNNIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VS
SUFMX0xBTlRJUSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9TQ0NOWFAgaXMgbm90IHNl
dAojIENPTkZJR19TRVJJQUxfU0MxNklTN1hYIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VSSUFM
X0JDTTYzWFggaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfQUxURVJBX0pUQUdVQVJUIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VSSUFMX0FMVEVSQV9VQVJUIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VSSUFMX0FSQyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9SUDIgaXMgbm90IHNl
dAojIENPTkZJR19TRVJJQUxfRlNMX0xQVUFSVCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklB
TF9GU0xfTElORkxFWFVBUlQgaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfU1BSRCBpcyBu
b3Qgc2V0CiMgZW5kIG9mIFNlcmlhbCBkcml2ZXJzCgpDT05GSUdfU0VSSUFMX01DVFJMX0dQ
SU89eQpDT05GSUdfU0VSSUFMX05PTlNUQU5EQVJEPXkKIyBDT05GSUdfTU9YQV9JTlRFTExJ
TyBpcyBub3Qgc2V0CiMgQ09ORklHX01PWEFfU01BUlRJTyBpcyBub3Qgc2V0CiMgQ09ORklH
X1NZTkNMSU5LX0dUIGlzIG5vdCBzZXQKIyBDT05GSUdfTl9IRExDIGlzIG5vdCBzZXQKIyBD
T05GSUdfTl9HU00gaXMgbm90IHNldAojIENPTkZJR19OT1pPTUkgaXMgbm90IHNldAojIENP
TkZJR19OVUxMX1RUWSBpcyBub3Qgc2V0CkNPTkZJR19IVkNfRFJJVkVSPXkKQ09ORklHX0hW
Q19JUlE9eQpDT05GSUdfSFZDX1hFTj15CkNPTkZJR19IVkNfWEVOX0ZST05URU5EPXkKIyBD
T05GSUdfU0VSSUFMX0RFVl9CVVMgaXMgbm90IHNldAojIENPTkZJR19WSVJUSU9fQ09OU09M
RSBpcyBub3Qgc2V0CiMgQ09ORklHX0lQTUlfSEFORExFUiBpcyBub3Qgc2V0CkNPTkZJR19I
V19SQU5ET009eQpDT05GSUdfSFdfUkFORE9NX1RJTUVSSU9NRU09eQpDT05GSUdfSFdfUkFO
RE9NX0lOVEVMPXkKQ09ORklHX0hXX1JBTkRPTV9BTUQ9eQojIENPTkZJR19IV19SQU5ET01f
QkE0MzEgaXMgbm90IHNldApDT05GSUdfSFdfUkFORE9NX1ZJQT15CiMgQ09ORklHX0hXX1JB
TkRPTV9YSVBIRVJBIGlzIG5vdCBzZXQKIyBDT05GSUdfQVBQTElDT00gaXMgbm90IHNldAoj
IENPTkZJR19NV0FWRSBpcyBub3Qgc2V0CkNPTkZJR19ERVZNRU09eQojIENPTkZJR19OVlJB
TSBpcyBub3Qgc2V0CiMgQ09ORklHX1JBV19EUklWRVIgaXMgbm90IHNldApDT05GSUdfREVW
UE9SVD15CkNPTkZJR19IUEVUPXkKIyBDT05GSUdfSFBFVF9NTUFQIGlzIG5vdCBzZXQKQ09O
RklHX0hBTkdDSEVDS19USU1FUj15CiMgQ09ORklHX1RDR19UUE0gaXMgbm90IHNldAojIENP
TkZJR19URUxDTE9DSyBpcyBub3Qgc2V0CiMgQ09ORklHX1hJTExZQlVTIGlzIG5vdCBzZXQK
IyBlbmQgb2YgQ2hhcmFjdGVyIGRldmljZXMKCiMgQ09ORklHX1JBTkRPTV9UUlVTVF9DUFUg
aXMgbm90IHNldAojIENPTkZJR19SQU5ET01fVFJVU1RfQk9PVExPQURFUiBpcyBub3Qgc2V0
CgojCiMgSTJDIHN1cHBvcnQKIwpDT05GSUdfSTJDPXkKQ09ORklHX0FDUElfSTJDX09QUkVH
SU9OPXkKQ09ORklHX0kyQ19CT0FSRElORk89eQpDT05GSUdfSTJDX0NPTVBBVD15CkNPTkZJ
R19JMkNfQ0hBUkRFVj15CkNPTkZJR19JMkNfTVVYPXkKCiMKIyBNdWx0aXBsZXhlciBJMkMg
Q2hpcCBzdXBwb3J0CiMKQ09ORklHX0kyQ19NVVhfR1BJTz15CkNPTkZJR19JMkNfTVVYX0xU
QzQzMDY9eQpDT05GSUdfSTJDX01VWF9QQ0E5NTQxPXkKQ09ORklHX0kyQ19NVVhfUENBOTU0
eD15CkNPTkZJR19JMkNfTVVYX1JFRz15CkNPTkZJR19JMkNfTVVYX01MWENQTEQ9eQojIGVu
ZCBvZiBNdWx0aXBsZXhlciBJMkMgQ2hpcCBzdXBwb3J0CgpDT05GSUdfSTJDX0hFTFBFUl9B
VVRPPXkKQ09ORklHX0kyQ19TTUJVUz15CkNPTkZJR19JMkNfQUxHT0JJVD15CgojCiMgSTJD
IEhhcmR3YXJlIEJ1cyBzdXBwb3J0CiMKCiMKIyBQQyBTTUJ1cyBob3N0IGNvbnRyb2xsZXIg
ZHJpdmVycwojCiMgQ09ORklHX0kyQ19BTEkxNTM1IGlzIG5vdCBzZXQKIyBDT05GSUdfSTJD
X0FMSTE1NjMgaXMgbm90IHNldAojIENPTkZJR19JMkNfQUxJMTVYMyBpcyBub3Qgc2V0CkNP
TkZJR19JMkNfQU1ENzU2PXkKIyBDT05GSUdfSTJDX0FNRDc1Nl9TNDg4MiBpcyBub3Qgc2V0
CkNPTkZJR19JMkNfQU1EODExMT15CiMgQ09ORklHX0kyQ19BTURfTVAyIGlzIG5vdCBzZXQK
Q09ORklHX0kyQ19JODAxPXkKQ09ORklHX0kyQ19JU0NIPXkKIyBDT05GSUdfSTJDX0lTTVQg
aXMgbm90IHNldApDT05GSUdfSTJDX1BJSVg0PXkKIyBDT05GSUdfSTJDX05GT1JDRTIgaXMg
bm90IHNldAojIENPTkZJR19JMkNfTlZJRElBX0dQVSBpcyBub3Qgc2V0CiMgQ09ORklHX0ky
Q19TSVM1NTk1IGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1NJUzYzMCBpcyBub3Qgc2V0CiMg
Q09ORklHX0kyQ19TSVM5NlggaXMgbm90IHNldAojIENPTkZJR19JMkNfVklBIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSTJDX1ZJQVBSTyBpcyBub3Qgc2V0CgojCiMgQUNQSSBkcml2ZXJzCiMK
Q09ORklHX0kyQ19TQ01JPXkKCiMKIyBJMkMgc3lzdGVtIGJ1cyBkcml2ZXJzIChtb3N0bHkg
ZW1iZWRkZWQgLyBzeXN0ZW0tb24tY2hpcCkKIwojIENPTkZJR19JMkNfQ0JVU19HUElPIGlz
IG5vdCBzZXQKIyBDT05GSUdfSTJDX0RFU0lHTldBUkVfUExBVEZPUk0gaXMgbm90IHNldAoj
IENPTkZJR19JMkNfREVTSUdOV0FSRV9QQ0kgaXMgbm90IHNldAojIENPTkZJR19JMkNfRU1F
VjIgaXMgbm90IHNldAojIENPTkZJR19JMkNfR1BJTyBpcyBub3Qgc2V0CiMgQ09ORklHX0ky
Q19PQ09SRVMgaXMgbm90IHNldAojIENPTkZJR19JMkNfUENBX1BMQVRGT1JNIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSTJDX1NJTVRFQyBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19YSUxJTlgg
aXMgbm90IHNldAoKIwojIEV4dGVybmFsIEkyQy9TTUJ1cyBhZGFwdGVyIGRyaXZlcnMKIwoj
IENPTkZJR19JMkNfRElPTEFOX1UyQyBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19DUDI2MTUg
aXMgbm90IHNldAojIENPTkZJR19JMkNfUk9CT1RGVVpaX09TSUYgaXMgbm90IHNldAojIENP
TkZJR19JMkNfVEFPU19FVk0gaXMgbm90IHNldAojIENPTkZJR19JMkNfVElOWV9VU0IgaXMg
bm90IHNldAoKIwojIE90aGVyIEkyQy9TTUJ1cyBidXMgZHJpdmVycwojCiMgQ09ORklHX0ky
Q19NTFhDUExEIGlzIG5vdCBzZXQKIyBlbmQgb2YgSTJDIEhhcmR3YXJlIEJ1cyBzdXBwb3J0
CgojIENPTkZJR19JMkNfU1RVQiBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TTEFWRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0kyQ19ERUJVR19DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJD
X0RFQlVHX0FMR08gaXMgbm90IHNldAojIENPTkZJR19JMkNfREVCVUdfQlVTIGlzIG5vdCBz
ZXQKIyBlbmQgb2YgSTJDIHN1cHBvcnQKCiMgQ09ORklHX0kzQyBpcyBub3Qgc2V0CkNPTkZJ
R19TUEk9eQojIENPTkZJR19TUElfREVCVUcgaXMgbm90IHNldApDT05GSUdfU1BJX01BU1RF
Uj15CiMgQ09ORklHX1NQSV9NRU0gaXMgbm90IHNldAoKIwojIFNQSSBNYXN0ZXIgQ29udHJv
bGxlciBEcml2ZXJzCiMKIyBDT05GSUdfU1BJX0FMVEVSQSBpcyBub3Qgc2V0CiMgQ09ORklH
X1NQSV9BWElfU1BJX0VOR0lORSBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9CSVRCQU5HIGlz
IG5vdCBzZXQKIyBDT05GSUdfU1BJX0NBREVOQ0UgaXMgbm90IHNldAojIENPTkZJR19TUElf
REVTSUdOV0FSRSBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9OWFBfRkxFWFNQSSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NQSV9HUElPIGlzIG5vdCBzZXQKIyBDT05GSUdfU1BJX0xBTlRJUV9T
U0MgaXMgbm90IHNldAojIENPTkZJR19TUElfT0NfVElOWSBpcyBub3Qgc2V0CiMgQ09ORklH
X1NQSV9QWEEyWFggaXMgbm90IHNldAojIENPTkZJR19TUElfUk9DS0NISVAgaXMgbm90IHNl
dAojIENPTkZJR19TUElfU0MxOElTNjAyIGlzIG5vdCBzZXQKIyBDT05GSUdfU1BJX1NJRklW
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9NWElDIGlzIG5vdCBzZXQKIyBDT05GSUdfU1BJ
X1hDT01NIGlzIG5vdCBzZXQKIyBDT05GSUdfU1BJX1hJTElOWCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NQSV9aWU5RTVBfR1FTUEkgaXMgbm90IHNldAojIENPTkZJR19TUElfQU1EIGlzIG5v
dCBzZXQKCiMKIyBTUEkgTXVsdGlwbGV4ZXIgc3VwcG9ydAojCiMgQ09ORklHX1NQSV9NVVgg
aXMgbm90IHNldAoKIwojIFNQSSBQcm90b2NvbCBNYXN0ZXJzCiMKIyBDT05GSUdfU1BJX1NQ
SURFViBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9MT09QQkFDS19URVNUIGlzIG5vdCBzZXQK
IyBDT05GSUdfU1BJX1RMRTYyWDAgaXMgbm90IHNldAojIENPTkZJR19TUElfU0xBVkUgaXMg
bm90IHNldApDT05GSUdfU1BJX0RZTkFNSUM9eQojIENPTkZJR19TUE1JIGlzIG5vdCBzZXQK
IyBDT05GSUdfSFNJIGlzIG5vdCBzZXQKQ09ORklHX1BQUz15CiMgQ09ORklHX1BQU19ERUJV
RyBpcyBub3Qgc2V0CgojCiMgUFBTIGNsaWVudHMgc3VwcG9ydAojCiMgQ09ORklHX1BQU19D
TElFTlRfS1RJTUVSIGlzIG5vdCBzZXQKIyBDT05GSUdfUFBTX0NMSUVOVF9MRElTQyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1BQU19DTElFTlRfR1BJTyBpcyBub3Qgc2V0CgojCiMgUFBTIGdl
bmVyYXRvcnMgc3VwcG9ydAojCgojCiMgUFRQIGNsb2NrIHN1cHBvcnQKIwpDT05GSUdfUFRQ
XzE1ODhfQ0xPQ0s9eQoKIwojIEVuYWJsZSBQSFlMSUIgYW5kIE5FVFdPUktfUEhZX1RJTUVT
VEFNUElORyB0byBzZWUgdGhlIGFkZGl0aW9uYWwgY2xvY2tzLgojCkNPTkZJR19QVFBfMTU4
OF9DTE9DS19LVk09eQojIENPTkZJR19QVFBfMTU4OF9DTE9DS19JRFQ4MlAzMyBpcyBub3Qg
c2V0CiMgQ09ORklHX1BUUF8xNTg4X0NMT0NLX0lEVENNIGlzIG5vdCBzZXQKIyBDT05GSUdf
UFRQXzE1ODhfQ0xPQ0tfVk1XIGlzIG5vdCBzZXQKIyBDT05GSUdfUFRQXzE1ODhfQ0xPQ0tf
T0NQIGlzIG5vdCBzZXQKIyBlbmQgb2YgUFRQIGNsb2NrIHN1cHBvcnQKCiMgQ09ORklHX1BJ
TkNUUkwgaXMgbm90IHNldApDT05GSUdfR1BJT0xJQj15CkNPTkZJR19HUElPTElCX0ZBU1RQ
QVRIX0xJTUlUPTUxMgpDT05GSUdfR1BJT19BQ1BJPXkKIyBDT05GSUdfREVCVUdfR1BJTyBp
cyBub3Qgc2V0CkNPTkZJR19HUElPX0NERVY9eQojIENPTkZJR19HUElPX0NERVZfVjEgaXMg
bm90IHNldAoKIwojIE1lbW9yeSBtYXBwZWQgR1BJTyBkcml2ZXJzCiMKIyBDT05GSUdfR1BJ
T19BTURQVCBpcyBub3Qgc2V0CiMgQ09ORklHX0dQSU9fRFdBUEIgaXMgbm90IHNldAojIENP
TkZJR19HUElPX0VYQVIgaXMgbm90IHNldAojIENPTkZJR19HUElPX0dFTkVSSUNfUExBVEZP
Uk0gaXMgbm90IHNldAojIENPTkZJR19HUElPX01CODZTN1ggaXMgbm90IHNldAojIENPTkZJ
R19HUElPX1ZYODU1IGlzIG5vdCBzZXQKIyBDT05GSUdfR1BJT19BTURfRkNIIGlzIG5vdCBz
ZXQKIyBlbmQgb2YgTWVtb3J5IG1hcHBlZCBHUElPIGRyaXZlcnMKCiMKIyBQb3J0LW1hcHBl
ZCBJL08gR1BJTyBkcml2ZXJzCiMKIyBDT05GSUdfR1BJT19GNzE4OFggaXMgbm90IHNldAoj
IENPTkZJR19HUElPX0lUODcgaXMgbm90IHNldAojIENPTkZJR19HUElPX1NDSCBpcyBub3Qg
c2V0CiMgQ09ORklHX0dQSU9fU0NIMzExWCBpcyBub3Qgc2V0CiMgQ09ORklHX0dQSU9fV0lO
Qk9ORCBpcyBub3Qgc2V0CiMgQ09ORklHX0dQSU9fV1MxNkM0OCBpcyBub3Qgc2V0CiMgZW5k
IG9mIFBvcnQtbWFwcGVkIEkvTyBHUElPIGRyaXZlcnMKCiMKIyBJMkMgR1BJTyBleHBhbmRl
cnMKIwojIENPTkZJR19HUElPX0FEUDU1ODggaXMgbm90IHNldAojIENPTkZJR19HUElPX01B
WDczMDAgaXMgbm90IHNldAojIENPTkZJR19HUElPX01BWDczMlggaXMgbm90IHNldAojIENP
TkZJR19HUElPX1BDQTk1M1ggaXMgbm90IHNldAojIENPTkZJR19HUElPX1BDQTk1NzAgaXMg
bm90IHNldAojIENPTkZJR19HUElPX1BDRjg1N1ggaXMgbm90IHNldAojIENPTkZJR19HUElP
X1RQSUMyODEwIGlzIG5vdCBzZXQKIyBlbmQgb2YgSTJDIEdQSU8gZXhwYW5kZXJzCgojCiMg
TUZEIEdQSU8gZXhwYW5kZXJzCiMKIyBlbmQgb2YgTUZEIEdQSU8gZXhwYW5kZXJzCgojCiMg
UENJIEdQSU8gZXhwYW5kZXJzCiMKIyBDT05GSUdfR1BJT19BTUQ4MTExIGlzIG5vdCBzZXQK
IyBDT05GSUdfR1BJT19CVDhYWCBpcyBub3Qgc2V0CiMgQ09ORklHX0dQSU9fTUxfSU9IIGlz
IG5vdCBzZXQKIyBDT05GSUdfR1BJT19QQ0lfSURJT18xNiBpcyBub3Qgc2V0CiMgQ09ORklH
X0dQSU9fUENJRV9JRElPXzI0IGlzIG5vdCBzZXQKIyBDT05GSUdfR1BJT19SREMzMjFYIGlz
IG5vdCBzZXQKIyBlbmQgb2YgUENJIEdQSU8gZXhwYW5kZXJzCgojCiMgU1BJIEdQSU8gZXhw
YW5kZXJzCiMKIyBDT05GSUdfR1BJT19NQVgzMTkxWCBpcyBub3Qgc2V0CiMgQ09ORklHX0dQ
SU9fTUFYNzMwMSBpcyBub3Qgc2V0CiMgQ09ORklHX0dQSU9fTUMzMzg4MCBpcyBub3Qgc2V0
CiMgQ09ORklHX0dQSU9fUElTT1NSIGlzIG5vdCBzZXQKIyBDT05GSUdfR1BJT19YUkExNDAz
IGlzIG5vdCBzZXQKIyBlbmQgb2YgU1BJIEdQSU8gZXhwYW5kZXJzCgojCiMgVVNCIEdQSU8g
ZXhwYW5kZXJzCiMKIyBlbmQgb2YgVVNCIEdQSU8gZXhwYW5kZXJzCgojCiMgVmlydHVhbCBH
UElPIGRyaXZlcnMKIwojIENPTkZJR19HUElPX0FHR1JFR0FUT1IgaXMgbm90IHNldAojIENP
TkZJR19HUElPX01PQ0tVUCBpcyBub3Qgc2V0CiMgZW5kIG9mIFZpcnR1YWwgR1BJTyBkcml2
ZXJzCgojIENPTkZJR19XMSBpcyBub3Qgc2V0CiMgQ09ORklHX1BPV0VSX1JFU0VUIGlzIG5v
dCBzZXQKQ09ORklHX1BPV0VSX1NVUFBMWT15CiMgQ09ORklHX1BPV0VSX1NVUFBMWV9ERUJV
RyBpcyBub3Qgc2V0CkNPTkZJR19QT1dFUl9TVVBQTFlfSFdNT049eQojIENPTkZJR19QREFf
UE9XRVIgaXMgbm90IHNldAojIENPTkZJR19URVNUX1BPV0VSIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9BRFA1MDYxIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9DVzIwMTUg
aXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX0RTMjc4MCBpcyBub3Qgc2V0CiMgQ09ORklH
X0JBVFRFUllfRFMyNzgxIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9EUzI3ODIgaXMg
bm90IHNldAojIENPTkZJR19CQVRURVJZX1NCUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIQVJH
RVJfU0JTIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFOQUdFUl9TQlMgaXMgbm90IHNldAojIENP
TkZJR19CQVRURVJZX0JRMjdYWFggaXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX01BWDE3
MDQwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9NQVgxNzA0MiBpcyBub3Qgc2V0CiMg
Q09ORklHX0NIQVJHRVJfTUFYODkwMyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIQVJHRVJfTFA4
NzI3IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9HUElPIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9MVDM2NTEgaXMgbm90IHNldAojIENPTkZJR19DSEFSR0VSX0xUQzQxNjJM
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI0MTVYIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9CUTI0MjU3IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI0NzM1
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI1MTVYIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9CUTI1ODkwIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI1OTgw
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI1NlhYIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9TTUIzNDcgaXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX0dBVUdFX0xU
QzI5NDEgaXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX0dPTERGSVNIIGlzIG5vdCBzZXQK
IyBDT05GSUdfQ0hBUkdFUl9SVDk0NTUgaXMgbm90IHNldAojIENPTkZJR19DSEFSR0VSX0JE
OTk5NTQgaXMgbm90IHNldApDT05GSUdfSFdNT049eQpDT05GSUdfSFdNT05fVklEPXkKIyBD
T05GSUdfSFdNT05fREVCVUdfQ0hJUCBpcyBub3Qgc2V0CgojCiMgTmF0aXZlIGRyaXZlcnMK
IwojIENPTkZJR19TRU5TT1JTX0FCSVRVR1VSVSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQUJJVFVHVVJVMyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQUQ3MzE0IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19BRDc0MTQgaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX0FENzQxOCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyMSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyNSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQURNMTAyNiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyOSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAzMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQURNMTE3NyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNOTI0MCBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzMxMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQURUNzQxMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQxMSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ2MiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQURUNzQ3MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ3NSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfQUhUMTAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JT
X0FTMzcwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BU0M3NjIxIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19BWElfRkFOX0NPTlRST0wgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0s4VEVNUCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0sxMFRFTVA9eQpDT05G
SUdfU0VOU09SU19GQU0xNUhfUE9XRVI9eQojIENPTkZJR19TRU5TT1JTX0FQUExFU01DIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BU0IxMDAgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0FTUEVFRCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQVRYUDEgaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX0NPUlNBSVJfQ1BSTyBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFTlNPUlNfQ09SU0FJUl9QU1UgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19EUklWRVRF
TVA9eQojIENPTkZJR19TRU5TT1JTX0RTNjIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19EUzE2MjEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0RFTExfU01NIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19JNUtfQU1CIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNf
RjcxODA1Rj15CkNPTkZJR19TRU5TT1JTX0Y3MTg4MkZHPXkKQ09ORklHX1NFTlNPUlNfRjc1
Mzc1Uz15CiMgQ09ORklHX1NFTlNPUlNfRlNDSE1EIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VO
U09SU19GVFNURVVUQVRFUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfR0w1MThTTSBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfR0w1MjBTTSBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFTlNPUlNfRzc2MEEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0c3NjIgaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX0hJSDYxMzAgaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX0k1NTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19DT1JFVEVNUCBpcyBub3Qg
c2V0CkNPTkZJR19TRU5TT1JTX0lUODc9eQpDT05GSUdfU0VOU09SU19KQzQyPXkKIyBDT05G
SUdfU0VOU09SU19QT1dSMTIyMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTElORUFH
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTFRDMjk0NSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfTFRDMjk0N19JMkMgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xU
QzI5NDdfU1BJIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEMyOTkwIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19MVEMyOTkyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19MVEM0MTUxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjE1IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjIyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19MVEM0MjQ1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjYwIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjYxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19NQVgxMTExIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVgxMjcgaXMgbm90IHNl
dAojIENPTkZJR19TRU5TT1JTX01BWDE2MDY1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19NQVgxNjE5IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVgxNjY4IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19NQVgxOTcgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JT
X01BWDMxNzIyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVgzMTczMCBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYNjYyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfTUFYNjYzOSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYNjY0MiBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYNjY1MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfTUFYNjY5NyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYMzE3OTAgaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX01DUDMwMjEgaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX1RDNjU0IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19UUFMyMzg2MSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfTVI3NTIwMyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQURDWFggaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNNjMgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX0xNNzAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNNzMg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNNzUgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0xNNzcgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNNzggaXMgbm90IHNl
dAojIENPTkZJR19TRU5TT1JTX0xNODAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xN
ODMgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNODUgaXMgbm90IHNldAojIENPTkZJ
R19TRU5TT1JTX0xNODcgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNOTAgaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX0xNOTIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JT
X0xNOTMgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xNOTUyMzQgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX0xNOTUyNDEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0xN
OTUyNDUgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1BDODczNjAgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX1BDODc0MjcgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX05U
Q19USEVSTUlTVE9SIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19OQ1Q2NjgzIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19OQ1Q2Nzc1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VO
U09SU19OQ1Q3ODAyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19OQ1Q3OTA0IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19OUENNN1hYIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VO
U09SU19OWlhUX0tSQUtFTjIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1BDRjg1OTEg
aXMgbm90IHNldAojIENPTkZJR19QTUJVUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
U0JUU0kgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NIVDE1IGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19TSFQyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfU0hUM3gg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NIVEMxIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0VOU09SU19TSVM1NTk1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19ETUUxNzM3IGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUMxNDAzIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0VOU09SU19FTUMyMTAzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUM2VzIwMSBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfU01TQzQ3TTEgaXMgbm90IHNldAojIENPTkZJ
R19TRU5TT1JTX1NNU0M0N00xOTIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNU0M0
N0IzOTcgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NDSDU2MjcgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX1NDSDU2MzYgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NU
VFM3NTEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNTTY2NSBpcyBub3Qgc2V0CiMg
Q09ORklHX1NFTlNPUlNfQURDMTI4RDgxOCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
QURTNzgyOCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURTNzg3MSBpcyBub3Qgc2V0
CiMgQ09ORklHX1NFTlNPUlNfQU1DNjgyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
SU5BMjA5IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19JTkEyWFggaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX0lOQTMyMjEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RD
NzQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RITUM1MCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfVE1QMTAyIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19UTVAxMDMg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RNUDEwOCBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFTlNPUlNfVE1QNDAxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19UTVA0MjEgaXMg
bm90IHNldAojIENPTkZJR19TRU5TT1JTX1RNUDUxMyBpcyBub3Qgc2V0CiMgQ09ORklHX1NF
TlNPUlNfVklBX0NQVVRFTVAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1ZJQTY4NkEg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1ZUMTIxMSBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFTlNPUlNfVlQ4MjMxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODM3NzNHIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODM3ODFEIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0VOU09SU19XODM3OTFEIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODM3OTJEIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODM3OTMgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX1c4Mzc5NSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzTDc4NVRTIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODNMNzg2TkcgaXMgbm90IHNldAojIENPTkZJ
R19TRU5TT1JTX1c4MzYyN0hGIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19XODM2MjdF
SEYgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1hHRU5FIGlzIG5vdCBzZXQKCiMKIyBB
Q1BJIGRyaXZlcnMKIwpDT05GSUdfU0VOU09SU19BQ1BJX1BPV0VSPXkKIyBDT05GSUdfU0VO
U09SU19BVEswMTEwIGlzIG5vdCBzZXQKQ09ORklHX1RIRVJNQUw9eQojIENPTkZJR19USEVS
TUFMX05FVExJTksgaXMgbm90IHNldAojIENPTkZJR19USEVSTUFMX1NUQVRJU1RJQ1MgaXMg
bm90IHNldApDT05GSUdfVEhFUk1BTF9FTUVSR0VOQ1lfUE9XRVJPRkZfREVMQVlfTVM9MApD
T05GSUdfVEhFUk1BTF9IV01PTj15CiMgQ09ORklHX1RIRVJNQUxfV1JJVEFCTEVfVFJJUFMg
aXMgbm90IHNldApDT05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9TVEVQX1dJU0U9eQojIENP
TkZJR19USEVSTUFMX0RFRkFVTFRfR09WX0ZBSVJfU0hBUkUgaXMgbm90IHNldAojIENPTkZJ
R19USEVSTUFMX0RFRkFVTFRfR09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldAojIENPTkZJR19U
SEVSTUFMX0dPVl9GQUlSX1NIQVJFIGlzIG5vdCBzZXQKQ09ORklHX1RIRVJNQUxfR09WX1NU
RVBfV0lTRT15CiMgQ09ORklHX1RIRVJNQUxfR09WX0JBTkdfQkFORyBpcyBub3Qgc2V0CiMg
Q09ORklHX1RIRVJNQUxfR09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldAojIENPTkZJR19USEVS
TUFMX0VNVUxBVElPTiBpcyBub3Qgc2V0CgojCiMgSW50ZWwgdGhlcm1hbCBkcml2ZXJzCiMK
IyBDT05GSUdfSU5URUxfUE9XRVJDTEFNUCBpcyBub3Qgc2V0CkNPTkZJR19YODZfVEhFUk1B
TF9WRUNUT1I9eQojIENPTkZJR19YODZfUEtHX1RFTVBfVEhFUk1BTCBpcyBub3Qgc2V0CiMg
Q09ORklHX0lOVEVMX1NPQ19EVFNfVEhFUk1BTCBpcyBub3Qgc2V0CgojCiMgQUNQSSBJTlQz
NDBYIHRoZXJtYWwgZHJpdmVycwojCiMgQ09ORklHX0lOVDM0MFhfVEhFUk1BTCBpcyBub3Qg
c2V0CiMgZW5kIG9mIEFDUEkgSU5UMzQwWCB0aGVybWFsIGRyaXZlcnMKCiMgQ09ORklHX0lO
VEVMX1BDSF9USEVSTUFMIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5URUxfVENDX0NPT0xJTkcg
aXMgbm90IHNldAojIGVuZCBvZiBJbnRlbCB0aGVybWFsIGRyaXZlcnMKCkNPTkZJR19XQVRD
SERPRz15CkNPTkZJR19XQVRDSERPR19DT1JFPXkKIyBDT05GSUdfV0FUQ0hET0dfTk9XQVlP
VVQgaXMgbm90IHNldApDT05GSUdfV0FUQ0hET0dfSEFORExFX0JPT1RfRU5BQkxFRD15CkNP
TkZJR19XQVRDSERPR19PUEVOX1RJTUVPVVQ9MAojIENPTkZJR19XQVRDSERPR19TWVNGUyBp
cyBub3Qgc2V0CgojCiMgV2F0Y2hkb2cgUHJldGltZW91dCBHb3Zlcm5vcnMKIwojIENPTkZJ
R19XQVRDSERPR19QUkVUSU1FT1VUX0dPViBpcyBub3Qgc2V0CgojCiMgV2F0Y2hkb2cgRGV2
aWNlIERyaXZlcnMKIwojIENPTkZJR19TT0ZUX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05G
SUdfV0RBVF9XRFQgaXMgbm90IHNldAojIENPTkZJR19YSUxJTlhfV0FUQ0hET0cgaXMgbm90
IHNldAojIENPTkZJR19aSUlSQVZFX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0FE
RU5DRV9XQVRDSERPRyBpcyBub3Qgc2V0CiMgQ09ORklHX0RXX1dBVENIRE9HIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUFYNjNYWF9XQVRDSERPRyBpcyBub3Qgc2V0CiMgQ09ORklHX0FDUVVJ
UkVfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfQURWQU5URUNIX1dEVCBpcyBub3Qgc2V0CiMg
Q09ORklHX0FMSU0xNTM1X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0FMSU03MTAxX1dEVCBp
cyBub3Qgc2V0CiMgQ09ORklHX0VCQ19DMzg0X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0Y3
MTgwOEVfV0RUIGlzIG5vdCBzZXQKQ09ORklHX1NQNTEwMF9UQ089eQojIENPTkZJR19TQkNf
RklUUEMyX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfRVVST1RFQ0hfV0RUIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUI3MDBfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfSUJNQVNSIGlz
IG5vdCBzZXQKIyBDT05GSUdfV0FGRVJfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfSTYzMDBF
U0JfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfSUU2WFhfV0RUIGlzIG5vdCBzZXQKIyBDT05G
SUdfSVRDT19XRFQgaXMgbm90IHNldAojIENPTkZJR19JVDg3MTJGX1dEVCBpcyBub3Qgc2V0
CiMgQ09ORklHX0lUODdfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfSFBfV0FUQ0hET0cgaXMg
bm90IHNldAojIENPTkZJR19TQzEyMDBfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfUEM4NzQx
M19XRFQgaXMgbm90IHNldAojIENPTkZJR19OVl9UQ08gaXMgbm90IHNldAojIENPTkZJR182
MFhYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVTVfV0RUIGlzIG5vdCBzZXQKIyBDT05G
SUdfU01TQ19TQ0gzMTFYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1NNU0MzN0I3ODdfV0RU
IGlzIG5vdCBzZXQKIyBDT05GSUdfVFFNWDg2X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJ
QV9XRFQgaXMgbm90IHNldAojIENPTkZJR19XODM2MjdIRl9XRFQgaXMgbm90IHNldAojIENP
TkZJR19XODM4NzdGX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1c4Mzk3N0ZfV0RUIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUFDSFpfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfU0JDX0VQWF9D
M19XQVRDSERPRyBpcyBub3Qgc2V0CiMgQ09ORklHX05JOTAzWF9XRFQgaXMgbm90IHNldAoj
IENPTkZJR19OSUM3MDE4X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX01FTl9BMjFfV0RUIGlz
IG5vdCBzZXQKQ09ORklHX1hFTl9XRFQ9eQoKIwojIFBDSS1iYXNlZCBXYXRjaGRvZyBDYXJk
cwojCiMgQ09ORklHX1BDSVBDV0FUQ0hET0cgaXMgbm90IHNldAojIENPTkZJR19XRFRQQ0kg
aXMgbm90IHNldAoKIwojIFVTQi1iYXNlZCBXYXRjaGRvZyBDYXJkcwojCiMgQ09ORklHX1VT
QlBDV0FUQ0hET0cgaXMgbm90IHNldApDT05GSUdfU1NCX1BPU1NJQkxFPXkKIyBDT05GSUdf
U1NCIGlzIG5vdCBzZXQKQ09ORklHX0JDTUFfUE9TU0lCTEU9eQojIENPTkZJR19CQ01BIGlz
IG5vdCBzZXQKCiMKIyBNdWx0aWZ1bmN0aW9uIGRldmljZSBkcml2ZXJzCiMKQ09ORklHX01G
RF9DT1JFPXkKIyBDT05GSUdfTUZEX0FTMzcxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1BNSUNf
QURQNTUyMCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9BQVQyODcwX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19NRkRfQkNNNTkwWFggaXMgbm90IHNldAojIENPTkZJR19NRkRfQkQ5NTcx
TVdWIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0FYUDIwWF9JMkMgaXMgbm90IHNldAojIENP
TkZJR19NRkRfTUFERVJBIGlzIG5vdCBzZXQKIyBDT05GSUdfUE1JQ19EQTkwM1ggaXMgbm90
IHNldAojIENPTkZJR19NRkRfREE5MDUyX1NQSSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9E
QTkwNTJfSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0RBOTA1NSBpcyBub3Qgc2V0CiMg
Q09ORklHX01GRF9EQTkwNjIgaXMgbm90IHNldAojIENPTkZJR19NRkRfREE5MDYzIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUZEX0RBOTE1MCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9ETE4y
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01DMTNYWFhfU1BJIGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01DMTNYWFhfSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01QMjYyOSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hUQ19QQVNJQzMgaXMgbm90IHNldAojIENPTkZJR19IVENfSTJD
UExEIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0lOVEVMX1FVQVJLX0kyQ19HUElPIGlzIG5v
dCBzZXQKIyBDT05GSUdfTFBDX0lDSCBpcyBub3Qgc2V0CkNPTkZJR19MUENfU0NIPXkKIyBD
T05GSUdfSU5URUxfU09DX1BNSUNfQ0hURENfVEkgaXMgbm90IHNldAojIENPTkZJR19NRkRf
SU5URUxfTFBTU19BQ1BJIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0lOVEVMX0xQU1NfUENJ
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0lOVEVMX1BNVCBpcyBub3Qgc2V0CiMgQ09ORklH
X01GRF9JUVM2MlggaXMgbm90IHNldAojIENPTkZJR19NRkRfSkFOWl9DTU9ESU8gaXMgbm90
IHNldAojIENPTkZJR19NRkRfS0VNUExEIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEXzg4UE04
MDAgaXMgbm90IHNldAojIENPTkZJR19NRkRfODhQTTgwNSBpcyBub3Qgc2V0CiMgQ09ORklH
X01GRF84OFBNODYwWCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVgxNDU3NyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9NQVg3NzY5MyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg3
Nzg0MyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg4OTA3IGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01BWDg5MjUgaXMgbm90IHNldAojIENPTkZJR19NRkRfTUFYODk5NyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9NQVg4OTk4IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01UNjM2
MCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NVDYzOTcgaXMgbm90IHNldAojIENPTkZJR19N
RkRfTUVORjIxQk1DIGlzIG5vdCBzZXQKIyBDT05GSUdfRVpYX1BDQVAgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfVklQRVJCT0FSRCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9SRVRVIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX1BDRjUwNjMzIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZE
X1JEQzMyMVggaXMgbm90IHNldAojIENPTkZJR19NRkRfUlQ1MDMzIGlzIG5vdCBzZXQKIyBD
T05GSUdfTUZEX1JDNVQ1ODMgaXMgbm90IHNldAojIENPTkZJR19NRkRfU0VDX0NPUkUgaXMg
bm90IHNldAojIENPTkZJR19NRkRfU0k0NzZYX0NPUkUgaXMgbm90IHNldAojIENPTkZJR19N
RkRfU001MDEgaXMgbm90IHNldAojIENPTkZJR19NRkRfU0tZODE0NTIgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfU1lTQ09OIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RJX0FNMzM1WF9U
U0NBREMgaXMgbm90IHNldAojIENPTkZJR19NRkRfTFAzOTQzIGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX0xQODc4OCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9USV9MTVUgaXMgbm90IHNl
dAojIENPTkZJR19NRkRfUEFMTUFTIGlzIG5vdCBzZXQKIyBDT05GSUdfVFBTNjEwNVggaXMg
bm90IHNldAojIENPTkZJR19UUFM2NTAxMCBpcyBub3Qgc2V0CiMgQ09ORklHX1RQUzY1MDdY
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzY1MDg2IGlzIG5vdCBzZXQKIyBDT05GSUdf
TUZEX1RQUzY1MDkwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RJX0xQODczWCBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9UUFM2NTg2WCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UUFM2
NTkxMCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UUFM2NTkxMl9JMkMgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfVFBTNjU5MTJfU1BJIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzgw
MDMxIGlzIG5vdCBzZXQKIyBDT05GSUdfVFdMNDAzMF9DT1JFIGlzIG5vdCBzZXQKIyBDT05G
SUdfVFdMNjA0MF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dMMTI3M19DT1JFIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX0xNMzUzMyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9U
UU1YODYgaXMgbm90IHNldAojIENPTkZJR19NRkRfVlg4NTUgaXMgbm90IHNldAojIENPTkZJ
R19NRkRfQVJJWk9OQV9JMkMgaXMgbm90IHNldAojIENPTkZJR19NRkRfQVJJWk9OQV9TUEkg
aXMgbm90IHNldAojIENPTkZJR19NRkRfV004NDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZE
X1dNODMxWF9JMkMgaXMgbm90IHNldAojIENPTkZJR19NRkRfV004MzFYX1NQSSBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9XTTgzNTBfSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dN
ODk5NCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9BVEMyNjBYX0kyQyBpcyBub3Qgc2V0CiMg
Q09ORklHX01GRF9JTlRFTF9NMTBfQk1DIGlzIG5vdCBzZXQKIyBlbmQgb2YgTXVsdGlmdW5j
dGlvbiBkZXZpY2UgZHJpdmVycwoKIyBDT05GSUdfUkVHVUxBVE9SIGlzIG5vdCBzZXQKQ09O
RklHX1JDX0NPUkU9eQpDT05GSUdfUkNfTUFQPXkKQ09ORklHX0xJUkM9eQpDT05GSUdfUkNf
REVDT0RFUlM9eQpDT05GSUdfSVJfTkVDX0RFQ09ERVI9eQpDT05GSUdfSVJfUkM1X0RFQ09E
RVI9eQpDT05GSUdfSVJfUkM2X0RFQ09ERVI9eQpDT05GSUdfSVJfSlZDX0RFQ09ERVI9eQpD
T05GSUdfSVJfU09OWV9ERUNPREVSPXkKQ09ORklHX0lSX1NBTllPX0RFQ09ERVI9eQpDT05G
SUdfSVJfU0hBUlBfREVDT0RFUj15CkNPTkZJR19JUl9NQ0VfS0JEX0RFQ09ERVI9eQpDT05G
SUdfSVJfWE1QX0RFQ09ERVI9eQojIENPTkZJR19JUl9JTU9OX0RFQ09ERVIgaXMgbm90IHNl
dAojIENPTkZJR19JUl9SQ01NX0RFQ09ERVIgaXMgbm90IHNldAojIENPTkZJR19SQ19ERVZJ
Q0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfTUVESUFfQ0VDX1NVUFBPUlQgaXMgbm90IHNldApD
T05GSUdfTUVESUFfU1VQUE9SVD15CkNPTkZJR19NRURJQV9TVVBQT1JUX0ZJTFRFUj15CkNP
TkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15CgojCiMgTWVkaWEgZGV2aWNlIHR5cGVz
CiMKQ09ORklHX01FRElBX0NBTUVSQV9TVVBQT1JUPXkKQ09ORklHX01FRElBX0FOQUxPR19U
Vl9TVVBQT1JUPXkKQ09ORklHX01FRElBX0RJR0lUQUxfVFZfU1VQUE9SVD15CkNPTkZJR19N
RURJQV9SQURJT19TVVBQT1JUPXkKIyBDT05GSUdfTUVESUFfU0RSX1NVUFBPUlQgaXMgbm90
IHNldAojIENPTkZJR19NRURJQV9QTEFURk9STV9TVVBQT1JUIGlzIG5vdCBzZXQKIyBDT05G
SUdfTUVESUFfVEVTVF9TVVBQT1JUIGlzIG5vdCBzZXQKIyBlbmQgb2YgTWVkaWEgZGV2aWNl
IHR5cGVzCgpDT05GSUdfVklERU9fREVWPXkKQ09ORklHX01FRElBX0NPTlRST0xMRVI9eQpD
T05GSUdfRFZCX0NPUkU9eQoKIwojIFZpZGVvNExpbnV4IG9wdGlvbnMKIwpDT05GSUdfVklE
RU9fVjRMMj15CkNPTkZJR19WSURFT19WNEwyX0kyQz15CiMgQ09ORklHX1ZJREVPX1Y0TDJf
U1VCREVWX0FQSSBpcyBub3Qgc2V0CkNPTkZJR19WSURFT19BRFZfREVCVUc9eQojIENPTkZJ
R19WSURFT19GSVhFRF9NSU5PUl9SQU5HRVMgaXMgbm90IHNldApDT05GSUdfVklERU9fVFVO
RVI9eQpDT05GSUdfVjRMMl9GV05PREU9eQojIGVuZCBvZiBWaWRlbzRMaW51eCBvcHRpb25z
CgojCiMgTWVkaWEgY29udHJvbGxlciBvcHRpb25zCiMKIyBDT05GSUdfTUVESUFfQ09OVFJP
TExFUl9EVkIgaXMgbm90IHNldAojIGVuZCBvZiBNZWRpYSBjb250cm9sbGVyIG9wdGlvbnMK
CiMKIyBEaWdpdGFsIFRWIG9wdGlvbnMKIwojIENPTkZJR19EVkJfTU1BUCBpcyBub3Qgc2V0
CkNPTkZJR19EVkJfTkVUPXkKQ09ORklHX0RWQl9NQVhfQURBUFRFUlM9OAojIENPTkZJR19E
VkJfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldAojIENPTkZJR19EVkJfREVNVVhfU0VDVElP
Tl9MT1NTX0xPRyBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9VTEVfREVCVUcgaXMgbm90IHNl
dAojIGVuZCBvZiBEaWdpdGFsIFRWIG9wdGlvbnMKCiMKIyBNZWRpYSBkcml2ZXJzCiMKCiMK
IyBEcml2ZXJzIGZpbHRlcmVkIGFzIHNlbGVjdGVkIGF0ICdGaWx0ZXIgbWVkaWEgZHJpdmVy
cycKIwpDT05GSUdfTUVESUFfVVNCX1NVUFBPUlQ9eQoKIwojIFdlYmNhbSBkZXZpY2VzCiMK
IyBDT05GSUdfVVNCX1ZJREVPX0NMQVNTIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0dTUENB
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1BXQyBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVP
X0NQSUEyIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1pSMzY0WFggaXMgbm90IHNldAojIENP
TkZJR19VU0JfU1RLV0VCQ0FNIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1MyMjU1IGlzIG5v
dCBzZXQKIyBDT05GSUdfVklERU9fVVNCVFYgaXMgbm90IHNldAoKIwojIEFuYWxvZyBUViBV
U0IgZGV2aWNlcwojCiMgQ09ORklHX1ZJREVPX1BWUlVTQjIgaXMgbm90IHNldAojIENPTkZJ
R19WSURFT19IRFBWUiBpcyBub3Qgc2V0CkNPTkZJR19WSURFT19TVEsxMTYwX0NPTU1PTj15
CkNPTkZJR19WSURFT19TVEsxMTYwPXkKIyBDT05GSUdfVklERU9fR083MDA3IGlzIG5vdCBz
ZXQKCiMKIyBBbmFsb2cvZGlnaXRhbCBUViBVU0IgZGV2aWNlcwojCiMgQ09ORklHX1ZJREVP
X0FVMDgyOCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX0NYMjMxWFggaXMgbm90IHNldAoj
IENPTkZJR19WSURFT19UTTYwMDAgaXMgbm90IHNldAoKIwojIERpZ2l0YWwgVFYgVVNCIGRl
dmljZXMKIwojIENPTkZJR19EVkJfVVNCIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX1VTQl9W
MiBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9UVFVTQl9CVURHRVQgaXMgbm90IHNldAojIENP
TkZJR19EVkJfVFRVU0JfREVDIGlzIG5vdCBzZXQKIyBDT05GSUdfU01TX1VTQl9EUlYgaXMg
bm90IHNldAojIENPTkZJR19EVkJfQjJDMl9GTEVYQ09QX1VTQiBpcyBub3Qgc2V0CiMgQ09O
RklHX0RWQl9BUzEwMiBpcyBub3Qgc2V0CgojCiMgV2ViY2FtLCBUViAoYW5hbG9nL2RpZ2l0
YWwpIFVTQiBkZXZpY2VzCiMKQ09ORklHX1ZJREVPX0VNMjhYWD15CkNPTkZJR19WSURFT19F
TTI4WFhfVjRMMj15CiMgQ09ORklHX1ZJREVPX0VNMjhYWF9BTFNBIGlzIG5vdCBzZXQKIyBD
T05GSUdfVklERU9fRU0yOFhYX0RWQiBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX0VNMjhY
WF9SQyBpcyBub3Qgc2V0CkNPTkZJR19NRURJQV9QQ0lfU1VQUE9SVD15CgojCiMgTWVkaWEg
Y2FwdHVyZSBzdXBwb3J0CiMKIyBDT05GSUdfVklERU9fU09MTzZYMTAgaXMgbm90IHNldAoj
IENPTkZJR19WSURFT19UVzU4NjQgaXMgbm90IHNldAojIENPTkZJR19WSURFT19UVzY4IGlz
IG5vdCBzZXQKIyBDT05GSUdfVklERU9fVFc2ODZYIGlzIG5vdCBzZXQKCiMKIyBNZWRpYSBj
YXB0dXJlL2FuYWxvZyBUViBzdXBwb3J0CiMKIyBDT05GSUdfVklERU9fSVZUViBpcyBub3Qg
c2V0CiMgQ09ORklHX1ZJREVPX0hFWElVTV9HRU1JTkkgaXMgbm90IHNldAojIENPTkZJR19W
SURFT19IRVhJVU1fT1JJT04gaXMgbm90IHNldAojIENPTkZJR19WSURFT19NWEIgaXMgbm90
IHNldAojIENPTkZJR19WSURFT19EVDMxNTUgaXMgbm90IHNldAoKIwojIE1lZGlhIGNhcHR1
cmUvYW5hbG9nL2h5YnJpZCBUViBzdXBwb3J0CiMKIyBDT05GSUdfVklERU9fQ1gxOCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1ZJREVPX0NYMjM4ODUgaXMgbm90IHNldApDT05GSUdfVklERU9f
Q1gyNTgyMT15CiMgQ09ORklHX1ZJREVPX0NYMjU4MjFfQUxTQSBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJREVPX0NYODggaXMgbm90IHNldAojIENPTkZJR19WSURFT19CVDg0OCBpcyBub3Qg
c2V0CiMgQ09ORklHX1ZJREVPX1NBQTcxMzQgaXMgbm90IHNldAojIENPTkZJR19WSURFT19T
QUE3MTY0IGlzIG5vdCBzZXQKCiMKIyBNZWRpYSBkaWdpdGFsIFRWIFBDSSBBZGFwdGVycwoj
CiMgQ09ORklHX0RWQl9BVjcxMTAgaXMgbm90IHNldAojIENPTkZJR19EVkJfQlVER0VUX0NP
UkUgaXMgbm90IHNldAojIENPTkZJR19EVkJfQjJDMl9GTEVYQ09QX1BDSSBpcyBub3Qgc2V0
CiMgQ09ORklHX0RWQl9QTFVUTzIgaXMgbm90IHNldAojIENPTkZJR19EVkJfRE0xMTA1IGlz
IG5vdCBzZXQKIyBDT05GSUdfRFZCX1BUMSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9QVDMg
aXMgbm90IHNldAojIENPTkZJR19NQU5USVNfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX0RW
Ql9OR0VORSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9EREJSSURHRSBpcyBub3Qgc2V0CiMg
Q09ORklHX0RWQl9TTUlQQ0lFIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX05FVFVQX1VOSURW
QiBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX0lQVTNfQ0lPMiBpcyBub3Qgc2V0CiMgQ09O
RklHX1JBRElPX0FEQVBURVJTIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX1RWRUVQUk9NPXkK
Q09ORklHX1ZJREVPQlVGMl9DT1JFPXkKQ09ORklHX1ZJREVPQlVGMl9WNEwyPXkKQ09ORklH
X1ZJREVPQlVGMl9NRU1PUFM9eQpDT05GSUdfVklERU9CVUYyX1ZNQUxMT0M9eQpDT05GSUdf
VklERU9CVUYyX0RNQV9TRz15CiMgZW5kIG9mIE1lZGlhIGRyaXZlcnMKCkNPTkZJR19NRURJ
QV9ISURFX0FOQ0lMTEFSWV9TVUJEUlY9eQoKIwojIE1lZGlhIGFuY2lsbGFyeSBkcml2ZXJz
CiMKQ09ORklHX01FRElBX0FUVEFDSD15CgojCiMgSVIgSTJDIGRyaXZlciBhdXRvLXNlbGVj
dGVkIGJ5ICdBdXRvc2VsZWN0IGFuY2lsbGFyeSBkcml2ZXJzJwojCkNPTkZJR19WSURFT19J
Ul9JMkM9eQoKIwojIGF1ZGlvLCB2aWRlbyBhbmQgcmFkaW8gSTJDIGRyaXZlcnMgYXV0by1z
ZWxlY3RlZCBieSAnQXV0b3NlbGVjdCBhbmNpbGxhcnkgZHJpdmVycycKIwpDT05GSUdfVklE
RU9fTVNQMzQwMD15CkNPTkZJR19WSURFT19TQUE3MTFYPXkKQ09ORklHX1ZJREVPX1RWUDUx
NTA9eQoKIwojIFZpZGVvIGFuZCBhdWRpbyBkZWNvZGVycwojCgojCiMgQ2FtZXJhIHNlbnNv
ciBkZXZpY2VzCiMKIyBDT05GSUdfVklERU9fSEk1NTYgaXMgbm90IHNldAojIENPTkZJR19W
SURFT19JTVgyMTQgaXMgbm90IHNldAojIENPTkZJR19WSURFT19JTVgyMTkgaXMgbm90IHNl
dAojIENPTkZJR19WSURFT19JTVgyNTggaXMgbm90IHNldAojIENPTkZJR19WSURFT19JTVgy
NzQgaXMgbm90IHNldAojIENPTkZJR19WSURFT19JTVgyOTAgaXMgbm90IHNldAojIENPTkZJ
R19WSURFT19JTVgzMTkgaXMgbm90IHNldAojIENPTkZJR19WSURFT19JTVgzNTUgaXMgbm90
IHNldAojIENPTkZJR19WSURFT19PVjAyQTEwIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX09W
MjY0MD15CiMgQ09ORklHX1ZJREVPX09WMjY1OSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVP
X09WMjY4MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WMjY4NSBpcyBub3Qgc2V0CiMg
Q09ORklHX1ZJREVPX09WMjc0MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNTY0NyBp
cyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNTY0OCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJ
REVPX09WNjY1MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNTY3MCBpcyBub3Qgc2V0
CiMgQ09ORklHX1ZJREVPX09WNTY3NSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNTY5
NSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNzI1MSBpcyBub3Qgc2V0CiMgQ09ORklH
X1ZJREVPX09WNzcyWCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WNzY0MCBpcyBub3Qg
c2V0CiMgQ09ORklHX1ZJREVPX09WNzY3MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09W
Nzc0MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WODg1NiBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJREVPX09WODg2NSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WOTY0MCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WOTY1MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVP
X09WOTczNCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX09WMTM4NTggaXMgbm90IHNldAoj
IENPTkZJR19WSURFT19WUzY2MjQgaXMgbm90IHNldAojIENPTkZJR19WSURFT19NVDlNMDAx
IGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fTVQ5TTAzMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1ZJREVPX01UOU0xMTEgaXMgbm90IHNldAojIENPTkZJR19WSURFT19NVDlQMDMxIGlzIG5v
dCBzZXQKIyBDT05GSUdfVklERU9fTVQ5VDAwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVP
X01UOVQxMTIgaXMgbm90IHNldApDT05GSUdfVklERU9fTVQ5VjAxMT15CiMgQ09ORklHX1ZJ
REVPX01UOVYwMzIgaXMgbm90IHNldAojIENPTkZJR19WSURFT19NVDlWMTExIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVklERU9fU1IwMzBQQzMwIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9f
Tk9PTjAxMFBDMzAgaXMgbm90IHNldAojIENPTkZJR19WSURFT19NNU1PTFMgaXMgbm90IHNl
dAojIENPTkZJR19WSURFT19SREFDTTIwIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fUkRB
Q00yMSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1JKNTROMSBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJREVPX1M1SzZBQSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1M1SzZBMyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1ZJREVPX1M1SzRFQ0dYIGlzIG5vdCBzZXQKIyBDT05GSUdfVklE
RU9fUzVLNUJBRiBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX0NDUyBpcyBub3Qgc2V0CiMg
Q09ORklHX1ZJREVPX0VUOEVLOCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1M1QzczTTMg
aXMgbm90IHNldAojIGVuZCBvZiBDYW1lcmEgc2Vuc29yIGRldmljZXMKCiMKIyBMZW5zIGRy
aXZlcnMKIwojIENPTkZJR19WSURFT19BRDU4MjAgaXMgbm90IHNldAojIENPTkZJR19WSURF
T19BSzczNzUgaXMgbm90IHNldAojIENPTkZJR19WSURFT19EVzk3MTQgaXMgbm90IHNldAoj
IENPTkZJR19WSURFT19EVzk3NjggaXMgbm90IHNldAojIENPTkZJR19WSURFT19EVzk4MDdf
VkNNIGlzIG5vdCBzZXQKIyBlbmQgb2YgTGVucyBkcml2ZXJzCgojCiMgRmxhc2ggZGV2aWNl
cwojCiMgQ09ORklHX1ZJREVPX0FEUDE2NTMgaXMgbm90IHNldAojIENPTkZJR19WSURFT19M
TTM1NjAgaXMgbm90IHNldAojIENPTkZJR19WSURFT19MTTM2NDYgaXMgbm90IHNldAojIGVu
ZCBvZiBGbGFzaCBkZXZpY2VzCgojCiMgU1BJIEkyQyBkcml2ZXJzIGF1dG8tc2VsZWN0ZWQg
YnkgJ0F1dG9zZWxlY3QgYW5jaWxsYXJ5IGRyaXZlcnMnCiMKCiMKIyBNZWRpYSBTUEkgQWRh
cHRlcnMKIwojIENPTkZJR19DWEQyODgwX1NQSV9EUlYgaXMgbm90IHNldAojIGVuZCBvZiBN
ZWRpYSBTUEkgQWRhcHRlcnMKCkNPTkZJR19NRURJQV9UVU5FUj15CgojCiMgVHVuZXIgZHJp
dmVycyBhdXRvLXNlbGVjdGVkIGJ5ICdBdXRvc2VsZWN0IGFuY2lsbGFyeSBkcml2ZXJzJwoj
CkNPTkZJR19NRURJQV9UVU5FUl9TSU1QTEU9eQpDT05GSUdfTUVESUFfVFVORVJfVERBODI5
MD15CkNPTkZJR19NRURJQV9UVU5FUl9UREE4MjdYPXkKQ09ORklHX01FRElBX1RVTkVSX1RE
QTE4MjcxPXkKQ09ORklHX01FRElBX1RVTkVSX1REQTk4ODc9eQpDT05GSUdfTUVESUFfVFVO
RVJfVEVBNTc2MT15CkNPTkZJR19NRURJQV9UVU5FUl9URUE1NzY3PXkKQ09ORklHX01FRElB
X1RVTkVSX01UMjBYWD15CkNPTkZJR19NRURJQV9UVU5FUl9YQzIwMjg9eQpDT05GSUdfTUVE
SUFfVFVORVJfWEM1MDAwPXkKQ09ORklHX01FRElBX1RVTkVSX1hDNDAwMD15CkNPTkZJR19N
RURJQV9UVU5FUl9NQzQ0UzgwMz15CgojCiMgRFZCIEZyb250ZW5kIGRyaXZlcnMgYXV0by1z
ZWxlY3RlZCBieSAnQXV0b3NlbGVjdCBhbmNpbGxhcnkgZHJpdmVycycKIwoKIwojIE11bHRp
c3RhbmRhcmQgKHNhdGVsbGl0ZSkgZnJvbnRlbmRzCiMKCiMKIyBNdWx0aXN0YW5kYXJkIChj
YWJsZSArIHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwoKIwojIERWQi1TIChzYXRlbGxpdGUp
IGZyb250ZW5kcwojCgojCiMgRFZCLVQgKHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwoKIwoj
IERWQi1DIChjYWJsZSkgZnJvbnRlbmRzCiMKCiMKIyBBVFNDIChOb3J0aCBBbWVyaWNhbi9L
b3JlYW4gVGVycmVzdHJpYWwvQ2FibGUgRFRWKSBmcm9udGVuZHMKIwoKIwojIElTREItVCAo
dGVycmVzdHJpYWwpIGZyb250ZW5kcwojCgojCiMgSVNEQi1TIChzYXRlbGxpdGUpICYgSVNE
Qi1UICh0ZXJyZXN0cmlhbCkgZnJvbnRlbmRzCiMKCiMKIyBEaWdpdGFsIHRlcnJlc3RyaWFs
IG9ubHkgdHVuZXJzL1BMTAojCgojCiMgU0VDIGNvbnRyb2wgZGV2aWNlcyBmb3IgRFZCLVMK
IwoKIwojIENvbW1vbiBJbnRlcmZhY2UgKEVONTAyMjEpIGNvbnRyb2xsZXIgZHJpdmVycwoj
CiMgZW5kIG9mIE1lZGlhIGFuY2lsbGFyeSBkcml2ZXJzCgojCiMgR3JhcGhpY3Mgc3VwcG9y
dAojCkNPTkZJR19BR1A9eQpDT05GSUdfQUdQX0FNRDY0PXkKQ09ORklHX0FHUF9JTlRFTD15
CiMgQ09ORklHX0FHUF9TSVMgaXMgbm90IHNldAojIENPTkZJR19BR1BfVklBIGlzIG5vdCBz
ZXQKQ09ORklHX0lOVEVMX0dUVD15CkNPTkZJR19WR0FfQVJCPXkKQ09ORklHX1ZHQV9BUkJf
TUFYX0dQVVM9MTYKIyBDT05GSUdfVkdBX1NXSVRDSEVST08gaXMgbm90IHNldApDT05GSUdf
RFJNPXkKIyBDT05GSUdfRFJNX0RQX0FVWF9DSEFSREVWIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFJNX0RFQlVHX01NIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX0RFQlVHX1NFTEZURVNUIGlz
IG5vdCBzZXQKQ09ORklHX0RSTV9LTVNfSEVMUEVSPXkKQ09ORklHX0RSTV9LTVNfRkJfSEVM
UEVSPXkKQ09ORklHX0RSTV9GQkRFVl9FTVVMQVRJT049eQpDT05GSUdfRFJNX0ZCREVWX09W
RVJBTExPQz0xMDAKQ09ORklHX0RSTV9MT0FEX0VESURfRklSTVdBUkU9eQojIENPTkZJR19E
Uk1fRFBfQ0VDIGlzIG5vdCBzZXQKQ09ORklHX0RSTV9UVE09eQpDT05GSUdfRFJNX1RUTV9I
RUxQRVI9eQpDT05GSUdfRFJNX0dFTV9TSE1FTV9IRUxQRVI9eQoKIwojIEkyQyBlbmNvZGVy
IG9yIGhlbHBlciBjaGlwcwojCiMgQ09ORklHX0RSTV9JMkNfQ0g3MDA2IGlzIG5vdCBzZXQK
IyBDT05GSUdfRFJNX0kyQ19TSUwxNjQgaXMgbm90IHNldAojIENPTkZJR19EUk1fSTJDX05Y
UF9UREE5OThYIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX0kyQ19OWFBfVERBOTk1MCBpcyBu
b3Qgc2V0CiMgZW5kIG9mIEkyQyBlbmNvZGVyIG9yIGhlbHBlciBjaGlwcwoKIwojIEFSTSBk
ZXZpY2VzCiMKIyBlbmQgb2YgQVJNIGRldmljZXMKCkNPTkZJR19EUk1fUkFERU9OPXkKIyBD
T05GSUdfRFJNX1JBREVPTl9VU0VSUFRSIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX0FNREdQ
VSBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9OT1VWRUFVIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFJNX0k5MTUgaXMgbm90IHNldAojIENPTkZJR19EUk1fVkdFTSBpcyBub3Qgc2V0CiMgQ09O
RklHX0RSTV9WS01TIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX1ZNV0dGWCBpcyBub3Qgc2V0
CiMgQ09ORklHX0RSTV9HTUE1MDAgaXMgbm90IHNldAojIENPTkZJR19EUk1fVURMIGlzIG5v
dCBzZXQKIyBDT05GSUdfRFJNX0FTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9NR0FHMjAw
IGlzIG5vdCBzZXQKQ09ORklHX0RSTV9RWEw9eQojIENPTkZJR19EUk1fQk9DSFMgaXMgbm90
IHNldAojIENPTkZJR19EUk1fVklSVElPX0dQVSBpcyBub3Qgc2V0CkNPTkZJR19EUk1fUEFO
RUw9eQoKIwojIERpc3BsYXkgUGFuZWxzCiMKIyBlbmQgb2YgRGlzcGxheSBQYW5lbHMKCkNP
TkZJR19EUk1fQlJJREdFPXkKQ09ORklHX0RSTV9QQU5FTF9CUklER0U9eQoKIwojIERpc3Bs
YXkgSW50ZXJmYWNlIEJyaWRnZXMKIwojIENPTkZJR19EUk1fQU5BTE9HSVhfQU5YNzhYWCBp
cyBub3Qgc2V0CiMgZW5kIG9mIERpc3BsYXkgSW50ZXJmYWNlIEJyaWRnZXMKCiMgQ09ORklH
X0RSTV9FVE5BVklWIGlzIG5vdCBzZXQKQ09ORklHX0RSTV9DSVJSVVNfUUVNVT15CiMgQ09O
RklHX0RSTV9HTTEyVTMyMCBpcyBub3Qgc2V0CiMgQ09ORklHX1RJTllEUk1fSFg4MzU3RCBp
cyBub3Qgc2V0CiMgQ09ORklHX1RJTllEUk1fSUxJOTIyNSBpcyBub3Qgc2V0CiMgQ09ORklH
X1RJTllEUk1fSUxJOTM0MSBpcyBub3Qgc2V0CiMgQ09ORklHX1RJTllEUk1fSUxJOTQ4NiBp
cyBub3Qgc2V0CiMgQ09ORklHX1RJTllEUk1fTUkwMjgzUVQgaXMgbm90IHNldAojIENPTkZJ
R19USU5ZRFJNX1JFUEFQRVIgaXMgbm90IHNldAojIENPTkZJR19USU5ZRFJNX1NUNzU4NiBp
cyBub3Qgc2V0CiMgQ09ORklHX1RJTllEUk1fU1Q3NzM1UiBpcyBub3Qgc2V0CiMgQ09ORklH
X0RSTV9YRU5fRlJPTlRFTkQgaXMgbm90IHNldAojIENPTkZJR19EUk1fVkJPWFZJREVPIGlz
IG5vdCBzZXQKIyBDT05GSUdfRFJNX0dVRCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9MRUdB
Q1kgaXMgbm90IHNldApDT05GSUdfRFJNX1BBTkVMX09SSUVOVEFUSU9OX1FVSVJLUz15Cgoj
CiMgRnJhbWUgYnVmZmVyIERldmljZXMKIwpDT05GSUdfRkJfQ01ETElORT15CkNPTkZJR19G
Ql9OT1RJRlk9eQpDT05GSUdfRkI9eQojIENPTkZJR19GSVJNV0FSRV9FRElEIGlzIG5vdCBz
ZXQKQ09ORklHX0ZCX0JPT1RfVkVTQV9TVVBQT1JUPXkKQ09ORklHX0ZCX0NGQl9GSUxMUkVD
VD15CkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQpDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15
CkNPTkZJR19GQl9TWVNfRklMTFJFQ1Q9eQpDT05GSUdfRkJfU1lTX0NPUFlBUkVBPXkKQ09O
RklHX0ZCX1NZU19JTUFHRUJMSVQ9eQojIENPTkZJR19GQl9GT1JFSUdOX0VORElBTiBpcyBu
b3Qgc2V0CkNPTkZJR19GQl9TWVNfRk9QUz15CkNPTkZJR19GQl9ERUZFUlJFRF9JTz15CkNP
TkZJR19GQl9NT0RFX0hFTFBFUlM9eQpDT05GSUdfRkJfVElMRUJMSVRUSU5HPXkKCiMKIyBG
cmFtZSBidWZmZXIgaGFyZHdhcmUgZHJpdmVycwojCiMgQ09ORklHX0ZCX0NJUlJVUyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0ZCX1BNMiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0NZQkVSMjAw
MCBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0FSQyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0FT
SUxJQU5UIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfSU1TVFQgaXMgbm90IHNldAojIENPTkZJ
R19GQl9WR0ExNiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1VWRVNBIGlzIG5vdCBzZXQKQ09O
RklHX0ZCX1ZFU0E9eQojIENPTkZJR19GQl9ONDExIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJf
SEdBIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfT1BFTkNPUkVTIGlzIG5vdCBzZXQKIyBDT05G
SUdfRkJfUzFEMTNYWFggaXMgbm90IHNldAojIENPTkZJR19GQl9OVklESUEgaXMgbm90IHNl
dAojIENPTkZJR19GQl9SSVZBIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfSTc0MCBpcyBub3Qg
c2V0CiMgQ09ORklHX0ZCX0xFODA1NzggaXMgbm90IHNldAojIENPTkZJR19GQl9NQVRST1gg
aXMgbm90IHNldAojIENPTkZJR19GQl9SQURFT04gaXMgbm90IHNldAojIENPTkZJR19GQl9B
VFkxMjggaXMgbm90IHNldAojIENPTkZJR19GQl9BVFkgaXMgbm90IHNldAojIENPTkZJR19G
Ql9TMyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1NBVkFHRSBpcyBub3Qgc2V0CiMgQ09ORklH
X0ZCX1NJUyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1ZJQSBpcyBub3Qgc2V0CiMgQ09ORklH
X0ZCX05FT01BR0lDIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfS1lSTyBpcyBub3Qgc2V0CiMg
Q09ORklHX0ZCXzNERlggaXMgbm90IHNldAojIENPTkZJR19GQl9WT09ET08xIGlzIG5vdCBz
ZXQKIyBDT05GSUdfRkJfVlQ4NjIzIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfVFJJREVOVCBp
cyBub3Qgc2V0CiMgQ09ORklHX0ZCX0FSSyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1BNMyBp
cyBub3Qgc2V0CiMgQ09ORklHX0ZCX0NBUk1JTkUgaXMgbm90IHNldAojIENPTkZJR19GQl9T
TVNDVUZYIGlzIG5vdCBzZXQKQ09ORklHX0ZCX1VETD15CiMgQ09ORklHX0ZCX0lCTV9HWFQ0
NTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfVklSVFVBTCBpcyBub3Qgc2V0CkNPTkZJR19Y
RU5fRkJERVZfRlJPTlRFTkQ9eQojIENPTkZJR19GQl9NRVRST05PTUUgaXMgbm90IHNldAoj
IENPTkZJR19GQl9NQjg2MlhYIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfU0lNUExFIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfU003MTIgaXMgbm90IHNldAojIGVuZCBvZiBGcmFtZSBidWZm
ZXIgRGV2aWNlcwoKIwojIEJhY2tsaWdodCAmIExDRCBkZXZpY2Ugc3VwcG9ydAojCiMgQ09O
RklHX0xDRF9DTEFTU19ERVZJQ0UgaXMgbm90IHNldApDT05GSUdfQkFDS0xJR0hUX0NMQVNT
X0RFVklDRT15CiMgQ09ORklHX0JBQ0tMSUdIVF9LVEQyNTMgaXMgbm90IHNldAojIENPTkZJ
R19CQUNLTElHSFRfQVBQTEUgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfUUNPTV9X
TEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFDS0xJR0hUX1NBSEFSQSBpcyBub3Qgc2V0CiMg
Q09ORklHX0JBQ0tMSUdIVF9BRFA4ODYwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFDS0xJR0hU
X0FEUDg4NzAgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfTE0zNjM5IGlzIG5vdCBz
ZXQKIyBDT05GSUdfQkFDS0xJR0hUX0dQSU8gaXMgbm90IHNldAojIENPTkZJR19CQUNLTElH
SFRfTFY1MjA3TFAgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfQkQ2MTA3IGlzIG5v
dCBzZXQKIyBDT05GSUdfQkFDS0xJR0hUX0FSQ1hDTk4gaXMgbm90IHNldAojIGVuZCBvZiBC
YWNrbGlnaHQgJiBMQ0QgZGV2aWNlIHN1cHBvcnQKCkNPTkZJR19IRE1JPXkKCiMKIyBDb25z
b2xlIGRpc3BsYXkgZHJpdmVyIHN1cHBvcnQKIwpDT05GSUdfVkdBX0NPTlNPTEU9eQpDT05G
SUdfRFVNTVlfQ09OU09MRT15CkNPTkZJR19EVU1NWV9DT05TT0xFX0NPTFVNTlM9ODAKQ09O
RklHX0RVTU1ZX0NPTlNPTEVfUk9XUz0yNQpDT05GSUdfRlJBTUVCVUZGRVJfQ09OU09MRT15
CkNPTkZJR19GUkFNRUJVRkZFUl9DT05TT0xFX0RFVEVDVF9QUklNQVJZPXkKIyBDT05GSUdf
RlJBTUVCVUZGRVJfQ09OU09MRV9ST1RBVElPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZSQU1F
QlVGRkVSX0NPTlNPTEVfREVGRVJSRURfVEFLRU9WRVIgaXMgbm90IHNldAojIGVuZCBvZiBD
b25zb2xlIGRpc3BsYXkgZHJpdmVyIHN1cHBvcnQKCkNPTkZJR19MT0dPPXkKIyBDT05GSUdf
TE9HT19MSU5VWF9NT05PIGlzIG5vdCBzZXQKIyBDT05GSUdfTE9HT19MSU5VWF9WR0ExNiBp
cyBub3Qgc2V0CkNPTkZJR19MT0dPX0xJTlVYX0NMVVQyMjQ9eQojIGVuZCBvZiBHcmFwaGlj
cyBzdXBwb3J0CgpDT05GSUdfU09VTkQ9eQpDT05GSUdfU09VTkRfT1NTX0NPUkU9eQpDT05G
SUdfU09VTkRfT1NTX0NPUkVfUFJFQ0xBSU09eQpDT05GSUdfU05EPXkKQ09ORklHX1NORF9U
SU1FUj15CkNPTkZJR19TTkRfUENNPXkKQ09ORklHX1NORF9IV0RFUD15CkNPTkZJR19TTkRf
U0VRX0RFVklDRT15CkNPTkZJR19TTkRfUkFXTUlEST15CkNPTkZJR19TTkRfSkFDSz15CkNP
TkZJR19TTkRfSkFDS19JTlBVVF9ERVY9eQpDT05GSUdfU05EX09TU0VNVUw9eQpDT05GSUdf
U05EX01JWEVSX09TUz15CkNPTkZJR19TTkRfUENNX09TUz15CkNPTkZJR19TTkRfUENNX09T
U19QTFVHSU5TPXkKQ09ORklHX1NORF9QQ01fVElNRVI9eQpDT05GSUdfU05EX0hSVElNRVI9
eQpDT05GSUdfU05EX0RZTkFNSUNfTUlOT1JTPXkKQ09ORklHX1NORF9NQVhfQ0FSRFM9MzIK
Q09ORklHX1NORF9TVVBQT1JUX09MRF9BUEk9eQpDT05GSUdfU05EX1BST0NfRlM9eQpDT05G
SUdfU05EX1ZFUkJPU0VfUFJPQ0ZTPXkKIyBDT05GSUdfU05EX1ZFUkJPU0VfUFJJTlRLIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklHX1NORF9WTUFT
VEVSPXkKQ09ORklHX1NORF9ETUFfU0dCVUY9eQpDT05GSUdfU05EX0NUTF9MRUQ9eQpDT05G
SUdfU05EX1NFUVVFTkNFUj15CkNPTkZJR19TTkRfU0VRX0RVTU1ZPXkKQ09ORklHX1NORF9T
RVFVRU5DRVJfT1NTPXkKQ09ORklHX1NORF9TRVFfSFJUSU1FUl9ERUZBVUxUPXkKQ09ORklH
X1NORF9TRVFfTUlESV9FVkVOVD15CkNPTkZJR19TTkRfU0VRX01JREk9eQpDT05GSUdfU05E
X1NFUV9NSURJX0VNVUw9eQpDT05GSUdfU05EX01QVTQwMV9VQVJUPXkKQ09ORklHX1NORF9P
UEwzX0xJQj15CkNPTkZJR19TTkRfT1BMM19MSUJfU0VRPXkKQ09ORklHX1NORF9EUklWRVJT
PXkKIyBDT05GSUdfU05EX1BDU1AgaXMgbm90IHNldAojIENPTkZJR19TTkRfRFVNTVkgaXMg
bm90IHNldAojIENPTkZJR19TTkRfQUxPT1AgaXMgbm90IHNldAojIENPTkZJR19TTkRfVklS
TUlESSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9NVFBBViBpcyBub3Qgc2V0CiMgQ09ORklH
X1NORF9TRVJJQUxfVTE2NTUwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX01QVTQwMSBpcyBu
b3Qgc2V0CkNPTkZJR19TTkRfUENJPXkKIyBDT05GSUdfU05EX0FEMTg4OSBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9BTFMzMDAgaXMgbm90IHNldAojIENPTkZJR19TTkRfQUxTNDAwMCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9BTEk1NDUxIGlzIG5vdCBzZXQKIyBDT05GSUdfU05E
X0FTSUhQSSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9BVElJWFAgaXMgbm90IHNldAojIENP
TkZJR19TTkRfQVRJSVhQX01PREVNIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FVODgxMCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9BVTg4MjAgaXMgbm90IHNldAojIENPTkZJR19TTkRf
QVU4ODMwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FXMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1NORF9BWlQzMzI4IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0JUODdYIGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX0NBMDEwNiBpcyBub3Qgc2V0CkNPTkZJR19TTkRfQ01JUENJPXkKQ09O
RklHX1NORF9PWFlHRU5fTElCPXkKQ09ORklHX1NORF9PWFlHRU49eQojIENPTkZJR19TTkRf
Q1M0MjgxIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0NTNDZYWCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NORF9DVFhGSSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9EQVJMQTIwIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU05EX0dJTkEyMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9MQVlMQTIw
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0RBUkxBMjQgaXMgbm90IHNldAojIENPTkZJR19T
TkRfR0lOQTI0IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0xBWUxBMjQgaXMgbm90IHNldAoj
IENPTkZJR19TTkRfTU9OQSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9NSUEgaXMgbm90IHNl
dAojIENPTkZJR19TTkRfRUNITzNHIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0lORElHTyBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9JTkRJR09JTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NO
RF9JTkRJR09ESiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9JTkRJR09JT1ggaXMgbm90IHNl
dAojIENPTkZJR19TTkRfSU5ESUdPREpYIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VNVTEw
SzEgaXMgbm90IHNldAojIENPTkZJR19TTkRfRU1VMTBLMVggaXMgbm90IHNldAojIENPTkZJ
R19TTkRfRU5TMTM3MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9FTlMxMzcxIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU05EX0VTMTkzOCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9FUzE5Njgg
aXMgbm90IHNldAojIENPTkZJR19TTkRfRk04MDEgaXMgbm90IHNldAojIENPTkZJR19TTkRf
SERTUCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9IRFNQTSBpcyBub3Qgc2V0CiMgQ09ORklH
X1NORF9JQ0UxNzEyIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0lDRTE3MjQgaXMgbm90IHNl
dAojIENPTkZJR19TTkRfSU5URUw4WDAgaXMgbm90IHNldAojIENPTkZJR19TTkRfSU5URUw4
WDBNIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0tPUkcxMjEyIGlzIG5vdCBzZXQKIyBDT05G
SUdfU05EX0xPTEEgaXMgbm90IHNldAojIENPTkZJR19TTkRfTFg2NDY0RVMgaXMgbm90IHNl
dAojIENPTkZJR19TTkRfTUFFU1RSTzMgaXMgbm90IHNldAojIENPTkZJR19TTkRfTUlYQVJU
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX05NMjU2IGlzIG5vdCBzZXQKIyBDT05GSUdfU05E
X1BDWEhSIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1JJUFRJREUgaXMgbm90IHNldAojIENP
TkZJR19TTkRfUk1FMzIgaXMgbm90IHNldAojIENPTkZJR19TTkRfUk1FOTYgaXMgbm90IHNl
dAojIENPTkZJR19TTkRfUk1FOTY1MiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9TT05JQ1ZJ
QkVTIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1RSSURFTlQgaXMgbm90IHNldAojIENPTkZJ
R19TTkRfVklBODJYWCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9WSUE4MlhYX01PREVNIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX1ZJUlRVT1NPIGlzIG5vdCBzZXQKIyBDT05GSUdfU05E
X1ZYMjIyIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1lNRlBDSSBpcyBub3Qgc2V0CgojCiMg
SEQtQXVkaW8KIwpDT05GSUdfU05EX0hEQT15CkNPTkZJR19TTkRfSERBX0dFTkVSSUNfTEVE
Uz15CkNPTkZJR19TTkRfSERBX0lOVEVMPXkKQ09ORklHX1NORF9IREFfSFdERVA9eQojIENP
TkZJR19TTkRfSERBX1JFQ09ORklHIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9JTlBV
VF9CRUVQIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9QQVRDSF9MT0FERVIgaXMgbm90
IHNldApDT05GSUdfU05EX0hEQV9DT0RFQ19SRUFMVEVLPXkKQ09ORklHX1NORF9IREFfQ09E
RUNfQU5BTE9HPXkKQ09ORklHX1NORF9IREFfQ09ERUNfU0lHTUFURUw9eQpDT05GSUdfU05E
X0hEQV9DT0RFQ19WSUE9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19IRE1JPXkKQ09ORklHX1NO
RF9IREFfQ09ERUNfQ0lSUlVTPXkKQ09ORklHX1NORF9IREFfQ09ERUNfQ09ORVhBTlQ9eQpD
T05GSUdfU05EX0hEQV9DT0RFQ19DQTAxMTA9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19DQTAx
MzI9eQojIENPTkZJR19TTkRfSERBX0NPREVDX0NBMDEzMl9EU1AgaXMgbm90IHNldApDT05G
SUdfU05EX0hEQV9DT0RFQ19DTUVESUE9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19TSTMwNTQ9
eQpDT05GSUdfU05EX0hEQV9HRU5FUklDPXkKQ09ORklHX1NORF9IREFfUE9XRVJfU0FWRV9E
RUZBVUxUPTAKIyBDT05GSUdfU05EX0hEQV9JTlRFTF9IRE1JX1NJTEVOVF9TVFJFQU0gaXMg
bm90IHNldAojIGVuZCBvZiBIRC1BdWRpbwoKQ09ORklHX1NORF9IREFfQ09SRT15CkNPTkZJ
R19TTkRfSERBX1BSRUFMTE9DX1NJWkU9MApDT05GSUdfU05EX0lOVEVMX05ITFQ9eQpDT05G
SUdfU05EX0lOVEVMX0RTUF9DT05GSUc9eQpDT05GSUdfU05EX0lOVEVMX1NPVU5EV0lSRV9B
Q1BJPXkKQ09ORklHX1NORF9TUEk9eQpDT05GSUdfU05EX1VTQj15CkNPTkZJR19TTkRfVVNC
X0FVRElPPXkKQ09ORklHX1NORF9VU0JfQVVESU9fVVNFX01FRElBX0NPTlRST0xMRVI9eQpD
T05GSUdfU05EX1VTQl9VQTEwMT15CkNPTkZJR19TTkRfVVNCX1VTWDJZPXkKQ09ORklHX1NO
RF9VU0JfQ0FJQVE9eQpDT05GSUdfU05EX1VTQl9DQUlBUV9JTlBVVD15CiMgQ09ORklHX1NO
RF9VU0JfVVMxMjJMIGlzIG5vdCBzZXQKQ09ORklHX1NORF9VU0JfNkZJUkU9eQojIENPTkZJ
R19TTkRfVVNCX0hJRkFDRSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9CQ0QyMDAwIGlzIG5v
dCBzZXQKIyBDT05GSUdfU05EX1VTQl9QT0QgaXMgbm90IHNldAojIENPTkZJR19TTkRfVVNC
X1BPREhEIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1VTQl9UT05FUE9SVCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9VU0JfVkFSSUFYIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1NPQyBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9YODYgaXMgbm90IHNldAojIENPTkZJR19TTkRfWEVO
X0ZST05URU5EIGlzIG5vdCBzZXQKCiMKIyBISUQgc3VwcG9ydAojCkNPTkZJR19ISUQ9eQoj
IENPTkZJR19ISURfQkFUVEVSWV9TVFJFTkdUSCBpcyBub3Qgc2V0CkNPTkZJR19ISURSQVc9
eQojIENPTkZJR19VSElEIGlzIG5vdCBzZXQKQ09ORklHX0hJRF9HRU5FUklDPXkKCiMKIyBT
cGVjaWFsIEhJRCBkcml2ZXJzCiMKQ09ORklHX0hJRF9BNFRFQ0g9eQojIENPTkZJR19ISURf
QUNDVVRPVUNIIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0FDUlVYIGlzIG5vdCBzZXQKQ09O
RklHX0hJRF9BUFBMRT15CiMgQ09ORklHX0hJRF9BUFBMRUlSIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX0FTVVMgaXMgbm90IHNldAojIENPTkZJR19ISURfQVVSRUFMIGlzIG5vdCBzZXQK
Q09ORklHX0hJRF9CRUxLSU49eQojIENPTkZJR19ISURfQkVUT1BfRkYgaXMgbm90IHNldAoj
IENPTkZJR19ISURfQklHQkVOX0ZGIGlzIG5vdCBzZXQKQ09ORklHX0hJRF9DSEVSUlk9eQpD
T05GSUdfSElEX0NISUNPTlk9eQojIENPTkZJR19ISURfQ09SU0FJUiBpcyBub3Qgc2V0CiMg
Q09ORklHX0hJRF9DT1VHQVIgaXMgbm90IHNldAojIENPTkZJR19ISURfTUFDQUxMWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9QUk9ESUtFWVMgaXMgbm90IHNldAojIENPTkZJR19ISURf
Q01FRElBIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0NQMjExMiBpcyBub3Qgc2V0CiMgQ09O
RklHX0hJRF9DUkVBVElWRV9TQjA1NDAgaXMgbm90IHNldApDT05GSUdfSElEX0NZUFJFU1M9
eQojIENPTkZJR19ISURfRFJBR09OUklTRSBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9FTVNf
RkYgaXMgbm90IHNldAojIENPTkZJR19ISURfRUxBTiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJ
RF9FTEVDT00gaXMgbm90IHNldAojIENPTkZJR19ISURfRUxPIGlzIG5vdCBzZXQKQ09ORklH
X0hJRF9FWktFWT15CiMgQ09ORklHX0hJRF9GVDI2MCBpcyBub3Qgc2V0CiMgQ09ORklHX0hJ
RF9HRU1CSVJEIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0dGUk0gaXMgbm90IHNldAojIENP
TkZJR19ISURfR0xPUklPVVMgaXMgbm90IHNldAojIENPTkZJR19ISURfSE9MVEVLIGlzIG5v
dCBzZXQKIyBDT05GSUdfSElEX1ZJVkFMREkgaXMgbm90IHNldAojIENPTkZJR19ISURfR1Q2
ODNSIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0tFWVRPVUNIIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX0tZRSBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9VQ0xPR0lDIGlzIG5vdCBzZXQK
IyBDT05GSUdfSElEX1dBTFRPUCBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9WSUVXU09OSUMg
aXMgbm90IHNldAojIENPTkZJR19ISURfR1lSQVRJT04gaXMgbm90IHNldAojIENPTkZJR19I
SURfSUNBREUgaXMgbm90IHNldApDT05GSUdfSElEX0lURT15CiMgQ09ORklHX0hJRF9KQUJS
QSBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9UV0lOSEFOIGlzIG5vdCBzZXQKQ09ORklHX0hJ
RF9LRU5TSU5HVE9OPXkKIyBDT05GSUdfSElEX0xDUE9XRVIgaXMgbm90IHNldAojIENPTkZJ
R19ISURfTEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0xFTk9WTyBpcyBub3Qgc2V0CkNP
TkZJR19ISURfTE9HSVRFQ0g9eQojIENPTkZJR19ISURfTE9HSVRFQ0hfREogaXMgbm90IHNl
dAojIENPTkZJR19ISURfTE9HSVRFQ0hfSElEUFAgaXMgbm90IHNldAojIENPTkZJR19MT0dJ
VEVDSF9GRiBpcyBub3Qgc2V0CiMgQ09ORklHX0xPR0lSVU1CTEVQQUQyX0ZGIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTE9HSUc5NDBfRkYgaXMgbm90IHNldAojIENPTkZJR19MT0dJV0hFRUxT
X0ZGIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX01BR0lDTU9VU0UgaXMgbm90IHNldAojIENP
TkZJR19ISURfTUFMVFJPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9NQVlGTEFTSCBpcyBu
b3Qgc2V0CkNPTkZJR19ISURfUkVEUkFHT049eQpDT05GSUdfSElEX01JQ1JPU09GVD15CkNP
TkZJR19ISURfTU9OVEVSRVk9eQojIENPTkZJR19ISURfTVVMVElUT1VDSCBpcyBub3Qgc2V0
CiMgQ09ORklHX0hJRF9OVEkgaXMgbm90IHNldAojIENPTkZJR19ISURfTlRSSUcgaXMgbm90
IHNldAojIENPTkZJR19ISURfT1JURUsgaXMgbm90IHNldAojIENPTkZJR19ISURfUEFOVEhF
UkxPUkQgaXMgbm90IHNldAojIENPTkZJR19ISURfUEVOTU9VTlQgaXMgbm90IHNldAojIENP
TkZJR19ISURfUEVUQUxZTlggaXMgbm90IHNldAojIENPTkZJR19ISURfUElDT0xDRCBpcyBu
b3Qgc2V0CkNPTkZJR19ISURfUExBTlRST05JQ1M9eQojIENPTkZJR19ISURfUExBWVNUQVRJ
T04gaXMgbm90IHNldAojIENPTkZJR19ISURfUFJJTUFYIGlzIG5vdCBzZXQKIyBDT05GSUdf
SElEX1JFVFJPREUgaXMgbm90IHNldAojIENPTkZJR19ISURfUk9DQ0FUIGlzIG5vdCBzZXQK
IyBDT05GSUdfSElEX1NBSVRFSyBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9TQU1TVU5HIGlz
IG5vdCBzZXQKIyBDT05GSUdfSElEX1NFTUlURUsgaXMgbm90IHNldAojIENPTkZJR19ISURf
U09OWSBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9TUEVFRExJTksgaXMgbm90IHNldAojIENP
TkZJR19ISURfU1RFQU0gaXMgbm90IHNldAojIENPTkZJR19ISURfU1RFRUxTRVJJRVMgaXMg
bm90IHNldAojIENPTkZJR19ISURfU1VOUExVUyBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9S
TUkgaXMgbm90IHNldAojIENPTkZJR19ISURfR1JFRU5BU0lBIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1NNQVJUSk9ZUExVUyBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9USVZPIGlzIG5v
dCBzZXQKIyBDT05GSUdfSElEX1RPUFNFRUQgaXMgbm90IHNldAojIENPTkZJR19ISURfVEhJ
TkdNIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1RIUlVTVE1BU1RFUiBpcyBub3Qgc2V0CiMg
Q09ORklHX0hJRF9VRFJBV19QUzMgaXMgbm90IHNldAojIENPTkZJR19ISURfVTJGWkVSTyBp
cyBub3Qgc2V0CiMgQ09ORklHX0hJRF9XQUNPTSBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9X
SUlNT1RFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1hJTk1PIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1pFUk9QTFVTIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1pZREFDUk9OIGlzIG5v
dCBzZXQKIyBDT05GSUdfSElEX1NFTlNPUl9IVUIgaXMgbm90IHNldAojIENPTkZJR19ISURf
QUxQUyBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9NQ1AyMjIxIGlzIG5vdCBzZXQKIyBlbmQg
b2YgU3BlY2lhbCBISUQgZHJpdmVycwoKIwojIFVTQiBISUQgc3VwcG9ydAojCkNPTkZJR19V
U0JfSElEPXkKQ09ORklHX0hJRF9QSUQ9eQpDT05GSUdfVVNCX0hJRERFVj15CiMgZW5kIG9m
IFVTQiBISUQgc3VwcG9ydAoKIwojIEkyQyBISUQgc3VwcG9ydAojCiMgQ09ORklHX0kyQ19I
SURfQUNQSSBpcyBub3Qgc2V0CiMgZW5kIG9mIEkyQyBISUQgc3VwcG9ydAoKIwojIEludGVs
IElTSCBISUQgc3VwcG9ydAojCiMgQ09ORklHX0lOVEVMX0lTSF9ISUQgaXMgbm90IHNldAoj
IGVuZCBvZiBJbnRlbCBJU0ggSElEIHN1cHBvcnQKCiMKIyBBTUQgU0ZIIEhJRCBTdXBwb3J0
CiMKIyBDT05GSUdfQU1EX1NGSF9ISUQgaXMgbm90IHNldAojIGVuZCBvZiBBTUQgU0ZIIEhJ
RCBTdXBwb3J0CiMgZW5kIG9mIEhJRCBzdXBwb3J0CgpDT05GSUdfVVNCX09IQ0lfTElUVExF
X0VORElBTj15CkNPTkZJR19VU0JfU1VQUE9SVD15CkNPTkZJR19VU0JfQ09NTU9OPXkKIyBD
T05GSUdfVVNCX0xFRF9UUklHIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1VMUElfQlVTIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX0NPTk5fR1BJTyBpcyBub3Qgc2V0CkNPTkZJR19VU0Jf
QVJDSF9IQVNfSENEPXkKQ09ORklHX1VTQj15CkNPTkZJR19VU0JfUENJPXkKQ09ORklHX1VT
Ql9BTk5PVU5DRV9ORVdfREVWSUNFUz15CgojCiMgTWlzY2VsbGFuZW91cyBVU0Igb3B0aW9u
cwojCkNPTkZJR19VU0JfREVGQVVMVF9QRVJTSVNUPXkKIyBDT05GSUdfVVNCX0ZFV19JTklU
X1JFVFJJRVMgaXMgbm90IHNldAojIENPTkZJR19VU0JfRFlOQU1JQ19NSU5PUlMgaXMgbm90
IHNldAojIENPTkZJR19VU0JfT1RHIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX09UR19QUk9E
VUNUTElTVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9MRURTX1RSSUdHRVJfVVNCUE9SVCBp
cyBub3Qgc2V0CkNPTkZJR19VU0JfQVVUT1NVU1BFTkRfREVMQVk9MgpDT05GSUdfVVNCX01P
Tj15CgojCiMgVVNCIEhvc3QgQ29udHJvbGxlciBEcml2ZXJzCiMKIyBDT05GSUdfVVNCX0M2
N1gwMF9IQ0QgaXMgbm90IHNldApDT05GSUdfVVNCX1hIQ0lfSENEPXkKIyBDT05GSUdfVVNC
X1hIQ0lfREJHQ0FQIGlzIG5vdCBzZXQKQ09ORklHX1VTQl9YSENJX1BDST15CiMgQ09ORklH
X1VTQl9YSENJX1BDSV9SRU5FU0FTIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1hIQ0lfUExB
VEZPUk0gaXMgbm90IHNldApDT05GSUdfVVNCX0VIQ0lfSENEPXkKQ09ORklHX1VTQl9FSENJ
X1JPT1RfSFVCX1RUPXkKQ09ORklHX1VTQl9FSENJX1RUX05FV1NDSEVEPXkKQ09ORklHX1VT
Ql9FSENJX1BDST15CiMgQ09ORklHX1VTQl9FSENJX0ZTTCBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9FSENJX0hDRF9QTEFURk9STSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9PWFUyMTBI
UF9IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfSVNQMTE2WF9IQ0QgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfRk9URzIxMF9IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfTUFYMzQy
MV9IQ0QgaXMgbm90IHNldApDT05GSUdfVVNCX09IQ0lfSENEPXkKQ09ORklHX1VTQl9PSENJ
X0hDRF9QQ0k9eQojIENPTkZJR19VU0JfT0hDSV9IQ0RfUExBVEZPUk0gaXMgbm90IHNldApD
T05GSUdfVVNCX1VIQ0lfSENEPXkKIyBDT05GSUdfVVNCX1NMODExX0hDRCBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9SOEE2NjU5N19IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfSENE
X1RFU1RfTU9ERSBpcyBub3Qgc2V0CgojCiMgVVNCIERldmljZSBDbGFzcyBkcml2ZXJzCiMK
IyBDT05GSUdfVVNCX0FDTSBpcyBub3Qgc2V0CkNPTkZJR19VU0JfUFJJTlRFUj15CiMgQ09O
RklHX1VTQl9XRE0gaXMgbm90IHNldAojIENPTkZJR19VU0JfVE1DIGlzIG5vdCBzZXQKCiMK
IyBOT1RFOiBVU0JfU1RPUkFHRSBkZXBlbmRzIG9uIFNDU0kgYnV0IEJMS19ERVZfU0QgbWF5
CiMKCiMKIyBhbHNvIGJlIG5lZWRlZDsgc2VlIFVTQl9TVE9SQUdFIEhlbHAgZm9yIG1vcmUg
aW5mbwojCkNPTkZJR19VU0JfU1RPUkFHRT15CiMgQ09ORklHX1VTQl9TVE9SQUdFX0RFQlVH
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfUkVBTFRFSyBpcyBub3Qgc2V0CiMg
Q09ORklHX1VTQl9TVE9SQUdFX0RBVEFGQUIgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RP
UkFHRV9GUkVFQ09NIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfSVNEMjAwIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfVVNCQVQgaXMgbm90IHNldAojIENPTkZJ
R19VU0JfU1RPUkFHRV9TRERSMDkgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9T
RERSNTUgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9KVU1QU0hPVCBpcyBub3Qg
c2V0CiMgQ09ORklHX1VTQl9TVE9SQUdFX0FMQVVEQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TVE9SQUdFX09ORVRPVUNIIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfS0FS
TUEgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9DWVBSRVNTX0FUQUNCIGlzIG5v
dCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfRU5FX1VCNjI1MCBpcyBub3Qgc2V0CiMgQ09O
RklHX1VTQl9VQVMgaXMgbm90IHNldAoKIwojIFVTQiBJbWFnaW5nIGRldmljZXMKIwojIENP
TkZJR19VU0JfTURDODAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX01JQ1JPVEVLIGlzIG5v
dCBzZXQKIyBDT05GSUdfVVNCSVBfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9DRE5T
X1NVUFBPUlQgaXMgbm90IHNldAojIENPTkZJR19VU0JfTVVTQl9IRFJDIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX0RXQzMgaXMgbm90IHNldAojIENPTkZJR19VU0JfRFdDMiBpcyBub3Qg
c2V0CiMgQ09ORklHX1VTQl9DSElQSURFQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JU1Ax
NzYwIGlzIG5vdCBzZXQKCiMKIyBVU0IgcG9ydCBkcml2ZXJzCiMKQ09ORklHX1VTQl9TRVJJ
QUw9eQojIENPTkZJR19VU0JfU0VSSUFMX0NPTlNPTEUgaXMgbm90IHNldAojIENPTkZJR19V
U0JfU0VSSUFMX0dFTkVSSUMgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1NJTVBM
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfQUlSQ0FCTEUgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU0VSSUFMX0FSSzMxMTYgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VS
SUFMX0JFTEtJTiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfQ0gzNDEgaXMgbm90
IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1dISVRFSEVBVCBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9TRVJJQUxfRElHSV9BQ0NFTEVQT1JUIGlzIG5vdCBzZXQKQ09ORklHX1VTQl9TRVJJ
QUxfQ1AyMTBYPXkKQ09ORklHX1VTQl9TRVJJQUxfQ1lQUkVTU19NOD15CiMgQ09ORklHX1VT
Ql9TRVJJQUxfRU1QRUcgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0ZURElfU0lP
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9WSVNPUiBpcyBub3Qgc2V0CiMgQ09O
RklHX1VTQl9TRVJJQUxfSVBBUSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfSVIg
aXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0VER0VQT1JUIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NFUklBTF9FREdFUE9SVF9USSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9T
RVJJQUxfRjgxMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9GODE1M1ggaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0dBUk1JTiBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9TRVJJQUxfSVBXIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9JVVUgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0tFWVNQQU5fUERBIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NFUklBTF9LRVlTUEFOIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklB
TF9LTFNJIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9LT0JJTF9TQ1QgaXMgbm90
IHNldAojIENPTkZJR19VU0JfU0VSSUFMX01DVF9VMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NFUklBTF9NRVRSTyBpcyBub3Qgc2V0CkNPTkZJR19VU0JfU0VSSUFMX01PUzc3MjA9
eQpDT05GSUdfVVNCX1NFUklBTF9NT1M3ODQwPXkKIyBDT05GSUdfVVNCX1NFUklBTF9NWFVQ
T1JUIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9OQVZNQU4gaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU0VSSUFMX1BMMjMwMyBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJ
QUxfT1RJNjg1OCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfUUNBVVggaXMgbm90
IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1FVQUxDT01NIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NFUklBTF9TUENQOFg1IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9TQUZF
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9TSUVSUkFXSVJFTEVTUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfU1lNQk9MIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNC
X1NFUklBTF9USSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfQ1lCRVJKQUNLIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9PUFRJT04gaXMgbm90IHNldAojIENPTkZJ
R19VU0JfU0VSSUFMX09NTklORVQgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX09Q
VElDT04gaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1hTRU5TX01UIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9XSVNIQk9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfU1NVMTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9RVDIgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1VQRDc4RjA3MzAgaXMgbm90IHNldAojIENP
TkZJR19VU0JfU0VSSUFMX1hSIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9ERUJV
RyBpcyBub3Qgc2V0CgojCiMgVVNCIE1pc2NlbGxhbmVvdXMgZHJpdmVycwojCiMgQ09ORklH
X1VTQl9FTUk2MiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9FTUkyNiBpcyBub3Qgc2V0CiMg
Q09ORklHX1VTQl9BRFVUVVggaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VWU0VHIGlzIG5v
dCBzZXQKIyBDT05GSUdfVVNCX0xFR09UT1dFUiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9M
Q0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfQ1lQUkVTU19DWTdDNjMgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfQ1lUSEVSTSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JRE1PVVNFIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX0ZURElfRUxBTiBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9BUFBMRURJU1BMQVkgaXMgbm90IHNldAojIENPTkZJR19BUFBMRV9NRklfRkFTVENIQVJH
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TSVNVU0JWR0EgaXMgbm90IHNldAojIENPTkZJ
R19VU0JfTEQgaXMgbm90IHNldAojIENPTkZJR19VU0JfVFJBTkNFVklCUkFUT1IgaXMgbm90
IHNldAojIENPTkZJR19VU0JfSU9XQVJSSU9SIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1RF
U1QgaXMgbm90IHNldAojIENPTkZJR19VU0JfRUhTRVRfVEVTVF9GSVhUVVJFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX0lTSUdIVEZXIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1lVUkVY
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0VaVVNCX0ZYMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9IVUJfVVNCMjUxWEIgaXMgbm90IHNldAojIENPTkZJR19VU0JfSFNJQ19VU0IzNTAz
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0hTSUNfVVNCNDYwNCBpcyBub3Qgc2V0CiMgQ09O
RklHX1VTQl9MSU5LX0xBWUVSX1RFU1QgaXMgbm90IHNldAojIENPTkZJR19VU0JfQ0hBT1NL
RVkgaXMgbm90IHNldAoKIwojIFVTQiBQaHlzaWNhbCBMYXllciBkcml2ZXJzCiMKIyBDT05G
SUdfTk9QX1VTQl9YQ0VJViBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9HUElPX1ZCVVMgaXMg
bm90IHNldAojIENPTkZJR19VU0JfSVNQMTMwMSBpcyBub3Qgc2V0CiMgZW5kIG9mIFVTQiBQ
aHlzaWNhbCBMYXllciBkcml2ZXJzCgojIENPTkZJR19VU0JfR0FER0VUIGlzIG5vdCBzZXQK
IyBDT05GSUdfVFlQRUMgaXMgbm90IHNldAojIENPTkZJR19VU0JfUk9MRV9TV0lUQ0ggaXMg
bm90IHNldAojIENPTkZJR19NTUMgaXMgbm90IHNldAojIENPTkZJR19NRU1TVElDSyBpcyBu
b3Qgc2V0CkNPTkZJR19ORVdfTEVEUz15CkNPTkZJR19MRURTX0NMQVNTPXkKIyBDT05GSUdf
TEVEU19DTEFTU19GTEFTSCBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQ0xBU1NfTVVMVElD
T0xPUiBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQlJJR0hUTkVTU19IV19DSEFOR0VEIGlz
IG5vdCBzZXQKCiMKIyBMRUQgZHJpdmVycwojCiMgQ09ORklHX0xFRFNfQVBVIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTEVEU19MTTM1MzAgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xNMzUz
MiBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTE0zNjQyIGlzIG5vdCBzZXQKIyBDT05GSUdf
TEVEU19QQ0E5NTMyIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19HUElPIGlzIG5vdCBzZXQK
IyBDT05GSUdfTEVEU19MUDM5NDQgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xQMzk1MiBp
cyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTFA1MFhYIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVE
U19DTEVWT19NQUlMIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19QQ0E5NTVYIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTEVEU19QQ0E5NjNYIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19EQUMx
MjRTMDg1IGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19CRDI4MDIgaXMgbm90IHNldAojIENP
TkZJR19MRURTX0lOVEVMX1NTNDIwMCBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVENBNjUw
NyBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVExDNTkxWFggaXMgbm90IHNldAojIENPTkZJ
R19MRURTX0xNMzU1eCBpcyBub3Qgc2V0CgojCiMgTEVEIGRyaXZlciBmb3IgYmxpbmsoMSkg
VVNCIFJHQiBMRUQgaXMgdW5kZXIgU3BlY2lhbCBISUQgZHJpdmVycyAoSElEX1RISU5HTSkK
IwojIENPTkZJR19MRURTX0JMSU5LTSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTUxYQ1BM
RCBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTUxYUkVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
TEVEU19VU0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19OSUM3OEJYIGlzIG5vdCBzZXQK
IyBDT05GSUdfTEVEU19USV9MTVVfQ09NTU9OIGlzIG5vdCBzZXQKCiMKIyBGbGFzaCBhbmQg
VG9yY2ggTEVEIGRyaXZlcnMKIwoKIwojIExFRCBUcmlnZ2VycwojCkNPTkZJR19MRURTX1RS
SUdHRVJTPXkKIyBDT05GSUdfTEVEU19UUklHR0VSX1RJTUVSIGlzIG5vdCBzZXQKIyBDT05G
SUdfTEVEU19UUklHR0VSX09ORVNIT1QgaXMgbm90IHNldAojIENPTkZJR19MRURTX1RSSUdH
RVJfRElTSyBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVFJJR0dFUl9IRUFSVEJFQVQgaXMg
bm90IHNldAojIENPTkZJR19MRURTX1RSSUdHRVJfQkFDS0xJR0hUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTEVEU19UUklHR0VSX0NQVSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVFJJR0dF
Ul9BQ1RJVklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVFJJR0dFUl9HUElPIGlzIG5v
dCBzZXQKIyBDT05GSUdfTEVEU19UUklHR0VSX0RFRkFVTFRfT04gaXMgbm90IHNldAoKIwoj
IGlwdGFibGVzIHRyaWdnZXIgaXMgdW5kZXIgTmV0ZmlsdGVyIGNvbmZpZyAoTEVEIHRhcmdl
dCkKIwojIENPTkZJR19MRURTX1RSSUdHRVJfVFJBTlNJRU5UIGlzIG5vdCBzZXQKIyBDT05G
SUdfTEVEU19UUklHR0VSX0NBTUVSQSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVFJJR0dF
Ul9QQU5JQyBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfVFJJR0dFUl9ORVRERVYgaXMgbm90
IHNldAojIENPTkZJR19MRURTX1RSSUdHRVJfUEFUVEVSTiBpcyBub3Qgc2V0CkNPTkZJR19M
RURTX1RSSUdHRVJfQVVESU89eQojIENPTkZJR19MRURTX1RSSUdHRVJfVFRZIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQUNDRVNTSUJJTElUWSBpcyBub3Qgc2V0CiMgQ09ORklHX0lORklOSUJB
TkQgaXMgbm90IHNldApDT05GSUdfRURBQ19BVE9NSUNfU0NSVUI9eQpDT05GSUdfRURBQ19T
VVBQT1JUPXkKIyBDT05GSUdfRURBQyBpcyBub3Qgc2V0CkNPTkZJR19SVENfTElCPXkKQ09O
RklHX1JUQ19NQzE0NjgxOF9MSUI9eQpDT05GSUdfUlRDX0NMQVNTPXkKQ09ORklHX1JUQ19I
Q1RPU1lTPXkKQ09ORklHX1JUQ19IQ1RPU1lTX0RFVklDRT0icnRjMCIKQ09ORklHX1JUQ19T
WVNUT0hDPXkKQ09ORklHX1JUQ19TWVNUT0hDX0RFVklDRT0icnRjMCIKIyBDT05GSUdfUlRD
X0RFQlVHIGlzIG5vdCBzZXQKQ09ORklHX1JUQ19OVk1FTT15CgojCiMgUlRDIGludGVyZmFj
ZXMKIwpDT05GSUdfUlRDX0lOVEZfU1lTRlM9eQpDT05GSUdfUlRDX0lOVEZfUFJPQz15CkNP
TkZJR19SVENfSU5URl9ERVY9eQojIENPTkZJR19SVENfSU5URl9ERVZfVUlFX0VNVUwgaXMg
bm90IHNldAojIENPTkZJR19SVENfRFJWX1RFU1QgaXMgbm90IHNldAoKIwojIEkyQyBSVEMg
ZHJpdmVycwojCiMgQ09ORklHX1JUQ19EUlZfQUJCNVpFUzMgaXMgbm90IHNldAojIENPTkZJ
R19SVENfRFJWX0FCRU9aOSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfQUJYODBYIGlz
IG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzEzMDcgaXMgbm90IHNldAojIENPTkZJR19S
VENfRFJWX0RTMTM3NCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxNjcyIGlzIG5v
dCBzZXQKIyBDT05GSUdfUlRDX0RSVl9NQVg2OTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9SUzVDMzcyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjA4IGlzIG5v
dCBzZXQKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjAyMiBpcyBub3Qgc2V0CiMgQ09ORklHX1JU
Q19EUlZfWDEyMDUgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX1BDRjg1MjMgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX1BDRjg1MDYzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9QQ0Y4NTM2MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfUENGODU2MyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfUENGODU4MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JU
Q19EUlZfTTQxVDgwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9CUTMySyBpcyBub3Qg
c2V0CiMgQ09ORklHX1JUQ19EUlZfUzM1MzkwQSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19E
UlZfRk0zMTMwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SWDgwMTAgaXMgbm90IHNl
dAojIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZf
Ulg4MDI1IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9FTTMwMjcgaXMgbm90IHNldAoj
IENPTkZJR19SVENfRFJWX1JWMzAyOCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfUlYz
MDMyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SVjg4MDMgaXMgbm90IHNldAojIENP
TkZJR19SVENfRFJWX1NEMzA3OCBpcyBub3Qgc2V0CgojCiMgU1BJIFJUQyBkcml2ZXJzCiMK
IyBDT05GSUdfUlRDX0RSVl9NNDFUOTMgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX000
MVQ5NCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxMzAyIGlzIG5vdCBzZXQKIyBD
T05GSUdfUlRDX0RSVl9EUzEzMDUgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX0RTMTM0
MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxMzQ3IGlzIG5vdCBzZXQKIyBDT05G
SUdfUlRDX0RSVl9EUzEzOTAgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX01BWDY5MTYg
aXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX1I5NzAxIGlzIG5vdCBzZXQKIyBDT05GSUdf
UlRDX0RSVl9SWDQ1ODEgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX1JTNUMzNDggaXMg
bm90IHNldAojIENPTkZJR19SVENfRFJWX01BWDY5MDIgaXMgbm90IHNldAojIENPTkZJR19S
VENfRFJWX1BDRjIxMjMgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX01DUDc5NSBpcyBu
b3Qgc2V0CkNPTkZJR19SVENfSTJDX0FORF9TUEk9eQoKIwojIFNQSSBhbmQgSTJDIFJUQyBk
cml2ZXJzCiMKIyBDT05GSUdfUlRDX0RSVl9EUzMyMzIgaXMgbm90IHNldAojIENPTkZJR19S
VENfRFJWX1BDRjIxMjcgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX1JWMzAyOUMyIGlz
IG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SWDYxMTAgaXMgbm90IHNldAoKIwojIFBsYXRm
b3JtIFJUQyBkcml2ZXJzCiMKQ09ORklHX1JUQ19EUlZfQ01PUz15CiMgQ09ORklHX1JUQ19E
UlZfRFMxMjg2IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzE1MTEgaXMgbm90IHNl
dAojIENPTkZJR19SVENfRFJWX0RTMTU1MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZf
RFMxNjg1X0ZBTUlMWSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxNzQyIGlzIG5v
dCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzI0MDQgaXMgbm90IHNldAojIENPTkZJR19SVENf
RFJWX1NUSzE3VEE4IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9NNDhUODYgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX000OFQzNSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19E
UlZfTTQ4VDU5IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9NU002MjQyIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlRDX0RSVl9CUTQ4MDIgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJW
X1JQNUMwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfVjMwMjAgaXMgbm90IHNldAoK
IwojIG9uLUNQVSBSVEMgZHJpdmVycwojCiMgQ09ORklHX1JUQ19EUlZfRlRSVEMwMTAgaXMg
bm90IHNldAoKIwojIEhJRCBTZW5zb3IgUlRDIGRyaXZlcnMKIwojIENPTkZJR19SVENfRFJW
X0dPTERGSVNIIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1BREVWSUNFUyBpcyBub3Qgc2V0Cgoj
CiMgRE1BQlVGIG9wdGlvbnMKIwpDT05GSUdfU1lOQ19GSUxFPXkKIyBDT05GSUdfU1dfU1lO
QyBpcyBub3Qgc2V0CiMgQ09ORklHX1VETUFCVUYgaXMgbm90IHNldAojIENPTkZJR19ETUFC
VUZfTU9WRV9OT1RJRlkgaXMgbm90IHNldAojIENPTkZJR19ETUFCVUZfREVCVUcgaXMgbm90
IHNldAojIENPTkZJR19ETUFCVUZfU0VMRlRFU1RTIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1B
QlVGX0hFQVBTIGlzIG5vdCBzZXQKIyBlbmQgb2YgRE1BQlVGIG9wdGlvbnMKCiMgQ09ORklH
X0FVWERJU1BMQVkgaXMgbm90IHNldAojIENPTkZJR19VSU8gaXMgbm90IHNldAojIENPTkZJ
R19WRklPIGlzIG5vdCBzZXQKIyBDT05GSUdfVklSVF9EUklWRVJTIGlzIG5vdCBzZXQKQ09O
RklHX1ZJUlRJT19NRU5VPXkKIyBDT05GSUdfVklSVElPX1BDSSBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJUlRJT19NTUlPIGlzIG5vdCBzZXQKIyBDT05GSUdfVkRQQSBpcyBub3Qgc2V0CkNP
TkZJR19WSE9TVF9NRU5VPXkKIyBDT05GSUdfVkhPU1RfTkVUIGlzIG5vdCBzZXQKIyBDT05G
SUdfVkhPU1RfQ1JPU1NfRU5ESUFOX0xFR0FDWSBpcyBub3Qgc2V0CgojCiMgTWljcm9zb2Z0
IEh5cGVyLVYgZ3Vlc3Qgc3VwcG9ydAojCiMgQ09ORklHX0hZUEVSViBpcyBub3Qgc2V0CiMg
ZW5kIG9mIE1pY3Jvc29mdCBIeXBlci1WIGd1ZXN0IHN1cHBvcnQKCiMKIyBYZW4gZHJpdmVy
IHN1cHBvcnQKIwpDT05GSUdfWEVOX0JBTExPT049eQpDT05GSUdfWEVOX1NDUlVCX1BBR0VT
X0RFRkFVTFQ9eQpDT05GSUdfWEVOX0RFVl9FVlRDSE49eQpDT05GSUdfWEVOX0JBQ0tFTkQ9
eQpDT05GSUdfWEVORlM9eQpDT05GSUdfWEVOX0NPTVBBVF9YRU5GUz15CkNPTkZJR19YRU5f
U1lTX0hZUEVSVklTT1I9eQpDT05GSUdfWEVOX1hFTkJVU19GUk9OVEVORD15CkNPTkZJR19Y
RU5fR05UREVWPXkKQ09ORklHX1hFTl9HTlRERVZfRE1BQlVGPXkKQ09ORklHX1hFTl9HUkFO
VF9ERVZfQUxMT0M9eQpDT05GSUdfWEVOX0dSQU5UX0RNQV9BTExPQz15CkNPTkZJR19TV0lP
VExCX1hFTj15CkNPTkZJR19YRU5fUENJREVWX0JBQ0tFTkQ9eQojIENPTkZJR19YRU5fUFZD
QUxMU19GUk9OVEVORCBpcyBub3Qgc2V0CiMgQ09ORklHX1hFTl9QVkNBTExTX0JBQ0tFTkQg
aXMgbm90IHNldApDT05GSUdfWEVOX1BSSVZDTUQ9eQpDT05GSUdfWEVOX0FDUElfUFJPQ0VT
U09SPXkKIyBDT05GSUdfWEVOX01DRV9MT0cgaXMgbm90IHNldApDT05GSUdfWEVOX0hBVkVf
UFZNTVU9eQpDT05GSUdfWEVOX0FVVE9fWExBVEU9eQpDT05GSUdfWEVOX0FDUEk9eQpDT05G
SUdfWEVOX1NZTVM9eQpDT05GSUdfWEVOX0hBVkVfVlBNVT15CiMgZW5kIG9mIFhlbiBkcml2
ZXIgc3VwcG9ydAoKIyBDT05GSUdfR1JFWUJVUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NPTUVE
SSBpcyBub3Qgc2V0CiMgQ09ORklHX1NUQUdJTkcgaXMgbm90IHNldAojIENPTkZJR19YODZf
UExBVEZPUk1fREVWSUNFUyBpcyBub3Qgc2V0CkNPTkZJR19QTUNfQVRPTT15CiMgQ09ORklH
X0NIUk9NRV9QTEFURk9STVMgaXMgbm90IHNldAojIENPTkZJR19NRUxMQU5PWF9QTEFURk9S
TSBpcyBub3Qgc2V0CkNPTkZJR19TVVJGQUNFX1BMQVRGT1JNUz15CiMgQ09ORklHX1NVUkZB
Q0VfM19QT1dFUl9PUFJFR0lPTiBpcyBub3Qgc2V0CiMgQ09ORklHX1NVUkZBQ0VfR1BFIGlz
IG5vdCBzZXQKIyBDT05GSUdfU1VSRkFDRV9IT1RQTFVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
U1VSRkFDRV9QUk8zX0JVVFRPTiBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0NMSz15CkNPTkZJ
R19DTEtERVZfTE9PS1VQPXkKQ09ORklHX0hBVkVfQ0xLX1BSRVBBUkU9eQpDT05GSUdfQ09N
TU9OX0NMSz15CiMgQ09ORklHX0NPTU1PTl9DTEtfTUFYOTQ4NSBpcyBub3Qgc2V0CiMgQ09O
RklHX0NPTU1PTl9DTEtfU0k1MzQxIGlzIG5vdCBzZXQKIyBDT05GSUdfQ09NTU9OX0NMS19T
STUzNTEgaXMgbm90IHNldAojIENPTkZJR19DT01NT05fQ0xLX1NJNTQ0IGlzIG5vdCBzZXQK
IyBDT05GSUdfQ09NTU9OX0NMS19DRENFNzA2IGlzIG5vdCBzZXQKIyBDT05GSUdfQ09NTU9O
X0NMS19DUzIwMDBfQ1AgaXMgbm90IHNldAojIENPTkZJR19YSUxJTlhfVkNVIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSFdTUElOTE9DSyBpcyBub3Qgc2V0CgojCiMgQ2xvY2sgU291cmNlIGRy
aXZlcnMKIwpDT05GSUdfQ0xLRVZUX0k4MjUzPXkKQ09ORklHX0k4MjUzX0xPQ0s9eQpDT05G
SUdfQ0xLQkxEX0k4MjUzPXkKIyBlbmQgb2YgQ2xvY2sgU291cmNlIGRyaXZlcnMKCkNPTkZJ
R19NQUlMQk9YPXkKQ09ORklHX1BDQz15CiMgQ09ORklHX0FMVEVSQV9NQk9YIGlzIG5vdCBz
ZXQKQ09ORklHX0lPTU1VX0lPVkE9eQpDT05GSUdfSU9NTVVfQVBJPXkKQ09ORklHX0lPTU1V
X1NVUFBPUlQ9eQoKIwojIEdlbmVyaWMgSU9NTVUgUGFnZXRhYmxlIFN1cHBvcnQKIwpDT05G
SUdfSU9NTVVfSU9fUEdUQUJMRT15CiMgZW5kIG9mIEdlbmVyaWMgSU9NTVUgUGFnZXRhYmxl
IFN1cHBvcnQKCiMgQ09ORklHX0lPTU1VX0RFQlVHRlMgaXMgbm90IHNldAojIENPTkZJR19J
T01NVV9ERUZBVUxUX1BBU1NUSFJPVUdIIGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX0RNQT15
CkNPTkZJR19BTURfSU9NTVU9eQojIENPTkZJR19BTURfSU9NTVVfVjIgaXMgbm90IHNldApD
T05GSUdfRE1BUl9UQUJMRT15CiMgQ09ORklHX0lOVEVMX0lPTU1VIGlzIG5vdCBzZXQKQ09O
RklHX0lSUV9SRU1BUD15CgojCiMgUmVtb3RlcHJvYyBkcml2ZXJzCiMKIyBDT05GSUdfUkVN
T1RFUFJPQyBpcyBub3Qgc2V0CiMgZW5kIG9mIFJlbW90ZXByb2MgZHJpdmVycwoKIwojIFJw
bXNnIGRyaXZlcnMKIwojIENPTkZJR19SUE1TR19RQ09NX0dMSU5LX1JQTSBpcyBub3Qgc2V0
CiMgQ09ORklHX1JQTVNHX1ZJUlRJTyBpcyBub3Qgc2V0CiMgZW5kIG9mIFJwbXNnIGRyaXZl
cnMKCiMgQ09ORklHX1NPVU5EV0lSRSBpcyBub3Qgc2V0CgojCiMgU09DIChTeXN0ZW0gT24g
Q2hpcCkgc3BlY2lmaWMgRHJpdmVycwojCgojCiMgQW1sb2dpYyBTb0MgZHJpdmVycwojCiMg
ZW5kIG9mIEFtbG9naWMgU29DIGRyaXZlcnMKCiMKIyBCcm9hZGNvbSBTb0MgZHJpdmVycwoj
CiMgZW5kIG9mIEJyb2FkY29tIFNvQyBkcml2ZXJzCgojCiMgTlhQL0ZyZWVzY2FsZSBRb3JJ
USBTb0MgZHJpdmVycwojCiMgZW5kIG9mIE5YUC9GcmVlc2NhbGUgUW9ySVEgU29DIGRyaXZl
cnMKCiMKIyBpLk1YIFNvQyBkcml2ZXJzCiMKIyBlbmQgb2YgaS5NWCBTb0MgZHJpdmVycwoK
IwojIEVuYWJsZSBMaXRlWCBTb0MgQnVpbGRlciBzcGVjaWZpYyBkcml2ZXJzCiMKIyBlbmQg
b2YgRW5hYmxlIExpdGVYIFNvQyBCdWlsZGVyIHNwZWNpZmljIGRyaXZlcnMKCiMKIyBRdWFs
Y29tbSBTb0MgZHJpdmVycwojCiMgZW5kIG9mIFF1YWxjb21tIFNvQyBkcml2ZXJzCgojIENP
TkZJR19TT0NfVEkgaXMgbm90IHNldAoKIwojIFhpbGlueCBTb0MgZHJpdmVycwojCiMgZW5k
IG9mIFhpbGlueCBTb0MgZHJpdmVycwojIGVuZCBvZiBTT0MgKFN5c3RlbSBPbiBDaGlwKSBz
cGVjaWZpYyBEcml2ZXJzCgojIENPTkZJR19QTV9ERVZGUkVRIGlzIG5vdCBzZXQKIyBDT05G
SUdfRVhUQ09OIGlzIG5vdCBzZXQKIyBDT05GSUdfTUVNT1JZIGlzIG5vdCBzZXQKIyBDT05G
SUdfSUlPIGlzIG5vdCBzZXQKIyBDT05GSUdfTlRCIGlzIG5vdCBzZXQKIyBDT05GSUdfVk1F
X0JVUyBpcyBub3Qgc2V0CiMgQ09ORklHX1BXTSBpcyBub3Qgc2V0CgojCiMgSVJRIGNoaXAg
c3VwcG9ydAojCiMgZW5kIG9mIElSUSBjaGlwIHN1cHBvcnQKCiMgQ09ORklHX0lQQUNLX0JV
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1JFU0VUX0NPTlRST0xMRVIgaXMgbm90IHNldAoKIwoj
IFBIWSBTdWJzeXN0ZW0KIwpDT05GSUdfR0VORVJJQ19QSFk9eQojIENPTkZJR19VU0JfTEdN
X1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0JDTV9LT05BX1VTQjJfUEhZIGlzIG5vdCBzZXQK
IyBDT05GSUdfUEhZX1BYQV8yOE5NX0hTSUMgaXMgbm90IHNldAojIENPTkZJR19QSFlfUFhB
XzI4Tk1fVVNCMiBpcyBub3Qgc2V0CiMgQ09ORklHX1BIWV9JTlRFTF9MR01fRU1NQyBpcyBu
b3Qgc2V0CiMgZW5kIG9mIFBIWSBTdWJzeXN0ZW0KCiMgQ09ORklHX1BPV0VSQ0FQIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUNCIGlzIG5vdCBzZXQKCiMKIyBQZXJmb3JtYW5jZSBtb25pdG9y
IHN1cHBvcnQKIwojIGVuZCBvZiBQZXJmb3JtYW5jZSBtb25pdG9yIHN1cHBvcnQKCkNPTkZJ
R19SQVM9eQojIENPTkZJR19VU0I0IGlzIG5vdCBzZXQKCiMKIyBBbmRyb2lkCiMKIyBDT05G
SUdfQU5EUk9JRCBpcyBub3Qgc2V0CiMgZW5kIG9mIEFuZHJvaWQKCiMgQ09ORklHX0xJQk5W
RElNTSBpcyBub3Qgc2V0CkNPTkZJR19EQVg9eQojIENPTkZJR19ERVZfREFYIGlzIG5vdCBz
ZXQKQ09ORklHX05WTUVNPXkKQ09ORklHX05WTUVNX1NZU0ZTPXkKIyBDT05GSUdfTlZNRU1f
Uk1FTSBpcyBub3Qgc2V0CgojCiMgSFcgdHJhY2luZyBzdXBwb3J0CiMKIyBDT05GSUdfU1RN
IGlzIG5vdCBzZXQKIyBDT05GSUdfSU5URUxfVEggaXMgbm90IHNldAojIGVuZCBvZiBIVyB0
cmFjaW5nIHN1cHBvcnQKCiMgQ09ORklHX0ZQR0EgaXMgbm90IHNldAojIENPTkZJR19URUUg
aXMgbm90IHNldAojIENPTkZJR19VTklTWVNfVklTT1JCVVMgaXMgbm90IHNldAojIENPTkZJ
R19TSU9YIGlzIG5vdCBzZXQKIyBDT05GSUdfU0xJTUJVUyBpcyBub3Qgc2V0CiMgQ09ORklH
X0lOVEVSQ09OTkVDVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NPVU5URVIgaXMgbm90IHNldAoj
IGVuZCBvZiBEZXZpY2UgRHJpdmVycwoKIwojIEZpbGUgc3lzdGVtcwojCkNPTkZJR19EQ0FD
SEVfV09SRF9BQ0NFU1M9eQpDT05GSUdfVkFMSURBVEVfRlNfUEFSU0VSPXkKQ09ORklHX0ZT
X0lPTUFQPXkKIyBDT05GSUdfRVhUMl9GUyBpcyBub3Qgc2V0CkNPTkZJR19FWFQzX0ZTPXkK
Q09ORklHX0VYVDNfRlNfUE9TSVhfQUNMPXkKQ09ORklHX0VYVDNfRlNfU0VDVVJJVFk9eQpD
T05GSUdfRVhUNF9GUz15CkNPTkZJR19FWFQ0X1VTRV9GT1JfRVhUMj15CkNPTkZJR19FWFQ0
X0ZTX1BPU0lYX0FDTD15CkNPTkZJR19FWFQ0X0ZTX1NFQ1VSSVRZPXkKQ09ORklHX0VYVDRf
REVCVUc9eQpDT05GSUdfSkJEMj15CkNPTkZJR19KQkQyX0RFQlVHPXkKQ09ORklHX0ZTX01C
Q0FDSEU9eQojIENPTkZJR19SRUlTRVJGU19GUyBpcyBub3Qgc2V0CiMgQ09ORklHX0pGU19G
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1hGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19HRlMyX0ZT
PXkKQ09ORklHX0JUUkZTX0ZTPXkKQ09ORklHX0JUUkZTX0ZTX1BPU0lYX0FDTD15CiMgQ09O
RklHX0JUUkZTX0ZTX0NIRUNLX0lOVEVHUklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX0JUUkZT
X0ZTX1JVTl9TQU5JVFlfVEVTVFMgaXMgbm90IHNldAojIENPTkZJR19CVFJGU19ERUJVRyBp
cyBub3Qgc2V0CiMgQ09ORklHX0JUUkZTX0FTU0VSVCBpcyBub3Qgc2V0CiMgQ09ORklHX0JU
UkZTX0ZTX1JFRl9WRVJJRlkgaXMgbm90IHNldAojIENPTkZJR19OSUxGUzJfRlMgaXMgbm90
IHNldAojIENPTkZJR19GMkZTX0ZTIGlzIG5vdCBzZXQKIyBDT05GSUdfRlNfREFYIGlzIG5v
dCBzZXQKQ09ORklHX0ZTX1BPU0lYX0FDTD15CkNPTkZJR19FWFBPUlRGUz15CiMgQ09ORklH
X0VYUE9SVEZTX0JMT0NLX09QUyBpcyBub3Qgc2V0CkNPTkZJR19GSUxFX0xPQ0tJTkc9eQpD
T05GSUdfTUFOREFUT1JZX0ZJTEVfTE9DS0lORz15CiMgQ09ORklHX0ZTX0VOQ1JZUFRJT04g
aXMgbm90IHNldAojIENPTkZJR19GU19WRVJJVFkgaXMgbm90IHNldApDT05GSUdfRlNOT1RJ
Rlk9eQpDT05GSUdfRE5PVElGWT15CkNPTkZJR19JTk9USUZZX1VTRVI9eQpDT05GSUdfRkFO
T1RJRlk9eQpDT05GSUdfUVVPVEE9eQpDT05GSUdfUVVPVEFfTkVUTElOS19JTlRFUkZBQ0U9
eQojIENPTkZJR19QUklOVF9RVU9UQV9XQVJOSU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfUVVP
VEFfREVCVUcgaXMgbm90IHNldApDT05GSUdfUVVPVEFfVFJFRT15CiMgQ09ORklHX1FGTVRf
VjEgaXMgbm90IHNldApDT05GSUdfUUZNVF9WMj15CkNPTkZJR19RVU9UQUNUTD15CkNPTkZJ
R19BVVRPRlM0X0ZTPXkKQ09ORklHX0FVVE9GU19GUz15CkNPTkZJR19GVVNFX0ZTPXkKIyBD
T05GSUdfQ1VTRSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJUlRJT19GUyBpcyBub3Qgc2V0CkNP
TkZJR19PVkVSTEFZX0ZTPXkKIyBDT05GSUdfT1ZFUkxBWV9GU19SRURJUkVDVF9ESVIgaXMg
bm90IHNldApDT05GSUdfT1ZFUkxBWV9GU19SRURJUkVDVF9BTFdBWVNfRk9MTE9XPXkKIyBD
T05GSUdfT1ZFUkxBWV9GU19JTkRFWCBpcyBub3Qgc2V0CiMgQ09ORklHX09WRVJMQVlfRlNf
WElOT19BVVRPIGlzIG5vdCBzZXQKIyBDT05GSUdfT1ZFUkxBWV9GU19NRVRBQ09QWSBpcyBu
b3Qgc2V0CgojCiMgQ2FjaGVzCiMKQ09ORklHX05FVEZTX1NVUFBPUlQ9eQojIENPTkZJR19O
RVRGU19TVEFUUyBpcyBub3Qgc2V0CkNPTkZJR19GU0NBQ0hFPXkKQ09ORklHX0ZTQ0FDSEVf
U1RBVFM9eQpDT05GSUdfRlNDQUNIRV9ISVNUT0dSQU09eQojIENPTkZJR19GU0NBQ0hFX0RF
QlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfRlNDQUNIRV9PQkpFQ1RfTElTVCBpcyBub3Qgc2V0
CiMgQ09ORklHX0NBQ0hFRklMRVMgaXMgbm90IHNldAojIGVuZCBvZiBDYWNoZXMKCiMKIyBD
RC1ST00vRFZEIEZpbGVzeXN0ZW1zCiMKQ09ORklHX0lTTzk2NjBfRlM9eQpDT05GSUdfSk9M
SUVUPXkKQ09ORklHX1pJU09GUz15CkNPTkZJR19VREZfRlM9eQojIGVuZCBvZiBDRC1ST00v
RFZEIEZpbGVzeXN0ZW1zCgojCiMgRE9TL0ZBVC9FWEZBVC9OVCBGaWxlc3lzdGVtcwojCkNP
TkZJR19GQVRfRlM9eQpDT05GSUdfTVNET1NfRlM9eQpDT05GSUdfVkZBVF9GUz15CkNPTkZJ
R19GQVRfREVGQVVMVF9DT0RFUEFHRT00MzcKQ09ORklHX0ZBVF9ERUZBVUxUX0lPQ0hBUlNF
VD0iaXNvODg1OS0xIgojIENPTkZJR19GQVRfREVGQVVMVF9VVEY4IGlzIG5vdCBzZXQKQ09O
RklHX0VYRkFUX0ZTPXkKQ09ORklHX0VYRkFUX0RFRkFVTFRfSU9DSEFSU0VUPSJ1dGY4IgpD
T05GSUdfTlRGU19GUz15CiMgQ09ORklHX05URlNfREVCVUcgaXMgbm90IHNldApDT05GSUdf
TlRGU19SVz15CiMgZW5kIG9mIERPUy9GQVQvRVhGQVQvTlQgRmlsZXN5c3RlbXMKCiMKIyBQ
c2V1ZG8gZmlsZXN5c3RlbXMKIwpDT05GSUdfUFJPQ19GUz15CkNPTkZJR19QUk9DX0tDT1JF
PXkKQ09ORklHX1BST0NfU1lTQ1RMPXkKQ09ORklHX1BST0NfUEFHRV9NT05JVE9SPXkKIyBD
T05GSUdfUFJPQ19DSElMRFJFTiBpcyBub3Qgc2V0CkNPTkZJR19QUk9DX1BJRF9BUkNIX1NU
QVRVUz15CkNPTkZJR19LRVJORlM9eQpDT05GSUdfU1lTRlM9eQpDT05GSUdfVE1QRlM9eQpD
T05GSUdfVE1QRlNfUE9TSVhfQUNMPXkKQ09ORklHX1RNUEZTX1hBVFRSPXkKIyBDT05GSUdf
VE1QRlNfSU5PREU2NCBpcyBub3Qgc2V0CkNPTkZJR19IVUdFVExCRlM9eQpDT05GSUdfSFVH
RVRMQl9QQUdFPXkKQ09ORklHX01FTUZEX0NSRUFURT15CkNPTkZJR19BUkNIX0hBU19HSUdB
TlRJQ19QQUdFPXkKIyBDT05GSUdfQ09ORklHRlNfRlMgaXMgbm90IHNldAojIGVuZCBvZiBQ
c2V1ZG8gZmlsZXN5c3RlbXMKCiMgQ09ORklHX01JU0NfRklMRVNZU1RFTVMgaXMgbm90IHNl
dApDT05GSUdfTkVUV09SS19GSUxFU1lTVEVNUz15CiMgQ09ORklHX05GU19GUyBpcyBub3Qg
c2V0CiMgQ09ORklHX05GU0QgaXMgbm90IHNldApDT05GSUdfQ0VQSF9GUz15CkNPTkZJR19D
RVBIX0ZTQ0FDSEU9eQpDT05GSUdfQ0VQSF9GU19QT1NJWF9BQ0w9eQpDT05GSUdfQ0lGUz15
CkNPTkZJR19DSUZTX1NUQVRTMj15CkNPTkZJR19DSUZTX0FMTE9XX0lOU0VDVVJFX0xFR0FD
WT15CiMgQ09ORklHX0NJRlNfV0VBS19QV19IQVNIIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lG
U19VUENBTEwgaXMgbm90IHNldAojIENPTkZJR19DSUZTX1hBVFRSIGlzIG5vdCBzZXQKQ09O
RklHX0NJRlNfREVCVUc9eQojIENPTkZJR19DSUZTX0RFQlVHMiBpcyBub3Qgc2V0CiMgQ09O
RklHX0NJRlNfREVCVUdfRFVNUF9LRVlTIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lGU19ERlNf
VVBDQUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lGU19TV05fVVBDQUxMIGlzIG5vdCBzZXQK
IyBDT05GSUdfQ0lGU19GU0NBQ0hFIGlzIG5vdCBzZXQKIyBDT05GSUdfQ09EQV9GUyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0FGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19OTFM9eQpDT05GSUdf
TkxTX0RFRkFVTFQ9InV0ZjgiCkNPTkZJR19OTFNfQ09ERVBBR0VfNDM3PXkKIyBDT05GSUdf
TkxTX0NPREVQQUdFXzczNyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV83NzUg
aXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfODUwIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkxTX0NPREVQQUdFXzg1MiBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84
NTUgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfODU3IGlzIG5vdCBzZXQKIyBD
T05GSUdfTkxTX0NPREVQQUdFXzg2MCBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFH
RV84NjEgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfODYyIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkxTX0NPREVQQUdFXzg2MyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RF
UEFHRV84NjQgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfODY1IGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2NiBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19D
T0RFUEFHRV84NjkgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfOTM2IGlzIG5v
dCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzk1MCBpcyBub3Qgc2V0CiMgQ09ORklHX05M
U19DT0RFUEFHRV85MzIgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfOTQ5IGlz
IG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg3NCBpcyBub3Qgc2V0CiMgQ09ORklH
X05MU19JU084ODU5XzggaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfMTI1MCBp
cyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV8xMjUxIGlzIG5vdCBzZXQKQ09ORklH
X05MU19BU0NJST15CkNPTkZJR19OTFNfSVNPODg1OV8xPXkKIyBDT05GSUdfTkxTX0lTTzg4
NTlfMiBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5XzMgaXMgbm90IHNldAojIENP
TkZJR19OTFNfSVNPODg1OV80IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfNSBp
cyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5XzYgaXMgbm90IHNldAojIENPTkZJR19O
TFNfSVNPODg1OV83IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfOSBpcyBub3Qg
c2V0CiMgQ09ORklHX05MU19JU084ODU5XzEzIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lT
Tzg4NTlfMTQgaXMgbm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV8xNSBpcyBub3Qgc2V0
CiMgQ09ORklHX05MU19LT0k4X1IgaXMgbm90IHNldAojIENPTkZJR19OTFNfS09JOF9VIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19ST01BTiBpcyBub3Qgc2V0CiMgQ09ORklHX05M
U19NQUNfQ0VMVElDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19DRU5URVVSTyBpcyBu
b3Qgc2V0CiMgQ09ORklHX05MU19NQUNfQ1JPQVRJQU4gaXMgbm90IHNldAojIENPTkZJR19O
TFNfTUFDX0NZUklMTElDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19HQUVMSUMgaXMg
bm90IHNldAojIENPTkZJR19OTFNfTUFDX0dSRUVLIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxT
X01BQ19JQ0VMQU5EIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19JTlVJVCBpcyBub3Qg
c2V0CiMgQ09ORklHX05MU19NQUNfUk9NQU5JQU4gaXMgbm90IHNldAojIENPTkZJR19OTFNf
TUFDX1RVUktJU0ggaXMgbm90IHNldApDT05GSUdfTkxTX1VURjg9eQojIENPTkZJR19VTklD
T0RFIGlzIG5vdCBzZXQKQ09ORklHX0lPX1dRPXkKIyBlbmQgb2YgRmlsZSBzeXN0ZW1zCgoj
CiMgU2VjdXJpdHkgb3B0aW9ucwojCkNPTkZJR19LRVlTPXkKQ09ORklHX0tFWVNfUkVRVUVT
VF9DQUNIRT15CiMgQ09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90IHNldApDT05G
SUdfRU5DUllQVEVEX0tFWVM9eQojIENPTkZJR19LRVlfREhfT1BFUkFUSU9OUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFQ1VSSVRZX0RNRVNHX1JFU1RSSUNUIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VDVVJJVFkgaXMgbm90IHNldAojIENPTkZJR19TRUNVUklUWUZTIGlzIG5vdCBzZXQK
Q09ORklHX1BBR0VfVEFCTEVfSVNPTEFUSU9OPXkKQ09ORklHX0hBVkVfSEFSREVORURfVVNF
UkNPUFlfQUxMT0NBVE9SPXkKQ09ORklHX0hBUkRFTkVEX1VTRVJDT1BZPXkKQ09ORklHX0hB
UkRFTkVEX1VTRVJDT1BZX0ZBTExCQUNLPXkKIyBDT05GSUdfRk9SVElGWV9TT1VSQ0UgaXMg
bm90IHNldAojIENPTkZJR19TVEFUSUNfVVNFUk1PREVIRUxQRVIgaXMgbm90IHNldApDT05G
SUdfREVGQVVMVF9TRUNVUklUWV9EQUM9eQpDT05GSUdfTFNNPSJ5YW1hLGxvYWRwaW4sc2Fm
ZXNldGlkLGludGVncml0eSxzZWxpbnV4LHNtYWNrLHRvbW95byxhcHBhcm1vciIKCiMKIyBL
ZXJuZWwgaGFyZGVuaW5nIG9wdGlvbnMKIwoKIwojIE1lbW9yeSBpbml0aWFsaXphdGlvbgoj
CkNPTkZJR19JTklUX1NUQUNLX05PTkU9eQojIENPTkZJR19JTklUX09OX0FMTE9DX0RFRkFV
TFRfT04gaXMgbm90IHNldAojIENPTkZJR19JTklUX09OX0ZSRUVfREVGQVVMVF9PTiBpcyBu
b3Qgc2V0CiMgZW5kIG9mIE1lbW9yeSBpbml0aWFsaXphdGlvbgojIGVuZCBvZiBLZXJuZWwg
aGFyZGVuaW5nIG9wdGlvbnMKIyBlbmQgb2YgU2VjdXJpdHkgb3B0aW9ucwoKQ09ORklHX1hP
Ul9CTE9DS1M9eQpDT05GSUdfQ1JZUFRPPXkKCiMKIyBDcnlwdG8gY29yZSBvciBoZWxwZXIK
IwpDT05GSUdfQ1JZUFRPX0FMR0FQST15CkNPTkZJR19DUllQVE9fQUxHQVBJMj15CkNPTkZJ
R19DUllQVE9fQUVBRD15CkNPTkZJR19DUllQVE9fQUVBRDI9eQpDT05GSUdfQ1JZUFRPX1NL
Q0lQSEVSPXkKQ09ORklHX0NSWVBUT19TS0NJUEhFUjI9eQpDT05GSUdfQ1JZUFRPX0hBU0g9
eQpDT05GSUdfQ1JZUFRPX0hBU0gyPXkKQ09ORklHX0NSWVBUT19STkc9eQpDT05GSUdfQ1JZ
UFRPX1JORzI9eQpDT05GSUdfQ1JZUFRPX1JOR19ERUZBVUxUPXkKQ09ORklHX0NSWVBUT19B
S0NJUEhFUjI9eQpDT05GSUdfQ1JZUFRPX0FLQ0lQSEVSPXkKQ09ORklHX0NSWVBUT19LUFAy
PXkKQ09ORklHX0NSWVBUT19LUFA9eQpDT05GSUdfQ1JZUFRPX0FDT01QMj15CkNPTkZJR19D
UllQVE9fTUFOQUdFUj15CkNPTkZJR19DUllQVE9fTUFOQUdFUjI9eQojIENPTkZJR19DUllQ
VE9fVVNFUiBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fTUFOQUdFUl9ESVNBQkxFX1RFU1RT
PXkKQ09ORklHX0NSWVBUT19HRjEyOE1VTD15CkNPTkZJR19DUllQVE9fTlVMTD15CkNPTkZJ
R19DUllQVE9fTlVMTDI9eQojIENPTkZJR19DUllQVE9fUENSWVBUIGlzIG5vdCBzZXQKQ09O
RklHX0NSWVBUT19DUllQVEQ9eQpDT05GSUdfQ1JZUFRPX0FVVEhFTkM9eQojIENPTkZJR19D
UllQVE9fVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fU0lNRD15CgojCiMgUHVibGlj
LWtleSBjcnlwdG9ncmFwaHkKIwpDT05GSUdfQ1JZUFRPX1JTQT15CiMgQ09ORklHX0NSWVBU
T19ESCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fRUNDPXkKQ09ORklHX0NSWVBUT19FQ0RI
PXkKQ09ORklHX0NSWVBUT19FQ0RTQT15CiMgQ09ORklHX0NSWVBUT19FQ1JEU0EgaXMgbm90
IHNldAojIENPTkZJR19DUllQVE9fU00yIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0NV
UlZFMjU1MTkgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ1VSVkUyNTUxOV9YODYgaXMg
bm90IHNldAoKIwojIEF1dGhlbnRpY2F0ZWQgRW5jcnlwdGlvbiB3aXRoIEFzc29jaWF0ZWQg
RGF0YQojCkNPTkZJR19DUllQVE9fQ0NNPXkKQ09ORklHX0NSWVBUT19HQ009eQojIENPTkZJ
R19DUllQVE9fQ0hBQ0hBMjBQT0xZMTMwNSBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19B
RUdJUzEyOCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19BRUdJUzEyOF9BRVNOSV9TU0Uy
IGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19TRVFJVj15CkNPTkZJR19DUllQVE9fRUNIQUlO
SVY9eQoKIwojIEJsb2NrIG1vZGVzCiMKQ09ORklHX0NSWVBUT19DQkM9eQojIENPTkZJR19D
UllQVE9fQ0ZCIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19DVFI9eQpDT05GSUdfQ1JZUFRP
X0NUUz15CkNPTkZJR19DUllQVE9fRUNCPXkKQ09ORklHX0NSWVBUT19MUlc9eQojIENPTkZJ
R19DUllQVE9fT0ZCIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1BDQkMgaXMgbm90IHNl
dApDT05GSUdfQ1JZUFRPX1hUUz15CiMgQ09ORklHX0NSWVBUT19LRVlXUkFQIGlzIG5vdCBz
ZXQKQ09ORklHX0NSWVBUT19OSFBPTFkxMzA1PXkKQ09ORklHX0NSWVBUT19OSFBPTFkxMzA1
X1NTRTI9eQpDT05GSUdfQ1JZUFRPX05IUE9MWTEzMDVfQVZYMj15CiMgQ09ORklHX0NSWVBU
T19BRElBTlRVTSBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fRVNTSVY9eQoKIwojIEhhc2gg
bW9kZXMKIwpDT05GSUdfQ1JZUFRPX0NNQUM9eQpDT05GSUdfQ1JZUFRPX0hNQUM9eQojIENP
TkZJR19DUllQVE9fWENCQyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19WTUFDIGlzIG5v
dCBzZXQKCiMKIyBEaWdlc3QKIwpDT05GSUdfQ1JZUFRPX0NSQzMyQz15CkNPTkZJR19DUllQ
VE9fQ1JDMzJDX0lOVEVMPXkKIyBDT05GSUdfQ1JZUFRPX0NSQzMyIGlzIG5vdCBzZXQKIyBD
T05GSUdfQ1JZUFRPX0NSQzMyX1BDTE1VTCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fWFhI
QVNIPXkKQ09ORklHX0NSWVBUT19CTEFLRTJCPXkKIyBDT05GSUdfQ1JZUFRPX0JMQUtFMlMg
aXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQkxBS0UyU19YODYgaXMgbm90IHNldApDT05G
SUdfQ1JZUFRPX0NSQ1QxMERJRj15CiMgQ09ORklHX0NSWVBUT19DUkNUMTBESUZfUENMTVVM
IGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19HSEFTSD15CkNPTkZJR19DUllQVE9fUE9MWTEz
MDU9eQojIENPTkZJR19DUllQVE9fUE9MWTEzMDVfWDg2XzY0IGlzIG5vdCBzZXQKQ09ORklH
X0NSWVBUT19NRDQ9eQpDT05GSUdfQ1JZUFRPX01ENT15CiMgQ09ORklHX0NSWVBUT19NSUNI
QUVMX01JQyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19STUQxNjAgaXMgbm90IHNldApD
T05GSUdfQ1JZUFRPX1NIQTE9eQpDT05GSUdfQ1JZUFRPX1NIQTFfU1NTRTM9eQpDT05GSUdf
Q1JZUFRPX1NIQTI1Nl9TU1NFMz15CkNPTkZJR19DUllQVE9fU0hBNTEyX1NTU0UzPXkKQ09O
RklHX0NSWVBUT19TSEEyNTY9eQpDT05GSUdfQ1JZUFRPX1NIQTUxMj15CiMgQ09ORklHX0NS
WVBUT19TSEEzIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1NNMyBpcyBub3Qgc2V0CiMg
Q09ORklHX0NSWVBUT19TVFJFRUJPRyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19XUDUx
MiBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19HSEFTSF9DTE1VTF9OSV9JTlRFTCBpcyBu
b3Qgc2V0CgojCiMgQ2lwaGVycwojCkNPTkZJR19DUllQVE9fQUVTPXkKQ09ORklHX0NSWVBU
T19BRVNfVEk9eQpDT05GSUdfQ1JZUFRPX0FFU19OSV9JTlRFTD15CkNPTkZJR19DUllQVE9f
QkxPV0ZJU0g9eQpDT05GSUdfQ1JZUFRPX0JMT1dGSVNIX0NPTU1PTj15CkNPTkZJR19DUllQ
VE9fQkxPV0ZJU0hfWDg2XzY0PXkKIyBDT05GSUdfQ1JZUFRPX0NBTUVMTElBIGlzIG5vdCBz
ZXQKQ09ORklHX0NSWVBUT19DQU1FTExJQV9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBTUVM
TElBX0FFU05JX0FWWF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FFU05JX0FW
WDJfWDg2XzY0PXkKIyBDT05GSUdfQ1JZUFRPX0NBU1Q1IGlzIG5vdCBzZXQKIyBDT05GSUdf
Q1JZUFRPX0NBU1Q1X0FWWF9YODZfNjQgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ0FT
VDYgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ0FTVDZfQVZYX1g4Nl82NCBpcyBub3Qg
c2V0CkNPTkZJR19DUllQVE9fREVTPXkKIyBDT05GSUdfQ1JZUFRPX0RFUzNfRURFX1g4Nl82
NCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19GQ1JZUFQgaXMgbm90IHNldAojIENPTkZJ
R19DUllQVE9fQ0hBQ0hBMjAgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ0hBQ0hBMjBf
WDg2XzY0IGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19TRVJQRU5UPXkKQ09ORklHX0NSWVBU
T19TRVJQRU5UX1NTRTJfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19TRVJQRU5UX0FWWF9YODZf
NjQ9eQpDT05GSUdfQ1JZUFRPX1NFUlBFTlRfQVZYMl9YODZfNjQ9eQojIENPTkZJR19DUllQ
VE9fU000IGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19UV09GSVNIPXkKQ09ORklHX0NSWVBU
T19UV09GSVNIX0NPTU1PTj15CkNPTkZJR19DUllQVE9fVFdPRklTSF9YODZfNjQ9eQpDT05G
SUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0XzNXQVk9eQpDT05GSUdfQ1JZUFRPX1RXT0ZJU0hf
QVZYX1g4Nl82ND15CgojCiMgQ29tcHJlc3Npb24KIwpDT05GSUdfQ1JZUFRPX0RFRkxBVEU9
eQpDT05GSUdfQ1JZUFRPX0xaTz15CiMgQ09ORklHX0NSWVBUT184NDIgaXMgbm90IHNldAoj
IENPTkZJR19DUllQVE9fTFo0IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0xaNEhDIGlz
IG5vdCBzZXQKQ09ORklHX0NSWVBUT19aU1REPXkKCiMKIyBSYW5kb20gTnVtYmVyIEdlbmVy
YXRpb24KIwpDT05GSUdfQ1JZUFRPX0FOU0lfQ1BSTkc9eQpDT05GSUdfQ1JZUFRPX0RSQkdf
TUVOVT15CkNPTkZJR19DUllQVE9fRFJCR19ITUFDPXkKIyBDT05GSUdfQ1JZUFRPX0RSQkdf
SEFTSCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19EUkJHX0NUUiBpcyBub3Qgc2V0CkNP
TkZJR19DUllQVE9fRFJCRz15CkNPTkZJR19DUllQVE9fSklUVEVSRU5UUk9QWT15CiMgQ09O
RklHX0NSWVBUT19VU0VSX0FQSV9IQVNIIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1VT
RVJfQVBJX1NLQ0lQSEVSIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1JO
RyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9BRUFEIGlzIG5vdCBzZXQK
Q09ORklHX0NSWVBUT19IQVNIX0lORk89eQoKIwojIENyeXB0byBsaWJyYXJ5IHJvdXRpbmVz
CiMKQ09ORklHX0NSWVBUT19MSUJfQUVTPXkKQ09ORklHX0NSWVBUT19MSUJfQVJDND15CiMg
Q09ORklHX0NSWVBUT19MSUJfQkxBS0UyUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19M
SUJfQ0hBQ0hBIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0xJQl9DVVJWRTI1NTE5IGlz
IG5vdCBzZXQKQ09ORklHX0NSWVBUT19MSUJfREVTPXkKQ09ORklHX0NSWVBUT19MSUJfUE9M
WTEzMDVfUlNJWkU9MTEKQ09ORklHX0NSWVBUT19MSUJfUE9MWTEzMDVfR0VORVJJQz15CiMg
Q09ORklHX0NSWVBUT19MSUJfUE9MWTEzMDUgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9f
TElCX0NIQUNIQTIwUE9MWTEzMDUgaXMgbm90IHNldApDT05GSUdfQ1JZUFRPX0xJQl9TSEEy
NTY9eQojIENPTkZJR19DUllQVE9fSFcgaXMgbm90IHNldApDT05GSUdfQVNZTU1FVFJJQ19L
RVlfVFlQRT15CkNPTkZJR19BU1lNTUVUUklDX1BVQkxJQ19LRVlfU1VCVFlQRT15CkNPTkZJ
R19YNTA5X0NFUlRJRklDQVRFX1BBUlNFUj15CiMgQ09ORklHX1BLQ1M4X1BSSVZBVEVfS0VZ
X1BBUlNFUiBpcyBub3Qgc2V0CkNPTkZJR19QS0NTN19NRVNTQUdFX1BBUlNFUj15CiMgQ09O
RklHX1BLQ1M3X1RFU1RfS0VZIGlzIG5vdCBzZXQKIyBDT05GSUdfU0lHTkVEX1BFX0ZJTEVf
VkVSSUZJQ0FUSU9OIGlzIG5vdCBzZXQKCiMKIyBDZXJ0aWZpY2F0ZXMgZm9yIHNpZ25hdHVy
ZSBjaGVja2luZwojCkNPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HPXkKQ09ORklHX1NZ
U1RFTV9UUlVTVEVEX0tFWVM9IiIKIyBDT05GSUdfU1lTVEVNX0VYVFJBX0NFUlRJRklDQVRF
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VDT05EQVJZX1RSVVNURURfS0VZUklORyBpcyBub3Qg
c2V0CiMgQ09ORklHX1NZU1RFTV9CTEFDS0xJU1RfS0VZUklORyBpcyBub3Qgc2V0CiMgZW5k
IG9mIENlcnRpZmljYXRlcyBmb3Igc2lnbmF0dXJlIGNoZWNraW5nCgpDT05GSUdfQklOQVJZ
X1BSSU5URj15CgojCiMgTGlicmFyeSByb3V0aW5lcwojCkNPTkZJR19SQUlENl9QUT15CiMg
Q09ORklHX1JBSUQ2X1BRX0JFTkNITUFSSyBpcyBub3Qgc2V0CiMgQ09ORklHX1BBQ0tJTkcg
aXMgbm90IHNldApDT05GSUdfQklUUkVWRVJTRT15CkNPTkZJR19HRU5FUklDX1NUUk5DUFlf
RlJPTV9VU0VSPXkKQ09ORklHX0dFTkVSSUNfU1RSTkxFTl9VU0VSPXkKQ09ORklHX0dFTkVS
SUNfTkVUX1VUSUxTPXkKQ09ORklHX0dFTkVSSUNfRklORF9GSVJTVF9CSVQ9eQojIENPTkZJ
R19DT1JESUMgaXMgbm90IHNldAojIENPTkZJR19QUklNRV9OVU1CRVJTIGlzIG5vdCBzZXQK
Q09ORklHX1JBVElPTkFMPXkKQ09ORklHX0dFTkVSSUNfUENJX0lPTUFQPXkKQ09ORklHX0dF
TkVSSUNfSU9NQVA9eQpDT05GSUdfQVJDSF9VU0VfQ01QWENIR19MT0NLUkVGPXkKQ09ORklH
X0FSQ0hfSEFTX0ZBU1RfTVVMVElQTElFUj15CkNPTkZJR19BUkNIX1VTRV9TWU1fQU5OT1RB
VElPTlM9eQpDT05GSUdfQ1JDX0NDSVRUPXkKQ09ORklHX0NSQzE2PXkKQ09ORklHX0NSQ19U
MTBESUY9eQpDT05GSUdfQ1JDX0lUVV9UPXkKQ09ORklHX0NSQzMyPXkKQ09ORklHX0NSQzMy
X1NFTEZURVNUPXkKQ09ORklHX0NSQzMyX1NMSUNFQlk4PXkKIyBDT05GSUdfQ1JDMzJfU0xJ
Q0VCWTQgaXMgbm90IHNldAojIENPTkZJR19DUkMzMl9TQVJXQVRFIGlzIG5vdCBzZXQKIyBD
T05GSUdfQ1JDMzJfQklUIGlzIG5vdCBzZXQKQ09ORklHX0NSQzY0PXkKIyBDT05GSUdfQ1JD
NCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSQzcgaXMgbm90IHNldApDT05GSUdfTElCQ1JDMzJD
PXkKIyBDT05GSUdfQ1JDOCBpcyBub3Qgc2V0CkNPTkZJR19YWEhBU0g9eQojIENPTkZJR19S
QU5ET00zMl9TRUxGVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19aTElCX0lORkxBVEU9eQpDT05G
SUdfWkxJQl9ERUZMQVRFPXkKQ09ORklHX0xaT19DT01QUkVTUz15CkNPTkZJR19MWk9fREVD
T01QUkVTUz15CkNPTkZJR19MWjRfREVDT01QUkVTUz15CkNPTkZJR19aU1REX0NPTVBSRVNT
PXkKQ09ORklHX1pTVERfREVDT01QUkVTUz15CkNPTkZJR19YWl9ERUM9eQpDT05GSUdfWFpf
REVDX1g4Nj15CkNPTkZJR19YWl9ERUNfUE9XRVJQQz15CkNPTkZJR19YWl9ERUNfSUE2ND15
CkNPTkZJR19YWl9ERUNfQVJNPXkKQ09ORklHX1haX0RFQ19BUk1USFVNQj15CkNPTkZJR19Y
Wl9ERUNfU1BBUkM9eQpDT05GSUdfWFpfREVDX0JDSj15CiMgQ09ORklHX1haX0RFQ19URVNU
IGlzIG5vdCBzZXQKQ09ORklHX0RFQ09NUFJFU1NfR1pJUD15CkNPTkZJR19ERUNPTVBSRVNT
X0JaSVAyPXkKQ09ORklHX0RFQ09NUFJFU1NfTFpNQT15CkNPTkZJR19ERUNPTVBSRVNTX1ha
PXkKQ09ORklHX0RFQ09NUFJFU1NfTFpPPXkKQ09ORklHX0RFQ09NUFJFU1NfTFo0PXkKQ09O
RklHX0dFTkVSSUNfQUxMT0NBVE9SPXkKQ09ORklHX1RFWFRTRUFSQ0g9eQpDT05GSUdfVEVY
VFNFQVJDSF9LTVA9eQpDT05GSUdfVEVYVFNFQVJDSF9CTT15CkNPTkZJR19URVhUU0VBUkNI
X0ZTTT15CkNPTkZJR19JTlRFUlZBTF9UUkVFPXkKQ09ORklHX1hBUlJBWV9NVUxUST15CkNP
TkZJR19BU1NPQ0lBVElWRV9BUlJBWT15CkNPTkZJR19IQVNfSU9NRU09eQpDT05GSUdfSEFT
X0lPUE9SVF9NQVA9eQpDT05GSUdfSEFTX0RNQT15CkNPTkZJR19ETUFfT1BTPXkKQ09ORklH
X05FRURfU0dfRE1BX0xFTkdUSD15CkNPTkZJR19ORUVEX0RNQV9NQVBfU1RBVEU9eQpDT05G
SUdfQVJDSF9ETUFfQUREUl9UXzY0QklUPXkKQ09ORklHX1NXSU9UTEI9eQojIENPTkZJR19E
TUFfQ01BIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1BX0FQSV9ERUJVRyBpcyBub3Qgc2V0CiMg
Q09ORklHX0RNQV9NQVBfQkVOQ0hNQVJLIGlzIG5vdCBzZXQKQ09ORklHX1NHTF9BTExPQz15
CkNPTkZJR19JT01NVV9IRUxQRVI9eQpDT05GSUdfQ0hFQ0tfU0lHTkFUVVJFPXkKQ09ORklH
X0NQVV9STUFQPXkKQ09ORklHX0RRTD15CkNPTkZJR19HTE9CPXkKIyBDT05GSUdfR0xPQl9T
RUxGVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19OTEFUVFI9eQpDT05GSUdfQ0xaX1RBQj15CiMg
Q09ORklHX0lSUV9QT0xMIGlzIG5vdCBzZXQKQ09ORklHX01QSUxJQj15CkNPTkZJR19PSURf
UkVHSVNUUlk9eQpDT05GSUdfSEFWRV9HRU5FUklDX1ZEU089eQpDT05GSUdfR0VORVJJQ19H
RVRUSU1FT0ZEQVk9eQpDT05GSUdfR0VORVJJQ19WRFNPX1RJTUVfTlM9eQpDT05GSUdfRk9O
VF9TVVBQT1JUPXkKIyBDT05GSUdfRk9OVFMgaXMgbm90IHNldApDT05GSUdfRk9OVF84eDg9
eQpDT05GSUdfRk9OVF84eDE2PXkKQ09ORklHX1NHX1BPT0w9eQpDT05GSUdfQVJDSF9IQVNf
UE1FTV9BUEk9eQpDT05GSUdfTUVNUkVHSU9OPXkKQ09ORklHX0FSQ0hfSEFTX1VBQ0NFU1Nf
RkxVU0hDQUNIRT15CkNPTkZJR19BUkNIX0hBU19DT1BZX01DPXkKQ09ORklHX0FSQ0hfU1RB
Q0tXQUxLPXkKQ09ORklHX1NCSVRNQVA9eQojIENPTkZJR19TVFJJTkdfU0VMRlRFU1QgaXMg
bm90IHNldAojIGVuZCBvZiBMaWJyYXJ5IHJvdXRpbmVzCgojCiMgS2VybmVsIGhhY2tpbmcK
IwoKIwojIHByaW50ayBhbmQgZG1lc2cgb3B0aW9ucwojCkNPTkZJR19QUklOVEtfVElNRT15
CiMgQ09ORklHX1BSSU5US19DQUxMRVIgaXMgbm90IHNldApDT05GSUdfQ09OU09MRV9MT0dM
RVZFTF9ERUZBVUxUPTcKQ09ORklHX0NPTlNPTEVfTE9HTEVWRUxfUVVJRVQ9NApDT05GSUdf
TUVTU0FHRV9MT0dMRVZFTF9ERUZBVUxUPTQKIyBDT05GSUdfQk9PVF9QUklOVEtfREVMQVkg
aXMgbm90IHNldAojIENPTkZJR19EWU5BTUlDX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFlOQU1JQ19ERUJVR19DT1JFIGlzIG5vdCBzZXQKQ09ORklHX1NZTUJPTElDX0VSUk5BTUU9
eQpDT05GSUdfREVCVUdfQlVHVkVSQk9TRT15CiMgZW5kIG9mIHByaW50ayBhbmQgZG1lc2cg
b3B0aW9ucwoKIwojIENvbXBpbGUtdGltZSBjaGVja3MgYW5kIGNvbXBpbGVyIG9wdGlvbnMK
IwpDT05GSUdfREVCVUdfSU5GTz15CiMgQ09ORklHX0RFQlVHX0lORk9fUkVEVUNFRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0RFQlVHX0lORk9fQ09NUFJFU1NFRCBpcyBub3Qgc2V0CiMgQ09O
RklHX0RFQlVHX0lORk9fU1BMSVQgaXMgbm90IHNldApDT05GSUdfREVCVUdfSU5GT19EV0FS
Rl9UT09MQ0hBSU5fREVGQVVMVD15CiMgQ09ORklHX0RFQlVHX0lORk9fRFdBUkY0IGlzIG5v
dCBzZXQKIyBDT05GSUdfREVCVUdfSU5GT19EV0FSRjUgaXMgbm90IHNldAojIENPTkZJR19E
RUJVR19JTkZPX0JURiBpcyBub3Qgc2V0CiMgQ09ORklHX0dEQl9TQ1JJUFRTIGlzIG5vdCBz
ZXQKQ09ORklHX0ZSQU1FX1dBUk49MjA0OAojIENPTkZJR19TVFJJUF9BU01fU1lNUyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1JFQURBQkxFX0FTTSBpcyBub3Qgc2V0CiMgQ09ORklHX0hFQURF
UlNfSU5TVEFMTCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1NFQ1RJT05fTUlTTUFUQ0gg
aXMgbm90IHNldApDT05GSUdfU0VDVElPTl9NSVNNQVRDSF9XQVJOX09OTFk9eQpDT05GSUdf
U1RBQ0tfVkFMSURBVElPTj15CiMgQ09ORklHX0RFQlVHX0ZPUkNFX1dFQUtfUEVSX0NQVSBp
cyBub3Qgc2V0CiMgZW5kIG9mIENvbXBpbGUtdGltZSBjaGVja3MgYW5kIGNvbXBpbGVyIG9w
dGlvbnMKCiMKIyBHZW5lcmljIEtlcm5lbCBEZWJ1Z2dpbmcgSW5zdHJ1bWVudHMKIwpDT05G
SUdfTUFHSUNfU1lTUlE9eQpDT05GSUdfTUFHSUNfU1lTUlFfREVGQVVMVF9FTkFCTEU9MHgx
CkNPTkZJR19NQUdJQ19TWVNSUV9TRVJJQUw9eQpDT05GSUdfTUFHSUNfU1lTUlFfU0VSSUFM
X1NFUVVFTkNFPSIiCkNPTkZJR19ERUJVR19GUz15CkNPTkZJR19ERUJVR19GU19BTExPV19B
TEw9eQojIENPTkZJR19ERUJVR19GU19ESVNBTExPV19NT1VOVCBpcyBub3Qgc2V0CiMgQ09O
RklHX0RFQlVHX0ZTX0FMTE9XX05PTkUgaXMgbm90IHNldApDT05GSUdfSEFWRV9BUkNIX0tH
REI9eQojIENPTkZJR19LR0RCIGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfSEFTX1VCU0FOX1NB
TklUSVpFX0FMTD15CiMgQ09ORklHX1VCU0FOIGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfQVJD
SF9LQ1NBTj15CiMgZW5kIG9mIEdlbmVyaWMgS2VybmVsIERlYnVnZ2luZyBJbnN0cnVtZW50
cwoKQ09ORklHX0RFQlVHX0tFUk5FTD15CkNPTkZJR19ERUJVR19NSVNDPXkKCiMKIyBNZW1v
cnkgRGVidWdnaW5nCiMKIyBDT05GSUdfUEFHRV9FWFRFTlNJT04gaXMgbm90IHNldAojIENP
TkZJR19ERUJVR19QQUdFQUxMT0MgaXMgbm90IHNldAojIENPTkZJR19QQUdFX09XTkVSIGlz
IG5vdCBzZXQKIyBDT05GSUdfUEFHRV9QT0lTT05JTkcgaXMgbm90IHNldAojIENPTkZJR19E
RUJVR19QQUdFX1JFRiBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1JPREFUQV9URVNUIGlz
IG5vdCBzZXQKQ09ORklHX0FSQ0hfSEFTX0RFQlVHX1dYPXkKIyBDT05GSUdfREVCVUdfV1gg
aXMgbm90IHNldApDT05GSUdfR0VORVJJQ19QVERVTVA9eQojIENPTkZJR19QVERVTVBfREVC
VUdGUyBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX09CSkVDVFMgaXMgbm90IHNldAojIENP
TkZJR19TTFVCX0RFQlVHX09OIGlzIG5vdCBzZXQKIyBDT05GSUdfU0xVQl9TVEFUUyBpcyBu
b3Qgc2V0CkNPTkZJR19IQVZFX0RFQlVHX0tNRU1MRUFLPXkKIyBDT05GSUdfREVCVUdfS01F
TUxFQUsgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19TVEFDS19VU0FHRSBpcyBub3Qgc2V0
CiMgQ09ORklHX1NDSEVEX1NUQUNLX0VORF9DSEVDSyBpcyBub3Qgc2V0CkNPTkZJR19BUkNI
X0hBU19ERUJVR19WTV9QR1RBQkxFPXkKIyBDT05GSUdfREVCVUdfVk0gaXMgbm90IHNldAoj
IENPTkZJR19ERUJVR19WTV9QR1RBQkxFIGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfSEFTX0RF
QlVHX1ZJUlRVQUw9eQojIENPTkZJR19ERUJVR19WSVJUVUFMIGlzIG5vdCBzZXQKQ09ORklH
X0RFQlVHX01FTU9SWV9JTklUPXkKIyBDT05GSUdfREVCVUdfUEVSX0NQVV9NQVBTIGlzIG5v
dCBzZXQKQ09ORklHX0FSQ0hfU1VQUE9SVFNfS01BUF9MT0NBTF9GT1JDRV9NQVA9eQojIENP
TkZJR19ERUJVR19LTUFQX0xPQ0FMX0ZPUkNFX01BUCBpcyBub3Qgc2V0CkNPTkZJR19IQVZF
X0FSQ0hfS0FTQU49eQpDT05GSUdfSEFWRV9BUkNIX0tBU0FOX1ZNQUxMT0M9eQpDT05GSUdf
Q0NfSEFTX0tBU0FOX0dFTkVSSUM9eQpDT05GSUdfQ0NfSEFTX1dPUktJTkdfTk9TQU5JVEla
RV9BRERSRVNTPXkKIyBDT05GSUdfS0FTQU4gaXMgbm90IHNldApDT05GSUdfSEFWRV9BUkNI
X0tGRU5DRT15CiMgQ09ORklHX0tGRU5DRSBpcyBub3Qgc2V0CiMgZW5kIG9mIE1lbW9yeSBE
ZWJ1Z2dpbmcKCiMgQ09ORklHX0RFQlVHX1NISVJRIGlzIG5vdCBzZXQKCiMKIyBEZWJ1ZyBP
b3BzLCBMb2NrdXBzIGFuZCBIYW5ncwojCiMgQ09ORklHX1BBTklDX09OX09PUFMgaXMgbm90
IHNldApDT05GSUdfUEFOSUNfT05fT09QU19WQUxVRT0wCkNPTkZJR19QQU5JQ19USU1FT1VU
PTAKQ09ORklHX0xPQ0tVUF9ERVRFQ1RPUj15CkNPTkZJR19TT0ZUTE9DS1VQX0RFVEVDVE9S
PXkKIyBDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUMgaXMgbm90IHNldApDT05G
SUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUNfVkFMVUU9MApDT05GSUdfSEFSRExPQ0tV
UF9ERVRFQ1RPUl9QRVJGPXkKQ09ORklHX0hBUkRMT0NLVVBfQ0hFQ0tfVElNRVNUQU1QPXkK
Q09ORklHX0hBUkRMT0NLVVBfREVURUNUT1I9eQojIENPTkZJR19CT09UUEFSQU1fSEFSRExP
Q0tVUF9QQU5JQyBpcyBub3Qgc2V0CkNPTkZJR19CT09UUEFSQU1fSEFSRExPQ0tVUF9QQU5J
Q19WQUxVRT0wCkNPTkZJR19ERVRFQ1RfSFVOR19UQVNLPXkKQ09ORklHX0RFRkFVTFRfSFVO
R19UQVNLX1RJTUVPVVQ9MTIwCiMgQ09ORklHX0JPT1RQQVJBTV9IVU5HX1RBU0tfUEFOSUMg
aXMgbm90IHNldApDT05GSUdfQk9PVFBBUkFNX0hVTkdfVEFTS19QQU5JQ19WQUxVRT0wCkNP
TkZJR19XUV9XQVRDSERPRz15CiMgQ09ORklHX1RFU1RfTE9DS1VQIGlzIG5vdCBzZXQKIyBl
bmQgb2YgRGVidWcgT29wcywgTG9ja3VwcyBhbmQgSGFuZ3MKCiMKIyBTY2hlZHVsZXIgRGVi
dWdnaW5nCiMKIyBDT05GSUdfU0NIRURfREVCVUcgaXMgbm90IHNldApDT05GSUdfU0NIRURf
SU5GTz15CiMgQ09ORklHX1NDSEVEU1RBVFMgaXMgbm90IHNldAojIGVuZCBvZiBTY2hlZHVs
ZXIgRGVidWdnaW5nCgojIENPTkZJR19ERUJVR19USU1FS0VFUElORyBpcyBub3Qgc2V0Cgoj
CiMgTG9jayBEZWJ1Z2dpbmcgKHNwaW5sb2NrcywgbXV0ZXhlcywgZXRjLi4uKQojCkNPTkZJ
R19MT0NLX0RFQlVHR0lOR19TVVBQT1JUPXkKIyBDT05GSUdfUFJPVkVfTE9DS0lORyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0xPQ0tfU1RBVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1JU
X01VVEVYRVMgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19TUElOTE9DSyBpcyBub3Qgc2V0
CiMgQ09ORklHX0RFQlVHX01VVEVYRVMgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19XV19N
VVRFWF9TTE9XUEFUSCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1JXU0VNUyBpcyBub3Qg
c2V0CiMgQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0MgaXMgbm90IHNldAojIENPTkZJR19ERUJV
R19BVE9NSUNfU0xFRVAgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19MT0NLSU5HX0FQSV9T
RUxGVEVTVFMgaXMgbm90IHNldAojIENPTkZJR19MT0NLX1RPUlRVUkVfVEVTVCBpcyBub3Qg
c2V0CiMgQ09ORklHX1dXX01VVEVYX1NFTEZURVNUIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NG
X1RPUlRVUkVfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NTRF9MT0NLX1dBSVRfREVCVUcg
aXMgbm90IHNldAojIGVuZCBvZiBMb2NrIERlYnVnZ2luZyAoc3BpbmxvY2tzLCBtdXRleGVz
LCBldGMuLi4pCgojIENPTkZJR19ERUJVR19JUlFGTEFHUyBpcyBub3Qgc2V0CkNPTkZJR19T
VEFDS1RSQUNFPXkKIyBDT05GSUdfV0FSTl9BTExfVU5TRUVERURfUkFORE9NIGlzIG5vdCBz
ZXQKIyBDT05GSUdfREVCVUdfS09CSkVDVCBpcyBub3Qgc2V0CgojCiMgRGVidWcga2VybmVs
IGRhdGEgc3RydWN0dXJlcwojCiMgQ09ORklHX0RFQlVHX0xJU1QgaXMgbm90IHNldAojIENP
TkZJR19ERUJVR19QTElTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1NHIGlzIG5vdCBz
ZXQKIyBDT05GSUdfREVCVUdfTk9USUZJRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfQlVHX09O
X0RBVEFfQ09SUlVQVElPTiBpcyBub3Qgc2V0CiMgZW5kIG9mIERlYnVnIGtlcm5lbCBkYXRh
IHN0cnVjdHVyZXMKCiMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQKCiMK
IyBSQ1UgRGVidWdnaW5nCiMKIyBDT05GSUdfUkNVX1NDQUxFX1RFU1QgaXMgbm90IHNldAoj
IENPTkZJR19SQ1VfVE9SVFVSRV9URVNUIGlzIG5vdCBzZXQKIyBDT05GSUdfUkNVX1JFRl9T
Q0FMRV9URVNUIGlzIG5vdCBzZXQKQ09ORklHX1JDVV9DUFVfU1RBTExfVElNRU9VVD02MAoj
IENPTkZJR19SQ1VfVFJBQ0UgaXMgbm90IHNldAojIENPTkZJR19SQ1VfRVFTX0RFQlVHIGlz
IG5vdCBzZXQKIyBlbmQgb2YgUkNVIERlYnVnZ2luZwoKIyBDT05GSUdfREVCVUdfV1FfRk9S
Q0VfUlJfQ1BVIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfQkxPQ0tfRVhUX0RFVlQgaXMg
bm90IHNldAojIENPTkZJR19DUFVfSE9UUExVR19TVEFURV9DT05UUk9MIGlzIG5vdCBzZXQK
IyBDT05GSUdfTEFURU5DWVRPUCBpcyBub3Qgc2V0CkNPTkZJR19VU0VSX1NUQUNLVFJBQ0Vf
U1VQUE9SVD15CkNPTkZJR19OT1BfVFJBQ0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fVFJB
Q0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhfVFJBQ0VSPXkKQ09ORklHX0hBVkVf
RFlOQU1JQ19GVFJBQ0U9eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRV9XSVRIX1JFR1M9
eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRV9XSVRIX0RJUkVDVF9DQUxMUz15CkNPTkZJ
R19IQVZFX0RZTkFNSUNfRlRSQUNFX1dJVEhfQVJHUz15CkNPTkZJR19IQVZFX0ZUUkFDRV9N
Q09VTlRfUkVDT1JEPXkKQ09ORklHX0hBVkVfU1lTQ0FMTF9UUkFDRVBPSU5UUz15CkNPTkZJ
R19IQVZFX0ZFTlRSWT15CkNPTkZJR19IQVZFX09CSlRPT0xfTUNPVU5UPXkKQ09ORklHX0hB
VkVfQ19SRUNPUkRNQ09VTlQ9eQpDT05GSUdfVFJBQ0VfQ0xPQ0s9eQpDT05GSUdfUklOR19C
VUZGRVI9eQpDT05GSUdfRVZFTlRfVFJBQ0lORz15CkNPTkZJR19DT05URVhUX1NXSVRDSF9U
UkFDRVI9eQpDT05GSUdfVFJBQ0lORz15CkNPTkZJR19UUkFDSU5HX1NVUFBPUlQ9eQpDT05G
SUdfRlRSQUNFPXkKIyBDT05GSUdfQk9PVFRJTUVfVFJBQ0lORyBpcyBub3Qgc2V0CiMgQ09O
RklHX0ZVTkNUSU9OX1RSQUNFUiBpcyBub3Qgc2V0CiMgQ09ORklHX1NUQUNLX1RSQUNFUiBp
cyBub3Qgc2V0CiMgQ09ORklHX0lSUVNPRkZfVFJBQ0VSIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0NIRURfVFJBQ0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfSFdMQVRfVFJBQ0VSIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTU1JT1RSQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfRU5BQkxFX0RFRkFV
TFRfVFJBQ0VSUyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZUUkFDRV9TWVNDQUxMUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1RSQUNFUl9TTkFQU0hPVCBpcyBub3Qgc2V0CkNPTkZJR19CUkFOQ0hf
UFJPRklMRV9OT05FPXkKIyBDT05GSUdfUFJPRklMRV9BTk5PVEFURURfQlJBTkNIRVMgaXMg
bm90IHNldAojIENPTkZJR19QUk9GSUxFX0FMTF9CUkFOQ0hFUyBpcyBub3Qgc2V0CiMgQ09O
RklHX0JMS19ERVZfSU9fVFJBQ0UgaXMgbm90IHNldApDT05GSUdfVVBST0JFX0VWRU5UUz15
CkNPTkZJR19EWU5BTUlDX0VWRU5UUz15CkNPTkZJR19QUk9CRV9FVkVOVFM9eQojIENPTkZJ
R19TWU5USF9FVkVOVFMgaXMgbm90IHNldAojIENPTkZJR19ISVNUX1RSSUdHRVJTIGlzIG5v
dCBzZXQKIyBDT05GSUdfVFJBQ0VfRVZFTlRfSU5KRUNUIGlzIG5vdCBzZXQKIyBDT05GSUdf
VFJBQ0VQT0lOVF9CRU5DSE1BUksgaXMgbm90IHNldAojIENPTkZJR19SSU5HX0JVRkZFUl9C
RU5DSE1BUksgaXMgbm90IHNldAojIENPTkZJR19UUkFDRV9FVkFMX01BUF9GSUxFIGlzIG5v
dCBzZXQKIyBDT05GSUdfUklOR19CVUZGRVJfU1RBUlRVUF9URVNUIGlzIG5vdCBzZXQKIyBD
T05GSUdfUklOR19CVUZGRVJfVkFMSURBVEVfVElNRV9ERUxUQVMgaXMgbm90IHNldAojIENP
TkZJR19QUkVFTVBUSVJRX0RFTEFZX1RFU1QgaXMgbm90IHNldAojIENPTkZJR19QUk9WSURF
X09IQ0kxMzk0X0RNQV9JTklUIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FNUExFUyBpcyBub3Qg
c2V0CkNPTkZJR19BUkNIX0hBU19ERVZNRU1fSVNfQUxMT1dFRD15CkNPTkZJR19TVFJJQ1Rf
REVWTUVNPXkKQ09ORklHX0lPX1NUUklDVF9ERVZNRU09eQoKIwojIHg4NiBEZWJ1Z2dpbmcK
IwpDT05GSUdfVFJBQ0VfSVJRRkxBR1NfU1VQUE9SVD15CkNPTkZJR19UUkFDRV9JUlFGTEFH
U19OTUlfU1VQUE9SVD15CkNPTkZJR19YODZfVkVSQk9TRV9CT09UVVA9eQpDT05GSUdfRUFS
TFlfUFJJTlRLPXkKIyBDT05GSUdfRUFSTFlfUFJJTlRLX0RCR1AgaXMgbm90IHNldAojIENP
TkZJR19FQVJMWV9QUklOVEtfVVNCX1hEQkMgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19U
TEJGTFVTSCBpcyBub3Qgc2V0CiMgQ09ORklHX0lPTU1VX0RFQlVHIGlzIG5vdCBzZXQKQ09O
RklHX0hBVkVfTU1JT1RSQUNFX1NVUFBPUlQ9eQojIENPTkZJR19YODZfREVDT0RFUl9TRUxG
VEVTVCBpcyBub3Qgc2V0CkNPTkZJR19JT19ERUxBWV8wWDgwPXkKIyBDT05GSUdfSU9fREVM
QVlfMFhFRCBpcyBub3Qgc2V0CiMgQ09ORklHX0lPX0RFTEFZX1VERUxBWSBpcyBub3Qgc2V0
CiMgQ09ORklHX0lPX0RFTEFZX05PTkUgaXMgbm90IHNldApDT05GSUdfREVCVUdfQk9PVF9Q
QVJBTVM9eQojIENPTkZJR19DUEFfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19F
TlRSWSBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX05NSV9TRUxGVEVTVCBpcyBub3Qgc2V0
CiMgQ09ORklHX1g4Nl9ERUJVR19GUFUgaXMgbm90IHNldAojIENPTkZJR19QVU5JVF9BVE9N
X0RFQlVHIGlzIG5vdCBzZXQKQ09ORklHX1VOV0lOREVSX09SQz15CiMgQ09ORklHX1VOV0lO
REVSX0ZSQU1FX1BPSU5URVIgaXMgbm90IHNldAojIGVuZCBvZiB4ODYgRGVidWdnaW5nCgoj
CiMgS2VybmVsIFRlc3RpbmcgYW5kIENvdmVyYWdlCiMKIyBDT05GSUdfS1VOSVQgaXMgbm90
IHNldAojIENPTkZJR19OT1RJRklFUl9FUlJPUl9JTkpFQ1RJT04gaXMgbm90IHNldAojIENP
TkZJR19GQVVMVF9JTkpFQ1RJT04gaXMgbm90IHNldApDT05GSUdfQVJDSF9IQVNfS0NPVj15
CkNPTkZJR19DQ19IQVNfU0FOQ09WX1RSQUNFX1BDPXkKIyBDT05GSUdfS0NPViBpcyBub3Qg
c2V0CiMgQ09ORklHX1JVTlRJTUVfVEVTVElOR19NRU5VIGlzIG5vdCBzZXQKQ09ORklHX0FS
Q0hfVVNFX01FTVRFU1Q9eQojIENPTkZJR19NRU1URVNUIGlzIG5vdCBzZXQKIyBlbmQgb2Yg
S2VybmVsIFRlc3RpbmcgYW5kIENvdmVyYWdlCiMgZW5kIG9mIEtlcm5lbCBoYWNraW5nCg==
--------------12557986E89B73948A73857A--


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 09:27:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 09:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143772.264830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltoIu-0006Yo-LJ; Thu, 17 Jun 2021 09:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143772.264830; Thu, 17 Jun 2021 09:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltoIu-0006Yh-IA; Thu, 17 Jun 2021 09:27:16 +0000
Received: by outflank-mailman (input) for mailman id 143772;
 Thu, 17 Jun 2021 09:27:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RfGr=LL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ltoIs-0006YX-IS
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 09:27:14 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfc7aa84-c0ec-4a62-83d8-5ea64040d7c0;
 Thu, 17 Jun 2021 09:27:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfc7aa84-c0ec-4a62-83d8-5ea64040d7c0
Date: Thu, 17 Jun 2021 11:22:28 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: hypercalls with 64-bit results
Message-ID: <YMsUVAvFZ0Zv2U7I@Air-de-Roger>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: 3e2b529d-3d7f-496f-c879-08d93171701f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 09:22:33.3734
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vqvSgCi4Zd9iSZ+fs+AT4cTHwwKQMRAjyjwFVZPY6IAvrQIHMN+92T82slqZ0oKf58fdZLfCHNPDQ0DrU+dZMA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3065
X-OriginatorOrg: citrix.com

On Wed, Jun 16, 2021 at 06:04:02PM +0200, Jan Beulich wrote:
> All,
> 
> several years back do_memory_op() in libxc was changed to have "long"
> return type. This is because some of the sub-ops return potentially
> large values as the hypercall return value (i.e. not in an argument
> structure field). This change, however, didn't have the intended
> effect from all I can tell, which apparently manifests in the present
> two remaining ovmf failures in the staging osstest flights. Anthony
> tells me that ovmf as of not very long ago puts the shared info page
> at a really high address, thus making the p2m of the guest very large.
> Its size gets returned by XENMEM_maximum_gpfn, as function return
> value.
> 
> Since hypercalls from the tool stack are based on ioctl(), and since
> ioctl() has a return type of "int", I'm afraid there's no way we can
> deal with this by adjusting function return types in the libraries.
> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
> subops (for those cases where potentially large values get returned).

AFAICT NetBSD and FreeBSD are not affected by this issue as the
hypercall return value is propagated to the caller using a long field
in the ioctl structure payload for hypercalls.

osdep_hypercall however should be fixed in libs/call in order to
return a long instead of an int, and wrappers around it should also be
adjusted.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 09:31:45 2021
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "Nakajima, Jun" <jun.nakajima@intel.com>, Jan Beulich <jbeulich@suse.com>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "George
 Dunlap" <george.dunlap@citrix.com>
Subject: RE: [PATCH 3/3] x86/ept: force WB cache attributes for grant and
 foreign maps
Date: Thu, 17 Jun 2021 09:31:28 +0000
Message-ID: <MWHPR11MB188653F8277F861018DB00118C0E9@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-4-roger.pau@citrix.com>
In-Reply-To: <20210528173935.29919-4-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Saturday, May 29, 2021 1:40 AM
> 
> Force WB type for grants and foreign pages. Those are usually mapped
> over unpopulated physical ranges in the p2m, and those ranges would
> usually be UC in the MTRR state, which is unlikely to be the correct
> cache attribute. It's also cumbersome (or even impossible) for the
> guest to be setting the MTRR type for all those mappings as WB, as
> MTRR ranges are finite.
> 
> Note that on AMD we cannot force a cache attribute because of the lack
> of ignore PAT equivalent, so the behavior here slightly diverges
> between AMD and Intel (or EPT vs NPT/shadow).
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

btw incorrect cache attribute brings functional/performance problem.
it'd be good to explain a bit why this problem doesn't hurt AMD in the
commit msg...

> ---
>  xen/arch/x86/hvm/vmx/vmx.c        |  2 +-
>  xen/arch/x86/mm/p2m-ept.c         | 35 ++++++++++++++++++++++++++-----
>  xen/include/asm-x86/hvm/vmx/vmx.h |  2 +-
>  3 files changed, 32 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 0d4b47681b..e09b7e3af9 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -423,7 +423,7 @@ static void domain_creation_finished(struct domain *d)
>          return;
> 
>      ASSERT(epte_get_entry_emt(d, gfn, apic_access_mfn, 0, &ipat,
> -                              true) == MTRR_TYPE_WRBACK);
> +                              p2m_mmio_direct) == MTRR_TYPE_WRBACK);
>      ASSERT(ipat);
> 
>      if ( set_mmio_p2m_entry(d, gfn, apic_access_mfn, PAGE_ORDER_4K) )
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index f1d1d07e92..59c0325473 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -487,11 +487,12 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
>  }
> 
>  int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> -                       unsigned int order, bool *ipat, bool direct_mmio)
> +                       unsigned int order, bool *ipat, p2m_type_t type)
>  {
>      int gmtrr_mtype, hmtrr_mtype;
>      struct vcpu *v = current;
>      unsigned long i;
> +    bool direct_mmio = type == p2m_mmio_direct;
> 
>      *ipat = false;
> 
> @@ -535,9 +536,33 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
>          }
>      }
> 
> -    if ( direct_mmio )
> +    switch ( type )
> +    {
> +    case p2m_mmio_direct:
>          return MTRR_TYPE_UNCACHABLE;
> 
> +    case p2m_grant_map_ro:
> +    case p2m_grant_map_rw:
> +    case p2m_map_foreign:
> +        /*
> +         * Force WB type for grants and foreign pages. Those are usually
> +         * mapped over unpopulated physical ranges in the p2m, and those
> +         * would usually be UC in the MTRR state, which is unlikely to be
> +         * the correct cache attribute. It's also cumbersome (or even
> +         * impossible) for the guest to be setting the MTRR type for all
> +         * those mappings as WB, as MTRR ranges are finite.
> +         *
> +         * Note that on AMD we cannot force a cache attribute because of the
> +         * lack of ignore PAT equivalent, so the behavior here slightly
> +         * diverges. See p2m_type_to_flags for the AMD attributes.
> +         */
> +        *ipat = true;
> +        return MTRR_TYPE_WRBACK;
> +
> +    default:
> +        break;
> +    }
> +
>      gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, gfn, order);
>      if ( gmtrr_mtype >= 0 )
>      {
> @@ -640,7 +665,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                          continue;
>                      e.emt = epte_get_entry_emt(p2m->domain, _gfn(gfn + i),
>                                                 _mfn(e.mfn), 0, &ipat,
> -                                               e.sa_p2mt == p2m_mmio_direct);
> +                                               e.sa_p2mt);
>                      e.ipat = ipat;
> 
>                      nt = p2m_recalc_type(e.recalc, e.sa_p2mt, p2m, gfn + i);
> @@ -659,7 +684,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                  int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn),
>                                               _mfn(e.mfn),
>                                               level * EPT_TABLE_ORDER, &ipat,
> -                                             e.sa_p2mt == p2m_mmio_direct);
> +                                             e.sa_p2mt);
>                  bool_t recalc = e.recalc;
> 
>                  if ( recalc && p2m_is_changeable(e.sa_p2mt) )
> @@ -895,7 +920,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
>          bool ipat;
>          int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
>                                       i * EPT_TABLE_ORDER, &ipat,
> -                                     p2mt == p2m_mmio_direct);
> +                                     p2mt);
> 
>          if ( emt >= 0 )
>              new_entry.emt = emt;
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index f668ee1f09..0deb507490 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -600,7 +600,7 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
>  void ept_walk_table(struct domain *d, unsigned long gfn);
>  bool_t ept_handle_misconfig(uint64_t gpa);
>  int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> -                       unsigned int order, bool *ipat, bool direct_mmio);
> +                       unsigned int order, bool *ipat, p2m_type_t type);
>  void setup_ept_dump(void);
>  void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
>  /* Locate an alternate p2m by its EPTP */
> --
> 2.31.1


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 10:31:04 2021
To: Sander Eikelenboom <linux@eikelenboom.it>,
 linux-kernel <linux-kernel@vger.kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Linus Torvalds <torvalds@linux-foundation.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
Message-ID: <6bcd447e-fd49-0519-a59e-478f84e9120f@suse.com>
Date: Thu, 17 Jun 2021 12:30:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
Content-Type: text/plain; charset=utf-8; format=flowed
On 17.06.21 11:26, Sander Eikelenboom wrote:
> L.S.,
> 
> I just tried to upgrade and test the linux kernel going from the 5.12
> kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some
> trouble.
> 
> Some VM's boot fine (with more than 256MB memory assigned), but the
> smaller (memory wise) PVH ones crash during kernel boot due to OOM.
> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is
> running 5.13-rc6 (but it has more memory assigned, so that is not
> unexpected).
> 
> The 5.13-rc6'ish kernel is a pull of today, tried both with and without
> last AKPM's patches, but that makes no difference.
> 
> Below are stacktraces from a few of the crashing VM's.
> 
> Attached is the kernel .config
> 
> Any pointers ?

Did you compare memory usage with a bootable guest between kernel 5.12
and 5.13-rc6?

Any chance you could bisect?


Juergen
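[The bisect suggested above would typically be driven as follows; this is a hedged sketch, with the good/bad tags taken from the report (last known-good 5.12, first known-bad 5.13-rc6), and each step requires building the candidate kernel and boot-testing it in one of the small 256MB PVH guests.]

```shell
# Sketch of a bisect run between the last known-good and first
# known-bad kernels from the report above.
git bisect start
git bisect bad v5.13-rc6     # OOMs during boot in a 256MB PVH guest
git bisect good v5.12        # still boots fine
# Build and boot-test each commit git checks out, then mark it:
#   git bisect good          # the guest boots
#   git bisect bad           # the guest OOMs
# until git prints the first bad commit; finish with:
git bisect reset
```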

--------------DDDAFBAD2DD8943AE90EF9C4
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBycWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8Of8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDAQIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyThpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCCQoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7DrWf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJCAcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+lotu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1EvmV2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88NEaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLkpEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARAQAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylWsvi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDXzXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------DDDAFBAD2DD8943AE90EF9C4--

--oMJ6vYp76QKfRvdShaqUSUociYnrR3bDR--

--Tc8GhS0WamFES6vdNMQyDXzLS0xnkwFuC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDLJFAFAwAAAAAACgkQsN6d1ii/Ey//
+gf+N8kA12LDZZFA9C+AAo0/gsLkNV6l6YDuK/eI4nDGquKZYOKzF875l7wk117al6jIqZ1gOSIY
bhfiQdfi/KsEopT4o4ZVZzrqC8ifvY5esulkHVnLcSKSdQUjZnRuZ1zkKuMJDR8mm1D2j8k9A2QV
wgs98roioyYw68FG1MeTI1tp/mUUKCj3TYa1dEtN3tfNT6JhcrafhzCKweJMFAV9SlGdcSlPmG7H
xdRtuA7wRqin7ABrhMKFr6Wyk7uGDU4VUQIaQ63wp72zGYtCWbP5k2W8xQohfcU3DwJ8p0yvMJtu
D4h7g7W5wT+1yk1TXeudFHsBZJ6SNbiE79CXtBAmww==
=fNLs
-----END PGP SIGNATURE-----

--Tc8GhS0WamFES6vdNMQyDXzLS0xnkwFuC--


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 10:43:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 10:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143798.264869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltpUy-00076G-7i; Thu, 17 Jun 2021 10:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143798.264869; Thu, 17 Jun 2021 10:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltpUy-000769-4h; Thu, 17 Jun 2021 10:43:48 +0000
Received: by outflank-mailman (input) for mailman id 143798;
 Thu, 17 Jun 2021 10:43:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltpUw-00075z-Jg; Thu, 17 Jun 2021 10:43:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltpUw-0007B6-Ap; Thu, 17 Jun 2021 10:43:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltpUw-0008ST-2a; Thu, 17 Jun 2021 10:43:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltpUw-00049Q-27; Thu, 17 Jun 2021 10:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8WIDsyT4dRDCn3tiZiFO2R4+Eun+5HQCs/uiiyiTnlU=; b=oyS2OWo+R9Im5HN6q+jJ21QbWN
	wzZiSy5GnIr/wsNDdimgBFprq7TUAeU/g274CKrheR8m7jn1g7ajK2yX5D29McG7umwGA34MQDuUL
	7oz0+6qNZJwzGkVsQUTmR5Jq9yJCjPEa7lP8G8Az7dglV+P/7cKp2BgINSEo4wlMDj+c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162869-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162869: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=04ddd1271e9518008bcd899bdaf29c1701f0f7a0
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 10:43:46 +0000

flight 162869 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162869/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 04ddd1271e9518008bcd899bdaf29c1701f0f7a0
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   13 days
Failing since        162368  2021-06-04 15:42:59 Z   12 days   28 attempts
Testing same since   162865  2021-06-16 16:11:15 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2122 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:02:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:02:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143810.264883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltpnH-00012m-CB; Thu, 17 Jun 2021 11:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143810.264883; Thu, 17 Jun 2021 11:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltpnH-00012f-8P; Thu, 17 Jun 2021 11:02:43 +0000
Received: by outflank-mailman (input) for mailman id 143810;
 Thu, 17 Jun 2021 11:02:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltpnG-00012Z-Qe
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:02:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltpnG-0007W7-FQ; Thu, 17 Jun 2021 11:02:42 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltpnG-0002XK-7i; Thu, 17 Jun 2021 11:02:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4wmRk4AdsuBzMGm2H1KtLqeylc9DvdI3seaWwORVfbg=; b=YWin/4nXpJIppnY5hSEUk43y0G
	usouYQbvfx1Ppmg1IIb77KLJUj6RO9UrofdyVDcrI8UtUsKKHXF4rwTKcL9WMvWuUE4bbcDtYpgEx
	ZDV9EJV3u1njzA5aH+WZz4rf6R3PS1zdtvU5Qkrc1A1WCayGYMQbeEpgdticRVpc0vIg=;
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
To: Olaf Hering <olaf@aepfle.de>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20210616125129.26563-1-olaf@aepfle.de>
 <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
 <20210616170238.376cb13d.olaf@aepfle.de>
From: Julien Grall <julien@xen.org>
Message-ID: <0bf3f6e7-c45e-c158-21d3-e3b636eb71c5@xen.org>
Date: Thu, 17 Jun 2021 13:02:39 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210616170238.376cb13d.olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Olaf,

On 16/06/2021 17:02, Olaf Hering wrote:
> On Wed, 16 Jun 2021 15:50:24 +0100,
> Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
>>           new_max |= new_max >> 32;
> 
> Lazy compiler? I had hoped this was a compile-time constant, which evaluates to zero in 32-bit builds.

See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=4210; this seems to be
a known GCC issue.

> 
>      if ( sizeof(unsigned long) > 4 )
> 
> I guess an #ifdef, as is done in the old code, must be used.
> 
> Olaf
> 
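For illustration, a minimal sketch of the pattern under discussion (a hypothetical helper, not Xen's actual code): GCC's shift-count warning fires on `new_max >> 32` in a 32-bit build even when the `sizeof` guard makes the branch dead, so one common workaround is to split the over-wide shift into two shifts of 16, keeping every shift count below the type width without an #ifdef:

```c
/* Hypothetical sketch: propagate the highest set bit of new_max down,
 * producing a contiguous low-bit mask.  In a 32-bit build a literal
 * ">> 32" would exceed the width of unsigned long, and GCC warns even
 * though the guarded branch is compile-time dead (bug 4210), so the
 * final step shifts twice by 16 instead. */
static unsigned long fill_mask(unsigned long new_max)
{
    new_max |= new_max >> 1;
    new_max |= new_max >> 2;
    new_max |= new_max >> 4;
    new_max |= new_max >> 8;
    new_max |= new_max >> 16;
    if ( sizeof(new_max) > 4 )          /* constant-folded by the compiler */
        new_max |= (new_max >> 16) >> 16;
    return new_max;
}
```

The `if ( sizeof(...) > 4 )` condition is a compile-time constant, so either approach generates the same code; the double shift merely keeps both 32-bit and 64-bit builds warning-free.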

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:22:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:22:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143840.264930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltq6A-0005Pb-PC; Thu, 17 Jun 2021 11:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143840.264930; Thu, 17 Jun 2021 11:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltq6A-0005PU-LY; Thu, 17 Jun 2021 11:22:14 +0000
Received: by outflank-mailman (input) for mailman id 143840;
 Thu, 17 Jun 2021 11:22:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltq68-0005PO-W1
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:22:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltq68-0007px-Pv; Thu, 17 Jun 2021 11:22:12 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltq68-0003z4-5Y; Thu, 17 Jun 2021 11:22:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7eOPxmqioGUTJEF/zP1CGF9hfGxzfNJuFfWnGpbfETw=; b=YSuojr+X5JRPfXMlpH2gfy/1zv
	VHsSdRaG3JEMWJV9YC6mk6bi2ZkQGkCYRgjBQpRv8zZpzLtb9q6UKffH19TVGS9lnk2+DyCybBNqt
	NPWGfZWhFv/IO13wAwRK3GdAUus/kwCEt5wuOIoH3mzqRFNP8EvTzUk58SfjGwBiWPdA=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Penny Zheng <Penny.Zheng@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>, nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
 <VE1PR08MB52152792B6771236A6DF37E7F73D9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4251e0e2-fb53-b8a3-0323-f4ce892cf21e@xen.org>
 <alpine.DEB.2.21.2106031408320.7272@sstabellini-ThinkPad-T480s>
 <CAJ=z9a234ANQDR7BmtSm4AT0k3jrCn67s4b3zZ+jdkUgBMahbw@mail.gmail.com>
 <alpine.DEB.2.21.2106031625530.7272@sstabellini-ThinkPad-T480s>
 <113937c2-f1a7-c27f-8e2e-79de729ea3ce@xen.org>
 <BAC8BC8D-9CD6-4857-88C0-7DCE9267EF0E@arm.com>
 <e3a81b21-fd11-852c-aed7-25e71e4b5539@xen.org>
 <VE1PR08MB52152038F1366DA9B8A7D3D8F7309@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0e7d32ba-0caa-e23d-ddae-66970b972188@xen.org>
Date: Thu, 17 Jun 2021 13:22:08 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB52152038F1366DA9B8A7D3D8F7309@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 15/06/2021 08:08, Penny Zheng wrote:
> Hi Julien,

Hi Penny,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Wednesday, June 9, 2021 6:47 PM
>> To: Bertrand Marquis <Bertrand.Marquis@arm.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
>> <julien.grall.oss@gmail.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; nd
>> <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>
>>
>>
>> On 09/06/2021 10:56, Bertrand Marquis wrote:
>>> Hi All,
>>
>> Hi,
>>
>>>> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org
>>>> <mailto:julien@xen.org>> wrote:
>>>> Feel free to propose one. I suggested to use /reserved-memory because
>>>> this is the approach that makes the most sense to me (see my reply above).
>>>>
>>>> TBH, even after your explanation, I am still a bit puzzled into why
>>>> /reserved-memory cannot be leveraged to exclude domain region from
>>>> the hypervisor allocator.
>>>
>>> I really tend to think that the original solution from Penny is for
>>> now the easiest and simplest to document.
>>
>> I can live with Penny's solution so long we don't duplicate the parsing and we
>> don't create new datastructure in Xen for the new type of reserved memory.
>> However...
>>
> 
> Just to confirm my understanding: you are not only worried about the code duplication introduced in
> dt_unreserved_regions, but also about having to introduce another path in early_scan_node to parse my first implementation,
> "xen,static-mem = <...>", right?

That's correct.

> 
> On the code duplication part, I can think of a way to extract the common code, but as for introducing another new path to parse,
> FWIW, it seems inevitable if we are not re-using reserved-memory. ;/

I don't think this is inevitable. If you look at the code, we already 
share the parsing between reserved-memory and memory.

AFAICT, the main difference now is the property to parse. Other than 
that, the content is exactly the same. So we could pass the name of the 
property to parse.
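The idea of sharing one parser and passing in the property name can be sketched as follows. This is purely illustrative (the struct and function names are made up, and real code would walk a flattened device tree rather than an array), but it shows how "reg" and a static-allocation property with the same cell layout can share a single code path:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a device-tree node: a flat list of
 * (name, cells) properties. */
struct fake_prop {
    const char *name;
    const unsigned long *cells;
    size_t ncells;
};

/* One shared parser: only the property name differs between the
 * memory, reserved-memory and static-allocation cases, so the caller
 * passes it in instead of it being hard-coded.  Returns the number of
 * cells copied into banks[], or 0 if the property is absent. */
static size_t parse_banks(const struct fake_prop *props, size_t nprops,
                          const char *prop_name,
                          unsigned long *banks, size_t max_banks)
{
    for ( size_t i = 0; i < nprops; i++ )
    {
        if ( strcmp(props[i].name, prop_name) != 0 )
            continue;

        size_t n = props[i].ncells < max_banks ? props[i].ncells
                                               : max_banks;
        memcpy(banks, props[i].cells, n * sizeof(*banks));
        return n;
    }
    return 0;   /* property not present on this node */
}
```

With this shape, the early boot code would call the same helper once per property name it cares about, rather than duplicating the (base, size) cell-walking logic per property.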

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:24:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143847.264941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltq8R-00063O-56; Thu, 17 Jun 2021 11:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143847.264941; Thu, 17 Jun 2021 11:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltq8R-00063H-1r; Thu, 17 Jun 2021 11:24:35 +0000
Received: by outflank-mailman (input) for mailman id 143847;
 Thu, 17 Jun 2021 11:24:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/zva=LL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltq8Q-00063B-I3
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:24:34 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 540976f7-2254-4020-8b1e-ca050c4969e7;
 Thu, 17 Jun 2021 11:24:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 540976f7-2254-4020-8b1e-ca050c4969e7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623929073;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=FRTYgjr+wUmeNbU8WdEE6KfYISOKARM/JijwunUPSfs=;
  b=KwJ3wbVF6r0T3Hyd5tHz1QROHJ6VqX117jxp3g6dysMSFex/PuzSikn/
   rYq8p7H01kP2h2kr1mfynB3RWw/1t2zVojixmCljhYSoRdGfkY4sGTXSg
   62f0Yc6IB5J2ToM+KYsQsNvBqr7yTyPfl1UPx0bvcqtqzZ64AOgHpC/Dy
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rTw3W+Yrn9YB/J/ba8zCeW56gmYfNOwrEPKKVAhQVH6WFi0REFRwM1IPWl9a/LJG1PwED1E2Ph
 Dnk2u4Q5XIjCKbbVbNCxpHQ0vrL6pAgd41bLGWYwPOVg6ggs0ynuCH4ys/x8tBa9mZqtqUHr+q
 vHtQJ4N+QHCbY4CV80mNAVN5T4gRtpVfcz0iYp85Lk/g7wqedNiNdQkFgB4xbzB9ZTAiG4CiTj
 KoF0U33reukO8ElFc7IjnvFxcr/8FlNPuuGCgzPe3PWIaYx53lZ+eWE+r914xrI+92Gd1r2i1F
 YsI=
X-SBRS: 5.1
X-MesageID: 46082215
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Ui3X56hm8ve2WqPiF0VEx59YsHBQXrkji2hC6mlwRA09TyX4ra
 CTdZEgviMc5wx9ZJhNo7q90cq7IE80i6Qb3WB5B97LYOCMggeVxe9Zg7ff/w==
X-IronPort-AV: E=Sophos;i="5.83,280,1616472000"; 
   d="scan'208";a="46082215"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MCXb3CZhIzUy7B2pPrUhdLmj8pc2LGo/627FsH0+ZSMaA9Jvpbx2GYIwscve49H6fogmzCaSt7TxEiKGWRVphlqnOF1TP4MEhYPu2BI7h7R2tAHm0M+7SI4PMBRSq+MBX8b0bgSPA5cN/OEwdEsIh+XhRldLfF5qhFrc9U25FVftI2qcByLeVI4pytlzMCmwnLN5Am8hOAD7rsDJ6PuVRLlzZYpcqTGn4hN5fRffYfLlbZ01VBvM5IUIvfwJeV1HeyTgYuqvTkHWswWIZ8ZS/ePOQxsh81gMQxXvmcqFkwIv4LWEs/sLQXfpMTEttYKMQORfrvwe3Cr4oMgHPM3amQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FRTYgjr+wUmeNbU8WdEE6KfYISOKARM/JijwunUPSfs=;
 b=LFxxNmqGKdQNcUUono46lgtMhsA/UBUWNJYTO8TpPrtm35IlXQeH9Xenue8c8aaAclsLDucETkw8TA6O/O2mZRImWMkdzEsY7r0vnqrqW+YsSkXqlQQ9MvfJ8FM2kIx6NLE9Wr9CYoCGyMlS7AdywsYbbCl8XZ+C38ArpaAWgWrLPI+fRFM8imJGx3MKKHr6/0w6n9AhYzYHa43kf3gXoDdXSh/ulObt7smgGoTKqtFaUcuKJ2QzgJIxTR6bI5v6AQjXeo8rB4QutRLK9ueoRns4mgjvuuSTiVfo3ZSEqrq5eQpcrqe3yuzQ+/pvO0RiZDVoIILYxyOQJfV5sD6Gsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FRTYgjr+wUmeNbU8WdEE6KfYISOKARM/JijwunUPSfs=;
 b=MTqqIr8rX4bM6iddfF2K9kZFu2HllSuNa5YRhbBSl3w9aG6H6gM+LuyTX1tLIJ0Bia2H4G5iWkCzdNwc1s3siEKwxXgLDwDutDTHb3C4JswEvvqt3it3qyUtZnMpFCxeHJWb6tjKfqGYn2e3j9h2HK6XpP+xe98t61CeR+iblpI=
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
References: <20210616125129.26563-1-olaf@aepfle.de>
 <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
 <20210616173831.5e8214bc.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <861fbbdf-f3a6-ae1e-9487-b3ca37b30ca8@citrix.com>
Date: Thu, 17 Jun 2021 12:24:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210616173831.5e8214bc.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0114.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::30) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bc6cdc73-850c-4d93-304c-08d931827863
X-MS-TrafficTypeDiagnostic: BYAPR03MB4551:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4551DCBAC172BC35B35C9C20BA0E9@BYAPR03MB4551.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: g3MddswsixwcAipQbV55OhPp2KjnGkSVldftEg1Xv6rsEV7QzVaVtk8iK8sBwaA9DvuOCcdH4+drlHnkj6P3tbjQ3IUMAuuYiAsBVQub2FIw8/WHPhhEX91S37VbMllqqcPVaM2BCCzE2MOYrOBgxboeAVQDxU/IugH7BrI2NgpRKuNMP2QSpWqanIfuWd3IvIHUj0sF34eQUMVv4fMP/GDOD42hdq6BIa57EuQwvnJ4dyYzyIdoKOlCAygm6onHe03/gxbNMFm95Efy4N2knLMKoSUmm85NA2xCLnxeiNmMZ42nAqpy8MbyQBBoC8jJE5VLxYVxwCcVy9AXh1Zbau4I4Uyt39cptfd/BH5R4wEle3wIxDJjXV3PU4W424qf+q+pZELwINvNNIlAqKKEPg6bJp0YhWVJhQsLVQ7Pz/ACcSXh076Enas6Ps42q1bT1MsnszK3HG3mRJXR3DEUs16WommFVPTONJa6M/PvN8UM03diY+mIKUTgf9jdsdC9hJVOr0k3v7dpopod97nF2A6weYHlxAilTPo+Dj9prRL7rBPRlC6TT+r73TTjT8V2V3dDFl9dYnoMCUAqwg54VKENU6BVHw+tuHyOZFudRgXAAk63Ao0Xpmsgh63xr/pIAboOgnsButpfmpejAtVcE6hMmsYwVtYdCG1dQZY+fWv3LgZOeWHNP3Pg1Kl1zirf
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(396003)(346002)(366004)(66946007)(66476007)(66556008)(38100700002)(16526019)(31686004)(31696002)(53546011)(16576012)(6486002)(6916009)(86362001)(5660300002)(956004)(2906002)(478600001)(316002)(8936002)(6666004)(4744005)(4326008)(8676002)(186003)(26005)(2616005)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?Windows-1252?Q?YMn18TRgUslGBXdBLX+NqOh9xeiGiuz6mnmdUK3SQkbZxyupiT7BMLj8?=
 =?Windows-1252?Q?4xa4nfOK4D5lo7MQpMv3f0xWvJ6EmJwyTENO4ooLiVflFzV4ka98e2f+?=
 =?Windows-1252?Q?RWATVkYOHWk1DExyQuJivxzG331zxcp0k5RcXmZkZ6pkDcor3GaahXIA?=
 =?Windows-1252?Q?Gszo0Oq5s0mOxEEbagJcnJquH1KlIanzmd+oySuvYBl6VEy6AUW/+qvl?=
 =?Windows-1252?Q?w4r7vB8j+GKNfP/LTfCmXRO5XdsG4zHBq7kpgYfYZVAK8Eigy/HTdRnk?=
 =?Windows-1252?Q?Xupp5FB8KhROA6AElERp+erxw9sbL03Iw5vJiC1OYbHqzmsIAAl1dzsk?=
 =?Windows-1252?Q?Wax363xRWE9+9Pp0keOtlte1Yknu3Rt9K9x4VB9j4qkdp3J9EJcqWRHo?=
 =?Windows-1252?Q?QQeV8QxDrPqP6qc92nhqkHZLvq/9dGifkS9AgQlnRlf5NEEXJ5Y03iak?=
 =?Windows-1252?Q?ttBYV4rG2Z/Tip5y07kwBWn0iUbJsYHfkY/Y+Kqeh6J40VTvmdEcL2QB?=
 =?Windows-1252?Q?XXNWNfDcXIQu7J00h2jaX6dOPjhcYvxNlsS07vh6GsDjmItRYADSsNVn?=
 =?Windows-1252?Q?h5b5mz/u62NEMvSeZeDpFrl3+v2vCvdk9t0l3xlnrqmBbaIW9EIQ7lZ+?=
 =?Windows-1252?Q?Bie1chxhKA8ktZbKIAsfH/q+2yeuzDWnFe/FGVq4eh4aB9eTBeImLVze?=
 =?Windows-1252?Q?ONgGImiFzaVOSx8CZJOqnG/iIEM1kMpNFSWmkotvzQMElEZTp6Tu46HE?=
 =?Windows-1252?Q?0SR45DyFyUzSBJsGr+OCRbR9Ao10XNUpxRAFH0WOIS1+nCE/qH6iCEXT?=
 =?Windows-1252?Q?24ynnqvUH17lEQalBkpMb8tQhgdmc4+l3GMW0Y9mf4P1M47/5Vic01JP?=
 =?Windows-1252?Q?MoIjssB9O4AhliK+Mz0xK2f65h6C7M6VxTJInJCEZgJNjlGqDBizRVdg?=
 =?Windows-1252?Q?wfQ3p+NuZbrP3I05Pm5wS18z/HUlhnlSI4wptag8Y2XAlsiaEd1EuDXA?=
 =?Windows-1252?Q?OSo3PEVwCUOIJenWW8j+/25TPVETQ3SGcTf4+to4Ra+YEx0Kx0PHEWb/?=
 =?Windows-1252?Q?j1VhLqp5FyQLzx5qq2+akW7ldfaZ+bbxj9euaDwOMtSPD6HP6C/aL3HZ?=
 =?Windows-1252?Q?NAwSmhfbC/jt7RT6ViLkizjhw0IAt8zZYkk5s+D1IgA3SED35Mf1HVi0?=
 =?Windows-1252?Q?YYr1fpktlNsELt3Onj/1Ygg2+eqI8rYWiwM4JTvIiy0sOQOCd5wN2q5m?=
 =?Windows-1252?Q?DeDRGKIMAKoOTHGcaJncqaiIBthQZYqtv6TMdRAzl5QZizEkjpG3ao/8?=
 =?Windows-1252?Q?QMccTvf8t1H4WzCoLac9nh2PVZ7xEcMvbk9LzQObhkhplPghlvlAz+e2?=
 =?Windows-1252?Q?uthyfqUv+wDXwcKyDChkniUY3TlQU98t71In2fUzuLibx1RlKgPr1O4m?=
X-MS-Exchange-CrossTenant-Network-Message-Id: bc6cdc73-850c-4d93-304c-08d931827863
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 11:24:28.6051
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qFfEnrVcjjjM6JMw22Ipsev+RZrB3G7deupOyXrUqFQR+1DT8foMHUaw4G0IcPIS5+0H1NzNh/JFkCN6vLCc1WnFc9/cd2d8Cfjb8mfs0CA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4551
X-OriginatorOrg: citrix.com

On 16/06/2021 16:38, Olaf Hering wrote:
> On Wed, 16 Jun 2021 15:50:24 +0100,
> Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
>> 32bit toolstack build
> as in i386?
> How is this used in practice?

Every OSSTest run. Also, arm32 is absolutely a thing (the only reason
ARM can't migrate right now is that there is no logdirty support in
Xen yet).

> I guess such a build should be marked as CONFIG_MIGRATE=n in config/x86_32.mk?

Migration (v2) very definitely works for i386 toolstacks. Part of the
testing process during development was migrating a VM between 32-bit
and 64-bit dom0s, specifically to check that we'd got rid of all of the
bitness problems in the stream format.

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:31:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:31:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143866.264970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqEp-0008T7-AL; Thu, 17 Jun 2021 11:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143866.264970; Thu, 17 Jun 2021 11:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqEp-0008T0-73; Thu, 17 Jun 2021 11:31:11 +0000
Received: by outflank-mailman (input) for mailman id 143866;
 Thu, 17 Jun 2021 11:31:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqEn-0008Sq-VU; Thu, 17 Jun 2021 11:31:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqEn-0007zv-Pb; Thu, 17 Jun 2021 11:31:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqEn-0001vO-E7; Thu, 17 Jun 2021 11:31:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqEn-0004Da-D5; Thu, 17 Jun 2021 11:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UnrBRHXD0MAXwd1pFy4LtEjGIyH5DnwSgO3BfxjWp2o=; b=aXMAijizAjpu5Iafiz88WkRdEN
	aLW5ZLMUyFEepoiQiwMPCejsJvKf0Wsittl9+X1K39yjGzX/6w0K8bLsOACSG29mhqv9k5tMU2t3m
	D9Xhmr6hf8VlWQhPmqVllW6LlsrLdPJNHufFTKop3etTzeID0oEBRB6NEMrO4S97onoE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162867-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162867: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 11:31:09 +0000

flight 162867 xen-unstable real [real]
flight 162873 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162867/
http://logs.test-lab.xenproject.org/osstest/logs/162873/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    9 days
Failing since        162556  2021-06-08 22:39:08 Z    8 days   13 attempts
Testing same since   162854  2021-06-16 06:56:28 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1115 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:37:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143876.264984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqKc-0000rk-54; Thu, 17 Jun 2021 11:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143876.264984; Thu, 17 Jun 2021 11:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqKc-0000rd-1M; Thu, 17 Jun 2021 11:37:10 +0000
Received: by outflank-mailman (input) for mailman id 143876;
 Thu, 17 Jun 2021 11:37:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqKb-0000rR-0J; Thu, 17 Jun 2021 11:37:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqKa-000861-Mq; Thu, 17 Jun 2021 11:37:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqKa-00024D-Db; Thu, 17 Jun 2021 11:37:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltqKa-0001s3-D6; Thu, 17 Jun 2021 11:37:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=xggqrJ+xdqa5I+zWZ22JJmQ7ThSgVswN5Kej2QV3OHU=; b=nXfuqLKUW5YYeKqmUx0BtoaLCF
	K5NsFcEVTjnmgjuTnt6m9UTAVsZFX3BEPn6AkjPdox5YIjPNC+yeCysbF1ga1AM+2MSlzCAoXQCsm
	VjM1krMvRQFxfGsnJ/9UFmVDrm/67ofvsNwVowofgfi1P0ZCmB8v2ZfnwL5+dWysfJfs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-win7-amd64
Message-Id: <E1ltqKa-0001s3-D6@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 11:37:08 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-win7-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161212/


  (Revision log too long, omitted.)


*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161218/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the 2
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing the QAPIEvent_(str|lookup) symbols
      too; however, these are both auto-generated as standard for any enum
      in QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.guest-saverestore --summary-out=tmp/162874.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-win7-amd64 guest-saverestore
Searching for failure / basis pass:
 162860 fail [host=godello1] / 160125 ok.
Failure / basis pass flights: 162860 / 160125
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#030ba3097a6e7d08b99f8a9d19a322f61409c1f6-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b12498fc575f2ad30f09fe78badc7fef526e2d76-1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#21657ad4f01a634beac570c64c0691e51b9cf366-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 31357 nodes in revision graph
Searching for test results:
 162778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162795 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162818 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162840 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162850 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162860 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162872 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 162874 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 160125 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161184 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161186 fail irrelevant
 161188 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161189 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 17422da082ffcecb38bd1f2e2de6d56a61e8cd9c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161190 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161193 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161195 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0f418a207696b37f05d38f978c8873ee0a4f9815 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161197 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161199 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161200 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 87a80dc4f2f5e51894db143685a5e39c8ce6f651 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161202 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161205 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161207 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161191 fail irrelevant
 161208 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161211 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161212 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161213 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161215 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161217 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161218 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162762 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162676 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160125 (pass), for basis pass
 Result found: flight 162860 (fail), for basis failure
 Repro found: flight 162872 (pass), for basis pass
 Repro found: flight 162874 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
No revisions left to test, checking graph state.
 Result found: flight 161190 (pass), for last pass
 Result found: flight 161199 (fail), for first failure
 Repro found: flight 161207 (pass), for last pass
 Repro found: flight 161208 (fail), for first failure
 Repro found: flight 161211 (pass), for last pass
 Repro found: flight 161212 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161212/


  (Revision log too long, omitted.)

 Result found: flight 161202 (pass), for last pass
 Result found: flight 161205 (fail), for first failure
 Repro found: flight 161213 (pass), for last pass
 Repro found: flight 161215 (fail), for first failure
 Repro found: flight 161217 (pass), for last pass
 Repro found: flight 161218 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161218/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing QAPIEvent_(str|lookup) symbols too,
      however, these are both auto-generated as standard for any enum in
      QAPI. As such they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
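
For anyone updating tooling that still issues the removed commands, a sketch (not part of this report, and the concrete argument values are illustrative only) of how the old QMP invocations map onto their replacements might look like this:

```python
import json

# Deprecated QMP commands removed in the merge above, mapped to the
# replacement invocations. Parameter names come from the QAPI schema
# ('max-bandwidth' in bytes/sec, 'downtime-limit' in milliseconds,
# 'xbzrle-cache-size' in bytes); the values are placeholders.
REPLACEMENTS = {
    "query-cpus": {"execute": "query-cpus-fast"},
    "migrate_set_speed": {
        "execute": "migrate-set-parameters",
        "arguments": {"max-bandwidth": 128 * 1024 * 1024},
    },
    "migrate_set_downtime": {
        "execute": "migrate-set-parameters",
        "arguments": {"downtime-limit": 300},
    },
    "migrate-set-cache-size": {
        "execute": "migrate-set-parameters",
        "arguments": {"xbzrle-cache-size": 64 * 1024 * 1024},
    },
}

def qmp_line(cmd: dict) -> str:
    """Serialize one QMP command as a newline-terminated JSON line,
    ready to write to a QMP socket."""
    return json.dumps(cmd, sort_keys=True) + "\n"
```

Note that 'query-cpus-fast' returns different field names than the old 'query-cpus', so callers must adjust their result parsing as well, not just the command name.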

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.341255 to fit
pnmtopng: 210 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162874: tolerable FAIL

flight 162874 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162874/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:40:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:40:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143884.264998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqOB-0002Ku-T9; Thu, 17 Jun 2021 11:40:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143884.264998; Thu, 17 Jun 2021 11:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqOB-0002Kn-Px; Thu, 17 Jun 2021 11:40:51 +0000
Received: by outflank-mailman (input) for mailman id 143884;
 Thu, 17 Jun 2021 11:40:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RfGr=LL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ltqOA-0002Kf-8j
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:40:50 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a44f6786-c1f6-4c62-b1f6-5d631c10fb83;
 Thu, 17 Jun 2021 11:40:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a44f6786-c1f6-4c62-b1f6-5d631c10fb83
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46083128
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,280,1616472000"; 
   d="scan'208";a="46083128"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Date: Thu, 17 Jun 2021 13:40:39 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>, Jan Beulich <jbeulich@suse.com>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George
 Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 3/3] x86/ept: force WB cache attributes for grant and
 foreign maps
Message-ID: <YMs0t/gQEK4kUGiQ@Air-de-Roger>
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-4-roger.pau@citrix.com>
 <MWHPR11MB188653F8277F861018DB00118C0E9@MWHPR11MB1886.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <MWHPR11MB188653F8277F861018DB00118C0E9@MWHPR11MB1886.namprd11.prod.outlook.com>
X-ClientProxiedBy: MR2P264CA0098.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ec37145-0385-46b5-aab6-08d93184becf
X-MS-TrafficTypeDiagnostic: DM6PR03MB3673:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3673D5E9BE463304EDA0C3BE8F0E9@DM6PR03MB3673.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ec37145-0385-46b5-aab6-08d93184becf
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 11:40:45.9466
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ga5mQDt84Yi4713hbDaU9o3uLHQmln7+HbtNgtQFhKzzBSJXp4M78PZinPmjdOuUWCuyFEyCqJYMrZdVda4iTg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3673
X-OriginatorOrg: citrix.com

On Thu, Jun 17, 2021 at 09:31:28AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: Saturday, May 29, 2021 1:40 AM
> > 
> > Force WB type for grants and foreign pages. Those are usually mapped
> > over unpopulated physical ranges in the p2m, and those ranges would
> > usually be UC in the MTRR state, which is unlikely to be the correct
> > cache attribute. It's also cumbersome (or even impossible) for the
> > guest to be setting the MTRR type for all those mappings as WB, as
> > MTRR ranges are finite.
> > 
> > Note that on AMD we cannot force a cache attribute because of the lack
> > of ignore PAT equivalent, so the behavior here slightly diverges
> > between AMD and Intel (or EPT vs NPT/shadow).
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 
> btw, an incorrect cache attribute brings functional/performance problems.
> it'd be good to explain a bit why this problem doesn't hurt AMD in the
> commit msg...

What about re-writing the last commit paragraph as:

Note that this is not an issue on AMD because WB cache attribute is
already set on grants and foreign mappings in the p2m and MTRR types
are ignored. Also on AMD Xen cannot force a cache attribute because of
the lack of ignore PAT equivalent, so the behavior here slightly
diverges between AMD and Intel (or EPT vs NPT/shadow).

Thanks, Roger.
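
As background for the EPT vs NPT/shadow distinction discussed in this thread: Intel EPT leaf entries carry their own memory-type field (bits 5:3) plus an "ignore PAT" bit (bit 6), which is what lets Xen force WB on EPT, while AMD NPT has no equivalent override. A minimal sketch (not Xen's actual code; the bit layout is architectural per the Intel SDM, the helper itself is illustrative) of composing such an entry:

```python
# EPT leaf-entry memory-type fields, per the Intel SDM chapter on
# EPT and memory typing. The encodings (UC=0, WC=1, WT=4, WP=5, WB=6)
# and bit positions are architectural; ept_force_wb() is only a sketch
# of the operation the patch performs for grant/foreign mappings.
EPT_MT_SHIFT = 3          # bits 5:3 hold the EPT memory type
EPT_IGNORE_PAT = 1 << 6   # bit 6: ignore the guest's PAT type
MT_UC, MT_WC, MT_WT, MT_WP, MT_WB = 0, 1, 4, 5, 6

def ept_force_wb(entry: int) -> int:
    """Return the entry with its memory type forced to WB and the
    guest PAT ignored, so guest MTRR/PAT settings cannot demote it."""
    entry &= ~(0x7 << EPT_MT_SHIFT)   # clear the existing memory type
    return entry | (MT_WB << EPT_MT_SHIFT) | EPT_IGNORE_PAT
```

On NPT the effective type is derived from host PAT combined with guest PAT/MTRRs, with no per-entry "ignore" control, hence the divergence the commit message describes.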


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:56:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143904.265026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqdR-000558-MU; Thu, 17 Jun 2021 11:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143904.265026; Thu, 17 Jun 2021 11:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqdR-000551-JW; Thu, 17 Jun 2021 11:56:37 +0000
Received: by outflank-mailman (input) for mailman id 143904;
 Thu, 17 Jun 2021 11:56:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k/hY=LL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltqdQ-00054v-Uc
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:56:36 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2a2138e-51af-41c6-80a7-3a8d838620ac;
 Thu, 17 Jun 2021 11:56:35 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2056.outbound.protection.outlook.com [104.47.10.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-XEI3itAjM0aPmSpGkxDi-g-1; Thu, 17 Jun 2021 13:56:33 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7039.eurprd04.prod.outlook.com (2603:10a6:800:12b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Thu, 17 Jun
 2021 11:56:32 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Thu, 17 Jun 2021
 11:56:31 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR10CA0071.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Thu, 17 Jun 2021 11:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2a2138e-51af-41c6-80a7-3a8d838620ac
X-MC-Unique: XEI3itAjM0aPmSpGkxDi-g-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, "committers@xenproject.org"
 <committers@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
 <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
Date: Thu, 17 Jun 2021 13:56:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR10CA0071.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:15::24) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 812f0bba-8563-4143-011b-08d93186f2c1
X-MS-TrafficTypeDiagnostic: VI1PR04MB7039:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7039A8449BFF32FD57C61717B30E9@VI1PR04MB7039.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 812f0bba-8563-4143-011b-08d93186f2c1
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 11:56:31.8549
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qGZxbmcVHhqEum97uXAzZQ2OnFpkL8QdCqSQ8LCBPW0TY2hDpCUNp1RYS7s6X64VGC/otUFnh7MCnTKhRx5Pyw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7039

On 16.06.2021 17:43, Andrew Cooper wrote:
> On 16/06/2021 09:48, Jan Beulich wrote:
>> On 13.05.2021 22:15, Andrew Cooper wrote:
>>> On 13/05/2021 04:56, osstest service owner wrote:
>>>> flight 161917 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/161917/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>  test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
>>>>  test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
>>>>  test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
>>>>  test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
>>>>  test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898
>>> I reported these on IRC, and Julien/Stefano have already committed a fix.
>>>
>>>> Tests which are failing intermittently (not blocking):
>>>>  test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917
>>> While noticing the ARM issue above, I also spotted this one by chance.
>>> There are two issues.
>>>
>>> First, I have reverted bed7e6cad30 and edcfce55917.  The XTF test is
>>> correct, and they really do reintroduce XSA-286.  It is a miracle of
>>> timing that we don't need an XSA/CVE against Xen 4.15.
>> As expressed at the time already, I view this reverting you did, without
>> there being any emergency and without you having gathered any acks or
>> allowed for objections, as overstepping your competencies. I did post a
>> patch to the XTF test, which I believe is wrong, without having had any
>> feedback there either. Unless I hear back by the end of this week with
>> substantial arguments of why I am wrong (which would need to also cover
>> the fact that an issue was found with 32-bit PAE only, in turn supporting
>> my view on the overall state), I intend to revert your revert early next
>> week.
>
> It has frankly taken a while to formulate a civil reply.
>
> I am very irritated that you have *twice* recently introduced security
> vulnerabilities by bypassing my reviews/objections on patches.

I'm sorry, Andrew, but already in my original reply a month ago I did
express that I couldn't find any record of you having objected to the
changes. It doesn't help that you claim you've objected when you
really didn't (which is the impression I get from not finding anything,
and which also matches my recollection of what was discussed).

I don't think I know which 2nd instance you're referring to, and hence
I can't respond to that aspect.

> At the time, I had to drop work on an in-progress security issue to
> urgently investigate why we'd regressed upstream, and why OSSTest hadn't
> blocked it.
>
> I am more generally irritated that you are constantly breaking things
> which GitlabCI can tell you is broken, and that I'm having to drop work
> I'm supposed to be doing to unbreak them.

GitlabCI doesn't tell me anything just yet, unless I actively go and
poll it. And as mentioned just yesterday on irc, I don't think I can
easily navigate my way through those web pages to find breakage I may
have introduced and hence ought to go and fix. Unlike osstest, where I
am told what failed, and I know where to find the corresponding logs.

It's also not clear to me at all to what extent GitlabCI would have
spotted the issue here, no matter whether it's caused by a hypervisor
change or the XTF test being wrong. So far I've seen GitlabCI only
spot build issues.

I'm also puzzled, to put it mildly, by your use of "constantly" here.

> In the case of this revert specifically, I did get agreement on IRC
> before reverting.

How can I know you did? You didn't even care to reply to my mail from
a month ago. And there was no reason to make an emergency out of this
and ask on irc. You could have sent mail just as is done for all
other normal bug fixes etc. Iirc I was on PTO at that time; it would
hence only have been fair to wait until my return.

> In your proposed edit to the XTF test, you say
>
>   L3 entry updates aren't specified to take immediate effect in PAE mode:
>=20
> but this is not accurate.  It's what the Intel SDM says, but is
> contradicted by the AMD APM which states that this behaviour is not true
> under NPT under any circumstance, nor is it true on native.
>=20
> Furthermore, any 32bit PV guest knowing it is running on a 64bit Xen
> (even from simply checking Xen >= 4.3) can rely on the relaxed
> behaviour, irrespective of what the unwritten PV ABI might want to say
> on the matter, due to knowing that it is running on Long mode paging as
> opposed to legacy PAE paging.

Neither of these are reasons for a 32-bit guest to _rely_ on such
behavior. Hence the change to the XTF test, which so far you also
didn't care to reply to.

I'm aware of NPT having different behavior, but can you point me to
the place in the AMD doc saying so also for native? In fact I can find a
statement to the contrary:

"The behavior of PAE mode in a nested-paging guest differs slightly
 from the behavior of (host-only) legacy PAE mode, in that the
 guest's four PDPEs are not loaded into the processor at the time
 CR3 is written. Instead, the PDPEs are accessed on demand as part
 of a table walk."

This to me implies that in the native case the behavior matches
Intel's.
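
The distinction being argued here can be modelled in a few lines. The
sketch below is illustrative only -- the class and method names are
invented for this example and taken from neither Xen nor the XTF test;
it merely contrasts PDPEs snapshotted at CR3 load (legacy PAE) with
L3 entries fetched on demand (long-mode paging):

```python
class LegacyPAEWalker:
    """Legacy (host) PAE: the four PDPEs are loaded into internal
    registers when CR3 is written; later memory writes to the PDPT
    are not observed until the next CR3 reload."""

    def __init__(self, pdpt):
        self.pdpt = pdpt            # guest-visible L3 table in memory
        self.cached = list(pdpt)    # snapshot taken at CR3 load

    def write_cr3(self):
        self.cached = list(self.pdpt)   # PDPEs re-read from memory

    def l3_entry(self, i):
        return self.cached[i]           # walks use the cached copy


class LongModeWalker:
    """Long-mode paging (what a 32-bit PV guest on 64-bit Xen actually
    runs under): L3 entries are fetched from memory on demand, so an
    update takes effect without any CR3 write."""

    def __init__(self, pdpt):
        self.pdpt = pdpt

    def write_cr3(self):
        pass                            # nothing cached to refresh

    def l3_entry(self, i):
        return self.pdpt[i]


pdpt = [0x1000, 0x2000, 0x3000, 0x4000]
legacy, longmode = LegacyPAEWalker(pdpt), LongModeWalker(pdpt)

pdpt[0] = 0x5000                        # guest rewrites an L3 entry
assert legacy.l3_entry(0) == 0x1000     # stale until CR3 is reloaded
assert longmode.l3_entry(0) == 0x5000   # visible immediately
legacy.write_cr3()
assert legacy.l3_entry(0) == 0x5000     # now picked up
```

Whether a guest may depend on the long-mode behaviour, rather than
reloading CR3 as the legacy model requires, is exactly the ABI point
in dispute.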

> If these two technical reasons aren't good enough, then consider the
> manifestation of the issue itself.  XSA-286 is specifically about Xen
> editing the wrong PTE, because of the use of linear pagetables, in light
> of the guest not flushing the TLB.

The PTE edited is, as said, only perceived as wrong by the XTF test.
Hence the patch to correct it.

> If you were to remove linear pagetables from Xen, the issue
> (do_mmu_update() edits the wrong PTE) would cease to manifest even on
> legacy PAE paging, demonstrating that the problem is with Xen's actions,
> not with the guests.

And if I introduced shadowing of the L3E writes, pushing the new ones
into the live page tables only upon CR3 writes, the issue would
reappear. It ought to be permissible to make such a change to Xen,
even if we may have no specific reason to do so at this point (albeit
I think we really should, to match bare metal behavior).

Also, in my request to you (still in context above) I did specifically
ask about the aspect of the observed issue only manifesting on 32-bit,
yet you claiming a general problem, i.e. also affecting 64-bit. You
didn't comment on this at all.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 11:57:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 11:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143909.265037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqeT-0005fj-0U; Thu, 17 Jun 2021 11:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143909.265037; Thu, 17 Jun 2021 11:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltqeS-0005fc-Th; Thu, 17 Jun 2021 11:57:40 +0000
Received: by outflank-mailman (input) for mailman id 143909;
 Thu, 17 Jun 2021 11:57:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k/hY=LL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ltqeS-0005fW-4B
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 11:57:40 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d5a3ba1-62df-4f58-ba08-b5e81983f9f7;
 Thu, 17 Jun 2021 11:57:39 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2052.outbound.protection.outlook.com [104.47.1.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-30-g02NrFD3NH-6bMqYU10Dww-1; Thu, 17 Jun 2021 13:57:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7152.eurprd04.prod.outlook.com (2603:10a6:800:12b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Thu, 17 Jun
 2021 11:57:36 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Thu, 17 Jun 2021
 11:57:36 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FRYP281CA0007.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Thu, 17 Jun 2021 11:57:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d5a3ba1-62df-4f58-ba08-b5e81983f9f7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623931058;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hXiYVOEUaB+fQNMPF15DO6eyKQSCSFHVjXjFHEaUKOk=;
	b=UF/D+sYFDot897EJcwiABr4kVsxtXzG9dC495o5akHxlJb+wWm6mK0o4Gs8VU9UYmhCOmg
	duAmjdqqfgvnV0diX1Sb53e1g4Oo0XhjwzyG1YaDLbV3NwuobYyyk120Cp/Xv3r49+AtOP
	4ztbKHhMvuBW8Jvfi63JI+U5vjJumA0=
X-MC-Unique: g02NrFD3NH-6bMqYU10Dww-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Nen8VcGgLQZVuOQxJeeD3ATwgKbdiHTuT+0mrsLIJlc5Of/flu0bPhRT9oE+RZYa+q3mG7kTl/YsSQu27eImv1AmR3zMfcs4sJcvFnj59oYCNCvoyS8QxQWxAZ84k/Vk/23nBSnOVcV4+Cq6otu+5pAGbzCaFjs4+t6alWQyU3Y5h94cqUSdhacR6F1HWZnbfWAhOX9d8hVGgshark+0z/8mfmJEY5QRWi0ehKbacCFX5BCnuLhd2nzND7XsxessS8RtCkeVTJYeRTQfaabdOPG2lN36FQdlZTHoVY2r/xchc9jJ917V38wTGUB3/21KpA/pzZvITxjLhaDYbieCUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U3DAWlMsVTted9ih7e+HT8GUOf6oiGB1hE39lPwO7dE=;
 b=OOmnE/vrUBtsAv2fheWlOYSiB6d82sKk/hHQjCs8kyf1h6FDy9JTaQFAIAf13tHcZeJPTyldwurB3zRJfqD09hpJpMqrbYTStxz1aQlDiFks+Hbp4bQj6+C0X6PmCmxVZ+g4+mcc0W3jqdrRXV4ugGBoU7AnsYZwXWOK6ODpPjHSuu9VGSZw7u1PLoZhz2p2QhVmuxgv8xOtlUa742QBDceCPTslGayz69i6nBfiK1nhAtZekUe6bquFeVv8oMgqiiS/oVjGslLKY27k2gU1wDLSRaLxDWf3ctTN2jkV+qSPfUjQZq5k4YOl4x4Yf0ybUvaHY0TSnQTXTV1ck/sdJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 3/3] x86/ept: force WB cache attributes for grant and
 foreign maps
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "Tian, Kevin" <kevin.tian@intel.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Nakajima, Jun" <jun.nakajima@intel.com>,
 "Cooper, Andrew" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-4-roger.pau@citrix.com>
 <MWHPR11MB188653F8277F861018DB00118C0E9@MWHPR11MB1886.namprd11.prod.outlook.com>
 <YMs0t/gQEK4kUGiQ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e81419b2-1f35-85f5-af67-6f51e69d0314@suse.com>
Date: Thu, 17 Jun 2021 13:57:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <YMs0t/gQEK4kUGiQ@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FRYP281CA0007.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::17)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 80df7251-37e6-47fb-a8c2-08d931871902
X-MS-TrafficTypeDiagnostic: VI1PR04MB7152:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB71527CE605FF41CF8FED504EB30E9@VI1PR04MB7152.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80df7251-37e6-47fb-a8c2-08d931871902
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 11:57:36.0356
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: boan1tib7nVC8Y57kyejYeyHRye0VeaSXlWSPsLqfiZ335ZNvFuexq8GupSzeIQSV4x8HXOVrgeWjhuca2OBCQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7152

On 17.06.2021 13:40, Roger Pau Monné wrote:
> On Thu, Jun 17, 2021 at 09:31:28AM +0000, Tian, Kevin wrote:
>>> From: Roger Pau Monne <roger.pau@citrix.com>
>>> Sent: Saturday, May 29, 2021 1:40 AM
>>>
>>> Force WB type for grants and foreign pages. Those are usually mapped
>>> over unpopulated physical ranges in the p2m, and those ranges would
>>> usually be UC in the MTRR state, which is unlikely to be the correct
>>> cache attribute. It's also cumbersome (or even impossible) for the
>>> guest to be setting the MTRR type for all those mappings as WB, as
>>> MTRR ranges are finite.
>>>
>>> Note that on AMD we cannot force a cache attribute because of the lack
>>> of ignore PAT equivalent, so the behavior here slightly diverges
>>> between AMD and Intel (or EPT vs NPT/shadow).
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
>>
>> btw incorrect cache attributes bring functional/performance problems.
>> it'd be good to explain a bit why this problem doesn't hurt AMD in the
>> commit msg...
>
> What about re-writing the last commit paragraph as:
>
> Note that this is not an issue on AMD because WB cache attribute is
> already set on grants and foreign mappings in the p2m and MTRR types
> are ignored. Also on AMD Xen cannot force a cache attribute because of
> the lack of ignore PAT equivalent, so the behavior here slightly
> diverges between AMD and Intel (or EPT vs NPT/shadow).
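
The mechanism the patch leans on can be sketched as a toy model: an
Intel EPT leaf carries a memory type plus an "ignore PAT" (IPAT) bit,
and setting IPAT makes the EPT type authoritative regardless of the
guest's PAT choice. The sketch below is a deliberate simplification
(only WB and UC are modelled, and the real combining rules are more
involved); the function name is invented for this example:

```python
# Toy model of EPT memory-type selection with the "ignore PAT" (IPAT)
# bit. Simplified: only WB and UC are modelled, and when IPAT is clear
# a UC choice on either side is taken to win.

WB, UC = "WB", "UC"

def effective_type(ept_type, ipat, guest_pat_type):
    if ipat:
        return ept_type                 # guest PAT is ignored outright
    return UC if UC in (ept_type, guest_pat_type) else guest_pat_type

# Forcing WB with IPAT set keeps a grant/foreign mapping cacheable even
# if the guest's PAT (or a UC default over unpopulated ranges) says UC:
assert effective_type(WB, ipat=True, guest_pat_type=UC) == WB
# Without IPAT, the UC choice would degrade the mapping:
assert effective_type(WB, ipat=False, guest_pat_type=UC) == UC
```

NPT has no equivalent of the IPAT override, which is why the patch's
behavior diverges between EPT and NPT/shadow.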

I'll try to remember to swap this in when committing.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 12:22:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 12:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143922.265049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltr2K-0000dd-D0; Thu, 17 Jun 2021 12:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143922.265049; Thu, 17 Jun 2021 12:22:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltr2K-0000dW-9q; Thu, 17 Jun 2021 12:22:20 +0000
Received: by outflank-mailman (input) for mailman id 143922;
 Thu, 17 Jun 2021 12:22:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltr2I-0000dM-95; Thu, 17 Jun 2021 12:22:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltr2I-0000RK-0V; Thu, 17 Jun 2021 12:22:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltr2H-0004N7-KT; Thu, 17 Jun 2021 12:22:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltr2H-00057R-Jw; Thu, 17 Jun 2021 12:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s3E+GUA0hyAKroIrLjK4tvEFLDX89U00F0M/ONhCXfo=; b=PifyfmBj54TVv5BPQ998UgEwye
	QP3YWHYo/BMjHHjVqQqdkHbf2tN+v0Dd+vaeWvI6AC3AW9NsxNJBo3a1uPgTqW6+akCZ/Pzyyn/n/
	gQYvDZHhLH+6ZdxgiowS705webWtC2AOqp5i/v9mcKsdUP/j7gbufnank+OT81w9b9Eo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162868-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162868: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6b00bc639f1f2beeff3595e1bab9faaa51d23b01
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 12:22:17 +0000

flight 162868 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162868/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 20 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                6b00bc639f1f2beeff3595e1bab9faaa51d23b01
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  320 days
Failing since        152366  2020-08-01 20:49:34 Z  319 days  545 attempts
Testing same since   162868  2021-06-16 22:15:06 Z    0 days    1 attempts

------------------------------------------------------------
6171 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1681021 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 12:36:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 12:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143968.265086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrFY-0002aC-Uz; Thu, 17 Jun 2021 12:36:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143968.265086; Thu, 17 Jun 2021 12:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrFY-0002a5-Rc; Thu, 17 Jun 2021 12:36:00 +0000
Received: by outflank-mailman (input) for mailman id 143968;
 Thu, 17 Jun 2021 12:36:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iG+e=LL=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1ltrFX-0002Zz-Fx
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 12:36:00 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9f9657a-1dbd-477c-a187-4388b7edb73f;
 Thu, 17 Jun 2021 12:35:57 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:51062
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1ltrK1-0005dC-5w; Thu, 17 Jun 2021 14:40:37 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9f9657a-1dbd-477c-a187-4388b7edb73f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=Iq7qdsDsVjBjx3WrDBIM54QXGdH+Ntk8viW76NT5q2g=; b=fyKFlq8/vKowL8Pv4ceazD6cNR
	heZJUaVIV0RryDwRVVvuH4U4C5+XHvUi7E1r72BTz4J2xPWH+h3Pdc6VjRcA1ifK4CgYTUfG5WZat
	qSFHOqdeOYjYSbRUWw/T24bV7JecZTEVWTYpso6cztNwuMwKshaTKoPjmq5kvtjfhJ0Y=;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Juergen Gross <jgross@suse.com>,
 linux-kernel <linux-kernel@vger.kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Linus Torvalds <torvalds@linux-foundation.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <6bcd447e-fd49-0519-a59e-478f84e9120f@suse.com>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <46c90fab-5928-d181-432f-228304a7e019@eikelenboom.it>
Date: Thu, 17 Jun 2021 14:35:50 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <6bcd447e-fd49-0519-a59e-478f84e9120f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/06/2021 12:30, Juergen Gross wrote:
> On 17.06.21 11:26, Sander Eikelenboom wrote:
>> L.S.,
>>
>> I just tried to upgrade and test the Linux kernel, going from the 5.12
>> kernel series to 5.13-rc6 on my home server with Xen, but ran into some
>> trouble.
>>
>> Some VM's boot fine (with more than 256MB memory assigned), but the
>> smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is
>> running 5.13-rc6 (but it has more memory assigned, so that is not
>> unexpected).
>>
>> The 5.13-rc6'ish kernel is a pull of today, tried both with and without
>> AKPM's latest patches, but that makes no difference.
>>
>> Below are stacktraces from a few of the crashing VM's.
>>
>> Attached is the kernel .config
>>
>> Any pointers ?
> 
> Did you compare memory usage with a bootable guest between kernel 5.12
> and 5.13-rc6?
> 
> Any chance you could bisect?
> 
>
> Juergen
> 

Yeah a bisection is already underway, still 10 steps to go ...
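For reference, the bisection workflow described here can be illustrated end-to-end on a throwaway repository (everything below is illustrative; the real run operates on the kernel tree, marking v5.12 good and the 5.13-rc6 pull bad, with a script that boots the 256MB PVH guest):

```shell
# Illustration (on a throwaway repo) of the "git bisect run" workflow:
# eight commits, with the "bug" (a value >= 5 in n.txt) introduced in c5.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "you"
for i in 1 2 3 4 5 6 7 8; do
    echo "$i" > n.txt
    git add n.txt
    git commit -qm "c$i"
done
# Test script: exit 0 ("good") while the bug is absent, nonzero ("bad") after.
printf '#!/bin/sh\ntest "$(cat n.txt)" -lt 5\n' > check.sh
chmod +x check.sh
git bisect start HEAD "$(git rev-list HEAD | tail -n 1)"   # <bad> <good>
git bisect run ./check.sh
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"
git bisect reset
```

Since each step halves the candidate range, a merge window's worth of commits (on the order of 15k between two kernel releases) needs roughly log2 of that, about 14 build-and-boot cycles in total.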

--
Sander


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 12:55:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 12:55:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143975.265097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrY8-0004rI-J5; Thu, 17 Jun 2021 12:55:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143975.265097; Thu, 17 Jun 2021 12:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrY8-0004rB-GA; Thu, 17 Jun 2021 12:55:12 +0000
Received: by outflank-mailman (input) for mailman id 143975;
 Thu, 17 Jun 2021 12:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nXfV=LL=ipxe.org=mcb30@srs-us1.protection.inumbo.net>)
 id 1ltrY7-0004r5-H8
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 12:55:11 +0000
Received: from blyat.fensystems.co.uk (unknown
 [2a05:d018:a4d:6403:2dda:8093:274f:d185])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8632a62c-0b59-47a1-a6c8-823fb41cdea7;
 Thu, 17 Jun 2021 12:55:10 +0000 (UTC)
Received: from pudding.home (unknown [213.205.240.250])
 by blyat.fensystems.co.uk (Postfix) with ESMTPSA id 0B10544175;
 Thu, 17 Jun 2021 12:55:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8632a62c-0b59-47a1-a6c8-823fb41cdea7
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC11
To: "Bernhard M. Wiedemann" <bwiedemann@suse.de>, Olaf Hering <olaf@aepfle.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210615212613.6270-1-olaf@aepfle.de>
 <b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
 <20210616145846.305d3ce1.olaf@aepfle.de>
 <fe5ac73a-6026-6db6-6756-911f803adc5f@suse.de>
From: Michael Brown <mcb30@ipxe.org>
Message-ID: <d8a47c67-0ff3-4fce-5fe5-d444c4c4f859@ipxe.org>
Date: Thu, 17 Jun 2021 13:55:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <fe5ac73a-6026-6db6-6756-911f803adc5f@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham autolearn_force=no version=3.4.2
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on
	blyat.fensystems.co.uk

On 16/06/2021 14:33, Bernhard M. Wiedemann wrote:
> So this means, CentOS7 binutils has
> 9cb80f72d8b from 2011-12-21
> but not
> git blame binutils/objcopy.c|grep enable-determini
> 955d0b3bd75 (Roland McGrath       2013-01-07 17:40:59 +0000  549)   -D --enable-deterministic-archives\n\
> 2e30cb575a1 (Cary Coutant         2012-04-25 17:50:14 +0000  555)   -D --enable-deterministic-archives\n\
> 
> one way out could be to call objcopy -D $PARAMS || objcopy $PARAMS

Testing on a clean "centos:7" container shows that "objcopy -D" works as 
expected (and "objcopy --help" shows the option as existing).

This container environment has /etc/centos-release showing:

   CentOS Linux release 7.6.1810 (Core)

Could you provide a simple environment in which to reproduce the problem?
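For what it's worth, the `objcopy -D $PARAMS || objcopy $PARAMS` fallback suggested above can be sketched as a small wrapper (the name `objcopy_det` is illustrative, not anything in the tree):

```shell
# Sketch of the suggested fallback: try deterministic mode first, and
# retry without -D when the installed objcopy (e.g. an older binutils)
# rejects the option. "objcopy_det" is a hypothetical wrapper name.
objcopy_det() {
    objcopy -D "$@" 2>/dev/null || objcopy "$@"
}
```

One caveat with the bare `||`: a failure for any other reason (bad input file, permissions) would also silently retry without `-D`, so a build rule might prefer probing `objcopy --help` for the option once up front instead.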

Thanks,

Michael


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 13:01:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 13:01:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143982.265109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrds-0006GT-86; Thu, 17 Jun 2021 13:01:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143982.265109; Thu, 17 Jun 2021 13:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltrds-0006GM-55; Thu, 17 Jun 2021 13:01:08 +0000
Received: by outflank-mailman (input) for mailman id 143982;
 Thu, 17 Jun 2021 13:01:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltrdq-0006GB-Ov; Thu, 17 Jun 2021 13:01:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltrdq-00016Q-IH; Thu, 17 Jun 2021 13:01:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltrdq-0005PU-6B; Thu, 17 Jun 2021 13:01:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltrdq-0004jj-5T; Thu, 17 Jun 2021 13:01:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qjgBsOyZh4ttLj60HW/wh/pn6kV+Q3Pdy9Pifu0+iZo=; b=pv5wJfIQesZzrq3jZpg/4r9agA
	4WsWbdAl5HHUARL7ofuOp/a8VF1vYXjALBz0XLPNpCUjOXMOWSgX7bTiOpzIC7pvJjyyqgJV3/+qH
	L6M2vNZGun+W+TPcvh0WnuJK9e0oz4CcHK4I0Y9Mq8QaSRTNl5y2SNupg+gHNyjid8Nk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162875-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162875: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=1162ae8297e1fc9871e615cad7d505d639b7ed0c
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 13:01:06 +0000

flight 162875 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162875/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 1162ae8297e1fc9871e615cad7d505d639b7ed0c
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   13 days
Failing since        162368  2021-06-04 15:42:59 Z   12 days   29 attempts
Testing same since   162875  2021-06-17 10:45:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2158 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 13:05:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 13:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143989.265123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltriA-000715-V5; Thu, 17 Jun 2021 13:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143989.265123; Thu, 17 Jun 2021 13:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltriA-00070y-SA; Thu, 17 Jun 2021 13:05:34 +0000
Received: by outflank-mailman (input) for mailman id 143989;
 Thu, 17 Jun 2021 13:05:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ltri9-00070s-KH
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 13:05:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ltri9-0001BQ-JS
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 13:05:33 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ltri9-0003b2-IT
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 13:05:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1ltri4-0006e3-Nx; Thu, 17 Jun 2021 14:05:28 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ZaBafeIL/+f2W/mLpVvaaOoiNkqJPEN8iiJhO0wIjFA=; b=1nBgI0JMvvsgSY/24xFxeDhMy5
	1gvUZGgHhwS81Z6yK0Hutidx58ogksFmfp0S3kJ3ndITwf5tk1iDJIZeTQs5STUhiNBJKLXIv3bRc
	GulEb0LkWgbIBwqdaqId3g4nr8Y/xwzBhxygfV6+zIhwVYJw6KZUYA6XG+hHXr8OZu+E=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24779.18584.523983.904660@mariner.uk.xensource.com>
Date: Thu, 17 Jun 2021 14:05:28 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    xen-devel@lists.xenproject.org,
    Roger Pau Monné <roger.pau@citrix.com>,
    "committers@xenproject.org" <committers@xenproject.org>
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
In-Reply-To: <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
References: <osstest-161917-mainreport@xen.org>
	<7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
	<b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
	<637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
	<99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Firstly, let me try to deal with substance and/or technical merit.

Jan, I am finding it difficult to follow in your message whether you
are asserting that your disputed change (to Xen) did not introduce a
vulnerability.

I think you are saying that there is no vulnerability, because in any
overall configuration where this is a vulnerability, the guest would
have to be making an unjustified assumption.

If this is your reasoning, I don't think it is sound.  The question is
not whether the assumption is justified or not (answering which
question seems to require nigh-incomprehensible exegesis of processor
documentation).

The question is whether any guest does in fact make that assumption.
If any do, then there is a vulnerability.  Whether that's a
vulnerability "in" Xen or "in" the guest is just a question of
finger-pointing.

If none do then there is no vulnerability.


On to process:

Jan Beulich writes ("Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions - FAIL"):
> On 16.06.2021 17:43, Andrew Cooper wrote:
> > I am very irritated that you have *twice* recently introduced security
> > vulnerabilities by bypassing my reviews/objections on patches.
> 
> I'm sorry, Andrew, but already in my original reply a month ago I did
> express that I couldn't find any record of you having objected to the
> changes. It doesn't help that you claim you've objected when you
> really didn't (which is the impression I get from not finding anything,
> and which also matches my recollection of what was discussed).

Andrew, can you provide references to your objections ?

> I don't think I know which 2nd instance you're referring to, and hence
> I can't respond to that aspect.

And, likewise, references for this.

> > In the case of this revert specifically, I did get agreement on IRC
> > before reverting.
> 
> How can I know you did? You didn't even care to reply to my mail from
> a month ago. And there was no reason to make an emergency out of this
> and ask on irc. You could have sent mail just like is done for all
> other normal bug fixes etc. Iirc I was on PTO at that time; it would
> hence only have been fair to wait until my return.

I think it would be good practice to copy and paste relevant IRC
discussions into email in this kind of situation.  That email also
makes space to properly write down what you are doing, that you
realise it is controversial, who you have consulted, and why you are
going ahead.

I looked at one of the two disputed reverts in Xen,
cb199cc7de987cfda4659fccf51059f210f6ad34, and it does not have any
tags indicating approval by anyone else.

Andy, if you got agreement on IRC, who from ? [1]

Ian.

[1] This may well have included me.  I do not reliably record this
kind of information in my wetware.  That is what we have computers
for.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 13:47:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 13:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.143998.265134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltsMU-0002YU-7t; Thu, 17 Jun 2021 13:47:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 143998.265134; Thu, 17 Jun 2021 13:47:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltsMU-0002YN-4f; Thu, 17 Jun 2021 13:47:14 +0000
Received: by outflank-mailman (input) for mailman id 143998;
 Thu, 17 Jun 2021 13:47:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nXfV=LL=ipxe.org=mcb30@srs-us1.protection.inumbo.net>)
 id 1ltsMS-0002YH-LP
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 13:47:12 +0000
Received: from blyat.fensystems.co.uk (unknown
 [2a05:d018:a4d:6403:2dda:8093:274f:d185])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa25186d-082c-439d-88c9-47cce3c3273e;
 Thu, 17 Jun 2021 13:47:10 +0000 (UTC)
Received: from pudding.home (unknown [213.205.240.250])
 by blyat.fensystems.co.uk (Postfix) with ESMTPSA id 7D9F644193;
 Thu, 17 Jun 2021 13:47:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa25186d-082c-439d-88c9-47cce3c3273e
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC11
From: Michael Brown <mcb30@ipxe.org>
To: "Bernhard M. Wiedemann" <bwiedemann@suse.de>, Olaf Hering <olaf@aepfle.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210615212613.6270-1-olaf@aepfle.de>
 <b78ccdf3-9898-c903-4d9f-4d25bd27182e@citrix.com>
 <20210616145846.305d3ce1.olaf@aepfle.de>
 <fe5ac73a-6026-6db6-6756-911f803adc5f@suse.de>
 <d8a47c67-0ff3-4fce-5fe5-d444c4c4f859@ipxe.org>
Message-ID: <ff98e992-16cb-f4f7-d3ab-5adfcd215b7a@ipxe.org>
Date: Thu, 17 Jun 2021 14:47:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d8a47c67-0ff3-4fce-5fe5-d444c4c4f859@ipxe.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham autolearn_force=no version=3.4.2
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on
	blyat.fensystems.co.uk

On 17/06/2021 13:55, Michael Brown wrote:
>> one way out could be to call objcopy -D $PARAMS || objcopy $PARAMS
> 
> Testing on a clean "centos:7" container shows that "objcopy -D" works as 
> expected (and "objcopy --help" shows the option as existing).
> 
> This container environment has /etc/centos-release showing:
> 
>    CentOS Linux release 7.6.1810 (Core)
> 
> Could you provide a simple environment in which to reproduce the problem?

I've managed to reproduce it using "centos:7.0.1406".  It should be 
fixed in commit

   https://github.com/ipxe/ipxe/commit/51c88a4a6

Thanks for the report!

Michael
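[Editorial aside: the fallback suggested earlier in this thread ("objcopy -D
$PARAMS || objcopy $PARAMS") can be sketched as a small shell wrapper. This is
an illustrative sketch, not the actual ipxe Makefile rule; the OBJCOPY variable
and argument list here are placeholders.]

```shell
#!/bin/sh
# Sketch of the discussed fallback: try "objcopy -D" first (deterministic
# mode, present in newer binutils), and retry without -D if the installed
# binutils is too old to understand the flag.
# OBJCOPY is a placeholder; the real build system's variable may differ.
OBJCOPY="${OBJCOPY:-objcopy}"

run_objcopy() {
    # If -D is unsupported, the first invocation fails and we fall back.
    "$OBJCOPY" -D "$@" 2>/dev/null || "$OBJCOPY" "$@"
}
```

One caveat with this pattern: if objcopy fails for an unrelated reason (e.g. a
bad input file), it is run twice and the first error message is suppressed, so
probing `objcopy --help` for the flag once, as done above in the thread, may be
the cleaner approach.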


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 14:40:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 14:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144005.265144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttCD-0008AG-4b; Thu, 17 Jun 2021 14:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144005.265144; Thu, 17 Jun 2021 14:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttCD-0008A9-1R; Thu, 17 Jun 2021 14:40:41 +0000
Received: by outflank-mailman (input) for mailman id 144005;
 Thu, 17 Jun 2021 14:40:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k/hY=LL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lttCC-0008A3-0g
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:40:40 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ae30b1a-8b9d-4557-8aeb-30052137649e;
 Thu, 17 Jun 2021 14:40:38 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2051.outbound.protection.outlook.com [104.47.4.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-36-aQHJiiBMMlyghr7ag5fvmQ-1; Thu, 17 Jun 2021 16:40:36 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5471.eurprd04.prod.outlook.com (2603:10a6:803:d0::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Thu, 17 Jun
 2021 14:40:35 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Thu, 17 Jun 2021
 14:40:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0037.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:48::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Thu, 17 Jun 2021 14:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ae30b1a-8b9d-4557-8aeb-30052137649e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623940837;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iaOlV38ZKFp5FUC5kXh5UF1uoQXByllzTWG/71WSIF4=;
	b=E5KpwtH0XhGPIWZRvMDYsDphFJ8O60B7itxf+llqxn2JmmSNX4HcKjQy6jAD6trEQqqyE7
	8/vuZG3d4Tud4e61WLiNLQo3jJQTydLPKxqdg5JlACMKauXYuKX/9y14kudYj1Drj11iAR
	QKjWY32419a7PrPK4HikeztnjDkIcT8=
X-MC-Unique: aQHJiiBMMlyghr7ag5fvmQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Roger Pau Monné <roger.pau@citrix.com>,
 "committers@xenproject.org" <committers@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
 <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
 <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
 <24779.18584.523983.904660@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5537bc9b-0a71-60f0-efce-d0d33301fe45@suse.com>
Date: Thu, 17 Jun 2021 16:40:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <24779.18584.523983.904660@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0037.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a48dfd55-436c-41d7-16c2-08d9319ddddc
X-MS-TrafficTypeDiagnostic: VI1PR04MB5471:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB54713CE1CCAFFB7766668F19B30E9@VI1PR04MB5471.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a48dfd55-436c-41d7-16c2-08d9319ddddc
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 14:40:35.3419
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: urTbOBbwovS4L5xjTOa42B9yYz9EAXeUvRbkkZDXCwGGv0JFugd3G/5CGFfAe7EDMK/fBJyyZeswcUHEjej/jA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5471

On 17.06.2021 15:05, Ian Jackson wrote:
> Firstly, let me try to deal with substance and/or technical merit.
> 
> Jan, I am finding it difficult to follow in your message whether you
> are asserting that your disputed change (to Xen) did not introduce a
> vulnerability.
> 
> I think you are saying that there is no vulnerability, because in any
> overall configuration where this is a vulnerability, the guest would
> have to be making an unjustified assumption.
> 
> If this is your reasoning, I don't think it is sound.  The question is
> not whether the assumption is justified or not (answering which
> question seems to require nigh-incomprehensible exegesis of processor
> documentation).
> 
> The question is whether any guest does in fact make that assumption.
> If any do, then there is a vulnerability.  Whether that's a
> vulnerability "in" Xen or "in" the guest is just a question of
> finger-pointing.
> 
> If none do then there is no vulnerability.

I don't think any OS does, simply because they can't rely on such
behavior when on bare metal. The only such assumption was baked
into the respective XTF test.

If any OS made such an assumption, then I don't think it would be
a vulnerability either. It would simply be a guest kernel bug then.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 14:49:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 14:49:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144012.265156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttKa-0000Sf-Vr; Thu, 17 Jun 2021 14:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144012.265156; Thu, 17 Jun 2021 14:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttKa-0000SY-Ss; Thu, 17 Jun 2021 14:49:20 +0000
Received: by outflank-mailman (input) for mailman id 144012;
 Thu, 17 Jun 2021 14:49:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lttKZ-0000SS-AY
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:49:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lttKZ-00030d-7e
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:49:19 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lttKZ-0003j3-6V
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:49:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lttKU-0006qd-Ai; Thu, 17 Jun 2021 15:49:14 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=apCRNlutG6gde4fZIvTvumMxgbmHH/P+Hax+AURj52c=; b=CmUNRlxGDD2sbA0T0kp9NUJkIu
	kPBrHs/w1U68NA+VeoQksHhClFBwc3Kkr9b6GQM7t94Q5xwpx83Dk7MmocT80BtYmVn6Znz6zsIz4
	NHFwEsQVFqMc8KVR8YL277V0dSrpIni68c50Zb6zxzkDQHobKxYHm7VUu3xDmwosE8to=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24779.24810.167567.520077@mariner.uk.xensource.com>
Date: Thu, 17 Jun 2021 15:49:14 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    xen-devel@lists.xenproject.org,
    Roger Pau Monné <roger.pau@citrix.com>,
    "committers@xenproject.org" <committers@xenproject.org>
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
In-Reply-To: <5537bc9b-0a71-60f0-efce-d0d33301fe45@suse.com>
References: <osstest-161917-mainreport@xen.org>
	<7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
	<b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
	<637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
	<99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
	<24779.18584.523983.904660@mariner.uk.xensource.com>
	<5537bc9b-0a71-60f0-efce-d0d33301fe45@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions - FAIL"):
> If any OS made such an assumption, then I don't think it would be
> a vulnerability either. It would simply be a guest kernel bug then.

For the avoidance of doubt:

I think you are saying that if any OS did make the assumption, the
resulting bug *would not be exploitable* (by an unprivileged guest
process, or by a PV backend it was speaking to, or, somehow, by
another guest).

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 14:55:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 14:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144019.265167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttQG-0001rk-Lf; Thu, 17 Jun 2021 14:55:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144019.265167; Thu, 17 Jun 2021 14:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttQG-0001rd-HZ; Thu, 17 Jun 2021 14:55:12 +0000
Received: by outflank-mailman (input) for mailman id 144019;
 Thu, 17 Jun 2021 14:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k/hY=LL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lttQF-0001rX-4I
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:55:11 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c32c430-9034-42ee-a962-585cd50db3f8;
 Thu, 17 Jun 2021 14:55:10 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2111.outbound.protection.outlook.com [104.47.17.111])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-18-jfqm2rRFO0aPyJlxgz_f5Q-1; Thu, 17 Jun 2021 16:55:08 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4190.eurprd04.prod.outlook.com (2603:10a6:803:4b::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16; Thu, 17 Jun
 2021 14:55:06 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.019; Thu, 17 Jun 2021
 14:55:06 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0084.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:18::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Thu, 17 Jun 2021 14:55:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c32c430-9034-42ee-a962-585cd50db3f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1623941709;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=snjY5iWzuh79S0osSNQIhjAsOfAQ8PUKzjU2uTifn20=;
	b=hON/hy2gU0xEZK+j8Ut549LsT8T5072rHT0DLb0GDMUXvBBzqASKOOzC1D9x5FvehLjORi
	0xGxlFDnj1kupsA9OuHFzTMnzh3LQUkilRyvfKcjMs0JQnXYKT4zykZ2sMkfctZuhqHXvW
	KuGLwEE9gSdOFurd1iR3YCPyo8nWLhY=
X-MC-Unique: jfqm2rRFO0aPyJlxgz_f5Q-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Roger Pau Monné <roger.pau@citrix.com>,
 "committers@xenproject.org" <committers@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
 <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
 <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
 <24779.18584.523983.904660@mariner.uk.xensource.com>
 <5537bc9b-0a71-60f0-efce-d0d33301fe45@suse.com>
 <24779.24810.167567.520077@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <da901c8c-cef0-e4e6-bd27-dbc21cb25523@suse.com>
Date: Thu, 17 Jun 2021 16:55:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <24779.24810.167567.520077@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0084.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:18::24) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b08e7c7f-3b38-456c-738e-08d9319fe510
X-MS-TrafficTypeDiagnostic: VI1PR04MB4190:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4190E2F31F611EBA69DF9F8EB30E9@VI1PR04MB4190.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nTwELHIw8uT/6BchEFeIPiZq0x64pah25chOiI24efq0KiBARzkNutIROuqd0mZgIys4g1PV7/S8optty/64sztjyA1U/1n5YD9zPlbW0Ulfsq2b4YnuLw0tBIekf0HeaNQxpGfao8hW8bvr+Pi56kbXzVkEYGVH7F/Ld4xXTlPMUJ0tG4iFRC/lYU7Qy/HExpBtf6xz8AE53zmkP+VL0mzkaCDneHm1Qbj8b+KVxXyqqQ7XJiuvHXGK0shVaIjz6aZOEorcRoVMjKGVx8aYWQ84uQrIG6XmVrUNc/0+/GbplEnBGS3cmt5j1biMNv6v6eInVOg1uzLp83y+k6ILmpIYIpoJn236rQfMmIq49K6nQQuUw/IpWLs0McBFRqXy9XS1MBCo/Oi9fxibxzzswYA57wpwt6e+JhIwKlVQVhYZOlzSsetRzNdq0497jmSIh3ah5jm/e46xIJUXdSayc0P1PfP5q4F0MK6qgC0EHtp5O7wqixLth7S+vz8FLxGeJtrQFZuIZ1CCYbyo+hcms9PshYnA2hvnzjR43l4KmFXH84VaT+f1vMI66e7HBT7gfKJFJu0EuX41Nd/5CoEoiBtUHr9d+odLddk0vd6qVnZSjEDZjsdlAPe+59RPdJL28+UHIkvM82HVLyg6BPf0zJtK05ZdhYCj3tbOlJLM8fIsFshwFtmmhPhPFv7941aR
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(396003)(366004)(39860400002)(376002)(38100700002)(956004)(66476007)(66946007)(86362001)(316002)(2616005)(186003)(16526019)(31686004)(66556008)(6486002)(478600001)(8936002)(53546011)(16576012)(6916009)(2906002)(54906003)(5660300002)(4744005)(36756003)(8676002)(31696002)(26005)(4326008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L2FQdWhzSElTWWFjbUx6YTZMNU5RRm1hNFliWDJoUWk4TFA3Wlk0MGJXZHVV?=
 =?utf-8?B?RUlqUVIxdzlBNWpjOHRrNmFmSUE0L1FjbjMrd3RJaWtic0RLS0RLME9hU2Uy?=
 =?utf-8?B?QXFrVXg2OTl4Z2c1bGFoSjJvM0dJa1RRRnBDQ3FMTHJ4azVlZlVjUDNJV3ow?=
 =?utf-8?B?dzd4RTd5YzF2RTQ5OWdLWTM1T3BseE9nOVBwaHV4TDdBa3ZIeEgxam8rUitQ?=
 =?utf-8?B?NG1WSGdZb2RsbHZ6akdtcmU3OUloSU1ES0pKbmthWXZxS21QMjNsOHA1NjF2?=
 =?utf-8?B?Q083UFVIWmFmOE1JaC9YSUZBMjV0L3NLZ0RZVG5OR0FORTRvdjBlT0xVcDdx?=
 =?utf-8?B?L1V1QnFuY2FnV0YxYXlKZzNhMUZ4Sjd3RjlGY3luZnZuUzNEaTd3QmxaVmZ3?=
 =?utf-8?B?TGNibEJocEgvOFpqYXZoN2d0RURwc2NObkU3aFpLTUg5SzNleTZzZnhITFp4?=
 =?utf-8?B?VG1vOXR2bjJqSmx1cjJzZ0VKSGZCTEdmbmNDZmRYQStXZGQxdzV6OTJXY3VL?=
 =?utf-8?B?TWZONjVTU29venEraSs3bDZDemN4TmdoZ2E0cmw5TDYzelRlREFMTVpTYTB0?=
 =?utf-8?B?RTF4aDBzK3cxdDlOdU96NC9Yd3JZNEFBQVJVVHZjY2pSczFWQkdRU04vQnJo?=
 =?utf-8?B?OWVnNEluQjdGcnQrWVdwV2xQYXdMOVY3bFFIVDFnOTlqeTIvOEtERGRrdG0y?=
 =?utf-8?B?SFhEMjdDYVlJZFRKa3RqNEVTbitEbHRmVHdxcTBWbXIzSnpiMStISnVGdlhF?=
 =?utf-8?B?N0gra3NZaEROREx2Tlh2SzZlbmpLUDRQQTFkWE9kMHlIZ1EwT2NqRXZTZlNM?=
 =?utf-8?B?S0l6OXg3VU9CS2xrT3A0MmFxU25pL0xXTnFFU1BVSjRzdEdicEp3eHFTaVN1?=
 =?utf-8?B?UmxyRFJFT3NRSkI5VjJyRmZiR1lSNEVMcWlHT0p3QUJhajdXTGtLQUZZbWZ3?=
 =?utf-8?B?YnFBSmNLa2FhdzI2b0tKbkxDU2tVbjJRRVNEcjBiSlpSNnJWTWRjclg5Vk4w?=
 =?utf-8?B?TVdXNk4yK29WV0g3YTI5YzVaemVhWnJBN01GYzhmYUpXN3VCUk1zOFJsOElB?=
 =?utf-8?B?SXltOHpnMlBDa0ZJZmJVZ2R4RUJGSjRoUWVPa0ErMDhyYURudkNSZ01xTFdk?=
 =?utf-8?B?ZWJ2UGV3VDQ4MTBKN2NYZWdGNit6UW1WdWVjNmEyZS91Sml4Rkx0dkhqU0ZN?=
 =?utf-8?B?ZjRQSFd0dysyZHZYN2V6UWVyNFBmZ09IdXA0eCtnRTczVDR5TGlOQVBjRUVH?=
 =?utf-8?B?QloxdlhtR2s0ZXFPT2l1bnduR2lVcTJsVklMdjlhRCtNeWhZaTNkdEhCa01M?=
 =?utf-8?B?cEdGc0FYRGhXa1doWnN6Rjh0QUFLamRKaUk4TlBLLzNqbDc5ZFVMYm9NYm5Q?=
 =?utf-8?B?T0FuVnV1T04yVDhKRXhaUVNpcnNOVmpvVU5udGJIeFhObUtVMm5tSGJHOXc5?=
 =?utf-8?B?V091Skc3ZzVjWGNFTXB3VkpPMnViNmNLVm5xSlphRmlYWU5PSEV4c1J0SHBw?=
 =?utf-8?B?aW5Eem54dFdwTFBnUnl6Z3BUY3FScitQNjJhNHFZRDhIZmtCVWFtaGdickxW?=
 =?utf-8?B?N3NTZlBkY0RGaWxLQmpaVFVXRTdkYmZWa3ZVVUNDNWE2L3VVVXZPeEFkVE9X?=
 =?utf-8?B?NVZoOGVKTGs3TzFDeENSaVdhditTRWlPd0NuMmxXWVdJdjdqUTd6VEtsbDNl?=
 =?utf-8?B?NTZxTmJ3SjJDOVRrU2xyTjNRTnRiVGFCQmloU0dhRGsrdVJ5Q1FpM0JiYlh3?=
 =?utf-8?Q?MCT14d6CmQZrx2IncaMDVEOrnbu9uCwlo+i3DxC?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b08e7c7f-3b38-456c-738e-08d9319fe510
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 14:55:06.3366
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VwYsI+8avKvYRVV2TrnYPtC9q44Dl3p6fPCvd93MWrr0hOn9/h8etMTVlNIZh2RL2W5/pNOSAR6aanuYtM1y5w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4190

On 17.06.2021 16:49, Ian Jackson wrote:
> Jan Beulich writes ("Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions - FAIL"):
>> If any OS made such an assumption, then I don't think it would be
>> a vulnerability either. It would simply be a guest kernel bug then.
> 
> For the avoidance of doubt:
> 
> I think you are saying that if any OS did make the assumption, the
> resulting bug *would not be exploitable* (by an unprivileged guest
> process, or by a PV backend it was speaking to, or, somehow, by
> another guest).

Not exactly: Whether such a kernel bug would also be a vulnerability
cannot be told without knowing how exactly the kernel screwed up.
But it's definitely not Xen's job to compensate for this, imo. And
anyway, this is largely moot, as there isn't - afaict - any OS making
any such assumption.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 14:56:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 14:56:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144025.265178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttR1-0002U4-4k; Thu, 17 Jun 2021 14:55:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144025.265178; Thu, 17 Jun 2021 14:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttR1-0002Tx-0y; Thu, 17 Jun 2021 14:55:59 +0000
Received: by outflank-mailman (input) for mailman id 144025;
 Thu, 17 Jun 2021 14:55:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZKGl=LL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lttQz-0002Tl-LH
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 14:55:57 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 244dcfc2-1b99-44ae-bf60-5ede6eb9e374;
 Thu, 17 Jun 2021 14:55:56 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5HEtt03l
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 17 Jun 2021 16:55:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 244dcfc2-1b99-44ae-bf60-5ede6eb9e374
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623941755;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=UkMKWSslt6ZtDAj+DNeNKI2yGQmOSaLuVC7u9tB9URI=;
    b=bYHdj4xe1CYVIl5J4nu911eB3b5lMn1mh3E/jSVND+0wtDsEtbqqLQ/DJqJqXgcyZD
    AKjTxko1mwyCACaPTh1+M0KL8f9DRHgi97YM+RG74n4taGIwKDiJHJLtjN0iMIweepr0
    9dcX3Zd9aMIfFW4L2D2J+gm61MMbgVXlorK8KpSOwlkQ62+ijBW4otYKMbjB2IF76+Gt
    SynaUGeECcoR8eE/FXjXLeLKwQMDyR6PgYjeMu6j4y10tRnTvYK7YzJdnuUfLl2xhdm4
    tPf7ngYYFQD/qeOKbldO0ZS4ztqxZRKqGXIyDR+FQGKWl6E+x5uxOXtE3oJlyHvAcL39
    gOyw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Thu, 17 Jun 2021 16:55:40 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <20210617165540.4bf22cc9.olaf@aepfle.de>
In-Reply-To: <861fbbdf-f3a6-ae1e-9487-b3ca37b30ca8@citrix.com>
References: <20210616125129.26563-1-olaf@aepfle.de>
	<968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
	<20210616173831.5e8214bc.olaf@aepfle.de>
	<861fbbdf-f3a6-ae1e-9487-b3ca37b30ca8@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/qJ.+Ad+bOPC4X91C4iQu7qr";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/qJ.+Ad+bOPC4X91C4iQu7qr
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 17 Jun 2021 12:24:22 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> On 16/06/2021 16:38, Olaf Hering wrote:
> > Am Wed, 16 Jun 2021 15:50:24 +0100
> > schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
> >
> >> 32bit toolstack build
> > as in i386?
> > How is this used in practice?
> Every OSSTest run.

This is not what I mean.
I think there is a 32bit xen-tools, a 32bit dom0 kernel and a 64bit Xen?
Is the combination of 32bit xen-tools, a 64bit dom0 kernel and a 64bit Xen expected to work?

Olaf

--Sig_/qJ.+Ad+bOPC4X91C4iQu7qr
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDLYmwACgkQ86SN7mm1
DoDXoA/7B6oQsozyj9wS8rNoWM/eb/cRYx0HEoMB3xwKu1wf3jjkSfHH1l36cGR0
mLnWoMibpO0c/k0wap0vy6ZBjilN4soOryYz9Rl3FLn0kSQV8hEbfh8Z86OdjnHK
Xcg3OFBnBFCoSYDVi2mebpQJ7QrH41tFTUAgp5pusNLVcC+8XxNUwuU1k+QBaW8+
xuZwB5+y0PSFalT4jOxofc8ZPJDJybWCCjQ33GBvvHiV/0emItbUL7nYzhi36OmJ
QXiVmmHaLEWA7ZY+tsRzzZmPhBJpwVj6k3YH0U+0XLP5H6uiZ7DNlDBTFYx7snMC
//7ONwPKKCczwOvxROgzg+O6uzcWmCbhyGidczwJyN+9/lQzZIwFGN/qbuLeTzml
PhBChSkLvoKK+p5IrDDKlkhyReYvSv91lv0gPHBM2fVoNReN/0eTay2jlvFH7FDu
H9yPjBiUVR4javDuE4M+IveUf2yE8N/lVRpbdxwaLQUy0z/RXSp1SuJ5NKUOtBjL
38XltYngs6e4qM4OZaC4gQDoWVTa6PllT2o0JLm/o1o3ccYZfbp4FgOF5/EU7xZi
Fv3Z5QdwIJAZr0dZYEAhN7qEIZ//DD6Ew9sg070m5Mg1iNPATdkrHEeiczCC0+1P
yqGqui/Fyo+mvp+u6CrwqMKhm6V2kKyEn+PoFAU+GQT6wmj1YU8=
=ReOc
-----END PGP SIGNATURE-----

--Sig_/qJ.+Ad+bOPC4X91C4iQu7qr--


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:02:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 15:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144034.265189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttWv-0003xd-QE; Thu, 17 Jun 2021 15:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144034.265189; Thu, 17 Jun 2021 15:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttWv-0003xW-N6; Thu, 17 Jun 2021 15:02:05 +0000
Received: by outflank-mailman (input) for mailman id 144034;
 Thu, 17 Jun 2021 15:02:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/B9P=LL=linuxfoundation.org=torvalds@srs-us1.protection.inumbo.net>)
 id 1lttWu-0003xQ-GH
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 15:02:04 +0000
Received: from mail-lf1-x131.google.com (unknown [2a00:1450:4864:20::131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efa04bd8-6e8a-406b-904e-d58e041f691d;
 Thu, 17 Jun 2021 15:02:03 +0000 (UTC)
Received: by mail-lf1-x131.google.com with SMTP id m21so11008458lfg.13
 for <xen-devel@lists.xenproject.org>; Thu, 17 Jun 2021 08:02:03 -0700 (PDT)
Received: from mail-lf1-f54.google.com (mail-lf1-f54.google.com.
 [209.85.167.54])
 by smtp.gmail.com with ESMTPSA id bp29sm253627lfb.206.2021.06.17.08.02.01
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 17 Jun 2021 08:02:01 -0700 (PDT)
Received: by mail-lf1-f54.google.com with SMTP id p7so10995478lfg.4
 for <xen-devel@lists.xenproject.org>; Thu, 17 Jun 2021 08:02:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efa04bd8-6e8a-406b-904e-d58e041f691d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linux-foundation.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Gffjb4M/ANAqzeNlRTmLWG4NfOIBWBpaTeKu6FSOXY4=;
        b=e2pya7mjwKyFA8vanJytXcFh7uxmGZPAlpPiR4xcGsuhpU8OpojtLamxouDGkOAAMZ
         IwsLcVvossuqnXnFhoiFkeYFbsstgBxSGF9e3AJISi2nxNC5XwHJyzfUP8NkIXRymNUQ
         UxNen6DuR0XCy7bVfX0EYRyv6AW8cQOVq9as4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Gffjb4M/ANAqzeNlRTmLWG4NfOIBWBpaTeKu6FSOXY4=;
        b=Aq7XB4rzu22gwE+Js0aH2YHRPjrKNIR4hBqlQ2YQB9zBgJMTw/lwH+p9qeCGzdAINf
         3MblYddyxpcTDFcDP3OFX9Ij11IKwLFc0jPyKTtqum5xpwFAewVPGxw4ZQzdvSxDjgwM
         w+kIPs9lFnXPJFhS7Jv2zzSYgluxi4NpGAtXwsLNEgIWXw5gKOIFDhdgrkjfSA4/FDie
         M3EsePbLL4u9BzCNlE+B8qng+zfFYCPuA7mhZvEehmr9pYq//ACvN5EfRu5WKs1Rfra9
         rZ7ccq6cWxHUEXf0MV8eAecSmkh9p5YSXcDrDRDDjRWzoKQRkWWTpbtqWwAHTk7sVvoH
         ER0g==
X-Gm-Message-State: AOAM5306rzoP+Ajs9CAyDVgfQa7nl+AspfXmJuhr+ffk0R5qPR0WnbUk
	JIqeKyFYWh3moM22A62bWHVSYwnZhf2m463lwB0=
X-Google-Smtp-Source: ABdhPJyLznj2Woaiag9v/7GAIgtCmjWedgZerHcjlR2OJu45By/QjxhUZcXR2ia64umDd3QJWqrotw==
X-Received: by 2002:ac2:4909:: with SMTP id n9mr4220011lfi.177.1623942121866;
        Thu, 17 Jun 2021 08:02:01 -0700 (PDT)
X-Received: by 2002:a05:6512:3f82:: with SMTP id x2mr4241418lfa.421.1623942120465;
 Thu, 17 Jun 2021 08:02:00 -0700 (PDT)
MIME-Version: 1.0
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
In-Reply-To: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Thu, 17 Jun 2021 08:01:44 -0700
X-Gmail-Original-Message-ID: <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
Message-ID: <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Sander Eikelenboom <linux@eikelenboom.it>, Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: linux-kernel <linux-kernel@vger.kernel.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>
> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran in some trouble.
>
> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).

Adding Rasmus to the cc, because this looks kind of like the async
rootfs population thing that caused some other oom issues too.

Rasmus? Original report here:

   https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/

I do find it odd that we'd be running out of memory so early..

        Linus


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:03:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 15:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144039.265200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttYF-0004ZZ-5O; Thu, 17 Jun 2021 15:03:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144039.265200; Thu, 17 Jun 2021 15:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttYF-0004ZS-1X; Thu, 17 Jun 2021 15:03:27 +0000
Received: by outflank-mailman (input) for mailman id 144039;
 Thu, 17 Jun 2021 15:03:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g6FJ=LL=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lttYD-0004ZI-58
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 15:03:25 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3e924a5-c7cd-4c9b-8916-7be0fcb69ee8;
 Thu, 17 Jun 2021 15:03:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3e924a5-c7cd-4c9b-8916-7be0fcb69ee8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623942203;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=wm0vOGc4ZaC1U6ndvEgQGngy86E0dxchxcXy/tppOuM=;
  b=QtsaFQ4UaLyJakblWHgBaftjELvd6pLj35hB1cZ8wtMxO3GjtDE3VHWl
   9Y50A92ow1T+u+IPCYyTIMVqKZAnqYLKB2inFvp64fsWaEwY6vbaQOa6q
   q5gPoyWEjwtQfK78KhNFVVKsJTtBiw1H9UcNsQORae+wJ2l9Mpd2gIW/O
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sSBG+VmRBs5zbMiKXFjkTuwkIocCrZVQ0PS/Aoh33mEhMH+s2vt1ArXuo5sFDXKCH3ujKRGO14
 jaTioFHwJoHOj3rvbwD2QoeDo+YeMOwlKCt7UXrklkWYxFQYEOZ8xDJ4p1VHGOtpuG4QC+DTNn
 iTh1Mk7Nwgs/mlEInSkOdN8TsDWCpH0tcaQt7lJ2Aln7BICF8ZdhHKaCfXQM1jzIqGOUue+VsO
 5COX8RVOIHnUYpOQnPNgKxO+ZHNbA0k9XrPcBtKZyTCF1kqiubHJLHoIewy0ElusvgDFYK6yjG
 XvE=
X-SBRS: 5.1
X-MesageID: 46378179
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:kTy/FawXkq8qkKHHjbnZKrPxaegkLtp133Aq2lEZdPULSL39qy
 n+poV/6farskdyZJh5o6HyBEGBKUmsjqKdkrNhT4tKPTOW/VdASbsJ0WKM+UyeJ8STzJ8l6U
 4kSdkPNDT8NzQbsS+Y2nj9Lz9D+qj4zEnAv463pBoDI2AaCNAGnmFE40SgYzxLrWJ9dOAE/e
 +nl7Z6Tk2bCAkqh6qAdwE4tzSqnayUqLvWJTo9QzI34giHij2lrJTgFQKD4xsYWzRThZ8/7G
 nsiWXCl+KemsD+7iWZ+37Y7pxQltek4MBEHtawhs8cLSipohq0Zb5mR6aJsFkO0aeSARcR4Y
 DxSiUbTp9OAkDqDzuISNzWqlTdOQMVmiffIJmj8CfeSILCNW0H4oF69Pdkm1Pimj4dleA56b
 lM2W2BsZpREFfvoATRjuK4By1Cpw6MunwlnvcUj3tDFa0kSJEUg7A+0SpuYcY99AST0vF0LA
 CrNrCO2B8eSyLsU1nJ+mZo29CiRXI1A1OPRVUDoNWc13xMkGl+1FZw/r1eop4szuNyd3B/3Z
 WEDk2orsACcuYGKaZmQOsRS8q+DWLABRrKLWKJOFziUKUKIWjEpZL76Kg8oLjCQu1L8LIi3J
 DaFF9Iv287fEzjTcWIwZ1Q6xjIBGGwRy7kxM1S74Vw/rf8WL3oOyueT01GqbrinxzeOLyVZx
 +XAuMdPxbOFxqnJW955XyzZ3AJEwhWbCQ8gKdxZ7uhmLO8FrHX
X-IronPort-AV: E=Sophos;i="5.83,280,1616472000"; 
   d="scan'208";a="46378179"
Date: Thu, 17 Jun 2021 16:03:19 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Ian
 Jackson" <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>
Subject: Re: hypercalls with 64-bit results
Message-ID: <YMtkN70EPzT1KO/I@perard>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <d38cd3a3-6139-5ebd-6a78-debc20c3b2bf@citrix.com>
 <1adf28a8-a0fd-4ea4-bbd0-52734630d52b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1adf28a8-a0fd-4ea4-bbd0-52734630d52b@suse.com>

On Thu, Jun 17, 2021 at 10:03:50AM +0200, Jan Beulich wrote:
> But it's not just XENMEM_maximum_gpfn that's affected; that's just the
> one pointing out the underlying issue. Plus if so, shouldn't we avoid
> returning values that are going to be truncated (and, as can be seen
> here, then get perhaps recognized as error codes up the call chain)?
> 
> > For now, I'd agree with trying to undo the change in OVMF.
> 
> Anthony, thoughts?

I can map the shared_info page somewhere else, that's fine. The hard part
is figuring out where. I can probably map it just after the guest RAM
(as described by the e820 from hvmloader or Xen).

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:05:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 15:05:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144048.265211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttaa-0005Da-Jp; Thu, 17 Jun 2021 15:05:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144048.265211; Thu, 17 Jun 2021 15:05:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lttaa-0005DT-GC; Thu, 17 Jun 2021 15:05:52 +0000
Received: by outflank-mailman (input) for mailman id 144048;
 Thu, 17 Jun 2021 15:05:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/zva=LL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lttaY-0005DN-WA
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 15:05:51 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb629e21-575a-421f-b169-cbbf5c3b5d97;
 Thu, 17 Jun 2021 15:05:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb629e21-575a-421f-b169-cbbf5c3b5d97
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623942349;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=U5FFEfAtlZJtGKz0DiWNGtX7DqgnvsRFaAZYCpDWLK4=;
  b=di0X+WE8Yc5VrnsXbP3/QrHEPreiIX0TkgIjbTtBvFOMZyYN8aAQDT26
   UAlJqdpLfl5fObr5UCobF8Q/qRoxtjHVvcKqWuGCZrVKQdLTQYAVx+Oo1
   vFgQQwJnhyCr8gynZDJwYQOHWPeYKfZvQ81U5enMP9z77dZM/UE7OTzrw
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: N8HeADVp33NOeudm9wHdgJXetfUPD9CvEKSLPB8IuI0xXfUdMDf514AnbC/sLtFyMjF158o8kd
 U51tRiMXsTPCvNhHbBCsdEglvsHdgBSYSnOIX7WgNtRIEyfYogSYgILWsiGO0h3+8jqr0J5m17
 /PPpq1nRBGEyXBaEzsw/josWyijWhuJQncZkrdibYG6btCxHCf/D18txHLUAxIlMdS6Fqo7CQO
 haX5/KPItAMWv59Od508uhVhhmazFponJrBTw0ex3U8ZL4ZREs7qAt35YB0MVkW2nvPTMjBRhC
 mzk=
X-SBRS: 5.1
X-MesageID: 46363019
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:9vKQYqpvmHPs6fMgdE3r2VkaV5vGL9V00zEX/kB9WHVpm5Oj+f
 xGzc516farslossREb+expOMG7MBfhHO1OkPcs1NCZLXbbUQqTXf1fBO7ZogEIdBeOjdK1uZ
 0QFZSWTeeAcGSS7vyKkjVQcexQuOVvmZrA7Yy1ogYPPGMaH52IrT0JbTpzencGNzWubqBJba
 Z0iPA3wgZINU5nFPhSURI+Lpj+TpDw5d3bSC9DIyRixBiFjDuu5rK/Ox+E3i0GWzcK5bs562
 DKnyHw+63m6piAu17h/l6Wy64TtMrqy9NFCsDJos8JKg/0ggLtQIh6QbWNsB08venqwlc3l9
 vnpQsmIq1Imj3sV1DwhSGo9xjr0T4o5XOn40Sfm2HfrcvwQy9/I9ZdhKpCGyGpqXYIjZVZ6u
 ZmzmiZv51YAVfrhyLm/eXFUBlsiw6dvWciq+gOlHZSOLFuK4O5lbZvuH+9La1wWx4TsOscYa
 9T5YDnlbZrmGqhHjXkVjIF+q30YpxbdS32N3TruaSuonJrdT5CvhMlLfck7wE9HaQGOtJ5Dt
 T/Q9NVfY51P4YrhIJGdao8qJiMeyDwqSylChPbHb2xLtB3B5uKke+t3IkI
X-IronPort-AV: E=Sophos;i="5.83,280,1616472000"; 
   d="scan'208";a="46363019"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OeGILgEEvkbGw7CNmH5QX3ASpQt/AvwWwPIM93ASxxKDRMKkCcex+4XyxEoNN0UmA+QWmfrzk7/UVSIz3NGgZOnq4RClL3+FtPaDSP3GdTGAd3QpYI3t5xZFP/WJjPHbzP3ANtvsGK6ecfdL0xEP9bWrP4ID5nNyLbbnVJiwADdeMCXrx2LwCwodG9M+G4DI70ITzmEw2WhqA/xagyopPFFNpRByH1Q/kXnsGCnLohF4fIorDxBkWvIRYV9Z0Iz948NUJQdHtmjvYB6xYV+3eI8941zZQ3uLPGuWq6i/Ttb4uF1zD+n+brC/bydL/Wf7cSlv2znhnunn/0aGeEBebg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U5FFEfAtlZJtGKz0DiWNGtX7DqgnvsRFaAZYCpDWLK4=;
 b=ZP/TGDCYPfPyjZvAqdZc1z+06TYqUI+d3TJdMhcSozOrp8YlExRGoUKVgLixCqTOjAEizbnR6822t/2ZbaWvF8o3n+rdjbgGwbZVRFEqh8wE2CYUfgqyv77HIEgr1MW8Mn21lOg36sjCvBOH61ksGhwbIt468ok+faH0RyJfYi29gFoodH5JomzkTyrogTYLxPnlC/99xUks//jZ+ljtbLwvwcKq0QcRNMp/FWc3vNU6OaS7lXPXGKpnHRPq29lSAFxI8N4SqHezfUPGkhmQg68uFg1ZJrLrc1tULK/pHMpxcXV5RFGD3uGL6wkfBjT5DcwMpTq6LbhTfB7zjkkJQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U5FFEfAtlZJtGKz0DiWNGtX7DqgnvsRFaAZYCpDWLK4=;
 b=ohKXULnQkSZwXSz6cD6aKUoIcNNkvKjpP6D63xF4vzKZQ8saxHcpT1dhGOfFJRR3tYv/GshGGFa8ZsWro5OM//AbPmx10ruRlvAE0dGYUlJvNjYmzqLdVGrwXlirbFyKYyMfJRJ9I93ptd4WACwdbs12d6odQq1l6awB3/tdAYE=
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
References: <20210616125129.26563-1-olaf@aepfle.de>
 <968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
 <20210616173831.5e8214bc.olaf@aepfle.de>
 <861fbbdf-f3a6-ae1e-9487-b3ca37b30ca8@citrix.com>
 <20210617165540.4bf22cc9.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <93cb2906-6c39-a5df-c173-03a0ce7e113e@citrix.com>
Date: Thu, 17 Jun 2021 16:05:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617165540.4bf22cc9.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0036.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ec42e744-19ea-44a0-edcb-08d931a161e4
X-MS-TrafficTypeDiagnostic: BYAPR03MB3863:

On 17/06/2021 15:55, Olaf Hering wrote:
> Am Thu, 17 Jun 2021 12:24:22 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> On 16/06/2021 16:38, Olaf Hering wrote:
>>> Am Wed, 16 Jun 2021 15:50:24 +0100
>>> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>>>
>>>> 32bit toolstack build
>>> as in i386?
>>> How is this used in practice?
>> Every OSSTest run.
> This is not what I mean.
> I think there is a 32bit xen-tools, a 32bit dom0 kernel and a 64bit Xen?

Yes - this exists.

> Is 32bit xen-tools, 64bit dom0 kernel and 64bit Xen expected to work?

In an ideal world, yes.  In reality, no.

Lots of hypercalls have embedded pointers (every GUEST_HANDLE(), to a
first approximation), and dom0's ABI with Xen is 64bit, which is not the
ABI that 32bit userspace speaks.

This is one of many errors in the hypercall design intended to be
addressed by the ABIv2 plans.
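[Editorial note: the handle-size mismatch Andrew describes can be shown with a
minimal sketch. The field names below are invented for illustration, not Xen's
actual public headers; the point is only that a structure embedding a guest
handle (a pointer) has different sizes and field offsets when built for 32-bit
userspace than in the 64-bit ABI a 64-bit dom0 kernel speaks to Xen.]

```python
import struct

# Hypothetical hypercall argument: a guest handle (pointer) plus a count.
# In the 64-bit ABI both fields are 8 bytes; built by a 32-bit toolstack
# they are 4 bytes each, so nothing past the first field lines up.
layout_64 = "<QQ"  # 8-byte handle, 8-byte count
layout_32 = "<II"  # 4-byte handle, 4-byte count

size_64 = struct.calcsize(layout_64)
size_32 = struct.calcsize(layout_32)

# 16 vs 8 bytes: a 64-bit dom0 kernel forwarding a 32-bit toolstack's
# buffer unmodified would read past it and misinterpret every offset.
print(size_64, size_32)
```

Hence, as Andrew notes, a 32-bit toolstack's buffers would need translation
before a 64-bit kernel could forward them, and the plain privcmd ioctl path
performs no such translation.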

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:09:15 2021
Subject: Re: hypercalls with 64-bit results
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <d38cd3a3-6139-5ebd-6a78-debc20c3b2bf@citrix.com>
 <1adf28a8-a0fd-4ea4-bbd0-52734630d52b@suse.com> <YMtkN70EPzT1KO/I@perard>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <218e3205-b4b1-8a73-462b-c8338f3e107e@suse.com>
Date: Thu, 17 Jun 2021 17:09:03 +0200
In-Reply-To: <YMtkN70EPzT1KO/I@perard>

On 17.06.2021 17:03, Anthony PERARD wrote:
> On Thu, Jun 17, 2021 at 10:03:50AM +0200, Jan Beulich wrote:
>> But it's not just XENMEM_maximum_gpfn that's affected; that's just the
>> one pointing out the underlying issue. Plus if so, shouldn't we avoid
>> returning values that are going to be truncated (and, as can be seen
>> here, then get perhaps recognized as error codes up the call chain)?
>>
>>> For now, I'd agree with trying to undo the change in OVMF.
>>
>> Anthony, thoughts?
> 
> I can map the shared_info page somewhere else, that's fine. The hard part
> is figuring out where. I can probably map it just after the guest RAM
> (as described by the e820 from hvmloader or Xen).

Can't you put it (not necessarily immediately) next to the LAPIC
page, for example? Or the IO-APIC one? Or somewhere in that area,
which normally OSes won't use for placing BARs?
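[Editorial note: Jan's suggestion amounts to a trivial placement search. In
the sketch below, the scan window and the "pick the first free page" logic are
illustrative assumptions, not OVMF's actual allocator; only the LAPIC and
IO-APIC default bases (0xfee00000 and 0xfec00000) are x86 conventions.]

```python
PAGE_SIZE = 1 << 12
LAPIC_BASE = 0xFEE0_0000   # local APIC MMIO page (x86 default)
IOAPIC_BASE = 0xFEC0_0000  # IO-APIC MMIO page (x86 default)

reserved = {LAPIC_BASE, IOAPIC_BASE}

def pick_shared_info_gpa(start=0xFEB0_0000, end=0xFEF0_0000):
    """Scan the region around the (IO-)APIC pages for a free,
    page-aligned guest-physical address -- a sketch of the kind of
    placement Jan suggests, where OSes normally avoid putting BARs."""
    for gpa in range(start, end, PAGE_SIZE):
        if gpa not in reserved:
            return gpa
    raise RuntimeError("no free page in window")

gpa = pick_shared_info_gpa()
assert gpa % PAGE_SIZE == 0
print(hex(gpa))
```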

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:27:06 2021
Subject: Re: hypercalls with 64-bit results
From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <71b8a4f1-9c18-36e7-56b1-3f1b1dabddd6@suse.com>
 <2adfba14-8adb-89d8-3e8b-a25aef6fc2d6@suse.com>
 <cf751696-5c9b-b465-67d0-544245d8563f@suse.com>
 <75d84a60-1857-f62c-23f5-eb3bfa3b93b2@suse.com>
 <c151faff-5ff2-af59-fe95-1675aeb5be33@suse.com>
Message-ID: <d956f5e0-3640-df99-df39-905652713e7e@suse.com>
Date: Thu, 17 Jun 2021 17:26:50 +0200
In-Reply-To: <c151faff-5ff2-af59-fe95-1675aeb5be33@suse.com>

On 17.06.2021 10:08, Jan Beulich wrote:
> On 17.06.2021 10:05, Juergen Gross wrote:
>> On 17.06.21 10:00, Jan Beulich wrote:
>>> On 17.06.2021 06:55, Juergen Gross wrote:
>>>> On 16.06.21 18:04, Jan Beulich wrote:
>>>>> Since hypercalls from the tool stack are based on ioctl(), and since
>>>>> ioctl() has a return type of "int", I'm afraid there's no way we can
>>>>> deal with this by adjusting function return types in the libraries.
>>>>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>>>>> subops (for those cases where potentially large values get returned).
>>>>
>>>> I think we can just use a multicall in libxc to wrap the affected
>>>> operations.
>>>
>>> Hmm, we might, if we're happy for these to then not work in HVM domains
>>> (PVH Dom0, which still is experimental only, or PVH/HVM DomU-s using
>>> the libraries for some purpose), or if we finally wire up multicalls in
>>> the HVM case (there ought to be a reason why they aren't, but I have no
>>> idea what that is).
>>
>> Me neither, especially as on Arm they are supported.
>>
>> And TBH: PVH Dom0 without multicalls might be hard anyway.
> 
> Okay, let me see whether, while trying to wire them up, I run into
> particular issues.

And, at the risk of stating the obvious, this isn't going to help 32-bit
Dom0 at all, so (assuming the immediate issue wasn't going to be taken
care of on the OVMF side) it would help only one of the two tests
preventing a push right now. (I have a patch set drafted, but it's yet to
be tested. I did find, though, that the tool stack already uses multicalls
in one special case.)
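[Editorial note: the truncation under discussion is easy to model. The privcmd
ioctl's "int" return narrows a 64-bit hypercall result by two's-complement
truncation, so a large gfn with bit 31 set comes back negative and is
indistinguishable from a -errno value. The function below is an invented
illustration, not the real kernel code.]

```python
def ioctl_return(value_from_xen: int) -> int:
    """Model the privcmd path: a 64-bit hypercall result squeezed
    through ioctl()'s 'int' return type (32-bit two's-complement
    truncation)."""
    v = value_from_xen & 0xFFFFFFFF
    return v - 0x1_0000_0000 if v >= 0x8000_0000 else v

# A guest whose highest populated gfn needs bit 31 set:
max_gpfn = 0x8000_0000
ret = ioctl_return(max_gpfn)
assert ret < 0   # the caller sees what looks like a -errno value

# Small results survive the round trip unchanged:
assert ioctl_return(0x12345) == 0x12345
print(ret)
```

A multicall sidesteps this on a 64-bit dom0 because each multicall entry
carries its result in its own 64-bit field instead of the ioctl return value,
which is what makes Juergen's wrapping suggestion work; and, as Jan notes,
a 32-bit Dom0 still doesn't benefit, since its entry fields are 32-bit too.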

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:29:54 2021
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Thu, 17 Jun 2021 11:29:39 -0400
Message-ID: <CAEBZRScBQHKHL5xzxdL_Wct5fOOjTww2_ZEbQ3p3iqDUPr5nDA@mail.gmail.com>
Subject: Re: [RESEND PATCH 00/12] golang/xenlight: domain life cycle support
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, May 24, 2021 at 4:37 PM Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> The primary goal of this patch series is to allow users of the xenlight
> package to manage a full domain life cycle. In particular, it provides
> support for receiving domain death events so that domain shutdown,
> reboot, destroy, etc. can be handled. And, it addresses issues found
> when using the package to boot domains with various configurations.
>
> These patches address several things (e.g. bug fixes, code style,
> conveniences, new wrapper functions), but are all work towards the final
> goal of allowing a package user to manage a full domain life cycle.
>

George,

I know you have leave coming up, and are likely very busy preparing
for that. Is there any chance this series will get attention before
then?

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:37:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 15:37:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144077.265255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltu5J-0001qB-SI; Thu, 17 Jun 2021 15:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144077.265255; Thu, 17 Jun 2021 15:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltu5J-0001q4-Ou; Thu, 17 Jun 2021 15:37:37 +0000
Received: by outflank-mailman (input) for mailman id 144077;
 Thu, 17 Jun 2021 15:37:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pqVH=LL=prevas.dk=rasmus.villemoes@srs-us1.protection.inumbo.net>)
 id 1ltu5H-0001py-FD
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 15:37:35 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.126]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6985290b-19a9-4234-a427-b80d500f5820;
 Thu, 17 Jun 2021 15:37:33 +0000 (UTC)
Received: from AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:2f::12)
 by AS8PR10MB4791.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:33e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Thu, 17 Jun
 2021 15:37:32 +0000
Received: from AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::78e7:758a:9dd5:6b52]) by AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::78e7:758a:9dd5:6b52%7]) with mapi id 15.20.4242.019; Thu, 17 Jun 2021
 15:37:32 +0000
Received: from [192.168.1.149] (80.208.64.110) by
 AM6PR08CA0043.eurprd08.prod.outlook.com (2603:10a6:20b:c0::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Thu, 17 Jun 2021 15:37:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6985290b-19a9-4234-a427-b80d500f5820
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QC4dW4HGsfDhQi0lzjYmgPTDbo7QHaySiMf+5R5Xubfhhq46X2UhlK8Mxm97X6KzEmFedPCpEDMdTaoEjuiYyGoWJtqHDpU525RuLJ1bQukq95i7xGL+JxO4F9NlnMi8KFH7uBN/QTtSuG+r2ephKjXslhSDqmI4qQmLgwHyb18PMMsfbJ4G4liDM+rYhIODNWYuo/W6xydMuVG1Ir1sSUZ7DbGnK8GjipUbGnZzIujKYl/mi9pv4G1izOCHdap/FCwi3OIdUBwDweielzSMhPBL5y3E4ZXPLAzTodNHVVnXBP62YIhJGt5VVqPonE8NDEmsbvDcE/oNA8qi1lb6gQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YT85gP+YFDJ5ychGyvNL82kun+NGGvSQi4A8qTsyAT8=;
 b=Jn6IFoWJfr+zFnIgc8XwjnBTU0TkPecvTEejS773agklIsIOBhksDQeOQTuRWM6BFhP6zocssXx/chE/9xIPdQfk1KO3Ya9mKrXwjh+cjDU1JxT4ChayuxWg6fnuLiPCPJHfYQ45uRXsqJ31CFvGVTJ4Xebzra51V2hcNH5EJKq5CaIzRctMuRJeHc0RUC52SFekSyfKXJoowm1qoNhQ19uDV7CtPucV1+TDinqJKzSu3vQkczE+egxtRSKy7DamA+3JgbFmUKYpjN7yLBV1rStuEb3JXPSK507s83SCxiTDrI9hpXRAfvSHpEvtNFdLtYrZwaAh5ILkBWaO+8Vmng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=prevas.dk; dmarc=pass action=none header.from=prevas.dk;
 dkim=pass header.d=prevas.dk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=prevas.dk;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YT85gP+YFDJ5ychGyvNL82kun+NGGvSQi4A8qTsyAT8=;
 b=VkTLlQSKzaUXhIugrc3mHjVYLlF61MtgO05xG0WkZRGiYHPGmbsRzBeuTS7atxYnRLl+szj4FbKjPrR1h5O5b5GrFY5OKVYNwvUZCBhbBdQyPfF1NpchbV1Mb/chlw3UPt3w3VJ9pMRWBfebQAiG+h9uOQF/sRUgJsDpo04oDfU=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=prevas.dk;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Linus Torvalds <torvalds@linux-foundation.org>,
 Sander Eikelenboom <linux@eikelenboom.it>,
 Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
From: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
Message-ID: <db3ee875-e299-e8ea-fed2-3b7dbf4682b4@prevas.dk>
Date: Thu, 17 Jun 2021 17:37:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [80.208.64.110]
X-ClientProxiedBy: AM6PR08CA0043.eurprd08.prod.outlook.com
 (2603:10a6:20b:c0::31) To AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:2f::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a0947b78-09b1-49eb-4696-08d931a5d27c
X-MS-TrafficTypeDiagnostic: AS8PR10MB4791:
X-Microsoft-Antispam-PRVS:
	<AS8PR10MB479105EA042A48502224AAA5930E9@AS8PR10MB4791.EURPRD10.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VxBFm0hoMOQp9GwmC//o5Ved+LeuyoLsDaG7Vw7FspA1HBDxp/6eBjAq5m+XGGTL4K5gmyceYhfQZj3BMO3hC9oYsIgQsYNXKxy+knFOwAwDATWYJYnJQ0tpqgWDGkdGlf8vpvFdFaixi03kdo9bQvxyGAUGXfsd8hv+vTRvRSmjBxHSq7u+D6dBi7x7MJh3RDsKlU0z2agzeYPzKfqd2XivO8m6sTYpO8Azaglvm8S8Gt5lx46ed/XTFZ7vnF/ZdpfLPVruv6jShfu94T4QeYvCo3Nqm0bUw6LO7ann4DUqxmni5JcgBf6WgpviNMQ24O/74J4HUpfPLsicl80hr0IitwUy/NKNHROJ4LTcGmejSkXhkChv7YJshKZp0pVFTW6y3t7NDS2+5ULOmdRaaLno4nJ2uagkbC9DDFqHZ+cElCc9Ob3e6JAzv36Psbxnbqmg3T4TwUtIvSrWddCr4ImHheFFr100hUr+37m4Geh/x6kiX3rnHxidO6h8kywJjaTAIaPeLdnN84dGT1FDWaln8JJ6a9PwiC8AQMW/UfrcePRUJEDH6SUtIlAkjBxr4KLraIsk5dVjlw72N0ZoFXxxvqIU55+e25BWU9e0eLVkGcybjaOZRaRkYwC+wD4+B79WvYVgUIEDteRuq2OgMuobSbZd1Zd61hXIy9X8ORDcuyDyiNoSh/rzMgStZwcAP7t9JmZQS3h5zfkyV3n1TK5kA/0QQyIj70RFGWRhCCR1vXnpXtBP7PNN2aPAZ20sVbRzfQNQUMq2h8W9kf3hedcihcPLxQbV6uAJ3iyzKbDdZ/8sE/aJZGEdCvfg68IDbwyGh9zyO+MaUkO3rtB9ZQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(136003)(366004)(376002)(346002)(396003)(39840400004)(186003)(66946007)(8676002)(66556008)(956004)(66476007)(16576012)(31686004)(53546011)(38100700002)(16526019)(38350700002)(4326008)(8976002)(54906003)(6486002)(52116002)(2616005)(36756003)(83380400001)(31696002)(8936002)(110136005)(5660300002)(316002)(26005)(44832011)(966005)(86362001)(2906002)(478600001)(6666004)(45980500001)(43740500002);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OUpId09BdkNUWnlWUmJPU25tZHpNWnpjcGtsWlNLeUF6bENiWGZvYkVkTytQ?=
 =?utf-8?B?d05jNjJicDdkcFFwczU3YlU2R0ovd3RFVks2U0orem1yRDM2SVdSSGRTRThV?=
 =?utf-8?B?akE5K2xLYjZyaHhQSURkK2JUc1N2RzRoNW1Oa2RTazdrMnNTL1VtOUQvQjdJ?=
 =?utf-8?B?NHlFS3V5SXJOOVovSkFtUkpSLzREdkdaeXpuYWFUOGtzd2s4TDVLYnlLdk4y?=
 =?utf-8?B?NnNDeGx4ZzBmTjVzMURnTW96YytMVkZmbUc5OE01aHB6bURaSFdYYWlRWlox?=
 =?utf-8?B?Q3MwTldMdGFRNmpjcmNWSUJWb3NGN1ExeHhOVXMzSnl6RmRQUUVNTzlsMWti?=
 =?utf-8?B?NjhLZGlDeXQxSVhGeHZRNmdxaGd4eENlUmtlRktQd21VYVZ2Y01DOFByYVA1?=
 =?utf-8?B?TXZ6NjhZOGQrNUYvR0Rtb0kwaXU5TDA2dS9JUGhQR3dYelVvVlZYVFByWCt1?=
 =?utf-8?B?dU1ZOW02d0VFdSs5bnMvVlRDZmRaQStVU1ZUdktXYWp5TkhVWElyZEthZUh6?=
 =?utf-8?B?Rkh6VDI2S3ZtQkNzQW04S0NQeFVNZVpVc2tLaFU0MWlxblhXbVRqRTlyMWR5?=
 =?utf-8?B?cktoWTdqMDZXVlJNZENMVmppbHo1VzdUdjUxSXJIaVpYbVlmZHZLaEw3a3pt?=
 =?utf-8?B?bnpTY3R4YVhTMFlRK0tLci9zTitRNzFQaFVpSlg4cFVUWitPL09meUJPbHB2?=
 =?utf-8?B?a2FpcWpSbWZnYlZMeEdsWGJpR2RQMnJXd0x0WGdxRXZLNnQ5VlhncEtrbEpZ?=
 =?utf-8?B?d1JEVU0ybUV6NzcrOHFMK2prbDNDbjk1ZURoeHQxYkFINnRzb3BxSWQxRXpy?=
 =?utf-8?B?U3RJSzF0SlBpam51WVBqOXBNektiTEFJNk9pOUw1Ym9wNWR2MURrME1uV21l?=
 =?utf-8?B?ZWFBcnhCcHdjWjkxVHJLaENSei95OVlwY3RaVzMzVzRXRGt0UWJyazFINGZ1?=
 =?utf-8?B?UWVEeGF3UjBNdkRQNzBDQXBOdFNId3VnWlZCL3N3WUxuUFBQK3hCOG0xUkI0?=
 =?utf-8?B?RjRyb3RlMWN5VVpGTWFvaHpXbWxudnFJK0ZEVGZuNWFjR2ptRHpQSUE1Z1BR?=
 =?utf-8?B?Vkl0NW4xSXowWHZVZXB0b2NqVE03ZnByd3p6WnZPenRMYXE1L2VMY1I3T1Jx?=
 =?utf-8?B?d0VYTU13a3pnUmhCdkZEeVp1bUh1ZUlTQmxWMGx1YnBGQVpZR3pKU2N6T1JY?=
 =?utf-8?B?UjVzSXBlb3YzdncyYkJScFA0L2tIWHhqb3QvRGZ5amdqbTVXaDZjSWZwL091?=
 =?utf-8?B?Vzc4NnVZUDloN0VJYnhxSS9xRnJLd0VuS2lMbnZlanlPYlREU2xxTTcyM3Q4?=
 =?utf-8?B?eDhwNnJBVWRLMVpFUkNLT0t5TWxJR3RwdHlJdEE5WlQxVW42MXlSR2Z5M1A5?=
 =?utf-8?B?cXJIWXV2ZFFlRkpnejd0N0hVUlB6Z2J1d3VwVHladFFIRVlJbzRIa2tHYUtt?=
 =?utf-8?B?UjE2TlFNL0laYUc0TEJBMjBuY1JWekNXUUxxSXFvSklGbFpWalkvWDRtZ1Fs?=
 =?utf-8?B?T2k4U1ZVRWN5Rmh6QjF1bWowSmhwSk9qV05SV04wVFczR3ZBWXpKM0ZpN0hO?=
 =?utf-8?B?cXNuOTZZcVNoZEt0TUVWTHJZTUpNUWlHOEZkd2RFU3lDTTZGUHJHbUJzdGc2?=
 =?utf-8?B?TnRvZmV1M204ZWZIS1RGYkxBZjQwS0ZKbmFVb1Y5QkpYckdva3dMTEYvUUVI?=
 =?utf-8?B?cXkvSTNjT2RPUVg5OVZsYXl6UlFlNG1NaUdYWlFHSXo0ZE13ZmlROC9aV3cx?=
 =?utf-8?Q?AWpDGZ/5ht3R4M7Q3vnvlhivGsJkcesDF171BwV?=
X-OriginatorOrg: prevas.dk
X-MS-Exchange-CrossTenant-Network-Message-Id: a0947b78-09b1-49eb-4696-08d931a5d27c
X-MS-Exchange-CrossTenant-AuthSource: AM6PR10MB1880.EURPRD10.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 15:37:32.1919
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: d350cf71-778d-4780-88f5-071a4cb1ed61
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZxjApnHbKqKS35j3BPDY+yhhYME7TGNuQLJxBhAxHImoqR5oGEAEku7PlN90JY05oMNY7VzDWUrz5lzWnbc3UXn9cXIgz2woArXns7kGv6g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR10MB4791

On 17/06/2021 17.01, Linus Torvalds wrote:
> On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>
>> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.
>>
>> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).
> 
> Adding Rasmus to the cc, because this looks kind of like the async
> rootfs population thing that caused some other oom issues too.

Yes, that looks like the same issue.

> Rasmus? Original report here:
> 
>    https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/
> 
> I do find it odd that we'd be running out of memory so early..

Indeed. It would be nice to know if these also reproduce with
initramfs_async=0 on the command line.

But what is even more curious is that in the other report
(https://lore.kernel.org/lkml/20210607144419.GA23706@xsang-OptiPlex-9020/),
it seemed to trigger with _more_ memory - though I may be misreading
what Oliver was telling me:

> please be noted that we use 'vmalloc=512M' for both parent and this commit.
> since it's ok on parent but oom on this commit, we want to send this report
> to show the potential problem of the commit on some cases.
>
> we also tested by changing to use 'vmalloc=128M', it will succeed.

Those tests were done in a VM with 16G memory, and then he also wrote

> we also tried to follow exactly above steps to test on
> some local machine (8G memory), but cannot reproduce.

Are there some special rules for what memory pools PID1 versus the
kworker threads can dip into?


Side note: I also had a ppc64 report with different symptoms (the
initramfs was corrupted), but that turned out to also reproduce with
e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
thread is here:
https://lore.kernel.org/lkml/CA+QYu4qxf2CYe2gC6EYnOHXPKS-+cEXL=MnUvqRFaN7W1i6ahQ@mail.gmail.com/

Rasmus


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 15:37:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 15:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144079.265265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltu5e-0002Fx-4C; Thu, 17 Jun 2021 15:37:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144079.265265; Thu, 17 Jun 2021 15:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltu5e-0002Fp-19; Thu, 17 Jun 2021 15:37:58 +0000
Received: by outflank-mailman (input) for mailman id 144079;
 Thu, 17 Jun 2021 15:37:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cW0Z=LL=rasmusvillemoes.dk=linux@srs-us1.protection.inumbo.net>)
 id 1ltu5b-0002De-GL
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 15:37:55 +0000
Received: from mail-ed1-x535.google.com (unknown [2a00:1450:4864:20::535])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06c6ee03-95f7-40e7-ac9b-0c358e427250;
 Thu, 17 Jun 2021 15:37:53 +0000 (UTC)
Received: by mail-ed1-x535.google.com with SMTP id s15so4592467edt.13
 for <xen-devel@lists.xenproject.org>; Thu, 17 Jun 2021 08:37:53 -0700 (PDT)
Received: from [192.168.1.149] ([80.208.64.110])
 by smtp.gmail.com with ESMTPSA id j24sm4538603edv.48.2021.06.17.08.37.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 17 Jun 2021 08:37:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06c6ee03-95f7-40e7-ac9b-0c358e427250
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=rasmusvillemoes.dk; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=YT85gP+YFDJ5ychGyvNL82kun+NGGvSQi4A8qTsyAT8=;
        b=d2GK9t3nL2nAfTcf7lqXjd31fD7Hz5xzxIAJtdTuGMg15qEOCzDgflXnVUQN8grvuM
         EeDaBMTzUJ5xRvELBxH/CNS4pCSwowu44jiHt3qIjmlgS1IMEZ8ZQIhPGE+4zoWIiZC3
         gnXozEqWX4Im4+7CVdSM9IFtkfATJIFunDZyk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=YT85gP+YFDJ5ychGyvNL82kun+NGGvSQi4A8qTsyAT8=;
        b=L5YBYR9Xnt3SECEZ2HAGHtg/fDfDo6HfB6rdlX1k+hDdZuLuUOiDiUGo03ICabNsm0
         p1OhBHaQyEcR6lfG/V/uTlTiLNK/vZvaA36kz3QjABhyFlQwM8LiG/G8rWGP8rYwItLK
         AlfyrBVRhllLCysc26pgjk7IDMy9BmV7A0N474k6R7PXMd1ePBqll+EK8nkzA6zpjBuG
         e9RLIqAgdgcrIalrRBtDKtp1NPBwMIajDiPGnNpBvljjyThWN50clFt2odndFMW2qIMG
         8n6bEOvp5AGKFD5vJtfbCdASIAk4XIJn70qABGkoe89NnBXKSWhYBAemrijg+ut0XG/z
         rflg==
X-Gm-Message-State: AOAM533ICyixrf01q8hUQKD1Tt6h0ruMMI8Wz4OJYRyjqhLGPIBpgFHw
	dMtkyiNZVO28SGSa3AU9RxJUnJ2sKBzCrg==
X-Google-Smtp-Source: ABdhPJx5RzRT+L7RIRvsl9usmDDDf/qtiyT+vCzmybNW0ErUVakZubdtURjpJlM3bBjtB8KFQFyMAA==
X-Received: by 2002:aa7:c705:: with SMTP id i5mr7526454edq.222.1623944272883;
        Thu, 17 Jun 2021 08:37:52 -0700 (PDT)
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Linus Torvalds <torvalds@linux-foundation.org>,
 Sander Eikelenboom <linux@eikelenboom.it>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
From: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Message-ID: <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
Date: Thu, 17 Jun 2021 17:37:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
MIME-Version: 1.0
In-Reply-To: <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/06/2021 17.01, Linus Torvalds wrote:
> On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>
>> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.
>>
>> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).
> 
> Adding Rasmus to the cc, because this looks kind of like the async
> rootfs population thing that caused some other oom issues too.

Yes, that looks like the same issue.

> Rasmus? Original report here:
> 
>    https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/
> 
> I do find it odd that we'd be running out of memory so early..

Indeed. It would be nice to know if these also reproduce with
initramfs_async=0 on the command line.

But what is even more curious is that in the other report
(https://lore.kernel.org/lkml/20210607144419.GA23706@xsang-OptiPlex-9020/),
it seemed to trigger with _more_ memory - though I may be misreading
what Oliver was telling me:

> please be noted that we use 'vmalloc=512M' for both parent and this commit.
> since it's ok on parent but oom on this commit, we want to send this report
> to show the potential problem of the commit on some cases.
>
> we also tested by changing to use 'vmalloc=128M', it will succeed.

Those tests were done in a VM with 16G memory, and then he also wrote

> we also tried to follow exactly above steps to test on
> some local machine (8G memory), but cannot reproduce.

Are there some special rules for what memory pools PID1 versus the
kworker threads can dip into?


Side note: I also had a ppc64 report with different symptoms (the
initramfs was corrupted), but that turned out to also reproduce with
e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
thread is here:
https://lore.kernel.org/lkml/CA+QYu4qxf2CYe2gC6EYnOHXPKS-+cEXL=MnUvqRFaN7W1i6ahQ@mail.gmail.com/

Rasmus


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 16:13:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 16:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144096.265285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltudS-00071l-44; Thu, 17 Jun 2021 16:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144096.265285; Thu, 17 Jun 2021 16:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltudS-00071e-0p; Thu, 17 Jun 2021 16:12:54 +0000
Received: by outflank-mailman (input) for mailman id 144096;
 Thu, 17 Jun 2021 16:12:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltudR-00071U-1I; Thu, 17 Jun 2021 16:12:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltudQ-0004yH-RJ; Thu, 17 Jun 2021 16:12:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltudQ-0006Gr-Fa; Thu, 17 Jun 2021 16:12:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltudQ-0000he-F5; Thu, 17 Jun 2021 16:12:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Lon+X2SvQJwHdWGdx0IFfN1brQ8rOfsY9JtbqJj9XJA=; b=AEbKkwxDvqZvR4wBIFsgpbu10+
	Ivda9f5GvltUh/0l5AzwCN83YcNEyFlQ7bU9AlOkCYQxRYpJMjWxzrs0QVxYxtnROLaDS6fspEkcL
	5QunMZ2Gqdc+uvAaFUhYcit9vs4jCE+HIIiKnJNycWM1CCCBWFCNlsW1EDVweAEAFxAs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162871: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=38848ce565849e5b867a5e08022b3c755039c11a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 16:12:52 +0000

flight 162871 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162871/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                38848ce565849e5b867a5e08022b3c755039c11a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  301 days
Failing since        152659  2020-08-21 14:07:39 Z  300 days  554 attempts
Testing same since   162871  2021-06-17 04:31:53 Z    0 days    1 attempts

------------------------------------------------------------
534 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 172321 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 17:39:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 17:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144104.265298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltvz0-0006C7-KU; Thu, 17 Jun 2021 17:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144104.265298; Thu, 17 Jun 2021 17:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltvz0-0006C0-HW; Thu, 17 Jun 2021 17:39:14 +0000
Received: by outflank-mailman (input) for mailman id 144104;
 Thu, 17 Jun 2021 17:39:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ltvyy-0006Bt-NB
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 17:39:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltvyw-0006QJ-Tj; Thu, 17 Jun 2021 17:39:10 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ltvyw-0005rm-J9; Thu, 17 Jun 2021 17:39:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=YCJ6YwBhcj+J/NT2ZL9DliHisUCCsiW9BvW2U/IOO7k=; b=QTV/RS55dT4GgwXF8tblze5OSE
	U2AqSOWtgWVKQgfEGN6Re+9h6/+uSY5EXdR5Wf1Fxht3k3u2EDhq19hsE5HMNKgFsOTTEajJBE6nR
	WVqQaqoQD3/nnlsnFv1d8Z/jcHBRtcKvc4fH05jgy4X1bSKPgQUPG8lL362vBm4WOZS8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update is cancelled
Date: Thu, 17 Jun 2021 18:38:57 +0100
Message-Id: <20210617173857.6450-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

As Live-Update is asynchronous, it is possible to receive a request to
cancel it (either on the same connection or from a different one).

Currently, this will crash xenstored because do_lu_start() assumes
lu_status is valid. This is not the case when Live-Update has been
cancelled: do_lu_start() will then dereference a NULL pointer and
crash xenstored.

Rework do_lu_start() to check if lu_status is NULL and return an
error in this case.

Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing the live update")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This is currently based on top of:

https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org

This can be re-ordered if necessary.
---
 tools/xenstore/xenstored_control.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index a045f102a420..37a3d39f20b5 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request *req)
 	time_t now = time(NULL);
 	const char *ret;
 	struct buffered_data *saved_in;
-	struct connection *conn = lu_status->conn;
+	struct connection *conn = req->data;
+
+	/*
+	 * Cancellation may have been requested asynchronously. In this
+	 * case, lu_status will be NULL.
+	 */
+	if (!lu_status) {
+		ret = "Cancellation was requested";
+		conn = req->data;
+		goto out;
+	} else
+		assert(lu_status->conn == conn);
 
 	if (!lu_check_lu_allowed()) {
 		if (now < lu_status->started_at + lu_status->timeout)
@@ -747,7 +758,7 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 	lu_status->timeout = to;
 	lu_status->started_at = time(NULL);
 
-	errno = delay_request(conn, conn->in, do_lu_start, NULL, false);
+	errno = delay_request(conn, conn->in, do_lu_start, conn, false);
 
 	return NULL;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 17:40:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 17:40:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144108.265310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltw02-0007S1-Vb; Thu, 17 Jun 2021 17:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144108.265310; Thu, 17 Jun 2021 17:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltw02-0007Ru-SK; Thu, 17 Jun 2021 17:40:18 +0000
Received: by outflank-mailman (input) for mailman id 144108;
 Thu, 17 Jun 2021 17:40:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZKGl=LL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ltw01-0007Rg-Nx
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 17:40:17 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd1bfc64-7146-4e82-85c3-6fefdfcb65a1;
 Thu, 17 Jun 2021 17:40:16 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5HHeE0cT
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 17 Jun 2021 19:40:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd1bfc64-7146-4e82-85c3-6fefdfcb65a1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1623951614;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=XNRiVLz8TCvvIJp6un9U+TyNYA+OhzJ5SZ66SPSgLO8=;
    b=cLqTeIx4lRwhU4dXlcflnDkymUcj051x1Y/rr18pYmQZfiQq0J5/GoxszwjQhzCSO6
    hFic/DHD32MNGpqh6wAgE8p73MANNVBVKqT8kGyfhuDDOpozl21hib3qnr27sB9N/yIX
    8lIwVKvsY8+wRJSf4e4vSNPJ2ykLmjOcjdCTPIw/diaLqhG1/q0VAfwusxoeH1XA4AxB
    nzpMwMm+vNNd44eajlBqKHCpsbUMDa6IiYSUhhLPqKbbZu2ZAG9qh4QSVjJ+sdOgZDdC
    ky3lOY4hyXz0sAlgB3fU3wsugIolHu1lAd/VDsiBHUPkWUulaiIiHVH9muXEsdZehvKQ
    CHDg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Thu, 17 Jun 2021 19:40:06 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH v20210616 00/36] leftover from 2020
Message-ID: <20210617194006.2d7f848f.olaf@aepfle.de>
In-Reply-To: <0bf3f6e7-c45e-c158-21d3-e3b636eb71c5@xen.org>
References: <20210616125129.26563-1-olaf@aepfle.de>
	<968713af-3c99-3fe3-8378-9d9c3985098e@citrix.com>
	<20210616170238.376cb13d.olaf@aepfle.de>
	<0bf3f6e7-c45e-c158-21d3-e3b636eb71c5@xen.org>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/n0NHTaKYb3my3i0UyB5.6c=";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/n0NHTaKYb3my3i0UyB5.6c=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 17 Jun 2021 13:02:39 +0200
schrieb Julien Grall <julien@xen.org>:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=4210

Thanks, comments in this bug suggest a workaround like this:

    new_max |= sizeof(unsigned long) > 4 ? new_max >> 32 : 0;

This triggers no warning.

Olaf

--Sig_/n0NHTaKYb3my3i0UyB5.6c=
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDLiPcACgkQ86SN7mm1
DoCaTg/+Ktd+Hy3kC7uZ332LVb0oMbCOZ8vFSq8uOwtqgcKqDLHlEpZ2WzgJ2PPN
boC8GcXK+Pv/X0HbCuycauh3v0CuDRCu9wFmKsUmyAFXV+XgJg7gNSTw0MZ758Ol
hhywOipWMKybj7rwOBNHE9bFyN4krDdAFhjs+MaFkI89d7RTlVnQ2Rso2liS8tzs
2D2XcGaNCQUxMRV2CdhpB5e0WC8Br6RCgn0PCWVUBlc6wUC0hH9OGApujVbkfurO
T3F7x9B4PKnAP0BDcHvaBejpLFJz7I3SMsWEzkuOSk1QxmhGlTKSbaGtqC69H/5z
6im79Rtg2qfa40I8j8Kk+sPaVGlTG+P0LaVh+lFjOMFj6Cc/+2KL4EnfWRyxhmRe
WLJ2zBUSbLAMjFl+1PMtT+xT9DEXZOfz90DlZVR+9jdllcdhhxFA0AuBr3UaXc+m
bQDZLEpDOyghOxphZA2Gnm5BN43D4CDmmRXe36qbs8ZvR2oofva4jPW8f7bLNlEh
XvcdQ/K6pfdbqpY/MsO7zRyknuUrRF8S0rqSMVcdaOYSQU23+kSddWW997TH0sxz
YF7gXPFTtlA2rMDQWGWRjY7qDBpzPHOHgZcn/k14S9pVFiwwl98J61JM70hV9qXr
Uw93P4LZvgFGOPMPU/2ODurJohlPBc6NDnk+Ynre6gYZVH6IRU8=
=H2y/
-----END PGP SIGNATURE-----

--Sig_/n0NHTaKYb3my3i0UyB5.6c=--


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 18:02:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 18:02:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144116.265321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltwLd-0001WP-Rn; Thu, 17 Jun 2021 18:02:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144116.265321; Thu, 17 Jun 2021 18:02:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltwLd-0001WI-On; Thu, 17 Jun 2021 18:02:37 +0000
Received: by outflank-mailman (input) for mailman id 144116;
 Thu, 17 Jun 2021 18:02:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iG+e=LL=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1ltwLc-0001WC-0y
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 18:02:36 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48daf945-e739-46c9-b34b-328a35bdbc7c;
 Thu, 17 Jun 2021 18:02:34 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:51064
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1ltwQB-0006fU-Be; Thu, 17 Jun 2021 20:07:19 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48daf945-e739-46c9-b34b-328a35bdbc7c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=PiN3z2mKDdT5442Y+muKtJpynotCbE1hW/7caNM/RJk=; b=SFGTJae2VtNYejTAt/O1zGbbFs
	WqNBFGLdNgegQgVYtdoSjbl7L5z3kKUVETjdj8jFbq5hVa3nKtFGd0AZvB97rSgiBOzlYFHcPHVIL
	zua+3fLh2ng3XN3RDLyuLGzPskn7WR6QA31nbq3EZCvvyJkm1lihbSwwq4f6BxAa/Hyo=;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>,
 Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
 <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
Date: Thu, 17 Jun 2021 20:02:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/06/2021 17:37, Rasmus Villemoes wrote:
> On 17/06/2021 17.01, Linus Torvalds wrote:
>> On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>>
>>> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.
>>>
>>> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).
>>
>> Adding Rasmus to the cc, because this looks kind of like the async
>> rootfs population thing that caused some other oom issues too.
> 
> Yes, that looks like the same issue.
> 
>> Rasmus? Original report here:
>>
>>     https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/
>>
>> I do find it odd that we'd be running out of memory so early..
> 
> Indeed. It would be nice to know if these also reproduce with
> initramfs_async=0 on the command line.
> 
> But what is even more curious is that in the other report
> (https://lore.kernel.org/lkml/20210607144419.GA23706@xsang-OptiPlex-9020/),
> it seemed to trigger with _more_ memory - though I may be misreading
> what Oliver was telling me:
> 
>> please be noted that we use 'vmalloc=512M' for both parent and this
> commit.
>> since it's ok on parent but oom on this commit, we want to send this
> report
>> to show the potential problem of the commit on some cases.
>>
>> we also tested by changing to use 'vmalloc=128M', it will succeed.
> 
> Those tests were done in a VM with 16G memory, and then he also wrote
> 
>> we also tried to follow exactly above steps to test on
>> some local machine (8G memory), but cannot reproduce.
> 
> Are there some special rules for what memory pools PID1 versus the
> kworker threads can dip into?
> 
> 
> Side note: I also had a ppc64 report with different symptoms (the
> initramfs was corrupted), but that turned out to also reproduce with
> e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
> thread is here:
> https://lore.kernel.org/lkml/CA+QYu4qxf2CYe2gC6EYnOHXPKS-+cEXL=MnUvqRFaN7W1i6ahQ@mail.gmail.com/
> 
> Rasmus
> 

I chose to first finish the bisection attempt; not so surprisingly, it ends up with:
e7cb072eb988e46295512617c39d004f9e1c26f8 is the first bad commit

So at least that link is confirmed.

I also tried booting with "initramfs_async=0", and with that the guest boots with the 5.13-rc6-ish kernel that fails without it.

--
Sander



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 18:25:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 18:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144126.265332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltwhD-0003sz-Pg; Thu, 17 Jun 2021 18:24:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144126.265332; Thu, 17 Jun 2021 18:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltwhD-0003ss-Lu; Thu, 17 Jun 2021 18:24:55 +0000
Received: by outflank-mailman (input) for mailman id 144126;
 Thu, 17 Jun 2021 18:24:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/zva=LL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ltwhB-0003sm-Vp
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 18:24:54 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67240689-0265-4969-b12d-4a3fa9728e1c;
 Thu, 17 Jun 2021 18:24:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67240689-0265-4969-b12d-4a3fa9728e1c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623954292;
  h=to:from:subject:message-id:date:
   content-transfer-encoding:mime-version;
  bh=Rupoul5o9Ei/QQF4fh4LN0Bk6hWxsd/xAB7cl+QXhfk=;
  b=IBn9ZfAlVcDEXf4RWHNytWEc13G5i1Hq26wlm5ybDhJ5CXFKHXm8YIiD
   aOeQkl7A82xaH+1kDbdp5Ug6xcFO7wwL+O3Y18D9/AhsyJCTthUwm17Dl
   4NrkvTnjuO3Un50u+cxTemy8iRLwfqE47ZhIb6cQ5De4IY/UVmXabkd4J
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qt/kw05eh3HycpUMUmD9Bb9cd/c0BHU9lSy3nwyMJPZJGruPNDDtK9B+w5Ie94VtvA9szdXOZ1
 3J6w71xsiP7T11HiK6chi67UJFig3dJMIupz/6Z1X5QV7lDW4B965V6SIAQToH2LSQ1y8z0qbK
 kkdIJSJ24715+ajDA0H4A/w/pdqYNwGoJzdujL7Ke7VxHYjVa1ePG6jECunTOygce8MsJiOJTW
 62DINhh5PxmqXlQ6l5PHtYQkCsluR68yfcHzErcINipobm5jphrKB3AaRIiEeR0nhXRKPskSdn
 9Cc=
X-SBRS: 5.1
X-MesageID: 46472738
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:O44uRa1eJz+DNy0Hcndd0AqjBHYkLtp133Aq2lEZdPU0SKGlfq
 GV7ZEmPHrP4gr5N0tOpTntAse9qBDnhPxICOsqXYtKNTOO0AeVxelZhrcKqAeQeBEWmNQ96U
 9hGZIOcuEZDzJB/LvHCN/TKadd/DGFmprY+ts31x1WPGVXgzkL1XYANu6ceHcGIzVuNN4CO7
 e3wNFInDakcWR/VLXBOpFUN9KzweEijfjdEGc7OyI=
X-IronPort-AV: E=Sophos;i="5.83,281,1616472000"; 
   d="scan'208";a="46472738"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WaZNHiLy4P1B+SnDgRUQCSG6x/zVV0nMCANad2qjCidltBpIqUCI1idoOhR28O0dJQ1GNU2hJ0Cy/dx04eq5YacnEKHjc+m0J83Tp3wnalqVGl5DFyUmTwRRsThhpUK3mRCUnqcEldJ3D0GH+MDXOO5l1T5UvZnKuf4XVA18RBxNtA9cqtG4FP46tTNzgyq8FZxHR+XWugkDNFMMX4sU0dMVwx30ptAlsFkcvwPJ9gLa7vU2I6Fm5GJ6/pI/z7HMKrt6OTp9H7c73MFvyMuMJpakZs96Ye852e09ToNMzoPTLE/b8JiXwOqZEQLx10l0K3LSpkVDNJOStq0CT3MIgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rupoul5o9Ei/QQF4fh4LN0Bk6hWxsd/xAB7cl+QXhfk=;
 b=aCOqPW6V+NQUWsEblSQV3PA51b2AdyDr+F/UIWXQKd2YUKNb1ag89JMdf1Oirv/7n+7yG4CDKWmvniVaVu8hXoXThF48T2Nqim15569pASHzYfC2R7COXtRBoOILbFXCSzji0sN8cBuj0BnV4fay8fXR98jsSiaQbSNs2vN+GXxvGOrMcLV34Qiq7WFvNmh/MHBhRrPxUUJ9Z9g0ekpDOKfbg9XClKK0WpMSgh9mmau5VuvreS1+5R2pcK73ulsaBwsq3k6lUnRc+hVaB9iKHKdKeU7rWczGU5JOaqKNudv7ob/hPAo4xMXiXs8e3xzFn5/AYu6bV32CSMSO/XQDJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rupoul5o9Ei/QQF4fh4LN0Bk6hWxsd/xAB7cl+QXhfk=;
 b=O5vlrPkL2cX6rODtz0oqimUQTLG9QkInn/bOIsGCnygxfk6BY8oJYYIfq2tVeNI7xL2EFAzz7H54F1U4JtD01f7TYA/X9s9FW9BsQiGfaHYmLH51bK+nWiB1SafXbDVGpsqzy+IjFxFpkttmSWSvCCsw39NZGiM7AV4XyzXwPxM=
To: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Backport 6d49fbdeab3e to 4.14
Message-ID: <c8af8c5e-46cc-ab85-0364-0840d2309759@citrix.com>
Date: Thu, 17 Jun 2021 19:24:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0461.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e145a085-8de1-44f6-6d5c-08d931bd3054
X-MS-TrafficTypeDiagnostic: BYAPR03MB3942:
X-Microsoft-Antispam-PRVS: <BYAPR03MB39424AB27C4E1EB49F6745B4BA0E9@BYAPR03MB3942.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 0npWTrnRE3QCSqLPipVZaYfxd5OfYFJpM+9oyk87Rv3zqhHvzAk3VTkdnzP3aUpIlsAmNoWZ3lpx7FqoV+Z3gv3bptI6EIxk7pu3VnMIu2Giz0KVFIGZe7JU6foAqGQWntEIAeAsWlmwiEcss2JhySGNkcIZrIcoB+7F3QFgBz4Pl/np2RfaUhvBXRkHMZS6RJ80g4338KvcK5JB4uwEs2XCa/5tq88/E5tVTQH8yELu/EUetfxANVpe321IYlXS3UreH8wLjJxaBqFPAe8K/SRF8pA5RQYnZrXS0LW851Qe7j8IVzAjibM8Cv8p5L+Zx7vOxL3LSyZ5pScttt6QKsAaDql3Pg6r71ni9dRT4q1k/vhD/MnAVcH2fv24CKKVePPPugKy0fW8zhkmqAraB57Tt5QDZgIqe8I0eoW2GNFhdF/OCHkgClhA0D8TpfYlnjBAeIom1/SOf531zN12RnYmWb/f1g1X/U8n1WvMiprFrqK563LrqdOd7w2EhVUPD0R03q/WK4Qr0KPMCX8RtN/j0ZabVlM1kV1ToOWxuBdvpOzMAFQBGAWD79OqdEh/Ez0pcmUPCf9kUoyznP/itiDgt8qSrALokI+EGzBu2/lIZSC/skj7taxgAEIUTHb/3dEv3MptIqc51CVdTxye3wh1SBlCLa94iMHp8spDY42BTcn/hDQB61PY+7jh7CRDvM96FBDW896doJS2hueaptOuc1NAmOAnrbRCYcrfvsQ9xG4Wva3HmN8O3TDAUMPwsEulVljq9/NGDphVTG93FO9qE1qWAQKP7JBnvmWOrwuIjR2DfP+QIKNLfqjEooP8
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(396003)(136003)(376002)(366004)(346002)(2906002)(478600001)(66476007)(8936002)(6666004)(38100700002)(966005)(31686004)(66946007)(110136005)(186003)(66556008)(16576012)(8676002)(6486002)(26005)(5660300002)(16526019)(86362001)(31696002)(316002)(4744005)(36756003)(956004)(2616005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?R1pxaFhsOEhDd2g2UVprZ0x1TExyRHRSNHpYUE1IY2FaTlk2ZmtweFcvUG1N?=
 =?utf-8?B?V0V4U0ZWVFpQTktRK2tMODltaXc4NldjR2ViU3p5Wjd6dTh0YUIrckdac2tx?=
 =?utf-8?B?Y3Qzck82VjJ0UDJNcWhrVEpNY1MyRGFpK0NnTVhRNHJ5QyttOUxoWDlQRkZl?=
 =?utf-8?B?Y3ZwTWdnRC96RUsvUVFTQ2NZVEZna2JmME9naWVjSUFyb0FsUm9uYVp4RUg4?=
 =?utf-8?B?VFVIcXZwdzIxbGpacFJQYTI4clloelhWMWxwcmxoUWs5SHJwNWk5UHdJNzBF?=
 =?utf-8?B?dFI5WEdtckVvSkdQSFJkTXoza2ZSbjlOTmlvZmErRnR6Sk14YVBqUlpFTTEr?=
 =?utf-8?B?eGNhTGxsYktKYnRPVFZYMmFBSnNSeWFSU3IrSmtFc0hKWXFnUlBPWVFoTWE5?=
 =?utf-8?B?dkdYUEtLd0txR2xvV2FlQWQ5MTd1MEw5VEZGVlNXWjRMdHVWVjY3SmZHUzht?=
 =?utf-8?B?Z1FTTDBZNFNZemJZTFZ5ODhEOGpaRmlJRUNuakJyVStzd0VveGJOdG1ZTys3?=
 =?utf-8?B?VURpeEs2N2ZqcmZ1SEpPTnoxZGE3TzJxS2ZuTFpUenlkSzFsQlVyMWhaSW9W?=
 =?utf-8?B?RzZTT3VlQW9TbklXVFFqOFpiUXk5VkdWMlVMQmhTQ0grQWhCbUg0L3RqNDJN?=
 =?utf-8?B?cTVpdFRCRTF5UlczRDdtYXE3MiszeFFYQjdGazNrendaK2d5ZHlXRlh4aFFY?=
 =?utf-8?B?c1RBL3FnandBYU0vZENJMTZpWFVmenY2Wkdvb2hMbUFnanN5cjNEcm9vL0VD?=
 =?utf-8?B?OERjVE1Oakh1TitseTdHQzFUUVFldjJpdkJtV3paUXZ0bTJSQ0hFNVVCb1ZB?=
 =?utf-8?B?Z0tUWXpDOWFXcEJaa2FwS25FaE1KZnIyWFhjQUN0Slk1WDJFUFJwTmlyWlBw?=
 =?utf-8?B?VCtVRU53MVBwNDd6cUpDQzhUdmhscm10M05TbkFmVEdjV1VnNkZQeTZKVStP?=
 =?utf-8?B?MG1PNGwySWxjaXNQWWxXQjJONWkrdDE2OWFuWmtaM0R4azN5OWgxS2JPbUgx?=
 =?utf-8?B?R0ZGajZ0S0lVcldMS0hSQnVlYjdTNG5NR1QwQmVCdXhsT0tBNzdvVW9ua0Vk?=
 =?utf-8?B?Q2NlY0lHYUV6OW41Q2l0Tk5vR2JwZ0gybUxhcmJhMnk2VlBlTEtGL2w3bFBz?=
 =?utf-8?B?WWQ3REdULzRGb3VWeXBDK1hNK0o2bFZ0TmJyQ0NCZmJidXlDR3o2RmN0czFs?=
 =?utf-8?B?L1dYY1FyY25Ta3h0bmpKamFWQkNXei90SUorenBVK1kxWGJoOVJVOU9ZdGhD?=
 =?utf-8?B?M1ZuYWp6V0ZTdEwrbFhSOWJtOEJIRUEvOGY4NkhHZmVOMXQ3djdTdWpGZHIr?=
 =?utf-8?B?czBpb2FJUTkwZlhudnhHa0d3TkRQSjhzc3FVUXNSNmE3YUtwQnlUQWEwUlpK?=
 =?utf-8?B?ZzRhYXhPTmZWWjM3aG9FcU5aS3JyWU5HVkl0TzJsUERCejB5TUNuWVR5NERS?=
 =?utf-8?B?TlltR1BKSkJDRzVWb01zcGV4NVRuTGJMREI2MjhDeG1OV2Q5dXY5bWVlKzUz?=
 =?utf-8?B?aDBaYzZtb2p1WU0rSlgzd20yU3B6Sll0MTgzcXcyMTBLVFQ2Y0ZtUVdtNU44?=
 =?utf-8?B?VzhGallrc0hSY2ZkRTF6c0JzdC94UlBUV0RCU05GaC9nMndGeGl3dmdGRFRB?=
 =?utf-8?B?SzBPanl0aXZIVXlzUUI0eWdadmRTZGhiSkJuZlE1Ry9acWtxY0NxdnJtRTUw?=
 =?utf-8?B?WFdrV1IxcjV5eHJTYWlxMVdNa0NjWU54MU9yaE8xVEdtS0E1M0F1aitNUmFH?=
 =?utf-8?Q?LEEvBcI4IDGP64FeYcWvAth8G5BsRvTVe+tPW54?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e145a085-8de1-44f6-6d5c-08d931bd3054
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 18:24:48.0954
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: r5trJSrWK9DWVAVtcjWVPEyXVT4iv7WbJa9C+gWcAM7hNvOR1+9Z7we8NHCzkEYuBU3ZMqHgVuZVIgawpOC0lAN7oSjdSCfKJKRrDcrD8cc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3942
X-OriginatorOrg: citrix.com

Hello,

Gitlab CI on 4.14 currently fails, e.g.
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/322762306

The problem is that CentOS 6 (still supported back then) ships Python 2.6,
which trips over the bug fixed by c/s 6d49fbdeab3e (in 4.15).
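For background, a generic illustration (the actual construct fixed by 6d49fbdeab3e may well be different): a common way build scripts break on Python 2.6 is auto-numbered format fields, which only appeared in Python 2.7:

```python
# Generic Python 2.6 pitfall (illustration only; the bug fixed by
# 6d49fbdeab3e may be a different construct).  Auto-numbered fields
# ("{}") were added in Python 2.7; on 2.6 they raise
# "ValueError: zero length field name in format".
def describe(name, version):
    # Explicit indices work on 2.6, 2.7 and 3.x alike.
    return "{0} requires Python >= {1}".format(name, version)

print(describe("auto-numbered format fields", "2.7"))
```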

From IRC,

20:18 < Diziet> andyhhp: golang is experimental and the impact should be
mild so yes, but please can you c&p what I write now into an email so we
have some record when I have forgotten again :-)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 19:39:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 19:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144141.265375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltxra-0002OM-Rp; Thu, 17 Jun 2021 19:39:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144141.265375; Thu, 17 Jun 2021 19:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltxra-0002OF-Mi; Thu, 17 Jun 2021 19:39:42 +0000
Received: by outflank-mailman (input) for mailman id 144141;
 Thu, 17 Jun 2021 19:39:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iG+e=LL=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1ltxrY-0002O9-Tf
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 19:39:41 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd1f1167-bd39-4a7d-a6eb-5326e84ce9ec;
 Thu, 17 Jun 2021 19:39:38 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:51066
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1ltxw7-0006rg-RX; Thu, 17 Jun 2021 21:44:23 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd1f1167-bd39-4a7d-a6eb-5326e84ce9ec
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=Zvi0Ng3xRNySrFis8ePYQIwCfqI+TS0n0phqz3c8npI=; b=Br73dsRjyQ56q3GU3RrX4ST7Y8
	oIMz7snPlu63+pDbaayGm5uMJ9zB3cSkujAYMe65iqKBCcBVCuYl181yJf2ZLLufTSNNkfPY+O4Bf
	GRnDgyivVtsHLODmpue38fHrfJOgg3E6ZndjVK6IRiDhKl1IyIGK/FdIUeaDKk54EFB8=;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
From: Sander Eikelenboom <linux@eikelenboom.it>
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>,
 Linus Torvalds <torvalds@linux-foundation.org>,
 Juergen Gross <jgross@suse.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
 <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
 <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
Message-ID: <7338064f-10b6-545d-bc6c-843d04aafe28@eikelenboom.it>
Date: Thu, 17 Jun 2021 21:39:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/06/2021 20:02, Sander Eikelenboom wrote:
> On 17/06/2021 17:37, Rasmus Villemoes wrote:
>> On 17/06/2021 17.01, Linus Torvalds wrote:
>>> On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>>>
>>>> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.
>>>>
>>>> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>>>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).
>>>
>>> Adding Rasmus to the cc, because this looks kind of like the async
>>> rootfs population thing that caused some other oom issues too.
>>
>> Yes, that looks like the same issue.
>>
>>> Rasmus? Original report here:
>>>
>>>      https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/
>>>
>>> I do find it odd that we'd be running out of memory so early..
>>
>> Indeed. It would be nice to know if these also reproduce with
>> initramfs_async=0 on the command line.
>>
>> But what is even more curious is that in the other report
>> (https://lore.kernel.org/lkml/20210607144419.GA23706@xsang-OptiPlex-9020/),
>> it seemed to trigger with _more_ memory - though I may be misreading
>> what Oliver was telling me:
>>
>>> please be noted that we use 'vmalloc=512M' for both parent and this
>> commit.
>>> since it's ok on parent but oom on this commit, we want to send this
>> report
>>> to show the potential problem of the commit on some cases.
>>>
>>> we also tested by changing to use 'vmalloc=128M', it will succeed.
>>
>> Those tests were done in a VM with 16G memory, and then he also wrote
>>
>>> we also tried to follow exactly above steps to test on
>>> some local machine (8G memory), but cannot reproduce.
>>
>> Are there some special rules for what memory pools PID1 versus the
>> kworker threads can dip into?
>>
>>
>> Side note: I also had a ppc64 report with different symptoms (the
>> initramfs was corrupted), but that turned out to also reproduce with
>> e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
>> thread is here:
>> https://lore.kernel.org/lkml/CA+QYu4qxf2CYe2gC6EYnOHXPKS-+cEXL=MnUvqRFaN7W1i6ahQ@mail.gmail.com/
>>
>> Rasmus
>>
> 
> I chose to first finish the bisection attempt; not so surprisingly, it ends up with:
> e7cb072eb988e46295512617c39d004f9e1c26f8 is the first bad commit
> 
> So at least that link is confirmed.
> 
> I also checked booting with "initramfs_async=0", and now the guest boots with the 5.13-rc6-ish kernel that fails without it.
> 
> --
> Sander
> 

CC'ed Juergen.

Juergen, do you know how direct kernel boot works, and whether it could
interfere with this commit?

After reading the last part of the commit message of e7cb072eb98, namely:

     Should one of the initcalls done after rootfs_initcall time (i.e., device_
     and late_ initcalls) need something from the initramfs (say, a kernel
     module or a firmware blob), it will simply wait for the initramfs
     unpacking to be done before proceeding, which should in theory make this
     completely safe.
     
     But if some driver pokes around in the filesystem directly and not via one
     of the official kernel interfaces (i.e.  request_firmware*(),
     call_usermodehelper*) that theory may not hold - also, I certainly might
     have missed a spot when sprinkling wait_for_initramfs().  So there is an
     escape hatch in the form of an initramfs_async= command line parameter.

It dawned on me that I'm using the "direct kernel boot" functionality, which lets you boot a guest
where the kernel and initramfs get copied in from dom0. That works great, but perhaps it
pokes around the way the last part of the commit message warns about?

(I think the feature is called "direct kernel boot"; what I mean is using, for example:
     kernel      = '/boot/vmlinuz-5.13.0-rc6-20210617-doflr-mac80211debug+'
     ramdisk     = '/boot/initrd.img-5.13.0-rc6-20210617-doflr-mac80211debug+'
     cmdline     = 'root=UUID=2f757320-caca-4215-868d-73a4aacf12aa ro nomodeset xen_blkfront.max_ring_page_order=1 console=hvc0 earlyprintk=xen initramfs_async=0'

options in the xen guest config file to boot the (in this case PVH) guest.
)

--
Sander





From xen-devel-bounces@lists.xenproject.org Thu Jun 17 20:20:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 20:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144148.265385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltyVO-00078c-UP; Thu, 17 Jun 2021 20:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144148.265385; Thu, 17 Jun 2021 20:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltyVO-00078V-Ra; Thu, 17 Jun 2021 20:20:50 +0000
Received: by outflank-mailman (input) for mailman id 144148;
 Thu, 17 Jun 2021 20:20:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltyVN-00078L-Ti; Thu, 17 Jun 2021 20:20:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltyVN-0000pY-ME; Thu, 17 Jun 2021 20:20:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltyVN-0003dx-CT; Thu, 17 Jun 2021 20:20:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltyVN-0005I3-Bz; Thu, 17 Jun 2021 20:20:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m4f164NFHcvrc2bYESCAtr4PHJeCM6YMw/27lvfI0B8=; b=2TfwbB9d4DUraFoRp9g9a0ezan
	zjmptKxiC0LTgqMRyoHQei8/afo4KbXQK1WfO7Dui+BFdzXTWkEeHm9RRkfyapegVGIaZJso6BPTc
	Imhl4SRNUrf/kv5SMchIAVhBk1j87tPQAI97UhK8h9IyZgVZ4C1ye4n0TvSIcI5aG7C4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162880-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162880: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 20:20:49 +0000

flight 162880 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162880/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac

Last test of basis   162848  2021-06-15 21:01:36 Z    1 days
Testing same since   162880  2021-06-17 17:00:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4bcf6433ee..8af4b47f06  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 21:26:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 21:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144156.265400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltzWb-0004RC-VD; Thu, 17 Jun 2021 21:26:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144156.265400; Thu, 17 Jun 2021 21:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltzWb-0004R5-SA; Thu, 17 Jun 2021 21:26:09 +0000
Received: by outflank-mailman (input) for mailman id 144156;
 Thu, 17 Jun 2021 21:26:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ltzWa-0004Qz-BD
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 21:26:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e94a7df-e5ed-47a2-b64d-528f4f4f049d;
 Thu, 17 Jun 2021 21:26:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 14473613CE;
 Thu, 17 Jun 2021 21:26:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e94a7df-e5ed-47a2-b64d-528f4f4f049d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623965166;
	bh=QHSiuDjkImhfG7EZBqMDRV8+xFAIQX6kPnewyd+S9CY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eGz6Pv8aR0JYpMlb7DSwROkVnxGfU4WxMeu4ogi7VgsJV3WfWw4NMYBTQGNVqPCTL
	 xCRtJ/YaLHPvhhKe22KhvWgj+jSQ5T6m5jr7YQ8PkFuiTI8Dyn0asKCq1NaRjlfZCJ
	 bDtL3Pg09pc2QZ32vKKjqz18Dl91bbyEBV4lj8HwGu0PgWEBPBWGyx4GVJd+KMxSNc
	 YU2tMZ4gROoJG6C/MDXsvOjHUqK3Vk9PqKrr1xKgZ+U1pnKIS4jlykFrMsVmUf8OY0
	 ZkMgEBKyg4f2B7ojSk7hZcWu0CmJRT95819u9PeNwtfXUZnPCylmoYe9WzGJSXUod9
	 agzVQ5O6QAYew==
Date: Thu, 17 Jun 2021 14:26:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "committers@xenproject.org" <committers@xenproject.org>
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions
 - FAIL
In-Reply-To: <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
Message-ID: <alpine.DEB.2.21.2106171409440.24906@sstabellini-ThinkPad-T480s>
References: <osstest-161917-mainreport@xen.org> <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com> <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com> <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com> <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Jan Beulich wrote:
> GitlabCI doesn't tell me anything just yet, unless I go actively poll
> it. And as mentioned just yesterday on irc, I don't think I can easily
> navigate my way through those web pages, to find breakage I may have
> introduced and hence would better go fix. Unlike osstest, where I am
> told what failed, and I know where to find the corresponding logs.
> 
> It's also not clear to me at all in how far GitlabCI would have
> spotted the issue here, no matter whether it's caused by a hypervisor
> change or the XTF test being wrong. So far I've seen GitlabCI only
> spot build issues.

Without getting into the specifics of this problem, I just want to let you
know that Doug and I gave a little "tour" of GitlabCI at Xen Summit. I
recommend watching the video when it becomes available. I find GitlabCI
very easy to use, and generally easier than other CIs. The very short
version is the following:


# find the pipeline running for the commits / patch series you care about

Pipelines for staging are here:
https://gitlab.com/xen-project/xen/-/pipelines

Pipelines for outstanding patch series on xen-devel are here:
https://gitlab.com/xen-project/patchew/xen/-/pipelines

I'll pick one of the recent runs for an outstanding series:
https://gitlab.com/xen-project/patchew/xen/-/pipelines/322112514

You can see what was committed by patchew by clicking on the link "84
jobs for patchew/20210616144324.31652-1-julien@xen.org in 87 minutes and
32 seconds (queued for 15 seconds)". The link brings you to the branch
with the commits:
https://gitlab.com/xen-project/patchew/xen/-/commits/patchew/20210616144324.31652-1-julien@xen.org


# find the failed jobs and logs

Look for the red "x" marks corresponding to the individual jobs that
failed in the pipeline. In this case we have 2 red "x" marks on the
right side, which correspond to these 2 jobs:

https://gitlab.com/xen-project/patchew/xen/-/jobs/1352370918
https://gitlab.com/xen-project/patchew/xen/-/jobs/1352370916

To get the full logs in text format simply click on the "document" icon
just above the black square with the logs. Other binary artifacts are
available if you click on "Download" on the right side of the screen.
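The click-path above can also be scripted. As a sketch (assuming GitLab's standard v4 REST API, which permits unauthenticated reads on public projects; nothing here is Xen-specific), the failed jobs of a pipeline can be listed like this:

```python
# Sketch only: query GitLab's v4 REST API for a pipeline's failed jobs.
# Public projects such as xen-project/patchew/xen need no access token.
import json
import urllib.request

API = "https://gitlab.com/api/v4"

def failed_jobs_url(project, pipeline_id):
    # GitLab wants the project path URL-encoded ("/" becomes "%2F").
    encoded = project.replace("/", "%2F")
    return "%s/projects/%s/pipelines/%d/jobs?scope[]=failed" % (
        API, encoded, pipeline_id)

def failed_jobs(project, pipeline_id):
    # Returns (job name, web URL) pairs for every failed job.
    with urllib.request.urlopen(failed_jobs_url(project, pipeline_id)) as r:
        return [(job["name"], job["web_url"]) for job in json.load(r)]
```

For the example pipeline, failed_jobs("xen-project/patchew/xen", 322112514) should list the two failed jobs linked above.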



# find details on the failed job

The jobs are divided into two groups: build jobs and test jobs. The
build jobs simply build Xen and the tools with various compilers and
options. They are all in the left column of the pipeline page and are
straightforward.

The test jobs actually try to run something inside QEMU (full
emulation). The scripts that run things are:

automation/scripts/qemu-smoke-x86-64.sh
automation/scripts/qemu-smoke-arm64.sh
automation/scripts/qemu-alpine-arm64.sh

and their names correspond to the job names. In our example,
qemu-smoke-x86-64.sh is the one that failed; it runs XTF inside
QEMU.


I hope this helps! I'd be happy to jump on a call to give you a short
intro on how to use gitlab-ci, just let me know.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 21:53:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 21:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144165.265411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltzwY-0007Xi-7J; Thu, 17 Jun 2021 21:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144165.265411; Thu, 17 Jun 2021 21:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ltzwY-0007Xb-2T; Thu, 17 Jun 2021 21:52:58 +0000
Received: by outflank-mailman (input) for mailman id 144165;
 Thu, 17 Jun 2021 21:52:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltzwX-0007XP-2n; Thu, 17 Jun 2021 21:52:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltzwW-0002NC-UR; Thu, 17 Jun 2021 21:52:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ltzwW-0006rV-Mf; Thu, 17 Jun 2021 21:52:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ltzwP-0002o4-L1; Thu, 17 Jun 2021 21:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nfMN2VPezf5qOI1OqFVUfscbJf3Irw5meW5xLqQFp8w=; b=B5+eJGOLxIwEIhgxaUAvfsvMl1
	ZqsNqDYc+s/p57UGtqZANAaKIk69sgueiKtR5BxtiNLW+dtcQKDNtIpZGb+Xhi2HLe+XC91ocAjvI
	tpThMpE24tzyJu9yOR8ftduZdRlsJoIqumHx7A5HNRP4oqVtGex7MBCWXA4G3yC2U8yI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162878-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162878: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=1162ae8297e1fc9871e615cad7d505d639b7ed0c
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 21:52:49 +0000

flight 162878 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 1162ae8297e1fc9871e615cad7d505d639b7ed0c
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   13 days
Failing since        162368  2021-06-04 15:42:59 Z   13 days   30 attempts
Testing same since   162875  2021-06-17 10:45:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2158 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 22:42:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 22:42:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144174.265425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu0hg-0003yI-24; Thu, 17 Jun 2021 22:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144174.265425; Thu, 17 Jun 2021 22:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu0hf-0003yB-VM; Thu, 17 Jun 2021 22:41:39 +0000
Received: by outflank-mailman (input) for mailman id 144174;
 Thu, 17 Jun 2021 22:41:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu0he-0003y1-Do; Thu, 17 Jun 2021 22:41:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu0he-0003Ak-3J; Thu, 17 Jun 2021 22:41:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu0hd-0000jD-Q2; Thu, 17 Jun 2021 22:41:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lu0hd-0000Z2-PP; Thu, 17 Jun 2021 22:41:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IAPf6ld5MIdhGsMyPru7wnzBmjCA1PEi3MFgL7OSu/M=; b=VXdOKMUw3+BbvBNhlv5Sg87Tag
	aE/Wtco9LCPEYR29FR9iyV8cLB6z9d9vq2w0PJqxn6zp2sD5oVLbaUnOx2yFiw2sOG+S+weG+KlZS
	3k5sobyPETfGEafpjvwrFLxTLFiLO4ypuIdHnayvG+TgiPIOPpun9DK3ahhuPJBh2PFg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162876-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162876: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Jun 2021 22:41:37 +0000

flight 162876 xen-unstable real [real]
flight 162883 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162876/
http://logs.test-lab.xenproject.org/osstest/logs/162883/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z    9 days
Failing since        162556  2021-06-08 22:39:08 Z    8 days   14 attempts
Testing same since   162854  2021-06-16 06:56:28 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1115 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 22:48:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 22:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144182.265439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu0oR-0004gf-Qc; Thu, 17 Jun 2021 22:48:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144182.265439; Thu, 17 Jun 2021 22:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu0oR-0004gY-NT; Thu, 17 Jun 2021 22:48:39 +0000
Received: by outflank-mailman (input) for mailman id 144182;
 Thu, 17 Jun 2021 22:48:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/zva=LL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lu0oQ-0004gS-2h
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 22:48:38 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 016e062f-ea28-4a0f-b704-083bc53c9862;
 Thu, 17 Jun 2021 22:48:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 016e062f-ea28-4a0f-b704-083bc53c9862
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1623970115;
  h=subject:from:to:references:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=HM/n+svpzCMCAw7P5nyu2u5TMosY8jl+AuHp9cgJusU=;
  b=CKgELV7S91o+F04qKmgVmeTtiHVqifMME0EujD7lH1QgMCm//deRx68v
   lzUJpYctZTw1BSTbhzIHTN7whGX2OZodQVlJfh7tV8OtQe3RF9slbYlmX
   2IsvBlIbXDUMtXKHEJ/6SDp+FmihULYKIguNMuip6kGjxUS5vD1M9G0TS
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +AK4SJE94AZL6187BtVL0/47GP0hBxQQFf7lHpGn46zu63t2nfHCK2O1cYToOgRwctfTTzGriN
 ZYoHOp2FbHK63etGbl7y33CJ8Xsh+fihiYw56xib+nixUCC7u82nQYccJplXWDJUEYshB572dH
 +4FpiGCRLN5jt5fXw5+5FCmWsXsaS0ZBMObWu+QY+UBpwp8P2N/+gQwHkp6AftKrKSIfgLc7io
 5sPnLdAgxMadGM1vpFUsYeLsrYFvSGwVWb8CsYXA5PBv4QZjRcxonTzRqMQYavFxCDqkCJEXv+
 VBM=
X-SBRS: 5.1
X-MesageID: 46418269
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:lWzhFaNq96AeXsBcTjGjsMiBIKoaSvp037BK7S1MoH1uA6ulfq
 WV9sjzuiWatN98Yh8dcJW7Scq9qBDnhPpICOsqXYtKNTOO0AeVxcNZnOnfKlXbcBEWndQtsJ
 uIHZIeNDXxZ2IK8foT4mODYqkdKA/sytHXuQ/cpU0dPD2Dc8tbnmFE4p7wKDwNeOFBb6BJba
 a01458iBeLX28YVci/DmltZZm/mzWa/KiWGSLvHnQcmXKzsQ8=
X-IronPort-AV: E=Sophos;i="5.83,281,1616472000"; 
   d="scan'208";a="46418269"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m6+Uypa9N1PJHe9bHHUk6kXGAl+DbPAuK+khcmyGTcI+pwYjHV8XRDAJWBah1S4cqZOUBBQUAEpLFmYl0yBHhjxZsgeM9h2aqZbzpG2G1lKab7j+MpE9Ox6tSPFsZyMtAM5DOU7beoWMtGLiJrVsAgSSdCEG1xkYUv5kLtabmniXCvRDfL0WwkU0MM3hN8UsDvmjATVKESEQAMItAeNR0AdXakucweihBtGZ7hLLu8TX1p4Y6PCfhx2Yi7f4i/ATRPDBsVPTs0RCy3G7p3U/DFZoVKixl1Yt13d/KdgQqYTUIDr899KY02KCoDQfGezM5eC4Rv0TjELUkwkltuLB/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HM/n+svpzCMCAw7P5nyu2u5TMosY8jl+AuHp9cgJusU=;
 b=kDRl+PVMVCI9IZLObp0s3Nk6U4Sm8UtesNvbI05Z/y5B8EoTg+swgQQ0OfG8ag+bHIm7v5te6t0NowHvFDFQp2wp6gF7ueh39N2TzXVOwtI8VTEphE9JvuOfu+8uOWLVJb/g9349jA0SVW4u2+vLP1wc7ZLjJwFm6450qaiXQ7LNouh+AUGLgpwnxYEF9ul1QmM2k3NS4EOmU8Jn5M2MdE4Qghup3YCbAqFuWCVM7zYHslLu0UhLypUOtt6sjD6ynD6Sn/3z9t8xK1TLK4ywzHKHrEjMLR7X1P/wqaMVj2sXA8rWDORmndYTf0xxpZN32riVkokJBpk9oRK6bxPAdw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HM/n+svpzCMCAw7P5nyu2u5TMosY8jl+AuHp9cgJusU=;
 b=qE3x2k+hr3XM5wOvlKdMvsvJFlPX2sHg9qYjrFQrIH0J1+OUXW9xOo9J2A8qlVnIDYNrVbArSANk74T/X/24CX3USg18lsU+UkN9/9Uu2RKLrgdTGVKlgpFDfaF6NzPllQcqrDtlqoUbNmrFAb2uSChKX+z10D7LtkqUgPVk3fA=
Subject: Re: Backport 6d49fbdeab3e to 4.14
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>
References: <c8af8c5e-46cc-ab85-0364-0840d2309759@citrix.com>
Message-ID: <bc25e94b-a9a8-405e-4cd1-299d60f44fcd@citrix.com>
Date: Thu, 17 Jun 2021 23:48:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c8af8c5e-46cc-ab85-0364-0840d2309759@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0063.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bae62f8f-c4bc-42a5-7b2e-08d931e207a8
X-MS-TrafficTypeDiagnostic: BY5PR03MB4967:
X-Microsoft-Antispam-PRVS: <BY5PR03MB4967F89B0383DE7B61873FDFBA0E9@BY5PR03MB4967.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: bae62f8f-c4bc-42a5-7b2e-08d931e207a8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2021 22:48:31.3556
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vWinva7gnj3CpDD66M6AeysHZM7zMIQPRSem/03oZC5dmOcTqOX2grCnhJp2KzYkTQCiN3xzXrzV+1lu1SJBBwi43eJRqgCghrY6eNNqWTU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4967
X-OriginatorOrg: citrix.com

On 17/06/2021 19:24, Andrew Cooper wrote:
> Hello,
>
> Gitlab CI on 4.14 currently fails, e.g.
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/322762306
>
> The problem is that CentOS 6 (still supported back then) has Python 2.6
> in it, and trips over the bug fixed by c/s 6d49fbdeab3e (4.15).
>
> From IRC,
>
> 20:18 < Diziet> andyhhp: golang is experimental and the impact should be
> mild so yes, but please can you c&p what I write now into an email so we
> have some record when I have forgotten again :-)

And for completeness,
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/322943251
is a CI run with the patch included, green across the board.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:29:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:29:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144191.265453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1SE-0000O1-V0; Thu, 17 Jun 2021 23:29:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144191.265453; Thu, 17 Jun 2021 23:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1SE-0000Nu-Rx; Thu, 17 Jun 2021 23:29:46 +0000
Received: by outflank-mailman (input) for mailman id 144191;
 Thu, 17 Jun 2021 23:29:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1SD-0000No-R9
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:29:46 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2d330fa-39ab-43a1-afab-57eefe836421;
 Thu, 17 Jun 2021 23:29:44 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623972570136919.7959647662097;
 Thu, 17 Jun 2021 16:29:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2d330fa-39ab-43a1-afab-57eefe836421
ARC-Seal: i=1; a=rsa-sha256; t=1623972571; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=QxO0GJY8DRCnDkKxZKYV7xJanLe7P/YBWo4PsycDOYi9kaed3Nsslnz85pNZdTc65lkqTWR321yVRhVugz6T5XDQr8aqoMNS7knmS5wp+0kKonajeIj+YigRMdZl1DHeyFYMOkzmrj2NT+oJHUl788WqPlQAQ+b31ky5NydyyTA=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1623972571; h=Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=WxFwJbkeLx48JJcKKFj8bqNXb8kx5EkWM6kLDvPg1uw=; 
	b=IxDhegwUJgYTV3yqwUkRcu/lKbb75RGd7sHrI0t16hkkX1MXt0ZVdqShVRNpjE2CAHLGGbHO0P9LREYoGOw6UOlo1rfMQJv8o6642GZQr8NmnmHTyHOT71O09pGulmURMWqi2/nXrH4MijYQEi5p12cyWZJ60Mvc6TOjNNR0vSU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1623972571;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:MIME-Version:Content-Transfer-Encoding;
	bh=WxFwJbkeLx48JJcKKFj8bqNXb8kx5EkWM6kLDvPg1uw=;
	b=LeorGAiZqHjv35c7Kl0p23czVYlFvW9FPu1NjweNlyibyNWCHQ2AbNfhb0lBMo6M
	NbUrVWLT6ngvSrlOC5iLDZ93FNwil0Vq7kGpG0jSrRAjN/SFhF8x04M/KZ6jTsigNi7
	9oDFCccVu7HS6XpfzXC31XwQiHLFCTtnUYWJ6rbo=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 0/6] xsm: refactoring xsm hooks
Date: Thu, 17 Jun 2021 19:39:12 -0400
Message-Id: <20210617233918.10095-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Based on feedback from the 2021 Xen Developers Summit, the xsm-roles RFC
patch set is being split into two separate patch sets. This is the first of
the two, and it focuses purely on cleaning up and refactoring the XSM hooks.

This patch set refactors the xsm_ops wrapper hooks to use the alternative_call
infrastructure. It then moves and realigns the headers to remove the pseudo
is/is-not-enabled implementation. The remaining changes clean up and remove
abstractions that are no longer necessary.

Daniel P. Smith (6):
  xsm: refactor xsm_ops handling
  xsm: decouple xsm header inclusion selection
  xsm: enabling xsm to always be included
  xsm: remove xsm_default_t from hook definitions
  xsm: expanding function related macros in dummy.h
  xsm: removing the XSM_ASSERT_ACTION macro

 xen/arch/arm/dm.c                     |   2 +-
 xen/arch/arm/domctl.c                 |   6 +-
 xen/arch/arm/hvm.c                    |   2 +-
 xen/arch/arm/mm.c                     |   2 +-
 xen/arch/arm/platform_hypercall.c     |   2 +-
 xen/arch/x86/cpu/mcheck/mce.c         |   2 +-
 xen/arch/x86/cpu/vpmu.c               |   2 +-
 xen/arch/x86/domctl.c                 |   8 +-
 xen/arch/x86/hvm/dm.c                 |   2 +-
 xen/arch/x86/hvm/hvm.c                |  12 +-
 xen/arch/x86/irq.c                    |   5 +-
 xen/arch/x86/mm.c                     |  20 +-
 xen/arch/x86/mm/mem_paging.c          |   2 +-
 xen/arch/x86/mm/mem_sharing.c         |   9 +-
 xen/arch/x86/mm/p2m.c                 |   2 +-
 xen/arch/x86/mm/paging.c              |   4 +-
 xen/arch/x86/mm/shadow/set.c          |   2 +-
 xen/arch/x86/msi.c                    |   3 +-
 xen/arch/x86/pci.c                    |   2 +-
 xen/arch/x86/physdev.c                |  17 +-
 xen/arch/x86/platform_hypercall.c     |  10 +-
 xen/arch/x86/pv/emul-priv-op.c        |   2 +-
 xen/arch/x86/sysctl.c                 |   4 +-
 xen/common/Kconfig                    |  55 +-
 xen/common/domain.c                   |   4 +-
 xen/common/domctl.c                   |  12 +-
 xen/common/event_channel.c            |  12 +-
 xen/common/grant_table.c              |  16 +-
 xen/common/hypfs.c                    |   2 +-
 xen/common/kernel.c                   |   2 +-
 xen/common/kexec.c                    |   2 +-
 xen/common/mem_access.c               |   2 +-
 xen/common/memory.c                   |  16 +-
 xen/common/monitor.c                  |   2 +-
 xen/common/sched/core.c               |   6 +-
 xen/common/sysctl.c                   |   8 +-
 xen/common/vm_event.c                 |   2 +-
 xen/common/xenoprof.c                 |   2 +-
 xen/drivers/char/console.c            |   2 +-
 xen/drivers/passthrough/device_tree.c |   4 +-
 xen/drivers/passthrough/pci.c         |  12 +-
 xen/include/xen/sched.h               |   2 +-
 xen/include/xsm/dummy.h               | 774 --------------------------
 xen/include/xsm/xsm-core.h            | 236 ++++++++
 xen/include/xsm/xsm.h                 | 626 +++++++--------------
 xen/xsm/Makefile                      |   4 +-
 xen/xsm/dummy.c                       |   7 +-
 xen/xsm/dummy.h                       | 697 +++++++++++++++++++++++
 xen/xsm/flask/flask_op.c              |  21 +-
 xen/xsm/silo.c                        |  18 +-
 xen/xsm/xsm_core.c                    |  54 +-
 51 files changed, 1309 insertions(+), 1413 deletions(-)
 delete mode 100644 xen/include/xsm/dummy.h
 create mode 100644 xen/include/xsm/xsm-core.h
 create mode 100644 xen/xsm/dummy.h

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:30:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:30:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144195.265465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Sh-0001c2-7k; Thu, 17 Jun 2021 23:30:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144195.265465; Thu, 17 Jun 2021 23:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Sh-0001bv-4N; Thu, 17 Jun 2021 23:30:15 +0000
Received: by outflank-mailman (input) for mailman id 144195;
 Thu, 17 Jun 2021 23:30:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1Sg-00010r-0g
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:30:14 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40ef623f-d304-42f1-ba20-cc0930f6336c;
 Thu, 17 Jun 2021 23:30:10 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623972573889956.4417367035688;
 Thu, 17 Jun 2021 16:29:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40ef623f-d304-42f1-ba20-cc0930f6336c
ARC-Seal: i=1; a=rsa-sha256; t=1623972575; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=VDKBwQj1nBzvqtW2U2T2yODFSh19+L7TNYnUcxqns/QuplbzvW7nIDrbYXQgy7M91qcLWPbJk6BEFZIPml78757s0m8aEYgJTT5tuhC32L91UqceBmLI3UJFkXAgIPMQMcnPN8b7EBkMBoYrexquS8zNDwKSt3t/lFlvLzTp9aw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1623972575; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=fWqFbTBqMZZzOoUgYd44Oj5rLniCHoQ0vcCj6pqgPxc=; 
	b=MaY4qcskFZUZlhXJgduggXnSC0ghPNgOGTbIsKKf46uIrBFueWmCl6CYfLB/Vf3lLt+99loQntPcMlgnfvB6u0JhUA6goPnR9oLfd0A9/dj434dsR5VLomAPBPIi34grMh+Zw6kL4RSpPzeNoZUgeQuoneauTf1qgmpg0ujHpBs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1623972575;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=fWqFbTBqMZZzOoUgYd44Oj5rLniCHoQ0vcCj6pqgPxc=;
	b=S3z/HflOTs1iefs2tCSSlgFxak5hHmUP9Y4UI15vvY+pTCWIsOSyQKXnOTzDmpZC
	TaCMr48D4YFMgM2Bc6aDabniZnrJzOVMCUo0PUe6VtscyCNB47bOBXMN0Ys3UU/zULt
	FzUre7lguw/7AIKMOC5I5N45CskbMKOqDnTYDThk=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 1/6] xsm: refactor xsm_ops handling
Date: Thu, 17 Jun 2021 19:39:13 -0400
Message-Id: <20210617233918.10095-2-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

The assignment and setup of the xsm_ops structure were refactored to make it a
one-time assignment. The calls through xsm_ops were refactored to use the
alternative_call framework, reducing the need for retpolines.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/include/xsm/xsm.h    | 206 ++++++++++++++++++++-------------------
 xen/xsm/dummy.c          |   2 -
 xen/xsm/flask/flask_op.c |  21 +---
 xen/xsm/xsm_core.c       |  50 ++++++----
 4 files changed, 138 insertions(+), 141 deletions(-)

diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ad3cddbf7d..86ca045e74 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -15,6 +15,9 @@
 #ifndef __XSM_H__
 #define __XSM_H__
 
+#ifdef CONFIG_XSM
+#include <asm/alternative.h>
+#endif
 #include <xen/sched.h>
 #include <xen/multiboot.h>
 
@@ -191,295 +194,295 @@ struct xsm_operations {
 
 #ifdef CONFIG_XSM
 
-extern struct xsm_operations *xsm_ops;
+extern struct xsm_operations xsm_ops;
 
 #ifndef XSM_NO_WRAPPERS
 
 static inline void xsm_security_domaininfo (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info)
 {
-    xsm_ops->security_domaininfo(d, info);
+    alternative_vcall(xsm_ops.security_domaininfo, d, info);
 }
 
 static inline int xsm_domain_create (xsm_default_t def, struct domain *d, u32 ssidref)
 {
-    return xsm_ops->domain_create(d, ssidref);
+    return alternative_call(xsm_ops.domain_create, d, ssidref);
 }
 
 static inline int xsm_getdomaininfo (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->getdomaininfo(d);
+    return alternative_call(xsm_ops.getdomaininfo, d);
 }
 
 static inline int xsm_domctl_scheduler_op (xsm_default_t def, struct domain *d, int cmd)
 {
-    return xsm_ops->domctl_scheduler_op(d, cmd);
+    return alternative_call(xsm_ops.domctl_scheduler_op, d, cmd);
 }
 
 static inline int xsm_sysctl_scheduler_op (xsm_default_t def, int cmd)
 {
-    return xsm_ops->sysctl_scheduler_op(cmd);
+    return alternative_call(xsm_ops.sysctl_scheduler_op, cmd);
 }
 
 static inline int xsm_set_target (xsm_default_t def, struct domain *d, struct domain *e)
 {
-    return xsm_ops->set_target(d, e);
+    return alternative_call(xsm_ops.set_target, d, e);
 }
 
 static inline int xsm_domctl (xsm_default_t def, struct domain *d, int cmd)
 {
-    return xsm_ops->domctl(d, cmd);
+    return alternative_call(xsm_ops.domctl, d, cmd);
 }
 
 static inline int xsm_sysctl (xsm_default_t def, int cmd)
 {
-    return xsm_ops->sysctl(cmd);
+    return alternative_call(xsm_ops.sysctl, cmd);
 }
 
 static inline int xsm_readconsole (xsm_default_t def, uint32_t clear)
 {
-    return xsm_ops->readconsole(clear);
+    return alternative_call(xsm_ops.readconsole, clear);
 }
 
 static inline int xsm_evtchn_unbound (xsm_default_t def, struct domain *d1, struct evtchn *chn,
                                                                     domid_t id2)
 {
-    return xsm_ops->evtchn_unbound(d1, chn, id2);
+    return alternative_call(xsm_ops.evtchn_unbound, d1, chn, id2);
 }
 
 static inline int xsm_evtchn_interdomain (xsm_default_t def, struct domain *d1,
                 struct evtchn *chan1, struct domain *d2, struct evtchn *chan2)
 {
-    return xsm_ops->evtchn_interdomain(d1, chan1, d2, chan2);
+    return alternative_call(xsm_ops.evtchn_interdomain, d1, chan1, d2, chan2);
 }
 
 static inline void xsm_evtchn_close_post (struct evtchn *chn)
 {
-    xsm_ops->evtchn_close_post(chn);
+    alternative_vcall(xsm_ops.evtchn_close_post, chn);
 }
 
 static inline int xsm_evtchn_send (xsm_default_t def, struct domain *d, struct evtchn *chn)
 {
-    return xsm_ops->evtchn_send(d, chn);
+    return alternative_call(xsm_ops.evtchn_send, d, chn);
 }
 
 static inline int xsm_evtchn_status (xsm_default_t def, struct domain *d, struct evtchn *chn)
 {
-    return xsm_ops->evtchn_status(d, chn);
+    return alternative_call(xsm_ops.evtchn_status, d, chn);
 }
 
 static inline int xsm_evtchn_reset (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->evtchn_reset(d1, d2);
+    return alternative_call(xsm_ops.evtchn_reset, d1, d2);
 }
 
 static inline int xsm_grant_mapref (xsm_default_t def, struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
-    return xsm_ops->grant_mapref(d1, d2, flags);
+    return alternative_call(xsm_ops.grant_mapref, d1, d2, flags);
 }
 
 static inline int xsm_grant_unmapref (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_unmapref(d1, d2);
+    return alternative_call(xsm_ops.grant_unmapref, d1, d2);
 }
 
 static inline int xsm_grant_setup (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_setup(d1, d2);
+    return alternative_call(xsm_ops.grant_setup, d1, d2);
 }
 
 static inline int xsm_grant_transfer (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_transfer(d1, d2);
+    return alternative_call(xsm_ops.grant_transfer, d1, d2);
 }
 
 static inline int xsm_grant_copy (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_copy(d1, d2);
+    return alternative_call(xsm_ops.grant_copy, d1, d2);
 }
 
 static inline int xsm_grant_query_size (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_query_size(d1, d2);
+    return alternative_call(xsm_ops.grant_query_size, d1, d2);
 }
 
 static inline int xsm_alloc_security_domain (struct domain *d)
 {
-    return xsm_ops->alloc_security_domain(d);
+    return alternative_call(xsm_ops.alloc_security_domain, d);
 }
 
 static inline void xsm_free_security_domain (struct domain *d)
 {
-    xsm_ops->free_security_domain(d);
+    alternative_vcall(xsm_ops.free_security_domain, d);
 }
 
 static inline int xsm_alloc_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
-    return xsm_ops->alloc_security_evtchns(chn, nr);
+    return alternative_call(xsm_ops.alloc_security_evtchns, chn, nr);
 }
 
 static inline void xsm_free_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
-    xsm_ops->free_security_evtchns(chn, nr);
+    alternative_vcall(xsm_ops.free_security_evtchns, chn, nr);
 }
 
 static inline char *xsm_show_security_evtchn (struct domain *d, const struct evtchn *chn)
 {
-    return xsm_ops->show_security_evtchn(d, chn);
+    return alternative_call(xsm_ops.show_security_evtchn, d, chn);
 }
 
 static inline int xsm_init_hardware_domain (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->init_hardware_domain(d);
+    return alternative_call(xsm_ops.init_hardware_domain, d);
 }
 
 static inline int xsm_get_pod_target (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->get_pod_target(d);
+    return alternative_call(xsm_ops.get_pod_target, d);
 }
 
 static inline int xsm_set_pod_target (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->set_pod_target(d);
+    return alternative_call(xsm_ops.set_pod_target, d);
 }
 
 static inline int xsm_memory_exchange (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->memory_exchange(d);
+    return alternative_call(xsm_ops.memory_exchange, d);
 }
 
 static inline int xsm_memory_adjust_reservation (xsm_default_t def, struct domain *d1, struct
                                                                     domain *d2)
 {
-    return xsm_ops->memory_adjust_reservation(d1, d2);
+    return alternative_call(xsm_ops.memory_adjust_reservation, d1, d2);
 }
 
 static inline int xsm_memory_stat_reservation (xsm_default_t def, struct domain *d1,
                                                             struct domain *d2)
 {
-    return xsm_ops->memory_stat_reservation(d1, d2);
+    return alternative_call(xsm_ops.memory_stat_reservation, d1, d2);
 }
 
 static inline int xsm_memory_pin_page(xsm_default_t def, struct domain *d1, struct domain *d2,
                                       struct page_info *page)
 {
-    return xsm_ops->memory_pin_page(d1, d2, page);
+    return alternative_call(xsm_ops.memory_pin_page, d1, d2, page);
 }
 
 static inline int xsm_add_to_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->add_to_physmap(d1, d2);
+    return alternative_call(xsm_ops.add_to_physmap, d1, d2);
 }
 
 static inline int xsm_remove_from_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->remove_from_physmap(d1, d2);
+    return alternative_call(xsm_ops.remove_from_physmap, d1, d2);
 }
 
 static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
 {
-    return xsm_ops->map_gmfn_foreign(d, t);
+    return alternative_call(xsm_ops.map_gmfn_foreign, d, t);
 }
 
 static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->claim_pages(d);
+    return alternative_call(xsm_ops.claim_pages, d);
 }
 
 static inline int xsm_console_io (xsm_default_t def, struct domain *d, int cmd)
 {
-    return xsm_ops->console_io(d, cmd);
+    return alternative_call(xsm_ops.console_io, d, cmd);
 }
 
 static inline int xsm_profile (xsm_default_t def, struct domain *d, int op)
 {
-    return xsm_ops->profile(d, op);
+    return alternative_call(xsm_ops.profile, d, op);
 }
 
 static inline int xsm_kexec (xsm_default_t def)
 {
-    return xsm_ops->kexec();
+    return alternative_call(xsm_ops.kexec);
 }
 
 static inline int xsm_schedop_shutdown (xsm_default_t def, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->schedop_shutdown(d1, d2);
+    return alternative_call(xsm_ops.schedop_shutdown, d1, d2);
 }
 
 static inline char *xsm_show_irq_sid (int irq)
 {
-    return xsm_ops->show_irq_sid(irq);
+    return alternative_call(xsm_ops.show_irq_sid, irq);
 }
 
 static inline int xsm_map_domain_pirq (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->map_domain_pirq(d);
+    return alternative_call(xsm_ops.map_domain_pirq, d);
 }
 
 static inline int xsm_map_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
 {
-    return xsm_ops->map_domain_irq(d, irq, data);
+    return alternative_call(xsm_ops.map_domain_irq, d, irq, data);
 }
 
 static inline int xsm_unmap_domain_pirq (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->unmap_domain_pirq(d);
+    return alternative_call(xsm_ops.unmap_domain_pirq, d);
 }
 
 static inline int xsm_unmap_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
 {
-    return xsm_ops->unmap_domain_irq(d, irq, data);
+    return alternative_call(xsm_ops.unmap_domain_irq, d, irq, data);
 }
 
 static inline int xsm_bind_pt_irq(xsm_default_t def, struct domain *d,
                                   struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_ops->bind_pt_irq(d, bind);
+    return alternative_call(xsm_ops.bind_pt_irq, d, bind);
 }
 
 static inline int xsm_unbind_pt_irq(xsm_default_t def, struct domain *d,
                                     struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_ops->unbind_pt_irq(d, bind);
+    return alternative_call(xsm_ops.unbind_pt_irq, d, bind);
 }
 
 static inline int xsm_irq_permission (xsm_default_t def, struct domain *d, int pirq, uint8_t allow)
 {
-    return xsm_ops->irq_permission(d, pirq, allow);
+    return alternative_call(xsm_ops.irq_permission, d, pirq, allow);
 }
 
 static inline int xsm_iomem_permission (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_ops->iomem_permission(d, s, e, allow);
+    return alternative_call(xsm_ops.iomem_permission, d, s, e, allow);
 }
 
 static inline int xsm_iomem_mapping (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_ops->iomem_mapping(d, s, e, allow);
+    return alternative_call(xsm_ops.iomem_mapping, d, s, e, allow);
 }
 
 static inline int xsm_pci_config_permission (xsm_default_t def, struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
+    return alternative_call(xsm_ops.pci_config_permission, d, machine_bdf, start, end, access);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
 static inline int xsm_get_device_group(xsm_default_t def, uint32_t machine_bdf)
 {
-    return xsm_ops->get_device_group(machine_bdf);
+    return alternative_call(xsm_ops.get_device_group, machine_bdf);
 }
 
 static inline int xsm_assign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_ops->assign_device(d, machine_bdf);
+    return alternative_call(xsm_ops.assign_device, d, machine_bdf);
 }
 
 static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_ops->deassign_device(d, machine_bdf);
+    return alternative_call(xsm_ops.deassign_device, d, machine_bdf);
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI) */
 
@@ -487,240 +490,243 @@ static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint3
 static inline int xsm_assign_dtdevice(xsm_default_t def, struct domain *d,
                                       const char *dtpath)
 {
-    return xsm_ops->assign_dtdevice(d, dtpath);
+    return alternative_call(xsm_ops.assign_dtdevice, d, dtpath);
 }
 
 static inline int xsm_deassign_dtdevice(xsm_default_t def, struct domain *d,
                                         const char *dtpath)
 {
-    return xsm_ops->deassign_dtdevice(d, dtpath);
+    return alternative_call(xsm_ops.deassign_dtdevice, d, dtpath);
 }
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
 static inline int xsm_resource_plug_pci (xsm_default_t def, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_plug_pci(machine_bdf);
+    return alternative_call(xsm_ops.resource_plug_pci, machine_bdf);
 }
 
 static inline int xsm_resource_unplug_pci (xsm_default_t def, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_unplug_pci(machine_bdf);
+    return alternative_call(xsm_ops.resource_unplug_pci, machine_bdf);
 }
 
 static inline int xsm_resource_plug_core (xsm_default_t def)
 {
-    return xsm_ops->resource_plug_core();
+    return alternative_call(xsm_ops.resource_plug_core);
 }
 
 static inline int xsm_resource_unplug_core (xsm_default_t def)
 {
-    return xsm_ops->resource_unplug_core();
+    return alternative_call(xsm_ops.resource_unplug_core);
 }
 
 static inline int xsm_resource_setup_pci (xsm_default_t def, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_setup_pci(machine_bdf);
+    return alternative_call(xsm_ops.resource_setup_pci, machine_bdf);
 }
 
 static inline int xsm_resource_setup_gsi (xsm_default_t def, int gsi)
 {
-    return xsm_ops->resource_setup_gsi(gsi);
+    return alternative_call(xsm_ops.resource_setup_gsi, gsi);
 }
 
 static inline int xsm_resource_setup_misc (xsm_default_t def)
 {
-    return xsm_ops->resource_setup_misc();
+    return alternative_call(xsm_ops.resource_setup_misc);
 }
 
 static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
 {
-    return xsm_ops->page_offline(cmd);
+    return alternative_call(xsm_ops.page_offline, cmd);
 }
 
 static inline int xsm_hypfs_op(xsm_default_t def)
 {
-    return xsm_ops->hypfs_op();
+    return alternative_call(xsm_ops.hypfs_op);
 }
 
 static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
-    return xsm_ops->do_xsm_op(op);
+    /* "op" is a struct passed by value, which alternative_call does not support. */
+    return xsm_ops.do_xsm_op(op);
 }
 
 #ifdef CONFIG_COMPAT
 static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
-    return xsm_ops->do_compat_op(op);
+    /* "op" is a struct passed by value, which alternative_call does not support. */
+    return xsm_ops.do_compat_op(op);
 }
 #endif
 
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
-    return xsm_ops->hvm_param(d, op);
+    return alternative_call(xsm_ops.hvm_param, d, op);
 }
 
 static inline int xsm_hvm_control(xsm_default_t def, struct domain *d, unsigned long op)
 {
-    return xsm_ops->hvm_control(d, op);
+    return alternative_call(xsm_ops.hvm_control, d, op);
 }
 
 static inline int xsm_hvm_param_altp2mhvm (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->hvm_param_altp2mhvm(d);
+    return alternative_call(xsm_ops.hvm_param_altp2mhvm, d);
 }
 
 static inline int xsm_hvm_altp2mhvm_op (xsm_default_t def, struct domain *d, uint64_t mode, uint32_t op)
 {
-    return xsm_ops->hvm_altp2mhvm_op(d, mode, op);
+    return alternative_call(xsm_ops.hvm_altp2mhvm_op, d, mode, op);
 }
 
 static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->get_vnumainfo(d);
+    return alternative_call(xsm_ops.get_vnumainfo, d);
 }
 
 static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
-    return xsm_ops->vm_event_control(d, mode, op);
+    return alternative_call(xsm_ops.vm_event_control, d, mode, op);
 }
 
 #ifdef CONFIG_MEM_ACCESS
 static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->mem_access(d);
+    return alternative_call(xsm_ops.mem_access, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_PAGING
 static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->mem_paging(d);
+    return alternative_call(xsm_ops.mem_paging, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_SHARING
 static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->mem_sharing(d);
+    return alternative_call(xsm_ops.mem_sharing, d);
 }
 #endif
 
 static inline int xsm_platform_op (xsm_default_t def, uint32_t op)
 {
-    return xsm_ops->platform_op(op);
+    return alternative_call(xsm_ops.platform_op, op);
 }
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
-    return xsm_ops->do_mca();
+    return alternative_call(xsm_ops.do_mca);
 }
 
 static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint32_t op)
 {
-    return xsm_ops->shadow_control(d, op);
+    return alternative_call(xsm_ops.shadow_control, d, op);
 }
 
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
-    return xsm_ops->mem_sharing_op(d, cd, op);
+    return alternative_call(xsm_ops.mem_sharing_op, d, cd, op);
 }
 
 static inline int xsm_apic (xsm_default_t def, struct domain *d, int cmd)
 {
-    return xsm_ops->apic(d, cmd);
+    return alternative_call(xsm_ops.apic, d, cmd);
 }
 
 static inline int xsm_memtype (xsm_default_t def, uint32_t access)
 {
-    return xsm_ops->memtype(access);
+    return alternative_call(xsm_ops.memtype, access);
 }
 
 static inline int xsm_machine_memory_map(xsm_default_t def)
 {
-    return xsm_ops->machine_memory_map();
+    return alternative_call(xsm_ops.machine_memory_map);
 }
 
 static inline int xsm_domain_memory_map(xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->domain_memory_map(d);
+    return alternative_call(xsm_ops.domain_memory_map, d);
 }
 
 static inline int xsm_mmu_update (xsm_default_t def, struct domain *d, struct domain *t,
                                   struct domain *f, uint32_t flags)
 {
-    return xsm_ops->mmu_update(d, t, f, flags);
+    return alternative_call(xsm_ops.mmu_update, d, t, f, flags);
 }
 
 static inline int xsm_mmuext_op (xsm_default_t def, struct domain *d, struct domain *f)
 {
-    return xsm_ops->mmuext_op(d, f);
+    return alternative_call(xsm_ops.mmuext_op, d, f);
 }
 
 static inline int xsm_update_va_mapping(xsm_default_t def, struct domain *d, struct domain *f,
                                                             l1_pgentry_t pte)
 {
-    return xsm_ops->update_va_mapping(d, f, pte);
+    /* pte is a struct passed by value, which alternative_call does not support. */
+    return xsm_ops.update_va_mapping(d, f, pte);
 }
 
 static inline int xsm_priv_mapping(xsm_default_t def, struct domain *d, struct domain *t)
 {
-    return xsm_ops->priv_mapping(d, t);
+    return alternative_call(xsm_ops.priv_mapping, d, t);
 }
 
 static inline int xsm_ioport_permission (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_ops->ioport_permission(d, s, e, allow);
+    return alternative_call(xsm_ops.ioport_permission, d, s, e, allow);
 }
 
 static inline int xsm_ioport_mapping (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_ops->ioport_mapping(d, s, e, allow);
+    return alternative_call(xsm_ops.ioport_mapping, d, s, e, allow);
 }
 
 static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int op)
 {
-    return xsm_ops->pmu_op(d, op);
+    return alternative_call(xsm_ops.pmu_op, d, op);
 }
 
 #endif /* CONFIG_X86 */
 
 static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->dm_op(d);
+    return alternative_call(xsm_ops.dm_op, d);
 }
 
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
 {
-    return xsm_ops->xen_version(op);
+    return alternative_call(xsm_ops.xen_version, op);
 }
 
 static inline int xsm_domain_resource_map(xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->domain_resource_map(d);
+    return alternative_call(xsm_ops.domain_resource_map, d);
 }
 
 #ifdef CONFIG_ARGO
 static inline int xsm_argo_enable(const struct domain *d)
 {
-    return xsm_ops->argo_enable(d);
+    return alternative_call(xsm_ops.argo_enable, d);
 }
 
 static inline int xsm_argo_register_single_source(const struct domain *d,
                                                   const struct domain *t)
 {
-    return xsm_ops->argo_register_single_source(d, t);
+    return alternative_call(xsm_ops.argo_register_single_source, d, t);
 }
 
 static inline int xsm_argo_register_any_source(const struct domain *d)
 {
-    return xsm_ops->argo_register_any_source(d);
+    return alternative_call(xsm_ops.argo_register_any_source, d);
 }
 
 static inline int xsm_argo_send(const struct domain *d, const struct domain *t)
 {
-    return xsm_ops->argo_send(d, t);
+    return alternative_call(xsm_ops.argo_send, d, t);
 }
 
 #endif /* CONFIG_ARGO */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index de44b10130..066694763a 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -13,8 +13,6 @@
 #define XSM_NO_WRAPPERS
 #include <xsm/dummy.h>
 
-struct xsm_operations dummy_xsm_ops;
-
 #define set_to_dummy_if_null(ops, function)                            \
     do {                                                               \
         if ( !ops->function )                                          \
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index 01e52138a1..df9fcc1d6d 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -225,26 +225,7 @@ static int flask_security_sid(struct xen_flask_sid_context *arg)
 
 static int flask_disable(void)
 {
-    static int flask_disabled = 0;
-
-    if ( ss_initialized )
-    {
-        /* Not permitted after initial policy load. */
-        return -EINVAL;
-    }
-
-    if ( flask_disabled )
-    {
-        /* Only do this once. */
-        return -EINVAL;
-    }
-
-    printk("Flask:  Disabled at runtime.\n");
-
-    flask_disabled = 1;
-
-    /* Reset xsm_ops to the original module. */
-    xsm_ops = &dummy_xsm_ops;
+    printk("Flask:  Disabling is not supported.\n");
 
     return 0;
 }
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 5eab21e1b1..acc1af7166 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -28,9 +28,17 @@
 #include <asm/setup.h>
 #endif
 
-#define XSM_FRAMEWORK_VERSION    "1.0.0"
+#define XSM_FRAMEWORK_VERSION    "1.0.1"
 
-struct xsm_operations *xsm_ops;
+struct xsm_operations xsm_ops;
+
+enum xsm_ops_state {
+    XSM_OPS_UNREGISTERED,
+    XSM_OPS_REG_FAILED,
+    XSM_OPS_REGISTERED,
+};
+
+static enum xsm_ops_state xsm_ops_registered = XSM_OPS_UNREGISTERED;
 
 enum xsm_bootparam {
     XSM_BOOTPARAM_DUMMY,
@@ -68,15 +76,6 @@ static int __init parse_xsm_param(const char *s)
 }
 custom_param("xsm", parse_xsm_param);
 
-static inline int verify(struct xsm_operations *ops)
-{
-    /* verify the security_operations structure exists */
-    if ( !ops )
-        return -EINVAL;
-    xsm_fixup_ops(ops);
-    return 0;
-}
-
 static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
 {
 #ifdef CONFIG_XSM_FLASK_POLICY
@@ -87,17 +86,22 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
     }
 #endif
 
-    if ( verify(&dummy_xsm_ops) )
+    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
     {
-        printk(XENLOG_ERR "Could not verify dummy_xsm_ops structure\n");
+        printk(XENLOG_ERR
+               "Could not init XSM, xsm_ops registration already attempted\n");
         return -EIO;
     }
 
-    xsm_ops = &dummy_xsm_ops;
+    /* Install the dummy ops as the default to ensure ops are defined
+     * even if the requested policy fails to initialize.
+     */
+    xsm_fixup_ops(&xsm_ops);
 
     switch ( xsm_bootparam )
     {
     case XSM_BOOTPARAM_DUMMY:
+        xsm_ops_registered = XSM_OPS_REGISTERED;
         break;
 
     case XSM_BOOTPARAM_FLASK:
@@ -113,6 +117,9 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
         break;
     }
 
+    if ( xsm_ops_registered != XSM_OPS_REGISTERED )
+        xsm_ops_registered = XSM_OPS_REG_FAILED;
+
     return 0;
 }
 
@@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
 
 int __init register_xsm(struct xsm_operations *ops)
 {
-    if ( verify(ops) )
+    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
+        return -EAGAIN;
+
+    if ( !ops )
     {
-        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
+        xsm_ops_registered = XSM_OPS_REG_FAILED;
+        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
         return -EINVAL;
     }
 
-    if ( xsm_ops != &dummy_xsm_ops )
-        return -EAGAIN;
+    /* Use the dummy ops to fill in any hooks left unset. */
+    xsm_fixup_ops(ops);
 
-    xsm_ops = ops;
+    xsm_ops = *ops;
+    xsm_ops_registered = XSM_OPS_REGISTERED;
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:30:49 2021
Date: Thu, 17 Jun 2021 16:30:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
In-Reply-To: <20210617062635.1660944-2-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-2-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
>  1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 52e2ac526757..47bb2a766798 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
>  	memset(vaddr, 0, bytes);
>  }
>  
> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> +				    unsigned long nslabs, bool late_alloc)
>  {
> +	void *vaddr = phys_to_virt(start);
>  	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> +
> +	mem->nslabs = nslabs;
> +	mem->start = start;
> +	mem->end = mem->start + bytes;
> +	mem->index = 0;
> +	mem->late_alloc = late_alloc;
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +	memset(vaddr, 0, bytes);
> +}
> +
> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +{
>  	struct io_tlb_mem *mem;
>  	size_t alloc_size;
>  
> @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  	if (!mem)
>  		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>  		      __func__, alloc_size, PAGE_SIZE);
> -	mem->nslabs = nslabs;
> -	mem->start = __pa(tlb);
> -	mem->end = mem->start + bytes;
> -	mem->index = 0;
> -	spin_lock_init(&mem->lock);
> -	for (i = 0; i < mem->nslabs; i++) {
> -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> -		mem->slots[i].alloc_size = 0;
> -	}
> +
> +	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>  
>  	io_tlb_default_mem = mem;
>  	if (verbose)
> @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
>  int
>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  {
> -	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>  	struct io_tlb_mem *mem;
> +	unsigned long bytes = nslabs << IO_TLB_SHIFT;
>  
>  	if (swiotlb_force == SWIOTLB_NO_FORCE)
>  		return 0;
> @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  	if (!mem)
>  		return -ENOMEM;
>  
> -	mem->nslabs = nslabs;
> -	mem->start = virt_to_phys(tlb);
> -	mem->end = mem->start + bytes;
> -	mem->index = 0;
> -	mem->late_alloc = 1;
> -	spin_lock_init(&mem->lock);
> -	for (i = 0; i < mem->nslabs; i++) {
> -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> -		mem->slots[i].alloc_size = 0;
> -	}
> -
> +	memset(mem, 0, sizeof(*mem));
> +	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>  	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
> -	memset(tlb, 0, bytes);
 
This is good for swiotlb_late_init_with_tbl. However, I have just noticed
that mem can also be allocated from swiotlb_init_with_tbl, in which case
the zeroing is missing. I think we need another memset in
swiotlb_init_with_tbl as well. Or maybe it would be better to have a
single memset at the beginning of swiotlb_init_io_tlb_mem instead. Up to
you.


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:30:56 2021
Date: Thu, 17 Jun 2021 16:30:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb
 pool used
In-Reply-To: <20210617062635.1660944-4-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171444510.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-4-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Always have the pointer to the swiotlb pool used in struct device. This
> could help simplify the code for other pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  drivers/base/core.c    | 4 ++++
>  include/linux/device.h | 4 ++++
>  kernel/dma/swiotlb.c   | 8 ++++----
>  3 files changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index f29839382f81..cb3123e3954d 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -27,6 +27,7 @@
>  #include <linux/netdevice.h>
>  #include <linux/sched/signal.h>
>  #include <linux/sched/mm.h>
> +#include <linux/swiotlb.h>
>  #include <linux/sysfs.h>
>  #include <linux/dma-map-ops.h> /* for dma_default_coherent */
>  
> @@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
>      defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
>  	dev->dma_coherent = dma_default_coherent;
>  #endif
> +#ifdef CONFIG_SWIOTLB
> +	dev->dma_io_tlb_mem = io_tlb_default_mem;
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(device_initialize);
>  
> diff --git a/include/linux/device.h b/include/linux/device.h
> index ba660731bd25..240d652a0696 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -416,6 +416,7 @@ struct dev_links_info {
>   * @dma_pools:	Dma pools (if dma'ble device).
>   * @dma_mem:	Internal for coherent mem override.
>   * @cma_area:	Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
>   * @archdata:	For arch-specific additions.
>   * @of_node:	Associated device tree node.
>   * @fwnode:	Associated device node supplied by platform firmware.
> @@ -518,6 +519,9 @@ struct device {
>  #ifdef CONFIG_DMA_CMA
>  	struct cma *cma_area;		/* contiguous memory area for dma
>  					   allocations */
> +#endif
> +#ifdef CONFIG_SWIOTLB
> +	struct io_tlb_mem *dma_io_tlb_mem;
>  #endif
>  	/* arch specific additions */
>  	struct dev_archdata	archdata;
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 2dba659a1e73..de79e9437030 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -340,7 +340,7 @@ void __init swiotlb_exit(void)
>  static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
>  			   enum dma_data_direction dir)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
>  	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
>  	phys_addr_t orig_addr = mem->slots[index].orig_addr;
> @@ -431,7 +431,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
>  static int find_slots(struct device *dev, phys_addr_t orig_addr,
>  		size_t alloc_size)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned long boundary_mask = dma_get_seg_boundary(dev);
>  	dma_addr_t tbl_dma_addr =
>  		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
> @@ -508,7 +508,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  		size_t mapping_size, size_t alloc_size,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int i;
>  	int index;
> @@ -559,7 +559,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>  			      size_t mapping_size, enum dma_data_direction dir,
>  			      unsigned long attrs)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
>  	unsigned long flags;
>  	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
>  	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:31:00 2021
Date: Thu, 17 Jun 2021 16:30:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 04/12] swiotlb: Update is_swiotlb_buffer to add a
 struct device argument
In-Reply-To: <20210617062635.1660944-5-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171445110.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-5-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Update is_swiotlb_buffer to add a struct device argument. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  drivers/iommu/dma-iommu.c | 12 ++++++------
>  drivers/xen/swiotlb-xen.c |  2 +-
>  include/linux/swiotlb.h   |  7 ++++---
>  kernel/dma/direct.c       |  6 +++---
>  kernel/dma/direct.h       |  6 +++---
>  5 files changed, 17 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 3087d9fa6065..10997ef541f8 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
>  
>  	__iommu_dma_unmap(dev, dma_addr, size);
>  
> -	if (unlikely(is_swiotlb_buffer(phys)))
> +	if (unlikely(is_swiotlb_buffer(dev, phys)))
>  		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
>  }
>  
> @@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
>  	}
>  
>  	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
> -	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
> +	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
>  		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
>  	return iova;
>  }
> @@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_cpu(phys, size, dir);
>  
> -	if (is_swiotlb_buffer(phys))
> +	if (is_swiotlb_buffer(dev, phys))
>  		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
>  }
>  
> @@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
>  		return;
>  
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> -	if (is_swiotlb_buffer(phys))
> +	if (is_swiotlb_buffer(dev, phys))
>  		swiotlb_sync_single_for_device(dev, phys, size, dir);
>  
>  	if (!dev_is_dma_coherent(dev))
> @@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>  		if (!dev_is_dma_coherent(dev))
>  			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
>  
> -		if (is_swiotlb_buffer(sg_phys(sg)))
> +		if (is_swiotlb_buffer(dev, sg_phys(sg)))
>  			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
>  						    sg->length, dir);
>  	}
> @@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>  		return;
>  
>  	for_each_sg(sgl, sg, nelems, i) {
> -		if (is_swiotlb_buffer(sg_phys(sg)))
> +		if (is_swiotlb_buffer(dev, sg_phys(sg)))
>  			swiotlb_sync_single_for_device(dev, sg_phys(sg),
>  						       sg->length, dir);
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 4c89afc0df62..0c6ed09f8513 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  	 * in our domain. Therefore _only_ check address within our domain.
>  	 */
>  	if (pfn_valid(PFN_DOWN(paddr)))
> -		return is_swiotlb_buffer(paddr);
> +		return is_swiotlb_buffer(dev, paddr);
>  	return 0;
>  }
>  
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 216854a5e513..d1f3d95881cd 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -2,6 +2,7 @@
>  #ifndef __LINUX_SWIOTLB_H
>  #define __LINUX_SWIOTLB_H
>  
> +#include <linux/device.h>
>  #include <linux/dma-direction.h>
>  #include <linux/init.h>
>  #include <linux/types.h>
> @@ -101,9 +102,9 @@ struct io_tlb_mem {
>  };
>  extern struct io_tlb_mem *io_tlb_default_mem;
>  
> -static inline bool is_swiotlb_buffer(phys_addr_t paddr)
> +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
> -	struct io_tlb_mem *mem = io_tlb_default_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  
>  	return mem && paddr >= mem->start && paddr < mem->end;
>  }
> @@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
>  void __init swiotlb_adjust_size(unsigned long size);
>  #else
>  #define swiotlb_force SWIOTLB_NO_FORCE
> -static inline bool is_swiotlb_buffer(phys_addr_t paddr)
> +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index f737e3347059..84c9feb5474a 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
>  	for_each_sg(sgl, sg, nents, i) {
>  		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
>  
> -		if (unlikely(is_swiotlb_buffer(paddr)))
> +		if (unlikely(is_swiotlb_buffer(dev, paddr)))
>  			swiotlb_sync_single_for_device(dev, paddr, sg->length,
>  						       dir);
>  
> @@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
>  		if (!dev_is_dma_coherent(dev))
>  			arch_sync_dma_for_cpu(paddr, sg->length, dir);
>  
> -		if (unlikely(is_swiotlb_buffer(paddr)))
> +		if (unlikely(is_swiotlb_buffer(dev, paddr)))
>  			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
>  						    dir);
>  
> @@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
>  {
>  	return !dev_is_dma_coherent(dev) ||
> -		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
> +	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
>  }
>  
>  /**
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 50afc05b6f1d..13e9e7158d94 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
>  {
>  	phys_addr_t paddr = dma_to_phys(dev, addr);
>  
> -	if (unlikely(is_swiotlb_buffer(paddr)))
> +	if (unlikely(is_swiotlb_buffer(dev, paddr)))
>  		swiotlb_sync_single_for_device(dev, paddr, size, dir);
>  
>  	if (!dev_is_dma_coherent(dev))
> @@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
>  		arch_sync_dma_for_cpu_all();
>  	}
>  
> -	if (unlikely(is_swiotlb_buffer(paddr)))
> +	if (unlikely(is_swiotlb_buffer(dev, paddr)))
>  		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
>  
>  	if (dir == DMA_FROM_DEVICE)
> @@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>  		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
>  
> -	if (unlikely(is_swiotlb_buffer(phys)))
> +	if (unlikely(is_swiotlb_buffer(dev, phys)))
>  		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
>  }
>  #endif /* _KERNEL_DMA_DIRECT_H */
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:31:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:31:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144207.265508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1TZ-0003Ta-JO; Thu, 17 Jun 2021 23:31:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144207.265508; Thu, 17 Jun 2021 23:31:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1TZ-0003TT-Fd; Thu, 17 Jun 2021 23:31:09 +0000
Received: by outflank-mailman (input) for mailman id 144207;
 Thu, 17 Jun 2021 23:31:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lu1TY-0002s9-Lk
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:31:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc0fa325-ec1b-47e9-b6a0-80767d6d58f2;
 Thu, 17 Jun 2021 23:31:03 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id C1A5961249;
 Thu, 17 Jun 2021 23:31:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc0fa325-ec1b-47e9-b6a0-80767d6d58f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623972662;
	bh=jaTzVmgcSfCX/j7A0Q3wWD+e8d+fiEhtBDeTOvUECQg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Vu+FVgRFo8gjxJSbcRPtU5hVDRrL1t6lz+O6xlDZMyaHlVsd0LvbSMCvtJgk20TJP
	 PlQx+1Ru0/4CFPzGCZyS05uIuNhxW6cNPonigwh6cnA9SwWSLRdrB+T/t4Fqu6a6wM
	 BMLE4koVKY0dZPAIyMP/wWHDeP1vXEKxcZWyMm6+3nM5SOR97ZzKQVFtQxUYkQFMiH
	 bijXvEkfzsyMcHYI3Z9pagC6MT2NBZr9zSAstJZean3wmyt3vZHquCRLswFSMjfh1+
	 dUjtyU5okd2qgQNfKofiP6hT4HJPkGwlQ/ddRaj111+1534wu5MlmE9XEwlp1tycWR
	 dWkN61b6c9C2A==
Date: Thu, 17 Jun 2021 16:30:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 05/12] swiotlb: Update is_swiotlb_active to add a
 struct device argument
In-Reply-To: <20210617062635.1660944-6-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171448050.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-6-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Update is_swiotlb_active to add a struct device argument. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
>  drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
>  drivers/pci/xen-pcifront.c                   | 2 +-
>  include/linux/swiotlb.h                      | 4 ++--
>  kernel/dma/direct.c                          | 2 +-
>  kernel/dma/swiotlb.c                         | 4 ++--
>  6 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index a9d65fc8aa0e..4b7afa0fc85d 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>  
>  	max_order = MAX_ORDER;
>  #ifdef CONFIG_SWIOTLB
> -	if (is_swiotlb_active()) {
> +	if (is_swiotlb_active(obj->base.dev->dev)) {
>  		unsigned int max_segment;
>  
>  		max_segment = swiotlb_max_segment();
> diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> index 9662522aa066..be15bfd9e0ee 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> @@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>  	}
>  
>  #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active();
> +	need_swiotlb = is_swiotlb_active(dev->dev);
>  #endif
>  
>  	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index b7a8f3a1921f..0d56985bfe81 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
>  
>  	spin_unlock(&pcifront_dev_lock);
>  
> -	if (!err && !is_swiotlb_active()) {
> +	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
>  		err = pci_xen_swiotlb_init_late();
>  		if (err)
>  			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index d1f3d95881cd..dd1c30a83058 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> -bool is_swiotlb_active(void);
> +bool is_swiotlb_active(struct device *dev);
>  void __init swiotlb_adjust_size(unsigned long size);
>  #else
>  #define swiotlb_force SWIOTLB_NO_FORCE
> @@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
>  	return SIZE_MAX;
>  }
>  
> -static inline bool is_swiotlb_active(void)
> +static inline bool is_swiotlb_active(struct device *dev)
>  {
>  	return false;
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 84c9feb5474a..7a88c34d0867 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
> -	if (is_swiotlb_active() &&
> +	if (is_swiotlb_active(dev) &&
>  	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index de79e9437030..409694d7a8ad 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -664,9 +664,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
>  	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
>  }
>  
> -bool is_swiotlb_active(void)
> +bool is_swiotlb_active(struct device *dev)
>  {
> -	return io_tlb_default_mem != NULL;
> +	return dev->dma_io_tlb_mem != NULL;
>  }
>  EXPORT_SYMBOL_GPL(is_swiotlb_active);
>  
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:31:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144209.265520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Tf-0003z6-0f; Thu, 17 Jun 2021 23:31:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144209.265520; Thu, 17 Jun 2021 23:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Te-0003yx-Sw; Thu, 17 Jun 2021 23:31:14 +0000
Received: by outflank-mailman (input) for mailman id 144209;
 Thu, 17 Jun 2021 23:31:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lu1Td-0003MJ-BR
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:31:13 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a033eb9-a3f1-4165-97a7-302543cea2b8;
 Thu, 17 Jun 2021 23:31:12 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AEEAC6117A;
 Thu, 17 Jun 2021 23:31:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a033eb9-a3f1-4165-97a7-302543cea2b8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623972672;
	bh=OQTvXWc5KTHXd26e21lniSV4nLp1qu2sC2xgNiM6A40=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=igsWuyQPQeQ/DRXp976arBlh2Cd5oMHU2rmslY5U2yOFBKGWeEuqoDHh/Bj3mGHkd
	 bLMM1Kw2CstAytFNmlUOY8WTs+QEtmm/R9UpxalWSGPL6zjJqu+ywf2zRKIqqo2Vn2
	 ska/hVnarr3cmbXV1zh/4q6O7Ya6EcMANA1sH0BM5teIhgKxWe/aPUjXDGMJHDQcyG
	 02ttdVu/nqV8fLZFut1pkllMnC/G+MCX8oF5zPpY6lMJCM4R+UgwgZXWkVN0xhuJAr
	 UeEfhIO3NooiVZttZkgEwUTjN1Qplp4be+9ogZRntMC6T8c0ripHJgEPpcV4OFRjHa
	 ks0He4jC+xUpQ==
Date: Thu, 17 Jun 2021 16:31:10 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 09/12] swiotlb: Add restricted DMA alloc/free
 support
In-Reply-To: <20210617062635.1660944-10-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171448490.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-10-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to
> support memory allocation from the restricted DMA pool.
> 
> The restricted DMA pool is preferred if available.
> 
> Note that since coherent allocation needs remapping, one must set up
> another device coherent pool via shared-dma-pool and use
> dma_alloc_from_dev_coherent instead for atomic coherent allocation.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  include/linux/swiotlb.h | 26 ++++++++++++++++++++++
>  kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
>  kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
>  3 files changed, 99 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 8d8855c77d9a..a73fad460162 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
>   * @force_bounce: %true if swiotlb bouncing is forced
> + * @for_alloc:  %true if the pool is used for memory allocation
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -96,6 +97,7 @@ struct io_tlb_mem {
>  	struct dentry *debugfs;
>  	bool late_alloc;
>  	bool force_bounce;
> +	bool for_alloc;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -156,4 +158,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
>  extern void swiotlb_print_info(void);
>  extern void swiotlb_set_max_segment(unsigned int);
>  
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size);
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size);
> +
> +static inline bool is_swiotlb_for_alloc(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->for_alloc;
> +}
> +#else
> +static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +	return NULL;
> +}
> +static inline bool swiotlb_free(struct device *dev, struct page *page,
> +				size_t size)
> +{
> +	return false;
> +}
> +static inline bool is_swiotlb_for_alloc(struct device *dev)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +
>  #endif /* __LINUX_SWIOTLB_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index a92465b4eb12..2de33e5d302b 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>  		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
>  }
>  
> +static void __dma_direct_free_pages(struct device *dev, struct page *page,
> +				    size_t size)
> +{
> +	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> +	    swiotlb_free(dev, page, size))
> +		return;
> +	dma_free_contiguous(dev, page, size);
> +}
> +
>  static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>  		gfp_t gfp)
>  {
> @@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>  
>  	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
>  					   &phys_limit);
> +	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> +	    is_swiotlb_for_alloc(dev)) {
> +		page = swiotlb_alloc(dev, size);
> +		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> +			__dma_direct_free_pages(dev, page, size);
> +			return NULL;
> +		}
> +		return page;
> +	}
> +
>  	page = dma_alloc_contiguous(dev, size, gfp);
>  	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
>  		dma_free_contiguous(dev, page, size);
> @@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  		gfp |= __GFP_NOWARN;
>  
>  	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev)) {
> +	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
>  		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
>  		if (!page)
>  			return NULL;
> @@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  	}
>  
>  	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -	    !dev_is_dma_coherent(dev))
> +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +	    !is_swiotlb_for_alloc(dev))
>  		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
>  
>  	/*
>  	 * Remapping or decrypting memory may block. If either is required and
>  	 * we can't block, allocate the memory from the atomic pools.
> +	 * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
> +	 * set up another device coherent pool by shared-dma-pool and use
> +	 * dma_alloc_from_dev_coherent instead.
>  	 */
>  	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
>  	    !gfpflags_allow_blocking(gfp) &&
>  	    (force_dma_unencrypted(dev) ||
> -	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> +	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> +	      !dev_is_dma_coherent(dev))) &&
> +	    !is_swiotlb_for_alloc(dev))
>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>  
>  	/* we always manually zero the memory once we are done */
> @@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  			return NULL;
>  	}
>  out_free_pages:
> -	dma_free_contiguous(dev, page, size);
> +	__dma_direct_free_pages(dev, page, size);
>  	return NULL;
>  }
>  
> @@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
>  	unsigned int page_order = get_order(size);
>  
>  	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev)) {
> +	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
>  		/* cpu_addr is a struct page cookie, not a kernel address */
>  		dma_free_contiguous(dev, cpu_addr, size);
>  		return;
>  	}
>  
>  	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -	    !dev_is_dma_coherent(dev)) {
> +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +	    !is_swiotlb_for_alloc(dev)) {
>  		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
>  		return;
>  	}
> @@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
>  	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>  		arch_dma_clear_uncached(cpu_addr, size);
>  
> -	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
> +	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
>  }
>  
>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> @@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>  	void *ret;
>  
>  	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> -	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> +	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> +	    !is_swiotlb_for_alloc(dev))
>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>  
>  	page = __dma_direct_alloc_pages(dev, size, gfp);
> @@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>  	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
>  	return page;
>  out_free_pages:
> -	dma_free_contiguous(dev, page, size);
> +	__dma_direct_free_pages(dev, page, size);
>  	return NULL;
>  }
>  
> @@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
>  	if (force_dma_unencrypted(dev))
>  		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
>  
> -	dma_free_contiguous(dev, page, size);
> +	__dma_direct_free_pages(dev, page, size);
>  }
>  
>  #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index ff09341bb9f5..6499cfbfe95f 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -463,8 +463,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
>  
>  	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
>  	do {
> -		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> -		    (orig_addr & iotlb_align_mask)) {
> +		if (orig_addr &&
> +		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> +			    (orig_addr & iotlb_align_mask)) {
>  			index = wrap_index(mem, index + 1);
>  			continue;
>  		}
> @@ -703,3 +704,36 @@ static int __init swiotlb_create_default_debugfs(void)
>  late_initcall(swiotlb_create_default_debugfs);
>  
>  #endif
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +	phys_addr_t tlb_addr;
> +	int index;
> +
> +	if (!mem)
> +		return NULL;
> +
> +	index = swiotlb_find_slots(dev, 0, size);
> +	if (index == -1)
> +		return NULL;
> +
> +	tlb_addr = slot_addr(mem->start, index);
> +
> +	return pfn_to_page(PFN_DOWN(tlb_addr));
> +}
> +
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size)
> +{
> +	phys_addr_t tlb_addr = page_to_phys(page);
> +
> +	if (!is_swiotlb_buffer(dev, tlb_addr))
> +		return false;
> +
> +	swiotlb_release_slots(dev, tlb_addr);
> +
> +	return true;
> +}
> +
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 

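The patch quoted above adds swiotlb_alloc()/swiotlb_free(), which satisfy allocations straight out of a restricted pool by claiming a run of contiguous bounce-buffer slots and releasing that run on free. A minimal user-space sketch of that find-slots/release-slots pattern follows; all names, sizes, and the run-length bookkeeping are hypothetical stand-ins, not the kernel's actual io_tlb_mem implementation:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical fixed pool divided into equal slots, loosely mirroring
 * the slot-based allocation the patch builds on swiotlb_find_slots(). */
#define SLOT_SIZE 2048
#define NR_SLOTS  16

static unsigned char pool[NR_SLOTS * SLOT_SIZE];
static int slot_used[NR_SLOTS];   /* 0 = free; else length of the run */

/* Return the index of the first run of `nslots` free slots, or -1.
 * Each claimed slot records the run length so free can undo it. */
static int find_slots(int nslots)
{
    for (int i = 0; i + nslots <= NR_SLOTS; i++) {
        int j;

        for (j = 0; j < nslots && !slot_used[i + j]; j++)
            ;
        if (j == nslots) {
            for (j = 0; j < nslots; j++)
                slot_used[i + j] = nslots;
            return i;
        }
    }
    return -1;
}

static void *pool_alloc(size_t size)
{
    int nslots = (int)((size + SLOT_SIZE - 1) / SLOT_SIZE);
    int index = find_slots(nslots);

    return index < 0 ? NULL : pool + (size_t)index * SLOT_SIZE;
}

static void pool_free(void *addr)
{
    int index = (int)(((unsigned char *)addr - pool) / SLOT_SIZE);
    int nslots = slot_used[index];

    /* Release exactly the run that was claimed. */
    memset(&slot_used[index], 0, (size_t)nslots * sizeof(slot_used[0]));
}
```

The real swiotlb_find_slots() additionally honours alignment masks, wraps its search from a saved index, and takes the pool lock; the sketch keeps only the contiguous-run search and the bookkeeping needed to free a run.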

From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:31:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144212.265530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Tj-0004Sk-A4; Thu, 17 Jun 2021 23:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144212.265530; Thu, 17 Jun 2021 23:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Tj-0004SV-6V; Thu, 17 Jun 2021 23:31:19 +0000
Received: by outflank-mailman (input) for mailman id 144212;
 Thu, 17 Jun 2021 23:31:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uYqS=LL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lu1Ti-0002s9-M7
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:31:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af42067d-bc38-4d56-88cf-a246319c6201;
 Thu, 17 Jun 2021 23:31:08 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D38066128B;
 Thu, 17 Jun 2021 23:31:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af42067d-bc38-4d56-88cf-a246319c6201
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623972667;
	bh=fWthFoIm/YEa+gCtln1XDDJSXfB5s/xQ0jk7l7+p7+0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QWqo3DqxQjEKowfqO/s2UKUn1DguQmxVjG6us/vFiMAFxg5HmszETvkadqrhYhjO6
	 p3G1uYRBl3sgiMDwarsK7QN9GUKmY1ppQedxfXy1LWj8nRA+raX38agu6FcFdZE8rE
	 u0SnAZ+syPcnB24pk3013wQUUK3otAsHcsjjDaqnsrBDVB5TZNRykVgiRAM8ftohQB
	 0zL9uqU+EGZZr7X8V66FRgKnYPrnYunWMJVUC4BfgidwO8ZU2Lg4VckOSqGuRd5+zs
	 mHYalEYo2PrV1TSQiwCdtdYcCgfmpjbkd2fkGoESqy5HZncim8JNKQU8S6M5JXt0Mj
	 NrYdTfVkjlQUA==
Date: Thu, 17 Jun 2021 16:31:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
In-Reply-To: <20210617062635.1660944-7-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106171433560.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-7-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  drivers/xen/swiotlb-xen.c |  2 +-
>  include/linux/swiotlb.h   | 11 +++++++++++
>  kernel/dma/direct.c       |  2 +-
>  kernel/dma/direct.h       |  2 +-
>  kernel/dma/swiotlb.c      |  4 ++++
>  5 files changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 0c6ed09f8513..4730a146fa35 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	if (dma_capable(dev, dev_addr, size, true) &&
>  	    !range_straddles_page_boundary(phys, size) &&
>  		!xen_arch_need_swiotlb(dev, phys, dev_addr) &&
> -		swiotlb_force != SWIOTLB_FORCE)
> +		!is_swiotlb_force_bounce(dev))
>  		goto done;
>  
>  	/*
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index dd1c30a83058..8d8855c77d9a 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_bounce: %true if swiotlb bouncing is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -94,6 +95,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_bounce;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	return mem && paddr >= mem->start && paddr < mem->end;
>  }
>  
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_bounce;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return false;
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..a92465b4eb12 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 13e9e7158d94..4632b0f4f72e 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
>  	phys_addr_t phys = page_to_phys(page) + offset;
>  	dma_addr_t dma_addr = phys_to_dma(dev, phys);
>  
> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_force_bounce(dev))
>  		return swiotlb_map(dev, phys, size, dir, attrs);
>  
>  	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 409694d7a8ad..13891d5de8c9 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>  	mem->end = mem->start + bytes;
>  	mem->index = 0;
>  	mem->late_alloc = late_alloc;
> +
> +	if (swiotlb_force == SWIOTLB_FORCE)
> +		mem->force_bounce = true;
> +
>  	spin_lock_init(&mem->lock);
>  	for (i = 0; i < mem->nslabs; i++) {
>  		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 

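The patch above replaces the global `swiotlb_force == SWIOTLB_FORCE` test with a per-device query, so each device can later be bound to its own restricted pool. A minimal sketch of that per-device flag pattern; the structures here are simplified stand-ins, not the kernel definitions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the structures the patch
 * touches: each device points at its io_tlb_mem, and the pool itself
 * records whether bouncing is forced. */
struct io_tlb_mem {
    bool force_bounce;          /* true if swiotlb bouncing is forced */
};

struct device {
    struct io_tlb_mem *dma_io_tlb_mem;
};

/* Per-device replacement for the old global check. The kernel helper
 * dereferences dma_io_tlb_mem unconditionally; the NULL check here only
 * keeps this standalone sketch safe for devices with no pool. */
static bool is_swiotlb_force_bounce(const struct device *dev)
{
    return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
}
```

Callers such as dma_direct_map_page() and xen_swiotlb_map_page() then ask the device rather than consulting the global `swiotlb_force`, which is what allows different pools to make different choices.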

From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:32:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:32:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144235.265541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1V9-000619-Mu; Thu, 17 Jun 2021 23:32:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144235.265541; Thu, 17 Jun 2021 23:32:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1V9-000612-JQ; Thu, 17 Jun 2021 23:32:47 +0000
Received: by outflank-mailman (input) for mailman id 144235;
 Thu, 17 Jun 2021 23:32:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1V8-00060e-Eg
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:32:46 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15bb2d43-5a71-4c24-a6f1-e3023f6de940;
 Thu, 17 Jun 2021 23:32:44 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623972576629514.3458375101217;
 Thu, 17 Jun 2021 16:29:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15bb2d43-5a71-4c24-a6f1-e3023f6de940
ARC-Seal: i=1; a=rsa-sha256; t=1623972579; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=NQqgNX9OxrUfo0glFexSXGJn9emKXMOn+AKVzJNBy8L8GSkzOXd+R4ZsUULFyK1j/9dTVLZ6ckkbbG77mVC+qIy9Dm7kH9cYTZr5k/Y55VSTd/ekZMH0AMLVSq0UGAcd0PIwAr1hokJIOpm0O/ncjP/cb/TLRTb8dqhbI+Stp/g=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1623972579; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=3mB5hO2757p/4LpDpnIePDctsNS8pYN/1zi4yPRAbnY=; 
	b=fujkNITP9QyuBVP4iL5sg7C+yQsMbdqMIdvfgeSTheI0EB9wx+MX+RLyy0+RaP0oU9w8jLM9+aHLH+EJ79uP4abkZ7RdM62S3F7yCdQiGgqoXJgMjV5XUEJGCK5orzn8fVruus11lQjtG0T8RteVksWv/1E1c/238nSb36mzbGA=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1623972579;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=3mB5hO2757p/4LpDpnIePDctsNS8pYN/1zi4yPRAbnY=;
	b=rG/FbrAIkzq7xRAxDNpl6Ro2ZW/1pbT3wvD0pM7I2mKuFGqVOcnUhqnhocn+ajlC
	xWBsHteAvQF1cGoOtrgwPcnjKrkq5bdvWuV7938DWguUgKzfe4ZtBIHNNrACudZ62nM
	TJZWxcK7XoCFGqFR1XK8wngLi4JYjyLg5XUP61YA=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 2/6] xsm: decouple xsm header inclusion selection
Date: Thu, 17 Jun 2021 19:39:14 -0400
Message-Id: <20210617233918.10095-3-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Multiple preprocessor defines were used as a mechanism to selectively
include parts of the xsm.h header file. This made it difficult to know
which portion was being included at any one time. This commit simplifies
matters by separating the core structures and functions of XSM into
xsm-core.h, away from the wrapper functions, which remain in xsm.h and
dummy.h.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/include/xsm/dummy.h    |   2 +-
 xen/include/xsm/xsm-core.h | 262 +++++++++++++++++++++++++++++++++++++
 xen/include/xsm/xsm.h      | 240 +--------------------------------
 xen/xsm/dummy.c            |   1 -
 xen/xsm/silo.c             |   1 -
 5 files changed, 264 insertions(+), 242 deletions(-)
 create mode 100644 xen/include/xsm/xsm-core.h

diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 363c6d7798..c445c5681b 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -16,7 +16,7 @@
  */
 
 #include <xen/sched.h>
-#include <xsm/xsm.h>
+#include <xsm/xsm-core.h>
 #include <public/hvm/params.h>
 
 /* Cannot use BUILD_BUG_ON here because the expressions we check are not
diff --git a/xen/include/xsm/xsm-core.h b/xen/include/xsm/xsm-core.h
new file mode 100644
index 0000000000..5297c73fe6
--- /dev/null
+++ b/xen/include/xsm/xsm-core.h
@@ -0,0 +1,262 @@
+/*
+ *  This file contains the XSM hook definitions for Xen.
+ *
+ *  This work is based on the LSM implementation in Linux 2.6.13.4.
+ *
+ *  Author:  George Coker, <gscoker@alpha.ncsc.mil>
+ *
+ *  Contributors: Michael LeMay, <mdlemay@epoch.ncsc.mil>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2,
+ *  as published by the Free Software Foundation.
+ */
+
+#ifndef __XSM_CORE_H__
+#define __XSM_CORE_H__
+
+#include <xen/sched.h>
+#include <xen/multiboot.h>
+
+typedef void xsm_op_t;
+DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
+
+/* policy magic number (defined by XSM_MAGIC) */
+typedef u32 xsm_magic_t;
+
+#ifdef CONFIG_XSM_FLASK
+#define XSM_MAGIC 0xf97cff8c
+#else
+#define XSM_MAGIC 0x0
+#endif
+
+/* These annotations are used by callers and in dummy.h to document the
+ * default actions of XSM hooks. They should be compiled out otherwise.
+ */
+enum xsm_default {
+    XSM_HOOK,     /* Guests can normally access the hypercall */
+    XSM_DM_PRIV,  /* Device model can perform on its target domain */
+    XSM_TARGET,   /* Can perform on self or your target domain */
+    XSM_PRIV,     /* Privileged - normally restricted to dom0 */
+    XSM_XS_PRIV,  /* Xenstore domain - can do some privileged operations */
+    XSM_OTHER     /* Something more complex */
+};
+typedef enum xsm_default xsm_default_t;
+
+struct xsm_operations {
+    void (*security_domaininfo) (struct domain *d,
+                                        struct xen_domctl_getdomaininfo *info);
+    int (*domain_create) (struct domain *d, u32 ssidref);
+    int (*getdomaininfo) (struct domain *d);
+    int (*domctl_scheduler_op) (struct domain *d, int op);
+    int (*sysctl_scheduler_op) (int op);
+    int (*set_target) (struct domain *d, struct domain *e);
+    int (*domctl) (struct domain *d, int cmd);
+    int (*sysctl) (int cmd);
+    int (*readconsole) (uint32_t clear);
+
+    int (*evtchn_unbound) (struct domain *d, struct evtchn *chn, domid_t id2);
+    int (*evtchn_interdomain) (struct domain *d1, struct evtchn *chn1,
+                                        struct domain *d2, struct evtchn *chn2);
+    void (*evtchn_close_post) (struct evtchn *chn);
+    int (*evtchn_send) (struct domain *d, struct evtchn *chn);
+    int (*evtchn_status) (struct domain *d, struct evtchn *chn);
+    int (*evtchn_reset) (struct domain *d1, struct domain *d2);
+
+    int (*grant_mapref) (struct domain *d1, struct domain *d2, uint32_t flags);
+    int (*grant_unmapref) (struct domain *d1, struct domain *d2);
+    int (*grant_setup) (struct domain *d1, struct domain *d2);
+    int (*grant_transfer) (struct domain *d1, struct domain *d2);
+    int (*grant_copy) (struct domain *d1, struct domain *d2);
+    int (*grant_query_size) (struct domain *d1, struct domain *d2);
+
+    int (*alloc_security_domain) (struct domain *d);
+    void (*free_security_domain) (struct domain *d);
+    int (*alloc_security_evtchns) (struct evtchn chn[], unsigned int nr);
+    void (*free_security_evtchns) (struct evtchn chn[], unsigned int nr);
+    char *(*show_security_evtchn) (struct domain *d, const struct evtchn *chn);
+    int (*init_hardware_domain) (struct domain *d);
+
+    int (*get_pod_target) (struct domain *d);
+    int (*set_pod_target) (struct domain *d);
+    int (*memory_exchange) (struct domain *d);
+    int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2);
+    int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
+    int (*memory_pin_page) (struct domain *d1, struct domain *d2, struct page_info *page);
+    int (*add_to_physmap) (struct domain *d1, struct domain *d2);
+    int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
+    int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+    int (*claim_pages) (struct domain *d);
+
+    int (*console_io) (struct domain *d, int cmd);
+
+    int (*profile) (struct domain *d, int op);
+
+    int (*kexec) (void);
+    int (*schedop_shutdown) (struct domain *d1, struct domain *d2);
+
+    char *(*show_irq_sid) (int irq);
+    int (*map_domain_pirq) (struct domain *d);
+    int (*map_domain_irq) (struct domain *d, int irq, const void *data);
+    int (*unmap_domain_pirq) (struct domain *d);
+    int (*unmap_domain_irq) (struct domain *d, int irq, const void *data);
+    int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
+    int (*unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
+    int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
+    int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
+    int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
+    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
+
+#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+    int (*get_device_group) (uint32_t machine_bdf);
+    int (*assign_device) (struct domain *d, uint32_t machine_bdf);
+    int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
+#endif
+
+#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+    int (*assign_dtdevice) (struct domain *d, const char *dtpath);
+    int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
+#endif
+
+    int (*resource_plug_core) (void);
+    int (*resource_unplug_core) (void);
+    int (*resource_plug_pci) (uint32_t machine_bdf);
+    int (*resource_unplug_pci) (uint32_t machine_bdf);
+    int (*resource_setup_pci) (uint32_t machine_bdf);
+    int (*resource_setup_gsi) (int gsi);
+    int (*resource_setup_misc) (void);
+
+    int (*page_offline)(uint32_t cmd);
+    int (*hypfs_op)(void);
+
+    long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
+
+    int (*hvm_param) (struct domain *d, unsigned long op);
+    int (*hvm_control) (struct domain *d, unsigned long op);
+    int (*hvm_param_altp2mhvm) (struct domain *d);
+    int (*hvm_altp2mhvm_op) (struct domain *d, uint64_t mode, uint32_t op);
+    int (*get_vnumainfo) (struct domain *d);
+
+    int (*vm_event_control) (struct domain *d, int mode, int op);
+
+#ifdef CONFIG_MEM_ACCESS
+    int (*mem_access) (struct domain *d);
+#endif
+
+#ifdef CONFIG_MEM_PAGING
+    int (*mem_paging) (struct domain *d);
+#endif
+
+#ifdef CONFIG_MEM_SHARING
+    int (*mem_sharing) (struct domain *d);
+#endif
+
+    int (*platform_op) (uint32_t cmd);
+
+#ifdef CONFIG_X86
+    int (*do_mca) (void);
+    int (*shadow_control) (struct domain *d, uint32_t op);
+    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
+    int (*apic) (struct domain *d, int cmd);
+    int (*memtype) (uint32_t access);
+    int (*machine_memory_map) (void);
+    int (*domain_memory_map) (struct domain *d);
+#define XSM_MMU_UPDATE_READ      1
+#define XSM_MMU_UPDATE_WRITE     2
+#define XSM_MMU_NORMAL_UPDATE    4
+#define XSM_MMU_MACHPHYS_UPDATE  8
+    int (*mmu_update) (struct domain *d, struct domain *t,
+                       struct domain *f, uint32_t flags);
+    int (*mmuext_op) (struct domain *d, struct domain *f);
+    int (*update_va_mapping) (struct domain *d, struct domain *f, l1_pgentry_t pte);
+    int (*priv_mapping) (struct domain *d, struct domain *t);
+    int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
+    int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
+    int (*pmu_op) (struct domain *d, unsigned int op);
+#endif
+    int (*dm_op) (struct domain *d);
+    int (*xen_version) (uint32_t cmd);
+    int (*domain_resource_map) (struct domain *d);
+#ifdef CONFIG_ARGO
+    int (*argo_enable) (const struct domain *d);
+    int (*argo_register_single_source) (const struct domain *d,
+                                        const struct domain *t);
+    int (*argo_register_any_source) (const struct domain *d);
+    int (*argo_send) (const struct domain *d, const struct domain *t);
+#endif
+};
+
+#ifdef CONFIG_XSM
+
+#ifdef CONFIG_MULTIBOOT
+extern int xsm_multiboot_init(unsigned long *module_map,
+                              const multiboot_info_t *mbi);
+extern int xsm_multiboot_policy_init(unsigned long *module_map,
+                                     const multiboot_info_t *mbi,
+                                     void **policy_buffer,
+                                     size_t *policy_size);
+#endif
+
+#ifdef CONFIG_HAS_DEVICE_TREE
+/*
+ * Initialize XSM
+ *
+ * On success, return 1 if using SILO mode else 0.
+ */
+extern int xsm_dt_init(void);
+extern int xsm_dt_policy_init(void **policy_buffer, size_t *policy_size);
+extern bool has_xsm_magic(paddr_t);
+#endif
+
+extern int register_xsm(struct xsm_operations *ops);
+
+extern struct xsm_operations dummy_xsm_ops;
+extern void xsm_fixup_ops(struct xsm_operations *ops);
+
+#ifdef CONFIG_XSM_FLASK
+extern void flask_init(const void *policy_buffer, size_t policy_size);
+#else
+static inline void flask_init(const void *policy_buffer, size_t policy_size)
+{
+}
+#endif
+
+#ifdef CONFIG_XSM_FLASK_POLICY
+extern const unsigned char xsm_flask_init_policy[];
+extern const unsigned int xsm_flask_init_policy_size;
+#endif
+
+#ifdef CONFIG_XSM_SILO
+extern void silo_init(void);
+#else
+static inline void silo_init(void) {}
+#endif
+
+#else /* CONFIG_XSM */
+
+#ifdef CONFIG_MULTIBOOT
+static inline int xsm_multiboot_init (unsigned long *module_map,
+                                      const multiboot_info_t *mbi)
+{
+    return 0;
+}
+#endif
+
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int xsm_dt_init(void)
+{
+    return 0;
+}
+
+static inline bool has_xsm_magic(paddr_t start)
+{
+    return false;
+}
+#endif /* CONFIG_HAS_DEVICE_TREE */
+
+#endif /* CONFIG_XSM */
+
+#endif /* __XSM_CORE_H */
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 86ca045e74..ecbdee2c7d 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -20,184 +20,12 @@
 #endif
 #include <xen/sched.h>
 #include <xen/multiboot.h>
-
-typedef void xsm_op_t;
-DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
-
-/* policy magic number (defined by XSM_MAGIC) */
-typedef u32 xsm_magic_t;
-
-#ifdef CONFIG_XSM_FLASK
-#define XSM_MAGIC 0xf97cff8c
-#else
-#define XSM_MAGIC 0x0
-#endif
-
-/* These annotations are used by callers and in dummy.h to document the
- * default actions of XSM hooks. They should be compiled out otherwise.
- */
-enum xsm_default {
-    XSM_HOOK,     /* Guests can normally access the hypercall */
-    XSM_DM_PRIV,  /* Device model can perform on its target domain */
-    XSM_TARGET,   /* Can perform on self or your target domain */
-    XSM_PRIV,     /* Privileged - normally restricted to dom0 */
-    XSM_XS_PRIV,  /* Xenstore domain - can do some privileged operations */
-    XSM_OTHER     /* Something more complex */
-};
-typedef enum xsm_default xsm_default_t;
-
-struct xsm_operations {
-    void (*security_domaininfo) (struct domain *d,
-                                        struct xen_domctl_getdomaininfo *info);
-    int (*domain_create) (struct domain *d, u32 ssidref);
-    int (*getdomaininfo) (struct domain *d);
-    int (*domctl_scheduler_op) (struct domain *d, int op);
-    int (*sysctl_scheduler_op) (int op);
-    int (*set_target) (struct domain *d, struct domain *e);
-    int (*domctl) (struct domain *d, int cmd);
-    int (*sysctl) (int cmd);
-    int (*readconsole) (uint32_t clear);
-
-    int (*evtchn_unbound) (struct domain *d, struct evtchn *chn, domid_t id2);
-    int (*evtchn_interdomain) (struct domain *d1, struct evtchn *chn1,
-                                        struct domain *d2, struct evtchn *chn2);
-    void (*evtchn_close_post) (struct evtchn *chn);
-    int (*evtchn_send) (struct domain *d, struct evtchn *chn);
-    int (*evtchn_status) (struct domain *d, struct evtchn *chn);
-    int (*evtchn_reset) (struct domain *d1, struct domain *d2);
-
-    int (*grant_mapref) (struct domain *d1, struct domain *d2, uint32_t flags);
-    int (*grant_unmapref) (struct domain *d1, struct domain *d2);
-    int (*grant_setup) (struct domain *d1, struct domain *d2);
-    int (*grant_transfer) (struct domain *d1, struct domain *d2);
-    int (*grant_copy) (struct domain *d1, struct domain *d2);
-    int (*grant_query_size) (struct domain *d1, struct domain *d2);
-
-    int (*alloc_security_domain) (struct domain *d);
-    void (*free_security_domain) (struct domain *d);
-    int (*alloc_security_evtchns) (struct evtchn chn[], unsigned int nr);
-    void (*free_security_evtchns) (struct evtchn chn[], unsigned int nr);
-    char *(*show_security_evtchn) (struct domain *d, const struct evtchn *chn);
-    int (*init_hardware_domain) (struct domain *d);
-
-    int (*get_pod_target) (struct domain *d);
-    int (*set_pod_target) (struct domain *d);
-    int (*memory_exchange) (struct domain *d);
-    int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2);
-    int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
-    int (*memory_pin_page) (struct domain *d1, struct domain *d2, struct page_info *page);
-    int (*add_to_physmap) (struct domain *d1, struct domain *d2);
-    int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
-    int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
-    int (*claim_pages) (struct domain *d);
-
-    int (*console_io) (struct domain *d, int cmd);
-
-    int (*profile) (struct domain *d, int op);
-
-    int (*kexec) (void);
-    int (*schedop_shutdown) (struct domain *d1, struct domain *d2);
-
-    char *(*show_irq_sid) (int irq);
-    int (*map_domain_pirq) (struct domain *d);
-    int (*map_domain_irq) (struct domain *d, int irq, const void *data);
-    int (*unmap_domain_pirq) (struct domain *d);
-    int (*unmap_domain_irq) (struct domain *d, int irq, const void *data);
-    int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
-    int (*unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
-    int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
-    int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
-    int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
-    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-    int (*get_device_group) (uint32_t machine_bdf);
-    int (*assign_device) (struct domain *d, uint32_t machine_bdf);
-    int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
-#endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-    int (*assign_dtdevice) (struct domain *d, const char *dtpath);
-    int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
-#endif
-
-    int (*resource_plug_core) (void);
-    int (*resource_unplug_core) (void);
-    int (*resource_plug_pci) (uint32_t machine_bdf);
-    int (*resource_unplug_pci) (uint32_t machine_bdf);
-    int (*resource_setup_pci) (uint32_t machine_bdf);
-    int (*resource_setup_gsi) (int gsi);
-    int (*resource_setup_misc) (void);
-
-    int (*page_offline)(uint32_t cmd);
-    int (*hypfs_op)(void);
-
-    long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
-#ifdef CONFIG_COMPAT
-    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
-#endif
-
-    int (*hvm_param) (struct domain *d, unsigned long op);
-    int (*hvm_control) (struct domain *d, unsigned long op);
-    int (*hvm_param_altp2mhvm) (struct domain *d);
-    int (*hvm_altp2mhvm_op) (struct domain *d, uint64_t mode, uint32_t op);
-    int (*get_vnumainfo) (struct domain *d);
-
-    int (*vm_event_control) (struct domain *d, int mode, int op);
-
-#ifdef CONFIG_MEM_ACCESS
-    int (*mem_access) (struct domain *d);
-#endif
-
-#ifdef CONFIG_MEM_PAGING
-    int (*mem_paging) (struct domain *d);
-#endif
-
-#ifdef CONFIG_MEM_SHARING
-    int (*mem_sharing) (struct domain *d);
-#endif
-
-    int (*platform_op) (uint32_t cmd);
-
-#ifdef CONFIG_X86
-    int (*do_mca) (void);
-    int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
-    int (*apic) (struct domain *d, int cmd);
-    int (*memtype) (uint32_t access);
-    int (*machine_memory_map) (void);
-    int (*domain_memory_map) (struct domain *d);
-#define XSM_MMU_UPDATE_READ      1
-#define XSM_MMU_UPDATE_WRITE     2
-#define XSM_MMU_NORMAL_UPDATE    4
-#define XSM_MMU_MACHPHYS_UPDATE  8
-    int (*mmu_update) (struct domain *d, struct domain *t,
-                       struct domain *f, uint32_t flags);
-    int (*mmuext_op) (struct domain *d, struct domain *f);
-    int (*update_va_mapping) (struct domain *d, struct domain *f, l1_pgentry_t pte);
-    int (*priv_mapping) (struct domain *d, struct domain *t);
-    int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
-    int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
-    int (*pmu_op) (struct domain *d, unsigned int op);
-#endif
-    int (*dm_op) (struct domain *d);
-    int (*xen_version) (uint32_t cmd);
-    int (*domain_resource_map) (struct domain *d);
-#ifdef CONFIG_ARGO
-    int (*argo_enable) (const struct domain *d);
-    int (*argo_register_single_source) (const struct domain *d,
-                                        const struct domain *t);
-    int (*argo_register_any_source) (const struct domain *d);
-    int (*argo_send) (const struct domain *d, const struct domain *t);
-#endif
-};
+#include <xsm/xsm-core.h>
 
 #ifdef CONFIG_XSM
 
 extern struct xsm_operations xsm_ops;
 
-#ifndef XSM_NO_WRAPPERS
-
 static inline void xsm_security_domaininfo (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info)
 {
@@ -731,76 +559,10 @@ static inline int xsm_argo_send(const struct domain *d, const struct domain *t)
 
 #endif /* CONFIG_ARGO */
 
-#endif /* XSM_NO_WRAPPERS */
-
-#ifdef CONFIG_MULTIBOOT
-extern int xsm_multiboot_init(unsigned long *module_map,
-                              const multiboot_info_t *mbi);
-extern int xsm_multiboot_policy_init(unsigned long *module_map,
-                                     const multiboot_info_t *mbi,
-                                     void **policy_buffer,
-                                     size_t *policy_size);
-#endif
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-/*
- * Initialize XSM
- *
- * On success, return 1 if using SILO mode else 0.
- */
-extern int xsm_dt_init(void);
-extern int xsm_dt_policy_init(void **policy_buffer, size_t *policy_size);
-extern bool has_xsm_magic(paddr_t);
-#endif
-
-extern int register_xsm(struct xsm_operations *ops);
-
-extern struct xsm_operations dummy_xsm_ops;
-extern void xsm_fixup_ops(struct xsm_operations *ops);
-
-#ifdef CONFIG_XSM_FLASK
-extern void flask_init(const void *policy_buffer, size_t policy_size);
-#else
-static inline void flask_init(const void *policy_buffer, size_t policy_size)
-{
-}
-#endif
-
-#ifdef CONFIG_XSM_FLASK_POLICY
-extern const unsigned char xsm_flask_init_policy[];
-extern const unsigned int xsm_flask_init_policy_size;
-#endif
-
-#ifdef CONFIG_XSM_SILO
-extern void silo_init(void);
-#else
-static inline void silo_init(void) {}
-#endif
-
 #else /* CONFIG_XSM */
 
 #include <xsm/dummy.h>
 
-#ifdef CONFIG_MULTIBOOT
-static inline int xsm_multiboot_init (unsigned long *module_map,
-                                      const multiboot_info_t *mbi)
-{
-    return 0;
-}
-#endif
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-static inline int xsm_dt_init(void)
-{
-    return 0;
-}
-
-static inline bool has_xsm_magic(paddr_t start)
-{
-    return false;
-}
-#endif /* CONFIG_HAS_DEVICE_TREE */
-
 #endif /* CONFIG_XSM */
 
 #endif /* __XSM_H */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 066694763a..de4d6cf2cf 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -10,7 +10,6 @@
  *  as published by the Free Software Foundation.
  */
 
-#define XSM_NO_WRAPPERS
 #include <xsm/dummy.h>
 
 #define set_to_dummy_if_null(ops, function)                            \
diff --git a/xen/xsm/silo.c b/xen/xsm/silo.c
index fc2ca5cd2d..b96dacd181 100644
--- a/xen/xsm/silo.c
+++ b/xen/xsm/silo.c
@@ -17,7 +17,6 @@
  * You should have received a copy of the GNU General Public License along with
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
-#define XSM_NO_WRAPPERS
 #include <xsm/dummy.h>
 
 /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:35:07 2021
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 3/6] xsm: enabling xsm to always be included
Date: Thu, 17 Jun 2021 19:39:15 -0400
Message-Id: <20210617233918.10095-4-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

The only difference between !CONFIG_XSM and CONFIG_XSM with both
!CONFIG_XSM_SILO and !CONFIG_XSM_FLASK is whether the XSM hooks in
dummy.h are called as static inline functions or through function
pointers to static functions. As such, this commit:
 * eliminates CONFIG_XSM
 * introduces CONFIG_XSM_EVTCHN_LABELING as the replacement option for
   enabling security labeling of event channels
 * makes CONFIG_XSM_SILO and CONFIG_XSM_FLASK default to n

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/common/Kconfig            |  55 ++++-----
 xen/include/xen/sched.h       |   2 +-
 xen/include/xsm/xsm-core.h    |  26 ----
 xen/include/xsm/xsm.h         |   8 --
 xen/xsm/Makefile              |   4 +-
 xen/xsm/dummy.c               |   4 +-
 xen/{include => }/xsm/dummy.h | 220 ++++++++++++++++------------------
 xen/xsm/silo.c                |  17 +--
 xen/xsm/xsm_core.c            |   4 -
 9 files changed, 142 insertions(+), 198 deletions(-)
 rename xen/{include => }/xsm/dummy.h (63%)

diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 0ddd18e11a..203ad7ea23 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -197,22 +197,33 @@ config XENOPROF
 
 	  If unsure, say Y.
 
-config XSM
-	bool "Xen Security Modules support"
-	default ARM
-	---help---
-	  Enables the security framework known as Xen Security Modules which
-	  allows administrators fine-grained control over a Xen domain and
-	  its capabilities by defining permissible interactions between domains,
-	  the hypervisor itself, and related resources such as memory and
-	  devices.
+menu "Xen Security Modules"
 
-	  If unsure, say N.
+choice
+	prompt "Default XSM module"
+	default XSM_SILO_DEFAULT if XSM_SILO && ARM
+	default XSM_FLASK_DEFAULT if XSM_FLASK
+	default XSM_SILO_DEFAULT if XSM_SILO
+	default XSM_DUMMY_DEFAULT
+	config XSM_DUMMY_DEFAULT
+		bool "Match non-XSM behavior"
+	config XSM_FLASK_DEFAULT
+		bool "FLux Advanced Security Kernel" if XSM_FLASK
+	config XSM_SILO_DEFAULT
+		bool "SILO" if XSM_SILO
+endchoice
+
+config XSM_EVTCHN_LABELING
+	bool "Security labeling of event channels"
+	default n
+	---help---
+	  This enables an XSM module to label and enforce access control over
+	  event channels.
 
 config XSM_FLASK
-	def_bool y
+	def_bool n
 	prompt "FLux Advanced Security Kernel support"
-	depends on XSM
+	select XSM_EVTCHN_LABELING
 	---help---
 	  Enables FLASK (FLux Advanced Security Kernel) as the access control
 	  mechanism used by the XSM framework.  This provides a mandatory access
@@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
 	  If unsure, say Y.
 
 config XSM_SILO
-	def_bool y
+	def_bool n
 	prompt "SILO support"
-	depends on XSM
 	---help---
 	  Enables SILO as the access control mechanism used by the XSM framework.
 	  This is not the default module, add boot parameter xsm=silo to choose
@@ -261,25 +271,12 @@ config XSM_SILO
 
 	  If unsure, say Y.
 
-choice
-	prompt "Default XSM implementation"
-	depends on XSM
-	default XSM_SILO_DEFAULT if XSM_SILO && ARM
-	default XSM_FLASK_DEFAULT if XSM_FLASK
-	default XSM_SILO_DEFAULT if XSM_SILO
-	default XSM_DUMMY_DEFAULT
-	config XSM_DUMMY_DEFAULT
-		bool "Match non-XSM behavior"
-	config XSM_FLASK_DEFAULT
-		bool "FLux Advanced Security Kernel" if XSM_FLASK
-	config XSM_SILO_DEFAULT
-		bool "SILO" if XSM_SILO
-endchoice
+endmenu
 
 config LATE_HWDOM
 	bool "Dedicated hardware domain"
 	default n
-	depends on XSM && X86
+	depends on XSM_FLASK && X86
 	---help---
 	  Allows the creation of a dedicated hardware domain distinct from
 	  domain 0 that manages devices without needing access to other
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3982167144..e92d2cdeab 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -120,7 +120,7 @@ struct evtchn
     unsigned short notify_vcpu_id; /* VCPU for local delivery notification */
     uint32_t fifo_lastq;           /* Data for identifying last queue. */
 
-#ifdef CONFIG_XSM
+#ifdef CONFIG_XSM_EVTCHN_LABELING
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
         /*
diff --git a/xen/include/xsm/xsm-core.h b/xen/include/xsm/xsm-core.h
index 5297c73fe6..e3718bc62d 100644
--- a/xen/include/xsm/xsm-core.h
+++ b/xen/include/xsm/xsm-core.h
@@ -189,8 +189,6 @@ struct xsm_operations {
 #endif
 };
 
-#ifdef CONFIG_XSM
-
 #ifdef CONFIG_MULTIBOOT
 extern int xsm_multiboot_init(unsigned long *module_map,
                               const multiboot_info_t *mbi);
@@ -235,28 +233,4 @@ extern void silo_init(void);
 static inline void silo_init(void) {}
 #endif
 
-#else /* CONFIG_XSM */
-
-#ifdef CONFIG_MULTIBOOT
-static inline int xsm_multiboot_init (unsigned long *module_map,
-                                      const multiboot_info_t *mbi)
-{
-    return 0;
-}
-#endif
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-static inline int xsm_dt_init(void)
-{
-    return 0;
-}
-
-static inline bool has_xsm_magic(paddr_t start)
-{
-    return false;
-}
-#endif /* CONFIG_HAS_DEVICE_TREE */
-
-#endif /* CONFIG_XSM */
-
 #endif /* __XSM_CORE_H */
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ecbdee2c7d..69f68300e2 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -22,8 +22,6 @@
 #include <xen/multiboot.h>
 #include <xsm/xsm-core.h>
 
-#ifdef CONFIG_XSM
-
 extern struct xsm_operations xsm_ops;
 
 static inline void xsm_security_domaininfo (struct domain *d,
@@ -559,10 +557,4 @@ static inline int xsm_argo_send(const struct domain *d, const struct domain *t)
 
 #endif /* CONFIG_ARGO */
 
-#else /* CONFIG_XSM */
-
-#include <xsm/dummy.h>
-
-#endif /* CONFIG_XSM */
-
 #endif /* __XSM_H */
diff --git a/xen/xsm/Makefile b/xen/xsm/Makefile
index cf0a728f1c..0059794dd5 100644
--- a/xen/xsm/Makefile
+++ b/xen/xsm/Makefile
@@ -1,6 +1,6 @@
 obj-y += xsm_core.o
-obj-$(CONFIG_XSM) += xsm_policy.o
-obj-$(CONFIG_XSM) += dummy.o
+obj-y += xsm_policy.o
+obj-y += dummy.o
 obj-$(CONFIG_XSM_SILO) += silo.o
 
 obj-$(CONFIG_XSM_FLASK) += flask/
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index de4d6cf2cf..bfd8b96f08 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -10,12 +10,12 @@
  *  as published by the Free Software Foundation.
  */
 
-#include <xsm/dummy.h>
+#include "dummy.h"
 
 #define set_to_dummy_if_null(ops, function)                            \
     do {                                                               \
         if ( !ops->function )                                          \
-            ops->function = xsm_##function;                            \
+            ops->function = dummy_##function;                          \
     } while (0)
 
 void __init xsm_fixup_ops (struct xsm_operations *ops)
diff --git a/xen/include/xsm/dummy.h b/xen/xsm/dummy.h
similarity index 63%
rename from xen/include/xsm/dummy.h
rename to xen/xsm/dummy.h
index c445c5681b..7e2bb09dac 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/xsm/dummy.h
@@ -42,12 +42,10 @@ static inline void __xsm_action_mismatch_detected(void)
 void __xsm_action_mismatch_detected(void);
 #endif
 
-#ifdef CONFIG_XSM
-
-/* In CONFIG_XSM builds, this header file is included from xsm/dummy.c, and
- * contains static (not inline) functions compiled to the dummy XSM module.
- * There is no xsm_default_t argument available, so the value from the assertion
- * is used to initialize the variable.
+/* This header file is included from xsm/dummy.c, and contains static (not
+ * inline) functions compiled to the dummy XSM module.  There is no
+ * xsm_default_t argument available, so the value from the assertion is used to
+ * initialize the variable.
  */
 #define XSM_INLINE __maybe_unused
 
@@ -55,20 +53,6 @@ void __xsm_action_mismatch_detected(void);
 #define XSM_DEFAULT_VOID void
 #define XSM_ASSERT_ACTION(def) xsm_default_t action = def; (void)action
 
-#else /* CONFIG_XSM */
-
-/* In !CONFIG_XSM builds, this header file is included from xsm/xsm.h, and
- * contains inline functions for each XSM hook. These functions also perform
- * compile-time checks on the xsm_default_t argument to ensure that the behavior
- * of the dummy XSM module is the same as the behavior with XSM disabled.
- */
-#define XSM_INLINE always_inline
-#define XSM_DEFAULT_ARG xsm_default_t action,
-#define XSM_DEFAULT_VOID xsm_default_t action
-#define XSM_ASSERT_ACTION(def) LINKER_BUG_ON(def != action)
-
-#endif /* CONFIG_XSM */
-
 static always_inline int xsm_default_action(
     xsm_default_t action, struct domain *src, struct domain *target)
 {
@@ -98,43 +82,43 @@ static always_inline int xsm_default_action(
     }
 }
 
-static XSM_INLINE void xsm_security_domaininfo(struct domain *d,
+static XSM_INLINE void dummy_security_domaininfo(struct domain *d,
                                     struct xen_domctl_getdomaininfo *info)
 {
     return;
 }
 
-static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref)
+static XSM_INLINE int dummy_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_getdomaininfo(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_getdomaininfo(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_domctl_scheduler_op(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static XSM_INLINE int dummy_domctl_scheduler_op(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_sysctl_scheduler_op(XSM_DEFAULT_ARG int cmd)
+static XSM_INLINE int dummy_sysctl_scheduler_op(XSM_DEFAULT_ARG int cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_set_target(XSM_DEFAULT_ARG struct domain *d, struct domain *e)
+static XSM_INLINE int dummy_set_target(XSM_DEFAULT_ARG struct domain *d, struct domain *e)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static XSM_INLINE int dummy_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( cmd )
@@ -151,85 +135,85 @@ static XSM_INLINE int xsm_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
     }
 }
 
-static XSM_INLINE int xsm_sysctl(XSM_DEFAULT_ARG int cmd)
+static XSM_INLINE int dummy_sysctl(XSM_DEFAULT_ARG int cmd)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_readconsole(XSM_DEFAULT_ARG uint32_t clear)
+static XSM_INLINE int dummy_readconsole(XSM_DEFAULT_ARG uint32_t clear)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_alloc_security_domain(struct domain *d)
+static XSM_INLINE int dummy_alloc_security_domain(struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE void xsm_free_security_domain(struct domain *d)
+static XSM_INLINE void dummy_free_security_domain(struct domain *d)
 {
     return;
 }
 
-static XSM_INLINE int xsm_grant_mapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
+static XSM_INLINE int dummy_grant_mapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_grant_unmapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_grant_unmapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_grant_setup(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_grant_setup(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_grant_transfer(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_grant_transfer(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_grant_copy(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_grant_copy(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_grant_query_size(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_grant_query_size(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_memory_exchange(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_memory_exchange(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_memory_adjust_reservation(XSM_DEFAULT_ARG struct domain *d1,
+static XSM_INLINE int dummy_memory_adjust_reservation(XSM_DEFAULT_ARG struct domain *d1,
                                                             struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_memory_stat_reservation(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_memory_stat_reservation(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static XSM_INLINE int dummy_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     if ( d->is_console )
@@ -241,129 +225,129 @@ static XSM_INLINE int xsm_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd)
     return xsm_default_action(XSM_PRIV, d, NULL);
 }
 
-static XSM_INLINE int xsm_profile(XSM_DEFAULT_ARG struct domain *d, int op)
+static XSM_INLINE int dummy_profile(XSM_DEFAULT_ARG struct domain *d, int op)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int xsm_kexec(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_kexec(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_schedop_shutdown(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_schedop_shutdown(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_memory_pin_page(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
+static XSM_INLINE int dummy_memory_pin_page(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
                                           struct page_info *page)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_claim_pages(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_claim_pages(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_evtchn_unbound(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn,
+static XSM_INLINE int dummy_evtchn_unbound(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn,
                                          domid_t id2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_evtchn_interdomain(XSM_DEFAULT_ARG struct domain *d1, struct evtchn
+static XSM_INLINE int dummy_evtchn_interdomain(XSM_DEFAULT_ARG struct domain *d1, struct evtchn
                                 *chan1, struct domain *d2, struct evtchn *chan2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE void xsm_evtchn_close_post(struct evtchn *chn)
+static XSM_INLINE void dummy_evtchn_close_post(struct evtchn *chn)
 {
     return;
 }
 
-static XSM_INLINE int xsm_evtchn_send(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
+static XSM_INLINE int dummy_evtchn_send(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int xsm_evtchn_status(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
+static XSM_INLINE int dummy_evtchn_status(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_evtchn_reset(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_evtchn_reset(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_alloc_security_evtchns(
+static XSM_INLINE int dummy_alloc_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
     return 0;
 }
 
-static XSM_INLINE void xsm_free_security_evtchns(
+static XSM_INLINE void dummy_free_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
     return;
 }
 
-static XSM_INLINE char *xsm_show_security_evtchn(struct domain *d, const struct evtchn *chn)
+static XSM_INLINE char *dummy_show_security_evtchn(struct domain *d, const struct evtchn *chn)
 {
     return NULL;
 }
 
-static XSM_INLINE int xsm_init_hardware_domain(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_init_hardware_domain(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_get_pod_target(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_get_pod_target(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_set_pod_target(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_set_pod_target(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static XSM_INLINE int dummy_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_assign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
+static XSM_INLINE int dummy_assign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
+static XSM_INLINE int dummy_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
@@ -372,14 +356,14 @@ static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-static XSM_INLINE int xsm_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
+static XSM_INLINE int dummy_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                           const char *dtpath)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
+static XSM_INLINE int dummy_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                             const char *dtpath)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
@@ -388,134 +372,134 @@ static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
-static XSM_INLINE int xsm_resource_plug_core(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_resource_plug_core(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_unplug_core(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_resource_unplug_core(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_plug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static XSM_INLINE int dummy_resource_plug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_unplug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static XSM_INLINE int dummy_resource_unplug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_setup_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static XSM_INLINE int dummy_resource_setup_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_setup_gsi(XSM_DEFAULT_ARG int gsi)
+static XSM_INLINE int dummy_resource_setup_gsi(XSM_DEFAULT_ARG int gsi)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_resource_setup_misc(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_resource_setup_misc(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
+static XSM_INLINE int dummy_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_hypfs_op(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE long xsm_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+static XSM_INLINE long dummy_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
 
 #ifdef CONFIG_COMPAT
-static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+static XSM_INLINE int dummy_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
 #endif
 
-static XSM_INLINE char *xsm_show_irq_sid(int irq)
+static XSM_INLINE char *dummy_show_irq_sid(int irq)
 {
     return NULL;
 }
 
-static XSM_INLINE int xsm_map_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_map_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_map_domain_irq(XSM_DEFAULT_ARG struct domain *d,
+static XSM_INLINE int dummy_map_domain_irq(XSM_DEFAULT_ARG struct domain *d,
                                          int irq, const void *data)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_unmap_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_unmap_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_bind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+static XSM_INLINE int dummy_bind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_unbind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+static XSM_INLINE int dummy_unbind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_unmap_domain_irq(XSM_DEFAULT_ARG struct domain *d,
+static XSM_INLINE int dummy_unmap_domain_irq(XSM_DEFAULT_ARG struct domain *d,
                                            int irq, const void *data)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_irq_permission(XSM_DEFAULT_ARG struct domain *d, int pirq, uint8_t allow)
+static XSM_INLINE int dummy_irq_permission(XSM_DEFAULT_ARG struct domain *d, int pirq, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_iomem_permission(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static XSM_INLINE int dummy_iomem_permission(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_iomem_mapping(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static XSM_INLINE int dummy_iomem_mapping(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_pci_config_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf,
+static XSM_INLINE int dummy_pci_config_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf,
                                         uint16_t start, uint16_t end,
                                         uint8_t access)
 {
@@ -523,43 +507,43 @@ static XSM_INLINE int xsm_pci_config_permission(XSM_DEFAULT_ARG struct domain *d
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_add_to_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_add_to_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_remove_from_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int dummy_remove_from_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
+static XSM_INLINE int dummy_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, t);
 }
 
-static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
+static XSM_INLINE int dummy_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_control(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
+static XSM_INLINE int dummy_hvm_control(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_param_altp2mhvm(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_hvm_param_altp2mhvm(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uint64_t mode, uint32_t op)
+static XSM_INLINE int dummy_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uint64_t mode, uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
 
@@ -578,14 +562,14 @@ static XSM_INLINE int xsm_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uin
     }
 }
 
-static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+static XSM_INLINE int dummy_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
 #ifdef CONFIG_MEM_ACCESS
-static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_mem_access(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
@@ -593,7 +577,7 @@ static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 #endif
 
 #ifdef CONFIG_MEM_PAGING
-static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
@@ -601,57 +585,57 @@ static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 #endif
 
 #ifdef CONFIG_MEM_SHARING
-static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 #endif
 
-static XSM_INLINE int xsm_platform_op(XSM_DEFAULT_ARG uint32_t op)
+static XSM_INLINE int dummy_platform_op(XSM_DEFAULT_ARG uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 #ifdef CONFIG_X86
-static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_do_mca(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint32_t op)
+static XSM_INLINE int dummy_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
+static XSM_INLINE int dummy_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, cd);
 }
 
-static XSM_INLINE int xsm_apic(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static XSM_INLINE int dummy_apic(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int xsm_machine_memory_map(XSM_DEFAULT_VOID)
+static XSM_INLINE int dummy_machine_memory_map(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_domain_memory_map(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_domain_memory_map(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct domain *t,
+static XSM_INLINE int dummy_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct domain *t,
                                      struct domain *f, uint32_t flags)
 {
     int rc = 0;
@@ -663,38 +647,38 @@ static XSM_INLINE int xsm_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct do
     return rc;
 }
 
-static XSM_INLINE int xsm_mmuext_op(XSM_DEFAULT_ARG struct domain *d, struct domain *f)
+static XSM_INLINE int dummy_mmuext_op(XSM_DEFAULT_ARG struct domain *d, struct domain *f)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, f);
 }
 
-static XSM_INLINE int xsm_update_va_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *f, 
+static XSM_INLINE int dummy_update_va_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, f);
 }
 
-static XSM_INLINE int xsm_priv_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
+static XSM_INLINE int dummy_priv_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, t);
 }
 
-static XSM_INLINE int xsm_ioport_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static XSM_INLINE int dummy_ioport_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_ioport_mapping(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static XSM_INLINE int dummy_ioport_mapping(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int op)
+static XSM_INLINE int dummy_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
@@ -711,30 +695,30 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
 
 #endif /* CONFIG_X86 */
 
-static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_dm_op(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
 #ifdef CONFIG_ARGO
-static XSM_INLINE int xsm_argo_enable(const struct domain *d)
+static XSM_INLINE int dummy_argo_enable(const struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE int xsm_argo_register_single_source(const struct domain *d,
+static XSM_INLINE int dummy_argo_register_single_source(const struct domain *d,
                                                       const struct domain *t)
 {
     return 0;
 }
 
-static XSM_INLINE int xsm_argo_register_any_source(const struct domain *d)
+static XSM_INLINE int dummy_argo_register_any_source(const struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE int xsm_argo_send(const struct domain *d,
+static XSM_INLINE int dummy_argo_send(const struct domain *d,
                                     const struct domain *t)
 {
     return 0;
@@ -743,7 +727,7 @@ static XSM_INLINE int xsm_argo_send(const struct domain *d,
 #endif /* CONFIG_ARGO */
 
 #include <public/version.h>
-static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
+static XSM_INLINE int dummy_xen_version (XSM_DEFAULT_ARG uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
@@ -767,7 +751,7 @@ static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
     }
 }
 
-static XSM_INLINE int xsm_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
+static XSM_INLINE int dummy_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
diff --git a/xen/xsm/silo.c b/xen/xsm/silo.c
index b96dacd181..ebe5814efc 100644
--- a/xen/xsm/silo.c
+++ b/xen/xsm/silo.c
@@ -17,7 +17,8 @@
  * You should have received a copy of the GNU General Public License along with
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
-#include <xsm/dummy.h>
+#include <xsm/xsm-core.h>
+#include "dummy.h"
 
 /*
  * Check if inter-domain communication is allowed.
@@ -43,7 +44,7 @@ static int silo_evtchn_unbound(struct domain *d1, struct evtchn *chn,
     else
     {
         if ( silo_mode_dom_check(d1, d2) )
-            rc = xsm_evtchn_unbound(d1, chn, id2);
+            rc = dummy_evtchn_unbound(d1, chn, id2);
         rcu_unlock_domain(d2);
     }
 
@@ -54,7 +55,7 @@ static int silo_evtchn_interdomain(struct domain *d1, struct evtchn *chan1,
                                    struct domain *d2, struct evtchn *chan2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_evtchn_interdomain(d1, chan1, d2, chan2);
+        return dummy_evtchn_interdomain(d1, chan1, d2, chan2);
     return -EPERM;
 }
 
@@ -62,21 +63,21 @@ static int silo_grant_mapref(struct domain *d1, struct domain *d2,
                              uint32_t flags)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_mapref(d1, d2, flags);
+        return dummy_grant_mapref(d1, d2, flags);
     return -EPERM;
 }
 
 static int silo_grant_transfer(struct domain *d1, struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_transfer(d1, d2);
+        return dummy_grant_transfer(d1, d2);
     return -EPERM;
 }
 
 static int silo_grant_copy(struct domain *d1, struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_copy(d1, d2);
+        return dummy_grant_copy(d1, d2);
     return -EPERM;
 }
 
@@ -86,14 +87,14 @@ static int silo_argo_register_single_source(const struct domain *d1,
                                             const struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_argo_register_single_source(d1, d2);
+        return dummy_argo_register_single_source(d1, d2);
     return -EPERM;
 }
 
 static int silo_argo_send(const struct domain *d1, const struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_argo_send(d1, d2);
+        return dummy_argo_send(d1, d2);
     return -EPERM;
 }
 
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index acc1af7166..a51dc6f7dd 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -18,8 +18,6 @@
 #include <xen/hypercall.h>
 #include <xsm/xsm.h>
 
-#ifdef CONFIG_XSM
-
 #ifdef CONFIG_MULTIBOOT
 #include <asm/setup.h>
 #endif
@@ -223,8 +221,6 @@ int __init register_xsm(struct xsm_operations *ops)
     return 0;
 }
 
-#endif
-
 long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return xsm_do_xsm_op(op);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:35:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:35:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144250.265564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Xz-0007Ru-LX; Thu, 17 Jun 2021 23:35:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144250.265564; Thu, 17 Jun 2021 23:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Xz-0007Rn-Hr; Thu, 17 Jun 2021 23:35:43 +0000
Received: by outflank-mailman (input) for mailman id 144250;
 Thu, 17 Jun 2021 23:35:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1Xy-0007Re-2F
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:35:42 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96190fc1-2305-488a-a97f-897775c10852;
 Thu, 17 Jun 2021 23:35:38 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623972581937325.8350726165488;
 Thu, 17 Jun 2021 16:29:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96190fc1-2305-488a-a97f-897775c10852
ARC-Seal: i=1; a=rsa-sha256; t=1623972583; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=gwGd/SJGF+vjh/MKS4pCKD08SH1QXWzt3Ra7lskWXhk41FhKSGGlO2WwlO6nW7XixMbDIRDxOhE9zsIRL3rj3HHEr+nPOzoN8hiBRPcyoCZtTu/T6V/MtrDWvy65YONJ4EWK7Jd0fmNZRZU6yTHNvJYCvWZ5MZlzR9wp9nqZyNQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1623972583; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=NN+fLziBRzBtkvPvDF6TDsmdflwA5o0WlqND7Rn5lLE=; 
	b=YbyWxcfgydQ/fARBxlYK+NN2A7RKd6HNhHmNM3JSfMtfQhgFcBsjdTpyLnJFaM4dSpktg5VmWgsMC3reUUXUIiEllH8nSHrPX0G/LRlX9bcc6gmzILZmKNrYbex0pcWBX/Q6bYUQGeR9aoy8ff4t3nHOXGtnTSKPvliW4SrQ4j4=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1623972583;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=NN+fLziBRzBtkvPvDF6TDsmdflwA5o0WlqND7Rn5lLE=;
	b=qilEQ+rOIUXXV1erY0KUuwX1qYsRoKC3PgH69GxjXli7bk0cg3uTWqC4xhqxe1XU
	ScRBzVQBpiFYfFPu0mgKQjTwA4pnE013ejJlNX166DVuHhxqv704R1vmNVlED2Bugbn
	RJLfzLx6IhHw0cgSxaArGLhm8OpTjerxu7FDvVF4=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 4/6] xsm: remove xsm_default_t from hook definitions
Date: Thu, 17 Jun 2021 19:39:16 -0400
Message-Id: <20210617233918.10095-5-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

With XSM now always enabled, even the dummy XSM module is invoked through
the xsm_ops dispatch, which does not pass a default privilege. This commit
removes the xsm_default_t parameter from the hook definitions and from all
respective call sites.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/arch/arm/dm.c                     |   2 +-
 xen/arch/arm/domctl.c                 |   6 +-
 xen/arch/arm/hvm.c                    |   2 +-
 xen/arch/arm/mm.c                     |   2 +-
 xen/arch/arm/platform_hypercall.c     |   2 +-
 xen/arch/x86/cpu/mcheck/mce.c         |   2 +-
 xen/arch/x86/cpu/vpmu.c               |   2 +-
 xen/arch/x86/domctl.c                 |   8 +-
 xen/arch/x86/hvm/dm.c                 |   2 +-
 xen/arch/x86/hvm/hvm.c                |  12 +-
 xen/arch/x86/irq.c                    |   5 +-
 xen/arch/x86/mm.c                     |  20 +--
 xen/arch/x86/mm/mem_paging.c          |   2 +-
 xen/arch/x86/mm/mem_sharing.c         |   9 +-
 xen/arch/x86/mm/p2m.c                 |   2 +-
 xen/arch/x86/mm/paging.c              |   4 +-
 xen/arch/x86/mm/shadow/set.c          |   2 +-
 xen/arch/x86/msi.c                    |   3 +-
 xen/arch/x86/pci.c                    |   2 +-
 xen/arch/x86/physdev.c                |  17 ++-
 xen/arch/x86/platform_hypercall.c     |  10 +-
 xen/arch/x86/pv/emul-priv-op.c        |   2 +-
 xen/arch/x86/sysctl.c                 |   4 +-
 xen/common/domain.c                   |   4 +-
 xen/common/domctl.c                   |  12 +-
 xen/common/event_channel.c            |  12 +-
 xen/common/grant_table.c              |  16 +--
 xen/common/hypfs.c                    |   2 +-
 xen/common/kernel.c                   |   2 +-
 xen/common/kexec.c                    |   2 +-
 xen/common/mem_access.c               |   2 +-
 xen/common/memory.c                   |  16 +--
 xen/common/monitor.c                  |   2 +-
 xen/common/sched/core.c               |   6 +-
 xen/common/sysctl.c                   |   8 +-
 xen/common/vm_event.c                 |   2 +-
 xen/common/xenoprof.c                 |   2 +-
 xen/drivers/char/console.c            |   2 +-
 xen/drivers/passthrough/device_tree.c |   4 +-
 xen/drivers/passthrough/pci.c         |  12 +-
 xen/include/xsm/xsm.h                 | 172 +++++++++++++-------------
 41 files changed, 197 insertions(+), 203 deletions(-)

diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 1b3fd6bc7d..c8b89c8f47 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -45,7 +45,7 @@ int dm_op(const struct dmop_args *op_args)
     if ( rc )
         return rc;
 
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    rc = xsm_dm_op(d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index b7d27f37df..e7202703ee 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -95,11 +95,11 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
          * done by the 2 hypercalls for consistency with other
          * architectures.
          */
-        rc = xsm_map_domain_irq(XSM_HOOK, d, irq, NULL);
+        rc = xsm_map_domain_irq(d, irq, NULL);
         if ( rc )
             return rc;
 
-        rc = xsm_bind_pt_irq(XSM_HOOK, d, bind);
+        rc = xsm_bind_pt_irq(d, bind);
         if ( rc )
             return rc;
 
@@ -130,7 +130,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( irq != virq )
             return -EINVAL;
 
-        rc = xsm_unbind_pt_irq(XSM_HOOK, d, bind);
+        rc = xsm_unbind_pt_irq(d, bind);
         if ( rc )
             return rc;
 
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34086..cf4bd9ae09 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -101,7 +101,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_hvm_param(XSM_TARGET, d, op);
+        rc = xsm_hvm_param(d, op);
         if ( rc )
             goto param_fail;
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0e07335291..a256c89b62 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1446,7 +1446,7 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        rc = xsm_map_gmfn_foreign(d, od);
         if ( rc )
         {
             put_pg_owner(od);
diff --git a/xen/arch/arm/platform_hypercall.c b/xen/arch/arm/platform_hypercall.c
index 8efac7ee60..52985f8a51 100644
--- a/xen/arch/arm/platform_hypercall.c
+++ b/xen/arch/arm/platform_hypercall.c
@@ -33,7 +33,7 @@ long do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_platform_op(XSM_PRIV, op->cmd);
+    ret = xsm_platform_op(op->cmd);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 7f433343bc..58beb96663 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1376,7 +1376,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
     struct xen_mc_msrinject *mc_msrinject;
     struct xen_mc_mceinject *mc_mceinject;
 
-    ret = xsm_do_mca(XSM_PRIV);
+    ret = xsm_do_mca();
     if ( ret )
         return x86_mcerr("", ret);
 
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 16e91a3694..34b536417b 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -706,7 +706,7 @@ long do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
     if ( !opt_vpmu_enabled || has_vlapic(current->domain) )
         return -EOPNOTSUPP;
 
-    ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
+    ret = xsm_pmu_op(current->domain, op);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26a76d2be9..8343c59e83 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -234,7 +234,7 @@ long arch_do_domctl(
         if ( (fp + np) <= fp || (fp + np) > MAX_IOPORTS )
             ret = -EINVAL;
         else if ( !ioports_access_permitted(currd, fp, fp + np - 1) ||
-                  xsm_ioport_permission(XSM_HOOK, d, fp, fp + np - 1, allow) )
+                  xsm_ioport_permission(d, fp, fp + np - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = ioports_permit_access(d, fp, fp + np - 1);
@@ -534,7 +534,7 @@ long arch_do_domctl(
         if ( !is_hvm_domain(d) )
             break;
 
-        ret = xsm_bind_pt_irq(XSM_HOOK, d, bind);
+        ret = xsm_bind_pt_irq(d, bind);
         if ( ret )
             break;
 
@@ -569,7 +569,7 @@ long arch_do_domctl(
         if ( irq <= 0 || !irq_access_permitted(currd, irq) )
             break;
 
-        ret = xsm_unbind_pt_irq(XSM_HOOK, d, bind);
+        ret = xsm_unbind_pt_irq(d, bind);
         if ( ret )
             break;
 
@@ -616,7 +616,7 @@ long arch_do_domctl(
         if ( !ioports_access_permitted(currd, fmp, fmp + np - 1) )
             break;
 
-        ret = xsm_ioport_mapping(XSM_HOOK, d, fmp, fmp + np - 1, add);
+        ret = xsm_ioport_mapping(d, fmp, fmp + np - 1, add);
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index b60b9f3364..6f996371b9 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -370,7 +370,7 @@ int dm_op(const struct dmop_args *op_args)
     if ( !is_hvm_domain(d) )
         goto out;
 
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    rc = xsm_dm_op(d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5086773e5c..c1b0d6fca8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4080,7 +4080,7 @@ static int hvm_allow_set_param(struct domain *d,
     uint64_t value;
     int rc;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
+    rc = xsm_hvm_param(d, HVMOP_set_param);
     if ( rc )
         return rc;
 
@@ -4227,7 +4227,7 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
         rc = pmtimer_change_ioport(d, value);
         break;
     case HVM_PARAM_ALTP2M:
-        rc = xsm_hvm_param_altp2mhvm(XSM_PRIV, d);
+        rc = xsm_hvm_param_altp2mhvm(d);
         if ( rc )
             break;
         if ( (value > XEN_ALTP2M_limited) ||
@@ -4356,7 +4356,7 @@ static int hvm_allow_get_param(struct domain *d,
 {
     int rc;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_param);
+    rc = xsm_hvm_param(d, HVMOP_get_param);
     if ( rc )
         return rc;
 
@@ -4566,7 +4566,7 @@ static int do_altp2m_op(
         goto out;
     }
 
-    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d, mode, a.cmd)) )
+    if ( (rc = xsm_hvm_altp2mhvm_op(d, mode, a.cmd)) )
         goto out;
 
     switch ( a.cmd )
@@ -4947,7 +4947,7 @@ static int hvmop_get_mem_type(
     if ( d == NULL )
         return -ESRCH;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_mem_type);
+    rc = xsm_hvm_param(d, HVMOP_get_mem_type);
     if ( rc )
         goto out;
 
@@ -5040,7 +5040,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( unlikely(d != current->domain) )
             rc = -EOPNOTSUPP;
         else if ( is_hvm_domain(d) && paging_mode_shadow(d) )
-            rc = xsm_hvm_param(XSM_TARGET, d, op);
+            rc = xsm_hvm_param(d, op);
         if ( !rc )
             pagetable_dying(a.gpa);
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index a1693f92dd..9f79ec1b86 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2122,7 +2122,7 @@ int map_domain_pirq(
         return 0;
     }
 
-    ret = xsm_map_domain_irq(XSM_HOOK, d, irq, data);
+    ret = xsm_map_domain_irq(d, irq, data);
     if ( ret )
     {
         dprintk(XENLOG_G_ERR, "dom%d: could not permit access to irq %d mapping to pirq %d\n",
@@ -2342,8 +2342,7 @@ int unmap_domain_pirq(struct domain *d, int pirq)
         nr = msi_desc->msi.nvec;
     }
 
-    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
-                               msi_desc ? msi_desc->dev : NULL);
+    ret = xsm_unmap_domain_irq(d, irq, msi_desc ? msi_desc->dev : NULL);
     if ( ret )
         goto done;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d799032dc..33d0aa8d4b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -977,7 +977,7 @@ get_page_from_l1e(
          * minor hack can go away.
          */
         if ( (real_pg_owner == NULL) || (pg_owner == l1e_owner) ||
-             xsm_priv_mapping(XSM_TARGET, pg_owner, real_pg_owner) )
+             xsm_priv_mapping(pg_owner, real_pg_owner) )
         {
             gdprintk(XENLOG_WARNING,
                      "pg_owner d%d l1e_owner d%d, but real_pg_owner d%d\n",
@@ -3407,7 +3407,7 @@ long do_mmuext_op(
         return -EINVAL;
     }
 
-    rc = xsm_mmuext_op(XSM_TARGET, currd, pg_owner);
+    rc = xsm_mmuext_op(currd, pg_owner);
     if ( rc )
     {
         put_pg_owner(pg_owner);
@@ -3497,7 +3497,7 @@ long do_mmuext_op(
                 break;
             }
 
-            rc = xsm_memory_pin_page(XSM_HOOK, currd, pg_owner, page);
+            rc = xsm_memory_pin_page(currd, pg_owner, page);
             if ( !rc && unlikely(test_and_set_bit(_PGT_pinned,
                                                   &page->u.inuse.type_info)) )
             {
@@ -4006,7 +4006,7 @@ long do_mmu_update(
             }
             if ( xsm_needed != xsm_checked )
             {
-                rc = xsm_mmu_update(XSM_TARGET, d, pt_owner, pg_owner, xsm_needed);
+                rc = xsm_mmu_update(d, pt_owner, pg_owner, xsm_needed);
                 if ( rc )
                     break;
                 xsm_checked = xsm_needed;
@@ -4142,7 +4142,7 @@ long do_mmu_update(
             xsm_needed |= XSM_MMU_MACHPHYS_UPDATE;
             if ( xsm_needed != xsm_checked )
             {
-                rc = xsm_mmu_update(XSM_TARGET, d, NULL, pg_owner, xsm_needed);
+                rc = xsm_mmu_update(d, NULL, pg_owner, xsm_needed);
                 if ( rc )
                     break;
                 xsm_checked = xsm_needed;
@@ -4391,7 +4391,7 @@ static int __do_update_va_mapping(
 
     perfc_incr(calls_to_update_va);
 
-    rc = xsm_update_va_mapping(XSM_TARGET, d, pg_owner, val);
+    rc = xsm_update_va_mapping(d, pg_owner, val);
     if ( rc )
         return rc;
 
@@ -4630,7 +4630,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_domain_memory_map(XSM_TARGET, d);
+        rc = xsm_domain_memory_map(d);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -4697,7 +4697,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned int i;
         bool store;
 
-        rc = xsm_machine_memory_map(XSM_PRIV);
+        rc = xsm_machine_memory_map();
         if ( rc )
             return rc;
 
@@ -4787,9 +4787,9 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ESRCH;
 
         if ( cmd == XENMEM_set_pod_target )
-            rc = xsm_set_pod_target(XSM_PRIV, d);
+            rc = xsm_set_pod_target(d);
         else
-            rc = xsm_get_pod_target(XSM_PRIV, d);
+            rc = xsm_get_pod_target(d);
 
         if ( rc != 0 )
             goto pod_target_out_unlock;
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 01281f786e..073edb9499 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -452,7 +452,7 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_mem_paging(XSM_DM_PRIV, d);
+    rc = xsm_mem_paging(d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 98b14f7b0a..db5f5ce8b5 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1883,7 +1883,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_mem_sharing(XSM_DM_PRIV, d);
+    rc = xsm_mem_sharing(d);
     if ( rc )
         goto out;
 
@@ -1928,7 +1928,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         if ( rc )
             goto out;
 
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
+        rc = xsm_mem_sharing_op(d, cd, mso.op);
         if ( rc )
         {
             rcu_unlock_domain(cd);
@@ -1994,7 +1994,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         if ( rc )
             goto out;
 
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
+        rc = xsm_mem_sharing_op(d, cd, mso.op);
         if ( rc )
         {
             rcu_unlock_domain(cd);
@@ -2056,8 +2056,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
          * We reuse XENMEM_sharing_op_share XSM check here as this is
          * essentially the same concept repeated over multiple pages.
          */
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd,
-                                XENMEM_sharing_op_share);
+        rc = xsm_mem_sharing_op(d, cd, XENMEM_sharing_op_share);
         if ( rc )
         {
             rcu_unlock_domain(cd);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index dbb1cbeb59..5cee9f5a1d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2637,7 +2637,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
             goto out;
     }
 
-    rc = xsm_map_gmfn_foreign(XSM_TARGET, tdom, fdom);
+    rc = xsm_map_gmfn_foreign(tdom, fdom);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index c304c24526..86a1ec5b80 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -714,7 +714,7 @@ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
         return -EBUSY;
     }
 
-    rc = xsm_shadow_control(XSM_HOOK, d, sc->op);
+    rc = xsm_shadow_control(d, sc->op);
     if ( rc )
         return rc;
 
@@ -771,7 +771,7 @@ long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_domctl(XSM_OTHER, d, op.cmd);
+    ret = xsm_domctl(d, op.cmd);
     if ( !ret )
     {
         if ( domctl_lock_acquire() )
diff --git a/xen/arch/x86/mm/shadow/set.c b/xen/arch/x86/mm/shadow/set.c
index 87e9c6eeb2..e7433ac81b 100644
--- a/xen/arch/x86/mm/shadow/set.c
+++ b/xen/arch/x86/mm/shadow/set.c
@@ -117,7 +117,7 @@ shadow_get_page_from_l1e(shadow_l1e_t sl1e, struct domain *d, p2m_type_t type)
      * not own, we let it succeed anyway.
      */
     if ( owner && (d != owner) &&
-         !(res = xsm_priv_mapping(XSM_TARGET, d, owner)) )
+         !(res = xsm_priv_mapping(d, owner)) )
     {
         res = get_page_from_l1e(sl1e, d, owner);
         SHADOW_PRINTK("privileged %pd installs map of %pd's mfn %"PRI_mfn": %s\n",
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 5febc0ea4b..f821d145e5 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1310,8 +1310,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
     if ( !use_msi )
         return -EOPNOTSUPP;
 
-    ret = xsm_resource_setup_pci(XSM_PRIV,
-                                (pdev->seg << 16) | (pdev->bus << 8) |
+    ret = xsm_resource_setup_pci((pdev->seg << 16) | (pdev->bus << 8) |
                                 pdev->devfn);
     if ( ret )
         return ret;
diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
index a9decd4f33..d5886c59e6 100644
--- a/xen/arch/x86/pci.c
+++ b/xen/arch/x86/pci.c
@@ -74,7 +74,7 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
                              uint32_t *data)
 {
     struct pci_dev *pdev;
-    int rc = xsm_pci_config_permission(XSM_HOOK, current->domain, bdf,
+    int rc = xsm_pci_config_permission(current->domain, bdf,
                                        reg, reg + size - 1, 1);
 
     if ( rc < 0 )
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 23465bcd00..3f2a2035c5 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -110,7 +110,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_map_domain_pirq(XSM_DM_PRIV, d);
+    ret = xsm_map_domain_pirq(d);
     if ( ret )
         goto free_domain;
 
@@ -148,7 +148,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
         return -ESRCH;
 
     if ( domid != DOMID_SELF || !is_hvm_domain(d) || !has_pirq(d) )
-        ret = xsm_unmap_domain_pirq(XSM_DM_PRIV, d);
+        ret = xsm_unmap_domain_pirq(d);
     if ( ret )
         goto free_domain;
 
@@ -355,7 +355,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(currd, cmd);
         if ( ret )
             break;
         ret = ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.value);
@@ -369,7 +369,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(currd, cmd);
         if ( ret )
             break;
         ret = ioapic_guest_write(apic.apic_physbase, apic.reg, apic.value);
@@ -385,7 +385,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         /* Use the APIC check since this dummy hypercall should still only
          * be called by the domain with access to program the ioapic */
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(currd, cmd);
         if ( ret )
             break;
 
@@ -535,8 +535,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
         else
-            ret = xsm_resource_setup_pci(XSM_PRIV,
-                                         (dev.seg << 16) | (dev.bus << 8) |
+            ret = xsm_resource_setup_pci((dev.seg << 16) | (dev.bus << 8) |
                                          dev.devfn) ?:
                   pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
                                    cmd != PHYSDEVOP_prepare_msix);
@@ -546,7 +545,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved: {
         struct physdev_pci_mmcfg_reserved info;
 
-        ret = xsm_resource_setup_misc(XSM_PRIV);
+        ret = xsm_resource_setup_misc();
         if ( ret )
             break;
 
@@ -611,7 +610,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( setup_gsi.gsi < 0 || setup_gsi.gsi >= nr_irqs_gsi )
             break;
 
-        ret = xsm_resource_setup_gsi(XSM_PRIV, setup_gsi.gsi);
+        ret = xsm_resource_setup_gsi(setup_gsi.gsi);
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 23fadbc782..afa97f74fd 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -196,7 +196,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     if ( op->interface_version != XENPF_INTERFACE_VERSION )
         return -EACCES;
 
-    ret = xsm_platform_op(XSM_PRIV, op->cmd);
+    ret = xsm_platform_op(op->cmd);
     if ( ret )
         return ret;
 
@@ -614,7 +614,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     {
         int cpu = op->u.cpu_ol.cpuid;
 
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core();
         if ( ret )
             break;
 
@@ -640,7 +640,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     {
         int cpu = op->u.cpu_ol.cpuid;
 
-        ret = xsm_resource_unplug_core(XSM_HOOK);
+        ret = xsm_resource_unplug_core();
         if ( ret )
             break;
 
@@ -669,7 +669,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     break;
 
     case XENPF_cpu_hotadd:
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core();
         if ( ret )
             break;
 
@@ -679,7 +679,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     break;
 
     case XENPF_mem_hotadd:
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core();
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 8889509d2a..d833b8e2e6 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -250,7 +250,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
     }
 
     return !write ?
-           xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
+           xsm_pci_config_permission(currd, machine_bdf,
                                      start, start + size - 1, 0) == 0 :
            pci_conf_write_intercept(0, machine_bdf, start, size, write) >= 0;
 }
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index aff52a13f3..975672360b 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -190,8 +190,8 @@ long arch_do_sysctl(
         }
 
         if ( !ret )
-            ret = plug ? xsm_resource_plug_core(XSM_HOOK)
-                       : xsm_resource_unplug_core(XSM_HOOK);
+            ret = plug ? xsm_resource_plug_core()
+                       : xsm_resource_unplug_core();
 
         if ( !ret )
             ret = continue_hypercall_on_cpu(0, fn, hcpu);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6b71c6d6a9..392865f0f1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -311,7 +311,7 @@ static int late_hwdom_init(struct domain *d)
     if ( d != hardware_domain || d->domain_id == 0 )
         return 0;
 
-    rv = xsm_init_hardware_domain(XSM_HOOK, d);
+    rv = xsm_init_hardware_domain(d);
     if ( rv )
         return rv;
 
@@ -649,7 +649,7 @@ struct domain *domain_create(domid_t domid,
         if ( !d->iomem_caps || !d->irq_caps )
             goto fail;
 
-        if ( (err = xsm_domain_create(XSM_HOOK, d, config->ssidref)) != 0 )
+        if ( (err = xsm_domain_create(d, config->ssidref)) != 0 )
             goto fail;
 
         d->controller_pause_count = 1;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ef202c2b8c..de258ab7f7 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             return -ESRCH;
     }
 
-    ret = xsm_domctl(XSM_OTHER, d, op->cmd);
+    ret = xsm_domctl(d, op->cmd);
     if ( ret )
         goto domctl_out_unlock_domonly;
 
@@ -553,7 +553,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( d == NULL )
             goto getdomaininfo_out;
 
-        ret = xsm_getdomaininfo(XSM_HOOK, d);
+        ret = xsm_getdomaininfo(d);
         if ( ret )
             goto getdomaininfo_out;
 
@@ -688,7 +688,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
         irq = pirq_access_permitted(current->domain, pirq);
-        if ( !irq || xsm_irq_permission(XSM_HOOK, d, irq, allow) )
+        if ( !irq || xsm_irq_permission(d, irq, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = irq_permit_access(d, irq);
@@ -709,7 +709,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         if ( !iomem_access_permitted(current->domain,
                                      mfn, mfn + nr_mfns - 1) ||
-             xsm_iomem_permission(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, allow) )
+             xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
@@ -746,7 +746,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
              !iomem_access_permitted(d, mfn, mfn_end) )
             break;
 
-        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn_end, add);
+        ret = xsm_iomem_mapping(d, mfn, mfn_end, add);
         if ( ret )
             break;
 
@@ -804,7 +804,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         ret = -EOPNOTSUPP;
         if ( is_hvm_domain(e) )
-            ret = xsm_set_target(XSM_HOOK, d, e);
+            ret = xsm_set_target(d, e);
         if ( ret )
         {
             put_domain(e);
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 5479315aae..21e7b496ef 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -296,7 +296,7 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
         ERROR_EXIT_DOM(port, d);
     chn = evtchn_from_port(d, port);
 
-    rc = xsm_evtchn_unbound(XSM_TARGET, d, chn, alloc->remote_dom);
+    rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
     if ( rc )
         goto out;
 
@@ -372,7 +372,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
          (rchn->u.unbound.remote_domid != ld->domain_id) )
         ERROR_EXIT_DOM(-EINVAL, rd);
 
-    rc = xsm_evtchn_interdomain(XSM_HOOK, ld, lchn, rd, rchn);
+    rc = xsm_evtchn_interdomain(ld, lchn, rd, rchn);
     if ( rc )
         goto out;
 
@@ -760,7 +760,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
         goto out;
     }
 
-    ret = xsm_evtchn_send(XSM_HOOK, ld, lchn);
+    ret = xsm_evtchn_send(ld, lchn);
     if ( ret )
         goto out;
 
@@ -985,7 +985,7 @@ int evtchn_status(evtchn_status_t *status)
         goto out;
     }
 
-    rc = xsm_evtchn_status(XSM_TARGET, d, chn);
+    rc = xsm_evtchn_status(d, chn);
     if ( rc )
         goto out;
 
@@ -1310,7 +1310,7 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_evtchn_reset(XSM_TARGET, current->domain, d);
+        rc = xsm_evtchn_reset(current->domain, d);
         if ( !rc )
             rc = evtchn_reset(d, cmd == EVTCHNOP_reset_cont);
 
@@ -1371,7 +1371,7 @@ int alloc_unbound_xen_event_channel(
         goto out;
     chn = evtchn_from_port(ld, port);
 
-    rc = xsm_evtchn_unbound(XSM_TARGET, ld, chn, remote_domid);
+    rc = xsm_evtchn_unbound(ld, chn, remote_domid);
     if ( rc )
         goto out;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cf..df516f6340 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1063,7 +1063,7 @@ map_grant_ref(
         return;
     }
 
-    rc = xsm_grant_mapref(XSM_HOOK, ld, rd, op->flags);
+    rc = xsm_grant_mapref(ld, rd, op->flags);
     if ( rc )
     {
         rcu_unlock_domain(rd);
@@ -1403,7 +1403,7 @@ unmap_common(
         return;
     }
 
-    rc = xsm_grant_unmapref(XSM_HOOK, ld, rd);
+    rc = xsm_grant_unmapref(ld, rd);
     if ( rc )
     {
         rcu_unlock_domain(rd);
@@ -2021,7 +2021,7 @@ gnttab_setup_table(
         goto out;
     }
 
-    if ( xsm_grant_setup(XSM_TARGET, curr->domain, d) )
+    if ( xsm_grant_setup(curr->domain, d) )
     {
         op.status = GNTST_permission_denied;
         goto out;
@@ -2103,7 +2103,7 @@ gnttab_query_size(
         goto out;
     }
 
-    if ( xsm_grant_query_size(XSM_TARGET, current->domain, d) )
+    if ( xsm_grant_query_size(current->domain, d) )
     {
         op.status = GNTST_permission_denied;
         goto out;
@@ -2274,7 +2274,7 @@ gnttab_transfer(
             goto put_gfn_and_copyback;
         }
 
-        if ( xsm_grant_transfer(XSM_HOOK, d, e) )
+        if ( xsm_grant_transfer(d, e) )
         {
             gop.status = GNTST_permission_denied;
         unlock_and_copyback:
@@ -2812,7 +2812,7 @@ static int gnttab_copy_lock_domains(const struct gnttab_copy *op,
     if ( rc < 0 )
         goto error;
 
-    rc = xsm_grant_copy(XSM_HOOK, src->domain, dest->domain);
+    rc = xsm_grant_copy(src->domain, dest->domain);
     if ( rc < 0 )
     {
         rc = GNTST_permission_denied;
@@ -3231,7 +3231,7 @@ gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
         op.status = GNTST_bad_domain;
         goto out1;
     }
-    rc = xsm_grant_setup(XSM_TARGET, current->domain, d);
+    rc = xsm_grant_setup(current->domain, d);
     if ( rc )
     {
         op.status = GNTST_permission_denied;
@@ -3295,7 +3295,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
     if ( d == NULL )
         return -ESRCH;
 
-    rc = xsm_grant_query_size(XSM_TARGET, current->domain, d);
+    rc = xsm_grant_query_size(current->domain, d);
     if ( rc )
     {
         rcu_unlock_domain(d);
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index e71f7df479..052f3d472a 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -679,7 +679,7 @@ long do_hypfs_op(unsigned int cmd,
     struct hypfs_entry *entry;
     static char path[XEN_HYPFS_MAX_PATHLEN];
 
-    if ( xsm_hypfs_op(XSM_PRIV) )
+    if ( xsm_hypfs_op() )
         return -EPERM;
 
     if ( cmd == XEN_HYPFS_OP_get_version )
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index d77756a81e..89e01e908c 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -459,7 +459,7 @@ __initcall(param_init);
 
 DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
+    bool_t deny = !!xsm_xen_version(cmd);
 
     switch ( cmd )
     {
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index ebeee6405a..a0d2858cd8 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -1219,7 +1219,7 @@ static int do_kexec_op_internal(unsigned long op,
 {
     int ret = -EINVAL;
 
-    ret = xsm_kexec(XSM_PRIV);
+    ret = xsm_kexec();
     if ( ret )
         return ret;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 010e6f8dbf..2066510d3b 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -47,7 +47,7 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_mem_access(XSM_DM_PRIV, d);
+    rc = xsm_mem_access(d);
     if ( rc )
         goto out;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 72a6b70cb5..d2621bbb47 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -609,7 +609,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         goto fail_early;
     }
 
-    rc = xsm_memory_exchange(XSM_TARGET, d);
+    rc = xsm_memory_exchange(d);
     if ( rc )
     {
         rcu_unlock_domain(d);
@@ -1072,7 +1072,7 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
          (!is_hardware_domain(d) || (d != current->domain)) )
         return -EACCES;
 
-    return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
+    return xsm_add_to_physmap(current->domain, d);
 }
 
 unsigned int ioreq_server_max_frames(const struct domain *d)
@@ -1232,7 +1232,7 @@ static int acquire_resource(
     if ( rc )
         return rc;
 
-    rc = xsm_domain_resource_map(XSM_DM_PRIV, d);
+    rc = xsm_domain_resource_map(d);
     if ( rc )
         goto out;
 
@@ -1388,7 +1388,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              && (reservation.mem_flags & XENMEMF_populate_on_demand) )
             args.memflags |= MEMF_populate_on_demand;
 
-        if ( xsm_memory_adjust_reservation(XSM_TARGET, curr_d, d) )
+        if ( xsm_memory_adjust_reservation(curr_d, d) )
         {
             rcu_unlock_domain(d);
             return start_extent;
@@ -1462,7 +1462,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_memory_stat_reservation(XSM_TARGET, curr_d, d);
+        rc = xsm_memory_stat_reservation(curr_d, d);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -1584,7 +1584,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ESRCH;
 
         rc = paging_mode_translate(d)
-             ? xsm_remove_from_physmap(XSM_TARGET, curr_d, d)
+             ? xsm_remove_from_physmap(curr_d, d)
              : -EACCES;
         if ( rc )
         {
@@ -1631,7 +1631,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -EINVAL;
 
-        rc = xsm_claim_pages(XSM_PRIV, d);
+        rc = xsm_claim_pages(d);
 
         if ( !rc )
             rc = domain_set_outstanding_pages(d, reservation.nr_extents);
@@ -1662,7 +1662,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
             return -ESRCH;
 
-        rc = xsm_get_vnumainfo(XSM_TARGET, d);
+        rc = xsm_get_vnumainfo(d);
         if ( rc )
         {
             rcu_unlock_domain(d);
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index d5c9ff1cbf..ff17bad733 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -36,7 +36,7 @@ int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *mop)
     if ( unlikely(current->domain == d) ) /* no domain_pause() */
         return -EPERM;
 
-    rc = xsm_vm_event_control(XSM_PRIV, d, mop->op, mop->event);
+    rc = xsm_vm_event_control(d, mop->op, mop->event);
     if ( unlikely(rc) )
         return rc;
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 6d34764d38..e5c154fe9d 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1944,7 +1944,7 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             break;
 
-        ret = xsm_schedop_shutdown(XSM_DM_PRIV, current->domain, d);
+        ret = xsm_schedop_shutdown(current->domain, d);
         if ( likely(!ret) )
             domain_shutdown(d, sched_remote_shutdown.reason);
 
@@ -2046,7 +2046,7 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
 {
     long ret;
 
-    ret = xsm_domctl_scheduler_op(XSM_HOOK, d, op->cmd);
+    ret = xsm_domctl_scheduler_op(d, op->cmd);
     if ( ret )
         return ret;
 
@@ -2081,7 +2081,7 @@ long sched_adjust_global(struct xen_sysctl_scheduler_op *op)
     struct cpupool *pool;
     int rc;
 
-    rc = xsm_sysctl_scheduler_op(XSM_HOOK, op->cmd);
+    rc = xsm_sysctl_scheduler_op(op->cmd);
     if ( rc )
         return rc;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 3558641cd9..4e25c0e499 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -41,7 +41,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
     if ( op->interface_version != XEN_SYSCTL_INTERFACE_VERSION )
         return -EACCES;
 
-    ret = xsm_sysctl(XSM_PRIV, op->cmd);
+    ret = xsm_sysctl(op->cmd);
     if ( ret )
         return ret;
 
@@ -58,7 +58,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
     switch ( op->cmd )
     {
     case XEN_SYSCTL_readconsole:
-        ret = xsm_readconsole(XSM_HOOK, op->u.readconsole.clear);
+        ret = xsm_readconsole(op->u.readconsole.clear);
         if ( ret )
             break;
 
@@ -88,7 +88,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             if ( num_domains == op->u.getdomaininfolist.max_domains )
                 break;
 
-            ret = xsm_getdomaininfo(XSM_HOOK, d);
+            ret = xsm_getdomaininfo(d);
             if ( ret )
                 continue;
 
@@ -191,7 +191,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
         if ( op->u.page_offline.end < op->u.page_offline.start )
             break;
 
-        ret = xsm_page_offline(XSM_HOOK, op->u.page_offline.cmd);
+        ret = xsm_page_offline(op->u.page_offline.cmd);
         if ( ret )
             break;
 
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 70ab3ba406..307f99fcf0 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -584,7 +584,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         return 0;
     }
 
-    rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
+    rc = xsm_vm_event_control(d, vec->mode, vec->op);
     if ( rc )
         return rc;
 
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 1926a92fe4..76d8b1f807 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -737,7 +737,7 @@ ret_t do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         return -EPERM;
     }
 
-    ret = xsm_profile(XSM_HOOK, current->domain, op);
+    ret = xsm_profile(current->domain, op);
     if ( ret )
         return ret;
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 7d0a603d03..b5d62ea4ee 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -681,7 +681,7 @@ long do_console_io(unsigned int cmd, unsigned int count,
     long rc;
     unsigned int idx, len;
 
-    rc = xsm_console_io(XSM_OTHER, current->domain, cmd);
+    rc = xsm_console_io(current->domain, cmd);
     if ( rc )
         return rc;
 
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 999b831d90..67b03fd2a9 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -230,7 +230,7 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( ret )
             break;
 
-        ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+        ret = xsm_assign_dtdevice(d, dt_node_full_name(dev));
         if ( ret )
             break;
 
@@ -284,7 +284,7 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( ret )
             break;
 
-        ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+        ret = xsm_deassign_dtdevice(d, dt_node_full_name(dev));
 
         if ( d == dom_io )
             return -EINVAL;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 199ce08612..1363ef8121 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -704,7 +704,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
     else
         pdev_type = "device";
 
-    ret = xsm_resource_plug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
+    ret = xsm_resource_plug_pci((seg << 16) | (bus << 8) | devfn);
     if ( ret )
         return ret;
 
@@ -814,7 +814,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     struct pci_dev *pdev;
     int ret;
 
-    ret = xsm_resource_unplug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
+    ret = xsm_resource_unplug_pci((seg << 16) | (bus << 8) | devfn);
     if ( ret )
         return ret;
 
@@ -1477,7 +1477,7 @@ static int iommu_get_device_group(
              ((pdev->bus == bus) && (pdev->devfn == devfn)) )
             continue;
 
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+        if ( xsm_get_device_group((seg << 16) | (pdev->bus << 8) | pdev->devfn) )
             continue;
 
         sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
@@ -1545,7 +1545,7 @@ int iommu_do_pci_domctl(
         u32 max_sdevs;
         XEN_GUEST_HANDLE_64(uint32) sdevs;
 
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        ret = xsm_get_device_group(domctl->u.get_device_group.machine_sbdf);
         if ( ret )
             break;
 
@@ -1596,7 +1596,7 @@ int iommu_do_pci_domctl(
 
         machine_sbdf = domctl->u.assign_device.u.pci.machine_sbdf;
 
-        ret = xsm_assign_device(XSM_HOOK, d, machine_sbdf);
+        ret = xsm_assign_device(d, machine_sbdf);
         if ( ret )
             break;
 
@@ -1641,7 +1641,7 @@ int iommu_do_pci_domctl(
 
         machine_sbdf = domctl->u.assign_device.u.pci.machine_sbdf;
 
-        ret = xsm_deassign_device(XSM_HOOK, d, machine_sbdf);
+        ret = xsm_deassign_device(d, machine_sbdf);
         if ( ret )
             break;
 
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 69f68300e2..1d25954731 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -30,53 +30,53 @@ static inline void xsm_security_domaininfo (struct domain *d,
     alternative_vcall(xsm_ops.security_domaininfo, d, info);
 }
 
-static inline int xsm_domain_create (xsm_default_t def, struct domain *d, u32 ssidref)
+static inline int xsm_domain_create (struct domain *d, u32 ssidref)
 {
     return alternative_call(xsm_ops.domain_create, d, ssidref);
 }
 
-static inline int xsm_getdomaininfo (xsm_default_t def, struct domain *d)
+static inline int xsm_getdomaininfo (struct domain *d)
 {
     return alternative_call(xsm_ops.getdomaininfo, d);
 }
 
-static inline int xsm_domctl_scheduler_op (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_domctl_scheduler_op (struct domain *d, int cmd)
 {
     return alternative_call(xsm_ops.domctl_scheduler_op, d, cmd);
 }
 
-static inline int xsm_sysctl_scheduler_op (xsm_default_t def, int cmd)
+static inline int xsm_sysctl_scheduler_op (int cmd)
 {
     return alternative_call(xsm_ops.sysctl_scheduler_op, cmd);
 }
 
-static inline int xsm_set_target (xsm_default_t def, struct domain *d, struct domain *e)
+static inline int xsm_set_target (struct domain *d, struct domain *e)
 {
     return alternative_call(xsm_ops.set_target, d, e);
 }
 
-static inline int xsm_domctl (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_domctl (struct domain *d, int cmd)
 {
     return alternative_call(xsm_ops.domctl, d, cmd);
 }
 
-static inline int xsm_sysctl (xsm_default_t def, int cmd)
+static inline int xsm_sysctl (int cmd)
 {
     return alternative_call(xsm_ops.sysctl, cmd);
 }
 
-static inline int xsm_readconsole (xsm_default_t def, uint32_t clear)
+static inline int xsm_readconsole (uint32_t clear)
 {
     return alternative_call(xsm_ops.readconsole, clear);
 }
 
-static inline int xsm_evtchn_unbound (xsm_default_t def, struct domain *d1, struct evtchn *chn,
+static inline int xsm_evtchn_unbound (struct domain *d1, struct evtchn *chn,
                                                                     domid_t id2)
 {
     return alternative_call(xsm_ops.evtchn_unbound, d1, chn, id2);
 }
 
-static inline int xsm_evtchn_interdomain (xsm_default_t def, struct domain *d1,
+static inline int xsm_evtchn_interdomain (struct domain *d1,
                 struct evtchn *chan1, struct domain *d2, struct evtchn *chan2)
 {
     return alternative_call(xsm_ops.evtchn_interdomain, d1, chan1, d2, chan2);
@@ -87,48 +87,48 @@ static inline void xsm_evtchn_close_post (struct evtchn *chn)
     alternative_vcall(xsm_ops.evtchn_close_post, chn);
 }
 
-static inline int xsm_evtchn_send (xsm_default_t def, struct domain *d, struct evtchn *chn)
+static inline int xsm_evtchn_send (struct domain *d, struct evtchn *chn)
 {
     return alternative_call(xsm_ops.evtchn_send, d, chn);
 }
 
-static inline int xsm_evtchn_status (xsm_default_t def, struct domain *d, struct evtchn *chn)
+static inline int xsm_evtchn_status (struct domain *d, struct evtchn *chn)
 {
     return alternative_call(xsm_ops.evtchn_status, d, chn);
 }
 
-static inline int xsm_evtchn_reset (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_evtchn_reset (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.evtchn_reset, d1, d2);
 }
 
-static inline int xsm_grant_mapref (xsm_default_t def, struct domain *d1, struct domain *d2,
+static inline int xsm_grant_mapref (struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
     return alternative_call(xsm_ops.grant_mapref, d1, d2, flags);
 }
 
-static inline int xsm_grant_unmapref (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_unmapref (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.grant_unmapref, d1, d2);
 }
 
-static inline int xsm_grant_setup (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_setup (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.grant_setup, d1, d2);
 }
 
-static inline int xsm_grant_transfer (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_transfer (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.grant_transfer, d1, d2);
 }
 
-static inline int xsm_grant_copy (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_copy (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.grant_copy, d1, d2);
 }
 
-static inline int xsm_grant_query_size (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_query_size (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.grant_query_size, d1, d2);
 }
@@ -160,80 +160,80 @@ static inline char *xsm_show_security_evtchn (struct domain *d, const struct evt
     return alternative_call(xsm_ops.show_security_evtchn, d, chn);
 }
 
-static inline int xsm_init_hardware_domain (xsm_default_t def, struct domain *d)
+static inline int xsm_init_hardware_domain (struct domain *d)
 {
     return alternative_call(xsm_ops.init_hardware_domain, d);
 }
 
-static inline int xsm_get_pod_target (xsm_default_t def, struct domain *d)
+static inline int xsm_get_pod_target (struct domain *d)
 {
     return alternative_call(xsm_ops.get_pod_target, d);
 }
 
-static inline int xsm_set_pod_target (xsm_default_t def, struct domain *d)
+static inline int xsm_set_pod_target (struct domain *d)
 {
     return alternative_call(xsm_ops.set_pod_target, d);
 }
 
-static inline int xsm_memory_exchange (xsm_default_t def, struct domain *d)
+static inline int xsm_memory_exchange (struct domain *d)
 {
     return alternative_call(xsm_ops.memory_exchange, d);
 }
 
-static inline int xsm_memory_adjust_reservation (xsm_default_t def, struct domain *d1, struct
+static inline int xsm_memory_adjust_reservation (struct domain *d1, struct
                                                                     domain *d2)
 {
     return alternative_call(xsm_ops.memory_adjust_reservation, d1, d2);
 }
 
-static inline int xsm_memory_stat_reservation (xsm_default_t def, struct domain *d1,
+static inline int xsm_memory_stat_reservation (struct domain *d1,
                                                             struct domain *d2)
 {
     return alternative_call(xsm_ops.memory_stat_reservation, d1, d2);
 }
 
-static inline int xsm_memory_pin_page(xsm_default_t def, struct domain *d1, struct domain *d2,
+static inline int xsm_memory_pin_page(struct domain *d1, struct domain *d2,
                                       struct page_info *page)
 {
     return alternative_call(xsm_ops.memory_pin_page, d1, d2, page);
 }
 
-static inline int xsm_add_to_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_add_to_physmap(struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.add_to_physmap, d1, d2);
 }
 
-static inline int xsm_remove_from_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_remove_from_physmap(struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.remove_from_physmap, d1, d2);
 }
 
-static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
+static inline int xsm_map_gmfn_foreign (struct domain *d, struct domain *t)
 {
     return alternative_call(xsm_ops.map_gmfn_foreign, d, t);
 }
 
-static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
+static inline int xsm_claim_pages(struct domain *d)
 {
     return alternative_call(xsm_ops.claim_pages, d);
 }
 
-static inline int xsm_console_io (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_console_io (struct domain *d, int cmd)
 {
     return alternative_call(xsm_ops.console_io, d, cmd);
 }
 
-static inline int xsm_profile (xsm_default_t def, struct domain *d, int op)
+static inline int xsm_profile (struct domain *d, int op)
 {
     return alternative_call(xsm_ops.profile, d, op);
 }
 
-static inline int xsm_kexec (xsm_default_t def)
+static inline int xsm_kexec (void)
 {
     return alternative_call(xsm_ops.kexec);
 }
 
-static inline int xsm_schedop_shutdown (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_schedop_shutdown (struct domain *d1, struct domain *d2)
 {
     return alternative_call(xsm_ops.schedop_shutdown, d1, d2);
 }
@@ -243,131 +243,129 @@ static inline char *xsm_show_irq_sid (int irq)
     return alternative_call(xsm_ops.show_irq_sid, irq);
 }
 
-static inline int xsm_map_domain_pirq (xsm_default_t def, struct domain *d)
+static inline int xsm_map_domain_pirq (struct domain *d)
 {
     return alternative_call(xsm_ops.map_domain_pirq, d);
 }
 
-static inline int xsm_map_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
+static inline int xsm_map_domain_irq (struct domain *d, int irq, void *data)
 {
     return alternative_call(xsm_ops.map_domain_irq, d, irq, data);
 }
 
-static inline int xsm_unmap_domain_pirq (xsm_default_t def, struct domain *d)
+static inline int xsm_unmap_domain_pirq (struct domain *d)
 {
     return alternative_call(xsm_ops.unmap_domain_pirq, d);
 }
 
-static inline int xsm_unmap_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
+static inline int xsm_unmap_domain_irq (struct domain *d, int irq, void *data)
 {
     return alternative_call(xsm_ops.unmap_domain_irq, d, irq, data);
 }
 
-static inline int xsm_bind_pt_irq(xsm_default_t def, struct domain *d,
+static inline int xsm_bind_pt_irq(struct domain *d,
                                   struct xen_domctl_bind_pt_irq *bind)
 {
     return alternative_call(xsm_ops.bind_pt_irq, d, bind);
 }
 
-static inline int xsm_unbind_pt_irq(xsm_default_t def, struct domain *d,
+static inline int xsm_unbind_pt_irq(struct domain *d,
                                     struct xen_domctl_bind_pt_irq *bind)
 {
     return alternative_call(xsm_ops.unbind_pt_irq, d, bind);
 }
 
-static inline int xsm_irq_permission (xsm_default_t def, struct domain *d, int pirq, uint8_t allow)
+static inline int xsm_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
     return alternative_call(xsm_ops.irq_permission, d, pirq, allow);
 }
 
-static inline int xsm_iomem_permission (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static inline int xsm_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
     return alternative_call(xsm_ops.iomem_permission, d, s, e, allow);
 }
 
-static inline int xsm_iomem_mapping (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static inline int xsm_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
     return alternative_call(xsm_ops.iomem_mapping, d, s, e, allow);
 }
 
-static inline int xsm_pci_config_permission (xsm_default_t def, struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
+static inline int xsm_pci_config_permission (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
     return alternative_call(xsm_ops.pci_config_permission, d, machine_bdf, start, end, access);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-static inline int xsm_get_device_group(xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_get_device_group(uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.get_device_group, machine_bdf);
 }
 
-static inline int xsm_assign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
+static inline int xsm_assign_device(struct domain *d, uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.assign_device, d, machine_bdf);
 }
 
-static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
+static inline int xsm_deassign_device(struct domain *d, uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.deassign_device, d, machine_bdf);
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-static inline int xsm_assign_dtdevice(xsm_default_t def, struct domain *d,
-                                      const char *dtpath)
+static inline int xsm_assign_dtdevice(struct domain *d, const char *dtpath)
 {
     return alternative_call(xsm_ops.assign_dtdevice, d, dtpath);
 }
 
-static inline int xsm_deassign_dtdevice(xsm_default_t def, struct domain *d,
-                                        const char *dtpath)
+static inline int xsm_deassign_dtdevice(struct domain *d, const char *dtpath)
 {
     return alternative_call(xsm_ops.deassign_dtdevice, d, dtpath);
 }
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
-static inline int xsm_resource_plug_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_plug_pci (uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.resource_plug_pci, machine_bdf);
 }
 
-static inline int xsm_resource_unplug_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_unplug_pci (uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.resource_unplug_pci, machine_bdf);
 }
 
-static inline int xsm_resource_plug_core (xsm_default_t def)
+static inline int xsm_resource_plug_core (void)
 {
     return alternative_call(xsm_ops.resource_plug_core);
 }
 
-static inline int xsm_resource_unplug_core (xsm_default_t def)
+static inline int xsm_resource_unplug_core (void)
 {
     return alternative_call(xsm_ops.resource_unplug_core);
 }
 
-static inline int xsm_resource_setup_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_setup_pci (uint32_t machine_bdf)
 {
     return alternative_call(xsm_ops.resource_setup_pci, machine_bdf);
 }
 
-static inline int xsm_resource_setup_gsi (xsm_default_t def, int gsi)
+static inline int xsm_resource_setup_gsi (int gsi)
 {
     return alternative_call(xsm_ops.resource_setup_gsi, gsi);
 }
 
-static inline int xsm_resource_setup_misc (xsm_default_t def)
+static inline int xsm_resource_setup_misc (void)
 {
     return alternative_call(xsm_ops.resource_setup_misc);
 }
 
-static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
+static inline int xsm_page_offline(uint32_t cmd)
 {
     return alternative_call(xsm_ops.page_offline, cmd);
 }
 
-static inline int xsm_hypfs_op(xsm_default_t def)
+static inline int xsm_hypfs_op(void)
 {
     return alternative_call(xsm_ops.hypfs_op);
 }
@@ -386,149 +384,149 @@ static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 }
 #endif
 
-static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
+static inline int xsm_hvm_param (struct domain *d, unsigned long op)
 {
     return alternative_call(xsm_ops.hvm_param, d, op);
 }
 
-static inline int xsm_hvm_control(xsm_default_t def, struct domain *d, unsigned long op)
+static inline int xsm_hvm_control(struct domain *d, unsigned long op)
 {
     return alternative_call(xsm_ops.hvm_control, d, op);
 }
 
-static inline int xsm_hvm_param_altp2mhvm (xsm_default_t def, struct domain *d)
+static inline int xsm_hvm_param_altp2mhvm (struct domain *d)
 {
     return alternative_call(xsm_ops.hvm_param_altp2mhvm, d);
 }
 
-static inline int xsm_hvm_altp2mhvm_op (xsm_default_t def, struct domain *d, uint64_t mode, uint32_t op)
+static inline int xsm_hvm_altp2mhvm_op (struct domain *d, uint64_t mode, uint32_t op)
 {
     return alternative_call(xsm_ops.hvm_altp2mhvm_op, d, mode, op);
 }
 
-static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
+static inline int xsm_get_vnumainfo (struct domain *d)
 {
     return alternative_call(xsm_ops.get_vnumainfo, d);
 }
 
-static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+static inline int xsm_vm_event_control (struct domain *d, int mode, int op)
 {
     return alternative_call(xsm_ops.vm_event_control, d, mode, op);
 }
 
 #ifdef CONFIG_MEM_ACCESS
-static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_access (struct domain *d)
 {
     return alternative_call(xsm_ops.mem_access, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_PAGING
-static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_paging (struct domain *d)
 {
     return alternative_call(xsm_ops.mem_paging, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_SHARING
-static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_sharing (struct domain *d)
 {
     return alternative_call(xsm_ops.mem_sharing, d);
 }
 #endif
 
-static inline int xsm_platform_op (xsm_default_t def, uint32_t op)
+static inline int xsm_platform_op (uint32_t op)
 {
     return alternative_call(xsm_ops.platform_op, op);
 }
 
 #ifdef CONFIG_X86
-static inline int xsm_do_mca(xsm_default_t def)
+static inline int xsm_do_mca(void)
 {
     return alternative_call(xsm_ops.do_mca);
 }
 
-static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint32_t op)
+static inline int xsm_shadow_control (struct domain *d, uint32_t op)
 {
     return alternative_call(xsm_ops.shadow_control, d, op);
 }
 
-static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
+static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
 {
     return alternative_call(xsm_ops.mem_sharing_op, d, cd, op);
 }
 
-static inline int xsm_apic (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_apic (struct domain *d, int cmd)
 {
     return alternative_call(xsm_ops.apic, d, cmd);
 }
 
-static inline int xsm_memtype (xsm_default_t def, uint32_t access)
+static inline int xsm_memtype (uint32_t access)
 {
     return alternative_call(xsm_ops.memtype, access);
 }
 
-static inline int xsm_machine_memory_map(xsm_default_t def)
+static inline int xsm_machine_memory_map(void)
 {
     return alternative_call(xsm_ops.machine_memory_map);
 }
 
-static inline int xsm_domain_memory_map(xsm_default_t def, struct domain *d)
+static inline int xsm_domain_memory_map(struct domain *d)
 {
     return alternative_call(xsm_ops.domain_memory_map, d);
 }
 
-static inline int xsm_mmu_update (xsm_default_t def, struct domain *d, struct domain *t,
+static inline int xsm_mmu_update (struct domain *d, struct domain *t,
                                   struct domain *f, uint32_t flags)
 {
     return alternative_call(xsm_ops.mmu_update, d, t, f, flags);
 }
 
-static inline int xsm_mmuext_op (xsm_default_t def, struct domain *d, struct domain *f)
+static inline int xsm_mmuext_op (struct domain *d, struct domain *f)
 {
     return alternative_call(xsm_ops.mmuext_op, d, f);
 }
 
-static inline int xsm_update_va_mapping(xsm_default_t def, struct domain *d, struct domain *f,
+static inline int xsm_update_va_mapping(struct domain *d, struct domain *f,
                                                             l1_pgentry_t pte)
 {
     /* pte is a struct passed by value, which alternative_call does not support. */
     return xsm_ops.update_va_mapping(d, f, pte);
 }
 
-static inline int xsm_priv_mapping(xsm_default_t def, struct domain *d, struct domain *t)
+static inline int xsm_priv_mapping(struct domain *d, struct domain *t)
 {
     return alternative_call(xsm_ops.priv_mapping, d, t);
 }
 
-static inline int xsm_ioport_permission (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static inline int xsm_ioport_permission (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
     return alternative_call(xsm_ops.ioport_permission, d, s, e, allow);
 }
 
-static inline int xsm_ioport_mapping (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
     return alternative_call(xsm_ops.ioport_mapping, d, s, e, allow);
 }
 
-static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int op)
+static inline int xsm_pmu_op (struct domain *d, unsigned int op)
 {
     return alternative_call(xsm_ops.pmu_op, d, op);
 }
 
 #endif /* CONFIG_X86 */
 
-static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
+static inline int xsm_dm_op(struct domain *d)
 {
     return alternative_call(xsm_ops.dm_op, d);
 }
 
-static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
+static inline int xsm_xen_version (uint32_t op)
 {
     return alternative_call(xsm_ops.xen_version, op);
 }
 
-static inline int xsm_domain_resource_map(xsm_default_t def, struct domain *d)
+static inline int xsm_domain_resource_map(struct domain *d)
 {
     return alternative_call(xsm_ops.domain_resource_map, d);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:36:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144258.265575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Yi-00086d-50; Thu, 17 Jun 2021 23:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144258.265575; Thu, 17 Jun 2021 23:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1Yi-00086W-1Y; Thu, 17 Jun 2021 23:36:28 +0000
Received: by outflank-mailman (input) for mailman id 144258;
 Thu, 17 Jun 2021 23:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1Yg-00086O-PB
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:36:26 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e289d77-0837-4882-be88-3404faa77fa4;
 Thu, 17 Jun 2021 23:36:24 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623972584427795.5441509630786;
 Thu, 17 Jun 2021 16:29:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e289d77-0837-4882-be88-3404faa77fa4
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 5/6] xsm: expanding function related macros in dummy.h
Date: Thu, 17 Jun 2021 19:39:17 -0400
Message-Id: <20210617233918.10095-6-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Now that dummy.h is no longer included in two different ways, the function
declaration macros are unnecessary. This commit expands each macro to the only
value it will ever take. The expansion changes function declaration lengths,
and since some definitions did not follow the 80-column wrapping style anyway,
all function definitions were realigned with the predominant style used in the
hypervisor code.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/xsm/dummy.h | 274 +++++++++++++++++++++++++++---------------------
 1 file changed, 153 insertions(+), 121 deletions(-)

diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
index 7e2bb09dac..0f8ea163af 100644
--- a/xen/xsm/dummy.h
+++ b/xen/xsm/dummy.h
@@ -9,7 +9,7 @@
  *
  *
  *  Each XSM hook implementing an access check should have its first parameter
- *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
+ *  declared normally (or use XSM_DEFAULT_VOID if it has no
 *  arguments). The first non-declaration statement should be XSM_ASSERT_ACTION
  *  with the expected type of the hook, which will either define or check the
  *  value of action.
@@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
  * xsm_default_t argument available, so the value from the assertion is used to
  * initialize the variable.
  */
-#define XSM_INLINE __maybe_unused
-
-#define XSM_DEFAULT_ARG /* */
 #define XSM_DEFAULT_VOID void
 #define XSM_ASSERT_ACTION(def) xsm_default_t action = def; (void)action
 
-static always_inline int xsm_default_action(
-    xsm_default_t action, struct domain *src, struct domain *target)
+static always_inline int xsm_default_action(xsm_default_t action,
+                                            struct domain *src,
+                                            struct domain *target)
 {
     switch ( action ) {
     case XSM_HOOK:
@@ -82,43 +80,43 @@ static always_inline int xsm_default_action(
     }
 }
 
-static XSM_INLINE void dummy_security_domaininfo(struct domain *d,
+static __maybe_unused void dummy_security_domaininfo(struct domain *d,
                                     struct xen_domctl_getdomaininfo *info)
 {
     return;
 }
 
-static XSM_INLINE int dummy_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref)
+static __maybe_unused int dummy_domain_create(struct domain *d, u32 ssidref)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_getdomaininfo(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_getdomaininfo(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_domctl_scheduler_op(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static __maybe_unused int dummy_domctl_scheduler_op(struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_sysctl_scheduler_op(XSM_DEFAULT_ARG int cmd)
+static __maybe_unused int dummy_sysctl_scheduler_op(int cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_set_target(XSM_DEFAULT_ARG struct domain *d, struct domain *e)
+static __maybe_unused int dummy_set_target(struct domain *d, struct domain *e)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static __maybe_unused int dummy_domctl(struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( cmd )
@@ -135,85 +133,91 @@ static XSM_INLINE int dummy_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
     }
 }
 
-static XSM_INLINE int dummy_sysctl(XSM_DEFAULT_ARG int cmd)
+static __maybe_unused int dummy_sysctl(int cmd)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_readconsole(XSM_DEFAULT_ARG uint32_t clear)
+static __maybe_unused int dummy_readconsole(uint32_t clear)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_alloc_security_domain(struct domain *d)
+static __maybe_unused int dummy_alloc_security_domain(struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE void dummy_free_security_domain(struct domain *d)
+static __maybe_unused void dummy_free_security_domain(struct domain *d)
 {
     return;
 }
 
-static XSM_INLINE int dummy_grant_mapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
-                                                                uint32_t flags)
+static __maybe_unused int dummy_grant_mapref(struct domain *d1,
+                                             struct domain *d2,
+                                             uint32_t flags)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_grant_unmapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_grant_unmapref(struct domain *d1,
+                                               struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_grant_setup(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_grant_setup(struct domain *d1,
+                                            struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_grant_transfer(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_grant_transfer(struct domain *d1,
+                                               struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_grant_copy(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_grant_copy(struct domain *d1, struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_grant_query_size(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_grant_query_size(struct domain *d1,
+                                                 struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_memory_exchange(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_memory_exchange(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_memory_adjust_reservation(XSM_DEFAULT_ARG struct domain *d1,
-                                                            struct domain *d2)
+static __maybe_unused int dummy_memory_adjust_reservation(struct domain *d1,
+                                                          struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_memory_stat_reservation(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_memory_stat_reservation(struct domain *d1,
+                                                        struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static __maybe_unused int dummy_console_io(struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     if ( d->is_console )
@@ -225,129 +229,140 @@ static XSM_INLINE int dummy_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd
     return xsm_default_action(XSM_PRIV, d, NULL);
 }
 
-static XSM_INLINE int dummy_profile(XSM_DEFAULT_ARG struct domain *d, int op)
+static __maybe_unused int dummy_profile(struct domain *d, int op)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int dummy_kexec(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_kexec(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_schedop_shutdown(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_schedop_shutdown(struct domain *d1,
+                                                 struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_memory_pin_page(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
-                                          struct page_info *page)
+static __maybe_unused int dummy_memory_pin_page(struct domain *d1,
+                                                struct domain *d2,
+                                                struct page_info *page)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_claim_pages(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_claim_pages(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_evtchn_unbound(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn,
-                                         domid_t id2)
+static __maybe_unused int dummy_evtchn_unbound(struct domain *d,
+                                               struct evtchn *chn,
+                                               domid_t id2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_evtchn_interdomain(XSM_DEFAULT_ARG struct domain *d1, struct evtchn
-                                *chan1, struct domain *d2, struct evtchn *chan2)
+static __maybe_unused int dummy_evtchn_interdomain(struct domain *d1,
+                                                   struct evtchn *chan1,
+                                                   struct domain *d2,
+                                                   struct evtchn *chan2)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE void dummy_evtchn_close_post(struct evtchn *chn)
+static __maybe_unused void dummy_evtchn_close_post(struct evtchn *chn)
 {
     return;
 }
 
-static XSM_INLINE int dummy_evtchn_send(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
+static __maybe_unused int dummy_evtchn_send(struct domain *d,
+                                            struct evtchn *chn)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int dummy_evtchn_status(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
+static __maybe_unused int dummy_evtchn_status(struct domain *d,
+                                              struct evtchn *chn)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_evtchn_reset(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_evtchn_reset(struct domain *d1,
+                                             struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_alloc_security_evtchns(
-    struct evtchn chn[], unsigned int nr)
+static __maybe_unused int dummy_alloc_security_evtchns(struct evtchn chn[],
+                                                       unsigned int nr)
 {
     return 0;
 }
 
-static XSM_INLINE void dummy_free_security_evtchns(
-    struct evtchn chn[], unsigned int nr)
+static __maybe_unused void dummy_free_security_evtchns(struct evtchn chn[],
+                                                       unsigned int nr)
 {
     return;
 }
 
-static XSM_INLINE char *dummy_show_security_evtchn(struct domain *d, const struct evtchn *chn)
+static __maybe_unused char *dummy_show_security_evtchn(struct domain *d,
+                                                       const struct evtchn *chn)
 {
     return NULL;
 }
 
-static XSM_INLINE int dummy_init_hardware_domain(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_init_hardware_domain(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_get_pod_target(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_get_pod_target(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_set_pod_target(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_set_pod_target(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_get_vnumainfo(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-static XSM_INLINE int dummy_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static __maybe_unused int dummy_get_device_group(uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_assign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
+static __maybe_unused int dummy_assign_device(struct domain *d,
+                                              uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
+static __maybe_unused int dummy_deassign_device(struct domain *d,
+                                                uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
@@ -356,15 +371,15 @@ static XSM_INLINE int dummy_deassign_device(XSM_DEFAULT_ARG struct domain *d, ui
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-static XSM_INLINE int dummy_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
-                                          const char *dtpath)
+static __maybe_unused int dummy_assign_dtdevice(struct domain *d,
+                                                const char *dtpath)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
-                                            const char *dtpath)
+static __maybe_unused int dummy_deassign_dtdevice(struct domain *d,
+                                                  const char *dtpath)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
@@ -372,178 +387,190 @@ static XSM_INLINE int dummy_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
-static XSM_INLINE int dummy_resource_plug_core(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_resource_plug_core(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_unplug_core(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_resource_unplug_core(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_plug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static __maybe_unused int dummy_resource_plug_pci(uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_unplug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static __maybe_unused int dummy_resource_unplug_pci(uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_setup_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
+static __maybe_unused int dummy_resource_setup_pci(uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_setup_gsi(XSM_DEFAULT_ARG int gsi)
+static __maybe_unused int dummy_resource_setup_gsi(int gsi)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_resource_setup_misc(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_resource_setup_misc(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
+static __maybe_unused int dummy_page_offline(uint32_t cmd)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_hypfs_op(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_hypfs_op(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE long dummy_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+static __maybe_unused long dummy_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
 
 #ifdef CONFIG_COMPAT
-static XSM_INLINE int dummy_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+static __maybe_unused int dummy_do_compat_op(
+                                XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
 #endif
 
-static XSM_INLINE char *dummy_show_irq_sid(int irq)
+static __maybe_unused char *dummy_show_irq_sid(int irq)
 {
     return NULL;
 }
 
-static XSM_INLINE int dummy_map_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_map_domain_pirq(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_map_domain_irq(XSM_DEFAULT_ARG struct domain *d,
-                                         int irq, const void *data)
+static __maybe_unused int dummy_map_domain_irq(struct domain *d, int irq,
+                                               const void *data)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_unmap_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_unmap_domain_pirq(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_bind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+static __maybe_unused int dummy_bind_pt_irq(struct domain *d,
+                                            struct xen_domctl_bind_pt_irq *bind)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_unbind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+static __maybe_unused int dummy_unbind_pt_irq(struct domain *d,
+                                        struct xen_domctl_bind_pt_irq *bind)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_unmap_domain_irq(XSM_DEFAULT_ARG struct domain *d,
-                                           int irq, const void *data)
+static __maybe_unused int dummy_unmap_domain_irq(struct domain *d, int irq,
+                                                 const void *data)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_irq_permission(XSM_DEFAULT_ARG struct domain *d, int pirq, uint8_t allow)
+static __maybe_unused int dummy_irq_permission(struct domain *d, int pirq,
+                                               uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_iomem_permission(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static __maybe_unused int dummy_iomem_permission(struct domain *d, uint64_t s,
+                                                 uint64_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_iomem_mapping(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static __maybe_unused int dummy_iomem_mapping(struct domain *d, uint64_t s,
+                                              uint64_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_pci_config_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf,
-                                        uint16_t start, uint16_t end,
-                                        uint8_t access)
+static __maybe_unused int dummy_pci_config_permission(struct domain *d,
+                                                      uint32_t machine_bdf,
+                                                      uint16_t start,
+                                                      uint16_t end,
+                                                      uint8_t access)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_add_to_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_add_to_physmap(struct domain *d1,
+                                               struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_remove_from_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static __maybe_unused int dummy_remove_from_physmap(struct domain *d1,
+                                                    struct domain *d2)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d1, d2);
 }
 
-static XSM_INLINE int dummy_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
+static __maybe_unused int dummy_map_gmfn_foreign(struct domain *d,
+                                                 struct domain *t)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, t);
 }
 
-static XSM_INLINE int dummy_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
+static __maybe_unused int dummy_hvm_param(struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_hvm_control(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
+static __maybe_unused int dummy_hvm_control(struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_hvm_param_altp2mhvm(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_hvm_param_altp2mhvm(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uint64_t mode, uint32_t op)
+static __maybe_unused int dummy_hvm_altp2mhvm_op(struct domain *d,
+                                                 uint64_t mode, uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
 
@@ -562,14 +589,15 @@ static XSM_INLINE int dummy_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, u
     }
 }
 
-static XSM_INLINE int dummy_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+static __maybe_unused int dummy_vm_event_control(struct domain *d, int mode,
+                                                 int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
 #ifdef CONFIG_MEM_ACCESS
-static XSM_INLINE int dummy_mem_access(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_mem_access(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
@@ -577,7 +605,7 @@ static XSM_INLINE int dummy_mem_access(XSM_DEFAULT_ARG struct domain *d)
 #endif
 
 #ifdef CONFIG_MEM_PAGING
-static XSM_INLINE int dummy_mem_paging(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_mem_paging(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
@@ -585,58 +613,59 @@ static XSM_INLINE int dummy_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 #endif
 
 #ifdef CONFIG_MEM_SHARING
-static XSM_INLINE int dummy_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_mem_sharing(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 #endif
 
-static XSM_INLINE int dummy_platform_op(XSM_DEFAULT_ARG uint32_t op)
+static __maybe_unused int dummy_platform_op(uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 #ifdef CONFIG_X86
-static XSM_INLINE int dummy_do_mca(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_do_mca(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint32_t op)
+static __maybe_unused int dummy_shadow_control(struct domain *d, uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
+static __maybe_unused int dummy_mem_sharing_op(struct domain *d,
+                                               struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, cd);
 }
 
-static XSM_INLINE int dummy_apic(XSM_DEFAULT_ARG struct domain *d, int cmd)
+static __maybe_unused int dummy_apic(struct domain *d, int cmd)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
 
-static XSM_INLINE int dummy_machine_memory_map(XSM_DEFAULT_VOID)
+static __maybe_unused int dummy_machine_memory_map(XSM_DEFAULT_VOID)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int dummy_domain_memory_map(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_domain_memory_map(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct domain *t,
-                                     struct domain *f, uint32_t flags)
+static __maybe_unused int dummy_mmu_update(struct domain *d, struct domain *t,
+                                           struct domain *f, uint32_t flags)
 {
     int rc = 0;
     XSM_ASSERT_ACTION(XSM_TARGET);
@@ -647,38 +676,41 @@ static XSM_INLINE int dummy_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct
     return rc;
 }
 
-static XSM_INLINE int dummy_mmuext_op(XSM_DEFAULT_ARG struct domain *d, struct domain *f)
+static __maybe_unused int dummy_mmuext_op(struct domain *d, struct domain *f)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, f);
 }
 
-static XSM_INLINE int dummy_update_va_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *f, 
-                                                            l1_pgentry_t pte)
+static __maybe_unused int dummy_update_va_mapping(struct domain *d,
+                                                  struct domain *f,
+                                                  l1_pgentry_t pte)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, f);
 }
 
-static XSM_INLINE int dummy_priv_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
+static __maybe_unused int dummy_priv_mapping(struct domain *d, struct domain *t)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, d, t);
 }
 
-static XSM_INLINE int dummy_ioport_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static __maybe_unused int dummy_ioport_permission(struct domain *d, uint32_t s,
+                                                  uint32_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_ioport_mapping(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static __maybe_unused int dummy_ioport_mapping(struct domain *d, uint32_t s,
+                                               uint32_t e, uint8_t allow)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int dummy_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int op)
+static __maybe_unused int dummy_pmu_op (struct domain *d, unsigned int op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
@@ -695,31 +727,31 @@ static XSM_INLINE int dummy_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned i
 
 #endif /* CONFIG_X86 */
 
-static XSM_INLINE int dummy_dm_op(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_dm_op(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
 #ifdef CONFIG_ARGO
-static XSM_INLINE int dummy_argo_enable(const struct domain *d)
+static __maybe_unused int dummy_argo_enable(const struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE int dummy_argo_register_single_source(const struct domain *d,
-                                                      const struct domain *t)
+static __maybe_unused int dummy_argo_register_single_source(const struct domain *d,
+                                                            const struct domain *t)
 {
     return 0;
 }
 
-static XSM_INLINE int dummy_argo_register_any_source(const struct domain *d)
+static __maybe_unused int dummy_argo_register_any_source(const struct domain *d)
 {
     return 0;
 }
 
-static XSM_INLINE int dummy_argo_send(const struct domain *d,
-                                    const struct domain *t)
+static __maybe_unused int dummy_argo_send(const struct domain *d,
+                                          const struct domain *t)
 {
     return 0;
 }
@@ -727,7 +759,7 @@ static XSM_INLINE int dummy_argo_send(const struct domain *d,
 #endif /* CONFIG_ARGO */
 
 #include <public/version.h>
-static XSM_INLINE int dummy_xen_version (XSM_DEFAULT_ARG uint32_t op)
+static __maybe_unused int dummy_xen_version(uint32_t op)
 {
     XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
@@ -751,7 +783,7 @@ static XSM_INLINE int dummy_xen_version (XSM_DEFAULT_ARG uint32_t op)
     }
 }
 
-static XSM_INLINE int dummy_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
+static __maybe_unused int dummy_domain_resource_map(struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:36:58 2021
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Tim Deegan <tim@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	persaur@gmail.com,
	christopher.w.clark@gmail.com,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [PATCH 6/6] xsm: removing the XSM_ASSERT_ACTION macro
Date: Thu, 17 Jun 2021 19:39:18 -0400
Message-Id: <20210617233918.10095-7-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

With the elimination of the default privilege level from all the XSM hook call
sites, the XSM_ASSERT_ACTION macro is no longer needed. This commit cleans up
all the dummy hooks, removing the macro.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/xsm/dummy.h | 253 +++++++++++++++---------------------------------
 1 file changed, 80 insertions(+), 173 deletions(-)

diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
index 0f8ea163af..1a9a6b2935 100644
--- a/xen/xsm/dummy.h
+++ b/xen/xsm/dummy.h
@@ -6,13 +6,6 @@
  *  This program is free software; you can redistribute it and/or modify
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
- *
- *
- *  Each XSM hook implementing an access check should have its first parameter
- *  preceded by (or use XSM_DEFAULT_VOID if it has no
- *  arguments). The first non-declaration statement shold be XSM_ASSERT_ACTION
- *  with the expected type of the hook, which will either define or check the
- *  value of action.
  */
 
 #include <xen/sched.h>
@@ -48,7 +41,6 @@ void __xsm_action_mismatch_detected(void);
  * initialize the variable.
  */
 #define XSM_DEFAULT_VOID void
-#define XSM_ASSERT_ACTION(def) xsm_default_t action = def; (void)action
 
 static always_inline int xsm_default_action(xsm_default_t action,
                                             struct domain *src,
@@ -88,37 +80,31 @@ static __maybe_unused void dummy_security_domaininfo(struct domain *d,
 
 static __maybe_unused int dummy_domain_create(struct domain *d, u32 ssidref)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_getdomaininfo(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_domctl_scheduler_op(struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_sysctl_scheduler_op(int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_set_target(struct domain *d, struct domain *e)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_domctl(struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( cmd )
     {
     case XEN_DOMCTL_ioport_mapping:
@@ -135,14 +121,12 @@ static __maybe_unused int dummy_domctl(struct domain *d, int cmd)
 
 static __maybe_unused int dummy_sysctl(int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_readconsole(uint32_t clear)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_alloc_security_domain(struct domain *d)
@@ -159,67 +143,57 @@ static __maybe_unused int dummy_grant_mapref(struct domain *d1,
                                              struct domain *d2,
                                              uint32_t flags)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused int dummy_grant_unmapref(struct domain *d1,
                                                struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused int dummy_grant_setup(struct domain *d1,
                                             struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_grant_transfer(struct domain *d1,
                                                struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused int dummy_grant_copy(struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused int dummy_grant_query_size(struct domain *d1,
                                                  struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_memory_exchange(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 static __maybe_unused int dummy_memory_adjust_reservation(struct domain *d1,
                                                           struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_memory_stat_reservation(struct domain *d1,
                                                         struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_console_io(struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
     if ( d->is_console )
         return xsm_default_action(XSM_HOOK, d, NULL);
 #ifdef CONFIG_VERBOSE_DEBUG
@@ -231,43 +205,37 @@ static __maybe_unused int dummy_console_io(struct domain *d, int cmd)
 
 static __maybe_unused int dummy_profile(struct domain *d, int op)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d, NULL);
+    return xsm_default_action(XSM_HOOK, d, NULL);
 }
 
 static __maybe_unused int dummy_kexec(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_schedop_shutdown(struct domain *d1,
                                                  struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_DM_PRIV, d1, d2);
 }
 
 static __maybe_unused int dummy_memory_pin_page(struct domain *d1,
                                                 struct domain *d2,
                                                 struct page_info *page)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused int dummy_claim_pages(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_evtchn_unbound(struct domain *d,
                                                struct evtchn *chn,
                                                domid_t id2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 static __maybe_unused int dummy_evtchn_interdomain(struct domain *d1,
@@ -275,8 +243,7 @@ static __maybe_unused int dummy_evtchn_interdomain(struct domain *d1,
                                                    struct domain *d2,
                                                    struct evtchn *chan2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_HOOK, d1, d2);
 }
 
 static __maybe_unused void dummy_evtchn_close_post(struct evtchn *chn)
@@ -287,22 +254,19 @@ static __maybe_unused void dummy_evtchn_close_post(struct evtchn *chn)
 static __maybe_unused int dummy_evtchn_send(struct domain *d,
                                             struct evtchn *chn)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, d, NULL);
+    return xsm_default_action(XSM_HOOK, d, NULL);
 }
 
 static __maybe_unused int dummy_evtchn_status(struct domain *d,
                                               struct evtchn *chn)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 static __maybe_unused int dummy_evtchn_reset(struct domain *d1,
                                              struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_alloc_security_evtchns(struct evtchn chn[],
@@ -325,47 +289,40 @@ static __maybe_unused char *dummy_show_security_evtchn(struct domain *d,
 
 static __maybe_unused int dummy_init_hardware_domain(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_get_pod_target(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_set_pod_target(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_get_vnumainfo(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
 static __maybe_unused int dummy_get_device_group(uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_assign_device(struct domain *d,
                                               uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_deassign_device(struct domain *d,
                                                 uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
@@ -374,71 +331,60 @@ static __maybe_unused int dummy_deassign_device(struct domain *d,
 static __maybe_unused int dummy_assign_dtdevice(struct domain *d,
                                                 const char *dtpath)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_deassign_dtdevice(struct domain *d,
                                                   const char *dtpath)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
 static __maybe_unused int dummy_resource_plug_core(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_unplug_core(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_plug_pci(uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_unplug_pci(uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_setup_pci(uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_setup_gsi(int gsi)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_resource_setup_misc(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_page_offline(uint32_t cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_HOOK, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_hypfs_op(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused long dummy_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
@@ -461,63 +407,54 @@ static __maybe_unused char *dummy_show_irq_sid(int irq)
 
 static __maybe_unused int dummy_map_domain_pirq(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_map_domain_irq(struct domain *d, int irq,
                                                const void *data)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_unmap_domain_pirq(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_bind_pt_irq(struct domain *d,
                                             struct xen_domctl_bind_pt_irq *bind)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_unbind_pt_irq(struct domain *d,
                                         struct xen_domctl_bind_pt_irq *bind)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_unmap_domain_irq(struct domain *d, int irq,
                                                  const void *data)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_irq_permission(struct domain *d, int pirq,
                                                uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_iomem_permission(struct domain *d, uint64_t s,
                                                  uint64_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_iomem_mapping(struct domain *d, uint64_t s,
                                               uint64_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_pci_config_permission(struct domain *d,
@@ -526,54 +463,45 @@ static __maybe_unused int dummy_pci_config_permission(struct domain *d,
                                                       uint16_t end,
                                                       uint8_t access)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_add_to_physmap(struct domain *d1,
                                                struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_remove_from_physmap(struct domain *d1,
                                                     struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    return xsm_default_action(XSM_TARGET, d1, d2);
 }
 
 static __maybe_unused int dummy_map_gmfn_foreign(struct domain *d,
                                                  struct domain *t)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d, t);
+    return xsm_default_action(XSM_TARGET, d, t);
 }
 
 static __maybe_unused int dummy_hvm_param(struct domain *d, unsigned long op)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 static __maybe_unused int dummy_hvm_control(struct domain *d, unsigned long op)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_hvm_param_altp2mhvm(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_PRIV, current->domain, d);
 }
 
 static __maybe_unused int dummy_hvm_altp2mhvm_op(struct domain *d,
                                                  uint64_t mode, uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
-
     switch ( mode )
     {
     case XEN_ALTP2M_mixed:
@@ -592,127 +520,109 @@ static __maybe_unused int dummy_hvm_altp2mhvm_op(struct domain *d,
 static __maybe_unused int dummy_vm_event_control(struct domain *d, int mode,
                                                  int op)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_PRIV, current->domain, d);
 }
 
 #ifdef CONFIG_MEM_ACCESS
 static __maybe_unused int dummy_mem_access(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_PAGING
 static __maybe_unused int dummy_mem_paging(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_SHARING
 static __maybe_unused int dummy_mem_sharing(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 #endif
 
 static __maybe_unused int dummy_platform_op(uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 #ifdef CONFIG_X86
 static __maybe_unused int dummy_do_mca(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_shadow_control(struct domain *d, uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_mem_sharing_op(struct domain *d,
                                                struct domain *cd, int op)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, cd);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, cd);
 }
 
 static __maybe_unused int dummy_apic(struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, d, NULL);
+    return xsm_default_action(XSM_PRIV, d, NULL);
 }
 
 static __maybe_unused int dummy_machine_memory_map(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, NULL);
+    return xsm_default_action(XSM_PRIV, current->domain, NULL);
 }
 
 static __maybe_unused int dummy_domain_memory_map(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_TARGET, current->domain, d);
 }
 
 static __maybe_unused int dummy_mmu_update(struct domain *d, struct domain *t,
                                            struct domain *f, uint32_t flags)
 {
     int rc = 0;
-    XSM_ASSERT_ACTION(XSM_TARGET);
     if ( f != dom_io )
-        rc = xsm_default_action(action, d, f);
+        rc = xsm_default_action(XSM_TARGET, d, f);
     if ( evaluate_nospec(t) && !rc )
-        rc = xsm_default_action(action, d, t);
+        rc = xsm_default_action(XSM_TARGET, d, t);
     return rc;
 }
 
 static __maybe_unused int dummy_mmuext_op(struct domain *d, struct domain *f)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d, f);
+    return xsm_default_action(XSM_TARGET, d, f);
 }
 
 static __maybe_unused int dummy_update_va_mapping(struct domain *d,
                                                   struct domain *f,
                                                   l1_pgentry_t pte)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d, f);
+    return xsm_default_action(XSM_TARGET, d, f);
 }
 
 static __maybe_unused int dummy_priv_mapping(struct domain *d, struct domain *t)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d, t);
+    return xsm_default_action(XSM_TARGET, d, t);
 }
 
 static __maybe_unused int dummy_ioport_permission(struct domain *d, uint32_t s,
                                                   uint32_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_ioport_mapping(struct domain *d, uint32_t s,
                                                uint32_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_HOOK, current->domain, d);
 }
 
 static __maybe_unused int dummy_pmu_op (struct domain *d, unsigned int op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
     {
     case XENPMU_init:
@@ -729,8 +639,7 @@ static __maybe_unused int dummy_pmu_op (struct domain *d, unsigned int op)
 
 static __maybe_unused int dummy_dm_op(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
 
 #ifdef CONFIG_ARGO
@@ -761,7 +670,6 @@ static __maybe_unused int dummy_argo_send(const struct domain *d,
 #include <public/version.h>
 static __maybe_unused int dummy_xen_version(uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
     switch ( op )
     {
     case XENVER_version:
@@ -785,6 +693,5 @@ static __maybe_unused int dummy_xen_version(uint32_t op)
 
 static __maybe_unused int dummy_domain_resource_map(struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(XSM_DM_PRIV, current->domain, d);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 17 23:40:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Jun 2021 23:40:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144277.265597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1cO-0001nM-1s; Thu, 17 Jun 2021 23:40:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144277.265597; Thu, 17 Jun 2021 23:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu1cN-0001nF-V9; Thu, 17 Jun 2021 23:40:15 +0000
Received: by outflank-mailman (input) for mailman id 144277;
 Thu, 17 Jun 2021 23:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uRdX=LL=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lu1cM-0001n8-7d
 for xen-devel@lists.xenproject.org; Thu, 17 Jun 2021 23:40:14 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 890392b8-d5c2-4a42-9d0a-01b9182b5493;
 Thu, 17 Jun 2021 23:40:13 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1623973207647594.6141898175358;
 Thu, 17 Jun 2021 16:40:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 890392b8-d5c2-4a42-9d0a-01b9182b5493
ARC-Seal: i=1; a=rsa-sha256; t=1623973209; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=NqOK/dtVWgvW5rJAHaWWyYT/CY/cxgWWpSbhhr8oYJx4PrY+5JhN5rCQHExK4ff12J6yRHZMx0TOsUJ2T77fDFNKn5TYJq4xQKPplgDasYcWKIcLx/eUkntbtdUXkqkOOo15BO6UGSNMeKQLExSP2UMe+t7nZhNOO9i8fgYf8mA=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1623973209; h=Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=VG1SGMVtfqvSMcYTyx1bi1ZC6g4jlfGXeSVq6Hbovbc=; 
	b=nxOGT6hQmUw11dwWCGvhVQzYQd5nXZKxeQT0URNxnlyRtqzBMB2Plx8tMNZzPYRQ4e8sa755rlFB840b3e++EGl9BrNefoYchmJnJw+Rt3Hw5cHgHCOdbzj7/RDpm1TYabmYfKfuUc7OJ2mwfzO/XX58M5M9nwiXsgNlTlvjTaI=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1623973209;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:MIME-Version:Content-Transfer-Encoding;
	bh=VG1SGMVtfqvSMcYTyx1bi1ZC6g4jlfGXeSVq6Hbovbc=;
	b=e5xXvQE55vtOHfAF/4iXJVuvv6f7pixgtACEX6wfrkhti0ekkgVYJQMICKnu0Pi1
	WYK/WkOetR8l/fzhPPBeOVhz2d/4L+5hUvyHR9qt3NIqfbxZHLp8g6kfH04qFVAsUdM
	qDcjZjqANn0Qd45McKxzUh76qv+lQwnFIo2Mt9Bw=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] maintainers: adding new reviewer for xsm
Date: Thu, 17 Jun 2021 19:49:55 -0400
Message-Id: <20210617234955.18489-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Add myself as a reviewer for XSM.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..4f759867dc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -622,6 +622,7 @@ F:	xen/include/xen/trace.h
 
 XSM/FLASK
 M:	Daniel De Graaf <dgdegra@tycho.nsa.gov>
+R:	Daniel P. Smith <dpsmith@apertussolutions.com>
 S:	Supported
 F:	tools/flask/
 F:	xen/include/xsm/
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 00:06:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 00:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144298.265607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu20y-0004vD-L5; Fri, 18 Jun 2021 00:05:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144298.265607; Fri, 18 Jun 2021 00:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu20y-0004v6-Hj; Fri, 18 Jun 2021 00:05:40 +0000
Received: by outflank-mailman (input) for mailman id 144298;
 Fri, 18 Jun 2021 00:05:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu20x-0004uw-6w; Fri, 18 Jun 2021 00:05:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu20w-0005I8-To; Fri, 18 Jun 2021 00:05:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu20w-00049T-If; Fri, 18 Jun 2021 00:05:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lu20w-0004rH-FX; Fri, 18 Jun 2021 00:05:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KX0osonQiJMUTDrwSz2AJJ48Rer5wjSgjQc6iQ+tZQ4=; b=AxaP6iKHyL7pO06rMysU4vj5SH
	lsu2Bg6GTXJ1r7oXeJdSdOPSq7D8x7EoTEIUGUll731ogsXcqzXfC1iAha6U0D9RC7fFCC+lX3zCc
	Nau+0KXHGSPUTzMhfb5cSJrOp/CMel55p6tai8VxXdb9VFVdGx7Iy5+41eDeY0g99Ugw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162877: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=70585216fe7730d9fb5453d3e2804e149d0fe201
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 00:05:38 +0000

flight 162877 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162877/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                70585216fe7730d9fb5453d3e2804e149d0fe201
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  321 days
Failing since        152366  2020-08-01 20:49:34 Z  320 days  546 attempts
Testing same since   162877  2021-06-17 12:28:22 Z    0 days    1 attempts

------------------------------------------------------------
6173 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1681945 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 00:12:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 00:12:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144306.265622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu27K-0006Jy-Cn; Fri, 18 Jun 2021 00:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144306.265622; Fri, 18 Jun 2021 00:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu27K-0006Jr-9Y; Fri, 18 Jun 2021 00:12:14 +0000
Received: by outflank-mailman (input) for mailman id 144306;
 Fri, 18 Jun 2021 00:12:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6PvD=LM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lu27J-0006Jl-2I
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 00:12:13 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d175725a-1d61-47f4-b04a-4e4ede768683;
 Fri, 18 Jun 2021 00:12:12 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 63FC8613B9;
 Fri, 18 Jun 2021 00:12:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d175725a-1d61-47f4-b04a-4e4ede768683
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1623975131;
	bh=I85jXme/+M1bZe1XYH+nSyw7AePxBGBpBy0BMnD7tY0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GvD4MBqz40+RPoSFHbPC/re1mRd2WLTXojLT3dQM8bUrJVnSGp8N8bxhQcBVvnRjV
	 g21Bv8uN++coboEmjReD0kDNPs7HOI10WZzDDaL8GTc6mWtFp/fZl4iLQgrmDUrrjY
	 11Y3/FVfCTyFD7//xwXVoPm2CDwecg+j5gHKuVInXi7EGKigUGVDoU3wz+HaazF6vd
	 GFPrOCmAknRS4aN+VZOWGD37VKm+/e5xwvQRnUXIzTG0K71ulknJNazcWee69iF2DH
	 5UGVA7NvKl3HRMdSLeS9T753zLiR+2+cAtRCK9ryRPEe34j1tZF3xuYlKoYqVTqBsn
	 HW+XLWR2b+W0Q==
Date: Thu, 17 Jun 2021 17:12:10 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] maintainers: adding new reviewer for xsm
In-Reply-To: <20210617234955.18489-1-dpsmith@apertussolutions.com>
Message-ID: <alpine.DEB.2.21.2106171712010.24906@sstabellini-ThinkPad-T480s>
References: <20210617234955.18489-1-dpsmith@apertussolutions.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Jun 2021, Daniel P. Smith wrote:
> I would like to add myself as a reviewer for XSM.
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  MAINTAINERS | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index d46b08a0d2..4f759867dc 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -622,6 +622,7 @@ F:	xen/include/xen/trace.h
>  
>  XSM/FLASK
>  M:	Daniel De Graaf <dgdegra@tycho.nsa.gov>
> +R:	Daniel P. Smith <dpsmith@apertussolutions.com>
>  S:	Supported
>  F:	tools/flask/
>  F:	xen/include/xsm/
> -- 
> 2.20.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 01:06:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 01:06:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144317.265633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu2xn-0008Ue-MR; Fri, 18 Jun 2021 01:06:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144317.265633; Fri, 18 Jun 2021 01:06:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu2xn-0008UX-Io; Fri, 18 Jun 2021 01:06:27 +0000
Received: by outflank-mailman (input) for mailman id 144317;
 Fri, 18 Jun 2021 01:06:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AHJ/=LM=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1lu2xl-0008UR-Su
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 01:06:26 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1627529-ce36-4d3a-ae9f-2af7dd13f274;
 Fri, 18 Jun 2021 01:06:23 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:51070
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1lu32K-0007qg-Iu; Fri, 18 Jun 2021 03:11:08 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1627529-ce36-4d3a-ae9f-2af7dd13f274
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=Fd1zkhbHbgLlW8EB8x79t8JJ4Wq3JOq8GZOSFxBd4ZA=; b=YKxs40SrY3dKlnlP+qZO1JSMMn
	W34a+umSP0S5guWCtNnmZ8r8iBsisO67qgf7zhnAmLj+xOm98sv9MynCbEsThQ1dD24mUHACc3fZb
	5KlRJce1fIM9rquXYX4Z5ZE3WOTwHyXluEdp0S9oHW6JxAju0dQ/U6dMZTBAHbxgWTIg=;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
From: Sander Eikelenboom <linux@eikelenboom.it>
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>,
 Linus Torvalds <torvalds@linux-foundation.org>,
 Juergen Gross <jgross@suse.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
 <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
 <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
 <7338064f-10b6-545d-bc6c-843d04aafe28@eikelenboom.it>
Message-ID: <e7f9c4f8-1669-75ce-b052-1030350a159e@eikelenboom.it>
Date: Fri, 18 Jun 2021 03:06:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <7338064f-10b6-545d-bc6c-843d04aafe28@eikelenboom.it>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/06/2021 21:39, Sander Eikelenboom wrote:
> On 17/06/2021 20:02, Sander Eikelenboom wrote:
>> On 17/06/2021 17:37, Rasmus Villemoes wrote:
>>> On 17/06/2021 17.01, Linus Torvalds wrote:
>>>> On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>>>>
>>>>> I just tried to upgrade and test the linux kernel going from the 5.12 kernel series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.
>>>>>
>>>>> Some VM's boot fine (with more than 256MB memory assigned), but the smaller (memory wise) PVH ones crash during kernel boot due to OOM.
>>>>> Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 5.13-rc6 (but it has more memory assigned, so that is not unexpected).
>>>>
>>>> Adding Rasmus to the cc, because this looks kind of like the async
>>>> rootfs population thing that caused some other oom issues too.
>>>
>>> Yes, that looks like the same issue.
>>>
>>>> Rasmus? Original report here:
>>>>
>>>>       https://lore.kernel.org/lkml/ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it/
>>>>
>>>> I do find it odd that we'd be running out of memory so early..
>>>
>>> Indeed. It would be nice to know if these also reproduce with
>>> initramfs_async=0 on the command line.
>>>
>>> But what is even more curious is that in the other report
>>> (https://lore.kernel.org/lkml/20210607144419.GA23706@xsang-OptiPlex-9020/),
>>> it seemed to trigger with _more_ memory - though I may be misreading
>>> what Oliver was telling me:
>>>
>>>> please be noted that we use 'vmalloc=512M' for both parent and this
>>> commit.
>>>> since it's ok on parent but oom on this commit, we want to send this
>>> report
>>>> to show the potential problem of the commit on some cases.
>>>>
>>>> we also tested by changing to use 'vmalloc=128M', it will succeed.
>>>
>>> Those tests were done in a VM with 16G memory, and then he also wrote
>>>
>>>> we also tried to follow exactly above steps to test on
>>>> some local machine (8G memory), but cannot reproduce.
>>>
>>> Are there some special rules for what memory pools PID1 versus the
>>> kworker threads can dip into?
>>>
>>>
>>> Side note: I also had a ppc64 report with different symptoms (the
>>> initramfs was corrupted), but that turned out to also reproduce with
>>> e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
>>> thread is here:
>>> https://lore.kernel.org/lkml/CA+QYu4qxf2CYe2gC6EYnOHXPKS-+cEXL=MnUvqRFaN7W1i6ahQ@mail.gmail.com/
>>>
>>> Rasmus
>>>
>>
>> I chose to first finish the bisection attempt; not so surprisingly, it ends up with:
>> e7cb072eb988e46295512617c39d004f9e1c26f8 is the first bad commit
>>
>> So at least that link is confirmed.
>>
>> I also tried booting with "initramfs_async=0", and now the guest boots with the 5.13-rc6-ish kernel that fails without it.
>>
>> --
>> Sander
>>
> 
> CC'ed Juergen.
> 
> Juergen, do you know how the direct kernel boot works and whether it could interfere
> with this commit?
> 
> After reading the last part of the commit message e7cb072eb98 namely:
> 
>       Should one of the initcalls done after rootfs_initcall time (i.e., device_
>       and late_ initcalls) need something from the initramfs (say, a kernel
>       module or a firmware blob), it will simply wait for the initramfs
>       unpacking to be done before proceeding, which should in theory make this
>       completely safe.
>       
>       But if some driver pokes around in the filesystem directly and not via one
>       of the official kernel interfaces (i.e.  request_firmware*(),
>       call_usermodehelper*) that theory may not hold - also, I certainly might
>       have missed a spot when sprinkling wait_for_initramfs().  So there is an
>       escape hatch in the form of an initramfs_async= command line parameter.
> 
> It dawned on me that I'm using the "direct kernel boot" functionality, which lets you boot a guest
> where the kernel and initramfs are supplied from dom0. That works great, but perhaps it
> pokes around in the filesystem as the last part of the commit message warns about?
> 
> (I think the feature is called "direct kernel boot"; what I mean is using, for example:
>       kernel      = '/boot/vmlinuz-5.13.0-rc6-20210617-doflr-mac80211debug+'
>       ramdisk     = '/boot/initrd.img-5.13.0-rc6-20210617-doflr-mac80211debug+'
>       cmdline     = 'root=UUID=2f757320-caca-4215-868d-73a4aacf12aa ro nomodeset xen_blkfront.max_ring_page_order=1 console=hvc0 earlyprintk=xen initramfs_async=0'
> 
> options in the Xen guest config file to boot the (in this case PVH) guest.
> )
> 
> --
> Sander
> 

OK, I did some experimentation, and it seems that with 256M assigned to the VM it was almost at the edge of OOM with the 5.12 kernel as well, in the config I am using.
With v5.12 it boots when I assign 240M, but not with 230M. With 5.13 the tipping point seems to be between 265M and 270M, so my config was already quite close to the edge.

The "direct kernel boot" feature I'm using just seems somewhat memory-hungry, but using another compression algorithm for the kernel and initramfs already helped in my case.

So sorry for the noise; clearly a user error.

--
Sander





From xen-devel-bounces@lists.xenproject.org Fri Jun 18 04:57:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 04:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144325.265644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu6Yi-0003qH-N9; Fri, 18 Jun 2021 04:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144325.265644; Fri, 18 Jun 2021 04:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu6Yi-0003qA-IO; Fri, 18 Jun 2021 04:56:48 +0000
Received: by outflank-mailman (input) for mailman id 144325;
 Fri, 18 Jun 2021 04:56:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu6Yh-0003q0-KF; Fri, 18 Jun 2021 04:56:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu6Yh-0007hj-AL; Fri, 18 Jun 2021 04:56:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu6Yg-0002px-UD; Fri, 18 Jun 2021 04:56:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lu6Yg-0007jH-RF; Fri, 18 Jun 2021 04:56:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tehsrvsrY/s40jJ0b+nCDnf9lMCxazWk8ouLqm5nvvM=; b=SZtsGRjmwCUZkDEZfWvTr54e/o
	h4ko5pg3hwtgQJU/IZpi7JCGFy9rlo2QZSRyGjOwS7UdUjvRxbNkM7msu3ZPD62pSgHDVqzSCU0Gs
	vm6XvNGU+leMK5ho1flnukET02h1WfAZNtgN9NeWydLwjGGVgE866aKeia1OnaikNSps=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162879: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=18e53dff939898c6dd00d206a3c2f5cd3d6669db
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 04:56:46 +0000

flight 162879 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162879/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                18e53dff939898c6dd00d206a3c2f5cd3d6669db
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  301 days
Failing since        152659  2020-08-21 14:07:39 Z  300 days  555 attempts
Testing same since   162879  2021-06-17 16:39:39 Z    0 days    1 attempts

------------------------------------------------------------
534 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 172465 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 06:31:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 06:31:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144332.265658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu82Q-0004k3-QR; Fri, 18 Jun 2021 06:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144332.265658; Fri, 18 Jun 2021 06:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu82Q-0004jw-MK; Fri, 18 Jun 2021 06:31:34 +0000
Received: by outflank-mailman (input) for mailman id 144332;
 Fri, 18 Jun 2021 06:31:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E/iT=LM=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lu82P-0004jW-3U
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 06:31:33 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb17c836-d588-4bbf-bbd9-8aeb3dc3d576;
 Fri, 18 Jun 2021 06:31:32 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id z4so6781459qts.4
 for <xen-devel@lists.xenproject.org>; Thu, 17 Jun 2021 23:31:31 -0700 (PDT)
Received: from mail-qt1-f177.google.com (mail-qt1-f177.google.com.
 [209.85.160.177])
 by smtp.gmail.com with ESMTPSA id c17sm4667941qtd.22.2021.06.17.23.31.31
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 17 Jun 2021 23:31:31 -0700 (PDT)
Received: by mail-qt1-f177.google.com with SMTP id l2so4005856qtq.10
 for <xen-devel@lists.xenproject.org>; Thu, 17 Jun 2021 23:31:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb17c836-d588-4bbf-bbd9-8aeb3dc3d576
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=K4gmwVrd4ipCgh7Pt4U3qK6zM5RZFqVsfT+5m/JaTNY=;
        b=b3TOMvQ1kXF7QuAFyHdOkvmT8LXOEbUEDAbnT1VD77ESf6dWqJqkel2pij6H8qazUH
         qeKiHzooTElNu69H2pMv9jI9D4woaiXlDH7E4nc1glsurOz9EYewwC9YHW+cnkiHTax7
         tbypLVBnZif7y60KH97lEwI+rEiH+ILoO0+Go=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=K4gmwVrd4ipCgh7Pt4U3qK6zM5RZFqVsfT+5m/JaTNY=;
        b=Tpaq5QywedfuXl4Tj7evpXYmLsyMn6hd+JjIW9cWArCXwMvtyV0Q9p9AzVQIL5+Ni2
         vi6hq9ypnDWTKWTK6Ruqh46NRPHuSbjWp0R6ySnVGSPNz8w4Wbr3qcMtM/N4vQYVWC1U
         zjlyVf0a/2Frv+55RvR5HsQnoY5BFjgV5+xOsrie8Dnhqz1jdEp1kqQKlcVPrZ76fCnH
         Maa9nXthGNFNyNMOIIHC6TsuqOCXRUCF5TyPSiKIwB0YukySokwkbw6hquvc2hiS4g3n
         WcDnXs70JhZyusG9SUM28cJpZULOwr7DGM9pbA1zIcih7Kwm+P1cegwiIj83wQcv31Sj
         DjYQ==
X-Gm-Message-State: AOAM533Yzhy42pKcg/oyZo3dL6NeoQPY+/zj4g31mgWsXJxsPZuRQrAx
	Ei5xSv0Xtr6rYDhkvfUoeBBBd9+p4mvseQ==
X-Google-Smtp-Source: ABdhPJwCIxE7b4Q78n8HhOfUf8aRcHlFE98mTKmC62OrPrEZxUM7fNSifw4WD8UCikXmlJ5rOf1UKA==
X-Received: by 2002:ac8:76ca:: with SMTP id q10mr9074671qtr.150.1623997891402;
        Thu, 17 Jun 2021 23:31:31 -0700 (PDT)
X-Received: by 2002:a02:384b:: with SMTP id v11mr1842282jae.90.1623997548077;
 Thu, 17 Jun 2021 23:25:48 -0700 (PDT)
MIME-Version: 1.0
References: <20210617062635.1660944-1-tientzu@chromium.org>
 <20210617062635.1660944-2-tientzu@chromium.org> <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 18 Jun 2021 14:25:37 +0800
X-Gmail-Original-Message-ID: <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com>
Message-ID: <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com>
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
	paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Robin Murphy <robin.murphy@arm.com>, 
	grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, 
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Fri, Jun 18, 2021 at 7:30 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Thu, 17 Jun 2021, Claire Chang wrote:
> > Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> > initialization to make the code reusable.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> > Tested-by: Will Deacon <will@kernel.org>
> > ---
> >  kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
> >  1 file changed, 25 insertions(+), 25 deletions(-)
> >
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 52e2ac526757..47bb2a766798 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
> >       memset(vaddr, 0, bytes);
> >  }
> >
> > -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> > +                                 unsigned long nslabs, bool late_alloc)
> >  {
> > +     void *vaddr = phys_to_virt(start);
> >       unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> > +
> > +     mem->nslabs = nslabs;
> > +     mem->start = start;
> > +     mem->end = mem->start + bytes;
> > +     mem->index = 0;
> > +     mem->late_alloc = late_alloc;
> > +     spin_lock_init(&mem->lock);
> > +     for (i = 0; i < mem->nslabs; i++) {
> > +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +             mem->slots[i].alloc_size = 0;
> > +     }
> > +     memset(vaddr, 0, bytes);
> > +}
> > +
> > +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +{
> >       struct io_tlb_mem *mem;
> >       size_t alloc_size;
> >
> > @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> >       if (!mem)
> >               panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
> >                     __func__, alloc_size, PAGE_SIZE);
> > -     mem->nslabs = nslabs;
> > -     mem->start = __pa(tlb);
> > -     mem->end = mem->start + bytes;
> > -     mem->index = 0;
> > -     spin_lock_init(&mem->lock);
> > -     for (i = 0; i < mem->nslabs; i++) {
> > -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > -             mem->slots[i].alloc_size = 0;
> > -     }
> > +
> > +     swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
> >
> >       io_tlb_default_mem = mem;
> >       if (verbose)
> > @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
> >  int
> >  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
> >  {
> > -     unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> >       struct io_tlb_mem *mem;
> > +     unsigned long bytes = nslabs << IO_TLB_SHIFT;
> >
> >       if (swiotlb_force == SWIOTLB_NO_FORCE)
> >               return 0;
> > @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
> >       if (!mem)
> >               return -ENOMEM;
> >
> > -     mem->nslabs = nslabs;
> > -     mem->start = virt_to_phys(tlb);
> > -     mem->end = mem->start + bytes;
> > -     mem->index = 0;
> > -     mem->late_alloc = 1;
> > -     spin_lock_init(&mem->lock);
> > -     for (i = 0; i < mem->nslabs; i++) {
> > -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > -             mem->slots[i].alloc_size = 0;
> > -     }
> > -
> > +     memset(mem, 0, sizeof(*mem));
> > +     swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
> >       set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
> > -     memset(tlb, 0, bytes);
>
> This is good for swiotlb_late_init_with_tbl. However I have just noticed
> that mem could also be allocated from swiotlb_init_with_tbl, in which
> case the zeroing is missing. I think we need another memset in
> swiotlb_init_with_tbl as well. Or maybe it could be better to have a
> single memset at the beginning of swiotlb_init_io_tlb_mem instead. Up to
> you.

swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
struct, and memblock_alloc[1] already zeroes the allocation in
memblock_alloc_try_nid[2], so swiotlb_init_with_tbl is covered as well.
I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
that's clearer and safer.

[1] https://elixir.bootlin.com/linux/v5.13-rc6/source/include/linux/memblock.h#L407
[2] https://elixir.bootlin.com/linux/v5.13-rc6/source/mm/memblock.c#L1555


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 07:22:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 07:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144338.265669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu8oo-0001Bk-JH; Fri, 18 Jun 2021 07:21:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144338.265669; Fri, 18 Jun 2021 07:21:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lu8oo-0001Bd-Fh; Fri, 18 Jun 2021 07:21:34 +0000
Received: by outflank-mailman (input) for mailman id 144338;
 Fri, 18 Jun 2021 07:21:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu8on-0001AJ-Ac; Fri, 18 Jun 2021 07:21:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu8on-00021N-3G; Fri, 18 Jun 2021 07:21:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lu8om-0003xo-N7; Fri, 18 Jun 2021 07:21:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lu8om-0001pE-Md; Fri, 18 Jun 2021 07:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1vXOqtiIN6jZrDiTmCW1g+QBTA3D3HD0HMmQxfrEZtA=; b=Qfrc6oC73uxcYmpI51leykCOR0
	ASvxEL35xYXWrvNtzZRV0cteRbtcgqcsaytnFBra0QTjZ/tAc5DlEZ8450mG9gVkz6y3DpOie6/nt
	2zpR8GZ2e+f7E0MfUYa7IE2jBXqBIx1c4ACr/NbXZ3OqI1rs7+chUGkKFzzwp6ze5g4s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162882-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 162882: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ec457ac2a29279e8cd91745c410b0f49d5e8f1ff
X-Osstest-Versions-That:
    xen=a339ceaa8f17e827ee5eb25f05ad6f52ba8d6b1c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 07:21:32 +0000

flight 162882 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162882/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail blocked in 162561
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162561
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162561
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162561
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162561
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162561
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162561
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162561
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162561
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162561
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162561
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162561
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ec457ac2a29279e8cd91745c410b0f49d5e8f1ff
baseline version:
 xen                  a339ceaa8f17e827ee5eb25f05ad6f52ba8d6b1c

Last test of basis   162561  2021-06-09 02:20:51 Z    9 days
Testing same since   162882  2021-06-17 19:07:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a339ceaa8f..ec457ac2a2  ec457ac2a29279e8cd91745c410b0f49d5e8f1ff -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 08:55:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 08:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144366.265716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luAHf-0002n9-6Y; Fri, 18 Jun 2021 08:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144366.265716; Fri, 18 Jun 2021 08:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luAHf-0002n2-3e; Fri, 18 Jun 2021 08:55:27 +0000
Received: by outflank-mailman (input) for mailman id 144366;
 Fri, 18 Jun 2021 08:55:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pGVE=LM=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1luAHd-0002mt-82
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 08:55:25 +0000
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00e1a2ad-7210-42f6-ba74-2c91d5cdf6f5;
 Fri, 18 Jun 2021 08:55:23 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id
 f16-20020a05600c1550b02901b00c1be4abso8171831wmg.2
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 01:55:23 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id z10sm7026080wmp.39.2021.06.18.01.55.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 18 Jun 2021 01:55:21 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 89DD81FF7E;
 Fri, 18 Jun 2021 09:55:20 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00e1a2ad-7210-42f6-ba74-2c91d5cdf6f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:date:in-reply-to
         :message-id:mime-version:content-transfer-encoding;
        bh=yCs3NROrZCWzC+VF7Ed7Zg+CSfK98h/BzXS8h1Jj/QU=;
        b=SmWBOMEjqZKoh6T5o8G2QaiMAXqEs03dVEIFUTVciNqjmmZTWyKSMMBaECQ7NNimM6
         mjnHnJsDkmXytn8jGqH/EcSCA2D+fbbXQmeYE9chusEMnXAfGpBje6LZPQDBmdL/w375
         FdpzZ/GOEqX9nBwk9KDuu+e8VEwuJf6A0+8JJLpRnNO54idwLuhoTrHmPkKYP0mYqS/2
         Hjj02TZfxg2ubeT0TN5EKy+esAkZtPTL8Vwsytyvkg6tokzYhK5P6De/aasXxftQeeAA
         POIacCGLqQbivekmPOuaKnXsMIIMuF9O1XfBVy/bjf6keS7CMNd8ui7fUUgJIAvRwJlz
         l7pQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject:date
         :in-reply-to:message-id:mime-version:content-transfer-encoding;
        bh=yCs3NROrZCWzC+VF7Ed7Zg+CSfK98h/BzXS8h1Jj/QU=;
        b=t8SZnLOWhvYXojQayjpaUfzyEWVK0zszn4gVEhO+dLVk48zWWeMQLEPe6a9BxaadvL
         /rgxs0gJFqN1k4Ih/MWONWVB+ZKPK79iwrCzP/eCx9rSqYDY3FjsD5cCHw1SYtblcasY
         yF1fDIHg/3QvofpntPUXIvowYD4GORPkdPKqaDB9nbJSrJJZ1d4CCuTEtbMwjOliAB6X
         IzobmeJu5WB/cdG9PJAxHIPYgjPEtZDpRq5eteFqSm4qUBMhOJ2Ouh62T5BqDOttGtCe
         4e0g7c7e0jfSRooctSvU7FGq8+LCkBK17zhXEJ9P2/12KLOKYH1WiicVweBEyQSwXd/z
         uHng==
X-Gm-Message-State: AOAM531Rvm9fsotwWRbwzcnmZs3ElG96hMjju3/aqP9fWRqXfy4i//BC
	BRN2D7QuZtmNvwXfyUGBY9293g==
X-Google-Smtp-Source: ABdhPJy1iaXxai6Hx51lIotwF5rZjJLBc7BtWiAI4hGPo3ic5xutU6PRA6Srkqgz+SwhbQy8b//Xog==
X-Received: by 2002:a05:600c:3b23:: with SMTP id m35mr10450305wms.185.1624006522221;
        Fri, 18 Jun 2021 01:55:22 -0700 (PDT)
References: <20210426034709.595432-1-marmarek@invisiblethingslab.com>
User-agent: mu4e 1.5.13; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, qemu-devel@nongnu.org
Subject: Re: [PATCH] i386: load kernel on xen using DMA
Date: Fri, 18 Jun 2021 09:54:14 +0100
In-reply-to: <20210426034709.595432-1-marmarek@invisiblethingslab.com>
Message-ID: <87sg1feemf.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> writes:

> Kernel on Xen is loaded via fw_cfg. Previously it used non-DMA version,
> which loaded the kernel (and initramfs) byte by byte. Change this
> to DMA, to load in bigger chunks.
> This change alone reduces load time of a (big) kernel+initramfs from
> ~10s down to below 1s.
>
> This change was suggested initially here:
> https://lore.kernel.org/xen-devel/20180216204031.000052e9@gmail.com/
> Apparently this alone is already enough to get massive speedup.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  hw/i386/pc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 8a84b25a03..14e43d4da4 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -839,7 +839,8 @@ void xen_load_linux(PCMachineState *pcms)
>  
>      assert(MACHINE(pcms)->kernel_filename != NULL);
>  
> -    fw_cfg = fw_cfg_init_io(FW_CFG_IO_BASE);
> +    fw_cfg = fw_cfg_init_io_dma(FW_CFG_IO_BASE, FW_CFG_IO_BASE + 4,
> +                                &address_space_memory);
>      fw_cfg_add_i16(fw_cfg, FW_CFG_NB_CPUS, x86ms->boot_cpus);
>      rom_set_fw(fw_cfg);

Gentle ping. The fix looks perfectly sane to me, but I don't have any x86
Xen HW to test this one. Are the x86 maintainers happy to take this on?

FWIW:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 09:31:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 09:31:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144372.265728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luAqa-0006iH-5P; Fri, 18 Jun 2021 09:31:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144372.265728; Fri, 18 Jun 2021 09:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luAqa-0006iA-14; Fri, 18 Jun 2021 09:31:32 +0000
Received: by outflank-mailman (input) for mailman id 144372;
 Fri, 18 Jun 2021 09:31:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luAqY-0006i4-DC
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 09:31:31 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.216])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1f67fb9-c2f4-4d48-a458-f099cfa18c62;
 Fri, 18 Jun 2021 09:31:29 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5I9VJ2tW
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 11:31:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1f67fb9-c2f4-4d48-a458-f099cfa18c62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624008679;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=05YuHSmc3XDXNtUyxlgGBQ2DoWZwTzlzuec6EKXLOIg=;
    b=K2H8vukW6GTnZqw7GaRI0PSTtLgdE8+IAOfnXrwWpqOPOTQsN7b+k/DY50/4GbCAvA
    GbMmiyImj1iYSx1l8eFMgSViroWH3C5UVppgVzmzAUzRg6ZJwSjO1Sz5skuc5dYFML0L
    izDs54b6gHww/Ez2PFZ6bL0kyMzwD4SgJnM5EboTxFIDoglksSArGiExk1KVmd7Qw98o
    NFke+82GRTsMYP/wol50CjFKpK7emGYLdssPfzceCxAC/i7FyjcpruQ13kb4R2lzAU+q
    D0G+bc2ev/fhY7s1fQ4WY7hSNOFY2sREwtWSTJ/ha7cDHqMXSDNCxCWc1fY5dYlv//z0
    GrnA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: use integer division in convert-legacy-stream
Date: Fri, 18 Jun 2021 11:31:14 +0200
Message-Id: <20210618093114.1640-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In Python 3, a single slash gives a float, while a double slash gives an int.

    bitmap = unpack_exact("Q" * ((max_id/64) + 1))
TypeError: can't multiply sequence by non-int of type 'float'

This has been broken for unknown reasons since 4.14.

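The Py2-to-Py3 behaviour change can be reproduced standalone (a minimal sketch; the max_id value is made up):

```python
max_id = 127                      # made-up vCPU id; any value shows the effect

# Python 3: "/" is true division and always yields a float, so using it
# as a sequence repeat count raises the TypeError quoted above.
try:
    "Q" * ((max_id / 64) + 1)
except TypeError as exc:
    print(exc)                    # can't multiply sequence by non-int of type 'float'

# "//" floors to an int under both Python 2 and Python 3.
fmt = "Q" * ((max_id // 64) + 1)
print(fmt)                        # QQ -- two 64-bit words cover bits 0..127
```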
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/python/scripts/convert-legacy-stream | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)


diff --git a/tools/python/scripts/convert-legacy-stream b/tools/python/scripts/convert-legacy-stream
index ca93a93848..a04c6e4165 100755
--- a/tools/python/scripts/convert-legacy-stream
+++ b/tools/python/scripts/convert-legacy-stream
@@ -163,7 +163,7 @@ def write_libxc_hvm_params(params):
         raise RuntimeError("Expected even length list of hvm parameters")
 
     write_record(libxc.REC_TYPE_hvm_params,
-                 pack(libxc.HVM_PARAMS_FORMAT, len(params) / 2, 0),
+                 pack(libxc.HVM_PARAMS_FORMAT, len(params) // 2, 0),
                  pack("Q" * len(params), *params))
 
 def write_libxc_static_data_end():
@@ -264,8 +264,8 @@ def read_pv_extended_info(vm):
                           (so_far - total_length, ))
 
 def read_pv_p2m_frames(vm):
-    fpp = 4096 / vm.width
-    p2m_frame_len = (vm.p2m_size - 1) / fpp + 1
+    fpp = 4096 // vm.width
+    p2m_frame_len = (vm.p2m_size - 1) // fpp + 1
 
     info("P2M frames: fpp %d, p2m_frame_len %d" % (fpp, p2m_frame_len))
     write_libxc_pv_p2m_frames(vm, unpack_ulongs(p2m_frame_len))
@@ -405,7 +405,7 @@ def read_chunks(vm):
                                   (max_id, legacy.MAX_VCPU_ID))
 
             vm.max_vcpu_id = max_id
-            bitmap = unpack_exact("Q" * ((max_id/64) + 1))
+            bitmap = unpack_exact("Q" * ((max_id//64) + 1))
 
             for idx, word in enumerate(bitmap):
                 bit_idx = 0


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 09:43:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 09:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144378.265738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luB2P-0008DX-6S; Fri, 18 Jun 2021 09:43:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144378.265738; Fri, 18 Jun 2021 09:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luB2P-0008DQ-3X; Fri, 18 Jun 2021 09:43:45 +0000
Received: by outflank-mailman (input) for mailman id 144378;
 Fri, 18 Jun 2021 09:43:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luB2O-0008DK-7d
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 09:43:44 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c7ce48e-2f52-43dc-afc3-f33542252964;
 Fri, 18 Jun 2021 09:43:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c7ce48e-2f52-43dc-afc3-f33542252964
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624009423;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=txFXkZG6O0/GF3INfaD1B/qA4c2KXhus6XLruhKVUrI=;
  b=iaGnlduriAvrevEnSZ9TVuAql7DVTqu3F0RPRWgmZbTcGEFodrQX+60z
   CkYi09JMKC98LUoNHPXH1XJS1w2H8FqzRqq/UosXhY6lGxcej6sULn1Ar
   R/rPsA1qQthfuw3lLH7iWeQGdOa9B19RA6hljGkhm1cHDfmFNRzn8QNiG
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46172715
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46172715"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WudSoL8EhibUTnuLNxzMzeugsd1IY/3Lhs++nVtigPfq7XD9weCpymnK4gBQ1wXzm611/hDy8CJFfr4QNy4l5w+MeYcOovzIuDw+NCwRUi65KDVfqr6kHOrS8ocUKzkVoNwFyfJtAvmoxrmqLe4bqAqmzcGsZsGBYbnqFVSyFGghxLQ7deHrnlqOauL19s11j3AGJHmcdjQDWstI3m/Ilm5MRCJthe2+LZknViM+4kmtxX9HFVZlwlQ4whkiVUvi6SEZrDG8BVY/W0VgUOMhFbJ0FqSxkETXVXr+oDUyYmUiKA53I3qFaeM+v/a7P374KaXl+1FbF9P9Ue5qyNVokg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9sHJWJblTrr0lGRfGH6HlE2YX+loeCDTYTkU0KU8Rgo=;
 b=PvmbtxR/Fdn5ZXAo05UUSJdwK4WAcX6FeOGyNBrCBPJt2Megn863m/e/q/vm049y9FPH1jYUR438RMGnuXme4nshiEEmC66WocqlbdFQlry6sM+VIEiJ1hTbVbDZsh8lZVJzSBW8Y9zzxcGpmKVOC2LJ5Hhf+PryQH3++eS3s1MpP1v+Apr3LbEsz2Pf+JQ1IEJzqAEvqDD6ZMutyPfdqBgH4WfIrQ2HWw3EvU45GnGcQxRyITAjM7V0XBE0y3w4aZEde7DBzltYtVjhkPX+lcMTkWEx398HfeIy1/YjZZGfjG7m/5g5EovLj2KiG0EZI0QKlmrlaLIU03GIAFIuEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9sHJWJblTrr0lGRfGH6HlE2YX+loeCDTYTkU0KU8Rgo=;
 b=oM/spAMwQJ6ZTtw7NOkzYFBQraqjEK4yaDanr4vkdVG3o7wgs98QlrNMQCRWKw2SxA4EeRg/+u6zQskqEHDdiAtY/9sUAvV4m5/jtNjKRXq+kTQQIpNz00aXyMD3kRolhcYzaXPq81VYQt1PiEaiAg7zBo4AffyZCNCoyqu0SDo=
Subject: Re: [PATCH v1] tools: use integer division in convert-legacy-stream
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
	<marmarek@invisiblethingslab.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
References: <20210618093114.1640-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8a9be55a-ad6a-d06d-9ddd-0f2d656e4fac@citrix.com>
Date: Fri, 18 Jun 2021 10:42:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210618093114.1640-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0475.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::31) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dbe93d56-700e-433e-ec74-08d9323d78ff
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5584:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB558445F9A3BF4D7FF7B5B75FBA0D9@SJ0PR03MB5584.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: dbe93d56-700e-433e-ec74-08d9323d78ff
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 09:43:05.7096
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5QP7oXxGUlNrESUdvfKm6di+hsF8irE+kFkfbebo549XVFz6w2jBnMwekxIQKBKJct97PrfJ/zzgUg1XgypL/DSqaFKc77HALu1WzVXELx8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5584
X-OriginatorOrg: citrix.com

On 18/06/2021 10:31, Olaf Hering wrote:
> In Python 3, a single slash gives a float, while a double slash gives an int.
>
>     bitmap = unpack_exact("Q" * ((max_id/64) + 1))
> TypeError: can't multiply sequence by non-int of type 'float'
>
> This has been broken for unknown reasons since 4.14.

:(

This is a Py2 vs Py3 difference.

To not break Py2.7, we need "from __future__ import division" at the top
of the script, but I doubt this is the only script impacted.
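For reference, a minimal sketch of the future-import behaviour mentioned above (on Python 3 the import is a no-op; on Python 2.7 it switches "/" on integers to true division, matching Python 3):

```python
from __future__ import division   # no-op on Python 3; on Python 2.7 it makes
                                  # "/" behave like Python 3's true division

print(7 / 2)     # 3.5 on both Python 2.7 (with the import) and Python 3
print(7 // 2)    # 3: "//" floors on both, with or without the import
```

With the import in place, any remaining integer "/" surfaces as a float under both interpreters, so such sites fail (or misbehave) consistently rather than only under Python 3.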

>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  tools/python/scripts/convert-legacy-stream | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
>
> diff --git a/tools/python/scripts/convert-legacy-stream b/tools/python/scripts/convert-legacy-stream
> index ca93a93848..a04c6e4165 100755
> --- a/tools/python/scripts/convert-legacy-stream
> +++ b/tools/python/scripts/convert-legacy-stream
> @@ -163,7 +163,7 @@ def write_libxc_hvm_params(params):
>          raise RuntimeError("Expected even length list of hvm parameters")
>  
>      write_record(libxc.REC_TYPE_hvm_params,
> -                 pack(libxc.HVM_PARAMS_FORMAT, len(params) / 2, 0),
> +                 pack(libxc.HVM_PARAMS_FORMAT, len(params) // 2, 0),
>                   pack("Q" * len(params), *params))
>  
>  def write_libxc_static_data_end():
> @@ -264,8 +264,8 @@ def read_pv_extended_info(vm):
>                            (so_far - total_length, ))
>  
>  def read_pv_p2m_frames(vm):
> -    fpp = 4096 / vm.width
> -    p2m_frame_len = (vm.p2m_size - 1) / fpp + 1
> +    fpp = 4096 // vm.width
> +    p2m_frame_len = (vm.p2m_size - 1) // fpp + 1
>  
>      info("P2M frames: fpp %d, p2m_frame_len %d" % (fpp, p2m_frame_len))
>      write_libxc_pv_p2m_frames(vm, unpack_ulongs(p2m_frame_len))
> @@ -405,7 +405,7 @@ def read_chunks(vm):
>                                    (max_id, legacy.MAX_VCPU_ID))
>  
>              vm.max_vcpu_id = max_id
> -            bitmap = unpack_exact("Q" * ((max_id/64) + 1))
> +            bitmap = unpack_exact("Q" * ((max_id//64) + 1))

While you're changing this, could we make it (max_id // 64) to fix the
style (which I clearly messed up first time around).

~Andrew

>  
>              for idx, word in enumerate(bitmap):
>                  bit_idx = 0
>



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 09:55:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 09:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144384.265750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBDU-0001Ej-BQ; Fri, 18 Jun 2021 09:55:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144384.265750; Fri, 18 Jun 2021 09:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBDU-0001Ec-6T; Fri, 18 Jun 2021 09:55:12 +0000
Received: by outflank-mailman (input) for mailman id 144384;
 Fri, 18 Jun 2021 09:55:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luBDS-0001EW-FL
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 09:55:10 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.167])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82e0e802-2a7c-419d-b1a6-44bfd868d6ed;
 Fri, 18 Jun 2021 09:55:09 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5I9sx35T
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 11:54:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82e0e802-2a7c-419d-b1a6-44bfd868d6ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624010099;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=a782ZQObEIVx2Ci6TYWBQEYT3BMvOnOng+Nmakl4Drk=;
    b=dkwXdu1Aikow0YuilOZM0Yg+VAmIU1U/Ex5dUqWN62L0ZOvlp8klZkHiWa6Xpqe4Vw
    iKI3oT1ag8Kt4Y0pzNpAMp8QSPXOJprt1GkbYFpAP2gT3mtIHXAUdgyADdg7Xt0bjfCA
    G5ebyGBafsACU3fwl2Qd0QkdMDjf/Bq9Mhx7wIo8L1uK7mojnMTc1e8W6GeRYRypwJ8s
    h7zWithdKqRI3n/vrDeUtcxSs6TepdoLkHnm6vT3b3tPVtMEPo5zAcFwJ5FZsFmnAqVq
    31L3x5busgVqMWwDIAghwdYd/fECh9fmzhdt5CFYOxLKSU5qYZtPlyXsTsfBsr1vN8wx
    J10Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Fri, 18 Jun 2021 11:54:47 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Marek =?UTF-8?B?TWFyY3p5a293c2tpLUc=?=
 =?UTF-8?B?w7NyZWNraQ==?= <marmarek@invisiblethingslab.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: use integer division in convert-legacy-stream
Message-ID: <20210618115447.190922a4.olaf@aepfle.de>
In-Reply-To: <8a9be55a-ad6a-d06d-9ddd-0f2d656e4fac@citrix.com>
References: <20210618093114.1640-1-olaf@aepfle.de>
	<8a9be55a-ad6a-d06d-9ddd-0f2d656e4fac@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/2Z7qax8sLXjOX1d1U0i61Sv";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/2Z7qax8sLXjOX1d1U0i61Sv
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 18 Jun 2021 10:42:58 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> This is a Py2 vs Py3 difference.

Indeed, my build wrapper changes every 'env python' to 'python3.4' in 4.14+ builds.
That explains why it happens to work with my 4.13 builds, where it changes
every 'env python' to 'python2.7'.

Olaf

--Sig_/2Z7qax8sLXjOX1d1U0i61Sv
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDMbWcACgkQ86SN7mm1
DoCshQ/9GDIo4Tj5vAmXaUqae2lcDem+0joYdFyAofzjo/63Tm4mfn7x9epIapIt
AxL/6RbUMqIhI30Gv7uabur20VLAMJGH1dNlZBWTV14JvkLepbgy5ib8jpjUb0Ue
SJEhLsbfgKSvgAX9gulR9xe3xxxGu6+STKsJaGh1oBmsv7nqmTxKaq6WtcwalyE+
LAxMQHe/HQOqG5pBqh1xPngIxmlXLmwhnfvk4JMg6VdQFMlXEfrXOGaMnHkD9oly
E0Alrlb0h7j6ZEdRAlOnNF/Gou9o/AGs+VeE8K4trX6PNth0Rnvr7LCOlpLjDcWA
FaWr4+K8grNz8/ORbxKfMO/dFr3pzkrwd51QC0eg2va7RltE1SE+N/XQhSTya2Sw
somPsmRFIeCW6huapZ8eX8nOLFT1kdpS021BV3xkGPc4tNEARz1zOSMGHwl2zYLU
OqthbdDWn6kROMybgFQ1rKUw8eWnd52NtzXDNIio/BbR4gPIAakxcI3q1/bIi6lL
5j7Gl5cvmRZ6afIdr1xfN7B4A50h4ZciowvmWvspJChesj4muKqdw7O+mLlWpu0G
QKt9PINVQUzX5qTF4kwuQpbZnt3rRTaMepODU2oXwBUBREo6aCiC8BvcKMbQct2w
N1gmm2E507opvqgyKDmmVLaCWgdl2wHRPog/fSrL7Q/1UwCpi5Y=
=4yt1
-----END PGP SIGNATURE-----

--Sig_/2Z7qax8sLXjOX1d1U0i61Sv--


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 09:59:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 09:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144390.265761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBHl-0001tm-T4; Fri, 18 Jun 2021 09:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144390.265761; Fri, 18 Jun 2021 09:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBHl-0001tf-P4; Fri, 18 Jun 2021 09:59:37 +0000
Received: by outflank-mailman (input) for mailman id 144390;
 Fri, 18 Jun 2021 09:59:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luBHj-0001tZ-ML
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 09:59:35 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6053c5cb-444e-40d8-933a-def7f51cef72;
 Fri, 18 Jun 2021 09:59:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6053c5cb-444e-40d8-933a-def7f51cef72
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624010374;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=N0UYkNiivjKlqvSzIpY1cnZ/WzS/V01fIIPrt/1IO3U=;
  b=iD+Y4ZItCTOi+RFSY7LvQJt06j/EZW3sNW4QvFbEPqNUFv8zBL3SZ3Uc
   AuDcyRKxx/umNAdeFhLec2n9v37FH2vnqLesYdQGSFbUxJBbHl7SFSdUH
   FVbfUsaheBkYltfFZBoC9MgJ9KLN6dHvt9oj0fbWLmNmoT8rmIph33f1X
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46173512
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46173512"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MQWJmDfa0CrUTQ3qQ7+LaYIMauNau9v0qnUKefYTXZ7rmbYIE283TY0JYUVAl+74Q1B28U2b4i1AIHgVfEVdkp80SXXO3BJVIyvgr6ESKTkm6p4vjcFfV35KRPxEG40kxTmClshP3+5YhWFEN7PaETKzs9MvXhlgHlunCXZqlfKoH3w5tLugfiNhYDfEM8dsDDSRrCfNTWnw1+ayRqJ4X8HThcgzkekWttgbtV7p6ARd3tO6rKf0FCg1pxMDBmp/Km7thMnpKywua7rZZrdjirnj1JxEai48a8h4iWJgr4IFUwByevHECvPYmcZdXH0bOf4K0V4iLGgnEaG5qVF9JA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N0UYkNiivjKlqvSzIpY1cnZ/WzS/V01fIIPrt/1IO3U=;
 b=HDDY5hOmUD69nLY/eCfzlY9RsYaZOX446g8qMJk8C/i0YXYBf1utiX9vcFrwQVlI8iCoQYs2R7t34n/O6mR8Q/1EKsNE1ZP9UlxKxv5/TxPw3yYM4mrq8chKxS0BmEbIyyfgSJiSAyhqpwA8CiWfkI32RUs+D36URMnNrTyMd/gvMkTFRySe9EiRfTghW7hPKFSw70OQdqvbeUvtPkpR3gegbeBX59bKNst6Xp5aku48E1W3RNxIDTEzTdHQwL5ag25bryqnVR5pU19CAeJruQu75gPQnPAj22jyHYmzinBrnv68RqYI/qSPsqEub/s8fZVKn94UvUZ9ulWgC3qJHA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N0UYkNiivjKlqvSzIpY1cnZ/WzS/V01fIIPrt/1IO3U=;
 b=D5W4PM4JFEHLVmSWWROa8JbO6wRhxi0DJRC/BkEieeptKjxdBBFcJ+XvnfPVbeTMUgL8o4r+wtf6QDcNlitZQJ9cNJk7fwvWP+PZMWDJP+CldaRKl9QFYYjWvBL9hbagx8LHOAV0VACO7SzeoW6T7REhfIS2O6J/WnNvAIgUkhE=
Subject: Re: [PATCH] maintainers: adding new reviewer for xsm
To: Stefano Stabellini <sstabellini@kernel.org>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
References: <20210617234955.18489-1-dpsmith@apertussolutions.com>
 <alpine.DEB.2.21.2106171712010.24906@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <635b99e9-815f-edbb-52bf-dd6465bf16c9@citrix.com>
Date: Fri, 18 Jun 2021 10:59:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <alpine.DEB.2.21.2106171712010.24906@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0077.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bf8f80fd-66a9-41ec-8c44-08d9323fc47d
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5870:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5870B86BEED58D7258A1EBB6BA0D9@SJ0PR03MB5870.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?bndDWkRpck83SDNjbUFRaCszY3Q5cjJKd1RkUVhmaEkxcnpNL0dnSHBNSHE1?=
 =?utf-8?B?Qjk1UXl5cEtqZml0Tk93dWgzeW1WUUdoYTdkYVpDZ2NwQmY3VTR2Mnlvb1Av?=
 =?utf-8?B?Z21xcjBzVGpmdGExMU9CaE5pOU9WYUNUeUFCUVJqTGo0bG8yb1N3UVN2N3p5?=
 =?utf-8?B?cFYyV2lpb0hLV0VzaC9aZGlDazcvNmFpVkVUSk52SjVSSWJoTjNzYnhzTTZl?=
 =?utf-8?B?MGxUQ0pLYkZaOVk5eUNseWFvRkw5SExFKzV2UUgrRlAzdENtQXZUTVVqM1BT?=
 =?utf-8?B?aFB3NUp1cG5tL3hQVUZmVDhqbGY1ZTFGTGt0b25jL1FnSzBaTTZVamM4QTlp?=
 =?utf-8?B?d0I4UVR6YTJTMndWblJscE5kMzhpYkVScUQreWd5VjROSEVFemx0MWpCZ3k2?=
 =?utf-8?B?cHB0bjcvam8zWWdoMTgyL1VmdzBwRmlqdEgzbWpuNXlsS2NGcjkyZXlIR1lY?=
 =?utf-8?B?VjBTQU9FTjM4cjJWQ2kyNlhxOWNpdkU1OUpwNGo5dDVUNzU4MnJsRHlsc3ZP?=
 =?utf-8?B?aUZ6ZFNLTzErbzBLT29Hc3UxR0l5eGhhSnJLVkkvVHlYaThuZXVVdjluSHJt?=
 =?utf-8?B?QTgxNkpaNDNuZXBsT1RWRFZjNktoNGo0OU5YZlhWRnBUZElIL0EzQ29rVDBy?=
 =?utf-8?B?K3FKT04xZld1dlhOaXNYSVVnVkFteHJ6YXZlNjk0NnZtOUF5VDFBa2ZCditW?=
 =?utf-8?B?ejVnRGxUenF0TWlSeC9yeDVxWXZmaUYyc3hxWVpkc2NDSnU3YlZhd3pmb2pt?=
 =?utf-8?B?YUY2cHBzRWsrcjNHM3FiendYYVJJb21OK0szMmRPWktraUZWeFlCRHl1NTFu?=
 =?utf-8?B?cGhWQUs0MDFOaHd5Nm5RcXhuaUdwKy8yNHV3MkxSL1hOWXJYeFh3VWVLMXRy?=
 =?utf-8?B?V1pYMVVsdXBWdjFwc1FhbWxYZUMwNlpjc25JSHdUMXV5eEZrRmlON1J3UzJn?=
 =?utf-8?B?NjhZbkxyOW5QQkJjVS9vQUh5VGh3VEhWL043UHM5b0M5aWlISUQyWGo1clBD?=
 =?utf-8?B?REJXclJiM1pIVlY4Z2JVYVRmQXowbzdyR0VMZ3JTUEhvZWtyS0xrdloxNXBo?=
 =?utf-8?B?YnVnS2FEZDVDaGNtbnZ2ejVPMEhRcjg5SUo0ZXdPQXJSQUFERWpLTEI5ZnRM?=
 =?utf-8?B?SHpkYXQ2NWVQc2phaDBydjZpQjlaVmVFK1NmSGdxNEt2TzJGRnhwKzM5cHZp?=
 =?utf-8?B?ZElPekFRYVliT0NZRm5IRWVKM3FpK1JYb1YvSXNub0NsK2ZzOU9qMjFyVXVS?=
 =?utf-8?B?azdFU2xNaElwZlNHaC8zMnNENkpEME1TU3JyRC9Zb0s0cW5ueWJzVmlmWHE3?=
 =?utf-8?B?Tk93cHpucWF3S3NkTXJMSFBSQzZGUVhSRnN2SkVHd3lDZUg4SXBwQUQzVDVu?=
 =?utf-8?B?QzlmSVdQOGs2NFhGc1BCbVo4YjVVYnNtVzdFRWNJbzYyZXk4eC9hZlNHb0w5?=
 =?utf-8?B?Ykc0U1dMY0dHY2twZ210cGFNUk9rTXFkRE5mRUxPZjlhdko1Rms2TzhkanpG?=
 =?utf-8?B?MHNueVMwY2pYbEtlbE5RcytXby9YemFiK09tY0Y2U2ZyS0VUVVZsUnVuT3J6?=
 =?utf-8?B?M1NwSWhtemZRV1BlYnNhZGl2T2NEdjVydWpteDlBc3I2RThtMXQyazkxaW1C?=
 =?utf-8?B?RUdMNnJsZ0VFakV3aDFDdUdwb0FCd09tTHg5clpFZ0t2alJBSk5aZzkrKzBk?=
 =?utf-8?B?T1RISU5oSmw1cVBYMmtZU1c2dHpkQkl5VlNPeEZUdVpEcnhkZ2RpMkEwdG5R?=
 =?utf-8?Q?GL99qpPojZMYtbC4/ajQBm09XzncZkPUPG4XO9C?=
X-MS-Exchange-CrossTenant-Network-Message-Id: bf8f80fd-66a9-41ec-8c44-08d9323fc47d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 09:59:31.1865
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5870
X-OriginatorOrg: citrix.com

On 18/06/2021 01:12, Stefano Stabellini wrote:
> On Thu, 17 Jun 2021, Daniel P. Smith wrote:
>> Would like to add myself as a reviewer for XSM.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:15:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:15:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144397.265772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBWt-0004HK-E2; Fri, 18 Jun 2021 10:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144397.265772; Fri, 18 Jun 2021 10:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBWt-0004HD-AN; Fri, 18 Jun 2021 10:15:15 +0000
Received: by outflank-mailman (input) for mailman id 144397;
 Fri, 18 Jun 2021 10:15:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luBWs-0004H7-Fh
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:15:14 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0de32527-2123-4f4a-bc46-d555eb2f36aa;
 Fri, 18 Jun 2021 10:15:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0de32527-2123-4f4a-bc46-d555eb2f36aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624011312;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9cL0jfa7gu+vak3Z+tix5HDGpbgymew93gq1QkmhVkU=;
  b=XMm+UH3+uJ4s8+wDhDvWX+iScgAUXMAc+02qGFi+3yK/Oou1ToNVCcCA
   Nlr/rjNLS+U/SIe9lnNrkdczEdsUZMU58ct5v2lB5lSmd12LMdqWJZZ6q
   mwGcpr7zuFPakdV7oApESj/3CaIfTWN4juUF5167tqewSQyCa0wsAc9aS
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46447279"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oYijSs5H+YfN+SKleG5HxJrXs5ojrMbCfsis7erSI4E=;
 b=LPlkRVcwZRX4vbbuRIhIJMeM9UFEWzxAROaCj5POx94eiOXFJ7BOVWep7KMxLWDmXyLRp7I/uCwE9JjROqASHu0+9kZ8xg6K2EnMLOAW5LmG+0FabIqs9S1gx0eiOfCOUtM+Q792yxSS7Ou34RQ9bNSvG07Dp2ukr9GG6FLb7O4=
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 0/6] xsm: refactoring xsm hooks
Message-ID: <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
Date: Fri, 18 Jun 2021 11:14:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-1-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0141.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::33) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5d0736f6-ca1b-4316-c8c0-08d93241f1ed
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5837:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB583769E279DB09AD0717A9EDBA0D9@SJ0PR03MB5837.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 5d0736f6-ca1b-4316-c8c0-08d93241f1ed
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:15:06.6239
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5837
X-OriginatorOrg: citrix.com

On 18/06/2021 00:39, Daniel P. Smith wrote:
> Based on feedback from 2021 Xen Developers Summit the xsm-roles RFC
> patch set is being split into two separate patch sets. This is the first
> patch set and is focused purely on the clean up and refactoring of the
> XSM hooks.
>
> This patch set refactors the xsm_ops wrapper hooks to use the
> alternative_call infrastructure. Then it proceeds to move and realign the
> headers to remove the pseudo is/is-not-enabled implementation. The
> remainder of the changes are clean-up and the removal of no longer
> necessary abstractions.
>
> <snip>
>  51 files changed, 1309 insertions(+), 1413 deletions(-)

The diffstat is great, but sadly CI says no:
https://gitlab.com/xen-project/patchew/xen/-/pipelines/323044913

The problem is that ARM doesn't have alternative_vcall(). Given how
much of an improvement this ought to be for hypercalls, I don't want to
lose the vcalls.

One option is to implement vcall() support on ARM, but that will leave
new architectures (RISC-V on the way) with a heavy lift to get XSM to
compile.

Instead, what we want to do is make vcall() a common interface, falling
back to a plain function pointer call for architectures which don't
implement the optimisation. So something like:

1) Introduce CONFIG_HAS_VCALL, which is selected by X86 only right now
2) Introduce xen/vcall.h which uses CONFIG_HAS_VCALL to either include
asm/vcall.h or provide the fallback implementation
3) Switch x86's current use over to this new interface

The iommu_vcall() is a red herring, not adequately documented, and needs
to stay in some form. Specifically, it needs to not become an
alternative on ARM, even if ARM gains vcalls. I'd be tempted to rework
it in 4) to use the common vcall() by default, and leave ARM as the
special case overriding the default behaviour, along with an explanation
of why it isn't a vcall().

Obviously, the name is subject to bikeshedding. alternative_vcall() is a
bit of a mouthful, and I don't think that alt_vcall() loses any salient
information.

Thoughts?

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:21:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144403.265783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBca-0005eK-4l; Fri, 18 Jun 2021 10:21:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144403.265783; Fri, 18 Jun 2021 10:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBca-0005eD-04; Fri, 18 Jun 2021 10:21:08 +0000
Received: by outflank-mailman (input) for mailman id 144403;
 Fri, 18 Jun 2021 10:21:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBcY-0005e6-9c
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:21:06 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84f872d9-f83e-47d7-87e9-0ea5af0b22d7;
 Fri, 18 Jun 2021 10:21:04 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2051.outbound.protection.outlook.com [104.47.1.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-ifoYLjiJO5mrmltFyfeC1Q-1; Fri, 18 Jun 2021 12:21:02 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6862.eurprd04.prod.outlook.com (2603:10a6:803:130::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Fri, 18 Jun
 2021 10:21:01 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:21:00 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0163.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1b::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Fri, 18 Jun 2021 10:21:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84f872d9-f83e-47d7-87e9-0ea5af0b22d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011663;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=dAYCbiHz5W5de7kCOn2UsXKOpDE9eB18Ff1fPYV212I=;
	b=EsV17lEyp9C3Yu8gmZSo/x8qacCCHafpDZAZyNbugb7RgqLYDBp92vQ8b9Hx/h7XBeVNHN
	kWjpF1UcmYaX8It9lOwpP7ww7aMuQIwU2/R3bBhLmDMTdd9LgqOJwHEGopl2gzjGUqVkCh
	pEGznZdRukNKpUNL8u+mRvpJ1oOIHRY=
X-MC-Unique: ifoYLjiJO5mrmltFyfeC1Q-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] allow xc_domain_maximum_gpfn() to observe full GFN value
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Date: Fri, 18 Jun 2021 12:20:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0163.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1b::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 54da9493-b0a5-49dc-ac73-08d93242c536
X-MS-TrafficTypeDiagnostic: VI1PR04MB6862:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6862578B731D487B1249DDFEB30D9@VI1PR04MB6862.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 54da9493-b0a5-49dc-ac73-08d93242c536
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:21:00.8850
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6862

The present remaining osstest failures are due to truncation of the GFN
resulting from the hypercall return value being passed back through the
ioctl() return value (on Linux and Solaris), which is "int", plus the
same for some further internal interfaces (osdep_hypercall(),
xencall<N>()). Some of the memory-op sub-ops, like the one involved
here, may pass back values which don't fit into "int".

Different effects can be observed with a 32- and 64-bit tool stack,
each causing one test to fail. The changes here will only deal with
the truncation resulting when sizeof(int) < sizeof(long), i.e. only on
64-bit. For the 32-bit tool stack case to work in such a situation,
yet uglier hackery would be needed. But even if the full value got
passed back, we'd then hit:

#ifdef __i386__
    /* Very large domains (> 1TB) will exhaust virtual address space. */
    if ( nr_pfns > 0x0fffffff )
    {
        errno = E2BIG;
        PERROR("Cannot save this big a guest");
        return -1;
    }
#endif

in xg_sr_save_x86_hvm.c:x86_hvm_setup() (and there's a similar check
on the restore path).

I wonder to what extent a guest property can legitimately cause an
osstest push to be prevented by causing a failure like this one. And of
course I'm also puzzled by the ovmf change having managed to make it
through its push gate.

Note that I can't tell at this point whether there aren't further
issues, as I've not actually tried the ovmf case. I could easily see
there being OOM issues there then, once the full value gets used for
setting up the p2m monitoring during migration. Or processing might
then take overly long.

1: x86/HVM: wire up multicalls
2: libxencall: osdep_hypercall() should return long
3: libxencall: introduce variant of xencall2() returning long
4: libxc: use multicall for memory-op on Linux (and Solaris)
5: libxc: make xc_domain_maximum_gpfn() endianness-agnostic

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:23:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:23:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144409.265794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBew-0006J3-Hr; Fri, 18 Jun 2021 10:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144409.265794; Fri, 18 Jun 2021 10:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBew-0006Iw-Dx; Fri, 18 Jun 2021 10:23:34 +0000
Received: by outflank-mailman (input) for mailman id 144409;
 Fri, 18 Jun 2021 10:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBeu-0006Io-Pw
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:23:32 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c712d297-e757-4cac-9395-80a200201b2a;
 Fri, 18 Jun 2021 10:23:31 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2053.outbound.protection.outlook.com [104.47.5.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-8-DXMgaPD9OluqSot4-rIl6A-1;
 Fri, 18 Jun 2021 12:23:29 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Fri, 18 Jun
 2021 10:23:26 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:23:26 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0015.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 10:23:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c712d297-e757-4cac-9395-80a200201b2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011810;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=am03MNqDtS9SZU1GGRI1+xD+TyxHlgkohOfTvVtAEEs=;
	b=YovsMkCQXfHQeZH0FxFhejiCgsFhsGxlHVJ/3fwXVgPw+e2eJEBGtuKn47Pb8szKay7Wur
	wbIAC8Pb0vHH4gNo5jOfwRtSTDADUS8kOBbLABFF9FzFvXHjfwGbcaDHo+UalSLT7TmREI
	0XhkkBtxbjJFR7QzxepfG5KQPvKD1UM=
X-MC-Unique: DXMgaPD9OluqSot4-rIl6A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PiB/hRNMUFVLL4X8PtBjATMaJmhVOiqh6IlIUerfthYw4G/KRXQzFbg4mxNkwM+XWWAv0c0f8i300NwkHgiRFFOvS4pOVFRCAStM+tEYofV7OVgJKbqcvbmvNN9Y+t4gCbQ0ZpirHCv107wek4ywpvz3Uh6HmfCxxt9wzUtsjVKLCa7kcTrQF2wf8boNo60rwU1X1EkKaUW0kFVITouo90xxT3Ubc5vpA0ALLTWCSN/y4oVhaHRepMtwRm2n+dwtwqFtPJtaUugQLrknpqpYbw0nui+DF6KIgKP9S1TOjg8IebED1vzVtO5ghqdLX6/LAUy+mHpe/2ngzDxIAwgiXQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=am03MNqDtS9SZU1GGRI1+xD+TyxHlgkohOfTvVtAEEs=;
 b=m/kQyUHT5UKi6rLD6NVF3IO1yNgRpa3Kc+JxpQLA09BGKsbFiL0K1GEOCwGHLf/wUR1SAzmir7lAI7IiVGpd8kRtj0q08ER6Czxyepq3kZJSX6RkmvQt/m6TZVZK/L/14mfdNgUFivVIwiws4fcYQe1EsGFhVId5jjG71VTCVANAecpJuvph3MSiDbt31TiCcTkFs5WGBhYGxIcvjOIJPblerZvvYXNelku+b/gIJV29vJrXKLP5BHyIxc31AatPUphlPnspVtMxuPYq2aiC/chveOUkR0xb3CEIEqiUJRFbj9KS5nsS85u4hXcl11+9iOCsAy6kjbPJT1unwn0GGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 1/5] x86/HVM: wire up multicalls
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <c3c5df61-32f8-1ffa-aac4-0c83d620d385@suse.com>
Date: Fri, 18 Jun 2021 12:23:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0015.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 667b54ce-321f-4e77-ea13-08d932431c26
X-MS-TrafficTypeDiagnostic: VI1PR04MB6864:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB68646CE29D933180C12C150AB30D9@VI1PR04MB6864.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SpKeuOYxzfWL62z6QrSobIJ/O7wHY9N7AZJFcj9SQo1yax1TymzwOPOnf0CzhZ75ukxYu366tvF7pSJtQl6EVdGoeHtWbDsJMxgvyT52kjdZoO7IuaBrGVzt72v8j8+Jg5/VZfW+pV+/EWIgsY2XoPDRxFY8VUXcxsRGDY4guPaPzyPediZuERst5aO4NTTL8jnSSYvSUetazwdkEX3TsaBXvzMet7Hf7CuJJz4IpPQycB1LXQsw+WNjfoeLTs2O0tSzAka+IGu7oLlFu/KE4MJr3B9sgMBb2+gUEeA50N89pcqFs4SB+bs7aoXU6aRRRNrw6oqF4EmPlvFbAkCiLiI0ou8OjFG3Ut4uQX2RGAHryvIhp8rNnpFN1oUlXlOc7W0ZiOJ0s8WVzhP6Kb6d/jeuVDgzdiSf/vi2pn2E0Y1BY5gMe4QkObqXOJIRFGkbgRo/Vi1A5cD6EQB7wFCz1HJJdVfIQYpIjBWQRGMwmAk9H+8mp3Nkua254X9osZGSyAifOnW2tLAqsQ4HkWrYviq7jILADectC9MHRnG2WA8uMQ9biTVoWsnQfgJ92cVhwu2tUqBBJyM5kUnU3QyW/PtpF1EQ1Ojv5nzkcoP3iy58yGagx6TLpSFv6mNLmXPoa+7aWRSb16MgDlWmSteR3GZbk4cbpxmUaiy0IbGMtBBCodI/M/JSodRTNiEnrtXs
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(396003)(376002)(346002)(39860400002)(2616005)(956004)(83380400001)(66946007)(66476007)(66556008)(6486002)(316002)(26005)(186003)(6916009)(16526019)(478600001)(54906003)(16576012)(107886003)(31686004)(38100700002)(2906002)(8936002)(8676002)(4326008)(31696002)(86362001)(5660300002)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y0FWdUR1R1Nyd0pBNllKdEowM1o1SllXNGp3V29OeWFLQmV3VkRhcnd6cHQ3?=
 =?utf-8?B?SzRDUzZHeXpGbWs3NE9TR0MzalZ2STREc1JKVXlVOVlUYnI5ci9sU01sR3g0?=
 =?utf-8?B?SncwZlBXdDY0UXhQZFp3M3h4VUxhUlhOYWRVQzZ2UXBEcHFyaGY3N2N1MWlq?=
 =?utf-8?B?NFdiSWhSNVlZWnA3YnM1aWY0UjBPUkFiQlRrdXBZei9DZURiODNSZWQvcG5Y?=
 =?utf-8?B?UWR1dXdSMWt6TVVCak9jUHBLN1p1blRlWXFzR3pNdkM4YTZhU1ExaDQ0NHdp?=
 =?utf-8?B?NG1DOE14dmNuUE9VQjVjaHhmNmo1ekRaZlNqdHlBbFlaQUdvR1hEMnFyZml4?=
 =?utf-8?B?WHovSWdraTNjajVJLy81UDNSN0NQTTdEL3NRZ3AxamFiYmVlTkNDSHljbUlR?=
 =?utf-8?B?STVpR3pCUHZWT3pWVm1ZYTVCbzdNN25LR2pZclQ4TWJzQkJmVUx3RnlYeTA0?=
 =?utf-8?B?TmV5dnR4bDA3WnZtTmlBelFvWmJMMzZpampCSU1yS3NSR1YzanhqREJFMFV4?=
 =?utf-8?B?cWlISC9ybXFxV0tQbHFDdUpPM2Fob0NWWWtURFFmOWc4TGl5MXdGWDI0cVNW?=
 =?utf-8?B?ZzJJT0hjNVZQanh4Lzk1aWZ3VVNjZWFkbS90RTg4enRTRUFGcS8vUWJRQW03?=
 =?utf-8?B?RUZ4VkFnKzFDK1VQZnpabmFLZm14SWFaSmtoMUFwbFUzdTlNcm01dzJFUnpr?=
 =?utf-8?B?YVRzNHFQUlFVUHhoZXJWRW4zczczUTdzN2hxYjdzbk5uUHdHdWNaYWY3RHhu?=
 =?utf-8?B?c2NBUE8waXpVeXNESVdlZk1IUE80ZldqU0EyLzA5UHZqOE5sTFJob0V6b001?=
 =?utf-8?B?Z29oZXMyZHJONjd0dTZRMlRsRUFTNHROL0ZMcmNlejEvWHFLSlh4OXdhdk9O?=
 =?utf-8?B?RkhNeDBXOTE4dUEyOVFnOGV5SzhPZ08wcFBVWmRxYmwvbEJER2pTQW52MVRO?=
 =?utf-8?B?b1NsbnhiQmFvME1IVTJTSEJpSkE5UWZiWDhTaXRkajJGUzdhYWpnNXdNTEtV?=
 =?utf-8?B?c2d5dWhvejU5UEE1L3R6SWp1TytTdENuWURVMnRkVmpUc25SckQ3eC9GTUd1?=
 =?utf-8?B?WjZhZUVsN2E2WUtRZENSQmhWdXJXWVBmNllRb1dSWEhSSmk3N3JvVWNReWlv?=
 =?utf-8?B?aDdXdHBMbEJYUE9KTUJEUUMrNGJBNEJhSFArS3ZJTEhSa0d3ZjdUa1BmOXVP?=
 =?utf-8?B?U2IxOFl5dUx5anVWdmdiVUhVTDluSnBwUkdLcVh4R21PUTdkS2poM2UzTEVG?=
 =?utf-8?B?Nkpid29rVE1ncEtiemRtTXFpbFFHbUhPbTVrbCtyUndxV1VZQlJlM0QzdjlH?=
 =?utf-8?B?OHRnbkRSSkM4bEg4Q0E1dzd5eDhGc2UxYy8zVnlmSFF2bUtsVHJuSnB0LzZQ?=
 =?utf-8?B?VkxuaFZ6eU5CTGFpeEwzS2Z1VE1UNzBRWllaMG5wVW9Ec1dhODZvODM1TzB2?=
 =?utf-8?B?QmU2NVMxS2Q0cHgzMlhnRjRLRStIRC9QZC9nTmNRb21lamZnNnJJbXBDYnk0?=
 =?utf-8?B?YTU5ZVkwb1hXRkw2bi9FTmd4b2VpdlkvQkx6NzdKUlBHc1NGbWIzNFc0Qmdp?=
 =?utf-8?B?VjRNSllOOS8vZ2FEeWpUQm9SWFlUYllFTlExSExuSkMwZnZKOUNZSUFQNnRt?=
 =?utf-8?B?bUY3dVdtcVJSc2YrK0YvOEI2Ry83SDAwOUJUR01ja053MmU5TGIrczl3YzR6?=
 =?utf-8?B?SWlJWjNwY2JQRmptZEtJVjQ2SG9MRnFURVA5YWNuVloxR1M1alhpTEhIUDIy?=
 =?utf-8?Q?tuSGY4iwz2cFeN/bEiKG+9cu8WuV++V8VI+N/7D?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 667b54ce-321f-4e77-ea13-08d932431c26
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:23:26.7205
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IWuUAzJ6QklGopWelo6pGPVxxX6Yi3uzwXqndQk13mLUTSwVVju5QqlIRWxGnOW1VwYDNP47XWGlDVrj06v4fw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6864

To be able to use multicalls from, in particular, the tool stack, they
need to be supported for all guest types. Note that xc_resource_op()
already uses them, and hence would not work on a PVH Dom0 without this
change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/emulate.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/viridian.h>
+#include <asm/multicall.h>
 
 #include <public/hvm/hvm_op.h>
 #include <public/hvm/params.h>
@@ -125,6 +126,7 @@ static const struct {
     hypercall_fn_t *native, *compat;
 } hvm_hypercall_table[] = {
     HVM_CALL(memory_op),
+    COMPAT_CALL(multicall),
 #ifdef CONFIG_GRANT_TABLE
     HVM_CALL(grant_table_op),
 #endif
@@ -334,6 +336,39 @@ int hvm_hypercall(struct cpu_user_regs *
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
 
+enum mc_disposition hvm_do_multicall_call(struct mc_state *state)
+{
+    struct vcpu *curr = current;
+    hypercall_fn_t *func = NULL;
+
+    if ( hvm_guest_x86_mode(curr) == 8 )
+    {
+        struct multicall_entry *call = &state->call;
+
+        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
+            func = array_access_nospec(hvm_hypercall_table, call->op).native;
+        if ( func )
+            call->result = func(call->args[0], call->args[1], call->args[2],
+                                call->args[3], call->args[4], call->args[5]);
+        else
+            call->result = -ENOSYS;
+    }
+    else
+    {
+        struct compat_multicall_entry *call = &state->compat_call;
+
+        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
+            func = array_access_nospec(hvm_hypercall_table, call->op).compat;
+        if ( func )
+            call->result = func(call->args[0], call->args[1], call->args[2],
+                                call->args[3], call->args[4], call->args[5]);
+        else
+            call->result = -ENOSYS;
+    }
+
+    return !hvm_get_cpl(curr) ? mc_continue : mc_preempt;
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -20,6 +20,7 @@
  */
 
 #include <xen/hypercall.h>
+#include <asm/multicall.h>
 
 #ifdef CONFIG_COMPAT
 #define ARGS(x, n)                              \
@@ -273,13 +274,18 @@ int hypercall_xlat_continuation(unsigned
     return rc;
 }
 
-#ifndef CONFIG_PV
-/* Stub for arch_do_multicall_call */
-enum mc_disposition arch_do_multicall_call(struct mc_state *mc)
+enum mc_disposition arch_do_multicall_call(struct mc_state *state)
 {
+    const struct domain *currd = current->domain;
+
+    if ( is_pv_domain(currd) )
+        return pv_do_multicall_call(state);
+
+    if ( is_hvm_domain(currd) )
+        return hvm_do_multicall_call(state);
+
     return mc_exit;
 }
-#endif
 
 /*
  * Local variables:
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/hypercall.h>
 #include <xen/nospec.h>
 #include <xen/trace.h>
+#include <asm/multicall.h>
 #include <irq_vectors.h>
 
 #ifdef CONFIG_PV32
@@ -245,7 +246,7 @@ void pv_hypercall(struct cpu_user_regs *
     perfc_incr(hypercalls);
 }
 
-enum mc_disposition arch_do_multicall_call(struct mc_state *state)
+enum mc_disposition pv_do_multicall_call(struct mc_state *state)
 {
     struct vcpu *curr = current;
     unsigned long op;
--- /dev/null
+++ b/xen/include/asm-x86/multicall.h
@@ -0,0 +1,12 @@
+/******************************************************************************
+ * asm-x86/multicall.h
+ */
+
+#ifndef __ASM_X86_MULTICALL_H__
+#define __ASM_X86_MULTICALL_H__
+
+#include <xen/multicall.h>
+
+typeof(arch_do_multicall_call) pv_do_multicall_call, hvm_do_multicall_call;
+
+#endif /* __ASM_X86_MULTICALL_H__ */



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:24:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144415.265805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBfQ-0006uc-03; Fri, 18 Jun 2021 10:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144415.265805; Fri, 18 Jun 2021 10:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBfP-0006uV-SH; Fri, 18 Jun 2021 10:24:03 +0000
Received: by outflank-mailman (input) for mailman id 144415;
 Fri, 18 Jun 2021 10:24:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBfP-0006sk-6x
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:24:03 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8e86627-16cc-4c01-b84d-763bcd93f4a1;
 Fri, 18 Jun 2021 10:24:02 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2056.outbound.protection.outlook.com [104.47.0.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-37-TisWPV9fOs6JfCxzx0IXkg-1; Fri, 18 Jun 2021 12:23:59 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2607.eurprd04.prod.outlook.com (2603:10a6:800:58::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 10:23:58 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:23:58 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0003.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 10:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8e86627-16cc-4c01-b84d-763bcd93f4a1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011841;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V7LABsQDsEgnYdE2iTKHmR19AgygtEHLnin18+tzGjg=;
	b=S7PMwGVY6tHA/YfTCMGq6P3ggea2XACdOXaYuVJb48VFJczsbaCrZjVnJC0feviglMYMvu
	+tB0yv/YO9/JhZA4+D/0MRWn4tPrp1q/kKyg3lWIKKHp15sl08V3axQ46Br8+38k+Y4ioS
	UJ4L7h4328JPhDksmN9VBssCj+MqN3M=
X-MC-Unique: TisWPV9fOs6JfCxzx0IXkg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C5PftO1tmGRbFNKFIX6jjW7+0oy5/rJW4KyjNQCOIQcC4uOrMD8FcxzouvBmLcrIOXUsaLK9TGGpiNK6bDfgxNrLaWRe5pa5mwCbXeEe+VxCMOmHbfBLIDbB3v0ZT31migZ7tqo8zOBKmWm6byK3rhTae7/V3Hpau5wNfXrewzg3+WDGgGEeQWh3XWRM0OGFBlAQbK7GUG98qFcAEWbP/LZlpeAf1G6yBb0da9RdSNPKIaRi2F7SDybLOo5h1loFLpQA5NKCQxyfdgzfqnat/n2IWAdgvp4/OZByK4kbRjhyFexiJ6RovrLt50b+HilbX6BkU1+A5xgsnw+1kSexDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V7LABsQDsEgnYdE2iTKHmR19AgygtEHLnin18+tzGjg=;
 b=JV5nxZdf2fi0XGNmnCox4KL1wDvkc0aF2A+XuDnk9ZUiXPRkTvCtivKhCTnVcorSzFeE1DYf2LUeH/eNMyBHQc7diqPy1BuGn5ZBna+RfNpOBZ703fQ30r1o/qMk+eNJH33dK4jObXnBBYIHxARJFC2f5I3GN0RNoDS8Z3MN5XXPxnH7jYnU6V1xtmU3PRtfnl+9UvRb0XH8cozmsHTO0AUcpHItH9fP1Fdd++/ddu/dSnOXEL8MfJLRPGFbbdCEqoP/T1JECexxXQW1CXgfTg39TEs14g3WGUuNGAHPnssqrntHAVIAgyVxBCI1x0fZsbY/CSeUTzUajE8XhY8h4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 2/5] libxencall: osdep_hypercall() should return long
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <798b7eec-e31e-1798-773d-c2865fba4be2@suse.com>
Date: Fri, 18 Jun 2021 12:23:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0003.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::8) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 236ba0cd-44f7-44a7-603b-08d932432eee
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2607:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2607EC5A3721BC0457B3ADF7B30D9@VI1PR0401MB2607.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	emXe1XBozK77IhQdKbiHdyH2lglCcaxkd8464YkEJCXvJU+gaLBYbdWZLMOlLG+XNaXDKNbmGOLb3yb4ai7KzlEFR8akGON6uxUmqJ0WY0AU8xEkJrT1NEytRyGJN/t1mCoxJxBO6FSaWy5MXpGo/pNF4WBIYAd82gTzdJAeYsX7NBapwXZR2D58Azixt7Q8Ez+EcuMlNZkE5UECzMxFx0fAMpwQzGbITz78MYQBg6t74ew7ICgULFUcJu3Htl5oT+XjYTIJ9YsVlSPuSTz1z/BSgLeTX7Y7m1wIReef/vw4rCiZg/Mf1RBDTnLi/Ixz/JEfYdQgDMOmxUlATYto+mzSffjjySyZm9oET7BAh9UHtPlStsiVv03L5HV7k1gu2/iT0cISjKhdJb9pRMsIUbVkGkLvYHxas51sy6G6+oxmhJpqhqWm/SVTQhmqWcVtBLO7n9z3EKODXrQ7dTxHtIUWJcTLrHrX+g/aR2nCSNV6tsVynqkRi0f7vgiX/Oy+0zNUG/LhovIRe8wHXu7NmieYHEg+iFI30CPp1sXGaxnSM5EsdOkvirwW97TUgyw/ZqHIzej3w9qUXkSvqHOtaxXSuJKEiJDdNDdFGEoSeMvwOYXgJ9QkrYQpFatuEcp75uvjgm0T2piSpH+4Kg83tUW46PukD9CGtzpVSbUEeAUv/Ri3nzQgadvM9BXbbuPD
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(376002)(396003)(366004)(346002)(39860400002)(54906003)(4326008)(38100700002)(478600001)(316002)(2616005)(16526019)(8676002)(2906002)(6916009)(107886003)(66476007)(31696002)(31686004)(86362001)(956004)(66556008)(36756003)(26005)(6486002)(83380400001)(66946007)(8936002)(16576012)(5660300002)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NC93SE0wc29IbDJnOVhsalV0SDhMNkpaZExkWEJuem15NXl6aFRFT3JKcDhD?=
 =?utf-8?B?SjZadTNBb2lTYUVkaGJydEFkTjAvMlhyVnRHZ1M3QkQxMkthNURuZzlSa0U0?=
 =?utf-8?B?L3habzFHcFlKeUhiYjRpWFo2ZzlWTlpzelRyekpVNm0xK21QVTIwTkdZN2o5?=
 =?utf-8?B?RDJCYjlOOVcxMU5SVG11VDY2VHJiZG1tSmlIMzhhb2w4Tm5BVFFqQUk5clo5?=
 =?utf-8?B?VVZyNWt3eWVIRFY2QzFCYnZqd2h2TjhEdktwQ1BVVHpEMFpFRTlvWFNRUml4?=
 =?utf-8?B?L0dBbWdxRitZN2JjRm9mOVRsT1EyWEVNR1pJUmQvRVF4V1FjcGM1b2JXZ25I?=
 =?utf-8?B?Q3ZGS1RiZTBUL0o5Skt4RjlxQUYxMXdCRVAwOEhLajY5RHkwc3p0eENBdkhL?=
 =?utf-8?B?Qno5cFJzUUtyYnBqbjJsVTcwc1dHVjM3bmtVQjZ3MEplellwR3NrMktzbUFt?=
 =?utf-8?B?YmwxSU5iZmtPd0VOZTBVU0xZSnFXV1l6NFN6amJtRVI0QllMYVNqMFdSczNz?=
 =?utf-8?B?cmpYbGNkUkRrQTZmWnQvckxtREtoYitWZkt0TEtiazBGQ2tGMHFTV2R2SHJS?=
 =?utf-8?B?cG1hNUJxS09OK2R1b3l0NERGUGtBZjVpMFJjeURXaGV3VDRuYlU2YTNGd3pS?=
 =?utf-8?B?anRBcmFWd2VRZXdobUNENnRxa1VqWC93U3ZZSzlxMm81VkNwNE9GajMrY1da?=
 =?utf-8?B?djRacUM3UkpJSzBGVjQ3UnI5THVGM0ZZTThBVHNGcGE3RWYyeFlLMkV3dFcy?=
 =?utf-8?B?eUkvYkJTdGEwMThlTTJFZGFNMGhkNmwxK0UyV3dlZUwrVDNuNWRrTWJZSmFI?=
 =?utf-8?B?WmwzbXNneXRWM3hmeGxscmZiaU0zaWJ4T0FnSVJYVzI1YkNkUGR2ZHpuaEt2?=
 =?utf-8?B?a1NPaDVXNCtKaWUwSFZacHBZMDZKQXIvekd4WWFoQ2o3VERhRG51Zy9pU2w1?=
 =?utf-8?B?K1RybkdSQmFLaW9LNTRoSlp5dzV4NGdKVVY1OWVrNDZvZm9ZYkR2ZU5jUXpp?=
 =?utf-8?B?eG1wczhacHF1YWZEUUJIU1g0emRYZlFOWHVBUEs2S0FMWUFET1VDNTVOOVFO?=
 =?utf-8?B?MExqVkZ4ZEVGR0lBbTJ2WDZlQi83cy9oOGgwVkFqdXhzVHBsY0ZHK2RKaHNP?=
 =?utf-8?B?VEpWRElRcHRYS0RWWmZUZmo5cVphb3V3dEt6M2dObHlLL0wzMjRHaE5kYXMy?=
 =?utf-8?B?a2VORkRGYlFuWUJVWUZDSDRKS0J4RlljS2VEWnJrV0lMa044aDRMN2MxR2h3?=
 =?utf-8?B?Q0tXVjdOV3VBRnlFK1c0MnpsUWpJcHkrVmIzMmdzQXNnclhHR2YwcWsxeUVE?=
 =?utf-8?B?TGZXYy9xS05oOWdZTHJ0cERKVi9SQlpLM2l5OUY1TnVsbDQvb3ZMTmJSYWxu?=
 =?utf-8?B?OFNSRURhTjVWd3ZoRVAyVnNmS0pSQ1VOMm96cVZwQSs5M3Q1dVhjcjhoemI2?=
 =?utf-8?B?N2lzQmxEcTBZUkQ1WTRJWUF2eVZEcU4zR1NHWGtRakI4c096OWh6c1dpaTR1?=
 =?utf-8?B?UU5vQVI3dkJneWphWVBHYUtaWC9BaVFyYlFtQWY1eW9IVHVOc3U5QjF0aGpB?=
 =?utf-8?B?RDJGVk1mM0NHbUpSWDlBZ0lkdjV6THVDWVNkSExPdFpzR01sa2JsUS9XTmtz?=
 =?utf-8?B?ZFZOMVpZdmxvVnVSbGVpN2FuVithTGRYWE0zT08xVE12cWVJWm1ycGRJNFpv?=
 =?utf-8?B?NEpwbE0rUU0xNW00M25lSkI1VU1JMWNYMHlEUnJvRXpjQ0FwQ1lROHU0d2w0?=
 =?utf-8?Q?R+ULyygt8o/XRyIR+1ILlfAsLxzQFWASbsiqKH9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 236ba0cd-44f7-44a7-603b-08d932432eee
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:23:58.2277
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9GmKam6HL3cAocIxiUQ9zK7BHcI7YgNjWPjMkChZpT915C+PLpby5lzNTN+MKwuSHQPLo2w8Y9WdUgkZ0zFVoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2607

Some hypercalls, memory-op in particular, can return values requiring
more than 31 bits to represent. Hence the underlying layers need to make
sure they won't truncate such values. (Note that for Solaris the
function also gets renamed, to match the other OSes.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/call/freebsd.c
+++ b/tools/libs/call/freebsd.c
@@ -62,7 +62,7 @@ int osdep_xencall_close(xencall_handle *
     return close(fd);
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     int ret;
--- a/tools/libs/call/linux.c
+++ b/tools/libs/call/linux.c
@@ -80,7 +80,7 @@ int osdep_xencall_close(xencall_handle *
     return 0;
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
 }
--- a/tools/libs/call/minios.c
+++ b/tools/libs/call/minios.c
@@ -38,7 +38,7 @@ int osdep_xencall_close(xencall_handle *
     return 0;
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     multicall_entry_t call;
     int i, ret;
--- a/tools/libs/call/netbsd.c
+++ b/tools/libs/call/netbsd.c
@@ -96,7 +96,7 @@ void osdep_free_pages(xencall_handle *xc
     free(ptr);
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     int error = ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -55,7 +55,7 @@ struct xencall_handle {
 int osdep_xencall_open(xencall_handle *xcall);
 int osdep_xencall_close(xencall_handle *xcall);
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
 
 void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
 void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
--- a/tools/libs/call/solaris.c
+++ b/tools/libs/call/solaris.c
@@ -80,7 +80,7 @@ void osdep_free_hypercall_buffer(xencall
     free(ptr);
 }
 
-int do_xen_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     return ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:24:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144419.265816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBg1-0007Uw-8S; Fri, 18 Jun 2021 10:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144419.265816; Fri, 18 Jun 2021 10:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBg1-0007Up-5T; Fri, 18 Jun 2021 10:24:41 +0000
Received: by outflank-mailman (input) for mailman id 144419;
 Fri, 18 Jun 2021 10:24:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBfz-0007Ub-TQ
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:24:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85329b78-6aa6-44a6-8731-fae0d3dbe456;
 Fri, 18 Jun 2021 10:24:39 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2052.outbound.protection.outlook.com [104.47.5.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-bjUvq9YROz6IVE91bhG_Og-1; Fri, 18 Jun 2021 12:24:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Fri, 18 Jun
 2021 10:24:35 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:24:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0001.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 10:24:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85329b78-6aa6-44a6-8731-fae0d3dbe456
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011878;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lJvvmVhJ6x/yxLap6O6VF1QkaR/Dz+vEuOFGNI8+yBo=;
	b=lUBiefxk9fiCLsvfP2HFiRGfWY3nZ2LikCC7U+Ck4iSDc+7PjrX5u7KBj2+1DFF5PfjMKX
	BtIIpl03AHkHL9UbH8BzvmRGFlu7PTByz6CJiO4uKnY8NUR1/15svMlOOEQcn+vhWsjmSJ
	bQSRtv1KH71fUcoUo3ybUFIx9qSy1zQ=
X-MC-Unique: bjUvq9YROz6IVE91bhG_Og-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=io7zHhlximNRb6H+4Lg5cKks9eQkof85SJxLkPJf+Hq9NM2eQzRRyyqHT0WRc7be6116ph33JYjkXlqNOQtccMjACve5771e8+4+jP+XQXnk96uMsTWYWhKFbv2t40KBw5VGh7tf7jYnSs4OJmBMeri6bJDJgZqhXJEc8gdDRVt1SNx7/Z5FOUWU4lxY5h6dswy5ebT+01Nk/lmVJkNrUiGSKuyW2uJ2CEVkdZjCRgYk5KmFQ3bwMYhdkeF1fULPicDWZHOL2+CEzvKPgsfWvqT8ocPcE8JRcE5edfRMG57JS6Oez9fSMvsRbga3UP3pnGc5Zgv6rtuWdIsTLccQDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lJvvmVhJ6x/yxLap6O6VF1QkaR/Dz+vEuOFGNI8+yBo=;
 b=AHyl1X4Q9B7isuvkctHvooAbyIBv25erq/5dgqmD7jL7uFOnA1zCp9Is0nzvlVDIh2OG7CiAqFax4ARqa7Tk93MaHoxgHDXWYjHb57tzJ1XBNvknEohn6fBUE/ZXD+sERFYSyiRkzW2xRZgWFBhogaUGqx4OGrKVUbSEBMtagAIgNMaPmSewA0qCqVQajVM6OnezNEjTO22nNObxiuoM5zlFMKimwj1jnIY80CGHqTbwBlgRhEFTjTlJWMVOjzSeVl6YZBCUJZ6C3Kd0yPavvW27oHUCkXX53t4j4QiYgPi3uh07q3DCWBNbeV5z2WnkEPz3WH/J0ppeedBKi+gGnw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 3/5] libxencall: introduce variant of xencall2() returning
 long
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
Date: Fri, 18 Jun 2021 12:24:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0001.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::10) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 72ccd42c-c63a-4e24-fc0a-08d932434541
X-MS-TrafficTypeDiagnostic: VI1PR04MB6864:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB686409383D94A5ECF0DBE10EB30D9@VI1PR04MB6864.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 72ccd42c-c63a-4e24-fc0a-08d932434541
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:24:35.6785
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: N3rNQySP6QSTXCpJmTPzc0dzqZu5zBXLm+e5uUn0/b3HVVG6SnMAz3g7lCuAoZzCNpiYa5Rkrc1rqUp8nN57ZQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6864

Some hypercalls, memory-op in particular, can return values requiring
more than 31 bits to represent. Hence the underlying layers need to make
sure they won't truncate such values.

While adding the new function to the map file, I noticed the stray
xencall6 there. I'm taking the liberty of removing it on this occasion.
If such a function ever appears, it shouldn't land in version 1.0.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wasn't sure whether equivalents for the other xencall<N>() functions
should also be introduced, and hence went for the minimal solution
first. OTOH there is also xencall0(), which has no users ...

--- a/tools/include/xencall.h
+++ b/tools/include/xencall.h
@@ -113,6 +113,10 @@ int xencall5(xencall_handle *xcall, unsi
              uint64_t arg1, uint64_t arg2, uint64_t arg3,
              uint64_t arg4, uint64_t arg5);
 
+/* Variant(s) of the above, as needed, returning "long" instead of "int". */
+long xencall2L(xencall_handle *xcall, unsigned int op,
+               uint64_t arg1, uint64_t arg2);
+
 /*
  * Allocate and free memory which is suitable for use as a pointer
  * argument to a hypercall.
--- a/tools/libs/call/core.c
+++ b/tools/libs/call/core.c
@@ -127,6 +127,17 @@ int xencall2(xencall_handle *xcall, unsi
     return osdep_hypercall(xcall, &call);
 }
 
+long xencall2L(xencall_handle *xcall, unsigned int op,
+               uint64_t arg1, uint64_t arg2)
+{
+    privcmd_hypercall_t call = {
+        .op = op,
+        .arg = { arg1, arg2 },
+    };
+
+    return osdep_hypercall(xcall, &call);
+}
+
 int xencall3(xencall_handle *xcall, unsigned int op,
              uint64_t arg1, uint64_t arg2, uint64_t arg3)
 {
--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -9,7 +9,6 @@ VERS_1.0 {
 		xencall3;
 		xencall4;
 		xencall5;
-		xencall6;
 
 		xencall_alloc_buffer;
 		xencall_free_buffer;
@@ -27,3 +26,8 @@ VERS_1.2 {
 	global:
 		xencall_fd;
 } VERS_1.1;
+
+VERS_1.3 {
+	global:
+		xencall2L;
+} VERS_1.2;
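The effect of the "long" vs "int" return type can be sketched in isolation. This is an illustrative stand-alone example, not libxc code: fake_hypercall() is a hypothetical stand-in for osdep_hypercall(), and an LP64 platform (64-bit "long") is assumed, matching the Linux/Solaris case the series targets.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for osdep_hypercall(): returns a frame number
 * needing more than 31 bits, as XENMEM_maximum_gpfn can for 64-bit
 * guests.  Assumes LP64, i.e. "long" is 64 bits wide.
 */
static long fake_hypercall(void)
{
    return 0x100000000L;            /* bit 32 set */
}

/* Pre-patch shape: an "int" return type silently truncates the value. */
static int call_returning_int(void)
{
    return fake_hypercall();        /* high bits are lost here */
}

/* Post-patch shape (what xencall2L() provides): "long" preserves it. */
static long call_returning_long(void)
{
    return fake_hypercall();
}
```

The truncation in the first wrapper is exactly why xencall2L() cannot simply be layered on top of xencall2(): once the value has passed through an "int", the upper bits are unrecoverable.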



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:25:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144426.265827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBgT-00085I-HV; Fri, 18 Jun 2021 10:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144426.265827; Fri, 18 Jun 2021 10:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBgT-00085B-EE; Fri, 18 Jun 2021 10:25:09 +0000
Received: by outflank-mailman (input) for mailman id 144426;
 Fri, 18 Jun 2021 10:25:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBgS-00081V-15
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:25:08 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 930bef26-6136-461b-853f-b92fa8eea898;
 Fri, 18 Jun 2021 10:25:02 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2055.outbound.protection.outlook.com [104.47.10.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-38-fK_H9dFENZW21h_foU7MHA-2; Fri, 18 Jun 2021 12:25:00 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 10:24:56 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:24:56 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0078.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1e::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 10:24:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 930bef26-6136-461b-853f-b92fa8eea898
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011901;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ahDWmzB3GHWA1eTFvFk5kHSbaoQEbqF3qXL5cCuWzy4=;
	b=ZmdinxGEnh5mxLbExbRIztNIXxaPPs088bxlYtDJR26uOXjcf8Zeaip4CS84Fm7Kvt/bQ3
	3aVBlhCsuH7FXQFc7o/7ESTd1ya6MaoY+3/axfGfj8zqsb0G3NGjJrhuM45LKoF3K09pmo
	n20KUQ/H3qvVllyH40qvVGt6vd+RK/I=
X-MC-Unique: fK_H9dFENZW21h_foU7MHA-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bwN9k1IQlhorXBmnqvEonzgHr8ttkInSEKDlGnEoX7jASWG5vlrCEL+76a7qGtaq4YkZL72mOArF41U4jp7ZX4+zCe+OTlSpEJQyfd4N5jnU5DZP0ttpuluUKt8c6Bpq/GQtAIKHuztE1VTebXJD/YT0HqZAB5caJ+0iozBw/kIKpWODHRRwJtiENqjFqKfCMBW4095y+t0gshPf+L836y40mSrYemp4U3ZgU8W/3NQGK12aGRjxzv+o1kAaKQdUXIzZF5yahgC6gOfE7+lEbciE1ZPKq+WeF3RvQ7d7sCN3RNFKvodxM/g+7F7llosFvquTL4gGxgNqJdaOOaUwtw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yx3gFp3n9BvdQf1GHZ3mn0AwjS8nAcB86a1HzYSQ+Co=;
 b=HI1FX13ZU4VJXeQin2rnRvR2+LSbEv1hpWWtbuw/z5bSECE4nFxhMjxMCoRrlIYQCtKZSh7JSsoFsP7AuqbtobVRUIAq73JOyj9177tRBE9gp2a2FSatEu1PdFGPzhDBG8aDWnBilmk0mcaW3mrx236FfMd2dwlPWvDpV/BVx1xi64EGu6RQQiPRkwJwZYH6o3In598dxVdjewVXHYcVsauYJVq1KxNpjzqRhFVuPyrOlBxpciHoglurQdreDY8fb+P+0w+2pGVwUPGRc++jt9fBDi7Z5ja6VkDtdR/Z5vHlNHRA+nBd7b/BmqHm2WlYhoOAMPcqiiCUNowkmcb12w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 4/5] libxc: use multicall for memory-op on Linux (and Solaris)
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <8a5284e3-a029-27c8-103c-cbc12642d24d@suse.com>
Date: Fri, 18 Jun 2021 12:24:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 736da1bd-c16a-4f12-fe3d-08d932435185
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB617397373A4F22C480BAAB7CB30D9@VI1PR04MB6173.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 736da1bd-c16a-4f12-fe3d-08d932435185
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:24:56.2618
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 11THcK2FNmIqLW51evaorMDbsPAXYv51a+Qkl83oXr8Vjt+Hqg5kndklP8Fk9V79HCyxBsFcv4RJX2fDkGEaEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

Some sub-functions, XENMEM_maximum_gpfn in particular, can return values
requiring more than 31 bits to represent. Hence we cannot issue the
hypercall directly when the return value of ioctl() is used to propagate
this value (note that this is not the case for the BSDs, and MiniOS
already wraps all hypercalls in a multicall).

Suggested-by: Jürgen Groß <jgross@suse.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -337,8 +337,47 @@ long do_memory_op(xc_interface *xch, int
         goto out1;
     }
 
-    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
-                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
+#if defined(__linux__) || defined(__sun__)
+    /*
+     * Some sub-ops return values which don't fit in "int". On platforms
+     * without a specific hypercall return value field in the privcmd
+     * interface structure, issue the request as a single-element multicall,
+     * to be able to capture the full return value.
+     */
+    if ( sizeof(long) > sizeof(int) )
+    {
+        multicall_entry_t multicall = {
+            .op = __HYPERVISOR_memory_op,
+            .args[0] = cmd,
+            .args[1] = HYPERCALL_BUFFER_AS_ARG(arg),
+        }, *call = &multicall;
+        DECLARE_HYPERCALL_BOUNCE(call, sizeof(*call),
+                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+        if ( xc_hypercall_bounce_pre(xch, call) )
+        {
+            PERROR("Could not bounce buffer for memory_op hypercall");
+            goto out1;
+        }
+
+        ret = do_multicall_op(xch, HYPERCALL_BUFFER(call), 1);
+
+        xc_hypercall_bounce_post(xch, call);
+
+        if ( !ret )
+        {
+            ret = multicall.result;
+            if ( multicall.result > ~0xfffUL )
+            {
+                errno = -ret;
+                ret = -1;
+            }
+        }
+    }
+    else
+#endif
+        ret = xencall2L(xch->xcall, __HYPERVISOR_memory_op,
+                        cmd, HYPERCALL_BUFFER_AS_ARG(arg));
 
     xc_hypercall_bounce_post(xch, arg);
  out1:
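The error convention applied to the multicall result above can be shown in isolation. decode_result() below is a hypothetical helper written for illustration only (it is not a libxc function), and an LP64 platform is assumed:

```c
#include <assert.h>
#include <errno.h>

/*
 * Sketch of the decoding step: the hypervisor encodes failures as
 * -errno in the (unsigned) multicall result field, so any value above
 * ~0xfffUL -- i.e. the top 4095 values of the range, corresponding to
 * -4095 <= (long)result < 0 -- is folded into errno / -1, while
 * anything else, including values needing more than 31 bits, passes
 * through unchanged.  Assumes LP64 ("long" is 64 bits).
 */
static long decode_result(unsigned long result, int *err)
{
    long ret = (long)result;

    *err = 0;
    if ( result > ~0xfffUL )
    {
        *err = (int)-ret;   /* recover the positive errno value */
        ret = -1;
    }
    return ret;
}
```

The 4095 cutoff mirrors the kernel's MAX_ERRNO convention, which is why the hunk above can distinguish a genuinely huge return value from an encoded error.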



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:25:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:25:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144435.265838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBh9-0000MA-11; Fri, 18 Jun 2021 10:25:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144435.265838; Fri, 18 Jun 2021 10:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBh8-0000M3-Tl; Fri, 18 Jun 2021 10:25:50 +0000
Received: by outflank-mailman (input) for mailman id 144435;
 Fri, 18 Jun 2021 10:25:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luBh6-0000Lk-PL
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:25:48 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99f34364-4788-4541-a3b5-f09668cf26e6;
 Fri, 18 Jun 2021 10:25:48 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2050.outbound.protection.outlook.com [104.47.10.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-21-sFUrfP-xNny13gJQGRFChQ-1; Fri, 18 Jun 2021 12:25:45 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 10:25:44 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 10:25:44 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0194.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.15 via Frontend Transport; Fri, 18 Jun 2021 10:25:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99f34364-4788-4541-a3b5-f09668cf26e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624011947;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sxI3txB9v2hvNDBVrrSYN+2xPMzrpPdCoOLA7Q3nSfU=;
	b=jnA5pyDynjRRsKOAHJP1b0LqSb+EEWLqTXFy7cKXFHCCdp0VGY272pNZLfQoftVPnb5Ubf
	aHH0nKjyqHIaqZTIIQ/J583v9cavJbZdDu25ZR6oFx3KzNxNy922TrK0CZy4QN+phhIUNf
	pfJYU/pj9Q6UzCjb0/gs76/5cRXUGNA=
X-MC-Unique: sFUrfP-xNny13gJQGRFChQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fhkRG7zbee8Kj75ldtu1o6tukLGmW2++sTFaFKwApk/5w5Xf7I68+rigAhnFKA5EBShX6NbgnscVZPav150H5iUTo+Gj9M+YuOCdqO6reOg07M9ZZHa2HlDlakHM6vIix6LS7sj9IZw4FobmO1LZ8PKk8eVAp79XXQt2ataMMEYV2zhiROkJLTtH31OnACTimPbfE2zbC1ZOxczGyARa60OIAPQeMv6xIoDjvorNwDIzWmCf7racapKGADqarOw0Ma37Ayn/qq7Y5/9kv/grKLQsiTQiHFMXYSSn3scSOytV1TG/KCaWZAsFM1qBTjfc5DkFvawiaKqvmYFylvYX/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sxI3txB9v2hvNDBVrrSYN+2xPMzrpPdCoOLA7Q3nSfU=;
 b=CGG/seglcFKuM2fBoPNImN3t/XAqA4xyKzNxMtIu4DdQZkOh9Pf5veiEkXUMxWzq7+w1kPr7ZCirnU3/ut7ELrBAxZpRItxYhXqVI8rM7p35YsvmbAGDmPmatlEXqxkHj4vFnZ4pezzPR92OQLVG1dF8eFBJNV20vv9U0h28nVg0XEj0ky0won8/0cWDQuFBR0DefwfQyr8zuKNMODPJPBlzwdiS+fj0Qz1W1pdh4rxXS8jtzST0EhPo9tFpKUz6tcd/UW43sikfByCGVZomrlckMe1knr7P1Eca6VohEfGvEJJ6W6n4XoqLnEqqvuigwZSqajUNi/nK3xC+opN3gQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 5/5] libxc: make xc_domain_maximum_gpfn() endianness-agnostic
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
Date: Fri, 18 Jun 2021 12:25:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0194.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1c3b44d0-5fe4-4a20-efec-08d932436e58
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB617390A7D033E00387757577B30D9@VI1PR04MB6173.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1c3b44d0-5fe4-4a20-efec-08d932436e58
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 10:25:44.6445
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1MW9I31JvzbzpD9FV6w6hi7JhiS3mFtmJdy2WDuWQ7+Ca5p//GA7+PhbqCrk4OlShOPZ/AyKkDv90o8sHAER/g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

libxc generally uses uint32_t to represent domain IDs. This is fine as
long as addresses of such variables aren't taken, to then pass into
hypercalls: To the hypervisor, a domain ID is a 16-bit value. Use an
intermediate variable to deal with the issue. (On architectures with
arguments passed in registers, such an intermediate variable would have
been created by the compiler already anyway, just one of the wrong
type.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -856,7 +856,9 @@ int xc_domain_get_tsc_info(xc_interface
 
 int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
 {
-    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
+    domid_t xen_domid = domid;
+    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &xen_domid,
+                           sizeof(xen_domid));
 
     if ( rc >= 0 )
     {
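The endianness hazard this patch avoids can be demonstrated without a hypervisor. The example below is an illustrative sketch only: read_as_hypervisor() is a hypothetical stand-in for the hypervisor consuming the first sizeof(domid_t) bytes of the buffer it is handed.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t domid_t;   /* 16 bits, as in Xen's public headers */

/*
 * Hypothetical stand-in for the hypervisor side: it reads only the
 * first sizeof(domid_t) bytes of the buffer it is handed.
 */
static domid_t read_as_hypervisor(const void *buf)
{
    domid_t d;
    memcpy(&d, buf, sizeof(d));
    return d;
}

/*
 * Passing &domid (a uint32_t) directly yields the right bytes only on
 * little-endian hosts, where the low 16 bits happen to come first in
 * memory.  Copying through a domid_t intermediate, as the patch does,
 * is correct regardless of byte order.
 */
static domid_t via_intermediate(uint32_t domid)
{
    domid_t xen_domid = domid;
    return read_as_hypervisor(&xen_domid);
}
```

On a big-endian host, reading the first two bytes of the uint32_t would instead see the high half (zero for any valid domain ID), which is exactly the bug the intermediate variable fixes.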



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:30:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144443.265849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBlm-0001om-K8; Fri, 18 Jun 2021 10:30:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144443.265849; Fri, 18 Jun 2021 10:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luBlm-0001of-Gy; Fri, 18 Jun 2021 10:30:38 +0000
Received: by outflank-mailman (input) for mailman id 144443;
 Fri, 18 Jun 2021 10:30:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luBll-0001oZ-FO
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:30:37 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id daea3599-2fd9-457f-96f9-d3e904fa3b5e;
 Fri, 18 Jun 2021 10:30:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daea3599-2fd9-457f-96f9-d3e904fa3b5e
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46448076
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46448076"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 01/12] golang/xenlight: update generated code
Thread-Topic: [RESEND PATCH 01/12] golang/xenlight: update generated code
Thread-Index: AQHXUNy6uG+fcN5cbEK9jooORTlIlKsZuDOA
Date: Fri, 18 Jun 2021 10:30:32 +0000
Message-ID: <FB5F40FC-D63B-406F-A7EC-5617DA7332BB@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <29e665fc1c9313f5e221e9e5e15d7c2d9c1eb4a7.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <29e665fc1c9313f5e221e9e5e15d7c2d9c1eb4a7.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 11163f00-82e5-488a-d6d3-08d932441a4b
x-ms-traffictypediagnostic: PH0PR03MB5734:
x-microsoft-antispam-prvs: <PH0PR03MB5734535F4ADAACFBBFD04DD7990D9@PH0PR03MB5734.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6108;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1E9AF86419E0954CB90C26803F274A77@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 11163f00-82e5-488a-d6d3-08d932441a4b
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 10:30:32.8634
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5734
X-OriginatorOrg: citrix.com



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Re-generate code to reflect changes to libxl_types.idl from the
> following commits:
> 
> 0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
> 7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
> 9835246710 viridian: remove implicit limit of 64 VPs per partition
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:46:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:46:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144450.265859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luC0t-0003M1-1G; Fri, 18 Jun 2021 10:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144450.265859; Fri, 18 Jun 2021 10:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luC0s-0003Lu-UH; Fri, 18 Jun 2021 10:46:14 +0000
Received: by outflank-mailman (input) for mailman id 144450;
 Fri, 18 Jun 2021 10:46:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=axfQ=LM=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1luC0r-0003Lo-Rg
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:46:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2421318e-c52e-47bf-8410-e2cc398a69e3;
 Fri, 18 Jun 2021 10:46:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2421318e-c52e-47bf-8410-e2cc398a69e3
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 46518866
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46518866"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>
Subject: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Date: Fri, 18 Jun 2021 11:45:15 +0100
Message-ID: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Introduce the following functions, and associated types, in Xenctrl:
  get_system_cpu_policy
  cpu_policy_to_featureset
  string_of_xen_cpu_policy_index

These are wrappers around the existing C functions in xenctrl.h and
will be used by xenopsd initially.

The declaration-after-statement warning is turned off (via
-Wno-declaration-after-statement) to allow mixing declarations and
code. This simplifies writing the stubs by using variable-length
arrays on the stack instead of allocating and freeing memory, which
would require additional error-handling logic.
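
As a quick sketch of the pure-OCaml part of the new interface (no
hypervisor needed), the policy-index variant and its string conversion
from xenctrl.ml can be exercised on their own:

```ocaml
(* Must match XEN_SYSCTL_cpu_policy* order in xen/include/public/sysctl.h *)
type xen_cpu_policy_index =
  | Cpu_policy_raw | Cpu_policy_host
  | Cpu_policy_pv_max | Cpu_policy_hvm_max
  | Cpu_policy_pv_default | Cpu_policy_hvm_default

let string_of_xen_cpu_policy_index = function
  | Cpu_policy_raw -> "Raw"
  | Cpu_policy_host -> "Host"
  | Cpu_policy_pv_max -> "PV Max"
  | Cpu_policy_hvm_max -> "HVM Max"
  | Cpu_policy_pv_default -> "PV default"
  | Cpu_policy_hvm_default -> "HVM default"

let () =
  (* Print the name of each policy index we would query. *)
  [ Cpu_policy_raw; Cpu_policy_host; Cpu_policy_pv_default ]
  |> List.iter (fun i -> print_endline (string_of_xen_cpu_policy_index i))
```

In xenopsd the expected sequence would then be something like
`get_system_cpu_policy xc Cpu_policy_host` followed by
`cpu_policy_to_featureset`, but those calls need a live xenctrl handle.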

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/xc/Makefile        |   2 +-
 tools/ocaml/libs/xc/xenctrl.ml      |  37 ++++++
 tools/ocaml/libs/xc/xenctrl.mli     |  71 ++++++++++
 tools/ocaml/libs/xc/xenctrl_stubs.c | 195 ++++++++++++++++++++++++++++
 4 files changed, 304 insertions(+), 1 deletion(-)

diff --git a/tools/ocaml/libs/xc/Makefile b/tools/ocaml/libs/xc/Makefile
index b6da4fdbaf..64dca99613 100644
--- a/tools/ocaml/libs/xc/Makefile
+++ b/tools/ocaml/libs/xc/Makefile
@@ -3,7 +3,7 @@ XEN_ROOT=$(TOPLEVEL)/../..
 include $(TOPLEVEL)/common.make
 
 CFLAGS += -I../mmap $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
-CFLAGS += $(APPEND_CFLAGS)
+CFLAGS += $(APPEND_CFLAGS) -Wno-declaration-after-statement
 OCAMLINCLUDE += -I ../mmap
 
 OBJS = xenctrl
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index a5588c643f..fa2cea5091 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -286,6 +286,43 @@ external version_capabilities: handle -> string =
 type featureset_index = Featureset_raw | Featureset_host | Featureset_pv | Featureset_hvm
 external get_cpu_featureset : handle -> featureset_index -> int64 array = "stub_xc_get_cpu_featureset"
 
+(* order must match the order in Val_cpuid_leaf *)
+type xen_cpuid_leaf = {
+  leaf: int64;
+  subleaf: int64;
+  a: int64;
+  b: int64;
+  c: int64;
+  d: int64;
+}
+
+(* order must match the order in Val_msr_entry *)
+type xen_msr_entry = {
+  idx: int64;
+  flags: int64;
+  value: int64; (* val is a keyword, using 'value' *)
+}
+
+type xen_cpu_policy = {
+  leaves: xen_cpuid_leaf array;
+  msrs: xen_msr_entry array;
+}
+
+(* must match XEN_SYSCTL_cpu_policy* order in xen/include/public/sysctl.h *)
+type xen_cpu_policy_index = Cpu_policy_raw | Cpu_policy_host | Cpu_policy_pv_max | Cpu_policy_hvm_max | Cpu_policy_pv_default | Cpu_policy_hvm_default
+
+let string_of_xen_cpu_policy_index = function
+  | Cpu_policy_raw -> "Raw"
+  | Cpu_policy_host -> "Host"
+  | Cpu_policy_pv_max -> "PV Max"
+  | Cpu_policy_hvm_max -> "HVM Max"
+  | Cpu_policy_pv_default -> "PV default"
+  | Cpu_policy_hvm_default -> "HVM default"
+
+external get_system_cpu_policy: handle -> xen_cpu_policy_index -> xen_cpu_policy = "stub_xc_get_system_cpu_policy"
+
+external cpu_policy_to_featureset: handle -> xen_cpu_policy -> int64 array = "stub_xc_policy_to_featureset"
+
 external watchdog : handle -> int -> int32 -> int
   = "stub_xc_watchdog"
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 6e94940a8a..605adeeec9 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -223,6 +223,77 @@ external version_capabilities : handle -> string
 type featureset_index = Featureset_raw | Featureset_host | Featureset_pv | Featureset_hvm
 external get_cpu_featureset : handle -> featureset_index -> int64 array = "stub_xc_get_cpu_featureset"
 
+(** CPUID takes a leaf (EAX) and optional subleaf (ECX) as input and
+    returns a feature-information bitset in 4 registers (EAX, EBX, ECX, EDX).
+    This record captures one such invocation of CPUID.
+
+    CPU manuals contain tables explaining the available leaves/subleaves and feature bits:
+
+        https://software.intel.com/content/www/us/en/develop/articles/intel-sdm.html
+          Intel® 64 and IA-32 architectures software developer's manual volume 2A: Instruction set reference
+          Chapter 3.2, Table 3-8
+
+        https://developer.amd.com/resources/developer-guides-manuals/
+          AMD64 Architecture Programmer’s Manual Volume 3: General Purpose and System Instructions
+          Appendix D Instruction Subsets and CPUID Feature Flags
+ *)
+type xen_cpuid_leaf = {
+  leaf: int64; (** initial EAX value *)
+  subleaf: int64; (** initial ECX value *)
+  a: int64; (** EAX result *)
+  b: int64; (** EBX result *)
+  c: int64; (** ECX result *)
+  d: int64; (** EDX result *)
+}
+
+(** CPU Model Specific Registers control various aspects of CPU behaviour.
+
+    RDMSR takes ECX as input and returns its result in EDX:EAX.
+    This record captures one invocation of RDMSR.
+
+    CPU manuals document the available MSRs and feature bits:
+
+       https://software.intel.com/content/www/us/en/develop/articles/intel-sdm.html
+         Intel® 64 and IA-32 architectures software developer's manual volume 4: Model-specific registers
+         Chapter 2, "Model-Specific Registers (MSRs)"
+
+       https://developer.amd.com/resources/developer-guides-manuals/
+         AMD64 Architecture Programmer’s Manual Volume 2: System Programming
+         Appendix A "MSR Cross-Reference"
+ *)
+type xen_msr_entry = {
+  idx: int64; (** MSR register - ECX input *)
+  flags: int64; (** reserved, must be zero *)
+  value: int64; (** EDX:EAX output *)
+}
+
+(** Xen CPU policy contains the CPUID features and MSRs visible in a domain.
+    The order of leaves and MSRs is not important, but entries cannot be duplicated.
+ *)
+type xen_cpu_policy = {
+  leaves: xen_cpuid_leaf array; (** Array of CPUID leaves *)
+  msrs: xen_msr_entry array; (** Array of MSRs *)
+}
+
+(** Xen CPU policy to query or set *)
+type xen_cpu_policy_index =
+  | Cpu_policy_raw (** as seen on boot *)
+  | Cpu_policy_host (** features implemented by the host *)
+  | Cpu_policy_pv_max (** maximum PV features that we can accept in a migration: either implemented natively or emulated *)
+  | Cpu_policy_hvm_max (** maximum HVM features that we can accept in a migration: either implemented natively or emulated *)
+  | Cpu_policy_pv_default (** default PV features for newly booted VMs *)
+  | Cpu_policy_hvm_default (** default HVM features for newly booted VMs *)
+
+(** [string_of_xen_cpu_policy_index policy_index] is the name of the [policy_index] policy *)
+val string_of_xen_cpu_policy_index : xen_cpu_policy_index -> string
+
+(** [get_system_cpu_policy xenctrlhandle policy_index] retrieves the [policy_index] policy from the running hypervisor *)
+external get_system_cpu_policy: handle -> xen_cpu_policy_index -> xen_cpu_policy = "stub_xc_get_system_cpu_policy"
+
+(** [cpu_policy_to_featureset xenctrlhandle policy] converts [policy] to a featureset for backwards compatibility
+    (e.g. accepting incoming migrations in xenopsd from a non-policy-aware xenopsd) *)
+external cpu_policy_to_featureset: handle -> xen_cpu_policy -> int64 array = "stub_xc_policy_to_featureset"
+
 external pages_to_kib : int64 -> int64 = "stub_pages_to_kib"
 val pages_to_mib : int64 -> int64
 external watchdog : handle -> int -> int32 -> int
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index d05d7bb30e..4a230de8b7 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -34,6 +34,9 @@
 #include <xenctrl.h>
 #include <xen-tools/libs.h>
 
+#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/msr.h>
+
 #include "mmap_stubs.h"
 
 #define PAGE_SHIFT		12
@@ -1216,6 +1219,198 @@ CAMLprim value stub_xc_watchdog(value xch, value domid, value timeout)
 	CAMLreturn(Val_int(ret));
 }
 
+static CAMLprim value Val_cpuid_leaf(const xen_cpuid_leaf_t *leaf)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+    result = caml_alloc_tuple(6);
+    Store_field(result, 0, caml_copy_int64(leaf->leaf));
+    Store_field(result, 1, caml_copy_int64(leaf->subleaf));
+    Store_field(result, 2, caml_copy_int64(leaf->a));
+    Store_field(result, 3, caml_copy_int64(leaf->b));
+    Store_field(result, 4, caml_copy_int64(leaf->c));
+    Store_field(result, 5, caml_copy_int64(leaf->d));
+
+    CAMLreturn(result);
+}
+
+static CAMLprim void cpuid_leaf_of_val(xen_cpuid_leaf_t *leaf, value v)
+{
+    CAMLparam1(v);
+    leaf->leaf = Int64_val(Field(v, 0));
+    leaf->subleaf = Int64_val(Field(v, 1));
+    leaf->a = Int64_val(Field(v, 2));
+    leaf->b = Int64_val(Field(v, 3));
+    leaf->c = Int64_val(Field(v, 4));
+    leaf->d = Int64_val(Field(v, 5));
+
+    CAMLreturn0;
+}
+
+static CAMLprim value Val_msr_entry(const xen_msr_entry_t *msr)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+    result = caml_alloc_tuple(3);
+    Store_field(result, 0, caml_copy_int64(msr->idx));
+    Store_field(result, 1, caml_copy_int64(msr->flags));
+    Store_field(result, 2, caml_copy_int64(msr->val));
+    CAMLreturn(result);
+}
+
+#if 0
+static CAMLprim void msr_entry_of_val(xen_msr_entry_t *msr, value v)
+{
+    CAMLparam1(v);
+    msr->idx = Int64_val(Field(v, 0));
+    msr->flags = Int64_val(Field(v, 1));
+    msr->val = Int64_val(Field(v, 2));
+    CAMLreturn0;
+}
+#endif
+
+static CAMLprim value Val_leaves(const xen_cpuid_leaf_t *leaves, uint32_t nr_leaves)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+    uint32_t i;
+
+    result = caml_alloc(nr_leaves, 0);
+    for (i = 0; i < nr_leaves; i++)
+        Store_field(result, i, Val_cpuid_leaf(&leaves[i]));
+
+    CAMLreturn(result);
+}
+
+static CAMLprim value Val_msrs(const xen_msr_entry_t *msrs, uint32_t nr_msrs)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+
+    result = caml_alloc(nr_msrs, 0);
+    for (unsigned i = 0; i < nr_msrs; i++)
+        Store_field(result, i, Val_msr_entry(&msrs[i]));
+    CAMLreturn(result);
+}
+
+static CAMLprim value Val_policy(const xen_cpuid_leaf_t *leaves, uint32_t nr_leaves, const xen_msr_entry_t *msrs, uint32_t nr_msrs)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+
+    result = caml_alloc_tuple(2);
+    Store_field(result, 0, Val_leaves(leaves, nr_leaves));
+    Store_field(result, 1, Val_msrs(msrs, nr_msrs));
+    CAMLreturn(result);
+}
+
+static void cpuid_policy_of_val(struct cpuid_policy *p, value policy)
+{
+    CAMLparam1(policy);
+    CAMLlocal1(cpu_policy);
+    uint32_t i;
+
+    cpu_policy = Field(policy, 0);
+
+    uint32_t nr_leaves = caml_array_length(cpu_policy);
+    xen_cpuid_leaf_t leaves[nr_leaves];
+    for (i = 0; i < nr_leaves; i++)
+        cpuid_leaf_of_val(&leaves[i], Field(cpu_policy, i));
+
+
+    uint32_t err_leaf=0, err_subleaf=0;
+    int rc = x86_cpuid_copy_from_buffer(p, leaves, nr_leaves, &err_leaf, &err_subleaf);
+    if (rc)
+        caml_failwith("Failed to deserialize CPU policy"); /* TODO: err_leaf/err_subleaf */
+
+    CAMLreturn0;
+}
+
+#if 0
+static void msr_policy_of_val(struct msr_policy *p, value policy)
+{
+    CAMLparam1(policy);
+    CAMLlocal1(msr_policy);
+    uint32_t i;
+
+    msr_policy = Field(policy, 1);
+
+    uint32_t nr_msrs = caml_array_length(msr_policy);
+    xen_msr_entry_t msrs[nr_msrs];
+    for (i = 0; i < nr_msrs; i++)
+        msr_entry_of_val(&msrs[i], Field(msr_policy, i));
+
+    uint32_t err_msr = 0;
+    int rc = x86_msr_copy_from_buffer(p, msrs, nr_msrs, &err_msr);
+    if (rc)
+        caml_failwith("Failed to deserialize CPU policy"); /* TODO: err_msr */
+
+    CAMLreturn0;
+}
+#endif
+
+CAMLprim value stub_xc_get_system_cpu_policy(value xch, value policy_kind)
+{
+    CAMLparam2(xch, policy_kind);
+    CAMLlocal1(result);
+
+    uint32_t max_leaves = 0, max_msrs = 0;
+
+    if (xc_cpu_policy_get_size(_H(xch), &max_leaves, &max_msrs))
+            failwith_xc(_H(xch));
+
+    xen_cpuid_leaf_t leaves[max_leaves];
+    xen_msr_entry_t msrs[max_msrs];
+    memset(leaves, 0, sizeof(leaves));
+    memset(msrs, 0, sizeof(msrs));
+
+    /* It'd be simpler if we could avoid this allocation here,
+       but the type is private */
+    xc_cpu_policy_t *policy = xc_cpu_policy_init();
+    if (!policy)
+        caml_raise_out_of_memory();
+
+    int rc;
+    rc = xc_cpu_policy_get_system(_H(xch), Int_val(policy_kind), policy) ||
+         xc_cpu_policy_serialise(_H(xch), policy, leaves, &max_leaves, msrs, &max_msrs);
+    xc_cpu_policy_destroy(policy);
+    if (rc)
+        failwith_xc(_H(xch));
+
+    result = Val_policy(leaves, max_leaves, msrs, max_msrs);
+    CAMLreturn(result);
+}
+
+CAMLprim value stub_xc_policy_to_featureset(value xch, value policy)
+{
+    CAMLparam2(xch, policy);
+    CAMLlocal1(result);
+    struct cpuid_policy p;
+
+    memset(&p, 0, sizeof(p));
+    cpuid_policy_of_val(&p, policy);
+
+    uint32_t fs_len;
+    int rc = xc_get_cpu_featureset(_H(xch), 0, &fs_len, NULL);
+    if (rc)
+        failwith_xc(_H(xch));
+    /* The xenctrl stubs are statically linked while xenctrl itself is
+     * dynamically loaded, so the two featureset lengths could differ;
+     * fs_len should be the greater one. */
+    if (fs_len < FEATURESET_NR_ENTRIES)
+        caml_invalid_argument("cpuid_policy_to_featureset");
+
+    uint32_t featureset[fs_len];
+    memset(featureset, 0, sizeof(featureset));
+    cpuid_policy_to_featureset(&p, featureset);
+
+    result = caml_alloc(fs_len, 0);
+    for (unsigned i = 0; i < fs_len; i++)
+        Store_field(result, i, caml_copy_int64(featureset[i]));
+
+    CAMLreturn(result);
+}
+
 /*
  * Local variables:
  *  indent-tabs-mode: t
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 10:51:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 10:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144457.265871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luC5m-0004oT-Pd; Fri, 18 Jun 2021 10:51:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144457.265871; Fri, 18 Jun 2021 10:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luC5m-0004oM-MQ; Fri, 18 Jun 2021 10:51:18 +0000
Received: by outflank-mailman (input) for mailman id 144457;
 Fri, 18 Jun 2021 10:51:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luC5l-0004oG-HI
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 10:51:17 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a950ffb9-a1d2-4f69-bc5c-18a08af8e13e;
 Fri, 18 Jun 2021 10:51:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a950ffb9-a1d2-4f69-bc5c-18a08af8e13e
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46176112
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46176112"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 02/12] golang/xenlight: fix StringList toC
 conversion
Date: Fri, 18 Jun 2021 10:51:11 +0000
Message-ID: <95DDE0E9-E275-4E9D-AAEC-4BD0324F01D4@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <0a15ed9eb6cd70416995f5d9805c98572eb6bd50.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <0a15ed9eb6cd70416995f5d9805c98572eb6bd50.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> The current implementation of StringList.toC does not correctly account
> for how libxl_string_list is expected to be laid out in C, which is clear
> when one looks at libxl_string_list_length in libxl.c. In particular,
> StringList.toC does not account for the extra memory that should be
> allocated for the "sentinel" entry. And, when using the "slice trick" to
> create a slice that can address C memory, the unsafe.Pointer conversion
> should be on a C.libxl_string_list, not *C.libxl_string_list.
> 
> Fix these problems by (1) allocating an extra slot in the slice used to
> address the C memory, and explicity set the last entry to nil so the C
> memory will be zeroed out, and (2) dereferencing csl in the
> unsafe.Pointer conversion.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Even as I was making these, I thought it might be good to try to get some unit tests of this conversion.
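For context on the layout the patch fixes: libxl_string_list is a NULL-terminated array of C strings, and libxl_string_list_length simply walks the array until it reaches the NULL sentinel, which is why StringList.toC has to allocate one extra, zeroed slot. A minimal pure-Go sketch of the same sentinel convention (illustrative only; sentinelLen and fromStrings are not part of the xenlight package, and a nil Go pointer stands in for C NULL):

```go
package main

import "fmt"

// sentinelLen mirrors libxl_string_list_length: walk the array
// until the nil sentinel is found and return the count of entries
// before it.
func sentinelLen(list []*string) int {
	n := 0
	for list[n] != nil {
		n++
	}
	return n
}

// fromStrings builds a sentinel-terminated list: len(strs)+1 slots,
// with the final slot left nil, as the fixed StringList.toC now does.
func fromStrings(strs []string) []*string {
	out := make([]*string, len(strs)+1) // extra slot for the sentinel
	for i := range strs {
		out[i] = &strs[i]
	}
	return out // out[len(strs)] is nil: the sentinel
}

func main() {
	l := fromStrings([]string{"a", "b", "c"})
	fmt.Println(sentinelLen(l))
}
```

In C, omitting that extra slot means the equivalent length walk reads past the end of the allocation, which is the overrun the patch addresses.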



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:00:40 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 03/12] golang/xenlight: fix string conversion in
 generated toC functions
Date: Fri, 18 Jun 2021 11:00:26 +0000
Message-ID: <6BAF6F60-EC63-41AC-A46E-2045E746C7E1@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <06763aceff41167d3d3bbd603f729572c1f55c77.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <06763aceff41167d3d3bbd603f729572c1f55c77.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US


> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> In gengotypes.py, the toC functions only set C string fields when
> the Go strings are non-empty. However, to prevent segfaults in some
> cases, these fields should always at least be set to nil so that the C
> memory is zeroed out.
> 
> Update gengotypes.py so that the generated code always sets these fields
> to nil first, and then proceeds to check if the Go string is non-empty.
> And, commit the new generated code.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

So wait — if you do

var foo C.typename

Then golang won't automatically zero out `foo`?

That seems like a bug really; but assuming this fixes real behavior you've encountered:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> ---
> tools/golang/xenlight/gengotypes.py  |   1 +
> tools/golang/xenlight/helpers.gen.go | 160 +++++++++++++++++++++++++++++++
> 2 files changed, 161 insertions(+)
> 
> diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
> index 3e40c3d5dc..e6daa9b92f 100644
> --- a/tools/golang/xenlight/gengotypes.py
> +++ b/tools/golang/xenlight/gengotypes.py
> @@ -527,6 +527,7 @@ def xenlight_golang_convert_to_C(ty = None, outer_name = None,
> 
>     elif gotypename == 'string':
>         # Use the cgo helper for converting C strings.
> +        s += '{0}.{1} = nil\n'.format(cvarname, cname)
>         s += 'if {0}.{1} != "" {{\n'.format(govarname,goname)
>         s += '{0}.{1} = C.CString({2}.{3})}}\n'.format(cvarname,cname,
>                                                     govarname,goname)
> diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
> index b454b12d52..5222898fb8 100644
> --- a/tools/golang/xenlight/helpers.gen.go
> +++ b/tools/golang/xenlight/helpers.gen.go
> @@ -154,8 +154,10 @@ C.libxl_vnc_info_dispose(xc)}
> if err := x.Enable.toC(&xc.enable); err != nil {
> return fmt.Errorf("converting field Enable: %v", err)
> }
> +xc.listen = nil
> if x.Listen != "" {
> xc.listen = C.CString(x.Listen)}
> +xc.passwd = nil
> if x.Passwd != "" {
> xc.passwd = C.CString(x.Passwd)}
> xc.display = C.int(x.Display)
> @@ -216,11 +218,13 @@ return fmt.Errorf("converting field Enable: %v", err)
> }
> xc.port = C.int(x.Port)
> xc.tls_port = C.int(x.TlsPort)
> +xc.host = nil
> if x.Host != "" {
> xc.host = C.CString(x.Host)}
> if err := x.DisableTicketing.toC(&xc.disable_ticketing); err != nil {
> return fmt.Errorf("converting field DisableTicketing: %v", err)
> }
> +xc.passwd = nil
> if x.Passwd != "" {
> xc.passwd = C.CString(x.Passwd)}
> if err := x.AgentMouse.toC(&xc.agent_mouse); err != nil {
> @@ -233,8 +237,10 @@ if err := x.ClipboardSharing.toC(&xc.clipboard_sharing); err != nil {
> return fmt.Errorf("converting field ClipboardSharing: %v", err)
> }
> xc.usbredirection = C.int(x.Usbredirection)
> +xc.image_compression = nil
> if x.ImageCompression != "" {
> xc.image_compression = C.CString(x.ImageCompression)}
> +xc.streaming_video = nil
> if x.StreamingVideo != "" {
> xc.streaming_video = C.CString(x.StreamingVideo)}
> 
> @@ -278,8 +284,10 @@ return fmt.Errorf("converting field Enable: %v", err)
> if err := x.Opengl.toC(&xc.opengl); err != nil {
> return fmt.Errorf("converting field Opengl: %v", err)
> }
> +xc.display = nil
> if x.Display != "" {
> xc.display = C.CString(x.Display)}
> +xc.xauthority = nil
> if x.Xauthority != "" {
> xc.xauthority = C.CString(x.Xauthority)}
> 
> @@ -337,6 +345,7 @@ return fmt.Errorf("converting field Uuid: %v", err)
> }
> xc.domid = C.libxl_domid(x.Domid)
> xc.ssidref = C.uint32_t(x.Ssidref)
> +xc.ssid_label = nil
> if x.SsidLabel != "" {
> xc.ssid_label = C.CString(x.SsidLabel)}
> xc.running = C.bool(x.Running)
> @@ -391,6 +400,7 @@ C.libxl_cpupoolinfo_dispose(xc)}
> }()
> 
> xc.poolid = C.uint32_t(x.Poolid)
> +xc.pool_name = nil
> if x.PoolName != "" {
> xc.pool_name = C.CString(x.PoolName)}
> xc.sched = C.libxl_scheduler(x.Sched)
> @@ -458,9 +468,11 @@ if err != nil{
> C.libxl_channelinfo_dispose(xc)}
> }()
> 
> +xc.backend = nil
> if x.Backend != "" {
> xc.backend = C.CString(x.Backend)}
> xc.backend_id = C.uint32_t(x.BackendId)
> +xc.frontend = nil
> if x.Frontend != "" {
> xc.frontend = C.CString(x.Frontend)}
> xc.frontend_id = C.uint32_t(x.FrontendId)
> @@ -478,6 +490,7 @@ if !ok {
> return errors.New("wrong type for union key connection")
> }
> var pty C.libxl_channelinfo_connection_union_pty
> +pty.path = nil
> if tmp.Path != "" {
> pty.path = C.CString(tmp.Path)}
> ptyBytes := C.GoBytes(unsafe.Pointer(&pty),C.sizeof_libxl_channelinfo_connection_union_pty)
> @@ -563,24 +576,33 @@ C.libxl_version_info_dispose(xc)}
> 
> xc.xen_version_major = C.int(x.XenVersionMajor)
> xc.xen_version_minor = C.int(x.XenVersionMinor)
> +xc.xen_version_extra = nil
> if x.XenVersionExtra != "" {
> xc.xen_version_extra = C.CString(x.XenVersionExtra)}
> +xc.compiler = nil
> if x.Compiler != "" {
> xc.compiler = C.CString(x.Compiler)}
> +xc.compile_by = nil
> if x.CompileBy != "" {
> xc.compile_by = C.CString(x.CompileBy)}
> +xc.compile_domain = nil
> if x.CompileDomain != "" {
> xc.compile_domain = C.CString(x.CompileDomain)}
> +xc.compile_date = nil
> if x.CompileDate != "" {
> xc.compile_date = C.CString(x.CompileDate)}
> +xc.capabilities = nil
> if x.Capabilities != "" {
> xc.capabilities = C.CString(x.Capabilities)}
> +xc.changeset = nil
> if x.Changeset != "" {
> xc.changeset = C.CString(x.Changeset)}
> xc.virt_start = C.uint64_t(x.VirtStart)
> xc.pagesize = C.int(x.Pagesize)
> +xc.commandline = nil
> if x.Commandline != "" {
> xc.commandline = C.CString(x.Commandline)}
> +xc.build_id = nil
> if x.BuildId != "" {
> xc.build_id = C.CString(x.BuildId)}
> 
> @@ -650,8 +672,10 @@ if err := x.Oos.toC(&xc.oos); err != nil {
> return fmt.Errorf("converting field Oos: %v", err)
> }
> xc.ssidref = C.uint32_t(x.Ssidref)
> +xc.ssid_label = nil
> if x.SsidLabel != "" {
> xc.ssid_label = C.CString(x.SsidLabel)}
> +xc.name = nil
> if x.Name != "" {
> xc.name = C.CString(x.Name)}
> xc.domid = C.libxl_domid(x.Domid)
> @@ -665,6 +689,7 @@ if err := x.Platformdata.toC(&xc.platformdata); err != nil {
> return fmt.Errorf("converting field Platformdata: %v", err)
> }
> xc.poolid = C.uint32_t(x.Poolid)
> +xc.pool_name = nil
> if x.PoolName != "" {
> xc.pool_name = C.CString(x.PoolName)}
> if err := x.RunHotplugScripts.toC(&xc.run_hotplug_scripts); err != nil {
> @@ -712,6 +737,7 @@ C.libxl_domain_restore_params_dispose(xc)}
> 
> xc.checkpointed_stream = C.int(x.CheckpointedStream)
> xc.stream_version = C.uint32_t(x.StreamVersion)
> +xc.colo_proxy_script = nil
> if x.ColoProxyScript != "" {
> xc.colo_proxy_script = C.CString(x.ColoProxyScript)}
> if err := x.UserspaceColoProxy.toC(&xc.userspace_colo_proxy); err != nil {
> @@ -1312,6 +1338,7 @@ xc.shadow_memkb = C.uint64_t(x.ShadowMemkb)
> xc.iommu_memkb = C.uint64_t(x.IommuMemkb)
> xc.rtc_timeoffset = C.uint32_t(x.RtcTimeoffset)
> xc.exec_ssidref = C.uint32_t(x.ExecSsidref)
> +xc.exec_ssid_label = nil
> if x.ExecSsidLabel != "" {
> xc.exec_ssid_label = C.CString(x.ExecSsidLabel)}
> if err := x.Localtime.toC(&xc.localtime); err != nil {
> @@ -1323,6 +1350,7 @@ return fmt.Errorf("converting field DisableMigrate: %v", err)
> if err := x.Cpuid.toC(&xc.cpuid); err != nil {
> return fmt.Errorf("converting field Cpuid: %v", err)
> }
> +xc.blkdev_start = nil
> if x.BlkdevStart != "" {
> xc.blkdev_start = C.CString(x.BlkdevStart)}
> if numVnumaNodes := len(x.VnumaNodes); numVnumaNodes > 0 {
> @@ -1342,15 +1370,20 @@ if err := x.DeviceModelStubdomain.toC(&xc.device_model_stubdomain); err != nil {
> return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
> }
> xc.stubdomain_memkb = C.uint64_t(x.StubdomainMemkb)
> +xc.stubdomain_kernel = nil
> if x.StubdomainKernel != "" {
> xc.stubdomain_kernel = C.CString(x.StubdomainKernel)}
> +xc.stubdomain_ramdisk = nil
> if x.StubdomainRamdisk != "" {
> xc.stubdomain_ramdisk = C.CString(x.StubdomainRamdisk)}
> +xc.device_model = nil
> if x.DeviceModel != "" {
> xc.device_model = C.CString(x.DeviceModel)}
> xc.device_model_ssidref = C.uint32_t(x.DeviceModelSsidref)
> +xc.device_model_ssid_label = nil
> if x.DeviceModelSsidLabel != "" {
> xc.device_model_ssid_label = C.CString(x.DeviceModelSsidLabel)}
> +xc.device_model_user = nil
> if x.DeviceModelUser != "" {
> xc.device_model_user = C.CString(x.DeviceModelUser)}
> if err := x.Extra.toC(&xc.extra); err != nil {
> @@ -1397,17 +1430,22 @@ if err := x.ClaimMode.toC(&xc.claim_mode); err != nil {
> return fmt.Errorf("converting field ClaimMode: %v", err)
> }
> xc.event_channels = C.uint32_t(x.EventChannels)
> +xc.kernel = nil
> if x.Kernel != "" {
> xc.kernel = C.CString(x.Kernel)}
> +xc.cmdline = nil
> if x.Cmdline != "" {
> xc.cmdline = C.CString(x.Cmdline)}
> +xc.ramdisk = nil
> if x.Ramdisk != "" {
> xc.ramdisk = C.CString(x.Ramdisk)}
> +xc.device_tree = nil
> if x.DeviceTree != "" {
> xc.device_tree = C.CString(x.DeviceTree)}
> if err := x.Acpi.toC(&xc.acpi); err != nil {
> return fmt.Errorf("converting field Acpi: %v", err)
> }
> +xc.bootloader = nil
> if x.Bootloader != "" {
> xc.bootloader = C.CString(x.Bootloader)}
> if err := x.BootloaderArgs.toC(&xc.bootloader_args); err != nil {
> @@ -1432,6 +1470,7 @@ if !ok {
> return errors.New("wrong type for union key type")
> }
> var hvm C.libxl_domain_build_info_type_union_hvm
> +hvm.firmware = nil
> if tmp.Firmware != "" {
> hvm.firmware = C.CString(tmp.Firmware)}
> hvm.bios = C.libxl_bios_type(tmp.Bios)
> @@ -1465,6 +1504,7 @@ return fmt.Errorf("converting field ViridianEnable: %v", err)
> if err := tmp.ViridianDisable.toC(&hvm.viridian_disable); err != nil {
> return fmt.Errorf("converting field ViridianDisable: %v", err)
> }
> +hvm.timeoffset = nil
> if tmp.Timeoffset != "" {
> hvm.timeoffset = C.CString(tmp.Timeoffset)}
> if err := tmp.Hpet.toC(&hvm.hpet); err != nil {
> @@ -1481,10 +1521,13 @@ return fmt.Errorf("converting field NestedHvm: %v", err)
> if err := tmp.Altp2M.toC(&hvm.altp2m); err != nil {
> return fmt.Errorf("converting field Altp2M: %v", err)
> }
> +hvm.system_firmware = nil
> if tmp.SystemFirmware != "" {
> hvm.system_firmware = C.CString(tmp.SystemFirmware)}
> +hvm.smbios_firmware = nil
> if tmp.SmbiosFirmware != "" {
> hvm.smbios_firmware = C.CString(tmp.SmbiosFirmware)}
> +hvm.acpi_firmware = nil
> if tmp.AcpiFirmware != "" {
> hvm.acpi_firmware = C.CString(tmp.AcpiFirmware)}
> hvm.hdtype = C.libxl_hdtype(tmp.Hdtype)
> @@ -1497,6 +1540,7 @@ return fmt.Errorf("converting field Vga: %v", err)
> if err := tmp.Vnc.toC(&hvm.vnc); err != nil {
> return fmt.Errorf("converting field Vnc: %v", err)
> }
> +hvm.keymap = nil
> if tmp.Keymap != "" {
> hvm.keymap = C.CString(tmp.Keymap)}
> if err := tmp.Sdl.toC(&hvm.sdl); err != nil {
> @@ -1509,19 +1553,23 @@ if err := tmp.GfxPassthru.toC(&hvm.gfx_passthru); err != nil {
> return fmt.Errorf("converting field GfxPassthru: %v", err)
> }
> hvm.gfx_passthru_kind = C.libxl_gfx_passthru_kind(tmp.GfxPassthruKind)
> +hvm.serial = nil
> if tmp.Serial != "" {
> hvm.serial = C.CString(tmp.Serial)}
> +hvm.boot = nil
> if tmp.Boot != "" {
> hvm.boot = C.CString(tmp.Boot)}
> if err := tmp.Usb.toC(&hvm.usb); err != nil {
> return fmt.Errorf("converting field Usb: %v", err)
> }
> hvm.usbversion = C.int(tmp.Usbversion)
> +hvm.usbdevice = nil
> if tmp.Usbdevice != "" {
> hvm.usbdevice = C.CString(tmp.Usbdevice)}
> if err := tmp.VkbDevice.toC(&hvm.vkb_device); err != nil {
> return fmt.Errorf("converting field VkbDevice: %v", err)
> }
> +hvm.soundhw = nil
> if tmp.Soundhw != "" {
> hvm.soundhw = C.CString(tmp.Soundhw)}
> if err := tmp.XenPlatformPci.toC(&hvm.xen_platform_pci); err != nil {
> @@ -1550,18 +1598,23 @@ if !ok {
> return errors.New("wrong type for union key type")
> }
> var pv C.libxl_domain_build_info_type_union_pv
> +pv.kernel = nil
> if tmp.Kernel != "" {
> p
di5rZXJuZWwgPSBDLkNTdHJpbmcodG1wLktlcm5lbCl9DQo+IHB2LnNsYWNrX21lbWtiID0gQy51
aW50NjRfdCh0bXAuU2xhY2tNZW1rYikNCj4gK3B2LmJvb3Rsb2FkZXIgPSBuaWwNCj4gaWYgdG1w
LkJvb3Rsb2FkZXIgIT0gIiIgew0KPiBwdi5ib290bG9hZGVyID0gQy5DU3RyaW5nKHRtcC5Cb290
bG9hZGVyKX0NCj4gaWYgZXJyIDo9IHRtcC5Cb290bG9hZGVyQXJncy50b0MoJnB2LmJvb3Rsb2Fk
ZXJfYXJncyk7IGVyciAhPSBuaWwgew0KPiByZXR1cm4gZm10LkVycm9yZigiY29udmVydGluZyBm
aWVsZCBCb290bG9hZGVyQXJnczogJXYiLCBlcnIpDQo+IH0NCj4gK3B2LmNtZGxpbmUgPSBuaWwN
Cj4gaWYgdG1wLkNtZGxpbmUgIT0gIiIgew0KPiBwdi5jbWRsaW5lID0gQy5DU3RyaW5nKHRtcC5D
bWRsaW5lKX0NCj4gK3B2LnJhbWRpc2sgPSBuaWwNCj4gaWYgdG1wLlJhbWRpc2sgIT0gIiIgew0K
PiBwdi5yYW1kaXNrID0gQy5DU3RyaW5nKHRtcC5SYW1kaXNrKX0NCj4gK3B2LmZlYXR1cmVzID0g
bmlsDQo+IGlmIHRtcC5GZWF0dXJlcyAhPSAiIiB7DQo+IHB2LmZlYXR1cmVzID0gQy5DU3RyaW5n
KHRtcC5GZWF0dXJlcyl9DQo+IGlmIGVyciA6PSB0bXAuRTgyMEhvc3QudG9DKCZwdi5lODIwX2hv
c3QpOyBlcnIgIT0gbmlsIHsNCj4gQEAgLTE1NzgsMTAgKzE2MzEsMTMgQEAgdmFyIHB2aCBDLmxp
YnhsX2RvbWFpbl9idWlsZF9pbmZvX3R5cGVfdW5pb25fcHZoDQo+IGlmIGVyciA6PSB0bXAuUHZz
aGltLnRvQygmcHZoLnB2c2hpbSk7IGVyciAhPSBuaWwgew0KPiByZXR1cm4gZm10LkVycm9yZigi
Y29udmVydGluZyBmaWVsZCBQdnNoaW06ICV2IiwgZXJyKQ0KPiB9DQo+ICtwdmgucHZzaGltX3Bh
dGggPSBuaWwNCj4gaWYgdG1wLlB2c2hpbVBhdGggIT0gIiIgew0KPiBwdmgucHZzaGltX3BhdGgg
PSBDLkNTdHJpbmcodG1wLlB2c2hpbVBhdGgpfQ0KPiArcHZoLnB2c2hpbV9jbWRsaW5lID0gbmls
DQo+IGlmIHRtcC5QdnNoaW1DbWRsaW5lICE9ICIiIHsNCj4gcHZoLnB2c2hpbV9jbWRsaW5lID0g
Qy5DU3RyaW5nKHRtcC5QdnNoaW1DbWRsaW5lKX0NCj4gK3B2aC5wdnNoaW1fZXh0cmEgPSBuaWwN
Cj4gaWYgdG1wLlB2c2hpbUV4dHJhICE9ICIiIHsNCj4gcHZoLnB2c2hpbV9leHRyYSA9IEMuQ1N0
cmluZyh0bXAuUHZzaGltRXh0cmEpfQ0KPiBwdmhCeXRlcyA6PSBDLkdvQnl0ZXModW5zYWZlLlBv
aW50ZXIoJnB2aCksQy5zaXplb2ZfbGlieGxfZG9tYWluX2J1aWxkX2luZm9fdHlwZV91bmlvbl9w
dmgpDQo+IEBAIC0xNjM1LDYgKzE2OTEsNyBAQCBDLmxpYnhsX2RldmljZV92ZmJfZGlzcG9zZSh4
Yyl9DQo+IH0oKQ0KPiANCj4geGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNr
ZW5kRG9taWQpDQo+ICt4Yy5iYWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9t
bmFtZSAhPSAiIiB7DQo+IHhjLmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmRE
b21uYW1lKX0NCj4geGMuZGV2aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2aWQpDQo+IEBAIC0xNjQ0
LDYgKzE3MDEsNyBAQCByZXR1cm4gZm10LkVycm9yZigiY29udmVydGluZyBmaWVsZCBWbmM6ICV2
IiwgZXJyKQ0KPiBpZiBlcnIgOj0geC5TZGwudG9DKCZ4Yy5zZGwpOyBlcnIgIT0gbmlsIHsNCj4g
cmV0dXJuIGZtdC5FcnJvcmYoImNvbnZlcnRpbmcgZmllbGQgU2RsOiAldiIsIGVycikNCj4gfQ0K
PiAreGMua2V5bWFwID0gbmlsDQo+IGlmIHguS2V5bWFwICE9ICIiIHsNCj4geGMua2V5bWFwID0g
Qy5DU3RyaW5nKHguS2V5bWFwKX0NCj4gDQo+IEBAIC0xNjg5LDEwICsxNzQ3LDEyIEBAIEMubGli
eGxfZGV2aWNlX3ZrYl9kaXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiB4Yy5iYWNrZW5kX2RvbWlk
ID0gQy5saWJ4bF9kb21pZCh4LkJhY2tlbmREb21pZCkNCj4gK3hjLmJhY2tlbmRfZG9tbmFtZSA9
IG5pbA0KPiBpZiB4LkJhY2tlbmREb21uYW1lICE9ICIiIHsNCj4geGMuYmFja2VuZF9kb21uYW1l
ID0gQy5DU3RyaW5nKHguQmFja2VuZERvbW5hbWUpfQ0KPiB4Yy5kZXZpZCA9IEMubGlieGxfZGV2
aWQoeC5EZXZpZCkNCj4geGMuYmFja2VuZF90eXBlID0gQy5saWJ4bF92a2JfYmFja2VuZCh4LkJh
Y2tlbmRUeXBlKQ0KPiAreGMudW5pcXVlX2lkID0gbmlsDQo+IGlmIHguVW5pcXVlSWQgIT0gIiIg
ew0KPiB4Yy51bmlxdWVfaWQgPSBDLkNTdHJpbmcoeC5VbmlxdWVJZCl9DQo+IHhjLmZlYXR1cmVf
ZGlzYWJsZV9rZXlib2FyZCA9IEMuYm9vbCh4LkZlYXR1cmVEaXNhYmxlS2V5Ym9hcmQpDQo+IEBA
IC0xNzU4LDE0ICsxODE4LDE4IEBAIEMubGlieGxfZGV2aWNlX2Rpc2tfZGlzcG9zZSh4Yyl9DQo+
IH0oKQ0KPiANCj4geGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNrZW5kRG9t
aWQpDQo+ICt4Yy5iYWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9tbmFtZSAh
PSAiIiB7DQo+IHhjLmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmREb21uYW1l
KX0NCj4gK3hjLnBkZXZfcGF0aCA9IG5pbA0KPiBpZiB4LlBkZXZQYXRoICE9ICIiIHsNCj4geGMu
cGRldl9wYXRoID0gQy5DU3RyaW5nKHguUGRldlBhdGgpfQ0KPiAreGMudmRldiA9IG5pbA0KPiBp
ZiB4LlZkZXYgIT0gIiIgew0KPiB4Yy52ZGV2ID0gQy5DU3RyaW5nKHguVmRldil9DQo+IHhjLmJh
Y2tlbmQgPSBDLmxpYnhsX2Rpc2tfYmFja2VuZCh4LkJhY2tlbmQpDQo+IHhjLmZvcm1hdCA9IEMu
bGlieGxfZGlza19mb3JtYXQoeC5Gb3JtYXQpDQo+ICt4Yy5zY3JpcHQgPSBuaWwNCj4gaWYgeC5T
Y3JpcHQgIT0gIiIgew0KPiB4Yy5zY3JpcHQgPSBDLkNTdHJpbmcoeC5TY3JpcHQpfQ0KPiB4Yy5y
ZW1vdmFibGUgPSBDLmludCh4LlJlbW92YWJsZSkNCj4gQEAgLTE3ODEsMTMgKzE4NDUsMTcgQEAg
cmV0dXJuIGZtdC5FcnJvcmYoImNvbnZlcnRpbmcgZmllbGQgQ29sb0VuYWJsZTogJXYiLCBlcnIp
DQo+IGlmIGVyciA6PSB4LkNvbG9SZXN0b3JlRW5hYmxlLnRvQygmeGMuY29sb19yZXN0b3JlX2Vu
YWJsZSk7IGVyciAhPSBuaWwgew0KPiByZXR1cm4gZm10LkVycm9yZigiY29udmVydGluZyBmaWVs
ZCBDb2xvUmVzdG9yZUVuYWJsZTogJXYiLCBlcnIpDQo+IH0NCj4gK3hjLmNvbG9faG9zdCA9IG5p
bA0KPiBpZiB4LkNvbG9Ib3N0ICE9ICIiIHsNCj4geGMuY29sb19ob3N0ID0gQy5DU3RyaW5nKHgu
Q29sb0hvc3QpfQ0KPiB4Yy5jb2xvX3BvcnQgPSBDLmludCh4LkNvbG9Qb3J0KQ0KPiAreGMuY29s
b19leHBvcnQgPSBuaWwNCj4gaWYgeC5Db2xvRXhwb3J0ICE9ICIiIHsNCj4geGMuY29sb19leHBv
cnQgPSBDLkNTdHJpbmcoeC5Db2xvRXhwb3J0KX0NCj4gK3hjLmFjdGl2ZV9kaXNrID0gbmlsDQo+
IGlmIHguQWN0aXZlRGlzayAhPSAiIiB7DQo+IHhjLmFjdGl2ZV9kaXNrID0gQy5DU3RyaW5nKHgu
QWN0aXZlRGlzayl9DQo+ICt4Yy5oaWRkZW5fZGlzayA9IG5pbA0KPiBpZiB4LkhpZGRlbkRpc2sg
IT0gIiIgew0KPiB4Yy5oaWRkZW5fZGlzayA9IEMuQ1N0cmluZyh4LkhpZGRlbkRpc2spfQ0KPiAN
Cj4gQEAgLTE4ODMsMTI0ICsxOTUxLDE4MCBAQCBDLmxpYnhsX2RldmljZV9uaWNfZGlzcG9zZSh4
Yyl9DQo+IH0oKQ0KPiANCj4geGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNr
ZW5kRG9taWQpDQo+ICt4Yy5iYWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9t
bmFtZSAhPSAiIiB7DQo+IHhjLmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmRE
b21uYW1lKX0NCj4geGMuZGV2aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2aWQpDQo+IHhjLm10dSA9
IEMuaW50KHguTXR1KQ0KPiAreGMubW9kZWwgPSBuaWwNCj4gaWYgeC5Nb2RlbCAhPSAiIiB7DQo+
IHhjLm1vZGVsID0gQy5DU3RyaW5nKHguTW9kZWwpfQ0KPiBpZiBlcnIgOj0geC5NYWMudG9DKCZ4
Yy5tYWMpOyBlcnIgIT0gbmlsIHsNCj4gcmV0dXJuIGZtdC5FcnJvcmYoImNvbnZlcnRpbmcgZmll
bGQgTWFjOiAldiIsIGVycikNCj4gfQ0KPiAreGMuaXAgPSBuaWwNCj4gaWYgeC5JcCAhPSAiIiB7
DQo+IHhjLmlwID0gQy5DU3RyaW5nKHguSXApfQ0KPiAreGMuYnJpZGdlID0gbmlsDQo+IGlmIHgu
QnJpZGdlICE9ICIiIHsNCj4geGMuYnJpZGdlID0gQy5DU3RyaW5nKHguQnJpZGdlKX0NCj4gK3hj
LmlmbmFtZSA9IG5pbA0KPiBpZiB4LklmbmFtZSAhPSAiIiB7DQo+IHhjLmlmbmFtZSA9IEMuQ1N0
cmluZyh4LklmbmFtZSl9DQo+ICt4Yy5zY3JpcHQgPSBuaWwNCj4gaWYgeC5TY3JpcHQgIT0gIiIg
ew0KPiB4Yy5zY3JpcHQgPSBDLkNTdHJpbmcoeC5TY3JpcHQpfQ0KPiB4Yy5uaWN0eXBlID0gQy5s
aWJ4bF9uaWNfdHlwZSh4Lk5pY3R5cGUpDQo+IHhjLnJhdGVfYnl0ZXNfcGVyX2ludGVydmFsID0g
Qy51aW50NjRfdCh4LlJhdGVCeXRlc1BlckludGVydmFsKQ0KPiB4Yy5yYXRlX2ludGVydmFsX3Vz
ZWNzID0gQy51aW50MzJfdCh4LlJhdGVJbnRlcnZhbFVzZWNzKQ0KPiAreGMuZ2F0ZXdheWRldiA9
IG5pbA0KPiBpZiB4LkdhdGV3YXlkZXYgIT0gIiIgew0KPiB4Yy5nYXRld2F5ZGV2ID0gQy5DU3Ry
aW5nKHguR2F0ZXdheWRldil9DQo+ICt4Yy5jb2xvZnRfZm9yd2FyZGRldiA9IG5pbA0KPiBpZiB4
LkNvbG9mdEZvcndhcmRkZXYgIT0gIiIgew0KPiB4Yy5jb2xvZnRfZm9yd2FyZGRldiA9IEMuQ1N0
cmluZyh4LkNvbG9mdEZvcndhcmRkZXYpfQ0KPiAreGMuY29sb19zb2NrX21pcnJvcl9pZCA9IG5p
bA0KPiBpZiB4LkNvbG9Tb2NrTWlycm9ySWQgIT0gIiIgew0KPiB4Yy5jb2xvX3NvY2tfbWlycm9y
X2lkID0gQy5DU3RyaW5nKHguQ29sb1NvY2tNaXJyb3JJZCl9DQo+ICt4Yy5jb2xvX3NvY2tfbWly
cm9yX2lwID0gbmlsDQo+IGlmIHguQ29sb1NvY2tNaXJyb3JJcCAhPSAiIiB7DQo+IHhjLmNvbG9f
c29ja19taXJyb3JfaXAgPSBDLkNTdHJpbmcoeC5Db2xvU29ja01pcnJvcklwKX0NCj4gK3hjLmNv
bG9fc29ja19taXJyb3JfcG9ydCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrTWlycm9yUG9ydCAhPSAi
IiB7DQo+IHhjLmNvbG9fc29ja19taXJyb3JfcG9ydCA9IEMuQ1N0cmluZyh4LkNvbG9Tb2NrTWly
cm9yUG9ydCl9DQo+ICt4Yy5jb2xvX3NvY2tfY29tcGFyZV9wcmlfaW5faWQgPSBuaWwNCj4gaWYg
eC5Db2xvU29ja0NvbXBhcmVQcmlJbklkICE9ICIiIHsNCj4geGMuY29sb19zb2NrX2NvbXBhcmVf
cHJpX2luX2lkID0gQy5DU3RyaW5nKHguQ29sb1NvY2tDb21wYXJlUHJpSW5JZCl9DQo+ICt4Yy5j
b2xvX3NvY2tfY29tcGFyZV9wcmlfaW5faXAgPSBuaWwNCj4gaWYgeC5Db2xvU29ja0NvbXBhcmVQ
cmlJbklwICE9ICIiIHsNCj4geGMuY29sb19zb2NrX2NvbXBhcmVfcHJpX2luX2lwID0gQy5DU3Ry
aW5nKHguQ29sb1NvY2tDb21wYXJlUHJpSW5JcCl9DQo+ICt4Yy5jb2xvX3NvY2tfY29tcGFyZV9w
cmlfaW5fcG9ydCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrQ29tcGFyZVByaUluUG9ydCAhPSAiIiB7
DQo+IHhjLmNvbG9fc29ja19jb21wYXJlX3ByaV9pbl9wb3J0ID0gQy5DU3RyaW5nKHguQ29sb1Nv
Y2tDb21wYXJlUHJpSW5Qb3J0KX0NCj4gK3hjLmNvbG9fc29ja19jb21wYXJlX3NlY19pbl9pZCA9
IG5pbA0KPiBpZiB4LkNvbG9Tb2NrQ29tcGFyZVNlY0luSWQgIT0gIiIgew0KPiB4Yy5jb2xvX3Nv
Y2tfY29tcGFyZV9zZWNfaW5faWQgPSBDLkNTdHJpbmcoeC5Db2xvU29ja0NvbXBhcmVTZWNJbklk
KX0NCj4gK3hjLmNvbG9fc29ja19jb21wYXJlX3NlY19pbl9pcCA9IG5pbA0KPiBpZiB4LkNvbG9T
b2NrQ29tcGFyZVNlY0luSXAgIT0gIiIgew0KPiB4Yy5jb2xvX3NvY2tfY29tcGFyZV9zZWNfaW5f
aXAgPSBDLkNTdHJpbmcoeC5Db2xvU29ja0NvbXBhcmVTZWNJbklwKX0NCj4gK3hjLmNvbG9fc29j
a19jb21wYXJlX3NlY19pbl9wb3J0ID0gbmlsDQo+IGlmIHguQ29sb1NvY2tDb21wYXJlU2VjSW5Q
b3J0ICE9ICIiIHsNCj4geGMuY29sb19zb2NrX2NvbXBhcmVfc2VjX2luX3BvcnQgPSBDLkNTdHJp
bmcoeC5Db2xvU29ja0NvbXBhcmVTZWNJblBvcnQpfQ0KPiAreGMuY29sb19zb2NrX2NvbXBhcmVf
bm90aWZ5X2lkID0gbmlsDQo+IGlmIHguQ29sb1NvY2tDb21wYXJlTm90aWZ5SWQgIT0gIiIgew0K
PiB4Yy5jb2xvX3NvY2tfY29tcGFyZV9ub3RpZnlfaWQgPSBDLkNTdHJpbmcoeC5Db2xvU29ja0Nv
bXBhcmVOb3RpZnlJZCl9DQo+ICt4Yy5jb2xvX3NvY2tfY29tcGFyZV9ub3RpZnlfaXAgPSBuaWwN
Cj4gaWYgeC5Db2xvU29ja0NvbXBhcmVOb3RpZnlJcCAhPSAiIiB7DQo+IHhjLmNvbG9fc29ja19j
b21wYXJlX25vdGlmeV9pcCA9IEMuQ1N0cmluZyh4LkNvbG9Tb2NrQ29tcGFyZU5vdGlmeUlwKX0N
Cj4gK3hjLmNvbG9fc29ja19jb21wYXJlX25vdGlmeV9wb3J0ID0gbmlsDQo+IGlmIHguQ29sb1Nv
Y2tDb21wYXJlTm90aWZ5UG9ydCAhPSAiIiB7DQo+IHhjLmNvbG9fc29ja19jb21wYXJlX25vdGlm
eV9wb3J0ID0gQy5DU3RyaW5nKHguQ29sb1NvY2tDb21wYXJlTm90aWZ5UG9ydCl9DQo+ICt4Yy5j
b2xvX3NvY2tfcmVkaXJlY3RvcjBfaWQgPSBuaWwNCj4gaWYgeC5Db2xvU29ja1JlZGlyZWN0b3Iw
SWQgIT0gIiIgew0KPiB4Yy5jb2xvX3NvY2tfcmVkaXJlY3RvcjBfaWQgPSBDLkNTdHJpbmcoeC5D
b2xvU29ja1JlZGlyZWN0b3IwSWQpfQ0KPiAreGMuY29sb19zb2NrX3JlZGlyZWN0b3IwX2lwID0g
bmlsDQo+IGlmIHguQ29sb1NvY2tSZWRpcmVjdG9yMElwICE9ICIiIHsNCj4geGMuY29sb19zb2Nr
X3JlZGlyZWN0b3IwX2lwID0gQy5DU3RyaW5nKHguQ29sb1NvY2tSZWRpcmVjdG9yMElwKX0NCj4g
K3hjLmNvbG9fc29ja19yZWRpcmVjdG9yMF9wb3J0ID0gbmlsDQo+IGlmIHguQ29sb1NvY2tSZWRp
cmVjdG9yMFBvcnQgIT0gIiIgew0KPiB4Yy5jb2xvX3NvY2tfcmVkaXJlY3RvcjBfcG9ydCA9IEMu
Q1N0cmluZyh4LkNvbG9Tb2NrUmVkaXJlY3RvcjBQb3J0KX0NCj4gK3hjLmNvbG9fc29ja19yZWRp
cmVjdG9yMV9pZCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrUmVkaXJlY3RvcjFJZCAhPSAiIiB7DQo+
IHhjLmNvbG9fc29ja19yZWRpcmVjdG9yMV9pZCA9IEMuQ1N0cmluZyh4LkNvbG9Tb2NrUmVkaXJl
Y3RvcjFJZCl9DQo+ICt4Yy5jb2xvX3NvY2tfcmVkaXJlY3RvcjFfaXAgPSBuaWwNCj4gaWYgeC5D
b2xvU29ja1JlZGlyZWN0b3IxSXAgIT0gIiIgew0KPiB4Yy5jb2xvX3NvY2tfcmVkaXJlY3RvcjFf
aXAgPSBDLkNTdHJpbmcoeC5Db2xvU29ja1JlZGlyZWN0b3IxSXApfQ0KPiAreGMuY29sb19zb2Nr
X3JlZGlyZWN0b3IxX3BvcnQgPSBuaWwNCj4gaWYgeC5Db2xvU29ja1JlZGlyZWN0b3IxUG9ydCAh
PSAiIiB7DQo+IHhjLmNvbG9fc29ja19yZWRpcmVjdG9yMV9wb3J0ID0gQy5DU3RyaW5nKHguQ29s
b1NvY2tSZWRpcmVjdG9yMVBvcnQpfQ0KPiAreGMuY29sb19zb2NrX3JlZGlyZWN0b3IyX2lkID0g
bmlsDQo+IGlmIHguQ29sb1NvY2tSZWRpcmVjdG9yMklkICE9ICIiIHsNCj4geGMuY29sb19zb2Nr
X3JlZGlyZWN0b3IyX2lkID0gQy5DU3RyaW5nKHguQ29sb1NvY2tSZWRpcmVjdG9yMklkKX0NCj4g
K3hjLmNvbG9fc29ja19yZWRpcmVjdG9yMl9pcCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrUmVkaXJl
Y3RvcjJJcCAhPSAiIiB7DQo+IHhjLmNvbG9fc29ja19yZWRpcmVjdG9yMl9pcCA9IEMuQ1N0cmlu
Zyh4LkNvbG9Tb2NrUmVkaXJlY3RvcjJJcCl9DQo+ICt4Yy5jb2xvX3NvY2tfcmVkaXJlY3RvcjJf
cG9ydCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrUmVkaXJlY3RvcjJQb3J0ICE9ICIiIHsNCj4geGMu
Y29sb19zb2NrX3JlZGlyZWN0b3IyX3BvcnQgPSBDLkNTdHJpbmcoeC5Db2xvU29ja1JlZGlyZWN0
b3IyUG9ydCl9DQo+ICt4Yy5jb2xvX2ZpbHRlcl9taXJyb3JfcXVldWUgPSBuaWwNCj4gaWYgeC5D
b2xvRmlsdGVyTWlycm9yUXVldWUgIT0gIiIgew0KPiB4Yy5jb2xvX2ZpbHRlcl9taXJyb3JfcXVl
dWUgPSBDLkNTdHJpbmcoeC5Db2xvRmlsdGVyTWlycm9yUXVldWUpfQ0KPiAreGMuY29sb19maWx0
ZXJfbWlycm9yX291dGRldiA9IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJNaXJyb3JPdXRkZXYgIT0g
IiIgew0KPiB4Yy5jb2xvX2ZpbHRlcl9taXJyb3Jfb3V0ZGV2ID0gQy5DU3RyaW5nKHguQ29sb0Zp
bHRlck1pcnJvck91dGRldil9DQo+ICt4Yy5jb2xvX2ZpbHRlcl9yZWRpcmVjdG9yMF9xdWV1ZSA9
IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJSZWRpcmVjdG9yMFF1ZXVlICE9ICIiIHsNCj4geGMuY29s
b19maWx0ZXJfcmVkaXJlY3RvcjBfcXVldWUgPSBDLkNTdHJpbmcoeC5Db2xvRmlsdGVyUmVkaXJl
Y3RvcjBRdWV1ZSl9DQo+ICt4Yy5jb2xvX2ZpbHRlcl9yZWRpcmVjdG9yMF9pbmRldiA9IG5pbA0K
PiBpZiB4LkNvbG9GaWx0ZXJSZWRpcmVjdG9yMEluZGV2ICE9ICIiIHsNCj4geGMuY29sb19maWx0
ZXJfcmVkaXJlY3RvcjBfaW5kZXYgPSBDLkNTdHJpbmcoeC5Db2xvRmlsdGVyUmVkaXJlY3RvcjBJ
bmRldil9DQo+ICt4Yy5jb2xvX2ZpbHRlcl9yZWRpcmVjdG9yMF9vdXRkZXYgPSBuaWwNCj4gaWYg
eC5Db2xvRmlsdGVyUmVkaXJlY3RvcjBPdXRkZXYgIT0gIiIgew0KPiB4Yy5jb2xvX2ZpbHRlcl9y
ZWRpcmVjdG9yMF9vdXRkZXYgPSBDLkNTdHJpbmcoeC5Db2xvRmlsdGVyUmVkaXJlY3RvcjBPdXRk
ZXYpfQ0KPiAreGMuY29sb19maWx0ZXJfcmVkaXJlY3RvcjFfcXVldWUgPSBuaWwNCj4gaWYgeC5D
b2xvRmlsdGVyUmVkaXJlY3RvcjFRdWV1ZSAhPSAiIiB7DQo+IHhjLmNvbG9fZmlsdGVyX3JlZGly
ZWN0b3IxX3F1ZXVlID0gQy5DU3RyaW5nKHguQ29sb0ZpbHRlclJlZGlyZWN0b3IxUXVldWUpfQ0K
PiAreGMuY29sb19maWx0ZXJfcmVkaXJlY3RvcjFfaW5kZXYgPSBuaWwNCj4gaWYgeC5Db2xvRmls
dGVyUmVkaXJlY3RvcjFJbmRldiAhPSAiIiB7DQo+IHhjLmNvbG9fZmlsdGVyX3JlZGlyZWN0b3Ix
X2luZGV2ID0gQy5DU3RyaW5nKHguQ29sb0ZpbHRlclJlZGlyZWN0b3IxSW5kZXYpfQ0KPiAreGMu
Y29sb19maWx0ZXJfcmVkaXJlY3RvcjFfb3V0ZGV2ID0gbmlsDQo+IGlmIHguQ29sb0ZpbHRlclJl
ZGlyZWN0b3IxT3V0ZGV2ICE9ICIiIHsNCj4geGMuY29sb19maWx0ZXJfcmVkaXJlY3RvcjFfb3V0
ZGV2ID0gQy5DU3RyaW5nKHguQ29sb0ZpbHRlclJlZGlyZWN0b3IxT3V0ZGV2KX0NCj4gK3hjLmNv
bG9fY29tcGFyZV9wcmlfaW4gPSBuaWwNCj4gaWYgeC5Db2xvQ29tcGFyZVByaUluICE9ICIiIHsN
Cj4geGMuY29sb19jb21wYXJlX3ByaV9pbiA9IEMuQ1N0cmluZyh4LkNvbG9Db21wYXJlUHJpSW4p
fQ0KPiAreGMuY29sb19jb21wYXJlX3NlY19pbiA9IG5pbA0KPiBpZiB4LkNvbG9Db21wYXJlU2Vj
SW4gIT0gIiIgew0KPiB4Yy5jb2xvX2NvbXBhcmVfc2VjX2luID0gQy5DU3RyaW5nKHguQ29sb0Nv
bXBhcmVTZWNJbil9DQo+ICt4Yy5jb2xvX2NvbXBhcmVfb3V0ID0gbmlsDQo+IGlmIHguQ29sb0Nv
bXBhcmVPdXQgIT0gIiIgew0KPiB4Yy5jb2xvX2NvbXBhcmVfb3V0ID0gQy5DU3RyaW5nKHguQ29s
b0NvbXBhcmVPdXQpfQ0KPiAreGMuY29sb19jb21wYXJlX25vdGlmeV9kZXYgPSBuaWwNCj4gaWYg
eC5Db2xvQ29tcGFyZU5vdGlmeURldiAhPSAiIiB7DQo+IHhjLmNvbG9fY29tcGFyZV9ub3RpZnlf
ZGV2ID0gQy5DU3RyaW5nKHguQ29sb0NvbXBhcmVOb3RpZnlEZXYpfQ0KPiAreGMuY29sb19zb2Nr
X3NlY19yZWRpcmVjdG9yMF9pZCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrU2VjUmVkaXJlY3RvcjBJ
ZCAhPSAiIiB7DQo+IHhjLmNvbG9fc29ja19zZWNfcmVkaXJlY3RvcjBfaWQgPSBDLkNTdHJpbmco
eC5Db2xvU29ja1NlY1JlZGlyZWN0b3IwSWQpfQ0KPiAreGMuY29sb19zb2NrX3NlY19yZWRpcmVj
dG9yMF9pcCA9IG5pbA0KPiBpZiB4LkNvbG9Tb2NrU2VjUmVkaXJlY3RvcjBJcCAhPSAiIiB7DQo+
IHhjLmNvbG9fc29ja19zZWNfcmVkaXJlY3RvcjBfaXAgPSBDLkNTdHJpbmcoeC5Db2xvU29ja1Nl
Y1JlZGlyZWN0b3IwSXApfQ0KPiAreGMuY29sb19zb2NrX3NlY19yZWRpcmVjdG9yMF9wb3J0ID0g
bmlsDQo+IGlmIHguQ29sb1NvY2tTZWNSZWRpcmVjdG9yMFBvcnQgIT0gIiIgew0KPiB4Yy5jb2xv
X3NvY2tfc2VjX3JlZGlyZWN0b3IwX3BvcnQgPSBDLkNTdHJpbmcoeC5Db2xvU29ja1NlY1JlZGly
ZWN0b3IwUG9ydCl9DQo+ICt4Yy5jb2xvX3NvY2tfc2VjX3JlZGlyZWN0b3IxX2lkID0gbmlsDQo+
IGlmIHguQ29sb1NvY2tTZWNSZWRpcmVjdG9yMUlkICE9ICIiIHsNCj4geGMuY29sb19zb2NrX3Nl
Y19yZWRpcmVjdG9yMV9pZCA9IEMuQ1N0cmluZyh4LkNvbG9Tb2NrU2VjUmVkaXJlY3RvcjFJZCl9
DQo+ICt4Yy5jb2xvX3NvY2tfc2VjX3JlZGlyZWN0b3IxX2lwID0gbmlsDQo+IGlmIHguQ29sb1Nv
Y2tTZWNSZWRpcmVjdG9yMUlwICE9ICIiIHsNCj4geGMuY29sb19zb2NrX3NlY19yZWRpcmVjdG9y
MV9pcCA9IEMuQ1N0cmluZyh4LkNvbG9Tb2NrU2VjUmVkaXJlY3RvcjFJcCl9DQo+ICt4Yy5jb2xv
X3NvY2tfc2VjX3JlZGlyZWN0b3IxX3BvcnQgPSBuaWwNCj4gaWYgeC5Db2xvU29ja1NlY1JlZGly
ZWN0b3IxUG9ydCAhPSAiIiB7DQo+IHhjLmNvbG9fc29ja19zZWNfcmVkaXJlY3RvcjFfcG9ydCA9
IEMuQ1N0cmluZyh4LkNvbG9Tb2NrU2VjUmVkaXJlY3RvcjFQb3J0KX0NCj4gK3hjLmNvbG9fZmls
dGVyX3NlY19yZWRpcmVjdG9yMF9xdWV1ZSA9IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJTZWNSZWRp
cmVjdG9yMFF1ZXVlICE9ICIiIHsNCj4geGMuY29sb19maWx0ZXJfc2VjX3JlZGlyZWN0b3IwX3F1
ZXVlID0gQy5DU3RyaW5nKHguQ29sb0ZpbHRlclNlY1JlZGlyZWN0b3IwUXVldWUpfQ0KPiAreGMu
Y29sb19maWx0ZXJfc2VjX3JlZGlyZWN0b3IwX2luZGV2ID0gbmlsDQo+IGlmIHguQ29sb0ZpbHRl
clNlY1JlZGlyZWN0b3IwSW5kZXYgIT0gIiIgew0KPiB4Yy5jb2xvX2ZpbHRlcl9zZWNfcmVkaXJl
Y3RvcjBfaW5kZXYgPSBDLkNTdHJpbmcoeC5Db2xvRmlsdGVyU2VjUmVkaXJlY3RvcjBJbmRldil9
DQo+ICt4Yy5jb2xvX2ZpbHRlcl9zZWNfcmVkaXJlY3RvcjBfb3V0ZGV2ID0gbmlsDQo+IGlmIHgu
Q29sb0ZpbHRlclNlY1JlZGlyZWN0b3IwT3V0ZGV2ICE9ICIiIHsNCj4geGMuY29sb19maWx0ZXJf
c2VjX3JlZGlyZWN0b3IwX291dGRldiA9IEMuQ1N0cmluZyh4LkNvbG9GaWx0ZXJTZWNSZWRpcmVj
dG9yME91dGRldil9DQo+ICt4Yy5jb2xvX2ZpbHRlcl9zZWNfcmVkaXJlY3RvcjFfcXVldWUgPSBu
aWwNCj4gaWYgeC5Db2xvRmlsdGVyU2VjUmVkaXJlY3RvcjFRdWV1ZSAhPSAiIiB7DQo+IHhjLmNv
bG9fZmlsdGVyX3NlY19yZWRpcmVjdG9yMV9xdWV1ZSA9IEMuQ1N0cmluZyh4LkNvbG9GaWx0ZXJT
ZWNSZWRpcmVjdG9yMVF1ZXVlKX0NCj4gK3hjLmNvbG9fZmlsdGVyX3NlY19yZWRpcmVjdG9yMV9p
bmRldiA9IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJTZWNSZWRpcmVjdG9yMUluZGV2ICE9ICIiIHsN
Cj4geGMuY29sb19maWx0ZXJfc2VjX3JlZGlyZWN0b3IxX2luZGV2ID0gQy5DU3RyaW5nKHguQ29s
b0ZpbHRlclNlY1JlZGlyZWN0b3IxSW5kZXYpfQ0KPiAreGMuY29sb19maWx0ZXJfc2VjX3JlZGly
ZWN0b3IxX291dGRldiA9IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJTZWNSZWRpcmVjdG9yMU91dGRl
diAhPSAiIiB7DQo+IHhjLmNvbG9fZmlsdGVyX3NlY19yZWRpcmVjdG9yMV9vdXRkZXYgPSBDLkNT
dHJpbmcoeC5Db2xvRmlsdGVyU2VjUmVkaXJlY3RvcjFPdXRkZXYpfQ0KPiAreGMuY29sb19maWx0
ZXJfc2VjX3Jld3JpdGVyMF9xdWV1ZSA9IG5pbA0KPiBpZiB4LkNvbG9GaWx0ZXJTZWNSZXdyaXRl
cjBRdWV1ZSAhPSAiIiB7DQo+IHhjLmNvbG9fZmlsdGVyX3NlY19yZXdyaXRlcjBfcXVldWUgPSBD
LkNTdHJpbmcoeC5Db2xvRmlsdGVyU2VjUmV3cml0ZXIwUXVldWUpfQ0KPiAreGMuY29sb19jaGVj
a3BvaW50X2hvc3QgPSBuaWwNCj4gaWYgeC5Db2xvQ2hlY2twb2ludEhvc3QgIT0gIiIgew0KPiB4
Yy5jb2xvX2NoZWNrcG9pbnRfaG9zdCA9IEMuQ1N0cmluZyh4LkNvbG9DaGVja3BvaW50SG9zdCl9
DQo+ICt4Yy5jb2xvX2NoZWNrcG9pbnRfcG9ydCA9IG5pbA0KPiBpZiB4LkNvbG9DaGVja3BvaW50
UG9ydCAhPSAiIiB7DQo+IHhjLmNvbG9fY2hlY2twb2ludF9wb3J0ID0gQy5DU3RyaW5nKHguQ29s
b0NoZWNrcG9pbnRQb3J0KX0NCj4gDQo+IEBAIC0yMDUzLDYgKzIxNzcsNyBAQCB4Yy5wb3dlcl9t
Z210ID0gQy5ib29sKHguUG93ZXJNZ210KQ0KPiB4Yy5wZXJtaXNzaXZlID0gQy5ib29sKHguUGVy
bWlzc2l2ZSkNCj4geGMuc2VpemUgPSBDLmJvb2woeC5TZWl6ZSkNCj4geGMucmRtX3BvbGljeSA9
IEMubGlieGxfcmRtX3Jlc2VydmVfcG9saWN5KHguUmRtUG9saWN5KQ0KPiAreGMubmFtZSA9IG5p
bA0KPiBpZiB4Lk5hbWUgIT0gIiIgew0KPiB4Yy5uYW1lID0gQy5DU3RyaW5nKHguTmFtZSl9DQo+
IA0KPiBAQCAtMjEyNiw2ICsyMjUxLDcgQEAgeGMuZGV2aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2
aWQpDQo+IHhjLnZlcnNpb24gPSBDLmludCh4LlZlcnNpb24pDQo+IHhjLnBvcnRzID0gQy5pbnQo
eC5Qb3J0cykNCj4geGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNrZW5kRG9t
aWQpDQo+ICt4Yy5iYWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9tbmFtZSAh
PSAiIiB7DQo+IHhjLmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmREb21uYW1l
KX0NCj4gDQo+IEBAIC0yMjIzLDYgKzIzNDksNyBAQCBpZiBlcnIgIT0gbmlsew0KPiBDLmxpYnhs
X2RldmljZV9kdGRldl9kaXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiAreGMucGF0aCA9IG5pbA0K
PiBpZiB4LlBhdGggIT0gIiIgew0KPiB4Yy5wYXRoID0gQy5DU3RyaW5nKHguUGF0aCl9DQo+IA0K
PiBAQCAtMjI1OSw2ICsyMzg2LDcgQEAgQy5saWJ4bF9kZXZpY2VfdnRwbV9kaXNwb3NlKHhjKX0N
Cj4gfSgpDQo+IA0KPiB4Yy5iYWNrZW5kX2RvbWlkID0gQy5saWJ4bF9kb21pZCh4LkJhY2tlbmRE
b21pZCkNCj4gK3hjLmJhY2tlbmRfZG9tbmFtZSA9IG5pbA0KPiBpZiB4LkJhY2tlbmREb21uYW1l
ICE9ICIiIHsNCj4geGMuYmFja2VuZF9kb21uYW1lID0gQy5DU3RyaW5nKHguQmFja2VuZERvbW5h
bWUpfQ0KPiB4Yy5kZXZpZCA9IEMubGlieGxfZGV2aWQoeC5EZXZpZCkNCj4gQEAgLTIyOTksMTIg
KzI0MjcsMTYgQEAgQy5saWJ4bF9kZXZpY2VfcDlfZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4g
eGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNrZW5kRG9taWQpDQo+ICt4Yy5i
YWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9tbmFtZSAhPSAiIiB7DQo+IHhj
LmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmREb21uYW1lKX0NCj4gK3hjLnRh
ZyA9IG5pbA0KPiBpZiB4LlRhZyAhPSAiIiB7DQo+IHhjLnRhZyA9IEMuQ1N0cmluZyh4LlRhZyl9
DQo+ICt4Yy5wYXRoID0gbmlsDQo+IGlmIHguUGF0aCAhPSAiIiB7DQo+IHhjLnBhdGggPSBDLkNT
dHJpbmcoeC5QYXRoKX0NCj4gK3hjLnNlY3VyaXR5X21vZGVsID0gbmlsDQo+IGlmIHguU2VjdXJp
dHlNb2RlbCAhPSAiIiB7DQo+IHhjLnNlY3VyaXR5X21vZGVsID0gQy5DU3RyaW5nKHguU2VjdXJp
dHlNb2RlbCl9DQo+IHhjLmRldmlkID0gQy5saWJ4bF9kZXZpZCh4LkRldmlkKQ0KPiBAQCAtMjMz
OSw2ICsyNDcxLDcgQEAgQy5saWJ4bF9kZXZpY2VfcHZjYWxsc2lmX2Rpc3Bvc2UoeGMpfQ0KPiB9
KCkNCj4gDQo+IHhjLmJhY2tlbmRfZG9taWQgPSBDLmxpYnhsX2RvbWlkKHguQmFja2VuZERvbWlk
KQ0KPiAreGMuYmFja2VuZF9kb21uYW1lID0gbmlsDQo+IGlmIHguQmFja2VuZERvbW5hbWUgIT0g
IiIgew0KPiB4Yy5iYWNrZW5kX2RvbW5hbWUgPSBDLkNTdHJpbmcoeC5CYWNrZW5kRG9tbmFtZSl9
DQo+IHhjLmRldmlkID0gQy5saWJ4bF9kZXZpZCh4LkRldmlkKQ0KPiBAQCAtMjM5OSw5ICsyNTMy
LDExIEBAIEMubGlieGxfZGV2aWNlX2NoYW5uZWxfZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4g
eGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNrZW5kRG9taWQpDQo+ICt4Yy5i
YWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9tbmFtZSAhPSAiIiB7DQo+IHhj
LmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmREb21uYW1lKX0NCj4geGMuZGV2
aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2aWQpDQo+ICt4Yy5uYW1lID0gbmlsDQo+IGlmIHguTmFt
ZSAhPSAiIiB7DQo+IHhjLm5hbWUgPSBDLkNTdHJpbmcoeC5OYW1lKX0NCj4geGMuY29ubmVjdGlv
biA9IEMubGlieGxfY2hhbm5lbF9jb25uZWN0aW9uKHguQ29ubmVjdGlvbikNCj4gQEAgLTI0MTYs
NiArMjU1MSw3IEBAIGlmICFvayB7DQo+IHJldHVybiBlcnJvcnMuTmV3KCJ3cm9uZyB0eXBlIGZv
ciB1bmlvbiBrZXkgY29ubmVjdGlvbiIpDQo+IH0NCj4gdmFyIHNvY2tldCBDLmxpYnhsX2Rldmlj
ZV9jaGFubmVsX2Nvbm5lY3Rpb25fdW5pb25fc29ja2V0DQo+ICtzb2NrZXQucGF0aCA9IG5pbA0K
PiBpZiB0bXAuUGF0aCAhPSAiIiB7DQo+IHNvY2tldC5wYXRoID0gQy5DU3RyaW5nKHRtcC5QYXRo
KX0NCj4gc29ja2V0Qnl0ZXMgOj0gQy5Hb0J5dGVzKHVuc2FmZS5Qb2ludGVyKCZzb2NrZXQpLEMu
c2l6ZW9mX2xpYnhsX2RldmljZV9jaGFubmVsX2Nvbm5lY3Rpb25fdW5pb25fc29ja2V0KQ0KPiBA
QCAtMjQ1Miw2ICsyNTg4LDcgQEAgaWYgZXJyICE9IG5pbHsNCj4gQy5saWJ4bF9jb25uZWN0b3Jf
cGFyYW1fZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4gK3hjLnVuaXF1ZV9pZCA9IG5pbA0KPiBp
ZiB4LlVuaXF1ZUlkICE9ICIiIHsNCj4geGMudW5pcXVlX2lkID0gQy5DU3RyaW5nKHguVW5pcXVl
SWQpfQ0KPiB4Yy53aWR0aCA9IEMudWludDMyX3QoeC5XaWR0aCkNCj4gQEAgLTI0OTcsNiArMjYz
NCw3IEBAIEMubGlieGxfZGV2aWNlX3ZkaXNwbF9kaXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiB4
Yy5iYWNrZW5kX2RvbWlkID0gQy5saWJ4bF9kb21pZCh4LkJhY2tlbmREb21pZCkNCj4gK3hjLmJh
Y2tlbmRfZG9tbmFtZSA9IG5pbA0KPiBpZiB4LkJhY2tlbmREb21uYW1lICE9ICIiIHsNCj4geGMu
YmFja2VuZF9kb21uYW1lID0gQy5DU3RyaW5nKHguQmFja2VuZERvbW5hbWUpfQ0KPiB4Yy5kZXZp
ZCA9IEMubGlieGxfZGV2aWQoeC5EZXZpZCkNCj4gQEAgLTI2MDgsNiArMjc0Niw3IEBAIGlmIGVy
ciAhPSBuaWx7DQo+IEMubGlieGxfdnNuZF9zdHJlYW1fZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiAN
Cj4gK3hjLnVuaXF1ZV9pZCA9IG5pbA0KPiBpZiB4LlVuaXF1ZUlkICE9ICIiIHsNCj4geGMudW5p
cXVlX2lkID0gQy5DU3RyaW5nKHguVW5pcXVlSWQpfQ0KPiB4Yy5fdHlwZSA9IEMubGlieGxfdnNu
ZF9zdHJlYW1fdHlwZSh4LlR5cGUpDQo+IEBAIC0yNjU0LDYgKzI3OTMsNyBAQCBpZiBlcnIgIT0g
bmlsew0KPiBDLmxpYnhsX3ZzbmRfcGNtX2Rpc3Bvc2UoeGMpfQ0KPiB9KCkNCj4gDQo+ICt4Yy5u
YW1lID0gbmlsDQo+IGlmIHguTmFtZSAhPSAiIiB7DQo+IHhjLm5hbWUgPSBDLkNTdHJpbmcoeC5O
YW1lKX0NCj4gaWYgZXJyIDo9IHguUGFyYW1zLnRvQygmeGMucGFyYW1zKTsgZXJyICE9IG5pbCB7
DQo+IEBAIC0yNzE0LDExICsyODU0LDE0IEBAIEMubGlieGxfZGV2aWNlX3ZzbmRfZGlzcG9zZSh4
Yyl9DQo+IH0oKQ0KPiANCj4geGMuYmFja2VuZF9kb21pZCA9IEMubGlieGxfZG9taWQoeC5CYWNr
ZW5kRG9taWQpDQo+ICt4Yy5iYWNrZW5kX2RvbW5hbWUgPSBuaWwNCj4gaWYgeC5CYWNrZW5kRG9t
bmFtZSAhPSAiIiB7DQo+IHhjLmJhY2tlbmRfZG9tbmFtZSA9IEMuQ1N0cmluZyh4LkJhY2tlbmRE
b21uYW1lKX0NCj4geGMuZGV2aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2aWQpDQo+ICt4Yy5zaG9y
dF9uYW1lID0gbmlsDQo+IGlmIHguU2hvcnROYW1lICE9ICIiIHsNCj4geGMuc2hvcnRfbmFtZSA9
IEMuQ1N0cmluZyh4LlNob3J0TmFtZSl9DQo+ICt4Yy5sb25nX25hbWUgPSBuaWwNCj4gaWYgeC5M
b25nTmFtZSAhPSAiIiB7DQo+IHhjLmxvbmdfbmFtZSA9IEMuQ1N0cmluZyh4LkxvbmdOYW1lKX0N
Cj4gaWYgZXJyIDo9IHguUGFyYW1zLnRvQygmeGMucGFyYW1zKTsgZXJyICE9IG5pbCB7DQo+IEBA
IC0zMTAzLDkgKzMyNDYsMTEgQEAgaWYgZXJyICE9IG5pbHsNCj4gQy5saWJ4bF9kaXNraW5mb19k
aXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiAreGMuYmFja2VuZCA9IG5pbA0KPiBpZiB4LkJhY2tl
bmQgIT0gIiIgew0KPiB4Yy5iYWNrZW5kID0gQy5DU3RyaW5nKHguQmFja2VuZCl9DQo+IHhjLmJh
Y2tlbmRfaWQgPSBDLnVpbnQzMl90KHguQmFja2VuZElkKQ0KPiAreGMuZnJvbnRlbmQgPSBuaWwN
Cj4gaWYgeC5Gcm9udGVuZCAhPSAiIiB7DQo+IHhjLmZyb250ZW5kID0gQy5DU3RyaW5nKHguRnJv
bnRlbmQpfQ0KPiB4Yy5mcm9udGVuZF9pZCA9IEMudWludDMyX3QoeC5Gcm9udGVuZElkKQ0KPiBA
QCAtMzE0OSw5ICszMjk0LDExIEBAIGlmIGVyciAhPSBuaWx7DQo+IEMubGlieGxfbmljaW5mb19k
aXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiAreGMuYmFja2VuZCA9IG5pbA0KPiBpZiB4LkJhY2tl
bmQgIT0gIiIgew0KPiB4Yy5iYWNrZW5kID0gQy5DU3RyaW5nKHguQmFja2VuZCl9DQo+IHhjLmJh
Y2tlbmRfaWQgPSBDLnVpbnQzMl90KHguQmFja2VuZElkKQ0KPiAreGMuZnJvbnRlbmQgPSBuaWwN
Cj4gaWYgeC5Gcm9udGVuZCAhPSAiIiB7DQo+IHhjLmZyb250ZW5kID0gQy5DU3RyaW5nKHguRnJv
bnRlbmQpfQ0KPiB4Yy5mcm9udGVuZF9pZCA9IEMudWludDMyX3QoeC5Gcm9udGVuZElkKQ0KPiBA
QCAtMzE5OCw5ICszMzQ1LDExIEBAIGlmIGVyciAhPSBuaWx7DQo+IEMubGlieGxfdnRwbWluZm9f
ZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4gK3hjLmJhY2tlbmQgPSBuaWwNCj4gaWYgeC5CYWNr
ZW5kICE9ICIiIHsNCj4geGMuYmFja2VuZCA9IEMuQ1N0cmluZyh4LkJhY2tlbmQpfQ0KPiB4Yy5i
YWNrZW5kX2lkID0gQy51aW50MzJfdCh4LkJhY2tlbmRJZCkNCj4gK3hjLmZyb250ZW5kID0gbmls
DQo+IGlmIHguRnJvbnRlbmQgIT0gIiIgew0KPiB4Yy5mcm9udGVuZCA9IEMuQ1N0cmluZyh4LkZy
b250ZW5kKX0NCj4geGMuZnJvbnRlbmRfaWQgPSBDLnVpbnQzMl90KHguRnJvbnRlbmRJZCkNCj4g
QEAgLTMyNTQsOSArMzQwMywxMSBAQCB4Yy5fdHlwZSA9IEMubGlieGxfdXNiY3RybF90eXBlKHgu
VHlwZSkNCj4geGMuZGV2aWQgPSBDLmxpYnhsX2RldmlkKHguRGV2aWQpDQo+IHhjLnZlcnNpb24g
PSBDLmludCh4LlZlcnNpb24pDQo+IHhjLnBvcnRzID0gQy5pbnQoeC5Qb3J0cykNCj4gK3hjLmJh
Y2tlbmQgPSBuaWwNCj4gaWYgeC5CYWNrZW5kICE9ICIiIHsNCj4geGMuYmFja2VuZCA9IEMuQ1N0
cmluZyh4LkJhY2tlbmQpfQ0KPiB4Yy5iYWNrZW5kX2lkID0gQy51aW50MzJfdCh4LkJhY2tlbmRJ
ZCkNCj4gK3hjLmZyb250ZW5kID0gbmlsDQo+IGlmIHguRnJvbnRlbmQgIT0gIiIgew0KPiB4Yy5m
cm9udGVuZCA9IEMuQ1N0cmluZyh4LkZyb250ZW5kKX0NCj4geGMuZnJvbnRlbmRfaWQgPSBDLnVp
bnQzMl90KHguRnJvbnRlbmRJZCkNCj4gQEAgLTM0MjIsNiArMzU3Myw3IEBAIGlmIGVyciAhPSBu
aWx7DQo+IEMubGlieGxfY29ubmVjdG9yaW5mb19kaXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiAr
eGMudW5pcXVlX2lkID0gbmlsDQo+IGlmIHguVW5pcXVlSWQgIT0gIiIgew0KPiB4Yy51bmlxdWVf
aWQgPSBDLkNTdHJpbmcoeC5VbmlxdWVJZCl9DQo+IHhjLndpZHRoID0gQy51aW50MzJfdCh4Lldp
ZHRoKQ0KPiBAQCAtMzQ3Myw5ICszNjI1LDExIEBAIGlmIGVyciAhPSBuaWx7DQo+IEMubGlieGxf
dmRpc3BsaW5mb19kaXNwb3NlKHhjKX0NCj4gfSgpDQo+IA0KPiAreGMuYmFja2VuZCA9IG5pbA0K
PiBpZiB4LkJhY2tlbmQgIT0gIiIgew0KPiB4Yy5iYWNrZW5kID0gQy5DU3RyaW5nKHguQmFja2Vu
ZCl9DQo+IHhjLmJhY2tlbmRfaWQgPSBDLnVpbnQzMl90KHguQmFja2VuZElkKQ0KPiAreGMuZnJv
bnRlbmQgPSBuaWwNCj4gaWYgeC5Gcm9udGVuZCAhPSAiIiB7DQo+IHhjLmZyb250ZW5kID0gQy5D
U3RyaW5nKHguRnJvbnRlbmQpfQ0KPiB4Yy5mcm9udGVuZF9pZCA9IEMudWludDMyX3QoeC5Gcm9u
dGVuZElkKQ0KPiBAQCAtMzYxMSw5ICszNzY1LDExIEBAIGlmIGVyciAhPSBuaWx7DQo+IEMubGli
eGxfdnNuZGluZm9fZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4gK3hjLmJhY2tlbmQgPSBuaWwN
Cj4gaWYgeC5CYWNrZW5kICE9ICIiIHsNCj4geGMuYmFja2VuZCA9IEMuQ1N0cmluZyh4LkJhY2tl
bmQpfQ0KPiB4Yy5iYWNrZW5kX2lkID0gQy51aW50MzJfdCh4LkJhY2tlbmRJZCkNCj4gK3hjLmZy
b250ZW5kID0gbmlsDQo+IGlmIHguRnJvbnRlbmQgIT0gIiIgew0KPiB4Yy5mcm9udGVuZCA9IEMu
Q1N0cmluZyh4LkZyb250ZW5kKX0NCj4geGMuZnJvbnRlbmRfaWQgPSBDLnVpbnQzMl90KHguRnJv
bnRlbmRJZCkNCj4gQEAgLTM2NjQsOSArMzgyMCwxMSBAQCBpZiBlcnIgIT0gbmlsew0KPiBDLmxp
YnhsX3ZrYmluZm9fZGlzcG9zZSh4Yyl9DQo+IH0oKQ0KPiANCj4gK3hjLmJhY2tlbmQgPSBuaWwN
Cj4gaWYgeC5CYWNrZW5kICE9ICIiIHsNCj4geGMuYmFja2VuZCA9IEMuQ1N0cmluZyh4LkJhY2tl
bmQpfQ0KPiB4Yy5iYWNrZW5kX2lkID0gQy51aW50MzJfdCh4LkJhY2tlbmRJZCkNCj4gK3hjLmZy
b250ZW5kID0gbmlsDQo+IGlmIHguRnJvbnRlbmQgIT0gIiIgew0KPiB4Yy5mcm9udGVuZCA9IEMu
Q1N0cmluZyh4LkZyb250ZW5kKX0NCj4geGMuZnJvbnRlbmRfaWQgPSBDLnVpbnQzMl90KHguRnJv
bnRlbmRJZCkNCj4gQEAgLTM5MDIsNiArNDA2MCw3IEBAIHJldHVybiBmbXQuRXJyb3JmKCJjb252
ZXJ0aW5nIGZpZWxkIENvbXByZXNzaW9uOiAldiIsIGVycikNCj4gaWYgZXJyIDo9IHguTmV0YnVm
LnRvQygmeGMubmV0YnVmKTsgZXJyICE9IG5pbCB7DQo+IHJldHVybiBmbXQuRXJyb3JmKCJjb252
ZXJ0aW5nIGZpZWxkIE5ldGJ1ZjogJXYiLCBlcnIpDQo+IH0NCj4gK3hjLm5ldGJ1ZnNjcmlwdCA9
IG5pbA0KPiBpZiB4Lk5ldGJ1ZnNjcmlwdCAhPSAiIiB7DQo+IHhjLm5ldGJ1ZnNjcmlwdCA9IEMu
Q1N0cmluZyh4Lk5ldGJ1ZnNjcmlwdCl9DQo+IGlmIGVyciA6PSB4LkRpc2tidWYudG9DKCZ4Yy5k
aXNrYnVmKTsgZXJyICE9IG5pbCB7DQo+IEBAIC00MDM1LDYgKzQxOTQsNyBAQCBpZiAhb2sgew0K
PiByZXR1cm4gZXJyb3JzLk5ldygid3JvbmcgdHlwZSBmb3IgdW5pb24ga2V5IHR5cGUiKQ0KPiB9
DQo+IHZhciBkaXNrX2VqZWN0IEMubGlieGxfZXZlbnRfdHlwZV91bmlvbl9kaXNrX2VqZWN0DQo+
ICtkaXNrX2VqZWN0LnZkZXYgPSBuaWwNCj4gaWYgdG1wLlZkZXYgIT0gIiIgew0KPiBkaXNrX2Vq
ZWN0LnZkZXYgPSBDLkNTdHJpbmcodG1wLlZkZXYpfQ0KPiBpZiBlcnIgOj0gdG1wLkRpc2sudG9D
KCZkaXNrX2VqZWN0LmRpc2spOyBlcnIgIT0gbmlsIHsNCj4gLS0gDQo+IDIuMTcuMQ0KPiANCg0K


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:02:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144473.265909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCGC-00078o-DZ; Fri, 18 Jun 2021 11:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144473.265909; Fri, 18 Jun 2021 11:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCGC-00078h-AQ; Fri, 18 Jun 2021 11:02:04 +0000
Received: by outflank-mailman (input) for mailman id 144473;
 Fri, 18 Jun 2021 11:02:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luCGB-00078W-Il; Fri, 18 Jun 2021 11:02:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luCGB-0006Pz-Bv; Fri, 18 Jun 2021 11:02:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luCGB-0006Gw-1L; Fri, 18 Jun 2021 11:02:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luCGB-0005ti-0U; Fri, 18 Jun 2021 11:02:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+TYt/m6XKq9rn4og8yDqZsKxr8wJUaU6g/tujZXkJBU=; b=ljaTBDBLqgQbvRyX4yi/OFfUke
	hyj+RIveU4MtMabDHwVv3cdmIhKLUbNx6hN+sHXLTvJESwWb72LE19xlZ3z3Mza1JfenRq7asmuiW
	RpIikpb6E8vahBO10bUuTrh1PEdq1TZjyraYGutEI0knB6sjt6KAlKM5ErHGqUkZGr8A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 162881: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=45710c02563602343261b690787bb1c76a676bef
X-Osstest-Versions-That:
    xen=0ff7f9c5aa02cd2469a8fc03f1ed262f18933721
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 11:02:03 +0000

flight 162881 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162881/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162547
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162547
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162547
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162547
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162547
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162547
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162547
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162547
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162547
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162547
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162547
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162547
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  45710c02563602343261b690787bb1c76a676bef
baseline version:
 xen                  0ff7f9c5aa02cd2469a8fc03f1ed262f18933721

Last test of basis   162547  2021-06-08 18:11:20 Z    9 days
Testing same since   162881  2021-06-17 19:07:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0ff7f9c5aa..45710c0256  45710c02563602343261b690787bb1c76a676bef -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144481.265923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCXb-0000Xw-2i; Fri, 18 Jun 2021 11:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144481.265923; Fri, 18 Jun 2021 11:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCXa-0000Xp-Un; Fri, 18 Jun 2021 11:20:02 +0000
Received: by outflank-mailman (input) for mailman id 144481;
 Fri, 18 Jun 2021 11:20:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luCXZ-0000LJ-5b
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:20:01 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d69e02a-c7a3-46e8-8c3e-b5f632e57377;
 Fri, 18 Jun 2021 11:19:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d69e02a-c7a3-46e8-8c3e-b5f632e57377
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624015199;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=hvFv158S8RiXxmD9m4q8jv6XDa46ldr+WqVA4byRFXQ=;
  b=GJ0Wf/r1O1y4xUe4OjvpjIa8tpj78KpmOu9NHd6E0xlPp5fZ3o/cSHIE
   BaKsrzjGP1AT7eKaV/fR0bhAIRSEyNTXQ0ZqkyCx2DXdNCKJmFaLUyaZh
   yF/FlLFQSPMNDFAlV4fm67iZhG80/9qJsS9NdCrcKSYyZCGNDrGznUX2h
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: oyNCA5/t2MZ3NC8Mq6uJ6aBzAETaWmhjWTPT537remw0cvWzitjmpkEv5zs3GK+c/gyv9idzc8
 rX2v2UEjeqO5RLIO8P5pkWD7jWCvfdXmIVBv29nNxvuj5EvWjM0j4hSLCFWiSJzNvNxMDJlECJ
 ZQD6E9JxkXb+9IMGFcQUZgxRnF/AJPPM9WzcSA24SqT62JlvPbdEmXIAm/hFik4QT586GpTfuX
 NF632F/oXfseYZQakklH1K1fHBJAdZDhhV8MBTkS2VMIZ2ZTd8K27jNHLYdUfqpDU7qsbKNSC8
 djE=
X-SBRS: 5.1
X-MesageID: 46817966
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:VQ7jyKpJYfPMCThjzaCZBjsaV5rReYIsimQD101hICG9Evb0qy
 lhppQmPH7P+VIssRQb8+xoV5PufZqxz/BICOoqTNKftWvdyQiVxehZhOOP/9SJIUbDH4VmpM
 VdmsZFaeEZDTJB/LvHCAvTKadd/DFQmprY+ts3zB1WPH9Xg7kL1XYfNu4CeHcGPzWvA/ACZf
 yhz/sCnRWMU1INYP+2A3EUNtKz3eEixPrdEGc77wdM0nj3sQ+V
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46817966"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SQInDp0VlqLrubc+dyGr2O8W4L2idhnAsvr3rZ6LPd6hdPMr8wGowTrU1BA6Iunqt59gs+mOSjSsGb4i+w1oWEC4rqfCrbDE0bVuinuw0VCU2CXa4vsQGjrWsL51p3v9hIrDogVtRYV4IV+mT8ZaipjpK7faRy373Dv9teBdTYFH87Ut+XIjbiD+iTva23ZBz9oX8f0E8frU14C+2njtXVgkWDkh5Wa2MgCcoweZ716Lg/aCPDGmcAagqEdUbjpS+lH+6VqLOOnTWJdVs3AEzjtaEP87xD/80mxTUHETw6noMSZXMz3PH89hOEeb7ImUZKgnnkX9+UIgk1CYY3RXMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VG88R5PVfkjFw5U9c/NPxl2z90BrcCABs6qe9+IKPHI=;
 b=R2mlTAIyiEwwRA7V07F1KXl7wlMgFp6s63a1m+IA5ACXFC3VL09S5vMapu5JekMxMiREZZWvqrDECqsA/LqqwfvpeWxkdzNkv6p3jfLY/f1fFzKsNX2lFIhuP1ZbQhlIm+upVi7ibCJgBqem+RNnh7unpMJOTlV18vHFPsEorRy8KeIQDRmu1gm5DyVzXrV9hLiAhmMM+s/w1fLeAyKDZ2uvPmI24s+R2bTqILz5B08Y+HJejmeqMOgZR/s0TYk7mNOqdNaciwJLuyBSlu+wDp619+JfQtX0ml4EvOJhurJmBdOjAqbdtWf1+0OazyfGQyq/kRW32jeS2MsDmLatlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VG88R5PVfkjFw5U9c/NPxl2z90BrcCABs6qe9+IKPHI=;
 b=ShRcB0M2HdrCLX+GaafdLjMPY1B0KdGrcsKK5ODjJ4R6mrKJKR6Fyz2QbKDElDjmGov71z9eBKo5fRnLYOYWbeQ4GKt4ojpH5t2Sq6PsBJsyArvq2lM1+b6BlnLq3oUUZ8PlW03T288TqvJx4NJV+3lTHOf0akaWtOxVJWqYx20=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.prg" <xen-devel@lists.xenproject.prg>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 04/12] golang/xenlight: export keyed union
 interface types
Thread-Topic: [RESEND PATCH 04/12] golang/xenlight: export keyed union
 interface types
Thread-Index: AQHXUNytQQDjetM7BUSIz+lNSh+ZXasZxf6A
Date: Fri, 18 Jun 2021 11:19:55 +0000
Message-ID: <A972F792-5327-438B-BA92-DB5A334AAA3E@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <29a3fbc93262cb9b31f02d6c94c018b200dfa43e.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <29a3fbc93262cb9b31f02d6c94c018b200dfa43e.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b19d3218-ff25-4067-0725-08d9324b003e
x-ms-traffictypediagnostic: PH0PR03MB6366:
x-microsoft-antispam-prvs: <PH0PR03MB63668A501A1E7CFBCF0D272F990D9@PH0PR03MB6366.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: aZoco1ovgO54chQ6NJwwTd4up0IxbWZo2vbOIa7isxJoE/gxhvLxmG/06pvHqoqauPQh9/M+OX1ARfX1tQb07qQYJeZubZ5dEUXiOHECCOtzZQLQErKDsVwk7vX2SID2MCkcWMkRv/zRvuDE7MVuPyL4K3BA0gW1e9DEpJ8mt4TFrFy+x8qfpFSTIhiRLL6rycpp7orL+9Sa3jvGQmG+K9fZfNWwE7o66IIxCc8NAWn0e+bcJ0HUT51qvdjQemTb/3EYXfCrdZaef4BqoFnQYCOP/lO8s3K7uk3hXCbqmUtJ6oJe5mnGyIqefUvcBRS1zBOpmHTs2gDWwpFsSl9ypwdN7XsULSms8LCHhh7Le/pAxJoDJbMnSztuWga0U3xV9Hn8G7JkM9NlnBvKpJwT99DHnUi7gr0lcAWExh4ldbk4lp7mHa3FC7wuu1a5EQsBvdKL6OicbgigAu3IP+OcUmQHpiL9lqtzMcJXlPCn5XzXofdGwLXq2xfgp9Y3VefOSMJRPLk22w5uTZZvl6dLMSTKXuUdCLOfFgR6QskdThPWGCzO4GTJVSPS4A6sCaHifgh+i0JR7dp3IDwZz2aw9fci5yBWuZuOjwzCgmJquZkJiidhK1F9wGRXvTygiOI0WdPYBR45ogiuSZWnKTZZiVLKq6GEchd2gGYF7ih+y/P5F/Wcij5xBbJN6VhQV9qKxGjg5QbGHyYTwqAUAQsJA2WvBzrdllQg0GUZG5ZTSfHZFtQNoOAogxTjSTRvED2t9vUso5yD1Cb1suNr7kvDSQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(39860400002)(366004)(396003)(376002)(8936002)(966005)(6916009)(4326008)(6486002)(26005)(33656002)(122000001)(66446008)(86362001)(66476007)(2906002)(71200400001)(6512007)(5660300002)(6506007)(186003)(54906003)(478600001)(91956017)(8676002)(66946007)(55236004)(53546011)(36756003)(66556008)(38100700002)(64756008)(76116006)(316002)(2616005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?us-ascii?Q?Kip1NJkSaxwpMEOsTyQOYfbMQXkm8DeFhn3W4Bp3HxxXQDkr+nwIDhLdp5Qy?=
 =?us-ascii?Q?8oT6bW/0TJ0Pbd/O+Sl9imaqCFo6LtzXLcvDCq9/LdHnvGsnonbgJlMGBv9h?=
 =?us-ascii?Q?qNC8G6xKSkfpieGX/wpyZ/6YyUUx9ULO8hQD/b6indWBiIw6qnDfTWRQa6jf?=
 =?us-ascii?Q?H0cdbz2YlcAL8KoOcC8iQnOUU8pSPfFPFSK0lXqQYt6VCCPsNhfAksoGbLSl?=
 =?us-ascii?Q?DvKuNOZdXKeQXLnoh2cK81yyDBFtGc6xWfRg8U5TIgo14LkxwNaZ1hDdRYRF?=
 =?us-ascii?Q?aH/0pWJCvvf7pwZHkMje0/fDZ6Of8zgrhDX9zxJ8PaG6tCs08ASYC4CPmhRp?=
 =?us-ascii?Q?0Ld7k5RV/VDMNeF/tNO4ebek0szZI/dFlFiQr9z3EBAUlCqHUq4TEHDBcCeU?=
 =?us-ascii?Q?zW+8ni7a6kyvbrdcktGq1UZBECgTMKiz4KkqaaGrxyDTBX/eVcBIVGNVR88A?=
 =?us-ascii?Q?XXd9bY7y0+VyP66KZki0K8/94hSHqXZ7050hIE5efVy196RHXpKxNL4spzMp?=
 =?us-ascii?Q?rLk387Laj48wBkgQ5OBCSISrYaMcpI6Pe0aIbfmPZHeWFhC6NrufJEB2SeSU?=
 =?us-ascii?Q?Djz2n3Dn9RvMQ2qjS0gSeCYQpPBXQO5oxMjFLioAlZjZ12ku30xn9PgOodTb?=
 =?us-ascii?Q?sJd0Yp43lW1yUcdZnaLAfMiL5aPZNPdDWqHJkhuKPh2VROtBxIFh5ioTYHiA?=
 =?us-ascii?Q?gtUUTVztR5rg4UEZD9RRWsMUgpyax7RhdgYDBZJLANP/Tzpmoj4vG5u6DKqT?=
 =?us-ascii?Q?OxsRvWyksTVLO9TxsnX9R97LFqOuOwr6NWRuV+sjIJTo/EDI4jQAy9hX4Rli?=
 =?us-ascii?Q?pw4EwXAx8MTnxemMRBLtx57e6AJYYUmB9c/07nUdQE6Apy4PkC4KIWa/aSr9?=
 =?us-ascii?Q?Oz4HVRZJdFREKmvu6QOzg4DJUvvP4Ta6V2FAXTf4SLZtsPqTdhN0uIk8+Co0?=
 =?us-ascii?Q?r6E9pyc7JNKSe5ZwWVY82QCZzj9qEwGcSQb1+Mhiu9f1lahKXo+xAAyjtf5c?=
 =?us-ascii?Q?+pmD2sAXg5M5eFJAmK1FaP/euNynk1EH0+AZZnqqTd7NOxKuYpOGllarPWZQ?=
 =?us-ascii?Q?sNjwB8oLtiyod+4jCMXGzMSy9alPGM/dBTh353gfYUUjzPBNLgmLR8yW42ZB?=
 =?us-ascii?Q?FUt7JCnzh+hY0rweokjGRWX+bzXW9k9qIyS46fsQUA8d7bXTs8kjSurWElvA?=
 =?us-ascii?Q?7MLeefJgQmbzpd5du+H5atsUQlkUGxVqCW5PWu9Om7ULki71Mf4+xB79qSUY?=
 =?us-ascii?Q?sU4w18wQpNPcrMEMziPWKUWsanT+kHwQRt2Kb+CfQvCcekz7BvyEv5lJqSZ+?=
 =?us-ascii?Q?U1QWLYlzWkK+xTlAKSgtcQxLfrBJZopmPZl+XDt104UmpA=3D=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <31ADC93EE25C8243B33C9E47A271F057@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b19d3218-ff25-4067-0725-08d9324b003e
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 11:19:55.6382
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: PhzBXRx33OfxxL8fQFRaznVGFUKeYvvAs+DSaBieWSjt37UwNJasrFabVsx8UVq0ass8ca2fLG+bSYEuJyj+WqEBwH4LaqKE8qBrwVxYfzg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6366
X-OriginatorOrg: citrix.com



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
> field must be exported so that package users can get/set the fields
> within. This means that users are aware of the existence of the
> interface type used in those fields (see [1]), so it is awkward that the
> interface itself is not exported. However, the single method within the
> interface must remain unexported so that users cannot mistakenly
> "implement" those interfaces.
>
> Since there seems to be no reason to do otherwise, export the keyed
> union interface types.
>
> [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

I wonder if at some point we should add documentation to the definitions of
specific union types, saying explicitly which union interface each type
implements, and maybe what the Type field should be set to.  e.g.:

DeviceUsbdevTypeUnionHostdev implements the DeviceUsbdevTypeUnion interface.
If DeviceUsbdev.TypeUnion is set to this type, DeviceUsbdev.Type should be
set to UsbdevTypeHostdev.

Might be overkill tho.



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:27:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:27:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144488.265934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCeL-0001pW-Ul; Fri, 18 Jun 2021 11:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144488.265934; Fri, 18 Jun 2021 11:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCeL-0001pP-Rk; Fri, 18 Jun 2021 11:27:01 +0000
Received: by outflank-mailman (input) for mailman id 144488;
 Fri, 18 Jun 2021 11:27:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luCeL-0001pH-Bj
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:27:01 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49e9fd31-7be6-40e3-8fa1-bf030fb56409;
 Fri, 18 Jun 2021 11:27:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49e9fd31-7be6-40e3-8fa1-bf030fb56409
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624015620;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=yaY5N2tOVkTtdixDR7oGscyi7yL+PzB7ecTsnjFt21I=;
  b=Oi4dwMFjbY5mPblghPcqF7ovk1lsjMHNLOiGYeLUg7gD3RVo5tSI0T2l
   9xKvjdC69yTY1pPD2rum0Dt0SBryDkWzbqPdQZkaiPeblAHibguvK9e+U
   fwY81LOC1FBDHHGj8Ft+31pKn4Or334+WH+rKuRl/Yv1pDmZqNovMWytO
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: JUa+CK9JZNyUc/NSgVFg7sTc5KfQscgOYni/+vBkJEaPv9dtfwWvFesc07P+h1xjT3XZUoosoV
 X48277J/7P6mGONd7853k56/Y4YwnI0P8oaKDxovWhUgVgq94DZnDZc5rnInAZljNyuAN+b9aO
 pekSt9OOj5n6Zu0BnahSClRLgjYXwaOnwODAFpdhZlGN8ZXUHNcHYYCsMxEwKzmdUIJAg0ERaN
 zg30yb9M92tbAUxxdhv8jO2eT49eUg0UiwmUYx9iyj7PKy4od/Q3qpNXntnS490wSz2abggtVl
 MEs=
X-SBRS: 5.1
X-MesageID: 46436750
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:WgJQwK/aKpp1xFYluQ9uk+DfI+orL9Y04lQ7vn1ZYhZeG/bo8/
 xG/c526faQsl0ssR4b9uxoVJPvfZq/z/5ICPgqXItKNTOO0AHEEGgI1/qA/9SPIVyaysdtkY
 tmbqhiGJnRIDFB/KHHCdCDYrQd/OU=
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46436750"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ADucqfusRfRNAsB79eXdcKfO/9CNU8UKnfqYZ9EHD88hB/snKj/yl6a6+vdp9LDV6TDrSHoa2jrguYtTEJVbRlyliFEH+rmFQhzy3AGRfQmm7TQ1HaPOKn09278mAL/n4cV6jvg3F7T1EUKiQ0wIT/a2hD8BNEEG2qFRkF39MqDpTrZAlPDgHy/StXausTsXrh1NFKWMAcWJI/1CCpFJhBFUn5K344rFuvkDPnHF3yyWETWWnJsZCdL8/OI/iz8U/Od4d2qLRxGsog+YLVeY8QLijTRdAxwmkPk6/CpBc6zuHLpE0RCnrAb2q7qVZbcyMPCDiBXkArYOadIEqECv+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yaY5N2tOVkTtdixDR7oGscyi7yL+PzB7ecTsnjFt21I=;
 b=BXI/zhyqmkDV0JCHAk5MdmeXlEbBjv+Z/u7srrSYzgeDfdWrDEebpa3uC/VdwfXxb+PkOSVYhO8ZbCkBLriMwzu1lfSH7m6NTaMEbIi9sC3lizFgTsVErUVz77WdJfXlXPlZhMGbwrSOiYDRDH9t2qEcCadEIiNpDESW4kI/lHbf4ez6r6fWQk7Udpl/xfRvoOp6osIZlshMLREf7RQhBqOcg4Uy+9nlhS1HGjg0Fs/d92vNnzJFK13tEW7Bjts6ByfLHuXC6MFrBewVpHsR4auYnnNR/Qo8MyZKNWcN793xCc1uXpHlbm9H2xMdH5brBO0wmS19E3WVUmSmH1Dm1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yaY5N2tOVkTtdixDR7oGscyi7yL+PzB7ecTsnjFt21I=;
 b=rxLgMBvec/J2In2xw+qWtlagaXSTmyudl8/rNUklOY/K5TQMmZ6aeZ1M0Y3oFaGJCMdw5CXPfT02Ssmw+s39PO7f3F497TBYTEtVyGczHISmF7+TBMgJrCtDhU6IDYXP9pL7yVHxVvS6qa2nnJE23er/jqugQomvWBr/mz7meJ0=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 05/12] golang/xenlight: use struct pointers in
 keyed union fields
Thread-Topic: [RESEND PATCH 05/12] golang/xenlight: use struct pointers in
 keyed union fields
Thread-Index: AQHXUNzFppin6ei2JkOk4VRu7i00GqsZx/WA
Date: Fri, 18 Jun 2021 11:26:55 +0000
Message-ID: <0AAE0CB0-DE31-430D-A58F-B25B50CF8A2E@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <ebeb085b9b4b5d3dddd66607b409590f5e7cdfc6.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <ebeb085b9b4b5d3dddd66607b409590f5e7cdfc6.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 34aa0d64-5509-4470-c88e-08d9324bfaaa
x-ms-traffictypediagnostic: PH0PR03MB6366:
x-microsoft-antispam-prvs: <PH0PR03MB6366AD787ACAAB6E5F2B37D6990D9@PH0PR03MB6366.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: ahwG7lTcckshAtm+JXmJnbU6Edy2PsZc5HTa33qKgkM6LLK0mc7gT5Cp39DRVC03tXEKyOmnbLUJSu5qsEOVCbJshJdsUFCXnqXG60p7HDsgbf++yOf9xiqxAFxZmOqARS+uXKWSg0MfxmLGWv7BWxl0MNsNiXgieh0eMA1bw8CHFC4GnQ8g7eHla/286rKOXeKAfnQR3ASJFV059prDYU0Lm9W+XWawG3/cli78iP5NjZKHLRf/aVIqyn7qgnR3bMTTeR88HRcIuuRaETsRxnLddG2lEI/aFIjLj7nuQQjti7GfvuoulvBMy29G9fV7+y7SS/8/ePeYELyJzFeAxyOHHQPNZ6c0Uusv/A/U+4ApsBRLUeI7Yjm1A6gKdrT7rXu9pT2X3z1BcIIeUvQ89Ii0y7Rv697fsdjDnr/qGOj8YNGZqFovvapLxqdTQP8Awl/d9pP1WxZ3k0QiEdUsgTiiEjv5klYT5B2bg9NpH3m7z48uN7AMh6giHtc8Gs/CW49DIsJWatgdntA6hVSW4u0bKjldZ7cx6TREbnf2cyMVDmuSzCN4kiRdZh4FX4CQjK4zf58a4HL4fvavaqXcHdoC+7djKERYLvfVx6ssydo/6Mr/WH5k/X0OJRMStJ+EiyWB6d32BtkTkNc4TYT7u0x/11QN6T6m5UCoN4B1eUE=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(39860400002)(366004)(396003)(376002)(8936002)(6916009)(4326008)(6486002)(26005)(33656002)(122000001)(66446008)(86362001)(66476007)(2906002)(71200400001)(6512007)(5660300002)(6506007)(186003)(54906003)(478600001)(91956017)(8676002)(66946007)(55236004)(83380400001)(53546011)(36756003)(66556008)(38100700002)(64756008)(76116006)(316002)(2616005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?us-ascii?Q?CxkwAp9+4FBGoAEbGcCCJ/vJ+1Ylcqc7BIJVsmWXwdxBBcyytsBD2/gMvTnU?=
 =?us-ascii?Q?QeQd8AfQDqmcpF4oMsVXpepO1p0owA3RO48uazlMW3pk0aNaxhyIjndvxArC?=
 =?us-ascii?Q?sdClwOSlAQh7DsNlzkrrzlmqqOqmsbc9aBXxf0DIxkOYlHiqaEEpc5D9i4bH?=
 =?us-ascii?Q?GXWoRoxecS/oQQOH+NiKfI+IjIFbYLbL9n6xJfnboXMjQtUBeEgou6PO6gof?=
 =?us-ascii?Q?8iwJXTTeNu2UjHv+89MTqBKjARTNgdIOTbMcAQUCpPmZbmcHXXNYC0e+Bvby?=
 =?us-ascii?Q?7zG5I1XDm4p+CxXAVm+LqqvnHek/URRjeR80aL0/yUyZI3QvgMFJpyLWdq5Y?=
 =?us-ascii?Q?MMUBuEfQDhNhDyZWnL/6BMR5Vw5iy9obSOAVAVC60xr7Lm8oiqOLgwZClSk6?=
 =?us-ascii?Q?cjjAcJ/KIptzjeCeqGwEmK5XqoEsuKHidAaFcCJFlBBFOuGgyoccb9DXWI4x?=
 =?us-ascii?Q?TAAW6r85M/plC4xh6dLwPZ1O+3pMisoe0OUvRiNvobgQ/BAzHW3+NOeo0IQ3?=
 =?us-ascii?Q?z/qLdvjxqv6+iXyNqG0pYwjdl7Tzafh8PGr83d3cgk6X+b7Z9HNhigp85vYR?=
 =?us-ascii?Q?3wCQl5PS6pVJli5AzHc+RnB3O+v9NTkuoa8zvPjLi4nzdtJjttxhUs+Klknm?=
 =?us-ascii?Q?x8GSvajLaYI50aabb1nfeJOElNYDmyxQVk6VeRun/3k6oxCx6ljZRq6dfC/v?=
 =?us-ascii?Q?WzxLsJzzxhcOw6up/gKfIsLeTjeAwh+OzHPu1GfMLm0EJJBs5DD9l2Lx0GUH?=
 =?us-ascii?Q?o8UiiXvp6Q3Yk6eNJTHNpo8KNvU31y9tPLIITthWVrqZscjUWjkZWs0idRi8?=
 =?us-ascii?Q?2k6peVr50P3WRcAqTX5jTs2rjK4CCrgQkfUQnmIS6W310gmhp+2iXc0Bh2Tw?=
 =?us-ascii?Q?H0iKHEkEWxas10gKFdAzd3zPwUg0kMOwVeCT6OKRa588UN2Fi1dzC12p8W2R?=
 =?us-ascii?Q?Sek8Mx0Hycgphz5hZ+Dll+fvqiHwZY7qrbGnTFcFA8P6roYd8lCfzSpNvQTR?=
 =?us-ascii?Q?EY4/nx/gsMff2XwvJLrU5tLfRCZySFszo8IE6Zm2sDfUGmfOg+v7LKky6vbl?=
 =?us-ascii?Q?XDorZnyAHDZTVwUqRBxMomms76ucKpVmlqfgYbEEnC2r2uWppp0IVZeov9fY?=
 =?us-ascii?Q?Xd+NP9YJSwr+c2t2B6aP6g9N0+DEWWGh1UpFR7f0L2GLs3YhTBgLIjueVuXy?=
 =?us-ascii?Q?Bm5Ad9206Z9Bw8KVfQqDUhwlzz4yGKGGzCDFiRHPLKDkSXq8bjxNcp1XdmMI?=
 =?us-ascii?Q?4wvafWcMA/iiKeMWFSVdiMxiDXIlxk5pCgusHR1scHYxKhle/z4Eu9Mnx0fS?=
 =?us-ascii?Q?5ykEn/elwJqWhVhnwtHKJFR0l2FMEXcMDqZR4DAXqY17dw=3D=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9C80363BBB700445A428C54DCC04C9B0@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34aa0d64-5509-4470-c88e-08d9324bfaaa
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 11:26:55.7485
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: zbrjSrBap7eA8n9oIUtoSEqZZs5GbHtt9zrq+O8m/U1ovtQGS91fCK98vDafguivGfrPphFtiSk/kqSlMc/XJR/ggbBD4P54cQrNlaRJSbE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6366
X-OriginatorOrg: citrix.com



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Currently, when marshaling Go types with keyed union fields, we assign the
> value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
> interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
> As-is, this means that if a populated DomainBuildInfo is marshaled to
> e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
>
> When the encoding/json package is unmarshaling data into a Go type and
> encounters a JSON object, it can either unmarshal the data into an
> empty interface, a map, or a struct. It cannot, however, unmarshal data
> into an interface with at least one method defined on it (e.g.
> DomainBuildInfoTypeUnion). Before this check is done, however, the
> decoder will check if the Go type is a pointer, and dereference it if
> so. It will then use the type of this value as the "target" type.
>
> This means that if the TypeUnion field is populated with a
> DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
> fail. If the TypeUnion field is populated with a
> *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
> struct instead, allowing decoding to continue normally.
>
> Since there does not appear to be a strict need for NOT using pointers
> in these fields, update code generation to set keyed union fields to
> pointers of their implementing structs.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
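
The decoder behaviour described in the quoted commit message can be demonstrated with a small stand-alone sketch. The type names here (TypeUnion, Hvm, Info) are illustrative stand-ins for the generated xenlight types, not the real bindings:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TypeUnion plays the role of DomainBuildInfoTypeUnion: an interface
// with at least one method, so it is "non-empty" to encoding/json.
type TypeUnion interface{ isTypeUnion() }

// Hvm plays the role of DomainBuildInfoTypeUnionHvm.
type Hvm struct{ Acpi bool }

func (Hvm) isTypeUnion() {}

// Info plays the role of DomainBuildInfo with a keyed union field.
type Info struct {
	TypeUnion TypeUnion
}

// roundTrip marshals src to JSON and decodes the result into dst, whose
// TypeUnion field must already hold a value of the right concrete type.
func roundTrip(src, dst *Info) error {
	b, err := json.Marshal(src)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, dst)
}

func main() {
	// Struct value in the interface field: the decoder sees a non-empty
	// interface and refuses to decode into it.
	err := roundTrip(&Info{TypeUnion: Hvm{Acpi: true}}, &Info{TypeUnion: Hvm{}})
	fmt.Println(err != nil) // true: cannot unmarshal into non-empty interface

	// Pointer in the interface field: the decoder dereferences it, sees
	// a plain struct, and decodes normally.
	dst := &Info{TypeUnion: &Hvm{}}
	err = roundTrip(&Info{TypeUnion: &Hvm{Acpi: true}}, dst)
	fmt.Println(err == nil, dst.TypeUnion.(*Hvm).Acpi) // true true
}
```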



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:35:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144494.265944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCmE-0003HI-Pv; Fri, 18 Jun 2021 11:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144494.265944; Fri, 18 Jun 2021 11:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCmE-0003HB-Ma; Fri, 18 Jun 2021 11:35:10 +0000
Received: by outflank-mailman (input) for mailman id 144494;
 Fri, 18 Jun 2021 11:35:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luCmD-0003H5-Ux
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:35:10 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b1a9712-1bdd-4366-b8eb-0f79dd3efae7;
 Fri, 18 Jun 2021 11:35:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b1a9712-1bdd-4366-b8eb-0f79dd3efae7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624016108;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CGp7pFConfiCQs0dCDPpaK7OfgBpoxK58Ip5y6Qi3I8=;
  b=LNbT/rwMWGBNAS6/F4YFkQLFZ9oa5o2+oPAJfh2vlHr3+Sj+aYiqMlbd
   PPYTOIrmtVI4IisNler/t2/qttt0RIU1kGS4THZKqiOdWz2WkbY1YzteQ
   rvslMTv+/+Ac5IYCJGP/ArsUtIvPJtsWoifJhGDcFWwYzijoAWW7ZeWdd
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 7kbziORytCmX9ZIiPgWVDm1MuO8W1aTKqfkr2UmycGUI5dpraZruIWhLHayxpaau2AxmA5TxP4
 4Y2TGHyh4Ij6pv9uQkzoyAgQCmYbD3Tui8yWJukoVOpKA23akDpR1V0RD8mqF2p9Di6VvulQt5
 mfxG+X0oZwArGrKWDCZew4G+TR8leGabSrBh6VXX/GocIwh4AlvpBgcUjUL8/AGL5CF1VDq5rV
 LMqPwxUVHSpaQfdVgbg6ERaVjHLoBCQz4fwxNwRhEe1Nh6kPKIxnqkrhqTKm+HV+4gcSAWvfgX
 KQ8=
X-SBRS: 5.1
X-MesageID: 46437162
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:hVQU7KNtUwZPSsBcT1j155DYdb4zR+YMi2TDiHofdfUFSKClfp
 6V8cjztSWUtN4QMEtQ/OxoS5PwPk80kqQFnbX5XI3SITUO3VHHEGgM1/qb/9SNIVyYygcZ79
 YbT0EcMqyCMbEZt7eC3ODQKb9Jq7PmgcPY8Ns2jU0dKT2CA5sQnzuRYTzrdHGeKjM2Z6bRWK
 Dsnfau8FGbCAUqh4mAdzc4t6+pnayDqLvWJTo9QzI34giHij2lrJb8Dhijxx8bFxdC260r/2
 TpmxHwovzLiYD69jbsk0voq7hGktrozdVOQOSKl8guMz3pziKlfp5oVbGutC085Muv9FEput
 /RpApIBbUz11rhOkWO5Tf90Qjp1zgjr1X4z0WDvHflqcvlABonFston+tiA17kwntlmOs5/L
 NA3mqfuZYSJwjHhj7B69/BUAwvvlaooEAljfUYgxVkIMkjgYdq3MsiFX5uYdE99HqQ0vF/LA
 AuNrCe2B9uSyLfU5iD1VMfmOBFNx8Ib2W7qktrgL3Z79EZpgEj86O0rPZv1kvoz6hNPKWs0d
 60eJiApIs+OvP+UpgNctvpYfHHRlAlEii8f157HzzcZeo60iX22uDKCfMOlbuXRKA=
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46437162"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G6TxloC/ZUdNwxGwQwBq/eyjJ5R76GktaZxZ94lQWn1O6biQl6Mw+qaiIVbKO4BAF8v3+ZMEtTe2lbx6p34EPE8zj0cezvSOlbPY4hkHoz4P4HaeEjojhpazWnzfQfGNkAq2NFxzBCF7mO+3iQKnbXJ21d21l0fyoKCmwRdg6ZnX/FrUHrHoWwYIPNTO6ICIAh77D4xyVJnhsZa4QBP06pEHH5UPJtH4xmr5xraz/F2aUfkmMUdglJTyBjy9nzGhPkkhmyfTjV7HMc+7epv3tvlFWILi0EQGJIOoFFlEXeQM82GB5FWf1kQT7h6WdFBpcWSQ1eaQnf8bMg0Ig5dHuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AH60yatl15MpGt996L4jLcBhtfAOcyMFE8gReOfSrQY=;
 b=buXyCD4QbhV2BqDb5YWdHDB8YapfdL7uEbTEPIdo054cwbvs6UtC8fZv+Gv+YE9Bg/y2JYmSa8M2Yt/SCSGBTesWQ7c0Kp+pOuFcR3SvopY1AgrlEWFIUNHnRM/gf0c+5m/TQ4lVm0JqTMv7eTjDfMCX8oM4V/b92Q+5Y2OiyzJrGOYa877eMTEBfGtg9qSZkJ52uNtgZMiOUhWTDRYZY/FR5tLRTDF2ST4iorqpr6W6IIOWoabW/RdtPxSHeXVE8c4f+ejJkdVipCyyZcjd0pXcZjKxbs8xQqxHCz0szJTtjKgDl1AfXIQGksAItX0Aj64bri9HNDwfe8kOqDOaUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AH60yatl15MpGt996L4jLcBhtfAOcyMFE8gReOfSrQY=;
 b=ln3wsUiOGZofd+U3HAF2DLTDmFaSEkGObmITdqiI5G+GYZN5mClgwjGIoaG3hpm20mjfc529kfEDVz8UY992SBF4fPAOz8/vSteGL3zJsW6rerIQz8yxHrKy/AtbaCZNuJuYF3+yq7GunHsIeke4aUALAJeLpH7tsebXpsCHvEU=
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, Petre
 Pircalabu <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-2-dpsmith@apertussolutions.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] xsm: refactor xsm_ops handling
Message-ID: <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
Date: Fri, 18 Jun 2021 12:34:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0090.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6dfff7f0-7335-42f0-8618-08d9324d1dbb
X-MS-TrafficTypeDiagnostic: BY5PR03MB4967:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB4967443E9C8052A77E5FB8CFBA0D9@BY5PR03MB4967.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: nyCYyE2kuPODr0LWpNdmQjc+iHebVaG5EBGuIyKYLqSVhFR8UYR6liC1NlYYwHSJxr8x71nNd7Hx2mcJ1GqXHYkocKGPT4HNB9DGDZp8QmM3LZsuEx3ygCE5ydRRKCJ5HaAifrkaWHrZO8qQPmHpgTom1v1KuteiBq2zvvapIqfZh9YboM4kS4m1wMNRy5BYAWg1bHyW8jy/ogksz4RPfxhTe8n4QTlJuhs6CnMtRLPg5U4vkngcNZA+vO+aPsMrGgm2YZBWWoBBSofmkSbGS3vkaJvdbcQO0P0BSADp/NMX+AvH3r1QqiQdNVA/gjd/QiEtGKjMffWJbui1LtpW+XI0T6mRqIQW3T4pAEP9jm3ojYXQHLWjLA0NRVWvoyCr8rOEMgABpX3d5nPIR/lOG6P6ic9MYXYaLpa3yLIoY4QnRgXnrm0nswuzc/u8qwsk2yfbzJhJx1uAiGXnlRgBAyFKYDpR8sIwOONRiqejjvqrTyNzkBjlXLKYkeca7OwteJTTbzb5NJQB299vA8vMDWR+CRXfMdnvflVDt4zUwjSIOt04cWsNYcl6xGHRsNqVnZ+6Koa9e1CM2y+jsdjrLDP909xjiWfE+kwyPKNByH3q2PGbxZQjhlSLLocZajo4cj3iE6js+lRpRlsaybCXeHqkAVHz9JQb04qH1RtL6yRS371ohAAPhsgAWt11yISx
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(376002)(136003)(39860400002)(346002)(83380400001)(8676002)(86362001)(2616005)(478600001)(956004)(36756003)(38100700002)(186003)(31696002)(53546011)(66476007)(16526019)(6666004)(66556008)(16576012)(6486002)(31686004)(66946007)(316002)(8936002)(5660300002)(2906002)(54906003)(4326008)(26005)(7416002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?c0JRb1NvQWszajN1TFJSa3d1OVYyRTUzTGgrd1pvb3BMeGQ5TEM5SG01TzJp?=
 =?utf-8?B?ZWJtOUZvV2NYeUdGdXM5NWFjQmsrd3JCT0pHOWk5dTFPdmdzOTQyWE44L0NN?=
 =?utf-8?B?SDArajd3RnBxaElqL3pEeGZtYmFiVERNcUYzV0h4eUxCUU1vc2sybUlxazRK?=
 =?utf-8?B?N29GNlhybGZLa3RSeEpBOHozalN5ZUxaQmJ0RUZVVDBhODRvQzhuMTFUeHE3?=
 =?utf-8?B?Qmk3TnE2ZHZ6VS9CQ0dQaGhHSE4wMDhTSWhNcXpPYmJtNXVkZlBya0dUTUVy?=
 =?utf-8?B?em5RSDQwWHlpNDBwT0xaUFZFS1dROGlCajhoa1JVVlpmUTNMY05GNjRwYktN?=
 =?utf-8?B?M0sxdnNydlNIa2R5a0h6VldwaGIzS09Fb0J2V1JnWUJuenZKakc0ZUcwczU1?=
 =?utf-8?B?WkpmckpnOWZMcmt0dWRvMFZsd2VGejhwZldvazU3UEtpeTlSMEtqaHZTS0c4?=
 =?utf-8?B?dXFaMUZEd1Jwd3RyR1diZkxyREpoOUdZK0pyN1JXUkg2Z2l2QndCbGNaVEJD?=
 =?utf-8?B?VGJnVmdqdVE1UnNXZkY2Q3RtTjhHcUtrSHBtc3pDOHltUG1ITGtqWDNUVFk0?=
 =?utf-8?B?VVhLOXBoREQ0UzBHbWQzSENiK1lhcitkM2x0M0k1OFFuVm84ZWJLaktDSlJu?=
 =?utf-8?B?N2lRY056M0NwemZueTZyUUhpaFlZZHNOc2kwVXNsZHVaUmxIaXNiZis4L2pV?=
 =?utf-8?B?YndTbkt2c1JNU0xoL2M4NklXNXp0NnFxalhieFBJdHU3dHIvT3hweEljd2xz?=
 =?utf-8?B?bk5aM2hhSUVGOWtDZlYyOW5PbGZ2M0ZlNzdReVduWjlMeWx0elBzWkJTNWxP?=
 =?utf-8?B?RmlaN2F0ZkFEenVKSXFUdVNjZTZGSFNJRzZsNXRiSGtiTzdhOS9CM0p6di9h?=
 =?utf-8?B?cmwzNkhFaHp0Q3ErMlpDWGVlVUdoQzhBOG5DNzdoVGphR3FUM1NEWmRoQlYz?=
 =?utf-8?B?TlVnbTM0QUtRY21IOWsvU28vU1d2cnpTbmh6ak5uUnljei9vMzNYQTRIb3ZR?=
 =?utf-8?B?ZjBqWGFlMFI5ckk0a3N0YzcybzdOOTdrc3hiMmh5K2puSmgvN09peTBLak02?=
 =?utf-8?B?dGlyZEpZRk10U3MyRUtQdStCZUVLWHVRR3JKV1hTZ3pVakE2cHF2ellFM1Ri?=
 =?utf-8?B?V09zZ1dyV1RoTSt0SUFTSm10Y3p0UlllRlJmc1Y3TkpOaHp3aFB0eGNiMTZq?=
 =?utf-8?B?VDNkVUVNamhiM2pXMjdVaElCOTlFRm9sZHlXL1IyTkxqT1A4WWJBZ1pFbk0y?=
 =?utf-8?B?SWdEM3dKVjlONEV1UTdaU3hVSitiQkdpWGdkeC9zclV4V1hhcUVwMldLVUd3?=
 =?utf-8?B?ejBHRmRLbkdVS09kK3U5eU1WTlUranU2K1NTbm5LK2xhdUw1akdDanhFd2Zn?=
 =?utf-8?B?djNRZWFweHBEVG8vRWdrc1MrWnF3SWJwbjUxL2J6cUVLT0x2REMzUXBMaXZa?=
 =?utf-8?B?TzJ3OWtVbC9JQ3hZRVRFS2Z6aGVxQXhhM3lxaGZ2TFU1bDlsR0MxU0lHYVll?=
 =?utf-8?B?bDZwVXhvcXpuQ0IvRDBzRmZRUWtlT1gyWVd1NjlMQ1hnRXJLZUNHbVk1dS9l?=
 =?utf-8?B?dFJqWlZON0txMUROTFQ4eEtxckdhclhjdDdmbytIb2JJWEtFZzhYZGhOYUlh?=
 =?utf-8?B?TnorWTJTd2JPR09sczBaeTBIT3lDMm9lOHhkc2gzN1dSMHBIazI4ZEN6UWVB?=
 =?utf-8?B?QTVYUCttcU5idVliL3RMdXJyc0d5YVhxK09oeDZ0VEQwLzhpajVFcTBKcWxE?=
 =?utf-8?Q?8gC0FBRa8yX2TQn9szYFgoGDRmq0PeDN9fLSBJt?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dfff7f0-7335-42f0-8618-08d9324d1dbb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 11:35:04.4164
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zEygO8U3NgOdEj77frY4wLPSYG8V/KOAUVvX3Tc1WJlwdRGx9rxyMsNTKhXO605cL1tM54On9DbW1yAzjS68iMZ1YDmrVghm1T7RU6gll0M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4967
X-OriginatorOrg: citrix.com

On 18/06/2021 00:39, Daniel P. Smith wrote:
> The assignment and setup of xsm_ops structure was refactored to make it a
> one-time assignment. The calling of the xsm_ops was refactored to use the
> alternate_call framework to reduce the need for retpolines.
>
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>

I think the commit message needs a little more explanation for anyone
doing code archaeology.

AFAICT, the current infrastructure has some (incomplete?) support for
Flask to unload itself as the security engine, which doesn't sound like
a clever thing in general.

What we do here is make a semantic change to say that the security
engine (Dummy, Flask or SILO) gets chosen once during boot, and is
immutable thereafter. This is better from a security standpoint (no
accidentally unloading Flask at runtime), and allows for the use of the
alternative_vcall() infrastructure to drop all the function pointer calls.

Does that about sum things up?

> diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
> index 01e52138a1..df9fcc1d6d 100644
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -225,26 +225,7 @@ static int flask_security_sid(struct xen_flask_sid_context *arg)
>
>  static int flask_disable(void)
>  {
> -    static int flask_disabled = 0;
> -
> -    if ( ss_initialized )
> -    {
> -        /* Not permitted after initial policy load. */
> -        return -EINVAL;
> -    }
> -
> -    if ( flask_disabled )
> -    {
> -        /* Only do this once. */
> -        return -EINVAL;
> -    }
> -
> -    printk("Flask:  Disabled at runtime.\n");
> -
> -    flask_disabled = 1;
> -
> -    /* Reset xsm_ops to the original module. */
> -    xsm_ops = &dummy_xsm_ops;
> +    printk("Flask:  Disabling is not supported.\n");

Judging by this, should this patch be split up more?

I think you want to remove FLASK_DISABLE (and this function too - just
return -EOPNOTSUPP in the parent) as a separate explained change (as it
is a logical change in how Flask works).

The second patch wants to be the rest, which changes the indirection of
xsm_ops and converts to vcall(). This is a fairly mechanical change
without semantic changes.

I'm unsure if you want a 3rd patch in the middle, separating the
xsm_core_init() juggling, with the "switch to using vcall()". It might
be a good idea for more easily demonstrating the changes, but I'll leave
it to your judgement.

> diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
> index 5eab21e1b1..acc1af7166 100644
> --- a/xen/xsm/xsm_core.c
> +++ b/xen/xsm/xsm_core.c
>  static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>  {
>  #ifdef CONFIG_XSM_FLASK_POLICY
> @@ -87,17 +86,22 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>      }
>  #endif
>
> -    if ( verify(&dummy_xsm_ops) )
> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>      {
> -        printk(XENLOG_ERR "Could not verify dummy_xsm_ops structure\n");
> +        printk(XENLOG_ERR
> +            "Could not init XSM, xsm_ops register already attempted\n");

Indentation.

>          return -EIO;
>      }
>
> -    xsm_ops = &dummy_xsm_ops;
> +    /* install the dummy ops as default to ensure ops
> +     * are defined if requested policy fails init
> +     */
> +    xsm_fixup_ops(&xsm_ops);

/* Comment style. */

or

/*
 * Multi-
 * line comment style.
 */

>      switch ( xsm_bootparam )
>      {
>      case XSM_BOOTPARAM_DUMMY:
> +        xsm_ops_registered = XSM_OPS_REGISTERED;
>          break;
>
>      case XSM_BOOTPARAM_FLASK:
> @@ -113,6 +117,9 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>          break;
>      }
>
> +    if ( xsm_ops_registered != XSM_OPS_REGISTERED )
> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
> +
>      return 0;
>  }
>
> @@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
>
>  int __init register_xsm(struct xsm_operations *ops)
>  {
> -    if ( verify(ops) )
> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
> +        return -EAGAIN;

I know you moved this around the function, but it really isn't -EAGAIN
material any more. It's "too late - nope".

-EEXIST is probably best for "I'm never going to tolerate another call".

> +
> +    if ( !ops )
>      {
> -        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
> +        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
>          return -EINVAL;

Honestly, I'd be half tempted to declare register_xsm() with
__nonnull(0) and let the compiler reject any attempt to pass a NULL ops
pointer.

Both callers pass a pointer to a static singleton object.

>      }
>
> -    if ( xsm_ops != &dummy_xsm_ops )
> -        return -EAGAIN;
> +    /* use dummy ops for any empty ops */
> +    xsm_fixup_ops(ops);

Isn't this redundant with the call in xsm_core_init(), seeing as
register_xsm() must be nested within the switch statement?

> -    xsm_ops = ops;
> +    xsm_ops = *ops;
> +    xsm_ops_registered = XSM_OPS_REGISTERED;
>
>      return 0;
>  }

Having got to the end, the xsm_core_init() vs register_xsm() dynamic is
quite weird.

I think it would result in clearer code to have init_{flask,silo}()
return pointers to their struct xsm_operations *, and let
xsm_core_init() do the copy into xsm_ops. This reduces the scope of
xsm_ops_state to this function only, and gets rid of at least one
runtime panic() call which is dead code.

If you were to go with this approach, you'd definitely want to split
into the 3-patch approach.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:39:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:39:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144500.265956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCq8-0003w6-Bj; Fri, 18 Jun 2021 11:39:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144500.265956; Fri, 18 Jun 2021 11:39:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCq8-0003vz-7t; Fri, 18 Jun 2021 11:39:12 +0000
Received: by outflank-mailman (input) for mailman id 144500;
 Fri, 18 Jun 2021 11:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luCq6-0003vs-PL
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:39:10 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da22b60d-f373-403c-8cda-5e3b22250203;
 Fri, 18 Jun 2021 11:39:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da22b60d-f373-403c-8cda-5e3b22250203
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624016349;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=1e01jVbFGNo3JnTOXqXUg6nUfbCM0wi1qcRNy2JnNCU=;
  b=JZkSzdGm0ElyAUY6zTXCsKwDloLPRWfW0P1NWQjc/ODr8q0znh29EzU2
   C3TfoBi0WDi+CHcVn0+GfLAoyreiW1bQ5zcyKj0PrdktYZnDhEl1eb/U9
   OOu/cdm1HOe+5+6pGHWRE1Gzx4mfZ2RmzgTgzxSZiLqppHlx/KoC3MrgF
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AspYnF2utBZbfS+e1l7Naxgf1l1qNr4F9ZA7j0xwLymAxAwoZ2vNdracm9J8bGgoC5D9tZ5+Dz
 RffKtWw1fzSnVruxe/2loVl3c6XCPqke5Xv5nJHz0BmaI30VpPIVimpJo3hzp4v1mjfFJzvGtw
 8XcYNNTcQPR+bu7/OnlQPnQbXXOZvQxwLdylPHTV3lXizg/HHon/6nO1SRwamdnBlGijSOvOo0
 2qI00k9z+mAQvw7gQBTndEvAtGO2Y4FqHxuRfOD4pLMjEgg5lhOBodfcwVLQs6alszxEOScDwE
 aCY=
X-SBRS: 5.1
X-MesageID: 46437334
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:YEordKrVdd8BtgT/plenJzoaV5tuL9V00zEX/kB9WHVpm5Oj+v
 xGzc5w6farsl0ssREb+OxoWZPwOE80kKQY3WB/B8bEYOCLghrMEGgA1/qB/9SDIVyHygc178
 4JHMYOa+EcFWIbsS+T2njCLz9K+qjgzEnHv5a7855Yd3ARV0gs1XYNNi+rVmlNACVWD5swE5
 SRouBdoSC7RHgRZsOnQlEYQunqvbTw5dnbSC9DIyRixBiFjDuu5rK/OQOfxA0iXzRGxqpn2X
 TZkjb++r6ov5iAu1XhPlfontlrcebau5J+7Y23+74owwzX+3GVjVFaKvW/VDNcmpDe1L9lqq
 iBn/4aBbUO15rmRBDFnfLc4Xic7N8Q0Q6c9ba5uwqRnSWrfkNJNyJ+7bgpDCcxrXBQyO1B7A
 ==
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46437334"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JzlK/NakN49jKolvkIXGfNVdz+5i1/fvWSQo+O/LIQWCiTzZQ+/ePH9CL9PFb30eGW0GhXqfg4U1Hz9ra4+n/uOk/QnWe3GB87w9NwxVnrSiMi4cJpmC56tGytBgfctz/cENhc4j99RUIQNB9EEZIDlCj+VtCKZjbFxsnc1M5oCp4Zm6s8UraeLmuL6oACMnG9opEFBCMvPVwVfzyXckkAu4i95oAVla9ig5iJAKF8Z2QFrBSUQICYTRgdi2qwaINk9em3GwU4GhEl0FO0ygedX61TwK4OSZt1GQik63IGTlXBM1tW2YKVG+YMx7snbBvVE4C8Ik8MVLN+7IMRVXLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r+u3PV4BdxAtT+HB9GvfGK6f6I2qOpbhQxhuR3qMc4c=;
 b=Q5SysDqYR7y8l1syOH+BaPAwX6saAvSPLGZPub+dz7U6X/W2k7OiDz3WhRGSSAnfT70L+pBAH6lnWeFTzZv3CA2thIGiABsCcN6pnuaEE9CgA9kiwr5qNxMaA+y58tGVKwEhnydsXkd6DBv6dQrICtXC0JGTm0sc/OJLtgyKh1RAHw4rldft/6c0aQynLFikJNBVPPevtgCzzGjeOrRTQdSM/jHKb58WMZ2nUTGPXCWguHrAPB8Gfxwrs5mybopyn5CvnCECGNKioXVVg/uhpdCQT7UL6eZW/BwAuYXqxpe4LDT0UIKqvvL5e8fNRdYn+QTopy2ATsthuSdXvDsFOQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r+u3PV4BdxAtT+HB9GvfGK6f6I2qOpbhQxhuR3qMc4c=;
 b=M6tztpz1NDheZRUzunAsNfSWZ0grpTlLnjHWPqnzbnkxApUIH26om08Em35iOf+VgtbbepHDTRH8O/RPztv1w7XdLW3CmvD0v8t3rOpS+IwwnYzlujeJtdzXHN4XtVALMImPN0OaI0xLwetGHdtDMGoJxGOBcbrbH4mYpadOf/s=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 06/12] golang/xenlight: rename Ctx receivers to ctx
Thread-Topic: [RESEND PATCH 06/12] golang/xenlight: rename Ctx receivers to
 ctx
Thread-Index: AQHXUNzF07nShyBlEkWr3AI/MqKPx6sZy10A
Date: Fri, 18 Jun 2021 11:39:06 +0000
Message-ID: <413F7924-2A8D-4BF7-AED5-36AAE1A860FC@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <c1f7b48068d3855f48f818d93ddd23638a0f9f70.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <c1f7b48068d3855f48f818d93ddd23638a0f9f70.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 64eafcd0-b05b-477f-2db9-08d9324dae7a
x-ms-traffictypediagnostic: PH0PR03MB5734:
x-microsoft-antispam-prvs: <PH0PR03MB57343FAC16B4C5C36A1F58E0990D9@PH0PR03MB5734.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2512;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <47B9D8791E940642BF846DAC13ED3141@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 64eafcd0-b05b-477f-2db9-08d9324dae7a
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 11:39:06.9265
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: iJvBZRNoo5tCJXUj+UGnGk1R5qYUfI5q51Tbu5cYrdCurFazbNz2F0JtRs5jxAqyi957YbGOdgjefTnfT5e6P1R5uKI1cBBxPC98Wwp3/WE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5734
X-OriginatorOrg: citrix.com



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> As a matter of style, it is strange to see capitalized receiver names,
> due to the significance of capitalized symbols in Go (although there is
> in fact nothing special about a capitalized receiver name). Fix this in
> xenlight.go by running:
> 
>  gofmt -w -r 'Ctx -> ctx' xenlight.go
> 
> from tools/golang/xenlight. There is no functional change.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:44:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:44:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144507.265967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCv4-0005Pp-3E; Fri, 18 Jun 2021 11:44:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144507.265967; Fri, 18 Jun 2021 11:44:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCv3-0005Pi-Vr; Fri, 18 Jun 2021 11:44:17 +0000
Received: by outflank-mailman (input) for mailman id 144507;
 Fri, 18 Jun 2021 11:44:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luCv2-0005Pc-0Z
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:44:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60c0895a-7ee9-4f45-b091-36fc777038c3;
 Fri, 18 Jun 2021 11:44:14 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2105.outbound.protection.outlook.com [104.47.17.105])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-NkR_hk7kMc-NaWup-g6e0w-1; Fri, 18 Jun 2021 13:44:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5903.eurprd04.prod.outlook.com (2603:10a6:803:e0::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Fri, 18 Jun
 2021 11:44:07 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 11:44:07 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0203.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Fri, 18 Jun 2021 11:44:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60c0895a-7ee9-4f45-b091-36fc777038c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624016653;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SKFckytBowfX9X43g+XZYTZ0ij8kQBLKQUhn+og2JGE=;
	b=CfirM7Us95QEqNFS0H8zQ43Kc79+Lswh6QyM/N5TddcSLvecdhnsBcjOY4ikpmhRH6Dcqi
	MKOhIXe3nkzxKcmUXJDzHvxinyF7gzuZ7WrdPpef5GdPixdFjAGI0yyKl0tKcFYy8ER5FG
	V9EQAnwEo6z5Pma9SF6MGXR7dwqVPTk=
X-MC-Unique: NkR_hk7kMc-NaWup-g6e0w-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 1/6] xsm: refactor xsm_ops handling
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-2-dpsmith@apertussolutions.com>
 <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3345073c-45ab-b875-6d3c-32dadfb63fc9@suse.com>
Date: Fri, 18 Jun 2021 13:44:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0203.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::23) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eb15906d-ed2a-4fad-fbe2-08d9324e6169
X-MS-TrafficTypeDiagnostic: VI1PR04MB5903:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB59033C5928D70273D81FB3A4B30D9@VI1PR04MB5903.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eb15906d-ed2a-4fad-fbe2-08d9324e6169
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 11:44:07.4568
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Gtx+Kgeo4jkyfKeDOa36MbSnmtGY4ZYOc4PF5cosRY+N1wGKpW9yvlDep5aTmhWENSgpBiEqRnnSpg1k7RvBXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5903

On 18.06.2021 13:34, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> @@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
>> 
>>  int __init register_xsm(struct xsm_operations *ops)
>>  {
>> -    if ( verify(ops) )
>> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>> +        return -EAGAIN;
> 
> I know you moved this around the function, but it really isn't -EAGAIN
> material any more.  It's "too late - nope".
> 
> -EEXIST is probably best for "I'm never going to tolerate another call".
> 
>> +
>> +    if ( !ops )
>>      {
>> -        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
>> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
>> +        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
>>          return -EINVAL;
> 
> Honestly, I'd be half tempted to declare register_xsm() with
> __nonnull(0) and let the compiler reject any attempt to pass a NULL ops
> pointer.
> 
> Both callers pass a pointer to static singleton objects.

Why check at all when the source of the arguments is all internal?
We don't check pointers to be non-NULL elsewhere, with a few odd
exceptions (which imo should all be dropped).

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:46:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:46:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144513.265977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCwr-00061A-FR; Fri, 18 Jun 2021 11:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144513.265977; Fri, 18 Jun 2021 11:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCwr-000613-CW; Fri, 18 Jun 2021 11:46:09 +0000
Received: by outflank-mailman (input) for mailman id 144513;
 Fri, 18 Jun 2021 11:46:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luCwq-00060x-E3
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:46:08 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b022b560-f00f-463a-8052-c34740875931;
 Fri, 18 Jun 2021 11:46:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b022b560-f00f-463a-8052-c34740875931
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624016766;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=iC8LLHdweiPa0bZPM+F79j66kYNsC0f2oGJ8k0LDe2c=;
  b=gnpnlmNi8gz6OcTT8hzGL57ElHFDVcG/1ARmLlEmXnw2vQlXYXEYZhPy
   ssH4vQ5UDWwtBHVpoohSVJOFrNZNHSMfRfBFcsZE6WUFwO7y9QfPQLTGB
   SsL4uoFP6lMycE7wDNXWy7ZUdZaTu8K77AaQsE2hw+0o2woiqxV+UXEg5
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46437686
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46437686"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QGKFxGyTtatTV1/5mlTLYWzymcf6l52Oic+FjnOF2/U=;
 b=PkWgQJJ2/8dGIFiYEvP4YQHLmM0Ue4wHTu691YrBkPFas01DT6Pe9DspFxlZq/BZfVMqyjdeStWQoCzjj8jGpPqPq0ZrfsB6J4469eWGRRyIcCE3WgBCN2g115pUPxvcpEI93esOXYzh6WUD1UOUDExJU+NRMeQ2dAlbZCg8QBo=
Subject: Re: [PATCH 1/6] xsm: refactor xsm_ops handling
To: Jan Beulich <jbeulich@suse.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, "Tamas
 K Lengyel" <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>,
	<xen-devel@lists.xenproject.org>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-2-dpsmith@apertussolutions.com>
 <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
 <3345073c-45ab-b875-6d3c-32dadfb63fc9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d26ee7f3-4840-a601-7447-cc4d97e23dae@citrix.com>
Date: Fri, 18 Jun 2021 12:45:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3345073c-45ab-b875-6d3c-32dadfb63fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0037.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::25) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ac35b83-0ebc-4c21-e467-08d9324ea40a
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5837:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5837796A32ECC929B010EA8ABA0D9@SJ0PR03MB5837.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ac35b83-0ebc-4c21-e467-08d9324ea40a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 11:45:59.3899
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SPdSgiknxPCG/Be1Fzb/rwkncFbvtaadLDBlbkwiyffq5JK3UbARyb1iZ8OV45gbwm2U0Kb1ur5FIp6OATex3FpBj7rWJiR1eVHiwjN4Nbg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5837
X-OriginatorOrg: citrix.com

On 18/06/2021 12:44, Jan Beulich wrote:
> On 18.06.2021 13:34, Andrew Cooper wrote:
>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>> @@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
>>>  
>>>  int __init register_xsm(struct xsm_operations *ops)
>>>  {
>>> -    if ( verify(ops) )
>>> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>>> +        return -EAGAIN;
>> I know you moved this around the function, but it really isn't -EAGAIN
>> material any more.  It's "too late - nope".
>>
>> -EEXIST is probably best for "I'm never going to tolerate another call".
>>
>>> +
>>> +    if ( !ops )
>>>      {
>>> -        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
>>> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
>>> +        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
>>>          return -EINVAL;
>> Honestly, I'd be half tempted to declare register_xsm() with
>> __nonnull(0) and let the compiler reject any attempt to pass a NULL ops
>> pointer.
>>
>> Both callers pass a pointer to static singleton objects.
> Why check at all when the source of the arguments is all internal?
> We don't check pointers to be non-NULL elsewhere, with a few odd
> exceptions (which imo should all be dropped).

That too.  At the end of my email, I suggested an alternative approach
which would remove register_xsm() entirely, and I think that is a
better-still way forward.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:48:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:48:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144519.265988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCz9-0006fo-SD; Fri, 18 Jun 2021 11:48:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144519.265988; Fri, 18 Jun 2021 11:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luCz9-0006fh-PC; Fri, 18 Jun 2021 11:48:31 +0000
Received: by outflank-mailman (input) for mailman id 144519;
 Fri, 18 Jun 2021 11:48:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luCz8-0006fW-Qc
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:48:30 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3711789-05b8-4eba-bc15-e1de9a90a4db;
 Fri, 18 Jun 2021 11:48:30 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2053.outbound.protection.outlook.com [104.47.4.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-f0LP48YXMWWEVF90w17jeA-1; Fri, 18 Jun 2021 13:48:27 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7037.eurprd04.prod.outlook.com (2603:10a6:800:125::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Fri, 18 Jun
 2021 11:48:25 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 11:48:25 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0014.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 11:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3711789-05b8-4eba-bc15-e1de9a90a4db
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624016909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zNq6/dcayhBVL/eCGgv5P9IP7TQQV4/ngUCWpB2QGa0=;
	b=gId1psMfjOzPqa1B64zP8WEhFs7/OxfkUieaV4tIRWTXWYDJjAinIJodsHU3jPutcbYv3H
	mErsTSmNDMXm2MVjcIOJAiN97FZpIQ/iwzTmc3JtMGjDBHJraHJuU5nPgF4buANcOOWZpy
	gkcC00tCN/vnFzJyDXuX6OIe6zZUZEc=
X-MC-Unique: f0LP48YXMWWEVF90w17jeA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 0/6] xsm: refactoring xsm hooks
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b921c150-84f7-3ab3-1e4a-89d00725d9da@suse.com>
Date: Fri, 18 Jun 2021 13:48:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0014.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5f2affb2-9d90-419a-421f-08d9324efb3f
X-MS-TrafficTypeDiagnostic: VI1PR04MB7037:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7037AF72ADEEA852671A5618B30D9@VI1PR04MB7037.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f2affb2-9d90-419a-421f-08d9324efb3f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 11:48:25.4989
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: q7SzBxhCgZO5tIcynwA5N2t9VdBEuFpl0s/nw7NdiAWVwPE1/bkTagnafv79UY9WeQLbYmaSJLjkrpnRgde2Sw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7037

On 18.06.2021 12:14, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> Based on feedback from 2021 Xen Developers Summit the xsm-roles RFC
>> patch set is being split into two separate patch sets. This is the first
>> patch set and is focused purely on the clean up and refactoring of the
>> XSM hooks.
>>
>> This patch set refactors the xsm_ops wrapper hooks to use the alternative_call
>> infrastructure. It then proceeds to move and realign the headers to remove the
>> pseudo is/is-not-enabled implementation. The remainder of the changes are clean-up
>> and removal of no-longer-necessary abstractions.
>>
>> <snip>
>>  51 files changed, 1309 insertions(+), 1413 deletions(-)
>
> The diffstat is great, but sadly CI says no.
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/323044913
>
> The problem is that ARM doesn't have alternative_vcall().  Given how
> much of an improvement this ought to be for hypercalls, I don't want to
> lose the vcalls.
>
> One option is to implement vcall() support on ARM, but that will leave
> new architectures (RISC-V on the way) with a heavy lift to get XSM to
> compile.
>
> Instead, what we want to do is make vcall() a common interface, falling
> back to a plain function pointer call for architectures which don't
> implement the optimisation.  So something like:
>
> 1) Introduce CONFIG_HAS_VCALL, which is selected by X86 only right now
> 2) Introduce xen/vcall.h which uses CONFIG_HAS_VCALL to either include
> asm/vcall.h or provide the fallback implementation

A word on the suggested names: The 'v' in alternative_vcall() stands for
"returning void", as opposed to alternative_call(). It's unclear to me
what you see it stand for in the names you propose.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:50:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144525.266000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luD0j-00082s-Bs; Fri, 18 Jun 2021 11:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144525.266000; Fri, 18 Jun 2021 11:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luD0j-00082l-7y; Fri, 18 Jun 2021 11:50:09 +0000
Received: by outflank-mailman (input) for mailman id 144525;
 Fri, 18 Jun 2021 11:50:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luD0h-000821-PD; Fri, 18 Jun 2021 11:50:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luD0g-0007Ez-Po; Fri, 18 Jun 2021 11:50:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luD0g-0008QW-De; Fri, 18 Jun 2021 11:50:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luD0g-0000GH-D9; Fri, 18 Jun 2021 11:50:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FGpQO+ImgPh2a2oLBdpIk/99JBrtT2dQeR0Lj7NWAEk=; b=fP3KgXEF1ZuiN6cehvK/nFIBYi
	GbyslnSsRi+teI2ZG/egzPlJLCMB/KlRdFylUmTccQaKbiF7D9F3go3aE6LzNAzgXTiiH0PDr/8De
	UvBOi+s0G/uUjPIpzwwRkyVuzoKvDMHsiw450Ec4RCP2AMxwKSQtlHHltbtOVOFGUVPo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162884: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=1162ae8297e1fc9871e615cad7d505d639b7ed0c
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 11:50:06 +0000

flight 162884 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162884/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 1162ae8297e1fc9871e615cad7d505d639b7ed0c
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   14 days
Failing since        162368  2021-06-04 15:42:59 Z   13 days   31 attempts
Testing same since   162875  2021-06-17 10:45:14 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2158 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:54:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 11:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144533.266014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luD4W-0000LI-SY; Fri, 18 Jun 2021 11:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144533.266014; Fri, 18 Jun 2021 11:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luD4W-0000LB-PB; Fri, 18 Jun 2021 11:54:04 +0000
Received: by outflank-mailman (input) for mailman id 144533;
 Fri, 18 Jun 2021 11:54:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luD4V-0000L5-2a
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 11:54:03 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a86203d-3c7f-4a84-b293-b1aa12ff654c;
 Fri, 18 Jun 2021 11:54:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a86203d-3c7f-4a84-b293-b1aa12ff654c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624017241;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=4J/pXe9Zt62kIfRU5iQIK0K/WoifOrN+5LbiCwaizSU=;
  b=JFnVoenskxfB1h8MQZvpSFfVdbz5RP5a1uI4u3BiFHS8YDaGF0awgUZZ
   WvKFodJqG0pX47omIVsDXX5HdzWRRhUjEXDl+mKAuG6ddXaJL1TZmKT/8
   ZRvZ3gf+WmwfsW/CgPFc3C+RMRONQgHk5rcf9LnZ5zHc+YpHlZJbwDMBa
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46522827
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46522827"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/PhQsZuKUsAcWxEPaVSWy0dBjr4KgKkVrqO+WXylHhQ=;
 b=cL06WBzzVnrhktQl8PCRO5U25gsDJM0JYPW/g4FzFnhyMkj49ptO+ofOxSHxBM0XEylJRerTFrohGd1c2azAolHywn2A7M6krUAuRZyrEL6KrE5fNArsKqsuFSWcAIhnAXqPZzDIMoMFWZfMO2zC/INWvg/bNL49L1IBT3PcrfE=
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
Message-ID: <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
Date: Fri, 18 Jun 2021 12:53:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-4-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0082.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8b697015-3717-4eed-17b1-08d9324fc14d
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5837:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5837109526936BA29C103108BA0D9@SJ0PR03MB5837.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8b697015-3717-4eed-17b1-08d9324fc14d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 11:53:57.9194
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bho7SMvTWV2vCFzkbdqmnrZYI+vznFn5XuDgn0DGZ/aUgJlLx4q8J0dEpYIGxGLjjLB5v5KRIwl4817lyGkCd3N/py1sGQG3X1ExKUPYALU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5837
X-OriginatorOrg: citrix.com

On 18/06/2021 00:39, Daniel P. Smith wrote:
> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
> is whether the XSM hooks in dummy.h are called as static inline functions or as function
> pointers to static functions. As such this commit,
>  * eliminates CONFIG_XSM
>  * introduces CONFIG_XSM_EVTCHN_LABELING as replacement for enabling event channel labels
>  * makes CONFIG_XSM_SILO and CONFIG_XSM_FLASK default to no
>
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> ---
>  xen/common/Kconfig            |  55 ++++-----
>  xen/include/xen/sched.h       |   2 +-
>  xen/include/xsm/xsm-core.h    |  26 ----
>  xen/include/xsm/xsm.h         |   8 --
>  xen/xsm/Makefile              |   4 +-
>  xen/xsm/dummy.c               |   4 +-
>  xen/{include => }/xsm/dummy.h | 220 ++++++++++++++++------------------
>  xen/xsm/silo.c                |  17 +--
>  xen/xsm/xsm_core.c            |   4 -
>  9 files changed, 142 insertions(+), 198 deletions(-)
>  rename xen/{include => }/xsm/dummy.h (63%)
>
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 0ddd18e11a..203ad7ea23 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -197,22 +197,33 @@ config XENOPROF
> =20
>  	  If unsure, say Y.
> =20
> -config XSM
> -	bool "Xen Security Modules support"
> -	default ARM
> -	---help---
> -	  Enables the security framework known as Xen Security Modules which
> -	  allows administrators fine-grained control over a Xen domain and
> -	  its capabilities by defining permissible interactions between domains,
> -	  the hypervisor itself, and related resources such as memory and
> -	  devices.
> +menu "Xen Security Modules"
>
> -	  If unsure, say N.
> +choice
> +	prompt "Default XSM module"
> +	default XSM_SILO_DEFAULT if XSM_SILO && ARM
> +	default XSM_FLASK_DEFAULT if XSM_FLASK
> +	default XSM_SILO_DEFAULT if XSM_SILO
> +	default XSM_DUMMY_DEFAULT
> +	config XSM_DUMMY_DEFAULT
> +		bool "Match non-XSM behavior"

There is no non-XSM behaviour any more.

Is it time to rename Dummy to "traditional dom0-all-powerful" or
something suitable?

> +	config XSM_FLASK_DEFAULT
> +		bool "FLux Advanced Security Kernel" if XSM_FLASK
> +	config XSM_SILO_DEFAULT
> +		bool "SILO" if XSM_SILO
> +endchoice
> +
> +config XSM_EVTCHN_LABELING
> +	bool "Enables security labeling of event channels"
> +	default n
> +	---help---
> +      This enables an XSM module to label and enforce access control over
> +      event channels.

Please use help rather than ---help--- for new options (it's changed in
upstream Kconfig).  The indentation of the help message wants to be one
tab, then two spaces.  (Yes, sadly...)
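Concretely, something along these lines (a sketch only; final wording up
to you, and the help indentation below is one tab followed by two spaces):

```
config XSM_EVTCHN_LABELING
	bool "Enables security labeling of event channels"
	default n
	help
	  This enables an XSM module to label and enforce access control
	  over event channels.
```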

>  config XSM_FLASK
> -	def_bool y
> +	def_bool n
>  	prompt "FLux Advanced Security Kernel support"
> -	depends on XSM
> +	select XSM_EVTCHN_LABELING
>  	---help---
>  	  Enables FLASK (FLux Advanced Security Kernel) as the access control
>  	  mechanism used by the XSM framework.  This provides a mandatory access
> @@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
>  	  If unsure, say Y.
>
>  config XSM_SILO
> -	def_bool y
> +	def_bool n

I'm not sure we want to alter the FLASK/SILO defaults.  SILO in
particular is mandatory on ARM, and without it, you're in a
security-unsupported configuration.
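If the prompts are to stay, one way to keep the ARM status quo might be
something like this (a hypothetical sketch, not a concrete proposal):

```
config XSM_SILO
	bool "SILO support"
	default y if ARM
```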

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 11:57:08 2021
Subject: Re: [PATCH 4/6] xsm: remove xen_defualt_t from hook definitions
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-5-dpsmith@apertussolutions.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f18aa77f-e2ed-aa10-37d3-2cdbcd5645eb@citrix.com>
Date: Fri, 18 Jun 2021 12:56:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-5-dpsmith@apertussolutions.com>

On 18/06/2021 00:39, Daniel P. Smith wrote:
> With the conversion of making XSM always enabled even the dummy XSM module is
> being invoked through the xsm_ops dispatch which does not use passing of the
> default privilege. This commit removes the xen_default_t parameter from the hook
> definitions and all the respective call sites.
>
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>

I'm struggling to parse the first sentence.  Also, there's a typo in the
subject.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:04:16 2021
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-6-dpsmith@apertussolutions.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/6] xsm: expanding function related macros in dummy.h
Message-ID: <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
Date: Fri, 18 Jun 2021 13:03:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-6-dpsmith@apertussolutions.com>

On 18/06/2021 00:39, Daniel P. Smith wrote:
> diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
> index 7e2bb09dac..0f8ea163af 100644
> --- a/xen/xsm/dummy.h
> +++ b/xen/xsm/dummy.h
> @@ -9,7 +9,7 @@
>   *
>   *
>   *  Each XSM hook implementing an access check should have its first parameter
> - *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
> + *  preceded by (or use XSM_DEFAULT_VOID if it has no
>   *  arguments). The first non-declaration statement should be XSM_ASSERT_ACTION
>   *  with the expected type of the hook, which will either define or check the
>   *  value of action.
> @@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
>   * xsm_default_t argument available, so the value from the assertion is used to
>   * initialize the variable.
>   */
> -#define XSM_INLINE __maybe_unused

Nothing in a header file should ever need __maybe_unused.  Now that the
!XSM case has been untangled, I think this can be dropped, rather than
expanded inline.

> -
> -#define XSM_DEFAULT_ARG /* */
>  #define XSM_DEFAULT_VOID void

XSM_DEFAULT_VOID needs to disappear too.  I can't see what it is even
doing before the cleanup, because if it is missing, you'll fail the
compile for using K&R style functions.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:19:53 2021
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
 Roger Pau Monné <royger@freebsd.org>, Mitchell Horne <mhorne@freebsd.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
References: <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
 <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
 <YJ8hTE/JbJygtVAL@mattapan.m5p.com>
 <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org>
 <alpine.DEB.2.21.2105191611540.14426@sstabellini-ThinkPad-T480s>
 <b6fe6e06-517c-ee4c-5b71-a1bee4d4df13@xen.org>
 <alpine.DEB.2.21.2105200919100.14426@sstabellini-ThinkPad-T480s>
From: Oleksandr Andrushchenko <andr2000@gmail.com>
Message-ID: <2d18f588-5e76-e3da-e7df-5c754516f8d6@gmail.com>
Date: Fri, 18 Jun 2021 15:19:41 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105200919100.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US

Hi, all!

What do we need in order to move on with this?

It seems this would be good to have for many use-cases.

Thank you,

Oleksandr

On 20.05.21 19:21, Stefano Stabellini wrote:
> On Thu, 20 May 2021, Julien Grall wrote:
>> On 20/05/2021 00:25, Stefano Stabellini wrote:
>>> On Sat, 15 May 2021, Julien Grall wrote:
>>>>> My feeling is one of two things should happen with the /hypervisor
>>>>> address range:
>>>>>
>>>>> 1>  OSes could be encouraged to use it for all foreign mappings.  The
>>>>> range should be dynamic in some fashion.  There could be a handy way to
>>>>> allow configuring the amount of address space thus reserved.
>>>> In the context of XSA-300 and virtio on Xen on Arm, we discussed
>>>> providing a region for foreign mappings. The main trouble here is
>>>> figuring out the size; if you mess it up then you may break all the
>>>> PV drivers.
>>>>
>>>> If the problem is finding space, then I would like to suggest a
>>>> different approach (I think I may have discussed it with Andrew).
>>>> Xen is maintaining the P2M for the guest and therefore knows where
>>>> the unallocated space is. How about introducing a new hypercall to
>>>> ask for "unallocated space"?
>>>>
>>>> This would not work for older hypervisors, but you could use the RAM
>>>> instead (as Linux does). This has drawbacks (e.g. shattering
>>>> superpages, reducing the amount of usable memory...), but for 1> you
>>>> would also need a hack for older Xen.
>>> I am starting to wonder if it would make sense to add a new device tree
>>> binding to describe a larger free region for foreign mappings rather
>>> than a hypercall. It could be several GBs or even TBs in size. I like
>>> the idea of having it in device tree because after all this is static
>>> information. I can see that a hypercall would also work and I am open
>>> to it, but if possible I think it would be better not to extend the
>>> hypercall interface when there is a good alternative.
>> There are two issues with the Device-Tree approach:
>>    1) This doesn't work on ACPI
>>    2) It is not clear how to define the size of the region. An admin doesn't
>> have the right information in hand to know how many mappings will be done (a
>> backend may map the same page multiple times).
>>
>> The advantage of the hypercall solution is the size is "virtually" unlimited
>> and not based on a specific firmware.
> Fair enough
>
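The hypercall idea discussed above could look roughly like the following. All names here (xen_get_unallocated_space, find_unallocated) are hypothetical illustrations, not an actual Xen interface; the gap search is only a userspace stand-in for the p2m walk the hypervisor would do.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical request layout: the guest asks for a hole of at least
 * nr_frames GFNs; Xen, which maintains the p2m, picks a free range. */
struct xen_get_unallocated_space {
    uint32_t domid;      /* IN: domain whose p2m is queried */
    uint32_t nr_frames;  /* IN: minimum size of the hole, in frames */
    uint64_t gfn;        /* OUT: start of a GFN range free in the p2m */
};

/* Stand-in for the p2m walk: find a gap between allocated [start, end)
 * ranges.  Ranges must be sorted and non-overlapping. */
struct gfn_range { uint64_t start, end; };

static int find_unallocated(const struct gfn_range *used, size_t n,
                            struct xen_get_unallocated_space *req)
{
    uint64_t cursor = 0;

    for ( size_t i = 0; i < n; i++ )
    {
        if ( used[i].start - cursor >= req->nr_frames )
        {
            req->gfn = cursor;
            return 0;
        }
        cursor = used[i].end;
    }

    req->gfn = cursor;   /* space above the last allocated range */
    return 0;
}
```

The point of the sketch is that the size question disappears: the guest asks per-mapping, and Xen answers from its authoritative view of the address space, independent of firmware tables.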


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:26:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 12:26:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144562.266058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDZX-0005e1-HW; Fri, 18 Jun 2021 12:26:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144562.266058; Fri, 18 Jun 2021 12:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDZX-0005du-EB; Fri, 18 Jun 2021 12:26:07 +0000
Received: by outflank-mailman (input) for mailman id 144562;
 Fri, 18 Jun 2021 12:26:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luDZW-0005do-ER
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 12:26:06 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 210882b9-42e7-41a0-866c-45128d271e39;
 Fri, 18 Jun 2021 12:26:04 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5ICPx46K
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 14:25:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 210882b9-42e7-41a0-866c-45128d271e39
Date: Fri, 18 Jun 2021 14:25:51 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v20210601 07/38] tools: unify type checking for data
 pfns in migration stream
Message-ID: <20210618142551.62d4fd3c.olaf@aepfle.de>
In-Reply-To: <9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
References: <20210601161118.18986-1-olaf@aepfle.de>
	<20210601161118.18986-8-olaf@aepfle.de>
	<9045add9-0cd0-7f9d-87ef-26cea15b74cd@suse.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Ukvn0dd5zXdv3TUlbCUnVnj";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/Ukvn0dd5zXdv3TUlbCUnVnj
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 2 Jun 2021 08:59:13 +0200
Juergen Gross <jgross@suse.com> wrote:

> > @@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
> >  
> >       for ( i = 0; i < count; ++i )
> >       {
> > -        if ( (!types || (types &&
> > -                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
> > -                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
> > +        if ( (!types ||
> > +              (types && page_type_has_stream_data(types[i]) == true)) &&
> 
> What about XEN_DOMCTL_PFINFO_XALLOC? Is this case impossible here, or
> are you changing behavior?

I guess this needs to be handled somehow, a large enough HVM domU will have
XEN_DOMCTL_PFINFO_XALLOC in the stream. Not sure why this was thrown away
with the v2 format.

Olaf
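For readers without the series at hand, here is a sketch of a page_type_has_stream_data() predicate with the semantics of the condition it replaces in the quoted hunk. This is an assumption reconstructed from the diff, not the actual helper from the patch, and it deliberately leaves the XALLOC question above open.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Page type encodings from Xen's public/domctl.h. */
#define XEN_DOMCTL_PFINFO_LTAB_SHIFT 28
#define XEN_DOMCTL_PFINFO_NOTAB   (0x8U << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_XALLOC  (0xeU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_XTAB    (0xfU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)

/* Semantics of the replaced condition: XTAB (not present) and BROKEN
 * pages carry no data in the migration stream; everything else does.
 * Whether XALLOC belongs with the data-less types is exactly the open
 * question raised in this thread. */
static bool page_type_has_stream_data(uint32_t type)
{
    return type != XEN_DOMCTL_PFINFO_XTAB &&
           type != XEN_DOMCTL_PFINFO_BROKEN;
}
```

As written, an XALLOC page would be treated as having stream data, matching the pre-patch condition but not necessarily the intended behavior.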

--Sig_/Ukvn0dd5zXdv3TUlbCUnVnj
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDMkM8ACgkQ86SN7mm1
DoD2xQ//fO3Xbt+r/BIIt+o6qptSirExEnovM11pchpLRlN3od7LameRXtwixCAR
8WKObbiZ+7KR2hj+Ip7hGEc2fzTuoOYnqT1hh3k5oJdj+Cup3Q8R/H5tLNZb69F0
yGinbKKdFWC37ytGUgQcMHWpohjJN9dF7e1qhaEM9cXz2EpT74lvc8riwWOHH2Lr
9QLtXvuts3zB/flWlfEoEZPui9dtaZVtb/mno+yIgFUtoXB5FF8RIGknaoTMQqnY
88LiwLriMhJ3a8b5Q4wMfG9hfmEwS5Mf8RY5yoyWx5QT0yvrE7w1Rg7wuy7PVJfG
36DiUjhoyW6dZ3lbdVYnAN2zFFQWXwL/HCoq7IFZ0Y3z6Q1CCk7X8QX7+09llyv1
0D5E1WRMiBT9bVGW2JKx+Olew0JIskLybfD4ZzLVyU826L4I9nR+Z1n6vHWoBhX8
RNb/xSsS5q/Vfp7LW3nATK6KzmnAiMo36NgGOowtWec2PJ/rcExuiAU2RRqeESrE
5uVX1uYgfU9ciWymHLa9aDtt9b0nd3QMW0corKek3O2D12GwJquawCJbO0OuJYjZ
zotttuhaGSGFh03npMcJzCV5LBQj4xU+zoIkuipp4ySMdIDox9FMGPUmkEID2hiM
SyZzlqFafv6eC5++2d1FN/tUXuf13TJVUQoa0iN9ySNSsOMoeYI=
=6T+w
-----END PGP SIGNATURE-----

--Sig_/Ukvn0dd5zXdv3TUlbCUnVnj--


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:26:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 12:26:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144563.266069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDZj-0005yJ-Rx; Fri, 18 Jun 2021 12:26:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144563.266069; Fri, 18 Jun 2021 12:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDZj-0005yC-Mm; Fri, 18 Jun 2021 12:26:19 +0000
Received: by outflank-mailman (input) for mailman id 144563;
 Fri, 18 Jun 2021 12:26:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luDZi-0005xn-Pg
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 12:26:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d88f9200-790b-4fa9-b8d8-fcda2174bfe8;
 Fri, 18 Jun 2021 12:26:17 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2050.outbound.protection.outlook.com [104.47.14.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-1-DQIg10i-P8aympca0c91rA-1;
 Fri, 18 Jun 2021 14:26:15 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3776.eurprd04.prod.outlook.com (2603:10a6:803:18::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16; Fri, 18 Jun
 2021 12:26:08 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 12:26:08 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2P264CA0030.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101:1::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.15 via Frontend Transport; Fri, 18 Jun 2021 12:26:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d88f9200-790b-4fa9-b8d8-fcda2174bfe8
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
Date: Fri, 18 Jun 2021 14:26:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-4-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0030.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:101:1::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 18.06.2021 01:39, Daniel P. Smith wrote:
> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
> is whether the XSM hooks in dummy.h are called as static inline functions or as function
> pointers to static functions. As such this commit,
>  * eliminates CONFIG_XSM

Following from what Andrew has said (including him mentioning your
changing of certain Kconfig option defaults), I'm not convinced this is
a good move. This still ought to serve as the overall XSM-yes-or-no
setting. If internally you make said two variants match in behavior,
that's a different thing.

>  * introduces CONFIG_XSM_EVTCHN_LABELING as replacement for enabling event channel labels

Is this mode needed as separate functionality at all? Nothing defines
XSM_NEED_GENERIC_EVTCHN_SSID anywhere. _If_ XSM went away as a separate
setting, then imo this one should go away as well.

> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -197,22 +197,33 @@ config XENOPROF
>  
>  	  If unsure, say Y.
>  
> -config XSM
> -	bool "Xen Security Modules support"
> -	default ARM
> -	---help---
> -	  Enables the security framework known as Xen Security Modules which
> -	  allows administrators fine-grained control over a Xen domain and
> -	  its capabilities by defining permissible interactions between domains,
> -	  the hypervisor itself, and related resources such as memory and
> -	  devices.
> +menu "Xen Security Modules"
>  
> -	  If unsure, say N.
> +choice
> +	prompt "Default XSM module"
> +	default XSM_SILO_DEFAULT if XSM_SILO && ARM
> +	default XSM_FLASK_DEFAULT if XSM_FLASK
> +	default XSM_SILO_DEFAULT if XSM_SILO
> +	default XSM_DUMMY_DEFAULT
> +	config XSM_DUMMY_DEFAULT
> +		bool "Match non-XSM behavior"
> +	config XSM_FLASK_DEFAULT
> +		bool "FLux Advanced Security Kernel" if XSM_FLASK
> +	config XSM_SILO_DEFAULT
> +		bool "SILO" if XSM_SILO
> +endchoice

This did live after the individual options it depends on for a reason,
and you don't say anywhere why you need to move it up. The way you
have it, with the default command line kconfig tool, users will be
presented with dependent options before having chosen the settings of
the dependency ones. That's because this tool, to a degree, moves
linearly through the options it has parsed.
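Concretely, the pre-patch ordering Jan describes, options first and the choice last, looks like this (a trimmed sketch for illustration, not the actual Kconfig in the tree):

```kconfig
menu "Xen Security Modules"

config XSM_FLASK
	bool "Flux Advanced Security Kernel (FLASK)"

config XSM_SILO
	bool "SILO support"

# The choice comes after the options it depends on, so a front end
# that moves linearly through the parsed options has already asked
# about FLASK/SILO before offering them as possible defaults.
choice
	prompt "Default XSM module"
	default XSM_SILO_DEFAULT if XSM_SILO && ARM
	default XSM_FLASK_DEFAULT if XSM_FLASK
	default XSM_SILO_DEFAULT if XSM_SILO
	default XSM_DUMMY_DEFAULT
config XSM_DUMMY_DEFAULT
	bool "Match non-XSM behavior"
config XSM_FLASK_DEFAULT
	bool "FLASK" if XSM_FLASK
config XSM_SILO_DEFAULT
	bool "SILO" if XSM_SILO
endchoice

endmenu
```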

> @@ -261,25 +271,12 @@ config XSM_SILO
>  
>  	  If unsure, say Y.
>  
> -choice
> -	prompt "Default XSM implementation"
> -	depends on XSM
> -	default XSM_SILO_DEFAULT if XSM_SILO && ARM
> -	default XSM_FLASK_DEFAULT if XSM_FLASK
> -	default XSM_SILO_DEFAULT if XSM_SILO
> -	default XSM_DUMMY_DEFAULT
> -	config XSM_DUMMY_DEFAULT
> -		bool "Match non-XSM behavior"
> -	config XSM_FLASK_DEFAULT
> -		bool "FLux Advanced Security Kernel" if XSM_FLASK
> -	config XSM_SILO_DEFAULT
> -		bool "SILO" if XSM_SILO
> -endchoice
> +endmenu
>  
>  config LATE_HWDOM
>  	bool "Dedicated hardware domain"
>  	default n
> -	depends on XSM && X86
> +	depends on XSM_FLASK && X86

I don't think this is a compatible change. I can't exclude that this is
how it was meant, but as it stands LATE_HWDOM right now doesn't really
require FLASK, and could e.g. also go with SILO or dummy. If you _mean_
to change this, then your description needs to say so (and ideally it
would then be split out, so - if this is actually a bug - it could
also be backported).

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:32:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 12:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144576.266079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDfs-0007lV-LY; Fri, 18 Jun 2021 12:32:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144576.266079; Fri, 18 Jun 2021 12:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDfs-0007lO-IS; Fri, 18 Jun 2021 12:32:40 +0000
Received: by outflank-mailman (input) for mailman id 144576;
 Fri, 18 Jun 2021 12:32:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luDfr-0007lI-4I
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 12:32:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20ef7dd4-9602-4dd0-8b04-8bd549dea214;
 Fri, 18 Jun 2021 12:32:38 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2112.outbound.protection.outlook.com [104.47.18.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-bDZLDxFZNru_03qOs74hWg-1; Fri, 18 Jun 2021 14:32:35 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6862.eurprd04.prod.outlook.com (2603:10a6:803:130::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Fri, 18 Jun
 2021 12:32:33 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 12:32:33 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR07CA0002.eurprd07.prod.outlook.com (2603:10a6:208:ac::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 12:32:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ef7dd4-9602-4dd0-8b04-8bd549dea214
Subject: Re: [PATCH 4/6] xsm: remove xen_defualt_t from hook definitions
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-5-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <08890bd4-05b2-9669-b9d6-89dc3954b7f3@suse.com>
Date: Fri, 18 Jun 2021 14:32:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210617233918.10095-5-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR07CA0002.eurprd07.prod.outlook.com
 (2603:10a6:208:ac::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 46946c56-9e24-4a32-3122-08d932552557
X-MS-TrafficTypeDiagnostic: VI1PR04MB6862:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB686236E24C40EFF7C5D5641EB30D9@VI1PR04MB6862.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1284;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 46946c56-9e24-4a32-3122-08d932552557
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 12:32:33.1934
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: a6nrQDYlqlisohjXlni7mzvwX8L+h0wCyjH1/NQghcGbJvfrz122RE2QbCGbgOnrO5MxB92CD+F1xFgm242TUA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6862

On 18.06.2021 01:39, Daniel P. Smith wrote:
> With the conversion to making XSM always enabled, even the dummy XSM module is
> invoked through the xsm_ops dispatch, which does not pass the
> default privilege. This commit removes the xsm_default_t parameter from the hook
> definitions and all the respective call sites.

I don't think the earlier patches really made it clear that even in
dummy mode we now have (as per my reading of the above) actual function
calls, where previously everything got inlined in this case. I'm afraid
I view this as another argument against the removal of XSM as a
top-level Kconfig option.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:41:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 12:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144582.266091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDo4-0000jN-IP; Fri, 18 Jun 2021 12:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144582.266091; Fri, 18 Jun 2021 12:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDo4-0000jG-E2; Fri, 18 Jun 2021 12:41:08 +0000
Received: by outflank-mailman (input) for mailman id 144582;
 Fri, 18 Jun 2021 12:41:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luDo3-0000j9-8L
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 12:41:07 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eeda3ef0-01f2-46d2-bb0b-f7288cee8c8d;
 Fri, 18 Jun 2021 12:41:05 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-7lvNB4lRPpW6XZz-pN2JXg-1; Fri, 18 Jun 2021 14:41:03 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6672.eurprd04.prod.outlook.com (2603:10a6:803:127::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Fri, 18 Jun
 2021 12:41:01 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 12:41:01 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P193CA0017.EURP193.PROD.OUTLOOK.COM (2603:10a6:20b:21e::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Fri, 18 Jun 2021 12:40:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eeda3ef0-01f2-46d2-bb0b-f7288cee8c8d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624020064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+OFRb2re+j8mDXoGLge6DvS83zdPaON/XX9S5Wwy/4A=;
	b=bTc1P7s3NIwVSA5s2iBB3QLZxS/5O0u3e5eCNljRPEZOvnmTLAthyLLf1UDfF/mTIOUAxD
	PYUKdmZ/5G23AR+syt7fp5xjtf0t2vCHOX20P8aCpDQc7omr6lxKS8xTdswWmDPy/M7a7u
	afTJLT37slPqS9ExL3areFAAm3ocF2w=
X-MC-Unique: 7lvNB4lRPpW6XZz-pN2JXg-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 5/6] xsm: expanding function related macros in dummy.h
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-6-dpsmith@apertussolutions.com>
 <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <60894a2d-0977-18e7-75d3-726695dd06eb@suse.com>
Date: Fri, 18 Jun 2021 14:40:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P193CA0017.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:20b:21e::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f3d7b74f-3254-4f9d-7374-08d9325653d6
X-MS-TrafficTypeDiagnostic: VE1PR04MB6672:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6672B8003C75D269AA044F58B30D9@VE1PR04MB6672.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3d7b74f-3254-4f9d-7374-08d9325653d6
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 12:41:01.1841
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6ABR8EIWWHOvnJS/Wx5gQiXu6vqGqIjzPRf/LzaKilzf37lAw/71J2hrk6z/j6MrQWvixEThEknW198in5Qj8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6672

On 18.06.2021 14:03, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
>> index 7e2bb09dac..0f8ea163af 100644
>> --- a/xen/xsm/dummy.h
>> +++ b/xen/xsm/dummy.h
>> @@ -9,7 +9,7 @@
>>   *
>>   *
>>   *  Each XSM hook implementing an access check should have its first parameter
>> - *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
>> + *  preceded by (or use XSM_DEFAULT_VOID if it has no
>>   *  arguments). The first non-declaration statement shold be XSM_ASSERT_ACTION
>>   *  with the expected type of the hook, which will either define or check the
>>   *  value of action.
>> @@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
>>   * xsm_default_t argument available, so the value from the assertion is used to
>>   * initialize the variable.
>>   */
>> -#define XSM_INLINE __maybe_unused
>
> Nothing in a header file should ever need __maybe_unused. Now that the
> !XSM case has been untangled, I think this can be dropped, rather than
> expanded inline.
>
>> -
>> -#define XSM_DEFAULT_ARG /* */
>>  #define XSM_DEFAULT_VOID void
>
> XSM_DEFAULT_VOID needs to disappear too. I can't see what it is even
> doing before the cleanup, because if it is missing, you'll fail the
> compile for using K&R style functions.

You need to look at the state before patch 3 to see its purpose. Patch 3
removed the other variant, and hence the need for this one as well, but
I think it is reasonable not to clean up everything in one go (unless
it would mean touching exactly the same code a second time later on).

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 12:44:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 12:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144587.266102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDrE-0001Qa-1q; Fri, 18 Jun 2021 12:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144587.266102; Fri, 18 Jun 2021 12:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luDrD-0001QT-Us; Fri, 18 Jun 2021 12:44:23 +0000
Received: by outflank-mailman (input) for mailman id 144587;
 Fri, 18 Jun 2021 12:44:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luDrC-0001QN-Qr
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 12:44:22 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8727e7cd-0264-4c37-96c7-4d095758c3b0;
 Fri, 18 Jun 2021 12:44:21 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2108.outbound.protection.outlook.com [104.47.18.108])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-jRA-za6aNlyfxcmLDSiKvw-1; Fri, 18 Jun 2021 14:44:19 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB2702.eurprd04.prod.outlook.com (2603:10a6:800:b4::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Fri, 18 Jun
 2021 12:44:17 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 12:44:17 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0055.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:53::30) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Fri, 18 Jun 2021 12:44:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8727e7cd-0264-4c37-96c7-4d095758c3b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624020261;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5/mW6XaXlrz46hEoXTQUSSJn/47iAlwcv40H2df6zDo=;
	b=HXpvoocEjKNovYH1V9rhjFzEdNBCI4lmYbqUQN1bska35DZVjNUtUYsvVpz3EDpDc7KvQh
	n9k4iTtrpGdglGmi8scebCv72F9rMClCeXqrW1tJtDkEwAJbIdf4vlBvggSQsxwT8vmC5O
	AAwHD368UC5kASaGHD3sjVmm5qHEjVU=
X-MC-Unique: jRA-za6aNlyfxcmLDSiKvw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 5/6] xsm: expanding function related macros in dummy.h
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-6-dpsmith@apertussolutions.com>
 <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
 <60894a2d-0977-18e7-75d3-726695dd06eb@suse.com>
Message-ID: <e6d0eb00-b2ad-15ae-e9cb-65716779d960@suse.com>
Date: Fri, 18 Jun 2021 14:44:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <60894a2d-0977-18e7-75d3-726695dd06eb@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0055.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::30) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d1cbd86c-74da-48c1-a6d9-08d93256c8da
X-MS-TrafficTypeDiagnostic: VI1PR0402MB2702:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB2702468EDB56B4BA46FAEC99B30D9@VI1PR0402MB2702.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d1cbd86c-74da-48c1-a6d9-08d93256c8da
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 12:44:16.9863
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YxBzO3qdzIeZ2K0Ma92yDStMwU/xRksmFConaHTtI4fT430mijUBONpeYpY/u5rxqWk5VMaldlvHdgjMp96cgw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB2702

On 18.06.2021 14:40, Jan Beulich wrote:
> On 18.06.2021 14:03, Andrew Cooper wrote:
>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>> diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
>>> index 7e2bb09dac..0f8ea163af 100644
>>> --- a/xen/xsm/dummy.h
>>> +++ b/xen/xsm/dummy.h
>>> @@ -9,7 +9,7 @@
>>>   *
>>>   *
>>>   *  Each XSM hook implementing an access check should have its first parameter
>>> - *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
>>> + *  preceded by (or use XSM_DEFAULT_VOID if it has no
>>>   *  arguments). The first non-declaration statement shold be XSM_ASSERT_ACTION
>>>   *  with the expected type of the hook, which will either define or check the
>>>   *  value of action.
>>> @@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
>>>   * xsm_default_t argument available, so the value from the assertion is used to
>>>   * initialize the variable.
>>>   */
>>> -#define XSM_INLINE __maybe_unused
>>
>> Nothing in a header file should ever need __maybe_unused. Now that the
>> !XSM case has been untangled, I think this can be dropped, rather than
>> expanded inline.
>>
>>> -
>>> -#define XSM_DEFAULT_ARG /* */
>>>  #define XSM_DEFAULT_VOID void
>>
>> XSM_DEFAULT_VOID needs to disappear too. I can't see what it is even
>> doing before the cleanup, because if it is missing, you'll fail the
>> compile for using K&R style functions.
>
> You need to look at the state before patch 3 to see its purpose. Patch 3
> removed the other variant, and hence the need for this one as well, but
> I think it is reasonable to not clean up everything in one go (unless
> it would mean touching exactly the same code a 2nd time later on).

Albeit, having looked at the patch itself, I agree it should be dropped
here together with XSM_DEFAULT_ARG, of which it is (was) a companion.
But again, all of this provided there is agreement to remove the
top-level XSM option, which I personally don't think is a good idea.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:03:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144595.266113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luE91-0003qq-QG; Fri, 18 Jun 2021 13:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144595.266113; Fri, 18 Jun 2021 13:02:47 +0000
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <c3c5df61-32f8-1ffa-aac4-0c83d620d385@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/5] x86/HVM: wire up multicalls
Message-ID: <511f161f-75b8-f23b-c6ba-cd7afe303760@citrix.com>
Date: Fri, 18 Jun 2021 14:02:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c3c5df61-32f8-1ffa-aac4-0c83d620d385@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 18/06/2021 11:23, Jan Beulich wrote:
> To be able to use them from, in particular, the tool stack, they need to
> be supported for all guest types. Note that xc_resource_op() already
> does, so would not work without this on PVH Dom0.

I'm not a fan of multicalls as a concept - they're mostly a layering
violation adding substantial complexity - and frankly, working around a
Linux kernel/user ABI error is a terrible reason to make this change.

But I won't object if it happens to be the least terrible option going.
I accept that there are no good options here.

> @@ -334,6 +336,39 @@ int hvm_hypercall(struct cpu_user_regs *
>      return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>  }
>  
> +enum mc_disposition hvm_do_multicall_call(struct mc_state *state)
> +{
> +    struct vcpu *curr = current;
> +    hypercall_fn_t *func = NULL;
> +
> +    if ( hvm_guest_x86_mode(curr) == 8 )
> +    {
> +        struct multicall_entry *call = &state->call;
> +
> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
> +            func = array_access_nospec(hvm_hypercall_table, call->op).native;
> +        if ( func )
> +            call->result = func(call->args[0], call->args[1], call->args[2],
> +                                call->args[3], call->args[4], call->args[5]);
> +        else
> +            call->result = -ENOSYS;
> +    }
> +    else
> +    {
> +        struct compat_multicall_entry *call = &state->compat_call;
> +
> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
> +            func = array_access_nospec(hvm_hypercall_table, call->op).compat;
> +        if ( func )
> +            call->result = func(call->args[0], call->args[1], call->args[2],
> +                                call->args[3], call->args[4], call->args[5]);
> +        else
> +            call->result = -ENOSYS;
> +    }
> +
> +    return !hvm_get_cpl(curr) ? mc_continue : mc_preempt;

This is ported across from XSA-213, but even for PV guests it was just
defence in depth, IIRC, covering any cases we hadn't spotted which
change privilege.

There is no pagetable accounting in HVM guests to become confused by a
privilege change, and hvm_get_cpl() isn't totally free.  Any kernel
which puts VCPUOP_initialise in a multicall gets to keep all resulting
pieces.

I think this wants to be just "return mc_continue;"

If so, Begrudgingly acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:04:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:04:48 +0000
Subject: Re: [PATCH 0/5] allow xc_domain_maximum_gpfn() to observe full GFN
 value
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Message-ID: <6fc7ff85-efde-bce9-ee44-bf11fb577309@suse.com>
Date: Fri, 18 Jun 2021 15:04:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

As to the title, xc_maximum_ram_page() is similarly affected. Plus,
perhaps less worrying, xc_sharing_{freed,used}_pages().

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:06:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:06:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 162886: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=3e800027016ea4eb19887bf626b46f45fc43fa5d
X-Osstest-Versions-That:
    xtf=5ead491e36af6cb8681fc1278bd36c756ad62ac2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 13:06:20 +0000

flight 162886 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162886/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  3e800027016ea4eb19887bf626b46f45fc43fa5d
baseline version:
 xtf                  5ead491e36af6cb8681fc1278bd36c756ad62ac2

Last test of basis   161978  2021-05-17 10:41:15 Z   32 days
Testing same since   162886  2021-06-17 23:10:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christopher Clark <christopher.w.clark@gmail.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   5ead491..3e80002  3e800027016ea4eb19887bf626b46f45fc43fa5d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:09:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:09:47 +0000
From: Christian Lindig <christian.lindig@citrix.com>
To: Edwin Torok <edvin.torok@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "David
 Scott" <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Topic: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Index: AQHXZC8pa/DYoLcU2065VvJ6EZGJjqsZvgMA
Date: Fri, 18 Jun 2021 13:09:38 +0000
Message-ID: <70BA77FE-045D-4F67-A61E-030CFAACC8B1@citrix.com>
References: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
In-Reply-To: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 38e9881a-c103-49bd-d5a3-08d9325a5434
x-ms-traffictypediagnostic: CO2PR03MB2213:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <CO2PR03MB2213C1225367C061AD357D54F60D9@CO2PR03MB2213.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: Rnm73tac0HwlwSKem7+KFZJ2iE0L9HjyOks6nf6s5NJJtSOt2gEZho0v7a72a5EdPuuSWtunPIS0Xf8XwNOg1QJW6NUNRm+M7SvkglfID0W81aogpaUOSynpVwR42DhhR7h6nt/Tp5QyXqJwWOMNy+XpcK53S7i7UopXT1pM4VQ9u/B0Y/CJ0g7Z3BwtbMVgY8UWgo4VVadymdnA5A06JOiQK6pOHfFz+Aj8HMQQ/E1QvBghnfWGdmMDpQgtTCLzWSMgHThogGX/5Xd3E9HWzM/QGS1/tNtMmpVtSF2wFlWFVLNW8EuS8aItL707W2vl2jozh+wS7ZXqg0dv6PmyrpxL3mHAFcL4HpyPwe/Z8U6xgUm31cGONL0lyN7vVyS0D/3I9TygYZKKMsHWWA7oivGUKbxWE0IMNt7ugLeZrAwF1JKlhYoVMOBnmj+rwsC1KiCum6f/JEGlHCIMSitEnPonmq6Ie0cfQVBXi6U5E4nkAaY01yxf5LR8kFa+30c/IJlRt7ICqS6R4rjd2RtRSJLP6VOWzWx5dQvz4VoscjoEcZYMwmoOUs/W4Zv9v2hpNiyiKZ3mbpQSTiZDyWxrCyUAIPIXgkQ2Foc4TO6kmYUonWJnKBZZYBoaZSaa66B1b5wyGds4SRc72D/VS8tT6A==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MW4PR03MB6380.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(376002)(366004)(396003)(8676002)(54906003)(64756008)(66446008)(36756003)(66556008)(66476007)(76116006)(91956017)(37006003)(6506007)(5660300002)(316002)(66946007)(2906002)(26005)(478600001)(71200400001)(6636002)(8936002)(6862004)(2616005)(66574015)(33656002)(6512007)(186003)(122000001)(86362001)(4326008)(38100700002)(83380400001)(53546011)(44832011)(107886003)(6486002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?UWY2NXY1ejViVnpERUJQdWVvUkp5OUNHQTlJNlVEWWxEamxmTFpHNnkrQzl3?=
 =?utf-8?B?UW40TWFCMDFYZ0Ewb1huVTVGN2RGWWFsQmlaVnN6U1ZzeEswRUo2Zy9PeVRn?=
 =?utf-8?B?cFhkVGdOZjlFcUVRbEIvRWFtL0cyUHpCVEM2MW5vS2ZrRGpDZGNLRVR2REhh?=
 =?utf-8?B?YnFocWFUMUVZUHErMWVVTHFUaUtwSjlJVXRlUEY5UllCdXdpSnllRmZtUWIv?=
 =?utf-8?B?Q25lRVFWcElEMDFJTStUcWpRVG5Ub1Jhc0srMk5ldEVhbUxCNXVWVVF3cndD?=
 =?utf-8?B?a3YrMVo4WjVsOTNWblRMd0VGSWdsSysxSVZBWHJiZHdoVDM5UG41NmN5RUZQ?=
 =?utf-8?B?K2d3ZW5xZ01IMC9qUjlvb3RMNUZsYnRDSzV3NlpjaDJKS0gzRTJyd2lnd1NP?=
 =?utf-8?B?S29URXVXN3FFN2tHZXBjQnQ5aExHUzJWMXlzRHJ5bmdOR2hSMlJoT2hGUzVi?=
 =?utf-8?B?MlEyK1lUSkZWazYxc2pJTTh3cWd0aURoaGppaFcwSVdWMlVBOEJ3VE1NNjJn?=
 =?utf-8?B?US9XY3d2S3dkSE1DZklUYjNPbmdDdXBUczBaS3ZJUFNnRWYvSk4zMlVscVFZ?=
 =?utf-8?B?eGI2c2NhM0hCQTNsYVdBN1J2d05VeFhzRkp2VmJWQ29WWWo2SVE4NXlmWHlK?=
 =?utf-8?B?VEVGYUNlWm55L05meldqZk9TNGx6a3FNZnRrb3A1dkkvY1dpVkVoenRUenAv?=
 =?utf-8?B?ZnQzd0plbS9maUNWeS9yU1J3T2JVemVnTW5iWFk4djd6Y1BnTFlVUzkwWUlG?=
 =?utf-8?B?dmNWOTlLQXg0RlUvRjNUUFd1Q2w2SlgwbnE2WXFrMmhkMW9WMkFMRkpoQncz?=
 =?utf-8?B?VTJjOVlzaVNRYlYwR3RsOWtON3ExRFdUNWVJclgxYThJT1U5WlBSdjIvL21C?=
 =?utf-8?B?OFRZM0luUUdaOUhZQllPV09TT2tsMjlhT3c0T21CTSs5eGM1VE1nM25jTnEz?=
 =?utf-8?B?REFSSldGaEVDQzdyaFluOGw2ZVptdVVYaGpRT2NseHpKM3ZDUWZvOXMwazVv?=
 =?utf-8?B?K0EvYXZmQktXZTBzQjltYjViaFRDRlVZc2RFTmxLYm45bUdFRDBQSUF5bnNS?=
 =?utf-8?B?ZkRmNE9Ndlp2cUZ6dHBWRS9HRnZReGFjUWRRYitERmQ1WVhuZEdybXcwanVn?=
 =?utf-8?B?cjlycDQzZnlpRmIzMDZEMUZ2S3pPa3M2RFdmQzZQVGFsNjhoQmE5SkpuN1U5?=
 =?utf-8?B?WGpObWhFc2RWSHBWVlI2cHRlbjlsVkhWa0tNeHladkhEdkNLcW9Pd0p1T0xi?=
 =?utf-8?B?TzU0YnY0NCtzd0pDclFGWDVVeDAxWG5vUmx5OURxbVRiK1ZjMWpoOFI4ckdv?=
 =?utf-8?B?UlBhb1Vsa21OMUNBeE5qbVY4TWNpM1JqVWZRVkVKbEtPeUh4Z041cENNbmtF?=
 =?utf-8?B?ZERGeFVkdkFuZWh6UzVsOS84c0oyZFpoNG9mdXk3N3VwYUNmS3ZyUFRRNmt6?=
 =?utf-8?B?ZFpiK3hvYnhCam9IV3F0ZEpPODA3ZDRjRTJNL2pNbkhKd3NKRXMrcmRkRWlm?=
 =?utf-8?B?RGJpWEYvbEtVT01rVEtGMmFHc1VidlR3NzZ4UURyZTRacldnRTZMWjFTY3NM?=
 =?utf-8?B?YTdTdXBRd0VsTm5HYXRpNUJFVWF3K21qU0txN3U2VG1vd2tLamJ3c1N5Q2JE?=
 =?utf-8?B?d2NaYnArMGZ1NXZrRHVqaGcwbnBKRWFKQlVBSmZKb2xZNXhYeXl4UllBdHY1?=
 =?utf-8?B?T1FNWHhiZkN5cHczTDFIMDZRRm94c3dpWFBLYnozMFdVR0JNaktTTHlUczUr?=
 =?utf-8?Q?MWsM6wMIIpKDmRB/gXLZtnolHjw3xQfyFT9fBsp?=
Content-Type: multipart/alternative;
	boundary="_000_70BA77FE045D4F67A61E030CFAACC8B1citrixcom_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MW4PR03MB6380.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 38e9881a-c103-49bd-d5a3-08d9325a5434
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 13:09:38.9465
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: tNBxTWuzcVIkTXVu9uLlnkV/CfNiz+mu14g29pjj3mNYcejNCxcdXsuzABKzQ8ofE1DFd4Nk5Nk2SI4pQN/ZMFb6pMyBysSeECruSSSc1d0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO2PR03MB2213
X-OriginatorOrg: citrix.com

--_000_70BA77FE045D4F67A61E030CFAACC8B1citrixcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit


On 18 Jun 2021, at 11:45, Edwin Török <edvin.torok@citrix.com<mailto:edvin.torok@citrix.com>> wrote:

Introduces following functions in Xenctrl and associated types:
get_system_cpu_policy
cpu_policy_to_featureset,
string_of_xen_cpu_policy_index

These are wrappers around the existing C functions in xenctrl.h,
that will be used by xenopsd initially.

-Wno-declaration-after-statement is disabled to allow mixing
declarations and code to simplify writing the stubs
by using variable length arrays on the stack instead of
allocating/freeing memory
(which would require additional error-handling logic).

Signed-off-by: Edwin Török <edvin.torok@citrix.com<mailto:edvin.torok@citrix.com>>
---
tools/ocaml/libs/xc/Makefile        |   2 +-
tools/ocaml/libs/xc/xenctrl.ml      |  37 ++++++
tools/ocaml/libs/xc/xenctrl.mli     |  71 ++++++++++
tools/ocaml/libs/xc/xenctrl_stubs.c | 195 ++++++++++++++++++++++++++++
4 files changed, 304 insertions(+), 1 deletion(-)

Acked-by: Christian Lindig <christian.lindig@citrix.com<mailto:christian.lindig@citrix.com>>


+static CAMLprim value Val_leaves(const xen_cpuid_leaf_t *leaves, uint32_t nr_leaves)
+{
+    CAMLparam0();
+    CAMLlocal1(result);
+    uint32_t i;
+
+    result = caml_alloc(nr_leaves, 0);
+    for (i=0;i<nr_leaves;i++)
+        Store_field(result, i, Val_cpuid_leaf(&leaves[i]));
+
+    CAMLreturn(result);
+}

Is caml_alloc(nr_leaves, 0) the right allocation? The 0 is the tag. There is another instance of this below. What is the type of the returned value from an OCaml perspective?

— C

--_000_70BA77FE045D4F67A61E030CFAACC8B1citrixcom_--


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:12:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144621.266160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEI4-0007A3-L8; Fri, 18 Jun 2021 13:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144621.266160; Fri, 18 Jun 2021 13:12:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEI4-00079w-Gg; Fri, 18 Jun 2021 13:12:08 +0000
Received: by outflank-mailman (input) for mailman id 144621;
 Fri, 18 Jun 2021 13:12:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luEI2-00079o-Pw
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:12:06 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f959973b-9252-4a73-a8d1-c61ea922ee22;
 Fri, 18 Jun 2021 13:12:05 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2052.outbound.protection.outlook.com [104.47.4.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-25-DeaJU68mOImYFa9rDvEjlw-2; Fri, 18 Jun 2021 15:12:03 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4351.eurprd04.prod.outlook.com (2603:10a6:803:49::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 13:12:00 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 13:12:00 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0125.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1a::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Fri, 18 Jun 2021 13:12:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f959973b-9252-4a73-a8d1-c61ea922ee22
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624021924;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sgG07cRJC9rjNIy0sCCs9w4UGZnvYXPjD+nBqdwXsQA=;
	b=lNjBHCxo3T0MV0+gtEBSZRRQYU5hJUvVjyr/p6BvnHhu3o9DHUn5QZld+E1LEevYsaM1MB
	1/imuU/rYclIzmvFbfvKKpAn1FnCU8IGrvv4bnf+QgekNyaxLV7hh5rTZzj43ZPE3WHKK1
	Hzj5Dltx+lc5N3eND7p3Rw7tGr8zzjM=
X-MC-Unique: DeaJU68mOImYFa9rDvEjlw-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f/Yu9DY+9IYIKReECdgEMdzI0by1PEyL5JvAFGYKigAwDmX/0wN1E9nuVZnTXnM1PRxf0NCcgGiOXNof5Ez4MpnclLodJkHyOXRRKxhSCw36WfvMNxYzetBFE86+pPGq90OGbaEvfX+ubaQ1tp2kO6DZCEKa7JkNQFSmaEKZ5a4/OslX/Jl5ncZ1RuabdWIaWrMWqE3Dui4mBS86ovYr4u8x6l2ehrXFVBD5h4L/82U8JXGb7uO3mrI4qIaT8+7Ss5VuTRqWi3Pi6pdfI4waXy9fd6IwNyY98AbXv9U5Y1A9wKdXFxhfJ5TqPzAXuvlvD6InStfR2wj10fVoGL5Jxw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zxmM7mI43q5nKOuc7n+9LSefo/HVyKynhrajXY2Sq2Q=;
 b=JtTOkLT91vrkeKlSb2y+qArWOOU2PDC/6YNX8DNh3qalweXB0IkqE25blLe6dCqHIHoTkzPcnLZ7O6zobV7zGzCLZ0XxPcs/i2JI0/ofgHHSCY/5KSsW4VzXq9tm0gRsjhWt4dYJabDyloFFEEEJNpgBHfnYzUqMJDfV2DetfLA2GnxVWfh7dcWMch9l1BpEL4Y309LVMaZJ7d5/gY1d4meBYauwKTiecXoI2fk13hJ4IgwgrsnKPetR62H6Va5/YtAs+q0ZHEhYFHJ58OJPUOe7PFcI6vFt7meJSwNXkWxFtrCJywKFFCuo6I1wRlWNrhllxMMLLH1nnYwFU2b+bw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 1/5] x86/HVM: wire up multicalls
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <c3c5df61-32f8-1ffa-aac4-0c83d620d385@suse.com>
 <511f161f-75b8-f23b-c6ba-cd7afe303760@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c46915cb-15a4-10ca-a566-b892eb8dc4f1@suse.com>
Date: Fri, 18 Jun 2021 15:11:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <511f161f-75b8-f23b-c6ba-cd7afe303760@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0125.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1a::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 725e831e-8842-4a5c-ab6e-08d9325aa890
X-MS-TrafficTypeDiagnostic: VI1PR04MB4351:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB43514AFDB462D96CBC0AE727B30D9@VI1PR04MB4351.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	rBVUHLKhliHvmjeB2FdXoZgKO6Q/1z79fVv8kTl4k+Ss8HAF7R587xs4sJ4fIQ18gumvHqcu/LDgFSt8OvMYqXK4IMFvWRE3UObsxQvLnOJBUI1NnXOBG45TEWiZFIcXy2PyemXHfgeRUL+AhmXy0TNISehyDId/v8C3AB81igEPA7NactPDGsEOCqkortHw5EtRdlXjI8LEDQ6eWkEwZYNQwy8PXnoDRjYH8Z7uEMH8APPce2k0D/QnS76p+JuI37eSrsz3kVdpvWKYbRMOZvtS8kVvpeedoqUsumMVsx56dppezbgSPvV1yLMD8Ho9aj38G+8Kbe1sy8pBtn5nBIJ+sq1bTqjZc//jk/L1h/MlymaV3m6G6DBVKAu7lRPK3KjGwfjlH4YKTLG7Xuy2uDi2+Z2myhSwLpD+lAGh3GQblY83FUY41vdCTNWvLrhg0QfisGIr6oro1Qi38RDrpxHYyPd9SQDcFprSOIB76MxlfIU2eKd4p5/MsL/mZTjviBQLv7oxU1zkWLPBja1/YH+48teqXtIZgjCQ7Jm9EAnSqhWZ5DhFE4EghMniMYOrhn9QG30eUio+mjaLBfhYpTBk3jZaZf80T6WOo+zpYs7Drlwf5uhn+FArW4UdLrIWrBfPH14kBi/W5Ff+Lf62QOfohL7+5C8eGhWgFIo3ludXn4f9IrFQuxdH85bd8L+JYnZuAnT0lkXyn73A+mljfQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(376002)(396003)(39840400004)(366004)(136003)(5660300002)(38100700002)(956004)(6916009)(8676002)(8936002)(53546011)(16576012)(316002)(66556008)(186003)(66946007)(54906003)(83380400001)(2906002)(66476007)(4326008)(16526019)(2616005)(31696002)(26005)(6486002)(86362001)(31686004)(478600001)(36756003)(156664002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?+rvObwhLpLml0lCuOmFkgDq557DuRtfDq1g/jMxCC58raOLEUMUEEMPEfyv2?=
 =?us-ascii?Q?qIllVR5ABHmQKb4Nc6Ibxi3SyslzRvnuXoMDVig2106TJf3sSbzrU13NZJwk?=
 =?us-ascii?Q?G5m30W3IMIHPg71F3WbO8v/XD5+1q67ROldzOhOICVeOKZMCOuErjVsxU0/C?=
 =?us-ascii?Q?8SIaXBiPYWwYUlP1h/5huMy3K3FroDhDC/1Ck183fgIzqfpvCD/4aaCjmAnm?=
 =?us-ascii?Q?w9oBRJmqfu2CqRlamO3yFsFkepAojzAG6N7dzmT21GysLADQLCEydxj1RR4n?=
 =?us-ascii?Q?CD7kFKD1QL8wgybkjlwzJwVP1zBkOTIjIB/n0/qntL+E/SsfFcFxh/bA6T39?=
 =?us-ascii?Q?ekriPpmYMJFCaF8MlAr6x27FgNdDRMDbkpHF43hfc4Z06ttxj/UVD/DccFb3?=
 =?us-ascii?Q?IJJyddAYoKQ/Bz99y46T6YWveIcRKDsU2+d9FjLjgQ8yGYGFSmPnRPSQDdxa?=
 =?us-ascii?Q?OAsjOhItW0Ij6FtPNqZRsFVLtCBofeTaGrBLYrexjS5/Qh/9ER0lY88JH1ic?=
 =?us-ascii?Q?X9VUOFdxngXMoYSeNIHlX68g9Y4Rh4UTd6soZCRPB4DKNHz5lm3KmrHwM59/?=
 =?us-ascii?Q?jB/aWcxlOS3V3v75rUtLoEVYVxIvW702FnswujX4dEzMdGgv6g46gcqd7noO?=
 =?us-ascii?Q?Yk6rOX4EExKPKBSxtIn+xeCit6Cv4xYjhJdzEBWbkieQxAVqPbM3VwAOjA0M?=
 =?us-ascii?Q?bLyFR54k/a4b1ViVz2EGveRHdLitglvF5o74kW1ycSM5WLkeJ0tw0JAuqSxG?=
 =?us-ascii?Q?KBJc6B0ld0/1eB/Rca2fQHUt9uOBUYOpa0Ve3tBQjdk6RcviDB+MaXjwSHCq?=
 =?us-ascii?Q?Hx3P8n+VfELq7Zr6eqzQvbRwqZnvVquCqHgIawwlMT2J0TaopJ6PTRn0A5jf?=
 =?us-ascii?Q?zda6gMlpYki5BKM8Mdu7ZmaIrtDFG04zDiPYrNbxFjPJm74sL8qLsaDtQuMB?=
 =?us-ascii?Q?hO6B8nWQxMU1QquzmJdPSL8ugmvdcntx2M8MydFIn7gSqzvpst1LKyTPa03T?=
 =?us-ascii?Q?ESofdMUwzE6GABhwBeHYt3amCXwGFXBEbVtJiaYSxxQWNtS417xk/0biwZIS?=
 =?us-ascii?Q?hHVo4QIA0QtBrTkY5bXO7HLefjck3vX2Wg2MBEyVADWJ3iT8qnR9wYM82hGD?=
 =?us-ascii?Q?d/AEPZAE/aAiVcWUwZT3wB+91L1sfcXXBp4bHcIZ5pqsCmrzSx1gTIUGSQe7?=
 =?us-ascii?Q?3zcLBDUsq1xhfFQ1xCzrTvk4mZxob+LkjU09knyx/pAAZ4jHst0EJclYR9W/?=
 =?us-ascii?Q?VPF/wSclpRDlYR9eyW1IQVx6uEc/wjX13YlJr6DK1LWiHC/ndCK2jcHGLVMY?=
 =?us-ascii?Q?Blk/osXOsioOniniiZBBuAnn?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 725e831e-8842-4a5c-ab6e-08d9325aa890
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 13:12:00.7214
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Rgy4NvxXd8lKDc4DyVspKSK4JGKpP11/qlVvAzykL5HRDxLXriTyDilsKRARgOLxuKBHl3J17x5NeJfJ5UleXA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4351

On 18.06.2021 15:02, Andrew Cooper wrote:
> On 18/06/2021 11:23, Jan Beulich wrote:
>> To be able to use them from, in particular, the tool stack, they need to
>> be supported for all guest types. Note that xc_resource_op() already
>> does, so would not work without this on PVH Dom0.
> 
> I'm not a fan of multicalls as a concept - they're mostly a layering
> violation adding substantial complexity - and frankly, working around a
> Linux kernel/user ABI error is a terrible reason to make this change.

While I agree with the latter, I don't think there's much complexity
here, and there are certainly savings in terms of mode switch between
guest and hypervisor when you can batch up arbitrary calls (and not
just sufficiently similar ones with built-in batching).
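[For readers following the archive: the batching described above can be sketched in plain user-space C. This is a hypothetical stand-in, not Xen's actual ABI or code: the entry layout, op numbers, and two-argument ops are invented for illustration. The point is the shape of the mechanism: a multicall hands the hypervisor an array of entries in one hypercall, and each entry is dispatched through a bounds-checked function table, so N operations cost one guest/hypervisor mode switch instead of N.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical multicall entry: an operation number, its arguments,
 * and a slot for the per-call result (as in struct multicall_entry). */
struct mc_entry {
    uint64_t op;
    uint64_t args[6];
    int64_t  result;
};

typedef int64_t mc_fn_t(uint64_t a0, uint64_t a1);

/* Two toy operations standing in for real hypercalls. */
static int64_t op_add(uint64_t a0, uint64_t a1) { return (int64_t)(a0 + a1); }
static int64_t op_mul(uint64_t a0, uint64_t a1) { return (int64_t)(a0 * a1); }

/* Dispatch table indexed by op number, as hvm_hypercall_table is indexed
 * by hypercall number; an out-of-range op or NULL slot yields -ENOSYS,
 * mirroring the patch quoted above. */
static mc_fn_t *const op_table[] = { op_add, op_mul };

/* Process the whole batch in one entry point: with a real hypervisor,
 * this single loop is where the per-call mode switches are saved. */
static void do_multicall(struct mc_entry *calls, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct mc_entry *c = &calls[i];
        mc_fn_t *fn = NULL;

        if (c->op < sizeof(op_table) / sizeof(op_table[0]))
            fn = op_table[c->op];

        c->result = fn ? fn(c->args[0], c->args[1]) : -ENOSYS;
    }
}
```

A failed entry does not abort the batch; each entry carries its own result, which is also how the real interface reports per-call errors back to the guest.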

>> @@ -334,6 +336,39 @@ int hvm_hypercall(struct cpu_user_regs *
>>      return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>>  }
>> 
>> +enum mc_disposition hvm_do_multicall_call(struct mc_state *state)
>> +{
>> +    struct vcpu *curr = current;
>> +    hypercall_fn_t *func = NULL;
>> +
>> +    if ( hvm_guest_x86_mode(curr) == 8 )
>> +    {
>> +        struct multicall_entry *call = &state->call;
>> +
>> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
>> +            func = array_access_nospec(hvm_hypercall_table, call->op).native;
>> +        if ( func )
>> +            call->result = func(call->args[0], call->args[1], call->args[2],
>> +                                call->args[3], call->args[4], call->args[5]);
>> +        else
>> +            call->result = -ENOSYS;
>> +    }
>> +    else
>> +    {
>> +        struct compat_multicall_entry *call = &state->compat_call;
>> +
>> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
>> +            func = array_access_nospec(hvm_hypercall_table, call->op).compat;
>> +        if ( func )
>> +            call->result = func(call->args[0], call->args[1], call->args[2],
>> +                                call->args[3], call->args[4], call->args[5]);
>> +        else
>> +            call->result = -ENOSYS;
>> +    }
>> +
>> +    return !hvm_get_cpl(curr) ? mc_continue : mc_preempt;
> 
> This is ported across from XSA-213, but even for PV guests, it was just
> defence in depth IIRC for any cases we hadn't spotted, changing privilege.
> 
> There is no pagetable accounting in HVM guests to become confused by a
> privilege change, and hvm_get_cpl() isn't totally free.  Any kernel
> which puts VCPUOP_initialise in a multicall gets to keep all resulting
> pieces.
> 
> I think this wants to be just "return mc_continue;"

I had it this way first, but I think the state setting hypercalls
ought to be similarly protected. In fact I did this adjustment at
the last moment before sending, after having looked at Arm code.
If we don't want it here, it ought to go away there as well, and
then also for PV (where then only IRET would need special casing).

> If so, Begrudingly acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, I'll take this for the moment (ignoring the "if so"), but
I'll wait some to see whether the above wants further discussing.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:17:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:17:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144627.266171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEMz-0007pb-8B; Fri, 18 Jun 2021 13:17:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144627.266171; Fri, 18 Jun 2021 13:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEMz-0007pU-4R; Fri, 18 Jun 2021 13:17:13 +0000
Received: by outflank-mailman (input) for mailman id 144627;
 Fri, 18 Jun 2021 13:17:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luEMx-0007pO-Jr
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:17:11 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 646bb160-991e-41f9-8741-a3c454cfa650;
 Fri, 18 Jun 2021 13:17:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 646bb160-991e-41f9-8741-a3c454cfa650
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624022230;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=6fHtGz2JESlVjuScKJTP5Z1l76LA/i3gzCifQWsztXQ=;
  b=XJZ91c/sxXnStMs0g0nD6QWVyJvtgCRARc0JYZAnEwZGGw84p974oNxo
   pkAc5EmprV45JTfcWpqIreXPWG8Une+X8tqlQ5MPqOufBGpYYFh7RRT36
   jZp6iZ3k1SKGiWVeNIH1xw99K5QsbfVb/aAnPJEAmQSDxgqddwJbL+Yi8
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: GmhFe+IPf3xdOPdoHEp4qGgVke3EhlFl7+baJO14VWgJ8QBZcO9kfJlS/tA0AtmiyhE5pRbeAW
 S6CIDEqkivWOrXHx9Cdnxz9c84kGAihY+Uv9AKXf7VAt7vyMSUNES2ZHshOnXnMSeSdz1SfGqQ
 JSf8gl8jWffpzFaAfgQ3adWlQaVLWRBXCndvhtZ1Gc2ztDonv9Ug/OFD33MAyyyWJoWcFLvrW4
 HlXyacnC1tgKHwCVHgNcZuFgSI1+wg4lZ2Fy6LeYPUfIqLinxw+X5m9+ehVYyjrmW0o7n7BX93
 riU=
X-SBRS: 5.1
X-MesageID: 46529273
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:pt2IWKjllDpjItuGpA3vqxFmk3BQX2p13DAbv31ZSRFFG/FwyP
 re/sjzhCWE6wr5BktBpTnZAtjwfZvdnaQFmLX5To3SLDUO2VHYbr2KgrGSvgEIdxeOkdK1kJ
 0QDZSWBeeaMbEYt7e+3ODbKadd/DDvysnB6YixrgYJPGVXguNbnnhE426gYxJLrWJ9dOIE/e
 +nl7B6Tk2bCA8qh6qAdx84tu74zeEjdqiKXTc2QzocrCWehzKh77D3VzKC2A0Fbj9JybA+tU
 DYjg3Q/MyYwrWG4y6Z81WWw4VdmdPnxNcGLteLkNIpJjLljRvtTJh9WoeFoCs+rIiUmREXeZ
 j30lEd1vZImivsl1KO0EDQMs7boWwTAkrZuAalaL3Y0JHErXwBepZ8bMliA2jkAgIbzaNBOe
 RwrjykXtNsfGb9tTW46N7SWx5wkE2o5XIkjO4IlnRaFZATcblLsOUkjQJo+Tg7bWnHAa0cYa
 ZT5fvnlbhrmJKhHjrkl3gqxMbpUmU4Hx+ATERHssuJ0yJOlHQ8y0cD3sQQknoJ6Zp4EvB/lq
 f5G7UtkKsLQt4dbKp7CutEScyrCnbVSRaJNG6JO1zoGKwOJnqIoZ/q57c+4v2sZfUzvdsPcV
 T6IR9lXEsJCgrT4OG1ret2GyH2MSiAtG7Wu7ZjDrBCy/TBrZTQQFm+dGw=
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46529273"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lno6nbYN5oEdfgJ+vsXkMNF57SFUpu61vAMdv1o11qwBXA8xfq+HtXWI2W2dQbZe70/z7tKw4GorgvjhAfVdswjXalgD+rKcvGfkVqFnlXmc5+xO0tf9hNlhKPE3bz7Jv7w9Y1pjeQY77mApSHdoXacn115qNkpzRQGY+vcx1WZXx6ujeayxzJxBVD7sEEOFtCoktUvvX+PsJhTCotGEwI1Di8RJyN1OolWYduXS+p5Azjwlfb3mEBuOY9jgct/z+UHdf5vbFNKBJQYZjzVLi4n2H56uzHxi4lCvK4agNGv9GrwRq8ICf92zQGLQxeSG0naMiPdoxNYYD6t4bia7yQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6fHtGz2JESlVjuScKJTP5Z1l76LA/i3gzCifQWsztXQ=;
 b=EwY/3MyFzG8oA1pWU9e7HK1eoHBX87BgjxvhG0bcZESJDLVPSs0ioOX0cvplfRBDhyzbl3ZSpBrMbxeUUU418gzm49Oa6gJ5GJI7JiA0cgCseEJ7RdM+gtPN5iqxkc6mixJhp73BqmJFqrulF/N0ZQQFrEBfO34wsSWBsySwINSIDm8kHam1cdHlfiZ8/0ZV0RUxrednd3vW26LekD+zIvFmTLsaMaLkZFECPUOcvIhywGCfSnB/Z48gj2e84R8U2UrBxP4fiqdUnxDlJEZrar7NA9n+sfoCXbPh+e9jeekn9/B41tki8JyhbBIV0bW4bMCC6JJAFXkS/QYeSQdH1w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6fHtGz2JESlVjuScKJTP5Z1l76LA/i3gzCifQWsztXQ=;
 b=cBWGsKEYIEMBuL4haxnq3pyDaQX2a5kch+wkBTFdT8MdFV9xFxGwrUGp+iUKBDCa5BQ+Astmu0uh00s2IaLSQeBKBPq9+6wC9557xfAuH6KXgtzpCf9/gHQUJXaSo9tPfL2c2ZyU66c8tbj4JtZNMUEZuVZ2EqakFksHV4qt+bw=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.prg" <xen-devel@lists.xenproject.prg>,
	xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Topic: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Index: AQHXUNy8b62kbX7r3k+nzGZDJK+dZqsZ5r2A
Date: Fri, 18 Jun 2021 13:17:07 +0000
Message-ID: <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 76e60eb2-513d-435b-eb3c-08d9325b5f4d
x-ms-traffictypediagnostic: PH0PR03MB6368:
x-microsoft-antispam-prvs: <PH0PR03MB6368FE28176A0F18F184A1BF990D9@PH0PR03MB6368.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: mHIOe+CFMiT8X0s68W2zUTtor5QEpR0fGyxvRCq9tXuq7HBDe1cDMtPIl8M1gzPyltEBWBGncNgRSHbXyOCqfz8KDQshAk5uh56ri7yPWfmZ67RpD8NkJlcKUGhy7lgKVawx3Un0d2OeDZJx9rvqLKNYNEl5DedwodjO9bAyr+IwZiQ+4jhiFjfiSGK8xLjETWAZ3T+xvHEdL+r9N/AKiWoR049L1zV7dVgDdiwtsfb68KCU7ZGNiXe/K6DqjctfSZF3o7r8xeUIkLHkmkAx2+2Yj/Fj0M9P1+3yKU5YNhDulVojK74p5llnvIO4ZLnVKxlhRnrvmnh2tO541YDNbFWqHRPJRldLjqqHTjo5FMSfYe/SfYFngfYkUYS2dyuClUXJ38TyZ404m0DcbpWI/XiZ5f3Mcjzj6IiIzfCskqUh+YdxzhD+N6nZokL69OLvZVF00SCQK6YJUBhfuKyd1U/i7RWlX5EZVuWGIeLXiB4p4bXtVRbGhNCPpglD+DhD3ULMX+WoryDXkpz03Vw3IoEuSr5FXnq5lOw5Vqv3l6bh/vVewxFwoum2zobHzxlIXIO6k61ENUZFtsylzeQ3hr4kDeOm9sorekxVlpiII/hmYNcbHIAvSYCeLFDIB8XYnpv3vojNYNUWEuDD699Zi3I+cz2ySTHLgF3TpYSquoA=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(396003)(346002)(376002)(366004)(83380400001)(8676002)(53546011)(86362001)(186003)(6506007)(71200400001)(4326008)(36756003)(33656002)(6486002)(55236004)(91956017)(54906003)(76116006)(66946007)(478600001)(66476007)(66556008)(64756008)(5660300002)(38100700002)(122000001)(6512007)(2616005)(26005)(6916009)(316002)(8936002)(66446008)(2906002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?NkNMdjU1d2ZyTWd5bldqT3pPcEpLVEZHN1ZodjFsQ3l0cEp6b3IwYmxDdEpD?=
 =?utf-8?B?R1N2YnVqU1VSeC85aVpEWmJLdlNtc3g0YnZ1MVdFZVhUcE1GWlo3MHZaZXpq?=
 =?utf-8?B?ajhNcUU2RGlLOERxSVBzVkJ6b0kxcGFTMGtKUEFUeVBpN09PTEI3QUU2UUsx?=
 =?utf-8?B?dDRrdHBVby9tUlRxeHhTUnZaZzBMNlQ5TStkN1lvcVMwRTZTa0dFbkNxUGRj?=
 =?utf-8?B?RnZTdTJSUVgvVWEySmdvMXp1ZHd1U1Mzeld2TkZPSzVBUlduODN6QTVKUWZU?=
 =?utf-8?B?c0swQmJEaVo5YjNzNEp3VnFHaG9acG5FbnhFeDIrREhNZkExK0szdGpxcXpp?=
 =?utf-8?B?Z1ZBM0hObGU5OWlSd2FLeG5CQW52eGZ0NkFGNjh6aXgraUpaYVhuK0s4dnFh?=
 =?utf-8?B?bEJxcnMvOU9xYWh4WjVMQzNLOUtwZzU4NGw5bnNWWTJKTXk5NWlhK3NuSTBh?=
 =?utf-8?B?MW5OWTY0MGRWNktZb0lwbHdsNjRWVGFmSEZwaXYzZUZwT2hzRzc4VDRmWnhX?=
 =?utf-8?B?V0YrTGkrYUEwMHZYa2pCZGk0N0t1WHpMR091TS9mRk1xSVNWc1RCT21Ebkk5?=
 =?utf-8?B?WEJadGFyd0F5VE4zSE1ySktGWmZHcVB5TkpPQTJEbGYvNWMzZ1ZsT1NlMkls?=
 =?utf-8?B?N2tVQ0F2MThKeFJFeXdoUHE1ZVR5alJzQUE2M2c3SmR3VEQrRWxjMm53QWwr?=
 =?utf-8?B?bjl4QVI4ejlURFk0Z05Hbnh3MWNvc2ZDeCtrQW9sTnlDU3FDTkhYbUxOMUJi?=
 =?utf-8?B?aW9qOEhHYlVFYUZsNHNZZklVdXVqMGVHQmdiRUpBQjBOQnVBeVMyUFFDYmFG?=
 =?utf-8?B?OUFGWE9objY5aXo3WUwzOGhmMUdnOEgxMXh0d0lyeC8vdEpaRmZVdjFlOVQr?=
 =?utf-8?B?WkJ3RDdaTjhUWDlHTU4rY1JpYkdpWVQ2b2Noa3hRaUFSV21VeFZITVVUUUxR?=
 =?utf-8?B?cmwvL0tDNm56K09NSGRPZ1hFZU8rb1ZzSmIrelVleDZPcDVPRWhWQy9MZEtn?=
 =?utf-8?B?ZERjcUdkL1FiZXFjTEdoTDZrRXdLY0xJZEJWQTJDU2ozTm5PeVltTGxUR0tL?=
 =?utf-8?B?OUFoc3B4d2pod0dnVkpaVGdjSUFSUkNndnh4NUgwRnBseHNnSFE5eTlkNDNM?=
 =?utf-8?B?K2VCckQrblNIekgrakt0WFhyVVlrZ1JuSVBCTnVrenNtNVgzQ1RBNmdUdktx?=
 =?utf-8?B?ckFpcGpRa1o0cStjUzBiNnFydVR6UnRmS0dmMWRZZ0ZQZW5Jd2JtRmZVaWdM?=
 =?utf-8?B?U1FFaXB2djVZVFUzdXk3VjlsUWFjQ3o2enN0cGZNL2VudENHN2R2THRJc1Jx?=
 =?utf-8?B?WTJ0MGpTZmtEVTVGOFhZSVVyYVZqVzRlQWh2ZUxGZ3ZEWUp1TEViMnhCUkZI?=
 =?utf-8?B?M2d4K005dEVsdDRqVStXWldiejBzOFVZRHFQSUEwU29hSGlOc0krRHF2cTlD?=
 =?utf-8?B?eTFWekxHQmUzSDZmM1FTdlN3NVdtRXlFUll3bmpmN3Q3OTNDNmVMa2FxUmFZ?=
 =?utf-8?B?THJxdUFQYWdBcGdSM0FzS3Fsc0JyaFV6dDZNUk5SbnBRV1BvbWI1dGh4OThZ?=
 =?utf-8?B?ZXdWdExlS01SeXlvdllmdVdiNGo4bEllT3Vld3hSbFB6bWNNVk5TRFNnanhi?=
 =?utf-8?B?Z0RJU1ltb2RPWlZYWG9JQ00xbUlKb2ZjVXdwTERhRGxpT0Z1SEx6dmpERDVi?=
 =?utf-8?B?dDdyeUNEalc0QkkyOWJtK3ZWMXZWcFk5S0lOdlFudERyWjhmTlFhWlJDSVE1?=
 =?utf-8?Q?V8pFRm8SvHYqAH5/Pj3swA9tKPVm7xxIHCsrtIl?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <CAFE456E750AD04985AC1D8046DF6B94@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 76e60eb2-513d-435b-eb3c-08d9325b5f4d
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 13:17:07.0487
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: gN444GdOPdDZlQZ02Ldn2RkcshPNjtYQd349ixv1mT8/xwTpLwBiukAOwu6JRUtAl8/0y8CP848VH1mULncg15qO3ikR/P3bb4vn/4YgdyA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6368
X-OriginatorOrg: citrix.com

DQoNCj4gT24gTWF5IDI0LCAyMDIxLCBhdCA5OjM2IFBNLCBOaWNrIFJvc2Jyb29rIDxyb3Nicm9v
a25AZ21haWwuY29tPiB3cm90ZToNCj4gDQo+IEFkZCBzb21lIGxvZ2dpbmcgbWV0aG9kcyB0byBD
b250ZXh0IHRvIHByb3ZpZGUgZWFzeSB1c2Ugb2YgdGhlDQo+IENvbnRlbnh0J3MgeGVudG9vbGxv
Z19sb2dnZXIuIFRoZXNlIGFyZSBub3QgZXhwb3J0ZWQsIGJ1dCB0aGUgTG9nTGV2ZWwNCj4gdHlw
ZSBpcyBzbyB0aGF0IGEgbGF0ZXIgY29tbWl0IGNhbiBhbGxvdyB0aGUgQ29udGV4dCdzIGxvZyBs
ZXZlbCB0byBiZQ0KPiBjb25maWd1cmFibGUuDQo+IA0KPiBCZWN1YXNlIGNnbyBkb2VzIG5vdCBz
dXBwb3J0IGNhbGxpbmcgQyBmdW5jdGlvbnMgd2l0aCB2YXJpYWJsZQ0KPiBhcmd1bWVudHMsIGUu
Zy4geHRsX2xvZywgYWRkIGFuIHh0bF9sb2dfd3JhcCBmdW5jdGlvbiB0byB0aGUgY2dvIHByZWFt
YmxlDQo+IHRoYXQgYWNjZXB0cyBhbiBhbHJlYWR5IGZvcm1hdHRlZCBzdHJpbmcsIGFuZCBoYW5k
bGUgdGhlIGZvcm1hdHRpbmcgaW4NCj4gR28uDQo+IA0KPiBTaWduZWQtb2ZmLWJ5OiBOaWNrIFJv
c2Jyb29rIDxyb3Nicm9va25AYWluZm9zZWMuY29tPg0KDQpMb29rcyBnb29kLiAgT25lIGNvbW1l
bnQ6DQoNCj4gLS0tDQo+IHRvb2xzL2dvbGFuZy94ZW5saWdodC94ZW5saWdodC5nbyB8IDQ1ICsr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysNCj4gMSBmaWxlIGNoYW5nZWQsIDQ1IGluc2Vy
dGlvbnMoKykNCj4gDQo+IGRpZmYgLS1naXQgYS90b29scy9nb2xhbmcveGVubGlnaHQveGVubGln
aHQuZ28gYi90b29scy9nb2xhbmcveGVubGlnaHQveGVubGlnaHQuZ28NCj4gaW5kZXggZmMzZWIw
YmYzZi4uZjY4ZDdiNmU5NyAxMDA2NDQNCj4gLS0tIGEvdG9vbHMvZ29sYW5nL3hlbmxpZ2h0L3hl
bmxpZ2h0LmdvDQo+ICsrKyBiL3Rvb2xzL2dvbGFuZy94ZW5saWdodC94ZW5saWdodC5nbw0KPiBA
QCAtMzIsNiArMzIsMTUgQEAgc3RhdGljIGNvbnN0IGxpYnhsX2NoaWxkcHJvY19ob29rcyBjaGls
ZHByb2NfaG9va3MgPSB7IC5jaGxkb3duZXIgPSBsaWJ4bF9zaWdjaGwNCj4gdm9pZCB4ZW5saWdo
dF9zZXRfY2hsZHByb2MobGlieGxfY3R4ICpjdHgpIHsNCj4gCWxpYnhsX2NoaWxkcHJvY19zZXRt
b2RlKGN0eCwgJmNoaWxkcHJvY19ob29rcywgTlVMTCk7DQo+IH0NCj4gKw0KPiArdm9pZCB4dGxf
bG9nX3dyYXAoc3RydWN0IHhlbnRvb2xsb2dfbG9nZ2VyICpsb2dnZXIsDQo+ICsJCSAgeGVudG9v
bGxvZ19sZXZlbCBsZXZlbCwNCj4gKwkJICBpbnQgZXJybm92YWwsDQo+ICsJCSAgY29uc3QgY2hh
ciAqY29udGV4dCwNCj4gKwkJICBjb25zdCBjaGFyICptc2cpDQo+ICt7DQo+ICsgICAgeHRsX2xv
Zyhsb2dnZXIsIGxldmVsLCBlcnJub3ZhbCwgY29udGV4dCwgIiVzIiwgbXNnKTsNCj4gK30NCj4g
Ki8NCj4gaW1wb3J0ICJDIg0KPiANCj4gQEAgLTE5Miw2ICsyMDEsNDIgQEAgZnVuYyAoY3R4ICpD
b250ZXh0KSBDbG9zZSgpIGVycm9yIHsNCj4gCXJldHVybiBuaWwNCj4gfQ0KPiANCj4gKy8vIExv
Z0xldmVsIHJlcHJlc2VudHMgYW4geGVudG9vbGxvZ19sZXZlbCwgYW5kIGNhbiBiZSB1c2VkIHRv
IGNvbmZpZ3JlIHRoZSBsb2cNCj4gKy8vIGxldmVsIG9mIGEgQ29udGV4dCdzIGxvZ2dlci4NCj4g
K3R5cGUgTG9nTGV2ZWwgaW50DQo+ICsNCj4gK2NvbnN0ICgNCj4gKwkvL0xvZ0xldmVsTm9uZSAg
ICAgTG9nTGV2ZWwgPSBDLlhUTF9OT05FDQoNCldoeSBhcmUgd2Ugbm90IGRlZmluaW5nIHRoaXMg
b25lPyAgRG9u4oCZdCB3ZSB3YW50IHRvIGJlIGFibGUgdG8gZGlzYWJsZSBsb2dnaW5nIGVudGly
ZWx5Pw0KDQo+ICsJTG9nTGV2ZWxEZWJ1ZyAgICBMb2dMZXZlbCA9IEMuWFRMX0RFQlVHDQo+ICsJ
TG9nTGV2ZWxWZXJib3NlICBMb2dMZXZlbCA9IEMuWFRMX1ZFUkJPU0UNCj4gKwlMb2dMZXZlbERl
dGFpbCAgIExvZ0xldmVsID0gQy5YVExfREVUQUlMDQo+ICsJTG9nTGV2ZWxQcm9ncmVzcyBMb2dM
ZXZlbCA9IEMuWFRMX1BST0dSRVNTDQo+ICsJTG9nTGV2ZWxJbmZvICAgICBMb2dMZXZlbCA9IEMu
WFRMX0lORk8NCj4gKwlMb2dMZXZlbE5vdGljZSAgIExvZ0xldmVsID0gQy5YVExfTk9USUNFDQo+
ICsJTG9nTGV2ZWxXYXJuICAgICBMb2dMZXZlbCA9IEMuWFRMX1dBUk4NCj4gKwlMb2dMZXZlbEVy
cm9yICAgIExvZ0xldmVsID0gQy5YVExfRVJST1INCj4gKwlMb2dMZXZlbENyaXRpY2FsIExvZ0xl
dmVsID0gQy5YVExfQ1JJVElDQUwNCj4gKwkvL0xvZ0xldmVsTnVtTGV2ZWxzIExvZ0xldmVsID0g
Qy5YVExfTlVNX0xFVkVMUw0KPiArKQ0KPiArDQo+ICtmdW5jIChjdHggKkNvbnRleHQpIGxvZyhs
dmwgTG9nTGV2ZWwsIGVycm5vdmFsIGludCwgZm9ybWF0IHN0cmluZywgYSAuLi5pbnRlcmZhY2V7
fSkgew0KPiArCW1zZyA6PSBDLkNTdHJpbmcoZm10LlNwcmludGYoZm9ybWF0LCBhLi4uKSkNCj4g
KwlkZWZlciBDLmZyZWUodW5zYWZlLlBvaW50ZXIobXNnKSkNCj4gKwljb250ZXh0IDo9IEMuQ1N0
cmluZygieGVubGlnaHQiKQ0KPiArCWRlZmVyIEMuZnJlZSh1bnNhZmUuUG9pbnRlcihjb250ZXh0
KSkNCg0KSG1tLCBhbGxvY2F0aW5nIGFuZCBmcmVlaW5nIGEgZml4ZWQgc3RyaW5nIGV2ZXJ5IHRp
bWUgc2VlbXMgcHJldHR5IHdhc3RlZnVsLiAgV291bGQgaXQgbWFrZSBtb3JlIHNlbnNlIHRvIGVp
dGhlciB1c2UgYSBzdGF0aWMgQyBzdHJpbmcgaW4gdGhlIENHbyBjb2RlIGF0IHRoZSB0b3AgaW5z
dGVhZD8gIE9yIGlmIG5vdCwgdG8gbWFrZSBjb250ZXh0IGEgZ2xvYmFsIHZhcmlhYmxlIHdlIGFs
bG9jYXRlIG9uY2UgYXQgdGhlIHBhY2thZ2UgbGV2ZWwgYW5kIHJlLXVzZT8NCg0KQWxzbywgaXMg
4oCYeGVubGlnaHTigJkgaW5mb3JtYXRpdmUgZW5vdWdoPyAgSSBoYXZlbuKAmXQgbG9va2VkIGF0
IHRoZSBvdGhlciDigJxjb250ZXh04oCdIHN0cmluZ3M7IHdvdWxkIOKAnGdvLXhlbmxpZ2h04oCd
IG9yIHNvbWV0aGluZyBiZSBiZXR0ZXI/DQoNCj4gKw0KPiArCUMueHRsX2xvZ193cmFwKCgqQy54
ZW50b29sbG9nX2xvZ2dlcikodW5zYWZlLlBvaW50ZXIoY3R4LmxvZ2dlcikpLA0KPiArCQlDLnhl
bnRvb2xsb2dfbGV2ZWwobHZsKSwgQy5pbnQoZXJybm92YWwpLCBjb250ZXh0LCBtc2cpDQo+ICt9
DQoNCkkgdGhpbmsgd2Ugd2FudCB0byBtYWtlIGl0IHBvc3NpYmxlIGxvbmctdGVybSB0byBjb25m
aWd1cmUgeW91ciBvd24gbG9nZ2VyIG9yIGhhdmUgbm8gbG9nZ2VyIGF0IGFsbDsgc28gbWF5YmUg
d2Ugc2hvdWxkIGFkZCBhIGBpZiAoY3R4LmxvZ2dlciA9PSBuaWwpIHJldHVybjtgIGF0IHRoZW4g
dG9wPw0KDQpPdGhlciB0aGFuIHRoYXQgbG9va3MgZ29vZCwgdGhhbmtzIQ0KDQogLUdlb3JnZQ==


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:17:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:17:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144629.266182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luENE-0008Ci-Hd; Fri, 18 Jun 2021 13:17:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144629.266182; Fri, 18 Jun 2021 13:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luENE-0008Cb-EQ; Fri, 18 Jun 2021 13:17:28 +0000
Received: by outflank-mailman (input) for mailman id 144629;
 Fri, 18 Jun 2021 13:17:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luEND-00088m-RE
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:17:27 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ddfd478-9cc5-4fb6-a6ab-d05a34cabf36;
 Fri, 18 Jun 2021 13:17:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ddfd478-9cc5-4fb6-a6ab-d05a34cabf36
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624022241;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=1fnVB9OmlOEYBx41D1wGL13OYZu+H0R7LN143SZik0Y=;
  b=dy8LJD5daA9Kj1feinTDzCxrnAmvFm0JEhXTRbe6dTWQyzE2D9SGFmrz
   wHSDUL6CW5EPdJxqoTWJpThchjgldGxJy3UJ7Xb5K3SE7FbN9BUBk/ukb
   5iUIKOZsfjwG8wH3WBUaE0RnCVlkMlb9Ll+dOY3ufPmvr34WWU8BOG476
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: P1x3jPSbjH34GO9UvPIjlTfDqkv6kTbh36vrNwG9Ec55er2oxOTzZQ7d9ACD89hfuvlF9tAmQO
 +kTQghakBYIjRQmlNk5GAjkg0JKFINRfU00ElJhBVs8Px9a1x4/tTqULNHVZdO+0psLrlz8n9L
 QnATpjk+DKV69a06cig/KgmVvvnnGuqYq4uo+hE27B8toE0TxS8A5mq00TM/9RiJGXhKgohz7H
 6qjjlZ61cWboGYp1Bw5u6a2dyscVs0GpbCAPGBwUgXmk0J9qsPGXvzoJSSu5WlLPoCwCwdkYHr
 Izk=
X-SBRS: 5.1
X-MesageID: 46458779
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:ZBDRrqAnr8Sxj5jlHehOsceALOsnbusQ8zAXPh9KJiC9I/b1qy
 nxppkmPH/P6Qr4WBkb6LS90dq7MAzhHPlOkPUs1NaZLXTbUQ6TQr2KgrGSuwEIdxeOkNK1kJ
 0QCZSWa+eAfmSS7/yKmTVQeuxIqLLskNHK9JLjJjVWPGZXgslbnndE422gYy9LrWd9dP8E/d
 anl7F6T23KQwVnUi33PAhLY8Hz4/nw0L72ax8PABAqrCGIkDOT8bb/VzyVxA0XXT9jyaortT
 GtqX252oyT99WAjjPM3W7a6Jpb3PPn19t4HcSJzuwYMC/lhAqEbJloH5eCoDc2iuey70tCqq
 iDnz4Qe+BIr1/BdGC8phXgnyP61iw11nPkwViExVP+vM3QXlsBeoh8rLMcViGcx1srvdl63q
 4O9XmerYBrARTJmzm4z8TUVittilG/rRMZ4K0uZkRkIM8jgYJq3MsiFBs/KuZHIMu60vFmLA
 BWNrCY2B4MGmnqNkww1wJUsa6RtndaJGbNfqFNgL3M79D69EoJhnfw//Zv6UvowqhNAKWs19
 60RpiAq4s+OPP+TZgNSdvpEvHHRlAkf3r3QSqvyAPcZd860jT22sXK3Ik=
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46458779"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MR3K6KArAKq1IatGyKoG/ywxe33lZ4YZhBmA8KHk3CuW9th92Mjtlm/ayjO257s0haWDku6ZD62LGg/9e0itJ5RNpHFXaXgO25DQzxNLnhKfOIUI68abh6EqSwz/UJROR7tUe4OJOyXmwBrtT2AbTv6iwl9w6QXxGAmARRJJpy/XFnMBj6LM+DJVwxfXG608px7c1/Qg6mFbpuaLHDEnCgc6uLoLAPbnQCqAG8NHgV76y2M6KojWxMI5+xGf5KxfaXs4e7eHVjCZ4ac8rcra0B+XLAhXmVCcVpnFxPoskE25AWekw0oM7oVTOyd/is8yxFu4KYlb973o8cNdWK9asg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Bg/jbZ8J1nK+vnOc0wVqtFWyFy4INKNXEvhwpVRBqOE=;
 b=C/Eq5MpL8smcc1i7zqFTggC5j2Ow/u/Kw2uc/r901UwLnSoCzaYRlyH8J01ev42f4ljeZcLcunLiaFZ360ubjdann1BdP1l57W7jOeLg8w5XBGK4vOFJIfgQuNjQCeLpB5GE2VzaSWvqt0ExWWVJcj8yzLQF9SIfgVUOKI3F31fkcrckat5x04sVg/bT58E854ykvhaOlc1hp1JHkVmnN8NXxgDDMo0xYYGYcjjLzCP8VTRfjghCExfzBluhVfol31OLLzVb9DrNDgJ5nyQgFSOz0DLVo5lw6eMsGf0MBx1Qj1jpzZ+k3sZhlXt7Z94eTvKi+a9EyQSEHhq5Gs4DNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Bg/jbZ8J1nK+vnOc0wVqtFWyFy4INKNXEvhwpVRBqOE=;
 b=SINb94AVO/hKchW6w6xX3lANSfRelxoGKXr6gmJOirDQjz8Ht8nyk5RS3FAd0WdZE/URhecOF8QyI1AlwiTTpFtymX1pTVw0aQaERxmOmIfuMoX148N1ZKXUC6g7GbW9Wr0QkwPNdQyHo1R+nFFRuQJIqLzFsuxu3klp0yafEu4=
Subject: Re: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4eb5d3fb-db71-5981-e6f5-0503ff896fd9@citrix.com>
Date: Fri, 18 Jun 2021 14:17:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0101.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c9c4dbbe-76d9-48f9-5e96-08d9325b6592
X-MS-TrafficTypeDiagnostic: BYAPR03MB3990:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3990C0326A16B8328D5567F0BA0D9@BYAPR03MB3990.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ArCEi2vY/TL2UabBV30rv1B/RYiH30z7haHDS//guVU/dA0OXsi6vKO+ZX7DSZO/zNI9YEwdF8Ua/lUrvpCBoyyLNfV9dEKIgkLT3ydfkfXrfe8MxoTRHbfu+GwEm/Dhgp/AXXTkDP6L4OCj+X9tbnbQF2GKjT5G1TrUnVYxZq8++QbX1ISkcQq2UHxQqTKpOMderVZODenZLXe263ZQB9fuK3qNf6Ds3qUuLySA5MuwxJYRKkh6ru2gxpj0098eqPUu4KdErmXXm4muvoIOJ73NW05qPapnLGT3QyMVYTXNKDi8iqOvtTa9mGknjYD5yMekLmT04gzThhsFKXUhbQMVIVR3LTm7BQLNGsLlMd65Dv3UNgiEpNJI4QnKvzBKYzYGn4m5e0SvDTxkVxlvlz8mmbXj6y1TWkjvhVYU5moR7rULHRQYHfvN3srPRpVGeUbM361nMQiXjOFNpew5dOjFZgEynf9GVwYqceXYj2ig+NIwdNoxSG+hrkQjJAuqU2tsmi87yp2XXmzgJ4N0CzuLQuKWmAvk5nsYOuf/5BO82terJsG2tt+YNtHnN0BsmMaWui8Av6AZrrWw5DTtVbJA6j33WThjHlTVd9cx/kYNiUy1WBwTHJjZNuc+iXPFChcQE+t0YEi81JjVs334SXa1S8HzrmSksEFUIds1re4cULVwBslDNy5iQBm1jFghcdzarH5NTSugkbtjRBkkPq1V0hEu95T0oIIQIbfohOTpGt9Fxy1DwcpGipgqFPH/wVE8gtBVrdBnZ96isSbB8qSnG/Pcr8WxPScxBCRQPtpbX3e0BjTLufA687TJxggC
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39860400002)(366004)(136003)(346002)(396003)(16576012)(2616005)(31696002)(956004)(54906003)(316002)(26005)(478600001)(966005)(8676002)(31686004)(2906002)(36756003)(186003)(4744005)(16526019)(66556008)(66476007)(86362001)(4326008)(5660300002)(38100700002)(6486002)(8936002)(53546011)(6666004)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?aUxZeExMdmc2a1drQnk0MUQ1UHEwbytMN2Q3WEZwbWtGMlkrN0Z6d1lQY2VY?=
 =?utf-8?B?cUJ4Mmk2UmpPWGkyc0ZwK096eExUWE4wWUNRYTJmeHBqWnpOaU1RNjJtOEhu?=
 =?utf-8?B?UnFFZVhIVk0yQTN2Zy9OMGFOL2IvSW9EM1JHWnpiNmZMTWNhS1VydXlmWHZx?=
 =?utf-8?B?cTFyakRFd3dPSEIzU05LM3p0K2IrczROL2krY1hPYWVBNTZaUVJzS3dxZ1RK?=
 =?utf-8?B?Q2cwU2JlTWdYR0FoZVdSUEZTbllKMGY3TGR0RE01dmVOOTRGWGNXYk5IMVpI?=
 =?utf-8?B?UERBMGZBdkwxRmt2TEd0b1VXWTF6bjFZNWVDTHdZUElBT0hqQ3JBaS8zYm01?=
 =?utf-8?B?eWRsaG95R3ZMMTZ3Ny8xNSsxWndxTkZQWWZkS2RFc3JzL1ZlcTF2TWlnSkNk?=
 =?utf-8?B?SnJtc3ZZeGR1cVlDY2luNUZnQ2MvNFg4REpONTZ4WktBak5WSDhrcmFscnY3?=
 =?utf-8?B?VnN2Sm50cWVNU1p2dmkxNkZGOUZ4UTVCVStnZndMa1R6WmNoc2NsZ29iaVZF?=
 =?utf-8?B?cWlUTFpTdE5jd3lqM2UvK2l6Y0xUQUZpdmtCUTBUUk0wSVkyeEFzMlZpcVJZ?=
 =?utf-8?B?dTNvRzh4Q0p5Rk9maEtTVjhreFdEQkJLbWFhbXdvdGNDT2RyTUtjSURETlNS?=
 =?utf-8?B?VXRKSDZRR2Y3THNON1dwRmIyWU1CQWZCL1p1QnltRHBtaVR6S1dFbVNtUFFv?=
 =?utf-8?B?c1pOK3RUSzdTUEFST0psRXFqYzdzZmEyRHNqY29BNnVDR1hrTTFUZDVQZzR3?=
 =?utf-8?B?aVRRc1ZPMTZFVkoyWmVrWWlEckpDaTVnQUxzdHFvbkM3Z0tlOFZVZ0tmTGJ3?=
 =?utf-8?B?WnNiK2M1MmhVRlU3c1ZYOHAvYUhDRTVWRmluckFDOG9GTHg3aDl2MlJId21s?=
 =?utf-8?B?SlU5TkVSNWpwSFk3cElyNzFWcVVLVDl4bDJjbjRXWC9SSnA1dmpQdGp3ZDhV?=
 =?utf-8?B?YjVqTytrL0hyZ1d2QTdNTkFkODY4M1VhdjJsTThaUkJ5T1FEMmt4MjFZT3Ns?=
 =?utf-8?B?T09QblFSRnZrU2tNOVJZUDF5TnZaeFFENE9jeXliQUNVbmNyY2t5QUwvSXhK?=
 =?utf-8?B?d1lWcktCTDk1U2pQV1JzMHhQRjBMOUpRNGIwSjRpUitNU0xzSmxuUW9NZjh1?=
 =?utf-8?B?cmpxa2Y0bmdJOWZOUGhVRWxqMzZpTjNrb0VCUzFGamRWd2tMZExKb0hRdkVu?=
 =?utf-8?B?WjZIbW1FaEJGc1Vhb0Uxb3c4bVI0L1FHUmhKVCszM1pwZU1PSFpBNU1kYjJZ?=
 =?utf-8?B?ZGpsQ0VPbFpINTJTNTN5SHM4MG9WeFRQaUYrMEpsaGJJanZnSkNRUEJjcFI2?=
 =?utf-8?B?cUlWc2orZkcwU0pQenRDc1lxbGRWMkZvRnNMR0tMSm85Qm9WcmVBRWFtcHZC?=
 =?utf-8?B?NWNMWkV6d2x3UTBXSFEzZ0N4RnZHWnBIRExsRnUxLytHWm95cmNDaXprcDJ5?=
 =?utf-8?B?alRMbHdWTzkvcHpCbDUwcEZPM1N3U2VTeDhmL3ZEb2JvWHB1UUQ2eFRpT0RB?=
 =?utf-8?B?UXAzYnI2LzBTbnA3aGFoMC9qOHY0MGJHdTdGN2NEWDJOR3U0cWJJdTMxMytG?=
 =?utf-8?B?NjA4S09mbm5YSHNwaWw3ZjJkQ2RXK2FnaDFmbnFyRTZBTWhWSlkzWDdWNFI1?=
 =?utf-8?B?UGJpM203SndUeTJLeEM2bkpyL2crMW9IVkxHQUNWOFduWlFpcTZkVUhyOGln?=
 =?utf-8?B?NFBCRXZCMTIrbnFFQkdpSlNCM1JYOWJ4SzRUSms2VUVIL2tlRlM2UnZQTVJp?=
 =?utf-8?Q?fUh3+5bBp4HvqZc6q8S0qPcahnKIcj04aRXSkY1?=
X-MS-Exchange-CrossTenant-Network-Message-Id: c9c4dbbe-76d9-48f9-5e96-08d9325b6592
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 13:17:17.8129
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1MqBRSXA6g5TfQxpI6Ka1RzM8AWHy/sjoff+iA1IJrMIAeJ1IkRQb2klNM8ZiNhPYK3psZ1AjOT4PNKgExhz1JNMqaWCT4G8KG+zd1q8Dj8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3990
X-OriginatorOrg: citrix.com

On 18/06/2021 11:45, Edwin Török wrote:
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index d05d7bb30e..4a230de8b7 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -34,6 +34,9 @@
>  #include <xenctrl.h>
>  #include <xen-tools/libs.h>
>  
> +#include <xen/lib/x86/cpuid.h>
> +#include <xen/lib/x86/msr.h>

https://gitlab.com/xen-project/patchew/xen/-/jobs/1358403495

CI says no.  This needs to be behind a suitable ifdef for non-x86 builds.

(I've not looked at the rest of the patch yet.  I'll get around to it at
some point.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:21:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:21:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144637.266193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luERU-0001SZ-8K; Fri, 18 Jun 2021 13:21:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144637.266193; Fri, 18 Jun 2021 13:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luERU-0001SS-4z; Fri, 18 Jun 2021 13:21:52 +0000
Received: by outflank-mailman (input) for mailman id 144637;
 Fri, 18 Jun 2021 13:21:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luERT-0001SM-4t
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:21:51 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf25ae9a-eb27-4237-b426-34f271b611ad;
 Fri, 18 Jun 2021 13:21:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf25ae9a-eb27-4237-b426-34f271b611ad
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624022510;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=ZTGNRY6RzNc22ivIBlxCdmgY7xbLd7ZEvjrlaUuZ+Kc=;
  b=BtaWEPhVKZa8a8EukgqKRmOeuWV/GD/fw32JjiW1vimHCygu5sgI5a4m
   SCGWMsVfDBcy9eF1TPUq68SQRmpXeylztnxzGxtgge4lcn7FKSXV6i6fr
   uJjfSh3V7sxPJoTPJIgmUXek4kJURKFMPHXb5C9f2+fF17L0r1nJ0L3Yp
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: kSEflNvjIiGg/SfjPtCIj1IY0d89ASSG5ROKnbS0cyzL/EL0xYhcH5dC3/3mHiQVhHDTuMUXc2
 HUTSTo8VFxfX540yCsAVllqlwZ3fUp+4Zi4uf3F/41CoELgw6XMepOXVXoAB2W32/XysW8XQJ0
 evtKSId4SnS2tubclcaIpKv23z+jNWhDkYRV+i3ZPsKJNJZHIuS681BB99uOfudH82eQddyglq
 06ph5iqifnNStoQm1soIy1xZxrLubwb0bCt/YOLY5juihzj7FnHUJ9rhsIUHrm3Wb2ZNO/TFVq
 VUk=
X-SBRS: 5.1
X-MesageID: 46444919
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:2smU8agr/BSVyuQdrGKAuK8pD3BQX2p13DAbv31ZSRFFG/FwyP
 rBoB1L73DJYWgqNE3IwerwRJVpQRvnhPpICRF4B8bjYOCUghrWEGgE1/qg/9SAIVy+ygc578
 ZdmsdFeaXN5DRB/KTHCUyDYqsdKbq8geKVbIXlvgxQpGhRAskKhWoYe2Wm+w9NNXN77PICZc
 ahD6F81l2dkAEsH72G7w4+Lo7+TrPw5ffbSC9DIyRixBiFjDuu5rK/OQOfxA0iXzRGxqpn2X
 TZkiTij5/T9s2T+1v57Sv+/p5WkNzuxp9oH8qXkPUYLT3ql0KBeJlhYbufpzo4ydvfrGrC0e
 O85CvIDf4Dsk85TVvF+ScFHDOQiwrG3kWSj2NwR0GT+/ARCghKVvapzrgpDCcxo3BQze2Ulp
 g7gF5x/qAnfi/ojWDz4cPFWAptkVfxqX0+kfQLh3gaSocGbqRNxLZvsX+8gP87bVLHAa0cYa
 JT5fvnlbxrmJKhHgbkl3gqxMbpUmU4Hx+ATERHssuJ0yJOlHQ8y0cD3sQQknoJ6Zp4EvB/lq
 v5G7UtkKsLQt4dbKp7CutEScyrCnbVSRaJNG6JO1zoGKwOJnqIoZ/q57c+4v2sZfUzvdcPcV
 T6IRtlXEsJCgzT4OG1rel2GyH2MSyAtG7Wu7RjDrBCy8rBrZTQQF++dGw=
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46444919"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JC1HKCZhhE2dIq64w/tVb1UT70USU/rLyfq1a2HH4Vd0ItfVTjjk3bb3XyYXf52XlaiwZQBm1zCZAdKtEZVFGwTg+7fXXkJziAKb0tgd371Q3b78mrMq2NGKhCWWgtBNr1JBvxSqaZf3Swe4fEbaqy78O4pUSSvjai2nRneBPYSC2cF3fYiMrnPTc/77PtC6tvS6lv1aksdeGVChz/Kqf4tPQ8z+e3b/Pm1V7tBl6iF/COsi0on6d5GrzMWpFfpTdkqiKkcDy0KTiFA47O7xaN4EHDGgRP3Tnc3wNM4XSk9ASovoXoeHc/Yy91yRBlFEKqXo9rZDMzLF3XKjH7tlrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZTGNRY6RzNc22ivIBlxCdmgY7xbLd7ZEvjrlaUuZ+Kc=;
 b=mXJWwJUETDc3bCckJCfHiEtULa835uYUjmWUdIIkQ0Tsdn1WQWuJge5dRdbckOFU7QffpeffBJheR75PzGkVCdbuC8kaO4/7TOY1PkH6T/d428XDiTTqoBeX+Oow1D9so4ErhzkDw7vb2O6u+yEtOkK9AEQmpSRGHQP/xx/KBux6iX8hjB1d5Uwa3CLw96817TkYvNoa6cJl2BlZv8pVj8D+OgzbGijsvBioWaQDXlpe0Pd8/nS5fWUWAJc2NYGnJBe0fDBkURvhtFX4eVuC1D4mss5ZMTQVZ/H3MjCcob3hv7iAdVIgjTV1AOXFycGaUbDxKcR7AqLXLAkCxmqJ0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZTGNRY6RzNc22ivIBlxCdmgY7xbLd7ZEvjrlaUuZ+Kc=;
 b=mVDSW3gYuQUKjTf1ucLM4KF+Y16h3XcFtQ0mVsZlmsvZdJMRO6xhMcuh4efthi+YFu5Hk8FOjd6/3v8wZvw8z3PJDNSHHcRY8nC0DRTG02uBuVlXI6BE0xUzDDH2WUSUnilukLfae08yvHuEu+UKMX9pem0+w3bepVc2cDuyxY4=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Topic: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Index: AQHXUNy8b62kbX7r3k+nzGZDJK+dZqsZ5r2AgAABRgA=
Date: Fri, 18 Jun 2021 13:21:40 +0000
Message-ID: <E74C0DF9-3EA4-4B79-8CE4-02F00EC9875C@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
 <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
In-Reply-To: <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3d827b23-e350-4b79-2d24-08d9325c0248
x-ms-traffictypediagnostic: PH0PR03MB6313:
x-microsoft-antispam-prvs: <PH0PR03MB63137C4E4CE92ACF57F2F3C6990D9@PH0PR03MB6313.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:449;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: gvhct0ZFsw9db4PDNEGAx6Kf2bkYxWF+gOxDItAzXlxYxnE79PCrKK1eQZliz+1Tlmj9QF99mFJq/XN3P/8V/ZHIbuVzXAuaCTmuqnS/2KVqj8Csn8q3u39oCul3I/Z++ujlvDQN0ieQ7ji9WAMVbxQn6hlBi4wM8pfx3BjzpIlfEudDoziAQ6VQaHq2rSzEAPmx8x6ZE5S0fakVPEhh5Wk1p3hUaQiywg5aA32E83OFfmfUMb32TUC0coZVKvYOkyaALe0FatJr9+fnGdimUSiQEOOunp7wQ5zsptKfQOxO5j604TiR1hcoNAbLYEKujR4fxNxBTi/li9a4QXq5MVThH6iDVFPqoCDV6g83kqRxpGGeMUYVohR995463C/Kc3Uzo+9nOhGV8CpdGDJ1ohy4l79bOi+I663rl5KdblgzN9k2619PjL7SE2Q+9pV7abOxssjLRmF7cxj9z1+isIKVy/PBBYBTXWclyHimJRFRvxNzv0iVlJ20cIU1BDf11dAv/6mMEvmUlCYslumI6gMeJZA7S5bGGnA02n+2HFVMkWCnpT2z/wpDrjT0BpfzMSvjOYjT8qxiuFnllYLAC7nsTyxZuCCPMTVOklSncwNK9E2yYfF31rG3jp7Ud65djBTAAvFIO97KvIlabAbtXg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E8B832458E4B174B9BA2C3325083B906@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d827b23-e350-4b79-2d24-08d9325c0248
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 13:21:40.4866
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 1QMN3hGQfBzC2zvitSfTxy3vRSu4Day175t8GXCYcLIzbhxDdXtzHQ+cxlR5b/qHllbQDgNQtpna1SJm5aeADELVS6yTc4L0u6Y94uBl0E0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6313
X-OriginatorOrg: citrix.com

> On Jun 18, 2021, at 2:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
>> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>
>> Add some logging methods to Context to provide easy use of the
>> Contenxt's xentoollog_logger. These are not exported, but the LogLevel
>> type is so that a later commit can allow the Context's log level to be
>> configurable.
>>
>> Becuase cgo does not support calling C functions with variable
>> arguments, e.g. xtl_log, add an xtl_log_wrap function to the cgo preamble
>> that accepts an already formatted string, and handle the formatting in
>> Go.
>>
>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>
> Looks good.  One comment:

Er, sorry, turns out I had rather more than one comment.  Here's one more:

Is there any particular reason not to export the Ctx.Log[X]() functions?

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:27:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144644.266204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEWR-00028M-Ri; Fri, 18 Jun 2021 13:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144644.266204; Fri, 18 Jun 2021 13:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEWR-00028F-Oh; Fri, 18 Jun 2021 13:26:59 +0000
Received: by outflank-mailman (input) for mailman id 144644;
 Fri, 18 Jun 2021 13:26:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luEWQ-000289-H1
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:26:58 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 227dcfdc-82bc-46b5-b0d9-90c9e266bfee;
 Fri, 18 Jun 2021 13:26:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 227dcfdc-82bc-46b5-b0d9-90c9e266bfee
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624022817;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=IMuQilzr99wAiIcEeWK1T2Gbu8saI/qjJKN0Nonk28E=;
  b=g/9OFiGhu1zR78KLyqH3EEozvv58OnQCNMGzMzab3/H4ZiFxSv9fr3lc
   8S7fE9zsETXrcaGftiNg+VkEWByrtr2bh9lCXc3ch/8ryPdenoYl0OzOc
   Y8oPZmeJ+1hoTslbJDa2Dm8KjO4BH/MwcYvCut/ThuDFXB6YaEQGaSdy6
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46530052
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46530052"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vg5kKB8TOYWN/g8g6kaEEfQHAdELbmMjJlmCZzo5yG7LK5ybN3TXU7pJdm/UTR39m1auOQOx9IB+6jvKiE/uZI2tJnOIW0WaNTGI3nXVg3KY5GOqrYiopQ1/g+npxZ4lRkp/1jwkrYDcZUoHD1j1pa14K43U2XU9nJozGzpfcOpaA8P+A4rDJ4cYeHIwJ7GV4DMSl1A10ic+AKM+PExPhyHnVhjzKLFHz+6B0xfQes5K/MHUjOAuvBMhEbxDGkfo8IrFmFYwtR80M/sQOXB9oYzRbLChq8m1GqG8H04S+xNWjzCpJUa6hc1qRkwGlIGyjVqsz3h+IUwrxmUrGTZQFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IMuQilzr99wAiIcEeWK1T2Gbu8saI/qjJKN0Nonk28E=;
 b=erYblEKrBto1YDRU0UlGd5qu4tlp+B2Tf+marmoQbWuwXVIm8y9eF17t/iQQP7CHlYcg12S27vysUrYM0pOnGnz5Mq3RsxKZJQUnxOHLiZWCOdgrX8zP38/Y+PXryn/G7nJ4ZkHUPeYOUk6KleNKnDC3zTZA0dxL3X5Zxw+aLTEZHLKeD3kBfQYAzKPPZ2XGf/xym41hDIynUmBjLTtbVaW0ZkLQpBo/wsQs5LhV8yvM0Pgru8F+297a4Cio2cLOhWFsvbWgKHphXn/MO+hXJTNMFEr16/IaKUTdu0Y2S+uiBVCmcDpY2uHvtmG9gCp61/tt9g4bdmXx9jyD0yJ/Hg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IMuQilzr99wAiIcEeWK1T2Gbu8saI/qjJKN0Nonk28E=;
 b=VpXzhRQABXiIlWK3SKpa5TXtPS2zQDhQVuFzL+Y9efDk7tzr7xmU3UW00abVxmq46Or2o7frwlm86PKaAXQMWK3590Mf2rDi71sxQal69gitmeik7gS8dCXq3lP9JotLq5i6Ov4c9hkoybZPYAzJx7S3HKC7246qTQ+WdVw26tw=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <798b7eec-e31e-1798-773d-c2865fba4be2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/5] libxencall: osdep_hypercall() should return long
Message-ID: <a8b032ec-bafa-ef50-825f-26e975e64c69@citrix.com>
Date: Fri, 18 Jun 2021 14:26:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <798b7eec-e31e-1798-773d-c2865fba4be2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0453.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 34c3049f-05d7-4176-6516-08d9325cbcde
X-MS-TrafficTypeDiagnostic: BYAPR03MB4679:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4679FA6CDE6249921993ADF7BA0D9@BYAPR03MB4679.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:989;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 34c3049f-05d7-4176-6516-08d9325cbcde
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 13:26:53.8346
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k1eTy/RMsA3TMQOprj82gqIb4FT14k0l0lPcZfKffLDbJYlSf9x1Bq/X2WsXiU5LyOQLb/YkbY0nSvEwvSDRnGQmBRbq33DicMIl+eioJ9g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4679
X-OriginatorOrg: citrix.com

On 18/06/2021 11:23, Jan Beulich wrote:
> Some hypercalls, memory-op in particular, can return values requiring
> more than 31 bits to represent. Hence the underlying layers need to make
> sure they won't truncate such values. (Note that for Solaris the
> function also gets renamed, to match the other OSes.)

Spot the environment which obviously hasn't been compiled since the
4.5(?) timeframe...

Also, I'm fairly sure the comment in the NetBSD version is false when it
says it is copying the Linux way of doing things - I'm pretty sure it
means copying the FreeBSD way.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think the commit message needs to state that this doesn't fix the
truncation on Linux or Solaris, nor the truncation in the
xencall{0..5}() public APIs.  It only fixes truncation issues for
FreeBSD, MiniOS and NetBSD.

With a suitable adjustment, and ideally a fix to the NetBSD comment,
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:42:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:42:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144652.266215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luElZ-0004Og-8g; Fri, 18 Jun 2021 13:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144652.266215; Fri, 18 Jun 2021 13:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luElZ-0004OZ-5D; Fri, 18 Jun 2021 13:42:37 +0000
Received: by outflank-mailman (input) for mailman id 144652;
 Fri, 18 Jun 2021 13:42:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luElX-0004OT-IL
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:42:35 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4df509cf-ad39-4f7c-8a47-da425e2a9d11;
 Fri, 18 Jun 2021 13:42:34 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-13-RFWHhx_9N_exPU9AEgOT-A-2; Fri, 18 Jun 2021 15:42:32 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7151.eurprd04.prod.outlook.com (2603:10a6:800:129::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 13:42:28 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 13:42:28 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0013.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Fri, 18 Jun 2021 13:42:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4df509cf-ad39-4f7c-8a47-da425e2a9d11
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624023753;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Orf2Bk6rJ/wqUaOsji1+IyGfvZ4DN7fRzHlOSSTtWss=;
	b=QxrYb4W1n2DFxSY5fFDk81vnTSGnKN7Dx+i65WJapPrXZZaWvX5E22hVXTIahGCHJ63QgI
	pX1tnO9IOcBtFUifMo1QE8I4tvK5VJOEOG6KfkI3FrZnxPemp3oeHrvGNp9dkZNIYUNjIM
	b3qklRvmrXvmT252TJakhHjgRfYam4A=
X-MC-Unique: RFWHhx_9N_exPU9AEgOT-A-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SHoGsd1vr0EjB873neCu6XzwjhqQlihpydz/4/rImwXjWUAYToPFFspPQEgvNzyBPmbqVuCp292Y1QyAumMZwtp8GfBHmzEXxzsKAxjwvZ9OpZvcyDGir61kMInoVKuzQ7H8Id+Ce0u2Wfpp+lkk5sJHaCHR9ctP+B9CCw/PiHp4Jxq+I2DbTXKLlZr0t5XLMJfcEkYFhAtsoS1z4mGdVOQ4w4lDkT2wxFf16/pyNWVPo+FLnpwpgGAyIeKIlCQLL+o7JnRafuO8tKHprGuWj+x2vuiHv4/CyZ38dKo5aHRhxxM3umSBYuZpObsP0da9V90QnQK/a/VkY9p9vH6ukQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t4noPiWMcK0oPSOckF4kf0ES74lzeH7J4YEp4oJnJ8k=;
 b=P4+avrM5BXOuYPP4nb+1eogrUfksbl7tYnlpNE1g4yrAoiBz+fhuaAZ8clG5FxiFoOQ/0Vv8MxKLD7beU9VSMchqgNJI+ehR+91GPcc6k7JNSG8t3W7UZJTsmf8W07GCT6gos76URluLIwnskXeq/H9flZrB54ob6v+drXizbgIDAYatyBMiVP9kZ+eG0S30KGncyrted5xl2bQ8U41SAiDVU1aSTSN+8/udcGQDzkW7Ndbd2cd7BDB7U9kP6zH5DCzDr9j9YGPXByf3SQtvf+BWVr9d3tkDAm5iFsqtJpe7g3gmDWEfDHrdSYCt7C+ODb/G17uq1lcuUrUqYOzeZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/5] libxencall: osdep_hypercall() should return long
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <798b7eec-e31e-1798-773d-c2865fba4be2@suse.com>
 <a8b032ec-bafa-ef50-825f-26e975e64c69@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31a3f6e7-2fe7-e25b-8e15-82940c9ef03f@suse.com>
Date: Fri, 18 Jun 2021 15:42:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <a8b032ec-bafa-ef50-825f-26e975e64c69@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0013.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b7c0d045-7fa3-4fdb-fea6-08d9325ee9d0
X-MS-TrafficTypeDiagnostic: VI1PR04MB7151:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB715106D1A862A3565C0A1294B30D9@VI1PR04MB7151.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b7c0d045-7fa3-4fdb-fea6-08d9325ee9d0
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 13:42:28.3586
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8Ytk01q6a6vgLt2DwUZKLMEkyOXV1NYbvqT3wOfc5yQtTCghU8Wz/vOaIh9BoAjC5CCowOtw7wxt+UFjJe7EdA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7151

On 18.06.2021 15:26, Andrew Cooper wrote:
> On 18/06/2021 11:23, Jan Beulich wrote:
>> Some hypercalls, memory-op in particular, can return values requiring
>> more than 31 bits to represent. Hence the underlying layers need to make
>> sure they won't truncate such values. (Note that for Solaris the
>> function also gets renamed, to match the other OSes.)
>
> Spot the environment which obviously hasn't been compiled since the
> 4.5(?) timeframe...
>
> Also, I'm fairly sure the comment in the NetBSD version is false when it
> says it is copying the Linux way of doing things - I'm pretty sure it
> means copying the FreeBSD way.

It doesn't say "copy", but "mimic", and I think what it means is the
effect for its callers. Therefore I think the comment makes sense in
its current shape. And I don't think I should change it right here
anyway.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> I think the commit message needs to state that this doesn't fix the
> truncation on Linux or Solaris, nor the truncation in the
> xencall{0..5}() public APIs.  It only fixes truncation issues for
> FreeBSD, MiniOS and NetBSD.

I've added

"Due to them merely propagating ioctl()'s return value, this change is
 benign on Linux and Solaris. IOW there's an actual effect here only for
 the BSDs and MiniOS, but even then further adjustments are needed at the
 xencall<N>() level."

> With a suitable adjustment, and ideally a fix to the NetBSD comment,
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:43:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:43:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144655.266226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luElx-0004vQ-Ke; Fri, 18 Jun 2021 13:43:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144655.266226; Fri, 18 Jun 2021 13:43:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luElx-0004vJ-HU; Fri, 18 Jun 2021 13:43:01 +0000
Received: by outflank-mailman (input) for mailman id 144655;
 Fri, 18 Jun 2021 13:43:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=axfQ=LM=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1luElw-0004nd-It
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:43:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d95b56b-7821-474e-ac9b-34fba1eb8c92;
 Fri, 18 Jun 2021 13:42:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d95b56b-7821-474e-ac9b-34fba1eb8c92
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624023778;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=govfGUbBDrKO4hTp6bLafsGHvf3JHixr+3zDBDqBrBU=;
  b=ey65oLXli9EFNVRlpghYWjewTYC7TMo01/bld9g0V8SpdbB6diS5S5Ey
   phjt7KYLGje94NYU8CRS8pUbFZrW6Qty2TXQOl29ibEpEo93xn8NtbRYb
   l7ywp7+MsRlb8R7dQnaJ97SN9DouYn67iYc/OIGRH5daStc3bPY/2J3Bh
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46460937
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46460937"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFo4MsSPIc1U1onEpdxIv4LKjnROBEtXeYOCkACrA9fQQ4U74OzVooXIP0aG6JM7vlojPWTVOOtvgRAWHb1/DyvNHGDS1Reu+olvISwoG3ra2HvqMA4Wg5itEuccVxvZmKvxzTe0A3lIqEnJfO5cPmvIBrMGIRARdat+aE63mcfFGaIsKHHm34rqPa7IWc9347HtaQg/Sjk8OImI2cAcf0sFfjOiVgFGRtAa24xYDHaSpaSS221fT/MdK+uRK57tiebJeebZSw8uqCQJv+xz1Hq2XT9XTp3ToCk2XYazCpEOH/SXVCy4vnenoeAnQbMKCA5yDPPGZSR3dgS2YWvXgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=govfGUbBDrKO4hTp6bLafsGHvf3JHixr+3zDBDqBrBU=;
 b=mz122f3vQJeCGXMHAOTY3Eqnj9vjzEq679LhcNVl1Gnf9mcJdgJgtkbFSpnhlD+aHiIwnSc6l7iIxb85WK87VWtCkYkkCiszoIYwlNykqfr8WsrPpB6GEj+cY8N74tL4mRT4QyQDzJFyQedaHxjUY03BF81kKWro8vrWCRxA3gPYANfiIW5b9zwRjBQPF8Wdkkv68LmdwUaWA4tiU8hb4+AyoWJhp9frwr4nLzmRf2pVOvj4tIuR+B5YfI6YxoigL3RLkr8vZTku7UFm1Azn8T0Pl8AFPdjjAYbdYGZVhvnL5Wbu/rSemkLDchwIsSjuEsLGMs9zE9uAkJLrFbjpaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=govfGUbBDrKO4hTp6bLafsGHvf3JHixr+3zDBDqBrBU=;
 b=cqktPiPNg3L6C5dkOTIXpsDF6lfKjrnkDVnemRv3ya6s8cOQqJO2qJP/oudrMmhmGawN/gBS5dOx3JkMtn4BRP1QPnOycn9lFZOdUCp6lq6AZxSYXFfSK7lp3whZr1WB7DpPQqpLc674VvLYFbbbrz6M/vGBChro3MvDV39GTFE=
From: Edwin Torok <edvin.torok@citrix.com>
To: Christian Lindig <christian.lindig@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "David
 Scott" <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Topic: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Index: AQHXZC8pdx4Fbe3SD0u2mLoPzHVclasZvgMAgAAJS4A=
Date: Fri, 18 Jun 2021 13:42:54 +0000
Message-ID: <787E5F69-FC38-4B6B-B893-13A5AD417179@citrix.com>
References: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
 <70BA77FE-045D-4F67-A61E-030CFAACC8B1@citrix.com>
In-Reply-To: <70BA77FE-045D-4F67-A61E-030CFAACC8B1@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.100.0.2.22)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 73b210ea-1859-44a7-9f5c-08d9325ef987
x-ms-traffictypediagnostic: SJ0PR03MB6421:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <SJ0PR03MB64213E8B3BA18DF1172C0B509B0D9@SJ0PR03MB6421.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <303581C41ACBBD44B69750D761D6DAA7@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 73b210ea-1859-44a7-9f5c-08d9325ef987
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 13:42:54.3349
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kNpV+1MR4juJ06zSx2NaTyUxj9giAv8XTOmUSgPoTKWIXnmmdLRDdew9XAzbrKAAXxAanCxAhxu7e/gWGGPtHA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6421
X-OriginatorOrg: citrix.com

> On 18 Jun 2021, at 14:09, Christian Lindig <christian.lindig@citrix.com> wrote:
> 
> 
> 
>> On 18 Jun 2021, at 11:45, Edwin Török <edvin.torok@citrix.com> wrote:
>> 
>> Introduces following functions in Xenctrl and associated types:
>> get_system_cpu_policy
>> cpu_policy_to_featureset,
>> string_of_xen_cpu_policy_index
>> 
>> These are wrappers around the existing C functions in xenctrl.h,
>> that will be used by xenopsd initially.
>> 
>> -Wno-declaration-after-statement is disabled to allow mixing
>> declarations and code to simplify writing the stubs
>> by using variable length arrays on the stack instead of
>> allocating/freeing memory
>> (which would require additional error-handling logic).
>> 
>> Signed-off-by: Edwin Török <edvin.torok@citrix.com>
>> ---
>> tools/ocaml/libs/xc/Makefile        |   2 +-
>> tools/ocaml/libs/xc/xenctrl.ml      |  37 ++++++
>> tools/ocaml/libs/xc/xenctrl.mli     |  71 ++++++++++
>> tools/ocaml/libs/xc/xenctrl_stubs.c | 195 ++++++++++++++++++++++++++++++++
>> 4 files changed, 304 insertions(+), 1 deletion(-)
> 
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> 
>> 
>> +static CAMLprim value Val_leaves(const xen_cpuid_leaf_t *leaves, uint32_t nr_leaves)
>> +{
>> +    CAMLparam0();
>> +    CAMLlocal1(result);
>> +    uint32_t i;
>> +
>> +    result = caml_alloc(nr_leaves, 0);
>> +    for (i=0;i<nr_leaves;i++)
>> +        Store_field(result, i, Val_cpuid_leaf(&leaves[i]));
>> +
>> +    CAMLreturn(result);
>> +}
> 
> Is caml_alloc(nr_leaves, 0) the right allocation? The 0 is the tag. There is another instance of this below. What is the type of the returned value from an OCaml perspective?

It is an array, so I think tag 0 is right (that is what the implementation of caml_alloc_array does too). I could perhaps try to use caml_alloc_array instead to make this more obvious.

Best regards,
—Edwin

> 
> — C


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:46:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144664.266236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEpm-0005he-5j; Fri, 18 Jun 2021 13:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144664.266236; Fri, 18 Jun 2021 13:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEpm-0005hX-2d; Fri, 18 Jun 2021 13:46:58 +0000
Received: by outflank-mailman (input) for mailman id 144664;
 Fri, 18 Jun 2021 13:46:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luEpl-0005hR-CE
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:46:57 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3d132b3-1585-49c4-810d-7765698482cc;
 Fri, 18 Jun 2021 13:46:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3d132b3-1585-49c4-810d-7765698482cc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624024015;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=+H0QPI8UXFeGtzRvZHyjPomcsgEF8e+Y1bIOv/9++U0=;
  b=PUDpPGM1GuNMrO229f4RyAl4d8hoLf9FLspJ1F6ZZhGO4RVRHvuuGjc9
   zB/1AtDDqHCpOKQA/to03AOm1dORUsspWfXg3ZJfOzLXu10UFrkgpMVY/
   vDcsx83FfDR31kkPApYr8NA20k8IzlBt7ibqdZG9l7zwucFGedI8VDFzb
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46447123
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46447123"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cM6NlDQilVAeu/Y/fOO2/dv4mDYq6xbfvCfCeEmqY2oINv0p16tsMxMtRaRCxejuJ+JIMnUx1JRl8QvoCnHfo/LerbOTN69fXcvgQGvkuTrp/+9MThPlXV0Tu3hSmxCcgo2/2+3q8ct6Ig43i8JJfi13hkFj1wv3TkHQb+/zgnTRxs+wgVBSyc0AA+7Ek4w9UHek7nldUrA2IEihUlztSRK++ARTwq8KDmRrsGkbbSdDhLdmsiJkyr4fKSRcLlkM8OtePVV6zVVAjlUa2XmMuwmxySZnkmLmcKaz3bhQ32A2cG/docfgrJKhY6sWTuMuCpSFrN9BOSx+PeMVgCyw/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FFP/azzJ8RUG6I3mvukqgsz9SL0FK3W1+CM0M5NPNNs=;
 b=BNGG0/CN7bSD4G+Lkba6ccjqqBaMGkUUTlgBJp7dOqIFc2sXPxKUAvA+B/KBqDPAlB56R23Mn/G0aaabAYjGfZA3L8kYC97Gd1cOcRUoAufEoyDj6+eVdm3sZhHb6uAKZwDOKLYVkBWT9Ncu1/MYwanctYKYHvqBrloqudQJYsytpJLG8FBrakW2EBj+1l/BjjwYE4v26AkCMqIkr+MzDLbdVgI8LsqapkZcL65CCkIYjDVq/V7y3E/16OSrfuMIEz5/UsmktnTZxG1t7DTnXNPjA81KdVMnxUgiZPJ7z53tzn2VHuCRDaYswMtVW9xK9FdeOSkIAKuXEpMu6V6pGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FFP/azzJ8RUG6I3mvukqgsz9SL0FK3W1+CM0M5NPNNs=;
 b=Td8uiChq1ZgWa+pQNfLK2iEN6mImpuMC7UyKIkxx8L7za5uLgexDOECnQ3yARXYgqN89ySGVH5+qlDSmjZI2lIGnk3rsj9lwXjlUJbxcsNDcysFg+wfV66WBK070WjwJbK6oDW1yVxz2tGzKeMd1juKWsqgJOxCe5vDXA8/h030=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/5] libxencall: introduce variant of xencall2() returning
 long
Message-ID: <0804f068-c016-0099-cebb-dbb8b7f1b794@citrix.com>
Date: Fri, 18 Jun 2021 14:46:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0494.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::19) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e2b58b02-067a-4c3b-2279-08d9325f86c9
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5533:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5533046315C992E02F2CD1BCBA0D9@SJ0PR03MB5533.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e2b58b02-067a-4c3b-2279-08d9325f86c9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 13:46:51.6914
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CmKcLNyq7Woraz+G6IJQzDDCYNW8P9aJu05rRNZrhMxVeJNsEclTjkWMQeRbUU32rP31wec4HJ+FBXWZ3FhEWOv0gze0LSS87IcA3nH8PFc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5533
X-OriginatorOrg: citrix.com

On 18/06/2021 11:24, Jan Beulich wrote:
> Some hypercalls, memory-op in particular, can return values requiring
> more than 31 bits to represent. Hence the underlying layers need to make
> sure they won't truncate such values.
>
> While adding the new function to the map file, I noticed the stray
> xencall6 there. I'm taking the liberty to remove it at this occasion. If
> such a function would ever appear, it shouldn't lane in version 1.0.

s/lane/land/ ?

I'm tempted to suggest splitting this out into a separate change anyway.
I'm not sure of the implications on the ABI.

ABI-dumper appears not to have picked anything up, nor has readelf on
the object itself, so we're probably ok ABI wise.

That said, I would really have expected a compile/link error for a bad
symbol in a map file.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I wasn't sure whether equivalents for the other xencall<N>() should also
> be introduced, and hence went for the minimal solution first. Otoh there
> is also xencall0() which has no users ...
>
> --- a/tools/include/xencall.h
> +++ b/tools/include/xencall.h
> @@ -113,6 +113,10 @@ int xencall5(xencall_handle *xcall, unsi
>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>               uint64_t arg4, uint64_t arg5);
> 
> +/* Variant(s) of the above, as needed, returning "long" instead of "int". */
> +long xencall2L(xencall_handle *xcall, unsigned int op,

If we're fixing ABIs, can we see about not truncating op on the way up?

> +               uint64_t arg1, uint64_t arg2);
> +
>  /*
>   * Allocate and free memory which is suitable for use as a pointer
>   * argument to a hypercall.
> --- a/tools/libs/call/core.c
> +++ b/tools/libs/call/core.c
> @@ -127,6 +127,17 @@ int xencall2(xencall_handle *xcall, unsi
>      return osdep_hypercall(xcall, &call);
>  }
> 
> +long xencall2L(xencall_handle *xcall, unsigned int op,
> +               uint64_t arg1, uint64_t arg2)
> +{
> +    privcmd_hypercall_t call = {
> +        .op = op,
> +        .arg = { arg1, arg2 },
> +    };
> +
> +    return osdep_hypercall(xcall, &call);

(If we're not changing op), I take it there are no alias tricks we can
play to reuse the same implementation?

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:47:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 13:47:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144665.266248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEps-00060U-H0; Fri, 18 Jun 2021 13:47:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144665.266248; Fri, 18 Jun 2021 13:47:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luEps-00060N-Bt; Fri, 18 Jun 2021 13:47:04 +0000
Received: by outflank-mailman (input) for mailman id 144665;
 Fri, 18 Jun 2021 13:47:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=axfQ=LM=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1luEpr-0005zg-7N
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 13:47:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcc4bad4-082a-4c32-8f07-587709833a43;
 Fri, 18 Jun 2021 13:47:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcc4bad4-082a-4c32-8f07-587709833a43
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624024022;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=H+610r95/mEl0uWAJ5OlOC40N1wDr546XfSq/hoUwqM=;
  b=O8bixDPTg+VSdglyEnJJzR7fwWsx5nrDJdN5D04uxnfk7jAZjBQajpPI
   1qmcnzt6lBSpXQODwpRaq+fDGc/LSIHuJOY+9PtbTwNUZjcqOEoD2//4w
   bWnV8a+7cEVgn+BBHtgZe6JtMi1DCIrE6jyhrERaseMwWkGytfEOjAhqH
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46461339
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,283,1616472000"; 
   d="scan'208";a="46461339"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VhZSGVx03O8ee89VreKa3yuVWlBm/n8oP43D6+vCKNG9X0AeofZyH02wlovpVMYI6kISFjpfk9Ecl/auJ6TtSm6ArN6DRu+YQJkUYI2Jxhm1Ro74vQ66TpeSgpqS9hS98FukthgeTFbHw1HY3fE1Lt+ymhInGBvK8djdDvt76+02Z3BteHqyF6rTi3wa5RqZTi+qx1NJotGSUVU3vLUDUQS4COocarXoEpxWhzmSFA8H1RvJ/q4idJ5pD/bhI5+LwiSZZTFRl/qhBzVhwYr1VxVTtimMfkEpYzFhQccCTV4UWO0ngB9TOpt+adm1bUXBB+DKs5afNtZ56WUeuifu/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+610r95/mEl0uWAJ5OlOC40N1wDr546XfSq/hoUwqM=;
 b=H5kCPBeeV+LVTJysco9vxYh8Fnf54ayrGZCwOtIWw24MKAEXELyDGeSAZkpAxz+Uo83CmPSaMNdr0BWHotOMDHu1qlA8+0ShZSyjg3b7EAjzKzOp4jZ3Z0Ziy6MNWtJjzNu2NBklqWA8t6v9b3HE37AiJHO27Drs5YVC9LYFOAB8w6Yuayz/Phrb+g50BaGaGzbfJuPL8FbGBqgemUiqAWHiZGaXQUYY/GKCLHIwxbInERmzwKgC+cbP3Q1cNH/vGVVUnCwnmgBMkZLuz5c0ICAaVQdbwnczvL0SQb3Bo3GJGul72YTuwZhihpYCiIUEx2PdMDJ0CAWlAQ3KNfL7nw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+610r95/mEl0uWAJ5OlOC40N1wDr546XfSq/hoUwqM=;
 b=munzUIenqG/uuX3r/I+4neGk6MxZ3/wEok7vCblQpTqts1ecaeWPVau+KFAQMwo9ENABKs14k/QpbcniyRf0JPGKvbXhwq8OVIIzKnexyuUnnuuYHX5TWVIGaJsa9IddwhlfDAevg39y9RiJgXypZZtiqbnNG2r5ahFoKGZFqds=
From: Edwin Torok <edvin.torok@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Topic: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
Thread-Index: AQHXZC8pdx4Fbe3SD0u2mLoPzHVclasZwB4AgAAIUoA=
Date: Fri, 18 Jun 2021 13:46:58 +0000
Message-ID: <5B331E67-0BE9-47DE-8076-EBBE06BDEBF4@citrix.com>
References: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
 <4eb5d3fb-db71-5981-e6f5-0503ff896fd9@citrix.com>
In-Reply-To: <4eb5d3fb-db71-5981-e6f5-0503ff896fd9@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.100.0.2.22)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 7e08d667-43ec-4429-f728-08d9325f8ad0
x-ms-traffictypediagnostic: BYAPR03MB3655:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB3655851ABD40D4AC3B85863A9B0D9@BYAPR03MB3655.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 13:46:58.0693
 (UTC)
X-OriginatorOrg: citrix.com


> On 18 Jun 2021, at 14:17, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> 
> On 18/06/2021 11:45, Edwin Török wrote:
>> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
>> index d05d7bb30e..4a230de8b7 100644
>> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
>> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
>> @@ -34,6 +34,9 @@
>> #include <xenctrl.h>
>> #include <xen-tools/libs.h>
>> 
>> +#include <xen/lib/x86/cpuid.h>
>> +#include <xen/lib/x86/msr.h>
> 
> https://gitlab.com/xen-project/patchew/xen/-/jobs/1358403495
> 
> CI says no.  This needs to be behind a suitable ifdef, for non-x86 builds.


Should the stubs be disabled completely and raise ENOSYS/failwith on non-x86 (e.g. ARM), or are there plans on doing equivalent CPU policy on ARM at some point?

—Edwin

> 
> (I've not looked at the rest of the patch yet.  I'll get around to it at
> some point.)
> 
> ~Andrew
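Presumably the guard Andrew is after looks something like the sketch below; treating the compiler-predefined architecture macros as the right condition here is an assumption on my part, not something stated in the thread:

```c
#include <stdbool.h>

/*
 * Sketch: pull the x86-only headers in only for x86 builds, and let the
 * stubs report the functionality as unavailable elsewhere (e.g. ARM).
 */
#if defined(__i386__) || defined(__x86_64__)
/* #include <xen/lib/x86/cpuid.h> */
/* #include <xen/lib/x86/msr.h>  */
#define HAVE_X86_CPU_POLICY 1
#else
#define HAVE_X86_CPU_POLICY 0
#endif

/* Non-x86 callers could then see ENOSYS/failwith instead of a build break. */
static inline bool cpu_policy_supported(void)
{
    return HAVE_X86_CPU_POLICY;
}
```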


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 13:56:43 2021
Subject: Re: [RFC PATCH 01/10] headers: introduce new default privilege model
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: sstabellini@kernel.org, julien@xen.org, Volodymyr_Babchuk@epam.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 wl@xen.org, roger.pau@citrix.com, tamas@tklengyel.com, tim@xen.org,
 jgross@suse.com, aisaila@bitdefender.com, ppircalabu@bitdefender.com,
 dfaggioli@suse.com, paul@xen.org, kevin.tian@intel.com,
 dgdegra@tycho.nsa.gov, adam.schwalm@starlab.io,
 xen-devel@lists.xenproject.org, scott.davis@starlab.io
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
 <20210514205437.13661-2-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b9e634ab-6118-6c8c-7bc1-8df1f9ceec90@suse.com>
Date: Fri, 18 Jun 2021 15:56:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210514205437.13661-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.05.2021 22:54, Daniel P. Smith wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -457,6 +457,24 @@ struct domain
>       */
>      bool             creation_finished;
>  
> +    /* When SILO or Flask are not in use, a domain may have one or more roles
> +     * that are desired for it to fulfill. To accomplish these role a set of
> +     * privilege is required. A break down of the basic privilege is mapped
> +     * to a bit field for assignment and verification.
> +     */
> +#define XSM_NONE      (1U<<0)  /* No role required to make the call */
> +#define XSM_SELF      (1U<<1)  /* Allowed to make the call on self */
> +#define XSM_TARGET    (1U<<2)  /* Allowed to make the call on a domain's target */
> +#define XSM_PLAT_CTRL (1U<<3)  /* Platform Control: domain that control the overall platform */
> +#define XSM_DOM_BUILD (1U<<4)  /* Domain Builder: domain that does domain construction and destruction */
> +#define XSM_DOM_SUPER (1U<<5)  /* Domain Supervisor: domain that control the lifecycle, of all domains */
> +#define XSM_DEV_EMUL  (1U<<6)  /* Device Emulator: domain that provides its target domain's device emulator */
> +#define XSM_DEV_BACK  (1U<<7)  /* Device Backend: domain that provides a device backend */
> +#define XSM_HW_CTRL   (1U<<8)  /* Hardware Control: domain with physical hardware access and its allocation for domain usage */
> +#define XSM_HW_SUPER  (1U<<9)  /* Hardware Supervisor: domain that control allocated physical hardware */
> +#define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
> +    uint32_t         xsm_roles;
> +
>      /* Which guest this guest has privileges on */
>      struct domain   *target;

Besides the request to correct various issues with style, I'm struggling
with the differences between some of these, e.g. XSM_HW_CTRL ("allocation
for domain usage") and XSM_HW_SUPER ("control allocated physical hardware").
In the latter case it's not even clear to me what "allocated physical
hardware" is when comparing to just "physical hardware". IOW I think
there's some context (reference to doc) or further commentary missing here.
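For reference, the mechanics of such a bit-field role set presumably boil down to simple mask tests; a minimal standalone sketch (only the XSM_* values are from the quoted patch, the has_role() helper is a hypothetical name of mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* Subset of the role bits from the quoted hunk. */
#define XSM_SELF     (1U << 1)
#define XSM_HW_CTRL  (1U << 8)
#define XSM_HW_SUPER (1U << 9)

struct domain {
    uint32_t xsm_roles;
};

/* Hypothetical helper: true iff the domain holds every bit in 'roles'. */
static inline bool has_role(const struct domain *d, uint32_t roles)
{
    return (d->xsm_roles & roles) == roles;
}
```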

As a nit, I think in many cases you mean "controls".

I also wonder on what basis you've chosen the place at which you're
inserting the new struct member. I'd expect this to either live next to
related fields, or be put in an available 32-bit padding slot.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:02:27 2021
Subject: Re: [RFC PATCH 02/10] control domain: refactor is_control_domain
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: sstabellini@kernel.org, julien@xen.org, Volodymyr_Babchuk@epam.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 wl@xen.org, roger.pau@citrix.com, tamas@tklengyel.com, tim@xen.org,
 jgross@suse.com, aisaila@bitdefender.com, ppircalabu@bitdefender.com,
 dfaggioli@suse.com, paul@xen.org, kevin.tian@intel.com,
 dgdegra@tycho.nsa.gov, adam.schwalm@starlab.io, scott.davis@starlab.io,
 xen-devel@lists.xenproject.org
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
 <20210514205437.13661-3-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <caedcf59-ae0c-987c-4574-a4a97757af36@suse.com>
Date: Fri, 18 Jun 2021 16:02:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210514205437.13661-3-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.05.2021 22:54, Daniel P. Smith wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -556,6 +556,9 @@ struct domain *domain_create(domid_t domid,
>      /* Sort out our idea of is_control_domain(). */
>      d->is_privileged = is_priv;

With the change to is_control_domain() this is the last use of the
field, so your patch should replace it rather than adding yet
another one. (For layout reasons, "replace" doesn't necessarily
mean "in place").

> +    if (is_priv)

Nit: Please add the missing blanks here.

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -473,6 +473,8 @@ struct domain
>  #define XSM_HW_CTRL   (1U<<8)  /* Hardware Control: domain with physical hardware access and its allocation for domain usage */
>  #define XSM_HW_SUPER  (1U<<9)  /* Hardware Supervisor: domain that control allocated physical hardware */
>  #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
> +#define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
> +		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)

The latest at this point I'm inclined to request that these #define-s
don't all live in the middle of struct domain. When you move them
elsewhere, simply have ...

>      uint32_t         xsm_roles;

... a brief comment next to this point at XSM_* as the values applicable
here.
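Hoisted out of struct domain as requested, the mask and its single current use could look like this sketch (the role bits and CLASSIC_DOM0_PRIVS are from the quoted patch; xsm_default_roles() is a hypothetical name, not something the patch introduces):

```c
#include <stdbool.h>
#include <stdint.h>

/* Role bits as in the quoted patch. */
#define XSM_PLAT_CTRL (1U << 3)
#define XSM_DOM_BUILD (1U << 4)
#define XSM_DOM_SUPER (1U << 5)
#define XSM_DEV_EMUL  (1U << 6)
#define XSM_HW_CTRL   (1U << 8)
#define XSM_HW_SUPER  (1U << 9)
#define XSM_XENSTORE  (1U << 31)

/* Everything a classical dom0 is granted. */
#define CLASSIC_DOM0_PRIVS                                          \
    (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | XSM_DEV_EMUL | \
     XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)

/* Hypothetical helper: the roles a freshly created domain starts with. */
static inline uint32_t xsm_default_roles(bool is_priv)
{
    return is_priv ? CLASSIC_DOM0_PRIVS : 0;
}
```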

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:09:41 2021
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
To: Claire Chang <tientzu@chromium.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
 paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com,
 xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
 bauerman@linux.ibm.com, peterz@infradead.org,
 Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan
 <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, Tomasz Figa <tfiga@chromium.org>,
 bskeggs@redhat.com, Bjorn Helgaas <bhelgaas@google.com>,
 chris@chris-wilson.co.uk, Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
 joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
 maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
 rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
References: <20210617062635.1660944-1-tientzu@chromium.org>
 <20210617062635.1660944-2-tientzu@chromium.org>
 <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s>
 <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com>
From: Tom Lendacky <thomas.lendacky@amd.com>
Message-ID: <741a34cc-547c-984d-8af4-2f309880acfa@amd.com>
Date: Fri, 18 Jun 2021 09:09:17 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 6/18/21 1:25 AM, Claire Chang wrote:
> On Fri, Jun 18, 2021 at 7:30 AM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
>>
>> On Thu, 17 Jun 2021, Claire Chang wrote:
>>> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
>>> initialization to make the code reusable.
>>>
>>> Signed-off-by: Claire Chang <tientzu@chromium.org>
>>> Reviewed-by: Christoph Hellwig <hch@lst.de>
>>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>>> Tested-by: Will Deacon <will@kernel.org>
>>> ---
>>>  kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
>>>  1 file changed, 25 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index 52e2ac526757..47bb2a766798 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
>>>       memset(vaddr, 0, bytes);
>>>  }
>>>
>>> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>>> +                                 unsigned long nslabs, bool late_alloc)
>>>  {
>>> +     void *vaddr = phys_to_virt(start);
>>>       unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>> +
>>> +     mem->nslabs = nslabs;
>>> +     mem->start = start;
>>> +     mem->end = mem->start + bytes;
>>> +     mem->index = 0;
>>> +     mem->late_alloc = late_alloc;
>>> +     spin_lock_init(&mem->lock);
>>> +     for (i = 0; i < mem->nslabs; i++) {
>>> +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> +             mem->slots[i].alloc_size = 0;
>>> +     }
>>> +     memset(vaddr, 0, bytes);
>>> +}
>>> +
>>> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>> +{
>>>       struct io_tlb_mem *mem;
>>>       size_t alloc_size;
>>>
>>> @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>>       if (!mem)
>>>               panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>>>                     __func__, alloc_size, PAGE_SIZE);
>>> -     mem->nslabs = nslabs;
>>> -     mem->start = __pa(tlb);
>>> -     mem->end = mem->start + bytes;
>>> -     mem->index = 0;
>>> -     spin_lock_init(&mem->lock);
>>> -     for (i = 0; i < mem->nslabs; i++) {
>>> -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> -             mem->slots[i].alloc_size = 0;
>>> -     }
>>> +
>>> +     swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>>>
>>>       io_tlb_default_mem = mem;
>>>       if (verbose)
>>> @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
>>>  int
>>>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>>>  {
>>> -     unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>>       struct io_tlb_mem *mem;
>>> +     unsigned long bytes = nslabs << IO_TLB_SHIFT;
>>>
>>>       if (swiotlb_force == SWIOTLB_NO_FORCE)
>>>               return 0;
>>> @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>>>       if (!mem)
>>>               return -ENOMEM;
>>>
>>> -     mem->nslabs = nslabs;
>>> -     mem->start = virt_to_phys(tlb);
>>> -     mem->end = mem->start + bytes;
>>> -     mem->index = 0;
>>> -     mem->late_alloc = 1;
>>> -     spin_lock_init(&mem->lock);
>>> -     for (i = 0; i < mem->nslabs; i++) {
>>> -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> -             mem->slots[i].alloc_size = 0;
>>> -     }
>>> -
>>> +     memset(mem, 0, sizeof(*mem));
>>> +     swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>>>       set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>>> -     memset(tlb, 0, bytes);
>>
>> This is good for swiotlb_late_init_with_tbl. However I have just noticed
>> that mem could also be allocated from swiotlb_init_with_tbl, in which
>> case the zeroing is missing. I think we need another memset in
>> swiotlb_init_with_tbl as well. Or maybe it could be better to have a
>> single memset at the beginning of swiotlb_init_io_tlb_mem instead. Up to
>> you.
> 
> swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
> and memblock_alloc[1] will do memset in memblock_alloc_try_nid[2], so
> swiotlb_init_with_tbl is also good.
> I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> it's clearer and safer.

On x86, if the memset is done before set_memory_decrypted() and memory
encryption is active, then the memory will look like ciphertext afterwards
and not be zeroes. If zeroed memory is required, then a memset must be
done after the set_memory_decrypted() calls.
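The ordering can be demonstrated with a small user-space sketch (everything here is a hypothetical stand-in: the stub only models how flipping a mapping between encrypted and decrypted re-interprets existing bytes, using an XOR in place of real hardware encryption):

```c
#include <string.h>

/* stand-in for set_memory_decrypted(): changing how the mapping is
 * interpreted "scrambles" whatever plaintext was written through the
 * old (encrypted) mapping; modeled here as an XOR over the buffer */
static void set_memory_decrypted_stub(unsigned char *p, int n)
{
    for (int i = 0; i < n; i++)
        p[i] ^= 0xA5;
}

/* wrong order: zero first, then flip the mapping */
static int zero_before_decrypt(unsigned char *p, int n)
{
    memset(p, 0, n);
    set_memory_decrypted_stub(p, n);
    return p[0];            /* reads back as non-zero garbage (0xA5) */
}

/* right order: flip the mapping, then zero */
static int zero_after_decrypt(unsigned char *p, int n)
{
    set_memory_decrypted_stub(p, n);
    memset(p, 0, n);
    return p[0];            /* buffer really is zeroed */
}
```

In the real code the same reasoning puts the memset of the tlb buffer after set_memory_decrypted(), which is what the patch does.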

Thanks,
Tom

> 
> [1] https://elixir.bootlin.com/linux/v5.13-rc6/source/include/linux/memblock.h#L407
> [2] https://elixir.bootlin.com/linux/v5.13-rc6/source/mm/memblock.c#L1555
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:14:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 14:14:59 +0000
Subject: Re: [RFC PATCH 04/10] xsm: convert rewrite privilege check function
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: sstabellini@kernel.org, julien@xen.org, Volodymyr_Babchuk@epam.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 wl@xen.org, roger.pau@citrix.com, tamas@tklengyel.com, tim@xen.org,
 jgross@suse.com, aisaila@bitdefender.com, ppircalabu@bitdefender.com,
 dfaggioli@suse.com, paul@xen.org, kevin.tian@intel.com,
 dgdegra@tycho.nsa.gov, adam.schwalm@starlab.io, scott.davis@starlab.io,
 xen-devel@lists.xenproject.org
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
 <20210514205437.13661-5-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bd1f80e9-5d1c-beb5-f255-4351a5f64955@suse.com>
Date: Fri, 18 Jun 2021 16:14:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210514205437.13661-5-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.05.2021 22:54, Daniel P. Smith wrote:

In the title, did you mean just one of "convert" or "rewrite", or
did you omit some punctuation?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -475,6 +475,12 @@ struct domain
>  #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
>  #define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
>  		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
> +/* Any access for which XSM_DEV_EMUL is the restriction, XSM_DOM_SUPER is an override */
> +#define DEV_EMU_PRIVS (XSM_DOM_SUPER | XSM_DEV_EMUL)
> +/* Anytime there is an XSM_TARGET check, XSM_SELF also applies, and XSM_DOM_SUPER is an override */
> +#define TARGET_PRIVS (XSM_TARGET | XSM_SELF | XSM_DOM_SUPER)
> +/* Anytime there is an XSM_XENSTORE check, XSM_DOM_SUPER is an override */
> +#define XENSTORE_PRIVS (XSM_XENSTORE | XSM_DOM_SUPER)

I think all of these want to *start* with a common prefix, e.g.
XSM_PRIVS_*. But of course especially these "override" remarks in
the comments again show that for now it is unclear what the
individual bits really mean, and hence whether these combinations
all make sense.

> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -65,37 +65,48 @@ void __xsm_action_mismatch_detected(void);
>  #define XSM_INLINE always_inline
>  #define XSM_DEFAULT_ARG xsm_default_t action,
>  #define XSM_DEFAULT_VOID xsm_default_t action
> -#define XSM_ASSERT_ACTION(def) LINKER_BUG_ON(def != action)
> +#define XSM_ASSERT_ACTION(def) LINKER_BUG_ON((def) != action)
>  
>  #endif /* CONFIG_XSM */
>  
>  static always_inline int xsm_default_action(
>      xsm_default_t action, struct domain *src, struct domain *target)
>  {
> -    switch ( action ) {
> -    case XSM_HOOK:
> +    /* TODO: these three if's could be squashed into one, decreasing
> +     *       the readability/logical reason-ability but may decrease the
> +     *       number of spectre gadgets
> +     */

Seeing this remark, I'm particularly puzzled by you dropping all
evaluate_nospec().

> +    if ( action & XSM_NONE )
>          return 0;
> -    case XSM_TARGET:
> -        if ( evaluate_nospec(src == target) )
> -        {
> -            return 0;
> -    case XSM_XS_PRIV:
> -            if ( evaluate_nospec(is_xenstore_domain(src)) )
> -                return 0;
> -        }
> -        /* fall through */
> -    case XSM_DM_PRIV:
> -        if ( target && evaluate_nospec(src->target == target) )
> -            return 0;
> -        /* fall through */
> -    case XSM_PRIV:
> -        if ( is_control_domain(src) )
> -            return 0;
> -        return -EPERM;
> -    default:
> -        LINKER_BUG_ON(1);
> -        return -EPERM;
> -    }
> +
> +    if ( (action & XSM_SELF) && ((!target) || (src == target)) )
> +        return 0;
> +
> +    if ( (action & XSM_TARGET) && ((target) && (src->target == target)) )
> +        return 0;

This is an inline function - no need to parenthesize individual
identifiers (also again below). Similarly no need to parenthesize
!target.

> +    /* XSM_DEV_EMUL is the only domain role with a condition, i.e. the
> +     * role only applies to a domain's target.
> +     */
> +    if ( (action & XSM_DEV_EMUL) && (src->xsm_roles & XSM_DEV_EMUL)
> +        && (target) && (src->target == target) )
> +        return 0;
> +
> +    /* Mask out SELF, TARGET, and DEV_EMUL as they have been handled */
> +    action &= ~(XSM_SELF & XSM_TARGET & XSM_DEV_EMUL);
> +
> +    /* Checks if the domain has one of the remaining roles set on it:
> +     *      XSM_PLAT_CTRL
> +     *      XSM_DOM_BUILD
> +     *      XSM_DOM_SUPER
> +     *      XSM_HW_CTRL
> +     *      XSM_HW_SUPER
> +     *      XSM_XENSTORE
> +     */
> +    if (src->xsm_roles & action)

There are style issues here again. I'm not going to mention such any
further. As to the comment, I'm seeing the risk of it ending up stale
the moment yet another role gets added. IOW I'm not convinced you
should enumerate the remaining ones here.
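Also note the masking step quoted further up: with disjoint bits, ~(XSM_SELF & XSM_TARGET & XSM_DEV_EMUL) is ~0 and clears nothing; the already-handled roles have to be OR-ed together before inverting. A stand-alone sketch (role values and the roles_permit() harness are simplified and hypothetical, mirroring the names in the patch):

```c
/* simplified stand-ins for the role bits in the quoted patch */
enum {
    XSM_SELF      = 1u << 0,
    XSM_TARGET    = 1u << 1,
    XSM_DEV_EMUL  = 1u << 2,
    XSM_DOM_SUPER = 1u << 3,
    XSM_XENSTORE  = 1u << 4,
};

struct dom { unsigned int xsm_roles; };

/* tail of the check: drop the roles already handled by the conditional
 * checks above, then pass iff any remaining requested role is granted */
static int roles_permit(unsigned int action, const struct dom *src)
{
    /* must be ~(A | B | C), not ~(A & B & C): for disjoint bits the
     * latter is ~0, so the &= would mask nothing out */
    action &= ~(XSM_SELF | XSM_TARGET | XSM_DEV_EMUL);
    return (src->xsm_roles & action) ? 0 : -1 /* -EPERM */;
}
```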

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:32:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 14:32:24 +0000
Date: Fri, 18 Jun 2021 16:32:12 +0200
From: Christoph Hellwig <hch@lst.de>
To: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Claire Chang <tientzu@chromium.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com,
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
Message-ID: <20210618143212.GA19284@lst.de>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-2-tientzu@chromium.org> <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s> <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com> <741a34cc-547c-984d-8af4-2f309880acfa@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <741a34cc-547c-984d-8af4-2f309880acfa@amd.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Jun 18, 2021 at 09:09:17AM -0500, Tom Lendacky wrote:
> > swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
> > and memblock_alloc[1] will do memset in memblock_alloc_try_nid[2], so
> > swiotlb_init_with_tbl is also good.
> > I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> > it's clearer and safer.
> 
> On x86, if the memset is done before set_memory_decrypted() and memory
> encryption is active, then the memory will look like ciphertext afterwards
> and not be zeroes. If zeroed memory is required, then a memset must be
> done after the set_memory_decrypted() calls.

Which should be fine - we don't care that the memory is cleared to 0,
just that it doesn't leak other data.  Maybe a comment would be useful,
though.
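Tom's ordering point can be illustrated with a toy model (every name
prefixed `model_` below is an illustrative stand-in, not a kernel API,
and XOR with a fixed key stands in for the hardware AES engine):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model: while a page is marked encrypted, stores pass through
 * the "encryption engine" (XOR with 0x5a); loads return the raw
 * page contents, which is what a decrypted mapping would see. */
enum { MODEL_PAGE_SIZE = 16 };

static uint8_t model_page[MODEL_PAGE_SIZE];
static int model_encrypted = 1;

static void model_set_memory_decrypted(void)
{
    model_encrypted = 0;
}

static void model_store(size_t i, uint8_t v)
{
    model_page[i] = model_encrypted ? (uint8_t)(v ^ 0x5a) : v;
}

static uint8_t model_load(size_t i)
{
    return model_page[i];
}

/* memset *before* the decryption flip: the zeroes were written
 * through the encryption engine, so the decrypted mapping reads
 * back "ciphertext", not zeroes. */
static uint8_t model_memset_then_decrypt(void)
{
    model_encrypted = 1;
    for (size_t i = 0; i < MODEL_PAGE_SIZE; i++)
        model_store(i, 0);
    model_set_memory_decrypted();
    return model_load(0);   /* nonzero */
}

/* memset *after* set_memory_decrypted(): really zero. */
static uint8_t model_decrypt_then_memset(void)
{
    model_set_memory_decrypted();
    for (size_t i = 0; i < MODEL_PAGE_SIZE; i++)
        model_store(i, 0);
    return model_load(0);   /* zero */
}
```

In this model the first variant reads back 0x5a where zeroes were
written, which is exactly why zeroing must follow the decryption call
when cleared memory is actually required.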


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:44:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 14:44:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144710.266314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luFjN-0006l7-95; Fri, 18 Jun 2021 14:44:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144710.266314; Fri, 18 Jun 2021 14:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luFjN-0006l0-5s; Fri, 18 Jun 2021 14:44:25 +0000
Received: by outflank-mailman (input) for mailman id 144710;
 Fri, 18 Jun 2021 14:44:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luFjL-0006ku-UJ
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 14:44:24 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bff9a01-a40b-4e0a-9143-1f90433de55e;
 Fri, 18 Jun 2021 14:44:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bff9a01-a40b-4e0a-9143-1f90433de55e
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46834676
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46834676"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Thread-Topic: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Thread-Index: AQHXUNzBxrtRVA/FskCGl/j6ZWY3UKsZ/xcA
Date: Fri, 18 Jun 2021 14:44:15 +0000
Message-ID: <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 9b9e8795-b770-4600-d196-08d932678b98
x-ms-traffictypediagnostic: PH0PR03MB5782:
x-microsoft-antispam-prvs: <PH0PR03MB578260E28685107AD88285EC990D9@PH0PR03MB5782.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <78F8D6E82E38E54F938BE012BF0E942B@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b9e8795-b770-4600-d196-08d932678b98
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 14:44:15.3539
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5782
X-OriginatorOrg: citrix.com



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Add a ContextOption type to support functional options in NewContext.
> Then, add a variadic ContextOption parameter to NewContext, which allows
> callers to specify 0 or more configuration options.
> 
> For now, just add the WithLogLevel option so that callers can set the
> log level of the Context's xentoollog_logger. Future configuration
> options can be created by adding an appropriate field to the
> contextOptions struct and creating a With<OptionName> function to return
> a ContextOption
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> ---
> tools/golang/xenlight/xenlight.go | 44 +++++++++++++++++++++++++++++--
> 1 file changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> index f68d7b6e97..65f93abe32 100644
> --- a/tools/golang/xenlight/xenlight.go
> +++ b/tools/golang/xenlight/xenlight.go
> @@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
> }
> 
> // NewContext returns a new Context.
> -func NewContext() (ctx *Context, err error) {
> +func NewContext(opts ...ContextOption) (ctx *Context, err error) {
> 	ctx = &Context{}
> 
> 	defer func() {
> @@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
> 		}
> 	}()
> 
> +	// Set the default context options. These fields may
> +	// be modified by the provided opts.
> +	copts := &contextOptions{
> +		logLevel: LogLevelError,
> +	}
> +
> +	for _, opt := range opts {
> +		opt.apply(copts)
> +	}
> +
> 	// Create a logger
> -	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
> +	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
> +		C.xentoollog_level(copts.logLevel), 0)
> 
> 	// Allocate a context
> 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
> @@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
> 	return nil
> }
> 
> +type contextOptions struct {
> +	logLevel LogLevel
> +}
> +
> +// ContextOption is used to configure options for a Context.
> +type ContextOption interface {
> +	apply(*contextOptions)
> +}
> +
> +type funcContextOption struct {
> +	f func(*contextOptions)
> +}
> +
> +func (fco *funcContextOption) apply(c *contextOptions) {
> +	fco.f(c)
> +}

Why all this convolution with interfaces and such, rather than just defining
ContextOption as a function pointer?  Is it just to keep contextOptions
out of the documentation page?

 -George
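The simpler shape George is asking about — ContextOption as a plain
function type instead of an interface — might look like the following
self-contained sketch (LogLevel, contextOptions and the constructor are
simplified stand-ins here, with no cgo):

```go
package main

import "fmt"

// Stand-in types; the real ones live in the xenlight package.
type LogLevel int

const LogLevelError LogLevel = 3

type contextOptions struct {
	logLevel LogLevel
}

// ContextOption as a plain function type: no interface and no
// funcContextOption wrapper struct are needed.
type ContextOption func(*contextOptions)

// WithLogLevel keeps the same call-site API as the interface version.
func WithLogLevel(l LogLevel) ContextOption {
	return func(c *contextOptions) { c.logLevel = l }
}

func newContextOptions(opts ...ContextOption) *contextOptions {
	// Defaults first, then let each option override them.
	copts := &contextOptions{logLevel: LogLevelError}
	for _, opt := range opts {
		opt(copts) // called directly instead of opt.apply(copts)
	}
	return copts
}

func main() {
	fmt.Println(newContextOptions(WithLogLevel(7)).logLevel) // prints 7
}
```

Callers see an identical API either way; the interface indirection only
matters if ContextOption is ever meant to carry more than one method or
to hide contextOptions from the generated documentation.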



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:47:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 14:47:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144717.266324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luFmK-0007Ss-RH; Fri, 18 Jun 2021 14:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144717.266324; Fri, 18 Jun 2021 14:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luFmK-0007Sl-OL; Fri, 18 Jun 2021 14:47:28 +0000
Received: by outflank-mailman (input) for mailman id 144717;
 Fri, 18 Jun 2021 14:47:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luFmJ-0007Sf-Ec
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 14:47:27 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f631ad9-c799-4198-923f-1c708ecad13a;
 Fri, 18 Jun 2021 14:47:26 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2050.outbound.protection.outlook.com [104.47.12.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-7GvDr04aO6CJJewxtSDw8A-1; Fri, 18 Jun 2021 16:47:24 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3936.eurprd04.prod.outlook.com (2603:10a6:803:23::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Fri, 18 Jun
 2021 14:47:18 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 14:47:18 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0072.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4b::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.9 via Frontend Transport; Fri, 18 Jun 2021 14:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f631ad9-c799-4198-923f-1c708ecad13a
X-MC-Unique: 7GvDr04aO6CJJewxtSDw8A-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [RFC PATCH 05/10] hardware domain: convert to domain roles
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: sstabellini@kernel.org, julien@xen.org, Volodymyr_Babchuk@epam.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 wl@xen.org, roger.pau@citrix.com, tamas@tklengyel.com, tim@xen.org,
 jgross@suse.com, aisaila@bitdefender.com, ppircalabu@bitdefender.com,
 dfaggioli@suse.com, paul@xen.org, kevin.tian@intel.com,
 dgdegra@tycho.nsa.gov, adam.schwalm@starlab.io, scott.davis@starlab.io,
 xen-devel@lists.xenproject.org
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
 <20210514205437.13661-6-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc8f6715-057b-6af8-8846-8b61fba5478d@suse.com>
Date: Fri, 18 Jun 2021 16:47:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210514205437.13661-6-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0072.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::23) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bb14c8ad-ff21-454a-6aab-08d93267f875
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3936:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB39368880CD54FCAF563AE9BBB30D9@VI1PR0402MB3936.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bb14c8ad-ff21-454a-6aab-08d93267f875
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 14:47:18.2589
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3936

On 14.05.2021 22:54, Daniel P. Smith wrote:
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -169,13 +169,14 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content,
>  static inline struct vcpu *choose_hwdom_vcpu(void)
>  {
>      unsigned idx;
> +    struct domain *hwdom = get_hardware_domain();

When introducing new pointer variables, please make them pointer-
to-const whenever possible.
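Jan's pointer-to-const suggestion might look like this minimal sketch
(the struct and accessor are simplified stand-ins, not Xen's real
definitions):

```c
#include <stddef.h>

/* Simplified stand-in for Xen's struct domain. */
struct domain {
    int domain_id;
};

static struct domain dom0 = { .domain_id = 0 };
static struct domain *hardware_domain = &dom0;

/* Returning pointer-to-const means read-only callers can hold the
 * hardware domain without being able to modify it; the compiler
 * rejects any accidental store through the returned pointer. */
static const struct domain *get_hardware_domain_const(void)
{
    return hardware_domain;
}
```

A caller such as choose_hwdom_vcpu() would then declare
`const struct domain *hwdom = get_hardware_domain_const();` and keep
the const qualification through the rest of the function.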

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4917,7 +4917,7 @@ mfn_t alloc_xen_pagetable_new(void)
>      {
>          void *ptr = alloc_xenheap_page();
>  
> -        BUG_ON(!hardware_domain && !ptr);
> +        BUG_ON(!ptr);

This loses an important aspect: We should not crash here anymore once
we've made it far enough to have started constructing Dom0. As you can
see ...

>          return ptr ? virt_to_mfn(ptr) : INVALID_MFN;

... here, the case does actually get handled.

If you make behavioral changes in, especially, an otherwise largely
(seeing its overall size) mechanical change, please make sure you call
them out in the description.

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -776,6 +776,9 @@ static struct domain *__init create_dom0(const module_t *image,
>      if ( IS_ERR(d) || (alloc_dom0_vcpu0(d) == NULL) )
>          panic("Error creating domain 0\n");
>  
> +    /* Ensure the correct roles are assigned */
> +    d->xsm_roles = CLASSIC_DOM0_PRIVS;

Didn't an earlier change put this in place already? This shouldn't be
needed in arch-specific code. The cover letter also doesn't mention
that you're not touching Arm code in this RFC, so a similar change
would then be missing there.

> @@ -302,23 +303,50 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      return NULL;
>  }
>  
> -static int late_hwdom_init(struct domain *d)
> +/* pivot_hw_ctl:
> + *  This is a one-way pivot from existing to new hardware domain. Upon success
> + *  the domain *next_hwdom will be in control of the hardware and domain
> + *  *curr_hwdom will no longer have access.
> + */
> +static int pivot_hw_ctl(struct domain *next_hwdom)
>  {
>  #ifdef CONFIG_LATE_HWDOM
> -    struct domain *dom0;
> +    bool already_found = false;
> +    struct domain **pd = &domain_list, *curr_hwdom = NULL;
> +    domid_t dom0_id = 0;
>      int rv;
>  
> -    if ( d != hardware_domain || d->domain_id == 0 )
> +#ifdef CONFIG_PV_SHIM
> +    /* On PV shim dom0 != 0 */
> +    dom0_id = get_initial_domain_id();
> +#endif

The sudden need for shim-specific logic here also wants explaining
in the description (or, if possible, splitting it out into a separate
change).

> @@ -559,17 +589,19 @@ struct domain *domain_create(domid_t domid,
>      /* Sort out our idea of is_control_domain(). */
>      d->is_privileged = is_priv;
>  
> -    if (is_priv)
> +    /* reality is that is_priv is only set when construction dom0 */
> +    if (is_priv) {
>          d->xsm_roles = CLASSIC_DOM0_PRIVS;
> +        hardware_domain = d;
> +    }
>  
>      /* Sort out our idea of is_hardware_domain(). */
> -    if ( domid == 0 || domid == hardware_domid )
> +    if ( domid == hardware_domid )

With this change it looks to me as if ...

>      {
>          if ( hardware_domid < 0 || hardware_domid >= DOMID_FIRST_RESERVED )
>              panic("The value of hardware_dom must be a valid domain ID\n");

... this was rendered dead code.

> -        old_hwdom = hardware_domain;
> -        hardware_domain = d;
> +        d->xsm_roles = CLASSIC_HWDOM_PRIVS;

Yet another place where this value gets stored. Ideally there would
be exactly one such place.

> @@ -682,12 +714,14 @@ struct domain *domain_create(domid_t domid,
>          if ( (err = sched_init_domain(d, 0)) != 0 )
>              goto fail;
>  
> -        if ( (err = late_hwdom_init(d)) != 0 )
> +        if ( (err = pivot_hw_ctl(d)) != 0 )
>              goto fail;
>  
>          /*
>           * Must not fail beyond this point, as our caller doesn't know whether
> -         * the domain has been entered into domain_list or not.
> +         * the domain has been entered into domain_list or not. Additionally
> +         * if a hardware control pivot occurred then a failure will leave the
> +         * platform without access to hardware.
>           */

s/will/would/, considering the initial "Must not ..."?

> @@ -711,8 +745,6 @@ struct domain *domain_create(domid_t domid,
>      err = err ?: -EILSEQ; /* Release build safety. */
>  
>      d->is_dying = DOMDYING_dead;
> -    if ( hardware_domain == d )
> -        hardware_domain = old_hwdom;
>      atomic_set(&d->refcnt, DOMAIN_DESTROYED);
>  
>      sched_destroy_domain(d);

Isn't this dealing with earlier failures, and hence needs if not
retaining, then replacing?

> @@ -808,6 +840,42 @@ out:
>  }
>  
>  

I realize you've found a pair of blank lines here, but rather than ...

> +bool is_hardware_domain_started()
> +{
> +    bool exists = false;
> +    struct domain **pd = &domain_list;
> +
> +    if ( *pd != NULL) {
> +        rcu_read_lock(&domlist_read_lock);
> +
> +        for ( ; *pd != NULL; pd = &(*pd)->next_in_list )
> +            if ( (*pd)->xsm_roles & XSM_HW_CTRL )
> +                break;
> +
> +        rcu_read_unlock(&domlist_read_lock);
> +
> +        if ( *pd != NULL )
> +            exists = true;
> +    }
> +
> +    if (exists)
> +        ASSERT(*pd == hardware_domain);
> +
> +    return exists;
> +}
> +
> +

... adding more and ...

> +struct domain *get_hardware_domain()
> +{
> +    if (hardware_domain == NULL)
> +        return NULL;
> +
> +    ASSERT(hardware_domain->xsm_roles & XSM_HW_CTRL);
> +
> +    return hardware_domain;
> +}
> +
> +

... yet more, please insert in the middle of those original two
blank lines. Patch application (especially when larger offsets
are involved, e.g. during backporting activities) benefits from
meaningful context lines rather than many almost identical ones
(and then even relatively close to each other).

As to is_hardware_domain_started() - I'm afraid this is too much
overhead in case there are hundreds or thousands of guests.
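
For illustration, a constant-time check could consult the hardware_domain
pointer directly instead of walking domain_list. This is only a sketch
against stand-in types (the struct domain here and the XSM_HW_CTRL value
are mock-ups, not Xen's real definitions):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the Xen definitions in the quoted hunk; values hypothetical. */
#define XSM_HW_CTRL (1U << 0)

struct domain {
    unsigned int xsm_roles;
};

static struct domain *hardware_domain;

/* O(1): consult the hardware_domain pointer directly instead of walking
 * domain_list; no RCU read section is needed just to test existence. */
static bool is_hardware_domain_started(void)
{
    return hardware_domain != NULL &&
           (hardware_domain->xsm_roles & XSM_HW_CTRL);
}
```

This avoids the list walk entirely, so the cost no longer scales with the
number of guests.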

> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -228,12 +228,12 @@ static void dump_hwdom_registers(unsigned char key)
>  {
>      struct vcpu *v;
>  
> -    if ( hardware_domain == NULL )
> +    if ( is_hardware_domain_started() )
>          return;

Aren't you inverting the original condition?
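
To illustrate the inversion: with the quoted change, the dump is skipped
exactly when a hardware domain exists, i.e. in the one case it used to
run. A sketch of the guard keeping the original sense (stand-in types,
not the real keyhandler code):

```c
#include <stdbool.h>

/* Hypothetical stand-ins; the real handler iterates the hwdom's vCPUs. */
static bool hwdom_present;
static unsigned int dumps_done;

static bool is_hardware_domain_started(void)
{
    return hwdom_present;
}

/* The guard must be negated relative to the quoted hunk: bail out when
 * there is NO hardware domain, and dump when there is one. */
static void dump_hwdom_registers(void)
{
    if ( !is_hardware_domain_started() )
        return;
    dumps_done++;  /* the real code would dump each vCPU's registers here */
}
```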

> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -776,7 +776,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>      ret = 0;
>      if ( !pdev->domain )
>      {
> -        pdev->domain = hardware_domain;
> +        pdev->domain = get_hardware_domain();
>          ret = iommu_add_device(pdev);
>          if ( ret )
>          {
> @@ -784,7 +784,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>              goto out;
>          }
>  
> -        list_add(&pdev->domain_list, &hardware_domain->pdev_list);
> +        list_add(&pdev->domain_list, &pdev->domain->pdev_list);

It's not immediately obvious that pdev->domain couldn't have changed
by the time we make it here - did you check? I consider this possible
in principle, if e.g. in an error case the device got associated
with the quarantine domain.

> @@ -879,7 +879,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>      if ( ret )
>          goto out;
>  
> -    if ( pdev->domain == hardware_domain  )
> +    if ( is_hardware_domain(pdev->domain) )
>          pdev->quarantine = false;
>  
>      pdev->fault.count = 0;
> @@ -1403,7 +1403,7 @@ static int device_assigned(u16 seg, u8 bus, u8 devfn)
>       * domain or dom_io then it must be assigned to a guest, or be
>       * hidden (owned by dom_xen).
>       */
> -    else if ( pdev->domain != hardware_domain &&
> +    else if ( !is_hardware_domain(pdev->domain) &&
>                pdev->domain != dom_io )
>          rc = -EBUSY;

May I ask that you split out such cleanups of open-coded helpers into
a separate (prereq) patch, especially when (like here) the containing
patch is already a pretty big one?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -475,6 +475,7 @@ struct domain
>  #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
>  #define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
>  		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
> +#define CLASSIC_HWDOM_PRIVS (XSM_HW_CTRL | XSM_DEV_EMUL)

Oh, maybe I was wrong in saying that the same value gets put in
place in multiple locations.
Dom0 and hwdom needs calling out in the description. I'm not
convinced of the inclusion of XSM_DEV_EMUL.

I also think CLASSIC_DOM0_PRIVS then should use CLASSIC_HWDOM_PRIVS
instead of re-enumerating what the latter contains, unless there's
a definitive plan for the latter to include bits the former
shouldn't include.
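
A sketch of that suggestion (the bit values here are hypothetical
stand-ins, apart from XSM_XENSTORE which the quoted hunk shows as
1U<<31; only the set relationships matter):

```c
/* Hypothetical stand-in bit values, except XSM_XENSTORE from the hunk. */
#define XSM_PLAT_CTRL (1U << 25)
#define XSM_DOM_BUILD (1U << 26)
#define XSM_DOM_SUPER (1U << 27)
#define XSM_DEV_EMUL  (1U << 28)
#define XSM_HW_CTRL   (1U << 29)
#define XSM_HW_SUPER  (1U << 30)
#define XSM_XENSTORE  (1U << 31)

/* Suggested structuring: CLASSIC_DOM0_PRIVS builds on
 * CLASSIC_HWDOM_PRIVS rather than re-enumerating its contents. */
#define CLASSIC_HWDOM_PRIVS (XSM_HW_CTRL | XSM_DEV_EMUL)
#define CLASSIC_DOM0_PRIVS (CLASSIC_HWDOM_PRIVS | XSM_PLAT_CTRL | \
                            XSM_DOM_BUILD | XSM_DOM_SUPER | \
                            XSM_HW_SUPER | XSM_XENSTORE)
```

With this shape, the Dom0 set is a strict superset of the hwdom set by
construction, and any later change to the hwdom privileges propagates
automatically.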

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:47:41 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 09/12] golang/xenlight: add DomainDestroy wrapper
Date: Fri, 18 Jun 2021 14:47:34 +0000
Message-ID: <4229142D-38F0-4ABB-A509-6E629677948D@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <82c68547f4cec1c82132cd6a867696f4b38dcd3d.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <82c68547f4cec1c82132cd6a867696f4b38dcd3d.1621887506.git.rosbrookn@ainfosec.com>



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Add a wrapper around libxl_domain_destroy.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>




From xen-devel-bounces@lists.xenproject.org Fri Jun 18 14:54:39 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 10/12] golang/xenlight: add SendTrigger wrapper
Date: Fri, 18 Jun 2021 14:54:27 +0000
Message-ID: <6396CE24-4BFE-4AE7-94D4-5EF970FDB861@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <7788e3f5f1af622782ede1b879f4f02ec63fa546.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <7788e3f5f1af622782ede1b879f4f02ec63fa546.1621887506.git.rosbrookn@ainfosec.com>



> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Add a wrapper around libxl_send_trigger.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:03:33 2021
Subject: Re: [PATCH 3/5] libxencall: introduce variant of xencall2() returning
 long
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
 <0804f068-c016-0099-cebb-dbb8b7f1b794@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <82885eae-ea6f-e463-a966-78a0c753d93e@suse.com>
Date: Fri, 18 Jun 2021 17:03:19 +0200
In-Reply-To: <0804f068-c016-0099-cebb-dbb8b7f1b794@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P251CA0001.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:102:b5::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e450a6a8-99bb-4e56-4ba9-08d9326a3708
X-MS-TrafficTypeDiagnostic: VI1PR04MB6864:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6864D8D6B56B8331EEE278A3B30D9@VI1PR04MB6864.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e450a6a8-99bb-4e56-4ba9-08d9326a3708
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:03:22.1598
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6864

On 18.06.2021 15:46, Andrew Cooper wrote:
> On 18/06/2021 11:24, Jan Beulich wrote:
>> Some hypercalls, memory-op in particular, can return values requiring
>> more than 31 bits to represent. Hence the underlying layers need to make
>> sure they won't truncate such values.
>>
>> While adding the new function to the map file, I noticed the stray
>> xencall6 there. I'm taking the liberty to remove it at this occasion. If
>> such a function would ever appear, it shouldn't lane in version 1.0.
>
> s/lane/land/ ?

Yeah, spotted this already.

> I'm tempted to suggest splitting this out into a separate change anyway.
> I'm not sure of the implications on the ABI.

There are none, as a non-existing symbol can't possibly appear in a
DSO's symbol table. But well, yes, I can certainly make this a
separate change; it merely seemed excessive to me because of the
no-op effect the change has at this point in time.

> ABI-dumper appears not to have picked anything up, nor has readelf on
> the object itself, so we're probably ok ABI wise.
>
> That said, I would really have expected a compile/link error for a bad
> symbol in a map file.

So would I, but reality tells us otherwise.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I wasn't sure whether equivalents for the other xencall<N>() should also
>> be introduced, and hence went for the minimal solution first. Otoh there
>> is also xencall0() which has no users ...
>>
>> --- a/tools/include/xencall.h
>> +++ b/tools/include/xencall.h
>> @@ -113,6 +113,10 @@ int xencall5(xencall_handle *xcall, unsi
>>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>>               uint64_t arg4, uint64_t arg5);
>>
>> +/* Variant(s) of the above, as needed, returning "long" instead of "int". */
>> +long xencall2L(xencall_handle *xcall, unsigned int op,
>
> If we're fixing ABIs, can we see about not truncating op on the way up?

You mean making it unsigned long, when I don't see us ever
gathering enough hypercalls. Even if it were flags to add in, they
would surely fit in the low 32 bits. I'm afraid there's too much
code out there assuming the hypercall numbers can be passed in the
low half of a 64-bit register.

But of course, if you insist ...

>> --- a/tools/libs/call/core.c
>> +++ b/tools/libs/call/core.c
>> @@ -127,6 +127,17 @@ int xencall2(xencall_handle *xcall, unsi
>>      return osdep_hypercall(xcall, &call);
>>  }
>>
>> +long xencall2L(xencall_handle *xcall, unsigned int op,
>> +               uint64_t arg1, uint64_t arg2)
>> +{
>> +    privcmd_hypercall_t call = {
>> +        .op = op,
>> +        .arg =3D { arg1, arg2 },
>> +    };
>> +
>> +    return osdep_hypercall(xcall, &call);
>
> (If we're not changing op), I take it there are no alias tricks we can
> play to reuse the same implementation?

Re-use would only be possible if we knew that all psABI-s match up
wrt the treatment of a "long" value becoming the return value of
a function returning "int". An ABI might require sign-extension to
register width (leaving aside yet more exotic options). Then, yes,
the "int" returning one(s) could become alias(es) of the "long"
returning ones.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:05:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:05:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144742.266368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG3e-0003Af-BA; Fri, 18 Jun 2021 15:05:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144742.266368; Fri, 18 Jun 2021 15:05:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG3e-0003AY-81; Fri, 18 Jun 2021 15:05:22 +0000
Received: by outflank-mailman (input) for mailman id 144742;
 Fri, 18 Jun 2021 15:05:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luG3d-0003AO-Bg
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:05:21 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 338245cc-a8a0-4467-8a96-979c10d41936;
 Fri, 18 Jun 2021 15:05:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 338245cc-a8a0-4467-8a96-979c10d41936
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46469213
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46469213"
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <8a5284e3-a029-27c8-103c-cbc12642d24d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/5] libxc: use multicall for memory-op on Linux (and
 Solaris)
Message-ID: <db8662c7-9641-6fb2-54e9-c0e64e03b990@citrix.com>
Date: Fri, 18 Jun 2021 16:05:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <8a5284e3-a029-27c8-103c-cbc12642d24d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0359.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::35) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fa5c52c4-20bc-43e8-92ee-08d9326a777c
X-MS-TrafficTypeDiagnostic: BYAPR03MB4117:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB41171C2C41C158D138F25211BA0D9@BYAPR03MB4117.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:854;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: fa5c52c4-20bc-43e8-92ee-08d9326a777c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:05:10.4869
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4117
X-OriginatorOrg: citrix.com

On 18/06/2021 11:24, Jan Beulich wrote:
> Some sub-functions, XENMEM_maximum_gpfn in particular, can return values
> requiring more than 31 bits to represent. Hence we cannot issue the
> hypercall directly when the return value of ioctl() is used to propagate
> this value (note that this is not the case for the BSDs, and MiniOS
> already wraps all hypercalls in a multicall).
>
> Suggested-by: Jürgen Groß <jgross@suse.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/tools/libs/ctrl/xc_private.c
> +++ b/tools/libs/ctrl/xc_private.c
> @@ -337,8 +337,47 @@ long do_memory_op(xc_interface *xch, int
>          goto out1;
>      }
>
> -    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
> -                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
> +#if defined(__linux__) || defined(__sun__)
> +    /*
> +     * Some sub-ops return values which don't fit in "int". On platforms
> +     * without a specific hypercall return value field in the privcmd
> +     * interface structure, issue the request as a single-element multicall,
> +     * to be able to capture the full return value.
> +     */
> +    if ( sizeof(long) > sizeof(int) )

This is very fragile.  I spent a while coming up with

    __builtin_types_compatible_p(
        typeof(ioctl) *,
        long (*)(int, unsigned long, ...));

(which does work if you change int for long), just to realise that this
won't actually help.  I'm stuck on trying to see whether
privcmd_hypercall_t has a result member.

> +    {
> +        multicall_entry_t multicall = {
> +            .op = __HYPERVISOR_memory_op,
> +            .args[0] = cmd,
> +            .args[1] = HYPERCALL_BUFFER_AS_ARG(arg),
> +        }, *call = &multicall;
> +        DECLARE_HYPERCALL_BOUNCE(call, sizeof(*call),
> +                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +
> +        if ( xc_hypercall_bounce_pre(xch, call) )
> +        {
> +            PERROR("Could not bounce buffer for memory_op hypercall");
> +            goto out1;
> +        }
> +
> +        ret = do_multicall_op(xch, HYPERCALL_BUFFER(call), 1);
> +
> +        xc_hypercall_bounce_post(xch, call);
> +
> +        if ( !ret )
> +        {
> +            ret = multicall.result;
> +            if ( multicall.result > ~0xfffUL )

Wouldn't this be clearer as > -4095 ?

~Andrew

> +            {
> +                errno = -ret;
> +                ret = -1;
> +            }
> +        }
> +    }
> +    else
> +#endif
> +        ret = xencall2L(xch->xcall, __HYPERVISOR_memory_op,
> +                        cmd, HYPERCALL_BUFFER_AS_ARG(arg));
>
>      xc_hypercall_bounce_post(xch, arg);
>   out1:
>
>




From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:06:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:06:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144748.266380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG52-0003nr-N8; Fri, 18 Jun 2021 15:06:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144748.266380; Fri, 18 Jun 2021 15:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG52-0003nk-J5; Fri, 18 Jun 2021 15:06:48 +0000
Received: by outflank-mailman (input) for mailman id 144748;
 Fri, 18 Jun 2021 15:06:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luG50-0003ne-Nm
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:06:46 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6dfbde3-9f32-4a01-b80d-48a217fc3b94;
 Fri, 18 Jun 2021 15:06:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6dfbde3-9f32-4a01-b80d-48a217fc3b94
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 48057106
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="48057106"
Subject: Re: [PATCH 5/5] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bcbf4a71-bf30-5a9e-a399-d4366ee95423@citrix.com>
Date: Fri, 18 Jun 2021 16:06:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0241.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f53653f1-849e-4e23-0b19-08d9326aad11
X-MS-TrafficTypeDiagnostic: BYAPR03MB3560:
X-Microsoft-Antispam-PRVS: <BYAPR03MB3560259CD1B1658878496865BA0D9@BYAPR03MB3560.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: f53653f1-849e-4e23-0b19-08d9326aad11
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:06:40.4511
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3560
X-OriginatorOrg: citrix.com

On 18/06/2021 11:25, Jan Beulich wrote:
> libxc generally uses uint32_t to represent domain IDs. This is fine as
> long as addresses of such variables aren't taken, to then pass into
> hypercalls: To the hypervisor, a domain ID is a 16-bit value. Use an
> intermediate variable to deal with the issue. (On architectures with
> arguments passed in registers, such an intermediate variable would have
> been created by the compiler already anyway, just one of the wrong
> type.)
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -856,7 +856,9 @@ int xc_domain_get_tsc_info(xc_interface
>  
>  int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
>  {
> -    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
> +    domid_t xen_domid = domid;
> +    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &xen_domid,
> +                           sizeof(xen_domid));

Why on earth do we pass the domid in by pointer and not value?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:08:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144755.266391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG6u-0004Ws-73; Fri, 18 Jun 2021 15:08:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144755.266391; Fri, 18 Jun 2021 15:08:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luG6u-0004Wl-3V; Fri, 18 Jun 2021 15:08:44 +0000
Received: by outflank-mailman (input) for mailman id 144755;
 Fri, 18 Jun 2021 15:08:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tyHv=LM=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1luG6t-0004Wf-CR
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:08:43 +0000
Received: from mail-qv1-xf2c.google.com (unknown [2607:f8b0:4864:20::f2c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6eb31af-5162-4e15-9195-17c501e876a0;
 Fri, 18 Jun 2021 15:08:42 +0000 (UTC)
Received: by mail-qv1-xf2c.google.com with SMTP id f5so3696481qvu.8
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 08:08:42 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id q199sm4159540qka.112.2021.06.18.08.08.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 18 Jun 2021 08:08:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6eb31af-5162-4e15-9195-17c501e876a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=Q53gjyKhiXb/mscfh6FFlpfjv7cXuB9K+6OcUvjvCis=;
        b=NGx0d1rI3V6GEEabhodllrlWYofKmYve6mvtaGpNdqWCxLCPdjfv9I+UxP5TAc3pzv
         JFHJFg8Y5bNqq69wLU1XmbflRx+pBddgdSrMqSF3jG1ucCRhU9ljH2KqBOYQ0WUs5QtT
         z3iRbcT5x8zgdl8hL54UGZlSzU1QtvB5Re0GxzdhzRlXqohOr7szGuIyMrKT215kkL7c
         0QgpVKiTreZx6D2tfzy33QhTBa+JI35z6IBG1ua8mRBn5XLoO+RQkNPciqoMY4WLUiPY
         ukSgpitcHVcIcKxjNHkY4gJJXZyGdlkuQE2ixPyIQJWj9Qh18P+CXsNbEtJeOkUkUmCi
         sWgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=Q53gjyKhiXb/mscfh6FFlpfjv7cXuB9K+6OcUvjvCis=;
        b=GsVAshGQEKM4YiHjnRg2fcNa5muCuSXU9Zpk2HHtSxubPu/3jaqrBY52ZuO6KRnRJA
         I6K6dzLQ0hJWHnVh9B9thnUYRoR9KgfHDN8C3i7kxdWSkCx2jEr3E1VlDHkjDKv7/7SZ
         dEtTtjKw0JjD90ws3a+ezuy/Jau1DZ1HCjJH34603o7iWEANljEwgG/2pZAkUaxjIexj
         AqCbo10jkHfTpk5rpIY1xY+eBCuum//pE/IpNF3KYmas3qF0b6XepHHOCCj4yK51Q804
         1E/lPpuPZzQgpFe6Gzxc69t7zjZBUOYQic8JrYvTSy4SDPwh2qXwSFMuxvY81q/PEJkE
         s5BQ==
X-Gm-Message-State: AOAM531Ehf8tzyDsHeeIppUb2KZYn1WJahN7OrqZxNQtNyMr2tHYJxOM
	aIII+BAlsKDjzLKiChjehk8=
X-Google-Smtp-Source: ABdhPJybQoN+zlAvZDzb6FB4+HcM2QDPMj6NnAGbyUMWT2HzWgU59O+ls6mmww2mkMbWEAhV1q8D1w==
X-Received: by 2002:ad4:54f2:: with SMTP id k18mr6251358qvx.32.1624028920868;
        Fri, 18 Jun 2021 08:08:40 -0700 (PDT)
Date: Fri, 18 Jun 2021 11:08:37 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Message-ID: <YMy29arbPMnPI/+W@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
 <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>

On Fri, Jun 18, 2021 at 02:44:15PM +0000, George Dunlap wrote:
> 
> 
> > On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > Add a ContextOption type to support functional options in NewContext.
> > Then, add a variadic ContextOption parameter to NewContext, which allows
> > callers to specify 0 or more configuration options.
> > 
> > For now, just add the WithLogLevel option so that callers can set the
> > log level of the Context's xentoollog_logger. Future configuration
> > options can be created by adding an appropriate field to the
> > contextOptions struct and creating a With<OptionName> function to return
> > a ContextOption.
> > 
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> > ---
> > tools/golang/xenlight/xenlight.go | 44 +++++++++++++++++++++++++++++--
> > 1 file changed, 42 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> > index f68d7b6e97..65f93abe32 100644
> > --- a/tools/golang/xenlight/xenlight.go
> > +++ b/tools/golang/xenlight/xenlight.go
> > @@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
> > }
> > 
> > // NewContext returns a new Context.
> > -func NewContext() (ctx *Context, err error) {
> > +func NewContext(opts ...ContextOption) (ctx *Context, err error) {
> > 	ctx = &Context{}
> > 
> > 	defer func() {
> > @@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
> > 		}
> > 	}()
> > 
> > +	// Set the default context options. These fields may
> > +	// be modified by the provided opts.
> > +	copts := &contextOptions{
> > +		logLevel: LogLevelError,
> > +	}
> > +
> > +	for _, opt := range opts {
> > +		opt.apply(copts)
> > +	}
> > +
> > 	// Create a logger
> > -	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
> > +	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
> > +		C.xentoollog_level(copts.logLevel), 0)
> > 
> > 	// Allocate a context
> > 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
> > @@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
> > 	return nil
> > }
> > 
> > +type contextOptions struct {
> > +	logLevel LogLevel
> > +}
> > +
> > +// ContextOption is used to configure options for a Context.
> > +type ContextOption interface {
> > +	apply(*contextOptions)
> > +}
> > +
> > +type funcContextOption struct {
> > +	f func(*contextOptions)
> > +}
> > +
> > +func (fco *funcContextOption) apply(c *contextOptions) {
> > +	fco.f(c)
> > +}
> 
> Why all this convolution with interfaces and such, rather than just defining ContextOption as a function pointer?  Is it just to keep contextOptions out of the documentation page?

Part of the motivation for using functional options is to abstract the
"options" struct, yes. This allows internal defaults to be applied more
easily -- if you require e.g. a ContextOptions struct to be passed by
the caller, how do you know if they intended to override a default, or
if they just didn't set the field? Additionally, using the ContextOption
as an interface allows variadic arguments, which are just convenient for
API users -- the same NewContext function can be used whether you need
to pass 3 options or 0.

The reason we use ContextOption as an interface, rather than a function
pointer of sorts, is flexibility in the signatures of ContextOption
implementations. E.g., we could have

func WithLogLevel(lvl LogLevel) ContextOption
func WithLogContext(s string) ContextOption
func WithFooAndBar(s string, n int) ContextOption

See [1] for more background on this pattern.

Thanks,
NR

[1] https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
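The pattern described above can be sketched in a self-contained form. This is an illustrative reduction of the patch, not the exact xenlight code: the LogLevel values and newContextOptions helper are stand-ins, and the real NewContext feeds the resulting options into the C logger.

```go
package main

import "fmt"

// LogLevel stands in for the xentoollog levels; values are illustrative,
// not the real C constants.
type LogLevel int

const (
	LogLevelError LogLevel = iota
	LogLevelDebug
)

type contextOptions struct {
	logLevel LogLevel
}

// ContextOption is an interface rather than a bare func type so that
// future options of any constructor signature share the same variadic slot.
type ContextOption interface {
	apply(*contextOptions)
}

type funcContextOption struct {
	f func(*contextOptions)
}

func (fco *funcContextOption) apply(c *contextOptions) { fco.f(c) }

// WithLogLevel returns a ContextOption overriding the default log level.
func WithLogLevel(lvl LogLevel) ContextOption {
	return &funcContextOption{f: func(c *contextOptions) { c.logLevel = lvl }}
}

// newContextOptions applies opts over the defaults, as NewContext does:
// an option left unspecified keeps its default rather than the zero value.
func newContextOptions(opts ...ContextOption) *contextOptions {
	copts := &contextOptions{logLevel: LogLevelError} // defaults
	for _, opt := range opts {
		opt.apply(copts)
	}
	return copts
}

func main() {
	fmt.Println(newContextOptions().logLevel)                            // default (LogLevelError)
	fmt.Println(newContextOptions(WithLogLevel(LogLevelDebug)).logLevel) // overridden
}
```

Because the options struct is unexported, callers can only express intent through With* constructors, which is what lets defaults and explicit settings be distinguished.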


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:13:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:13:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144761.266402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGBC-0005vF-MS; Fri, 18 Jun 2021 15:13:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144761.266402; Fri, 18 Jun 2021 15:13:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGBC-0005v8-JU; Fri, 18 Jun 2021 15:13:10 +0000
Received: by outflank-mailman (input) for mailman id 144761;
 Fri, 18 Jun 2021 15:13:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luGBB-0005v2-Ut
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:13:09 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14f66d6d-1e4b-41ec-8d4c-e21849ea2e84;
 Fri, 18 Jun 2021 15:13:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14f66d6d-1e4b-41ec-8d4c-e21849ea2e84
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624029188;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=zL7N5KFhmFYYv0XI5Kv9FvWQfJWS92wFtadgO96hjps=;
  b=fdwpLqP6Qylttk3d5s5hkcFzNqFQKhI2nT7HfWwL9G/a82IAktRTLGs6
   YMjoWx0JCamb74CU49ugJanSCh9mBS4/Q+MxVtifMK6DnxdF6lB92314u
   v195Cmr0QZ4JcsgxKm6Z+0djxx51sQdvTdXR8+v/1TC1jeFBgoa3Gr4aK
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: yxLrxzVWBPX5BrauzYGrM3EzmamD8m0d8ockMkQXm5seTqgSbYGl8ULGi8j6lY9ckZIXJiDO5u
 EHdWhI55tuulrQCX/QPFoJ2wRBpf57I0cB3ekMLCC/U0Xx4lzU0IMQxyo613PzDFtWobFbD2s+
 I6lcT1bNlpfmabtpBlhfibGZby3CCcKgKJbVp3/YM8snEuXrBkH6dYhZMDQIYnqqHgYPZfO0FX
 V28H1/IXDZeWZYafJKXluyATneP+QcAFCJmKO959f3K9qR0TlPgPy/EgFA/LTjjtWYSPokawUR
 pH0=
X-SBRS: 5.1
X-MesageID: 46470061
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:bfAx/KyNx9T02tmBW3OiKrPxkuskLtp133Aq2lEZdPULSKOlfp
 GV8MjziyWYtN9IYgBcpTiBUJPwJE81bfZOkMgs1MSZLXXbUQyTXcFfBOrZsnPd8kjFmNK1up
 0QCpSWZOeAbmSSyPyKmjVQcOxQg+VvkprY/ds2pk0FJWoBCsFdBkVCe32m+yVNNVR77PECZf
 6hD7981lydkAMsH6OG7xc+Lor+juyOsKijTQ8NBhYh5gXLpyiv8qTGHx+R2Qpbey9TwJ85mF
 K10TDR1+GGibWW2xXc32jc49B9g9360OZOA8SKl4w8NijssAC1f45sMofy+Qzd4dvfrGrCou
 O85SvIDP4Dsk85uVvF+ScF7jOQlwrGLUWSkmNwz0GT+/ARDwhKdPapzbgpDCcxrXBQ4O2UmZ
 g7r16xpt5ZCwjNkz/64MWNXxZ2llCsqX5niuILiWdDOLFuIoO4PeQkjTJo+bo7bWrHAbocYa
 JT5QDnlYFrWELfa2qcsnhkwdSqUHh2FhCaQlIassjQ1zRNhnh2w0YR2cRaxx47hd4AYogB4/
 6BPrVjlblIQMNTZaVhBP0ZSc/yDmDWWxrDPG+bPFyiHqAaPHDGrYLx/dwOlayXkVwzvdIPcb
 H6IRxlXEIJCjfT4Py1ret2G0r2MReAtBzWu7VjDrZCy87BeIY=
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46470061"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VjHCnrJ0EN14y2xFlE3N4ejP4NPg1rFETrmh/39qXmFlicCWm431AtVpt85IW9YwxCOJdR806H+qPlZy+VdylB8FcrIciTbOEZA8l2rAy8oKSXueIkXgHDc2sruRh0aRqkPYNBh6VJkIyCd7w40CWNdw7LaZD37aYWMyYDQukMKjcNj4Nl63SIxi12QiJ0tB3V95fqhzFGRaAlnHWOKOUQDDqSUthbyZMZocxfUyfD/PDAM3IPbO0brDOez40R0erFfRy+YUK2BfvhDMgcNkOOXZOeuFe72A6yWuGvqTzNkGrJQIBLg8PImD6Zo7uQiA/fMhA6tgHQY6DTzLBpIk0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zL7N5KFhmFYYv0XI5Kv9FvWQfJWS92wFtadgO96hjps=;
 b=CNxs2JCkknQJSKDQjLRbZiu+DY0izHAXwmPlyOITWsXNUbS7C9uotKBdEk7h+I3GXIhFp0NsPsRzTVF9/blHoCVwnNoCqkWKPeHiFRVuihY4ie0QTdSs/og5VdRKkL7cCypEt5Zznunj2L3WkeBk8QJrWSkGTH5/PepnB98KBT7gCIydKYWBmUtjjRETex9s5A4iiG26uMFRPXCx1tQt3AGVymavsrnngkMRnJw58lbmrSlHjPIvyiNJcGZBNGUvqRt4rVyUQH9NY0dxh/Nf4xLAceYZ0+GOdr0pIHhsm1eHnXYiUYqK/g7C5a7RbwzusAv3dNWd8xs8FqIT68kHfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zL7N5KFhmFYYv0XI5Kv9FvWQfJWS92wFtadgO96hjps=;
 b=cWpkNLzh4SG5sg7iioaz9MsGDIrJMh/kEiR8mGCbYEvMxCNDGQ75sGbNSUi5MUYhjzrUrtBXIQ+cSbguaenDMMHi7SXgBsIT8n4HOE46N5CCi7/Lwl0HROUR4ZLJdoygdjwgu3OZJJ0/hu3J1mxTQM2WV2BqFjn6U3P3Fg64zcA=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 11/12] golang/xenlight: do not negate ret when
 converting to Error
Thread-Topic: [RESEND PATCH 11/12] golang/xenlight: do not negate ret when
 converting to Error
Thread-Index: AQHXUNy0cb4zmhfPmk6oj5nz8SWtlasaByMA
Date: Fri, 18 Jun 2021 15:13:03 +0000
Message-ID: <2C77D13E-ABB5-47F2-B466-07CD6652AF33@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <82bc8b720c3dfb178e52d10ddbebfa8dc5880e7b.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <82bc8b720c3dfb178e52d10ddbebfa8dc5880e7b.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 106e4bbe-a815-4bf5-1d37-08d9326b9187
x-ms-traffictypediagnostic: PH0PR03MB5734:
x-microsoft-antispam-prvs: <PH0PR03MB5734573EBD69C881D99987BD990D9@PH0PR03MB5734.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: T6j4w4WnumrJRnAabXhASor4D3xkeK8SJJLLEWj90/9oYqi/YXflRSjSfEpy9k9bOxJcgfRsaSlQH0xI9IOqQV7MB5PUvthWzD9rbDZBSTxK/tk5R8P1zVpO9NpamgyRgYJ942aY+HjQJ3mC+5S99dxtUAcilBOxC3hx8SvOX8wi9f45nXAMBlhNAizPdwKGNgf5oufXHYKIJuVOJqNMaZIuuiWy8vjM8n2VYKTNZt0H7Njm3BpdgPBlduPefgVGpFzUMlAtvH0DzEOpQCZqwYEVeM0YzWkYy9s5vjLXYlUD7fGH+soTNRMzxHYoisIRC2ZDMhimY+tgwVHvso5YhTZs9lDeZslCxKiO66Q+tDnzhNo4NoQmMug/dM6B6jzMlIk5KTSwuETojbQyokPfoD6aF/CWbZpe3I5J75b0lyyJPFb7AoltJy6RbbzReXdnyTy0Nr7/oNCpbFX+ZGzEPWhV6OSsog52lIv3QZngTLJBzFQ2EZstabnqwzHY1m/uqwAahq6OoSoB2urOpjizLJtNVT6JN5EVs6UB+TIqPFjwMmC+ai+0PhyQk4cqPaEyOeexEKeeVJ5E5NO2Mrc2w2Kn0JuJbDH0AhHqerz2kfIu6arDCzOatJ52K1MFderKGONaaEvhaNdLgVIbGQU4BbpYHB+peVhESNih49q+OjM=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(366004)(376002)(136003)(4326008)(186003)(26005)(478600001)(122000001)(66556008)(2616005)(38100700002)(316002)(5660300002)(86362001)(83380400001)(64756008)(54906003)(91956017)(66476007)(4744005)(66446008)(6916009)(6512007)(53546011)(6506007)(8676002)(2906002)(66946007)(6486002)(36756003)(33656002)(76116006)(8936002)(71200400001)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?TjZnWTg0OTVybWVSd0tINjdJUUdQMXlmaTREazRmdlE0Z1o5N3QrZXY1QWEx?=
 =?utf-8?B?U2x1T0R2K0xYbS9oUHZSTGdxTzUwZm9XYmhLSm8vY3IrQmpRblc2aWVMZ2Rr?=
 =?utf-8?B?YUhYSStNbjlMVW02NXJucVFDVVYxdDRuMXQ1eHgvVjFBUW5wbnV5VEMyeVpJ?=
 =?utf-8?B?aThOKzdoeU5UQUdwUGJ1WDlzTHVRbVdnQTBRZkFtREQrdExMYzROdVNJanpp?=
 =?utf-8?B?SzFxOXFxaFh0bllEUXc3dFMwT1EzRCtvWHNrRTFPZUY5RVJZbHVTdWd4a3p3?=
 =?utf-8?B?aWcvNkFnM3lXVmIxTHZpZjJZUjhBTE0rbWl6Zk1mNm1sek4wSkRsZldrK0I1?=
 =?utf-8?B?TGlwUHhRTEg0L0RpdjA5STVzZjlRMjAycTJMYzVLQ1ZhRXRZcFJjT2JUdVRs?=
 =?utf-8?B?NWdHWWFpbGxWQmZRSkV2UDB1K0lmQkJHLzYwZkRGRnNPYWhnYitKTlE5Sk5i?=
 =?utf-8?B?QUR1MFJBSC9KTzZHcWFUVEp1WlhGN2lRS09QS21jWVpGbTNIRVZRVEFObnpo?=
 =?utf-8?B?QU9DOVpvMnlpdGZwMGd1VHUyL0p1cXNpRFhFMjhjd0NCeks1SmtYMFNRV2xE?=
 =?utf-8?B?K2luYkNJZksvSzQxUnluYzlCcjlWNkZORnB5TWtjV2FBTjFORXZqZk52L1hx?=
 =?utf-8?B?cEc4NS9UUXBXbUpvM3h3WkFqK2x0OFpiUG5qM1NuL05RUFNIOTVzZkE2YXg0?=
 =?utf-8?B?Q05JeHdlT0dTaUdXMStLTlNaQWlqbTNMQ0FYeE9DNzZZc3BCVHpmL2JkcjlH?=
 =?utf-8?B?K29qYXo1TlczN1dKb01wdTFHbVFyRkpNNnJjUkdxMFN2bXlMc2cyTHU4YTJQ?=
 =?utf-8?B?V0xHSWNLRmpDVjBrZDlIZkpkaFd5N0lvcVFHR01kWjFoR2k1VjM2WnFzalJY?=
 =?utf-8?B?Q09KRjNMVytDWW5GK2tFV3k4Vk8xZlhrRDRMOEh1QjAxak9JMEMvYmFkS3dz?=
 =?utf-8?B?UlZPdmp5VW03RytGY1ZwamV6TTdjaFJkbCtzdGRKTDVCRGtRbWcrZExta0VQ?=
 =?utf-8?B?UmxsamFXRWRZMWRpOXZ5RVBaOEZPSlNzc2c3bTJ6WkcrWFlJOEtSUC9rUWhK?=
 =?utf-8?B?Q3RDQmh4SXdPUGFRZ1ZwL0p1WldxTzVZTjZkUHE1TzdTZ0dsUW14QXdRQnl4?=
 =?utf-8?B?YWlMellOK2FJdmk5MzUvZlZHZGk0SllHR2lPaXVER1BnNVZvSWJYczcwU0l2?=
 =?utf-8?B?dGpEaFVWUTd5VE92YVlISlhmOGIrQ1pFM01GZFUrdG94YnpiRU1nS3R4Ujg4?=
 =?utf-8?B?RU5UdlhEbUpUd1MzSDY1UjBobVQ2UEc3OWZ3aC9lckpscEJIZEFsbCtydElY?=
 =?utf-8?B?bzduekpic2JLU3B2TitBZmdYWHdwUFhYeTg3TWluRTZTcUlJNVpMdGhMYlBR?=
 =?utf-8?B?ZkczY2Vpak1oRTJ0blJhdGdTbVFVcFBQNVkrUmVQK1d1MklTeVdkcFRmeVFm?=
 =?utf-8?B?amFCNVM1RDQ3K3N0MFpkM3BMZDUvK3JMbUpWSlUvZW5RbS9FbnZMRDVUTCtG?=
 =?utf-8?B?dCtocGV4LzBUc0kydGZhbnQyak9JZER0RWhnSUk1Q1N1dkNyV21UdzJwdkM4?=
 =?utf-8?B?MzFFVStiL3JhMC9DVWdRdkRRR2NPT0VMZzA2NHVqN2cvYnBRTEY0T1pVRGxm?=
 =?utf-8?B?ZDN3azdkM2ZQek5Pb3dsS2RXUnQxZnQwamJ3QTB3WGVIdEZWRG13T3JHSVpl?=
 =?utf-8?B?SnllUG1jTTZLLzRFMUR3TU92ZkJyR0s1ZnNnYTVPR2VRWXBxeElGT3pFSzMz?=
 =?utf-8?Q?Wg75+E6xxyv5xUjAZp3ijNXAG7gvneOV90aZq34?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <657902F5BDB09A48A70DAEA39AEF54FE@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 106e4bbe-a815-4bf5-1d37-08d9326b9187
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 15:13:03.3050
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Kv9PLUcGnrQGnVvoqQGv7ru5/Qc/lOFxNSjzk/1TaSwdTua1kdkx+Nw/ztznaH1PUlwGXMIX3kDTH0zgK252XKekwmyGo2DahaLA0SD//G4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5734
X-OriginatorOrg: citrix.com

> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> There are several locations where the return code from calling into C is
> negated when being converted to Error. This results in error strings
> like "libxl error: <x>", rather than the correct message. Fix all
> occurrances of this by running:
> 
>  gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
> 
> from tools/golang/xenlight.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Erk.  I'd be tempted to say something more like:

"Commit 871e51d2d4 changed the sign on the xenlight error types (making the values negative, same as the C-generated constants), but failed to remove the code changing the sign before casting to Error().  This results in…"

I can edit this on check-in if you're OK with it.  Either way:

Acked-by: George Dunlap <george.dunlap@citrix.com>
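The bug behind this patch ("do not negate ret when converting to Error") can be shown with a minimal sketch. The constant names and messages below are illustrative stand-ins for the generated xenlight error type, whose values are negative to match the C enum.

```go
package main

import "fmt"

// Error mirrors the generated xenlight error type: constants are negative,
// matching the C-generated values (names and messages illustrative).
type Error int

const (
	ErrorNonspecific Error = -1
	ErrorVersion     Error = -2
)

func (e Error) Error() string {
	switch e {
	case ErrorNonspecific:
		return "libxl error: Non-specific error"
	case ErrorVersion:
		return "libxl error: Version mismatch"
	}
	// Unknown code: fall back to printing the raw value, which is what
	// the accidental double negation ends up triggering.
	return fmt.Sprintf("libxl error: %d", int(e))
}

func main() {
	ret := -1 // a failing libxl call returns a negative code

	// After the constants were made negative, negating ret again maps
	// the code onto a positive (unknown) value:
	fmt.Println(Error(-ret)) // wrong: "libxl error: 1"
	fmt.Println(Error(ret))  // right: "libxl error: Non-specific error"
}
```

That is exactly the "libxl error: <x>" symptom in the commit message, and why the gofmt rewrite `Error(-ret) -> Error(ret)` fixes it.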


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144767.266413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGFf-0006aU-Ac; Fri, 18 Jun 2021 15:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144767.266413; Fri, 18 Jun 2021 15:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGFf-0006aN-5a; Fri, 18 Jun 2021 15:17:47 +0000
Received: by outflank-mailman (input) for mailman id 144767;
 Fri, 18 Jun 2021 15:17:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tyHv=LM=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1luGFd-0006aD-Mg
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:17:45 +0000
Received: from mail-qk1-x732.google.com (unknown [2607:f8b0:4864:20::732])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cdaf524-8ba8-4e60-88b1-16617967167a;
 Fri, 18 Jun 2021 15:17:44 +0000 (UTC)
Received: by mail-qk1-x732.google.com with SMTP id j62so11749174qke.10
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 08:17:44 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id e1sm5762301qti.27.2021.06.18.08.17.43
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 18 Jun 2021 08:17:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cdaf524-8ba8-4e60-88b1-16617967167a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=qnpssdJeRjmHS8jce8srgqVLuCj7/2TXSjJgX6kjLaA=;
        b=sZ8jFxuDG+9uNKZtROTvtCj2pKYJz5HkBkv3bPigz+we7xiSoHJM7OzJ60rlZ/rzUq
         NiN1+1DyPt74HkuxW+jZR/3rsg1RatryLGAPn2+3YYEt4EZJgX7JZj9ZLQJiNISIO/5m
         4PckllSYK4InYnsJYYWXW0Fv9sC5YyZEI5Ve4fezAn0NTQy7yr3ocU8YPd9BQMtZKSe6
         tN3EXpmj5VWXPV42Q1yG85Q/txQU2GVI0AF021nSnqtrhsSQsCdaxgTrpJ0XzM2qQrIZ
         lzTdhfrXNgindDMlhxagoNywrOY8d78lc4DoqkdoEpze3VYYrOyZd4tYD5SJwyfbxU8x
         WQbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=qnpssdJeRjmHS8jce8srgqVLuCj7/2TXSjJgX6kjLaA=;
        b=qk6ertpt/4WPqPCyozH0bFaIbx1ED1z5DOrogtfUxdDUjvH5tvtOXN3CGPpboDyNto
         BIjBc8cA1rOsdmru4pAXx07+p8nMpWvSiefT7B4qoGxI1VvfKFGNpeBsRgeCKPH5pUiQ
         9xmMy99WmHrB+KQy+YUjKqq4WNx21E7NQxuahvOyJ/ZlekJZy2V6kiqDj+hrDYPuIVqH
         QkgHFiTZSsRSoQ2aBdYrEPLYO8Zr3RsgvwhrtN7xhkBTowB95WvHBCNJon7RJF+omr5L
         OucYoXywoqLApFwW35upqJOYW6j7EYK1kb6qwxPvXHg5gtmxAX5SMkRogJB9UTH83wes
         7bTA==
X-Gm-Message-State: AOAM530PNNL+IJS5nuGM0f3BIk6qK4HeLs4R5QWbcQK1FRroYADOuec5
	3l2WtJNz4sv0+ijvmRPnKX5oy7LuUM1cfA==
X-Google-Smtp-Source: ABdhPJxRdoMsrfpm/+eRL2QmDxXEGx4GeGKpOA27PV+zwDbWfYHO/lo2+p8iGyhCquF3plv9pnjtgw==
X-Received: by 2002:a05:620a:10aa:: with SMTP id h10mr9781113qkk.377.1624029464365;
        Fri, 18 Jun 2021 08:17:44 -0700 (PDT)
Date: Fri, 18 Jun 2021 11:17:40 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.prg" <xen-devel@lists.xenproject.prg>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Message-ID: <YMy5FKP7OHHVWXXq@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
 <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>

On Fri, Jun 18, 2021 at 01:17:07PM +0000, George Dunlap wrote:
> 
> 
> > On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > Add some logging methods to Context to provide easy use of the
> > Context's xentoollog_logger. These are not exported, but the LogLevel
> > type is so that a later commit can allow the Context's log level to be
> > configurable.
> > 
> > Because cgo does not support calling C functions with variable
> > arguments, e.g. xtl_log, add an xtl_log_wrap function to the cgo preamble
> > that accepts an already formatted string, and handle the formatting in
> > Go.
> > 
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> Looks good.  One comment:
> 
> > ---
> > tools/golang/xenlight/xenlight.go | 45 +++++++++++++++++++++++++++++++
> > 1 file changed, 45 insertions(+)
> > 
> > diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> > index fc3eb0bf3f..f68d7b6e97 100644
> > --- a/tools/golang/xenlight/xenlight.go
> > +++ b/tools/golang/xenlight/xenlight.go
> > @@ -32,6 +32,15 @@ static const libxl_childproc_hooks childproc_hooks = { .chldowner = libxl_sigchl
> > void xenlight_set_chldproc(libxl_ctx *ctx) {
> > 	libxl_childproc_setmode(ctx, &childproc_hooks, NULL);
> > }
> > +
> > +void xtl_log_wrap(struct xentoollog_logger *logger,
> > +		  xentoollog_level level,
> > +		  int errnoval,
> > +		  const char *context,
> > +		  const char *msg)
> > +{
> > +    xtl_log(logger, level, errnoval, context, "%s", msg);
> > +}
> > */
> > import "C"
> > 
> > @@ -192,6 +201,42 @@ func (ctx *Context) Close() error {
> > 	return nil
> > }
> > 
> > +// LogLevel represents an xentoollog_level, and can be used to configure the log
> > +// level of a Context's logger.
> > +type LogLevel int
> > +
> > +const (
> > +	//LogLevelNone     LogLevel = C.XTL_NONE
> 
> Why are we not defining this one?  Don’t we want to be able to disable logging entirely?

Hm, I'm not sure. I'll poke around to see if I had a reason for this,
otherwise I will un-comment this line.

> 
> > +	LogLevelDebug    LogLevel = C.XTL_DEBUG
> > +	LogLevelVerbose  LogLevel = C.XTL_VERBOSE
> > +	LogLevelDetail   LogLevel = C.XTL_DETAIL
> > +	LogLevelProgress LogLevel = C.XTL_PROGRESS
> > +	LogLevelInfo     LogLevel = C.XTL_INFO
> > +	LogLevelNotice   LogLevel = C.XTL_NOTICE
> > +	LogLevelWarn     LogLevel = C.XTL_WARN
> > +	LogLevelError    LogLevel = C.XTL_ERROR
> > +	LogLevelCritical LogLevel = C.XTL_CRITICAL
> > +	//LogLevelNumLevels LogLevel = C.XTL_NUM_LEVELS
> > +)
> > +
> > +func (ctx *Context) log(lvl LogLevel, errnoval int, format string, a ...interface{}) {
> > +	msg := C.CString(fmt.Sprintf(format, a...))
> > +	defer C.free(unsafe.Pointer(msg))
> > +	context := C.CString("xenlight")
> > +	defer C.free(unsafe.Pointer(context))
> 
> Hmm, allocating and freeing a fixed string every time seems pretty wasteful.  Would it make more sense to either use a static C string in the CGo code at the top instead?  Or if not, to make context a global variable we allocate once at the package level and re-use?

You're right, we should probably define a static C string in the
preamble.
> 
> Also, is ‘xenlight’ informative enough?  I haven’t looked at the other “context” strings; would “go-xenlight” or something be better?
> 

I believe libxl uses "libxl." I would be fine with "go-xenlight" if you
prefer that. 

> > +
> > +	C.xtl_log_wrap((*C.xentoollog_logger)(unsafe.Pointer(ctx.logger)),
> > +		C.xentoollog_level(lvl), C.int(errnoval), context, msg)
> > +}
> 
> I think we want to make it possible long-term to configure your own logger or have no logger at all; so maybe we should add an `if (ctx.logger == nil) return;` at the top?
> 
Yeah, that's a good idea.

> Other than that looks good, thanks!
> 
>  -George

Thanks,
NR
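The points agreed on above (format in Go because cgo cannot call variadic C functions, hoist the context string out of the per-call alloc/free, and guard against a nil logger) can be sketched without cgo. Here `logger` is a stand-in for the C xentoollog_logger pointer and `out` captures what would be handed to xtl_log_wrap; everything beyond the `log` shape is an assumption for illustration.

```go
package main

import "fmt"

// Context sketches the xenlight Context: logger stands in for the
// xentoollog_logger pointer, out records what would cross into C.
type Context struct {
	logger *struct{}
	out    []string
}

// logContext is allocated once, mirroring the suggestion to hoist the
// fixed "xenlight" context string out of the per-call CString/free pair.
var logContext = "xenlight"

// log formats in Go (cgo cannot call variadic C functions such as
// xtl_log) and passes a single pre-formatted string onward.
func (ctx *Context) log(format string, a ...interface{}) {
	if ctx.logger == nil {
		return // no logger configured: drop the message (the suggested guard)
	}
	msg := fmt.Sprintf(format, a...)
	ctx.out = append(ctx.out, logContext+": "+msg)
}

func main() {
	ctx := &Context{logger: &struct{}{}}
	ctx.log("domain %d: %s", 3, "created")
	fmt.Println(ctx.out[0]) // xenlight: domain 3: created

	disabled := &Context{} // nil logger makes log a no-op
	disabled.log("dropped")
	fmt.Println(len(disabled.out)) // 0
}
```

In the real binding, the formatted string still has to be copied with C.CString and freed, but only the message, not the constant context, pays that cost per call.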


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:22:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:22:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144773.266424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGKU-0007y9-SS; Fri, 18 Jun 2021 15:22:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144773.266424; Fri, 18 Jun 2021 15:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGKU-0007y2-PD; Fri, 18 Jun 2021 15:22:46 +0000
Received: by outflank-mailman (input) for mailman id 144773;
 Fri, 18 Jun 2021 15:22:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luGKT-0007xv-1N
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:22:45 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba8c5105-6eb0-4c2b-97f3-9ac3b71b750a;
 Fri, 18 Jun 2021 15:22:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba8c5105-6eb0-4c2b-97f3-9ac3b71b750a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624029763;
  h=subject:from:to:cc:references:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=wteIRQNxCQ/oqFYfRdxsuTXiioA6dFXU2rBMasv7mCU=;
  b=MU8Ud/2Bhsw960Gwm1w8ZdBNHvAiW2i8Rxl/ShsPs0snXqJXqKfSr8dc
   i5XliJCv4Xl1PIvFq0XpeXUgyhMpNxvmwf1Rzh9tPySiEG3Fja3GzuNEF
   mzwd6mb2FojY7jZS/MQwMviW4c2x98ebEYw4nE5oT5EFigDK+wVGxiZ3M
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rnjsZHzgBt0AhxagX259s57lBhI8pxs9qXIi2ANNXwCbGd4k1ULHf0ZM6GMK1yuFm7ML1rD0wr
 pNyEW8yHWSp0rztKHbHOdyccDnYPEzscjywiXHEvZcV8zdbd9NK0SdQeTfHfJa5IbgAqtTT08e
 RMItxKxKI+zsP2lvKbTRJcrirlU60rLg0J2coUctQcDVZLmKJJlOTbdJD+14qSscMtVY3oD0XY
 tCv+ygIeuir48Zqz+2S2kNKfpMRGCdJyAC605tKQMgYxRqVNniE7dkwAJAOxBhjXHsZ3B6FvNH
 K9o=
X-SBRS: 5.1
X-MesageID: 46470765
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46470765"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B8MUVguJGEuKcqDUm0cxq5jEaD7vblilR1t3jH7qo1LYRZIQKv4LlA6VzIbt+duZ7/w6u8LZxpA2ObYJGTuT9dc4Y3kNUtJV06CKk9ot5zG/WNWin36Vs3xk7hEZKINrpUuPByspHxrAVefovvjKVcoV8AhOAfnc21/47Qu2x1fo0yebCqsmEf6eRHWnLc2+hbaUufw6CUMM03snYAF6kOtTHUXV0s54eI8TOo7zV5hyv0ZX3ZssBa9QkR/mW/pXHbY8hAsTezfj5Mc4Th44BT48OebZIOSYjZ+kroPIe+rFjtEN39aV8y9G9z8EKuWbd1e2Dd7K86l0U32iiimxmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mwPVoAeXftCSkffDPD5M//2M5vcxyR7OAnF18VnIdPU=;
 b=n53NsvaoM8Ht6A8Cc9U+4NQ0oKW/JxEdWKQauPBrCQRqDuKnzJFz9r7+3k/dGFKRWId/jN08fABFeSygnLscEWGGuUMWXdZpFNXUU317icr8XdlLL5yzMIgVGNsOr2Ci6YPP7XQ6G5U7yU/1TjvRIidZrNXurwZ7tDMVjySV0FLyAuAvDbJErwUv6zAWvTht8lY4Miw82HNlRCvWN6aRXBLUwJCwqv+iE7XObvF4JmCV3QksyHM6Gi+GTiWK/8v40wiY+BgiU7GSZQedneKun3bYgX9qlWneConULzzhrOTKnennBjs0GOZ0OZaaFBgyj6Bls9B5oP5Qt9u6rv4/ew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mwPVoAeXftCSkffDPD5M//2M5vcxyR7OAnF18VnIdPU=;
 b=SGKq9J83wweYcPbZUr8LaCFZeNtlV7cxmwzOT23lQWEGlQ1NiVshHAfhxe7mTYGY+gQXMbVyLqD6xkTPeWMj59W3slzvyUKzem5VqO0nnu/QoReXjpyiP72YKKR0NVH4974MCLHUAjNN5QdVwtUh96HVCbd7Llw4XD8CQtxB+d0=
Subject: Re: [PATCH 5/5] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
 <bcbf4a71-bf30-5a9e-a399-d4366ee95423@citrix.com>
Message-ID: <0258fa6c-3202-b012-90d1-f48a5f3e9d8b@citrix.com>
Date: Fri, 18 Jun 2021 16:22:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <bcbf4a71-bf30-5a9e-a399-d4366ee95423@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO3P123CA0004.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dc933c57-7473-4eb1-86e0-08d9326ce820
X-MS-TrafficTypeDiagnostic: BY5PR03MB5048:
X-Microsoft-Antispam-PRVS: <BY5PR03MB5048F4B578E69D71EEAC5A08BA0D9@BY5PR03MB5048.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: dc933c57-7473-4eb1-86e0-08d9326ce820
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:22:38.3147
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B2k2Avky5cNoMs2jNmUK0wRZE4hxCyPA3RC75DDRPZxTEN4SyKKm2nPf+aK598K7uZFPwA+ccU4BZ8veCVvOCrgNoKz6lXAPYpCrmukHteo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5048
X-OriginatorOrg: citrix.com

On 18/06/2021 16:06, Andrew Cooper wrote:
> On 18/06/2021 11:25, Jan Beulich wrote:
>> libxc generally uses uint32_t to represent domain IDs. This is fine as
>> long as addresses of such variables aren't taken, to then pass into
>> hypercalls: To the hypervisor, a domain ID is a 16-bit value. Use an
>> intermediate variable to deal with the issue. (On architectures with
>> arguments passed in registers, such an intermediate variable would have
>> been created by the compiler already anyway, just one of the wrong
>> type.)
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/tools/libs/ctrl/xc_domain.c
>> +++ b/tools/libs/ctrl/xc_domain.c
>> @@ -856,7 +856,9 @@ int xc_domain_get_tsc_info(xc_interface
>>  
>>  int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
>>  {
>> -    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
>> +    domid_t xen_domid = domid;
>> +    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &xen_domid,
>> +                           sizeof(xen_domid));
> Why on earth do we pass the domid in by pointer and not value?

This is horrible.

What we're logically doing is passing a pointer to struct
xen_memory_$FOO { domid_t domid; }, except it's all done by void
pointers, and even the public header files don't provide a suitable
structure.

I think we really do want to retrofit a suitable structure in the public
interface and use that, rather than to continue games like this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:24:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144780.266435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGMF-0000CH-CI; Fri, 18 Jun 2021 15:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144780.266435; Fri, 18 Jun 2021 15:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGMF-0000CA-8w; Fri, 18 Jun 2021 15:24:35 +0000
Received: by outflank-mailman (input) for mailman id 144780;
 Fri, 18 Jun 2021 15:24:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luGMD-0000C4-Mk
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:24:33 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f15bceee-c04a-4e94-a031-34a8c2a1a710;
 Fri, 18 Jun 2021 15:24:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f15bceee-c04a-4e94-a031-34a8c2a1a710
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624029872;
  h=subject:from:to:cc:references:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=mqyn611PDd6fVAKKUOtDkdzYytWsLHTZM38wla5oXMs=;
  b=TECovWn3HHFsHXHru5PubhyBO9KZrGZscbOS40Cwgm3Sj5+afbV5Yy0Z
   GO28vEDnx4WW99WnTPrm0MivCadivD0WJhBCC8tojsuxHfEpxTfkH4CnS
   z2PL70jpY8pF8TwJXlVhXTm9g7KVh46D+Vs1pmcPndEEtC6KEOSd60Nmv
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46540851
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46540851"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZaCW1ZmbPCCAr8QSBe5goB8XgNtivCoprBq1dxMy3+wSX6YGw/aXmXVsu3VOMWn3PlAqLNcC5V5FGGcEH5EHmp9CPvK+KQ+8iyJlliHINtAblWNU01/tvvPY5g5VGrjCMHEjd0EsiONGrCph/odPyxsuxpxLKTjzeijVFmAwsnzru8G/5fGJ0+U112x1zu96aMSEQP17QfwnlLO80EUxTcw6j8fKwiIrAryDv/Ew1DINCYMoBSnnJ4e3JcCo3v79oyNV3x5mTTNbGagJXRKXb7S2Ap8xLEQouIpFqnol+uHLWwp7mAqthia415MmhwKO8jKe4WiPABWfHiP6CsjJ1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KpDty8QdHfR4l02BoZDsxp/26qG5NlYkfs/5fGe7/c4=;
 b=dOEmoPmfl7OaaCluUq+Qca4PtuxogCCdLAhuptrGjJooe9GFMQZ3meJ+WEfuEoldhHPr4h2OFnulNbYML9O67CxK7j79/SFN/nB4HgSNvtNBkptjGJ8RWARdmjgYPxHDoo0fLvQYCPjTj3/HD+5XR4IKW80y/lfkflu+Z3zTsVau2IXOd0lSHURSQQhIa7r+U1uQasSi2uMH30Lp4h29MMHHgQenIGUPMTbW6D7r+MzAV/hwChvFsctQnqKSxaMVNjZc0w+jUCC+RiXClH/T2tv9VtI+Eu7yVQsJ5qWVpNM5zhMZSfzHmP5+9lTxOadGsL0fS/UeJkxufahRZrplaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KpDty8QdHfR4l02BoZDsxp/26qG5NlYkfs/5fGe7/c4=;
 b=l1h7ysICwPsmBdhhsgymSeSaaq4rJft33gOHsrqsPuTGNRl7tGjgzmnHnBvjjtfJCw/WDCnSnF3m1ZHaLvYSXx7UFmsVhVdrcRznuvimM0fltYi+o7YEsZSj8+nys+OtIpC0sq9ocJEffjSFRL8mSSeEaMFD51aZCvPg4NsNukc=
Subject: Re: [PATCH 5/5] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
 <bcbf4a71-bf30-5a9e-a399-d4366ee95423@citrix.com>
 <0258fa6c-3202-b012-90d1-f48a5f3e9d8b@citrix.com>
Message-ID: <79cdb790-2db8-0922-5412-d5ab69e7c0eb@citrix.com>
Date: Fri, 18 Jun 2021 16:24:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <0258fa6c-3202-b012-90d1-f48a5f3e9d8b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO3P123CA0008.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e3e6911a-d16b-4dff-8bfc-08d9326d29fd
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5920:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB592055C1940D29420CBF558FBA0D9@SJ0PR03MB5920.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e3e6911a-d16b-4dff-8bfc-08d9326d29fd
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:24:28.8001
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: I9J4Q4nEdp8OcsaQ1rM33vyJ7nBa58BN0n7jBp+GKwm+P7ggQQ20KGeTvtzt+YH5aDQ0uVS339Mjz9YDRTEAMvIvg2gjedsK9/fUmb/uuNg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5920
X-OriginatorOrg: citrix.com

On 18/06/2021 16:22, Andrew Cooper wrote:
> On 18/06/2021 16:06, Andrew Cooper wrote:
>> On 18/06/2021 11:25, Jan Beulich wrote:
>>> libxc generally uses uint32_t to represent domain IDs. This is fine as
>>> long as addresses of such variables aren't taken, to then pass into
>>> hypercalls: To the hypervisor, a domain ID is a 16-bit value. Use an
>>> intermediate variable to deal with the issue. (On architectures with
>>> arguments passed in registers, such an intermediate variable would have
>>> been created by the compiler already anyway, just one of the wrong
>>> type.)
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/tools/libs/ctrl/xc_domain.c
>>> +++ b/tools/libs/ctrl/xc_domain.c
>>> @@ -856,7 +856,9 @@ int xc_domain_get_tsc_info(xc_interface
>>>  
>>>  int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
>>>  {
>>> -    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
>>> +    domid_t xen_domid = domid;
>>> +    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &xen_domid,
>>> +                           sizeof(xen_domid));
>> Why on earth do we pass the domid in by pointer and not value?
> This is horrible.
>
> What we're logically doing is passing a pointer to struct
> xen_memory_$FOO { domid_t domid; }, except it's all done by void
> pointers, and even the public header files don't provide a suitable
> structure.
>
> I think we really do want to retrofit a suitable structure in the public
> interface and use that, rather than to continue games like this.

Alternatively, declare this interface broken and reimplement it as a
domctl, which is where the functionality ought logically to live.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:26:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:26:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144786.266446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGOQ-0000q6-QK; Fri, 18 Jun 2021 15:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144786.266446; Fri, 18 Jun 2021 15:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGOQ-0000pz-MB; Fri, 18 Jun 2021 15:26:50 +0000
Received: by outflank-mailman (input) for mailman id 144786;
 Fri, 18 Jun 2021 15:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tyHv=LM=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1luGOP-0000pp-72
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:26:49 +0000
Received: from mail-qk1-x72e.google.com (unknown [2607:f8b0:4864:20::72e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b6b1a6a-e4da-426a-8cef-243edaadeef9;
 Fri, 18 Jun 2021 15:26:48 +0000 (UTC)
Received: by mail-qk1-x72e.google.com with SMTP id j62so11819974qke.10
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 08:26:48 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id l127sm4119008qkc.64.2021.06.18.08.26.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 18 Jun 2021 08:26:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b6b1a6a-e4da-426a-8cef-243edaadeef9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=SxFqflzAcIyPl7uL6HXezQivITkX16R0fc13SxV+fMI=;
        b=a7dZwfbWrrSQakyNMhkJ+rQkRxgwdUitHI99Knyjqq+uHumJO6CgiPkZX8bBhQO+7D
         NaFV7S481DnlcSxhDAmCGXk3ssMFRHU5DzYU9BOZhJimfse18cwXmCbI50vz4DCYBk9j
         RVGgTdstStVHSQqzCnH+p//wVl+NtjrVSewDyNkgDaRzu6eHAq/jj+4HTuMgWfKZk+p2
         LuUBiRJv8I6svpdXF1h42gemXT2/NIUx+aZ57b7Saibrfo0F52qy0c8+7Nk84V70BXq2
         NR42JtffbUbilad5dI1EJt61ZKfaep2TBPSd3yY2E7E7OabOisBub+JDBmdNflW3LaDX
         gmjQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=SxFqflzAcIyPl7uL6HXezQivITkX16R0fc13SxV+fMI=;
        b=nLhESRx2Qb35oj0lkT7wqzkcELcOeRHewojOnifjQ6Ujiu15NHcHcC2vg7WkhU18Va
         +G8B+MObE9+gmV+4G+ZduAYkgjIwgEXf+mkFvUvrp0AiH3oChLzpmm+jxefPG0SYp9P2
         YQ2eeTGum/icC1jcpGTnYGTcPbsX6ax6FpAoUG30kocJmzzARqF7IYbUC8lDMK4/JnHH
         /5xOEmVb1SKM0wrYeWSmSxAe/9fPvt/CT+djW2qbVlGARLr5Grj/5swUAq0M7OePb2tC
         LaGKsFyxvzPkeMTWKt4NGYaoh7vaXxZgQMgpsm/MUdVEd3cLJGsBUBqlQtMRMFpUTDEf
         R0ww==
X-Gm-Message-State: AOAM5303h9EwxIA1qUVM6ntkcb6fB/bhxETpSaRN+wIJowaNwVly+M3F
	iy6looPfky0xhAGDwZWP5UM=
X-Google-Smtp-Source: ABdhPJxITNsBPPE63e+W4wGhX1iAeeHLhOJDxz94jlSlhKk+55Hfn36hjW4yWKbIzRdpZ+Zh5FAm4A==
X-Received: by 2002:a37:7cb:: with SMTP id 194mr1513234qkh.455.1624030008316;
        Fri, 18 Jun 2021 08:26:48 -0700 (PDT)
Date: Fri, 18 Jun 2021 11:26:45 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Message-ID: <YMy7NVTpaIlT+KWJ@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
 <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
 <E74C0DF9-3EA4-4B79-8CE4-02F00EC9875C@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <E74C0DF9-3EA4-4B79-8CE4-02F00EC9875C@citrix.com>

On Fri, Jun 18, 2021 at 01:21:40PM +0000, George Dunlap wrote:
> 
> 
> > On Jun 18, 2021, at 2:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
> > 
> > 
> > 
> >> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >> 
> >> Add some logging methods to Context to provide easy use of the
> >> Context's xentoollog_logger. These are not exported, but the LogLevel
> >> type is exported so that a later commit can allow the Context's log
> >> level to be configurable.
> >> 
> >> Because cgo does not support calling C functions with variable
> >> arguments, e.g. xtl_log, add an xtl_log_wrap function to the cgo preamble
> >> that accepts an already-formatted string, and handle the formatting in
> >> Go.
> >> 
> >> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> > 
> > Looks good.  One comment:
> 
> Er, sorry, turns out I had rather more than one comment.  Here’s one more:
> 
> Is there any particular reason not to export the Ctx.Log[X]() functions?
> 
No reason other than I tend to only export functions when I know they
need to be exported. My motivation for adding these at the time was to
help with debugging during development. Would you prefer to export them then?

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:29:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:29:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144792.266457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGRC-0001Ur-6l; Fri, 18 Jun 2021 15:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144792.266457; Fri, 18 Jun 2021 15:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGRC-0001Uk-3Z; Fri, 18 Jun 2021 15:29:42 +0000
Received: by outflank-mailman (input) for mailman id 144792;
 Fri, 18 Jun 2021 15:29:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luGRA-0001Ue-Iy
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:29:40 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ddc487c-d55a-442e-876d-9b7f06ea8081;
 Fri, 18 Jun 2021 15:29:39 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-38-V6QZTT2vPZOu77i5XeSvmA-1; Fri, 18 Jun 2021 17:29:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7040.eurprd04.prod.outlook.com (2603:10a6:800:121::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Fri, 18 Jun
 2021 15:29:36 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 15:29:36 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0020.eurprd04.prod.outlook.com (2603:10a6:208:122::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18 via Frontend
 Transport; Fri, 18 Jun 2021 15:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ddc487c-d55a-442e-876d-9b7f06ea8081
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624030178;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9QpuST1dr2XN+EuIU+SxOFxKTmZn0TOs6GmPWUWB1Rw=;
	b=FzLk2L6ySa3vd6AeEXzroHqnZP9gtWGCYHSx52lBHbIRRCC0uVNmlmfa6PJW+UyYpsBSiu
	WlUpxl9eyal97OeuHepXoamqSSi4mJnX54Pg1DRrC3ObZ5+y95Lf2HX3Fuq8fMh6iV2oAc
	Js3EZdiHnrFbEfTNCeURTOZST9LuF84=
X-MC-Unique: V6QZTT2vPZOu77i5XeSvmA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ArKB8pHRyKlwbk2stNTeZfU1QNfZdJIqY6/q1jgOPOPTWEO3CYIqYCgikjLilrOfgRWxifZAo9AG+f2loPCRnLRqjOqr3huO8p+MV9AqdKKWNYUb7HAdzY4yoZgQ8O4PRqm7oOqpEEKi/bj+OnwyLxCR+Q+ICxqMLRwldU2tA7tOGDxhtOn+stJegsFQuOTvjjBxFb3PmfzqKSUjcSYFFrVISTBHotq+veRUToMyxActuY+XESUi7IRV/QaceVlHSbELGEJFcw/aJHNpMszE4pGFEWkYuwAsEwP955KV3R8OJWEmHM/EhgFgTZuf6pGgIlE7Nac/0PjNvky1OpyWMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=St8899CI+T6lTL/COT4NIUP5AdLfRHQOiGe16Rhf3b8=;
 b=Zl4wj0cRSio0Rl2FcWCDIu1mnXG86Bn+cgE1MzDtpsr0ylN5N7FRWlJ8ja1R75Lp5iUHLLqlzimJANY7yhpNwM3YgV0pVMfgbCLjHvTxQ60fsYRxP7gINdHzd0/J7i4dpPx6nculb3djhjsn+ACccBYDWCg3ggaS4qpmmS8HYCC78hAHVtfp/lws6+nuK92hyZJ8KYZOBenMaMo/mHP3W0P8TOYKHCBac9O/CBe4kHi1UgBQmMHDOs4G842cWZo0OZ6kPtdAUr5kZGewvXD99uMDwOlhZLO83aFAf22TZwCmj8NQ3epg4nUg4q+t6O2UHo32bz8X0nxIdOkauDYS0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 4/5] libxc: use multicall for memory-op on Linux (and
 Solaris)
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <8a5284e3-a029-27c8-103c-cbc12642d24d@suse.com>
 <db8662c7-9641-6fb2-54e9-c0e64e03b990@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d10b9c0b-59f9-6527-897a-374ab7ab9162@suse.com>
Date: Fri, 18 Jun 2021 17:29:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <db8662c7-9641-6fb2-54e9-c0e64e03b990@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0020.eurprd04.prod.outlook.com
 (2603:10a6:208:122::33) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c5f9b440-878b-4cdf-3fcc-08d9326de100
X-MS-TrafficTypeDiagnostic: VI1PR04MB7040:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB7040F8BD56C6E366827A6C8EB30D9@VI1PR04MB7040.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:275;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c5f9b440-878b-4cdf-3fcc-08d9326de100
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:29:35.8179
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AcRNdtcZoXLtdlbhDcUYLxwg6aTpc2cX1iQk2HurppmbSnyaw3BWS/R9+3kqysxHFXxClBipDu43+Tkaub3/QA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7040

On 18.06.2021 17:05, Andrew Cooper wrote:
> On 18/06/2021 11:24, Jan Beulich wrote:
>> Some sub-functions, XENMEM_maximum_gpfn in particular, can return values
>> requiring more than 31 bits to represent. Hence we cannot issue the
>> hypercall directly when the return value of ioctl() is used to propagate
>> this value (note that this is not the case for the BSDs, and MiniOS
>> already wraps all hypercalls in a multicall).
>>
>> Suggested-by: Jürgen Groß <jgross@suse.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/tools/libs/ctrl/xc_private.c
>> +++ b/tools/libs/ctrl/xc_private.c
>> @@ -337,8 +337,47 @@ long do_memory_op(xc_interface *xch, int
>>          goto out1;
>>      }
>>
>> -    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
>> -                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
>> +#if defined(__linux__) || defined(__sun__)
>> +    /*
>> +     * Some sub-ops return values which don't fit in "int". On platforms
>> +     * without a specific hypercall return value field in the privcmd
>> +     * interface structure, issue the request as a single-element multicall,
>> +     * to be able to capture the full return value.
>> +     */
>> +    if ( sizeof(long) > sizeof(int) )
>
> This is very fragile.  I spent a while coming up with
>
>     __builtin_types_compatible_p(
>         typeof(ioctl) *,
>         long (*)(int, unsigned long, ...));
>
> (which does work if you change int for long), just to realise that this
> won't actually help.

Help with what exactly? I'm not sure I see the severe fragility that
you see. I'd call this a little fragile because new OSes would need
to explicitly be checked for which behavior they ought to get. But
if one failed to do so, all that would happen is that they'd start
out with the same issue we're now trying to address.

>  I'm stuck on trying to see whether
> privcmd_hypercall_t has a result member.

I.e. you're imagining __builtin_has_field()?

>> +    {
>> +        multicall_entry_t multicall = {
>> +            .op = __HYPERVISOR_memory_op,
>> +            .args[0] = cmd,
>> +            .args[1] = HYPERCALL_BUFFER_AS_ARG(arg),
>> +        }, *call = &multicall;
>> +        DECLARE_HYPERCALL_BOUNCE(call, sizeof(*call),
>> +                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
>> +
>> +        if ( xc_hypercall_bounce_pre(xch, call) )
>> +        {
>> +            PERROR("Could not bounce buffer for memory_op hypercall");
>> +            goto out1;
>> +        }
>> +
>> +        ret = do_multicall_op(xch, HYPERCALL_BUFFER(call), 1);
>> +
>> +        xc_hypercall_bounce_post(xch, call);
>> +
>> +        if ( !ret )
>> +        {
>> +            ret = multicall.result;
>> +            if ( multicall.result > ~0xfffUL )
>
> Wouldn't this be clearer as > -4095 ?

Not to me.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:32:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144798.266468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGTd-0002pK-Ji; Fri, 18 Jun 2021 15:32:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144798.266468; Fri, 18 Jun 2021 15:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGTd-0002pD-Gl; Fri, 18 Jun 2021 15:32:13 +0000
Received: by outflank-mailman (input) for mailman id 144798;
 Fri, 18 Jun 2021 15:32:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tyHv=LM=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1luGTc-0002p5-3p
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:32:12 +0000
Received: from mail-qv1-xf31.google.com (unknown [2607:f8b0:4864:20::f31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2cdc3b37-e2b9-4d5c-80e2-3a2cb50d2c5a;
 Fri, 18 Jun 2021 15:32:11 +0000 (UTC)
Received: by mail-qv1-xf31.google.com with SMTP id 5so3750129qvf.1
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 08:32:11 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id y16sm5301877qta.72.2021.06.18.08.32.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 18 Jun 2021 08:32:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cdc3b37-e2b9-4d5c-80e2-3a2cb50d2c5a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=ufURiA5G+QmMKc3cHR4SDlIUzS0QueFElFdTPdxsl6I=;
        b=pLj8K+rg1GqSADyz/MIdH86j7yhTw4X6FEllGnEMoZ/+dthKafKcoLXhypthtRHser
         vLV6/FiEtISL8X/w8Nt+4oeHiw4yZYjfoZI2XZQFIdnXkyMJI53y0y5nIR8EClZ4OOXZ
         NKernvGFD2j+8S92sCk+7p5MHJpoumABzQYeeRl7OVvAgGltEZD5OAv1Rf2B6vx3HlT2
         QR+sptNBIC+JqjToSw9koeGiAbk/oUv6AC6oNyQhTVe/TS0GT4wHo5P5Sm3f2ML1i+HC
         fXGoajQ5D4DM9K3iFrtuWmtGLPC6ZcU/6QEanKrl7jROQomDTSexOy5NDV5o3PhVZXZ7
         Ge4g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=ufURiA5G+QmMKc3cHR4SDlIUzS0QueFElFdTPdxsl6I=;
        b=RZw5maxCfOoH4H8p1+6oQwgysKvfh1qefhY+iySVMPe7uhV2/mtkbL0VF1jseHIlVr
         bGEDLtqz5zzjGMNmFFQ3iYcVvhjTVU6sPy1okLKTGkY0RUdlsTUnnxBSp8qvqWGnolPj
         PB2chumvpbuZ4EdXY1DrOOv+xgqxSYnfsTN4DUzMddOE96UgdTZgKNI0dGeNq2XM5ahi
         UxkMfC8j8hck/Mn8NH4+3XnuIWgtggbpTGCIPoCCs5Gez5l8AFGCnm6mJoSB/7Tmeyfn
         vkJVzlFGD+/qeoDMkEJATn4cuVYB+PQUqZyIamdJwz4wJSc9RcPbMH99DTXw2l2NAnYG
         Hpzw==
X-Gm-Message-State: AOAM531neZm6LekrDwMBEK/BmLf+o4tKEi88p1R4AT+bfqFvMyiBSFLl
	/1pGOHXpJZtNUE3Lj9hKbrs=
X-Google-Smtp-Source: ABdhPJxSFOWwfFoYtUPbRUcO7GndZn7BdNWsH/2tmWfME2elhdpq+/Y1jtxyj0U95+Gr9zv5kAR+cA==
X-Received: by 2002:a0c:d602:: with SMTP id c2mr6228887qvj.41.1624030331002;
        Fri, 18 Jun 2021 08:32:11 -0700 (PDT)
Date: Fri, 18 Jun 2021 11:32:07 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 11/12] golang/xenlight: do not negate ret when
 converting to Error
Message-ID: <YMy8d29gPIZl1NxB@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <82bc8b720c3dfb178e52d10ddbebfa8dc5880e7b.1621887506.git.rosbrookn@ainfosec.com>
 <2C77D13E-ABB5-47F2-B466-07CD6652AF33@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2C77D13E-ABB5-47F2-B466-07CD6652AF33@citrix.com>

On Fri, Jun 18, 2021 at 03:13:03PM +0000, George Dunlap wrote:
> 
> 
> > On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > There are several locations where the return code from calling into C is
> > negated when being converted to Error. This results in error strings
> > like "libxl error: <x>", rather than the correct message. Fix all
> > occurrences of this by running:
> > 
> >  gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
> > 
> > from tools/golang/xenlight.
> > 
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> Erk.  I’d be tempted to say something more like:
> 
> "Commit 871e51d2d4 changed the sign on the xenlight error types (making the values negative, same as the C-generated constants), but failed to remove the code changing the sign before casting to Error().  This results in…”
> 
> I can edit this on check-in if you’re OK with it.  Either way:

I would appreciate that. Thanks!

-NR

> 
> Acked-by: George Dunlap <george.dunlap@citrix.com>
> 
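The sign bug under discussion can be sketched in stand-alone Go. The type and error strings below are illustrative stand-ins (the real constants are generated from libxl), but they show why negating an already-negative return code before the `Error` conversion produces a nonexistent positive code:

```go
package main

import "fmt"

// Error mirrors xenlight's style: the values are negative, matching
// the C-generated libxl error constants (hypothetical subset here).
type Error int

const ErrorNonspecific Error = -1

func (e Error) Error() string {
	if e == ErrorNonspecific {
		return "libxl error: Non-specific failure"
	}
	// Unknown codes fall through to a bare number, which is the
	// "libxl error: <x>" symptom described in the commit message.
	return fmt.Sprintf("libxl error: %d", int(e))
}

func main() {
	ret := -1 // return code as it arrives from C

	wrong := Error(-ret) // pre-fix: sign flipped, unknown code 1
	right := Error(ret)  // post-fix: maps to the real constant

	fmt.Println(wrong.Error()) // "libxl error: 1"
	fmt.Println(right.Error()) // "libxl error: Non-specific failure"
}
```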


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:34:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:34:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144805.266478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGVg-0003X4-6w; Fri, 18 Jun 2021 15:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144805.266478; Fri, 18 Jun 2021 15:34:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGVg-0003Wx-3o; Fri, 18 Jun 2021 15:34:20 +0000
Received: by outflank-mailman (input) for mailman id 144805;
 Fri, 18 Jun 2021 15:34:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luGVe-0003Wp-Jq
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:34:18 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e33be499-6021-454e-b867-f7db4f6a3100;
 Fri, 18 Jun 2021 15:34:17 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5IFY64tn
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 17:34:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e33be499-6021-454e-b867-f7db4f6a3100
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624030447;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=myTPQL8emWIF/wznjYNNFDT7VqB+zmIy2OxGuMP3FCM=;
    b=qKLzhUMf0wM7kYF4wBEnTlNZYvn0Tf/+9BnOngg47gKsLpfcBSLu1dFdvEOrGXY5Ol
    Gf/hKLUaIzauHxiqnJAR5LhRYNDAkgN3Ny0OgPfoELs7NYfAn0tzd7zs1i+Il87r1dLs
    0JFmJ8Jm0kxZULSR7RupVw0Q5CiPJT9Q2oiAbC234I3p0OgH3rbr8T29evvhMVzfwAS4
    fIxLgNqgMZi0P2ln+oCb8knepUFJ82uNfrg3BtiPCrsZj5TAjaFixLh16HKttm0V3SUi
    BkR8j2PEWMCOWfXG4dYT4LulYTWP1DHLyaaXxmBI3qy9B++97Dai7ieUuUGZ577dHR1M
    SOzQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Fri, 18 Jun 2021 17:33:56 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Marek =?UTF-8?B?TWFyY3p5a293c2tpLUc=?=
 =?UTF-8?B?w7NyZWNraQ==?= <marmarek@invisiblethingslab.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: use integer division in convert-legacy-stream
Message-ID: <20210618173356.108f9935.olaf@aepfle.de>
In-Reply-To: <8a9be55a-ad6a-d06d-9ddd-0f2d656e4fac@citrix.com>
References: <20210618093114.1640-1-olaf@aepfle.de>
	<8a9be55a-ad6a-d06d-9ddd-0f2d656e4fac@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/CnFjWk_OTXbOt567kf8w90h";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/CnFjWk_OTXbOt567kf8w90h
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 18 Jun 2021 10:42:58 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> This is a Py2 vs Py3 difference.

This script is not ready for Python 3.

At first glance it is the usual type confusion.
It seems the type 'bytearray' exists in both variants.
Perhaps stream_read() should return such an object, instead of either 'str' or 'bytes'.
I'm sure there are more pitfalls.

Olaf

--Sig_/CnFjWk_OTXbOt567kf8w90h
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDMvOQACgkQ86SN7mm1
DoD/lg//W1f7Ux2wrSZi5WEj0/uMUz+94JH+cdMB2csW8CwgJNKwmqYGTgwFHAp9
f14Sx/VjZe/FzI5CCT4NwF9ec1CrMn+Pa52QOVNc9guIProT6BMhjzuKgngTIG4p
0yysTglGeZ0hS6UsLkxQXbCCUZJHg9Z23eQZhOi0wuAwshm6wl6gOPRV6zzVRo2w
ujdi5v8Fgj5d/uRyAiW7CewhLEhMQJrTQj0WA3nlqETF3dUgTpXXis2IentUa6N6
quitZY+rk8nTUPLocIFEKiv1MBpP58PSvZ9ZxsUtOcfWDO6rIqBfuhp6lXyuK5E5
bPOahqDHyLrLkK3RZOUcBQ5u8yjuxI9rY5nk/GNYRKofBPsMWzelOjmXOBOG7VuA
i676sbX/0sGxyKwWxzw3xg0tzwnLM4UsiP3PdRnVk0DjnyxgZM7kJeh6vYDnxQJM
+SMwEozzVAG+1fwOkpo7jPRpzIzW6/BnL5hgR623hiqndQTnyX/tkE6u2k92CXYL
AKTx8hgeaZgOSyIhgVC720kd9rEKEG4ENduZl8HKtgtrQpM0dLggHoNCd6EGI9VQ
vySz3rtcJk2bs83ugd9UooSQk10TuQQXbABDJ50Kx0NGN2CWURYPj458XTxkDFV1
+kKQPO6o0ZQGcM9K8LA2yeBKcV03Mxy9sfyd90zDUcFT9AJu2JE=
=J7WG
-----END PGP SIGNATURE-----

--Sig_/CnFjWk_OTXbOt567kf8w90h--
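The Py2/Py3 division difference behind the patch can be shown directly. A minimal sketch (the variable names are illustrative, not taken from convert-legacy-stream):

```python
# In Python 2, "/" on two ints was floor division; in Python 3 it is
# true division and yields a float, which then breaks anything that
# needs an integral size (slice indices, struct.pack counts, ...).
bitmap_bytes = 24

words = bitmap_bytes / 8        # Python 3: 3.0, a float
assert words == 3.0 and isinstance(words, float)

words = bitmap_bytes // 8       # "//" is integral in both Py2 and Py3
assert words == 3 and isinstance(words, int)
```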


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:36:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144811.266490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGXu-0004AL-LN; Fri, 18 Jun 2021 15:36:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144811.266490; Fri, 18 Jun 2021 15:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGXu-0004AE-Gu; Fri, 18 Jun 2021 15:36:38 +0000
Received: by outflank-mailman (input) for mailman id 144811;
 Fri, 18 Jun 2021 15:36:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luGXt-0004A8-O1
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:36:37 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b6dacfd-c78b-4e92-8526-aa26da9bc323;
 Fri, 18 Jun 2021 15:36:37 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-Z382xySGOmG0iezf-rtcGA-2; Fri, 18 Jun 2021 17:36:35 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3295.eurprd04.prod.outlook.com (2603:10a6:802:f::25) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4219.25; Fri, 18 Jun
 2021 15:36:31 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 15:36:31 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0061.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4b::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.10 via Frontend Transport; Fri, 18 Jun 2021 15:36:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b6dacfd-c78b-4e92-8526-aa26da9bc323
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624030596;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SSbh5l5U+U0FVABHB3hjWWZ2nfD2yn777G5Xq/PsoVY=;
	b=TBL5Kdtfs/bw3ZwONbqa8C/RhZ7NlATs+bo4DRBXGbL9x53bxvTds1LaLhlA8wjDqcYxZH
	VzFSea46Kd3++P+2WCMe3loE0F9YVdfGgIrp6KZUrNuK+Z+OPcg8Rzk5UOamOUDYXketqR
	j3Q1RxwgRthT95QGRmMHkNb5WIP22HE=
X-MC-Unique: Z382xySGOmG0iezf-rtcGA-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OkCaKxTegjPJnHTMYtLHhP+UJvPjFjzExNrseK+EiCRdGi+GxgXgvryD1YG4nE7tE1ckwgTQ6/7//Mzdpc2PyI33b1HF38z5CNEHEZ31dg2jA8PZqchqMc4Vpaudg7gJmXyRFdKs497mAoQcIdxG6c1JNfD/kZbmdtPCUa36N4LLfrqq7RnE91sBbmgKFEqfA2XgLHEr5SIIU3k73Q+GtSlBEm5xeABR1VQFxbHng6z8QRhhoAwoNtqetoh+Gi5FIz+WMJz4+zUDcbOMws8PFiCiNzgBmQmeN1AMU67dVRYvBR8PU69abL2C7GZu91IkjxOpXYKeW8UdFkxENi/CgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Xz1pzk1+R8C8+7jLgvopLD8NK0XD6rH7eFt/+eifFuc=;
 b=E3ZTiIzNlEHHOeP7cx1KdzeUUu2zF2I21xISz2A4YqblvvZIXlPvIIcAZsyF/dTr5HAaa6lET7W1KwDCPgt9SVwO6BclddtvdU3SkN7nLPjCre7JTnFbg0NsMa+cvcVXuJsOMtwrJZ6DqSfXtnqj2tpaV1cGCHPzv3XLJE21i4aNd0FsomVZySsGLLUSV+01cNdM/y4emFzuIC1uxp76AE/SDe7NxfJ02Litzw7nK+n4W8K8aRjpn97ThtffyJ/PyhC8JLAYeEEtevr/Ufn4HaJaJBGSJzodDUpdHFU8jnqFv59YBGlh3G4iNXqVzTpG3wc3O5U4L4NjR2lueRg6oQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 5/5] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
 <99979695-e53e-7764-85e1-64dd4cf9447b@suse.com>
 <bcbf4a71-bf30-5a9e-a399-d4366ee95423@citrix.com>
 <0258fa6c-3202-b012-90d1-f48a5f3e9d8b@citrix.com>
 <79cdb790-2db8-0922-5412-d5ab69e7c0eb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fcbb0fcf-b20a-9408-345f-5bfcfb138035@suse.com>
Date: Fri, 18 Jun 2021 17:36:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <79cdb790-2db8-0922-5412-d5ab69e7c0eb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0061.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: acefbd3a-4a94-4479-c2c6-08d9326ed89e
X-MS-TrafficTypeDiagnostic: VI1PR04MB3295:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB329553F617E845A5B7FBFB8EB30D9@VI1PR04MB3295.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: acefbd3a-4a94-4479-c2c6-08d9326ed89e
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 15:36:31.2779
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DHy0UGjHOt2NhR5PuZ0qtF7vhZvUHCXqSoqgZlmnR9RfcXnxpnGvSvct1NqjNw5//kli/rFcGUa214xgbhfvkw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3295

On 18.06.2021 17:24, Andrew Cooper wrote:
> On 18/06/2021 16:22, Andrew Cooper wrote:
>> On 18/06/2021 16:06, Andrew Cooper wrote:
>>> On 18/06/2021 11:25, Jan Beulich wrote:
>>>> libxc generally uses uint32_t to represent domain IDs. This is fine as
>>>> long as addresses of such variables aren't taken, to then pass into
>>>> hypercalls: To the hypervisor, a domain ID is a 16-bit value. Use an
>>>> intermediate variable to deal with the issue. (On architectures with
>>>> arguments passed in registers, such an intermediate variable would have
>>>> been created by the compiler already anyway, just one of the wrong
>>>> type.)
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/tools/libs/ctrl/xc_domain.c
>>>> +++ b/tools/libs/ctrl/xc_domain.c
>>>> @@ -856,7 +856,9 @@ int xc_domain_get_tsc_info(xc_interface
>>>> 
>>>>  int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
>>>>  {
>>>> -    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
>>>> +    domid_t xen_domid = domid;
>>>> +    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &xen_domid,
>>>> +                           sizeof(xen_domid));
>>> Why on earth do we pass the domid in by pointer and not value?
>> This is horrible.
>>
>> What we're logically doing is passing a pointer to struct
>> xen_memory_$FOO { domid_t domid; }, except it's all done by void
>> pointers, and even the public header files don't provide a suitable
>> structure.
>>
>> I think we really do want to retrofit a suitable structure in the public
>> interface and use that, rather than to continue games like this.
> 
> Alternatively, declare this interface broken and reimplement it as a
> domctl, which is where the functionality ought logically to live.

Not really, no - this is something a domain can inquire on itself.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 15:53:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 15:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144818.266501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGoZ-0006Sb-56; Fri, 18 Jun 2021 15:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144818.266501; Fri, 18 Jun 2021 15:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGoZ-0006SU-1I; Fri, 18 Jun 2021 15:53:51 +0000
Received: by outflank-mailman (input) for mailman id 144818;
 Fri, 18 Jun 2021 15:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luGoY-0006SO-46
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 15:53:50 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08ebd227-764f-4cdb-9248-5c8e9f95d7bf;
 Fri, 18 Jun 2021 15:53:49 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624031612945314.7643764006626;
 Fri, 18 Jun 2021 08:53:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08ebd227-764f-4cdb-9248-5c8e9f95d7bf
ARC-Seal: i=1; a=rsa-sha256; t=1624031615; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=GWxLYqJd2pFqAY4smF8hNBFJvbufP05u/lOIOY22CsEkWeH44Uzn8vvO7cqHKISU2feoLMD8Mu9EvBjtd1nOK1MmVWF4ieT1aIxImcswttJiyDQZ80591XJbgzko+eEEcjRTnPU5e2HqgzTiZSWjzdcnlQ0Yj5wp/WpF5aX0GSU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624031615; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=XEiZ0OtFS7O29rUsQHkZqz53ltyNSi0NrVXpKFni0Bg=; 
	b=S1NGG3ZUyXBvgeZBGT6Sxp2C28PNPoPziWFSwniL7q71pXEGH3Ml9Fz/AsSQ9o+UEmewNE36d4ZyJYa5gB2YVS7iVbAa23OUEkJVFynYGsQAm/zMI3Fi+7P/7XL9qlAqJ6B+e6MmHxHLiaFt35oYT9WSMaRiQ6ZtmbrXu88s3dA=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624031615;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=XEiZ0OtFS7O29rUsQHkZqz53ltyNSi0NrVXpKFni0Bg=;
	b=RN+Gj/MjiE+1thAyjgxU5zo5agCYL6plIbehBcdq4KXK3VVwVKhY0kxjLL3MkXua
	IeCLI6u03+VEqMGUsFQYDC1ejl/hFfgQgwf2tfGCWo/u3KN1fIhHtjHxA4BEvoE2WKm
	hDT8+QAj6HV7tkiF48/21HrxNTvphTprk1Pt5xmI=
Subject: Re: [PATCH 0/6] xsm: refactoring xsm hooks
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>,
 Juergen Gross <jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <6ab0d368-2625-40b6-bdce-69a76c6542fb@apertussolutions.com>
Date: Fri, 18 Jun 2021 11:53:29 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 6:14 AM, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> Based on feedback from 2021 Xen Developers Summit the xsm-roles RFC
>> patch set is being split into two separate patch sets. This is the first
>> patch set and is focused purely on the clean up and refactoring of the
>> XSM hooks.
>>
>> This patch set refactors the xsm_ops wrapper hooks to use the alternative_call
>> infrastructure. It then moves and realigns the headers to remove the
>> pseudo is/is-not-enabled implementation. The remaining changes are clean-up
>> and removal of abstractions that are no longer necessary.
>>
>> <snip>
>>  51 files changed, 1309 insertions(+), 1413 deletions(-)
> 
> The diffstat is great, but sadly CI says no. 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/323044913
> 
> The problem is that ARM doesn't have alternative_vcall().  Given how
> much of an improvement this ought to be for hypercalls, I don't want to
> lose the vcalls.
> 
> One option is to implement vcall() support on ARM, but that will leave
> new architectures (RISC-V on the way) with a heavy lift to get XSM to
> compile.
> 
> Instead, what we want to do is make vcall() a common interface, falling
> back to a plain function pointer call for architectures which don't
> implement the optimisation.  So something like:
> 
> 1) Introduce CONFIG_HAS_VCALL, which is selected by X86 only right now
> 2) Introduce xen/vcall.h which uses CONFIG_HAS_VCALL to either include
> asm/vcall.h or provide the fallback implementation
> 3) Switch x86's current use over to this new interface
> 
> The iommu_vcall() is a red herring, not adequately documented, and needs
> to stay in some form.  Specifically, it needs to not become an
> alternative on ARM, even if ARM gains vcalls.  I'd be tempted to rework
> it in 4) to use the common vcall() by default, and leave ARM as the
> special case overriding the default behaviour, along with an explanation
> of why it isn't a vcall().
> 
> Obviously, name subject bikeshedding.  alternative_vcall() is a bit of a
> mouthful, and I don't think that alt_vcall() loses any salient information.
> 
> Thoughts?

I can look at spinning up an attempt at what you are suggesting and submitting it.

v/r,
dps



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:00:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:00:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144823.266511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGuw-0008Mi-SH; Fri, 18 Jun 2021 16:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144823.266511; Fri, 18 Jun 2021 16:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luGuw-0008Mb-PI; Fri, 18 Jun 2021 16:00:26 +0000
Received: by outflank-mailman (input) for mailman id 144823;
 Fri, 18 Jun 2021 16:00:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lC6W=LM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1luGuv-0008MV-N9
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:00:25 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eea6fcc0-f29f-4621-a163-e5d9d5229f5e;
 Fri, 18 Jun 2021 16:00:24 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2059.outbound.protection.outlook.com [104.47.9.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-5xO7tFhcPkyxgLeGsUyDUw-1; Fri, 18 Jun 2021 18:00:22 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Fri, 18 Jun
 2021 16:00:21 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.021; Fri, 18 Jun 2021
 16:00:21 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0069.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:49::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Fri, 18 Jun 2021 16:00:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eea6fcc0-f29f-4621-a163-e5d9d5229f5e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624032023;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gDu8v9Z6S9LCduvADm2MEdzkMI9LRIFgjRlVPFAk6+k=;
	b=J/VsPkeiHFR7IbkHFU8SvLF9YUbvEUSy/0w5gynf4p1L1FBfTjEjadXuVjG0sooBc6SBze
	G5p8m0Ocj+lTZWoXibDfHA0WY9Puv5WBhNVBlPJAzuQKOAoOata9FqpD9eUbEtRRxNMz2K
	T5Gd+WH+mdez6rvhkoh/7esRyqFKK7Q=
X-MC-Unique: 5xO7tFhcPkyxgLeGsUyDUw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CMfvnT1t2ckkboxwxZkK92n++C5W6YLHO7iDPu9RzDhICL/bBUrWBmCE3VkkGQFiRjqe8IsLpBvK5vHHkn0X2N6MlWjv00w3VQck30jelQ6GKmb0JytUTSoTzcGA9I/4+RnM756JiQis5IjPisCYwDR/1prNlzha7CUHCAkeuHS5zOSgfkUQ5+GK187cggqUiJ5riL+I1VG/QuAAdHVmlGkp8lFP/bXaguUgzNIM5YloID9Lc9TN3IQTuK3kHyWldJumsc2QvesA4CZdjc44pcNnMRnsXxQKo6NkdRbfy5RCw/t45/x7evSwovc9Y7StHQbgHXV4Nh5OTOJnYWdHoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gDu8v9Z6S9LCduvADm2MEdzkMI9LRIFgjRlVPFAk6+k=;
 b=PNfnMdwokdFkZmMFu02E13ESOC24hcV1Cv/PVB/WJ9JzLIq2tIfjHIM78r6CxpOhzCqkVMDcUPdePCGo1HDGd1sFxk1uHGnnSCgUtAQHY8cD6VRjmSZi5PwXv2CnhMygM+TyBEltZoWpLyI4lJopcQi2tDEqPmxN/6qhvbgy9nG4MgRtuxKB1XHTyOvLn7By53+ksYWTRNF8shO+ghov11hHgcMqZ4tLV4X5bHeirL76uCjvuZ4giPhkeKpsavinZrLttBm51efgYEEaVvBnobHS2wVG86XSJtwiQ8S9lEUFWAwQPXoe8eqpUh/4+/oSkSKxdALdFYigWY49X48cKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
Message-ID: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
Date: Fri, 18 Jun 2021 18:00:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0069.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a3f919cb-976c-469d-19b7-08d932722cdf
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2445A8E4CAE9A2FAFD467372B30D9@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a3f919cb-976c-469d-19b7-08d932722cdf
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 16:00:21.1144
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hix8lf89OuGLhHC0kX0quuwvTCE05Q4NP3DLAf+x0r5+9/QKDhuk/ahErPq5dEPn+5KXRMrIWbRKAwmzCzjVOA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
address range") documentation correctly stated that the range was
completely fixed. For Fam17 and newer, it lives at the top of physical
address space, though.

To correctly determine the top of physical address space, we need to
account for their physical address reduction, hence the calculation of
paddr_bits also gets adjusted.

While for paddr_bits < 40 the HT range is completely hidden, there's no
need to suppress the range insertion in that case: It'll just have no
real meaning.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -349,13 +349,17 @@ void __init early_cpu_init(void)
 
 	eax = cpuid_eax(0x80000000);
 	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
+		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
 		eax = cpuid_eax(0x80000008);
-		paddr_bits = eax & 0xff;
+
+		paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
 		if (paddr_bits > PADDR_BITS)
 			paddr_bits = PADDR_BITS;
+
 		vaddr_bits = (eax >> 8) & 0xff;
 		if (vaddr_bits > VADDR_BITS)
 			vaddr_bits = VADDR_BITS;
+
 		hap_paddr_bits = ((eax >> 16) & 0xff) ?: paddr_bits;
 		if (hap_paddr_bits > PADDR_BITS)
 			hap_paddr_bits = PADDR_BITS;
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -524,8 +524,11 @@ int __init dom0_setup_permissions(struct
                                          MSI_ADDR_DEST_ID_MASK));
     /* HyperTransport range. */
     if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-        rc |= iomem_deny_access(d, paddr_to_pfn(0xfdULL << 32),
-                                paddr_to_pfn((1ULL << 40) - 1));
+    {
+        mfn = paddr_to_pfn(1UL <<
+                           (boot_cpu_data.x86 < 0x17 ? 40 : paddr_bits));
+        rc |= iomem_deny_access(d, mfn - paddr_to_pfn(3UL << 32), mfn - 1);
+    }
 
     /* Remove access to E820_UNUSABLE I/O regions above 1MB. */
     for ( i = 0; i < e820.nr_map; i++ )



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:18:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144832.266523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHC1-0001aZ-HU; Fri, 18 Jun 2021 16:18:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144832.266523; Fri, 18 Jun 2021 16:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHC1-0001aS-EF; Fri, 18 Jun 2021 16:18:05 +0000
Received: by outflank-mailman (input) for mailman id 144832;
 Fri, 18 Jun 2021 16:18:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luHC0-0001aM-2L
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:18:04 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65a7c35b-948c-4a08-bd48-10dcef48dec0;
 Fri, 18 Jun 2021 16:18:02 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624033060676206.71396478217548;
 Fri, 18 Jun 2021 09:17:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65a7c35b-948c-4a08-bd48-10dcef48dec0
ARC-Seal: i=1; a=rsa-sha256; t=1624033063; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=SVY7lzYI+Eh7+hr/K2mSbWdkEF25BqhzEJx8wfVRbvPeCX/qBUPi+C3Z3oK3bZSY+6zkpxEI3tE3jgHsA3YcCPIv+shRD9W/C54YuMV0G2jlcop3xHERDyvoIKITJWefcPMJVbntTv9Q40YjoVM1XkbFbtyWXTlQOCwhA7yp4jE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624033063; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=IVB4Qr50UvnbQI3LXiJt1lhd6SJ2NfYMvK8xQdCWmtM=; 
	b=NPeRghH30tz16mMJZaNMFe32EnxbjT1Q8koqbZcmSrcs97dAs0uNZjuyVQOGgqkfYnT1pCLpswoLHPFL4jpt6NLwheBmoYZ/G0M1Q6zr2inDXJoMn9M/SV4lnX/Gj8yQbnZyLSk7W63tlpOi69v5pRS/DhmgVOs2jNYcN9D9Ico=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624033063;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=IVB4Qr50UvnbQI3LXiJt1lhd6SJ2NfYMvK8xQdCWmtM=;
	b=OnLuvlcNs3XJJd8KtdjGCb9fgEQkl/Wx/C10auXC0+ELvJvxVqKgUiUoMKyAvtnt
	sI0soJAhlP7EH2qP4oHONz/49Cr34hr0PHLp4HbQXImsJ1fna8cyW+ZHWUUUleXfHs4
	LXEKUqw0sit3eMkyqtC3SDsT+3RWXqatdeRXwilo=
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>,
 Juergen Gross <jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-2-dpsmith@apertussolutions.com>
 <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH 1/6] xsm: refactor xsm_ops handling
Message-ID: <09b506df-3b4b-45f1-5503-ffd44f31a902@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:17:37 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 7:34 AM, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> The assignment and setup of xsm_ops structure was refactored to make it a
>> one-time assignment. The calling of the xsm_ops were refactored to use the
>> alternate_call framework to reduce the need for retpolines.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> 
> I think the commit message needs a little more explanation for anyone
> doing code archaeology.
> 
> AFAICT, the current infrastructure has some (incomplete?) support for
> Flask to unload itself as the security engine, which doesn't sounds like
> a clever thing in general.
> 
> What we do here is make a semantic change to say that the security
> engine (Dummy, Flask or SILO) gets chosen once during boot, and is
> immutable thereafter.  This is better from a security standpoint (no
> accidentally unloading Flask at runtime), and allows for the use of the
> alternative_vcall() infrastructure to drop all the function pointer calls.
> 
> Does that about sum things up?

ack

>> diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
>> index 01e52138a1..df9fcc1d6d 100644
>> --- a/xen/xsm/flask/flask_op.c
>> +++ b/xen/xsm/flask/flask_op.c
>> @@ -225,26 +225,7 @@ static int flask_security_sid(struct xen_flask_sid_context *arg)
>>  
>>  static int flask_disable(void)
>>  {
>> -    static int flask_disabled = 0;
>> -
>> -    if ( ss_initialized )
>> -    {
>> -        /* Not permitted after initial policy load. */
>> -        return -EINVAL;
>> -    }
>> -
>> -    if ( flask_disabled )
>> -    {
>> -        /* Only do this once. */
>> -        return -EINVAL;
>> -    }
>> -
>> -    printk("Flask:  Disabled at runtime.\n");
>> -
>> -    flask_disabled = 1;
>> -
>> -    /* Reset xsm_ops to the original module. */
>> -    xsm_ops = &dummy_xsm_ops;
>> +    printk("Flask:  Disabling is not supported.\n");
> 
> Judging by this, should this patch be split up more?
> 
> I think you want to remove FLASK_DISABLE (and this function too - just
> return -EOPNOTSUPP in the parent) as a separate explained change (as it
> is a logical change in how Flask works).
> 
> The second patch wants to be the rest, which changes the indirection of
> xsm_ops and converts to vcall().  This is a fairly mechanical change
> without semantic changes.
> 
> I'm unsure if you want a 3rd patch in the middle, separating the
> xsm_core_init() juggling, with the "switch to using vcall()".  It might
> be a good idea for more easily demonstrating the changes, but I'll leave
> it to your judgement.
> 
>> diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
>> index 5eab21e1b1..acc1af7166 100644
>> --- a/xen/xsm/xsm_core.c
>> +++ b/xen/xsm/xsm_core.c
>>  static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>>  {
>>  #ifdef CONFIG_XSM_FLASK_POLICY
>> @@ -87,17 +86,22 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>>      }
>>  #endif
>>  
>> -    if ( verify(&dummy_xsm_ops) )
>> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>>      {
>> -        printk(XENLOG_ERR "Could not verify dummy_xsm_ops structure\n");
>> +        printk(XENLOG_ERR
>> +            "Could not init XSM, xsm_ops register already attempted\n");
> 
> Indentation.
ack

>>          return -EIO;
>>      }
>>  
>> -    xsm_ops = &dummy_xsm_ops;
>> +    /* install the dummy ops as default to ensure ops
>> +     * are defined if requested policy fails init
>> +     */
>> +    xsm_fixup_ops(&xsm_ops);
> 
> /* Comment style. */
> 
> or
> 
> /*
>  * Multi-
>  * line comment style.
>  */

ack

>>      switch ( xsm_bootparam )
>>      {
>>      case XSM_BOOTPARAM_DUMMY:
>> +        xsm_ops_registered = XSM_OPS_REGISTERED;
>>          break;
>>  
>>      case XSM_BOOTPARAM_FLASK:
>> @@ -113,6 +117,9 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
>>          break;
>>      }
>>  
>> +    if ( xsm_ops_registered != XSM_OPS_REGISTERED )
>> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
>> +
>>      return 0;
>>  }
>>  
>> @@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
>>  
>>  int __init register_xsm(struct xsm_operations *ops)
>>  {
>> -    if ( verify(ops) )
>> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>> +        return -EAGAIN;
> 
> I know you moved this around the function, but it really isn't -EAGAIN
> material any more.  It's "too late - nope".
> 
> -EEXIST is probably best for "I'm never going to tolerate another call".

Honestly I didn't think EAGAIN was correct in the first place, so will
gladly change it.

>> +
>> +    if ( !ops )
>>      {
>> -        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
>> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
>> +        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
>>          return -EINVAL;
> 
> Honestly, I'd be half tempted to declare register_xsm() with
> __nonnull(0) and let the compiler reject any attempt to pass a NULL ops
> pointer.
> 
> Both callers pass a pointer to a static singleton objects.
> 
>>      }
>>  
>> -    if ( xsm_ops != &dummy_xsm_ops )
>> -        return -EAGAIN;
>> +    /* use dummy ops for any empty ops */
>> +    xsm_fixup_ops(ops);
> 
> Isn't this redundant with the call in xsm_core_init(), seeing as
> register_xsm() must be nested within the switch statement?

I don't believe so; the one in xsm_core_init() sets the
default/fallback of the xsm_ops variable to the dummy_* handlers before
attempting to register a policy module's ops. Here in register_xsm() we
are taking a new instance of a struct xsm_operations and ensuring every
handler has a defined entry before overwriting the xsm_ops variable
with the passed-in module's ops. With that said, I do like your
suggestion down at the end.

>> -    xsm_ops = ops;
>> +    xsm_ops = *ops;
>> +    xsm_ops_registered = XSM_OPS_REGISTERED;
>>  
>>      return 0;
>>  }
> 
> Having got to the end, the xsm_core_init() vs register_xsm() dynamic is
> quite weird.
> 
> I think it would result in clearer code to have init_{flask,silo}()
> return pointers to their struct xsm_operations *, and let
> xsm_core_init() do the copy in to xsm_ops.  This reduces the scope of
> xsm_ops_state to this function only, and gets rid of at least one
> runtime panic() call which is dead code.

I agree fully.

> If you were to go with this approach, you'd definitely want to split
> into the 3-patch approach.

v2 will have this broken out

v/r,
dps
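For readers following the thread: Andrew's suggested shape (each module's init returns a pointer to its ops table, and xsm_core_init() alone copies it into the single live instance, filling unset hooks with the dummies) can be sketched as a simplified, standalone model. The hook names (domain_create, grant_mapref) and the cut-down "silo" module below are hypothetical illustrations, not the actual Xen code; only the overall flow mirrors the discussion above.

```c
#include <assert.h>
#include <stddef.h>

/* A cut-down ops table with two hypothetical hooks. */
struct xsm_operations {
    int (*domain_create)(int domid);
    int (*grant_mapref)(int domid);
};

static int dummy_domain_create(int domid) { (void)domid; return 0; }
static int dummy_grant_mapref(int domid) { (void)domid; return 0; }

static const struct xsm_operations dummy_xsm_ops = {
    .domain_create = dummy_domain_create,
    .grant_mapref  = dummy_grant_mapref,
};

/* The single live instance; no separate registered-state variable. */
static struct xsm_operations xsm_ops;

/* Fill any hook the module left NULL with the dummy implementation. */
static void xsm_fixup_ops(struct xsm_operations *ops)
{
    if ( !ops->domain_create )
        ops->domain_create = dummy_domain_create;
    if ( !ops->grant_mapref )
        ops->grant_mapref = dummy_grant_mapref;
}

/* A module's init simply returns a pointer to its static ops table... */
static int silo_domain_create(int domid) { return domid == 0 ? 0 : -1; }

static const struct xsm_operations silo_xsm_ops = {
    .domain_create = silo_domain_create,
    /* .grant_mapref deliberately left NULL: fixup supplies the dummy. */
};

static const struct xsm_operations *silo_init(void) { return &silo_xsm_ops; }

/* ...and xsm_core_init() alone does the copy, so no register_xsm() call
 * and no runtime panic() path are needed. */
static int xsm_core_init(void)
{
    const struct xsm_operations *ops = silo_init();

    xsm_ops = ops ? *ops : dummy_xsm_ops;
    xsm_fixup_ops(&xsm_ops);
    return 0;
}
```

In this shape a failed module init (NULL return) degrades to the dummy ops, matching the "install the dummy ops as default" intent quoted at the top of the thread.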



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:18:54 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Date: Fri, 18 Jun 2021 16:18:44 +0000
Message-ID: <8727719E-9548-40CF-A186-14E2ECA3801D@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
 <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>
 <YMy29arbPMnPI/+W@FED-nrosbr-BE.crux.rad.ainfosec.com>
In-Reply-To: <YMy29arbPMnPI/+W@FED-nrosbr-BE.crux.rad.ainfosec.com>

> On Jun 18, 2021, at 4:08 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> On Fri, Jun 18, 2021 at 02:44:15PM +0000, George Dunlap wrote:
>> 
>> 
>>> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>> 
>>> Add a ContextOption type to support functional options in NewContext.
>>> Then, add a variadic ContextOption parameter to NewContext, which allows
>>> callers to specify 0 or more configuration options.
>>> 
>>> For now, just add the WithLogLevel option so that callers can set the
>>> log level of the Context's xentoollog_logger. Future configuration
>>> options can be created by adding an appropriate field to the
>>> contextOptions struct and creating a With<OptionName> function to return
>>> a ContextOption
>>> 
>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>>> ---
>>> tools/golang/xenlight/xenlight.go | 44 +++++++++++++++++++++++++++++--
>>> 1 file changed, 42 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
>>> index f68d7b6e97..65f93abe32 100644
>>> --- a/tools/golang/xenlight/xenlight.go
>>> +++ b/tools/golang/xenlight/xenlight.go
>>> @@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
>>> }
>>> 
>>> // NewContext returns a new Context.
>>> -func NewContext() (ctx *Context, err error) {
>>> +func NewContext(opts ...ContextOption) (ctx *Context, err error) {
>>> 	ctx = &Context{}
>>> 
>>> 	defer func() {
>>> @@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
>>> 		}
>>> 	}()
>>> 
>>> +	// Set the default context options. These fields may
>>> +	// be modified by the provided opts.
>>> +	copts := &contextOptions{
>>> +		logLevel: LogLevelError,
>>> +	}
>>> +
>>> +	for _, opt := range opts {
>>> +		opt.apply(copts)
>>> +	}
>>> +
>>> 	// Create a logger
>>> -	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
>>> +	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
>>> +		C.xentoollog_level(copts.logLevel), 0)
>>> 
>>> 	// Allocate a context
>>> 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
>>> @@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
>>> 	return nil
>>> }
>>> 
>>> +type contextOptions struct {
>>> +	logLevel LogLevel
>>> +}
>>> +
>>> +// ContextOption is used to configure options for a Context.
>>> +type ContextOption interface {
>>> +	apply(*contextOptions)
>>> +}
>>> +
>>> +type funcContextOption struct {
>>> +	f func(*contextOptions)
>>> +}
>>> +
>>> +func (fco *funcContextOption) apply(c *contextOptions) {
>>> +	fco.f(c)
>>> +}
>> 
>> Why all this convolution with interfaces and such, rather than just defining ContextOption as a function pointer?  Is it just to keep contextOptions out of the documentation page?
> 
> Part of the motivation for using functional options is to abstract the
> "options" struct, yes. This allows internal defaults to be applied more
> easily -- if you require e.g. a ContextOptions struct to be passed by
> the caller, how do you know if they intended to override a default, or
> if they just didn't set the field? Additionally, using the ContextOption
> as an interface allows variadic arguments, which are just convenient for
> API users -- the same NewContext function can be used whether you need
> to pass 3 options or 0.
> 
> The reason we use ContextOption as an interface, rather than function
> pointer of sorts is for flexibility in the signatures of ContextOption
> implementations. E.g., we could have
> 
> func WithLogLevel(lvl LogLevel) ContextOption
> func WithLogContext(s string) ContextOption
> func WithFooAndBar(s string, n int) ContextOption
> 
> See [1] for more background on this pattern.
> 
> Thanks,
> NR
> 
> [1] https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis

Yes, I frequently use a pattern like the one described in that blog post myself.  But that blog post doesn’t use interfaces — the final slide actually has the “option function” type as an open-coded function pointer type.

So my question was, why not do something like this:

type ContextOption func(*contextOptions) error

func WithLogLevel(level LogLevel) ContextOption {
  return func(co *contextOptions) {
    co.logLevel = level
  }
}

ATM the only advantage I can see of defining ContextOption as an interface rather than as a function pointer is that the godoc for ContextOption would look like:

type ContextOption interface {
   // contains filtered or unexported fields
}

Rather than

type ContextOption func(*contextOptions) error

Which shows you the name of the unexported field.

Is there another reason I missed?

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:26:27 2021
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-2-dpsmith@apertussolutions.com>
 <07566583-d4bc-dfb8-eccd-d779783d959a@citrix.com>
 <3345073c-45ab-b875-6d3c-32dadfb63fc9@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH 1/6] xsm: refactor xsm_ops handling
Message-ID: <88ece8ae-f1a0-927d-e1cd-20a65fa0b583@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:26:04 -0400
In-Reply-To: <3345073c-45ab-b875-6d3c-32dadfb63fc9@suse.com>

On 6/18/21 7:44 AM, Jan Beulich wrote:
> On 18.06.2021 13:34, Andrew Cooper wrote:
>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>> @@ -197,16 +204,21 @@ bool __init has_xsm_magic(paddr_t start)
>>>  
>>>  int __init register_xsm(struct xsm_operations *ops)
>>>  {
>>> -    if ( verify(ops) )
>>> +    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
>>> +        return -EAGAIN;
>>
>> I know you moved this around the function, but it really isn't -EAGAIN
>> material any more.  It's "too late - nope".
>>
>> -EEXIST is probably best for "I'm never going to tolerate another call".
>>
>>> +
>>> +    if ( !ops )
>>>      {
>>> -        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
>>> +        xsm_ops_registered = XSM_OPS_REG_FAILED;
>>> +        printk(XENLOG_ERR "Invalid xsm_operations structure registered\n");
>>>          return -EINVAL;
>>
>> Honestly, I'd be half tempted to declare register_xsm() with
>> __nonnull(0) and let the compiler reject any attempt to pass a NULL ops
>> pointer.
>>
>> Both callers pass a pointer to a static singleton objects.
> 
> Why check at all when the source of the arguments is all internal?
> We don't check pointers to be non-NULL elsewhere, with a few odd
> exceptions (which imo should all be dropped).

In verify(ops) there was a check on ops for NULL; I pulled that check
up here when I removed verify(). Based on Andy's comment, this function
looks like it is on the chopping block as well.

v/r,
dps
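On the __nonnull point above: a minimal, standalone illustration (not the Xen code; the register_xsm() body and -EEXIST return here just model the thread's suggestion) of letting the compiler, rather than a runtime check, reject a NULL ops pointer. Note that GCC/Clang's nonnull attribute indexes parameters from 1.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct xsm_operations { int version; };

/* nonnull(1): the compiler warns (or errors under -Werror=nonnull) when
 * a literal NULL is passed as the first argument, so no runtime NULL
 * check is needed.  The attribute's parameter indexes are 1-based. */
__attribute__((nonnull(1)))
static int register_xsm(struct xsm_operations *ops)
{
    static const struct xsm_operations *registered = NULL;

    /* "Too late - nope": a second registration is never tolerated. */
    if ( registered )
        return -EEXIST;

    registered = ops;
    return 0;
}

/* Both real callers pass a pointer to a static singleton like this one. */
static struct xsm_operations flask_xsm_ops = { .version = 1 };
```

With this, register_xsm(NULL) draws a -Wnonnull diagnostic at build time, so the runtime NULL check and the XSM_OPS_REG_FAILED bookkeeping for it can go.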



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:32:39 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.prg" <xen-devel@lists.xenproject.prg>,
	xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Date: Fri, 18 Jun 2021 16:30:40 +0000
Message-ID: <41F44BB8-0C20-44CA-AD7D-895C886115D0@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
 <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
 <E74C0DF9-3EA4-4B79-8CE4-02F00EC9875C@citrix.com>
 <YMy7NVTpaIlT+KWJ@FED-nrosbr-BE.crux.rad.ainfosec.com>
In-Reply-To: <YMy7NVTpaIlT+KWJ@FED-nrosbr-BE.crux.rad.ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
Content-Type: text/plain; charset="utf-8"
Content-ID: <F79E20B4806C5643871B4122A4FEC86F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0


> On Jun 18, 2021, at 4:26 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> On Fri, Jun 18, 2021 at 01:21:40PM +0000, George Dunlap wrote:
>> 
>> 
>>> On Jun 18, 2021, at 2:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
>>> 
>>> 
>>> 
>>>> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>>> 
>>>> Add some logging methods to Context to provide easy use of the
>>>> Contenxt's xentoollog_logger. These are not exported, but the LogLevel
>>>> type is so that a later commit can allow the Context's log level to be
>>>> configurable.
>>>> 
>>>> Becuase cgo does not support calling C functions with variable
>>>> arguments, e.g. xtl_log, add an xtl_log_wrap function to the cgo preamble
>>>> that accepts an already formatted string, and handle the formatting in
>>>> Go.
>>>> 
>>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>>> 
>>> Looks good.  One comment:
>> 
>> Er, sorry, turns out I had rather more than one comment.  Here’s one more:
>> 
>> Is there any particular reason not to export the Ctx.Log[X]() functions?
>> 
> No reason other than I tend to only export functions when I know they
> need to be exported. My motivation for adding these at the time was to
> help debug development. Would you prefer to export them then?

I don’t have a super-strong preference.  I was just thinking that xl and libxl both use the same logger, so it would make sense for whatever was built on top of xenlight to use the same logger.

But I guess we’d want the exported version to be able to pass in its own “context”; since it’s more work than just capitalizing the method names, I’d say go ahead and postpone that for now.

Thanks,
 -George
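[Editorial note: the variadic-argument workaround described in the commit message can be sketched in plain Go. This is a minimal illustration, not the bindings' actual code; xtlLogWrap here is a pure-Go stand-in for the C xtl_log_wrap wrapper in the cgo preamble, and the level names are invented for the example. The point is that fmt.Sprintf runs on the Go side, so only a single pre-formatted string would ever cross the cgo boundary.]

```go
package main

import "fmt"

// logLevel mirrors the idea of the exported LogLevel type from the
// patch; the concrete names/values here are illustrative only.
type logLevel int

const (
	levelDebug logLevel = iota
	levelInfo
	levelError
)

// xtlLogWrap stands in for the C xtl_log_wrap() from the cgo preamble:
// it takes an already formatted message, so no variadic call needs to
// cross into C.
func xtlLogWrap(level logLevel, msg string) string {
	return fmt.Sprintf("[%d] %s", level, msg)
}

// logf does all the formatting on the Go side, then hands the finished
// string to the single-argument wrapper.
func logf(level logLevel, format string, args ...interface{}) string {
	return xtlLogWrap(level, fmt.Sprintf(format, args...))
}

func main() {
	fmt.Println(logf(levelInfo, "domain %d created", 7))
}
```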


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:33:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:33:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144856.266566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHQR-0005Y6-P9; Fri, 18 Jun 2021 16:32:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144856.266566; Fri, 18 Jun 2021 16:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHQR-0005Xz-M7; Fri, 18 Jun 2021 16:32:59 +0000
Received: by outflank-mailman (input) for mailman id 144856;
 Fri, 18 Jun 2021 16:32:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luHQR-0005Xt-AS
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:32:59 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32fb2256-d57b-4bec-8ed5-70b7b3912b47;
 Fri, 18 Jun 2021 16:32:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32fb2256-d57b-4bec-8ed5-70b7b3912b47
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
Message-ID: <9fdd9760-6de7-be4c-a75b-0d3b1cc10a38@citrix.com>
Date: Fri, 18 Jun 2021 17:32:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 18/06/2021 17:00, Jan Beulich wrote:
> At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
> address range") documentation correctly stated that the range was
> completely fixed. For Fam17 and newer, it lives at the top of physical
> address space, though.
>
> To correctly determine the top of physical address space, we need to
> account for their physical address reduction, hence the calculation of
> paddr_bits also gets adjusted.
>
> While for paddr_bits < 40 the HT range is completely hidden, there's no
> need to suppress the range insertion in that case: It'll just have no
> real meaning.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>

Really, this ought to be reported by Igor.  He did all the hard work.

Also, I'd like to get something more formal out of AMD if possible.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -349,13 +349,17 @@ void __init early_cpu_init(void)
> 
>  	eax = cpuid_eax(0x80000000);
>  	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
> +		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
>  		eax = cpuid_eax(0x80000008);
> -		paddr_bits = eax & 0xff;
> +
> +		paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);

While I can see the attraction of editing paddr_bits, I think it will
break the emulated pagewalk (at least).

With SME active, the address space reduction applies to host physical
addresses only, not guest physical.  An HVM guest still needs to see the
original paddr_bits, and the emulated pagewalk needs to use this for
reserved bit calculations.

i.e. under NPT, you can still have a working 2^48 guest physical address
space even though the host physical address space is limited to 2^43 by
encryption being active.
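[Editorial note: the arithmetic in the hunk can be checked in isolation. This sketch mirrors the patch's expression: CPUID leaf 0x80000008 EAX[7:0] gives the advertised physical address bits, and leaf 0x8000001f EBX[11:6] gives the reduction when memory encryption is in use. The 48-bit/5-bit example values are assumptions chosen to match the 2^43 figure discussed in this thread.]

```go
package main

import "fmt"

// paddrBits mirrors the patch's calculation:
//   paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
// where eax is CPUID 0x80000008 EAX and ebx is CPUID 0x8000001f EBX
// (zero on CPUs without that leaf, so no reduction is applied).
func paddrBits(eax80000008, ebx8000001f uint32) uint32 {
	return (eax80000008 & 0xff) - ((ebx8000001f >> 6) & 0x3f)
}

func main() {
	// 48 advertised bits with a 5-bit encryption reduction -> 43.
	fmt.Println(paddrBits(48, 5<<6))
	// No 0x8000001f leaf (ebx == 0): the full 48 bits remain.
	fmt.Println(paddrBits(48, 0))
}
```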

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:33:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144857.266578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHQY-0005rw-3V; Fri, 18 Jun 2021 16:33:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144857.266578; Fri, 18 Jun 2021 16:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHQX-0005ri-VG; Fri, 18 Jun 2021 16:33:05 +0000
Received: by outflank-mailman (input) for mailman id 144857;
 Fri, 18 Jun 2021 16:33:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luHQW-0005r8-VX
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:33:04 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03b75236-8ee2-4819-85ce-96ec09fe55dd;
 Fri, 18 Jun 2021 16:33:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03b75236-8ee2-4819-85ce-96ec09fe55dd
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Topic: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences
 for within xenlight
Thread-Index: AQHXUNy8b62kbX7r3k+nzGZDJK+dZqsZ5r2AgAAhsQCAABOrAA==
Date: Fri, 18 Jun 2021 16:28:04 +0000
Message-ID: <8E549F61-EE34-465C-B109-72A3B37164D6@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
 <A96AD4DD-35CB-495F-A008-D72A5C2AF45D@citrix.com>
 <YMy5FKP7OHHVWXXq@FED-nrosbr-BE.crux.rad.ainfosec.com>
In-Reply-To: <YMy5FKP7OHHVWXXq@FED-nrosbr-BE.crux.rad.ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
Content-Type: text/plain; charset="utf-8"
Content-ID: <D960EC29876EAE4A91CE0D17BF244229@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0


> On Jun 18, 2021, at 4:17 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> On Fri, Jun 18, 2021 at 01:17:07PM +0000, George Dunlap wrote:
>> Also, is ‘xenlight’ informative enough?  I haven’t looked at the other “context” strings; would “go-xenlight” or something be better?
>> 
> 
> I believe libxl uses "libxl." I would be fine with "go-xenlight" if you
> prefer that. 

I think so, just so there’s no confusion if someone decides to write Python / Rust / Lua / whatever bindings.

Thanks,
 -George


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:35:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:35:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144869.266589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHT2-0006yy-LJ; Fri, 18 Jun 2021 16:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144869.266589; Fri, 18 Jun 2021 16:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHT2-0006yr-Ha; Fri, 18 Jun 2021 16:35:40 +0000
Received: by outflank-mailman (input) for mailman id 144869;
 Fri, 18 Jun 2021 16:35:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luHT1-0006yV-HF
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:35:39 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26cbc658-a2d0-4e0d-b0c9-0688d50c9c35;
 Fri, 18 Jun 2021 16:35:38 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624034124464510.7593830258778;
 Fri, 18 Jun 2021 09:35:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26cbc658-a2d0-4e0d-b0c9-0688d50c9c35
ARC-Seal: i=1; a=rsa-sha256; t=1624034125; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=HukGgTr7YThUCg13o8ENVb++NzuLt+JUzZ3I+kUUqwE/7+TMyXvig22rXMuljWosYScOgXR+Bsues32DOkHyPkhWT4wRpIBDqFZ3RkrwEHph6Dhxwj7sUyjRfyEzhiLOHcBxNz6/x1yJ/uLpUurSaI483scT7QzVfnv4Uyc6BHo=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624034125; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=IGVKY4qQeOEt4A6vIiT7QkQMNAdGzI6LenF3GB37rkQ=; 
	b=jrSt+19BF2S+bculTFGSiA8SdpJcY/Ipm6RZ53Me1cSuLKKnU6H8FuPn+fsDS9TFT1UwnnmSeCStcqudofg893SUzvhZiaydcZn/gn5DBq78aF/lHyrslqUyxXIWETt1rt4esPz6XyugGEKtA06/kx9zCf/9gzOrJslnANp7xV8=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624034125;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=IGVKY4qQeOEt4A6vIiT7QkQMNAdGzI6LenF3GB37rkQ=;
	b=vYCU5t76s1bLKPzrnvPFjATVRPH2kUnl1EQZ92gz8ipuqBsQxXTsSAslJjeoOIDN
	SNU83JN28hoRvCr2lw85Z/F6K+hERaVty32pw/Qxf6kx7YshcWUqu3UgRx6jiyizdZ7
	YafAqLOdnFAd9xrm6A89MxBqU2kO/cKfWxhdWDC0=
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>,
 Juergen Gross <jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
Message-ID: <3df8648d-f706-9684-4e6d-9438dc9f0894@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:35:21 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 7:53 AM, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>> pointers to static functions. As such this commit,
>>  * eliminates CONFIG_XSM
>>  * introduces CONFIG_XSM_EVTCHN_LABELING as a replacement for enabling event channel labels
>>  * makes CONFIG_XSM_SILO and CONFIG_XSM_FLASK default to no
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> ---
>>  xen/common/Kconfig            |  55 ++++-----
>>  xen/include/xen/sched.h       |   2 +-
>>  xen/include/xsm/xsm-core.h    |  26 ----
>>  xen/include/xsm/xsm.h         |   8 --
>>  xen/xsm/Makefile              |   4 +-
>>  xen/xsm/dummy.c               |   4 +-
>>  xen/{include => }/xsm/dummy.h | 220 ++++++++++++++++------------------
>>  xen/xsm/silo.c                |  17 +--
>>  xen/xsm/xsm_core.c            |   4 -
>>  9 files changed, 142 insertions(+), 198 deletions(-)
>>  rename xen/{include => }/xsm/dummy.h (63%)
>>
>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>> index 0ddd18e11a..203ad7ea23 100644
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -197,22 +197,33 @@ config XENOPROF
>>  
>>  	  If unsure, say Y.
>>  
>> -config XSM
>> -	bool "Xen Security Modules support"
>> -	default ARM
>> -	---help---
>> -	  Enables the security framework known as Xen Security Modules which
>> -	  allows administrators fine-grained control over a Xen domain and
>> -	  its capabilities by defining permissible interactions between domains,
>> -	  the hypervisor itself, and related resources such as memory and
>> -	  devices.
>> +menu "Xen Security Modules"
>>  
>> -	  If unsure, say N.
>> +choice
>> +	prompt "Default XSM module"
>> +	default XSM_SILO_DEFAULT if XSM_SILO && ARM
>> +	default XSM_FLASK_DEFAULT if XSM_FLASK
>> +	default XSM_SILO_DEFAULT if XSM_SILO
>> +	default XSM_DUMMY_DEFAULT
>> +	config XSM_DUMMY_DEFAULT
>> +		bool "Match non-XSM behavior"
> 
> There is no non-XSM behaviour any more.
> 
> Is it time to rename Dummy to "traditional dom0-all-powerful" or

Well, I left it as dummy since that is what it has been known by thus far,
and additionally the subsequent patch set was going to rename this to
XSM_ROLES/"XSM Role-based Access Control". In the interim, I can change
the wording to reflect the correct state.

>> +	config XSM_FLASK_DEFAULT
>> +		bool "FLux Advanced Security Kernel" if XSM_FLASK
>> +	config XSM_SILO_DEFAULT
>> +		bool "SILO" if XSM_SILO
>> +endchoice
>> +
>> +config XSM_EVTCHN_LABELING
>> +	bool "Enables security labeling of event channels"
>> +	default n
>> +	---help---
>> +      This enables an XSM module to label and enforce access control over
>> +      event channels.
> 
> Please use help rather than ---help--- for new options (it's changed in
> upstream Kconfig).  The indentation of the help message wants to be one
> tab, then two spaces.  (Yes, sadly...)

ack

>>  config XSM_FLASK
>> -	def_bool y
>> +	def_bool n
>>  	prompt "FLux Advanced Security Kernel support"
>> -	depends on XSM
>> +	select XSM_EVTCHN_LABELING
>>  	---help---
>>  	  Enables FLASK (FLux Advanced Security Kernel) as the access control
>>  	  mechanism used by the XSM framework.  This provides a mandatory access
>> @@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
>>  	  If unsure, say Y.
>>  
>>  config XSM_SILO
>> -	def_bool y
>> +	def_bool n
> 
> I'm not sure we want to alter the FLASK/SILO defaults.  SILO in
> particular is mandatory on ARM, and without it, you're in a security
> unsupported configuration.
The intent here is that the default is the classic dom0 configuration.
What if I did:

def_bool n
def_bool y if ARM

v/r
dps



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:36:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:36:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144872.266600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHTX-0007V7-TD; Fri, 18 Jun 2021 16:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144872.266600; Fri, 18 Jun 2021 16:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHTX-0007V0-Pl; Fri, 18 Jun 2021 16:36:11 +0000
Received: by outflank-mailman (input) for mailman id 144872;
 Fri, 18 Jun 2021 16:36:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luHTW-0007Uq-F5
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:36:10 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5b0bc74-a6ec-41a5-9928-a60b8f77da7b;
 Fri, 18 Jun 2021 16:36:09 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624034153434589.5185837456162;
 Fri, 18 Jun 2021 09:35:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5b0bc74-a6ec-41a5-9928-a60b8f77da7b
ARC-Seal: i=1; a=rsa-sha256; t=1624034155; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=YR+uAMLbJUT11rFLVyCpG0GfOQvqt/17jTPwkRwGaKPAJp+4AZRp1kPLzqoCkBKlCrGJ2R+of03eWvh7CgRPuxbmPpURFUcKs7pjEb/733arpPPq2m8rz76HeqMteuw1qx251yRfEhge6XghL4oizKiJvI33sZdPPDf2Bd//OEU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624034155; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=69IpwWXK3mWUm2Wmb9lSHteuSzYc/bj0Wh2e6z959Z8=; 
	b=F7BEjOPuG5C4cLwhyPn6ImL5no1O6gduLNftFzSYIVOkxpTjPJS51/XOoUtaUaXR13Rhs3A2qwJja0e8FxJixPJcPF+y7zlRSl/wMwMlEogCjuWyXjTtIWLcEm/nE4qKGopJsDETOkET/eJHUh7o5kJTApNMVeYIcu7O4LsXL5I=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624034155;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=69IpwWXK3mWUm2Wmb9lSHteuSzYc/bj0Wh2e6z959Z8=;
	b=XurGKqVlinI2AvHAE7rKaD41qxC0jJmHDVnyktE/ORrd0hzEdFnr+VT55rMhM5YC
	d4JfWcLfaeEdXp1030CTkyAA96RK79Nm2CKrAceyCXRjI0pFCZ9Yc/PHz0O8dUY4axc
	J/CMoNSF4m2jY+2th0POMW/BNv5C5Xwd9oq1au+E=
Subject: Re: [PATCH 4/6] xsm: remove xen_defualt_t from hook definitions
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>,
 Juergen Gross <jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-5-dpsmith@apertussolutions.com>
 <f18aa77f-e2ed-aa10-37d3-2cdbcd5645eb@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <b976ff05-14a2-1c7c-5f16-40e9d4324afd@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:35:50 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <f18aa77f-e2ed-aa10-37d3-2cdbcd5645eb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 7:56 AM, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> With the conversion of making XSM always enabled even the dummy XSM module is
>> being invoked through the xsm_ops dispatch which does not use passing of the
>> default privilege. This commit removes the xen_default_t parameter from the hook
>> definitions and all the respective call sites.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> 
> I'm struggling to parse the first sentence.  Also, there's a typo in the
> subject.

I will rewrite.

v/r,
dps



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:38:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:38:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144881.266610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHVo-0008EA-AS; Fri, 18 Jun 2021 16:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144881.266610; Fri, 18 Jun 2021 16:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHVo-0008E3-7A; Fri, 18 Jun 2021 16:38:32 +0000
Received: by outflank-mailman (input) for mailman id 144881;
 Fri, 18 Jun 2021 16:38:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luHVn-0008Dv-Mn
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:38:31 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3cffec1-4cf8-4843-ad1f-63c6600a1285;
 Fri, 18 Jun 2021 16:38:30 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624034221232839.5733958860641;
 Fri, 18 Jun 2021 09:37:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3cffec1-4cf8-4843-ad1f-63c6600a1285
ARC-Seal: i=1; a=rsa-sha256; t=1624034223; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=HXK2h/vHO++iqp8YBYGlicqV1WC3M3K552YSTsSOwaydlmoypxXLGS2QTfCZfJNobbVbM/82FUSmMvuN/eNS+vANcip4dVpEe9XQzsFJVEwDsQ4fqy6m39o8I6WTwS+eR0zxKHXjmM02AlrWVHYFICsasWNroPmHSC27PpRJCEM=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624034223; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=fwD4j2C0XcqU9HM5vAwXuMOllkmXuq+LYI6CbhHLhMc=; 
	b=YnYNUqng8458qQdgPlM0cfXEb7j1ss5yQl/0yBBbX07kHv4RA8diF2a7Fe5sWiCzynTq9dep4AKMkidj/w8VdNhxIAHl/SNBNcvf/SEcNyHu8VHs3c3UgbOMU9p5QQmeNiq4Bn73VVhDuYg2isyEozcipyq7DzD8GEljD9hhQZA=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624034223;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=fwD4j2C0XcqU9HM5vAwXuMOllkmXuq+LYI6CbhHLhMc=;
	b=hkJIKQv8QCOz97uGTEz0NQn00sFnjL6N+H3Ws+uOjV1aut/eyMA07rxyKqf4Fx1u
	x09bK3cM38fyPiJicaxaxwuggUvPvnhIAG7U3sg2y0NjtZvk4G8EW/InWno278BjpAP
	adnD8u4sBr+I5o9p1En+bY6qbtJJdj64BC8B0gvs=
Subject: Re: [PATCH 5/6] xsm: expanding function related macros in dummy.h
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>,
 Juergen Gross <jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-6-dpsmith@apertussolutions.com>
 <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <5f2e34ac-e611-26b8-81b2-a63bc7bfc197@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:36:58 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 8:03 AM, Andrew Cooper wrote:
> On 18/06/2021 00:39, Daniel P. Smith wrote:
>> diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
>> index 7e2bb09dac..0f8ea163af 100644
>> --- a/xen/xsm/dummy.h
>> +++ b/xen/xsm/dummy.h
>> @@ -9,7 +9,7 @@
>>   *
>>   *
>>   *  Each XSM hook implementing an access check should have its first parameter
>> - *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
>> + *  preceded by (or use XSM_DEFAULT_VOID if it has no
>>   *  arguments). The first non-declaration statement shold be XSM_ASSERT_ACTION
>>   *  with the expected type of the hook, which will either define or check the
>>   *  value of action.
>> @@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
>>   * xsm_default_t argument available, so the value from the assertion is used to
>>   * initialize the variable.
>>   */
>> -#define XSM_INLINE __maybe_unused
> 
> Nothing in a header file should ever need __maybe_unused.  Now that the
> !XSM case has been untangled, I think this can be dropped, rather than
> expanded inline.

ack

>> -
>> -#define XSM_DEFAULT_ARG /* */
>>  #define XSM_DEFAULT_VOID void
> 
> XSM_DEFAULT_VOID needs to disappear too.  I can't see what it is even
> doing before the cleanup, because if it is missing, you'll fail the
> compile for using K&R style functions.

ack, will drop

v/r,
dps



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:38:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144884.266622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHWB-0000IN-IW; Fri, 18 Jun 2021 16:38:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144884.266622; Fri, 18 Jun 2021 16:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHWB-0000IG-FT; Fri, 18 Jun 2021 16:38:55 +0000
Received: by outflank-mailman (input) for mailman id 144884;
 Fri, 18 Jun 2021 16:38:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjTI=LM=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1luHWA-0000I2-7z
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:38:54 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5261fe5b-cbf6-4d37-9a43-710cdf632d28;
 Fri, 18 Jun 2021 16:38:53 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1624034315940750.6890552082559;
 Fri, 18 Jun 2021 09:38:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5261fe5b-cbf6-4d37-9a43-710cdf632d28
ARC-Seal: i=1; a=rsa-sha256; t=1624034317; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Dv46wiqkC3MRZnUVVRCEgJfXCVTnjqu5YyfSiTcbx6Jv5N0IJU3wsiQuuo7fGqNzH1rScZOPvvtzoZ/KhOZWUP5P5T7gjeRhk6UTdWGvHjJR0BkbFns+7MMcB4LRc9etxZHfhofjh3Z0rObm8PxOD2CtfOpEk1A2wdjWw40/0So=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624034317; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=79+2Bn17i2gljjwjmZjJM5aEdhfxcybpIU7fGuMkMl8=; 
	b=By4p1CaaRKSvaFg5DNPBmM5G8nK4r/yOcr5nb+6/qIgNqpiuKHFnkDxpUXoI0fbmPhRiGnOBRprUpdmgMDGqLmSiuWz3UeXb2yMgvmFN/YKiC1DXqJNjmUZRi8W20EakAs7WArPVForftd4F7oW9S+UChYFVAY+RAJatnxUz1tI=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624034317;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=79+2Bn17i2gljjwjmZjJM5aEdhfxcybpIU7fGuMkMl8=;
	b=V+lWIFvVFKGO+9JXNC0pvNQsjHLJZOKYM0LOeu1OzT1s1X0LhMIwp6A6u1bAk101
	f1uiP59V8CJjgO8LJsUxscCwXurFQDoBo1PQsl64e+/LTFMiAbIS0hi9KDpHcJvfzWm
	Gww//2DEZ44iiv+Riei3GoIWlzq2lhVUH2MRicXE=
Subject: Re: [PATCH 5/6] xsm: expanding function related macros in dummy.h
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-6-dpsmith@apertussolutions.com>
 <42e3d58a-edf3-1da9-af6e-c2400f25365c@citrix.com>
 <60894a2d-0977-18e7-75d3-726695dd06eb@suse.com>
 <e6d0eb00-b2ad-15ae-e9cb-65716779d960@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <db18d9d4-b10c-0a4b-22e7-95daa9575888@apertussolutions.com>
Date: Fri, 18 Jun 2021 12:38:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <e6d0eb00-b2ad-15ae-e9cb-65716779d960@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 8:44 AM, Jan Beulich wrote:
> On 18.06.2021 14:40, Jan Beulich wrote:
>> On 18.06.2021 14:03, Andrew Cooper wrote:
>>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>>> diff --git a/xen/xsm/dummy.h b/xen/xsm/dummy.h
>>>> index 7e2bb09dac..0f8ea163af 100644
>>>> --- a/xen/xsm/dummy.h
>>>> +++ b/xen/xsm/dummy.h
>>>> @@ -9,7 +9,7 @@
>>>>   *
>>>>   *
>>>>   *  Each XSM hook implementing an access check should have its first parameter
>>>> - *  preceded by XSM_DEFAULT_ARG (or use XSM_DEFAULT_VOID if it has no
>>>> + *  preceded by (or use XSM_DEFAULT_VOID if it has no
>>>>   *  arguments). The first non-declaration statement shold be XSM_ASSERT_ACTION
>>>>   *  with the expected type of the hook, which will either define or check the
>>>>   *  value of action.
>>>> @@ -47,14 +47,12 @@ void __xsm_action_mismatch_detected(void);
>>>>   * xsm_default_t argument available, so the value from the assertion is used to
>>>>   * initialize the variable.
>>>>   */
>>>> -#define XSM_INLINE __maybe_unused
>>>
>>> Nothing in a header file should ever need __maybe_unused.  Now that the
>>> !XSM case has been untangled, I think this can be dropped, rather than
>>> expanded inline.
>>>
>>>> -
>>>> -#define XSM_DEFAULT_ARG /* */
>>>>  #define XSM_DEFAULT_VOID void
>>>
>>> XSM_DEFAULT_VOID needs to disappear too.  I can't see what it is even
>>> doing before the cleanup, because if it is missing, you'll fail the
>>> compile for using K&R style functions.
>>
>> You need to look at the state before patch 3 to see its purpose. Patch 3
>> removed the other variant, and hence the need for this one as well, but
>> I think it is reasonable to not clean up everything in one go (unless
>> it would mean touching exactly the same code a 2nd time later on).
> 
> Albeit, having looked at the patch itself, I agree it should be dropped
> here together with XSM_DEFAULT_ARG, of which it is (was) a companion.
> But again, all provided there is agreement to remove the top level XSM
> option, which I personally don't think is a good idea.

ack, will be removing it

v/r,
dps



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:42:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144893.266633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHZs-0001mk-3l; Fri, 18 Jun 2021 16:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144893.266633; Fri, 18 Jun 2021 16:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHZs-0001md-0X; Fri, 18 Jun 2021 16:42:44 +0000
Received: by outflank-mailman (input) for mailman id 144893;
 Fri, 18 Jun 2021 16:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luHZq-0001mX-FO
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:42:42 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.50])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba8a0a5d-3007-41b7-8ba4-da2a522e30b6;
 Fri, 18 Jun 2021 16:42:40 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5IGgO510
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 18:42:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba8a0a5d-3007-41b7-8ba4-da2a522e30b6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624034544;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=1FV4B93jd0TczXufcHJFMNxUzrlABxhY02e7UXDejuk=;
    b=pOZPcCtVDrI2SaWkL4Bt16nAID8t/HBfwwOxK/z8tKPhV8VmuHszSsPcTK40MACdh0
    nzwg5QcZWzXaz2cxPtvX/YoAfQ60eD6cmslSKhv/2fXkRSw/BNvpiwMUvr5sjsyjF7pl
    MNsFM0RZMflcZgLNPlSXlyl+eLblVfBa0dYG6Axo9UcLvab0GrW3xQLhyiHETzdaaHPm
    BnnJVSO5/93qf+CFQo2Ys5+ww/raASAS/d41gWkgwAx+AiKQi6eKeE8BhfkbA6aFtJ28
    tmN/tTrjm7YnobOm9NX4OlF8frbKwx5w6d7i+tiLMQ4nmOn8TVCn7dQ3n0nF9Lg31son
    MtJg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg5l+Vz7FJgt8+TgOd9sTrMwXjWWExsBKQCrpnqhqg=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] compiler.h: define CONFIG_GCC_VERSION
Date: Fri, 18 Jun 2021 18:42:07 +0200
Message-Id: <20210618164207.5111-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fixes commit fa5afbbc20ef3577c5338f9d0b24dad45cef59cd,
due to lack of commit 534519f0514f52007d504e0f2eeb714de7b2468d.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 xen/include/xen/compiler.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 85cbd1ab00..e2b7193042 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -99,6 +99,13 @@
     __asm__ ("" : "=r"(__ptr) : "0"(ptr));      \
     (typeof(ptr)) (__ptr + (off)); })
 
+#ifndef CONFIG_GCC_VERSION
+# ifdef __GNUC__
+#  define CONFIG_GCC_VERSION (__GNUC__ * 10000           \
+                              + __GNUC_MINOR__ * 100     \
+                              + __GNUC_PATCHLEVEL__)
+# endif
+#endif
 #if CONFIG_GCC_VERSION >= 110000 /* See gcc bug 100680. */
 # define gcc11_wrap(x) RELOC_HIDE(x, 0)
 #else


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:47:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144899.266644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHeC-0002Pz-L7; Fri, 18 Jun 2021 16:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144899.266644; Fri, 18 Jun 2021 16:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHeC-0002Ps-I5; Fri, 18 Jun 2021 16:47:12 +0000
Received: by outflank-mailman (input) for mailman id 144899;
 Fri, 18 Jun 2021 16:47:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luHeB-0002Pm-4H
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:47:11 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f25f7dc8-5f2e-4155-a273-54ca586610c5;
 Fri, 18 Jun 2021 16:47:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f25f7dc8-5f2e-4155-a273-54ca586610c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624034830;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=E33zJeAK7v1x3UBrXwFB2f85AIq4cUvJGEhMawteysw=;
  b=VNpysAW/FEPZ6/yjG6aW8Tn0m41EryNQHJHqby62LCZ2jL+OI3iKndGe
   eQNhQr9hpHUxBKwR2YCZgJ7Ak6OhUU7Cd/TCcicG4eVDcuthFsmx0vouC
   g8P0pGVFn7JXJEfASIct4gRYHVDYBuPmsmDaQFXNmUb38w3+gBjXCQScs
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FJdM/tIGopib35+q0+P0aN6ruoX4mJahtzApeMdY9kWlzv4lzyKOmZrYcl5Ez+pGSCUiv7FSvY
 rQhXJNVVH+RiuqZm3HVToJohWTN6DlAXu/HJqXFzVuSatZpZGuLs9oRO5SIdeqp01EehHgZDlh
 /RCBPK5qIRQ572CozhaOWDLoSuRqDICbvKPR3/IrnHhvXGzSfc/kohwPJqzdWIPfOz7ShZrkW/
 yQ/D+WsDAiYtDV5lM3HO6piqxoRi/rjIm5avu5zSSKeXozjwkKLtTEeA9/O2dgPpaVOiPgTP/+
 izQ=
X-SBRS: 5.1
X-MesageID: 46463207
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:dlERcaDA8GvYZ5nlHehIsceALOsnbusQ8zAXPh9KJyC9I/b2qy
 nxppgmPH/P6Ar4WBkb6Le90dq7MA3hHPlOkPYs1NaZLXXbUQ6TTb2KgrGSuAEIdxeOj9K1kJ
 0QDpSWa+eAf2SS7/yKmDVQeuxIqLLsndHK9IWuu0uFDzsaDZ2Ihz0JeTpzeXcGITWua6BJcK
 Z0qvA33QZJLh8sH7SG7zQ+Lqb+juyOsKijTQ8NBhYh5gXLpTS06ITiGxzd+hsFSTtAzZor7G
 CAymXCl+WemsD+7iWZ+37Y7pxQltek4txfBPaUgsxQDjn3kA6naKloRrXHljEop+OE7kosjb
 D30lYdFvU2z0mUUnC+oBPr1QWl+i0p8WXexViRhmamidDlRRohYvAxwL5xQ1/80Q4Nrdt82K
 VE0yayrJxMFy7Nmyz7+pzhSwxqrEypunAv+NRjzUC3abFuL4O5kLZvun+8SPw7bXvHAcEcYa
 pT5fjnlbJrmQjwVQGAgoEHq+bcK0jaHX+9MwU/U4KuomNrdN0Q9TpR+CUlpAZ3yHsKcegP2w
 31CNUeqFhwdL5eUUtcPpZMfSLlMB2DffrzWFjiamgPQ5t3Sk4l7aSHuokI2A==
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46463207"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V1kcDOnTIClpo4oTObGMvLI54/ILQsqRYgN4ue9wja988/uQh/c2t1k71LvEddp1ddu9NoSaSofKtSwoxgGmpHl7vFhXa3GM4QFYMCkLdEXEKzZJdRKkcIa+zBNJxWBR7pJ2PWevnAiWbdnKQFPanWUSQq+dTT6HCJwXr2q0wubhVMM0z77zSZKyzT2KjG9Eokkz+S8lbhnZZbtdVHHXv1Je284oaURJsnm1s61H0T6ti/9DeY9OLdeD92ZReFbFSMp+rWoH9PvyZpdrY7pT8AzHu5o1bGs7KgWX6LiRgHXNiQhgeh3345vtm2o/flF+QvRLUUT8Ln701fRtuK1Fnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E33zJeAK7v1x3UBrXwFB2f85AIq4cUvJGEhMawteysw=;
 b=ib/y0NqJ6pRlDxYrBGhlAr2ns8Hof3hclnU66DYEW7d7e/TsEZyfgD8No52PrcBWHBYkJgkIooe7LojmyHMN5iWkkm3YFpp7ematzpKxc1/2BjB+0IvWiU2r6A1pOFwKMOaQl7PX5QnGOe9BWhU372hpZtNMVEO6JkknVHsO3LeefMoBTU1fQA1oyLPDdWuPNkvmxJtzTfH1LJ49Ax/4HDoUpECgtRqSs2zdNjEh2E7X0OUtXqBknszrddtX8RvvFYmtEA9VMw44felqAHTeGVBW7EFhfe6qPOdnZsHMQIZxWFhyAlsuRDK86XV+RxGB5iwn94Kw8yNMNz5s7itxIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E33zJeAK7v1x3UBrXwFB2f85AIq4cUvJGEhMawteysw=;
 b=rbY2LLud+d6xeZbJtduVQZcF1Vd1QgbmNbcyW/GFSX4W3edM8JE8jgPN/+eDKOrG+IgVLbmkcd7h53O35ayppj9MCDtradqkG5MMO1HF763KHB4x7QqByFKRG9Nbc+8lhn9vjBQ+QRKTHs71EN6CQV7PLSgFtWMkY8enO1AmFTk=
Subject: Re: [PATCH v1] compiler.h: define CONFIG_GCC_VERSION
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210618164207.5111-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1a3b3a14-61e6-c805-78e4-4b1dbff322f3@citrix.com>
Date: Fri, 18 Jun 2021 17:46:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210618164207.5111-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0006.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:a6::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cd093300-3d26-426e-4a91-08d93278ae90
X-MS-TrafficTypeDiagnostic: BYAPR03MB4118:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB411811EC8F4F686B5928B8ABBA0D9@BYAPR03MB4118.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:186;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ZArD90XeP3c3lkFwyF+7Y62HeY4SadOqPEHK/qZmqTXoqOhQp5Aj+3enG0iXJazlQzpFFDLaLoKxX/ckHmhRj05VKuxLt3gdp42FZ5G6XEMD0ypz1KZtV96a25DxnhGZr/5JbF3iZW9jmdDtTr81HRDnjwU6IiktoGz5bIpei9vdIlYlo+vpeHJMDF1J2fXZ4pKaaFYbsJ+DkOZiNN3MsUSwMByBRTYYfAxaOtRA04CkpAJg4ky4exmduV8ImuVvwpOGHy+1PcVmbE885jEqTgExL7jficpj35GX9nq6iWEwgeN5c4alWWaHHAvQr6S4tPPoHyAsyMpmwvFjsP35CCdXcKY29mHUi5gV/DnhOrBhA/5VEQbuQU3ZI0C08rk1bSggMEfg4+mvHgu37PtIpODja1cLZxV36F1U//ozM3AYM5saOVGxtJz8wpitsYqV0jB14g/tQJeU24/K5ETHc8ai0re5+Vkpp1DQVBVDxKPcbBgGesV8B5YDJlNmqW+UmhWIt0UFq0tqmQ1M0AyXT8HYwyGbO2SZbYVUolfnuychQ77jXhzKKrXOlq9XBsqOdMBiDbJ7o0eeDuyOJHCsaaWQg59WGbBnQfoSm7VluMI2ST66Tfi2evqdFWbP2gjgS20zrossfdpPcIlsU/rE4x/hOlN/Pns04jB8mlszKYOBisbml1y3D7G31qJEp/AK
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(366004)(346002)(39860400002)(136003)(86362001)(6666004)(38100700002)(956004)(2616005)(478600001)(5660300002)(66556008)(31686004)(66946007)(26005)(53546011)(558084003)(186003)(66476007)(31696002)(6486002)(16576012)(316002)(54906003)(8936002)(8676002)(16526019)(4326008)(2906002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?Q0ZpQXdha0FQNGU5d2czRjlXWnNqN25BUTJ5M3lCVk9INWVLRkJTU0krd2N0?=
 =?utf-8?B?bTZpZFhUZHpKU2hhT3U5aUcvUld1Y000QW94WVFxQm9MdVFReDZMa2gzNUcr?=
 =?utf-8?B?V2JLTnQ4RytsTnBmUmxxd2lzVG92QXRHaXNuMFRJSUxlZkZqMWJYb1l3Qlgr?=
 =?utf-8?B?Wk1sczBUTWF0VkNoM2Z1M3pkOUFIUlp2RDA0dFFRbDdWbS85Y1JFaG41bVVZ?=
 =?utf-8?B?d0xEcTdEd0NUQ2FmM0wvaEtPU2duTmxBMjA4V0NER2R3SjlCTXd5Sm5NU2Vq?=
 =?utf-8?B?QjMxQXR2ejhCSkwwTVR2eko4N1lNanNmVnByaS9lUGtFRG9BTEVTbjRHNTNJ?=
 =?utf-8?B?QktCVXQ3MDJjenVOeVovbXk0Y0JTVWVRYUFkQTF6SzFMRnR2Wi80WEhuVUc2?=
 =?utf-8?B?T0I5eGNFTWlHaVlQeU8wRHdLUVI1T3FGdlpxdDVxUGR6MXpmSTZnOVpzeHFt?=
 =?utf-8?B?ZGlCTDBmYjlVQndHNkE2Z0JzZ3FSbnJDcE1wUGFXS3YrS1lxejJBWFJwQ25E?=
 =?utf-8?B?bkQ2bVA5Yks4R3k3aThDSVdrZTE0cUttVVpJbW1uWG5LbEU1Y0dxQU43RmlS?=
 =?utf-8?B?UjdZM3JrdUVDNjRHOTRGUjNFaHBNOFl3K293OUJUVjRLbkxKVy9Dd3VpK2hI?=
 =?utf-8?B?OVk2L2I5aGo0WXlLcW1RVEVqTCt6N0lQclJoa0YraTRXZThPQ3BHaUtrelky?=
 =?utf-8?B?bmI1YlBhYzUrTStCcUsyNHBsZUVkQU9OT3hqdkxpcjREWEZjNXFjNmZoa2pP?=
 =?utf-8?B?aUtCYnNXdFVHNmd6TUVVTFRrZzBid2lhTUxpSUIreGhLQ2ozQmNDL0puR2xT?=
 =?utf-8?B?KzdrU2Ztb05SOG94ZnJ5TmVLY1dLU3ZtUzJQdmxvdW5weUNtZWpSUzY3RWRL?=
 =?utf-8?B?VlQ0MWkrRmkxeWpwQXlVQU8yVXN5ZTg4bkR3NlU4eEhVV2tKWmJuRlZTbTh1?=
 =?utf-8?B?dUhtcFAzaENOUk14cDRScjZCaWtiZTBjUWtpRXQyVE9Xd3lpbjI4a1FlK21i?=
 =?utf-8?B?bVF1ZnFJU0dTSy9xeFRFc0o1ZEFwRHJteWFrUEd5cmVPclBRQnJ1ZEs0M2Ja?=
 =?utf-8?B?K3E2aDlhMEg5SWdxTC84ZGZPd21kLzdCT3RVNy83ZGV3Y0RvZmY1RjNpWE45?=
 =?utf-8?B?TSttc3gybUM0dmVLOEsvSVpPRTFYVWVFeG5VNHpZWGpHdC9OVGVscFZ4RXlP?=
 =?utf-8?B?ZG1EZytpbjhhWDZvMzhCc1h4aGxwakR5NTIySml3Tk15TllTbE5vWk9lc0hC?=
 =?utf-8?B?U0V0YzR0TzBYcjMvNzNBQkNZMHFyamVzaGVVWkZwa2VveEhRWm4weFNZZFhu?=
 =?utf-8?B?QlNTVHdzZzRmSmV3TVdrVGU0OXFLcmlGS1V0Yk50aDA1V3hRTWFiMXVoTjFW?=
 =?utf-8?B?ZkdEa055UXo1cHVOY0t2SnNBbVhMcUJhaFRTSkl4N2dxNkZ4WWpoZ1hKTXdw?=
 =?utf-8?B?THA5MC9wanVoV2dBR1RSUUNHUjkzMFFNSExhVldEM1JwOEJxWXJ6Ukp6LzlX?=
 =?utf-8?B?OEtCSUljUHRkR0FNWXV0a0ZGclZZc1hLNlB3V2xNUXRlWDNiU3NqSlZhaXc5?=
 =?utf-8?B?RUpvV2ZxQlNUaURPd2JiV0RlTkhFT2xCVDJqRmx3aXk1bGpvOHVLVmVUKzh6?=
 =?utf-8?B?T1BWa2JUdzZBbm9OVnUrdkRMVkhTZk1XQTVMcjRwOFNOLzNHTDlHQWhpRnE2?=
 =?utf-8?B?MExmeXAzVUtIZU1PTjlRZ1AwTURsNDc0cnBEY004SzlsQ2p5R05TdEhFNG80?=
 =?utf-8?Q?oSbR0X25V40TUrPUlVVNQD+ugC3IR/J3a72hzy5?=
X-MS-Exchange-CrossTenant-Network-Message-Id: cd093300-3d26-426e-4a91-08d93278ae90
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 16:46:55.6831
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: it/oGcWuJjEUsmVXM2ojjiMPY6pgf4Gp9lK6Y/TVP6AwHFbxu2Zy/JWyMfS5FGrQ/ZAIx+PaACUAQBhuVr/byLvkrUsqEBb/g8SnWWZva/s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4118
X-OriginatorOrg: citrix.com

On 18/06/2021 17:42, Olaf Hering wrote:
> Fixes commit fa5afbbc20ef3577c5338f9d0b24dad45cef59cd,
> due to lack of commit 534519f0514f52007d504e0f2eeb714de7b2468d.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Presumably you're intending this for Xen 4.13 and older?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:56:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:56:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144907.266655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHml-0003v9-NZ; Fri, 18 Jun 2021 16:56:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144907.266655; Fri, 18 Jun 2021 16:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHml-0003v2-Jq; Fri, 18 Jun 2021 16:56:03 +0000
Received: by outflank-mailman (input) for mailman id 144907;
 Fri, 18 Jun 2021 16:56:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ygMg=LM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1luHmk-0003uw-9q
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 16:56:02 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76209236-7a57-4377-94af-6bf1197b1ff2;
 Fri, 18 Jun 2021 16:56:01 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.3 AUTH)
 with ESMTPSA id x0a371x5IGtk523
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 18 Jun 2021 18:55:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76209236-7a57-4377-94af-6bf1197b1ff2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624035347;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=TA448Wauq+GZdmM+o1OHTf267x1J8Horm5DfTeI3j0o=;
    b=ZQ4ZTVquUiMk1VdHSoJgLKJwJaa0iBOrqFGq0lmqn/GyXe3ApJ27I7kjqP8tkbC79K
    TafxdQoFcjNWNhfW+Y38s9U8/H8Aqrmj/lqA1rIvqY25uIzqlGCMfDCg+HmXxDiTfuII
    3JOCKwvNDog+X5EbnOjOy4XOBrsX9Oty5NJCoIKxUUcEU7+V/9xCkDMHYGygsqwZLUZ0
    /eFV224gAJu3I9yjqtkRfGRnH7kIb+M5G2rT7IkZrnrTV5rClcqCM90TCd9i1XwCal+r
    eRiWrwnX5nqdWskCGC2JIIYYeglJwmouTal5+a8M/iPg2DoiAx27QzJRqESJELnAKXBq
    jm7g==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisQsVxSIbR7sf0kebs4Z3Zpqv+Sabl5o7CzRq+Ps8Q=="
X-RZG-CLASS-ID: mo00
Date: Fri, 18 Jun 2021 18:55:39 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, George Dunlap
 <George.Dunlap@eu.citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, "Stefano Stabellini"
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] compiler.h: define CONFIG_GCC_VERSION
Message-ID: <20210618185539.491fc904.olaf@aepfle.de>
In-Reply-To: <1a3b3a14-61e6-c805-78e4-4b1dbff322f3@citrix.com>
References: <20210618164207.5111-1-olaf@aepfle.de>
	<1a3b3a14-61e6-c805-78e4-4b1dbff322f3@citrix.com>
X-Mailer: Claws Mail 2021.05.24 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/w2c4J=kHxKgrAPpXWcZ9SnC";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/w2c4J=kHxKgrAPpXWcZ9SnC
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 18 Jun 2021 17:46:47 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> On 18/06/2021 17:42, Olaf Hering wrote:
> > Fixes commit fa5afbbc20ef3577c5338f9d0b24dad45cef59cd,
> > due to lack of commit 534519f0514f52007d504e0f2eeb714de7b2468d.

> Presumably you're intending this for Xen 4.13 and older?

722f59d38c710a940ab05e542a83020eb5546dea without the required changes exists only in staging-4.13 at this point.

Olaf

--Sig_/w2c4J=kHxKgrAPpXWcZ9SnC
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDM0AsACgkQ86SN7mm1
DoBrbhAAj93d/CLPmBqP1iDiqNrAm674ZbyRYtcBQGUDrHcGNEKuqJuW9x6hCVrg
6B/l260mB/kcgv6zbPk2VnjcixTNfjo3Vd2g/TzRolQ0vfqo8W9x1FDy9eT0qQqx
/LNdaaZXazfWrYynhB58DDABBr+TUAeU5dXizp5lmTBSP6cl3LVcEeZKC+tgB0kJ
eQIcTkble9rSdVcUrdDunW0Tov+cU9PQqr6zgsmPXr0SLC7LP1CjTDkFhodAYl/w
W9NSjV0KxHLMnmIn17R8KWidLn8UUn0y88TSmf613UfuSBS3ryZxLn8JG9doSI5P
wFU2FzlnKxSzR06CYfgPf+ousJ0EUYXmIshkZOCkOR1CUEU3zdraxnPHlo5UTgCz
1UF980rB5ntrlraOvQtprx/oGHyVsPAa/bQTOHPzRUXB6w/isJAa4iExd9PM23xk
DaH0YONEOiT1RFXW5w8NReR59vH/c3y2Y7MzDtdHBocBGMDtFCHsKMiQCl+o9pP5
03plJ+BUzwXn5mgAeKVs8ZBdg8BScwinIj7a29RykwOwzXWOiGV2J3K8ohIO/81y
u2xsB3/i4vGBa48c7j4OvOK1OoQkZrbZprGMhOpdMidTxtAwPL7kJIUZrDx6+LlB
YagQt1tbeiMFsADFSJrZIlBeVWkx6VBrwMNGSgPSo7OraJOnpdA=
=ZBy1
-----END PGP SIGNATURE-----

--Sig_/w2c4J=kHxKgrAPpXWcZ9SnC--


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 16:59:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 16:59:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144913.266666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHq0-0004aW-5i; Fri, 18 Jun 2021 16:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144913.266666; Fri, 18 Jun 2021 16:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHq0-0004aP-2Z; Fri, 18 Jun 2021 16:59:24 +0000
Received: by outflank-mailman (input) for mailman id 144913;
 Fri, 18 Jun 2021 16:59:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luHpy-0004aF-Ru; Fri, 18 Jun 2021 16:59:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luHpy-0004wk-Jo; Fri, 18 Jun 2021 16:59:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luHpy-0007g7-AS; Fri, 18 Jun 2021 16:59:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luHpy-0006qH-A0; Fri, 18 Jun 2021 16:59:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e5ZE5XZ/01zMJgC+viZWbTnHNtCsRzaho/A/o+DCdKU=; b=jHCRChvKAZIp63xhsRcy8sJaHq
	wumecY23eT8fh699C/bRGwqGUektiCqVKJY8Ab9MkhjePx2l+OWs5IJV0ItEXKId3ILaASQckcAia
	4zntTHhI27/WfJ/3/x1zmrD4WtWGSc+Os+2keaPTI8TeJMkqMRJspFvNKvSmV6MiIC+Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162888-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162888: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=68940b3fb3c43b8aa03cb6fd2f1d00b1737c9b2c
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 16:59:22 +0000

flight 162888 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              68940b3fb3c43b8aa03cb6fd2f1d00b1737c9b2c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  343 days
Failing since        151818  2020-07-11 04:18:52 Z  342 days  335 attempts
Testing same since   162888  2021-06-18 04:20:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 62627 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 17:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 17:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144919.266680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHrE-0005uL-Gn; Fri, 18 Jun 2021 17:00:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144919.266680; Fri, 18 Jun 2021 17:00:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHrE-0005tv-Ds; Fri, 18 Jun 2021 17:00:40 +0000
Received: by outflank-mailman (input) for mailman id 144919;
 Fri, 18 Jun 2021 17:00:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tyHv=LM=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1luHrC-0005tn-O9
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 17:00:38 +0000
Received: from mail-qt1-x82a.google.com (unknown [2607:f8b0:4864:20::82a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a5e9295-6315-467d-b0a7-5eeab2601abf;
 Fri, 18 Jun 2021 17:00:37 +0000 (UTC)
Received: by mail-qt1-x82a.google.com with SMTP id d9so8009112qtp.11
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 10:00:37 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id g15sm5505947qtx.75.2021.06.18.10.00.36
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 18 Jun 2021 10:00:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a5e9295-6315-467d-b0a7-5eeab2601abf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=z5zsokqXgiD6bC7McyK/M0FOdguu6oT7x1Jlxm2c+lU=;
        b=ntlpwJIMtoPglnvfMmUfquAP9465QRaDEybwjBPldd3FLNFFuRBmCz1RQETys4ASwu
         TfnWA/tqcDquTihzqUc0wjEP9nehanGO3g6xg1f6EEXCFvSRvU4BqetCD7lBOjxmRBXm
         mU+WffQz9qWQn8NB/kbiuOqpM/ZrMZIxMe/Yql8z6paEG/rhE9zS6Pfp0GLNID83hGf5
         NZvNfnTiH9XAwTqoNa44qvEfLtq7d/+Zf7eK1pEUTBmPA7/6UO3bPG5Dujivwy5k8npv
         LyQjfc9Rn/yilQUDi69/UugtrdkIBs+GobBbejowQ0pqGFWmDuMGMTatHt6n9QHyOvjN
         HTcQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=z5zsokqXgiD6bC7McyK/M0FOdguu6oT7x1Jlxm2c+lU=;
        b=IMEEGffOBoCWEnTk60GU490elafvJcYP1BAoa1MOP+ukPU0fLAVanKETyw0dacPYts
         ghbt25JXwiPpLXHq2hdbLEzB1xV85dvbrhxygJF7DuCIgLlzT+IisuHWd15KY/zo1qNJ
         qodbh7jMqWSDknhWYA3yx9ELES9iuvSUYTrCoATFi51sy0fv/qjVmhdIEZSrkc2xnNZA
         +YWCxCikNkwZLEGMEpXdaUVnW2/gyE1iYphQwlI5j4oAVItmd2scwaMxvhFkGY7aCUV1
         VrEIumr75uXHeFuB34UnEmUfwnM1/BSLE8kzHl5Z1btiYT8e1BnlDYZFiNFUM7Y3D22Y
         DSyQ==
X-Gm-Message-State: AOAM5303G6lwMpH71Rp1S6AFia2G0jf58GNUfzIsvZZM78UHdmJsO9a8
	YmOHCk8KdXlLvoWzrl1CYaw=
X-Google-Smtp-Source: ABdhPJzzGzyBvxCbPBAbZwitT73glTef0IpyjPU2hmxF1lekYVNaWi6FVxBrlX9uaK4pwiD8TJjwfA==
X-Received: by 2002:ac8:7516:: with SMTP id u22mr11404297qtq.160.1624035637474;
        Fri, 18 Jun 2021 10:00:37 -0700 (PDT)
Date: Fri, 18 Jun 2021 13:00:34 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Message-ID: <YMzRMlaHapLn7msf@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
 <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>
 <YMy29arbPMnPI/+W@FED-nrosbr-BE.crux.rad.ainfosec.com>
 <8727719E-9548-40CF-A186-14E2ECA3801D@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8727719E-9548-40CF-A186-14E2ECA3801D@citrix.com>

On Fri, Jun 18, 2021 at 04:18:44PM +0000, George Dunlap wrote:
> 
> 
> > On Jun 18, 2021, at 4:08 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > On Fri, Jun 18, 2021 at 02:44:15PM +0000, George Dunlap wrote:
> >> 
> >> 
> >>> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >>> 
> >>> Add a ContextOption type to support functional options in NewContext.
> >>> Then, add a variadic ContextOption parameter to NewContext, which allows
> >>> callers to specify 0 or more configuration options.
> >>> 
> >>> For now, just add the WithLogLevel option so that callers can set the
> >>> log level of the Context's xentoollog_logger. Future configuration
> >>> options can be created by adding an appropriate field to the
> >>> contextOptions struct and creating a With<OptionName> function to return
> >>> a ContextOption.
> >>> 
> >>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> >>> ---
> >>> tools/golang/xenlight/xenlight.go | 44 +++++++++++++++++++++++++++++--
> >>> 1 file changed, 42 insertions(+), 2 deletions(-)
> >>> 
> >>> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> >>> index f68d7b6e97..65f93abe32 100644
> >>> --- a/tools/golang/xenlight/xenlight.go
> >>> +++ b/tools/golang/xenlight/xenlight.go
> >>> @@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
> >>> }
> >>> 
> >>> // NewContext returns a new Context.
> >>> -func NewContext() (ctx *Context, err error) {
> >>> +func NewContext(opts ...ContextOption) (ctx *Context, err error) {
> >>> 	ctx = &Context{}
> >>> 
> >>> 	defer func() {
> >>> @@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
> >>> 		}
> >>> 	}()
> >>> 
> >>> +	// Set the default context options. These fields may
> >>> +	// be modified by the provided opts.
> >>> +	copts := &contextOptions{
> >>> +		logLevel: LogLevelError,
> >>> +	}
> >>> +
> >>> +	for _, opt := range opts {
> >>> +		opt.apply(copts)
> >>> +	}
> >>> +
> >>> 	// Create a logger
> >>> -	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
> >>> +	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
> >>> +		C.xentoollog_level(copts.logLevel), 0)
> >>> 
> >>> 	// Allocate a context
> >>> 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
> >>> @@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
> >>> 	return nil
> >>> }
> >>> 
> >>> +type contextOptions struct {
> >>> +	logLevel LogLevel
> >>> +}
> >>> +
> >>> +// ContextOption is used to configure options for a Context.
> >>> +type ContextOption interface {
> >>> +	apply(*contextOptions)
> >>> +}
> >>> +
> >>> +type funcContextOption struct {
> >>> +	f func(*contextOptions)
> >>> +}
> >>> +
> >>> +func (fco *funcContextOption) apply(c *contextOptions) {
> >>> +	fco.f(c)
> >>> +}
> >> 
> >> Why all this convolution with interfaces and such, rather than just defining ContextOption as a function pointer?  Is it just to keep contextOptions out of the documentation page?
> > 
> > Part of the motivation for using functional options is to abstract the
> > "options" struct, yes. This allows internal defaults to be applied more
> > easily -- if you require e.g. a ContextOptions struct to be passed by
> > the caller, how do you know if they intended to override a default, or
> > if they just didn't set the field? Additionally, using the ContextOption
> > as an interface allows variadic arguments, which are just convenient for
> > API users -- the same NewContext function can be used whether you need
> > to pass 3 options or 0.
> > 
> > The reason we use ContextOption as an interface, rather than function
> > pointer of sorts is for flexibility in the signatures of ContextOption
> > implementations. E.g., we could have
> > 
> > func WithLogLevel(lvl LogLevel) ContextOption
> > func WithLogContext(s string) ContextOption
> > func WithFooAndBar(s string, n int) ContextOption
> > 
> > See [1] for more background on this pattern.
> > 
> > Thanks,
> > NR
> > 
> > [1] https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
> 
> Yes, I frequently use a pattern like the one described in that blog post myself.  But that blog post doesn’t use interfaces — the final slide actually has the “option function” type as an open-coded function pointer type.
> 
> So my question was, why not do something like this:
> 
> type ContextOption func(*contextOptions) error
> 
> func WithLogLevel(level LogLevel) ContextOption {
>   return func(co *contextOptions) error {
>     co.logLevel = level
>     return nil
>   }
> }
> 
> ATM the only advantage I can see of defining ContextOption as an interface rather than as a function pointer is that the godoc for ContextOption would look like:
> 
> type ContextOption interface {
>    // contains filtered or unexported fields
> }
> 
> Rather than
> 
> type ContextOption func(*contextOptions) error
> 
> Which shows you the name of the unexported field.
> 
> Is there another reason I missed?

Technically it does allow more flexibility in implementing
ContextOption, e.g. you could do...

func (lvl LogLevel) apply(co *contextOptions) { co.logLevel = lvl }

...and then pass a LogLevel directly as a ContextOption. But generally
everyone implements these things as funcs.

I will admit that when it comes to my choice of using the interface
version instead of function pointers, I am just more familiar with the
former and encounter it more often in other Go packages I use.

Thanks,
NR
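
[Archive editor's note: the two variants discussed above can be sketched side by side. This is a hypothetical, self-contained sketch; the names (contextOptions, LogLevel, NewContext) follow the thread, not the actual xenlight package.]

```go
package main

import "fmt"

type LogLevel int

const (
	LogLevelError LogLevel = iota
	LogLevelDebug
)

type contextOptions struct {
	logLevel LogLevel
}

// Interface version, as in the patch: any type with an apply
// method is usable as an option.
type ContextOption interface {
	apply(*contextOptions)
}

type funcContextOption struct {
	f func(*contextOptions)
}

func (fco *funcContextOption) apply(c *contextOptions) { fco.f(c) }

// WithLogLevel wraps a closure in the interface, mirroring the
// function-pointer variant from the blog post.
func WithLogLevel(lvl LogLevel) ContextOption {
	return &funcContextOption{func(c *contextOptions) { c.logLevel = lvl }}
}

// The extra flexibility mentioned above: a LogLevel can implement
// the interface directly and be passed as an option on its own.
func (lvl LogLevel) apply(c *contextOptions) { c.logLevel = lvl }

// NewContext applies defaults first, then lets each option override them.
func NewContext(opts ...ContextOption) *contextOptions {
	copts := &contextOptions{logLevel: LogLevelError} // defaults
	for _, opt := range opts {
		opt.apply(copts)
	}
	return copts
}

func main() {
	fmt.Println(NewContext().logLevel)                            // default
	fmt.Println(NewContext(WithLogLevel(LogLevelDebug)).logLevel) // via closure
	fmt.Println(NewContext(LogLevelDebug).logLevel)               // LogLevel as option
}
```

Because callers only ever see ContextOption values, defaults stay internal and the zero-argument call NewContext() remains valid.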


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 17:07:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 17:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144926.266691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHy5-0006gG-9G; Fri, 18 Jun 2021 17:07:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144926.266691; Fri, 18 Jun 2021 17:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luHy5-0006g9-5x; Fri, 18 Jun 2021 17:07:45 +0000
Received: by outflank-mailman (input) for mailman id 144926;
 Fri, 18 Jun 2021 17:07:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CUMw=LM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1luHy3-0006g1-8o
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 17:07:43 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7dfafc1b-0a62-420c-8f71-3fc762af466f;
 Fri, 18 Jun 2021 17:07:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dfafc1b-0a62-420c-8f71-3fc762af466f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624036061;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tdNyLLlSw0qMVJehgjoDsRuNiAn6teHXz8gahh4senc=;
  b=XDFYNKKMXtEFRN9e8T9buIryi/Xb1JSry5OqchqhlL3JAtoqSMFze/o/
   lsomEosxYKi6l5RZ8xS62v9WTjCe9NrWpNxuzQvfEGPDKniAcMCTaT9Ev
   TlT4H98C2H4O6Z0WA4J+g6cwjmgN+1a544XDtVy9Vqcv8QJu0OncaZuCJ
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RVDpfXBWZVkDUKJoSufMmfybxX0e4lCEfh240PodOF3RyceWNiOg5LQvK/NCBbtmAKu0wSUVZL
 psFszpP09QmWBgOQ5KUF5LWY+cQHwl88totO64GuwHVCi7Pp9TSrtcGSs4+74PJhBRqe4Bg7l6
 MtG7EUyB9UG92wqc4wTmol3OnMvqzG6WAhqwHp4DjMxxOAG7ulFSq0EUrG/JR61BrpEwIw/6fm
 wniGmNZbg9t5nUM+wVHF7t7wnPeNp7mi4olntatSlDwqyIsMVdYud5nJGytNolVXGGrEoZOthO
 nFs=
X-SBRS: 5.1
X-MesageID: 46846757
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:PMKqC6+Jqq7TybXv98tuk+AiI+orL9Y04lQ7vn2ZKSY5TiVXra
 CTdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKL+Vbd8kbFh4xgPM
 lbEpSXCLfLfCVHZcSR2njFLz73quP3j5xBho3lvglQpRkBUdAG0+/gYDzraXGfQmN9dPwEPa
 vZ3OVrjRy6d08aa8yqb0N1JdQq97Xw5evbiQdtPW9e1DWz
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46846757"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A+XrbDiJQo7iYfggKY06MvLlxXI62tOJudKV4UcK53ecOKOwnuLAZXfpOO4mU3PcTtyVEt/vpycO0VcTYJoKniibArWd+oWU+Eg0kIi/7GDbAUO/PgWAHbprqeUz+OwDgUmJlkXkN7gNmMBtDV8mCBhOaEfTtmpREiJXBV77TQQ4pYLCeqvamkbrM225JPObIJZ48nHa9J1tPshlt2B+DCYZWbJDLLPEF42c3qXcsGl4h6FaB/jQ9/2aOwuinV644qqNjXOlU6aHn+y+tV51h8jjBfB7kx29mBPBwQ+OL46nbFJcfGYN4xVad7z5gWAw1MOTEmyka9sBJH4f4+FEeQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NKgHKUfz7/qH+TdJA7nGfrNDbbvgtfrzWNWzblCR9Ns=;
 b=VJYhNzxomiE4oX/8BhFz8+NGcE1hIMy+5YZ0EzNnL6VkZUY/Dh/Wadlde63KibtBDsqShaQwoOnsaHl8/9pgu5kLdishSnV/QgsN/UxaHEoDXn9iO8AxjhOCQI9slQekT4RNk7lhGG1qgb+5JfyEwdH80qwPkX1q0VoCDTMmrtEwpe2aWTS/BEWmSuev4hHUGr8XvN0F64MsqT4/MbwjH2HcjT1PFXRDskJxDH3+aefrfhbr5yNk3idRM/VU2sHxVLBzp3qPozxutaFbanJuLbvTZlf4RNkBtn94grK0eZUqNl/AZqLoGa+U6wQwbhYNAUivrLO3GfSGYXlJCkHXjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NKgHKUfz7/qH+TdJA7nGfrNDbbvgtfrzWNWzblCR9Ns=;
 b=e13TXkyg/HnugXyf+1dSB2mqZaqmBiYi3zKP+6DWKCyfI6oUK0HaJenKxhDotbD3ttOa4iZY9A2tlMiMQrsf7gzJm5cSXVObWWBGOWpZFSPArqcHwNNBXK9jh+RuFI212zJKv6jaH1eBz/jmoDuAssTJ2QZfKqvHmBwvgUaky1A=
Subject: Re: [PATCH] tools/ocaml/libs/xc: add OCaml stubs to query CPU policy
To: Edwin Torok <edvin.torok@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <5fdb7b4cdee69af8e2b9d77b56b1027a8799cf04.1624012999.git.edvin.torok@citrix.com>
 <4eb5d3fb-db71-5981-e6f5-0503ff896fd9@citrix.com>
 <5B331E67-0BE9-47DE-8076-EBBE06BDEBF4@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <70ab3e24-7586-b75a-6c88-ac22157d76e9@citrix.com>
Date: Fri, 18 Jun 2021 18:07:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <5B331E67-0BE9-47DE-8076-EBBE06BDEBF4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0367.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18e::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 55ef8128-a3a6-49e0-f28a-08d9327b926b
X-MS-TrafficTypeDiagnostic: BY5PR03MB4997:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB4997D0F865611D23FC83824EBA0D9@BY5PR03MB4997.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: QFHXYRX4afRDtWKmOS3dSJEiDqKZlkOt94Gza9Wmdl9fFKK/IcYizCRnb2twCUzWPH0dZiL/8iM0xggCVRXSv6Y+1Fi2qvB8E9frQGgaqQcQBqUWdGiRbPCzDAdTj10l1OvflCfvqkT+EfBWtWFd0W0v9Q/aDtoDEci8+KyBDQrVmrlZx9QcDjrVUpkNY+FAEHeS9Jrs6k5zBFUOP7k+QuI4Nw07u1iRqtxbBFPbl+kURktuN6KUBsgxLzwilT5Tl7k/3CQNJ2GEcgF147XwgnxN8iTiQpMNmJXYLuleruZtmfstArYHNJvbit6U5kGNALuvNhs8GNDZThYl/vo+YHxbQwON3WrrK9SzNuy7SqYFzkshpdF7z89to6XZsAsl2PVGW93fgVWZoE+Q59JAJdl3agWrIGyUVvnUEuVbBVtyhMtaQOrgLxC4zDpN1Cu18UCRtxOPaG/v8et2JIwNEy5wBgsHr8YPwkLP70cJPJOVa1F9jNGfPqoXCyH0b4YLtXXShRseuUB7mh9J/CtGWb53Jngf7pyM5JLFzQfjg780rPIzNWi8n9tf2wQpRvwvqjcbnVyx+3z0WwJl9g3SeX7Yvtv2GrYyiWsL1HwDBxI5Qa/xom0lgwvmY0UAiuZivZjLKLffgOurFfuKACV5XmHylEU1ud62ol0xNXdBzTmYvpkgjvU1l1m1DArWQla7wfnivvVloWGv0T6MORq6hqCrk/8qMD4AoluRtbDWdmsf48lBTmHF95VIUgI3YbMnqHNe8b0dDF75nIxD/N87Ymb+/mmMwQ8p6xaSempSm3nbWSqy7d26cehaRhv1oFkN
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39860400002)(396003)(346002)(376002)(6486002)(31696002)(66946007)(16526019)(316002)(36756003)(478600001)(6862004)(966005)(66574015)(86362001)(31686004)(6666004)(26005)(186003)(8676002)(53546011)(38100700002)(5660300002)(956004)(6636002)(37006003)(66556008)(4326008)(2906002)(83380400001)(16576012)(54906003)(8936002)(2616005)(66476007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?azVoWnFFOXRDTTUwN2VSK1dzOEVTU3Zoa3hlZ2hIT0dSTzg4QnFpS2JFdWhl?=
 =?utf-8?B?dm1abGErK09HTDUrWm92ajhwZmlwejBVQTRKTU8xNjI1U2k4Q25ZMXM2akJo?=
 =?utf-8?B?cEdvU2t5SWlaTms1UnEyUE1xSHFJYVgvdmF0MUliYk1nRlB3KzR1a1JwSDYw?=
 =?utf-8?B?WHVjZzd1K3I3TVYyTW11WXlWZVhRaHptK3ZTNjMxYk51OHAvM3RYN2IxYjlS?=
 =?utf-8?B?Yzg2U2VSYlE3ejkraGZwazRLSjJkeHFibzN1bzZLRFJwN3BxVnkzMThDc0hI?=
 =?utf-8?B?Tnp2ajIzNTYvN2xUcFdxOE9MQXE4Z25tZVhocUhKWTh4TXlFalc5MUJFY0Rj?=
 =?utf-8?B?U0M3MnNVUzFVZFZSb0hkU2lzOXFUSUxhc1Q1Q2NhUU1TMmZHcDBzTVB4QzlL?=
 =?utf-8?B?WVNKbHMvVjNIdXV3TTNUMHNFdmRqUXkrS0hLWE9qMVFZbmJoZExZSllEVlZl?=
 =?utf-8?B?OEtpbEI5ZHVTa2hoUG8zZGovY0l3emF5enEwYUhXN1VwZXVkT3Nva1RuYThh?=
 =?utf-8?B?TWg2Ri9YaENxVjhJUTB0clp2SG1BUFdqRS9WWmZuN20yblJRRTNKL1dZUU9T?=
 =?utf-8?B?TGdmU3pIbFRUajA3VkJmYjFhK3QrR00ySktYQk5zUnMxYzd1OHBKWTZ3dkJn?=
 =?utf-8?B?YzYrUlRxZWlpV29pQ0ZlcC9XRDNjOFpIRzM4R1NqQ3h1WUt5YmZhdEpoZW45?=
 =?utf-8?B?VEFmSzVnc3JVbE9KajF6c2lSQVJOeHdUYnZleTIxREZ6WG5ZNllKbGN1RThl?=
 =?utf-8?B?YWttQzFoZUVuSVVYc1BycUl2Z0FpNmVIMWYyWkZyQUl1Q013MkUvV09TR3gr?=
 =?utf-8?B?dVhuek12VmVvMm9rYi85RTVDSlVUd083MWJ6K3A5Q3ZXeE1jbnhTMG5Ub0xn?=
 =?utf-8?B?Wk13TmJvUHcwaVErN1VoQXRkTDg0WmQvVXdWeWJNUzFtYWRHZmsveHIzak5R?=
 =?utf-8?B?M3lOSHpWLzhlUVZMN0lzMWl2TlZLRnBtamF5bW5vWEZpSDZkUDRKZTRmcUEz?=
 =?utf-8?B?MkxGSm9VckszamcyTkpnNlk2bG1QYXdBSG1VUHNoNStZbWUvWWVQb1JhajEx?=
 =?utf-8?B?UnpZZCt2d2llR0pDV0M1cnpkK0RCMldUd2ZhY2RSUmMrMXFSV0R6YXdBdW9L?=
 =?utf-8?B?ZGYySkIwRm1rVCsvZDdGem5EVFIzYWFhOHZ0QkErc2tIVUNpNHgvVzFlZjBS?=
 =?utf-8?B?NldaT0k4aHpKckVCL3VIV2p6dlBLUW91eERyUDZlVkVvY09YVjhhM1JscmlM?=
 =?utf-8?B?Wlg1WkxlV0hLT3p3RGxoMDkzVTIwcTd0R3FSUW5KSUhhVFRyRFM0WUtuQ2lE?=
 =?utf-8?B?UWtlTU9HdEs3OWZQemtkZ0hYbTBJRDh5YW5STXo4UkJtSDBjTlVIQ2FMTzJp?=
 =?utf-8?B?WlF6MklMNVc0SWVpakxqanFTa000U1JoYXNiZk9FY2lBOVJSYlk2eFRNRkxX?=
 =?utf-8?B?K25iL0tRNUNiU08zTHhzYWFsQ2tkdzNPVXZvWHF3NXZtUW8vbFAydUVQbzlR?=
 =?utf-8?B?c1VCdXYyVkJUT0t5OEhiSzNVKy8yWUpwVjNxcEc1c3VvTDM0SVpmVkZScnRZ?=
 =?utf-8?B?enQwT0Z5VE9MM2RsOVlyMDMrc1F5amVYb3lFclZHMVZESkJ1QXZlTjlHZlVK?=
 =?utf-8?B?VERkN2ZiVE5nSjA3cUs2eWd2c29TZFNoSVpnVm1pSk5GSm1OcVNmOUIrUlpC?=
 =?utf-8?B?ZDlnaTVKSlFLZVBZQ2VnZlJrWSsvMk9lY0MxV2JJMDJmY0gydFI0b3VpMWJ0?=
 =?utf-8?Q?Ums6GvEQRpZAc11gAINY346qLmFoP8KhQPR+3oA?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 55ef8128-a3a6-49e0-f28a-08d9327b926b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 17:07:37.0528
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7KFOx9Sr7aDqzLMeG8yx9KGh7cpKL+BY6n7X3TbqVYJ8ZRhGPQJ8sS/C7+7AfrqF3dPNr7UeGQL4BlVcUDjWu0n6t0yCT7UWIRKGQZdB6ss=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4997
X-OriginatorOrg: citrix.com

On 18/06/2021 14:46, Edwin Torok wrote:
>> On 18 Jun 2021, at 14:17, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>>
>> On 18/06/2021 11:45, Edwin Török wrote:
>>> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
>>> index d05d7bb30e..4a230de8b7 100644
>>> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
>>> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
>>> @@ -34,6 +34,9 @@
>>> #include <xenctrl.h>
>>> #include <xen-tools/libs.h>
>>>
>>> +#include <xen/lib/x86/cpuid.h>
>>> +#include <xen/lib/x86/msr.h>
>> https://gitlab.com/xen-project/patchew/xen/-/jobs/1358403495
>>
>> CI says no.  This needs to be behind a suitable ifdef, for non-x86 builds.
>
> Should the stubs be disabled completely and raise ENOSYS/failwith on non-x86 (e.g. ARM), or are there plans to implement an equivalent CPU policy on ARM at some point?

No plans.  "CPU Policies", comprising CPUID and MSR data, are entirely
x86 specific.

ARM does need a logical equivalent, but doesn't even have migration yet,
so all there is is a stripped-down set of settings done by Xen.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 17:15:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 17:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144933.266702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luI5Q-0008CO-79; Fri, 18 Jun 2021 17:15:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144933.266702; Fri, 18 Jun 2021 17:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luI5Q-0008CH-3F; Fri, 18 Jun 2021 17:15:20 +0000
Received: by outflank-mailman (input) for mailman id 144933;
 Fri, 18 Jun 2021 17:15:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ssXN=LM=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1luI5O-0008CB-W1
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 17:15:19 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21d4ff3c-e749-40dc-99c9-b1ab254133dd;
 Fri, 18 Jun 2021 17:15:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21d4ff3c-e749-40dc-99c9-b1ab254133dd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624036518;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=dMSVwPPBIbnpDgqBTvEOWZ2ZifdfiU8iW1yPnQuBO/Y=;
  b=I76iihOyLGewvPU7qA0S/uyuOYB7imik1lFFaBA1G3logza24i70EQyT
   54a/at04s1QHCHaRANEx9k80/VAGMnIrghAHr/Yb1sh5AdbHPS6v+qXsx
   T7s4a806WF+Yj+gP65tj9KuNPZJX+d/HSBRaK0SZnRELd015IowcHx+RD
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 15ps8/VAlFS9sB/F6ODQfuppQyx3q094sFxMEgBwPh6OQ/+X8wl2Ip9Gyofv37DPIAuhajVsZv
 lymfDs1Jhol9AaP3DahttXvqMH8Uh1Qbf18Y/VA6sU3SzkoJEdDgQHvOygRFNHTeqf/IvSr8Hg
 M7TP0sAi9rwxk3xkl84RzkT8pIbFUsx868OdulCbODIIPBuaakgkGxwXZwvIh0JyFVexRt7dRQ
 tEOsxgU2LYTD5cUZB+9inEIywZiZPAWEGoU2c2liAVAI3HksI3BhUhdRr3ObgbjO//WA+qk/Nc
 nH8=
X-SBRS: 5.1
X-MesageID: 48067209
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:FkDc06g6GxTBfq2heOzsUiPlfXBQXzh13DAbv31ZSRFFG/FwyP
 rAoB1L73PJYWgqNU3I+ergBEGBKUmskaKdkrNhQYtKOzOWx1dATbsSkLcKpgePJ8SQzJ8k6U
 4NSdkZNDS0NykBsS+Y2njKLz9D+qj/zEnAv463pB0MPGIaHp2IrT0JbTpzencGNDWubqBJdq
 Z0iPA3wgZINU5nFfhSURI+Lpn+TpDw5d3bSC9DIyRixBiFjDuu5rK/Ox+E3i0GWzcK5bs562
 DKnyHw+63m6piAu17h/l6Wy64TtMrqy9NFCsDJos8JKg/0ggLtQIh6QbWNsB08venqwlc3l9
 vnpQsmIq1Imj3sV1DwhSGo9xjr0T4o5XOn40Sfm2HfrcvwQy9/I9ZdhKpCGyGpqXYIjZVZ6u
 ZmzmiZv51YAVfrhyLm/eXFUBlsiw6dvWciq+gOlHZSOLFuK4O5lbZvuH+9La1wWx4TsOscYa
 9T5YDnlbZrmGqhHjXkVjIF+q30YpxbdS32MHTruaSuonJrdT5CvhMlLGF2pAZIyHsHcegy2w
 3zCNUiqFh/dL5jUUtDPpZ2fSKWMB2BffueChPfHbzYfJt3c04l/KSHnondotvaI6A18A==
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="48067209"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Tfwn6ZJuKxbl0jjdMTnrUO1e0emSX5LbGVpa7YF8GNRfYneXQWXU9Gc7qVPqUSYAlRn6Fjti68BDOcZo+8eBGtG8SWZJ98OHA9CPFXvL3+e7QpTKVTqsOO9Zb9+GRGxVluL7zCBsZ9vyfV1uZUdpAgg51k7mkRctIl3Zt+8W2qIiMb8Gypa8GZhObHJnBNlZ8IjHjIgzu4nSUfkz5ZO26RZ14ZUDJieN4MOi7y5bZ7kcrKxMQV547lKiBhXfQAehXZiDxO1+EhxvM9KoP75aaNWRbZbq9i5Asd6hpArFUK1dG3A5rqqFHqHeKdqgfl3ov97Kbwins2PieEURUPe95g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vsplhcTyofF7vdbssvmn1EOhC8L9VOs7HZ+acT1iFP4=;
 b=PSi4NNux1ahWaARg1brEJM/zU0Nlb8BAwEvNBN9QfINOFcvfjI3hpvooqwQSg5TtwqMjzUkEw56EUx+pZ7YcEN2YyUZnz/29oo3GpYSKsCnrDHCJCNtqfew6e0zRo45OT7LZAyJqBgtk3R/HUue6q5fnsEwrgwVhLooQAIh81WvZrncTTrAdM6WTZeHQ71L2JGqdQv15s0j1ZqH57gOQADzd0aJMK+DklZi8Xm3dOLWVeLgPAxIG5kvfGYvahb39ORaWxk6mz/lJrlGMWY/UcEuHw3dnQedOj3xOThetLni9QQNyHP6JJKCxXDGwKR40ifIXuVj2Z74Zg+PT5+rtMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vsplhcTyofF7vdbssvmn1EOhC8L9VOs7HZ+acT1iFP4=;
 b=pHnfbo8Ko6eXoXXaiDI/sLAEV2OMAHxUDWUpk+obEWQGA+qegahqeurHnEsyru5YL1sStUoiUuBDYm9cgkfpVjHUZJ8k0ZiQzuqvQYrDuXWtqQc6xtMEoGJ/rvNlTnX88S/20xUXVMvFbdSpTl7Xq51Ujeo6QjSXPLRTxqyTHJg=
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <1eb16baa-6b1b-3b18-c712-4459bd83e1aa@citrix.com>
Date: Fri, 18 Jun 2021 18:15:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
In-Reply-To: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0070.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::34) To BN3PR03MB2401.namprd03.prod.outlook.com
 (2a01:111:e400:7bbc::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bad206aa-65bc-4a62-6d2b-08d9327ca22b
X-MS-TrafficTypeDiagnostic: BN6PR03MB3316:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BN6PR03MB3316AE1ACC01025318007A89E40D9@BN6PR03MB3316.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: bad206aa-65bc-4a62-6d2b-08d9327ca22b
X-MS-Exchange-CrossTenant-AuthSource: BN3PR03MB2401.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 17:15:13.0203
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR03MB3316
X-OriginatorOrg: citrix.com

On 18/06/2021 17:00, Jan Beulich wrote:
> At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
> address range") documentation correctly stated that the range was
> completely fixed. For Fam17 and newer, it lives at the top of physical
> address space, though.

 From "Open-Source Register Reference for AMD Family 17h Processors (PUB)":
https://developer.amd.com/wp-content/resources/56255_3_03.PDF

"The processor defines a reserved memory address region starting at
FFFD_0000_0000h and extending up to FFFF_FFFF_FFFFh."

It still doesn't say that it's at the top of the physical address space,
although I understand that's how it's now implemented. The official
document doesn't confirm that the range will move along with future
physical address space extensions.

> To correctly determine the top of physical address space, we need to
> account for their physical address reduction, hence the calculation of
> paddr_bits also gets adjusted.
> 
> While for paddr_bits < 40 the HT range is completely hidden, there's no
> need to suppress the range insertion in that case: It'll just have no
> real meaning.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -349,13 +349,17 @@ void __init early_cpu_init(void)
>   
>   	eax = cpuid_eax(0x80000000);
>   	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
> +		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
>   		eax = cpuid_eax(0x80000008);
> -		paddr_bits = eax & 0xff;
> +

I understand Andrew has some concerns regarding changing paddr_bits, but
a comment explaining what's located at 0x8000001f:ebx[11:6] and why
we're doing this might be useful.

> +		paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
>   		if (paddr_bits > PADDR_BITS)
>   			paddr_bits = PADDR_BITS;
> +
>   		vaddr_bits = (eax >> 8) & 0xff;
>   		if (vaddr_bits > VADDR_BITS)
>   			vaddr_bits = VADDR_BITS;
> +
>   		hap_paddr_bits = ((eax >> 16) & 0xff) ?: paddr_bits;
>   		if (hap_paddr_bits > PADDR_BITS)
>   			hap_paddr_bits = PADDR_BITS;
> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -524,8 +524,11 @@ int __init dom0_setup_permissions(struct
>                                            MSI_ADDR_DEST_ID_MASK));
>       /* HyperTransport range. */
>       if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
> -        rc |= iomem_deny_access(d, paddr_to_pfn(0xfdULL << 32),
> -                                paddr_to_pfn((1ULL << 40) - 1));
> +    {
> +        mfn = paddr_to_pfn(1UL <<
> +                           (boot_cpu_data.x86 < 0x17 ? 40 : paddr_bits));

That doesn't really follow what Andrew gave us, namely:

1) On parts with <40 address bits, it's fully hidden from software
2) Before Fam17h, it was always 12G just below 1T, even if there was more RAM above this location
3) On Fam17h and later, it is variable based on SME, and is either just below 2^48 (no encryption) or 2^43 (encryption)

Do we need (1) to be coded here as well?

Igor   


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 17:25:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 17:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144939.266713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luIFB-0001DA-39; Fri, 18 Jun 2021 17:25:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144939.266713; Fri, 18 Jun 2021 17:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luIFB-0001D3-0C; Fri, 18 Jun 2021 17:25:25 +0000
Received: by outflank-mailman (input) for mailman id 144939;
 Fri, 18 Jun 2021 17:25:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ssXN=LM=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1luIF9-0001Cx-Ua
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 17:25:24 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2415bc1d-5d72-43d9-a2eb-b4bb224ca01b;
 Fri, 18 Jun 2021 17:25:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2415bc1d-5d72-43d9-a2eb-b4bb224ca01b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624037122;
  h=subject:from:to:cc:references:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=n0FAyM7R+rLorLB6FP2+ZWWPUiaY/7Ucfuo58u8bK7w=;
  b=Jezv9xWSgkib2FFr6IDDNEFnQO+G9Y4JXUDtGFsMx0XF4UVHvJygD5BU
   sELbJS3sZfwwrkV6QJWme8IIs6yw7zsK/E6sbTx52O8eQzryhnA3Tu710
   I/TBW+1GuBiC60dhRcLao5IK8RX2YKfGZJzu6OCFiWjIPIGPGrl73y15I
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46848126
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46848126"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HCIo83evMgyIa6x7ZXj9kzN0NEVX60dwF8732j6mK3QbEhhlRx4NJKzHMgO1kdhtSj7VjcugfADc/5YOCFIaO5XJcy9sssESQNPqis15V9K40ngPvoRcIgInh3I3zEqvQlHrQvaDJK2Hu8D4Pdn/EUnXtwRum+0RwRB4an4JdAa2NkmerCf4PgxEqsw/CHA9lB8x/Zvj5hOjr1JDC1usLf0aATyetMTmxgDsrAwVTt/o3woEHgIODq/YI6nLs1zGR8TyPxJylunFTa9aYw66889kea4BepA+7vFHIpsnCkNhcHXdsxA1FOAEKskRqtpKjjzl5LMa3WZU4sMZEMIxtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=89VJQLXCF12yB6848z6j6LbU4nssVh1otDcC+GyiXQM=;
 b=ecwRWGhv3vWVkDwRRa3ZIllx1JPcpkOt74hNyc1kp3CmR47kW5dt21PYJ0c+7sXOP6p+HL6dw8g9u1jNxtpqrGCMX4ComX2MmMYpPMqJ0pvWe6VDGlmOnaccu2XdrUK+BIUF2bFHPCSHXKYsYGd4aFj0sMJBTcU3AmmMk5Fwx3O0grBDYUfp0T9egQsUUodNUkPjJwdv1clquVCwmjCmmsJup8OAe0cEV9H1usANw7skW6xS9bILQZnhGWhN5XRjud4ruXCuPv12RJ6nBk0XYTKGNhDPGLflA41h5vpH/PNUFct6FCl/2Zqk+ZXWPuOyrBBhGhnUjPvlLmQs+rZhnQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=89VJQLXCF12yB6848z6j6LbU4nssVh1otDcC+GyiXQM=;
 b=mQJpnoMfe3zECdFEfNkzIhBrerzBTWwO0QiG5bYQSccGzY7xhrEkLOrCkN8yDjNbWLPjkf70EPYkuLu5+c0mDZ6MH9e17QfwCOTh6EmbH1gzo1qkiCyJ7HSb+xDID+zr1Hi5Ig/Pfif8xroH7rVf1ZeNuhyTjOwpbpVN6afa+po=
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
 <1eb16baa-6b1b-3b18-c712-4459bd83e1aa@citrix.com>
Message-ID: <d057818f-4542-7305-ad95-61996385968e@citrix.com>
Date: Fri, 18 Jun 2021 18:25:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
In-Reply-To: <1eb16baa-6b1b-3b18-c712-4459bd83e1aa@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P265CA0013.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::18) To BN3PR03MB2401.namprd03.prod.outlook.com
 (2a01:111:e400:7bbc::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5c60c010-3fad-4058-fdd4-08d9327e0b2d
X-MS-TrafficTypeDiagnostic: BN3PR03MB2212:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BN3PR03MB22124EF6CA5CCEFFDE612482E40D9@BN3PR03MB2212.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c60c010-3fad-4058-fdd4-08d9327e0b2d
X-MS-Exchange-CrossTenant-AuthSource: BN3PR03MB2401.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 17:25:18.6696
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN3PR03MB2212
X-OriginatorOrg: citrix.com

On 18/06/2021 18:15, Igor Druzhinin wrote:
> On 18/06/2021 17:00, Jan Beulich wrote:
>> At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
>> address range") documentation correctly stated that the range was
>> completely fixed. For Fam17 and newer, it lives at the top of physical
>> address space, though.
> 
>  From "Open-Source Register Reference for AMD Family 17h Processors (PUB)":
> https://developer.amd.com/wp-content/resources/56255_3_03.PDF
> 
> "The processor defines a reserved memory address region starting at
> FFFD_0000_0000h and extending up to FFFF_FFFF_FFFFh."
> 
> It still doesn't say that it's at the top of the physical address space,
> although I understand that's how it's now implemented. The official
> document doesn't confirm that the range will move along with future
> physical address space extensions.
> 
>> To correctly determine the top of physical address space, we need to
>> account for their physical address reduction, hence the calculation of
>> paddr_bits also gets adjusted.
>>
>> While for paddr_bits < 40 the HT range is completely hidden, there's no
>> need to suppress the range insertion in that case: It'll just have no
>> real meaning.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -349,13 +349,17 @@ void __init early_cpu_init(void)
>>       eax = cpuid_eax(0x80000000);
>>       if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
>> +        ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
>>           eax = cpuid_eax(0x80000008);
>> -        paddr_bits = eax & 0xff;
>> +
> 
> I understand Andrew has some concerns regarding changing paddr_bits, but
> a comment explaining what's located at 0x8000001f:ebx[11:6] and why
> we're doing this might be useful.
> 
>> +        paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
>>           if (paddr_bits > PADDR_BITS)
>>               paddr_bits = PADDR_BITS;
>> +
>>           vaddr_bits = (eax >> 8) & 0xff;
>>           if (vaddr_bits > VADDR_BITS)
>>               vaddr_bits = VADDR_BITS;
>> +
>>           hap_paddr_bits = ((eax >> 16) & 0xff) ?: paddr_bits;
>>           if (hap_paddr_bits > PADDR_BITS)
>>               hap_paddr_bits = PADDR_BITS;
>> --- a/xen/arch/x86/dom0_build.c
>> +++ b/xen/arch/x86/dom0_build.c
>> @@ -524,8 +524,11 @@ int __init dom0_setup_permissions(struct
>>                                            MSI_ADDR_DEST_ID_MASK));
>>       /* HyperTransport range. */
>>       if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
>> -        rc |= iomem_deny_access(d, paddr_to_pfn(0xfdULL << 32),
>> -                                paddr_to_pfn((1ULL << 40) - 1));
>> +    {
>> +        mfn = paddr_to_pfn(1UL <<
>> +                           (boot_cpu_data.x86 < 0x17 ? 40 : paddr_bits));
> 
> That doesn't really follow what Andrew gave us, namely:
> 
> 1) On parts with <40 address bits, it's fully hidden from software
> 2) Before Fam17h, it was always 12G just below 1T, even if there was more RAM above this location
> 3) On Fam17h and later, it is variable based on SME, and is either just below 2^48 (no encryption) or 2^43 (encryption)
> 
> Do we need (1) to be coded here as well?

Ignore this last paragraph - I missed your statement in the commit description.

Igor


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 18:12:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 18:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144945.266723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luIya-000628-CN; Fri, 18 Jun 2021 18:12:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144945.266723; Fri, 18 Jun 2021 18:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luIya-000621-9M; Fri, 18 Jun 2021 18:12:20 +0000
Received: by outflank-mailman (input) for mailman id 144945;
 Fri, 18 Jun 2021 18:12:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luIyZ-00061v-L8
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 18:12:19 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efd74c3c-c37e-4fdf-af98-c206d14e8993;
 Fri, 18 Jun 2021 18:12:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efd74c3c-c37e-4fdf-af98-c206d14e8993
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624039937;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=xOlEkVzRwxXh1MTcfFjDKTK8VFesmToA0glAgk6iTn0=;
  b=NPRzf8Ej51yxaVN3WH4wGj+bhzYC7KDf+VPyxS9Ft0oBbjCUFPZv+/PV
   8sm1wkV2+fHCBQyzdFFcZYIdyPN6EX8grbLQiCGgbIh8lIXAHUpud1XmQ
   mUvhqxFR6BkG4UCWgIyUba1t+fP+kHfq25dZbj607+nw2s9pESurMIKrT
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46851490
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46851490"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SD8DBc4FFPzxN2crfT3/YSEzhSQKYkOYxLfE4bBbJfDxh4nT7JnVLVKZBWmy3xbcfZ8AD+CU8f7B52D+mEI6zHrZFE5A4wATagergQ1bnm5t2Nnabe1e3tH5ZMGmQ31EU97BTo/TpSyWX0HPeVMaM45f/wqdX9uHi+kSscyVXlyFjmbopJI6idI8DR5C+uV6nH+BaQArvDjwbRdPYs0UAYO1SQx4ce/h53zOyim0kLjyzj9Tx9aBU9yquuZI7RKysxXns6k3X7jimF5wZ5C++GygEf6oUUzhW46fGVgVIkTir0PewO8jL0Jza2USMkK9253rjckyyXawxaI36RQMPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xOlEkVzRwxXh1MTcfFjDKTK8VFesmToA0glAgk6iTn0=;
 b=Iu+6q8mRM1SFADjFfIDSy6yIbPUJQeJSKeQwS2Csskjd0dyabTNq3ZXuzdsqt0svcbC1wjuUVXeeoVALdhwHfsBdlziBLoJ+TPYdEY6IuGIqdXKn3UYRlYfIdlRhk54sXEARVIvWDxKj2RfD2482ieiAPQRPTKIgASJk1pxgQgZWZ71lkyuy9lLx/xZ/ZHuX8nC7QLBeuH9fOFUQjTMnoVPmcqN1NGrN1nPgak3+T7AL4BfBFt2ibLo0af93/eA1ke4a1DNJvV8+tJZUy7S2XBnCLwMCdIMCBLE+PRcPcE8QXsQAgOoGQ/oMIw0mgOH0mnITeQ8R4A2yuvck8CVsZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xOlEkVzRwxXh1MTcfFjDKTK8VFesmToA0glAgk6iTn0=;
 b=OhVIBYbs9PTGczZC8pVxL7BCwgIFfwJe4lfYmx6HFdMgRPsSkrO36KENOHt0Pzlxeq1S+SkBUrNPQEjppUHFNDKLBMdj8gZKHkevVr9XoWsBgj+0wZUKUWWMkQBLo63ewszV6eiSX9vewtkYymGH6AR1lH7Z91wRrQ9FRaruWc8=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Thread-Topic: [RESEND PATCH 08/12] golang/xenlight: add functional options to
 configure Context
Thread-Index: AQHXUNzBxrtRVA/FskCGl/j6ZWY3UKsZ/xcAgAAG0ICAABOXAIAAC7AAgAAUAoA=
Date: Fri, 18 Jun 2021 18:12:11 +0000
Message-ID: <EF3300BF-5BE2-4C35-B196-D94224619629@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
 <EF069373-26FC-4151-9CD9-6B8C48D9AEB0@citrix.com>
 <YMy29arbPMnPI/+W@FED-nrosbr-BE.crux.rad.ainfosec.com>
 <8727719E-9548-40CF-A186-14E2ECA3801D@citrix.com>
 <YMzRMlaHapLn7msf@FED-nrosbr-BE.crux.rad.ainfosec.com>
In-Reply-To: <YMzRMlaHapLn7msf@FED-nrosbr-BE.crux.rad.ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ab484d54-6136-4489-e95a-08d93284983c
x-ms-traffictypediagnostic: PH0PR03MB6349:
x-microsoft-antispam-prvs: <PH0PR03MB6349F80C5174766FE163A06C990D9@PH0PR03MB6349.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <2EE1C0FB248FFF43A459D385FF3750C1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ab484d54-6136-4489-e95a-08d93284983c
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 18:12:11.9940
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6349
X-OriginatorOrg: citrix.com

> On Jun 18, 2021, at 6:00 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> On Fri, Jun 18, 2021 at 04:18:44PM +0000, George Dunlap wrote:
>> 
>> 
>>> On Jun 18, 2021, at 4:08 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>> 
>>> On Fri, Jun 18, 2021 at 02:44:15PM +0000, George Dunlap wrote:
>>>> 
>>>> 
>>>>> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>>>> 
>>>>> Add a ContextOption type to support functional options in NewContext.
>>>>> Then, add a variadic ContextOption parameter to NewContext, which allows
>>>>> callers to specify 0 or more configuration options.
>>>>> 
>>>>> For now, just add the WithLogLevel option so that callers can set the
>>>>> log level of the Context's xentoollog_logger. Future configuration
>>>>> options can be created by adding an appropriate field to the
>>>>> contextOptions struct and creating a With<OptionName> function to return
>>>>> a ContextOption
>>>>> 
>>>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>>>>> ---
>>>>> tools/golang/xenlight/xenlight.go | 44 ++++++++++++++++++++++++++++++++--
>>>>> 1 file changed, 42 insertions(+), 2 deletions(-)
>>>>> 
>>>>> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
>>>>> index f68d7b6e97..65f93abe32 100644
>>>>> --- a/tools/golang/xenlight/xenlight.go
>>>>> +++ b/tools/golang/xenlight/xenlight.go
>>>>> @@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
>>>>> }
>>>>> 
>>>>> // NewContext returns a new Context.
>>>>> -func NewContext() (ctx *Context, err error) {
>>>>> +func NewContext(opts ...ContextOption) (ctx *Context, err error) {
>>>>> 	ctx = &Context{}
>>>>> 
>>>>> 	defer func() {
>>>>> @@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
>>>>> 		}
>>>>> 	}()
>>>>> 
>>>>> +	// Set the default context options. These fields may
>>>>> +	// be modified by the provided opts.
>>>>> +	copts := &contextOptions{
>>>>> +		logLevel: LogLevelError,
>>>>> +	}
>>>>> +
>>>>> +	for _, opt := range opts {
>>>>> +		opt.apply(copts)
>>>>> +	}
>>>>> +
>>>>> 	// Create a logger
>>>>> -	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
>>>>> +	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
>>>>> +		C.xentoollog_level(copts.logLevel), 0)
>>>>> 
>>>>> 	// Allocate a context
>>>>> 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
>>>>> @@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
>>>>> 	return nil
>>>>> }
>>>>> 
>>>>> +type contextOptions struct {
>>>>> +	logLevel LogLevel
>>>>> +}
>>>>> +
>>>>> +// ContextOption is used to configure options for a Context.
>>>>> +type ContextOption interface {
>>>>> +	apply(*contextOptions)
>>>>> +}
>>>>> +
>>>>> +type funcContextOption struct {
>>>>> +	f func(*contextOptions)
>>>>> +}
>>>>> +
>>>>> +func (fco *funcContextOption) apply(c *contextOptions) {
>>>>> +	fco.f(c)
>>>>> +}
>>>> 
>>>> Why all this convolution with interfaces and such, rather than just defining ContextOption as a function pointer?  Is it just to keep contextOptions out of the documentation page?
>>> 
>>> Part of the motivation for using functional options is to abstract the
>>> "options" struct, yes. This allows internal defaults to be applied more
>>> easily -- if you require e.g. a ContextOptions struct to be passed by
>>> the caller, how do you know if they intended to override a default, or
>>> if they just didn't set the field? Additionally, using the ContextOption
>>> as an interface allows variadic arguments, which are just convenient for
>>> API users -- the same NewContext function can be used whether you need
>>> to pass 3 options or 0.
>>> 
>>> The reason we use ContextOption as an interface, rather than function
>>> pointer of sorts is for flexibility in the signatures of ContextOption
>>> implementations. E.g., we could have
>>> 
>>> func WithLogLevel(lvl LogLevel) ContextOption
>>> func WithLogContext(s string) ContextOption
>>> func WithFooAndBar(s string, n int) ContextOption
>>> 
>>> See [1] for more background on this pattern.
>>> 
>>> Thanks,
>>> NR
>>> 
>>> [1] https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
>> 
>> Yes, I frequently use a pattern like the one described in that blog post myself. But that blog post doesn't use interfaces — the final slide actually has the "option function" type as an open-coded function pointer type.
>> 
>> So my question was, why not do something like this:
>> 
>> type ContextOption func(*contextOptions) error
>> 
>> func WithLogLevel(level LogLevel) ContextOption {
>>  return func(co *contextOptions) {
>>    co.logLevel = level
>>  }
>> }
>> 
>> ATM the only advantage I can see of defining ContextOption as an interface rather than as a function pointer is that the godoc for ContextOption would look like:
>> 
>> type ContextOption interface {
>>   // contains filtered or unexported fields
>> }
>> 
>> Rather than
>> 
>> type ContextOption func(*contextOptions) error
>> 
>> Which shows you the name of the unexported field.
>> 
>> Is there another reason I missed?
> 
> Technically it does allow more flexibility in implementing
> ContextOption, e.g. you could do...
> 
> func (lvl LogLevel) apply(co *contextOptions) { co.logLevel = lvl }
> 
> ...and then pass a LogLevel directly as a ContextOption. But generally
> everyone implements these things as funcs.
> 
> I will admit that when it comes to my choice of using the interface
> version instead of function pointers, I am just more familiar with the
> former and encounter it more often in other Go packages I use.

OK.  It seems a bit weird to me, but that's not really a good reason to block it. :-) I just wanted to make sure I understood why it was being chosen.

Acked-by: George Dunlap <george.dunlap@citrix.com>
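The interface-based functional-options pattern debated above can be boiled down to a standalone sketch. The names (contextOptions, ContextOption, funcContextOption, WithLogLevel) mirror the patch, but this is an illustrative toy only, with no cgo or libxl, not the actual xenlight package:

```go
package main

import "fmt"

type LogLevel int

const (
	LogLevelError LogLevel = iota
	LogLevelDebug
)

// contextOptions is unexported, so its fields never appear in godoc.
type contextOptions struct {
	logLevel LogLevel
}

// ContextOption is used to configure options for a Context.
type ContextOption interface {
	apply(*contextOptions)
}

// funcContextOption adapts a plain function to the ContextOption interface.
type funcContextOption struct {
	f func(*contextOptions)
}

func (fco *funcContextOption) apply(c *contextOptions) { fco.f(c) }

// WithLogLevel returns a ContextOption that overrides the default log level.
func WithLogLevel(lvl LogLevel) ContextOption {
	return &funcContextOption{func(c *contextOptions) { c.logLevel = lvl }}
}

// NewContext applies defaults first, then lets each opt override them,
// so "caller never set the field" and "caller chose the default value"
// are indistinguishable only when it does not matter.
func NewContext(opts ...ContextOption) *contextOptions {
	copts := &contextOptions{logLevel: LogLevelError}
	for _, opt := range opts {
		opt.apply(copts)
	}
	return copts
}

func main() {
	fmt.Println(NewContext().logLevel)                            // prints 0 (LogLevelError default)
	fmt.Println(NewContext(WithLogLevel(LogLevelDebug)).logLevel) // prints 1 (override applied)
}
```

The same NewContext call site works with zero or many options, which is the variadic convenience both sides of the thread agree on; the interface-vs-function-type choice only changes what godoc shows.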


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 18:29:08 2021
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 12/12] golang/xenlight: add NotifyDomainDeath
 method to Context
Date: Fri, 18 Jun 2021 18:28:47 +0000
Message-ID: <56DEEBE0-88E3-4E00-A998-30FF034BCB73@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <e415b0e26954cfc6689fbd3ba7d79fe664f3bb50.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <e415b0e26954cfc6689fbd3ba7d79fe664f3bb50.1621887506.git.rosbrookn@ainfosec.com>

> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Add a helper function to wait for domain death events, and then write
> the events to a provided channel. This handles the enabling/disabling of
> the event type, freeing the event, and converting it to a Go type. The
> caller can then handle the event however they need to. This function
> will run until a provided context.Context is cancelled.
> 
> NotifyDomainDeath spawns two goroutines that return when the
> context.Context is done. The first will make sure that the domain death
> event is disabled, and that the corresponding event queue is cleared.
> The second calls libxl_event_wait, and writes the event to the provided
> channel.
> 
> With this, callers should be able to manage a full domain life cycle.
> Add to the comment of DomainCreateNew so that package uses know they
> should use this method in conjunction with DomainCreateNew.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> ---
> tools/golang/xenlight/xenlight.go | 83 ++++++++++++++++++++++++++++++++-
> 1 file changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> index 6fb22665cc..8406883433 100644
> --- a/tools/golang/xenlight/xenlight.go
> +++ b/tools/golang/xenlight/xenlight.go
> @@ -53,6 +53,7 @@ import "C"
>  */
> 
> import (
> +	"context"
> 	"fmt"
> 	"os"
> 	"os/signal"
> @@ -1340,7 +1341,9 @@ func (ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error
> 	return nil
> }
> 
> -// DomainCreateNew creates a new domain.
> +// DomainCreateNew creates a new domain. Callers of DomainCreateNew are
> +// responsible for handling the death of the resulting domain. This should be
> +// done using NotifyDomainDeath.
> func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
> 	var cdomid C.uint32_t
> 	var cconfig C.libxl_domain_config
> @@ -1358,6 +1361,84 @@ func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
> 	return Domid(cdomid), nil
> }
> 
> +// NotifyDomainDeath registers an event handler for domain death events for a
> +// given domnid, and writes events received to ec. NotifyDomainDeath returns an
> +// error if it cannot register the event handler, but other errors encountered
> +// are just logged. The goroutine spawned by calling NotifyDomainDeath runs
> +// until the provided context.Context's Done channel is closed.
> +func (ctx *Context) NotifyDomainDeath(c context.Context, domid Domid, ec chan<- Event) error {
> +	var deathw *C.libxl_evgen_domain_death
> +
> +	ret := C.libxl_evenable_domain_death(ctx.ctx, C.uint32_t(domid), 0, &deathw)
> +	if ret != 0 {
> +		return Error(ret)
> +	}
> +
> +	// Spawn a goroutine that is responsible for cleaning up when the
> +	// passed context.Context's Done channel is closed.
> +	go func() {
> +		<-c.Done()
> +
> +		ctx.logd("cleaning up domain death event handler for domain %d", domid)
> +
> +		// Disable the event generation.
> +		C.libxl_evdisable_domain_death(ctx.ctx, deathw)
> +
> +		// Make sure any events that were generated get cleaned up so they
> +		// do not linger in the libxl event queue.
> +		var evc *C.libxl_event
> +		for {
> +			ret := C.libxl_event_check(ctx.ctx, &evc, C.LIBXL_EVENTMASK_ALL, nil, nil)
> +			if ret != 0 {
> +				return
> +			}
> +			C.libxl_event_free(ctx.ctx, evc)

I have to admit, I don't really understand how the libxl event stuff is supposed to work.  But it looks like this will drain all events of any type, for any domain, associated with this context?

So if you had two domains, and called NotifyDomainDeath() on both with different contexts, and you closed the one context, you might miss events from the other context?

Or, suppose you did this:
 * ctx.NotifyDomainDeath(ctx1, dom1, ec1)
 * ctx.NotifyDiskEject(ctx2, dom1, ec2)
 * ctx1CancelFunc()

Wouldn't there be a risk that the disk eject message would get lost?

Ian, any suggestions for the right way to use these functions in this scenario?

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 19:18:41 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162885-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162885: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 19:18:34 +0000

flight 162885 xen-unstable real [real]
flight 162893 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162885/
http://logs.test-lab.xenproject.org/osstest/logs/162893/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel 14 guest-start/redhat.repeat fail pass in 162893-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   10 days
Failing since        162556  2021-06-08 22:39:08 Z    9 days   15 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 19:32:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 19:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.144965.266760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luKDc-0006Kv-3A; Fri, 18 Jun 2021 19:31:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 144965.266760; Fri, 18 Jun 2021 19:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luKDc-0006Ko-08; Fri, 18 Jun 2021 19:31:56 +0000
Received: by outflank-mailman (input) for mailman id 144965;
 Fri, 18 Jun 2021 19:31:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfjP=LM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1luKDa-0006Ki-12
 for xen-devel@lists.xenproject.org; Fri, 18 Jun 2021 19:31:54 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e71707e5-85e2-46b2-bc47-65fd885e7a37;
 Fri, 18 Jun 2021 19:31:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e71707e5-85e2-46b2-bc47-65fd885e7a37
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624044712;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=x0fOnCH0yAQs/bd9+Sl21d26qdj9CFx97Mu0niP1xfE=;
  b=ZM39YF0frOXWDBnwstrX2G5B/PEmcW22EnF5qI+3gBrkvSkYuTb6C3wy
   n7jtRPBwtfM/cUHAXmmZupf1mL8ssm64y+E/jSLKGCIipKBucZkEMbXnX
   TBVthRnQp5R9Y7Q/I2TFefg34Q1S0thu83+v76/oi74Zhc0RC6jxbNBaG
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46215958
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,284,1616472000"; 
   d="scan'208";a="46215958"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z2mUstABdH14gWlOy93+fQBCkAGI8DJ89Uuaj62oYa/9O5JyTj76CkJOdGU5OZHEyrLCtMiunPcwGgp8U13L5DERc6RcgPsfExFFVeiNbLoMW/6x5zJgVkpWOFp95+Dvty0/WKYqpDMxHFgC80ZMP7yk8nKw8qjWstsbx34OF0jeDnhHTRTu2UXrBRbeJYlq1taX3xWW9AkewhPEQ4NEujmwMzXynUFg2zm72IGX5W93o1r3tgeoiRTRrm1yUwNR2nWaz003QwGXSIyUeMiy1m7oWaoFO75djgqMQKHoYZzKxUmT2G4s3bKuuQIMH6qh1lxKHWX96ihyTvj0u1IfTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x0fOnCH0yAQs/bd9+Sl21d26qdj9CFx97Mu0niP1xfE=;
 b=TNzrltIqP0cZz3N4kFdcdhHyGUde4BDyxOi1yJjzgqC7PdT/UpL/l+Ki0NcaCM8ls2/w8WiDsPfRoGrDYaY2G0LU85IafRkbsgmkYxEXOP881dLb4quioBWRWsy1/J9SFg7lqjaWGM2jZMhQnD4QrIrP35rYG8aoIFdmv/TAYMQVl/1CGLh8emhjE+WQ8ILPhAWcGoUygR+scBm1p0yMrDGw+Saqygz06SAGAVf+nHKNet+Zo1Et4OtK25ECfzfU/SvrrDqB4Ojsg7RrdhUq7wpvD+flE64ylBNwPgSeAQd7edE+uW2hN3ANi4ZJizGRosz6LL62Iiz4o5QlGHacQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x0fOnCH0yAQs/bd9+Sl21d26qdj9CFx97Mu0niP1xfE=;
 b=XUNRLn9E4OY/GJTJmw1tL2I2/tHIYSYIKQNvtWWa/d8bDb7U3kdiA6o8a5OVcpnsvkjAqYcSs2n/M7J7pbyqT7nqjL1mnqCjcWwJYMPARkPBK/QNWS8wlMUr6czfDbXmjlnSkit2v7Zlu8WJd4YYxkBTyI5jumh2qbLVDH2B9K4=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 12/12] golang/xenlight: add NotifyDomainDeath
 method to Context
Thread-Topic: [RESEND PATCH 12/12] golang/xenlight: add NotifyDomainDeath
 method to Context
Thread-Index: AQHXUNy1vChg4LMpNU6QjEPyRTou1KsaPdIAgAARmAA=
Date: Fri, 18 Jun 2021 19:31:46 +0000
Message-ID: <8D6E3510-754C-450C-99F6-63BE9FA6F748@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <e415b0e26954cfc6689fbd3ba7d79fe664f3bb50.1621887506.git.rosbrookn@ainfosec.com>
 <56DEEBE0-88E3-4E00-A998-30FF034BCB73@citrix.com>
In-Reply-To: <56DEEBE0-88E3-4E00-A998-30FF034BCB73@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 08fb4591-f49e-461e-e2d3-08d9328fb5d8
x-ms-traffictypediagnostic: PH0PR03MB5832:
x-microsoft-antispam-prvs: <PH0PR03MB5832380F6C8D1A1EEFEA4BC0990D9@PH0PR03MB5832.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <907EBCE509A36E4D803E8D23E0EA59E0@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08fb4591-f49e-461e-e2d3-08d9328fb5d8
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2021 19:31:46.0694
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: A29vX2cY1uWOKZ01Boz+XxSahGN7U0DHwZ3t4qb8hylW54DYbHVBsoW7fv9c3uQAZSK2sps8iDCjVdEkFXqTCTVDpv9QEqfTBtnm9Led/j0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5832
X-OriginatorOrg: citrix.com

DQoNCj4gT24gSnVuIDE4LCAyMDIxLCBhdCA3OjI4IFBNLCBHZW9yZ2UgRHVubGFwIDxnZW9yZ2Uu
ZHVubGFwQGNpdHJpeC5jb20+IHdyb3RlOg0KPiANCj4gDQo+IA0KPj4gT24gTWF5IDI0LCAyMDIx
LCBhdCA5OjM2IFBNLCBOaWNrIFJvc2Jyb29rIDxyb3Nicm9va25AZ21haWwuY29tPiB3cm90ZToN
Cj4+IA0KPj4gQWRkIGEgaGVscGVyIGZ1bmN0aW9uIHRvIHdhaXQgZm9yIGRvbWFpbiBkZWF0aCBl
dmVudHMsIGFuZCB0aGVuIHdyaXRlDQo+PiB0aGUgZXZlbnRzIHRvIGEgcHJvdmlkZWQgY2hhbm5l
bC4gVGhpcyBoYW5kbGVzIHRoZSBlbmFibGluZy9kaXNhYmxpbmcgb2YNCj4+IHRoZSBldmVudCB0
eXBlLCBmcmVlaW5nIHRoZSBldmVudCwgYW5kIGNvbnZlcnRpbmcgaXQgdG8gYSBHbyB0eXBlLiBU
aGUNCj4+IGNhbGxlciBjYW4gdGhlbiBoYW5kbGUgdGhlIGV2ZW50IGhvd2V2ZXIgdGhleSBuZWVk
IHRvLiBUaGlzIGZ1bmN0aW9uDQo+PiB3aWxsIHJ1biB1bnRpbCBhIHByb3ZpZGVkIGNvbnRleHQu
Q29udGV4dCBpcyBjYW5jZWxsZWQuDQo+PiANCj4+IE5vdGlmeURvbWFpbkRlYXRoIHNwYXducyB0
d28gZ29yb3V0aW5lcyB0aGF0IHJldHVybiB3aGVuIHRoZQ0KPj4gY29udGV4dC5Db250ZXh0IGlz
IGRvbmUuIFRoZSBmaXJzdCB3aWxsIG1ha2Ugc3VyZSB0aGF0IHRoZSBkb21haW4gZGVhdGgNCj4+
IGV2ZW50IGlzIGRpc2FibGVkLCBhbmQgdGhhdCB0aGUgY29ycmVzcG9uZGluZyBldmVudCBxdWV1
ZSBpcyBjbGVhcmVkLg0KPj4gVGhlIHNlY29uZCBjYWxscyBsaWJ4bF9ldmVudF93YWl0LCBhbmQg
d3JpdGVzIHRoZSBldmVudCB0byB0aGUgcHJvdmlkZWQNCj4+IGNoYW5uZWwuDQo+PiANCj4+IFdp
dGggdGhpcywgY2FsbGVycyBzaG91bGQgYmUgYWJsZSB0byBtYW5hZ2UgYSBmdWxsIGRvbWFpbiBs
aWZlIGN5Y2xlLg0KPj4gQWRkIHRvIHRoZSBjb21tZW50IG9mIERvbWFpbkNyZWF0ZU5ldyBzbyB0
aGF0IHBhY2thZ2UgdXNlcyBrbm93IHRoZXkNCj4+IHNob3VsZCB1c2UgdGhpcyBtZXRob2QgaW4g
Y29uanVuY3Rpb24gd2l0aCBEb21haW5DcmVhdGVOZXcuDQo+PiANCj4+IFNpZ25lZC1vZmYtYnk6
IE5pY2sgUm9zYnJvb2sgPHJvc2Jyb29rbkBhaW5mb3NlYy5jb20+DQo+PiAtLS0NCj4+IHRvb2xz
L2dvbGFuZy94ZW5saWdodC94ZW5saWdodC5nbyB8IDgzICsrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKy0NCj4+IDEgZmlsZSBjaGFuZ2VkLCA4MiBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9u
KC0pDQo+PiANCj4+IGRpZmYgLS1naXQgYS90b29scy9nb2xhbmcveGVubGlnaHQveGVubGlnaHQu
Z28gYi90b29scy9nb2xhbmcveGVubGlnaHQveGVubGlnaHQuZ28NCj4+IGluZGV4IDZmYjIyNjY1
Y2MuLjg0MDY4ODM0MzMgMTAwNjQ0DQo+PiAtLS0gYS90b29scy9nb2xhbmcveGVubGlnaHQveGVu
bGlnaHQuZ28NCj4+ICsrKyBiL3Rvb2xzL2dvbGFuZy94ZW5saWdodC94ZW5saWdodC5nbw0KPj4g
QEAgLTUzLDYgKzUzLDcgQEAgaW1wb3J0ICJDIg0KPj4gKi8NCj4+IA0KPj4gaW1wb3J0ICgNCj4+
ICsJImNvbnRleHQiDQo+PiAJImZtdCINCj4+IAkib3MiDQo+PiAJIm9zL3NpZ25hbCINCj4+IEBA
IC0xMzQwLDcgKzEzNDEsOSBAQCBmdW5jIChjdHggKkNvbnRleHQpIERldmljZVVzYmRldlJlbW92
ZShkb21pZCBEb21pZCwgdXNiZGV2ICpEZXZpY2VVc2JkZXYpIGVycm9yDQo+PiAJcmV0dXJuIG5p
bA0KPj4gfQ0KPj4gDQo+PiAtLy8gRG9tYWluQ3JlYXRlTmV3IGNyZWF0ZXMgYSBuZXcgZG9tYWlu
Lg0KPj4gKy8vIERvbWFpbkNyZWF0ZU5ldyBjcmVhdGVzIGEgbmV3IGRvbWFpbi4gQ2FsbGVycyBv
ZiBEb21haW5DcmVhdGVOZXcgYXJlDQo+PiArLy8gcmVzcG9uc2libGUgZm9yIGhhbmRsaW5nIHRo
ZSBkZWF0aCBvZiB0aGUgcmVzdWx0aW5nIGRvbWFpbi4gVGhpcyBzaG91bGQgYmUNCj4+ICsvLyBk
b25lIHVzaW5nIE5vdGlmeURvbWFpbkRlYXRoLg0KPj4gZnVuYyAoY3R4ICpDb250ZXh0KSBEb21h
aW5DcmVhdGVOZXcoY29uZmlnICpEb21haW5Db25maWcpIChEb21pZCwgZXJyb3IpIHsNCj4+IAl2
YXIgY2RvbWlkIEMudWludDMyX3QNCj4+IAl2YXIgY2NvbmZpZyBDLmxpYnhsX2RvbWFpbl9jb25m
aWcNCj4+IEBAIC0xMzU4LDYgKzEzNjEsODQgQEAgZnVuYyAoY3R4ICpDb250ZXh0KSBEb21haW5D
cmVhdGVOZXcoY29uZmlnICpEb21haW5Db25maWcpIChEb21pZCwgZXJyb3IpIHsNCj4+IAlyZXR1
cm4gRG9taWQoY2RvbWlkKSwgbmlsDQo+PiB9DQo+PiANCj4+ICsvLyBOb3RpZnlEb21haW5EZWF0
aCByZWdpc3RlcnMgYW4gZXZlbnQgaGFuZGxlciBmb3IgZG9tYWluIGRlYXRoIGV2ZW50cyBmb3Ig
>> +// given domnid, and writes events received to ec. NotifyDomainDeath returns an
>> +// error if it cannot register the event handler, but other errors encountered
>> +// are just logged. The goroutine spawned by calling NotifyDomainDeath runs
>> +// until the provided context.Context's Done channel is closed.
>> +func (ctx *Context) NotifyDomainDeath(c context.Context, domid Domid, ec chan<- Event) error {
>> +	var deathw *C.libxl_evgen_domain_death
>> +
>> +	ret := C.libxl_evenable_domain_death(ctx.ctx, C.uint32_t(domid), 0, &deathw)
>> +	if ret != 0 {
>> +		return Error(ret)
>> +	}
>> +
>> +	// Spawn a goroutine that is responsible for cleaning up when the
>> +	// passed context.Context's Done channel is closed.
>> +	go func() {
>> +		<-c.Done()
>> +
>> +		ctx.logd("cleaning up domain death event handler for domain %d", domid)
>> +
>> +		// Disable the event generation.
>> +		C.libxl_evdisable_domain_death(ctx.ctx, deathw)
>> +
>> +		// Make sure any events that were generated get cleaned up so they
>> +		// do not linger in the libxl event queue.
>> +		var evc *C.libxl_event
>> +		for {
>> +			ret := C.libxl_event_check(ctx.ctx, &evc, C.LIBXL_EVENTMASK_ALL, nil, nil)
>> +			if ret != 0 {
>> +				return
>> +			}
>> +			C.libxl_event_free(ctx.ctx, evc)
> 
> I have to admit, I don’t really understand how the libxl event stuff is supposed to work.  But it looks like this will drain all events of any type, for any domain, associated with this context?
> 
> So if you had two domains, and called NotifyDomainDeath() on both with different contexts, and you closed the one context, you might miss events from the other context?
> 
> Or, suppose you did this:
> * ctx.NotifyDomainDeath(ctx1, dom1, ec1)
> * ctx.NotifyDiskEject(ctx2, dom1, ec2)
> * ctx1CancelFunc()
> 
> Wouldn’t there be a risk that the disk eject message would get lost?
> 
> Ian, any suggestions for the right way to use these functions in this scenario?

It looks like one option would be to add a “predicate” function filter, to filter by type and domid.

It looks like the other option would be to try to use libxl_event_register_callbacks().  We could have the C callback pass all the events to a goroutine which would act as a dispatcher.

 -George
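The dispatcher idea can be sketched in pure Go (no cgo): a single goroutine receives every event from one source and fans each one out through per-subscriber predicates, combining both options. This is only an illustrative sketch; Event, subscriber, and dispatch are invented names, not part of the proposed binding.

```go
package main

import "fmt"

// Event is a simplified stand-in for a libxl event; Type and Domid are
// invented fields for illustration.
type Event struct {
	Domid int
	Type  string
}

// subscriber pairs a delivery channel with a predicate, mirroring the
// "predicate function filter" idea: only matching events are delivered.
type subscriber struct {
	match func(Event) bool
	ch    chan<- Event
}

// dispatch is the dispatcher goroutine: it receives every event from a
// single source (in the real binding, fed by the C callback registered
// via libxl_event_register_callbacks) and fans each one out to every
// subscriber whose predicate matches. Unmatched events are dropped.
func dispatch(events <-chan Event, subs []subscriber) {
	for ev := range events {
		for _, s := range subs {
			if s.match(ev) {
				s.ch <- ev // blocks if the subscriber's buffer is full
			}
		}
	}
	for _, s := range subs {
		close(s.ch)
	}
}

func main() {
	events := make(chan Event)
	death1 := make(chan Event, 8) // domain-death events for domid 1
	eject1 := make(chan Event, 8) // disk-eject events for domid 1

	go dispatch(events, []subscriber{
		{func(e Event) bool { return e.Domid == 1 && e.Type == "domain_death" }, death1},
		{func(e Event) bool { return e.Domid == 1 && e.Type == "disk_eject" }, eject1},
	})

	events <- Event{Domid: 1, Type: "disk_eject"}
	events <- Event{Domid: 2, Type: "domain_death"} // no subscriber: dropped by the dispatcher
	events <- Event{Domid: 1, Type: "domain_death"}
	close(events)

	fmt.Println((<-death1).Type, (<-eject1).Type) // domain_death disk_eject
}
```

Because only the dispatcher touches the libxl event queue, cancelling one subscriber's context would unregister its predicate rather than draining events destined for other subscribers.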


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 20:30:12 2021
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
Message-ID: <ff0c9f42-f45e-e78e-35b9-c030011eed8f@apertussolutions.com>
Date: Fri, 18 Jun 2021 16:27:57 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/18/21 8:26 AM, Jan Beulich wrote:
> On 18.06.2021 01:39, Daniel P. Smith wrote:
>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>> pointers to static functions. As such this commit,
>>  * eliminates CONFIG_XSM
> 
> Following from what Andrew has said (including him mentioning your
> changing of certain Kconfig option defaults), I'm not convinced this is
> a good move. This still ought to serve as the overall XSM-yes-or-no
> setting. If internally you make said two variants match in behavior,
> that's a different thing.

Apologies that I did not express this clearly. What I was attempting to
say is that there is no logical behavior difference between "XSM no" and
"XSM yes with dummy policy". The only difference is the mechanics of how
the dummy functions get called. Specifically, via macro magic the dummy
functions are either flipped into static inline declarations that are
then included into the code where they are invoked, or the macro magic
has them ending up in the dummy.c XSM module, where they are wrapped in
macro-generated functions that are set as the functions in the dummy
xsm_ops structure. Thus it is always the same logic being invoked; the
only difference is the mechanics of how you get to that logic.
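The two mechanics can be illustrated with a small sketch (in Go, matching the bindings discussed earlier in the thread; dummyDomctl, xsmOps, and the domctl hook are invented names for illustration, not Xen's actual hooks):

```go
package main

import "fmt"

// A stand-in for one dummy XSM check; the body is the "dom0-all-powerful"
// policy. The names here are invented for illustration only.
func dummyDomctl(domid int) error {
	if domid < 0 {
		return fmt.Errorf("bad domid %d", domid)
	}
	return nil // allow
}

// Mechanic 1 ("XSM no"): the check is effectively inlined at each call
// site, i.e. invoked directly.
func domctlDirect(domid int) error { return dummyDomctl(domid) }

// Mechanic 2 ("XSM yes with dummy policy"): the same function is installed
// in an ops table of function pointers and invoked indirectly.
type xsmOps struct {
	domctl func(int) error
}

var ops = xsmOps{domctl: dummyDomctl}

func domctlViaOps(domid int) error { return ops.domctl(domid) }

func main() {
	// Either way the identical logic runs; only the call mechanics differ.
	fmt.Println(domctlDirect(7) == nil, domctlViaOps(7) == nil)   // true true
	fmt.Println(domctlDirect(-1) == nil, domctlViaOps(-1) == nil) // false false
}
```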


>>  * introduces CONFIG_XSM_EVTCHN_LABELING as replacement for enabling event channel labels
> 
> Is this mode needed as separate functionality at all? Nothing defines
> XSM_NEED_GENERIC_EVTCHN_SSID anywhere. _If_ XSM went away as a separate
> setting, then imo this one should go away as well.
> 
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -197,22 +197,33 @@ config XENOPROF
>>  
>>  	  If unsure, say Y.
>>  
>> -config XSM
>> -	bool "Xen Security Modules support"
>> -	default ARM
>> -	---help---
>> -	  Enables the security framework known as Xen Security Modules which
>> -	  allows administrators fine-grained control over a Xen domain and
>> -	  its capabilities by defining permissible interactions between domains,
>> -	  the hypervisor itself, and related resources such as memory and
>> -	  devices.
>> +menu "Xen Security Modules"
>>  
>> -	  If unsure, say N.
>> +choice
>> +	prompt "Default XSM module"
>> +	default XSM_SILO_DEFAULT if XSM_SILO && ARM
>> +	default XSM_FLASK_DEFAULT if XSM_FLASK
>> +	default XSM_SILO_DEFAULT if XSM_SILO
>> +	default XSM_DUMMY_DEFAULT
>> +	config XSM_DUMMY_DEFAULT
>> +		bool "Match non-XSM behavior"
>> +	config XSM_FLASK_DEFAULT
>> +		bool "FLux Advanced Security Kernel" if XSM_FLASK
>> +	config XSM_SILO_DEFAULT
>> +		bool "SILO" if XSM_SILO
>> +endchoice
> 
> This did live after the individual options it depends on for a reason,
> and you don't say anywhere why you need to move it up. The way you
> have it, with the default command line kconfig tool, users will be
> presented with dependent options before having chosen the settings of
> the dependency ones. That's because this tool, to a degree, moves
> linearly through the options it has parsed.

Yes, this is specifically why I moved it up. Clearly we have different
approaches to how we like to interact with configurations, which is not
a bad thing. I personally found it awkward the other way but can easily
move it back.

>> @@ -261,25 +271,12 @@ config XSM_SILO
>>  
>>  	  If unsure, say Y.
>>  
>> -choice
>> -	prompt "Default XSM implementation"
>> -	depends on XSM
>> -	default XSM_SILO_DEFAULT if XSM_SILO && ARM
>> -	default XSM_FLASK_DEFAULT if XSM_FLASK
>> -	default XSM_SILO_DEFAULT if XSM_SILO
>> -	default XSM_DUMMY_DEFAULT
>> -	config XSM_DUMMY_DEFAULT
>> -		bool "Match non-XSM behavior"
>> -	config XSM_FLASK_DEFAULT
>> -		bool "FLux Advanced Security Kernel" if XSM_FLASK
>> -	config XSM_SILO_DEFAULT
>> -		bool "SILO" if XSM_SILO
>> -endchoice
>> +endmenu
>>  
>>  config LATE_HWDOM
>>  	bool "Dedicated hardware domain"
>>  	default n
>> -	depends on XSM && X86
>> +	depends on XSM_FLASK && X86
> 
> I don't think this is a compatible change. I'm not going to exclude that
> this is how it was meant, but as it stands LATE_HWDOM right now doesn't
> really require FLASK, and could e.g. also go with SILO or dummy. If you
> _mean_ to change this, then your description needs to say so (and ideally
> it would then be split out, so - if this is actually a bug - it could
> also be backported).

Actually this is the root cause that started all of this work. If you
want to get technical, LATE_HWDOM does not rely on XSM at all. The issue
is that you cannot use it as it was originally intended: to run Xen
without a classic dom0 while still having all existing capabilities.
Specifically, the hardware domain does not have the ability to assign
the pass-through devices that it controls. This is where Flask comes in,
enabling the assignment of specific privileges to labels and the
construction of domains with those labels. In particular, it grants the
ability to do pass-through assignment to the label assigned to the
hardware domain. With the upcoming XSM-Roles patch set, these privileges
are assigned to roles and it becomes possible to assign the necessary
roles to the hardware domain.

v/r,
dps




From xen-devel-bounces@lists.xenproject.org Fri Jun 18 21:20:42 2021
To: Jan Beulich <jbeulich@suse.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas
 K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, Petre
 Pircalabu <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>,
	<xen-devel@lists.xenproject.org>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
Message-ID: <3a86c791-e508-36a4-a48c-6cdb810f81f9@citrix.com>
Date: Fri, 18 Jun 2021 22:20:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 18/06/2021 13:26, Jan Beulich wrote:
> On 18.06.2021 01:39, Daniel P. Smith wrote:
>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>> pointers to static functions. As such this commit,
>>  * eliminates CONFIG_XSM
> Following from what Andrew has said (including him mentioning your
> changing of certain Kconfig option defaults), I'm not convinced this is
> a good move. This still ought to serve as the overall XSM-yes-or-no
> setting. If internally you make said two variants match in behavior,
> that's a different thing.

I firmly disagree. There is no such thing as !XSM even in staging right now.

All over Xen, we have calls to xsm_*() functions which, even in the !XSM
case, contain a non-trivial security policy.

The fact that under the hood, XSM vs !XSM creates two very different
implementations of "the dom0-all-powerful model" is an error needing
correcting, as it contributes a massive quantity of code complexity.

This series of Daniel's takes steps to make the code match reality, and
getting rid of CONFIG_XSM is absolutely the right thing to do.  XSM is
never actually absent from a build of Xen, even if you choose CONFIG_XSM=n.


I do think that the thing we currently call XSM_DUMMY wants renaming to
indicate that it is "the dom0-all-powerful" security model, and I think
that wants doing as part of this series.  Name suggestions welcome.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 18 21:21:38 2021
Subject: Re: [PATCH 0/6] xsm: refactoring xsm hooks
To: Jan Beulich <jbeulich@suse.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Tamas
 K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, Petre
 Pircalabu <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>,
	<xen-devel@lists.xenproject.org>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
 <b921c150-84f7-3ab3-1e4a-89d00725d9da@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6ed12320-f454-2751-1a41-014eaa835762@citrix.com>
Date: Fri, 18 Jun 2021 22:21:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b921c150-84f7-3ab3-1e4a-89d00725d9da@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0043.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600::31)
 To BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f75df53-40f6-4b3a-2155-08d9329f09e2
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6391:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB6391B1670E144E2ECE232025BA0D9@SJ0PR03MB6391.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f75df53-40f6-4b3a-2155-08d9329f09e2
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2021 21:21:30.0527
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6391
X-OriginatorOrg: citrix.com

On 18/06/2021 12:48, Jan Beulich wrote:
> On 18.06.2021 12:14, Andrew Cooper wrote:
>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>> Based on feedback from the 2021 Xen Developers Summit, the xsm-roles RFC
>>> patch set is being split into two separate patch sets. This is the first
>>> patch set, focused purely on the cleanup and refactoring of the
>>> XSM hooks.
>>>
>>> This patch set refactors the xsm_ops wrapper hooks to use the alternative_call
>>> infrastructure. It then moves and realigns the headers to remove the
>>> pseudo is/is-not-enabled implementation. The remaining changes are cleanup
>>> and the removal of no-longer-necessary abstractions.
>>>
>>> <snip>
>>>  51 files changed, 1309 insertions(+), 1413 deletions(-)
>> The diffstat is great, but sadly CI says no. 
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/323044913
>>
>> The problem is that ARM doesn't have alternative_vcall().  Given how
>> much of an improvement this ought to be for hypercalls, I don't want to
>> lose the vcalls.
>>
>> One option is to implement vcall() support on ARM, but that will leave
>> new architectures (RISC-V on the way) with a heavy lift to get XSM to
>> compile.
>>
>> Instead, what we want to do is make vcall() a common interface, falling
>> back to a plain function pointer call for architectures which don't
>> implement the optimisation.  So something like:
>>
>> 1) Introduce CONFIG_HAS_VCALL, which is selected by X86 only right now
>> 2) Introduce xen/vcall.h which uses CONFIG_HAS_VCALL to either include
>> asm/vcall.h or provide the fallback implementation
> A word on the suggested names: The 'v' in alternative_vcall() stands for
> "returning void", as opposed to alternative_call(). It's unclear to me
> what you see it stand for in the names you propose.

Urgh - yet another reason to prefer the Linux static_call() infrastructure.

Would a general alt_call name be acceptable?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 18 21:54:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Jun 2021 21:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145005.266822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luMQy-00056x-T5; Fri, 18 Jun 2021 21:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145005.266822; Fri, 18 Jun 2021 21:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luMQy-00056q-PZ; Fri, 18 Jun 2021 21:53:52 +0000
Received: by outflank-mailman (input) for mailman id 145005;
 Fri, 18 Jun 2021 21:53:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luMQx-00056g-PM; Fri, 18 Jun 2021 21:53:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luMQx-0001aS-DG; Fri, 18 Jun 2021 21:53:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luMQx-0002qB-2L; Fri, 18 Jun 2021 21:53:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luMQx-0001JE-1Y; Fri, 18 Jun 2021 21:53:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sYRA4l05W+6TZWqU9FLQP+vmTorWen9w98+OecY82JI=; b=vAJ569RpOG86lYbGtsKJ0ytjQ1
	TdEjtiH+bnu/F862xw491ZsvNcJVKdXn6zHqy0k5Ww1oWdFXkg8dcqDmr5uvagzUfMTTigtgI05pH
	zIqCb4DThN2XfHTCD9p4r6jKTQSRR87m0EM+KofS9IhWQC/Eap/1HqrfzBYUrDFkasVk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162887-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162887: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fd0aa1a4567d0f09e1bfe367a950b004f99ac290
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Jun 2021 21:53:51 +0000

flight 162887 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                fd0aa1a4567d0f09e1bfe367a950b004f99ac290
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  322 days
Failing since        152366  2020-08-01 20:49:34 Z  321 days  547 attempts
Testing same since   162887  2021-06-18 00:13:42 Z    0 days    1 attempts

------------------------------------------------------------
6178 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1682302 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 02:11:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 02:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145016.266836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luQSD-0000vq-Qj; Sat, 19 Jun 2021 02:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145016.266836; Sat, 19 Jun 2021 02:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luQSD-0000vi-Ky; Sat, 19 Jun 2021 02:11:25 +0000
Received: by outflank-mailman (input) for mailman id 145016;
 Sat, 19 Jun 2021 02:11:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luQSC-0000vY-UU; Sat, 19 Jun 2021 02:11:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luQSC-0003ez-KZ; Sat, 19 Jun 2021 02:11:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luQSC-0001df-8E; Sat, 19 Jun 2021 02:11:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luQSC-0002hF-6c; Sat, 19 Jun 2021 02:11:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/n6Kf0n/xLBLuhpzzQN1posILhYzpBxXQ4Awj9BTyZM=; b=IaDpoIC32+UJ3HQZJondsN+kha
	tHO/KO25/4fY66th8DD4LxfP3ajO8MpNIcp+p/iTqKUteGTt6etNw3Cwk13PJcKjd4OsWwAijgKi4
	lr35fmGPrv/Vvm6b4YJVV1o5uqZVqEykzHdPdIynTTl815DDXhbluz/Awriyv4Akkow8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162889: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=18e53dff939898c6dd00d206a3c2f5cd3d6669db
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 02:11:24 +0000

flight 162889 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162889/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                18e53dff939898c6dd00d206a3c2f5cd3d6669db
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  302 days
Failing since        152659  2020-08-21 14:07:39 Z  301 days  556 attempts
Testing same since   162879  2021-06-17 16:39:39 Z    1 days    2 attempts

------------------------------------------------------------
534 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 172465 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145025.266853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRqs-0000mQ-Kb; Sat, 19 Jun 2021 03:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145025.266853; Sat, 19 Jun 2021 03:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRqs-0000mJ-He; Sat, 19 Jun 2021 03:40:58 +0000
Received: by outflank-mailman (input) for mailman id 145025;
 Sat, 19 Jun 2021 03:40:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRqq-0000mD-Ix
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:40:56 +0000
Received: from mail-pl1-x630.google.com (unknown [2607:f8b0:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 246ee875-db1c-4980-b357-6f7286b5b6c0;
 Sat, 19 Jun 2021 03:40:55 +0000 (UTC)
Received: by mail-pl1-x630.google.com with SMTP id x22so4131386pll.11
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:40:55 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id h9sm3396879pgn.57.2021.06.18.20.40.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:40:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 246ee875-db1c-4980-b357-6f7286b5b6c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=wvS9l30i3GYuPyiKgQTQF0b37cCydWlT07dudvlm33Q=;
        b=nzhm+1SSPBjVZjE/s5NdvBue0n/c7M+1C3XPeFYVZL6vXhVJvQ+WMmRbQnedYpdFhz
         +BXJSvmDWQ2zbmhQ0mXQVGS2uwlDDSzoC4WdeSTpiL7y3Iv4utmKFnBhkI2zT9V/Pvi8
         OcjCsO1ux4x7lk9Z463hWgr5tPX6YDT2An/Go=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=wvS9l30i3GYuPyiKgQTQF0b37cCydWlT07dudvlm33Q=;
        b=LpeVEg1NGb3baVHidssigueDy/c3PlyMvupJv+tSs+pP/2XDAh5l0lnHkJnvzdqRz/
         H0Gy4Stw0IxpUVngNbyAhiuMITly6MBh27hhGBJsFN2ihQUK5/taKqc/G/k3+JgJUguz
         BziiCcmXZztqbVAhdfnsqEPIIsQuw4mCebXWxWqFp12cZ7tHW5UCFTIFEy7cOktZRlcg
         O3SyQ9+cs72RWcOo+UIVbngipWDFYjZJN+/aP+h7dxs/D8g5B0yok1kUA50mD+jqslxp
         EECz/FJJOXt/ToQxXw6wVqXzbCpBabONAf0akT6Yb+sAos/9LdfuiQtsEE+jXt9tqhJW
         RIYA==
X-Gm-Message-State: AOAM5300j0z3dlrHclOYGQtcMKYff3FA3R0JK27e9SOyKMujqgG/YDGN
	sbr2JeCH4dn6xBeNi8JOnaD3xw==
X-Google-Smtp-Source: ABdhPJz3/l7Y1S8+WYsfcCUqr2yTHWs8fYRW3TvBYf0s7lBSL7y2P67ZunbdUFtwSnyF77FW0iYCxQ==
X-Received: by 2002:a17:90a:9910:: with SMTP id b16mr14283876pjp.94.1624074054156;
        Fri, 18 Jun 2021 20:40:54 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 00/12] Restricted DMA
Date: Sat, 19 Jun 2021 11:40:31 +0800
Message-Id: <20210619034043.199220-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCIe bus for Wi-Fi, and that PCIe bus is
not behind an IOMMU. As PCIe, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (for remote Wi-Fi exploits, see [1a] and [1b],
which show a full chain of exploits, as well as [2] and [3]).

To mitigate these security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region, and performs memory allocation from the same
region. On its own, the feature provides a basic level of protection against
the DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. via the MPU in ATF on some Arm
platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132

v14:
- Move set_memory_decrypted before swiotlb_init_io_tlb_mem (patch 01/12, 10,12)
- Add Stefano's Acked-by tag from v13

v13:
- Fix xen-swiotlb issues
  - memset in patch 01/12
  - is_swiotlb_force_bounce in patch 06/12
- Fix the dts example typo in reserved-memory.txt
- Add Stefano and Will's Tested-by tag from v12
https://lore.kernel.org/patchwork/cover/1448001/

v12:
Split is_dev_swiotlb_force into is_swiotlb_force_bounce (patch 06/12) and
is_swiotlb_for_alloc (patch 09/12)
https://lore.kernel.org/patchwork/cover/1447254/

v11:
- Rebase against swiotlb devel/for-linus-5.14
- s/mempry/memory/g
- exchange the order of patch 09/12 and 10/12
https://lore.kernel.org/patchwork/cover/1447216/

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later
https://lore.kernel.org/patchwork/cover/1446882/

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/


Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Add restricted DMA pool initialization
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   4 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  51 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  59 +++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 250 +++++++++++++-----
 16 files changed, 387 insertions(+), 103 deletions(-)

-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145026.266864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRr1-00014C-1d; Sat, 19 Jun 2021 03:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145026.266864; Sat, 19 Jun 2021 03:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRr0-000145-UZ; Sat, 19 Jun 2021 03:41:06 +0000
Received: by outflank-mailman (input) for mailman id 145026;
 Sat, 19 Jun 2021 03:41:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRqz-00013X-1J
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:41:05 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1ebcd3b-bcac-455e-b3bc-6f813f2e8d5a;
 Sat, 19 Jun 2021 03:41:04 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id
 z3-20020a17090a3983b029016bc232e40bso7029723pjb.4
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:04 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id v15sm10722584pgf.26.2021.06.18.20.40.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1ebcd3b-bcac-455e-b3bc-6f813f2e8d5a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=omy6cCV/mqNeN2Sml+g2G3v+YRTOPwPuj3rsSOaGbf8=;
        b=I5TZjmCgnGS7QCzs+GcKzND2zopF611oSqEk2ecm02oxJXrYZUgoQDr2D20q350b//
         QYtNYWxqxTTaqABZudTjO+NdWbPCAL6S986A1o51hT/KC6Y6jv56Aec5CbBEKD54AZvh
         ONSLv3o8P/DZRDHIEq/KTzFADPEWg4DUzfKuQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=omy6cCV/mqNeN2Sml+g2G3v+YRTOPwPuj3rsSOaGbf8=;
        b=V6UR2zUIWktPsDGIP7o4kkSQ6MEnKCkDBt1tkXatOKuDFp2tTRdRjGdURYngjCZh7Z
         bhpeL8xVbp5AxXwrnbeYcGuTZqTmB5ObyoKJGDvyccruXz9Z2H5rY3qbmWMJdAsqhJ32
         zxdFLKb0ixDIUaRx6tdfkuSO5ZCU4OlcoV6rOfRNgfB4m4Dc+xOD4rXHkhY7e6tyTzQG
         SvLa0I/vuK2mNn4a2O/GIYQVJFpgK2tNphQVVQ8CCChIFgcurgx0qi98bVtsvJdj90zM
         VSDKoxXrrQFmuvvEZ3IyWtE7r9y2h+k7w+v1oArKCat/jPsDS4uprZAwRKWkOS4bE4br
         bMbw==
X-Gm-Message-State: AOAM533OtYtFOj1Jc16fXD2vOEFIMRgr8UXJRXUrAKIbJGBzHbgXPB/n
	8IcoXg36I2vJvkmXZx0csZgRGg==
X-Google-Smtp-Source: ABdhPJxyHWTN70AeFSyU8U3sk1NtIzEKCT9EWu4JIVK1vqFXvWavgzlnfDikinONTGDBOm/HI5iFZg==
X-Received: by 2002:a17:90a:3c8d:: with SMTP id g13mr7572126pjc.229.1624074063342;
        Fri, 18 Jun 2021 20:41:03 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 01/12] swiotlb: Refactor swiotlb init functions
Date: Sat, 19 Jun 2021 11:40:32 +0800
Message-Id: <20210619034043.199220-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, to perform the io_tlb_mem
struct initialization so that the code can be reused.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 52e2ac526757..1f9b2b9e7490 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	memset(mem, 0, sizeof(*mem));
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145027.266875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRr8-0001Pb-BE; Sat, 19 Jun 2021 03:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145027.266875; Sat, 19 Jun 2021 03:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRr8-0001PU-7g; Sat, 19 Jun 2021 03:41:14 +0000
Received: by outflank-mailman (input) for mailman id 145027;
 Sat, 19 Jun 2021 03:41:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRr7-0001Ok-Dy
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:41:13 +0000
Received: from mail-pg1-x533.google.com (unknown [2607:f8b0:4864:20::533])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6564cac7-5fb1-43f6-bec5-edb14cd76b9f;
 Sat, 19 Jun 2021 03:41:12 +0000 (UTC)
Received: by mail-pg1-x533.google.com with SMTP id u190so5614287pgd.8
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:12 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id 25sm10160700pgp.51.2021.06.18.20.41.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6564cac7-5fb1-43f6-bec5-edb14cd76b9f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=87F32lyNJPVTVtO+0wV8iOXVR4FYPSJ8Efeoe/l3AWY=;
        b=i1ENWaFNKwGMCMiAeHBUHeHeWCgJo2DMoHTJ8FYIFrvt5MmhJ7OFiSW0SvgsImDco7
         eyyIqWu+/Z0bEeYC6m5AiJ1SXr0nn6NCHriX3ZUuh6wWoj4iO9mtnXQIU1hsa49/kNzW
         JSXwJ+CCVTQ+vErOyYxRYe6Lgoq4CUcXaMXXc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=87F32lyNJPVTVtO+0wV8iOXVR4FYPSJ8Efeoe/l3AWY=;
        b=l+iJHq5yTHS/+iZKHg0H8GhLXwM50Ks0cl0QeMdx+DceBCxxbEruAwMa7mP8rL6Srw
         wUET2O7hO3v0DFflS89PGm15/n+m4i1+TbQOLjylhsvwOOAtwbpdFLPNLQVoAsEQxgwA
         2sijtA5/fXxqbDn8Q74zWaKWAYa2aNSp+rcpOFZILsPjMmxtZYjzt20Hn2wYWsbI7Isz
         eHEaLIHyzqBHcxSNDku6oR5ANg89o0286mZ+vno8Jxx2p5EDyFYYNT2//htdLlLp6Hr9
         juEDv4hgOqzHCB3SQL3TJAxyrz0NBixkbA4r30+EpEodFmq9bm/Aw0mk7Qhak9J25NVh
         LfUA==
X-Gm-Message-State: AOAM533b3hiLKpKw66gOVYq1080fWUbEMfll8pHL4/kjnrA3ga0z4Tme
	JmB/a0666Dwek6AWb4HntIa7fA==
X-Google-Smtp-Source: ABdhPJyiBODmYapISErXaci/paTftzSQoq4G3MHhJ6blBXA4Xk2Rt3t9DUi7Q/sURCxYsvPxcPxH7g==
X-Received: by 2002:a63:4915:: with SMTP id w21mr13046659pga.363.1624074072021;
        Fri, 18 Jun 2021 20:41:12 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Sat, 19 Jun 2021 11:40:33 +0800
Message-Id: <20210619034043.199220-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1f9b2b9e7490..ede66df6835b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -671,19 +671,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145033.266886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrI-0001yQ-LH; Sat, 19 Jun 2021 03:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145033.266886; Sat, 19 Jun 2021 03:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrI-0001yF-Gt; Sat, 19 Jun 2021 03:41:24 +0000
Received: by outflank-mailman (input) for mailman id 145033;
 Sat, 19 Jun 2021 03:41:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRrH-0001Ok-D4
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:41:23 +0000
Received: from mail-pf1-x429.google.com (unknown [2607:f8b0:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c87204bf-8d78-4697-8b0c-c4888a738812;
 Sat, 19 Jun 2021 03:41:21 +0000 (UTC)
Received: by mail-pf1-x429.google.com with SMTP id x73so9202135pfc.8
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:21 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id w8sm10652886pgf.81.2021.06.18.20.41.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c87204bf-8d78-4697-8b0c-c4888a738812
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=+yhAgwjq5z1dAtXH3bS6halUgnpsXrGer7IGToK2ifg=;
        b=LUdmghaaazqKba1aLn04PfMlpRzk0BOMm/oE/2uT5oGK/4M/pLpeKNZ/XhE8AOCxT6
         DdLBVhPVTM8zRaiAVOx71WwL1fVRW31Q+t9OixeH0FKnoqxhnrXmYUW3XL7vbfEoJ9zk
         yTjNLO8fmwuJgRViIKfsMx70t7sOnNzESCzr0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=+yhAgwjq5z1dAtXH3bS6halUgnpsXrGer7IGToK2ifg=;
        b=sAsIhEeJnWeVnfT6HeuCQRTZYsA7iHJvgWYKXw1jpzMRqvYU5dQCMFWm7pjuPzdUE8
         thJ+FXOPk9fKqLu3Pbdx+up7U8tqfc1YIuin8vgfhOEJuxjFzLPwKLztvSjpT1FwKC7b
         rKfcUDSLF0VHLwngZ2HHK8Kfoa96j+pf7ItDNLSaJLvHApGW1/iOgrbabk4XQRpY2eVd
         lR0Lj1HLvA7T8EKky29lKpWRTYjhsJBGctszD7bEqxJsVq+0vEVq1mtCqKq2xlzYjJ9K
         UsyLKWa9latnUAj5BBekBSobz2ZRNJNT8DqWha80D2WQf/Ztvxmku6rxSGxwdKpfnoTw
         4qsA==
X-Gm-Message-State: AOAM531cs5YbFOMw2TGhFpvLy6xsR915q0bLtN3Y1rTuOvemx44BmFeI
	unEsqrTKxCMYyRg2HdG9qKmpHQ==
X-Google-Smtp-Source: ABdhPJw3C1XOZjkgf0B8Yi3ZxQ9pW8ObuiE6ObvPA0Nzzbh696ZvPaWqhtVMAuHHTXvq5xv83a5dDQ==
X-Received: by 2002:a63:1b54:: with SMTP id b20mr12910328pgm.151.1624074080668;
        Fri, 18 Jun 2021 20:41:20 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Sat, 19 Jun 2021 11:40:34 +0800
Message-Id: <20210619034043.199220-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always keep a pointer to the swiotlb pool in use in struct device. This
helps simplify the code for supporting other pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f29839382f81..cb3123e3954d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index ba660731bd25..240d652a0696 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -518,6 +519,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ede66df6835b..72a4289faed1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -340,7 +340,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
@@ -431,7 +431,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -508,7 +508,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -559,7 +559,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145037.266897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrS-0002fT-Uj; Sat, 19 Jun 2021 03:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145037.266897; Sat, 19 Jun 2021 03:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrS-0002fM-RF; Sat, 19 Jun 2021 03:41:34 +0000
Received: by outflank-mailman (input) for mailman id 145037;
 Sat, 19 Jun 2021 03:41:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRrR-0001Ok-DS
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:41:33 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7b013d6-61c8-4e81-8714-3bb66144a08a;
 Sat, 19 Jun 2021 03:41:30 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id
 k22-20020a17090aef16b0290163512accedso8072644pjz.0
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:30 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id t7sm8188546pgh.52.2021.06.18.20.41.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7b013d6-61c8-4e81-8714-3bb66144a08a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=n+WwDLLEZtw2fHfSMVUWtPxnVCk0hCuq73bmvGm/m1o=;
        b=j+Az6O+p1qb8cN2QvjJ451IxdXmi4TGSnNoj0Z/o3Pg4OcQKhDLRIVkSi5ieJF6wc1
         lI2TJv8I+nFhwYgNx5VwL1681UNdqugIV6qNf0FCzDgIssWBMteXSxuh6wuhTB3oh1kB
         3eZPlG1ArHGdgMdptxdQc5PcF4TEIk0cI4d4c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=n+WwDLLEZtw2fHfSMVUWtPxnVCk0hCuq73bmvGm/m1o=;
        b=L234G5FeHJeKYEwpRmAR8Yp233AM6Qf9i1HZj3r8+TUkRcqKKlpzV6NGdHwZH/66Bf
         1RJI76t7cXtb5hfjNQwzKkLL8rr9YwbdKFfqaipGyKmDcN5Djd7MYySfMRxmsQbvgJ6S
         M8JFY4OZ7grMtQgfQMVHleDvyXwsfJS1CX8+gT1qOrxZM8hNcUgjC0VLfzUMVZFRwByR
         VRkO+N40X2fGMrm250a8nFWMhkssgsY+oC45REPJ5GxNgx9ZtyYj4ssFoeeFX0qnGy1/
         KjhaX4JDS9eBuds0Stc7bCSj9sp1/St4fxFdBRBwi88ozA2dDEp9B9QAbYaThIcgOYpj
         fwPQ==
X-Gm-Message-State: AOAM533N2XfPY23Js7frMoCAf3JgfuCjQ7VMBCg+4Wn/+v9H3ZivpOb8
	k9S5Ebe7DgatQS0q5z+vcwV1pw==
X-Google-Smtp-Source: ABdhPJxY49uyton1y+WOtqleukQHJjbBRP+EMSmxnvCLQvywl2S5ZwrvLwUfWk18Z2FiiJodXh0QuQ==
X-Received: by 2002:a17:902:d48b:b029:122:d1b9:7e5a with SMTP id c11-20020a170902d48bb0290122d1b97e5amr961916plg.47.1624074089382;
        Fri, 18 Jun 2021 20:41:29 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Sat, 19 Jun 2021 11:40:35 +0800
Message-Id: <20210619034043.199220-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3087d9fa6065..10997ef541f8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:41:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145043.266908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrh-0003P1-EB; Sat, 19 Jun 2021 03:41:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145043.266908; Sat, 19 Jun 2021 03:41:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRrh-0003Ou-9z; Sat, 19 Jun 2021 03:41:49 +0000
Received: by outflank-mailman (input) for mailman id 145043;
 Sat, 19 Jun 2021 03:41:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRrg-0001Ok-Dz
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:41:48 +0000
Received: from mail-pj1-x102c.google.com (unknown [2607:f8b0:4864:20::102c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dceb24c6-72c9-40ef-8dd7-2491c2e17815;
 Sat, 19 Jun 2021 03:41:39 +0000 (UTC)
Received: by mail-pj1-x102c.google.com with SMTP id
 m15-20020a17090a5a4fb029016f385ffad0so4212034pji.0
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:39 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id bp2sm2313035pjb.20.2021.06.18.20.41.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dceb24c6-72c9-40ef-8dd7-2491c2e17815
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zrGk5vL3UOmfV1+EKp1h1m3lS/SKJ23U3aRYavMW2u4=;
        b=BFagvH2d1+jFVqk3x7a7fdtcLbrA9KLeqPaRYxC4m59HXZlMztmGwqO1z/ykJ+WJpd
         rqzzg+qqfiWUljkrHR8vXBZX5jVlUpgiPPNGLjiy1t1keidN2x6SJq30tTtRjxGdPJAQ
         PUvDTKjTiAO4Ppj2mMNYS/gDAl5l3ULKGYSQo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zrGk5vL3UOmfV1+EKp1h1m3lS/SKJ23U3aRYavMW2u4=;
        b=TS+k6QklYz4jKOc9XYYj5WjZbdu28+2WCslAakPGQpF6KqLNb3d7bMEtd/XYsiydS+
         zQwguFCv6wWMXUpSzMN4RvD6pTIKITfKmPFX+syJK/TmYjviHFCn3hIfcIKRVG7/2+w/
         TMdFOINLYhM+t7nSCQp1CUZBPwg3ruVuCJo4a/Lqr83DuUeK11+V5KS807VAr0ijN9v3
         rB+E1E/zgBh/ZgPuNTHb/gNrlAQkY+/t2dtAKXtgtU5E6sQ179fEwZu3knfeIGr6Q9Pj
         FjiFxthMFLdiU92EzhjUewI4qhRsAkoUycJAkFczvGQSj2GuuRdEdBvUdaVYTo08jS0+
         VSRQ==
X-Gm-Message-State: AOAM532DR0lilbaXsKfV7hUvwLTy/ZKD70qBMMRop7xWjj6+PKw9lgUl
	wzX1BWT2fTCaQt18hx73HYXFqA==
X-Google-Smtp-Source: ABdhPJySL8LyiFJgKMn6YIK1rQRoUoEueFH5Ee9/KPNH8Svj2cORHQyHhlJVtqo/KSLaIk9InLr0HQ==
X-Received: by 2002:a17:90a:aa05:: with SMTP id k5mr25005023pjq.117.1624074098672;
        Fri, 18 Jun 2021 20:41:38 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Sat, 19 Jun 2021 11:40:36 +0800
Message-Id: <20210619034043.199220-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index a9d65fc8aa0e..4b7afa0fc85d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 9662522aa066..be15bfd9e0ee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 72a4289faed1..8a120f42340b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -664,9 +664,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:44:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:44:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145054.266919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRtn-0004YI-RI; Sat, 19 Jun 2021 03:43:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145054.266919; Sat, 19 Jun 2021 03:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luRtn-0004YB-NA; Sat, 19 Jun 2021 03:43:59 +0000
Received: by outflank-mailman (input) for mailman id 145054;
 Sat, 19 Jun 2021 03:43:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRtm-0004Xt-Az
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:43:58 +0000
Received: from mail-pg1-x529.google.com (unknown [2607:f8b0:4864:20::529])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26e4112f-df9f-4006-ad8f-b7d2ab53ecef;
 Sat, 19 Jun 2021 03:43:57 +0000 (UTC)
Received: by mail-pg1-x529.google.com with SMTP id t9so9439410pgn.4
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:43:57 -0700 (PDT)
Received: from mail-pg1-f180.google.com (mail-pg1-f180.google.com.
 [209.85.215.180])
 by smtp.gmail.com with ESMTPSA id j79sm52780pfd.172.2021.06.18.20.43.55
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:43:56 -0700 (PDT)
Received: by mail-pg1-f180.google.com with SMTP id e20so9482427pgg.0
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:43:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26e4112f-df9f-4006-ad8f-b7d2ab53ecef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=zO5LL7Owrsk50u85z5NQdq/+aN2FoxcpBADlxiwbwtc=;
        b=GoMVApQGRvRIO7blwwZSLVX7KnEaLUsgQiZnYGnQ8mp+79muo9lTEuES+H8kLhS8xC
         djxEq6nvfo+uAAf09DzdZIJgtSBquVbHx1Yy7Zj9ycZDtCn5PRIixQGHzFvC0S0QErMk
         dzuQb1UZyozHrgC7e811z9kAB0L/tg8WkvJDw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=zO5LL7Owrsk50u85z5NQdq/+aN2FoxcpBADlxiwbwtc=;
        b=XrbtClHVJOUB549XSeUBsyIs+uQznKlW3gRLKT5T0fhkXVMD9TVq6PX4ZR1AL9j1rR
         2pTGe4xOpMfN1VnUW8Q+YfMbSl/VILx0KQQn/MPRxT5U0QRpZY1MOijH99mWvcgeyU8e
         TcNd17PrcyPQOZ6d4hV4vy6ip9sS6SQLY02aRgObFXBfDs3ik6QzhD8sitbeflIi1qV+
         gwnTISxe5SjGQ/A+kny9W3SsmtYrwJxFOQ+RIp0s3Y5wS4nxZt7cxk23V5L/3qTeW/68
         JWdIKIzmiw4oSb5JRNsdOE1C+cWIiYGQJ6ViWDI/VY/oWZvHNJGZaDipVrX2C6+vOEa0
         sW0g==
X-Gm-Message-State: AOAM532HJN7Xugjx1VQUnT/m0pZlZITJVv0c7HqfyL6/C3yIQbLGE8af
	ktI0SfptlVQYCEYUX87qSD8kylEQuRYrIQ==
X-Google-Smtp-Source: ABdhPJwyjQuHqdCFbb/37uFSKWTZPN3jb0/AjmQtlyH/x8QlRWaFaOM4u7arFkyUE6gq0MVTdZ2RWw==
X-Received: by 2002:aa7:9578:0:b029:2e9:dec0:bef4 with SMTP id x24-20020aa795780000b02902e9dec0bef4mr8262908pfq.29.1624074236736;
        Fri, 18 Jun 2021 20:43:56 -0700 (PDT)
X-Received: by 2002:a05:6602:50:: with SMTP id z16mr10626949ioz.155.1624074224014;
 Fri, 18 Jun 2021 20:43:44 -0700 (PDT)
MIME-Version: 1.0
References: <20210617062635.1660944-1-tientzu@chromium.org> <CALiNf2_qF7OY0LHToNYx0E79BWMt2n7=nepPPLf+7YV3=KFEyw@mail.gmail.com>
In-Reply-To: <CALiNf2_qF7OY0LHToNYx0E79BWMt2n7=nepPPLf+7YV3=KFEyw@mail.gmail.com>
From: Claire Chang <tientzu@chromium.org>
Date: Sat, 19 Jun 2021 11:43:33 +0800
X-Gmail-Original-Message-ID: <CALiNf289bo1WzEWWapzeQ8xYiH8s1qgDkpHVgy=PgAmv6rvGnQ@mail.gmail.com>
Message-ID: <CALiNf289bo1WzEWWapzeQ8xYiH8s1qgDkpHVgy=PgAmv6rvGnQ@mail.gmail.com>
Subject: Re: [PATCH v13 00/12] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
	bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v14: https://lore.kernel.org/patchwork/cover/1448954/


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145061.266935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS18-000621-Ug; Sat, 19 Jun 2021 03:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145061.266935; Sat, 19 Jun 2021 03:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS18-00061M-Nj; Sat, 19 Jun 2021 03:51:34 +0000
Received: by outflank-mailman (input) for mailman id 145061;
 Sat, 19 Jun 2021 03:51:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRse-0001Ok-GC
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:42:48 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96fec9f1-b58b-4aaa-9957-00385e044bb1;
 Sat, 19 Jun 2021 03:42:16 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id t13so9419962pgu.11
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:42:16 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id k7sm12125402pjj.46.2021.06.18.20.42.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:42:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96fec9f1-b58b-4aaa-9957-00385e044bb1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=hXEUgb4Sv9lytUREZCJklb6uiHdL/XNWKamYYGystbg=;
        b=floXiGukoM2XY/IEUAlqBWMnVt9036pBacfgzl4K/Qw/2SYHQanXn6H03cZ32woxc2
         BNPFYl0cjqam+FtSDCJjv3FW6vQqgRR5U4JogeMh7GXne5ZPonsZYce90Ej8TwprtTxa
         BwILhJReTwKL98BavcP7BDJj8z8TDtnd415xA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=hXEUgb4Sv9lytUREZCJklb6uiHdL/XNWKamYYGystbg=;
        b=XNn2NE0LeTk5Z/bbw6A59zkd202lNUNKLvYUIzbXedIcK5jLW/hFeTzy1RKhXJT54R
         g1JAwZuJx2AjygDACnZJuBN3miaGQyACKt8hbAmHDb7efjzkEFCTHyP/L92kNyw2BTgi
         zkdyNGk2BJyPN/oZygMBb5P/6eMfd0nEkqtf4eCS16cNDa7ggx2nQtfUyD3Q6Tku2jbu
         KuM8Dq06wEdEbfLiyvRsv0SP61OTKu0PKMhXdkW+Q+vQoJmFYhJlvIda4t5muiQETv1n
         65tXnVkxsMxf3sKBQECX6MtHeykonIh2SvGEmhWz1DLyABrUq4hAyBAsc1ghr91XaxBw
         4GYg==
X-Gm-Message-State: AOAM531vRJJXIIC6yZRpxEs0sUzAw79NsMZVK0C/+SExaXSo8fQXCTJD
	R+NdTjxwrsQSTJR3WKA3eOVd2w==
X-Google-Smtp-Source: ABdhPJy0ypzrzEu2i4Qc0z5Kwcx6tibZSD/xS/3V4U1s+97Kqk4GGqZNgdbt15Zm/tqRwwEPO4UroQ==
X-Received: by 2002:a63:ff20:: with SMTP id k32mr13288563pgi.82.1624074135070;
        Fri, 18 Jun 2021 20:42:15 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Sat, 19 Jun 2021 11:40:40 +0800
Message-Id: <20210619034043.199220-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to
support memory allocation from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via the shared-dma-pool binding and use
dma_alloc_from_dev_coherent instead for atomic coherent allocation.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 include/linux/swiotlb.h | 26 ++++++++++++++++++++++
 kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
 kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
 3 files changed, 99 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8d8855c77d9a..a73fad460162 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
+ * @for_alloc:  %true if the pool is used for memory allocation
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -96,6 +97,7 @@ struct io_tlb_mem {
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
+	bool for_alloc;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -156,4 +158,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->for_alloc;
+}
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a92465b4eb12..2de33e5d302b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    is_swiotlb_for_alloc(dev)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+		return page;
+	}
+
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
@@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e79383df5d4a..273b21090ee8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -463,8 +463,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -703,3 +704,36 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145060.266929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS18-0005zT-Ht; Sat, 19 Jun 2021 03:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145060.266929; Sat, 19 Jun 2021 03:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS18-0005zM-F4; Sat, 19 Jun 2021 03:51:34 +0000
Received: by outflank-mailman (input) for mailman id 145060;
 Sat, 19 Jun 2021 03:51:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRst-0001Ok-Gg
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:43:03 +0000
Received: from mail-pf1-x42b.google.com (unknown [2607:f8b0:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e86861e3-af97-45bc-ae63-07f57bfc5a5f;
 Sat, 19 Jun 2021 03:42:24 +0000 (UTC)
Received: by mail-pf1-x42b.google.com with SMTP id z26so9213844pfj.5
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:42:24 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id o72sm3823389pfg.102.2021.06.18.20.42.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:42:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e86861e3-af97-45bc-ae63-07f57bfc5a5f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=siEy8G7PeUIP5SwwgMvapyAkxanEyejlu7ONUZ9yGrs=;
        b=PVzE47oPcF725/coDqsJRzac3T/wkFYs0l0OUgKwwlqVYtfZ1VcPE+apLpO3PiE39E
         hroZdkWyJ1FY8+MMxcIXfLg8mSzUQOWKW1HghPEs/CnHdT3B8XoNEo0T/Ku1bIExGIif
         gFdxf+gYkQKTSymCtV+BwSVCd70PKzuU3VIss=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=siEy8G7PeUIP5SwwgMvapyAkxanEyejlu7ONUZ9yGrs=;
        b=hBtlbFZgmL1K+a+y6qU0H3Pl0u9VOheUSWLO50vpAqMRywJiJPwdjMXf2ti2ns92C7
         sc9uff3Z/TnKqA3DIOg79JWKW3UFjgfbzchZ8X3uYEhnlu2bfvhaoA6W2OVVIcNTbUi0
         Ud9t9beQO03cwVJLMgJplV7FuQN95Dg0zX44Iv3cyv+e5wSTQ8QCPLqY2dglOTfYNFAM
         gfa7umFjxEJcHv6nFaRf+XUcsPCy1/3vjsNe8MgJ2dKOACccXexAQ5NQc15KKUUjpqwW
         ZI30BLZrgSmSwYpcP4dYyW4l+ssEY11pbSxFfNskjd6Ye/xh9cNeNOdEHShEn97ZqUrn
         r49A==
X-Gm-Message-State: AOAM5315/f71ZLLp+PXXQV2aDWz1NzNMbOOpRE+2UU2zYrYlSVzLdsIN
	6HaqE6Cdiq0JGEx1ybn4CXJeag==
X-Google-Smtp-Source: ABdhPJwzMKzuXPBcXuPpa9pE0clhI9Qb+YkVTpTbg16/nkn2r8LOUuO+QRq9ZqpeBfYYhrJIx8pKXg==
X-Received: by 2002:a63:d305:: with SMTP id b5mr13411892pgg.67.1624074143723;
        Fri, 18 Jun 2021 20:42:23 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 10/12] swiotlb: Add restricted DMA pool initialization
Date: Sat, 19 Jun 2021 11:40:41 +0800
Message-Id: <20210619034043.199220-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

The restricted DMA pools provide a basic level of protection against
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index a73fad460162..175b6c113ed8 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 273b21090ee8..1aef294c82b5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -736,4 +743,73 @@ bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 	return true;
 }
 
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force_bounce = true;
+		mem->for_alloc = true;
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145062.266952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1B-0006XX-Ai; Sat, 19 Jun 2021 03:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145062.266952; Sat, 19 Jun 2021 03:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1B-0006XH-7H; Sat, 19 Jun 2021 03:51:37 +0000
Received: by outflank-mailman (input) for mailman id 145062;
 Sat, 19 Jun 2021 03:51:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRsy-0001Ok-Go
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:43:08 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a0ef1c7-fbf4-47fc-9eba-a519c0232052;
 Sat, 19 Jun 2021 03:42:33 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id u18so3854983pfk.11
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:42:33 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id h8sm10337503pgr.43.2021.06.18.20.42.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:42:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a0ef1c7-fbf4-47fc-9eba-a519c0232052
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=Wu0eLHFq0Y/87UVuH2wG2UQKehneEtjyPBVQdTP1lnkewLoZWvPjWvbDIzh+Arzpty
         6/gTzGLkVnr6kmY9LH3UOa/5MBfVIDw2V2Pk6DizW41XnS2UTJ+egFCFXcGunjOko5uu
         NFfdBa59v19TyyGjeObnddQmZaP9dKr5Tb8k0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=orn61udtSKkiSlOKBmcBTcbFGyEVrPT737pYbeNGDbKlXq8haq7r6snN5UQQa5NLas
         fyJj9sOUSaRiN4s/P0pip1OIANCBmhBd9VFu+l2Z2ggwVdfDyC9WruRMrQwr7EQUJA3X
         Ckep0uaFWXIAwKW4Z8PrFyHgJfO6KodyWinhOzqEAf2/QebMHTr1+nY2w608F96PFLux
         cTD1fSWUMUtDdXkPXj6eaqHY8Yx8sXH4fmpZnkc0eJ/Z7MEPOLpccLc9ipeQEdWo8vCU
         fULIRh+BZnKdL6ESEm88OhIz/xuJSFhz2dXHwEmSAaNGQqD9em37trJmS5a2D0je6UMw
         pRVQ==
X-Gm-Message-State: AOAM533ch5SypMlfwlPFTpJory6D5dLy4q8CIQKYPuoo4xLPzRRCXTfZ
	58wfWtArIFRcQH1Z/AI/h/iIlw==
X-Google-Smtp-Source: ABdhPJzLaZpsZm1niTfUb8nxR32Zg+/27KFI+rEZFvk6OtIgjDsZGMpMmjJEaTHEl6JDmRuEDbffng==
X-Received: by 2002:a63:6644:: with SMTP id a65mr12953547pgc.431.1624074152888;
        Fri, 18 Jun 2021 20:42:32 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 11/12] dt-bindings: of: Add restricted DMA pool
Date: Sat, 19 Jun 2021 11:40:42 +0800
Message-Id: <20210619034043.199220-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified with a restricted-dma-pool entry in the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..39b5f4c5a511 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145063.266956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1B-0006b2-Lx; Sat, 19 Jun 2021 03:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145063.266956; Sat, 19 Jun 2021 03:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1B-0006aJ-Gc; Sat, 19 Jun 2021 03:51:37 +0000
Received: by outflank-mailman (input) for mailman id 145063;
 Sat, 19 Jun 2021 03:51:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRrv-0001Ok-Eu
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:42:03 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c326dcd-3f7a-4799-952a-9f13e6a44a8f;
 Sat, 19 Jun 2021 03:41:48 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id v7so9458737pgl.2
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:48 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id h8sm10454969pgc.60.2021.06.18.20.41.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c326dcd-3f7a-4799-952a-9f13e6a44a8f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=gxm70gAYTdFiTzOkp/yUuQHBQugIweYy8lC0PJAcI2M=;
        b=Kuy54DcZY87FwF3oB7V0bzV2JNfwZq/uyqj2I9xxSjTRA6y1E2CPnZRumaM765f0Xb
         v0KP1I+m9BrPsl7oLCPGjn2nUXJ6FogHpPE720TRxD14bZA4nhyVgdy4jcbkK5hFKtxf
         glPncRicDrPZCvz+B08B+NO/Ur9FbSSa+c39o=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=gxm70gAYTdFiTzOkp/yUuQHBQugIweYy8lC0PJAcI2M=;
        b=dmcDaLfkoesQG7oZ5BC6sCo8KlA7RkzhYoGxLye8TnqkqZH7HYX42sMrFUVdLdXQl+
         kahZeWuUT7RGgXiPJTht76EObNqgr79g4e3ktR3IB1znxnB38xg0M+eK1LB7K3ni8HOX
         ZuOxxxGIZ7YuKIlbHeJrRZqHHyghNtzIKgz8ZSElOAyFFG3mgeliWfrG0eSuvFYOrrbr
         K/vZt+3nElH+HwhTdO40iWtEuoBReqw98G+pBOi0WKDf0CdVKSJVyCsXE6gsUkBe2Jc+
         yXsN1YJnDba8rpdYGf7lokfps7l3aMEqVOBmcnX9HiEhsw7aLlTGsAm9H2zJsObxHjir
         Tv5A==
X-Gm-Message-State: AOAM532hkEMZ9yLERGXwdTbsGKhc/0zgz2z6I0wuZDoudzML9Hb1hwe0
	Qg3zB+M4serBNTxpl2jGdOIdjA==
X-Google-Smtp-Source: ABdhPJzpQGo5H9t3rpBLNNSR4BleWfZxTJ0TrreB34cRZso42pP7NjaNJuaK2gVPNxfqAewQD9NbiA==
X-Received: by 2002:aa7:83d9:0:b029:2eb:b0ef:2a67 with SMTP id j25-20020aa783d90000b02902ebb0ef2a67mr8451796pfn.1.1624074107829;
        Fri, 18 Jun 2021 20:41:47 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
Date: Sat, 19 Jun 2021 11:40:37 +0800
Message-Id: <20210619034043.199220-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate swiotlb_force into io_tlb_default_mem->force_bounce and
use it to determine whether to bounce the data. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   | 11 +++++++++++
 kernel/dma/direct.c       |  2 +-
 kernel/dma/direct.h       |  2 +-
 kernel/dma/swiotlb.c      |  4 ++++
 5 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0c6ed09f8513..4730a146fa35 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	if (dma_capable(dev, dev_addr, size, true) &&
 	    !range_straddles_page_boundary(phys, size) &&
 		!xen_arch_need_swiotlb(dev, phys, dev_addr) &&
-		swiotlb_force != SWIOTLB_FORCE)
+		!is_swiotlb_force_bounce(dev))
 		goto done;
 
 	/*
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..8d8855c77d9a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_bounce: %true if swiotlb bouncing is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_bounce;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_bounce;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..a92465b4eb12 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..4632b0f4f72e 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force_bounce(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8a120f42340b..0d294bbf274c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force_bounce = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145065.266974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1G-0007D0-0c; Sat, 19 Jun 2021 03:51:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145065.266974; Sat, 19 Jun 2021 03:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1F-0007Cp-SB; Sat, 19 Jun 2021 03:51:41 +0000
Received: by outflank-mailman (input) for mailman id 145065;
 Sat, 19 Jun 2021 03:51:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRt3-0001Ok-H3
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:43:13 +0000
Received: from mail-pf1-x42f.google.com (unknown [2607:f8b0:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9145b49e-f47d-434a-9ccf-b76367529716;
 Sat, 19 Jun 2021 03:42:42 +0000 (UTC)
Received: by mail-pf1-x42f.google.com with SMTP id y4so2233285pfi.9
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:42:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id w123sm2105172pff.186.2021.06.18.20.42.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:42:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9145b49e-f47d-434a-9ccf-b76367529716
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=Cri+5q/XNXci1dYh6yOlMvklOoS1iDf8znay8QN5g7HsmU2nwTC31fUj0kmPFl/d3E
         UrEjZA23CpWhEyqBQFiiqYrjH128zTZ50xa1kFSsB4C7Qh0tYSnM/l7KxCIpPRLbXGbP
         rfG2xRynjepqsE/jzwrzX5C1T4DpCVjhYeV64=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=T+AgCI+bXB6sNY5Ba7HNC0N5+OaNws4rF6ZHGVcHRI6CE6BZKkC95QqAgMUenAgnqW
         tnilqhCRfsYHeoTue1RRrBhF42n0TEtA/E0LLFafCgGEkpjXW1glqXMHNS1iROjuYEjj
         tIt6rTRkKe+G6YCKGtrScX3ZxNPHi/4wTKfZB2cOG0LaZLiL4gWkVryn6EN7SXPXTGgo
         auk60jqS3rp+Zq39pVInkv4Oi8EfRJppm5qvfM5h95tJXb6YIDecBZSheyJPL36KWn+l
         2WUIhfgniBGqe652f6Ove3DnFFn4Mwwffn6qe66og0dtj0mk7uTCEPMC+oFzoXBJnVoM
         FiIQ==
X-Gm-Message-State: AOAM531cct2YS/Mve36/9608gBhnC9fpQq+ikDYC3kOJhiffKXY9yyGi
	QCPTEkgeyXJRThg5PqFd3YUkRA==
X-Google-Smtp-Source: ABdhPJxWjk0LuUPrhsnJEc72kJfiH5b2VGot6yoSKHBS3bYastAOHzgWOpZYPuBhn5z9HTDgvvE2Og==
X-Received: by 2002:aa7:9384:0:b029:2cc:5e38:933a with SMTP id t4-20020aa793840000b02902cc5e38933amr8282582pfe.81.1624074161816;
        Fri, 18 Jun 2021 20:42:41 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 12/12] of: Add plumbing for restricted DMA pool
Date: Sat, 19 Jun 2021 11:40:43 +0800
Message-Id: <20210619034043.199220-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA pool when a restricted-dma-pool reserved-memory
region is present.
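A minimal devicetree sketch of what this plumbing consumes, based on the "restricted-dma-pool" reserved-memory compatible that this series wires up; the node names, addresses, and sizes below are illustrative, not taken from the patch:

```dts
/ {
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* Pool the device is restricted to bouncing through. */
		restricted_dma: restricted-dma@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x50000000 0x400000>;	/* 4 MiB */
		};
	};

	/* Hypothetical device tied to the pool via memory-region. */
	dev@12340000 {
		memory-region = <&restricted_dma>;
	};
};
```

With this in place, of_dma_configure_id() calls of_dma_set_restricted_buffer() for non-IOMMU devices, which walks the memory-region phandles and initializes the first available restricted-dma-pool region.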

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..cdf700fba5c4 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1022,6 +1023,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 6cb86de404f1..e68316836a7a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..25cebbed5f02 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145066.266985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1H-0007W4-EU; Sat, 19 Jun 2021 03:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145066.266985; Sat, 19 Jun 2021 03:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1H-0007US-7L; Sat, 19 Jun 2021 03:51:43 +0000
Received: by outflank-mailman (input) for mailman id 145066;
 Sat, 19 Jun 2021 03:51:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRs5-0001Ok-FD
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:42:13 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c973ee08-cfc2-4a99-aa39-42c61d4456e0;
 Sat, 19 Jun 2021 03:41:57 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id p13so9282034pfw.0
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:41:57 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id d8sm9801812pfq.198.2021.06.18.20.41.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:41:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c973ee08-cfc2-4a99-aa39-42c61d4456e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=yBt4hQLYm3mqRXyO6Ue9ozYRrGCBFDovL1RqQBClg04=;
        b=kUc0aW3NrRDs5U6B12hrWDg0XWc0HZESQnTqGuqF1lOwto9c+7mt8gkEytThboQsHz
         XLan+5H8MDt3PRVY3ReT+qPhu28xh6wjPY72lJEd12tdFryWy7Q2zgQsEGYf4tF3oitf
         PfRsCwU9hKassTMBsMhc64Ic4q5xVPJYHDueg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=yBt4hQLYm3mqRXyO6Ue9ozYRrGCBFDovL1RqQBClg04=;
        b=mZCpJdjyCJ21E6KYiGzGbtz915o1mXtdOU4PYyR48Y6UDPEevSG4oCzKKDxrpUbbul
         pPlZulGllF6Zsy2gUodxZFCEkruMgXlNPhnwBvRcwHgqPG6FU9d/BkwXuDaUjyA4jSY+
         qSNWDbinPSNxquWsgAVIJ7v/2VALDK2+b6Hsqmi8GpxjQniE7P4XOu60jRSNc2SFjH9v
         Fn8HwogTYG/SH7aph5GZXVtMYqxC8YvsRzUcM4KtHEcaV+cjm+7dgAFUnMp7XSU25h0L
         plSI70g4hZsBVH6xBwXw/InQbjGsD+tuX/g0jsH1+ADmZwitzf4Ky8ucnCmsF0WO5Fsy
         u1mQ==
X-Gm-Message-State: AOAM532S8RNGHmNuc+ISetqwEgSh+S+ol4u6MCoeLjwGQbA04Jq2Razx
	ajHX7KCjg2ii3JKeup8VuZ4b9g==
X-Google-Smtp-Source: ABdhPJz6BPgqX8oIJ5DFtEDFLRXoE9Gt4G8MSlnWyfHJyjQTUFcAShEWPij+Zot6PwERvV6L2IAQtA==
X-Received: by 2002:a63:dd4a:: with SMTP id g10mr12899661pgj.144.1624074117138;
        Fri, 18 Jun 2021 20:41:57 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Sat, 19 Jun 2021 11:40:38 +0800
Message-Id: <20210619034043.199220-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size to it for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d294bbf274c..daf38a52e66d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -432,8 +432,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -488,8 +488,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -530,7 +533,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -544,11 +547,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Sat Jun 19 03:51:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 03:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145075.266996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1L-000801-23; Sat, 19 Jun 2021 03:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145075.266996; Sat, 19 Jun 2021 03:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luS1K-0007zU-S6; Sat, 19 Jun 2021 03:51:46 +0000
Received: by outflank-mailman (input) for mailman id 145075;
 Sat, 19 Jun 2021 03:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oaj8=LN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1luRsU-0001Ok-G1
 for xen-devel@lists.xenproject.org; Sat, 19 Jun 2021 03:42:38 +0000
Received: from mail-pl1-x631.google.com (unknown [2607:f8b0:4864:20::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42238ea9-1a6d-427a-9e6a-7e70cc864b6d;
 Sat, 19 Jun 2021 03:42:07 +0000 (UTC)
Received: by mail-pl1-x631.google.com with SMTP id 69so5677170plc.5
 for <xen-devel@lists.xenproject.org>; Fri, 18 Jun 2021 20:42:06 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:4a46:e208:29e8:e076])
 by smtp.gmail.com with UTF8SMTPSA id a25sm4118804pfg.47.2021.06.18.20.41.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 18 Jun 2021 20:42:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42238ea9-1a6d-427a-9e6a-7e70cc864b6d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=6UYcmj71Nciebm6oUVf56G+z9ljUTqD3hIvM0xvNwlw=;
        b=CvzIlEZzhAOxT+qjDZ+7NzV4sGhDEmWKj9nbd5uDoHpJqzgCuGiJUhHaLcawz3/br/
         6Le30VZKk818Tu1yH7ZE7vV0/ya4I+xpPNn/tUWCeTGbH9HsAFe0ZC4NYZBO9G7TYfyS
         3lkrZEYq4wbjCO3JNlcKxJtKWZiG1vegiLrIs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=6UYcmj71Nciebm6oUVf56G+z9ljUTqD3hIvM0xvNwlw=;
        b=GanHKMry2RVbwV8RsTKeDL9WTWQbMBpHY1qV7EqX4maB/wxPqPBIZICRDMcI1ITbH+
         +FTsGJGRmPH1M1bKRkUOUjQgU501Mk9REuWYHbKUB5g8woI4Qfv8jVNb+k5ni11RrFi5
         e8FMoqO7eyLxdd8AxBmlHrtWFnt71Ki1WKoOlsGukqC3HR/wjpJ2+vba8SZkYuqDupc6
         +Xz5NGP+xs79q1U5bBCWaGW7oAwlgBzEnXTMzIVhKpkZ68YICsPbplQc5rcIiHRopkuN
         6luSSeaEYc3jOUbPBWSUCyMadMgICfblNHRQSzbJwjX8sXWKM3z6mAOn2ED54BxMneFF
         NpEg==
X-Gm-Message-State: AOAM531lD+oE93OYiXMi2JnqMq8ZYAiDtDqAl5EfI99Zih7ESXKnWriY
	c75iwR2zqXFb/dDXGr0XzRpr/Q==
X-Google-Smtp-Source: ABdhPJxlbApkRXsP2fgkAdUNCHQ1WjOiOdFp1rz7jZq5s11WVxe5ByFrpAGijwIKxprmkrdki9AO2Q==
X-Received: by 2002:a17:902:e843:b029:109:4dbc:d4ed with SMTP id t3-20020a170902e843b02901094dbcd4edmr7628002plg.74.1624074126329;
        Fri, 18 Jun 2021 20:42:06 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH v14 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Sat, 19 Jun 2021 11:40:39 +0800
Message-Id: <20210619034043.199220-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
References: <20210619034043.199220-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index daf38a52e66d..e79383df5d4a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -556,27 +556,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -611,6 +599,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.288.g62a8d224e6-goog

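[Editorial note, not part of the original posting: the refactor in the patch
above splits unmap into "optional CPU sync" followed by "return slots to the
free list". The standalone C sketch below mirrors that shape with a
hypothetical mock `struct io_tlb_mem`; the names echo the kernel code but none
of this is the kernel implementation.]

```c
#include <assert.h>

/* Minimal mock of the bounce-buffer pool state; hypothetical fields,
 * not the kernel's struct io_tlb_mem. */
struct io_tlb_mem {
	int used_slots;	/* slots currently allocated from the pool */
	int synced;	/* how many CPU syncs have been performed */
};

/* Previously inlined in swiotlb_tbl_unmap_single(); extracted so that
 * other bounce buffer pools can reuse the slot-freeing path. */
static void swiotlb_release_slots(struct io_tlb_mem *mem)
{
	mem->used_slots--;	/* return the slot to the free list */
}

/* Stand-in for copying the bounce buffer back to the original page. */
static void swiotlb_bounce(struct io_tlb_mem *mem)
{
	mem->synced++;
}

/* After the refactor, unmap is: optional sync, then release. */
static void swiotlb_tbl_unmap_single(struct io_tlb_mem *mem, int skip_sync)
{
	if (!skip_sync)
		swiotlb_bounce(mem);
	swiotlb_release_slots(mem);
}
```

The point of the split is that a future pool (e.g. a restricted DMA pool) can
call the release helper directly without duplicating the sync logic.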


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 06:09:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 06:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145113.267007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luUA7-0005R2-T4; Sat, 19 Jun 2021 06:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145113.267007; Sat, 19 Jun 2021 06:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luUA7-0005Qv-OO; Sat, 19 Jun 2021 06:08:59 +0000
Received: by outflank-mailman (input) for mailman id 145113;
 Sat, 19 Jun 2021 06:08:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luUA6-0005Ql-Uj; Sat, 19 Jun 2021 06:08:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luUA6-0008EM-Nh; Sat, 19 Jun 2021 06:08:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luUA6-0005I3-Cb; Sat, 19 Jun 2021 06:08:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luUA6-0003wS-Be; Sat, 19 Jun 2021 06:08:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wddoPEbYzuCLR5z8eNYy3AFraMhnCzR4ZmemIx6AXGc=; b=yftRNuONktWaclmHBJ8J+AXl3z
	ZaxuMzC+VFX5vTjk2m2F45Wn21DotChl0vPVCfo1euMp8AHn8avYR1tQZBhUR6EsCCe0fZK6YPAuX
	h0aHIHZb/QWXAfoTnALs5xDhfvnXhP5vzSYH8X/BacVgXvgLrd0YSlhQ+vaPq1gZ/vwQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162890: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a82d4d5e9fe6e6448fb5885a184460864c2f14a5
X-Osstest-Versions-That:
    linux=ffe4d2a0684d4e6aa6ff066b1cd8663a62f2888c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 06:08:58 +0000

flight 162890 linux-5.4 real [real]
flight 162899 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162890/
http://logs.test-lab.xenproject.org/osstest/logs/162899/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162899-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162859
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162859
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162859
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162859
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162859
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162859
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162859
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162859
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162859
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162859
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162859
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a82d4d5e9fe6e6448fb5885a184460864c2f14a5
baseline version:
 linux                ffe4d2a0684d4e6aa6ff066b1cd8663a62f2888c

Last test of basis   162859  2021-06-16 10:12:06 Z    2 days
Testing same since   162890  2021-06-18 08:12:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bindu Ramamurthy <bindu.r@amd.com>
  Bixuan Cui <cuibixuan@huawei.com>
  Christoph Hellwig <hch@lst.de>
  Dan Robertson <dan@dlrobertson.com>
  Daniel Wagner <dwagner@suse.de>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Ewan D. Milne <emilne@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@suse.de>
  Hillf Danton <hdanton@sina.com>
  Hulk Robot <hulkrobot@huawei.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiri Kosina <jkosina@suse.cz>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Triplett <josh@joshtriplett.org>
  Khem Raj <raj.khem@gmail.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Mark Bolhuis <mark@bolhuis.dev>
  Martin K. Petersen <martin.petersen@oracle.com>
  Maurizio Lombardi <mlombard@redhat.com>
  Nirenjan Krishnan <nirenjan@gmail.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pavel Machek (CIP) <pavel@denx.de>
  Saeed Mirzamohammadi <saeed.mirzamohammadi@oracle.com>
  Sasha Levin <sashal@kernel.org>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Schmidt <stefan@datenfreihafen.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thierry Reding <treding@nvidia.com>
  Tony Lindgren <tony@atomide.com>
  Yongqiang Liu <liuyongqiang13@huawei.com>
  Zheng Yongjun <zhengyongjun3@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   ffe4d2a0684d..a82d4d5e9fe6  a82d4d5e9fe6e6448fb5885a184460864c2f14a5 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 07:16:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 07:16:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145120.267021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luVDP-0003dM-NY; Sat, 19 Jun 2021 07:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145120.267021; Sat, 19 Jun 2021 07:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luVDP-0003dF-KL; Sat, 19 Jun 2021 07:16:27 +0000
Received: by outflank-mailman (input) for mailman id 145120;
 Sat, 19 Jun 2021 07:16:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVDO-0003d5-As; Sat, 19 Jun 2021 07:16:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVDO-0000tX-3Q; Sat, 19 Jun 2021 07:16:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVDN-0000BY-SF; Sat, 19 Jun 2021 07:16:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luVDN-0004BL-Ri; Sat, 19 Jun 2021 07:16:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4qjKL1N4HH9exZ4Y6451a3/qJkLSjpnwLNqdYwuY7B8=; b=Tx7RyJ6xZ9VSUR9fy7+KSbSMYh
	fhcYLqyCyay6glliLxFHnZsYOmcrtyuW4enkou1VnwzEQdcA9h6Vo4fu9RbbzxKKroqSGYPPnbi3y
	aA2Ime2RchqRqtgcaVc70aE0m2K6MYpq2hv/grtW9xYUCPI/fRjDAhl+UHEIZ280DDLk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162892: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=1162ae8297e1fc9871e615cad7d505d639b7ed0c
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 07:16:25 +0000

flight 162892 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 1162ae8297e1fc9871e615cad7d505d639b7ed0c
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   15 days
Failing since        162368  2021-06-04 15:42:59 Z   14 days   32 attempts
Testing same since   162875  2021-06-17 10:45:14 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2158 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 07:33:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 07:33:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145129.267038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luVTc-00061D-B9; Sat, 19 Jun 2021 07:33:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145129.267038; Sat, 19 Jun 2021 07:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luVTc-000616-8F; Sat, 19 Jun 2021 07:33:12 +0000
Received: by outflank-mailman (input) for mailman id 145129;
 Sat, 19 Jun 2021 07:33:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVTa-00060n-Rd; Sat, 19 Jun 2021 07:33:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVTa-0001Bp-LP; Sat, 19 Jun 2021 07:33:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luVTa-0000mC-8F; Sat, 19 Jun 2021 07:33:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luVTa-00064e-7m; Sat, 19 Jun 2021 07:33:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3gFUxr5dRiuXpVa3uBmCVuZdblLTYFhMGHaalJIs5fo=; b=ASH13UQgxToz3oprIMPaPs5/11
	MwnOLnz035iRQjmo8BhpUuYkw1kVCbOjCC/wMWn3htTOUNWnBHkxYuEzibqvD3BPyj6U9y75rGdcv
	4a88PNgCkXi/qWdvkvjrdjxJz2+Ubba9W4it/H4FWTjrPhCA3ckTOyMyvTVOeJP+NFm0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162891-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 162891: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b6a8c4f72def4d1135ff42660a86276ce2565c8c
X-Osstest-Versions-That:
    xen=45710c02563602343261b690787bb1c76a676bef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 07:33:10 +0000

flight 162891 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162891/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162881
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162881
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162881
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162881
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162881
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162881
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162881
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162881
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162881
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162881
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162881
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b6a8c4f72def4d1135ff42660a86276ce2565c8c
baseline version:
 xen                  45710c02563602343261b690787bb1c76a676bef

Last test of basis   162881  2021-06-17 19:07:30 Z    1 days
Testing same since   162891  2021-06-18 11:05:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   45710c0256..b6a8c4f72d  b6a8c4f72def4d1135ff42660a86276ce2565c8c -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 17:45:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 17:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145149.267052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luf1S-0001sL-VQ; Sat, 19 Jun 2021 17:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145149.267052; Sat, 19 Jun 2021 17:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luf1S-0001sE-Rz; Sat, 19 Jun 2021 17:44:46 +0000
Received: by outflank-mailman (input) for mailman id 145149;
 Sat, 19 Jun 2021 17:44:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luf1Q-0001s4-NB; Sat, 19 Jun 2021 17:44:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luf1Q-00044D-Gv; Sat, 19 Jun 2021 17:44:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luf1Q-00024v-5M; Sat, 19 Jun 2021 17:44:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luf1Q-0006t1-4s; Sat, 19 Jun 2021 17:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yfsswzmLqPBTwXL7ifwItSD3bbJEwVlZ8qkc6SkIL/k=; b=lsS6ZiYEZaZdDmxboZ+Cgn1Ha4
	jcpwudIZwH09wyNDqLtY/R3vye/cIobfAuhS2v8Yzg5OUiAjDFMx/iOYojh/FICVhzxr5fxBM+AQ/
	ApVtDYct0lli/Lxjr/ReXE1d37RizdVUCoVC9yCeRnMbaI+krOMRZoVPUWGedeV2DK3Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162894: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-i386-xl:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-coresched-i386-xl:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 17:44:44 +0000

flight 162894 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162894/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl-pvshim       <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-raw          <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-coresched-i386-xl    <job status>                 broken
 test-amd64-i386-libvirt         <job status>                 broken
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <job status>       broken
 test-amd64-i386-libvirt-xsm     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail in 162885 REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-raw        5 host-install(5)          broken pass in 162885
 test-amd64-i386-qemuu-rhel6hvm-intel  5 host-install(5)  broken pass in 162885
 test-amd64-i386-qemut-rhel6hvm-amd  5 host-install(5)    broken pass in 162885
 test-amd64-coresched-i386-xl  5 host-install(5)          broken pass in 162885
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 host-install(5)   broken pass in 162885
 test-amd64-amd64-xl-credit2   5 host-install(5)          broken pass in 162885
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken pass in 162885
 test-amd64-amd64-xl-shadow    5 host-install(5)          broken pass in 162885
 test-amd64-i386-qemuu-rhel6hvm-amd  5 host-install(5)    broken pass in 162885
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken pass in 162885
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken pass in 162885
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 162885
 test-amd64-i386-xl-qemut-win7-amd64  5 host-install(5)   broken pass in 162885
 test-amd64-i386-libvirt       5 host-install(5)          broken pass in 162885
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 162885
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken pass in 162885
 test-amd64-i386-libvirt-xsm   5 host-install(5)          broken pass in 162885
 test-amd64-i386-xl-qemuu-ws16-amd64  5 host-install(5)   broken pass in 162885
 test-amd64-i386-xl-qemuu-win7-amd64  5 host-install(5)   broken pass in 162885
 test-amd64-i386-xl-pvshim     5 host-install(5)          broken pass in 162885
 test-amd64-i386-qemut-rhel6hvm-intel 14 guest-start/redhat.repeat fail in 162885 pass in 162894

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 162885 never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 162885 never pass
 test-amd64-i386-xl-pvshim    14 guest-start          fail in 162885 never pass
 test-amd64-i386-libvirt     15 migrate-support-check fail in 162885 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 162885 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 162885 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   11 days
Failing since        162556  2021-06-08 22:39:08 Z   10 days   16 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            broken  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  broken  
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          broken  
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      broken  
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    broken  
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       broken  
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-coresched-i386-xl broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-step test-amd64-i386-xl-raw host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-intel host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-amd host-install(5)
broken-step test-amd64-coresched-i386-xl host-install(5)
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-amd host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-i386-libvirt host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-libvirt-xsm host-install(5)
broken-step test-amd64-i386-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 19 19:27:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Jun 2021 19:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145156.267067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lugcl-0002kw-K9; Sat, 19 Jun 2021 19:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145156.267067; Sat, 19 Jun 2021 19:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lugcl-0002kg-Gw; Sat, 19 Jun 2021 19:27:23 +0000
Received: by outflank-mailman (input) for mailman id 145156;
 Sat, 19 Jun 2021 19:27:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lugck-0002kM-83; Sat, 19 Jun 2021 19:27:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lugck-0005rX-1p; Sat, 19 Jun 2021 19:27:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lugcj-0004HK-Le; Sat, 19 Jun 2021 19:27:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lugcj-0007u2-L5; Sat, 19 Jun 2021 19:27:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Opu5fh8Kp9gRC9b4XePZMNpMSmXM5JFmvmWQWNTYU68=; b=al2ea/s8bVcUFXg69ckx28UhTU
	UfNpw8z+Xru3eZWvhAcFZxD84maBqMiqAPUlDVywqML1u7AVo9ntoj3boat7cxeVtJgwzt71E6qVf
	jl85NYo57DXOnuo6AK/Cky4Krgv9uk7liYZqH2YOkAO7M9SXNZ1phCr5CUrSlosDqvKI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162898: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-i386:<job status>:broken:regression
    libvirt:build-i386-pvops:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-pvops:host-install(4):broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-i386:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Jun 2021 19:27:21 +0000

flight 162898 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162898/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 151777
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-i386                    4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  344 days
Failing since        151818  2020-07-11 04:18:52 Z  343 days  336 attempts
Testing same since   162898  2021-06-19 04:19:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)

Not pushing.

(No revision log; it would be 62906 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 00:08:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 00:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145165.267081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lul0J-0003Ce-TI; Sun, 20 Jun 2021 00:07:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145165.267081; Sun, 20 Jun 2021 00:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lul0J-0003CX-Q2; Sun, 20 Jun 2021 00:07:59 +0000
Received: by outflank-mailman (input) for mailman id 145165;
 Sun, 20 Jun 2021 00:07:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lul0I-0003CN-BE; Sun, 20 Jun 2021 00:07:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lul0H-0002kW-S2; Sun, 20 Jun 2021 00:07:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lul0H-0001wp-IF; Sun, 20 Jun 2021 00:07:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lul0H-0001pJ-Hk; Sun, 20 Jun 2021 00:07:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=31qRUpCf7+nL20gEzoITmnh1ERknlB1XbPFOK8sFPfI=; b=JpTVn5nZXNNVnLjrUiYdsWNKf/
	3bmadjEhPZNriouY0Y5DKklqDKv9PMW5owF1lmZsuMHBQgKKqD5fNLWLdwHqtM+q5Ss9Et7BcPSZa
	D8oc51c8n4syBWG8xFaDidfNboyq1HbcUayFOVGC+B7U9hHbV6HYmuNYS6mEkHllOfsU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162900: trouble: blocked/broken/pass
X-Osstest-Failures:
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a63914d3f603580e5aeceb5edbafe56688210141
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 00:07:57 +0000

flight 162900 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162900/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken REGR. vs. 162359
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a63914d3f603580e5aeceb5edbafe56688210141
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   15 days
Failing since        162368  2021-06-04 15:42:59 Z   15 days   33 attempts
Testing same since   162900  2021-06-19 07:19:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-step build-i386 host-install(4)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step build-i386-pvops host-install(4)

Not pushing.

(No revision log; it would be 2173 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 06:05:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 06:05:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145177.267104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luqZp-000129-AY; Sun, 20 Jun 2021 06:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145177.267104; Sun, 20 Jun 2021 06:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luqZp-000121-4r; Sun, 20 Jun 2021 06:05:01 +0000
Received: by outflank-mailman (input) for mailman id 145177;
 Sun, 20 Jun 2021 06:04:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luqZn-00011r-2e; Sun, 20 Jun 2021 06:04:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luqZm-0007yd-TW; Sun, 20 Jun 2021 06:04:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luqZm-0001Dx-HY; Sun, 20 Jun 2021 06:04:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luqZm-0003Ny-H1; Sun, 20 Jun 2021 06:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PmgFa5Toz8AQjAznzMZwSBtw30DpQBEj4jYV8kyOsF4=; b=kQMbGYksOaTMhQJSF7CK0OCM6g
	iYGjDaXAFs7ccY2U9GLroL0kj0WsieZfsUCtwuSjxQRPXyc5NOVs2uKve4YYidBv7RDKRQ3OfIAc1
	+tT1Tp2AiMuO8HtYpnSiIgCzJUmhiJ10Jq8c/RXczme8D3SxnRgJKm5IcPWHAAkKOELs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162902: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a63914d3f603580e5aeceb5edbafe56688210141
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 06:04:58 +0000

flight 162902 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162902/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64                   4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a63914d3f603580e5aeceb5edbafe56688210141
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   16 days
Failing since        162368  2021-06-04 15:42:59 Z   15 days   34 attempts
Testing same since   162900  2021-06-19 07:19:12 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)

Not pushing.

(No revision log; it would be 2173 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 10:58:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 10:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145191.267133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luv9W-0002Bc-Qk; Sun, 20 Jun 2021 10:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145191.267133; Sun, 20 Jun 2021 10:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luv9W-0002BV-NN; Sun, 20 Jun 2021 10:58:10 +0000
Received: by outflank-mailman (input) for mailman id 145191;
 Sun, 20 Jun 2021 10:58:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luv9V-0002BL-GK; Sun, 20 Jun 2021 10:58:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luv9V-0004vW-6t; Sun, 20 Jun 2021 10:58:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luv9U-0007eM-RS; Sun, 20 Jun 2021 10:58:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luv9U-0007iW-R0; Sun, 20 Jun 2021 10:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uOzLr0T4Vsk/nZEwKEBWOwgqNdou1Tn74eCcHu2Tv0w=; b=IbU6WDjw14IkCUVktsd5do6LIh
	y6RbLGBhfydtXerbXLeG1ktBiWmJpHczfpjtQ2ctu5yS10YeZ15TddIyOnXfcO906peFyEtugA+hW
	Me6RhI7RI3EgTdTKjsaW9+7tI7jPtVaJHmHapp7q6SjYRj73bM7tEkEOMHamtXMnzyVc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162895: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-raw:<job status>:broken:regression
    linux-linus:test-amd64-coresched-i386-xl:<job status>:broken:regression
    linux-linus:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pair:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pygrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-shadow:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    linux-linus:test-amd64-i386-libvirt:<job status>:broken:regression
    linux-linus:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-pair:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-examine:host-install:broken:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-raw:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt:host-install(5):broken:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-pygrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-libvirt:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-i386-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-amd64-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-amd64-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=728a748b3ff70326f652ab92081d639dc51269ea
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 10:58:08 +0000

flight 162895 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162895/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>    broken
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <job status>         broken
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>             broken
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>    broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-raw          <job status>                 broken
 test-amd64-coresched-i386-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-i386-xl-shadow       <job status>                 broken
 test-amd64-i386-xl-xsm          <job status>                 broken
 test-amd64-i386-freebsd10-amd64    <job status>                 broken
 test-amd64-i386-freebsd10-i386    <job status>                 broken
 test-amd64-i386-libvirt         <job status>                 broken
 test-amd64-i386-libvirt-pair    <job status>                 broken
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <job status>       broken
 test-amd64-i386-libvirt-xsm     <job status>                 broken
 test-amd64-i386-pair            <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl              <job status>                 broken
 test-amd64-i386-xl-pvshim       <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <job status>             broken
 test-amd64-i386-examine       5 host-install           broken REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)      broken blocked in 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-xsm        5 host-install(5)       broken blocked in 152332
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl           5 host-install(5)       broken blocked in 152332
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken blocked in 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  5 host-install(5) broken blocked in 152332
 test-amd64-coresched-i386-xl  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)    broken blocked in 152332
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-i386-libvirt-xsm   5 host-install(5)       broken blocked in 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-i386-freebsd10-amd64  5 host-install(5)    broken blocked in 152332
 test-amd64-i386-xl-raw        5 host-install(5)       broken blocked in 152332
 test-amd64-i386-qemut-rhel6hvm-amd  5 host-install(5) broken blocked in 152332
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)  broken blocked in 152332
 test-amd64-i386-xl-pvshim     5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-amd64-xl-multivcpu  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-xl-shadow    5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-libvirt      5 host-install(5)       broken blocked in 152332
 test-amd64-coresched-amd64-xl  5 host-install(5)      broken blocked in 152332
 test-amd64-i386-xl            5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-i386-pvgrub  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-pygrub       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-freebsd10-i386  5 host-install(5)     broken blocked in 152332
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-xsm  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)    broken blocked in 152332
 test-amd64-amd64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-shadow     5 host-install(5)       broken blocked in 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-vhd  5 host-install(5)       broken blocked in 152332
 test-amd64-i386-libvirt       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-rtds      5 host-install(5)       broken blocked in 152332
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-pair       6 host-install/src_host(6) broken blocked in 152332
 test-amd64-i386-pair       7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qcow2     5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken blocked in 152332
 test-amd64-i386-libvirt-pair 7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-amd64-pair      6 host-install/src_host(6) broken blocked in 152332
 test-amd64-amd64-pair      7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)  broken blocked in 152332
 test-amd64-amd64-xl-pvshim    5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                728a748b3ff70326f652ab92081d639dc51269ea
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  323 days
Failing since        152366  2020-08-01 20:49:34 Z  322 days  548 attempts
Testing same since   162895  2021-06-18 21:58:18 Z    1 days    1 attempts

------------------------------------------------------------
6184 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-amd64-coresched-i386-xl                                 broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            broken  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         broken  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  broken  
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  broken  
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       broken  
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     broken  
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          broken  
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         broken  
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      broken  
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 broken  
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    broken  
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       broken  
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              broken  
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    broken  
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-coresched-i386-xl broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-xl-xsm broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm broken
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-i386-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-i386-xl-xsm host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-intel host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-intel host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-i386-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-amd host-install(5)
broken-step test-amd64-i386-examine host-install
broken-step test-amd64-coresched-i386-xl host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-libvirt-xsm host-install(5)
broken-step test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-freebsd10-amd64 host-install(5)
broken-step test-amd64-i386-xl-raw host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-amd host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-i386-xl host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-i386-freebsd10-i386 host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-i386-xl-shadow host-install(5)
broken-step test-amd64-i386-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-i386-libvirt host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-i386-pair host-install/src_host(6)
broken-step test-amd64-i386-pair host-install/dst_host(7)
broken-step test-amd64-i386-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-i386-libvirt-pair host-install/src_host(6)
broken-step test-amd64-i386-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)

Not pushing.

(No revision log; it would be 1683232 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 12:29:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 12:29:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145198.267147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luwZj-0002G2-T0; Sun, 20 Jun 2021 12:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145198.267147; Sun, 20 Jun 2021 12:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luwZj-0002Fv-Pj; Sun, 20 Jun 2021 12:29:19 +0000
Received: by outflank-mailman (input) for mailman id 145198;
 Sun, 20 Jun 2021 12:29:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luwZi-0002Fl-0D; Sun, 20 Jun 2021 12:29:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luwZh-0006Ox-PN; Sun, 20 Jun 2021 12:29:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luwZh-0001B6-G0; Sun, 20 Jun 2021 12:29:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luwZh-00017Q-FY; Sun, 20 Jun 2021 12:29:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hZcMcmMepv5aA70N8/lLdPs47COj8FSAWFMxT8itTb8=; b=BHHs9A0u3pykQx2mxPjA8sMDmy
	UAzOaZjspGygl/P7IwpNiMT6BaEF8t56Ml92TnwadsEQpFLtAkO0BPbL7r8utv3Fyk9Q7fGdpRIId
	Nn89b8ELut0F4XdsscNvt1RluWcGsv3EPHtj3ZD4BTdcYUUgVGdbF/cNebMekynYildI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162908: trouble: broken
X-Osstest-Failures:
    xen-unstable-coverity:coverity-amd64:<job status>:broken:regression
    xen-unstable-coverity:coverity-amd64:host-install(4):broken:regression
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 12:29:17 +0000

flight 162908 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162908/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 coverity-amd64                  <job status>                 broken
 coverity-amd64                4 host-install(4)        broken REGR. vs. 162857

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac

Last test of basis   162857  2021-06-16 09:18:30 Z    4 days
Testing same since   162908  2021-06-20 09:18:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Jun 17 18:00:57 2021 +0200

    x86/ept: force WB cache attributes for grant and foreign maps
    
    Force WB type for grants and foreign pages. Those are usually mapped
    over unpopulated physical ranges in the p2m, and such ranges would
    usually be UC in the MTRR state, which is unlikely to be the correct
    cache attribute. It's also cumbersome (or even impossible) for the
    guest to set the MTRR type for all those mappings to WB, as MTRR
    ranges are finite.
    
    Note that this is not an issue on AMD because the WB cache attribute
    is already set on grant and foreign mappings in the p2m and MTRR
    types are ignored. Also, on AMD Xen cannot force a cache attribute
    because there is no equivalent of the ignore-PAT bit, so the behavior
    here diverges slightly between AMD and Intel (or EPT vs NPT/shadow).
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit ed464d4e8a9a49559307c96ee6aa59e97820f692
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Jun 17 17:58:11 2021 +0200

    x86/mtrr: move epte_get_entry_emt to p2m-ept.c
    
    This is an EPT-specific function, so it shouldn't live in the generic
    mtrr file. The move is also needed for future work that will require
    passing a p2m_type_t parameter to epte_get_entry_emt, and making that
    type visible to the mtrr users is cumbersome and unneeded.
    
    Moving epte_get_entry_emt out of mtrr.c requires making public the
    helper that gets the MTRR type of an address from the mtrr state.
    While there, rename the function to start with the mtrr prefix, like
    other mtrr-related functions.
    
    While there, also fix some of the function parameter types.
    
    No functional change intended.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 14:15:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 14:15:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145210.267167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luyEg-0003zy-Sn; Sun, 20 Jun 2021 14:15:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145210.267167; Sun, 20 Jun 2021 14:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luyEg-0003zr-P1; Sun, 20 Jun 2021 14:15:42 +0000
Received: by outflank-mailman (input) for mailman id 145210;
 Sun, 20 Jun 2021 14:15:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyEf-0003zh-P3; Sun, 20 Jun 2021 14:15:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyEf-0008CP-JC; Sun, 20 Jun 2021 14:15:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyEf-0003Td-BI; Sun, 20 Jun 2021 14:15:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luyEf-0003Pm-An; Sun, 20 Jun 2021 14:15:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B2V7+au4jk7qYNVkXLe+wBrzd+lsMs0A+1IDvJjVIQs=; b=uT0Cjp4zKlkjT2xSb+Joc7ZJ3p
	Xzq10FL+ALHcRtGcFA/akVOezBeowTC7T1SdqwwvxcNRmIEFngvKrBrpG+IW1TCN5Q0HS/LP3Na+t
	IkisU785ChIP/OUw7Bf6kblrot0Qxf9uzJyWsva3XayVZdLDgQG04LX5z0pmyrMw/gn0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162904-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162904: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-amd64:<job status>:broken:regression
    libvirt:build-amd64-pvops:<job status>:broken:regression
    libvirt:build-amd64-xsm:<job status>:broken:regression
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-armhf-pvops:<job status>:broken:regression
    libvirt:build-i386:<job status>:broken:regression
    libvirt:build-i386-pvops:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-i386:host-install(4):broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-amd64-xsm:host-install(4):broken:regression
    libvirt:build-amd64:host-install(4):broken:regression
    libvirt:build-armhf-pvops:host-install(4):broken:regression
    libvirt:build-amd64-pvops:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 14:15:41 +0000

flight 162904 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-i386                    4 host-install(4)        broken REGR. vs. 151777
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-amd64                   4 host-install(4)        broken REGR. vs. 151777
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  345 days
Failing since        151818  2020-07-11 04:18:52 Z  344 days  337 attempts
Testing same since   162898  2021-06-19 04:19:58 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

(No revision log; it would be 62906 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 15:00:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 15:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145216.267181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luyvj-0000PH-EY; Sun, 20 Jun 2021 15:00:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145216.267181; Sun, 20 Jun 2021 15:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1luyvj-0000PA-Aw; Sun, 20 Jun 2021 15:00:11 +0000
Received: by outflank-mailman (input) for mailman id 145216;
 Sun, 20 Jun 2021 15:00:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyvi-0000P0-6G; Sun, 20 Jun 2021 15:00:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyvh-0000VK-UM; Sun, 20 Jun 2021 15:00:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1luyvh-0004Ri-Lg; Sun, 20 Jun 2021 15:00:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1luyvh-0004aw-L8; Sun, 20 Jun 2021 15:00:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EsuEx75nB+UO6eUnK1wQNFTTwqjmS+8ypaWBrptgP0o=; b=AavS/0bKgTm1rmgmLGFmu8MHrH
	c+/djt761d8S1RK0V0DAHvKsOCIIQLMylopCNwmmM+6Gyh8Oh/k+cNyJy3NcVhEWvS4kjLYw86IH6
	FIrWK/BjYpud8HM1ugE56KuOgxTTlu1are+ZDSJdjQ8/zpPtmUex0BsL2zIngOPCxSGg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162897: trouble: blocked/broken/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-pair:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-pygrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl:<job status>:broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:host-install/src_host(6):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-pair:host-install/dst_host(7):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3ccf6cd0e3e1dfd663814640b3b18b55715d7a75
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 15:00:09 +0000

flight 162897 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162897/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair      6 host-install/src_host(6) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-multivcpu  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-pair      7 host-install/dst_host(7) broken blocked in 152631
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)  broken blocked in 152631
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)    broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-i386-pvgrub  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-rtds      5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-pvshim    5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)    broken blocked in 152631
 test-amd64-coresched-amd64-xl  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-libvirt      5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-xsm  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken blocked in 152631
 test-amd64-amd64-pygrub       5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl           5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-shadow    5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-xsm       5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-credit1   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-vhd  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qcow2     5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-credit2   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)  broken blocked in 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken blocked in 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                3ccf6cd0e3e1dfd663814640b3b18b55715d7a75
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  304 days
Failing since        152659  2020-08-21 14:07:39 Z  303 days  557 attempts
Testing same since   162897  2021-06-19 02:19:17 Z    1 days    1 attempts

------------------------------------------------------------
540 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-xl broken
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step build-i386 host-install(4)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step build-i386-xsm host-install(4)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step build-i386-pvops host-install(4)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)

Not pushing.

(No revision log; it would be 173607 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 16:17:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 16:17:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145228.267195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv08i-0007sa-A4; Sun, 20 Jun 2021 16:17:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145228.267195; Sun, 20 Jun 2021 16:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv08i-0007sT-75; Sun, 20 Jun 2021 16:17:40 +0000
Received: by outflank-mailman (input) for mailman id 145228;
 Sun, 20 Jun 2021 16:17:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv08h-0007sJ-9m; Sun, 20 Jun 2021 16:17:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv08h-0002FX-2j; Sun, 20 Jun 2021 16:17:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv08g-00066S-Or; Sun, 20 Jun 2021 16:17:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lv08g-00014p-OJ; Sun, 20 Jun 2021 16:17:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t4hnzxpBEcoXz+wBL3HG6kdz2g8v8ONncewcGF/B6Eg=; b=OJVMR4a7dZ4/BHC+jMxdNAfoox
	KXi41QcyCgRIvpEwcAsUNpJoZfDiFeTni0MlXlfxSz0AOW8g4D5+sqY5+dD4qS6J+GDsTRLvDJG+K
	gyVbRtuD5SxmCiKJnnvxSVQ8r/L6yTuwwtvEW4pr2GNbuqg1bb+/k9+7yhJNtNbV6tZE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162906: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a63914d3f603580e5aeceb5edbafe56688210141
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 16:17:38 +0000

flight 162906 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162906/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162359
 build-amd64                   4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a63914d3f603580e5aeceb5edbafe56688210141
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   16 days
Failing since        162368  2021-06-04 15:42:59 Z   16 days   35 attempts
Testing same since   162900  2021-06-19 07:19:12 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

(No revision log; it would be 2173 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 17:59:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 17:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145236.267215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv1j7-0000Fy-Mb; Sun, 20 Jun 2021 17:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145236.267215; Sun, 20 Jun 2021 17:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv1j7-0000Fr-JG; Sun, 20 Jun 2021 17:59:21 +0000
Received: by outflank-mailman (input) for mailman id 145236;
 Sun, 20 Jun 2021 17:59:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv1j5-0000Fg-Iv; Sun, 20 Jun 2021 17:59:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv1j5-0003va-6w; Sun, 20 Jun 2021 17:59:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv1j4-0008Ke-UY; Sun, 20 Jun 2021 17:59:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lv1j4-0005yJ-U2; Sun, 20 Jun 2021 17:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=14yhH/lw/ZNAfEtimfDsCZlsk4guRqhJbof5TcI1md0=; b=jX14/1GR/EpIyVfT2VirUI6ogW
	6Uqnel8W0JIcu5cT1F+xLUVQs2uXPZx9JmW70iPzt0Hu2Rm8VjPUzIoME+quNuwqlzI2nLpG5SdoE
	kEHHVc+GfpBXBYsjzGQnLsG/qsHxKTNJ0Jn6NJk8Isx+Pdk0X8zsc+hytq+/6pagH5fU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162901: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-5:<job status>:broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-i386-xl:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-coresched-i386-xl:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-i386-pvgrub:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:host-install(5):broken:heisenbug
    xen-unstable:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-multivcpu:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-amd64-pair:host-install/dst_host(7):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-credit1:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-coresched-amd64-xl:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-xtf-amd64-amd64-1:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit1:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-xtf-amd64-amd64-5:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-unstable:test-xtf-amd64-amd64-4:host-install(5):broken:heisenbug
    xen-unstable:test-xtf-amd64-amd64-2:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvshim:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-livepatch:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-examine:host-install:broken:heisenbug
    xen-unstable:test-arm64-arm64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-thunderx:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-examine:host-install:broken:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 17:59:18 +0000

flight 162901 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-arm64-arm64-xl             <job status>                 broken
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-thunderx    <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>   broken
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 build-amd64-prev                <job status>                 broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 build-i386-prev                 <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-livepatch      <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-xtf-amd64-amd64-1          <job status>                 broken
 test-xtf-amd64-amd64-2          <job status>                 broken
 test-xtf-amd64-amd64-3          <job status>                 broken
 test-xtf-amd64-amd64-4          <job status>                 broken
 test-xtf-amd64-amd64-5          <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162533
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162533
 build-i386                    4 host-install(4)        broken REGR. vs. 162533
 build-i386-prev               4 host-install(4)        broken REGR. vs. 162533
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 162533
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 162533
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 162894
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 162894
 test-amd64-i386-libvirt         <job status>                 broken  in 162894
 test-amd64-i386-libvirt-xsm     <job status>                 broken  in 162894
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 162894
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 162894
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 162894
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 162894
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 162894
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 162894
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 162894
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 162894
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 162894
 test-amd64-i386-xl-raw          <job status>                 broken  in 162894
 test-amd64-coresched-i386-xl    <job status>                 broken  in 162894
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail in 162885 REGR. vs. 162533
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail in 162894 REGR. vs. 162533

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-raw       5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-coresched-i386-xl 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-libvirt      5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-libvirt-xsm  5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 162894 pass in 162885
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 162894 pass in 162885
 test-amd64-amd64-xl-shadow    5 host-install(5)          broken pass in 162885
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 162885
 test-amd64-amd64-xl-credit2   5 host-install(5)          broken pass in 162885
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 162885
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken pass in 162885
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken pass in 162894
 test-amd64-amd64-xl-qemuu-ws16-amd64  5 host-install(5)  broken pass in 162894
 test-amd64-amd64-qemuu-nested-intel  5 host-install(5)   broken pass in 162894
 test-amd64-amd64-i386-pvgrub  5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qcow2     5 host-install(5)          broken pass in 162894
 test-xtf-amd64-amd64-3        5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken pass in 162894
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)       broken pass in 162894
 test-arm64-arm64-xl-credit2   5 host-install(5)          broken pass in 162894
 test-arm64-arm64-xl-seattle   5 host-install(5)          broken pass in 162894
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)         broken pass in 162894
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)         broken pass in 162894
 test-amd64-amd64-xl-multivcpu  5 host-install(5)         broken pass in 162894
 test-amd64-amd64-libvirt-xsm  5 host-install(5)          broken pass in 162894
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)     broken pass in 162894
 test-amd64-amd64-xl-qemuu-ovmf-amd64  5 host-install(5)  broken pass in 162894
 test-amd64-amd64-pair         6 host-install/src_host(6) broken pass in 162894
 test-amd64-amd64-pair         7 host-install/dst_host(7) broken pass in 162894
 test-amd64-amd64-xl-qemut-ws16-amd64  5 host-install(5)  broken pass in 162894
 test-amd64-amd64-libvirt-vhd  5 host-install(5)          broken pass in 162894
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken pass in 162894
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken pass in 162894
 test-amd64-amd64-xl-credit1   5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken pass in 162894
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)     broken pass in 162894
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken pass in 162894
 test-amd64-coresched-amd64-xl  5 host-install(5)         broken pass in 162894
 test-amd64-amd64-pygrub       5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken pass in 162894
 test-xtf-amd64-amd64-1        5 host-install(5)          broken pass in 162894
 test-arm64-arm64-xl-credit1   5 host-install(5)          broken pass in 162894
 test-arm64-arm64-xl-xsm       5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken pass in 162894
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken pass in 162894
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken pass in 162894
 test-xtf-amd64-amd64-5        5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemut-win7-amd64  5 host-install(5)  broken pass in 162894
 test-amd64-amd64-xl           5 host-install(5)          broken pass in 162894
 test-xtf-amd64-amd64-4        5 host-install(5)          broken pass in 162894
 test-xtf-amd64-amd64-2        5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-qemuu-win7-amd64  5 host-install(5)  broken pass in 162894
 test-amd64-amd64-xl-pvshim    5 host-install(5)          broken pass in 162894
 test-amd64-amd64-livepatch    5 host-install(5)          broken pass in 162894
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)       broken pass in 162894
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken pass in 162894
 test-amd64-amd64-xl-xsm       5 host-install(5)          broken pass in 162894
 test-amd64-amd64-examine      5 host-install             broken pass in 162894
 test-arm64-arm64-libvirt-xsm  5 host-install(5)          broken pass in 162894
 test-arm64-arm64-xl-thunderx  5 host-install(5)          broken pass in 162894
 test-arm64-arm64-xl           5 host-install(5)          broken pass in 162894
 test-arm64-arm64-examine      5 host-install             broken pass in 162894
 test-amd64-i386-qemut-rhel6hvm-intel 14 guest-start/redhat.repeat fail in 162885 pass in 162894

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 162885 like 162533
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 162885 never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 162885 never pass
 test-amd64-i386-xl-pvshim    14 guest-start          fail in 162885 never pass
 test-amd64-i386-libvirt     15 migrate-support-check fail in 162885 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 162885 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 162885 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop  fail in 162894 like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail in 162894 like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop  fail in 162894 like 162533
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 162894 like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 162894 like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 162894 like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop  fail in 162894 like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 162894 like 162533
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 162894 never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-xl         15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 162894 never pass
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 162894 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 162894 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 162894 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 162894 never pass

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   12 days
Failing since        162556  2021-06-08 22:39:08 Z   11 days   17 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       broken  
 test-xtf-amd64-amd64-2                                       broken  
 test-xtf-amd64-amd64-3                                       broken  
 test-xtf-amd64-amd64-4                                       broken  
 test-xtf-amd64-amd64-5                                       broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        broken  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   broken  
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-job test-arm64-arm64-xl broken
broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-thunderx broken
broken-job test-arm64-arm64-xl-xsm broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job build-amd64-prev broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job build-i386-prev broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-xtf-amd64-amd64-4 broken
broken-job test-xtf-amd64-amd64-5 broken
broken-step build-i386-pvops host-install(4)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-xtf-amd64-amd64-3 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-arm64-arm64-xl-credit2 host-install(5)
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-xtf-amd64-amd64-1 host-install(5)
broken-step test-arm64-arm64-xl-credit1 host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step build-i386-xsm host-install(4)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step build-i386 host-install(4)
broken-step test-xtf-amd64-amd64-5 host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step build-i386-prev host-install(4)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-xtf-amd64-amd64-4 host-install(5)
broken-step test-xtf-amd64-amd64-2 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-amd64-livepatch host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-examine host-install
broken-step test-arm64-arm64-libvirt-xsm host-install(5)
broken-step test-arm64-arm64-xl-thunderx host-install(5)
broken-step test-arm64-arm64-xl host-install(5)
broken-step test-arm64-arm64-examine host-install
broken-step build-amd64-prev host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-coresched-i386-xl broken

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 19:30:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145247.267229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv39U-00015z-Lg; Sun, 20 Jun 2021 19:30:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145247.267229; Sun, 20 Jun 2021 19:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv39U-00015s-HI; Sun, 20 Jun 2021 19:30:40 +0000
Received: by outflank-mailman (input) for mailman id 145247;
 Sun, 20 Jun 2021 19:30:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv39T-00015i-QR; Sun, 20 Jun 2021 19:30:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv39T-0005UA-I7; Sun, 20 Jun 2021 19:30:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv39T-0001s1-5i; Sun, 20 Jun 2021 19:30:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lv39T-00005K-5G; Sun, 20 Jun 2021 19:30:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w7nMRuYH5bdEIFh+Evq+KhjW//Lbnk51QTjrYkiQsf8=; b=UtqJejqumqMG8E2h+C4WKPb7zf
	Ua3K1eH3YVJcc482yEEptf8vcyJSukoZSK/LKlrEi9I/Bx9GY7sC7BkY044r4GSpKDmrOm+fcPljX
	EXdbzK1dGmCblkC1XcDHm5gaAt8wf8OYVCL3t3lwo9Zr9MbYnLZPLoAz7JGqKcGJdOEs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162910: trouble: blocked/broken/pass
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-pvops:<job status>:broken:regression
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-amd64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64:host-install(4):broken:regression
    linux-linus:build-amd64-xsm:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=913ec3c22ef425d63dd0bc81fb008ce7f9bcb07b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 19:30:39 +0000

flight 162910 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162910/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64                   4 host-install(4)        broken REGR. vs. 152332
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                913ec3c22ef425d63dd0bc81fb008ce7f9bcb07b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  323 days
Failing since        152366  2020-08-01 20:49:34 Z  322 days  549 attempts
Testing same since   162910  2021-06-20 11:00:35 Z    0 days    1 attempts

------------------------------------------------------------
6198 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1688088 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 20 22:36:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Jun 2021 22:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145275.267285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv63F-0000bX-TE; Sun, 20 Jun 2021 22:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145275.267285; Sun, 20 Jun 2021 22:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv63F-0000bQ-Ps; Sun, 20 Jun 2021 22:36:25 +0000
Received: by outflank-mailman (input) for mailman id 145275;
 Sun, 20 Jun 2021 22:36:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv63E-0000bG-BU; Sun, 20 Jun 2021 22:36:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv63D-000088-V0; Sun, 20 Jun 2021 22:36:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv63D-0005tV-NN; Sun, 20 Jun 2021 22:36:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lv63D-0001ml-Mo; Sun, 20 Jun 2021 22:36:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OtuunT43T0tT9ov/ia7tpyYciRYKGUSwYo5I0oXh5Rg=; b=Ua1vj7tcwTk/65mOKr9tW/md0P
	9rDaCdLdbq1RvfSyATRlzz1JUuO6AI+Rad21rOUCE5MDDjo2wrcSNELq/78Xn2aJsjfce55dEIj6q
	eUV+LKV9BI6VPl8SH1xlSLZtnetfglA81BAx+T1Fp5/VQPh4q5ztA9o4oxcuLtUeAMjc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162912: trouble: blocked/broken/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-install(4):broken:regression
    qemu-mainline:build-amd64-xsm:host-install(4):broken:regression
    qemu-mainline:build-amd64-pvops:host-install(4):broken:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=3ccf6cd0e3e1dfd663814640b3b18b55715d7a75
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Jun 2021 22:36:23 +0000

flight 162912 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162912/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-amd64                   4 host-install(4)        broken REGR. vs. 152631
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                3ccf6cd0e3e1dfd663814640b3b18b55715d7a75
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  304 days
Failing since        152659  2020-08-21 14:07:39 Z  303 days  558 attempts
Testing same since   162897  2021-06-19 02:19:17 Z    1 days    2 attempts

------------------------------------------------------------
540 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-armhf-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

(No revision log; it would be 173607 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 00:43:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 00:43:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145293.267305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv82J-0004Ka-Vc; Mon, 21 Jun 2021 00:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145293.267305; Mon, 21 Jun 2021 00:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lv82J-0004KT-SY; Mon, 21 Jun 2021 00:43:35 +0000
Received: by outflank-mailman (input) for mailman id 145293;
 Mon, 21 Jun 2021 00:43:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv82H-0004KJ-O6; Mon, 21 Jun 2021 00:43:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv82H-0002pG-I5; Mon, 21 Jun 2021 00:43:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lv82H-0000DU-7T; Mon, 21 Jun 2021 00:43:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lv82H-00086V-6z; Mon, 21 Jun 2021 00:43:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KNoYSiXIafQ1fSLXhmtfSL+DG3HOw/0bYf/SS04Mvcc=; b=VaXf/UDw3R34pKIwJ4TEsw4buN
	wc93GM4RExlROXiizursP4Rk5/XmEWPWnpYi05BqK21Frgo+FUfJL4AzRwcgavnfA5q82yETUCFSt
	Zyqdepub9FyiMKy2Sxb/PyuNomd8rVBptZPqX8VsV7cX522HtEO3EHWV8Jk3EfD2gMa4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162914-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162914: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a63914d3f603580e5aeceb5edbafe56688210141
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 00:43:33 +0000

flight 162914 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162914/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162359
 build-amd64                   4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a63914d3f603580e5aeceb5edbafe56688210141
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   16 days
Failing since        162368  2021-06-04 15:42:59 Z   16 days   36 attempts
Testing same since   162900  2021-06-19 07:19:12 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

(No revision log; it would be 2173 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 03:01:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 03:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145305.267331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvABn-0001cr-8G; Mon, 21 Jun 2021 03:01:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145305.267331; Mon, 21 Jun 2021 03:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvABn-0001cj-2F; Mon, 21 Jun 2021 03:01:31 +0000
Received: by outflank-mailman (input) for mailman id 145305;
 Mon, 21 Jun 2021 03:01:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvABl-0001cZ-Rj; Mon, 21 Jun 2021 03:01:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvABl-0006di-Jv; Mon, 21 Jun 2021 03:01:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvABl-0003Cz-Bi; Mon, 21 Jun 2021 03:01:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvABl-0002uE-BD; Mon, 21 Jun 2021 03:01:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wM2NMCHdnsNkdhMzfCUr0uL7p2WTKM/iJzKFytZG3FA=; b=2nhEvdqYI59iswueWEIsHiNkJ+
	b61YUMIyvOWAPPMCB8wi9RLhLhoeekRDxfi5jKhAZO5+u08EDP54+g+h3x4dfGv8GZohfQI/ALesz
	y5ZNdEBZBwf0zI4BSNpl1amAIsdpOzJDYTo4KlDOE/TpMeT+4immtza6uPFVJIxchWHY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162915-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162915: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:host-install(4):broken:regression
    xen-unstable:build-amd64:host-install(4):broken:regression
    xen-unstable:build-amd64-pvops:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 03:01:29 +0000

flight 162915 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162915/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-amd64                   4 host-install(4)        broken REGR. vs. 162533
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162533
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-arm64                   4 host-install(4)        broken REGR. vs. 162533
 build-i386                    4 host-install(4)        broken REGR. vs. 162533
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162533
 build-i386-prev               4 host-install(4)        broken REGR. vs. 162533
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 162533
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   13 days
Failing since        162556  2021-06-08 22:39:08 Z   12 days   18 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:00:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 06:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145314.267351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvCyR-0000RW-9C; Mon, 21 Jun 2021 05:59:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145314.267351; Mon, 21 Jun 2021 05:59:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvCyR-0000RP-5C; Mon, 21 Jun 2021 05:59:55 +0000
Received: by outflank-mailman (input) for mailman id 145314;
 Mon, 21 Jun 2021 05:59:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvCyP-0000RF-IX; Mon, 21 Jun 2021 05:59:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvCyP-0001Yp-AT; Mon, 21 Jun 2021 05:59:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvCyP-00074Y-0V; Mon, 21 Jun 2021 05:59:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvCyO-0000yg-WF; Mon, 21 Jun 2021 05:59:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7ztEicOpI3FCH2eaa4/4CZflatttORAZgipaW9s0FVs=; b=GKdgLF+QMk7lqvBjDyfPX9yX9x
	pEMLsvN0/VKh0HDMuMWZNIFXHGx6NARdmUogT7/p85BjvxAp/MbX00iyPCldFTLIrFwL/jhFcxhlF
	9lt1Ro59g6XWThoaiKo71l3cjIQHJ28xMBMWJagTwgkcKTezSSUh35ISsn2oEVp+6J84=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162917-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162917: trouble: blocked/broken/pass
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-pvops:<job status>:broken:regression
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-amd64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64:host-install(4):broken:regression
    linux-linus:build-amd64-xsm:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=cba5e97280f53ec7feb656fcdf0ec00a5c6dd539
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 05:59:52 +0000

flight 162917 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162917/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64                   4 host-install(4)        broken REGR. vs. 152332
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a

version targeted for testing:
 linux                cba5e97280f53ec7feb656fcdf0ec00a5c6dd539
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  324 days
Failing since        152366  2020-08-01 20:49:34 Z  323 days  550 attempts
Testing same since   162917  2021-06-20 19:40:36 Z    0 days    1 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1688742 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:18:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 06:18:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145322.267371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDGd-0002sU-1j; Mon, 21 Jun 2021 06:18:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145322.267371; Mon, 21 Jun 2021 06:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDGc-0002sN-U7; Mon, 21 Jun 2021 06:18:42 +0000
Received: by outflank-mailman (input) for mailman id 145322;
 Mon, 21 Jun 2021 06:18:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvDGc-0002sH-8q
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 06:18:42 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0ffcd80-629c-42c7-b8da-f4cc5f666608;
 Mon, 21 Jun 2021 06:18:39 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2059.outbound.protection.outlook.com [104.47.13.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-29-dysjftUcPm-Z3ApCW7QsZg-1; Mon, 21 Jun 2021 08:18:37 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB7375.eurprd04.prod.outlook.com (2603:10a6:800:1a8::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 21 Jun
 2021 06:18:36 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 06:18:35 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0061.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:b4::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 06:18:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0ffcd80-629c-42c7-b8da-f4cc5f666608
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624256318;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bK/ws4KH/gYxybM6pzTLKMKor186mLSooFL+vjHghz0=;
	b=DuwkRqWNqyIKRwETJzP50X4v1h83r19E1TJouhs85dFpNAFTJ6HnRHxTz2xt6sqm7PztSR
	VwXEmj1oMB4sFWPEG6n1kRSd2Um+MwEzDaFYR/RPCzH9QY2Lkqc2PYL+54dOxeBKF+jXZh
	fXjJZjBXCyFdbYuZUxBDCKtLCHm+LCw=
X-MC-Unique: dysjftUcPm-Z3ApCW7QsZg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CyyMR3E7w6gkWAms95h5uW9WNpq3q4whQ0l6qCcIC2m1riijgx6sUlsGlZWAbDA2oSbsM7KFrZRO3s/PW3gVEez/Em0F/WRoIuO3PE0xnG6bKP/Jn4UTxT8rDh2N9bUwfWkVAYKwt365XJYhKlnRdQl4CadMgfgufdgvehK1dzzWQBFaXnXYUgwxYmGS5NAw37MjheV0a70O3Fmpu6llEZeWRG/niMBMsZSHkDu8OeQE/a4RB90sw+yWCzkrJ+dIIh1HzfZTLnAatvYogPpnSuaTKF7cP0i0kHMyybOrtO8j/GMsg84u3SypjfkMyY+RvR7KBTIbYy8ej8C9RNLbjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kltkVFRlB8kDK0sQj9Q8oAUvfPDsLIYipRkAA2WSHZc=;
 b=iQJ+rpHFIudDxv7oZycBGxaOUPRJhD6BdzwhqMigDZ7jWR4fp+5wGhJogP3/7FqR+wtU5g5WjpbdgzmpeJ7VRn1YWAtWuAuqfGbY9uCL9Q2S3SlgxHwY1KkcH4bBmJqCCdLb8GMGKaYAN6F1JWmGPkWxlYAMAQYfGmU9TGoUwHWWSG+9eT15nBj76IXjUg5vfwPmlRpv/XgFiP84t0ZUUPBhd3mDl7Ea/D5TL+3vbjS/5q2vSJC8xjk6ic0UQQ6jSPRxZ8jGDGQxU4U8DnXAhCUeUPOZ0HxvUYqLYLUKhb2z7e/4TMefG+XaP093mIApxby0WbjlEgCiYGWxMHSiEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Igor Druzhinin <igor.druzhinin@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
 <9fdd9760-6de7-be4c-a75b-0d3b1cc10a38@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ee781cf2-47c2-0b5c-4f21-37ceaed283af@suse.com>
Date: Mon, 21 Jun 2021 08:18:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9fdd9760-6de7-be4c-a75b-0d3b1cc10a38@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0061.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::6) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ecfe30d7-3835-4b5d-1ff2-08d9347c66f1
X-MS-TrafficTypeDiagnostic: VE1PR04MB7375:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB7375491AC7DCBC6E5B6350C0B30A9@VE1PR04MB7375.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gmL26AYMYIUa6R0LDlf9I8262upnRNLo2Z9RJqkij+RqEhkVEWtbBjKe4ItbW+RI7U2uUHykCydQ0g+IQA6KR3WnsHlE2y8B0dJM0Hk6bDu9A9vswXL6KxE01Jq81XXePvfLRJLnkDnn4CfwI9bALgoDFCGOWvqa3FePCIrLJAJ27OopmjVVF6CPGYGFbpsfrq/NthVn+F7Kqnep+TSOMyzCtNyoceK51cIsmPRiMR4LiaMwGUh7JdD3nVjjPbF9Q/EpEkW904FfxRqLlfL1LnmCCMCJOtKgYnvapibV9hmQaqJ/7R6lqxCCCzCtpToQQ8v3jW60yXLFg0J9ivYI3Yi+xuxyfhYzux9JxpTzoYWrBNCvS0GX8XA3DUWJgC/EQq5S1BepTB1hGIrCBoX+D5fBTtT/QSHDXnq4N9EDb69AMIMC8imJq2TVeUEDYRPBJUYEDdMLVgFIyTsVzsisofu7Whi4ei9itwHskCTZkN47zOhfR8yRTcG16L09CcowcsidI1nK8LpmS7pyftS/hKrVeVI1agUDj5JlzxUlGPlfCpfBegGIQpZypvkteHSPgrP+si4XVSndfv4V+fvIXj9mJTQHisHJoWSheeur5RMItvWGwkSlTTfcGgcYbUngi1v4pOH+fH1Upr2zTyz16go03NG1AhHOg+k956MzpZMT03BzcZ5rz78YY8biPWtc
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(39850400004)(346002)(366004)(136003)(396003)(6486002)(2616005)(956004)(4326008)(66556008)(66476007)(5660300002)(6916009)(38100700002)(8676002)(36756003)(53546011)(16526019)(54906003)(2906002)(83380400001)(31696002)(186003)(26005)(66946007)(31686004)(316002)(8936002)(478600001)(86362001)(16576012)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?TUJ9cVfpsClveCJnoSGsm8OvgRvPQqJhDiQ9KCYrPpX2w1eXJEeHi7In+wuX?=
 =?us-ascii?Q?bFRxkPZ4/6skHT5hsT3X2LbzioKlWSGbRMC9c8hmJu9AXPSApmXJFbSbS7mv?=
 =?us-ascii?Q?mMCPpm8Sj9l2hVtEytLkf5pQvwoLEVtYlrFYjPcDulBxpQLCXV3BPLaVtZFj?=
 =?us-ascii?Q?+IAQKF3U7/A9SzDYJL9iZuq1RKBkFJ0K0wwfmKuy3Eba4O1x/llRpa1A4m5K?=
 =?us-ascii?Q?f4yZRFFr00lLDo1An94zo1WMpG8IgxE2W9FfxWPCOQuZCqhKneOWPQ48Zasy?=
 =?us-ascii?Q?sO/bQNja9LZQlTMnGuata2jvHUiqlRGlJlk6dBBcbGmdwk4Hw6awLijBjAYC?=
 =?us-ascii?Q?sVvPgykDkW/5hg6JzLeY9NJHcOH8umK50vEYDkLM16PYFUVz3AjdJeUj7cdk?=
 =?us-ascii?Q?TTL1gGcPbRBbcOxjX9FUfH+uc4sdEWGtV4+DHCesNjfol+C84wkJSVSAQKQ1?=
 =?us-ascii?Q?iLAE6I5fTJ/HKrWVo7suh+C62FrG9QrwOSOj/wQCjhGPPcmo4LC94X4K1QnE?=
 =?us-ascii?Q?TsaASprP5a7paRqHjJZXa0kjCi82QeDKWO/sNLwGtx4zkDn+c5pE7eQEWd3M?=
 =?us-ascii?Q?eQTPxBmQenhsL1cs0Wg6vWexw9lUXl4rydonSW9e6astz+lCFezARJ+N5eoJ?=
 =?us-ascii?Q?q/0xD3icZkkSbYxSD6WS28kzZd+EBDvwuRHEeNsrCcetyIpEUAMUePOlfP0H?=
 =?us-ascii?Q?sDVy1RMSjb0eDIBqBFiH6nGoq7hDk0sA+ik7MSuD5Yzgj/KBQ8a9FGGoSNu4?=
 =?us-ascii?Q?aCT2LH8seJTnWIMhhNqiPICkobceCBT7TW+8mwFewWtZE9FAIhs9JyMYynjM?=
 =?us-ascii?Q?h1IB4IM6TuSsHZ174jXlM9k+Z5+NuHC/06x9IWJfyuyKkLZiAUMgp31KSB/i?=
 =?us-ascii?Q?qoX0gBxNHbOHc3NvrbBgwYocoxmSnKh2kKu6ysZLzYllbJwT4SQX+LoDoPL6?=
 =?us-ascii?Q?/TFAIm90b2WokTl2/mm3iIbmfxibcwemLAI1ldbbA0ZxynyROiQyrYYVXPCz?=
 =?us-ascii?Q?4nO3h2zgB9dkimXnN8+EYIp25L8BfpJrIvSQ3QPhR8AbNfg8LXal9R+4FBx0?=
 =?us-ascii?Q?+Pyhb/KU5OeEvy7mhmv/4MRjl0p0Ht+FD8aqxqxCbqpEAngM5QXgGRsWvtLw?=
 =?us-ascii?Q?WuDIeq1YHM2HjEn5CKWD72ZHwKSz8qtzN4s0z0ssKwFXGABKvzgp6/sXBjhT?=
 =?us-ascii?Q?AmI5yPs07+7QD+ut9FDyYenAkY9vybGiRPDNRWM06vmXKqhbAqHBMruImR0y?=
 =?us-ascii?Q?P56o7WCgQzrt8bXWYqyS2tVspo8KVQAiJUkEUET5gvtQr+kN7KGHGv2jGFBV?=
 =?us-ascii?Q?ANQbQeXaWS99Bhnc/y0pmDB1?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ecfe30d7-3835-4b5d-1ff2-08d9347c66f1
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 06:18:35.8512
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VB3e8aOGLo8APMG7ytqA9OdMCRx8S7Ik+n05/D9c1rGZVt6Spq9+dugBqQax5TglTZENYOF4UoKLGy8/B7Qqhg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7375

On 18.06.2021 18:32, Andrew Cooper wrote:
> On 18/06/2021 17:00, Jan Beulich wrote:
>> At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
>> address range") documentation correctly stated that the range was
>> completely fixed. For Fam17 and newer, it lives at the top of physical
>> address space, though.
>>
>> To correctly determine the top of physical address space, we need to
>> account for their physical address reduction, hence the calculation of
>> paddr_bits also gets adjusted.
>>
>> While for paddr_bits < 40 the HT range is completely hidden, there's no
>> need to suppress the range insertion in that case: It'll just have no
>> real meaning.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Really, this ought to be reported by Igor.  He did all the hard work.

Sure, changed.

>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -349,13 +349,17 @@ void __init early_cpu_init(void)
>> 
>>  	eax = cpuid_eax(0x80000000);
>>  	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
>> +		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
>>  		eax = cpuid_eax(0x80000008);
>> -		paddr_bits = eax & 0xff;
>> +
>> +		paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
> 
> While I can see the attraction of editing paddr_bits, I think it will
> break the emulated pagewalk (at least).
> 
> With SME active, the address space reduction applies to host physical
> addresses only, not guest physical.  An HVM guest still needs to see
> the original paddr_bits, and the emulated pagewalk needs to use this
> for reserved bit calculations.
> 
> i.e. under NPT, you can still have a working 2^48 guest physical address
> space despite the host physical address space being limited to 2^43 by
> encryption being active.

Which means we may need to split the variable into paddr_bits
(calculated as above) and gaddr_bits (left at what paddr_bits has
been so far). However, isn't that what hap_paddr_bits is already
for, while for shadow mode it still needs to be the "adjusted" way?
We'd then simply not fall back to the "adjusted" paddr_bits, but to
the original one.

Another aspect I was wondering about is whether

 		if (paddr_bits > PADDR_BITS)
 			paddr_bits = PADDR_BITS;

should apply before or after subtracting the value from
80000008.EBX. I was first inclined to make it the more relaxed
way (applying before reduction), but then thought I'd first leave
it as is now, on the basis that the PTE layout doesn't change, and
hence 52 remains the limit for the full address.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:20:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 06:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145328.267381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDIj-0004FD-Gr; Mon, 21 Jun 2021 06:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145328.267381; Mon, 21 Jun 2021 06:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDIj-0004F6-Dr; Mon, 21 Jun 2021 06:20:53 +0000
Received: by outflank-mailman (input) for mailman id 145328;
 Mon, 21 Jun 2021 06:20:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvDIi-0004F0-2C
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 06:20:52 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87f8be02-5eed-42c5-b700-e87e6d5882cb;
 Mon, 21 Jun 2021 06:20:50 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2052.outbound.protection.outlook.com [104.47.10.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-1-LDTNUoYjNcahE98F_2ShsQ-1;
 Mon, 21 Jun 2021 08:20:48 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4607.eurprd04.prod.outlook.com (2603:10a6:803:71::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.22; Mon, 21 Jun
 2021 06:20:47 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 06:20:47 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0013.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:52::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Mon, 21 Jun 2021 06:20:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87f8be02-5eed-42c5-b700-e87e6d5882cb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624256449;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ruhE0QKoRuCVYe7RS0x0yN4oNulKQ6DELES75MMD1vI=;
	b=ESy9RQjK93x3vH1GvbZvOdxlLwpDl0np1tAMGa3dimRh2HOfaUiSz6F3a7oidlodgzHv0v
	i9jYqXRj7dBvpZ9jBhK2rs4jrmIh/LJWX3F+DRglx8OiCGxZng4wtTC0Z9yoBB77YCFD4F
	jAhbDUI63GsRuuEU2Rgrjqi6v/s9Tbg=
X-MC-Unique: LDTNUoYjNcahE98F_2ShsQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DevzNIQ38Q58EImuhosbplUFEBF9r4E4Y4AcSVTg5Ll0/2dvNoKmD3dklvF6czrWULSXOTXdbYXVTDiiabBJghDOCGgkydUH5CGZfJLVlqDrsNTcgTRDR/FCHVoffV95wgDSWbRS2XT25in4aL4HzrU5BvZDALSGjknR5ildNhurHLkQSr6x+YF0DG/43qmhFL4C9sh1MkCWLKDtdjyUVibHrXkdUXoPZDSpMlDSMnT8k7dVLP+Nkyr9ppkOGgRuxzX1ACiUazMH4WHE/HKSc6gtGMnVwHOQlu3s6yQpVh0OaMuKnfp14mG/AYaS45XMK3cTuvhds+P3MKlIdyJX9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ruhE0QKoRuCVYe7RS0x0yN4oNulKQ6DELES75MMD1vI=;
 b=Knw5IE/4g7Yj8XOU5kLs/oo9qoGIUCseL7OJsSJPmLkdLJbo86wTZz0dsJsP+oDIwq7oDB6TGAOt32ljj1FQMZ01JNc6u+FF9F4l5H5kqFLLx3njq7eamoRdKTSi4Ug3wicz6Qp37ceLK0l3EGj1w5MzN8nvF3qiYObrex+2T5S6FJ+/Qnmtl8pT7EDUeUvBRgIqeGV8FfApniK8dsQYbN2zvvJFN2+nOY1vS7MEdR9GWFqRolvKFWDBjcmN+fVuMGvHcHwYYuuDgZrcxlvlcu+hDeIutVh226ms6sFGisPZHtXILPjM46p9Y54G23IkrUINCOzzz19ckxlcxzKY2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Igor Druzhinin <igor.druzhinin@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
 <9fdd9760-6de7-be4c-a75b-0d3b1cc10a38@citrix.com>
 <ee781cf2-47c2-0b5c-4f21-37ceaed283af@suse.com>
Message-ID: <6f210c4e-4f52-af78-2bff-bdc65da6abfe@suse.com>
Date: Mon, 21 Jun 2021 08:20:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <ee781cf2-47c2-0b5c-4f21-37ceaed283af@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0013.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:52::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6938720b-54fc-4339-ef2d-08d9347cb537
X-MS-TrafficTypeDiagnostic: VI1PR04MB4607:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4607E1CDB069C2A8A9FCDDEEB30A9@VI1PR04MB4607.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 21.06.2021 08:18, Jan Beulich wrote:
> Another aspect I was wondering about is whether
> 
>  		if (paddr_bits > PADDR_BITS)
>  			paddr_bits = PADDR_BITS;
> 
> should apply before or after subtracting the value from
> 80000008.EBX. I was first inclined to make it the more relaxed
> way (applying before reduction), but then thought I'd first leave
> it as is now, on the basis that the PTE layout doesn't change, and
> hence 52 remains the limit for the full address.

Actually that was the wrong way round - the PTE layout argument
suggests I should cap first, then subtract. Which will also simplify
what needs doing for hap_paddr_bits.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:29:42 2021
Subject: Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Igor Druzhinin <igor.druzhinin@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com>
 <9fdd9760-6de7-be4c-a75b-0d3b1cc10a38@citrix.com>
 <ee781cf2-47c2-0b5c-4f21-37ceaed283af@suse.com>
Message-ID: <3415ff03-fb0c-1aab-aa57-9c5e4bb1c8eb@suse.com>
Date: Mon, 21 Jun 2021 08:29:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <ee781cf2-47c2-0b5c-4f21-37ceaed283af@suse.com>

On 21.06.2021 08:18, Jan Beulich wrote:
> On 18.06.2021 18:32, Andrew Cooper wrote:
>> On 18/06/2021 17:00, Jan Beulich wrote:
>>> At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
>>> address range") documentation correctly stated that the range was
>>> completely fixed. For Fam17 and newer, it lives at the top of physical
>>> address space, though.
>>>
>>> To correctly determine the top of physical address space, we need to
>>> account for their physical address reduction, hence the calculation of
>>> paddr_bits also gets adjusted.
>>>
>>> While for paddr_bits < 40 the HT range is completely hidden, there's no
>>> need to suppress the range insertion in that case: It'll just have no
>>> real meaning.
>>>
>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> Really, this ought to be reported by Igor.  He did all the hard work.
>
> Sure, changed.
>
>>> --- a/xen/arch/x86/cpu/common.c
>>> +++ b/xen/arch/x86/cpu/common.c
>>> @@ -349,13 +349,17 @@ void __init early_cpu_init(void)
>>>
>>>  	eax = cpuid_eax(0x80000000);
>>>  	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
>>> +		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
>>>  		eax = cpuid_eax(0x80000008);
>>> -		paddr_bits = eax & 0xff;
>>> +
>>> +		paddr_bits = (eax & 0xff) - ((ebx >> 6) & 0x3f);
>>
>> While I can see the attraction of editing paddr_bits, I think it will
>> break the emulated pagewalk (at least).
>>
>> With SME active, the address space reduction applies to physical addresses
>> only, not guest physical.  An HVM guest still needs to see the original
>> paddr_bits, and the emulated pagewalk needs to use this for reserved bit
>> calculations.
>>
>> i.e. under NPT, you can still have a working 2^48 guest physical address
>> space despite the host physical address space being limited to 2^43 by
>> encryption being active.
>
> Which means we may need to split the variable into paddr_bits
> (calculated as above) and gaddr_bits (left at what paddr_bits has
> been so far). However, isn't that what hap_paddr_bits is already
> for, while for shadow mode it still needs to be the "adjusted" way?
> We'd then simply not fall back to the "adjusted" paddr_bits, but to
> the original one.
>
> Another aspect I was wondering about is whether
>
>  		if (paddr_bits > PADDR_BITS)
>  			paddr_bits = PADDR_BITS;
>
> should apply before or after subtracting the value from
> 80000008.EBX.

And of course this was meant to be 8000001F.EBX.

Jan

> I was first inclined to make it the more relaxed
> way (applying before reduction), but then thought I'd first leave
> it as is now, on the basis that the PTE layout doesn't change, and
> hence 52 remains the limit for the full address.
>
> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:46:17 2021
Subject: Re: [PATCH 0/6] xsm: refactoring xsm hooks
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <7219a9c8-f6c0-a86c-bf47-5b38c579170b@citrix.com>
 <b921c150-84f7-3ab3-1e4a-89d00725d9da@suse.com>
 <6ed12320-f454-2751-1a41-014eaa835762@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <529179db-ea47-72a8-ed71-604b1da2b4cd@suse.com>
Date: Mon, 21 Jun 2021 08:45:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6ed12320-f454-2751-1a41-014eaa835762@citrix.com>

On 18.06.2021 23:21, Andrew Cooper wrote:
> On 18/06/2021 12:48, Jan Beulich wrote:
>> On 18.06.2021 12:14, Andrew Cooper wrote:
>>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>>> Based on feedback from 2021 Xen Developers Summit the xsm-roles RFC
>>>> patch set is being split into two separate patch sets. This is the first
>>>> patch set and is focused purely on the clean up and refactoring of the
>>>> XSM hooks.
>>>>
>>>> This patch set refactors the xsm_ops wrapper hooks to use the
>>>> alternative_call infrastructure. Then proceeds to move and realign the
>>>> headers to remove the pseudo is/is not enabled implementation. The
>>>> remainder of the changes are clean up and removing no longer necessary
>>>> abstractions.
>>>>
>>>> <snip>
>>>>  51 files changed, 1309 insertions(+), 1413 deletions(-)
>>> The diffstat is great, but sadly CI says no.
>>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/323044913
>>>
>>> The problem is that ARM doesn't have alternative_vcall().  Given how
>>> much of an improvement this ought to be for hypercalls, I don't want to
>>> lose the vcalls.
>>>
>>> One option is to implement vcall() support on ARM, but that will leave
>>> new architectures (RISC-V on the way) with a heavy lift to get XSM to
>>> compile.
>>>
>>> Instead, what we want to do is make vcall() a common interface, falling
>>> back to a plain function pointer call for architectures which don't
>>> implement the optimisation.  So something like:
>>>
>>> 1) Introduce CONFIG_HAS_VCALL, which is selected by X86 only right now
>>> 2) Introduce xen/vcall.h which uses CONFIG_HAS_VCALL to either include
>>> asm/vcall.h or provide the fallback implementation
>> A word on the suggested names: The 'v' in alternative_vcall() stands for
>> "returning void", as opposed to alternative_call(). It's unclear to me
>> what you see it stand for in the names you propose.
>
> Urgh - yet another reason to prefer the Linux static_call() infrastructure.

Which iirc wasn't there yet when I wrote ours.

> Would a general alt_call name be acceptable?

Well, it seemed a little terse to me at the time, but I'm not opposed.
There's hardly anything else the "alt" there could stand for, I guess.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:53:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 06:53:32 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
 <3df8648d-f706-9684-4e6d-9438dc9f0894@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca65acbc-c57c-6056-7abd-299ce5ccd643@suse.com>
Date: Mon, 21 Jun 2021 08:53:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3df8648d-f706-9684-4e6d-9438dc9f0894@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0043.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:51::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cfbaf96d-ab69-4211-150f-08d93481413b
X-MS-TrafficTypeDiagnostic: VI1PR04MB7024:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB70240FE2961B1338C4F75BD4B30A9@VI1PR04MB7024.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cfbaf96d-ab69-4211-150f-08d93481413b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 06:53:20.1811
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: m0Aunz1n3EpvHbTL5fefF9YzHu1P2xl/9nwi5b4UOcyuHAM1bsCLxW8/Id1ygaQFxP8oMI3/lX8brOW3EDAPPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7024

On 18.06.2021 18:35, Daniel P. Smith wrote:
> On 6/18/21 7:53 AM, Andrew Cooper wrote:
>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>> @@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
>>>  	  If unsure, say Y.
>>>
>>>  config XSM_SILO
>>> -	def_bool y
>>> +	def_bool n
>>
>> I'm not sure we want to alter the FLASK/SILO defaults. SILO in
>> particular is mandatory on ARM, and without it, you're in a security
>> unsupported configuration.
> The intent here is for the default to be the classic dom0 configuration.
> What if I did,
>
> def_bool n
> def_bool y if ARM

Besides the two lines needing to be in the opposite order, if
Arm requires XSM_SILO, then I think it would be better to "select" it.
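Concretely, the two alternatives could be sketched like this (illustrative Kconfig only, not the actual patch; note that in Kconfig the first default whose condition holds wins, which is why the line order matters):

```kconfig
# Alternative 1: Daniel's two lines with their order flipped -- the
# conditional default must come first, or "n" always wins.
config XSM_SILO
	def_bool y if ARM
	def_bool n

# Alternative 2: have the Arm architecture select the option outright.
config ARM
	bool
	select XSM_SILO
```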

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 06:58:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 06:58:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145349.267426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDtF-0000wC-88; Mon, 21 Jun 2021 06:58:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145349.267426; Mon, 21 Jun 2021 06:58:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDtF-0000w5-51; Mon, 21 Jun 2021 06:58:37 +0000
Received: by outflank-mailman (input) for mailman id 145349;
 Mon, 21 Jun 2021 06:58:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvDtD-0000vz-Of
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 06:58:35 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cb8bbda-a48a-4ec0-854b-43e9b8323ec8;
 Mon, 21 Jun 2021 06:58:33 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2053.outbound.protection.outlook.com [104.47.9.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-d4H7NInUOAaDJvPTnhUSzw-1; Mon, 21 Jun 2021 08:58:30 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3392.eurprd04.prod.outlook.com (2603:10a6:803:7::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 21 Jun
 2021 06:58:28 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 06:58:28 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0004.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Mon, 21 Jun 2021 06:58:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6cb8bbda-a48a-4ec0-854b-43e9b8323ec8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624258712;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cA9dvG83fpTvhDA2rrO+6pXu9a135jKxNSZBk3jSpaU=;
	b=KfuAIwiAepsmkvmXDU27SjYqPPGCOOhnttU1iRt/LMhUHKEyne4Y6oDi6x6xoyUWil0Fpz
	86toNupT+hnzhfPW+Xb/t9jSM4/t3d12S07nfehBrO7VWf97hpSBSjfEERR4BSkVH5D39W
	ew9NpNY0kR7P/sfHXmab5GLr5O1YYTU=
X-MC-Unique: d4H7NInUOAaDJvPTnhUSzw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N07Q2kdx0KqrRXFzh5fUMRq6PlWdoGZlyuElE9Z0oGAHn2w4eUQkCdI1NVeE+oVNAl72ASKAqu2f9uyjj9XnxCoqA1eoPKegSqYG9twwp4sDsI/kW5niauz/S333+PMRF2SUQNr6rhAcNKZG53WuvtZ/LB/i/X194jRp02RPLvyVhUdfbD/3o00+9zYEPVvhPZWvrEiZBWdp3wWIbOTmHqDuPyGGknw0jGjoCrnn3FbePhG1J+gEPQ0dB7KuHynecR0ILP5kFV7sCMAw5c7yioAmdPH0Ye+JcAVd8Mv/Z1hcTn8Y1611YIP6iCC1Qi6wRC6KhdnwzKjsu/9kVg0GmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cA9dvG83fpTvhDA2rrO+6pXu9a135jKxNSZBk3jSpaU=;
 b=aVMn+aIi0GDKEcncMQmyVnfD2z1ONeDGCQLrqCU52ZINpJRbkqAlQ75bJm7j2aY3RrOG4w8Z4gkUCiwuYTLHDKxhJGQmwen3ifTqNfqMQMZvt3dlBcLPGAaava+KifH36ypkWQ+qgbmtUv+N34INM01Ton2oON412uPghxCS/o9Yny6Z1/I4Gf2DwpyZmRC4cApazaFUztMETisuSsEZ+u+3VL0q3LpHYSgx3JqKTNMc3kCLC1toNhlgAXd3PRh+JxABYjpqM+DKdS+kB0x5AhQu8e4sQBxpTlC9Q6qtLcFGy/kpvWn/TOE/ysz0TFZ4/ZhOJZTxG4ofCgplJ/Wk2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
 <ff0c9f42-f45e-e78e-35b9-c030011eed8f@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6d50efc1-6c13-1481-b70c-0abfa99aa610@suse.com>
Date: Mon, 21 Jun 2021 08:58:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <ff0c9f42-f45e-e78e-35b9-c030011eed8f@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0004.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 16227cd3-0513-41de-fe31-08d93481f8f9
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3392:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3392B37CCA097718FC016E57B30A9@VI1PR0402MB3392.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 16227cd3-0513-41de-fe31-08d93481f8f9
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 06:58:28.4513
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bizV+fQPHYeHyibzHQDTtuR2PFaNG0hG9VooyVk2v/kRqc+MUyLmqS0FycRrRR69MiFdqO6VBKLTDOnRdRLkYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3392

On 18.06.2021 22:27, Daniel P. Smith wrote:
> On 6/18/21 8:26 AM, Jan Beulich wrote:
>> On 18.06.2021 01:39, Daniel P. Smith wrote:
>>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>>> pointers to static functions. As such this commit,
>>>  * eliminates CONFIG_XSM
>>
>> Following from what Andrew has said (including him mentioning your
>> changing of certain Kconfig option defaults), I'm not convinced this is
>> a good move. This still ought to serve as the overall XSM-yes-or-no
>> setting. If internally you make said two variants match in behavior,
>> that's a different thing.
> 
> Apologies that I did not express this clearly. What I was attempting to
> say is that there is no logical behavior
> difference between "XSM no" and "XSM yes with dummy policy". The only
> difference is the mechanics of how the dummy functions get called.
> Specifically, via macro magic the dummy functions are either turned into
> static inline declarations that are included into the code where they
> are invoked, or they end up in the dummy.c XSM module wrapped in
> macro-generated functions that are set as the entries of the dummy
> xsm_ops structure. Thus it is always the same logic being invoked; only
> the mechanics of reaching that logic differ.

That's what I understood, really. What I dislike is the inline functions
going away in what we currently call !XSM.

>>>  * introduces CONFIG_XSM_EVTCHN_LABELING as replacement for enabling event channel labels
>>
>> Is this mode needed as separate functionality at all? Nothing defines
>> XSM_NEED_GENERIC_EVTCHN_SSID anywhere. _If_ XSM went away as a separate
>> setting, then imo this one should go away as well.
>>
>>> --- a/xen/common/Kconfig
>>> +++ b/xen/common/Kconfig
>>> @@ -197,22 +197,33 @@ config XENOPROF
>>>  
>>>  	  If unsure, say Y.
>>>  
>>> -config XSM
>>> -	bool "Xen Security Modules support"
>>> -	default ARM
>>> -	---help---
>>> -	  Enables the security framework known as Xen Security Modules which
>>> -	  allows administrators fine-grained control over a Xen domain and
>>> -	  its capabilities by defining permissible interactions between domains,
>>> -	  the hypervisor itself, and related resources such as memory and
>>> -	  devices.
>>> +menu "Xen Security Modules"
>>>  
>>> -	  If unsure, say N.
>>> +choice
>>> +	prompt "Default XSM module"
>>> +	default XSM_SILO_DEFAULT if XSM_SILO && ARM
>>> +	default XSM_FLASK_DEFAULT if XSM_FLASK
>>> +	default XSM_SILO_DEFAULT if XSM_SILO
>>> +	default XSM_DUMMY_DEFAULT
>>> +	config XSM_DUMMY_DEFAULT
>>> +		bool "Match non-XSM behavior"
>>> +	config XSM_FLASK_DEFAULT
>>> +		bool "FLux Advanced Security Kernel" if XSM_FLASK
>>> +	config XSM_SILO_DEFAULT
>>> +		bool "SILO" if XSM_SILO
>>> +endchoice
>>
>> This did live after the individual options it depends on for a reason,
>> and you don't say anywhere why you need to move it up. The way you
>> have it, with the default command line kconfig tool, users will be
>> presented with dependent options before having chosen the settings of
>> the dependency ones. That's because this tool, to a degree, moves
>> linearly through the options it has parsed.
> 
> Yes, this is specifically why I moved it up. Clearly we have different
> approaches to how we like to interact with configurations, which is not
> a bad thing. I personally found it awkward the other way but can easily
> move it back.

I'm having a hard time seeing how presenting dependent options ahead
of their dependencies can be a good thing: the user will have made
their "choice" here, yet the availability of the individual settings
may then change if the depended-upon options are moved away from their
defaults once the user reacts to those later prompts.
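The ordering concern can be illustrated with a minimal fragment (symbol names follow the patch under discussion, but this is only a sketch): a linear configurator walks the file top to bottom, so it should prompt for the dependency options before offering the choice that depends on them.

```kconfig
# Dependencies first: a linear tool such as `make config` then asks
# about XSM_FLASK/XSM_SILO before offering the dependent choice below.
config XSM_FLASK
	bool "FLux Advanced Security Kernel"

config XSM_SILO
	bool "SILO"

choice
	prompt "Default XSM module"
	default XSM_DUMMY_DEFAULT

config XSM_DUMMY_DEFAULT
	bool "Match non-XSM behavior"

config XSM_FLASK_DEFAULT
	bool "FLux Advanced Security Kernel" if XSM_FLASK

config XSM_SILO_DEFAULT
	bool "SILO" if XSM_SILO

endchoice
```

With the choice placed first instead, the `if XSM_FLASK` / `if XSM_SILO` visibility conditions are evaluated against values the user has not yet been asked about.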

>>> @@ -261,25 +271,12 @@ config XSM_SILO
>>>  
>>>  	  If unsure, say Y.
>>>  
>>> -choice
>>> -	prompt "Default XSM implementation"
>>> -	depends on XSM
>>> -	default XSM_SILO_DEFAULT if XSM_SILO && ARM
>>> -	default XSM_FLASK_DEFAULT if XSM_FLASK
>>> -	default XSM_SILO_DEFAULT if XSM_SILO
>>> -	default XSM_DUMMY_DEFAULT
>>> -	config XSM_DUMMY_DEFAULT
>>> -		bool "Match non-XSM behavior"
>>> -	config XSM_FLASK_DEFAULT
>>> -		bool "FLux Advanced Security Kernel" if XSM_FLASK
>>> -	config XSM_SILO_DEFAULT
>>> -		bool "SILO" if XSM_SILO
>>> -endchoice
>>> +endmenu
>>>  
>>>  config LATE_HWDOM
>>>  	bool "Dedicated hardware domain"
>>>  	default n
>>> -	depends on XSM && X86
>>> +	depends on XSM_FLASK && X86
>>
>> I don't think this is a compatible change. I'm not going to exclude that
>> this is how it was meant, but as it stands LATE_HWDOM right now doesn't
>> really require FLASK, and could e.g. also go with SILO or dummy. If you
>> _mean_ to change this, then your description needs to say so (and ideally
>> it would then be split out, so - if this is actually a bug - it could
>> also be backported).
> 
> Actually this is the root cause that started all of this work. If you
> want to get technical, LATE_HWDOM does not rely on XSM at all. The issue
> is that you cannot use it as it was originally intended, to run Xen
> without a classic dom0 while still having all existing capabilities.
> Specifically, the hardware domain does not have the ability to assign the
> pass-through devices that it controls. This is where Flask
> comes in, enabling specific privileges to be assigned to labels and then
> domains to be constructed with those labels. In particular, it grants the
> ability to do pass-through assignment to the label assigned to the
> hardware domain. With the upcoming XSM-Roles patch set, these privileges
> are assigned to roles and it becomes possible to assign the necessary
> roles to the hardware domain.

In which case this needs spelling out in the description, to justify 
the change in behavior (which I'm not sure I really follow / agree with
just yet).

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 07:04:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 07:04:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145354.267436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDyQ-0002Mq-Rb; Mon, 21 Jun 2021 07:03:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145354.267436; Mon, 21 Jun 2021 07:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvDyQ-0002Mj-Oh; Mon, 21 Jun 2021 07:03:58 +0000
Received: by outflank-mailman (input) for mailman id 145354;
 Mon, 21 Jun 2021 07:03:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvDyP-0002Md-N5
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 07:03:57 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75388125-dea7-4e8c-9bc8-2238defb87f9;
 Mon, 21 Jun 2021 07:03:56 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-1-6Jcj02l_OPme_XEFxTKQaA-1; Mon, 21 Jun 2021 09:03:53 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Mon, 21 Jun
 2021 07:03:51 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 07:03:51 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0042.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:55::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 07:03:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75388125-dea7-4e8c-9bc8-2238defb87f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624259035;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bgKsnnteWjheYKz/topZVXsci2lt42zenZ3ki+XMsZg=;
	b=NU2fmCllcNOmP1KiGq5KRCJ3YG6mk8B89v3CDYXn15iOGxv+AX30hLWwb4SxnF+UiU9Td6
	Ca1+RZtk2gEfA77XfatatoZCTzeDm74T96R6WI2FG55wlJU3xF8uYLnbNm8BGu+1EbwIqm
	iJRP5EZhpTVExaRcyrvu4TH1+XpvqzA=
X-MC-Unique: 6Jcj02l_OPme_XEFxTKQaA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JwZt/kNfVy0daiv1Hjqjjg7Psq5m04TOBmhnME2kPaZ7uk70V3NBHz5KEdCQM+AYa0XLGWE/fT32Ci5tAvRf+hXyiNZF4RhaOMZNaPhMYu8ctMx0A2xFu09/17wf63+7bMwKlOiXz074DGmdlz2VFzG4NT9q/6dZwJ5wb0x2cSQ8PpR/nqgpboK+UaOgCzGlZ2diQICYERqzhdObrDr58vE2aj8Dh5lirbrk46Jf35KNPVD/a3ockhnqbe9/3eDp3tlAnwCFgwwY4POI/UIzyZIrH42Xni3ZzZhk4ApVVpBvgxNPx3a/tTo25ooPUeuSFCHZHwONWBj9IrNJDiMwnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cjDrlNUQDeor/lDlEup/fopOtBtExPBqffBIyfG2bLo=;
 b=lO72+jRxQxp/jZe4Oo/zKXnrZIra+BBrbLRO1oElxjm+DGRzmJGTR6s22KKD/jmX8c2b7PMjhaY54c4q2UDQ6BjfDEDef+sz+k8Tk7CwvR9NSlTiI/jJLhfIEpyX8buHweKavLT6i5fRcoMxGeqSRyxjLQqJlDSFDH0CB9pmVMqkLw1T1DZwaVcMz5pgxKB/wIeBJogXmezbwF7jjAMqVWB5cPbNGEnMNArfCMw4iGsFay8A19ZvBE8EmorM9pISxwIh9kGGlbUWHB9SwdFk63poKXZOTn9vnWyOZItMAp3mSf9LS8fvYWDwIa18fN4g9DUrtaNVRHaN9jxbuQhPNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: apertussolutions.com; dkim=none (message not signed)
 header.d=none;apertussolutions.com; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
 <3a86c791-e508-36a4-a48c-6cdb810f81f9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3fd8395f-87af-b32a-2dfe-1683299c4906@suse.com>
Date: Mon, 21 Jun 2021 09:03:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3a86c791-e508-36a4-a48c-6cdb810f81f9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0042.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:55::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c9eeb846-bb73-4025-5f42-08d93482b93a
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c9eeb846-bb73-4025-5f42-08d93482b93a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 07:03:51.1017
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kgj/FpOfOHgt2dpXONeRMOKgB44jRHCAsrz7QZbQWpWFzepUu7VL0cbIs4m8DAHF/tNC7fHbdMbPXVSlpGGFoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

On 18.06.2021 23:20, Andrew Cooper wrote:
> On 18/06/2021 13:26, Jan Beulich wrote:
>> On 18.06.2021 01:39, Daniel P. Smith wrote:
>>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO
>>> and !CONFIG_XSM_FLASK is whether the XSM hooks in dummy.h are called as
>>> static inline functions or as function pointers to static functions. As such
>>> this commit,
>>>  * eliminates CONFIG_XSM
>> Following from what Andrew has said (including him mentioning your
>> changing of certain Kconfig option defaults), I'm not convinced this is
>> a good move. This still ought to serve as the overall XSM-yes-or-no
>> setting. If internally you make said two variants match in behavior,
>> that's a different thing.
>
> I firmly disagree. There is no such thing as !XSM even in staging right now.
>
> All over Xen, we have calls to xsm_*() functions which, even in the !XSM
> case, contain a non-trivial security policy.

Compared with full-fledged XSM, I view the present xsm_default_action()
as sufficiently trivial. Its inline wrappers exist only to avoid
#ifdef-ary at all the use sites, and to let the code act as it did
before XSM was introduced. Whether you call this !XSM or
XSM_HWDOM_ALL_POWERFUL is secondary to me.

> The fact that under the hood, XSM vs !XSM creates two very different
> implementations of "the dom0-all-powerful model" is an error needing
> correcting, as it contributes a massive quantity of code complexity.
>
> This series of Daniel's takes steps to make the code match reality, and
> getting rid of CONFIG_XSM is absolutely the right thing to do. XSM is
> never actually absent from a build of Xen, even if you choose CONFIG_XSM=n.

As said, what you discuss is merely a question of naming. What I point out
as undesirable is the loss of the inline functions, replaced by real
function calls (not indirect ones, thanks to alternatives patching, but
still not clearly on par with the current model in terms of overhead).

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 07:07:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 07:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145360.267448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvE2E-00035P-GH; Mon, 21 Jun 2021 07:07:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145360.267448; Mon, 21 Jun 2021 07:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvE2E-00035I-Cr; Mon, 21 Jun 2021 07:07:54 +0000
Received: by outflank-mailman (input) for mailman id 145360;
 Mon, 21 Jun 2021 07:07:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvE2C-00035C-Fl
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 07:07:52 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86c598b7-584e-40a3-9ebc-554d495a378b;
 Mon, 21 Jun 2021 07:07:51 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2051.outbound.protection.outlook.com [104.47.13.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-_Roxd3jGMt6-UlfmEg6Jzw-1; Mon, 21 Jun 2021 09:07:49 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7087.eurprd04.prod.outlook.com (2603:10a6:800:12a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 07:07:48 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 07:07:48 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR06CA0135.eurprd06.prod.outlook.com (2603:10a6:208:ab::40) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 07:07:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86c598b7-584e-40a3-9ebc-554d495a378b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624259270;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qlYQYlX+s20UqRifCIPhrdNowloNdm5RZNoGOufdMa0=;
	b=XPTnG2qY848wWCByAcbEq33/iyA+T1p71j6boMqzcVPCzvFlyrS0PPCtMpdfoc/8caCC/H
	c4L2mysYq40YLiLA99RQR9njmj2qJ/3X8CO1nX20Q21VAapNxsMsxglEbbj0nfbQelqVMm
	VxbhLVTKB0tpcNZ2YEOy/w3k6seqVJg=
X-MC-Unique: _Roxd3jGMt6-UlfmEg6Jzw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH v1] compiler.h: define CONFIG_GCC_VERSION
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Julien Grall <julien@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210618164207.5111-1-olaf@aepfle.de>
 <1a3b3a14-61e6-c805-78e4-4b1dbff322f3@citrix.com>
 <20210618185539.491fc904.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fbf5883c-2fce-71eb-7496-70a31c150f1d@suse.com>
Date: Mon, 21 Jun 2021 09:07:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210618185539.491fc904.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR06CA0135.eurprd06.prod.outlook.com
 (2603:10a6:208:ab::40) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cfee0244-bbaa-44ba-2e72-08d93483468c
X-MS-TrafficTypeDiagnostic: VI1PR04MB7087:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cfee0244-bbaa-44ba-2e72-08d93483468c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 07:07:47.9957
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yw993Ebe62BIUvGD7kjVtxrgUrE+7D1vPO5ND/vaXovzzkNA20ccII7rY/S/GcfD8jwusmuLZhjypVuiruv4vw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7087

On 18.06.2021 18:55, Olaf Hering wrote:
> On Fri, 18 Jun 2021 17:46:47 +0100,
> Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
>> On 18/06/2021 17:42, Olaf Hering wrote:
>>> Fixes commit fa5afbbc20ef3577c5338f9d0b24dad45cef59cd,
>>> due to lack of commit 534519f0514f52007d504e0f2eeb714de7b2468d.
> 
>> Presumably you're intending this for Xen 4.13 and older?
> 
> 722f59d38c710a940ab05e542a83020eb5546dea without the required changes exists only in staging-4.13 at this point.

But please could you help readers by making this obvious without
needing to check what branch(es) said commit is part of, e.g. by
tagging the subject with [PATCH for-4.13] or some such?

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 07:14:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 07:14:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145365.267458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvE8O-0004VY-7Q; Mon, 21 Jun 2021 07:14:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145365.267458; Mon, 21 Jun 2021 07:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvE8O-0004VR-4P; Mon, 21 Jun 2021 07:14:16 +0000
Received: by outflank-mailman (input) for mailman id 145365;
 Mon, 21 Jun 2021 07:14:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvE8M-0004VL-W1
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 07:14:15 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8c24296-2931-416f-bb65-984b3fa6b7bc;
 Mon, 21 Jun 2021 07:14:14 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-40-8Ev31JlwO1uOcBxyeTXmPQ-1; Mon, 21 Jun 2021 09:14:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6477.eurprd04.prod.outlook.com (2603:10a6:803:11e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 07:14:10 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 07:14:10 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0043.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:48::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.9 via Frontend Transport; Mon, 21 Jun 2021 07:14:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8c24296-2931-416f-bb65-984b3fa6b7bc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624259653;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v8p5EWsavLYqmFEVhNRluVS3NJB+znh1u9p/pejtpCw=;
	b=NC4K6qtuGcBZQUjfngoevr9Jg6PaSMmPmdLXCOHguhpvE15MbpoarMMS2BDJuqtlXQ5J1x
	9kxAR3LcHSfG/XVcZNurC4yqeJUdL8GV8Vg0QkLZwz2iXZ4FMGR3wN+axHgkJFWa7nJKEN
	909hejM8DUchf3zKC0hd18eJ7tjeXGI=
X-MC-Unique: 8Ev31JlwO1uOcBxyeTXmPQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH for-4.13] compiler.h: define CONFIG_GCC_VERSION
To: Olaf Hering <olaf@aepfle.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Julien Grall <julien@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210618164207.5111-1-olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5d2d9420-2285-8742-f59d-d136e97f8fd2@suse.com>
Date: Mon, 21 Jun 2021 09:14:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210618164207.5111-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0043.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::8) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8432c990-b422-4347-0fc7-08d934842a46
X-MS-TrafficTypeDiagnostic: VE1PR04MB6477:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8432c990-b422-4347-0fc7-08d934842a46
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 07:14:10.0647
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +1z0GOM6f5FEZZ6sgEYLdtwknTE0fRuwZjCBoCZ8foXo1Xh5YteMufxHrFKOEZdczJN3H0PuViBavlSVbzSqgg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6477

On 18.06.2021 18:42, Olaf Hering wrote:
> Fixes commit fa5afbbc20ef3577c5338f9d0b24dad45cef59cd,
> due to lack of commit 534519f0514f52007d504e0f2eeb714de7b2468d.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/include/xen/compiler.h
> +++ b/xen/include/xen/compiler.h
> @@ -99,6 +99,13 @@
>      __asm__ ("" : "=r"(__ptr) : "0"(ptr));      \
>      (typeof(ptr)) (__ptr + (off)); })
>  
> +#ifndef CONFIG_GCC_VERSION
> +# ifdef __GNUC__
> +#  define CONFIG_GCC_VERSION (__GNUC__ * 10000           \
> +                              + __GNUC_MINOR__ * 100     \
> +                              + __GNUC_PATCHLEVEL__)
> +# endif
> +#endif

... I question the need for the surrounding #ifdef, and I may also move
this higher up in the file while committing.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 08:22:35 2021
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Subject: Re: [PATCH 02/10] tools/xenstored: Introduce lu_get_connection() and
 use it
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-3-julien@xen.org>
Date: Mon, 21 Jun 2021 09:21:34 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Message-Id: <BB8C183A-23BD-4C58-AE02-5F63B6E1B0AD@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-3-julien@xen.org>
To: Julien Grall <julien@xen.org>

> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, dump_state_buffered_data() takes two connections as
> parameters (one is the connection to dump, the other is the
> connection used to request LU). The naming (c vs conn) doesn't help
> to distinguish them, and this has already led to several mistakes
> while modifying the function.
>
> To remove the confusion, introduce a helper lu_get_connection() that
> returns the connection used to request LU, and use it in place of
> the existing parameter.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_control.c | 13 ++++++++++++-
> tools/xenstore/xenstored_control.h |  2 ++
> tools/xenstore/xenstored_core.c    |  7 +++----
> tools/xenstore/xenstored_core.h    |  1 -
> tools/xenstore/xenstored_domain.c  |  6 +++---
> tools/xenstore/xenstored_domain.h  |  2 +-
> 6 files changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index 0d57f9f9400d..d08a2b961432 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -104,6 +104,17 @@ static const char *lu_begin(struct connection *conn)
>
> 	return NULL;
> }
> +
> +struct connection *lu_get_connection(void)
> +{
> +	return lu_status ? lu_status->conn : NULL;
> +}
> +
> +#else
> +struct connection *lu_get_connection(void)
> +{
> +	return NULL;
> +}
> #endif
>
> struct cmd_s {
> @@ -516,7 +527,7 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
> 	ret = dump_state_global(fp);
> 	if (ret)
> 		goto out;
> -	ret = dump_state_connections(fp, conn);
> +	ret = dump_state_connections(fp);
> 	if (ret)
> 		goto out;
> 	ret = dump_state_special_nodes(fp);
> diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
> index aac61f05908f..6842b8d88760 100644
> --- a/tools/xenstore/xenstored_control.h
> +++ b/tools/xenstore/xenstored_control.h
> @@ -18,3 +18,5 @@
>
> int do_control(struct connection *conn, struct buffered_data *in);
> void lu_read_state(void);
> +
> +struct connection *lu_get_connection(void);
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 883a1a582a60..607187361d84 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2369,14 +2369,13 @@ const char *dump_state_global(FILE *fp)
>
> /* Called twice: first with fp == NULL to get length, then for writing data. */
> const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> -				     const struct connection *conn,
> 				     struct xs_state_connection *sc)
> {
> 	unsigned int len = 0, used;
> 	struct buffered_data *out, *in = c->in;
> 	bool partial = true;
>
> -	if (in && c != conn) {
> +	if (in && c != lu_get_connection()) {
> 		len = in->inhdr ? in->used : sizeof(in->hdr);
> 		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
> 			return "Dump read data error";
> @@ -2416,8 +2415,8 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 	}
>
> 	/* Add "OK" for live-update command. */
> -	if (c == conn) {
> -		struct xsd_sockmsg msg = conn->in->hdr.msg;
> +	if (c == lu_get_connection()) {
> +		struct xsd_sockmsg msg = c->in->hdr.msg;
>
> 		msg.len = sizeof("OK");
> 		if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index bb36111ecc56..89ce155e755b 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -269,7 +269,6 @@ void set_tdb_key(const char *name, TDB_DATA *key);
>
> const char *dump_state_global(FILE *fp);
> const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> -				     const struct connection *conn,
> 				     struct xs_state_connection *sc);
> const char *dump_state_nodes(FILE *fp, const void *ctx);
> const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 322b0dbca449..6d8d29cbe41c 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -1183,7 +1183,7 @@ void wrl_apply_debit_trans_commit(struct connection *conn)
> 	wrl_apply_debit_actual(conn->domain);
> }
>
> -const char *dump_state_connections(FILE *fp, struct connection *conn)
> +const char *dump_state_connections(FILE *fp)
> {
> 	const char *ret = NULL;
> 	unsigned int conn_id = 1;
> @@ -1209,7 +1209,7 @@ const char *dump_state_connections(FILE *fp, struct connection *conn)
> 			sc.spec.socket_fd = c->fd;
> 		}
>
> -		ret = dump_state_buffered_data(NULL, c, conn, &sc);
> +		ret = dump_state_buffered_data(NULL, c, &sc);
> 		if (ret)
> 			return ret;
> 		head.length += sc.data_in_len + sc.data_out_len;
> @@ -1219,7 +1219,7 @@ const char *dump_state_connections(FILE *fp, struct connection *conn)
> 		if (fwrite(&sc, offsetof(struct xs_state_connection, data),
> 			   1, fp) != 1)
> 			return "Dump connection state error";
> -		ret = dump_state_buffered_data(fp, c, conn, NULL);
> +		ret = dump_state_buffered_data(fp, c, NULL);
> 		if (ret)
> 			return ret;
> 		ret = dump_state_align(fp);
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index cc5147d7e747..62ee471ea6aa 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -101,7 +101,7 @@ void wrl_log_periodic(struct wrl_timestampt now);
> void wrl_apply_debit_direct(struct connection *conn);
> void wrl_apply_debit_trans_commit(struct connection *conn);
>
> -const char *dump_state_connections(FILE *fp, struct connection *conn);
> +const char *dump_state_connections(FILE *fp);
> const char *dump_state_special_nodes(FILE *fp);
>
> void read_state_connection(const void *ctx, const void *state);
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 08:56:56 2021
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-4-julien@xen.org>
Date: Mon, 21 Jun 2021 09:55:48 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Message-Id: <4316B0AD-5F89-4D04-8996-00836AE3991A@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
To: Julien Grall <julien@xen.org>

> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> call_delayed() currently assumes that conn->in is NULL when handling a
> delayed request. However, the connection is not paused, so new requests
> can be processed and conn->in may be non-NULL if we have only received
> a partial request.
> 
> Furthermore, as we overwrite conn->in, the current partial request
> will not be transferred. This will result in a corrupted connection.
> 
> Rather than updating conn->in, stash the LU request in lu_status and
> let each callback for a delayed request update conn->in when
> necessary.
> 
> To keep a sane interface, the code to write the "OK" response for the
> LU request is moved into xenstored_control.c.
> 
> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
> Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
> This is fixing bugs from two separate commits. I couldn't figure out
> how to split it in two patches without breaking bisection.
> ---
> tools/xenstore/xenstored_control.c | 41 ++++++++++++++++++++++++++++--
> tools/xenstore/xenstored_control.h |  3 +++
> tools/xenstore/xenstored_core.c    | 17 +++----------
> 3 files changed, 46 insertions(+), 15 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index d08a2b961432..7acc2d134f9f 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -50,6 +50,9 @@ struct live_update {
> 	/* For verification the correct connection is acting. */
> 	struct connection *conn;
> 
> +	/* Pointer to the command used to request LU */
> +	struct buffered_data *in;
> +
> #ifdef __MINIOS__
> 	void *kernel;
> 	unsigned int kernel_size;
> @@ -100,6 +103,7 @@ static const char *lu_begin(struct connection *conn)
> 	if (!lu_status)
> 		return "Allocation failure.";
> 	lu_status->conn = conn;
> +	lu_status->in = conn->in;
> 	talloc_set_destructor(lu_status, lu_destroy);
> 
> 	return NULL;
> @@ -110,11 +114,34 @@ struct connection *lu_get_connection(void)
> 	return lu_status ? lu_status->conn : NULL;
> }
> 
> +unsigned int lu_write_response(FILE *fp)
> +{
> +	struct xsd_sockmsg msg;
> +
> +	assert(lu_status);
> +
> +	msg = lu_status->in->hdr.msg;
> +
> +	msg.len = sizeof("OK");
> +	if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
> +		return 0;
> +	if (fp && fwrite("OK", msg.len, 1, fp) != 1)
> +		return 0;
> +
> +	return sizeof(msg) + msg.len;
> +}
> +
> #else
> struct connection *lu_get_connection(void)
> {
> 	return NULL;
> }
> +
> +unsigned int lu_write_response(FILE *fp)
> +{
> +	/* Unsupported */
> +	return 0;
> +}
> #endif
> 
> struct cmd_s {
> @@ -658,6 +685,8 @@ static bool do_lu_start(struct delayed_request *req)
> {
> 	time_t now = time(NULL);
> 	const char *ret;
> +	struct buffered_data *saved_in;
> +	struct connection *conn = lu_status->conn;
> 
> 	if (!lu_check_lu_allowed()) {
> 		if (now < lu_status->started_at + lu_status->timeout)
> @@ -668,8 +697,9 @@ static bool do_lu_start(struct delayed_request *req)
> 		}
> 	}
> 
> +	assert(req->in == lu_status->in);
> 	/* Dump out internal state, including "OK" for live update. */
> -	ret = lu_dump_state(req->in, lu_status->conn);
> +	ret = lu_dump_state(req->in, conn);
> 	if (!ret) {
> 		/* Perform the activation of new binary. */
> 		ret = lu_activate_binary(req->in);
> @@ -677,7 +707,14 @@ static bool do_lu_start(struct delayed_request *req)
> 
> 	/* We will reach this point only in case of failure. */
>  out:
> -	send_reply(lu_status->conn, XS_CONTROL, ret, strlen(ret) + 1);
> +	/*
> +	 * send_reply() will send the response for conn->in. Save the current
> +	 * conn->in and restore it afterwards.
> +	 */
> +	saved_in = conn->in;
> +	conn->in = req->in;
> +	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
> +	conn->in = saved_in;
> 	talloc_free(lu_status);
> 
> 	return true;
> diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
> index 6842b8d88760..27d7f19e4b7f 100644
> --- a/tools/xenstore/xenstored_control.h
> +++ b/tools/xenstore/xenstored_control.h
> @@ -20,3 +20,6 @@ int do_control(struct connection *conn, struct buffered_data *in);
> void lu_read_state(void);
> 
> struct connection *lu_get_connection(void);
> +
> +/* Write the "OK" response for the live-update command */
> +unsigned int lu_write_response(FILE *fp);
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 607187361d84..41b26d7094c8 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -272,15 +272,10 @@ static int undelay_request(void *_req)
> 
> static void call_delayed(struct connection *conn, struct delayed_request *req)
Here the conn parameter is not needed anymore, or am I missing something?

Cheers,
Luca

> {
> -	assert(conn->in == NULL);
> -	conn->in = req->in;
> -
> 	if (req->func(req)) {
> 		undelay_request(req);
> 		talloc_set_destructor(req, NULL);
> 	}
> -
> -	conn->in = NULL;
> }
> 
> int delay_request(struct connection *conn, struct buffered_data *in,
> @@ -2375,7 +2370,7 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 	struct buffered_data *out, *in = c->in;
> 	bool partial = true;
> 
> -	if (in && c != lu_get_connection()) {
> +	if (in) {
> 		len = in->inhdr ? in->used : sizeof(in->hdr);
> 		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
> 			return "Dump read data error";
> @@ -2416,16 +2411,12 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 
> 	/* Add "OK" for live-update command. */
> 	if (c == lu_get_connection()) {
> -		struct xsd_sockmsg msg = c->in->hdr.msg;
> +		unsigned int rc = lu_write_response(fp);
> 
> -		msg.len = sizeof("OK");
> -		if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
> +		if (!rc)
> 			return "Dump buffered data error";
> -		len += sizeof(msg);
> -		if (fp && fwrite("OK", msg.len, 1, fp) != 1)
> -			return "Dump buffered data error";
> -		len += msg.len;
> +		len += rc;
> 	}
> 
> 	if (sc)
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:03:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:03:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145388.267498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFpZ-0007st-9d; Mon, 21 Jun 2021 09:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145388.267498; Mon, 21 Jun 2021 09:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFpZ-0007sm-6P; Mon, 21 Jun 2021 09:02:57 +0000
Received: by outflank-mailman (input) for mailman id 145388;
 Mon, 21 Jun 2021 09:02:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvFpX-0007sg-U0
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:02:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.42]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be28687a-6c61-406a-9fb6-cefaa4c6307b;
 Mon, 21 Jun 2021 09:02:52 +0000 (UTC)
Received: from DB8PR06CA0026.eurprd06.prod.outlook.com (2603:10a6:10:100::39)
 by VI1PR08MB4159.eurprd08.prod.outlook.com (2603:10a6:803:e9::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 09:02:50 +0000
Received: from DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:100:cafe::7d) by DB8PR06CA0026.outlook.office365.com
 (2603:10a6:10:100::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18 via Frontend
 Transport; Mon, 21 Jun 2021 09:02:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT026.mail.protection.outlook.com (10.152.20.159) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:02:50 +0000
Received: ("Tessian outbound f88ae75fbd47:v96");
 Mon, 21 Jun 2021 09:02:50 +0000
Received: from 7b6928a2d106.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EC85D21A-38D0-40DB-95B1-09D162C1C02B.1; 
 Mon, 21 Jun 2021 09:02:12 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7b6928a2d106.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:02:12 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PA4PR08MB6254.eurprd08.prod.outlook.com (2603:10a6:102:f3::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 21 Jun
 2021 09:02:07 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:02:07 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LNXP265CA0014.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:5e::26) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.15 via Frontend Transport; Mon, 21 Jun 2021 09:02:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be28687a-6c61-406a-9fb6-cefaa4c6307b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7C18WQ4WmLiN6bjCWWNLNGvCw4AQ2jUQz27y43G7p7s=;
 b=26lQoUc1XFqAT+YOr7Ci4JBK7/DArdzPPaARsisX6ir/Oc+JU2EYWUHIzjv3eVR7K7vZ+8WzUPBATtc/gmn2O0D40R8uyzJR7xnHX4xlvHdBAxHM0UUD5Ef5pwBokTDY+JdldqoTLMom/U7Xsgaf+z2UeZJd1EgJUaZLsj5pL0U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c44f137472fc59e4
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 04/10] tools/xenstored: Limit the number of requests a
 connection can delay
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-5-julien@xen.org>
Date: Mon, 21 Jun 2021 10:02:01 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <77658E0F-C333-430C-A0E4-13F08E943A8B@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-5-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LNXP265CA0014.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::26) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 667018d2-9f5f-4771-57e7-08d934935899
X-MS-TrafficTypeDiagnostic: PA4PR08MB6254:|VI1PR08MB4159:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4159A77453A35168BF74FF2DE40A9@VI1PR08MB4159.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6254
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5f22bfca-e19c-4ad7-91cb-08d934933eed
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:02:50.0847
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 667018d2-9f5f-4771-57e7-08d934935899
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4159



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, only the live-update request can be delayed. The request can
> only be performed by a privileged connection (e.g. dom0), so it is fine
> to have no limits.
> 
> In a follow-up patch we will want to delay requests for unprivileged
> connections as well, so it is best to apply a limit.
> 
> For now and for simplicity, only a single request can be delayed
> for a given unprivileged connection.
> 
> Take the opportunity to tweak the prototype and provide a way to
> bypass the quota check. This would be useful when the function
> is called from the restore code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_control.c |  2 +-
> tools/xenstore/xenstored_core.c    | 11 ++++++++++-
> tools/xenstore/xenstored_core.h    |  3 ++-
> 3 files changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index 7acc2d134f9f..1c24d4869eab 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -737,7 +737,7 @@ static const char *lu_start(const void *ctx, struct connection *conn,
> 	lu_status->timeout = to;
> 	lu_status->started_at = time(NULL);
> 
> -	errno = delay_request(conn, conn->in, do_lu_start, NULL);
> +	errno = delay_request(conn, conn->in, do_lu_start, NULL, false);
> 
> 	return NULL;
> }
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 41b26d7094c8..51d210828922 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -279,10 +279,19 @@ static void call_delayed(struct connection *conn, struct delayed_request *req)
> }
> 
> int delay_request(struct connection *conn, struct buffered_data *in,
> -		  bool (*func)(struct delayed_request *), void *data)
> +		  bool (*func)(struct delayed_request *), void *data,
> +		  bool no_quota_check)
> {
> 	struct delayed_request *req;
> 
> +	/*
> +	 * Only one request can be delayed for an unprivileged
> +	 * connection.
> +	 */
> +	if (!no_quota_check && domain_is_unprivileged(conn) &&
> +	    !list_empty(&conn->delayed))
> +		return ENOSPC;
> +
> 	req = talloc(in, struct delayed_request);
> 	if (!req)
> 		return ENOMEM;
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 89ce155e755b..34839b34f6e9 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -213,7 +213,8 @@ char *get_parent(const void *ctx, const char *node);
> 
> /* Delay a request. */
> int delay_request(struct connection *conn, struct buffered_data *in,
> -		  bool (*func)(struct delayed_request *), void *data,
> +		  bool no_quota_check);
> 
> /* Tracing infrastructure. */
> void trace_create(const void *data, const char *type);
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:04:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:04:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145394.267509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFqt-00005h-Pd; Mon, 21 Jun 2021 09:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145394.267509; Mon, 21 Jun 2021 09:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFqt-00005Z-MM; Mon, 21 Jun 2021 09:04:19 +0000
Received: by outflank-mailman (input) for mailman id 145394;
 Mon, 21 Jun 2021 09:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvFqs-00005S-1Z
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:04:18 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14e2e508-8cd9-4dc8-ae2a-a4403b58f767;
 Mon, 21 Jun 2021 09:04:16 +0000 (UTC)
Received: from DB6PR07CA0171.eurprd07.prod.outlook.com (2603:10a6:6:43::25) by
 PA4PR08MB6319.eurprd08.prod.outlook.com (2603:10a6:102:e8::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18; Mon, 21 Jun 2021 09:04:07 +0000
Received: from DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:43:cafe::81) by DB6PR07CA0171.outlook.office365.com
 (2603:10a6:6:43::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.7 via Frontend
 Transport; Mon, 21 Jun 2021 09:04:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT033.mail.protection.outlook.com (10.152.20.76) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:04:07 +0000
Received: ("Tessian outbound f945d55369ce:v96");
 Mon, 21 Jun 2021 09:04:07 +0000
Received: from b7f4a495dbdc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EA823434-BD8F-4375-9AEF-44C40CEAF9A5.1; 
 Mon, 21 Jun 2021 09:03:35 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b7f4a495dbdc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:03:35 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PA4PR08MB6286.eurprd08.prod.outlook.com (2603:10a6:102:f2::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 21 Jun
 2021 09:03:33 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:03:33 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO2P265CA0091.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:8::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Mon, 21 Jun 2021 09:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14e2e508-8cd9-4dc8-ae2a-a4403b58f767
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NIn4QFZpjWQFUKJu9q89YMpvDVKYdNvpKtom6uEO1e8=;
 b=SSNbsR92ZnoCG5w7QXRvO3D50A3icmohar3ibWn63bg1hyUAJbP+1/ANxGL0+60puRutS1z7Zj9oIKkDj3h8xjvnhQmEZ/JJaP0AxnM3i6bMueADgbz7aMalYydbv1h73MMlrJ9lZcnQ1HxkAtlEnxOKfS0BaO4q9k/or5Tpev8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a0b529280689042f
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 05/10] tools/xenstored: xenstored_core.h should include
 fcntl.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-6-julien@xen.org>
Date: Mon, 21 Jun 2021 10:03:26 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <3CF3D42A-DFC8-48CA-A46D-CB2166F820AB@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-6-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0091.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::31) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ca37b43c-d785-47b2-eaaa-08d9349386ea
X-MS-TrafficTypeDiagnostic: PA4PR08MB6286:|PA4PR08MB6319:
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:04:07.7895
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ca37b43c-d785-47b2-eaaa-08d9349386ea
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6319



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> xenstored_core.h considers that live-update is not supported if
> O_CLOEXEC doesn't exist. However, the header doesn't include the one
> defining O_CLOEXEC (i.e. fcntl.h). This means that, depending on the
> headers included, some source files may conclude that live-update is
> not supported.
>
> I am not aware of any issue with the existing code. Therefore this is
> just a latent bug so far.
>
> Prevent any potential issue by including fcntl.h in xenstored_core.h.
>
> Fixes: cd831ee438 ("tools/xenstore: handle CLOEXEC flag for local files and pipes")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_core.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 34839b34f6e9..dac517156993 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -24,6 +24,7 @@
>
> #include <sys/types.h>
> #include <dirent.h>
> +#include <fcntl.h>
> #include <stdbool.h>
> #include <stdint.h>
> #include <errno.h>
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:11:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:11:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145401.267526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFyE-0001ZV-Mw; Mon, 21 Jun 2021 09:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145401.267526; Mon, 21 Jun 2021 09:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFyE-0001ZO-Jd; Mon, 21 Jun 2021 09:11:54 +0000
Received: by outflank-mailman (input) for mailman id 145401;
 Mon, 21 Jun 2021 09:11:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvFyE-0001ZI-49
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:11:54 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.41]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e9e8742-e1f8-4dcc-8c69-2a869a513f7b;
 Mon, 21 Jun 2021 09:11:52 +0000 (UTC)
Received: from AM5PR0402CA0016.eurprd04.prod.outlook.com
 (2603:10a6:203:90::26) by AM5PR0802MB2386.eurprd08.prod.outlook.com
 (2603:10a6:203:9b::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 09:11:45 +0000
Received: from AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:90:cafe::c5) by AM5PR0402CA0016.outlook.office365.com
 (2603:10a6:203:90::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.16 via Frontend
 Transport; Mon, 21 Jun 2021 09:11:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT012.mail.protection.outlook.com (10.152.16.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:11:44 +0000
Received: ("Tessian outbound 41e46b2c3cec:v96");
 Mon, 21 Jun 2021 09:11:43 +0000
Received: from 1c68c5b02afe.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D5723455-03F0-4231-B1E7-36484E43BEAF.1; 
 Mon, 21 Jun 2021 09:11:05 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1c68c5b02afe.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:11:05 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PR3PR08MB5625.eurprd08.prod.outlook.com (2603:10a6:102:89::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 09:11:04 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:11:04 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LNXP123CA0013.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:d2::25) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 09:11:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e9e8742-e1f8-4dcc-8c69-2a869a513f7b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N0S+LDj98BFOC45O6LsSQdEu3H3PdBd/n+fCFIcB5lA=;
 b=JMs8gUdYordMhYJSzeWj6f3ZsJ+FMdBkaHpUIDxwCSePmhi7rt5TduKaSYvM7LUuBVI/1XYR4mf3m2EXUtj7h3mtvivPTOwitSArLmhISVT5PF+OIg7hHIMB8hQWOVH/Y1kENK6GsyOhwkWsbJit3oldz6Ds1HANO6Uwk4D9K5k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b45c6c42ae8a2686
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 06/10] tools/xenstored: Introduce a wrapper for
 conn->funcs->can_{read, write}
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-7-julien@xen.org>
Date: Mon, 21 Jun 2021 10:10:58 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <73401908-D406-4ED8-93FA-C0A0911651BF@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-7-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LNXP123CA0013.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::25) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e2065cfa-33a2-4738-57c8-08d93494972f
X-MS-TrafficTypeDiagnostic: PR3PR08MB5625:|AM5PR0802MB2386:
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2958;OLM:2958;
X-MS-Exchange-SenderADCheck: 1
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:11:44.5142
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e2065cfa-33a2-4738-57c8-08d93494972f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2386



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Currently, the callbacks can_read and can_write are called directly.
> This doesn't allow us to add generic checks and therefore requires
> duplication.
>
> At the moment, one check that would benefit from being common is
> whether the connection should be ignored. The position is slightly
> different between domain and socket because for the latter we want to
> check the state of the file descriptor first.
>
> In follow-up patches, there will be more potential generic checks.
>
> This patch provides wrappers to read/write a connection and moves
> the ->is_ignored check after the callback for everyone.
>
> This also requires replacing the direct calls to domain_can_read()
> and domain_can_write() with the new wrappers. At the same time,
> both functions can now be static. Note that the implementations need
> to be moved earlier in the file xenstored_domain.c to avoid
> declaring the prototypes.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_core.c   | 18 ++++++++++----
> tools/xenstore/xenstored_domain.c | 40 +++++++++++++------------------
> tools/xenstore/xenstored_domain.h |  4 ----
> 3 files changed, 31 insertions(+), 31 deletions(-)
>
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 51d210828922..2e5760fe4599 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -334,6 +334,16 @@ static int destroy_conn(void *_conn)
> 	return 0;
> }
>
> +static bool conn_can_read(struct connection *conn)
> +{
> +	return conn->funcs->can_read(conn) && !conn->is_ignored;
> +}
> +
> +static bool conn_can_write(struct connection *conn)
> +{
> +	return conn->funcs->can_write(conn) && !conn->is_ignored;
> +}
> +
> /* This function returns index inside the array if succeed, -1 if fail */
> static int set_fd(int fd, short events)
> {
> @@ -396,8 +406,8 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
> 	list_for_each_entry(conn, &connections, list) {
> 		if (conn->domain) {
> 			wrl_check_timeout(conn->domain, now, ptimeout);
> -			if (domain_can_read(conn) ||
> -			    (domain_can_write(conn) &&
> +			if (conn_can_read(conn) ||
> +			    (conn_can_write(conn) &&
> 			     !list_empty(&conn->out_list)))
> 				*ptimeout = 0;
> 		} else {
> @@ -2325,14 +2335,14 @@ int main(int argc, char *argv[])
> 			if (&next->list != &connections)
> 				talloc_increase_ref_count(next);
>
> -			if (conn->funcs->can_read(conn))
> +			if (conn_can_read(conn))
> 				handle_input(conn);
> 			if (talloc_free(conn) == 0)
> 				continue;
>
> 			talloc_increase_ref_count(conn);
>
> -			if (conn->funcs->can_write(conn))
> +			if (conn_can_write(conn))
> 				handle_output(conn);
> 			if (talloc_free(conn) == 0)
> 				continue;
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 6d8d29cbe41c..47e9107c144e 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -172,6 +172,23 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
> 	return len;
> }
>
> +static bool domain_can_write(struct connection *conn)
> +{
> +	struct xenstore_domain_interface *intf = conn->domain->interface;
> +
> +	return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
> +}
> +
> +static bool domain_can_read(struct connection *conn)
> +{
> +	struct xenstore_domain_interface *intf = conn->domain->interface;
> +
> +	if (domain_is_unprivileged(conn) && conn->domain->wrl_credit < 0)
> +		return false;
> +
> +	return (intf->req_cons != intf->req_prod);
> +}
> +
> static const struct interface_funcs domain_funcs = {
> 	.write = writechn,
> 	.read = readchn,
> @@ -290,19 +307,6 @@ void handle_event(void)
> 		barf_perror("Failed to write to event fd");
> }
>
> -bool domain_can_read(struct connection *conn)
> -{
> -	struct xenstore_domain_interface *intf = conn->domain->interface;
> -
> -	if (domain_is_unprivileged(conn) && conn->domain->wrl_credit < 0)
> -		return false;
> -
> -	if (conn->is_ignored)
> -		return false;
> -
> -	return (intf->req_cons != intf->req_prod);
> -}
> -
> static bool domid_is_unprivileged(unsigned int domid)
> {
> 	return domid != 0 && domid != priv_domid;
> @@ -314,16 +318,6 @@ bool domain_is_unprivileged(struct connection *conn)
> 	       domid_is_unprivileged(conn->domain->domid);
> }
>
> -bool domain_can_write(struct connection *conn)
> -{
> -	struct xenstore_domain_interface *intf = conn->domain->interface;
> -
> -	if (conn->is_ignored)
> -		return false;
> -
> -	return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
> -}
> -
> static char *talloc_domain_path(void *context, unsigned int domid)
> {
> 	return talloc_asprintf(context, "/local/domain/%u", domid);
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 62ee471ea6aa..1e929b8f8c6f 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -51,10 +51,6 @@ void domain_deinit(void);
> /* Returns the implicit path of a connection (only domains have this) */
> const char *get_implicit_path(const struct connection *conn);
>
> -/* Can connection attached to domain read/write. */
> -bool domain_can_read(struct connection *conn);
> -bool domain_can_write(struct connection *conn);
> -
> bool domain_is_unprivileged(struct connection *conn);
>
> /* Remove node permissions for no longer existing domains. */
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:13:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:13:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145406.267537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFzQ-0002Eh-5g; Mon, 21 Jun 2021 09:13:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145406.267537; Mon, 21 Jun 2021 09:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvFzQ-0002Ea-1o; Mon, 21 Jun 2021 09:13:08 +0000
Received: by outflank-mailman (input) for mailman id 145406;
 Mon, 21 Jun 2021 09:13:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvFzP-0002EQ-2u
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:13:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.57]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e1f01ae-940a-4cf8-97e2-55f3742dee54;
 Mon, 21 Jun 2021 09:13:05 +0000 (UTC)
Received: from AM5PR0701CA0014.eurprd07.prod.outlook.com
 (2603:10a6:203:51::24) by PA4PR08MB6080.eurprd08.prod.outlook.com
 (2603:10a6:102:ec::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 09:13:03 +0000
Received: from AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:51:cafe::8c) by AM5PR0701CA0014.outlook.office365.com
 (2603:10a6:203:51::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.7 via Frontend
 Transport; Mon, 21 Jun 2021 09:13:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT008.mail.protection.outlook.com (10.152.16.123) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:13:02 +0000
Received: ("Tessian outbound 41e46b2c3cec:v96");
 Mon, 21 Jun 2021 09:13:02 +0000
Received: from 1c51224efe75.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F8D3370E-981F-4B5F-AFCA-152BEDEA9464.1; 
 Mon, 21 Jun 2021 09:12:36 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1c51224efe75.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:12:36 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PAXPR08MB6878.eurprd08.prod.outlook.com (2603:10a6:102:139::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Mon, 21 Jun
 2021 09:12:33 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:12:33 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0001.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:150::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 09:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e1f01ae-940a-4cf8-97e2-55f3742dee54
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wcB0ay7yLz754DCvA6zo75l+swGavIXx1xClsoBuoLQ=;
 b=vh624p/8m3RyYfPbOk5UPiZj7lqgUlBcSVk/6nDrDXjQYsXBvz/lb6civGJKNE7IYdp7h+TNgl51jXsThRIPga8ePhsUYHxplSqMmDxNC06myg4NWwb9bcJrWc5/5eT4+2vUvCnQWmeEHz+mNkyXhg4xS5AjUPQ2qWFA3+QwH3o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 479a7ecdb81ca24d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cMz81y2Duil5IkpEudtPuDdDm+JjqFWDhb9NwaXM+UEvrvwYEiwa1EB8Nvxy5VB+hT53gcLYRLnMbwpG1HQlOFk6iLiAum4NDLWKP2XfKuFrN1jy3O+iix2Yh0MEcS618zQ0DXQS+8jwl/RdVdbQsQoWkE4g1K7YdQAqgEYuSgNmEUSGwgdUGtXzicvyGA9u9f9a/iVRFoJq98oBJDKw79dDDbAtKsLbjE8AihxgYJaLM3o/CH/oxwXHt+/lVf5ONwo/XGiK7TxUl4GFPnhi+t2JtgSw8dtgEYCAC2KHAY8oZkQo/wKoOGyv7rkNSChUeBTMWGFhMl8mXgpjJt+/yQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wcB0ay7yLz754DCvA6zo75l+swGavIXx1xClsoBuoLQ=;
 b=CM9KlHvuw1ubLXLFW+11kQNDebROkYQQcjku5J7VsTp5yE2jhS3lCM1TUFXvjJHj9GkplLnh8P3PjJf0jtfXxCrnNF4XbYpL3hpoFsKmAmhC1IepXzlYdRncaF3miVZCdHN9uNMqDX2lQndSagPirzt1qsFCfcJzNMFC4cro/bpoCgICCTEUAaI4cfZGTFaKvNYsdzKmY3JD+xase6roAup6wvgvjA95MlMtkgYwfjLBEMtXWQngYWXRPmu9nOEjZX+NsZ5KIO0v1/FaGj/deXaujP1n114NGZb7c0ICpwfCTBEA9kvu99Ia3lt4nQZ0zJMXNBcc6HvXtUPx0zmUWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wcB0ay7yLz754DCvA6zo75l+swGavIXx1xClsoBuoLQ=;
 b=vh624p/8m3RyYfPbOk5UPiZj7lqgUlBcSVk/6nDrDXjQYsXBvz/lb6civGJKNE7IYdp7h+TNgl51jXsThRIPga8ePhsUYHxplSqMmDxNC06myg4NWwb9bcJrWc5/5eT4+2vUvCnQWmeEHz+mNkyXhg4xS5AjUPQ2qWFA3+QwH3o=
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 07/10] tools/xenstored: delay_request: don't assume
 conn->in == in
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-8-julien@xen.org>
Date: Mon, 21 Jun 2021 10:12:27 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <2BDB4DB7-3132-48BD-A83B-4E5E7D9C48B9@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-8-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0001.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::6) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8d14ea80-9ac5-4c0c-cff9-08d93494c5c1
X-MS-TrafficTypeDiagnostic: PAXPR08MB6878:|PA4PR08MB6080:
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB6080DDC7DF8ADDDD00C06B3EE40A9@PA4PR08MB6080.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 sfVxTA7cdRPC2Mqsaz2t7vYDuqDt8ELnTf8b5fxXzAfoSezIXP6J32SJKCR0iofEcl29VpM/mLBbxO9YWFGLj5CY9D8cQGeX+EBCebcnnAhgQSkcEXmH+0/KnQ/tLjAwF0JNEWspmqKfgI/a+8ImydiO5ICIx095xZSq9rjWLd3yzPzrmPanK+gWEOTwN5srmpcRTZhz2cdZ2jhNE7DxIxB5KIkUhlGg7vNcOfPuN4cZ0kRqI23Z99/HPQSz47BFkvbXcKLottqEgU/wM0fBWiIAVeCFBS4WL3s9J+hIu7ZfwSoXWeatEGobWa1BLHyF2mg/kVcxtDIVuHehxEbXzBogr49jDHx6StxBS/X+vywVn1qjplSSL79IG3LVat4LVfVCihW6ae2HR5oYRkZt6G1VA8syNOJ6f1mZpuj51LNaSrpYt+wg9D8NWWrhfkWJIhfUlY0oi5cVRDJ1lZC6BE0SOxtJYJopSlNTG8txJhDSJ/alFDaf3jsowHSG0R+6U4IVlzLahW7Dhhn7B5w9C4lh6vhpyeT/Y/m0vc51pHnCGMuZJrXC+glh13fxn+I8QhRgC3K9uH2jc/zUAAUyLm7J2Vmcq9XJjIDJ6PuhbRsbhXaQ26k4knX3Z5TtA0h86IQ1hnzUcqBV+WBBdjOV9IdkFyQ+VJhOhky65rKd8HGLF90YKSkLnByRlNx7wDw3
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB6816.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(39860400002)(136003)(366004)(396003)(6666004)(6916009)(44832011)(8936002)(26005)(52116002)(8676002)(36756003)(86362001)(4326008)(33656002)(38100700002)(186003)(38350700002)(16526019)(6512007)(956004)(2616005)(54906003)(6486002)(66556008)(66476007)(478600001)(83380400001)(316002)(6506007)(66946007)(2906002)(53546011)(5660300002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
 =?us-ascii?Q?p5S3czgWt4DobJ2FMRH7Rj6/4WWiSoF+aohJLe0dL7mPJbsMNWJMAOE7Ru+f?=
 =?us-ascii?Q?3L1SsfzY47NzgchzfXde5cLl0dZL6O7j/R5vWk2f6TsKHdSX9E7Neg6l60gF?=
 =?us-ascii?Q?JvJZa6rif2wavqItCGh93bEQSV1ZiHSMuyQlg67XDR8f0RxbSB95zskKByAe?=
 =?us-ascii?Q?5H/Oe0lgYbJ4M7lCkcjpuoeCoNGkwAbUkYlH0VoA7u9ecKgvARWDv5kbocqq?=
 =?us-ascii?Q?o7EXaKuYQrcmey3t7cirZL3h83UIs3ZJdc5iWPwWtgHEz2Q9xGRiqV6FptSh?=
 =?us-ascii?Q?TMNsTtm9Ici76TT7QcQDDF2MeDuI2B/14sPPD0fa+nexrc8QRxluMKTRaMZ8?=
 =?us-ascii?Q?nPpWQlLoNTSML/nN96/bPnlIAGVL+Vjs7+6OuuxVKMHGM26UIDMSylIw2aPA?=
 =?us-ascii?Q?p6gQ/zb6mqOlv9xU1oFDDwzi3NgSCByKvHALQAnxB5OaQaUr+tuIWSHMZZdg?=
 =?us-ascii?Q?Ve78uJ8lwiCjsQ8Lp3W3LpFOriG+w8lXZwLN+kl9wCS7Wx0qCzxhXxomaRCh?=
 =?us-ascii?Q?a44MJLKoQat8B17/FEo+8OIqzEGbIV/D85fh2bxs+qfOGEPiaWYHU3D6NGVk?=
 =?us-ascii?Q?39j4fZrShDBuO2YgT1F7Wmc4OM9bThNMagkjdcpK3mHwiixF7v7/Vsb/SA1l?=
 =?us-ascii?Q?8Xb7jHgWOXOgarLmZjwH6OMJYhWMVbcVPUqZxy8sa9DHjaeryFN+af92IlJv?=
 =?us-ascii?Q?f03vLBXYVyvQY2x87zEjOeHu2Dn6/3uEZKZh7tD8uiThWxr0ZRHHF1fbPhko?=
 =?us-ascii?Q?Pq9lzrAK32NNwoIpyP5kfRCujGpP71TCCVSi638Mmo/4QlP45DPjW3OR4k+E?=
 =?us-ascii?Q?Fi4BRur4xo04LJXZKRLJO4tsb6AnHAs+k3+GpTkiCvo/1fETURCSi8zt+rhN?=
 =?us-ascii?Q?kiWHpilWnmrz1gPp4dNvT6tCXl4GmkbHAlgfgKgsJZ3+K7BK+qgcSX6VC8zB?=
 =?us-ascii?Q?RgORvyUpZnwoVhTyE2ytHPfynZmbOi76TZjWVgu7ONU70few+9NYlPBSERGe?=
 =?us-ascii?Q?0exPEHSCvU8YL3ZvV7EpXw/D9rw3khSub7LNr14imBUrlC9OICKQP7dMQsDh?=
 =?us-ascii?Q?9VmOHJHHZ8Dz8dd2lG5YSYTxTPSac7YfEuE2WlANJ0ncxi1LL1/O6PGkrFuo?=
 =?us-ascii?Q?SBpAVwgIS+3gH/rnWYUbRP1uHbRRe16K0v5oprnjcwtmNy1QoskjEwQ0Ch7C?=
 =?us-ascii?Q?L4eFoCuBxBhtHxthF4tJMxUvsT+ivgEcNx6UyshlQUa4F6C2iJcaVnaZKEpk?=
 =?us-ascii?Q?fWBDwZS9D9zmD9tnJa/4vL4ugdJ31UD1fc/X1B1MjLezLvf7mgN67aOlzC3Z?=
 =?us-ascii?Q?AhicBpyD8HnJWB3v5ywVyWE/?=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6878
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9e5ebb3f-f28e-4f37-3663-08d93494b46e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hnecnqT37K0fpf+Q5XbMJSInwujIKfSHabVOsIcGhLZCIKcqQ4EwL9LJzYEoJcwEn/u+jHnt6d0RZ2mFihk6UrMPOyYSzEcFJHmPkv1VnGOZa53/9iQ2IMmQ2Jbe2Qz2RPhVeQpwTl7fWZ+2z3c7krsBivxhTDs6FcxZJgAzhI8ouW/b0F/zWjDGyCMXpgJbdWXhkESAaQp8QOFwLYawVS/8bf1sy3kB8/JS6ta/gxcAMHPZgfyOdJ5bQl7e/DtrKYKMsJyQnviAK3GG2pXy1OAZ2TjoDyn01SStJHFSjI+kmXYxX5KbF1/+am0Ke3qP7zqDz2CtDvnkgjdblHmOey0yMv5D4oX8alWFQf/I1OF/gAu1frbwmYw51BCz+H9Fe4/CrTy7NLAffN4Xz/EyRwWjHogkAVq1p+OyI79hLib62A/roVYvPermePRMMlCNV9R1OOuwlgxVyeCcqDYkAAa1xaZUDTYLAc1b/nwjyWVg3YWK3B/9VEy9KtDGRmnkbspiEewCMTTVLnYukaMC+yBLNwGCIQbT//Q6FeQKV/t4m5JCbKAipKDTnSTNpub4sXFr8gVX/af9ABtqf5eMcUs0f8xRHTpa0BvfMVFAiHdc3QDRYH2gHh5S2M6LBMMTwfwNKvGJYd5l961HuQ++LbiYH7IN7d8QOVRBHCSPOFA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(39850400004)(346002)(136003)(46966006)(36840700001)(70586007)(70206006)(44832011)(6666004)(36756003)(186003)(2906002)(336012)(47076005)(16526019)(82740400003)(6506007)(83380400001)(8936002)(107886003)(82310400003)(54906003)(5660300002)(316002)(356005)(36860700001)(4326008)(6512007)(6486002)(8676002)(33656002)(26005)(478600001)(2616005)(6862004)(956004)(81166007)(53546011)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:13:02.6723
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d14ea80-9ac5-4c0c-cff9-08d93494c5c1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6080



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> delay_request() currently assumes that the delayed request is
> always conn->in. This is correct today, but it invites a latent
> bug, as the function allows the caller to specify any request.
> 
> To prevent any future surprise, check whether the delayed request is
> the current one.
> 
> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_core.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 2e5760fe4599..a5084a5b173d 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -306,7 +306,9 @@ int delay_request(struct connection *conn, struct buffered_data *in,
> 	delayed_requests++;
> 	list_add(&req->list, &conn->delayed);
> 
> -	conn->in = NULL;
> +	/* Unlink the request from conn if this is the current one */
> +	if (conn->in == in)
> +		conn->in = NULL;
> 
> 	return 0;
> }
> -- 
> 2.17.1
> 
> 
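For illustration, the effect of the guard in the patch above can be sketched with reduced, hypothetical stand-ins for the xenstored structures (these are not the real `struct connection` / `struct buffered_data`, and the real delay_request() also links the request onto conn->delayed):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical reductions of xenstored's types, just enough to show
 * the guard; the real structures carry much more state. */
struct buffered_data { int id; };
struct connection { struct buffered_data *in; };

static int delay_request_sketch(struct connection *conn,
				struct buffered_data *in)
{
	/* ... the real code queues `in` on conn->delayed here ... */

	/* Only detach the request when it is the one currently being
	 * read, instead of unconditionally clearing conn->in. */
	if (conn->in == in)
		conn->in = NULL;

	return 0;
}
```

With this guard, delaying a request other than the current one no longer clobbers conn->in, which is exactly the latent bug the patch is closing off.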



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:22:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145413.267548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvG8P-0003hy-03; Mon, 21 Jun 2021 09:22:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145413.267548; Mon, 21 Jun 2021 09:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvG8O-0003hr-Rk; Mon, 21 Jun 2021 09:22:24 +0000
Received: by outflank-mailman (input) for mailman id 145413;
 Mon, 21 Jun 2021 09:22:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvG8N-0003gK-Qq
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:22:24 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.48]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 561844a9-3f4f-435c-859b-e3846f904d11;
 Mon, 21 Jun 2021 09:22:22 +0000 (UTC)
Received: from DU2PR04CA0009.eurprd04.prod.outlook.com (2603:10a6:10:3b::14)
 by DB9PR08MB7115.eurprd08.prod.outlook.com (2603:10a6:10:2c8::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 21 Jun
 2021 09:22:18 +0000
Received: from DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3b:cafe::f0) by DU2PR04CA0009.outlook.office365.com
 (2603:10a6:10:3b::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18 via Frontend
 Transport; Mon, 21 Jun 2021 09:22:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT011.mail.protection.outlook.com (10.152.20.95) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:22:18 +0000
Received: ("Tessian outbound 7799c3c2ab28:v96");
 Mon, 21 Jun 2021 09:22:18 +0000
Received: from 974aac4ca62c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C20CD189-0122-4808-8FDD-CCB16E8F0B0C.1; 
 Mon, 21 Jun 2021 09:21:40 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 974aac4ca62c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:21:40 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PA4PR08MB5998.eurprd08.prod.outlook.com (2603:10a6:102:e9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 21 Jun
 2021 09:21:37 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:21:37 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0118.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:192::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Mon, 21 Jun 2021 09:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 561844a9-3f4f-435c-859b-e3846f904d11
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KB2l6Hom7Es/jr83vEc+nSM1AX8WyJrcuMmjibanQNg=;
 b=CfXAGeSE0aoHFIijnHEdMvwyDPmagaHr05EmLbKi5lY+SZdGbPyY3OrlrA9oePeyn2ZWxcfIDb5Nw8Abo7CM03wc+VnF/Luokp5GgrpR1hhrNVq1/NU9dFWK2z322qv0a6I231qk59DU6MDhKB2vI0wQ/eestRae8BuTgLBBTuQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2680b739582b2fe3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h1NBlhoIWdRUlt337C6ofn7gNERra7U+s5TEWAGOvtT2UHPK2qzpza1h5rWIRlYlgqwcf9oWwDRX7M5+6yDORxIn6R6ueEXpVLN6/HrHs8ym3sIU0zM4ZxoYLQy8WdZi2bsZnsyYTyS4IUEkCNDx8q6cTm3v3cnWmqP3E5GsAGy7kfBe3U/8E8wcpe/YsPncXZgiiYsRKjgKdqvmIoWQvAIQbwk74sDiY3R6nQrMuHeC49/y3nUgDabkZA63dbR9GdNlN37V7Icwk7EB1Q+URnP6k9SWPVZBOjNbctgiZPlgTE0wr+IyieXB4HoJaWfqueGUvCqIKTpy8bTLBHIkrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KB2l6Hom7Es/jr83vEc+nSM1AX8WyJrcuMmjibanQNg=;
 b=SNTb0LsB+j5Eb5r6Uz1c5o5L2Ok3dnrGdPMcQ6RKaxje70JHmtSNo6kGHp3jr8Jp1eonzTP/rZeQ/RCL1J4V5ukbvsQYchMbDRL9OwWuF18bPDAYlt0my6hMdXAU6OvjSj5vUY8BvehkeLzFJocJsuUYXS8sSunfZ1PAAtTqA74w/rhHxPJTtXN3UFPewxh3TVs+CjW3qKetprVgB7Q0WddDLEiQA3xrQiwmU0XMKY2eY9y3qz46052LvVVlktcfFekSB5HRBWG5wd+ca24NQoklsxduKzSi80SuoL4ScxpIc2JLHSJ8Iv8ZkWtl6eBQg9QoBcdoHAWp3GnZ/z64OQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KB2l6Hom7Es/jr83vEc+nSM1AX8WyJrcuMmjibanQNg=;
 b=CfXAGeSE0aoHFIijnHEdMvwyDPmagaHr05EmLbKi5lY+SZdGbPyY3OrlrA9oePeyn2ZWxcfIDb5Nw8Abo7CM03wc+VnF/Luokp5GgrpR1hhrNVq1/NU9dFWK2z322qv0a6I231qk59DU6MDhKB2vI0wQ/eestRae8BuTgLBBTuQ=
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-9-julien@xen.org>
Date: Mon, 21 Jun 2021 10:21:30 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <F12B315C-8C0C-4EEE-A3DE-209C8F9EA04E@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0118.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::15) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1a31ee7f-a1f0-4db6-effa-08d9349610dd
X-MS-TrafficTypeDiagnostic: PA4PR08MB5998:|DB9PR08MB7115:
X-Microsoft-Antispam-PRVS:
	<DB9PR08MB7115327C904E184C97D907B4E40A9@DB9PR08MB7115.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 mBxxaTpu8nFzeGn0CfzuJ6nA1GCrw4jesTkR4HB+16Ot85oeIe5OLUX0K6S/80urDgAAQSg16Mn26nrGn4MopdWrr5FjHOwPmHOL+dydAkWyUQrKRKu97EEKrJzWMAmNK/llz0hqisiIFjKdT0Fe/Nba+47n3L/1iMtbybhkqYwosWLKbJSumhpg+2WoAQvEoTfoMAQH81tjMAbXeQ1D4nFB+HtypWLNgPnt/CxLMWUU5KTvE2f3eHd2X/BrFQT1KJTDV/cotw2cbTwPqWYS6T7sqcE+9kqHCPhYLNzkqc+GQXnoAkRzkQ2OHdwXuL+T51YmSUJxUfT3IdjEVera/OC+AwiIYEfyNJYVOQlZ2T52gHl3bU00903mJj2iD3e3wPJrosWpz9PgZdmibz13Dtnh/R79R5uEfCLHMb/Y4Qj90+wm6HpQrrT1pp0XIgtGDWcJru0aVeWrnCUPn90tgyBiR1V6JUxkgWXOQzEMwRTEE1C5qJsazfsuOzBBJrgc/mDsutgFi/Ux6fexlKEfmw5L8x9494hrL+PqSfW+kZlOhnKRz1ICdCec+t2MFIKyjmUlZmzwl9y60YNcJUzM2Waf5OqOZ8YhoyuAGBQXu5K/OoQW36MN9MzZgG4nyjKnOSKAswwHpQnHPX+1YbQCPFN/zQngxlg0F6aLJnVjpPeyyI9PKEWSSc0B3POGpX5N
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB6816.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(136003)(366004)(396003)(44832011)(956004)(2616005)(2906002)(16526019)(316002)(6512007)(33656002)(83380400001)(52116002)(5660300002)(186003)(26005)(478600001)(53546011)(66476007)(6506007)(66946007)(54906003)(66556008)(86362001)(4326008)(6666004)(6916009)(38100700002)(38350700002)(8676002)(6486002)(8936002)(36756003)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
 =?us-ascii?Q?hJ42lcwhpwYZu0nr3SU4y1Q/IRwKbOmJ7jqpThMoRwJiHILpVXN5DnVi6mAa?=
 =?us-ascii?Q?Piv8COVZtTfP0mjFhf1fs18W3v/iyvEJucyIGXb7+4oAPUzWLZB0Tdwu9NYh?=
 =?us-ascii?Q?iu2BERh6+5skJN134Z7kXkqhTSKY7IeX9QW9GpzFXPBxsq29lJk+SaP5oHLJ?=
 =?us-ascii?Q?a2lyGjMcDTc/sL0075uzVtKH8Nr9lwb3G9xWngaF3koR6eX9ik8pzcuQ4ccE?=
 =?us-ascii?Q?9davK1UEmGSMUqhwbwHuDPoJKYnxyPTJyh+s2O3AXrUU5KxDmrAcbgy6xmKS?=
 =?us-ascii?Q?HWqzmi6MwtiHF1cLfGdq8XcIRLlL+ZbgTFolYKNbJleTq+KoLeQOGtxVfaAo?=
 =?us-ascii?Q?hIQtLkkz/GpaqNtaazS5iwFKoC6BNF7xl0P8tVjeftIUB0PHLfjmxVNsASMj?=
 =?us-ascii?Q?n+85MNCM1QQMcix5370pxlqwzK9OW6Qdgy5uXisgs1+vC0jBwM3bnmPgzji5?=
 =?us-ascii?Q?WToZD6CvTDQdb+nmHdPh47CC2fKz+h20chAFR+B4r9Zx+IcrhwXx3GEI2D4z?=
 =?us-ascii?Q?i6rsoLNWvbyQr7IaQKsZVlMqa2f3QGp93DtVTKWkR1BOzpjuWSzW5BiCjZgD?=
 =?us-ascii?Q?3HTuoQSGBEIgrcz0YIXIasXx3/kGmvScsecMf6w/vJv8JLUce1dMI5jjymjO?=
 =?us-ascii?Q?u3iiyLYBuPCNgh89KNEYEMX9qUczrEBEFOBEIC3GcVeXwP5egnjfCQ6rxVpD?=
 =?us-ascii?Q?F7qFarJzc80J1qjOXs93SsIUGiHlA881nKCS2lOaPZT4+wUK4ybq4dmLjDrm?=
 =?us-ascii?Q?Yf7s4eCQQN76wBeyxdcQEwMFXVLDAb4EWomsCPCf796D5wAjysUrf0CCtrj7?=
 =?us-ascii?Q?m6VZiZxjuWIEDYquRwcj2dE8O3QY/3zs4K/mEa2mDElR2H/snCfIGmDNFO/B?=
 =?us-ascii?Q?6vhBO7xqIkU1HTg+jsm/htSDJG7g0V7bjs/FJ5Bt1nJs/qfobOK6E+gwZyvH?=
 =?us-ascii?Q?SsHJ8ClO6teHsnELWxYpuNBh5ySCNFHVIurPyHMjUQc/DwbJIPwT5GwMFjal?=
 =?us-ascii?Q?Ffs/D8ONAAE++Oe7hHBOyBSnIMzDz8UbaE9gSEjCLy28iU9bXpBlRev5mwDU?=
 =?us-ascii?Q?PW5glEejloI4nxC1SGFHtCV+bPlU6kAN/2NmgbOzV0qiNmuWxMp0cyqRAoD/?=
 =?us-ascii?Q?pQsjzkDkooasDE8p3UvezkKIwT5BD1AgA2qNS5e1SrW6ArF7gSWbw1AgyYDj?=
 =?us-ascii?Q?mVfdAPV17UhP4X/AIDNJlzAYJzRRkqMFtZfCvcIWiiRD7CtTnXwvQeQJjAYT?=
 =?us-ascii?Q?g6YJBM8EgE+rNI5OKtiPE1BXgVdRGbnvbzQeM+B8ivsuW/qv+fC79mgzTiiF?=
 =?us-ascii?Q?H7zuhN++IbRTwZUFtLnzCdLU?=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5998
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e0dbd496-c424-4ff0-2f46-08d93495f88d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lJ60TPyke9PkClMZ7oJvUuFnipLyRRXQMRdv6IQH3sccLu2qkX54y/2B02RD0aPbgo8ocg02JHB0NeUd2FSTZJe3CUMO7AQeS0lFxS7yytaKb59eb8SrAUtUpl6+d2mlGyvBHA3Mab/HVqri/QNnEtVVtGZrTNYR5OLXfqk4UDjee+Bb5vKMjcYSU6c4QXn32+5DrJh2q1D/M0WNSXLIqPAZXKKYYUZFQAgfZMj45nFiyzb3Er9oXjBefbcrNBzUEcVDZ4scEicaPKNk7XRMh6X/9NAHCE3Ob09EPWLiJ2/T2/2PjQuqtMDOQ7IShRDNktzZYat5Z7Axc2rScRsLqs7SJwzzuBnM7ZvN/bM0m2ZpdlE8nDO8DQw9G+/kjQD6hbbrdgCdMbtvZgPJx4J6Pm2Ly+6ur/cJfuIYUc8NAzOr2Ngn94wmZZXuzzX5Dr4NmbN1yvwgkAdcPYKVUn4/jwxb1T1Jg4IYD7cwnuIwkoGLBhTj70eaNZlLbuhyvDq6pYGPfoxVbJOl9vDa2B3MFMEK4wJd970wVU3rZqTr3YfiVuDHmH+/JM6alPd6OWmztnEdGI8J2xKEIiOWfEMO9qPsFT1CR8zW4WWoZYAgv/RWIDas9ZCwI2AVbwYvUMxxk7C6BOTxnQzsTp9yI02iLhAeKy5rAht39YCtur72WLM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(39860400002)(136003)(346002)(46966006)(36840700001)(36756003)(186003)(47076005)(54906003)(70206006)(70586007)(6486002)(4326008)(2906002)(82310400003)(316002)(82740400003)(44832011)(86362001)(8676002)(8936002)(5660300002)(16526019)(36860700001)(6862004)(6506007)(356005)(53546011)(336012)(2616005)(81166007)(107886003)(956004)(6512007)(6666004)(26005)(83380400001)(478600001)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:22:18.2225
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1a31ee7f-a1f0-4db6-effa-08d9349610dd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7115



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the restore code assumes the stream will contain at
> most one in-flight request per connection. In follow-up changes, we
> will want to transfer multiple in-flight requests.
> 
> The function read_state_buffered_data() is now extended to restore
> multiple in-flight requests. Complete requests will be queued as
> delayed requests; if there is a partial request (only the last one
> can be), it will be used as the current in-flight request.
> 
> Note that we want to bypass the quota check for delayed requests, as
> the new Xenstore may have a lower limit.
> 
> Lastly, there is no need to change the specification, as there was
> no restriction on the number of in-flight requests preserved.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_core.c | 56 ++++++++++++++++++++++++++++-----
> 1 file changed, 48 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index a5084a5b173d..5b7ab7f74013 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1486,6 +1486,10 @@ static void process_message(struct connection *conn, struct buffered_data *in)
> 	enum xsd_sockmsg_type type = in->hdr.msg.type;
> 	int ret;
> 
> +	/* At least send_error() and send_reply() expect conn->in == in */
> +	assert(conn->in == in);
> +	trace_io(conn, in, 0);
> +
> 	if ((unsigned int)type >= XS_TYPE_COUNT || !wire_funcs[type].func) {
> 		eprintf("Client unknown operation %i", type);
> 		send_error(conn, ENOSYS);
> @@ -1515,6 +1519,23 @@ static void process_message(struct connection *conn, struct buffered_data *in)
> 	conn->transaction = NULL;
> }
> 
> +static bool process_delayed_message(struct delayed_request *req)
> +{
> +	struct connection *conn = req->data;
> +	struct buffered_data *saved_in = conn->in;
> +
> +	/*
> +	 * Part of process_message() expects conn->in to contain the
> +	 * processed request. So save the current conn->in and restore it
> +	 * afterwards.
> +	 */
> +	conn->in = req->in;
> +	process_message(req->data, req->in);
> +	conn->in = saved_in;
> +
> +	return true;
> +}
> +
> static void consider_message(struct connection *conn)
> {
> 	if (verbose)
> @@ -1582,7 +1603,6 @@ static void handle_input(struct connection *conn)
> 	if (in->used != in->hdr.msg.len)
> 		return;
> 
> -	trace_io(conn, in, 0);
> 	consider_message(conn);
> 	return;
> 
> @@ -2611,14 +2631,20 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
> 	unsigned int len;
> 	bool partial = sc->data_resp_len;
> 
> -	if (sc->data_in_len) {
> +	for (data = sc->data; data < sc->data + sc->data_in_len; data += len) {
> 		bdata = new_buffer(conn);
> 		if (!bdata)
> 			barf("error restoring read data");
> -		if (sc->data_in_len < sizeof(bdata->hdr)) {
> +
> +		/*
> +		 * We don't know yet if there is more than one message
> +		 * to process. So the len is the size of the leftover data.
> +		 */
> +		len = sc->data_in_len - (data - sc->data);
> +		if (len < sizeof(bdata->hdr)) {
> 			bdata->inhdr = true;
> -			memcpy(&bdata->hdr, sc->data, sc->data_in_len);
> -			bdata->used = sc->data_in_len;
> +			memcpy(&bdata->hdr, sc->data, len);
> +			bdata->used = len;
> 		} else {
> 			bdata->inhdr = false;
> 			memcpy(&bdata->hdr, sc->data, sizeof(bdata->hdr));
> @@ -2629,12 +2655,26 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
> 							bdata->hdr.msg.len);
> 			if (!bdata->buffer)
> 				barf("Error allocating in buffer");
> -			bdata->used = sc->data_in_len - sizeof(bdata->hdr);
> -			memcpy(bdata->buffer, sc->data + sizeof(bdata->hdr),
> +			bdata->used = min_t(unsigned int,
> +					    len - sizeof(bdata->hdr),
> +					    bdata->hdr.msg.len);
> +			memcpy(bdata->buffer, data + sizeof(bdata->hdr),
> 			       bdata->used);
> +			/* Update len to match the size of the message. */
> +			len = bdata->used + sizeof(bdata->hdr);
> 		}
> 
> -		conn->in = bdata;
> +		/*
> +		 * If the message is not complete, then it means this was
> +		 * the current processed message. All the other messages
> +		 * will be queued to be handled after restoring.
> +		 */
> +		if (bdata->inhdr || bdata->used != bdata->hdr.msg.len) {
> +			assert(conn->in == NULL);
> +			conn->in = bdata;
> +		} else if (delay_request(conn, bdata, process_delayed_message,
> +					 conn, true))
> +			barf("Unable to delay the request");
> 	}
> 
> 	for (data = sc->data + sc->data_in_len;
> -- 
> 2.17.1
> 
> 
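The restore loop in the patch above walks a byte stream holding several length-prefixed messages, queueing each complete one and keeping at most a trailing partial one as conn->in. A reduced, self-contained sketch of that walk, using a hypothetical one-field header rather than the real struct xsd_sockmsg (which also carries type, req_id and tx_id):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical reduced wire header: only the payload length. */
struct hdr { unsigned int len; };

/* Count the complete messages in buf[0..total) and report whether a
 * partial message (truncated header or payload) trails at the end,
 * mirroring the shape of the loop in read_state_buffered_data(). */
static unsigned int count_messages(const char *buf, size_t total,
				   int *partial)
{
	const char *data;
	unsigned int n = 0;
	size_t len = 0;

	*partial = 0;
	for (data = buf; data < buf + total; data += len) {
		size_t left = total - (size_t)(data - buf);
		struct hdr h;

		if (left < sizeof(h)) {
			*partial = 1;	/* header itself is incomplete */
			break;
		}
		memcpy(&h, data, sizeof(h));
		len = sizeof(h) + h.len;
		if (left < len) {
			*partial = 1;	/* payload is incomplete */
			break;
		}
		n++;		/* a complete message; real code queues it */
	}
	return n;
}
```

The key point the patch relies on is the same as here: only the last message in the stream can be partial, so everything before it can safely be handed to delay_request() for later processing.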



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:27:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145419.267558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvGDN-0004RK-ML; Mon, 21 Jun 2021 09:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145419.267558; Mon, 21 Jun 2021 09:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvGDN-0004RD-JL; Mon, 21 Jun 2021 09:27:33 +0000
Received: by outflank-mailman (input) for mailman id 145419;
 Mon, 21 Jun 2021 09:27:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YNPZ=LP=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lvGDL-0004R5-KF
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 09:27:31 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ba56565-6bf4-441b-811e-9530991ece27;
 Mon, 21 Jun 2021 09:27:29 +0000 (UTC)
Received: from AS8PR04CA0156.eurprd04.prod.outlook.com (2603:10a6:20b:331::11)
 by VI1PR08MB4383.eurprd08.prod.outlook.com (2603:10a6:803:fc::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Mon, 21 Jun
 2021 09:27:27 +0000
Received: from AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:331:cafe::3f) by AS8PR04CA0156.outlook.office365.com
 (2603:10a6:20b:331::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15 via Frontend
 Transport; Mon, 21 Jun 2021 09:27:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT060.mail.protection.outlook.com (10.152.16.160) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Mon, 21 Jun 2021 09:27:26 +0000
Received: ("Tessian outbound d6f95fd272ef:v96");
 Mon, 21 Jun 2021 09:27:26 +0000
Received: from c9b54244e0e1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A66F881C-3085-4C0B-9702-5F9488D1C81B.1; 
 Mon, 21 Jun 2021 09:27:08 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c9b54244e0e1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 21 Jun 2021 09:27:08 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PAXPR08MB6815.eurprd08.prod.outlook.com (2603:10a6:102:134::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Mon, 21 Jun
 2021 09:27:07 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%8]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 09:27:07 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO3P123CA0011.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:ba::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Mon, 21 Jun 2021 09:27:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ba56565-6bf4-441b-811e-9530991ece27
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YVXEi4qfK31avwtbIbY3UEhmsYdI5IEqgtLALzzYuvc=;
 b=zcb0erKGsJnCg297vbRxVjsrbanGNajnDbLmkX6bRn4xaAwP9pT+gQqR/D7Ez6k9Eem1MvIPPpp68HlFXjMWJ06TSd5M/Tvac0NGMmYOs9UtNHuikBEmRLJpZRfeCYdBFPQpJw8kjuH7uLEi5A/xkLsRClrxIDkVEoFSojKYYU4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f405bdbc7e8b9b0f
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-10-julien@xen.org>
Date: Mon, 21 Jun 2021 10:27:00 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <2CBBD1CA-0E66-4FEA-949A-77DE0BDEBF82@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO3P123CA0011.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::16) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a3ca0f12-bf1a-408f-44a4-08d93496c8df
X-MS-TrafficTypeDiagnostic: PAXPR08MB6815:|VI1PR08MB4383:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB438323AD523B7DE45631453EE40A9@VI1PR08MB4383.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6815
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e8fbae68-823c-4272-98a8-08d93496bcfb
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:27:26.8920
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a3ca0f12-bf1a-408f-44a4-08d93496c8df
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4383



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, only Live-Update request can be delayed. In a follow-up,
> we will want to delay more requests (e.g. transaction start).
> Therefore we want to preserve delayed requests across Live-Update.
> 
> Delayed requests are just complete "in" buffer. So the code is
> refactored to allow sharing the code to dump "in" buffer.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++--------
> 1 file changed, 32 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 5b7ab7f74013..9eca58682b51 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
> 	return NULL;
> }
> 
> +static const char *dump_input_buffered_data(FILE *fp,
> +					    const struct buffered_data *in,
> +					    unsigned int *total_len)
> +{
> +	unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
> +
> +	*total_len += hlen;
> +	if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
> +		return "Dump read data error";
> +	if (!in->inhdr && in->used) {
> +		*total_len += in->used;
> +		if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
> +			return "Dump read data error";
> +	}
> +
> +	return NULL;
> +}
> +
> /* Called twice: first with fp == NULL to get length, then for writing data. */
> const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 				     struct xs_state_connection *sc)
> {
> 	unsigned int len = 0, used;
> -	struct buffered_data *out, *in = c->in;
> +	struct buffered_data *out;
> 	bool partial = true;
> +	struct delayed_request *req;
> +	const char *ret;
> 
> -	if (in) {
> -		len = in->inhdr ? in->used : sizeof(in->hdr);
> -		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
> -			return "Dump read data error";
> -		if (!in->inhdr && in->used) {
> -			len += in->used;
> -			if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
> -				return "Dump read data error";
> -		}
> +	/* Dump any command that was delayed */
> +	list_for_each_entry(req, &c->delayed, list) {
> +		if (req->func != process_delayed_message)
> +			continue;
> +
> +		assert(!req->in->inhdr);
> +		if ((ret = dump_input_buffered_data(fp, req->in, &len)))
> +			return ret;
> 	}
> 
> +	if (c->in && (ret = dump_input_buffered_data(fp, c->in, &len)))
> +		return ret;
> +
> 	if (sc) {
> 		sc->data_in_len = len;
> 		sc->data_resp_len = 0;
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:29:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145424.267570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvGFE-00052c-2r; Mon, 21 Jun 2021 09:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145424.267570; Mon, 21 Jun 2021 09:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvGFD-00052V-W2; Mon, 21 Jun 2021 09:29:27 +0000
Received: by outflank-mailman (input) for mailman id 145424;
 Mon, 21 Jun 2021 09:29:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvGFD-00052J-FM; Mon, 21 Jun 2021 09:29:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvGFD-0005iU-8M; Mon, 21 Jun 2021 09:29:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvGFC-0003Eh-Tl; Mon, 21 Jun 2021 09:29:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvGFC-00071o-TD; Mon, 21 Jun 2021 09:29:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XbYo4uRXp3HHyJQCJpnR8slp/+tk3XeH1mM/lMqOPr4=; b=erA7PfbV1CY4cd1/Nc5MHQvRco
	K1S5/S+d4zw6Mr6hzfIhoIMDtq/dvYS389BKOUB6xek7PJPViOZnn+yQhajmcdtU21QcPpw9159Uf
	BdZJYYNQQdwJ+fAQjwZnJ5rEJmMgnG8dWdMz6y0nJLsteymWTm2UY9nowxTeyA9KmN7o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162924-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162924: trouble: blocked/broken/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-install(4):broken:regression
    qemu-mainline:build-amd64-xsm:host-install(4):broken:regression
    qemu-mainline:build-amd64-pvops:host-install(4):broken:regression
    qemu-mainline:build-armhf-pvops:host-install(4):broken:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=8f521741e1280f0957ac1b873292c19219e1fb9a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 09:29:26 +0000

flight 162924 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162924/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-amd64                   4 host-install(4)        broken REGR. vs. 152631
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                8f521741e1280f0957ac1b873292c19219e1fb9a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  305 days
Failing since        152659  2020-08-21 14:07:39 Z  303 days  559 attempts
Testing same since   162924  2021-06-20 23:06:51 Z    0 days    1 attempts

------------------------------------------------------------
541 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 173699 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:31:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:31:35 +0000
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH 10/10] tools/xenstored: Delay new transaction while
 Live-Update is pending
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210616144324.31652-11-julien@xen.org>
Date: Mon, 21 Jun 2021 10:30:41 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <111A40CF-F03A-4C03-BA32-55E1F1537F9A@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-11-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
MIME-Version: 1.0



> On 16 Jun 2021, at 15:43, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, Live-Update will, by default, not proceed if there are
> in-flight transactions. It is possible to force it by passing -F, but
> this will break any connection with in-flight transactions.
>
> There are PV drivers out there that may never terminate some
> transactions. On hosts running such guests, we would need to use -F.
> Unfortunately, this also risks breaking well-behaved guests (and even
> dom0), because Live-Update will happen as soon as the timeout is hit.
>
> Ideally, we would want to preserve transactions, but this requires some
> work and a lot of testing before it can be used in production.
>
> As a stop-gap, we want to limit the damage of -F. This patch delays any
> transaction that is started after Live-Update has been requested.
>
> If the request cannot be delayed, the connection will be stalled to
> avoid losing requests.
>
> If the connection already has a pending transaction before Live-Update,
> new transactions will not be delayed. This avoids stalling the
> connection.
>
> With this stop-gap in place, domains with long-running transactions will
> still break when using -F, but other domains which start a transaction
> in the middle of Live-Update will continue to work.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> ---
> tools/xenstore/xenstored_control.c | 10 ++++++
> tools/xenstore/xenstored_control.h |  2 ++
> tools/xenstore/xenstored_core.c    | 49 +++++++++++++++++++++++++++++-
> tools/xenstore/xenstored_core.h    |  3 ++
> 4 files changed, 63 insertions(+), 1 deletion(-)
>
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index 1c24d4869eab..a045f102a420 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -131,6 +131,11 @@ unsigned int lu_write_response(FILE *fp)
> 	return sizeof(msg) + msg.len;
> }
>
> +bool lu_is_pending(void)
> +{
> +	return lu_status != NULL;
> +}
> +
> #else
> struct connection *lu_get_connection(void)
> {
> @@ -142,6 +147,11 @@ unsigned int lu_write_response(FILE *fp)
> 	/* Unsupported */
> 	return 0;
> }
> +
> +bool lu_is_pending(void)
> +{
> +	return false;
> +}
> #endif
>
> struct cmd_s {
> diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
> index 27d7f19e4b7f..98b6fbcea2b1 100644
> --- a/tools/xenstore/xenstored_control.h
> +++ b/tools/xenstore/xenstored_control.h
> @@ -23,3 +23,5 @@ struct connection *lu_get_connection(void);
>
> /* Write the "OK" response for the live-update command */
> unsigned int lu_write_response(FILE *fp);
> +
> +bool lu_is_pending(void);
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 9eca58682b51..10b53af76ac5 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -338,7 +338,20 @@ static int destroy_conn(void *_conn)
>
> static bool conn_can_read(struct connection *conn)
> {
> -	return conn->funcs->can_read(conn) && !conn->is_ignored;
> +	if (!conn->funcs->can_read(conn))
> +		return false;
> +
> +	if (conn->is_ignored)
> +		return false;
> +
> +	/*
> +	 * For stalled connection, we want to process the pending
> +	 * command as soon as live-update has aborted.
> +	 */
> +	if (conn->is_stalled)
> +		return !lu_is_pending();
> +
> +	return true;
> }
>
> static bool conn_can_write(struct connection *conn)
> @@ -417,6 +430,12 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
> 			if (!list_empty(&conn->out_list))
> 				events |= POLLOUT;
> 			conn->pollfd_idx = set_fd(conn->fd, events);
> +			/*
> +			 * For stalled connection, we want to process the
> +			 * pending command as soon as live-update has aborted.
> +			 */
> +			if (conn->is_stalled && !lu_is_pending())
> +				*ptimeout = 0;
> 		}
> 	}
> }
> @@ -1524,6 +1543,9 @@ static bool process_delayed_message(struct delayed_request *req)
> 	struct connection *conn = req->data;
> 	struct buffered_data *saved_in = conn->in;
>
> +	if (lu_is_pending())
> +		return false;
> +
> 	/*
> 	 * Part of process_message() expects conn->in to contains the
> 	 * processed response. So save the current conn->in and restore it
> @@ -1543,6 +1565,30 @@ static void consider_message(struct connection *conn)
> 			sockmsg_string(conn->in->hdr.msg.type),
> 			conn->in->hdr.msg.len, conn);
>
> +	conn->is_stalled = false;
> +	/*
> +	 * Currently, Live-Update is not supported if there is active
> +	 * transactions. In order to reduce the number of retry, delay
> +	 * any new request to start a transaction if Live-Update is pending
> +	 * and there are no transactions in-flight.
> +	 *
> +	 * If we can't delay the request, then mark the connection as
> +	 * stalled. This will ignore new requests until Live-Update happened
> +	 * or it was aborted.
> +	 */
> +	if (lu_is_pending() && conn->transaction_started == 0 &&
> +	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
> +		trace("Delaying transaction start for connection %p req_id %u\n",
> +		      conn, conn->in->hdr.msg.req_id);
> +
> +		if (delay_request(conn, conn->in, process_delayed_message,
> +				  conn, false) != 0) {
> +			trace("Stalling connection %p\n", conn);
> +			conn->is_stalled = true;
> +		}
> +		return;
> +	}
> +
> 	process_message(conn, conn->in);
>
> 	assert(conn->in == NULL);
> @@ -1629,6 +1675,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
> 	new->pollfd_idx = -1;
> 	new->funcs = funcs;
> 	new->is_ignored = false;
> +	new->is_stalled = false;
> 	new->transaction_started = 0;
> 	INIT_LIST_HEAD(&new->out_list);
> 	INIT_LIST_HEAD(&new->watches);
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index dac517156993..258f6ff38279 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -110,6 +110,9 @@ struct connection
> 	/* Is this connection ignored? */
> 	bool is_ignored;
>
> +	/* Is the connection stalled? */
> +	bool is_stalled;
> +
> 	/* Buffered incoming data. */
> 	struct buffered_data *in;
>
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 09:37:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 09:37:20 +0000
X-Inumbo-ID: 89d918d2-37d8-4754-bb50-c6b85a0967e9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ka8N4Ch0f35EKm0tUvbUB0qb6lt5G0qJTeudjEJziC4=;
 b=2WWLYNcbMJEBayFvAvSHfTbEc3GkGuz/iOHwvszoCJb/ezkyjnlSbZBzyxxZg20AYCoBjmwlbD0sV247res9LbluggbukuEEGe8ctgmmV4RRQiAizjnyAP+BcuPDewtOTXgOAEIWyUu/RuLPOWYVZS6VVtGHlE3T01Yx7PRqSZc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b5731595b6848011
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IPRZmp2/ePj/Wa+kQvGpwq3YdPCtLSk31aTz3ONJ2kQ2U96f6VISxMf9RlFEtzSMlyKMwrg7FKDwhW7/ZMRq6Mfpobdw3ODu205qgAlt4SS1S9Oho5T8XB0PnpMEanCBWf0VubauYMKF/V6ckgTmBzU82inMvV02qWup+bXAUl/QlIwCUGFqzJdx1jNLW72jKcStxTouyK8XM56uhCdE8PBhVkX00FnCdZgo+EInXaAy95CyNQs30jZgvJ1lNTL4zkFP9BVQsk47tgbkV5GgM9rm/08/TOtbCYR0YAOXO0Nwz5rYmsbNcwSJpXX+U6vMuPsEnGf8t1fYkr6OLT5pbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ka8N4Ch0f35EKm0tUvbUB0qb6lt5G0qJTeudjEJziC4=;
 b=KTfdCUXvZUy6bstRVJoCcRl2M1Fn5150P3tgf6COm3TL+wZjo/qb3DzgDpn0iE4yVHT+cCPD6ICawsHDfllgies1bCshotBfvFWM0l39NnnK52QFf9If2ZNskAnIb1S+1/itO2W5l4Sz/Z66PxMEyY3RoDbbB1rnhYE29FYUMRJS0dPrLZeQaghuBmYUt8EijVTSboL+zb+g9LyBKPSVcl+71aW3g3N/bACKpOIqDsEM6WGzhnFGmaOR5HpqwhBvmVHUHEd7s762Q+sALendDED572kdoK1EhQSIbgEYXiBtbEBJUBQ/MAFb+dAcMvrINcBFLL9+xeydn4zhY1oJlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210617173857.6450-1-julien@xen.org>
Date: Mon, 21 Jun 2021 10:36:47 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <3AC8670C-7FA8-4968-951A-E67E95B0AF97@arm.com>
References: <20210617173857.6450-1-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0056.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::20) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d3acdb1c-c7ca-4092-3087-08d934982353
X-MS-TrafficTypeDiagnostic: PR3PR08MB5625:|AM0PR08MB4449:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB4449517398A53B883BE89128E40A9@AM0PR08MB4449.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5625
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	25308b83-7505-43a1-8739-08d934981ae3
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 09:37:08.0794
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d3acdb1c-c7ca-4092-3087-08d934982353
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4449



> On 17 Jun 2021, at 18:38, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> As Live-Update is asynchronous, it is possible to receive a request to
> cancel it (either on the same connection or from a different one).
> 
> Currently, this will crash xenstored because do_lu_start() assumes
> lu_status will be valid. This is not the case when Live-Update has been
> cancelled, resulting in a NULL pointer dereference that crashes
> Xenstored.
> 
> Rework do_lu_start() to check if lu_status is NULL and return an
> error in this case.
> 
> Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing the live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

> 
> ----
> 
> This is currently based on top of:
> 
> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org
> 
> This can be re-ordered if necessary.
> ---
> tools/xenstore/xenstored_control.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index a045f102a420..37a3d39f20b5 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request *req)
> 	time_t now = time(NULL);
> 	const char *ret;
> 	struct buffered_data *saved_in;
> -	struct connection *conn = lu_status->conn;
> +	struct connection *conn = req->data;
> +
> +	/*
> +	 * Cancellation may have been requested asynchronously. In this
> +	 * case, lu_status will be NULL.
> +	 */
> +	if (!lu_status) {
> +		ret = "Cancellation was requested";
> +		conn = req->data;
> +		goto out;
> +	} else
> +		assert(lu_status->conn == conn);
> 
> 	if (!lu_check_lu_allowed()) {
> 		if (now < lu_status->started_at + lu_status->timeout)
> @@ -747,7 +758,7 @@ static const char *lu_start(const void *ctx, struct connection *conn,
> 	lu_status->timeout = to;
> 	lu_status->started_at = time(NULL);
> 
> -	errno = delay_request(conn, conn->in, do_lu_start, NULL, false);
> +	errno = delay_request(conn, conn->in, do_lu_start, conn, false);
> 
> 	return NULL;
> }
> -- 
> 2.17.1
> 
> 
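The core of the fix is that the delayed request carries its own copy of the connection, so a later stage never has to reach through lu_status, which may have been torn down by an asynchronous cancellation in the meantime. A minimal sketch of that guard pattern (the struct layouts and names below are illustrative stand-ins, not the actual xenstored definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-ins for xenstored's types. */
struct connection { const char *id; };

struct lu_status_t { struct connection *conn; };

struct delayed_request {
    void *data;   /* the requesting connection, stashed when the request was delayed */
};

/* Global live-update state; becomes NULL once a cancellation is processed. */
static struct lu_status_t *lu_status;

/*
 * Mirrors the shape of the patched do_lu_start(): the connection is
 * recovered from the delayed request itself, so a NULL lu_status can be
 * reported as an error instead of being dereferenced.
 */
static const char *lu_start_step(struct delayed_request *req)
{
    struct connection *conn = req->data;

    /* Cancellation may have been requested asynchronously. */
    if (!lu_status)
        return "Cancellation was requested";

    assert(lu_status->conn == conn);
    return NULL;   /* no error: proceed with the live update */
}
```

The same connection pointer that was passed to the (hypothetical) delay function is read back from the request, which is why the real patch changes the last argument of delay_request() from NULL to conn.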



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 10:41:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 10:41:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145444.267612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvHN6-0005HZ-FL; Mon, 21 Jun 2021 10:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145444.267612; Mon, 21 Jun 2021 10:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvHN6-0005HS-CI; Mon, 21 Jun 2021 10:41:40 +0000
Received: by outflank-mailman (input) for mailman id 145444;
 Mon, 21 Jun 2021 10:41:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=22HU=LP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvHN5-0005HM-78
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 10:41:39 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9eb70eab-c82f-4d87-8a59-63724fb2ebc1;
 Mon, 21 Jun 2021 10:41:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eb70eab-c82f-4d87-8a59-63724fb2ebc1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624272098;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=4/QvMKUH8PTSygpmPmBbtHJHy6tcnDiPJec1dUIPPOg=;
  b=RWmVFwDUbjg+sCTWKgzULVds8RPZSFj2QatY6QgG/DjimBkN8DD6+9S5
   pwoLhYcKeE6qdZphqiFBTHvq7ybi3fQ3vXpnbWHMLvvz7Fxx8geqfZGPS
   psu4xRythzm6ISqcMYPA5uVGBSPLpeYZkkmeDbDpxbDI2GCTbOzfVbEbH
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46301366
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,289,1616472000"; 
   d="scan'208";a="46301366"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k6b7JYa5Jid1qJypnSIxNbwRfruvafmiRvRrf8PUx7WsGCNwC4oRDWQUTUHqic+2NX49pSLEcoCrTj+zBgrCBOLopPlVNBG5ECB8DnTnSNAkhhDaGzti15Swqoo+3bgq/XzdcR9v9VsICGzRzg3ZovxR1yOHI24XnkfFhL82gSM/c7QQvGtvcNCYaud6+PTxJdwQa2j3DD2AktcGYnf+TGGro9PdNx2O8MCX/m25SJZd3btZ4IU+wUqRs3x5k/np4jexZaJrX+3oPOpDzBI6BpjUFmrTTB+75b4ZKqw91IpmUT1jFXZRkFJRL/HGyNHjaK5rFNj2bcmM8IN3qUKqnA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+7Z5rl6BOUJjVox1tz9DXVUzFGaNOceGaxqhCmktjwo=;
 b=COveSTiUm9FoeZYhOeC6a+ODdWa/1wd7juqbTlTwEgtY1yUkZba11XeIDPxB0aJARohpdBkR9nlpKaPTubiYNcOYxKCPbcNFCHzLzJBfhHC8UNDroVqghjQrdJLLf7xWyrdWbx7TZLdaOPtJAHd1VgpvJoIYPol9+lhAWQoGnEHeS7M45HQB/BL7yVv+6IonWn5v6Dl84ModbHidFEH10K9lUAMX5TRc3HPeEyitznkYQqJL2Yj72hJucwiT1tpRaAFarlXmjXfdpLTJRgmQdd//rSoPoLz3zA1IHOJLg+TDopa+Wjzque1xOWbWPvEwobLrM2CTW5fLx/jWuE0PCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+7Z5rl6BOUJjVox1tz9DXVUzFGaNOceGaxqhCmktjwo=;
 b=ktwrLXeYamNUBCnh9/FwZXW6K79+lzEq9ETYwltIKxcUZxGAW+j96xUG6hZQ+QplojZP/p+cFMlApLU2mdMg1CvctvW9xZvJs5FSWbF3VRCbTxKdJjBH8eMVGTwc1wUY35N5K/MiEUZY854Zds0NsOV3GMevkmfvmwGdNQMzBsc=
To: Jan Beulich <jbeulich@suse.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>, Tamas
 K Lengyel <tamas@tklengyel.com>, Tim Deegan <tim@xen.org>, Juergen Gross
	<jgross@suse.com>, Alexandru Isaila <aisaila@bitdefender.com>, Petre
 Pircalabu <ppircalabu@bitdefender.com>, Dario Faggioli <dfaggioli@suse.com>,
	Paul Durrant <paul@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<persaur@gmail.com>, <christopher.w.clark@gmail.com>,
	<adam.schwalm@starlab.io>, <scott.davis@starlab.io>,
	<xen-devel@lists.xenproject.org>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
 <ff0c9f42-f45e-e78e-35b9-c030011eed8f@apertussolutions.com>
 <6d50efc1-6c13-1481-b70c-0abfa99aa610@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
Message-ID: <8a909d6b-e69c-05ce-35dd-0f6be719b5ae@citrix.com>
Date: Mon, 21 Jun 2021 11:41:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6d50efc1-6c13-1481-b70c-0abfa99aa610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0454.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 102d8348-da31-4539-9809-08d934a12237
X-MS-TrafficTypeDiagnostic: BYAPR03MB3864:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3864E74B4AD33269BF8698C0BA0A9@BYAPR03MB3864.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2803;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 102d8348-da31-4539-9809-08d934a12237
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 10:41:32.0622
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3864
X-OriginatorOrg: citrix.com

On 21/06/2021 07:58, Jan Beulich wrote:
> On 18.06.2021 22:27, Daniel P. Smith wrote:
>> On 6/18/21 8:26 AM, Jan Beulich wrote:
>>> On 18.06.2021 01:39, Daniel P. Smith wrote:
>>>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>>>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>>>> pointers to static functions. As such this commit,
>>>>  * eliminates CONFIG_XSM
>>> Following from what Andrew has said (including him mentioning your
>>> changing of certain Kconfig option defaults), I'm not convinced this is
>>> a good move. This still ought to serve as the overall XSM-yes-or-no
>>> setting. If internally you make said two variants match in behavior,
>>> that's a different thing.
>> Apologies that I did not express this clearly. What I was attempting to
>> say is the fact of the matter is that there is no logical behavior
>> difference between "XSM no" and "XSM yes with dummy policy". The only
>> difference is the mechanics of how the dummy functions get called.
>> Specifically via macro magic the dummy functions are either flipped into
>> static inline declarations that are then included into the code where
>> they are invoked or the macro magic has them ending up in the dummy.c
>> XSM module where they are wrapped in macro generated functions that are
>> set as the functions in the dummy xsm_ops structure. Thus it is always
>> the same logic being invoked, it is just mechanics of how you get to the
>> logic.
> That's what I understood, really. What I dislike is the inline functions
> going away in what we currently call !XSM.

I'm sorry, but this is an unreasonable objection.

The mess used to create the status quo *is* the majority reason why
fixing/developing XSM is so hard, and why the code is so obfuscated. To
prove this point, how many people on this email thread realise that
calls using XSM_HOOK offer 0 security under xsm_default_action()?

Having xsm_default_action() forced inline isn't obviously the right move
in the first place, and I doubt that you could even measure a
performance difference for using real function calls.

Even if there is a marginal performance difference, and I doubt that
there is, performance is far less important than de-obfuscating the code
and fixing our various security mechanisms to be first-class supported
citizens.

~Andrew
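
The two call mechanisms being debated above can be sketched side by side. This is an illustrative reduction, not Xen's actual XSM types or hook names: the same default ("dummy") decision logic is reachable either as a compile-time static inline wrapper or through a function-pointer ops table, and the observable behaviour is identical.

```c
#include <assert.h>
#include <stddef.h>

/* The "dummy policy": ordinary function holding the default decision. */
static int default_hook(int is_privileged)
{
    return is_privileged ? 0 : -1;   /* 0 = allow, -1 = deny */
}

/* Mechanism 1: hook resolved at compile time via a static inline wrapper. */
static inline int xsm_hook_inline(int is_privileged)
{
    return default_hook(is_privileged);
}

/* Mechanism 2: the same logic reached through a function-pointer table. */
struct xsm_ops_sketch {
    int (*hook)(int is_privileged);
};

static const struct xsm_ops_sketch dummy_ops = {
    .hook = default_hook,
};

static int xsm_hook_indirect(const struct xsm_ops_sketch *ops,
                             int is_privileged)
{
    return ops->hook(is_privileged);
}
```

Daniel's argument is that, since both paths invoke the same logic, collapsing them costs only the (likely unmeasurable) indirect-call overhead; Jan's objection concerns losing the inline variant in the !XSM configuration.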



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 11:32:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 11:32:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145455.267622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvIAe-0001lz-8w; Mon, 21 Jun 2021 11:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145455.267622; Mon, 21 Jun 2021 11:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvIAe-0001ls-61; Mon, 21 Jun 2021 11:32:52 +0000
Received: by outflank-mailman (input) for mailman id 145455;
 Mon, 21 Jun 2021 11:32:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvIAd-0001li-6q; Mon, 21 Jun 2021 11:32:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvIAd-0007q3-2K; Mon, 21 Jun 2021 11:32:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvIAc-0005se-Mq; Mon, 21 Jun 2021 11:32:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvIAc-0007eG-MN; Mon, 21 Jun 2021 11:32:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OuJJYc0OTR8g+kLqxh2AnRO4FrMfLtC5btEc67Hvqmc=; b=qKWaDwnCTiY9HwqqLoCDLyNMuJ
	c0Cm0c5yzG15f9eb16TfK52EYaFKo+h4cdWpN78jjVyMfmK0VD8ihrCpM/GieOHdS9qC1FHA/nqIA
	vz+LqJuF+71NSby872XQZOalmu9WeDEyJtX8NE2E6VSzDf8o71sgUXMvRHfjBuGWkoFw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162926: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a63914d3f603580e5aeceb5edbafe56688210141
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 11:32:50 +0000

flight 162926 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162926/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64                   4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a63914d3f603580e5aeceb5edbafe56688210141
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   17 days
Failing since        162368  2021-06-04 15:42:59 Z   16 days   37 attempts
Testing same since   162900  2021-06-19 07:19:12 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

(No revision log; it would be 2173 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 11:39:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 11:39:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145461.267636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvIHD-0002TD-1v; Mon, 21 Jun 2021 11:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145461.267636; Mon, 21 Jun 2021 11:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvIHC-0002T6-VA; Mon, 21 Jun 2021 11:39:38 +0000
Received: by outflank-mailman (input) for mailman id 145461;
 Mon, 21 Jun 2021 11:39:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f9W1=LP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvIHB-0002Sz-Pr
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 11:39:37 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e4ae187-a617-4c8d-a929-c8b651bc30c4;
 Mon, 21 Jun 2021 11:39:36 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2054.outbound.protection.outlook.com [104.47.12.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-7okBVBkwMeS8GEPwXFYsGg-1; Mon, 21 Jun 2021 13:39:33 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2607.eurprd04.prod.outlook.com (2603:10a6:800:58::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.15; Mon, 21 Jun
 2021 11:39:30 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Mon, 21 Jun 2021
 11:39:30 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0056.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4b::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Mon, 21 Jun 2021 11:39:29 +0000
X-Inumbo-ID: 2e4ae187-a617-4c8d-a929-c8b651bc30c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624275575;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qVvdvaKSlEorf5ozDin6rJuLWD4MJm85bcddg6oLp0g=;
	b=fqIATg0R39+/ybRyaQtXcps/0TNAMKgeRT6iWsUa1gdBEuvSR3O94CLFdwyMSLZXbiQfCW
	opZtpVxq+y/v+IssfcTpyrj9OaIUDdpWa6DyQaUHIQBiT/WpRbOPm2yFJi+yhvDRBTr9V+
	47/2MRb5sN6+YCVFqNKVKN0dOdoKQSo=
X-MC-Unique: 7okBVBkwMeS8GEPwXFYsGg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Nf06IVAOSJDPQKbqyMoN2xFWFjUnjK6xTD/QaKwweZiSaGbMpqMac+/0FCjj+rxh1XBdqMrFCp8opPPUQx2BtEz/lXujfOAOW1QaZ+IWnogVyM6OetIx2gkhdr4nqHUusEn5K/us4pULsG8R3///Ea3KolFMgcnbyrOyt1LaTyt5u3ga4EfyEI9o1lxYawX+MlPX8MwIeHzXekp+SK5vtD/hd40KzIZNUHjPwO8K9oi9lNZI5SSvfCKzC1lGn4k1VD6JWy0+CqYeOT+zfqerGFVg2uYRxeDMxId9vAklQbQG7abkafAeH4e+MyvAvcU8hm2b/vV4vDn2V5XZuxi1BQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y2K3k6ILRqsCQVyzQYQIMug8zuW08dD6yZrRB9WIvJ0=;
 b=P/symqb/3Q4ql2LXb/6ALRJHSTSMKWL51PcVbGuw/XBL/dIqIxD9uxEp9zN8rVqzPMfRLRylnnB575TtseoZXXMtKlIyXHwW1OFev/wCq+EsB7/8Q7zLjoqhZjWfMHQD1qYQ2Cs7faYxJMhODA3Mqt3ETleJdMpX7Z0UI11+Oxmpm2WeoR4IR4aecRc9LaGYojjXKZsSVzpkrhelMDreOxOEAYHPbG39o2Z1zeVpjfn6qMfHMzf1I2odEhbLNzeJ/Q/15n0PAee9mzN7lxXCxhkwkS/+RLjL14q4EE7PHZl00wJ1rtNi6ajwCDSosTPBjM2vaNCh+YRAhyPep3MqRQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: apertussolutions.com; dkim=none (message not signed)
 header.d=none;apertussolutions.com; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <c8bd347f-cf14-8b86-81f7-51c035c5c972@suse.com>
 <ff0c9f42-f45e-e78e-35b9-c030011eed8f@apertussolutions.com>
 <6d50efc1-6c13-1481-b70c-0abfa99aa610@suse.com>
 <8a909d6b-e69c-05ce-35dd-0f6be719b5ae@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51ca858a-b3ca-ed35-9dd3-22bf81ca12e3@suse.com>
Date: Mon, 21 Jun 2021 13:39:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <8a909d6b-e69c-05ce-35dd-0f6be719b5ae@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0056.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::6) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f7f087f3-dfd2-492f-4d63-08d934a93b8f
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2607:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB26071D323D18CC013C944797B30A9@VI1PR0401MB2607.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3173;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7f087f3-dfd2-492f-4d63-08d934a93b8f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2021 11:39:30.5918
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9JcbCBbhUqrssFJl7DoUDD75V8+94UDhPLR2Ev2zxeE228Z0CYUGLmOR7OGdorXwknJ2KrRSbiQirhQzpKefsg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2607

On 21.06.2021 12:41, Andrew Cooper wrote:
> On 21/06/2021 07:58, Jan Beulich wrote:
>> On 18.06.2021 22:27, Daniel P. Smith wrote:
>>> On 6/18/21 8:26 AM, Jan Beulich wrote:
>>>> On 18.06.2021 01:39, Daniel P. Smith wrote:
>>>>> The only difference between !CONFIG_XSM and CONFIG_XSM with !CONFIG_XSM_SILO and !CONFIG_XSM_FLASK
>>>>> is whether the XSM hooks in dummy.h are called as static inline functions or as function
>>>>> pointers to static functions. As such this commit,
>>>>>  * eliminates CONFIG_XSM
>>>> Following from what Andrew has said (including him mentioning your
>>>> changing of certain Kconfig option defaults), I'm not convinced this is
>>>> a good move. This still ought to serve as the overall XSM-yes-or-no
>>>> setting. If internally you make said two variants match in behavior,
>>>> that's a different thing.
>>> Apologies that I did not express this clearly. What I was attempting to
>>> say is the fact of the matter is that there is no logical behavior
>>> difference between "XSM no" and "XSM yes with dummy policy". The only
>>> difference is the mechanics of how the dummy functions get called.
>>> Specifically via macro magic the dummy functions are either flipped into
>>> static inline declarations that are then included into the code where
>>> they are invoked or the macro magic has them ending up in the dummy.c
>>> XSM module where they are wrapped in macro generated functions that are
>>> set as the functions in the dummy xsm_ops structure. Thus it is always
>>> the same logic being invoked, it is just mechanics of how you get to the
>>> logic.
>> That's what I understood, really. What I dislike is the inline functions
>> going away in what we currently call !XSM.
>
> I'm sorry, but this is an unreasonable objection.
>
> The mess used to create the status quo *is* the majority reason why
> fixing/developing XSM is so hard, and why the code is so obfuscated. To
> prove this point, how many people on this email thread realise that
> calls using XSM_HOOK offer 0 security under xsm_default_action()?
>
> Having xsm_default_action() forced inline isn't obviously the right move
> in the first place, and I doubt that you could even measure a
> performance difference for using real function calls.
>
> Even if there is a marginal performance difference, and I doubt that
> there is, performance is far less important than de-obfuscating the code
> and fixing our various security mechanisms to be first-class supported
> citizens.

What I don't understand from all you say is why you think that having
an as-if-no-XSM build configuration, without any way to switch to an
alternative model (i.e. the XSM=n that we have right now), is a bad
thing. I don't mind the XSM=y case getting improved, but I don't see
(yet) why it is a good thing to force this onto everyone.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 21 15:54:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 15:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145591.267726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMF8-0000og-N5; Mon, 21 Jun 2021 15:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145591.267726; Mon, 21 Jun 2021 15:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMF8-0000oZ-Jp; Mon, 21 Jun 2021 15:53:46 +0000
Received: by outflank-mailman (input) for mailman id 145591;
 Mon, 21 Jun 2021 15:53:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+afF=LP=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lvMF8-0000oT-1J
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 15:53:46 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b44c409-0a83-47dd-acba-1c5a01d65538;
 Mon, 21 Jun 2021 15:53:44 +0000 (UTC)
X-Inumbo-ID: 6b44c409-0a83-47dd-acba-1c5a01d65538
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624290824;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=N43ch99DbZIDDdSWa7kxrMjr29Vk4s1i1VrHju5TFdk=;
  b=OqLIw0ka0XdxTIR0DJg60b0s5addP/R/AEgIF+GjX2mQgwjc0MAGS8zx
   Sh8cW6IzjMYPoSkvtiME5igQrbrv6io1Vn+KWp9xZCLyY4+IOp5CvV/bC
   nh4whQ5HLWa7iR8zqhKZCze2BasUVFN+OzjNUdt9AsTAdw7QpeeH3y7NM
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46606871
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,289,1616472000"; 
   d="scan'208";a="46606871"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b9hbQqG15TexcEbuMVEu1l6g2nJIduSSND3QVgyrSETpMUtxarS/SJkR7cc32pJVH2VoX1YL8kT3hcv10+vOdVKdmAVmaMgxiLG774sovk71PLHLytasrA7diGcefy/nH9fRfV7HIb6CY6Rh1Cgy+ga5ya+CpWs7UMveLfFm9kdBp0DuYu+nL9h7j0Gyr8/1R/4n8cAkj+gLYiPq4XYpsFuvqi7DCPek5YAC+7e9zLKIIJM0x/2zM1NSO40DJtrbcO+n1rX8QYPoU+F1m8UPnSnSeXQx7xYBe0FpIrTDn+oMySXO5DRBtAZOHUblh56NlDMWhv2bEeaMwPMozpYrEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N43ch99DbZIDDdSWa7kxrMjr29Vk4s1i1VrHju5TFdk=;
 b=nyH2YAcDWXJzPN/44PZrne3cDXWfxhLQJlw+du8Z8IX4sMMZ+erhcZWs8hVNL2/N3WzrKIZc5yj8qqOnjvCZFgAgbYE4rBtWmF/q9bvA2JAod69RMRmo4GM1t1afB9Kk29c/xXCWbg4XcYTEr4P+cKKf8HtXMVolXnpkkOIw9sBs4F0xqpXzVnp7iY+pV53dMhxNE2JYaubaICNeN5r6Bu3sXczOhdLjqgnVEtoxYlgaoNFEbD+L+A2Y18xd+a5D1C4obFGRRpYwQ152DJm/NpyNqIQudmAMQ8wy79bQBNotpxeBfmNNQDI9c9AErq6wWwze+3ggY10vvBMyXZ/ItQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N43ch99DbZIDDdSWa7kxrMjr29Vk4s1i1VrHju5TFdk=;
 b=WXeW/qLV2qN4yc7iidAa27+5pizZYsRnySzbncEKgTqLDJVCSFRVx8Hl2cE4qnNr8ooVdierg5/pRE6UihoCxsZ/S7/kD8uYc4XsheEcFOGXVFMMwSEi4AkLL5W+owUt9MMeWU78+VN2TWTmefvpdO/GqjTkj54RyNCkbL3osJo=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RESEND PATCH 00/12] golang/xenlight: domain life cycle support
Thread-Topic: [RESEND PATCH 00/12] golang/xenlight: domain life cycle support
Thread-Index: AQHXUNyxCw9CepZKn023Zd4/zB8JpKseyXiA
Date: Mon, 21 Jun 2021 15:53:39 +0000
Message-ID: <AD0B1F68-3FAE-4F3C-BCF5-9623A76A0A9B@citrix.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f00a466f-d47a-4ca1-c87f-08d934ccbcbc
x-ms-traffictypediagnostic: PH0PR03MB5912:
x-microsoft-antispam-prvs: <PH0PR03MB591286B74235E635C0850AE4990A9@PH0PR03MB5912.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <D8F94C9DD6072D44A073041BE170D77A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f00a466f-d47a-4ca1-c87f-08d934ccbcbc
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2021 15:53:39.2713
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: eXB+2SPPwbPxCF2MLtoI+1+EIK8sb/EZ3jL5TKqTfm5B+hWqd8/iaR0nfPYuVb5u+FU+UM7aZLxehusL5F8faI7ZjdmHOBl61ExPU/7Zhf8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5912
X-OriginatorOrg: citrix.com

> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> The primary goal of this patch series is to allow users of the xenlight
> package to manage a full domain life cycle. In particular, it provides
> support for receiving domain death events so that domain shutdown,
> reboot, destroy, etc. can be handled. And, it addresses issues found
> when using the package to boot domains with various configurations.
> 
> These patches address several things (e.g. bug fixes, code style,
> conveniences, new wrapper functions), but are all work towards the final
> goal of allowing a package user to manage a full domain life cycle.
> 
> Nick Rosbrook (12):

OK, I've checked in the following patches: (1, 2, 4, 5, 6, 9, 10, 11):

>  golang/xenlight: update generated code
>  golang/xenlight: fix StringList toC conversion
>  golang/xenlight: export keyed union interface types
>  golang/xenlight: use struct pointers in keyed union fields
>  golang/xenlight: rename Ctx receivers to ctx

>  golang/xenlight: add DomainDestroy wrapper
>  golang/xenlight: add SendTrigger wrapper
>  golang/xenlight: do not negate ret when converting to Error

The following have not been checked in due to outstanding review comments (patches 3, 7, 12), or because they depend on a patch not being checked in (patch 8):

>  golang/xenlight: fix string conversion in generated toC functions
>  golang/xenlight: add logging conveniences for within xenlight
>  golang/xenlight: add functional options to configure Context
>  golang/xenlight: add NotifyDomainDeath method to Context

Thanks,
 -George


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 16:11:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 16:11:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145608.267769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMWM-0003yF-1g; Mon, 21 Jun 2021 16:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145608.267769; Mon, 21 Jun 2021 16:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMWL-0003y8-UV; Mon, 21 Jun 2021 16:11:33 +0000
Received: by outflank-mailman (input) for mailman id 145608;
 Mon, 21 Jun 2021 16:11:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gHMo=LP=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1lvMWK-0003xw-9D
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 16:11:32 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13c5dde4-a379-4324-86e8-893671a03d73;
 Mon, 21 Jun 2021 16:11:31 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id bm25so16928998qkb.0
 for <xen-devel@lists.xenproject.org>; Mon, 21 Jun 2021 09:11:31 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id t30sm10129212qkm.11.2021.06.21.09.11.29
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 21 Jun 2021 09:11:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13c5dde4-a379-4324-86e8-893671a03d73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=AKyi6iJ5iPX7DVHeybiBSfN2Gudg4/i02xAoUSMGk70=;
        b=AEG6KKRefrncUFo7JlLKRfr7osC/yZ0WSDrTuWyJlWN9FGaDa6YtZjPctDjHgQ8fUT
         rUdIqNJhkwKJlaAv68aTEr6TkWnuCQ5FuW2Fxl64ewrIPVb20daFyWKQuOCzY1Fmzx7M
         c7zBslbbwD7/EkeCfO9Huqq7X81teilioJFKeqEl8MYh8OAOTPterMvXEKgRgkb9OsrM
         MrwTBIk5ittBzJa8LKPz2Tn+IhapSIE052+K4RYG6uR7gOMReu/DE0JCgG0VN67SvOuG
         +1TEc3tLZt2Lx9Fv1mxlpAMfwXVqsg8XNBPOcGldJzbUSDrG8mNAnv33yIHseTmPThIZ
         Yq6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=AKyi6iJ5iPX7DVHeybiBSfN2Gudg4/i02xAoUSMGk70=;
        b=LxtIepvLJAUCZDvWp3BzgUz0EmI/jKU3PAjcvVNJS4oIj9fejUevuqVu1CfLERaaSl
         kOFHLBeDlNlkaapD//Lqa3sRpav+VZ7drMHm7liFeOvEc1vPG11ShtewqL5IJ8JYMmXU
         qzNcSiQZ0Ii5JXkC43jbJ9SseNJgsud6Q9XisfcepjaSoUpMGyc9CAS4pp3gBMaU+8cE
         5ZejXy43xepoQtcjWq0U3h9qK4DOiRPKV86Lyln0JL3WrlrYxh4KA6FQVn/CwsfYd2RP
         sUwTDAgvyGSGxtMnmLI9qqf9TZ+PSK68Opl2zbeJ1LMiFi41NVE4STbGiQvytL88WUCW
         KEWw==
X-Gm-Message-State: AOAM5307HrJ9+1d9hdqNZOxQcMUooZl2m7jqVwIEg/8SMZh91e5Dh0bz
	TU3F94gn2gFNFVWd63TkIDw=
X-Google-Smtp-Source: ABdhPJyGsps87WvVsXQ/ydIDuU2vK4OUVkjVK9nn9Z3/RYYWW27YD3SI4WD7M+wF+CmmJ0hC6ISCFg==
X-Received: by 2002:a37:b44:: with SMTP id 65mr23760153qkl.248.1624291890714;
        Mon, 21 Jun 2021 09:11:30 -0700 (PDT)
Date: Mon, 21 Jun 2021 12:11:27 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 03/12] golang/xenlight: fix string conversion in
 generated toC functions
Message-ID: <YNC6LzVHXCcNfg+E@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <06763aceff41167d3d3bbd603f729572c1f55c77.1621887506.git.rosbrookn@ainfosec.com>
 <6BAF6F60-EC63-41AC-A46E-2045E746C7E1@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6BAF6F60-EC63-41AC-A46E-2045E746C7E1@citrix.com>

On Fri, Jun 18, 2021 at 11:00:26AM +0000, George Dunlap wrote:
> 
> 
> > On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > In gengotypes.py, the toC functions only set C string fields when
> > the Go strings are non-empty. However, to prevent segfaults in some
> > cases, these fields should always at least be set to nil so that the C
> > memory is zeroed out.
> > 
> > Update gengotypes.py so that the generated code always sets these fields
> > to nil first, and then proceeds to check if the Go string is non-empty.
> > And, commit the new generated code.
> > 
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> So wait — if you do
> 
> var foo C.typename
> 
> Then golang won’t automatically zero out `foo`?
> 
> That seems like a bug really; but assuming this fixes real behavior you’ve encountered:

I would have to dig in again to figure out exactly what Go/cgo is doing
here, and whether or not this is a bug. But, the behavior I observed was
that without these nil assignments, I would sometimes get segfaults in
libxl_string_copy. This patch ensures that libxl__str_dup is not called
in the empty string case, thus avoiding the segfault.
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 16:12:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 16:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145611.267779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMWv-0004Ua-AS; Mon, 21 Jun 2021 16:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145611.267779; Mon, 21 Jun 2021 16:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMWv-0004UT-7X; Mon, 21 Jun 2021 16:12:09 +0000
Received: by outflank-mailman (input) for mailman id 145611;
 Mon, 21 Jun 2021 16:12:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQgB=LP=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lvMWt-0004UN-Qp
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 16:12:07 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53dd6fcd-32fe-498f-ae2b-9efa961282c3;
 Mon, 21 Jun 2021 16:12:06 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5D1612191C;
 Mon, 21 Jun 2021 16:12:05 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 049FA118DD;
 Mon, 21 Jun 2021 16:12:04 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id fWFAOlS60GC2KAAALh3uQQ
 (envelope-from <dfaggioli@suse.com>); Mon, 21 Jun 2021 16:12:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53dd6fcd-32fe-498f-ae2b-9efa961282c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624291925; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tAVEVsbsfjXCOOTXv/NHanBYHSaedMRbDLIpDJdYQs0=;
	b=Himgh+ETRFtBu32tfpOv3hhkEY18IxcnH5pHJKXnXN/e/6noJ6vwBg1qowsQ79GXxOkOcp
	ifT4fVdI9HQTCplGzuwydWqJNdn2jZ0fx4YbI2ppQzClvGld31fSrx4gogXLMDuCDlJrvv
	7DINYpFJPkQIqgm3b3yQ8mXHkcDhDoY=
Message-ID: <4302ad195e7072d937c76ed955d90792a0c301d8.camel@suse.com>
Subject: Re: [PATCH] credit2: make sure we pick a runnable unit from the
 runq if there is one
From: Dario Faggioli <dfaggioli@suse.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	=?UTF-8?Q?Micha=C5=82_Leszczy=C5=84ski?=
	 <michal.leszczynski@cert.pl>, Dion Kant <g.w.kant@hunenet.nl>, Jan Beulich
	 <jbeulich@suse.com>
Date: Mon, 21 Jun 2021 18:12:04 +0200
In-Reply-To: <5D80842F-4479-46CC-A391-28E4EF364C7E@citrix.com>
References: <162221476843.1378.16573083798333423966.stgit@Wayrath>
	 <5D80842F-4479-46CC-A391-28E4EF364C7E@citrix.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-Uxd0edeoaGAyn99Hj6Mb"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


Hi George (and sorry for the delay in replying),

On Mon, 2021-06-07 at 12:10 +0000, George Dunlap wrote:
> > On May 28, 2021, at 4:12 PM, Dario Faggioli <dfaggioli@suse.com>
> > wrote:
> > Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
> > Reported-by: Dion Kant <g.w.kant@hunenet.nl>
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Thanks for tracking this down, Dario!
> 
Hehe, this was a nasty one indeed! :-P

> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> Just one comment:
> > @@ -3496,8 +3500,7 @@ runq_candidate(struct csched2_runqueue_data
> > *rqd,
> >          * some budget, then choose it.
> >          */
> >         if ( (yield || svc->credit > snext->credit) &&
> > -             (!has_cap(svc) || unit_grab_budget(svc)) &&
> > -             unit_runnable_state(svc->unit) )
> >             snext = svc;
> 
> By the same logic, shouldn’t we also move the `(!has_cap() …)` clause
> into a separate `if(x) continue` clause? There may be runnable units
> further down the queue which aren’t capped / haven’t exhausted their
> budget yet.
> 
That's actually a very good point. I think, however, that it's even
more complicated than this.

In fact, if I move the cap+budget check into its own `if` above this one,
it can happen that some budget is grabbed by the unit. If, however, we
then don't pick it (because of priority), we need to have it return the
budget right away, which is tricky.

So, I came up with the solution of turning the above `if` into this:

        /*
         * If the one in the runqueue has more credit than current (or idle,
         * if current was not runnable) or if current is yielding, we can try
         * to actually pick it.
         */
        if ( (yield || svc->credit > snext->credit) )
        {
            /*
             * The last thing we need to check is whether the unit has enough
             * budget, in case it is capped. We need to do it now, when we are
             * otherwise sure that we want to pick it already (rather than in
             * its own independent 'if'), to avoid the hassle of needing to
             * return the budget (which we'd have to if we checked and grabbed
             * it but then decided to run someone else).
             */
            if ( has_cap(svc) && !unit_grab_budget(svc) )
                continue;

            /* And if we have got this far, we are done. */
            snext = svc;
            break;
        }

Of course, something else we could do is pull the
`if ( yield || svc->credit > snext->credit )` check "out" of all the
other checks, i.e., do the remaining checks only if we are yielding or
if the unit in the runq actually has higher priority.

Efficiency-wise, I don't think there would be much difference. The
latter solution means more code churn (we would basically be re-indenting
almost the entire `list_for_each_safe` body one level to the right), but
it might be more readable and easier to understand and follow in the
long run.
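For what it's worth, the trade-off can be modelled in a few lines of plain C. This is only a toy sketch of the restructured `if` quoted above, where budget is grabbed only once we otherwise want the unit; `struct unit`, `grab_budget()` and `pick()` are simplified stand-ins, not the real csched2 structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the restructured runq scan: names and types are
 * simplified stand-ins for the csched2 ones. */
struct unit {
    int  credit;
    bool capped;
    int  budget;
};

/* Grab one unit of budget if any is left; mirrors the role of
 * unit_grab_budget() in the sketch above. */
static bool grab_budget(struct unit *u)
{
    if (u->budget <= 0)
        return false;
    u->budget--;
    return true;
}

/* Scan the queue and return the index of the unit picked, or -1. The
 * cap/budget check runs only once we otherwise want the unit, so budget
 * is never grabbed for a unit we then decline to run. */
static int pick(const struct unit *snext, struct unit *q, size_t n, bool yield)
{
    for (size_t i = 0; i < n; i++) {
        struct unit *u = &q[i];

        if ( yield || u->credit > snext->credit )
        {
            if ( u->capped && !grab_budget(u) )
                continue;   /* capped and out of budget: keep scanning */
            return (int)i;  /* otherwise this candidate wins */
        }
    }
    return -1;
}
```

In this model, a capped unit with no budget left is skipped without any budget ever being grabbed and handed back, which is the hassle the restructuring avoids.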

So, any preference? :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




From xen-devel-bounces@lists.xenproject.org Mon Jun 21 16:19:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 16:19:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145618.267790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMeA-0005Hv-39; Mon, 21 Jun 2021 16:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145618.267790; Mon, 21 Jun 2021 16:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMeA-0005Ho-0C; Mon, 21 Jun 2021 16:19:38 +0000
Received: by outflank-mailman (input) for mailman id 145618;
 Mon, 21 Jun 2021 16:19:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gHMo=LP=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1lvMe8-0005Hi-Bm
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 16:19:36 +0000
Received: from mail-qk1-x72d.google.com (unknown [2607:f8b0:4864:20::72d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81a0a5fb-9272-4fbe-9261-c797ce43203a;
 Mon, 21 Jun 2021 16:19:35 +0000 (UTC)
Received: by mail-qk1-x72d.google.com with SMTP id w21so18109011qkb.9
 for <xen-devel@lists.xenproject.org>; Mon, 21 Jun 2021 09:19:35 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id 9sm3813425qkj.123.2021.06.21.09.19.33
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 21 Jun 2021 09:19:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81a0a5fb-9272-4fbe-9261-c797ce43203a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=V7QToCOyN8s1F99hOvuuZdcA0iA3LZFYKSsCWLq+9vg=;
        b=MxG9drfY/W60SkzgHIcpFgyX91L4nLM/SJsK9B5lwHCthIYMFSXm9vgCCW04m8XQW7
         fVHR43tgexLKh45Lrx/WRtdWGvJ5GdgS1kOmN41iPHldjuaAGzeC2v4IdqHorQ3t0fMb
         c4dOTZQewtVS6qV9gZOigK4g5jxopAkS9Q7hoxcj8FoSkBbomuJJHGzOBTwYu7xJcc1W
         HF87Y53vZVjxEOICDKVz90M9PGMuv6DgKnhpHBla8KKo9nXhRopxtOrP406BCoY7uI1l
         o/Z43RF0UXcb1YzRDDm6HvL/3BMVGWQFyh8bQiHgB5c41iW+JvTf1O4yW7wwlegJVzSg
         zY6w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=V7QToCOyN8s1F99hOvuuZdcA0iA3LZFYKSsCWLq+9vg=;
        b=uWR7SNuXbStjo5oCDhrf03N5WQqMALGUuE12v9i4AcqVKZ66gEEwJwZp325/yAECzn
         zSai1//1SZAyT0MkM+fucHKcr9iS+utwV+mm3GWVKRtRgTTtEs0PAZarp/A2bWrLBtIM
         Mrqr8ZehF41SppqZRsRiAFrLW5KxAXDVsoGZPrIMXWAliYBb+V66jLj8/dC8cdD7KMWE
         sIf31enCa6Yz16TZNzeCKe1aSZfpG6cd8gVr7VPqAtemmz/UWbIa4JpofVYL2L9fKd1v
         cRPmCNhkepApPTh4aKHDLaQN8AlOJe6+pinqWGTQ/0YBCPokBw57lin/CCD92EP70s9e
         LVIw==
X-Gm-Message-State: AOAM533trZmTsqjIEBdG1Zfca2Fbo1CIy/TLmVH+kllKQ77ko3ftncMf
	rF34NM8TXyGcEUCzLF0MF00=
X-Google-Smtp-Source: ABdhPJzxDU7XfQCxb84tsK3YQm6WF/kI6dXosXRxwhKPTYeTeHjB7kuxbpezXRCsv1X5T1pzuvEKOw==
X-Received: by 2002:ae9:c115:: with SMTP id z21mr23962335qki.458.1624292374992;
        Mon, 21 Jun 2021 09:19:34 -0700 (PDT)
Date: Mon, 21 Jun 2021 12:19:31 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 00/12] golang/xenlight: domain life cycle support
Message-ID: <YNC8E1wwSG54RVzC@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <AD0B1F68-3FAE-4F3C-BCF5-9623A76A0A9B@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <AD0B1F68-3FAE-4F3C-BCF5-9623A76A0A9B@citrix.com>

On Mon, Jun 21, 2021 at 03:53:39PM +0000, George Dunlap wrote:
> 
> 
> > On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> > 
> > The primary goal of this patch series is to allow users of the xenlight
> > package to manage a full domain life cycle. In particular, it provides
> > support for receiving domain death events so that domain shutdown,
> > reboot, destroy, etc. can be handled. And, it addresses issues found
> > when using the package to boot domains with various configurations.
> > 
> > These patches address several things (e.g. bug fixes, code style,
> > conveniences, new wrapper functions), but are all work towards the final
> > goal of allowing a package user to manage a full domain life cycle.
> > 
> > Nick Rosbrook (12):
> 
> OK, I’ve checked in the following patches: (1, 2, 4, 5, 6, 9, 10, 11):
> 
> >  golang/xenlight: update generated code
> >  golang/xenlight: fix StringList toC conversion
> >  golang/xenlight: export keyed union interface types
> >  golang/xenlight: use struct pointers in keyed union fields
> >  golang/xenlight: rename Ctx receivers to ctx
> 
> >  golang/xenlight: add DomainDestroy wrapper
> >  golang/xenlight: add SendTrigger wrapper
> >  golang/xenlight: do not negate ret when converting to Error
> 
> The following have not been checked in due to outstanding review comments (patches 3, 7, 12), or because they depend on a patch not being checked in (patch 8):
> 
> >  golang/xenlight: fix string conversion in generated toC functions
> >  golang/xenlight: add logging conveniences for within xenlight
> >  golang/xenlight: add functional options to configure Context
> >  golang/xenlight: add NotifyDomainDeath method to Context
> 

Thanks! I am planning on addressing patch 12 comments later today.

-NR


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 16:30:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 16:30:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145624.267802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMoP-0007X9-6K; Mon, 21 Jun 2021 16:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145624.267802; Mon, 21 Jun 2021 16:30:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvMoP-0007Wz-2c; Mon, 21 Jun 2021 16:30:13 +0000
Received: by outflank-mailman (input) for mailman id 145624;
 Mon, 21 Jun 2021 16:30:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvMoO-0007Wn-4K; Mon, 21 Jun 2021 16:30:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvMoN-00057N-G4; Mon, 21 Jun 2021 16:30:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvMoN-0003uf-3B; Mon, 21 Jun 2021 16:30:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvMoN-0003Io-2f; Mon, 21 Jun 2021 16:30:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Px965uPbg+cgcrIaefyjh2akwtAtGTqcZ9o4hCxT67I=; b=lkT4ZHP5OgiaoX/4V0BJW+rvCn
	qKh3GthiyfGDBPOBRv4ewTekCGex7yHnjus7768k73p2BNpXVyGOMdig53UKxC0C/T1EEgJbg02MS
	hoKIU52pR2uOjE1gJmIjLjbdp74fMUF9sqNw7adW4x1zQf2fFjdoVcJZ4DXuTQQQ7MQ4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162929: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-amd64:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-install(4):broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 16:30:11 +0000

flight 162929 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162929/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162533
 build-arm64                   4 host-install(4)        broken REGR. vs. 162533
 build-i386                    4 host-install(4)        broken REGR. vs. 162533
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162533
 build-i386-prev               4 host-install(4)        broken REGR. vs. 162533
 build-amd64                   4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 162533
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   13 days
Failing since        162556  2021-06-08 22:39:08 Z   12 days   19 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    3 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-amd64-pvops host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 16:54:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 16:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145637.267822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvNBV-0001ae-G8; Mon, 21 Jun 2021 16:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145637.267822; Mon, 21 Jun 2021 16:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvNBV-0001aX-Bd; Mon, 21 Jun 2021 16:54:05 +0000
Received: by outflank-mailman (input) for mailman id 145637;
 Mon, 21 Jun 2021 16:54:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8wfi=LP=rasmusvillemoes.dk=linux@srs-us1.protection.inumbo.net>)
 id 1lvNBU-0001aR-27
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 16:54:04 +0000
Received: from mail-ej1-x62e.google.com (unknown [2a00:1450:4864:20::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b1f2e15-99bd-43c8-89e0-ce801c88abc4;
 Mon, 21 Jun 2021 16:54:02 +0000 (UTC)
Received: by mail-ej1-x62e.google.com with SMTP id ho18so29939679ejc.8
 for <xen-devel@lists.xenproject.org>; Mon, 21 Jun 2021 09:54:02 -0700 (PDT)
Received: from [192.168.1.149] ([80.208.64.110])
 by smtp.gmail.com with ESMTPSA id f14sm313127edd.69.2021.06.21.09.54.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 21 Jun 2021 09:54:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b1f2e15-99bd-43c8-89e0-ce801c88abc4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=rasmusvillemoes.dk; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=P2Td/+tIipAgfyxPjVHvnCAo9n1WaYveTmTAYM/j+jY=;
        b=diuj5gabJqISgVVxx3Akhkd9RYmJFByRkUBchT88Z00CkuGWojs+VGfxVUt18Ow1NT
         iIs9CPaqGh7r/1KD5cILlcpveOUj3CJF4M7DbQqERFA9+ywOFCmKxKF8X0t8+AlKR+tN
         jXwgDjJn9NupBFYxyzDNQyDRkmzMZohSurPoY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=P2Td/+tIipAgfyxPjVHvnCAo9n1WaYveTmTAYM/j+jY=;
        b=Q7EZduHsWZnptYf703qVFasm8lDLK8Psk6QpL5o3le9U3IYrvmdCNWXVDJPuIPCG/9
         +ewzT6uWyA41ZiDO/Wt6a3ryAymNMJbf7LfIeIwBK0WpyCIrT/E/FPyJIkYHUQSsP7ib
         yAmxBjya1dISb1wjXXmBb1TJVDfHPfdczCcL8LzEYX658KhFLW523SxeJ/NwbfoLsB0Q
         4OPN5xnmDuZzGSntzv6zn5fdZ5CAmCqooGstI6Zc6CA75dxR9rQw2EuU4mWz+nnCH0FI
         CNBpuWCRGlHaxVsUWJtAJZ8xF8rlGSoDVMNmrRkjnnO/V3kokx2ZKj+nqU3RgTB/9r65
         KpDQ==
X-Gm-Message-State: AOAM5331kkDsYSci9/CR4N2aeQ5Zy7vvl2nr3i9RpFnRNTEZ3NOXBgdB
	yoVlAANxdyJ0tlKVd9qhn7Km2g==
X-Google-Smtp-Source: ABdhPJxjYawB6DJpcIRc0YMX54xgWCzAW7ksE8qOaz7mcYqVsYntvbdMZwPqOAEECmQCM421IbYweQ==
X-Received: by 2002:a17:906:264c:: with SMTP id i12mr25567215ejc.101.1624294441554;
        Mon, 21 Jun 2021 09:54:01 -0700 (PDT)
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Sander Eikelenboom <linux@eikelenboom.it>,
 Linus Torvalds <torvalds@linux-foundation.org>,
 Juergen Gross <jgross@suse.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 kernel test robot <oliver.sang@intel.com>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
 <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
 <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
 <7338064f-10b6-545d-bc6c-843d04aafe28@eikelenboom.it>
 <e7f9c4f8-1669-75ce-b052-1030350a159e@eikelenboom.it>
From: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Message-ID: <bfdd1d6b-77a3-450b-71f4-63e9cc314ace@rasmusvillemoes.dk>
Date: Mon, 21 Jun 2021 18:54:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
MIME-Version: 1.0
In-Reply-To: <e7f9c4f8-1669-75ce-b052-1030350a159e@eikelenboom.it>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18/06/2021 03.06, Sander Eikelenboom wrote:
> On 17/06/2021 21:39, Sander Eikelenboom wrote:

> 
> OK, I've done some experimentation, and it seems that with 256M
> assigned to the VM it was almost at the edge of OOM with the 5.12
> kernel as well, in the config I am using.
> With v5.12 it boots when I assign 240M but not 230M. With 5.13 the
> tipping point seems to be between 265M and 270M, so my config was
> already quite close to the edge.
> 
> The "direct kernel boot" feature I'm using just seems somewhat memory
> hungry, but using another compression algorithm for the kernel and
> initramfs already helped in my case.
> 
> So sorry for the noise, clearly user-error.

Hm, perhaps, but I'm still a bit nervous about that report from Oliver
Sang/kernel test robot, which was for a VM equipped with 16G of memory.
Despite quite a few attempts, though, I haven't been able to reproduce
it locally, so unfortunately I have no idea what's going on.

Rasmus


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 17:59:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 17:59:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145658.267850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOCj-0008HT-RO; Mon, 21 Jun 2021 17:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145658.267850; Mon, 21 Jun 2021 17:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOCj-0008HM-OV; Mon, 21 Jun 2021 17:59:25 +0000
Received: by outflank-mailman (input) for mailman id 145658;
 Mon, 21 Jun 2021 17:59:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcYN=LP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lvOCi-0008HG-0G
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 17:59:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eee13637-5c1e-4326-b4f7-d79d554dd7b3;
 Mon, 21 Jun 2021 17:59:22 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 94438611CE;
 Mon, 21 Jun 2021 17:59:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eee13637-5c1e-4326-b4f7-d79d554dd7b3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624298362;
	bh=y8attfyCFvmhE/rEq1lKuxtMSlY+L9uu0zEo4FMjkv8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nVdx/hwygYr/MG4oSfAecQoTkcAWFsVr6dFhXJUhFXwqSbH08hcJcQHHaJ6yNTgwJ
	 WFFrnk+RLaMbXPz1Xtr0MorwwBCBRCnNVEqXP2tUXHhGV2Usf3U4gjEu3zxeInr5od
	 B9ktxTxXUHFXTSVy1nFGEpPSBK9O5Aws7oD7ucKaVc2hx8hm7IAXVZOQ8U1l9ZRj62
	 sEXsTrg9MCBDzVl47wBX5+3xzZhCsWmVRvgAWZv47Xmzv8udI19xFqd0CjGvBvgqzh
	 BtSw3vV40r9z2Jdokw3+RVbueWCN8P3KLA5YE9ZkG0ZSqiTJvVk9SGjw4FEHT8nF7N
	 esGUOKjtCMepQ==
Date: Mon, 21 Jun 2021 10:59:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Tom Lendacky <thomas.lendacky@amd.com>, 
    Claire Chang <tientzu@chromium.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Marek Szyprowski <m.szyprowski@samsung.com>, 
    benh@kernel.crashing.org, paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
    xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, Tomasz Figa <tfiga@chromium.org>, 
    bskeggs@redhat.com, Bjorn Helgaas <bhelgaas@google.com>, 
    chris@chris-wilson.co.uk, Daniel Vetter <daniel@ffwll.ch>, 
    airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
In-Reply-To: <20210618143212.GA19284@lst.de>
Message-ID: <alpine.DEB.2.21.2106211052270.24906@sstabellini-ThinkPad-T480s>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-2-tientzu@chromium.org> <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s> <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com>
 <741a34cc-547c-984d-8af4-2f309880acfa@amd.com> <20210618143212.GA19284@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 18 Jun 2021, Christoph Hellwig wrote:
> On Fri, Jun 18, 2021 at 09:09:17AM -0500, Tom Lendacky wrote:
> > > swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
> > > and memblock_alloc[1] will do memset in memblock_alloc_try_nid[2], so
> > > swiotlb_init_with_tbl is also good.
> > > I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> > > it's clearer and safer.
> > 
> > On x86, if the memset is done before set_memory_decrypted() and memory
> > encryption is active, then the memory will look like ciphertext afterwards
> > and not be zeroes. If zeroed memory is required, then a memset must be
> > done after the set_memory_decrypted() calls.
> 
> Which should be fine - we don't care that the memory is cleared to 0,
> just that it doesn't leak other data.  Maybe a comment would be useful,
> though.

Just as a clarification: I was referring to the zeroing of "mem" in
swiotlb_late_init_with_tbl and swiotlb_init_with_tbl, while it looks
like Tom and Christoph are talking about the zeroing of "tlb".

The zeroing of "mem" is required because some fields of struct
io_tlb_mem need to be initialized to zero, while the zeroing of "tlb"
is not required except from the point of view of not leaking sensitive
data, as Christoph mentioned.

Either way, Claire's v14 patch 01/12 looks fine to me.


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 18:11:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 18:11:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145663.267862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOOB-000278-UZ; Mon, 21 Jun 2021 18:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145663.267862; Mon, 21 Jun 2021 18:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOOB-000271-R0; Mon, 21 Jun 2021 18:11:15 +0000
Received: by outflank-mailman (input) for mailman id 145663;
 Mon, 21 Jun 2021 18:11:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOOB-00026r-AG; Mon, 21 Jun 2021 18:11:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOOB-0006pF-3P; Mon, 21 Jun 2021 18:11:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOOA-00068s-PF; Mon, 21 Jun 2021 18:11:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOOA-0007Qz-Og; Mon, 21 Jun 2021 18:11:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RpFv7o5yp0lZJJTpcgw7X4olTdgGTN7eB8zWrjaffE4=; b=j+RgQGL9bGONWVcaRIyB/ODhBn
	OI84tFVufJX4aQJ9mF2okx+D9Mgnf+pFSRQjrYS3lfw9aq1gc4FexeUFxa3IHC9jLt4Z48RndgKiX
	I6H1hx6v1wpQaCaZO91pQIwAYyokAaH232YPgQ4cTtTZRdZvOULi1P7gArG9guIyY2Us=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162942: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:host-install(4):broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 18:11:14 +0000

flight 162942 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162942/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-amd64                   4 host-install(4)        broken REGR. vs. 162880
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162880

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-step build-amd64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

------------------------------------------------------------
commit c9b59f9032d41be8bade8a25d9148cf6ed203704
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:52 2021 -0400

    golang/xenlight: do not negate ret when converting to Error
    
    Commit 871e51d2d4 changed the sign on the xenlight error types (making
    the values negative, same as the C-generated constants), but failed to
    remove the code changing the sign before casting to Error().  This
    results in error strings like "libxl error: <x>", rather than the
    correct message. Fix all occurrences of this by running:
    
      gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
    
    from tools/golang/xenlight.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>

commit 1d95fd75df18bf25cb445feb47caf62da25c00e8
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:51 2021 -0400

    golang/xenlight: add SendTrigger wrapper
    
    Add a wrapper around libxl_send_trigger.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 9b6d865e0af56e376740ba03b1ccdf316362a71e
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:50 2021 -0400

    golang/xenlight: add DomainDestroy wrapper
    
    Add a wrapper around libxl_domain_destroy.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c089de0e2fa56d846cfb658b7b5efc3426895973
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:47 2021 -0400

    golang/xenlight: rename Ctx receivers to ctx
    
    As a matter of style, it is strange to see capitalized receiver names,
    due to the significance of capitalized symbols in Go (although there is
    in fact nothing special about a capitalized receiver name). Fix this in
    xenlight.go by running:
    
      gofmt -w -r 'Ctx -> ctx' xenlight.go
    
    from tools/golang/xenlight. There is no functional change.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1997940ad25e3566d1ab38496b8c7b07a086695a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:46 2021 -0400

    golang/xenlight: use struct pointers in keyed union fields
    
    Currently, when marshaling Go types with keyed union fields, we assign the
    value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
    interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
    As-is, this means that if a populated DomainBuildInfo is marshaled to
    e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
    
    When the encoding/json package is unmarshaling data into a Go type and
    encounters a JSON object, it can basically only unmarshal the data into
    an empty interface, a map, or a struct. It cannot, however, unmarshal
    data into an interface with at least one method defined on it (e.g.
    DomainBuildInfoTypeUnion). Before this check is done, however, the
    decoder will check if the Go type is a pointer, and dereference it if
    so. It will then use the type of this value as the "target" type.
    
    This means that if the TypeUnion field is populated with a
    DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
    fail. If the TypeUnion field is populated with a
    *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
    struct instead, allowing decoding to continue normally.
    
    Since there does not appear to be a strict need for NOT using pointers
    in these fields, update code generation to set keyed union fields to
    pointers of their implementing structs.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit bc9f632e31ee66be3f1860fc7303fe91a42e56a6
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:45 2021 -0400

    golang/xenlight: export keyed union interface types
    
    For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
    field must be exported so that package users can get/set the fields
    within. This means that users are aware of the existence of the
    interface type used in those fields (see [1]), so it is awkward that the
    interface itself is not exported. However, the single method within the
    interface must remain unexported so that users cannot mistakenly "implement"
    those interfaces.
    
    Since there seems to be no reason to do otherwise, export the keyed
    union interface types.
    
    [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1422d8db1b3dfdf7d9179944e594876e5e356a4b
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:43 2021 -0400

    golang/xenlight: fix StringList toC conversion
    
    The current implementation of StringList.toC does not correctly account
    for how libxl_string_list is expected to be laid out in C, which is clear
    when one looks at libxl_string_list_length in libxl.c. In particular,
    StringList.toC does not account for the extra memory that should be
    allocated for the "sentinel" entry. And, when using the "slice trick" to
    create a slice that can address C memory, the unsafe.Pointer conversion
    should be on a C.libxl_string_list, not *C.libxl_string_list.
    
    Fix these problems by (1) allocating an extra slot in the slice used to
    address the C memory, and explicitly setting the last entry to nil so the C
    memory will be zeroed out, and (2) dereferencing csl in the
    unsafe.Pointer conversion.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
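The sentinel layout the commit message describes can be sketched in pure Go, without cgo. The function names here are illustrative, not the actual xenlight code; the point is just the extra nil slot and the length-by-scan that mirrors libxl_string_list_length:

```go
package main

import "fmt"

// toSentinelList mimics the C layout: a list of N strings is stored as
// N+1 pointers, with the final slot left nil as a sentinel entry.
func toSentinelList(xs []string) []*string {
	out := make([]*string, len(xs)+1) // one extra slot for the sentinel
	for i := range xs {
		s := xs[i]
		out[i] = &s
	}
	// out[len(xs)] stays nil: the sentinel.
	return out
}

// sentinelLength mirrors libxl_string_list_length: walk until nil.
func sentinelLength(list []*string) int {
	n := 0
	for list[n] != nil {
		n++
	}
	return n
}

func main() {
	l := toSentinelList([]string{"a", "b", "c"})
	fmt.Println(len(l), sentinelLength(l)) // backing array is 4 slots, logical length 3
}
```

Forgetting the extra slot is precisely the bug being fixed: the consumer scans for the nil terminator, so without it the walk runs off the end of the allocation.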

commit b291ce703b9cebef0800267446334e867588354a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:42 2021 -0400

    golang/xenlight: update generated code
    
    Re-generate code to reflect changes to libxl_types.idl from the
    following commits:
    
    0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
    7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
    9835246710 viridian: remove implicit limit of 64 VPs per partition
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 18:33:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 18:33:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145671.267882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOju-0004SX-12; Mon, 21 Jun 2021 18:33:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145671.267882; Mon, 21 Jun 2021 18:33:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvOjt-0004SQ-TB; Mon, 21 Jun 2021 18:33:41 +0000
Received: by outflank-mailman (input) for mailman id 145671;
 Mon, 21 Jun 2021 18:33:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOjs-0004SG-O8; Mon, 21 Jun 2021 18:33:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOjs-0007CW-Ee; Mon, 21 Jun 2021 18:33:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOjs-0006ds-5D; Mon, 21 Jun 2021 18:33:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvOjs-0004bs-4h; Mon, 21 Jun 2021 18:33:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0wRygpdpcYo4VVOBVA2cFhVPri6i7lGYBGg/v/AZgis=; b=HE5x4nMTp8C68CzsvuuwF8brGK
	KbdtK+nJMkbrhFkVW7LlWu6sUsTvfAhGAc4EOf+QCI4PWyqlbTPto1EOV5RS8jWjayXXloep4D6vH
	bDtmNcCm3m17b1mwdzNiF4jkixelhY1gBgWuFGck54vhCxld+kttJZ3q7uDO4nhssnHQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162930: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-amd64:<job status>:broken:regression
    libvirt:build-amd64-pvops:<job status>:broken:regression
    libvirt:build-amd64-xsm:<job status>:broken:regression
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-armhf-pvops:<job status>:broken:regression
    libvirt:build-i386:<job status>:broken:regression
    libvirt:build-i386-pvops:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-i386:host-install(4):broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-amd64-xsm:host-install(4):broken:regression
    libvirt:build-amd64:host-install(4):broken:regression
    libvirt:build-armhf-pvops:host-install(4):broken:regression
    libvirt:build-amd64-pvops:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 18:33:40 +0000

flight 162930 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-i386                    4 host-install(4)        broken REGR. vs. 151777
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-amd64                   4 host-install(4)        broken REGR. vs. 151777
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              54b602019d7dfa94a6c52ef7aa3abdfaa93ed233
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  346 days
Failing since        151818  2020-07-11 04:18:52 Z  345 days  338 attempts
Testing same since   162898  2021-06-19 04:19:58 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

(No revision log; it would be 62906 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 20:19:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 20:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145681.267902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvQNk-0005BG-LW; Mon, 21 Jun 2021 20:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145681.267902; Mon, 21 Jun 2021 20:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvQNk-0005B9-I2; Mon, 21 Jun 2021 20:18:56 +0000
Received: by outflank-mailman (input) for mailman id 145681;
 Mon, 21 Jun 2021 20:18:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvQNj-0005Az-6J; Mon, 21 Jun 2021 20:18:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvQNi-0000Zb-Rg; Mon, 21 Jun 2021 20:18:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvQNi-0000Tq-Hl; Mon, 21 Jun 2021 20:18:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvQNi-0000Rk-HB; Mon, 21 Jun 2021 20:18:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qv3qEFZQ/EpoqG371OvoGvjk1TBCcIiLS3eB1bg8ey0=; b=IjBMdE2jn1o3/rIUtNxVg16/Kt
	q401XeyS7dviJt9ndTXfjLUtioCvBf2I0L/uOv2yRdgO/CcUjXPBifDSzFC2xzTcm7b7rtPSqR/0Z
	BGnsiPaOC3/xs2vlBPzkMQo6SVKxXc9hpWzFQZ+Gebyi0u+CsK4PDZi2sFZtTH2kaNT4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162932-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162932: trouble: blocked/broken/pass
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-pvops:<job status>:broken:regression
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-amd64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64:host-install(4):broken:regression
    linux-linus:build-amd64-xsm:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=13311e74253fe64329390df80bed3f07314ddd61
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 20:18:54 +0000

flight 162932 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162932/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64                   4 host-install(4)        broken REGR. vs. 152332
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                13311e74253fe64329390df80bed3f07314ddd61
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  325 days
Failing since        152366  2020-08-01 20:49:34 Z  323 days  551 attempts
Testing same since   162932  2021-06-21 06:00:58 Z    0 days    1 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1688748 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 20:19:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 20:19:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145686.267916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvQOf-0005jc-00; Mon, 21 Jun 2021 20:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145686.267916; Mon, 21 Jun 2021 20:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvQOe-0005jV-Sn; Mon, 21 Jun 2021 20:19:52 +0000
Received: by outflank-mailman (input) for mailman id 145686;
 Mon, 21 Jun 2021 20:19:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QZvr=LP=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lvQOd-0005jN-PD
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 20:19:51 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54d26042-399b-4d28-a7f3-b4d421e5a5e2;
 Mon, 21 Jun 2021 20:19:51 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id AC60868B05; Mon, 21 Jun 2021 22:19:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54d26042-399b-4d28-a7f3-b4d421e5a5e2
Date: Mon, 21 Jun 2021 22:19:46 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Tom Lendacky <thomas.lendacky@amd.com>,
	Claire Chang <tientzu@chromium.org>,
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com,
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions
Message-ID: <20210621201946.GA13822@lst.de>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-2-tientzu@chromium.org> <alpine.DEB.2.21.2106171434480.24906@sstabellini-ThinkPad-T480s> <CALiNf29SJ0jXirWVDhJw4BUNvkjUeGPyGNJK9m8c30OPX41=5Q@mail.gmail.com> <741a34cc-547c-984d-8af4-2f309880acfa@amd.com> <20210618143212.GA19284@lst.de> <alpine.DEB.2.21.2106211052270.24906@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2106211052270.24906@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 21, 2021 at 10:59:20AM -0700, Stefano Stabellini wrote:
> Just as a clarification: I was referring to the zeroing of "mem" in
> swiotlb_late_init_with_tbl and swiotlb_init_with_tbl. While it looks
> like Tom and Christoph are talking about the zeroing of "tlb".

Indeed. 

> 
> The zeroing of "mem" is required as some fields of struct io_tlb_mem
> need to be initialized to zero. While the zeroing of "tlb" is not
> required except from the point of view of not leaking sensitive data as
> Christoph mentioned.
> 
> Either way, Claire's v14 patch 01/12 looks fine to me.

Agreed.


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 21:37:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 21:37:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145695.267932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRbE-0004i6-1P; Mon, 21 Jun 2021 21:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145695.267932; Mon, 21 Jun 2021 21:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRbD-0004hz-Uj; Mon, 21 Jun 2021 21:36:55 +0000
Received: by outflank-mailman (input) for mailman id 145695;
 Mon, 21 Jun 2021 21:36:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZyg=LP=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1lvRbC-0004ht-Cd
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 21:36:54 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14300c5c-d142-4fe4-a306-176859f5f7b0;
 Mon, 21 Jun 2021 21:36:52 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:35594
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1lvRfm-0002an-Va; Mon, 21 Jun 2021 23:41:39 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14300c5c-d142-4fe4-a306-176859f5f7b0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=echRYO9/03A/RWXXeBXOytrx6HhFq/gOusrcAB2T0SM=; b=QMPoXeGzyDQrsOIfNIieSIfeAg
	n/wxyrVCxb+cCG47obHfKhLENJth+oPCFd5VIHGVTll5N/G/9yNjsJi7Yx8F/K89OG2RDP6V+TUjY
	KzzBpOpGalQ7ffutRpu+8cDkyXvDBMQ3R6mVGBMc3rVAslKKuTWUD3tz8UJS5E17hAto=;
Subject: Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during
 kernel boot in low memory Xen VM's (256MB assigned memory).
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>,
 Linus Torvalds <torvalds@linux-foundation.org>,
 Juergen Gross <jgross@suse.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 kernel test robot <oliver.sang@intel.com>
References: <ee8bf04c-6e55-1d9b-7bdb-25e6108e8e1e@eikelenboom.it>
 <CAHk-=wjgg67NMBNG99naEQ1cM0mXBBzdhCJaYFH-kC+mLK+J2g@mail.gmail.com>
 <9108c22e-3521-9e24-6124-7776d947b788@rasmusvillemoes.dk>
 <0b12f27b-1109-b621-c969-10814b2c1c2f@eikelenboom.it>
 <7338064f-10b6-545d-bc6c-843d04aafe28@eikelenboom.it>
 <e7f9c4f8-1669-75ce-b052-1030350a159e@eikelenboom.it>
 <bfdd1d6b-77a3-450b-71f4-63e9cc314ace@rasmusvillemoes.dk>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <c4a0bc1e-9b20-47d9-7299-71bac5c43596@eikelenboom.it>
Date: Mon, 21 Jun 2021 23:36:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <bfdd1d6b-77a3-450b-71f4-63e9cc314ace@rasmusvillemoes.dk>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: nl-NL
Content-Transfer-Encoding: 7bit

On 21/06/2021 18:54, Rasmus Villemoes wrote:
> On 18/06/2021 03.06, Sander Eikelenboom wrote:
>> On 17/06/2021 21:39, Sander Eikelenboom wrote:
> 
>>
>> OK, done some experimentation and it seems with 256M assigned to the VM
>> it was almost at the edge of OOM with the 5.12 kernel as well in the
>> config I am using it.
>> With v5.12 when I assign 240M it boots, with 230M it doesn't. With 5.13
>> the tipping point seems to be around 265M and 270M, so my config was
>> already quite close to the edge.
>>
>> The "direct kernel boot" feature I'm using just seems somewhat memory
>> hungry, but using another compression algorithm for the kernel and
>> initramfs already helped in my case.
>>
>> So sorry for the noise, clearly user-error.
> 
> Hm, perhaps, but I'm still a bit nervous about that report from Oliver
> Sang/kernel test robot, which was for a VM equipped with 16G of memory.
> But despite quite a few attempts, I haven't been able to reproduce that
> locally, so unfortunately I have no idea what's going on.
> 
> Rasmus
> 

Hmm, I just tried to switch all VMs to a 5.13-rc7 kernel.
Some worked since I reduced the size, but some still fail.

The difference seems to be the number of vcpus I assign to the VMs.

The ones with 1 vcpu now boot with 256MB assigned (that was what I tested before),
but the ones with 2 vcpus assigned don't and still OOM
on the same kernel and initramfs that I pass in from the host.

Could that box from the test robot have a massive number of cpu cores,
and could it somehow be related to that?

--
Sander


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 21:41:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 21:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145700.267943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRg0-00065A-KN; Mon, 21 Jun 2021 21:41:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145700.267943; Mon, 21 Jun 2021 21:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRg0-000653-HQ; Mon, 21 Jun 2021 21:41:52 +0000
Received: by outflank-mailman (input) for mailman id 145700;
 Mon, 21 Jun 2021 21:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gHMo=LP=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1lvRg0-00064x-6z
 for xen-devel@lists.xenproject.org; Mon, 21 Jun 2021 21:41:52 +0000
Received: from mail-qk1-x72e.google.com (unknown [2607:f8b0:4864:20::72e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bf9ff23-bf92-4d98-9009-e72554c2db0d;
 Mon, 21 Jun 2021 21:41:51 +0000 (UTC)
Received: by mail-qk1-x72e.google.com with SMTP id j184so34342178qkd.6
 for <xen-devel@lists.xenproject.org>; Mon, 21 Jun 2021 14:41:51 -0700 (PDT)
Received: from six (c-73-89-138-5.hsd1.vt.comcast.net. [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id o15sm262880qtw.5.2021.06.21.14.41.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Mon, 21 Jun 2021 14:41:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bf9ff23-bf92-4d98-9009-e72554c2db0d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to
         :user-agent;
        bh=RTDqoPWBmnNXjzYXpRN/pgg58HnV7jt084UOmjEj8vU=;
        b=U78hj7TG3nAQhorHcSFeNuv7Ayb6XjQwqHZPBegB4Pnp3RQ1VXiNbQfpZAbvA/YEAb
         h/+8VQ3WuRyY7sJPMlyeP+v+l3tXxRzL7hHf7od6ImUDY/uET4NfrJCLt61bctaaL/c7
         neZkgK1eewEXxdy1GUbNbUp+eJRpHHoUTzFa1De+zcDebqIQJHmNCanSQabgGuFOBrWz
         udfuC6cmkX4sDWv3lhwH0pXUSt5kaY+a1L5ao9se8ARoBq3PAKqc25hEyOa5ROdbyhRm
         TNYneIlX6Qj2yb4GNYs1miKs8nNNURL9Edb9sSWNZ869COppEkKy5JtwmApnD4Bwu4FM
         tbZw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=RTDqoPWBmnNXjzYXpRN/pgg58HnV7jt084UOmjEj8vU=;
        b=ZZjqgj3T21QexsjAWNkGOC7lK8VxZXK1xSLhZYZctrowio/kgNSCAlvdg++CoxbbyC
         oWmX0djLDWacCf15PyusU91J1S9ZlCSuJzrjqRX6zwgIPJsH2SOeBM15CV01HjV08Dpd
         bGw9BJYZplFpE98ao2vHFfkWzvoSt3PjlIrpFgcmyWluCYnpjNCuNxYZ6r/pvi5zobyJ
         vFzuOPUrP4Im+G3f7KK2vn8lBvrhe2k+OfdsXG08aL9Kihr7FHshZ28XVWEp+OEwHzmF
         hNe1Anw1kdMG07EZ/O5z5AYXI+MWiryV7leRWc4hUQ+7ZyBrSnNZymEnG6tRt59pajyn
         gZ2g==
X-Gm-Message-State: AOAM532SWDd6JbID1AtIBgeMTNwy7wtvxdFMsgpDajQfYyYA02YUeWvT
	Ef+DV2xDfgNlT5ak2iU7IEY=
X-Google-Smtp-Source: ABdhPJz+fiuvOVB7/nmV1w7A5sINpvgt1TAG5gd/BAUJ0lFiT5y/dnLV3+XSd+HL+wzkTvfO9nFLnQ==
X-Received: by 2002:a37:2ec1:: with SMTP id u184mr779755qkh.500.1624311710992;
        Mon, 21 Jun 2021 14:41:50 -0700 (PDT)
Date: Mon, 21 Jun 2021 17:41:48 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RESEND PATCH 12/12] golang/xenlight: add NotifyDomainDeath
 method to Context
Message-ID: <20210621214148.GA27530@six>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
 <e415b0e26954cfc6689fbd3ba7d79fe664f3bb50.1621887506.git.rosbrookn@ainfosec.com>
 <56DEEBE0-88E3-4E00-A998-30FF034BCB73@citrix.com>
 <8D6E3510-754C-450C-99F6-63BE9FA6F748@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8D6E3510-754C-450C-99F6-63BE9FA6F748@citrix.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Fri, Jun 18, 2021 at 07:31:46PM +0000, George Dunlap wrote:
> 
> 
> > On Jun 18, 2021, at 7:28 PM, George Dunlap <george.dunlap@citrix.com> wrote:
> > 
> > 
> > 
> >> On May 24, 2021, at 9:36 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >> 
> >> Add a helper function to wait for domain death events, and then write
> >> the events to a provided channel. This handles the enabling/disabling of
> >> the event type, freeing the event, and converting it to a Go type. The
> >> caller can then handle the event however they need to. This function
> >> will run until a provided context.Context is cancelled.
> >> 
> >> NotifyDomainDeath spawns two goroutines that return when the
> >> context.Context is done. The first will make sure that the domain death
> >> event is disabled, and that the corresponding event queue is cleared.
> >> The second calls libxl_event_wait, and writes the event to the provided
> >> channel.
> >> 
> >> With this, callers should be able to manage a full domain life cycle.
> >> Add to the comment of DomainCreateNew so that package users know they
> >> should use this method in conjunction with DomainCreateNew.
> >> 
> >> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> >> ---
> >> tools/golang/xenlight/xenlight.go | 83 ++++++++++++++++++++++++++++++-
> >> 1 file changed, 82 insertions(+), 1 deletion(-)
> >> 
> >> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> >> index 6fb22665cc..8406883433 100644
> >> --- a/tools/golang/xenlight/xenlight.go
> >> +++ b/tools/golang/xenlight/xenlight.go
> >> @@ -53,6 +53,7 @@ import "C"
> >> */
> >> 
> >> import (
> >> +	"context"
> >> 	"fmt"
> >> 	"os"
> >> 	"os/signal"
> >> @@ -1340,7 +1341,9 @@ func (ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error
> >> 	return nil
> >> }
> >> 
> >> -// DomainCreateNew creates a new domain.
> >> +// DomainCreateNew creates a new domain. Callers of DomainCreateNew are
> >> +// responsible for handling the death of the resulting domain. This should be
> >> +// done using NotifyDomainDeath.
> >> func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
> >> 	var cdomid C.uint32_t
> >> 	var cconfig C.libxl_domain_config
> >> @@ -1358,6 +1361,84 @@ func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
> >> 	return Domid(cdomid), nil
> >> }
> >> 
> >> +// NotifyDomainDeath registers an event handler for domain death events for a
> >> +// given domnid, and writes events received to ec. NotifyDomainDeath returns an
> >> +// error if it cannot register the event handler, but other errors encountered
> >> +// are just logged. The goroutine spawned by calling NotifyDomainDeath runs
> >> +// until the provided context.Context's Done channel is closed.
> >> +func (ctx *Context) NotifyDomainDeath(c context.Context, domid Domid, ec chan<- Event) error {
> >> +	var deathw *C.libxl_evgen_domain_death
> >> +
> >> +	ret := C.libxl_evenable_domain_death(ctx.ctx, C.uint32_t(domid), 0, &deathw)
> >> +	if ret != 0 {
> >> +		return Error(ret)
> >> +	}
> >> +
> >> +	// Spawn a goroutine that is responsible for cleaning up when the
> >> +	// passed context.Context's Done channel is closed.
> >> +	go func() {
> >> +		<-c.Done()
> >> +
> >> +		ctx.logd("cleaning up domain death event handler for domain %d", domid)
> >> +
> >> +		// Disable the event generation.
> >> +		C.libxl_evdisable_domain_death(ctx.ctx, deathw)
> >> +
> >> +		// Make sure any events that were generated get cleaned up so they
> >> +		// do not linger in the libxl event queue.
> >> +		var evc *C.libxl_event
> >> +		for {
> >> +			ret := C.libxl_event_check(ctx.ctx, &evc, C.LIBXL_EVENTMASK_ALL, nil, nil)
> >> +			if ret != 0 {
> >> +				return
> >> +			}
> >> +			C.libxl_event_free(ctx.ctx, evc)
> > 
> > I have to admit, I don’t really understand how the libxl event stuff is supposed to work.  But it looks like this will drain all events of any type, for any domain, associated with this context?
> > 
> > So if you had two domains, and called NotifyDomainDeath() on both with different contexts, and you closed the one context, you might miss events from the other context?
> > 
> > Or, suppose you did this:
> > * ctx.NotifyDomainDeath(ctx1, dom1, ec1)
> > * ctx.NotifyDiskEject(ctx2, dom1, ec2)
> > * ctx1CancelFunc()
> > 
> > Wouldn’t there be a risk that the disk eject message would get lost?
> > 
> > Ian, any suggestions for the right way to use these functions in this scenario?
> 
> It looks like one option would be to add a “predicate” function filter, to filter by type and domid.
> 
> It looks like the other option would be to try to use libxl_event_register_callbacks().  We could have the C callback pass all the events to a goroutine which would act as a dispatcher.
> 
After a brief look at the documentation within libxl_event.h, it seems
using predicate functions would be the easiest solution given the
current layout. Though I will look more closely at using
libxl_event_register_callbacks() before sending a v2.

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 21:42:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 21:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145704.267955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRgj-0006fl-UA; Mon, 21 Jun 2021 21:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145704.267955; Mon, 21 Jun 2021 21:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvRgj-0006fe-Qw; Mon, 21 Jun 2021 21:42:37 +0000
Received: by outflank-mailman (input) for mailman id 145704;
 Mon, 21 Jun 2021 21:42:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvRgi-0006fQ-UB; Mon, 21 Jun 2021 21:42:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvRgi-0001xJ-Pb; Mon, 21 Jun 2021 21:42:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvRgi-0002Kr-GT; Mon, 21 Jun 2021 21:42:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvRgi-0002OV-Fz; Mon, 21 Jun 2021 21:42:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DgZGlAFzCbtluIr9RpsACNxxdcSOLNuNvJIFNg/IwEE=; b=eJt2YLBIfj19j/vva67M+ZbsDs
	0ZEp/NDGks7l5QFTOgmbvMTjpI3x/SyCq13mnAQjrWy1isHV+Re8il9ERQo2V6wApfZfBFyg3QL9t
	WJ6znhUjUG+WMUbZ/ycOGsMOBEuZqQkp1E5OV+4rWQH5zyFR8TNuoEG+icAQtMSKcv8c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162946: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 21:42:36 +0000

flight 162946 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162946/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162880
 build-amd64                   4 host-install(4)        broken REGR. vs. 162880

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

------------------------------------------------------------
commit c9b59f9032d41be8bade8a25d9148cf6ed203704
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:52 2021 -0400

    golang/xenlight: do not negate ret when converting to Error
    
    Commit 871e51d2d4 changed the sign on the xenlight error types (making
    the values negative, same as the C-generated constants), but failed to
    remove the code changing the sign before casting to Error().  This
    results in error strings like "libxl error: <x>", rather than the
    correct message. Fix all occurrences of this by running:
    
      gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
    
    from tools/golang/xenlight.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
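
The sign bug described above can be illustrated with a small, self-contained sketch (the Error type and message below are hypothetical stand-ins for the generated xenlight constants, not the actual bindings):

```go
package main

import "fmt"

// Hypothetical miniature of the generated bindings: Error mirrors the
// negative C libxl error constants.
type Error int

const ErrorNonspecific Error = -1

func (e Error) Error() string {
	switch e {
	case ErrorNonspecific:
		return "non-specific error"
	}
	return fmt.Sprintf("libxl error: %d", int(e))
}

func main() {
	ret := -1                        // a libxl call returned ERROR_NONSPECIFIC
	fmt.Println(Error(-ret).Error()) // wrong: negating yields 1, an unknown code
	fmt.Println(Error(ret).Error())  // right: -1 maps to the named constant
}
```

Once the constants themselves are negative, negating `ret` a second time maps every failure onto a nonexistent positive code, which is why the fallback "libxl error: <x>" string was printed.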

commit 1d95fd75df18bf25cb445feb47caf62da25c00e8
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:51 2021 -0400

    golang/xenlight: add SendTrigger wrapper
    
    Add a wrapper around libxl_send_trigger.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 9b6d865e0af56e376740ba03b1ccdf316362a71e
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:50 2021 -0400

    golang/xenlight: add DomainDestroy wrapper
    
    Add a wrapper around libxl_domain_destroy.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c089de0e2fa56d846cfb658b7b5efc3426895973
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:47 2021 -0400

    golang/xenlight: rename Ctx receivers to ctx
    
    As a matter of style, it is strange to see capitalized receiver names,
    due to the significance of capitalized symbols in Go (although there is
    in fact nothing special about a capitalized receiver name). Fix this in
    xenlight.go by running:
    
      gofmt -w -r 'Ctx -> ctx' xenlight.go
    
    from tools/golang/xenlight. There is no functional change.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1997940ad25e3566d1ab38496b8c7b07a086695a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:46 2021 -0400

    golang/xenlight: use struct pointers in keyed union fields
    
    Currently, when marshaling Go types with keyed union fields, we assign the
    value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
    interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
    As-is, this means that if a populated DomainBuildInfo is marshaled to
    e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
    
    When the encoding/json package is unmarshaling data into a Go type and
    encounters a JSON object, it can decode the data into an empty
    interface, a map, or a struct. It cannot, however, decode data
    into an interface with at least one method defined on it (e.g.
    DomainBuildInfoTypeUnion). Before this check is done, however, the
    decoder will check if the Go type is a pointer, and dereference it if
    so. It will then use the type of this value as the "target" type.
    
    This means that if the TypeUnion field is populated with a
    DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
    fail. If the TypeUnion field is populated with a
    *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
    struct instead, allowing decoding to continue normally.
    
    Since there does not appear to be a strict need for NOT using pointers
    in these fields, update code generation to set keyed union fields to
    pointers of their implementing structs.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit bc9f632e31ee66be3f1860fc7303fe91a42e56a6
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:45 2021 -0400

    golang/xenlight: export keyed union interface types
    
    For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
    field must be exported so that package users can get/set the fields
    within. This means that users are aware of the existence of the
    interface type used in those fields (see [1]), so it is awkward that the
    interface itself is not exported. However, the single method within the
    interface must remain unexported so that users cannot mistakenly "implement"
    those interfaces.
    
    Since there seems to be no reason to do otherwise, export the keyed
    union interface types.
    
    [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
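
The pattern described above, an exported interface whose only method is unexported, can be sketched as follows (the names are illustrative of the generated code, not copied from it):

```go
package main

import "fmt"

// DeviceUsbdevTypeUnion is exported so package users can name it in
// their own code, but its single unexported method means only the
// defining package can implement it (a "sealed" interface).
type DeviceUsbdevTypeUnion interface {
	isDeviceUsbdevTypeUnion()
}

type DeviceUsbdevTypeUnionHostdev struct {
	Hostbus, Hostaddr byte
}

func (DeviceUsbdevTypeUnionHostdev) isDeviceUsbdevTypeUnion() {}

func main() {
	var u DeviceUsbdevTypeUnion = DeviceUsbdevTypeUnionHostdev{Hostbus: 1, Hostaddr: 2}
	hd := u.(DeviceUsbdevTypeUnionHostdev) // users type-assert to reach the fields
	fmt.Println(hd.Hostbus, hd.Hostaddr)
}
```
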

commit 1422d8db1b3dfdf7d9179944e594876e5e356a4b
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:43 2021 -0400

    golang/xenlight: fix StringList toC conversion
    
    The current implementation of StringList.toC does not correctly account
    for how libxl_string_list is expected to be laid out in C, which is clear
    when one looks at libxl_string_list_length in libxl.c. In particular,
    StringList.toC does not account for the extra memory that should be
    allocated for the "sentinel" entry. And, when using the "slice trick" to
    create a slice that can address C memory, the unsafe.Pointer conversion
    should be on a C.libxl_string_list, not *C.libxl_string_list.
    
    Fix these problems by (1) allocating an extra slot in the slice used to
    address the C memory, and explicitly setting the last entry to nil so the C
    memory will be zeroed out, and (2) dereferencing csl in the
    unsafe.Pointer conversion.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
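
The sentinel layout the fix relies on can be shown without cgo; this pure-Go sketch (helper names made up) mirrors how libxl_string_list_length walks a NULL-terminated array, which is why toC must allocate one extra, zeroed slot:

```go
package main

import "fmt"

// listLength mimics libxl_string_list_length: count entries until the
// nil sentinel is reached.
func listLength(list []*string) int {
	n := 0
	for list[n] != nil {
		n++
	}
	return n
}

func main() {
	a, b := "ro", "rw"
	list := make([]*string, 2+1) // len+1: one extra slot for the sentinel
	list[0], list[1] = &a, &b
	// list[2] stays nil, acting as the zeroed sentinel entry.
	fmt.Println(listLength(list))
}
```

Without the extra slot the walk would run past the end of the allocation, which is exactly the C-side hazard the commit fixes.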

commit b291ce703b9cebef0800267446334e867588354a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:42 2021 -0400

    golang/xenlight: update generated code
    
    Re-generate code to reflect changes to libxl_types.idl from the
    following commits:
    
    0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
    7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
    9835246710 viridian: remove implicit limit of 64 VPs per partition
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 21 23:48:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Jun 2021 23:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145713.267975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvTeV-0000eY-55; Mon, 21 Jun 2021 23:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145713.267975; Mon, 21 Jun 2021 23:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvTeV-0000eR-1j; Mon, 21 Jun 2021 23:48:27 +0000
Received: by outflank-mailman (input) for mailman id 145713;
 Mon, 21 Jun 2021 23:48:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvTeT-0000eH-GY; Mon, 21 Jun 2021 23:48:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvTeT-0003y9-AU; Mon, 21 Jun 2021 23:48:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvTeS-000557-TL; Mon, 21 Jun 2021 23:48:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvTeS-0002tI-Ss; Mon, 21 Jun 2021 23:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f/A+c+9O2r1LE2V9KDtqxXnKly0Sdx62huruLr1lYLY=; b=j2E8qAl7YeuVMshHc5qHCTT3Ys
	JEIvqAWYa9r7GK3m/TxgspBKTCjnNvaVZeHtr9I0l/hOdsTU0gwkV9ICdYOZzLSWx/PQx4jTe5EW2
	Woda2qxA+/C68c/XjzMbJ3+A/hL3YUg+XV4Him0wue+74Db9WcPppSZ+WeRqapoeQvZU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162936-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162936: trouble: blocked/broken/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-install(4):broken:regression
    qemu-mainline:build-amd64-xsm:host-install(4):broken:regression
    qemu-mainline:build-amd64-pvops:host-install(4):broken:regression
    qemu-mainline:build-armhf-pvops:host-install(4):broken:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=8f521741e1280f0957ac1b873292c19219e1fb9a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Jun 2021 23:48:24 +0000

flight 162936 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162936/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-amd64                   4 host-install(4)        broken REGR. vs. 152631
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                8f521741e1280f0957ac1b873292c19219e1fb9a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  305 days
Failing since        152659  2020-08-21 14:07:39 Z  304 days  560 attempts
Testing same since   162924  2021-06-20 23:06:51 Z    1 days    2 attempts

------------------------------------------------------------
541 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 173699 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 01:12:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 01:12:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145723.267994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvUxz-00020M-4v; Tue, 22 Jun 2021 01:12:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145723.267994; Tue, 22 Jun 2021 01:12:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvUxz-00020F-1m; Tue, 22 Jun 2021 01:12:39 +0000
Received: by outflank-mailman (input) for mailman id 145723;
 Tue, 22 Jun 2021 01:12:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvUxx-000205-K3; Tue, 22 Jun 2021 01:12:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvUxx-0006Y9-DS; Tue, 22 Jun 2021 01:12:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvUxx-0006vq-5K; Tue, 22 Jun 2021 01:12:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvUxx-0004X6-4p; Tue, 22 Jun 2021 01:12:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f0Ls3uSqqUiyoZoGk5UBt1XFq+RYER+o5MpuCVCeF4Y=; b=E/J3ibHqaOvTRJLv73c1jnh9EK
	p9t3UiX3xD2GFs2qHOf76l9N5yIATIgJOH11lgWD335ICh2rEZ+JFPopto67vD9mQQkrzQsxzVE0H
	O/or3eBFdwIdj76+pqPL9h1fkpqBlAjhVZsF3GpovV1Zf7CIDHi5EWdLL+gpTr3fnfgE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162950: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 01:12:37 +0000

flight 162950 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162950/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162880
 build-amd64                   4 host-install(4)        broken REGR. vs. 162880

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

------------------------------------------------------------
commit c9b59f9032d41be8bade8a25d9148cf6ed203704
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:52 2021 -0400

    golang/xenlight: do not negate ret when converting to Error
    
    Commit 871e51d2d4 changed the sign on the xenlight error types (making
    the values negative, same as the C-generated constants), but failed to
    remove the code changing the sign before casting to Error().  This
    results in error strings like "libxl error: <x>", rather than the
    correct message. Fix all occurrences of this by running:
    
      gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
    
    from tools/golang/xenlight.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
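The double-negation bug described above can be illustrated with a self-contained sketch; the Error type and its message table here are hypothetical stand-ins for the generated xenlight code, not the real API:

```go
package main

import "fmt"

// Hypothetical stand-in for xenlight's generated Error type: since
// commit 871e51d2d4 the values are negative, matching the C-generated
// libxl error constants.
type Error int

const ErrorNonspecific Error = -1

func (e Error) Error() string {
	if e == ErrorNonspecific {
		return "Non-specific error"
	}
	// Unknown value: fall back to a generic message, as xenlight does.
	return fmt.Sprintf("libxl error: %d", e)
}

func main() {
	ret := -1 // a failing libxl C call returns a negative value
	// Pre-fix cast negates an already-negative value, so the lookup
	// misses and the generic string is printed.
	fmt.Println(Error(-ret)) // prints "libxl error: 1"
	// Post-fix cast keeps the sign and finds the right message.
	fmt.Println(Error(ret)) // prints "Non-specific error"
}
```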

commit 1d95fd75df18bf25cb445feb47caf62da25c00e8
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:51 2021 -0400

    golang/xenlight: add SendTrigger wrapper
    
    Add a wrapper around libxl_send_trigger.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 9b6d865e0af56e376740ba03b1ccdf316362a71e
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:50 2021 -0400

    golang/xenlight: add DomainDestroy wrapper
    
    Add a wrapper around libxl_domain_destroy.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c089de0e2fa56d846cfb658b7b5efc3426895973
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:47 2021 -0400

    golang/xenlight: rename Ctx receivers to ctx
    
    As a matter of style, it is strange to see capitalized receiver names,
    due to the significance of capitalized symbols in Go (although there is
    in fact nothing special about a capitalized receiver name). Fix this in
    xenlight.go by running:
    
      gofmt -w -r 'Ctx -> ctx' xenlight.go
    
    from tools/golang/xenlight. There is no functional change.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1997940ad25e3566d1ab38496b8c7b07a086695a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:46 2021 -0400

    golang/xenlight: use struct pointers in keyed union fields
    
    Currently, when marshaling Go types with keyed union fields, we assign the
    value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
    interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
    As-is, this means that if a populated DomainBuildInfo is marshaled to
    e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
    
    When the encoding/json package is unmarshaling data into a Go type and
    encounters a JSON object, it can basically only unmarshal the data into
    an empty interface, a map, or a struct. It cannot, however, unmarshal
    data into an interface with at least one method defined on it (e.g.
    DomainBuildInfoTypeUnion). Before this check is done, however, the
    decoder will check if the Go type is a pointer, and dereference it if
    so. It will then use the type of this value as the "target" type.
    
    This means that if the TypeUnion field is populated with a
    DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
    fail. If the TypeUnion field is populated with a
    *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
    struct instead, allowing decoding to continue normally.
    
    Since there does not appear to be a strict need for NOT using pointers
    in these fields, update code generation to set keyed union fields to
    pointers of their implementing structs.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
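The decoder behaviour this commit relies on can be demonstrated with a minimal, hypothetical mirror of the keyed-union pattern; typeUnion, hvmUnion, and buildInfo below stand in for the generated DomainBuildInfoTypeUnion, DomainBuildInfoTypeUnionHvm, and DomainBuildInfo types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeUnion mirrors a keyed-union interface with one unexported method.
type typeUnion interface{ isTypeUnion() }

// hvmUnion mirrors a concrete union member; both hvmUnion and
// *hvmUnion satisfy typeUnion.
type hvmUnion struct{ Boot string }

func (hvmUnion) isTypeUnion() {}

type buildInfo struct {
	TypeUnion typeUnion
}

// roundTrip marshals a populated buildInfo to JSON and tries to
// unmarshal it back, decoding into the same populated field.
func roundTrip(u typeUnion) error {
	b, err := json.Marshal(buildInfo{TypeUnion: u})
	if err != nil {
		return err
	}
	out := buildInfo{TypeUnion: u}
	return json.Unmarshal(b, &out)
}

func main() {
	// Struct value in the interface: the decoder sees a non-empty
	// interface and reports an unmarshal type error.
	fmt.Println(roundTrip(hvmUnion{Boot: "c"}) != nil)
	// Pointer in the interface: the decoder dereferences it, sees a
	// struct, and decoding continues normally.
	fmt.Println(roundTrip(&hvmUnion{Boot: "c"}) == nil)
}
```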

commit bc9f632e31ee66be3f1860fc7303fe91a42e56a6
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:45 2021 -0400

    golang/xenlight: export keyed union interface types
    
    For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
    field must be exported so that package users can get/set the fields
    within. This means that users are aware of the existence of the
    interface type used in those fields (see [1]), so it is awkward that the
    interface itself is not exported. However, the single method within the
    interface must remain unexported so that users cannot mistakenly "implement"
    those interfaces.
    
    Since there seems to be no reason to do otherwise, export the keyed
    union interface types.
    
    [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1422d8db1b3dfdf7d9179944e594876e5e356a4b
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:43 2021 -0400

    golang/xenlight: fix StringList toC conversion
    
    The current implementation of StringList.toC does not correctly account
    for how libxl_string_list is expected to be laid out in C, which is clear
    when one looks at libxl_string_list_length in libxl.c. In particular,
    StringList.toC does not account for the extra memory that should be
    allocated for the "sentinel" entry. And, when using the "slice trick" to
    create a slice that can address C memory, the unsafe.Pointer conversion
    should be on a C.libxl_string_list, not *C.libxl_string_list.
    
    Fix these problems by (1) allocating an extra slot in the slice used to
    address the C memory, and explicitly setting the last entry to nil so the C
    memory will be zeroed out, and (2) dereferencing csl in the
    unsafe.Pointer conversion.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit b291ce703b9cebef0800267446334e867588354a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:42 2021 -0400

    golang/xenlight: update generated code
    
    Re-generate code to reflect changes to libxl_types.idl from the
    following commits:
    
    0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
    7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
    9835246710 viridian: remove implicit limit of 64 VPs per partition
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 02:58:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 02:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145731.268015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvWbl-0003Fe-3E; Tue, 22 Jun 2021 02:57:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145731.268015; Tue, 22 Jun 2021 02:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvWbk-0003FX-Vj; Tue, 22 Jun 2021 02:57:48 +0000
Received: by outflank-mailman (input) for mailman id 145731;
 Tue, 22 Jun 2021 02:57:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvWbk-0003FN-3w; Tue, 22 Jun 2021 02:57:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvWbj-0000GK-Qx; Tue, 22 Jun 2021 02:57:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvWbj-0000m9-Id; Tue, 22 Jun 2021 02:57:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvWbj-0007gM-I6; Tue, 22 Jun 2021 02:57:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yu20q9b3wZ5aiMDqdKY8OOFowXJ460cTwtGSm4sLcP0=; b=r7SrawCRZEjD7OxH5lv47Oht7q
	EdS5ZytNm0+foYc8dmzCzKNUb15NA3wClQbQyrfQwKYlARDuP7hZ4Xh0kUgjdRSDVMZ7bFDjBM5Eb
	BXdCeoWIkI5nnorFv21M3IUXuuIAJEf4kN60w3sdMxadTWpaCfFCvFYuwcJmOxznN1Ac=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162938-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162938: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=6cfeeb71c43173d657d86d4a38ed655b0fc5f277
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 02:57:47 +0000

flight 162938 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162938/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64                   4 host-install(4)        broken REGR. vs. 162359

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 6cfeeb71c43173d657d86d4a38ed655b0fc5f277
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   17 days
Failing since        162368  2021-06-04 15:42:59 Z   17 days   38 attempts
Testing same since   162938  2021-06-21 11:33:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

(No revision log; it would be 2193 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 05:01:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 05:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145739.268035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvYXU-0006lc-2y; Tue, 22 Jun 2021 05:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145739.268035; Tue, 22 Jun 2021 05:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvYXT-0006lV-WD; Tue, 22 Jun 2021 05:01:32 +0000
Received: by outflank-mailman (input) for mailman id 145739;
 Tue, 22 Jun 2021 05:01:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvYXS-0006lL-OZ; Tue, 22 Jun 2021 05:01:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvYXS-0002qp-Gj; Tue, 22 Jun 2021 05:01:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvYXS-0003WQ-7X; Tue, 22 Jun 2021 05:01:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvYXS-000059-70; Tue, 22 Jun 2021 05:01:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4QacmPXLgDr2zwdzsqChu7E/ZQnn+idYeogiDccWEyU=; b=f/IdhYDX7fYU9OCUxx8sI6ltp3
	wCMyR8q2OFw5rCQLJ7SdoDlUYPt9UwOmQ4MCvPmgSOeifcyPgH8jjplvYq0uGIkrXsDtNXXKmxaNW
	n2e8igHgCLZF473UY8Gjj90DyBa9Y9bAj6JWsejCsJpNX4DHt9Cg+DTcWrNdsHvTCkoY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162955: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 05:01:30 +0000

flight 162955 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162955/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162880
 build-amd64                   4 host-install(4)        broken REGR. vs. 162880

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

------------------------------------------------------------
commit c9b59f9032d41be8bade8a25d9148cf6ed203704
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:52 2021 -0400

    golang/xenlight: do not negate ret when converting to Error
    
    Commit 871e51d2d4 changed the sign on the xenlight error types (making
    the values negative, same as the C-generated constants), but failed to
    remove the code changing the sign before casting to Error().  This
    results in error strings like "libxl error: <x>", rather than the
    correct message. Fix all occurrences of this by running:
    
      gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
    
    from tools/golang/xenlight.
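
The effect of the stray negation can be sketched with a minimal stand-in for the generated error type (the constant value and messages below are illustrative, not the actual generated bindings):

```go
package main

import "fmt"

// Error is a minimal stand-in for the generated xenlight error type,
// whose values are negative, matching the C-generated constants.
type Error int

const ErrorInval Error = -4 // hypothetical value, for illustration only

func (e Error) Error() string {
	if e == ErrorInval {
		return "Invalid argument"
	}
	return fmt.Sprintf("libxl error: %d", e)
}

func main() {
	ret := -4 // libxl calls return negative codes on failure

	// Before the fix: negating ret yields 4, which matches no
	// constant, so only the fallback string is produced.
	fmt.Println(Error(-ret)) // libxl error: 4

	// After the fix: the negative code maps to the right message.
	fmt.Println(Error(ret)) // Invalid argument
}
```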
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>

commit 1d95fd75df18bf25cb445feb47caf62da25c00e8
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:51 2021 -0400

    golang/xenlight: add SendTrigger wrapper
    
    Add a wrapper around libxl_send_trigger.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 9b6d865e0af56e376740ba03b1ccdf316362a71e
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:50 2021 -0400

    golang/xenlight: add DomainDestroy wrapper
    
    Add a wrapper around libxl_domain_destroy.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c089de0e2fa56d846cfb658b7b5efc3426895973
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:47 2021 -0400

    golang/xenlight: rename Ctx receivers to ctx
    
    As a matter of style, it is strange to see capitalized receiver names,
    due to the significance of capitalized symbols in Go (although there is
    in fact nothing special about a capitalized receiver name). Fix this in
    xenlight.go by running:
    
      gofmt -w -r 'Ctx -> ctx' xenlight.go
    
    from tools/golang/xenlight. There is no functional change.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1997940ad25e3566d1ab38496b8c7b07a086695a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:46 2021 -0400

    golang/xenlight: use struct pointers in keyed union fields
    
    Currently, when marshaling Go types with keyed union fields, we assign the
    value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
    interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
    As-is, this means that if a populated DomainBuildInfo is marshaled to
    e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
    
    When the encoding/json package is unmarshaling data into a Go type and
    encounters a JSON object, it can decode the data into an empty
    interface, a map, or a struct. It cannot, however, decode data into an
    interface with at least one method defined on it (e.g.
    DomainBuildInfoTypeUnion). Before this check is done, the
    decoder will check if the Go type is a pointer, and dereference it if
    so. It will then use the type of this value as the "target" type.
    
    This means that if the TypeUnion field is populated with a
    DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
    fail. If the TypeUnion field is populated with a
    *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
    struct instead, allowing decoding to continue normally.
    
    Since there does not appear to be a strict need for NOT using pointers
    in these fields, update code generation to set keyed union fields to
    pointers to their implementing structs.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit bc9f632e31ee66be3f1860fc7303fe91a42e56a6
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:45 2021 -0400

    golang/xenlight: export keyed union interface types
    
    For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
    field must be exported so that package users can get/set the fields
    within. This means that users are aware of the existence of the
    interface type used in those fields (see [1]), so it is awkward that the
    interface itself is not exported. However, the single method within the
    interface must remain unexported so that users cannot mistakenly "implement"
    those interfaces.
    
    Since there seems to be no reason to do otherwise, export the keyed
    union interface types.
    
    [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1422d8db1b3dfdf7d9179944e594876e5e356a4b
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:43 2021 -0400

    golang/xenlight: fix StringList toC conversion
    
    The current implementation of StringList.toC does not correctly account
    for how libxl_string_list is expected to be laid out in C, which is clear
    when one looks at libxl_string_list_length in libxl.c. In particular,
    StringList.toC does not account for the extra memory that should be
    allocated for the "sentinel" entry. And, when using the "slice trick" to
    create a slice that can address C memory, the unsafe.Pointer conversion
    should be on a C.libxl_string_list, not *C.libxl_string_list.
    
    Fix these problems by (1) allocating an extra slot in the slice used to
    address the C memory, and explicitly setting the last entry to nil so the C
    memory will be zeroed out, and (2) dereferencing csl in the
    unsafe.Pointer conversion.
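    
The layout issue can be illustrated without cgo: a libxl_string_list is a NULL-terminated char** array, so the slice addressing the C memory needs one slot per string plus one sentinel slot. A pure-Go sketch of that layout (the helper names are illustrative, not the actual xenlight code):

```go
package main

import "fmt"

// toC sketches the corrected StringList.toC layout: one slot per
// string plus one extra slot for the NULL sentinel, mirroring how
// libxl_string_list is laid out in C.
func toC(strs []string) []*byte {
	out := make([]*byte, len(strs)+1) // extra slot for the sentinel
	for i, s := range strs {
		b := append([]byte(s), 0) // NUL-terminated, as C expects
		out[i] = &b[0]
	}
	out[len(strs)] = nil // the sentinel entry, explicitly zeroed
	return out
}

// length mirrors libxl_string_list_length: walk entries until the
// sentinel is reached.
func length(sl []*byte) int {
	n := 0
	for sl[n] != nil {
		n++
	}
	return n
}

func main() {
	sl := toC([]string{"xl", "create", "guest.cfg"})
	fmt.Println(length(sl)) // 3
	fmt.Println(len(sl))    // 4: three strings plus the sentinel
}
```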
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit b291ce703b9cebef0800267446334e867588354a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:42 2021 -0400

    golang/xenlight: update generated code
    
    Re-generate code to reflect changes to libxl_types.idl from the
    following commits:
    
    0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
    7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
    9835246710 viridian: remove implicit limit of 64 VPs per partition
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 08:18:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 08:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145756.268067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvbc0-0007mU-0U; Tue, 22 Jun 2021 08:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145756.268067; Tue, 22 Jun 2021 08:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvbbz-0007mN-TI; Tue, 22 Jun 2021 08:18:23 +0000
Received: by outflank-mailman (input) for mailman id 145756;
 Tue, 22 Jun 2021 08:18:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbby-0007mD-K9; Tue, 22 Jun 2021 08:18:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbby-0006fG-F0; Tue, 22 Jun 2021 08:18:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbby-0007tk-7A; Tue, 22 Jun 2021 08:18:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbby-0005VA-6f; Tue, 22 Jun 2021 08:18:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M7HtYhDPIavdQF283i5H4cCsRtQvgcRnv6L2/bqCuPg=; b=qW5P9rTTgkT9le7S+bA3/kLbys
	WTQy2lzltmzcFOSYAGvi6xBu+ge3w4KX/uxjGMXQ0y436Zvud/PbZnZ/H+9EQtJ5YZGY1W7QdIj6E
	rgXjtQU+WHjmGyt5pPQEDaOjqmWHRebwj2Z2iPrbjCqnK0+OEBwnfPceni0NGq/ExcOM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162960-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162960: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 08:18:22 +0000

flight 162960 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162960/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162880
 build-amd64                   4 host-install(4)        broken REGR. vs. 162880

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)

Not pushing.

------------------------------------------------------------
commit c9b59f9032d41be8bade8a25d9148cf6ed203704
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:52 2021 -0400

    golang/xenlight: do not negate ret when converting to Error
    
    Commit 871e51d2d4 changed the sign on the xenlight error types (making
    the values negative, same as the C-generated constants), but failed to
    remove the code changing the sign before casting to Error().  This
    results in error strings like "libxl error: <x>", rather than the
    correct message. Fix all occurrences of this by running:
    
      gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go
    
    from tools/golang/xenlight.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>

commit 1d95fd75df18bf25cb445feb47caf62da25c00e8
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:51 2021 -0400

    golang/xenlight: add SendTrigger wrapper
    
    Add a wrapper around libxl_send_trigger.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 9b6d865e0af56e376740ba03b1ccdf316362a71e
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:50 2021 -0400

    golang/xenlight: add DomainDestroy wrapper
    
    Add a wrapper around libxl_domain_destroy.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c089de0e2fa56d846cfb658b7b5efc3426895973
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:47 2021 -0400

    golang/xenlight: rename Ctx receivers to ctx
    
    As a matter of style, it is strange to see capitalized receiver names,
    due to the significance of capitalized symbols in Go (although there is
    in fact nothing special about a capitalized receiver name). Fix this in
    xenlight.go by running:
    
      gofmt -w -r 'Ctx -> ctx' xenlight.go
    
    from tools/golang/xenlight. There is no functional change.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1997940ad25e3566d1ab38496b8c7b07a086695a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:46 2021 -0400

    golang/xenlight: use struct pointers in keyed union fields
    
    Currently, when marshaling Go types with keyed union fields, we assign the
    value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
    interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
    As-is, this means that if a populated DomainBuildInfo is marshaled to
    e.g. JSON, unmarshaling back to DomainBuildInfo will fail.
    
    When the encoding/json package is unmarshaling data into a Go type and
    encounters a JSON object, it can decode the data into an empty
    interface, a map, or a struct. It cannot, however, decode data into an
    interface with at least one method defined on it (e.g.
    DomainBuildInfoTypeUnion). Before this check is done, the
    decoder will check if the Go type is a pointer, and dereference it if
    so. It will then use the type of this value as the "target" type.
    
    This means that if the TypeUnion field is populated with a
    DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
    fail. If the TypeUnion field is populated with a
    *DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
    struct instead, allowing decoding to continue normally.
    
    Since there does not appear to be a strict need for NOT using pointers
    in these fields, update code generation to set keyed union fields to
    pointers to their implementing structs.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit bc9f632e31ee66be3f1860fc7303fe91a42e56a6
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:45 2021 -0400

    golang/xenlight: export keyed union interface types
    
    For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
    field must be exported so that package users can get/set the fields
    within. This means that users are aware of the existence of the
    interface type used in those fields (see [1]), so it is awkward that the
    interface itself is not exported. However, the single method within the
    interface must remain unexported so that users cannot mistakenly "implement"
    those interfaces.
    
    Since there seems to be no reason to do otherwise, export the keyed
    union interface types.
    
    [1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 1422d8db1b3dfdf7d9179944e594876e5e356a4b
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:43 2021 -0400

    golang/xenlight: fix StringList toC conversion
    
    The current implementation of StringList.toC does not correctly account
    for how libxl_string_list is expected to be laid out in C, which is clear
    when one looks at libxl_string_list_length in libxl.c. In particular,
    StringList.toC does not account for the extra memory that should be
    allocated for the "sentinel" entry. And, when using the "slice trick" to
    create a slice that can address C memory, the unsafe.Pointer conversion
    should be on a C.libxl_string_list, not *C.libxl_string_list.
    
    Fix these problems by (1) allocating an extra slot in the slice used to
    address the C memory, and explicitly setting the last entry to nil so the C
    memory will be zeroed out, and (2) dereferencing csl in the
    unsafe.Pointer conversion.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit b291ce703b9cebef0800267446334e867588354a
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon May 24 16:36:42 2021 -0400

    golang/xenlight: update generated code
    
    Re-generate code to reflect changes to libxl_types.idl from the
    following commits:
    
    0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
    7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
    9835246710 viridian: remove implicit limit of 64 VPs per partition
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 08:34:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 08:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145762.268081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvbrU-0001eW-D9; Tue, 22 Jun 2021 08:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145762.268081; Tue, 22 Jun 2021 08:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvbrU-0001eP-9C; Tue, 22 Jun 2021 08:34:24 +0000
Received: by outflank-mailman (input) for mailman id 145762;
 Tue, 22 Jun 2021 08:34:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbrT-0001eF-In; Tue, 22 Jun 2021 08:34:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbrT-0006vg-6F; Tue, 22 Jun 2021 08:34:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbrS-0008El-QH; Tue, 22 Jun 2021 08:34:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvbrS-0003JI-Ph; Tue, 22 Jun 2021 08:34:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HQrDhNh/0uvAWmObVs2pYQWe6eAUyV/wXPFALR8omd8=; b=gT1rcAeYoeu733KhUVnTyaasSJ
	yzeNFEn1Q/rUzYFC6GGQei4WuLW0wDHaVmsWc0kmKe4EPIRujRLkGSfFhnTOjUDodc/uspoMk8jNq
	0gebs2gObBSiEX/0jmKMw4FVB7H+4ZKQ8qQwDK7G7kGChCwAtHX+a1/9zS+kWh63PJxQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162943-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162943: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-amd64:host-install(4):broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 08:34:22 +0000

flight 162943 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162943/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162533
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-arm64                   4 host-install(4)        broken REGR. vs. 162533
 build-i386                    4 host-install(4)        broken REGR. vs. 162533
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162533
 build-i386-prev               4 host-install(4)        broken REGR. vs. 162533
 build-amd64                   4 host-install(4)        broken REGR. vs. 162533
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 162533
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   14 days
Failing since        162556  2021-06-08 22:39:08 Z   13 days   20 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    4 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-amd64-pvops host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 08:46:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 08:46:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145769.268095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvc2y-0003H6-LY; Tue, 22 Jun 2021 08:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145769.268095; Tue, 22 Jun 2021 08:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvc2y-0003Gz-IQ; Tue, 22 Jun 2021 08:46:16 +0000
Received: by outflank-mailman (input) for mailman id 145769;
 Tue, 22 Jun 2021 08:46:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=P8ns=LQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lvc2x-0003Gr-He
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 08:46:15 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 110aa5c1-2451-44e1-8d5c-2908cf473679;
 Tue, 22 Jun 2021 08:46:14 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 53B1D21952;
 Tue, 22 Jun 2021 08:46:13 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 22899118DD;
 Tue, 22 Jun 2021 08:46:13 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id FhQLB1Wj0WAWFQAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 22 Jun 2021 08:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 110aa5c1-2451-44e1-8d5c-2908cf473679
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624351573; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bqO6v55VW8i6fUwH8jsbTg7KSAoJ15/gYicXfwyE0TY=;
	b=JewN9AmvU6VG/Kr/64X+t08+PUDkji4lzGw9Ifp8kjMUTAVVnloC47pYm14nCQE6p2i0sT
	3Ov5yjVTfjgYq/jRp7vA6Hg7QVV/qg1uf4Xu4xkRv3o5qbu0dnkeuz39fWT3H6+78OSazo
	x8vMmsOfeQQGP3jm9laJFPXCwBuYxTU=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
Message-ID: <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
Date: Tue, 22 Jun 2021 10:46:12 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210617173857.6450-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="waj9IQirOABc9bYDUCo6F0BptcRzrt9sA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--waj9IQirOABc9bYDUCo6F0BptcRzrt9sA
Content-Type: multipart/mixed; boundary="JAOfMLXpXkIk7kohTNQRscUQJblPJQ7Ji";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
References: <20210617173857.6450-1-julien@xen.org>
In-Reply-To: <20210617173857.6450-1-julien@xen.org>

--JAOfMLXpXkIk7kohTNQRscUQJblPJQ7Ji
Content-Type: multipart/mixed;
 boundary="------------E75DA7B5BE3C054BC560017B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E75DA7B5BE3C054BC560017B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.06.21 19:38, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> As Live-Update is asynchronous, it is possible to receive a request to
> cancel it (either on the same connection or from a different one).
>
> Currently, this will crash xenstored because do_lu_start() assumes
> lu_status will be valid. This is not the case when Live-Update has been
> cancelled. This results in dereferencing a NULL pointer and
> crashing Xenstored.

Umm, you introduced that bug in "[PATCH 03/10] tools/xenstore: Don't
assume conn->in points to the LU request".

So I think you should fold your fix into that series.


Juergen

--------------E75DA7B5BE3C054BC560017B--

--JAOfMLXpXkIk7kohTNQRscUQJblPJQ7Ji--

--waj9IQirOABc9bYDUCo6F0BptcRzrt9sA
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDRo1QFAwAAAAAACgkQsN6d1ii/Ey8j
/Qf8DBtb+Mnyoogf6HvuJnE4sqz+Pa9CXLL0xKXJyt2hQGDwspSdv8oLq761JIjuK01JlaO5o8IM
HzUyNOvRhbz1g5I7+5oZdMH9cj1aYLfsKgjieFj7IMRDtlDQZK/bH0ZVPdzvaunSK5opuWczi07g
hVF2/8t2/+TpNDyLScSoawL67OzcrTYnvYC8waFvyBL/0s9ti0zwN5ISnZI1cqy4RZqDQtg4gli6
GS9wwds/Euauj1HZn/MAAvMY2+QDwN3B12u2pvQ8/NCF5g+3x7NHKotnNrlept3eo2MhWOxAiq2z
UIU4ko/UUkMwH72JBbXmoq4pm2cxbGT3wUSrMerSuw==
=eEQF
-----END PGP SIGNATURE-----

--waj9IQirOABc9bYDUCo6F0BptcRzrt9sA--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 08:53:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 08:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145774.268106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcAC-0004k9-Ew; Tue, 22 Jun 2021 08:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145774.268106; Tue, 22 Jun 2021 08:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcAC-0004k2-Ai; Tue, 22 Jun 2021 08:53:44 +0000
Received: by outflank-mailman (input) for mailman id 145774;
 Tue, 22 Jun 2021 08:53:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lvcAB-0004jw-15
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 08:53:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lvcA9-0007FW-JZ; Tue, 22 Jun 2021 08:53:41 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lvcA9-0004ZZ-B3; Tue, 22 Jun 2021 08:53:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oiu1y/VFotGgWs6FCOYGAiLGP/wrlWBqk+Z0GNUHPBM=; b=CBtsKTGQJmAh0UVudCn8bFjvas
	cgx7nPNnq7Rth/eo/RyyzC28o/ObcKNvSk4h4MFEdRuLsupAavrboeN81ZcKBvjWtwAeLHMydC+4w
	w1fVt1x6Wbx+3YDlnNimePttsW4XSPRIRkSzH2iBuMDFIusIXfPt66xX430czdVSeXOs=;
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ba1c16d7-1820-a146-2d64-6d4cc5901f04@xen.org>
Date: Tue, 22 Jun 2021 10:53:38 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 22/06/2021 10:46, Juergen Gross wrote:
> On 17.06.21 19:38, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> As Live-Update is asynchronous, it is possible to receive a request to
>> cancel it (either on the same connection or from a different one).
>>
>> Currently, this will crash xenstored because do_lu_start() assumes
>> lu_status will be valid. This is not the case when Live-Update has been
>> cancelled. This will result in dereferencing a NULL pointer and
>> crashing Xenstored.
> 
> Umm, you introduced that bug in "[PATCH 03/10] tools/xenstore: Don't
> assume conn->in points to the LU request".

No. I reproduced this one without my series. If there are in-flight 
transactions, it will crash in lu_check_lu_allowed(); otherwise, it will 
crash when calling lu_dump_state().

The easiest way to reproduce it is to request a live-update while there 
are long-running transactions and then, from a different connection 
(still in dom0), request the abort.

In this case, lu_abort() will free lu_status and the destructor will set 
it to NULL. But the actual request is still in the delayed request queue 
for the other connection.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 09:13:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 09:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145779.268117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcTb-00072o-1I; Tue, 22 Jun 2021 09:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145779.268117; Tue, 22 Jun 2021 09:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcTa-00072h-UE; Tue, 22 Jun 2021 09:13:46 +0000
Received: by outflank-mailman (input) for mailman id 145779;
 Tue, 22 Jun 2021 09:13:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=P8ns=LQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lvcTZ-00072b-GN
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 09:13:45 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1edd42b3-1de2-4b0b-a114-0d2bf25066b2;
 Tue, 22 Jun 2021 09:13:44 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7C1CC1FD5F;
 Tue, 22 Jun 2021 09:13:43 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 49660118DD;
 Tue, 22 Jun 2021 09:13:43 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id FP3LEMep0WDcIwAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 22 Jun 2021 09:13:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1edd42b3-1de2-4b0b-a114-0d2bf25066b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624353223; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5uW1mvwPWJAxbKJxXhY9mWmrSh/sL25QUQ3IwGQ7nSA=;
	b=i5qQ1IL1cl1ABzHYSualVqG2OtOSlvhHeDe1+Rf5DpUEsf21mDHP9yPKLhkrM0aytyRkXA
	SCDG/42LlTnRXMFrXcuUqTrAmzod1oRHoufZtCz3qVMNIJyMfWYpK+yiPE7J/Ywns+BCTT
	DGvEQpj+TFNngjcckDorV2DTj1OeQJw=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624353223; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5uW1mvwPWJAxbKJxXhY9mWmrSh/sL25QUQ3IwGQ7nSA=;
	b=i5qQ1IL1cl1ABzHYSualVqG2OtOSlvhHeDe1+Rf5DpUEsf21mDHP9yPKLhkrM0aytyRkXA
	SCDG/42LlTnRXMFrXcuUqTrAmzod1oRHoufZtCz3qVMNIJyMfWYpK+yiPE7J/Ywns+BCTT
	DGvEQpj+TFNngjcckDorV2DTj1OeQJw=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
 <ba1c16d7-1820-a146-2d64-6d4cc5901f04@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
Message-ID: <5f4883a2-fbeb-6bf4-c5ef-46e55c74f5b0@suse.com>
Date: Tue, 22 Jun 2021 11:13:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <ba1c16d7-1820-a146-2d64-6d4cc5901f04@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Tr2iMtsNeS4mT5XmpMmWNGu9mnlpBXBvR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Tr2iMtsNeS4mT5XmpMmWNGu9mnlpBXBvR
Content-Type: multipart/mixed; boundary="XYsNLoGG7uydEIBYAwQNVOcrd4DfoHqT3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <5f4883a2-fbeb-6bf4-c5ef-46e55c74f5b0@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
References: <20210617173857.6450-1-julien@xen.org>
 <e7458af5-a128-fc01-21ee-34a02f2fdf9b@suse.com>
 <ba1c16d7-1820-a146-2d64-6d4cc5901f04@xen.org>
In-Reply-To: <ba1c16d7-1820-a146-2d64-6d4cc5901f04@xen.org>

--XYsNLoGG7uydEIBYAwQNVOcrd4DfoHqT3
Content-Type: multipart/mixed;
 boundary="------------E2257B4A84088B714C77AD72"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E2257B4A84088B714C77AD72
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 22.06.21 10:53, Julien Grall wrote:
> Hi Juergen,
>
> On 22/06/2021 10:46, Juergen Gross wrote:
>> On 17.06.21 19:38, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> As Live-Update is asynchronous, it is possible to receive a request to
>>> cancel it (either on the same connection or from a different one).
>>>
>>> Currently, this will crash xenstored because do_lu_start() assumes
>>> lu_status will be valid. This is not the case when Live-Update has been
>>> cancelled. This will result in dereferencing a NULL pointer and
>>> crashing Xenstored.
>>
>> Umm, you introduced that bug in "[PATCH 03/10] tools/xenstore: Don't
>> assume conn->in points to the LU request".
>
> No. I reproduced this one without my series. If there are in-flight
> transactions, it will crash in lu_check_lu_allowed(); otherwise, it will
> crash when calling lu_dump_state().

Oh, right, I missed the indirection via delay_request().

Sorry.


Juergen

--------------E2257B4A84088B714C77AD72--

--XYsNLoGG7uydEIBYAwQNVOcrd4DfoHqT3--

--Tr2iMtsNeS4mT5XmpMmWNGu9mnlpBXBvR
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDRqcYFAwAAAAAACgkQsN6d1ii/Ey90
rAf/f4v5o4ocdrlvJ3ohGUU7afDie8X20DcOQrI63WA/XdLfVmkWT9BHveFxjm0NT2JgwvhVnLr/
WfwBrzzOcplNey0hCn5dNipDP7T0ctxGuWvrBZtVwiEVhy4cwsBUf6CjkRbEmAAOH5LWKhTLiHWr
7jfeMR70dCKuAMlf/kpomYfRlDX9wd89xJlSphoGBeBayPQzo4sjx2AUh/YrCKGUrY9qV4wGUtSJ
AXd+RpEAt8JNgxI+XWuHbz07mEk6aWtx6wMUbB580+h5KVWOCQV8r0TTqEt3NTza8duyenZorHc+
fuSz/Bkneni586pwx31KaUOhbblYdXkkAaeXqjFznQ==
=8Ga+
-----END PGP SIGNATURE-----

--Tr2iMtsNeS4mT5XmpMmWNGu9mnlpBXBvR--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 09:23:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 09:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145784.268128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcd9-0008TH-Vd; Tue, 22 Jun 2021 09:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145784.268128; Tue, 22 Jun 2021 09:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcd9-0008TA-S2; Tue, 22 Jun 2021 09:23:39 +0000
Received: by outflank-mailman (input) for mailman id 145784;
 Tue, 22 Jun 2021 09:23:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=P8ns=LQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lvcd8-0008T4-P7
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 09:23:38 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f02d891-5cb1-466d-914f-b9591431af90;
 Tue, 22 Jun 2021 09:23:36 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id AFD5D2198C;
 Tue, 22 Jun 2021 09:23:35 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 7E0CE118DD;
 Tue, 22 Jun 2021 09:23:35 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id tnypHBes0WAJKQAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 22 Jun 2021 09:23:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f02d891-5cb1-466d-914f-b9591431af90
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624353815; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZrGILk5TF1Zhl5wMEICap1qzcWtZAIP+G9BYsVDJ00k=;
	b=GuvXIBgtdSgCuYNvJW4AGUqIOwfkJ9tW4a/n8gh+DXeF+kyGrBWCJnjWs9MRWlPqv2Z0m2
	qurszEL6mTyB7fZ5N5b1eEVkWVtBlr2SdRTwsyYAQ1HUIYvTi7tWOE14QXzNLF2fAAU8IL
	9i1HT+hOgBtcHAl/xvWZgKegr5m9iC4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624353815; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZrGILk5TF1Zhl5wMEICap1qzcWtZAIP+G9BYsVDJ00k=;
	b=GuvXIBgtdSgCuYNvJW4AGUqIOwfkJ9tW4a/n8gh+DXeF+kyGrBWCJnjWs9MRWlPqv2Z0m2
	qurszEL6mTyB7fZ5N5b1eEVkWVtBlr2SdRTwsyYAQ1HUIYvTi7tWOE14QXzNLF2fAAU8IL
	9i1HT+hOgBtcHAl/xvWZgKegr5m9iC4=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
Message-ID: <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
Date: Tue, 22 Jun 2021 11:23:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210617173857.6450-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="BM8HYKSk6JU0miiBhsDfUZQ99pUV5IlxA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--BM8HYKSk6JU0miiBhsDfUZQ99pUV5IlxA
Content-Type: multipart/mixed; boundary="csuHntVeq7SnTgkWBQ8iGcGbjzxcTxVkv";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
References: <20210617173857.6450-1-julien@xen.org>
In-Reply-To: <20210617173857.6450-1-julien@xen.org>

--csuHntVeq7SnTgkWBQ8iGcGbjzxcTxVkv
Content-Type: multipart/mixed;
 boundary="------------194A444496820619141E8FBE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------194A444496820619141E8FBE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.06.21 19:38, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> As Live-Update is asynchronous, it is possible to receive a request to
> cancel it (either on the same connection or from a different one).
>
> Currently, this will crash xenstored because do_lu_start() assumes
> lu_status will be valid. This is not the case when Live-Update has been
> cancelled. This will result in dereferencing a NULL pointer and
> crashing Xenstored.
>
> Rework do_lu_start() to check if lu_status is NULL and return an
> error in this case.
>
> Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing the live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ----
>
> This is currently based on top of:
>
> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org
>
> This can be re-ordered if necessary.
> ---
>   tools/xenstore/xenstored_control.c | 15 +++++++++++++--
>   1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index a045f102a420..37a3d39f20b5 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request *req)
>   	time_t now = time(NULL);
>   	const char *ret;
>   	struct buffered_data *saved_in;
> -	struct connection *conn = lu_status->conn;
> +	struct connection *conn = req->data;
> +
> +	/*
> +	 * Cancellation may have been requested asynchronously. In this
> +	 * case, lu_status will be NULL.
> +	 */
> +	if (!lu_status) {
> +		ret = "Cancellation was requested";
> +		conn = req->data;
This will set conn to the same value it already has.


Other than that:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------194A444496820619141E8FBE--

--csuHntVeq7SnTgkWBQ8iGcGbjzxcTxVkv--

--BM8HYKSk6JU0miiBhsDfUZQ99pUV5IlxA
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDRrBcFAwAAAAAACgkQsN6d1ii/Ey+u
2wf+JWKqfQB7M4Nd/jEL3m9Nj9NV7U32Mxb3+Xbhtj+vx0nKg7P8dDExh6IkAgvCQp23Ch+kA8OI
HIVPiPYOA1fEfC6ysWMDOUr//Po67boxROWF3YIHB4WE4aSt4rYzk2BuEOtSnxUImKSRBJbvvGhv
j/1Ox+EUW4KAqHZFqpcucaXmOTgHPA6KPPDSmKwiC5wduCenU6PBTO7cKMxggKD1eVlB3iJaWYOq
pKyEENSJrCm3sSQz3Z8eQPx7EO6+Tu8dweMotZx56bRmKFM9ojShFEL6Lhtf2flH2TX1L2l8cd9J
GEHsDVD7oDl11waIz37i37y04xHoWMQqHSxThW6EtQ==
=5QqH
-----END PGP SIGNATURE-----

--BM8HYKSk6JU0miiBhsDfUZQ99pUV5IlxA--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 09:45:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 09:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145790.268144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcxr-0002QV-1W; Tue, 22 Jun 2021 09:45:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145790.268144; Tue, 22 Jun 2021 09:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcxq-0002QO-Ul; Tue, 22 Jun 2021 09:45:02 +0000
Received: by outflank-mailman (input) for mailman id 145790;
 Tue, 22 Jun 2021 09:38:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYqY=LQ=intel.com=lingshan.zhu@srs-us1.protection.inumbo.net>)
 id 1lvcrs-0001c3-8g
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 09:38:52 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd10bab4-da0c-4e9e-b681-c0e2cc166455;
 Tue, 22 Jun 2021 09:38:48 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2021 02:38:41 -0700
Received: from vmm_a4_icx.sh.intel.com (HELO localhost.localdomain)
 ([10.239.53.245])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2021 02:38:37 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd10bab4-da0c-4e9e-b681-c0e2cc166455
IronPort-SDR: WPCN1YEWpq5MG15nMM/dYxBBowaDGo7G+G0/cusFM3LhlBXHojBu4V4Y5841JRXsnLysxJF7sT
 MbRMQlPKuQuw==
X-IronPort-AV: E=McAfee;i="6200,9189,10022"; a="206964821"
X-IronPort-AV: E=Sophos;i="5.83,291,1616482800"; 
   d="scan'208";a="206964821"
IronPort-SDR: YjghSkQQhfpqXchU5FmETDRBEKjg++NYbLVqNiZ3bX2P3p3aIdtsxEvlzAZ3khA29zw8feGTq5
 hSRTxjz0q+eg==
X-IronPort-AV: E=Sophos;i="5.83,291,1616482800"; 
   d="scan'208";a="641599201"
From: Zhu Lingshan <lingshan.zhu@intel.com>
To: lingshan.zhu@live.com
Cc: Like Xu <like.xu@linux.intel.com>,
	Will Deacon <will@kernel.org>,
	Marc Zyngier <maz@kernel.org>,
	Guo Ren <guoren@kernel.org>,
	Nick Hu <nickhu@andestech.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu,
	linux-csky@vger.kernel.org,
	linux-riscv@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	Peter Zijlstra <peterz@infradead.org>,
	Zhu Lingshan <lingshan.zhu@intel.com>
Subject: [PATCH V7 01/18] perf/core: Use static_call to optimize perf_guest_info_callbacks
Date: Tue, 22 Jun 2021 17:38:06 +0800
Message-Id: <20210622093823.8215-2-lingshan.zhu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210622093823.8215-1-lingshan.zhu@intel.com>
References: <20210622093823.8215-1-lingshan.zhu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Like Xu <like.xu@linux.intel.com>

In "struct perf_guest_info_callbacks", the two fields "is_in_guest"
and "is_user_mode" are replaced with a single multiplexed member named
"state", and the "get_guest_ip" field is renamed to "get_ip".

For arm64, xen and kvm/x86, applying DEFINE_STATIC_CALL_RET0 lets the
hot paths drop the perf_guest_cbs NULL checks and invoke the callbacks
through static calls. For arm, csky, nds32, and riscv, only the rename
refactoring is applied.

Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-csky@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Original-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 arch/arm/kernel/perf_callchain.c   | 16 ++++++++-----
 arch/arm64/kernel/perf_callchain.c | 29 ++++++++++++++++++-----
 arch/arm64/kvm/perf.c              | 22 ++++++++---------
 arch/csky/kernel/perf_callchain.c  |  4 ++--
 arch/nds32/kernel/perf_event_cpu.c | 16 ++++++++-----
 arch/riscv/kernel/perf_callchain.c |  4 ++--
 arch/x86/events/core.c             | 38 ++++++++++++++++++++++++------
 arch/x86/events/intel/core.c       |  7 +++---
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/pmu.c                 |  2 +-
 arch/x86/kvm/x86.c                 | 37 ++++++++++++++++-------------
 arch/x86/xen/pmu.c                 | 33 +++++++++++---------------
 include/linux/perf_event.h         | 12 ++++++----
 kernel/events/core.c               |  9 +++++++
 14 files changed, 144 insertions(+), 87 deletions(-)

diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
index 3b69a76d341e..1ce30f86d6c7 100644
--- a/arch/arm/kernel/perf_callchain.c
+++ b/arch/arm/kernel/perf_callchain.c
@@ -64,7 +64,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 {
 	struct frame_tail __user *tail;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -100,7 +100,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 {
 	struct stackframe fr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -111,8 +111,8 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (perf_guest_cbs && perf_guest_cbs->state())
+		return perf_guest_cbs->get_ip();
 
 	return instruction_pointer(regs);
 }
@@ -120,9 +120,13 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int state = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (perf_guest_cbs)
+		state = perf_guest_cbs->state();
+
+	if (perf_guest_cbs && state) {
+		if (state & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 88ff471b0bce..5df2bd5d12ba 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2015 ARM Limited
  */
 #include <linux/perf_event.h>
+#include <linux/static_call.h>
 #include <linux/uaccess.h>
 
 #include <asm/pointer_auth.h>
@@ -99,10 +100,25 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
 }
 #endif /* CONFIG_COMPAT */
 
+DEFINE_STATIC_CALL_RET0(arm64_guest_state, *(perf_guest_cbs->state));
+DEFINE_STATIC_CALL_RET0(arm64_guest_get_ip, *(perf_guest_cbs->get_ip));
+
+void arch_perf_update_guest_cbs(void)
+{
+	static_call_update(arm64_guest_state, (void *)&__static_call_return0);
+	static_call_update(arm64_guest_get_ip, (void *)&__static_call_return0);
+
+	if (perf_guest_cbs && perf_guest_cbs->state)
+		static_call_update(arm64_guest_state, perf_guest_cbs->state);
+
+	if (perf_guest_cbs && perf_guest_cbs->get_ip)
+		static_call_update(arm64_guest_get_ip, perf_guest_cbs->get_ip);
+}
+
 void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 			 struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(arm64_guest_state)()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -149,7 +165,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 {
 	struct stackframe frame;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(arm64_guest_state)()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -160,8 +176,8 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (static_call(arm64_guest_state)())
+		return static_call(arm64_guest_get_ip)();
 
 	return instruction_pointer(regs);
 }
@@ -169,9 +185,10 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int guest = static_call(arm64_guest_state)();
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (guest) {
+		if (guest & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
index 151c31fb9860..8a3387e58f42 100644
--- a/arch/arm64/kvm/perf.c
+++ b/arch/arm64/kvm/perf.c
@@ -13,21 +13,20 @@
 
 DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);
 
-static int kvm_is_in_guest(void)
-{
-        return kvm_get_running_vcpu() != NULL;
-}
-
-static int kvm_is_user_mode(void)
+static unsigned int kvm_guest_state(void)
 {
 	struct kvm_vcpu *vcpu;
+	unsigned int state = 0;
+
+	if (kvm_get_running_vcpu())
+		state |= PERF_GUEST_ACTIVE;
 
 	vcpu = kvm_get_running_vcpu();
 
-	if (vcpu)
-		return !vcpu_mode_priv(vcpu);
+	if (vcpu && !vcpu_mode_priv(vcpu))
+		state |= PERF_GUEST_USER;
 
-	return 0;
+	return state;
 }
 
 static unsigned long kvm_get_guest_ip(void)
@@ -43,9 +42,8 @@ static unsigned long kvm_get_guest_ip(void)
 }
 
 static struct perf_guest_info_callbacks kvm_guest_cbs = {
-	.is_in_guest	= kvm_is_in_guest,
-	.is_user_mode	= kvm_is_user_mode,
-	.get_guest_ip	= kvm_get_guest_ip,
+	.state		= kvm_guest_state,
+	.get_ip		= kvm_get_guest_ip,
 };
 
 int kvm_perf_init(void)
diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
index ab55e98ee8f6..3e42239dd1b2 100644
--- a/arch/csky/kernel/perf_callchain.c
+++ b/arch/csky/kernel/perf_callchain.c
@@ -89,7 +89,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	unsigned long fp = 0;
 
 	/* C-SKY does not support virtualization. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+	if (perf_guest_cbs && perf_guest_cbs->state())
 		return;
 
 	fp = regs->regs[4];
@@ -113,7 +113,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 	struct stackframe fr;
 
 	/* C-SKY does not support virtualization. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		pr_warn("C-SKY does not support perf in guest mode!");
 		return;
 	}
diff --git a/arch/nds32/kernel/perf_event_cpu.c b/arch/nds32/kernel/perf_event_cpu.c
index 0ce6f9f307e6..1dc32ba842ce 100644
--- a/arch/nds32/kernel/perf_event_cpu.c
+++ b/arch/nds32/kernel/perf_event_cpu.c
@@ -1371,7 +1371,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 
 	leaf_fp = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -1481,7 +1481,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 {
 	struct stackframe fr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -1494,8 +1494,8 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
 	/* However, NDS32 does not support virtualization */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (perf_guest_cbs && perf_guest_cbs->state())
+		return perf_guest_cbs->get_ip();
 
 	return instruction_pointer(regs);
 }
@@ -1503,10 +1503,14 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int state = 0;
+
+	if (perf_guest_cbs)
+		state = perf_guest_cbs->state();
 
 	/* However, NDS32 does not support virtualization */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (perf_guest_cbs && state) {
+		if (state & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
index 0bb1854dce83..ea63f70cae5d 100644
--- a/arch/riscv/kernel/perf_callchain.c
+++ b/arch/riscv/kernel/perf_callchain.c
@@ -59,7 +59,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	unsigned long fp = 0;
 
 	/* RISC-V does not support perf in guest mode. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+	if (perf_guest_cbs && perf_guest_cbs->state())
 		return;
 
 	fp = regs->s0;
@@ -79,7 +79,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
 	/* RISC-V does not support perf in guest mode. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		pr_warn("RISC-V does not support perf in guest mode!");
 		return;
 	}
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8f71dd72ef95..c71af4cfba9b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -90,6 +90,27 @@ DEFINE_STATIC_CALL_NULL(x86_pmu_pebs_aliases, *x86_pmu.pebs_aliases);
  */
 DEFINE_STATIC_CALL_RET0(x86_pmu_guest_get_msrs, *x86_pmu.guest_get_msrs);
 
+DEFINE_STATIC_CALL_RET0(x86_guest_state, *(perf_guest_cbs->state));
+DEFINE_STATIC_CALL_RET0(x86_guest_get_ip, *(perf_guest_cbs->get_ip));
+DEFINE_STATIC_CALL_RET0(x86_guest_handle_intel_pt_intr, *(perf_guest_cbs->handle_intel_pt_intr));
+
+void arch_perf_update_guest_cbs(void)
+{
+	static_call_update(x86_guest_state, (void *)&__static_call_return0);
+	static_call_update(x86_guest_get_ip, (void *)&__static_call_return0);
+	static_call_update(x86_guest_handle_intel_pt_intr, (void *)&__static_call_return0);
+
+	if (perf_guest_cbs && perf_guest_cbs->state)
+		static_call_update(x86_guest_state, perf_guest_cbs->state);
+
+	if (perf_guest_cbs && perf_guest_cbs->get_ip)
+		static_call_update(x86_guest_get_ip, perf_guest_cbs->get_ip);
+
+	if (perf_guest_cbs && perf_guest_cbs->handle_intel_pt_intr)
+		static_call_update(x86_guest_handle_intel_pt_intr,
+				   perf_guest_cbs->handle_intel_pt_intr);
+}
+
 u64 __read_mostly hw_cache_event_ids
 				[PERF_COUNT_HW_CACHE_MAX]
 				[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -2738,7 +2759,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(x86_guest_state)()) {
 		/* TODO: We don't support guest os callchain now */
 		return;
 	}
@@ -2841,7 +2862,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 	struct stack_frame frame;
 	const struct stack_frame __user *fp;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(x86_guest_state)()) {
 		/* TODO: We don't support guest os callchain now */
 		return;
 	}
@@ -2918,18 +2939,21 @@ static unsigned long code_segment_base(struct pt_regs *regs)
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	unsigned long ip = static_call(x86_guest_get_ip)();
+
+	if (likely(!ip))
+		ip = regs->ip + code_segment_base(regs);
 
-	return regs->ip + code_segment_base(regs);
+	return ip;
 }
 
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
+	unsigned int guest = static_call(x86_guest_state)();
 	int misc = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (guest) {
+		if (guest & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e28892270c58..430f5743f3ca 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2780,6 +2780,8 @@ static void intel_pmu_reset(void)
 	local_irq_restore(flags);
 }
 
+DECLARE_STATIC_CALL(x86_guest_handle_intel_pt_intr, *(perf_guest_cbs->handle_intel_pt_intr));
+
 static int handle_pmi_common(struct pt_regs *regs, u64 status)
 {
 	struct perf_sample_data data;
@@ -2850,10 +2852,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_TRACE_TOPAPMI_BIT, (unsigned long *)&status)) {
 		handled++;
-		if (unlikely(perf_guest_cbs && perf_guest_cbs->is_in_guest() &&
-			perf_guest_cbs->handle_intel_pt_intr))
-			perf_guest_cbs->handle_intel_pt_intr();
-		else
+		if (!static_call(x86_guest_handle_intel_pt_intr)())
 			intel_pt_interrupt();
 	}
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9c7ced0e3171..f752fcf56d76 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1813,7 +1813,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
-int kvm_is_in_guest(void);
+unsigned int kvm_guest_state(void);
 
 void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 827886c12c16..2dcbd1b30004 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -87,7 +87,7 @@ static void kvm_perf_overflow_intr(struct perf_event *perf_event,
 		 * woken up. So we should wake it, but this is impossible from
 		 * NMI context. Do it from irq work instead.
 		 */
-		if (!kvm_is_in_guest())
+		if (!kvm_guest_state())
 			irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
 		else
 			kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b594275d49b5..9cb1c02d348c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8035,44 +8035,47 @@ static void kvm_timer_init(void)
 DEFINE_PER_CPU(struct kvm_vcpu *, current_vcpu);
 EXPORT_PER_CPU_SYMBOL_GPL(current_vcpu);
 
-int kvm_is_in_guest(void)
+unsigned int kvm_guest_state(void)
 {
-	return __this_cpu_read(current_vcpu) != NULL;
-}
-
-static int kvm_is_user_mode(void)
-{
-	int user_mode = 3;
+	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
+	unsigned int state = 0;
 
-	if (__this_cpu_read(current_vcpu))
-		user_mode = static_call(kvm_x86_get_cpl)(__this_cpu_read(current_vcpu));
+	if (vcpu) {
+		state |= PERF_GUEST_ACTIVE;
+		if (static_call(kvm_x86_get_cpl)(vcpu))
+			state |= PERF_GUEST_USER;
+	}
 
-	return user_mode != 0;
+	return state;
 }
 
-static unsigned long kvm_get_guest_ip(void)
+static unsigned long kvm_guest_get_ip(void)
 {
+	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
 	unsigned long ip = 0;
 
-	if (__this_cpu_read(current_vcpu))
-		ip = kvm_rip_read(__this_cpu_read(current_vcpu));
+	if (vcpu)
+		ip = kvm_rip_read(vcpu);
 
 	return ip;
 }
 
-static void kvm_handle_intel_pt_intr(void)
+static unsigned int kvm_handle_intel_pt_intr(void)
 {
 	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
 
+	if (!vcpu)
+		return 0;
+
 	kvm_make_request(KVM_REQ_PMI, vcpu);
 	__set_bit(MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI_BIT,
 			(unsigned long *)&vcpu->arch.pmu.global_status);
+	return 1;
 }
 
 static struct perf_guest_info_callbacks kvm_guest_cbs = {
-	.is_in_guest		= kvm_is_in_guest,
-	.is_user_mode		= kvm_is_user_mode,
-	.get_guest_ip		= kvm_get_guest_ip,
+	.state			= kvm_guest_state,
+	.get_ip			= kvm_guest_get_ip,
 	.handle_intel_pt_intr	= kvm_handle_intel_pt_intr,
 };
 
diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index e13b0b49fcdf..7352cf002b87 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -413,34 +413,30 @@ int pmu_apic_update(uint32_t val)
 }
 
 /* perf callbacks */
-static int xen_is_in_guest(void)
+static unsigned int xen_guest_state(void)
 {
 	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+	unsigned int state = 0;
 
 	if (!xenpmu_data) {
 		pr_warn_once("%s: pmudata not initialized\n", __func__);
-		return 0;
+		return state;
 	}
 
 	if (!xen_initial_domain() || (xenpmu_data->domain_id >= DOMID_SELF))
-		return 0;
-
-	return 1;
-}
+		return state;
 
-static int xen_is_user_mode(void)
-{
-	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+	state |= PERF_GUEST_ACTIVE;
 
-	if (!xenpmu_data) {
-		pr_warn_once("%s: pmudata not initialized\n", __func__);
-		return 0;
+	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV) {
+		if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER)
+			state |= PERF_GUEST_USER;
+	} else {
+		if (!!(xenpmu_data->pmu.r.regs.cpl & 3))
+			state |= PERF_GUEST_USER;
 	}
 
-	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV)
-		return (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER);
-	else
-		return !!(xenpmu_data->pmu.r.regs.cpl & 3);
+	return state;
 }
 
 static unsigned long xen_get_guest_ip(void)
@@ -456,9 +452,8 @@ static unsigned long xen_get_guest_ip(void)
 }
 
 static struct perf_guest_info_callbacks xen_guest_cbs = {
-	.is_in_guest            = xen_is_in_guest,
-	.is_user_mode           = xen_is_user_mode,
-	.get_guest_ip           = xen_get_guest_ip,
+	.state                  = xen_guest_state,
+	.get_ip			= xen_get_guest_ip,
 };
 
 /* Convert registers from Xen's format to Linux' */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f5a6a2f069ed..8065e5f093f4 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -26,11 +26,13 @@
 # include <asm/local64.h>
 #endif
 
+#define PERF_GUEST_ACTIVE	0x01
+#define PERF_GUEST_USER	0x02
+
 struct perf_guest_info_callbacks {
-	int				(*is_in_guest)(void);
-	int				(*is_user_mode)(void);
-	unsigned long			(*get_guest_ip)(void);
-	void				(*handle_intel_pt_intr)(void);
+	unsigned int			(*state)(void);
+	unsigned long			(*get_ip)(void);
+	unsigned int			(*handle_intel_pt_intr)(void);
 };
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
@@ -1237,6 +1239,8 @@ extern void perf_event_bpf_event(struct bpf_prog *prog,
 				 u16 flags);
 
 extern struct perf_guest_info_callbacks *perf_guest_cbs;
+extern void __weak arch_perf_update_guest_cbs(void);
+
 extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
 extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6fee4a7e88d7..101fa7d0bfda 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6479,9 +6479,18 @@ static void perf_pending_event(struct irq_work *entry)
  */
 struct perf_guest_info_callbacks *perf_guest_cbs;
 
+/* explicitly use __weak to fix duplicate symbol error */
+void __weak arch_perf_update_guest_cbs(void)
+{
+}
+
 int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 {
+	if (WARN_ON_ONCE(perf_guest_cbs))
+		return -EBUSY;
+
 	perf_guest_cbs = cbs;
+	arch_perf_update_guest_cbs();
 	return 0;
 }
 EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 09:45:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 09:45:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145793.268150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcxr-0002TX-C5; Tue, 22 Jun 2021 09:45:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145793.268150; Tue, 22 Jun 2021 09:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvcxr-0002Sa-5z; Tue, 22 Jun 2021 09:45:03 +0000
Received: by outflank-mailman (input) for mailman id 145793;
 Tue, 22 Jun 2021 09:43:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYqY=LQ=intel.com=lingshan.zhu@srs-us1.protection.inumbo.net>)
 id 1lvcwJ-0002Ou-3X
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 09:43:27 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99b49689-23d4-4200-bbf8-82762d7e0451;
 Tue, 22 Jun 2021 09:43:24 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2021 02:43:23 -0700
Received: from vmm_a4_icx.sh.intel.com (HELO localhost.localdomain)
 ([10.239.53.245])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2021 02:43:17 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99b49689-23d4-4200-bbf8-82762d7e0451
IronPort-SDR: u9i8JCbWldif/bC6AokQBV/cZ3NR1mmL50esS0L2nvnjMXJUj35kmfday8Vq8N94nPWul15Q3V
 ywv0f4JAnWzg==
X-IronPort-AV: E=McAfee;i="6200,9189,10022"; a="206965396"
X-IronPort-AV: E=Sophos;i="5.83,291,1616482800"; 
   d="scan'208";a="206965396"
IronPort-SDR: YT0UXablTbHFjVGOUphLdQZI3IoiFPL8yBc2UqJr8G7mpfASBqseVxlrWsQj5BcIMDyWLT+dx2
 b78pMKFrVOjA==
X-IronPort-AV: E=Sophos;i="5.83,291,1616482800"; 
   d="scan'208";a="641600114"
From: Zhu Lingshan <lingshan.zhu@intel.com>
To: peterz@infradead.org,
	pbonzini@redhat.com
Cc: bp@alien8.de,
	seanjc@google.com,
	vkuznets@redhat.com,
	wanpengli@tencent.com,
	jmattson@google.com,
	joro@8bytes.org,
	weijiang.yang@intel.com,
	kan.liang@linux.intel.com,
	ak@linux.intel.com,
	wei.w.wang@intel.com,
	eranian@google.com,
	liuxiangdong5@huawei.com,
	linux-kernel@vger.kernel.org,
	x86@kernel.org,
	kvm@vger.kernel.org,
	like.xu.linux@gmail.com,
	Like Xu <like.xu@linux.intel.com>,
	Will Deacon <will@kernel.org>,
	Marc Zyngier <maz@kernel.org>,
	Guo Ren <guoren@kernel.org>,
	Nick Hu <nickhu@andestech.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu,
	linux-csky@vger.kernel.org,
	linux-riscv@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	Zhu Lingshan <lingshan.zhu@intel.com>
Subject: [PATCH V7 01/18] perf/core: Use static_call to optimize perf_guest_info_callbacks
Date: Tue, 22 Jun 2021 17:42:49 +0800
Message-Id: <20210622094306.8336-2-lingshan.zhu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210622094306.8336-1-lingshan.zhu@intel.com>
References: <20210622094306.8336-1-lingshan.zhu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Like Xu <like.xu@linux.intel.com>

In "struct perf_guest_info_callbacks", the two fields "is_in_guest"
and "is_user_mode" are replaced with a single multiplexed member named
"state", and the "get_guest_ip" field is renamed to "get_ip".

For arm64, xen and kvm/x86, applying DEFINE_STATIC_CALL_RET0 lets the
hot paths drop the perf_guest_cbs NULL checks and invoke the callbacks
through static calls. For arm, csky, nds32, and riscv, only the rename
refactoring is applied.

Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-csky@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Original-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 arch/arm/kernel/perf_callchain.c   | 16 ++++++++-----
 arch/arm64/kernel/perf_callchain.c | 29 ++++++++++++++++++-----
 arch/arm64/kvm/perf.c              | 22 ++++++++---------
 arch/csky/kernel/perf_callchain.c  |  4 ++--
 arch/nds32/kernel/perf_event_cpu.c | 16 ++++++++-----
 arch/riscv/kernel/perf_callchain.c |  4 ++--
 arch/x86/events/core.c             | 38 ++++++++++++++++++++++++------
 arch/x86/events/intel/core.c       |  7 +++---
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/pmu.c                 |  2 +-
 arch/x86/kvm/x86.c                 | 37 ++++++++++++++++-------------
 arch/x86/xen/pmu.c                 | 33 +++++++++++---------------
 include/linux/perf_event.h         | 12 ++++++----
 kernel/events/core.c               |  9 +++++++
 14 files changed, 144 insertions(+), 87 deletions(-)

diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
index 3b69a76d341e..1ce30f86d6c7 100644
--- a/arch/arm/kernel/perf_callchain.c
+++ b/arch/arm/kernel/perf_callchain.c
@@ -64,7 +64,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 {
 	struct frame_tail __user *tail;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -100,7 +100,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 {
 	struct stackframe fr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -111,8 +111,8 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (perf_guest_cbs && perf_guest_cbs->state())
+		return perf_guest_cbs->get_ip();
 
 	return instruction_pointer(regs);
 }
@@ -120,9 +120,13 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int state = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (perf_guest_cbs)
+		state = perf_guest_cbs->state();
+
+	if (perf_guest_cbs && state) {
+		if (state & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 88ff471b0bce..5df2bd5d12ba 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2015 ARM Limited
  */
 #include <linux/perf_event.h>
+#include <linux/static_call.h>
 #include <linux/uaccess.h>
 
 #include <asm/pointer_auth.h>
@@ -99,10 +100,25 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
 }
 #endif /* CONFIG_COMPAT */
 
+DEFINE_STATIC_CALL_RET0(arm64_guest_state, *(perf_guest_cbs->state));
+DEFINE_STATIC_CALL_RET0(arm64_guest_get_ip, *(perf_guest_cbs->get_ip));
+
+void arch_perf_update_guest_cbs(void)
+{
+	static_call_update(arm64_guest_state, (void *)&__static_call_return0);
+	static_call_update(arm64_guest_get_ip, (void *)&__static_call_return0);
+
+	if (perf_guest_cbs && perf_guest_cbs->state)
+		static_call_update(arm64_guest_state, perf_guest_cbs->state);
+
+	if (perf_guest_cbs && perf_guest_cbs->get_ip)
+		static_call_update(arm64_guest_get_ip, perf_guest_cbs->get_ip);
+}
+
 void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 			 struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(arm64_guest_state)()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -149,7 +165,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 {
 	struct stackframe frame;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(arm64_guest_state)()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -160,8 +176,8 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (static_call(arm64_guest_state)())
+		return static_call(arm64_guest_get_ip)();
 
 	return instruction_pointer(regs);
 }
@@ -169,9 +185,10 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int guest = static_call(arm64_guest_state)();
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (guest) {
+		if (guest & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
index 151c31fb9860..8a3387e58f42 100644
--- a/arch/arm64/kvm/perf.c
+++ b/arch/arm64/kvm/perf.c
@@ -13,21 +13,20 @@
 
 DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);
 
-static int kvm_is_in_guest(void)
-{
-        return kvm_get_running_vcpu() != NULL;
-}
-
-static int kvm_is_user_mode(void)
+static unsigned int kvm_guest_state(void)
 {
 	struct kvm_vcpu *vcpu;
+	unsigned int state = 0;
+
+	if (kvm_get_running_vcpu())
+		state |= PERF_GUEST_ACTIVE;
 
 	vcpu = kvm_get_running_vcpu();
 
-	if (vcpu)
-		return !vcpu_mode_priv(vcpu);
+	if (vcpu && !vcpu_mode_priv(vcpu))
+		state |= PERF_GUEST_USER;
 
-	return 0;
+	return state;
 }
 
 static unsigned long kvm_get_guest_ip(void)
@@ -43,9 +42,8 @@ static unsigned long kvm_get_guest_ip(void)
 }
 
 static struct perf_guest_info_callbacks kvm_guest_cbs = {
-	.is_in_guest	= kvm_is_in_guest,
-	.is_user_mode	= kvm_is_user_mode,
-	.get_guest_ip	= kvm_get_guest_ip,
+	.state		= kvm_guest_state,
+	.get_ip		= kvm_get_guest_ip,
 };
 
 int kvm_perf_init(void)
diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
index ab55e98ee8f6..3e42239dd1b2 100644
--- a/arch/csky/kernel/perf_callchain.c
+++ b/arch/csky/kernel/perf_callchain.c
@@ -89,7 +89,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	unsigned long fp = 0;
 
 	/* C-SKY does not support virtualization. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+	if (perf_guest_cbs && perf_guest_cbs->state())
 		return;
 
 	fp = regs->regs[4];
@@ -113,7 +113,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 	struct stackframe fr;
 
 	/* C-SKY does not support virtualization. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		pr_warn("C-SKY does not support perf in guest mode!");
 		return;
 	}
diff --git a/arch/nds32/kernel/perf_event_cpu.c b/arch/nds32/kernel/perf_event_cpu.c
index 0ce6f9f307e6..1dc32ba842ce 100644
--- a/arch/nds32/kernel/perf_event_cpu.c
+++ b/arch/nds32/kernel/perf_event_cpu.c
@@ -1371,7 +1371,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 
 	leaf_fp = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -1481,7 +1481,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 {
 	struct stackframe fr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
@@ -1494,8 +1494,8 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
 	/* However, NDS32 does not support virtualization */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	if (perf_guest_cbs && perf_guest_cbs->state())
+		return perf_guest_cbs->get_ip();
 
 	return instruction_pointer(regs);
 }
@@ -1503,10 +1503,14 @@ unsigned long perf_instruction_pointer(struct pt_regs *regs)
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
 	int misc = 0;
+	unsigned int state = 0;
+
+	if (perf_guest_cbs)
+		state = perf_guest_cbs->state();
 
 	/* However, NDS32 does not support virtualization */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (perf_guest_cbs && state) {
+		if (state & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
index 0bb1854dce83..ea63f70cae5d 100644
--- a/arch/riscv/kernel/perf_callchain.c
+++ b/arch/riscv/kernel/perf_callchain.c
@@ -59,7 +59,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	unsigned long fp = 0;
 
 	/* RISC-V does not support perf in guest mode. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+	if (perf_guest_cbs && perf_guest_cbs->state())
 		return;
 
 	fp = regs->s0;
@@ -79,7 +79,7 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
 	/* RISC-V does not support perf in guest mode. */
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (perf_guest_cbs && perf_guest_cbs->state()) {
 		pr_warn("RISC-V does not support perf in guest mode!");
 		return;
 	}
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8f71dd72ef95..c71af4cfba9b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -90,6 +90,27 @@ DEFINE_STATIC_CALL_NULL(x86_pmu_pebs_aliases, *x86_pmu.pebs_aliases);
  */
 DEFINE_STATIC_CALL_RET0(x86_pmu_guest_get_msrs, *x86_pmu.guest_get_msrs);
 
+DEFINE_STATIC_CALL_RET0(x86_guest_state, *(perf_guest_cbs->state));
+DEFINE_STATIC_CALL_RET0(x86_guest_get_ip, *(perf_guest_cbs->get_ip));
+DEFINE_STATIC_CALL_RET0(x86_guest_handle_intel_pt_intr, *(perf_guest_cbs->handle_intel_pt_intr));
+
+void arch_perf_update_guest_cbs(void)
+{
+	static_call_update(x86_guest_state, (void *)&__static_call_return0);
+	static_call_update(x86_guest_get_ip, (void *)&__static_call_return0);
+	static_call_update(x86_guest_handle_intel_pt_intr, (void *)&__static_call_return0);
+
+	if (perf_guest_cbs && perf_guest_cbs->state)
+		static_call_update(x86_guest_state, perf_guest_cbs->state);
+
+	if (perf_guest_cbs && perf_guest_cbs->get_ip)
+		static_call_update(x86_guest_get_ip, perf_guest_cbs->get_ip);
+
+	if (perf_guest_cbs && perf_guest_cbs->handle_intel_pt_intr)
+		static_call_update(x86_guest_handle_intel_pt_intr,
+				   perf_guest_cbs->handle_intel_pt_intr);
+}
+
 u64 __read_mostly hw_cache_event_ids
 				[PERF_COUNT_HW_CACHE_MAX]
 				[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -2738,7 +2759,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(x86_guest_state)()) {
 		/* TODO: We don't support guest os callchain now */
 		return;
 	}
@@ -2841,7 +2862,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 	struct stack_frame frame;
 	const struct stack_frame __user *fp;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+	if (static_call(x86_guest_state)()) {
 		/* TODO: We don't support guest os callchain now */
 		return;
 	}
@@ -2918,18 +2939,21 @@ static unsigned long code_segment_base(struct pt_regs *regs)
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
 {
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
-		return perf_guest_cbs->get_guest_ip();
+	unsigned long ip = static_call(x86_guest_get_ip)();
+
+	if (likely(!ip))
+		ip = regs->ip + code_segment_base(regs);
 
-	return regs->ip + code_segment_base(regs);
+	return ip;
 }
 
 unsigned long perf_misc_flags(struct pt_regs *regs)
 {
+	unsigned int guest = static_call(x86_guest_state)();
 	int misc = 0;
 
-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
-		if (perf_guest_cbs->is_user_mode())
+	if (guest) {
+		if (guest & PERF_GUEST_USER)
 			misc |= PERF_RECORD_MISC_GUEST_USER;
 		else
 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e28892270c58..430f5743f3ca 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2780,6 +2780,8 @@ static void intel_pmu_reset(void)
 	local_irq_restore(flags);
 }
 
+DECLARE_STATIC_CALL(x86_guest_handle_intel_pt_intr, *(perf_guest_cbs->handle_intel_pt_intr));
+
 static int handle_pmi_common(struct pt_regs *regs, u64 status)
 {
 	struct perf_sample_data data;
@@ -2850,10 +2852,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_TRACE_TOPAPMI_BIT, (unsigned long *)&status)) {
 		handled++;
-		if (unlikely(perf_guest_cbs && perf_guest_cbs->is_in_guest() &&
-			perf_guest_cbs->handle_intel_pt_intr))
-			perf_guest_cbs->handle_intel_pt_intr();
-		else
+		if (!static_call(x86_guest_handle_intel_pt_intr)())
 			intel_pt_interrupt();
 	}
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9c7ced0e3171..f752fcf56d76 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1813,7 +1813,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
-int kvm_is_in_guest(void);
+unsigned int kvm_guest_state(void);
 
 void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 827886c12c16..2dcbd1b30004 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -87,7 +87,7 @@ static void kvm_perf_overflow_intr(struct perf_event *perf_event,
 		 * woken up. So we should wake it, but this is impossible from
 		 * NMI context. Do it from irq work instead.
 		 */
-		if (!kvm_is_in_guest())
+		if (!kvm_guest_state())
 			irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
 		else
 			kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b594275d49b5..9cb1c02d348c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8035,44 +8035,47 @@ static void kvm_timer_init(void)
 DEFINE_PER_CPU(struct kvm_vcpu *, current_vcpu);
 EXPORT_PER_CPU_SYMBOL_GPL(current_vcpu);
 
-int kvm_is_in_guest(void)
+unsigned int kvm_guest_state(void)
 {
-	return __this_cpu_read(current_vcpu) != NULL;
-}
-
-static int kvm_is_user_mode(void)
-{
-	int user_mode = 3;
+	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
+	unsigned int state = 0;
 
-	if (__this_cpu_read(current_vcpu))
-		user_mode = static_call(kvm_x86_get_cpl)(__this_cpu_read(current_vcpu));
+	if (vcpu) {
+		state |= PERF_GUEST_ACTIVE;
+		if (static_call(kvm_x86_get_cpl)(vcpu))
+			state |= PERF_GUEST_USER;
+	}
 
-	return user_mode != 0;
+	return state;
 }
 
-static unsigned long kvm_get_guest_ip(void)
+static unsigned long kvm_guest_get_ip(void)
 {
+	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
 	unsigned long ip = 0;
 
-	if (__this_cpu_read(current_vcpu))
-		ip = kvm_rip_read(__this_cpu_read(current_vcpu));
+	if (vcpu)
+		ip = kvm_rip_read(vcpu);
 
 	return ip;
 }
 
-static void kvm_handle_intel_pt_intr(void)
+static unsigned int kvm_handle_intel_pt_intr(void)
 {
 	struct kvm_vcpu *vcpu = __this_cpu_read(current_vcpu);
 
+	if (!vcpu)
+		return 0;
+
 	kvm_make_request(KVM_REQ_PMI, vcpu);
 	__set_bit(MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI_BIT,
 			(unsigned long *)&vcpu->arch.pmu.global_status);
+	return 1;
 }
 
 static struct perf_guest_info_callbacks kvm_guest_cbs = {
-	.is_in_guest		= kvm_is_in_guest,
-	.is_user_mode		= kvm_is_user_mode,
-	.get_guest_ip		= kvm_get_guest_ip,
+	.state			= kvm_guest_state,
+	.get_ip			= kvm_guest_get_ip,
 	.handle_intel_pt_intr	= kvm_handle_intel_pt_intr,
 };
 
diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index e13b0b49fcdf..7352cf002b87 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -413,34 +413,30 @@ int pmu_apic_update(uint32_t val)
 }
 
 /* perf callbacks */
-static int xen_is_in_guest(void)
+static unsigned int xen_guest_state(void)
 {
 	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+	unsigned int state = 0;
 
 	if (!xenpmu_data) {
 		pr_warn_once("%s: pmudata not initialized\n", __func__);
-		return 0;
+		return state;
 	}
 
 	if (!xen_initial_domain() || (xenpmu_data->domain_id >= DOMID_SELF))
-		return 0;
-
-	return 1;
-}
+		return state;
 
-static int xen_is_user_mode(void)
-{
-	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+	state |= PERF_GUEST_ACTIVE;
 
-	if (!xenpmu_data) {
-		pr_warn_once("%s: pmudata not initialized\n", __func__);
-		return 0;
+	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV) {
+		if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER)
+			state |= PERF_GUEST_USER;
+	} else {
+		if (!!(xenpmu_data->pmu.r.regs.cpl & 3))
+			state |= PERF_GUEST_USER;
 	}
 
-	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV)
-		return (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER);
-	else
-		return !!(xenpmu_data->pmu.r.regs.cpl & 3);
+	return state;
 }
 
 static unsigned long xen_get_guest_ip(void)
@@ -456,9 +452,8 @@ static unsigned long xen_get_guest_ip(void)
 }
 
 static struct perf_guest_info_callbacks xen_guest_cbs = {
-	.is_in_guest            = xen_is_in_guest,
-	.is_user_mode           = xen_is_user_mode,
-	.get_guest_ip           = xen_get_guest_ip,
+	.state                  = xen_guest_state,
+	.get_ip			= xen_get_guest_ip,
 };
 
 /* Convert registers from Xen's format to Linux' */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f5a6a2f069ed..8065e5f093f4 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -26,11 +26,13 @@
 # include <asm/local64.h>
 #endif
 
+#define PERF_GUEST_ACTIVE	0x01
+#define PERF_GUEST_USER	0x02
+
 struct perf_guest_info_callbacks {
-	int				(*is_in_guest)(void);
-	int				(*is_user_mode)(void);
-	unsigned long			(*get_guest_ip)(void);
-	void				(*handle_intel_pt_intr)(void);
+	unsigned int			(*state)(void);
+	unsigned long			(*get_ip)(void);
+	unsigned int			(*handle_intel_pt_intr)(void);
 };
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
@@ -1237,6 +1239,8 @@ extern void perf_event_bpf_event(struct bpf_prog *prog,
 				 u16 flags);
 
 extern struct perf_guest_info_callbacks *perf_guest_cbs;
+extern void __weak arch_perf_update_guest_cbs(void);
+
 extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
 extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6fee4a7e88d7..101fa7d0bfda 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6479,9 +6479,18 @@ static void perf_pending_event(struct irq_work *entry)
  */
 struct perf_guest_info_callbacks *perf_guest_cbs;
 
+/* explicitly use __weak to fix duplicate symbol error */
+void __weak arch_perf_update_guest_cbs(void)
+{
+}
+
 int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 {
+	if (WARN_ON_ONCE(perf_guest_cbs))
+		return -EBUSY;
+
 	perf_guest_cbs = cbs;
+	arch_perf_update_guest_cbs();
 	return 0;
 }
 EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 10:24:53 2021
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
From: Julien Grall <julien@xen.org>
Subject: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
Message-ID: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
Date: Tue, 22 Jun 2021 12:24:36 +0200

Hi Juergen,

As discussed on IRC yesterday, we noticed a couple of splats in 5.13-rc6 
(and stable 5.4) in the evtchn driver:

[    7.581000] ------------[ cut here ]------------
[    7.581899] Interrupt for port 19, but apparently not enabled; per-user 000000004af23acc
[    7.583401] WARNING: CPU: 0 PID: 467 at /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169 evtchn_interrupt+0xd5/0x100
[    7.585583] Modules linked in:
[    7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not tainted 5.13.0-rc6 #240
[    7.587462] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[    7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
[    7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00 e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96 ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
[    7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 00010082
[    7.594636] RAX: 0000000000000000 RBX: ffff888102328c00 RCX: 0000000000000027
[    7.595924] RDX: 0000000000000000 RSI: ffff88817fe18ad0 RDI: ffff88817fe18ad8
[    7.597216] RBP: ffff888108ef8140 R08: 0000000000000000 R09: 0000000000000001
[    7.598522] R10: 0000000000000000 R11: 7075727265746e49 R12: 0000000000000000
[    7.599810] R13: ffffc90040003ec4 R14: ffff8881001b8000 R15: ffff888109b36f80
[    7.601113] FS:  0000000000000000(0000) GS:ffff88817fe00000(0000) knlGS:0000000000000000
[    7.602570] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[    7.603700] CR2: 00007f15b390e368 CR3: 000000010bb04000 CR4: 0000000000050660
[    7.604993] Call Trace:
[    7.605501]  <IRQ>
[    7.605929]  __handle_irq_event_percpu+0x4c/0x330
[    7.606817]  handle_irq_event_percpu+0x32/0xa0
[    7.607670]  handle_irq_event+0x3a/0x60
[    7.608416]  handle_edge_irq+0x9b/0x1f0
[    7.609154]  generic_handle_irq+0x4f/0x60
[    7.609918]  __evtchn_fifo_handle_events+0x195/0x3a0
[    7.610864]  __xen_evtchn_do_upcall+0x66/0xb0
[    7.611693]  __xen_pv_evtchn_do_upcall+0x1d/0x30
[    7.612582]  xen_pv_evtchn_do_upcall+0x9d/0xc0
[    7.613439]  </IRQ>
[    7.613882]  exc_xen_hypervisor_callback+0x8/0x10

This is quite similar to the problem I reported a few months ago (see 
[1]), but this time it is happening with the fifo event channel ABI 
rather than the 2-level one.

I haven't been able to reproduce it reliably so far. But looking at the 
code, I think I have found another potential race after commit

commit b6622798bc50b625a1e62f82c7190df40c1f5b21
Author: Juergen Gross <jgross@suse.com>
Date:   Sat Mar 6 17:18:33 2021 +0100
     xen/events: avoid handling the same event on two cpus at the same time
     When changing the cpu affinity of an event it can happen today that
     (with some unlucky timing) the same event will be handled on the old
     and the new cpu at the same time.
     Avoid that by adding an "event active" flag to the per-event data and
     call the handler only if this flag isn't set.
     Cc: stable@vger.kernel.org
     Reported-by: Julien Grall <julien@xen.org>
     Signed-off-by: Juergen Gross <jgross@suse.com>
     Reviewed-by: Julien Grall <jgrall@amazon.com>
     Link: https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

The evtchn driver uses the lateeoi handlers, so the code to ack an 
event looks like:

do_mask(..., EVT_MASK_REASON_EOI_PENDING);
smp_store_release(&info->is_active, 0);
clear_evtchn(info->evtchn);

The code to handle an interrupt looks like:

clear_link(...);
if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
    if (xchg_acquire(&info->is_active, 1))
        return;
    generic_handle_irq(...);
}

After changing the affinity, an interrupt may be received once on the 
previous vCPU. So, I think the following can happen:

vCPU0                      | vCPU1
                           |
  Receive event            |
                           | change affinity to vCPU1
  clear_link()             |
                           |
        /* The interrupt is re-raised */
                           | receive event
                           |
                           | /* The interrupt is not masked */
  info->is_active = 1      |
  do_mask(...)             |
  info->is_active = 0      |
                           | info->is_active = 1
  clear_evtchn(...)        |
                           | do_mask(...)
                           | info->is_active = 0
                           | clear_evtchn(...)

Does this look plausible to you?

Cheers,

[1] https://www.spinics.net/lists/kernel/msg3771782.html



-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:04:25 2021
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
Message-ID: <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
Date: Tue, 22 Jun 2021 13:04:14 +0200
In-Reply-To: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>

On 22.06.21 12:24, Julien Grall wrote:
> Hi Juergen,
>=20
> As discussed on IRC yesterday, we noticed a couple of splat in 5.13-rc6=20

> (and stable 5.4) in the evtchn driver:
>=20
> [=C2=A0=C2=A0=C2=A0 7.581000] ------------[ cut here ]------------
> [=C2=A0=C2=A0=C2=A0 7.581899] Interrupt for port 19, but apparently not=20
enabled;=20
> per-user 000000004af23acc
> [=C2=A0=C2=A0=C2=A0 7.583401] WARNING: CPU: 0 PID: 467 at=20
> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169=20
> evtchn_interrupt+0xd5/0x100
> [=C2=A0=C2=A0=C2=A0 7.585583] Modules linked in:
> [=C2=A0=C2=A0=C2=A0 7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not t=
ainted=20
> 5.13.0-rc6 #240
> [=C2=A0=C2=A0=C2=A0 7.587462] Hardware name: QEMU Standard PC (Q35 + IC=
H9, 2009), BIOS=20
> rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [=C2=A0=C2=A0=C2=A0 7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
> [=C2=A0=C2=A0=C2=A0 7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00=20
be 1d 00 00 00=20
> e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96=20

> ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
> [=C2=A0=C2=A0=C2=A0 7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 000100=
82
> [=C2=A0=C2=A0=C2=A0 7.594636] RAX: 0000000000000000 RBX: ffff888102328c=
00 RCX:=20
> 0000000000000027
> [=C2=A0=C2=A0=C2=A0 7.595924] RDX: 0000000000000000 RSI: ffff88817fe18a=
d0 RDI:=20
> ffff88817fe18ad8
> [=C2=A0=C2=A0=C2=A0 7.597216] RBP: ffff888108ef8140 R08: 00000000000000=
00 R09:=20
> 0000000000000001
> [=C2=A0=C2=A0=C2=A0 7.598522] R10: 0000000000000000 R11: 7075727265746e=
49 R12:=20
> 0000000000000000
> [=C2=A0=C2=A0=C2=A0 7.599810] R13: ffffc90040003ec4 R14: ffff8881001b80=
00 R15:=20
> ffff888109b36f80
> [=C2=A0=C2=A0=C2=A0 7.601113] FS:=C2=A0 0000000000000000(0000) GS:ffff8=
8817fe00000(0000)=20
> knlGS:0000000000000000
> [=C2=A0=C2=A0=C2=A0 7.602570] CS:=C2=A0 10000e030 DS: 0000 ES: 0000 CR0=
: 0000000080050033
> [=C2=A0=C2=A0=C2=A0 7.603700] CR2: 00007f15b390e368 CR3: 000000010bb040=
00 CR4:=20
> 0000000000050660
> [=C2=A0=C2=A0=C2=A0 7.604993] Call Trace:
> [=C2=A0=C2=A0=C2=A0 7.605501]=C2=A0 <IRQ>
> [=C2=A0=C2=A0=C2=A0 7.605929]=C2=A0 __handle_irq_event_percpu+0x4c/0x33=
0
> [=C2=A0=C2=A0=C2=A0 7.606817]=C2=A0 handle_irq_event_percpu+0x32/0xa0
> [=C2=A0=C2=A0=C2=A0 7.607670]=C2=A0 handle_irq_event+0x3a/0x60
> [=C2=A0=C2=A0=C2=A0 7.608416]=C2=A0 handle_edge_irq+0x9b/0x1f0
> [=C2=A0=C2=A0=C2=A0 7.609154]=C2=A0 generic_handle_irq+0x4f/0x60
> [=C2=A0=C2=A0=C2=A0 7.609918]=C2=A0 __evtchn_fifo_handle_events+0x195/0=
x3a0
> [=C2=A0=C2=A0=C2=A0 7.610864]=C2=A0 __xen_evtchn_do_upcall+0x66/0xb0
> [=C2=A0=C2=A0=C2=A0 7.611693]=C2=A0 __xen_pv_evtchn_do_upcall+0x1d/0x30=

> [=C2=A0=C2=A0=C2=A0 7.612582]=C2=A0 xen_pv_evtchn_do_upcall+0x9d/0xc0
> [=C2=A0=C2=A0=C2=A0 7.613439]=C2=A0 </IRQ>
> [=C2=A0=C2=A0=C2=A0 7.613882]=C2=A0 exc_xen_hypervisor_callback+0x8/0x1=
0
>=20
> This is quite similar to the problem I reported a few months ago (see=20
> [1]) but this time this is happening with fifo rather than 2L.
>=20
> I haven't been able to reproduced it reliably so far. But looking at th=
e=20
> code, I think I have found another potential race after commit
>=20
> commit b6622798bc50b625a1e62f82c7190df40c1f5b21
> Author: Juergen Gross <jgross@suse.com>
> Date:   Sat Mar 6 17:18:33 2021 +0100
>
>     xen/events: avoid handling the same event on two cpus at the same time
>
>     When changing the cpu affinity of an event it can happen today that
>     (with some unlucky timing) the same event will be handled on the old
>     and the new cpu at the same time.
>
>     Avoid that by adding an "event active" flag to the per-event data and
>     call the handler only if this flag isn't set.
>
>     Cc: stable@vger.kernel.org
>     Reported-by: Julien Grall <julien@xen.org>
>     Signed-off-by: Juergen Gross <jgross@suse.com>
>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>     Link: https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
>     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> The evtchn driver will use the lateeoi handlers. So the code to ack
> looks like:
>
> do_mask(..., EVT_MASK_REASON_EOI_PENDING)
> smp_store_release(&info->is_active, 0);
> clear_evtchn(info->evtchn);
>
> The code to handle an interrupt looks like:
>
> clear_link(...)
> if ( evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port) ) {
>     if (xchg_acquire(&info->is_active, 1))
>         return;
>     generic_handle_irq();
> }
>
> After changing the affinity, an interrupt may be received once on the
> previous vCPU. So, I think the following can happen:
>
> vCPU0                        | vCPU1
>                              |
>  Receive event               |
>                              | change affinity to vCPU1
>  clear_link()                |
>                              |
>               /* The interrupt is re-raised */
>                              | receive event
>                              |
>                              | /* The interrupt is not masked */
>  info->is_active = 1         |
>  do_mask(...)                |
>  info->is_active = 0         |
>                              | info->is_active = 1
>  clear_evtchn(...)           |
>                              | do_mask(...)
>                              | info->is_active = 0
>                              | clear_evtchn(...)
>
> Does this look plausible to you?

Yes, it does.

Thanks for the analysis.

So I guess for lateeoi events we need to clear is_active only in
xen_irq_lateeoi()? At first glance this should fix the issue.

What do you think?


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:21:01 2021
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH] di-version update
Date: Tue, 22 Jun 2021 12:20:43 +0100
Message-Id: <20210622112043.18794-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index bc658006..9a405444 100644
--- a/production-config
+++ b/production-config
@@ -91,7 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
-TftpDiVersion_buster 2021-02-10
+TftpDiVersion_buster 2021-06-22
 
 DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210124T203726Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:24:06 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162963-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162963: trouble: blocked/broken/pass/preparing/queued/running
X-Osstest-Failures:
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-examine:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-libvirt:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:build-amd64-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    xen-unstable:build-i386-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-examine:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-pygrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-livepatch:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-migrupgrade:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-examine:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-1:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-2:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-3:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-4:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-5:<none executed>:queued:regression
    xen-unstable:build-amd64-prev:hosts-allocate:running:regression
    xen-unstable:build-armhf-libvirt:hosts-allocate:running:regression
    xen-unstable:build-amd64:hosts-allocate:running:regression
    xen-unstable:build-amd64-xsm:hosts-allocate:running:regression
    xen-unstable:build-amd64-pvops:hosts-allocate:running:regression
    xen-unstable:build-i386-pvops:host-install(4):running:regression
    xen-unstable:build-arm64-xsm:host-install(4):running:regression
    xen-unstable:build-i386:host-install(4):running:regression
    xen-unstable:build-arm64-xsm:syslog-server:running:regression
    xen-unstable:build-i386:syslog-server:running:regression
    xen-unstable:build-i386-pvops:syslog-server:running:regression
    xen-unstable:build-armhf-pvops:host-install(4):running:regression
    xen-unstable:build-armhf-pvops:syslog-server:running:regression
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 11:24:04 +0000

flight 162963 xen-unstable running [real]
http://logs.test-lab.xenproject.org/osstest/logs/162963/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 162533
 build-arm64                   4 host-install(4)        broken REGR. vs. 162533
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162533
 build-i386-prev               4 host-install(4)        broken REGR. vs. 162533
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 162533
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-armhf-armhf-examine        <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-examine        <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-livepatch      <none executed>              queued
 test-amd64-amd64-migrupgrade    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-i386-examine         <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-xtf-amd64-amd64-1          <none executed>              queued
 test-xtf-amd64-amd64-2          <none executed>              queued
 test-xtf-amd64-amd64-3          <none executed>              queued
 test-xtf-amd64-amd64-4          <none executed>              queued
 test-xtf-amd64-amd64-5          <none executed>              queued
 build-amd64-prev              2 hosts-allocate               running
 build-armhf-libvirt           2 hosts-allocate               running
 build-amd64                   2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-i386-pvops              4 host-install(4)              running
 build-arm64-xsm               4 host-install(4)              running
 build-i386                    4 host-install(4)              running
 build-arm64-xsm               3 syslog-server                running
 build-i386                    3 syslog-server                running
 build-i386-pvops              3 syslog-server                running
 build-armhf-pvops             4 host-install(4)              running
 build-armhf-pvops             3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   14 days
Failing since        162556  2021-06-08 22:39:08 Z   13 days   20 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    4 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              running 
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   running 
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          preparing
 build-i386-libvirt                                           queued  
 build-amd64-prev                                             preparing
 build-i386-prev                                              broken  
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            running 
 build-i386-pvops                                             running 
 test-xtf-amd64-amd64-1                                       queued  
 test-xtf-amd64-amd64-2                                       queued  
 test-xtf-amd64-amd64-3                                       queued  
 test-xtf-amd64-amd64-4                                       queued  
 test-xtf-amd64-amd64-5                                       queued  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-amd64-examine                                     queued  
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     queued  
 test-amd64-i386-examine                                      queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   queued  
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 queued  
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-armhf-armhf-examine queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job build-amd64-xtf broken
broken-job test-amd64-amd64-xl-pvshim queued
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job build-arm64-pvops broken
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job build-i386-libvirt queued
broken-job build-i386-prev broken
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-xl queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-examine queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-livepatch queued
broken-job test-amd64-amd64-migrupgrade queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-i386-examine queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-xtf-amd64-amd64-1 queued
broken-job test-xtf-amd64-amd64-2 queued
broken-job test-xtf-amd64-amd64-3 queued
broken-job test-xtf-amd64-amd64-4 queued
broken-job test-xtf-amd64-amd64-5 queued
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-amd64-xtf host-install(4)

Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:24:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 11:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145825.268223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveVz-00064z-3Z; Tue, 22 Jun 2021 11:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145825.268223; Tue, 22 Jun 2021 11:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveVy-00064s-VL; Tue, 22 Jun 2021 11:24:22 +0000
Received: by outflank-mailman (input) for mailman id 145825;
 Tue, 22 Jun 2021 11:24:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveVy-00064Y-IR; Tue, 22 Jun 2021 11:24:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveVy-0001Ne-Bf; Tue, 22 Jun 2021 11:24:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveVy-0003Ue-4b; Tue, 22 Jun 2021 11:24:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lveVy-000687-46; Tue, 22 Jun 2021 11:24:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eZYnSh3nvqMNJ18i7ze7F/+s8ODh6z7foJQpuHLYKco=; b=skppIX4LIKUL3DrlqrweZXQXVQ
	LWnp9RnhWin0TBo5P/pGf7llm7vpSVwINSFl5jJ3gUa/3Smcl/Qc4vnuzCWIuvC8L/u+a42kfxVCi
	CI0i0F64q/6086o/UHXxciLSE/CFCRmob8HYk3fAqLSqdP2CvI0Jl07kk+jhsoILpwN0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162948-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162948: trouble: blocked/broken/pass/queued/running
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-pvops:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-amd64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:build-amd64-xsm:host-install(4):running:regression
    linux-linus:build-amd64-xsm:syslog-server:running:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=a96bfed64c8986d6404e553f18203cae1f5ac7e6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 11:24:22 +0000

flight 162948 linux-linus running [real]
http://logs.test-lab.xenproject.org/osstest/logs/162948/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64                   4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 build-amd64-xsm               4 host-install(4)              running
 build-amd64-xsm               3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 linux                a96bfed64c8986d6404e553f18203cae1f5ac7e6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  325 days
Failing since        152366  2020-08-01 20:49:34 Z  324 days  551 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              running 
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 1688799 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:24:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 11:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145830.268239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWF-0006eg-Dt; Tue, 22 Jun 2021 11:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145830.268239; Tue, 22 Jun 2021 11:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWF-0006eY-Al; Tue, 22 Jun 2021 11:24:39 +0000
Received: by outflank-mailman (input) for mailman id 145830;
 Tue, 22 Jun 2021 11:24:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWD-0006cb-NB; Tue, 22 Jun 2021 11:24:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWD-0001O6-J5; Tue, 22 Jun 2021 11:24:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWD-0003Uz-Cc; Tue, 22 Jun 2021 11:24:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWD-0006Nj-C2; Tue, 22 Jun 2021 11:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pMTSCm802R6UQ7f9LAkTg4m/RdGDRPU/Vr5fwteA2nQ=; b=PYfENT+Ux+m35Q2RzuDNZesZIT
	t356K/97rvll7m9W0kUfwMWYHQyeI45REYH7UYFQpCd8NtWTGLQsNsigfYwnMjEq8/XylN1AuMCi7
	eBaGQoZusOjm/lk8aozzr8J6cu+RL3d2gesRXjvzCxkfmPph7xcq5F+aZSvCWchZudRw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162952: trouble: blocked/broken/pass/preparing/queued
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:test-armhf-armhf-libvirt:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    qemu-mainline:build-amd64-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-pygrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-raw:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    qemu-mainline:build-amd64:hosts-allocate:running:regression
    qemu-mainline:build-amd64-pvops:hosts-allocate:running:regression
    qemu-mainline:build-amd64-xsm:hosts-allocate:running:regression
    qemu-mainline:build-armhf-pvops:hosts-allocate:running:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=0add99ea3ea91af8230e3933ad7826b2da25a44d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 11:24:37 +0000

flight 162952 qemu-mainline running [real]
http://logs.test-lab.xenproject.org/osstest/logs/162952/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-armhf-pvops             2 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                0add99ea3ea91af8230e3933ad7826b2da25a44d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  306 days
Failing since        152659  2020-08-21 14:07:39 Z  304 days  560 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
541 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            preparing
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-xl queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-xsm queued
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)

Not pushing.

(No revision log; it would be 175648 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:24:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 11:24:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145834.268254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWR-0007C7-Rj; Tue, 22 Jun 2021 11:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145834.268254; Tue, 22 Jun 2021 11:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWR-0007C0-OT; Tue, 22 Jun 2021 11:24:51 +0000
Received: by outflank-mailman (input) for mailman id 145834;
 Tue, 22 Jun 2021 11:24:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWQ-0007Ap-Ll; Tue, 22 Jun 2021 11:24:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWQ-0001OH-FY; Tue, 22 Jun 2021 11:24:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWQ-0003VE-8Q; Tue, 22 Jun 2021 11:24:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWQ-0006gR-7w; Tue, 22 Jun 2021 11:24:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dR5ZDHhCT8v+d4uy9UXPonHct+idWiishUyoNPv6Ol8=; b=kWSZ+irmoBWqfKi4crWz1xfxXO
	fP4s0ZlQJDCrZa5rEnBZ4s8eQvbfD/QFfOTtGMP3FPhbtdUekK6i/uGFwCuSNm+sP7HDTOg32Zrb+
	6vGj4WFUAn9WqKqmZhbsazOC3MiQpI8+yviS87hP4UVX772tsTW1cyJ1eEsrAqKXZqDI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162958-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162958: regressions - trouble: blocked/broken/fail/pass/preparing/queued
X-Osstest-Failures:
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-armhf-pvops:<job status>:broken:regression
    libvirt:build-i386:<job status>:broken:regression
    libvirt:build-i386-pvops:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-i386:host-install(4):broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-armhf-pvops:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    libvirt:build-amd64-pvops:hosts-allocate:running:regression
    libvirt:build-amd64-xsm:hosts-allocate:running:regression
    libvirt:build-amd64:hosts-allocate:running:regression
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b1112f6c0f4bd1244cd2913bfa5f1e0b6548049a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 11:24:50 +0000

flight 162958 libvirt running [real]
http://logs.test-lab.xenproject.org/osstest/logs/162958/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-i386                    4 host-install(4)        broken REGR. vs. 151777
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 build-amd64-pvops             2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-amd64                   2 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b1112f6c0f4bd1244cd2913bfa5f1e0b6548049a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  347 days
Failing since        151818  2020-07-11 04:18:52 Z  346 days  338 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-libvirt                                     queued  
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-libvirt queued
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)

Not pushing.

(No revision log; it would be 63010 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:25:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 11:25:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145837.268268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWa-0007br-7C; Tue, 22 Jun 2021 11:25:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145837.268268; Tue, 22 Jun 2021 11:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lveWa-0007bd-20; Tue, 22 Jun 2021 11:25:00 +0000
Received: by outflank-mailman (input) for mailman id 145837;
 Tue, 22 Jun 2021 11:24:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWZ-0007am-6q; Tue, 22 Jun 2021 11:24:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWZ-0001OS-1M; Tue, 22 Jun 2021 11:24:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWY-0003VN-OA; Tue, 22 Jun 2021 11:24:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lveWY-0006xt-Ne; Tue, 22 Jun 2021 11:24:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B+P8w1ZmIGclDoeens/X/uml0fqOEiRy8YnpjxFW9ys=; b=CA7VILXGFzs2M0YtAjKnhv3+cA
	ByOBBP1xNt+hIHP4pX9JYPvugwoKg9P5jVUhM90IJO641T45tXPKP9dYcK3HDd4bMoV2W8J0ZsjZg
	Ad+NPjailWCtyuaWByGkW1w/QseinX3jGoh7lCfJuiwr/rYvqaXdOf82mhr/J+GTV7kM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162956: trouble: blocked/broken/preparing/queued
X-Osstest-Failures:
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:<none executed>:queued:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    ovmf:build-amd64-xsm:hosts-allocate:running:regression
    ovmf:build-amd64:hosts-allocate:running:regression
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=6cfeeb71c43173d657d86d4a38ed655b0fc5f277
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 11:24:58 +0000

flight 162956 ovmf running [real]
http://logs.test-lab.xenproject.org/osstest/logs/162956/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 162359
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 162359
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 162359
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 162359
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 build-amd64-xsm               2 hosts-allocate               running
 build-amd64                   2 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)               blocked  n/a

version targeted for testing:
 ovmf                 6cfeeb71c43173d657d86d4a38ed655b0fc5f277
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   18 days
Failing since        162368  2021-06-04 15:42:59 Z   17 days   38 attempts
Testing same since   162938  2021-06-21 11:33:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              preparing
 build-i386-xsm                                               broken  
 build-amd64                                                  preparing
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-libvirt queued
broken-job build-amd64-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-step build-i386 host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

(No revision log; it would be 2193 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 11:38:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 11:38:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145854.268281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvejJ-0001I5-Cw; Tue, 22 Jun 2021 11:38:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145854.268281; Tue, 22 Jun 2021 11:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvejJ-0001Hy-9y; Tue, 22 Jun 2021 11:38:09 +0000
Received: by outflank-mailman (input) for mailman id 145854;
 Tue, 22 Jun 2021 11:38:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lvejI-0001Hs-HD
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 11:38:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lvejI-0001ds-CZ
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 11:38:08 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lvejI-00028M-Bd
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 11:38:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lvejE-0003Xy-Q3; Tue, 22 Jun 2021 12:38:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=GuDijgEM/gxBLTHzy3G1iP3qZzBVCe1tc04lGY29Sy8=; b=N30Dc9PXAtoVz2r4lnlERSbagP
	lgEaBLt4pyahPO6AHlk5byJwz7obAzWwJUfOaTCKxQHu2+VF61LLDkxD5KaXbj8fQ0SrKjoj6Yk1f
	lpqCv7gcuQ1dD0FqHYmm6aWoD5ylcrp/zOvnLPtm5mQGMZAAI/pByl+RYBhW5Dg8nL0A=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24785.52124.476428.674630@mariner.uk.xensource.com>
Date: Tue, 22 Jun 2021 12:38:04 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau Monné <roger.pau@citrix.com>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 3/5] libxencall: introduce variant of xencall2() returning
 long
In-Reply-To: <c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
References: <edaf04ec-335f-a156-34c4-5c0385cba08b@suse.com>
	<c7f93b66-bc4d-708a-6936-e0eac9e36cfa@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH 3/5] libxencall: introduce variant of xencall2() returning long"):
> Some hypercalls, memory-op in particular, can return values requiring
> more than 31 bits to represent. Hence the underlying layers need to make
> sure they won't truncate such values.

Thanks for this.

All 5 patches:

Acked-by: Ian Jackson <iwj@xenproject.org>

Nit:

> While adding the new function to the map file, I noticed the stray
> xencall6 there. I'm taking the liberty to remove it at this occasion. If
> such a function would ever appear, it shouldn't lane in version 1.0.
                                                  ^^^^

Typo for "land", I think.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 12:21:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 12:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145869.268293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvfP9-0006HF-71; Tue, 22 Jun 2021 12:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145869.268293; Tue, 22 Jun 2021 12:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvfP9-0006H8-3a; Tue, 22 Jun 2021 12:21:23 +0000
Received: by outflank-mailman (input) for mailman id 145869;
 Tue, 22 Jun 2021 12:21:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lvfP7-0006H2-UO
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 12:21:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lvfP6-0002LX-OE; Tue, 22 Jun 2021 12:21:20 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lvfP6-0005P7-Ey; Tue, 22 Jun 2021 12:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=M0Y3trJ0hBFNWH8kNw4EvDo6aslDCpP5XqPwDmKmclg=; b=nHLtiNmGFSPdcnX0RLBYD156cX
	0SKd/Ms6obc2pi1YLItOiufKYUrDzYprkklLMDzNahgdHqV1egpJ66GCHLe4GwxsIGwksdEFVlrga
	kcYllL7BCT6kQvHHXxNOzK6Fq/zBvVVR5qsNmK77DFFlGSXgNOcOnNptTG39ob2dnSOM=;
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
Date: Tue, 22 Jun 2021 14:21:18 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 22/06/2021 13:04, Juergen Gross wrote:
> On 22.06.21 12:24, Julien Grall wrote:
>> Hi Juergen,
>>
>> As discussed on IRC yesterday, we noticed a couple of splats in 5.13-rc6
>> (and stable 5.4) in the evtchn driver:
>>
>> [    7.581000] ------------[ cut here ]------------
>> [    7.581899] Interrupt for port 19, but apparently not enabled;
>> per-user 000000004af23acc
>> [    7.583401] WARNING: CPU: 0 PID: 467 at 
>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169 
>> evtchn_interrupt+0xd5/0x100
>> [    7.585583] Modules linked in:
>> [    7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not tainted 
>> 5.13.0-rc6 #240
>> [    7.587462] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), 
>> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>> [    7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
>> [    7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00
>> e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96
>> ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
>> [    7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 00010082
>> [    7.594636] RAX: 0000000000000000 RBX: ffff888102328c00 RCX: 
>> 0000000000000027
>> [    7.595924] RDX: 0000000000000000 RSI: ffff88817fe18ad0 RDI: 
>> ffff88817fe18ad8
>> [    7.597216] RBP: ffff888108ef8140 R08: 0000000000000000 R09: 
>> 0000000000000001
>> [    7.598522] R10: 0000000000000000 R11: 7075727265746e49 R12: 
>> 0000000000000000
>> [    7.599810] R13: ffffc90040003ec4 R14: ffff8881001b8000 R15: 
>> ffff888109b36f80
>> [    7.601113] FS:  0000000000000000(0000) GS:ffff88817fe00000(0000) 
>> knlGS:0000000000000000
>> [    7.602570] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [    7.603700] CR2: 00007f15b390e368 CR3: 000000010bb04000 CR4: 
>> 0000000000050660
>> [    7.604993] Call Trace:
>> [    7.605501]  <IRQ>
>> [    7.605929]  __handle_irq_event_percpu+0x4c/0x330
>> [    7.606817]  handle_irq_event_percpu+0x32/0xa0
>> [    7.607670]  handle_irq_event+0x3a/0x60
>> [    7.608416]  handle_edge_irq+0x9b/0x1f0
>> [    7.609154]  generic_handle_irq+0x4f/0x60
>> [    7.609918]  __evtchn_fifo_handle_events+0x195/0x3a0
>> [    7.610864]  __xen_evtchn_do_upcall+0x66/0xb0
>> [    7.611693]  __xen_pv_evtchn_do_upcall+0x1d/0x30
>> [    7.612582]  xen_pv_evtchn_do_upcall+0x9d/0xc0
>> [    7.613439]  </IRQ>
>> [    7.613882]  exc_xen_hypervisor_callback+0x8/0x10
>>
>> This is quite similar to the problem I reported a few months ago (see 
>> [1]) but this time this is happening with fifo rather than 2L.
>>
>> I haven't been able to reproduce it reliably so far. But looking at
>> the code, I think I have found another potential race after commit
>>
>> commit b6622798bc50b625a1e62f82c7190df40c1f5b21
>> Author: Juergen Gross <jgross@suse.com>
>> Date:   Sat Mar 6 17:18:33 2021 +0100
>>     xen/events: avoid handling the same event on two cpus at the same 
>> time
>>     When changing the cpu affinity of an event it can happen today that
>>     (with some unlucky timing) the same event will be handled on the old
>>     and the new cpu at the same time.
>>     Avoid that by adding an "event active" flag to the per-event data and
>>     call the handler only if this flag isn't set.
>>     Cc: stable@vger.kernel.org
>>     Reported-by: Julien Grall <julien@xen.org>
>>     Signed-off-by: Juergen Gross <jgross@suse.com>
>>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>>     Link: https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
>>     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>
>> The evtchn driver will use the lateeoi handlers. So the code to ack 
>> looks like:
>>
>> do_mask(..., EVT_MASK_REASON_EOI_PENDING)
>> smp_store_release(&info->is_active, 0);
>> clear_evtchn(info->evtchn);
>>
>> The code to handle an interrupt looks like:
>>
>> clear_link(...)
>> if ( evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
>>    if (xchg_acquire(&info->is_active, 1))
>>      return;
>>    generic_handle_irq();
>> }
>>
>> After changing the affinity, an interrupt may be received once on the 
>> previous vCPU. So, I think the following can happen:
>>
>> vCPU0                      | vCPU1
>>                            |
>>   Receive event            |
>>                            | change affinity to vCPU1
>>   clear_link()             |
>>                            |
>>            /* The interrupt is re-raised */
>>                            | receive event
>>                            |
>>                            | /* The interrupt is not masked */
>>   info->is_active = 1      |
>>   do_mask(...)             |
>>   info->is_active = 0      |
>>                            | info->is_active = 1
>>   clear_evtchn(...)        |
>>                            | do_mask(...)
>>                            | info->is_active = 0
>>                            | clear_evtchn(...)
>>
>> Does this look plausible to you?
> 
> Yes, it does.
> 
> Thanks for the analysis.
> 
> So I guess for lateeoi events we need to clear is_active only in
> xen_irq_lateeoi()? At a first glance this should fix the issue.

It should work and would be quite neat. But I believe clear_evtchn() 
would have to stay in the ack helper to avoid losing interrupts.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 12:23:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 12:23:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145874.268304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvfR2-0006un-Ju; Tue, 22 Jun 2021 12:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145874.268304; Tue, 22 Jun 2021 12:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvfR2-0006ug-Fy; Tue, 22 Jun 2021 12:23:20 +0000
Received: by outflank-mailman (input) for mailman id 145874;
 Tue, 22 Jun 2021 12:23:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=P8ns=LQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lvfR1-0006uV-0S
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 12:23:19 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ace219bf-7234-42e6-820e-de6de085a632;
 Tue, 22 Jun 2021 12:23:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3CD032196D;
 Tue, 22 Jun 2021 12:23:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 04F75118DD;
 Tue, 22 Jun 2021 12:23:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 9hi+OjTW0WCLEAAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 22 Jun 2021 12:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ace219bf-7234-42e6-820e-de6de085a632
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624364597; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aIIzQ0v9nG6I6oy8FGefgn9agrr42ubns73HDhdxzRQ=;
	b=d7zxMFyycD3VL6IuXR5GGObn1Bg4LSATrgFDPTtfk5s9KqS1++FQ2+HTxzUR4Lmbxmdjrj
	KonZv5mZ9UkbUmWjqSEivGFQlL5MFr8X8fLc3bnDHk0GwnA6d6zTGR60k1WPPrJfviCSaZ
	iid2W8qTeLCYqf1t2Qh1EAO+3HLXiqk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624364597; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aIIzQ0v9nG6I6oy8FGefgn9agrr42ubns73HDhdxzRQ=;
	b=d7zxMFyycD3VL6IuXR5GGObn1Bg4LSATrgFDPTtfk5s9KqS1++FQ2+HTxzUR4Lmbxmdjrj
	KonZv5mZ9UkbUmWjqSEivGFQlL5MFr8X8fLc3bnDHk0GwnA6d6zTGR60k1WPPrJfviCSaZ
	iid2W8qTeLCYqf1t2Qh1EAO+3HLXiqk=
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
 <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <8dd2aa85-e3d3-2fe1-86dc-145bbadf921b@suse.com>
Date: Tue, 22 Jun 2021 14:23:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Bf2ph7ybTNBsMgYU5NfRTZjCxgZxGn4K3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Bf2ph7ybTNBsMgYU5NfRTZjCxgZxGn4K3
Content-Type: multipart/mixed; boundary="ieGDTbGIV81r0dLyI4LUjpigFd7g1moSf";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
Message-ID: <8dd2aa85-e3d3-2fe1-86dc-145bbadf921b@suse.com>
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
 <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
In-Reply-To: <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>

--ieGDTbGIV81r0dLyI4LUjpigFd7g1moSf
Content-Type: multipart/mixed;
 boundary="------------975C2C0ACCBF84BF28806F70"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------975C2C0ACCBF84BF28806F70
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 22.06.21 14:21, Julien Grall wrote:
> Hi Juergen,
> 
> On 22/06/2021 13:04, Juergen Gross wrote:
>> On 22.06.21 12:24, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> As discussed on IRC yesterday, we noticed a couple of splats in 5.13-rc6
>>> (and stable 5.4) in the evtchn driver:
>>>
>>> [    7.581000] ------------[ cut here ]------------
>>> [    7.581899] Interrupt for port 19, but apparently not enabled;
>>> per-user 000000004af23acc
>>> [    7.583401] WARNING: CPU: 0 PID: 467 at 
>>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169 
>>> evtchn_interrupt+0xd5/0x100
>>> [    7.585583] Modules linked in:
>>> [    7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not tainted 
>>> 5.13.0-rc6 #240
>>> [    7.587462] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), 
>>> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>>> [    7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
>>> [    7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00
>>> e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96
>>> ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
>>> [    7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 00010082
>>> [    7.594636] RAX: 0000000000000000 RBX: ffff888102328c00 RCX: 
>>> 0000000000000027
>>> [    7.595924] RDX: 0000000000000000 RSI: ffff88817fe18ad0 RDI: 
>>> ffff88817fe18ad8
>>> [    7.597216] RBP: ffff888108ef8140 R08: 0000000000000000 R09: 
>>> 0000000000000001
>>> [    7.598522] R10: 0000000000000000 R11: 7075727265746e49 R12: 
>>> 0000000000000000
>>> [    7.599810] R13: ffffc90040003ec4 R14: ffff8881001b8000 R15: 
>>> ffff888109b36f80
>>> [    7.601113] FS:  0000000000000000(0000) GS:ffff88817fe00000(0000) 
>>> knlGS:0000000000000000
>>> [    7.602570] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [    7.603700] CR2: 00007f15b390e368 CR3: 000000010bb04000 CR4: 
>>> 0000000000050660
>>> [    7.604993] Call Trace:
>>> [    7.605501]  <IRQ>
>>> [    7.605929]  __handle_irq_event_percpu+0x4c/0x330
>>> [    7.606817]  handle_irq_event_percpu+0x32/0xa0
>>> [    7.607670]  handle_irq_event+0x3a/0x60
>>> [    7.608416]  handle_edge_irq+0x9b/0x1f0
>>> [    7.609154]  generic_handle_irq+0x4f/0x60
>>> [    7.609918]  __evtchn_fifo_handle_events+0x195/0x3a0
>>> [    7.610864]  __xen_evtchn_do_upcall+0x66/0xb0
>>> [    7.611693]  __xen_pv_evtchn_do_upcall+0x1d/0x30
>>> [    7.612582]  xen_pv_evtchn_do_upcall+0x9d/0xc0
>>> [    7.613439]  </IRQ>
>>> [    7.613882]  exc_xen_hypervisor_callback+0x8/0x10
>>>
>>> This is quite similar to the problem I reported a few months ago (see 
>>> [1]) but this time this is happening with fifo rather than 2L.
>>>
>>> I haven't been able to reproduce it reliably so far. But looking at 
>>> the code, I think I have found another potential race after commit
>>>
>>> commit b6622798bc50b625a1e62f82c7190df40c1f5b21
>>> Author: Juergen Gross <jgross@suse.com>
>>> Date:   Sat Mar 6 17:18:33 2021 +0100
>>>     xen/events: avoid handling the same event on two cpus at the same 
>>> time
>>>     When changing the cpu affinity of an event it can happen today that
>>>     (with some unlucky timing) the same event will be handled on the old
>>>     and the new cpu at the same time.
>>>     Avoid that by adding an "event active" flag to the per-event data 
>>> and
>>>     call the handler only if this flag isn't set.
>>>     Cc: stable@vger.kernel.org
>>>     Reported-by: Julien Grall <julien@xen.org>
>>>     Signed-off-by: Juergen Gross <jgross@suse.com>
>>>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>>>     Link: 
>>> https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
>>>     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> The evtchn driver will use the lateeoi handlers. So the code to ack 
>>> looks like:
>>>
>>> do_mask(..., EVT_MASK_REASON_EOI_PENDING)
>>> smp_store_release(&info->is_active, 0);
>>> clear_evtchn(info->evtchn);
>>>
>>> The code to handle an interrupt looks like:
>>>
>>> clear_link(...)
>>> if ( evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
>>>    if (xchg_acquire(&info->is_active, 1))
>>>      return;
>>>    generic_handle_irq();
>>> }
>>>
>>> After changing the affinity, an interrupt may be received once on the 
>>> previous vCPU. So, I think the following can happen:
>>>
>>> vCPU0                      | vCPU1
>>>                            |
>>>   Receive event            |
>>>                            | change affinity to vCPU1
>>>   clear_link()             |
>>>                            |
>>>            /* The interrupt is re-raised */
>>>                            | receive event
>>>                            |
>>>                            | /* The interrupt is not masked */
>>>   info->is_active = 1      |
>>>   do_mask(...)             |
>>>   info->is_active = 0      |
>>>                            | info->is_active = 1
>>>   clear_evtchn(...)        |
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 | do_mask(...)
>>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 | info->is_active =
=3D 0
>>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 | clear_evtchn(...)
>>>
>>> Does this look plausible to you?
>>
>> Yes, it does.
>>
>> Thanks for the analysis.
>>
>> So I guess for lateeoi events we need to clear is_active only in
>> xen_irq_lateeoi()? At a first glance this should fix the issue.
>
> It should work and would be quite neat. But, I believe clear_evtchn()
> would have to stick in the ack helper to avoid losing interrupts.

I agree.

Preparing a patch ...


Juergen




From xen-devel-bounces@lists.xenproject.org Tue Jun 22 13:34:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 13:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145882.268322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvgXt-00053d-7G; Tue, 22 Jun 2021 13:34:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145882.268322; Tue, 22 Jun 2021 13:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvgXt-00053W-1O; Tue, 22 Jun 2021 13:34:29 +0000
Received: by outflank-mailman (input) for mailman id 145882;
 Tue, 22 Jun 2021 13:34:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BK9I=LQ=gmail.com=rm.skakun@srs-us1.protection.inumbo.net>)
 id 1lvgXr-00053O-5u
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 13:34:27 +0000
Received: from mail-lf1-x131.google.com (unknown [2a00:1450:4864:20::131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b1a62bf-a1d7-4d72-9b82-de0ce5bc0d9a;
 Tue, 22 Jun 2021 13:34:25 +0000 (UTC)
Received: by mail-lf1-x131.google.com with SMTP id f30so36061424lfj.1
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 06:34:25 -0700 (PDT)
Received: from localhost ([178.151.124.169])
 by smtp.gmail.com with ESMTPSA id l27sm2790583ljb.90.2021.06.22.06.34.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 22 Jun 2021 06:34:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b1a62bf-a1d7-4d72-9b82-de0ce5bc0d9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=VpDilbWjWRuwm2Y0aj2xxaAxAvje8bBd6jPUCV3pOXQ=;
        b=fMPS6tmNxJ4WajUXO/WI+lAB/0vRCw/g40VCYzUAplHrCiShgYL4/pOWQt53jgsCrf
         grGcdhQjl09EnXls7tfkPf8x+Ekama0Z44pP8gN135yakp6jJ7WWKkoYaJkmCL6OFLUP
         nzOd+zztLMvLPOmX/8NCU+Nk6hN+3hpThfa+ReQZO3uaXdRMX34IHf9sOOvRgnLkv9G6
         alLI1QKAtA9yVaF6DuEsmIzeEi1WOWG1sjS4RoyJ/m5jVWPmmDB72ai9HnffPCdZGRoU
         45nWuPRiQFyqomMSwjaZkjlqTkynSdsmN1sLTW+F35Ra3uJpZOrA6cn28uqXYgBBKVYK
         yvuw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=VpDilbWjWRuwm2Y0aj2xxaAxAvje8bBd6jPUCV3pOXQ=;
        b=MGgbTrGzkdzUk4d323vcR3FbiFbKGZgIadhdaeihjmwZzIG4pTq0VB6SClZQB0XeTE
         UEzDBnaUv1tmd222TGNLYmOFQ/yYXSEJ+wDeAeWR1/YBH66TGQLaVxjpD+pSbdXWFxUi
         EWNm0Mmu0AlthbhnPk9FZB5N5AoDiXOwC0HLXAooObekS4OnLqcWk1wkch+2QxK/ej2S
         kXsOFpZAysfbPGlxTF9IfAHlZv9G56taqmPTPfjW2sHPTIGmbM5+7Fje+EQuAYjrxF+Q
         QVBMUT+X6O2I6O05XzbOWchxRi2GteC43aXQM8qCvoa1kU5kei0zViIybDYU/E4KTSBq
         Y2cA==
X-Gm-Message-State: AOAM532cvuJXMiLmTocLUerHq3ZoFlF6hTeqsOgUctGIF1Vq7SLsyW4k
	/IN8Z5j135PSDjWa+MHL+Ok=
X-Google-Smtp-Source: ABdhPJyfiEMp266a5Vc5gXIKMSmpRt+IMoLHqhjEwQPGGcLOElP7Q+Iu0ooyEO/TCg6nVx1MPndmrQ==
X-Received: by 2002:ac2:5192:: with SMTP id u18mr2841831lfi.619.1624368864308;
        Tue, 22 Jun 2021 06:34:24 -0700 (PDT)
From: Roman Skakun <rm.skakun@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <rm.skakun@gmail.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses
Date: Tue, 22 Jun 2021 16:34:14 +0300
Message-Id: <20210622133414.132754-1-rm.skakun@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210616154436.GA7619@lst.de>
References: <20210616154436.GA7619@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit fixes the incorrect conversion from cpu_addr to a page
address in cases when we get a virtual address that was allocated in
the vmalloc range. In such cases virt_to_page() cannot convert the
address properly and returns an incorrect page address.

Detect such cases and obtain the page address using vmalloc_to_page()
instead.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
Hey!
Thanks for the suggestions, Christoph!
I updated the patch according to your advice.
But I'm surprised that nobody has caught this problem in the common
code before. It looks a bit strange to me.


 kernel/dma/ops_helpers.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..782728d8a393 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,14 @@
  */
 #include <linux/dma-map-ops.h>
 
+static struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:15:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145890.268338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvi73-0005z9-IS; Tue, 22 Jun 2021 15:14:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145890.268338; Tue, 22 Jun 2021 15:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvi73-0005z2-En; Tue, 22 Jun 2021 15:14:53 +0000
Received: by outflank-mailman (input) for mailman id 145890;
 Tue, 22 Jun 2021 15:14:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=P8ns=LQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lvi72-0005yw-66
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:14:52 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e33ddd3-985b-4d0e-aa69-25eecf1231d5;
 Tue, 22 Jun 2021 15:14:51 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F08061FD36;
 Tue, 22 Jun 2021 15:14:49 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id B4EF9118DD;
 Tue, 22 Jun 2021 15:14:49 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id dSDtKmn+0WCbdwAALh3uQQ
 (envelope-from <jgross@suse.com>); Tue, 22 Jun 2021 15:14:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e33ddd3-985b-4d0e-aa69-25eecf1231d5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624374889; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2/ON+NfgYqKAIMgYQmTDPsmxKkiGS3qSwNNgrMI4ut0=;
	b=MDBvo4yqAImjsv9U+ttGl3hiLmvCTq5/CqD+BWdSSD3GRhIiz6N9m1qrPqAbpTb1fuBXkh
	x919OSDfn/KtVrJlq5pyejjy7W5wqxVfZwyCpbH/YLVRHX2BxQIsslysf7u3K2lRtg5ska
	5OfYY+I9ntovLPpei6nS7vlDCzHBiUg=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624374889; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2/ON+NfgYqKAIMgYQmTDPsmxKkiGS3qSwNNgrMI4ut0=;
	b=MDBvo4yqAImjsv9U+ttGl3hiLmvCTq5/CqD+BWdSSD3GRhIiz6N9m1qrPqAbpTb1fuBXkh
	x919OSDfn/KtVrJlq5pyejjy7W5wqxVfZwyCpbH/YLVRHX2BxQIsslysf7u3K2lRtg5ska
	5OfYY+I9ntovLPpei6nS7vlDCzHBiUg=
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
 <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <5d88a82e-d237-7803-7b50-897e857f2fbd@suse.com>
Date: Tue, 22 Jun 2021 17:14:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="s2NUokYSTqiR8e9giFHRsLXUh4hQPOnGN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--s2NUokYSTqiR8e9giFHRsLXUh4hQPOnGN
Content-Type: multipart/mixed; boundary="hDCgbE4sBykbQxxgwMdSjlnRJltYZjgyU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
Message-ID: <5d88a82e-d237-7803-7b50-897e857f2fbd@suse.com>
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
 <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
In-Reply-To: <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>

--hDCgbE4sBykbQxxgwMdSjlnRJltYZjgyU
Content-Type: multipart/mixed;
 boundary="------------BAC75389B48160B12DD27824"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BAC75389B48160B12DD27824
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 22.06.21 14:21, Julien Grall wrote:
> Hi Juergen,
>
> On 22/06/2021 13:04, Juergen Gross wrote:
>> On 22.06.21 12:24, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> As discussed on IRC yesterday, we noticed a couple of splats in
>>> 5.13-rc6 (and stable 5.4) in the evtchn driver:
>>>
>>> [    7.581000] ------------[ cut here ]------------
>>> [    7.581899] Interrupt for port 19, but apparently not enabled;
>>> per-user 000000004af23acc
>>> [    7.583401] WARNING: CPU: 0 PID: 467 at
>>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169
>>> evtchn_interrupt+0xd5/0x100
>>> [    7.585583] Modules linked in:
>>> [    7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not tainted
>>> 5.13.0-rc6 #240
>>> [    7.587462] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009),
>>> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>>> [    7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
>>> [    7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00
>>> e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96
>>> ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
>>> [    7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 00010082
>>> [    7.594636] RAX: 0000000000000000 RBX: ffff888102328c00 RCX: 0000000000000027
>>> [    7.595924] RDX: 0000000000000000 RSI: ffff88817fe18ad0 RDI: ffff88817fe18ad8
>>> [    7.597216] RBP: ffff888108ef8140 R08: 0000000000000000 R09: 0000000000000001
>>> [    7.598522] R10: 0000000000000000 R11: 7075727265746e49 R12: 0000000000000000
>>> [    7.599810] R13: ffffc90040003ec4 R14: ffff8881001b8000 R15: ffff888109b36f80
>>> [    7.601113] FS:  0000000000000000(0000) GS:ffff88817fe00000(0000)
>>> knlGS:0000000000000000
>>> [    7.602570] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [    7.603700] CR2: 00007f15b390e368 CR3: 000000010bb04000 CR4: 0000000000050660
>>> [    7.604993] Call Trace:
>>> [    7.605501]  <IRQ>
>>> [    7.605929]  __handle_irq_event_percpu+0x4c/0x330
>>> [    7.606817]  handle_irq_event_percpu+0x32/0xa0
>>> [    7.607670]  handle_irq_event+0x3a/0x60
>>> [    7.608416]  handle_edge_irq+0x9b/0x1f0
>>> [    7.609154]  generic_handle_irq+0x4f/0x60
>>> [    7.609918]  __evtchn_fifo_handle_events+0x195/0x3a0
>>> [    7.610864]  __xen_evtchn_do_upcall+0x66/0xb0
>>> [    7.611693]  __xen_pv_evtchn_do_upcall+0x1d/0x30
>>> [    7.612582]  xen_pv_evtchn_do_upcall+0x9d/0xc0
>>> [    7.613439]  </IRQ>
>>> [    7.613882]  exc_xen_hypervisor_callback+0x8/0x10
>>>
>>> This is quite similar to the problem I reported a few months ago (see
>>> [1]) but this time this is happening with fifo rather than 2L.
>>>
>>> I haven't been able to reproduce it reliably so far. But looking at
>>> the code, I think I have found another potential race after commit
>>>
>>> commit b6622798bc50b625a1e62f82c7190df40c1f5b21
>>> Author: Juergen Gross <jgross@suse.com>
>>> Date:   Sat Mar 6 17:18:33 2021 +0100
>>>
>>>     xen/events: avoid handling the same event on two cpus at the same time
>>>
>>>     When changing the cpu affinity of an event it can happen today that
>>>     (with some unlucky timing) the same event will be handled on the old
>>>     and the new cpu at the same time.
>>>
>>>     Avoid that by adding an "event active" flag to the per-event data and
>>>     call the handler only if this flag isn't set.
>>>
>>>     Cc: stable@vger.kernel.org
>>>     Reported-by: Julien Grall <julien@xen.org>
>>>     Signed-off-by: Juergen Gross <jgross@suse.com>
>>>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>>>     Link: https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
>>>     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> The evtchn driver will use the lateeoi handlers. So the code to ack
>>> looks like:
>>>
>>> do_mask(..., EVT_MASK_REASON_EOI_PENDING);
>>> smp_store_release(&info->is_active, 0);
>>> clear_evtchn(info->evtchn);
>>>
>>> The code to handle an interrupt looks like:
>>>
>>> clear_link(...);
>>> if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
>>>     if (xchg_acquire(&info->is_active, 1))
>>>         return;
>>>     generic_handle_irq();
>>> }
>>>
>>> After changing the affinity, an interrupt may be received once on the
>>> previous vCPU. So, I think the following can happen:
>>>
>>> vCPU0                    | vCPU1
>>>                          |
>>>   Receive event          |
>>>                          | change affinity to vCPU1
>>>   clear_link()           |
>>>                          |
>>>         /* The interrupt is re-raised */
>>>                          | receive event
>>>                          |
>>>                          | /* The interrupt is not masked */
>>>   info->is_active = 1    |
>>>   do_mask(...)           |
>>>   info->is_active = 0    |
>>>                          | info->is_active = 1
>>>   clear_evtchn(...)      |
>>>                          | do_mask(...)
>>>                          | info->is_active = 0
>>>                          | clear_evtchn(...)
>>>
>>> Does this look plausible to you?
>>
>> Yes, it does.
>>
>> Thanks for the analysis.
>>
>> So I guess for lateeoi events we need to clear is_active only in
>> xen_irq_lateeoi()? At a first glance this should fix the issue.
>
> It should work and would be quite neat. But, I believe clear_evtchn()
> would have to stick in the ack helper to avoid losing interrupts.
>

Could you try the attached patch, please? Only compile tested.


Juergen

--------------BAC75389B48160B12DD27824
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-events-reset-active-flag-for-lateeoi-events-late.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0001-xen-events-reset-active-flag-for-lateeoi-events-late.pa";
 filename*1="tch"

From 593e81cfb5d4a15e5e420111565e7f6019da9b72 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
Date: Tue, 22 Jun 2021 17:05:03 +0200
Subject: [PATCH] xen/events: reset active flag for lateeoi events later

In order to avoid a race condition for user events when changing
cpu affinity reset the active flag only when EOI-ing the event.

This is working fine as all user events are lateeoi events.

Reported-by: Julien Grall <julien@xen.org>
Fixes: b6622798bc50b62 ("xen/events: avoid handling the same event on two cpus at the same time")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 7bbfd58958bc..26836546e1cd 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -642,6 +642,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
+	smp_store_release(&info->is_active, 0);
 	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
@@ -1883,7 +1884,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		event_handler_exit(info);
+		clear_evtchn(evtchn);
 	}
 }
 
-- 
2.26.2


/scvx4awB+n3mWkoF9ke+VeDRYJ9EqOEbsFHYSZGgw==
=vvOW
-----END PGP SIGNATURE-----

--s2NUokYSTqiR8e9giFHRsLXUh4hQPOnGN--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:16:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145894.268348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvi8c-0006YL-V0; Tue, 22 Jun 2021 15:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145894.268348; Tue, 22 Jun 2021 15:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvi8c-0006YE-Rk; Tue, 22 Jun 2021 15:16:30 +0000
Received: by outflank-mailman (input) for mailman id 145894;
 Tue, 22 Jun 2021 15:16:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvi8b-0006Y8-UI
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:16:29 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37a1fb50-c36a-4f4c-8049-5a9bf9deaffa;
 Tue, 22 Jun 2021 15:16:28 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2175.outbound.protection.outlook.com [104.47.17.175])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-vjjNn9FcPiGvTo-uhLjzsg-1; Tue, 22 Jun 2021 17:16:26 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Tue, 22 Jun
 2021 15:16:23 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 15:16:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0256.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::28) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Tue, 22 Jun 2021 15:16:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37a1fb50-c36a-4f4c-8049-5a9bf9deaffa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624374987;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=5BzWviYvjy6fMCruQkIalETkiPlSQqLj37o2AeMY5ns=;
	b=TzwLXwnfwsb3g2TRHYmL1AFe6yGeNc5B996t92XNpvArMl0RR2wbuU7YLlB1hJ+Aj1Z1Ft
	5nreXNvICwDSqg9DMf7RgHu6xDAXuxSqlhV8tk338Yao2idZOO+ab7UoVTn/pwf8+csW4w
	mKpz6yrJfniz2GcVuqiXXU/ngwNXblc=
X-MC-Unique: vjjNn9FcPiGvTo-uhLjzsg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U5JkWPuV7XEm6WzlRcKfCsSTv+8JrNsq9m0fAxRm4ZvRLyr+OoA9hdZ6pbZ/UepQVkVkTZzFO0rhp2ZKmMn7HtgcCjuOG2Y9M6QywmtiszaaMBKztd45lGrL0U9TEOnhnCU3+xwc45yrzD3BtVyIjSMFVSjFHAKjqM9YSPf5qQ/TFenixAK21W6LRovXfYMpL1DhhAMefXaXzx4b2pX/9e1efbdPS//2s0GChLKb2vG/lvqNJN0zE4op/PWk5CWxEpBcoEQ0SdmxG6zVBUJGgDKt6tX6etxl5Vf3xcVtyO7O6hAKceqcG3baNyzjCcqftbajTccKSjIAF+nex95T/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5BzWviYvjy6fMCruQkIalETkiPlSQqLj37o2AeMY5ns=;
 b=g2lUkJRa4D030nYh90wVpAXUywPggepsh/4lZEf35I+cl3PJavNRL/4+kVDZsD1BPZaxU30O1VlEMckKY1ylaKLgfaPrmx95rS6uhinNibpDVDoIENaSv2BY/T2yh+hTA1wkpSJwTD2f3C9j3wkdfJBO4VeUViYJg++4myTB9Rp/WCNemqxUNY2AeNV1WbdvnNaI8sviW4f+nTFSFjBa7/ihbGIXZUV1FzilIu+ffjUG0nxQTShHofxr4kMw5TBaA5Ug/XiEiM4BN22vjMgfARop2yRv/FyXmmux1zUx11xQcj0XfRoLsrm+QEIn2qm+JLatIHQ9LuXtUAbfLgWRLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/6] allow xc_domain_maximum_gpfn() to observe full GFN
 value
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
Message-ID: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Date: Tue, 22 Jun 2021 17:16:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0256.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::28)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 515c00f5-1a0a-479f-7b75-08d93590b20f
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB244549DF17EE810F4D077286B3099@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pZx/2rC1VUljRodRK62qt8tRlvJ/9edJbW8w+SAB9MMXNMDBI3vV/1sLZ/Fz3sGMKOVkA6vB/r7pN8PsBsujE5bTgEGUPphryHJUxa0SsZzdNczVNNxAarXCK5pD+I0D0ba663MXFhlhBUVQqeoQFq/DU0qcCce+tfPHxB3ickPU5tQ5QUZdGYGdkTD2UCo0En8/Xre6/j0DWdH0BWd3/z7BXFV+HttuzPO6NZaXaYcWBgjK4Vdi+vRY3lJ3C0dLiZKXyu8k1vwNCpqdVgC6LcbcOA3tKhJNaumgZ7rDwuIDM6lmKGCFPNhE4SPXq6H+VTEvTOoysbb3b2qAuqQ2Ep/QxCKqyvJV7xG5tWQW+hZWsCm4YZ2cha3aTqUVRStILTjpJCIs52R5kTQVAss9bC1GakuZV1MtFeqw1KdTnqYlEcRMN8AVF2tFkaOn+mYThQUalIMb9+uIQttbWOgBYr9EYjS5FPbnIjcxKNuhUADRlfXDlsUi4g+UyjxAAbF8qG1Epol0qmr+04gx2yRsAxDYtuMM2XwLOJs7yfCWp79MLwJJh3lY7H7lm/vHZDslpyY8qEd7Yqlv/lSTqyR1Jt7Pt689ABQWYux1vBaJxBOq1S0xXVBGmoRu2mnSCrFGWyUXkEhBFJvSs1DNdAhMpvpTDNF6BUHokmJMhymSK/U8ejkUeC6hSPbtzRTv9gjx
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39850400004)(376002)(346002)(396003)(366004)(136003)(54906003)(2906002)(5660300002)(66556008)(66476007)(66946007)(186003)(4326008)(26005)(107886003)(16526019)(6486002)(478600001)(31686004)(36756003)(8936002)(6916009)(8676002)(86362001)(31696002)(83380400001)(2616005)(956004)(16576012)(316002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TFkvSkNZNVlmOHV4WkpFd1ZkYmV2Z0hOU0ltcTBheUJ5VjNKMTI5WkVENmQx?=
 =?utf-8?B?VkxFSlBFZm0ySHJwcXdTdC9QWFFqTTN5bE9OOEFDemU3OUtwU0I0VnNlMk9l?=
 =?utf-8?B?YmJSTXB1TWYrSFMzbUZhTisxSmxvUDJNbzc0Z2d4MUZLcTVCUU41ZG5SUDlo?=
 =?utf-8?B?alEranZvWHV4K2FmWWI4RFJ0NFVpVGM0KzRvajlSSWZKMFp2b1JTakNFdjdu?=
 =?utf-8?B?c2VIZHpGRnRtdUQrRm9FclBtNjlrOHNjUWdwdi9kSVhVRFlPM2dzRytjLy90?=
 =?utf-8?B?UVZDbnMxSlc5c2JwVzRXUGJSY2MwMjVSSUdwaTg2N2doaFJkcXFnNElDb2JR?=
 =?utf-8?B?NUxlZ3BnNTJ5c1drVVB0VktheXFodG1rdThYTzJIaTBlbGZPaFQxTFNrUFYx?=
 =?utf-8?B?WDNEYXFpTU93d1E0TTVvVFlITlNDcU85NCtSOEdUb01Mdi81bXczT2dWYU1O?=
 =?utf-8?B?R0Mwa0pyVFJra1VXQThDWnQrTkJTSW5OSThsQjRRMUk2THJJdG9nMVE2OXdv?=
 =?utf-8?B?MFh1bTJWR2gxNFkwTDZmZGNPMk5VbGQ2SXNmZFh6TXVCeXJIQzkrZ3hzWjVI?=
 =?utf-8?B?ZFV5MDl6MGFTaUl1eWp1TDJVdnY0WE9vYlg2TFVLc05mWVJkV29zcWJjQitX?=
 =?utf-8?B?V1NNaTI1VkZjRXh0aVlITExQc3VDME1NNmNGRnNXMXNEWThkelRIOXlXdmtG?=
 =?utf-8?B?bFRESUEzNGpVTEx2Ym5KZEtRa1lnRkkyYm13R09OdEJHMXNtaS9aS1grM21J?=
 =?utf-8?B?MFdacVN3Y2hzdTh0Y3Bob0xmcWhqc2ZhQTNnUEJxY3ZSMWovSStzSHArelpk?=
 =?utf-8?B?RngzVG8vZk5PVXRTZDhjRVY4a3M3eDRRclVnRThNWHI0TEg1eEpORmo4Yyti?=
 =?utf-8?B?RmdaM2hYclQ3OTBldFBtcXhEYXprSXM3cG1ydWZDRk1HWStjUlpUalcwcmI1?=
 =?utf-8?B?VkJjbkkxRlExQitNM3lmTFk1c3ZkZFBtM1ZDTk1jSDcwdVJRY3g2WHRhRlBG?=
 =?utf-8?B?N0pZNmdQTWpTd1UzVE55cVIrSVcranAzM29lbFcyczI4MnQ0OU81RElJa3o5?=
 =?utf-8?B?TXlObTlGQW5QMjhSd3RRTTY5TU1MMUNWL1pBN2NhNThKWEhaQi9reTVNZ1JW?=
 =?utf-8?B?cForNU9GbWJUeW01eW5TbzNWT204c25YUm8rMCt5M0dSQXZUNU5qQ3hMZW5Q?=
 =?utf-8?B?b2gybHA2VnNGM24rSTFLbnc4d2ZYVVBlU3Q3RmlRdkdsZU90eXdaLzRTWEtC?=
 =?utf-8?B?eGZrV1hvWHBYUGNhbGhRcjEvWEwzaWd2Qkk2eGpYNEtDOXJjSWFZc3NqRlh0?=
 =?utf-8?B?SU9wZUNVTnkvOG5helBEQnA1c3ZvbnlCMm5zWjBYRGxidWMzenBLRG9hNlEw?=
 =?utf-8?B?UStub1EyakdCRFErU1lGdWlVR2tteHlkN0R6ZE1LSE1xUTAxSFJ1THNyNnJN?=
 =?utf-8?B?SmE1NFdvdEtUNUFXdUlLOU92K1IvZm1pcjRWd2czNHZldVlqaEJadGhOT1Ar?=
 =?utf-8?B?OU1OZDh0cFJlRjd4V1pRemlFSUpSc3QrQTlwZXZSQTJ4V0I5ZURwRm1rZXN4?=
 =?utf-8?B?M2NEQXp2Njg2OCtMYnNsaXNqcjQ3NXc2L2JYbll2ejhPdkY0anJDWEVnSUR2?=
 =?utf-8?B?YnBxZ1N4cE9zMjk4YktDR1ptUVA3dGwwRVR2TUo0YlhrMDNuSkxGRGhqRGJt?=
 =?utf-8?B?V0I5S2ZzSk9UQ1dsVFZ1dUg3UXRZT2JRRUhFa01aR1JuQmcxeGVSWVd3SEwx?=
 =?utf-8?Q?YiiKfc/8fWBg8RHBzqS9O/I1yCDWSbxXt5UCmkH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 515c00f5-1a0a-479f-7b75-08d93590b20f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 15:16:23.0106
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wFaE1SJq4JvfozJE5S3rQr8E2z7oML3pOsKeEWVnS2CUjO/5xvGtNJB4ZMu82qIYCbVpwd6IDZN7Xl8EPjYAGQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

The present remaining osstest failures are due to truncation of the GFN:
the hypercall return value is passed back through the ioctl() return
value (on Linux and Solaris), which is "int", and the same limitation
exists in some further internal interfaces (osdep_hypercall(),
xencall<N>()). Some of the memory-op sub-ops, like the one involved
here, may pass back values which don't fit into "int".

Different effects can be observed with a 32- and 64-bit tool stack,
each causing one test to fail. The changes here will only deal with
the truncation resulting when sizeof(int) < sizeof(long), i.e. only on
64-bit. For the 32-bit tool stack case to work in such a situation,
yet uglier hackery would be needed. But even if the full value got
passed back, we'd then hit:

#ifdef __i386__
    /* Very large domains (> 1TB) will exhaust virtual address space. */
    if ( nr_pfns > 0x0fffffff )
    {
        errno = E2BIG;
        PERROR("Cannot save this big a guest");
        return -1;
    }
#endif

in xg_sr_save_x86_hvm.c:x86_hvm_setup() (and there's a similar check
on the restore path).

I wonder to what extent a guest property can legitimately cause an osstest
push to be prevented by causing a failure like this one. And of course
I'm also puzzled by the ovmf change having managed to make it through
its push gate.

Note that I can't tell at this point whether there are further
issues, as I've not actually tried the ovmf case. I could easily see
OOM issues arising there once the full value gets used for
setting up the p2m monitoring during migration. Or processing might
then take overly long.

See the individual patches for changes in v2, all of which are to
address review feedback.

1: x86/HVM: wire up multicalls
2: libxencall: osdep_hypercall() should return long
3: libxencall: introduce variant of xencall2() returning long
4: libxc: use multicall for memory-op on Linux (and Solaris)
5: libxencall: drop bogus mentioning of xencall6()
6: libxc: make xc_domain_maximum_gpfn() endianness-agnostic

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:18:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145898.268360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviA4-0007GF-F2; Tue, 22 Jun 2021 15:18:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145898.268360; Tue, 22 Jun 2021 15:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviA4-0007G8-Bz; Tue, 22 Jun 2021 15:18:00 +0000
Received: by outflank-mailman (input) for mailman id 145898;
 Tue, 22 Jun 2021 15:17:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lviA2-0007G0-Ho
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:17:58 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b8ddb81-b846-47de-a8ff-f1fbe46eb514;
 Tue, 22 Jun 2021 15:17:57 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2050.outbound.protection.outlook.com [104.47.10.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-18-aivuLulkPmu8ZRcb5NoxNg-1; Tue, 22 Jun 2021 17:17:55 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Tue, 22 Jun
 2021 15:17:53 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 15:17:53 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0001.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Tue, 22 Jun 2021 15:17:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b8ddb81-b846-47de-a8ff-f1fbe46eb514
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624375076;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nSjrFG92yCM/r3hkNcHh022+l3pV9APtRcoqPp/lD/Y=;
	b=IkEjoS6lCOeRUxmeBCxGgXARVvBsQ5TFn8KlS97wRgl9mawNAu0Q0M82CflV9IddfcBkls
	5l8yhYikdXKBu9L4RvlKtuJrRVMIxJ/7BG/GccOep5XmkNaCr+fTw9KSR3n+lwl+Lxo3SO
	0tMUOci+GO1XT/lxd6pMNySpCo9/H50=
X-MC-Unique: aivuLulkPmu8ZRcb5NoxNg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PVZDEdr8mieCB4Jwfs3AvNwXJwzL8vthjqfImqvJPFXMM1zrVHsTelDFWWE6Z2u3mL1a4qnNMIv7klZ/4oNV+7dZ53tOmIpDM33W/lwmzow3uQbNlJNy8yLWgHn+uJvA4tIhu93flmkVvJyRFtB5S4wLM4W11nNqk6P4gJyTWAKGIVQs9g4v6Pwq9QfL7BLnBOHVmmgOplsXOu7WLx66hcPiDgErWTZQaNCHs1DjazhM9m2XmOPgE5S+VSQv1fls5Sb4Davc9BOGFA4LIw0RT8uaGsOV4MuZTxTISrgzH6CScwMZwPeY99aV5XJT40hrbLarwqrAIsX0J1XEZduvug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nSjrFG92yCM/r3hkNcHh022+l3pV9APtRcoqPp/lD/Y=;
 b=FOKLzwF3Ojlz16x4m5P0IwDsBQIsjZs4cSmKyO4VGNUY9y0avxQ4m/FR5aOqV5/UDhVPY8R0vD7SZFL+8gedHTsTkpHtpce10lSdN2V+84YOhH+t+a6pQxVtWQB9PrbBrGFD7TwGvPKkLgMXzLmphDfXzTR77R9H5G7rboU29lagg0HFkaTJja5mETjbMluyVF5ChnfOQAfp6ovLDGc3WgN0tOab99035xv0iZDdcl8yLQp/dTRw4l1d2FxClYaul8nMEKA23BWKAOhhPMHxRl060Sok/uH1JZFVCaukd7tpSchLungiFz8pI7Ur20clHrNNx5a0thRpvAXRzwpHww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 1/6] x86/HVM: wire up multicalls
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <a96ff7d7-f594-4b86-e9fa-6b1a99edc992@suse.com>
Date: Tue, 22 Jun 2021 17:17:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0001.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::10) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f37d13f-1e4b-4364-dfa0-08d93590e7df
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB244593A8373E27A07ACAE81BB3099@VI1PR0401MB2445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LuJxCbLLbTKPVs4vUXAyavKerV0mbZFm5QhOGhF5TpWlzY+WcwMXVgyrURlO4xek773Ob3Al1ysqqeIJGo+QIx6VnGaDwoWP2K8ZzdL4lla8X/VWcy8cSZl02yYeuLmQyhXV2DJS1Uf+binZupKi0DVzkHOVZD9XBP26tqEnbJx/UraSGpZ08RYVaRWf0p0dLgyOtdP4wRhiUJOH3ceuepC5PXuEe9hm/npyhpjuwcbIhUOzVFvN7BkX795fmQdwhCcrmnwlgUQtZzzVBtS6bdCoS87rC3ajBUDISZS9pI/vgzBH7DrAEBkzrk2cGislOvjebstynPlDiRtMsYNq9zZKn2nxWBATMZGPhhpWgq59FjajkfWCcqDnBwhUb0lqpBVSafnhWC1/pYlQMV282RynQ0lsZkXh7OkQZAKhi+0y/ovesrKNkAQvVCcN/nr8VwMsGbhcaIWOSkZ3QJyN5jx6w7Onck4JYJi6k1XqIvR2H5NPq1LS17gcXmqamEnpBHf4VjJZRGgAF5sICba1e1YnWD/hD2jvfMLsdFy5wFzDk4rszpd5IqSD4wgG/sWqp0m8qeLHKvKlWOapmGqs8+G8o9puP8KGLQIarXT8EvAdGenBTiRgGDQMslA5EE7IL3snB1v4tGwAuZeEDrxWoUKUk6cbdHUaiXxaOfr3KBHSuLB95IKDa595Iygt7mhY
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39850400004)(376002)(346002)(396003)(366004)(136003)(54906003)(2906002)(5660300002)(66556008)(66476007)(66946007)(186003)(4326008)(26005)(107886003)(16526019)(6486002)(478600001)(31686004)(36756003)(8936002)(6916009)(8676002)(86362001)(31696002)(83380400001)(2616005)(956004)(16576012)(316002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?amN5R0x1NFQxMjBiZ3EzcThaanhTMnBCVlVRSlNpdi9qYlpHV0RhQ0E5UXI3?=
 =?utf-8?B?VFdheDN6NlhkUlFaL2MxbVp2K3VEdnFCNjQ2VFlaRGRGTVoyL2x2a3JFWWZi?=
 =?utf-8?B?Rll6QzU1ajJJZnliYllmT3pjdTk0dDdGNVpYcDV2NVB1bEM5eGcxekNOM3BN?=
 =?utf-8?B?bW93ZG9QUUpESVU4V0hTeFFpN2s1ZldVY3Vtdmk5RnJCNGgxQTFIR0RKVnBj?=
 =?utf-8?B?TDNvQmtvYXJhTVN2cU9ZVFJBUzBTT2VwT3VnVlFvUERBb2hXRzVKaXJTRmtw?=
 =?utf-8?B?MzF2KzNpZ2VRdFo5dmRlRGFiZyt5Ry8xbjlkNTNvK1hpQ2plSkVCZ3BKMXZT?=
 =?utf-8?B?VXNEd1g5S0FubTlUbkVBUjNxWFczTGtQNGFDUnlWeW1vcVR0andac3B0OUQz?=
 =?utf-8?B?OGJQRXhrRFY5Mjl4VXl1dmpFNTV4Q3lVVitteEYxSE1qMEUwZzA1K0FLRjVU?=
 =?utf-8?B?THRxRW1nUXRKemhqQ0w4b0hGSDNNcnZqNkhZWFY3TlRodzhOdjlUUXFHS2x0?=
 =?utf-8?B?ZjVDY20ycmpCVklQL1hZbzJhcUVzNnlyenphSEptYkEyTE0ydFV2bTFETy9k?=
 =?utf-8?B?Rjl3K2RpZ0Erd3U4aUJ0NHg5c1czY2hvZFVWM2kxREJxYk5zWGJwak0waXJN?=
 =?utf-8?B?a1NJT2V4YWxPYkNLZUhqcXhYL01HcEsyWGFBZkJqYlhjWWNuQTJxVm8vQ0w2?=
 =?utf-8?B?L001RzVpMkZmbGdQTkJSRzE5cXQzQ1ZGdmI0TzZxTzU3eEw3Z1JYQUlKandp?=
 =?utf-8?B?enQrY2lqSGh4RmJWWElmblR1ay9CMkw5N2tNNEUvU0NlY0cvWmhoMEpDZVVT?=
 =?utf-8?B?bklXNmxKS3g5NG9EUmNhQXdkMDBqdENjUmsvM1NROXdtNENPdlF6SGhmVXFZ?=
 =?utf-8?B?aXVXR0djcHE5a21mYXNuR2p3L1BROWRoa0tvZDV0cEFBazRlbXdSZUlaekp4?=
 =?utf-8?B?TWJ6VlFLMXNIK20yWHgrMDdGdVU5Qm1DWFRJcjAzRDN3MFZld0R1ZUdiRmFO?=
 =?utf-8?B?VEhKenFhaDU3d1lvQldUeXNBSnRPT1BjRjgzTDdZWm5lMit2aXU4cHpucGE4?=
 =?utf-8?B?RFNvQnhqeThSRUJKQWl3eGFscEw1ZjdDandCSVF1VnZ3Tm4yZEg3OTBjaHVQ?=
 =?utf-8?B?eldSVEZ6OUpFUHBWaFlrYWNobzdMOGhjK3pzMjY3L1RwdXYzTVRBLzEwN3kx?=
 =?utf-8?B?bHRES3FLRzkrdmx3VDduMW9jdHB3emxhdnc3Q1RBUTY2L0R3N0tXcEFNa3Vt?=
 =?utf-8?B?anM1bVJ5YlpENnlhSkhpRW5oaTZUbzUvNDlURDkyUnVac0MyREJTVTQxTUFT?=
 =?utf-8?B?ZmFXUFBQZjVDa1Bhb0M1YzhZREdZR282czJQdms0SFdGTWNaaDRUVHRCa2Nk?=
 =?utf-8?B?ZTdGTm5BcDRMZUVtSEJnZXkwS2h5b2NIeDIwQWhXUHY2UTYvWnNkY1lPcVRY?=
 =?utf-8?B?M2phTUZkZEJoUExmNUszUGdNcXg5ZUd4UVJrZmh1NTRyU2tkYmdUMVFwN2VO?=
 =?utf-8?B?RFJmQkI0N3FoVFFaeDBlek9XOFpZa1N0em9GeG9QYkJkR04yMWlrY3k0NFBI?=
 =?utf-8?B?Tk14Rm5iVUZadHZ3SlBHTUVUZGk4NFFKN3ZuelBqNmVZL0JSai81REhDK3Jt?=
 =?utf-8?B?bDlBbEdmZ0RDbWFSdS9VdDFoZVBVTmtCT3dhZ1kvTGJQUnkrMU5SVTR6VGFG?=
 =?utf-8?B?S1o0cjRKNTZIclV6cDVyS0czMnBwYUNBK3FlMVlta2FyTEk5K3hkRUJwRklr?=
 =?utf-8?Q?XS1dwwg4YAKeh5hjKwwP5UJkNh3+glFGmN1qZc5?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f37d13f-1e4b-4364-dfa0-08d93590e7df
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 15:17:53.2566
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3v+jbFkIYEiJgdK063B+qRuVF90l/23xtsqgGAQFChQxMQwh/nvvzkEwYx1FOV9x5ThkIpsLf7e1nFUPy1NODg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

To be able to use them from, in particular, the tool stack, they need to
be supported for all guest types. Note that xc_resource_op() already
uses them, so it would not work without this change on a PVH Dom0.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Begrudgingly acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Ian Jackson <iwj@xenproject.org>

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/emulate.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/viridian.h>
+#include <asm/multicall.h>
 
 #include <public/hvm/hvm_op.h>
 #include <public/hvm/params.h>
@@ -125,6 +126,7 @@ static const struct {
     hypercall_fn_t *native, *compat;
 } hvm_hypercall_table[] = {
     HVM_CALL(memory_op),
+    COMPAT_CALL(multicall),
 #ifdef CONFIG_GRANT_TABLE
     HVM_CALL(grant_table_op),
 #endif
@@ -334,6 +336,39 @@ int hvm_hypercall(struct cpu_user_regs *
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
 
+enum mc_disposition hvm_do_multicall_call(struct mc_state *state)
+{
+    struct vcpu *curr = current;
+    hypercall_fn_t *func = NULL;
+
+    if ( hvm_guest_x86_mode(curr) == 8 )
+    {
+        struct multicall_entry *call = &state->call;
+
+        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
+            func = array_access_nospec(hvm_hypercall_table, call->op).native;
+        if ( func )
+            call->result = func(call->args[0], call->args[1], call->args[2],
+                                call->args[3], call->args[4], call->args[5]);
+        else
+            call->result = -ENOSYS;
+    }
+    else
+    {
+        struct compat_multicall_entry *call = &state->compat_call;
+
+        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
+            func = array_access_nospec(hvm_hypercall_table, call->op).compat;
+        if ( func )
+            call->result = func(call->args[0], call->args[1], call->args[2],
+                                call->args[3], call->args[4], call->args[5]);
+        else
+            call->result = -ENOSYS;
+    }
+
+    return !hvm_get_cpl(curr) ? mc_continue : mc_preempt;
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -20,6 +20,7 @@
  */
 
 #include <xen/hypercall.h>
+#include <asm/multicall.h>
 
 #ifdef CONFIG_COMPAT
 #define ARGS(x, n)                              \
@@ -273,13 +274,18 @@ int hypercall_xlat_continuation(unsigned
     return rc;
 }
 
-#ifndef CONFIG_PV
-/* Stub for arch_do_multicall_call */
-enum mc_disposition arch_do_multicall_call(struct mc_state *mc)
+enum mc_disposition arch_do_multicall_call(struct mc_state *state)
 {
+    const struct domain *currd = current->domain;
+
+    if ( is_pv_domain(currd) )
+        return pv_do_multicall_call(state);
+
+    if ( is_hvm_domain(currd) )
+        return hvm_do_multicall_call(state);
+
     return mc_exit;
 }
-#endif
 
 /*
  * Local variables:
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/hypercall.h>
 #include <xen/nospec.h>
 #include <xen/trace.h>
+#include <asm/multicall.h>
 #include <irq_vectors.h>
 
 #ifdef CONFIG_PV32
@@ -245,7 +246,7 @@ void pv_hypercall(struct cpu_user_regs *
     perfc_incr(hypercalls);
 }
 
-enum mc_disposition arch_do_multicall_call(struct mc_state *state)
+enum mc_disposition pv_do_multicall_call(struct mc_state *state)
 {
     struct vcpu *curr = current;
     unsigned long op;
--- /dev/null
+++ b/xen/include/asm-x86/multicall.h
@@ -0,0 +1,12 @@
+/******************************************************************************
+ * asm-x86/multicall.h
+ */
+
+#ifndef __ASM_X86_MULTICALL_H__
+#define __ASM_X86_MULTICALL_H__
+
+#include <xen/multicall.h>
+
+typeof(arch_do_multicall_call) pv_do_multicall_call, hvm_do_multicall_call;
+
+#endif /* __ASM_X86_MULTICALL_H__ */



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:18:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145903.268370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviAX-0007oj-Ne; Tue, 22 Jun 2021 15:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145903.268370; Tue, 22 Jun 2021 15:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviAX-0007oc-Kf; Tue, 22 Jun 2021 15:18:29 +0000
Received: by outflank-mailman (input) for mailman id 145903;
 Tue, 22 Jun 2021 15:18:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lviAW-0007oS-BJ
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:18:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35ea69e4-2842-4ddd-b153-ae9677eb296b;
 Tue, 22 Jun 2021 15:18:26 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2104.outbound.protection.outlook.com [104.47.17.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-34-vbilEjMmNLC95g-EmX-spg-2; Tue, 22 Jun 2021 17:18:24 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6478.eurprd04.prod.outlook.com (2603:10a6:803:12a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Tue, 22 Jun
 2021 15:18:21 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 15:18:21 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0018.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.7 via Frontend Transport; Tue, 22 Jun 2021 15:18:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35ea69e4-2842-4ddd-b153-ae9677eb296b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624375105;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/bbmcF2dLfMyovvu1hUGHMNb0R327+N+VEXYHAXs66E=;
	b=QlLPqon1+UwlZlCMubS4XV9pvMt7zJY6JJhB3IXcUcQt5waaQZ3ERfB7Ig0f/CwtQ9Kq6T
	A38rL+fpxqPATscipoyoZO5WKgxUpFbZikYNtDoubotq1iF7T8CDfBXBuPeYdehZOPtp6S
	Vo4jjJhgasYUDUYNibcNamNlAVaIw8U=
X-MC-Unique: vbilEjMmNLC95g-EmX-spg-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 2/6] libxencall: osdep_hypercall() should return long
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <0a912030-8ead-62e8-863b-01f296782bfb@suse.com>
Date: Tue, 22 Jun 2021 17:18:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0018.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::11) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aedf138b-9f0b-4ea4-f992-08d93590f8bb
X-MS-TrafficTypeDiagnostic: VE1PR04MB6478:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6478500EB016897A4956E4DBB3099@VE1PR04MB6478.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aedf138b-9f0b-4ea4-f992-08d93590f8bb
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 15:18:21.5246
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Pz6XHDyYunzyGJf3POskmJsz8b5DnS8ABM3mLJbKDOdHWCihk6eiMfV+63DdGtw9vdOwme/YFGSF5m1aAENdEw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6478

Some hypercalls, memory-op in particular, can return values requiring
more than 31 bits to represent. Hence the underlying layers need to make
sure they won't truncate such values. (Note that for Solaris the
function also gets renamed, to match the other OSes.)

Since Linux and Solaris merely propagate ioctl()'s return value, this
change is benign there. In other words, there's an actual effect here
only for the BSDs and MiniOS, though even then further adjustments are
needed at the xencall<N>() level.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Ian Jackson <iwj@xenproject.org>

--- a/tools/libs/call/freebsd.c
+++ b/tools/libs/call/freebsd.c
@@ -62,7 +62,7 @@ int osdep_xencall_close(xencall_handle *
     return close(fd);
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     int ret;
--- a/tools/libs/call/linux.c
+++ b/tools/libs/call/linux.c
@@ -80,7 +80,7 @@ int osdep_xencall_close(xencall_handle *
     return 0;
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
 }
--- a/tools/libs/call/minios.c
+++ b/tools/libs/call/minios.c
@@ -38,7 +38,7 @@ int osdep_xencall_close(xencall_handle *
     return 0;
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     multicall_entry_t call;
     int i, ret;
--- a/tools/libs/call/netbsd.c
+++ b/tools/libs/call/netbsd.c
@@ -96,7 +96,7 @@ void osdep_free_pages(xencall_handle *xc
     free(ptr);
 }
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     int error = ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -55,7 +55,7 @@ struct xencall_handle {
 int osdep_xencall_open(xencall_handle *xcall);
 int osdep_xencall_close(xencall_handle *xcall);
 
-int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
 
 void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
 void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
--- a/tools/libs/call/solaris.c
+++ b/tools/libs/call/solaris.c
@@ -80,7 +80,7 @@ void osdep_free_hypercall_buffer(xencall
     free(ptr);
 }
 
-int do_xen_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     return ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:18:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145904.268382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviAr-0008Gg-24; Tue, 22 Jun 2021 15:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145904.268382; Tue, 22 Jun 2021 15:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviAq-0008GZ-UD; Tue, 22 Jun 2021 15:18:48 +0000
Received: by outflank-mailman (input) for mailman id 145904;
 Tue, 22 Jun 2021 15:18:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lviAp-0008En-V1
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:18:47 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb75cbff-754a-4c11-94fc-1c9c6f25f856;
 Tue, 22 Jun 2021 15:18:46 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-Fa_m54PQP6eF-ciZ81bwUw-1; Tue, 22 Jun 2021 17:18:44 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6478.eurprd04.prod.outlook.com (2603:10a6:803:12a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Tue, 22 Jun
 2021 15:18:43 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 15:18:43 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0263.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.18 via Frontend Transport; Tue, 22 Jun 2021 15:18:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb75cbff-754a-4c11-94fc-1c9c6f25f856
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624375126;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H8bfy44Kldc2RqppGX0GDbslnJnZuK1kudBdrWRi9kc=;
	b=I9QWOPLQMHSt9CkiOdEuXi1s0Lc2oWdIrwLbZYe102YmaTVu6l0voQTqsYYVw1hs1GwbjE
	D7lNH/RKO5TcDw+ttF6/lc/WS8JtDHVpFxJde8bf2jVkOy0Q05yirSjHKRYeViD+C1QiFm
	/hpgUgj5+/qoJ8o+5Jjtg+oRRnaSNR8=
X-MC-Unique: Fa_m54PQP6eF-ciZ81bwUw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 3/6] libxencall: introduce variant of xencall2() returning
 long
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <beca6bb7-a991-8a94-ebbe-af973e1f3958@suse.com>
Date: Tue, 22 Jun 2021 17:18:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0263.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::35)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c06bc438-2769-4886-86ad-08d9359105fc
X-MS-TrafficTypeDiagnostic: VE1PR04MB6478:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6478BDD1F24198CFAF5DB73FB3099@VE1PR04MB6478.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c06bc438-2769-4886-86ad-08d9359105fc
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 15:18:43.7670
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9tc9Vx6D3GP3CSY+XZ8ho/Ind66LGTJurpJDoqtwRaqm8zdp5jKDax0AMm/jFDhCp333ek20xdQolCFG+RzeCQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6478

Some hypercalls, memory-op in particular, can return values requiring
more than 31 bits to represent. Hence the underlying layers need to make
sure they won't truncate such values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
---
v2: Move dropping of xencall6 from the version script to a separate
    patch.
---
I wasn't sure whether equivalents for the other xencall<N>() functions
should also be introduced, and hence went for the minimal solution
first. On the other hand, there's also xencall0() without any users ...

--- a/tools/include/xencall.h
+++ b/tools/include/xencall.h
@@ -113,6 +113,10 @@ int xencall5(xencall_handle *xcall, unsi
              uint64_t arg1, uint64_t arg2, uint64_t arg3,
              uint64_t arg4, uint64_t arg5);
 
+/* Variant(s) of the above, as needed, returning "long" instead of "int". */
+long xencall2L(xencall_handle *xcall, unsigned int op,
+               uint64_t arg1, uint64_t arg2);
+
 /*
  * Allocate and free memory which is suitable for use as a pointer
  * argument to a hypercall.
--- a/tools/libs/call/core.c
+++ b/tools/libs/call/core.c
@@ -127,6 +127,17 @@ int xencall2(xencall_handle *xcall, unsi
     return osdep_hypercall(xcall, &call);
 }
 
+long xencall2L(xencall_handle *xcall, unsigned int op,
+               uint64_t arg1, uint64_t arg2)
+{
+    privcmd_hypercall_t call = {
+        .op = op,
+        .arg = { arg1, arg2 },
+    };
+
+    return osdep_hypercall(xcall, &call);
+}
+
 int xencall3(xencall_handle *xcall, unsigned int op,
              uint64_t arg1, uint64_t arg2, uint64_t arg3)
 {
--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -27,3 +27,8 @@ VERS_1.2 {
 	global:
 		xencall_fd;
 } VERS_1.1;
+
+VERS_1.3 {
+	global:
+		xencall2L;
+} VERS_1.2;



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:19:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145909.268392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviBF-0000a9-EZ; Tue, 22 Jun 2021 15:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145909.268392; Tue, 22 Jun 2021 15:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviBF-0000a2-BK; Tue, 22 Jun 2021 15:19:13 +0000
Received: by outflank-mailman (input) for mailman id 145909;
 Tue, 22 Jun 2021 15:19:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lviBD-0000XJ-Mc
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:19:11 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 998ae3e5-60d6-4031-abb0-03934d01a185;
 Tue, 22 Jun 2021 15:19:10 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-6zrQ2TaTMkqiTtvGoEOE0w-2; Tue, 22 Jun 2021 17:19:08 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6478.eurprd04.prod.outlook.com (2603:10a6:803:12a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Tue, 22 Jun
 2021 15:19:06 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 15:19:06 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1P264CA0022.FRAP264.PROD.OUTLOOK.COM (2603:10a6:102:19f::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Tue, 22 Jun 2021 15:19:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 998ae3e5-60d6-4031-abb0-03934d01a185
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624375149;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h7aZ9aamUBA9PBvkAH24rsLpHVK608muDDgeGAckfnI=;
	b=aS8BNci371L7vMIMzl9srivjh3hBVrl5HwhCcaPU3npBsfVWE5AtbkBK8RvjbiKAywvrkx
	HSrep8NFvAOdU/izWN6zfYs5L+uYezUYbcvA9MgSBju2daG//PSlApG4aPNQ+vSJKe19h0
	cv66Xt5JkUSQspcyyWWmhh28e+6VNCs=
X-MC-Unique: 6zrQ2TaTMkqiTtvGoEOE0w-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 4/6] libxc: use multicall for memory-op on Linux (and
 Solaris)
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <f56ceeaf-745b-9064-e6c1-83bdb0d04360@suse.com>
Date: Tue, 22 Jun 2021 17:19:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0022.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19f::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2af4d3b3-c47e-4da7-2ee7-08d93591136e
X-MS-TrafficTypeDiagnostic: VE1PR04MB6478:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB64784B0C4D8C7755C44C2839B3099@VE1PR04MB6478.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2af4d3b3-c47e-4da7-2ee7-08d93591136e
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 15:19:06.3243
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pnohVVCWTz4H60C68EKvgp7ZWF9n498W8LboSspQaRwNDfAwxdMr6/eD6939Kw4yxlJNrhX5yyrzvfZR8WPeUg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6478

Some sub-functions, XENMEM_maximum_gpfn and XENMEM_maximum_ram_page in
particular, can return values requiring more than 31 bits to represent.
Hence we cannot issue the hypercall directly when the return value of
ioctl() is used to propagate this value (note that this is not the case
for the BSDs, and MiniOS already wraps all hypercalls in a multicall).

Suggested-by: Jürgen Groß <jgross@suse.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Jackson <iwj@xenproject.org>

--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -337,8 +337,47 @@ long do_memory_op(xc_interface *xch, int
         goto out1;
     }
 
-    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
-                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
+#if defined(__linux__) || defined(__sun__)
+    /*
+     * Some sub-ops return values which don't fit in "int". On platforms
+     * without a specific hypercall return value field in the privcmd
+     * interface structure, issue the request as a single-element multicall,
+     * to be able to capture the full return value.
+     */
+    if ( sizeof(long) > sizeof(int) )
+    {
+        multicall_entry_t multicall = {
+            .op = __HYPERVISOR_memory_op,
+            .args[0] = cmd,
+            .args[1] = HYPERCALL_BUFFER_AS_ARG(arg),
+        }, *call = &multicall;
+        DECLARE_HYPERCALL_BOUNCE(call, sizeof(*call),
+                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+        if ( xc_hypercall_bounce_pre(xch, call) )
+        {
+            PERROR("Could not bounce buffer for memory_op hypercall");
+            goto out1;
+        }
+
+        ret = do_multicall_op(xch, HYPERCALL_BUFFER(call), 1);
+
+        xc_hypercall_bounce_post(xch, call);
+
+        if ( !ret )
+        {
+            ret = multicall.result;
+            if ( multicall.result > ~0xfffUL )
+            {
+                errno = -ret;
+                ret = -1;
+            }
+        }
+    }
+    else
+#endif
+        ret = xencall2L(xch->xcall, __HYPERVISOR_memory_op,
+                        cmd, HYPERCALL_BUFFER_AS_ARG(arg));
 
     xc_hypercall_bounce_post(xch, arg);
  out1:



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:19:32 2021
Subject: [PATCH v2 5/6] libxencall: drop bogus mentioning of xencall6()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <1792bdfc-7510-6628-e63c-aee2c7d2cab5@suse.com>
Date: Tue, 22 Jun 2021 17:19:24 +0200
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>

There's no xencall6(), so the version script also shouldn't mention it.
Should such a function ever appear, it shouldn't land in version 1.0.
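For reference, a GNU ld version script only assigns versions to symbols that actually exist; listing a non-existent xencall6() is harmless to the linker but misleading to readers of the map. A symbol first introduced after the 1.0 ABI was shipped would have to go into a new version node inheriting from the old one, roughly like this sketch (the VERS_1.1 node and its contents are hypothetical):

```
VERS_1.0 {
	global:
		xencall5;	/* symbols present at the 1.0 release stay here */
	local: *;
};

VERS_1.1 {
	global:
		xencall6;	/* a hypothetical later addition gets a new node */
} VERS_1.0;
```

Retrofitting a new symbol into VERS_1.0 would misrepresent the ABI history to consumers doing versioned linking.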

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
---
v2: Split out of earlier patch.

--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -9,7 +9,6 @@ VERS_1.0 {
 		xencall3;
 		xencall4;
 		xencall5;
-		xencall6;
 
 		xencall_alloc_buffer;
 		xencall_free_buffer;



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:20:16 2021
Subject: [PATCH v2 6/6] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
Message-ID: <fed4cdad-950d-6d39-d372-37f88dcc2819@suse.com>
Date: Tue, 22 Jun 2021 17:20:02 +0200
In-Reply-To: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>

libxc generally uses uint32_t to represent domain IDs. This is fine as
long as addresses of such variables aren't taken, to then pass into
hypercalls: To the hypervisor, a domain ID is a 16-bit value. Introduce
a wrapper struct to deal with the issue. (On architectures with
arguments passed in registers, an intermediate variable would have been
created by the compiler already anyway, just one of the wrong type.)

The public interface change is both source and binary compatible for
the architectures we currently support.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
---
v2: Introduce wrapper struct in public interface.
---
Together with the comment change I was half tempted to also rename the
sub-function identifier to XENMEM_maximum_gfn. But I then decided this
would go too far here.

--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -856,7 +856,8 @@ int xc_domain_get_tsc_info(xc_interface
 
 int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
 {
-    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));
+    struct xen_memory_domain dom = { .domid = domid };
+    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &dom, sizeof(dom));
 
     if ( rc >= 0 )
     {
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1351,7 +1351,6 @@ long do_memory_op(unsigned long cmd, XEN
     long rc;
     struct xen_memory_reservation reservation;
     struct memop_args args;
-    domid_t domid;
     unsigned long start_extent = cmd >> MEMOP_EXTENT_SHIFT;
     int op = cmd & MEMOP_CMD_MASK;
 
@@ -1452,13 +1451,16 @@ long do_memory_op(unsigned long cmd, XEN
     case XENMEM_current_reservation:
     case XENMEM_maximum_reservation:
     case XENMEM_maximum_gpfn:
+    {
+        struct xen_memory_domain domain;
+
         if ( unlikely(start_extent) )
             return -EINVAL;
 
-        if ( copy_from_guest(&domid, arg, 1) )
+        if ( copy_from_guest(&domain, arg, 1) )
             return -EFAULT;
 
-        d = rcu_lock_domain_by_any_id(domid);
+        d = rcu_lock_domain_by_any_id(domain.domid);
         if ( d == NULL )
             return -ESRCH;
 
@@ -1486,6 +1488,7 @@ long do_memory_op(unsigned long cmd, XEN
         rcu_unlock_domain(d);
 
         break;
+    }
 
     case XENMEM_add_to_physmap:
     {
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -148,16 +148,23 @@ DEFINE_XEN_GUEST_HANDLE(xen_memory_excha
  */
 #define XENMEM_maximum_ram_page     2
 
+struct xen_memory_domain {
+    /* [IN] Domain information is being queried for. */
+    domid_t domid;
+};
+
 /*
  * Returns the current or maximum memory reservation, in pages, of the
  * specified domain (may be DOMID_SELF). Returns -ve errcode on failure.
- * arg == addr of domid_t.
+ * arg == addr of struct xen_memory_domain.
  */
 #define XENMEM_current_reservation  3
 #define XENMEM_maximum_reservation  4
 
 /*
- * Returns the maximum GPFN in use by the guest, or -ve errcode on failure.
+ * Returns the maximum GFN in use by the specified domain (may be DOMID_SELF).
+ * Returns -ve errcode on failure.
+ * arg == addr of struct xen_memory_domain.
  */
 #define XENMEM_maximum_gpfn         14
 



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:28:20 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162973: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
X-Osstest-Versions-That:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 15:28:14 +0000

flight 162973 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162973/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704
baseline version:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5

Last test of basis   162880  2021-06-17 17:00:36 Z    4 days
Testing same since   162942  2021-06-21 16:00:26 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8af4b47f06..c9b59f9032  c9b59f9032d41be8bade8a25d9148cf6ed203704 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:46:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:46:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145941.268440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvibw-0005k5-Ii; Tue, 22 Jun 2021 15:46:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145941.268440; Tue, 22 Jun 2021 15:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvibw-0005jy-EY; Tue, 22 Jun 2021 15:46:48 +0000
Received: by outflank-mailman (input) for mailman id 145941;
 Tue, 22 Jun 2021 15:46:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvibv-0005js-Gw
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:46:47 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5e83a49-14c2-4599-80c3-6e2025f93d9d;
 Tue, 22 Jun 2021 15:46:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5e83a49-14c2-4599-80c3-6e2025f93d9d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624376806;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=dTu4iINFlzHt/M9SZNB59k/OflALkkf5CWWkf5QRENM=;
  b=f0NDttE4FURmbfYrFIt1bu18aYA5ibZxsLLODAeGwSYyluHjM32e1m/H
   wXJr1EhULNveUCrVnzdQQvF/OdDRZdI4J6bV9oXZ+cD3yqm0txd2X87lL
   g6hReG8sbtLNxyr7UaT1Sz6Xhmei6J6XT56oN802pUAmf+m2WCwElriX/
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: eNmThEQKZny8qB64yJOq+VxeJj3STEFLbmwZmPQ2UqEWbtE7tYBLNGx4SHUPRw+4Aef2lqngQV
 lwglRSved37GQRP80mi21SAVDCi26elUVB3UVDo1Zx9bsv+nbkVnSYvjI76dqg95bdxbLgb3Jf
 Zp4XcBxZtO8xdXeDUGHCJNhXAmqPtLc8NEG96jOlNQ6PlJljIx8Q4agTAu7mIppgOoBYjSfnOc
 7hg3P/8sSRGPTvm8w1Xl/rQNg6D4sncRBxAESyLxOJRenY/6snKbOoKKcACyLY/9bvNPLoe7n5
 xBQ=
X-SBRS: 5.1
X-MesageID: 46701430
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Q7SWZKDJaP+TEmPlHemk55DYdb4zR+YMi2TC1yhKKCC9E/bo8f
 xG885rtiMc5Ax/ZJhCo6HmBEDjewK/yXcd2+B4Vt3OMDUO0FHYSL2KhrGD/9SPIUPDH5ZmpM
 JdT5Q=
X-IronPort-AV: E=Sophos;i="5.83,291,1616472000"; 
   d="scan'208";a="46701430"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH] Revert "tools/firmware/ovmf: Use OvmfXen platform file is exist"
Date: Tue, 22 Jun 2021 16:39:30 +0100
Message-ID: <20210622153930.16003-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

This reverts commit aad7b5c11d51d57659978e04702ac970906894e8.

The change from OvmfX64 to OvmfXen causes a change in behaviour, whereby
OvmfXen maps its shared info page at the top of address space.  When trying to
migrate such a domain, XENMEM_maximum_gpfn returns a very large value.  This
has uncovered multiple issues:

 1) The userspace hypercall wrappers truncate all return values to int on
    Linux and Solaris.  This needs fixing in Xen.
 2) 32bit toolstacks can't migrate any domain with RAM above the 2^40 mark,
    because of virtual address constraints.  This needs fixing in OVMF.

Fixes for both of these aren't completely trivial.  Revert the change to
unblock staging in the meantime.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 tools/firmware/ovmf-makefile | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/tools/firmware/ovmf-makefile b/tools/firmware/ovmf-makefile
index 637ee509c3..55f9992145 100644
--- a/tools/firmware/ovmf-makefile
+++ b/tools/firmware/ovmf-makefile
@@ -17,14 +17,8 @@ all: build
 .PHONY: build
 build:
 	if test -e .git ; then $(GIT) submodule update --init --recursive ; fi
-	set -ex; \
-	if test -e OvmfPkg/OvmfXen.dsc; then \
-	  OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4 -p OvmfPkg/OvmfXen.dsc; \
-	  cp Build/OvmfXen/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin; \
-	else \
-	  OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4; \
-	  cp Build/OvmfX64/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin; \
-	fi
+	OvmfPkg/build.sh -a X64 -b $(TARGET) -n 4
+	cp Build/OvmfX64/$(TARGET)_GCC*/FV/OVMF.fd ovmf.bin
 
 .PHONY: clean
 clean:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 15:56:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 15:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145945.268450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviko-0007AX-Cp; Tue, 22 Jun 2021 15:55:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145945.268450; Tue, 22 Jun 2021 15:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviko-0007AQ-9j; Tue, 22 Jun 2021 15:55:58 +0000
Received: by outflank-mailman (input) for mailman id 145945;
 Tue, 22 Jun 2021 15:55:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nQNH=LQ=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lvikm-0007AK-K9
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 15:55:56 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e74d563c-3940-4db4-a43c-823ecfc4d455;
 Tue, 22 Jun 2021 15:55:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e74d563c-3940-4db4-a43c-823ecfc4d455
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624377355;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=JthbSS9hhfSjxnfO7f8LXXvJ6x04gBBc+yphges23jI=;
  b=AzUo21cg87x53+/X+NdfN9KhXUkjjsRIs7Fu52W6fabb8d3Dp/ups+G8
   3pjPPyjbXxY3I0JXpg9yYoMJeTUkViOFfJMUVrL7FLjZiObvTdIT8kySZ
   YrBrEAVlCAUy4gZjgkTSuaaNpPXalsJ+qWDr5JDs6p0iB83Ha9kvrg04O
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: oMSkGR1CgLKk3ZvGVfsfXbNFiR2xJl+pCqIe8SdpBNrXpIq/EXbKZeLzqU2DX/OiUErGEpKikI
 QAPkjdCbvtoG/QEFIJto2sYygo84Yzqe1CuVpLPSaSXvhYZM7nGMAhm3TrsusHaNjR60xGuPC2
 BQDklCOHeXqMqG3NQRNgNliEROfXpj6niI1RM+G4CDqB8CN8voxO4EkQw6afFkiy4a0St1zMHI
 L9SEZq4W0CooZlaHvHcJYsKscYghVkVEWdr+n3784vqNzFnuk4krdT9i7MCeTifDK/3c1OmFv5
 EWc=
X-SBRS: 5.1
X-MesageID: 46689271
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:3d4Si6rbDy9ZZmaQhLsPwM0aV5rReYIsimQD101hICG9Evb0qy
 lhppQmPH7P+VIssRQb8+xoV5PufZqxz/BICOoqTNKftWvdyQiVxehZhOOP/9SJIUbDH4VmpM
 VdmsZFaeEZDTJB/LvHCAvTKadd/DFQmprY+ts3zB1WPH9Xg7kL1XYfNu4CeHcGPzWvA/ACZf
 yhz/sCnRWMU1INYP+2A3EUNtKz3eEixPrdEGc77wdM0nj3sQ+V
X-IronPort-AV: E=Sophos;i="5.83,291,1616472000"; 
   d="scan'208";a="46689271"
Date: Tue, 22 Jun 2021 16:55:46 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH] Revert "tools/firmware/ovmf: Use OvmfXen platform file
 is exist"
Message-ID: <YNIIAvdClPze2cEg@perard>
References: <20210622153930.16003-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210622153930.16003-1-andrew.cooper3@citrix.com>

On Tue, Jun 22, 2021 at 04:39:30PM +0100, Andrew Cooper wrote:
> This reverts commit aad7b5c11d51d57659978e04702ac970906894e8.
> 
> The change from OvmfX64 to OvmfXen causes a change in behaviour, whereby
> OvmfXen maps its shared info page at the top of address space.  When trying to
> migrate such a domain, XENMEM_maximum_gpfn returns a very large value.  This
> has uncovered multiple issues:
> 
>  1) The userspace hypercall wrappers truncate all return values to int on
>     Linux and Solaris.  This needs fixing in Xen.
>  2) 32bit toolstacks can't migrate any domain with RAM above the 2^40 mark,
>     because of virtual address constraints.  This needs fixing in OVMF.
> 
> Fixes for both of these aren't completely trivial.  Revert the change to
> unblock staging in the meantime.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:00:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:00:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145950.268462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvipI-0000dh-VJ; Tue, 22 Jun 2021 16:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145950.268462; Tue, 22 Jun 2021 16:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvipI-0000da-S2; Tue, 22 Jun 2021 16:00:36 +0000
Received: by outflank-mailman (input) for mailman id 145950;
 Tue, 22 Jun 2021 16:00:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wlN=LQ=gmail.com=neilsikka@srs-us1.protection.inumbo.net>)
 id 1lvipH-0000dU-EN
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:00:35 +0000
Received: from mail-ed1-x52e.google.com (unknown [2a00:1450:4864:20::52e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67e9bdca-0f31-4b4b-a7d5-5672d0d329d1;
 Tue, 22 Jun 2021 16:00:34 +0000 (UTC)
Received: by mail-ed1-x52e.google.com with SMTP id h17so14458770edw.11
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 09:00:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67e9bdca-0f31-4b4b-a7d5-5672d0d329d1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=EP9w62tNmtHNt+8DgW/leZX7ZZ1P63XzCfOLOGIqNlY=;
        b=Yw/FPpD+IFCjfSoDw8jH3eLU1eAgczG/d8yUXfr4QvdFS+Ozh/ZuwYq4Nx/Az3IL9h
         Pvv/tBGYSPoSXWe/gP91dYwu387OjcvR+U3zHqcmawFUz+QFOEHPyHOglbQuQ98f7dV+
         oqYXSjZ3Dj20m5xCsjNnGAgKxa765Fb9c8kRxayhR83GoLo+PkZnsLS4Zh31FFmU/Uak
         brLv0PDvB5YWt5rWGa7r2TfVMXH+SEPboKIZ4ZNNS5ID+f7p6cbgLLvNvqNJFf/dpd5H
         GQuI2YjBd8sWZ7MdLVKnGncuxQfOm0404VgnVxk5LjphNFDSXb2bS6u7dodpnKoL0gqy
         OMNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=EP9w62tNmtHNt+8DgW/leZX7ZZ1P63XzCfOLOGIqNlY=;
        b=iQY6CjG6vfjbxYodF1Oo9ZUBEvwgeF3uCc2+snLr7c3tuIwfpd4NP+yr2Puq+TobKy
         RJcq72dIXAaiuCX9SGTRl1b3aMn6m2XAHeyxMFOKDJ6GrYJO7ZJaQL1YIRG/7ykMmBxp
         My6yiFo66mpQF9AnEwknXWxzL11JWyLZdGDb8YCUKHGVToVi71VGeyd7tYDNhEvwu7dG
         2yuxaNhzGrsDrATHAq48Txng4zPf8+kfuluyoKCwPoJ3JR9LSc36LAK3srtONdZ4DwUm
         t53mwt6KGYKMWHlDMpfndLP0Pt/seX3PI+ok3+rAmFevOXyWGaPubaYedDVx0h7eEURn
         1Khw==
X-Gm-Message-State: AOAM533MWaMTiPYMeqfgTokHaaZQcgHwAlSVLiYR0CygmfZEeu927LfR
	kcG/9LyJtM8mYTVglMcvB9hsJaXk5ptzUF/14ZumbYiyfAulbg==
X-Google-Smtp-Source: ABdhPJw+ihHJpO7jlFMgvZVJ3fjjT8YCfR0RIE146Ha1JVHNa0blKAdVVNV4fiYqHugywvLOFWXJ7C5jxff92QH75iw=
X-Received: by 2002:a05:6402:848:: with SMTP id b8mr5970828edz.44.1624377633510;
 Tue, 22 Jun 2021 09:00:33 -0700 (PDT)
MIME-Version: 1.0
From: Neil Sikka <neilsikka@gmail.com>
Date: Tue, 22 Jun 2021 12:00:22 -0400
Message-ID: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
Subject: Windows 10 Kernel Debugging on Xen
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000d3973905c55ce399"

--000000000000d3973905c55ce399
Content-Type: text/plain; charset="UTF-8"

Hello,
Has anyone gotten a Windows 10 (Version 1709 or later) kernel debugger
attached when running the Windows 10 debugger VM and the Windows 10 debuggee
VM on the Xen 4.13.0 hypervisor? I am getting a "NIC hardware initialization
failed" error. I tried the suggestions in the discussion here (
https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
-cpu
Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on,
\
skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM

Note: I had to remove the following two arguments due to errors from QEMU:
pschange-mc-no=on
hv_vpindex

Here was the error:
C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe

Network debugging is supported on the following NICs:
busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
The Microsoft hypervisor running this VM does not support KDNET.
Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or later.

KDNET initialization failed.  Status = 0xC0000182.
NIC hardware initialization failed.

I am using an Intel e1000 NIC emulated through QEMU because it's supposedly
a supported NIC for Windows kernel NET debugging.

Thanks in Advance!

-- 
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

--000000000000d3973905c55ce399--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:10:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145954.268473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviyc-00024a-Um; Tue, 22 Jun 2021 16:10:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145954.268473; Tue, 22 Jun 2021 16:10:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lviyc-00024T-Q4; Tue, 22 Jun 2021 16:10:14 +0000
Received: by outflank-mailman (input) for mailman id 145954;
 Tue, 22 Jun 2021 16:10:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p8IA=LQ=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1lviyc-00024N-0R
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:10:14 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0812c552-472b-4020-85a0-27002685bddf;
 Tue, 22 Jun 2021 16:10:12 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id e22so20776265wrc.1
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 09:10:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0812c552-472b-4020-85a0-27002685bddf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=GhwNfhMVdQu8uAR13vpdnf7+TZvoJmIXHX9FI/JeEXE=;
        b=Vq5qatDmRcNaTLrAYjTpFZi0LOqn7RfpcNlh9F8xquRlKssG85rRksHbvnCMf+6A4l
         DeGIIdH5/3D3xh241pmm3kVXjHESKVfAwfIw+lQcym/rfb3TPHl4rf77Lhiy6gWLweVA
         EpOF66cvU3PVVaIzW6IkZ7jWZQ/Go8pmBk6LH9plp6h48LRAW3wmZASXhpUfaSIZj0py
         a3uPrxNb62wGw+HGKamCnRAf6mh7ErTgYeRH+VMPLg1JmX1NVjYBUYJBWRXd+9Dt5kYj
         tQ/qa0rJHF9Rg09Ix+tkYeohmDF+BZDHVJcTJ2O0lc6h2LXHLXqQVR3VrnLSh1o1VgF4
         iP9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=GhwNfhMVdQu8uAR13vpdnf7+TZvoJmIXHX9FI/JeEXE=;
        b=OsSq1fPTHEzS5p7/eICVyz8U0rOlkQGLwDcCsneK0H8qs5G/P13JWgTtso2ODLkygC
         s6n607NFAaEKREqOfGzIrs1LIfPgpr0B5N1tqexsDF8tGgC8pj8GAFVpvoo+AVM9mXtf
         kXdAsopkZNi5ziadkf1Glf95zZTo/SIcC0oWvSy0FSNtYb+GKQxiCsVcp/ekR6rGNmIt
         F/QInVg+1v2dpCnYaeqj/IXwFq72X85ol1KHzudq45sXZhjVJuBvpWuSHo7lRDeLDxD4
         xEcjTg/wB8MNU+h517hTxB04tqczqHiRaattBjpI6RReV4gLMMELjU6Lh36/UldIWHlQ
         4dtg==
X-Gm-Message-State: AOAM533ItBoNFCmXUpLGwk7YOaMZEuxIKCyUyAwQnLBkncB8jvePbnWB
	OGKgvqub7F0s8mWsUaBQQ5QimUSDNwNxjcqbz04=
X-Google-Smtp-Source: ABdhPJwavLsZhKp1hvvIlXsomWdAciI4Xe4MueQSrZTmwa6v+QEOEP2fzY3NsRS1l7ShNrnSnWD4NLE2f+MB5GC8pO0=
X-Received: by 2002:adf:fb92:: with SMTP id a18mr5678539wrr.182.1624378212075;
 Tue, 22 Jun 2021 09:10:12 -0700 (PDT)
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
In-Reply-To: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 22 Jun 2021 12:09:36 -0400
Message-ID: <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Neil Sikka <neilsikka@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

I have managed to get windbg working with a serial bridge between two
Win10 VMs using the following script:
https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh
The debuggee has to enable a couple of options so that windbg can attach:
https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd

Tamas

On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com> wrote:
>
> Hello,
> Has anyone gotten a Windows 10 (Version 1709 or later) kernel debugger
> attached when running the Windows 10 debugger VM and the Windows 10
> debuggee VM on the Xen 4.13.0 hypervisor? I am getting a "NIC hardware
> initialization failed" error. I tried the suggestions in the discussion
> here (https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
> -cpu Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on, \
> skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
>
> Note: I had to remove the following two arguments due to errors from QEMU:
> pschange-mc-no=on
> hv_vpindex
>
> Here was the error:
> C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
>
> Network debugging is supported on the following NICs:
> busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
> The Microsoft hypervisor running this VM does not support KDNET.
> Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or later.
>
> KDNET initialization failed.  Status = 0xC0000182.
> NIC hardware initialization failed.
>
> I am using an Intel e1000 NIC emulated through QEMU because it's
> supposedly a supported NIC for Windows kernel NET debugging.
>
> Thanks in Advance!
>
> --
> My Blog: http://www.neilscomputerblog.blogspot.com/
> Twitter: @neilsikka

From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:10:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:10:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145957.268484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvizA-0002cR-6i; Tue, 22 Jun 2021 16:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145957.268484; Tue, 22 Jun 2021 16:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvizA-0002cK-3M; Tue, 22 Jun 2021 16:10:48 +0000
Received: by outflank-mailman (input) for mailman id 145957;
 Tue, 22 Jun 2021 16:10:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtyV=LQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lviz8-0002cB-Ui
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:10:46 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15f82209-6f92-476b-bc05-1ecb86e59190;
 Tue, 22 Jun 2021 16:10:46 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-NmOAeSOXMF2GiINgEWtqmQ-1; Tue, 22 Jun 2021 18:10:43 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4382.eurprd04.prod.outlook.com (2603:10a6:803:73::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Tue, 22 Jun
 2021 16:10:41 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4242.023; Tue, 22 Jun 2021
 16:10:41 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P195CA0004.EURP195.PROD.OUTLOOK.COM (2603:10a6:102:b6::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.19 via Frontend Transport; Tue, 22 Jun 2021 16:10:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15f82209-6f92-476b-bc05-1ecb86e59190
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624378245;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zm9L+/AvzN0dIVbx8GeBZqJQhaMXOfsGlUm9zPZfbzY=;
	b=IUY+6HKUwq2WNzKaT+knxV0Phcdo3/Pa0znBlACb8Z2JLcti4zvgcwWJEVEU1anxiL+rYx
	JTXqfuKFH3W5f/IRIcaU4c+AZAt+lt6czT5qNUi0M+dTtGfTieAROaaxeSLJne3NN9lxPX
	nMz+nUK7khQ6KB4uNsVgR5Og8XPZ/5c=
X-MC-Unique: NmOAeSOXMF2GiINgEWtqmQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dXYpECBxZu7gOPaCT5bbGhUOtoxA6Em0atGydqa0I6gTiuggmMyORhMwOrtnC9tP5mgWqTyEzhEEJN0Z5jaTILSTlzkFQnynR+28XlW+5VBp8gbsIiKXTDA84jiG+MZmZOD3zqBWmkJn8sEyWYEuous1n1PGmIqNfCVp8i53G8VrmSHjwSZe5QyMqBQsDbZSvwiE4kgfQwGIeg7u12f3IbsCF9MV3abUUfbA9LJpJ9BgEmeTEFJeH+p0sHG5nexgu5qQCbU5ZlWNh5MwknQ/J91pokouUC/29b8nE3tVVqJoTSXr8sIF9VVMEjEPFtCx+TOoK3LkkLu36SA/cH8NZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zm9L+/AvzN0dIVbx8GeBZqJQhaMXOfsGlUm9zPZfbzY=;
 b=hBHWiB50KI6cMtwKtzBC6F4HncGePloUz8gYmZAVCR9lixE9NEXQvCdkdFiZZ/Jn4zorcAKVAPyUjYFEoTs1TOt7xT5lcIEkchhpY6nAW/eSZOY2g/ys97Wno8GbtTF8UP1wx22XLUkZBCkV2LsXrnt5tN27vCBmsHvDEjDadgJV/Wchxn4JT859nPxyicqc4G4gnAR2EmxVt7XEqiJoUeOEqlaByqUX0cT/g7pD1jecmyb/zQr960Si8WcB9WRgoEb48gP5lfp6BjKti7aHOJ75d+ghBYhKs+xjlxABGC5LFbqBFlj0jrDBg6M34rIH+A4x2HXWjpbP8E2yeO/l2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] Revert "tools/firmware/ovmf: Use OvmfXen platform file is
 exist"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622153930.16003-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e3104d67-d988-06f7-58e1-92ed3ef739bd@suse.com>
Date: Tue, 22 Jun 2021 18:10:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210622153930.16003-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P195CA0004.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cf0f16c6-0579-4b67-b24e-08d935984851
X-MS-TrafficTypeDiagnostic: VI1PR04MB4382:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB43822634D50F5E3E4071CE37B3099@VI1PR04MB4382.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Z8VqRfDDHGk+DSgkn42zuUoYyFpkLVIJROxla0WAnSKXCwiWFvyvnVqRQ29jOcuyzTWxSEyw/XC4lmyvZs1210OO2/cKAtmtM/KVmq5yd9qscBaHi3rmlHNOHBxXpmYlZ7Qxu0vRZPu0KTfVs9FXDJsRmJuxJrp4BQMqjtpFQtEgWQ2DzoBMRtg3Ey7eJIbYelu2Drc0x1wfqmV+eyCxm4FsxPvgP/PnAfaEG7+EBchAooyt4olmwkRSjV6/noqlwEZkCIYkne6xzmqvk8WwgOVOy5tTVZqd5R45yiHkxP6fAIdBK0pZSNFMvU/RJBfpnm/47ILiuVLboBJTrk62poqjGoVaqmZz1WEugpntCSP1i6YHVDUr627Y2a5SpkabG+fx++DxlQHULylh94iCHnSVZKy/VRMwnfdjeYNXS2wE/oUnx2aA+dGhSmlMCtinz5eS4scFFxfybIlw0hDkI4bmJwHBFvW3TtOTdvXjC8px6NpIrbr3uVxq39j/KgpHHS4hQ9GBaIl57uNzA/CjHWeV1T+ezZ05xUPXbLX3dbZgKQr5gC66HNDo+YZrmaQdO2+1DxGDAjgGlFhmQMoqagKYS9zZUh2mlUxSZPrYXtd84bP4bkiOzdZHJnDtcFnXjsu3c6x0RdDaTlQIwDoIlQnhdtkbGCCc4UcMhvWb+J30a4fAaQtoSBJiqyESajT3
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(39860400002)(396003)(366004)(376002)(956004)(54906003)(8936002)(186003)(26005)(66476007)(2616005)(83380400001)(86362001)(2906002)(53546011)(5660300002)(16526019)(38100700002)(16576012)(478600001)(6486002)(66556008)(6916009)(316002)(66946007)(36756003)(8676002)(31686004)(4326008)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cf0f16c6-0579-4b67-b24e-08d935984851
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 16:10:41.5230
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1Cl9efhYBZ/tCqUIJ1vGNv4+dFJredfvMS+7uNE0Fds59ze2QV8ZBvkcYV0mtN9C2LOW52G2c4b4NTDItovbXw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4382

On 22.06.2021 17:39, Andrew Cooper wrote:
> This reverts commit aad7b5c11d51d57659978e04702ac970906894e8.
> 
> The change from OvmfX64 to OvmfXen causes a change in behaviour, whereby
> OvmfXen maps its shared info page at the top of address space.  When trying to
> migrate such a domain, XENMEM_maximum_gpfn returns a very large value.  This
> has uncovered multiple issues:
> 
>  1) The userspace hypercall wrappers truncate all return values to int on
>     Linux and Solaris.  This needs fixing in Xen.
>  2) 32bit toolstacks can't migrate any domain with RAM above the 2^40 mark,
>     because of virtual address constraints.  This needs fixing in OVMF.

And I suspect even that presently enforced boundary of 2^40 is actually
too high, and things still wouldn't work when getting close to it. At the
very least the tool stack then depends on a fairly big chunk of memory
(2^30 bytes) being available in one single, virtually contiguous piece.
Iirc 32-bit Linux can be configured to not even leave this much space
for user mode.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:13:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145965.268495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvj20-0003Oi-Od; Tue, 22 Jun 2021 16:13:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145965.268495; Tue, 22 Jun 2021 16:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvj20-0003Ob-La; Tue, 22 Jun 2021 16:13:44 +0000
Received: by outflank-mailman (input) for mailman id 145965;
 Tue, 22 Jun 2021 16:13:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dK4H=LQ=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1lvj1z-0003OV-Pc
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:13:43 +0000
Received: from MTA-14-4.privateemail.com (unknown [198.54.118.206])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40282eeb-9681-41e0-b222-81fd7873ae37;
 Tue, 22 Jun 2021 16:13:43 +0000 (UTC)
Received: from mta-14.privateemail.com (localhost [127.0.0.1])
 by mta-14.privateemail.com (Postfix) with ESMTP id 56BFD80163
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 12:13:42 -0400 (EDT)
Received: from mail-wm1-f52.google.com (unknown [10.20.151.244])
 by mta-14.privateemail.com (Postfix) with ESMTPA id 1FA4D80168
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 12:13:42 -0400 (EDT)
Received: by mail-wm1-f52.google.com with SMTP id n23so13193330wms.2
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 09:13:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40282eeb-9681-41e0-b222-81fd7873ae37
X-Gm-Message-State: AOAM530IeQ20jwFQvNhChPKEzRIFGWKgci1sahA6VLkcAzlwLPcN7cHn
	FSuwULf3Th/YvCF1jCeJcY/FNA+N41Jxso0xwD8=
X-Google-Smtp-Source: ABdhPJyP/1rJci1YVdsFmGAugekVzVntgX4LKIBtbym5cNwQwUKlBtl4cOswNNeD08+p78ehn4O4nqCHHYt1zldo6Dk=
X-Received: by 2002:a7b:c149:: with SMTP id z9mr5160659wmi.77.1624378420854;
 Tue, 22 Jun 2021 09:13:40 -0700 (PDT)
MIME-Version: 1.0
References: <20210507152836.20026-1-tamas@tklengyel.com>
In-Reply-To: <20210507152836.20026-1-tamas@tklengyel.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 22 Jun 2021 12:13:05 -0400
X-Gmail-Original-Message-ID: <CABfawhnWJkvq9z1vB2mLwto7GV=xGu181w6LzFh6NeX-qXdbcQ@mail.gmail.com>
Message-ID: <CABfawhnWJkvq9z1vB2mLwto7GV=xGu181w6LzFh6NeX-qXdbcQ@mail.gmail.com>
Subject: Re: [PATCH] tools/misc/xen-vmtrace: handle more signals and install
 by default
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-Virus-Scanned: ClamAV using ClamSMTP

Patch ping.

On Fri, May 7, 2021 at 11:28 AM Tamas K Lengyel <tamas@tklengyel.com> wrote:
>
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> ---
>  tools/misc/Makefile      |  2 +-
>  tools/misc/xen-vmtrace.c | 12 +++++++++---
>  2 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/tools/misc/Makefile b/tools/misc/Makefile
> index 2b683819d4..c32c42d546 100644
> --- a/tools/misc/Makefile
> +++ b/tools/misc/Makefile
> @@ -25,6 +25,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
>  INSTALL_SBIN-$(CONFIG_X86)     += xen-memshare
>  INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
>  INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
> +INSTALL_SBIN-$(CONFIG_X86)     += xen-vmtrace
>  INSTALL_SBIN                   += xencov
>  INSTALL_SBIN                   += xenhypfs
>  INSTALL_SBIN                   += xenlockprof
> @@ -51,7 +52,6 @@ TARGETS_COPY += xenpvnetboot
>  TARGETS_BUILD := $(filter-out $(TARGETS_COPY),$(TARGETS_ALL))
>
>  # ... including build-only targets
> -TARGETS_BUILD-$(CONFIG_X86)    += xen-vmtrace
>  TARGETS_BUILD += $(TARGETS_BUILD-y)
>
>  .PHONY: all build
> diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
> index 35d14c6a9b..5b688a54af 100644
> --- a/tools/misc/xen-vmtrace.c
> +++ b/tools/misc/xen-vmtrace.c
> @@ -44,7 +44,7 @@ static size_t size;
>  static char *buf;
>
>  static sig_atomic_t interrupted;
> -static void int_handler(int signum)
> +static void close_handler(int signum)
>  {
>      interrupted = 1;
>  }
> @@ -78,8 +78,14 @@ int main(int argc, char **argv)
>      int rc, exit = 1;
>      xenforeignmemory_resource_handle *fres = NULL;
>
> -    if ( signal(SIGINT, int_handler) == SIG_ERR )
> -        err(1, "Failed to register signal handler\n");
> +    struct sigaction act;
> +    act.sa_handler = close_handler;
> +    act.sa_flags = 0;
> +    sigemptyset(&act.sa_mask);
> +    sigaction(SIGHUP,  &act, NULL);
> +    sigaction(SIGTERM, &act, NULL);
> +    sigaction(SIGINT,  &act, NULL);
> +    sigaction(SIGALRM, &act, NULL);
>
>      if ( argc != 3 )
>      {
> --
> 2.27.0
>


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:28:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145973.268507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvjFh-0004tw-4P; Tue, 22 Jun 2021 16:27:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145973.268507; Tue, 22 Jun 2021 16:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvjFg-0004tp-VH; Tue, 22 Jun 2021 16:27:52 +0000
Received: by outflank-mailman (input) for mailman id 145973;
 Tue, 22 Jun 2021 16:27:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvjFf-0004tj-0O
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:27:51 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc0bb070-2254-4ae4-8cc3-99988f5ef58d;
 Tue, 22 Jun 2021 16:27:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc0bb070-2254-4ae4-8cc3-99988f5ef58d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624379269;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=VtVlYcU6+/wbuJX1pOjnPD5Exirrp4sS0B8dgtDtJ80=;
  b=HMQK5oxfjSoj7zG0TxqOJI99NnekFhmXuB4GPw+3ab0ameqXhh7/Qq23
   vWTOLjgx4gWm0ruDOn5AlGf5g3jUQg4B7ts6pXgC1BDuVD6a62uEVBzxW
   ESr5iTdOcDmY/C/WxUwS/2vk0/vbd+IrwfY4KcFlThi79XRGLn4KmaQBt
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ioagPGl7N+nv4jQXrGQ8gPeLSnUJaM+5K600rqCKIlW7Up7/EfwrX/nB+vrOX5VxhgAjblbGba
 UohJqLxAyJwT0ZYyYeOD0ybCP4el9Z/162MbUojLiEIZsjeKLXsl0Nf0TNkPPmvhvZ2ICcvdMW
 ddBBg34SGHs4ixIOOB0iHWHyY/vffVUEDvbRvYR+UVP+2+jydggPtLKhU83yT4oqGtwovCo3h5
 /GFOJPZTVzeFRfl5GKlEzSeYxwTziEIjNRNgeMBeNaBKa3B7c+1XAkgfw+enoBzK66VdrRHi4A
 ByM=
X-SBRS: 5.1
X-MesageID: 47072762
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:0Xi4xK+LRZLJpZa66HRuk+AiI+orL9Y04lQ7vn2ZKSY5TiVXra
 CTdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKL+Vbd8kbFh4xgPM
 lbEpSXCLfLfCVHZcSR2njFLz73quP3j5xBho3lvglQpRkBUdAG0+/gYDzraXGfQmN9dPwEPa
 vZ3OVrjRy6d08aa8yqb0N1JdQq97Xw5evbiQdtPW9e1DWz
X-IronPort-AV: E=Sophos;i="5.83,291,1616472000"; 
   d="scan'208";a="47072762"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E8B/am2kDLy30FrAkr5/+ytjhFRKIzhL5nqr2bY4Td+nl5cMtQz0M4qzory+3jhMxP1t8ZBkK42Fid014L/a9pkC+F5ejCyC+BNxzBD070RfEjUogIh+rOgoBZUuEQWlmF6mUSoTNkDrOT8oG2R+zrNjk8cDn4X2UDU1xUw1bscnjmgGCKrysjflV7HYWgSLxSPDoNWOAdQqWO6IqYn+olEFFgZ/G83dQtH1coWbwWiA73O/4SGdCRM97t3sPTKsAbLIRlFTHj+i8C0TqRUo4l11EZB18KIrfDRSoN2ujT0ra2VO6t4glGIbqVEE+ctLMjKr+Jo1MBCOKdsSDX4MEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fTfSzXv9hlAttnN6XQ5b6pwrnkXJF1fp8R9V9tpTRKM=;
 b=L5+WkHy8UIFwfyRJmyI5tGqW7IFCa8Rdgb/QUUMyYwNslOm45zJyAlVlglB9psXe2Aam8UesvV2znNDzJzpBACxwuh8MDjnCKMLELJs4zRHK9K1Z2GBPMz6n7ru5fMcyJaa6mBXMep07yejDNNjbYSp0UI5Hcyyju7ldQCSzCRwjLN9RFOWkqieY+gnUK+ZFDsCe/46hDDBcWcz1eRT9ZtA+A9lrArW7STPvAFZ9Ow64q1bR+jqPp5h8fXHKRj9AwsrBOTI5fNZ7VaAlR6sUP2OsBhfhsFGgef0xEBaxoLLkvze1q3HnklPdmc7RtWyDiATJbG4BuMhhI5HGDbRzXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fTfSzXv9hlAttnN6XQ5b6pwrnkXJF1fp8R9V9tpTRKM=;
 b=b6bHUgKjBVCA2M9Q2nTRQr788XAerYB9TAsTPTuKBsbKGSdT6GOJhJFnUlF2FIzVGCFMSgWaWMhVsG+z6DTFxR/0nylhU3y4cPkPD8qynUaEj91hUr+T9Y1QCvv50T3JCdKGKvaVPVawGbmAX1PhUwWZWiX6Uy74PQ/R8u1CHnY=
To: Jan Beulich <jbeulich@suse.com>
CC: Anthony PERARD <anthony.perard@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622153930.16003-1-andrew.cooper3@citrix.com>
 <e3104d67-d988-06f7-58e1-92ed3ef739bd@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] Revert "tools/firmware/ovmf: Use OvmfXen platform file is
 exist"
Message-ID: <58ac70ac-f205-564b-184b-a86c19bd2906@citrix.com>
Date: Tue, 22 Jun 2021 17:27:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <e3104d67-d988-06f7-58e1-92ed3ef739bd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0394.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 11486de7-d4fe-4f98-613b-08d9359aa94a
X-MS-TrafficTypeDiagnostic: BY5PR03MB5047:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB5047C0FB5745ACA87C64566EBA099@BY5PR03MB5047.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: v5TZKEgkLXuhYuZ5ilwWaQewSCxGQSrC0tpjAJgqjyjMv37APgJDgiMeq0eF3tD7am5Zozwsy5dc6p3kyLcKJisvOv5WR8IblbE5Zrmigy3QWOnOHukS40qo4Z4ybEuhvlhvqiQjBNfHPO4rgJ7lrTHEKSAq8qlAOSDeRQ6ktXl4SEAsTlSbiwlQ5t8ecphn9sLUgZD1E/Zj2vINTN6RdSKwhqoeffD7s6AICRcDfFLtpLY54mqZx99Inv5kV/saBgHn7EsSjrIJhN162sVFw+Ve0J931U0ua0/A++VOyXyeBBhsqnEYy36KwOft1KypXOxBsvRfPgkQzJ/Gao5Zv9a9Ot49DwhOwgifVvpA5zaIeW+R0J8tagoKpfdqckZx6yLUQ3ljkSSUwv8Cc01pu+m4F6KEW+mfT1Qn9r4NoyGbMDGSePBeToLehIk8OUcFKEn1opXb5ooXax0Vq+pyuFQGgb/n+B9ZUet4QBdWnBUb9nMWU8dvAeZt0j5H/c+tfadBJikQ08YHcS7XTLsMZDnWYK23BCiyN7CpeW1r+yoa5O53JLjUb7EZQcsGw0eSNk7C7U9jw6XE8SyTuWWQ9Mt7rF47bs4gAOkThirLJywbeU9P7/elrb3OWtbzgPCJb0m2eoK/nastB9e3qq6f5x+fKgMO/8OtFqn1eu3SXFloG5g72g5H84kXKPmJSqy+
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(136003)(346002)(396003)(376002)(36756003)(66476007)(66946007)(956004)(66556008)(2616005)(2906002)(8936002)(8676002)(316002)(16576012)(54906003)(5660300002)(6666004)(86362001)(31696002)(6486002)(16526019)(186003)(53546011)(26005)(83380400001)(31686004)(38100700002)(4326008)(6916009)(478600001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 11486de7-d4fe-4f98-613b-08d9359aa94a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 16:27:43.2924
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xEDSQL3WkLjScBg3YM6j6qW1thQgLA7c0jtTLMnGaFxbMZ6XC3o2YEYGrSs5AGNIxuS9Fv4g5RruzLJE+SYkfVAEwnz4r84x0JL+0x9iS08=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5047
X-OriginatorOrg: citrix.com

On 22/06/2021 17:10, Jan Beulich wrote:
> On 22.06.2021 17:39, Andrew Cooper wrote:
>> This reverts commit aad7b5c11d51d57659978e04702ac970906894e8.
>>
>> The change from OvmfX64 to OvmfXen causes a change in behaviour, whereby
>> OvmfXen maps its shared info page at the top of address space.  When trying to
>> migrate such a domain, XENMEM_maximum_gpfn returns a very large value.  This
>> has uncovered multiple issues:
>>
>>  1) The userspace hypercall wrappers truncate all return values to int on
>>     Linux and Solaris.  This needs fixing in Xen.
>>  2) 32bit toolstacks can't migrate any domain with RAM above the 2^40 mark,
>>     because of virtual address constraints.  This needs fixing in OVMF.
> And I suspect even that presently enforce boundary of 2^40 is actually
> too high, and things still wouldn't work when getting close. At the
> very least the tool stack then depends on a fairly big chunk of memory
> (2^30 bytes) to be available in one single, virtually contiguous piece.
> Iirc 32-bit Linux can be configured to not even leave this much space
> for user mode.

I tested it once during Migration v2 development, and it worked for me,
but I do expect that that is as much testing as it has had since...

A 3G/1G split is the default under multiple 32bit kernels, and the
allocation is made right at the start, so there is a reasonable chance
of finding space.  After all, it only needs 4k alignment.

Whether ASLR has changed the chances in the meantime remains to be seen,
but honestly - 32bit toolstacks on x86 really don't exist in production
any more, and Arm still hasn't implemented logdirty support, so the
limit has little practical consequence.

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:33:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 16:33:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.145977.268517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvjLE-0006Gp-No; Tue, 22 Jun 2021 16:33:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 145977.268517; Tue, 22 Jun 2021 16:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvjLE-0006Gi-K4; Tue, 22 Jun 2021 16:33:36 +0000
Received: by outflank-mailman (input) for mailman id 145977;
 Tue, 22 Jun 2021 16:33:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wlN=LQ=gmail.com=neilsikka@srs-us1.protection.inumbo.net>)
 id 1lvjLE-0006Gc-1S
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 16:33:36 +0000
Received: from mail-ed1-x533.google.com (unknown [2a00:1450:4864:20::533])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id def03ceb-54a3-4ed3-8bf3-7ddd2bdc0081;
 Tue, 22 Jun 2021 16:33:35 +0000 (UTC)
Received: by mail-ed1-x533.google.com with SMTP id s6so24479173edu.10
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 09:33:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: def03ceb-54a3-4ed3-8bf3-7ddd2bdc0081
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=EuglMweKhcssS/9eQCqRk+a+i6CIroCO89mBFrjoy0c=;
        b=Ccz2H5UEbTOmSqA4fi1NzImuW9E++REFeGJiUj8urAcRMnPrGesURZau4g8Ya/DmVQ
         qP9KXy3TxYlFFepf2C8GCd6iBWH6woavULVaXIyZ3MTiogE2/HGZcaK5fDJxbP7Ucf0q
         ChW4Pf+vFbG31W6758yHEabWpUkN4UCGe6ryVRR3Ve2hvcP1bCKggTmAnuX56ia3O5C9
         8y9zpuNzfPL0fvrF8UK/p6+vtIII6TuvsNGqbcgOZUlvnqAxI6OeiLr1m0JwWHFz6tun
         A+TYcyBniAE3jkoqrW4AKHFlH+/1Ywar9LMkiGYwUZ3j68gkgUtjEqlvxK9R0O2v+NN2
         gV5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=EuglMweKhcssS/9eQCqRk+a+i6CIroCO89mBFrjoy0c=;
        b=Cx6f8TKzusK5q42USjiXW/YG7q2+ZlLYzlNSryS4vRoUuaCC4rKL5ecHG9VfLdWtFp
         i3KTfSRVFRyP9xkOZX+IQntwLhpGKP/XztqOYbGft6cu2ffkujdvPJ6obmDHFErPSZaj
         BFiWxS3qCyOXNesWzwJ+KCEu7tTWO3HP2l3/dLzOrr+WLbKwke9/DWFaYNYFVsf5sWo0
         DAloxHiOgYgooDiab6epkumJ/9oIS0cbB4Q7gG4Hz1nD8lLnCt/vL0aCorGT0z8eB7+w
         /xCt4/reArl0mC0+AxnJ1Pemm7+QdKS0gEhOFsUQfPPAssBx11ESN21tTtBocUTsOJ3U
         p+sQ==
X-Gm-Message-State: AOAM531njtU3C6Ia989fvUkPsgOCgxzWAVRn/lPUG/RSM2sCBjRVDbrr
	InsP7UCiXrqTG4ZEAXyPpodkZO64TpfXHMDgolg=
X-Google-Smtp-Source: ABdhPJyUReF+2Ti3QzyW55fra8U3V5rCRx9Y6H9k9c6cev5IpJgjuy69FPGtc49tqL4MqTohmxbBo5zevTGefhv+KOs=
X-Received: by 2002:a05:6402:848:: with SMTP id b8mr6198351edz.44.1624379614001;
 Tue, 22 Jun 2021 09:33:34 -0700 (PDT)
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
 <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
In-Reply-To: <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
From: Neil Sikka <neilsikka@gmail.com>
Date: Tue, 22 Jun 2021 12:33:22 -0400
Message-ID: <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000df7c3405c55d59a2"

--000000000000df7c3405c55d59a2
Content-Type: text/plain; charset="UTF-8"

Thanks for the quick response, Tamas. I tried what you said and windbg
waits and the debugee hangs when I click the break button in windbg, but I
don't see any output in windbg. This means that there is SOME communication
over the serial port that causes the debugee to hang when I click break.
Could it be a debugger protocol issue? I also tried the guidance here by
running the crlf program:
https://www.qubes-os.org/doc/windows-debugging/
But windbg waits and the debugee hangs in a similar manner.

What versions of Windows and Xen are you using?
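One way to see whether KD traffic is actually crossing the link is to watch the dom0 side of the debugee's pty. A minimal sketch (the domain name "windows-debugee" is a placeholder, and it assumes serial='pty' in the guest's xl config):

```shell
#!/bin/sh
# Sketch: dump raw bytes the guest writes to its emulated COM1, as seen
# from dom0. The domain name "windows-debugee" is a placeholder; the guest
# is assumed to have serial='pty' in its xl config.

# xenstore node where QEMU publishes the pty backing the guest's serial port
serial_node() {
    echo "/local/domain/$1/serial/0/tty"
}

if command -v xl >/dev/null 2>&1; then
    domid=$(xl domid windows-debugee)
    pty=$(xenstore-read "$(serial_node "$domid")")
    # Hex-dump whatever arrives; any KD packets windbg sends should show up here.
    od -A x -t x1z "$pty"
fi
```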

On Tue, Jun 22, 2021 at 12:10 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com>
wrote:

> I have managed to get windbg working with a serial bridge between two
> Win10 VMs using the following script:
>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh
> .
> The debugee has to enable a couple options so that windbg can attach:
>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd
> .
>
> Tamas
>
> On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com> wrote:
> >
> > Hello,
> > Has anyone gotten a Windows10 (Version 1709 or later) kernel debugger
> attached when running the Windows10 debugger VM and the Windows10 debugee
> VM on Xen 4.13.0 hypervisor? I am getting a "NIC hardware initialization
> failed" error. I tried the suggestions in the discussion here (
> https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
> > -cpu
> Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on,
> \
> >
> skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
> >
> > note: I had to remove the following 2 arguments due to errors from QEMU:
> > pschange-mc-no=on
> > hv_vpindex
> >
> > Here was the error:
> > C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
> >
> > Network debugging is supported on the following NICs:
> > busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
> > The Microsoft hypervisor running this VM does not support KDNET.
> > Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or later.
> >
> > KDNET initialization failed.  Status = 0xC0000182.
> > NIC hardware initialization failed.
> >
> > I am using an Intel e1000 NIC emulated through QEMU because it's
> supposedly a supported NIC for Windows kernel NET debugging.
> >
> > Thanks in Advance!
> >
> > --
> > My Blog: http://www.neilscomputerblog.blogspot.com/
> > Twitter: @neilsikka
>


-- 
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:43:41 2021
Subject: Re: [PATCH] iommu/arm: ipmmu-vmsa: Add compatible for Renesas R-Car
 M3-W+ SoC
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <36ae57eb-e7ce-8ca9-0f4e-23b9f1b0c0e7@xen.org>
Date: Tue, 22 Jun 2021 18:43:31 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

On 14/06/2021 21:18, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The "renesas,r8a77961" string identifies M3-W+ (aka M3-W ES3.0)
> instead of "renesas,r8a7796" since Linux commit:
> "9c9f7891093b02eb64ca4e1c7ab776a4296c058f soc: renesas: Identify R-Car M3-W+".
> Add new compatible to the Xen driver.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   xen/drivers/passthrough/arm/ipmmu-vmsa.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> index 8b8e3a0..1255b0d 100644
> --- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> @@ -1316,6 +1316,7 @@ static const struct dt_device_match ipmmu_dt_match[] __initconst =
>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7795"),
>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77965"),
>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7796"),
> +    DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77961"),
>       { /* sentinel */ },
>   };
>   
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:44:02 2021
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
 <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com> <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com>
In-Reply-To: <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 22 Jun 2021 12:43:19 -0400
Message-ID: <CABfawh=W92ioejsZ-zu+WVofw_jfxVLteVieC2Ysfxd3Wrs+Og@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Neil Sikka <neilsikka@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

I used Xen 4.15 and a pretty new version of Windows 10. It is a bit
finicky, you have to run the debug commands on the debugee and then
reboot. When the VM is rebooting the domain ID changes so you have to
start the serial bridge then. Windbg will attach afterwards. Just make
sure both VMs have serial='pty' set in their config file.
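The sequence above (reboot, look up the new domain IDs, restart the bridge, then attach windbg) can be sketched from dom0 roughly like this. It is a socat-based stand-in under the same serial='pty' assumption, not the linked serial-bridge.sh, and the domain names are placeholders:

```shell
#!/bin/sh
# Rough dom0 stand-in for the serial-bridge idea (not the linked script).
# Domain names are placeholders; both guests need serial='pty' in their
# xl config. Re-run after every reboot: the domain IDs, and therefore the
# pty paths, change each time.

serial_node() {
    # xenstore node where QEMU publishes the guest's serial pty
    echo "/local/domain/$1/serial/0/tty"
}

pty_of() {
    xenstore-read "$(serial_node "$(xl domid "$1")")"
}

if command -v xl >/dev/null 2>&1 && command -v socat >/dev/null 2>&1; then
    # Splice the two ptys together so the debugger VM's COM1 talks to the
    # debugee VM's COM1.
    socat "$(pty_of win10-debugger)",raw,echo=0 \
          "$(pty_of win10-debugee)",raw,echo=0
fi
```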

Tamas

On Tue, Jun 22, 2021 at 12:33 PM Neil Sikka <neilsikka@gmail.com> wrote:
>
> Thanks for the quick response, Tamas. I tried what you said and windbg
> waits and the debugee hangs when I click the break button in windbg, but
> I don't see any output in windbg. This means that there is SOME
> communication over the serial port that causes the debugee to hang when
> I click break. Could it be a debugger protocol issue? I also tried the
> guidance here by running the crlf program:
> https://www.qubes-os.org/doc/windows-debugging/
> But windbg waits and the debugee hangs in a similar manner.
>
> What versions of Windows and Xen are you using?
>
> On Tue, Jun 22, 2021 at 12:10 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
>>
>> I have managed to get windbg working with a serial bridge between two
>> Win10 VMs using the following script:
>> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh
>> The debugee has to enable a couple options so that windbg can attach:
>> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd
>>
>> Tamas
>>
>> On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com> wrote:
>> >
>> > Hello,
>> > Has anyone gotten a Windows10 (Version 1709 or later) kernel debugger
>> > attached when running the Windows10 debugger VM and the Windows10
>> > debugee VM on the Xen 4.13.0 hypervisor? I am getting a "NIC hardware
>> > initialization failed" error. I tried the suggestions in the
>> > discussion here (https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
>> > -cpu Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on, \
>> > skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
>> >
>> > note: I had to remove the following 2 arguments due to errors from QEMU:
>> > pschange-mc-no=on
>> > hv_vpindex
>> >
>> > Here was the error:
>> > C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
>> >
>> > Network debugging is supported on the following NICs:
>> > busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
>> > The Microsoft hypervisor running this VM does not support KDNET.
>> > Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or later.
>> >
>> > KDNET initialization failed.  Status = 0xC0000182.
>> > NIC hardware initialization failed.
>> >
>> > I am using an Intel e1000 NIC emulated through QEMU because it's
>> > supposedly a supported NIC for Windows kernel NET debugging.
>> >
>> > Thanks in Advance!
>> >
>> > --
>> > My Blog: http://www.neilscomputerblog.blogspot.com/
>> > Twitter: @neilsikka
>
>
>
> --
> My Blog: http://www.neilscomputerblog.blogspot.com/
> Twitter: @neilsikka


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 16:55:40 2021
From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: DRI Development <dri-devel@lists.freedesktop.org>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Daniel Vetter <daniel.vetter@ffwll.ch>,
	David Lechner <david@lechnology.com>,
	Noralf Trønnes <noralf@tronnes.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Joel Stanley <joel@jms.id.au>,
	Andrew Jeffery <andrew@aj.id.au>,
	Emma Anholt <emma@anholt.net>,
	Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Sam Ravnborg <sam@ravnborg.org>,
	Alex Deucher <alexander.deucher@amd.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 13/15] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is the default
Date: Tue, 22 Jun 2021 18:55:09 +0200
Message-Id: <20210622165511.3169559-14-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.32.0.rc2
In-Reply-To: <20210622165511.3169559-1-daniel.vetter@ffwll.ch>
References: <20210622165511.3169559-1-daniel.vetter@ffwll.ch>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Go through all the drivers and delete the explicit
drm_gem_simple_display_pipe_prepare_fb hook, since it is now the default.

Acked-by: David Lechner <david@lechnology.com>
Acked-by: Noralf Trønnes <noralf@tronnes.org>
Acked-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Andrew Jeffery <andrew@aj.id.au>
Cc: "Noralf Trønnes" <noralf@tronnes.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Emma Anholt <emma@anholt.net>
Cc: David Lechner <david@lechnology.com>
Cc: Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: linux-aspeed@lists.ozlabs.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
---
 drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c | 1 -
 drivers/gpu/drm/gud/gud_drv.c            | 1 -
 drivers/gpu/drm/mcde/mcde_display.c      | 1 -
 drivers/gpu/drm/pl111/pl111_display.c    | 1 -
 drivers/gpu/drm/tiny/hx8357d.c           | 1 -
 drivers/gpu/drm/tiny/ili9225.c           | 1 -
 drivers/gpu/drm/tiny/ili9341.c           | 1 -
 drivers/gpu/drm/tiny/ili9486.c           | 1 -
 drivers/gpu/drm/tiny/mi0283qt.c          | 1 -
 drivers/gpu/drm/tiny/repaper.c           | 1 -
 drivers/gpu/drm/tiny/st7586.c            | 1 -
 drivers/gpu/drm/tiny/st7735r.c           | 1 -
 drivers/gpu/drm/tve200/tve200_display.c  | 1 -
 drivers/gpu/drm/xen/xen_drm_front_kms.c  | 1 -
 14 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
index 098f96d4d50d..827e62c1daba 100644
--- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
+++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
@@ -220,7 +220,6 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank	= aspeed_gfx_enable_vblank,
 	.disable_vblank	= aspeed_gfx_disable_vblank,
 };
diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
index e8b672dc9832..1925df9c0fb7 100644
--- a/drivers/gpu/drm/gud/gud_drv.c
+++ b/drivers/gpu/drm/gud/gud_drv.c
@@ -364,7 +364,6 @@ static void gud_debugfs_init(struct drm_minor *minor)
 static const struct drm_simple_display_pipe_funcs gud_pipe_funcs = {
 	.check      = gud_pipe_check,
 	.update	    = gud_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_mode_config_funcs gud_mode_config_funcs = {
diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
index 4ddc55d58f38..ce12a36e2db4 100644
--- a/drivers/gpu/drm/mcde/mcde_display.c
+++ b/drivers/gpu/drm/mcde/mcde_display.c
@@ -1479,7 +1479,6 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
 	.update = mcde_display_update,
 	.enable_vblank = mcde_display_enable_vblank,
 	.disable_vblank = mcde_display_disable_vblank,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 int mcde_display_init(struct drm_device *drm)
diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
index 6fd7f13f1aca..b5a8859739a2 100644
--- a/drivers/gpu/drm/pl111/pl111_display.c
+++ b/drivers/gpu/drm/pl111/pl111_display.c
@@ -440,7 +440,6 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
 	.enable = pl111_display_enable,
 	.disable = pl111_display_disable,
 	.update = pl111_display_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
index da5df93450de..9b33c05732aa 100644
--- a/drivers/gpu/drm/tiny/hx8357d.c
+++ b/drivers/gpu/drm/tiny/hx8357d.c
@@ -184,7 +184,6 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx350hv15_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
index 69265d8a3beb..976d3209f164 100644
--- a/drivers/gpu/drm/tiny/ili9225.c
+++ b/drivers/gpu/drm/tiny/ili9225.c
@@ -328,7 +328,6 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
 	.enable		= ili9225_pipe_enable,
 	.disable	= ili9225_pipe_disable,
 	.update		= ili9225_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode ili9225_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
index ad9ce7b4f76f..37e0c33399c8 100644
--- a/drivers/gpu/drm/tiny/ili9341.c
+++ b/drivers/gpu/drm/tiny/ili9341.c
@@ -140,7 +140,6 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx240qv29_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index 75aa1476c66c..e9a63f4b2993 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -153,7 +153,6 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
 	.enable = waveshare_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode waveshare_mode = {
diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
index 82fd1ad3413f..023de49e7a8e 100644
--- a/drivers/gpu/drm/tiny/mi0283qt.c
+++ b/drivers/gpu/drm/tiny/mi0283qt.c
@@ -144,7 +144,6 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
 	.enable = mi0283qt_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode mi0283qt_mode = {
diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
index 2cee07a2e00b..007d9d59f01c 100644
--- a/drivers/gpu/drm/tiny/repaper.c
+++ b/drivers/gpu/drm/tiny/repaper.c
@@ -861,7 +861,6 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
 	.enable = repaper_pipe_enable,
 	.disable = repaper_pipe_disable,
 	.update = repaper_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int repaper_connector_get_modes(struct drm_connector *connector)
diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
index 05db980cc047..1be55bed609a 100644
--- a/drivers/gpu/drm/tiny/st7586.c
+++ b/drivers/gpu/drm/tiny/st7586.c
@@ -268,7 +268,6 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
 	.enable		= st7586_pipe_enable,
 	.disable	= st7586_pipe_disable,
 	.update		= st7586_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode st7586_mode = {
diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
index ec9dc817a2cc..122320db5d38 100644
--- a/drivers/gpu/drm/tiny/st7735r.c
+++ b/drivers/gpu/drm/tiny/st7735r.c
@@ -136,7 +136,6 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
 	.enable		= st7735r_pipe_enable,
 	.disable	= mipi_dbi_pipe_disable,
 	.update		= mipi_dbi_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct st7735r_cfg jd_t18003_t01_cfg = {
diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
index 50e1fb71869f..17b8c8dd169d 100644
--- a/drivers/gpu/drm/tve200/tve200_display.c
+++ b/drivers/gpu/drm/tve200/tve200_display.c
@@ -316,7 +316,6 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
 	.enable = tve200_display_enable,
 	.disable = tve200_display_disable,
 	.update = tve200_display_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank = tve200_display_enable_vblank,
 	.disable_vblank = tve200_display_disable_vblank,
 };
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index 371202ebe900..cfda74490765 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -302,7 +302,6 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
 	.mode_valid = display_mode_valid,
 	.enable = display_enable,
 	.disable = display_disable,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.check = display_check,
 	.update = display_update,
 };
-- 
2.32.0.rc2



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 17:32:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 17:32:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146005.268571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvkGB-0005Ng-4k; Tue, 22 Jun 2021 17:32:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146005.268571; Tue, 22 Jun 2021 17:32:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvkGB-0005NZ-1P; Tue, 22 Jun 2021 17:32:27 +0000
Received: by outflank-mailman (input) for mailman id 146005;
 Tue, 22 Jun 2021 17:32:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cRfs=LQ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lvkGA-0005NT-0k
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 17:32:26 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b55deedd-7656-4874-a7a0-a3530016ff2a;
 Tue, 22 Jun 2021 17:32:24 +0000 (UTC)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15MHHQkU010365; Tue, 22 Jun 2021 17:32:04 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 39as86ujdq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 22 Jun 2021 17:32:04 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 15MHFqTF173442;
 Tue, 22 Jun 2021 17:32:03 GMT
Received: from nam02-dm3-obe.outbound.protection.outlook.com
 (mail-dm3nam07lp2042.outbound.protection.outlook.com [104.47.56.42])
 by userp3020.oracle.com with ESMTP id 399tbt3d1n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 22 Jun 2021 17:32:02 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB3949.namprd10.prod.outlook.com (2603:10b6:208:186::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Tue, 22 Jun
 2021 17:32:00 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4264.018; Tue, 22 Jun 2021
 17:32:00 +0000
Received: from [10.74.101.176] (160.34.89.176) by
 SA9PR13CA0177.namprd13.prod.outlook.com (2603:10b6:806:28::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.9 via Frontend Transport; Tue, 22 Jun 2021 17:31:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b55deedd-7656-4874-a7a0-a3530016ff2a
Authentication-Results: infradead.org; dkim=none (message not signed)
 header.d=none;infradead.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH V7 01/18] perf/core: Use static_call to optimize
 perf_guest_info_callbacks
To: Zhu Lingshan <lingshan.zhu@intel.com>, lingshan.zhu@live.com
Cc: Like Xu <like.xu@linux.intel.com>, Will Deacon <will@kernel.org>,
        Marc Zyngier <maz@kernel.org>, Guo Ren <guoren@kernel.org>,
        Nick Hu <nickhu@andestech.com>,
        Paul Walmsley <paul.walmsley@sifive.com>,
        linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
        linux-csky@vger.kernel.org, linux-riscv@lists.infradead.org,
        xen-devel@lists.xenproject.org, Peter Zijlstra <peterz@infradead.org>
References: <20210622093823.8215-1-lingshan.zhu@intel.com>
 <20210622093823.8215-2-lingshan.zhu@intel.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <92fdf981-68ef-92a2-b1ae-0c5f347ae460@oracle.com>
Date: Tue, 22 Jun 2021 13:31:54 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <20210622093823.8215-2-lingshan.zhu@intel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [160.34.89.176]
X-ClientProxiedBy: SA9PR13CA0177.namprd13.prod.outlook.com
 (2603:10b6:806:28::32) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1ec7e3d1-b17d-488f-fd04-08d935a3a47f
X-MS-TrafficTypeDiagnostic: MN2PR10MB3949:
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ec7e3d1-b17d-488f-fd04-08d935a3a47f
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 17:32:00.7111
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3YVs+b2Nc31rMRqE2PKtH/74M4K4k0fKBthZ8m1eG3GUPyhbDbT6WoR7GkrDJjLvpoKrLDJpWNDWzXlbsmaVPIPDEkyCDRbZc7vFQYTlpow=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB3949
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10023 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 phishscore=0 mlxscore=0
 spamscore=0 mlxlogscore=999 bulkscore=0 suspectscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106220107
X-Proofpoint-ORIG-GUID: UXmCLfIhSmGpnCOpGGDyWcShhxKwOCv9
X-Proofpoint-GUID: UXmCLfIhSmGpnCOpGGDyWcShhxKwOCv9



On 6/22/21 5:38 AM, Zhu Lingshan wrote:

> -static int xen_is_user_mode(void)
> -{
> -	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
> +	state |= PERF_GUEST_ACTIVE;
>  
> -	if (!xenpmu_data) {
> -		pr_warn_once("%s: pmudata not initialized\n", __func__);
> -		return 0;
> +	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV) {
> +		if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER)
> +			state |= PERF_GUEST_USER;
> +	} else {
> +		if (!!(xenpmu_data->pmu.r.regs.cpl & 3))
> +			state |= PERF_GUEST_USER;



Please drop the "!!"; it's not needed here. And use "else if".


With that, for Xen bits:

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

-boris



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:08:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:08:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146011.268581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvkoP-0000E7-0G; Tue, 22 Jun 2021 18:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146011.268581; Tue, 22 Jun 2021 18:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvkoO-0000E0-TY; Tue, 22 Jun 2021 18:07:48 +0000
Received: by outflank-mailman (input) for mailman id 146011;
 Tue, 22 Jun 2021 18:07:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wlN=LQ=gmail.com=neilsikka@srs-us1.protection.inumbo.net>)
 id 1lvkoM-0000DV-Vh
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:07:47 +0000
Received: from mail-ed1-x52a.google.com (unknown [2a00:1450:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2012609d-82fa-4712-9328-f34c21e8bed9;
 Tue, 22 Jun 2021 18:07:45 +0000 (UTC)
Received: by mail-ed1-x52a.google.com with SMTP id r7so24874377edv.12
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 11:07:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2012609d-82fa-4712-9328-f34c21e8bed9
X-Gm-Message-State: AOAM533+VjSpLALZfH2ILU4LeGsEvxPD6E8tZgtXsBN/cxPIH30+t9Wm
	AhEyoNBIFGuPrjqJCiPEcPf0k20FU3gAz8ElS8A=
X-Google-Smtp-Source: ABdhPJwPWFij88Efr88aaS3I7q4Qxf18Mf0HwkuRPYF+1jfK5QJQ+6fj2Llua6NVEH7n/rNsTC9VKs3DCYXc7JQIpw4=
X-Received: by 2002:a05:6402:54c:: with SMTP id i12mr6881031edx.64.1624385264789;
 Tue, 22 Jun 2021 11:07:44 -0700 (PDT)
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
 <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
 <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com> <CABfawh=W92ioejsZ-zu+WVofw_jfxVLteVieC2Ysfxd3Wrs+Og@mail.gmail.com>
In-Reply-To: <CABfawh=W92ioejsZ-zu+WVofw_jfxVLteVieC2Ysfxd3Wrs+Og@mail.gmail.com>
From: Neil Sikka <neilsikka@gmail.com>
Date: Tue, 22 Jun 2021 14:07:33 -0400
Message-ID: <CAHPMNWcfz+9zUv7gfwu5V6zPVBHiFc-EZDJ70-4DWHjOtyBOHg@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000afae1705c55eaad2"

--000000000000afae1705c55eaad2
Content-Type: text/plain; charset="UTF-8"

I tried that, but it seems like I'm getting an interrupt storm on the
debugger VM (CPU spends all its time in the kernel) when I try to attach
the debugger. This observation furthers my suspicion that there is
communication, but there is something wrong with the protocol...

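(For anyone reproducing the setup from the quoted thread: a minimal xl guest config fragment for each of the two VMs might look like the sketch below. Only the serial = 'pty' line comes from the quoted advice; the name, type, and memory values are illustrative assumptions.)

```
# Hypothetical xl config sketch, e.g. win10-debugger.cfg
name   = "win10-debugger"
type   = "hvm"
memory = 4096
serial = "pty"   # expose the emulated COM1 as a host pty for the serial bridge
```

With serial = 'pty' set, the host-side pty path can then be fed to the serial-bridge script after each reboot, since the domain ID (and pty) changes.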
On Tue, Jun 22, 2021 at 12:43 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com>
wrote:

> I used Xen 4.15 and a pretty new version of Windows 10. It is a bit
> finicky: you have to run the debug commands on the debugee and then
> reboot. When the VM is rebooting the domain ID changes so you have to
> start the serial bridge then. Windbg will attach afterwards. Just make
> sure both VMs have serial='pty' set in their config file.
>
> Tamas
>
> On Tue, Jun 22, 2021 at 12:33 PM Neil Sikka <neilsikka@gmail.com> wrote:
> >
> > Thanks for the quick response, Tamas. I tried what you said and windbg
> waits and the debugee hangs when I click the break button in windbg, but I
> don't see any output in windbg. This means that there is SOME communication
> over the serial port that causes the debugee to hang when I click break.
> Could it be a debugger protocol issue? I also tried the guidance here by
> running the crlf program:
> > https://www.qubes-os.org/doc/windows-debugging/
> > But windbg waits and the debugee hangs in a similar manner.
> >
> > What versions of Windows and Xen are you using?
> >
> > On Tue, Jun 22, 2021 at 12:10 PM Tamas K Lengyel <
> tamas.k.lengyel@gmail.com> wrote:
> >>
> >> I have managed to get windbg working with a serial bridge between two
> >> Win10 VMs using the following script:
> >>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh
> .
> >> The debugee has to enable a couple options so that windbg can attach:
> >>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd
> .
> >>
> >> Tamas
> >>
> >> On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com>
> wrote:
> >> >
> >> > Hello,
> >> > Has anyone gotten a Windows10 (Version 1709 or later) kernel debugger
> attached when running the Windows10 debugger VM and the Windows10 debugee
> VM on Xen 4.13.0 hypervisor? I am getting a "NIC hardware initialization
> failed" error. I tried the suggestions in the discussion here (
> https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
> >> > -cpu
> Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on,
> \
> >> >
> skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
> >> >
> >> > note: I had to remove the following 2 arguments due to errors from
> QEMU:
> >> > pschange-mc-no=on
> >> > hv_vpindex
> >> >
> >> > Here was the error:
> >> > C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
> >> >
> >> > Network debugging is supported on the following NICs:
> >> > busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
> >> > The Microsoft hypervisor running this VM does not support KDNET.
> >> > Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or
> later.
> >> >
> >> > KDNET initialization failed.  Status = 0xC0000182.
> >> > NIC hardware initialization failed.
> >> >
> >> > I am using an Intel e1000 NIC emulated through QEMU because it's
> supposedly a supported NIC for Windows kernel NET debugging.
> >> >
> >> > Thanks in Advance!
> >> >
> >> > --
> >> > My Blog: http://www.neilscomputerblog.blogspot.com/
> >> > Twitter: @neilsikka
> >
> >
> >
> > --
> > My Blog: http://www.neilscomputerblog.blogspot.com/
> > Twitter: @neilsikka
>


-- 
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

--000000000000afae1705c55eaad2--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:12:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:12:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146017.268593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvksw-0001bL-J5; Tue, 22 Jun 2021 18:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146017.268593; Tue, 22 Jun 2021 18:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvksw-0001bE-Fk; Tue, 22 Jun 2021 18:12:30 +0000
Received: by outflank-mailman (input) for mailman id 146017;
 Tue, 22 Jun 2021 18:12:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p8IA=LQ=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1lvksv-0001b8-8F
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:12:29 +0000
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0db5ea3e-d579-44e5-b77e-958270f9ae8e;
 Tue, 22 Jun 2021 18:12:28 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id a13so2939013wrf.10
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 11:12:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0db5ea3e-d579-44e5-b77e-958270f9ae8e
X-Gm-Message-State: AOAM531ccLNx3CeSGL6wwJndSOnVtcLn3tKeEAe/wVnlM2aOtoz3+rtN
	9VSaHHWv4rsPLdemO5WXSp2eqZeAzqhWtBQdtfE=
X-Google-Smtp-Source: ABdhPJxwaFkVpPfM+J+Qo41arge1t+mWtoN98JdQq/JPxFHbIDTUshywGFxyxpA+Ca0yygYbCTiGih8PoRyKh4YpAo0=
X-Received: by 2002:a5d:50ce:: with SMTP id f14mr3539992wrt.259.1624385547416;
 Tue, 22 Jun 2021 11:12:27 -0700 (PDT)
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
 <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
 <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com>
 <CABfawh=W92ioejsZ-zu+WVofw_jfxVLteVieC2Ysfxd3Wrs+Og@mail.gmail.com> <CAHPMNWcfz+9zUv7gfwu5V6zPVBHiFc-EZDJ70-4DWHjOtyBOHg@mail.gmail.com>
In-Reply-To: <CAHPMNWcfz+9zUv7gfwu5V6zPVBHiFc-EZDJ70-4DWHjOtyBOHg@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 22 Jun 2021 14:11:51 -0400
Message-ID: <CABfawhkMb8Pnr6+NxsoaXKCyaBH8Tax8_1ABHjyGGp5j9hOkVA@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Neil Sikka <neilsikka@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Make sure windbg is already waiting for the connection from the
debuggee by the time Windows starts booting. If you try to attach
windbg later, it won't work. It worked for me, but obviously YMMV.

Tamas
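
For reference, a minimal sketch of what such a serial bridge between two HVM guests can look like (an illustration only, not the contents of the linked serial-bridge.sh; the VM names are placeholders, and it assumes both guest configs contain serial='pty'):

```shell
#!/bin/sh
# Hypothetical sketch: relay bytes between the emulated serial ports of
# two Xen HVM guests. Domain IDs change on every reboot, so this must be
# re-run after the debuggee reboots.
DEBUGGER_DOMID=$(xl domid windbg-vm)      # placeholder VM names
DEBUGGEE_DOMID=$(xl domid debuggee-vm)

# QEMU publishes the pty it allocated for serial='pty' in xenstore
PTY1=$(xenstore-read "/local/domain/${DEBUGGER_DOMID}/serial/0/tty")
PTY2=$(xenstore-read "/local/domain/${DEBUGGEE_DOMID}/serial/0/tty")

# Bidirectional relay between the two ptys
exec socat "${PTY1},raw,echo=0" "${PTY2},raw,echo=0"
```

On the debuggee side, enabling the serial kernel debugger typically involves something like `bcdedit /debug on` and `bcdedit /dbgsettings serial debugport:1 baudrate:115200` followed by a reboot; the linked debug.cmd is authoritative for the exact options used in this setup.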

On Tue, Jun 22, 2021 at 2:07 PM Neil Sikka <neilsikka@gmail.com> wrote:
>
> I tried that, but it seems like I'm getting an interrupt storm on the debugger VM (CPU spends all its time in the kernel) when I try to attach the debugger. This observation furthers my suspicion that there is communication, but there is something wrong with the protocol...
>
> On Tue, Jun 22, 2021 at 12:43 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
>>
>> I used Xen 4.15 and a pretty new version of Windows 10. It is a bit
>> finicky: you have to run the debug commands on the debuggee and then
>> reboot. When the VM is rebooting the domain ID changes, so you have to
>> start the serial bridge then. Windbg will attach afterwards. Just make
>> sure both VMs have serial='pty' set in their config file.
>>
>> Tamas
>>
>> On Tue, Jun 22, 2021 at 12:33 PM Neil Sikka <neilsikka@gmail.com> wrote:
>> >
>> > Thanks for the quick response, Tamas. I tried what you said and windbg waits and the debuggee hangs when I click the break button in windbg, but I don't see any output in windbg. This means that there is SOME communication over the serial port that causes the debuggee to hang when I click break. Could it be a debugger protocol issue? I also tried the guidance here by running the crlf program:
>> > https://www.qubes-os.org/doc/windows-debugging/
>> > But windbg waits and the debugee hangs in a similar manner.
>> >
>> > What versions of Windows and Xen are you using?
>> >
>> > On Tue, Jun 22, 2021 at 12:10 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
>> >>
>> >> I have managed to get windbg working with a serial bridge between two
>> >> Win10 VMs using the following script:
>> >> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh.
>> >> The debugee has to enable a couple options so that windbg can attach:
>> >> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd.
>> >>
>> >> Tamas
>> >>
>> >> On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com> wrote:
>> >> >
>> >> > Hello,
>> >> > Has anyone gotten a Windows10 (Version 1709 or later) kernel debugger attached when running the Windows10 debugger VM and the Windows10 debuggee VM on Xen 4.13.0 hypervisor? I am getting a "NIC hardware initialization failed" error. I tried the suggestions in the discussion here (https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
>> >> > -cpu Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on, \
>> >> > skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
>> >> >
>> >> > note: I had to remove the following 2 arguments due to errors from QEMU:
>> >> > pschange-mc-no=on
>> >> > hv_vpindex
>> >> >
>> >> > Here was the error:
>> >> > C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
>> >> >
>> >> > Network debugging is supported on the following NICs:
>> >> > busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged in.
>> >> > The Microsoft hypervisor running this VM does not support KDNET.
>> >> > Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or later.
>> >> >
>> >> > KDNET initialization failed.  Status = 0xC0000182.
>> >> > NIC hardware initialization failed.
>> >> >
>> >> > I am using an Intel e1000 NIC emulated through QEMU because it's supposedly a supported NIC for Windows kernel NET debugging.
>> >> >
>> >> > Thanks in Advance!
>> >> >
>> >> > --
>> >> > My Blog: http://www.neilscomputerblog.blogspot.com/
>> >> > Twitter: @neilsikka
>> >
>> >
>> >
>> > --
>> > My Blog: http://www.neilscomputerblog.blogspot.com/
>> > Twitter: @neilsikka
>
>
>
> --
> My Blog: http://www.neilscomputerblog.blogspot.com/
> Twitter: @neilsikka


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:21:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146024.268615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1m-0003GE-Np; Tue, 22 Jun 2021 18:21:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146024.268615; Tue, 22 Jun 2021 18:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1m-0003G5-KH; Tue, 22 Jun 2021 18:21:38 +0000
Received: by outflank-mailman (input) for mailman id 146024;
 Tue, 22 Jun 2021 18:21:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvl1l-0002zr-9j
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:21:37 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb49b165-8fa2-4f2b-908d-da54465b2493;
 Tue, 22 Jun 2021 18:21:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb49b165-8fa2-4f2b-908d-da54465b2493
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386095;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=yumBJZ7SzzoE1gqVWcuD4EWuJPstmL3Sq9uYgjD4ZW0=;
  b=PrdsVIs0Pqz13jLI7x/bs8wFneDxqGsZdqOZAquBOqc8uWt23wropwnw
   9QaIu0JlnU/y7MKBkglzCw8tDWO5LEe0FI42C8Kw6V97xeGtzVyjEslfb
   NbKqWTgSeXz93rgmjaqNkiGSsBzZ1OxiuWw958AB9QeHa8TpPgKLLPAjs
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: A5DyEGfyng1UQ//+6ENZ34sgDjCcEJOcZXPBddorI+5N56alJNch/vW0wXM6fxFNEtV+gwXqzY
 Yvm1cKuzk3V3xiWE4b9ImcZaKLw4SMY/A75jI0hN3jcRrKoj/kJd703huUnqOP33Uv8OSZHfNo
 twBuLCOpZNe2Vz9ptccNJMnioHVMdhBKrnqw3sXzleEHg+zCVPhrcZqhmuWBvw7Vv3kgyMVMSQ
 CXJiMvyMepZwdFXmJCxkeC8kr2T45dE/zYCWoDP3Zexo9AFHpOJWTL9zwypoW3If/RpwccYEuq
 33E=
X-SBRS: 5.1
X-MesageID: 46716668
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:ZA9SuKGJqbvbHc2NpLqE0seALOsnbusQ8zAXP0AYc31om6uj5r
 iTdZUgpGbJYVkqKRIdcLy7V5VoBEmskaKdgrNhW4tKPjOW2ldARbsKheCJrlHd8m/Fh4lgPM
 9bAtND4bbLbWSS4/yV3ODBKadE/OW6
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="46716668"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 1/4] tools/tests: Drop obsolete mce-test infrastructure
Date: Tue, 22 Jun 2021 19:21:21 +0100
Message-ID: <20210622182124.11571-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210622182124.11571-1-andrew.cooper3@citrix.com>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

mce-test has a test suite, but it depends on xend, needs to run in-tree,
requires manual setup of at least one guest, and needs parameters passed
manually into each case.  Drop the test infrastructure.

Move the one useful remaining item, xen-mceinj, into misc/, fixing some minor
style issues as it goes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 .gitignore                                         |   1 -
 tools/misc/.gitignore                              |   1 +
 tools/misc/Makefile                                |   4 +
 tools/{tests/mce-test/tools => misc}/xen-mceinj.c  |  32 +--
 tools/tests/Makefile                               |   1 -
 tools/tests/mce-test/Makefile                      |  12 -
 tools/tests/mce-test/README                        |  75 ------
 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_llc/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_llc/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_mem/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_mem/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh  |  72 ------
 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh |  92 --------
 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh   |  68 ------
 tools/tests/mce-test/config/setup.conf             |  24 --
 tools/tests/mce-test/lib/xen-mceinj-tool.sh        | 260 ---------------------
 tools/tests/mce-test/tools/Makefile                |  24 --
 tools/tests/mce-test/tools/README                  |  24 --
 20 files changed, 24 insertions(+), 1138 deletions(-)
 rename tools/{tests/mce-test/tools => misc}/xen-mceinj.c (97%)
 delete mode 100644 tools/tests/mce-test/Makefile
 delete mode 100644 tools/tests/mce-test/README
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/config/setup.conf
 delete mode 100644 tools/tests/mce-test/lib/xen-mceinj-tool.sh
 delete mode 100644 tools/tests/mce-test/tools/Makefile
 delete mode 100644 tools/tests/mce-test/tools/README

diff --git a/.gitignore b/.gitignore
index 38a085e398..d4b90303b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -276,7 +276,6 @@ tools/tests/x86_emulator/test_x86_emulator
 tools/tests/x86_emulator/x86_emulate
 tools/tests/x86_emulator/xop*.[ch]
 tools/tests/xenstore/xs-test
-tools/tests/mce-test/tools/xen-mceinj
 tools/tests/vpci/list.h
 tools/tests/vpci/vpci.[hc]
 tools/tests/vpci/test_vpci
diff --git a/tools/misc/.gitignore b/tools/misc/.gitignore
index ce6f937d0c..73ce95e6d7 100644
--- a/tools/misc/.gitignore
+++ b/tools/misc/.gitignore
@@ -1,4 +1,5 @@
 xen-access
+xen-mceinj
 xen-memshare
 xen-ucode
 xen-vmtrace
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 2b683819d4..1a07191d83 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -22,6 +22,7 @@ INSTALL_SBIN-$(CONFIG_MIGRATE) += xen-hptool
 INSTALL_SBIN-$(CONFIG_X86)     += xen-hvmcrash
 INSTALL_SBIN-$(CONFIG_X86)     += xen-hvmctx
 INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
+INSTALL_SBIN-$(CONFIG_X86)     += xen-mceinj
 INSTALL_SBIN-$(CONFIG_X86)     += xen-memshare
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
@@ -97,6 +98,9 @@ xen-memshare: xen-memshare.o
 xen-vmtrace: xen-vmtrace.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
 
+xen-mceinj: xen-mceinj.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
+
 xenperf: xenperf.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
diff --git a/tools/tests/mce-test/tools/xen-mceinj.c b/tools/misc/xen-mceinj.c
similarity index 97%
rename from tools/tests/mce-test/tools/xen-mceinj.c
rename to tools/misc/xen-mceinj.c
index 1187d01e5f..df55eefbac 100644
--- a/tools/tests/mce-test/tools/xen-mceinj.c
+++ b/tools/misc/xen-mceinj.c
@@ -137,7 +137,7 @@ static void err(xc_interface *xc_handle, const char *fmt, ...)
     va_list args;
 
     va_start(args, fmt);
-    if (vasprintf(&buf, fmt, args) < 0)
+    if ( vasprintf(&buf, fmt, args) < 0 )
         abort();
     perror(buf);
     va_end(args);
@@ -173,7 +173,7 @@ static unsigned int mca_cpuinfo(xc_interface *xc_handle)
     mc.cmd = XEN_MC_physcpuinfo;
     mc.interface_version = XEN_MCA_INTERFACE_VERSION;
 
-    if (!xc_mca_op(xc_handle, &mc))
+    if ( !xc_mca_op(xc_handle, &mc) )
         return mc.u.mc_physcpuinfo.ncpus;
     else
         return 0;
@@ -187,9 +187,9 @@ static int inject_cmci(xc_interface *xc_handle, unsigned int cpu_nr)
     memset(&mc, 0, sizeof(struct xen_mc));
 
     nr_cpus = mca_cpuinfo(xc_handle);
-    if (!nr_cpus)
+    if ( !nr_cpus )
         err(xc_handle, "Failed to get mca_cpuinfo");
-    if (cpu_nr >= nr_cpus)
+    if ( cpu_nr >= nr_cpus )
         err(xc_handle, "-c %u is larger than %u", cpu_nr, nr_cpus - 1);
 
     mc.cmd = XEN_MC_inject_v2;
@@ -284,7 +284,7 @@ static int add_msr_intpose(xc_interface *xc_handle,
         flush_msr_inj(xc_handle);
         init_msr_inj();
     }
-    count= msr_inj.mcinj_count;
+    count = msr_inj.mcinj_count;
 
     if ( !count )
     {
@@ -422,7 +422,7 @@ static long xs_get_dom_mem(int domid)
     if (!memstr || !plen)
         return -1;
 
-    mem = atoll(memstr)*1024;
+    mem = atoll(memstr) * 1024;
     free(memstr);
 
     return mem;
@@ -474,17 +474,20 @@ int main(int argc, char *argv[])
     cpu_nr = 0;
 
     init_msr_inj();
-    xc_handle = xc_interface_open(0, 0, 0);
-    if ( !xc_handle ) {
+    xc_handle = xc_interface_open(NULL, NULL, 0);
+    if ( !xc_handle )
+    {
         Lprintf("Failed to get xc interface");
         exit(EXIT_FAILURE);
     }
 
-    while ( 1 ) {
+    while ( 1 )
+    {
         c = getopt_long(argc, argv, "c:Dd:t:hp:l", opts, &opt_index);
         if ( c == -1 )
             break;
-        switch ( c ) {
+        switch ( c )
+        {
         case 'D':
             dump=1;
             break;
@@ -516,7 +519,8 @@ int main(int argc, char *argv[])
         }
     }
 
-    if ( domid != DOMID_XEN ) {
+    if ( domid != DOMID_XEN )
+    {
         max_gpa = xs_get_dom_mem(domid);
         Lprintf("get domain %d max gpa is: 0x%lx", domid, max_gpa);
         if ( gaddr >= max_gpa )
@@ -524,7 +528,8 @@ int main(int argc, char *argv[])
     }
     Lprintf("get gaddr of error inject is: 0x%lx", gaddr);
 
-    if ( dump ) {
+    if ( dump )
+    {
         if ( domid == DOMID_XEN )
             Lprintf("Xen: gaddr=0x%lx", gaddr);
         else
@@ -532,7 +537,8 @@ int main(int argc, char *argv[])
         goto out;
     }
 
-    if ( type < 0 || type >= MCE_TABLE_SIZE ) {
+    if ( type < 0 || type >= MCE_TABLE_SIZE )
+    {
         err(xc_handle, "Unsupported error type");
         goto out;
     }
diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 25531a984a..33e32730c4 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 SUBDIRS-y :=
 SUBDIRS-y += resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
-SUBDIRS-$(CONFIG_X86) += mce-test
 SUBDIRS-$(CONFIG_X86) += tsx
 ifneq ($(clang),y)
 SUBDIRS-$(CONFIG_X86) += x86_emulator
diff --git a/tools/tests/mce-test/Makefile b/tools/tests/mce-test/Makefile
deleted file mode 100644
index 1395df38ac..0000000000
--- a/tools/tests/mce-test/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-.PHONY: all clean distclean
-
-all: 
-	$(MAKE) -C tools
-
-clean:
-	$(MAKE) -C tools clean
-
-distclean:
-	$(MAKE) -C tools distclean
-
-install uninstall:
diff --git a/tools/tests/mce-test/README b/tools/tests/mce-test/README
deleted file mode 100644
index 65e6d1b045..0000000000
--- a/tools/tests/mce-test/README
+++ /dev/null
@@ -1,75 +0,0 @@
-Xen MCE test suite
----------------
-
-The Xen MCE test suite is a collection of tools and test scripts for
-testing the Xen MCE processing features. The goal is to cover
-most Xen MCE processing code paths and features with automation tests.
-
-
-In the Package
---------------
-
-Here is a short description of what is included in the package
-
-README
-	This is document
-
-Makefile
-	For compile
-
-cases/*
-	Contains all test cases, which may be organized in sub-directories, 
-	the interface of test case is a shell script under cases/, such as:
-	   -- cases/srao_mem/dom0/cases.sh
-
-config/*
-	Contains test configuration files, which specifies the parameters 
-	for test cases, etc.
-
-lib/*
-	Contains some shell scripts, in which some common shell
-	functions and variable definitions are defined to be used by
-	test cases.
-
-tools/*
-	Tools used by MCE test suites, now only xen-mceinj tool.
-
-results/
-	When test is done, the test result will be placed in this
-	directory, test results	of various cases may be in corresponding 
-	directory. 
-	For example, files in
-	    results/srao_mem_dom0/result
-	is the result for test case cases/srao_mem/dom0/cases.sh, there will
-	be 3 result conditions: PASSED/FAILED/NORESULT.
-		results/<test_case>/testlog   #the test log during testing
-		results/<test_case>/mcelog    #mcelog output during testing
-		results/<test_case>/xenlog    #Xen log during testing
-		results/<test_case>/gklog     #VM guest kernel log during testing
-		results/<test_case>/guest_config   #config file used to create guest
-
-
-Test Instruction
-----------------
-
-1.	make sure you have a dom0 with mce support
-	CONFIG_X86_MCE=y
-	CONFIG_X86_MCE_INTEL=y
-	CONFIG_X86_MCE_AMD=y
-	CONFIG_X86_MCE_THRESHOLD=y
-	CONFIG_X86_MCE_INJECT=y
-
-2.	run system at xen and start xend. A installed guest image is
-	necessary when do guest MCE error injection.
-3.	compile tools that used to test. in mce-test, $make.
-	Note: make sure compile xen/tools before do this step
-4.	run test cases that you want.
-	e.g. $sh cases/srao_mem/dom0/cases.sh -d 0 -p 0x0200 -c 2 -t 1
-5.	get test result in results directory
-
-
-Notes
-----------------
-All test cases fake a error and inject this error in 0x180020, For Xen
-test cases(e.g. cases/srao_mem/xen/cases.sh), error happen on every page 
-may cause a Xen panic. 
diff --git a/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh b/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
deleted file mode 100644
index c540f64998..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_llc/guest/cases.sh b/tools/tests/mce-test/cases/srao_llc/guest/cases.sh
deleted file mode 100644
index 47a7ee4ab9..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/guest/cases.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    guest_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_llc/xen/cases.sh b/tools/tests/mce-test/cases/srao_llc/xen/cases.sh
deleted file mode 100644
index 1d8e02ff65..0000000000
--- a/tools/tests/mce-test/cases/srao_llc/xen/cases.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_llc_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh b/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
deleted file mode 100644
index 22d4a00960..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/guest/cases.sh b/tools/tests/mce-test/cases/srao_mem/guest/cases.sh
deleted file mode 100644
index 7ab4523096..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/guest/cases.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    guest_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/srao_mem/xen/cases.sh b/tools/tests/mce-test/cases/srao_mem/xen/cases.sh
deleted file mode 100644
index 7ae49a82ac..0000000000
--- a/tools/tests/mce-test/cases/srao_mem/xen/cases.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=srao_mem_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $MCE_SRAO_MEM -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    xen_verify || ret_val=1
-    mcelog_verify $MCE_SRAO_MEM || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh b/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
deleted file mode 100644
index 808f007708..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_dom0
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-d domainID\t: 0"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:d:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    d) domid=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-[ -z $domid ] && domid=0
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -d $domid -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh b/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
deleted file mode 100644
index 0ca4e2c961..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_guest
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the must have options==============="
-    echo -e "\t-i image\t: guest image"
-    echo -e "\t-m memory\t: set guest virtual memory"
-    echo "========                                              ========"
-    echo "================Below are the optional options================"
-    echo -e "\t-u vcpus\t: set guest virtual cpus number"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-[ $# -lt 1 ] && usage
-
-while getopts ":i:u:m:c:p:hl:" option
-do
-    case "$option" in
-    i) image=$OPTARG; offset=`kpartx -l $image | awk '{print $NF*512}'`;;
-    u) vcpus=$OPTARG;;
-    m) memory=$OPTARG;;
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    l) early_kill="0";;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-
-start_guest()
-{
-    create_hvm_guest $image -u $vcpus -m $memory
-    if [ $? -ne 0 ]; then
-        echo "  Create guest fail!"
-        return 1
-    fi
-    return 0
-}
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    start_guest || ret_val=1
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    des_guest
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh b/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
deleted file mode 100644
index c73a2f6c16..0000000000
--- a/tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
+++ /dev/null
@@ -1,68 +0,0 @@
-#!/bin/bash
-#
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-sd=$(dirname $0)
-export ROOT=`(cd $sd/../../../; pwd)`
-export this_case=ucna_llc_xen
-
-. $ROOT/lib/xen-mceinj-tool.sh
-
-usage()
-{
-    echo "Usage: ./cases.sh [-options] [arguments]"
-    echo "================Below are the optional options================"
-    echo -e "\t-c injcpu\t: which cpu to inject error"
-    echo -e "\t-p pageaddr\t: Guest Physical Address to inject error"
-    echo -e "\t\t\tBy default, the GPA is 0x180020"
-    echo -e "\t-h help"
-    exit 0
-}
-
-while getopts ":c:p:h" option
-do
-    case "$option" in
-    c) injcpu=$OPTARG;;
-    p) pageaddr=$OPTARG;;
-    h) usage;;
-    *) echo "invalid option!"; usage;;
-    esac
-done
-
-inject()
-{
-    mce_inject_trigger $CMCI_UCNA_LLC -u $injcpu -p $pageaddr 
-    if [ $? -eq 0 ]; then
-        show "  Passed: Successfully to fake and inject a MCE error"
-    else
-        show "  Failed: Fake error and inject fail !!"
-        return 1
-    fi
-    return 0
-}
-
-do_main()
-{
-    ret_val=0
-    clean_env
-    inject || ret_val=1
-    mcelog_verify $CMCI_UCNA_LLC || ret_val=1
-    gen_result $ret_val
-}
-
-do_main "$@"
diff --git a/tools/tests/mce-test/config/setup.conf b/tools/tests/mce-test/config/setup.conf
deleted file mode 100644
index 05f754dfd6..0000000000
--- a/tools/tests/mce-test/config/setup.conf
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-#
-# Software injection based test cases: test cases are triggered via
-# mce-inject tool.
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-export MCE_SRAO_MEM=0
-export MCE_SRAO_LLC=1
-export CMCI_UCNA_LLC=2
diff --git a/tools/tests/mce-test/lib/xen-mceinj-tool.sh b/tools/tests/mce-test/lib/xen-mceinj-tool.sh
deleted file mode 100644
index c0a3b293c5..0000000000
--- a/tools/tests/mce-test/lib/xen-mceinj-tool.sh
+++ /dev/null
@@ -1,260 +0,0 @@
-#!/bin/bash
-#
-# Software injection based test cases: test cases are triggered via
-# mce-inject tool.
-# Copyright (c) 2010, Intel Corporation
-# 
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License version
-# 2 as published by the Free Software Foundation.
-# 
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-# 
-# You should have received a copy of the GNU General Public License
-# along with this program; If not, see <http://www.gnu.org/licenses/>.
-#
-# Author: Xudong Hao <xudong.hao@intel.com>
-#
-
-. $ROOT/config/setup.conf
-
-#Guest Image Preparation
-hvm_image_prepare()
-{
-    local image=$1
-    local tmpdir=`mktemp -d`
-    local tmpfile=`mktemp`
-    local offset=`kpartx -l $image | awk '{print $NF*512}'`
-    mount -oloop,offset=$offset $image $tmpdir && echo "mount image to $tmpdir"
-    local g_grub=$tmpdir/boot/grub/grub.conf
-    if [ $? -ne 0 ]; then
-        show "  Mount image failed!"
-        return 1
-    fi
-
-    if ! grep FLAG_CONSOLE $g_grub; then
-        sed -e '/kernel/s/$/ console=ttyS0,115200,8n1 console=tty0/g' \
-            $g_grub > $tmpfile
-        mv -f $tmpfile $g_grub
-        rm -f $tmpfile
-        echo "
-#### FLAG_CONSOLE #### " >> $g_grub
-    fi
-    umount $tmpdir
-    rm -fr $tmpdir
-
-    return 0
-}
-
-create_hvm_guest()
-{
-    local image=$1
-    local originconfig="/etc/xen/xmexample.hvm"
-    local TF=`mktemp`
-    local case_dir=$ROOT/results/$this_case
-    local config=$case_dir/guest_config
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $logfile ] || touch $logfile
-    local File=`echo $image|sed "s/\//\\\\\\\\\\//g"`
-    local g_name="`basename $image`_`date +%H%M%S`"
-
-    hvm_image_prepare $image
-
-    while getopts ":u:m:" Option
-    do
-        case $Option in
-            u ) vcpus=$OPTARG;;
-            m ) memory=$OPTARG;;
-            e ) bridge_name=$OPTARG;;
-            * ) ;;
-        esac
-    done
-
-    cp $originconfig $config -f
-
-    if [ -z $image ]; then
-        show "Image file $image does not exist, Please input one valid file"
-        return 1
-    fi
-
-    sed -e "/^disk/s/file:.*,\(hda\)/file:${File},\1/" $config \
-          | sed -e "/^disk/s/phy:.*,\(hda\)/file:${File},\1/" >$TF
-    mv -f $TF $config
-
-    [ -z $memory ] || sed -i "/^memory/s/^.*$/memory = $memory/" $config
-    [ -z $vcpus ] || sed -i "1,/^#vcpus/s/^#vcpus.*$/vcpus=$vcpus/;1d" $config
-    sed -i "/^vif/s/vif/#vif/" $config
-    sed -i "/^name/s/^.*$/name = \"$g_name\"/" $config
-
-    string1=$(ls /dev/pts | sort)
-    xm cr $config
-    [ $? -eq 0 ] && domid=`xm list $g_name | tail -n1 | awk '{print $2}'`
-    if [ -z $domid ]; then
-        show "  Guest can not boot up"
-        return 1
-    fi
-    
-    sleep 10
-
-    string2=$(ls /dev/pts | sort)
-
-    get_guest_klog
-    sleep 40
-
-    return 0
-}
-
-get_guest_klog()
-{
-    local case_dir=$ROOT/results/$this_case
-    gklog=$case_dir/gklog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $gklog ] || touch $gklog
-    for fo in $string2; do
-        echo $string1 | grep $fo -wq
-        [ $? -eq 1 ] && num=$fo
-    done
-    cat /dev/pts/$num > $gklog &
-}
-
-mce_inject_trigger()
-{
-    local errtype=$1
-    local append=""
-    while getopts ":d:u:p:" Option
-    do
-        case $Option in
-            d ) domid=$OPTARG;;
-            u ) cpu=$OPTARG;;
-            p ) pageaddr=$OPTARG;;
-            * ) ;;
-        esac
-    done
-
-    [ -z $domid ] || append=$append" -d $domid"
-    [ -z $cpu ] || append=$append" -c $cpu"
-    [ -z $pageaddr ] || append=$append" -p $pageaddr"
-
-    [ -f $ROOT/tools/xen-mceinj ]
-    if [ $? -eq 0 ]; then
-        xm dmesg -c
-        $ROOT/tools/xen-mceinj -t $errtype $append
-        if [ $? -ne 0 ]; then
-            show "  Failed: Maybe the memory addr is out of range. \
-                      Please check whether used xen-mceinj tool correctlly"
-            return 1
-        fi
-    else
-        show "  Failed: please compile xen-mce inject tool firstly"
-        return 1
-    fi
-    return 0
-}
-
-xen_verify()
-{
-    local case_dir=$ROOT/results/$this_case
-    local xenlog=$case_dir/xenlog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $xenlog ] || touch $xenlog
-    xm dmesg > $xenlog
-    grep "Error is successfully recovered" $xenlog > /dev/null
-    if [ $? -eq 0 ]; then
-        show "  Passed: Xen handle this MCE error successfully"
-    else
-        show "  Failed: Xen does not handle MCE error correctly !!"
-        return 1
-    fi
-    return 0
-}
-
-guest_verify()
-{
-    grep "kernel page recovery" $gklog > /dev/null
-    if [ $? -eq 0 ]; then
-        show "  Passed: Guest recive MCE error and solved correctly"
-    else
-        show "  Failed: Guest fail to solve MCE error"
-        return 1
-    fi
-    return 0
-}
-
-mcelog_verify()
-{
-    local err_type=$1
-    local ret=0
-    local case_dir=$ROOT/results/$this_case
-    local mcelog=$case_dir/mcelog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $mcelog ] || touch $mcelog
-    mcelog > $mcelog
-    if [ -z $mcelog ]; then
-        show "  Failed: MCELOG does not catch anything"
-        return 1
-    else
-        if [ $err_type -eq 0 ]; then
-            grep "MEMORY CONTROLLER MS_CHANNELunspecified_ERR" $mcelog \
-                > /dev/null
-            ret=$?
-        elif [ $err_type -eq 1 ]; then
-            grep "Generic CACHE Level-2 Eviction Error" $mcelog > /dev/null
-            ret=$?
-        elif [ $err_type -eq 2 ]; then
-            grep "Data CACHE Level-2 Data-Read Error" $mcelog > /dev/null
-            ret=$?
-        fi
-
-        if [ $ret -eq 0 ]; then
-            show "  Passed: MCElog catch a correct error"
-        else 
-            show "  Failed: MCE log catch a incorrect error !!"
-            return 1
-        fi
-    fi
-
-    return 0
-}
-
-function des_guest()
-{
-    xm des $domid    
-}
-
-function clean_env()
-{
-    [ -d $ROOT/results ] || mkdir $ROOT/results
-    # clean logs and results of last test for this case
-    rm -fr $ROOT/results/$this_case/*
-}
-
-function show()
-{
-    local case_dir=$ROOT/results/$this_case
-    local logfile=$case_dir/testlog
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $logfile ] || touch $logfile
-    echo -e $* | tee -a $logfile > /dev/null
-}
-
-function gen_result()
-{
-    local ret=$1
-    local case_dir=$ROOT/results/$this_case
-    local result=$case_dir/result
-    [ -d $case_dir ] || mkdir $case_dir
-    [ -f $result ] || touch $result
-    
-    if [ $ret -eq 0 ]; then
-        echo "PASSED" > $result
-    elif [ $ret -eq 1 ]; then
-        echo "FAILED" > $result
-        echo "   Please check testlog for details!!! " >> $result
-    else
-        echo "NORESULT" > $result
-        echo "   Please check testlog for details!!! " >> $result
-    fi
-}
diff --git a/tools/tests/mce-test/tools/Makefile b/tools/tests/mce-test/tools/Makefile
deleted file mode 100644
index 0e92ac2977..0000000000
--- a/tools/tests/mce-test/tools/Makefile
+++ /dev/null
@@ -1,24 +0,0 @@
-XEN_ROOT=$(CURDIR)/../../../..
-include $(XEN_ROOT)/tools/Rules.mk
-
-CFLAGS += -Werror
-CFLAGS += $(CFLAGS_libxenctrl)
-CFLAGS += $(CFLAGS_libxenguest)
-CFLAGS += $(CFLAGS_libxenstore)
-CFLAGS += $(CFLAGS_xeninclude)
-
-.PHONY: all
-all: xen-mceinj
-
-install: xen-mceinj
-	$(INSTALL_PROG) xen-mceinj $(DESTDIR)$(sbindir)
-
-.PHONY: clean
-clean:
-	$(RM) *.o xen-mceinj
-
-.PHONY: distclean
-distclean: clean
-
-xen-mceinj: xen-mceinj.o Makefile
-	$(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore)
diff --git a/tools/tests/mce-test/tools/README b/tools/tests/mce-test/tools/README
deleted file mode 100644
index bd0d442bae..0000000000
--- a/tools/tests/mce-test/tools/README
+++ /dev/null
@@ -1,24 +0,0 @@
-Xen Machine Check Exception(MCE) error inject tool
-----------------------------------------------
-
-xen-mceinj is a software MCE injection tool, which is based on Xen 
-MCE injection mechanism. It allows to inject machine check errors on the
-software level into a running Xen/dom0/VM. This is intended for
-validation of the Xen machine check handler.
-
-With the help of the Makefile, it is possible to compile a binary file 
-named "xen-mceinj".
-
-Usage
------
-$make (make install) --Note: make sure compile xen/tools before do this step
-$./xen-mceinj [OPTION]...
-
-OPTION arguments can be:
-  -D, --dump           dump addr info without error injection
-  -c, --cpu=CPU_ID     target CPU, the default is CPU0
-  -d, --domain=DomID   target domain, the default is Xen itself
-  -p, --page           physical page address, the default is 0x180020
-  -t, --type=error     error type
-
-For detail help, please refer to "./xen-mceinj -h"
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:21:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH v2 0/4] tools/tests: More cleanup for automation improvements
Date: Tue, 22 Jun 2021 19:21:20 +0100
Message-ID: <20210622182124.11571-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

v2:
 * Fix CI failures from newly-exposed logic
 * Drop -f's from $(RM)
 * Drop the 'run' rune patch.  It's clearly controversial, but ignoring the
   problems isn't an option in the long term.

All other RFC questions still outstanding.

Andrew Cooper (4):
  tools/tests: Drop obsolete mce-test infrastructure
  tests/resource: Rework Makefile
  tests/cpu-policy: Rework Makefile
  tests/xenstore: Rework Makefile

 .gitignore                                         |   2 -
 tools/misc/.gitignore                              |   1 +
 tools/misc/Makefile                                |   4 +
 tools/{tests/mce-test/tools => misc}/xen-mceinj.c  |  32 +--
 tools/tests/Makefile                               |   1 -
 tools/tests/cpu-policy/Makefile                    |  31 ++-
 tools/tests/mce-test/Makefile                      |  12 -
 tools/tests/mce-test/README                        |  75 ------
 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_llc/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_llc/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh  |  73 ------
 tools/tests/mce-test/cases/srao_mem/guest/cases.sh |  94 --------
 tools/tests/mce-test/cases/srao_mem/xen/cases.sh   |  69 ------
 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh  |  72 ------
 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh |  92 --------
 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh   |  68 ------
 tools/tests/mce-test/config/setup.conf             |  24 --
 tools/tests/mce-test/lib/xen-mceinj-tool.sh        | 260 ---------------------
 tools/tests/mce-test/tools/Makefile                |  24 --
 tools/tests/mce-test/tools/README                  |  24 --
 tools/tests/resource/Makefile                      |  11 +-
 tools/tests/xenstore/.gitignore                    |   1 +
 tools/tests/xenstore/Makefile                      |  31 ++-
 .../tests/xenstore/{xs-test.c => test-xenstore.c}  |   8 +-
 25 files changed, 80 insertions(+), 1165 deletions(-)
 rename tools/{tests/mce-test/tools => misc}/xen-mceinj.c (97%)
 delete mode 100644 tools/tests/mce-test/Makefile
 delete mode 100644 tools/tests/mce-test/README
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/srao_mem/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/dom0/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/guest/cases.sh
 delete mode 100644 tools/tests/mce-test/cases/ucna_llc/xen/cases.sh
 delete mode 100644 tools/tests/mce-test/config/setup.conf
 delete mode 100644 tools/tests/mce-test/lib/xen-mceinj-tool.sh
 delete mode 100644 tools/tests/mce-test/tools/Makefile
 delete mode 100644 tools/tests/mce-test/tools/README
 create mode 100644 tools/tests/xenstore/.gitignore
 rename tools/tests/xenstore/{xs-test.c => test-xenstore.c} (98%)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:21:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:21:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 2/4] tests/resource: Rework Makefile
Date: Tue, 22 Jun 2021 19:21:22 +0100
Message-ID: <20210622182124.11571-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210622182124.11571-1-andrew.cooper3@citrix.com>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged and automated sensibly.

Make all object files depend on the Makefile, drop the redundant -f for
$(RM) (which is "rm -f" by default), and use $(TARGET) when appropriate.
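
The redundant -f can be confirmed standalone (a hypothetical demo outside
the Xen tree; file names are illustrative): GNU make's built-in RM variable
already carries the flag, and -f also makes missing operands a no-op, which
is what clean rules rely on.

```shell
# GNU make predefines RM as "rm -f", so "$(RM) -f" duplicates the flag.
make -p -f /dev/null 2>/dev/null | grep '^RM ='
# With -f, a missing operand is not an error.
rm -f -- no-such-file.o
echo "rm exit status: $?"
```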

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Fix typo in commit message
 * Drop -f's
 * Use %.o rather than *.o for Make level wildcards
---
 tools/tests/resource/Makefile | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index 4bef482966..1c3aee4ff7 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -12,17 +12,20 @@ run: $(TARGET)
 
 .PHONY: clean
 clean:
-	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
+	$(RM) -- *.o $(TARGET) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
-	$(RM) -f -- *~
+	$(RM) -- *~
 
 .PHONY: install
 install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(TARGET) $(DESTDIR)$(LIBEXEC_BIN)
 
 .PHONY: uninstall
 uninstall:
+	$(RM) -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
 
 CFLAGS += -Werror
 CFLAGS += $(CFLAGS_xeninclude)
@@ -34,7 +37,9 @@ LDFLAGS += $(LDLIBS_libxenctrl)
 LDFLAGS += $(LDLIBS_libxenforeignmemory)
 LDFLAGS += $(APPEND_LDFLAGS)
 
-test-resource: test-resource.o
+%.o: Makefile
+
+$(TARGET): test-resource.o
 	$(CC) -o $@ $< $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:21:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146026.268637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1v-0003xK-EN; Tue, 22 Jun 2021 18:21:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146026.268637; Tue, 22 Jun 2021 18:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1v-0003x5-AI; Tue, 22 Jun 2021 18:21:47 +0000
Received: by outflank-mailman (input) for mailman id 146026;
 Tue, 22 Jun 2021 18:21:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvl1u-0002zl-2D
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:21:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 502f1c37-ee41-478d-a30e-b37fa28ee278;
 Tue, 22 Jun 2021 18:21:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 502f1c37-ee41-478d-a30e-b37fa28ee278
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386096;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FuX/gdX/Z8iVIv8CdPdcrz1kl6FHA1swtRRbhrQEv1c=;
  b=U9B1y1p3TLadBIoPobG7VilBm0p3/YOsX0pDheUtntruFNOKuQKBoh4M
   AtYkdII3XozdQmiLzIf6AxTIOcHnt84wZZe4UKpTcOCtB3EfOH9/+jCtU
   o/7Jhzkahs00b6rirOJVIaBj+5R2pM3VydH8fuphTH3AdMX+WYIyi+Mnn
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 46703313
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="46703313"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 3/4] tests/cpu-policy: Rework Makefile
Date: Tue, 22 Jun 2021 19:21:23 +0100
Message-ID: <20210622182124.11571-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210622182124.11571-1-andrew.cooper3@citrix.com>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged and automated sensibly.

Rework TARGET-y to be TARGETS, drop the redundant -f for $(RM), drop the
unconditional -O3 in favour of the default optimisation level, and drop
CFLAGS from the link line while honouring APPEND_LDFLAGS.
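
The install rule's guard can be seen in isolation with a minimal makefile
(a hypothetical sketch; only the variable name mirrors the patch).  When
TARGETS is empty, e.g. because the compiler was too old to build the
harness, $(if ...) drops the command entirely instead of invoking
$(INSTALL_PROG) with no file arguments:

```shell
# Hypothetical demo makefile; the rule-on-one-line form avoids tab issues.
printf 'TARGETS :=\ninstall: ; @$(if $(TARGETS),echo install $(TARGETS),echo nothing to install)\n' > demo.mk
make -s -f demo.mk install                          # prints: nothing to install
make -s -f demo.mk install TARGETS=test-cpu-policy  # prints: install test-cpu-policy
rm -f demo.mk
```

A command-line assignment overrides the makefile's `TARGETS :=`, which is
what stands in here for the compiler-version conditional.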

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Drop -f's
 * Use %.o rather than *.o for Make level wildcards
---
 tools/tests/cpu-policy/Makefile | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 70ff154da6..161732ad16 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -1,21 +1,19 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-TARGET-y := test-cpu-policy
+TARGETS :=
 
 # For brevity, these tests make extensive use of designated initialisers in
 # anonymous unions, but GCCs older than 4.6 can't cope.  Ignore the test in
 # this case.
-ifneq ($(clang),y)
-TARGET-$(call cc-ver,$(CC),lt,0x040600) :=
-endif
-
-ifeq ($(TARGET-y),)
+ifneq ($(gcc)$(call cc-ver,$(CC),lt,0x040600),yy)
+TARGETS += test-cpu-policy
+else
 $(warning Test harness not built, use newer compiler than "$(CC)" (version $(shell $(CC) -dumpversion)))
 endif
 
 .PHONY: all
-all: $(TARGET-y)
+all: $(TARGETS)
 
 .PHONY: run
 run: $(TARGET-y)
@@ -23,23 +21,32 @@ run: $(TARGET-y)
 
 .PHONY: clean
 clean:
-	$(RM) -f -- *.o .*.d .*.d2 test-cpu-policy
+	$(RM) -- *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
-	$(RM) -f -- *~
+	$(RM) -- *~
 
 .PHONY: install
 install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(if $(TARGETS),$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN))
 
 .PHONY: uninstall
+uninstall:
+	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -O3
+CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(APPEND_CFLAGS)
 
-vpath %.c ../../../xen/lib/x86
+LDFLAGS += $(APPEND_LDFLAGS)
+
+vpath %.c $(XEN_ROOT)/xen/lib/x86
+
+%.o: Makefile
 
 test-cpu-policy: test-cpu-policy.o msr.o cpuid.o policy.o
-	$(CC) $(CFLAGS) $^ -o $@
+	$(CC) $^ -o $@ $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:21:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:21:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146027.268648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1z-0004N8-Mo; Tue, 22 Jun 2021 18:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146027.268648; Tue, 22 Jun 2021 18:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl1z-0004Mv-Ik; Tue, 22 Jun 2021 18:21:51 +0000
Received: by outflank-mailman (input) for mailman id 146027;
 Tue, 22 Jun 2021 18:21:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvl1z-0002zl-2W
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:21:51 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f7003a-5612-49b9-808a-f7dedefdfffc;
 Tue, 22 Jun 2021 18:21:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96f7003a-5612-49b9-808a-f7dedefdfffc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386096;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=5PxTKbRVgESGjhDeGvA5yScbwAfUHH/0kZsWSvQHRtg=;
  b=PX+ADeaTNspKM4iPe94x6j52yqj2NjNwVGH07aWQ+EOVeqp5FozrUc2u
   KQkzQM83Y9CN3j9oJ5bKzcb7gmikicaEUBq9f17gZBY9U9cnFLUFmjZIy
   SYh+gMXvbSmkTL9PqSpB551bbIiqh5TekclEmApOqDBsskl6d+5MYgKEy
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 48311393
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="48311393"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH 4/4] tests/xenstore: Rework Makefile
Date: Tue, 22 Jun 2021 19:21:24 +0100
Message-ID: <20210622182124.11571-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210622182124.11571-1-andrew.cooper3@citrix.com>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In particular, fill in the install/uninstall rules so this test can be
packaged and automated sensibly.

This causes the code to be noticed by CI, which objects as follows:

  test-xenstore.c: In function 'main':
  test-xenstore.c:486:5: error: ignoring return value of 'asprintf', declared
  with attribute warn_unused_result [-Werror=unused-result]
       asprintf(&path, "%s/%u", TEST_PATH, getpid());
       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Address the CI failure by checking the asprintf() return value and exiting.

Rename xs-test to test-xenstore for consistency with the other tests.
Honour APPEND_CFLAGS and APPEND_LDFLAGS too.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Drop -f's
 * Fix CI breakage, now that CI can build the test.
---
 .gitignore                                         |  1 -
 tools/tests/xenstore/.gitignore                    |  1 +
 tools/tests/xenstore/Makefile                      | 31 +++++++++++++++-------
 .../tests/xenstore/{xs-test.c => test-xenstore.c}  |  8 ++++--
 4 files changed, 29 insertions(+), 12 deletions(-)
 create mode 100644 tools/tests/xenstore/.gitignore
 rename tools/tests/xenstore/{xs-test.c => test-xenstore.c} (98%)

diff --git a/.gitignore b/.gitignore
index d4b90303b2..8ebb51b6c5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -275,7 +275,6 @@ tools/tests/x86_emulator/*sse*.[ch]
 tools/tests/x86_emulator/test_x86_emulator
 tools/tests/x86_emulator/x86_emulate
 tools/tests/x86_emulator/xop*.[ch]
-tools/tests/xenstore/xs-test
 tools/tests/vpci/list.h
 tools/tests/vpci/vpci.[hc]
 tools/tests/vpci/test_vpci
diff --git a/tools/tests/xenstore/.gitignore b/tools/tests/xenstore/.gitignore
new file mode 100644
index 0000000000..4b44f5dd60
--- /dev/null
+++ b/tools/tests/xenstore/.gitignore
@@ -0,0 +1 @@
+test-xenstore
diff --git a/tools/tests/xenstore/Makefile b/tools/tests/xenstore/Makefile
index a367d88803..b9969dd090 100644
--- a/tools/tests/xenstore/Makefile
+++ b/tools/tests/xenstore/Makefile
@@ -1,11 +1,7 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
-
-CFLAGS += $(CFLAGS_libxenstore)
-
-TARGETS-y := xs-test
+TARGETS-y := test-xenstore
 TARGETS := $(TARGETS-y)
 
 .PHONY: all
@@ -16,14 +12,31 @@ build: $(TARGETS)
 
 .PHONY: clean
 clean:
-	$(RM) *.o $(TARGETS) *~ $(DEPS_RM)
+	$(RM) -- *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
+	$(RM) -- *~
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(if $(TARGETS),$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN))
+
+.PHONY: uninstall
+uninstall:
+	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
+
+CFLAGS += -Werror
+CFLAGS += $(CFLAGS_libxenstore)
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenstore)
+LDFLAGS += $(APPEND_LDFLAGS)
 
-xs-test: xs-test.o Makefile
-	$(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenstore)
+%.o: Makefile
 
-install uninstall:
+test-xenstore: test-xenstore.o
+	$(CC) -o $@ $< $(LDFLAGS)
 
 -include $(DEPS_INCLUDE)
diff --git a/tools/tests/xenstore/xs-test.c b/tools/tests/xenstore/test-xenstore.c
similarity index 98%
rename from tools/tests/xenstore/xs-test.c
rename to tools/tests/xenstore/test-xenstore.c
index c4c99c0661..d3574b3fa2 100644
--- a/tools/tests/xenstore/xs-test.c
+++ b/tools/tests/xenstore/test-xenstore.c
@@ -20,6 +20,7 @@
  */
 
 #define _GNU_SOURCE
+#include <err.h>
 #include <getopt.h>
 #include <inttypes.h>
 #include <stdbool.h>
@@ -483,11 +484,14 @@ int main(int argc, char *argv[])
         return 0;
     }
 
-    asprintf(&path, "%s/%u", TEST_PATH, getpid());
+    if ( asprintf(&path, "%s/%u", TEST_PATH, getpid()) < 0 )
+        err(2, "asprintf() malloc failure\n");
+
     for ( t = 0; t < WRITE_BUFFERS_N; t++ )
     {
         memset(write_buffers[t], 'a' + t, WRITE_BUFFERS_SIZE);
-        asprintf(&paths[t], "%s/%c", path, 'a' + t);
+        if ( asprintf(&paths[t], "%s/%c", path, 'a' + t) < 0 )
+            err(2, "asprintf() malloc failure\n");
     }
 
     xsh = xs_open(0);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:22:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146038.268659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl2g-0005sx-Vk; Tue, 22 Jun 2021 18:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146038.268659; Tue, 22 Jun 2021 18:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl2g-0005sq-Se; Tue, 22 Jun 2021 18:22:34 +0000
Received: by outflank-mailman (input) for mailman id 146038;
 Tue, 22 Jun 2021 18:22:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvl2g-0005sW-1n
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:22:34 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db067075-8aa5-4bdc-9374-29722aa2e6e0;
 Tue, 22 Jun 2021 18:22:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db067075-8aa5-4bdc-9374-29722aa2e6e0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386153;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=mH8URGvQhNiKjp8UINoe2iWsXQVOEuwefjeCIRcJy/w=;
  b=PbdPwhjWvikgOdDoNwlAyPG9SjqQLGtjwxbagCRBBmZxOcd7Z0JW8faD
   vhD4CDgJpIpQ1OAtAdt2GmEDrwH6HIWzQq2CMEiik1zXXOAtkPF68g8RY
   47Od2YSB8MaxT1gEsyVeJl5Kyq21bzTD9wiQ4EgbyXcZue9fMr860iXe+
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 48311456
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="48311456"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mH8URGvQhNiKjp8UINoe2iWsXQVOEuwefjeCIRcJy/w=;
 b=IBEoK9ibRqD2xFznY3KbGb1QhwLR31azEE+H0gN9fUE5MvjWKKiyWBa+aSVbYP8sGyl2+xRNmZvmz8KbaDDnVRaOEanjNTFRxhmZjwWYr7d2KmNxEEow3b3zXggZ0to8b5CpaM8fJr+bvw85gg6AoKkHFcJ3bBrvltkMS1c/80w=
Subject: Re: [PATCH v2 3/6] libxencall: introduce variant of xencall2()
 returning long
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
 <beca6bb7-a991-8a94-ebbe-af973e1f3958@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3a1f9e09-dc01-1107-6a76-5e24d34e0292@citrix.com>
Date: Tue, 22 Jun 2021 19:22:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <beca6bb7-a991-8a94-ebbe-af973e1f3958@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0132.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4cec6b8d-a40a-4b65-86a8-08d935aab05b
X-MS-TrafficTypeDiagnostic: BY5PR03MB4968:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB4968A558903B1C794C704288BA099@BY5PR03MB4968.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 4cec6b8d-a40a-4b65-86a8-08d935aab05b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 18:22:27.2704
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4968
X-OriginatorOrg: citrix.com

On 22/06/2021 16:18, Jan Beulich wrote:
> Some hypercalls, memory-op in particular, can return values requiring
> more than 31 bits to represent. Hence the underlying layers need to make
> sure they won't truncate such values.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Ian Jackson <iwj@xenproject.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:25:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146052.268670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl5K-0006lD-F0; Tue, 22 Jun 2021 18:25:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146052.268670; Tue, 22 Jun 2021 18:25:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvl5K-0006l6-Ag; Tue, 22 Jun 2021 18:25:18 +0000
Received: by outflank-mailman (input) for mailman id 146052;
 Tue, 22 Jun 2021 18:25:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvl5J-0006l0-3t
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:25:17 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72b3a7ab-c484-40c5-887e-88548c7a3b25;
 Tue, 22 Jun 2021 18:25:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72b3a7ab-c484-40c5-887e-88548c7a3b25
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386316;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CFmUwofobeFCB6oZ+ZkeDOZmYg/dJAqTPKpkqU/4bP0=;
  b=GBECJmWIL01v6kbmvpmYjsCuEErsO7l0tJbFGbS86ND8eLit/ulbExA1
   2hGGNdMGLE08QsCvP9kHziumEdvcC5GwkXeBpT1znojXm5ZVOINhGGfBv
   670d4tu4Dlwq2Yd2iQ6bTLE+WVvosab0d/+gu7w7YN6WRqsbxQPCDyt06
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46703587
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:wzRrs6EDj+VEl3UTpLqFDJHXdLJyesId70hD6qkvc3Jom52j+P
 xGws526faVslYssHFJo6HmBEDyewKjyXcT2/hvAV7CZnibhILMFuBfBOTZskbd8kHFh5dgPO
 JbAtVD4b7LfCpHZKTBkXGF+r8bqbHtms3Y5pa9vgNQpENRGsddBm9Ce3Wm+yZNNWx77PQCZf
 6hD4Z81kCdkSN9VLXKOpBJZZmMm/T70LbdJTIWDR8u7weDyRuu9b7BChCdmjMTSSlGz7sO+X
 XM11WR3NTij9iLjjvnk0PD5ZVfn9XsjvNFGcy3k8AQbhHhkByhaohNU6CL+Bo1vOaswlA3l8
 SkmWZgA+1Dr1fqOk2lqxrk3AftlBw07WX59FOeiXz/5eTkWTMTEaN69MdkWyqcz3BlkMB30a
 pN0W7cnYFQFwn8kCP04MWNfw12l3CzvWEpnYco/j5iuLMlGfhsRLEkjQVo+M9qJlOi1GlnKp
 gsMCjk3ocTTbvABEqp5lWGqbeXLwEO9hTveDlOhiXa6UkMoJjVp3FojPD3pU1wgq7VfaM0rd
 gsAp4Y442mcfVmJJ6VJN1xDfdfWVa9Di4lDgqpUB/a/fY8SgPwQtjMke8I2N0=
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="46703587"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n68M7+pgQDdTGU6dGuzoLLKNSmkLYwDeGxKwdbcK4UeJEz4KoUMCvvFUILjVr38IGEtAr8WIl8UW2a6MJzaZ80t++tZVRlWhz8/QyPD3N4p2mVHFDcoTHv7DRbmUEEFzXPaAXEB5McOzHnWAETuLJQVb97cYE3OvW4GOKXR8RN2gAuKG3t0B8CCXGSCLOXK19hd4h5BlhsCeOPRWM1mIGVrvCwvSeMKYEuxyjogG5ZeY6UW7FE2tlHVn2I0jJGmXu5xZ/Z31fRO7MH6WegZbqDFOSI54VGOzAqjK5ZdCp5qHktE4YWO1o6iy4LH4F1w4yAGZ5wObXDGQyy6lwAARMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CFmUwofobeFCB6oZ+ZkeDOZmYg/dJAqTPKpkqU/4bP0=;
 b=YRSQ5Ph9NLehgopFSopEjjVGPWCEOuC1pmdbHa4axQitp6NKDIv8iGrXVqXpQvgJcft+DP9ZGItNxIcG1vLPm+ebCUnaQqGIDDRtgmiNoO8xsaogrunUEVHAVjozjS4Q+/EZFCZ+HDeEbLgSaVfIESmBr6VuoSaW031u9RHqR3eFqSmNgZk9f6+3dPA9ewgSd8pT3B0aO70bKuemsbz9/RLIf42Q19scJUQarTHKB/nhTGsOq4eMzCT9yuJHr0ImqYUSG5s00/dImR/4qonLCS/4jHiPOdvAErPsgNfnC0HcW5uAvRMrXBbBIoO2WXLMjh1xIG0sLHYMnoX+cUkUMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CFmUwofobeFCB6oZ+ZkeDOZmYg/dJAqTPKpkqU/4bP0=;
 b=pPZ287j1nuHmxfGjXRABsr1rQTqVVJtkEO/vmAo9JiDuaZhOZmf1BWW15AgiuhzqMJZYtPbaH6Zz5F02xxR42Jy38ERJteOCXOXOhVcIVKdU18O2+I/ecIC29kdDghkBG/6w5oZxxNqbauOLI8RHirHs3CKUSdyT1nbjqv0lQBs=
Subject: Re: [PATCH v2 5/6] libxencall: drop bogus mentioning of xencall6()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
 <1792bdfc-7510-6628-e63c-aee2c7d2cab5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <25fa82fd-957a-d378-9761-070006d083ae@citrix.com>
Date: Tue, 22 Jun 2021 19:25:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <1792bdfc-7510-6628-e63c-aee2c7d2cab5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0414.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ff1d846e-8b3f-42c5-3922-08d935ab12fc
X-MS-TrafficTypeDiagnostic: BYAPR03MB4806:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB48064A8398567BDDDBA5A489BA099@BYAPR03MB4806.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: GyKXOQsE7+p5WoBZj4wVAgDONvx8Sphu6+X/EdHPpEisQPawvXXhiLNlWX5DVqIQ4MEMqdE6Ilcctb/pGIy2kgrFMMtkslbqfuSwfB9qSz3QCMbhw+t9NHBXGxDqOp8x1S822ms2iU+KQ0pPoKPnx4uxT8GozZuEA3n0dTR3UnZUWo7yAEawzDQmKXImU2o7C2BXLIbiTJ1X7Y8eLfRen0EHO0982CiyevZEc0s9nx6/fshbm/NyXxUHhn3KcC6NPSr/gnYjdKIg7mzDKPf1ZF4rgu9nHrD7Tv/74Um2W0QfCL/yHJowKwef9cfvdoX4PgvLVbyRRkLWAOVmM9EkwMTOiq9/y5/+6t1roFzPnsZZJpjA8tzEBe1cpKkf8pgpyFqoWKLXiLSEdf7f51PGDFjvb+D4JzoY8h+bau/RqH4PL9k5Sh0iu8wDtD9c9l3nCDSjjadkBvkJGIQ5F8ep7s/j8CqywQyRTvA7AFk/mP1yVlh3/NRA0zl2ILkpn/xqke5rFpuSxZlTFa9tx4HcwOlMnqIKZuWKfnH9s7ToBjj41Q4RRWWqS1aKlLhYs/poUqXfijwhycpQRIZKbEfKXN+4XZQMt8aIxnCryyUauzoDJ3OW5lr/+ytdQCFuz2s+unvdoACt7CjEuugm+Hqbl6FK61yrFfVJIubs8b6p4ozgFCTR7q38ZDyVhuwZsd2h
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(366004)(346002)(396003)(39860400002)(6666004)(6486002)(38100700002)(956004)(2906002)(2616005)(36756003)(8676002)(478600001)(16526019)(66476007)(66946007)(110136005)(53546011)(66556008)(26005)(5660300002)(8936002)(186003)(4744005)(16576012)(316002)(54906003)(4326008)(31686004)(31696002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?dWVyWGdrNjFvODNCY0c4eXhYYlpYM0Yxc1gyMXZDL0NHcjMvcURmNFdjZko0?=
 =?utf-8?B?bUg2WitpdzN0cnVzOGF5a0JOU3hxMG9xaEJyb2VtNlUvV2ZwZ1V4NUlaM0hp?=
 =?utf-8?B?RDJLQjlWbjBwR05sNWlxMVlaZHB1MmUrQU9OaUhLUjhwWGFJNXZja0MzZS9F?=
 =?utf-8?B?clJTSk5BOTFqdFFZTHRPTHoyVVpKdGZWZ0d0c1plb1FUOVlJZzZlNHd2NmNh?=
 =?utf-8?B?LytIcHlmWktqa2RUYjJ0eXgxOUJpNGdjZ3NtRVFtWGJFQUJwT29XMSt1Rk5w?=
 =?utf-8?B?aTNjd04zRnAzOEdFYjh1UUV5Yk85MUlvWFNGanpLQ1d0OFFtbkU3a0R1VWJD?=
 =?utf-8?B?cDIwNFdpOXYzaDdDQm84ODgwdU9TeElUOFdEZ3M4RTFqS0pvcWNoRUZWZUlu?=
 =?utf-8?B?UkpqUldENm1qODBQZnI4Q0ZXUmR6aE13T1h4UzhOMGkwY09xdjVTVkJVY05n?=
 =?utf-8?B?d01VaEJITjNBblZuN1kwTFJleWp0TjQySjdJYkRHclZxS01VOVMydE1JQzBL?=
 =?utf-8?B?OG90UWZqS0lGU2pVc0orRDMySFhiU3V1dWlGdXJrdWpZQjVSbDlLTnVGbllz?=
 =?utf-8?B?KzhjYjR5SXpXZXlQV2ROaEVxYmZjcElyVEJWOVZBTXhER1FtcEpiRmM2QTFG?=
 =?utf-8?B?V0J4MXB4S21icElJTnlOMFNWTnZWMFRWbEFhV0Y1TDYyYXlRR1crb2xjZ0hC?=
 =?utf-8?B?aTdENXZBQ0pTQnMxek5LeHFDTWJnZW1ZVGtpRXoxbW1FVUFTUG54MTdRYWJZ?=
 =?utf-8?B?aDJEUU1qNlpXYlJGRlFwTnV2UmlEK3dNaHhiMXlKNlVUTGdvRHVKQlFuL1Bv?=
 =?utf-8?B?bGlSbWZVcFlJWGJGZmpRdms2OTRlektWdnZlYTJjbGVCRVBpY2dES0I4TDBY?=
 =?utf-8?B?NVBROFlIUmVNY2xYM2tRWWhQWEU2VFFpSkgvc1I2QzVlSk44R3VxcllQSlhp?=
 =?utf-8?B?UlNObUl2Yzdla0VxQUZHc29RYVJWUUxqWWFRSnR0NkFLenZhOHVxQisxOEh1?=
 =?utf-8?B?VXdwaktOdm5XL0dIVDkrdW1mRm1QaWJYbElsUEswbGkrc1VxVXFEVURpQkpv?=
 =?utf-8?B?eWl4Mk9lc1dXRkQwNi9QNmVEOXhId25tZDArUGxwbGZJa2tMUWRjWUNrRXhM?=
 =?utf-8?B?NVl1R1k4VUdQV2VNdVJxYU9CNHhGSXE2Q3F4dXluZzVWUWR5MDllQmMxckdk?=
 =?utf-8?B?T2NETlY4V09TWC90N1ZTdnhhMU5jMG1uK2RYaHc3VmdJbTNpSDlLUVR4Vkln?=
 =?utf-8?B?Q21kR01QMjNLSEdxaERsWHJYRlJvbXdmTGNzR1R0ZkljWXY3blVkS1ROaFNn?=
 =?utf-8?B?VmdsYjFmMU5JUFNNQkxrejdPRnB5ZEFNNXk5RDRyZk5RQ0JOeUdQSEdOWXVB?=
 =?utf-8?B?QUtaUlRJM2R1VmZ1VDBRb09MS3Z4MzdseEVKVkI1cXpUUGhnZzdYRGlML1Rw?=
 =?utf-8?B?K1p2WUxmRFVvc1pHYy9lb1Fpd2k1dFNkdjY5UitLOGFLOUJRSmNJVWtXSWQ3?=
 =?utf-8?B?S3cxeXVnUzdncjNqYlhWaDNsck1ya09RTGpvNlBOV05NdmJ1aklHbGxObFZ2?=
 =?utf-8?B?YkEzM2M1eCtEOWJMNUhaQmpIR2gxMjdhNW5Pa3ZsVmcvbUhiYTFJM0x6VGNu?=
 =?utf-8?B?UzQ2aWg5SjNYNkFvbnNiS2FpcWNiSGlMQUhDdjRYOGc3MnFLQ0ZiQzVJTXUx?=
 =?utf-8?B?d0pjTU01US9rU3I3NWFzVjQ3UVN3amlaVmNiaGVEMG01VHc2WUFCbkNWZGYz?=
 =?utf-8?Q?VRC6SnUVMyjJBp/isxStrXSeDiDCq4i8bi9Hgdg?=
X-MS-Exchange-CrossTenant-Network-Message-Id: ff1d846e-8b3f-42c5-3922-08d935ab12fc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 18:25:12.5044
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FCS99H/Z32M0atiJFGkqSpap4iQzmiSF9lSqe9n1E0OEj30k4Oe1FashCPgo9vE3SuH/lc0Ci/J9K2vlcxDmfyG/L/b2DJXyLwZZYAMWXTI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4806
X-OriginatorOrg: citrix.com

On 22/06/2021 16:19, Jan Beulich wrote:
> There's no xencall6(), so the version script also shouldn't mention it.
> If such a function were ever to appear, it shouldn't land in version 1.0.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Ian Jackson <iwj@xenproject.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

This really does need backporting, and it is probably worth stating
explicitly that there is no change in the resulting object, nor in
abi-dumper's view of the object.

~Andrew
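
For readers unfamiliar with linker version scripts: the patch under review
only removes a comment naming a symbol that does not exist. A minimal sketch
of what a libxencall-style version script looks like (node and symbol layout
here is illustrative, not copied from the actual file):

```
VERSION_1.0 {
    global:
        xencall0; xencall1; xencall2;
        xencall3; xencall4; xencall5;
        /* No xencall6: the function does not exist, so the version
         * script should not mention it either. */
    local: *;
};
```

Symbols listed under `global:` in a version node become part of the library's
versioned ABI, which is why a stale mention of a hypothetical future function
in the 1.0 node is misleading.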


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 18:34:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 18:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146058.268681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvlDr-0008GP-Ed; Tue, 22 Jun 2021 18:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146058.268681; Tue, 22 Jun 2021 18:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvlDr-0008GI-BP; Tue, 22 Jun 2021 18:34:07 +0000
Received: by outflank-mailman (input) for mailman id 146058;
 Tue, 22 Jun 2021 18:34:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvlDq-0008GC-6h
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 18:34:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdffe248-1a4b-43e2-9b61-a1748a031563;
 Tue, 22 Jun 2021 18:34:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdffe248-1a4b-43e2-9b61-a1748a031563
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624386844;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ByaQcM/oTyhAHtsKoD/+5z7Z8WiwSvG41oOTw0CkyDY=;
  b=NVnLTV7RDZTdzGMjMtBBkXcemHzzktkEMMym+DAlis1HQo9+nj5yyklP
   XipQxzAujya/2XnfbqGrN5ckB7pROBppWOQ6vSwUmXKkOA9IJ6pj2bb60
   otTCdlGNloF2w4bh/I8Rv+6MHWDs6yey170b2XwRGhlE9LeenF0f6hPLK
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: j5SJrXeJznnNPYhhwAlkUtgo5J+WR4WqqhCNqJtnckj/eLUV+8SXFF+Z46o0OuqQkBdXdyeSK7
 9AO5fqN1sDyg9JJcaaR3nuWBZCksGxT8CKPwlD0M/yKsrURP8a3V4rwOAdtLuzk8ZHKqPBOagZ
 swmluLvf0qrOpxzWvmnKZ2WQ1TZlmWl2HLjyWnsTTLwHpvDyFLuAAJLveeVbFBKhTz+Obi9Nm9
 GQOAa2oyFZjrvwkApt+auXeZCpgK4sm6EU0wcCroxW9lg9y966GeBaJheRQn9GnnaYZ6t5oAOd
 G3s=
X-SBRS: 5.1
X-MesageID: 46717711
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:JLQyKqxeiw8YKKbl75oKKrPwSb1zdoMgy1knxilNoH1uA6+lfq
 +V9sjzuSWYtN9zYhEdcLK7VpVoKEm0nfVICO8qUYtKNzOGhILHFu5f0bc=
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="46717711"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jn9dmYYkO7jARu+a4Rq6V7903IH7FhXivDuHk8p1vbK4PRLhFmvDeHHKRD7kf87R7lb+HC7cFTKR56qAz/BiEFfrX+cLhpSKE4K5vLoCBNY6mEGm9/YwUQzbXhKk9niBaiSN8Lyyns2yjyhX+bMqaGw4BmejiJJuy6o39F10oTaZ7Yj2xi2iAdpMcHf3SlA5ANP+FnGWcBGcWC5X2dO8qiwpUZJf9h8W7yEH4yR507m8gnSgBuwlF/tBOLROIiL5TnW5MZoD8nVHo6V1otA/6RGuhF4+3BR3BsZFLEmDF+poAqEgDaj91RnKxq9XQMA75d+SXpbBe5KTvweCK5dVQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ByaQcM/oTyhAHtsKoD/+5z7Z8WiwSvG41oOTw0CkyDY=;
 b=bxfE9EoRxEoe6lIp5QlNWiDLLFXVGYWRbPlDRxF2IJFrED/xAdqku6NJqIMqdVEsTC0wJzmLSXZFGOtjGtAn3r4PTISzGJF6TMogRjB5SYAi97RSldbmTzxynv6xFJiXjjLLCyJ6xkNTK2cc4sba+vqe84730DMtiEYpFjX4yOKxjNRQMrr5ml2Dl0YEL1v4UoPAx2SxK5kgCTMqq0PiZo/3JChcBkMyoqVH5xQclUckmo8nG6GqqeKzOJA2ijNqTPB2DZ6uPbCXY3jygpsHBZAUHD2Nqco6CRXbunDJsPE/ed/w405sVZfRLXzDVBjszXflVwXu5MD3RIqVb6uSWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ByaQcM/oTyhAHtsKoD/+5z7Z8WiwSvG41oOTw0CkyDY=;
 b=ewq/qzfwfcH+bSEGSaWz7KB6F/LRbJbevDRhUoSxsnz7YxpSknQV8GoEtk+Rpwj7er5uH3CPkPQxQc43z0gacjfGEq3Yp77KUmCsY+/wcGC5fbzBnJ8jVUjyclcy3EQ/krUVPdoCaW2UOf2o2fGuJEbCLWgPJFFgNKq2GadNZDA=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
 <fed4cdad-950d-6d39-d372-37f88dcc2819@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 6/6] libxc: make xc_domain_maximum_gpfn()
 endianness-agnostic
Message-ID: <a90c24ab-141d-ef65-11ad-6149bfca5d3d@citrix.com>
Date: Tue, 22 Jun 2021 19:33:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <fed4cdad-950d-6d39-d372-37f88dcc2819@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0008.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ab3091b8-e052-4815-0702-08d935ac4c0b
X-MS-TrafficTypeDiagnostic: BYAPR03MB3797:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3797A8953700EC3257ED927FBA099@BYAPR03MB3797.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: AvrL5ucz4VNYuJFRdVUn+e623S/3rjH0EkTked9vMrkbnB3i+pp9ECxyxChUNYLA+Qf7q8aJd0dA6yr6BjwIzLVagfc9NGPZWoX20P1UYrzToZW1ePIwTDAHYs0AxrfEsymjOIQd6tkoFaNbXHKHtDkG9jAP/1TLerI1WwPB0Nn8rBxrA00vl77/D2X936yinOec1s2qvMgmssJu/h2kSHXtoexJzwTJs9VEKCAIDqY8zqaudEFVinlGevxfdKh7+8fVSAC6ojEFQuBD12tYCd8WTzCahF56raejCnhXJQ+b4V//OwJIXhcTCWLHSsQJZrTmJ7wuOngUR3/CLQOHsRfbFjuUSjwba0idCKxuH1abhRJWkvjSh5itw0GGw49Sp6WOZK+ffC9+7+3ZSxn+sQr7sFA+Gw1a1F79AX+dGrmXgHwD8nod9Z2yCX37yUQlsq5eqG3oHsA5YYjie3kngQpSH7TsVvJ2d+cw8Na6LwL8bm5RQzhLfwC+heRe+zzp1v2o003Pw/EtFnhd3R/U4I616Bu6J0PvsbsoxyRj9xMoKdwx9uxqHa0RWPuLtP7/jz/FqDvMk30hDHGQ3Y1U/bWr54k1+5enbCm3obaxPBsi08XQ7lRC0Bk7GVb0qfXjE0WslROjiaxjU1VIFEfjNsihjSbSfKAEPJ+eOK04n5poNHgQXyVRo0+2xtC9T3PM
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(366004)(376002)(396003)(316002)(6666004)(16576012)(38100700002)(31686004)(5660300002)(110136005)(478600001)(36756003)(8676002)(66946007)(16526019)(31696002)(2906002)(4326008)(956004)(6486002)(186003)(26005)(53546011)(2616005)(8936002)(86362001)(66556008)(54906003)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?UmtkTVpUcVJ1c1VxbnRPbXlkbXJkcWdVRlVDUzFWdTgxcUVtTTUyK01URzZK?=
 =?utf-8?B?S2VUNkYwZVhveG9Tbk9Ca2UwR0RtVUhvZEJYQXJOdTlrQjBrMWIzY3NxRW9x?=
 =?utf-8?B?Rk9sQUVMRGIvL3RSdzJWSjZZN2FRaVU1UUZXZjI2SDFCMG5TU2hEN29abDNi?=
 =?utf-8?B?eFpicFd3ZVVPWmY1TEhZejNXMERBRUpNZGcxSGYwWExLZ2JZUnFxa2lJcmtB?=
 =?utf-8?B?NzNOWVhyUURvTXNiZ2FnWmtINzFqdWx5N1FINlFGTlE5K1lwSDdQejBCdFpI?=
 =?utf-8?B?Q3ZLL1ZDZWJ6UUFEMFk4OTJVOUpickdXQ1VGK3E2dWlYOWZSbzVqVkFBaGRq?=
 =?utf-8?B?VHg0TWRlRmpFRTJNV3RQRW5vTE9TRnBOOFdEWnRpVFJGZnowOUNyTDFoQlRO?=
 =?utf-8?B?NjJRSjFrVnFUU2hQV0NrdktXcDNoSTRFbVR4ckRxYS9aTE1pZlBGZ1lBTVNZ?=
 =?utf-8?B?dW5mOWhDeUdadTZWVy8rbEZrcm9hdlFNQ0x5cjQxS2ViOXpMc0hHZzF2ajJi?=
 =?utf-8?B?K1AxS3ZFRCtUVjFjL1ZINm1Lb2YzeHRpbXRNVUxnSUc1Qld6ZWN6UFlDR3Rj?=
 =?utf-8?B?WmVvS01qeW5MYS9JWnNBeVJIczJWOEZkbGdSeWdiTXUyNFdOOXpvR3dWV0F3?=
 =?utf-8?B?MFBKVXh0OW03OVh1YUVVcmJob2NNejRHQkN1MGQwb1NZSWF0bVROVkgzQjlU?=
 =?utf-8?B?R2huRU5pMHhaZHEwSkRtUDJXdHUyN3ZXRDI3NS91aGQxU0hwZEhTbU0wZ3RI?=
 =?utf-8?B?ZlRoNFBvK21PdDlEMG0reDY3RXVWSWdmTTYxcE1XUVN6NnY2a05EUFFRNWha?=
 =?utf-8?B?eTljVGs1ZUkwYlpLcjBzRFIvTzM2ZlVic0FVeUZYa2dxbDgvSjZ4UGdhMGo5?=
 =?utf-8?B?N2pEeU5DSDBHeWx6UFExeGdzcSt0eklXMmJZVWVnRTZNakVFZGx5OVpMeTd5?=
 =?utf-8?B?MklCTjcrTU5pK0RHWWxyUjlRVVhpQk9UVVVZbVF5UVhKVFJCSnA1ZlY3NUYx?=
 =?utf-8?B?dEhwRjdxQkRMdkZ6aWNEUEdvMy85bG9HdTg2L3ZiNHd1L25XNWVWZ2tmcytH?=
 =?utf-8?B?RXkySnNYQkhJV0gvSFVTZ1VsZ0dQY0Y4WDFxS2xNSEJoWkw4OUxEazhqU21q?=
 =?utf-8?B?VDZzVGZsLzlSZzB3K2Y1U1A1d1hwRmpDczBhQjdENUl2Y2NpbXBMR0Y5c25n?=
 =?utf-8?B?RGMzaFVtL3diNGx6YUtmMVdZUzVsd0twazF6akpWeDR0R25sMjdmcC9YR0Nw?=
 =?utf-8?B?OUkrSFZLakdNSlhyWmZnTHhvL2d2WkE1TTdrbVFkcnNyUFJ1RjZ1VzZPeUJL?=
 =?utf-8?B?Ukc1QXVaSHo0MGZiTW14VE03UmhOQ0RHM09RVWcxMXFRRGVTSEs4TjFTWHBV?=
 =?utf-8?B?b2pOTE5GZjU4S0hnWmlmQWszRXJmVVc5a254V25mWjRiOEZITHQxdUhFUlNO?=
 =?utf-8?B?Tk9pVXJOU2tzRHNWdVRUK2g0Q3MrUWRKYlZMZ0E1VHQrOTM5RE9nMDMwSnFs?=
 =?utf-8?B?R09ZUWdWKzFkaW43dkg3MnJnNUtSemx3eGVMa0gvMVRZZHlWdDVXZW5veUNo?=
 =?utf-8?B?N1ZuM3c0bXBRaWpFZGxNN0dFMHpQQXVzZ2NSQmpCa1RTQkhxWVZiT1VHVXd5?=
 =?utf-8?B?anJoRS91MFhvWmlXTGJFa3l6TnFsZVdWd1k4bmE2RnE3U2xMTkZReG4yMmxU?=
 =?utf-8?B?SURXbDV4aVBMdllaa3ltWUJFU1RmV3ZaaXdYTXdHSVNCQWlvRHBBVHBJeERl?=
 =?utf-8?Q?V1DzpBzniPCTQST6ZkaUngzN9fIy4t2z+MKJ8Dx?=
X-MS-Exchange-CrossTenant-Network-Message-Id: ab3091b8-e052-4815-0702-08d935ac4c0b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 18:33:57.9132
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SSdpN6pWRaJCAU/Ikk01msq8VZnz3r5RFG9lqS/CikgKP/5HmEh7xg2vQb9/KW7s4/7YaA30bAMq3mKRofVEEPtJkfsxZaVGG+qmvuCwTLk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3797
X-OriginatorOrg: citrix.com

On 22/06/2021 16:20, Jan Beulich wrote:
> libxc generally uses uint32_t to represent domain IDs. This is fine as
> long as addresses of such variables aren't taken and then passed into
> hypercalls: to the hypervisor, a domain ID is a 16-bit value. Introduce
> a wrapper struct to deal with the issue. (On architectures with
> arguments passed in registers, an intermediate variable would have been
> created by the compiler already anyway, just one of the wrong type.)
>
> The public interface change is both source and binary compatible for
> the architectures we currently support.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Ian Jackson <iwj@xenproject.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> v2: Introduce wrapper struct in public interface.
> ---
> Together with the comment change I was half tempted to also rename the
> sub-function identifier to XENMEM_maximum_gfn. But I then decided this
> would go too far here.

We ought to fix it, but that could be a followup, or could be when this
interface disappears in ABIv2.

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 19:35:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 19:35:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146067.268697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvmBI-0005T8-BD; Tue, 22 Jun 2021 19:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146067.268697; Tue, 22 Jun 2021 19:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvmBI-0005T1-7Y; Tue, 22 Jun 2021 19:35:32 +0000
Received: by outflank-mailman (input) for mailman id 146067;
 Tue, 22 Jun 2021 19:35:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZL//=LQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lvmBH-0005Sv-85
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 19:35:31 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bdaeebac-cc0c-4144-9ecd-3d2f657fa5cc;
 Tue, 22 Jun 2021 19:35:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdaeebac-cc0c-4144-9ecd-3d2f657fa5cc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624390528;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=UxgAPpj3Bju409mglp5pUiFnbpcpj17y4j6EuXBk8ZY=;
  b=JR2KOG/bJWqC0I+rCixfl1XId5lNEWcm/wjBbTfQhRT80Hjk4qLW+n8U
   0nQF/g85gU/9XMIzaopZMc8nw+16wVPjDlUAtm46NwI4psoq+Stx6KmxN
   lFPi49896RXKodMe375ic2oJsVinTkct1h6YbnfwMqnV+PSD9O7Zb8tVl
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: snSCV/BtX0JFjoKmblLr2umn3XiSiFDmCjovm2Hcv+gCgcWbhgdJjRuz+u76NEWq+LDEecnBI1
 arFCyTuBHNDXbcRxZ+6Uu6LZho7dqgQhK3Y/OXX6atoiQID+WYv9COvV+2l5GhomKrg62EGnLH
 ofUyQ5fDogDg+kAzyYn7SD8k2FSBqal22BXWhJ06IxNJgElM16xlZIZt5H29m/F1FMJuna8cQF
 iqkfY0B+l6RQoHBMB+vdYam0FARL8ofC8ZYXYI8Jk61EzYwTrPEujWQFNwK9+jZzP56GHkVU/g
 3ZE=
X-SBRS: 5.1
X-MesageID: 46787766
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:LpnRLqiY9iI54JcxnInN8GdxRXBQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-IronPort-AV: E=Sophos;i="5.83,292,1616472000"; 
   d="scan'208";a="46787766"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VRuDmQGGiwolNgS1tvThDaMYzM4IFz/J4uhkRR1tgfJ962O4FDnQH3LYww7PPh/R9tCxK2MYRuyUycqMEnWHm91ZL9HHaCnL4ZYZMHRZ6XxGl3RfguOGUjCml025ua4vzGK9BsWj9jxAeqnrJGU1tbPK/7oxVSdJQl4zabZArwadof+rp3g3Ri+ZuI+ER84LffIqzvXolKyCReePbyAWFmGRBtSlLp6e0Ccd54g1DliN9hEsCpCFOHP5PwyxafgaB6mbvQbm4Ut6GjjG8RtdxRie/txotOXwCh+WhvbPqqCTWNm9DbS/oXpYMzyuesngfIIzmVj15GRgjNp6mbP7iA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UxgAPpj3Bju409mglp5pUiFnbpcpj17y4j6EuXBk8ZY=;
 b=JTRc/y6qnzTZ6p/KRfxXVYot6UQ9VjuwPpM06/zXjfz5e2w/bhCGp8+nNdQYyke3l4uXLMJTwuGiIWjYns1h02VcmBSt1+6wmz2O26QxRJ0XjbTRcUFIum1YJ+fuHXgdhd/09HyaV5lsNkpNuas/9e8LIhFcexdoH2GpFvZzcvXvlWLH3uAl8fRQRT7ZwIdxuZij/0bxZWET8VVn7pqBCQjSMpRJYa5c9MQhQWFkxUzJzgEKZL+0DVG4ReHSywQmpynHV9/2C32EuepKSWPUB9om0KJsdluIUNoUK7fk+zZKplBAjCMD4eRzAeow7FLdLVk+jc0lGffJn4VAnPC9GQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UxgAPpj3Bju409mglp5pUiFnbpcpj17y4j6EuXBk8ZY=;
 b=vrNUiKzS6fJJuQY9rOmxgIAFWwYBRNvYwKewDihH/McaDTR7ZaDA7w2jCtXGa3jzI85dMXYDxhbIxLDFfdm4CVUDc9MB4vgspxvileS1fERRux5zFXEeD6+iAz+hWdb5XhpWDVvw1Z0ZajnUZUsyeWagXgdeYt9YSh+q+AOAqEU=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Juergen Gross
	<jgross@suse.com>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
 <f56ceeaf-745b-9064-e6c1-83bdb0d04360@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 4/6] libxc: use multicall for memory-op on Linux (and
 Solaris)
Message-ID: <215dccb1-602b-20dd-2298-84ce11a797b7@citrix.com>
Date: Tue, 22 Jun 2021 20:35:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <f56ceeaf-745b-9064-e6c1-83bdb0d04360@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0497.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 44ad89ca-4f6a-4612-75f6-08d935b4e1d4
X-MS-TrafficTypeDiagnostic: BYAPR03MB3670:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB367009E138CE661DA1C7DDB1BA099@BYAPR03MB3670.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: gVvC9gT65ZdHUvSnTWxKXLnMhYHlcVZDl///rtKy4PMGm+aiaeHirRQTEr4PCjL9u69aCx2d9jwAlDGowYVzH14KubiIb5qDLTrUUlPa/B4drP+/Q/qQmnsig30RvrqX287FqR7vRwpD3ZrNSzbj84DGW5OCZYz4MLWrZ93mEJfmHgHQcFuzx/VL5lHi3pFj51oBz/zCwBCajS9X66HKkEoOgaIvhiN8W5u4ejrxGGTbulFn1BTqDetduJUbanuHOlPEmYckWfHRXMfIKrUrd5ZmZHZ4VAdqrbDyixWxMoBThvZWX7sRZ1osSQEJqvUFgwA5oPSa5rVTvkx6Cm277LCtP/DIsNILvh0mMU3XM0Ed4TFjRbm6feZw121IlxYBO3xzGr2LFnj+l4YAchVzF1vtvDVwjrj24IY41gB9c2vvlLLIu33UR738Y7gsTaVNFGoNq810mJXM5XHFOUXegCPtWCCcSPV+EPUpsfV6Lop4oU138YOJC/e6lgQItWd+tDdTRGujUR4x4c6npCMg/8VZTIiCMcMegPNk9YcFejvhsvCqWhKjRqf8E+Ej8L4HnH1bZnOmQSZ/OBxPWsudF8fXqNCKYH1kKsfpMadPistf7cNsioe1Oy4NzXMvXyp69J+H1/wcdv4Mi+Lbsg63kh+0+ZqSXLjDWt0uP8bt2TuVvJhEG+ZTPla6VEgcPfbT
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(396003)(346002)(366004)(39860400002)(5660300002)(86362001)(110136005)(316002)(31696002)(478600001)(54906003)(186003)(16526019)(53546011)(26005)(16576012)(2616005)(66556008)(66476007)(4326008)(956004)(2906002)(36756003)(66946007)(8936002)(38100700002)(31686004)(8676002)(6666004)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?UC9HMVdjanpWbFJxNFA5K3I3M0FZVS9pVWVZK3NGMDNqdDFxZk10VUtoQ2ZB?=
 =?utf-8?B?ekJKaDB1VzdUVXRCNW5vRG1uTE1JdmhlS29QVzM2dUdmQVBDTXpPOHhIWEVK?=
 =?utf-8?B?cFpDRTA2eDlBSWlXRmpmNzA4UUh1YjJURFdkd1lyaS9VcW5QdnJWbjdjUXpK?=
 =?utf-8?B?MXh6UmYzTDk4VnN5Um12ZzNOdnVqSXBqWk5KbTQ1eWJSVHZIZmF2QnJIWWZq?=
 =?utf-8?B?K2dRL0FCdFdGWUhDbDNZMkY0ZnFRMm02VW1uM1RpNXVXWWV0Unl2ZlkvT2xO?=
 =?utf-8?B?QyszZS9KWDVQbU5ocUxxb2dwQnI5U0ZpU2tRRTFPMldRb05vbjNudzFhSW80?=
 =?utf-8?B?ZmRBd04yZTZXY2N1cDE1cjFwTm5CSnhPNEdGMlNOSkxCRWtucFJ6ZTRXZkhM?=
 =?utf-8?B?QkVLZ25TOXB6Q3Nid1pNbWFVUUxMRDN3TUdKWjdCM2Q0NkNZdDc5NStxRHly?=
 =?utf-8?B?UXhTQWhsejVVZXBvR3NkZHExdjF1OXE0TFF6MVZmY2xEN2dlSDFMUzFwa2Iv?=
 =?utf-8?B?R1hMZ1dLa0VTQXREWWhVeU56SnV2MEpya2FCR2dXcjhWNG5Bc1BWYjBWT2VW?=
 =?utf-8?B?S0lTZm5vOWdvK0NaQzlmdDdrOTlQdHNHMXJOK1E3SnpQT1VuTFhRVmpieHRw?=
 =?utf-8?B?a3dkUXdLaHgybXdLVHJBUWtEUk43NmxnaFlwbE9qWTZRYkJVbU5WVHlwMEhK?=
 =?utf-8?B?bGRNMVk2ZjIvQW4rTzJJREtGd3R5dHl4M3lGWXNUVmI2aUtOdEh6ZjQ0MDgv?=
 =?utf-8?B?OVRHUWRCOXd2WG42ZzRxNkZkVlNOMjlvWlBjNDZpdjlYR0hDdWJSYmJlNE9L?=
 =?utf-8?B?cEJOeW9lNGRLVnJLNE9hYmNmeHpZekxSVXE3NTRUNHZOLzRaSGRteld4cHV6?=
 =?utf-8?B?N1R6OFBzWThWUWJaeW4xenI2U0syVmFDZXBIRU41WSs4RkNVem80RHh3cGU3?=
 =?utf-8?B?UkFvRGxQR24rQWJJVFk3MTZ5ODdaOFVrSlorV0d0RjVUYk9IRW5NTXFQeWhr?=
 =?utf-8?B?Q1JqN2ducnFyZmR3OVIvb25UQndHVG1jSmZFQ2lTemNKRCtIdVF4T0IwNndI?=
 =?utf-8?B?eFArQnMrS2diZjlKWjNpbkxKbTNKRFRYUlB4OHBxenRBSWlidElYcHhVUC9I?=
 =?utf-8?B?ZkJDVVk1M1hZSDlVaHlFS3lwOUI4REErV3N2Um9tbG1mSnc4dnBzWlBZYWVN?=
 =?utf-8?B?SFRzQlZ6S0tURDFkNWt6eGVHVzJ6MEU5Qy9leFNIK2ZIUFBSbkFjYnV0UUZG?=
 =?utf-8?B?UEx2TUdobG53SlFtRFlTZHdnOWtGc2RkK1p2dTVGTWhxOURHZDFXVUljLzlB?=
 =?utf-8?B?WEY1VC9uYUY4YkFHeWsrTHB1OFZxNXB0Q2Y1VElQYzdyZzc1ZUNRYVpqclQx?=
 =?utf-8?B?UnR4OXNiWHJERHVPMU5iT1NBd2NhRXJlZUVJeUxwMjJiWlZPTjhJRXp4ZTd0?=
 =?utf-8?B?MnhzM0hqUjVxRTRXbnpaRTlmcGlFNnk1cmVva2ozVk5hTktNSFo0N01NNW5T?=
 =?utf-8?B?MlFldkZmaElmU24rRHY3UVVrNysrak1vN0hrN083c3JQeGl0MlJrdzhrcGJU?=
 =?utf-8?B?QlhvRUl2SDMydUNTVm1reGU5RktFU0pEZ3NRSHNOd0xDVzhHeXlyMXdqNzIy?=
 =?utf-8?B?c3pxcWZYak55d213ZjNRL2NnRmYyVnV1ZjZXY2t1WWZGUndVekhqNUVTcUh6?=
 =?utf-8?B?WCtiSlE2MzVWenVnRVlCRGM3d0tBd2NaWk9tTmhGSitBVUhxSEhaV3FRV2oy?=
 =?utf-8?Q?A0dkqebPpj6jiwPOn6Lo/zdA6nny5mrXgAYRFEs?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 44ad89ca-4f6a-4612-75f6-08d935b4e1d4
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2021 19:35:25.1806
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: c5mN59aBmZZpuCYtGnHLVAxGsC1Rm/UiyRFKS1l4CHxSUMTT7ND9+sNGhDHbg87DddLyM/Y6PnQhkSgvFCiN2sEiTwX1mYLIZ/IdNRc4Y+c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3670
X-OriginatorOrg: citrix.com

On 22/06/2021 16:19, Jan Beulich wrote:
> Some sub-functions, XENMEM_maximum_gpfn and XENMEM_maximum_ram_page in
> particular, can return values requiring more than 31 bits to represent.
> Hence we cannot issue the hypercall directly when the return value of
> ioctl() is used to propagate this value (note that this is not the case
> for the BSDs, and MiniOS already wraps all hypercalls in a multicall).

It took me 3 readings to realise you weren't saying that BSDs used
multicall (which they don't).

Instead, I'd suggest rewording it.

"Some sub-functions, XENMEM_maximum_gpfn and XENMEM_maximum_ram_page in
particular, can return values requiring more than 31 bits to represent.

The BSDs deliberately avoid using the ioctl() return value, and MiniOS
already uses a multicall.  They aren't affected.

Linux and Solaris however do use the ioctl() return value, which must be
avoided in this case to avoid truncation."

Or something similar.

With a suitable improvement along these lines, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Tue Jun 22 20:24:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 20:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146073.268709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvmwp-0001rs-Vr; Tue, 22 Jun 2021 20:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146073.268709; Tue, 22 Jun 2021 20:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvmwp-0001rl-So; Tue, 22 Jun 2021 20:24:39 +0000
Received: by outflank-mailman (input) for mailman id 146073;
 Tue, 22 Jun 2021 20:24:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvmwo-0001rb-EV; Tue, 22 Jun 2021 20:24:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvmwo-0002i2-5B; Tue, 22 Jun 2021 20:24:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvmwn-0001wf-T1; Tue, 22 Jun 2021 20:24:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvmwn-0008Ji-SV; Tue, 22 Jun 2021 20:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mOyUjNftX81+ixDWE/B/0LTMvfrLh++6WrrfgFRkcx4=; b=OWTnODISX6nyPP98Rs4qKHxdKQ
	d7KSDU4eqWVHy2reQwsWa35DAx0aWqvcwGs1scRQPBlyBLpUcRUOJpcvFDFxOpBO/0FJ8v5YJxDvk
	FAUrpvmT3N4ox2lB97HCd8D80JQ7FFDJiCpCGZTxr40UgG9z7lwNQNJbn61iWUhHoWCw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162977: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
X-Osstest-Versions-That:
    xen=c9b59f9032d41be8bade8a25d9148cf6ed203704
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Jun 2021 20:24:37 +0000

flight 162977 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162977/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7
baseline version:
 xen                  c9b59f9032d41be8bade8a25d9148cf6ed203704

Last test of basis   162973  2021-06-22 12:00:28 Z    0 days
Testing same since   162977  2021-06-22 18:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c9b59f9032..c7691f5e34  c7691f5e340f3b579d40c77548f63133cdf5aac7 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 21:03:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 21:03:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146087.268729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvnY0-0005yD-3m; Tue, 22 Jun 2021 21:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146087.268729; Tue, 22 Jun 2021 21:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvnY0-0005y6-0T; Tue, 22 Jun 2021 21:03:04 +0000
Received: by outflank-mailman (input) for mailman id 146087;
 Tue, 22 Jun 2021 21:03:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UKzY=LQ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lvnXy-0005y0-H6
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 21:03:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03166124-228a-4691-bc76-5eac6cf225e7;
 Tue, 22 Jun 2021 21:03:01 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id B82596108E;
 Tue, 22 Jun 2021 21:02:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03166124-228a-4691-bc76-5eac6cf225e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624395780;
	bh=v96U5qnlOhDf3w1FKsu7SjrxREMp2X2HSv3JOsCkgmU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jp86WFjWXckU//EExyz6NMfilYlNSGD/Sp+e74iycIMnJPnrnvb2f/IMZhcf30u/Y
	 nZN/eIYPHlNFb7aaAK4BGrzQ8h60A0DeuaUctP8s9mDpf/LCXTiw3xhjYNzVwnhtzX
	 TCr/gTHfn3WY41P/JfXANtN8uFlsPQPDxHmgwY0xOxtbMzYm2w8bezm6pQenS5mO9T
	 QACOuqiKoEaVeTvNfrREoxgLiMZfrcQDhTBPBD0g1ngPW8KyWNjA64wNxxoLTDCtna
	 okD7zfnO67pIIlP+aLcNDcQOeKWv20TpkQkk9C2UIZdw5p+0AdNoxQDYVfglebC4DR
	 9wbHzTawfV26w==
Date: Tue, 22 Jun 2021 14:02:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Claire Chang <tientzu@chromium.org>
cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
    Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
    Frank Rowand <frowand.list@gmail.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, 
    jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
    Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
    paulus@samba.org, 
    "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
    sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>, 
    grant.likely@arm.com, xypron.glpk@gmx.de, 
    Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
    bauerman@linux.ibm.com, peterz@infradead.org, 
    Greg KH <gregkh@linuxfoundation.org>, 
    Saravana Kannan <saravanak@google.com>, 
    "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
    heikki.krogerus@linux.intel.com, 
    Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
    Randy Dunlap <rdunlap@infradead.org>, 
    Dan Williams <dan.j.williams@intel.com>, 
    Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
    linux-devicetree <devicetree@vger.kernel.org>, 
    lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org, 
    xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>, 
    Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org, 
    bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, 
    daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
    jxgao@google.com, joonas.lahtinen@linux.intel.com, 
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
    matthew.auld@intel.com, rodrigo.vivi@intel.com, 
    thomas.hellstrom@linux.intel.com, thomas.lendacky@amd.com
Subject: Re: [PATCH v14 01/12] swiotlb: Refactor swiotlb init functions
In-Reply-To: <20210619034043.199220-2-tientzu@chromium.org>
Message-ID: <alpine.DEB.2.21.2106221402390.24906@sstabellini-ThinkPad-T480s>
References: <20210619034043.199220-1-tientzu@chromium.org> <20210619034043.199220-2-tientzu@chromium.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 19 Jun 2021, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
>  1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 52e2ac526757..1f9b2b9e7490 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
>  	memset(vaddr, 0, bytes);
>  }
>  
> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> +				    unsigned long nslabs, bool late_alloc)
>  {
> +	void *vaddr = phys_to_virt(start);
>  	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> +
> +	mem->nslabs = nslabs;
> +	mem->start = start;
> +	mem->end = mem->start + bytes;
> +	mem->index = 0;
> +	mem->late_alloc = late_alloc;
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +	memset(vaddr, 0, bytes);
> +}
> +
> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +{
>  	struct io_tlb_mem *mem;
>  	size_t alloc_size;
>  
> @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  	if (!mem)
>  		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>  		      __func__, alloc_size, PAGE_SIZE);
> -	mem->nslabs = nslabs;
> -	mem->start = __pa(tlb);
> -	mem->end = mem->start + bytes;
> -	mem->index = 0;
> -	spin_lock_init(&mem->lock);
> -	for (i = 0; i < mem->nslabs; i++) {
> -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> -		mem->slots[i].alloc_size = 0;
> -	}
> +
> +	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>  
>  	io_tlb_default_mem = mem;
>  	if (verbose)
> @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
>  int
>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  {
> -	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>  	struct io_tlb_mem *mem;
> +	unsigned long bytes = nslabs << IO_TLB_SHIFT;
>  
>  	if (swiotlb_force == SWIOTLB_NO_FORCE)
>  		return 0;
> @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  	if (!mem)
>  		return -ENOMEM;
>  
> -	mem->nslabs = nslabs;
> -	mem->start = virt_to_phys(tlb);
> -	mem->end = mem->start + bytes;
> -	mem->index = 0;
> -	mem->late_alloc = 1;
> -	spin_lock_init(&mem->lock);
> -	for (i = 0; i < mem->nslabs; i++) {
> -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> -		mem->slots[i].alloc_size = 0;
> -	}
> -
> +	memset(mem, 0, sizeof(*mem));
>  	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
> -	memset(tlb, 0, bytes);
> +	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>  
>  	io_tlb_default_mem = mem;
>  	swiotlb_print_info();
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 21:07:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 21:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146093.268739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvnbo-0006gb-PU; Tue, 22 Jun 2021 21:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146093.268739; Tue, 22 Jun 2021 21:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvnbo-0006gU-ML; Tue, 22 Jun 2021 21:07:00 +0000
Received: by outflank-mailman (input) for mailman id 146093;
 Tue, 22 Jun 2021 21:06:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UKzY=LQ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lvnbn-0006gO-4r
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 21:06:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7069e148-3d84-4315-8a0c-80f89a771609;
 Tue, 22 Jun 2021 21:06:58 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6C7A061001;
 Tue, 22 Jun 2021 21:06:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7069e148-3d84-4315-8a0c-80f89a771609
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624396017;
	bh=ErG3K5MuQrOEmX4SnKcaihrG+b9kb2e1ERDZE7FGN8M=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mZIyJAmKOvesLYQLDoIbFdEQzYTGHB2H8MZRjAgf3sUo3Ll3A0f2OdDwtqAxh1aSl
	 78F0epC30//7iAej9ekmTTzFuhpjM857uxS7fOKTktHVyvK7A/1ZpnRgPUVkp4D6R+
	 SZjxz6tqK1lUNO/H8dpEaGHPrshBPut80pjHT4fee73Z/X6ESHpqFls7IapnO5jqAc
	 EUhgFL6Iu0jhxMrThlYemLMjk8kebbPG6zGqCnlSwvLnkubz8H28+B1RY2LVESYy1N
	 30qJqeSFC4b80P5vc+MUpCC57en5cG5LPAdslsph9fvwc2iezHsw5W1hXDaQkJbZvS
	 d5VD+OhW/9zlQ==
Date: Tue, 22 Jun 2021 14:06:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul.Singh@arm.com
cc: "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, 
    "fnuv@xilinx.com" <fnuv@xilinx.com>, sstabellini@kernel.org
Subject: Re: smmuv1 breakage
In-Reply-To: <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2106221405220.24906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s> <791BFC00-6A50-48D2-A208-E529B887441F@arm.com> <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1392728712-1624396017=:24906"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1392728712-1624396017=:24906
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

Hi Rahul,

Do you have an opinion on how we should move forward on this?

Do you think it is OK to go for a full revert of "xen/arm: smmuv1:
Intelligent SMR allocation" or do you think it is best to go with an
alternative fix? If so, do you have something in mind?



On Tue, 15 Jun 2021, Stefano Stabellini wrote:
> On Tue, 15 Jun 2021, Rahul Singh wrote:
> > Hi Stefano
> > 
> > > On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > 
> > > Hi Rahul,
> > > 
> > > Unfortunately, after bisecting, I discovered a few more breakages due to
> > > your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
> > > attached the DTB as reference. Please note that I made sure to
> > > cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
> > > the S2CR" during bisection. So the errors are present also on staging.
> > > 
> > > The first breakage is an error at boot time in smmu.c#find_smmu_master,
> > > see log1. I think it is due to the lack of ability to parse the new smmu
> > > bindings in the old smmu driver.
> > > 
> > > After removing all the "smmus" and "#stream-id-cells" properties in
> > > device tree, I get past the previous error, everything seems to be OK at
> > > early boot, but I actually get SMMU errors as soon as dom0 starts
> > > using devices:
> > > 
> > > (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
> > > (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
> > 
> >  This fault is an "Unidentified stream fault" for StreamID 0x877, which means the SMMU SMR is not configured for StreamID 0x877.
> > 
> > 
> > > [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > > [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> > > 
> > > Do you think you'll be able to help fix them?
> > > 
> > > 
> > > You should be able to reproduce the two issues using Xilinx QEMU (but to
> > > be honest I haven't tested it on QEMU yet, I was testing on real
> > > hardware):
> > > - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
> > >  ./configure  --target-list=aarch64-softmmu
> > >  make
> > > - clone and build git://github.com/Xilinx/qemu-devicetrees.git
> > > - use the attached script to run it
> > >    - kernel can be upstream defconfig 5.10
> > > 
> > 
> > I tried to reproduce the issue on Xilinx QEMU as per the steps shared above 
> > but I am not observing any issue on Xilinx QEMU.
> 
> I tried on QEMU and it doesn't repro. I cannot explain why it works on
> QEMU and it fails on real hardware.
> 
> 
> > I also tested and confirmed on QEMU that the SMMU is configured correctly,
> > specifically for StreamID 0x877 as well as for other StreamIDs.
> > 
> > I checked the xen.dtb you shared and found that there is no "#stream-id-cells"
> > property in the master device, but the "mmu-masters" property is present in the
> > smmu node. For the legacy SMMU binding we need both "#stream-id-cells" and "mmu-masters".
> > If you need to use the new SMMU binding, please add the "#iommu-cells"
> > property in the smmu node and the "iommus" property in the master device.
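The two binding styles being contrasted look roughly like this in device tree source (a sketch; node names and unit addresses are illustrative, the 0x877 stream ID is taken from the fault above):

```
/* Legacy binding: "mmu-masters" on the SMMU node plus
 * "#stream-id-cells" on each master device. */
smmu: smmu@fd800000 {
        compatible = "arm,mmu-500";
        mmu-masters = <&eth 0x877>;
};
eth: ethernet@ff0e0000 {
        #stream-id-cells = <1>;
};

/* New generic binding: "#iommu-cells" on the SMMU node plus
 * "iommus" on each master device. */
smmu: smmu@fd800000 {
        compatible = "arm,mmu-500";
        #iommu-cells = <1>;
};
eth: ethernet@ff0e0000 {
        iommus = <&smmu 0x877>;
};
```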
> 
> In regards to the missing "stream-id-cells" property, I shared the wrong
> dtb before, sorry. I was running a number of tests and I might have
> picked the wrong file. The proper dtb comes with "stream-id-cells" for
> the 0x877 device, see attached.
> 
> 
> 
> > Can you please share the xen boot logs with me so that I can debug further why the error is observed? 
> 
> See attached. I did some debugging and discovered that it crashes while
> accessing master->of_node in find_smmu_master. If I revert your series,
> the crash goes away. It is very strange because your patches don't touch
> find_smmu_master or insert_smmu_master directly.
> 
> I did a git reset --hard on the commit "xen/arm: smmuv1: Add a stream
> map entry iterator" and it worked, which points to "xen/arm: smmuv1:
> Intelligent SMR allocation" being the problem, even if I have the revert
> cherry-picked on top. Maybe the revert is not reverting enough?
> 
> After this test, I switched back to staging and did:
> git revert 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
> git revert 0435784cc75dcfef3b5f59c29deb1dbb84265ddb
> 
> And it worked! So the issue truly is that
> 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a doesn't revert "enough".
> See "full-revert" for the patch reverting the remaining code. That on
> top of staging fixes boot for me.
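The underlying question, whether a revert commit actually undoes everything the original commit changed, can be checked mechanically by diffing against the pre-change state. A self-contained sketch with a throwaway demo repo (not the xen tree; file and commit names are illustrative):

```shell
# Demo: an incomplete "revert" leaves a residual diff against the commit
# before the original change, which `git diff <orig>^ HEAD -- <paths>` exposes.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m base

printf 'line1\n' > smmu.c
git add smmu.c
git -c user.email=a@b -c user.name=t commit -q -m 'original change'

# Incomplete "revert": it changes the file but does not restore the
# pre-change state (the file did not exist before the original commit).
printf 'leftover\n' > smmu.c
git add smmu.c
git -c user.email=a@b -c user.name=t commit -q -m 'revert (incomplete)'

# A residual diff against the pre-change commit reveals the incomplete revert.
git diff --quiet HEAD~2 HEAD -- smmu.c && echo "fully reverted" || echo "revert incomplete"
```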
--8323329-1392728712-1624396017=:24906--


From xen-devel-bounces@lists.xenproject.org Tue Jun 22 22:14:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Jun 2021 22:14:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146099.268751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvofM-0004g3-VF; Tue, 22 Jun 2021 22:14:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146099.268751; Tue, 22 Jun 2021 22:14:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvofM-0004fw-Rt; Tue, 22 Jun 2021 22:14:44 +0000
Received: by outflank-mailman (input) for mailman id 146099;
 Tue, 22 Jun 2021 22:14:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wlN=LQ=gmail.com=neilsikka@srs-us1.protection.inumbo.net>)
 id 1lvofL-0004fq-SJ
 for xen-devel@lists.xenproject.org; Tue, 22 Jun 2021 22:14:43 +0000
Received: from mail-ed1-x52f.google.com (unknown [2a00:1450:4864:20::52f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d72308e-ec4a-461a-8cbb-6445ecc69c20;
 Tue, 22 Jun 2021 22:14:42 +0000 (UTC)
Received: by mail-ed1-x52f.google.com with SMTP id m14so785566edp.9
 for <xen-devel@lists.xenproject.org>; Tue, 22 Jun 2021 15:14:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d72308e-ec4a-461a-8cbb-6445ecc69c20
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=gQhxyhRnTRTY0cR8GsvGYgrV9DB7RNLvWVlnB6Bthts=;
        b=Lh8JSPc1M0omiw4UZ0sBQxYU+C1Zpod/Msvcbpq/60CtEtcmuw8x9HKFXtLSJlCxcZ
         g7Jmnl9tUAzCdd9jloamHl5B2OtYH0yJdXlURQT5uVvciyQdNO3IJeDQSOjrZqp/x6v+
         gm3rKmqMJ7ZVnCsbY2luiB5PKnHvMyA5ZSvxyaAFFJtmpASerC+qb2M73pkCMpaMcIMR
         SrGrTnxMeOKOiTHrKtbarSrKfQAKK/NfgHAopwUNl8eYQqhDtjhneaIme+2AOhWFQdkp
         THKxDUdkhaXO00FTWqP4vpKFOTNng89Rq/DBAa9HH1LtkRFIkDqse2CLnu7ywpTBYa/3
         OQQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=gQhxyhRnTRTY0cR8GsvGYgrV9DB7RNLvWVlnB6Bthts=;
        b=E2im2DvBJPDd4ngORayOiO+cV+2PFFZFaTK91RNX/U3zeh38xZ8xY1C9koKzulRwl3
         amt3p5x3G5e/qDuxVikhB4a1NmLkUmU0KLHF1d2N2P9P4u8PfDqbpdZTItJZ+Q2uAzzo
         EXIbr1iawjxb68WPJcUw95+faMulYGESiuuQD1SWtErBT7VsfEYfVDNc9JczgBhJHhwg
         t2z/P0DUDgGxyrI2ANzzHqR360n9wWkiSsCvrYAY95gcVZoc7CUlVijjGIdQ/AJlTajY
         LPpYLN6MM36L2Sek83P3ZXHzZhLWmwcQErEbAQnZsOea6RWzrn2D6yoBEpN9EJNRTJa0
         08LA==
X-Gm-Message-State: AOAM532IELkLXl9X2E6s+74eRekvHukb6fc1HpuzhtuyD4VCbNeNlsli
	uVFqzA0Npd0y58Gmfwfmr0PeiTVw7IUBxYA2ugw=
X-Google-Smtp-Source: ABdhPJx5ubdq1k8iUYuQVNi+WzSoOERtGQEBP5z7vjQZOdSgGI0F08X+GnmH45Vv9SoCYeuU+K9HMrTKLA/crd9pgqo=
X-Received: by 2002:a50:fd1a:: with SMTP id i26mr7972891eds.181.1624400081183;
 Tue, 22 Jun 2021 15:14:41 -0700 (PDT)
MIME-Version: 1.0
References: <CAHPMNWcQgUEvd7aYiNx1U+wphmuJr_Q8JXWw3mE812uN5hEARQ@mail.gmail.com>
 <CABfawhk4D+30_DX5cwYG-=SthQ=CXLRLL7VeXeVK48Oj0efn2Q@mail.gmail.com>
 <CAHPMNWd1QFYfbuPdEPZgwKrXE6dhi_X-bqZfPQj4zo4AioL81w@mail.gmail.com>
 <CABfawh=W92ioejsZ-zu+WVofw_jfxVLteVieC2Ysfxd3Wrs+Og@mail.gmail.com>
 <CAHPMNWcfz+9zUv7gfwu5V6zPVBHiFc-EZDJ70-4DWHjOtyBOHg@mail.gmail.com> <CABfawhkMb8Pnr6+NxsoaXKCyaBH8Tax8_1ABHjyGGp5j9hOkVA@mail.gmail.com>
In-Reply-To: <CABfawhkMb8Pnr6+NxsoaXKCyaBH8Tax8_1ABHjyGGp5j9hOkVA@mail.gmail.com>
From: Neil Sikka <neilsikka@gmail.com>
Date: Tue, 22 Jun 2021 18:14:29 -0400
Message-ID: <CAHPMNWc4CjdpusJ_8RsdE1UMd1KjbTaRE7=cxxvd01A=tmWVcg@mail.gmail.com>
Subject: Re: Windows 10 Kernel Debugging on Xen
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000cfe63405c5621dab"

--000000000000cfe63405c5621dab
Content-Type: text/plain; charset="UTF-8"

I figured it out. Microsoft did not document that testsigning needs to be
enabled for kdnet to work.
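For reference, the debugee-side setup being described would look roughly like this (a sketch; run from an elevated prompt, with the host IP and port as placeholders):

```
REM Enable test signing, point KDNET at the debugger host, then reboot.
bcdedit /set testsigning on
kdnet.exe 192.0.2.10 50000
shutdown /r /t 0
```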

On Tue, Jun 22, 2021 at 2:12 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com>
wrote:

> Make sure windbg is already waiting for the connection from the
> debugee by the time Windows starts booting. If you try to attach
> windbg later it won't work. It worked for me but obviously YMMV.
>
> Tamas
>
> On Tue, Jun 22, 2021 at 2:07 PM Neil Sikka <neilsikka@gmail.com> wrote:
> >
> > I tried that, but it seems like I'm getting an interrupt storm on the
> debugger VM (CPU spends all its time in the kernel) when I try to attach
> the debugger. This observation furthers my suspicion that there is
> communication, but there is something wrong with the protocol...
> >
> > On Tue, Jun 22, 2021 at 12:43 PM Tamas K Lengyel <
> tamas.k.lengyel@gmail.com> wrote:
> >>
> >> I used Xen 4.15 and a pretty new version of Windows 10. It is a bit
> >> finicky, you have to run the debug commands on the debugee and then
> >> reboot. When the VM is rebooting the domain ID changes so you have to
> >> start the serial bridge then. Windbg will attach afterwards. Just make
> >> sure both VMs have serial='pty' set in their config file.
> >>
> >> Tamas
> >>
> >> On Tue, Jun 22, 2021 at 12:33 PM Neil Sikka <neilsikka@gmail.com>
> wrote:
> >> >
> >> > Thanks for the quick response, Tamas. I tried what you said and
> windbg waits and the debugee hangs when I click the break button in windbg,
> but I don't see any output in windbg. This means that there is SOME
> communication over the serial port that causes the debugee to hang when I
> click break. Could it be a debugger protocol issue? I also tried the
> guidance here by running the crlf program:
> >> > https://www.qubes-os.org/doc/windows-debugging/
> >> > But windbg waits and the debugee hangs in a similar manner.
> >> >
> >> > What versions of Windows and Xen are you using?
> >> >
> >> > On Tue, Jun 22, 2021 at 12:10 PM Tamas K Lengyel <
> tamas.k.lengyel@gmail.com> wrote:
> >> >>
> >> >> I have managed to get windbg working with a serial bridge between two
> >> >> Win10 VMs using the following script:
> >> >>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/serial-bridge.sh
> .
> >> >> The debugee has to enable a couple options so that windbg can attach:
> >> >>
> https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/scripts/debug.cmd
> .
> >> >>
> >> >> Tamas
> >> >>
> >> >> On Tue, Jun 22, 2021 at 12:01 PM Neil Sikka <neilsikka@gmail.com>
> wrote:
> >> >> >
> >> >> > Hello,
> >> >> > Has anyone gotten a Windows 10 (version 1709 or later) kernel
> debugger attached when running the Windows 10 debugger VM and the Windows 10
> debugee VM on Xen 4.13.0 hypervisor? I am getting a "NIC hardware
> initialization failed" error. I tried the suggestions in the discussion
> here (https://bugzilla.redhat.com/show_bug.cgi?id=1947015):
> >> >> > -cpu
> Skylake-Server-IBRS,ss=on,vmx=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,amd-ssbd=on,
> \
> >> >> >
> skip-l1dfl-vmentry=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=KVMKVMKVM
> >> >> >
> >> >> > note: I had to remove the following 2 arguments due to errors from
> QEMU:
> >> >> > pschange-mc-no=on
> >> >> > hv_vpindex
> >> >> >
> >> >> > Here was the error:
> >> >> > C:\Users\user\Desktop\oldDebuggers\x64>kdnet.exe
> >> >> >
> >> >> > Network debugging is supported on the following NICs:
> >> >> > busparams=0.4.0, Intel(R) PRO/1000 MT Network Connection, Plugged
> in.
> >> >> > The Microsoft hypervisor running this VM does not support KDNET.
> >> >> > Please upgrade to the hypervisor shipped in Windows 8 or WS2012 or
> later.
> >> >> >
> >> >> > KDNET initialization failed.  Status = 0xC0000182.
> >> >> > NIC hardware initialization failed.
> >> >> >
> >> >> > I am using an Intel e1000 NIC emulated through QEMU because its
> supposedly a supported NIC for Windows kernel NET debugging.
> >> >> >
> >> >> > Thanks in Advance!
> >> >> >
> >> >> > --
> >> >> > My Blog: http://www.neilscomputerblog.blogspot.com/
> >> >> > Twitter: @neilsikka
> >> >
> >> >
> >> >
> >> > --
> >> > My Blog: http://www.neilscomputerblog.blogspot.com/
> >> > Twitter: @neilsikka
> >
> >
> >
> > --
> > My Blog: http://www.neilscomputerblog.blogspot.com/
> > Twitter: @neilsikka
>
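For context on the serial bridge discussed above: it assumes each guest's
emulated COM1 is backed by a host pty, which is what serial = 'pty' in the
guest's xl configuration provides. A hypothetical minimal fragment (the
name, memory, and vcpu values are illustrative, not taken from the thread;
only the serial line matters here):

```
# Hypothetical xl guest config fragment (HVM guest).
# serial = 'pty' backs the guest's emulated COM1 with a host pty
# that a bridge script can then connect to.
name   = "win10-debuggee"
type   = "hvm"
memory = 4096
vcpus  = 2
serial = 'pty'
```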


-- 
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka
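Conceptually, the serial-bridge.sh script linked in the thread shuttles bytes
between the two domains' pty devices (the real script is shell and uses the
hypervisor tooling to find the ptys; this is not that script). A minimal
single-pass sketch of the byte-shuttling idea in Python, with pipes standing
in for the two serial ptys:

```python
import os
import selectors

def bridge(fd_a: int, fd_b: int, timeout: float = 1.0) -> None:
    """Copy whatever is readable on either descriptor to the other one.

    A real serial bridge would loop over this forever with the two
    guests' pty file descriptors; here we do a single pass so the
    demo terminates.
    """
    sel = selectors.DefaultSelector()
    sel.register(fd_a, selectors.EVENT_READ, fd_b)  # bytes on a -> write to b
    sel.register(fd_b, selectors.EVENT_READ, fd_a)  # bytes on b -> write to a
    for key, _ in sel.select(timeout):
        data = os.read(key.fd, 4096)
        if data:
            os.write(key.data, data)
    sel.close()

if __name__ == "__main__":
    # Two pipes stand in for the debuggee's and debugger's serial ptys.
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    os.write(w1, b"kd packet")   # the "debuggee" end emits bytes
    bridge(r1, w2)               # one pass of the bridge
    assert os.read(r2, 4096) == b"kd packet"  # the "debugger" end receives them
```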

--000000000000cfe63405c5621dab
Content-Type: text/plain; charset="UTF-8"

I figured it out. Microsoft did not document that testsigning needs to be
enabled for kdnet to work.

On Tue, Jun 22, 2021 at 2:12 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
> Make sure windbg is already waiting for the connection from the
> debuggee by the time Windows starts booting. If you try to attach
> windbg later it won't work. It worked for me, but obviously YMMV.
>
> Tamas
>
> On Tue, Jun 22, 2021 at 2:07 PM Neil Sikka <neilsikka@gmail.com> wrote:
> >
> > I tried that, but it seems like I'm getting an interrupt storm on
> > the debugger VM (the CPU spends all its time in the kernel) when I
> > try to attach the debugger. This observation furthers my suspicion
> > that there is communication, but there is something wrong with the
> > protocol...
> >
> > On Tue, Jun 22, 2021 at 12:43 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
> >>
> >> I used Xen 4.15 and a pretty new version of Windows 10. It is a bit
> >> finicky: you have to run the debug commands on the debuggee and then
> >> reboot. When the VM is rebooting the domain ID changes, so you have
> >> to start the serial bridge then. Windbg will attach afterwards. Just
> >> make sure both VMs have serial='pty' set in their config files.
> >>
> >> Tamas

--
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

--000000000000cfe63405c5621dab--
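The resolution above (enabling test signing on the debuggee) together with
the debug.cmd script referenced earlier corresponds to standard bcdedit
settings along these lines; the serial debugport and baud rate shown are
assumptions for the serial-bridge setup, not values taken from the thread:

```
:: Run in an elevated command prompt on the Windows 10 debuggee, then reboot.
bcdedit /set testsigning on
:: Enable kernel debugging on the debuggee.
bcdedit /debug on
:: For the serial-bridge approach: debug over COM1
:: (port and baud rate are assumptions).
bcdedit /dbgsettings serial debugport:1 baudrate:115200
```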


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 04:33:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 04:33:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146116.268792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvuZo-0005Pn-Ko; Wed, 23 Jun 2021 04:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146116.268792; Wed, 23 Jun 2021 04:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvuZo-0005PL-Dd; Wed, 23 Jun 2021 04:33:24 +0000
Received: by outflank-mailman (input) for mailman id 146116;
 Wed, 23 Jun 2021 04:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvuZm-0005PB-Nf; Wed, 23 Jun 2021 04:33:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvuZm-0004H8-AS; Wed, 23 Jun 2021 04:33:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvuZl-0008H2-Ut; Wed, 23 Jun 2021 04:33:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvuZl-0005uq-UM; Wed, 23 Jun 2021 04:33:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0JrKejV00ZyyLf6yX5IMBSurgPNzcbUpqjHqYAKCkYo=; b=fVJ/l1RcpeO18CEiIvDivFzuo/
	qNZOsUkQwPaeMtlETBvhu8MJ7veS1ZC3o5vY0v1s4rUz5CJJARvXn1DSMHt5WmsyF+IRoGteVhZ2V
	NZeXGfJUJ253N6d/V1cjUIHWCIcXqgwzCrycjSfjcZQe2DzAhwIY/+0vRBnwDa+kbti0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162968: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a96bfed64c8986d6404e553f18203cae1f5ac7e6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 04:33:21 +0000

flight 162968 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162968/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a96bfed64c8986d6404e553f18203cae1f5ac7e6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  326 days
Failing since        152366  2020-08-01 20:49:34 Z  325 days  552 attempts
Testing same since   162968  2021-06-22 11:25:31 Z    0 days    1 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1688799 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 05:50:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 05:50:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146123.268805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvvm7-0004f9-Ip; Wed, 23 Jun 2021 05:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146123.268805; Wed, 23 Jun 2021 05:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvvm7-0004f2-Fp; Wed, 23 Jun 2021 05:50:11 +0000
Received: by outflank-mailman (input) for mailman id 146123;
 Wed, 23 Jun 2021 05:50:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvvm6-0004es-7U; Wed, 23 Jun 2021 05:50:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvvm5-0005lo-H3; Wed, 23 Jun 2021 05:50:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvvm5-0002cI-9M; Wed, 23 Jun 2021 05:50:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvvm5-0007AJ-8r; Wed, 23 Jun 2021 05:50:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Om7bOXf0oo+hIc0+AJ7A5Acz1ZfUWGa2GrwqHT2+MT8=; b=X3Fxe1gL+l5BYDkhdnpqAaaVdr
	x8xLxJ9DAg/cBlFR5dzEHu4UEEUfkaL33zjpJMd47TwpZS32/a+IVveqTfWZpWAOjI095VjhxjWFB
	hfHM9YfgeKWTOzN99JfUuQCvck2HOMl4CPQUUgDfncg0cyGlvxRbefTpzhDZMJ4LN+w8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162972-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162972: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d9a7612f8d1da197883bd1cb9f91f229522d39b1
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 05:50:09 +0000

flight 162972 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162972/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 d9a7612f8d1da197883bd1cb9f91f229522d39b1
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   19 days
Failing since        162368  2021-06-04 15:42:59 Z   18 days   39 attempts
Testing same since   162972  2021-06-22 11:39:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs



Not pushing.

(No revision log; it would be 2211 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 06:18:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 06:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146131.268826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwDo-000799-Um; Wed, 23 Jun 2021 06:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146131.268826; Wed, 23 Jun 2021 06:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwDo-000792-Rc; Wed, 23 Jun 2021 06:18:48 +0000
Received: by outflank-mailman (input) for mailman id 146131;
 Wed, 23 Jun 2021 06:18:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YmAD=LR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvwDn-00078w-OW
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 06:18:47 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e99ee2f-ec17-4fdb-8321-b602159a094b;
 Wed, 23 Jun 2021 06:18:46 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2108.outbound.protection.outlook.com [104.47.17.108])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-1-oRNL6vCaOCOXB_UY3VwJKQ-1; Wed, 23 Jun 2021 08:18:43 +0200
Received: from AM0PR04MB5587.eurprd04.prod.outlook.com (2603:10a6:208:125::12)
 by AM0PR04MB5428.eurprd04.prod.outlook.com (2603:10a6:208:113::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Wed, 23 Jun
 2021 06:18:42 +0000
Received: from AM0PR04MB5587.eurprd04.prod.outlook.com
 ([fe80::d52b:45ce:e793:f9c]) by AM0PR04MB5587.eurprd04.prod.outlook.com
 ([fe80::d52b:45ce:e793:f9c%7]) with mapi id 15.20.4242.023; Wed, 23 Jun 2021
 06:18:41 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR05CA0026.eurprd05.prod.outlook.com (2603:10a6:205::39) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Wed, 23 Jun 2021 06:18:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e99ee2f-ec17-4fdb-8321-b602159a094b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624429124;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ytVM0OQTV9HSm4wl625AUkrcGQRWP/PTGgMxmvw8Jyc=;
	b=cw/5KBtfXVNgYnBwLV2ZFBwA9AT2JoT2qzNW4Aa+77V/GaR7T4eB1dvbhltjY2MJk2QdJY
	li8LDgieXo6pENx1oixoulbljpBQEpnNd9OCFe/If7Z+Geow4wXW+ay6nuQT/97w5Em+Uf
	9EV9q2LhvMN10ERIy77YvgtIHN2EH2U=
X-MC-Unique: oRNL6vCaOCOXB_UY3VwJKQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2 5/6] libxencall: drop bogus mentioning of xencall6()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6c532607-c2a3-d0ab-e4e5-428f85f4a045@suse.com>
 <1792bdfc-7510-6628-e63c-aee2c7d2cab5@suse.com>
 <25fa82fd-957a-d378-9761-070006d083ae@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b729fbd9-90cd-0bf1-c40f-c8382a5ec7ac@suse.com>
Date: Wed, 23 Jun 2021 08:18:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <25fa82fd-957a-d378-9761-070006d083ae@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR05CA0026.eurprd05.prod.outlook.com (2603:10a6:205::39)
 To AM0PR04MB5587.eurprd04.prod.outlook.com (2603:10a6:208:125::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6ba6957b-2d46-4bbf-6adc-08d9360ebeda
X-MS-TrafficTypeDiagnostic: AM0PR04MB5428:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ba6957b-2d46-4bbf-6adc-08d9360ebeda
X-MS-Exchange-CrossTenant-AuthSource: AM0PR04MB5587.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2021 06:18:41.0422
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gooen4tdf2/BL08dZ+7ejFzOnKr8oSKA/HKovfiUaXwc0wSuJuUyAEoT/3DjOimGfVrV3inbqxt/VqSTG1x5SA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5428

On 22.06.2021 20:25, Andrew Cooper wrote:
> On 22/06/2021 16:19, Jan Beulich wrote:
>> There's no xencall6(), so the version script also shouldn't mention it.
>> If such a function would ever appear, it shouldn't land in version 1.0.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> This really does need backporting,

So far I've been unconvinced that it really needs backporting, unless
we expect that a further backport might introduce such a call without
us noticing. Since on the main branch such a change would then need
to add xencall6 under a new version, the question is what the
linker's behavior is when the same symbol appears in two distinct
version sections of the script.
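[For readers not versed in symbol versioning: the file in question is a
GNU ld version script, and the situation described would look roughly
like the following sketch (node names and symbol layout are
illustrative, not copied from libxencall.map):]

```
VERS_1.0 {
	global:
		xencall0;
		xencall1;
		/* ... */
		xencall5;
		xencall6;   /* the bogus entry: no such function exists */
	local: *;
};

/* A hypothetical later addition would have to introduce a new node: */
VERS_1.1 {
	global:
		xencall6;   /* same symbol now named in two version sections */
} VERS_1.0;
```

[A version-script entry that matches no symbol in the object is simply
ignored by GNU ld, which is why the stray xencall6 entry has been
harmless so far; what happens once the symbol exists and is named in
two nodes is exactly the open question.]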

In any event, this being a tool stack change, the final word on
whether to backport this one is going to be with Ian. My backporting
request here only covers the first 4 patches of the series. I'm
likely to take the liberty of including them in my own backport sets,
so Ian, if you agree with Andrew and want this one backported too,
you may want to indicate that I should include it right away.

> and it is probably worth stating
> explicitly that there is no change in the resulting object, nor
> abi-dumper's view of the object.

I've added a sentence along these lines.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 06:22:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 06:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146136.268836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwHM-0008VL-EY; Wed, 23 Jun 2021 06:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146136.268836; Wed, 23 Jun 2021 06:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwHM-0008VE-Bg; Wed, 23 Jun 2021 06:22:28 +0000
Received: by outflank-mailman (input) for mailman id 146136;
 Wed, 23 Jun 2021 06:22:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvwHL-0008V4-MZ; Wed, 23 Jun 2021 06:22:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvwHL-0006Nw-IO; Wed, 23 Jun 2021 06:22:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvwHL-000493-AG; Wed, 23 Jun 2021 06:22:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvwHL-0004WK-9l; Wed, 23 Jun 2021 06:22:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oqw/36BS1RjKEGo+9Ep2036WQV7g+Nl/cX++0aFqx+U=; b=ziWXGwGvdxSLbD9WGNBdpu3WD0
	Ii1YjieAnFkh7M7py6GOThDiMu6DUYA4TTMKhLYZRcyTuwm51Hgp/Dh7wCCYeZaOQd9iJ3coZFtxw
	YSiUjuAkzJzPqNL7nonzt6F9k1z7B1tA3Z8/Ujau+P+zrM2IKSwvwF+NUe1GKVOsqN8A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162985: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-xsm:xen-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=e2ebbd4097e8e147eb05e8295e95f663141e3b0d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 06:22:27 +0000

flight 162985 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162985/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-xsm               6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e2ebbd4097e8e147eb05e8295e95f663141e3b0d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  348 days
Failing since        151818  2020-07-11 04:18:52 Z  347 days  339 attempts
Testing same since   162985  2021-06-23 04:18:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63033 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 06:52:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 06:52:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146144.268854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwjs-0003FG-Vy; Wed, 23 Jun 2021 06:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146144.268854; Wed, 23 Jun 2021 06:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwjs-0003F9-Sj; Wed, 23 Jun 2021 06:51:56 +0000
Received: by outflank-mailman (input) for mailman id 146144;
 Wed, 23 Jun 2021 06:51:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YmAD=LR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lvwjr-0003Ek-Q1
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 06:51:55 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e26dedd6-dc9c-44ca-8d94-34b1f62afcad;
 Wed, 23 Jun 2021 06:51:54 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2050.outbound.protection.outlook.com [104.47.14.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-1-SOlpFAFdMDOinD5Y6leH7A-1;
 Wed, 23 Jun 2021 08:51:52 +0200
Received: from AM0PR04MB5587.eurprd04.prod.outlook.com (2603:10a6:208:125::12)
 by AM0PR04MB4148.eurprd04.prod.outlook.com (2603:10a6:208:5f::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Wed, 23 Jun
 2021 06:51:50 +0000
Received: from AM0PR04MB5587.eurprd04.prod.outlook.com
 ([fe80::d52b:45ce:e793:f9c]) by AM0PR04MB5587.eurprd04.prod.outlook.com
 ([fe80::d52b:45ce:e793:f9c%7]) with mapi id 15.20.4242.023; Wed, 23 Jun 2021
 06:51:50 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0071.eurprd04.prod.outlook.com (2603:10a6:208:1::48) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Wed, 23 Jun 2021 06:51:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e26dedd6-dc9c-44ca-8d94-34b1f62afcad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624431113;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JKc7QJIF37h3jBkPcG1fjLAfzoeytO1fLL3nk2/aeWQ=;
	b=eM+gCsI4PNTbphbnH50Oy9DXt1a4BoVLRWE6LuFDsUCVv8XRKZB1onsoM+eCeUagwrs1XY
	lbDvmtZ1AQX14kBQKi3+ZOH6/UAO03G/smDfp/25A5QOhbUb700g3iAdgPFOV7nrln8F4j
	+dFyc1JldGXJ4tb52Qy1wUu61mHrfPc=
X-MC-Unique: SOlpFAFdMDOinD5Y6leH7A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K7oXWmd19KAcJoUK2BIyUKOCSN1KRXU8NkR/AKJgVhzOoChY2RMYp2ExnbiMm00t+Dl1oJyj2BRxMBXJ8CX9KlJCENJmQQPU6hLqAlZhyd9Ny7vjkhpQdbLNxBISQohJnTFaY3ud3Kks7qx+9uivLOctlU1MNQc5uAsJplgb1iD3+dzc3dbnt1RlyQZsm3PAqwZlutLnKAT78/0KKVnOgRXCcKgltwgs/3e0RLWsnK05qR5jzza1SjXn9g6SgIe1OgTUpneOE2mmfNbicwZDFXbZ1KhJ5AV0xAbC4kZrOEKUQD3CxZAOHhO6BkrqZ3LZnQgT53Sys2yFdT8n4GrMMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JKc7QJIF37h3jBkPcG1fjLAfzoeytO1fLL3nk2/aeWQ=;
 b=n+SqariGS/FgG1/WTvmJrHKgZx55hoFzCFLT7VOZHX4cTgn6mBc/rLo+WqDbxoNxHvkzE/OmTNpwch5m2WiqKg9Y+FehRE7iDRPW06sKLUUs5Jx1TwsVk5HiHvVnrI9cFIj1gTyhncq5edm3sENDBVjb8TNSXQPOzlYlvk+XQUFUkybWKfhNtKymubg+2MjMndczFRYsY79hcuaTT47d17XiNmaPWKxcyRX5GdKuWthxioP1+sXmzIQX43P7uARXLlaOsDIVa2BB+NM6clGsrOcqqc82EefaYaCJn867K3bkt5AWeMJYJT1p9jxbgNy0FhLSCcKKuc81rE9v/mHSFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Ping: [PATCH 0/9] IOMMU: XSA-373 follow-on
From: Jan Beulich <jbeulich@suse.com>
To: Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Message-ID: <e4037580-4c37-13ea-7667-625a2211aaa3@suse.com>
Date: Wed, 23 Jun 2021 08:51:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0071.eurprd04.prod.outlook.com
 (2603:10a6:208:1::48) To AM0PR04MB5587.eurprd04.prod.outlook.com
 (2603:10a6:208:125::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 630de0e7-b153-4070-5023-08d93613607f
X-MS-TrafficTypeDiagnostic: AM0PR04MB4148:
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB4148C19DFD98E0DCED79A8B0B3089@AM0PR04MB4148.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 630de0e7-b153-4070-5023-08d93613607f
X-MS-Exchange-CrossTenant-AuthSource: AM0PR04MB5587.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2021 06:51:50.2060
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sMRPoBrTeIJXCCj0f7Mz+hPZf6cuHjnOxeK/5BzJSKX6hIHaymHpIybRtm2V9xhI/aNnoU+bmEnxLkKkqlQpqQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB4148

On 09.06.2021 11:25, Jan Beulich wrote:
> A number of further adjustments were left out of the XSA, for not
> being a security concern (anymore in some of the cases, with the
> changes put in place there). This is the collection, possibly
> looking a little random in what it contains.
> 
> 1: AMD/IOMMU: redo awaiting of command completion
> 2: AMD/IOMMU: re-work locking around sending of commands

For these two I have v2 largely ready.

> 3: VT-d: undo device mappings upon error
> 4: VT-d: adjust domid map updating when unmapping context
> 5: VT-d: clear_fault_bits() should clear all fault bits
> 6: VT-d: don't lose errors when flushing TLBs on multiple IOMMUs
> 7: VT-d: centralize mapping of QI entries
> 8: VT-d: drop/move a few QI related constants

Kevin, any word on these?

> 9: IOMMU/PCI: don't let domain cleanup continue when device de-assignment failed

Paul, any feedback on this one?

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 06:58:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 06:58:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146149.268864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwqJ-0003xG-ND; Wed, 23 Jun 2021 06:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146149.268864; Wed, 23 Jun 2021 06:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvwqJ-0003x9-Jy; Wed, 23 Jun 2021 06:58:35 +0000
Received: by outflank-mailman (input) for mailman id 146149;
 Wed, 23 Jun 2021 06:58:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8be=LR=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lvwqH-0003x3-N7
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 06:58:33 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 142b4de3-a896-4164-9013-af052d2d2c42;
 Wed, 23 Jun 2021 06:58:27 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2021 23:58:26 -0700
Received: from fmsmsx603.amr.corp.intel.com ([10.18.126.83])
 by fmsmga001.fm.intel.com with ESMTP; 22 Jun 2021 23:58:25 -0700
Received: from fmsmsx612.amr.corp.intel.com (10.18.126.92) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Tue, 22 Jun 2021 23:58:25 -0700
Received: from fmsedg601.ED.cps.intel.com (10.1.192.135) by
 fmsmsx612.amr.corp.intel.com (10.18.126.92) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2242.4
 via Frontend Transport; Tue, 22 Jun 2021 23:58:25 -0700
Received: from NAM04-MW2-obe.outbound.protection.outlook.com (104.47.73.177)
 by edgegateway.intel.com (192.55.55.70) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2242.4; Tue, 22 Jun 2021 23:58:25 -0700
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by MWHPR1101MB2271.namprd11.prod.outlook.com (2603:10b6:301:52::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Wed, 23 Jun
 2021 06:58:18 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1%12]) with mapi id 15.20.4242.024; Wed, 23 Jun
 2021 06:58:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 142b4de3-a896-4164-9013-af052d2d2c42
IronPort-SDR: 8lOAmD+vDMOvBACwSeFHakU6vrziYgvhiXnqfpkCbUWRFJgigIB27QOPIE6G4ErsuYn10Gp2PT
 DrRo2fFE8Tfw==
X-IronPort-AV: E=McAfee;i="6200,9189,10023"; a="187588573"
X-IronPort-AV: E=Sophos;i="5.83,293,1616482800"; 
   d="scan'208";a="187588573"
IronPort-SDR: X5ekwXbp+GAE5lm5gNQm/LljaXnTKIojtQrpOC99tu/qXSNQa1L5xJbiCprGfkmqj3CNsfnkso
 6kSwPudY2laQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.83,293,1616482800"; 
   d="scan'208";a="556875686"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i8fMbqibm7qsV2sRxiXvEjKlSxKPnx/BCRs8jAuODGkYxvD+Zm4hIORJp4bX96VJgLSDfXk7mrzr9F/pzEx7n4SnYYTCSZPCrIKBs21bmV/PrmUhR8DlxRmqw3AKBlVX2bYA9zp9CQBNVCc4QraqS17TMU/uIvoBwSVWsbrYszyMFEY4P0HETB8GIX2RebVhL+HbzJwEWEgIU0UycOKpOZDfeeH30kmxJiarU23R1JNWfjpKMouMlT6BExEyzjErWixPTYyZCAhhBsVqtPB1qLxRRbDXfO569cm6EgAZeMZJhjnrC9zzu/5ohagaj5o2lFUIODMiaKD0HdeJwrhQOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rKbibNmf6U+HXj1UKcHd0r7LDzPyMPoJUg6UneLdkKo=;
 b=jIYbxuv4PMKCQP42PsORapbsBaMNZIdcTavwT4cED0idzM8X8NjrWrXRP1F8rsfthn1ZtGOplUrzGScXaJGlnl6PXj407ZzE3e2LFv5wmY6Ub2x6UjOG+ZmNYod4VGujGXaDoNXN7anht/5R5Z6GuryAj7PksMADlTYvFrID9XmCCUikfHok0BL1TEmmFsfhymqCwNYz48ucFu0XrGNFJ8XXeo2XBQCoMC/hGIKp4orWNSdWAhjqsRoNXL6Tll2RPauGC4pK24UvGTya6djPECqhtL0bxgEyUyQOBMZwtMMUf2cAP6HBT54u+EnZE0/7DRPgFtCJyc6C0aGQdVAaOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rKbibNmf6U+HXj1UKcHd0r7LDzPyMPoJUg6UneLdkKo=;
 b=g/vUSPRwyH5M9yBCtm8ctyrQDPkRhnD9V7qLM35M1mj13ibOHGWT5rwEBpPty9tGkpwBTFM+UdNACdjCqKRG1D5nx/TZLCCbRnh9twnPPBnQXC97zNXHO8hkgO3TnwIUfY9eTax/xtTVknPy2pIBbEuw5ZCmYUCulGMXZrbv5PE=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: Ping: [PATCH 0/9] IOMMU: XSA-373 follow-on
Thread-Topic: Ping: [PATCH 0/9] IOMMU: XSA-373 follow-on
Thread-Index: AQHXXRF3+9Tikxxns0WkinxCJ12rU6shPlYAgAABsLA=
Date: Wed, 23 Jun 2021 06:58:18 +0000
Message-ID: <MWHPR11MB1886112723E7E2192D22733F8C089@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <e4037580-4c37-13ea-7667-625a2211aaa3@suse.com>
In-Reply-To: <e4037580-4c37-13ea-7667-625a2211aaa3@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.143.21]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d039d7c6-376c-4997-eadf-08d9361447fc
x-ms-traffictypediagnostic: MWHPR1101MB2271:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <MWHPR1101MB227132C03AE48EF4FFE6715D8C089@MWHPR1101MB2271.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d039d7c6-376c-4997-eadf-08d9361447fc
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jun 2021 06:58:18.2989
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 17NZvoC8d9j6pYDj7ijGv1P63rixBsmIZYQozeVJANWFtSBK9sTLZvkAfXw7x8LNqVexj/rqwmHPP7beZgdWTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR1101MB2271
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 23, 2021 2:52 PM
> 
> On 09.06.2021 11:25, Jan Beulich wrote:
> > A number of further adjustments were left out of the XSA, for not
> > being a security concern (anymore in some of the cases, with the
> > changes put in place there). This is the collection, possibly
> > looking a little random in what it contains.
> >
> > 1: AMD/IOMMU: redo awaiting of command completion
> > 2: AMD/IOMMU: re-work locking around sending of commands
> 
> For these two I have v2 largely ready.
> 
> > 3: VT-d: undo device mappings upon error
> > 4: VT-d: adjust domid map updating when unmapping context
> > 5: VT-d: clear_fault_bits() should clear all fault bits
> > 6: VT-d: don't lose errors when flushing TLBs on multiple IOMMUs
> > 7: VT-d: centralize mapping of QI entries
> > 8: VT-d: drop/move a few QI related constants
> 
> Kevin, any word on these?

will take a look later today or tomorrow.

> 
> > 9: IOMMU/PCI: don't let domain cleanup continue when device de-
> assignment failed
> 
> Paul, any feedback on this one?
> 
> Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 08:09:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 08:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146159.268882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvxx3-0002jC-Fa; Wed, 23 Jun 2021 08:09:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146159.268882; Wed, 23 Jun 2021 08:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvxx3-0002j5-Az; Wed, 23 Jun 2021 08:09:37 +0000
Received: by outflank-mailman (input) for mailman id 146159;
 Wed, 23 Jun 2021 08:09:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G2Dd=LR=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lvxx2-0002iz-02
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 08:09:36 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7262ea8f-9761-455d-84c2-dcd21b127f8d;
 Wed, 23 Jun 2021 08:09:33 +0000 (UTC)
Received: from AM5PR0601CA0078.eurprd06.prod.outlook.com (2603:10a6:206::43)
 by PAXPR08MB6478.eurprd08.prod.outlook.com (2603:10a6:102:159::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Wed, 23 Jun
 2021 08:09:31 +0000
Received: from AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::50) by AM5PR0601CA0078.outlook.office365.com
 (2603:10a6:206::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19 via Frontend
 Transport; Wed, 23 Jun 2021 08:09:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT007.mail.protection.outlook.com (10.152.16.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4242.16 via Frontend Transport; Wed, 23 Jun 2021 08:09:30 +0000
Received: ("Tessian outbound 7f55dcc5b33a:v96");
 Wed, 23 Jun 2021 08:09:30 +0000
Received: from ff496ac8c364.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D72A3C70-DFA6-4B25-A139-15DC13766792.1; 
 Wed, 23 Jun 2021 08:09:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ff496ac8c364.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 23 Jun 2021 08:09:15 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com (2603:10a6:20b:39e::10)
 by AS8PR08MB6295.eurprd08.prod.outlook.com (2603:10a6:20b:295::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Wed, 23 Jun
 2021 08:09:14 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5]) by AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5%9]) with mapi id 15.20.4242.023; Wed, 23 Jun 2021
 08:09:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7262ea8f-9761-455d-84c2-dcd21b127f8d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lgZB6MUWnCUSvUpM580yoqja+zml5jHOHpd9ePJWOBs=;
 b=E71RBzLLUwpdbnTPqO9B0eSRXNr0snqgpKLQqd1dZi8IgNBuiUu/40BTWAuQ+4PbfZPnBGZWHx7VpQuvB0Y67GKcSDIvRekibT8N4XZ+6R8/9uoTMhtX2bwy1K1m8SI94V20I8gRPPvIg86OkrdMuo1vEM2Dspb9/9oeo6hhrrQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 445d1896dc139d23
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NoPPcjUUFnSmheuizRFBwcWVG/1OBbN0NJUkRUUOOgq52YPGoLFzJ7hQ83ENbm86oi3WqVRWjKfhTRQ62TvzZQMnO/OV1GfgRuneZXPALwyBqGk6mRrZulCE8q9tyjXRwjr0pen9brCx19t9F4wTEY4kZ5YVRnc3ke5DUrqlKUFk7um+rshK+YoxCM+Lu3i2weXM2Hrnd6B5ayYSVSouyx3Pa1R0ZudcAMzTwn72AsESTxjbQW8/9RVbVUjfAmDiJEB9MYRCDNW/JssAPP4wyiOAp6IQSxt+WwNiXt6DtrPLJRGDnpIyIgtsG+/rS1iF2LuCvskaO9T1Bl2wiqhMsA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lgZB6MUWnCUSvUpM580yoqja+zml5jHOHpd9ePJWOBs=;
 b=DMze14FT5MmPa1uw6UzSDHn/8aTiTquMh2PDzQgdLRnjzyslPbUpopvNobAeIrT6nLw+0Sa88MZd9LjeJbN7CniIFarQfEc02jmWVlrFYNpwbajisrJ4QXU+DFgXYbdl8q7JtbtfAXeoP+Kp2CCbA4h8ml6oUISGSdPoQTLHAi/YwOoDHDO1sBEYenGBllFcApRw2OlOR9+K0bDv3nanPCkd4ngswR1LqW3/hmnWio3LCq8ytqPecNA5fvFRFBe22y1tJ94OIQoRTKntfmrNdPwHO6tCKKhz9BI4SnE9Te7g5YJgSL8BATVF/Npt1jBE169mceDoLeg0LohRYQPKmw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, "fnuv@xilinx.com"
	<fnuv@xilinx.com>
Subject: Re: smmuv1 breakage
Thread-Topic: smmuv1 breakage
Thread-Index: AQHXYY0v3XVdqEOvqESHjaQGLRWMRKsVQ9IAgACUkYCACrmTAIAAuQiA
Date: Wed, 23 Jun 2021 08:09:13 +0000
Message-ID: <55ACE88F-F72C-4715-B3B1-B7B7F1B4CFFB@arm.com>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s>
 <791BFC00-6A50-48D2-A208-E529B887441F@arm.com>
 <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2106221405220.24906@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2106221405220.24906@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 9ce42053-5ad8-494a-192a-08d9361e3a69
x-ms-traffictypediagnostic: AS8PR08MB6295:|PAXPR08MB6478:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PAXPR08MB64782BF7DE96FCC346C6713BFC089@PAXPR08MB6478.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <157F9D603C4BE744AE1F926CFBFBD4BC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6295
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	29265d1f-995a-4a98-0aae-08d9361e3080
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2021 08:09:30.5821
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ce42053-5ad8-494a-192a-08d9361e3a69
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6478

Hi Stefano,

> On 22 Jun 2021, at 10:06 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> Hi Rahul,
> 
> Do you have an opinion on how we should move forward on this?
> 
> Do you think it is OK to go for a full revert of "xen/arm: smmuv1:
> Intelligent SMR allocation" or do you think it is best to go with an
> alternative fix? If so, do you have something in mind?
> 

Sorry for the late reply, I was working on another high-priority task.
I will work on this and try to fix the issue. I will update you within 2-3 days.

Regards,
Rahul

> 
> 
> On Tue, 15 Jun 2021, Stefano Stabellini wrote:
>> On Tue, 15 Jun 2021, Rahul Singh wrote:
>>> Hi Stefano
>>> 
>>>> On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> 
>>>> Hi Rahul,
>>>> 
>>>> Unfortunately, after bisecting, I discovered a few more breakages due to
>>>> your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
>>>> attached the DTB as reference. Please note that I made sure to
>>>> cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
>>>> the S2CR" during bisection. So the errors are present also on staging.
>>>> 
>>>> The first breakage is an error at boot time in smmu.c#find_smmu_master,
>>>> see log1. I think it is due to the lack of ability to parse the new smmu
>>>> bindings in the old smmu driver.
>>>> 
>>>> After removing all the "smmus" and "#stream-id-cells" properties in
>>>> device tree, I get past the previous error, everything seems to be OK at
>>>> early boot, but I actually get SMMU errors as soon as dom0 starts
>>>> using devices:
>>>> 
>>>> (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
>>>> (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
>>> 
>>> This fault is an "Unidentified stream fault" for StreamID "0x877", which means
>>> the SMMU SMR is not configured for StreamID "0x877".
>>> 
>>> 
>>>> [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>> [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
>>>> 
>>>> Do you think you'll be able to help fix them?
>>>> 
>>>> 
>>>> You should be able to reproduce the two issues using Xilinx QEMU (but to
>>>> be honest I haven't tested it on QEMU yet, I was testing on real
>>>> hardware):
>>>> - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
>>>> ./configure  --target-list=aarch64-softmmu
>>>> make
>>>> - clone and build git://github.com/Xilinx/qemu-devicetrees.git
>>>> - use the attached script to run it
>>>>   - kernel can be upstream defconfig 5.10
>>>> 
>>> 
>>> I tried to reproduce the issue on Xilinx QEMU as per the steps shared above
>>> but I am not observing any issue on Xilinx QEMU.
>> 
>> I tried on QEMU and it doesn't repro. I cannot explain why it works on
>> QEMU and it fails on real hardware.
>> 
>> 
>>> I also tested and confirmed on QEMU that the SMMU is configured correctly,
>>> specifically for StreamID "0x877" and for other StreamIDs.
>>> 
>>> I checked the xen.dtb shared by you and found that there is no "#stream-id-cells"
>>> property in the master device but the "mmu-masters" property is present in the
>>> smmu node. For the legacy smmu binding we need both "#stream-id-cells" and "mmu-masters".
>>> If you need to use the new smmu binding, please add the "#iommu-cells"
>>> property in the smmu node and the "iommus" property in the master device.
>> 
>> In regards to the missing "stream-id-cells" property, I shared the wrong
>> dtb before, sorry. I was running a number of tests and I might have
>> picked the wrong file. The proper dtb comes with "stream-id-cells" for
>> the 0x877 device, see attached.
>> 
>> 
>> 
>>> Can you please share the xen boot logs with me so that I can debug further why the error is observed?
>> 
>> See attached. I did some debugging and discovered that it crashes while
>> accessing master->of_node in find_smmu_master. If I revert your series,
>> the crash goes away. It is very strange because your patches don't touch
>> find_smmu_master or insert_smmu_master directly.
>> 
>> I did a git reset --hard on the commit "xen/arm: smmuv1: Add a stream
>> map entry iterator" and it worked, which points to "xen/arm: smmuv1:
>> Intelligent SMR allocation" being the problem, even if I have the revert
>> cherry-picked on top. Maybe the revert is not reverting enough?
>> 
>> After this test, I switched back to staging and did:
>> git revert 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
>> git revert 0435784cc75dcfef3b5f59c29deb1dbb84265ddb
>> 
>> And it worked! So the issue truly is that
>> 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a doesn't revert "enough".
>> See "full-revert" for the patch reverting the remaining code. That on
>> top of staging fixes boot for me.


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 08:39:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 08:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146166.268896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyPp-0005sp-0G; Wed, 23 Jun 2021 08:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146166.268896; Wed, 23 Jun 2021 08:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyPo-0005si-TE; Wed, 23 Jun 2021 08:39:20 +0000
Received: by outflank-mailman (input) for mailman id 146166;
 Wed, 23 Jun 2021 08:39:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dIpp=LR=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1lvyPn-0005sc-FB
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 08:39:19 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb7e6e1f-15d0-4137-8764-d84473f41512;
 Wed, 23 Jun 2021 08:39:18 +0000 (UTC)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15N8WMEv025971; Wed, 23 Jun 2021 08:38:28 GMT
Received: from oracle.com (aserp3020.oracle.com [141.146.126.70])
 by mx0b-00069f02.pphosted.com with ESMTP id 39bf94t8jj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 23 Jun 2021 08:38:27 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15N8cRcR151078;
 Wed, 23 Jun 2021 08:38:27 GMT
Received: from nam12-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam12lp2176.outbound.protection.outlook.com [104.47.59.176])
 by aserp3020.oracle.com with ESMTP id 3998d8tdp5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 23 Jun 2021 08:38:26 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB2503.namprd10.prod.outlook.com (2603:10b6:a02:b4::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Wed, 23 Jun
 2021 08:38:23 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4219.030; Wed, 23 Jun 2021
 08:38:23 +0000
Received: from char.us.oracle.com (138.3.200.48) by
 SN4PR0501CA0102.namprd05.prod.outlook.com (2603:10b6:803:42::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.10 via Frontend
 Transport; Wed, 23 Jun 2021 08:38:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb7e6e1f-15d0-4137-8764-d84473f41512
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=Gaj3bJH1nWu6cTGQb4/Le70tkQtz+SbdZZs371zuQ8E=;
 b=n4ocjr1hNU93O1B8pGsPcPiYSN4slbczqk7c+6czCRsq1nps1Xt3AkaXL5u0Su7Eq02m
 vko/beTo5d3SsRnCrpuplazABNra3O7m9XbefzuCe7PF0n8/z2pHmvFjX3XeUB0M+duz
 rU4TwlYrdS4g//Yg44KGo4SKMovTDVZnDDeHQJpxmh4gBISprD1LKSU//E/4MRXQJ5fH
 viPUNXGspeKmx1DMs2LukYMD28qXoVgWRT8ZkyZdx3TaKJ0mB9QVh06cb08JETtgkz3B
 PClbnVCXm5KytAxcoR3uI/9ZAXd38htS/RWj/Lqdu/fVQqHxieB73AU3+ZDQ+9UnkvQL og== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WI1aUnHQhxE0AtLgfze/R8DRK+DeDYFnaJwzSqeodH0g5wP1xbTQX4TYCvBrqNLFh8JN0Pu+v8g7oe3oPcrT51Ph75Ii8Q3sKIx3Pkn0wMNwc1Z2aACj9Rh5jmQT7PSESiRHCCK0PhTuWmDFYQSm83/jacQhndlz9V1w7aICOjdLP9Vn+cpAlJXiW9q1neE0QbnZcFWZuOZsV/0Q9V52WNeGbq4RzqMYjNYWEIPTkxTEfBTHrDei4CAZCry1tPlwRitIGRtcLIVE3rglJ9fPVXc201yo207Nc4ho6X/HnAeRjsgzuxh81ffo4mZOMZ+Wx0xu8at+px9CYueFYAsBZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gaj3bJH1nWu6cTGQb4/Le70tkQtz+SbdZZs371zuQ8E=;
 b=eVDrNfym1ZvFsYrr5zsrQaj6DKmdMGd+nfXO9aUBvgPungzuPacqps/PzkFUUn1JfOcHGZ7Fejvqma9G3PRm2NxULBFvZK9xs61Z24tfU0b6SyaVjL+Oa/nYKDLsR9ohJTaGTxGi6lECKxIa7l2o6sdoZPFNCbdHO6ysxj/qTiLE5d0JuAzpil/fCDWfGs45vFQdgg1Oq75aaC2xVQVifxY5Uj/wq5tFa/dnCAZwYD6+LeoscsfyWejQda/ONqlszALC9L9aJ3BiAhNXdKgZmeQg28eeT4dCbxd0t8ZVzeqj7Nd+UDkcVVfNCPeupNPNceDDgzxMTXf3ohMfCfpNeg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gaj3bJH1nWu6cTGQb4/Le70tkQtz+SbdZZs371zuQ8E=;
 b=N+Dv9CNvrAboAJXvhkR0yyNjco3C9tXkVw/jybJcJ0bIPTIMKyu362Pej9uqJ1R7GXKF0kNjie4Sue1XZ/AUxhrV9LwrlCjGCRcfNLKz3JtXw0vALCwxA/ZOmU7G2U6PQKl0ztxJChs+MX9v/5Cuzph4Z1mUb4OseEheJ2DP0i8=
Authentication-Results: chromium.org; dkim=none (message not signed)
 header.d=none;chromium.org; dmarc=none action=none header.from=oracle.com;
Date: Wed, 23 Jun 2021 04:38:07 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
        Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
        jgross@suse.com, Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
        bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
        intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
        jxgao@google.com, joonas.lahtinen@linux.intel.com,
        linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
        matthew.auld@intel.com, rodrigo.vivi@intel.com,
        thomas.hellstrom@linux.intel.com, thomas.lendacky@amd.com
Subject: Re: [PATCH v14 00/12] Restricted DMA
Message-ID: <YNLy7z0Zq1AXKLng@char.us.oracle.com>
References: <20210619034043.199220-1-tientzu@chromium.org>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210619034043.199220-1-tientzu@chromium.org>
X-Originating-IP: [138.3.200.48]
X-ClientProxiedBy: SN4PR0501CA0102.namprd05.prod.outlook.com
 (2603:10b6:803:42::19) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 805f75bd-cc70-41ff-1939-08d9362242b6
X-MS-TrafficTypeDiagnostic: BYAPR10MB2503:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB2503181E6FA65D137481E97689089@BYAPR10MB2503.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 805f75bd-cc70-41ff-1939-08d9362242b6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2021 08:38:22.9441
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sZZ2vLTNQgExjqnVBj2BwNt2chpabGrUNsp8UYuEoQc09uS/RPrRjPP5wExFxTdfWShsqtsdaDSQKUDz3MGcuw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB2503
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10023 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 adultscore=0
 mlxlogscore=999 phishscore=0 suspectscore=0 bulkscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106230050
X-Proofpoint-ORIG-GUID: maU1n2Hjektafvbr8bE3jvBse6G9hmyq
X-Proofpoint-GUID: maU1n2Hjektafvbr8bE3jvBse6G9hmyq

On Sat, Jun 19, 2021 at 11:40:31AM +0800, Claire Chang wrote:
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
> 
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
> full chain of exploits; [2], [3]).
> 
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> 
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132

Heya Claire,

I put all your patches on
https://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git/log/?h=devel/for-linus-5.14

Please double-check that they all look ok.

Thank you!
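
[Editorial note: the restricted DMA pool described in the quoted cover letter is
declared through a reserved-memory node in the device tree. A minimal sketch,
assuming the "restricted-dma-pool" compatible added by this series; addresses,
sizes and node names are illustrative only:]

```dts
reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Carve out a pool that the device's streaming DMA is bounced
         * through (and coherent allocations are served from). */
        wifi_restricted_dma: restricted-dma@50000000 {
                compatible = "restricted-dma-pool";
                reg = <0x0 0x50000000 0x0 0x4000000>; /* 64 MiB */
        };
};

/* Hypothetical PCIe controller hosting the Wi-Fi device: point it at
 * the pool via memory-region. */
pcie@f0000000 {
        /* ... */
        memory-region = <&wifi_restricted_dma>;
};
```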


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 09:06:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 09:06:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146172.268907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyqO-0000Xa-C7; Wed, 23 Jun 2021 09:06:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146172.268907; Wed, 23 Jun 2021 09:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyqO-0000XT-8R; Wed, 23 Jun 2021 09:06:48 +0000
Received: by outflank-mailman (input) for mailman id 146172;
 Wed, 23 Jun 2021 09:06:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VPk1=LR=intel.com=lingshan.zhu@srs-us1.protection.inumbo.net>)
 id 1lvyqN-0000XN-6b
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 09:06:47 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 890be39e-298b-4775-aebd-080efc68c078;
 Wed, 23 Jun 2021 09:06:44 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 02:06:43 -0700
Received: from lingshan-mobl5.ccr.corp.intel.com (HELO [10.255.30.127])
 ([10.255.30.127])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 02:06:38 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 890be39e-298b-4775-aebd-080efc68c078
IronPort-SDR: rP8uekMS6K6bv8j+KZX8YdnQ7UWdg1QfG6KloBMeZFiWjHEp+U6iLhZw7vXOQZJFg5EpLLIapF
 lN3ojr528o1g==
X-IronPort-AV: E=McAfee;i="6200,9189,10023"; a="204217864"
X-IronPort-AV: E=Sophos;i="5.83,293,1616482800"; 
   d="scan'208";a="204217864"
IronPort-SDR: g2eO6eKo7NA4GY19a7CouO+5j1o7GAnTSLroawrK3+rYpOJrEVfMjr60J7q5kjcIpvJfXcjBDr
 3E4HH0+4OM1A==
X-IronPort-AV: E=Sophos;i="5.83,293,1616482800"; 
   d="scan'208";a="487232734"
Subject: Re: [PATCH V7 01/18] perf/core: Use static_call to optimize
 perf_guest_info_callbacks
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, pbonzini@redhat.com
Cc: Like Xu <like.xu@linux.intel.com>, Will Deacon <will@kernel.org>,
 Marc Zyngier <maz@kernel.org>, Guo Ren <guoren@kernel.org>,
 Nick Hu <nickhu@andestech.com>, Paul Walmsley <paul.walmsley@sifive.com>,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-csky@vger.kernel.org, linux-riscv@lists.infradead.org,
 xen-devel@lists.xenproject.org, Peter Zijlstra <peterz@infradead.org>,
 "kvm@vger.kernel.org" <kvm@vger.kernel.org>, bp@alien8.de,
 kan.liang@linux.intel.com
References: <20210622093823.8215-1-lingshan.zhu@intel.com>
 <20210622093823.8215-2-lingshan.zhu@intel.com>
 <92fdf981-68ef-92a2-b1ae-0c5f347ae460@oracle.com>
From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
Message-ID: <43f6bb94-616c-f0c9-edc6-a72ea1244f59@intel.com>
Date: Wed, 23 Jun 2021 17:06:36 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <92fdf981-68ef-92a2-b1ae-0c5f347ae460@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US

Thanks Boris, I will fix this in V8.

On 6/23/2021 1:31 AM, Boris Ostrovsky wrote:
>
> On 6/22/21 5:38 AM, Zhu Lingshan wrote:
>
>> -static int xen_is_user_mode(void)
>> -{
>> -	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
>> +	state |= PERF_GUEST_ACTIVE;
>>   
>> -	if (!xenpmu_data) {
>> -		pr_warn_once("%s: pmudata not initialized\n", __func__);
>> -		return 0;
>> +	if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_PV) {
>> +		if (xenpmu_data->pmu.pmu_flags & PMU_SAMPLE_USER)
>> +			state |= PERF_GUEST_USER;
>> +	} else {
>> +		if (!!(xenpmu_data->pmu.r.regs.cpl & 3))
>> +			state |= PERF_GUEST_USER;
>
>
> Please drop "!!"; it's not needed here. And use "else if".
>
>
> With that, for Xen bits:
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> -boris
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 09:06:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 09:06:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146173.268918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyqY-0000qR-Km; Wed, 23 Jun 2021 09:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146173.268918; Wed, 23 Jun 2021 09:06:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvyqY-0000qK-Gh; Wed, 23 Jun 2021 09:06:58 +0000
Received: by outflank-mailman (input) for mailman id 146173;
 Wed, 23 Jun 2021 09:06:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cw8P=LR=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lvyqW-0000pO-Mo
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 09:06:56 +0000
Received: from mail-qk1-x72c.google.com (unknown [2607:f8b0:4864:20::72c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a113d435-c3ca-4a81-b146-0afc8af513da;
 Wed, 23 Jun 2021 09:06:55 +0000 (UTC)
Received: by mail-qk1-x72c.google.com with SMTP id c138so3346864qkg.5
 for <xen-devel@lists.xenproject.org>; Wed, 23 Jun 2021 02:06:55 -0700 (PDT)
Received: from mail-qt1-f173.google.com (mail-qt1-f173.google.com.
 [209.85.160.173])
 by smtp.gmail.com with ESMTPSA id r6sm3406353qtx.89.2021.06.23.02.06.55
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 23 Jun 2021 02:06:55 -0700 (PDT)
Received: by mail-qt1-f173.google.com with SMTP id g12so1534524qtb.2
 for <xen-devel@lists.xenproject.org>; Wed, 23 Jun 2021 02:06:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a113d435-c3ca-4a81-b146-0afc8af513da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=13Gr0i1MTKRVJOLYjbaQS0t+sJQavN2HZYVSf9igv0I=;
        b=SA9G17LOrb34DyVZWPMTTHIONdcJ/5ZbZWqssqBb4aWqa4vseTx3+V6CkFU4RvePBr
         YNNS8jqCGxfTDXRduKfQQMwLt3JL1m3nr4PNVWIksCr6YiJm78aGWUxZDo4V4hXGS1bN
         zAH4ZuRlx4WIbGetsYgzQvT1yucBC3KtrF3U0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=13Gr0i1MTKRVJOLYjbaQS0t+sJQavN2HZYVSf9igv0I=;
        b=RSP0TaNPiqFCPhsm30u1De6RRjfJS/B5UNGuKATnDMa5XjDqyGkGlDT0FobcPGmJ8k
         QcNm1wwXZLFQ5gJ+ETv3Ek/9xwjT72xrT4VkpKYtTzOy3E1PcmFu8FeX8bGWocP9xN2R
         DX6gzJ/wrdH31My6lFEsIsh4pJU2cS23BONN3OPIyQFVlTihUkI2ZI2wE9zIV/ucNxz4
         D4ArUCIutkGGKc75cixA0oTxe9PAEGht85LhQA1f8zc7qGvbK0TP5pAwWWaOSLGLHlf2
         8c+KRlnVDqJBs6MvGN9AAekKY3+PNvDBlWhO/n6Sf/KkpoYmxF9T2VtXm/8plw9UAvih
         MU+A==
X-Gm-Message-State: AOAM531lgprUaliWXIt/6nQCAjYNwm9EH/yJQ86PEqO/vW8kr7ed1mro
	vSoig7Ux9gBnhRgzulG8lkmjuj5PrrzEMg==
X-Google-Smtp-Source: ABdhPJxmi7LbVr6jESnI4VkdvdqvMC1q8m3WInQVl31MMJkE7IKnwh14vDANLWQmnkMZl+fwBLYHpw==
X-Received: by 2002:a37:44f:: with SMTP id 76mr9235947qke.161.1624439215410;
        Wed, 23 Jun 2021 02:06:55 -0700 (PDT)
X-Received: by 2002:a02:4b46:: with SMTP id q67mr7991027jaa.84.1624438886886;
 Wed, 23 Jun 2021 02:01:26 -0700 (PDT)
MIME-Version: 1.0
References: <20210619034043.199220-1-tientzu@chromium.org> <YNLy7z0Zq1AXKLng@char.us.oracle.com>
In-Reply-To: <YNLy7z0Zq1AXKLng@char.us.oracle.com>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 23 Jun 2021 17:01:16 +0800
X-Gmail-Original-Message-ID: <CALiNf28U9xaqth99u=hB45b=qWMYaSoe2DGgNVFrHXze6wNmdQ@mail.gmail.com>
Message-ID: <CALiNf28U9xaqth99u=hB45b=qWMYaSoe2DGgNVFrHXze6wNmdQ@mail.gmail.com>
Subject: Re: [PATCH v14 00/12] Restricted DMA
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com, 
	jgross@suse.com, Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, 
	benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
	bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com, Tom Lendacky <thomas.lendacky@amd.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 23, 2021 at 4:38 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> On Sat, Jun 19, 2021 at 11:40:31AM +0800, Claire Chang wrote:
> > This series implements mitigations for lack of DMA access control on
> > systems without an IOMMU, which could result in the DMA accessing the
> > system memory at unexpected times and/or unexpected addresses, possibly
> > leading to data leakage or corruption.
> >
> > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> > not behind an IOMMU. As PCI-e, by design, gives the device full access to
> > system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> > to a full system exploit (remote Wi-Fi exploits: [1a] and [1b], which show a
> > full chain of exploits; [2], [3]).
> >
> > To mitigate the security concerns, we introduce restricted DMA. Restricted
> > DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> > specially allocated region and does memory allocation from the same region.
> > The feature on its own provides a basic level of protection against the DMA
> > overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system needs
> > to provide a way to restrict the DMA to a predefined memory region (this is
> > usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> >
> > [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> > [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> > [2] https://blade.tencent.com/en/advisories/qualpwn/
> > [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> > [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
>
> Heya Claire,
>
> I put all your patches on
> https://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git/log/?h=devel/for-linus-5.14
>
> Please double-check that they all look ok.
>
> Thank you!

They look fine. Thank you!


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 09:28:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 09:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146182.268929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzBJ-0003Qm-6T; Wed, 23 Jun 2021 09:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146182.268929; Wed, 23 Jun 2021 09:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzBJ-0003Qf-31; Wed, 23 Jun 2021 09:28:25 +0000
Received: by outflank-mailman (input) for mailman id 146182;
 Wed, 23 Jun 2021 09:28:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iC25=LR=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lvzBH-0003QZ-NP
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 09:28:23 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a0285806-922a-485d-a98a-3f33f56d5de1;
 Wed, 23 Jun 2021 09:28:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0285806-922a-485d-a98a-3f33f56d5de1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624440502;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=pFOPDETfwZJaOe7twdax4Wn6oS+JjDQ67+7lVx9wUS0=;
  b=C4f/cPrUdKQ+pPr9o3pJaSFzN8CSvfGczd5/tAwU97IxyDyI6JHTEI6E
   roz8V4KeIttrZNbY5DDl4lgL3COO8kry2uEYZswNMxDRspSkHtAm8PYSk
   iYKvXJ1r2FCgIffYavYVwxJ4suVE3ksMY2dq0p8GOjemBxq4vcxgtF3UT
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: WH/Lz7hrvgI13fnDyYImTb7tJfmXYAiKr8YnBB46L3HxSQGErwqgXfw2oPsDUiU3t0NTIfiS32
 4uTPWM1ITSMNSnk83CY+XJRaYIzi+01BVZz8nB9Rh2NuJ1ZsrVcAhiA/fDo9Kn+6ws/DWMINns
 s9xcOfkDwPAZr8PS6DgdEi0IH7Q/6jA4OijZs0BrPKub32KriwVhS+P2hUJLYVxZ/Inu2TShdG
 lqmeH2gNsYHuXONCY9B6lmvzlMaJep7UV2SFgQSovOt4ArpeY9G/fyMk/GbkYT5t1e5Sliqe25
 kv0=
X-SBRS: 5.1
X-MesageID: 46826513
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:gWp3+atRqv1pZ60exPaDnkgj7skDjNV00zEX/kB9WHVpm6yj+v
 xGUs566faUskd0ZJhEo7q90ca7Lk80maQa3WBzB8bGYOCFghrKEGgK1+KLrwEIcxeUygc379
 YDT0ERMrzN5VgRt7eG3OG7eexQvOVuJsqT9JjjJ3QGd3AVV0l5hT0JbTpyiidNNXJ77ZxSLu
 v72uN34wCOVF4wdcqBCnwMT4H41qf2fMKPW29+O/Y/gjP+9Q+V1A==
X-IronPort-AV: E=Sophos;i="5.83,293,1616472000"; 
   d="scan'208";a="46826513"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M2qDzptXCAn5S/zI+UUKJvxLWS3xVjr8VigqOXz3URndS0hYdr29/J/byyIk5gfX9ljBSx8bIXPkPeqfU4hUUJjMIb1wLxn9n6MrTS5/RmGvQkdPho68w2vx2dN2jqBY3ZKhV99KZehcVVNBiYnzHhhuxDuV7KqjScmv8YO9YXrv1ALuzEJ+x4ayibyJSmgISZYW73qxf5SIDwgNV8KFSxLGV7VvTqq5jsglNMBqwtvD6Q4AvbKL0tW+XGJIKwpThwlrXxvwi9GLeNNjHr1DCta3raZ27u0zwJsgJiI6/0EeD9uYZLMxCc1Fd6mA2PcR5YTuKfohpBqfpsOw2xheqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pFOPDETfwZJaOe7twdax4Wn6oS+JjDQ67+7lVx9wUS0=;
 b=Ukkt7yL6q0Ya3libRsq/n8qpNVNzQzzJhWvmHs7IRwIv3Si2UH+G8f8pNR+Je54cDo2wFwGQrbtzdP9DwVvtZ9nMxDldia8ocpDBMMJ1XUR1ZBHovNOIOEiglwukDxfulPode3bG2fpeNJ5ztBCZ4u/qOnVEhPVIwpi9Y47FU5Rnc77T3FNVcxuE5CXDxjeA19Kf5dPNwE0KCYrCzwNrOZX1DID2nKC6lYEjUYM/B3SDCTaHuT06H7hX5FFL7nJr3kwDzQdhWDo1c82cV1gIR4fRtts3HiocO8ECKy9ti1/kNi2wxQDCtrD5c7BbJu5bHlIVpdYARXVOw/zTYTuACw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pFOPDETfwZJaOe7twdax4Wn6oS+JjDQ67+7lVx9wUS0=;
 b=IFyD+6v47OAhcX4psPNcDKAnmbyLkCriptHHddpk72MryLmpKgaaBze9p9D+ymtI2qC2YTlJtn1i5J15QSt9bA9OBcHvvmw9vR4uf26nz1Cg5Kym/evqXhfxd5q1Gj1A52gpN1XIVf1wMYp8WeZP4e2cfie1tKgwbgIXfV7+n2I=
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
CC: Ashley Weltz <aweltz@linuxfoundation.org>
Subject: Welcome Ashley Weltz, the new XenProject Project Coordinator 
Thread-Topic: Welcome Ashley Weltz, the new XenProject Project Coordinator 
Thread-Index: AQHXaBIa0kQCTROEFUGBiYmQ/zZkBQ==
Date: Wed, 23 Jun 2021 09:28:19 +0000
Message-ID: <C148BD69-9717-4B94-A11A-5C96C12CAFD4@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 58a1b014-08d8-4b2f-5ce0-08d936293d02
x-ms-traffictypediagnostic: PH0PR03MB5896:
x-microsoft-antispam-prvs: <PH0PR03MB589645BE8E217AF48BD529BE99089@PH0PR03MB5896.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: RXIlwT7OFzr9PscOsVhmYmRhprmn3VIBwn1RMyidpGTpxjjFP8hWrO3zqLeokUdK223zSDBg/IisPqMaSAvspfR0zYdf4cJvTW60K6H+T4wqBtTpPdQl31B5c2Oo9AnG0TVLI9m/ln2M4REoT3SHvuwQUhIym9QAMoGxyUzW0pCT9wYmLWab3i+HcAerrmBgfLC2Vqvs/TN9xb1iaV55gH0q29SyKELIycjvsYeszBmAFR62/5Uh1JIGm23vJQl+ZwSRKEv1ZmL+T1ACYiZu8/3bmRmXAspbq3+aCAD61uIHNQ/CZ5R7MInUGF6Ghc077VnFgCXlNNGE2YZcSxD9lBdk3RVC2H5I/N8Fzx9jPBYm1MOlwQqevgBERkz75f7zE+o7XyrpxmVRwzZtmXziWFKG/bTNBVeeGYcdnC/R+hR51i21dEGnYs+9VFr5PteyUTy+JKOIM1nYhHOnSB9Hn4W0CxCgKdzhmJZW1Lhvid7JYrRdHVeGnnaReZasCflnzWjckrEpm8JRiWp8UUXqqMyxLm3ChDO1p6UyXjJUBmZvP5LnUg2BWnl1cZho+7K9ti2EPpPNSXnGAAgPPpD8fapEnjYi0e5kz0j6SXfgrjVPpYW45PFJV95Fp1BiaFjumoat1ubJ7Hst3tPHBsIOkBafqlqLyrHCYQKMnALnJ/Q=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39850400004)(376002)(396003)(346002)(136003)(86362001)(71200400001)(4744005)(122000001)(6916009)(8676002)(316002)(5660300002)(4326008)(2616005)(36756003)(6512007)(2906002)(76116006)(8936002)(6506007)(478600001)(91956017)(64756008)(26005)(33656002)(66446008)(66556008)(66476007)(186003)(66946007)(38100700002)(6486002)(4743002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?SjArTFlhaUhJeEJ0VFZWeXA5NlkxRUZDWmRXMWJlRVpzNmtLT0g2OGM3ZFJw?=
 =?utf-8?B?TkpuUTBkajBUVWlJdHBLYmIvWlB2UjRlTUsvMTVxTkFVdFlENVlOcDAvL0Uz?=
 =?utf-8?B?azJCMExnSzVQamFQNUkvclBOYWlBdVNwZDluUXh1UjY5eFBYOEhZL2tLOTRK?=
 =?utf-8?B?N2ZNRC9qNk5teTBXaS9XUW1OSTIzOFlRSFZZSGduOERFRDJ1cFVlWXpUclZq?=
 =?utf-8?B?c0ZUU25jVzNDYXZXNzAxcjRYRVdFbmRTK3lOVThmTHlGOEpKRzF1aW44S0JV?=
 =?utf-8?B?WkFJa1FWMUU3Y2cxVlNCNU1KckdDWjFKWHpIcjBnNXNzdm9hbVZrRVNvd1U0?=
 =?utf-8?B?SXhuOWQ4SFM2b0lDdlFIWXd4ZDFPL0I5b0tqdlEvYm1CaUI3QUFMc1lqeHIv?=
 =?utf-8?B?cTdtRjc1MlFMaEZVNkRNbUlrZFdCcHh0TmNqUW1zcXNsVnVyY3V6WUhDZEpX?=
 =?utf-8?B?V2doM0FYUm9lSHBrOVRoajVqYWtERjltRmtPODJhNTRPV2lxZ2NaZEo1TUo2?=
 =?utf-8?B?azBVTmlXS2FTWlNFbDNIeEhCekNJc2VxZjRqWDBtdHlScG1uaWZBTk9ZWUl0?=
 =?utf-8?B?Q1cvTXVENnpvb2Z0c2ZpdzdCQXBMbWpzTUpPS3U1MUxkUEtwMk93NHUvSnJn?=
 =?utf-8?B?SkhlbExOZkpMU01iajdRZG0rT3lnbDFDbEdqaXU5T1ZhaVRrdUIwY1FwOWxT?=
 =?utf-8?B?SjcxWDVOZUYzYTQ3OHFPU1VtVmpMajNUaEZGaFBVdkRxT2ZDdjBYMUlXTk1J?=
 =?utf-8?B?Zjc3QUxJdGNyVnlnV0lKZDNIRmpTWXdHdFJjYnVNaTlkc25BWUdTa25FU25z?=
 =?utf-8?B?UWFxbnlLb2hhYlN6NXFNeG1BNTlzYTdGSmJSelh1N0ZUOGZVclJIdmpVODRj?=
 =?utf-8?B?eU9SaDk0VzhOVjBaMTNWVEFhQmhmRXNUWUE2bGtKVTJRdHlWa2xiSXBjOFJX?=
 =?utf-8?B?aFg3ak1PS2FxTjYrR1ZmV0hERndMU083WmNYQW8xYzBSY0FXbnJuTHh4SVBM?=
 =?utf-8?B?S0xUa2dHcitSNloxd1hoTW9GWkRmUHBaQXJOeWxqZjVCSTFpbGUyQVlRTHlJ?=
 =?utf-8?B?YS9qdEtsTmtoaUVIK0dZNzh6VDRsSTNZMGpEajJ5VHFGR0xKU2lWRFpwSGZ6?=
 =?utf-8?B?RmVadXNlWXhWY0tMdUNtMTUyT2xvSEhOVFQxT1A0SkVEbHUraEQrUml0YTN1?=
 =?utf-8?B?OTM4WnNNcy9xcm5aSmNZYTUxV29EUGl6T3V2L0FwYzlSYnd1UXVVVFVRYjZ2?=
 =?utf-8?B?MzJvajNkdHBoWDBOeVJqNzVEM0QyOWh4TlZXcmhUL1BJSUVnQWlzdHdwbTJH?=
 =?utf-8?B?TjUxRk1kam1CTVZ4T3NDNFJUTzRNNWxaRnIzNXpOUlllNGlXTkR5aThIbSsv?=
 =?utf-8?B?bi8xMEpwbTFMY01BOUtaelovSVYwOFlnVGpOSUg1YXlPUGZiWDFVSmZ5Z3Zw?=
 =?utf-8?B?YjFXSlNmdCtjSWtmclRERnFudXpkT2tZRkpKc09uRDcvdjZqaUJOeDhmcHFO?=
 =?utf-8?B?SWNudWxXeFZ4cUduZ3RTdmI0djdYUWUySGZuejlFamVNb0JjcGkva2JCcklY?=
 =?utf-8?B?TXNqcXRwK09vWWtETitweUlKMm5VcUdjTGpUMjV3aWV0aDJMd0lWTFdMZG5v?=
 =?utf-8?B?SUdOdjBOUVNSQXFVYVRpaENvaXVzUGxHVUxkdW92bllYeDFBNDhsUC9vZkNK?=
 =?utf-8?B?MUFGR0hiczBPVHM1TTY1TG4rNlJqVnU0aEhDelY5QmpWUGV6bC9tS21iYVEz?=
 =?utf-8?Q?j8BRreG4T6Xk3c//abXvnRTo3JsttvJmkVh4AhE?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <1637D843BB5FF74281DF6E0BAFB6A349@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 58a1b014-08d8-4b2f-5ce0-08d936293d02
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jun 2021 09:28:19.3256
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Wu+fSLPlOTovZK5PsjUzaWUUhsnZc0M/bNN7Um+QahZwaEoRqfODgV+aT4wh4pOqvOWAY4Q99dI9sRpljA3TVDTIi74Ab3o6TMTvX+S6hMs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5896
X-OriginatorOrg: citrix.com

SGVsbG8gZXZlcnlvbmUhDQoNClBsZWFzZSB3ZWxjb21lIEFzaGxleSBXZWx0eiwgb3VyIG5ldyBQ
cm9qZWN0IENvb3JkaW5hdG9yIGZyb20gdGhlIExpbnV4IEZvdW5kYXRpb24uICBBcyB0aGUgam9i
IHRpdGxlIGltcGxpZXMsIEFzaGxleSB3aWxsIGJlIGhlbHBpbmcgb3V0IHdpdGggdmFyaW91cyBw
cm9qZWN0IGNvb3JkaW5hdGlvbiBhY3Rpdml0aWVzLCBpbmNsdWRpbmcgY2hhaXJpbmcgdGhlIG1v
bnRobHkgY29tbXVuaXR5IGNhbGwgYW5kIHJ1bm5pbmcgdGhlIGpvYnMgcGFnZS4gIFNvIGRvbuKA
mXQgYmUgc3VycHJpc2VkIGlmIHlvdSBzdGFydCByZWNlaXZpbmcgZW1haWxzIGZyb20gaGVyIGZv
ciBYZW5Qcm9qZWN0LXJlbGF0ZWQgdGhpbmdzLiA6LSkNCg0KUGVhY2UsDQogLUdlb3JnZSBEdW5s
YXA=


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 10:00:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 10:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146190.268943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzg8-0007OE-Tz; Wed, 23 Jun 2021 10:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146190.268943; Wed, 23 Jun 2021 10:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzg8-0007O7-Qk; Wed, 23 Jun 2021 10:00:16 +0000
Received: by outflank-mailman (input) for mailman id 146190;
 Wed, 23 Jun 2021 10:00:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzg6-0007Nx-QZ; Wed, 23 Jun 2021 10:00:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzg6-0002Bi-DS; Wed, 23 Jun 2021 10:00:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzg6-0003za-4s; Wed, 23 Jun 2021 10:00:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzg6-0004Sn-4J; Wed, 23 Jun 2021 10:00:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5XalakZtcYt9g2Y/OPdFVno0jxQK6W4OsvSe/i+LdMM=; b=n6LnZfUYR6RKf1/+oBpXp41BFJ
	ShWZOrKHZ7dxxg2E9CcYTWMTEgSaabp7pTDCT4uMDsiVVgcTbQwpeGwnN7pY/qj8OSbwGd6c14Lw6
	0CW2KnXKKwI8ZwB4UUsgG6XniJEvPfHxUpe1hH84GifOobJiMvrPg83+PFATKNkTsOAI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162992-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162992: all pass - PUSHED
X-Osstest-Versions-This:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
X-Osstest-Versions-That:
    xen=4bcf6433eed3d9cbc00865ec62380a33ca832dac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 10:00:14 +0000

flight 162992 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162992/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7
baseline version:
 xen                  4bcf6433eed3d9cbc00865ec62380a33ca832dac

Last test of basis   162857  2021-06-16 09:18:30 Z    7 days
Failing since        162908  2021-06-20 09:18:24 Z    3 days    2 attempts
Testing same since   162992  2021-06-23 09:20:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4bcf6433ee..c7691f5e34  c7691f5e340f3b579d40c77548f63133cdf5aac7 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 10:04:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 10:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146198.268959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzkV-00084X-Is; Wed, 23 Jun 2021 10:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146198.268959; Wed, 23 Jun 2021 10:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lvzkV-00084Q-Ft; Wed, 23 Jun 2021 10:04:47 +0000
Received: by outflank-mailman (input) for mailman id 146198;
 Wed, 23 Jun 2021 10:04:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzkU-00084G-IA; Wed, 23 Jun 2021 10:04:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzkU-0002GK-En; Wed, 23 Jun 2021 10:04:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzkU-00046v-45; Wed, 23 Jun 2021 10:04:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lvzkU-00067c-3a; Wed, 23 Jun 2021 10:04:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i9qzm7lSPNDslZgSBQ22VyZEFLbz3vnT8GoWH3Emcn0=; b=0xwbpMcICOUS2JWcfKGrYLRz1v
	i39mTmOtCc330jWm3cG39Y8nRLhdiuCDprqI8bIkc+2n4EheLxlxaWj9MUf3Vpkyuj6/x8RY5YJ14
	edSSnNRq0C2Fg79+xglaGR2iI5VPhwL3PxkbT1lVEjOP31enhYhmyj/zvWo+Z65lCCcE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162987: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=7471751a4d813a64501a9d7819b1eb405911b310
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 10:04:46 +0000

flight 162987 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162987/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 7471751a4d813a64501a9d7819b1eb405911b310
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   19 days
Failing since        162368  2021-06-04 15:42:59 Z   18 days   40 attempts
Testing same since   162987  2021-06-23 05:52:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2284 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 11:46:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 11:46:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146206.268976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw1Ko-0000R1-3I; Wed, 23 Jun 2021 11:46:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146206.268976; Wed, 23 Jun 2021 11:46:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw1Ko-0000Qu-0I; Wed, 23 Jun 2021 11:46:22 +0000
Received: by outflank-mailman (input) for mailman id 146206;
 Wed, 23 Jun 2021 11:46:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw1Km-0000Qk-PW; Wed, 23 Jun 2021 11:46:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw1Km-0003za-H5; Wed, 23 Jun 2021 11:46:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw1Km-0006Ze-6i; Wed, 23 Jun 2021 11:46:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw1Km-0003sk-6G; Wed, 23 Jun 2021 11:46:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sS25R88B0ZKpTUUQENNGsxLZ9UIHVaMl2xq6Tx63W1U=; b=KhUqxXuc47WQHSq+ayTbs+h3+Z
	b0TdTfZFMtUhq4Woqbaq8oeuPcYlLrrvrp7aZA10Vze/4/FRcdeGtJ3ryMB9p6xzyxOZLIe/kccQI
	E9PCfXZ7VIvRQqvvDqWVuGCnGig8MK0v1xQRmReMt3rJC0pChXUTGjfNvpM5uKP+76J0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162994-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162994: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=7471751a4d813a64501a9d7819b1eb405911b310
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 11:46:20 +0000

flight 162994 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162994/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 7471751a4d813a64501a9d7819b1eb405911b310
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   19 days
Failing since        162368  2021-06-04 15:42:59 Z   18 days   41 attempts
Testing same since   162987  2021-06-23 05:52:05 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2284 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 12:25:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 12:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146214.268993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw1wp-0004Rx-6t; Wed, 23 Jun 2021 12:25:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146214.268993; Wed, 23 Jun 2021 12:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw1wp-0004Rq-3r; Wed, 23 Jun 2021 12:25:39 +0000
Received: by outflank-mailman (input) for mailman id 146214;
 Wed, 23 Jun 2021 12:25:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lw1wn-0004Rk-Cs
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 12:25:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lw1wm-0004dR-0J; Wed, 23 Jun 2021 12:25:36 +0000
Received: from [54.239.6.179] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lw1wl-0007NC-NK; Wed, 23 Jun 2021 12:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=B0DJ7ghfxZQgt2aFki6iaUURBqib+4g/6GH5PMjJUlg=; b=HTRta7BV6amD3qJrqcG53oPEri
	/dCQ6UIvMvuZe0Bp5ki/k2+tDu5OhywzaB4L2DasWju10H0R7KbnmphtMLwxqmpQjd8vH5qq2DwmY
	NDxrPWm5aKtfzivwtHLshlk0BcoGFbZ4qG5jOfsSINA8x2Ck8Q/S7JpT+ENixO5zZnkk=;
Subject: Re: Interrupt for port 19, but apparently not enabled; per-user
 000000004af23acc
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, mheyne@amazon.de
References: <6552fc66-ba19-2c77-7928-b0272d3e1622@xen.org>
 <4d8a7ba7-a9f6-2999-8750-bfe2b85f064e@suse.com>
 <9a08bbf2-ba6a-6e49-3bcb-bfe2beb32b99@xen.org>
 <5d88a82e-d237-7803-7b50-897e857f2fbd@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3d029164-43b7-d65f-4a8a-3ddef5e743e5@xen.org>
Date: Wed, 23 Jun 2021 14:25:33 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <5d88a82e-d237-7803-7b50-897e857f2fbd@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 22/06/2021 17:14, Juergen Gross wrote:
> On 22.06.21 14:21, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 22/06/2021 13:04, Juergen Gross wrote:
>>> On 22.06.21 12:24, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> As discussed on IRC yesterday, we noticed a couple of splats in
>>>> 5.13-rc6 (and stable 5.4) in the evtchn driver:
>>>>
>>>> [    7.581000] ------------[ cut here ]------------
>>>> [    7.581899] Interrupt for port 19, but apparently not enabled;
>>>> per-user 000000004af23acc
>>>> [    7.583401] WARNING: CPU: 0 PID: 467 at 
>>>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:169 
>>>> evtchn_interrupt+0xd5/0x100
>>>> [    7.585583] Modules linked in:
>>>> [    7.586188] CPU: 0 PID: 467 Comm: xenstore-read Not tainted
>>>> 5.13.0-rc6 #240
>>>> [    7.587462] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), 
>>>> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>>>> [    7.589462] RIP: e030:evtchn_interrupt+0xd5/0x100
>>>> [    7.590361] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00
>>>> e8 5f 72 c4 ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 03 5f 82 e8 6b 2d 96
>>>> ff <0f> 0b e9 4d ff ff ff 41 0f b6 f4 48 c7 c7 80 da a2 82 e8 f0
>>>> [    7.593662] RSP: e02b:ffffc90040003e60 EFLAGS: 00010082
>>>> [    7.594636] RAX: 0000000000000000 RBX: ffff888102328c00 RCX: 
>>>> 0000000000000027
>>>> [    7.595924] RDX: 0000000000000000 RSI: ffff88817fe18ad0 RDI: 
>>>> ffff88817fe18ad8
>>>> [    7.597216] RBP: ffff888108ef8140 R08: 0000000000000000 R09: 
>>>> 0000000000000001
>>>> [    7.598522] R10: 0000000000000000 R11: 7075727265746e49 R12: 
>>>> 0000000000000000
>>>> [    7.599810] R13: ffffc90040003ec4 R14: ffff8881001b8000 R15: 
>>>> ffff888109b36f80
>>>> [    7.601113] FS:  0000000000000000(0000) GS:ffff88817fe00000(0000) 
>>>> knlGS:0000000000000000
>>>> [    7.602570] CS:  10000e030 DS: 0000 ES: 0000 CR0:0000000080050033
>>>> [    7.603700] CR2: 00007f15b390e368 CR3: 000000010bb04000 CR4: 
>>>> 0000000000050660
>>>> [    7.604993] Call Trace:
>>>> [    7.605501]  <IRQ>
>>>> [    7.605929]  __handle_irq_event_percpu+0x4c/0x330
>>>> [    7.606817]  handle_irq_event_percpu+0x32/0xa0
>>>> [    7.607670]  handle_irq_event+0x3a/0x60
>>>> [    7.608416]  handle_edge_irq+0x9b/0x1f0
>>>> [    7.609154]  generic_handle_irq+0x4f/0x60
>>>> [    7.609918]  __evtchn_fifo_handle_events+0x195/0x3a0
>>>> [    7.610864]  __xen_evtchn_do_upcall+0x66/0xb0
>>>> [    7.611693]  __xen_pv_evtchn_do_upcall+0x1d/0x30
>>>> [    7.612582]  xen_pv_evtchn_do_upcall+0x9d/0xc0
>>>> [    7.613439]  </IRQ>
>>>> [    7.613882]  exc_xen_hypervisor_callback+0x8/0x10
>>>>
>>>> This is quite similar to the problem I reported a few months ago (see
>>>> [1]) but this time this is happening with fifo rather than 2L.
>>>>
>>>> I haven't been able to reproduced it reliably so far. But looking at 
>>>> the code, I think I have found another potential race after commit
>>>>
>>>> commit b6622798bc50b625a1e62f82c7190df40c1f5b21
>>>> Author: Juergen Gross <jgross@suse.com>
>>>> Date:   Sat Mar 6 17:18:33 2021 +0100
>>>>     xen/events: avoid handling the same event on two cpus at the
>>>>     same time
>>>>     When changing the cpu affinity of an event it can happen today that
>>>>     (with some unlucky timing) the same event will be handled on
>>>>     the old and the new cpu at the same time.
>>>>     Avoid that by adding an "event active" flag to the per-event
>>>>     data and call the handler only if this flag isn't set.
>>>>     Cc: stable@vger.kernel.org
>>>>     Reported-by: Julien Grall <julien@xen.org>
>>>>     Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>     Reviewed-by: Julien Grall <jgrall@amazon.com>
>>>>     Link: 
>>>> https://lore.kernel.org/r/20210306161833.4552-4-jgross@suse.com
>>>>     Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>
>>>> The evtchn driver will use the lateeoi handlers. So the code to ack 
>>>> looks like:
>>>>
>>>> do_mask(..., EVT_MASK_REASON_EOI_PENDING)
>>>> smp_store_release(&info->is_active, 0);
>>>> clear_evtchn(info->evtchn);
>>>>
>>>> The code to handle an interrupts look like:
>>>>
>>>> clear_link(...)
>>>> if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
>>>>    if (xchg_acquire(&info->is_active, 1))
>>>>      return;
>>>>    generic_handle_irq();
>>>> }
>>>>
>>>> After changing the affinity, an interrupt may be received once on the
>>>> previous vCPU. So, I think the following can happen:
>>>>
>>>> vCPU0                             | vCPU1
>>>>                                   |
>>>>   Receive event                   |
>>>>                                   | change affinity to vCPU1
>>>>   clear_link()                    |
>>>>                                   |
>>>>               /* The interrupt is re-raised */
>>>>                                   | receive event
>>>>                                   |
>>>>                                   | /* The interrupt is not masked */
>>>>   info->is_active = 1             |
>>>>   do_mask(...)                    |
>>>>   info->is_active = 0             |
>>>>                                   | info->is_active = 1
>>>>   clear_evtchn(...)               |
>>>>                                   | do_mask(...)
>>>>                                   | info->is_active = 0
>>>>                                   | clear_evtchn(...)
>>>>
>>>> Does this look plausible to you?
>>>
>>> Yes, it does.
>>>
>>> Thanks for the analysis.
>>>
>>> So I guess for lateeoi events we need to clear is_active only in
>>> xen_irq_lateeoi()? At a first glance this should fix the issue.
>>
>> It should work and would be quite neat. But, I believe clear_evtchn() 
>> would have to stick in the ack helper to avoid losing interrupts.
>>
> 
> Could you try the attached patch, please? Only compile tested.

Thanks for the patch! I have also found a reproducer on Linux 5.13 so it 
was easier to confirm the patch works.

The reproducer is continuously changing the affinity of the interrupt 
under high interrupt load. After a few seconds I can see dozens of WARN 
splats.
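
For illustration only, the interleaving in the diagram above can be 
replayed deterministically as a toy model (plain Python, not the kernel 
code; the variable names merely mirror the pseudocode):

```python
# Toy replay of the race (illustrative model only, not the kernel code).
# vCPU1 performs its "pending and not masked" check before vCPU0 runs
# do_mask(), but its is_active test happens after vCPU0 has already
# dropped is_active -- so the same event gets handled on both vCPUs.
is_active = 0
masked = False
handled = []

# vCPU1: event re-raised on the new vCPU; mask not yet set by vCPU0
vcpu1_check_passed = not masked

# vCPU0: handles the event and starts acking it (lateeoi path)
is_active = 1
handled.append("vCPU0")            # generic_handle_irq()
masked = True                      # do_mask(EVT_MASK_REASON_EOI_PENDING)
is_active = 0                      # smp_store_release() opens the window

# vCPU1: the xchg_acquire(&info->is_active, 1) analogue now sees 0
if vcpu1_check_passed and is_active == 0:
    is_active = 1
    handled.append("vCPU1")        # second concurrent handling of one event
```

Keeping is_active set until the EOI (rather than clearing it before 
clear_evtchn()) closes this window in the model.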

Regarding the patch itself, a few suggestions:
   1) It is not entirely obvious from the code why ack_mask_dynirq() is 
not modified. My understanding is that we are assuming 
xen_irq_lateeoi_locked() will not be called in this case. I would 
suggest spelling this out clearly in the commit message.
   2) I would suggest adding a comment in the code explaining why 
event_handler_exit() is not used. It is probably worth also adding one 
in event_handler_exit() so one knows that it doesn't cover all the paths.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 12:44:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 12:44:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146219.269004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2FF-0006hg-LM; Wed, 23 Jun 2021 12:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146219.269004; Wed, 23 Jun 2021 12:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2FF-0006hZ-IR; Wed, 23 Jun 2021 12:44:41 +0000
Received: by outflank-mailman (input) for mailman id 146219;
 Wed, 23 Jun 2021 12:44:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2FD-0006hP-Or; Wed, 23 Jun 2021 12:44:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2FD-0004wl-Jz; Wed, 23 Jun 2021 12:44:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2FD-0008Sm-5p; Wed, 23 Jun 2021 12:44:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2FD-0006zN-5M; Wed, 23 Jun 2021 12:44:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Wmb6Tzfp3J3SySDkzxE7qMqTTKMlCeVWvQ4cuoQkOec=; b=eHU/LIV2sUPwMb7tRTRcNytHNr
	qPRmMWfr5jt85S3EDHn5WlDMNmD8XiE3y0kt6IjMK+e/+wfL26XiUojZAKmJHh6FK03mtz4+fCUN3
	/m685lheBKbnrayAwTDAZT3euS36Z/n918zOYtGqOSZKhN9PVYfqVzyH22hde4hok/Kc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162969-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162969: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=0add99ea3ea91af8230e3933ad7826b2da25a44d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 12:44:39 +0000

flight 162969 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162969/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                0add99ea3ea91af8230e3933ad7826b2da25a44d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  307 days
Failing since        152659  2020-08-21 14:07:39 Z  305 days  561 attempts
Testing same since   162969  2021-06-22 11:25:32 Z    1 days    1 attempts

------------------------------------------------------------
541 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 175648 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 13:09:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 13:09:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146226.269021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2d9-0000cf-Oi; Wed, 23 Jun 2021 13:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146226.269021; Wed, 23 Jun 2021 13:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2d9-0000cY-Ll; Wed, 23 Jun 2021 13:09:23 +0000
Received: by outflank-mailman (input) for mailman id 146226;
 Wed, 23 Jun 2021 13:09:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9gCE=LR=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lw2d8-0000cS-L0
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 13:09:22 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98a1a20a-0f8f-4e98-bb5c-527349548266;
 Wed, 23 Jun 2021 13:09:21 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3764521962;
 Wed, 23 Jun 2021 13:09:20 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id F245A11A97;
 Wed, 23 Jun 2021 13:09:19 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id ol9pOX8y02DIMQAALh3uQQ
 (envelope-from <jgross@suse.com>); Wed, 23 Jun 2021 13:09:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98a1a20a-0f8f-4e98-bb5c-527349548266
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624453760; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=fvLe4EDcZBFIJFV4utDnoKGG4SlCKvEBy8vMSAJDpnk=;
	b=JvFAJrnRk4z9JofF9qR8IZLIrtA/HR4LsTpSYytHLGr/baiqJyKWcoa8i77iQYjkHGDtPG
	nYCqIBCJPS3jLWcXfr2juHvEIVvlZd0a1X2+W61iXuQ1PV0TMhkYUzTbGzHjeNOQrJm9Ix
	TgdnlutnZLfBh6DyZ7HJqAeYOQKMUPk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH] xen/events: reset active flag for lateeoi events later
Date: Wed, 23 Jun 2021 15:09:13 +0200
Message-Id: <20210623130913.9405-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to avoid a race condition for user events when changing CPU
affinity, reset the active flag only when EOI-ing the event.

This works fine because all user events are lateeoi events. Note that
lateeoi_ack_mask_dynirq() is not modified, as no explicit call to
xen_irq_lateeoi() is expected afterwards.

Reported-by: Julien Grall <julien@xen.org>
Fixes: b6622798bc50b62 ("xen/events: avoid handling the same event on two cpus at the same time")
Tested-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 7bbfd58958bc..d7e361fb0548 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -642,6 +642,9 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
+
+	/* is_active hasn't been reset yet, do it now. */
+	smp_store_release(&info->is_active, 0);
 	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
@@ -811,6 +814,7 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+/* Not called for lateeoi events. */
 static void event_handler_exit(struct irq_info *info)
 {
 	smp_store_release(&info->is_active, 0);
@@ -1883,7 +1887,12 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		event_handler_exit(info);
+		/*
+		 * Don't call event_handler_exit().
+		 * Need to keep is_active non-zero in order to ignore re-raised
+		 * events after cpu affinity changes while a lateeoi is pending.
+		 */
+		clear_evtchn(evtchn);
 	}
 }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 13:33:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 13:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146233.269033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2zx-0003hq-O4; Wed, 23 Jun 2021 13:32:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146233.269033; Wed, 23 Jun 2021 13:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw2zx-0003hj-KN; Wed, 23 Jun 2021 13:32:57 +0000
Received: by outflank-mailman (input) for mailman id 146233;
 Wed, 23 Jun 2021 13:32:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2zw-0003hZ-Ss; Wed, 23 Jun 2021 13:32:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2zw-0005js-LU; Wed, 23 Jun 2021 13:32:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2zw-00015g-CD; Wed, 23 Jun 2021 13:32:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw2zw-0002aH-Bj; Wed, 23 Jun 2021 13:32:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qmGQTh/uUs/ePJvUvUvyGodGyGBN8pgCm4cxfnk3av0=; b=XGNY9rzmIQVK22l9+9IGMGQq3j
	zFxgqpoaIdJeR5EiSTFQrWDXkjd8bbWXDzhjbXblGwwriZTWtXCaV8tJXXVsQD99tgYYBXVBHe/iu
	y9IEdlKWUeGWr/H2wsuvqRAs/oaNW3vxrulq6OVEH0yHD5lQP2ba5UaNPSh02KH+v0Y8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162995-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162995: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=7471751a4d813a64501a9d7819b1eb405911b310
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 13:32:56 +0000

flight 162995 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162995/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 7471751a4d813a64501a9d7819b1eb405911b310
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   19 days
Failing since        162368  2021-06-04 15:42:59 Z   18 days   42 attempts
Testing same since   162987  2021-06-23 05:52:05 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2284 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 16:01:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 16:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146243.269056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5JG-0000UG-NX; Wed, 23 Jun 2021 16:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146243.269056; Wed, 23 Jun 2021 16:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5JG-0000U8-KS; Wed, 23 Jun 2021 16:01:02 +0000
Received: by outflank-mailman (input) for mailman id 146243;
 Wed, 23 Jun 2021 16:01:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5JG-0000Tz-AR; Wed, 23 Jun 2021 16:01:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5JG-0000Iw-1H; Wed, 23 Jun 2021 16:01:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5JF-0006F9-K2; Wed, 23 Jun 2021 16:01:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5JF-00050h-JG; Wed, 23 Jun 2021 16:01:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8YcCv5XdOmfX+BxXjpTiQJRb6H6UMIekVjk79ysjx9I=; b=kbBxbaSdLaRoekvw5CW0V8vlPb
	bOAwCTms9eB4c4LDZc/16+j6u6rQoQH/zTmvuRnO3OAKwDduiW42FTVo6p904/8E9l2VlB13sEmDy
	Zao6OQzTgQzrBXe2WJtofcakRgz2CJzU/BF4xTf9SE1KogW4MkBAk+UXB6sse5D91RaQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162986-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162986: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0c18f29aae7ce3dadd26d8ee3505d07cc982df75
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 16:01:01 +0000

flight 162986 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162986/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0c18f29aae7ce3dadd26d8ee3505d07cc982df75
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  326 days
Failing since        152366  2020-08-01 20:49:34 Z  325 days  553 attempts
Testing same since   162986  2021-06-23 04:37:31 Z    0 days    1 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1688820 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 16:16:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 16:16:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146249.269070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5Xv-0001z1-4r; Wed, 23 Jun 2021 16:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146249.269070; Wed, 23 Jun 2021 16:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5Xv-0001yu-1L; Wed, 23 Jun 2021 16:16:11 +0000
Received: by outflank-mailman (input) for mailman id 146249;
 Wed, 23 Jun 2021 16:16:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G2Dd=LR=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lw5Xt-0001yo-Td
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 16:16:10 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 028fa92f-2248-475a-87da-9cdde862bb59;
 Wed, 23 Jun 2021 16:16:06 +0000 (UTC)
Received: from DB6PR0202CA0034.eurprd02.prod.outlook.com (2603:10a6:4:a5::20)
 by DB8PR08MB4107.eurprd08.prod.outlook.com (2603:10a6:10:ac::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Wed, 23 Jun
 2021 16:16:04 +0000
Received: from DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::a8) by DB6PR0202CA0034.outlook.office365.com
 (2603:10a6:4:a5::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19 via Frontend
 Transport; Wed, 23 Jun 2021 16:16:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT033.mail.protection.outlook.com (10.152.20.76) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Wed, 23 Jun 2021 16:16:04 +0000
Received: ("Tessian outbound 2df94acd389f:v96");
 Wed, 23 Jun 2021 16:16:04 +0000
Received: from f2aa59f3f165.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6E9379DA-BA61-438B-AA22-84BA4CAF2E95.1; 
 Wed, 23 Jun 2021 16:15:26 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f2aa59f3f165.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 23 Jun 2021 16:15:26 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com (2603:10a6:20b:39e::10)
 by AM6PR08MB5239.eurprd08.prod.outlook.com (2603:10a6:20b:e6::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Wed, 23 Jun
 2021 16:15:24 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5]) by AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::2de3:452a:87cf:3ff5%9]) with mapi id 15.20.4242.023; Wed, 23 Jun 2021
 16:15:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 028fa92f-2248-475a-87da-9cdde862bb59
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vZgPLMACgiAJKQAGrxXC+noIWaodAbe1w3tkxUdyYqI=;
 b=wfEH5X6c7i/2hqjZbNxEdfDwhEBGsSUarVnOIHkvVMKSxZNaiN9MiS+tBvq1EpPnuEERACqPqjRIsCn/h316DP81sOra7T2JnG61RAzZ71nbhxLijykTpBChX2gKHTojCcy9mSU+XsL0mtrFkQe9zgqa2HrL50W1o4+4SRuhhrQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b00e56675b05c072
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DGCeQskn7+W8gS9ZFzLe0AeJ3NPG672gjEDLxx6BkWKkBF4Xxy3DRF2ystu559vbX7ZPYqNmmjCuEF8LxX/qEjFCc++8ij2bveNDAXDkp9QKB90Ibgiw6zgvof7brAFY6CFskmhdtno6ZKVywMA34EZrjYgix49Aj/iXNfQ2mdVexeC32LhB5G//appda/fRmQ/Y+O6Ztarw0sQ4wOYf/FB4pn/eFs0mXZdI8VAOZRs5PKszYak7zliBY+R/yXAlel9GlzSjVD+5FzGkSe2+K8wODYLqCQAdAqbygrHvz/zYZvFsSWeFUvcHuSg5xUbDkKIe0iGHiEatwe5YWXiYqg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vZgPLMACgiAJKQAGrxXC+noIWaodAbe1w3tkxUdyYqI=;
 b=UXVpkRMkzxGo4Lo55FOSTX33iMRg/UWJODW1uCAYkXSt5rYZmSTAfXpKqqit/UZpUdK0r1IVzIdUKFQlXJfLqVy0HiX6JJtHIVC9u0dCFqGLdPi3z7aiwm/oT2lrxzg5hz5zGau2y58Z8MtVyDPQgzZw63grmoWt6cFEZ9RpH83rvce1LCC8Na0wHJUT7+P603d3ct3XGWpL/hGMq6NSJYPHECyCZVktcz72fvtTb6YsuXdk5x/FxyHbrmfE+imui4nHW4vgmSoGoK8luo9UtkNtJEekioqQo5it2OlCreMgwmOdqhVdrdFrvnzBvK5jXnOisQHtfAo6Kn4dmfCk2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, "fnuv@xilinx.com"
	<fnuv@xilinx.com>
Subject: Re: smmuv1 breakage
Thread-Topic: smmuv1 breakage
Thread-Index: AQHXYY0v3XVdqEOvqESHjaQGLRWMRKsVQ9IAgACUkYCACrmTAIAAuQiAgACH14A=
Date: Wed, 23 Jun 2021 16:15:23 +0000
Message-ID: <6CBD56EF-7420-4A43-8EF6-6CB0532BF108@arm.com>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s>
 <791BFC00-6A50-48D2-A208-E529B887441F@arm.com>
 <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2106221405220.24906@sstabellini-ThinkPad-T480s>
 <55ACE88F-F72C-4715-B3B1-B7B7F1B4CFFB@arm.com>
In-Reply-To: <55ACE88F-F72C-4715-B3B1-B7B7F1B4CFFB@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: cb21ad1d-99b8-4222-e7f3-08d936623326
x-ms-traffictypediagnostic: AM6PR08MB5239:|DB8PR08MB4107:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB41072D57D73FC18ECEB3ED23FC089@DB8PR08MB4107.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: multipart/mixed;
	boundary="_004_6CBD56EF74204A438EF66CB0532BF108armcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5239
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c2c60081-297d-4cac-b9c1-08d936621b35
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2021 16:16:04.2348
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cb21ad1d-99b8-4222-e7f3-08d936623326
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4107

--_004_6CBD56EF74204A438EF66CB0532BF108armcom_
Content-Type: multipart/alternative;
	boundary="_000_6CBD56EF74204A438EF66CB0532BF108armcom_"

--_000_6CBD56EF74204A438EF66CB0532BF108armcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hi Stefano,

> On 23 Jun 2021, at 9:09 am, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> Hi Stefano,
>
>> On 22 Jun 2021, at 10:06 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> Hi Rahul,
>>
>> Do you have an opinion on how we should move forward on this?
>>
>> Do you think it is OK to go for a full revert of "xen/arm: smmuv1:
>> Intelligent SMR allocation" or do you think it is best to go with an
>> alternative fix? If so, do you have something in mind?
>>
>
> Sorry for the late reply I was working on another high-priority task.
> I will work on this will try to fix the issue. I will update you within 2-3 days.

I checked my patches again and found that, while allocating SMRs, I by mistake
allocated only one SMR for each SMMU device, but we have to allocate the number of
SMRs based on the stream-match registers supported by each SMMU device.

This might be causing the issue. As I don't have any Xilinx hardware, and the
issue is not reproducible on QEMU/Juno, can you please test the attached patch and
let me know if it works.

Regards,
Rahul

>
> Regards,
> Rahul
>
>>
>>
>> On Tue, 15 Jun 2021, Stefano Stabellini wrote:
>>> On Tue, 15 Jun 2021, Rahul Singh wrote:
>>>> Hi Stefano
>>>>
>>>>> On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>
>>>>> Hi Rahul,
>>>>>
>>>>> Unfortunately, after bisecting, I discovered a few more breakages due to
>>>>> your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
>>>>> attached the DTB as reference. Please note that I made sure to
>>>>> cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
>>>>> the S2CR" during bisection. So the errors are present also on staging.
>>>>>
>>>>> The first breakage is an error at boot time in smmu.c#find_smmu_master,
>>>>> see log1. I think it is due to the lack of ability to parse the new smmu
>>>>> bindings in the old smmu driver.
>>>>>
>>>>> After removing all the "smmus" and "#stream-id-cells" properties in
>>>>> device tree, I get past the previous error, everything seems to be OK at
>>>>> early boot, but I actually get SMMU errors as soon as dom0 starting
>>>>> using devices:
>>>>>
>>>>> (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
>>>>> (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
>>>>
>>>> This fault is "Unidentified stream fault" for StreamID "0x877" that means SMMU SMR is not configured for streamID "0x877"
>>>>
>>>>
>>>>> [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>> [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
>>>>>
>>>>> Do you think you'll be able to help fix them?
>>>>>
>>>>>
>>>>> You should be able to reproduce the two issues using Xilinx QEMU (but to
>>>>> be honest I haven't tested it on QEMU yet, I was testing on real
>>>>> hardware):
>>>>> - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
>>>>> ./configure  --target-list=aarch64-softmmu
>>>>> make
>>>>> - clone and build git://github.com/Xilinx/qemu-devicetrees.git
>>>>> - use the attached script to run it
>>>>>  - kernel can be upstream defconfig 5.10
>>>>>
>>>>
>>>> I tried to reproduce the issue on Xilinx QEMU as per the steps shared above
>>>> but I am not observing any issue on Xilinx QEMU.
>>>
>>> I tried on QEMU and it doesn't repro. I cannot explain why it works on
>>> QEMU and it fails on real
IGhhcmR3YXJlLg0KPj4+DQo+Pj4NCj4+Pj4gSSBhbHNvIHRlc3RlZCBhbmQgY29uZmlybWVkIG9u
IFFFTVUgdGhhdCBTTU1VIGlzIGNvbmZpZ3VyZWQgY29ycmVjdGx5DQo+Pj4+IGZvciBzcGVjaWZp
Y2FsbHkgU3RyZWFtSUQg4oCcIDB4ODc34oCdIGFuZCBmb3Igb3RoZXIgc3RyZWFtSURzLg0KPj4+
Pg0KPj4+PiBJIGNoZWNrIHRoZSB4ZW4uZHRiIHNoYXJlZCBieSB5b3UgYW5kIGZvdW5kIG91dCB0
aGUgdGhlcmUgaXMgbm8gInN0cmVhbS1pZC1jZWxsc+KAnQ0KPj4+PiBwcm9wZXJ0eSBpbiB0aGUg
bWFzdGVyIGRldmljZSBidXQgdGhlICJtbXUtbWFzdGVycyIgcHJvcGVydHkgaXMgcHJlc2VudCBp
biB0aGUNCj4+Pj4gc21tdSBub2RlLiBGb3IgbGVnYWN5IHNtbXUgYmluZGluZyB3ZSBuZWVkIGJv
dGggInN0cmVhbS1pZC1jZWxsc+KAnSBhbmQgIm1tdS1tYXN0ZXJz4oCdLg0KPj4+PiBJZiB5b3Ug
bmVlZCB0byBhZGQgdGhlIG5ldyBzbW11IGJpbmRpbmcgcGxlYXNlIGFkZCB0aGUgImlvbW11LWNl
bGxz4oCdDQo+Pj4+IHByb3BlcnR5IGluIHRoZSBzbW11IG5vZGUgYW5kIHRoZSDigJxpb21tdXPi
gJ0gcHJvcGVydHkgaW4gdGhlIG1hc3RlciBkZXZpY2UuDQo+Pj4NCj4+PiBJbiByZWdhcmRzIHRv
IHRoZSBtaXNzaW5nICJzdHJlYW0taWQtY2VsbHMiIHByb3BlcnR5LCBJIHNoYXJlZCB0aGUgd3Jv
bmcNCj4+PiBkdGIgYmVmb3JlLCBzb3JyeS4gSSB3YXMgcnVubmluZyBhIG51bWJlciBvZiB0ZXN0
cyBhbmQgSSBtaWdodCBoYXZlDQo+Pj4gcGlja2VkIHRoZSB3cm9uZyBmaWxlLiBUaGUgcHJvcGVy
IGR0YiBjb21lcyB3aXRoICJzdHJlYW0taWQtY2VsbHMiIGZvcg0KPj4+IHRoZSAweDg3NyBkZXZp
Y2UsIHNlZSBhdHRhY2hlZC4NCj4+Pg0KPj4+DQo+Pj4NCj4+Pj4gQ2FuIHlvdSBwbGVhc2Ugc2hh
cmUgdGhlIHhlbiBib290IGxvZ3Mgd2l0aCBtZSBzbyB0aGF0IEkgY2FuIGRlYnVnIGZ1cnRoZXIg
d2h5IHRoZSBlcnJvciBpcyBvYnNlcnZlZD8NCj4+Pg0KPj4+IFNlZSBhdHRhY2hlZC4gSSBkaWQg
c29tZSBkZWJ1Z2dpbmcgYW5kIGRpc2NvdmVyZWQgdGhhdCBpdCBjcmFzaGVzIHdoaWxlDQo+Pj4g
YWNjZXNzaW5nIG1hc3Rlci0+b2Zfbm9kZSBpbiBmaW5kX3NtbXVfbWFzdGVyLiBJZiBJIHJldmVy
dCB5b3VyIHNlcmllcywNCj4+PiB0aGUgY3Jhc2ggZ29lcyBhd2F5LiBJdCBpcyB2ZXJ5IHN0cmFu
Z2UgYmVjYXVzZSB5b3VyIHBhdGNoZXMgZG9uJ3QgdG91Y2gNCj4+PiBmaW5kX3NtbXVfbWFzdGVy
IG9yIGluc2VydF9zbW11X21hc3RlciBkaXJlY3RseS4NCj4+Pg0KPj4+IEkgZGlkIGEgZ2l0IHJl
c2V0IC0taGFyZCBvbiB0aGUgY29tbWl0ICJ4ZW4vYXJtOiBzbW11djE6IEFkZCBhIHN0cmVhbQ0K
Pj4+IG1hcCBlbnRyeSBpdGVyYXRvciIgYW5kIGl0IHdvcmtlZCwgd2hpY2ggcG9pbnRzIHRvICJ4
ZW4vYXJtOiBzbW11djE6DQo+Pj4gSW50ZWxsaWdlbnQgU01SIGFsbG9jYXRpb24iIGJlaW5nIHRo
ZSBwcm9ibGVtLCBldmVuIGlmIEkgaGF2ZSB0aGUgcmV2ZXJ0DQo+Pj4gY2hlcnJ5LXBpY2tlZCBv
biB0b3AuIE1heWJlIHRoZSByZXZlcnQgaXMgbm90IHJldmVydGluZyBlbm91Z2g/DQo+Pj4NCj4+
PiBBZnRlciB0aGlzIHRlc3QsIEkgc3dpdGNoZWQgYmFjayB0byBzdGFnaW5nIGFuZCBkaWQ6DQo+
Pj4gZ2l0IHJldmVydCA5ZjZjZDQ5ODM3MTVjYjMxZjBlYTU0MGU2YmJiNjNmNzk5YTM1ZDhhDQo+
Pj4gZ2l0IHJldmVydCAwNDM1Nzg0Y2M3NWRjZmVmM2I1ZjU5YzI5ZGViMWRiYjg0MjY1ZGRiDQo+
Pj4NCj4+PiBBbmQgaXQgd29ya2VkISBTbyB0aGUgaXNzdWUgdHJ1bHkgaXMgdGhhdA0KPj4+IDlm
NmNkNDk4MzcxNWNiMzFmMGVhNTQwZTZiYmI2M2Y3OTlhMzVkOGEgZG9lc24ndCByZXZlcnQgImVu
b3VnaCIuDQo+Pj4gU2VlICJmdWxsLXJldmVydCIgZm9yIHRoZSBwYXRjaCByZXZlcnRpbmcgdGhl
IHJlbWFpbmluZyBjb2RlLiBUaGF0IG9uDQo+Pj4gdG9wIG9mIHN0YWdpbmcgZml4ZXMgYm9vdCBm
b3IgbWUuDQo+DQoNCg==
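To make the allocation bug described above concrete, here is a minimal stand-alone sketch in plain C. It is an illustration only: the struct layout, the field names (num_mapping_groups, valid) and the use of calloc in place of Xen's zeroing array allocator are assumptions for the example, not the driver's actual definitions. The point it shows is allocating one zero-initialised SMR entry per supported stream-mapping register instead of a single entry per SMMU device.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the driver's types. */
struct arm_smmu_smr {
    uint16_t mask;
    uint16_t id;
    int valid; /* zero means "invalid", i.e. an unused entry */
};

struct arm_smmu_device {
    unsigned int num_mapping_groups; /* SMRs the hardware supports */
    struct arm_smmu_smr *smrs;
};

/* Buggy shape: one SMR entry for the whole device, regardless of how
 * many stream-mapping registers the hardware actually has. */
static int alloc_smrs_buggy(struct arm_smmu_device *smmu)
{
    smmu->smrs = calloc(1, sizeof(*smmu->smrs));
    return smmu->smrs ? 0 : -1;
}

/* Fixed shape: one zero-initialised entry per supported stream-mapping
 * register, so every stream ID the hardware can match has a slot. */
static int alloc_smrs_fixed(struct arm_smmu_device *smmu)
{
    smmu->smrs = calloc(smmu->num_mapping_groups, sizeof(*smmu->smrs));
    return smmu->smrs ? 0 : -1;
}
```

With the buggy shape, any access to smrs[i] for i > 0 walks off the end of the single allocated entry, so stream IDs beyond the first can never get a valid SMR programmed.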


--_000_6CBD56EF74204A438EF66CB0532BF108armcom_--

--_004_6CBD56EF74204A438EF66CB0532BF108armcom_
Content-Type: application/octet-stream;
	name="0001-xen-arm-smmuv1-Fixed-SMR-allocation.patch"
Content-Description: 0001-xen-arm-smmuv1-Fixed-SMR-allocation.patch
Content-Disposition: attachment;
	filename="0001-xen-arm-smmuv1-Fixed-SMR-allocation.patch"; size=1582;
	creation-date="Wed, 23 Jun 2021 16:15:23 GMT";
	modification-date="Wed, 23 Jun 2021 16:15:23 GMT"
Content-ID: <C63B25A01D5D7A4B8D74F1FD97491816@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit

From 7fbd3b278a5db095da1b2d163b1aca5da7d5bb93 Mon Sep 17 00:00:00 2001
Message-Id: <7fbd3b278a5db095da1b2d163b1aca5da7d5bb93.1624464053.git.rahul.singh@arm.com>
From: Rahul Singh <rahul.singh@arm.com>
Date: Wed, 23 Jun 2021 16:54:55 +0100
Subject: [PATCH] xen/arm: smmuv1: Fixed SMR allocation

SMR allocation should be based on number of supported stream matching/
indexing register for each SMMU device.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ab5b1bc434..8ac782041f 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -149,6 +149,7 @@ typedef enum irqreturn irqreturn_t;
 #define kzalloc(size, flags)		_xzalloc(size, sizeof(void *))
 #define devm_kzalloc(dev, size, flags)	_xzalloc(size, sizeof(void *))
 #define kmalloc_array(size, n, flags)	_xmalloc_array(size, sizeof(void *), n)
+#define kzalloc_array(size, n, flags)	_xzalloc_array(size, sizeof(void *), n)
 
 static void __iomem *devm_ioremap_resource(struct device *dev,
 					   struct resource *res)
@@ -2185,7 +2186,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 		smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
 
 		/* Zero-initialised to mark as invalid */
-		smmu->smrs = devm_kzalloc(smmu->dev, sizeof(*smmu->smrs), GFP_KERNEL);
+		smmu->smrs = kzalloc_array(sizeof(*smmu->smrs), size, GFP_KERNEL);
 		if (!smmu->smrs)
 			return -ENOMEM;
 
-- 
2.17.1

--_004_6CBD56EF74204A438EF66CB0532BF108armcom_--


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 16:19:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 16:19:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146255.269081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5bR-0002hR-Pf; Wed, 23 Jun 2021 16:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146255.269081; Wed, 23 Jun 2021 16:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5bR-0002hK-Lq; Wed, 23 Jun 2021 16:19:49 +0000
Received: by outflank-mailman (input) for mailman id 146255;
 Wed, 23 Jun 2021 16:19:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5bQ-0002hA-LW; Wed, 23 Jun 2021 16:19:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5bQ-0000kW-Du; Wed, 23 Jun 2021 16:19:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5bQ-0007ay-1c; Wed, 23 Jun 2021 16:19:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5bQ-0008T6-19; Wed, 23 Jun 2021 16:19:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VYqiIRm5eW1Uyc/zpoRyWArsbHONQ7G0qjs6Hx42Txg=; b=Dw3607n7v9uDgtMJq0/ioKEiAZ
	QPtf0WYmrbc9WMpSO6WUL595EN5iIH46/mti+ormPxlEYkzG1kIiSwTWCDhkumS1OoZpWcu6eefAr
	LSDs0gYuZ5NbOgdc7mixzxfGbAu0PnYq+T/QYMu6Ln2ZuSpHe2GKdtu3Af8vPd+Fcw1E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162971-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162971: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 16:19:48 +0000

flight 162971 xen-unstable real [real]
flight 163000 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162971/
http://logs.test-lab.xenproject.org/osstest/logs/163000/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8af4b47f061edf6450f2b0a0a8134fdf1c13b3e5
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   15 days
Failing since        162556  2021-06-08 22:39:08 Z   14 days   21 attempts
Testing same since   162885  2021-06-17 23:08:00 Z    5 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1163 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 16:35:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 16:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146263.269095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5qb-0004uB-E8; Wed, 23 Jun 2021 16:35:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146263.269095; Wed, 23 Jun 2021 16:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw5qb-0004u4-Ah; Wed, 23 Jun 2021 16:35:29 +0000
Received: by outflank-mailman (input) for mailman id 146263;
 Wed, 23 Jun 2021 16:35:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5qa-0004tu-MP; Wed, 23 Jun 2021 16:35:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5qa-00010P-H8; Wed, 23 Jun 2021 16:35:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5qa-0008Ug-9J; Wed, 23 Jun 2021 16:35:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw5qa-0003vf-8n; Wed, 23 Jun 2021 16:35:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=+zJp6jKfd3I5dqrYYsImkX+BXiJbM79hUljImLP5dx0=; b=T1iih68XVdKrL/Rtek9+3frAiv
	1rqp2bpe5q0FjzK7sJEbrq67bi+A8E9/f9hfQ35R9q3kAhxV7C59svdK2CrThBA+rUUThJ6j7hso7
	9pBQxPhcaKQ3OWXBH4afIG3SP9jFD4JLIXYab3a9U+jLSOewkkrU+tJnUd9x+QVqr8dw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
Message-Id: <E1lw5qa-0003vf-8n@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 16:35:28 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161372/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.guest-saverestore --summary-out=tmp/163001.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-debianhvm-i386-xsm guest-saverestore
Searching for failure / basis pass:
 162969 fail [host=chardonnay0] / 160125 [host=fiano1] 160119 [host=pinot1] 160113 [host=albana1] 160104 [host=albana0] 160097 [host=huxelrebe1] 160091 [host=elbling0] 160082 ok.
Failure / basis pass flights: 162969 / 160082
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6157b0e19721aadb4c7fdcfe57b2924af6144b14 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#6157b0e19721aadb4c7fdcfe57b2924af6144b14-0add99ea3ea91af8230e3933ad7826b2da25a44d git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#14b95b3b8546db201e7efd0636ae0e215fae98f3-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 55584 nodes in revision graph
Searching for test results:
 162778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162795 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162818 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162840 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162850 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162860 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162871 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 38848ce565849e5b867a5e08022b3c755039c11a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162879 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 18e53dff939898c6dd00d206a3c2f5cd3d6669db e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162889 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 18e53dff939898c6dd00d206a3c2f5cd3d6669db e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162969 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162997 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6157b0e19721aadb4c7fdcfe57b2924af6144b14 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 163001 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 160082 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6157b0e19721aadb4c7fdcfe57b2924af6144b14 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160088 []
 160091 [host=elbling0]
 160097 [host=huxelrebe1]
 160104 [host=albana0]
 160113 [host=albana1]
 160119 [host=pinot1]
 160125 [host=fiano1]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161319 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6157b0e19721aadb4c7fdcfe57b2924af6144b14 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161325 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161329 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161308 fail irrelevant
 161333 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161336 fail irrelevant
 161337 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161339 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161341 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161342 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 fc4a62f65cbd2d5d2c247ed4fbf64a05e6485859 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161344 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161346 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161347 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161349 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a752dd07466c4f0bda5c14d001b096e778a44ad5 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161351 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 db492ebb91059b818d5b5ea5975d227e5c3c9bcc b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161353 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 313d86c956d4599054a9dcd524668f67797d317a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 56b89f455894e4628ad7994fe5dd348145d1a9c5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161354 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161355 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f720c55637410062041f68dc908645cd18aaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ad1324e044240ae9fcf67e4c215481b7a35591b9 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161357 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 445a5b4087567bf4d4ce76d394adf78d9d5c88a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161358 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161361 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161334 fail irrelevant
 161362 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161365 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6157b0e19721aadb4c7fdcfe57b2924af6144b14 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161366 fail irrelevant
 161367 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161370 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161372 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161942 blocked irrelevant
 161945 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 []
 162409 fail irrelevant
 162379 fail irrelevant
 162429 fail irrelevant
 162454 fail irrelevant
 162474 fail irrelevant
 162501 []
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162762 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162676 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 160082 (pass), for basis pass
 Result found: flight 162969 (fail), for basis failure
 Repro found: flight 162997 (pass), for basis pass
 Repro found: flight 163001 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161358 (pass), for last pass
 Result found: flight 161361 (fail), for first failure
 Repro found: flight 161362 (pass), for last pass
 Repro found: flight 161367 (fail), for first failure
 Repro found: flight 161370 (pass), for last pass
 Repro found: flight 161372 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161372/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.306968 to fit
pnmtopng: 236 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
163001: tolerable FAIL

flight 163001 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/163001/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail baseline untested


jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 18:38:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 18:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146278.269118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw7l1-0007Oa-94; Wed, 23 Jun 2021 18:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146278.269118; Wed, 23 Jun 2021 18:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw7l1-0007OT-5x; Wed, 23 Jun 2021 18:37:51 +0000
Received: by outflank-mailman (input) for mailman id 146278;
 Wed, 23 Jun 2021 18:37:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kcuX=LR=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lw7l0-0007ON-5K
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 18:37:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8169a997-27df-40f9-8136-c29e1e413bc1;
 Wed, 23 Jun 2021 18:37:49 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3C1E061185;
 Wed, 23 Jun 2021 18:37:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8169a997-27df-40f9-8136-c29e1e413bc1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624473468;
	bh=K9ctSOOXg/HLNG/KMzo+bt+uXbhOKksVK31vHq8QEAc=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=pew0mCStiOrsU0bWjM0b54f2cDTk0BFJYuT7oL0Hl93RfndilrrpZ00P5EcbMTEX/
	 /lbkdoUtJfhzG6xhgO6NWycBIcDRvmNhEJGTWAKSFbeL2aFhOe2jpLrVodVSrKGkzr
	 gHxURUCJo+Bry9z+VHTZDIOT/ABoSs0DN2Tjo7IL/UHU38iukI2V/m5xdJIYc7NUws
	 7NMVKjT7vcMuSIu2fy0OZOLVI8+LwwecPlMOHv27TBpClMax/sAEQmgzVN3VS5iIlB
	 HGiL4Uzn3xHn2voW7eTXBh/NjM9NO32LN+GkhPpRHj+kINjyQizH9+P5WDGveYcADU
	 kFBCzfquuEa8w==
Date: Wed, 23 Jun 2021 19:37:37 +0100
From: Will Deacon <will@kernel.org>
To: Qian Cai <quic_qiancai@quicinc.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	mingo@kernel.org, jxgao@google.com, sstabellini@kernel.org,
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>, daniel@ffwll.ch,
	airlied@linux.ie, maarten.lankhorst@linux.intel.com,
	linuxppc-dev@lists.ozlabs.org, jani.nikula@linux.intel.com,
	Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com,
	bhelgaas@google.com, Dan Williams <dan.j.williams@intel.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, thomas.lendacky@amd.com,
	Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210623183736.GA472@willie-the-truck>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Jun 23, 2021 at 12:39:29PM -0400, Qian Cai wrote:
> 
> 
> On 6/18/2021 11:40 PM, Claire Chang wrote:
> > Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> > use it to determine whether to bounce the data or not. This will be
> > useful later to allow for different pools.
> > 
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> > Tested-by: Will Deacon <will@kernel.org>
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Reverting the rest of the series up to this patch fixed a boot crash with NVMe on today's linux-next.

Hmm, so that makes patch 7 the suspicious one, right?

Looking at that one more closely, it looks like swiotlb_find_slots() takes
'alloc_size + offset' as its 'alloc_size' parameter from
swiotlb_tbl_map_single() and initialises 'mem->slots[i].alloc_size' based
on 'alloc_size + offset', which looks like a change in behaviour from the
old code, which didn't include the offset there.

swiotlb_release_slots() then adds the offset back on afaict, so we end up
accounting for it twice and possibly unmap more than we're supposed to?

Will


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 18:50:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 18:50:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146285.269129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw7ww-00018T-DN; Wed, 23 Jun 2021 18:50:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146285.269129; Wed, 23 Jun 2021 18:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw7ww-00018M-A8; Wed, 23 Jun 2021 18:50:10 +0000
Received: by outflank-mailman (input) for mailman id 146285;
 Wed, 23 Jun 2021 18:50:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5/Od=LR=gmail.com=salvatore.bonaccorso@srs-us1.protection.inumbo.net>)
 id 1lw7wv-00018F-1W
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 18:50:09 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c72ab878-9c9d-428a-b6bb-a016de2846ff;
 Wed, 23 Jun 2021 18:50:07 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id b3so3755512wrm.6
 for <xen-devel@lists.xenproject.org>; Wed, 23 Jun 2021 11:50:07 -0700 (PDT)
Received: from eldamar (80-218-24-251.dclient.hispeed.ch. [80.218.24.251])
 by smtp.gmail.com with ESMTPSA id x10sm848038wru.58.2021.06.23.11.50.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 23 Jun 2021 11:50:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: c72ab878-9c9d-428a-b6bb-a016de2846ff
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=t9g61bv+JZ07tY5UuVmyIiDTVqgatmyK1QRV0XqD+Hc=;
        b=KLXuHll2RXbwuztCykmE1EQTu6vmTzU1QQjpNTWzNCxaM1/Ps+7XEiFt8zUeOZ1baA
         xe7gg6HXyDZ4KaB5Rul1MiFZoKZ0NCIbHMmtoCQwOsmnIQiG+kgYk7+D1Nwn5IGm+2zB
         +ei/17jncOmir+dd9UyMDCtbBeqxpGTR5rZqJk0gYytyX0CnJeaGFqWYDEu6HFskFOOE
         nDzu6jGbO5UvY6TkF6GoI2grAJpQLbjUvI6cQO7vFjzZCaOOWSL1cuXCzLO2WUEC+XMu
         tRtnxpuK28KFSNdjX7/SGtxLwQUIkf75kqAgcewqO+eglB9DMfpSiAc0IPNPS5ithG0y
         BKnQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=t9g61bv+JZ07tY5UuVmyIiDTVqgatmyK1QRV0XqD+Hc=;
        b=tM2sPUmnd88O5Cnx8N+7I2kLV9hk9S05S+VAVb8zmP+EnRLH0BTV5djFCHojbJo8EH
         pZg2xp3JGQB13DO3qhrXLL6txdlPu+l4ggtGjC20wEr4QQThbZoXa9qCO/K23GXbi472
         etCULP0uGpS/QPpCK5kdOQ+9AOPiO8f8UcfndKzEp8EVPBwHy3hgCBaBiVhxI2BVGAie
         rPUVrFDJRGTZdBwFoSxzrUPkiXfSBHdLs/Qjb0Z+ZbeeXOB7ErIOvnbu5YNA02/PFM+n
         +ITOcYZFDlJJz+sIdpwLhcXox3K9dQ/CnoCmelflmC3I8hRp/XwXLzowqUBlekBOMN9p
         DChA==
X-Gm-Message-State: AOAM533BZtWrHbGB3oR++ZDCTL+S4z80rKZkAtK516b4NJZ/TtxcZdD9
	ppFzsliPowJmbItpgMq+ZZeZxjRCr7fv+Q==
X-Google-Smtp-Source: ABdhPJwow7ob1nYUTJTVlm2hcLViMuMJ7U2nNz+pDoNObCJVja/RfxSZArMIJ4rIufME1rWIeSAkvw==
X-Received: by 2002:a5d:6b91:: with SMTP id n17mr1793938wrx.166.1624474207108;
        Wed, 23 Jun 2021 11:50:07 -0700 (PDT)
Sender: Salvatore Bonaccorso <salvatore.bonaccorso@gmail.com>
Date: Wed, 23 Jun 2021 20:50:05 +0200
From: Salvatore Bonaccorso <carnil@debian.org>
To: Robin Murphy <robin.murphy@arm.com>, 989778-maintonly@bugs.debian.org
Cc: =?utf-8?B?5bCP5aSq?= <nospam@kota.moe>,
	Jianxiong Gao <jxgao@google.com>, Christoph Hellwig <hch@lst.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	xen-devel@lists.xenproject.org
Subject: Re: Bug#989778: Regression in at least 5.10.y and mainline: Firewire
 audio interface fails to work properly (when booted under Xen)
Message-ID: <YNOCXe1cuNDNAB+Z@eldamar.lan>
References: <162352833546.2353.230557992597997974.reportbug@home.kota.moe>
 <YMWl4UnFBAVRDnys@eldamar.lan>
 <162352833546.2353.230557992597997974.reportbug@home.kota.moe>
 <2f7c7d36-b6f4-f8ab-756e-a563fa03b9e4@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2f7c7d36-b6f4-f8ab-756e-a563fa03b9e4@arm.com>

Hi Robin,

On Mon, Jun 14, 2021 at 02:29:08PM +0100, Robin Murphy wrote:
> On 2021-06-13 07:29, Salvatore Bonaccorso wrote:
> > A user in Debian reported the above issue, which was reproducible with
> > 5.13-rc5 and 5.10.y as packaged in Debian and found that 85a5a6875ca9
> > ("swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single") that
> > introduced the issue.
> 
> Sounds like it's probably the same thing as being discussed over here:
> 
> https://lore.kernel.org/linux-iommu/2e899de2-4b69-c4b6-33a6-09fb8949d2fd@nxp.com/

Thanks for the pointer; it seems this has now been fixed upstream
with 5f89468e2f06 ("swiotlb: manipulate orig_addr when tlb_addr has
offset").

Regards,
Salvatore


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 19:02:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 19:02:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146290.269140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw897-0002dY-Ix; Wed, 23 Jun 2021 19:02:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146290.269140; Wed, 23 Jun 2021 19:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw897-0002dR-Ev; Wed, 23 Jun 2021 19:02:45 +0000
Received: by outflank-mailman (input) for mailman id 146290;
 Wed, 23 Jun 2021 19:02:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw895-0002dH-WE; Wed, 23 Jun 2021 19:02:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw895-0003Sg-N2; Wed, 23 Jun 2021 19:02:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lw895-0008Hj-Ed; Wed, 23 Jun 2021 19:02:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lw895-0007vc-EA; Wed, 23 Jun 2021 19:02:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=mW5N0qHW1FWayGs4NlL/szE3o3uqb7krvrYn7O9JW10=; b=hkaR3UnYvhNBZguZVrBqc22lBG
	2ejHXQQ7XhUIgS2K9dR2Mf86sDiHGz2VqNj/n/8s48RqN2815zZs6OEZa+XxnRb9TQEVQlarTWRPW
	IRLyKdsJIdpsYlU1CytQTT6JFZFEsntC/xVN++AyP3QDwICkM2keOyvjrdc+HTz36GRo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1lw895-0007vc-EA@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 19:02:43 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161274/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.
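The automated search above is conceptually the same as a manual git bisect between the good and bad qemuu revisions it reports. A sketch of the manual equivalent (osstest actually drives this through cs-bisection-step; the commands below are illustrative only):

```shell
# Bisect qemu.git between the revisions osstest identified above.
git clone git://git.qemu.org/qemu.git && cd qemu
git bisect start
git bisect bad  8af54b9172ff3b9bbdbb3191ed84994d275a0d81   # first known-bad
git bisect good cbde7be900d2a2279cbc4becb91d1ddd6a014def   # last known-good
# At each step, build and run the failing test case
# (test-amd64-amd64-xl-qemuu-ovmf-amd64, testid guest-saverestore),
# then mark the outcome:
#   git bisect good   # test passed at this revision
#   git bisect bad    # test failed at this revision
git bisect reset
```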

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-saverestore --summary-out=tmp/163006.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-ovmf-amd64 guest-saverestore
Searching for failure / basis pass:
 162969 fail [host=pinot0] / 160125 [host=albana0] 160119 [host=godello1] 160113 [host=huxelrebe1] 160104 [host=fiano1] 160097 [host=pinot1] 160091 [host=godello0] 160088 [host=elbling0] 160082 [host=albana0] 160079 [host=albana1] 160070 [host=chardonnay1] 160066 [host=chardonnay0] 160002 [host=huxelrebe1] 159947 [host=godello0] 159926 [host=godello0] 159911 [host=fiano0] 159898 [host=albana0] 159888 [host=godello1] 159878 [host=godello0] 159869 [host=godello1] 159860 [host=albana1] 159853 [host\
 =elbling1] 159848 [host=huxelrebe1] 159842 [host=elbling0] 159834 [host=godello0] 159828 ok.
Failure / basis pass flights: 162969 / 159828
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ef91b07388e1c0a50c604e5350eeda98428ccea6-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#cb90ecf9349198558569f6c86c4c27d215406095-0add99ea3ea91af8230e3933ad7826b2da25a44d git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#243036df0d55673de59c214e240b9b914d278b65-5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Loaded 55673 nodes in revision graph
Searching for test results:
 162778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162795 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162818 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162840 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162850 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1ea06abceec61b6f3ab33dadb0510b6e09fb61e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162860 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1dd259ae24a26d8a987ab83aefb5c04dbe5f4b2a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162871 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 38848ce565849e5b867a5e08022b3c755039c11a e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162879 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 18e53dff939898c6dd00d206a3c2f5cd3d6669db e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162889 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 18e53dff939898c6dd00d206a3c2f5cd3d6669db e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162969 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163005 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 163006 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0add99ea3ea91af8230e3933ad7826b2da25a44d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 159828 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 159834 [host=godello0]
 159842 [host=elbling0]
 159848 [host=huxelrebe1]
 159853 [host=elbling1]
 159860 [host=albana1]
 159869 [host=godello1]
 159878 [host=godello0]
 159888 [host=godello1]
 159898 [host=albana0]
 159911 [host=fiano0]
 159926 [host=godello0]
 159947 [host=godello0]
 160002 [host=huxelrebe1]
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=chardonnay0]
 160070 [host=chardonnay1]
 160079 [host=albana1]
 160082 [host=albana0]
 160088 [host=elbling0]
 160091 [host=godello0]
 160097 [host=pinot1]
 160104 [host=fiano1]
 160113 [host=huxelrebe1]
 160119 [host=godello1]
 160125 [host=albana0]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161220 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 161225 fail irrelevant
 161226 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161227 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161228 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161230 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161210 fail irrelevant
 161231 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161233 fail irrelevant
 161234 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161235 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0693602a23276b076a679b1e7ed9125a444336b6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161237 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161239 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161255 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d1929069e355afb809a50a7f6b6affdea399cc8c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 368096b9c4a273be58dd897e996e3e010bcfc21b
 161240 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161242 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 51204c2f188ec1e2a38f14718d38a3772f850a4b b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161243 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161245 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6bc6cdc82d45f203bc9fc4342c0452214c74fe b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161246 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161247 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9abda42bf2f5aa6ef403d3140fd3d7d88e8064e9 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 893103e286ac1c500d2ad113f55c41edb35e047c
 161249 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161250 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a557b00469bca61a058fc1db4855503cac1c3219 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 4e01c48886d9fbfee3bf7e481c4529a176331c78
 161252 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1941858448e76f83eb00614c4f34ac29e9a8e792 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 161253 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 65a9d3807e9a0ffd9f9719416a07be41b6f39e94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161254 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 94fa95c8746c553324e8b69ea4a74af670075324 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e4341623a3b87e7eca87d42b7b88da967cd21c49 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 60c0444fae2148452f9ed0b7c49af1fa41f8f522
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161274 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161258 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b6d5996706ddb6082e3ea8de79849bfecf2aaa15 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161260 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161262 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161263 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161265 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161266 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161267 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161268 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161271 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161272 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161273 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail irrelevant
 162167 fail irrelevant
 162197 fail irrelevant
 162240 fail irrelevant
 162244 fail irrelevant
 162252 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7258034ab40e6927acbd005feb295eb3acf972bb 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 9fdcf851689cb2a9501d3947cb5d767d9c7797e8
 162257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1999b264f1f9d7230edf2448f757c73da567832 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 62c0ac5041e9130b041adfa13a41583d3c3ddd24 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162260 fail irrelevant
 162264 fail irrelevant
 162267 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f9dc72de91d2915b808e82da34bf613afa5cce43 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162270 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 6eff8085980dba0938cea0193b8a0fd3c6b0c4ca 683d899e4bffca35c5b192ea0662362b0270a695
 162299 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fe5da0927aad98f3c005088197fa30c1b8f9d3e8 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 683d899e4bffca35c5b192ea0662362b0270a695
 162328 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3ff5dbe1dfc3420e5254d290500c0b6f6282d17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162331 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fdf3666f01a2dd02d83a808f609b9c744a74c652 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 52848929b70dcf92a68aedcfd90207be81ba3274 81433aa8a19b36f9e3d50697608c93d8a28bf772 57f68dfd2d111a2ad381df740543c901b41f2299
 162339 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b233eb1849ac01bdd5b24ea84460a2e481a4c5a9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd2db39d78431ab5a0b78777afaab3d61e94533e 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162342 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1f515342d8d83ef0fff0c3f4ac67232dd8c97565 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c345b3e6a736d4985b2bca6f7f24b985900de63 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162347 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6dad2028d01b7f9ec76cf3b83457fab57fa1eb 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162362 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 75e9154f818a58ffc3a28db9f8c97279e723f02d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 453d9c61dd5681159051c6e4d07e7b2633de2e70 81433aa8a19b36f9e3d50697608c93d8a28bf772 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162409 blocked irrelevant
 162379 fail irrelevant
 162429 blocked irrelevant
 162454 blocked irrelevant
 162477 blocked irrelevant
 162484 blocked irrelevant
 162487 blocked irrelevant
 162491 blocked irrelevant
 162493 blocked irrelevant
 162497 blocked irrelevant
 162474 blocked irrelevant
 162499 blocked irrelevant
 162502 blocked irrelevant
 162507 blocked irrelevant
 162519 blocked irrelevant
 162521 blocked irrelevant
 162525 blocked irrelevant
 162501 []
 162526 blocked irrelevant
 162527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a35947f15c0ee695eba3c55248ec8ac3e4e23cca 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162551 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4716fd8d7c877185652f5f8e25032dc7699d51b 7292e4a0a8f58333ccbd2d0d47242f9865083c9c 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162591 fail irrelevant
 162623 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162762 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162676 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 162712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 894fc4fd670aaf04a67dc7507739f914ff4bacf2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Searching for interesting versions
 Result found: flight 159828 (pass), for basis pass
 Result found: flight 162969 (fail), for basis failure
 Repro found: flight 163005 (pass), for basis pass
 Repro found: flight 163006 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161267 (pass), for last pass
 Result found: flight 161268 (fail), for first failure
 Repro found: flight 161271 (pass), for last pass
 Repro found: flight 161272 (fail), for first failure
 Repro found: flight 161273 (pass), for last pass
 Repro found: flight 161274 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161274/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
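  [Editorial note: for tools that consumed the removed command, the field-name
  differences the commit message alludes to can be sketched with a small shim.
  The mapping table below is illustrative, not taken from QEMU source; check the
  QMP schema documentation for the authoritative 'query-cpus-fast' field names.]

  ```python
  # Hypothetical shim mapping a 'query-cpus-fast' result entry onto the
  # key names the removed 'query-cpus' command used.  The key spellings
  # here (cpu-index/CPU, qom-path/qom_path, thread-id/thread_id) are
  # assumptions drawn from the QMP schema, not from this report.
  LEGACY_KEYS = {
      "cpu-index": "CPU",
      "qom-path": "qom_path",
      "thread-id": "thread_id",
  }

  def to_legacy(entry):
      """Rename query-cpus-fast keys to their old query-cpus spellings,
      passing any other keys through unchanged."""
      return {LEGACY_KEYS.get(k, k): v for k, v in entry.items()}

  fast = {"cpu-index": 0, "qom-path": "/machine/unattached/device[0]",
          "thread-id": 1234, "target": "x86_64"}
  print(to_legacy(fast)["CPU"])   # prints 0
  ```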

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.273186 to fit
pnmtopng: 241 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
163006: tolerable FAIL

flight 163006 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/163006/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 20:57:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 20:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146299.269154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw9wC-000472-11; Wed, 23 Jun 2021 20:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146299.269154; Wed, 23 Jun 2021 20:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lw9wB-00046v-U3; Wed, 23 Jun 2021 20:57:31 +0000
Received: by outflank-mailman (input) for mailman id 146299;
 Wed, 23 Jun 2021 20:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zqu9=LR=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lw9wB-00046n-Cn
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 20:57:31 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0afca0c7-0c06-4544-8b13-2307501881fc;
 Wed, 23 Jun 2021 20:57:30 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 5130B610C7;
 Wed, 23 Jun 2021 20:57:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0afca0c7-0c06-4544-8b13-2307501881fc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624481849;
	bh=YCJzZScTVuwY+INyiGZAOgJVxazgYAMNMs/LWeB4mds=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NvtcE2jHVWpNU3nXBZ1C/O5ZMb3RFrFtQPPMLXPib9ZaJBsKcYcdy7MYVXhTXNAzE
	 XB7VvUnqQI0uc013npZMhec0gXTUNzWq3fswwGPurTes91+DT+lSFYHHOM8G8+gyxh
	 gTp+HeS+9dEDBcsq9BIAuEJ/FSWryklTrVdnmkEtHKyb98NqMxj4k4Tk2KYwDOt1uH
	 tPho2HKHEuzPenlqm8YYzpWbFlo9mYC+R+B6JP9n2wXN8kyr6ENox/469GdgfKgOCV
	 sgc7AgDKeyL3+2yi/un531QwPq+NYaKlmOYhlQiL0oE6n3po1kPY5rhlgx779E5pya
	 lVzYeT3RqGJ6w==
Date: Wed, 23 Jun 2021 13:57:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>, 
    "fnuv@xilinx.com" <fnuv@xilinx.com>
Subject: Re: smmuv1 breakage
In-Reply-To: <6CBD56EF-7420-4A43-8EF6-6CB0532BF108@arm.com>
Message-ID: <alpine.DEB.2.21.2106231357040.24906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2106141840150.24906@sstabellini-ThinkPad-T480s> <791BFC00-6A50-48D2-A208-E529B887441F@arm.com> <alpine.DEB.2.21.2106151756190.24906@sstabellini-ThinkPad-T480s> <alpine.DEB.2.21.2106221405220.24906@sstabellini-ThinkPad-T480s>
 <55ACE88F-F72C-4715-B3B1-B7B7F1B4CFFB@arm.com> <6CBD56EF-7420-4A43-8EF6-6CB0532BF108@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1939976230-1624481849=:24906"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1939976230-1624481849=:24906
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 23 Jun 2021, Rahul Singh wrote:
> Hi Stefano,
> 
> > On 23 Jun 2021, at 9:09 am, Rahul Singh <Rahul.Singh@arm.com> wrote:
> >
> > Hi Stefano,
> >
> >> On 22 Jun 2021, at 10:06 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>
> >> Hi Rahul,
> >>
> >> Do you have an opinion on how we should move forward on this?
> >>
> >> Do you think it is OK to go for a full revert of "xen/arm: smmuv1:
> >> Intelligent SMR allocation" or do you think it is best to go with an
> >> alternative fix? If so, do you have something in mind?
> >>
> >
> > Sorry for the late reply; I was working on another high-priority task.
> > I will work on this and try to fix the issue. I will update you within 2-3 days.
> 
> I checked my patches again and found that, while allocating SMRs, I mistakenly
> allocated one SMR for each SMMU device, whereas we have to allocate the number
> of SMRs based on the supported stream matching registers of each SMMU device.
> 
> This might be causing the issue. As I don’t have any Xilinx hardware, and the
> issue is not reproducible on QEMU/Juno, can you please test the attached patch
> and let me know if it works?

Yes, this solves the issue for me, thank you!!


Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>


> >
> > Regards,
> > Rahul
> >
> >>
> >>
> >> On Tue, 15 Jun 2021, Stefano Stabellini wrote:
> >>> On Tue, 15 Jun 2021, Rahul Singh wrote:
> >>>> Hi Stefano
> >>>>
> >>>>> On 15 Jun 2021, at 3:21 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>>>>
> >>>>> Hi Rahul,
> >>>>>
> >>>>> Unfortunately, after bisecting, I discovered a few more breakages due to
> >>>>> your smmuv1 series (commits e889809b .. 3e6047ddf) on Xilinx ZynqMP. I
> >>>>> attached the DTB as reference. Please note that I made sure to
> >>>>> cherry-pick "xen/arm: smmuv1: Revert associating the group pointer with
> >>>>> the S2CR" during bisection. So the errors are present also on staging.
> >>>>>
> >>>>> The first breakage is an error at boot time in smmu.c#find_smmu_master,
> >>>>> see log1. I think it is due to the old smmu driver's inability to parse
> >>>>> the new smmu bindings.
> >>>>>
> >>>>> After removing all the "smmus" and "#stream-id-cells" properties in the
> >>>>> device tree, I get past the previous error and everything seems to be OK
> >>>>> at early boot, but I actually get SMMU errors as soon as dom0 starts
> >>>>> using devices:
> >>>>>
> >>>>> (XEN) smmu: /smmu@fd800000: Unexpected global fault, this could be serious
> >>>>> (XEN) smmu: /smmu@fd800000:     GFSR 0x80000002, GFSYNR0 0x00000000, GFSYNR1 0x00000877, GFSYNR2 0x00000000
> >>>>
> >>>> This fault is an "Unidentified stream fault" for StreamID 0x877, which means no SMMU SMR is configured for StreamID 0x877.
> >>>>
> >>>>
> >>>>> [   10.419681] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> >>>>> [   10.426452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> >>>>>
> >>>>> Do you think you'll be able to help fix them?
> >>>>>
> >>>>>
> >>>>> You should be able to reproduce the two issues using Xilinx QEMU (though to
> >>>>> be honest I haven't tested it on QEMU yet; I was testing on real
> >>>>> hardware):
> >>>>> - clone and compile xilinx QEMU https://github.com/Xilinx/qemu.git
> >>>>> ./configure  --target-list=aarch64-softmmu
> >>>>> make
> >>>>> - clone and build git://github.com/Xilinx/qemu-devicetrees.git
> >>>>> - use the attached script to run it
> >>>>>  - kernel can be upstream defconfig 5.10
> >>>>>
> >>>>
> >>>> I tried to reproduce the issue on Xilinx QEMU as per the steps shared
> >>>> above, but I am not observing any issue there.
> >>>
> >>> I tried on QEMU and it doesn't repro. I cannot explain why it works on
> >>> QEMU but fails on real hardware.
> >>>
> >>>
> >>>> I also tested and confirmed on QEMU that SMMU is configured correctly
> >>>> for specifically StreamID “ 0x877” and for other streamIDs.
> >>>>
> >>>> I checked the xen.dtb you shared and found that there is no "stream-id-cells"
> >>>> property in the master device, although the "mmu-masters" property is present
> >>>> in the smmu node. For the legacy smmu binding we need both "stream-id-cells"
> >>>> and "mmu-masters". If you want to use the new smmu binding instead, please add
> >>>> the "iommu-cells" property to the smmu node and the "iommus" property to the
> >>>> master device.
> >>>
> >>> Regarding the missing "stream-id-cells" property, I shared the wrong
> >>> dtb before, sorry. I was running a number of tests and I might have
> >>> picked the wrong file. The proper dtb comes with "stream-id-cells" for
> >>> the 0x877 device, see attached.
> >>>
> >>>
> >>>
> >>>> Can you please share the xen boot logs with me so that I can debug further why the error is observed?
> >>>
> >>> See attached. I did some debugging and discovered that it crashes while
> >>> accessing master->of_node in find_smmu_master. If I revert your series,
> >>> the crash goes away. It is very strange because your patches don't touch
> >>> find_smmu_master or insert_smmu_master directly.
> >>>
> >>> I did a git reset --hard on the commit "xen/arm: smmuv1: Add a stream
> >>> map entry iterator" and it worked, which points to "xen/arm: smmuv1:
> >>> Intelligent SMR allocation" being the problem, even though I have the
> >>> revert cherry-picked on top. Maybe the revert is not reverting enough?
> >>>
> >>> After this test, I switched back to staging and did:
> >>> git revert 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
> >>> git revert 0435784cc75dcfef3b5f59c29deb1dbb84265ddb
> >>>
> >>> And it worked! So the issue truly is that
> >>> 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a doesn't revert "enough".
> >>> See "full-revert" for the patch reverting the remaining code. That on
> >>> top of staging fixes boot for me.
> >
> 
> 
> 
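
[Editorial note: the two SMMU bindings discussed in the thread above can be
sketched as device-tree fragments. This is a hedged illustration only: the node
names, unit addresses, and the 0x877 stream ID are borrowed from the fault logs
quoted above, and the generic-binding property is spelled "#iommu-cells" per the
devicetree IOMMU binding docs (the thread writes it as "iommu-cells"); the real
xen.dtb attached to the thread is authoritative.]

```dts
/* Legacy Xen binding: the smmu node names its masters and their stream
 * IDs, and each master advertises "#stream-id-cells". */
smmu: smmu@fd800000 {
        compatible = "arm,mmu-500";
        mmu-masters = <&eth 0x877>;
};

eth: ethernet@ff0e0000 {
        #stream-id-cells = <1>;
};

/* Generic binding: the smmu node advertises "#iommu-cells" and each
 * master references it via "iommus".  (The two variants are
 * alternatives; they are shown together here only for comparison.) */
smmu2: smmu@fd800000 {
        compatible = "arm,mmu-500";
        #iommu-cells = <1>;
};

ethernet@ff0e0000 {
        iommus = <&smmu2 0x877>;
};
```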
--8323329-1939976230-1624481849=:24906--


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 22:03:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 22:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146304.269165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwAxc-00027B-1I; Wed, 23 Jun 2021 22:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146304.269165; Wed, 23 Jun 2021 22:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwAxb-000274-Tp; Wed, 23 Jun 2021 22:03:03 +0000
Received: by outflank-mailman (input) for mailman id 146304;
 Wed, 23 Jun 2021 22:03:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwAxa-00026u-Tc; Wed, 23 Jun 2021 22:03:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwAxa-0006XG-LA; Wed, 23 Jun 2021 22:03:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwAxa-0000Np-8u; Wed, 23 Jun 2021 22:03:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwAxa-0004td-8Q; Wed, 23 Jun 2021 22:03:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dleiQiHBJfPIu7TXgw1SeA9/EA6IozlXdlJLWj4E6zg=; b=aIWxzIqH0wgGP0yEOiUlf3Yqun
	WTpQmKgWBfv4gSy1osdq7KTo/nNcq6/MGT/7O9jW9K2QhiX+dJga3PNXgbQ6tInyqDSQN+i0wcHLe
	8AFEVqZ1Ln+bzPQbk3yT6y95F8V+WAj0R/XekvMmc8QNofEA2iSBImgHWu6cSEfWqUzM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162996-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162996: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b22726abdfa54592d6ad88f65b0297c0e8b363e2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 22:03:02 +0000

flight 162996 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162996/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                b22726abdfa54592d6ad88f65b0297c0e8b363e2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  307 days
Failing since        152659  2020-08-21 14:07:39 Z  306 days  562 attempts
Testing same since   162996  2021-06-23 12:46:39 Z    0 days    1 attempts

------------------------------------------------------------
546 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 176957 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 22:03:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 22:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146309.269178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwAyP-0002f1-Ac; Wed, 23 Jun 2021 22:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146309.269178; Wed, 23 Jun 2021 22:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwAyP-0002eu-7l; Wed, 23 Jun 2021 22:03:53 +0000
Received: by outflank-mailman (input) for mailman id 146309;
 Wed, 23 Jun 2021 22:03:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zqu9=LR=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwAyN-0002dX-Kr
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 22:03:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75c40fa5-ee0c-470a-afc8-348740e8aaef;
 Wed, 23 Jun 2021 22:03:51 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E490B613B0;
 Wed, 23 Jun 2021 22:03:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75c40fa5-ee0c-470a-afc8-348740e8aaef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624485830;
	bh=w2sbGSFId0NaFC8mtIXbtUBfZVXUfEfQrH+1XXHbucg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eBQALQ45wV05wUPyRYOpUY8DL3cvZmCHYOq5cqiz1QJVmle/NtymOjiU89dM3mqmQ
	 xdYTb6Hjt9aAHL+diJth/2urr9/DRNeL5laoj8JIQJu60KmrCSGfxoWd+qGbm3fL2K
	 gyvUN3HKg5uq/C5kOGX64l2NH5DnDQpfIELn4Zqj92pDOuBqYscGZtwji3LQ1i6BMu
	 4uA70JwDX+hXNiUWo10I9To/rj9Ob37n0KAvly1jYIf/0CLuvNaP+S0iuAvCx6DSWC
	 56Fv/HEhHMf50H0HtDEAkczbrdQDWD73j6bBcn+LveLuHz01tgXBSP5Uq3ufFHKIjn
	 uhfRaiFNyQcVA==
Date: Wed, 23 Jun 2021 15:03:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6 6/9] docs: add doxygen preprocessor and related
 files
In-Reply-To: <20210510084105.17108-7-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2106231456290.24906@sstabellini-ThinkPad-T480s>
References: <20210510084105.17108-1-luca.fancellu@arm.com> <20210510084105.17108-7-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 May 2021, Luca Fancellu wrote:
> Add a preprocessor, called by doxygen before parsing headers,
> that includes in every header a doxygen_include.h file
> providing the missing defines and includes that are
> usually passed by the compiler.
> 
> Add doxy_input.list, a text file containing the
> relative paths to the source code files to be parsed by
> doxygen. Each path should be relative to the xen folder:
> E.g. xen/include/public/grant_table.h
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  docs/xen-doxygen/doxy-preprocessor.py | 110 ++++++++++++++++++++++++++
>  docs/xen-doxygen/doxy_input.list      |   0
>  docs/xen-doxygen/doxygen_include.h.in |  32 ++++++++
>  3 files changed, 142 insertions(+)
>  create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
>  create mode 100644 docs/xen-doxygen/doxy_input.list
>  create mode 100644 docs/xen-doxygen/doxygen_include.h.in
> 
> diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
> new file mode 100755
> index 0000000000..496899d8e6
> --- /dev/null
> +++ b/docs/xen-doxygen/doxy-preprocessor.py
> @@ -0,0 +1,110 @@
> +#!/usr/bin/python3
> +#
> +# Copyright (c) 2021, Arm Limited.
> +#
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +
> +import os, sys, re
> +
> +
> +# Variables that hold the preprocessed header text
> +output_text = ""
> +header_file_name = ""
> +
> +# Variables to enumerate the anonymous structs/unions
> +anonymous_struct_count = 0
> +anonymous_union_count = 0
> +
> +
> +def error(text):
> +    sys.stderr.write("{}\n".format(text))
> +    sys.exit(1)
> +
> +
> +def write_to_output(text):
> +    sys.stdout.write(text)
> +
> +
> +def insert_doxygen_header(text):
> +    # Here the strategy is to insert the #include <doxygen_include.h> in the
> +    # first line of the header
> +    abspath = os.path.dirname(os.path.abspath(__file__))
> +    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
> +
> +    return text
> +
> +
> +def enumerate_anonymous(match):
> +    global anonymous_struct_count
> +    global anonymous_union_count
> +
> +    if "struct" in match.group(1):
> +        label = "anonymous_struct_%d" % anonymous_struct_count
> +        anonymous_struct_count += 1
> +    else:
> +        label = "anonymous_union_%d" % anonymous_union_count
> +        anonymous_union_count += 1
> +
> +    return match.group(1) + " " + label + " {"
> +
> +
> +def manage_anonymous_structs_unions(text):
> +    # Match anonymous unions/structs with this pattern:
> +    # struct/union {
> +    #     [...]
> +    #
> +    # and substitute it in this way:
> +    #
> +    # struct anonymous_struct_# {
> +    #     [...]
> +    # or
> +    # union anonymous_union_# {
> +    #     [...]
> +    # where # is a counter starting from zero, different between structs and
> +    # unions
> +    #
> +    # We don't count anonymous union/struct that are part of a typedef because
> +    # they don't create any issue for doxygen
> +    text = re.sub(
> +        "(?<!typedef\s)(struct|union)\s+?\{",
> +        enumerate_anonymous,
> +        text,
> +        flags=re.S
> +    )

My Python is a bit rusty, but I think this is really clever!

One question: given that anonymous_struct_count is local per file being
processed, it always starts at 0 for each header. I think that is
actually better from a documentation readability point of view.

However, is it possible that Doxygen gets confused in a case where we
can have multiple "struct anonymous_struct_0" definitions, e.g. one from
grant_table.h, one from event_channel.h, etc.?
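For illustration (a minimal standalone sketch, not part of the patch), the
substitution and per-file counter behaviour being discussed can be reproduced
like this; since the counters restart for each file processed, two different
headers can both end up containing a struct named anonymous_struct_0:

```python
import re

anonymous_struct_count = 0
anonymous_union_count = 0

def enumerate_anonymous(match):
    # Assign a sequential label to each anonymous struct/union, as in the patch
    global anonymous_struct_count, anonymous_union_count
    if "struct" in match.group(1):
        label = "anonymous_struct_%d" % anonymous_struct_count
        anonymous_struct_count += 1
    else:
        label = "anonymous_union_%d" % anonymous_union_count
        anonymous_union_count += 1
    return match.group(1) + " " + label + " {"

def preprocess(text):
    # The preprocessor runs once per header, so the counters start at 0
    # for every file
    global anonymous_struct_count, anonymous_union_count
    anonymous_struct_count = 0
    anonymous_union_count = 0
    return re.sub(r"(?<!typedef\s)(struct|union)\s+?\{",
                  enumerate_anonymous, text, flags=re.S)

# Two different headers each produce a "struct anonymous_struct_0"
a = preprocess("struct {\n    int x;\n};\n")
b = preprocess("struct {\n    int y;\n};\n")
```

Here both `a` and `b` contain `struct anonymous_struct_0 {`, which is the
potential name-collision case across headers raised above.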


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 23:19:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 23:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146316.269190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwC8u-0000qp-27; Wed, 23 Jun 2021 23:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146316.269190; Wed, 23 Jun 2021 23:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwC8t-0000qi-VA; Wed, 23 Jun 2021 23:18:47 +0000
Received: by outflank-mailman (input) for mailman id 146316;
 Wed, 23 Jun 2021 23:18:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5ekh=LR=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lwC8s-0000qc-AG
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 23:18:46 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 998e9687-8905-4f30-8a4d-689758ea0451;
 Wed, 23 Jun 2021 23:18:45 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15NNH8i2002775; Wed, 23 Jun 2021 23:18:43 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 39c2wnhjwd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 23 Jun 2021 23:18:43 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 15NNFv29113613;
 Wed, 23 Jun 2021 23:18:42 GMT
Received: from nam10-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam10lp2107.outbound.protection.outlook.com [104.47.58.107])
 by userp3020.oracle.com with ESMTP id 399tbv65g3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 23 Jun 2021 23:18:41 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by MN2PR10MB4333.namprd10.prod.outlook.com (2603:10b6:208:199::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Wed, 23 Jun
 2021 23:18:40 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4264.019; Wed, 23 Jun 2021
 23:18:39 +0000
Received: from [10.74.102.31] (160.34.88.31) by
 SN7P220CA0029.NAMP220.PROD.OUTLOOK.COM (2603:10b6:806:123::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Wed, 23 Jun 2021 23:18:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 998e9687-8905-4f30-8a4d-689758ea0451
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=NXaHNbeqOPtKkQw9Dha8CY4CHp/tDnQ+pswMmyzLSBE=;
 b=PpEy9HS3Iao0eEuAmfoKeID8w5fl5yun9EwiIWAzhbiA3cVu7sdyUKta8RaRve9ZT5ux
 IJHS8T4vOpm9VCmxmjmkXjoPppVwQf9KpGr3efKQgwr150hbK9s4QH6uVloL2+7rvy+7
 wZtOLtyJaXC8dcqZaD/VwoihW5bhS40b3hjaQbLmVo8LlSOWm04y4j1lVZuL4ba8rAEK
 9f/ObZyIQU6ZasJGc2PSMEOYkNfQEwA9Grg4gVJ8zHg9kv6bJrUH7OrUJEqdsJCzbqXE
 WHAKWDArNDuP5BhD+adWlA0cwucNf9+Q9xWqlp5ivm6lkpl9YeJe88ctpQ0KuSP4ihmm VQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ss2M0SQwbLE+mlwY1AYzlTSJyF675t05Zx9kicGoOAU7jKlGy3V3Kkj3aUmQ9DvLzY9xDO9uA4v8zD+4miZM0uAs20n9+vf2Y7Ov2KEG7jLUtk+1sWMYhQji6lm2UpsMqF4CZGeX0cyV1E9soZO+gruNEY+Xqj5aFwIkhOaMSrhoqM1Q/2iEaCDSVYrsnhKBEmse8RrcZV2nQMLfEaSyRlERIXRvkaT5druzL8PUq8Wii1//dKo4rrB39WbTjQcV0n7xc64utbq2vEmeqPhvPF3l3xATOXi0gYANb97gsqTSL8Z2LPxERYhtmVcvXN0RZMgO+3XzNWoomdLDnn5Cag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NXaHNbeqOPtKkQw9Dha8CY4CHp/tDnQ+pswMmyzLSBE=;
 b=hw/t5+sDyfonvPOL476MLtABlWL+bbyMrvJDCT7JLoQm0K5AwhemBI+6Ly1vqtXTQuvDSgWnt26FY69Sw2rMZ09CDSNBkUG6klNx+GJ15Oao6vZvigiDMCxqMjFCa6C3J3Rihpzytzu53bP7yz4LLhS+NKxJbkaJJiLj1AKXRdtPQNk+5Vzuj7I+hig7+KrzO0/w2WpnaNkrmOM3lWG5Ul202wM1+rh2a7rplHx0QPQFzOo5ERlMM+pzZwhs3bTJFpa0JSD1BYwO3FiTqN8E7sW9BX6qUVXjnJ5oigSXjRUzYvdUpvZNvePhiFklFzjFK++7j17mXvLey6Ces5tbgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NXaHNbeqOPtKkQw9Dha8CY4CHp/tDnQ+pswMmyzLSBE=;
 b=d4MU6BiyF51FSyL51WIPKn3qjVuaRj7XfnovC1bNVYkzbzJU0uf4lA68EEU5qhmlQuxRU5rJutXEWNQ4j/u+juI9Z6nV7iV2cVy9XhUQTvYqqVc97gAHp5n2SNK+lPtHyWG86BMRHhqpO5L7KH4bdFYdENlhzMekDjzXvwBA30k=
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH] xen/events: reset active flag for lateeoi events later
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20210623130913.9405-1-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <9736ccf7-ee9d-ccae-fed6-cfeef78e6a8f@oracle.com>
Date: Wed, 23 Jun 2021 19:18:36 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
In-Reply-To: <20210623130913.9405-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.31]
X-ClientProxiedBy: SN7P220CA0029.NAMP220.PROD.OUTLOOK.COM
 (2603:10b6:806:123::34) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0


On 6/23/21 9:09 AM, Juergen Gross wrote:
> In order to avoid a race condition for user events when changing
> cpu affinity, reset the active flag only when EOI-ing the event.
>
> This is working fine as all user events are lateeoi events. Note that
> lateeoi_ack_mask_dynirq() is not modified as there is no explicit call
> to xen_irq_lateeoi() expected later.
>
> Reported-by: Julien Grall <julien@xen.org>
> Fixes: b6622798bc50b62 ("xen/events: avoid handling the same event on two cpus at the same time")
> Tested-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Wed Jun 23 23:30:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 23:30:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146322.269200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCJn-0002Wt-3r; Wed, 23 Jun 2021 23:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146322.269200; Wed, 23 Jun 2021 23:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCJn-0002Wm-0Z; Wed, 23 Jun 2021 23:30:03 +0000
Received: by outflank-mailman (input) for mailman id 146322;
 Wed, 23 Jun 2021 23:30:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwCJl-0002L6-KW; Wed, 23 Jun 2021 23:30:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwCJl-0007wh-Dc; Wed, 23 Jun 2021 23:30:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwCJl-0003Hd-4P; Wed, 23 Jun 2021 23:30:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwCJl-0003k6-3r; Wed, 23 Jun 2021 23:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OBrRJAMvKXa9pJwxmHeNDsUTuYPCxVp9OWhBqtuv8S8=; b=mCsOJkFYdfb2rl0pVbTcFqEXtf
	XRiIROfBmKoZDalUZGXiZon0P4qonbi6Z1puiakMjXKtvZTU5BC42WIIHs7xOwGMId7jBOUQ7jVPx
	X2dIs+pj16MgPxVEiQ4RMQ/AfpRcZ5/OsBoQkD24TY7f5aG3IXbpyWn8vcEpaZZQsnGQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162999-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162999: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=7471751a4d813a64501a9d7819b1eb405911b310
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Jun 2021 23:30:01 +0000

flight 162999 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162999/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 7471751a4d813a64501a9d7819b1eb405911b310
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   19 days
Failing since        162368  2021-06-04 15:42:59 Z   19 days   43 attempts
Testing same since   162987  2021-06-23 05:52:05 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2284 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 23:34:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 23:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146328.269215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNb-0003f2-Mz; Wed, 23 Jun 2021 23:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146328.269215; Wed, 23 Jun 2021 23:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNb-0003ev-I0; Wed, 23 Jun 2021 23:33:59 +0000
Received: by outflank-mailman (input) for mailman id 146328;
 Wed, 23 Jun 2021 23:33:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zqu9=LR=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwCNa-0003ep-JS
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 23:33:58 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15f43d0b-6ffc-4f4c-82f1-2c4edf39579c;
 Wed, 23 Jun 2021 23:33:58 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F17F16115A;
 Wed, 23 Jun 2021 23:33:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15f43d0b-6ffc-4f4c-82f1-2c4edf39579c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624491237;
	bh=H1lrrEXk9RvRM/wYIpux1mO7nd10iVLfJmxirgIycVs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ghA+BvMHSjrs6yO2uIBxXvl4Yb6LbeEiU0fKMKv3/r4KYVOZBuGpSElkmCE3EoM3W
	 2L25+eB2WFZMs6o4om1vwUdYbTHfYQnMO9vXK31UL3+cVjlRZpt1ajLK3DQidJIMri
	 kRrQ7s4GWAjlzclaScNordE/N52cyyM+ZbVtXAClyWcuI9lFaXKRw4HuDlrACiSQbc
	 JIFoD7W1RJJgow5fIG99STdC9bML5++bJai2U0z+8i9+Tnw+JKPzwWREBltLQJcrwB
	 3OsBgUwYN+2Wc7EzzngbVmMdBjYd1vPzneIc3j9l+GJSM9EMmR+fiY8/WjWtI6VzI7
	 8q+G90p/Bb2dQ==
Date: Wed, 23 Jun 2021 16:33:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6 7/9] docs: Change Makefile and sphinx configuration
 for doxygen
In-Reply-To: <20210510084105.17108-8-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2106231506040.24906@sstabellini-ThinkPad-T480s>
References: <20210510084105.17108-1-luca.fancellu@arm.com> <20210510084105.17108-8-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 May 2021, Luca Fancellu wrote:
> Modify docs/Makefile to call doxygen and generate sphinx
> html documentation given the doxygen XML output.
> 
> Modify docs/conf.py sphinx configuration file to setup
> the breathe extension that works as bridge between
> sphinx and doxygen.
> 
> Add some files to the .gitignore to ignore some
> generated files for doxygen.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  .gitignore    |  6 ++++++
>  docs/Makefile | 42 +++++++++++++++++++++++++++++++++++++++---
>  docs/conf.py  | 48 +++++++++++++++++++++++++++++++++++++++++++++---
>  3 files changed, 90 insertions(+), 6 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 1c2fa1530b..d271e0ce6a 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -58,6 +58,12 @@ docs/man7/
>  docs/man8/
>  docs/pdf/
>  docs/txt/
> +docs/doxygen-output
> +docs/sphinx
> +docs/xen.doxyfile
> +docs/xen.doxyfile.tmp
> +docs/xen-doxygen/doxygen_include.h
> +docs/xen-doxygen/doxygen_include.h.tmp
>  extras/mini-os*
>  install/*
>  stubdom/*-minios-config.mk
> diff --git a/docs/Makefile b/docs/Makefile
> index 8de1efb6f5..2f784c36ce 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
>  
>  PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
>  
> +# Directory in which the doxygen documentation is created
> +# This must be kept in sync with breathe_projects value in conf.py
> +DOXYGEN_OUTPUT = doxygen-output
> +
> +# Doxygen input headers from xen-doxygen/doxy_input.list file
> +DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
> +DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))

I cannot find exactly who is populating doxy_input.list. I can see it is
empty in patch #6. Does it get populated during the build?


> +DOXY_DEPS := xen.doxyfile \
> +			 xen-doxygen/mainpage.md \
> +			 xen-doxygen/doxygen_include.h
> +
>  # Documentation targets
>  $(foreach i,$(MAN_SECTIONS), \
>    $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
> @@ -46,8 +58,28 @@ all: build
>  build: html txt pdf man-pages figs
>  
>  .PHONY: sphinx-html
> -sphinx-html:
> -	sphinx-build -b html . sphinx/html
> +sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
> +ifneq ($(SPHINXBUILD),no)

This check on SPHINXBUILD is new; it wasn't there before. Why do we need
it now? We are not really changing anything in regards to Sphinx, just
adding Doxygen support. Or was it a mistake that it was missing even
before this patch?


> +	$(DOXYGEN) xen.doxyfile
> +	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
> +else
> +	@echo "Sphinx is not installed; skipping sphinx-html documentation."
> +endif
> +
> +xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
> +	@echo "Generating $@"
> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
> +		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
> +	@$(foreach inc,\
> +		$(DOXY_LIST_SOURCES),\
> +		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
> +	)
> +	mv $@.tmp $@
> +
> +xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
> +	@echo "Generating $@"
> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
> +	@mv $@.tmp $@

Is the absolute path required? If not, we can probably get rid of this
generation step and simply have the relative path in
xen-doxygen/doxygen_include.h. I think this could apply to
xen.doxyfile.in above.


>  .PHONY: html
>  html: $(DOC_HTML) html/index.html
> @@ -71,7 +103,11 @@ clean: clean-man-pages
>  	$(MAKE) -C figs clean
>  	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
>  	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
> -	rm -rf html txt pdf sphinx/html
> +	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
> +	rm -f xen.doxyfile
> +	rm -f xen.doxyfile.tmp
> +	rm -f xen-doxygen/doxygen_include.h
> +	rm -f xen-doxygen/doxygen_include.h.tmp
>  
>  .PHONY: distclean
>  distclean: clean
> diff --git a/docs/conf.py b/docs/conf.py
> index 50e41501db..a48de42331 100644
> --- a/docs/conf.py
> +++ b/docs/conf.py
> @@ -13,13 +13,17 @@
>  # add these directories to sys.path here. If the directory is relative to the
>  # documentation root, use os.path.abspath to make it absolute, like shown here.
>  #
> -# import os
> -# import sys
> +import os
> +import sys
>  # sys.path.insert(0, os.path.abspath('.'))
>  
>  
>  # -- Project information -----------------------------------------------------
>  
> +if "XEN_ROOT" not in os.environ:
> +    sys.exit("$XEN_ROOT environment variable undefined.")
> +XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
> +
>  project = u'Xen'
>  copyright = u'2019, The Xen development community'
>  author = u'The Xen development community'
> @@ -35,6 +39,7 @@ try:
>              xen_subver = line.split(u"=")[1].strip()
>          elif line.startswith(u"export XEN_EXTRAVERSION"):
>              xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
> +

spurious change?


>  except:
>      pass
>  finally:
> @@ -44,6 +49,15 @@ finally:
>      else:
>          version = release = u"unknown version"
>  
> +try:
> +    xen_doxygen_output = None
> +
> +    for line in open(u"Makefile"):
> +        if line.startswith(u"DOXYGEN_OUTPUT"):
> +                xen_doxygen_output = line.split(u"=")[1].strip()
> +except:
> +    sys.exit("DOXYGEN_OUTPUT variable undefined.")

This is a bit strange: isn't there a better way to get the
DOXYGEN_OUTPUT variable than reading the Makefile?

At that point I think it would be better to define DOXYGEN_OUTPUT a
second time in conf.py. But maybe it could be passed as an environment
variable?
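The environment-variable suggestion above could look roughly like this in
conf.py — a minimal sketch only; `get_doxygen_output` is a hypothetical
helper name, not something from the patch:

```python
import os


def get_doxygen_output(environ=os.environ):
    """Return the doxygen output directory handed over by the Makefile.

    Raises SystemExit, mirroring the patch's behaviour when the value
    cannot be determined, instead of parsing the Makefile text.
    """
    try:
        return environ["DOXYGEN_OUTPUT"]
    except KeyError:
        raise SystemExit("DOXYGEN_OUTPUT environment variable undefined.")
```

The Makefile would then invoke sphinx-build with DOXYGEN_OUTPUT set in its
environment, much like it already passes XEN_ROOT.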


>  # -- General configuration ---------------------------------------------------
>  
>  # If your documentation needs a minimal Sphinx version, state it here.
> @@ -53,7 +67,8 @@ needs_sphinx = '1.4'
>  # Add any Sphinx extension module names here, as strings. They can be
>  # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
>  # ones.
> -extensions = []
> +# breathe -> extension that integrates doxygen xml output with sphinx
> +extensions = ['breathe']
>  
>  # Add any paths that contain templates here, relative to this directory.
>  templates_path = ['_templates']
> @@ -175,6 +190,33 @@ texinfo_documents = [
>       'Miscellaneous'),
>  ]
>  
> +# -- Options for Breathe extension -------------------------------------------
> +
> +breathe_projects = {
> +    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
> +}
> +breathe_default_project = "Xen"
> +
> +breathe_domain_by_extension = {
> +    "h": "c",
> +    "c": "c",
> +}
> +breathe_separate_member_pages = True
> +breathe_show_enumvalue_initializer = True
> +breathe_show_define_initializer = True
> +
> +# Qualifiers to a function are causing Sphinx/Breathe to warn about
> +# Error when parsing function declaration and more.  This is a list
> +# of strings that the parser additionally should accept as
> +# attributes.
> +cpp_id_attributes = [
> +    '__syscall', '__deprecated', '__may_alias',
> +    '__used', '__unused', '__weak',
> +    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
> +    '__subsystem',

Should we also have any of following:

__packed
__init
__attribute__
__aligned__

in the list? In any case, we don't have to add them right now; we could
add them later as we expand Doxygen coverage, if they become needed.
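If those qualifiers were wanted, the change would just extend the list from
the hunk above — a sketch, with the extra names being the ones suggested
here rather than anything confirmed as needed:

```python
# List from the patch hunk, plus the additionally suggested qualifiers;
# Sphinx/Breathe will then accept these as attributes too instead of
# warning while parsing function declarations.
cpp_id_attributes = [
    '__syscall', '__deprecated', '__may_alias',
    '__used', '__unused', '__weak',
    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
    '__subsystem',
    # Suggested additions:
    '__packed', '__init', '__attribute__', '__aligned__',
]
c_id_attributes = cpp_id_attributes
```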


> +]
> +c_id_attributes = cpp_id_attributes
> +
>  
>  # -- Options for Epub output -------------------------------------------------



From xen-devel-bounces@lists.xenproject.org Wed Jun 23 23:34:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 23:34:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146329.269226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNg-0003wy-TA; Wed, 23 Jun 2021 23:34:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146329.269226; Wed, 23 Jun 2021 23:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNg-0003wq-QE; Wed, 23 Jun 2021 23:34:04 +0000
Received: by outflank-mailman (input) for mailman id 146329;
 Wed, 23 Jun 2021 23:34:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zqu9=LR=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwCNf-0003w1-7p
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 23:34:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 627905af-7947-41b8-88ad-7c95b769bd0b;
 Wed, 23 Jun 2021 23:34:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 5BC1C61166;
 Wed, 23 Jun 2021 23:34:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 627905af-7947-41b8-88ad-7c95b769bd0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624491241;
	bh=7tSNNITUacF15FFuRbyhihpM/sSFWQlAjhsa/yA1V5E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HzaYHrcJHqYRJI7JSEUHpDL9vWJFOjoDBdyujAvwekbSeQplbmeP4SAubl45n45o6
	 QjGZJMqbrB8XBSvHvC+tmB60QMRdMFQmTFzPY11U35BrMomIbucPp+GivUcwPK8yJv
	 8z40Jcys+D+tmntoVJPwdt5YkowgFpAOmptPlmvEsm1gViR9JYWaHICqi3typYwFf2
	 K5jIdtebwMXSO9bN733b/ndjmCPaCqnG99De4xIGXHh97vOBy/R5VWcdJUnPonXvvL
	 I0YMnBMEvvw8/zibLCjl0bN2ZWYnu33Vt66zD++qUn/S4LAl+Gb+HQ27c0FWYty+oV
	 KqdV7k9qKMK5w==
Date: Wed, 23 Jun 2021 16:34:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6 8/9] docs: hypercalls sphinx skeleton for generated
 html
In-Reply-To: <20210510084105.17108-9-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2106231523210.24906@sstabellini-ThinkPad-T480s>
References: <20210510084105.17108-1-luca.fancellu@arm.com> <20210510084105.17108-9-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 May 2021, Luca Fancellu wrote:
> Create a skeleton for the documentation about hypercalls
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> v6 changes:
> - Now every platform has the same sections in .rst files
> ---
>  .gitignore                             |  1 +
>  docs/Makefile                          |  4 ++++
>  docs/hypercall-interfaces/arm32.rst    | 32 ++++++++++++++++++++++++++
>  docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
>  docs/hypercall-interfaces/index.rst.in |  7 ++++++
>  docs/hypercall-interfaces/x86_64.rst   | 32 ++++++++++++++++++++++++++
>  docs/index.rst                         |  8 +++++++
>  7 files changed, 116 insertions(+)
>  create mode 100644 docs/hypercall-interfaces/arm32.rst
>  create mode 100644 docs/hypercall-interfaces/arm64.rst
>  create mode 100644 docs/hypercall-interfaces/index.rst.in
>  create mode 100644 docs/hypercall-interfaces/x86_64.rst
> 
> diff --git a/.gitignore b/.gitignore
> index d271e0ce6a..a9aab120ae 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -64,6 +64,7 @@ docs/xen.doxyfile
>  docs/xen.doxyfile.tmp
>  docs/xen-doxygen/doxygen_include.h
>  docs/xen-doxygen/doxygen_include.h.tmp
> +docs/hypercall-interfaces/index.rst
>  extras/mini-os*
>  install/*
>  stubdom/*-minios-config.mk
> diff --git a/docs/Makefile b/docs/Makefile
> index 2f784c36ce..b02c3dfb79 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
>  sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
>  ifneq ($(SPHINXBUILD),no)
>  	$(DOXYGEN) xen.doxyfile
> +	@echo "Generating hypercall-interfaces/index.rst"
> +	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
> +		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst

I take it that this means we are going to generate docs only for the
architecture that we are building for? So if we build for x86, then the
docs are for x86 (no arm32 or arm64 docs). Is that right?

Is that because Doxygen relies somehow on the compiler to extract data?
I am asking because if Doxygen doesn't rely on the compiler, then it
could probably generate the docs for all architectures in one go?
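If Doxygen indeed did not rely on the compiler, the index generation in the
hunk above could, hypothetically, list every architecture in one go instead
of sed-substituting a single @XEN_TARGET_ARCH@ — a sketch under that
assumption:

```shell
#!/bin/sh
# Hypothetical sketch: emit an index.rst whose toctree covers all
# architectures, rather than the one substituted at build time.
gen_index() {
    printf 'Hypercall Interfaces\n====================\n\n.. toctree::\n'
    for arch in arm32 arm64 x86_64; do
        printf '   %s\n' "$arch"
    done
}

gen_index
```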



>  	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
>  else
>  	@echo "Sphinx is not installed; skipping sphinx-html documentation."
> @@ -108,6 +111,7 @@ clean: clean-man-pages
>  	rm -f xen.doxyfile.tmp
>  	rm -f xen-doxygen/doxygen_include.h
>  	rm -f xen-doxygen/doxygen_include.h.tmp
> +	rm -f hypercall-interfaces/index.rst
>  
>  .PHONY: distclean
>  distclean: clean
> diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
> new file mode 100644
> index 0000000000..6762d9fc7c
> --- /dev/null
> +++ b/docs/hypercall-interfaces/arm32.rst
> @@ -0,0 +1,32 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - arm32
> +============================
> +
> +Starting points
> +---------------
> +.. toctree::
> +   :maxdepth: 2
> +
> +
> +
> +Functions
> +---------
> +
> +
> +Structs
> +-------
> +
> +
> +Enums and sets of #defines
> +--------------------------
> +
> +
> +Typedefs
> +--------
> +
> +
> +Enum values and individual #defines
> +-----------------------------------
> +
> +
> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
> new file mode 100644
> index 0000000000..5e701a2adc
> --- /dev/null
> +++ b/docs/hypercall-interfaces/arm64.rst
> @@ -0,0 +1,32 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - arm64
> +============================
> +
> +Starting points
> +---------------
> +.. toctree::
> +   :maxdepth: 2
> +
> +
> +
> +Functions
> +---------
> +
> +
> +Structs
> +-------
> +
> +
> +Enums and sets of #defines
> +--------------------------
> +
> +
> +Typedefs
> +--------
> +
> +
> +Enum values and individual #defines
> +-----------------------------------
> +
> +
> diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
> new file mode 100644
> index 0000000000..e4dcc5db8d
> --- /dev/null
> +++ b/docs/hypercall-interfaces/index.rst.in
> @@ -0,0 +1,7 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces
> +====================
> +
> +.. toctree::
> +   @XEN_TARGET_ARCH@
> diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
> new file mode 100644
> index 0000000000..59e948900c
> --- /dev/null
> +++ b/docs/hypercall-interfaces/x86_64.rst
> @@ -0,0 +1,32 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - x86_64
> +=============================
> +
> +Starting points
> +---------------
> +.. toctree::
> +   :maxdepth: 2
> +
> +
> +
> +Functions
> +---------
> +
> +
> +Structs
> +-------
> +
> +
> +Enums and sets of #defines
> +--------------------------
> +
> +
> +Typedefs
> +--------
> +
> +
> +Enum values and individual #defines
> +-----------------------------------
> +
> +
> diff --git a/docs/index.rst b/docs/index.rst
> index b75487a05d..52226a42d8 100644
> --- a/docs/index.rst
> +++ b/docs/index.rst
> @@ -53,6 +53,14 @@ kind of development environment.
>     hypervisor-guide/index
>  
>  
> +Hypercall Interfaces documentation
> +----------------------------------
> +
> +.. toctree::
> +   :maxdepth: 2
> +
> +   hypercall-interfaces/index
> +
>  Miscellanea
>  -----------
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 23 23:34:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Jun 2021 23:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146330.269237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNj-0004Fk-Aq; Wed, 23 Jun 2021 23:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146330.269237; Wed, 23 Jun 2021 23:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwCNj-0004Fd-7E; Wed, 23 Jun 2021 23:34:07 +0000
Received: by outflank-mailman (input) for mailman id 146330;
 Wed, 23 Jun 2021 23:34:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zqu9=LR=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwCNi-0004Ex-Lq
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 23:34:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 712ef5b7-b9b6-4454-9808-033235ac36e5;
 Wed, 23 Jun 2021 23:34:05 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id EFCAB61164;
 Wed, 23 Jun 2021 23:34:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 712ef5b7-b9b6-4454-9808-033235ac36e5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624491245;
	bh=1Lob8hvsFJNLnpC4tzWVNTer2x/N7GOqdLaCxLWRHIU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HHM9sJoYuCF4jKapkmfVph+pmHAwMn1NMXvmHOMz+kiOJGWuFwMZSVd40x0intJvB
	 8I5NIXodoX34ckvzP2NPNT70N2UdWBvlKUaWFtRlqy8iChQjMRKPgpGHKgxUFNdPKq
	 hJ1jx1V5eMHyMk3Up7csaiOiammjMPtB29hEEb/k4I2MnZJmNgs4T8cv2Hua53oOOe
	 NL8gavBqU2yWjmhwv15SDOlCHgVWDF2I1bxqrgHvtDpG4Dm1SqTatchXRZ8BAqmaom
	 fEgOfHmLRKzZDK8YLQmTHPnzklzi5HjqKOViHHKT/uqkWEYmGRv48YlJUr6bnDWdo4
	 3Sq9HZCEcKdcQ==
Date: Wed, 23 Jun 2021 16:34:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6 9/9] docs/doxygen: doxygen documentation for
 grant_table.h
In-Reply-To: <20210510084105.17108-10-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2106231530320.24906@sstabellini-ThinkPad-T480s>
References: <20210510084105.17108-1-luca.fancellu@arm.com> <20210510084105.17108-10-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 May 2021, Luca Fancellu wrote:
> Modifications to include/public/grant_table.h:
> 
> 1) Add doxygen tags to:
>  - create the Grant Tables section
>  - include variables in the generated documentation
>  - enclose comment sections that are indented using
>    spaces with @keepindent/@endkeepindent, to keep
>    the indentation
> 2) Add .rst file for grant table for Arm64

Why only arm64?


> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> v6 changes:
> - Fix misaligned comment
> - Moved comments to make them display in the docs
> - Included more documentation in the docs
>   (see output here: https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/common/grant_tables.html)

It looks much, much better. All the info we care about seems to be there.
The only thing I noticed missing, and that might be good to keep, is the
small comment about HYPERVISOR_grant_table_op:

/* ` enum neg_errnoval
 * ` HYPERVISOR_grant_table_op(enum grant_table_op cmd,
 * `                           void *args,
 * `                           unsigned int count)
 * `
 *
 * @args points to an array of a per-command data structure. The array
 * has @count members

All the changes look good to me.



> v5 changes:
> - Move GNTCOPY_* define next to the flags field
> v4 changes:
> - Used @keepindent/@endkeepindent doxygen commands
>   to keep text with spaces indentation.
> - drop changes to grant_entry_v1 comment, it will
>   be changed and included in the docs in a future patch
> - Move docs .rst to "common" folder
> v3 changes:
> - removed tags to skip anonymous union/struct
> - moved back comment pointed out by Jan
> - moved down defines related to struct gnttab_copy
>   as pointed out by Jan
> v2 changes:
> - Revert back to anonymous union/struct
> - add doxygen tags to skip anonymous union/struct
> ---
>  docs/hypercall-interfaces/arm64.rst           |   1 +
>  .../common/grant_tables.rst                   |   9 +
>  docs/xen-doxygen/doxy_input.list              |   1 +
>  xen/include/public/grant_table.h              | 387 +++++++++++-------
>  4 files changed, 245 insertions(+), 153 deletions(-)
>  create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
> 
> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
> index 5e701a2adc..cb4c0d13de 100644
> --- a/docs/hypercall-interfaces/arm64.rst
> +++ b/docs/hypercall-interfaces/arm64.rst
> @@ -8,6 +8,7 @@ Starting points
>  .. toctree::
>     :maxdepth: 2
>  
> +   common/grant_tables
>  
>  
>  Functions
> diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
> new file mode 100644
> index 0000000000..b8a1ef8759
> --- /dev/null
> +++ b/docs/hypercall-interfaces/common/grant_tables.rst
> @@ -0,0 +1,9 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Grant Tables
> +============
> +
> +.. doxygengroup:: grant_table
> +   :project: Xen
> +   :members:
> +   :undoc-members:
> diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
> index e69de29bb2..233d692fa7 100644
> --- a/docs/xen-doxygen/doxy_input.list
> +++ b/docs/xen-doxygen/doxy_input.list
> @@ -0,0 +1 @@
> +xen/include/public/grant_table.h
> diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
> index 84b1d26b36..dfa5155927 100644
> --- a/xen/include/public/grant_table.h
> +++ b/xen/include/public/grant_table.h
> @@ -25,15 +25,19 @@
>   * Copyright (c) 2004, K A Fraser
>   */
>  
> +/**
> + * @file
> + * @brief Interface for granting foreign access to page frames, and receiving
> + * page-ownership transfers.
> + */
> +
>  #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
>  #define __XEN_PUBLIC_GRANT_TABLE_H__
>  
>  #include "xen.h"
>  
> -/*
> - * `incontents 150 gnttab Grant Tables
> - *
> - * Xen's grant tables provide a generic mechanism to memory sharing
> +/**
> + * @brief Xen's grant tables provide a generic mechanism for sharing memory
>   * between domains. This shared memory interface underpins the split
>   * device drivers for block and network IO.
>   *
> @@ -51,13 +55,13 @@
>   * know the real machine address of a page it is sharing. This makes
>   * it possible to share memory correctly with domains running in
>   * fully virtualised memory.
> - */
> -
> -/***********************************
> + *
>   * GRANT TABLE REPRESENTATION
> - */
> -
> -/* Some rough guidelines on accessing and updating grant-table entries
> + *
> + * A grant table comprises a packed array of grant entries in one or more
> + * page frames shared between Xen and a guest.
> + *
> + * Some rough guidelines on accessing and updating grant-table entries
>   * in a concurrency-safe manner. For more information, Linux contains a
>   * reference implementation for guest OSes (drivers/xen/grant_table.c, see
>   * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
> @@ -66,6 +70,7 @@
>   *     compiler barrier will still be required.
>   *
>   * Introducing a valid entry into the grant table:
> + * @keepindent
>   *  1. Write ent->domid.
>   *  2. Write ent->frame:
>   *      GTF_permit_access:   Frame to which access is permitted.
> @@ -73,20 +78,25 @@
>   *                           frame, or zero if none.
>   *  3. Write memory barrier (WMB).
>   *  4. Write ent->flags, inc. valid type.
> + * @endkeepindent
>   *
>   * Invalidating an unused GTF_permit_access entry:
> + * @keepindent
>   *  1. flags = ent->flags.
>   *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>   *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>   *  NB. No need for WMB as reuse of entry is control-dependent on success of
>   *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
> + * @endkeepindent
>   *
>   * Invalidating an in-use GTF_permit_access entry:
> + *
>   *  This cannot be done directly. Request assistance from the domain controller
>   *  which can set a timeout on the use of a grant entry and take necessary
>   *  action. (NB. This is not yet implemented!).
>   *
>   * Invalidating an unused GTF_accept_transfer entry:
> + * @keepindent
>   *  1. flags = ent->flags.
>   *  2. Observe that !(flags & GTF_transfer_committed). [*]
>   *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
> @@ -97,29 +107,32 @@
>   *      transferred frame is written. It is safe for the guest to spin waiting
>   *      for this to occur (detect by observing GTF_transfer_completed in
>   *      ent->flags).
> + * @endkeepindent
>   *
>   * Invalidating a committed GTF_accept_transfer entry:
>   *  1. Wait for (ent->flags & GTF_transfer_completed).
>   *
>   * Changing a GTF_permit_access from writable to read-only:
> + *
>   *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>   *
>   * Changing a GTF_permit_access from read-only to writable:
> + *
>   *  Use SMP-safe bit-setting instruction.
> + *
> + * Data structure fields or defines described below have the following tags:
> + * * [XEN]: This field is written by Xen and read by the sharing guest.
> + * * [GST]: This field is written by the guest and read by Xen.
> + *
> + * @addtogroup grant_table Grant Tables
> + * @{
>   */
>  
> -/*
> +/**
>   * Reference to a grant entry in a specified domain's grant table.
>   */
>  typedef uint32_t grant_ref_t;
>  
> -/*
> - * A grant table comprises a packed array of grant entries in one or more
> - * page frames shared between Xen and a guest.
> - * [XEN]: This field is written by Xen and read by the sharing guest.
> - * [GST]: This field is written by the guest and read by Xen.
> - */
> -
>  /*
>   * Version 1 of the grant table entry structure is maintained purely
>   * for backwards compatibility.  New guests should use version 2.
> @@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
>  #define grant_entry_v1_t grant_entry_t
>  #endif
>  struct grant_entry_v1 {
> -    /* GTF_xxx: various type and flag information.  [XEN,GST] */
> +    /** GTF_xxx: various type and flag information.  [XEN,GST] */
>      uint16_t flags;
> -    /* The domain being granted foreign privileges. [GST] */
> +    /** The domain being granted foreign privileges. [GST] */
>      domid_t  domid;
> -    /*
> +    /**
> +     * @keepindent
>       * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
>       * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
>       * GTF_transfer_completed: MFN whose ownership transferred by @domid
>       *                         (non-translated guests only). [XEN]
> +     * @endkeepindent
>       */
>      uint32_t frame;
>  };
> @@ -150,60 +165,99 @@ typedef struct grant_entry_v1 grant_entry_v1_t;
>  #define GNTTAB_RESERVED_CONSOLE        0
>  #define GNTTAB_RESERVED_XENSTORE       1
>  
> -/*
> - * Type of grant entry.
> - *  GTF_invalid: This grant entry grants no privileges.
> - *  GTF_permit_access: Allow @domid to map/access @frame.
> - *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
> - *                       to this guest. Xen writes the page number to @frame.
> - *  GTF_transitive: Allow @domid to transitively access a subrange of
> - *                  @trans_grant in @trans_domid.  No mappings are allowed.
> - */
> +/** This type of grant entry grants no privileges. */
>  #define GTF_invalid         (0U<<0)
> +
> +/** This type of grant entry allows \@domid to map/access \@frame. */
>  #define GTF_permit_access   (1U<<0)
> +
> +/**
> + * This type of grant entry allows \@domid to transfer ownership of one page
> + * frame to this guest. Xen writes the page number to \@frame.
> + */
>  #define GTF_accept_transfer (2U<<0)
> +
> +/**
> + * This type of grant entry allows \@domid to transitively access a subrange of
> + * \@trans_grant in \@trans_domid.  No mappings are allowed.
> + */
>  #define GTF_transitive      (3U<<0)
> +
>  #define GTF_type_mask       (3U<<0)
>  
> -/*
> - * Subflags for GTF_permit_access and GTF_transitive.
> - *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
> - *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
> - *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
> - * Further subflags for GTF_permit_access only.
> - *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags to be used for
> - *                             mappings of the grant [GST]
> - *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
> - *                will only be allowed to copy from the grant, and not
> - *                map it. [GST]
> +/**
> + * @def GTF_readonly
> + * Subflag for GTF_permit_access and GTF_transitive: Restrict \@domid to
> + * read-only mappings and accesses. [GST]
>   */
>  #define _GTF_readonly       (2)
>  #define GTF_readonly        (1U<<_GTF_readonly)
> +
> +/**
> + * @def GTF_reading
> + * Subflag for GTF_permit_access and GTF_transitive: Grant entry is currently
> + * mapped for reading by \@domid. [XEN]
> + */
>  #define _GTF_reading        (3)
>  #define GTF_reading         (1U<<_GTF_reading)
> +
> +/**
> + * @def GTF_writing
> + * Subflag for GTF_permit_access and GTF_transitive: Grant entry is currently
> + * mapped for writing by \@domid. [XEN]
> + */
>  #define _GTF_writing        (4)
>  #define GTF_writing         (1U<<_GTF_writing)
> +
> +/**
> + * @def GTF_PWT
> + * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
> + * for mappings of the grant [GST]
> + */
>  #define _GTF_PWT            (5)
>  #define GTF_PWT             (1U<<_GTF_PWT)
> +
> +/**
> + * @def GTF_PCD
> + * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
> + * for mappings of the grant [GST]
> + */
>  #define _GTF_PCD            (6)
>  #define GTF_PCD             (1U<<_GTF_PCD)
> +
> +/**
> + * @def GTF_PAT
> + * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
> + * for mappings of the grant [GST]
> + */
>  #define _GTF_PAT            (7)
>  #define GTF_PAT             (1U<<_GTF_PAT)
> +
> +/**
> + * @def GTF_sub_page
> + * Subflag for GTF_permit_access only: Grant access to only a subrange of the
> + * page. \@domid will only be allowed to copy from the grant, and not map it.
> + * [GST]
> + */
>  #define _GTF_sub_page       (8)
>  #define GTF_sub_page        (1U<<_GTF_sub_page)
>  
> -/*
> - * Subflags for GTF_accept_transfer:
> - *  GTF_transfer_committed: Xen sets this flag to indicate that it is committed
> - *      to transferring ownership of a page frame. When a guest sees this flag
> - *      it must /not/ modify the grant entry until GTF_transfer_completed is
> - *      set by Xen.
> - *  GTF_transfer_completed: It is safe for the guest to spin-wait on this flag
> - *      after reading GTF_transfer_committed. Xen will always write the frame
> - *      address, followed by ORing this flag, in a timely manner.
> +/**
> + * @def GTF_transfer_committed
> + * Subflag for GTF_accept_transfer: Xen sets this flag to indicate that it is
> + * committed to transferring ownership of a page frame. When a guest sees this
> + * flag it must /not/ modify the grant entry until GTF_transfer_completed is
> + * set by Xen.
>   */
>  #define _GTF_transfer_committed (2)
>  #define GTF_transfer_committed  (1U<<_GTF_transfer_committed)
> +
> +/**
> + * @def GTF_transfer_completed
> + * Subflag for GTF_accept_transfer: It is safe for the guest to spin-wait on
> + * this flag after reading GTF_transfer_committed. Xen will always write the
> + * frame address, followed by ORing this flag, in a timely manner.
> + */
>  #define _GTF_transfer_completed (3)
>  #define GTF_transfer_completed  (1U<<_GTF_transfer_completed)
>  
> @@ -228,17 +282,17 @@ struct grant_entry_header {
>  };
>  typedef struct grant_entry_header grant_entry_header_t;
>  
> -/*
> +/**
>   * Version 2 of the grant entry structure.
>   */
>  union grant_entry_v2 {
>      grant_entry_header_t hdr;
>  
> -    /*
> +    /**
>       * This member is used for V1-style full page grants, where either:
>       *
> -     * -- hdr.type is GTF_accept_transfer, or
> -     * -- hdr.type is GTF_permit_access and GTF_sub_page is not set.
> +     * * hdr.type is GTF_accept_transfer, or
> +     * * hdr.type is GTF_permit_access and GTF_sub_page is not set.
>       *
>       * In that case, the frame field has the same semantics as the
>       * field of the same name in the V1 entry structure.
> @@ -249,10 +303,10 @@ union grant_entry_v2 {
>          uint64_t frame;
>      } full_page;
>  
> -    /*
> +    /**
>       * If the grant type is GTF_grant_access and GTF_sub_page is set,
> -     * @domid is allowed to access bytes [@page_off,@page_off+@length)
> -     * in frame @frame.
> +     * \@domid is allowed to access bytes [\@page_off,\@page_off+\@length)
> +     * in frame \@frame.
>       */
>      struct {
>          grant_entry_header_t hdr;
> @@ -261,9 +315,9 @@ union grant_entry_v2 {
>          uint64_t frame;
>      } sub_page;
>  
> -    /*
> -     * If the grant is GTF_transitive, @domid is allowed to use the
> -     * grant @gref in domain @trans_domid, as if it was the local
> +    /**
> +     * If the grant is GTF_transitive, \@domid is allowed to use the
> +     * grant \@gref in domain \@trans_domid, as if it was the local
>       * domain.  Obviously, the transitive access must be compatible
>       * with the original grant.
>       *
> @@ -277,7 +331,7 @@ union grant_entry_v2 {
>          grant_ref_t gref;
>      } transitive;
>  
> -    uint32_t __spacer[4]; /* Pad to a power of two */
> +    uint32_t __spacer[4]; /**< Pad to a power of two */
>  };
>  typedef union grant_entry_v2 grant_entry_v2_t;
>  
> @@ -317,24 +371,25 @@ typedef uint16_t grant_status_t;
>  #endif /* __XEN_INTERFACE_VERSION__ */
>  /* ` } */
>  
> -/*
> +/**
>   * Handle to track a mapping created via a grant reference.
>   */
>  typedef uint32_t grant_handle_t;
>  
> -/*
> - * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
> - * by devices and/or host CPUs. If successful, <handle> is a tracking number
> - * that must be presented later to destroy the mapping(s). On error, <status>
> +/**
> + * GNTTABOP_map_grant_ref: Map the grant entry (\@dom,\@ref) for access
> + * by devices and/or host CPUs. If successful, \@handle is a tracking number
> + * that must be presented later to destroy the mapping(s). On error, \@status
>   * is a negative status code.
> + *
>   * NOTES:
> - *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
> + *  1. If GNTMAP_device_map is specified then \@dev_bus_addr is the address
>   *     via which I/O devices may access the granted frame.
>   *  2. If GNTMAP_host_map is specified then a mapping will be added at
>   *     either a host virtual address in the current address space, or at
>   *     a PTE at the specified machine address.  The type of mapping to
>   *     perform is selected through the GNTMAP_contains_pte flag, and the
> - *     address is specified in <host_addr>.
> + *     address is specified in \@host_addr.
>   *  3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a
>   *     host mapping is destroyed by other means then it is *NOT* guaranteed
>   *     to be accounted to the correct grant reference!
> @@ -342,25 +397,26 @@ typedef uint32_t grant_handle_t;
>  struct gnttab_map_grant_ref {
>      /* IN parameters. */
>      uint64_t host_addr;
> -    uint32_t flags;               /* GNTMAP_* */
> +    uint32_t flags;               /**< GNTMAP_* */
>      grant_ref_t ref;
>      domid_t  dom;
>      /* OUT parameters. */
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>      grant_handle_t handle;
>      uint64_t dev_bus_addr;
>  };
>  typedef struct gnttab_map_grant_ref gnttab_map_grant_ref_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_map_grant_ref_t);
>  
> -/*
> +/**
>   * GNTTABOP_unmap_grant_ref: Destroy one or more grant-reference mappings
> - * tracked by <handle>. If <host_addr> or <dev_bus_addr> is zero, that
> + * tracked by \@handle. If \@host_addr or \@dev_bus_addr is zero, that
>   * field is ignored. If non-zero, they must refer to a device/host mapping
> - * that is tracked by <handle>
> + * that is tracked by \@handle.
> + *
>   * NOTES:
>   *  1. The call may fail in an undefined manner if either mapping is not
> - *     tracked by <handle>.
> + *     tracked by \@handle.
>   *  3. After executing a batch of unmaps, it is guaranteed that no stale
>   *     mappings will remain in the device or host TLBs.
>   */
> @@ -370,18 +426,19 @@ struct gnttab_unmap_grant_ref {
>      uint64_t dev_bus_addr;
>      grant_handle_t handle;
>      /* OUT parameters. */
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>  };
>  typedef struct gnttab_unmap_grant_ref gnttab_unmap_grant_ref_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t);
>  
> -/*
> - * GNTTABOP_setup_table: Set up a grant table for <dom> comprising at least
> - * <nr_frames> pages. The frame addresses are written to the <frame_list>.
> - * Only <nr_frames> addresses are written, even if the table is larger.
> +/**
> + * GNTTABOP_setup_table: Set up a grant table for \@dom comprising at least
> + * \@nr_frames pages. The frame addresses are written to the \@frame_list.
> + * Only \@nr_frames addresses are written, even if the table is larger.
> + *
>   * NOTES:
> - *  1. <dom> may be specified as DOMID_SELF.
> - *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + *  1. \@dom may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
>   *  3. Xen may not support more than a single grant-table page per domain.
>   */
>  struct gnttab_setup_table {
> @@ -389,7 +446,7 @@ struct gnttab_setup_table {
>      domid_t  dom;
>      uint32_t nr_frames;
>      /* OUT parameters. */
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>  #if __XEN_INTERFACE_VERSION__ < 0x00040300
>      XEN_GUEST_HANDLE(ulong) frame_list;
>  #else
> @@ -399,7 +456,7 @@ struct gnttab_setup_table {
>  typedef struct gnttab_setup_table gnttab_setup_table_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_t);
>  
> -/*
> +/**
>   * GNTTABOP_dump_table: Dump the contents of the grant table to the
>   * xen console. Debugging use only.
>   */
> @@ -407,14 +464,14 @@ struct gnttab_dump_table {
>      /* IN parameters. */
>      domid_t dom;
>      /* OUT parameters. */
> -    int16_t status;               /* => enum grant_status */
> +    int16_t status;               /**< GNTST_* status code */
>  };
>  typedef struct gnttab_dump_table gnttab_dump_table_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_dump_table_t);
>  
> -/*
> - * GNTTABOP_transfer: Transfer <frame> to a foreign domain. The foreign domain
> - * has previously registered its interest in the transfer via <domid, ref>.
> +/**
> + * GNTTABOP_transfer: Transfer \@frame to a foreign domain. The foreign domain
> + * has previously registered its interest in the transfer via \@domid, \@ref.
>   *
>   * Note that, even if the transfer fails, the specified page no longer belongs
>   * to the calling domain *unless* the error is GNTST_bad_page.
> @@ -427,13 +484,13 @@ struct gnttab_transfer {
>      domid_t       domid;
>      grant_ref_t   ref;
>      /* OUT parameters. */
> -    int16_t       status;
> +    int16_t       status;               /**< GNTST_* status code */
>  };
>  typedef struct gnttab_transfer gnttab_transfer_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>  
>  
> -/*
> +/**
>   * GNTTABOP_copy: Hypervisor based copy
>   * source and destinations can be eithers MFNs or, for foreign domains,
>   * grant references. the foreign domain has to grant read/write access
> @@ -451,11 +508,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>   * bytes to be copied.
>   */
>  
> -#define _GNTCOPY_source_gref      (0)
> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> -#define _GNTCOPY_dest_gref        (1)
> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
> -
>  struct gnttab_copy {
>      /* IN parameters. */
>      struct gnttab_copy_ptr {
> @@ -467,19 +519,24 @@ struct gnttab_copy {
>          uint16_t offset;
>      } source, dest;
>      uint16_t      len;
> -    uint16_t      flags;          /* GNTCOPY_* */
> +    uint16_t      flags;          /**< GNTCOPY_* */
> +#define _GNTCOPY_source_gref      (0)
> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> +#define _GNTCOPY_dest_gref        (1)
> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>      /* OUT parameters. */
>      int16_t       status;
>  };
>  typedef struct gnttab_copy  gnttab_copy_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_copy_t);
>  
> -/*
> +/**
>   * GNTTABOP_query_size: Query the current and maximum sizes of the shared
>   * grant table.
> + *
>   * NOTES:
> - *  1. <dom> may be specified as DOMID_SELF.
> - *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + *  1. \@dom may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
>   */
>  struct gnttab_query_size {
>      /* IN parameters. */
> @@ -487,19 +544,20 @@ struct gnttab_query_size {
>      /* OUT parameters. */
>      uint32_t nr_frames;
>      uint32_t max_nr_frames;
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>  };
>  typedef struct gnttab_query_size gnttab_query_size_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_t);
>  
> -/*
> +/**
>   * GNTTABOP_unmap_and_replace: Destroy one or more grant-reference mappings
> - * tracked by <handle> but atomically replace the page table entry with one
> - * pointing to the machine address under <new_addr>.  <new_addr> will be
> + * tracked by \@handle but atomically replace the page table entry with one
> + * pointing to the machine address under \@new_addr. \@new_addr will be
>   * redirected to the null entry.
> + *
>   * NOTES:
>   *  1. The call may fail in an undefined manner if either mapping is not
> - *     tracked by <handle>.
> + *     tracked by \@handle.
>   *  2. After executing a batch of unmaps, it is guaranteed that no stale
>   *     mappings will remain in the device or host TLBs.
>   */
> @@ -509,13 +567,13 @@ struct gnttab_unmap_and_replace {
>      uint64_t new_addr;
>      grant_handle_t handle;
>      /* OUT parameters. */
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>  };
>  typedef struct gnttab_unmap_and_replace gnttab_unmap_and_replace_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t);
>  
>  #if __XEN_INTERFACE_VERSION__ >= 0x0003020a
> -/*
> +/**
>   * GNTTABOP_set_version: Request a particular version of the grant
>   * table shared table structure.  This operation may be used to toggle
>   * between different versions, but must be performed while no grants
> @@ -529,32 +587,33 @@ typedef struct gnttab_set_version gnttab_set_version_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_set_version_t);
>  
>  
> -/*
> +/**
>   * GNTTABOP_get_status_frames: Get the list of frames used to store grant
> - * status for <dom>. In grant format version 2, the status is separated
> + * status for \@dom. In grant format version 2, the status is separated
>   * from the other shared grant fields to allow more efficient synchronization
>   * using barriers instead of atomic cmpexch operations.
> - * <nr_frames> specify the size of vector <frame_list>.
> - * The frame addresses are returned in the <frame_list>.
> - * Only <nr_frames> addresses are returned, even if the table is larger.
> + * \@nr_frames specifies the size of the vector \@frame_list.
> + * The frame addresses are returned in \@frame_list.
> + * Only \@nr_frames addresses are returned, even if the table is larger.
> + *
>   * NOTES:
> - *  1. <dom> may be specified as DOMID_SELF.
> - *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + *  1. \@dom may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
>   */
>  struct gnttab_get_status_frames {
>      /* IN parameters. */
>      uint32_t nr_frames;
>      domid_t  dom;
>      /* OUT parameters. */
> -    int16_t  status;              /* => enum grant_status */
> +    int16_t  status;              /**< GNTST_* status code */
>      XEN_GUEST_HANDLE(uint64_t) frame_list;
>  };
>  typedef struct gnttab_get_status_frames gnttab_get_status_frames_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_get_status_frames_t);
>  
> -/*
> +/**
>   * GNTTABOP_get_version: Get the grant table version which is in
> - * effect for domain <dom>.
> + * effect for domain \@dom.
>   */
>  struct gnttab_get_version {
>      /* IN parameters */
> @@ -566,7 +625,7 @@ struct gnttab_get_version {
>  typedef struct gnttab_get_version gnttab_get_version_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_get_version_t);
>  
> -/*
> +/**
>   * GNTTABOP_swap_grant_ref: Swap the contents of two grant entries.
>   */
>  struct gnttab_swap_grant_ref {
> @@ -574,12 +633,12 @@ struct gnttab_swap_grant_ref {
>      grant_ref_t ref_a;
>      grant_ref_t ref_b;
>      /* OUT parameters */
> -    int16_t status;             /* => enum grant_status */
> +    int16_t status;             /**< GNTST_* status code */
>  };
>  typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
>  
> -/*
> +/**
>   * Issue one or more cache maintenance operations on a portion of a
>   * page granted to the calling domain by a foreign domain.
>   */
> @@ -588,8 +647,8 @@ struct gnttab_cache_flush {
>          uint64_t dev_bus_addr;
>          grant_ref_t ref;
>      } a;
> -    uint16_t offset; /* offset from start of grant */
> -    uint16_t length; /* size within the grant */
> +    uint16_t offset; /**< offset from start of grant */
> +    uint16_t length; /**< size within the grant */
>  #define GNTTAB_CACHE_CLEAN          (1u<<0)
>  #define GNTTAB_CACHE_INVAL          (1u<<1)
>  #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
> @@ -600,40 +659,60 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
>  
>  #endif /* __XEN_INTERFACE_VERSION__ */
>  
> -/*
> - * Bitfield values for gnttab_map_grant_ref.flags.
> +/**
> + * @def GNTMAP_device_map
> + * Bitfield value for gnttab_map_grant_ref.flags: Map the grant entry for
> + * access by I/O devices.
>   */
> - /* Map the grant entry for access by I/O devices. */
>  #define _GNTMAP_device_map      (0)
>  #define GNTMAP_device_map       (1<<_GNTMAP_device_map)
> - /* Map the grant entry for access by host CPUs. */
> +
> +/**
> + * @def GNTMAP_host_map
> + * Bitfield value for gnttab_map_grant_ref.flags: Map the grant entry for
> + * access by host CPUs.
> + */
>  #define _GNTMAP_host_map        (1)
>  #define GNTMAP_host_map         (1<<_GNTMAP_host_map)
> - /* Accesses to the granted frame will be restricted to read-only access. */
> +
> +/**
> + * @def GNTMAP_readonly
> + * Bitfield value for gnttab_map_grant_ref.flags: Accesses to the granted frame
> + * will be restricted to read-only access.
> + */
>  #define _GNTMAP_readonly        (2)
>  #define GNTMAP_readonly         (1<<_GNTMAP_readonly)
> - /*
> -  * GNTMAP_host_map subflag:
> -  *  0 => The host mapping is usable only by the guest OS.
> -  *  1 => The host mapping is usable by guest OS + current application.
> -  */
> +
> +/**
> + * @def GNTMAP_application_map
> + * Bitfield value for gnttab_map_grant_ref.flags.
> + *
> + * GNTMAP_host_map subflag:
> + * * 0 => The host mapping is usable only by the guest OS.
> + * * 1 => The host mapping is usable by guest OS + current application.
> + */
>  #define _GNTMAP_application_map (3)
>  #define GNTMAP_application_map  (1<<_GNTMAP_application_map)
>  
> - /*
> -  * GNTMAP_contains_pte subflag:
> -  *  0 => This map request contains a host virtual address.
> -  *  1 => This map request contains the machine addess of the PTE to update.
> -  */
> +/**
> + * @def GNTMAP_contains_pte
> + * Bitfield value for gnttab_map_grant_ref.flags.
> + *
> + * GNTMAP_contains_pte subflag:
> + * * 0 => This map request contains a host virtual address.
> + * * 1 => This map request contains the machine address of the PTE to update.
> + */
>  #define _GNTMAP_contains_pte    (4)
>  #define GNTMAP_contains_pte     (1<<_GNTMAP_contains_pte)
>  
>  #define _GNTMAP_can_fail        (5)
>  #define GNTMAP_can_fail         (1<<_GNTMAP_can_fail)
>  
> -/*
> - * Bits to be placed in guest kernel available PTE bits (architecture
> - * dependent; only supported when XENFEAT_gnttab_map_avail_bits is set).
> +/**
> + * @def GNTMAP_guest_avail_mask
> + * Bitfield value for gnttab_map_grant_ref.flags: Bits to be placed in guest
> + * kernel available PTE bits (architecture dependent; only supported when
> + * XENFEAT_gnttab_map_avail_bits is set).
>   */
>  #define _GNTMAP_guest_avail0    (16)
>  #define GNTMAP_guest_avail_mask ((uint32_t)~0 << _GNTMAP_guest_avail0)
> @@ -641,21 +720,19 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
>  /*
>   * Values for error status returns. All errors are -ve.
>   */
> -/* ` enum grant_status { */
> -#define GNTST_okay             (0)  /* Normal return.                        */
> -#define GNTST_general_error    (-1) /* General undefined error.              */
> -#define GNTST_bad_domain       (-2) /* Unrecognsed domain id.                */
> -#define GNTST_bad_gntref       (-3) /* Unrecognised or inappropriate gntref. */
> -#define GNTST_bad_handle       (-4) /* Unrecognised or inappropriate handle. */
> -#define GNTST_bad_virt_addr    (-5) /* Inappropriate virtual address to map. */
> -#define GNTST_bad_dev_addr     (-6) /* Inappropriate device address to unmap.*/
> -#define GNTST_no_device_space  (-7) /* Out of space in I/O MMU.              */
> -#define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
> -#define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
> -#define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary.   */
> -#define GNTST_address_too_big (-11) /* transfer page address too large.      */
> -#define GNTST_eagain          (-12) /* Operation not done; try again.        */
> -/* ` } */
> +#define GNTST_okay             (0)  /**< Normal return.                        */
> +#define GNTST_general_error    (-1) /**< General undefined error.              */
> +#define GNTST_bad_domain       (-2) /**< Unrecognised domain id.               */
> +#define GNTST_bad_gntref       (-3) /**< Unrecognised or inappropriate gntref. */
> +#define GNTST_bad_handle       (-4) /**< Unrecognised or inappropriate handle. */
> +#define GNTST_bad_virt_addr    (-5) /**< Inappropriate virtual address to map. */
> +#define GNTST_bad_dev_addr     (-6) /**< Inappropriate device address to unmap.*/
> +#define GNTST_no_device_space  (-7) /**< Out of space in I/O MMU.              */
> +#define GNTST_permission_denied (-8) /**< Not enough privilege for operation.  */
> +#define GNTST_bad_page         (-9) /**< Specified page was invalid for op.    */
> +#define GNTST_bad_copy_arg    (-10) /**< Copy arguments cross page boundary.   */
> +#define GNTST_address_too_big (-11) /**< Transfer page address too large.      */
> +#define GNTST_eagain          (-12) /**< Operation not done; try again.        */
>  
>  #define GNTTABOP_error_msgs {                   \
>      "okay",                                     \
> @@ -673,6 +750,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
>      "operation not done; try again"             \
>  }
>  
> +/**
> + * @}
> + */
> +
>  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 02:27:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 02:27:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146345.269248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwF4s-0004qL-7I; Thu, 24 Jun 2021 02:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146345.269248; Thu, 24 Jun 2021 02:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwF4s-0004qD-1S; Thu, 24 Jun 2021 02:26:50 +0000
Received: by outflank-mailman (input) for mailman id 146345;
 Thu, 24 Jun 2021 02:26:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwF4q-0004q3-V7; Thu, 24 Jun 2021 02:26:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwF4q-00044z-Mt; Thu, 24 Jun 2021 02:26:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwF4q-00046V-EI; Thu, 24 Jun 2021 02:26:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwF4q-0007gF-Do; Thu, 24 Jun 2021 02:26:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ybirhSv6rISKDMeO+pm4iqLuTu+sfxuHAVC5x6xSfTI=; b=On53WV9OMrluCqH9NaA3f3ADVQ
	4zCx5FxTdz+3QJGrXh7WX46ADt43e121lrZ5P6GvoA29s8HOMNQF2aCaN2dAJjpkeprvoF0YqIKzJ
	uTwp2LQEkaGWwGR5Rd+6FHgW/E3Vsaq4rQru5WE3urDyGH93N3mmO6nwfDbHEfix38yM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162998-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162998: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4037804c55745738e0950658a5132790e6ac52e4
X-Osstest-Versions-That:
    linux=a82d4d5e9fe6e6448fb5885a184460864c2f14a5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 02:26:48 +0000

flight 162998 linux-5.4 real [real]
flight 163008 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162998/
http://logs.test-lab.xenproject.org/osstest/logs/163008/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      12 debian-di-install   fail pass in 163008-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 163008 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 163008 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162890
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162890
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162890
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162890
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162890
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162890
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162890
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162890
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162890
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162890
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162890
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4037804c55745738e0950658a5132790e6ac52e4
baseline version:
 linux                a82d4d5e9fe6e6448fb5885a184460864c2f14a5

Last test of basis   162890  2021-06-18 08:12:38 Z    5 days
Testing same since   162998  2021-06-23 13:11:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  afzal mohammed <afzal.mohd.ma@gmail.com>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Andrew Lunn <andrew@lunn.ch>
  Andrew Morton <akpm@linux-foundation.org>
  Antti Järvinen <antti.jarvinen@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Avraham Stern <avraham.stern@intel.com>
  Axel Lin <axel.lin@ingics.com>
  Aya Levin <ayal@nvidia.com>
  Babu Moger <babu.moger@amd.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Borislav Petkov <bp@suse.de>
  Bumyong Lee <bumyong.lee@samsung.com>
  Changbin Du <changbin.du@gmail.com>
  Chanho Park <chanho61.park@samsung.com>
  Chen Li <chenli@uniontech.com>
  Chengyang Fan <cy.fan@huawei.com>
  Chiqijun <chiqijun@huawei.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Davide Caratti <dcaratti@redhat.com>
  Dima Chumak <dchumak@nvidia.com>
  Dongliang Mu <mudongliangabcd@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Esben Haabendal <esben@geanix.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Fugang Duan <fugang.duan@nxp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hangbin Liu <liuhangbin@gmail.com>
  Hauke Mehrtens <hauke@hauke-m.de>
  Huy Nguyen <huyn@nvidia.com>
  Ido Schimmel <idosch@nvidia.com>
  Jack Pham <jackp@codeaurora.org>
  Jack Yu <jack.yu@realtek.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Self <jason@bluehome.net>
  Jim Mattson <jmattson@google.com>
  Jisheng Zhang <Jisheng.Zhang@synaptics.com>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Johannes Berg <johannes.berg@intel.com>
  Jongho Park <jongho7.park@samsung.com>
  Kees Cook <keescook@chromium.org>
  kernel test robot <lkp@intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Linyu Yuan <linyyuan@codeaurora.org>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Maciej Żenczykowski <maze@google.com>
  Maor Gottlieb <maorg@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Brown <broonie@kernel.org>
  Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
  Maxim Mikityanskiy <maximmi@nvidia.com>
  Michael Chan <michael.chan@broadcom.com>
  Nanyong Sun <sunnanyong@huawei.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nikolay Aleksandrov <nikolay@nvidia.com>
  Norbert Slusarek <nslusarek@gmx.net>
  Oder Chiou <oder_chiou@realtek.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Pali Rohár <pali@kernel.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Paul Moore <paul@paul-moore.com>
  Pavel Machek (CIP) <pavel@denx.de>
  Pavel Machek <pavel@denx.de>
  Pavel Skripkin <paskripkin@gmail.com>
  Peter Chen <peter.chen@kernel.org>
  Randy Dunlap <rdunlap@infradead.org>
  Remi Pommarel <repk@triplefau.lt>
  Richard Cochran <richardcochran@gmail.com>
  Rik van Riel <riel@surriel.com>
  Riwen Lu <luriwen@kylinos.cn>
  Saeed Mahameed <saeedm@mellanox.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Santosh Shilimkar <santosh.shilimkar@oracle.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Sergio Paracuellos <sergio.paracuellos@gmail.com>
  Shanker Donthineni <sdonthineni@nvidia.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Wunderlich <sw@simonwunderlich.de>
  Somnath Kotur <somnath.kotur@broadcom.com>
  Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sven Eckelmann <sven@narfation.org>
  syzbot <syzbot+355f8edb2ff45d5f95fa@syzkaller.appspotmail.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Lindgren <tony@atomide.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Vlastimil Babka <vbabka@suse.cz>
  Xin Chen <chenxin@kylinos.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  yangerkun <yangerkun@huawei.com>
  Yifan Zhang <yifan1.zhang@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a82d4d5e9fe6..4037804c5574  4037804c55745738e0950658a5132790e6ac52e4 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 03:25:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 03:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146351.269261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwFzt-00020g-27; Thu, 24 Jun 2021 03:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146351.269261; Thu, 24 Jun 2021 03:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwFzs-00020Z-VM; Thu, 24 Jun 2021 03:25:44 +0000
Received: by outflank-mailman (input) for mailman id 146351;
 Thu, 24 Jun 2021 03:25:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwFzq-00020P-QD; Thu, 24 Jun 2021 03:25:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwFzq-000529-Ii; Thu, 24 Jun 2021 03:25:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwFzq-0007AW-53; Thu, 24 Jun 2021 03:25:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwFzq-0000cp-4V; Thu, 24 Jun 2021 03:25:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LNN7uNtZs0YCzi34rl9WE2MJkgVb3zYgR8OQH9pZIAU=; b=CVAN/UN5a9+Bkm6KuxM5G+BauY
	1WEEfCr2255ppAkdM8BltWF9QDH4OqZgHUOMCZNG+6qvFEiLC0FazcJWhMTxWGwbA0x67HG269cgr
	2JJjI3yntdiA56XFuJPCGNB2pG9zarsJ/LXjBCV151Hs9mppcNM9r3cJhN0v1GOIRw7A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163003-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163003: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0c18f29aae7ce3dadd26d8ee3505d07cc982df75
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 03:25:42 +0000

flight 163003 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163003/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 162986 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1  13 debian-fixup               fail pass in 162986
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 162986
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162986
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail pass in 162986

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2 fail in 162986 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0c18f29aae7ce3dadd26d8ee3505d07cc982df75
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  327 days
Failing since        152366  2020-08-01 20:49:34 Z  326 days  554 attempts
Testing same since   162986  2021-06-23 04:37:31 Z    0 days    2 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1688820 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 04:06:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 04:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146283.269281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwGct-00069U-MP; Thu, 24 Jun 2021 04:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146283.269281; Thu, 24 Jun 2021 04:06:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwGct-00068A-Gc; Thu, 24 Jun 2021 04:06:03 +0000
Received: by outflank-mailman (input) for mailman id 146283;
 Wed, 23 Jun 2021 18:44:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8eZS=LR=quicinc.com=quic_qiancai@srs-us1.protection.inumbo.net>)
 id 1lw7rg-0000Lq-Bx
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 18:44:44 +0000
Received: from alexa-out-sd-01.qualcomm.com (unknown [199.106.114.38])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9d2ac5c-947f-4431-8cbe-a27ee5d4434a;
 Wed, 23 Jun 2021 18:44:43 +0000 (UTC)
Received: from unknown (HELO ironmsg05-sd.qualcomm.com) ([10.53.140.145])
 by alexa-out-sd-01.qualcomm.com with ESMTP; 23 Jun 2021 11:44:42 -0700
Received: from nasanexm03e.na.qualcomm.com ([10.85.0.48])
 by ironmsg05-sd.qualcomm.com with ESMTP/TLS/AES256-SHA;
 23 Jun 2021 11:44:41 -0700
Received: from [10.38.240.33] (10.80.80.8) by nasanexm03e.na.qualcomm.com
 (10.85.0.48) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Wed, 23 Jun
 2021 11:44:35 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9d2ac5c-947f-4431-8cbe-a27ee5d4434a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim;
  t=1624473883; x=1656009883;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=N07wTNJU87j8J0GwX4jxR069BiU3mP0B1FK82hyEXsU=;
  b=YaQG1kmOJmrIQ4OYRGvFbMs7YdDbMfAUzTt5+3dU3awrC3ogq9wrVQyF
   by/8dBmI481zG9rDrDT63r5gl0+4Lgssz8Mjz1GicA/qq40Nz3W4Z2g8Y
   cwdpP2PDxeqtQnpQtwpqB5cxljkW9npsn4gepj+IF4rWnVDZ0OlwyEZ53
   U=;
X-QCInternal: smtphost
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Will Deacon <will@kernel.org>
CC: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	<mpe@ellerman.id.au>, Joerg Roedel <joro@8bytes.org>, Frank Rowand
	<frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	<boris.ostrovsky@oracle.com>, <jgross@suse.com>, Christoph Hellwig
	<hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>,
	<heikki.krogerus@linux.intel.com>, <thomas.hellstrom@linux.intel.com>,
	<peterz@infradead.org>, <benh@kernel.crashing.org>,
	<joonas.lahtinen@linux.intel.com>, <dri-devel@lists.freedesktop.org>,
	<chris@chris-wilson.co.uk>, <grant.likely@arm.com>, <paulus@samba.org>,
	<mingo@kernel.org>, <jxgao@google.com>, <sstabellini@kernel.org>, Saravana
 Kannan <saravanak@google.com>, <xypron.glpk@gmx.de>, "Rafael J . Wysocki"
	<rafael.j.wysocki@intel.com>, Bartosz Golaszewski
	<bgolaszewski@baylibre.com>, <bskeggs@redhat.com>,
	<linux-pci@vger.kernel.org>, <xen-devel@lists.xenproject.org>, Thierry Reding
	<treding@nvidia.com>, <intel-gfx@lists.freedesktop.org>,
	<matthew.auld@intel.com>, linux-devicetree <devicetree@vger.kernel.org>,
	<daniel@ffwll.ch>, <airlied@linux.ie>, <maarten.lankhorst@linux.intel.com>,
	<linuxppc-dev@lists.ozlabs.org>, <jani.nikula@linux.intel.com>, Nicolas
 Boichat <drinkcat@chromium.org>, <rodrigo.vivi@intel.com>,
	<bhelgaas@google.com>, Dan Williams <dan.j.williams@intel.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Greg KH
	<gregkh@linuxfoundation.org>, Randy Dunlap <rdunlap@infradead.org>, lkml
	<linux-kernel@vger.kernel.org>, "list@263.net:IOMMU DRIVERS"
	<iommu@lists.linux-foundation.org>, Jim Quinlan <james.quinlan@broadcom.com>,
	<thomas.lendacky@amd.com>, Robin Murphy <robin.murphy@arm.com>,
	<bauerman@linux.ibm.com>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
From: Qian Cai <quic_qiancai@quicinc.com>
Message-ID: <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
Date: Wed, 23 Jun 2021 14:44:34 -0400
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210623183736.GA472@willie-the-truck>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To
 nasanexm03e.na.qualcomm.com (10.85.0.48)



On 6/23/2021 2:37 PM, Will Deacon wrote:
> On Wed, Jun 23, 2021 at 12:39:29PM -0400, Qian Cai wrote:
>>
>>
>> On 6/18/2021 11:40 PM, Claire Chang wrote:
>>> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
>>> use it to determine whether to bounce the data or not. This will be
>>> useful later to allow for different pools.
>>>
>>> Signed-off-by: Claire Chang <tientzu@chromium.org>
>>> Reviewed-by: Christoph Hellwig <hch@lst.de>
>>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>>> Tested-by: Will Deacon <will@kernel.org>
>>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> Reverting the rest of the series up to this patch fixed a boot crash with NVMe on today's linux-next.
> 
> Hmm, so that makes patch 7 the suspicious one, right?

Will, no. It is rather patch #6 (this patch). Only patches #6 through #12 were reverted to fix the issue. Also, looking at the offset of the crash,

pc : dma_direct_map_sg+0x304/0x8f0
is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119

is_swiotlb_force_bounce() is the new function introduced in this patch:

+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_bounce;
+}

> 
> Looking at that one more closely, it looks like swiotlb_find_slots() takes
> 'alloc_size + offset' as its 'alloc_size' parameter from
> swiotlb_tbl_map_single() and initialises 'mem->slots[i].alloc_size' based
> on 'alloc_size + offset', which looks like a change in behaviour from the
> old code, which didn't include the offset there.
> 
> swiotlb_release_slots() then adds the offset back on afaict, so we end up
> accounting for it twice and possibly unmap more than we're supposed to?
> 
> Will
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 04:06:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 04:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146270.269276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwGct-00065i-BS; Thu, 24 Jun 2021 04:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146270.269276; Thu, 24 Jun 2021 04:06:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwGct-00065b-8B; Thu, 24 Jun 2021 04:06:03 +0000
Received: by outflank-mailman (input) for mailman id 146270;
 Wed, 23 Jun 2021 16:39:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8eZS=LR=quicinc.com=quic_qiancai@srs-us1.protection.inumbo.net>)
 id 1lw5ud-0005dK-PO
 for xen-devel@lists.xenproject.org; Wed, 23 Jun 2021 16:39:39 +0000
Received: from alexa-out-sd-02.qualcomm.com (unknown [199.106.114.39])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 873a946a-2c8d-4b42-b894-0f855dfe5a83;
 Wed, 23 Jun 2021 16:39:38 +0000 (UTC)
Received: from unknown (HELO ironmsg01-sd.qualcomm.com) ([10.53.140.141])
 by alexa-out-sd-02.qualcomm.com with ESMTP; 23 Jun 2021 09:39:37 -0700
Received: from nasanexm03e.na.qualcomm.com ([10.85.0.48])
 by ironmsg01-sd.qualcomm.com with ESMTP/TLS/AES256-SHA;
 23 Jun 2021 09:39:35 -0700
Received: from [10.38.240.33] (10.80.80.8) by nasanexm03e.na.qualcomm.com
 (10.85.0.48) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Wed, 23 Jun
 2021 09:39:31 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 873a946a-2c8d-4b42-b894-0f855dfe5a83
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	<mpe@ellerman.id.au>, Joerg Roedel <joro@8bytes.org>, Will Deacon
	<will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, <boris.ostrovsky@oracle.com>,
	<jgross@suse.com>, Christoph Hellwig <hch@lst.de>, Marek Szyprowski
	<m.szyprowski@samsung.com>
CC: <heikki.krogerus@linux.intel.com>, <thomas.hellstrom@linux.intel.com>,
	<peterz@infradead.org>, <benh@kernel.crashing.org>,
	<joonas.lahtinen@linux.intel.com>, <dri-devel@lists.freedesktop.org>,
	<chris@chris-wilson.co.uk>, <grant.likely@arm.com>, <paulus@samba.org>,
	<mingo@kernel.org>, <jxgao@google.com>, <sstabellini@kernel.org>, Saravana
 Kannan <saravanak@google.com>, <xypron.glpk@gmx.de>, "Rafael J . Wysocki"
	<rafael.j.wysocki@intel.com>, Bartosz Golaszewski
	<bgolaszewski@baylibre.com>, <bskeggs@redhat.com>,
	<linux-pci@vger.kernel.org>, <xen-devel@lists.xenproject.org>, Thierry Reding
	<treding@nvidia.com>, <intel-gfx@lists.freedesktop.org>,
	<matthew.auld@intel.com>, linux-devicetree <devicetree@vger.kernel.org>,
	<daniel@ffwll.ch>, <airlied@linux.ie>, <maarten.lankhorst@linux.intel.com>,
	<linuxppc-dev@lists.ozlabs.org>, <jani.nikula@linux.intel.com>, Nicolas
 Boichat <drinkcat@chromium.org>, <rodrigo.vivi@intel.com>,
	<bhelgaas@google.com>, Dan Williams <dan.j.williams@intel.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Greg KH
	<gregkh@linuxfoundation.org>, Randy Dunlap <rdunlap@infradead.org>, lkml
	<linux-kernel@vger.kernel.org>, "list@263.net:IOMMU DRIVERS"
	<iommu@lists.linux-foundation.org>, Jim Quinlan <james.quinlan@broadcom.com>,
	<thomas.lendacky@amd.com>, Robin Murphy <robin.murphy@arm.com>,
	<bauerman@linux.ibm.com>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
From: Qian Cai <quic_qiancai@quicinc.com>
Message-ID: <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
Date: Wed, 23 Jun 2021 12:39:29 -0400
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210619034043.199220-7-tientzu@chromium.org>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To
 nasanexm03e.na.qualcomm.com (10.85.0.48)



On 6/18/2021 11:40 PM, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Reverting the rest of the series up to this patch fixed a boot crash with NVMe on today's linux-next.

[   22.286574][    T7] Unable to handle kernel paging request at virtual address dfff80000000000e
[   22.295225][    T7] Mem abort info:
[   22.298743][    T7]   ESR = 0x96000004
[   22.302496][    T7]   EC = 0x25: DABT (current EL), IL = 32 bits
[   22.308525][    T7]   SET = 0, FnV = 0
[   22.312274][    T7]   EA = 0, S1PTW = 0
[   22.316131][    T7]   FSC = 0x04: level 0 translation fault
[   22.321704][    T7] Data abort info:
[   22.325278][    T7]   ISV = 0, ISS = 0x00000004
[   22.329840][    T7]   CM = 0, WnR = 0
[   22.333503][    T7] [dfff80000000000e] address between user and kernel address ranges
[   22.338543][  T256] igb 0006:01:00.0: Intel(R) Gigabit Ethernet Network Connection
[   22.341400][    T7] Internal error: Oops: 96000004 [#1] SMP
[   22.348915][  T256] igb 0006:01:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 4c:38:d5:09:c8:83
[   22.354458][    T7] Modules linked in: igb(+) i2c_algo_bit nvme mlx5_core(+) i2c_core nvme_core firmware_class
[   22.362512][  T256] igb 0006:01:00.0: eth0: PBA No: G69016-004
[   22.372287][    T7] CPU: 13 PID: 7 Comm: kworker/u64:0 Not tainted 5.13.0-rc7-next-20210623+ #47
[   22.372293][    T7] Hardware name: MiTAC RAPTOR EV-883832-X3-0001/RAPTOR, BIOS 1.6 06/28/2020
[   22.372298][    T7] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
[   22.378145][  T256] igb 0006:01:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[   22.386901][    T7] 
[   22.386905][    T7] pstate: 10000005 (nzcV daif -PAN -UAO -TCO BTYPE=--)
[   22.386910][    T7] pc : dma_direct_map_sg+0x304/0x8f0

is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
(inlined by) dma_direct_map_page at /usr/src/linux-next/kernel/dma/direct.h:90
(inlined by) dma_direct_map_sg at /usr/src/linux-next/kernel/dma/direct.c:428

[   22.386919][    T7] lr : dma_map_sg_attrs+0x6c/0x118
[   22.386924][    T7] sp : ffff80001dc8eac0
[   22.386926][    T7] x29: ffff80001dc8eac0 x28: ffff0000199e70b0 x27: 0000000000000000
[   22.386935][    T7] x26: ffff000847ee7000 x25: ffff80001158e570 x24: 0000000000000002
[   22.386943][    T7] x23: dfff800000000000 x22: 0000000000000100 x21: ffff0000199e7460
[   22.386951][    T7] x20: ffff0000199e7488 x19: 0000000000000001 x18: ffff000010062670
[   22.386955][  T253] Unable to handle kernel paging request at virtual address dfff80000000000e
[   22.386958][    T7] x17: ffff8000109f6a90 x16: ffff8000109e1b4c x15: ffff800009303420
[   22.386965][  T253] Mem abort info:
[   22.386967][    T7] x14: 0000000000000001 x13: ffff80001158e000
[   22.386970][  T253]   ESR = 0x96000004
[   22.386972][    T7]  x12: 1fffe00108fdce01
[   22.386975][  T253]   EC = 0x25: DABT (current EL), IL = 32 bits
[   22.386976][    T7] x11: 1fffe00108fdce03 x10: ffff000847ee700c x9 : 0000000000000004
[   22.386981][  T253]   SET = 0, FnV = 0
[   22.386983][    T7] 
[   22.386985][    T7] x8 : ffff700003b91d72
[   22.386986][  T253]   EA = 0, S1PTW = 0
[   22.386987][    T7]  x7 : 0000000000000000 x6 : 000000000000000e
[   22.386990][  T253]   FSC = 0x04: level 0 translation fault
[   22.386992][    T7] 
[   22.386994][    T7] x5 : dfff800000000000
[   22.386995][  T253] Data abort info:
[   22.386997][    T7]  x4 : 00000008c7ede000
[   22.386999][  T253]   ISV = 0, ISS = 0x00000004
[   22.386999][    T7]  x3 : 00000008c7ede000
[   22.387003][    T7] x2 : 0000000000001000
[   22.387003][  T253]   CM = 0, WnR = 0
[   22.387006][    T7]  x1 : 0000000000000000 x0 : 0000000000000071
[   22.387008][  T253] [dfff80000000000e] address between user and kernel address ranges
[   22.387011][    T7] 
[   22.387013][    T7] Call trace:
[   22.387016][    T7]  dma_direct_map_sg+0x304/0x8f0
[   22.387022][    T7]  dma_map_sg_attrs+0x6c/0x118
[   22.387026][    T7]  nvme_map_data+0x2ec/0x21d8 [nvme]
[   22.387040][    T7]  nvme_queue_rq+0x274/0x3f0 [nvme]
[   22.387052][    T7]  blk_mq_dispatch_rq_list+0x2ec/0x18a0
[   22.387060][    T7]  __blk_mq_sched_dispatch_requests+0x2a0/0x3e8
[   22.387065][    T7]  blk_mq_sched_dispatch_requests+0xa4/0x100
[   22.387070][    T7]  __blk_mq_run_hw_queue+0x148/0x1d8
[   22.387075][    T7]  __blk_mq_delay_run_hw_queue+0x3f8/0x730
[   22.414539][  T269] igb 0006:01:00.0 enP6p1s0: renamed from eth0
[   22.418957][    T7]  blk_mq_run_hw_queue+0x148/0x248
[   22.418969][    T7]  blk_mq_sched_insert_request+0x2a4/0x330
[   22.418975][    T7]  blk_execute_rq_nowait+0xc8/0x118
[   22.418981][    T7]  blk_execute_rq+0xd4/0x188
[   22.453203][  T255] udevadm (255) used greatest stack depth: 57408 bytes left
[   22.456504][    T7]  __nvme_submit_sync_cmd+0x4e0/0x730 [nvme_core]
[   22.673245][    T7]  nvme_identify_ctrl.isra.0+0x124/0x1e0 [nvme_core]
[   22.679784][    T7]  nvme_init_identify+0x90/0x1868 [nvme_core]
[   22.685713][    T7]  nvme_init_ctrl_finish+0x1a8/0xb88 [nvme_core]
[   22.691903][    T7]  nvme_reset_work+0xe5c/0x2aa4 [nvme]
[   22.697219][    T7]  process_one_work+0x7e4/0x19a0
[   22.702005][    T7]  worker_thread+0x334/0xae0
[   22.706442][    T7]  kthread+0x3bc/0x470
[   22.710359][    T7]  ret_from_fork+0x10/0x18
[   22.714627][    T7] Code: f941ef81 9101c420 1200080e d343fc06 (38f768c6) 
[   22.721407][    T7] ---[ end trace 1f3c4181ae408676 ]---
[   22.726712][    T7] Kernel panic - not syncing: Oops: Fatal exception
[   22.733169][    T7] SMP: stopping secondary CPUs
[   23.765164][    T7] SMP: failed to stop secondary CPUs 13,15
[   23.770818][    T7] Kernel Offset: disabled
[   23.774991][    T7] CPU features: 0x00000251,20000846
[   23.780034][    T7] Memory Limit: none
[   23.783794][    T7] ---[ end Kernel panic - not syncing: Oops: Fatal exception ]---

> ---
>  drivers/xen/swiotlb-xen.c |  2 +-
>  include/linux/swiotlb.h   | 11 +++++++++++
>  kernel/dma/direct.c       |  2 +-
>  kernel/dma/direct.h       |  2 +-
>  kernel/dma/swiotlb.c      |  4 ++++
>  5 files changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 0c6ed09f8513..4730a146fa35 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	if (dma_capable(dev, dev_addr, size, true) &&
>  	    !range_straddles_page_boundary(phys, size) &&
>  		!xen_arch_need_swiotlb(dev, phys, dev_addr) &&
> -		swiotlb_force != SWIOTLB_FORCE)
> +		!is_swiotlb_force_bounce(dev))
>  		goto done;
>  
>  	/*
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index dd1c30a83058..8d8855c77d9a 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_bounce: %true if swiotlb bouncing is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -94,6 +95,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_bounce;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	return mem && paddr >= mem->start && paddr < mem->end;
>  }
>  
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_bounce;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return false;
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..a92465b4eb12 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 13e9e7158d94..4632b0f4f72e 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
>  	phys_addr_t phys = page_to_phys(page) + offset;
>  	dma_addr_t dma_addr = phys_to_dma(dev, phys);
>  
> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_force_bounce(dev))
>  		return swiotlb_map(dev, phys, size, dir, attrs);
>  
>  	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 8a120f42340b..0d294bbf274c 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>  	mem->end = mem->start + bytes;
>  	mem->index = 0;
>  	mem->late_alloc = late_alloc;
> +
> +	if (swiotlb_force == SWIOTLB_FORCE)
> +		mem->force_bounce = true;
> +
>  	spin_lock_init(&mem->lock);
>  	for (i = 0; i < mem->nslabs; i++) {
>  		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:14:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 05:14:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146366.269298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHgX-0006ST-HB; Thu, 24 Jun 2021 05:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146366.269298; Thu, 24 Jun 2021 05:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHgX-0006SM-E3; Thu, 24 Jun 2021 05:13:53 +0000
Received: by outflank-mailman (input) for mailman id 146366;
 Thu, 24 Jun 2021 05:13:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GXyq=LS=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lwHgV-0006SG-EY
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 05:13:51 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ed8ad76-7e0e-422c-81e7-d97dfaee5401;
 Thu, 24 Jun 2021 05:13:49 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 22:13:47 -0700
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by fmsmga002.fm.intel.com with ESMTP; 23 Jun 2021 22:13:47 -0700
Received: from fmsmsx605.amr.corp.intel.com (10.18.126.85) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Wed, 23 Jun 2021 22:13:46 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx605.amr.corp.intel.com (10.18.126.85) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2242.4
 via Frontend Transport; Wed, 23 Jun 2021 22:13:46 -0700
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.102)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2242.4; Wed, 23 Jun 2021 22:13:44 -0700
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by MWHPR11MB1550.namprd11.prod.outlook.com (2603:10b6:301:b::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Thu, 24 Jun
 2021 05:13:43 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1%12]) with mapi id 15.20.4242.024; Thu, 24 Jun
 2021 05:13:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ed8ad76-7e0e-422c-81e7-d97dfaee5401
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 3/9] VT-d: undo device mappings upon error
Thread-Topic: [PATCH 3/9] VT-d: undo device mappings upon error
Thread-Index: AQHXXRG+XpEHT4VRdkm+LQYGIgowYqsitShQ
Date: Thu, 24 Jun 2021 05:13:43 +0000
Message-ID: <MWHPR11MB18866D57EB9E61A2F43D5A8D8C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <d6370703-97e3-2571-5ae3-8a5ec11e9bcd@suse.com>
In-Reply-To: <d6370703-97e3-2571-5ae3-8a5ec11e9bcd@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.142.21]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dad3f66d-37fb-4038-8948-08d936ced611
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Jun 2021 05:13:43.1412
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: OeKn3JyRawSSdXgDOoq/CtwVYEaqefOCSIju6WHGxWfrL7MyUg5advlprCzd2n2t/TTcYH2nSYTQ3gMWdroa1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1550
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 9, 2021 5:28 PM
> 
> When
>  - flushes (supposedly not possible anymore after XSA-373),
>  - secondary mappings for legacy PCI devices behind bridges,
>  - secondary mappings for chipset quirks, or
>  - find_upstream_bridge() invocations
> fail, the successfully established device mappings should not be left
> around.
> 
> Further, when (parts of) unmapping fail, simply returning an error is
> typically not enough. Crash the domain instead in such cases, arranging
> for domain cleanup to continue in a best effort manner despite such
> failures.
> 
> Finally make domain_context_unmap()'s error behavior consistent in the
> legacy PCI device case: Don't bail from the function in one special
> case, but always just exit the switch statement.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1442,9 +1442,15 @@ int domain_context_mapping_one(
>      if ( !seg && !rc )
>          rc = me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
> 
> +    if ( rc )
> +        domain_context_unmap_one(domain, iommu, bus, devfn);
> +
>      return rc;
>  }
> 
> +static int domain_context_unmap(struct domain *d, uint8_t devfn,
> +                                struct pci_dev *pdev);
> +
>  static int domain_context_mapping(struct domain *domain, u8 devfn,
>                                    struct pci_dev *pdev)
>  {
> @@ -1505,16 +1511,21 @@ static int domain_context_mapping(struct
>          if ( ret )
>              break;
> 
> -        if ( find_upstream_bridge(seg, &bus, &devfn, &secbus) < 1 )
> -            break;
> +        if ( (ret = find_upstream_bridge(seg, &bus, &devfn, &secbus)) < 1 )
> +        {
> +            if ( !ret )
> +                break;
> +            ret = -ENXIO;
> +        }
> 
>          /*
>           * Mapping a bridge should, if anything, pass the struct pci_dev of
>           * that bridge. Since bridges don't normally get assigned to guests,
>           * their owner would be the wrong one. Pass NULL instead.
>           */
> -        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
> -                                         NULL);
> +        if ( ret >= 0 )
> +            ret = domain_context_mapping_one(domain, drhd->iommu, bus,
> devfn,
> +                                             NULL);
> 
>          /*
>           * Devices behind PCIe-to-PCI/PCIx bridge may generate different
> @@ -1531,6 +1542,9 @@ static int domain_context_mapping(struct
>              ret = domain_context_mapping_one(domain, drhd->iommu, secbus,
> 0,
>                                               NULL);
> 
> +        if ( ret )
> +            domain_context_unmap(domain, devfn, pdev);
> +
>          break;
> 
>      default:
> @@ -1609,6 +1623,19 @@ int domain_context_unmap_one(
>      if ( !iommu->drhd->segment && !rc )
>          rc = me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
> 
> +    if ( rc && !is_hardware_domain(domain) && domain != dom_io )
> +    {
> +        if ( domain->is_dying )
> +        {
> +            printk(XENLOG_ERR "%pd: error %d
> unmapping %04x:%02x:%02x.%u\n",
> +                   domain, rc, iommu->drhd->segment, bus,
> +                   PCI_SLOT(devfn), PCI_FUNC(devfn));
> +            rc = 0; /* Make upper layers continue in a best effort manner. */
> +        }
> +        else
> +            domain_crash(domain);
> +    }
> +
>      return rc;
>  }
> 
> @@ -1661,17 +1688,29 @@ static int domain_context_unmap(struct d
> 
>          tmp_bus = bus;
>          tmp_devfn = devfn;
> -        if ( find_upstream_bridge(seg, &tmp_bus, &tmp_devfn, &secbus) < 1 )
> +        if ( (ret = find_upstream_bridge(seg, &tmp_bus, &tmp_devfn,
> +                                         &secbus)) < 1 )
> +        {
> +            if ( ret )
> +            {
> +                ret = -ENXIO;
> +                if ( !domain->is_dying &&
> +                     !is_hardware_domain(domain) && domain != dom_io )
> +                {
> +                    domain_crash(domain);
> +                    /* Make upper layers continue in a best effort manner. */
> +                    ret = 0;
> +                }
> +            }
>             break;
> +        }
> 
>          /* PCIe to PCI/PCIx bridge */
>          if ( pdev_type(seg, tmp_bus, tmp_devfn) ==
> DEV_TYPE_PCIe2PCI_BRIDGE )
>          {
>              ret = domain_context_unmap_one(domain, iommu, tmp_bus,
> tmp_devfn);
> -            if ( ret )
> -                return ret;
> -
> -            ret = domain_context_unmap_one(domain, iommu, secbus, 0);
> +            if ( !ret )
> +       
ICAgICAgICAgcmV0ID0gZG9tYWluX2NvbnRleHRfdW5tYXBfb25lKGRvbWFpbiwgaW9tbXUsIHNl
Y2J1cywgMCk7DQo+ICAgICAgICAgIH0NCj4gICAgICAgICAgZWxzZSAvKiBMZWdhY3kgUENJIGJy
aWRnZSAqLw0KPiAgICAgICAgICAgICAgcmV0ID0gZG9tYWluX2NvbnRleHRfdW5tYXBfb25lKGRv
bWFpbiwgaW9tbXUsIHRtcF9idXMsDQo+IHRtcF9kZXZmbik7DQoNCg==


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:21:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 05:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146372.269309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHoE-0007ug-Hx; Thu, 24 Jun 2021 05:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146372.269309; Thu, 24 Jun 2021 05:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHoE-0007uZ-EA; Thu, 24 Jun 2021 05:21:50 +0000
Received: by outflank-mailman (input) for mailman id 146372;
 Thu, 24 Jun 2021 05:21:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GXyq=LS=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lwHoD-0007uT-01
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 05:21:49 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecebf2c6-368d-4d52-aa46-01c6a7d74d27;
 Thu, 24 Jun 2021 05:21:39 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 22:21:38 -0700
Received: from fmsmsx602.amr.corp.intel.com ([10.18.126.82])
 by orsmga001.jf.intel.com with ESMTP; 23 Jun 2021 22:21:37 -0700
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Wed, 23 Jun 2021 22:21:37 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2242.4
 via Frontend Transport; Wed, 23 Jun 2021 22:21:37 -0700
Received: from NAM04-BN8-obe.outbound.protection.outlook.com (104.47.74.49) by
 edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2242.4; Wed, 23 Jun 2021 22:21:36 -0700
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by MW3PR11MB4587.namprd11.prod.outlook.com (2603:10b6:303:58::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Thu, 24 Jun
 2021 05:21:35 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1%12]) with mapi id 15.20.4242.024; Thu, 24 Jun
 2021 05:21:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecebf2c6-368d-4d52-aa46-01c6a7d74d27
IronPort-SDR: LW/18ocA/2h3R7u4bmGM/2XmFg5mGnB/YEsdSqSzWzvkD9uVoBa9jZ0lK7bAgJXmL+7DID2Gl9
 1tqNOZ1mfNyw==
X-IronPort-AV: E=McAfee;i="6200,9189,10024"; a="187779439"
X-IronPort-AV: E=Sophos;i="5.83,295,1616482800"; 
   d="scan'208";a="187779439"
IronPort-SDR: DZXSJp2fAT48Lkgbu34YQLT5KF0jIMoJAz5Ldq6GuihmAw0mAc1n3WwpjIgxqhsBi8iBRQLz+z
 rlQmrWanO+Sg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.83,295,1616482800"; 
   d="scan'208";a="487615548"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EtGfe/As8u86Q1Kdy+8sXc8VbC+tTwCHICJKQGhNyF/wQYbtysbPlML6Tm/JmGzYo1DcWInkBpnJ/dlcRGwO/EwAkdQJMU5oe7shV8HeOWEgKdGJnMSTqp5fzUdlWXlGqykH3ZJkD/2EOaCykl1LFWEsXC5QeKeW6PvVB44k2YWAiXPi36Pcgp6kVTahernKWkqTQBuK57InTsw53C+Tj/fqv2Sdh+TmTEoFXtC1MY/9O/qV8rOVXIMsuvZGYrIjqAgpGhfcL9DF5pYI6Y1hf6m9MUN/l/4bD9D5RuIv+bUDlcxsE9kbktiYys2fKBT1MZqzkvhrqqr0liUkQwJ5kQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q1rfPsGVelLmuPseQ6Loy6Bbgl2pHxI2BNOR3CKWw1c=;
 b=LRMVrg8Xp6DwCvgXE4DMFm7MO7qzLFyU0bwSP6xOMEf5KI7AaQ888/4EiST+45lv0KVSyTgZ52SbibzTsmMEQFZrYYi/DlObuIRnURQEY02+2KA1AX3PSSJFemaPp4Gq5m4OSOvPWxmOyHZMCxIgrWcd0sFlzmeRwjB4RmtHyTucwQ00IW4xrQwXyCu2g0jlkVD53Ze+5kmY0Yy864hdk7vq2yQdXm2MCMAZJ7zwK/Tp1bJmkBTKVqWK/9PoZmJIFNqJ4WXAKrHIybYBQDl54doOeBeYIRqIr078omA0wEOs98oGgX07oECON7VjkcyWekJ5nH92crGkMY57H6PKAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q1rfPsGVelLmuPseQ6Loy6Bbgl2pHxI2BNOR3CKWw1c=;
 b=kb7tSI/FgxrcRHS8cXfUJ4/GPSGX1UBozkAmC1DJsg97Sn6bZqMatvwBRgAJDfoWiCWI0Vq6kHVw1TCTI/lFX8TtsMkAPI5L8tPzJIyV4ARGlLJ5kS2hlqLPmG1t3s0TOZnlfyTlEJFswaPjMXOedKC0yWlbAn1UpQ1SIj3Vcjc=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 4/9] VT-d: adjust domid map updating when unmapping
 context
Thread-Topic: [PATCH 4/9] VT-d: adjust domid map updating when unmapping
 context
Thread-Index: AQHXXRHFNRbepFBfWEWHFxIPdQtGWKsit1XA
Date: Thu, 24 Jun 2021 05:21:35 +0000
Message-ID: <MWHPR11MB18863FB7CE96AB9EEA94DC778C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <be453d69-dfa8-1f75-b30f-918229c73d02@suse.com>
In-Reply-To: <be453d69-dfa8-1f75-b30f-918229c73d02@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.142.21]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 02c7756f-70b8-4b13-c425-08d936cfef5c
x-ms-traffictypediagnostic: MW3PR11MB4587:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <MW3PR11MB458741EA89A4A947F11B9E108C079@MW3PR11MB4587.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: az/gCWPG4HOGk9GpWG8PhsE5VcSjX7EaRGForMOr1GzJJf70bpysGrzy6qYit+CRvVx1+n9Q4L1KNCzr0L7WCNt14eAMYgCHR3JGgfswAvhesaOCFxXjz5SK8E3n7I2lF/St3W6Jg6phzlZrfLiXCGBDXPegtHFurYMQkdsrHGX4gDuExKJDt6qpkQ/MbhzsyNQxzpeRGO6D1NIiLtuoSW5Ruwfgy8YsR80JEmNZAyixBDl6v0wzE6kahhpuHXRgPQXx3NXK66jxjO0Sxkm8YsTdEv2/YfChym/CqgfKhQemd+7B2rf+6sA/hs9iX4dn3Hg/ZWMs7C85NiNq6iXbbibkOkE6Fd0YJ0hdLi35tap5K6z5W6/UDoBkQx7rJlDhqNj6V2ccBe87AoKsFbGhzhcDecpxmRx6EoLg5BfLOXWl6abl8GgIJy16JzQgCPWay0sJM3+7/ExfNZ6xkfBO1LuII0cWBpPrVmH4/CDBe2KmBgGtYz7/lreCIe3DPS1JrLs9AD5LlcuP9j8QzjspziFiov5uZSfcIb8OhkdSRMw8rERWFd17fpnQ+G1Zq87QPC7mFA3XshfxwbP8FZznxl+BRtsYwyuQ6UXOFy/pjoY=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1886.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(396003)(366004)(136003)(346002)(39860400002)(38100700002)(64756008)(122000001)(66446008)(186003)(7696005)(26005)(478600001)(66476007)(6506007)(71200400001)(66946007)(8676002)(66556008)(55016002)(83380400001)(76116006)(2906002)(110136005)(316002)(86362001)(33656002)(9686003)(4326008)(8936002)(52536014)(54906003)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?ZVFMelV0aUVVUXVhb2dDTW9PR2FJY1pSbFVLM05abW1CbXo5L2E5TTNpK2cz?=
 =?utf-8?B?bTRSNks4UWgyQ3ZUaWcrM1ZXZ3doNzRGRlVIK2xkWWlKcExZM1dYOEJuUjhw?=
 =?utf-8?B?R0F1cUVOK2J6LzQvaGVsc1o2M1JiNmxhcElTOUExeVQ2WWJSRmtqb0cyL3lY?=
 =?utf-8?B?NkZSRnU1d3YwN1hqWnFLVG15azB6N1RUWUFEeklMQXJyWWRVOG5uNkNPd1ZV?=
 =?utf-8?B?RXYvUGRqekk4MjZsZUp2NkxNZ0h6OGJEQnVQTVpmbDNlVG52TEwyckg0L1Qy?=
 =?utf-8?B?bSt2OTkyUDJ0OGNWVEdmRU50am01Skc5bGFudGh6S01IdVpQVUp0WXlzR290?=
 =?utf-8?B?V1V3cUNTQlNPcEJLZW43c3hINnV3aTBmd3BIN2MwWEdGUWVGS0o2VjNRQ3Ew?=
 =?utf-8?B?Q2RrRUYxTGhyWjNuOFFkL2FZYzFubU9yZWFnVGh2SUt4NUM0cm91M3ZMVEUy?=
 =?utf-8?B?SlNvSGE4eEFSL2htd0c4V2pOTCs1VXR4QzFYdjdMeGxmbWdiTFFaNnk5WTEv?=
 =?utf-8?B?NjNOa3ZZNG4ybHFjZTBTK20xZEhMY2I2cGZNcFp3U0k2WkVhaXJIMXdRZktR?=
 =?utf-8?B?SzFOVHJ5MjA3OWZaVCt3QjM0N0tGUS9nVWNtSk1FemlxdHFUajY1RXYyTzdr?=
 =?utf-8?B?OVJKTUpVbFdOZ09kbzlFRStPRHVzWEFocGZpS2xBU2ZQSFNSMlBkV2ZLbXdi?=
 =?utf-8?B?ZTQzcXBNRGdON1RBblZFQVRBMDNnckZDNTRXK1BmUTZVdmlUckdYZ3d0NEF4?=
 =?utf-8?B?SVRyU0x6UFJOQXZrUGU3eHVEckFWZVZWZzRTVmhkYjRTN05QalpsenRYRFVM?=
 =?utf-8?B?WkJPMUhrc3hON1NOVkhlbUxOQktxS0ZCb2tqOGtKR1dFcys4N3pNdkJHRUp4?=
 =?utf-8?B?aXNxenhuSXBSM0p3YlBmMWVsNlJvVmFoVzFiY05HOTVuc1ByQWNIMWVQTHp4?=
 =?utf-8?B?Qk85ZmdMRFpFZlIvbUhoSEhmTXVrTmlEYlRCU1Ava3B1QnR6c1VxM1RQUmFB?=
 =?utf-8?B?cFdhaDNwY1dMdW56UXpxMk53ZEIvOFJDaThEMDZaOFhBUWZxTTN4QmNFUUpv?=
 =?utf-8?B?YUtHUFBadTZLTmpROVQzbURaeXVvd1VLT3JCNmFGU0tEbUhzb0J0SFFsMWlj?=
 =?utf-8?B?UFY0eUx1OGZ6Skx6MzYxbC90M3FTdXR1VUlxaDNZUmNhcWttUXhmR0cvcm5H?=
 =?utf-8?B?TzZhOGtLNVYvR3hlQWVvNm40K0h2Rzg5T1M5NEtJeExkZGpyNVQ5QjZXWnU4?=
 =?utf-8?B?anlzV1F1cElSTDdqUkQ0dVhPSmtESy94UTZsWC9vbnNMckpvdmVyUk9zNWxZ?=
 =?utf-8?B?SmdvV2NRT2U3TGU5Q1RtU0dtUlF2NDJtMG5KdlBzMWJEMUlXSnhPTk9Nc2py?=
 =?utf-8?B?d2RjOTh1MitZQ1UvSUh3WlNic2YvNmUzR283UDBCeXpxMHUzVzhSeTVFcm9L?=
 =?utf-8?B?UVhuYU9sZmx5QkF5bm1KT2ZLQnlsYmFmZ21PaHVKczM3aU1qWWhqckh2WmxQ?=
 =?utf-8?B?OURjREpBczFVZWxOV29XTGZGVmg3RUc0QUtwUUJXTzFZUkt6aTc3MVlCajNw?=
 =?utf-8?B?K0ZnNjZkOFpaSmhwUXdEVHI3b1U3VHQxR0hLTzN2THBJazBzNVFhQXY2NUx2?=
 =?utf-8?B?UEkyK2FUWWhnMWpDcS9LaDJyOUtObnc1dG0xR1ZsbFZjRUpEQXdqSFp1TTF3?=
 =?utf-8?B?c2tJdTM4Q29LOEx0TnhpRnNPMzVjcTdlRmN6NlRiMXlCQVh6QmwzbThqMVZy?=
 =?utf-8?Q?P6lPBzM5sF4VmqWGQtH/n2igamY58aew9Um7RHL?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02c7756f-70b8-4b13-c425-08d936cfef5c
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Jun 2021 05:21:35.0740
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: doYnIg1DULp8IbLcR9PVXsgQ1sYlG1pdM8Bw85cJFnDlbQzCp4K+GKHwGcAFvv1+RvKaZViFVZ80IPpiTCEzSA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR11MB4587
X-OriginatorOrg: intel.com

PiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+DQo+IFNlbnQ6IFdlZG5lc2Rh
eSwgSnVuZSA5LCAyMDIxIDU6MjggUE0NCj4gDQo+IFdoZW4gYW4gZWFybGllciBlcnJvciBvY2N1
cnJlZCwgY2xlYW5pbmcgdXAgdGhlIGRvbWlkIG1hcHBpbmcgZGF0YSBpcw0KPiB3cm9uZywgYXMg
cmVmZXJlbmNlcyBsaWtlbHkgc3RpbGwgZXhpc3QuIFRoZSBvbmx5IGV4Y2VwdGlvbiB0byB0aGlz
IGlzDQo+IHdoZW4gdGhlIGFjdHVhbCB1bm1hcHBpbmcgd29ya2VkLCBidXQgc29tZSBmbHVzaCBm
YWlsZWQgKHN1cHBvc2VkbHkNCj4gaW1wb3NzaWJsZSBhZnRlciBYU0EtMzczKS4gVGhlIGd1ZXN0
IHdpbGwgZ2V0IGNyYXNoZWQgaW4gc3VjaCBhIGNhc2UNCj4gdGhvdWdoLCBzbyBhZGQgZmFsbGJh
Y2sgY2xlYW51cCB0byBkb21haW4gZGVzdHJ1Y3Rpb24gdG8gY292ZXIgdGhpcw0KPiBjYXNlLiBU
aGlzIGluIHR1cm4gbWFrZXMgaXQgZGVzaXJhYmxlIHRvIHNpbGVuY2UgdGhlIGRwcmludGsoKSBp
bg0KPiBkb21haW5faW9tbXVfZG9taWQoKS4NCj4gDQo+IE5vdGUgdGhhdCBubyBlcnJvciB3aWxs
IGJlIHJldHVybmVkIGFueW1vcmUgd2hlbiB0aGUgbG9va3VwIGZhaWxzIC0gaW4NCj4gdGhlIGNv
bW1vbiBjYXNlIGxvb2t1cCBmYWlsdXJlIHdvdWxkIGFscmVhZHkgaGF2ZSBjYXVzZWQNCj4gZG9t
YWluX2NvbnRleHRfdW5tYXBfb25lKCkgdG8gZmFpbCwgeWV0IGV2ZW4gZnJvbSBhIG1vcmUgZ2Vu
ZXJhbA0KPiBwZXJzcGVjdGl2ZSBpdCBkb2Vzbid0IGxvb2sgcmlnaHQgdG8gZmFpbCBkb21haW5f
Y29udGV4dF91bm1hcCgpIGluIHN1Y2gNCj4gYSBjYXNlIHdoZW4gdGhpcyB3YXMgdGhlIGxhc3Qg
ZGV2aWNlLCBidXQgbm90IHdoZW4gYW55IGVhcmxpZXIgdW5tYXAgd2FzDQo+IG90aGVyd2lzZSBz
dWNjZXNzZnVsLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPg0KDQpSZXZpZXdlZC1ieTogS2V2aW4gVGlhbiA8a2V2aW4udGlhbkBpbnRlbC5jb20+
DQoNCj4gDQo+IC0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jDQo+ICsr
KyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jDQo+IEBAIC04MCw5ICs4MCwx
MSBAQCBzdGF0aWMgaW50IGRvbWFpbl9pb21tdV9kb21pZChzdHJ1Y3QgZG9tDQo+ICAgICAgICAg
IGkgPSBmaW5kX25leHRfYml0KGlvbW11LT5kb21pZF9iaXRtYXAsIG5yX2RvbSwgaSsxKTsNCj4g
ICAgICB9DQo+IA0KPiAtICAgIGRwcmludGsoWEVOTE9HX0VSUiBWVERQUkVGSVgsDQo+IC0gICAg
ICAgICAgICAiQ2Fubm90IGdldCB2YWxpZCBpb21tdSBkb21pZDogZG9taWQ9JWQgaW9tbXUtPmlu
ZGV4PSVkXG4iLA0KPiAtICAgICAgICAgICAgZC0+ZG9tYWluX2lkLCBpb21tdS0+aW5kZXgpOw0K
PiArICAgIGlmICggIWQtPmlzX2R5aW5nICkNCj4gKyAgICAgICAgZHByaW50ayhYRU5MT0dfRVJS
IFZURFBSRUZJWCwNCj4gKyAgICAgICAgICAgICAgICAiQ2Fubm90IGdldCB2YWxpZCBpb21tdSAl
dSBkb21pZDogJXBkXG4iLA0KPiArICAgICAgICAgICAgICAgIGlvbW11LT5pbmRleCwgZCk7DQo+
ICsNCj4gICAgICByZXR1cm4gLTE7DQo+ICB9DQo+IA0KPiBAQCAtMTQ3LDYgKzE0OSwxNyBAQCBz
dGF0aWMgaW50IGNvbnRleHRfZ2V0X2RvbWFpbl9pZChzdHJ1Y3QNCj4gICAgICByZXR1cm4gZG9t
aWQ7DQo+ICB9DQo+IA0KPiArc3RhdGljIHZvaWQgY2xlYW51cF9kb21pZF9tYXAoc3RydWN0IGRv
bWFpbiAqZG9tYWluLCBzdHJ1Y3QgdnRkX2lvbW11DQo+ICppb21tdSkNCj4gK3sNCj4gKyAgICBp
bnQgaW9tbXVfZG9taWQgPSBkb21haW5faW9tbXVfZG9taWQoZG9tYWluLCBpb21tdSk7DQo+ICsN
Cj4gKyAgICBpZiAoIGlvbW11X2RvbWlkID49IDAgKQ0KPiArICAgIHsNCj4gKyAgICAgICAgY2xl
YXJfYml0KGlvbW11X2RvbWlkLCBpb21tdS0+ZG9taWRfYml0bWFwKTsNCj4gKyAgICAgICAgaW9t
bXUtPmRvbWlkX21hcFtpb21tdV9kb21pZF0gPSAwOw0KPiArICAgIH0NCj4gK30NCj4gKw0KPiAg
c3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6
ZSkNCj4gIHsNCj4gICAgICBzdGF0aWMgdW5zaWduZWQgbG9uZyBjbGZsdXNoX3NpemUgPSAwOw0K
PiBAQCAtMTcyNCw2ICsxNzM3LDkgQEAgc3RhdGljIGludCBkb21haW5fY29udGV4dF91bm1hcChz
dHJ1Y3QgZA0KPiAgICAgICAgICBnb3RvIG91dDsNCj4gICAgICB9DQo+IA0KPiArICAgIGlmICgg
cmV0ICkNCj4gKyAgICAgICAgZ290byBvdXQ7DQo+ICsNCj4gICAgICAvKg0KPiAgICAgICAqIGlm
IG5vIG90aGVyIGRldmljZXMgdW5kZXIgdGhlIHNhbWUgaW9tbXUgb3duZWQgYnkgdGhpcyBkb21h
aW4sDQo+ICAgICAgICogY2xlYXIgaW9tbXUgaW4gaW9tbXVfYml0bWFwIGFuZCBjbGVhciBkb21h
aW5faWQgaW4gZG9taWRfYml0bXANCj4gQEAgLTE3NDMsMTkgKzE3NTksOCBAQCBzdGF0aWMgaW50
IGRvbWFpbl9jb250ZXh0X3VubWFwKHN0cnVjdCBkDQo+IA0KPiAgICAgIGlmICggZm91bmQgPT0g
MCApDQo+ICAgICAgew0KPiAtICAgICAgICBpbnQgaW9tbXVfZG9taWQ7DQo+IC0NCj4gICAgICAg
ICAgY2xlYXJfYml0KGlvbW11LT5pbmRleCwgJmRvbV9pb21tdShkb21haW4pLQ0KPiA+YXJjaC52
dGQuaW9tbXVfYml0bWFwKTsNCj4gLQ0KPiAtICAgICAgICBpb21tdV9kb21pZCA9IGRvbWFpbl9p
b21tdV9kb21pZChkb21haW4sIGlvbW11KTsNCj4gLSAgICAgICAgaWYgKCBpb21tdV9kb21pZCA9
PSAtMSApDQo+IC0gICAgICAgIHsNCj4gLSAgICAgICAgICAgIHJldCA9IC1FSU5WQUw7DQo+IC0g
ICAgICAgICAgICBnb3RvIG91dDsNCj4gLSAgICAgICAgfQ0KPiAtDQo+IC0gICAgICAgIGNsZWFy
X2JpdChpb21tdV9kb21pZCwgaW9tbXUtPmRvbWlkX2JpdG1hcCk7DQo+IC0gICAgICAgIGlvbW11
LT5kb21pZF9tYXBbaW9tbXVfZG9taWRdID0gMDsNCj4gKyAgICAgICAgY2xlYW51cF9kb21pZF9t
YXAoZG9tYWluLCBpb21tdSk7DQo+ICAgICAgfQ0KPiANCj4gIG91dDoNCj4gQEAgLTE3NzUsNiAr
MTc4MCw3IEBAIHN0YXRpYyB2b2lkIGlvbW11X2RvbWFpbl90ZWFyZG93bihzdHJ1Y3QNCj4gIHsN
Cj4gICAgICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsNCj4gICAgICBz
dHJ1Y3QgbWFwcGVkX3JtcnIgKm1ybXJyLCAqdG1wOw0KPiArICAgIGNvbnN0IHN0cnVjdCBhY3Bp
X2RyaGRfdW5pdCAqZHJoZDsNCj4gDQo+ICAgICAgaWYgKCBsaXN0X2VtcHR5KCZhY3BpX2RyaGRf
dW5pdHMpICkNCj4gICAgICAgICAgcmV0dXJuOw0KPiBAQCAtMTc4Niw2ICsxNzkyLDkgQEAgc3Rh
dGljIHZvaWQgaW9tbXVfZG9tYWluX3RlYXJkb3duKHN0cnVjdA0KPiAgICAgIH0NCj4gDQo+ICAg
ICAgQVNTRVJUKCFoZC0+YXJjaC52dGQucGdkX21hZGRyKTsNCj4gKw0KPiArICAgIGZvcl9lYWNo
X2RyaGRfdW5pdCAoIGRyaGQgKQ0KPiArICAgICAgICBjbGVhbnVwX2RvbWlkX21hcChkLCBkcmhk
LT5pb21tdSk7DQo+ICB9DQo+IA0KPiAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW50ZWxfaW9t
bXVfbWFwX3BhZ2Uoc3RydWN0IGRvbWFpbiAqZCwgZGZuX3QgZGZuLA0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:26:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 05:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146377.269320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHsy-00009H-5X; Thu, 24 Jun 2021 05:26:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146377.269320; Thu, 24 Jun 2021 05:26:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHsy-00009A-2J; Thu, 24 Jun 2021 05:26:44 +0000
Received: by outflank-mailman (input) for mailman id 146377;
 Thu, 24 Jun 2021 05:26:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GXyq=LS=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lwHsw-000094-Lp
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 05:26:42 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6247a943-3b0e-4157-b73d-9b10f01c7892;
 Thu, 24 Jun 2021 05:26:41 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 22:26:40 -0700
Received: from orsmsx605.amr.corp.intel.com ([10.22.229.18])
 by orsmga003.jf.intel.com with ESMTP; 23 Jun 2021 22:26:40 -0700
Received: from orsmsx607.amr.corp.intel.com (10.22.229.20) by
 ORSMSX605.amr.corp.intel.com (10.22.229.18) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Wed, 23 Jun 2021 22:26:40 -0700
Received: from orsmsx610.amr.corp.intel.com (10.22.229.23) by
 ORSMSX607.amr.corp.intel.com (10.22.229.20) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Wed, 23 Jun 2021 22:26:40 -0700
Received: from orsedg603.ED.cps.intel.com (10.7.248.4) by
 orsmsx610.amr.corp.intel.com (10.22.229.23) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2242.4
 via Frontend Transport; Wed, 23 Jun 2021 22:26:39 -0700
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.106)
 by edgegateway.intel.com (134.134.137.100) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2242.4; Wed, 23 Jun 2021 22:26:39 -0700
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by MW3PR11MB4587.namprd11.prod.outlook.com (2603:10b6:303:58::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Thu, 24 Jun
 2021 05:26:36 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1%12]) with mapi id 15.20.4242.024; Thu, 24 Jun
 2021 05:26:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6247a943-3b0e-4157-b73d-9b10f01c7892
IronPort-SDR: szcB3hnL/FVTFBdCh/BhP9GK0vJ1zZQ6fFitURnykpUzOwM7IGglChAjdGax+7Xk5ZvQlKN/2H
 NeuxJcUZY51g==
X-IronPort-AV: E=McAfee;i="6200,9189,10024"; a="207216314"
X-IronPort-AV: E=Sophos;i="5.83,295,1616482800"; 
   d="scan'208";a="207216314"
IronPort-SDR: I43Owi9I8MA/hGYTbm3W8sUPPIcksCmsgDtFkKeA2umb/RHqsDk1MV32VWi1KmMR9HdjiIrost
 bDxmtjDGgSFA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.83,295,1616482800"; 
   d="scan'208";a="406916552"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SuqeXCJ26dDwO9w8fYLtlzYR73+yzpadYsvkGsRdztLHbGJXVUgDAZIJOaTBvdcmPZW04p+r6C6+eQpsd9yIfdrLKJBqyvMv3Mxbjnq1tjnNwsEC4HFWHKm6YG69tSyKaUBc/T6cEoJNltqJ+N0UQYYJlz0GhQJvwutZo1HqZLjcMd83MK1t64UeOtJG/WPHwjqDWsOaxgDYQ/Z2gRVtprFeGB4Aygs4p2lIX5R95026+s8XZUZDMtv6sEA9LLHSA89h8Gm9c7t/0QrAgauihDZeS3Kc+rOvG09TFrIkYRHr1tMbI+Kygj+Pk/tEX4nHEeduLPNHHgraoeyZQ2oLiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=djNODLt35QTS7t4/ieOj6WwOOJ4gFG8CEHvyfn7Wtwk=;
 b=R17Jc8jviyBSOGeil5OyOY/TaduKjaLW83qR/GDamiNGJlfKgFwIvQJ1npXaSBU4+oBRmxms6YyenS3bbKTyxp7bC2nVFHK4Tqa3Tm9VzxanJDt5eRPz1AXyd8ml5eHh9wgov838G2CMBiNlu0xfdCg4OA1+/AYqE4QoPVLdvQGTlcjCIl12rl8kyc/BnhsdVDIv3wtnrK4F8qBQ3m4yY/ve5LxjX779ej5JAdJsakERQWCYpaCCsaqltwGq6DznEMgbBNqzc10VJIuQG2eD+pLy5H3d1rU9dNyEwvZLsz73M1aOSMz652j63IhFOetE8ql8FvT8UzleyQRbGudgzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=djNODLt35QTS7t4/ieOj6WwOOJ4gFG8CEHvyfn7Wtwk=;
 b=oEZoPKIeidS4MYkk5QnZ5lOlsUTW+lK0Wes2zITq4nA8KfYSEcehxGHbJJZZXTxEwNu7/CUUpm9T5pLwdmWekaCR6D5CZpSFxOPQHDzVbLRuXMHUv7zWzA+r6CRBxBJ+1msyXJYjf8wbLf4ppmDO3c5adq6sw5hrfbdaRMIatYs=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 5/9] VT-d: clear_fault_bits() should clear all fault bits
Thread-Topic: [PATCH 5/9] VT-d: clear_fault_bits() should clear all fault bits
Thread-Index: AQHXXRHg+9thH88smkqcPcG6zIlV0KsiuMOg
Date: Thu, 24 Jun 2021 05:26:36 +0000
Message-ID: <MWHPR11MB18864E1F15A1891D908C9EA98C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <fbb7664b-d55c-f3ce-01b2-e4e379e3780b@suse.com>
In-Reply-To: <fbb7664b-d55c-f3ce-01b2-e4e379e3780b@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.142.21]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: fdc1b9ea-c93c-49b3-1ed1-08d936d0a2e1
x-ms-traffictypediagnostic: MW3PR11MB4587:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <MW3PR11MB4587E1226882006383ACC7368C079@MW3PR11MB4587.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:1388;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: BrKMyS3uhNcGlwlfpNGu4fWox3qOq0b5ifRxpVNVlbjSFatjCMc7OTCPXQdjCi8/1l+YEKJveDwlY/Luesemb3vftANucx1MGgLUOgz0kg94vI6b0v9kEiLzkR4yWWw0+/n54BH9aEc8dYzx5I76lTMW6DG1ISjK4WnsQzEvzj23BcHcpRnVwksKq91OZsd1hWxaNTO2Bx+YgvMWM+/+CVFEwCJDXsabGbBQgLBJVqsDxvBmZHC01PCLUa8BzN8K7ysNgzHjLnyFrG+Fasjbpt9LnHmlP07qWt/DU+5Wzy0K26A6JwzaTaYxTQnooc/jkgpp/ZhjmxzpCq5tGdmHcAXLIInCEYLJhYTTrEwbpDbjSsfwLjM+pWI4QQfNcD+1mzpPwD9NuJuRTxJhFQQR0bHI2I4i5LXEYYfmcBGH33bW23fGiLZZ8BfE5RBvciIEWebAGtNXJNlC7TEzxFIo39pRW3d2Jek7xiJgrgMcbHweXiXyFv+klxeo4ZAVVM8qavXuEWOyM8FqfanMlZMfNGl2w64678O+Kc1MyxzBvwuZrDVDV8bdtXtVjKWFdWm5wPZMzKwzNdCdxuBnn6p2480Kq96SQfTbEcZroF8v5V0=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1886.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(396003)(366004)(136003)(346002)(39860400002)(38100700002)(64756008)(122000001)(66446008)(186003)(7696005)(26005)(478600001)(66476007)(6506007)(71200400001)(66946007)(8676002)(66556008)(55016002)(83380400001)(76116006)(2906002)(110136005)(316002)(86362001)(33656002)(9686003)(4326008)(8936002)(52536014)(54906003)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?utf-8?B?VmhqRWxtTDZNVzBsNXdUd0ZsRnJiNEpZWlRjR1hJR2xqR1plRDlzT1J3V1Rj?=
 =?utf-8?B?Wi9na1Z4a05tbmU2ZnJBNzdTYmMwdW12a2h3eThqditaaVNXWXZCM3ExU0Mr?=
 =?utf-8?B?bkd3RDJlQ0NJK3JRUzFRTkFWOERTdXp5S0RIRFJqUWVsdDhhcFhoNXQyWEo3?=
 =?utf-8?B?MFpuL0hUb0pLd09yK2NpTlV4Z2NEL2x6a0dNS0FxSEMwSjF2R1Y1RFMyMHMy?=
 =?utf-8?B?UlZ0NjlvK052aVNnR2JubndEL1hGeE1JeXVGc1hLcEpLNFRyaFdPSWpvaldi?=
 =?utf-8?B?bnphMzR3ZUh0RGs5M0c1U2NpOHFLMzQxL2lOaERJOFFvRmtZMlZJMU1CMnRo?=
 =?utf-8?B?WFZzZEhBUjFDTm83WjJzTVE3RWFjNDJkRVN5LzFZYVRaVmh0KzVMZWI4bDZN?=
 =?utf-8?B?THRiMGs3dmZQcmlMZ2xYTlhVRHB5TG9hTU1VWk0vb3d0eDVGL1pRTS9mZGQv?=
 =?utf-8?B?a3lnSGQ1VEtNUmVvemwwN2R1YkU4MTJqQVFvN2FHa2xKbGsrTTlyTVROejRs?=
 =?utf-8?B?clJRMmxuTkVheVZEVlNOY0ttbS82RnhGMHpOdU8zTnZ3dktRTzJVMzNPSlRu?=
 =?utf-8?B?QjB2TlloeDU2TjBERlVTSVd2bmQ1eE9sRmx2TDRTTGpjcWgwSUt6TURTdy9W?=
 =?utf-8?B?S2pCU0Q1aVo3UTY5cnIydVpvRXlIbHZwM2l3U3c4U0U2bEZwaHZpWUJMeUk0?=
 =?utf-8?B?ODl5NFRBVE92bHZnUWxBZ0tuUUJqcUVBMGhMV0lxOWFrd1hKSHZJeVU4c0c0?=
 =?utf-8?B?Q3FQUDJEcm1UMm5DYmJ6MElISG16dUJ4alJ2WVhiamFjT0ZZcUVXUllOT1hU?=
 =?utf-8?B?K2RwdDhQWExBYkVnbWVMTDArd2YzNWt5d2lBR09CUkVzMHNsRndVNzJvMUpZ?=
 =?utf-8?B?dXVqdytBM2pVV1l4a3ZITWZZdjV0cHp6VldxYlIvUnU4b0J1QmNWRm9FOXVF?=
 =?utf-8?B?NVNXNGYvMjdwS1VJRWNSamR6anV5SWR6ZWkzckZPeVJVSGJsb2pxVG9RVGNN?=
 =?utf-8?B?UlI2RVExODJsS01FdTFRalNUS3pSd2l6VVdZa2xza3lzUnVmQTJPU3pBQjlM?=
 =?utf-8?B?SU1JK3NRYWlRaXNIK3NwTGNTSlZVc2kyelBpNEMvbGdtSmVnU1JzcmJrVm1P?=
 =?utf-8?B?cWZzd2EzaFp0WEt3aWo3Z3VzMDlhbEN3VDNDY3BjaHNxa29nYkRTNmc2R1lU?=
 =?utf-8?B?czlmb2Z5dXkzUFlQSkhCMG9NUTBHUW94L0NucXNDMVFDOHppb3lRNG5RSVZE?=
 =?utf-8?B?Wk9YeFVtTDBSQkgrY245OGZEOVBXQXBHQ1RCZ1dybmc4N25McnlFN3lOL1ow?=
 =?utf-8?B?WXJ1eHFsc3MrWGFYRkdwd282UjRJalk1UUZNSk5iWFpGQWVxOXdzakVpQmpq?=
 =?utf-8?B?ZHIxLzBDWWlOeHJKSldicXRHcC9LM0N3OG9UWTZzMHhLSlJTNUtPeDFNanhC?=
 =?utf-8?B?TnVPS3pXMFpjTEd4TE5jaWVmQ2RMeWVqVDZQYUlmT016TVBlejBuOVhaVmM3?=
 =?utf-8?B?S0tYMEIxalF3aWhOczB4RVZSUlNoMmdXWGFOODZhMzcrMHhpdFFZd3puWXVn?=
 =?utf-8?B?K3NxS2RkdGROSi81K0pIcGZMV2ppMzJWWjZGSHFnbjg3RnppaWorc3l2Q3h1?=
 =?utf-8?B?MDZScU10dE4vYTY3cDRVaUxEZlRXc1FDK3dJL254clM0Wkw0OWVCb3l6czNx?=
 =?utf-8?B?N3d3TDFIMldBWDRLeUlGTFpKTnFRb1RXM0FvMnVkcFpSZTFBNDM1dzhXYnNE?=
 =?utf-8?Q?Tf4GuTavL/3SjSFp6RirhjPXmefKAVsV4DLDV0g?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fdc1b9ea-c93c-49b3-1ed1-08d936d0a2e1
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Jun 2021 05:26:36.1546
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yT1t1udyINT3qpuBjeRRL1OyMcMEhHpx/URW2B1yf2vVk/CIinNDnppD9jwKP83NdKw6lkFlaoKBYiScVCceUg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR11MB4587
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 9, 2021 5:29 PM
> 
> If there is any way for one fault to be left set in the recording
> registers, there's no reason there couldn't also be multiple ones. If
> PPF is set (being the OR of all F fields), simply loop over the entire
> range of fault recording registers, clearing F everywhere.
> 
> Since PPF is a r/o bit, also remove it from DMA_FSTS_FAULTS (arguably
> the constant's name is ambiguous as well).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -2094,13 +2094,23 @@ static int __hwdom_init setup_hwdom_devi
> 
>  void clear_fault_bits(struct vtd_iommu *iommu)
>  {
> -    u64 val;
>      unsigned long flags;
> 
>      spin_lock_irqsave(&iommu->register_lock, flags);
> -    val = dmar_readq(iommu->reg, cap_fault_reg_offset(iommu->cap) + 8);
> -    dmar_writeq(iommu->reg, cap_fault_reg_offset(iommu->cap) + 8, val);
> +
> +    if ( dmar_readl(iommu->reg, DMAR_FSTS_REG) & DMA_FSTS_PPF )
> +    {
> +        unsigned int reg = cap_fault_reg_offset(iommu->cap);
> +        unsigned int end = reg + cap_num_fault_regs(iommu->cap);
> +
> +        do {
> +           dmar_writel(iommu->reg, reg + 12, DMA_FRCD_F);
> +           reg += PRIMARY_FAULT_REG_LEN;
> +        } while ( reg < end );
> +    }
> +
>      dmar_writel(iommu->reg, DMAR_FSTS_REG, DMA_FSTS_FAULTS);
> +
>      spin_unlock_irqrestore(&iommu->register_lock, flags);
>  }
> 
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -174,9 +174,8 @@
>  #define DMA_FSTS_IQE (1u << 4)
>  #define DMA_FSTS_ICE (1u << 5)
>  #define DMA_FSTS_ITE (1u << 6)
> -#define DMA_FSTS_FAULTS (DMA_FSTS_PFO | DMA_FSTS_PPF | DMA_FSTS_AFO | \
> -                         DMA_FSTS_APF | DMA_FSTS_IQE | DMA_FSTS_ICE | \
> -                         DMA_FSTS_ITE)
> +#define DMA_FSTS_FAULTS (DMA_FSTS_PFO | DMA_FSTS_AFO | DMA_FSTS_APF | \
> +                         DMA_FSTS_IQE | DMA_FSTS_ICE | DMA_FSTS_ITE)
>  #define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
> 
>  /* FRCD_REG, 32 bits access */
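The register sweep in the patch above writes the F bit into the high dword (offset +12) of each 128-bit fault recording register, stepping by PRIMARY_FAULT_REG_LEN. A minimal host-side sketch of that offset arithmetic, with the constants treated as illustrative assumptions (frcd_f_offsets() is a made-up helper, not Xen code, and cap_num_fault_regs() is modelled here as a plain register count converted to a byte length):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the VT-d constants used above; 16 bytes matches the
 * architected 128-bit fault recording register size. */
#define PRIMARY_FAULT_REG_LEN 16u
#define DMA_FRCD_F            (1u << 31)   /* r/w1c F bit, top dword */

/* Collect the byte offsets the do/while loop in clear_fault_bits()
 * would write DMA_FRCD_F to: offset +12 of each fault recording
 * register, starting at 'base', for 'nr' registers. */
static size_t frcd_f_offsets(unsigned int base, unsigned int nr,
                             unsigned int *offs, size_t cap)
{
    unsigned int reg = base;
    unsigned int end = base + nr * PRIMARY_FAULT_REG_LEN;
    size_t n = 0;

    do {
        if ( n < cap )
            offs[n] = reg + 12;   /* dword holding the F bit */
        ++n;
        reg += PRIMARY_FAULT_REG_LEN;
    } while ( reg < end );

    return n;
}
```

For example, with the fault registers at offset 0x200 and four registers present, the sweep touches 0x20c, 0x21c, 0x22c and 0x23c, one write per register.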


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:29:03 2021
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 6/9] VT-d: don't lose errors when flushing TLBs on
 multiple IOMMUs
Thread-Topic: [PATCH 6/9] VT-d: don't lose errors when flushing TLBs on
 multiple IOMMUs
Thread-Index: AQHXXRH0ckwXa9bz/0iMr111MgNGbqsiuVNg
Date: Thu, 24 Jun 2021 05:28:44 +0000
Message-ID: <MWHPR11MB18868E51563CA138A8AC660A8C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <8a4f355e-c381-412c-8949-061851d0f7e2@suse.com>
In-Reply-To: <8a4f355e-c381-412c-8949-061851d0f7e2@suse.com>

> From: Jan Beulich
> Sent: Wednesday, June 9, 2021 5:29 PM
> 
> While no longer an immediate problem with flushes no longer timing out,
> errors (if any) get properly reported by iommu_flush_iotlb_{dsi,psi}().
> Overwriting such an error with, perhaps, a success indicator received
> from another IOMMU will misguide callers. Record the first error, but
> don't bail from the loop (such that further necessary invalidation gets
> carried out on a best effort basis).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -643,7 +643,7 @@ static int __must_check iommu_flush_iotl
>      struct vtd_iommu *iommu;
>      bool_t flush_dev_iotlb;
>      int iommu_domid;
> -    int rc = 0;
> +    int ret = 0;
> 
>      /*
>       * No need pcidevs_lock here because we have flush
> @@ -651,6 +651,8 @@ static int __must_check iommu_flush_iotl
>       */
>      for_each_drhd_unit ( drhd )
>      {
> +        int rc;
> +
>          iommu = drhd->iommu;
> 
>          if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) )
> @@ -673,13 +675,12 @@ static int __must_check iommu_flush_iotl
>                                          flush_dev_iotlb);
> 
>          if ( rc > 0 )
> -        {
>              iommu_flush_write_buffer(iommu);
> -            rc = 0;
> -        }
> +        else if ( !ret )
> +            ret = rc;
>      }
> 
> -    return rc;
> +    return ret;
>  }
> 
>  static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> 
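The error-aggregation convention the patch above switches to (latch the first negative return, keep iterating, treat positive returns as a write-buffer-flush request) can be shown with a small stand-alone sketch; flush_all() and the array of fabricated per-unit return values are illustrative stand-ins, not Xen code:

```c
#include <assert.h>

/* Sketch of "record the first error, but don't bail from the loop":
 * 'rcs' plays the role of iommu_flush_iotlb_{dsi,psi}() results for
 * each DRHD unit. */
static int flush_all(const int *rcs, unsigned int n)
{
    int ret = 0;

    for ( unsigned int i = 0; i < n; i++ )
    {
        int rc = rcs[i];

        if ( rc > 0 )
            continue;     /* would call iommu_flush_write_buffer() */

        if ( !ret )
            ret = rc;     /* latch first error; later units still run */
    }

    return ret;
}
```

With results { 0, -5, 1, -22 } the later -22 never overwrites the first error -5, yet all four units are still visited.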


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:32:04 2021
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 7/9] VT-d: centralize mapping of QI entries
Thread-Topic: [PATCH 7/9] VT-d: centralize mapping of QI entries
Thread-Index: AQHXXRH9xEyZTXpMY0a5atXgjiDN6KsiujIA
Date: Thu, 24 Jun 2021 05:31:55 +0000
Message-ID: <MWHPR11MB18865ECDDB47B086ECC377A38C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <b1aba243-e05e-1f50-d85d-00f60703b62b@suse.com>
In-Reply-To: <b1aba243-e05e-1f50-d85d-00f60703b62b@suse.com>

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 9, 2021 5:30 PM
> 
> Introduce a helper function to reduce redundancy. Take the opportunity
> to express the logic without using the somewhat odd QINVAL_ENTRY_ORDER.
> Also take the opportunity to uniformly unmap after updating queue tail
> and dropping the lock (like was done so far only by
> queue_invalidate_context_sync()).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> I wonder though whether we wouldn't be better off permanently mapping
> the queue(s).
> 
> --- a/xen/drivers/passthrough/vtd/qinval.c
> +++ b/xen/drivers/passthrough/vtd/qinval.c
> @@ -69,6 +69,16 @@ static void qinval_update_qtail(struct v
>      dmar_writel(iommu->reg, DMAR_IQT_REG, val << QINVAL_INDEX_SHIFT);
>  }
> 
> +static struct qinval_entry *qi_map_entry(const struct vtd_iommu *iommu,
> +                                         unsigned int index)
> +{
> +    paddr_t base = iommu->qinval_maddr +
> +                   ((index * sizeof(struct qinval_entry)) & PAGE_MASK);
> +    struct qinval_entry *entries = map_vtd_domain_page(base);
> +
> +    return &entries[index % (PAGE_SIZE / sizeof(*entries))];
> +}
> +
>  static int __must_check queue_invalidate_context_sync(struct vtd_iommu *iommu,
>                                                        u16 did, u16 source_id,
>                                                        u8 function_mask,
> @@ -76,15 +86,11 @@ static int __must_check queue_invalidate
>  {
>      unsigned long flags;
>      unsigned int index;
> -    u64 entry_base;
> -    struct qinval_entry *qinval_entry, *qinval_entries;
> +    struct qinval_entry *qinval_entry;
> 
>      spin_lock_irqsave(&iommu->register_lock, flags);
>      index = qinval_next_index(iommu);
> -    entry_base = iommu->qinval_maddr +
> -                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
> -    qinval_entries = map_vtd_domain_page(entry_base);
> -    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
> +    qinval_entry = qi_map_entry(iommu, index);
> 
>      qinval_entry->q.cc_inv_dsc.lo.type = TYPE_INVAL_CONTEXT;
>      qinval_entry->q.cc_inv_dsc.lo.granu = granu;
> @@ -98,7 +104,7 @@ static int __must_check queue_invalidate
>      qinval_update_qtail(iommu, index);
>      spin_unlock_irqrestore(&iommu->register_lock, flags);
> 
> -    unmap_vtd_domain_page(qinval_entries);
> +    unmap_vtd_domain_page(qinval_entry);
> 
>      return invalidate_sync(iommu);
>  }
> @@ -110,15 +116,11 @@ static int __must_check queue_invalidate
>  {
>      unsigned long flags;
>      unsigned int index;
> -    u64 entry_base;
> -    struct qinval_entry *qinval_entry, *qinval_entries;
> +    struct qinval_entry *qinval_entry;
> 
>      spin_lock_irqsave(&iommu->register_lock, flags);
>      index = qinval_next_index(iommu);
> -    entry_base = iommu->qinval_maddr +
> -                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
> -    qinval_entries = map_vtd_domain_page(entry_base);
> -    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
> +    qinval_entry = qi_map_entry(iommu, index);
> 
>      qinval_entry->q.iotlb_inv_dsc.lo.type = TYPE_INVAL_IOTLB;
>      qinval_entry->q.iotlb_inv_dsc.lo.granu = granu;
> @@ -133,10 +135,11 @@ static int __must_check queue_invalidate
>      qinval_entry->q.iotlb_inv_dsc.hi.res_1 = 0;
>      qinval_entry->q.iotlb_inv_dsc.hi.addr = addr >> PAGE_SHIFT_4K;
> 
> -    unmap_vtd_domain_page(qinval_entries);
>      qinval_update_qtail(iommu, index);
>      spin_unlock_irqrestore(&iommu->register_lock, flags);
> 
> +    unmap_vtd_domain_page(qinval_entry);
> +
>      return invalidate_sync(iommu);
>  }
> 
> @@ -147,17 +150,13 @@ static int __must_check queue_invalidate
>      static DEFINE_PER_CPU(uint32_t, poll_slot);
>      unsigned int index;
>      unsigned long flags;
> -    u64 entry_base;
> -    struct qinval_entry *qinval_entry, *qinval_entries;
> +    struct qinval_entry *qinval_entry;
>      uint32_t *this_poll_slot = &this_cpu(poll_slot);
> 
>      spin_lock_irqsave(&iommu->register_lock, flags);
>      ACCESS_ONCE(*this_poll_slot) = QINVAL_STAT_INIT;
>      index = qinval_next_index(iommu);
> -    entry_base = iommu->qinval_maddr +
> -                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
> -    qinval_entries = map_vtd_domain_page(entry_base);
> -    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
> +    qinval_entry = qi_map_entry(iommu, index);
> 
>      qinval_entry->q.inv_wait_dsc.lo.type = TYPE_INVAL_WAIT;
>      qinval_entry->q.inv_wait_dsc.lo.iflag = iflag;
> @@ -167,10 +166,11 @@ static int __must_check queue_invalidate
>      qinval_entry->q.inv_wait_dsc.lo.sdata = QINVAL_STAT_DONE;
>      qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(this_poll_slot);
> 
> -    unmap_vtd_domain_page(qinval_entries);
>      qinval_update_qtail(iommu, index);
>      spin_unlock_irqrestore(&iommu->register_lock, flags);
> 
> +    unmap_vtd_domain_page(qinval_entry);
> +
>      /* Now we don't support interrupt method */
>      if ( sw )
>      {
> @@ -246,16 +246,12 @@ int qinval_device_iotlb_sync(struct vtd_
>  {
>      unsigned long flags;
>      unsigned int index;
> -    u64 entry_base;
> -    struct qinval_entry *qinval_entry, *qinval_entries;
> +    struct qinval_entry *qinval_entry;
> 
>      ASSERT(pdev);
>      spin_lock_irqsave(&iommu->register_loc
aywgZmxhZ3MpOw0KPiAgICAgIGluZGV4ID0gcWludmFsX25leHRfaW5kZXgoaW9tbXUpOw0KPiAt
ICAgIGVudHJ5X2Jhc2UgPSBpb21tdS0+cWludmFsX21hZGRyICsNCj4gLSAgICAgICAgICAgICAg
ICAgKChpbmRleCA+PiBRSU5WQUxfRU5UUllfT1JERVIpIDw8IFBBR0VfU0hJRlQpOw0KPiAtICAg
IHFpbnZhbF9lbnRyaWVzID0gbWFwX3Z0ZF9kb21haW5fcGFnZShlbnRyeV9iYXNlKTsNCj4gLSAg
ICBxaW52YWxfZW50cnkgPSAmcWludmFsX2VudHJpZXNbaW5kZXggJSAoMSA8PCBRSU5WQUxfRU5U
UllfT1JERVIpXTsNCj4gKyAgICBxaW52YWxfZW50cnkgPSBxaV9tYXBfZW50cnkoaW9tbXUsIGlu
ZGV4KTsNCj4gDQo+ICAgICAgcWludmFsX2VudHJ5LT5xLmRldl9pb3RsYl9pbnZfZHNjLmxvLnR5
cGUgPSBUWVBFX0lOVkFMX0RFVklDRV9JT1RMQjsNCj4gICAgICBxaW52YWxfZW50cnktPnEuZGV2
X2lvdGxiX2ludl9kc2MubG8ucmVzXzEgPSAwOw0KPiBAQCAtMjY4LDEwICsyNjQsMTEgQEAgaW50
IHFpbnZhbF9kZXZpY2VfaW90bGJfc3luYyhzdHJ1Y3QgdnRkXw0KPiAgICAgIHFpbnZhbF9lbnRy
eS0+cS5kZXZfaW90bGJfaW52X2RzYy5oaS5yZXNfMSA9IDA7DQo+ICAgICAgcWludmFsX2VudHJ5
LT5xLmRldl9pb3RsYl9pbnZfZHNjLmhpLmFkZHIgPSBhZGRyID4+IFBBR0VfU0hJRlRfNEs7DQo+
IA0KPiAtICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShxaW52YWxfZW50cmllcyk7DQo+ICAgICAg
cWludmFsX3VwZGF0ZV9xdGFpbChpb21tdSwgaW5kZXgpOw0KPiAgICAgIHNwaW5fdW5sb2NrX2ly
cXJlc3RvcmUoJmlvbW11LT5yZWdpc3Rlcl9sb2NrLCBmbGFncyk7DQo+IA0KPiArICAgIHVubWFw
X3Z0ZF9kb21haW5fcGFnZShxaW52YWxfZW50cnkpOw0KPiArDQo+ICAgICAgcmV0dXJuIGRldl9p
bnZhbGlkYXRlX3N5bmMoaW9tbXUsIHBkZXYsIGRpZCk7DQo+ICB9DQo+IA0KPiBAQCAtMjgwLDE2
ICsyNzcsMTIgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgcXVldWVfaW52YWxpZGF0ZQ0KPiAg
ew0KPiAgICAgIHVuc2lnbmVkIGxvbmcgZmxhZ3M7DQo+ICAgICAgdW5zaWduZWQgaW50IGluZGV4
Ow0KPiAtICAgIHU2NCBlbnRyeV9iYXNlOw0KPiAtICAgIHN0cnVjdCBxaW52YWxfZW50cnkgKnFp
bnZhbF9lbnRyeSwgKnFpbnZhbF9lbnRyaWVzOw0KPiArICAgIHN0cnVjdCBxaW52YWxfZW50cnkg
KnFpbnZhbF9lbnRyeTsNCj4gICAgICBpbnQgcmV0Ow0KPiANCj4gICAgICBzcGluX2xvY2tfaXJx
c2F2ZSgmaW9tbXUtPnJlZ2lzdGVyX2xvY2ssIGZsYWdzKTsNCj4gICAgICBpbmRleCA9IHFpbnZh
bF9uZXh0X2luZGV4KGlvbW11KTsNCj4gLSAgICBlbnRyeV9iYXNlID0gaW9tbXUtPnFpbnZhbF9t
YWRkciArDQo+IC0gICAgICAgICAgICAgICAgICgoaW5kZXggPj4gUUlOVkFMX0VOVFJZX09SREVS
KSA8PCBQQUdFX1NISUZUKTsNCj4gLSAgICBxaW52YWxfZW50cmllcyA9IG1hcF92dGRfZG9tYWlu
X3BhZ2UoZW50cnlfYmFzZSk7DQo+IC0gICAgcWludmFsX2VudHJ5ID0gJnFpbnZhbF9lbnRyaWVz
W2luZGV4ICUgKDEgPDwgUUlOVkFMX0VOVFJZX09SREVSKV07DQo+ICsgICAgcWludmFsX2VudHJ5
ID0gcWlfbWFwX2VudHJ5KGlvbW11LCBpbmRleCk7DQo+IA0KPiAgICAgIHFpbnZhbF9lbnRyeS0+
cS5pZWNfaW52X2RzYy5sby50eXBlID0gVFlQRV9JTlZBTF9JRUM7DQo+ICAgICAgcWludmFsX2Vu
dHJ5LT5xLmllY19pbnZfZHNjLmxvLmdyYW51ID0gZ3JhbnU7DQo+IEBAIC0yOTksMTAgKzI5Miwx
MSBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBxdWV1ZV9pbnZhbGlkYXRlDQo+ICAgICAgcWlu
dmFsX2VudHJ5LT5xLmllY19pbnZfZHNjLmxvLnJlc18yID0gMDsNCj4gICAgICBxaW52YWxfZW50
cnktPnEuaWVjX2ludl9kc2MuaGkucmVzID0gMDsNCj4gDQo+IC0gICAgdW5tYXBfdnRkX2RvbWFp
bl9wYWdlKHFpbnZhbF9lbnRyaWVzKTsNCj4gICAgICBxaW52YWxfdXBkYXRlX3F0YWlsKGlvbW11
LCBpbmRleCk7DQo+ICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmaW9tbXUtPnJlZ2lzdGVy
X2xvY2ssIGZsYWdzKTsNCj4gDQo+ICsgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHFpbnZhbF9l
bnRyeSk7DQo+ICsNCj4gICAgICByZXQgPSBpbnZhbGlkYXRlX3N5bmMoaW9tbXUpOw0KPiANCj4g
ICAgICAvKg0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:33:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 05:33:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146393.269353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHz9-0002nz-H4; Thu, 24 Jun 2021 05:33:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146393.269353; Thu, 24 Jun 2021 05:33:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwHz9-0002ns-DU; Thu, 24 Jun 2021 05:33:07 +0000
Received: by outflank-mailman (input) for mailman id 146393;
 Thu, 24 Jun 2021 05:33:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GXyq=LS=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lwHz8-0002nm-Cb
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 05:33:06 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76e5b4da-2205-4979-bcbd-4c0a97999940;
 Thu, 24 Jun 2021 05:33:03 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jun 2021 22:33:02 -0700
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by fmsmga001.fm.intel.com with ESMTP; 23 Jun 2021 22:33:02 -0700
Received: from fmsmsx602.amr.corp.intel.com (10.18.126.82) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2242.4; Wed, 23 Jun 2021 22:33:01 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2242.4
 via Frontend Transport; Wed, 23 Jun 2021 22:33:01 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.177)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2242.4; Wed, 23 Jun 2021 22:33:01 -0700
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by CO1PR11MB4833.namprd11.prod.outlook.com (2603:10b6:303:99::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Thu, 24 Jun
 2021 05:32:59 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::6597:eb05:c507:c6c1%12]) with mapi id 15.20.4242.024; Thu, 24 Jun
 2021 05:32:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76e5b4da-2205-4979-bcbd-4c0a97999940
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH 8/9] VT-d: drop/move a few QI related constants
Thread-Topic: [PATCH 8/9] VT-d: drop/move a few QI related constants
Thread-Index: AQHXXRIIGHcVQd+MLk6mkVDZqL+U3Ksiuo/g
Date: Thu, 24 Jun 2021 05:32:59 +0000
Message-ID: <MWHPR11MB18861EF0EFA04CAE5A7AC7C88C079@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1b8558d4-42cc-bd68-e6c8-138f40f81e1c@suse.com>
In-Reply-To: <1b8558d4-42cc-bd68-e6c8-138f40f81e1c@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 9, 2021 5:30 PM
> 
> Replace uses of QINVAL_ENTRY_ORDER and QINVAL_INDEX_SHIFT, such that
> the constants can be dropped. Move the remaining QINVAL_* ones to the
> single source file using them.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -451,17 +451,6 @@ struct qinval_entry {
>      }q;
>  };
> 
> -/* Each entry is 16 bytes, so 2^8 entries per page */
> -#define QINVAL_ENTRY_ORDER  ( PAGE_SHIFT - 4 )
> -#define QINVAL_MAX_ENTRY_NR (1u << (7 + QINVAL_ENTRY_ORDER))
> -
> -/* Status data flag */
> -#define QINVAL_STAT_INIT  0
> -#define QINVAL_STAT_DONE  1
> -
> -/* Queue invalidation head/tail shift */
> -#define QINVAL_INDEX_SHIFT 4
> -
>  #define TYPE_INVAL_CONTEXT      0x1
>  #define TYPE_INVAL_IOTLB        0x2
>  #define TYPE_INVAL_DEVICE_IOTLB 0x3
> --- a/xen/drivers/passthrough/vtd/qinval.c
> +++ b/xen/drivers/passthrough/vtd/qinval.c
> @@ -29,6 +29,13 @@
>  #include "extern.h"
>  #include "../ats.h"
> 
> +/* Each entry is 16 bytes, and there can be up to 2^7 pages. */
> +#define QINVAL_MAX_ENTRY_NR (1u << (7 + PAGE_SHIFT_4K - 4))
> +
> +/* Status data flag */
> +#define QINVAL_STAT_INIT  0
> +#define QINVAL_STAT_DONE  1
> +
>  static unsigned int __read_mostly qi_pg_order;
>  static unsigned int __read_mostly qi_entry_nr;
> 
> @@ -45,11 +52,11 @@ static unsigned int qinval_next_index(st
>  {
>      unsigned int tail = dmar_readl(iommu->reg, DMAR_IQT_REG);
> 
> -    tail >>= QINVAL_INDEX_SHIFT;
> +    tail /= sizeof(struct qinval_entry);
> 
>      /* (tail+1 == head) indicates a full queue, wait for HW */
>      while ( ((tail + 1) & (qi_entry_nr - 1)) ==
> -            (dmar_readl(iommu->reg, DMAR_IQH_REG) >>
> QINVAL_INDEX_SHIFT) )
> +            (dmar_readl(iommu->reg, DMAR_IQH_REG) / sizeof(struct
> qinval_entry)) )
>      {
>          printk_once(XENLOG_ERR VTDPREFIX " IOMMU#%u: no QI slot
> available\n",
>                      iommu->index);
> @@ -66,7 +73,7 @@ static void qinval_update_qtail(struct v
>      /* Need hold register lock when update tail */
>      ASSERT( spin_is_locked(&iommu->register_lock) );
>      val = (index + 1) & (qi_entry_nr - 1);
> -    dmar_writel(iommu->reg, DMAR_IQT_REG, val << QINVAL_INDEX_SHIFT);
> +    dmar_writel(iommu->reg, DMAR_IQT_REG, val * sizeof(struct
> qinval_entry));
>  }
> 
>  static struct qinval_entry *qi_map_entry(const struct vtd_iommu *iommu,
> @@ -413,17 +420,18 @@ int enable_qinval(struct vtd_iommu *iomm
>               * only one entry left.
>               */
>              BUILD_BUG_ON(CONFIG_NR_CPUS * 2 >= QINVAL_MAX_ENTRY_NR);
> -            qi_pg_order = get_order_from_bytes((num_present_cpus() * 2 + 1)
> <<
> -                                               (PAGE_SHIFT -
> -                                               QINVAL_ENTRY_ORDER));
> -            qi_entry_nr = 1u << (qi_pg_order + QINVAL_ENTRY_ORDER);
> +            qi_pg_order = get_order_from_bytes((num_present_cpus() * 2 + 1) *
> +                                               sizeof(struct qinval_entry));
> +            qi_entry_nr = (PAGE_SIZE << qi_pg_order) /
> +                          sizeof(struct qinval_entry);
> 
>              dprintk(XENLOG_INFO VTDPREFIX,
>                      "QI: using %u-entry ring(s)\n", qi_entry_nr);
>          }
> 
>          iommu->qinval_maddr =
> -            alloc_pgtable_maddr(qi_entry_nr >> QINVAL_ENTRY_ORDER,
> +            alloc_pgtable_maddr(PFN_DOWN(qi_entry_nr *
> +                                         sizeof(struct qinval_entry)),
>                                  iommu->node);
>          if ( iommu->qinval_maddr == 0 )
>          {


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 05:43:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 05:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146399.269363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwI96-0004LX-KQ; Thu, 24 Jun 2021 05:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146399.269363; Thu, 24 Jun 2021 05:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwI96-0004LQ-Gz; Thu, 24 Jun 2021 05:43:24 +0000
Received: by outflank-mailman (input) for mailman id 146399;
 Thu, 24 Jun 2021 05:43:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=55tw=LS=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lwI94-0004LK-S4
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 05:43:22 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a1f9078-82c8-485c-babb-480776fbae31;
 Thu, 24 Jun 2021 05:43:20 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 6EF7567373; Thu, 24 Jun 2021 07:43:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a1f9078-82c8-485c-babb-480776fbae31
Date: Thu, 24 Jun 2021 07:43:15 +0200
From: Christoph Hellwig <hch@lst.de>
To: Qian Cai <quic_qiancai@quicinc.com>
Cc: Will Deacon <will@kernel.org>, Claire Chang <tientzu@chromium.org>,
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	mingo@kernel.org, jxgao@google.com, sstabellini@kernel.org,
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>, daniel@ffwll.ch,
	airlied@linux.ie, maarten.lankhorst@linux.intel.com,
	linuxppc-dev@lists.ozlabs.org, jani.nikula@linux.intel.com,
	Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com,
	bhelgaas@google.com, Dan Williams <dan.j.williams@intel.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, thomas.lendacky@amd.com,
	Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210624054315.GA25381@lst.de>
References: <20210619034043.199220-1-tientzu@chromium.org> <20210619034043.199220-7-tientzu@chromium.org> <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com> <20210623183736.GA472@willie-the-truck> <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
> is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
> 
> is_swiotlb_force_bounce() was the new function introduced in this patch here.
> 
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_bounce;
> +}

To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
turn this into:

	return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;

for a quick debug check?


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 06:06:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 06:06:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146404.269375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwIV5-0006j7-HC; Thu, 24 Jun 2021 06:06:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146404.269375; Thu, 24 Jun 2021 06:06:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwIV5-0006j0-Bv; Thu, 24 Jun 2021 06:06:07 +0000
Received: by outflank-mailman (input) for mailman id 146404;
 Thu, 24 Jun 2021 06:06:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwIV3-0006iu-B0
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 06:06:05 +0000
Received: from mail-pj1-x102b.google.com (unknown [2607:f8b0:4864:20::102b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1370eef6-39c5-45f3-ab48-7649b53915f5;
 Thu, 24 Jun 2021 06:06:02 +0000 (UTC)
Received: by mail-pj1-x102b.google.com with SMTP id g24so2828737pji.4
 for <xen-devel@lists.xenproject.org>; Wed, 23 Jun 2021 23:06:02 -0700 (PDT)
Received: from mail-pg1-f178.google.com (mail-pg1-f178.google.com.
 [209.85.215.178])
 by smtp.gmail.com with ESMTPSA id y20sm1783902pfb.207.2021.06.23.23.06.00
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 23 Jun 2021 23:06:01 -0700 (PDT)
Received: by mail-pg1-f178.google.com with SMTP id y14so3823531pgs.12
 for <xen-devel@lists.xenproject.org>; Wed, 23 Jun 2021 23:06:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1370eef6-39c5-45f3-ab48-7649b53915f5
MIME-Version: 1.0
References: <20210619034043.199220-1-tientzu@chromium.org> <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com> <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com> <20210624054315.GA25381@lst.de>
In-Reply-To: <20210624054315.GA25381@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 24 Jun 2021 14:05:39 +0800
X-Gmail-Original-Message-ID: <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
Message-ID: <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Christoph Hellwig <hch@lst.de>
Cc: Qian Cai <quic_qiancai@quicinc.com>, Will Deacon <will@kernel.org>, 
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, heikki.krogerus@linux.intel.com, 
	thomas.hellstrom@linux.intel.com, peterz@infradead.org, 
	benh@kernel.crashing.org, joonas.lahtinen@linux.intel.com, 
	dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, 
	grant.likely@arm.com, paulus@samba.org, mingo@kernel.org, 
	Jianxiong Gao <jxgao@google.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com, linux-pci@vger.kernel.org, 
	xen-devel@lists.xenproject.org, Thierry Reding <treding@nvidia.com>, 
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com, 
	linux-devicetree <devicetree@vger.kernel.org>, Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, 
	maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org, 
	jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com, 
	Bjorn Helgaas <bhelgaas@google.com>, Dan Williams <dan.j.williams@intel.com>, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Greg KH <gregkh@linuxfoundation.org>, 
	Randy Dunlap <rdunlap@infradead.org>, lkml <linux-kernel@vger.kernel.org>, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tom Lendacky <thomas.lendacky@amd.com>, Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 24, 2021 at 1:43 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
> > is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
> >
> > is_swiotlb_force_bounce() was the new function introduced in this patch here.
> >
> > +static inline bool is_swiotlb_force_bounce(struct device *dev)
> > +{
> > +     return dev->dma_io_tlb_mem->force_bounce;
> > +}
>
> To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
> turn this into:
>
>         return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
>
> for a quick debug check?

I just realized that dma_io_tlb_mem might be NULL, as Christoph
pointed out, since swiotlb might not get initialized.
However, `Unable to handle kernel paging request at virtual address
dfff80000000000e` looks more like the address is garbage rather than
NULL?
I wonder if that's because dev->dma_io_tlb_mem is not assigned
properly (which would mean device_initialize is not called?).


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:22:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146409.269386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJgL-0005jt-1D; Thu, 24 Jun 2021 07:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146409.269386; Thu, 24 Jun 2021 07:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJgK-0005jm-T2; Thu, 24 Jun 2021 07:21:48 +0000
Received: by outflank-mailman (input) for mailman id 146409;
 Thu, 24 Jun 2021 07:21:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJgJ-0005jg-7m
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:21:47 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77837906-0143-41f4-84b6-38507f91367c;
 Thu, 24 Jun 2021 07:21:45 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A9FB82197E;
 Thu, 24 Jun 2021 07:21:44 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 694B611A97;
 Thu, 24 Jun 2021 07:21:44 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id nvMuFogy1GCkewAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:21:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77837906-0143-41f4-84b6-38507f91367c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624519304; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iBMd1VnM1SymcmFQSVpTidkwVIA930Jq4H7AHEVg+ig=;
	b=QmEhebshPKP0XmlJPbZnZ8R4my8g1HcJPxmTHkvIMclMBLHJ9c42a78SJ2m0kndSDj97BE
	qlobRTj9n17V5wR8wyMYA49YAD63ChQrJRmRLfb+n5G4wBqRDSTJXyAe0B0BJMWa0+GAGw
	E3Z8sLYlzAExT+3EKE/tKyDyjeOAI3U=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624519304; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iBMd1VnM1SymcmFQSVpTidkwVIA930Jq4H7AHEVg+ig=;
	b=QmEhebshPKP0XmlJPbZnZ8R4my8g1HcJPxmTHkvIMclMBLHJ9c42a78SJ2m0kndSDj97BE
	qlobRTj9n17V5wR8wyMYA49YAD63ChQrJRmRLfb+n5G4wBqRDSTJXyAe0B0BJMWa0+GAGw
	E3Z8sLYlzAExT+3EKE/tKyDyjeOAI3U=
Subject: Re: [PATCH 02/10] tools/xenstored: Introduce lu_get_connection() and
 use it
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-3-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <e1a65b5b-874c-ae7d-dd7c-9dbf1aea7cbc@suse.com>
Date: Thu, 24 Jun 2021 09:21:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-3-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JrNVaHBkfC78R4iXdvHbqzHJcSDsY0MFe"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JrNVaHBkfC78R4iXdvHbqzHJcSDsY0MFe
Content-Type: multipart/mixed; boundary="80XlrZkyMXvqn2dooROfuTJHoqJoEjsuE";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <e1a65b5b-874c-ae7d-dd7c-9dbf1aea7cbc@suse.com>
Subject: Re: [PATCH 02/10] tools/xenstored: Introduce lu_get_connection() and
 use it
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-3-julien@xen.org>
In-Reply-To: <20210616144324.31652-3-julien@xen.org>

--80XlrZkyMXvqn2dooROfuTJHoqJoEjsuE
Content-Type: multipart/mixed;
 boundary="------------6D551229CDA272F2688C96BD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6D551229CDA272F2688C96BD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>=20
> At the moment, dump_state_buffered_data() takes two connections
> as parameters (one is the connection to dump, the other is the
> connection used to request LU). The naming (c vs conn) doesn't help to
> distinguish them, and this has already led to several mistakes
> while modifying the function.
>=20
> To remove the confusion, introduce a helper lu_get_connection() that
> returns the connection used to request LU and use it
> in place of the existing parameter.
>=20
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------6D551229CDA272F2688C96BD
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6D551229CDA272F2688C96BD--

--80XlrZkyMXvqn2dooROfuTJHoqJoEjsuE--

--JrNVaHBkfC78R4iXdvHbqzHJcSDsY0MFe
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUMocFAwAAAAAACgkQsN6d1ii/Ey9P
BQf9GelxsCP2Ywqzrp4y+3SZCeH0PcBv+D2fTMdytq+ktrYQPLW7EJPKypJl7P6lHxdvdtGtN0jl
WoteWlG2laGbOAnq3FYw4RqGFSyEs2MPeByCIjgojvjEVaRdsX+S/SQJ7IexH5QndmZieYLS5nkC
NX+lD2XL7XpTzHQGBl90SnsCYsBMX6Spu2VY12AvUoBiF/oqjM/pmDwvlGd8RetKZFAPWFxo/MVB
xBWEsVHkHQ7XSk8b0UeR9hPdADy+CkbgvIGW90pN7PPqXU0TSaE+WsoPX7rdVJW7L9IHOlBKfxze
+UOb3mI4xaGMUy1ELJgBZuux6OHB3oqYGuzPiKhLfg==
=Huk+
-----END PGP SIGNATURE-----

--JrNVaHBkfC78R4iXdvHbqzHJcSDsY0MFe--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:32:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146414.269396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJqo-0007CY-16; Thu, 24 Jun 2021 07:32:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146414.269396; Thu, 24 Jun 2021 07:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJqn-0007CR-UR; Thu, 24 Jun 2021 07:32:37 +0000
Received: by outflank-mailman (input) for mailman id 146414;
 Thu, 24 Jun 2021 07:32:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJqm-0007CL-Nq
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:32:36 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 335ab086-46c3-4d6a-a5c6-636657cbd23b;
 Thu, 24 Jun 2021 07:32:36 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 25E242197E;
 Thu, 24 Jun 2021 07:32:35 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E97A011A97;
 Thu, 24 Jun 2021 07:32:34 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id pV9xNxI11GDFAQAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:32:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 335ab086-46c3-4d6a-a5c6-636657cbd23b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624519955; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TpIF/bILtwA0IBDOGphI5lxgdvfBEarVL1t14bm/ApU=;
	b=eVdgNX5LiWCXMCxhdUJzAbSYn+VdhWHEJw4wPEMOARY2Y7S3p4Ois5J+YLWQnedrz3Bbxb
	X020YgHWRorvD+2fVCGRRiUKK9QZgsDqk0UkZ4VxRWbrvIpjeoxP6xH46H/B2VEHp3q7vU
	c6XngP0WWi5Jvyp/y0pohNbXegMN4+c=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624519955; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TpIF/bILtwA0IBDOGphI5lxgdvfBEarVL1t14bm/ApU=;
	b=eVdgNX5LiWCXMCxhdUJzAbSYn+VdhWHEJw4wPEMOARY2Y7S3p4Ois5J+YLWQnedrz3Bbxb
	X020YgHWRorvD+2fVCGRRiUKK9QZgsDqk0UkZ4VxRWbrvIpjeoxP6xH46H/B2VEHp3q7vU
	c6XngP0WWi5Jvyp/y0pohNbXegMN4+c=
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
Date: Thu, 24 Jun 2021 09:32:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-4-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="W0ktchS0A5MkVawIUIt96y0jshRyNBOaC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--W0ktchS0A5MkVawIUIt96y0jshRyNBOaC
Content-Type: multipart/mixed; boundary="U2wKpwtlBTcvdhcInmyT3IOmaaeD89RvH";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
In-Reply-To: <20210616144324.31652-4-julien@xen.org>

--U2wKpwtlBTcvdhcInmyT3IOmaaeD89RvH
Content-Type: multipart/mixed;
 boundary="------------48FEEF55307F5316D5CEDD88"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------48FEEF55307F5316D5CEDD88
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>=20
> call_delayed() currently assumes that conn->in is NULL when
> handling a delayed request. However, the connection is not paused.
> Therefore new requests can be processed and conn->in may be non-NULL
> if we have only received a partial request.
>=20
> Furthermore, as we overwrite conn->in, the current partial request
> will not be transferred. This will result in corrupting the connection.
>=20
> Rather than updating conn->in, stash the LU request in lu_status and
> let each callback for a delayed request update conn->in when
> necessary.
>=20
> To keep a sane interface, the code writing the "OK" response to the
> LU request is moved into xenstored_core.c.
>=20
> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
> Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With the conn parameter dropped from call_delayed, as already
mentioned by Luca, you can add my:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------48FEEF55307F5316D5CEDD88
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------48FEEF55307F5316D5CEDD88--

--U2wKpwtlBTcvdhcInmyT3IOmaaeD89RvH--

--W0ktchS0A5MkVawIUIt96y0jshRyNBOaC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUNRIFAwAAAAAACgkQsN6d1ii/Ey8N
Xgf/btaSB1d4+WNRCDFMT9wx4DLH6rTRv2nMw0lh29THuD/Fj2QuALJF+aAa/ofdE8pBXeyY9jsf
N7UDHUwsyU4VEdl0LajhlfJsdyCcC6lYeB7SJ0q541yQ8bo3uuZAVfdeMPS9fOAYuE+XRqz6BN0a
aQv6jcqmhYSXT9txnCbRdkQ3a+S/YI6oziz4R0cKp0nkiU6EjZQy8wczf9HzYa+mBcbVWtBEcx5u
3QzfMyrUaaHXZDLQt0P5A0fvtDtATHAlg9cjCbCFqnmxkHDVophsdoRSlDIh3DPsMkoCRGpa0u1I
pkMUcHYHnVa0KmvM3Teo6POLDP9kKdCv7+N+QZPnpA==
=NXMg
-----END PGP SIGNATURE-----

--W0ktchS0A5MkVawIUIt96y0jshRyNBOaC--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:34:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:34:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146421.269408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJt3-0007sb-Kw; Thu, 24 Jun 2021 07:34:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146421.269408; Thu, 24 Jun 2021 07:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJt3-0007sU-H2; Thu, 24 Jun 2021 07:34:57 +0000
Received: by outflank-mailman (input) for mailman id 146421;
 Thu, 24 Jun 2021 07:34:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJt3-0007sO-0N
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:34:57 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17c0c82c-e658-49b6-9670-536984581418;
 Thu, 24 Jun 2021 07:34:56 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 718DB21973;
 Thu, 24 Jun 2021 07:34:55 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 45A4E11A97;
 Thu, 24 Jun 2021 07:34:55 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id dNLdD5811GD5AgAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:34:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17c0c82c-e658-49b6-9670-536984581418
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520095; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MF+fRfPfu7cXOssURgQtzRVhV2H+QCioeeDGHNbBZi4=;
	b=L/wJdovz3Ruvo4YQv71kLqf4uw725ORa+4lI2VRAYIrf9DFGYQ0gP8cmjyScKULa56MSqF
	JC634Wrp8HyurSPQPTJfKCQnYJl+Pin4xovSTOWCnRH/7EJ5nzVlhU51dJ1LCYa6/EpKYk
	1W4FpoCho6+zXs7FxGf2FI645acpQbA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520095; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MF+fRfPfu7cXOssURgQtzRVhV2H+QCioeeDGHNbBZi4=;
	b=L/wJdovz3Ruvo4YQv71kLqf4uw725ORa+4lI2VRAYIrf9DFGYQ0gP8cmjyScKULa56MSqF
	JC634Wrp8HyurSPQPTJfKCQnYJl+Pin4xovSTOWCnRH/7EJ5nzVlhU51dJ1LCYa6/EpKYk
	1W4FpoCho6+zXs7FxGf2FI645acpQbA=
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
Message-ID: <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
Date: Thu, 24 Jun 2021 09:34:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="epcqypj5nb8HsgLoqadsZdaziLUHxe3O2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--epcqypj5nb8HsgLoqadsZdaziLUHxe3O2
Content-Type: multipart/mixed; boundary="x4fjtpPDxtWS6C6vTTX4edF0L1fk5lMtr";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
In-Reply-To: <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>

--x4fjtpPDxtWS6C6vTTX4edF0L1fk5lMtr
Content-Type: multipart/mixed;
 boundary="------------5C72CD0924C683B93477CB13"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5C72CD0924C683B93477CB13
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.06.21 09:32, Juergen Gross wrote:
> On 16.06.21 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> call_delayed() currently assumes that conn->in is NULL when
>> handling a delayed request. However, the connection is not paused.
>> Therefore new requests can be processed and conn->in may be non-NULL
>> if we have only received a partial request.
>>
>> Furthermore, as we overwrite conn->in, the current partial request
>> will not be transferred. This will result in corrupting the connection.
>>
>> Rather than updating conn->in, stash the LU request in lu_status and
>> let each callback for a delayed request update conn->in when
>> necessary.
>>
>> To keep a sane interface, the code writing the "OK" response to the
>> LU request is moved into xenstored_core.c.
>>
>> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
>> Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live update")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>=20
> With dropping the conn parameter from call_delayed as already
> mentioned by Luca you can add my:

Oh, please drop my request to delete the conn parameter, as it is being
used in patch 4 again.

>=20
> Reviewed-by: Juergen Gross <jgross@suse.com>

This stands, of course.


Juergen

--------------5C72CD0924C683B93477CB13
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5C72CD0924C683B93477CB13--

--x4fjtpPDxtWS6C6vTTX4edF0L1fk5lMtr--

--epcqypj5nb8HsgLoqadsZdaziLUHxe3O2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUNZ4FAwAAAAAACgkQsN6d1ii/Ey+R
RQgAh5Qy4Wa+anvob+RYc2xFF6Kk4P6ZPecNPul5PUjwIs9OpQm1rQc8TT97WEqNDpB6H1LMX8Oz
pzCYYysyjwUZL8rG17U5WJS/aPGazQqklQ9hnEQkGZQFBDz3tFgWpGb43jw2O6XekJ66++G81eBy
j/uoPomN0iY5/qVQjNlgtywWknHq50tMda56V6laxcZ/YaW0xgqkI+PCnA8OKzpPAvTQPkJ05Q4Y
bALbLxOtvAKAeb1nz+5N+VJoCt+vrWj2cLHQcHCK2+Wv4MA7FKT6uAWrCKsNC49HN8n2tMUOLTn1
xTHqXZHpFIxRaVrE21U/qrHPwNHR0NuXldaBIYDe4A==
=pJdl
-----END PGP SIGNATURE-----

--epcqypj5nb8HsgLoqadsZdaziLUHxe3O2--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:35:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146425.269418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJu0-0008Tr-Tt; Thu, 24 Jun 2021 07:35:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146425.269418; Thu, 24 Jun 2021 07:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJu0-0008Ti-Ql; Thu, 24 Jun 2021 07:35:56 +0000
Received: by outflank-mailman (input) for mailman id 146425;
 Thu, 24 Jun 2021 07:35:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJu0-0008Ta-6x
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:35:56 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cb16e62-4821-416e-b3bf-9f00f672b300;
 Thu, 24 Jun 2021 07:35:55 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BB62A2197E;
 Thu, 24 Jun 2021 07:35:54 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 7F4D611A97;
 Thu, 24 Jun 2021 07:35:54 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id rQtnHdo11GCQAwAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:35:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cb16e62-4821-416e-b3bf-9f00f672b300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0EkdKkisfuHr55+cqW5Usgz9i6J8IrNZ6rHa3/zBZrI=;
	b=e6UsJXxmz42MB+Tgf95HDcvucZkOszdbmnpfmKv2L7a0qJcUWtyJ+RDfodRSG4vH7kqZFA
	1PloioKkrLROggknRyIukOGpQFD69tzKbV7ZmjYLlz97+npWLtQVgt4T2g+qp+ebWvxxET
	50V/KBplVLOOBmBdRTxY8iAjCW7dKmI=
Subject: Re: [PATCH 04/10] tools/xenstored: Limit the number of requests a
 connection can delay
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-5-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <2a086073-5da3-dc80-3b41-ff0e70101c01@suse.com>
Date: Thu, 24 Jun 2021 09:35:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-5-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="dYkN3K2nZFopLrAgrwz2EsJImcWewBBji"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--dYkN3K2nZFopLrAgrwz2EsJImcWewBBji
Content-Type: multipart/mixed; boundary="KzAzcaHVL0ZAugGtJCKsmNPDISQsna4Xg";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <2a086073-5da3-dc80-3b41-ff0e70101c01@suse.com>
Subject: Re: [PATCH 04/10] tools/xenstored: Limit the number of requests a
 connection can delay
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-5-julien@xen.org>
In-Reply-To: <20210616144324.31652-5-julien@xen.org>

--KzAzcaHVL0ZAugGtJCKsmNPDISQsna4Xg
Content-Type: multipart/mixed;
 boundary="------------0970C87F7796C4AAA7E346DE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0970C87F7796C4AAA7E346DE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Currently, only the live-update request can be delayed. The request can
> only be performed by a privileged connection (e.g. dom0), so it is fine
> to have no limit.
>
> In a follow-up patch we will want to delay requests for unprivileged
> connections as well, so it is best to apply a limit.
>
> For now and for simplicity, only a single request can be delayed
> for a given unprivileged connection.
>
> Take the opportunity to tweak the prototype and provide a way to
> bypass the quota check. This is useful when the function is called
> from the restore code.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
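For reference, a minimal sketch of such a quota with a bypass flag (the quota value, field names, and function name are invented here, not taken from the patch): unprivileged connections get at most one delayed request, while privileged connections and the restore path are exempt.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented quota: at most one delayed request per unprivileged conn. */
#define QUOTA_DELAYED_UNPRIV 1

struct connection {
    bool is_privileged;
    unsigned int delayed;    /* requests currently delayed */
};

/* Returns false when the quota forbids delaying another request.
 * no_quota_check lets the restore code re-queue unconditionally. */
static bool delay_request(struct connection *conn, bool no_quota_check)
{
    if (!no_quota_check && !conn->is_privileged &&
        conn->delayed >= QUOTA_DELAYED_UNPRIV)
        return false;        /* quota hit: caller must fail the request */
    conn->delayed++;
    return true;
}
```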

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------0970C87F7796C4AAA7E346DE--

--KzAzcaHVL0ZAugGtJCKsmNPDISQsna4Xg--

--dYkN3K2nZFopLrAgrwz2EsJImcWewBBji
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUNdkFAwAAAAAACgkQsN6d1ii/Ey81
OQf/U7YhwGQqpzA6iYNTUaGGx1aCPs/Jvwqp5CrQQ6CB5RDkR5BetvJskXWIQMyyGEiavJyAdKJl
yjNBtJ0AcRPhfeqeyDaTb32/fhhPoQwXXg7LyTa8tBTbcVFVSukLLoIWvtgCnuAgbeRH3NCXuCLC
fBmCCzoO3AjOSQZkpuYRNNBSCsc7M7e+YFthgyv5Cd8uV9VWM8OpC5rP8SQRSfoWh0MfBLUPBrI0
dnQaX/XjVSyUX0J+c8mGlkFNeyp2i7knTlzBTllKXOXkoUzZ1a53Lo9D42p6UllJTIfonKKA5yKN
ZLYU0dpvK+K+48t1q63YaCwAwc4lH96VkMqpdgjaVw==
=DpdV
-----END PGP SIGNATURE-----

--dYkN3K2nZFopLrAgrwz2EsJImcWewBBji--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:36:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146429.269430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJuk-0000e5-71; Thu, 24 Jun 2021 07:36:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146429.269430; Thu, 24 Jun 2021 07:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJuk-0000dw-3k; Thu, 24 Jun 2021 07:36:42 +0000
Received: by outflank-mailman (input) for mailman id 146429;
 Thu, 24 Jun 2021 07:36:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJui-0000di-Ug
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:36:40 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 503cf7ef-19bd-453c-b397-84263991964f;
 Thu, 24 Jun 2021 07:36:40 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8571521973;
 Thu, 24 Jun 2021 07:36:39 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 56D3111A97;
 Thu, 24 Jun 2021 07:36:39 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id B3cTFAc21GDaAwAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:36:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 503cf7ef-19bd-453c-b397-84263991964f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JVXgPQZH0LpmxnDmfIcO9YJLLb5DZVS5SSClTDu4c3U=;
	b=m0idJospSQo/9JQXDI2DoK8fZjbjGqJxg63hKH/fMywpOBB/eLVB33C5QdxxadqN26qaLR
	TlEhWfYf9Ak1yNDQYZhndAwmk7duifjHFIRgHhjGOLC+EHw7rse1BIrwEeumuQPj5/+iRH
	RqUEmog54qFWcO1sLYCd0NJ6g+70+Oo=
Subject: Re: [PATCH 05/10] tools/xenstored: xenstored_core.h should include
 fcntl.h
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-6-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <96237c71-d414-b115-9d4f-de9cb21fb53e@suse.com>
Date: Thu, 24 Jun 2021 09:36:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-6-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qoOQzKHuC5VPqmgvnBu5tm2rW7Gwx7JB7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qoOQzKHuC5VPqmgvnBu5tm2rW7Gwx7JB7
Content-Type: multipart/mixed; boundary="McygoiuFpiZUD7VuDePEQ6uZlBV02X31N";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <96237c71-d414-b115-9d4f-de9cb21fb53e@suse.com>
Subject: Re: [PATCH 05/10] tools/xenstored: xenstored_core.h should include
 fcntl.h
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-6-julien@xen.org>
In-Reply-To: <20210616144324.31652-6-julien@xen.org>

--McygoiuFpiZUD7VuDePEQ6uZlBV02X31N
Content-Type: multipart/mixed;
 boundary="------------8C2F803569A37B6163F6B95E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8C2F803569A37B6163F6B95E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> xenstored_core.h considers live update unsupported if O_CLOEXEC
> doesn't exist. However, the header doesn't include the one defining
> O_CLOEXEC (i.e. fcntl.h). This means that, depending on the headers
> included, some source files will think live update is not supported.
>
> I am not aware of any issue with the existing code. Therefore this is
> just a latent bug so far.
>
> Prevent any potential issue by including fcntl.h in xenstored_core.h.
>
> Fixes: cd831ee438 ("tools/xenstore: handle CLOEXEC flag for local files
> and pipes")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------8C2F803569A37B6163F6B95E--

--McygoiuFpiZUD7VuDePEQ6uZlBV02X31N--

--qoOQzKHuC5VPqmgvnBu5tm2rW7Gwx7JB7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUNgYFAwAAAAAACgkQsN6d1ii/Ey9q
owgAm2ThRaU5G7uxIKoVENCHUx2B8ptK1ttgC+QMdQI9zlo+iWPel05uOb3zT7xpbzk2zpTdxT/X
Z33tZeIveBMnZfkdemXgu3N/iepM5Yl1blmBXkFUYqG4OgEQCMXJvWHzpkxQ5PHMfO5eWo863yjI
4rB1JxwTSG/DuDlDfRT6sFJEiOa8R0o2AVfjuA1tUleTNva7XkSbSzl+6y/q7nmt1D5/ESc5/Jru
amCu0WT5Hlt56f9Kdu+AkgnBcjkMBVwGVfx9osGH65axgBucV7rR8eUBMdLD5OYTR0yfOmPbho2C
oKS/pgWp8SVC/Qeh76+Nd+wJllf2cbCg674WsTyqbA==
=fCxy
-----END PGP SIGNATURE-----

--qoOQzKHuC5VPqmgvnBu5tm2rW7Gwx7JB7--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:39:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:39:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146436.269441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJxd-0001Lu-Nq; Thu, 24 Jun 2021 07:39:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146436.269441; Thu, 24 Jun 2021 07:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwJxd-0001Ln-J0; Thu, 24 Jun 2021 07:39:41 +0000
Received: by outflank-mailman (input) for mailman id 146436;
 Thu, 24 Jun 2021 07:39:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwJxb-0001Lh-Ui
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:39:39 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43eedfa9-54af-4832-89ba-d3571343ac38;
 Thu, 24 Jun 2021 07:39:38 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C52A41FD66;
 Thu, 24 Jun 2021 07:39:37 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8618D11A97;
 Thu, 24 Jun 2021 07:39:37 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id YxJ/H7k21GDRBQAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:39:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43eedfa9-54af-4832-89ba-d3571343ac38
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520377; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=25UwmIJOXDIhmJyv5n0I5nZf7hU2bre0iKAeuX0TwzQ=;
	b=LmVxR6+pc7fYNVRyUzBi5fRhSh1Q5GmTe3aBmyLZzkEW3zuMUxkC5CKnux90goJmMxA8uY
	tE2aaDOuVbiWA8wnSY3PBRGCPSpS8jWkcdLrrJDrnk5Iv8ldPpDg/sHYYSJNYEtlmqcRBn
	09Rib8Fxu/rB960v19mVBOdbkIlfQRQ=
Subject: Re: [PATCH 06/10] tools/xenstored: Introduce a wrapper for
 conn->funcs->can_{read, write}
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-7-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <bdd6a6a3-b1ab-0629-9873-a474bd33ef66@suse.com>
Date: Thu, 24 Jun 2021 09:39:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-7-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="OCpk3NuKoPLc9NNh0QJfQscCYI61iPFoO"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--OCpk3NuKoPLc9NNh0QJfQscCYI61iPFoO
Content-Type: multipart/mixed; boundary="bfc1YOU1z0js8oRUL84brIkPJ0449K3aF";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <bdd6a6a3-b1ab-0629-9873-a474bd33ef66@suse.com>
Subject: Re: [PATCH 06/10] tools/xenstored: Introduce a wrapper for
 conn->funcs->can_{read, write}
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-7-julien@xen.org>
In-Reply-To: <20210616144324.31652-7-julien@xen.org>

--bfc1YOU1z0js8oRUL84brIkPJ0449K3aF
Content-Type: multipart/mixed;
 boundary="------------E99E79B311ABB28A2A6792FA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E99E79B311ABB28A2A6792FA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Currently, the callbacks can_read and can_write are called directly.
> This doesn't allow us to add generic checks and therefore requires
> duplication.
>
> At the moment, one check that would benefit from being common is
> whether the connection should be ignored. The position is slightly
> different between domain and socket because for the latter we want to
> check the state of the file descriptor first.
>
> In follow-up patches, there will be more potential generic checks.
>
> This patch provides wrappers to read/write a connection and moves
> the ->is_ignored check after the callback for everyone.
>
> This also requires replacing the direct calls to domain_can_read()
> and domain_can_write() with the new wrappers. At the same time,
> both functions can now be static. Note that the implementations need
> to be moved earlier in xenstored_domain.c to avoid declaring
> prototypes.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
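A minimal sketch of the wrapper shape (struct and function names invented, not the actual xenstored definitions): every readiness check is funnelled through one wrapper, so the generic ->is_ignored test lives in a single place and runs after the per-type callback, matching the socket case where the file descriptor state must be checked first.

```c
#include <assert.h>
#include <stdbool.h>

struct connection;

/* Per-connection-type callbacks (domain vs. socket). */
struct funcs {
    bool (*can_read)(struct connection *);
    bool (*can_write)(struct connection *);
};

struct connection {
    const struct funcs *funcs;
    bool is_ignored;
};

/* Wrappers: callback first, then the generic ignore check, shared by
 * all connection types instead of duplicated at each call site. */
static bool conn_can_read(struct connection *conn)
{
    return conn->funcs->can_read(conn) && !conn->is_ignored;
}

static bool conn_can_write(struct connection *conn)
{
    return conn->funcs->can_write(conn) && !conn->is_ignored;
}

/* Stand-in for domain_can_read()/domain_can_write(). */
static bool always(struct connection *conn) { (void)conn; return true; }
static const struct funcs domain_funcs = { always, always };
```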

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------E99E79B311ABB28A2A6792FA
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E99E79B311ABB28A2A6792FA--

--bfc1YOU1z0js8oRUL84brIkPJ0449K3aF--

--OCpk3NuKoPLc9NNh0QJfQscCYI61iPFoO
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUNrgFAwAAAAAACgkQsN6d1ii/Ey+h
ogf+MMMUI5Juxn/ajqDtQuXypfn2zmr61S+rk4VpwJjTWxxVD+td/b9Eq5XRnK1lYXbHQnes9Kgp
CcxCedtyO4Pk5wc9USyhaj+VxJoxbg3tjTrIyvP/hsnRjagSO3pC+mB2KGgxEdclEHlr060Lookm
Lh/8iwhYQKqUPIdNjTYvjyWA0Mi5hHed42FWtoKCOHQLbS8qm1C2cQQcMgXMBTSMgULEIk58M/O3
BVJjm/9Tr6/Wn9IrwVngiwNvo6kA+qhS5NY3Av7vyVwkE4FOpHBABRzjKS8kq4LJrwB+4SnbIF+H
RFSVSfZ5Vk+MbeSbdLIboBmyZ2sXdzKwT2eKnBGxPw==
=w7DI
-----END PGP SIGNATURE-----

--OCpk3NuKoPLc9NNh0QJfQscCYI61iPFoO--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:44:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146442.269451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwK29-0002mx-DT; Thu, 24 Jun 2021 07:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146442.269451; Thu, 24 Jun 2021 07:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwK29-0002mq-AH; Thu, 24 Jun 2021 07:44:21 +0000
Received: by outflank-mailman (input) for mailman id 146442;
 Thu, 24 Jun 2021 07:44:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwK28-0002mk-7f
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:44:20 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c7dfcb2-38b2-490b-8aa6-24c14a4a6ad5;
 Thu, 24 Jun 2021 07:44:19 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B0B432197E;
 Thu, 24 Jun 2021 07:44:18 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 576C211A97;
 Thu, 24 Jun 2021 07:44:18 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id gKUoEtI31GCcCAAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 07:44:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c7dfcb2-38b2-490b-8aa6-24c14a4a6ad5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520658; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SmVArF2TC24guojXUcYTozjOi8Vhs1tk50s8L+Hemeo=;
	b=I5WGshpqLUkWiEDxMEMjcIbHWCe96jeI+iunh+qlpdxpXaaNEb2XZrGBWiD32v3km1B5Zq
	T9gAJhmx8hUPZTDv0rDklJYW7DFHSOJILioThDB7dBudxcXy63o9U5lgyzTdoSbjxWUsV0
	Ia6JGjvmAnTP2PW4c2B/zVZnWyju14s=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624520658; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SmVArF2TC24guojXUcYTozjOi8Vhs1tk50s8L+Hemeo=;
	b=I5WGshpqLUkWiEDxMEMjcIbHWCe96jeI+iunh+qlpdxpXaaNEb2XZrGBWiD32v3km1B5Zq
	T9gAJhmx8hUPZTDv0rDklJYW7DFHSOJILioThDB7dBudxcXy63o9U5lgyzTdoSbjxWUsV0
	Ia6JGjvmAnTP2PW4c2B/zVZnWyju14s=
Subject: Re: [PATCH 07/10] tools/xenstored: delay_request: don't assume
 conn->in == in
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-8-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <30348a8d-aef4-dee4-50ee-f6613da27952@suse.com>
Date: Thu, 24 Jun 2021 09:44:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-8-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="raKWHPguZ0q0C7k1mjoZQU57KUNgxylRK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--raKWHPguZ0q0C7k1mjoZQU57KUNgxylRK
Content-Type: multipart/mixed; boundary="fNJmfWnaQywy7eZ5ixY8NaRS59dmXA77y";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <30348a8d-aef4-dee4-50ee-f6613da27952@suse.com>
Subject: Re: [PATCH 07/10] tools/xenstored: delay_request: don't assume
 conn->in == in
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-8-julien@xen.org>
In-Reply-To: <20210616144324.31652-8-julien@xen.org>

--fNJmfWnaQywy7eZ5ixY8NaRS59dmXA77y
Content-Type: multipart/mixed;
 boundary="------------F442925ECE74C2AC327E7A78"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F442925ECE74C2AC327E7A78
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> delay_request() is currently assuming that the delayed request is
> always conn->in. This is currently correct, but it invites a latent
> bug, as the function allows the caller to specify any request.
>
> To prevent any future surprise, check whether the delayed request is
> the current one.
>
> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")

Is the Fixes: tag really wanted in this patch? Currently nothing is
wrong without this patch, so a backport would be needed only if e.g.
this series is backported.

I'm fine either way, but I think this should be Ian's call.

> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------F442925ECE74C2AC327E7A78
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F442925ECE74C2AC327E7A78--

--fNJmfWnaQywy7eZ5ixY8NaRS59dmXA77y--

--raKWHPguZ0q0C7k1mjoZQU57KUNgxylRK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUN9EFAwAAAAAACgkQsN6d1ii/Ey9P
cgf+O/44hhJ4FAxDj5h5sY6hmM1QI6xdPPqisZewcV7Loy8xi0XmDeyqxQYuFK2TDFjbg1foN8hZ
XLhAsD68ohPyLde7JbJfz3VsXw6twow84YmjfBGtzX4V28kcOFrmBsuoUjM7MZInhW8jI0xmyWqv
sI8K5L8QOr7ZsjlDAQ3DnlBKLytbfvCmOkoYAJD+/WHe89axNyUzqTt2rUJqnq+qIQ2wwtqFE8Nk
I9ZiTmRs7BBwt982zAfA1EoOIMO5w2R4H7Sk8ZWcEZPqMt7mfpYqGzr6Yfg6XK19E8vovUF/PgCm
Hd7FLn6ib49MeeTLny45MM1kdAbxHk/4ssBBFNmmig==
=gwqu
-----END PGP SIGNATURE-----

--raKWHPguZ0q0C7k1mjoZQU57KUNgxylRK--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:57:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146447.269463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKEm-0004Gg-J0; Thu, 24 Jun 2021 07:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146447.269463; Thu, 24 Jun 2021 07:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKEm-0004GZ-FC; Thu, 24 Jun 2021 07:57:24 +0000
Received: by outflank-mailman (input) for mailman id 146447;
 Thu, 24 Jun 2021 07:57:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6N57=LS=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lwKEl-0004F5-9M
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:57:23 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.58]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14c08dcb-4c38-46d1-a6cc-5d7d5659f2e6;
 Thu, 24 Jun 2021 07:57:22 +0000 (UTC)
Received: from AM6PR05CA0021.eurprd05.prod.outlook.com (2603:10a6:20b:2e::34)
 by DBBPR08MB6121.eurprd08.prod.outlook.com (2603:10a6:10:204::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Thu, 24 Jun
 2021 07:57:12 +0000
Received: from VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::f1) by AM6PR05CA0021.outlook.office365.com
 (2603:10a6:20b:2e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18 via Frontend
 Transport; Thu, 24 Jun 2021 07:57:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT009.mail.protection.outlook.com (10.152.18.92) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Thu, 24 Jun 2021 07:57:12 +0000
Received: ("Tessian outbound f88ae75fbd47:v96");
 Thu, 24 Jun 2021 07:57:11 +0000
Received: from 466892edab09.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 61AF457D-1B71-4597-9C57-FC8E47695BC6.1; 
 Thu, 24 Jun 2021 07:57:05 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 466892edab09.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 24 Jun 2021 07:57:05 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com (2603:10a6:102:130::10)
 by PR3PR08MB5594.eurprd08.prod.outlook.com (2603:10a6:102:8b::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Thu, 24 Jun
 2021 07:57:03 +0000
Received: from PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025]) by PAXPR08MB6816.eurprd08.prod.outlook.com
 ([fe80::7cfd:a8eb:b25a:f025%7]) with mapi id 15.20.4264.020; Thu, 24 Jun 2021
 07:57:03 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO2P265CA0194.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:9e::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Thu, 24 Jun 2021 07:57:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14c08dcb-4c38-46d1-a6cc-5d7d5659f2e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nLqfbhX6NaaxpoFNyV/1TQMz4uCXhbWpJ0jXuUgd7Ds=;
 b=XpWg2gdeGdGsZ6tyJMDDlR2rAE9SxI6yyGXPqBXtge44pm+ibb/BE0cqJLCqhHvPZQUJIYX2VwI1XRHJBJU2qxj9T//P63IGQQ/w4eUJrGkD59wBnyHT2LbYiTGdfYp1TuE6HFr01xvXl/vdiA2y6LiMlrY+LVUo28ZSyRGKV2U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8a90de0410b0747a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U0f78UW3uOpHNusd4o9kDHiVbWLJUw1IHOZzFtjGoEjCW2qgo2Jq/tpWj5UK+Ju5YDmbFCs3tYfmztWCMRlv1cSGjc5bffZFqjs6drP7Q7ljxhzbgxCPZNz3IMZOWuR3TTzubCfuWGsed3PKtU7nIGCzlnfNMDs/McmOUGbbWHaJPrhy58OeUSdQdB9fFRTsIKF2Ucv39MycwpMgZ1ArH/RtHkuk1IkP7q0Cf9tfZ+B31fBfxfKLjjQSePEtfPYbWDWIuz2/9C3YNQj+OK2jjs2bIhlvP5K1SS2fxN6RNEm3WbJngMsLVcLbI4NBy92fKxLjSlfqGvOVD5OwJzey3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nLqfbhX6NaaxpoFNyV/1TQMz4uCXhbWpJ0jXuUgd7Ds=;
 b=BewtCYcUQ1/3/oxfU1Dhs672QMoMSpL4oMwDjmD8sab4znHeEz75EmeZL5jmJquTsqVIlr/JsZ2+QWfnFUthEVmcWSnzvpY+34yEW4jo/8bsdALOy6lX6lBFyEEuaU4IUMuatvkczlG2oHQHkhoBM7jnyBOOj842rYO3Z7XVVPVlme2xVMrmzn0noyrgUaExys3TS8xfnrMEkDhiVZU0h50JtnSLLUgudcgzA/IklaaiR1FCR34KbxxaYCYd/7kNK+6MLfP7/k8NXIZR4cqt3Y6S6h+Nf7jWc7/Z30WtRrsoPkeGd8bQiRaybk2zdM0iVq9avWvoAG6rv7qC0lXs7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nLqfbhX6NaaxpoFNyV/1TQMz4uCXhbWpJ0jXuUgd7Ds=;
 b=XpWg2gdeGdGsZ6tyJMDDlR2rAE9SxI6yyGXPqBXtge44pm+ibb/BE0cqJLCqhHvPZQUJIYX2VwI1XRHJBJU2qxj9T//P63IGQQ/w4eUJrGkD59wBnyHT2LbYiTGdfYp1TuE6HFr01xvXl/vdiA2y6LiMlrY+LVUo28ZSyRGKV2U=
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
Date: Thu, 24 Jun 2021 08:56:56 +0100
Cc: Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <10F0635C-2AA7-42A4-B4BB-A9DD869113C9@arm.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
 <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
To: Juergen Gross <jgross@suse.com>
X-Mailer: Apple Mail (2.3654.100.0.2.22)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0194.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::14) To PAXPR08MB6816.eurprd08.prod.outlook.com
 (2603:10a6:102:130::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 329d9350-f969-4317-73f0-08d936e5acd1
X-MS-TrafficTypeDiagnostic: PR3PR08MB5594:|DBBPR08MB6121:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB6121C7C87AC5457D251C4035E4079@DBBPR08MB6121.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 C/9tLoRBpi2e3iq7NdvPpmn9zmfkilMYxqjoQ8IZOeyyLF10dDLVzLjeB6NC7gb9bDRctMm768KzqcvjK47Xpf+cG2XzLrA1W0wVN0Gje9hNVNJVD4c03GhksEPOMEuAvK0PSYANkUTcjWZRVEhd6KSWEUTcwMo4Dn9y4nr1jezl8RBuBAQSVSdaq5OQK+T/xbRhYeJ8i/CtZeudIoTkbhhlqs+jI0E3JI0X2cCVVP6TXYQUwctIWlJGO8/3DAajDG1jTuDltMhCPUzwq4k/j6Nd1mD3J//hr1ITdn+Geb2bI3TGv0ZM4HNrCYRAwI17/SKc1+DMKpu9ON6DbJSlyYOZacnGtFqn5Makpl31DuMIe3iEx/6ZP5XZmquKnklncPNOQVRRm5vAWkBjAo3dlCGYsK5LTTq2rtdhXu7oJEpu5MDHsaII7MiXqszjo9DPV/MrAvJMZ2A+eZSYDIdNy4tKJx2ZqDRq5M7wiPjKAGwNN5Y/ktbq0fUEl/5/ClzpBuUDU0bMqr3YpDEUYNQtLbS6UVhNvFvYNCZsoBjHRwDJk3N0RGH554+7ihVf5MCfKmkeWZZ/BTsn8gZFLs9qB1m4RpwmNedB8fzbWzhu4ovVsVdLzhPLurc35SNNFuN6D/fnfhqzn6OB6VVDTD1VAa+XWZzy7TO0fvx+K+R03G5YhXsoihltcajVZAD2K7bj
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB6816.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(39850400004)(396003)(366004)(136003)(26005)(38100700002)(6666004)(478600001)(956004)(2616005)(44832011)(6916009)(4326008)(2906002)(86362001)(83380400001)(186003)(38350700002)(16526019)(36756003)(54906003)(6506007)(6512007)(53546011)(52116002)(8936002)(316002)(6486002)(8676002)(66476007)(5660300002)(66556008)(66946007)(33656002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
 =?utf-8?B?NnNJV1FTVEl3dFlHd2xYT1lJRWxzaFB3QTVCL3ZmSGhCanhCck5meFdMQ1F6?=
 =?utf-8?B?dnJFNEJaNlI0TW5teGFwUGdTZE5pSFN1eTIwSWwvRCs5ckxSSDROOTB1cFly?=
 =?utf-8?B?TVFxVGYyaU5QeWgydGUzb2ErdGxVZDM4S09VMitONC9zdzhaaUMxOWRrcDky?=
 =?utf-8?B?czBGdVRHM3VXdmdRejBCS0F6Uy9NWWRQSFFGWEVFTUlzUlhQdjJ0S01SS09C?=
 =?utf-8?B?SkRGVlZyOW4yU2NUTUdBeTdLYStUVk42L1VIOGpONU5JVExyM2tqejcrdUYy?=
 =?utf-8?B?bHBudVRLVkFqQ3dhU1hlZi8wQmtBKzdJVzJpQXM0bXg5bUc1VHQyUFRHVVZn?=
 =?utf-8?B?b1AvNDJEbWFIbGtoMWpMQnlpNzZ4MHBMcWwxTUJJRy9USm1EWXhscmZxbDFk?=
 =?utf-8?B?V2tDNDUwR0prUGxIY3NTRXV4citlVmdRZmJ5V2J4ak81MG4relEwRDZvRHo2?=
 =?utf-8?B?OGtwcld4UGZpdGpZTFNMMHlVS0NhbGJsUVhER2RRTmZ6cFNqWHBnSVRhSW1v?=
 =?utf-8?B?YytFMmZZbGwxOWRNZHN0ajk0Tk9kK1FPV29CQVZ1RG5TK2FhQmFMN1YwdTFm?=
 =?utf-8?B?UXl6STcyd292aE16N0IxZFppTndwT2J6YVNNOWZlclJDZzNlMGJBaWl2WEs1?=
 =?utf-8?B?YXJNRGo4Zy9UMlBMUFRJb1NZTjFIOWNrWEhxeTQybEFLUThTUFNrNy9oVFdC?=
 =?utf-8?B?MjFRd3duOTduNEFOZmpPb2daYW8rTitlNXBJZTUxZVMvbnkvYnFPUXZsamFi?=
 =?utf-8?B?cTZHSmk2NFVwQldvdkZwYjhFNGZBbmhLeTduemRRTFlvS3kwd1BOK2hJYmox?=
 =?utf-8?B?SmJQZjFDTk45Q0ZYK1puVWp4Q3h1T216aTBvK1R3b1FubGhScEhHSklDZXht?=
 =?utf-8?B?Um0rUVJLVlJsa2x5SWpUODAxUHNZaitOeG1TalRjS1ZHTmNMYUZoUzRkYloy?=
 =?utf-8?B?T2xOQXluYXBPY3B1Yy84aFRIaWZLK3ZYN2hCbmpkN3lBdEgwK05XSmFRQU1C?=
 =?utf-8?B?MlFiLzJ2Sm1IUSt6ZHA0RnppeHc4MTYrdGkvaGVKSmlQbWVIWi9xNHRzZWo5?=
 =?utf-8?B?UDhkdlhtNzR3T3h4T3lNb1dDRVFtNndSRzdKNWpwZFFGaGorWVdiaGJncFY0?=
 =?utf-8?B?bGxSbUpVTVZlb09UelEzSGhzbDJ2WU8yTkxzYmo0YjNleDhuSnFUOHZpTnZl?=
 =?utf-8?B?cjZyRWdmSCtKUEVha1RjeGN6U1lxdFJEam9IR05GVmhoSGpXUmtLODQrTEM4?=
 =?utf-8?B?RkdpMitsZ01pT09odnpyYW0wSERZTjBJZFJLQWw1MHF3NDdPMnhFSEprUFNN?=
 =?utf-8?B?a3ZYQVVEN25WQWtQTmRRVHpVb1VmZGUzVkJTTGNHc0F2VStqeWhYV25VRERG?=
 =?utf-8?B?RWRhSVl6cEVPS0ZoM3pBaVhENURmVmlYUEpaNGFWK2EraTZzZjNCTFR3SEJD?=
 =?utf-8?B?d1dIb3JSY2dsbk9kbEwxYnlkWkxLWGN2WGYxd3c0d1pTeVNPUHdrbFkvREhI?=
 =?utf-8?B?T09Wc3pmWWxsa1BZczNTWjg5RmticGNxenpER3oyeEk5YlVhOFBrMVloVXlr?=
 =?utf-8?B?bWdtS1R0bWQyY01YRmVtSXNtaExnNUtjeHdGQjd3VkcvQ0dIaFk3OXc0a1Ny?=
 =?utf-8?B?TnBzWDY2UEVubHFpdVFLZ3ZlLytjaGxYemt3SjhKR0lJZXMwanZNcnltNSsv?=
 =?utf-8?B?bUZ2NlFqZFluT0loY1dzeUxUL3JNNGx1ZmdKSnBaTUFxZ3RVNWFodjhWVWFy?=
 =?utf-8?Q?Azr6RWOx4n13Keh1StKU6A0K+qnotH7X/Rm3+CE?=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5594
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	858aa970-7178-467a-a6cb-08d936e5a71d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JlF8LnNgAzTIc8oMuDGds+NTh9WE6D06CJLF/LGjfo3OeKqD4qPhOrnoQKWNAcSlEAJsSLrrZ06+wVUVP7+BNUZa8ABIc6gfaOCp2iKCfLaY1ye8K4eySx2EzO+J7kEnpPTaoIpbwppA/2LL4+i7YHIB7071ZbWBPDSj852RiBafndpN5RbV+TjZEYRxXpAUOe4LaUAFUvVVsqR2ZXq7xF1NEgK/W8jskVb2hyVz441GpQPyEGQStCZLu0+8dgZKnGCNxUHNnZTFVF8sfGDUGsT/gQpwB61iP1lLXybj9yyKAceVWq1WgtczWkWNBK5d7OBTMPy6+XYebuom0+RlMKIp7eD39H6BCI960nqL7yycThIUIegEiZFbjk072ibJ05YHdNEvnWoeg/aigOcIAd14+gJ96t/nJU9gxdepPxs+K3eANyiLPLCkiH29lFiIxesVEAXGS0GbzbMcXtHEzMdWzfCP8qgpsU4Qe8zyJqYQn4AEauaYyFIrYqByCzXlmelUNTP0mTd1RrmxSQcFhgZ/p7GB03dqBexTfAbP5EA0dX5ZLe1cbpaLDXL+DajUISriuDIFTfVL4RWXzjGJOyJsOJPGAyjPTmkCgalx9RB1jINtgS1eEyzcoH9QGk6Vvf0Wwist1s55Bf1jkGkMmH9sM+ISD4ZlLg8+yPWvU8NVKYVvcdzD01I2KZU02XtrqrV3arVNEIqCnN0/Loezcw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(39840400004)(346002)(36840700001)(46966006)(956004)(26005)(4326008)(2616005)(53546011)(6862004)(44832011)(70586007)(8676002)(81166007)(316002)(82310400003)(356005)(36756003)(6506007)(83380400001)(33656002)(478600001)(5660300002)(2906002)(8936002)(36860700001)(186003)(6486002)(70206006)(6666004)(16526019)(6512007)(336012)(86362001)(54906003)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2021 07:57:12.3244
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 329d9350-f969-4317-73f0-08d936e5acd1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6121



> On 24 Jun 2021, at 08:34, Juergen Gross <jgross@suse.com> wrote:
>
> On 24.06.21 09:32, Juergen Gross wrote:
>> On 16.06.21 16:43, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> call_delayed() is currently assuming that conn->in is NULL when
>>> handling a delayed request. However, the connection is not paused.
>>> Therefore a new request can be processed and conn->in may be non-NULL
>>> if we have only received a partial request.
>>>
>>> Furthermore, as we overwrite conn->in, the current partial request
>>> will not be transferred. This will result in corrupting the connection.
>>>
>>> Rather than updating conn->in, stash the LU request in lu_status and
>>> let each callback for a delayed request update conn->in when
>>> necessary.
>>>
>>> To keep a sane interface, the code writing the "OK" response to the
>>> LU request is moved into xenstored_core.c.
>>>
>>> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution of a xenstore request")
>>> Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live update")
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> With dropping the conn parameter from call_delayed as already
>> mentioned by Luca you can add my:
>
> Oh, please drop my request to delete the conn parameter, as it is being
> used in patch 4 again.
>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> This stands, of course.

Hi Juergen,

I'm sorry but I don't see where the parameter is used; in patch 4 we have this call:

line 2344:
        if (delayed_requests) {
            list_for_each_entry(conn, &connections, list) {
                struct delayed_request *req, *tmp;

                list_for_each_entry_safe(req, tmp,
                             &conn->delayed, list)
                    call_delayed(conn, req);
            }
        }

But call_delayed is still this one:

Line 273:
static void call_delayed(struct connection *conn, struct delayed_request *req)
{
    if (req->func(req)) {
        undelay_request(req);
        talloc_set_destructor(req, NULL);
    }
}

Am I missing something?

Cheers,
Luca

>
>
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 07:58:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 07:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146451.269474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKFW-0004ot-Sj; Thu, 24 Jun 2021 07:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146451.269474; Thu, 24 Jun 2021 07:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKFW-0004om-PU; Thu, 24 Jun 2021 07:58:10 +0000
Received: by outflank-mailman (input) for mailman id 146451;
 Thu, 24 Jun 2021 07:58:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwKFV-0004og-7G
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 07:58:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKFT-0002uJ-J2; Thu, 24 Jun 2021 07:58:07 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKFT-0002C8-At; Thu, 24 Jun 2021 07:58:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=wNZZSzSIQ08whR5sxQWNM/FiFK2Bi63S0/15RY4mB6k=; b=ORQfUrC6SkEj/wzV3SNMx8BAWa
	b1xUzlmmk49CPqBrBRnwVZlxN+2vg+Wxrlrej1bdq6dnZwzrDlQ9Z1GEZI9n/XjS2XKXAFA8zLG+t
	wtzpKW1qJpqnmP+ooCUdZOdYPxyAKW/vPq6ALkurKLvDuiVQQ3uc+G/Ns76fUTREwz8o=;
Subject: Re: [PATCH 07/10] tools/xenstored: delay_request: don't assume
 conn->in == in
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-8-julien@xen.org>
 <30348a8d-aef4-dee4-50ee-f6613da27952@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <501cc477-08e2-174d-2e7e-f6441f943007@xen.org>
Date: Thu, 24 Jun 2021 09:58:04 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <30348a8d-aef4-dee4-50ee-f6613da27952@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 24/06/2021 09:44, Juergen Gross wrote:
> On 16.06.21 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> delay_request() is currently assuming that the request delayed is
>> always conn->in. This is currently correct, but it is a call for
>> a latent bug as the function allows the caller to specify any request.
>>
>> To prevent any future surprise, check if the request delayed is the
>> current one.
>>
>> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution 
>> of a xenstore request")
> 
> Is the Fixes: tag really wanted in this patch? Currently there is
> nothing wrong without this patch. So a backport will be needed only if
> e.g. this series will be backported. >
> I'm fine either way, but I think this should be Ian's call.

We don't usually backport to stable trees for tech-preview features
(Xenstored Live-Update is one).

In this case, I mainly added it because it makes it easier for a
developer to figure out where the bug comes from, and downstreams may
want to ingest it. This doesn't mean I am requesting an official backport.

I could just mention it in the commit message if you prefer.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:05:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146460.269485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKMY-0006os-Th; Thu, 24 Jun 2021 08:05:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146460.269485; Thu, 24 Jun 2021 08:05:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKMY-0006ol-Qa; Thu, 24 Jun 2021 08:05:26 +0000
Received: by outflank-mailman (input) for mailman id 146460;
 Thu, 24 Jun 2021 08:05:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwKMX-0006of-DZ
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:05:25 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 419c2cfc-3b48-436f-bd96-54ba96aeb526;
 Thu, 24 Jun 2021 08:05:24 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 86BBF1FD66;
 Thu, 24 Jun 2021 08:05:23 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 5001911A97;
 Thu, 24 Jun 2021 08:05:23 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id vC9REsM81GCAFAAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 08:05:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 419c2cfc-3b48-436f-bd96-54ba96aeb526
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624521923; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L2Mvnt3OVKuGRhEfNiy42x9mGDm7XA/mK3M3R5sZitk=;
	b=KDWJunp+JRoEHqCVJySF+eMYhlijQDXliN40BGxTDyp0YKviZ1u7A0EqL0d5t7QIWlHg4I
	/wSysQcPlh/sCfIv48bAzuUbwvjMLke2pJTA1yJVLGFxhy7QUl8TlzEqylisANd6IHCcIC
	RZ9KDtPL0neIsgyZULAtZM4sb34ypIo=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624521923; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L2Mvnt3OVKuGRhEfNiy42x9mGDm7XA/mK3M3R5sZitk=;
	b=KDWJunp+JRoEHqCVJySF+eMYhlijQDXliN40BGxTDyp0YKviZ1u7A0EqL0d5t7QIWlHg4I
	/wSysQcPlh/sCfIv48bAzuUbwvjMLke2pJTA1yJVLGFxhy7QUl8TlzEqylisANd6IHCcIC
	RZ9KDtPL0neIsgyZULAtZM4sb34ypIo=
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk, doebel@amazon.de, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
 <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
 <10F0635C-2AA7-42A4-B4BB-A9DD869113C9@arm.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <f0e43b49-674d-1ec0-08b5-7fb25423af82@suse.com>
Date: Thu, 24 Jun 2021 10:05:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <10F0635C-2AA7-42A4-B4BB-A9DD869113C9@arm.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qNnvDoqTkBB56ArsYnmgqb9IivUokgeyD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qNnvDoqTkBB56ArsYnmgqb9IivUokgeyD
Content-Type: multipart/mixed; boundary="OG8QeX7nsdG80gj0F4Hlx4DRcpZq86bIF";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk, doebel@amazon.de, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <f0e43b49-674d-1ec0-08b5-7fb25423af82@suse.com>
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <fa249348-e4a0-13d0-03d7-d3536a8b759c@suse.com>
 <2e28c385-a8ba-1dc3-c41e-8b39f624947c@suse.com>
 <10F0635C-2AA7-42A4-B4BB-A9DD869113C9@arm.com>
In-Reply-To: <10F0635C-2AA7-42A4-B4BB-A9DD869113C9@arm.com>

--OG8QeX7nsdG80gj0F4Hlx4DRcpZq86bIF
Content-Type: multipart/mixed;
 boundary="------------20E813A14FB26FBE5936C8FE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------20E813A14FB26FBE5936C8FE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.06.21 09:56, Luca Fancellu wrote:
>
>
>> On 24 Jun 2021, at 08:34, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 24.06.21 09:32, Juergen Gross wrote:
>>> On 16.06.21 16:43, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> call_delayed() is currently assuming that conn->in is NULL when
>>>> handling delayed request. However, the connection is not paused.
>>>> Therefore new requests can be processed and conn->in may be non-NULL
>>>> if we have only received a partial request.
>>>>
>>>> Furthermore, as we overwrite conn->in, the current partial request
>>>> will not be transferred. This will result in corrupting the connection.
>>>>
>>>> Rather than updating conn->in, stash the LU request in lu_status and
>>>> let each delayed-request callback update conn->in when
>>>> necessary.
>>>>
>>>> To keep a sane interface, the code to write the "OK" response to the
>>>> LU request is moved to xenstored_core.c.
>>>>
>>>> Fixes: c5ca1404b4 ("tools/xenstore: add support for delaying execution
>>>> of a xenstore request")
>>>> Fixes: ed6eebf17d ("tools/xenstore: dump the xenstore state for live
>>>> update")
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> After dropping the conn parameter from call_delayed, as already
>>> mentioned by Luca, you can add my:
>>
>> Oh, please drop my request to delete the conn parameter, as it is being
>> used in patch 4 again.
>>
>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>
>> This stands, of course.
>
> Hi Juergen,
>
> I'm sorry, but I don't see where the parameter is used; in patch 4 we have this call:
>
> line 2344:
>          if (delayed_requests) {
>              list_for_each_entry(conn, &connections, list) {
>                  struct delayed_request *req, *tmp;
>
>                  list_for_each_entry_safe(req, tmp,
>                               &conn->delayed, list)
>                      call_delayed(conn, req);
>              }
>          }
>
> But call_delayed is still this one:
>
> Line 273:
> static void call_delayed(struct connection *conn, struct delayed_request *req)
> {
>      if (req->func(req)) {
>          undelay_request(req);
>          talloc_set_destructor(req, NULL);
>      }
> }
>
> Am I missing something?

Hmm, I seem to have mixed up something.

Yes, you are right. So off with the conn parameter (again).


Juergen


--------------20E813A14FB26FBE5936C8FE--

--OG8QeX7nsdG80gj0F4Hlx4DRcpZq86bIF--


--qNnvDoqTkBB56ArsYnmgqb9IivUokgeyD--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:06:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:06:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146466.269496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKNf-0007T1-BS; Thu, 24 Jun 2021 08:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146466.269496; Thu, 24 Jun 2021 08:06:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKNf-0007Ss-8B; Thu, 24 Jun 2021 08:06:35 +0000
Received: by outflank-mailman (input) for mailman id 146466;
 Thu, 24 Jun 2021 08:06:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwKNe-0007Sk-4U
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:06:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKNb-0003ag-60; Thu, 24 Jun 2021 08:06:31 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKNa-0002ua-UC; Thu, 24 Jun 2021 08:06:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pB1YWnCA7A0AOxCkbXSbKy3vf3NtrD1e87JcPsvmZXU=; b=T3SDWVWhValekvLRVr12bZlEmD
	5u09sN2I+rAyx9rDyXrbVVl4CWWNao3SV90X5qx/YPA9HcEHx1b4PdQtL5fqw6wU5/ar8+eqsKT7A
	56SjNN6d9dsACdkgNQvWZ5LFho3GAjG1NU85ohTQYAnT1Z+YJyBofZnfmUJbR0aSs+cw=;
Subject: Re: [PATCH 03/10] tools/xenstore: Don't assume conn->in points to the
 LU request
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: xen-devel@lists.xenproject.org, raphning@amazon.co.uk, doebel@amazon.de,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-4-julien@xen.org>
 <4316B0AD-5F89-4D04-8996-00836AE3991A@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e08f65e9-8d3c-4f50-20d9-312a358417be@xen.org>
Date: Thu, 24 Jun 2021 10:06:28 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <4316B0AD-5F89-4D04-8996-00836AE3991A@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Luca,

On 21/06/2021 10:55, Luca Fancellu wrote:
>> diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
>> index 6842b8d88760..27d7f19e4b7f 100644
>> --- a/tools/xenstore/xenstored_control.h
>> +++ b/tools/xenstore/xenstored_control.h
>> @@ -20,3 +20,6 @@ int do_control(struct connection *conn, struct buffered_data *in);
>> void lu_read_state(void);
>>
>> struct connection *lu_get_connection(void);
>> +
>> +/* Write the "OK" response for the live-update command */
>> +unsigned int lu_write_response(FILE *fp);
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 607187361d84..41b26d7094c8 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -272,15 +272,10 @@ static int undelay_request(void *_req)
>>
>> static void call_delayed(struct connection *conn, struct delayed_request *req)
> 
> Here the conn parameter is not needed anymore, or am I missing something?

The parameter is now unused. I will drop it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:12:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146482.269524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKTL-0001QQ-F7; Thu, 24 Jun 2021 08:12:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146482.269524; Thu, 24 Jun 2021 08:12:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKTL-0001QJ-C3; Thu, 24 Jun 2021 08:12:27 +0000
Received: by outflank-mailman (input) for mailman id 146482;
 Thu, 24 Jun 2021 08:12:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwKTJ-0001Q1-QP
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:12:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKTI-0003h0-Da; Thu, 24 Jun 2021 08:12:24 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKTI-0003U5-4I; Thu, 24 Jun 2021 08:12:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/gquh0QLxtElZLGe9Zcszyit3MGZK73YXaXO2qqDKdQ=; b=BtDpAHRbmszwrMgEEt3aBGepBf
	5YDbunzQWJnsCwADak7gplQPY5NADMHxP5d8cypECRCSOmdw9sEFA2TUfvt7+B+o8ZUbrOiMOsqzt
	vR0A6PpKnbDuZbwrHwVJEqaLJCClGACm8EaLkwsb6Bf0wehwrSeWZHSJCrqcDTtzRAUM=;
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
Date: Thu, 24 Jun 2021 10:12:21 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 22/06/2021 11:23, Juergen Gross wrote:
> On 17.06.21 19:38, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> As Live-Update is asynchronous, it is possible to receive a request to
>> cancel it (either on the same connection or from a different one).
>>
>> Currently, this will crash xenstored because do_lu_start() assumes
>> lu_status will be valid. This is not the case when Live-Update has been
>> cancelled. This will result in dereferencing a NULL pointer and
>> crashing Xenstored.
>>
>> Rework do_lu_start() to check if lu_status is NULL and return an
>> error in this case.
>>
>> Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing 
>> the live update")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>> This is currently based on top of:
>>
>> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org
>>
>> This can be re-ordered if necessary.
>> ---
>>   tools/xenstore/xenstored_control.c | 15 +++++++++++++--
>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_control.c 
>> b/tools/xenstore/xenstored_control.c
>> index a045f102a420..37a3d39f20b5 100644
>> --- a/tools/xenstore/xenstored_control.c
>> +++ b/tools/xenstore/xenstored_control.c
>> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request *req)
>>       time_t now = time(NULL);
>>       const char *ret;
>>       struct buffered_data *saved_in;
>> -    struct connection *conn = lu_status->conn;
>> +    struct connection *conn = req->data;
>> +
>> +    /*
>> +     * Cancellation may have been requested asynchronously. In this
>> +     * case, lu_status will be NULL.
>> +     */
>> +    if (!lu_status) {
>> +        ret = "Cancellation was requested";
>> +        conn = req->data;
> 
> This will set conn to the same value it already has.

Ah yes. I will drop this line.

Also, I took the opportunity to replace

} else
   assert(...)

with just

assert(...)

This should improve readability a bit. Let me know if you want me to
resend the patch for that.

> 
> 
> Other than that:
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:17:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:17:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146491.269536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKYa-0002BU-2x; Thu, 24 Jun 2021 08:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146491.269536; Thu, 24 Jun 2021 08:17:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKYZ-0002BN-WA; Thu, 24 Jun 2021 08:17:51 +0000
Received: by outflank-mailman (input) for mailman id 146491;
 Thu, 24 Jun 2021 08:17:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwKYX-0002BH-V5
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:17:49 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8917da6-4322-48d2-9ffc-64078020f270;
 Thu, 24 Jun 2021 08:17:48 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E19392195F;
 Thu, 24 Jun 2021 08:17:46 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id B2EC411A97;
 Thu, 24 Jun 2021 08:17:46 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id IpF+Kqo/1GAwGwAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 08:17:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8917da6-4322-48d2-9ffc-64078020f270
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624522666; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Vx+5gKDfdRhLKNBeHBYKS3G7592LbeKzJTJFxONMaHw=;
	b=dBfQ6W5snv3kSXyjYtEWA4MrcP+OZPHEpKq+aefJZzehBxeUgYHLNYN+mVqGyN3YFNOspk
	aODGEL8be38MfP2MtvFEpvvcZLS6DyM5AtKykC9TGu7kHGk2q/bb/1gbBMG1ZwKTeC1/uT
	9zysxeIRpLxCf9a6qj2bb8vx0+6dP9g=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624522666; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Vx+5gKDfdRhLKNBeHBYKS3G7592LbeKzJTJFxONMaHw=;
	b=dBfQ6W5snv3kSXyjYtEWA4MrcP+OZPHEpKq+aefJZzehBxeUgYHLNYN+mVqGyN3YFNOspk
	aODGEL8be38MfP2MtvFEpvvcZLS6DyM5AtKykC9TGu7kHGk2q/bb/1gbBMG1ZwKTeC1/uT
	9zysxeIRpLxCf9a6qj2bb8vx0+6dP9g=
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
 <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <bcdad74c-393e-0582-6a26-b9a6f45cb30a@suse.com>
Date: Thu, 24 Jun 2021 10:17:46 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="sWWtURDMf85KDFUlggZlGmVGEMXlt6Wm1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--sWWtURDMf85KDFUlggZlGmVGEMXlt6Wm1
Content-Type: multipart/mixed; boundary="oIztaDXrHRNcxnbQhvTseAgL05RPvgXRx";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <bcdad74c-393e-0582-6a26-b9a6f45cb30a@suse.com>
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
References: <20210617173857.6450-1-julien@xen.org>
 <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
 <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
In-Reply-To: <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>

--oIztaDXrHRNcxnbQhvTseAgL05RPvgXRx
Content-Type: multipart/mixed;
 boundary="------------04B1911866C86B501E27424D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------04B1911866C86B501E27424D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.06.21 10:12, Julien Grall wrote:
> Hi Juergen,
> 
> On 22/06/2021 11:23, Juergen Gross wrote:
>> On 17.06.21 19:38, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> As Live-Update is asynchronous, it is possible to receive a request to
>>> cancel it (either on the same connection or from a different one).
>>>
>>> Currently, this will crash xenstored because do_lu_start() assumes
>>> lu_status will be valid. This is not the case when Live-Update has been
>>> cancelled. This will result in dereferencing a NULL pointer and
>>> crash Xenstored.
>>>
>>> Rework do_lu_start() to check if lu_status is NULL and return an
>>> error in this case.
>>>
>>> Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing
>>> the live update")
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ----
>>>
>>> This is currently based on top of:
>>>
>>> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org
>>>
>>> This can be re-ordered if necessary.
>>> ---
>>>   tools/xenstore/xenstored_control.c | 15 +++++++++++++--
>>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_control.c
>>> b/tools/xenstore/xenstored_control.c
>>> index a045f102a420..37a3d39f20b5 100644
>>> --- a/tools/xenstore/xenstored_control.c
>>> +++ b/tools/xenstore/xenstored_control.c
>>> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request
>>> *req)
>>>       time_t now = time(NULL);
>>>       const char *ret;
>>>       struct buffered_data *saved_in;
>>> -    struct connection *conn = lu_status->conn;
>>> +    struct connection *conn = req->data;
>>> +
>>> +    /*
>>> +     * Cancellation may have been requested asynchronously. In this
>>> +     * case, lu_status will be NULL.
>>> +     */
>>> +    if (!lu_status) {
>>> +        ret = "Cancellation was requested";
>>> +        conn = req->data;
>>
>> This will set conn to the same value it already has.
> 
> Ah yes. I will drop this line.
> 
> Also, I took the opportunity to replace
> 
> } else
>   assert(...)
> 
> with just
> 
> assert(...)
> 
> This should improve readability a bit. Let me know if you want me to
> resend the patch for that.

I guess you are planning to do the commit?

If so, there is no need to resend the patch.


Juergen


--------------04B1911866C86B501E27424D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------04B1911866C86B501E27424D--

--oIztaDXrHRNcxnbQhvTseAgL05RPvgXRx--

--sWWtURDMf85KDFUlggZlGmVGEMXlt6Wm1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUP6oFAwAAAAAACgkQsN6d1ii/Ey/E
Swf+LGy6PtgvaivkwhfRRC/FZguOw7zncmbcDGB06GfReVynli3hbAnuhDb4QCruVDKVQHX5A19B
HOQa4/Sh6tdyv4V65m86u/8ZisYMuYPOnmVEOXUVUQSSDM/rzToaUp6qhBzHXyP7rTyNXRwE5P6P
i+wkMotnX4zcPyxC/YQwAq3ni9YG0k53YQ+nz4888Inybx9iJSpDNesvWPYEbytZdNc0TbsR4WXf
5/sYvQMYT6dbJLa1D4/M5jyuI6TxVfysTJuCG/4dUEAG/0JUgXSi2Ry8q2rcE+GSm5J1hgoc72r/
cs1RTatD6/ooZmUbcHlBZbug3WjnxyaAN1D2+wz5Nw==
=TzMw
-----END PGP SIGNATURE-----

--sWWtURDMf85KDFUlggZlGmVGEMXlt6Wm1--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:18:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146495.269547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKZa-0002lB-DD; Thu, 24 Jun 2021 08:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146495.269547; Thu, 24 Jun 2021 08:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKZa-0002l4-9k; Thu, 24 Jun 2021 08:18:54 +0000
Received: by outflank-mailman (input) for mailman id 146495;
 Thu, 24 Jun 2021 08:18:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwKZZ-0002ku-A0
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:18:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKZX-0003ov-Rh; Thu, 24 Jun 2021 08:18:51 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKZX-0003nL-Iy; Thu, 24 Jun 2021 08:18:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=rX/QwlzLiFLie/fR3XNagqSviXSjY5kaEEmjJgdG688=; b=f1z1ZD1pV0hTEfR+vSJ4Yx31AJ
	AuKNnNVhrrXRwJ3iEaGnJvjWDfanYQ9WpkTVrjcsv3mQKWF0BsCvhkf44TPEPx1mFlnIXyC81UlmZ
	qNpD4Ssf21Xfcf/NjiQD6OzPzOW9LxX45F+TRSQNBasZmWq9QzKfQwkQpbHNLvH7tYhw=;
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
 <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
 <bcdad74c-393e-0582-6a26-b9a6f45cb30a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7f252b19-b84e-b4d3-0c3c-976404b00701@xen.org>
Date: Thu, 24 Jun 2021 10:18:49 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <bcdad74c-393e-0582-6a26-b9a6f45cb30a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 24/06/2021 10:17, Juergen Gross wrote:
> On 24.06.21 10:12, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 22/06/2021 11:23, Juergen Gross wrote:
>>> On 17.06.21 19:38, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> As Live-Update is asynchronous, it is possible to receive a request to
>>>> cancel it (either on the same connection or from a different one).
>>>>
>>>> Currently, this will crash xenstored because do_lu_start() assumes
>>>> lu_status will be valid. This is not the case when Live-Update has been
>>>> cancelled. This will result in dereferencing a NULL pointer and
>>>> crash Xenstored.
>>>>
>>>> Rework do_lu_start() to check if lu_status is NULL and return an
>>>> error in this case.
>>>>
>>>> Fixes: af216a99fb ("tools/xenstore: add the basic framework for doing
>>>> the live update")
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> ----
>>>>
>>>> This is currently based on top of:
>>>>
>>>> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org
>>>>
>>>> This can be re-ordered if necessary.
>>>> ---
>>>>   tools/xenstore/xenstored_control.c | 15 +++++++++++++--
>>>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_control.c 
>>>> b/tools/xenstore/xenstored_control.c
>>>> index a045f102a420..37a3d39f20b5 100644
>>>> --- a/tools/xenstore/xenstored_control.c
>>>> +++ b/tools/xenstore/xenstored_control.c
>>>> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request 
>>>> *req)
>>>>       time_t now = time(NULL);
>>>>       const char *ret;
>>>>       struct buffered_data *saved_in;
>>>> -    struct connection *conn = lu_status->conn;
>>>> +    struct connection *conn = req->data;
>>>> +
>>>> +    /*
>>>> +     * Cancellation may have been requested asynchronously. In this
>>>> +     * case, lu_status will be NULL.
>>>> +     */
>>>> +    if (!lu_status) {
>>>> +        ret = "Cancellation was requested";
>>>> +        conn = req->data;
>>>
>>> This will set conn to the same value it already has.
>>
>> Ah yes. I will drop this line.
>>
>> Also, I took the opportunity to replace
>>
>> } else
>>   assert(...)
>>
>> with just
>>
>> assert(...)
>>
>> This should improve readability a bit. Let me know if you want me
>> to resend the patch for that.
> 
> I guess you are planning to do the commit?

That's my plan.

> 
> If so, there is no need to resend the patch.

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:30:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146501.269557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKkc-0004wC-DQ; Thu, 24 Jun 2021 08:30:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146501.269557; Thu, 24 Jun 2021 08:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKkc-0004w5-AH; Thu, 24 Jun 2021 08:30:18 +0000
Received: by outflank-mailman (input) for mailman id 146501;
 Thu, 24 Jun 2021 08:30:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwKkb-0004vz-9F
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:30:17 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 915b0a1b-cb7c-4b9d-90d0-5d2b542b89a5;
 Thu, 24 Jun 2021 08:30:16 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B3F0B1FD67;
 Thu, 24 Jun 2021 08:30:15 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 7F36D11A97;
 Thu, 24 Jun 2021 08:30:15 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id IFrsHZdC1GBBIgAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 08:30:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 915b0a1b-cb7c-4b9d-90d0-5d2b542b89a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624523415; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mK3FUXG2k5/h1Ahqklaxt7B6min4oXXkJXXL4DaF10g=;
	b=u3gbljfv4eIhcBpXB5k7yK55P0cm0staXFWCHluX16e/QV8MidS3FI+gdIJZzDk63HZonI
	nzjdr4yO5lhk4AWxjofHwuCBXjc0TPQCw9UZqZLYiTAwd44YzyaYvAi87MS97uBfb8T5t8
	LLTxtjTyE33wTV4Az8VyliJftQISG28=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624523415; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mK3FUXG2k5/h1Ahqklaxt7B6min4oXXkJXXL4DaF10g=;
	b=u3gbljfv4eIhcBpXB5k7yK55P0cm0staXFWCHluX16e/QV8MidS3FI+gdIJZzDk63HZonI
	nzjdr4yO5lhk4AWxjofHwuCBXjc0TPQCw9UZqZLYiTAwd44YzyaYvAi87MS97uBfb8T5t8
	LLTxtjTyE33wTV4Az8VyliJftQISG28=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
Message-ID: <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
Date: Thu, 24 Jun 2021 10:30:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-9-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="g1GiBB6Hcv0S5hn0ktPZTtaOMSCx4n5CS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--g1GiBB6Hcv0S5hn0ktPZTtaOMSCx4n5CS
Content-Type: multipart/mixed; boundary="K1xUIfxrSeaTfzDNUDS9TfUbD14ptiP5m";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
In-Reply-To: <20210616144324.31652-9-julien@xen.org>

--K1xUIfxrSeaTfzDNUDS9TfUbD14ptiP5m
Content-Type: multipart/mixed;
 boundary="------------70D5279AF8A9D003712F5512"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------70D5279AF8A9D003712F5512
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the restore code assumes the stream will contain at
> most one in-flight request per connection. In follow-up changes, we
> will want to transfer multiple in-flight requests.
> 
> The function read_state_buffered() is now extended to restore multiple
> in-flight requests. Complete requests will be queued as delayed
> requests; if there is a partial request (only the last one can be
> partial), it will be used as the current in-flight request.
> 
> Note that we want to bypass the quota check for delayed requests as
> the new Xenstore may have a lower limit.
> 
> Lastly, there is no need to change the specification as there was
> no restriction on the number of in-flight requests preserved.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   tools/xenstore/xenstored_core.c | 56 ++++++++++++++++++++++++++++-----
>   1 file changed, 48 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index a5084a5b173d..5b7ab7f74013 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1486,6 +1486,10 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>   	enum xsd_sockmsg_type type = in->hdr.msg.type;
>   	int ret;
>   
> +	/* At least send_error() and send_reply() expect conn->in == in */
> +	assert(conn->in == in);
> +	trace_io(conn, in, 0);
> +
>   	if ((unsigned int)type >= XS_TYPE_COUNT || !wire_funcs[type].func) {
>   		eprintf("Client unknown operation %i", type);
>   		send_error(conn, ENOSYS);
> @@ -1515,6 +1519,23 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>   	conn->transaction = NULL;
>   }
>   
> +static bool process_delayed_message(struct delayed_request *req)
> +{
> +	struct connection *conn = req->data;
> +	struct buffered_data *saved_in = conn->in;
> +
> +	/*
> +	 * Part of process_message() expects conn->in to contain the
> +	 * processed response. So save the current conn->in and restore it
> +	 * afterwards.
> +	 */
> +	conn->in = req->in;
> +	process_message(req->data, req->in);
> +	conn->in = saved_in;

This pattern was already added to do_lu_start(), and it will be needed
by every other function called via call_delayed().

Maybe it would be better to add conn explicitly to struct
delayed_request (or, even better, to replace data with conn) and to do
the conn->in saving/setting/restoring in call_delayed() instead?

Other than that:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------70D5279AF8A9D003712F5512
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------70D5279AF8A9D003712F5512--

--K1xUIfxrSeaTfzDNUDS9TfUbD14ptiP5m--

--g1GiBB6Hcv0S5hn0ktPZTtaOMSCx4n5CS
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUQpcFAwAAAAAACgkQsN6d1ii/Ey8k
rAf/dDC1Nmvh05Js1FmnBx8azJm9k8sYtgiS1sGUlJUxHWctiu4s253UIfKYkfYpQ2rdDgaxFDhJ
2vRRU5UCeVxSgb4n1bDcSEKoASqXWo2GbweBUdVU1d++ZDzJxf8IFDJuf7R2g9ICSN2Sa8bmK7iL
yWDM+YLfsUApOors0xfws5rQn79OVA6Vd68I1une0VDTXvyQq2pmWctTeuPRRE15ideHRVJ4yY9h
r6D0NLYzKRFezT8Akf5NnUXHBRAcI96BwqP8kKx/v/nI28LZCY9PqCXjdxvz34r9z7myEqJlDsED
sB/Xby6fj4eHwnhfWLHoe5hKNjSRUovaKWCfb5yA/g==
=Qly2
-----END PGP SIGNATURE-----

--g1GiBB6Hcv0S5hn0ktPZTtaOMSCx4n5CS--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:41:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:41:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146508.269569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKvS-0006RZ-Ho; Thu, 24 Jun 2021 08:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146508.269569; Thu, 24 Jun 2021 08:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKvS-0006RS-En; Thu, 24 Jun 2021 08:41:30 +0000
Received: by outflank-mailman (input) for mailman id 146508;
 Thu, 24 Jun 2021 08:41:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwKvR-0006RM-H8
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:41:29 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4434ccea-cb67-4e10-b7fb-a6e6adc43bd5;
 Thu, 24 Jun 2021 08:41:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9A07D1FD67;
 Thu, 24 Jun 2021 08:41:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 5A10111A97;
 Thu, 24 Jun 2021 08:41:27 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id oaKoFDdF1GBzKAAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 08:41:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4434ccea-cb67-4e10-b7fb-a6e6adc43bd5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624524087; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=o2r/Dkb0dtVG4Ww86TtCZ1q0toL9021hGsxDUIoa0F0=;
	b=W8sE2X9PsQ3GjXw2LeZpa9aby3+SN8md+185/OCsWfnM1Twd4/SqdtPCLjuLTTP0G6E9Ey
	Ua+JOsrhoYdOLAPQ+OMPgBLn7VSWYU5PuCxcWTLZKY4ao+Ozl0YxPUwHtu0938jTysCEX/
	cEVwjAniOTZOynDt0qmMip1efaiyTS4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624524087; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=o2r/Dkb0dtVG4Ww86TtCZ1q0toL9021hGsxDUIoa0F0=;
	b=W8sE2X9PsQ3GjXw2LeZpa9aby3+SN8md+185/OCsWfnM1Twd4/SqdtPCLjuLTTP0G6E9Ey
	Ua+JOsrhoYdOLAPQ+OMPgBLn7VSWYU5PuCxcWTLZKY4ao+Ozl0YxPUwHtu0938jTysCEX/
	cEVwjAniOTZOynDt0qmMip1efaiyTS4=
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
Date: Thu, 24 Jun 2021 10:41:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-10-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="F24buKIkETeVGiiMO6rPAYyMt4X1w0V81"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--F24buKIkETeVGiiMO6rPAYyMt4X1w0V81
Content-Type: multipart/mixed; boundary="akyuPgvLq5nNMJgZ5UnIGeFQmlt3Nyu13";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
In-Reply-To: <20210616144324.31652-10-julien@xen.org>

--akyuPgvLq5nNMJgZ5UnIGeFQmlt3Nyu13
Content-Type: multipart/mixed;
 boundary="------------DC70DE1E851EE4D8B4E1BD9C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------DC70DE1E851EE4D8B4E1BD9C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>=20
> Currently, only Live-Update request can be delayed. In a follow-up,
> we will want to delay more requests (e.g. transaction start).
> Therefore we want to preserve delayed requests across Live-Update.
>=20
> Delayed requests are just complete "in" buffer. So the code is
> refactored to allow sharing the code to dump "in" buffer.
>=20
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++-------=
-
>   1 file changed, 32 insertions(+), 10 deletions(-)
>=20
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored=
_core.c
> index 5b7ab7f74013..9eca58682b51 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>   	return NULL;
>   }
>  
> +static const char *dump_input_buffered_data(FILE *fp,
> +					    const struct buffered_data *in,
> +					    unsigned int *total_len)
> +{
> +	unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
> +
> +	*total_len += hlen;
> +	if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
> +		return "Dump read data error";
> +	if (!in->inhdr && in->used) {
> +		*total_len += in->used;
> +		if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
> +			return "Dump read data error";
> +	}
> +
> +	return NULL;
> +}
> +
> +
>   /* Called twice: first with fp == NULL to get length, then for writing data. */
>   const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
>   				     struct xs_state_connection *sc)
>   {
>   	unsigned int len = 0, used;
> -	struct buffered_data *out, *in = c->in;
> +	struct buffered_data *out;
>   	bool partial = true;
> +	struct delayed_request *req;
> +	const char *ret;
>  
> -	if (in) {
> -		len = in->inhdr ? in->used : sizeof(in->hdr);
> -		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
> -			return "Dump read data error";
> -		if (!in->inhdr && in->used) {
> -			len += in->used;
> -			if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
> -				return "Dump read data error";
> -		}
> +	/* Dump any command that was delayed */
> +	list_for_each_entry(req, &c->delayed, list) {

Please add a comment here noting that the following if() is there
specifically to avoid dumping the data for do_lu_start().

> +		if (req->func != process_delayed_message)
> +			continue;
> +
> +		assert(!req->in->inhdr);
> +		if ((ret = dump_input_buffered_data(fp, req->in, &len)))
> +			return ret;
>   	}
>  
> +	if (c->in && (ret = dump_input_buffered_data(fp, c->in, &len)))
> +		return ret;
> +
>   	if (sc) {
>   		sc->data_in_len = len;
>   		sc->data_resp_len = 0;
>

Juergen

--------------DC70DE1E851EE4D8B4E1BD9C
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------DC70DE1E851EE4D8B4E1BD9C--

--akyuPgvLq5nNMJgZ5UnIGeFQmlt3Nyu13--

--F24buKIkETeVGiiMO6rPAYyMt4X1w0V81
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDURTYFAwAAAAAACgkQsN6d1ii/Ey8Y
wgf/dzRBwdCjDSnPU9hX9XxWNZwzshiTo2ZYCoUqsJIYFeNTEx70W/ZIYmZgS9SFWJIcHi78WcqC
RizfUinXrh+uPrOUm2Jk6qChvotSi/RGRQCeDwkh8/fgNv6wKziMlQptvJhQDk1BFNAKt4TAze6z
O7rvyZRa++UcyN+bJ2lYrVcwqvXTh7WTv2qG5wnoROn7AIF3s5Yt/j/V0KzpToaRpIWzKPSCD8Jv
vVs5skuY00KgQdodyDx5w2OfUYDTFLCjN2M/ZuCcMDIopxaBR8p4Y9fkbBERUr9M/kxkdlGNqZgM
7Qy5J0iGGHsf36xnVVM7DDge2OFf/J3oInW1ScTX7w==
=VZsT
-----END PGP SIGNATURE-----

--F24buKIkETeVGiiMO6rPAYyMt4X1w0V81--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 08:42:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 08:42:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146512.269580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKwJ-00071r-Tr; Thu, 24 Jun 2021 08:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146512.269580; Thu, 24 Jun 2021 08:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwKwJ-00071k-O5; Thu, 24 Jun 2021 08:42:23 +0000
Received: by outflank-mailman (input) for mailman id 146512;
 Thu, 24 Jun 2021 08:42:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwKwI-00070p-Dc
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 08:42:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKwH-0004CN-3y; Thu, 24 Jun 2021 08:42:21 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwKwG-0005cy-NE; Thu, 24 Jun 2021 08:42:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xHGRI+FSO9v4DUEOX5z2V2jLM5FK6T/ILz8VhXK5Cvo=; b=18OQGyg+P068HG+t2Qnj6FQ27F
	EfxYXuaaJTAsYdZquf1oLSoNv3m+FXfME2bwNAJEOfMG7awLXDsr9gF+9Dco4/sidv1nX6egdog51
	Nn2ryq6scuERL1jAyOVrsKbffot8G4JV6EGRFc5U72+W8VtihmfSIvUOaKsUXEiKHlqE=;
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
 <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c6ad91cd-f531-5c19-e246-bb8c4ce304f2@xen.org>
Date: Thu, 24 Jun 2021 10:42:17 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 24/06/2021 10:30, Juergen Gross wrote:
> On 16.06.21 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, the restore code assumes the stream will contain at
>> most one in-flight request per connection. In follow-up changes, we
>> will want to transfer multiple in-flight requests.
>>
>> The function read_state_buffered() is now extended to restore multiple
>> in-flight requests. Complete requests will be queued as delayed
>> requests; if there is a partial request (only the last one can be
>> partial), it will be used as the current in-flight request.
>>
>> Note that we want to bypass the quota check for delayed requests as
>> the new Xenstore may have a lower limit.
>>
>> Lastly, there is no need to change the specification as there was
>> no restriction on the number of in-flight requests preserved.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   tools/xenstore/xenstored_core.c | 56 ++++++++++++++++++++++++++++-----
>>   1 file changed, 48 insertions(+), 8 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c 
>> b/tools/xenstore/xenstored_core.c
>> index a5084a5b173d..5b7ab7f74013 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1486,6 +1486,10 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>>       enum xsd_sockmsg_type type = in->hdr.msg.type;
>>       int ret;
>> +    /* At least send_error() and send_reply() expect conn->in == in */
>> +    assert(conn->in == in);
>> +    trace_io(conn, in, 0);
>> +
>>       if ((unsigned int)type >= XS_TYPE_COUNT || !wire_funcs[type].func) {
>>           eprintf("Client unknown operation %i", type);
>>           send_error(conn, ENOSYS);
>> @@ -1515,6 +1519,23 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>>       conn->transaction = NULL;
>>   }
>> +static bool process_delayed_message(struct delayed_request *req)
>> +{
>> +    struct connection *conn = req->data;
>> +    struct buffered_data *saved_in = conn->in;
>> +
>> +    /*
>> +     * Part of process_message() expects conn->in to contain the
>> +     * request being processed. So save the current conn->in and restore it
>> +     * afterwards.
>> +     */
>> +    conn->in = req->in;
>> +    process_message(req->data, req->in);
>> +    conn->in = saved_in;
> 
> This pattern was added to do_lu_start() already, and it will be needed
> for each other function being called via call_delayed().
> 
> Maybe it would be better to add conn explicitly to struct
> delayed_request (or even better: replace data with conn) and to do the
> conn->in saving/setting/restoring in call_delayed() instead?

This was my original approach, but I abandoned it because in the case of 
do_lu_start() we need to save the original conn->in in the stream (see 
patch #3 for more details).

If we overwrite conn->in in call_delayed(), then we need a different way 
to find the original conn->in. I couldn't find a nice one and decided to 
settle for the duplication.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 09:00:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 09:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146518.269591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLDx-0000uq-DS; Thu, 24 Jun 2021 09:00:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146518.269591; Thu, 24 Jun 2021 09:00:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLDx-0000uj-A1; Thu, 24 Jun 2021 09:00:37 +0000
Received: by outflank-mailman (input) for mailman id 146518;
 Thu, 24 Jun 2021 09:00:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLDw-0000uZ-7M; Thu, 24 Jun 2021 09:00:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLDw-0004Wh-1T; Thu, 24 Jun 2021 09:00:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLDv-0006Da-P7; Thu, 24 Jun 2021 09:00:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLDv-0000bd-Oe; Thu, 24 Jun 2021 09:00:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dRqCX36GV48+Z/Ov4QL7kklWLY86xUA0i2qEZsyzA80=; b=GhSOC1mLAEmAp8kQzHNLjdARS5
	pQ6cJBv+/Esqap/LrtJ6CbtjOfhobOKKOitd0CZf/XIuWT3L3hxO88ZVP4KuDO70ilWQtlcO7ihzv
	ygVWh3ER9Q9+cwAq78m7GIXjxiS3reFXTIk122/bPCrFqVD9gfMWf54vfolOcMMR6ntM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163011-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163011: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b9f9df9f2d41d3d44dc20ee2a502c1b68ebfbc14
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 09:00:35 +0000

flight 163011 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163011/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b9f9df9f2d41d3d44dc20ee2a502c1b68ebfbc14
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  349 days
Failing since        151818  2020-07-11 04:18:52 Z  348 days  340 attempts
Testing same since   163011  2021-06-24 04:18:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63234 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 09:12:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 09:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146525.269605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLPf-0002T7-My; Thu, 24 Jun 2021 09:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146525.269605; Thu, 24 Jun 2021 09:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLPf-0002T0-JD; Thu, 24 Jun 2021 09:12:43 +0000
Received: by outflank-mailman (input) for mailman id 146525;
 Thu, 24 Jun 2021 09:12:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLPe-0002Sq-6c; Thu, 24 Jun 2021 09:12:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLPd-0004hq-TR; Thu, 24 Jun 2021 09:12:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLPd-0006pX-IS; Thu, 24 Jun 2021 09:12:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwLPd-0004bV-Hw; Thu, 24 Jun 2021 09:12:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UxjWlKzuF98ClAandcnX11NNeQQ6VztvVNsOSxBMPh8=; b=HXx7e9ioSWpjSVZKhNmYC2Fzwa
	AKSkjEPXggMOfJsafBe3RWnWyb0Yqab3CFDNsQRlP3QcKrMuz5npYNVumhSIEKw29BJeOQb8cKKSe
	KEqm1MO7C250xamBQk9qLszgv4PwO6SZXttJx1tm/Sijag7oF15ASeq9WT0kvNT8wOlw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163009-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163009: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=20ca52882877ba9025da2ee58c8dab7808eca457
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 09:12:41 +0000

flight 163009 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163009/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 20ca52882877ba9025da2ee58c8dab7808eca457
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   20 days
Failing since        162368  2021-06-04 15:42:59 Z   19 days   44 attempts
Testing same since   163009  2021-06-23 23:41:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2303 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 09:20:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 09:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146535.269619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLXK-0003sR-Jd; Thu, 24 Jun 2021 09:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146535.269619; Thu, 24 Jun 2021 09:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLXK-0003sK-Ft; Thu, 24 Jun 2021 09:20:38 +0000
Received: by outflank-mailman (input) for mailman id 146535;
 Thu, 24 Jun 2021 09:20:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwLXJ-0003sE-8X
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 09:20:37 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69bbb48e-e750-40b5-b74d-62c367985ada;
 Thu, 24 Jun 2021 09:20:35 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id BD7101FD6C;
 Thu, 24 Jun 2021 09:20:34 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 8DD3D11A97;
 Thu, 24 Jun 2021 09:20:34 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id Le1zIWJO1GBiPgAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 09:20:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69bbb48e-e750-40b5-b74d-62c367985ada
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624526434; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=T0Mxx5S8QM5qjHZ3fVFo6cdp1/NgSAT6Ti1PrhRyrvw=;
	b=m8pZwMkR6ohSRlUlB7BCsYS8snSCSgsTX6UX2Xd547bkPrcBcJ22XsjO0RneE2L3JNnr3Q
	g++T9u4gEb2GBiIhS08T43zVxQ917S7IhjPntQmDOi1HWK+Po38zkIf9xqweoJSdU5r62x
	iOA/RvUjnLvQ1LHLSRdh2uoCysuJWsA=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
 <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
 <c6ad91cd-f531-5c19-e246-bb8c4ce304f2@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
Message-ID: <5d641d86-53ce-27d6-6a54-c58d85164204@suse.com>
Date: Thu, 24 Jun 2021 11:20:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <c6ad91cd-f531-5c19-e246-bb8c4ce304f2@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="oLJydyKWJTsKaXRP0iN12efFxPFM5U8w5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--oLJydyKWJTsKaXRP0iN12efFxPFM5U8w5
Content-Type: multipart/mixed; boundary="mnaGEKbA9tN1bXRQi5TepbglzPA5OLbNX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <5d641d86-53ce-27d6-6a54-c58d85164204@suse.com>
Subject: Re: [PATCH 08/10] tools/xenstored: Extend restore code to handle
 multiple input buffer
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-9-julien@xen.org>
 <444688a1-180f-7cf3-3461-0001030471d4@suse.com>
 <c6ad91cd-f531-5c19-e246-bb8c4ce304f2@xen.org>
In-Reply-To: <c6ad91cd-f531-5c19-e246-bb8c4ce304f2@xen.org>

--mnaGEKbA9tN1bXRQi5TepbglzPA5OLbNX
Content-Type: multipart/mixed;
 boundary="------------C8187A122C89074DAA432785"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C8187A122C89074DAA432785
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.21 10:42, Julien Grall wrote:
> Hi Juergen,
> 
> On 24/06/2021 10:30, Juergen Gross wrote:
>> On 16.06.21 16:43, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Currently, the restore code assumes the stream will contain at
>>> most one in-flight request per connection. In a follow-up change, we
>>> will want to transfer multiple in-flight requests.
>>>
>>> The function read_state_buffered() is now extended to restore multiple
>>> in-flight requests. Complete requests will be queued as delayed
>>> requests; if there is a partial request (only the last one can be),
>>> it will be used as the current in-flight request.
>>>
>>> Note that we want to bypass the quota check for delayed requests as
>>> the new Xenstore may have a lower limit.
>>>
>>> Lastly, there is no need to change the specification as there was
>>> no restriction on the number of in-flight requests preserved.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ---
>>>  tools/xenstore/xenstored_core.c | 56 ++++++++++++++++++++++++++++-----
>>>  1 file changed, 48 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>>> index a5084a5b173d..5b7ab7f74013 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -1486,6 +1486,10 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>>>      enum xsd_sockmsg_type type = in->hdr.msg.type;
>>>      int ret;
>>> +    /* At least send_error() and send_reply() expect conn->in == in */
>>> +    assert(conn->in == in);
>>> +    trace_io(conn, in, 0);
>>> +
>>>      if ((unsigned int)type >= XS_TYPE_COUNT || !wire_funcs[type].func) {
>>>          eprintf("Client unknown operation %i", type);
>>>          send_error(conn, ENOSYS);
>>> @@ -1515,6 +1519,23 @@ static void process_message(struct connection *conn, struct buffered_data *in)
>>>      conn->transaction = NULL;
>>>  }
>>> +static bool process_delayed_message(struct delayed_request *req)
>>> +{
>>> +    struct connection *conn = req->data;
>>> +    struct buffered_data *saved_in = conn->in;
>>> +
>>> +    /*
>>> +     * Part of process_message() expects conn->in to contain the
>>> +     * processed response. So save the current conn->in and restore it
>>> +     * afterwards.
>>> +     */
>>> +    conn->in = req->in;
>>> +    process_message(req->data, req->in);
>>> +    conn->in = saved_in;
>>
>> This pattern was added to do_lu_start() already, and it will be needed
>> for each other function being called via call_delayed().
>>
>> Maybe it would be better to add conn explicitly to struct
>> delayed_request (or even better: replace data with conn) and to do the
>> conn->in saving/setting/restoring in call_delayed() instead?
> 
> This was my original approach, but I abandoned it because in the case of
> do_lu_start() we need to save the original conn->in in the stream (see
> patch #3 for more details).
> 
> If we overwrite conn->in in call_delayed(), then we need a different way
> to find the original conn->in. I couldn't find a nice way and decided to
> settle with the duplication.

Ah, okay, understood. Even if we were able to restore conn->in, it
would be the same amount of code again, but harder to follow.
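
For reference, the centralised variant suggested above could look roughly
like this. This is a hypothetical sketch with stand-in types; the real
struct delayed_request and process_message() in xenstored_core.c carry much
more state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for xenstored's types (simplified; not the real definitions). */
struct buffered_data { int id; };

struct connection {
	struct buffered_data *in;	/* current in-flight request */
};

struct delayed_request {
	struct connection *conn;	/* conn made explicit, as suggested */
	struct buffered_data *in;	/* the request to replay */
	bool (*func)(struct delayed_request *req);
};

/* The delayed handler no longer needs its own save/restore dance. */
static bool process_delayed_message(struct delayed_request *req)
{
	/* In the real code this would call process_message(conn, req->in);
	 * the caller has already pointed conn->in at req->in. */
	assert(req->conn->in == req->in);
	return true;
}

/* call_delayed() centralises the save/set/restore pattern so every
 * delayed handler sees the request it is replaying as the current one. */
static bool call_delayed(struct delayed_request *req)
{
	struct buffered_data *saved_in = req->conn->in;
	bool done;

	req->conn->in = req->in;
	done = req->func(req);
	req->conn->in = saved_in;

	return done;
}
```

As discussed above, the catch is do_lu_start(): it needs the original
conn->in, which this centralisation would hide.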


Juergen

--------------C8187A122C89074DAA432785
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C8187A122C89074DAA432785--

--mnaGEKbA9tN1bXRQi5TepbglzPA5OLbNX--

--oLJydyKWJTsKaXRP0iN12efFxPFM5U8w5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUTmIFAwAAAAAACgkQsN6d1ii/Ey94
Vgf+IyKi5hj+OGWlxFUHaZ3aBfTOSJ9mn6AF07x+2AtXPfeuItG9JbTzoghVWyMNlM4wXbo4MGp3
e2eAkx6cbyDj1/6AiuySiQE/Gyzggfj5/vkxi0ctPPMkHpEqzVGChnM1i8s1bvcjy9WGoHUuB3pc
PD/V3gfoYzt3CbS7SA2MATyfku9NxxssyWSaBl1PpaPvYQpH74BWLbBAQJulPM1G8FBjWqiD3Ogb
QU3nDpEZObvT+P3afhp5nbnWTm9ieNiZnZxGwQ2xqDYXZ57srQrJcurDC9M9d+/nLpsgFyn6aMTO
sW/1h1xtsQfPaTeWEChxOSzElRpTSLhftgpp8lG4cQ==
=6shL
-----END PGP SIGNATURE-----

--oLJydyKWJTsKaXRP0iN12efFxPFM5U8w5--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 09:24:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 09:24:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146544.269629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLad-0004Xx-4T; Thu, 24 Jun 2021 09:24:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146544.269629; Thu, 24 Jun 2021 09:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwLad-0004Xq-1K; Thu, 24 Jun 2021 09:24:03 +0000
Received: by outflank-mailman (input) for mailman id 146544;
 Thu, 24 Jun 2021 09:24:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwLab-0004Xi-Iw
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 09:24:01 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38f517d1-93d5-440a-bd43-8e18a842ce72;
 Thu, 24 Jun 2021 09:24:00 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 18AB22195D;
 Thu, 24 Jun 2021 09:24:00 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E043011A97;
 Thu, 24 Jun 2021 09:23:59 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id z4GlNS9P1GAXQAAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 09:23:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38f517d1-93d5-440a-bd43-8e18a842ce72
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624526640; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lfjqJs9CAcNNhXC+z3oQjM+eSBWPvGfNycQzwEl7QRc=;
	b=oACgJAMzQXHmQq1hOAomY7FEw/X83+x+HLA/wZC55Kq3b6zfUsPM1cKCw4hcqV7m/VcOmO
	PZnvm8yoHQbpsfCSBzEX/AbBHY0q4K8c1qA3hWamw/164ENYSDXCK7wPPIhiiSDXwKeVfU
	jcHLYxZTFqZKbNqSc8f4P1loN+kzuHI=
Subject: Re: [PATCH 10/10] tools/xenstored: Delay new transaction while
 Live-Update is pending
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-11-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <f2a5a643-17fb-4cb3-fb54-2c80ec45e43e@suse.com>
Date: Thu, 24 Jun 2021 11:23:59 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-11-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="IYkzF63UCtCJxdBqRNaYuPSRbLxNLC57b"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--IYkzF63UCtCJxdBqRNaYuPSRbLxNLC57b
Content-Type: multipart/mixed; boundary="NcT2SaY58QGVI6xvg3pmLzItYEgMn1oml";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <f2a5a643-17fb-4cb3-fb54-2c80ec45e43e@suse.com>
Subject: Re: [PATCH 10/10] tools/xenstored: Delay new transaction while
 Live-Update is pending
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-11-julien@xen.org>
In-Reply-To: <20210616144324.31652-11-julien@xen.org>

--NcT2SaY58QGVI6xvg3pmLzItYEgMn1oml
Content-Type: multipart/mixed;
 boundary="------------58443455899DAF2201504E25"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------58443455899DAF2201504E25
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.06.21 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, Live-Update will, by default, not proceed if there are
> in-flight transactions. It is possible to force it by passing -F, but
> this will break any connection with in-flight transactions.
> 
> There are PV drivers out there that may never terminate some
> transactions. On a host running such a guest, we would need to use -F.
> Unfortunately, this also risks breaking well-behaved guests (and even
> dom0) because Live-Update will happen as soon as the timeout is hit.
> 
> Ideally, we would want to preserve transactions, but this requires
> some work and a lot of testing to be able to use it in production.
> 
> As a stop gap, we want to limit the damage of -F. This patch will delay
> any transactions that are started after Live-Update has been requested.
> 
> If the request cannot be delayed, the connection will be stalled to
> avoid losing requests.
> 
> If the connection already has a pending transaction before Live-Update,
> then new transactions will not be delayed. This is to avoid stalling
> the connection.
> 
> With this stop gap in place, domains with long-running transactions
> will still break when using -F, but other domains which start a
> transaction in the middle of Live-Update will continue to work.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
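
The policy described in the commit message can be condensed into a small
decision helper. This is a hypothetical sketch with stand-in state; the
actual logic in xenstored is more involved:

```c
#include <stdbool.h>

/* Stand-in for the connection state tracked by xenstored. */
struct connection {
	unsigned int transaction_started;	/* transactions already pending */
};

/* Stand-in for "a Live-Update request is pending". */
static bool lu_is_pending;

/*
 * Delay a newly requested transaction only while Live-Update is pending,
 * and only if the connection has no transaction in flight already
 * (delaying in that case could stall the connection, as noted above).
 */
static bool should_delay_transaction(const struct connection *conn)
{
	if (!lu_is_pending)
		return false;
	if (conn->transaction_started)
		return false;
	return true;
}
```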

--------------58443455899DAF2201504E25--

--NcT2SaY58QGVI6xvg3pmLzItYEgMn1oml--

--IYkzF63UCtCJxdBqRNaYuPSRbLxNLC57b
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUTy8FAwAAAAAACgkQsN6d1ii/Ey82
8gf/W0WEIFnYqcX95AnediIZvOITedmkujGEiqhqJ/S/Q4wVaAfn52xChfWVrjM2HQr+2eLQzwJL
lWA/4Va8Birjsulo4VdSflCSWDAs1Y8cXXXWIL0Ds4j7dF+HI7RTz8SgaFpHg5Td66q/tySuu1yQ
m6N6WalmC0aHSFU9eRK0ZyKt6Y2dVLtm36d2hbES0bnlXlIOXNv2eLygyo1dKUr5Msa7lCUXK1A3
mJi/4p5QI1hr7dH+OsBdMxP1cyjJUrgIKrgxAlB2LxiX5LdhzWsKO+cQHCu3hfQxNjaINSY8MLxJ
S1DOPTk3r8iMWS7F310BVZJ9lq6GaRm3UhkaQ0pJWA==
=h+8i
-----END PGP SIGNATURE-----

--IYkzF63UCtCJxdBqRNaYuPSRbLxNLC57b--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 09:54:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 09:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146562.269659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwM3X-0000I4-UW; Thu, 24 Jun 2021 09:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146562.269659; Thu, 24 Jun 2021 09:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwM3X-0000Hw-QO; Thu, 24 Jun 2021 09:53:55 +0000
Received: by outflank-mailman (input) for mailman id 146562;
 Thu, 24 Jun 2021 09:53:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D+gE=LS=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1lwM3W-0000Hq-I1
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 09:53:54 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45c9bbe6-3b57-4de9-bec9-d742fe392d40;
 Thu, 24 Jun 2021 09:53:52 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8AECA2197D;
 Thu, 24 Jun 2021 09:53:51 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 6305C11A97;
 Thu, 24 Jun 2021 09:53:51 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id TO47Fy9W1GC8UAAALh3uQQ
 (envelope-from <tzimmermann@suse.de>); Thu, 24 Jun 2021 09:53:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45c9bbe6-3b57-4de9-bec9-d742fe392d40
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1624528431; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=sgQz981DmPHHoBLAmzWHm2K8NJ1AiERtFp8KGPCVdkQ=;
	b=FQ7TZewTeM434xUo1DVCkrknJtFpjXMUkHaPsMZSmuoBOxjwWbmZFsbC8ispHDeFkxYvRr
	S9EFKZUUeGdetUlWQxEqD5jX9l0CGhU7OES+4lWTRXMBer1u6d54+v4xgQ2QeifKy1tTa8
	yKV9gEeGvxnIT/MjnG0Hg22+twSUIX4=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1624528431;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=sgQz981DmPHHoBLAmzWHm2K8NJ1AiERtFp8KGPCVdkQ=;
	b=JuJLk2PLfiI8Nc5dOkA8x0XYrxAHa22G8Prnn+9jh+IDZeLA3r5DVF7YdCDnl1ebMjzf0x
	2QA9tygMS+fZpuAA==
From: Thomas Zimmermann <tzimmermann@suse.de>
To: oleksandr_andrushchenko@epam.com,
	airlied@linux.ie,
	daniel@ffwll.ch
Cc: dri-devel@lists.freedesktop.org,
	xen-devel@lists.xenproject.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH] drm/xen: Implement mmap as GEM object function
Date: Thu, 24 Jun 2021 11:53:49 +0200
Message-Id: <20210624095349.8874-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving the driver-specific mmap code into a GEM object function allows
us to use the DRM helpers for the various mmap callbacks.

The respective Xen functions are removed. The file_operations instance
xen_drm_dev_fops is now created by the helper macro
DEFINE_DRM_GEM_FOPS().
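
For reference, DEFINE_DRM_GEM_FOPS() lives in include/drm/drm_gem.h and
expands to roughly the following (paraphrased sketch; see the header for
the authoritative definition). The key point is that .mmap is routed
through drm_gem_mmap(), which dispatches to the GEM object's .mmap
function:

```c
/* Approximate expansion of DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops): */
static const struct file_operations xen_drm_dev_fops = {
	.owner          = THIS_MODULE,
	.open           = drm_open,
	.release        = drm_release,
	.unlocked_ioctl = drm_ioctl,
	.compat_ioctl   = drm_compat_ioctl,
	.poll           = drm_poll,
	.read           = drm_read,
	.llseek         = noop_llseek,
	.mmap           = drm_gem_mmap, /* calls gem_obj->funcs->mmap */
};
```

The net effect is that mmap on the DRM file now goes through
drm_gem_mmap(), which looks up the GEM object and invokes its .mmap
callback, here xen_drm_front_gem_object_mmap().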

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/xen/xen_drm_front.c     |  16 +---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 108 +++++++++---------------
 drivers/gpu/drm/xen/xen_drm_front_gem.h |   7 --
 3 files changed, 44 insertions(+), 87 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 9f14d99c763c..434064c820e8 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -469,19 +469,7 @@ static void xen_drm_drv_release(struct drm_device *dev)
 	kfree(drm_info);
 }
 
-static const struct file_operations xen_drm_dev_fops = {
-	.owner          = THIS_MODULE,
-	.open           = drm_open,
-	.release        = drm_release,
-	.unlocked_ioctl = drm_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl   = drm_compat_ioctl,
-#endif
-	.poll           = drm_poll,
-	.read           = drm_read,
-	.llseek         = no_llseek,
-	.mmap           = xen_drm_front_gem_mmap,
-};
+DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops);
 
 static const struct drm_driver xen_drm_driver = {
 	.driver_features           = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
@@ -489,7 +477,7 @@ static const struct drm_driver xen_drm_driver = {
 	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
-	.gem_prime_mmap            = xen_drm_front_gem_prime_mmap,
+	.gem_prime_mmap            = drm_gem_prime_mmap,
 	.dumb_create               = xen_drm_drv_dumb_create,
 	.fops                      = &xen_drm_dev_fops,
 	.name                      = "xendrm-du",
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index b293c67230ef..dd358ba2bf8e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -57,6 +57,47 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
 	xen_obj->pages = NULL;
 }
 
+static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
+					 struct vm_area_struct *vma)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	int ret;
+
+	vma->vm_ops = gem_obj->funcs->vm_ops;
+
+	/*
+	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+	 * the whole buffer.
+	 */
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_pgoff = 0;
+
+	/*
+	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
+	 * all memory which is shared with other entities in the system
+	 * (including the hypervisor and other guests) must reside in memory
+	 * which is mapped as Normal Inner Write-Back Outer Write-Back
+	 * Inner-Shareable.
+	 */
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+	/*
+	 * The vm_operations_struct.fault handler would be called on lazy CPU
+	 * access. For GPUs this isn't the case, because the CPU doesn't
+	 * touch the memory. Insert pages now, so both CPU and GPU are happy.
+	 *
+	 * FIXME: as we insert all the pages now, no .fault handler should
+	 * ever be called, so don't provide one.
+	 */
+	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
+	if (ret < 0)
+		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
+
+	return ret;
+}
+
 static const struct vm_operations_struct xen_drm_drv_vm_ops = {
 	.open           = drm_gem_vm_open,
 	.close          = drm_gem_vm_close,
@@ -67,6 +108,7 @@ static const struct drm_gem_object_funcs xen_drm_front_gem_object_funcs = {
 	.get_sg_table = xen_drm_front_gem_get_sg_table,
 	.vmap = xen_drm_front_gem_prime_vmap,
 	.vunmap = xen_drm_front_gem_prime_vunmap,
+	.mmap = xen_drm_front_gem_object_mmap,
 	.vm_ops = &xen_drm_drv_vm_ops,
 };
 
@@ -238,58 +280,6 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 	return &xen_obj->base;
 }
 
-static int gem_mmap_obj(struct xen_gem_object *xen_obj,
-			struct vm_area_struct *vma)
-{
-	int ret;
-
-	/*
-	 * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-	 * the whole buffer.
-	 */
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
-	vma->vm_pgoff = 0;
-	/*
-	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
-	 * all memory which is shared with other entities in the system
-	 * (including the hypervisor and other guests) must reside in memory
-	 * which is mapped as Normal Inner Write-Back Outer Write-Back
-	 * Inner-Shareable.
-	 */
-	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-
-	/*
-	 * vm_operations_struct.fault handler will be called if CPU access
-	 * to VM is here. For GPUs this isn't the case, because CPU
-	 * doesn't touch the memory. Insert pages now, so both CPU and GPU are
-	 * happy.
-	 * FIXME: as we insert all the pages now then no .fault handler must
-	 * be called, so don't provide one
-	 */
-	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
-	if (ret < 0)
-		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
-
-	return ret;
-}
-
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	struct drm_gem_object *gem_obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret < 0)
-		return ret;
-
-	gem_obj = vma->vm_private_data;
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
@@ -313,17 +303,3 @@ void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 {
 	vunmap(map->vaddr);
 }
-
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	int ret;
-
-	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a4e67d0a149c..eaea470f7001 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -15,9 +15,7 @@ struct dma_buf_attachment;
 struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
-struct file;
 struct sg_table;
-struct vm_area_struct;
 
 struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
 						size_t size);
@@ -33,15 +31,10 @@ struct page **xen_drm_front_gem_get_pages(struct drm_gem_object *obj);
 
 void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
 				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    struct dma_buf_map *map);
 
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma);
-
 #endif /* __XEN_DRM_FRONT_GEM_H */
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:20:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:20:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146568.269670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMTL-0003Tr-50; Thu, 24 Jun 2021 10:20:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146568.269670; Thu, 24 Jun 2021 10:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMTL-0003Tk-1C; Thu, 24 Jun 2021 10:20:35 +0000
Received: by outflank-mailman (input) for mailman id 146568;
 Thu, 24 Jun 2021 10:20:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwMTJ-0003Ta-VG; Thu, 24 Jun 2021 10:20:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwMTJ-0005tV-N4; Thu, 24 Jun 2021 10:20:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwMTJ-000149-Dv; Thu, 24 Jun 2021 10:20:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwMTJ-0005J9-Bi; Thu, 24 Jun 2021 10:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=byW8+gg7V+H6c/ZAztVOP4Ru6j3fTEWkSzkTrmcDVxY=; b=rBdRV4f0p1HW9BJEaNAz9qLDyC
	1YBGK6KDVflZsu5I1z3FyOEhXMngiChQ9k//91Wb8cRcR1xI5HZf6rydQIjRLkYu7fHiqkof5dRxt
	ZAErZYmt6FadS6IXvxLM4gb4PdHH0T6w5Ucb2pqtRI25JW3nDvPNkec/p9iopYdxWtFA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163004-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163004: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
X-Osstest-Versions-That:
    xen=5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 10:20:33 +0000

flight 163004 xen-unstable real [real]
flight 163012 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163004/
http://logs.test-lab.xenproject.org/osstest/logs/163012/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163012-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162533
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162533
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162533
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162533
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162533
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162533
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162533
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7
baseline version:
 xen                  5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1

Last test of basis   162533  2021-06-08 01:53:53 Z   16 days
Failing since        162556  2021-06-08 22:39:08 Z   15 days   22 attempts
Testing same since   163004  2021-06-23 16:21:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Christian Lindig <christian.lindig@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Dario Faggioli <dfaggioli@suse.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5268b2dcf7..c7691f5e34  c7691f5e340f3b579d40c77548f63133cdf5aac7 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:28:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146614.269858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMb1-0006Vl-8f; Thu, 24 Jun 2021 10:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146614.269858; Thu, 24 Jun 2021 10:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMb1-0006Ve-5g; Thu, 24 Jun 2021 10:28:31 +0000
Received: by outflank-mailman (input) for mailman id 146614;
 Thu, 24 Jun 2021 10:28:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwMaz-0006Ub-GC
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:28:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMax-00068P-9B; Thu, 24 Jun 2021 10:28:27 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMax-0006jm-1B; Thu, 24 Jun 2021 10:28:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=eGnAXvie1t0y8LYszeIBp26woJ1w2LQ5PuA7OhLh51c=; b=hi1jb4MeNm3TA//f26P+nnlgKW
	gRUAK6A7iu9ZRJpkgwsZe0UP2K9XqWuNPbTcwtJfPx9rGpezqXsOULgE91NZ0WUlLctjHcV5RyBHU
	OuaP8SS7nwRf/yfREgSw751ihOYyboGHg2Naoc4Y3N6TIwxD+iFpps9aZNxIFTUQluA0=;
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
Date: Thu, 24 Jun 2021 12:28:24 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 24/06/2021 10:41, Juergen Gross wrote:
> On 16.06.21 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, only Live-Update request can be delayed. In a follow-up,
>> we will want to delay more requests (e.g. transaction start).
>> Therefore we want to preserve delayed requests across Live-Update.
>>
>> Delayed requests are just complete "in" buffer. So the code is
>> refactored to allow sharing the code to dump "in" buffer.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++--------
>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c 
>> b/tools/xenstore/xenstored_core.c
>> index 5b7ab7f74013..9eca58682b51 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>>       return NULL;
>>   }
>> +static const char *dump_input_buffered_data(FILE *fp,
>> +                        const struct buffered_data *in,
>> +                        unsigned int *total_len)
>> +{
>> +    unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
>> +
>> +    *total_len += hlen;
>> +    if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
>> +        return "Dump read data error";
>> +    if (!in->inhdr && in->used) {
>> +        *total_len += in->used;
>> +        if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>> +            return "Dump read data error";
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>>   /* Called twice: first with fp == NULL to get length, then for 
>> writing data. */
>>   const char *dump_state_buffered_data(FILE *fp, const struct 
>> connection *c,
>>                        struct xs_state_connection *sc)
>>   {
>>       unsigned int len = 0, used;
>> -    struct buffered_data *out, *in = c->in;
>> +    struct buffered_data *out;
>>       bool partial = true;
>> +    struct delayed_request *req;
>> +    const char *ret;
>> -    if (in) {
>> -        len = in->inhdr ? in->used : sizeof(in->hdr);
>> -        if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
>> -            return "Dump read data error";
>> -        if (!in->inhdr && in->used) {
>> -            len += in->used;
>> -            if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>> -                return "Dump read data error";
>> -        }
>> +    /* Dump any command that was delayed */
>> +    list_for_each_entry(req, &c->delayed, list) {
> 
> Please add a comment here that the following if() serves especially to
> avoid dumping the data for do_lu_start().

Would you be happy to give your acked-by/reviewed-by if I add the 
following on commit?

"
We only want to preserve commands that wasn't processed at all. All the 
other delayed requests (such as do_lu_start()) must be processed before 
Live-Update.
"

> 
>> +        if (req->func != process_delayed_message)
>> +            continue;
>> +
>> +        assert(!req->in->inhdr);
>> +        if ((ret = dump_input_buffered_data(fp, req->in, &len)))
>> +            return ret;
>>       }
>> +    if (c->in && (ret = dump_input_buffered_data(fp, c->in, &len)))
>> +        return ret;
>> +
>>       if (sc) {
>>           sc->data_in_len = len;
>>           sc->data_resp_len = 0;
>>
> 
> Juergen

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:43:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146662.269980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMpQ-0002IL-1o; Thu, 24 Jun 2021 10:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146662.269980; Thu, 24 Jun 2021 10:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMpP-0002IE-Uk; Thu, 24 Jun 2021 10:43:23 +0000
Received: by outflank-mailman (input) for mailman id 146662;
 Thu, 24 Jun 2021 10:43:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwMpO-0002I8-Pa
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:43:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMpK-0006SJ-C4; Thu, 24 Jun 2021 10:43:18 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMpK-0008UQ-3Y; Thu, 24 Jun 2021 10:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=o1/nsgPR9iJ5Xj/zIhK2HyHOQYdK66thWYsF9hwLSyU=; b=BXOINeH++6cJliMHL9hMn1rptn
	2iGuKZgTUe5mHCPsR/RgbI1dzY36jOghuWKiKXf7Y6c1cVbXyfhOJ1mfVM9Hwx3lGwkO2QE4JglyN
	UYrnyKwtFAnNihq9xY+GdrTo71jWEVGaj8hNYF5azh1yoOOvgGTcm3tnvwmR43+Ygbc0=;
Subject: Re: [PATCH 00/10] tools/xenstored: Bug fixes + Improve Live-Update
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <20210616144324.31652-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <a3b28a06-92e7-0bd5-0951-2db51a0bee46@xen.org>
Date: Thu, 24 Jun 2021 12:43:15 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210616144324.31652-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 16/06/2021 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> At the moment, Live-Update will, by default, not proceed if there are
> in-flight transactions. It is possible to force it by passing -F, but
> this will break any connections with in-flight transactions.
> 
> There are PV drivers out there that may never terminate some
> transactions. On hosts running such guests, we would need to use -F.
> Unfortunately, this also risks breaking well-behaved guests (and even
> dom0), because Live-Update will happen as soon as the timeout is hit.
> 
> This series aims to allow Live-Update to proceed more safely even when
> the -F option is used.
> 
> The first part of the series contains a few fixes for bugs found while
> testing Live-Update.
> 
> Cheers,
> 
> Julien Grall (10):
>    MAINTAINERS: Add myself as reviewers for tools/xenstore
>    tools/xenstored: Introduce lu_get_connection() and use it
>    tools/xenstore: Don't assume conn->in points to the LU request
>    tools/xenstored: Limit the number of requests a connection can delay
>    tools/xenstored: xenstored_core.h should include fcntl.h
>    tools/xenstored: Introduce a wrapper for conn->funcs->can_{read,
>      write}
>    tools/xenstored: delay_request: don't assume conn->in == in
>    tools/xenstored: Extend restore code to handle multiple input buffer

I have committed the first 8 patches.

>    tools/xenstored: Dump delayed requests
>    tools/xenstored: Delay new transaction while Live-Update is pending

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:44:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146670.270004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMqX-00034Y-Lm; Thu, 24 Jun 2021 10:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146670.270004; Thu, 24 Jun 2021 10:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMqX-00034R-Gx; Thu, 24 Jun 2021 10:44:33 +0000
Received: by outflank-mailman (input) for mailman id 146670;
 Thu, 24 Jun 2021 10:44:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwMqV-000342-R4
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:44:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMqV-0006U3-1B; Thu, 24 Jun 2021 10:44:31 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMqU-0000IY-Qq; Thu, 24 Jun 2021 10:44:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=kB/cuO9J71PCSUG1oXEvChtSpefXwEQF0XrHTCKSrpk=; b=1Mvgrb7a6kqClKUsOqSHHHoIAE
	9WYQvJVDjeRzzhdVIxoE9AN28CSba8yvBKBQjngefySqvstCFZvlN0vqdhfCpy13t2c9JrYcc+FRx
	dzPhNX+onGw608QAQjmyRuvi2IL5BSYvC6feBcTBQ3ntBhtUrtjHQgOI0ztaUvSACPho=;
Subject: Re: [PATCH] iommu/arm: ipmmu-vmsa: Add compatible for Renesas R-Car
 M3-W+ SoC
From: Julien Grall <julien@xen.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
 <36ae57eb-e7ce-8ca9-0f4e-23b9f1b0c0e7@xen.org>
Message-ID: <f63fe592-6ada-2a0f-1b94-2fd6facfd29e@xen.org>
Date: Thu, 24 Jun 2021 12:44:28 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <36ae57eb-e7ce-8ca9-0f4e-23b9f1b0c0e7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 22/06/2021 18:43, Julien Grall wrote:
> Hi Oleksandr,
> 
> On 14/06/2021 21:18, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The "renesas,r8a77961" string identifies M3-W+ (aka M3-W ES3.0)
>> instead of "renesas,r8a7796" since Linux commit:
>> "9c9f7891093b02eb64ca4e1c7ab776a4296c058f soc: renesas: Identify R-Car 
>> M3-W+".
>> Add new compatible to the Xen driver.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Committed.

Cheers,

> 
> Cheers,
> 
>> ---
>>   xen/drivers/passthrough/arm/ipmmu-vmsa.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c 
>> b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
>> index 8b8e3a0..1255b0d 100644
>> --- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
>> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
>> @@ -1316,6 +1316,7 @@ static const struct dt_device_match 
>> ipmmu_dt_match[] __initconst =
>>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7795"),
>>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77965"),
>>       DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7796"),
>> +    DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77961"),
>>       { /* sentinel */ },
>>   };
>>
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:45:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:45:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146680.270036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMrT-0003wk-6w; Thu, 24 Jun 2021 10:45:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146680.270036; Thu, 24 Jun 2021 10:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMrT-0003wd-1D; Thu, 24 Jun 2021 10:45:31 +0000
Received: by outflank-mailman (input) for mailman id 146680;
 Thu, 24 Jun 2021 10:45:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwMrR-0003wN-LH
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:45:29 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 296616c1-0f70-43d7-ab33-a3162f2b550f;
 Thu, 24 Jun 2021 10:45:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2E24D1FD35;
 Thu, 24 Jun 2021 10:45:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E0CAB11A97;
 Thu, 24 Jun 2021 10:45:26 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id zdveM0Zi1GBTbAAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 10:45:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 296616c1-0f70-43d7-ab33-a3162f2b550f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624531527; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=E7cVxLqvqo3PLW6jrqqbFokYkCRbWenjtRHMhRpYifc=;
	b=Qk8Ue81slrZRV462BnnkWpUwKFsTvJYKmq2muRb5HzZfoMxW9WHCEo2xTlyDpTJNhHuPdQ
	Eab2ShtR/k5seSfFwoXm92VMVXTxS8VBbR/I7UBCmgrw8hbnJ/RZXDZLxashq24Q9rQCOx
	ED/qUu3J3eKfSidlgvifOsWgNiK+TDg=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624531527; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=E7cVxLqvqo3PLW6jrqqbFokYkCRbWenjtRHMhRpYifc=;
	b=Qk8Ue81slrZRV462BnnkWpUwKFsTvJYKmq2muRb5HzZfoMxW9WHCEo2xTlyDpTJNhHuPdQ
	Eab2ShtR/k5seSfFwoXm92VMVXTxS8VBbR/I7UBCmgrw8hbnJ/RZXDZLxashq24Q9rQCOx
	ED/qUu3J3eKfSidlgvifOsWgNiK+TDg=
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
Date: Thu, 24 Jun 2021 12:45:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="STaUbYCXc7fCqIbHNTHESdzQk5wtQ2QpT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--STaUbYCXc7fCqIbHNTHESdzQk5wtQ2QpT
Content-Type: multipart/mixed; boundary="aztsCj61IwGDAZHRD8Dzmr672w3s8rZxf";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
In-Reply-To: <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>

--aztsCj61IwGDAZHRD8Dzmr672w3s8rZxf
Content-Type: multipart/mixed;
 boundary="------------02F1D18DB2EA1423ACD93D9D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------02F1D18DB2EA1423ACD93D9D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.21 12:28, Julien Grall wrote:
> Hi Juergen,
> 
> On 24/06/2021 10:41, Juergen Gross wrote:
>> On 16.06.21 16:43, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Currently, only Live-Update request can be delayed. In a follow-up,
>>> we will want to delay more requests (e.g. transaction start).
>>> Therefore we want to preserve delayed requests across Live-Update.
>>>
>>> Delayed requests are just complete "in" buffer. So the code is
>>> refactored to allow sharing the code to dump "in" buffer.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ---
>>>   tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++--------
>>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c
>>> b/tools/xenstore/xenstored_core.c
>>> index 5b7ab7f74013..9eca58682b51 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>>>       return NULL;
>>>   }
>>> +static const char *dump_input_buffered_data(FILE *fp,
>>> +                        const struct buffered_data *in,
>>> +                        unsigned int *total_len)
>>> +{
>>> +    unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
>>> +
>>> +    *total_len += hlen;
>>> +    if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
>>> +        return "Dump read data error";
>>> +    if (!in->inhdr && in->used) {
>>> +        *total_len += in->used;
>>> +        if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>> +            return "Dump read data error";
>>> +    }
>>> +
>>> +    return NULL;
>>> +}
>>> +
>>>   /* Called twice: first with fp == NULL to get length, then for
>>> writing data. */
>>>   const char *dump_state_buffered_data(FILE *fp, const struct
>>> connection *c,
>>>                        struct xs_state_connection *sc)
>>>   {
>>>       unsigned int len = 0, used;
>>> -    struct buffered_data *out, *in = c->in;
>>> +    struct buffered_data *out;
>>>       bool partial = true;
>>> +    struct delayed_request *req;
>>> +    const char *ret;
>>> -    if (in) {
>>> -        len = in->inhdr ? in->used : sizeof(in->hdr);
>>> -        if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
>>> -            return "Dump read data error";
>>> -        if (!in->inhdr && in->used) {
>>> -            len += in->used;
>>> -            if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>> -                return "Dump read data error";
>>> -        }
>>> +    /* Dump any command that was delayed */
>>> +    list_for_each_entry(req, &c->delayed, list) {
>>
>> Please add a comment here that the following if() serves especially to
>> avoid dumping the data for do_lu_start().
> 
> Would you be happy to give your acked-by/reviewed-by if I add the
> following on commit?
> 
> "
> We only want to preserve commands that wasn't processed at all. All the

s/wasn't/weren't/

> other delayed requests (such as do_lu_start()) must be processed before
> Live-Update.
> "

With that change I'm fine.


Juergen

--------------02F1D18DB2EA1423ACD93D9D--

--aztsCj61IwGDAZHRD8Dzmr672w3s8rZxf--

--STaUbYCXc7fCqIbHNTHESdzQk5wtQ2QpT--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:45:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146684.270045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMrt-0004Vo-DA; Thu, 24 Jun 2021 10:45:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146684.270045; Thu, 24 Jun 2021 10:45:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMrt-0004Vh-AG; Thu, 24 Jun 2021 10:45:57 +0000
Received: by outflank-mailman (input) for mailman id 146684;
 Thu, 24 Jun 2021 10:45:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwMrs-0004VZ-TE
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:45:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMrr-0006Wc-Hy; Thu, 24 Jun 2021 10:45:55 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMrr-0000S3-BS; Thu, 24 Jun 2021 10:45:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pe2WkScqUSNzasT0QYQWtMKT3duB1hhvHM8kZVK5rrU=; b=RE0ck0LzKC0RLRIHw/5L3jijwg
	PpYDoURqDVMER2GZ3TLSEllUVJ1NaR8rDLxAJQV36TGwVUA6b6S2aE1oVm0QBM7444o1GCAIvzzsF
	r64x5JJ8Weo6HeBZTkMEyfQZVY7ecFe53YcoFkcs0c+gVDS98C75OZMnfkBEsAdUMeu8=;
Subject: Re: [PATCH] maintainers: adding new reviewer for xsm
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20210617234955.18489-1-dpsmith@apertussolutions.com>
 <alpine.DEB.2.21.2106171712010.24906@sstabellini-ThinkPad-T480s>
 <635b99e9-815f-edbb-52bf-dd6465bf16c9@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <674fbd6b-784c-5617-364d-9b9c441b60c2@xen.org>
Date: Thu, 24 Jun 2021 12:45:52 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <635b99e9-815f-edbb-52bf-dd6465bf16c9@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 18/06/2021 11:59, Andrew Cooper wrote:
> On 18/06/2021 01:12, Stefano Stabellini wrote:
>> On Thu, 17 Jun 2021, Daniel P. Smith wrote:
>>> Would like to add myself as a reviewer for XSM.
>>>
>>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I have committed it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 10:47:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 10:47:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146691.270057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMsx-0005Af-Od; Thu, 24 Jun 2021 10:47:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146691.270057; Thu, 24 Jun 2021 10:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwMsx-0005AY-La; Thu, 24 Jun 2021 10:47:03 +0000
Received: by outflank-mailman (input) for mailman id 146691;
 Thu, 24 Jun 2021 10:47:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwMsw-0005AM-A6
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 10:47:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMsv-0006Xr-2u; Thu, 24 Jun 2021 10:47:01 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwMsu-0000W6-Ri; Thu, 24 Jun 2021 10:47:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Vvxe+zjRdDELqJjQvmI6It3sFk/2niBfnlHmsgU6cSw=; b=dGdd2WqUt50fExUHZ0iUrXr24N
	eEII7rUfF7r6/pZRBh7cjy0J0QI471Tce90YBUhelfvl1A5WVAXVEhNUwRcS7ESvCqBnmIDNpL5g/
	XznFwPDMCPMrOVRrKo/iHbxmsISMhECxGbgPK0n26pxBGsbS3r2AUcxoeVmgSCziY/WU=;
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
 <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>
Date: Thu, 24 Jun 2021 12:46:58 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 24/06/2021 12:45, Juergen Gross wrote:
> On 24.06.21 12:28, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 24/06/2021 10:41, Juergen Gross wrote:
>>> On 16.06.21 16:43, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Currently, only Live-Update request can be delayed. In a follow-up,
>>>> we will want to delay more requests (e.g. transaction start).
>>>> Therefore we want to preserve delayed requests across Live-Update.
>>>>
>>>> Delayed requests are just complete "in" buffer. So the code is
>>>> refactored to allow sharing the code to dump "in" buffer.
>>>>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>> ---
>>>>   tools/xenstore/xenstored_core.c | 42 
>>>> +++++++++++++++++++++++++--------
>>>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_core.c 
>>>> b/tools/xenstore/xenstored_core.c
>>>> index 5b7ab7f74013..9eca58682b51 100644
>>>> --- a/tools/xenstore/xenstored_core.c
>>>> +++ b/tools/xenstore/xenstored_core.c
>>>> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>>>>       return NULL;
>>>>   }
>>>> +static const char *dump_input_buffered_data(FILE *fp,
>>>> +                        const struct buffered_data *in,
>>>> +                        unsigned int *total_len)
>>>> +{
>>>> +    unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
>>>> +
>>>> +    *total_len += hlen;
>>>> +    if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
>>>> +        return "Dump read data error";
>>>> +    if (!in->inhdr && in->used) {
>>>> +        *total_len += in->used;
>>>> +        if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>> +            return "Dump read data error";
>>>> +    }
>>>> +
>>>> +    return NULL;
>>>> +}
>>>> +
>>>>   /* Called twice: first with fp == NULL to get length, then for
>>>> writing data. */
>>>>   const char *dump_state_buffered_data(FILE *fp, const struct 
>>>> connection *c,
>>>>                        struct xs_state_connection *sc)
>>>>   {
>>>>       unsigned int len = 0, used;
>>>> -    struct buffered_data *out, *in = c->in;
>>>> +    struct buffered_data *out;
>>>>       bool partial = true;
>>>> +    struct delayed_request *req;
>>>> +    const char *ret;
>>>> -    if (in) {
>>>> -        len = in->inhdr ? in->used: sizeof(in->hdr);
>>>> -        if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
>>>> -            return "Dump read data error";
>>>> -        if (!in->inhdr && in->used) {
>>>> -            len += in->used;
>>>> -            if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>> -                return "Dump read data error";
>>>> -        }
>>>> +    /* Dump any command that was delayed */
>>>> +    list_for_each_entry(req, &c->delayed, list) {
>>>
>>> Please add a comment here that the following if() serves especially to
>>> avoid dumping the data for do_lu_start().
>>
>> Would you be happy to give your acked-by/reviewed-by if I add the 
>> following on commit?
>>
>> "
>> We only want to preserve commands that wasn't processed at all. All the 
> 
> 
> s/wasn't/weren't/

I will do.

> 
>> other delayed requests (such as do_lu_start()) must be processed before 
> 
>> Live-Update.
>> "
> 
> With that change I'm fine.

Can I translate that to an acked-by? :)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:02:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146698.270076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwN7v-0007Zb-7K; Thu, 24 Jun 2021 11:02:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146698.270076; Thu, 24 Jun 2021 11:02:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwN7v-0007ZU-4O; Thu, 24 Jun 2021 11:02:31 +0000
Received: by outflank-mailman (input) for mailman id 146698;
 Thu, 24 Jun 2021 11:02:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwN7t-0007ZO-PA
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:02:29 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 929bb09d-8bad-4cc1-b535-a48e3d6da2a6;
 Thu, 24 Jun 2021 11:02:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B98071FD73;
 Thu, 24 Jun 2021 11:02:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 6FFA111A97;
 Thu, 24 Jun 2021 11:02:27 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id MvFRGUNm1GBrdQAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 11:02:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 929bb09d-8bad-4cc1-b535-a48e3d6da2a6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624532547; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dh25hD0YwtMDw8cwuuHzzMiRqypMsJkjLrZKSk4NGeQ=;
	b=a4vnRgb4mnNGRsMem30Y5wa2GYRitJbDPCrzXUovCZLp+KauhCrhptfgRKSfsrXoe/zEKL
	FRyUy0zclkDn3PooAwS6TiPFHLcSk4/5ZB4vMIGexqakQuqMDTCfNRdlxWDB2ttbQD2ksE
	4/ICA3Ab2HTiuO6EwC2XbaQ1qzH/M20=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624532547; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dh25hD0YwtMDw8cwuuHzzMiRqypMsJkjLrZKSk4NGeQ=;
	b=a4vnRgb4mnNGRsMem30Y5wa2GYRitJbDPCrzXUovCZLp+KauhCrhptfgRKSfsrXoe/zEKL
	FRyUy0zclkDn3PooAwS6TiPFHLcSk4/5ZB4vMIGexqakQuqMDTCfNRdlxWDB2ttbQD2ksE
	4/ICA3Ab2HTiuO6EwC2XbaQ1qzH/M20=
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
 <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
 <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <0745889a-0142-3139-891a-4fb7a43fec18@suse.com>
Date: Thu, 24 Jun 2021 13:02:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="EerK1KFG8uC5ZCzckLvQvnSNRBRtLvi7i"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--EerK1KFG8uC5ZCzckLvQvnSNRBRtLvi7i
Content-Type: multipart/mixed; boundary="UtIhu4JWxP1VeGKhZrNRAPDPAM7b5D7Y4";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <0745889a-0142-3139-891a-4fb7a43fec18@suse.com>
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
 <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
 <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>
In-Reply-To: <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>

--UtIhu4JWxP1VeGKhZrNRAPDPAM7b5D7Y4
Content-Type: multipart/mixed;
 boundary="------------C7717FD8542062D4756210DF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C7717FD8542062D4756210DF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.21 12:46, Julien Grall wrote:
> Hi Juergen,
> 
> On 24/06/2021 12:45, Juergen Gross wrote:
>> On 24.06.21 12:28, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 24/06/2021 10:41, Juergen Gross wrote:
>>>> On 16.06.21 16:43, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> Currently, only Live-Update request can be delayed. In a follow-up,
>>>>> we will want to delay more requests (e.g. transaction start).
>>>>> Therefore we want to preserve delayed requests across Live-Update.
>>>>>
>>>>> Delayed requests are just complete "in" buffer. So the code is
>>>>> refactored to allow sharing the code to dump "in" buffer.
>>>>>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>> ---
>>>>>   tools/xenstore/xenstored_core.c | 42
>>>>> +++++++++++++++++++++++++--------
>>>>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/tools/xenstore/xenstored_core.c
>>>>> b/tools/xenstore/xenstored_core.c
>>>>> index 5b7ab7f74013..9eca58682b51 100644
>>>>> --- a/tools/xenstore/xenstored_core.c
>>>>> +++ b/tools/xenstore/xenstored_core.c
>>>>> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>>>>>       return NULL;
>>>>>   }
>>>>> +static const char *dump_input_buffered_data(FILE *fp,
>>>>> +                        const struct buffered_data *in,
>>>>> +                        unsigned int *total_len)
>>>>> +{
>>>>> +    unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
>>>>> +
>>>>> +    *total_len += hlen;
>>>>> +    if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
>>>>> +        return "Dump read data error";
>>>>> +    if (!in->inhdr && in->used) {
>>>>> +        *total_len += in->used;
>>>>> +        if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>>> +            return "Dump read data error";
>>>>> +    }
>>>>> +
>>>>> +    return NULL;
>>>>> +}
>>>>> +
>>>>>   /* Called twice: first with fp == NULL to get length, then for
>>>>> writing data. */
>>>>>   const char *dump_state_buffered_data(FILE *fp, const struct
>>>>> connection *c,
>>>>>                        struct xs_state_connection *sc)
>>>>>   {
>>>>>       unsigned int len = 0, used;
>>>>> -    struct buffered_data *out, *in = c->in;
>>>>> +    struct buffered_data *out;
>>>>>       bool partial = true;
>>>>> +    struct delayed_request *req;
>>>>> +    const char *ret;
>>>>> -    if (in) {
>>>>> -        len = in->inhdr ? in->used: sizeof(in->hdr);
>>>>> -        if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
>>>>> -            return "Dump read data error";
>>>>> -        if (!in->inhdr && in->used) {
>>>>> -            len += in->used;
>>>>> -            if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>>> -                return "Dump read data error";
>>>>> -        }
>>>>> +    /* Dump any command that was delayed */
>>>>> +    list_for_each_entry(req, &c->delayed, list) {
>>>>
>>>> Please add a comment here that the following if() serves especially to
>>>> avoid dumping the data for do_lu_start().
>>>
>>> Would you be happy to give your acked-by/reviewed-by if I add the
>>> following on commit?
>>>
>>> "
>>> We only want to preserve commands that wasn't processed at all. All the
>>
>>
>> s/wasn't/weren't/
> 
> I will do.
> 
>>
>>> other delayed requests (such as do_lu_start()) must be processed before
>>
>>> Live-Update.
>>> "
>>
>> With that change I'm fine.
> 
> Can I translate that to an acked-by? :)

Make it a "Reviewed-by:" :-)


Juergen


--------------C7717FD8542062D4756210DF
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C7717FD8542062D4756210DF--

--UtIhu4JWxP1VeGKhZrNRAPDPAM7b5D7Y4--

--EerK1KFG8uC5ZCzckLvQvnSNRBRtLvi7i
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUZkIFAwAAAAAACgkQsN6d1ii/Ey+D
qQf9ER2e5NHberkse0Ykvbx19nJfqviifyC12BMYQTFUZRJ9QB+CTnjJ75q9Bow/+OGFmIprvDyD
EYBleM6EPAd5KV2XtWz04+azuRbrZm0xrtEIRvH5N2emZf+i58AI9KHigxGAvsWsOiPt5N6x4Ibl
ucF+rTtmoMf/ki94j6FzAPbzrbbIBLzipWOr7EzAsYHoTLF42JBNf1lxSyZe/G5QHy+cKFju6gXI
KcoeLWbp4G0HR9Xe45McBzHfg/2VGMWhE7mZqSF/AabTwjoKXmnF6Z3h/Y00G+iXcYI8bxHsnWMI
r0YsFr/0S4Xmc8om+AoM5C6lKWSf4onTIY7MgcA4CQ==
=iCmn
-----END PGP SIGNATURE-----

--EerK1KFG8uC5ZCzckLvQvnSNRBRtLvi7i--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:15:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:15:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146705.270086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNJw-0000fy-Ga; Thu, 24 Jun 2021 11:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146705.270086; Thu, 24 Jun 2021 11:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNJw-0000fr-Dg; Thu, 24 Jun 2021 11:14:56 +0000
Received: by outflank-mailman (input) for mailman id 146705;
 Thu, 24 Jun 2021 11:14:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NXgG=LS=arm.com=robin.murphy@srs-us1.protection.inumbo.net>)
 id 1lwNJu-0000fe-Lm
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:14:54 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4f2cbc5d-3bfc-4843-aa33-795018e7cb9a;
 Thu, 24 Jun 2021 11:14:52 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3087531B;
 Thu, 24 Jun 2021 04:14:52 -0700 (PDT)
Received: from [10.57.9.136] (unknown [10.57.9.136])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 72B7B3F718;
 Thu, 24 Jun 2021 04:14:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f2cbc5d-3bfc-4843-aa33-795018e7cb9a
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>
Cc: Qian Cai <quic_qiancai@quicinc.com>, Will Deacon <will@kernel.org>,
 Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
 Joerg Roedel <joro@8bytes.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Marek Szyprowski <m.szyprowski@samsung.com>,
 heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
 peterz@infradead.org, benh@kernel.crashing.org,
 joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
 chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
 mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
 linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
 Thierry Reding <treding@nvidia.com>, intel-gfx@lists.freedesktop.org,
 matthew.auld@intel.com, linux-devicetree <devicetree@vger.kernel.org>,
 Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
 maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
 jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>,
 rodrigo.vivi@intel.com, Bjorn Helgaas <bhelgaas@google.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Greg KH <gregkh@linuxfoundation.org>, Randy Dunlap <rdunlap@infradead.org>,
 lkml <linux-kernel@vger.kernel.org>,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 Jim Quinlan <james.quinlan@broadcom.com>,
 Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
Date: Thu, 24 Jun 2021 12:14:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 2021-06-24 07:05, Claire Chang wrote:
> On Thu, Jun 24, 2021 at 1:43 PM Christoph Hellwig <hch@lst.de> wrote:
>>
>> On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
>>> is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
>>>
>>> is_swiotlb_force_bounce() was the new function introduced in this patch here.
>>>
>>> +static inline bool is_swiotlb_force_bounce(struct device *dev)
>>> +{
>>> +     return dev->dma_io_tlb_mem->force_bounce;
>>> +}
>>
>> To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
>> turn this into :
>>
>>          return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
>>
>> for a quick debug check?
> 
> I just realized that dma_io_tlb_mem might be NULL like Christoph
> pointed out since swiotlb might not get initialized.
> However,  `Unable to handle kernel paging request at virtual address
> dfff80000000000e` looks more like the address is garbage rather than
> NULL?
> I wonder if that's because dev->dma_io_tlb_mem is not assigned
> properly (which means device_initialize is not called?).

What also looks odd is that the base "address" 0xdfff800000000000 is 
held in a couple of registers, but the offset 0xe looks too small to 
match up to any relevant structure member in that dereference chain :/

Robin.


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:17:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146713.270116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNM8-0001tO-Bi; Thu, 24 Jun 2021 11:17:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146713.270116; Thu, 24 Jun 2021 11:17:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNM8-0001tE-81; Thu, 24 Jun 2021 11:17:12 +0000
Received: by outflank-mailman (input) for mailman id 146713;
 Thu, 24 Jun 2021 11:17:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwNM6-0001rV-Mw
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:17:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwNM5-00075S-2s; Thu, 24 Jun 2021 11:17:09 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwNM4-0003HI-Q8; Thu, 24 Jun 2021 11:17:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3x+4zeqoHksbHH2jN8EeTR+Jc1uTXMhfyzCmeIaTJ5g=; b=EUhaXjZX/NS0bWmNTXhAH3ElOg
	0Vy8/aYj6dqD7QGrv0RxmWGdywMJ+XFBYMLhz6zjitNoUHPZ0ouReuOXTnicBe4A6PayY3AJpadXj
	VI66PO2vHGq4967Q1pAbv/SmLfNbSs53DtAU611oDq4I6/x5qPnUDH9Yu4Zzn7qxl+4c=;
Subject: Re: [PATCH 09/10] tools/xenstored: Dump delayed requests
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210616144324.31652-1-julien@xen.org>
 <20210616144324.31652-10-julien@xen.org>
 <5b6455f3-9b44-2cf3-e53d-1f235977a4e2@suse.com>
 <d42131c2-ae2d-883b-037d-2ab6370678c3@xen.org>
 <6e17cbd5-a819-2ed6-6f73-784cda2b9a5c@suse.com>
 <96cb7350-4f59-a359-5ba4-fee586e73370@xen.org>
 <0745889a-0142-3139-891a-4fb7a43fec18@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6130d1f5-f48d-53a7-fe12-de1c358bad32@xen.org>
Date: Thu, 24 Jun 2021 13:17:06 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <0745889a-0142-3139-891a-4fb7a43fec18@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 24/06/2021 13:02, Juergen Gross wrote:
> On 24.06.21 12:46, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 24/06/2021 12:45, Juergen Gross wrote:
>>> On 24.06.21 12:28, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 24/06/2021 10:41, Juergen Gross wrote:
>>>>> On 16.06.21 16:43, Julien Grall wrote:
>>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>>
>>>>>> Currently, only the Live-Update request can be delayed. In a
>>>>>> follow-up, we will want to delay more requests (e.g. transaction
>>>>>> start). Therefore we want to preserve delayed requests across
>>>>>> Live-Update.
>>>>>>
>>>>>> Delayed requests are just a complete "in" buffer, so the code is
>>>>>> refactored to allow sharing the code that dumps the "in" buffer.
>>>>>>
>>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>> ---
>>>>>>   tools/xenstore/xenstored_core.c | 42 +++++++++++++++++++++++++--------
>>>>>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>>>>>
>>>>>> diff --git a/tools/xenstore/xenstored_core.c 
>>>>>> b/tools/xenstore/xenstored_core.c
>>>>>> index 5b7ab7f74013..9eca58682b51 100644
>>>>>> --- a/tools/xenstore/xenstored_core.c
>>>>>> +++ b/tools/xenstore/xenstored_core.c
>>>>>> @@ -2403,25 +2403,47 @@ const char *dump_state_global(FILE *fp)
>>>>>>       return NULL;
>>>>>>   }
>>>>>> +static const char *dump_input_buffered_data(FILE *fp,
>>>>>> +                        const struct buffered_data *in,
>>>>>> +                        unsigned int *total_len)
>>>>>> +{
>>>>>> +    unsigned int hlen = in->inhdr ? in->used : sizeof(in->hdr);
>>>>>> +
>>>>>> +    *total_len += hlen;
>>>>>> +    if (fp && fwrite(&in->hdr, hlen, 1, fp) != 1)
>>>>>> +        return "Dump read data error";
>>>>>> +    if (!in->inhdr && in->used) {
>>>>>> +        *total_len += in->used;
>>>>>> +        if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>>>> +            return "Dump read data error";
>>>>>> +    }
>>>>>> +
>>>>>> +    return NULL;
>>>>>> +}
>>>>>> +
>>>>>>   /* Called twice: first with fp == NULL to get length, then for
>>>>>>    * writing data. */
>>>>>>   const char *dump_state_buffered_data(FILE *fp, const struct 
>>>>>> connection *c,
>>>>>>                        struct xs_state_connection *sc)
>>>>>>   {
>>>>>>       unsigned int len = 0, used;
>>>>>> -    struct buffered_data *out, *in = c->in;
>>>>>> +    struct buffered_data *out;
>>>>>>       bool partial = true;
>>>>>> +    struct delayed_request *req;
>>>>>> +    const char *ret;
>>>>>> -    if (in) {
>>>>>> -        len = in->inhdr ? in->used : sizeof(in->hdr);
>>>>>> -        if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
>>>>>> -            return "Dump read data error";
>>>>>> -        if (!in->inhdr && in->used) {
>>>>>> -            len += in->used;
>>>>>> -            if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
>>>>>> -                return "Dump read data error";
>>>>>> -        }
>>>>>> +    /* Dump any command that was delayed */
>>>>>> +    list_for_each_entry(req, &c->delayed, list) {
>>>>>
>>>>> Please add a comment here that the following if() serves especially to
>>>>> avoid dumping the data for do_lu_start().
>>>>
>>>> Would you be happy to give your acked-by/reviewed-by if I add the 
>>>> following on commit?
>>>>
>>>> "
>>>> We only want to preserve commands that wasn't processed at all. All the 
>>>
>>>
>>> s/wasn't/weren't/
>>
>> I will do.
>>
>>>
>>>> other delayed requests (such as do_lu_start()) must be processed before 
>>>
>>>> Live-Update.
>>>> "
>>>
>>> With that change I'm fine.
>>
>> Can I translate that to an acked-by? :)
> 
> Make it a "Reviewed-by:" :-)

Thanks! I have committed it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:17:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146716.270127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNMY-0002eA-N5; Thu, 24 Jun 2021 11:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146716.270127; Thu, 24 Jun 2021 11:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNMY-0002e3-Ib; Thu, 24 Jun 2021 11:17:38 +0000
Received: by outflank-mailman (input) for mailman id 146716;
 Thu, 24 Jun 2021 11:17:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwNMX-0002dZ-Ao
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:17:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwNMW-00075n-3W; Thu, 24 Jun 2021 11:17:36 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwNMV-0003Ik-Rq; Thu, 24 Jun 2021 11:17:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=6kWFqzhsa6JK4hIY141tjkTi1+pNJaRgyWocBa7rve4=; b=1Q+nD5JhwfYjtwAhSVQpBwxgKP
	ZoyMJemd2Pfn40YAVuCp6WzuXKJyhScQUmT3lksWQlinlGdoEnPD+XoR2QaV0MFW0kXR5eP2Gz7hZ
	SELl+un7/rlBwKjYttXLlEdJIw53jvNjVbjHRtz1ch8tkwSmSqDdYP2WeBuZEFr4+xqQ=;
Subject: Re: [PATCH] tools/xenstored: Don't crash xenstored when Live-Update
 is cancelled
From: Julien Grall <julien@xen.org>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210617173857.6450-1-julien@xen.org>
 <136d6a10-c93d-accd-fc34-62fbaa4742b0@suse.com>
 <325bf694-a30f-558c-ab84-d8a7a1865cc2@xen.org>
 <bcdad74c-393e-0582-6a26-b9a6f45cb30a@suse.com>
 <7f252b19-b84e-b4d3-0c3c-976404b00701@xen.org>
Message-ID: <fca29482-2e12-67a9-bb7c-cc420e32de9a@xen.org>
Date: Thu, 24 Jun 2021 13:17:33 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <7f252b19-b84e-b4d3-0c3c-976404b00701@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 24/06/2021 10:18, Julien Grall wrote:
> Hi Juergen,
> 
> On 24/06/2021 10:17, Juergen Gross wrote:
>> On 24.06.21 10:12, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 22/06/2021 11:23, Juergen Gross wrote:
>>>> On 17.06.21 19:38, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> As Live-Update is asynchronous, it is possible to receive a request to
>>>>> cancel it (either on the same connection or from a different one).
>>>>>
>>>>> Currently, this will crash xenstored because do_lu_start() assumes
>>>>> lu_status will be valid. This is not the case when Live-Update has
>>>>> been cancelled. This will result in dereferencing a NULL pointer and
>>>>> crashing Xenstored.
>>>>>
>>>>> Rework do_lu_start() to check if lu_status is NULL and return an
>>>>> error in this case.
>>>>>
>>>>> Fixes: af216a99fb ("tools/xenstore: add the basic framework for
>>>>> doing the live update")
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> ----
>>>>>
>>>>> This is currently based on top of:
>>>>>
>>>>> https://lore.kernel.org/xen-devel/20210616144324.31652-1-julien@xen.org 
>>>>>
>>>>>
>>>>> This can be re-ordered if necessary.
>>>>> ---
>>>>>   tools/xenstore/xenstored_control.c | 15 +++++++++++++--
>>>>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/tools/xenstore/xenstored_control.c 
>>>>> b/tools/xenstore/xenstored_control.c
>>>>> index a045f102a420..37a3d39f20b5 100644
>>>>> --- a/tools/xenstore/xenstored_control.c
>>>>> +++ b/tools/xenstore/xenstored_control.c
>>>>> @@ -696,7 +696,18 @@ static bool do_lu_start(struct delayed_request 
>>>>> *req)
>>>>>       time_t now = time(NULL);
>>>>>       const char *ret;
>>>>>       struct buffered_data *saved_in;
>>>>> -    struct connection *conn = lu_status->conn;
>>>>> +    struct connection *conn = req->data;
>>>>> +
>>>>> +    /*
>>>>> +     * Cancellation may have been requested asynchronously. In this
>>>>> +     * case, lu_status will be NULL.
>>>>> +     */
>>>>> +    if (!lu_status) {
>>>>> +        ret = "Cancellation was requested";
>>>>> +        conn = req->data;
>>>>
>>>> This will set conn to the same value it already has.
>>>
>>> Ah yes. I will drop this line.
>>>
>>> Also, I took the opportunity to replace
>>>
>>> } else
>>>   assert(...)
>>>
>>> with just
>>>
>>> assert(...)
>>>
>>> This should improve readability a bit. Let me know if you want me 
>>> to resend the patch for that.
>>
>> I guess you are planning to do the commit?
> 
> That's my plan.

Committed.

> 
>>
>> If yes, there is no need for resending the patch.
> 
> Thanks!
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:19:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146730.270138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNO1-0003Si-2K; Thu, 24 Jun 2021 11:19:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146730.270138; Thu, 24 Jun 2021 11:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNO0-0003Sb-VO; Thu, 24 Jun 2021 11:19:08 +0000
Received: by outflank-mailman (input) for mailman id 146730;
 Thu, 24 Jun 2021 11:19:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dCk9=LS=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lwNO0-0003SV-CR
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:19:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 474aa3cb-fcbc-426b-8684-1ae7975d3bed;
 Thu, 24 Jun 2021 11:19:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6D5C7613C1;
 Thu, 24 Jun 2021 11:18:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 474aa3cb-fcbc-426b-8684-1ae7975d3bed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624533547;
	bh=fyQk9K7V5mi+iulKCuVouHY96p/8l9exJzzlUIeSpn8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=bw4AVsGwRrbnlqsX5CZZBCGVkLRWbVSBBROzH0cW0WxgQnoyiN2SkbZT9Y0qsOq8u
	 JL8IIrlYNHTHfBROv+myGIoSXn3V0Qajeyu6WrQ15CejRET9x4mwWpuBCwWkKgfe+I
	 +cOMFFna8tPSjLTq48KMyVkK5wGgtDdocNICZALKyCcFBQrrDac4GVZrs41lrwovdH
	 A7+mTUl4b8SpYL5RZSXQGAdCk4J4xBM23HuRVgN0YiE8QMME9/0wtATNSKneCXnM3U
	 aM6IUBXxfGBIajg731amXfhwP7wHdtJMySWai/ROQfVX1amjwBJVAnH3akjkMUY271
	 lYvk5WkbaBLcQ==
Date: Thu, 24 Jun 2021 12:18:56 +0100
From: Will Deacon <will@kernel.org>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>,
	Qian Cai <quic_qiancai@quicinc.com>,
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
	jani.nikula@linux.intel.com,
	Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com,
	Bjorn Helgaas <bhelgaas@google.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210624111855.GA1382@willie-the-truck>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Jun 24, 2021 at 12:14:39PM +0100, Robin Murphy wrote:
> On 2021-06-24 07:05, Claire Chang wrote:
> > On Thu, Jun 24, 2021 at 1:43 PM Christoph Hellwig <hch@lst.de> wrote:
> > > 
> > > On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
> > > > is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
> > > > 
> > > > is_swiotlb_force_bounce() was the new function introduced in this patch here.
> > > > 
> > > > +static inline bool is_swiotlb_force_bounce(struct device *dev)
> > > > +{
> > > > +     return dev->dma_io_tlb_mem->force_bounce;
> > > > +}
> > > 
> > > To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
> > > turn this into :
> > > 
> > >          return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
> > > 
> > > for a quick debug check?
> > 
> > I just realized that dma_io_tlb_mem might be NULL like Christoph
> > pointed out since swiotlb might not get initialized.
> > However,  `Unable to handle kernel paging request at virtual address
> > dfff80000000000e` looks more like the address is garbage rather than
> > NULL?
> > I wonder if that's because dev->dma_io_tlb_mem is not assigned
> > properly (which means device_initialize is not called?).
> 
> What also looks odd is that the base "address" 0xdfff800000000000 is held in
> a couple of registers, but the offset 0xe looks too small to match up to any
> relevant structure member in that dereference chain :/

FWIW, I've managed to trigger a NULL dereference locally when swiotlb hasn't
been initialised but we dereference 'dev->dma_io_tlb_mem', so I think
Christoph's suggestion is needed regardless. But I agree that it won't help
with the issue reported by Qian Cai.

Qian Cai: please can you share your .config and your command line?

Thanks,

Will


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:34:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:34:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146742.270160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNcm-0005vZ-Gd; Thu, 24 Jun 2021 11:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146742.270160; Thu, 24 Jun 2021 11:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNcm-0005vS-Db; Thu, 24 Jun 2021 11:34:24 +0000
Received: by outflank-mailman (input) for mailman id 146742;
 Thu, 24 Jun 2021 11:34:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NXgG=LS=arm.com=robin.murphy@srs-us1.protection.inumbo.net>)
 id 1lwNcl-0005vG-95
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:34:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6f134f9b-6ab7-43b4-8317-f75c6b175682;
 Thu, 24 Jun 2021 11:34:21 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 847F01063;
 Thu, 24 Jun 2021 04:34:21 -0700 (PDT)
Received: from [10.57.9.136] (unknown [10.57.9.136])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 361C23F718;
 Thu, 24 Jun 2021 04:34:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f134f9b-6ab7-43b4-8317-f75c6b175682
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Will Deacon <will@kernel.org>
Cc: Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>,
 Qian Cai <quic_qiancai@quicinc.com>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Marek Szyprowski <m.szyprowski@samsung.com>,
 heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
 peterz@infradead.org, benh@kernel.crashing.org,
 joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
 chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
 mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
 linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
 Thierry Reding <treding@nvidia.com>, intel-gfx@lists.freedesktop.org,
 matthew.auld@intel.com, linux-devicetree <devicetree@vger.kernel.org>,
 Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
 maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
 jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>,
 rodrigo.vivi@intel.com, Bjorn Helgaas <bhelgaas@google.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Greg KH <gregkh@linuxfoundation.org>, Randy Dunlap <rdunlap@infradead.org>,
 lkml <linux-kernel@vger.kernel.org>,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 Jim Quinlan <james.quinlan@broadcom.com>,
 Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
 <20210624111855.GA1382@willie-the-truck>
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
Date: Thu, 24 Jun 2021 12:34:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210624111855.GA1382@willie-the-truck>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 2021-06-24 12:18, Will Deacon wrote:
> On Thu, Jun 24, 2021 at 12:14:39PM +0100, Robin Murphy wrote:
>> On 2021-06-24 07:05, Claire Chang wrote:
>>> On Thu, Jun 24, 2021 at 1:43 PM Christoph Hellwig <hch@lst.de> wrote:
>>>>
>>>> On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
>>>>> is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
>>>>>
>>>>> is_swiotlb_force_bounce() was the new function introduced in this patch here.
>>>>>
>>>>> +static inline bool is_swiotlb_force_bounce(struct device *dev)
>>>>> +{
>>>>> +     return dev->dma_io_tlb_mem->force_bounce;
>>>>> +}
>>>>
>>>> To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
>>>> turn this into :
>>>>
>>>>           return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
>>>>
>>>> for a quick debug check?
>>>
>>> I just realized that dma_io_tlb_mem might be NULL like Christoph
>>> pointed out since swiotlb might not get initialized.
>>> However,  `Unable to handle kernel paging request at virtual address
>>> dfff80000000000e` looks more like the address is garbage rather than
>>> NULL?
>>> I wonder if that's because dev->dma_io_tlb_mem is not assigned
>>> properly (which means device_initialize is not called?).
>>
>> What also looks odd is that the base "address" 0xdfff800000000000 is held in
>> a couple of registers, but the offset 0xe looks too small to match up to any
>> relevant structure member in that dereference chain :/
> 
> FWIW, I've managed to trigger a NULL dereference locally when swiotlb hasn't
> been initialised but we dereference 'dev->dma_io_tlb_mem', so I think
> Christoph's suggestion is needed regardless.

Ack to that - for SWIOTLB_NO_FORCE, io_tlb_default_mem will remain NULL. 
The massive jump in KernelCI baseline failures as of yesterday looks 
like every arm64 machine with less than 4GB of RAM blowing up...

Robin.

> But I agree that it won't help
> with the issue reported by Qian Cai.
> 
> Qian Cai: please can you share your .config and your command line?
> 
> Thanks,
> 
> Will
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 11:48:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 11:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146759.270190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNqf-0008TS-5j; Thu, 24 Jun 2021 11:48:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146759.270190; Thu, 24 Jun 2021 11:48:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwNqf-0008TL-2S; Thu, 24 Jun 2021 11:48:45 +0000
Received: by outflank-mailman (input) for mailman id 146759;
 Thu, 24 Jun 2021 11:48:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dCk9=LS=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lwNqd-0008TF-OK
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 11:48:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12daac31-ab98-4cff-9119-0fef9387dc7e;
 Thu, 24 Jun 2021 11:48:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E399761185;
 Thu, 24 Jun 2021 11:48:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12daac31-ab98-4cff-9119-0fef9387dc7e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624535321;
	bh=tIEzJbFyzrkVCHoPFZv7EiSn5eGdWcktY1hM7567WC4=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=cnBL/cs5jq0gzsWpdW01qPqR4FhnbZYA5FARUzRbfbTnReSUvqYZg/5d7fMfMzzLB
	 Yi5ocgI40U2CdP0yPBgaSH4aMAy6kTVPwJQovsDfnM8E0ctEFfmI50PYyqPNdilEHT
	 wiHUoLZz+z5T3P7Nu0n1FHp0CT0+Mpa9kCcUzUXOGKpJLUcEA7B6eF0lhIA1fVOD2N
	 8hTi4ZzKBg0KOAPGGETjH9KnPn8WseTFQmhg5QfTMY6mH9TlCuMS+A5EiYaVCFxL1+
	 hxLlyRsIfAlqe3aZtbbg/Ry8xq+FvG9CW/Y0YZqD103SdYwufpVGkDaukfSWqnEWQs
	 pjV7QbcyVw3Kw==
Date: Thu, 24 Jun 2021 12:48:30 +0100
From: Will Deacon <will@kernel.org>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>,
	Qian Cai <quic_qiancai@quicinc.com>,
	Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
	jani.nikula@linux.intel.com,
	Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com,
	Bjorn Helgaas <bhelgaas@google.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210624114829.GB1382@willie-the-truck>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
 <20210624111855.GA1382@willie-the-truck>
 <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Jun 24, 2021 at 12:34:09PM +0100, Robin Murphy wrote:
> On 2021-06-24 12:18, Will Deacon wrote:
> > On Thu, Jun 24, 2021 at 12:14:39PM +0100, Robin Murphy wrote:
> > > On 2021-06-24 07:05, Claire Chang wrote:
> > > > On Thu, Jun 24, 2021 at 1:43 PM Christoph Hellwig <hch@lst.de> wrote:
> > > > > 
> > > > > On Wed, Jun 23, 2021 at 02:44:34PM -0400, Qian Cai wrote:
> > > > > > is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
> > > > > > 
> > > > > > is_swiotlb_force_bounce() was the new function introduced in this patch here.
> > > > > > 
> > > > > > +static inline bool is_swiotlb_force_bounce(struct device *dev)
> > > > > > +{
> > > > > > +     return dev->dma_io_tlb_mem->force_bounce;
> > > > > > +}
> > > > > 
> > > > > To me the crash looks like dev->dma_io_tlb_mem is NULL.  Can you
> > > > > turn this into :
> > > > > 
> > > > >           return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
> > > > > 
> > > > > for a quick debug check?
> > > > 
> > > > I just realized that dma_io_tlb_mem might be NULL like Christoph
> > > > pointed out since swiotlb might not get initialized.
> > > > However,  `Unable to handle kernel paging request at virtual address
> > > > dfff80000000000e` looks more like the address is garbage rather than
> > > > NULL?
> > > > I wonder if that's because dev->dma_io_tlb_mem is not assigned
> > > > properly (which means device_initialize is not called?).
> > > 
> > > What also looks odd is that the base "address" 0xdfff800000000000 is held in
> > > a couple of registers, but the offset 0xe looks too small to match up to any
> > > relevant structure member in that dereference chain :/
> > 
> > FWIW, I've managed to trigger a NULL dereference locally when swiotlb hasn't
> > been initialised but we dereference 'dev->dma_io_tlb_mem', so I think
> > Christoph's suggestion is needed regardless.
> 
> Ack to that - for SWIOTLB_NO_FORCE, io_tlb_default_mem will remain NULL. The
> massive jump in KernelCI baseline failures as of yesterday looks like every
> arm64 machine with less than 4GB of RAM blowing up...

Ok, diff below which attempts to tackle the offset issue I mentioned as
well. Qian Cai -- please can you try with these changes?

Will

--->8

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 175b6c113ed8..39284ff2a6cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -116,7 +116,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 
 static inline bool is_swiotlb_force_bounce(struct device *dev)
 {
-       return dev->dma_io_tlb_mem->force_bounce;
+       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+
+       return mem && mem->force_bounce;
 }
 
 void __init swiotlb_exit(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 44be8258e27b..0ffbaae9fba2 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -449,6 +449,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
                dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
        unsigned int nslots = nr_slots(alloc_size), stride;
        unsigned int index, wrap, count = 0, i;
+       unsigned int offset = swiotlb_align_offset(dev, orig_addr);
        unsigned long flags;
 
        BUG_ON(!nslots);
@@ -497,7 +498,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
        for (i = index; i < index + nslots; i++) {
                mem->slots[i].list = 0;
                mem->slots[i].alloc_size =
-                       alloc_size - ((i - index) << IO_TLB_SHIFT);
+                       alloc_size - (offset + ((i - index) << IO_TLB_SHIFT));
        }
        for (i = index - 1;
             io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 12:30:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 12:30:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146769.270200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwOVP-0004xV-QF; Thu, 24 Jun 2021 12:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146769.270200; Thu, 24 Jun 2021 12:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwOVP-0004xO-NI; Thu, 24 Jun 2021 12:30:51 +0000
Received: by outflank-mailman (input) for mailman id 146769;
 Thu, 24 Jun 2021 12:30:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwOVO-0004xE-GF; Thu, 24 Jun 2021 12:30:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwOVN-0008KK-Vc; Thu, 24 Jun 2021 12:30:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwOVN-0008Fu-KN; Thu, 24 Jun 2021 12:30:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwOVN-0001aw-Js; Thu, 24 Jun 2021 12:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dQi7d/cO+zu1WY7aC6RUZBoaNthbbJrKdXA832R5/lA=; b=peAedF/VIUY+6RYrOYLXOuASA2
	6HB2do8jo6DuWBMSfbxI0Ol2uHkrIobN6i+6BVA8CQabhPk6RUIykrY84xmkvoTgh8fLMgOn3zmqE
	U9kQyddc0bBb/aFp5GxJFkHC8nQrwHwlDlyptLj31mRVLqcOeJcNKQCZe2WAiAmMS3is=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163007-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163007: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b22726abdfa54592d6ad88f65b0297c0e8b363e2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 12:30:49 +0000

flight 163007 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163007/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                b22726abdfa54592d6ad88f65b0297c0e8b363e2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  308 days
Failing since        152659  2020-08-21 14:07:39 Z  306 days  563 attempts
Testing same since   162996  2021-06-23 12:46:39 Z    0 days    2 attempts

------------------------------------------------------------
546 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 176957 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 13:37:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 13:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146775.270215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwPXH-0002ES-M6; Thu, 24 Jun 2021 13:36:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146775.270215; Thu, 24 Jun 2021 13:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwPXH-0002EL-J0; Thu, 24 Jun 2021 13:36:51 +0000
Received: by outflank-mailman (input) for mailman id 146775;
 Thu, 24 Jun 2021 13:36:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPXF-0002EB-Ow; Thu, 24 Jun 2021 13:36:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPXF-0000zL-JJ; Thu, 24 Jun 2021 13:36:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPXF-000276-0Q; Thu, 24 Jun 2021 13:36:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPXE-0000LH-WC; Thu, 24 Jun 2021 13:36:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0mQ1vj5QM1uaSaci87fYN2naMUf0E08/iCkBG5Lxihs=; b=0kaOBfjq3fAjJ012pLGjGyGMmO
	snn3UPm4y5TGgXZ+Ccaup1Vyeik/yCW5yqp225CAYwX4flyczJcoE2NZ/YR5L3rlbuMbCLPahHRD0
	FoDrjR3MoqQZ8BwkHX6/MEhLGIBjPpF+F3UuOFOINmniNI7+H14Vj2MfJXwNpwsHGsmY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163013-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163013: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=12e34cd2f7900578ee83cb01b8f1696a7bb7511b
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 13:36:48 +0000

flight 163013 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163013/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 12e34cd2f7900578ee83cb01b8f1696a7bb7511b
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   20 days
Failing since        162368  2021-06-04 15:42:59 Z   19 days   45 attempts
Testing same since   163013  2021-06-24 09:13:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2372 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 14:03:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 14:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146781.270229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwPx0-0005Nu-VM; Thu, 24 Jun 2021 14:03:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146781.270229; Thu, 24 Jun 2021 14:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwPx0-0005Nn-Ru; Thu, 24 Jun 2021 14:03:26 +0000
Received: by outflank-mailman (input) for mailman id 146781;
 Thu, 24 Jun 2021 14:03:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPwz-0005Nd-7H; Thu, 24 Jun 2021 14:03:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPwz-0001Wc-03; Thu, 24 Jun 2021 14:03:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPwy-0003Lj-KV; Thu, 24 Jun 2021 14:03:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwPwy-00032g-K4; Thu, 24 Jun 2021 14:03:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DLj/XVed1Z1SJzHMUFJg6dZz8JV+E3PBiJifR4iBcaM=; b=3JZe60N6Q8K8+qNXJuJSU8ehw1
	TyGhqhOt7ldZt6KD7oAx5AsQsAOetMngFQHDhm/kYcV4FA9a+iNPrhXff3qu/LS8KaGfs6hShBf2N
	DVSmmTPUoAObsRhnNzjshya0scD18BJiLPFmk2SKaeSYjBrihGEiOW3zj+pYs+vMx4LA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163015-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163015: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=76a0aa9c4d7a9fc6fee1158fd9df82ae9b8b605d
X-Osstest-Versions-That:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 14:03:24 +0000

flight 163015 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163015/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  76a0aa9c4d7a9fc6fee1158fd9df82ae9b8b605d
baseline version:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7

Last test of basis   162977  2021-06-22 18:00:26 Z    1 days
Testing same since   163015  2021-06-24 11:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c7691f5e34..76a0aa9c4d  76a0aa9c4d7a9fc6fee1158fd9df82ae9b8b605d -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 14:11:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 14:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146801.270261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQ4M-0007rK-Dp; Thu, 24 Jun 2021 14:11:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146801.270261; Thu, 24 Jun 2021 14:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQ4M-0007rD-Ar; Thu, 24 Jun 2021 14:11:02 +0000
Received: by outflank-mailman (input) for mailman id 146801;
 Thu, 24 Jun 2021 14:11:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LK1y=LS=quicinc.com=quic_qiancai@srs-us1.protection.inumbo.net>)
 id 1lwQ4L-0007r7-3w
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 14:11:01 +0000
Received: from alexa-out-sd-01.qualcomm.com (unknown [199.106.114.38])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6dc22c97-81ee-45af-9eba-ec279edc1065;
 Thu, 24 Jun 2021 14:11:00 +0000 (UTC)
Received: from unknown (HELO ironmsg04-sd.qualcomm.com) ([10.53.140.144])
 by alexa-out-sd-01.qualcomm.com with ESMTP; 24 Jun 2021 07:10:59 -0700
Received: from nasanexm03e.na.qualcomm.com ([10.85.0.48])
 by ironmsg04-sd.qualcomm.com with ESMTP/TLS/AES256-SHA;
 24 Jun 2021 07:10:57 -0700
Received: from [10.111.163.161] (10.80.80.8) by nasanexm03e.na.qualcomm.com
 (10.85.0.48) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 24 Jun
 2021 07:10:52 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dc22c97-81ee-45af-9eba-ec279edc1065
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim;
  t=1624543860; x=1656079860;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=jIPt55RUzMMbkq7vhbeD6czYPulB4Zz8LFVe0v3gRZ0=;
  b=CbSwpOTVg3NEaVEsIcdSI8DBHqB/juX4NC9YL4WiIw9Dz789ljHAv1oP
   f0KLxQFbEwTudWX0nxJYTeK3sxcUCOtLVCxEQvwJdaVmllN5WyDvD9q5V
   XmuxopkkkC6Hn5Jyg73zn1ZTNOxvtlzrE+xoAIOl3E/BdiN7BZpceUcOE
   4=;
X-QCInternal: smtphost
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>
CC: Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>, Rob
 Herring <robh+dt@kernel.org>, <mpe@ellerman.id.au>, Joerg Roedel
	<joro@8bytes.org>, Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, <boris.ostrovsky@oracle.com>,
	<jgross@suse.com>, Marek Szyprowski <m.szyprowski@samsung.com>,
	<heikki.krogerus@linux.intel.com>, <thomas.hellstrom@linux.intel.com>,
	<peterz@infradead.org>, <benh@kernel.crashing.org>,
	<joonas.lahtinen@linux.intel.com>, <dri-devel@lists.freedesktop.org>,
	<chris@chris-wilson.co.uk>, <grant.likely@arm.com>, <paulus@samba.org>,
	<mingo@kernel.org>, Jianxiong Gao <jxgao@google.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Saravana Kannan <saravanak@google.com>,
	<xypron.glpk@gmx.de>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, <bskeggs@redhat.com>,
	<linux-pci@vger.kernel.org>, <xen-devel@lists.xenproject.org>, Thierry Reding
	<treding@nvidia.com>, <intel-gfx@lists.freedesktop.org>,
	<matthew.auld@intel.com>, linux-devicetree <devicetree@vger.kernel.org>,
	Daniel Vetter <daniel@ffwll.ch>, <airlied@linux.ie>,
	<maarten.lankhorst@linux.intel.com>, <linuxppc-dev@lists.ozlabs.org>,
	<jani.nikula@linux.intel.com>, Nicolas Boichat <drinkcat@chromium.org>,
	<rodrigo.vivi@intel.com>, Bjorn Helgaas <bhelgaas@google.com>, Dan Williams
	<dan.j.williams@intel.com>, Andy Shevchenko
	<andriy.shevchenko@linux.intel.com>, Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>, lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Jim Quinlan
	<james.quinlan@broadcom.com>, Tom Lendacky <thomas.lendacky@amd.com>,
	<bauerman@linux.ibm.com>
References: <20210619034043.199220-1-tientzu@chromium.org>
 <20210619034043.199220-7-tientzu@chromium.org>
 <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
 <20210624111855.GA1382@willie-the-truck>
 <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
 <20210624114829.GB1382@willie-the-truck>
From: Qian Cai <quic_qiancai@quicinc.com>
Message-ID: <43ec9dd6-12c0-98ec-8d5d-b2904292721e@quicinc.com>
Date: Thu, 24 Jun 2021 10:10:51 -0400
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210624114829.GB1382@willie-the-truck>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanexm03c.na.qualcomm.com (10.85.0.106) To
 nasanexm03e.na.qualcomm.com (10.85.0.48)



On 6/24/2021 7:48 AM, Will Deacon wrote:
> Ok, diff below which attempts to tackle the offset issue I mentioned as
> well. Qian Cai -- please can you try with these changes?

This works fine.

> 
> Will
> 
> --->8
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 175b6c113ed8..39284ff2a6cd 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -116,7 +116,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  
>  static inline bool is_swiotlb_force_bounce(struct device *dev)
>  {
> -       return dev->dma_io_tlb_mem->force_bounce;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +
> +       return mem && mem->force_bounce;
>  }
>  
>  void __init swiotlb_exit(void);
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 44be8258e27b..0ffbaae9fba2 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -449,6 +449,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
>                 dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
>         unsigned int nslots = nr_slots(alloc_size), stride;
>         unsigned int index, wrap, count = 0, i;
> +       unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>         unsigned long flags;
>  
>         BUG_ON(!nslots);
> @@ -497,7 +498,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
>         for (i = index; i < index + nslots; i++) {
>                 mem->slots[i].list = 0;
>                 mem->slots[i].alloc_size =
> -                       alloc_size - ((i - index) << IO_TLB_SHIFT);
> +                       alloc_size - (offset + ((i - index) << IO_TLB_SHIFT));
>         }
>         for (i = index - 1;
>              io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 14:47:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 14:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146858.270374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQdC-0006KW-0j; Thu, 24 Jun 2021 14:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146858.270374; Thu, 24 Jun 2021 14:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQdB-0006KP-Sb; Thu, 24 Jun 2021 14:47:01 +0000
Received: by outflank-mailman (input) for mailman id 146858;
 Thu, 24 Jun 2021 14:47:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwQdA-0006KJ-FR
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 14:47:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwQd9-0002Dh-37; Thu, 24 Jun 2021 14:46:59 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwQd8-000486-Qr; Thu, 24 Jun 2021 14:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=XCk4lQYg4y7RtUs31wkreOLsu7v4xKjO1fVnBgQiBnI=; b=Iz6av1pTXigGNWzHmUoHkt75Ma
	Lik//fCZyrpXmjwZKn2Of2WY34tTTZxOaeqF2C9HjsuOmCqeV3FzN76ySatBmMojx8FBmMlAaR7DS
	i+RwO+3omLZ8i01DXN1tT3Bc1+Otzu6rjGm7X0UXUmPcQlIMY1vr8J67k70WSULxKvRo=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH] tools/xenstored: Remove redundant check in socket_can_process()
Date: Thu, 24 Jun 2021 15:46:55 +0100
Message-Id: <20210624144655.12900-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Commit 3adfb50315d9 ("tools/xenstored: Introduce a wrapper for
conn->funcs->can_{read, write}") consolidated the check
!conn->is_ignored in two new wrappers.

This means the check in socket_can_process() is now redundant. In
fact it should have been removed in the original commit (as it was done
for the domain helpers).

Reported-by: Raphael Ning <raphning@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 9ffd2ac66d3e..cf7297a96cb1 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1752,7 +1752,7 @@ static bool socket_can_process(struct connection *conn, int mask)
 		return false;
 	}
 
-	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
+	return (fds[conn->pollfd_idx].revents & mask);
 }
 
 static bool socket_can_write(struct connection *conn)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 14:53:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 14:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146863.270384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQjl-0007k5-Ls; Thu, 24 Jun 2021 14:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146863.270384; Thu, 24 Jun 2021 14:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQjl-0007jy-J1; Thu, 24 Jun 2021 14:53:49 +0000
Received: by outflank-mailman (input) for mailman id 146863;
 Thu, 24 Jun 2021 14:53:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3ax9=LS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwQjj-0007js-QK
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 14:53:47 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4b3a32b-4b4a-4953-b211-683eb2e8d7f5;
 Thu, 24 Jun 2021 14:53:46 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1CD941FD8F;
 Thu, 24 Jun 2021 14:53:46 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id E0BBA11A97;
 Thu, 24 Jun 2021 14:53:45 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id mrKjNXmc1GBBDwAALh3uQQ
 (envelope-from <jgross@suse.com>); Thu, 24 Jun 2021 14:53:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4b3a32b-4b4a-4953-b211-683eb2e8d7f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624546426; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5Aa5J9/4iNm9u1xluN/ccOhBJz40YNS4mct9l+6PBPI=;
	b=ExJjija0wRjRVEaebyvwETC0R2Gs0BBJpY7S/ae6ubgInkM7sEGThDNo9Dw6c4kahQt92Q
	38ZgwSmqVk+fvQowEF3VvsvvZxzlqx/J36drIEMH2ceKNBEahXo0E/O1lYjafY7nPhYXAI
	a7JCrELPycFbcGkZ5AgW2xi4KEjvZXA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624546426; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5Aa5J9/4iNm9u1xluN/ccOhBJz40YNS4mct9l+6PBPI=;
	b=ExJjija0wRjRVEaebyvwETC0R2Gs0BBJpY7S/ae6ubgInkM7sEGThDNo9Dw6c4kahQt92Q
	38ZgwSmqVk+fvQowEF3VvsvvZxzlqx/J36drIEMH2ceKNBEahXo0E/O1lYjafY7nPhYXAI
	a7JCrELPycFbcGkZ5AgW2xi4KEjvZXA=
Subject: Re: [PATCH] tools/xenstored: Remove redundant check in
 socket_can_process()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210624144655.12900-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <ef5c28e0-989c-2443-f72d-44cf17dc589c@suse.com>
Date: Thu, 24 Jun 2021 16:53:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210624144655.12900-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="LfzIgQA4j15ML7z59tofFUkRlHHzu7VLi"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--LfzIgQA4j15ML7z59tofFUkRlHHzu7VLi
Content-Type: multipart/mixed; boundary="t8QNoeCRoDoMu9k4MK0Rx8JVGfDhhH7ov";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <ef5c28e0-989c-2443-f72d-44cf17dc589c@suse.com>
Subject: Re: [PATCH] tools/xenstored: Remove redundant check in
 socket_can_process()
References: <20210624144655.12900-1-julien@xen.org>
In-Reply-To: <20210624144655.12900-1-julien@xen.org>

--t8QNoeCRoDoMu9k4MK0Rx8JVGfDhhH7ov
Content-Type: multipart/mixed;
 boundary="------------5B89786E40F2FD65E1A279C3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5B89786E40F2FD65E1A279C3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.06.21 16:46, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>=20
> Commit 3adfb50315d9 ("tools/xenstored: Introduce a wrapper for
> conn->funcs->can_{read, write}") consolidated the check
> !conn->is_ignored in two new wrappers.
>=20
> This means the check in socket_can_process() is now redundant. In
> fact it should have been removed in the original commit (as it was done
> for the domain helpers).
>=20
> Reported-by: Raphael Ning <raphning@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------5B89786E40F2FD65E1A279C3
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5B89786E40F2FD65E1A279C3--

--t8QNoeCRoDoMu9k4MK0Rx8JVGfDhhH7ov--

--LfzIgQA4j15ML7z59tofFUkRlHHzu7VLi
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDUnHkFAwAAAAAACgkQsN6d1ii/Ey/W
Jgf/Z0k5KibYZQxCw/Oi9779f3p826ovTpzgnQm1JNIkE8CtnImeLYOJhSjd0QGxOBf5+wrIS3A0
e2HWw4/OKQQJGjtn65qylKASFy5FcRB8mbjCx1dSVFGfJH30Dy3mv0vaKf10JcRt4UZ6TKf2QqtQ
UCMjxB1baNPkt5puyfbCTn8bgBFhZ6XVJ7h+bNRvaUUynsdnq9R7fdC8SwrRbF7lC4xxN/Sl5bRg
kIOBe20YPbkEi31ESlO0JdSFSeY3HuAlqxF+grw44CUkaKPVo4VAagdSBc1+1W4djA3RNasCxOm0
G3B17LNo6Bguqu4SV7TugZ9AOVgyPnCnTT+q00VGyg==
=bAUN
-----END PGP SIGNATURE-----

--LfzIgQA4j15ML7z59tofFUkRlHHzu7VLi--


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 14:56:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 14:56:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146868.270395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQmV-0008OT-3m; Thu, 24 Jun 2021 14:56:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146868.270395; Thu, 24 Jun 2021 14:56:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQmV-0008OM-0u; Thu, 24 Jun 2021 14:56:39 +0000
Received: by outflank-mailman (input) for mailman id 146868;
 Thu, 24 Jun 2021 14:56:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwQmT-0008OE-5K
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 14:56:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwQmR-0002OC-Rf; Thu, 24 Jun 2021 14:56:35 +0000
Received: from [54.239.6.182] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwQmR-0004sa-KS; Thu, 24 Jun 2021 14:56:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dEtNWCKVkzqbyFAHE44xMqQTqtJFmSXLWfRMatdkYRc=; b=GfKb858ybWSHPUCTCqrgPRJV75
	CwVvzCZZZeDucmSFRr0enYhyDWQxnsj+MCrjDcPU3l6N4cXSq646VCPiaEjwgbhgW7F96WfMRMLm3
	gNaGQqL5uhYQ+zypTi6C5fm9m7zRhzJWHnkP2dEsW8Gcy+sIrFFkhGbJX6qXbetj11Cc=;
Subject: Re: [PATCH] tools/xenstored: Remove redundant check in
 socket_can_process()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210624144655.12900-1-julien@xen.org>
 <ef5c28e0-989c-2443-f72d-44cf17dc589c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b97249d6-2c97-2bc9-2dbc-db491c814fa1@xen.org>
Date: Thu, 24 Jun 2021 16:56:33 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <ef5c28e0-989c-2443-f72d-44cf17dc589c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 24/06/2021 16:53, Juergen Gross wrote:
> On 24.06.21 16:46, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 3adfb50315d9 ("tools/xenstored: Introduce a wrapper for
>> conn->funcs->can_{read, write}") consolidated the !conn->is_ignored
>> check into two new wrappers.
>>
>> This means the check in socket_can_process() is now redundant. In
>> fact, it should have been removed in the original commit (as was done
>> for the domain helpers).
>>
>> Reported-by: Raphael Ning <raphning@amazon.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks! Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:06:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:06:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146874.270411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQwO-0001Vl-6e; Thu, 24 Jun 2021 15:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146874.270411; Thu, 24 Jun 2021 15:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwQwO-0001Ve-20; Thu, 24 Jun 2021 15:06:52 +0000
Received: by outflank-mailman (input) for mailman id 146874;
 Thu, 24 Jun 2021 15:06:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwQwN-0001VU-89; Thu, 24 Jun 2021 15:06:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwQwN-0002b8-0Y; Thu, 24 Jun 2021 15:06:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwQwM-00078L-Jd; Thu, 24 Jun 2021 15:06:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwQwM-0004vZ-JB; Thu, 24 Jun 2021 15:06:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jmm3eRrhlYOGTgpGVN+yDjm0ySmhzahtZ9Yb5yUUSws=; b=ab2ra+4ZAAjmxu0FE/Am2zrrdu
	mEPrN7MrNpfvTT2lXH2Fdm37xNCjaKmOo7vsDelENjLd+RvOob1yfyfruUWI4LAFdD2GEmAR9DuqJ
	9BzB/1NVc8LMu50tu9idiNDoSMGktYy3SAsw8BprJHoWFHJ9NoEjC3hzsrcW6zf3M0rY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163010-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163010: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7426cedc7dad67bf3c71ea6cc29ab7822e1a453f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 15:06:50 +0000

flight 163010 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163010/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7426cedc7dad67bf3c71ea6cc29ab7822e1a453f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  327 days
Failing since        152366  2020-08-01 20:49:34 Z  326 days  555 attempts
Testing same since   163010  2021-06-24 03:29:41 Z    0 days    1 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1689018 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:29:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:29:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146881.270424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRIV-00045a-4L; Thu, 24 Jun 2021 15:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146881.270424; Thu, 24 Jun 2021 15:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRIV-00045T-1P; Thu, 24 Jun 2021 15:29:43 +0000
Received: by outflank-mailman (input) for mailman id 146881;
 Thu, 24 Jun 2021 15:29:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QFrQ=LS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwRIU-00045N-80
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:29:42 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a05a8c57-5122-435a-8dc7-bf2ba6d5a43b;
 Thu, 24 Jun 2021 15:29:37 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-16-py8U6TBlP4a2O1Y_px8Cxg-1; Thu, 24 Jun 2021 17:29:34 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6302.eurprd04.prod.outlook.com (2603:10a6:803:102::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Thu, 24 Jun
 2021 15:29:32 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Thu, 24 Jun 2021
 15:29:32 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM3PR03CA0058.eurprd03.prod.outlook.com (2603:10a6:207:5::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Thu, 24 Jun 2021 15:29:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a05a8c57-5122-435a-8dc7-bf2ba6d5a43b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624548576;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ydpdtYAmNnZpW8BgADft4BICYmMHsWJM1xz1idecOsw=;
	b=JLQUFwXYyxTqbwsLYru9/CJCiHRl2cLnIs1RK+I3qFtoBRVBWww043UEUZWnQvKMuLCC2S
	n3ajA9Nam+yoxgnIDhc7IlQp4Q0F24595ndKFtnf21J2juClsfPoELSUkpDC5VtNjfS9Zw
	A0d2RyFCrEzJut6H3U/lnPXmqBn7kKM=
X-MC-Unique: py8U6TBlP4a2O1Y_px8Cxg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ORtczgkNdFqbQYvrgRqi2A8SC4JoS+o508bdCuz9stFsQXBN/yDkvSTBJcuXehhnBxt+3J7IXEsLxlXSuzqc/Ifkw7ZT9t0QXL22fhtbCH1xY8iIzWWqqk+a3yt24AKRPzENlrB38ZYbxkwCNJ+cSUfUKfzrueXd3Tso1H7SwMnu+jvW2i/HwCp9lbJ7iC5kqUuPCIaZ/t0Ia0DVyba+UXpYXXDWAGfeiPBvUFugUDhho0VV90P96Y9/6e3vsq2fBrrQewXprRNsTySe1I8QHGdKlPTQesiKrJH9bFw1e18jhDCns+P38mGTVVa51CyDm5TFU6ixgX7y+Wf+pizmzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ydpdtYAmNnZpW8BgADft4BICYmMHsWJM1xz1idecOsw=;
 b=RWR2ln+fkO2wf+fnM6uZFRkUs9IbbBrY8mm3sAQ8ypPVYTk1Gl4nmFkTynXfF+wWulAjDEOxT82Pf7EtPvdU5qUjHKuH6PQRjgxFC4ReE64uu2KIdGpNZ4/6yAnUuxDIte53A9JyzEp/s4NmNgCc2qqo2aXfV2MCh/Vx42fP9qehBGttdTN6EQ5AXMD3Pmv4Ac9D5yjV+0734nuTkYRuXmiGcg4hCfR2fwQ/RQxygyRyyKC+muNFFpawWiVosrzLNk0HYX3Wph0hH531OCE8CGH3gaTlCZb0m+9pjHcGPmeeOCmVkE21tEhITjXXorEOhVxxkcZv47uVlROnGNidPg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Ping: [PATCH v2 0/2] x86/AMD: MSR handling adjustments
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Message-ID: <cf132aaf-96cb-b79e-f5a2-7e0f0f2d28f1@suse.com>
Date: Thu, 24 Jun 2021 17:29:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM3PR03CA0058.eurprd03.prod.outlook.com
 (2603:10a6:207:5::16) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 90e4c31b-2066-4478-312c-08d93724dd15
X-MS-TrafficTypeDiagnostic: VI1PR04MB6302:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 90e4c31b-2066-4478-312c-08d93724dd15
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2021 15:29:32.0028
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: v7w4OEDc7dOve8AAuWqj7XE0b4XT4Ri/3pturK6fM870WMLM3ShusxnawsTdR9AIt7gdjSinETKBT3zSYbCyAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6302

On 28.05.2021 08:56, Jan Beulich wrote:
> 1: expose SYSCFG, TOM, TOM2, and IORRs to Dom0
> 2: drop MSR_K7_HWCR

Any thoughts here?

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:34:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:34:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146886.270436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRNI-0005TU-Nk; Thu, 24 Jun 2021 15:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146886.270436; Thu, 24 Jun 2021 15:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRNI-0005TN-Kq; Thu, 24 Jun 2021 15:34:40 +0000
Received: by outflank-mailman (input) for mailman id 146886;
 Thu, 24 Jun 2021 15:34:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6vq+=LS=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lwRNG-0005TH-Vq
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:34:39 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dbc0e12-473e-462c-b95d-23d6b43ba86b;
 Thu, 24 Jun 2021 15:34:38 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id a13so7145699wrf.10
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:34:38 -0700 (PDT)
Received: from ?IPv6:2a00:23c5:5785:9a01:e5c2:6da2:5433:99b5?
 ([2a00:23c5:5785:9a01:e5c2:6da2:5433:99b5])
 by smtp.gmail.com with ESMTPSA id p187sm3432664wmp.28.2021.06.24.08.34.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:34:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dbc0e12-473e-462c-b95d-23d6b43ba86b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=6kXIpn+CEL9LTQWBFHZphYo5c1xea3b9mMMiHyfB8rg=;
        b=DrLci0requssIiZEPFJyXyHXuOC490++yMlCI24o+z90Xk8RfEdgpHX5SJPIVWulUt
         H903Rh8q8zMR0+uHp9qiZ342Zq/25q0TjbXbdB+QMQEZxO/PDsKViHUYLIJkGtoNwqGn
         GwaoaYtVazU6ycPricgI5mcUwwBdpduhz4XHQEB+CU6YkhPgsJcPR0Dy1CveQqpMMMzw
         u9azIkMqbdxrIeIBqbh0ZEBxVo8ZTamPYT8gV0h8cTtkLLyq5TVBCMPq+RsCyCTSTeka
         pjAJnmhqP1F95rIhOyN5iPg7VhFidWaas4kW4hEPtM5IBJIrRxYcc/IrmaadZ6IZJ+Hw
         QPvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=6kXIpn+CEL9LTQWBFHZphYo5c1xea3b9mMMiHyfB8rg=;
        b=euwf8sShj5q4ZsAdGik4GN1PCuHSGcdZZVPIA7715olpoEIijSyo3NGY5RorIMues5
         /FJ0DGHFARKQgQKd8WgmKKXKmHHR9/N63WHFQNJ3sCaX1auG5LAg/2Ad0J8sZc6t+Qp1
         104DiWiyscWFb/JKNNX8z4UU9SUCxkMkJWMYRS9fMc6qNzGr3onv7SGpDcIVlLNOAwg0
         rSIM9TyDJFPfjz5010KV4IKBbuilTFM6Wi/CCesaA1Ny10aEXxLCrg63iiu9YRbZRk8c
         Sm6onalvldPTD3FkMRFyWcANgBOt0QYzOSR1RIwqgo/knjF0j2aAgrd2irSRZQk43ZHH
         KqWw==
X-Gm-Message-State: AOAM532dm2pbebiSjzY8OyNBPRTSPQYVPHY6TpRo+eHs4KyyJC5X6wRp
	iEXDh/6nqTeKKYvR4Oni9+s=
X-Google-Smtp-Source: ABdhPJxTu8t8wSDUGP6dHGJSpOfjktTfnd3RthCV7lbDROIQbNV/i55bhMxxccJv7bpAweQN9TVFqQ==
X-Received: by 2002:adf:fc8e:: with SMTP id g14mr5090300wrr.411.1624548877232;
        Thu, 24 Jun 2021 08:34:37 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH 9/9] IOMMU/PCI: don't let domain cleanup continue when
 device de-assignment failed
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
References: <03285055-47ff-ab10-ae76-0553f28f136d@suse.com>
 <1a7b974c-8dee-3422-28fb-4118fe145b4e@suse.com>
Message-ID: <7c5a4244-8f33-0705-f518-0b4e9a0e7cb4@xen.org>
Date: Thu, 24 Jun 2021 16:34:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <1a7b974c-8dee-3422-28fb-4118fe145b4e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09/06/2021 10:30, Jan Beulich wrote:
> Failure here could in principle mean the device may still be issuing DMA
> requests, which would continue to be translated by the page tables the
> device entry currently points at. With this we cannot allow the
> subsequent cleanup step of freeing the page tables to occur, to prevent
> use-after-free issues. We would need to accept, for the time being, that
> in such a case the remaining domain resources will all be leaked, and
> the domain will continue to exist as a zombie.
> 
> However, with flushes no longer timing out (and with proper timeout
> detection for device I/O TLB flushing yet to be implemented), there's no
> way anymore for failures to occur, except due to bugs elsewhere. Hence
> the change here is merely a "just in case" one.
> 
> In order to continue the loop in spite of an error, we can't use
> pci_get_pdev_by_domain() anymore. I have no idea why it was used here in
> the first place, instead of the cheaper list iteration.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
> A first step beyond this could be to have the backing functions of
> deassign_device() allow the caller to tell whether the failure was from
> removing the device from the domain being cleaned up, or from re-setup
> in wherever the device was supposed to get moved to. In the latter case
> we could allow domain cleanup to continue. I wonder whether we could
> simply make those functions return "success" anyway, overriding their
> returning of an error when ->is_dying is set.
> 
> A next step then might be to figure out whether there's any "emergency"
> adjustment that could be done instead of the full-fledged (and failed)
> de-assign, to allow at least recovering all the memory from the guest.
> 
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -894,7 +894,7 @@ static int deassign_device(struct domain
>   
>   int pci_release_devices(struct domain *d)
>   {
> -    struct pci_dev *pdev;
> +    struct pci_dev *pdev, *tmp;
>       u8 bus, devfn;
>       int ret;
>   
> @@ -905,15 +905,15 @@ int pci_release_devices(struct domain *d
>           pcidevs_unlock();
>           return ret;
>       }
> -    while ( (pdev = pci_get_pdev_by_domain(d, -1, -1, -1)) )
> +    list_for_each_entry_safe ( pdev, tmp, &d->pdev_list, domain_list )
>       {
>           bus = pdev->bus;
>           devfn = pdev->devfn;
> -        deassign_device(d, pdev->seg, bus, devfn);
> +        ret = deassign_device(d, pdev->seg, bus, devfn) ?: ret;
>       }
>       pcidevs_unlock();
>   
> -    return 0;
> +    return ret;
>   }
>   
>   #define PCI_CLASS_BRIDGE_HOST    0x0600
> 
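The two changes above — iterating with list_for_each_entry_safe() so removal of the current entry is harmless, and folding each call's result into ret — combine into a common "keep going, but remember a failure" idiom. A minimal standalone sketch (illustrative types and names, not Xen code; `a ?: b` is the GNU C extension the patch relies on, evaluating to a when a is nonzero and b otherwise):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical device record; names are illustrative, not Xen's. */
struct dev {
    int fail;           /* nonzero: de-assignment fails with this code */
    struct dev *next;
};

/* Stand-in for deassign_device(): 0 on success, -errno on failure. */
static int fake_deassign(struct dev *d)
{
    return d->fail;
}

/*
 * Walk the whole list even when an entry fails, retaining a failure
 * code so the caller still sees an error -- the same idea as
 *     ret = deassign_device(...) ?: ret;
 * Note the most recent failure wins; an earlier error survives only
 * if every later call succeeds.
 */
static int release_all(struct dev *head)
{
    int ret = 0;

    for (struct dev *d = head, *tmp; d; d = tmp) {
        tmp = d->next;                    /* safe against entry removal */
        ret = fake_deassign(d) ?: ret;
    }
    return ret;
}
```

This mirrors why the patch switches away from pci_get_pdev_by_domain(): that helper keeps returning the same still-assigned device on failure, whereas the safe list walk makes forward progress regardless.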



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:55:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146892.270458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhm-000849-Mp; Thu, 24 Jun 2021 15:55:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146892.270458; Thu, 24 Jun 2021 15:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhm-000840-IT; Thu, 24 Jun 2021 15:55:50 +0000
Received: by outflank-mailman (input) for mailman id 146892;
 Thu, 24 Jun 2021 15:55:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRhl-00083W-2v
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:55:49 +0000
Received: from mail-pf1-x42e.google.com (unknown [2607:f8b0:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bf28517-40d5-4d76-aa28-41823a864d62;
 Thu, 24 Jun 2021 15:55:48 +0000 (UTC)
Received: by mail-pf1-x42e.google.com with SMTP id c5so5535501pfv.8
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:55:48 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id f12sm3216544pfc.100.2021.06.24.08.55.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:55:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bf28517-40d5-4d76-aa28-41823a864d62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YJFR7rMJgjSCjddzLV99+gcsv72cCY4rcuJfLx7ix2M=;
        b=IZ8zEMilmar/QDVLgl1NDX/TTwxyjXxS5kA1fSnBE6n5IfzENsPcNaND27UjFp5jiN
         u8HH6B7JIXK5EX6f67y7l9dDoZ++rPB0nrtDbiVwhfF52n//6y1BzEqiBhQhAh+kBIK8
         UMPl2mPih9xqS2jFSSaq+Fm8SSizc0d0wP3VY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YJFR7rMJgjSCjddzLV99+gcsv72cCY4rcuJfLx7ix2M=;
        b=eT5hIB2VemRRKa2tisFQNpxyE8brda5Bj2VENk2buUQl3i2tqEWyfm8IFR+/la0gQ1
         JYpZop3AieY3c2BCsuoe/WZw/awEeQgzOVZF2DzqyC+giKKB4UzvLXHYU+G+4tyZ7Rpo
         4L1MmtNVKVYQBo95p/euk9zu867aYW3KD/98e+NGMTbayHZZG5P6IVnsZ/xbz3vgNPDf
         4IgMGXCI9pWX8WoPGq3zMBarH69Mo/cSH/CYiYAx4m7D8uV3ekyW71QqA2l4gjT7TdvK
         Mhwy8LCq/l0v0OYQ0pqShNck6XnU69cDdjVneTOsfiwifTk6vU71gltYCcvw163fgK6I
         CJcQ==
X-Gm-Message-State: AOAM531YJ7I85kkBHCwZ2sJBcyBpephRh0plL6x45pBsSesM2zcuhPW0
	UpZQaCGpMh2u9Bf2wjqlg6uWrA==
X-Google-Smtp-Source: ABdhPJz6AmQg2p36DPOtVtB4TSDM/glgE/K0cXRSpLrBRHDSDkrUzfxWqUOOjFBiYq3vGkkuKVesgQ==
X-Received: by 2002:a63:d908:: with SMTP id r8mr5249237pgg.414.1624550147592;
        Thu, 24 Jun 2021 08:55:47 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 01/12] swiotlb: Refactor swiotlb init functions
Date: Thu, 24 Jun 2021 23:55:15 +0800
Message-Id: <20210624155526.2775863-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 52e2ac526757..1f9b2b9e7490 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
@@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
+	memset(mem, 0, sizeof(*mem));
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.288.g62a8d224e6-goog
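The refactoring shape above — hoisting the duplicated struct initialization out of both the early (memblock) and late (page-allocator) init paths into one helper parameterized by a late_alloc flag — can be sketched generically (simplified illustrative types, not the kernel's io_tlb_mem):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define SLAB_SHIFT 11   /* stand-in for IO_TLB_SHIFT */

/* Simplified stand-in for struct io_tlb_mem; fields are illustrative. */
struct pool {
    unsigned long nslabs;
    unsigned long start;
    unsigned long end;
    unsigned long index;
    bool late_alloc;
};

/*
 * Shared initializer, mirroring swiotlb_init_io_tlb_mem(): both init
 * paths fill the structure identically, differing only in how the
 * backing memory was obtained, which collapses into one flag.
 */
static void pool_init(struct pool *mem, unsigned long start,
                      unsigned long nslabs, bool late_alloc)
{
    memset(mem, 0, sizeof(*mem));
    mem->nslabs = nslabs;
    mem->start = start;
    mem->end = start + (nslabs << SLAB_SHIFT);
    mem->index = 0;
    mem->late_alloc = late_alloc;
}
```

Note the ordering constraint the real patch observes: in the late path, set_memory_decrypted() runs on the raw buffer before the helper zeroes it, so the memset inside the helper touches already-decrypted memory.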



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:55:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146891.270446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhe-0007ly-Db; Thu, 24 Jun 2021 15:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146891.270446; Thu, 24 Jun 2021 15:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhe-0007lr-AW; Thu, 24 Jun 2021 15:55:42 +0000
Received: by outflank-mailman (input) for mailman id 146891;
 Thu, 24 Jun 2021 15:55:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRhc-0007ll-FN
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:55:40 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57017509-9d64-4481-bbaf-571de297b14e;
 Thu, 24 Jun 2021 15:55:39 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id g192so5534666pfb.6
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:55:39 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id oc9sm2594487pjb.43.2021.06.24.08.55.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:55:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57017509-9d64-4481-bbaf-571de297b14e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=rdc0HB9wCm28bStTIxqWIJNvJopKcq8sWWQzFyMGdN8=;
        b=EWUPwdxHp5YKeCTYM1Ou9zc1bF9zF4WPE1YYn08iLU0DKzgmmZp5S8YTmyG8h6lXtv
         H7bIKs3GpVKupJr/fP+pph0lGjQ+mJcFcHV01gy1FmuxsYy+jElw6HlWfLFvvUP+d+yn
         QsIj+YxXmhSRVl7m7zjzDw87xA/6voIVdj0mk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=rdc0HB9wCm28bStTIxqWIJNvJopKcq8sWWQzFyMGdN8=;
        b=F+08jBLDCP3AikT71hTuVZzc+FstEDXdtIvguglZdMspLJ2vA09XyXjyht+dQH8jDH
         sWqImxRlNdCxJH/8GXdlz6w5b42oxV9dq/kAnXVJViuKnavxWBpzIg6hfyZKvT3w9rAe
         z0X3Hd7ThSGCUQvx+rcIC3hBSOWl6asgfJF7+JpoWjEjQ5RZAQioLRgc0CkHm1YsZoZO
         DXuU/LrQpxxfHPFE5v1XXlEvf5xRVjH6hP+Wfng40F8ToGWhSMREiuNHSUqZKHjDYM5P
         Gqn3GYBXmyl/BkdIj1ONOYm/Lxql6apOyn9HOx/3UJf8tLApREFr8PETzP7IJovpQmvC
         F4Pg==
X-Gm-Message-State: AOAM531ASILo9OC85fAR4q0UY7s891nbWWtLLA65V/pFj/HS9KcWciqu
	6tmCwx6vlLusH3RN/Vj0PBs4lg==
X-Google-Smtp-Source: ABdhPJy87SSPlxL47d7OH+XoeGh+AXQQoDaeBE720F3P2KtS6fZXS8ICxE/Q45nw3Tf/1EGtaohwag==
X-Received: by 2002:a05:6a00:810:b029:301:f08c:6b0d with SMTP id m16-20020a056a000810b0290301f08c6b0dmr5744051pfk.8.1624550138225;
        Thu, 24 Jun 2021 08:55:38 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 00/12] Restricted DMA
Date: Thu, 24 Jun 2021 23:55:14 +0800
Message-Id: <20210624155526.2775863-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
full chain of exploits; [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
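Concretely, patches 11-12 of the series expose such a region to a device via a reserved-memory node that the device references. A minimal device-tree sketch (addresses, sizes, and node names here are illustrative, loosely following the reserved-memory.txt example the series adds):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* 64 MiB pool; DMA for devices referencing it is bounced here. */
	restricted_dma_reserved: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x4000000>;
	};
};

pcie@10000000 {
	/* ... usual host-bridge properties elided ... */
	memory-region = <&restricted_dma_reserved>;
};
```

The "restricted-dma-pool" compatible introduced by the series tells the OF plumbing to create a per-device swiotlb pool from that region, so both streaming DMA bouncing and coherent allocations for the device stay inside it.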

v15:
- Apply Will's diff (https://lore.kernel.org/patchwork/patch/1448957/#1647521)
  to fix the crash reported by Qian.
- Add Stefano's Acked-by tag for patch 01/12 from v14

v14:
- Move set_memory_decrypted before swiotlb_init_io_tlb_mem (patch 01/12, 10,12)
- Add Stefano's Acked-by tag from v13
https://lore.kernel.org/patchwork/cover/1448954/

v13:
- Fix xen-swiotlb issues
  - memset in patch 01/12
  - is_swiotlb_force_bounce in patch 06/12
- Fix the dts example typo in reserved-memory.txt
- Add Stefano and Will's Tested-by tag from v12
https://lore.kernel.org/patchwork/cover/1448001/

v12:
Split is_dev_swiotlb_force into is_swiotlb_force_bounce (patch 06/12) and
is_swiotlb_for_alloc (patch 09/12)
https://lore.kernel.org/patchwork/cover/1447254/

v11:
- Rebase against swiotlb devel/for-linus-5.14
- s/mempry/memory/g
- exchange the order of patch 09/12 and 10/12
https://lore.kernel.org/patchwork/cover/1447216/

v10:
Address the comments in v9 to
  - fix the dev->dma_io_tlb_mem assignment
  - propagate swiotlb_force setting into io_tlb_default_mem->force
  - move set_memory_decrypted out of swiotlb_init_io_tlb_mem
  - move debugfs_dir declaration into the main CONFIG_DEBUG_FS block
  - add swiotlb_ prefix to find_slots and release_slots
  - merge the 3 alloc/free related patches
  - move the CONFIG_DMA_RESTRICTED_POOL later
https://lore.kernel.org/patchwork/cover/1446882/

v9:
Address the comments in v7 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'
https://lore.kernel.org/patchwork/cover/1445081/

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (12):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Add restricted DMA pool initialization
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/base/core.c                           |   4 +
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   4 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  53 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  59 ++--
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 251 +++++++++++++-----
 16 files changed, 390 insertions(+), 103 deletions(-)

-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:55:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:55:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146893.270469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhv-0008Re-4E; Thu, 24 Jun 2021 15:55:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146893.270469; Thu, 24 Jun 2021 15:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRhv-0008RW-0c; Thu, 24 Jun 2021 15:55:59 +0000
Received: by outflank-mailman (input) for mailman id 146893;
 Thu, 24 Jun 2021 15:55:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRht-0008QL-Pt
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:55:57 +0000
Received: from mail-pg1-x52e.google.com (unknown [2607:f8b0:4864:20::52e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2733aba1-665c-4386-894c-6259c438ad9e;
 Thu, 24 Jun 2021 15:55:57 +0000 (UTC)
Received: by mail-pg1-x52e.google.com with SMTP id a2so5059784pgi.6
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:55:57 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id la18sm5913675pjb.55.2021.06.24.08.55.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:55:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2733aba1-665c-4386-894c-6259c438ad9e
X-Received: by 2002:a63:4e4d:: with SMTP id o13mr5258697pgl.361.1624550156451;
        Thu, 24 Jun 2021 08:55:56 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 02/12] swiotlb: Refactor swiotlb_create_debugfs
Date: Thu, 24 Jun 2021 23:55:16 +0800
Message-Id: <20210624155526.2775863-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split out the debugfs file creation so the code can be reused to support
different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1f9b2b9e7490..ede66df6835b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -671,19 +671,26 @@ bool is_swiotlb_active(void)
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:56:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:56:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146897.270480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRi4-0000W2-D0; Thu, 24 Jun 2021 15:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146897.270480; Thu, 24 Jun 2021 15:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRi4-0000Vv-9q; Thu, 24 Jun 2021 15:56:08 +0000
Received: by outflank-mailman (input) for mailman id 146897;
 Thu, 24 Jun 2021 15:56:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRi3-0000Nz-DU
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:56:07 +0000
Received: from mail-pj1-x102a.google.com (unknown [2607:f8b0:4864:20::102a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf6f5d6b-06f2-4a02-802c-1fa6e7d74328;
 Thu, 24 Jun 2021 15:56:06 +0000 (UTC)
Received: by mail-pj1-x102a.google.com with SMTP id l11so3725150pji.5
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:06 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id v21sm3445482pfu.77.2021.06.24.08.55.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf6f5d6b-06f2-4a02-802c-1fa6e7d74328
X-Received: by 2002:a17:902:d483:b029:127:95c1:19d8 with SMTP id c3-20020a170902d483b029012795c119d8mr4930234plg.42.1624550165683;
        Thu, 24 Jun 2021 08:56:05 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 03/12] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
Date: Thu, 24 Jun 2021 23:55:17 +0800
Message-Id: <20210624155526.2775863-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Always store a pointer to the swiotlb pool in use in struct device. This
helps simplify the code once other pools are added.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/base/core.c    | 4 ++++
 include/linux/device.h | 4 ++++
 kernel/dma/swiotlb.c   | 8 ++++----
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f29839382f81..cb3123e3954d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -27,6 +27,7 @@
 #include <linux/netdevice.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
+#include <linux/swiotlb.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
 
@@ -2736,6 +2737,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index ba660731bd25..240d652a0696 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -518,6 +519,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ede66df6835b..72a4289faed1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -340,7 +340,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
@@ -431,7 +431,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -508,7 +508,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -559,7 +559,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:56:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:56:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146904.270491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiK-0001PH-Ld; Thu, 24 Jun 2021 15:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146904.270491; Thu, 24 Jun 2021 15:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiK-0001P3-IJ; Thu, 24 Jun 2021 15:56:24 +0000
Received: by outflank-mailman (input) for mailman id 146904;
 Thu, 24 Jun 2021 15:56:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRiJ-0000Nz-2M
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:56:23 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7a589a2-206c-4f5c-82d9-3ffbb18ac911;
 Thu, 24 Jun 2021 15:56:15 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id d12so5058479pgd.9
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:15 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id n69sm3378565pfd.132.2021.06.24.08.56.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7a589a2-206c-4f5c-82d9-3ffbb18ac911
X-Received: by 2002:a63:c058:: with SMTP id z24mr5427217pgi.264.1624550174569;
        Thu, 24 Jun 2021 08:56:14 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 04/12] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Thu, 24 Jun 2021 23:55:18 +0800
Message-Id: <20210624155526.2775863-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a struct device argument to is_swiotlb_buffer so the check is made
against that device's pool. This will be useful later for supporting
different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3087d9fa6065..10997ef541f8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -507,7 +507,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -578,7 +578,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -783,7 +783,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -800,7 +800,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..d1f3d95881cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -101,9 +102,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -115,7 +116,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:56:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146906.270501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiP-0001nj-UC; Thu, 24 Jun 2021 15:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146906.270501; Thu, 24 Jun 2021 15:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiP-0001nc-R7; Thu, 24 Jun 2021 15:56:29 +0000
Received: by outflank-mailman (input) for mailman id 146906;
 Thu, 24 Jun 2021 15:56:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRiO-0000Nz-2Y
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:56:28 +0000
Received: from mail-pg1-x52d.google.com (unknown [2607:f8b0:4864:20::52d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d86ed21-4b75-4afb-85da-a4aeeed0864f;
 Thu, 24 Jun 2021 15:56:24 +0000 (UTC)
Received: by mail-pg1-x52d.google.com with SMTP id h4so5077322pgp.5
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:24 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id e1sm3380452pfd.16.2021.06.24.08.56.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d86ed21-4b75-4afb-85da-a4aeeed0864f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zrGk5vL3UOmfV1+EKp1h1m3lS/SKJ23U3aRYavMW2u4=;
        b=B6hn/8348JH2vHnPkyUEYC2fLk0Sj3SDc+OnMd4jx/VFUgTlJezOC+YlolCAOweLAt
         Oo0kyBMkmJHHWmhS0fvrHHODCR6O1qWW4bWw+QS4jTi9LZEBbOnCcJNgE55n3uGDNe5i
         HSER8yBYSdHvr4Z8kgs209kJHa4Ry9RL7QVfY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zrGk5vL3UOmfV1+EKp1h1m3lS/SKJ23U3aRYavMW2u4=;
        b=b/oiUHW38WQYB329i9rbpDNCkgbxxQvTN0YxHDFymqw4v7D82LvG9X7QJ2J7zuhcol
         N3FSczYVb7HiiLTFGjLKS/nJXcfVL5+23fgPGDJDoMV+Ti45v1TR7G5QVkt4P9tP7Wpv
         mpMmAtymuXi2LEOzUFPGlPtxVDT7AaA4IoDM0X6tTb9SE/6HXASUFtjGaajKPeer+qor
         lBEeXCgaoPn0hijGsZphexMOlAhITd1rwdHKFvXL7H5++BLOUQMZ5g794niE8mneltjq
         4wB+F080cPCTxA7MlFjLJYS9nTsRj1R/3CDLqTbAL41KEYnsBWkD1qw6Dp+9+j/2VKW1
         JH1A==
X-Gm-Message-State: AOAM531gq1ORxvW/RA4HkWnX2ZbtGXkX6VGCksubpF7xHwHM/PARGdwF
	CfXZizVmMOP+7A0rgIUsXvSvFw==
X-Google-Smtp-Source: ABdhPJznGkhArLV3Q5epacwrDV6aWTJSesN4ouhS8WfY0NXWOghLPXDssO25Zt6tnlDnLhwafUevTA==
X-Received: by 2002:a63:4554:: with SMTP id u20mr5205395pgk.23.1624550183886;
        Thu, 24 Jun 2021 08:56:23 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 05/12] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Thu, 24 Jun 2021 23:55:19 +0800
Message-Id: <20210624155526.2775863-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index a9d65fc8aa0e..4b7afa0fc85d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 9662522aa066..be15bfd9e0ee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d1f3d95881cd..dd1c30a83058 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -112,7 +112,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -132,7 +132,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 72a4289faed1..8a120f42340b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -664,9 +664,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:56:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146907.270513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiV-0002Fb-AN; Thu, 24 Jun 2021 15:56:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146907.270513; Thu, 24 Jun 2021 15:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiV-0002FN-68; Thu, 24 Jun 2021 15:56:35 +0000
Received: by outflank-mailman (input) for mailman id 146907;
 Thu, 24 Jun 2021 15:56:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRiU-0002ED-9Y
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:56:34 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 688a7c78-e76d-4e48-baed-983584c8d738;
 Thu, 24 Jun 2021 15:56:33 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id
 bv7-20020a17090af187b029016fb18e04cfso5696842pjb.0
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:33 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id 10sm3356835pfh.174.2021.06.24.08.56.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 688a7c78-e76d-4e48-baed-983584c8d738
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=vWWa68OFQeb7FNC92AYjUweQlraoAtZ2qywuTutOPN8=;
        b=VXwSpljaHyT9CGsX566zaWUYjLWqT6EZGvW+HUV35ZUKMCtL+5WOxJcFRf4L1/7RFc
         qGeV6pbn94x/vW7fLhV34a5kH7V7Jg+oyk1mVqp0p88iltTd6zaG8ZqWhoFwX03BHlru
         OGskxuzkvEXUDLIs9WCRDc/Gg2nJw/vs7F2Ng=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=vWWa68OFQeb7FNC92AYjUweQlraoAtZ2qywuTutOPN8=;
        b=IwX7ih9KwTOqoXEaN6RWN9KkP8r9hI/xCNl+/zh0iSwoxSmWmsmIXycBPw4tN7jMrk
         czQReVDm+JmeQ/aa3z7U75tR7sWfkgykHte82CrF7BcSacPw9c7mhrCMTwKbhhI3X+bp
         3mLkXXLt2+qpybYDoXwbAAAgPfR6rp4dgQOOWk0siB4FN3PWruM67UY/XIMfocnLG41m
         NGIl0dVf9UU/kNIxVWFs220s6Ek6LLH1ANi6++UYaAzYDr93DI0zcrHryR4w4+49ksT3
         /ouWothH0Xltm3dKBMsuspUh/WWOTlzDSXzYYk+0GI9jPTzx6yCvO36FFdL4DHjpGSWA
         br7Q==
X-Gm-Message-State: AOAM530qdD4qAdXau9Ra4bENTmjRTgsFvSeLuneMij1sHeKmdJ8SrDj4
	gqbzn2yFJyzZ5GQ9MQ18641Kfg==
X-Google-Smtp-Source: ABdhPJyM3ZtD9p071nGaZ41yrN00ryvbfXJ3YxQAdjvfE9uJr/thaI9NsyuGT18oKxVn0E9uAB208g==
X-Received: by 2002:a17:90a:db8a:: with SMTP id h10mr16188658pjv.50.1624550192680;
        Thu, 24 Jun 2021 08:56:32 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
Date: Thu, 24 Jun 2021 23:55:20 +0800
Message-Id: <20210624155526.2775863-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Propagate the swiotlb_force setting into io_tlb_default_mem->force_bounce
and use it to determine whether to bounce the data. This will be useful
later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   | 13 +++++++++++++
 kernel/dma/direct.c       |  2 +-
 kernel/dma/direct.h       |  2 +-
 kernel/dma/swiotlb.c      |  4 ++++
 5 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0c6ed09f8513..4730a146fa35 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	if (dma_capable(dev, dev_addr, size, true) &&
 	    !range_straddles_page_boundary(phys, size) &&
 		!xen_arch_need_swiotlb(dev, phys, dev_addr) &&
-		swiotlb_force != SWIOTLB_FORCE)
+		!is_swiotlb_force_bounce(dev))
 		goto done;
 
 	/*
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd1c30a83058..da348671b0d5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_bounce: %true if swiotlb bouncing is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -94,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_bounce;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -109,6 +111,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+
+	return mem && mem->force_bounce;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -120,6 +129,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..a92465b4eb12 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..4632b0f4f72e 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force_bounce(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8a120f42340b..0d294bbf274c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,10 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		mem->force_bounce = true;
+
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:57:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:57:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146914.270524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRit-0003N5-JH; Thu, 24 Jun 2021 15:56:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146914.270524; Thu, 24 Jun 2021 15:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRit-0003Mw-Fr; Thu, 24 Jun 2021 15:56:59 +0000
Received: by outflank-mailman (input) for mailman id 146914;
 Thu, 24 Jun 2021 15:56:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRis-0002ED-3q
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:56:58 +0000
Received: from mail-pj1-x102d.google.com (unknown [2607:f8b0:4864:20::102d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 630627be-6f8c-45eb-bdd4-27bef3c2bce7;
 Thu, 24 Jun 2021 15:56:42 +0000 (UTC)
Received: by mail-pj1-x102d.google.com with SMTP id
 x21-20020a17090aa395b029016e25313bfcso3770258pjp.2
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id t13sm3434704pfq.173.2021.06.24.08.56.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 630627be-6f8c-45eb-bdd4-27bef3c2bce7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=k2v4ULCUV3P/ZykooMt4RnFrTyFS8LVxGS67nnIjZ/E=;
        b=QrT3tJse9UwGIKHlr8MsnQ/5shpb9vzqfj7YMQ2fYijn7FCRoIijuokKPkX0W4emjg
         zIW9mO8KXkFug0OUuL1YqRloVsCf7/HNDUAAE7RH/8xRS3Mt1GINoLA/1NQjeCh/6ZZs
         IZdWLInrLmUEvcR0rzL0waTaUCb67EWeMYbx0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=k2v4ULCUV3P/ZykooMt4RnFrTyFS8LVxGS67nnIjZ/E=;
        b=hmayg/KplL7q+j00gMpJ4RezvQR59D6niSNyDdrNGQcLWsbR6HLQI8/erFSbOIXs4q
         ePDRDKdinnsw8r/mI3zfQPteq40iTgGC4k7zaiynVQduzRDuhr8E4NepYND9u5VYWF7B
         +Ig3W3Mw99MyWvCBXmlbAdE03z89cKMSE7n/GzwpLYGeDKUcYi16WDZCQufNi13zZc5L
         V9tKvD40362/Wc9/TD3KW5HA3MGGfbYdecWOB8PLVt840XRw2dHIwC5Db/AhJZDykIoi
         rG4wrLr7KzRxldE4IXNXcofDU8urfi+cwbH0Ce7bF6cr+bIktZKua+jmMIZUjrTj1QyK
         mfmg==
X-Gm-Message-State: AOAM533vOL0y8fIoYVhKaCPRFKeEFh/c3XJJnJIelBNr6CovxlCG7xFF
	Ekg04oM8YeCFjCo2xd+MDE+lvA==
X-Google-Smtp-Source: ABdhPJzbejiNIcTS+DoAyoRSN0idnXT3+mB1g+t9ZbfEjTYMX6+0aJ+6gkXmZUbbSTOePShgS47cWw==
X-Received: by 2002:a17:90b:1bc6:: with SMTP id oa6mr15922344pjb.151.1624550201945;
        Thu, 24 Jun 2021 08:56:41 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 07/12] swiotlb: Move alloc_size to swiotlb_find_slots
Date: Thu, 24 Jun 2021 23:55:21 +0800
Message-Id: <20210624155526.2775863-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size into it, for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d294bbf274c..b41d16e92cf6 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -432,8 +432,8 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -444,6 +444,7 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
 	unsigned int nslots = nr_slots(alloc_size), stride;
 	unsigned int index, wrap, count = 0, i;
+	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned long flags;
 
 	BUG_ON(!nslots);
@@ -488,8 +489,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - (offset + ((i - index) << IO_TLB_SHIFT));
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -530,7 +534,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -544,11 +548,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:57:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146916.270535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiy-0003iy-SH; Thu, 24 Jun 2021 15:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146916.270535; Thu, 24 Jun 2021 15:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRiy-0003in-OT; Thu, 24 Jun 2021 15:57:04 +0000
Received: by outflank-mailman (input) for mailman id 146916;
 Thu, 24 Jun 2021 15:57:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CoeP=LS=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1lwRix-0002ED-47
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:03 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4bc28911-aeb9-4dec-9ba6-da66dc286f40;
 Thu, 24 Jun 2021 15:56:43 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15OFlHY2002748; Thu, 24 Jun 2021 15:55:51 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 39c2wnkce0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 15:55:50 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15OFtnlS067641;
 Thu, 24 Jun 2021 15:55:49 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2174.outbound.protection.outlook.com [104.47.55.174])
 by aserp3030.oracle.com with ESMTP id 3996mgr9nd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 15:55:49 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BY5PR10MB4369.namprd10.prod.outlook.com (2603:10b6:a03:204::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Thu, 24 Jun
 2021 15:55:34 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4264.020; Thu, 24 Jun 2021
 15:55:34 +0000
Received: from char.us.oracle.com (138.3.200.4) by
 SA9PR13CA0037.namprd13.prod.outlook.com (2603:10b6:806:22::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Thu, 24 Jun 2021 15:55:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4bc28911-aeb9-4dec-9ba6-da66dc286f40
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=RNYw98/G7tITOD2JUar0dJNWfAGLFBejbGFGi7BFxgQ=;
 b=aqQbQjyqegqFFnwtAq+5yiLyt5h8lG/pDmyIHA/MMoi5T34sx1AvnFhUht5va4XsSA6d
 nutlUpL4XKHUE9M/hskFiiMJE17MzAoePlEvjGQZfG+SaeA+m5Qi14lxV6OIl8YVdrtQ
 sEH7iw9Od05GdCfppgzN3oiUpYVrXycnaef5LvZ8QCv6IenuMHu2ohLQx7jliSJkMmWr
 KbdtLf6sWZSJUZBlyZQ7yWZ+VPmy1CM93o2HXrkG1iMWmLzTH/8i7tHHt/ojfaiO+Auy
 KoTMM7Y2qSpmUhJbLydHEANcRCnR8xyuEmvMPGzyLdGoG7b8rrWUwEpSWmxdnlHd8HIW Qw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LM8WdCAKxMuFmxRPdPWYfm6JwwLhm1EY+o1dOG6Omv7D9qMZiMQaefpWR/bF4JgkKwjVnExBGL/J+fF4S95dogPEHJU1EpiQjI2PXoOMSp7OQRckAKQDeQidnCG3UAm/5I7dwqxKMSf9qS5LIDmZx0J18vw7FoSYHNc0bkoWOFIry4SpJKI4dUtlMv/QdH/wp+LlSqia3OOoPIqoHC+onGIhoWWo+QsIHEvYujOX1AamdzODyJe1mWWlNrlDkenIFvJMY7Fsy4y9RJR/PGGzbIWJhyv+UPAPsxnYhjGpMOqqpvgl8Xg3lrC66nUP/ru3vdlVSUh7T/xj9+U2fajsWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RNYw98/G7tITOD2JUar0dJNWfAGLFBejbGFGi7BFxgQ=;
 b=WGHZouTD0dBnGgiqwn40+bzGuLU6p1CWMn6EExAKVsxFXkLy8h0mVXFktwuN77Sc0aQ9plxHvEOyVT1kwjoQAui3vykvm7QFkfJzMWXQwqLhK1xevJEdYKtVjmNkvQ4OqawI708Kh5cS06l7TuvyBqhWyGQn4m81IzftzVurnsj+jQ/YaeF1+8kQfXStOvUudGWpBdqbJqSyHc98vtbbRdrMZLQvdP0KeFJVP0RgzdsbdtM0ZVz+1Dn+CjkqbJpUk1ZhbvyuuCYX6lYDY6rgBfP0FDaKBS46FKyUj/IlLK7L/ppaqHti1FvQjI5/7e/jf6hWYSZR0Sv9CbtPeMBGzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RNYw98/G7tITOD2JUar0dJNWfAGLFBejbGFGi7BFxgQ=;
 b=hXwyChdVOMENoPgUAxpZq/b2l5rUNKZx1FBCKR8Z7ZASLfBfUH2NFv79FCNZwcAhHcLO+Jto3lpHZwwV7PVvHmsTjQ1YZnM4zjBbn4XlOUhtVq225tlgBvs1GJkiyTWxcBbf2nmtz4leINU+ePuZojZhj/BIT/vsf2rk0nDsvwM=
Authentication-Results: quicinc.com; dkim=none (message not signed)
 header.d=none;quicinc.com; dmarc=none action=none header.from=oracle.com;
Date: Thu, 24 Jun 2021 11:55:19 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Qian Cai <quic_qiancai@quicinc.com>
Cc: Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>,
        Claire Chang <tientzu@chromium.org>, Christoph Hellwig <hch@lst.de>,
        Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Frank Rowand <frowand.list@gmail.com>,
        boris.ostrovsky@oracle.com, jgross@suse.com,
        Marek Szyprowski <m.szyprowski@samsung.com>,
        heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
        peterz@infradead.org, benh@kernel.crashing.org,
        joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
        chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
        mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
        linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
        Thierry Reding <treding@nvidia.com>, intel-gfx@lists.freedesktop.org,
        matthew.auld@intel.com, linux-devicetree <devicetree@vger.kernel.org>,
        Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
        maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
        jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>,
        rodrigo.vivi@intel.com, Bjorn Helgaas <bhelgaas@google.com>,
        Dan Williams <dan.j.williams@intel.com>,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Greg KH <gregkh@linuxfoundation.org>,
        Randy Dunlap <rdunlap@infradead.org>,
        lkml <linux-kernel@vger.kernel.org>,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        Jim Quinlan <james.quinlan@broadcom.com>,
        Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <YNSq56zyJ7EYdTcI@char.us.oracle.com>
References: <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck>
 <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
 <20210624111855.GA1382@willie-the-truck>
 <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
 <20210624114829.GB1382@willie-the-truck>
 <43ec9dd6-12c0-98ec-8d5d-b2904292721e@quicinc.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <43ec9dd6-12c0-98ec-8d5d-b2904292721e@quicinc.com>
X-Originating-IP: [138.3.200.4]
X-ClientProxiedBy: SA9PR13CA0037.namprd13.prod.outlook.com
 (2603:10b6:806:22::12) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 47a9ea18-e738-4e61-fbf3-08d937288021
X-MS-TrafficTypeDiagnostic: BY5PR10MB4369:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB4369D2979C2AD8BE7FE6D7AE89079@BY5PR10MB4369.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	KIKtg1nGNwOGX/aAaFn8EHHEnbyV5IkwHkkuYXUp6k76Is7d9bdSAuPeFMKzDbynBrUFtPXcBBcOAg+K30PVogyrqTv5fhy0R7G0/sHAWNrHQ/lFQSpfaTX2xK/2YBZbLAeiShOfclW5vOI4kJhYLT6VVyn57NaJ/BQHD81zFJguqkqtDOL6wADv5yRVX4vRbWkeC+VsomJxA7zCIqwVfiBqMexwhX8rjODEohyYNNpaTgs/mvFB+OG6+/XO6U9E7RxrfFlIAldxqaOSoTJT1ABJXdFF4Ahj5S7Aq+G4hLuq5h0vbDwW9Yq+pfSvpj6USemdxbU/LzUvq1KxgcbozaI7G4V/htvgn3gy0UJYq36HKpIGPf/ZbUbvO6G5h3Z6m5BjlrGRgbAngGBhatcwiolCofQS8X2hSQ3YPKjoEFEUNd7X1y4O0Dvq6LD5yaqCi/87sWcHBkU5qYqCKDQmhSBq5hzAYGBpyQr68YdpgHd8nBpiFbtkOxx63l7qofF6mNhuHzESTgQlMviuRRVaUhs00cqwog7GmiNu3OPc1W6O0p2ZpF64BqRDE6BwHUNCqEoh2PteD8lwLoSuXoBVUFHqP33id9l2L03q6AwW5V/0TzLUopi/IMjHLQDl0WV2x8AsyEo9WIOnKcp3Dt0Okg==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2999.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(396003)(376002)(136003)(39860400002)(366004)(16526019)(186003)(7696005)(5660300002)(26005)(53546011)(52116002)(8676002)(8936002)(54906003)(66946007)(66556008)(7406005)(7366002)(7416002)(38100700002)(38350700002)(316002)(86362001)(956004)(66476007)(4326008)(55016002)(6666004)(6916009)(2906002)(478600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?pzrYT5KFOELAqgkJjyJORIMiyHCjPKag7JOn1S020u7H2SZZdrqAEa05ox3Z?=
 =?us-ascii?Q?RzQ3wPVi9fuiOmzNXyjiYkYlwOe94CXVj8uiR3dN4Ety7pwLzN1QeAFJs3wB?=
 =?us-ascii?Q?783ongZwdHB5ERPGiDNHj0wl2zHiawi9NFVvKzZwgkv/SihvE4IMf3gVjrgD?=
 =?us-ascii?Q?TxYRoonszISPvvgXK3qgtrj+5ahQQYNKtIe72NI9p4E5/NlqCBu7OozxQLDu?=
 =?us-ascii?Q?uZZk0rwDw2rPzX/dYCWPD99anJ4cFuqd+nGeHHLzq8D63NHC1jWXez2Dh+D9?=
 =?us-ascii?Q?rL8HcO6cXqWxz8ssh7gT65g4yXWQciy33vVmx1n58LvC9jWBfhPGjE9M8YkO?=
 =?us-ascii?Q?3dbFaGGiA21ivSudT70ymRcdVS3PUDoRHh+FnYUQ+EoPEf+XH42UKSkYbbpl?=
 =?us-ascii?Q?mdaS9SZefg95MWLd50QE8T69wh0w6+S27gZg90UC0db498aNFjmLHQI1YmfJ?=
 =?us-ascii?Q?8NBZja5YLY5b23id3MFhT7Dh9uOCqG76ZOl+LNYinlf0mO+W/OwjlcxKdOLQ?=
 =?us-ascii?Q?kiwdRiJmHjcyk8u+mkUg9QOh7WDtRTdkxSUWtigB5aU0vMI//qMLg/XC5Crf?=
 =?us-ascii?Q?eEF44YyTCGi6osipfB3SlrqSD393vMd2sEUiB13cFffjizlvn6K1hdp+dIzs?=
 =?us-ascii?Q?ohAX+IbXY2ZTxRWt1MskDdWn3UQoSy5tUR9aohrmIxxrOguZ1ZZV/cp6ECPH?=
 =?us-ascii?Q?korY4eI4zPBlMHFbl0e5oI/YtJlx2/mdePnJ+L/CtryZJUgLfkzC8Gu2lVzw?=
 =?us-ascii?Q?M3Fwu3k1xaKiYjQnPyRzqgghY/5cm4+ZxqvidTXw2/YNZsbru1AmxPcA9EOv?=
 =?us-ascii?Q?BgE4kpquarvJf3NR/LKoRsN4DXCrH+b4Zq0MXD8erPLztv/JAEc4JnmoZeYN?=
 =?us-ascii?Q?3jwrGoaPXywipomTboj98NQ2ADpEjjuvDth7c4Y236smR3ys3txbKPPCUaWJ?=
 =?us-ascii?Q?Ic48SPkBZavF9Bak/cYl8GGtHSEgJgGR6x4UlhIYUN2lwJAz49pK8k6HNugP?=
 =?us-ascii?Q?fFLulRcf98upNYuLOL/gQQQbZaQq/I/XjVend8aJyEni/96Gai8UXZh5RPr5?=
 =?us-ascii?Q?ZeQIN1QQ8XfgW84/xAECXatWu9qP82oU1HMwcwvxvCqeS0bUJZD+oa/uE+A4?=
 =?us-ascii?Q?oY+y+wIf2Mc5Cz8CegYFw8FTdFmgoxS7zN2rq6TBmJ+iRsRnLULcxYuwXNYc?=
 =?us-ascii?Q?YNl2IpC10ufiGeYFq6ceCbf8VjbvQuKdatPrRqLL/kCed52R9oWkaVtM9MJ9?=
 =?us-ascii?Q?UFkVvUCt/euMMP3/L1wXvszZ/aaubH7kTE2uZm9ka3qMomuhTQuzqHVCyAMo?=
 =?us-ascii?Q?qzlWa2SSvHCCB7vu9QO/IJP7?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 47a9ea18-e738-4e61-fbf3-08d937288021
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2021 15:55:34.0664
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1Z26jQfjV7+erWEAS2AX9aQ5SeIm8Yv6bME/x0sm9C+HkxWYLELC+OsTOztxxJP0mmnZCMziJj6LrscyRc1Kzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4369
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10025 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 malwarescore=0
 mlxscore=0 mlxlogscore=999 adultscore=0 phishscore=0 spamscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2106240088
X-Proofpoint-GUID: p8KxGgb5y0yeXuOQo-n6HxcOzf5mDbyT
X-Proofpoint-ORIG-GUID: p8KxGgb5y0yeXuOQo-n6HxcOzf5mDbyT

On Thu, Jun 24, 2021 at 10:10:51AM -0400, Qian Cai wrote:
> 
> 
> On 6/24/2021 7:48 AM, Will Deacon wrote:
> > Ok, diff below which attempts to tackle the offset issue I mentioned as
> > well. Qian Cai -- please can you try with these changes?
> 
> This works fine.

Cool. Let me squash this patch into #6 and rebase the rest of them.

Claire, could you check the devel/for-linus-5.14 branch by, say, the end of
today to double-check that I didn't mess anything up, please?

Will,

Thank you for generating the fix! I am going to run it on x86 and Xen
to make sure all is good (granted, the last time I ran devel/for-linus-5.14
on that setup I didn't see any errors, so I need to double-check that
I didn't do something silly like run the wrong kernel).


> 
> > 
> > Will
> > 
> > --->8
> > 
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 175b6c113ed8..39284ff2a6cd 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -116,7 +116,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> >  
> >  static inline bool is_swiotlb_force_bounce(struct device *dev)
> >  {
> > -       return dev->dma_io_tlb_mem->force_bounce;
> > +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> > +
> > +       return mem && mem->force_bounce;
> >  }
> >  
> >  void __init swiotlb_exit(void);
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 44be8258e27b..0ffbaae9fba2 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -449,6 +449,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
> >                 dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> >         unsigned int nslots = nr_slots(alloc_size), stride;
> >         unsigned int index, wrap, count = 0, i;
> > +       unsigned int offset = swiotlb_align_offset(dev, orig_addr);
> >         unsigned long flags;
> >  
> >         BUG_ON(!nslots);
> > @@ -497,7 +498,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
> >         for (i = index; i < index + nslots; i++) {
> >                 mem->slots[i].list = 0;
> >                 mem->slots[i].alloc_size =
> > -                       alloc_size - ((i - index) << IO_TLB_SHIFT);
> > +                       alloc_size - (offset + ((i - index) << IO_TLB_SHIFT));
> >         }
> >         for (i = index - 1;
> >              io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
> > 


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 15:57:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 15:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146920.270546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRjD-0004Q7-AB; Thu, 24 Jun 2021 15:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146920.270546; Thu, 24 Jun 2021 15:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRjD-0004Ps-6N; Thu, 24 Jun 2021 15:57:19 +0000
Received: by outflank-mailman (input) for mailman id 146920;
 Thu, 24 Jun 2021 15:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRjC-0002ED-4f
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:18 +0000
Received: from mail-pg1-x52f.google.com (unknown [2607:f8b0:4864:20::52f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6dda1818-df10-435e-aa73-6d1a6321736f;
 Thu, 24 Jun 2021 15:56:51 +0000 (UTC)
Received: by mail-pg1-x52f.google.com with SMTP id t13so5053950pgu.11
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:56:51 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id s1sm2804633pgg.49.2021.06.24.08.56.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:56:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dda1818-df10-435e-aa73-6d1a6321736f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=DgMLkxnMrm1ElKh2pKbH6DNaZYR5ugLD1JouEAo1Krw=;
        b=CBMnwj3bqWSxJk9QuIRXj7AXS1rDlhnCorGo2xL5ODh+Y1ZQXuqRVjdQ1sf+NKMF68
         LGTA1Lrr3OIkG7vhENTIDNd/H+9BmAfoCOyqiHCsY0TRSuVzsnR4mZ3/RZi0ObYA87bL
         L7rL4dqepWMHniYpyZxN1LD1FH654rL7Zq2lk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=DgMLkxnMrm1ElKh2pKbH6DNaZYR5ugLD1JouEAo1Krw=;
        b=dxZ0CyoBqtwO+pSoTaw6Pb/Dg4ff49rKVDHk/VOnr3xPEtb1dMlpaVxoQc97m83BG/
         6WKvPRvYLBRImA3OKiJdio5Ei/sRoyb9livV6rpFbrz76soQzWzOYE9PssfBefWp6Mq5
         QdB1RTTBX/omFaTbf6TgyFRoLMbc/JGvkBIfyZpfurdW0ff/JzGPK9Z/nCTgy28NSvDM
         PeAIrZojotuIzUi/KXOazBQbi0nAiXyDxSiBgIqaAvgrPCjl4aZLD9mGxubAJbQ7yxCq
         PkErRxa9KWgjl4I8sKjL3mjz46+gJXC3RH3hEAlapGlCS1QSVPsba93tmOS5cq1lWcPQ
         Z2dw==
X-Gm-Message-State: AOAM530QZRcQIILX5aYPGuwrgMehgWnd0Fx4RUvjmYYRTisn3B1ZLz+F
	mZgkvGUtb8/74KO1URd7R+L2oA==
X-Google-Smtp-Source: ABdhPJz0K8c/e85685sEfhv/cLt8Y/+djdObhlSxD8seBO46vO4znhwmAjNx3on/y8xXncFvFChqNA==
X-Received: by 2002:a62:d41e:0:b029:305:b3ff:4056 with SMTP id a30-20020a62d41e0000b0290305b3ff4056mr5643354pfh.78.1624550211262;
        Thu, 24 Jun 2021 08:56:51 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 08/12] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Thu, 24 Jun 2021 23:55:22 +0800
Message-Id: <20210624155526.2775863-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b41d16e92cf6..93752e752e76 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -557,27 +557,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -612,6 +600,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 16:02:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 16:02:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146936.270557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnj-0006vT-05; Thu, 24 Jun 2021 16:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146936.270557; Thu, 24 Jun 2021 16:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRni-0006vM-RI; Thu, 24 Jun 2021 16:01:58 +0000
Received: by outflank-mailman (input) for mailman id 146936;
 Thu, 24 Jun 2021 16:01:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRjl-0002ED-66
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:53 +0000
Received: from mail-pl1-x62c.google.com (unknown [2607:f8b0:4864:20::62c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2361213-e94d-44cc-8b8a-3f6e1033fbb4;
 Thu, 24 Jun 2021 15:57:28 +0000 (UTC)
Received: by mail-pl1-x62c.google.com with SMTP id v12so3171942plo.10
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:57:28 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id n6sm2862924pgt.7.2021.06.24.08.57.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:57:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2361213-e94d-44cc-8b8a-3f6e1033fbb4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=kEshnj7M08k1T5XsbOYdzjZbq7nVnmjng6eJnfjl4lHMzaAZWk6m94BUgoAkO9vE9h
         LGUHt/CR+GHi3cdJm3u8om1RbudjT01C0RCIVYBXBb3iKIFLf6Hz/aFd03GLLTot9C9q
         is4g5Rc9w98QwWpmpTahpJ12S/0U2a21x3Eus=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xfRvKXXPDGozL9xpkc2RXgCWY6OxEYfQN41+lxoFnvY=;
        b=U1Ls8tjq4mbnKuBMwzbrUV7xNUc+ejem6YHpK5KyHYMr3orK9z8V3mcp8yI2b0xPKm
         1IGz7a5zHGj6LEhga2lFsfIDa0MzBr2QS9+A/Rg8Pyu1u9kMqAFw2Uf9w3RJi5n9sfP+
         /errx3bg0IUhgbdrUhWSHIma5PsXC6s92v1u51g8gMK2NlA3nIv4VyN8eOVbZgorPAv2
         dzlxjXzgrwBLIzcRUEumoAftTy8Xax/g8XhKvYmnhAVV9bkdtvUbzb74reN/yDFSOh/c
         mD3XOCA7EdnDMtdA0OtoC6XhoYqKplNdbZ+h6f3ru1gzBh5lXlt954L++mXz/osmYNzl
         8odw==
X-Gm-Message-State: AOAM530b/VUz+uugWYjzT0YofgLGZt5F4xkTH8Aw8Xys6w8paCU0WBPZ
	1Kab9tNXL1EbKxz6TThV3RYXlA==
X-Google-Smtp-Source: ABdhPJzInUpAy17snFNN0CGcBB/ghyLU73coTfwG0l77wDEHQQH9RWDi8n5UN9ZkYoU3wlPKMD5xpQ==
X-Received: by 2002:a17:90b:2282:: with SMTP id kx2mr5939882pjb.60.1624550248050;
        Thu, 24 Jun 2021 08:57:28 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 12/12] of: Add plumbing for restricted DMA pool
Date: Thu, 24 Jun 2021 23:55:26 +0800
Message-Id: <20210624155526.2775863-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when the restricted-dma-pool property is present.
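For context, the reserved-memory binding this patch consumes declares the pool
under /reserved-memory with compatible = "restricted-dma-pool", and the
consuming device points at it via a memory-region phandle. A minimal sketch
(node names, unit addresses, sizes, and the "vendor,example-device"
compatible string are all illustrative placeholders, not taken from this
series):

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Illustrative pool; base address and size are placeholders. */
		restricted_dma_mem: restricted-dma@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0x50000000 0x0 0x400000>;
		};
	};

	/* A non-IOMMU device opting in to the pool. */
	example_device: example-device@10000000 {
		compatible = "vendor,example-device";	/* hypothetical */
		reg = <0x0 0x10000000 0x0 0x1000>;
		memory-region = <&restricted_dma_mem>;
	};
};
```

With a fragment like this, of_dma_set_restricted_buffer() below finds the
memory-region whose node is compatible with "restricted-dma-pool" and
initializes the device's bounce-buffer pool from it.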

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..cdf700fba5c4 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1022,6 +1023,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 6cb86de404f1..e68316836a7a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..25cebbed5f02 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 16:02:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 16:02:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146939.270562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnj-0006z5-85; Thu, 24 Jun 2021 16:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146939.270562; Thu, 24 Jun 2021 16:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnj-0006yq-3t; Thu, 24 Jun 2021 16:01:59 +0000
Received: by outflank-mailman (input) for mailman id 146939;
 Thu, 24 Jun 2021 16:01:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRjg-0002ED-63
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:48 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 202d8dac-cb63-4540-813e-498207b93e04;
 Thu, 24 Jun 2021 15:57:19 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id
 z3-20020a17090a3983b029016bc232e40bso3751583pjb.4
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:57:19 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id s4sm2950901pjn.31.2021.06.24.08.57.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:57:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 202d8dac-cb63-4540-813e-498207b93e04
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=PjlqqBPlZ7HlVzWQ/NG7jUEmdRoSGdDyyhsicywtHJWmlQpajkDrqtejX1sn/5F2sS
         SwwO3EzEcXtbM4jLQSoe8mUbP22ohS6knOWK2YU5Q+OvqR8ORZOMLvrptba985Ay6g72
         468Ys8fEwAI9IAi2Nx5OeG0lVcTQ7r4EAMUuk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YPdZA059qJtBhHo92L+QtxB+I/HHPOi+M+A9tEqXqKY=;
        b=R7wc6MgtOu2zi9CbtnXIUU1rxSOs2rcYKFkLA+wm4lBI3W5mlFHyXZgYSQU3f9C9FD
         4EtXErdRL+ixPBltXfk0ftV20tok8xfENNWQv5fH5liIhqaDrqM0ZVsXP+hSll4B5+lr
         kGwwG0b23qzmBu2j2WhaiXCdEGqOhXRedv54GqCgvJDiplYKTQp7PfKPY6IlVeDq5UBN
         5UjMy/uiCB9TcCpsl79M2PkW0y931RerioovIVMaXeWxtdNrCFrxuM0VVSanp6oAHr9g
         1EAsPOUm9ij33dgLst8u/mu7rDUUOSYnhe+XOiJzc/yIf0LtvoOfysm2c9FHxPgjWjo1
         40fw==
X-Gm-Message-State: AOAM530zE3hpnT9barK27/Ckfiz87/NlfFpX0CMjdLLE2p4IEsAAhm+Y
	8JZCMVlCaO312CWeZuCmNFzFHA==
X-Google-Smtp-Source: ABdhPJxRwzuvR6w9vPa8M+gkJy/ziAGvgsthCKYRNma8WFuVe3INiwBpDz8jqeLqK9XJyVskEplkvw==
X-Received: by 2002:a17:90b:1d06:: with SMTP id on6mr5855940pjb.149.1624550238780;
        Thu, 24 Jun 2021 08:57:18 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 11/12] dt-bindings: of: Add restricted DMA pool
Date: Thu, 24 Jun 2021 23:55:25 +0800
Message-Id: <20210624155526.2775863-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool child of the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..39b5f4c5a511 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 16:02:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 16:02:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146946.270578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnu-0007Zy-Gh; Thu, 24 Jun 2021 16:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146946.270578; Thu, 24 Jun 2021 16:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnu-0007Zp-DX; Thu, 24 Jun 2021 16:02:10 +0000
Received: by outflank-mailman (input) for mailman id 146946;
 Thu, 24 Jun 2021 16:02:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRjM-0002ED-54
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:28 +0000
Received: from mail-pl1-x633.google.com (unknown [2607:f8b0:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b7f386b-385b-4d6b-ba3a-e6a7ce97517d;
 Thu, 24 Jun 2021 15:57:01 +0000 (UTC)
Received: by mail-pl1-x633.google.com with SMTP id y21so3183043plb.4
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:57:01 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id p14sm3021513pgb.2.2021.06.24.08.56.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:57:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b7f386b-385b-4d6b-ba3a-e6a7ce97517d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3nr5duANOct6/oGOQLy2DP5IEE8G3PvgXwIs/U9R4Gw=;
        b=a8cmMzGcpXmztmz+lmxPA1Q6xyV16M28kxjqgCBbf1Ks7zV690v77sgOhYEhzTEZs5
         +Kq7hKLiNtZRYure1Gm29HwsXL5yWOVlL7+xFg6ipueZCOjHoaeiULU+xMBCxf2GAzqT
         bKoJgbcAHXJKSgTGFtRzL1B3lMuyRPBHOjNVs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3nr5duANOct6/oGOQLy2DP5IEE8G3PvgXwIs/U9R4Gw=;
        b=XHQmT/9ISvRDk5LaLVQaN83qKAxzA8qb+5/NAbcw7Cqs7iktJEqr/RlNt27xicnSNB
         n4jfnal6AhHE8D4V5a0lKreRanvsRTFygpjHao4x5jhHd6zaVX0PXrx0pNzXktnzMduc
         cc+nlWVBPP6T1GlLCRiMPfHD9QJeFgDCa9H6EkyZXVUvb3bvvVtpmxivIVU8775vQLXP
         ifE6aiuCPftxiHmWAygcbgMuk2d8tXfLeIZFCwGxc7CzBaLsJEH6iHpTWbfQ6YrOVqLx
         SCDt+4CzDWoFxlYMDvXxd6enYQNYMb9dvkj0ei8RQld4qt7j+kUewE2IdKkM3A8Td/Al
         x9GQ==
X-Gm-Message-State: AOAM5319gGN0jg6Q3Re4BeG8jp6UmIXYYcIS02GmOSdtXmjqM1bCpn2D
	ctT+kkdwkAysm+3YDXc/VHnsBg==
X-Google-Smtp-Source: ABdhPJxmEwVXJd6cAH00JVqNM1sPqMocZqAA0whpk6jl7w3uVsxxZQx+6Mf7e+Nh2MKMhnKmM6WWEg==
X-Received: by 2002:a17:90a:3d09:: with SMTP id h9mr15516036pjc.78.1624550220587;
        Thu, 24 Jun 2021 08:57:00 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Thu, 24 Jun 2021 23:55:23 +0800
Message-Id: <20210624155526.2775863-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to
support memory allocation from the restricted DMA pool.

The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool by shared-dma-pool and use
dma_alloc_from_dev_coherent instead for atomic coherent allocation.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 include/linux/swiotlb.h | 26 ++++++++++++++++++++++
 kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
 kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
 3 files changed, 99 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index da348671b0d5..3b9454d1e498 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
+ * @for_alloc:  %true if the pool is used for memory allocation
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -96,6 +97,7 @@ struct io_tlb_mem {
 	struct dentry *debugfs;
 	bool late_alloc;
 	bool force_bounce;
+	bool for_alloc;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -158,4 +160,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->for_alloc;
+}
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+static inline bool is_swiotlb_for_alloc(struct device *dev)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a92465b4eb12..2de33e5d302b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    is_swiotlb_for_alloc(dev)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			return NULL;
+		}
+		return page;
+	}
+
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
@@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_swiotlb_for_alloc(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_swiotlb_for_alloc(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
@@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 93752e752e76..6a7c6e30eb4b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -464,8 +464,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -704,3 +705,36 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = swiotlb_find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	swiotlb_release_slots(dev, tlb_addr);
+
+	return true;
+}
+
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 16:02:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 16:02:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146947.270583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnu-0007de-TX; Thu, 24 Jun 2021 16:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146947.270583; Thu, 24 Jun 2021 16:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRnu-0007cb-OP; Thu, 24 Jun 2021 16:02:10 +0000
Received: by outflank-mailman (input) for mailman id 146947;
 Thu, 24 Jun 2021 16:02:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRjW-0002ED-5f
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 15:57:38 +0000
Received: from mail-pj1-x102c.google.com (unknown [2607:f8b0:4864:20::102c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b9064ef-c70c-4fb8-9352-8c0a17f0ede4;
 Thu, 24 Jun 2021 15:57:10 +0000 (UTC)
Received: by mail-pj1-x102c.google.com with SMTP id
 bb10-20020a17090b008ab029016eef083425so6176206pjb.5
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 08:57:10 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:165a:99ec:42d5:d8b])
 by smtp.gmail.com with UTF8SMTPSA id x143sm3365707pfc.6.2021.06.24.08.57.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 08:57:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b9064ef-c70c-4fb8-9352-8c0a17f0ede4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=WbwCPwkWNsZyfDNcMOTWOtrRGSDB7ukqN5rFm3/fcqM=;
        b=ST6t0XkrMRO5f6HAim2SUwLoqEUtVKc7/G35fun8pnEgrYgmaoy7jqM/CzM5ROKScW
         CaGs/CYeApWd6boKyliYyHZkZ8m75hN3AsFBcdHAPE4h8DeZNWtiFC9jFlW6gHYxntTL
         /JewURtKdZPaJv0dBLRhrI68g5ieBdJUAcjT8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=WbwCPwkWNsZyfDNcMOTWOtrRGSDB7ukqN5rFm3/fcqM=;
        b=t9Swqbcv1zShPcHkU0ISi1lI0LohhQvyCxhqPIYBR2r/61RlgD43I4xvQNaf7L3evx
         uOEOxtGzbHzgdQznYhMLSShdQJSO24r2mCYdoLIfgeirHgb+hdU+pee1sRQiyUbaa4Vj
         OaE2Qv3iqopcVVo1OCLKOSVVm1sdtcmKEjITekTg3SJ2fNqDwktN3T2emJnJvukxgzPD
         jE5YbHEZTZAViU16KYd2yi7gJtsxNgyFq07QR3VyEaiXMfkP7QYf2k/Z1DdKOuaynD/p
         U2+IedveGVbr+vtMQt6oeuYqqkX0OsMLPs3+zBh98RSj/5taHjc2v4Xw86zj7ov1Z2a8
         fTIg==
X-Gm-Message-State: AOAM532txavLmeKVa9pLV40jO7fC6HUoTeeyqLOASIyuPYFCsCjarSPX
	ALBaUWRhJmw16u1SZF7D+qxhaA==
X-Google-Smtp-Source: ABdhPJwI7EWsoc2ppnuqnW/9yjxiTzSeTHN60zP/WQgGph69JaVmO4yY57xwR4157oqtL7lcnhpfUA==
X-Received: by 2002:a17:903:1043:b029:11e:7489:9bad with SMTP id f3-20020a1709031043b029011e74899badmr5076690plc.34.1624550229453;
        Thu, 24 Jun 2021 08:57:09 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: [PATCH v15 10/12] swiotlb: Add restricted DMA pool initialization
Date: Thu, 24 Jun 2021 23:55:24 +0800
Message-Id: <20210624155526.2775863-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
References: <20210624155526.2775863-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 3b9454d1e498..39284ff2a6cd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,7 +73,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6a7c6e30eb4b..3baf49c9b766 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -737,4 +744,73 @@ bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 	return true;
 }
 
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
+				     rmem->size >> PAGE_SHIFT);
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force_bounce = true;
+		mem->for_alloc = true;
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.288.g62a8d224e6-goog



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 16:06:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 16:06:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146963.270601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRsJ-00015b-Mm; Thu, 24 Jun 2021 16:06:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146963.270601; Thu, 24 Jun 2021 16:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwRsJ-00015U-JW; Thu, 24 Jun 2021 16:06:43 +0000
Received: by outflank-mailman (input) for mailman id 146963;
 Thu, 24 Jun 2021 16:06:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5fo=LS=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwRsH-00015I-1r
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 16:06:42 +0000
Received: from mail-pl1-x62d.google.com (unknown [2607:f8b0:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64e1dff4-a842-45fe-95e9-ab5128ff3e64;
 Thu, 24 Jun 2021 16:06:40 +0000 (UTC)
Received: by mail-pl1-x62d.google.com with SMTP id v12so3186921plo.10
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 09:06:40 -0700 (PDT)
Received: from mail-pf1-f175.google.com (mail-pf1-f175.google.com.
 [209.85.210.175])
 by smtp.gmail.com with ESMTPSA id q3sm3279035pfj.89.2021.06.24.09.06.38
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 09:06:38 -0700 (PDT)
Received: by mail-pf1-f175.google.com with SMTP id g21so3874747pfc.11
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 09:06:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64e1dff4-a842-45fe-95e9-ab5128ff3e64
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=OUC0E76X92PmzUvBM3M58z5qjgzPKxJvrlMYIlH3t/A=;
        b=V1DUMk4pWXUrJHEg/9Kd4ukoLxbJGeXfb8ZAhSFnF6RVbF8t2Z5J1ymG4niuoHNq4v
         lR13PRE87YP1jrN3LaPEaViS+9EA9qGG/k81iCPvzgmr34ySIt4SvvmFKyoWhdSPE98X
         KaxkIr3Pk6gTC5BkELMIeQJS2sWcVoDywrhkk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=OUC0E76X92PmzUvBM3M58z5qjgzPKxJvrlMYIlH3t/A=;
        b=sb4PGzdTPP9plLw5ouJHRfs5XEmwCAQJdJyILbeSmNVy+FzDvYei05P2YGvXiewz+J
         7gxnLYdx3CkadwrfUG2YP6t3RoGvUYiaBzhWXFJpq/G1jM/UY1AFnToXMNMwWa9LlGq/
         Vl7QiNmrhHEnGZphPX+cyXXE/g6HyqCUsCHv03OsLaHqT/e2tD9yjn8cbHSkNd/LhzdQ
         nKgY2sHgzDnRqXeElV4Iedovq7tRyIiqQgwgX48HyP2ppU0sjdbDpJuQgxmBsyLZLRRT
         BltybfSUpsvPOVhiLK5UMjmfo+NHlKPLF7btuGqvyobfnXkadcWRl+3XLI6j8MVFVZ0D
         Bw5Q==
X-Gm-Message-State: AOAM5310qHjdlz77/AdMOhDFuSx7KS6j30mlOsZwgmbWo3RAw+egwk17
	/WQEQBXwrBinlc0VDjHoNMgJY5cj9NaFhw==
X-Google-Smtp-Source: ABdhPJz3iN10gF0cioqR3Fy/orVLXZKic7Pb77x/JfPVqt5y/jL50Pj1KroF4RWqfFSV5FZ/+IVFbw==
X-Received: by 2002:a17:90b:a05:: with SMTP id gg5mr6197748pjb.181.1624550799179;
        Thu, 24 Jun 2021 09:06:39 -0700 (PDT)
X-Received: by 2002:a92:9509:: with SMTP id y9mr4122371ilh.18.1624550347935;
 Thu, 24 Jun 2021 08:59:07 -0700 (PDT)
MIME-Version: 1.0
References: <76c3343d-72e5-9df3-8924-5474ee698ef4@quicinc.com>
 <20210623183736.GA472@willie-the-truck> <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de> <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com> <20210624111855.GA1382@willie-the-truck>
 <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com> <20210624114829.GB1382@willie-the-truck>
 <43ec9dd6-12c0-98ec-8d5d-b2904292721e@quicinc.com> <YNSq56zyJ7EYdTcI@char.us.oracle.com>
In-Reply-To: <YNSq56zyJ7EYdTcI@char.us.oracle.com>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 24 Jun 2021 23:58:57 +0800
X-Gmail-Original-Message-ID: <CALiNf2_WCVodEZJz9PaCTgk_c8LpOAcD4=YZTLDMVyorJs3ESQ@mail.gmail.com>
Message-ID: <CALiNf2_WCVodEZJz9PaCTgk_c8LpOAcD4=YZTLDMVyorJs3ESQ@mail.gmail.com>
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Qian Cai <quic_qiancai@quicinc.com>, Will Deacon <will@kernel.org>, 
	Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>, Rob Herring <robh+dt@kernel.org>, 
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, Frank Rowand <frowand.list@gmail.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, heikki.krogerus@linux.intel.com, 
	thomas.hellstrom@linux.intel.com, peterz@infradead.org, 
	benh@kernel.crashing.org, joonas.lahtinen@linux.intel.com, 
	dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, 
	grant.likely@arm.com, paulus@samba.org, mingo@kernel.org, 
	Jianxiong Gao <jxgao@google.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com, linux-pci@vger.kernel.org, 
	xen-devel@lists.xenproject.org, Thierry Reding <treding@nvidia.com>, 
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com, 
	linux-devicetree <devicetree@vger.kernel.org>, Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, 
	maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org, 
	jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>, rodrigo.vivi@intel.com, 
	Bjorn Helgaas <bhelgaas@google.com>, Dan Williams <dan.j.williams@intel.com>, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Greg KH <gregkh@linuxfoundation.org>, 
	Randy Dunlap <rdunlap@infradead.org>, lkml <linux-kernel@vger.kernel.org>, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 24, 2021 at 11:56 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> On Thu, Jun 24, 2021 at 10:10:51AM -0400, Qian Cai wrote:
> >
> >
> > On 6/24/2021 7:48 AM, Will Deacon wrote:
> > > Ok, diff below which attempts to tackle the offset issue I mentioned as
> > > well. Qian Cai -- please can you try with these changes?
> >
> > This works fine.
>
> Cool. Let me squash this patch in #6 and rebase the rest of them.
>
> Claire, could you check the devel/for-linus-5.14 say by end of today to
> double check that I didn't mess anything up please?

I just submitted v15 here
(https://lore.kernel.org/patchwork/cover/1451322/) in case it's
helpful.
I'll double check of course. Thanks for the efforts!

>
> Will,
>
> Thank you for generating the fix! I am going to run it on x86 and Xen
> to make sure all is good (granted last time I ran devel/for-linus-5.14
> on that setup I didn't see any errors so I need to double check
> I didn't do something silly like run a wrong kernel).
>
>
> >
> > >
> > > Will
> > >
> > > --->8
> > >
> > > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > > index 175b6c113ed8..39284ff2a6cd 100644
> > > --- a/include/linux/swiotlb.h
> > > +++ b/include/linux/swiotlb.h
> > > @@ -116,7 +116,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> > >
> > >  static inline bool is_swiotlb_force_bounce(struct device *dev)
> > >  {
> > > -       return dev->dma_io_tlb_mem->force_bounce;
> > > +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> > > +
> > > +       return mem && mem->force_bounce;
> > >  }
> > >
> > >  void __init swiotlb_exit(void);
> > > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > > index 44be8258e27b..0ffbaae9fba2 100644
> > > --- a/kernel/dma/swiotlb.c
> > > +++ b/kernel/dma/swiotlb.c
> > > @@ -449,6 +449,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
> > >                 dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> > >         unsigned int nslots = nr_slots(alloc_size), stride;
> > >         unsigned int index, wrap, count = 0, i;
> > > +       unsigned int offset = swiotlb_align_offset(dev, orig_addr);
> > >         unsigned long flags;
> > >
> > >         BUG_ON(!nslots);
> > > @@ -497,7 +498,7 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
> > >         for (i = index; i < index + nslots; i++) {
> > >                 mem->slots[i].list = 0;
> > >                 mem->slots[i].alloc_size =
> > > -                       alloc_size - ((i - index) << IO_TLB_SHIFT);
> > > +                       alloc_size - (offset + ((i - index) << IO_TLB_SHIFT));
> > >         }
> > >         for (i = index - 1;
> > >              io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
> > >


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 17:20:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 17:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146970.270612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwT14-0007iV-6e; Thu, 24 Jun 2021 17:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146970.270612; Thu, 24 Jun 2021 17:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwT14-0007iO-3U; Thu, 24 Jun 2021 17:19:50 +0000
Received: by outflank-mailman (input) for mailman id 146970;
 Thu, 24 Jun 2021 17:19:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LLZQ=LS=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lwT11-0007iI-T3
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 17:19:48 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb578848-5d1a-4c1d-9d58-8b9e23efc5e0;
 Thu, 24 Jun 2021 17:19:46 +0000 (UTC)
Received: from [192.168.1.11] (209.152.155.107.marktwain.net
 [209.152.155.107]) by mx.zohomail.com
 with SMTPS id 1624555165661442.5443098503854;
 Thu, 24 Jun 2021 10:19:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb578848-5d1a-4c1d-9d58-8b9e23efc5e0
ARC-Seal: i=1; a=rsa-sha256; t=1624555168; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=en9Nl1hEPxBjl04S36MyuRPlcSswqNvJCqam4v0a/c6y/efDVpkdHg4CzU3I46egcJ+yCn11GjCKRgCClpmwJu+hiyLFx0ejwsPJuXDaw41fjX4LLSZF3CczS4U+LVROjxm8vwX98c9Q0UuhP4bEmcci4aB4wCdA0ek6YvtyogQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1624555168; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=yZyPrGS+9kVuE3ObNTJW3JxOpqrFNV+fuzYelEs9TOc=; 
	b=VA2iT+UH4GS+baJUGhVet8VHYBIne/BrMS8KIBO/ZVhX5dEQY6cozzHrK7REv2sUrxuCJy8B9Bq2meFOvFDm0Bxaq1o2tZh4rVnOd9EpjyZN8jz+7J39yPqbMwwsPGsGVT4Q5QHOb/BJshE1WERdQ1SCgC4WJplbtW0nCWXBDqo=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1624555168;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=yZyPrGS+9kVuE3ObNTJW3JxOpqrFNV+fuzYelEs9TOc=;
	b=c8fffKpe3mg8pHMWUGWs+6rtPpu6kP8xnArCiED+GUmxdjtZCumc9oUST9VyVpp5
	4ycbrZTrWiBRg+t547nN5iUvyh4aycwmphyKDdT9wz9y0nA4OJ3EqDpcqfNtrVOyZLp
	PC3QHJ8ZitPuHZoBfJ2HYM4iecdlgbz1T0hjEJio=
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
 <3df8648d-f706-9684-4e6d-9438dc9f0894@apertussolutions.com>
 <ca65acbc-c57c-6056-7abd-299ce5ccd643@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <9be51dc7-2534-64c9-30dd-06eddc5702ba@apertussolutions.com>
Date: Thu, 24 Jun 2021 13:18:00 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <ca65acbc-c57c-6056-7abd-299ce5ccd643@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External



On 6/21/21 2:53 AM, Jan Beulich wrote:
> On 18.06.2021 18:35, Daniel P. Smith wrote:
>> On 6/18/21 7:53 AM, Andrew Cooper wrote:
>>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>>> @@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
>>>>   	  If unsure, say Y.
>>>>   
>>>>   config XSM_SILO
>>>> -	def_bool y
>>>> +	def_bool n
>>>
>>> I'm not sure we want to alter the FLASK/SILO defaults.  SILO in
>>> particular is mandatory on ARM, and without it, you're in a security
>>> unsupported configuration.
>> The intent here is the default is the classic dom0 configuration. What
>> if I did,
>>
>> def bool n
>> def bool y if ARM
> 
> Besides it needing to be with the order of the two lines flipped, if
> Arm requires XSM_SILO, then I think it would better "select" it.


Ack, I realized that as I fixed it for the upcoming v2.

Correct me if I am wrong, but if you do a "select", you are forcing the 
user to always have SILO built in, i.e. the option can never be disabled. 
There may be users who would prefer to have only Flask enabled on ARM, 
and those users would not be able to turn SILO off.
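
A sketch of the two Kconfig shapes under discussion (symbol names follow
the patch, but the bodies are abbreviated and illustrative only):

```kconfig
# Default on Arm, but the user can still switch it off:
config XSM_SILO
	bool "SILO support"
	default y if ARM

# Alternative via "select": the arch symbol forces SILO on,
# and the configurator offers no way to disable it.
config ARM
	def_bool y
	select XSM_SILO
```

With "select", Kconfig sets the selected symbol unconditionally and
ignores its own prompt and dependencies, which is exactly the
cannot-be-disabled behaviour described above.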

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 17:55:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 17:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146975.270623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTZd-0003Bm-0M; Thu, 24 Jun 2021 17:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146975.270623; Thu, 24 Jun 2021 17:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTZc-0003Bf-SH; Thu, 24 Jun 2021 17:55:32 +0000
Received: by outflank-mailman (input) for mailman id 146975;
 Thu, 24 Jun 2021 17:55:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FtQU=LS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwTZb-0003BZ-Gc
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 17:55:31 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e28414a2-d889-4f64-b750-7b2543be8634;
 Thu, 24 Jun 2021 17:55:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e28414a2-d889-4f64-b750-7b2543be8634
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624557330;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=ogUaIKbs5coINxSaMaKMVcF0Kshm2XAUnp4DjMJwo9k=;
  b=ZSsDojara9Z6+k+97+wLvUZBaEfJm9cjuQSIBiFnug3ZCes8lWY1ocwZ
   omKsZo2VkysX8nxBEJ4Kvor0p5jZxJuy3a7ZXlpwf5pQx35N4wrTw5FE9
   +Sx45YQImJF7jq7piifZXl1+zJjYsbDA5hjahJM8zQPxJ1My5Q8GzmMs8
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: opHXM5QDPFO/hMNCKQdmxOSkEzlU5/iRC67EuZhTvvQbEYcaI8vFGogf8EUwdTNphjOBp7s/AV
 bza+zsxaX29pGCGgDvma2zgkA5c2O5MLM0meD0LWfRXhVY+CGv5rO/G2x4+2SJNFBGskc9kgXs
 tRYuoKTw0EyXm84WATjsI47p0XMk0i+7J4yYat3XJgyMesIcq8G7BMx3nBDBOFsa5R9NsbMUIX
 Eu6uk0RGBHbTKuRNmYkWyIEE6TjhCELuo3pvdPwQEz1qqfmhaeUP357tfu7BN7fjVfftfbOXB1
 e94=
X-SBRS: 5.1
X-MesageID: 46627713
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:FXStjaxktl6rqqpBPTbBKrPw1r1zdoMgy1knxilNoHxuH/BwWf
 rPoB17726RtN91YhsdcL+7V5VoLUmzyXcX2/h1AV7BZniEhILAFugLgbcKqweKJ8SUzJ8+6U
 4PSclD4N2bNykGsS75ijPIb+rJFrO8gd+VbeS19QYScelzAZsQiDuQkmygYzZLrA8tP+teKL
 OsovBpihCHYnotYsGyFhA+LpL+T42iruOeXfYebSRXkDWzsQ==
X-IronPort-AV: E=Sophos;i="5.83,296,1616472000"; 
   d="scan'208";a="46627713"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH] libxencall: Bump SONAME following new functionality
Date: Thu, 24 Jun 2021 18:55:21 +0100
Message-ID: <20210624175521.20843-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/call/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 4ed201b3b3..93d404b79e 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
-MINOR    = 2
+MINOR    = 3
 
 SRCS-y                 += core.c buffer.c
 SRCS-$(CONFIG_Linux)   += linux.c
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 17:56:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 17:56:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146979.270634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTaQ-0003k1-8d; Thu, 24 Jun 2021 17:56:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146979.270634; Thu, 24 Jun 2021 17:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTaQ-0003ju-5H; Thu, 24 Jun 2021 17:56:22 +0000
Received: by outflank-mailman (input) for mailman id 146979;
 Thu, 24 Jun 2021 17:56:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTaO-0003jg-H2; Thu, 24 Jun 2021 17:56:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTaO-0005wj-C3; Thu, 24 Jun 2021 17:56:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTaO-0007Mr-2l; Thu, 24 Jun 2021 17:56:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTaO-0002T1-2E; Thu, 24 Jun 2021 17:56:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z4pDAGMPEP1YwYIQ2ZPxmiZIHCJ0jGYNKwYGk3DMfGU=; b=wmujlikPmkMHjJvnWtH4v8Gm+k
	l7DUaeWSg6yp75NwGlvMdwuL/7Oyn7arOgalgH/lrA4rqMxC/e85FqF6O+jQ0I+SJpEwPJqASF95p
	5H/Z9WmxHxJtbnyFGqxZZDNj8vD817mOO3mOWasWHppzkr9SQ8NCsXnHdLAJ4/2Tj/tA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163019-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163019: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
X-Osstest-Versions-That:
    xen=76a0aa9c4d7a9fc6fee1158fd9df82ae9b8b605d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 17:56:20 +0000

flight 163019 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163019/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
baseline version:
 xen                  76a0aa9c4d7a9fc6fee1158fd9df82ae9b8b605d

Last test of basis   163015  2021-06-24 11:00:28 Z    0 days
Testing same since   163019  2021-06-24 15:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   76a0aa9c4d..e87d8f60fa  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 18:00:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 18:00:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146987.270647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTef-0005JN-0s; Thu, 24 Jun 2021 18:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146987.270647; Thu, 24 Jun 2021 18:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwTee-0005JG-Tv; Thu, 24 Jun 2021 18:00:44 +0000
Received: by outflank-mailman (input) for mailman id 146987;
 Thu, 24 Jun 2021 18:00:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTec-0005J6-Vw; Thu, 24 Jun 2021 18:00:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTec-00067i-QR; Thu, 24 Jun 2021 18:00:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTec-0007eH-Jp; Thu, 24 Jun 2021 18:00:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwTec-0004kC-JJ; Thu, 24 Jun 2021 18:00:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2uOxG0Z+y26Cgbkcq3tNraOMABNL1x/GlGYLk6em1WQ=; b=lLYlp6j6zGb84qf9mWIMPXulhN
	myllayzrJQmOoEL7g6QJVWgbaKU5/7f8fROoN30V1y95tnfcvu4Y/tBkANoXijZ7falpBtOUC+Hik
	D31dKfwKTsiFH51cC6x/Ti86x6fln/Mkhrv+md/jBsP7gtpnBXuhYtIOwibq3NMXbQgM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163016-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 163016: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=93b29b886e8665e368598c711279d45b7e5d066c
X-Osstest-Versions-That:
    xtf=3e800027016ea4eb19887bf626b46f45fc43fa5d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 18:00:42 +0000

flight 163016 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163016/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  93b29b886e8665e368598c711279d45b7e5d066c
baseline version:
 xtf                  3e800027016ea4eb19887bf626b46f45fc43fa5d

Last test of basis   162886  2021-06-17 23:10:13 Z    6 days
Testing same since   163016  2021-06-24 11:41:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   3e80002..93b29b8  93b29b886e8665e368598c711279d45b7e5d066c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 19:21:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 19:21:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146995.270662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwUuI-0004Cv-Pv; Thu, 24 Jun 2021 19:20:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146995.270662; Thu, 24 Jun 2021 19:20:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwUuI-0004Co-Mt; Thu, 24 Jun 2021 19:20:58 +0000
Received: by outflank-mailman (input) for mailman id 146995;
 Thu, 24 Jun 2021 19:20:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CoeP=LS=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1lwUuH-0004Ci-Iq
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 19:20:57 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed65184d-28b4-4fa2-aa33-bde2399cb6df;
 Thu, 24 Jun 2021 19:20:56 +0000 (UTC)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15OJBmri025165; Thu, 24 Jun 2021 19:20:10 GMT
Received: from oracle.com (userp3030.oracle.com [156.151.31.80])
 by mx0b-00069f02.pphosted.com with ESMTP id 39c634ufgn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 19:20:09 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15OJK8hE035087;
 Thu, 24 Jun 2021 19:20:08 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2107.outbound.protection.outlook.com [104.47.70.107])
 by userp3030.oracle.com with ESMTP id 3995q18276-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 19:20:08 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB3160.namprd10.prod.outlook.com (2603:10b6:a03:151::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Thu, 24 Jun
 2021 19:20:04 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4264.020; Thu, 24 Jun 2021
 19:20:04 +0000
Received: from char.us.oracle.com (138.3.200.25) by
 SN7P220CA0022.NAMP220.PROD.OUTLOOK.COM (2603:10b6:806:123::27) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Thu, 24 Jun 2021 19:19:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed65184d-28b4-4fa2-aa33-bde2399cb6df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=P8+6fQKt1GcJiNjuEjUVT8DwDaU9l6qyoErT2/2fhEo=;
 b=ka4uO+w+HngPAlAlcvgVQkkZRUykFcqf4Wu4LLsIczBlUd9+Zr5u8u3Ez0oc7X0geuor
 NBnMkH8lTH5cH7Nisz9W4sdd/zSB8dlSuy3OuvmdLF8Oc7s/wwN/Fv0dpLseQu3bdK7Z
 8wAYkg3ilAXxVIAui6DqWxa+spw0uny2hoar3/sW6hLatzWZNfkDtYHhjydp8A4Dz1yK
 GiXznXL3JrmlRiC7nrEgrJPCqHuLUkxkwkvq/iCKDaooi2WJW6N0x6uixsHWWfFPAAKc
 gswRROCllkfB1ApiO/vA4vIcprZIkyFi/x5gUAkuecntkqUc3lqadkY4GOxgsTJQ65SP VA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VpXMUxGbaQ9Xui1oyOBNhR0FlQrJsdrtDWMytIAHuioZMgW6zJlsv0RQ7Zyy0ipuGwV0EUoaBtpPPrGQUVhU1CLMwCc9Bm8NqLTCV7tGBbpMKTFW5SUdcmi3ileOSoHTnlrahWvP/PpcvPQ/WFhlq+5wfRP9HRB/tNFpZ4AODVejGA9Qxl8rH6/fmzV648vSQp0wun865YcsPwpsymd8TQBbwmJi20B1Pp7gt0UP1UXNTYXU7rhhoYl5uLLKeSQFQQBZV1J8tjezI7kk2zII34EuwGHr/4/yuq293og361XsZByELl0pIX7r0XIXUGcPXoH8mQvOFxB7+8kh/lDqug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P8+6fQKt1GcJiNjuEjUVT8DwDaU9l6qyoErT2/2fhEo=;
 b=Cshqzkx1kSkYTdieScYtwuU9XYvSgltLRPgvqkm3ZBoVSbGYTAYM5qdWlrwvkT/mspLR05MBNFoGz90ikfF/o/uoaF0/Eesq/Xm5Puf03B/hjAot7s6Lge/vvuFHHqTa2s4illm5NLGJ+NLAFv7nzjkfJt1GA5mL1y5yCNmODo8AKrfZMMgzjG2iCOB2Kt70LnEPvNLJHbxqic952W1o6t7HGBjryKCAJD6yiJ/KAftlNRXLHAYBbQwU6GEyt1I+QLhfOZkDdVHwKGSfrYx6g3uNXjLWyM6BREfD6BHHy6s7qtMPkL3Opw/zDHY1RHzRrLCo4BJRtF9YLXheO5/4Fw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P8+6fQKt1GcJiNjuEjUVT8DwDaU9l6qyoErT2/2fhEo=;
 b=Jx9PG0C124tyZswmT6sJXGulr9UexRz4ZOBu9TDsfCCLOqUNHQ+2JT25KlFEKD+cGDZa3h/55BGXNVLxWyk0I9RTPjP/mauOKH1Q6kDRxZw8D/8pJee/Gpp3ZW0YskPauNAsJcHC8NUtNeVPFg82SXUhFRLKJmuFn41INbG7puM=
Authentication-Results: chromium.org; dkim=none (message not signed)
 header.d=none;chromium.org; dmarc=none action=none header.from=oracle.com;
Date: Thu, 24 Jun 2021 15:19:48 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
        Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
        jgross@suse.com, Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
        bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
        intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
        jxgao@google.com, joonas.lahtinen@linux.intel.com,
        linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
        matthew.auld@intel.com, rodrigo.vivi@intel.com,
        thomas.hellstrom@linux.intel.com, thomas.lendacky@amd.com,
        quic_qiancai@quicinc.com
Subject: Re: [PATCH v15 00/12] Restricted DMA
Message-ID: <YNTa1C5uvz+qWryf@char.us.oracle.com>
References: <20210624155526.2775863-1-tientzu@chromium.org>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210624155526.2775863-1-tientzu@chromium.org>
X-Originating-IP: [138.3.200.25]
X-ClientProxiedBy: SN7P220CA0022.NAMP220.PROD.OUTLOOK.COM
 (2603:10b6:806:123::27) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e32a325a-d0c9-45f4-bf49-08d93745119f
X-MS-TrafficTypeDiagnostic: BYAPR10MB3160:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB31602652C7EC6BB6F81C1EC689079@BYAPR10MB3160.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	i0tzwxIw/4elx5hgGO6B8CjrGZlYNudIrNxhhnBv2AEVGAu3qOEccTjmeVw/yPlpMcjJ+bDbBhjn2+hfLQmHvardT7U8dwR+IEbO7gNAHn2Z40GzLkt2Djw00klfkLLAHCILKIC9zOgcorAPVDMjXkKbIlTliv3wNg24lL7EqRJMotYkuff7S6PZqVUXbXzc//pNHEPio2z2Eo1mRtTbf3fbGtVHMdIfhSgIJ296IsKDCmAgGn5GEtoon3Z+v46kv+INIkWfCgzMcbBsVx4lKMzLE8QmpVn9X7BSPKqfM3zFeze7TuQTiNVgq768IO4blRwz+2U/w9Cj/H228oFeO3kwMlesgS2ci9DK+bc80GeW8fBEaNLRbfu0pTbG4Nkp62ywr+Psrcn1FwwVSlEUzu0RwicBeIgVIFR8EyFHPo03bhK7p9MS+DYJkd51RB/nv1JVOxXc4pGAoLz9viAev2WgTXqjTOSwvxbiBt9uLFMJ91UdTyqUQS/j4OivQHGYSZ87uFWSAWq3GKoPdu0oUpDnelPvYoE1tSfPQXbt2hN3tRPNnZIIZjKLMsCL8OrMTf+g4sXDuRRlgKfSafBgs8YmmBmEHzgjpGhodv/ZU6sR0Q9ixbnl9+dmmVdG5UO2nMOaOBQZeDV32rZA8g4BGBS6ESeLgdNda4Z+LkxoauheVOYquI/kgI8/85XfvvZrZG9+Tzf95XXXFs8C2y4JE74r8pc42U7e/6Go8i7RRPUGbTxkN+nz7UlMN9ZoIsXrVSX+8cUweb8l+SAdXpkX8gOfrk2Md+ongBUv9oRB94xn/X4PMchHzLipHqlELFr4ywVw9ywqe4tu2UAb9+J7cQ==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2999.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(346002)(39860400002)(376002)(366004)(54906003)(8676002)(5660300002)(316002)(86362001)(6916009)(478600001)(186003)(7406005)(2906002)(55016002)(966005)(16526019)(7416002)(26005)(6666004)(7366002)(38100700002)(52116002)(66476007)(7696005)(956004)(83380400001)(66556008)(4326008)(38350700002)(66946007)(8936002)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?bvJ7q3CIhSDXU5AEz5Lo5jbUWTmD3cx8rMWsToDHYRSYX63S2y8cG2e1rYk2?=
 =?us-ascii?Q?eErzOJrygXEY8O1Y/n1U2p6iY1ReaLDVmCYe4vz5cWjRIHDU5CMCmVOl8Bcz?=
 =?us-ascii?Q?xMmC8BvlazQ72+db6pe6YdvGFp5Qopzgw6ji6oq8s6ENqjEe+LrA188CgOVY?=
 =?us-ascii?Q?FggWwijeLR8ekpCFnbsjOEaZRkjsxZD46BgCPJLfmSgk6z6e81cAZInUIF0p?=
 =?us-ascii?Q?iuRYgKmhuI+wJ8VpgKyshIsdIxZTz/mIbVkXTkjO0ho3OT3wCJJsAEEv7s+8?=
 =?us-ascii?Q?TIa6UwGP7uu6IyNz7WM1z/6K1mrBHONs7YO31I6J/Q+U2y/AnhENtMbc7313?=
 =?us-ascii?Q?gcycAGPLkyxzVtE6hG6Aypbh0cgAnalSfvg2GOlA7Ia3tCpSS8XGsK0a6mfP?=
 =?us-ascii?Q?9qlsq2UaTVrGTEZIsl40V2DBVUGkEkEU6jo42bXwraARhocJU5ki5J86gGw8?=
 =?us-ascii?Q?ZOK1ghIKDsWW6c5hOjKBkXIq5Q4g3IcgftRSrYV2iZXgJ+XquJ2HIYdIcE6v?=
 =?us-ascii?Q?B9sSLXP5DHB7dYUuPzI8EeMBKVBv4A+n1L9KFX/cX+4dfhTi5q+gjYoflut/?=
 =?us-ascii?Q?f/DliTbrcQ2lHj7hgQvkfaYX32T+5CYDB5DyAra4EVdfei5/Jc9T6k7kiVxB?=
 =?us-ascii?Q?sZctc0fLIREF6B1VGnF1YRj4eOeep8xT1L8OdL2o2rheBbEZDWXP2X3r5z/T?=
 =?us-ascii?Q?ifZUwFkcrFhjrzbcZfiqgSDJ9KHF63Yt9zPnKv8Z5QGaTiqWVDARBIoonOEZ?=
 =?us-ascii?Q?vJj+fZRore8OtRvlbRRxmiS7DocoHqdwjKa5nTteojR48eb5qKwEq/O0Ur6F?=
 =?us-ascii?Q?KK/aC62a7rGONpgH5LjXC/o5izoP+hEPlTBqqSOnmJb7G4rQ+Oj8Akj6ijiK?=
 =?us-ascii?Q?ROsj4gvLWimdelviji7rwTQ+JNbMKWJ2yJ1YfhZu5pmTUINuaYk0miM7t1Zd?=
 =?us-ascii?Q?Og0aJ5QTriX3Lc6I3dmPmemfRZBGAI9b2NSt5L+2nHVgHQostGwVXMSn21Iq?=
 =?us-ascii?Q?W8EoQo5Jc8LgRX2EhDJKGtYHGOeqeV+iV14voRiubW5rWI7jkzYLX29RGcHL?=
 =?us-ascii?Q?txctobWYht7yescW1LhNXeTjBSrYNeea81RBXT0vtNPp1dQkjcgbIlnKfy3u?=
 =?us-ascii?Q?7k1B6kQITgUefQ6ez/oj5hn6Xk044IdQtV3wl2NbUYHbzFE0xVs2Q3dvVbg6?=
 =?us-ascii?Q?5C9UORHyjqNrPPFfPf4bvIBlt6+EJ4D15blYHjoo8k7AlUVxAOF/I6W4CSqs?=
 =?us-ascii?Q?cYZzsOgcLSPISaMKyIk2Rzp1Grqnz0+XcGTd6Kza4oLRHo+4JDdQitYPRs7k?=
 =?us-ascii?Q?oUYwqFmElmXOYx+cMilA4Egb?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e32a325a-d0c9-45f4-bf49-08d93745119f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2021 19:20:04.0737
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xBncePuLzykfxJ7flQwRuVznf5ZATSXtneYvTgwW4c5iJ1PYAWB6YsZWdW9JYVICQZytQCiZHGX3C+dIcjwY8g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3160
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10025 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 adultscore=0 suspectscore=0
 phishscore=0 malwarescore=0 mlxscore=0 mlxlogscore=999 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2106240106
X-Proofpoint-ORIG-GUID: HVGSeOE_AI_2-cWDVJ4zX0pPBMurpisx
X-Proofpoint-GUID: HVGSeOE_AI_2-cWDVJ4zX0pPBMurpisx

On Thu, Jun 24, 2021 at 11:55:14PM +0800, Claire Chang wrote:
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
> 
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote wifi exploits: [1a], [1b] that show a
> full chain of exploits; [2], [3]).
> 
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> 
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
> 
> v15:
> - Apply Will's diff (https://lore.kernel.org/patchwork/patch/1448957/#1647521)
>   to fix the crash reported by Qian.
> - Add Stefano's Acked-by tag for patch 01/12 from v14

That all should now be on

https://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git/
devel/for-linus-5.14 (and linux-next)



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 19:21:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 19:21:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.146998.270673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwUv9-0004lB-4Q; Thu, 24 Jun 2021 19:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 146998.270673; Thu, 24 Jun 2021 19:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwUv9-0004l4-0V; Thu, 24 Jun 2021 19:21:51 +0000
Received: by outflank-mailman (input) for mailman id 146998;
 Thu, 24 Jun 2021 19:21:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CoeP=LS=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1lwUv7-0004kw-B8
 for xen-devel@lists.xenproject.org; Thu, 24 Jun 2021 19:21:49 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57bf1eff-82a5-4c75-9be8-99bdc89f1e2c;
 Thu, 24 Jun 2021 19:21:48 +0000 (UTC)
Received: from pps.filterd (m0246630.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15OJBqEH025724; Thu, 24 Jun 2021 19:21:10 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 39c8twatp9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 19:21:10 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 15OJHgSF002381;
 Thu, 24 Jun 2021 19:21:09 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2103.outbound.protection.outlook.com [104.47.70.103])
 by userp3020.oracle.com with ESMTP id 399tbwf96k-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 24 Jun 2021 19:21:08 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB3160.namprd10.prod.outlook.com (2603:10b6:a03:151::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.18; Thu, 24 Jun
 2021 19:21:05 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4264.020; Thu, 24 Jun 2021
 19:21:05 +0000
Received: from char.us.oracle.com (138.3.200.25) by
 BYAPR05CA0015.namprd05.prod.outlook.com (2603:10b6:a03:c0::28) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Thu, 24 Jun 2021 19:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57bf1eff-82a5-4c75-9be8-99bdc89f1e2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=Q5jxGgw/e7hKqYXxYbU2t6cMICL0Den735IuDjh1eLs=;
 b=AwapEz9Fn4isusfA3zGuaEInXZkBGlEb+1ZsiVmDJRpcmk7RH0/zg04lPgfbHBJUVAZr
 UpsX+ZH4CCKA5VNyl4+RN6fTbStc1giOrabAVIXs82+pF+cQUDdPXLX7Wh+GyQciTi1g
 01TvEtkQAKEiqF/TOqlv2AFEirVNPYFUsdxdtSMRV184a/YziX++8dEyNjLbKTKCtjO4
 RlVmF+7gao7jmZOY/4KN0qQCK2oEALnCFm9wMRrFMXG0if8EZ9WYfPSW6MNd0Wisuzi1
 bBnFi3HJwrsqgq79XkmX3940QsqaN9ecafHt9eKmIbdTtsc9BWMg3lDrcgpB4D31E3ar hw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bsJhIILTZggUN9b2aVXhA72CJGMmfSvKxcl8GoCi5gOkOxApwApWj4dHHwqSjJZwk63Qiwb6ZaOWgv5jCJ5giHkSHTRlRohjap4TJ8gX6dD4QZZSlF+7xWmXKy1IsPZeX9iyWx8YceDW3wWF5SrDqe+kaJHQ2Mhn2QMTNFl5C4bd6Sx2+6/enfGE9x2zwFSuasKiKa+H0tSeDSJz/7I9FKbdw5U5cGYYt1QoltONloTIAuuypdzXnlCM/7W7FDumf9GV6z5qU715zbRpdxj4xT4dTqsTbpi7yq+2QU7PCUMk/u63pq0HMv+Ia3JCKWEX2hd1sYukYKBn8rPTjuw9zg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q5jxGgw/e7hKqYXxYbU2t6cMICL0Den735IuDjh1eLs=;
 b=h9vQXr7yAwo1dF3Jr2Eu6WhoHSSGuRpIoSRJHdN/uNNNBUSSiUCPvof2dx+ZIj62ZIXngoSd8BaBZ5WCEBulia+B1w90dGcrspsFBO69LwPAuKp07TbgSmPj3pw5S8crsUxIWcBwStUmoxatAl6I5CDLxLXtBFrBuDm+TQ+RRWedvxDENdaAdxUwX5ExndqONyrxRzPcQP/iNE7dns7oQbu4IuqmVGWs82uPNanvnDkKV8CwphV6hPbvNCJJXZ9wNovaiLwnYSFBsNBRvZtNTv/Q8DJ3SHngHtSVrX9SSRJhFMifHYgzwrcuRVAZzxgLYMeBMVQ+r2iaDrVLdN+JsQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q5jxGgw/e7hKqYXxYbU2t6cMICL0Den735IuDjh1eLs=;
 b=lv5vM2Tgin8y/ly7lXc68mx37dnRV098CJrmD3i5cpzSsBjE2qGcTrTJX5beuFzThZ8IhtKZUiWGMA7a4Hf7N9V0fYzcbAsx9BKaLxKZp3xGtgbVYPkRvExTAcdQd8GZsz5OdlLx+ikEYw+u2DhFiQjtrjyikX1QbN26ykKJzow=
Authentication-Results: chromium.org; dkim=none (message not signed)
 header.d=none;chromium.org; dmarc=none action=none header.from=oracle.com;
Date: Thu, 24 Jun 2021 15:20:55 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Qian Cai <quic_qiancai@quicinc.com>, Will Deacon <will@kernel.org>,
        Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
        Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Frank Rowand <frowand.list@gmail.com>,
        boris.ostrovsky@oracle.com, jgross@suse.com,
        Marek Szyprowski <m.szyprowski@samsung.com>,
        heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
        peterz@infradead.org, benh@kernel.crashing.org,
        joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
        chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
        mingo@kernel.org, Jianxiong Gao <jxgao@google.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Saravana Kannan <saravanak@google.com>, xypron.glpk@gmx.de,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
        linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
        Thierry Reding <treding@nvidia.com>, intel-gfx@lists.freedesktop.org,
        matthew.auld@intel.com, linux-devicetree <devicetree@vger.kernel.org>,
        Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
        maarten.lankhorst@linux.intel.com, linuxppc-dev@lists.ozlabs.org,
        jani.nikula@linux.intel.com, Nicolas Boichat <drinkcat@chromium.org>,
        rodrigo.vivi@intel.com, Bjorn Helgaas <bhelgaas@google.com>,
        Dan Williams <dan.j.williams@intel.com>,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Greg KH <gregkh@linuxfoundation.org>,
        Randy Dunlap <rdunlap@infradead.org>,
        lkml <linux-kernel@vger.kernel.org>,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        Jim Quinlan <james.quinlan@broadcom.com>,
        Tom Lendacky <thomas.lendacky@amd.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v14 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <YNTbF5kP0r+VgO6Y@char.us.oracle.com>
References: <19d4c7a2-744d-21e0-289c-a576e1f0e6f3@quicinc.com>
 <20210624054315.GA25381@lst.de>
 <CALiNf288ZLMhY3E8E3N+z9rkwi1viWNLm1wwMEwT4rNwh3FfwQ@mail.gmail.com>
 <364e6715-eafd-fc4a-e0af-ce2a042756b4@arm.com>
 <20210624111855.GA1382@willie-the-truck>
 <452155d2-c98e-23f6-86d6-3a2ff2e74783@arm.com>
 <20210624114829.GB1382@willie-the-truck>
 <43ec9dd6-12c0-98ec-8d5d-b2904292721e@quicinc.com>
 <YNSq56zyJ7EYdTcI@char.us.oracle.com>
 <CALiNf2_WCVodEZJz9PaCTgk_c8LpOAcD4=YZTLDMVyorJs3ESQ@mail.gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2_WCVodEZJz9PaCTgk_c8LpOAcD4=YZTLDMVyorJs3ESQ@mail.gmail.com>
X-Originating-IP: [138.3.200.25]
X-ClientProxiedBy: BYAPR05CA0015.namprd05.prod.outlook.com
 (2603:10b6:a03:c0::28) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 20a391a4-213a-4ec3-68bf-08d937453637
X-MS-TrafficTypeDiagnostic: BYAPR10MB3160:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB31604EE3C3F8658D0C7EA04489079@BYAPR10MB3160.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 20a391a4-213a-4ec3-68bf-08d937453637
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2021 19:21:05.4356
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3160

On Thu, Jun 24, 2021 at 11:58:57PM +0800, Claire Chang wrote:
> On Thu, Jun 24, 2021 at 11:56 PM Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> >
> > On Thu, Jun 24, 2021 at 10:10:51AM -0400, Qian Cai wrote:
> > >
> > >
> > > On 6/24/2021 7:48 AM, Will Deacon wrote:
> > > > Ok, diff below which attempts to tackle the offset issue I mentioned as
> > > > well. Qian Cai -- please can you try with these changes?
> > >
> > > This works fine.
> >
> > Cool. Let me squash this patch in #6 and rebase the rest of them.
> >
> > Claire, could you check the devel/for-linus-5.14 say by end of today to
> > double check that I didn't mess anything up please?
> 
> I just submitted v15 here
> (https://lore.kernel.org/patchwork/cover/1451322/) in case it's
> helpful.

Oh! Nice!
> I'll double check of course. Thanks for the efforts!

I ended up using your patches #6 and #7. Please double-check.


From xen-devel-bounces@lists.xenproject.org Thu Jun 24 20:59:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 20:59:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147007.270684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwWRW-0004u6-Hh; Thu, 24 Jun 2021 20:59:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147007.270684; Thu, 24 Jun 2021 20:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwWRW-0004tz-E2; Thu, 24 Jun 2021 20:59:22 +0000
Received: by outflank-mailman (input) for mailman id 147007;
 Thu, 24 Jun 2021 20:59:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwWRU-0004tT-QP; Thu, 24 Jun 2021 20:59:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwWRU-0000eH-HF; Thu, 24 Jun 2021 20:59:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwWRU-0006xA-6M; Thu, 24 Jun 2021 20:59:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwWRU-0001or-5q; Thu, 24 Jun 2021 20:59:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W0gFgwAnX8eqH9Dqbz0uzAL8rgeTxQ4cgeNCJiVYbrc=; b=iUhgSU1acOzawmmGxo7yBkCAyr
	SOv+JTJmHjir+XSMbVDIrVllHGnr0vkynzT1icSehQ3RcF0TQvv5ZUMhzj16Vvxtw3Uhgiiyh8+MM
	XwFVbfjzb6hbdLNnCOVA+rc5skYZ2sIA950iFvXS49nDZf7eMVNbgW//w1kQA3fYYL+Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163014-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163014: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
X-Osstest-Versions-That:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 20:59:20 +0000

flight 163014 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163014/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163004
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163004
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163004
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163004
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 163004
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163004
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163004
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163004
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163004
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163004
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163004
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163004
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7
baseline version:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7

Last test of basis   163014  2021-06-24 10:22:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 24 21:38:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Jun 2021 21:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147013.270698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwX2n-0000Ro-E7; Thu, 24 Jun 2021 21:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147013.270698; Thu, 24 Jun 2021 21:37:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwX2n-0000Rh-Ay; Thu, 24 Jun 2021 21:37:53 +0000
Received: by outflank-mailman (input) for mailman id 147013;
 Thu, 24 Jun 2021 21:37:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwX2l-0000RX-Kj; Thu, 24 Jun 2021 21:37:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwX2l-0001Fp-E6; Thu, 24 Jun 2021 21:37:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwX2l-0008O5-4O; Thu, 24 Jun 2021 21:37:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwX2l-0005q0-3q; Thu, 24 Jun 2021 21:37:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NFBx1CI/Z3EmQcab77GIdzQT/ocVivHmw/nHH2yorJ8=; b=jJu59DzDyNLE8vewG+ozb8mobH
	t3f+KcefR0Qk9yHpZXsfNSzusgjiZYlr/1pr8UpQgtoZeT272ffBtMazxe24EdE/6hK/Xgvm08DgC
	UHbHaPRE8Y2kpBzjvNRN9R04C2G9Xi8lZC46h7T4+sjXUKaauTfPmczaNpiIVwT29G74=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163018-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163018: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=19a541d70e0748af69d3b09d55a1415762c8d749
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Jun 2021 21:37:51 +0000

flight 163018 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163018/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 19a541d70e0748af69d3b09d55a1415762c8d749
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   20 days
Failing since        162368  2021-06-04 15:42:59 Z   20 days   46 attempts
Testing same since   163018  2021-06-24 13:41:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2602 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 00:42:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 00:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147021.270712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwZv1-0000h4-Un; Fri, 25 Jun 2021 00:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147021.270712; Fri, 25 Jun 2021 00:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwZv1-0000gx-R4; Fri, 25 Jun 2021 00:42:03 +0000
Received: by outflank-mailman (input) for mailman id 147021;
 Fri, 25 Jun 2021 00:42:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hc5c=LT=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lwZv0-0000gr-AJ
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 00:42:02 +0000
Received: from mail-il1-x135.google.com (unknown [2607:f8b0:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 998e538c-58e0-48ee-9a65-f3f5ba9856d2;
 Fri, 25 Jun 2021 00:42:01 +0000 (UTC)
Received: by mail-il1-x135.google.com with SMTP id s19so8233409ilj.1
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 17:42:01 -0700 (PDT)
Received: from mail-il1-f174.google.com (mail-il1-f174.google.com.
 [209.85.166.174])
 by smtp.gmail.com with ESMTPSA id y205sm2519374iof.1.2021.06.24.17.41.58
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Jun 2021 17:41:59 -0700 (PDT)
Received: by mail-il1-f174.google.com with SMTP id z1so8268414ils.0
 for <xen-devel@lists.xenproject.org>; Thu, 24 Jun 2021 17:41:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 998e538c-58e0-48ee-9a65-f3f5ba9856d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=EqlrFHdkETA6GW6RTp23OD25mZ9+t6ZRMvVFoaOPoUc=;
        b=bWcnQFoaV8XnWx2CmMqEub8uZpXzFMse4FkKs/6K7fAcmnfM33tj2MQVZVMqk8lHor
         cnCu5fEy9K2Coqappl8ZOF4ctPyliF3MamPmCDoyKpou7w8OAoAQjHS8393soCVU2vwe
         MHeea8kRyeqRwy1slhC35EkQWOKLuZs+gqotQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=EqlrFHdkETA6GW6RTp23OD25mZ9+t6ZRMvVFoaOPoUc=;
        b=VIB7n3Brdk3T8a9xrY4Z9SpIAa4kQqeBAhfR8Y22ifwsZCYn8S7+TkzS8lHDsa5kp2
         6LeBfSvT5M31nt/f7k6YM6Al4+b/lgAINtKloGJXNNDQWMoSlmr+/C0Uq8P+XrH75q70
         4o9uDIhOOnfVGgX+647pjG/HEbvRX6BgzCqXX5y3qpSpupdA2wklSvwCFbF1mrsOSgyN
         jhAs6LbbDRrQfMKM7bDFMAr/OZiIQA8AzXC0rRmObMx+jQKhPkcZWw8n+xv4CP2NZJIs
         RhkrDiErvk+G31TWWug8cmvm5F4PH3aTyVs5KNwYfV0YA8pzPsxDGkH6wEDNtr+KJO+h
         63Wg==
X-Gm-Message-State: AOAM5336xN0pJIiOFUvzhi54y6WstPJ3T9fJiVsy39qlT7RXLJmJw9Vh
	9QpABYIbEML7PKEH0N05jOJQxYSdAL8N6g==
X-Google-Smtp-Source: ABdhPJzPY+eeeaFQi1hhCsQWGRVV7BvEQ812Q7522o3m0yG6WwQxs+kuTmkx+pE+mW3LUpaBsTD4PA==
X-Received: by 2002:a92:b746:: with SMTP id c6mr61633ilm.81.1624581720542;
        Thu, 24 Jun 2021 17:42:00 -0700 (PDT)
X-Received: by 2002:a05:6602:1546:: with SMTP id h6mr6334034iow.34.1624581707960;
 Thu, 24 Jun 2021 17:41:47 -0700 (PDT)
MIME-Version: 1.0
References: <20210624155526.2775863-1-tientzu@chromium.org> <YNTa1C5uvz+qWryf@char.us.oracle.com>
In-Reply-To: <YNTa1C5uvz+qWryf@char.us.oracle.com>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 25 Jun 2021 08:41:37 +0800
X-Gmail-Original-Message-ID: <CALiNf297ep9C8-3s=F-xRDud=QB9geMfCMKTqLzPJKEdYnfbXQ@mail.gmail.com>
Message-ID: <CALiNf297ep9C8-3s=F-xRDud=QB9geMfCMKTqLzPJKEdYnfbXQ@mail.gmail.com>
Subject: Re: [PATCH v15 00/12] Restricted DMA
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com, 
	jgross@suse.com, Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, 
	benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
	bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com, Tom Lendacky <thomas.lendacky@amd.com>, 
	Qian Cai <quic_qiancai@quicinc.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jun 25, 2021 at 3:20 AM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> On Thu, Jun 24, 2021 at 11:55:14PM +0800, Claire Chang wrote:
> > This series implements mitigations for lack of DMA access control on
> > systems without an IOMMU, which could result in the DMA accessing the
> > system memory at unexpected times and/or unexpected addresses, possibly
> > leading to data leakage or corruption.
> >
> > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> > not behind an IOMMU. As PCI-e, by design, gives the device full access to
> > system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> > to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a
> > full chain of exploits; see also [2], [3]).
> >
> > To mitigate the security concerns, we introduce restricted DMA. Restricted
> > DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> > specially allocated region and does memory allocation from the same region.
> > The feature on its own provides a basic level of protection against the DMA
> > overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system needs
> > to provide a way to restrict the DMA to a predefined memory region (this is
> > usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> >
> > [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> > [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> > [2] https://blade.tencent.com/en/advisories/qualpwn/
> > [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> > [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
> >
> > v15:
> > - Apply Will's diff (https://lore.kernel.org/patchwork/patch/1448957/#1647521)
> >   to fix the crash reported by Qian.
> > - Add Stefano's Acked-by tag for patch 01/12 from v14
>
> That should all now be on
>
> https://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git/
> devel/for-linus-5.14 (and linux-next)
>

devel/for-linus-5.14 looks good. Thanks!


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 00:51:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 00:51:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147026.270723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwa45-0002BF-Vq; Fri, 25 Jun 2021 00:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147026.270723; Fri, 25 Jun 2021 00:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwa45-0002B8-SI; Fri, 25 Jun 2021 00:51:25 +0000
Received: by outflank-mailman (input) for mailman id 147026;
 Fri, 25 Jun 2021 00:51:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l36E=LT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwa45-0002Aq-0u
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 00:51:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2871de8-26a2-48b5-9519-997a5fbdb9f7;
 Fri, 25 Jun 2021 00:51:24 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 8A84A6137D;
 Fri, 25 Jun 2021 00:51:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2871de8-26a2-48b5-9519-997a5fbdb9f7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624582283;
	bh=OR4nEQViM+sm5HbT6HHys1OrlU1T0jJ2x631OpR/zDg=;
	h=Date:From:To:cc:Subject:From;
	b=Cm3fEgRDdV/+deUg5UwaUOxlOcDjshTyd41SW3uqbpBiPuCwrxYxggB7m/r875Tkw
	 1ZGQVsOBHH8CNrrPR3INHpOmXOIrsbSbLtWFJrB/fNJFjey3O67KeLMTmnkWTRXgx3
	 l/YhniVNLIdYOrJou+PsbD1dGOWdHoR5RHMdbDAbfOBLwlNkK2+MFHNdu1RSlqmW/S
	 HzVoUBBvWFj6FjLQnl0fqGaNk80SX0vx/7CySIRCqAlZknivBndB59727KeTTt8yBk
	 8NGivtUrJxHKzF/I67+aZSw5i2NVpSSEKzI99VrB+1A6IbiRz4Q1Njbf2ZJrwEEmv+
	 NeL6diOVSSZjw==
Date: Thu, 24 Jun 2021 17:51:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: julien@xen.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org, 
    Volodymyr_Babchuk@epam.com
Subject: [PATCH] xen/arm: add forward_smc command line option for debugging
Message-ID: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

It has become clear that an option to disable trapping SMC calls to Xen
is very useful for debugging user issues. Instead of having to provide a
patch to users every time, it would be great if we could just tell them
to add forward_smc=true to the Xen command line.

This option is obviously unsafe and insecure and is only meant for
debugging. Make it clear in the description that if you pass
forward_smc=true then the system is not security supported.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 3ece83a427..0833fe80fc 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency significantly. It can also lead to
 suboptimal scheduling decisions, but only when the system is
 oversubscribed (i.e., in total there are more vCPUs than pCPUs).
 
+### forward_smc (arm)
+> `= <boolean>`
+
+> Default: `false`
+
+If enabled, instead of trapping firmware SMC calls to Xen, forward SMC
+calls from VMs directly to the firmware. This option is UNSAFE and is
+only meant for debugging. Systems with forward_smc=true are not security
+supported.
+
 ### watchdog (x86)
 > `= force | <boolean>`
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e7384381cc..0580ac5762 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
 }
 custom_param("vwfi", parse_vwfi);
 
+static bool forward_smc = false;
+boolean_param("forward_smc", forward_smc);
+
 register_t get_default_hcr_flags(void)
 {
     return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
+             (forward_smc ? 0 : HCR_TSC) |
+             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
 static enum {



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 03:37:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 03:37:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147033.270734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwceP-0000o9-7X; Fri, 25 Jun 2021 03:37:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147033.270734; Fri, 25 Jun 2021 03:37:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwceP-0000o1-1i; Fri, 25 Jun 2021 03:37:05 +0000
Received: by outflank-mailman (input) for mailman id 147033;
 Fri, 25 Jun 2021 03:37:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwceO-0000nr-7W; Fri, 25 Jun 2021 03:37:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwceO-0000bm-0O; Fri, 25 Jun 2021 03:37:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwceN-0000J4-O7; Fri, 25 Jun 2021 03:37:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwceN-0000V2-NY; Fri, 25 Jun 2021 03:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B33G+cAyywbnJV48h61ZhU4HJcgnX2IbHncH4K8pjAU=; b=iUhy1Ak83Kp4XRtL5MFfmr+g65
	bCZaAJZnYDUwtNbdAK19twuLRbc/X/In3TvNEi6bel2RH0+V8vxDi0fXsTvNnWcvkTvcsAowDAQyo
	s2oKUIGrJN043DmSpak7ShFqpD89PI75GioZz198lBYKmYmucQIgynw75ZWkElGxFrnw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163017-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163017: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d0ac9a61474cf594d19082bc8976247e984ea9a3
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 03:37:03 +0000

flight 163017 qemu-mainline real [real]
flight 163023 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163017/
http://logs.test-lab.xenproject.org/osstest/logs/163023/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                d0ac9a61474cf594d19082bc8976247e984ea9a3
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  308 days
Failing since        152659  2020-08-21 14:07:39 Z  307 days  564 attempts
Testing same since   163017  2021-06-24 12:33:04 Z    0 days    1 attempts

------------------------------------------------------------
547 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 177099 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 04:50:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 04:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147040.270751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwdn5-0008HD-Oq; Fri, 25 Jun 2021 04:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147040.270751; Fri, 25 Jun 2021 04:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwdn5-0008H6-LT; Fri, 25 Jun 2021 04:50:07 +0000
Received: by outflank-mailman (input) for mailman id 147040;
 Fri, 25 Jun 2021 04:50:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwdn4-0008AW-5p; Fri, 25 Jun 2021 04:50:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwdn3-0001zV-TK; Fri, 25 Jun 2021 04:50:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwdn3-000548-KG; Fri, 25 Jun 2021 04:50:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwdn3-00062b-Jm; Fri, 25 Jun 2021 04:50:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xpippI1FzJc6QlakWQd+CZlivCGI8QVsFtQHqmA+Luc=; b=roG+PE/wR6lJ+waM+0Q42aJj4Q
	edlXd4FmKl3LIKLZyB1cNMFascoUwhO3D4ZL3bLfrqGks+4kZTs1xnRmffYFQlWR3m5HW2UjA6Koq
	bzwAMexfpd+UpOJ7W76zbrd/qL0TGfCyi+w0fZp3Rk5tkz+2dHQPANEufZcQDD84vYRg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163020-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163020: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:leak-check/check:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7426cedc7dad67bf3c71ea6cc29ab7822e1a453f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 04:50:05 +0000

flight 163020 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163020/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 163010 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  13 debian-fixup     fail in 163010 pass in 163020
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10     fail pass in 163010
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 21 leak-check/check fail pass in 163010

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 163010 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7426cedc7dad67bf3c71ea6cc29ab7822e1a453f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  328 days
Failing since        152366  2020-08-01 20:49:34 Z  327 days  556 attempts
Testing same since   163010  2021-06-24 03:29:41 Z    1 days    2 attempts

------------------------------------------------------------
6199 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1689018 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 06:31:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 06:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147046.270764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfMw-0000pv-1v; Fri, 25 Jun 2021 06:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147046.270764; Fri, 25 Jun 2021 06:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfMv-0000po-V6; Fri, 25 Jun 2021 06:31:13 +0000
Received: by outflank-mailman (input) for mailman id 147046;
 Fri, 25 Jun 2021 06:31:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwfMu-0000pi-MT
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 06:31:12 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84e8a9c7-be87-492e-92ab-6955f1efa122;
 Fri, 25 Jun 2021 06:31:11 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2055.outbound.protection.outlook.com [104.47.13.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-S-KhC4khNjOeeVnvPgU28A-1; Fri, 25 Jun 2021 08:31:08 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5743.eurprd04.prod.outlook.com (2603:10a6:803:e0::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Fri, 25 Jun
 2021 06:31:06 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 06:31:05 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0262.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Fri, 25 Jun 2021 06:31:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84e8a9c7-be87-492e-92ab-6955f1efa122
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624602669;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F9w18Mp5lWdHoUibn/6rsn2m3iF8YMsSgt3o1ggwjw8=;
	b=kn2Xnn90pxD6jeL+uyoYibKJpItNTXoLGsE6PZPPwM6gF2iy4FoRmZh2DkGTJC5R+f180H
	OhTgTjhowgCGlwru6c2B6MhuhRF3q2KbZ4mY64FXCchizw6/23QdJN2dkoIemDzfJUBtvf
	zADvpEa8vjXQ9dYqQmBgkWKr6vLUaV8=
X-MC-Unique: S-KhC4khNjOeeVnvPgU28A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dIuY3KCJxOp4dzxPi2zGur8d1gb0YkdHYEfins2+b+FHlW3OuwVO9qhyXboN0zIGf0puwcdeJUxIZUchX8q4nkJC5LquuucM02VcXvjeOn5VUdvULl2z+VErF+6UOa1Ag3pKy3gk0oZJgAWOvwhM070pdYeG+MOgF57+9s3AbuVZsLvX7tbV4hFb4mfYPxZpEb+9KeB/K1+Ya+fj9JgJE98dPwv25/uGjSWfnzAW53CWQqzIF7tH80xFc/68AXToEfzz1/9HvQNIFaeNZ8Xqe8R3eRqIm3UsKksSKfTu5TtuHcwCpB5k1K/nRbdi35lI9K/UsDnmvb7aU4HoKoZKGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F9w18Mp5lWdHoUibn/6rsn2m3iF8YMsSgt3o1ggwjw8=;
 b=LYRoVMoPRs6QvFWwyK1Epe+ACRfmUJsh6PuF77resBef0h32xfDfq5lltxas8ooGlJq18SplkiHEOdsiDL6H3JSLillSX7hStzgaLJTCePIr7Psq9PBJQFzC50r/B0/o0HfKRiGTNYhef8A0+++0WpQ8nExYHW81sdtHgwDHR52uT47KPZ2xnsGP1DSBulsNKMJT8xX0XHos6GI4KOucb/ugUgbj9cxAxyenrB4h3vn39XnrCEqmycQaXkaKdAo/Tpy94VkTPJZuorNpTKSfkBIRoG6lS9+GM+JttX6k56B0wIFCc3Y1YtA7jrctiruCDpR8ecVJQ3njE9cauXhM8Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
Date: Fri, 25 Jun 2021 08:31:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210624175521.20843-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0262.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::34)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a15ac994-348a-45e6-22cf-08d937a2cf9c
X-MS-TrafficTypeDiagnostic: VI1PR04MB5743:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5743A4748DE22699B07C7970B3069@VI1PR04MB5743.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4125;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?N0NNQWZ5dGNwbEVBbU1UaFgvUlk2UjhtVE1tQ1I1b2VrYkp6M2UrMVdoeTVM?=
 =?utf-8?B?aUxUTWdrUkpRMHBEak9jcE1qcTd3STVLYlJMcDZzRUhYYWNKdXdtTjl5WWd4?=
 =?utf-8?B?dlQwZnpYZTlHQnFsOHhPdWQvUVg5bHAyTFp6TWtSUU0vSXE3aG0rNkhHNnEv?=
 =?utf-8?B?TVBCaVEzbkFvOWFUOUtQb2JROHVYV0Y4bWYrcEJvQWk4L2lPam5CeTZIT0lR?=
 =?utf-8?B?dlBxYzFsWHVzWjl0MkUreUYzdTAyYVBjV2hwMDkrV3QvaG4vTFI0RXV4M0ZJ?=
 =?utf-8?B?ekMvczNUK3NTVHVlZDhOcE9YTDZoQ0QvK1E3Qlc1UEoycU9hT1ZrNUVsTHRP?=
 =?utf-8?B?STJNcGRJdW5paEVKbVhxd0tteCs4Tjl1VCsxdklLL2ZHeC9RejBuaTZwN0lx?=
 =?utf-8?B?V0RuSExzWkF0K2g2ZW9QYlA5bExzSkVhaGRJVEhJR0FycFJxcTJFb0V3Z202?=
 =?utf-8?B?NEcwNXFFOTJMSG5UanhQdmNPSmh0bGVkNktIT1BUWkc4bTg2NldJMk9Udm5J?=
 =?utf-8?B?OVhFcDM4eDAzYUxVY1Q0cU1wQ21FMkpYMnVHMUxjeDJQbmNDSzZOZXFtQlZR?=
 =?utf-8?B?N01vaTBuYU1yUG9LSzhQVTB4UmNGeFhCcnRZVkt2WVZFVHFFZTVGTDR2SVEy?=
 =?utf-8?B?eVhyeS83M1dMOXI2OW96blpmUUUySUtOM3FzaVRxL0Z1NHNUc0VzMXVNWnh2?=
 =?utf-8?B?MVFVc0poVnFORTZpblZDbFd5disrbVVMZ1VZUnd1R2pMOGMwZk16Y3BVa09t?=
 =?utf-8?B?bFRoUTNub285Z2dHUXdjOVV4azJ6YTFySUlxYW9RdnBjYkRJNVA4Nk5GQ2g5?=
 =?utf-8?B?a0EvWmxCREsxNWNsS0tTaVNUUndxVHZheTlsRTdOa1hkZjFwaGdKSkNqREpo?=
 =?utf-8?B?bFppZkVUbnlIOUh2NnlwY2FKSThybkFxd3lXOEJER1VQTDVxRElKS090UjhS?=
 =?utf-8?B?eUtIRXNqLzVvMHJYbjJCU3crakNsNVV2emkxV2dEQzJIOCt5c3hDTDE3ZFVM?=
 =?utf-8?B?MmM5YmdIZzhMQWI2Y2w3aDhRRFlCb1NxSEVNS1VSOUZsRXQzNkgweEo3MWgz?=
 =?utf-8?B?QUpsYno2Qk1WY3pMcUc5TnlBZmNpbmh6RG1aem9qN3ZWOXRkTjdWNk1zUkIw?=
 =?utf-8?B?WHdCc05BVVFpSnh6R1E5d1I1VW9oaERZVUV1RHRvQUJSRFpYVVpVRkJabUpj?=
 =?utf-8?B?TERvSitCSGhJbml2YkFsSTQwTU5vNm9JSlBPK0ozNnZnU3RJcmN2bmFDMitu?=
 =?utf-8?B?eW9lSXdPRG8xR2JMWlI1eTlvUVNRS3Q2ZDA2eElwVWpuN3ZTVldKcis4UC9i?=
 =?utf-8?B?alBlN2ltZGFJamI4dW5xakduR0gyNmZFUkxuYzlRd0JhcElaaGM0RVdJTG5E?=
 =?utf-8?B?eUtjNTh3a0F2THNrUGg2Qzg0M2VOaExKekZ1ODBqNWIxbFlITkd0ejRTU1Q2?=
 =?utf-8?B?Z2lyOWxVd0pTYy80Nk03SUFCR0dhNzZBUlBxTGNua3duRTRId090aHFWU3Na?=
 =?utf-8?B?Ri9vQVhna3JjdEpPeWZHc1BFRjJpcTUwSmZMcmdTSFFrTDRsU2QxUGJwMkZX?=
 =?utf-8?B?Q3BCcVRVMmQ2WDBmVk1GYXY2MzB0ZUtFOWdaWjhkcG1Ub1Y5VG1JN3RXSlRZ?=
 =?utf-8?B?QjVBV0VUT1E0YTIyQVNNYW9qa1RxWm4rOXB5aWEzS2RsNVAxOWwyL05LdHNp?=
 =?utf-8?B?UytJeXZ1N0lJaC9Da1NIazAwTWo4UEt0aUpvbkhSa0FlN2dIRko0K3RlbVdh?=
 =?utf-8?Q?FJIIVQsYXc212TT7s5cG0xXr1UDpLFX0ubHp51k?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a15ac994-348a-45e6-22cf-08d937a2cf9c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 06:31:05.7921
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sVOVJUzQTuhX4HN6Jxx9aXWtyBEIEYpMOpBp+V7yg7H7WWrOCCRxIKtNKFw8eZO2TtpIC2co51O6N2oaeDS96Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5743

On 24.06.2021 19:55, Andrew Cooper wrote:
> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")

Is this strictly necessary, i.e. is a Fixes: tag warranted here? I wonder
in particular with the possibility of backporting in mind, where I don't
think we should lightly alter the SONAME and file name.

Jan

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> ---
>  tools/libs/call/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
> index 4ed201b3b3..93d404b79e 100644
> --- a/tools/libs/call/Makefile
> +++ b/tools/libs/call/Makefile
> @@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
>  include $(XEN_ROOT)/tools/Rules.mk
>  
>  MAJOR    = 1
> -MINOR    = 2
> +MINOR    = 3
>  
>  SRCS-y                 += core.c buffer.c
>  SRCS-$(CONFIG_Linux)   += linux.c
> 
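(For context, the MINOR value quoted above feeds the versioned library
file name; a rough sketch of the naming only — the exact rules live in
tools/Rules.mk:)

```shell
# Illustrative only: how MAJOR/MINOR typically map to the installed
# shared-library file names for libxencall after the bump above.
MAJOR=1
MINOR=3
LIB=libxencall
echo "${LIB}.so.${MAJOR}.${MINOR}"   # the real installed file
echo "${LIB}.so.${MAJOR}"            # SONAME symlink pointing at it
```

This is why backporting the bump is sensitive: it changes the file name
that packages and consumers link against.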



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 06:39:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 06:39:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147049.270776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfUi-0001Ye-Rq; Fri, 25 Jun 2021 06:39:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147049.270776; Fri, 25 Jun 2021 06:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfUi-0001YX-Oe; Fri, 25 Jun 2021 06:39:16 +0000
Received: by outflank-mailman (input) for mailman id 147049;
 Fri, 25 Jun 2021 06:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwfUh-0001YR-6Y
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 06:39:15 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6d189c2-ca96-4d83-ba7b-295bdbe775ee;
 Fri, 25 Jun 2021 06:39:14 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2058.outbound.protection.outlook.com [104.47.12.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-26-35ROH1ogOmec7bDTHYf1_Q-1; Fri, 25 Jun 2021 08:39:11 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Fri, 25 Jun
 2021 06:39:04 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 06:39:04 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR0101CA0063.eurprd01.prod.exchangelabs.com (2603:10a6:200:41::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18 via Frontend
 Transport; Fri, 25 Jun 2021 06:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6d189c2-ca96-4d83-ba7b-295bdbe775ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624603152;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=C27HCT7qP3/9WkHoI0KFDxZj95G5b6p72VWx7cvBcfA=;
	b=T45p517+W7H6t7PyctctKBc7Nfcap0201jsicoKROnVfdkcmB+sQ2Dg6wIJ7Nno5u2kKSq
	Uzv1tWgkH/EGvZbWoorCs7+QSQYAdM7de3SawHbejBAkM7q+HlFYwPlUzKscQXCjvBsGQX
	Z5FvhTCIdoQpv9PaqjU3h1B9O5/+WX0=
X-MC-Unique: 35ROH1ogOmec7bDTHYf1_Q-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH 3/6] xsm: enabling xsm to always be included
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Tim Deegan <tim@xen.org>, Juergen Gross <jgross@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Dario Faggioli <dfaggioli@suse.com>, Paul Durrant <paul@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, persaur@gmail.com,
 christopher.w.clark@gmail.com, adam.schwalm@starlab.io,
 scott.davis@starlab.io, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210617233918.10095-1-dpsmith@apertussolutions.com>
 <20210617233918.10095-4-dpsmith@apertussolutions.com>
 <a8d60866-b9d9-8a76-3acc-703799b204b6@citrix.com>
 <3df8648d-f706-9684-4e6d-9438dc9f0894@apertussolutions.com>
 <ca65acbc-c57c-6056-7abd-299ce5ccd643@suse.com>
 <9be51dc7-2534-64c9-30dd-06eddc5702ba@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <942cdfc9-9a6a-1ea6-330c-77fcb01cfab4@suse.com>
Date: Fri, 25 Jun 2021 08:39:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9be51dc7-2534-64c9-30dd-06eddc5702ba@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR0101CA0063.eurprd01.prod.exchangelabs.com
 (2603:10a6:200:41::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8cd77e8f-e77a-4885-12ad-08d937a3ec7e
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8cd77e8f-e77a-4885-12ad-08d937a3ec7e
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 06:39:03.9206
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: q/dj1uRiXPlR440lBb5pLd9Gy0jPwx1cO/TrFUpI7gEXV/vuqFsm0+7saSBEpyWeiu4jsKFrVtt4N7lQGybV1w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

On 24.06.2021 19:18, Daniel P. Smith wrote:
>
>
> On 6/21/21 2:53 AM, Jan Beulich wrote:
>> On 18.06.2021 18:35, Daniel P. Smith wrote:
>>> On 6/18/21 7:53 AM, Andrew Cooper wrote:
>>>> On 18/06/2021 00:39, Daniel P. Smith wrote:
>>>>> @@ -250,9 +261,8 @@ config XSM_FLASK_POLICY
>>>>>   	  If unsure, say Y.
>>>>>
>>>>>   config XSM_SILO
>>>>> -	def_bool y
>>>>> +	def_bool n
>>>>
>>>> I'm not sure we want to alter the FLASK/SILO defaults.  SILO in
>>>> particular is mandatory on ARM, and without it, you're in a security
>>>> unsupported configuration.
>>> The intent here is the default is the classic dom0 configuration. What
>>> if I did,
>>>
>>> def bool n
>>> def bool y if ARM
>>
>> Besides it needing to be with the order of the two lines flipped, if
>> Arm requires XSM_SILO, then I think it would better "select" it.
>
>
> Ack, I realized that as I fixed it for the upcoming v2.
>
> Correct me if I am wrong but if you do a "select" that means you are
> forcing the user to always have SILO built in, i.e. that makes it so the
> option cannot be disabled. There may be users who would prefer to only
> have Flask enabled on ARM and those users would not be able to turn SILO
> off.

Yes, you're right. The problem is the (imo) malformed entry, which meant
I could no longer see the presence of a prompt in the context above.
Well-formed (imo; I might also say "consistently formatted") entries
with a prompt ought to look like this (taking your change into account
already, leaving aside whether that's really what we want):

config XSM_SILO
	bool "SILO support"
	default y if ARM
	default n
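(By contrast, a hypothetical sketch of the "select" approach discussed
earlier — symbol names as in the thread, not actual Xen Kconfig — would
force the option on and remove the user's choice:)

```kconfig
config ARM
	bool
	select XSM_SILO   # forces XSM_SILO=y; the prompt can no longer disable it

config XSM_SILO
	bool "SILO support"
```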

Whether "depends" precedes or follows "default" is less clear cut.

def_bool imo would better be used only for prompt-less entries.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 06:42:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 06:42:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147053.270787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfYF-0002va-Bg; Fri, 25 Jun 2021 06:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147053.270787; Fri, 25 Jun 2021 06:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfYF-0002vT-8O; Fri, 25 Jun 2021 06:42:55 +0000
Received: by outflank-mailman (input) for mailman id 147053;
 Fri, 25 Jun 2021 06:42:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwfYE-0002vN-P0
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 06:42:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwfYE-0004Di-Hs; Fri, 25 Jun 2021 06:42:54 +0000
Received: from [54.239.6.184] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwfYE-0005Mz-A7; Fri, 25 Jun 2021 06:42:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BC3HwrowlnMRI5KWlGcyY94VD2SmNqiuRgOC2C/HonI=; b=xjA1Qv1OU9vt7RGWjBgaRgzAq6
	mey5IRsEwsfo/+OKx+4QejxFz0d76ukdNvNddmdyXLef4q2VOvbEwj0tYmGCx8ppJAGS9IhFC3rpK
	iXwqVeNiTfbywME6mHzqtxAbXA0uiPXvISARj8qNSg+WlVWAX6GghTrNiN7mYGtQ1Pak=;
Subject: Re: [PATCH] xen/arm: add forward_smc command line option for
 debugging
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
References: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org>
Date: Fri, 25 Jun 2021 08:42:51 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 25/06/2021 02:51, Stefano Stabellini wrote:
> It has become clear that an option to disable trapping SMC calls to Xen
> is very useful for debugging user issues.
>
> Instead of having to provide a
> patch to users every time, it would be great if we could just tell them
> to add forward_smc=true to the Xen command line.

I can understand this would be useful to go a bit further in dom0 boot.
But I am quite sceptical about the idea of providing an option directly in
Xen because:

1) This breaks other SMC users in Xen (OP-TEE, VM monitor, ...)
2) There is no guarantee that the SMC call will not wreck Xen. To be
clear, I don't refer to a malicious OS here, but a normal OS that boots.
3) Very likely the next step for the user (or better, the developer,
because that option should really not be used by a normal user) will be
to decide whether they should modify the kernel or implement a mediator
in Xen.

> This option is obviously unsafe and unsecure and only meant for
> debugging. Make clear in the description that if you pass
> forward_smc=true then the system is not security supported.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 3ece83a427..0833fe80fc 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency significantly. It can also lead to
>   suboptimal scheduling decisions, but only when the system is
>   oversubscribed (i.e., in total there are more vCPUs than pCPUs).
>   
> +### forward_smc (arm)
> +> `= <boolean>`
> +
> +> Default: `false`
> +
> +If enabled, instead of trapping firmware SMC calls to Xen, allow SMC
> +calls from VMs directly to the firmware. This option is UNSAFE and it is
> +only meant for debugging. Systems with forward_smc=true are not security
> +supported.
> +
>   ### watchdog (x86)
>   > `= force | <boolean>`
>   
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index e7384381cc..0580ac5762 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
>   }
>   custom_param("vwfi", parse_vwfi);
>   
> +static bool forward_smc = false;
> +boolean_param("forward_smc", forward_smc);
> +
>   register_t get_default_hcr_flags(void)
>   {
>       return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
>                (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> -             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> +             (forward_smc ? 0 : HCR_TSC) |
> +             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);

A system wide option to turn off SMC trapping is a no-go because this 
would only be usable for debugging dom0 and not a guest.

So at the minimum this should be a per-domain option. Also, I think we 
still want to integrate with the rest of the SMC users. So Xen should 
still trap the SMC and the forward should happen in vsmccc_handle_call().

This would cover my first point. For the second and third points, I would
still like to understand how this is going to help the developer fully
port the board/OS to Xen with this option disabled.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 06:45:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 06:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147056.270798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfaj-0003Xm-PR; Fri, 25 Jun 2021 06:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147056.270798; Fri, 25 Jun 2021 06:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfaj-0003Xf-MO; Fri, 25 Jun 2021 06:45:29 +0000
Received: by outflank-mailman (input) for mailman id 147056;
 Fri, 25 Jun 2021 06:45:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwfai-0003XJ-7v
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 06:45:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwfah-0004HU-6i; Fri, 25 Jun 2021 06:45:27 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwfag-0005fZ-Th; Fri, 25 Jun 2021 06:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=dViz5e+lVVEmf0UFlzwNGbarIpZltjnqcKJ8UbE/T+Q=; b=x++TpBpP/R1rEm4Ycr/imAuuW5
	wLwXKqGPSDBAOAvkTTxx8z/vPWXwgTEMQUgS5aLO4oV9gq5ZD1kubB9vhRxWM4s93P2t0skJwU8QI
	/Ikc1j5lOZ2uA+dTEXyOe+staFwP/tfIR1+rFywVGnBDWRG90O68mzza89sOtD56EXcc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	doebel@amazon.de,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH] tools/xenstored: Correctly read the requests header from the stream
Date: Fri, 25 Jun 2021 07:45:22 +0100
Message-Id: <20210625064522.24919-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Commit c0fe360f42 ("tools/xenstored: Extend restore code to handle
multiple input buffer") extended read_buffered_state() to support
multiple input buffers. Unfortunately, the commit didn't go far
enough and still used sc->data (the start of the buffers) for retrieving
the header. This would lead to reading the wrong headers for the second
and follow-up commands.

Use data in place of sc->data as the source of the memcpy()s.

Fixes: c0fe360f42 ("tools/xenstored: Extend restore code to handle multiple input buffer")
Reported-by: Raphael Ning <raphning@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

I unfortunately didn't spot the issue because I forgot to check whether
the REQ ID of the responses were unique.
---
 tools/xenstore/xenstored_core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index cf7297a96cb1..16c856730c55 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2717,11 +2717,11 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
 		len = sc->data_in_len - (data - sc->data);
 		if (len < sizeof(bdata->hdr)) {
 			bdata->inhdr = true;
-			memcpy(&bdata->hdr, sc->data, len);
+			memcpy(&bdata->hdr, data, len);
 			bdata->used = len;
 		} else {
 			bdata->inhdr = false;
-			memcpy(&bdata->hdr, sc->data, sizeof(bdata->hdr));
+			memcpy(&bdata->hdr, data, sizeof(bdata->hdr));
 			if (bdata->hdr.msg.len <= DEFAULT_BUFFER_SIZE)
 				bdata->buffer = bdata->default_buffer;
 			else
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 06:58:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 06:58:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147060.270809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfnB-00055Z-2z; Fri, 25 Jun 2021 06:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147060.270809; Fri, 25 Jun 2021 06:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwfnA-00055S-VZ; Fri, 25 Jun 2021 06:58:20 +0000
Received: by outflank-mailman (input) for mailman id 147060;
 Fri, 25 Jun 2021 06:58:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EvRe=LT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwfn9-00055M-Ge
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 06:58:19 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d44bb4af-8bb5-4c1d-8046-e48c99c800f9;
 Fri, 25 Jun 2021 06:58:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1DA041FE3B;
 Fri, 25 Jun 2021 06:58:17 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id D016011A97;
 Fri, 25 Jun 2021 06:58:16 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id SeaqMYh+1WBQJAAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 25 Jun 2021 06:58:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d44bb4af-8bb5-4c1d-8046-e48c99c800f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624604297; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vJ6YNzbpNf1VuGE8tQImgPQg68H98sgcGfq4zdDbfxs=;
	b=MymUA2vRyKxRNWEkhzmTwLsxr0rL1YTe5deHt2TZhQM0whLfkFMqaSUtsR7+89fUU3mV/m
	VwUnum9k38XOJkq40ZAQRHPangKFeEnNTYEtvAFO2sc5pA2+Q67HtDYsqDdC4IfiWlvpMw
	u9tCMr3SZSFgwiQAJj8k7ptJC0EQ2EQ=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624604297; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vJ6YNzbpNf1VuGE8tQImgPQg68H98sgcGfq4zdDbfxs=;
	b=MymUA2vRyKxRNWEkhzmTwLsxr0rL1YTe5deHt2TZhQM0whLfkFMqaSUtsR7+89fUU3mV/m
	VwUnum9k38XOJkq40ZAQRHPangKFeEnNTYEtvAFO2sc5pA2+Q67HtDYsqDdC4IfiWlvpMw
	u9tCMr3SZSFgwiQAJj8k7ptJC0EQ2EQ=
Subject: Re: [PATCH] tools/xenstored: Correctly read the requests header from
 the stream
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210625064522.24919-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <8ae5d765-e5a8-26ef-e2d2-8ada29661aa6@suse.com>
Date: Fri, 25 Jun 2021 08:58:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210625064522.24919-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="da9QAHex7NJYwaTTsRuQOFz1y26KPxfDM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--da9QAHex7NJYwaTTsRuQOFz1y26KPxfDM
Content-Type: multipart/mixed; boundary="2LVeh3OeKa7MrbgRGt2To6VDHBlpEuWlA";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <8ae5d765-e5a8-26ef-e2d2-8ada29661aa6@suse.com>
Subject: Re: [PATCH] tools/xenstored: Correctly read the requests header from
 the stream
References: <20210625064522.24919-1-julien@xen.org>
In-Reply-To: <20210625064522.24919-1-julien@xen.org>

--2LVeh3OeKa7MrbgRGt2To6VDHBlpEuWlA
Content-Type: multipart/mixed;
 boundary="------------F42F5C59AF9ED421EFED0229"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F42F5C59AF9ED421EFED0229
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.06.21 08:45, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Commit c0fe360f42 ("tools/xenstored: Extend restore code to handle
> multiple input buffer") extended read_buffered_state() to support
> multiple input buffers. Unfortunately, the commit didn't go far
> enough and still used sc->data (the start of the buffers) as the
> source for retrieving the header. This would lead to reading the
> wrong header for the second and follow-up commands.
>
> Use data in place of sc->data as the source of the memcpy()s.
>
> Fixes: c0fe360f42 ("tools/xenstored: Extend restore code to handle multiple input buffer")
> Reported-by: Raphael Ning <raphning@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
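
The failure mode described in the commit message can be illustrated with a
minimal sketch. The struct and field names below are illustrative only (they
are modeled on the description, not on the actual xenstored restore code):
each saved command is a fixed-size header followed by its payload, packed
back-to-back in one blob, and each header must be copied from the current
cursor ("data"), not from the start of the blob ("sc->data") -- the latter
would re-read the first command's header for every subsequent command.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative record header: length of the payload that follows. */
struct rec_hdr {
    uint32_t len;
};

/* Illustrative saved-state blob: several records packed back-to-back. */
struct saved_cmds {
    const uint8_t *data;  /* start of the blob */
    size_t size;          /* total size of the blob */
};

/*
 * Walk every record in the blob, reading each header from the current
 * cursor.  Using sc->data instead of data as the memcpy() source would
 * always yield the first record's header, so every follow-up record
 * would be parsed with the wrong length -- the bug the patch fixes.
 */
static size_t count_records(const struct saved_cmds *sc)
{
    const uint8_t *data = sc->data;
    const uint8_t *end = sc->data + sc->size;
    size_t n = 0;

    while (data + sizeof(struct rec_hdr) <= end) {
        struct rec_hdr hdr;

        memcpy(&hdr, data, sizeof(hdr));  /* correct: data, not sc->data */
        data += sizeof(hdr) + hdr.len;
        if (data > end)
            break;
        n++;
    }
    return n;
}
```

With two packed records (payloads of 3 and 5 bytes), the cursor-based walk
finds both; a walk that sourced every header from the blob start would
instead apply the first record's length everywhere.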

--------------F42F5C59AF9ED421EFED0229
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F42F5C59AF9ED421EFED0229--

--2LVeh3OeKa7MrbgRGt2To6VDHBlpEuWlA--

--da9QAHex7NJYwaTTsRuQOFz1y26KPxfDM
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmDVfogFAwAAAAAACgkQsN6d1ii/Ey8K
fwf/WiAP8eNpz2p6K4JEyl+jpLe/MveFwJKxvr52PyTxdM3LYCylBbZ8DKGUEktUem+ROpRuYspH
5XzgqzwNeBycXInBQ5tu90bsWptm24h2Sgmh5Gm7P3mbSpiikuuQSbHDkWAwSQHjnIDg60N6xHjx
eFdFh4pbkHzwmdy+R47Q1wmoyPZPeHUCUUpvnC7+hCEjFSw6f+KWvwnUxrc2f+jteSQBLQN2faNP
3pWqPTO29HkQuE3xUu0BwzwFFdg64yMHgwnE4coSuCW1ylbhTuptPMtja9XK5kCQCuxrTfChWx+I
GT6cdE2ExD9i/VipW+tgZ50ZHcwzS29KWUv8+PRaKQ==
=zopA
-----END PGP SIGNATURE-----

--da9QAHex7NJYwaTTsRuQOFz1y26KPxfDM--


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 07:29:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 07:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147064.270820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwgGi-0008IJ-Eu; Fri, 25 Jun 2021 07:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147064.270820; Fri, 25 Jun 2021 07:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwgGi-0008IC-BF; Fri, 25 Jun 2021 07:28:52 +0000
Received: by outflank-mailman (input) for mailman id 147064;
 Fri, 25 Jun 2021 07:28:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwgGg-0008I2-Rh; Fri, 25 Jun 2021 07:28:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwgGg-00050X-Kg; Fri, 25 Jun 2021 07:28:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwgGg-0003gW-AP; Fri, 25 Jun 2021 07:28:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwgGg-0005wc-9s; Fri, 25 Jun 2021 07:28:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=70CYDybVNKwQ01FEjfTp38jA5CCw4hCYICqvK9R5k8E=; b=yoT2FvJmbQTIG6ATWmeQp44VlI
	jhFuBWmefZ+n2pnK+mF9kIs49GGhoX6xksCVTnxW/9iAQMclLqGMFM+UcS18E4bfK6opIdAUgk+DO
	zqktg5SMNBL5ds+s4kYeY9IXc50b8s1vyRjNVtXflc8fy6P8cYB3JmvF5pIam6E0u5oU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163022-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163022: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=19a541d70e0748af69d3b09d55a1415762c8d749
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 07:28:50 +0000

flight 163022 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 19a541d70e0748af69d3b09d55a1415762c8d749
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   21 days
Failing since        162368  2021-06-04 15:42:59 Z   20 days   47 attempts
Testing same since   163018  2021-06-24 13:41:06 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2602 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 08:30:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 08:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147072.270833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwhE6-0006od-4c; Fri, 25 Jun 2021 08:30:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147072.270833; Fri, 25 Jun 2021 08:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwhE6-0006oW-1C; Fri, 25 Jun 2021 08:30:14 +0000
Received: by outflank-mailman (input) for mailman id 147072;
 Fri, 25 Jun 2021 08:30:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwhE5-0006oM-9C; Fri, 25 Jun 2021 08:30:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwhE4-0006We-VY; Fri, 25 Jun 2021 08:30:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwhE4-0006z2-NY; Fri, 25 Jun 2021 08:30:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwhE4-0003Au-N6; Fri, 25 Jun 2021 08:30:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ilRDT8Q668NRppPzxbtE+wU9S3BT4ys0fSJcd9KLRNI=; b=0omfiLXcFE6cSdGrJ7vhdE3fQq
	dXYWOdLUzPOKEzJsMj8xj2FR8sA4lGUn0502QJ21MTd2DPa2wHQ5rtcFMuHx0nkDxFN/9FEWLP5+r
	hf7uAwyL2y9IxutVt/Z0C1aNScR0hfqnQn4KCYCxyVMj9QTsPaTP3U+Z/yKMIiuDsOYs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163026-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163026: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=64ae7635e642bed571c45feb2b388719c7bf0b2a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 08:30:12 +0000

flight 163026 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163026/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              64ae7635e642bed571c45feb2b388719c7bf0b2a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  350 days
Failing since        151818  2020-07-11 04:18:52 Z  349 days  341 attempts
Testing same since   163026  2021-06-25 04:19:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63341 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 09:17:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 09:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147082.270854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwhxt-0002W4-QG; Fri, 25 Jun 2021 09:17:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147082.270854; Fri, 25 Jun 2021 09:17:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwhxt-0002Vx-Mx; Fri, 25 Jun 2021 09:17:33 +0000
Received: by outflank-mailman (input) for mailman id 147082;
 Fri, 25 Jun 2021 09:17:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwhxr-0002Vr-Q2
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 09:17:32 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50d6b6d2-5ce3-47bb-8fc8-6232fb555e9e;
 Fri, 25 Jun 2021 09:17:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50d6b6d2-5ce3-47bb-8fc8-6232fb555e9e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624612650;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=mJ8eqNyNs7/dCmR6PTTBtcxOGRd9HDC6bMxJaQ+V+NY=;
  b=UZjf7a0YLDycx2+mZwmYwfmJ+qNM8UgRX/FpnQ3GHTiT3BRux0WnunXH
   lJ/DpD/vKJNduZD1iLLo1in8BTm1L0mfwxCXhKC1ES4AgxPZUaMS1t9O1
   cn59CE1Wg1isPeqhrGVoR7i0MhiGUJ9/wvbWyXq21yNuVyGYL4AOFJsyk
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: HcR1qPUHpQdg3GnC92ivq3D95T5jgt/+zUAs2U31INP96c1m0DW7tHiPWiCP9J+kVcVRS4DFdH
 aC140Xrfvjf9JjLpXOlR6qMiu6Z/ZkZa1ecG3WBK2HPhIg288J/TuoWhdp7vM/xU0R+mckSESe
 H5jC1c3gv+S4nKhyATwO6os3eeU3g6siz3SQdr0jrcKhoC9CCfHoMxv/PA9gINuRlLv+LhX454
 rqpupb8BAF7XdcCEF6s53M0ss7lVE8A1MV0C4tYdBvYZjLU+Me0QmnI2d8NkbsOtP//C3BAbHH
 5a4=
X-SBRS: 5.1
X-MesageID: 48567710
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:aH9nWqGBiMbYMGSIpLqFA5HXdLJyesId70hD6qkvc3Nom52j+/
 xGws536faVslcssHFJo6HkBEDyewKiyXcT2/hsAV7CZniahILMFu9fBOTZskXd8kHFh4lgPO
 JbAtJD4b7LfCtHZKTBkXCF+r8bqbHtmsDY5paq854ud3APV0gJ1XYINu/xKDwReOApP+taKH
 PR3Ls9m9L2Ek5nH/hTS0N1E9TrlpnurtbLcBQGDxko5E2nii6p0qfzF1y90g0FWz1C7L8++S
 yd+jaJqJmLgrWe8FvxxmXT55NZlJ/IzcZCPtWFjowwJi/3ggilSYx9U/mpvSwzosuo9FE2+e
 O87CsIDoBW0Tf8b2u1qRzi103LyzA18ULvzleenD/KvdH5bChSMbsDuatpNj/ir2YwttB116
 xGm0iDsYBMMB/GlCPho/DVShBRkFauq3ZKq59Ss5Vma/paVFZtl/1awKsMe61wWx4SqbpXUd
 WGNfuspsq/KjihHjbkVgAF+q3fYpwxdi32CXTq9PbligS+p0oJuHfw8vZv1kvoxKhNP6Ws2N
 60RJiAtIs+BPP+PpgNSdvof6OMeyXwqEX3QRyvyBLcZfk6B04=
X-IronPort-AV: E=Sophos;i="5.83,298,1616472000"; 
   d="scan'208";a="48567710"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H5ir3iIPv9nJONYzxw2ApDJlyLYx3bu9H83xnMru1cF3sIwi/W/h9ONjdkpr+G+/3xNN+8SRY7BFaeToPqC1HX5+rI4lXDEeBd42fvDwuJQR1hK/sbM4p8ZpJnaHmdpIQrKRRHXMx5akassg37u4li6jNykRii50Fw+Ic7kOrL6D0C0LjDq2i0HYIxAxzZh4nMlblW4piT6kNrQPjb0hi3S89V9LAJS3QVZQ639tIVBg7Hauh1oikTiXJiWkSMP7FHKZodEjRpusFtcAN1Q+34Y54/KGDZhDgucfIkYIAy4wVy0ja4ynfcjYY3YMxQLquG+W6Ig+oUZb5YkfXEa9vA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mJ8eqNyNs7/dCmR6PTTBtcxOGRd9HDC6bMxJaQ+V+NY=;
 b=KOqGk/x+SBRd0xsIzUbwvK6bGuWJGPhykns26sswXTlyOeABl6OFNU1TLXVJICWBXYz3LfWyWX0NmxeVSjeR+u0rPmZUZE7VbSJB3X6ac8lDuTuJsGtvbyoXtWGdDOuJB3qb56cz+RxlZbHfiDPCTuam4hxP32dnRA55VCZUE1MTN2oWxRjGD7DAdwuR9o5hrTdJh87DL9Dq9z2Bo0a/engLlRB9Xhz9JU8zCLdXU2PO+3CUyhrbsvVc4rqjuBJq3SouGIorncttRY6pTnIylfhUMr56uXSIP//2rWDXX6XvseK1oXLPnoRiFsPZEHeEJM6F+EbjH1ufiQff1TSHTQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mJ8eqNyNs7/dCmR6PTTBtcxOGRd9HDC6bMxJaQ+V+NY=;
 b=YLRBvqLUuLlXt9DQrBsMrjxSgwbS3EaGDcF5ndc5wfwHu9ysFkeqVkI72Hwj0T2KzmDiIbOz/jDcwSAYf3ooVkBiZPA61/3iiGagper4TcFsxsHZ3zqxqhVC57mjdgNu8Uj6G1KXJpvzMLYg/n74Ych/9lwNwuCS20weeKKRxVw=
To: Jan Beulich <jbeulich@suse.com>, Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
 <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
Message-ID: <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
Date: Fri, 25 Jun 2021 10:17:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0103.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2c9b134b-4c9b-4782-47fd-08d937ba0c5f
X-MS-TrafficTypeDiagnostic: BY5PR03MB5282:
X-Microsoft-Antispam-PRVS: <BY5PR03MB5282A2B31B393E87E9256733BA069@BY5PR03MB5282.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:590;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: byloKamuNwWimdmo7IwCMYlrOPX+xUkbjsj6eL+s2QSNkly+wj6N+vkdlUIU1O0la2oKGwSmAr8P93XDkUf+JMzHbKQ7dfuC7HAXMxOiHtg6e3FQp2b6FhEs3VWpRU79pZipEL2pTeF2esC6hx1vvN5wcfL0btimfwrkn0ysKFgSoCaCCLbizkWEcgXB5QfqPsOWYRk1dp1jOKt6XpSk8SuCWFbKs3tErbOrzAGdibMjnWQKZX4Cq7c0tgU4pRIKd/ayzcY/xZjasSW/lJTSTINq6Sbtx9R3uOVcFPHWEgz7xgAFewveG3DlkYv63LgiLK+jP7r4QdOMbDafF424U0Isd8dJyBQ8xPAb4k3gPTTu7Z4dQvRP4THjpoABbW+x49oDkibhp5irOBGLFGYRGDQyQ/wl68dlp3BQAuOoAv2iLjAthLhZYTbV6Fk9taImyF3ZrGnO06wNex6/A5AuD2pJutznPPo3BfO36mxnTV3dESOwGBAxnZGrpMtVXr2iTj03zSWj4TvlhVzQZ7F4JzMIYQzTmu3xv1RZWdwFF6QEMIixG3tcFoU3MeWcF20kmPw0C19e6iEmMvkyU/9Hbi/cAjGCMsu5GSGbbTI1HRIDOaeegFUdi5m0h+7iZCkkTQaiLCzeDhp6K2lZJXZMs2UDl+ZC5QpTGabU4EhXNAP4gVfjB90dn9xWoyTafIT2
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(396003)(136003)(346002)(39860400002)(53546011)(956004)(478600001)(6486002)(16576012)(316002)(31686004)(54906003)(110136005)(16526019)(38100700002)(186003)(86362001)(8936002)(26005)(8676002)(2616005)(31696002)(4744005)(66946007)(4326008)(6666004)(36756003)(5660300002)(2906002)(66476007)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?dlhmTkNWa2duSWFWOVlDZFhibDVFU0J1SVhvYS82S2dGSkU2b0piK3dWRlRE?=
 =?utf-8?B?djFnTm52UUlhYWM2RU95clg2SDd3cnNVczBWT1VTTytyQU5iNVUvRXo3S2I2?=
 =?utf-8?B?bUtpSjZuVmRZc3RFZ25Sc0hQOGtVZjV6eXZkMC91aVVUQmtSbHl3bzNiK0dP?=
 =?utf-8?B?OHBYdUpiUTFybEQxZ01CMFNVY0EwazVpbEptaXdBYlR6dmVRVU15ZWVGWVEx?=
 =?utf-8?B?THlsTE4rMVVpYXZTcC9XZUoxbnE3UzFMTkQ3S2NTZ2F1NnkrSlNHSkdpUm94?=
 =?utf-8?B?eHVjNVhrenhDTlVlZlFPek5QZkRvdVFPK2RycFRNQUZDeVZ6R1IwaFpHdmtN?=
 =?utf-8?B?NncxMFVFNVZvS28rZ1JiU04zZ2hWMDExejFOZUVQWUgrZWZueGhxcmVLcFA0?=
 =?utf-8?B?a1V2LzBjRUh0bmhBcnpsZFVBYnJ0TVhheWY1ZU5JUnNGekVXM1FhS0hVZ0xC?=
 =?utf-8?B?MnpIVE1lRW5RUWx0R2JJb21mRk1CZnphbzVqYzltOUNpWEp1MU9jZURNd3o4?=
 =?utf-8?B?K2duQm9QRFZ6NGE5d1N3U3BreDdZVkthLzBQZG9pbTZBWkpvN0VHYlhvK2Jz?=
 =?utf-8?B?TWlNUmowS2dMUkFYS3V4SkZKRkE4K3pIcEhpby93bFpCWUxtb0tTajBpc3J5?=
 =?utf-8?B?aUorWjJFZjFtTnV3Rk5zOHFSYmNSOFZLVm13OUhFTlVtWlVrdFM0cnUvNXFv?=
 =?utf-8?B?c0dNRWE3YnozeWRRVVJRWlZpZ0xwY2FjQi93bkt1NVVhVjh1WEl2dUdkcSta?=
 =?utf-8?B?emNReXNhMlJDdHFXcjhkYjhodFlBTmtueUwvbE1ZN1hjU0xVdkpRUnRjWnZX?=
 =?utf-8?B?cWNwZnh2VTR4Nk5xRUJ6amcwMnhUUXlGTmVuTEdLZDVmeEZlelhzWUhpZm5o?=
 =?utf-8?B?RDNuT1dSQldwQVFUcW1tTVhyNUM5cU9YWmVJZHpzS0JmcXZObkhJYzdaelhk?=
 =?utf-8?B?SzN5WDQ0VFFHdXBacFhTaTRiYVhETXI1Mkd3MHFCdGI3d2xLVmxTYnB5UXZZ?=
 =?utf-8?B?T2ZQMGdHWUFyLzRmTnRNMlFxWnpZZC9BcnNRYlpjdWt3ak1pSkZQQTAxanVt?=
 =?utf-8?B?c002cG14WXdtcUQ2QTY3Z2ZrOUhmbldVNS9wY2cvYklQYzVmdzBXc3ZINml5?=
 =?utf-8?B?K2h2S3RGUjhRbzdDdkptK2VxSWJHa0ZEbkpaa3d3SU9xTEsra3cxNmdVN1pS?=
 =?utf-8?B?ZGRSL0oyQ1RITldNY2xSaHI3MUs3bVRpN3h5Z3BjcDNRTmx3cDd3QnkwRmtB?=
 =?utf-8?B?OVJ0dHl6ZlV1MlExS0dXd05DWTMxWkRZTCs1YldhdSs0NjVON2xid1pqdVJV?=
 =?utf-8?B?SDEzZlNLMjFxUURCVVFON2Q5UmJ6bWNoeWNySzZqK0psaGhYcjZxNGRYQVlX?=
 =?utf-8?B?c2NRM3oxMlpwbktud2hnVXlOcENFUk9jRkRlaUdGYmlwdjdTWlBqMlhyUkZW?=
 =?utf-8?B?VEloYzQzenprekhSQUJKVFFPZllTcWVwSC9WR2hvdEZUZ3NEamlXdk5CRVhR?=
 =?utf-8?B?bUczNnFCTzlCQzMyOVdZbFNBYllGUjB0aFJMZ0dSbENNNGtWbkltRXhqMlZ1?=
 =?utf-8?B?WjBQRWFPVnV2R3ErbHNyVU8yTzVaK01aakpxbzFROHduc3MyTnVnM1Q2eHpQ?=
 =?utf-8?B?cVl0QnpCVUFWWHVqRDZ3VTN1LzIrWjRtemdJM000OEhlaHpzUVJuTWJPRlB0?=
 =?utf-8?B?dlo5YkRTMjJIb0F1VlM3VHRVazJ1ak1uY1JmMFRCelIxbkVPNmRNUUJwL012?=
 =?utf-8?Q?iEWxAC/OCaCM8f9de918zWP6hNgJ/Z+NxbfhiOQ?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 2c9b134b-4c9b-4782-47fd-08d937ba0c5f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 09:17:26.1820
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9grUn4vSC0vYmShdnJkW+ykALjEQ5+uQSefRWggL0+RHcVIa82l1twJI6fLFJm4rMRRIMTlJNma7d5LuC/FhTDsTqJYqG1Xx20TG3+ah7UQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5282
X-OriginatorOrg: citrix.com

On 25/06/2021 07:31, Jan Beulich wrote:
> On 24.06.2021 19:55, Andrew Cooper wrote:
>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
> Is this strictly necessary, i.e. is a Fixes: tag here warranted?

Yes - very much so.

andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
    33: 0000000000001496    59 FUNC    GLOBAL DEFAULT   13 xencall2L@@VERS_1.3
    39: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
    76: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
  020:   4 (VERS_1.2)      5 (VERS_1.3)      2 (VERS_1.0)      3 (VERS_1.1)
  024:   3 (VERS_1.1)      2 (VERS_1.0)      4 (VERS_1.2)      5 (VERS_1.3)
  0x0080: Rev: 1  Flags: none  Index: 5  Cnt: 2  Name: VERS_1.3

Without this, you create a library called .so.1.2 with 1.3's ABI in.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 09:43:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 09:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147087.270864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwiMr-0005ep-1Y; Fri, 25 Jun 2021 09:43:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147087.270864; Fri, 25 Jun 2021 09:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwiMq-0005ei-Um; Fri, 25 Jun 2021 09:43:20 +0000
Received: by outflank-mailman (input) for mailman id 147087;
 Fri, 25 Jun 2021 09:43:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwiMp-0005ec-MP
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 09:43:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwiMo-0007hf-7t; Fri, 25 Jun 2021 09:43:18 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwiMl-0002dV-Oi; Fri, 25 Jun 2021 09:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mSEBjiVaiQFjxMTj7LDSpISaPw+OdYsQprFIQFHf2/k=; b=Zb8KFsfvCysZJcbpoD5Da+sZQT
	vXhpi9V/LX9L0nzsacQzfTkRDj+5mq03/kXXeqlJ1Mjh5SMixKX+oaOav/I4nONJgDHkBF3pgsf4T
	KqB9k9NtQ5EHzCQ2kgcYpRHd+Y3tYeHnqYVTw8y5lZacwhDlVHNTjg2sm6pjxua4NSg4=;
Subject: Re: [PATCH] tools/xenstored: Correctly read the requests header from
 the stream
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, doebel@amazon.de, Julien Grall
 <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210625064522.24919-1-julien@xen.org>
 <8ae5d765-e5a8-26ef-e2d2-8ada29661aa6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e42901d4-1aa9-b46b-2ff0-ab0b5f405baa@xen.org>
Date: Fri, 25 Jun 2021 11:43:00 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <8ae5d765-e5a8-26ef-e2d2-8ada29661aa6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 25/06/2021 08:58, Juergen Gross wrote:
> On 25.06.21 08:45, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit c0fe360f42 ("tools/xenstored: Extend restore code to handle
>> multiple input buffer") extended read_buffered_state() to support
>> multiple input buffers. Unfortunately, the commit didn't go far
>> enough and still used sc->data (the start of the buffers) when
>> retrieving the header. This would lead to reading the wrong header
>> for the second and follow-up commands.
>>
>> Use data in place of sc->data as the source of the memcpy()s.
>>
>> Fixes: c0fe360f42 ("tools/xenstored: Extend restore code to handle 
>> multiple input buffer")
>> Reported-by: Raphael Ning <raphning@amazon.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thank you! I have committed the patch.

Cheers,

> 
> 
> Juergen

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 09:43:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 09:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147090.270876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwiNN-0006Ag-AU; Fri, 25 Jun 2021 09:43:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147090.270876; Fri, 25 Jun 2021 09:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwiNN-0006AZ-7J; Fri, 25 Jun 2021 09:43:53 +0000
Received: by outflank-mailman (input) for mailman id 147090;
 Fri, 25 Jun 2021 09:43:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwiNM-0006AJ-5X; Fri, 25 Jun 2021 09:43:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwiNL-0007iH-Vc; Fri, 25 Jun 2021 09:43:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwiNL-0001x2-IE; Fri, 25 Jun 2021 09:43:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwiNL-0000vy-Hh; Fri, 25 Jun 2021 09:43:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xE65rvLnhsNXQt5sy9iS16bZQpNSIbx6FjRILzg946g=; b=EwcZDoDCOdUiPATMCUxcP7ioq5
	Qgy60vW8AU2VcyrbWSY+UayXLJVvlrmp0VE2VBPf/TNXSSsN+0GxBRG+8x52iudXRpdzHAldV7NuL
	H6EhVNiPCkolOPAMuqCnyXUOxwKiIfZLMOP2pkyGDnWmCsRuvdyFgilVqG9glPvkYZSE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163021-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163021: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
X-Osstest-Versions-That:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 09:43:51 +0000

flight 163021 xen-unstable real [real]
flight 163030 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163021/
http://logs.test-lab.xenproject.org/osstest/logs/163030/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 163014

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163014
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163014
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163014
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163014
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163014
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163014
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163014
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163014
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163014
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163014
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163014
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
baseline version:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7

Last test of basis   163014  2021-06-24 10:22:27 Z    0 days
Testing same since   163021  2021-06-24 21:09:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 509 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 09:58:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 09:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147096.270894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwib3-0007nw-L3; Fri, 25 Jun 2021 09:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147096.270894; Fri, 25 Jun 2021 09:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwib3-0007np-Hd; Fri, 25 Jun 2021 09:58:01 +0000
Received: by outflank-mailman (input) for mailman id 147096;
 Fri, 25 Jun 2021 09:58:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwib2-0007nj-BF
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 09:58:00 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74a26779-c716-4862-9cfd-d1c5bd7505e3;
 Fri, 25 Jun 2021 09:57:59 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2056.outbound.protection.outlook.com [104.47.9.56]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-VsPpbuswNIysK363DAqDrg-1; Fri, 25 Jun 2021 11:57:57 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5742.eurprd04.prod.outlook.com (2603:10a6:803:e5::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Fri, 25 Jun
 2021 09:57:56 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 09:57:56 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0221.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1e::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Fri, 25 Jun 2021 09:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74a26779-c716-4862-9cfd-d1c5bd7505e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624615078;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bRJJj4XGivBfb1/vwcwREcwzGQa1OaB8VE1V/tqOHQY=;
	b=J3AvEElBlnOpPajIXuOPmjYeEQibDPFdYsd0mS5T8M4Zf/qoYevQ6q2hzviZBIAfEzpUry
	qu/K5sH3t8UPRsAF6BwRLX5Iu99adDrooVi4efD3vcDIo7uWTQ4sDrdr73TFeqtJhq18u5
	x1QbzmbaGUQXvjoPMEnjVFrXrSNOUJ0=
X-MC-Unique: VsPpbuswNIysK363DAqDrg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iiTWmbzUjXW+PEi8C0svoJEmkkyFWeHW/na9sTjeIgSyOwizIsVJiB2IM60zCDG9xxaagfW3TEnbOKvocX+oTrdj8t2odhLNBrVyy0gXQHB/rAv5r//v1zZjeBkd7Tsd249jwW839jXkCaZG6SQM8yDZr002gtrHMlrgQhvR6ONkJRf3iFPAA7To72wQyKW+dbZaa9fwlortprJaTfJzkNe6VqyZUQASiHMUGwdrCEMDDatAed13Nj/t/yaqBi41WlOumttUsiiDKZ+5p93SiBcgsYU1Td+aldYX6xHq3iNWIUWAyyuZRZVPUFjDt8a/VbwSJkmvxIXy0LANH82tzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FoZWxPl+s3IRdc0xrlhnY9/hEXENm2vZcv5qvv0CqRw=;
 b=bnb2PwSKGAc2RvVqhOTdY2946f4QjINdB9e0MfjlioAwGZ/tP8SGQx6nVKIdA/jvp6j1Dt+359azauEOSfAKwUCXVyZ7mFrfzMIpS0vr1kMSal941+/448n+tmjFJ+sW21m2HFlmFEzLLqbMzSAiKdqn/c1sj1yfBohR4Af/sOG9xHfGil0VuMpFE7NVKPP3P8e1lDlFuCSoprSZnS2s2FWZz2NY5fxZBN6hq8+k3NrAnSwk1hL6Iz2bcX896jS3h4chxO2IrPLDfh+YhnXHeLTQqii6OsHzo/XMxiQswjK7yW9nld8RoNc9mgh/w9gfw2KA/zJ69syDn8D9OLcsDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
 <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
 <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
Date: Fri, 25 Jun 2021 11:57:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0221.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4d9c37d8-4ab7-4498-00b3-08d937bfb49d
X-MS-TrafficTypeDiagnostic: VI1PR04MB5742:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5742AB0B540DC244A1D0202CB3069@VI1PR04MB5742.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d9c37d8-4ab7-4498-00b3-08d937bfb49d
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 09:57:56.0004
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JauSLZNFWUYCX6MPCEeKl88EACsi5aBqo/OEba6xEIxc71+i7wfBAYh6yXOwaxTgb99kAlQKTAhdJQCz1m3cXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5742

On 25.06.2021 11:17, Andrew Cooper wrote:
> On 25/06/2021 07:31, Jan Beulich wrote:
>> On 24.06.2021 19:55, Andrew Cooper wrote:
>>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
>> Is this strictly necessary, i.e. is a Fixes: tag here warranted?
> 
> Yes - very much so.
> 
> andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa
> ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
>     33: 0000000000001496    59 FUNC    GLOBAL DEFAULT   13 xencall2L@@VERS_1.3
>     39: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>     76: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>   020:   4 (VERS_1.2)      5 (VERS_1.3)      2 (VERS_1.0)      3 (VERS_1.1)
>   024:   3 (VERS_1.1)      2 (VERS_1.0)      4 (VERS_1.2)      5 (VERS_1.3)
>   0x0080: Rev: 1  Flags: none  Index: 5  Cnt: 2  Name: VERS_1.3
> 
> Without this, you create a library called .so.1.2 with 1.3's ABI in.

I'm aware of the change to file contents as well as the disagreement
of file name / SONAME vs enumerated versions. So telling me this is
not really an answer to my question. It may be by convention that
the two should match up, but I don't see any functional issue (yet)
if they don't. Plus of course you leave open altogether the
backporting aspect of my question.

Jan
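
The mismatch Andrew's readelf output demonstrates (a file named .so.1.2 that defines VERS_1.3 symbols) can be sketched as a small consistency check. This is only an illustration; `abi_mismatch` and the version lists are made-up names, not anything in the Xen tree or build system:

```python
import re

def abi_mismatch(filename, version_defs):
    """Return True when the highest VERS_x.y symbol version defined in a
    library exceeds the major.minor encoded in its file name."""
    m = re.search(r"\.so\.(\d+)\.(\d+)$", filename)
    file_ver = (int(m.group(1)), int(m.group(2)))
    # Parse "VERS_1.3" -> (1, 3) and take the highest version defined.
    max_ver = max(
        tuple(int(part) for part in v.split("_", 1)[1].split("."))
        for v in version_defs
    )
    return max_ver > file_ver

# The situation shown above: a .so.1.2 file that carries the 1.3 ABI.
print(abi_mismatch("libxencall.so.1.2",
                   ["VERS_1.0", "VERS_1.1", "VERS_1.2", "VERS_1.3"]))
```

With the proposed patch applied the file becomes .so.1.3 and the check passes.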



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 10:59:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 10:59:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147110.270926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwjYi-0005F8-Rt; Fri, 25 Jun 2021 10:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147110.270926; Fri, 25 Jun 2021 10:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwjYi-0005F1-Ox; Fri, 25 Jun 2021 10:59:40 +0000
Received: by outflank-mailman (input) for mailman id 147110;
 Fri, 25 Jun 2021 10:59:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lwjYh-0005Ev-QG
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 10:59:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lwjYh-0000bw-PS
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 10:59:39 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lwjYh-00009J-Oe
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 10:59:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lwjYb-0003ja-Bt; Fri, 25 Jun 2021 11:59:33 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=6Z2Z4kabWMAKspZ0lr5ZblTSxmGY+JwvsFhRSW82OdA=; b=UVjwBGyl0X6esdrmC3FZycrWoo
	PJxS/MOO30bMzpCjD3U2dYoxtOs4Ni0ViOWRuCnf0ZeVtGco8svakjtCi8fWcDsegwOz197wBMqNa
	4tUJMUEtqWRxWeJtgTNNnqoB/11peiUk6b03c29ghPusa19+ZDRIoeb/fPXTb0sU6QFw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24789.46868.943149.770825@mariner.uk.xensource.com>
Date: Fri, 25 Jun 2021 11:59:32 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
In-Reply-To: <4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
	<6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
	<9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
	<4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH] libxencall: Bump SONAME following new functionality"):
> On 25.06.2021 11:17, Andrew Cooper wrote:
> > On 25/06/2021 07:31, Jan Beulich wrote:
> >> On 24.06.2021 19:55, Andrew Cooper wrote:
> >>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
> >> Is this strictly necessary, i.e. is a Fixes: tag here warranted?
> > 
> > Yes - very much so.
> > 
> > andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa
> > ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
> >  33: 0000000000001496 59 FUNC GLOBAL DEFAULT 13
> > xencall2L@@VERS_1.3
> >  39: 0000000000000000 0 OBJECT GLOBAL DEFAULT ABS VERS_1.3
> >  76: 0000000000000000 0 OBJECT GLOBAL DEFAULT ABS VERS_1.3
> >  020: 4 (VERS_1.2) 5 (VERS_1.3) 2 (VERS_1.0) 3
> > (VERS_1.1)
> >  024: 3 (VERS_1.1) 2 (VERS_1.0) 4 (VERS_1.2) 5
> > (VERS_1.3)
> >  0x0080: Rev: 1 Flags: none Index: 5 Cnt: 2 Name: VERS_1.3
> > 
> > Without this, you create a library called .so.1.2 with 1.3's ABI in.
> 
> I'm aware of the change to file contents as well as the disagreement
> of file name / SONAME vs enumerated versions. So telling me this is
> not really an answer to my question. It may be by convention that
> the two should match up, but I don't see any functional issue (yet)
> if they don't. Plus of course you leave open altogether the
> backporting aspect of my question.

The patch, including the Fixes tag,

Reviewed-by: Ian Jackson <iwj@xenproject.org>

Changing the minor version in the filename as well as in the .so is
not an impediment to backporting.  The actual soname remains the same,
so there is no compatibility problem and the change is still suitable
for inclusion in e.g. stable distro releases.

Not changing the filename would be quite strange.  I haven't thought
through all of the implications, but I'm sure it will confuse people,
and it seems likely to confuse at least some computer programs that
handle this kind of thing.

Ian.
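
The compatibility point Ian makes (consumers bind to the SONAME, libxencall.so.1, while the minor version lives only in the installed file name) can be sketched as follows; `resolve` is a hypothetical stand-in for the ldconfig/symlink machinery, not a real API:

```python
def resolve(soname, installed_files):
    """Pick the real file a SONAME symlink would point at: the
    highest-versioned file whose name extends the SONAME."""
    candidates = [f for f in installed_files if f.startswith(soname + ".")]
    return max(candidates,
               key=lambda f: [int(p) for p in f.split(".so.")[1].split(".")])

# Binaries record only the SONAME, so renaming the installed file from
# .so.1.2 to .so.1.3 changes which file the symlink targets, nothing more.
print(resolve("libxencall.so.1", ["libxencall.so.1.2", "libxencall.so.1.3"]))
```

Since lookup is keyed on the unchanged SONAME, existing binaries keep working after the rename, which is why the change backports cleanly.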


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 11:37:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 11:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147124.270966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwk94-0000qy-Cj; Fri, 25 Jun 2021 11:37:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147124.270966; Fri, 25 Jun 2021 11:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwk94-0000qr-9g; Fri, 25 Jun 2021 11:37:14 +0000
Received: by outflank-mailman (input) for mailman id 147124;
 Fri, 25 Jun 2021 11:37:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwk92-0000ql-P7
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 11:37:12 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c959e347-9704-4013-9ac3-aac23a2b0688;
 Fri, 25 Jun 2021 11:37:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c959e347-9704-4013-9ac3-aac23a2b0688
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624621030;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=F7jwxuSqjY9P9d1+WnxEdtyQype07kuO0mAqd2otHt4=;
  b=MoqJ0pqREreHJUAnHqPZcs0alRQmq/6SXtSZB/KUYqt4PHVmgDJy32tE
   R5W5xlovofvlFtgmForl/csLuYkJumo21t7ucQZI+Za6xFitfk9hfrB5P
   oxZseWFwI9goMXGxX5ki/kTax5A9WKfukNlkaqZGvSj2P9kVfFzFwpEZo
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 48575470
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,298,1616472000"; 
   d="scan'208";a="48575470"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Za/kN/S8F17zJnRXLijtDqYtxKbTPjBVVDBMhBg7laFg+e9erh9Y0+DbxfTOhtzajkG7nVuKxm8RJuZ0+Y1yPjP6rgP4JDjZKN+rGdW/s7NAedXwozxqoam8usPGqk3AurY1gfN3qsYa04E0E8/Z35iCcmP0LtQYdpJ3x6ByhYEnZVjaHlPv0Cw9EYqKNROlTYVCLPQ5AW8uwimU+YBYQ4Eq12DuZmF0m5fcJiI3MCWFTz4iYhVO98nO0SeLyKhPeNKe4Va4ux7QaElaP/bO+mbI/1MgZdwABXOKY1JiBcmX1YCeSPdNiQcP9YNAI51ZICHRC2nNYNfBr//GEgMd4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=in9Lg7Ky5DALGXZNGc9gusfBYFGjuGqkZQqLp4Nv2kA=;
 b=FD39GX4Mz9fPWuBT4kAHlRyz4A7BG535Ish2DkZyFnhFXofFtKvZJyuuklUcpM4YU+fK/fMjty8EQQfyN20ggayVN66degh+bshnpcwror2plCVJ9Z8qk7Yn7hOAkKqm32smyMm30EIV/SkFoBNTA4tDtsldkEziqRTK27EGM7enqd8xAmzsFZqDnBJGyYYpJK/g1exYl9T378dfn4IIZNfCeIaK0lCOUaXocjE6e5tQFB6Ko8HDxY/hgW3GCpWEWK8JnlS1Tc+XZfq27FB5q/NmKBAODA1w2M9tAorbKS2M2rzWpTiAfMDYHkmryDYZeIJCTdq5oYhRoxeT+/kseA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=in9Lg7Ky5DALGXZNGc9gusfBYFGjuGqkZQqLp4Nv2kA=;
 b=bvD5Y6L1LpWlKAW3roDNo33lTKfG1atpqfJOPLD2HNfZiEUXJHX5TMlTYeVf5+0vxlbMhlzjFQkVcPyUhk6OLWZYIV/9gaIIDAo6GOl36zL/O6Cgil7e+a5ckg8lfl7AcgOwlSeZ3fS0BLAvgDzW4FMEtUL3lcJS5zwn8fOXcpw=
To: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
 <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
 <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
 <4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
 <24789.46868.943149.770825@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
Message-ID: <3fc8f75b-fc60-78bb-f822-b87b6299fa64@citrix.com>
Date: Fri, 25 Jun 2021 12:36:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <24789.46868.943149.770825@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0149.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: adee9800-b747-44a4-9d33-08d937cd8e6c
X-MS-TrafficTypeDiagnostic: BYAPR03MB4424:
X-Microsoft-Antispam-PRVS: <BYAPR03MB442499482947B7AFD1F8DA63BA069@BYAPR03MB4424.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: adee9800-b747-44a4-9d33-08d937cd8e6c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 11:37:05.4337
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /1WivpgsbsabHzWUKLdRkf4UouP2YdSAHiAqsnpoirhDlM8deKBf90zoCkhHv2SJ9G8GNPyVvycYs1OL9sDDduIKH7xkgIKAGiQFVOBYYVE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4424
X-OriginatorOrg: citrix.com

On 25/06/2021 11:59, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH] libxencall: Bump SONAME following new functionality"):
>> On 25.06.2021 11:17, Andrew Cooper wrote:
>>> On 25/06/2021 07:31, Jan Beulich wrote:
>>>> On 24.06.2021 19:55, Andrew Cooper wrote:
>>>>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
>>>> Is this strictly necessary, i.e. is a Fixes: tag here warranted?
>>> Yes - very much so.
>>>
>>> andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa
>>> ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
>>>     33: 0000000000001496    59 FUNC    GLOBAL DEFAULT   13 xencall2L@@VERS_1.3
>>>     39: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>     76: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>   020:   4 (VERS_1.2)      5 (VERS_1.3)      2 (VERS_1.0)      3 (VERS_1.1)
>>>   024:   3 (VERS_1.1)      2 (VERS_1.0)      4 (VERS_1.2)      5 (VERS_1.3)
>>>   0x0080: Rev: 1  Flags: none  Index: 5  Cnt: 2  Name: VERS_1.3
>>>
>>> Without this, you create a library called .so.1.2 with 1.3's ABI in.
>> I'm aware of the change to file contents as well as the disagreement
>> of file name / SONAME vs enumerated versions. So telling me this is
>> not really an answer to my question. It may be by convention that
>> the two should match up, but I don't see any functional issue (yet)
>> if they don't. Plus of course you leave open altogether the
>> backporting aspect of my question.
> The patch, including the Fixes tag,
>
> Reviewed-by: Ian Jackson <iwj@xenproject.org>

Thanks.

> Changing minor version in the filename as well as the .so is not an
> impediment to backporting.  The actual soname remains the same so
> there is no compatibility problem and the change is still suitable for
> including in e.g. distro stable releases.

Correct, although backporting in general is problematic.

Until Xen 4.16 is released (or we explicitly decide to make a library
release early), the 1.3 ABI isn't set in stone.

Backports to older stable-* branches must sit on a boundary already set
in stone in staging, or we'll end up with different versions of Xen
having different ideas of what VERS_1.3 means.

It also creates upgrade compatibility problems for any distro whose
installs of newer versions don't have internet access to pull updated
packages.  This is a consequence of having different versions of
stable libs bundled in different releases of Xen, rather than having
them entirely separate and packaged by their soname alone.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 11:59:02 2021
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
To: Ian Jackson <iwj@xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
 <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
 <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
 <4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
 <24789.46868.943149.770825@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d6439a3c-9920-c300-2a4e-2a5894e5ee53@suse.com>
Date: Fri, 25 Jun 2021 13:58:44 +0200
In-Reply-To: <24789.46868.943149.770825@mariner.uk.xensource.com>

On 25.06.2021 12:59, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH] libxencall: Bump SONAME following new functionality"):
>> On 25.06.2021 11:17, Andrew Cooper wrote:
>>> On 25/06/2021 07:31, Jan Beulich wrote:
>>>> On 24.06.2021 19:55, Andrew Cooper wrote:
>>>>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
>>>> Is this strictly necessary, i.e. is a Fixes: tag here warranted?
>>>
>>> Yes - very much so.
>>>
>>> andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa
>>> ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
>>>     33: 0000000000001496    59 FUNC    GLOBAL DEFAULT   13 xencall2L@@VERS_1.3
>>>     39: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>     76: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>   020:   4 (VERS_1.2)      5 (VERS_1.3)      2 (VERS_1.0)      3 (VERS_1.1)
>>>   024:   3 (VERS_1.1)      2 (VERS_1.0)      4 (VERS_1.2)      5 (VERS_1.3)
>>>   0x0080: Rev: 1  Flags: none  Index: 5  Cnt: 2  Name: VERS_1.3
>>>
>>> Without this, you create a library called .so.1.2 with 1.3's ABI in.
>>
>> I'm aware of the change to file contents as well as the disagreement
>> of file name / SONAME vs enumerated versions. So telling me this is
>> not really an answer to my question. It may be by convention that
>> the two should match up, but I don't see any functional issue (yet)
>> if they don't. Plus of course you leave open altogether the
>> backporting aspect of my question.
>
> The patch, including the Fixes tag,
>
> Reviewed-by: Ian Jackson <iwj@xenproject.org>
>
> Changing minor version in the filename as well as the .so is not an
> impediment to backporting.  The actual soname remains the same so
> there is no compatibility problem and the change is still suitable for
> including in e.g. distro stable releases.
>
> Not changing the filename is quite strange.  I haven't thought through
> all of the implications, but I'm sure it will confuse people, and it
> seems likely to confuse at least some computer programs that handle
> this kind of thing.

I guess I'm still having trouble seeing the actual issue from not
bumping the minor version of the library. This is still largely
connected to me not seeing what a clean backport here would look
like, in particular if we assume for a moment that the oldest tree
to backport to was not already at version 1.2.
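
To make that concrete, here is a hypothetical sketch of the
version-script side of it (the fragments below are invented for
illustration and are not the actual libxencall.map contents):

```
/* Hypothetical: staging's map after the patch chains the new node
   off VERS_1.2. */
VERS_1.3 { global: xencall2L; } VERS_1.2;

/* A stable branch whose map still ends at VERS_1.1 cannot take this
   hunk cleanly: it would first have to grow a VERS_1.2 node, and the
   symbol set it puts there may not match what staging's VERS_1.2
   (and hence VERS_1.3) actually contains. */
```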

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:01:15 2021
Subject: Re: [PATCH] libxencall: Bump SONAME following new functionality
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>
References: <20210624175521.20843-1-andrew.cooper3@citrix.com>
 <6e103351-f3eb-39c8-441d-be926579f2ca@suse.com>
 <9c5cff0e-0ab6-7015-3667-bad2d9f5b31e@citrix.com>
 <4b360f95-aec2-935b-ce95-73a01cae98be@suse.com>
 <24789.46868.943149.770825@mariner.uk.xensource.com>
 <3fc8f75b-fc60-78bb-f822-b87b6299fa64@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a841fff7-f386-d7b4-1695-ef6916247f11@suse.com>
Date: Fri, 25 Jun 2021 14:00:55 +0200
In-Reply-To: <3fc8f75b-fc60-78bb-f822-b87b6299fa64@citrix.com>

On 25.06.2021 13:36, Andrew Cooper wrote:
> On 25/06/2021 11:59, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH] libxencall: Bump SONAME following new functionality"):
>>> On 25.06.2021 11:17, Andrew Cooper wrote:
>>>> On 25/06/2021 07:31, Jan Beulich wrote:
>>>>> On 24.06.2021 19:55, Andrew Cooper wrote:
>>>>>> Fixes: bef64f2c00 ("libxencall: introduce variant of xencall2() returning long")
>>>>> Is this strictly necessary, i.e. is a Fixes: tag here warranted?
>>>> Yes - very much so.
>>>>
>>>> andrewcoop@andrewcoop:/local/xen.git/xen$ readelf -Wa
>>>> ../tools/libs/call/libxencall.so.1.2 | grep 1\\.3
>>>>     33: 0000000000001496    59 FUNC    GLOBAL DEFAULT   13 xencall2L@@VERS_1.3
>>>>     39: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>>     76: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS VERS_1.3
>>>>   020:   4 (VERS_1.2)      5 (VERS_1.3)      2 (VERS_1.0)      3 (VERS_1.1)
>>>>   024:   3 (VERS_1.1)      2 (VERS_1.0)      4 (VERS_1.2)      5 (VERS_1.3)
>>>>   0x0080: Rev: 1  Flags: none  Index: 5  Cnt: 2  Name: VERS_1.3
>>>>
>>>> Without this, you create a library called .so.1.2 with 1.3's ABI in.
>>> I'm aware of the change to file contents as well as the disagreement
>>> of file name / SONAME vs enumerated versions. So telling me this is
>>> not really an answer to my question. It may be by convention that
>>> the two should match up, but I don't see any functional issue (yet)
>>> if they don't. Plus of course you leave open altogether the
>>> backporting aspect of my question.
>> The patch, including the Fixes tag,
>>
>> Reviewed-by: Ian Jackson <iwj@xenproject.org>
>
> Thanks.
>
>> Changing minor version in the filename as well as the .so is not an
>> impediment to backporting.  The actual soname remains the same so
>> there is no compatibility problem and the change is still suitable for
>> including in e.g. distro stable releases.
>
> Correct, although backporting in general is problematic.
>
> Until Xen 4.16 is released (or we explicitly decide to make a library
> release early), the 1.3 ABI isn't set in stone.
>
> Backports to older stable-* branches must sit on a boundary already set
> in stone in staging, or we'll end up with different versions of Xen
> having different ideas of what VERS_1.3 means.

Which effectively means we'd have to open 1.4, despite being in the
same release cycle, if this change got backported. Or did I not
understand correctly what you were trying to say?

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:14:07 2021
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] IOMMU: XSA-373 follow-on
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
Date: Fri, 25 Jun 2021 14:13:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0047.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:101:1::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2ceacb33-89bc-4f83-47ac-08d937d2b4a0
X-MS-TrafficTypeDiagnostic: VI1PR04MB5742:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5742FC18C707053FEE94B489B3069@VI1PR04MB5742.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7VWRY9TnROdG7fuDNHf8qLsfspnZnq6LHMtn0LdU/DD6q1KjPeStTPjOrDuRPBu4WdmSOqv2ZxaKv2gtnUqdoZ9GpkzGPJBO0AS+YinlF6PH5smQzRX5S+1Wfrbn8tixCWRYipOpKDo8quQ+qosc4zewVuPkVBTZabRQ3/yt0K5ah3e2ykPo35CBdRCC1jz2xZ0IjeziTb+/ohRJ1cF4rFmbCtO+NXPbwl0SeKE2wMLwBNdP2VhkNcUn57v34Uuy8hkE/OBRJEF3/Pb+zXwZbh+xEu2TT0dgURNU1pt8SCqfbi8um+YmbPqRQoAMAFzo+L9XVQtOUtj52W4wpfjw/V2uLBcSTgFVZQPbugjECAbLw5o4w5GW3NLn/8WpCiq1FJbPvZnaffpJ31R+tX7+Z9rCgG5MSg5XfjqJQ2lLJfiu+u5b04qvZ28D6wAqhlLXT1ozmw8ubS8P73IW0yS5tKQpydznycseaJA42Ji3NRDq04WLYLHQLy++lysOhGLyd8AOOL1F5MmlKuaKCULNAVha/tNI9qcF20bVCKkRTTFKNpc5HvwPgGS0ZX4M5WqWe2ZrGsA+4KpprXEwPeDaNLe5PV2LXFIieagXIq3k3hSRIuld3H89qpHKHOjMusNlVihGIhSjv0AQO/iK3UL+NodX8dnru2Ax5/gY+e955RJTy2ZKcGmE3Sq7z6/UrNmY
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(366004)(39860400002)(376002)(346002)(6486002)(54906003)(6916009)(2616005)(2906002)(6666004)(83380400001)(956004)(478600001)(8676002)(8936002)(86362001)(4326008)(186003)(16576012)(31686004)(26005)(31696002)(36756003)(16526019)(38100700002)(316002)(5660300002)(66476007)(4744005)(66946007)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?aE5jWSsvVlQva0JIVkduMzlDMzZSVExTVWQyRytNMnJ2bmNhRm4xcjExZ0FI?=
 =?utf-8?B?bFp3bmorWVBwYXMxUjRQbEZvaU8xTjNmL04vOHFJUEJLM3FQSTE3ZGl6NmVO?=
 =?utf-8?B?SjhlRkxzREh4THdtSDVuUlhaTXpHNGIxSHgrRXRlK3pXN2ovRk9GVVZUbTZM?=
 =?utf-8?B?TnJmSExWeVVJNTBILzduc2o3bmhyemFpbzRSdnk2d3hqdzJ0ckdxajZoMDA4?=
 =?utf-8?B?cjdBVGZTRXVMSVYrTDNKcEVDYTlsYUNNejZkSnd4ME1RQjRTd1liYjhydmFx?=
 =?utf-8?B?RStFSVZ5TGorTm95UEl6anY5UTZHSzUrS1VPelBSakZkZE5yYTB6MGVaRThn?=
 =?utf-8?B?TldpejNBc2NHMVBmc09oUVg3V3huVTlnR1hsRUhUQlU0OFVxWVJmaEhHSmtn?=
 =?utf-8?B?TFFqQUVEUmxoUXdXOWN2enVuQlRqb1hGb0ZLN0JMNjdYT1RuOHRYSGdwVEdW?=
 =?utf-8?B?WXU1K3JPS1ZRdVJnNUt6T0pyS1FpQ1ZwbEtLa2l3V04yQkR2Y1JYRXh6SS9C?=
 =?utf-8?B?MCtJazc4aks2MGl5QXpKWDhyMU0xdUViYXNyeUhiRUxtbldmVGgvNG5pcTdr?=
 =?utf-8?B?dU9pcWU1SW5yT1pCWU1tbWRUcFFnVHRmLzlkYVdpS045ZWNFUC9XUXJxTTNO?=
 =?utf-8?B?aE9xMm1qakJmOE5QYTFtdGdieGtXdTFtSFlyRmE0d2EvblBUemVUSzhKQmdP?=
 =?utf-8?B?bUZERHRUdWRkN0tBNTRVeDA2QllHQlJEMGVvNzl1WWVOR0FGVmVmb3htT0x0?=
 =?utf-8?B?NXBOMFhiZGJyRUpoVEQ4VVkzZWtEZWt0czRnUWpWRU9hclRmY1dRRHkzQllL?=
 =?utf-8?B?S0xNMFpsZWZEcTZTQ2xGL2NpanBGcFl5WU9PY29SYTd3eW1FOWtSUkZ5YXF1?=
 =?utf-8?B?aTNUUnRWMXlYOHRHQ283VGdOd1FtemVZOWRuUXNZUzlJL3dRRnJQeTlOcjUx?=
 =?utf-8?B?VEZEZWhFR3MyTzczOS9ScnI4RVdWanNjeVQ5OVBhQ1FzSXFxMWJFbWswM1lw?=
 =?utf-8?B?eEs2UTNyOXpwQmpRUE5ZUHN3NWF4cG1JVDlHUTJDQVdrL0pTdlE3M2dUd2Fq?=
 =?utf-8?B?ZnUxOTFKR0RRNE1yVklXM2swbXo0dUlTTnFSOXhRTTUySUQ4VVc5bmxFNEhY?=
 =?utf-8?B?RmVrdmxRdkhDWm5Ea0psRVJkL2FTdHJRenFjekh1VTFEZ2dKMGplUkIzT1FM?=
 =?utf-8?B?OGhkS0NIaEJwV0JsNHZEZDg0ZFA4SS9rNWFGekhuSUhMMG5PYjZ3cWk3K3pT?=
 =?utf-8?B?eFA5NTVPcVJ2WWpRcytOSks0SzNjcm9MMXhTcE4vM015K2YyZlFzK00yTGZW?=
 =?utf-8?B?WVpVbEVWM0pEYzBmSkNNK1A0UmZES3B3RVAydWgzMDEvT1h4Y3VOS3Zrd21s?=
 =?utf-8?B?VE9tWkE1eGt0VzRnV1JHRHh3dGtFdHhrYjYvWmJOTVVaOEJzMGFNR3MwOExV?=
 =?utf-8?B?WnA2NEVwbE1uMmRTQ2VaajQwK2xaT2gwbnZvS3I3RW0zNk9ZMnM2SW94UVBw?=
 =?utf-8?B?ZStJem9XNXFXaXNMS3Q0VWNWQXp4RTJkNTcvRnJVQytXQllEMWtIdm1QM1pk?=
 =?utf-8?B?dTZXUVV3OEpXNStxY0ZrdERpditFZGdZa3h2U1BuZHNjQVNJb1N4MEdGckJ1?=
 =?utf-8?B?Um5XZEtUdm9oNk1RczdwUHJHWlZCU1ZQeElTOS8xSndRUUloUFpzRjVpMlRS?=
 =?utf-8?B?eDQ2RWhnVG83QjdJQTVJWXdvN2lvQlBqb1VhN0RLT2tza2dRTW1HUXRybnI1?=
 =?utf-8?Q?ryNViqn5ijb/VpJlt/WKvFuWw96vOmYhc5e0BwO?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ceacb33-89bc-4f83-47ac-08d937d2b4a0
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 12:13:56.4177
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qFqUYkWsIbxvSX72GzBMVRGksfNMmIvij8H2xfjZdhBEj1OnE4q+lBtsPeNo0+mIs5ReyvsotVsM8WlqnecVzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5742

A number of further adjustments were left out of the XSA, as they are
not (or, in some cases, no longer, with the changes put in place there)
a security concern. Only the AMD-side pieces are left in v2.

1: AMD/IOMMU: redo awaiting of command completion
2: AMD/IOMMU: re-work locking around sending of commands

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:15:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 12:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147153.271037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkjm-00071i-29; Fri, 25 Jun 2021 12:15:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147153.271037; Fri, 25 Jun 2021 12:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkjl-00071b-UQ; Fri, 25 Jun 2021 12:15:09 +0000
Received: by outflank-mailman (input) for mailman id 147153;
 Fri, 25 Jun 2021 12:15:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwkjk-00071T-Rk
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 12:15:08 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 591b26a0-93c5-4044-b7c3-d616942fe692;
 Fri, 25 Jun 2021 12:15:07 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-23-qHcmV0iUMbKdv2YGaMJiIQ-1; Fri, 25 Jun 2021 14:15:05 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6671.eurprd04.prod.outlook.com (2603:10a6:803:11f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Fri, 25 Jun
 2021 12:15:03 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 12:15:03 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P195CA0016.EURP195.PROD.OUTLOOK.COM (2603:10a6:102:b6::21) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Fri, 25 Jun 2021 12:15:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 591b26a0-93c5-4044-b7c3-d616942fe692
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624623306;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HKNAiCAkTEux8snV3/fR8a3Zgbg4iM0MzH/i8v9qTag=;
	b=ni9lhuVBwkNPhL3uZ3TVeAgrNgO82LFe1cUfqU/uxRpyfIFyvquhog7ZtkkofjCiKV7jC8
	G9vhOe+6GBy7D0uN0TliR1hIiSZBMif+A1X9Ul6sRPO2HmS6OXdNs6kHKbKj7RvXzrdg28
	NU1glvRCeXwhI3XM2k7JYSgpLWDz7FA=
X-MC-Unique: qHcmV0iUMbKdv2YGaMJiIQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d+/Z/PUx3aonao+Bn2BfVUmUDRCPWgFKL/E0TrW1ZNTu6gx4k50kKcjL/rZk5lYZNHkB/UDVogt86pB4tTSKFl4El8N9caSIJqDgFeQT/qHTpEGC6BFwhuGALGWgKu7Trmv+4Y0FnWbYY3yHXVatBQwknfxW38YsDY3Ik8XkxjGgzm3sTH+qe3IvhHxMJoHNA8fzkOmN/KXqEpeKGh5jsDlELw+Kbgz1PuWwCJzL4UGQ+j+gJS0DKtcRvLvE4NNKxrHEhIwHBHOb8vGsRlT4rOJskf7jNgharvSg6N25gaNjyHErNUlL9UWvkraX1lzwEn1y7hY4eFHK+T4e+4YVbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HKNAiCAkTEux8snV3/fR8a3Zgbg4iM0MzH/i8v9qTag=;
 b=Duu4Q1Ir0QVpHQEUocabQ0hQhKbk1NhS+vPAsQdiQOG/wjvc+/0mXN826C97OVMW1ViW5SglOU2GvvQDu01s0jugLr6esbj3h5SyqT4m2IDHtYiy4GcH8ub4ajHvYU9XZ7Uf6ANHfdNL4Be+9BKnfxuErtkXfA2tRlJinu6JoFf6aNReN6jqRn5hof8No5gO0oPo872L9BY/crWWc4QP+hsl6D7ylCO79Pcm609KIcqM6ajC/ji6LaTlHDGtwY21hKuYTkUQeZe1A0+4sX2lMvazvp5I5PaxqBxAl2xUw46rnBZ1qjtmSHalWwejJNYjWP7K78b978woOm6pvLbQsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 1/2] AMD/IOMMU: redo awaiting of command completion
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
Message-ID: <506e3a46-2103-26f3-94f1-c3243cf41fd5@suse.com>
Date: Fri, 25 Jun 2021 14:15:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P195CA0016.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::21) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: edacaf42-b2c8-43ee-496a-08d937d2dcb4
X-MS-TrafficTypeDiagnostic: VE1PR04MB6671:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6671FC1D6BF6067C477BC19BB3069@VE1PR04MB6671.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ycVQBW22EgcjfFuGl8ksaS6cFxgw4imrke18A0aXNDbG0wbe2UaLyC/nvtlBFHkL/Qi+DbG/SNClyHR+fYoaBlDVDFNuVOrnD0Ooo7+EYgHoL7p4sgJ1UtfXU8IQMb5Kjw5m3kmrb6SnqBF8hLP0lTqzY69sumAxjx5hnBRVkf1DkI6sTnVASORRQXDRZVR4Z+cTK9C56s0azYxRT+U8mQj177j8LjSvJPZfR2CfmM9Td7Mb80nQ6ozLOFKgdznXlmCiq/9WR/d5QzlwKV2rVnELsQnfAP9byXx34FcF1JcXg19Oitd7o2YndLZVB5FbI1WI+xyiwoBOon9m5oDLiqbYlH1nMzrHnLYoZTp9/DH3ZnauvmY5xubOU+OOFXSbe3TDEnL2wHjchNB2hlDU9JOTUzHM4RWzqJm+NX+HaBjd8iLakIwAK51xHfEUrmLKmi6jYGvaOQSqKex3LKrA0zpVEPp7r43siloIY3LNmQ1eT9gHUWceS1jMMdfUNIvEdAhPDOb3N8Gx6g/iZl4spJXbkNJoWEFgL5Qn8qbLMmtJhCAgv/rNvU8GEC7/Z8SaLX1AFrEbvddg6V3FjDROrWzopfVgncnPzqn2TFxz2bnnHQyj8+rX6Ah4zcpZ5JWpqOiBiA2CibVRFsI2UH3WXNpEp02Z2eFgK7GVL0kZasvheLcLcV2jqfkDvltIpdJy
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(346002)(396003)(136003)(39860400002)(366004)(5660300002)(66946007)(316002)(2616005)(83380400001)(38100700002)(478600001)(66476007)(16526019)(186003)(31696002)(26005)(956004)(66556008)(31686004)(16576012)(8936002)(86362001)(2906002)(6486002)(8676002)(6916009)(54906003)(4326008)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bGsrRGlsZzFDRTFNWGNYWXVTczVwaUZKT0pUQmQxYmkxM2JJaUliZXpsbmlB?=
 =?utf-8?B?ampCYWdxNzJhem5IOXBSRUxndG4wUnpQNExZOEFIME52aHM4OThDUWwraFQz?=
 =?utf-8?B?cHRwaVJDN3FnNkJSaU43M3N6RERZOHA5UHBNTjZCbHUvMGV5QXBhVFhWRWVP?=
 =?utf-8?B?Tm5sbkRZeGdKK1VNaTNxZm5pdDZyWWU5cTNMUVlvVXpUQnlTSGRld2FraCt5?=
 =?utf-8?B?WEhZd2c2LzZPSXhVVCs2VURYekc0c3J2ZU5QYjNxbW84M2UyRTlURXhMZGFK?=
 =?utf-8?B?dzJ3TVNkNVRyWVNHRDhkTFEwMUpYb0gvVjlXdUpLYkFYTFBxWjZLb21HMjdx?=
 =?utf-8?B?ZVlKUlJnN3RrZXlET2FSVzJia293bld1ZWloalRDOUczeUZQQU1jUGtuYXpF?=
 =?utf-8?B?aXo1ZE40Q1hYdXJvR2JQbEh0TTNPRm5NTFpSZlptUENWT1FIYms3V1Bqalp4?=
 =?utf-8?B?V09qZytER2xKNldXRDJ5T05UVzNnRmJtMW9Cc1VONVNnbDZDVUxlRVBLWDhP?=
 =?utf-8?B?MVMwWUtKbE9DS2wyQ1loQlBFVXdtL0k3b0JWdnU4alp3Y2hzcGVTK1Rmd2Fs?=
 =?utf-8?B?d1p6OUVBQjdTbDR0amVTZHBicHVkRzFXNW9DWkFxVlYySmJsOHgrMFVVVDV4?=
 =?utf-8?B?SndZRDlWalVhQTcxSXVRWnhQVTBSa05sL0U0aVViOFNnL2Fwc3M5dDlZVitR?=
 =?utf-8?B?S0xzVHZ6elREQzdFekg0V0VwaUtCd2Q5bW0zK2tkditMSmEyWi9Qa3h3N0l2?=
 =?utf-8?B?THM3Q2lKNEZsL2VuVzNJN2sraXJjOGFQWlh3RENZNVhNMThnRXgxS3o1eXhY?=
 =?utf-8?B?SWlLNWlBUWMySEgrREVmam0vY2RjZndEbVdGYWdCTDZkWVlDVTBGOWFXUmFH?=
 =?utf-8?B?dkliY3Bod2NDZjJIc2k2cmNPNUVhYkFyNHdxSHVNT0dVMXN1TDhDalJReG84?=
 =?utf-8?B?Y2tuSHBZTWtQajN1amR6clBoenM0ODVsa05yOElTc2NTN1VkZ3BKTXlCakt4?=
 =?utf-8?B?QTk0NkV1cHc5Z1J6YUFtZkppdDJSSWUzb1hLcWhjMm1taDdqYlloWVJOcGVr?=
 =?utf-8?B?eDlIeG5HaHBRY25jcUFGaDJHN3BiZVE5bzZTbGczcjJ4YWtWRzByL0R4dHda?=
 =?utf-8?B?eW1EdUF3S2JaZGFpVFl3cktnQmFtNXVoR3RJaEk3bGJqcnM5TDdYTm8zalZQ?=
 =?utf-8?B?NyttRE9XZDE3ajNNU2hSQjliNXlFMTNCelVZd0Q3di9ZcEhhclNjNnc0ZlMy?=
 =?utf-8?B?eS92NHFUeUNFRlVYTFozUGVEVEpCSHVxdkhGbUdJQnpCRXRhbmM5MlErUTV3?=
 =?utf-8?B?YTArdWhIWU5LRlcwVGx2eVQ3WU02cHR2cmJ3Ry9zdDh5cEcxS1RLU2hJVHd4?=
 =?utf-8?B?b0hNQlBsSUJuT3dMNE1JKzZSd1J3TkxrZ3k2Yzl2RUxqS0tOOWZjYUtxd1FL?=
 =?utf-8?B?QTVPbHE0K3VWZ3BybUpNemxuQmF2Y01BTEVCWnk2SmZGc25HRXNhQ3ZhWmRX?=
 =?utf-8?B?NktPblVaUjZYTmtDTUMxRzFKWGh4a2hnNDhHaWoyVGV5cUFyaXRiUXZNTllx?=
 =?utf-8?B?N0xGYUVFQkNXTVBHWmNQUHE5YXJTN1lqRjIzQ3FrZkdHSVFoYnhES2dlZWxu?=
 =?utf-8?B?Um1QTG5Nb1RIc2R2cnhkTG5mNkpJQVdkbGYrVmZFT09TVmlIOWJtWVBJdjgz?=
 =?utf-8?B?dUpJS3NnRXJVSGdpZUxXOGIxRGx5VW16Zi8vbis0TlVLcU5ERTBVQ0NjT2kw?=
 =?utf-8?Q?VKH8N/XDEu7SBwM17yE0TyKzqUQN/YSxNRlyZuq?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: edacaf42-b2c8-43ee-496a-08d937d2dcb4
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 12:15:03.6497
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: drw/EOekeG0kCsC+JnTtXTvEJsK6SdsNxIaxCaPC2Mc6yiXFAvxxCkzyKeeOeVgeJr7cvnkr2rPxMwQP9jDstQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6671

The present abuse of the completion interrupt not only stands in the
way of, down the road, using it for its actual purpose, but also
requires holding the IOMMU lock while waiting for command completion,
limiting parallelism and keeping interrupts off for non-negligible
periods of time. Have the IOMMU perform an ordinary memory write instead
of signaling an otherwise disabled interrupt (by merely updating a
status register bit).

Since IOMMU_COMP_WAIT_I_FLAG_SHIFT is now unused and
IOMMU_COMP_WAIT_[FS]_FLAG_SHIFT already were, drop all three of them
while at it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v2: Avoid set_field_in_reg_u32(). Drop now unused
    IOMMU_COMP_WAIT_[FIS]_FLAG_SHIFT.

--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -178,11 +178,8 @@ struct amd_iommu_dte {
 #define IOMMU_COMP_WAIT_DATA_BUFFER_SIZE	8
 #define IOMMU_COMP_WAIT_DATA_BUFFER_ALIGNMENT	8
 #define IOMMU_COMP_WAIT_S_FLAG_MASK		0x00000001
-#define IOMMU_COMP_WAIT_S_FLAG_SHIFT		0
 #define IOMMU_COMP_WAIT_I_FLAG_MASK		0x00000002
-#define IOMMU_COMP_WAIT_I_FLAG_SHIFT		1
 #define IOMMU_COMP_WAIT_F_FLAG_MASK		0x00000004
-#define IOMMU_COMP_WAIT_F_FLAG_SHIFT		2
 #define IOMMU_COMP_WAIT_ADDR_LOW_MASK		0xFFFFFFF8
 #define IOMMU_COMP_WAIT_ADDR_LOW_SHIFT		3
 #define IOMMU_COMP_WAIT_ADDR_HIGH_MASK		0x000FFFFF
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -20,6 +20,9 @@
 #include "iommu.h"
 #include "../ats.h"
 
+#define CMD_COMPLETION_INIT 0
+#define CMD_COMPLETION_DONE 1
+
 static void send_iommu_command(struct amd_iommu *iommu,
                                const uint32_t cmd[4])
 {
@@ -49,28 +52,27 @@ static void send_iommu_command(struct am
 static void flush_command_buffer(struct amd_iommu *iommu,
                                  unsigned int timeout_base)
 {
-    uint32_t cmd[4];
+    static DEFINE_PER_CPU(uint64_t, poll_slot);
+    uint64_t *this_poll_slot = &this_cpu(poll_slot);
+    paddr_t addr = virt_to_maddr(this_poll_slot);
+    /* send a COMPLETION_WAIT command to flush command buffer */
+    uint32_t cmd[4] = {
+        addr | MASK_INSR(IOMMU_CONTROL_ENABLED,
+                         IOMMU_COMP_WAIT_S_FLAG_MASK),
+        (addr >> 32) | MASK_INSR(IOMMU_CMD_COMPLETION_WAIT,
+                                 IOMMU_CMD_OPCODE_MASK),
+        CMD_COMPLETION_DONE
+    };
     s_time_t start, timeout;
     static unsigned int __read_mostly threshold = 1;
 
-    /* RW1C 'ComWaitInt' in status register */
-    writel(IOMMU_STATUS_COMP_WAIT_INT,
-           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
-
-    /* send an empty COMPLETION_WAIT command to flush command buffer */
-    cmd[3] = cmd[2] = 0;
-    set_field_in_reg_u32(IOMMU_CMD_COMPLETION_WAIT, 0,
-                         IOMMU_CMD_OPCODE_MASK,
-                         IOMMU_CMD_OPCODE_SHIFT, &cmd[1]);
-    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, 0,
-                         IOMMU_COMP_WAIT_I_FLAG_MASK,
-                         IOMMU_COMP_WAIT_I_FLAG_SHIFT, &cmd[0]);
+    ACCESS_ONCE(*this_poll_slot) = CMD_COMPLETION_INIT;
+
     send_iommu_command(iommu, cmd);
 
     start = NOW();
     timeout = start + (timeout_base ?: 100) * MILLISECS(threshold);
-    while ( !(readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET) &
-              IOMMU_STATUS_COMP_WAIT_INT) )
+    while ( ACCESS_ONCE(*this_poll_slot) != CMD_COMPLETION_DONE )
     {
         if ( timeout && NOW() > timeout )
         {



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:15:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 12:15:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147157.271048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkkE-0007f2-FA; Fri, 25 Jun 2021 12:15:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147157.271048; Fri, 25 Jun 2021 12:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkkE-0007ev-BK; Fri, 25 Jun 2021 12:15:38 +0000
Received: by outflank-mailman (input) for mailman id 147157;
 Fri, 25 Jun 2021 12:15:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwkkC-0007eh-Tg
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 12:15:36 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77737e35-c514-4c5b-8033-ffa74c5175eb;
 Fri, 25 Jun 2021 12:15:36 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2105.outbound.protection.outlook.com [104.47.18.105])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-12-k6S0ZeH3P7KuM3A_VRj4aQ-1; Fri, 25 Jun 2021 14:15:34 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6671.eurprd04.prod.outlook.com (2603:10a6:803:11f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Fri, 25 Jun
 2021 12:15:32 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 12:15:32 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P195CA0030.EURP195.PROD.OUTLOOK.COM (2603:10a6:102:b6::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Fri, 25 Jun 2021 12:15:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77737e35-c514-4c5b-8033-ffa74c5175eb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624623335;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l2pXfNxThTc3rc087ftHWI5WYIVmApjfFkK2DTveZXM=;
	b=dHD6+fsghgnOCHvKjPI7ASnazavsm7lVrZ6G61wcRyt0tT+yKQhUi5XwBxxXm9aIcJZx1a
	iu7RbQR3dviABd3T7f3SQuPUa0CuiM5BSPOzKF3o2dDPw3Xs8aKzc32aM7lujqVnXSl/LG
	g9eBx2MoCqhBfWixeiBEHsQfhzmy2vo=
X-MC-Unique: k6S0ZeH3P7KuM3A_VRj4aQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fu+HcT6w9YoF4XBMIguuuJP69r5aSDvdazhydS2JnW8S3bTCFNPrxXz3je/W1SzUf25808xjNlfy8jt/0tw9/R6Bs/jJDG7dGSriy2cuCcMxfMlARZHmvmAvP1NfPnPXLouaEKpZzTBAADtwzITsR2gyYbHx8PceX1T9wkHBl8EhDamyjfqYv2YQv0c5GMQiRs/9N6NE0Rdd8yTOjavkJi4tMzmz7tys9IPj4KWHyVU11jHp1MXNhA1+0xd0cNaopTnSR/7nSwfyoSyp3GNR7KqtocUDEuMgt5dp40Q2UqlbzQMmyp8PtllGojXfpc8efxTf1MXoIV8EOMaFM+NW4g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l2pXfNxThTc3rc087ftHWI5WYIVmApjfFkK2DTveZXM=;
 b=XzLteRa4jde8HmE0B7VgbklpdTYtKT0ky0No75CvhoNLtl5ElvqiRpj5tIw2aHeALW/B1aogfKLqqBv7f97qdKVGLMJgS3oeOdTwVSQxZHU2ZOMPqMZwhJOqA4jZcLKPgqS7Y9lhy2O12b3bX+1pJj1TfAF2YP0IUSwffwg51OUBNy+Cu9Dkmvd+kgn3Vdo9lRw+mTjUt7hZAPhohMyGlfsARNpoYmdOSCQBImnn19rIQ9zZKyX6lhMUHojIbd5OugQic+hrWaJW5X3FbolGoNiYzBczSGnwhirE6B2vC/51xbFb00thpoboM8oemOdD9EVVxKWmM74fQfnnHtt3TQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH v2 2/2] AMD/IOMMU: re-work locking around sending of commands
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
Message-ID: <80f6365d-4f0d-66b5-b0ab-99dfeb40bd31@suse.com>
Date: Fri, 25 Jun 2021 14:15:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P195CA0030.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d93aeae6-ce11-4398-e325-08d937d2edf4
X-MS-TrafficTypeDiagnostic: VE1PR04MB6671:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB66710C608228B89EFF7962A0B3069@VE1PR04MB6671.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3968;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ENsAkQuGq6hzi2u5imZ1unHYd9IzvmYKCX2t5RD53u4QBjTp4Xt/9RJLcj+tRAVaQHJl2c7PU6Sfrtn3kX5SnSPIPYiHnz6KnExUkUAOLVJUQhrn/aL4iJ0HQIXBhD4Pqx9chOWdE0TXSPuKVNBD+n3PlqazC1U0OJY1trFcrIq1TnM0TgmrsL5X4RmuwQGGl5jcMYwbg9Mce6zgyQibTi4oaFtBN/MgcYcgOjKHNBjjH2/+xl7IINowiuyVOCkbqkB3L+xkl3LyussOzlqi9uSXufoM6xHCZDOrrKANwQY/9JWwGe5dBuZIrM+uRPGR6/Vv5PPc3ULJpgvlaWRj1+VjmiiMtdKwMh7GcWdMscYG92fhtVm9P5XOACC8e+CfKR3GHKL3cxZOQWN/PCGCBKhmljSMEWIGDZslMMutV2q2t19EEqAVLqku7SQpcToUtIC2MJboKEKBNrd/XtpmM1p8VAhAuKJepG0wU+h4B1DvyXttvWRbVchNAYT7mrCW4d8g7J6Ic7C+TEsl1I7VnrKnWeHX6nJaag1iVLaa/ZHeN846cHFJufKm2GlpHhr7/96zCS2RfWONFaTB2BIY+UVBIFT1gDrby1SR7f0zOaSeu5XxbPRSmvIW9A8KVpFtSNChgeINRJoctSBaL9LOanKrFZjshQ/0XtU+XA3Uix6yg4CWNxY/DzeWkzuNDxld
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(346002)(396003)(136003)(39860400002)(366004)(5660300002)(66946007)(316002)(2616005)(83380400001)(38100700002)(478600001)(66476007)(16526019)(186003)(31696002)(26005)(956004)(66556008)(31686004)(16576012)(8936002)(86362001)(2906002)(6486002)(8676002)(6916009)(54906003)(4326008)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MXo0ZEFqVU5NUDVPVFkrRnFwRmE1NEZzaHlPd0Z4M2x6WU5sZjd1U3VBdUdw?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d93aeae6-ce11-4398-e325-08d937d2edf4
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 12:15:32.6063
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A7Yw0Xy3/Ph4FKkETP5K561FVIbs55WY49hFt/DOQ+zpEY/fon4SK7uh/fk0d8Uh+7n3IfRDcJKP/z1BWzNuJw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6671

It appears unhelpful to me for flush_command_buffer() to block all
progress elsewhere for the given IOMMU by holding its lock while waiting
for command completion. There's no real need for callers of that
function or of send_iommu_command() to hold the lock. Contain all
command-sending-related locking to the latter function.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-work to contain locking to send_iommu_command().

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -27,6 +27,9 @@ static void send_iommu_command(struct am
                                const uint32_t cmd[4])
 {
     uint32_t tail;
+    unsigned long flags;
+
+    spin_lock_irqsave(&iommu->lock, flags);
 
     tail = iommu->cmd_buffer.tail + sizeof(cmd_entry_t);
     if ( tail == iommu->cmd_buffer.size )
@@ -47,6 +50,8 @@ static void send_iommu_command(struct am
     iommu->cmd_buffer.tail = tail;
 
     writel(tail, iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
+
+    spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
 static void flush_command_buffer(struct amd_iommu *iommu,
@@ -273,7 +278,6 @@ static void invalidate_iommu_all(struct
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            daddr_t daddr, unsigned int order)
 {
-    unsigned long flags;
     struct amd_iommu *iommu;
     unsigned int req_id, queueid, maxpend;
 
@@ -300,10 +304,8 @@ void amd_iommu_flush_iotlb(u8 devfn, con
     maxpend = pdev->ats.queue_depth & 0xff;
 
     /* send INVALIDATE_IOTLB_PAGES command */
-    spin_lock_irqsave(&iommu->lock, flags);
     invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr, req_id, order);
     flush_command_buffer(iommu, iommu_dev_iotlb_timeout);
-    spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
 static void amd_iommu_flush_all_iotlbs(struct domain *d, daddr_t daddr,
@@ -330,17 +332,14 @@ static void amd_iommu_flush_all_iotlbs(s
 static void _amd_iommu_flush_pages(struct domain *d,
                                    daddr_t daddr, unsigned int order)
 {
-    unsigned long flags;
     struct amd_iommu *iommu;
     unsigned int dom_id = d->domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
     {
-        spin_lock_irqsave(&iommu->lock, flags);
         invalidate_iommu_pages(iommu, daddr, dom_id, order);
         flush_command_buffer(iommu, 0);
-        spin_unlock_irqrestore(&iommu->lock, flags);
     }
 
     if ( ats_enabled )
@@ -360,37 +359,25 @@ void amd_iommu_flush_pages(struct domain
 
 void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf)
 {
-    ASSERT( spin_is_locked(&iommu->lock) );
-
     invalidate_dev_table_entry(iommu, bdf);
     flush_command_buffer(iommu, 0);
 }
 
 void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf)
 {
-    ASSERT( spin_is_locked(&iommu->lock) );
-
     invalidate_interrupt_table(iommu, bdf);
     flush_command_buffer(iommu, 0);
 }
 
 void amd_iommu_flush_all_caches(struct amd_iommu *iommu)
 {
-    ASSERT( spin_is_locked(&iommu->lock) );
-
     invalidate_iommu_all(iommu);
     flush_command_buffer(iommu, 0);
 }
 
 void amd_iommu_send_guest_cmd(struct amd_iommu *iommu, u32 cmd[])
 {
-    unsigned long flags;
-
-    spin_lock_irqsave(&iommu->lock, flags);
-
     send_iommu_command(iommu, cmd);
     /* TBD: Timeout selection may require peeking into cmd[]. */
     flush_command_buffer(iommu, 0);
-
-    spin_unlock_irqrestore(&iommu->lock, flags);
 }
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -449,9 +449,10 @@ static int do_invalidate_dte(struct doma
     spin_lock_irqsave(&iommu->lock, flags);
     dte_set_gcr3_table(mdte, hdom_id, gcr3_mfn << PAGE_SHIFT, gv, glx);
 
-    amd_iommu_flush_device(iommu, req_id);
     spin_unlock_irqrestore(&iommu->lock, flags);
 
+    amd_iommu_flush_device(iommu, req_id);
+
     return 0;
 }
 
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -871,7 +871,10 @@ static void enable_iommu(struct amd_iomm
     spin_lock_irqsave(&iommu->lock, flags);
 
     if ( unlikely(iommu->enabled) )
-        goto out;
+    {
+        spin_unlock_irqrestore(&iommu->lock, flags);
+        return;
+    }
 
     amd_iommu_erratum_746_workaround(iommu);
 
@@ -921,13 +924,12 @@ static void enable_iommu(struct amd_iomm
 
     set_iommu_translation_control(iommu, IOMMU_CONTROL_ENABLED);
 
-    if ( iommu->features.flds.ia_sup )
-        amd_iommu_flush_all_caches(iommu);
-
     iommu->enabled = 1;
 
- out:
     spin_unlock_irqrestore(&iommu->lock, flags);
+
+    if ( iommu->features.flds.ia_sup )
+        amd_iommu_flush_all_caches(iommu);
 }
 
 static void disable_iommu(struct amd_iommu *iommu)
@@ -1544,7 +1546,6 @@ static int _invalidate_all_devices(
 {
     unsigned int bdf; 
     u16 req_id;
-    unsigned long flags;
     struct amd_iommu *iommu;
 
     for ( bdf = 0; bdf < ivrs_bdf_entries; bdf++ )
@@ -1553,10 +1554,8 @@ static int _invalidate_all_devices(
         req_id = ivrs_mappings[bdf].dte_requestor_id;
         if ( iommu )
         {
-            spin_lock_irqsave(&iommu->lock, flags);
             amd_iommu_flush_device(iommu, req_id);
             amd_iommu_flush_intremap(iommu, req_id);
-            spin_unlock_irqrestore(&iommu->lock, flags);
         }
     }
 
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -310,9 +310,7 @@ static int update_intremap_entry_from_io
         entry.ptr32->flds.remap_en = false;
         spin_unlock(lock);
 
-        spin_lock(&iommu->lock);
         amd_iommu_flush_intremap(iommu, req_id);
-        spin_unlock(&iommu->lock);
 
         spin_lock(lock);
     }
@@ -527,11 +525,9 @@ static int update_intremap_entry_from_ms
 
         if ( iommu->enabled )
         {
-            spin_lock_irqsave(&iommu->lock, flags);
             amd_iommu_flush_intremap(iommu, req_id);
             if ( alias_id != req_id )
                 amd_iommu_flush_intremap(iommu, alias_id);
-            spin_unlock_irqrestore(&iommu->lock, flags);
         }
 
         return 0;
@@ -567,11 +563,9 @@ static int update_intremap_entry_from_ms
         entry.ptr32->flds.remap_en = false;
         spin_unlock(lock);
 
-        spin_lock(&iommu->lock);
         amd_iommu_flush_intremap(iommu, req_id);
         if ( alias_id != req_id )
             amd_iommu_flush_intremap(iommu, alias_id);
-        spin_unlock(&iommu->lock);
 
         spin_lock(lock);
     }
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -129,6 +129,8 @@ static void amd_iommu_setup_domain_devic
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
             dte->i = ats_enabled;
 
+        spin_unlock_irqrestore(&iommu->lock, flags);
+
         amd_iommu_flush_device(iommu, req_id);
 
         AMD_IOMMU_DEBUG("Setup I/O page table: device id = %#x, type = %#x, "
@@ -138,8 +140,8 @@ static void amd_iommu_setup_domain_devic
                         page_to_maddr(hd->arch.amd.root_table),
                         domain->domain_id, hd->arch.amd.paging_mode);
     }
-
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    else
+        spin_unlock_irqrestore(&iommu->lock, flags);
 
     ASSERT(pcidevs_locked());
 
@@ -307,6 +309,8 @@ static void amd_iommu_disable_domain_dev
         smp_wmb();
         dte->v = true;
 
+        spin_unlock_irqrestore(&iommu->lock, flags);
+
         amd_iommu_flush_device(iommu, req_id);
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
@@ -314,7 +318,8 @@ static void amd_iommu_disable_domain_dev
                         req_id,  domain->domain_id,
                         dom_iommu(domain)->arch.amd.paging_mode);
     }
-    spin_unlock_irqrestore(&iommu->lock, flags);
+    else
+        spin_unlock_irqrestore(&iommu->lock, flags);
 
     ASSERT(pcidevs_locked());
 
@@ -455,9 +460,9 @@ static int amd_iommu_add_device(u8 devfn
             iommu->dev_table.buffer + (bdf * IOMMU_DEV_TABLE_ENTRY_SIZE),
             ivrs_mappings[bdf].intremap_table, iommu, iommu_intremap);
 
-        amd_iommu_flush_device(iommu, bdf);
-
         spin_unlock_irqrestore(&iommu->lock, flags);
+
+        amd_iommu_flush_device(iommu, bdf);
     }
 
     amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:19:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 12:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147160.271058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwknr-0008Nz-VS; Fri, 25 Jun 2021 12:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147160.271058; Fri, 25 Jun 2021 12:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwknr-0008Ns-SK; Fri, 25 Jun 2021 12:19:23 +0000
Received: by outflank-mailman (input) for mailman id 147160;
 Fri, 25 Jun 2021 12:19:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwknr-0008Ni-CT; Fri, 25 Jun 2021 12:19:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwknr-0001xb-5Z; Fri, 25 Jun 2021 12:19:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwknq-0000mH-V2; Fri, 25 Jun 2021 12:19:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwknq-0000PT-UT; Fri, 25 Jun 2021 12:19:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sjcBmvQUSe/e36XJ2f6niXUjjAJIiuHroIDlV+O2FIQ=; b=58IydurkoTzW97XLLH2NCh6pND
	b6/shbomSMpvmc8PPEYwLz2I2v2R59xcHm81x8UBAru0IuntDWbKnnJzdfD9TTuHU5+IiXHPXE7+2
	YIgybvc6++kZO/E7qe6Yy7suBCPJRVWBt6unsFtgdX1agOp/8hRMREAli0ldy4pQo78Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163033-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163033: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a9b94982b0e0928ad874907f5a5b005944ab7cf
X-Osstest-Versions-That:
    xen=e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 12:19:22 +0000

flight 163033 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163033/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a9b94982b0e0928ad874907f5a5b005944ab7cf
baseline version:
 xen                  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0

Last test of basis   163019  2021-06-24 15:00:27 Z    0 days
Testing same since   163033  2021-06-25 10:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e87d8f60fa..8a9b94982b  8a9b94982b0e0928ad874907f5a5b005944ab7cf -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 12:30:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 12:30:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147167.271079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkyR-00029l-5w; Fri, 25 Jun 2021 12:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147167.271079; Fri, 25 Jun 2021 12:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwkyR-00029e-29; Fri, 25 Jun 2021 12:30:19 +0000
Received: by outflank-mailman (input) for mailman id 147167;
 Fri, 25 Jun 2021 12:30:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xnol=LT=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lwkyQ-00029Y-0m
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 12:30:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 154c3622-c7b3-4074-bca4-e8816fa18135;
 Fri, 25 Jun 2021 12:30:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A4C9361463;
 Fri, 25 Jun 2021 12:30:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 154c3622-c7b3-4074-bca4-e8816fa18135
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624624216;
	bh=236dSGX7ExOVPv4h0UeOksGQ8SgYCOQ7aXD3NGK83S0=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=bCaWwZtC9a0kNjsO191mUbtUcojnviRFMJnXBZ/UQc+wZDgM8hUcreC3qNCEkirNE
	 bbLWlaMbJfBXlg05V08tS0sV+lbJ6mjPNoN7Y8AT15UfxghYQbA3zVDmQkdENHWUeg
	 aoWWbni5r3mbAZmV1yWyb6Qtn3Rm3bH4e4imD6DcErBaFTOVYzmckxYQM6wCi00qxL
	 IKspsHt2W8+A5P7LrhMSB0wLC0J5AtiueFbpUWEKsYqOURtPkRdQJiwA+N05SkMbY0
	 yNml1K+qGwoAZk+3C0lKdhaf1wsPR1zKe7fVETOXT9FqPpHlh5Z2ca5kWomvg9Dx6c
	 8Pdmk/dzJ5M/g==
Date: Fri, 25 Jun 2021 13:30:05 +0100
From: Will Deacon <will@kernel.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
	jgross@suse.com, Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com, thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: Re: [PATCH v15 00/12] Restricted DMA
Message-ID: <20210625123004.GA3170@willie-the-truck>
References: <20210624155526.2775863-1-tientzu@chromium.org>
 <YNTa1C5uvz+qWryf@char.us.oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YNTa1C5uvz+qWryf@char.us.oracle.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Jun 24, 2021 at 03:19:48PM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Jun 24, 2021 at 11:55:14PM +0800, Claire Chang wrote:
> > This series implements mitigations for lack of DMA access control on
> > systems without an IOMMU, which could result in the DMA accessing the
> > system memory at unexpected times and/or unexpected addresses, possibly
> > leading to data leakage or corruption.
> > 
> > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> > not behind an IOMMU. As PCI-e, by design, gives the device full access to
> > system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> > to a full system exploit (remote wifi exploits: [1a], [1b] that shows a
> > full chain of exploits; [2], [3]).
> > 
> > To mitigate the security concerns, we introduce restricted DMA. Restricted
> > DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> > specially allocated region and does memory allocation from the same region.
> > The feature on its own provides a basic level of protection against the DMA
> > overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system needs
> > to provide a way to restrict the DMA to a predefined memory region (this is
> > usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
> > 
> > [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> > [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> > [2] https://blade.tencent.com/en/advisories/qualpwn/
> > [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> > [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
> > 
> > v15:
> > - Apply Will's diff (https://lore.kernel.org/patchwork/patch/1448957/#1647521)
> >   to fix the crash reported by Qian.
> > - Add Stefano's Acked-by tag for patch 01/12 from v14
> 
> That all should be now be on
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git/
> devel/for-linus-5.14 (and linux-next)

Thanks Konrad!

Will


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:15:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:15:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147178.271105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlgI-0006J5-3o; Fri, 25 Jun 2021 13:15:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147178.271105; Fri, 25 Jun 2021 13:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlgH-0006Iy-Vs; Fri, 25 Jun 2021 13:15:37 +0000
Received: by outflank-mailman (input) for mailman id 147178;
 Fri, 25 Jun 2021 13:15:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwlgG-0006Is-FD
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:15:36 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fe5dcaf-eb59-4d95-91cb-e175b5e10978;
 Fri, 25 Jun 2021 13:15:35 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2106.outbound.protection.outlook.com [104.47.18.106])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-Ob_EhFLzNOWqihqfGNk29Q-1; Fri, 25 Jun 2021 15:15:33 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4384.eurprd04.prod.outlook.com (2603:10a6:803:6f::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Fri, 25 Jun
 2021 13:15:32 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:15:32 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0242.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Fri, 25 Jun 2021 13:15:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fe5dcaf-eb59-4d95-91cb-e175b5e10978
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624626934;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=aKwyIGLXnRebabJB5E/m6YXe6q4iLMqI5AW5V8QxUJw=;
	b=JIOh7xGU5sOlh0zF8j3j8iIAjZgIiiNoQ+hX4qnEHpJ7lkNczi+eRGevQECAN0lzwU4hwJ
	LQbfOqm/NAYo07LY9l9y+4ANwMJ33oXPzEUWQ5bUiDU3UJYO+wxLX1L9guxKh3WWuc10qX
	ToVXBXrDkWcHvtiRclKjY/6ndU1BHZs=
X-MC-Unique: Ob_EhFLzNOWqihqfGNk29Q-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hABQ9sNBqgoQKpRkLLnziZpwwTUqKpQf+pi6F2Wq9bQAiJtNTU/UMNSLOBILcFjYt2TMGqXzk9nSZKyqWaJKU2+XBGFTaN7POndHMmc0JZq/omMzY6K1FtS7PDuycoFX+/3C0iSxJDIxrLbcC/3XQ5VH1jA4yT7XbYUrCWe7bZo0FyKBcGGrl99ESuYbE5AnoyMOWNsZi0lKQNQf10Q6J46BN9l4Am/Ti3Ju0dLwdt/yItq+urBtqF9dgmqQtENdbv65Wck1q7dBqZGR8Vu/TLVXGqWW8tPM28cBh9OY+JSpnOfjgdICG8gU6qyO9V/pjBcSa8ImYoKvK+vZaV1W7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aKwyIGLXnRebabJB5E/m6YXe6q4iLMqI5AW5V8QxUJw=;
 b=LTmRSG5y9T/jlEokJZIkQeErZS0Z2N88bel4UlAUNVe160mY5nsMS1SauUQyBkZOxeEeU2CY4iEwQsDjDsS2zN1HrPcdOQ4w9hPIV0H+c2HzO8x/P39OWLOkr0655f5TKyLveRGu1ULhFKYTddpxaT/5g/AsUyYDp2VvEWLlgQryPkDILzFh4o+msA2Drgc9AnVq4ZnqLNSqxfkKWMjXIWWE3p45odmpsrxgYhbsMg3cVoS2Sl9s98fKowqHSbuUQ1GoXPK1yRwiLwHhrdbsMTaFXVJE9mGBYe0EVt8FKYEdTPHEAY4KHG9vSOeDYxourf27Dyee20Yp58HMzr/2SA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 00/12] x86: more or less log-dirty related improvements
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Date: Fri, 25 Jun 2021 15:15:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0242.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100::14)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b0a908c7-de9d-41ea-4a97-08d937db4f5c
X-MS-TrafficTypeDiagnostic: VI1PR04MB4384:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB43842E0881A2DAEED5CB35ADB3069@VI1PR04MB4384.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b0a908c7-de9d-41ea-4a97-08d937db4f5c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:15:32.0168
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4384

... or so I hope. This series continues the attempt to deal with
the OVMF change putting the shared info page at a very high address
(which is now planned to be reverted there, but the general problem
doesn't go away by their doing so). There are further issues with
truncated values, which are dealt with here. But there are also
changes that aren't directly related, made where I simply spotted
things that are unlikely to be right the way they are. And then
there are also adjustments to the underlying hypervisor
implementation, with the goal of making the returned data more
useful to its consumers.

With these changes in place, a 1GiB guest which has "inflated"
itself by putting a page right below the 16TiB boundary migrates
successfully, albeit the process takes anywhere from some 20 minutes
to over half an hour on my test system.

01: libxc: split xc_logdirty_control() from xc_shadow_control()
02: libxenguest: deal with log-dirty op stats overflow
03: libxenguest: short-circuit "all-dirty" handling
04: libxenguest: avoid allocating unused deferred-pages bitmap
05: libxenguest: complete loops in xc_map_domain_meminfo()
06: libxenguest: guard against overflow from too large p2m when checkpointing
07: libxenguest: fix off-by-1 in colo-secondary-bitmap merging
08: x86/paging: deal with log-dirty stats overflow
09: x86/paging: supply more useful log-dirty page count
10: x86/mm: update log-dirty bitmap when manipulating P2M
11: x86/mm: pull a sanity check earlier in xenmem_add_to_physmap_one()
12: SUPPORT.md: write down restriction of 32-bit tool stacks

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:18:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:18:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147182.271119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlii-00070i-MN; Fri, 25 Jun 2021 13:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147182.271119; Fri, 25 Jun 2021 13:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlii-00070b-JB; Fri, 25 Jun 2021 13:18:08 +0000
Received: by outflank-mailman (input) for mailman id 147182;
 Fri, 25 Jun 2021 13:18:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwlih-00070V-PT
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:18:07 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69681104-e8f7-4c5d-b555-e3c8c279dfcd;
 Fri, 25 Jun 2021 13:18:06 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2104.outbound.protection.outlook.com [104.47.17.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-37-KqQchVhzMAOVMuuGLRSrHg-2; Fri, 25 Jun 2021 15:18:03 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4384.eurprd04.prod.outlook.com (2603:10a6:803:6f::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Fri, 25 Jun
 2021 13:18:01 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:18:01 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0032.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:14::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Fri, 25 Jun 2021 13:18:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69681104-e8f7-4c5d-b555-e3c8c279dfcd
Subject: [PATCH 01/12] libxc: split xc_logdirty_control() from
 xc_shadow_control()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Marek Marczykowski <marmarek@invisiblethingslab.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
Date: Fri, 25 Jun 2021 15:17:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0032.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::19) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e92da259-073f-45a1-d9b7-08d937dba846
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:18:01.1716
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4384

For log-dirty operations a 64-bit field is being truncated to become an
"int" return value. Given the large number of arguments the present
function takes, reduce its set of parameters to those needed by all
operations not involving the log-dirty bitmap, while introducing a new
wrapper for the log-dirty bitmap operations. This new function in turn
doesn't need an "mb" parameter, but has a 64-bit return type. (The use
of the return value rather than a pointer-type output parameter is
retained, to disturb callers as little as possible.)

While altering xc_shadow_control() anyway, also adjust the types of the
last two of the remaining parameters.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder whether we shouldn't take the opportunity to also rename
xc_shadow_control() to, say, xc_paging_control(), matching the layer
above the HAP/shadow distinction in the hypervisor.

--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -885,11 +885,15 @@ typedef struct xen_domctl_shadow_op_stat
 int xc_shadow_control(xc_interface *xch,
                       uint32_t domid,
                       unsigned int sop,
-                      xc_hypercall_buffer_t *dirty_bitmap,
-                      unsigned long pages,
-                      unsigned long *mb,
-                      uint32_t mode,
-                      xc_shadow_op_stats_t *stats);
+                      unsigned int *mb,
+                      unsigned int mode);
+long long xc_logdirty_control(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int sop,
+                              xc_hypercall_buffer_t *dirty_bitmap,
+                              unsigned long pages,
+                              unsigned int mode,
+                              xc_shadow_op_stats_t *stats);
 
 int xc_sched_credit_domain_set(xc_interface *xch,
                                uint32_t domid,
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -650,25 +650,48 @@ int xc_watchdog(xc_interface *xch,
 int xc_shadow_control(xc_interface *xch,
                       uint32_t domid,
                       unsigned int sop,
-                      xc_hypercall_buffer_t *dirty_bitmap,
-                      unsigned long pages,
-                      unsigned long *mb,
-                      uint32_t mode,
-                      xc_shadow_op_stats_t *stats)
+                      unsigned int *mb,
+                      unsigned int mode)
 {
     int rc;
     DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
 
     memset(&domctl, 0, sizeof(domctl));
 
     domctl.cmd = XEN_DOMCTL_shadow_op;
     domctl.domain = domid;
     domctl.u.shadow_op.op     = sop;
-    domctl.u.shadow_op.pages  = pages;
     domctl.u.shadow_op.mb     = mb ? *mb : 0;
     domctl.u.shadow_op.mode   = mode;
-    if (dirty_bitmap != NULL)
+
+    rc = do_domctl(xch, &domctl);
+
+    if ( mb )
+        *mb = domctl.u.shadow_op.mb;
+
+    return rc;
+}
+
+long long xc_logdirty_control(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int sop,
+                              xc_hypercall_buffer_t *dirty_bitmap,
+                              unsigned long pages,
+                              unsigned int mode,
+                              xc_shadow_op_stats_t *stats)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
+
+    memset(&domctl, 0, sizeof(domctl));
+
+    domctl.cmd = XEN_DOMCTL_shadow_op;
+    domctl.domain = domid;
+    domctl.u.shadow_op.op    = sop;
+    domctl.u.shadow_op.pages = pages;
+    domctl.u.shadow_op.mode  = mode;
+    if ( dirty_bitmap )
         set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap,
                                 dirty_bitmap);
 
@@ -678,9 +701,6 @@ int xc_shadow_control(xc_interface *xch,
         memcpy(stats, &domctl.u.shadow_op.stats,
                sizeof(xc_shadow_op_stats_t));
     
-    if ( mb ) 
-        *mb = domctl.u.shadow_op.mb;
-
     return (rc == 0) ? domctl.u.shadow_op.pages : rc;
 }
 
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -459,10 +459,10 @@ static int send_checkpoint_dirty_pfn_lis
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->restore.dirty_bitmap_hbuf);
 
-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
              HYPERCALL_BUFFER(dirty_bitmap), ctx->restore.p2m_size,
-             NULL, 0, &stats) != ctx->restore.p2m_size )
+             0, &stats) != ctx->restore.p2m_size )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         goto err;
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -428,18 +428,18 @@ static int enable_logdirty(struct xc_sr_
     /* This juggling is required if logdirty is enabled for VRAM tracking. */
     rc = xc_shadow_control(xch, ctx->domid,
                            XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                           NULL, 0, NULL, 0, NULL);
+                           NULL, 0);
     if ( rc < 0 )
     {
         on1 = errno;
         rc = xc_shadow_control(xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                               NULL, 0, NULL, 0, NULL);
+                               NULL, 0);
         if ( rc < 0 )
             off = errno;
         else {
             rc = xc_shadow_control(xch, ctx->domid,
                                    XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                                   NULL, 0, NULL, 0, NULL);
+                                   NULL, 0);
             if ( rc < 0 )
                 on2 = errno;
         }
@@ -556,10 +556,10 @@ static int send_memory_live(struct xc_sr
         if ( policy_decision != XGS_POLICY_CONTINUE_PRECOPY )
             break;
 
-        if ( xc_shadow_control(
+        if ( xc_logdirty_control(
                  xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
                  &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-                 NULL, 0, &stats) != ctx->save.p2m_size )
+                 0, &stats) != ctx->save.p2m_size )
         {
             PERROR("Failed to retrieve logdirty bitmap");
             rc = -1;
@@ -653,10 +653,10 @@ static int suspend_and_send_dirty(struct
     if ( rc )
         goto out;
 
-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
              HYPERCALL_BUFFER(dirty_bitmap), ctx->save.p2m_size,
-             NULL, XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats) !=
+             XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats) !=
          ctx->save.p2m_size )
     {
         PERROR("Failed to retrieve logdirty bitmap");
@@ -716,10 +716,10 @@ static int verify_frames(struct xc_sr_co
     if ( rc )
         goto out;
 
-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_PEEK,
              &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-             NULL, 0, &stats) != ctx->save.p2m_size )
+             0, &stats) != ctx->save.p2m_size )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         rc = -1;
@@ -834,7 +834,7 @@ static void cleanup(struct xc_sr_context
 
 
     xc_shadow_control(xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                      NULL, 0, NULL, 0, NULL);
+                      NULL, 0);
 
     if ( ctx->save.ops.cleanup(ctx) )
         PERROR("Failed to clean up");
--- a/tools/libs/light/libxl_colo_restore.c
+++ b/tools/libs/light/libxl_colo_restore.c
@@ -62,7 +62,7 @@ static void colo_enable_logdirty(libxl__
     /* we need to know which pages are dirty to restore the guest */
     if (xc_shadow_control(CTX->xch, domid,
                           XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                          NULL, 0, NULL, 0, NULL) < 0) {
+                          NULL, 0) < 0) {
         LOGD(ERROR, domid, "cannot enable secondary vm's logdirty");
         lds->callback(egc, lds, ERROR_FAIL);
         return;
@@ -90,7 +90,7 @@ static void colo_disable_logdirty(libxl_
 
     /* we need to know which pages are dirty to restore the guest */
     if (xc_shadow_control(CTX->xch, domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                          NULL, 0, NULL, 0, NULL) < 0)
+                          NULL, 0) < 0)
         LOGD(WARN, domid, "cannot disable secondary vm's logdirty");
 
     if (crs->hvm) {
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -529,10 +529,10 @@ int libxl__arch_domain_create(libxl__gc
         xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);
 
     if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
-        unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
-                                           1024);
+        unsigned int shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
+                                          1024);
         xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                          NULL, 0, &shadow, 0, NULL);
+                          &shadow, 0);
     }
 
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -997,13 +997,13 @@ CAMLprim value stub_shadow_allocation_ge
 {
 	CAMLparam2(xch, domid);
 	CAMLlocal1(mb);
-	unsigned long c_mb;
+	unsigned int c_mb;
 	int ret;
 
 	caml_enter_blocking_section();
 	ret = xc_shadow_control(_H(xch), _D(domid),
 				XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION,
-				NULL, 0, &c_mb, 0, NULL);
+				&c_mb, 0);
 	caml_leave_blocking_section();
 	if (ret != 0)
 		failwith_xc(_H(xch));
@@ -1016,14 +1016,14 @@ CAMLprim value stub_shadow_allocation_se
 					  value mb)
 {
 	CAMLparam3(xch, domid, mb);
-	unsigned long c_mb;
+	unsigned int c_mb;
 	int ret;
 
 	c_mb = Int_val(mb);
 	caml_enter_blocking_section();
 	ret = xc_shadow_control(_H(xch), _D(domid),
 				XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-				NULL, 0, &c_mb, 0, NULL);
+				&c_mb, 0);
 	caml_leave_blocking_section();
 	if (ret != 0)
 		failwith_xc(_H(xch));
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1192,8 +1192,7 @@ static PyObject *pyxc_shadow_control(PyO
                                       &dom, &op) )
         return NULL;
     
-    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL) 
-         < 0 )
+    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0) < 0 )
         return pyxc_error_to_exception(xc->xc_handle);
     
     Py_INCREF(zero);
@@ -1208,7 +1207,7 @@ static PyObject *pyxc_shadow_mem_control
     int op;
     uint32_t dom;
     int mbarg = -1;
-    unsigned long mb;
+    unsigned int mb;
 
     static char *kwd_list[] = { "dom", "mb", NULL };
 
@@ -1223,7 +1222,7 @@ static PyObject *pyxc_shadow_mem_control
         mb = mbarg;
         op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION;
     }
-    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, &mb, 0, NULL) < 0 )
+    if ( xc_shadow_control(xc->xc_handle, dom, op, &mb, 0) < 0 )
         return pyxc_error_to_exception(xc->xc_handle);
     
     mbarg = mb;



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:18:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:18:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147186.271129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwljE-0007Y0-Vj; Fri, 25 Jun 2021 13:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147186.271129; Fri, 25 Jun 2021 13:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwljE-0007Xt-Si; Fri, 25 Jun 2021 13:18:40 +0000
Received: by outflank-mailman (input) for mailman id 147186;
 Fri, 25 Jun 2021 13:18:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwljD-0007Xj-Fs
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:18:39 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef223325-bd1c-4857-99a5-a9f0a587314d;
 Fri, 25 Jun 2021 13:18:38 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-27-xuX8TIB7PU6wSEGD5cTtIQ-1; Fri, 25 Jun 2021 15:18:36 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4384.eurprd04.prod.outlook.com (2603:10a6:803:6f::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Fri, 25 Jun
 2021 13:18:34 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:18:34 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0035.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:14::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Fri, 25 Jun 2021 13:18:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef223325-bd1c-4857-99a5-a9f0a587314d
Subject: [PATCH 02/12] libxenguest: deal with log-dirty op stats overflow
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
Date: Fri, 25 Jun 2021 15:18:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0035.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OEFaRnNuWjVXMlFXK1dzcGVOMy95THJXaHVZajRBQm5zK3BKbFg1WktoYWpL?=
 =?utf-8?B?SGVESFhCa1Y2alJRUzB5TU01amZwWVdseUFTeTRCRWNCUkI2N0YzWnNXYWNT?=
 =?utf-8?B?cFZkTWFrVHRYeUZFNTdGN2g2QnRuVjlyWExhL052b0hMRkN5YnQrMjR6RVFh?=
 =?utf-8?B?RGE4QXB3REtmRDBwQ01ndDMyUjhtcHkyazZPVXRUTGVMVmZ6MlpLaE9uU1Iw?=
 =?utf-8?B?bXFsQlVIZFNkOFBid003VlNIdUV6MmY4RmpaMTc3dGdLTHE3RlZBdHJaaURo?=
 =?utf-8?B?WEZ2WTdTRGZ3WVlGeTVmb1ZsYUY3WlNjZ2J0Qkc0WHBNWWl4czk0Q241aUJ0?=
 =?utf-8?B?d0hlbXhTL0Rmb1dwM3cxMG9MQjF2dDdUYXF1bkZjRmZZRG1xWWlpa1lvcXpY?=
 =?utf-8?B?bXcxaHp3NCtYUUhDOGlMMWhvQy9SQ1NSaWtmVHRYclczekRBVjFOeDZFdEs3?=
 =?utf-8?B?aUdydFF0eEN1OFBIOU5YcUVwSTFHQ3FreWgwNlFSLzFERXJsWnRJQjhFUmEw?=
 =?utf-8?B?NDR5UGU2TXZuK3huYmhsUEg5ZFRVd1FkUCtPSHFyVWtsendQYnFTUU1LcmxG?=
 =?utf-8?B?aEU1TUtlMkpnTjdvSWFaS0Z3MW9RWDlMWGcwcHlyZXZ1QWZUTEUzUXJiOTBC?=
 =?utf-8?B?VHFTdi9nTUNETUplYllBOGN4RnZ1YjZHdytMSXh0b0svcFViaWxOU0xpVVdF?=
 =?utf-8?B?aFFXNzFiMHNhWk9iMDdiOGVxWURlVmhwWEtaVytlTjZXRHFZREZtaE92NUJW?=
 =?utf-8?B?UnRBNDFndHY5ZHpLSmpXYXB5dXpidWpSNzJEcDhhTHVRMCswbERHSTk3UjFk?=
 =?utf-8?B?UHJ5YXlhR1JXYXY0TVZSckhnK3VveWJtcUJQdU92d25NMEdqeXhtY1plak5o?=
 =?utf-8?B?MUFGQWQzYWl0NldXeXg0bXJ0R0ZvNmZ6emlWU21TaWNERm52cCtlWVlWV2xx?=
 =?utf-8?B?U0xCQTVpVGl4UmZsK1NoWXhVOHcwcHZYSmQ0dWxlRmFwN0NIQXQyaFMrd28r?=
 =?utf-8?B?ME0rb0NjMVZQR214a2hDeTRiL2Z4aXZoaldNVGFiSU5mVytBelpXOXcyL1c0?=
 =?utf-8?B?MkgrZHdkanIyNHE5MFgzbmxhUlpyaE5XaFI4ZzNrSVRJODFuN2VZZFJkNVYx?=
 =?utf-8?B?blRQZGFNR1NhcVFxMTFJa3hPcFZ0Z1dXc1FYZXRaS1hmWlRaR3pyaVh1NGcv?=
 =?utf-8?B?ZFZ5MmloT2J4d1c2QmxiUGUwWE9tNHprSUxIa2hTR29WSEprQVJqMG5DakJL?=
 =?utf-8?B?NnpPODkxV0QwOXI3c3pOb2J5Y2VNaFJFamF4aWc0OVYxcVhwektWQVZpTWtS?=
 =?utf-8?B?YWtFQTVVLzBnaVBxdURNMlZPWUFnQmw4S014WTkrbm0xaTEzcGk4QlY3TXpZ?=
 =?utf-8?B?NnNtVU5XakVNNXk0L3VnS2lHeTA4OWVIekd5UmZRZThuTFJrUFZxb21zdTQy?=
 =?utf-8?B?Y2NqNTlQdGtQL05UVXk3NitkWloxQ1lwN0tTVW5EeDlkcEpLT0JSVHE5akhP?=
 =?utf-8?B?TUxEMVg2UmZZenZPcVVnT0dEUTlJdU4yT0NJdXdBMXJRdVVzSGRuaUpUbVhQ?=
 =?utf-8?B?SmJUV29XVGhoQ1RUSUgxRCtHeVJIK2xVSHdObVg3bXpaYnFkbzhTTWdxWGxD?=
 =?utf-8?B?amFDM2E5ZG5DVk1NVTNGWEhBcGNqdjN3cjlmLytaZEk0RmxkYmJhcGdrK3Q2?=
 =?utf-8?B?ZDlSWWxOOTd3djY0OWJVbHVkY2prRGI4TXo3eWgxbk5PWVpoejY5ZjRBY2ZD?=
 =?utf-8?Q?2v5mqXe+OhIbk+xZHoFfrPVkjW5QzJMfmocPOQU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf8d2562-1189-4f96-cd43-08d937dbbc19
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:18:34.4208
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4384

In send_memory_live() the precise value the dirty_count struct field
gets initialized to doesn't matter much (apart from triggering the log
message in send_dirty_pages(); see below), but it is important that it
not be zero on the first iteration (or else send_dirty_pages() wouldn't
get called at all). Saturate the initializer value at the maximum value
the field can hold.

While there, also initialize struct precopy_stats' respective field to
a saner value: we don't really know how many dirty pages there are at
that point.

In suspend_and_send_dirty() and verify_frames() the local variables
don't need initializing at all, as they're only an output from the
hypercall which gets invoked first thing.

In send_checkpoint_dirty_pfn_list() the local variable can be dropped
altogether: the stats argument is optional to xc_logdirty_control(),
and the variable isn't used anywhere else.

Note that in case the clipping actually takes effect, the "Bitmap
contained more entries than expected..." log message will trigger. This
being just an informational message, I don't think this is overly
concerning.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -452,7 +452,6 @@ static int send_checkpoint_dirty_pfn_lis
     unsigned int count, written;
     uint64_t i, *pfns = NULL;
     struct iovec *iov = NULL;
-    xc_shadow_op_stats_t stats = { 0, ctx->restore.p2m_size };
     struct xc_sr_record rec = {
         .type = REC_TYPE_CHECKPOINT_DIRTY_PFN_LIST,
     };
@@ -462,7 +461,7 @@ static int send_checkpoint_dirty_pfn_lis
     if ( xc_logdirty_control(
              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
              HYPERCALL_BUFFER(dirty_bitmap), ctx->restore.p2m_size,
-             0, &stats) != ctx->restore.p2m_size )
+             0, NULL) != ctx->restore.p2m_size )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         goto err;
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -500,7 +500,9 @@ static int simple_precopy_policy(struct
 static int send_memory_live(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
+    xc_shadow_op_stats_t stats = {
+        .dirty_count = MIN(ctx->save.p2m_size, (typeof(stats.dirty_count))~0)
+    };
     char *progress_str = NULL;
     unsigned int x = 0;
     int rc;
@@ -519,7 +521,7 @@ static int send_memory_live(struct xc_sr
         goto out;
 
     ctx->save.stats = (struct precopy_stats){
-        .dirty_count = ctx->save.p2m_size,
+        .dirty_count = -1,
     };
     policy_stats = &ctx->save.stats;
 
@@ -643,7 +645,7 @@ static int colo_merge_secondary_dirty_bi
 static int suspend_and_send_dirty(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
+    xc_shadow_op_stats_t stats;
     char *progress_str = NULL;
     int rc;
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
@@ -701,7 +703,7 @@ static int suspend_and_send_dirty(struct
 static int verify_frames(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
+    xc_shadow_op_stats_t stats;
     int rc;
     struct xc_sr_record rec = { .type = REC_TYPE_VERIFY };
 



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:19:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:19:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147189.271141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwljd-0008A9-CV; Fri, 25 Jun 2021 13:19:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147189.271141; Fri, 25 Jun 2021 13:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwljd-0008A2-8w; Fri, 25 Jun 2021 13:19:05 +0000
Received: by outflank-mailman (input) for mailman id 147189;
 Fri, 25 Jun 2021 13:19:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwljb-00089g-6i
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:19:03 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4dd75fb-842d-46e5-9aa8-31da860c64c1;
 Fri, 25 Jun 2021 13:19:02 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2051.outbound.protection.outlook.com [104.47.13.51]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-15-eaRjTwZzNQmLSfrWdTfcrg-2; Fri, 25 Jun 2021 15:19:00 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5901.eurprd04.prod.outlook.com (2603:10a6:803:e9::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Fri, 25 Jun
 2021 13:18:57 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:18:57 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR02CA0114.eurprd02.prod.outlook.com (2603:10a6:20b:28c::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20 via Frontend
 Transport; Fri, 25 Jun 2021 13:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4dd75fb-842d-46e5-9aa8-31da860c64c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624627141;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UQL1vwy7ts4g8z4e13aIolGoDY7pl/cPWGBcYrNuYnE=;
	b=gQZxib/zmcwV5HCIhJJ0nVp7979NOykdZR8C1q31Vagw01jz89RObyGrwOkVxJS/DygtWC
	vvryCpRZ6jeRv4sQyfU9QCtqXBANz+Rqu9NKvxXUWPqfrNqh7ZYVHlbQML5v/oF+tRxF0s
	SnWLZGs4cUsw6g2uP3FSkcPyNgt1Yfc=
X-MC-Unique: eaRjTwZzNQmLSfrWdTfcrg-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RNwmP2YsC9T0W987CpTTWoiLcipbJSRU+xEM+SJn/CdB5gny5Cc7qGOc04gAJPSWTNPeDuDGAQT0vQ48FlRE2710/AwnNj6O64Lq3Y5a9L5hkycMoK9ilGkEvlJ9pxD22503wmAsSY3Ut1iWNd333ik7Y/kiDVmUXl050ddZ2ldCQ/FuA4kgEo0OcWu5oYBtniyPrHnxM+/mpzFTxAFdtQBoea4CUy3eD91iQqvJZwJ05JHY9n7UI/MEW5wM4+rU4IhPDwMVWc65BxtynWqZE5H9LidLISLV/1NksRXDqVnWY6lxlP7WKj5VURAJbCkv47KxdfzDIMvBJvKU3a0AuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UQL1vwy7ts4g8z4e13aIolGoDY7pl/cPWGBcYrNuYnE=;
 b=mjrarlw+2WB8tAuYjeyDcZtfEv6z+kET7NwbflCr2ckKR2mQW1/mzWewMXGWBg3sMm7cTf0B6ldheJ8u0jUG0dNDWCGpSHIpPw1ynun+vmtEz9RglyRe+hTkkx+hKEvAk26wSAyC7m/dKMB0/fKcwiPdYmk3H1k+7hy5N6UEBG5PpWiByPrMCcErvUfYSGBZV8C2+63BH6dMi7X2irrqZJecxwbzEWQqLndB59fZY9msdo8hqEUnIrDPY8qKUw4oN6dvS7rjkxnTODaTOiq7POv49RJBdKhWt0NMbtoqT58vBgDm6BWNqh3ZVX9yIZ6UupdzyICJJU5IeqeiUBURRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 03/12] libxenguest: short-circuit "all-dirty" handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <55875a26-7f1d-a6d9-9384-b03b3b2cb86d@suse.com>
Date: Fri, 25 Jun 2021 15:18:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR02CA0114.eurprd02.prod.outlook.com
 (2603:10a6:20b:28c::11) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 83c55d88-8442-45e9-6d45-08d937dbc986
X-MS-TrafficTypeDiagnostic: VI1PR04MB5901:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB59019F222CB5AB422428F0A9B3069@VI1PR04MB5901.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:249;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 83c55d88-8442-45e9-6d45-08d937dbc986
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:18:56.9570
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5901

For one, it is unnecessary to fill a perhaps large chunk of memory with
all ones. Add a new parameter to send_dirty_pages() allowing callers to
indicate that every page is to be treated as dirty.

Then it is further unnecessary to allocate the dirty bitmap altogether
when all that's ever going to happen is a single all-dirty run.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -368,7 +368,7 @@ static int suspend_domain(struct xc_sr_c
  * Bitmap is bounded by p2m_size.
  */
 static int send_dirty_pages(struct xc_sr_context *ctx,
-                            unsigned long entries)
+                            unsigned long entries, bool all_dirty)
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t p;
@@ -379,7 +379,7 @@ static int send_dirty_pages(struct xc_sr
 
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
-        if ( !test_bit(p, dirty_bitmap) )
+        if ( !all_dirty && !test_bit(p, dirty_bitmap) )
             continue;
 
         rc = add_to_batch(ctx, p);
@@ -411,12 +411,7 @@ static int send_dirty_pages(struct xc_sr
  */
 static int send_all_pages(struct xc_sr_context *ctx)
 {
-    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
-                                    &ctx->save.dirty_bitmap_hbuf);
-
-    bitmap_set(dirty_bitmap, ctx->save.p2m_size);
-
-    return send_dirty_pages(ctx, ctx->save.p2m_size);
+    return send_dirty_pages(ctx, ctx->save.p2m_size, true /* all_dirty */);
 }
 
 static int enable_logdirty(struct xc_sr_context *ctx)
@@ -508,9 +503,6 @@ static int send_memory_live(struct xc_sr
     int rc;
     int policy_decision;
 
-    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
-                                    &ctx->save.dirty_bitmap_hbuf);
-
     precopy_policy_t precopy_policy = ctx->save.callbacks->precopy_policy;
     void *data = ctx->save.callbacks->data;
 
@@ -528,8 +520,6 @@ static int send_memory_live(struct xc_sr
     if ( precopy_policy == NULL )
         precopy_policy = simple_precopy_policy;
 
-    bitmap_set(dirty_bitmap, ctx->save.p2m_size);
-
     for ( ; ; )
     {
         policy_decision = precopy_policy(*policy_stats, data);
@@ -541,7 +531,7 @@ static int send_memory_live(struct xc_sr
             if ( rc )
                 goto out;
 
-            rc = send_dirty_pages(ctx, stats.dirty_count);
+            rc = send_dirty_pages(ctx, stats.dirty_count, x == 1);
             if ( rc )
                 goto out;
         }
@@ -687,7 +677,8 @@ static int suspend_and_send_dirty(struct
         }
     }
 
-    rc = send_dirty_pages(ctx, stats.dirty_count + ctx->save.nr_deferred_pages);
+    rc = send_dirty_pages(ctx, stats.dirty_count + ctx->save.nr_deferred_pages,
+                          false /* all_dirty */);
     if ( rc )
         goto out;
 
@@ -807,8 +798,11 @@ static int setup(struct xc_sr_context *c
     if ( rc )
         goto err;
 
-    dirty_bitmap = xc_hypercall_buffer_alloc_pages(
-        xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
+    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
+        ? xc_hypercall_buffer_alloc_pages(
+              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
+        : (void *)-1L;
+
     ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
                                   sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:19:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:19:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147192.271152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlju-0000Cb-Lk; Fri, 25 Jun 2021 13:19:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147192.271152; Fri, 25 Jun 2021 13:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlju-0000CO-HQ; Fri, 25 Jun 2021 13:19:22 +0000
Received: by outflank-mailman (input) for mailman id 147192;
 Fri, 25 Jun 2021 13:19:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwljt-0000BR-A6
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:19:21 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3876b446-720e-438f-8a23-29219d538974;
 Fri, 25 Jun 2021 13:19:20 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-nVKnUX0mNEqqLjsdmkd90w-2; Fri, 25 Jun 2021 15:19:17 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5901.eurprd04.prod.outlook.com (2603:10a6:803:e9::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Fri, 25 Jun
 2021 13:19:15 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:19:15 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0033.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:51::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.20 via Frontend Transport; Fri, 25 Jun 2021 13:19:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3876b446-720e-438f-8a23-29219d538974
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624627159;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kW3iCE9fvyJLe9hO7RfRRXOuY/QBfyW7eYF2CsurNMU=;
	b=aGWee49w4Kkv/xZDcQ2uxbMoIa4dQ9k1Vdzd2/6sbue8TxSkjAo98ILL6FYvtJTcHNG2R8
	2bujOMptxXtzXrj3fpTniSvKel+RLQ3gUC7QKnp3oG4Jc7H2e9PtbBSwxtVx/FhUwZ3HOm
	Ngtkux4dg5K/NCcdsrGF89VxEwm/HLY=
X-MC-Unique: nVKnUX0mNEqqLjsdmkd90w-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V0adlsIZDwqut6iKJRexBGQ7RlufCXEPL311HT+XMQB1cl11Rc57/HFl+3uj9PebXiMF7rrvoBG8xLKbBxrueR7+4IbLEFSmYhx1JA3WPGpunmN/OMOnvtm6eRDcofwTldVUIFYYVte0SbgshlGBlmHIizbbQk7rhB/2bk2Fl5+cj7ICHAdBeIwk00mihicVKfyyAkhVqLTm2V6R37fes3nTv1votHfGLH9nF0iFWE8ag9fQpTBQ3lFASXnu88zNTA2qmN9EDnG5SGPvDv5tmJrylgAFV1x15rBbsshuS74lIm0QP5VYQ3PyhqYefDRpxHMcIzcnnd54Hdh5NrOoLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kW3iCE9fvyJLe9hO7RfRRXOuY/QBfyW7eYF2CsurNMU=;
 b=UG1rKIkZkbhAkORRCxDol9hed9azsbjGim89dcmK69dX+bxo37XcpKJkcn3q35wCUCqvTZ+zCFEoIrzsd64Vww1NwXjgwNNLgQ6EaPxMrma6imrsWpp5fc5ahM1uRGZPEaGL8lL6Jv/PQ40EHhiRnIe+ktk9dnPe2gOnGZMXSGVsAA3voKb6yo+IjEwb13fN+kQh/0dI9zF8FjQLvSprTzuH+IrwQUDyLN/kfikpcVDbZW5iFrx/dUguz53J2053XLnJqACijAr9YNqVZY5gZzomosE7qFO3wCB+ePQSUEo9maTh24YpgvAHGqlUgLVrPGR1oR+viSg7xioRmC+LDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 04/12] libxenguest: avoid allocating unused deferred-pages
 bitmap
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <61ff4f26-a9cc-d123-98a0-be6c23f21e9b@suse.com>
Date: Fri, 25 Jun 2021 15:19:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0033.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:51::8) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fa0caaec-8a0a-47ec-6d9c-08d937dbd4ab
X-MS-TrafficTypeDiagnostic: VI1PR04MB5901:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB59012318B3BEA6FB88C068E2B3069@VI1PR04MB5901.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fa0caaec-8a0a-47ec-6d9c-08d937dbd4ab
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:19:15.6694
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5901

As with the dirty bitmap, it is unnecessary to allocate the deferred-
pages bitmap when all that's ever going to happen is a single all-dirty
run.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The clearing of the bitmap at the end of suspend_and_send_dirty() also
looks unnecessary - am I overlooking anything?
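For illustration only, the guarded-access pattern this patch introduces can be sketched in isolation. The helpers below (bitmap_alloc(), set_bit(), test_bit()) are simplified stand-ins for the libxenguest ones, and defer_pfn() is a hypothetical function written for this sketch, not code from the patch:

```c
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Simplified stand-ins for the bitmap helpers the patch's callers use. */
unsigned long *bitmap_alloc(size_t nbits)
{
    return calloc((nbits + BITS_PER_LONG - 1) / BITS_PER_LONG,
                  sizeof(unsigned long));
}

void set_bit(size_t bit, unsigned long *bm)
{
    bm[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

int test_bit(size_t bit, const unsigned long *bm)
{
    return !!(bm[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)));
}

/* Mirrors the guarded set_bit()/counter sequence in write_batch():
 * with deferred_pages left NULL for a plain (non-live) stream, each
 * deferral becomes a no-op.  Returns the updated deferral count. */
size_t defer_pfn(unsigned long *deferred_pages, size_t *nr_deferred,
                 size_t pfn)
{
    if ( deferred_pages )
    {
        set_bit(pfn, deferred_pages);
        ++*nr_deferred;
    }
    return *nr_deferred;
}
```

Since a plain stream performs exactly one all-dirty pass, nothing is ever deferred, and the NULL guard preserves that behaviour without the allocation.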

--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -130,7 +130,7 @@ static int write_batch(struct xc_sr_cont
                                                       ctx->save.batch_pfns[i]);
 
         /* Likely a ballooned page. */
-        if ( mfns[i] == INVALID_MFN )
+        if ( mfns[i] == INVALID_MFN && ctx->save.deferred_pages )
         {
             set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
             ++ctx->save.nr_deferred_pages;
@@ -196,8 +196,12 @@ static int write_batch(struct xc_sr_cont
             {
                 if ( rc == -1 && errno == EAGAIN )
                 {
-                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
-                    ++ctx->save.nr_deferred_pages;
+                    if ( ctx->save.deferred_pages )
+                    {
+                        set_bit(ctx->save.batch_pfns[i],
+                                ctx->save.deferred_pages);
+                        ++ctx->save.nr_deferred_pages;
+                    }
                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
                     --nr_pages;
                 }
@@ -665,7 +669,8 @@ static int suspend_and_send_dirty(struct
     else
         xc_set_progress_prefix(xch, "Checkpointed save");
 
-    bitmap_or(dirty_bitmap, ctx->save.deferred_pages, ctx->save.p2m_size);
+    if ( ctx->save.deferred_pages )
+        bitmap_or(dirty_bitmap, ctx->save.deferred_pages, ctx->save.p2m_size);
 
     if ( !ctx->save.live && ctx->stream_type == XC_STREAM_COLO )
     {
@@ -682,7 +687,8 @@ static int suspend_and_send_dirty(struct
     if ( rc )
         goto out;
 
-    bitmap_clear(ctx->save.deferred_pages, ctx->save.p2m_size);
+    if ( ctx->save.deferred_pages )
+        bitmap_clear(ctx->save.deferred_pages, ctx->save.p2m_size);
     ctx->save.nr_deferred_pages = 0;
 
  out:
@@ -791,24 +797,31 @@ static int setup(struct xc_sr_context *c
 {
     xc_interface *xch = ctx->xch;
     int rc;
-    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
-                                    &ctx->save.dirty_bitmap_hbuf);
 
     rc = ctx->save.ops.setup(ctx);
     if ( rc )
         goto err;
 
-    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
-        ? xc_hypercall_buffer_alloc_pages(
-              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
-        : (void *)-1L;
+    if ( ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN )
+    {
+        DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
+                                        &ctx->save.dirty_bitmap_hbuf);
+
+        dirty_bitmap =
+            xc_hypercall_buffer_alloc_pages(
+                xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
+        ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
+
+        if ( !dirty_bitmap || !ctx->save.deferred_pages )
+            goto enomem;
+    }
 
     ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
                                   sizeof(*ctx->save.batch_pfns));
-    ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
 
-    if ( !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.batch_pfns )
     {
+    enomem:
         ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
               " deferred pages");
         rc = -1;



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:20:02 2021
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: [PATCH 05/12] libxenguest: complete loops in xc_map_domain_meminfo()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <0d824d4b-0696-baca-a3ef-95ee641e4d08@suse.com>
Date: Fri, 25 Jun 2021 15:19:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

minfo->p2m_size may have more than 31 significant bits. Change the
induction variable to unsigned long, and (largely for signedness
consistency) a helper variable to unsigned int.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
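To see why the wider type matters: an int induction variable cannot represent indexes beyond 2^31 - 1, while unsigned long (64 bits on the LP64 platforms assumed here) covers the whole p2m. A minimal sketch of the per-batch count computation, with batch_count() a hypothetical helper written for this illustration:

```c
#include <limits.h>

/* Simplified form of the per-batch count computation in
 * xc_map_domain_meminfo(): at most 1024 PFN types are fetched per
 * batch.  Assumes an LP64 platform (unsigned long is 64 bits). */
unsigned int batch_count(unsigned long p2m_size, unsigned long i)
{
    /* The result is at most 1024, so it always fits unsigned int. */
    return (p2m_size - i) > 1024 ? 1024 : (unsigned int)(p2m_size - i);
}
```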

--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -40,7 +40,7 @@ int xc_map_domain_meminfo(xc_interface *
     xc_dominfo_t info;
     shared_info_any_t *live_shinfo;
     xen_capabilities_info_t xen_caps = "";
-    int i;
+    unsigned long i;
 
     /* Only be initialized once */
     if ( minfo->pfn_type || minfo->p2m_table )
@@ -116,12 +116,12 @@ int xc_map_domain_meminfo(xc_interface *
     /* Retrieve PFN types in batches */
     for ( i = 0; i < minfo->p2m_size ; i+=1024 )
     {
-        int count = ((minfo->p2m_size - i ) > 1024 ) ?
-                        1024: (minfo->p2m_size - i);
+        unsigned int count = ((minfo->p2m_size - i) > 1024) ?
+                             1024 : (minfo->p2m_size - i);
 
         if ( xc_get_pfn_type_batch(xch, domid, count, minfo->pfn_type + i) )
         {
-            PERROR("Could not get %d-eth batch of PFN types", (i+1)/1024);
+            PERROR("Could not get batch %lu of PFN types", (i + 1) / 1024);
             goto failed;
         }
     }



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:20:29 2021
Subject: [PATCH 06/12] libxenguest: guard against overflow from too large p2m
 when checkpointing
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <09e81b91-84de-6e49-9a62-eb3a6f392954@suse.com>
Date: Fri, 25 Jun 2021 15:20:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

struct xc_sr_record's length field has just 32 bits. Fill it early and
check that the calculated value hasn't overflowed. Additionally check
for counter overflow early - there's no point even trying to allocate
any memory in such an event.

While there, also limit an induction variable's type to unsigned long:
there's no gain from it being uint64_t.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course looping over test_bit() is pretty inefficient, but given that
I have no idea how to test this code I wanted to restrict changes to
what can sensibly be seen as no worse than before from just looking at
the changes.
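The length check being added can be illustrated standalone. checked_rec_length() below is a hypothetical helper written for this sketch, not code from the patch; it computes the 32-bit record length first and verifies the multiplication by dividing back, as the patch does:

```c
#include <stdint.h>

/* Hypothetical helper mirroring the patch's check that
 * count * sizeof(uint64_t) still fits struct xc_sr_record's 32-bit
 * length field.  Returns 0 on success, -1 on overflow. */
int checked_rec_length(unsigned int count, uint32_t *length)
{
    uint32_t len = count * (uint32_t)sizeof(uint64_t);

    if ( len / sizeof(uint64_t) != count )
        return -1;               /* multiplication wrapped past 32 bits */

    *length = len;
    return 0;
}
```

The separate `!++count` test in the counting loop serves the earlier check: it catches the unsigned int counter itself wrapping to zero, before any length computation or allocation is attempted.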

--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -450,7 +450,8 @@ static int send_checkpoint_dirty_pfn_lis
     xc_interface *xch = ctx->xch;
     int rc = -1;
     unsigned int count, written;
-    uint64_t i, *pfns = NULL;
+    unsigned long i;
+    uint64_t *pfns = NULL;
     struct iovec *iov = NULL;
     struct xc_sr_record rec = {
         .type = REC_TYPE_CHECKPOINT_DIRTY_PFN_LIST,
@@ -469,16 +470,28 @@ static int send_checkpoint_dirty_pfn_lis
 
     for ( i = 0, count = 0; i < ctx->restore.p2m_size; i++ )
     {
-        if ( test_bit(i, dirty_bitmap) )
-            count++;
+        if ( test_bit(i, dirty_bitmap) && !++count )
+            break;
     }
 
+    if ( i < ctx->restore.p2m_size )
+    {
+        ERROR("Too many dirty pfns");
+        goto err;
+    }
+
+    rec.length = count * sizeof(*pfns);
+    if ( rec.length / sizeof(*pfns) != count )
+    {
+        ERROR("Too many (%u) dirty pfns", count);
+        goto err;
+    }
 
-    pfns = malloc(count * sizeof(*pfns));
+    pfns = malloc(rec.length);
     if ( !pfns )
     {
-        ERROR("Unable to allocate %zu bytes of memory for dirty pfn list",
-              count * sizeof(*pfns));
+        ERROR("Unable to allocate %u bytes of memory for dirty pfn list",
+              rec.length);
         goto err;
     }
 
@@ -504,8 +517,6 @@ static int send_checkpoint_dirty_pfn_lis
         goto err;
     }
 
-    rec.length = count * sizeof(*pfns);
-
     iov[0].iov_base = &rec.type;
     iov[0].iov_len = sizeof(rec.type);
 
@@ -513,7 +524,7 @@ static int send_checkpoint_dirty_pfn_lis
     iov[1].iov_len = sizeof(rec.length);
 
     iov[2].iov_base = pfns;
-    iov[2].iov_len = count * sizeof(*pfns);
+    iov[2].iov_len = rec.length;
 
     if ( writev_exact(ctx->restore.send_back_fd, iov, 3) )
     {



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:20:56 2021
Subject: [PATCH 07/12] libxenguest: fix off-by-1 in colo-secondary-bitmap
 merging
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <cec4594f-f346-c951-b0aa-55d7a56cefbc@suse.com>
Date: Fri, 25 Jun 2021 15:20:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0179.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1c::23) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4070a9d4-0855-4e2e-bd13-08d937dc0aa7
X-MS-TrafficTypeDiagnostic: VI1PR04MB5901:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5901B279EDD648B0B37D6832B3069@VI1PR04MB5901.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:179;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gaRmmRlmj6cS10H2cq4aQfPEKAb/2vgRv5+FiciYO8zj4dc1il1q2g1x7oIql/3twKel/fWtOXMlY1/3m3q1vtfT2U6cQTm6JpjGMIH4XasHBd6c+vhvwfE1fmZNNa+/60Da67fE6iAa00c5QLU72Zmo89h42UPwP1O/cbfkv5W3lwypQcM8PUbpzL6VpGuCL6JmAcVqtihK8cvDLo1wq/YI1+G+L0FKvao6cQusVmgahkA+NKwz3XtCKebC6DskZH1XtO+dL+Mq1LNPN24J3PtuMQUJ/gEHQl/AISRdaviZesE/A7GpeUteOSbHtU/7S5O2Vo3/UytnAwrunVGywHwiyp6LUmSsOs7dif6mtAOz58iOWa0EMfLp9uXRBPDDXkXdeR0VG/geyLwJdX+U39QBBXdRKdcAM+iW/2ILB+J25BBZQeDJpWwC0EED6R1JEBzc6NsxWjNSU+QkG/OAcG6f9Rc/Te+DMRUxkzmpZmmeXSkeqYjcfhhBBvBAonbnH+Xepp4TRDUMlzoFjGqLo4Mvn2NVc6IzcgNfF8GajlkHhEGYo0g6jhkWLrZEnugxFp+9bhbmJdgFJV4OzOFrQMaKEvuL4RKgrvRm8YdR1rCeq9pupdMAod6cEbruWXbCR5KIl6DOOo2RHjLTyvZAhoCB7ULkNW80T1SQTmur8GcRaUyF9fLT2wdDUOt+gAMv
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(396003)(366004)(376002)(39860400002)(136003)(2906002)(66556008)(66476007)(2616005)(956004)(31696002)(86362001)(31686004)(66946007)(36756003)(186003)(16526019)(26005)(16576012)(316002)(8936002)(8676002)(6486002)(6916009)(4744005)(5660300002)(478600001)(54906003)(38100700002)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MGJhUGszQnJoSFY2WWF2NnhOVzRzQVJMdE5kb3N5R04vbStjcVdWTktZbnNJ?=
 =?utf-8?B?QUJzTUFlRlNDYU93NUcwWXViQjljbWpwMWRyTzZMUWRuVG5OYUN1aEdpL1BY?=
 =?utf-8?B?OUptWHpaMUVtYzNHL28rdmVRVU9iSUI2eVhuWmMxOTVqOWZoSHBOWUtLQ1ZG?=
 =?utf-8?B?QzVCS2ZjQzUwaEs2M1ZwekRVU0hvblJMRE02TW4vL3lVNFE3aDNuZFB4N0M4?=
 =?utf-8?B?eGVtT1ZMTkpiK3JibjdNZzlqdVhUeU9Ray9SZnE4N29EY1R5ZG1GaDM1SHI1?=
 =?utf-8?B?bVJWSlloaFVMbWt0S1lwUzIzWjhFamFTeXo3MTQyTWhySzZKalRJUUc0Z2Y0?=
 =?utf-8?B?MlRpWndZVGZBemEwemIwNG9mcVhFMmgwQWwwcEFMNUM5M08yZWh6RDJSUFli?=
 =?utf-8?B?bFBreHZwSHZ3dzNqNUxlMHgzaHZ1TzgyOGVSME1MaGF6VUhubHM4NUNmUUxZ?=
 =?utf-8?B?SXNpbitTRU5xMUZJSTNvdlpBNHhpaGkyZUIrNFYyR2RNYWdjSC80QlQ2a280?=
 =?utf-8?B?MDFtNXBzTU12S0FOZWtHT0JqT0h6ZXlvLzN3aldPeTNMRWltb21kM0VrZGhh?=
 =?utf-8?B?Z0pOSENJTzR3bG9BY3h0RjFBdFNaNVY5bWxVOU13QXVVQ2N4MkhOcnVBNGcy?=
 =?utf-8?B?WWErdGs0MGRZQ204d1FoV0lWR0xsRGpmdDQ5RlhNR2ovTTVxZ3E2VVR1amJI?=
 =?utf-8?B?NjdxSm9NM2c1ZTNIWWk3L3N2RDV6U2ZsUGV4dC9tK0k2Ykc0elAzSmpzVEVI?=
 =?utf-8?B?UmZTRmpjUmltdmlhcldhSFNKSVM3ZzZxK29yV2xCTytISGtzcWhEQllDeFVF?=
 =?utf-8?B?L24wSXJ6Zm83dGdiV2RqamdIQ2RCbnowWXNPRS9oYjB4VDcxWG1Uc2xRbk1j?=
 =?utf-8?B?dWZhTXU1WnF2ZjBSNEYrNTM3YlRGVVZJUVJoTHhXcG1ubHV0eG90SUdEVGNp?=
 =?utf-8?B?aHB5WmxhTTZTVW9CU2l5WERncy9xWmJRc3ZtdXF6L1hpL1htNGc3TTNmL2xa?=
 =?utf-8?B?N2M1bGJnSWpTS21pVHFDZ2k5Ylp3QWdEd0hraWpRRmU0elZseFFZaG1UOXZT?=
 =?utf-8?B?WC95d3JieGxTK1FpMkZBRGFvM0NUN1JvL1JkTWYxbno0c2RuKzZlanVVM3JS?=
 =?utf-8?B?cXhOSm92aGNTaEw4TEZ3Q2NXQnJTQzZNempQcFZ2Q2I4UFZ6RmsvMDkzR0lG?=
 =?utf-8?B?eTZlejNWN0oxY2ZwTG10NjhzWldoQzRWSXUvcy9hSk1rWERFTWNZUkFuY2lu?=
 =?utf-8?B?VElUeHlBaTM4YjVYRWpsNnpOMXplMjRrSnRqYUp5Z1FIVWVaOUxacFAxWVp6?=
 =?utf-8?B?TXg3Rit5K2Q1a2pJZzJuU2QyVTFNNjhGd1BVWGxzaFZiSkZzWDJmZWVudUtj?=
 =?utf-8?B?VEc3NCt2MUpjblVTM2xOZ25WSUp2QkM0d1JkL0pkMFo4aFdWcmEwaCtiZUFJ?=
 =?utf-8?B?TEdGL3Q5ZGZIQzlBWjIwQ3FLR0JOMWtjYkxCS3I5RjYyNHlDMGo0V2sxV2I4?=
 =?utf-8?B?MTNhekNnb1pxY1ZlSCs5YWg0QUg2SURFN2E5MmlWZlYvK3hyNzFDTDRRSkxo?=
 =?utf-8?B?ZU9MQkFkeEJPaWM2MkFlQlJYZ0xlYjBkdGZMQmNKamg5ZnoyNU9lOGdtM0N2?=
 =?utf-8?B?M2JDY0RrKzVxNkJWcVBmem1tUWUxQTUxY1RtWnprQkN3TVFkbzhlWUk1NGY1?=
 =?utf-8?B?NHRTWk1iM1QzMk9BNlJvdnByZXo2TTdlTGIveFg5KzhDMjdYZkhqMHpLOHNL?=
 =?utf-8?Q?o717k4t/vunOPL82vAqfYzcFaCjab/c/CRVwt0w?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4070a9d4-0855-4e2e-bd13-08d937dc0aa7
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:20:46.2272
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aqVylmF7/0hpWbws4BBiDivHjPX7xmBt0pkbMRsYH8qk41pmU2YVJhtG4J2phSEhaK8R211biXSUmE4eY0IlJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5901

Valid GFNs (having a representation in the dirty bitmap) need to be
strictly below p2m_size.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -614,7 +614,7 @@ static int colo_merge_secondary_dirty_bi
     for ( i = 0; i < count; i++ )
     {
         pfn = pfns[i];
-        if ( pfn > ctx->save.p2m_size )
+        if ( pfn >= ctx->save.p2m_size )
         {
             PERROR("Invalid pfn 0x%" PRIx64, pfn);
             rc = -1;



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:21:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:21:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147212.271198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlm1-0003ad-Tm; Fri, 25 Jun 2021 13:21:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147212.271198; Fri, 25 Jun 2021 13:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlm1-0003aW-Qj; Fri, 25 Jun 2021 13:21:33 +0000
Received: by outflank-mailman (input) for mailman id 147212;
 Fri, 25 Jun 2021 13:21:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwlm1-0003a8-8o
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:21:33 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 280a3dba-d588-43ab-93e7-58afc64e532b;
 Fri, 25 Jun 2021 13:21:32 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2052.outbound.protection.outlook.com [104.47.5.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-36-9MKHCehGP_eK2JfivLeiCQ-1; Fri, 25 Jun 2021 15:21:30 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Fri, 25 Jun
 2021 13:21:25 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:21:25 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P192CA0009.EURP192.PROD.OUTLOOK.COM (2603:10a6:102:56::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Fri, 25 Jun 2021 13:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 280a3dba-d588-43ab-93e7-58afc64e532b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624627291;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wy8FFzMhJLtSUZVcqiSIXfxNx8LKO8CGNT1Vso9BXnI=;
	b=nqMZWISyAmTsVdNnW38Cn60dfeLZnCDS+dXQNGOT5wDW6mRYL4c9kFfarVezvi5UZUHz50
	biPg8TUXlLfLCMAaGWBxkm7PuDRzZcC8zVoITFW54ZRagwU933pjeqYvYmzJjdfLLOQ1Fn
	8I7e4zcpQQkpFELwafq3D8qHmlYSb8g=
X-MC-Unique: 9MKHCehGP_eK2JfivLeiCQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hITg0nLH8FX36vG+D5jEqUK+gRj4dHHHtjvjWuaI6VG2nGyMEA/M/w7NftCeDa/lDWdk0L/51FHA12J/Rskt87kDwCAZR+0Bpi/4SbfXMbCjd41+HOt6DkwWuz5O2KiBdrjqPo/Bl567Ohhgyr97jgtt4ilOjOcTxzI6gs1Q2VMfNcmVjD2RkwEoY/Fp6NueC8KRjQwsPmdUhWq7YYaKVxGc2H1CAp4DOSlhWUQxCmcP27tJPSjfPsnjvIv6YVFFXWpRVpkZJOoptku58zC+AMmCkY1taq/xfzaPt6kO4CKIWz/pCt5+58Ls4qLLHA/yuOkKdMbtMWCKDItjx7//zQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wy8FFzMhJLtSUZVcqiSIXfxNx8LKO8CGNT1Vso9BXnI=;
 b=ZinHXSqxSVhYHFx3OZw2itEst3JyEsRHnobEngK6WPHFDF5Ka9np8Xo78CzcMaPsRYQYD5/7HJ9BDvUmtxTTGI5lpGzMvy7aZ4bN8w0HAIceqJjY3CcI8HGJTCoZTLTfkM6dpgZDOdLaAc4QULArxnAiKf4iLs9L0pCMOetfxtrE7MLY/nSWwfdZD+uQLrqdCkduPyKo9Sj0EX/e5pS9oJB1CVpXX1Pd09KXZf1a6zPRcWiPLjd1ySYSRRaGqK2DZaRV6btblYoCcHiEbmNlAZ3K2AGTKoBNeo2zKnyqoKeYcb+G1TykQl87A9POINdM65zyX+yJ3IJH+t3pkQn/Zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 08/12] x86/paging: deal with log-dirty stats overflow
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <92968d22-f3e2-4eb2-59fe-b1f638c60133@suse.com>
Date: Fri, 25 Jun 2021 15:21:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P192CA0009.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:102:56::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 80f312f3-c034-41f3-7614-08d937dc224b
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3390:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB33900F821539E4A6C2DDA161B3069@VI1PR0402MB3390.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:167;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OEJLIWAxy/5iSwwdH0/avC12CnTX3yttMGvS/bAiBhAhsHocA7ZNHHrYTcwG2A38um1RvSGYNf1zWeacaeBNNreIYbDQ7bbNQofUoJU4Gkh7jjhx67nfYasXTJ7vcQRq0hsxqOR3BHfNjyUx56IpW9uF+ZT1JURWz2GuKdIaO8j7Mc+dbXKJ1S6nqvQoBbMHZmWvSu4pOuVZBYZqPfF/gGUgD4v/pLCWeF4+mHd0B8lKx9hB4730zYFAX0dN59N5cgOlpg3M1eBW0J+jPHUYCXF65jDDDwY9y1WGjIJ1cMOF/7ZCyOu2RuSWaSOzZGSiWsTkEU9GN7N24R6EdOFmDAsF2YUnqiIMvCmvF/+BjC1gvFAPzBewGJbZhLhFyBLBQbi2+7B7+X28WFX0Jf/8Lo3DuNOFYu0SrGbr/6voTCreBzHfBF+uzUHgp4PAZzSj5YPDNBYzWXAsAK/3UGr2BG0gH3RRRU6lPbycpGUGzAf7Hy6IvwFLREq5/0UXmQtB7iDK4ebd53yavXnWOeDrqwLDMk0NDV0yWajJOX8VATvwge5Tlzna8vixKKWU5Ni3nE+yrFpbciRZ2zN5i+zc/jEtYCz92627F9cynIWphvdPdSIOip9FCwDFIx6FxzXZROI8bIB4lSjcVFAmAxp370N+qoa4pvc5hdBl6E3nDgbdBu3lsL7b0PqfEcrciFABgx3cs0XDUdaDNoUDcMNjaA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(136003)(39860400002)(346002)(366004)(396003)(16526019)(36756003)(83380400001)(26005)(4326008)(186003)(31686004)(8936002)(478600001)(54906003)(86362001)(2906002)(66946007)(38100700002)(16576012)(956004)(6486002)(8676002)(5660300002)(6916009)(66556008)(66476007)(31696002)(316002)(2616005)(14143004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WGlMN2ZGd2VPa0NHdUxqMkY1RzlpcnBmYnl1NFBNQlo4b3RjUDJ4T2Q5Z2dP?=
 =?utf-8?B?U3BqRG1tVEhscmRPd283YkFYYXk4UW9aOWFBYVliSStJdmtqMDdMWGhPYjVJ?=
 =?utf-8?B?Q3haWmNnYis3R2x1dzVHYXlvQnUxNHVkdzFDeDFweHc4Q3dTT2p0S01PVm9J?=
 =?utf-8?B?WEtQV21VeFZwKzVzaHptQUN2M2UrY1VLRW82TDJ4STRCVGVROFdXTWVTK0d0?=
 =?utf-8?B?MnNBeUZacUx0WGQ2ZFNuc3RRd1pvNUZEb1JWNHd1M0grWDByZENXa0cwL1Bs?=
 =?utf-8?B?UzV2TEw1bW8rM1V5dXV3Tk9TdEJ6RnFUTDB1Rk0vcm9uQ2J1ZVJEak1XVkJU?=
 =?utf-8?B?bGpKQmt0WkorQnpPZ1JuYnRieXNCbVQzcUt2N3dSd09jVjdGaG5GbytwVkdW?=
 =?utf-8?B?eEpMMnVTcEt1OTkwNlhNcGp4RTRSQzI2MjJkR0VWYThNWml2WVo2RWN0UytH?=
 =?utf-8?B?QlN2SWpUbjB5bEFpT3VvaDJGV0g2aWRUYVJMOXpGd3FjbWNYd3U2MVVqNkN1?=
 =?utf-8?B?bnZWVHVJajJ1QUZ2WnZ1Ymt1dHFLTUs0dXJZN2N0Z3k1SFpZc0FiUmNsZWF6?=
 =?utf-8?B?dFE5dDRyMFlFUldGL3dVZnZtanU3MEFCNXZVYWFBSDJaVWlGOEF4N0FoQ0d2?=
 =?utf-8?B?eWtGMTFJQXFzT0hHaW1tS1pHRzdXSVl1SCtXMFNxeGVpZnpiWkRRQkQ4cDZa?=
 =?utf-8?B?ODJNNVp2M1p6Q2pkMUpBdGhZUjJ5a2wzTjFUUG5JWnR6WjdUL3pPOG9UcEt6?=
 =?utf-8?B?STFTWUJKclFmS1BuK0thMnhIT0FzSkpFUTVWcGRaUmJuZzN6V1JwY0M3andi?=
 =?utf-8?B?cURuTlFSVGZmdlZjQm9GQ2pkelZGVFllemVHaG4rZnJWSWRXRzg3YW5BY250?=
 =?utf-8?B?alc0MEVQVzJGakJlZ3AxMTIyR0g5N1hNaEJwdUZTU0d3Q21zamxSd096bS93?=
 =?utf-8?B?S3c2T2ZGZU5NMlJGNGFza0NLQ0QySWt3WnFjdXRhUHpoOHJFRTR4KzRpVEF6?=
 =?utf-8?B?THBFaEk2QldKQ2ErVEFKZ2VqaStqU29Xam94TlIwR011ZFRyQW1RMW16RCtk?=
 =?utf-8?B?UTJwcWhvWk5uaUJiK096YW1uditNTzBCT2VpQnczNStNYit6TFA3QUxLVmJv?=
 =?utf-8?B?RHlydnN2bVdBdklyZW9zd3NVMjFKNVlnODU5N0QrRCtESHNoZ0hkM0FnSzBD?=
 =?utf-8?B?SXdVYkx5Q090cTdvNXdOMVdDaEF3WW9ZdEIrTWc5TnV5ODhiaVF0eFl6eVQy?=
 =?utf-8?B?bXYzVUVHZW9XeXJibWYvNTg0TTMwS3crVXppWGtMOGpzOFhIckJVYlRzQndJ?=
 =?utf-8?B?OGRwOHJLRTdhWGYrb2pEQTZPR1NRbUlMM1FIVXNjTFJ4bnpReGlHcGk1bEp3?=
 =?utf-8?B?SlY5NlB4UTRUQ0hDdmw0YTRsY1Z6VW1PMmlCL2ZJUHVjemRQbmpBYmswUXYw?=
 =?utf-8?B?bW1oRFBJS0g5ZmV1NGtTZlRwdWdLUCs0NUgxaUxwVHNNdHJrNlVpNGNpSTV2?=
 =?utf-8?B?dENiQWdBd3JVRzZ6Zkw5RjEzSGE1dlRHY2NtM3FnQUZVMEpiZkE2VDBNSk1I?=
 =?utf-8?B?YlMvVnJ4bVV0Zk4xM0gxT2tmSlh1YXF4UEF0MWdZZ2FGOU9DMFRJeGxDTUh5?=
 =?utf-8?B?bEdLa0xaQ0VYVVhtN1UvRUZWVVdISWNnU1djc0tmVnpPMGgvcDVDOEJpMFls?=
 =?utf-8?B?eWhGaE0vTFFkZ0oxZVZlWUJpNVJpVXY0R3Q0bjRWQUlPWXdOTEVnejJYUGRj?=
 =?utf-8?Q?kXEyCUHubfcW1tuDAW3bEEy8aHhq7Gu6qwIoEHx?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80f312f3-c034-41f3-7614-08d937dc224b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:21:25.8518
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JMFUCFZ2+7dTIgbJPDM//0gMPrxElBUVvuQC5PBZY2uV6ovHDHJDpspHD3t4nj3WcMpMINm8A4m8r83/7CgTDw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3390

While the precise values are unlikely to be of interest once they
exceed 4 billion (allowing us to leave the domctl struct alone), we
still shouldn't wrap or truncate the actual values. It is particularly
problematic if a truncated value is zero (causing libxenguest to skip
an iteration altogether) or very small (leading to premature exit from
the pre-copy phase).

Change the internal fields to unsigned long, and suitably saturate for
copying to guest context.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -446,14 +446,16 @@ static int paging_log_dirty_op(struct do
 
     clean = (sc->op == XEN_DOMCTL_SHADOW_OP_CLEAN);
 
-    PAGING_DEBUG(LOGDIRTY, "log-dirty %s: dom %u faults=%u dirty=%u\n",
+    PAGING_DEBUG(LOGDIRTY, "log-dirty %s: dom %u faults=%lu dirty=%lu\n",
                  (clean) ? "clean" : "peek",
                  d->domain_id,
                  d->arch.paging.log_dirty.fault_count,
                  d->arch.paging.log_dirty.dirty_count);
 
-    sc->stats.fault_count = d->arch.paging.log_dirty.fault_count;
-    sc->stats.dirty_count = d->arch.paging.log_dirty.dirty_count;
+    sc->stats.fault_count = min(d->arch.paging.log_dirty.fault_count,
+                                UINT32_MAX + 0UL);
+    sc->stats.dirty_count = min(d->arch.paging.log_dirty.dirty_count,
+                                UINT32_MAX + 0UL);
 
     if ( guest_handle_is_null(sc->dirty_bitmap) )
         /* caller may have wanted just to clean the state or access stats. */
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -190,8 +190,8 @@ struct log_dirty_domain {
     unsigned int   failed_allocs;
 
     /* log-dirty mode stats */
-    unsigned int   fault_count;
-    unsigned int   dirty_count;
+    unsigned long  fault_count;
+    unsigned long  dirty_count;
 
     /* functions which are paging mode specific */
     const struct log_dirty_ops {



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:22:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147213.271210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlmU-0004Gz-BG; Fri, 25 Jun 2021 13:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147213.271210; Fri, 25 Jun 2021 13:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlmU-0004Gr-7p; Fri, 25 Jun 2021 13:22:02 +0000
Received: by outflank-mailman (input) for mailman id 147213;
 Fri, 25 Jun 2021 13:22:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwlmT-0004Gh-6o
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:22:01 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84cdf138-2e4b-46d1-a03f-d1c54c9931a4;
 Fri, 25 Jun 2021 13:21:59 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-sO_3WyecO7G_D3lXkY5Abw-1; Fri, 25 Jun 2021 15:21:57 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Fri, 25 Jun
 2021 13:21:53 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:21:53 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0033.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:53::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Fri, 25 Jun 2021 13:21:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84cdf138-2e4b-46d1-a03f-d1c54c9931a4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624627318;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PbQU7L3blROF4Klwca7Ybd+xUlO1Dcp9UzFdRq08MR0=;
	b=FOGgr17uLUjtBjs2F1/wa4MPF0imzwETTWH+zFDwi2Tdv/dJMxxQ6lkJ6buSqx/YbP/zib
	sT+2WkKVeDJXemFqkSAFixzT/ZtUKpziGc1x7VCYe9Elaq6wc0BEP6evzdSnT6M6upEobZ
	a4KSlQoklEWGOgQj6HDHUZb/rPr+2xM=
X-MC-Unique: sO_3WyecO7G_D3lXkY5Abw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lsp6pvv3guHOqAtB/HnwkkKJhiFHZ9nC8kgH5RRUCTZ/H09RNWk2/VlNhlkbUhj4AHnMcCTS5BaoMtjOzsqrMHe65BNk+30cx1ZD8cpe2hajYyxd3jfKNZRmNkfFBUUCaS/VP2tFb5kABlVaHv/Tuk4tMlwW+IqYqa8hl5sN1FS1/7ZwKuWIWHdNcmw2FMSe7ZDUXQVZ+lr1tJMbfShD7x4R6nqVuwF67DbGDJRy1OIfmJUXOLtAbI4vSX+sPGyI2D9lVkcbUGVwDvTsBhKkaw5sehMZFihgwAOmaR5yYG9ABx1m3pFfCicx2jBVQcz6ufAKWhlPRhf+1sh60RAGTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PbQU7L3blROF4Klwca7Ybd+xUlO1Dcp9UzFdRq08MR0=;
 b=KOux2M4bh0KVcZNpd8aEnLDVwnicoJvv6Rxyxav2/FnGMDdiyDr9nT4TQt8YbvLbpnFhUT3OmS4frnOCjwxlFc7L0Y0J0KtD+UwLL8HXa2K5kKAWUMyW/2QhvpdiIoFNZdlAQBUlwoGzrdYryWsC9OUuWRNiM+WOZxoJaXnB7wBapa1M/S2iczM2ad8dEGYCTeVkNCVTmTv5bEuIAe9wZ7cnNhe/10pLitzUbUfdp/xhY67NnC2o3CWQ87pu6YSaLb4DW6USIG5qfhCNTmksVVOEgZRrI8t0xgmvRQrDHefI1G0BgGL5/UwotyOmiiTZJazikQ3/L5g5CUQdOApAlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 09/12] x86/paging: supply more useful log-dirty page count
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <7871d13b-2ad3-2eb4-8148-dbf6a88fdcdd@suse.com>
Date: Fri, 25 Jun 2021 15:21:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0033.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::8) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8a0655b0-48ed-4d07-bdd8-08d937dc32c4
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3390:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3390174A8D0A48E86BE35855B3069@VI1PR0402MB3390.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7yZyeSDTHTO/a+dgTlW7Lcua7qXCmfqPgcRncxMGQM/mag2RWzjI+ZI5Ar8OPvPFr8P6kj8HK1DDWzb/4+6Sms2FWSnxufkN+NuGi8LnaRATc7ag5RBvpJ7+tFzGEzXKd8hl4dXwNpidAGcACeGHUFOamHXZoI+6Y8B0gYb54/Ki2n/hSomIvbKdq5z5T14WyB47m5SXbH7sH0qLIr+PupiKiZDbeT7Ukdy698c7TOXVeLzfV5girQxB+NPWYYHIUAa6eQJsKUeMQPQXtWdhOB+XOkQ1Cj6dOiPM59SBPLLQ4xF3DmVJGmdC9T8uFw4u4XXQ4RS8/fDC7sgaE5GF1iewFZxmxKe/0tJKJ7rAy9Z2Bn4sSvXCeNWUHlCEX4FhtfdMlIYE8oD0mIicQihQdSy3lFkYSreH9ZmTZKDQJewFVMWGxdu5wHmlIlE6N/p/akfTA3AOYdnQ2i5QBu0NIz8TnPWtuLc1nlYJLDI1e8NHUHLW4A2VZsCOEMBaI+Zeln3wnkGNS9tGxnG0HMmFKMKuxqsU1Y9pHQ37Rl4IVW/VfnoqSVDa3ead5BqwrRzpFDbNSUt+NIw1GE2Gs7g05nLPonLz+rQ0HFgrH+x41pnf0Se9YJTvL21O4xOi0ALHgzn0BR61GPjpSxfo5Z1PENxpNraETnO0dX+JTdNJbOWGa5ySvBYs9OZ96sVNQK9Qmp4xkEzPB6M7T5Sd8oB9PQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(136003)(39860400002)(346002)(366004)(396003)(16526019)(36756003)(83380400001)(30864003)(26005)(4326008)(186003)(31686004)(8936002)(478600001)(54906003)(86362001)(2906002)(66946007)(38100700002)(16576012)(956004)(6486002)(8676002)(5660300002)(6916009)(66556008)(66476007)(31696002)(316002)(2616005)(14143004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OUQ3RWVJTkxtTGpQZjAwWkt4SW5JSGNBNS9abUI5Rm9HSWdCa2NyK0tRTGJq?=
 =?utf-8?B?VXBNTE9kRDZnUEJjZEJMZDY3Q2VBcko0bE1BTWFTSXFhbDVFTnFNUXA0Mmo3?=
 =?utf-8?B?clNmRDQxWXRKa1l1OE1mcng0QXRyOHlpV1h0YXhYcmlVOWp6L0E0ekxrZ3cw?=
 =?utf-8?B?SXRUNG9yVEhVb200Z0NqSjlmeDVPZlVNbGxqMHNwaHRkd1c5U0tkRTVhVGxB?=
 =?utf-8?B?MjgzcEMycXFoNlJ4RFVxRzE3L1NLcEl3NTRjTTNZMzdtNzdHeVc1UE1md3pQ?=
 =?utf-8?B?cUZkUXNCZXNyWFd0SHczWlJXQ3M4UUJodGo0aGNPRTJ4enVCa2pVWk1WQ2Jl?=
 =?utf-8?B?SUkydGRBc3M3Q3dpVjJLbTg3NG9QQzJLNHVyOXRRWVdKU1M1K3RkZ0txNi9i?=
 =?utf-8?B?SXMxNkJrQisxbkR3STQyc25Bb0RaVThTU1VPN0g2TzJwYW9IeFp2Rk45WjFF?=
 =?utf-8?B?cEpncko1a1k4TGFtUGFqVnp2UjQwTGVqZUZVUzhaR2thNmN6TGhValdSWVJV?=
 =?utf-8?B?RHp3am56TzBVbVBNMkhCeitJQVdHQU9tUG93VHJnWnpmZ3dnQVhUR0FQYVVz?=
 =?utf-8?B?Um9PWjZRZTU3OXNFMnBuVGxDK01TZmJFMUt1M0ZISllEbXEzcUxsVW1NaWZZ?=
 =?utf-8?B?REpmSE50ZGJOcTRQc1ZYT2R1WDJEa3MyK2dsRmVmcVIxUVMyNWVDdzRIV29S?=
 =?utf-8?B?SGdielJBSUtmNkR0aVIvNis1QTdzVmR1M3NKd0JCbzh3N0M3ajBXaVhpcGpt?=
 =?utf-8?B?NmJ5blM5Ui96VTc2T2h0MlI2bG1qbGc2ckl3Q3N2YUZJVURIYm9sdVBVYUhN?=
 =?utf-8?B?bVo2M2MyV2RHZExuci9NME9WVUVjNmRvZ051OXNCbGIvUUZKU2RBS29pWTl6?=
 =?utf-8?B?M09hSjhJakRDS3dlcVo2S1lRYXhOSW5sQ2RhK09ZY2RMeWVxU2gzbFRmclQ1?=
 =?utf-8?B?ZjZpRXhQMFpFdkUwdGJaUnFpSHdNcFNpc3ZKQy84OGowV2pxQnJTckYrVm9X?=
 =?utf-8?B?dkY2dFNZck1QTEhXUFM4eWhiOGp0M0ZDYzJvOXFVUzNoMUgwK1hCNDlFZHlU?=
 =?utf-8?B?UzdhU01ValBRSUdFTVVLa1lweGR3cXpwazd1MU9pZFY0VHRYMGtzZTlLS2Nx?=
 =?utf-8?B?VERJbHlYSzZzMmlhcE5VaHJYQU8xUE9abWpzK3VGMXF4cHd4bFRVYXp4V25z?=
 =?utf-8?B?amhXS2g5THFOcW9ld1FkYzYxaWdWclJGOGRBYVZIaTRsMGVhNkRXMFVyaGhM?=
 =?utf-8?B?V2xuT2FZejRxcGROeDhnd1psYTJ3clNaOVA4enJId1B4b3pkRTZUSU9HdUpV?=
 =?utf-8?B?Qm5DMlp0WlBjOGIwak4zanhmRXB2L05wNllaQ2pTSzZML0JleVZJZ2U4YjhS?=
 =?utf-8?B?dEZXamdVb0tMWWUyMXdXYmJjL0tNMGNEa2hVWG1kT1V5bGZNaUxuWU8wQVVT?=
 =?utf-8?B?ZXBqb0lzUlMvS3RLZ3QyWW1jMk84WWQ3RGpCSXNuUnZQRHptZ25oZzlxbnFn?=
 =?utf-8?B?SVU2UmVBNVJmZkptMWY2VjhrcTZqd0xCVWxCcEFZQnQ2OE1MNkdQNTA5ZXE0?=
 =?utf-8?B?dEpGZ2lUaFUrYklKOFRTWDh5U3dTZjJpV1hTRFF2S0dFTnJpaGdaajhZa2gw?=
 =?utf-8?B?NjE0L1k4b2NCdXYyeC9zTGZQZDZScTJiNi9QZWhEMFNtR2s5ZHlxSmVCTnBl?=
 =?utf-8?B?ZWJkT2VoTTNydWE2bWhZS3U1bkRFZlF2OElPeGFvWDFQcnpaQmpPZlkrdnpV?=
 =?utf-8?Q?PR16sI4/17PJajBganMCx8VbwM0ipTxXH1cGfhi?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a0655b0-48ed-4d07-bdd8-08d937dc32c4
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 13:21:53.5362
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /DpKGtYNaFR7pKBjk1Pb0gsK3Auf4aYktG3F+qhF48v3DonqogNFHOo/8qvqAgiPOQrE8z8gG7olEj4uHy5bpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3390

In paging_log_dirty_op(), always update the count of pages field:
- if more pages were specified than the guest has ever accessed (HVM) or
  marked part of the p2m (PV), there's no point in the caller inspecting
  bits beyond the one for that last page,
- if the guest's p2m size has grown in the meantime, the caller would
  otherwise have no indication that it may not have caught all dirty
  bits.

Also exit the loop once having passed the last valid GFN. To balance
overhead and savings, do this before inspecting a new L2 table.
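
The early exit compares the first GFN covered by the next L2 table
against the guest's maximum GFN. As a rough sketch of that arithmetic
(toy stand-in, not hypervisor code; values mirror x86-64, where a
log-dirty node page holds 512 MFN entries and an L1 leaf page holds
PAGE_SIZE * 8 bits):

```c
#include <assert.h>

/* Toy x86-64 values: 512 node entries per page, 4k pages. */
#define LOGDIRTY_NODE_ENTRIES 512
#define PAGE_SIZE 4096UL

/* First guest pfn whose dirty bit lives under L2 table (i4, i3). */
static unsigned long l2_first_pfn(unsigned int i4, unsigned int i3)
{
    return ((unsigned long)i4 * LOGDIRTY_NODE_ENTRIES + i3) *
           LOGDIRTY_NODE_ENTRIES * PAGE_SIZE * 8;
}
```

Once this value exceeds the guest's maximum GFN, no later L2 table can
hold a meaningful dirty bit, so the walk can stop.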

Adjust libxenguest accordingly, though these changes are necessary only
for guests which actually alter their P2M size while being migrated.
They do, however, additionally open up the option of the hypervisor
eventually zapping large ranges of trailing zeros from the bitmap when
providing it back to the tools.
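
With the hypervisor now possibly reporting a larger "potentially
needed" size, the reworked callers have to distinguish three cases when
looking at xc_logdirty_control()'s return value. A minimal sketch of
that caller-side check (hypothetical helper; the real code open-codes
this at each call site):

```c
#include <errno.h>

/*
 * Interpret the page count returned by a log-dirty query against the
 * number of bits the caller allocated. Returns the number of valid bit
 * positions, or a negative errno value.
 */
static long clamp_logdirty_count(long long ret, unsigned long alloc_bits)
{
    if ( ret < 0 )
        return -EIO;      /* the hypercall itself failed */
    if ( (unsigned long long)ret > alloc_bits )
        return -ERANGE;   /* guest has grown its p2m too much */
    return (long)ret;     /* bits [0, ret) hold valid data */
}
```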

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course this is still far from ideal: at the very least, a perhaps
large tail of zeros could very well also result in a reduced page
count.
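
That tail-of-zeros idea amounts to scanning the bitmap backwards for
the last set bit. A sketch of how such a reduced count could be
computed (illustrative only, not part of the patch):

```c
#include <limits.h>

#define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

/* Number of leading bitmap bits worth returning, i.e. up to and
 * including the last set bit; 0 if the bitmap is entirely clear. */
static unsigned long trimmed_bit_count(const unsigned long *bm,
                                       unsigned long nbits)
{
    unsigned long i = (nbits + BITS_PER_WORD - 1) / BITS_PER_WORD;

    while ( i-- )
        if ( bm[i] )
        {
            unsigned long w = bm[i], hi = 0;

            /* Find the index of the highest set bit in this word. */
            while ( w >>= 1 )
                ++hi;
            return i * BITS_PER_WORD + hi + 1;
        }

    return 0;
}
```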

--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -237,7 +237,16 @@ struct xc_sr_context
             /* Further debugging information in the stream. */
             bool debug;
 
+            /*
+             * Counts of bits (each representing a guest page), expressing
+             * respectively
+             * - obtained P2M size,
+             * - allocated bitmap size,
+             * - range actually filled with valid data.
+             */
             unsigned long p2m_size;
+            unsigned long p2m_alloc_size;
+            unsigned long p2m_used_size;
 
             struct precopy_stats stats;
 
@@ -245,6 +254,7 @@ struct xc_sr_context
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
+            unsigned long used_deferred_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
         } save;
 
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -450,7 +450,8 @@ static int send_checkpoint_dirty_pfn_lis
     xc_interface *xch = ctx->xch;
     int rc = -1;
     unsigned int count, written;
-    unsigned long i;
+    unsigned long i, p2m_size;
+    long long ret;
     uint64_t *pfns = NULL;
     struct iovec *iov = NULL;
     struct xc_sr_record rec = {
@@ -459,22 +460,29 @@ static int send_checkpoint_dirty_pfn_lis
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->restore.dirty_bitmap_hbuf);
 
-    if ( xc_logdirty_control(
-             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
-             HYPERCALL_BUFFER(dirty_bitmap), ctx->restore.p2m_size,
-             0, NULL) != ctx->restore.p2m_size )
+    ret = xc_logdirty_control(
+              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
+              HYPERCALL_BUFFER(dirty_bitmap), ctx->restore.p2m_size,
+              0, NULL);
+    if ( ret < 0 )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         goto err;
     }
+    if ( ret > ctx->restore.p2m_size )
+    {
+        ERROR("Guest has grown its p2m too much");
+        goto err;
+    }
+    p2m_size = ret;
 
-    for ( i = 0, count = 0; i < ctx->restore.p2m_size; i++ )
+    for ( i = 0, count = 0; i < p2m_size; i++ )
     {
         if ( test_bit(i, dirty_bitmap) && !++count )
             break;
     }
 
-    if ( i < ctx->restore.p2m_size )
+    if ( i < p2m_size )
     {
         ERROR("Too many dirty pfns");
         goto err;
@@ -495,7 +503,7 @@ static int send_checkpoint_dirty_pfn_lis
         goto err;
     }
 
-    for ( i = 0, written = 0; i < ctx->restore.p2m_size; ++i )
+    for ( i = 0, written = 0; i < p2m_size; ++i )
     {
         if ( !test_bit(i, dirty_bitmap) )
             continue;
@@ -739,8 +747,10 @@ static int setup(struct xc_sr_context *c
 
     if ( ctx->stream_type == XC_STREAM_COLO )
     {
+        unsigned long pages = NRPAGES(bitmap_size(ctx->restore.p2m_size));
+
         dirty_bitmap = xc_hypercall_buffer_alloc_pages(
-            xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->restore.p2m_size)));
+            xch, dirty_bitmap, pages);
 
         if ( !dirty_bitmap )
         {
@@ -748,6 +758,8 @@ static int setup(struct xc_sr_context *c
             rc = -1;
             goto err;
         }
+
+        ctx->restore.p2m_size = pages << (PAGE_SHIFT + 3);
     }
 
     rc = ctx->restore.ops.setup(ctx);
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -74,6 +74,16 @@ static int write_checkpoint_record(struc
     return write_record(ctx, &checkpoint);
 }
 
+static void update_deferred_pages(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    if ( !ctx->save.deferred_pages )
+        return;
+    set_bit(pfn, ctx->save.deferred_pages);
+    ++ctx->save.nr_deferred_pages;
+    if ( pfn >= ctx->save.used_deferred_pages )
+        ctx->save.used_deferred_pages = pfn + 1;
+}
+
 /*
  * Writes a batch of memory as a PAGE_DATA record into the stream.  The batch
  * is constructed in ctx->save.batch_pfns.
@@ -130,11 +140,8 @@ static int write_batch(struct xc_sr_cont
                                                       ctx->save.batch_pfns[i]);
 
         /* Likely a ballooned page. */
-        if ( mfns[i] == INVALID_MFN && ctx->save.deferred_pages )
-        {
-            set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
-            ++ctx->save.nr_deferred_pages;
-        }
+        if ( mfns[i] == INVALID_MFN )
+            update_deferred_pages(ctx, ctx->save.batch_pfns[i]);
     }
 
     rc = xc_get_pfn_type_batch(xch, ctx->domid, nr_pfns, types);
@@ -196,12 +203,7 @@ static int write_batch(struct xc_sr_cont
             {
                 if ( rc == -1 && errno == EAGAIN )
                 {
-                    if ( ctx->save.deferred_pages )
-                    {
-                        set_bit(ctx->save.batch_pfns[i],
-                                ctx->save.deferred_pages);
-                        ++ctx->save.nr_deferred_pages;
-                    }
+                    update_deferred_pages(ctx, ctx->save.batch_pfns[i]);
                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
                     --nr_pages;
                 }
@@ -369,7 +371,7 @@ static int suspend_domain(struct xc_sr_c
  * Send a subset of pages in the guests p2m, according to the dirty bitmap.
  * Used for each subsequent iteration of the live migration loop.
  *
- * Bitmap is bounded by p2m_size.
+ * Bitmap is bounded by p2m_alloc_size, but populated only up to p2m_used_size.
  */
 static int send_dirty_pages(struct xc_sr_context *ctx,
                             unsigned long entries, bool all_dirty)
@@ -381,7 +383,10 @@ static int send_dirty_pages(struct xc_sr
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
-    for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
+    if ( all_dirty )
+        ctx->save.p2m_used_size = ctx->save.p2m_size;
+
+    for ( p = 0, written = 0; p < ctx->save.p2m_used_size; ++p )
     {
         if ( !all_dirty && !test_bit(p, dirty_bitmap) )
             continue;
@@ -526,6 +531,8 @@ static int send_memory_live(struct xc_sr
 
     for ( ; ; )
     {
+        long long ret;
+
         policy_decision = precopy_policy(*policy_stats, data);
         x++;
 
@@ -552,15 +559,23 @@ static int send_memory_live(struct xc_sr
         if ( policy_decision != XGS_POLICY_CONTINUE_PRECOPY )
             break;
 
-        if ( xc_logdirty_control(
-                 xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
-                 &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-                 0, &stats) != ctx->save.p2m_size )
+        ret = xc_logdirty_control(
+                  xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
+                  &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_alloc_size,
+                  0, &stats);
+        if ( ret < 0 )
         {
             PERROR("Failed to retrieve logdirty bitmap");
             rc = -1;
             goto out;
         }
+        if ( ret > ctx->save.p2m_alloc_size )
+        {
+            ERROR("Guest has grown its p2m too much");
+            rc = -1;
+            goto out;
+        }
+        ctx->save.p2m_used_size = ret;
 
         policy_stats->dirty_count = stats.dirty_count;
 
@@ -614,7 +629,7 @@ static int colo_merge_secondary_dirty_bi
     for ( i = 0; i < count; i++ )
     {
         pfn = pfns[i];
-        if ( pfn >= ctx->save.p2m_size )
+        if ( pfn >= ctx->save.p2m_alloc_size )
         {
             PERROR("Invalid pfn 0x%" PRIx64, pfn);
             rc = -1;
@@ -642,6 +657,7 @@ static int suspend_and_send_dirty(struct
     xc_shadow_op_stats_t stats;
     char *progress_str = NULL;
     int rc;
+    long long ret;
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
@@ -649,16 +665,22 @@ static int suspend_and_send_dirty(struct
     if ( rc )
         goto out;
 
-    if ( xc_logdirty_control(
-             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
-             HYPERCALL_BUFFER(dirty_bitmap), ctx->save.p2m_size,
-             XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats) !=
-         ctx->save.p2m_size )
+    ret = xc_logdirty_control(
+              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
+              HYPERCALL_BUFFER(dirty_bitmap), ctx->save.p2m_alloc_size,
+              XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats);
+    if ( ret < 0 )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         rc = -1;
         goto out;
     }
+    if ( ret > ctx->save.p2m_alloc_size )
+    {
+        ERROR("Guest has grown its p2m too much");
+        rc = -1;
+        goto out;
+    }
 
     if ( ctx->save.live )
     {
@@ -670,7 +692,8 @@ static int suspend_and_send_dirty(struct
         xc_set_progress_prefix(xch, "Checkpointed save");
 
     if ( ctx->save.deferred_pages )
-        bitmap_or(dirty_bitmap, ctx->save.deferred_pages, ctx->save.p2m_size);
+        bitmap_or(dirty_bitmap, ctx->save.deferred_pages, ctx->save.p2m_alloc_size);
+    ctx->save.p2m_used_size = MAX(ret, ctx->save.used_deferred_pages);
 
     if ( !ctx->save.live && ctx->stream_type == XC_STREAM_COLO )
     {
@@ -688,8 +711,9 @@ static int suspend_and_send_dirty(struct
         goto out;
 
     if ( ctx->save.deferred_pages )
-        bitmap_clear(ctx->save.deferred_pages, ctx->save.p2m_size);
+        bitmap_clear(ctx->save.deferred_pages, ctx->save.p2m_alloc_size);
     ctx->save.nr_deferred_pages = 0;
+    ctx->save.used_deferred_pages = 0;
 
  out:
     xc_set_progress_prefix(xch, NULL);
@@ -702,6 +726,7 @@ static int verify_frames(struct xc_sr_co
     xc_interface *xch = ctx->xch;
     xc_shadow_op_stats_t stats;
     int rc;
+    long long ret;
     struct xc_sr_record rec = { .type = REC_TYPE_VERIFY };
 
     DPRINTF("Enabling verify mode");
@@ -715,15 +740,18 @@ static int verify_frames(struct xc_sr_co
     if ( rc )
         goto out;
 
-    if ( xc_logdirty_control(
-             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_PEEK,
-             &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-             0, &stats) != ctx->save.p2m_size )
+    ret = xc_logdirty_control(
+              xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_PEEK,
+              &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_alloc_size,
+              0, &stats);
+    if ( ret < 0 )
     {
         PERROR("Failed to retrieve logdirty bitmap");
         rc = -1;
         goto out;
     }
+    if ( ret > ctx->save.p2m_alloc_size )
+        IPRINTF("Guest has grown its p2m too much");
 
     DPRINTF("  Further stats: faults %u, dirty %u",
             stats.fault_count, stats.dirty_count);
@@ -804,13 +832,14 @@ static int setup(struct xc_sr_context *c
 
     if ( ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN )
     {
+        unsigned long pages = NRPAGES(bitmap_size(ctx->save.p2m_size));
         DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                         &ctx->save.dirty_bitmap_hbuf);
 
         dirty_bitmap =
-            xc_hypercall_buffer_alloc_pages(
-                xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
-        ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
+            xc_hypercall_buffer_alloc_pages(xch, dirty_bitmap, pages);
+        ctx->save.p2m_alloc_size = pages << (PAGE_SHIFT + 3);
+        ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_alloc_size);
 
         if ( !dirty_bitmap || !ctx->save.deferred_pages )
             goto enomem;
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -397,6 +397,20 @@ int paging_mfn_is_dirty(struct domain *d
     return rv;
 }
 
+/*
+ * This is used to provide a rough (upper) estimate to the caller of how many
+ * more pages we might have data for.
+ */
+static unsigned int last_valid_entry(const mfn_t *tbl, unsigned int idx)
+{
+    unsigned int last = LOGDIRTY_NODE_ENTRIES;
+
+    for ( ; idx < LOGDIRTY_NODE_ENTRIES; ++idx )
+        if ( mfn_valid(tbl[idx]) )
+            last = idx;
+
+    return last;
+}
 
 /* Read a domain's log-dirty bitmap and stats.  If the operation is a CLEAN,
  * clear the bitmap and stats as well. */
@@ -405,10 +418,10 @@ static int paging_log_dirty_op(struct do
                                bool_t resuming)
 {
     int rv = 0, clean = 0, peek = 1;
-    unsigned long pages = 0;
+    unsigned long pages = 0, extra = 0;
     mfn_t *l4 = NULL, *l3 = NULL, *l2 = NULL;
     unsigned long *l1 = NULL;
-    int i4, i3, i2;
+    unsigned int i4, i3, i2;
 
     if ( !resuming )
     {
@@ -479,6 +492,15 @@ static int paging_log_dirty_op(struct do
         l3 = (l4 && mfn_valid(l4[i4])) ? map_domain_page(l4[i4]) : NULL;
         for ( ; (pages < sc->pages) && (i3 < LOGDIRTY_NODE_ENTRIES); i3++ )
         {
+            unsigned long max_gfn = domain_get_maximum_gpfn(d);
+
+            if ( (i4 * LOGDIRTY_NODE_ENTRIES + i3) *
+                 LOGDIRTY_NODE_ENTRIES * PAGE_SIZE * 8 > max_gfn )
+            {
+                i4 = LOGDIRTY_NODE_ENTRIES;
+                break;
+            }
+
             l2 = ((l3 && mfn_valid(l3[i3])) ?
                   map_domain_page(l3[i3]) : NULL);
             for ( i2 = 0;
@@ -502,18 +524,36 @@ static int paging_log_dirty_op(struct do
                         goto out;
                     }
                 }
+
                 pages += bytes << 3;
+
                 if ( l1 )
                 {
+                    if ( unlikely(pages >= sc->pages) )
+                        extra = (PAGE_SIZE - bytes) << 3;
+
                     if ( clean )
                         clear_page(l1);
                     unmap_domain_page(l1);
                 }
             }
+
             if ( l2 )
+            {
+                if ( unlikely(pages >= sc->pages) )
+                {
+                    i2 = last_valid_entry(l2, i2);
+                    if ( i2 < LOGDIRTY_NODE_ENTRIES )
+                        extra = ((i4 * LOGDIRTY_NODE_ENTRIES + i3) *
+                                 LOGDIRTY_NODE_ENTRIES + i2 + 1) *
+                                PAGE_SIZE * 8;
+                }
+
                 unmap_domain_page(l2);
+            }
 
-            if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
+            if ( pages < sc->pages && i3 < LOGDIRTY_NODE_ENTRIES - 1 &&
+                 hypercall_preempt_check() )
             {
                 d->arch.paging.preempt.log_dirty.i4 = i4;
                 d->arch.paging.preempt.log_dirty.i3 = i3 + 1;
@@ -521,10 +561,21 @@ static int paging_log_dirty_op(struct do
                 break;
             }
         }
+
         if ( l3 )
+        {
+            if ( !rv && unlikely(pages >= sc->pages) )
+            {
+                i3 = last_valid_entry(l3, i3);
+                if ( i3 < LOGDIRTY_NODE_ENTRIES )
+                    extra = (i4 * LOGDIRTY_NODE_ENTRIES + i3 + 1) *
+                            LOGDIRTY_NODE_ENTRIES * PAGE_SIZE * 8;
+            }
+
             unmap_domain_page(l3);
+        }
 
-        if ( !rv && i4 < LOGDIRTY_NODE_ENTRIES - 1 &&
+        if ( !rv && pages < sc->pages && i4 < LOGDIRTY_NODE_ENTRIES - 1 &&
              hypercall_preempt_check() )
         {
             d->arch.paging.preempt.log_dirty.i4 = i4 + 1;
@@ -534,8 +585,19 @@ static int paging_log_dirty_op(struct do
         if ( rv )
             break;
     }
+
     if ( l4 )
+    {
+        if ( !rv && unlikely(pages >= sc->pages) )
+        {
+            i4 = last_valid_entry(l4, i4);
+            if ( i4 < LOGDIRTY_NODE_ENTRIES )
+                extra = (i4 + 1) * LOGDIRTY_NODE_ENTRIES *
+                        LOGDIRTY_NODE_ENTRIES * PAGE_SIZE * 8;
+        }
+
         unmap_domain_page(l4);
+    }
 
     if ( !rv )
     {
@@ -562,8 +624,8 @@ static int paging_log_dirty_op(struct do
         return rv;
     }
 
-    if ( pages < sc->pages )
-        sc->pages = pages;
+    sc->pages = min(pages + extra, domain_get_maximum_gpfn(d) + 1);
+
     if ( clean )
     {
         /* We need to further call clean_dirty_bitmap() functions of specific
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -261,7 +261,8 @@ struct xen_domctl_shadow_op {
 
     /* OP_PEEK / OP_CLEAN */
     XEN_GUEST_HANDLE_64(uint8) dirty_bitmap;
-    uint64_aligned_t pages; /* Size of buffer. Updated with actual size. */
+    uint64_aligned_t pages; /* Size of buffer. Updated with actual (or
+                               potentially needed) size. */
     struct xen_domctl_shadow_op_stats stats;
 };
 



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:22:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:22:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147219.271220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlmz-0004u2-Js; Fri, 25 Jun 2021 13:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147219.271220; Fri, 25 Jun 2021 13:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwlmz-0004tv-Gs; Fri, 25 Jun 2021 13:22:33 +0000
Received: by outflank-mailman (input) for mailman id 147219;
 Fri, 25 Jun 2021 13:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBOo=LT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lwlmx-0004tF-Tp
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:22:31 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 407cb74d-86c4-4300-b14c-b5c7ff87737d;
 Fri, 25 Jun 2021 13:22:30 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-39-_vRuFGULO0ClOZzw-YVGgA-2; Fri, 25 Jun 2021 15:22:28 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Fri, 25 Jun
 2021 13:22:25 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.020; Fri, 25 Jun 2021
 13:22:25 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR05CA0092.eurprd05.prod.outlook.com (2603:10a6:208:136::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19 via Frontend
 Transport; Fri, 25 Jun 2021 13:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 407cb74d-86c4-4300-b14c-b5c7ff87737d
X-MC-Unique: _vRuFGULO0ClOZzw-YVGgA-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 10/12] x86/mm: update log-dirty bitmap when manipulating P2M
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <ec2af09e-d69f-3e4f-f913-56d7ec0492e6@suse.com>
Date: Fri, 25 Jun 2021 15:22:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR05CA0092.eurprd05.prod.outlook.com
 (2603:10a6:208:136::32) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

Just like MMU_MACHPHYS_UPDATE implies marking the respective page as
dirty for PV guests, additions to an HVM guest's P2M should do the
same.

For HVM the opposite is also true: pages being removed from the P2M are
no longer dirty at their prior GFN; there's no point in telling the tool
stack to try to copy that page, when this will fail anyway (until
perhaps a new page gets placed there). Introduce paging_mark_pfn_clean()
(intentionally without a paging_mark_clean() counterpart) to handle
this. Note that while there is an earlier call to set_gpfn_from_mfn() in
guest_physmap_add_entry(), there's little reason to mark the page
clean there when later in the function it'll be marked dirty. This is
even more so given that at this point it's only the M2P that gets
updated, with the P2M still left unchanged.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
guest_physmap_add_entry()'s error handling looks bogus in this regard
anyway: If an error occurs before an MFN actually gets associated with
the new GFN, the M2P entry ought to be restored imo. But of course a
guest is still hosed if the operation succeeds only partially.

Note that I've not even checked mem-paging and mem-sharing code for
whether they may need similar adjustment. At least the latter is, aiui,
incompatible with log-dirty mode anyway.
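
The refactoring in paging.c below folds dirty and clean marking into
one helper that only adjusts the dirty count when the bit actually
changes state. A flat-bitmap sketch of that shape (toy stand-in, not
the real multi-level trie):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define BPW (CHAR_BIT * sizeof(unsigned long))

struct toy_log_dirty {
    unsigned long bits[4];      /* covers 4 * BPW pfns */
    unsigned int dirty_count;
};

static void mark_pfn(struct toy_log_dirty *ld, unsigned long pfn, bool dirty)
{
    unsigned long mask = 1UL << (pfn % BPW);
    unsigned long *w = &ld->bits[pfn / BPW];
    bool changed = dirty ? !(*w & mask) : !!(*w & mask);

    if ( !changed )
        return;                 /* bit already in the requested state */
    *w = dirty ? *w | mask : *w & ~mask;
    ld->dirty_count += dirty ? 1 : -1;
}

/* Exercise the helper: mark, re-mark (no double count), and clean. */
static unsigned int toy_demo(void)
{
    struct toy_log_dirty ld = { { 0 }, 0 };

    mark_pfn(&ld, 5, true);
    mark_pfn(&ld, 5, true);     /* no change, count stays at 1 */
    mark_pfn(&ld, 70, true);
    mark_pfn(&ld, 5, false);
    return ld.dirty_count;      /* only pfn 70 remains dirty */
}
```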

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -818,7 +818,10 @@ p2m_remove_page(struct p2m_domain *p2m,
         {
             p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
             if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) )
+            {
                 set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+                paging_mark_pfn_clean(p2m->domain, _pfn(gfn_x(gfn) + i));
+            }
         }
     }
 
@@ -1027,8 +1030,11 @@ guest_physmap_add_entry(struct domain *d
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
+            {
                 set_gpfn_from_mfn(mfn_x(mfn_add(mfn, i)),
                                   gfn_x(gfn_add(gfn, i)));
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
         }
     }
 
@@ -1314,6 +1320,7 @@ static int set_typed_p2m_entry(struct do
         {
             ASSERT(mfn_valid(mfn_add(omfn, i)));
             set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+            paging_mark_pfn_clean(d, _pfn(gfn_x(gfn) + i));
         }
 
         ioreq_request_mapcache_invalidate(d);
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -645,7 +645,10 @@ p2m_pod_decrease_reservation(struct doma
             }
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
+            {
                 set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
+                paging_mark_pfn_clean(d, _pfn(gfn_x(gfn) + i + j));
+            }
             p2m_pod_cache_add(p2m, page, cur_order);
 
             ioreq_request_mapcache_invalidate(d);
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -259,7 +259,7 @@ static int paging_log_dirty_disable(stru
 }
 
 /* Mark a page as dirty, with taking guest pfn as parameter */
-void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
+static void mark_pfn_dirty(struct domain *d, pfn_t pfn, bool dirty)
 {
     bool changed;
     mfn_t mfn, *l4, *l3, *l2;
@@ -290,14 +290,15 @@ void paging_mark_pfn_dirty(struct domain
 
     if ( unlikely(!mfn_valid(d->arch.paging.log_dirty.top)) ) 
     {
-         d->arch.paging.log_dirty.top = paging_new_log_dirty_node(d);
+         if ( dirty )
+             d->arch.paging.log_dirty.top = paging_new_log_dirty_node(d);
          if ( unlikely(!mfn_valid(d->arch.paging.log_dirty.top)) )
              goto out;
     }
 
     l4 = paging_map_log_dirty_bitmap(d);
     mfn = l4[i4];
-    if ( !mfn_valid(mfn) )
+    if ( !mfn_valid(mfn) && dirty )
         l4[i4] = mfn = paging_new_log_dirty_node(d);
     unmap_domain_page(l4);
     if ( !mfn_valid(mfn) )
@@ -305,7 +306,7 @@ void paging_mark_pfn_dirty(struct domain
 
     l3 = map_domain_page(mfn);
     mfn = l3[i3];
-    if ( !mfn_valid(mfn) )
+    if ( !mfn_valid(mfn) && dirty )
         l3[i3] = mfn = paging_new_log_dirty_node(d);
     unmap_domain_page(l3);
     if ( !mfn_valid(mfn) )
@@ -313,21 +314,22 @@ void paging_mark_pfn_dirty(struct domain
 
     l2 = map_domain_page(mfn);
     mfn = l2[i2];
-    if ( !mfn_valid(mfn) )
+    if ( !mfn_valid(mfn) && dirty )
         l2[i2] = mfn = paging_new_log_dirty_leaf(d);
     unmap_domain_page(l2);
     if ( !mfn_valid(mfn) )
         goto out;
 
     l1 = map_domain_page(mfn);
-    changed = !__test_and_set_bit(i1, l1);
+    changed = dirty ? !__test_and_set_bit(i1, l1)
+                    : __test_and_clear_bit(i1, l1);
     unmap_domain_page(l1);
     if ( changed )
     {
         PAGING_DEBUG(LOGDIRTY,
-                     "d%d: marked mfn %" PRI_mfn " (pfn %" PRI_pfn ")\n",
-                     d->domain_id, mfn_x(mfn), pfn_x(pfn));
-        d->arch.paging.log_dirty.dirty_count++;
+                     "%pd: marked mfn %" PRI_mfn " (pfn %" PRI_pfn ") %s\n",
+                     d, mfn_x(mfn), pfn_x(pfn), dirty ? "dirty" : "clean");
+        d->arch.paging.log_dirty.dirty_count += dirty ? 1 : -1;
     }
 
 out:
@@ -336,6 +338,16 @@ out:
     return;
 }
 
+void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
+{
+    mark_pfn_dirty(d, pfn, true);
+}
+
+void paging_mark_pfn_clean(struct domain *d, pfn_t pfn)
+{
+    mark_pfn_dirty(d, pfn, false);
+}
+
 /* Mark a page as dirty */
 void paging_mark_dirty(struct domain *d, mfn_t gmfn)
 {
@@ -348,7 +360,7 @@ void paging_mark_dirty(struct domain *d,
     /* We /really/ mean PFN here, even for non-translated guests. */
     pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
 
-    paging_mark_pfn_dirty(d, pfn);
+    mark_pfn_dirty(d, pfn, true);
 }
 
 
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -170,8 +170,9 @@ void paging_log_dirty_init(struct domain
 
 /* mark a page as dirty */
 void paging_mark_dirty(struct domain *d, mfn_t gmfn);
-/* mark a page as dirty with taking guest pfn as parameter */
+/* mark a page as dirty/clean with taking guest pfn as parameter */
 void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn);
+void paging_mark_pfn_clean(struct domain *d, pfn_t pfn);
 
 /* is this guest page dirty? 
  * This is called from inside paging code, with the paging lock held. */
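The dirty/clean asymmetry in the patch above — the clean path must never allocate missing log-dirty nodes, and the dirty count moves only when a bit actually flips — can be illustrated with a toy model. This is not Xen code: the real implementation uses a 4-level trie of domain pages under the paging lock; here a single lazily allocated leaf stands in for the whole trie, and the structure and function names are made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy model: one lazily allocated leaf instead of the real 4-level trie. */
struct log_dirty {
    uint8_t *leaf;      /* lazily allocated bitmap page */
    long dirty_count;
};

static void mark_pfn(struct log_dirty *ld, unsigned int pfn, bool dirty)
{
    bool changed;

    if ( !ld->leaf )
    {
        /* Only the dirty path may allocate; marking a PFN clean in an
         * unpopulated part of the trie is simply a no-op. */
        if ( !dirty )
            return;
        ld->leaf = calloc(1, 4096);
        if ( !ld->leaf )
            return;
    }

    if ( dirty )
    {
        changed = !(ld->leaf[pfn / 8] & (1u << (pfn % 8)));
        ld->leaf[pfn / 8] |= 1u << (pfn % 8);
    }
    else
    {
        changed = !!(ld->leaf[pfn / 8] & (1u << (pfn % 8)));
        ld->leaf[pfn / 8] &= ~(1u << (pfn % 8));
    }

    /* Adjust the count only when the bit actually flipped. */
    if ( changed )
        ld->dirty_count += dirty ? 1 : -1;
}
```

Marking clean on an empty trie allocates nothing, and repeated marks in the same direction leave the count untouched — the two properties the `dirty` flag in `mark_pfn_dirty()` is there to provide.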



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:23:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:23:04 +0000
Subject: [PATCH 11/12] x86/mm: pull a sanity check earlier in
 xenmem_add_to_physmap_one()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <e0819038-98b0-2c4c-9176-70d68be39130@suse.com>
Date: Fri, 25 Jun 2021 15:22:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

We should try to limit the possible reasons for failure once we've started
making changes, so pull the XENMAPSPACE_gmfn sanity check ahead of the
removal of the previously mapped page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2763,6 +2763,15 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
+    /* XENMAPSPACE_gmfn: Check if the MFN is associated with another GFN. */
+    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
+    ASSERT(!SHARED_M2P(old_gpfn));
+    if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
+    {
+        rc = -EXDEV;
+        goto put_both;
+    }
+
     /* Remove previously mapped page if it was present. */
     prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
     if ( mfn_valid(prev_mfn) )
@@ -2781,13 +2790,6 @@ int xenmem_add_to_physmap_one(
         goto put_both;
 
     /* Unmap from old location, if any. */
-    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
-    ASSERT(!SHARED_M2P(old_gpfn));
-    if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
-    {
-        rc = -EXDEV;
-        goto put_both;
-    }
     if ( old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 

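The reordering above follows a general pattern: perform every check that can fail before the first state-changing step, so a failure can never leave half-applied changes behind. A minimal sketch of the pattern, with an entirely hypothetical `struct mapping` and `remap()` standing in for the real p2m operations:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative only: validate everything up front, mutate afterwards. */
struct mapping { int gfn; bool mapped; };

static int remap(struct mapping *m, int old_gfn, int new_gfn)
{
    /* Sanity check first: fail before touching any state. */
    if ( m->gfn != old_gfn )
        return -EXDEV;

    /* Only now make the (hard to roll back) modifications. */
    m->mapped = false;      /* unmap from the old location */
    m->gfn = new_gfn;
    m->mapped = true;       /* map at the new location */

    return 0;
}
```

On the error path the structure is left exactly as it was found, which is the property the patch restores for `xenmem_add_to_physmap_one()`.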


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:24:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:24:13 +0000
Subject: [PATCH 12/12] SUPPORT.md: write down restriction of 32-bit tool
 stacks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Message-ID: <03512f29-7a4e-70ff-271b-7d65ed471935@suse.com>
Date: Fri, 25 Jun 2021 15:24:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Let's try to avoid giving the impression that 32-bit tool stacks are as
capable as 64-bit ones.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -131,6 +131,11 @@ ARM only has one guest type at the momen
 
 ## Toolstack
 
+While 32-bit builds of the tool stack are generally supported, restrictions
+apply in particular when running on top of a 64-bit hypervisor.  For example,
+very large guests aren't supported in this case.  This includes guests giving
+the appearance of being large, by altering their own memory layouts.
+
 ### xl
 
     Status: Supported
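One concrete way a 32-bit tool stack falls short is plain address arithmetic: a byte offset derived from a large guest frame number overflows a 32-bit `unsigned long`. The sketch below is a hypothetical illustration, not toolstack code; `offset32()` emulates 32-bit `unsigned long` arithmetic with `uint32_t`.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* 64-bit toolstack: the full offset is representable. */
static uint64_t offset64(uint64_t pfn) { return pfn << PAGE_SHIFT; }

/* 32-bit toolstack: the shift wraps modulo 2^32, losing the offset. */
static uint32_t offset32(uint32_t pfn) { return pfn << PAGE_SHIFT; }
```

A guest frame at the 4 GiB boundary (pfn 0x100000) already wraps to offset 0 in the 32-bit computation, which is why guests that are — or merely appear to be — very large can't be handled.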



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:42:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:42:00 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
Subject: [PATCH] Replace FSF street address with canonical URL (again)
Date: Fri, 25 Jun 2021 14:41:40 +0100
Message-ID: <20210625134140.19870-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

As recommended in http://www.gnu.org/licenses/gpl-howto.en.html.

Exactly as per changeset 443701ef0c7ff3 - some errors have crept back in
over the past 6 years.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 tools/libs/guest/xg_dom_hvmloader.c | 3 +--
 xen/arch/arm/acpi/boot.c            | 3 +--
 xen/common/argo.c                   | 3 +--
 xen/include/asm-arm/acpi.h          | 3 +--
 xen/include/xen/argo.h              | 3 +--
 xen/include/xen/rbtree.h            | 3 +--
 xen/lib/rbtree.c                    | 3 +--
 7 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
index ae50d98011..39e1e5f579 100644
--- a/tools/libs/guest/xg_dom_hvmloader.c
+++ b/tools/libs/guest/xg_dom_hvmloader.c
@@ -14,8 +14,7 @@
  * Lesser General Public License for more details.
  *
  * You should have received a copy of the GNU Lesser General Public
- * License along with this library; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  *
  */
 
diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 7ea2990cb8..db5085e15d 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -19,8 +19,7 @@
  *  GNU General Public License for more details.
  *
  *  You should have received a copy of the GNU General Public License
- *  along with this program; if not, write to the Free Software
- *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ *  along with this program; If not, see <http://www.gnu.org/licenses/>.
  *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
diff --git a/xen/common/argo.c b/xen/common/argo.c
index 49be715f63..eaea7ba888 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -12,8 +12,7 @@
  * GNU General Public License for more details.
  *
  * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include <xen/argo.h>
diff --git a/xen/include/asm-arm/acpi.h b/xen/include/asm-arm/acpi.h
index b52ae2d6ef..e53973e054 100644
--- a/xen/include/asm-arm/acpi.h
+++ b/xen/include/asm-arm/acpi.h
@@ -14,8 +14,7 @@
  *  GNU General Public License for more details.
  *
  *  You should have received a copy of the GNU General Public License
- *  along with this program; if not, write to the Free Software
- *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ *  along with this program; If not, see <http://www.gnu.org/licenses/>.
  *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
diff --git a/xen/include/xen/argo.h b/xen/include/xen/argo.h
index 2ba7e5c0c0..fd4cfdd55c 100644
--- a/xen/include/xen/argo.h
+++ b/xen/include/xen/argo.h
@@ -9,8 +9,7 @@
  * GNU General Public License for more details.
  *
  * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #ifndef __XEN_ARGO_H__
diff --git a/xen/include/xen/rbtree.h b/xen/include/xen/rbtree.h
index 1b72590e4e..77bf57d4ab 100644
--- a/xen/include/xen/rbtree.h
+++ b/xen/include/xen/rbtree.h
@@ -13,8 +13,7 @@
   GNU General Public License for more details.
 
   You should have received a copy of the GNU General Public License
-  along with this program; if not, write to the Free Software
-  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+  along with this program; If not, see <http://www.gnu.org/licenses/>.
 
   linux/include/linux/rbtree.h
 
diff --git a/xen/lib/rbtree.c b/xen/lib/rbtree.c
index 95e045d524..85a4f20313 100644
--- a/xen/lib/rbtree.c
+++ b/xen/lib/rbtree.c
@@ -15,8 +15,7 @@
   GNU General Public License for more details.
 
   You should have received a copy of the GNU General Public License
-  along with this program; if not, write to the Free Software
-  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+  along with this program; If not, see <http://www.gnu.org/licenses/>.
 
   linux/lib/rbtree.c
 */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 13:43:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 13:43:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147248.271279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwm6s-0000mn-DQ; Fri, 25 Jun 2021 13:43:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147248.271279; Fri, 25 Jun 2021 13:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwm6s-0000mg-AV; Fri, 25 Jun 2021 13:43:06 +0000
Received: by outflank-mailman (input) for mailman id 147248;
 Fri, 25 Jun 2021 13:43:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lwm6q-0000mW-Oy
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 13:43:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwm6n-0003Sm-RO; Fri, 25 Jun 2021 13:43:01 +0000
Received: from [54.239.6.179] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lwm6n-00060m-IA; Fri, 25 Jun 2021 13:43:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] Replace FSF street address with canonical URL (again)
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <JBeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210625134140.19870-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <477386f0-3f02-c9f9-6f97-2facd84fee74@xen.org>
Date: Fri, 25 Jun 2021 15:42:58 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210625134140.19870-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 25/06/2021 15:41, Andrew Cooper wrote:
> As recommended in http://www.gnu.org/licenses/gpl-howto.en.html.
> 
> Exactly as per changeset 443701ef0c7ff3 - Some errors have crept back in in
> the past 6 years.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> ---
>   tools/libs/guest/xg_dom_hvmloader.c | 3 +--
>   xen/arch/arm/acpi/boot.c            | 3 +--
>   xen/common/argo.c                   | 3 +--
>   xen/include/asm-arm/acpi.h          | 3 +--
>   xen/include/xen/argo.h              | 3 +--
>   xen/include/xen/rbtree.h            | 3 +--
>   xen/lib/rbtree.c                    | 3 +--
>   7 files changed, 7 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
> index ae50d98011..39e1e5f579 100644
> --- a/tools/libs/guest/xg_dom_hvmloader.c
> +++ b/tools/libs/guest/xg_dom_hvmloader.c
> @@ -14,8 +14,7 @@
>    * Lesser General Public License for more details.
>    *
>    * You should have received a copy of the GNU Lesser General Public
> - * License along with this library; if not, write to the Free Software
> - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
> + * License along with this library; If not, see <http://www.gnu.org/licenses/>.
>    *
>    */
>   
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 7ea2990cb8..db5085e15d 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -19,8 +19,7 @@
>    *  GNU General Public License for more details.
>    *
>    *  You should have received a copy of the GNU General Public License
> - *  along with this program; if not, write to the Free Software
> - *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + *  along with this program; If not, see <http://www.gnu.org/licenses/>.
>    *
>    * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>    */
> diff --git a/xen/common/argo.c b/xen/common/argo.c
> index 49be715f63..eaea7ba888 100644
> --- a/xen/common/argo.c
> +++ b/xen/common/argo.c
> @@ -12,8 +12,7 @@
>    * GNU General Public License for more details.
>    *
>    * You should have received a copy of the GNU General Public License
> - * along with this program; if not, write to the Free Software
> - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>    */
>   
>   #include <xen/argo.h>
> diff --git a/xen/include/asm-arm/acpi.h b/xen/include/asm-arm/acpi.h
> index b52ae2d6ef..e53973e054 100644
> --- a/xen/include/asm-arm/acpi.h
> +++ b/xen/include/asm-arm/acpi.h
> @@ -14,8 +14,7 @@
>    *  GNU General Public License for more details.
>    *
>    *  You should have received a copy of the GNU General Public License
> - *  along with this program; if not, write to the Free Software
> - *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + *  along with this program; If not, see <http://www.gnu.org/licenses/>.
>    *
>    * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>    */
> diff --git a/xen/include/xen/argo.h b/xen/include/xen/argo.h
> index 2ba7e5c0c0..fd4cfdd55c 100644
> --- a/xen/include/xen/argo.h
> +++ b/xen/include/xen/argo.h
> @@ -9,8 +9,7 @@
>    * GNU General Public License for more details.
>    *
>    * You should have received a copy of the GNU General Public License
> - * along with this program; if not, write to the Free Software
> - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>    */
>   
>   #ifndef __XEN_ARGO_H__
> diff --git a/xen/include/xen/rbtree.h b/xen/include/xen/rbtree.h
> index 1b72590e4e..77bf57d4ab 100644
> --- a/xen/include/xen/rbtree.h
> +++ b/xen/include/xen/rbtree.h
> @@ -13,8 +13,7 @@
>     GNU General Public License for more details.
>   
>     You should have received a copy of the GNU General Public License
> -  along with this program; if not, write to the Free Software
> -  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> +  along with this program; If not, see <http://www.gnu.org/licenses/>.
>   
>     linux/include/linux/rbtree.h
>   
> diff --git a/xen/lib/rbtree.c b/xen/lib/rbtree.c
> index 95e045d524..85a4f20313 100644
> --- a/xen/lib/rbtree.c
> +++ b/xen/lib/rbtree.c
> @@ -15,8 +15,7 @@
>     GNU General Public License for more details.
>   
>     You should have received a copy of the GNU General Public License
> -  along with this program; if not, write to the Free Software
> -  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> +  along with this program; If not, see <http://www.gnu.org/licenses/>.
>   
>     linux/lib/rbtree.c
>   */
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 14:09:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 14:09:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147260.271300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwmWM-0003Ok-QU; Fri, 25 Jun 2021 14:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147260.271300; Fri, 25 Jun 2021 14:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwmWM-0003Od-NO; Fri, 25 Jun 2021 14:09:26 +0000
Received: by outflank-mailman (input) for mailman id 147260;
 Fri, 25 Jun 2021 14:09:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwmWL-0003OT-FG; Fri, 25 Jun 2021 14:09:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwmWL-00042g-CX; Fri, 25 Jun 2021 14:09:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwmWL-00063K-2a; Fri, 25 Jun 2021 14:09:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwmWL-0007Ku-22; Fri, 25 Jun 2021 14:09:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163028-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163028: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 14:09:25 +0000

flight 163028 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163028/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   21 days
Failing since        162368  2021-06-04 15:42:59 Z   20 days   48 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 14:13:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 14:13:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147265.271317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwmZy-0004o2-E0; Fri, 25 Jun 2021 14:13:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147265.271317; Fri, 25 Jun 2021 14:13:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwmZy-0004nv-Aw; Fri, 25 Jun 2021 14:13:10 +0000
Received: by outflank-mailman (input) for mailman id 147265;
 Fri, 25 Jun 2021 14:13:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpuW=LT=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lwmZx-0004np-2O
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 14:13:09 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ff3ae15-d344-434d-b3ce-931d23368bf2;
 Fri, 25 Jun 2021 14:13:07 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id e22so10782335wrc.1
 for <xen-devel@lists.xenproject.org>; Fri, 25 Jun 2021 07:13:07 -0700 (PDT)
Received: from ?IPv6:2a00:23c5:5785:9a01:9da4:f15e:5616:d3e5?
 ([2a00:23c5:5785:9a01:9da4:f15e:5616:d3e5])
 by smtp.gmail.com with ESMTPSA id 12sm9532966wme.28.2021.06.25.07.13.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 25 Jun 2021 07:13:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ff3ae15-d344-434d-b3ce-931d23368bf2
X-Received: by 2002:a5d:4b8d:: with SMTP id b13mr11284117wrt.147.1624630386896;
        Fri, 25 Jun 2021 07:13:06 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH v2 2/2] AMD/IOMMU: re-work locking around sending of
 commands
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
References: <a6e72235-570b-c426-589c-236f37749e1e@suse.com>
 <80f6365d-4f0d-66b5-b0ab-99dfeb40bd31@suse.com>
Message-ID: <9c2c4888-3618-1d55-8283-36e95775ff98@xen.org>
Date: Fri, 25 Jun 2021 15:13:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <80f6365d-4f0d-66b5-b0ab-99dfeb40bd31@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25/06/2021 13:15, Jan Beulich wrote:
> It appears unhelpful to me for flush_command_buffer() to block all
> progress elsewhere for the given IOMMU by holding its lock while waiting
> for command completion. There's no real need for callers of that
> function or of send_iommu_command() to hold the lock. Contain all
> command sending related locking to the latter function.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 14:40:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 14:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147275.271340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwn0P-00082p-W4; Fri, 25 Jun 2021 14:40:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147275.271340; Fri, 25 Jun 2021 14:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwn0P-00082i-RN; Fri, 25 Jun 2021 14:40:29 +0000
Received: by outflank-mailman (input) for mailman id 147275;
 Fri, 25 Jun 2021 14:40:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwn0O-00082c-5K
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 14:40:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb0ef7d0-f7cd-4f9a-aebc-376269e0fb0d;
 Fri, 25 Jun 2021 14:40:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb0ef7d0-f7cd-4f9a-aebc-376269e0fb0d
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 47351189
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="47351189"
To: osstest service owner <osstest-admin@xenproject.org>,
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <osstest-163021-mainreport@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [xen-unstable test] 163021: regressions - FAIL
Message-ID: <4257506e-41d4-3d04-a7c5-b33d5f356560@citrix.com>
Date: Fri, 25 Jun 2021 15:40:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <osstest-163021-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0128.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9e20a7c3-29d8-46b4-51fc-08d937e7257a
X-MS-TrafficTypeDiagnostic: BYAPR03MB4615:
X-MS-Exchange-Transport-Forked: True
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?cGZ1WVJVaHUvenNUOXBZU1JDL2JIcDd5TWR2YmJCSjcvZjlCWkN1NUJybzF4?=
 =?utf-8?B?SmxwOUEwbmxTcE1raFNqNVdHcUJGTlF4Q0pHQlo3d1NaMG5pMEkyVVE1a3BE?=
 =?utf-8?B?eDBxM1dYdFhEamsrNnJCYm53cCszWHdCQW9HSy83aytrakJ3aEhSV3daNFUx?=
 =?utf-8?B?SmQvYW1IYXBqNUxyQXM4eHFYMWczVGcrUDc3SExJZWEzRklqVXVHVFdCclV1?=
 =?utf-8?B?RWRXWGRmTDVXckwvY2RLUEJWY1NJQ0Y2S0VlTFB3RTBYS2RSeDdTL0VrRFF1?=
 =?utf-8?B?N0NUalNnRFN4QVFHN1ZWdExVZW9PWFlDaks5UGpyWnZqTnlZUmwwMnpGY3NU?=
 =?utf-8?B?dHJlamlMbm1YSDN1ZVpDcFRQeEFiQmdmakNJQUF6UmhvR21mdzZxeHErcWNM?=
 =?utf-8?B?bVJiSXM1L2JjeUNPMXc0dG1hTWV6d2dEb2hBcWFxaFJqRDNtRlNBR2JORVhO?=
 =?utf-8?B?Wjd5eTFYZExHY2QwYnlYd1h4YVdBemxrY1lvMUZXeDYxWnhWUTV3Q1ZRd3RC?=
 =?utf-8?B?VUdZMENBWkVKay9teTFldU45V01YYklDWDh3SktPZ25ZWEFreXhBTDJwYTZI?=
 =?utf-8?B?SXBBazVqNFRJcUhRQ1kyOE1maXhEMzJMYVBzSXUzNkNoMWVoZ2REZ0RuR0Fi?=
 =?utf-8?B?S1dYN3BkYlZ6Qk5COCsvOXYySXh4WHNFc0VRTE9FR1JxdjVxNlVjM2Q4eGty?=
 =?utf-8?B?c0d6bFpNeUVOOG12ZitkeEJGdnJLWmdBbXVJMG9XYnUxSzZyUktvSzFCU2Vz?=
 =?utf-8?B?NEd0STJ0aFhXdmdub0VlYnJGWS9McXJtQzY1dERiNGRsNWlQNnVMWUZyZ081?=
 =?utf-8?B?WER4VllUOU94cC9TWHNBdms0M2NOcGpDY1VkY3hkUVZhbWM3aXpCRWdwOUti?=
 =?utf-8?B?YWVtb0ZIVnp1OVlKVFUvK21SNW84WVhIbUF5MWFjR0J0U0ZMMFhyYVpOc2RE?=
 =?utf-8?B?TzJhNUVVL2pmNmxpeDJiTG9xdTlPRExpaitTb0NKVCs4WERscmhpaU9iK0Yv?=
 =?utf-8?B?aG8vY2dlN1JycFkwUXp6MlpCVU5FaTlqc3lLcGhaWll0eGpXVVR4bHRjUmla?=
 =?utf-8?B?MExvNlpjaURGZTJRM0xSTDNwUHFHd3gvOGR2K0pOQlA5V0p3dHJyMng0WnRL?=
 =?utf-8?B?SjRNUkRjQmNEOStpdnJ1OXpSSmdyL0RvUjZBVWxGODBMM0ZaU3ZJNU5zQkla?=
 =?utf-8?B?dnQyQmhWWkpnVFBsdEdVUTN0VE4wSWZpb2pCK3QxOXE0YXVtQUthVThlbUpF?=
 =?utf-8?B?eS9pS3JhNGY4NmREcWVYUGtrSEV4dTREaVRiS0NqUE9uUmNPTG8zRmtrN0E2?=
 =?utf-8?B?RGgyRkNJQlpvekhSL2lUenBLMnhWa2VzNSt2Y203SGt1c1MwUXNoUW1VVHo3?=
 =?utf-8?B?V2I3em1KYjBqa2tRMDB5WUhUSlJuN21XNFptdU4ydHNYYjkxWlpYV3dWK2ZT?=
 =?utf-8?B?QUkzRkJESk9JUzBKVEp3Zmdnd0hYTktLRnh2WmYyR3lEbjZ3V1Jpb0pEZWk4?=
 =?utf-8?B?dHdkSnoxVTRkUlEzNGJyRUtEYlNZV2pvaUtQSjZESHh6SFBVVkNxcE00R3Qz?=
 =?utf-8?B?ekd4bVZtRUdBZVpqd3d1ZXhuWEpxRVl3ajJjY3Z1ZWFuRkQ1UVZ1dHdvbjBm?=
 =?utf-8?B?dytRb2NxYnhpRU9rZlhIQjkvRE1xZGpmMkRCT0NkTTZTR0NEOFRlZnBjZ21X?=
 =?utf-8?B?WlYza29MWHdzdGUzOWtDOFVmUHJmK3BQRm96N004azR3RFZLUmtLRzhPcnZy?=
 =?utf-8?Q?GoWH8FfCez7Cf+u3XTDCmi3A7kedw/pmcGIlmh4?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e20a7c3-29d8-46b4-51fc-08d937e7257a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 14:40:15.9409
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4615
X-OriginatorOrg: citrix.com

On 25/06/2021 10:43, osstest service owner wrote:
> flight 163021 xen-unstable real [real]
> flight 163030 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/163021/
> http://logs.test-lab.xenproject.org/osstest/logs/163030/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 163014

This looks like an OSSTest bug.

Immediately before rebooting, the PXE config is modified, and all
subsequent boot attempts find no suitable targets, until the next
flight/job/etc starts and writes a new PXE config.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 14:48:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 14:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147280.271354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwn8G-0000OJ-Sk; Fri, 25 Jun 2021 14:48:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147280.271354; Fri, 25 Jun 2021 14:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwn8G-0000OC-Ns; Fri, 25 Jun 2021 14:48:36 +0000
Received: by outflank-mailman (input) for mailman id 147280;
 Fri, 25 Jun 2021 14:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EvRe=LT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lwn8F-0000O5-Do
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 14:48:35 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a32020a9-b58a-4680-93d1-53db3094e33d;
 Fri, 25 Jun 2021 14:48:34 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 87D6521CD8;
 Fri, 25 Jun 2021 14:48:33 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 58B1111A97;
 Fri, 25 Jun 2021 14:48:33 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 3H1uFMHs1WBNIAAALh3uQQ
 (envelope-from <jgross@suse.com>); Fri, 25 Jun 2021 14:48:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a32020a9-b58a-4680-93d1-53db3094e33d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1624632513; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=aDKc2sUUuqzsaNgqaaJJMe7pZHE+SjbV9/47hMBhTNA=;
	b=LV7yM8b82FGsU+VWDq435TKFBEs5osvyYWWd4K4KoeYnkiQiS6ns8zH1wfiTb2ALEoSBcQ
	WFLhvk2BTmt8mvARqhFHsrr3cHK7gUww7VMLHG03l/H44IlJ8YqOcMJRVxWGNbVjNSs7Aw
	wZJu7LP1sWyouQ2vl4Sueiba29m/E64=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.13-rc8
Date: Fri, 25 Jun 2021 16:48:32 +0200
Message-Id: <20210625144832.20839-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc8-tag

xen: branch for v5.13-rc8

It contains a fix for a regression introduced in 5.12: when migrating
an irq related to a Xen user event to another cpu, a race could
trigger a WARN().

Thanks.

Juergen

 drivers/xen/events/events_base.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Juergen Gross (1):
      xen/events: reset active flag for lateeoi events later


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 14:52:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 14:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147284.271365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwnBc-0001li-BZ; Fri, 25 Jun 2021 14:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147284.271365; Fri, 25 Jun 2021 14:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwnBc-0001lb-7z; Fri, 25 Jun 2021 14:52:04 +0000
Received: by outflank-mailman (input) for mailman id 147284;
 Fri, 25 Jun 2021 14:52:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=76aW=LT=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1lwnBb-0001lV-7e
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 14:52:03 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e473c7a2-11c7-4238-bc98-be7cb8155478;
 Fri, 25 Jun 2021 14:52:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e473c7a2-11c7-4238-bc98-be7cb8155478
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624632720;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=Ko0+7VnaVleRu+JXalSXPMrJKNgoxY9ne9wxTtzK8yA=;
  b=JC3IJFO+vQvptz0xAWObUYP3Ht1tumzZPJRKYNl87Sepy+FlsT0y3SHH
   7FrsHsIm0V7V4AZ80bUci9U+NIA+59igXHJYll4NPTDgs50OW1zJMPMDL
   VoNPOyMJgfbCPQ7gxWEcI/vkPzutfstcKmphk20Wc7cDDtulOddBe+WI8
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 48591494
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208,217";a="48591494"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TY9siNXczNZIcXqEwei9C+fba1cM6yQVqVtU6JQTH3P4lhN7VHMQJwQIdU4DT0diuw8xoDNi53ctuIUQDGx+s8Q1tAVZZnPj8Fz24CPSEJbfalmhv8QXsNM0YjQwNizzSHTHHFjKR3YNzYSUAzEeqrocTLIgEkOP2FLxZgsBr/juT2coLWqJl2zY67JsgY3tF0CgbeqBdZ/TW9G9FsGui9KnGJ5puEcW7H1A0eBIpZx4ko9vv6et7L1w8ZEk4oslDhecHq5lyrESwMbxYfB+Gtu5H/ViE0DGSRoppRBCGPg19nNIUkPfDyUOhiRkjJCw7vGetnbDV+EJyH61XM/SeA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M4B+9aJCGZpw9Tz4VE8Y7JBuzhEdpG6r3fQWa8Gczhw=;
 b=UbBqW6Ovi+TUfadmj8SLv4CjtEOywnkFohEPkEQlvRhTdkSPoMmyJJmfhdBIRNVVOevJRC4UWCa/OJ7VsoNrDel+A73YhFrdjqY8+uA/kAyc7dTzIOCGUAkoJJtVyuzSRlM4kRMhWTOHv3dJQjUAkGRjYoswJnh7nWfDwos63nC5JJ6aGJDl7vMUe2fCIf05pV/JiP7WcS0WBpbeqKMcuYfXq1pa9Ra+QdDlYBCaU1seIuKtlH6YkbrYYqNc2aihNJ9PuIjXS3KLmSxub4n2c4JenvWCa/HeqnxI6TiSqQ89uXjK41/WChjqPFr/B60sj6RU+WJt1wA/EOhtMpaG4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M4B+9aJCGZpw9Tz4VE8Y7JBuzhEdpG6r3fQWa8Gczhw=;
 b=KTA1dDnQf4D3gamOi84nqFj1toAbVf5ZWaA5jzFd6YzagUFurg8WD9KdqYmJiN6oj8jY7Cg8t7Z0h+RpH+pss+LgahYTk4D1qMfmD3FP74Jg5jQNll4XqyPGhzvsR7INJyAvMja8FB5OhMgejWDdiF8G36FgBvWQCGyXLvR+ghM=
From: Christian Lindig <christian.lindig@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<George.Dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Marek
 Marczykowski" <marmarek@invisiblethingslab.com>, David Scott
	<dave@recoil.org>
Subject: Re: [PATCH 01/12] libxc: split xc_logdirty_control() from
 xc_shadow_control()
Thread-Topic: [PATCH 01/12] libxc: split xc_logdirty_control() from
 xc_shadow_control()
Thread-Index: AQHXacSXhkB1fy1iEUKbQj9aZj8qr6skz76A
Date: Fri, 25 Jun 2021 14:51:56 +0000
Message-ID: <71F58ADC-945E-4ED3-9CB9-17889DE22641@citrix.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
In-Reply-To: <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 069cb178-f882-46cd-2c44-08d937e8c77c
x-ms-traffictypediagnostic: CO1PR03MB5730:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <CO1PR03MB5730333F8638B85B8772ACE0F6069@CO1PR03MB5730.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:655;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: multipart/alternative;
	boundary="_000_71F58ADC945E4ED39CB917889DE22641citrixcom_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MW4PR03MB6380.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 069cb178-f882-46cd-2c44-08d937e8c77c
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 Jun 2021 14:51:56.6926
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5730
X-OriginatorOrg: citrix.com

--_000_71F58ADC945E4ED39CB917889DE22641citrixcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

For the OCaml part:

Acked-by: Christian Lindig <christian.lindig@citrix.com>

On 25 Jun 2021, at 14:17, Jan Beulich <jbeulich@suse.com> wrote:

For log-dirty operations a 64-bit field is being truncated to become an
"int" return value. Seeing the large number of arguments the present
function takes, reduce its set of parameters to that needed for all
operations not involving the log-dirty bitmap, while introducing a new
wrapper for the log-dirty bitmap operations. This new function in turn
doesn't need an "mb" parameter, but has a 64-bit return type. (Using the
return value in favor of a pointer-type parameter is left as is, to
disturb callers as little as possible.)

While altering xc_shadow_control() anyway, also adjust the types of the
last two of the remaining parameters.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder whether we shouldn't take the opportunity and also rename
xc_shadow_control() to, say, xc_paging_control(), matching the layer
above the HAP/shadow distinction in the hypervisor.

--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -885,11 +885,15 @@ typedef struct xen_domctl_shadow_op_stat
int xc_shadow_control(xc_interface *xch,
                      uint32_t domid,
                      unsigned int sop,
-                      xc_hypercall_buffer_t *dirty_bitmap,
-                      unsigned long pages,
-                      unsigned long *mb,
-                      uint32_t mode,
-                      xc_shadow_op_stats_t *stats);
+                      unsigned int *mb,
+                      unsigned int mode);
+long long xc_logdirty_control(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int sop,
+                              xc_hypercall_buffer_t *dirty_bitmap,
+                              unsigned long pages,
+                              unsigned int mode,
+                              xc_shadow_op_stats_t *stats);

int xc_sched_credit_domain_set(xc_interface *xch,
                               uint32_t domid,
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -650,25 +650,48 @@ int xc_watchdog(xc_interface *xch,
int xc_shadow_control(xc_interface *xch,
                      uint32_t domid,
                      unsigned int sop,
-                      xc_hypercall_buffer_t *dirty_bitmap,
-                      unsigned long pages,
-                      unsigned long *mb,
-                      uint32_t mode,
-                      xc_shadow_op_stats_t *stats)
+                      unsigned int *mb,
+                      unsigned int mode)
{
    int rc;
    DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);

    memset(&domctl, 0, sizeof(domctl));

    domctl.cmd = XEN_DOMCTL_shadow_op;
    domctl.domain = domid;
    domctl.u.shadow_op.op     = sop;
-    domctl.u.shadow_op.pages  = pages;
    domctl.u.shadow_op.mb     = mb ? *mb : 0;
    domctl.u.shadow_op.mode   = mode;
-    if (dirty_bitmap != NULL)
+
+    rc = do_domctl(xch, &domctl);
+
+    if ( mb )
+        *mb = domctl.u.shadow_op.mb;
+
+    return rc;
+}
+
+long long xc_logdirty_control(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int sop,
+                              xc_hypercall_buffer_t *dirty_bitmap,
+                              unsigned long pages,
+                              unsigned int mode,
+                              xc_shadow_op_stats_t *stats)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
+
+    memset(&domctl, 0, sizeof(domctl));
+
+    domctl.cmd = XEN_DOMCTL_shadow_op;
+    domctl.domain = domid;
+    domctl.u.shadow_op.op    = sop;
+    domctl.u.shadow_op.pages = pages;
+    domctl.u.shadow_op.mode  = mode;
+    if ( dirty_bitmap )
        set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap,
                                dirty_bitmap);

@@ -678,9 +701,6 @@ int xc_shadow_control(xc_interface *xch,
        memcpy(stats, &domctl.u.shadow_op.stats,
               sizeof(xc_shadow_op_stats_t));

-    if ( mb )
-        *mb = domctl.u.shadow_op.mb;
-
    return (rc == 0) ? domctl.u.shadow_op.pages : rc;
}

--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -459,10 +459,10 @@ static int send_checkpoint_dirty_pfn_lis
    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                    &ctx->restore.dirty_bitmap_hbuf);

-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
             HYPERCALL_BUFFER(dirty_bitmap), ctx->restore.p2m_size,
-             NULL, 0, &stats) != ctx->restore.p2m_size )
+             0, &stats) != ctx->restore.p2m_size )
    {
        PERROR("Failed to retrieve logdirty bitmap");
        goto err;
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -428,18 +428,18 @@ static int enable_logdirty(struct xc_sr_
    /* This juggling is required if logdirty is enabled for VRAM tracking. */
    rc = xc_shadow_control(xch, ctx->domid,
                           XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                           NULL, 0, NULL, 0, NULL);
+                           NULL, 0);
    if ( rc < 0 )
    {
        on1 = errno;
        rc = xc_shadow_control(xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                               NULL, 0, NULL, 0, NULL);
+                               NULL, 0);
        if ( rc < 0 )
            off = errno;
        else {
            rc = xc_shadow_control(xch, ctx->domid,
                                   XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                                   NULL, 0, NULL, 0, NULL);
+                                   NULL, 0);
            if ( rc < 0 )
                on2 = errno;
        }
@@ -556,10 +556,10 @@ static int send_memory_live(struct xc_sr
        if ( policy_decision != XGS_POLICY_CONTINUE_PRECOPY )
            break;

-        if ( xc_shadow_control(
+        if ( xc_logdirty_control(
                 xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
                 &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-                 NULL, 0, &stats) != ctx->save.p2m_size )
+                 0, &stats) != ctx->save.p2m_size )
        {
            PERROR("Failed to retrieve logdirty bitmap");
            rc = -1;
@@ -653,10 +653,10 @@ static int suspend_and_send_dirty(struct
    if ( rc )
        goto out;

-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_CLEAN,
             HYPERCALL_BUFFER(dirty_bitmap), ctx->save.p2m_size,
-             NULL, XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats) !=
+             XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL, &stats) !=
         ctx->save.p2m_size )
    {
        PERROR("Failed to retrieve logdirty bitmap");
@@ -716,10 +716,10 @@ static int verify_frames(struct xc_sr_co
    if ( rc )
        goto out;

-    if ( xc_shadow_control(
+    if ( xc_logdirty_control(
             xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_PEEK,
             &ctx->save.dirty_bitmap_hbuf, ctx->save.p2m_size,
-             NULL, 0, &stats) != ctx->save.p2m_size )
+             0, &stats) != ctx->save.p2m_size )
    {
        PERROR("Failed to retrieve logdirty bitmap");
            rc = -1;
@@ -834,7 +834,7 @@ static void cleanup(struct xc_sr_context


    xc_shadow_control(xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                      NULL, 0, NULL, 0, NULL);
+                      NULL, 0);

    if ( ctx->save.ops.cleanup(ctx) )
        PERROR("Failed to clean up");
--- a/tools/libs/light/libxl_colo_restore.c
+++ b/tools/libs/light/libxl_colo_restore.c
@@ -62,7 +62,7 @@ static void colo_enable_logdirty(libxl__
    /* we need to know which pages are dirty to restore the guest */
    if (xc_shadow_control(CTX->xch, domid,
                          XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
-                          NULL, 0, NULL, 0, NULL) < 0) {
+                          NULL, 0) < 0) {
        LOGD(ERROR, domid, "cannot enable secondary vm's logdirty");
        lds->callback(egc, lds, ERROR_FAIL);
        return;
@@ -90,7 +90,7 @@ static void colo_disable_logdirty(libxl_

    /* we need to know which pages are dirty to restore the guest */
    if (xc_shadow_control(CTX->xch, domid, XEN_DOMCTL_SHADOW_OP_OFF,
-                          NULL, 0, NULL, 0, NULL) < 0)
+                          NULL, 0) < 0)
        LOGD(WARN, domid, "cannot disable secondary vm's logdirty");

    if (crs->hvm) {
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -529,10 +529,10 @@ int libxl__arch_domain_create(libxl__gc
        xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);

    if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
-        unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
-                                           1024);
+        unsigned int shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
+                                          1024);
        xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                          NULL, 0, &shadow, 0, NULL);
+                          &shadow, 0);
    }

    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -997,13 +997,13 @@ CAMLprim value stub_shadow_allocation_ge
{
CAMLparam2(xch, domid);
CAMLlocal1(mb);
- unsigned long c_mb;
+ unsigned int c_mb;
int ret;

caml_enter_blocking_section();
ret = xc_shadow_control(_H(xch), _D(domid),
XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION,
- NULL, 0, &c_mb, 0, NULL);
+ &c_mb, 0);
caml_leave_blocking_section();
if (ret != 0)
failwith_xc(_H(xch));
@@ -1016,14 +1016,14 @@ CAMLprim value stub_shadow_allocation_se
 value mb)
{
CAMLparam3(xch, domid, mb);
- unsigned long c_mb;
+ unsigned int c_mb;
int ret;

c_mb = Int_val(mb);
caml_enter_blocking_section();
ret = xc_shadow_control(_H(xch), _D(domid),
XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
- NULL, 0, &c_mb, 0, NULL);
+ &c_mb, 0);
caml_leave_blocking_section();
if (ret != 0)
failwith_xc(_H(xch));
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1192,8 +1192,7 @@ static PyObject *pyxc_shadow_control(PyO
                                      &dom, &op) )
        return NULL;

-    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL)
-         < 0 )
+    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0) < 0 )
        return pyxc_error_to_exception(xc->xc_handle);

    Py_INCREF(zero);
@@ -1208,7 +1207,7 @@ static PyObject *pyxc_shadow_mem_control
    int op;
    uint32_t dom;
    int mbarg = -1;
-    unsigned long mb;
+    unsigned int mb;

    static char *kwd_list[] = { "dom", "mb", NULL };

@@ -1223,7 +1222,7 @@ static PyObject *pyxc_shadow_mem_control
        mb = mbarg;
        op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION;
    }
-    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, &mb, 0, NULL) < 0 )
+    if ( xc_shadow_control(xc->xc_handle, dom, op, &mb, 0) < 0 )
        return pyxc_error_to_exception(xc->xc_handle);

    mbarg = mb;





From xen-devel-bounces@lists.xenproject.org Fri Jun 25 15:19:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 15:19:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163048-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163048: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f591755823a7e94fc6b4b8ddce71f0421a94fa09
X-Osstest-Versions-That:
    xen=8a9b94982b0e0928ad874907f5a5b005944ab7cf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 15:18:51 +0000

flight 163048 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163048/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f591755823a7e94fc6b4b8ddce71f0421a94fa09
baseline version:
 xen                  8a9b94982b0e0928ad874907f5a5b005944ab7cf

Last test of basis   163033  2021-06-25 10:01:25 Z    0 days
Testing same since   163048  2021-06-25 13:05:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8a9b94982b..f591755823  f591755823a7e94fc6b4b8ddce71f0421a94fa09 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 15:36:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 15:36:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163024-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163024: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ecba223da6215d6f6ce2d343b70b2e9a19bfb90b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 15:36:49 +0000

flight 163024 qemu-mainline real [real]
flight 163056 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163024/
http://logs.test-lab.xenproject.org/osstest/logs/163056/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                ecba223da6215d6f6ce2d343b70b2e9a19bfb90b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  309 days
Failing since        152659  2020-08-21 14:07:39 Z  308 days  565 attempts
Testing same since   163024  2021-06-25 03:40:20 Z    0 days    1 attempts

------------------------------------------------------------
548 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178035 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 15:50:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 15:50:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Marek
 Marczykowski <marmarek@invisiblethingslab.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 01/12] libxc: split xc_logdirty_control() from
 xc_shadow_control()
Message-ID: <034399e0-d79f-71b1-286c-823a97da7e73@citrix.com>
Date: Fri, 25 Jun 2021 16:49:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 25/06/2021 14:17, Jan Beulich wrote:
> For log-dirty operations a 64-bit field is being truncated to become an
> "int" return value. Seeing the large number of arguments the present
> function takes, reduce its set of parameters to that needed for all
> operations not involving the log-dirty bitmap, while introducing a new
> wrapper for the log-dirty bitmap operations. This new function in turn
> doesn't need an "mb" parameter, but has a 64-bit return type. (Using the
> return value in favor of a pointer-type parameter is left as is, to
> disturb callers as little as possible.)
>
> While altering xc_shadow_control() anyway, also adjust the types of the
> last two of the remaining parameters.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I wonder whether we shouldn't take the opportunity and also rename
> xc_shadow_control() to, say, xc_paging_control(), matching the layer
> above the HAP/shadow distinction in the hypervisor.

I do remember this being an especially obnoxious interface to use.  Any
improvement would go a long way, but I think we also need to rename some
domctls too.

>
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -885,11 +885,15 @@ typedef struct xen_domctl_shadow_op_stat
>  int xc_shadow_control(xc_interface *xch,
>                        uint32_t domid,
>                        unsigned int sop,
> -                      xc_hypercall_buffer_t *dirty_bitmap,
> -                      unsigned long pages,
> -                      unsigned long *mb,
> -                      uint32_t mode,
> -                      xc_shadow_op_stats_t *stats);
> +                      unsigned int *mb,
> +                      unsigned int mode);
> +long long xc_logdirty_control(xc_interface *xch,

uint64_t to match the hypercall?  All users of libxc are stdint.h aware.

> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -650,25 +650,48 @@ int xc_watchdog(xc_interface *xch,
>  int xc_shadow_control(xc_interface *xch,
>                        uint32_t domid,
>                        unsigned int sop,
> -                      xc_hypercall_buffer_t *dirty_bitmap,
> -                      unsigned long pages,
> -                      unsigned long *mb,
> -                      uint32_t mode,
> -                      xc_shadow_op_stats_t *stats)
> +                      unsigned int *mb,
> +                      unsigned int mode)
>  {
>      int rc;
>      DECLARE_DOMCTL;
> -    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
>
>      memset(&domctl, 0, sizeof(domctl));
>
>      domctl.cmd = XEN_DOMCTL_shadow_op;
>      domctl.domain = domid;
>      domctl.u.shadow_op.op     = sop;
> -    domctl.u.shadow_op.pages  = pages;
>      domctl.u.shadow_op.mb     = mb ? *mb : 0;
>      domctl.u.shadow_op.mode   = mode;
> -    if (dirty_bitmap != NULL)
> +
> +    rc = do_domctl(xch, &domctl);
> +
> +    if ( mb )
> +        *mb = domctl.u.shadow_op.mb;
> +
> +    return rc;
> +}
> +
> +long long xc_logdirty_control(xc_interface *xch,
> +                              uint32_t domid,
> +                              unsigned int sop,
> +                              xc_hypercall_buffer_t *dirty_bitmap,
> +                              unsigned long pages,
> +                              unsigned int mode,
> +                              xc_shadow_op_stats_t *stats)
> +{
> +    int rc;
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
> +
> +    memset(&domctl, 0, sizeof(domctl));
> +
> +    domctl.cmd = XEN_DOMCTL_shadow_op;
> +    domctl.domain = domid;
> +    domctl.u.shadow_op.op    = sop;
> +    domctl.u.shadow_op.pages = pages;
> +    domctl.u.shadow_op.mode  = mode;

Please use:

struct xen_domctl domctl = {
    .cmd = XEN_DOMCTL_shadow_op,
    ...
};

I've been slowly taking out users of DECLARE_DOMCTL, because beyond
being pure code obfuscation, valgrind (rightly) complains that the
hypercall operates on uninitialised memory.

> --- a/tools/libs/light/libxl_colo_restore.c
> +++ b/tools/libs/light/libxl_colo_restore.c
> @@ -62,7 +62,7 @@ static void colo_enable_logdirty(libxl__
>      /* we need to know which pages are dirty to restore the guest */
>      if (xc_shadow_control(CTX->xch, domid,
>                            XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
> -                          NULL, 0, NULL, 0, NULL) < 0) {
> +                          NULL, 0) < 0) {
>          LOGD(ERROR, domid, "cannot enable secondary vm's logdirty");
>          lds->callback(egc, lds, ERROR_FAIL);
>          return;

:-/ even more COLO code which escaped my attempts to use a consistent
coding style.

I'll fix this up later, as it's fairly invasive (context-wise).

> @@ -90,7 +90,7 @@ static void colo_disable_logdirty(libxl_
>
>      /* we need to know which pages are dirty to restore the guest */
>      if (xc_shadow_control(CTX->xch, domid, XEN_DOMCTL_SHADOW_OP_OFF,
> -                          NULL, 0, NULL, 0, NULL) < 0)
> +                          NULL, 0) < 0)
>          LOGD(WARN, domid, "cannot disable secondary vm's logdirty");
>
>      if (crs->hvm) {
> --- a/tools/libs/light/libxl_x86.c
> +++ b/tools/libs/light/libxl_x86.c
> @@ -529,10 +529,10 @@ int libxl__arch_domain_create(libxl__gc
>          xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);
>
>      if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
> -        unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
> -                                           1024);
> +        unsigned int shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
> +                                          1024);
>          xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
> -                          NULL, 0, &shadow, 0, NULL);
> +                          &shadow, 0);

I know this isn't introduced by your patch, but this cannot possibly be
correct without error handling.  There is a good chance of this call
running Xen out of memory.

Any chance of a fix split out into a separate patch?

>      }
>
>      if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -997,13 +997,13 @@ CAMLprim value stub_shadow_allocation_ge
>  {
>  	CAMLparam2(xch, domid);
>  	CAMLlocal1(mb);
> -	unsigned long c_mb;
> +	unsigned int c_mb;
>  	int ret;
>
>  	caml_enter_blocking_section();
>  	ret = xc_shadow_control(_H(xch), _D(domid),
>  				XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION,
> -				NULL, 0, &c_mb, 0, NULL);
> +				&c_mb, 0);
>  	caml_leave_blocking_section();
>  	if (ret != 0)
>  		failwith_xc(_H(xch));

Not a bug introduced in this patch, but this is broken.  There is a kb
vs mb units mismatch, and I don't see any shifts by 10 anywhere in the
Ocaml stubs.

> @@ -1016,14 +1016,14 @@ CAMLprim value stub_shadow_allocation_se
>  					  value mb)
>  {
>  	CAMLparam3(xch, domid, mb);
> -	unsigned long c_mb;
> +	unsigned int c_mb;
>  	int ret;
>
>  	c_mb = Int_val(mb);

This has a 31 bit truncation issue on 32bit builds.  I'm not sure how
much we care.

>  	caml_enter_blocking_section();
>  	ret = xc_shadow_control(_H(xch), _D(domid),
>  				XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
> -				NULL, 0, &c_mb, 0, NULL);
> +				&c_mb, 0);
>  	caml_leave_blocking_section();
>  	if (ret != 0)
>  		failwith_xc(_H(xch));
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -1192,8 +1192,7 @@ static PyObject *pyxc_shadow_control(PyO
>                                        &dom, &op) )
>          return NULL;
>
> -    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL)
> -         < 0 )
> +    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0) < 0 )
>          return pyxc_error_to_exception(xc->xc_handle);
>
>      Py_INCREF(zero);
> @@ -1208,7 +1207,7 @@ static PyObject *pyxc_shadow_mem_control
>      int op;
>      uint32_t dom;
>      int mbarg = -1;
> -    unsigned long mb;
> +    unsigned int mb;
>
>      static char *kwd_list[] = { "dom", "mb", NULL };
>
> @@ -1223,7 +1222,7 @@ static PyObject *pyxc_shadow_mem_control
>          mb = mbarg;
>          op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION;
>      }
> -    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, &mb, 0, NULL) < 0 )
> +    if ( xc_shadow_control(xc->xc_handle, dom, op, &mb, 0) < 0 )
>          return pyxc_error_to_exception(xc->xc_handle);

Here too.  There are int truncations on the input and output, and like
the Ocaml stubs, an apparent kb vs mb confusion.

I'm not sure whether switching to PyLong is sensible.  It's probably ok
from a compatibility perspective.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 16:37:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 16:37:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147321.271465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwop9-0005NS-2b; Fri, 25 Jun 2021 16:36:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147321.271465; Fri, 25 Jun 2021 16:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwop8-0005NL-Vv; Fri, 25 Jun 2021 16:36:58 +0000
Received: by outflank-mailman (input) for mailman id 147321;
 Fri, 25 Jun 2021 16:36:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwop7-0005NF-Nq
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 16:36:57 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f5b6475-67b5-41a1-9539-6b92fe7ea763;
 Fri, 25 Jun 2021 16:36:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f5b6475-67b5-41a1-9539-6b92fe7ea763
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 02/12] libxenguest: deal with log-dirty op stats overflow
Message-ID: <5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
Date: Fri, 25 Jun 2021 17:36:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0407.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 25/06/2021 14:18, Jan Beulich wrote:
> In send_memory_live() the precise value the dirty_count struct field
> gets initialized to doesn't matter much (apart from the triggering of
> the log message in send_dirty_pages(), see below), but it is important
> that it not be zero on the first iteration (or else send_dirty_pages()
> won't get called at all). Saturate the initializer value at the maximum
> value the field can hold.

I don't follow.  Migration would be extremely broken if the first
iteration didn't work correctly, so something else is going on here.

>
> While there also initialize struct precopy_stats' respective field to a
> more sane value: We don't really know how many dirty pages there are at
> that point.
>
> In suspend_and_send_dirty() and verify_frames() the local variables
> don't need initializing at all, as they're only an output from the
> hypercall which gets invoked first thing.
>
> In send_checkpoint_dirty_pfn_list() the local variable can be dropped
> altogether: It's optional to xc_logdirty_control() and not used anywhere
> else.
>
> Note that in case the clipping actually takes effect, the "Bitmap
> contained more entries than expected..." log message will trigger. This
> being just an informational message, I don't think this is overly
> concerning.

That message is currently an error, confirming that the VM will crash on
the resuming side.

This is a consequence of it attempting to balloon during the live phase
of migration, and discussed in docs/features/migration.pandoc (well - at
least mentioned on the "noone has fixed this yet" list).

> --- a/tools/libs/guest/xg_sr_save.c
> +++ b/tools/libs/guest/xg_sr_save.c
> @@ -500,7 +500,9 @@ static int simple_precopy_policy(struct
>  static int send_memory_live(struct xc_sr_context *ctx)
>  {
>      xc_interface *xch =3D ctx->xch;
> -    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
> +    xc_shadow_op_stats_t stats = {
> +        .dirty_count = MIN(ctx->save.p2m_size, (typeof(stats.dirty_count))~0)
> +    };
>      char *progress_str = NULL;
>      unsigned int x =3D 0;
>      int rc;
> @@ -519,7 +521,7 @@ static int send_memory_live(struct xc_sr
>          goto out;
>
>      ctx->save.stats = (struct precopy_stats){
> -        .dirty_count = ctx->save.p2m_size,
> +        .dirty_count = -1,

This is an external interface, and I'm not sure it will tolerate finding
more than p2m_size allegedly dirty.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 16:37:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 16:37:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147324.271476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwopq-0005wn-Bu; Fri, 25 Jun 2021 16:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147324.271476; Fri, 25 Jun 2021 16:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwopq-0005wg-8z; Fri, 25 Jun 2021 16:37:42 +0000
Received: by outflank-mailman (input) for mailman id 147324;
 Fri, 25 Jun 2021 16:37:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eqCl=LT=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lwopp-0005wW-8o
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 16:37:41 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ac1ae509-4fba-482a-84ea-ea43ab348c04;
 Fri, 25 Jun 2021 16:37:39 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1EDE31396;
 Fri, 25 Jun 2021 09:37:39 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 37B1B3F719;
 Fri, 25 Jun 2021 09:37:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac1ae509-4fba-482a-84ea-ea43ab348c04
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: smmuv1: Fixed stream matching register allocation
Date: Fri, 25 Jun 2021 17:37:26 +0100
Message-Id: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

SMR allocation should be based on the number of stream matching
registers supported by each SMMU device.

The issue was introduced by commit 5e08586afbb90b2e2d56c175c07db77a4afa873c,
which backported patches from Linux to Xen to fix the stream match
conflict when two devices have the same stream ID.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index d9a3a0cbf6..da2cd457d7 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -149,6 +149,7 @@ typedef enum irqreturn irqreturn_t;
 #define kzalloc(size, flags)		_xzalloc(size, sizeof(void *))
 #define devm_kzalloc(dev, size, flags)	_xzalloc(size, sizeof(void *))
 #define kmalloc_array(size, n, flags)	_xmalloc_array(size, sizeof(void *), n)
+#define kzalloc_array(size, n, flags)	_xzalloc_array(size, sizeof(void *), n)
 
 static void __iomem *devm_ioremap_resource(struct device *dev,
 					   struct resource *res)
@@ -2221,7 +2222,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 		smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
 
 		/* Zero-initialised to mark as invalid */
-		smmu->smrs = devm_kzalloc(smmu->dev, sizeof(*smmu->smrs), GFP_KERNEL);
+		smmu->smrs = kzalloc_array(sizeof(*smmu->smrs), size, GFP_KERNEL);
 		if (!smmu->smrs)
 			return -ENOMEM;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 16:37:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 16:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147325.271487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwopw-0006GF-JQ; Fri, 25 Jun 2021 16:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147325.271487; Fri, 25 Jun 2021 16:37:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwopw-0006G6-GB; Fri, 25 Jun 2021 16:37:48 +0000
Received: by outflank-mailman (input) for mailman id 147325;
 Fri, 25 Jun 2021 16:37:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eqCl=LT=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lwopv-0006FN-1k
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 16:37:47 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 559867d7-eded-4ff4-b049-5ac0980b89b7;
 Fri, 25 Jun 2021 16:37:46 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 29ED91396;
 Fri, 25 Jun 2021 09:37:46 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 433703F719;
 Fri, 25 Jun 2021 09:37:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 559867d7-eded-4ff4-b049-5ac0980b89b7
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: smmuv1: Set privileged attr to 'default'
Date: Fri, 25 Jun 2021 17:37:27 +0100
Message-Id: <c6c5e3deb97200baefb75d06ec934d2c6ee5eb62.1624546852.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
References: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>

Backport commit e19898077cfb642fe151ba22981e795c74d9e114
"iommu/arm-smmu: Set privileged attribute to 'default' instead of
'unprivileged'"

Original commit message:
    Currently the driver sets all the device transactions privileges
    to UNPRIVILEGED, but there are cases where the iommu masters wants
    to isolate privileged supervisor and unprivileged user.
    So don't override the privileged setting to unprivileged, instead
    set it to default as incoming and let it be controlled by the
    pagetable settings.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Sricharan R <sricharan@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1a68c2ab3b..d9a3a0cbf6 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1566,7 +1566,7 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
 			continue;
 
 		s2cr[idx].type = type ;
-		s2cr[idx].privcfg = S2CR_PRIVCFG_UNPRIV;
+		s2cr[idx].privcfg = S2CR_PRIVCFG_DEFAULT;
 		s2cr[idx].cbndx = cbndx;
 		arm_smmu_write_s2cr(smmu, idx);
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 17:10:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 17:10:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147338.271514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpL4-0001Qw-L6; Fri, 25 Jun 2021 17:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147338.271514; Fri, 25 Jun 2021 17:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpL4-0001Qp-Gk; Fri, 25 Jun 2021 17:09:58 +0000
Received: by outflank-mailman (input) for mailman id 147338;
 Fri, 25 Jun 2021 17:09:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwpL3-0001Qj-BZ
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 17:09:57 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cce38ab2-1a13-4edc-b88b-a248d8ef5fc8;
 Fri, 25 Jun 2021 17:09:56 +0000 (UTC)
X-Inumbo-ID: cce38ab2-1a13-4edc-b88b-a248d8ef5fc8
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <55875a26-7f1d-a6d9-9384-b03b3b2cb86d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 03/12] libxenguest: short-circuit "all-dirty" handling
Message-ID: <60be051f-7751-f15d-ae4d-2c7e9af82693@citrix.com>
Date: Fri, 25 Jun 2021 18:02:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <55875a26-7f1d-a6d9-9384-b03b3b2cb86d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 25/06/2021 14:18, Jan Beulich wrote:
> For one it is unnecessary to fill a perhaps large chunk of memory with
> all ones. Add a new parameter to send_dirty_pages() for callers to
> indicate so.
>
> Then it is further unnecessary to allocate the dirty bitmap altogether
> when all that's ever going to happen is a single all-dirty run.

The allocation is deliberate, and does want to stay where it is IMO.

Single all-dirty runs are a debugging technique only.  All production
cases are live, and you don't want to fail midway through because a
late, large, memory allocation failed.


As for the send_{dirty,all}_pages() split, that was deliberate to keep
the logic simple.  The logdirty bitmap is tiny (in comparison to other
structures) outside of artificial cases like this.

What you've done with this change is render send_all_pages()
redundant, but not actually take it out of the code, thereby
complicating it.  At the moment, this doesn't look to be an improvement.

> @@ -807,8 +798,11 @@ static int setup(struct xc_sr_context *c
>      if ( rc )
>          goto err;
>
> -    dirty_bitmap = xc_hypercall_buffer_alloc_pages(
> -        xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
> +    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
> +        ? xc_hypercall_buffer_alloc_pages(
> +              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
> +        : (void *)-1L;

This is a pointer loaded with a timebomb, which doesn't trigger NULL
pointer checks, and for which {set,clear}_bit(dirty_bitmap, large_pfn)
won't fault and will instead corrupt random areas of the address space.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 17:47:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 17:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147352.271548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpvG-0005XB-2W; Fri, 25 Jun 2021 17:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147352.271548; Fri, 25 Jun 2021 17:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpvF-0005X4-Vn; Fri, 25 Jun 2021 17:47:21 +0000
Received: by outflank-mailman (input) for mailman id 147352;
 Fri, 25 Jun 2021 17:47:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l36E=LT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwpvF-0005Wy-9g
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 17:47:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 351e135f-7696-44e6-b8c0-d18fb6054ae9;
 Fri, 25 Jun 2021 17:47:20 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 822FF6157E;
 Fri, 25 Jun 2021 17:47:19 +0000 (UTC)
X-Inumbo-ID: 351e135f-7696-44e6-b8c0-d18fb6054ae9
Date: Fri, 25 Jun 2021 10:47:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
Subject: Re: [PATCH] xen/arm: add forward_smc command line option for
 debugging
In-Reply-To: <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org>
Message-ID: <alpine.DEB.2.21.2106251033500.24906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s> <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 25 Jun 2021, Julien Grall wrote:
> Hi,
> 
> On 25/06/2021 02:51, Stefano Stabellini wrote:
> > It has become clear that an option to disable trapping SMC calls to Xen
> > is very useful for debugging user issues.
> >
> > Instead of having to provide a
> > patch to users every time, it would be great if we could just tell them
> > to add forward_smc=true to the Xen command line.
> 
> I can understand this would be useful to go a bit further in dom0 boot. But I
> am quite sceptical of the idea of providing an option directly in Xen because:
> 
> 1) This breaks other SMC uses in Xen (optee, VM monitor...)
> 2) There is no guarantee that the SMC call will not wreck Xen. To be clear, I
> don't refer to a malicious OS here, but a normal OS that boots
> 3) Very likely the next step for the user (or better, the developer, because
> that option should really not be used by a normal user) will be to
> decide whether they should modify the kernel or implement a mediator in Xen.
> 
> > This option is obviously unsafe and insecure and only meant for
> > debugging. Make clear in the description that if you pass
> > forward_smc=true then the system is not security supported.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > index 3ece83a427..0833fe80fc 100644
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency significantly. It can also lead to
> >  suboptimal scheduling decisions, but only when the system is
> >  oversubscribed (i.e., in total there are more vCPUs than pCPUs).
> >  
> > +### forward_smc (arm)
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> > +
> > +If enabled, instead of trapping firmware SMC calls to Xen, allow SMC
> > +calls from VMs directly to the firmware. This option is UNSAFE and it is
> > +only meant for debugging. Systems with forward_smc=true are not security
> > +supported.
> > +
> >  ### watchdog (x86)
> >  > `= force | <boolean>`
> > 
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index e7384381cc..0580ac5762 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
> >  }
> >  custom_param("vwfi", parse_vwfi);
> >  
> > +static bool forward_smc = false;
> > +boolean_param("forward_smc", forward_smc);
> > +
> >  register_t get_default_hcr_flags(void)
> >  {
> >      return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
> >              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> > -             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> > +             (forward_smc ? 0 : HCR_TSC) |
> > +             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> 
> A system wide option to turn off SMC trapping is a no-go because this would
> only be usable for debugging dom0 and not a guest.
> 
> So at the minimum this should be a per-domain option. Also, I think we still
> want to integrate with the rest of the SMC users. So Xen should still trap the
> SMC and the forward should happen in vsmccc_handle_call().
> 
> This would cover my first point.

Yes, you are totally right. I thought about it this morning as well.
This patch would break even PSCI :-(

It would be best implemented in platform_smc as forward_to_fw (see
xen/arch/arm/platforms/xilinx-zynqmp-eemi.c:forward_to_fw).


> For the second and third point, I still like
> to understand how this is going to help the developer to fully port the
> board/OS to Xen with this option disabled?

This is meant to help with bug triage only. There are a number of bugs
that can happen if certain platform SMCs are intercepted by Xen instead
of being forwarded to the hardware. A few times recently I have found
myself having to provide a patch to forward (forward_to_fw) all platform
SMCs as a first test to triage bugs. It is never a fix, only a way to
understand the next step of debugging. Also, Alex stumbled across
something similar on a non-Xilinx board (MacchiatoBin), so I thought it
was time for a better debugging option.


I think for debugging purposes it would be sufficient if all platform
SMCs were forwarded to firmware from all domains. Of course it is totally
unsafe, but it is just for debugging. But I can also see the value in
having a command line option to forward all platform SMCs from dom0
only, and maybe a separate patch later adding a per-domain option to
forward platform SMCs for specific domains if needed. That would be
safer and more flexible, but a little more work.


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 17:49:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 17:49:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147355.271560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpwu-00068z-EW; Fri, 25 Jun 2021 17:49:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147355.271560; Fri, 25 Jun 2021 17:49:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwpwu-00068s-BK; Fri, 25 Jun 2021 17:49:04 +0000
Received: by outflank-mailman (input) for mailman id 147355;
 Fri, 25 Jun 2021 17:49:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p8Eg=LT=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lwpws-00068k-Nh
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 17:49:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7e2f43d-290b-4e42-9893-de16d6449cef;
 Fri, 25 Jun 2021 17:49:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id 83676616EA;
 Fri, 25 Jun 2021 17:49:01 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id 7DCE860A37;
 Fri, 25 Jun 2021 17:49:01 +0000 (UTC)
X-Inumbo-ID: e7e2f43d-290b-4e42-9893-de16d6449cef
Subject: Re: [GIT PULL] xen: branch for v5.13-rc8
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210625144832.20839-1-jgross@suse.com>
References: <20210625144832.20839-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20210625144832.20839-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc8-tag
X-PR-Tracked-Commit-Id: 3de218ff39b9e3f0d453fe3154f12a174de44b25
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: b960e0147451915b5d4cd208b7abd3b07ceaf1a2
Message-Id: <162464334150.2214.18063317640641616641.pr-tracker-bot@kernel.org>
Date: Fri, 25 Jun 2021 17:49:01 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Fri, 25 Jun 2021 16:48:32 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc8-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/b960e0147451915b5d4cd208b7abd3b07ceaf1a2

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 18:09:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 18:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147361.271577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwqGL-00007n-El; Fri, 25 Jun 2021 18:09:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147361.271577; Fri, 25 Jun 2021 18:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwqGL-00007g-9U; Fri, 25 Jun 2021 18:09:09 +0000
Received: by outflank-mailman (input) for mailman id 147361;
 Fri, 25 Jun 2021 18:09:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwqGK-00007Z-Nm
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 18:09:08 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee1f386a-1d66-4ec1-b9e7-e6ab92edf1ba;
 Fri, 25 Jun 2021 18:09:07 +0000 (UTC)
X-Inumbo-ID: ee1f386a-1d66-4ec1-b9e7-e6ab92edf1ba
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <61ff4f26-a9cc-d123-98a0-be6c23f21e9b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 04/12] libxenguest: avoid allocating unused deferred-pages
 bitmap
Message-ID: <44825600-c27b-34ac-01b2-1ffb5e0bf0be@citrix.com>
Date: Fri, 25 Jun 2021 19:08:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <61ff4f26-a9cc-d123-98a0-be6c23f21e9b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0096.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a1fc69f1-2aa4-4772-65f1-08d938044fba
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6240:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB6240102E68639ED3E73BA85CBA069@SJ0PR03MB6240.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: lrb3Vi2ZtpQlGiOeVUi5j4v7GuzqQZsqNNUs9yVBx/M3EhHXnOQJJVfkNA+KAVdnDT0VORjO/3zl+7C0753zRwKbt2SW01MLWrLOjdfYYjRDjXmvWcKSpanCF6DIXUVDoKhV7Qw/I33dqUwB4iNCWiLzC8yL10EE3reBgK2RdIA4lUAtutvNRue0MTMrr8CPNudpvDJ1McvqCTz0WTmvoYDUvGtfE5uHtp7wViuQjrPDR1LiGc3YTuobedZKQ+0DDOovSBk4b+NOFJanTI2naXpbW40W8iJZ5znSi+mp356OiS/sOlgbyV13GrKK8RkZSuU6FzBuiI3f2myzAML0zk/zW9h8VuXNQzKgfAEu3ngFfOCCRUexWVoXTjZoTtBN2TSkCXmEq1NeTQ7rk5hDqPUqGl4nGO2nryIcAtEmmxmgIiSaUFchmrBs/WLBL1YtP2gu/Yiyv6IsLOCawzlb3RzDKi+ckwtAkpcmvx8H3mg14q+ohka2hBwUT+aZ2/P7jfKWrZiZyYilr47mlzyRanukUddQk367fipdhJYOohIJZQWxsApakSwcUB4bmKqvcdl0Arxle/OlQ74w0VfVw3t7TFm0KPXeqKYO3fImv7dlRKOrWzRa3Er1iqqtc0p0rQbJVnGAM4X7nNG+og/Adep08K2fVN9/mMQGGKTcCus5y+CLh0hBpBVTcO+W2U7fdIwF7yemdExgtbtTKFRNk3FLzdq/9Ckto3o2X5IfDrc=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(39850400004)(346002)(366004)(136003)(83380400001)(4326008)(26005)(316002)(478600001)(16526019)(186003)(66946007)(16576012)(54906003)(5660300002)(31686004)(66556008)(66476007)(110136005)(53546011)(38100700002)(6666004)(2906002)(36756003)(86362001)(956004)(8936002)(31696002)(6486002)(8676002)(2616005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?N1F1ZDdPTUwrd1E3Yk93WmVubG1UVzBuOXVTSmtmNm8yS08xQlZnSWdVNE5k?=
 =?utf-8?B?cllzUkRMQWJrNm93VWxxYlF1NUN4T2ZJQnI0UGNpMHBpQkdWNlBndUlDaTJ3?=
 =?utf-8?B?TFJsMUIwd1ZTb1lkWFZJWjNIMlZxOXphdXdzT3B3U0s2YzBhWHUzWHZqR1d5?=
 =?utf-8?B?Rkczb1Nva2ZRQ1VzM3VJSjVQR3VYNnFKWWg4c081SldPVHpOSTkvOUNMUjBT?=
 =?utf-8?B?TStiM010L1hwU2hubHhzeEp5YTZwOU41M2lxVnp1R0FjV3d3Z3JTSmlmNDdO?=
 =?utf-8?B?RnlnTVlQWmlXSmR6d3hYS0tzM2hzdmorSi9JVVVHSmJYTStxeDZMS0FyanZ1?=
 =?utf-8?B?R3Rxd2kxdnhjdzRlaS82dUVhSGNSd2NKeHBISjUwOTJmbFVZTG8zZUZSRFVi?=
 =?utf-8?B?dTBzRU12ZlBPSW9jMnYxcFlHWFBwUjUrTklGNVlYVDN3SEtOZHpKY1Q1U3hW?=
 =?utf-8?B?ZUtkMWJYRGhvc09jMmRZbUdpVDlLaFpUeVJtSS9MOU9sZElLaDNESVVVQTBu?=
 =?utf-8?B?S3VUeThjRXlZOWYrNFJQZXpWYS9hNXM0bm9FYmxwWlVYNDFrZ21VOURZOFZO?=
 =?utf-8?B?QU5RN0tSd052cFhHQU1IMUFJVHhMbnZQbDJaSjN2YjBUZmlXbGo5Tkc4b0w4?=
 =?utf-8?B?endhNXFLVHkrQzFNTzR4QkJhYWF6SnRDQnFjRmtQT3EzbTRnZFdYSWkvaFV4?=
 =?utf-8?B?QlFMck5IUnlWSGhlaDFXOGJzNDQ1TGViZ085c090ak5NOGsrZS9hL1JNQm9H?=
 =?utf-8?B?MzFsOFhHNitpeWNTYlpUZVVvTUJiTDhnVlM0c2dyYnJ2ZTdsUCtQODAwdXNy?=
 =?utf-8?B?VitOV0VvaW12TVdyWGxPU0JXdUJPSE95SjZzUkI0OUpQNEtVZ2Y0VFZTQ3pu?=
 =?utf-8?B?ajBIamVnUXNyMjkzazNPbGVWRERjVzEyK2c5cGhGYTU4TENhaFphQzBqZksw?=
 =?utf-8?B?ZDVqeEJLeWlLQ0hUV1hVTVhiSXlxTE5nUjE4VVp6b3o4cVhYZEY1MnZUK2N1?=
 =?utf-8?B?emNTaUxPT29zSWk4Y3lwa3cyalhOS2lGazBNdlQzQWhWMzdmZ0IrYXp0OWxY?=
 =?utf-8?B?eVpRUjV1YlNqeUJIbkk1ck0zRmkyby9HVVh3N1hsYVo4MjVWTWlHZ3hRYk5p?=
 =?utf-8?B?RkF5QjYvdFJMNXVEUUJISWduejhGTlJaZXowVSsySytZR2UwYWEyZUdMb0sz?=
 =?utf-8?B?Skh4eEVwazkwTUg1c295OWhqTHFzTmg4NVI4dnlVcGxheEhRdndDVHlhU3lO?=
 =?utf-8?B?aHRXN2JmRDhtRy9VM2M5N2xwcFFRVmRSR2lWOEdxbm5ObVloUW14bDRIQ3Ez?=
 =?utf-8?B?WGlQbHZYUXY5SkVJSThmSUNreTJVMzNac2psa1FHQktSWTZwRmJtS2hqVVN3?=
 =?utf-8?B?cC9oU3Y5QUFPaXZEL01IMWhOejUzRXI3ZEpnRDhjbzIyaHlkV0RaZjcyU3dL?=
 =?utf-8?B?YjkwYjBET2M2Q3VBY2pKVXBHVDNpYjk5T0ozYkNUbksrajc5QzZlQ0pUR1h0?=
 =?utf-8?B?RURVZFBCTEZoT0RabFNaUHdkSnZlM3F0SkxkekpiZEc0UnVnNnZtSGJTMFJK?=
 =?utf-8?B?bDEwZ2kzWXJsZFJrdGZQOHNEWXVvNGZQSVp1ZkttR0RPZjhObkFRZ3M5a2J5?=
 =?utf-8?B?RFJGTmIyNVlVcml4SnkyOHR4c2k4NmFyY1QrY1BHTGhUTDdLb2hYT1NMdmZW?=
 =?utf-8?B?S1RSQmlqZitheUEyam1FOGdEcExoMFNpaWh1WjN3OUxERmdUNm1xck5MZE1Y?=
 =?utf-8?Q?oPx27sZSYh9LGERN7MxJWxWMz63ED+rZ9EMqMky?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a1fc69f1-2aa4-4772-65f1-08d938044fba
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 18:09:02.0005
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 21HotF01oKWZ893qYv1EBvIrIRBfG3YoWioAWlQTaZmIk5iF0DofFBCgXqWw3TOrvx5zfchdaMUv7dSUZr71+O6t5xVxO8qqZ7TMoC42xS0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6240
X-OriginatorOrg: citrix.com

On 25/06/2021 14:19, Jan Beulich wrote:
> Like for the dirty bitmap, it is unnecessary to allocate the deferred-
> pages bitmap when all that's ever going to happen is a single all-dirty
> run.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> The clearing of the bitmap at the end of suspend_and_send_dirty() also
> looks unnecessary - am I overlooking anything?

Yes. Remus and COLO.  You don't want to accumulate successfully-sent
deferred pages across checkpoints, otherwise you'll eventually be sending
the entire VM every checkpoint.
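The failure mode is easy to demonstrate in miniature. Below is a standalone toy model (not libxenguest code; all names are made up) contrasting a deferred-pages bitmap that is cleared at the start of each checkpoint with one that is not:

```c
#include <string.h>

#define NPAGES 8

/* Toy model of the deferred-pages bitmap across checkpoint rounds. */
static unsigned char deferred[NPAGES];

/* One checkpoint round: pages marked in failed[] could not be sent this
   time.  If clear_first is set, pages deferred in earlier rounds are
   forgotten (they were re-sent at the start of this round).  Returns the
   number of pages that would be retransmitted next round. */
static int checkpoint(const unsigned char *failed, int clear_first)
{
    int i, pending = 0;

    if ( clear_first )
        memset(deferred, 0, sizeof(deferred));

    for ( i = 0; i < NPAGES; i++ )
    {
        deferred[i] |= failed[i];
        pending += deferred[i];
    }

    return pending;
}
```

Without the clear, every page that was ever deferred stays marked and is retransmitted at every subsequent checkpoint, which is the "sending the entire VM" behaviour in the worst case.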


Answering out of patch order...
> @@ -791,24 +797,31 @@ static int setup(struct xc_sr_context *c
>  {
>      xc_interface *xch = ctx->xch;
>      int rc;
> -    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
> -                                    &ctx->save.dirty_bitmap_hbuf);
> 
>      rc = ctx->save.ops.setup(ctx);
>      if ( rc )
>          goto err;
> 
> -    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
> -        ? xc_hypercall_buffer_alloc_pages(
> -              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
> -        : (void *)-1L;
> +    if ( ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN )
> +    {
> +        DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
> +                                        &ctx->save.dirty_bitmap_hbuf);
> +
> +        dirty_bitmap =
> +            xc_hypercall_buffer_alloc_pages(
> +                xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
> +        ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
> +
> +        if ( !dirty_bitmap || !ctx->save.deferred_pages )
> +            goto enomem;
> +    }

So this is better than the previous patch.  At least we've got a clean
NULL pointer now.

I could in principle get on board with the optimisation, except it's not
safe (see below).

> --- a/tools/libs/guest/xg_sr_save.c
> +++ b/tools/libs/guest/xg_sr_save.c
> @@ -130,7 +130,7 @@ static int write_batch(struct xc_sr_cont
>                                                        ctx->save.batch_pfns[i]);
> 
>          /* Likely a ballooned page. */
> -        if ( mfns[i] == INVALID_MFN )
> +        if ( mfns[i] == INVALID_MFN && ctx->save.deferred_pages )
>          {
>              set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
>              ++ctx->save.nr_deferred_pages;
> @@ -196,8 +196,12 @@ static int write_batch(struct xc_sr_cont
>              {
>                  if ( rc == -1 && errno == EAGAIN )
>                  {
> -                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
> -                    ++ctx->save.nr_deferred_pages;
> +                    if ( ctx->save.deferred_pages )
> +                    {
> +                        set_bit(ctx->save.batch_pfns[i],
> +                                ctx->save.deferred_pages);
> +                        ++ctx->save.nr_deferred_pages;
> +                    }

These two blocks are the only two which modify deferred_pages.

It occurs to me that this means deferred_pages is PV-only, because of
the stub implementations of x86_hvm_pfn_to_gfn() and
x86_hvm_normalise_page().  Furthermore, this is likely to be true for
any HVM-like domains even on other architectures.

If these instead were hard errors when !deferred_pages, then that would
at least get the logic into an acceptable state.

However, the first hunk demonstrates that deferred_pages gets used even
in the non-live case.  In particular, it is sensitive to errors with the
guest's handling of its own P2M.  Also, I can't obviously spot anything
which will correctly fail migration if deferred pages survive the final
iteration.
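As a sketch of the sort of final-iteration check that appears to be missing (a hypothetical helper, not actual libxenguest code), something along these lines would make the failure explicit:

```c
#include <stdio.h>

/* Toy stand-in for an end-of-stream check: if any pages are still
   deferred after the final iteration, the stream is incomplete and the
   save should fail rather than silently succeed. */
static int check_final_iteration(unsigned int nr_deferred_pages)
{
    if ( nr_deferred_pages )
    {
        fprintf(stderr, "%u pages still deferred after final iteration\n",
                nr_deferred_pages);
        return -1;
    }

    return 0;
}
```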

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 18:30:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 18:30:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147370.271599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwqam-0003Dd-GC; Fri, 25 Jun 2021 18:30:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147370.271599; Fri, 25 Jun 2021 18:30:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwqam-0003DW-DB; Fri, 25 Jun 2021 18:30:16 +0000
Received: by outflank-mailman (input) for mailman id 147370;
 Fri, 25 Jun 2021 18:30:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwqal-0003DQ-BX
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 18:30:15 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0dd64d5-8267-400b-81c3-e568e684f3a4;
 Fri, 25 Jun 2021 18:30:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0dd64d5-8267-400b-81c3-e568e684f3a4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624645814;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rnt3G4VwIadZCl10fkgvQNzuX23lGdCnsgQ1ox9CHuA=;
  b=hr1ppLj9vX7sWWJyxMHnroiSq1oy9rZuZBFmqG/shUm4eOjl2kgAdWfB
   0r5t2jwnIkyNcHb81N7kJFXvWgwjnCOB2JXY9Y0bsv+KbFE3RNCFSTehg
   XYwPHkSkKOGxFu0pRJfsXRcAfHhdAd368noA2gqU7ErQZP/cta8nbd0qt
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: VcNld3m5D5MHLdoI4b6pvA+A92bvhDpVKdMMKk7yMDkq1r5NdigIFdQG2VyfEt34fUnOkeedfy
 +kDqTJt32v33b5l3Cop99A9a0lFTlhR61atKcDnIDJRf9110tw5aE/uBEWMboVxHCC3OyGUQ0c
 aMdEMGWIg/xo+1mpDL+OCp+99v314Gr5QmnWvMwHyseqJ7/Vf6BipSFHXXR/YF71+xG91jojm3
 WGclPMrOI1BX9pVpyjnOQv1HZuug5rekFA1zwwHX6bQfu3wuw31ER2rXsQ0s8PD9OvBVcO4K2Q
 8cM=
X-SBRS: 5.1
X-MesageID: 46987394
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:jRW7g6z2l49B409VEounKrPxyOskLtp133Aq2lEZdPULSKKlfp
 GV88jziyWZtN9wYhEdcdDpAtjlfZquz+8K3WB3B8bcYOCGghrVEGgG1+rfKlLbalbDH4JmpM
 Fdmu1FeaDN5DtB/LTHCWuDYq4dKbC8mcjC74qurAYOPHVXguNbnmBE426gYz5LrWJ9dOME/f
 Snl696TnabCA4qhpPRPAh0YwGPnayEqLvWJTo9QzI34giHij2lrJb8Dhijxx8bFxdC260r/2
 TpmxHwovzLiYD69jbsk0voq7hGktrozdVOQOSKl8guMz3pziKlfp5oVbGutC085Muv9FEput
 /RpApIBbUz11rhOkWO5Tf90Qjp1zgjr1X4z0WDvHflqcvlABonFston+tiA17kwntlmOs5/L
 NA3mqfuZYSJwjHhj7B69/BUAwvvlaooEAljfUYgxVkIMkjgYdq3MsiFX5uYdE99HqQ0vF/LA
 AuNrCe2B9uSyLfU5iD1VMfmOBFNx8Ib2K7qktrgL3Z79EZpgEj86O0rPZv1kvoz6hNPaWs0d
 60eJiApIs+OfP+UpgNTdvpYfHHRlAlEii8f157HzzcZeo60iX22u/KCfMOlbuXRKA=
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="46987394"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZWMElGv2eMI8J0CNeIuXc5AxAkitdAofU2WgaEgHGxYxRr0StW1whpiLja8rz8IT+iV0S1uc5b96mrKeiLcQW7/ofqjMUo4erZiSnjsdkctE2bUXUvmpOoaArAX/NHt0ZWlrfhTgN5zTuu1UvpuFBsfxenr5gUPt7/Naeb6NBBh5nkpsTHa4zRlQ8/Nrbx0AmzjCCQZkMq/nukfA5oD39jGNl2Ll66A9m8ufg8ANSvXnZK/K5V9gljepe71ylHGEzky/K1Usd/8ZrpIdX3rkurGpGNOUbWqYaDf+wiy4mDSP+v4irkyblNrZI2+LzN+9D/N007gOLj+vu1tvWYNmjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1nJvP+9uFsJIXLduVWEcB6WqwnucemD0Bt0vSwzrOb4=;
 b=hCCPhPuz6MyCaIYFLB7Vgg274P8eabXKO8dw7UIddn3KU/2DNn6iLV1mQCm+O0iC8AKwgqTdJWavNnOmm4pfSBJtzyRYcT/XuQNN6uwXAIM1kRWDZKzNjm05NKYKMXn6kDQTBriAEtCmvUtgnl3Ufoxcun/K6Z23B4eVzxPM9sB54qnPQqoLsbMn6RI3xBJfqgYDNuPje9qBf6GaCMw89NlftzSpBgHq2Tb5up3b2aIYc9B9qpdY+3921wC+HravTIvtB3NjGOuRbE2W+U6MXYN+9zZtknQWaJjXAP+ZR204dEZJsVY7lOmksn7IR8ELt+Dsigktxo9RDvjABx8l8Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1nJvP+9uFsJIXLduVWEcB6WqwnucemD0Bt0vSwzrOb4=;
 b=EVxozvna1xDl1e/jHooCxmMXEGUgofC1seL18HvzEWUxVdx1BoD0f502u8n9kk8pxhpMNwCqeEy2JW4xXZcgs5dhiu4lw5iqZ7DKYcz2jAVuDtdJ5NGO0hdtVVrvJj3V2JW3mYPrjkjrC3Znc4jS3Va2D5SIfiODVzKBlEdz/V4=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <0d824d4b-0696-baca-a3ef-95ee641e4d08@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 05/12] libxenguest: complete loops in
 xc_map_domain_meminfo()
Message-ID: <7cafca96-1d01-db96-8583-b8299aad41fe@citrix.com>
Date: Fri, 25 Jun 2021 19:30:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <0d824d4b-0696-baca-a3ef-95ee641e4d08@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0078.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: de6349da-ec28-4c4d-ef16-08d9380743a7
X-MS-TrafficTypeDiagnostic: BY5PR03MB5079:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB50792A73A7420F611F2E0A1DBA069@BY5PR03MB5079.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: J0QbDaOJoXi08q4xWjlaRGdU8vLPAlEO/TPma7gl/zu4WkvZYylFtTIlaSjQLedxohL2Exjv70WVSxg/VPoVLOo2g4sgr2/llk0tLAT/YXO9AP3wWSVYRSSMubvBpEzTZLwUcR6Mb1hYQmS2dKG2oQTbWaBc1rQYJ1ILJLmsiUHu0BkaU1CQ48mCSyU8uV68yx1bxopL2ldgQeBDvWTEhlXr+BMu4fQ2RNHSdjnbzKBO9FQO0/AZjXvKJMoYNwTTJzXaqCgadPn3oj9nAvcFc83b1+PTBP6INMYcmqHHR6UMh1rnN920AcEtOia+ib6j1HaRrJmHnLUOkjXVYbtVKDx97M175gwQA1SLKRjulFQAByNUhnpdko46Ec0T3KOrbLwycm/wpqS0SdXzPfFLnxhVWW47FGbWhntQv3JpJGr6n70S2cu4IsiBWUQ4WuKrOQFqRHE+gakBwsGVYOQBQA68trxhUCab8HDnp6PNoFmX/n6W+3TQY7cdcWUm5fTg5vsyzWIf/r5VoDddpEh3KWHK75GKMwQRGOYWFQV+lPCzLOc3eDjBxCai0M3268KrDkciHalYX5e7lz3ZcvA4o70OWkSthpbcbOd8LXsNt4Wh65Rl7kDnuGsP/0G+jLrsmnSVjRyxTXutImLGNm1fq10CblNjfjSdftxRjnw4aJSMowKknKLxdAbnL9mwYAH6
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(136003)(366004)(376002)(346002)(31686004)(31696002)(38100700002)(53546011)(2906002)(110136005)(16526019)(6666004)(66556008)(66476007)(86362001)(26005)(5660300002)(83380400001)(16576012)(4326008)(8676002)(478600001)(66946007)(186003)(8936002)(2616005)(956004)(36756003)(54906003)(316002)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?utf-8?B?RmpWWUFEb251YWc0VzlOaWwzZzZJdXd3WTdJWkludDlQdVVTTzBwcjJId3B3?=
 =?utf-8?B?MHlqSWdqRUM3YWd0NHAwOEc2NEFtOUl1ZVU5YXhrUEJocTkyTWR4V2s1RHhM?=
 =?utf-8?B?NW5RUFJFL1o0dGE5enR0dWQ3SDhDYzZnMXI5dVQrMW42L0hnMThYMnJjTktj?=
 =?utf-8?B?eGo3SDVCWVYrdkU4NFkxUW4zNktVNldwWWp2dU9ta0FtNGZqa0FqSFo4cmhx?=
 =?utf-8?B?cUM4cSt1bktha1ZVcVpXd2pZUjZpNDEwczRBTlpqWFQyYk9lenVDc0VLQkZn?=
 =?utf-8?B?TTBSTW42OEJiQ3NlUVM2ZWhXSGIxYzB5dWRpM0tWekFXZ05IWUFRQUgvSGNQ?=
 =?utf-8?B?dysweVkxUUVodUNZa3FBVEk4TjVDU3luQmhyYmxUbUI5VG5qbUpsNFhiaGVR?=
 =?utf-8?B?NDFWdWh3MkQ2SU1ESkFjQVdyRk5oekN3UXdLVXJqQzZtK0ttdmN1b0l3aXYr?=
 =?utf-8?B?bE9QL3l0STNWNFVpelVKd24vbnpzcTJidXRVSzMyaFZPQmxXZjZMZmR1SHB2?=
 =?utf-8?B?eFNnVVFSUlRWQzIwb2ZqSG54NFZRcEtNZi82bW9rT1kxQUZabVRnb0FZdXZl?=
 =?utf-8?B?ODV6UG12SnBRK1NaTlJIZHd2SllQWnZtMWJsUTRoWnFmd0tRWHAvd1lnTXpu?=
 =?utf-8?B?aTZOMm5ZVmF6T2daamNndmpON3YxaHlteFZKWm8vV0pDUXpOOE55dGdqUzAy?=
 =?utf-8?B?dStjVVdSZXhuY0xVNGZTZ3FiVmsxNXRib2RXUUk5eUZsV202TUM3alQvMGZ0?=
 =?utf-8?B?U1JTUnkzcWx2ZlhUcmc4ajZ0aW1CbFhzTEVlOFh3bWVZN0FSdEFKOUdYeW5O?=
 =?utf-8?B?Ui9ub2RQeTBMOHdjMUVEV3B1WEwrRXVYQlEzSVFPWEFCTzgyM1VZY1c5Zlha?=
 =?utf-8?B?d2IweEIzWThCb3phK0dKTkU4OG1WMSswVUtvWVpKTXFZWjd1ZG5NUmZTSHJK?=
 =?utf-8?B?SzUrNHh6S3F4NlpCZUtZek85MmtJWEl3QmJmanZTTCtxbHdlbTJrNkRqT0l4?=
 =?utf-8?B?M0NoSEszeGZ0L2RLKzF4TjRaVStEL1g2aHJ5NHhaK3FkOE1Tc29VOTRReEZ5?=
 =?utf-8?B?QWZwZTdMQkp2VTRISW1rQUZidUN5RXY0eFJteHlUQmRyRVZ5WGl4UCtKTUZF?=
 =?utf-8?B?VG9Wd1l6NnpLS3RmN1hPQjlFcW9HNllrSXNXTjR6MldzNnovclMzMjJud0t0?=
 =?utf-8?B?UHFON0c2OWtGR1pNaTdmNG9lVnhsOHJldmxSdjlvL2krdHVsNHZsR3dtMUU1?=
 =?utf-8?B?K2lFcHhHUFp4V0xPK1hDRk55S2NwVGVVbDJ3ZmJqa055UWJFZlJhYzMzZmxM?=
 =?utf-8?B?QTF6RDBKZlRRQlREaDBKTlNwckVkVUhKK3lIWkROTW84MFdDS3ZKTVRQWmt1?=
 =?utf-8?B?dDlsZmM3bm5XOG5XUHl2eUFMVC9GSEd4ZExKbmtNRVdJK1JSYVpsRWNHVzhI?=
 =?utf-8?B?NHQ2V0lXYUVBdGdYNVhHdVp3VWFvVUpLUnBTcUpwb0JyQmlkV0NLTmZRVE45?=
 =?utf-8?B?d3pOcUNrdThzQlVrT3NFQVFlbjF6d09yU2VJMUUwS0o5TUo4M2dGUzF1NGlD?=
 =?utf-8?B?OSs5ek8rWSsxOUk0Q3F1NWovTS8rREcreFErNTBDU2xXSklDRDVqS2orZ2JI?=
 =?utf-8?B?Ym0yNHpWT1U3QldZeUw4S0NyTE12QkdjWjB5VXJqaXlGbGpMSlhRQWJWUUFj?=
 =?utf-8?B?djZ6cmF0L1RxZElFbTV4M3FFcThFWllPY3pJUUFPUEtNL2loS20zUXZwZmda?=
 =?utf-8?Q?hjTo9QUoR7WwZOQ2fyxcsZCgEQKBrDiw3bmDLxA?=
X-MS-Exchange-CrossTenant-Network-Message-Id: de6349da-ec28-4c4d-ef16-08d9380743a7
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 18:30:10.1960
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A5eoj51ZCH3nxw5pL3dtDTIRvTCu8X/rpq9ciGJDhNd77ebIZl2YGGU0+bli6yB2sr7h8B2j16a4IuF6tB1vXYBHQXtmzefALiRSLHAKRtE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5079
X-OriginatorOrg: citrix.com

On 25/06/2021 14:19, Jan Beulich wrote:
> minfo->p2m_size may have more than 31 significant bits. Change the
> induction variable to unsigned long, and (largely for signed-ness
> consistency) a helper variable to unsigned int.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/tools/libs/guest/xg_domain.c
> +++ b/tools/libs/guest/xg_domain.c
> @@ -40,7 +40,7 @@ int xc_map_domain_meminfo(xc_interface *
>      xc_dominfo_t info;
>      shared_info_any_t *live_shinfo;
>      xen_capabilities_info_t xen_caps = "";
> -    int i;
> +    unsigned long i;
> 
>      /* Only be initialized once */
>      if ( minfo->pfn_type || minfo->p2m_table )
> @@ -116,12 +116,12 @@ int xc_map_domain_meminfo(xc_interface *
>      /* Retrieve PFN types in batches */
>      for ( i = 0; i < minfo->p2m_size ; i+=1024 )
>      {
> -        int count = ((minfo->p2m_size - i ) > 1024 ) ?
> -                        1024: (minfo->p2m_size - i);
> +        unsigned int count = ((minfo->p2m_size - i) > 1024) ?
> +                             1024 : (minfo->p2m_size - i);

min().
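i.e. roughly the following shape (a sketch; MIN() here is a local stand-in for whichever min helper the tools headers actually provide):

```c
/* Stand-in for the suggested min()/MIN() helper. */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Clamp each batch to at most 1024 PFNs, as in the loop being reviewed. */
static unsigned int batch_count(unsigned long p2m_size, unsigned long i)
{
    return MIN(1024UL, p2m_size - i);
}
```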

Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

This whole infrastructure is almost abandoned, and broken.  It's used by
xen-mfndump (debugging only) and xen-hptool mem-offline.

The mem-offline functionality cannot possibly work usefully.  It is PV
only, despite not having an HVM check, and in particular reads the dead
page in an attempt to restore the contents elsewhere.  There is also no
thought given to writes from outside sources, such as DMA from
passthrough or a different dom0 foreign mapping.

This is perhaps ok as an academic demonstration of "can I shuffle memory
behind a live VM in ideal circumstances", but it will be killed by the
dom0 kernel if you ever try running it to resolve a real memory error on
a VM, because there is no possibility of recovering the data.

The mem-offline functionality needs deleting.  It isn't production
ready, and can't credibly be made so.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 19:00:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 19:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147377.271620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwr4A-0006Uy-9X; Fri, 25 Jun 2021 19:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147377.271620; Fri, 25 Jun 2021 19:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwr4A-0006Ur-6T; Fri, 25 Jun 2021 19:00:38 +0000
Received: by outflank-mailman (input) for mailman id 147377;
 Fri, 25 Jun 2021 19:00:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwr49-0006US-AW
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 19:00:37 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4869df88-89c9-4922-b74e-087207e8dace;
 Fri, 25 Jun 2021 19:00:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4869df88-89c9-4922-b74e-087207e8dace
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624647635;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rLDLYRwMboHJFFFHeECdEkav7NBJo/2jsE9a+LuF7ZI=;
  b=AwQh9IY79RulwzJfZO7BDMQoOgSKUj1tLtg2BgdHqKZViUTNl17bPYDO
   aQTGR7pHg8s5U383+gWJqbp5NJC2bO4DhPra5pKLRoRhWdxNswHddl0sB
   j/jKBXvuE1da+HO+ggAyiuOv6p9TyOZ/eM5xbWccdstgFhaVY3Ydsd9ts
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: UwFocDfWdxzpMhE0JG/eCIoeteSBtf333rQGGqHrl3l0WVeEozC//iaBbGOKfJANLGxTeCAgit
 IlwHoJQ4LViz6hBcQFXkRuD89QtQsYr9bWLCilgsLaXvYY1JjD++F9Nf7wQH6MYOCsLvxj1hKz
 y0Q33fZiHAjZnCI1gNbjfifqfhszsStSxW55EQFa3h/d/TbC3uJAqDw0qV1dVY5h+GWSR2xcFB
 TYpj2X58fGw8kBeKjM5xKgyjk62UqS4n0BzlB709h/WZEQaEtIe3R2jd52ThWsFfiWiGth+t1E
 NKs=
X-SBRS: 5.1
X-MesageID: 46713366
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:KoKFYK+4Go4zhqtyeT5uk+E6db1zdoMgy1knxilNoENuHfBwxv
 rDoB1E73LJYVYqOU3Jmbi7Scy9qADnhOFICO4qTMuftWjdyRaVxeRZg7cKrAeQYxEWmtQtsp
 uINpIOcuEYbmIK/voSgjPIaurIqePvmMvD5Za8vgJQpENRGsVdBm9Ce3am+yZNNW977PQCZf
 ihD4Z81kGdkSN9VLXLOpBJZZmNm/T70LbdJTIWDR8u7weDyRuu9b7BChCdmjMTSSlGz7sO+X
 XM11WR3NTjj9iLjjvnk0PD5ZVfn9XsjvNFGcy3k8AQbhHhkByhaohNU6CL+Bo1vOaswlA3l8
 SkmWZvA+1Dr1fqOk2lqxrk3AftlBw07WX59FOeiXz/5eTkWTMTEaN69MBkWyqcz3BlkMB30a
 pN0W7cnYFQFwn8kCP04MWNfw12l3CzvWEpnYco/j9iuLMlGftsRLEkjQRo+M9qJlO91GlnKp
 gvMCjk3ocSTbvABEqp51WGqbeXLwYO9hTveDlJhiXa6UkPoJjVp3FojfD3pU1wg67VfaM0rN
 gsAp4Y4I2mcfVmG56VJN1xDPdfWVa9DS4lDgqpUBza/fY8SgzwQtjMke4I2N0=
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="46713366"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i75wVq6q/PQIP2qwxKc61QcV+Qk1PEbukeHMI7U94nlGZW40BJ5lu7XGukfKvl3j3aGGiUSf5pDtofcWm4JmJkcS77mfAuZAcqjz40IXh4X12BRfLQ0pOHP12ee9WNTaTn0UyHZyqphajcxZCcxtle2utHroq8Rw6VG351z6pmEuCr4TGvzhY8+C5mDPlR9Y7rfpy8NXCfepHn1I06vob657Q0CUiuwLughaOUzSLo5Pm7LVAYlz6W/2kO9l2GKhCWuKauh2dCY/eNXYDPXjyTY/M7aqxR1Vd7BJ7oqnNiPYfMyHfKIaSLD7UUC42kyx5u80lfYP39Nzw8TeVVZZRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yPGvVCo7omd/hK9DMmyUOmC/PwZNwFLSWdPE566IVlg=;
 b=UAG4LadrOpHvQsuTSsNC6uT78aNpcBACuCcy88/egVJ4tK5CoaM9zuuDITytHxZlUG3mYG0IqT4QFwQbP3ownBeolio1nlqOZF6zFmZQr++Y7Uv1P86B3khcfmnjfhOmxFohN7xJjgjmnYr4T/XVQLX3R3sbkE5VD5wMBLhbLbZ6e9KhQRdGz1ift6PTh8Z7TrW+O7PFVLYsKvevUG4Q+dAFFpl9gX/OJMNAWdMtrsaxCEbOZz1yW9+SUpPWXXh1Y1gkPpsslT2osnR1htjlG70UE94BhPws2cQX1u+p8eJKh7x80OqvccEHiOGb9I25f2WAnw734RQBm9lQq7Q3nA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yPGvVCo7omd/hK9DMmyUOmC/PwZNwFLSWdPE566IVlg=;
 b=BZ/pm8byCeZU/bMJBkfIMH0uXePunVNUwY7DJA7GnfxuCNUdWE6G9XgtRX/nYjpklFa0dBB2WB2EElcg+7HEwmQazL0/xJUU3aGIIBlSPrfO53R/xcbXLdW+IJygGqYohsjiR33QSHQUZahVko8fCiRXqsj8XINqRUla5y8femU=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <09e81b91-84de-6e49-9a62-eb3a6f392954@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 06/12] libxenguest: guard against overflow from too large
 p2m when checkpointing
Message-ID: <8248ed3f-0437-4ba4-fc26-884e8d70cf92@citrix.com>
Date: Fri, 25 Jun 2021 20:00:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <09e81b91-84de-6e49-9a62-eb3a6f392954@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0346.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8705d08c-7894-467f-f573-08d9380b80e7
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5727:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB57273C03B6F4C3E320BDB35EBA069@SJ0PR03MB5727.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8705d08c-7894-467f-f573-08d9380b80e7
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 19:00:30.9296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YscUyowuaAMKq2snfNMSicD99x38mlL/S75bhc5ElFro+rsd+U0MXpbQZj4zxD8AfeQZ1w1I2ZubmTK1q0ChBEtWOaEXKQDliKF5y4wfbqM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5727
X-OriginatorOrg: citrix.com

On 25/06/2021 14:20, Jan Beulich wrote:
> struct xc_sr_record's length field has just 32 bits.

The stream max record length is

/* Somewhat arbitrary - 128MB */
#define REC_LENGTH_MAX                  (128U << 20)

and is checked in the low level helpers, making the upper bound on the
number of PFNs 0xFFFFFF once the record header is taken into account.

There doesn't appear to have been any consideration made to what happens
if this number gets too large.  That said, the replication will totally
fall apart if it ever gets to a fraction of this, because this is the
list of pages the source side needs to send again in addition to
whatever *it* dirtied, as it is the state we've lost on the destination
side by permitting the VM to run live.

The common case is that, when execution diverges, the dirtied pages on
source and destination will be almost the same, so merging this on the
source side shouldn't lead to many superfluous pages needing to be sent.

>  Fill it early and
> check that the calculated value hasn't overflowed. Additionally check
> for counter overflow early - there's no point even trying to allocate
> any memory in such an event.
>
> While there also limit an induction variable's type to unsigned long:
> There's no gain from it being uint64_t.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Of course looping over test_bit() is pretty inefficient, but given that
> I have no idea how to test this code I wanted to restrict changes to
> what can sensibly be seen as no worse than before from just looking at
> the changes.

At this point, I'm not sure it can be tested.  IIRC, COLO depends on
some functionality which didn't make its way upstream into Qemu.

> --- a/tools/libs/guest/xg_sr_restore.c
> +++ b/tools/libs/guest/xg_sr_restore.c
> @@ -450,7 +450,8 @@ static int send_checkpoint_dirty_pfn_lis
>      xc_interface *xch = ctx->xch;
>      int rc = -1;
>      unsigned int count, written;
> -    uint64_t i, *pfns = NULL;
> +    unsigned long i;
> +    uint64_t *pfns = NULL;
>      struct iovec *iov = NULL;
>      struct xc_sr_record rec = {
>          .type = REC_TYPE_CHECKPOINT_DIRTY_PFN_LIST,
> @@ -469,16 +470,28 @@ static int send_checkpoint_dirty_pfn_lis
> 
>      for ( i = 0, count = 0; i < ctx->restore.p2m_size; i++ )
>      {
> -        if ( test_bit(i, dirty_bitmap) )
> -            count++;
> +        if ( test_bit(i, dirty_bitmap) && !++count )

This is far too opaque logic.

It's also entirely unnecessary...  All this loop is doing is calculating
the size for the memory allocation below, and that can be done by using
the stats output from xc_logdirty_control(), which means it doesn't want
deleting in the earlier patch.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 19:06:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 19:06:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147382.271634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrAA-0007CZ-4z; Fri, 25 Jun 2021 19:06:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147382.271634; Fri, 25 Jun 2021 19:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrAA-0007CS-0J; Fri, 25 Jun 2021 19:06:50 +0000
Received: by outflank-mailman (input) for mailman id 147382;
 Fri, 25 Jun 2021 19:06:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwrA8-0007CM-P9
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 19:06:48 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da136eda-1a35-4a21-892f-a25e36c401a8;
 Fri, 25 Jun 2021 19:06:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da136eda-1a35-4a21-892f-a25e36c401a8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624648007;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9QYXDcFXZk0aAgorQhIjJYkbL6evADUdI8LiYEvQg7o=;
  b=caiE9D1yrzyR6DkPDGw+NFTo0rWzUHHs50M/TbAsjmv+PsfLEHEtlbYx
   Qvtdj4Lz//HDaDOS4jgRxxn6QY8fApdxsUXUkn9t2c3i3wiGAMoJUcHQI
   uENGYgM5uEho+CqPfV09I7ix4lHJzWqKglgFKL+vb4P/msK7tmLxWRE84
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46713856
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="46713856"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=isFZWFdX2y/BrZwN+fxo9UaAOyScxmky63k4IR0CzcULhS5xTucwmxLbTGuUDQ1om8/4x8XUCmiSNEBaHvJ5nzZdhnbY8sN9fSseQ3n8XMuhXwoVIobRYWA1QfoNVjJ6iL9UwRPcqsWR93vnHJoIVbYSg3KGOtD0aw7YxWQfmt/8Hl7i6674LMRVW6o9H5LJUoNom9tnP0PK2mehA7gIKPdnwtE8sjHAglyvWiuSS+UWs+J0lDd5l698J4LdOkhbuDO7AxdNpP7cU8gGVNjsmiyZe82tuD3pjTeDzYK7k3ondEdLA6Kj2FAwHFP0Yq0d49M2+i2IiTMQGX9EMtWtXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9QYXDcFXZk0aAgorQhIjJYkbL6evADUdI8LiYEvQg7o=;
 b=n+Fk2QbizZuJDP/wkEl/wxAw5JNc49liYKJvSoEmWjDqo5SrYYhVVk1To1zFUp3y2zsdgpEq5kuwllXIbOr7xfwKw1+IBUjnS6OL59nA837wGtSfgmH9KMRMsX0a68uA0jcrpSizUpVSAYX1mfiZ+HQJPT54Jgb+UszG9ApxDmQfGSyNZQM0H/yPX7VIk5FWetBYqApTt3GXtqqoMAwhID45kdxKi/gmATLXyfpKULRqat4YdpbWtQqYKAAv/C0YZ7PxBfyZwV4y5nUprrdF0ilzLJ5q6sTzZwFuO2l7S0t0FE/kB5N7WwNvKkr62NdEEk/fnLTsRI8uENzWKCMwfA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9QYXDcFXZk0aAgorQhIjJYkbL6evADUdI8LiYEvQg7o=;
 b=eI84XKwEL2PEiiSK18nzKZSouBmO7HpQZg93xL5xnba7R7vJgaLCUFCA9Cd5cBFJ5veIH2S43IuaZ7wdqxpHqaPU19O17uMplqcNCz8AkqPkFfvSqYhY1DeXxl8Q+W3ogJHZuMzs1lRGzySsqWOLKYJKPAbONRW8KugzV7h5+TE=
Subject: Re: [PATCH 07/12] libxenguest: fix off-by-1 in colo-secondary-bitmap
 merging
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <cec4594f-f346-c951-b0aa-55d7a56cefbc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6bb4fd84-a25d-206b-8e82-eeef06d87bae@citrix.com>
Date: Fri, 25 Jun 2021 20:06:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <cec4594f-f346-c951-b0aa-55d7a56cefbc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0279.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:195::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 66541ecc-d633-49f1-17c9-08d9380c5f75
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5438:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5438E87A84E50C783477E0C0BA069@SJ0PR03MB5438.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:203;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 66541ecc-d633-49f1-17c9-08d9380c5f75
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 19:06:44.3717
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RKNq0DorW2tz1bzR+CaYBVGCdw9s47gKIORpgFkdR8izYRGFC7uj79L+P6MeWeNo65/dFYXc8Fjm6dHG86ewWt9zLc2g0xs4GfugUk58G58=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5438
X-OriginatorOrg: citrix.com

On 25/06/2021 14:20, Jan Beulich wrote:
> Valid GFNs (having a representation in the dirty bitmap) need to be
> strictly below p2m_size.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 19:09:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 19:09:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147386.271644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrCg-0007qE-HU; Fri, 25 Jun 2021 19:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147386.271644; Fri, 25 Jun 2021 19:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrCg-0007q7-EW; Fri, 25 Jun 2021 19:09:26 +0000
Received: by outflank-mailman (input) for mailman id 147386;
 Fri, 25 Jun 2021 19:09:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwrCf-0007q1-5Z
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 19:09:25 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b232197-7165-4ff2-a47c-b2bd621d9c62;
 Fri, 25 Jun 2021 19:09:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b232197-7165-4ff2-a47c-b2bd621d9c62
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624648164;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=oYGAbKx6nYxEdmQxP42MpwAMSnLTujWNfpB4WJdDm94=;
  b=gR4C3Uqt5a2lFIV/51xIQcYQYfG+uhj3kTcmtHLgYPXKKOvBTSfSCrVm
   fJtGaxUME4q10udm3jeyEmCEHn7+XPU9BXp6yxz8Fifs5oiOQi8KVtJyl
   4SyY6UIAusyfPI3COmB8C+uWbhlTQ0oxDrfAf5e6gSFrLCkZpkgKtbsoe
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46714146
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="46714146"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ja+ZmAjPcoCMx75UPeXGVYP4UwztiK52LIkc71HjjZNehtAM0Md/OHdE2UOfPTDCPm5X8RkgzHd6JEjXhbaL5xqVL3BUFl4J68eeTnOHsNcY0Cv6lrrsHcYIQ+AOpsHywWM7nETi/m3fWZR8aurC7LT0yPvZptqzlRK++n4wl3k/+vMmmWm1mJbG8oM94FrkjEioIzPrHnHaMHQD+cx6al8CdZZ0x1bUjduDBR8sVx766IG4A/Cc2slMD7tv4lzcqAZvYqR5G1v6tsa4+5m4tAA1TxWpTlerEJ6SnKICFBMmi6/dolYdeD6yi2YI11moE+VOwKMbjjoq+ytFpc1HBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oYGAbKx6nYxEdmQxP42MpwAMSnLTujWNfpB4WJdDm94=;
 b=Q/BG2cqXQonj6P0QjBE3h7aU969+zJkfFN56/hJx1HXzPWueaNGp/DB3+NaLYfvyhI1WOohHXVTI5OiQ/XuTcBIDhhc9tJdksu+4QX6IBuBlNSC/8Iq/Gtdgn9Q7X5sjpZ2IKNgwAeY6MlpURj1I8PvldsGPozlnwRVRWgzOn1uvgZJOpjCfp1kyO5CNuuIllqm9t3/AFKT5hX+rSVls5yhB+Il2aY1lHsKdRME6iAPJ6aunwFuk8hjyHHxMw7EKOl8tQj1HHPQWpAhcmADV3BF8t17yQ7ahG0QXq6JNc5Bh2OSXY304FxTAkHYEK0N72g7g4NuIv16PZiUfKJfbbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oYGAbKx6nYxEdmQxP42MpwAMSnLTujWNfpB4WJdDm94=;
 b=mv5ibWRRFrpXE7lnNXEYlYHdN/gqPUEvA/1RQ6px9mZUvhSF9PYVBMU603r8WKUCE1WuCwxvZ1K/lU+8rt/mtIQs2Zh9abOKFnLQ+Lsh8NRzJ+4IsABKXgxtGKGlyt2tv6xJCsdytBC11q8oBlYcX5UbY1FG5BMu2nM+sKJPH0U=
Subject: Re: [PATCH 08/12] x86/paging: deal with log-dirty stats overflow
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <92968d22-f3e2-4eb2-59fe-b1f638c60133@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e482a0a6-3412-c3b2-a752-961a3840a6f7@citrix.com>
Date: Fri, 25 Jun 2021 20:09:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <92968d22-f3e2-4eb2-59fe-b1f638c60133@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0052.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:152::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e50b505f-1411-4ffa-1139-08d9380cba93
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5454:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB54547D0004328197B7823321BA069@SJ0PR03MB5454.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e50b505f-1411-4ffa-1139-08d9380cba93
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 19:09:17.1778
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bIIXXTHdvKc0CdJkhTXnQ6Yxjekx4mmeikWsodxP3WcJgx1AxajMMFheSwnQ+sj4lnRfh9nz6zv6rdHiNs3yPwayvnkkyFe8YIn8gZl8y2A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5454
X-OriginatorOrg: citrix.com

On 25/06/2021 14:21, Jan Beulich wrote:
> While the precise values are unlikely of interest once they exceed 4
> billion (allowing us to leave alone the domctl struct), we still
> shouldn't wrap or truncate the actual values. It is in particular
> problematic if the truncated values were zero (causing libxenguest to
> skip an iteration altogether) or a very small value (leading to
> premature exiting of the pre-copy phase).
>
> Change the internal fields to unsigned long, and suitably saturate for
> copying to guest context.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 19:10:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 19:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147391.271659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrDn-0000mS-48; Fri, 25 Jun 2021 19:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147391.271659; Fri, 25 Jun 2021 19:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrDn-0000mL-0r; Fri, 25 Jun 2021 19:10:35 +0000
Received: by outflank-mailman (input) for mailman id 147391;
 Fri, 25 Jun 2021 19:10:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwrDl-0000m6-B7
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 19:10:33 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1aab6e3f-7d74-4483-946b-cd42d3add6bf;
 Fri, 25 Jun 2021 19:10:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1aab6e3f-7d74-4483-946b-cd42d3add6bf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624648232;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DhQX5zvhBRSz03GMH2/UUMLt1lwkneaGawIFZsXKllU=;
  b=L2vk/U867mZdVzde/gNPrDa1riV094pDPNc6WT4uAr4lcidp98jHMry5
   CLz501qba3kuyT07f+Lns2ZX03Huus7i0dsqBu+RyygdjQuNjS5Dp1gtn
   wStSrXvenjcl/MQRvkNit8l9UEZsvlTXenfYumyVppu0R8bL2U/4AdXFD
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 48610576
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="48610576"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Subject: Re: [PATCH 11/12] x86/mm: pull a sanity check earlier in
 xenmem_add_to_physmap_one()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <e0819038-98b0-2c4c-9176-70d68be39130@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <af5e1458-98df-9597-663b-4b0325613f92@citrix.com>
Date: Fri, 25 Jun 2021 20:10:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <e0819038-98b0-2c4c-9176-70d68be39130@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0041.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:152::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7225f35d-5478-439d-1402-08d9380ce589
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5454:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB54541137E10B4DD4E6726A8ABA069@SJ0PR03MB5454.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:949;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7225f35d-5478-439d-1402-08d9380ce589
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 19:10:29.2524
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5454
X-OriginatorOrg: citrix.com

On 25/06/2021 14:22, Jan Beulich wrote:
> We should try to limit the failure reasons after we've started making
> changes.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 19:46:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 19:46:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147404.271691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrmF-00046A-8k; Fri, 25 Jun 2021 19:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147404.271691; Fri, 25 Jun 2021 19:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwrmF-000463-5d; Fri, 25 Jun 2021 19:46:11 +0000
Received: by outflank-mailman (input) for mailman id 147404;
 Fri, 25 Jun 2021 19:46:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNyz=LT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lwrmD-00045x-VT
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 19:46:10 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce364d74-a0dd-4fc2-8d37-0c587b3e106d;
 Fri, 25 Jun 2021 19:46:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce364d74-a0dd-4fc2-8d37-0c587b3e106d
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 47374235
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,299,1616472000"; 
   d="scan'208";a="47374235"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <03512f29-7a4e-70ff-271b-7d65ed471935@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 12/12] SUPPORT.md: write down restriction of 32-bit tool
 stacks
Message-ID: <f3a758a2-a8a9-60a3-92c9-1a490084dbb6@citrix.com>
Date: Fri, 25 Jun 2021 20:45:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <03512f29-7a4e-70ff-271b-7d65ed471935@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0413.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8159c283-4a38-40ce-bfa6-08d93811dddd
X-MS-TrafficTypeDiagnostic: BYAPR03MB4360:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4360798DA090CC80E6CDFDCFBA069@BYAPR03MB4360.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8159c283-4a38-40ce-bfa6-08d93811dddd
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2021 19:46:03.8781
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4360
X-OriginatorOrg: citrix.com

On 25/06/2021 14:24, Jan Beulich wrote:
> Let's try to avoid giving the impression that 32-bit tool stacks are as
> capable as 64-bit ones.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -131,6 +131,11 @@ ARM only has one guest type at the momen
>
>  ## Toolstack
>
> +While 32-bit builds of the tool stack are generally supported, restrictions
> +apply in particular when running on top of a 64-bit hypervisor.

Actually, this isn't true, and in a way which helps us right now.

PV32 isn't security supported, and neither ARM nor x86 support dom0
bitness != Xen bitness.  On x86, it doesn't remotely work because of the
pointer size mismatch, and while this was bodged in a horrifying way in
the ARM ABI, I doubt anyone is in a hurry to turn that into a supported
configuration.

That said, it is my intent that, with the ABIv2 changes, a 32-bit toolstack
under a 64-bit guest kernel, under 64-bit or 128-bit Xen (yes, RISC-V 128 is
already a thing being discussed), will function.

>   For example,
> +very large guests aren't supported in this case.

The wording here wants to be careful, because under certain readings,
you've just dropped security support for 32bit toolstacks.

What we actually mean is "a toolstack with bitness < Xen is not expected
to be able to manage very large domains correctly, and don't pester us
with bugs when it doesn't work because we won't fix them".

Whereas we will fix security issues which only happen to manifest in
32bit builds of the toolstack.


>   This includes guests giving
> +the appearance of being large, by altering their own memory layouts.

I'd drop this sentence.  It's an internal detail of a corner case which we're
expecting to remove in the future, and "the guest kernel can DoS itself"
isn't interesting.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 25 21:13:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 21:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147422.271741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwt8k-00043O-KO; Fri, 25 Jun 2021 21:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147422.271741; Fri, 25 Jun 2021 21:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwt8k-00043H-HT; Fri, 25 Jun 2021 21:13:30 +0000
Received: by outflank-mailman (input) for mailman id 147422;
 Fri, 25 Jun 2021 21:13:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwt8j-000437-E6; Fri, 25 Jun 2021 21:13:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwt8j-00044H-55; Fri, 25 Jun 2021 21:13:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwt8i-0001je-R4; Fri, 25 Jun 2021 21:13:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwt8i-0001ZV-Qb; Fri, 25 Jun 2021 21:13:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163027-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163027: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=44db63d1ad8d71c6932cbe007eb41f31c434d140
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 21:13:28 +0000

flight 163027 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163027/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                44db63d1ad8d71c6932cbe007eb41f31c434d140
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  329 days
Failing since        152366  2020-08-01 20:49:34 Z  328 days  557 attempts
Testing same since   163027  2021-06-25 04:54:34 Z    0 days    1 attempts

------------------------------------------------------------
6201 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1689790 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 22:01:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 22:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147435.271776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwtsd-0000Wb-3S; Fri, 25 Jun 2021 22:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147435.271776; Fri, 25 Jun 2021 22:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwtsd-0000WU-0T; Fri, 25 Jun 2021 22:00:55 +0000
Received: by outflank-mailman (input) for mailman id 147435;
 Fri, 25 Jun 2021 22:00:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l36E=LT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lwtsb-0000WO-V2
 for xen-devel@lists.xenproject.org; Fri, 25 Jun 2021 22:00:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4c80d5c-e5d2-4ac5-95f9-713034929a81;
 Fri, 25 Jun 2021 22:00:53 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 9766461613;
 Fri, 25 Jun 2021 22:00:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4c80d5c-e5d2-4ac5-95f9-713034929a81
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624658452;
	bh=yyBnq+tOA8an6SOV7Jwguf55yjHun4npbbcJ7i3Wqp4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aiN5qkoQFOqoqiG4h/rMfdEE8q+TcHeX67n4DoJSdOiSvL/abXc00LxU7fLFBVDky
	 VQ+JqFAV9JZGw4LYIGrlaIrIj+qYvzWkxYs08NBfkSsbnCef+qQwDWrURCwpe6OT2E
	 o66DTUv4GqRr2FSRaTQkFkWdxgwKz7CtJlvHwEy4bVK+Id1QLGwMw6oeUbGN/G9pUi
	 gSOMlvx+pgeta1ErPY6+hVrTjPZta7LHhAsLmEthLAdIo07obmNHAEysGKd0woAyF/
	 k8lWbSQWwoTRo3iTRztXQSlTtmEohhg5JywOowJEXHc9B/8hJp0YZjzIseDhE8UcxR
	 M86Lit7efG3Ig==
Date: Fri, 25 Jun 2021 15:00:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmuv1: Set privileged attr to 'default'
In-Reply-To: <c6c5e3deb97200baefb75d06ec934d2c6ee5eb62.1624546852.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2106251500400.24906@sstabellini-ThinkPad-T480s>
References: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com> <c6c5e3deb97200baefb75d06ec934d2c6ee5eb62.1624546852.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 25 Jun 2021, Rahul Singh wrote:
> Backport commit e19898077cfb642fe151ba22981e795c74d9e114
> "iommu/arm-smmu: Set privileged attribute to 'default' instead of
> 'unprivileged'"
> 
> Original commit message:
>     Currently the driver sets all the device transactions privileges
>     to UNPRIVILEGED, but there are cases where the iommu masters wants
>     to isolate privileged supervisor and unprivileged user.
>     So don't override the privileged setting to unprivileged, instead
>     set it to default as incoming and let it be controlled by the
>     pagetable settings.
> 
>     Acked-by: Will Deacon <will.deacon@arm.com>
>     Signed-off-by: Sricharan R <sricharan@codeaurora.org>
>     Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>


Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/drivers/passthrough/arm/smmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 1a68c2ab3b..d9a3a0cbf6 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1566,7 +1566,7 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>  			continue;
>  
>  		s2cr[idx].type = type ;
> -		s2cr[idx].privcfg = S2CR_PRIVCFG_UNPRIV;
> +		s2cr[idx].privcfg = S2CR_PRIVCFG_DEFAULT;
>  		s2cr[idx].cbndx = cbndx;
>  		arm_smmu_write_s2cr(smmu, idx);
>  	}
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 22:03:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 22:03:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147439.271787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwtuy-0001AE-Ij; Fri, 25 Jun 2021 22:03:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147439.271787; Fri, 25 Jun 2021 22:03:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwtuy-0001A7-EV; Fri, 25 Jun 2021 22:03:20 +0000
Received: by outflank-mailman (input) for mailman id 147439;
 Fri, 25 Jun 2021 22:03:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwtuw-00019u-W3; Fri, 25 Jun 2021 22:03:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwtuw-0004vK-QJ; Fri, 25 Jun 2021 22:03:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwtuw-0003sB-J0; Fri, 25 Jun 2021 22:03:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwtuw-0003Wd-IY; Fri, 25 Jun 2021 22:03:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tEH7yAhY+FJS0oKVeiT2T/kPhxROUFzLKSzugn+KkW4=; b=XUhY2YRe9lsahJ7gFR6OQ/jLDW
	06N1bBgVCCXSzHs4b5SJtD0SIuPQwBYmkxVFKlTYI53jKDZ17uXv01NhZ1bst6vRUjeLzpCTm+931
	A7NHyVT79TPzWFiLKNDzVAP1tRhdorlfe5AcqwLL3lBmrMHQWasaTe+/IJRJexViNv5M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163055-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163055: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 22:03:18 +0000

flight 163055 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163055/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   21 days
Failing since        162368  2021-06-04 15:42:59 Z   21 days   49 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 25 23:34:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Jun 2021 23:34:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147469.271887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwvLP-0001fd-0l; Fri, 25 Jun 2021 23:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147469.271887; Fri, 25 Jun 2021 23:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwvLO-0001fW-Tj; Fri, 25 Jun 2021 23:34:42 +0000
Received: by outflank-mailman (input) for mailman id 147469;
 Fri, 25 Jun 2021 23:34:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwvLN-0001f7-LY; Fri, 25 Jun 2021 23:34:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwvLN-0006R0-DO; Fri, 25 Jun 2021 23:34:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwvLM-000875-Uo; Fri, 25 Jun 2021 23:34:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwvLM-0006rB-UM; Fri, 25 Jun 2021 23:34:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n8ZbUVfhvq6Qsf6bUtwgcHkRk4mavq78P5AUXbrjuKI=; b=p0O7zVmQL9pZrMH5zKaiZpdU2l
	mP7gh5ronSNbiCaQE3qEWHKXTKwr/enMDfJ7X2LWRAPLbsRpBBqA8ENUOLZbDE1vFT9KENCMzEBIN
	B8qFt4RGKz/XLkZ4DKC0kLSXh/HqS+p9sr3Khmy8IWKDkJydgSLZ3LNFM8BdmEGbl5js=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163031-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163031: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
X-Osstest-Versions-That:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Jun 2021 23:34:40 +0000

flight 163031 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163031/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163014
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163014
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163014
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163014
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163014
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163014
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163014
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163014
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163014
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163014
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163014
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
baseline version:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7

Last test of basis   163014  2021-06-24 10:22:27 Z    1 days
Testing same since   163021  2021-06-24 21:09:11 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c7691f5e34..e87d8f60fa  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 01:19:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 01:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147494.271975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwwyZ-0004kw-5S; Sat, 26 Jun 2021 01:19:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147494.271975; Sat, 26 Jun 2021 01:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwwyZ-0004kc-0S; Sat, 26 Jun 2021 01:19:15 +0000
Received: by outflank-mailman (input) for mailman id 147494;
 Sat, 26 Jun 2021 01:19:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwwyX-0004kR-IS; Sat, 26 Jun 2021 01:19:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwwyX-00014E-9p; Sat, 26 Jun 2021 01:19:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwwyX-0004ye-1b; Sat, 26 Jun 2021 01:19:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwwyX-0005pj-15; Sat, 26 Jun 2021 01:19:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dzbACkJpUgnLnkvK1QvwG9xFrKN6eKBF4VMigtuZLeA=; b=UrGNRIox7ks2DebBcexUEhYHQp
	wUe177k55E4BnyiqxIag8lzQ6OcS4d+ftUulsDq/D77QYEehx395oZNk3exBaSt8HGCuya2fuNxT0
	gtA36KN6i10TiQNVCIBva9WxLsXYM6iojlJxb6eipPEbLwGAah+WivTz7ZZUuy+xEya0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163101-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163101: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
X-Osstest-Versions-That:
    xen=f591755823a7e94fc6b4b8ddce71f0421a94fa09
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 01:19:13 +0000

flight 163101 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163101/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83
baseline version:
 xen                  f591755823a7e94fc6b4b8ddce71f0421a94fa09

Last test of basis   163048  2021-06-25 13:05:05 Z    0 days
Testing same since   163101  2021-06-25 23:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f591755823..bb11edcec1  bb11edcec1a953ce590da797b0d005cd60f21e83 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 02:55:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 02:55:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147501.271995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwyTc-0005iu-CF; Sat, 26 Jun 2021 02:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147501.271995; Sat, 26 Jun 2021 02:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwyTc-0005in-82; Sat, 26 Jun 2021 02:55:24 +0000
Received: by outflank-mailman (input) for mailman id 147501;
 Sat, 26 Jun 2021 02:55:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwyTb-0005id-OQ; Sat, 26 Jun 2021 02:55:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwyTb-00035q-Ka; Sat, 26 Jun 2021 02:55:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwyTb-0000Km-Dc; Sat, 26 Jun 2021 02:55:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwyTb-0001jy-DA; Sat, 26 Jun 2021 02:55:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PfZL7JCvYOPZfNvq5Lelt7BUPBfx3FJ/ddzlNR6VvCA=; b=GVMxBbzowhYKkC+7Nx3WsCeiyA
	KcApolzC5piSIIXqr87DWidvRcrhT9wRHtZ6yvI/xKta3yFne/dUE+nSd2Uk1KbzN1GXVnm+IyZ2P
	e2PXLtRaxLd0RtMOal+3Qh/fdKgqFjzBgaYKF6z4MR8KswcxxliFgZrhxGiIdMAS9xsM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163096-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163096: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 02:55:23 +0000

flight 163096 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163096/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   21 days
Failing since        162368  2021-06-04 15:42:59 Z   21 days   50 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 04:17:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 04:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147508.272014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwzlG-0004m1-DM; Sat, 26 Jun 2021 04:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147508.272014; Sat, 26 Jun 2021 04:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lwzlG-0004lu-AV; Sat, 26 Jun 2021 04:17:42 +0000
Received: by outflank-mailman (input) for mailman id 147508;
 Sat, 26 Jun 2021 04:17:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwzlF-0004lk-Fc; Sat, 26 Jun 2021 04:17:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwzlF-0004TA-7N; Sat, 26 Jun 2021 04:17:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lwzlE-0003YO-Vi; Sat, 26 Jun 2021 04:17:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lwzlE-0006VS-VF; Sat, 26 Jun 2021 04:17:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AhYq4vKSA80PsWrvxxsOobwfDKSfkSWRO+0xZRngrEg=; b=3V0HqAQelUKyvt5srwF9/jB+Kl
	q4Ok+TUCZ7nwHZy+Xt09zWJDI+2w0WMcfPFjg+uI665Yd2pQaMNpVpSVwvLUXJ+cmEbGc7QsKpOUe
	+8I+SDNHL9E+AnWJb7n6nVa4jbqmRBWC6V7vRRxefFghQwyywQAsYY49nim7SI9YdrWA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163066-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163066: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=050cee12315536aba18a73c8dea21116a9c90ffa
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 04:17:40 +0000

flight 163066 qemu-mainline real [real]
flight 163108 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163066/
http://logs.test-lab.xenproject.org/osstest/logs/163108/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                050cee12315536aba18a73c8dea21116a9c90ffa
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  309 days
Failing since        152659  2020-08-21 14:07:39 Z  308 days  566 attempts
Testing same since   163066  2021-06-25 16:07:05 Z    0 days    1 attempts

------------------------------------------------------------
548 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178296 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 07:47:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 07:47:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147517.272041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx32H-0006iY-VJ; Sat, 26 Jun 2021 07:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147517.272041; Sat, 26 Jun 2021 07:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx32H-0006iR-RF; Sat, 26 Jun 2021 07:47:29 +0000
Received: by outflank-mailman (input) for mailman id 147517;
 Sat, 26 Jun 2021 07:47:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx32H-0006iH-9c; Sat, 26 Jun 2021 07:47:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx32H-0008Mf-4U; Sat, 26 Jun 2021 07:47:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx32G-000617-PM; Sat, 26 Jun 2021 07:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lx32G-0005w4-Or; Sat, 26 Jun 2021 07:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZAy8+5eBnPUqSQ1ch1nwbNwTfHpPMmkLMydf41aT3II=; b=cHvRUvzZ8N6Jo4CywdBXt0leFY
	7ivSQoLRFsQxpsgkYgrE1pLSRWzqO4kgX6vo6vuZFsfzGs+dTWW7VNdhW7OZ08idGguxYVlWCcm7M
	81Jb5qelPUzv8P3QP5cTSHV9LBm0KPd0ZxRmCkvLnPpAXnCSzajSfrw/f3zLpaoHdi50=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163093-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163093: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=55fcd4493da5ac8a0f7a0b3b5ae8448aee2041bb
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 07:47:28 +0000

flight 163093 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163093/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                55fcd4493da5ac8a0f7a0b3b5ae8448aee2041bb
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  329 days
Failing since        152366  2020-08-01 20:49:34 Z  328 days  558 attempts
Testing same since   163093  2021-06-25 21:39:55 Z    0 days    1 attempts

------------------------------------------------------------
6204 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1690461 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 08:38:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 08:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147528.272061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx3pZ-0005fo-8f; Sat, 26 Jun 2021 08:38:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147528.272061; Sat, 26 Jun 2021 08:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx3pZ-0005fh-55; Sat, 26 Jun 2021 08:38:25 +0000
Received: by outflank-mailman (input) for mailman id 147528;
 Sat, 26 Jun 2021 08:38:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx3pX-0005fQ-AK; Sat, 26 Jun 2021 08:38:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx3pX-0001IF-4h; Sat, 26 Jun 2021 08:38:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx3pW-0000KU-Pn; Sat, 26 Jun 2021 08:38:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lx3pW-0004mX-PC; Sat, 26 Jun 2021 08:38:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iC1SblUfK3BFut8sEaQeuYlPbz7XB5MbDrkkPHN/lk4=; b=zupTsIgpHNLOySNCdOFgVRoiuD
	V/E518eIvRyuQugpKPg+b5HPm6OWr+j3v0acKqDD/jk4H7tWRyr8CjY4hZv9N8Ymk78iAyK344r3k
	9P8I9nQ6HeZdQ9YjOWp+NsOhhUZ41h7KRPZYvS7Pv2facGtoqClZPInRUpr71fTOwdXY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163107-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163107: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 08:38:22 +0000

flight 163107 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163107/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   22 days
Failing since        162368  2021-06-04 15:42:59 Z   21 days   51 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 09:47:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 09:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147536.272081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx4tu-0005cQ-Ha; Sat, 26 Jun 2021 09:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147536.272081; Sat, 26 Jun 2021 09:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx4tu-0005cJ-Dv; Sat, 26 Jun 2021 09:46:58 +0000
Received: by outflank-mailman (input) for mailman id 147536;
 Sat, 26 Jun 2021 09:46:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx4tt-0005c9-2z; Sat, 26 Jun 2021 09:46:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx4ts-0002Ml-Tx; Sat, 26 Jun 2021 09:46:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx4ts-00043c-MQ; Sat, 26 Jun 2021 09:46:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lx4ts-0000YS-M0; Sat, 26 Jun 2021 09:46:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=q6WZzbELHemo18nK9JJRK0xBjLEXyxrBqL9g9KTPZtE=; b=sjM7y7ZaenIhq54nwq1thoNBFV
	JHix/hqaZkv54NiR+Bzc3Ja4vXCfBy2JkaytwE6HbOve+W4xbYKxJ3sJgdsfqiVqhqhuJAni47Q3Q
	W5NPwkPHDu89Ln6k/kZ9J3rfGzy0QI46A8FJchapCZR13UgHCvu05GUI7O7ItCQG+m3A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163111-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163111: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7c08141f906e20e730c4b6407bc638e743deea48
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 09:46:56 +0000

flight 163111 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163111/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7c08141f906e20e730c4b6407bc638e743deea48
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  351 days
Failing since        151818  2020-07-11 04:18:52 Z  350 days  342 attempts
Testing same since   163111  2021-06-26 04:20:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63387 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 13:30:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 13:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147546.272107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8OB-0000QB-0A; Sat, 26 Jun 2021 13:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147546.272107; Sat, 26 Jun 2021 13:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8OA-0000Q4-Sj; Sat, 26 Jun 2021 13:30:26 +0000
Received: by outflank-mailman (input) for mailman id 147546;
 Sat, 26 Jun 2021 13:30:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GWiO=LU=qq.com=2284696125@srs-us1.protection.inumbo.net>)
 id 1lx8O8-0000Py-R8
 for xen-devel@lists.xenproject.org; Sat, 26 Jun 2021 13:30:25 +0000
Received: from out162-62-57-64.mail.qq.com (unknown [162.62.57.64])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54065af5-8735-46ef-9aa4-10677b093ea3;
 Sat, 26 Jun 2021 13:30:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: "Rroach" <2284696125@qq.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Subject: A possible pointer_overflow in xen-4.13
Mime-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_60D72BA6_0F315DD0_749858E4"
Content-Transfer-Encoding: 8Bit
Date: Sat, 26 Jun 2021 21:29:10 +0800
X-Priority: 3
Message-ID: <tencent_A17CA7BA63F6E47B3FE7B1AC54E55B2A3609@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x

This is a multi-part message in MIME format.

------=_NextPart_60D72BA6_0F315DD0_749858E4
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit

Hi, I compiled Xen-4.13 with CONFIG_UBSAN and tried testing it. However, during testing, xl dmesg produced the output shown below.

It seems that there is a potential pointer overflow at arch/x86/pv/emul-priv-op.c:131, where Xen executes ''' APPEND_CALL(save_guest_gprs) ''', and APPEND_CALL adds an offset to *p without proper checking.

I compiled xen-4.13 with clang-9, using the following instructions: ''' export CONFIG_UBSAN=y ''' && ''' make clang=y debug=y '''. Do you have any idea what is going on here?

(XEN) pointer operation underflowed ffff8200400170d3 to ffff04d0c0014193
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82d0402d694a>] common/ubsan/ubsan.c#ubsan_epilogue+0xa/0x90
(XEN) RFLAGS: 0000000000010082   CONTEXT: hypervisor (d0v0)
(XEN) rax: 0000000000000000   rbx: ffff83007c36f870   rcx: 0000000000000010
(XEN) rdx: 0000000000010000   rsi: ffff83007c370000   rdi: ffff83007c36f870
(XEN) rbp: ffff83007c36f858   rsp: ffff83007c36f848   r8:  ffff82d040853f70
(XEN) r9:  0000000000000001   r10: ffff82d040854400   r11: ffff82d0408543d0
(XEN) r12: ffff83007c36f870   r13: ffff8200400170d0   r14: ffff04d0c0014193
(XEN) r15: ffff8200400170d3   cr0: 0000000080050033   cr4: 0000000000000660
(XEN) cr3: 000000007640c000   cr2: ffffc900003ff000
(XEN) fsb: 0000000000000000   gsb: ffff888073600000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d0402d694a> (common/ubsan/ubsan.c#ubsan_epilogue+0xa/0x90):
(XEN)  89 e5 41 56 53 48 89 fb <0f> 0b 48 8d 3d 17 83 3c 00 31 c0 e8 76 29 00 00
(XEN) Xen stack trace from rsp=ffff83007c36f848:
(XEN)    ffff82d040a5b9b0 ffff04d0c0014193 ffff83007c36f898 ffff82d0402d7bde
(XEN)    0000000000003000 0000000000000286 ffff04d0c0014193 ffff83007c36fe58
(XEN)    ffff82d07fffd0c0 ffff8200400170d3 ffff83007c36f8f8 ffff82d040493d7b
(XEN)    ffff83007c36f8c8 00000001000003da ffff83007f85aa50 000000ec7f85ce01
(XEN)    ffff83007c36fe00 ffff83007c36fe18 00000000000003da ffff83007c36fe00
(XEN)    0000000000000000 0000000000000001 ffff83007c36f938 ffff82d040490eb5
(XEN)    ffff83007c36fce8 00000000000003da 0000000000000000 ffff82d0406c0358
(XEN)    ffff82d0406c03c0 0000000000000000 ffff83007c36fda8 ffff82d040531a73
(XEN)    ffff82d04058d851 0000000000000046 0000000000000046 ffff82d04027061d
(XEN)    0000000000077f07 ffff83007f85f2b8 0000000000000000 aaaaaaaaaaaaaaaa
(XEN)    aaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaa
(XEN)    aaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaa ffff82d040270980
(XEN)    0000000000000017 ffff82d0402364f4 0000000000000246 ffff82d04028f4e9
(XEN)    0000000100000220 ffff83007c378004 000000000000022f 000000000000000f
(XEN)    0000000000000001 0000000000000246 0000000000000246 ffff82d04028f4e9
(XEN)    00000001000000a0 ffff83007c378004 00000000000000a3 0000000000000003
(XEN)    ffff83007c36fe2c 0000000000000001 ffff83007c36fa78 ffff82d040270a07
(XEN)    ffff83007c36fef8 ffff82d040c68058 ffff83007c36fa98 ffff82d040270980
(XEN)    ffff83007ff48ea8 00000000000000b0 ffff83007c36faf8 ffff82d04028ded1
(XEN)    ffff83007ff48ea8 ffff83007ff48eb0 ffff83007c379868 00000000000000a0
(XEN) Xen call trace:
(XEN)    [<ffff82d0402d694a>] R common/ubsan/ubsan.c#ubsan_epilogue+0xa/0x90
(XEN)    [<ffff82d0402d7bde>] F __ubsan_handle_pointer_overflow+0x6e/0xa0
(XEN)    [<ffff82d040493d7b>] F arch/x86/pv/emul-priv-op.c#io_emul_stub_setup+0x44b/0x6a0
(XEN)    [<ffff82d040490eb5>] F arch/x86/pv/emul-priv-op.c#read_io+0xd5/0x1c0
(XEN)    [<ffff82d040531a73>] F x86_emulate+0x94f3/0x2e170
(XEN)    [<ffff82d040565eb1>] F x86_emulate_wrapper+0x71/0x210
(XEN)    [<ffff82d04048f5f2>] F pv_emulate_privileged_op+0x392/0x6a0
(XEN)    [<ffff82d040522d3a>] F do_general_protection+0x41a/0x520
(XEN)    [<ffff82d04058da3a>] F x86_64/entry.S#handle_exception_saved+0x65/0x91
(XEN)
(XEN) ===========================================================================
(XEN) d0: Forcing read-only access to MFN fed00


------=_NextPart_60D72BA6_0F315DD0_749858E4--



From xen-devel-bounces@lists.xenproject.org Sat Jun 26 13:51:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 13:51:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147554.272124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8i1-0002ig-P0; Sat, 26 Jun 2021 13:50:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147554.272124; Sat, 26 Jun 2021 13:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8i1-0002iZ-LS; Sat, 26 Jun 2021 13:50:57 +0000
Received: by outflank-mailman (input) for mailman id 147554;
 Sat, 26 Jun 2021 13:50:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=duLp=LU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lx8i0-0002iT-Er
 for xen-devel@lists.xenproject.org; Sat, 26 Jun 2021 13:50:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b58653ef-93d1-4564-95df-98388f4d8fc8;
 Sat, 26 Jun 2021 13:50:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: Rroach <2284696125@qq.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <tencent_A17CA7BA63F6E47B3FE7B1AC54E55B2A3609@qq.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: A possible pointer_overflow in xen-4.13
Message-ID: <fbd1eb89-695c-5c23-da07-ae16fd567010@citrix.com>
Date: Sat, 26 Jun 2021 14:50:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <tencent_A17CA7BA63F6E47B3FE7B1AC54E55B2A3609@qq.com>
Content-Type: multipart/alternative;
 boundary="------------A21145073B93ACB78A62D826"
Content-Language: en-GB
MIME-Version: 1.0

--------------A21145073B93ACB78A62D826
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On 26/06/2021 14:29, Rroach wrote:
> Hi, I compiled Xen-4.13 with CONFIG_UBSAN and tried testing it.
> However, during testing, xl dmesg produced the output shown below.
>
> It seems that there is a potential pointer overflow at
> arch/x86/pv/emul-priv-op.c:131, where Xen executes
> ''' APPEND_CALL(save_guest_gprs) ''', and APPEND_CALL adds an offset
> to *p without proper checking.
>
> I compiled xen-4.13 with clang-9, using the following instructions:
> ''' export CONFIG_UBSAN=y ''' && ''' make clang=y debug=y '''. Do you
> have any idea what is going on here?

You say Xen 4.13, but APPEND_CALL() doesn't exist there.  I added it in
4.14 when I rewrote this mess to be compatible with CET by not using a
ROP gadget.  Your backtrace says 4.15-unstable, which means it's an old
staging build (not that that has any effect on this specific issue).

The fact that it continued executing correctly means the calculation did
the right thing, whether or not UBSAN was happy.  The displacement will
end up negative, as the stub we're writing is numerically higher than
{load,save}_guest_gprs(), which I guess means that f - stub_va will
underflow.

I'm very confused as to why UBSAN reports against save_guest_gprs(),
considering that load_guest_gprs() went through the exact same logic a
few instructions earlier.

Either way, does this make the problem go away?

diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 11467a1e3a..be41bced76 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -98,7 +98,7 @@ static io_emul_stub_t *io_emul_stub_setup(struct priv_op_ctxt *ctxt, u8 opcode,
 #define APPEND_BUFF(b) ({ memcpy(p, b, sizeof(b)); p += sizeof(b); })
 #define APPEND_CALL(f)                                                  \
     ({                                                                  \
-        long disp = (long)(f) - (stub_va + p - ctxt->io_emul_stub + 5); \
+        long disp = (long)(f) - (long)(stub_va + p - ctxt->io_emul_stub + 5); \
         BUG_ON((int32_t)disp != disp);                                  \
         *p++ = 0xe8;                                                    \
         *(int32_t *)p = disp; p += 4;                                   \

~Andrew
--------------A21145073B93ACB78A62D826
Content-Type: text/html; charset=gb18030
Content-Transfer-Encoding: 8bit

<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=GB18030">
  </head>
  <body>
    <div class="moz-cite-prefix">On 26/06/2021 14:29, Rroach wrote:<br>
    </div>
    <blockquote type="cite" cite="mid:tencent_A17CA7BA63F6E47B3FE7B1AC54E55B2A3609@qq.com">
      
      <div><font size="3">
          <div>Hi, I compile Xen-4.13 with CONFIG_UBSAN, and try test
            it. However, during testing, xl dmesg got the output as
            shown below.</div>
          <div><br>
          </div>
          <div>It seems that there is a potential pointer overflow
            within arch/x86/pv/emul-priv-op.c:131 where xen try to
            execute instruction ''' APPEND_CALL(save_guest_gprs)
            '''where APPEND_CALL try to add an offset on *p without
            proper checking.</div>
          <div><br>
          </div>
          <div>I compiled xen-4.13 by clang-9, with following
            instructions: ''' export CONFIG_UBSAN=y ''' &amp;&amp; '''
            make clang=y debug=y ''' . Do you have any idea what going
            on here?</div>
        </font></div>
    </blockquote>
    <br>
    <font size="3">You say Xen 4.13, but APPEND_CALL() doesn't exist
      there.&nbsp; I added it in 4.14 when I rewrote this mess to be
      compatible with CET by not using a ROP gadget.&nbsp; Your backtrace
      says 4.15 unstable which means its an old staging build (not that
      that is going to have any effect on this specific issue).<br>
      <br>
      The fact that it continued executing correctly means the
      calculation did the right thing, whether or not UBSAN was happy.&nbsp;
      The displacement will end up negative as the stub we're writing is
      numerically higher than {load,save}_guest_gprs(), which I guess
      means that f - stub_va will underflow.<br>
      <br>
      I'm very confused as to why UBSAN reports against
      save_guest_gprs(), considering that load_guest_gprs() went through
      the exact same logic a few instructions earlier.<br>
      <br>
      Either way, does this make the problem go away?<br>
      <br>
      diff --git a/xen/arch/x86/pv/emul-priv-op.c
      b/xen/arch/x86/pv/emul-priv-op.c<br>
      index 11467a1e3a..be41bced76 100644<br>
      --- a/xen/arch/x86/pv/emul-priv-op.c<br>
      +++ b/xen/arch/x86/pv/emul-priv-op.c<br>
      @@ -98,7 +98,7 @@ static io_emul_stub_t *io_emul_stub_setup(struct
      priv_op_ctxt *ctxt, u8 opcode,<br>
      &nbsp;#define APPEND_BUFF(b) ({ memcpy(p, b, sizeof(b)); p +=
      sizeof(b); })<br>
      &nbsp;#define
      APPEND_CALL(f)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \<br>
      &nbsp;&nbsp;&nbsp;&nbsp;
      ({&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      \<br>
      -&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; long disp = (long)(f) - (stub_va + p -
      ctxt-&gt;io_emul_stub + 5); \<br>
      +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; long disp = (long)(f) - (long)(stub_va + p -
      ctxt-&gt;io_emul_stub + 5); \<br>
      &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; BUG_ON((int32_t)disp !=
      disp);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \<br>
      &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *p++ =
      0xe8;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \<br>
      &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *(int32_t *)p = disp; p +=
      4;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \<br>
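      <br>
      For illustration, the signed form of that displacement calculation,
      reduced to a standalone helper (hypothetical names, my sketch
      rather than the actual Xen code):<br>

```c
/* Sketch of the rel32 displacement fix: with the base cast to long, the
 * subtraction is signed throughout, so a call target numerically below
 * the call site yields an ordinary negative displacement rather than an
 * unsigned wrap for UBSAN to complain about. */
long disp_signed(unsigned long f, unsigned long call_site)
{
    /* 5 accounts for the length of the e8 rel32 call instruction. */
    return (long)f - (long)(call_site + 5);
}
```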
      <br>
      ~Andrew</font><br>
  </body>
</html>

--------------A21145073B93ACB78A62D826--


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 13:59:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 13:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147559.272135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8q4-0003Wh-NC; Sat, 26 Jun 2021 13:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147559.272135; Sat, 26 Jun 2021 13:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lx8q4-0003Wa-Ju; Sat, 26 Jun 2021 13:59:16 +0000
Received: by outflank-mailman (input) for mailman id 147559;
 Sat, 26 Jun 2021 13:59:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx8q3-0003WQ-4b; Sat, 26 Jun 2021 13:59:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx8q2-0006Rm-Tx; Sat, 26 Jun 2021 13:59:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lx8q2-0006lK-IM; Sat, 26 Jun 2021 13:59:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lx8q2-0007so-Hs; Sat, 26 Jun 2021 13:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qBTSAIBDxZ1rIA67bnbwmCLX2By5SEBYwLNVt/n84Ik=; b=P9ww500DoDdOa7BGGk/mDIeWxm
	DBqdWmL2rNsd356IL8nHlUla4q4pJLG4sDGqGVuf1HFVW3DOz8EitVPvsLT0jEElRkx4ZWhs4eUBr
	7Fba5Zwh5wELsgd5JRZ3XzdWl9DWGK7m0fovpX0ABlmh/aKQDxaC5FRwbOjewOQkP8Pw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163104-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163104: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f591755823a7e94fc6b4b8ddce71f0421a94fa09
X-Osstest-Versions-That:
    xen=e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 13:59:14 +0000

flight 163104 xen-unstable real [real]
flight 163120 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163104/
http://logs.test-lab.xenproject.org/osstest/logs/163120/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163120-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 163031

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163031
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163031
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163031
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163031
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163031
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163031
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163031
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163031
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163031
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163031
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163031
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f591755823a7e94fc6b4b8ddce71f0421a94fa09
baseline version:
 xen                  e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0

Last test of basis   163031  2021-06-25 09:47:01 Z    1 days
Testing same since   163104  2021-06-25 23:38:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e87d8f60fa..f591755823  f591755823a7e94fc6b4b8ddce71f0421a94fa09 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 15:40:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 15:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147570.272166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxAPm-00050r-97; Sat, 26 Jun 2021 15:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147570.272166; Sat, 26 Jun 2021 15:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxAPm-00050k-6A; Sat, 26 Jun 2021 15:40:14 +0000
Received: by outflank-mailman (input) for mailman id 147570;
 Sat, 26 Jun 2021 15:40:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GWiO=LU=qq.com=2284696125@srs-us1.protection.inumbo.net>)
 id 1lxAPk-00050e-TE
 for xen-devel@lists.xenproject.org; Sat, 26 Jun 2021 15:40:13 +0000
Received: from out203-205-221-191.mail.qq.com (unknown [203.205.221.191])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8168ef38-a086-409b-9341-45451d3145fe;
 Sat, 26 Jun 2021 15:40:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8168ef38-a086-409b-9341-45451d3145fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s201512;
	t=1624722005; bh=mQdqil8MJcW72E6esP/TrAucxyDvk/VTTQR9a5B+shw=;
	h=From:To:Subject:Date;
	b=FeQzvd0vGT2k/lwE4oAu9zBLHNiR4uxJvf1KcmRSwpaWGauX/MHkle2Vm1jkmPZ5B
	 SYxMWSdFSHabXe+SNgTbwQu9egiPoLLq5cyP//IAFwXhLfJSvm2v3jSbnh4IGW9r2c
	 9U1oEItf58gQnOcHKvlzPTK5Cz6Ze3Yd/UqzKubU=
X-QQ-FEAT: zlhDrsF45Te9zTnBMo+pJnCBiHjJ2tkTVxX44SodkmcjqzRiJcelxnYP70LbO
	2qGmVIbztFiDTiJ0EXAo9HGeM8NIgC4zL8WnDt9ZtUb1EdnumnsYQmY6uoytxcolR6fEHxy
	ghho9gWka1e5N+sKvtWHIon39sFt1bhDFOhB0XNMsZWID1eeZqL9AiXLmKv4fozZ+OJPuvg
	mdfPZOjP1sS9mnMfxFsSrFw2VTHWvZv2tznPalzj8kMG9bggDG9deddfLP5A5K8KUkKCLxK
	HAnA==
X-QQ-SSF: 0000000000000020000000000000006
X-QQ-XMAILINFO: N+o6GGgP2bY6+PrtZNRmy+j9c36ar5l4C2FEJ53aAsIZZkHhv4vz+VRWiJLelA
	 EVUWwSAZ78sVK8Bdwo9n18bLgip1oidXATaUbz/j3R3NoZU5DDVk77fyKfi98YY+bt+XthEhg3B/G
	 1Na/NPFQTYtbynvWTKtQEC67frhmVyNVS5UN4g0Ldv9z1/fy2s9scwNpmHKeL29XNJqY6u7VYD2Bi
	 FQGSQ9CjMCEnr2NedrbkL26Apl0pFylz8VcHwwJ0mOffUGi1ejTZo59WPGIC5HrEombsxUM+ooDC6
	 KHja9+O4P3eW+sk1ORoFU1evepEnZ/HjJRDSkrKVDJG4mW5fVPSEBtA2AMH5P2x71s9nztE2gxFby
	 XfMC8jFQW/hosj2Ic+V2R6eVXOlDascJeBbgTkGNrbOAn5oKVeQo7VYU16ZGLE+OkgYFgMZDWS8S/
	 kRcZwt5fS6S8XqHPdyIOeEal8uDK6kXJbtzOoUjjY5ZX5xdcaNVpo7IHXRmgKmu2fbYo1Vt/hyfzN
	 YtFYm1jLzAw0vTRqRh8bT3ykBYRYhJgNSg8jLZbeHKAozrcgLPPEay0o32qKNF09eBvpeI1Tpnvsf
	 xYGuKfQlNrN4ZgAp6NS1K6/Wfemw1laCZcJfOXGzmLrcgvodLwIzovgFFEeD0nowIisMC11UDolcH
	 aIgP88k2cqztUWYdTzXA/0fAdwOE4jG/0s+/7Aa3Mxlt4CbQfCMpUF+fcI952rFBmHLmDEw3xoKno
	 rAk5aLuBJkjWUtIb+0dEaOz/jGs3/Re0AZ0I=
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 183.172.26.224
X-QQ-STYLE: 
X-QQ-mid: webmail801t1624721817t8801481
From: "=?ISO-8859-1?B?UnJvYWNo?=" <2284696125@qq.com>
To: "=?ISO-8859-1?B?eGVuLWRldmVs?=" <xen-devel@lists.xenproject.org>
Subject: A mismatched type error found
Mime-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_60D74999_10706D40_7C0933BE"
Content-Transfer-Encoding: 8Bit
Date: Sat, 26 Jun 2021 23:36:57 +0800
X-Priority: 3
Message-ID: <tencent_D17B77D71E5C6FE0BCB2559C7C59459FEF06@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x

This is a multi-part message in MIME format.

------=_NextPart_60D74999_10706D40_7C0933BE
Content-Type: text/plain;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

SGksIHdoZW4gSSBsb29rIHRoZSBzb3VyY2UgY29kZSBpbiBYZW4tNC4xNSBzb3VyY2UgY29k
ZSwgSSBmb3VuZCBhIHR5cGUgbWlzbWF0Y2guDQoNCg0KSW4gZGV0YWlsZWQsIGluIHhlbi9h
cmNoL3g4Ni9tc2kuYzpmaW5kX21zaV9lbnRyeSwgdGhlcmUgaXMgYSBjb21wYXJpc29uIGJl
dHdlZW4gZW50cnktJmd0O21zaV9hdHRyaWIudHlwZSBhbmQgY2FwX2lkLiBIb3dldmVyLCBh
Y2NvcmRpbmcgdG8gdGhlIGRlZmluaXRpb24sIHRoZSB0eXBlIGFwcGVhcnMgdG8gYmUgX191
OCwgd2hlcmUgaXMgYSBjaGFyIHZhcmlhYmxlLCBhbmQgdGhlIGNhcF9pZCBpcyBkZWZpbmVk
IGFzIGludCB2YXJpYWJsZSwgaGVuY2UgaXQgc2VlbXMgdG8gYmUuYSB0eXBlIG1pc21hdGNo
Lg0KDQoNCkRlc3BpdGUgdGhpcyBlcnJvciBkbyBub3QgYWZmZWN0IHN5c3RlbSBvcGVyYXRp
b24gYnkgZmFyLCBpdCBzdGlsbCBhZmZlY3QgdGhlIGNvZGUncyBxdWFsaXR5LCBhcyBzdWNo
IGVycm9yIGNvdWxkIHJlc3VsdCBpbiBwb3RlbnRpYWwgYnVncyBpbiB0aGUgZnV0dXJlLg0K
DQoNCkJlc3QgcmVnYXJkcy4NCg0KDQpGcmFua2xpbg==

------=_NextPart_60D74999_10706D40_7C0933BE
Content-Type: text/html;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

PG1ldGEgaHR0cC1lcXVpdj0iQ29udGVudC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNo
YXJzZXQ9R0IxODAzMCI+PGRpdj48ZGl2IHN0eWxlPSJmb250LWZhbWlseTogJnF1b3Q7bHVj
aWRhIEdyYW5kZSZxdW90OywgVmVyZGFuYTsiPkhpLCB3aGVuIEkgbG9vayB0aGUgc291cmNl
IGNvZGUgaW4gWGVuLTQuMTUgc291cmNlIGNvZGUsIEkgZm91bmQgYSB0eXBlIG1pc21hdGNo
LjwvZGl2PjxkaXYgc3R5bGU9ImZvbnQtZmFtaWx5OiAmcXVvdDtsdWNpZGEgR3JhbmRlJnF1
b3Q7LCBWZXJkYW5hOyI+PGJyPjwvZGl2PjxkaXYgc3R5bGU9ImZvbnQtZmFtaWx5OiAmcXVv
dDtsdWNpZGEgR3JhbmRlJnF1b3Q7LCBWZXJkYW5hOyI+SW4gZGV0YWlsZWQsIGluIHhlbi9h
cmNoL3g4Ni9tc2kuYzpmaW5kX21zaV9lbnRyeSwgdGhlcmUgaXMgYSBjb21wYXJpc29uIGJl
dHdlZW4gZW50cnktJmd0O21zaV9hdHRyaWIudHlwZSBhbmQgY2FwX2lkLiBIb3dldmVyLCBh
Y2NvcmRpbmcgdG8gdGhlIGRlZmluaXRpb24sIHRoZSB0eXBlIGFwcGVhcnMgdG8gYmUgX191
OCwgd2hlcmUgaXMgYSBjaGFyIHZhcmlhYmxlLCBhbmQgdGhlIGNhcF9pZCBpcyBkZWZpbmVk
IGFzIGludCB2YXJpYWJsZSwgaGVuY2UgaXQgc2VlbXMgdG8gYmUuYSB0eXBlIG1pc21hdGNo
LjwvZGl2PjxkaXYgc3R5bGU9ImZvbnQtZmFtaWx5OiAmcXVvdDtsdWNpZGEgR3JhbmRlJnF1
b3Q7LCBWZXJkYW5hOyI+PGJyPjwvZGl2PjxkaXYgc3R5bGU9ImZvbnQtZmFtaWx5OiAmcXVv
dDtsdWNpZGEgR3JhbmRlJnF1b3Q7LCBWZXJkYW5hOyI+RGVzcGl0ZSB0aGlzIGVycm9yIGRv
IG5vdCBhZmZlY3Qgc3lzdGVtIG9wZXJhdGlvbiBieSBmYXIsIGl0IHN0aWxsIGFmZmVjdCB0
aGUgY29kZSdzIHF1YWxpdHksIGFzIHN1Y2ggZXJyb3IgY291bGQgcmVzdWx0IGluIHBvdGVu
dGlhbCBidWdzIGluIHRoZSBmdXR1cmUuPC9kaXY+PGRpdiBzdHlsZT0iZm9udC1mYW1pbHk6
ICZxdW90O2x1Y2lkYSBHcmFuZGUmcXVvdDssIFZlcmRhbmE7Ij48YnI+PC9kaXY+PGRpdiBz
dHlsZT0iZm9udC1mYW1pbHk6ICZxdW90O2x1Y2lkYSBHcmFuZGUmcXVvdDssIFZlcmRhbmE7
Ij5CZXN0IHJlZ2FyZHMuPC9kaXY+PGRpdiBzdHlsZT0iZm9udC1mYW1pbHk6ICZxdW90O2x1
Y2lkYSBHcmFuZGUmcXVvdDssIFZlcmRhbmE7Ij48YnI+PC9kaXY+PGRpdiBzdHlsZT0iZm9u
dC1mYW1pbHk6ICZxdW90O2x1Y2lkYSBHcmFuZGUmcXVvdDssIFZlcmRhbmE7Ij5GcmFua2xp
bjwvZGl2PjwvZGl2Pg==

------=_NextPart_60D74999_10706D40_7C0933BE--


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 15:59:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 15:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147576.272177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxAih-0006au-VZ; Sat, 26 Jun 2021 15:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147576.272177; Sat, 26 Jun 2021 15:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxAih-0006an-Rt; Sat, 26 Jun 2021 15:59:47 +0000
Received: by outflank-mailman (input) for mailman id 147576;
 Sat, 26 Jun 2021 15:59:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxAig-0006ad-FK; Sat, 26 Jun 2021 15:59:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxAig-0008SH-B2; Sat, 26 Jun 2021 15:59:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxAif-00037G-Ss; Sat, 26 Jun 2021 15:59:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxAif-0005kJ-SO; Sat, 26 Jun 2021 15:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DqPpR/WbF5zJN8ikiv9GB/JGWnIU0WGLx4z381KpPig=; b=T1rPNxlCjbkkgW+lXtdwBgHy4i
	S1lavWndyyx5VlpnxZFAC6GlBqCXD40Yvz4f0xm1N3Ldkcc9TChFE9nmbvffmVw9VhOI1ulTj5gYY
	3xNH8o3Vhfnotl4jIfutv41J+8+1C7uO/sNI+1mpX4mmHOvOQUUqvfb5ZVoBtaGiRjdc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163116: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 15:59:45 +0000

flight 163116 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163116/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   22 days
Failing since        162368  2021-06-04 15:42:59 Z   22 days   52 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    1 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 16:37:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 16:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147582.272191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxBIi-0002fj-47; Sat, 26 Jun 2021 16:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147582.272191; Sat, 26 Jun 2021 16:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxBIi-0002fc-0z; Sat, 26 Jun 2021 16:37:00 +0000
Received: by outflank-mailman (input) for mailman id 147582;
 Sat, 26 Jun 2021 16:36:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxBIg-0002fS-3G; Sat, 26 Jun 2021 16:36:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxBIf-00019N-TX; Sat, 26 Jun 2021 16:36:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxBIf-00054k-Gc; Sat, 26 Jun 2021 16:36:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxBIf-0002WQ-G4; Sat, 26 Jun 2021 16:36:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=RUCJ/c9TppkcrD0vyXQpRHJvNJwMAI6jjD/gFIU+4xQ=; b=OzYhk9nB/qXJXFNToCHuIsY/kj
	Vi5B31+UGcuZeym7pFyQAbWA1pQkXG8reDQpDAC2IifDa2/iqv7RvX1lGukVIJ5nTTBwitOxRgBjI
	n0YaGDMXwg7wpIMcuJTmNcZI+a7EqW+LVib6nkGGP4adwFup6CarnLLxWJG0/C0YvQiA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-nested-amd
Message-Id: <E1lxBIf-0002WQ-G4@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 16:36:57 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-amd
testid xen-boot/l1

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d5f54009dba11d04bfe2a28eee47b994de66b84a
  Bug not present: 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/163123/


  commit d5f54009dba11d04bfe2a28eee47b994de66b84a
  Author: Anthony PERARD <anthony.perard@citrix.com>
  Date:   Tue May 11 10:28:03 2021 +0100
  
      libxl: Replace deprecated QMP command by "query-cpus-fast"
      
      We use the deprecated QMP command "query-cpus", which was removed in
      the QEMU 6.0 release. Its replacement, "query-cpus-fast", has been
      available since QEMU 2.12 (April 2018).
      
      This patch tries the new command first and, when it isn't available,
      falls back to the deprecated one so that libxl still works with
      older QEMU versions.
      
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Reviewed-by: Jason Andryuk <jandryuk@gmail.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1 --summary-out=tmp/163123.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-qemuu-nested-amd xen-boot/l1
Searching for failure / basis pass:
 163066 fail [host=pinot1] / 163007 ok.
Failure / basis pass flights: 163066 / 163007
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 050cee12315536aba18a73c8dea21116a9c90ffa e3c30795823672eec9bde75187e184f23ed98d70 c7691f5e340f3b579d40c77548f63133cdf5aac7
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#c410ad4da4b7785170d3d42a3ba190c2caac6feb-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b22726abdfa54592d6ad88f65b0297c0e8b363e2-050cee12315536aba18a73c8dea21116a9c90ffa git://xenbits.xen.org/osstest/seabios.git#e3c30795823672eec9bde75187e184f23ed98d70-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1-c7691f5e340f3b579d40c77548f63133cdf5aac7
Loaded 30169 nodes in revision graph
Searching for test results:
 162996 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163007 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163017 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d0ac9a61474cf594d19082bc8976247e984ea9a3 e3c30795823672eec9bde75187e184f23ed98d70 c7691f5e340f3b579d40c77548f63133cdf5aac7
 163025 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163024 [host=pinot0]
 163029 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d0ac9a61474cf594d19082bc8976247e984ea9a3 e3c30795823672eec9bde75187e184f23ed98d70 c7691f5e340f3b579d40c77548f63133cdf5aac7
 163034 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d2cad41defe4e0e9987549fbc8ebdf9ae138f90f
 163040 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5d3e4ebb5c71477d74a0c503438545a0126d3863
 163051 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 f3f778c81769075ac0eb93b98d4b2803e7936453
 163060 [host=pinot0]
 163065 [host=pinot0]
 163066 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 050cee12315536aba18a73c8dea21116a9c90ffa e3c30795823672eec9bde75187e184f23ed98d70 c7691f5e340f3b579d40c77548f63133cdf5aac7
 163073 [host=pinot0]
 163082 [host=pinot0]
 163089 [host=pinot0]
 163099 [host=pinot0]
 163105 [host=pinot0]
 163106 [host=pinot0]
 163109 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163112 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 050cee12315536aba18a73c8dea21116a9c90ffa e3c30795823672eec9bde75187e184f23ed98d70 c7691f5e340f3b579d40c77548f63133cdf5aac7
 163113 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 f7079d7ef69f6bf38d6ec3bda294ed5eabcf98ba
 163115 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163117 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
 163118 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163119 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
 163121 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163123 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
Searching for interesting versions
 Result found: flight 162996 (pass), for basis pass
 Result found: flight 163066 (fail), for basis failure
 Repro found: flight 163109 (pass), for basis pass
 Repro found: flight 163112 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
No revisions left to test, checking graph state.
 Result found: flight 163115 (pass), for last pass
 Result found: flight 163117 (fail), for first failure
 Repro found: flight 163118 (pass), for last pass
 Repro found: flight 163119 (fail), for first failure
 Repro found: flight 163121 (pass), for last pass
 Repro found: flight 163123 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d5f54009dba11d04bfe2a28eee47b994de66b84a
  Bug not present: 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/163123/


  commit d5f54009dba11d04bfe2a28eee47b994de66b84a
  Author: Anthony PERARD <anthony.perard@citrix.com>
  Date:   Tue May 11 10:28:03 2021 +0100
  
      libxl: Replace deprecated QMP command by "query-cpus-fast"
      
      We use the deprecated QMP command "query-cpus", which was removed in
      the QEMU 6.0 release. Its replacement, "query-cpus-fast", has been
      available since QEMU 2.12 (April 2018).
      
      This patch tries the new command first and, when it isn't available,
      falls back to the deprecated one so that libxl still works with
      older QEMU versions.
      
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1.{dot,ps,png,html,svg}.
----------------------------------------
163123: tolerable ALL FAIL

flight 163123 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/163123/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1        fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-amd                            fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Jun 26 18:18:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 18:18:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147590.272215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxCsQ-0003OL-KE; Sat, 26 Jun 2021 18:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147590.272215; Sat, 26 Jun 2021 18:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxCsQ-0003OE-HF; Sat, 26 Jun 2021 18:17:58 +0000
Received: by outflank-mailman (input) for mailman id 147590;
 Sat, 26 Jun 2021 18:17:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWd7=LU=kota.moe=nospam@srs-us1.protection.inumbo.net>)
 id 1lxCsN-0003O8-0p
 for xen-devel@lists.xenproject.org; Sat, 26 Jun 2021 18:17:57 +0000
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80461327-dc5e-40b9-a2bb-5413d5b94546;
 Sat, 26 Jun 2021 18:17:40 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id
 v20-20020a05600c2154b02901dcefb16af0so7974027wml.5
 for <xen-devel@lists.xenproject.org>; Sat, 26 Jun 2021 11:17:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80461327-dc5e-40b9-a2bb-5413d5b94546
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kota.moe; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=K6SbdIom3zgio+kZ6wfwDJH7rKPEfDAGy8TdscQ1Fts=;
        b=c6cU2RJcxB9treX5hi/OxkWRplBi7U9CErc1rc4z3B1uzmoMpEqqGdlY+LWXH7N5DH
         swjMX20G2YEJmG9iZ0b09BNw848w7hUZj3VyKZo8AeSMYihuCF9ZM4RA8CLuF49SbKAB
         HUiQgKLP2q3MpTKaB6Fp+44UVWbd9koTN5U9Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=K6SbdIom3zgio+kZ6wfwDJH7rKPEfDAGy8TdscQ1Fts=;
        b=WlMih2+93c6JkzlvXv/g1/C0O1fPg8ss4Mbb/bHC6ZS777UnVGot7FtCGqik3QQgxe
         wF/7qrO2W6tmWii2o7JvrdCJMPn15zu2AMqi5+7CjAABbL4vw8MJfI+mfLZgNjQgiOe0
         NztSMJd1brfs0r5ze51vwaNVAGBDfAmfGjuk2MK8Fox6uSZKBD8MieB8k2EVWe5o7r+/
         SLk8FQN1N/ACJI//CJgcWGuWAMqW63MwmroQbccVRFyOqdUnY7hJEpJIbGw/Wa+lHXn6
         7DDR58BLvnQhKfiAhP1HOJWLRSbYC9ssp3aaUhRFHVTWsvU10qVEwP3XjbeDXsux9i0o
         q1YQ==
X-Gm-Message-State: AOAM533Hzd9fpc04HQhkOBPDfsgLrdrWnvQyyklyCWZlaQutpXxSwJJi
	+vPByK/EtQ0dUDWK02bWWBv6xO/qI936W5GGjLS0cg==
X-Google-Smtp-Source: ABdhPJxOEjtV4xly2hQKRSYOj/QdYtY37Z1yVHzjvMk45DdusPTcP6PcPe1UsAYNko+U8dIogiNwPfBjWQkq7A0F3CY=
X-Received: by 2002:a7b:cc09:: with SMTP id f9mr17569753wmh.104.1624731459793;
 Sat, 26 Jun 2021 11:17:39 -0700 (PDT)
MIME-Version: 1.0
References: <YMWl4UnFBAVRDnys@eldamar.lan> <162352833546.2353.230557992597997974.reportbug@home.kota.moe>
 <2f7c7d36-b6f4-f8ab-756e-a563fa03b9e4@arm.com> <YNOCXe1cuNDNAB+Z@eldamar.lan>
In-Reply-To: <YNOCXe1cuNDNAB+Z@eldamar.lan>
From: =?UTF-8?B?4oCN5bCP5aSq?= <nospam@kota.moe>
Date: Sun, 27 Jun 2021 04:17:03 +1000
Message-ID: <CACsxjPZEMbxjiRQit7yaykL8LwHFdCg53ObCfTCdRLYV_-_Few@mail.gmail.com>
Subject: Re: Bug#989778: Regression in at least 5.10.y and mainline: Firewire
 audio interface fails to work properly (when booted under Xen)
To: Salvatore Bonaccorso <carnil@debian.org>
Cc: Robin Murphy <robin.murphy@arm.com>, 989778-maintonly@bugs.debian.org, 
	Jianxiong Gao <jxgao@google.com>, Christoph Hellwig <hch@lst.de>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linux-kernel@vger.kernel.org, 
	iommu@lists.linux-foundation.org, Marek Szyprowski <m.szyprowski@samsung.com>, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="00000000000084412605c5af4556"

--00000000000084412605c5af4556
Content-Type: text/plain; charset="UTF-8"

On Thu, 24 Jun 2021 at 04:50, Salvatore Bonaccorso <carnil@debian.org>
wrote:

> Hi Robin,
>
> On Mon, Jun 14, 2021 at 02:29:08PM +0100, Robin Murphy wrote:
> > On 2021-06-13 07:29, Salvatore Bonaccorso wrote:
> > > A user in Debian reported the above issue, which was reproducible with
> > > 5.13-rc5 and 5.10.y as packaged in Debian, and found that 85a5a6875ca9
> > > ("swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single")
> > > introduced the issue.
> >
> > Sounds like it's probably the same thing as being discussed over here:
> >
> >
> https://lore.kernel.org/linux-iommu/2e899de2-4b69-c4b6-33a6-09fb8949d2fd@nxp.com/
>
> Thanks for the pointer, it seems that now it has been fixed upstream
> with 5f89468e2f06 ("swiotlb: manipulate orig_addr when tlb_addr has
> offset").
>

I've checked out upstream v5.13 b7050b242430f3170e0b57f5f55136e44cb8dc66
(which includes the above commit) and confirmed that my issue is now
resolved.
Now to wait for it to reach Debian :)

--00000000000084412605c5af4556--


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 18:40:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 18:40:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147595.272225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxDEa-0006OG-FU; Sat, 26 Jun 2021 18:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147595.272225; Sat, 26 Jun 2021 18:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxDEa-0006O8-CZ; Sat, 26 Jun 2021 18:40:52 +0000
Received: by outflank-mailman (input) for mailman id 147595;
 Sat, 26 Jun 2021 18:40:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxDEZ-0006Ny-Qf; Sat, 26 Jun 2021 18:40:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxDEZ-0003Dp-JH; Sat, 26 Jun 2021 18:40:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxDEZ-0002jQ-C9; Sat, 26 Jun 2021 18:40:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxDEZ-0000YV-Bc; Sat, 26 Jun 2021 18:40:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jXgM+G7/DG+g6fcGMr7wHxpC1uyJ9XK88AowFp50mVI=; b=66eiSf+3+GrwyG5eMlbM+P30R2
	lZoRvos+BuWfUhp4umO/TBrTXvXGihUiQhEr2tamzqgpqt8GdkSbpNKMBl4q4psBJU66jzoregms8
	6yDlOtc79UQ+rXcj7TR1vQ1ps0urBACOPy1PJUOwUQu8AX1mEz6ZL/8e68R3+LT8R7ds=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163110: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3955ae93f5151ad2e982440b7c8d3776a9afee2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 18:40:51 +0000

flight 163110 qemu-mainline real [real]
flight 163126 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163110/
http://logs.test-lab.xenproject.org/osstest/logs/163126/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                e3955ae93f5151ad2e982440b7c8d3776a9afee2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  310 days
Failing since        152659  2020-08-21 14:07:39 Z  309 days  567 attempts
Testing same since   163110  2021-06-26 04:20:02 Z    0 days    1 attempts

------------------------------------------------------------
549 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178478 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 21:01:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 21:01:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147603.272246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxFPy-0001z4-W5; Sat, 26 Jun 2021 21:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147603.272246; Sat, 26 Jun 2021 21:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxFPy-0001yx-Qv; Sat, 26 Jun 2021 21:00:46 +0000
Received: by outflank-mailman (input) for mailman id 147603;
 Sat, 26 Jun 2021 21:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxFPx-0001yn-C7; Sat, 26 Jun 2021 21:00:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxFPx-0005Xc-1u; Sat, 26 Jun 2021 21:00:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxFPw-0000b2-Lo; Sat, 26 Jun 2021 21:00:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxFPw-0002SR-LM; Sat, 26 Jun 2021 21:00:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=53tz5ElAk//BKdSwf8bJy9hDhStHSn6H2DapERnJYLo=; b=K2LozOQBdPTHQW0yWkY1K0oNiO
	354z5PdJQh0HEDDcS88V0TgRGx1IDsgNSRvJALDENWOldN5BTT6GbNmRPz6fSKKaHtM7fNDgJ9W7+
	lLAvUzXnf2z7RLUxz7XvUaee4Kui1RMfBin5A1NZA5dvrG7qxGubKpLls1qEcdh8lQzc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163114-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163114: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b7050b242430f3170e0b57f5f55136e44cb8dc66
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 21:00:44 +0000

flight 163114 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163114/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-xl         22 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b7050b242430f3170e0b57f5f55136e44cb8dc66
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  330 days
Failing since        152366  2020-08-01 20:49:34 Z  329 days  559 attempts
Testing same since   163114  2021-06-26 07:49:15 Z    0 days    1 attempts

------------------------------------------------------------
6207 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691462 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 26 22:53:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Jun 2021 22:53:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147610.272266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxHBF-0003Qv-Ds; Sat, 26 Jun 2021 22:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147610.272266; Sat, 26 Jun 2021 22:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxHBF-0003Qn-Ap; Sat, 26 Jun 2021 22:53:41 +0000
Received: by outflank-mailman (input) for mailman id 147610;
 Sat, 26 Jun 2021 22:53:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxHBE-0003Qe-W5; Sat, 26 Jun 2021 22:53:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxHBE-0007Vl-Ld; Sat, 26 Jun 2021 22:53:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxHBE-0006X0-CX; Sat, 26 Jun 2021 22:53:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxHBE-0005db-C2; Sat, 26 Jun 2021 22:53:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kW7L4Rm/PG3k1NsyobZi6yxn2na55ojTy1XXMGyBu3k=; b=zjy4ZkOPBdVwzS1jh1BwgoMzLa
	CCEA6ma0ndkGML6IYuPLWF1piOXA9u+elZNBCPcPJQxICeCbjcAKWNnLPwBNG9Yg1utqS9D0ynUjE
	djzaIZNzKa5/4mrQg9qTwT6jX2R/LoNCnHkf9mQ8MAmnxkTDyOfJrq6uNB+IBA7TRVaI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163124-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163124: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Jun 2021 22:53:40 +0000

flight 163124 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163124/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   22 days
Failing since        162368  2021-06-04 15:42:59 Z   22 days   53 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 01:26:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 01:26:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147619.272292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxJYc-0001bY-1h; Sun, 27 Jun 2021 01:25:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147619.272292; Sun, 27 Jun 2021 01:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxJYb-0001bR-UW; Sun, 27 Jun 2021 01:25:57 +0000
Received: by outflank-mailman (input) for mailman id 147619;
 Sun, 27 Jun 2021 01:25:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxJYa-0001bH-Kt; Sun, 27 Jun 2021 01:25:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxJYa-0002mI-Fx; Sun, 27 Jun 2021 01:25:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxJYa-0005NX-5q; Sun, 27 Jun 2021 01:25:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxJYa-0005Rv-5K; Sun, 27 Jun 2021 01:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VpSqNIFuJwVnxtq0pVbj9GKhoLhbtL/HYOExFRxvbdA=; b=Tn68d8qiq3UcaUNSJejZ+g9i+Z
	TZBmmiBueVulrYEDdhs0uuqqAbkwZgZ4F5JFTH+Ws5zHcpHgOQMa+AZPRfMZ/OkrZusS1k8faMnGu
	XawbkvWBTmSunalMBngZLCQJMdyTK7XOcxRTCqDL0qRE9Puv6KDsKBNHgdc4UpbHjE5U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163122-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163122: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
X-Osstest-Versions-That:
    xen=f591755823a7e94fc6b4b8ddce71f0421a94fa09
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 01:25:56 +0000

flight 163122 xen-unstable real [real]
flight 163134 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163122/
http://logs.test-lab.xenproject.org/osstest/logs/163134/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 163134-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163104
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163104
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163104
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 163104
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163104
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163104
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163104
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163104
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163104
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163104
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163104
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163104
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83
baseline version:
 xen                  f591755823a7e94fc6b4b8ddce71f0421a94fa09

Last test of basis   163104  2021-06-25 23:38:27 Z    1 days
Testing same since   163122  2021-06-26 14:01:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f591755823..bb11edcec1  bb11edcec1a953ce590da797b0d005cd60f21e83 -> master


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 05:57:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 05:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147637.272332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxNnb-0000gC-9M; Sun, 27 Jun 2021 05:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147637.272332; Sun, 27 Jun 2021 05:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxNnb-0000g4-3q; Sun, 27 Jun 2021 05:57:43 +0000
Received: by outflank-mailman (input) for mailman id 147637;
 Sun, 27 Jun 2021 05:57:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxNna-0000fu-6d; Sun, 27 Jun 2021 05:57:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxNnZ-00083e-VO; Sun, 27 Jun 2021 05:57:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxNnZ-0000X7-KU; Sun, 27 Jun 2021 05:57:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxNnZ-0001c3-Jw; Sun, 27 Jun 2021 05:57:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c7tHZjLT2MU7TlYYfhsL4ycu65fTAKjxSxRCDqirdZQ=; b=PNmi/I6zjiyNI9EvpMktIzeuU2
	ho+8GhpwNU6tMaXPBl9IDVHENE1//ChXPVLw+EfqpVzEegQiE65CF1Cx7rxc2BRUKYDq7p9Wk8x8S
	hbCin26zavw1tV4QQF42TGxbqmPiBoBKsVs3zOJaQmnlrTtKaL+lt4AWHLTgxnlinoYk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163139-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163139: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7c08141f906e20e730c4b6407bc638e743deea48
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 05:57:41 +0000

flight 163139 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163139/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7c08141f906e20e730c4b6407bc638e743deea48
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  352 days
Failing since        151818  2020-07-11 04:18:52 Z  351 days  343 attempts
Testing same since   163111  2021-06-26 04:20:02 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63387 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 06:29:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 06:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147641.272346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxOI9-0003u8-Rh; Sun, 27 Jun 2021 06:29:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147641.272346; Sun, 27 Jun 2021 06:29:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxOI9-0003u1-Mf; Sun, 27 Jun 2021 06:29:17 +0000
Received: by outflank-mailman (input) for mailman id 147641;
 Sun, 27 Jun 2021 06:29:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxOI8-0003tr-52; Sun, 27 Jun 2021 06:29:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxOI7-0000EB-Rc; Sun, 27 Jun 2021 06:29:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxOI7-00021n-HF; Sun, 27 Jun 2021 06:29:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxOI7-0004qJ-Gm; Sun, 27 Jun 2021 06:29:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+U6pfZSGn3Kdm6qdnYwj7HmRTW3NRVA8oAcs91fj840=; b=eh5w7YCSmKGHbC/A4PgfYkdgWs
	aaHj7aK3Yh3VaYCrOZvr/J4ikSfWAvp9qM+8R6+WbLvtbYSYGIQaQ+XmndfzSrvmCspMr03wmTpvh
	QJprEilBBUTNJS15cYV2rv4vpy1OC98+u6hgJ2CbfwsMjoqFcDuDqgzixvzN34YJTZms=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163128-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163128: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3955ae93f5151ad2e982440b7c8d3776a9afee2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 06:29:15 +0000

flight 163128 qemu-mainline real [real]
flight 163140 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163128/
http://logs.test-lab.xenproject.org/osstest/logs/163140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                e3955ae93f5151ad2e982440b7c8d3776a9afee2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  310 days
Failing since        152659  2020-08-21 14:07:39 Z  309 days  568 attempts
Testing same since   163110  2021-06-26 04:20:02 Z    1 days    2 attempts

------------------------------------------------------------
549 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178478 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 06:41:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 06:41:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163132-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163132: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 06:41:08 +0000

flight 163132 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163132/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   23 days
Failing since        162368  2021-06-04 15:42:59 Z   22 days   54 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 08:17:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 08:17:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163130: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=625acffd7ae2c52898d249e6c5c39f348db0d8df
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 08:17:11 +0000

flight 163130 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163130/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow  22 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                625acffd7ae2c52898d249e6c5c39f348db0d8df
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  330 days
Failing since        152366  2020-08-01 20:49:34 Z  329 days  560 attempts
Testing same since   163130  2021-06-26 21:11:57 Z    0 days    1 attempts

------------------------------------------------------------
6207 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691622 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 09:48:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 09:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147661.272399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxROh-0006Tf-B8; Sun, 27 Jun 2021 09:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147661.272399; Sun, 27 Jun 2021 09:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxROh-0006TY-7v; Sun, 27 Jun 2021 09:48:15 +0000
Received: by outflank-mailman (input) for mailman id 147661;
 Sun, 27 Jun 2021 09:48:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxROg-0006TO-36; Sun, 27 Jun 2021 09:48:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxROf-0003xv-Ub; Sun, 27 Jun 2021 09:48:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxROf-0002w6-Nh; Sun, 27 Jun 2021 09:48:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxROf-0003XD-NC; Sun, 27 Jun 2021 09:48:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VZceQgZuEA8DPNnUCHmG6ixq76Zldh+TVZ+k4xiwTGE=; b=JUdrzpZm9YNTuHSpTO3f0l1TcT
	c5CI1KkOsWGQCxRpenF1eKHqPOUxKhiwY98pKMwqGv23EHlXAjPmP1r72O7iD0wcl4v4FDJ2bkXlE
	VbaWiCU63mnqoMJv7HzWDaEIFoNcyk+ap555XpJLgPN0oAekOrU86tNKgXeGJSAyBcHI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163147-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 163147: all pass - PUSHED
X-Osstest-Versions-This:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
X-Osstest-Versions-That:
    xen=c7691f5e340f3b579d40c77548f63133cdf5aac7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 09:48:13 +0000

flight 163147 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163147/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83
baseline version:
 xen                  c7691f5e340f3b579d40c77548f63133cdf5aac7

Last test of basis   162992  2021-06-23 09:20:40 Z    4 days
Testing same since   163147  2021-06-27 09:18:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c7691f5e34..bb11edcec1  bb11edcec1a953ce590da797b0d005cd60f21e83 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 13:13:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 13:13:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147670.272425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxUbP-0008Ag-Bv; Sun, 27 Jun 2021 13:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147670.272425; Sun, 27 Jun 2021 13:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxUbP-0008AZ-8t; Sun, 27 Jun 2021 13:13:35 +0000
Received: by outflank-mailman (input) for mailman id 147670;
 Sun, 27 Jun 2021 13:13:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxUbN-0008AP-2C; Sun, 27 Jun 2021 13:13:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxUbM-0007XG-Jk; Sun, 27 Jun 2021 13:13:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxUbM-0005Jt-Ac; Sun, 27 Jun 2021 13:13:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxUbM-0005Bz-A7; Sun, 27 Jun 2021 13:13:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L588nrgjIwt8Z8ADjvlNn+vKa8vZtnqctQEiUTaLwpw=; b=XKLq+6GlNuzFw7VEmbUrFfX8lW
	SRzcWp9D+8kFWVuqPoqhqK072YrS+DPxFk6LAHmBSc7/AsrCVVb3Kv6EVfh1kYltVZvj+X6gEwU5n
	1DFejrGqDUepl8r2POcmPyg3lm45I2M5strVfscrskqeU8jc7NsJqwmgd8xdRyDRMxsA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163136: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
X-Osstest-Versions-That:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 13:13:32 +0000

flight 163136 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163136/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 163122 pass in 163136
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 163122

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163122
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163122
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163122
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 163122
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163122
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163122
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163122
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163122
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163122
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163122
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163122
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163122
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83
baseline version:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83

Last test of basis   163136  2021-06-27 01:52:46 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 27 15:49:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 15:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147678.272451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxX1e-0004th-NM; Sun, 27 Jun 2021 15:48:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147678.272451; Sun, 27 Jun 2021 15:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxX1e-0004ta-KQ; Sun, 27 Jun 2021 15:48:50 +0000
Received: by outflank-mailman (input) for mailman id 147678;
 Sun, 27 Jun 2021 15:48:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxX1d-0004tQ-NJ; Sun, 27 Jun 2021 15:48:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxX1d-0001nG-Jy; Sun, 27 Jun 2021 15:48:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxX1d-0004Te-BC; Sun, 27 Jun 2021 15:48:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxX1d-0006lS-Ag; Sun, 27 Jun 2021 15:48:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gMXG7WylzgunLb6L2FcM8d2eNPrz7lGqx5yrQEIC8PY=; b=cEbFrd9KM03AyyJQI6WGb1v7Vv
	kLE7oagvm0onrfwMdPnMt8eYJ+mO+IXLTyL2yLj1tsGVm2zTBjOMN1IQ1LVteA0KTf8Z3e3yc9wlY
	LlqvTq6DoaFk6H3BNRynoy/Fn2dr5guckR/QAapikBAD5Y1SlPizepnXrI11Ck3r5K7E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163143: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 15:48:49 +0000

flight 163143 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   23 days
Failing since        162368  2021-06-04 15:42:59 Z   23 days   55 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    2 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    




Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 18:22:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 18:22:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147684.272472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxZQG-0002lY-0T; Sun, 27 Jun 2021 18:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147684.272472; Sun, 27 Jun 2021 18:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxZQF-0002lR-Rn; Sun, 27 Jun 2021 18:22:23 +0000
Received: by outflank-mailman (input) for mailman id 147684;
 Sun, 27 Jun 2021 18:22:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxZQE-0002kW-L5; Sun, 27 Jun 2021 18:22:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxZQE-0004r5-Al; Sun, 27 Jun 2021 18:22:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxZQE-0002hK-0x; Sun, 27 Jun 2021 18:22:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxZQE-0005ch-0Q; Sun, 27 Jun 2021 18:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=D1cgDGuf3a6YdKeJOGGSPzza5w7oZYImhRD+Ovk/IQc=; b=WwQIe8mrPNPLkttHJx7RsmBCg1
	z8yUEjUMFW0uocIMecDOvbxh2WuiF/EuNzT8gicDEwOBOq0iYzKFrvHOg6cxJedhZmEbZlFlGbgTr
	Dm5Sqi/oYZ0qqv+qzT1mn9xR0s2LDCCjC6xLpcPUeBN5nZgarip881433EYbNSSV2Q+U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163142-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163142: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3955ae93f5151ad2e982440b7c8d3776a9afee2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 18:22:22 +0000

flight 163142 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163142/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163128

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                e3955ae93f5151ad2e982440b7c8d3776a9afee2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  311 days
Failing since        152659  2020-08-21 14:07:39 Z  310 days  569 attempts
Testing same since   163110  2021-06-26 04:20:02 Z    1 days    3 attempts

------------------------------------------------------------
549 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    




Not pushing.

(No revision log; it would be 178478 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 19:17:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 19:17:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147688.272486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxaH4-0007gy-6b; Sun, 27 Jun 2021 19:16:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147688.272486; Sun, 27 Jun 2021 19:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxaH4-0007gr-3d; Sun, 27 Jun 2021 19:16:58 +0000
Received: by outflank-mailman (input) for mailman id 147688;
 Sun, 27 Jun 2021 19:16:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaH2-0007gh-Au; Sun, 27 Jun 2021 19:16:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaH2-0005ic-6F; Sun, 27 Jun 2021 19:16:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaH1-000480-Up; Sun, 27 Jun 2021 19:16:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaH1-0006Ha-UL; Sun, 27 Jun 2021 19:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sLpxDL4qqax764Z950PW5XyT/sM28UPoqMU2HRbjkcQ=; b=BHystxIbHAlZ+HASSfipVgQy1x
	IFPJwENEy672x9yzhkSIcn2bAdv/EUb+B90nE86zK7UoHZI5cLoMBNGW6ci6PSl072EBIh4SEu9PA
	/OHFbzJujaHBx8c78WqPnWy7FbLmaunUQg+DbZo3z6ZJ1BuztZLMVZFjvFYIermQX3mY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163154-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163154: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 19:16:55 +0000

flight 163154 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163154/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   23 days
Failing since        162368  2021-06-04 15:42:59 Z   23 days   56 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    2 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 19:34:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 19:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147692.272499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxaXS-0001Xd-MK; Sun, 27 Jun 2021 19:33:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147692.272499; Sun, 27 Jun 2021 19:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxaXS-0001XW-JB; Sun, 27 Jun 2021 19:33:54 +0000
Received: by outflank-mailman (input) for mailman id 147692;
 Sun, 27 Jun 2021 19:33:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaXR-0001XM-OS; Sun, 27 Jun 2021 19:33:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaXR-00060M-F7; Sun, 27 Jun 2021 19:33:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaXR-0004ij-6S; Sun, 27 Jun 2021 19:33:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxaXR-0004Ui-61; Sun, 27 Jun 2021 19:33:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=7undsRYXY+nBMS+/G7GvkBNEqt2ZSI9WptvK+FGXtss=; b=joat+NW5ZgYb14oAzMzawM3Nda
	3rSb1O2mcdvSBF6HktdSxxcS68VX8Oajb5IMPTEbBRMwMQQui7E68PJMUJsf5lGrLivWOmhcB+W8b
	qNnlGU5bFKVk2foD6OQ2cmH1hR40lSQfwRg+0IaHCQ809s6PiqW3Q090K3+6R5tpZhhA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-nested-intel
Message-Id: <E1lxaXR-0004Ui-61@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 19:33:53 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-intel
testid xen-boot/l1

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d5f54009dba11d04bfe2a28eee47b994de66b84a
  Bug not present: 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/163155/


  commit d5f54009dba11d04bfe2a28eee47b994de66b84a
  Author: Anthony PERARD <anthony.perard@citrix.com>
  Date:   Tue May 11 10:28:03 2021 +0100
  
      libxl: Replace deprecated QMP command by "query-cpus-fast"
      
      We use the deprecated QMP command "query-cpus", which is removed in
      the QEMU 6.0 release. Its replacement, "query-cpus-fast", has been
      available since QEMU 2.12 (April 2018).
      
      This patch tries the new command first and, when it isn't
      available, falls back to the deprecated one so libxl still works
      with older QEMU versions.
      
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
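
      The try-then-fall-back flow the commit describes can be sketched as
      follows. This is an illustrative Python sketch of the QMP exchange,
      not libxl's actual C implementation; the `qmp_execute` transport
      callable is a hypothetical stand-in for a real QMP connection.

```python
def query_cpus_with_fallback(qmp_execute):
    """Return vCPU info, preferring "query-cpus-fast" (QEMU >= 2.12).

    `qmp_execute` is a hypothetical callable standing in for a QMP
    transport: it takes a command name and returns the decoded JSON
    response dict ({"return": ...} on success, {"error": ...} on failure).
    """
    resp = qmp_execute("query-cpus-fast")
    if "error" in resp and resp["error"].get("class") == "CommandNotFound":
        # Older QEMU: the fast variant does not exist yet, so fall back
        # to the deprecated "query-cpus".
        resp = qmp_execute("query-cpus")
    return resp
```

      QMP reports an unknown command with error class "CommandNotFound",
      which is what makes this probe-and-fall-back approach workable
      across QEMU versions.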


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1 --summary-out=tmp/163155.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-qemuu-nested-intel xen-boot/l1
Searching for failure / basis pass:
 163142 fail [host=fiano1] / 163007 ok.
Failure / basis pass flights: 163142 / 163007
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e3955ae93f5151ad2e982440b7c8d3776a9afee2 e3c30795823672eec9bde75187e184f23ed98d70 bb11edcec1a953ce590da797b0d005cd60f21e83
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#c410ad4da4b7785170d3d42a3ba190c2caac6feb-c410ad4da4b7785170d3d42a3ba190c2caac6feb git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b22726abdfa54592d6ad88f65b0297c0e8b363e2-e3955ae93f5151ad2e982440b7c8d3776a9afee2 git://xenbits.xen.org/osstest/seabios.git#e3c30795823672eec9bde75187e184f23ed98d70-e3c30795823672eec9bde75187e184f23ed98d70 git://xenbits.xen.org/xen.git#5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1-bb11edcec1a953ce590da797b0d005cd60f21e83
Loaded 39943 nodes in revision graph
Searching for test results:
 162996 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163007 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163017 fail irrelevant
 163024 fail irrelevant
 163066 fail irrelevant
 163110 fail irrelevant
 163125 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163127 fail irrelevant
 163129 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e0da9171e02f4534124b9a9e07333382b38376c6 e3c30795823672eec9bde75187e184f23ed98d70 e87d8f60fa9b6eaa6a2357545a96e4fff05dbef0
 163131 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 6d622f3a96bbd76ce8422c6e3805e6609417ec76
 163133 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 45f59ed8865318bb0356954bad067f329677ce9e
 163135 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 4f1858763b7b1aeb79fa7c818eca98c96943aa69
 163137 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 f3f778c81769075ac0eb93b98d4b2803e7936453
 163128 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e3955ae93f5151ad2e982440b7c8d3776a9afee2 e3c30795823672eec9bde75187e184f23ed98d70 f591755823a7e94fc6b4b8ddce71f0421a94fa09
 163138 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 163f47c14737cfa9dfb3240deea356b08caf7614
 163141 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
 163144 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e3955ae93f5151ad2e982440b7c8d3776a9afee2 e3c30795823672eec9bde75187e184f23ed98d70 f591755823a7e94fc6b4b8ddce71f0421a94fa09
 163142 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e3955ae93f5151ad2e982440b7c8d3776a9afee2 e3c30795823672eec9bde75187e184f23ed98d70 bb11edcec1a953ce590da797b0d005cd60f21e83
 163146 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163148 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
 163151 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
 163153 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
 163155 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 d5f54009dba11d04bfe2a28eee47b994de66b84a
Searching for interesting versions
 Result found: flight 162996 (pass), for basis pass
 Result found: flight 163128 (fail), for basis failure (at ancestor ~2)
 Repro found: flight 163141 (pass), for basis pass
 Repro found: flight 163142 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c410ad4da4b7785170d3d42a3ba190c2caac6feb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b22726abdfa54592d6ad88f65b0297c0e8b363e2 e3c30795823672eec9bde75187e184f23ed98d70 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
No revisions left to test, checking graph state.
 Result found: flight 163146 (pass), for last pass
 Result found: flight 163148 (fail), for first failure
 Repro found: flight 163151 (pass), for last pass
 Repro found: flight 163152 (fail), for first failure
 Repro found: flight 163153 (pass), for last pass
 Repro found: flight 163155 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d5f54009dba11d04bfe2a28eee47b994de66b84a
  Bug not present: 8c9ed863738ff9e8b91975d6aa4464e7e8324eb7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/163155/


  commit d5f54009dba11d04bfe2a28eee47b994de66b84a
  Author: Anthony PERARD <anthony.perard@citrix.com>
  Date:   Tue May 11 10:28:03 2021 +0100
  
      libxl: Replace deprecated QMP command by "query-cpus-fast"
      
      We use the deprecated QMP command "query-cpus", which is removed in
      the QEMU 6.0 release. Its replacement, "query-cpus-fast", has been
      available since QEMU 2.12 (April 2018).
      
      This patch tries the new command first and, when it isn't
      available, falls back to the deprecated one so libxl still works
      with older QEMU versions.
      
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1.{dot,ps,png,html,svg}.
----------------------------------------
163155: tolerable ALL FAIL

flight 163155 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/163155/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1      fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-intel                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jun 27 20:57:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 20:57:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147697.272514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxbq1-0000aV-2d; Sun, 27 Jun 2021 20:57:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147697.272514; Sun, 27 Jun 2021 20:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxbq0-0000aO-Vp; Sun, 27 Jun 2021 20:57:08 +0000
Received: by outflank-mailman (input) for mailman id 147697;
 Sun, 27 Jun 2021 20:57:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxbpz-0000aE-Eq; Sun, 27 Jun 2021 20:57:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxbpz-0007Pa-4a; Sun, 27 Jun 2021 20:57:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxbpy-0007qr-P6; Sun, 27 Jun 2021 20:57:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxbpy-0000AS-OX; Sun, 27 Jun 2021 20:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UKJ2f32gv8Rxq5F0uEkxOdy8GsDZPcSLAliN4bRO0Zg=; b=RpXev6tBhxe/EmsqKArjT0a+6Z
	9KaJlXCCG5fN3RxOx7P4TEE1pugT10l9f9NMONNIC2oF/c6XKMJsLgTRZPioyIgjlVO64E7xynqGZ
	sZLJze/uXQbspfrUHe802n9Rik6QbTk4sF5RHJqH+xbsyq/gWCAgDNpaInpMeQcOyeh8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163145-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163145: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=625acffd7ae2c52898d249e6c5c39f348db0d8df
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 20:57:06 +0000

flight 163145 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163145/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start    fail in 163130 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-shadow 22 guest-start/debian.repeat fail in 163130 pass in 163145
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail in 163130 pass in 163145
 test-arm64-arm64-xl-xsm      13 debian-fixup               fail pass in 163130
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 163130

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                625acffd7ae2c52898d249e6c5c39f348db0d8df
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  331 days
Failing since        152366  2020-08-01 20:49:34 Z  330 days  561 attempts
Testing same since   163130  2021-06-26 21:11:57 Z    0 days    2 attempts

------------------------------------------------------------
6207 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691622 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 27 23:44:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Jun 2021 23:44:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147703.272527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxeRj-0007K1-7e; Sun, 27 Jun 2021 23:44:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147703.272527; Sun, 27 Jun 2021 23:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxeRj-0007Ju-4k; Sun, 27 Jun 2021 23:44:15 +0000
Received: by outflank-mailman (input) for mailman id 147703;
 Sun, 27 Jun 2021 23:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxeRi-0007Jk-0F; Sun, 27 Jun 2021 23:44:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxeRh-0001hI-OF; Sun, 27 Jun 2021 23:44:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxeRh-0007dT-Gz; Sun, 27 Jun 2021 23:44:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxeRh-0001lV-GJ; Sun, 27 Jun 2021 23:44:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7qVES1shAfr31GzwlHSAcilzm3sDZV2Wp53jaGstPVg=; b=xD3TT9GbNjC0eCgADr5UdvLkuP
	XSgOAJzjaD1SBJ/p1NfsO/2Krp/hux35xeKjUg5MNVHsuNYYh8eg1NwMq8xte/8ehpXKXPx7Bn0aj
	GVfK5rjuCmf9vFUfpV9BcBEEd1pFBpj1NBzgsKWuN6E1RlLChEu3TB2PUpMzGSPgPvpQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163157-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163157: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Jun 2021 23:44:13 +0000

flight 163157 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163157/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   23 days
Failing since        162368  2021-06-04 15:42:59 Z   23 days   57 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    2 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 03:17:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 03:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147718.272559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxhlq-00042a-8u; Mon, 28 Jun 2021 03:17:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147718.272559; Mon, 28 Jun 2021 03:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxhlq-00042T-5S; Mon, 28 Jun 2021 03:17:14 +0000
Received: by outflank-mailman (input) for mailman id 147718;
 Mon, 28 Jun 2021 03:17:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxhlo-00042J-Kf; Mon, 28 Jun 2021 03:17:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxhlo-0007Hz-DZ; Mon, 28 Jun 2021 03:17:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxhlo-0000D7-3S; Mon, 28 Jun 2021 03:17:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxhlo-0004Se-2w; Mon, 28 Jun 2021 03:17:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ecGP2PmD5oJBKcHA5+wBz1yHgdRR5YAu9ssEgpyT67w=; b=kSBVQRZjH+wNIAgaL2vukmMrdp
	4e8rkgfRegL0ZriE3dtANkkCZuNlT6YeyxDzZYc4J6na0NnLmGdV883ILiWKPJ16Ku7sFYTpLniMA
	Vq4PsHrOLr7dnKpZPGHaxf8uUE9JnYef36dhkLcbB+EBOBcQn4wTAfptogDraK1QIEUI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163159: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 03:17:12 +0000

flight 163159 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163159/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   23 days
Failing since        162368  2021-06-04 15:42:59 Z   23 days   58 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    2 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 04:08:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 04:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147722.272573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxiZ0-0000Xm-TA; Mon, 28 Jun 2021 04:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147722.272573; Mon, 28 Jun 2021 04:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxiZ0-0000Xf-QE; Mon, 28 Jun 2021 04:08:02 +0000
Received: by outflank-mailman (input) for mailman id 147722;
 Mon, 28 Jun 2021 04:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxiYz-0000XU-AV; Mon, 28 Jun 2021 04:08:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxiYz-0008BR-4P; Mon, 28 Jun 2021 04:08:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxiYy-0003Up-Nu; Mon, 28 Jun 2021 04:08:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxiYy-00015D-NK; Mon, 28 Jun 2021 04:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1I86DD0+tBc+vKfxUWbdDv5s3zZ87Vsm9NB4xxHeHfk=; b=5ovENG0XItwBzbQHj//yEo6TaL
	adqU8LLtIBs8lQFhLKUv+YLVYh0KB8yZ46DF0J+piEZRwJXvKQ+T9JoLQlpRqGPdzAsOLpLKaN3i8
	mJJDRQ91cxOFEOYv2pIrA0RftJ/+cNC3fxWcQjF0NfsYbjIcmALeacUynPqRgJn6a9Cc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163156: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3955ae93f5151ad2e982440b7c8d3776a9afee2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 04:08:00 +0000

flight 163156 qemu-mainline real [real]
flight 163161 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163156/
http://logs.test-lab.xenproject.org/osstest/logs/163161/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                e3955ae93f5151ad2e982440b7c8d3776a9afee2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  311 days
Failing since        152659  2020-08-21 14:07:39 Z  310 days  570 attempts
Testing same since   163110  2021-06-26 04:20:02 Z    1 days    4 attempts

------------------------------------------------------------
549 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178478 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 05:30:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 05:30:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147726.272588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxjqD-0008Pb-86; Mon, 28 Jun 2021 05:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147726.272588; Mon, 28 Jun 2021 05:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxjqD-0008PU-4P; Mon, 28 Jun 2021 05:29:53 +0000
Received: by outflank-mailman (input) for mailman id 147726;
 Mon, 28 Jun 2021 05:29:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ouwx=LW=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1lxjqB-0008PO-L5
 for xen-devel@lists.xen.org; Mon, 28 Jun 2021 05:29:51 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.57]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9f43677-c85f-4660-a387-08f502560669;
 Mon, 28 Jun 2021 05:29:47 +0000 (UTC)
Received: from AM0PR05CA0084.eurprd05.prod.outlook.com (2603:10a6:208:136::24)
 by AM8PR08MB6434.eurprd08.prod.outlook.com (2603:10a6:20b:369::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 05:29:45 +0000
Received: from VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:136:cafe::ba) by AM0PR05CA0084.outlook.office365.com
 (2603:10a6:208:136::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18 via Frontend
 Transport; Mon, 28 Jun 2021 05:29:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT023.mail.protection.outlook.com (10.152.18.133) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 05:29:44 +0000
Received: ("Tessian outbound df524a02e6bb:v97");
 Mon, 28 Jun 2021 05:29:44 +0000
Received: from 2e8f869104c3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 672B54BE-6077-4B24-94B9-CA2A030D0B4D.1; 
 Mon, 28 Jun 2021 05:29:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2e8f869104c3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 28 Jun 2021 05:29:38 +0000
Received: from DB9PR08MB6857.eurprd08.prod.outlook.com (2603:10a6:10:2a2::7)
 by DB8PR08MB5465.eurprd08.prod.outlook.com (2603:10a6:10:118::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Mon, 28 Jun
 2021 05:29:35 +0000
Received: from DB9PR08MB6857.eurprd08.prod.outlook.com
 ([fe80::90a8:39a5:bfd3:5507]) by DB9PR08MB6857.eurprd08.prod.outlook.com
 ([fe80::90a8:39a5:bfd3:5507%4]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 05:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9f43677-c85f-4660-a387-08f502560669
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7lkb/wHA4845nShKyhFcCiwf1MbpARwIedmbKnAX0bE=;
 b=QurbxloCeUM21Hz8j1iZRNBIsmxiPJ4Ucm6nWqMZ26bkUxuBti61EOl8f5Lm82DAPcIIs04RAzpq9eyeoE4p3gE0VWDapFVMo517WHKzkGNrFUUiUCnIqTQwxYm2bsemjPWPFodS3xzLlF5xcgRQS+abYCKENjRsDnqxbIpbYW4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WfuH6Daea0UJQei5YmHlehIwrk9/vMiMt0YLe7kqA9vAZlpLaSHnKQJCTPEqYRUTR+/y527OG6JHHaS2Bb2h5mChF5/AqTZcVCqZepvu75viSjk8KF8Z2FODPYeML2IkrJcJC9C5LXvMIiwOHKa2oKuoYDDoXQR9/tUhXnD51vAWuhRriVlRRp9nnRqHnXTw+0X9cnJQf/SUzeayUfmT6gEv/wvr9KdFN/BhtEBLm+py4nLZptJ9TnjWw/6IZLTToonEEHAAZD44SpFl8UVXc4i3jRLGHZmGiGA/28ftPmrv1wCYjKGjrKr7wXsQ9nRTA+/yfQ0cFXmcjJkM2CYjYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7lkb/wHA4845nShKyhFcCiwf1MbpARwIedmbKnAX0bE=;
 b=DcgCyJ9OLYTde8FpyMvXWVeM/1SOI0VLGWTqklABlwrw015Z5XlSrhgrTnOKdp1FOCKuXO99H9s2HelMflhBHwYlbH0/Xcfoby17GJv24LOnaaWe4igFJD+oyyhx50xRqT7SfmYadlVISoX74tJBXHiZNL6YJD2WiPGIOHBWVXo74JeyyoiVf2cXkrbv3vHLh7QYpjMTq1/9GTxxoFgFA/Ibgm+7FdwmL2oQELomosOyZNvxosp/ak1g+BMYfMEKDVB7wq4d5AgwDssZ6kWM8psETQ1MM3yvdoHzmkeKqOqGM5dKW9uYaanMOLH6drrfYdtAgVsLsvrA+cctdqvkhA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
CC: "will@kernel.org" <will@kernel.org>, "jean-philippe@linaro.org"
	<jean-philippe@linaro.org>, Julien Grall <julien@xen.org>, Andre Przywara
	<Andre.Przywara@arm.com>, Marc Zyngier <maz@kernel.org>,
	"julien.thierry.kdev@gmail.com" <julien.thierry.kdev@gmail.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>
Subject: RE: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
Thread-Topic: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
Thread-Index: Addhq3Jd+FbZaJt0R6WdbgcPW7X96wKMxNug
Date: Mon, 28 Jun 2021 05:29:35 +0000
Message-ID:
 <DB9PR08MB6857A33CD6A4BE540DCFF1459E039@DB9PR08MB6857.eurprd08.prod.outlook.com>
References:
 <DB9PR08MB6857B375207376D8320AFBA89E309@DB9PR08MB6857.eurprd08.prod.outlook.com>
In-Reply-To:
 <DB9PR08MB6857B375207376D8320AFBA89E309@DB9PR08MB6857.eurprd08.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 3D8323CD574FB64992F64C4D129E408D.0
x-checkrecipientchecked: true
Authentication-Results-Original: vger.kernel.org; dkim=none (message not
 signed) header.d=none;vger.kernel.org; dmarc=none action=none
 header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 3a6a99cc-cc7e-4e18-c4bc-08d939f5bcdf
x-ms-traffictypediagnostic: DB8PR08MB5465:|AM8PR08MB6434:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB64341883E5BF0BA728BE5A7C9E039@AM8PR08MB6434.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5465
Original-Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	931f34b9-f41e-4c02-7b8f-08d939f5b75f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 05:29:44.6648
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a6a99cc-cc7e-4e18-c4bc-08d939f5bcdf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6434

Hi,

Any comment?

Cheers,
Wei Chen

> -----Original Message-----
> From: Wei Chen
> Sent: 15 June 2021 14:12
> To: 'kvm@vger.kernel.org' <kvm@vger.kernel.org>; xen-devel@lists.xen.org
> Cc: 'will@kernel.org' <will@kernel.org>; 'jean-philippe@linaro.org'
> <jean-philippe@linaro.org>; 'Julien Grall' <julien@xen.org>; Andre Przywara
> <Andre.Przywara@arm.com>; 'Marc Zyngier' <maz@kernel.org>;
> 'julien.thierry.kdev@gmail.com' <julien.thierry.kdev@gmail.com>; Stefano
> Stabellini <sstabellini@kernel.org>; 'Oleksandr Tyshchenko'
> <Oleksandr_Tyshchenko@epam.com>
> Subject: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
>
> Hi,
>
> I have some thoughts on using the kvmtool Virtio implementation for Xen.
> I copied my markdown file into this email. If you have time, could you
> please help me review it?
>
> Any feedback is welcome!
>
> # Some thoughts on using kvmtool Virtio for Xen
> ## Background
>
> The Xen community is working on adding VIRTIO capability to Xen, and we
> are working on a VIRTIO backend for Xen. But apart from QEMU, which can
> support virtio-net for x86 Xen, there is no VIRTIO backend that can
> support Xen. Because of the community's strong preference for an
> out-of-QEMU solution, we want to find a lightweight VIRTIO backend to
> support Xen.
>
> We have the idea of utilizing the virtio implementation of kvmtool for
> Xen. We know there was some agreement that kvmtool won't try to be a
> full QEMU alternative, so we have written the two proposals below for
> the communities to discuss in public:
>
> ## Proposals
> ### 1. Introduce a new "dm-only" command
> 1. Introduce a new "dm-only" command to provide a pure device model
>    mode. In this mode, kvmtool only handles IO requests; VM creation
>    and initialization will be bypassed.
>
>     * We will rework the interface between the virtio code and the rest
>       of kvmtool, to use just the minimal set of information. At the
>       end, there would be MMIO accesses and shared memory that control
>       the device model, so that could be abstracted to do away with any
>       KVM specifics at all. If this is workable, we will send the first
>       set of patches to introduce this interface, and adapt the
>       existing kvmtool to it. Then later we can add Xen support on top
>       of it.
>
>       Regarding Xen support, we will detect the presence of the Xen
>       libraries, and also allow people to ignore them, as kvmtool does
>       with optional features like libz or libaio.
>
>       Ideally, we want to move all code relying on the Xen libraries to
>       a set of new files. In this case, these files can only be
>       compiled when the Xen libraries are detected. But if we can't
>       decouple this code completely, we may introduce a few #ifdefs to
>       protect this code.
>
>       If KVM or other VMMs do not need "dm-only" mode, or "dm-only"
>       cannot work without the Xen libraries, we will make the "dm-only"
>       command depend on the presence of the Xen libraries.
>
>       So a normal compile (without the Xen libraries installed) would
>       create a binary as close as possible to the current code, and
>       only people who have the Xen libraries installed would ever
>       generate a "dm-only"-capable kvmtool.
>
> ### 2. Abstract the kvmtool virtio implementation as a library
> 1. Add a kvmtool Makefile target to generate a virtio library. In this
>    scenario, not just Xen but any other project that wants to provide a
>    userspace virtio backend service can link to this virtio library.
>    These users would benefit from the VIRTIO implementation of kvmtool
>    and would participate in improvements, upgrades, and maintenance of
>    the VIRTIO library.
>
>     * In this case, the Xen-specific code will not be upstreamed to the
>       kvmtool repo; it would then be a natural part of the xen repo, in
>       xen/tools, or maintained in another repo.
>
>       We will have a completely separate VIRTIO backend for Xen, just
>       linking to kvmtool's VIRTIO library.
>
>     * The main changes to kvmtool would be:
>         1. Still need to rework the interface between the virtio code
>            and the rest of kvmtool, to abstract the whole virtio
>            implementation into a library.
>         2. Modify the current build system to add a new virtio library
>            target.
>
> ## Reworking the interface is the common work for the above proposals
> **In kvmtool, one virtual device can be separated into three layers:**
>
> - A device type layer to provide an abstraction
>     - Provide an interface to collect and store device configuration.
>         Using a block device as an example, kvmtool uses disk_image to
>         -  collect and store disk parameters like:
>             -  backend image format: raw, qcow or block device
>             -  backend block device or file image path
>             -  readonly, direct, etc.
>     - Provide operations to interact with real backend devices or services:
>         - provide backend device oper
YXRpb25zDQo+ICAgICAgICAgICAgIC0gcmF3IGltYWdlIG9wZXJhdGlvbnMNCj4gICAgICAgICAg
ICAgLSBxY293IGltYWdlIG9wZXJhdGlvbnMNCj4gLSBIeXBlcnZpc29yIGludGVyZmFjZXMNCj4g
ICAgIC0gR3Vlc3QgbWVtb3J5IG1hcHBpbmcgYW5kIHVubWFwcGluZyBpbnRlcmZhY2VzDQo+ICAg
ICAtIFZpcnR1YWwgZGV2aWNlIHJlZ2lzdGVyIGludGVyZmFjZQ0KPiAgICAgICAgIC0gTU1JTy9Q
SU8gc3BhY2UgcmVnaXN0ZXINCj4gICAgICAgICAtIElSUSByZWdpc3Rlcg0KPiAgICAgLSBWaXJ0
dWFsIElSUSBpbmplY3QgaW50ZXJmYWNlDQo+ICAgICAtIEh5cGVydmlzb3IgZXZlbnRmZCBpbnRl
cmZhY2UNCj4gLSBBbiBpbXBsZW1lbnRhdGlvbiBsYXllciB0byBoYW5kbGUgZ3Vlc3QgSU8gcmVx
dWVzdC4NCj4gICAgIC0gS3ZtdG9vbCBwcm92aWRlcyB2aXJ0dWFsIGRldmljZXMgZm9yIGd1ZXN0
LiBTb21lIHZpcnR1YWwgZGV2aWNlcyB0d28NCj4gICAgICAga2luZHMgb2YgaW1wbGVtZW50YXRp
b25zOg0KPiAgICAgICAgIC0gVklSVElPIGltcGxlbWVudGF0aW9uDQo+ICAgICAgICAgLSBSZWFs
IGhhcmR3YXJlIGVtdWxhdGlvbg0KPg0KPiBGb3IgZXhhbXBsZSwga3ZtdG9vbCBjb25zb2xlIGhh
cyB2aXJ0aW8gY29uc29sZSBhbmQgODI1MCBzZXJpYWwgdHdvIGtpbmRzDQo+IG9mIGltcGxlbWVu
dGF0aW9ucy4gVGhlc2UgaW1wbGVtZW50YXRpb24gZGVwZW5kcyBvbiBkZXZpY2UgdHlwZSBwYXJh
bWV0ZXJzDQo+IHRvIGNyZWF0ZSBkZXZpY2VzLCBhbmQgZGVwZW5kcyBvbiBkZXZpY2UgdHlwZSBv
cHMgdG8gZm9yd2FyZCBkYXRhIGZyb20vdG8NCj4gcmVhbCBkZXZpY2UuIEFuZCB0aGUgaW1wbGVt
ZW50YXRpb24gd2lsbCBpbnZva2UgaHlwZXJ2aXNvciBpbnRlcmZhY2VzIHRvDQo+IG1hcC91bm1h
cCByZXNvdXJjZXMgYW5kIG5vdGlmeSBndWVzdC4NCj4NCj4gSW4gdGhlIGN1cnJlbnQga3ZtdG9v
bCBjb2RlLCB0aGUgYm91bmRhcmllcyBiZXR3ZWVuIHRoZXNlIHRocmVlIGxheWVycyBhcmUNCj4g
cmVsYXRpdmVseSBjbGVhciwgYnV0IHRoZXJlIGFyZSBhIGZldyBwaWVjZXMgb2YgY29kZSB0aGF0
IGFyZSBzb21ld2hhdA0KPiBpbnRlcmxlYXZlZCwgZm9yIGV4YW1wbGU6DQo+IC0gSW4gdmlydGlv
X2Jsa19faW5pdCguLi4pIGZ1bmN0aW9uLCB0aGUgY29kZSB3aWxsIHVzZSBkaXNrX2ltYWdlIGRp
cmVjdGx5Lg0KPiAgIFRoaXMgZGF0YSBpcyBrdm10b29sIHNwZWNpZmllZC4gSWYgd2Ugd2FudCB0
byBtYWtlIFZJUlRJTyBpbXBsZW1lbnRhdGlvbg0KPiAgIGJlY29tZSBoeXBlcnZpc29yIGFnbm9z
dGljLiBTdWNoIGtpbmQgb2YgY29kZSBzaG91bGQgYmUgbW92ZWQgdG8gb3RoZXINCj4gICBwbGFj
ZS4gT3Igd2UganVzdCBrZWVwIGNvZGUgZnJvbSB2aXJ0aW9fYmxrX19pbml0X29uZSguLi4pIGlu
IHZpcnRpbw0KPiBibG9jaw0KPiAgIGltcGxlbWVudGF0aW9uLCBidXQga2VlcCB2aXJ0aW9fYmxr
X19pbml0KC4uLikgaW4ga3ZtdG9vbCBzcGVjaWZpZWQgcGFydA0KPiAgIGNvZGUuDQo+DQo+IEhv
d2V2ZXIsIGluIHRoZSBjdXJyZW50IFZJUlRJTyBkZXZpY2UgY3JlYXRpb24gYW5kIGRhdGEgaGFu
ZGxpbmcgcHJvY2VzcywNCj4gdGhlIGRldmljZSB0eXBlIGFuZCBoeXBlcnZpc29yIEFQSSB1c2Vk
IGFyZSBib3RoIGV4Y2x1c2l2ZSB0byBrdm10b29sIGFuZA0KPiBLVk0uIElmIHdlIHdhbnQgdG8g
dXNlIGN1cnJlbnQgVklSVElPIGltcGxlbWVudGF0aW9uIGZvciBvdGhlciBkZXZpY2UNCj4gbW9k
ZWxzIGFuZCBoeXBlcnZpc29ycywgaXQgaXMgdW5saWtlbHkgdG8gd29yayBwcm9wZXJseS4NCj4N
Cj4gU28sIHRoZSBtYWpvciB3b3JrIG9mIHJld29ya2luZyBpbnRlcmZhY2UgaXMgZGVjb3VwbGlu
ZyBWSVJUSU8NCj4gaW1wbGVtZW50YXRpb24NCj4gZnJvbSBrdm10b29sIGFuZCBLVk0uDQo+DQo+
ICoqSW50cm9kdWNlIHNvbWUgaW50ZXJtZWRpYXRlIGRhdGEgc3RydWN0dXJlcyB0byBkbyBkZWNv
dXBsZToqKg0KPiAxLiBJbnRyb2R1Y2UgaW50ZXJtZWRpZGF0ZSB0eXBlIGRhdGEgc3RydWN0dXJl
cyBsaWtlIGB2aXJ0aW9fZGlza190eXBlYCwNCj4gICAgYHZpcnRpb19uZXRfdHlwZWAsIGB2aXJ0
aW9fY29uc29sZV90eXBlYCBhbmQgZXRjLiBUaGVzZSBkYXRhIHN0cnVjdHVyZXMNCj4gICAgd2ls
bCBiZSB0aGUgc3RhbmRhcmQgZGV2aWNlIHR5cGUgaW50ZXJmYWNlcyBiZXR3ZWVuIHZpcnRpbyBk
ZXZpY2UNCj4gICAgaW1wbGVtZW50YXRpb24gYW5kIGh5cGVydmlzb3IuICBVc2luZyB2aXJ0aW9f
ZGlza190eXBlIGFzIGFuIGV4YW1wbGU6DQo+ICAgICB+fn5+DQo+ICAgICBzdHJ1Y3QgdmlydGlv
X2Rpc2tfdHlwZSB7DQo+ICAgICAgICAgLyoNCj4gICAgICAgICAgKiBFc3NlbnRpYWwgY29uZmln
dXJhdGlvbiBmb3IgdmlydGlvIGJsb2NrIGRldmljZSBjYW4gYmUgZ290IGZyb20NCj4gICAgICAg
ICAgKiBrdm10b29sIGRpc2tfaW1hZ2UuIE90aGVyIGh5cGVydmlzb3IgZGV2aWNlIG1vZGVsIGFs
c28gY2FuIHVzZQ0KPiAgICAgICAgICAqIHRoaXMgZGF0YSBzdHJ1Y3R1cmUgdG8gcGFzcyBuZWNl
c3NhcnkgcGFyYW1ldGVycyBmb3IgY3JlYXRpbmcNCj4gICAgICAgICAgKiBhIHZpcnRpbyBibG9j
ayBkZXZpY2UuDQo+ICAgICAgICAgICovDQo+ICAgICAgICAgc3RydWN0IHZpcnRpb19ibGtfY2Zn
IHZibGtfY2ZnOw0KPiAgICAgICAgIC8qDQo+ICAgICAgICAgICogVmlydGlvIGJsb2NrIGRldmlj
ZSBNTUlPIGFkZHJlc3MgYW5kIElSUSBsaW5lLiBUaGVzZSB0d28NCj4gbWVtYmVycw0KPiAgICAg
ICAgICAqIGFyZSBvcHRpb25hbC4gSWYgaHlwZXJ2aXNvciBwcm92aWRlcyBhbGxvY2F0ZV9tbWlv
X3NwYWNlIGFuZA0KPiAgICAgICAgICAqIGFsbG9jYXRlX2lycV9saW5lIGNhcGFiaWxpdHkgYW5k
IGRldmljZSBtb2RlbCBkb2Vzbid0IHNldCB0aGVzZQ0KPiAgICAgICAgICAqIHR3byBmaWVsZHMs
IHZpcnRpbyBibG9jayBpbXBsZW1lbnRhdGlvbiB3aWxsIHVzZSBoeXBlcnZpc29yDQo+IEFQSXMN
Cj4gICAgICAgICAgKiB0byBhbGxvY2F0ZSBNTUlPIGFkZHJlc3MgYW5kIElSUSBsaW5lLiBJZiB0
aGVzZSB0d28gZmllbGRzIGFyZQ0KPiAgICAgICAgICAqIGNvbmZpZ3VyZWQsIHZpcnRpbyBibG9j
ayBpbXBsZW1lbnRhdGlvbiB3aWxsIHVzZSB0aGVtLg0KPiAgICAgICAgICAqLw0KPiAgICAgICAg
IHBhZGRyX3QgYWRkcjsNCj4gICAgICAgICB1aW50MzJfdCBpcnE7DQo+ICAgICAgICAgLyoNCj4g
ICAgICAgICAgKiBJbiBrdm10b29sLCB0aGlzIG9wcyB3aWxsIGNvbm5lY3QgdG8gZGlza19pbWFn
ZSBBUElzLiBPdGhlcg0KPiAgICAgICAgICAqIGh5cGVydmlzb3IgZGV2aWNlIG1vZGVsIHNob3Vs
ZCBwcm92aWRlIHNpbWlsYXIgQVBJcyBmb3IgdGhpcw0KPiAgICAgICAgICAqIG9wcyB0byBpbnRl
cmFjdCB3aXRoIHJlYWwgYmFja2VuZCBkZXZpY2UuDQo+ICAgICAgICAgICovDQo+ICAgICAgICAg
c3RydWN0IGRpc2tfdHlwZV9vcHMgew0KPiAgICAgICAgICAgICAucmVhZA0KPiAgICAgICAgICAg
ICAud3JpdGUNCj4gICAgICAgICAgICAgLmZsdXNoDQo+ICAgICAgICAgICAgIC53YWl0DQo+ICAg
ICAgICAgICAgIC4uLg0KPiAgICAgICAgIH0gb3BzOw0KPiAgICAgfTsNCj4gICAgIH5+fn4NCj4N
> 2. Introduce an intermediate hypervisor data structure. This data structure
>    provides a set of standard hypervisor API interfaces. In the virtio
>    implementation, the KVM-specific APIs, like kvm_register_mmio, will not
>    be invoked directly. The virtio implementation will use these interfaces
>    to access hypervisor-specific APIs. For example, `struct vmm_impl`:
>     ~~~~
>     struct vmm_impl {
>         /*
>          * Pointer that links to the real hypervisor handle, like
>          * `struct kvm *kvm`. This pointer will be passed to the vmm ops.
>          */
>         void *vmm;
>         allocate_irq_line_fn_t(void *vmm, ...);
>         allocate_mmio_space_fn_t(void *vmm, ...);
>         register_mmio_fn_t(void *vmm, ...);
>         map_guest_page_fn_t(void *vmm, ...);
>         unmap_guest_page_fn_t(void *vmm, ...);
>         virtual_irq_inject_fn_t(void *vmm, ...);
>     };
>     ~~~~
>
> 3. After decoupling from kvmtool, any hypervisor can use the standard
>    `vmm_impl` and `virtio_xxxx_type` interfaces to invoke the standard
>    virtio implementation interfaces to create virtio devices.
>     ~~~~
>     /* Prepare VMM interface */
>     struct vmm_impl *vmm = ...;
>     vmm->register_mmio_fn_t = kvm__register_mmio;
>     /* kvm__map_guest_page is a wrapper of guest_flat_to_host */
>     vmm->map_guest_page_fn_t = kvm__map_guest_page;
>     ...
>
>     /* Prepare virtio_disk_type */
>     struct virtio_disk_type *vdisk_type = ...;
>     vdisk_type->vblk_cfg.capacity = disk_image->size / SECTOR_SIZE;
>     ...
>     vdisk_type->ops->read = disk_image__read;
>     vdisk_type->ops->write = disk_image__write;
>     ...
>
>     /* Invoke VIRTIO implementation API to create a virtio block device */
>     virtio_blk__init_one(vmm, vdisk_type);
>     ~~~~
>
> VIRTIO block device simple flow before reworking the interface:
> https://drive.google.com/file/d/1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX/view?usp=sharing
> ![image](https://drive.google.com/uc?export=view&id=1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX)
>
> VIRTIO block device simple flow after reworking the interface:
> https://drive.google.com/file/d/1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL/view?usp=sharing
> ![image](https://drive.google.com/uc?export=view&id=1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL)
>
>
> Thanks,
> Wei Chen
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 06:04:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 06:04:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147730.272598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxkNQ-00044Y-Vc; Mon, 28 Jun 2021 06:04:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147730.272598; Mon, 28 Jun 2021 06:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxkNQ-00044R-SZ; Mon, 28 Jun 2021 06:04:12 +0000
Received: by outflank-mailman (input) for mailman id 147730;
 Mon, 28 Jun 2021 06:04:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxkNP-00044G-8A; Mon, 28 Jun 2021 06:04:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxkNP-00026P-1T; Mon, 28 Jun 2021 06:04:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxkNO-0007dz-KB; Mon, 28 Jun 2021 06:04:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxkNO-0004SA-Jd; Mon, 28 Jun 2021 06:04:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mgwbMD1K/j7M9HvnwT6JIDJPQr4RqbrGkWWc7D/ta5k=; b=Pi6hIxgIXgamVE7WuG4QDlBXP6
	y2b27y3dKGvdwkXTrkqEmpR4WjFlgJRToww/u3P6+zzRPmkJGO4itlxls0oz9EDRtNM3LOVfYzHms
	/cRhEmqIlpLjnH+idEinkYuXNzIebGWYKP7xqmrRyMy3gkvfc3iKj/sHQDiPwg44JffE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163158-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163158: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b4b27b9eed8ebdbf9f3046197d29d733c8c944f3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 06:04:10 +0000

flight 163158 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163158/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b4b27b9eed8ebdbf9f3046197d29d733c8c944f3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  331 days
Failing since        152366  2020-08-01 20:49:34 Z  330 days  562 attempts
Testing same since   163158  2021-06-27 21:11:05 Z    0 days    1 attempts

------------------------------------------------------------
6207 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691694 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 07:48:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 07:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147734.272613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxm0P-0004q9-I4; Mon, 28 Jun 2021 07:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147734.272613; Mon, 28 Jun 2021 07:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxm0P-0004q2-EW; Mon, 28 Jun 2021 07:48:33 +0000
Received: by outflank-mailman (input) for mailman id 147734;
 Mon, 28 Jun 2021 07:48:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxm0N-0004pw-Pk
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 07:48:31 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61ca2de9-dcb7-425e-aa82-3bd4e6908611;
 Mon, 28 Jun 2021 07:48:30 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2050.outbound.protection.outlook.com [104.47.8.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-8-Lk29B_fGMNWHRcmOgjm4MA-1;
 Mon, 28 Jun 2021 09:48:27 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 28 Jun
 2021 07:48:27 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 07:48:26 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0231.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1e::27) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 07:48:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61ca2de9-dcb7-425e-aa82-3bd4e6908611
Subject: Re: [PATCH 02/12] libxenguest: deal with log-dirty op stats overflow
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
 <5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4e3afc8e-1ed8-2e27-b583-476d35352efd@suse.com>
Date: Mon, 28 Jun 2021 09:48:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0231.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::27) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 25.06.2021 18:36, Andrew Cooper wrote:
> On 25/06/2021 14:18, Jan Beulich wrote:
>> In send_memory_live() the precise value the dirty_count struct field
>> gets initialized to doesn't matter much (apart from the triggering of
>> the log message in send_dirty_pages(), see below), but it is important
>> that it not be zero on the first iteration (or else send_dirty_pages()
>> won't get called at all). Saturate the initializer value at the maximum
>> value the field can hold.
>
> I don't follow.  Migration would be extremely broken if the first
> iteration didn't work correctly, so something else is going on here.

As per the title, we're talking about an overflow situation here. In
particular, the field did end up zero when ctx->save.p2m_size was
0x100000000. I'm not claiming in any way that the first iteration would
generally not work.

>> While there also initialize struct precopy_stats' respective field to a
>> more sane value: We don't really know how many dirty pages there are at
>> that point.
>>
>> In suspend_and_send_dirty() and verify_frames() the local variables
>> don't need initializing at all, as they're only an output from the
>> hypercall which gets invoked first thing.
>>
>> In send_checkpoint_dirty_pfn_list() the local variable can be dropped
>> altogether: It's optional to xc_logdirty_control() and not used anywhere
>> else.
>>
>> Note that in case the clipping actually takes effect, the "Bitmap
>> contained more entries than expected..." log message will trigger. This
>> being just an informational message, I don't think this is overly
>> concerning.
>
> That message is currently a error, confirming that the VM will crash on
> the resuming side.

An error? All I see is

    if ( written > entries )
        DPRINTF("Bitmap contained more entries than expected...");

> This is a consequence of it attempting to balloon during the live phase
> of migration, and discussed in docs/features/migration.pandoc (well - at
> least mentioned on the "noone has fixed this yet" list).
>
>> --- a/tools/libs/guest/xg_sr_save.c
>> +++ b/tools/libs/guest/xg_sr_save.c
>> @@ -500,7 +500,9 @@ static int simple_precopy_policy(struct
>>  static int send_memory_live(struct xc_sr_context *ctx)
>>  {
>>      xc_interface *xch = ctx->xch;
>> -    xc_shadow_op_stats_t stats = { 0, ctx->save.p2m_size };
>> +    xc_shadow_op_stats_t stats = {
>> +        .dirty_count = MIN(ctx->save.p2m_size, (typeof(stats.dirty_count))~0)
>> +    };
>>      char *progress_str = NULL;
>>      unsigned int x = 0;
>>      int rc;
>> @@ -519,7 +521,7 @@ static int send_memory_live(struct xc_sr
>>          goto out;
>>
>>      ctx->save.stats = (struct precopy_stats){
>> -        .dirty_count = ctx->save.p2m_size,
>> +        .dirty_count = -1,
>
> This is an external interface, and I'm not sure it will tolerate finding
> more than p2m_size allegedly dirty.

But you do realize that a few lines down from here there already was

        policy_stats->dirty_count   = -1;

? Or are you trying to tell me that -1 (documented as indicating
"unknown") is okay on subsequent iterations, but not on the first one?
That's not being said anywhere ...

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 08:26:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 08:26:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147742.272624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmbG-0000z3-T3; Mon, 28 Jun 2021 08:26:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147742.272624; Mon, 28 Jun 2021 08:26:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmbG-0000yw-Ph; Mon, 28 Jun 2021 08:26:38 +0000
Received: by outflank-mailman (input) for mailman id 147742;
 Mon, 28 Jun 2021 08:26:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxmbF-0000yq-QL
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 08:26:37 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f81786c-0c53-4e63-8c21-c7052a706459;
 Mon, 28 Jun 2021 08:26:36 +0000 (UTC)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2052.outbound.protection.outlook.com [104.47.4.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-myO3LV2lNrCMqW0lg1dlKw-1; Mon, 28 Jun 2021 10:26:33 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2607.eurprd04.prod.outlook.com (2603:10a6:800:58::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 08:26:31 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 08:26:31 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM3PR07CA0106.eurprd07.prod.outlook.com (2603:10a6:207:7::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Mon, 28 Jun 2021 08:26:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f81786c-0c53-4e63-8c21-c7052a706459
Subject: Re: [PATCH 03/12] libxenguest: short-circuit "all-dirty" handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <55875a26-7f1d-a6d9-9384-b03b3b2cb86d@suse.com>
 <60be051f-7751-f15d-ae4d-2c7e9af82693@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f85ee373-b497-549b-242f-0dd9eda1b4cd@suse.com>
Date: Mon, 28 Jun 2021 10:26:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <60be051f-7751-f15d-ae4d-2c7e9af82693@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM3PR07CA0106.eurprd07.prod.outlook.com
 (2603:10a6:207:7::16) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 25.06.2021 19:02, Andrew Cooper wrote:
> On 25/06/2021 14:18, Jan Beulich wrote:
>> For one it is unnecessary to fill a perhaps large chunk of memory with
>> all ones. Add a new parameter to send_dirty_pages() for callers to
>> indicate so.
>>
>> Then it is further unnecessary to allocate the dirty bitmap altogether
>> when all that's ever going to happen is a single all-dirty run.
>
> The allocation is deliberate, and does want to stay where it is IMO.
>
> Single all-dirty runs are a debugging technique only.  All production
> cases are live, and you don't want to fail midway through because a
> late, large, memory allocation failed.

I'm afraid I don't understand: I don't move _when_ the allocation
occurs; I only suppress the allocation (altogether) when the allocated
memory remains unused.

> As for the send_{dirty,all}_pages() split, that was deliberate to keep
> the logic simple.  The logdirty bitmap is tiny (in comparison to other
> structures) outside of artificial cases like this.
>
> What you've done with this change is rendered send_all_pages()
> redundant, but not actually taken it out of the code, thereby
> complicating it.  At the moment, this doesn't look to be an improvement.

I view the remaining send_all_pages() as similarly useful (or not) as
e.g. send_domain_memory_checkpointed() (being merely a wrapper around
suspend_and_send_dirty()).

>> @@ -807,8 +798,11 @@ static int setup(struct xc_sr_context *c
>>      if ( rc )
>>          goto err;
>>
>> -    dirty_bitmap = xc_hypercall_buffer_alloc_pages(
>> -        xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
>> +    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
>> +        ? xc_hypercall_buffer_alloc_pages(
>> +              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
>> +        : (void *)-1L;
>
> This is a pointer loaded with a timebomb, which doesn't trigger NULL
> pointer checks, and for which {set,clear}_bit(dirty_bitmap, large_pfn)
> won't fault and will instead corrupt random areas of the address space.

Yeah, this isn't very nice, and gets done away with again in a later
patch. I'd prefer to keep it as it is (assuming the later change
will also go in), but if really deemed necessary I can move that code
re-arrangement here, such that the use of (void *)-1L wouldn't be
necessary anymore. (You may have noticed that all I did this for is
to be able to pass the !dirty_bitmap check later in the function, and
that I deliberately only update the local variable, not the hbuf,
making pretty certain that this pointer isn't going to be de-referenced.)

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 08:38:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 08:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147745.272635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmmw-0002Tx-1K; Mon, 28 Jun 2021 08:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147745.272635; Mon, 28 Jun 2021 08:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmmv-0002Tq-TR; Mon, 28 Jun 2021 08:38:41 +0000
Received: by outflank-mailman (input) for mailman id 147745;
 Mon, 28 Jun 2021 08:38:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxmmt-0002Tg-SD; Mon, 28 Jun 2021 08:38:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxmmt-00059h-M7; Mon, 28 Jun 2021 08:38:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxmmt-0008AX-AQ; Mon, 28 Jun 2021 08:38:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxmmt-0000vr-9v; Mon, 28 Jun 2021 08:38:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163164: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7c08141f906e20e730c4b6407bc638e743deea48
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 08:38:39 +0000

flight 163164 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163164/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7c08141f906e20e730c4b6407bc638e743deea48
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  353 days
Failing since        151818  2020-07-11 04:18:52 Z  352 days  344 attempts
Testing same since   163111  2021-06-26 04:20:02 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63387 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 08:47:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 08:47:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147749.272649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmvE-0003ud-Tg; Mon, 28 Jun 2021 08:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147749.272649; Mon, 28 Jun 2021 08:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxmvE-0003uW-QH; Mon, 28 Jun 2021 08:47:16 +0000
Received: by outflank-mailman (input) for mailman id 147749;
 Mon, 28 Jun 2021 08:47:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxmvD-0003uQ-PV
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 08:47:15 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37800dcc-adce-4e56-8146-34f7112a418e;
 Mon, 28 Jun 2021 08:47:14 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2053.outbound.protection.outlook.com [104.47.14.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-7-gu4uCluIPFGyi0N5Acz6qg-2;
 Mon, 28 Jun 2021 10:47:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 28 Jun
 2021 08:47:08 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 08:47:08 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0130.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1a::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 08:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37800dcc-adce-4e56-8146-34f7112a418e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624870033;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gG9mfNJ2KY2KWty3JiJ8dwc56C7sjhBo7TdmiXTQJic=;
	b=WXKM4tTAZ6MDCVpadbQY3iXeApxdgfRCz6FFtzOTEI40Wmko7JSUO2keM+nO/HlEoEqBDy
	66964BCy6adLCbdFMzPQtpi8JlhbDDNJxtC95SoYMS8zoWn9OYgPX+0yhp33PUihLYOA4K
	fr4iGbak++d/wvpbzEwMhOGQqsLXrD8=
X-MC-Unique: gu4uCluIPFGyi0N5Acz6qg-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 04/12] libxenguest: avoid allocating unused deferred-pages
 bitmap
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <61ff4f26-a9cc-d123-98a0-be6c23f21e9b@suse.com>
 <44825600-c27b-34ac-01b2-1ffb5e0bf0be@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3aea7472-6c1d-f786-db5f-ead60eb03240@suse.com>
Date: Mon, 28 Jun 2021 10:47:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <44825600-c27b-34ac-01b2-1ffb5e0bf0be@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0130.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1a::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1db7e232-a450-4ff4-853f-08d93a115042
X-MS-TrafficTypeDiagnostic: VI1PR04MB6864:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1db7e232-a450-4ff4-853f-08d93a115042
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 08:47:08.5837
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: E8/RGnYqsUq34LXOoWY1rUp7txczqig6P9aJK+CdvpJ642bjY69T/71vtFR4Fc6rIQYZnNUrhjsJgZclAQ8njg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6864

On 25.06.2021 20:08, Andrew Cooper wrote:
> On 25/06/2021 14:19, Jan Beulich wrote:
>> Like for the dirty bitmap, it is unnecessary to allocate the deferred-
>> pages bitmap when all that's ever going to happen is a single all-dirty
>> run.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> The clearing of the bitmap at the end of suspend_and_send_dirty() also
>> looks unnecessary - am I overlooking anything?
>
> Yes. Remus and COLO.  You don't want to accumulate successfully-sent
> deferred pages over checkpoints, otherwise you'll eventually be sending
> the entire VM every checkpoint.

Oh, so what I've really missed is save() being a loop over these
functions.

> Answering out of patch order...
>> @@ -791,24 +797,31 @@ static int setup(struct xc_sr_context *c
>>  {
>>      xc_interface *xch = ctx->xch;
>>      int rc;
>> -    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>> -                                    &ctx->save.dirty_bitmap_hbuf);
>>
>>      rc = ctx->save.ops.setup(ctx);
>>      if ( rc )
>>          goto err;
>>
>> -    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
>> -        ? xc_hypercall_buffer_alloc_pages(
>> -              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
>> -        : (void *)-1L;
>> +    if ( ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN )
>> +    {
>> +        DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>> +                                        &ctx->save.dirty_bitmap_hbuf);
>> +
>> +        dirty_bitmap =
>> +            xc_hypercall_buffer_alloc_pages(
>> +                xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
>> +        ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
>> +
>> +        if ( !dirty_bitmap || !ctx->save.deferred_pages )
>> +            goto enomem;
>> +    }
>
> So this is better than the previous patch.  At least we've got a clean
> NULL pointer now.
>
> I could in principle get on board with the optimisation, except it's not
> safe (see below).
>
>> --- a/tools/libs/guest/xg_sr_save.c
>> +++ b/tools/libs/guest/xg_sr_save.c
>> @@ -130,7 +130,7 @@ static int write_batch(struct xc_sr_cont
>>                                                        ctx->save.batch_pfns[i]);
>>
>>          /* Likely a ballooned page. */
>> -        if ( mfns[i] == INVALID_MFN )
>> +        if ( mfns[i] == INVALID_MFN && ctx->save.deferred_pages )
>>          {
>>              set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
>>              ++ctx->save.nr_deferred_pages;
>> @@ -196,8 +196,12 @@ static int write_batch(struct xc_sr_cont
>>              {
>>                  if ( rc == -1 && errno == EAGAIN )
>>                  {
>> -                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
>> -                    ++ctx->save.nr_deferred_pages;
>> +                    if ( ctx->save.deferred_pages )
>> +                    {
>> +                        set_bit(ctx->save.batch_pfns[i],
>> +                                ctx->save.deferred_pages);
>> +                        ++ctx->save.nr_deferred_pages;
>> +                    }
>
> These two blocks are the only two which modify deferred_pages.
>
> It occurs to me that this means deferred_pages is PV-only, because of
> the stub implementations of x86_hvm_pfn_to_gfn() and
> x86_hvm_normalise_page().  Furthermore, this is likely to be true for
> any HVM-like domains even on other architectures.

IOW are you suggesting to also avoid the allocation for HVM live
migration, thus effectively making assumptions on the two hooks
being just stubs in that case, which can't ever fail?

> If these instead were hard errors when !deferred_pages, then that at
> least gets the logic into an acceptable state.

But the goal here isn't to change the logic, just to avoid allocating
memory that's effectively never used. What you suggest could be a
separate patch, yes, but I'm afraid I'm not feeling confident enough
in understanding why you think this needs changing, so I'd prefer to
leave such a change to you. (If I were to apply some guessing to what
you may mean, I could deduce that you think ->nr_deferred_pages may
still need maintaining, with it being non-zero at the end of the last
step causing migration to fail. But there would then still not be any
need for the bitmap itself in the cases where it no longer gets
allocated.)

> However, the first hunk demonstrates that deferred_pages gets used even
> in the non-live case.  In particular, it is sensitive to errors with the
> guests' handling of its own P2M.  Also, I can't obviously spot anything
> which will correctly fail migration if deferred pages survive the final
> iteration.

How does the first hunk demonstrate this? The question isn't when
the bitmap gets updated, but under what conditions it gets consumed.
If the only sending function ever called is suspend_and_send_dirty(),
then nothing would ever have had a chance to set any bit. And any
bits set in the course of suspend_and_send_dirty() running will get
cleared again at the end of suspend_and_send_dirty().

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 08:53:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 08:53:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147753.272660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxn18-0005Pq-NJ; Mon, 28 Jun 2021 08:53:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147753.272660; Mon, 28 Jun 2021 08:53:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxn18-0005Pj-JX; Mon, 28 Jun 2021 08:53:22 +0000
Received: by outflank-mailman (input) for mailman id 147753;
 Mon, 28 Jun 2021 08:53:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxn17-0005Pd-Ew
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 08:53:21 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a1540eb-78f2-4f27-b949-331357f2e585;
 Mon, 28 Jun 2021 08:53:20 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2055.outbound.protection.outlook.com [104.47.12.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-2-Be9qyy78OaOXhPNW2IabGg-2;
 Mon, 28 Jun 2021 10:53:18 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6301.eurprd04.prod.outlook.com (2603:10a6:803:f1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 08:53:14 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 08:53:14 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0011.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1d::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Mon, 28 Jun 2021 08:53:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a1540eb-78f2-4f27-b949-331357f2e585
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624870399;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l21HFzxltDp6BhNjIaoFSlmyCCOOxxDHhIQ4T3l6tCU=;
	b=LL8TygXv1wsvEa7eUsxi0MfSrw2nRBuTsC1ZP2yKgGjcth4zs1g2K0x3f+vEBFJdNW89M3
	2VzefDqwDMurCPHpqZ60TwomhGUXefgyXKbw5VnO/vf0k5otl5+w4wC0SQ+85/2YOXIe6G
	juAJBYiQL2/00CkVa9Kkxy3ZUkTZYho=
X-MC-Unique: Be9qyy78OaOXhPNW2IabGg-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ceCx7tw2LEPU1/d5cIwEqTG906wg4Qhi7/OLU6dL6ypL+7lrGTaEO5qQGt+GOel7v0ZlQeg0NyBjePQQgDFWf+sD7wN3XMBFKk+T/uiTDvDiOr8EsrAvoZH4rDswqhQ+IX+aHQLc0EvpAb7rak04dJhehnoH1Ga6Uh09nZArEcp2agT/GxOZbzEIqLaC5ud0LK6OWVCcqe3jk68KtldPZdTTBrEWsFpwUnM1+1/gc1SGgbIpGtyHt+kv3GQiP7GjFoLEy6+HZ6L3Fou/pf0DzDJZCGyx7UZOp7VS4SYD25CP0XVUKHrZ+JDvgKsK4dai5xdiY/FuXq4GmylaSh/+dw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MNfBoWLeAchrpJjQXWjjbGgAEVvQCWKuiBky322Y9og=;
 b=bKHD/xMmy90uOl99i0Iq0ainZnk/3uyCIC8p4XD9mjbo1hCn9ErWVr5drQwAPJdcPgqifnFNwmJwxAbcHW3jZkXr9kcQB/UyullaB4BiQF7Sx+IHl4JFp7eCVkasSaDDlQp0uzSIkeu4VqEfO5xZ45SIcVvm4lPd6iJi+7tJwFPJojk6cNYjK9evbmcXD4YrluEHfBYPlSBPYqcXs5O0S2x1/+NdtxlBZHHyPwVAmLzrs6AbgkzQX+MkexVTF4tg6UMRFNu06aoSQequ88OvWNKqA8VAGDg8/5afXzX08oSSq1ExMVhxSc1NMakOHXdpuZCual2BjJCnolxJGAgkBg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 05/12] libxenguest: complete loops in
 xc_map_domain_meminfo()
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <0d824d4b-0696-baca-a3ef-95ee641e4d08@suse.com>
 <7cafca96-1d01-db96-8583-b8299aad41fe@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fc9de126-d3e3-cc9c-a203-9c6e733a7d2b@suse.com>
Date: Mon, 28 Jun 2021 10:53:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <7cafca96-1d01-db96-8583-b8299aad41fe@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0011.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::17) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b94d8614-5c97-4ede-08ac-08d93a122a54
X-MS-TrafficTypeDiagnostic: VI1PR04MB6301:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6301433DA9C0E4B398E2380DB3039@VI1PR04MB6301.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4BOgry0nbbxTtmPKJ3L0IHxsTzdGumU/QDRDnfaiA8srUF6zTO8Qh363QBjEbEER39hXs0tWYiO7RSUR/MaeIDdW6Or8EIVw7bYzaaLxoHm8c0InHB9a2+TX+4ubhk8SL5LTCsCI6d2YL3RFpMLvEopVbSIM38aPlslXjQqTvXVHqfQpccnzvzS/OQLiJiGtInTtGR0sD673h6htoUMLFvOqOsnyq1+mpPfp87kwkDkOC6E6KNuzXZOMOrRlJdGXyP2IALS+hBbLaHt697TJQSfft76t4MJKXvdPe8TUGM9FfYFbdCCcXQo9GjbeBIF3Ts2ZY3hDDTr6rL7OMHNtq+DoGKjdb9vPU1Y3kSiSP83iZ1cfEc8lmiKROom67nv6W8AWkNnr62hFuq7NzMYwEhn3JRglydSg32+UP+QrkTCQ+8a02/kDIBftOZ89F3DsmHvpWkOFlFnHxt6tb9sIAZUiiVdydppQOZN4lAgWNjr0V/xGGbDKp0eTiEwKV8eYEeQ8Quj34SRDRXKbatfv+2fMFjCdM1MlEuwkQh3xsfYV87mXOabvwGhvy1XXDvqkgRaUY2hNL/3C58g9JANQSjkBBJVfKqjfRQuzBRaZMecIf4uw22jUwg2lrE3FQVm4/WQrtayYQaTxIWi61xxGlIvWAZznd89Ci758t7nyQBuQQMvw85qyr8TQUQtI+HhcvSeZrTWi4LjuK7X5tPOk3V2w06vVFuVs8+PjC5wf4bI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(346002)(39850400004)(376002)(366004)(6916009)(956004)(53546011)(5660300002)(2906002)(66946007)(66476007)(316002)(66556008)(38100700002)(54906003)(478600001)(16576012)(186003)(16526019)(4326008)(83380400001)(8936002)(6486002)(31686004)(86362001)(8676002)(26005)(2616005)(31696002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?43TS+fm92Uydir5+QYCOFuyoQC8447b35Q2qJ66heQ2VHfsLhJBIiurl+0Me?=
 =?us-ascii?Q?725Gmbkr0QUTbzb7Vk+fGoTzl+99RgLTakfrCANFhM2ozcI7wMjw3kB2PhU5?=
 =?us-ascii?Q?y+nGq8Pg/zwgH4oT4HXJfQrsFyz0qzbliQZe/v0tvxrq7ISxS1tfrxIwuZH7?=
 =?us-ascii?Q?RegjDdAQ6e4srXTQfOvUAjYtFiEY1jSTL+8YBNy7lhDJ5aw9os3ql7Y4l5Bf?=
 =?us-ascii?Q?gAxNfoorFOSyLtFN/W21zAdRsgErRHHA0BH6YdmvK7r9Hurm5AemrbtH+dPX?=
 =?us-ascii?Q?j0wjJPKGFaQb+ulAGZcRt3Benk+OO/ll7819FToitpFpGBYGKFXq7Ym2BUr4?=
 =?us-ascii?Q?X2C5QhJcz9r1UVppMtMKg/QtcEml2JGk1odxk0XNd3daqK6kglprK+SOmwev?=
 =?us-ascii?Q?lgb44Mm4L7P6qJd+aUnfQA72tagt2Jhrpv6GymVclF/5rQvoeJsbwT2RVs5r?=
 =?us-ascii?Q?Beiw9Wgy4HZL3VdlFaclM1AGmpNnGPA6dK719kT/3BMl5imxoMus0p+7Lydx?=
 =?us-ascii?Q?L591JfUyaZHyiotyBu+wTwhiUARPjOSZI2oJjKcxxR2t4bWbQisZ5bDW28hm?=
 =?us-ascii?Q?haZvPIclXzHZZ4PRa9HHQF+h6eWiD98LLO/vRgOTObhfjWilyX/5LqCXcgA3?=
 =?us-ascii?Q?hGbM2W+CwLHvDj3IFCQPQl3yLYbeMRf254I7VeSMF76LJRHL+9wdBhfZ94+q?=
 =?us-ascii?Q?Xi08rDQkgI5tjIcbdshHZqBc9BrnPz9A5HSPkKRf85N5tRa8BkM8TuuGdWfL?=
 =?us-ascii?Q?z3tdHfvfFw3dICcMrqKXk0VPozyGgg//dgdx76LC+Ghkaafthd3TptPw9ILW?=
 =?us-ascii?Q?obFpFaHx6E1wOX0xYF3YxfP/Fjm+GWj6iyfy7ZUKh2dsVwy3VZ1VejSJSPac?=
 =?us-ascii?Q?C4BmPMAnn+Gp8Ajf30W4Ui6f0GlWJQsAW8mK0fRZQddlon9EB1n6XhJ4d+lU?=
 =?us-ascii?Q?nX+n9Od6eguKzBDBKIY5mvCEtfi76CRb9lYRgI+ObWlB3cJLcxBQhoRMhbrZ?=
 =?us-ascii?Q?GUL2obTJm2M+YxFY5SYLKpNY5jJPRaks4ejDqLKB8LMzerFNbqGKCmGiA+DX?=
 =?us-ascii?Q?xf9xa0zY5eSC3KwMk4Hu1J2rIi5zCU7JdBdYoDG0DDzr0FCNjPIQtCg2pdAq?=
 =?us-ascii?Q?64j+pQ5lzk7gNjswMFxeNyy6CheAA1VAbpcINMiqIkhcA1gDJZrehk8DRPA0?=
 =?us-ascii?Q?9ccGsyIazDC+rNC3ZHozELkj93gFMhd4KTZtAOSK4IINS8ziLF4IxXKvN+mX?=
 =?us-ascii?Q?Sh/E1hR3aBhwxt7dr34Ysybmhg62LwB4+fLlsG8cVUkC9lncQpLEM67FUM3w?=
 =?us-ascii?Q?AAqvnNw/4nYFHgD3F3r575yT?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b94d8614-5c97-4ede-08ac-08d93a122a54
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 08:53:14.4507
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CPzS1IbT5COpI3HavtD8bwqcK4iMK5FzjvBXMANs0m51ysx7FvCIVojqVj2dE7SW4uz2A7enc43JYKZLx3+neA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6301

On 25.06.2021 20:30, Andrew Cooper wrote:
> On 25/06/2021 14:19, Jan Beulich wrote:
>> minfo->p2m_size may have more than 31 significant bits. Change the
>> induction variable to unsigned long, and (largely for signed-ness
>> consistency) a helper variable to unsigned int.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/tools/libs/guest/xg_domain.c
>> +++ b/tools/libs/guest/xg_domain.c
>> @@ -40,7 +40,7 @@ int xc_map_domain_meminfo(xc_interface *
>>      xc_dominfo_t info;
>>      shared_info_any_t *live_shinfo;
>>      xen_capabilities_info_t xen_caps = "";
>> -    int i;
>> +    unsigned long i;
>>
>>      /* Only be initialized once */
>>      if ( minfo->pfn_type || minfo->p2m_table )
>> @@ -116,12 +116,12 @@ int xc_map_domain_meminfo(xc_interface *
>>      /* Retrieve PFN types in batches */
>>      for ( i = 0; i < minfo->p2m_size ; i+=1024 )
>>      {
>> -        int count = ((minfo->p2m_size - i ) > 1024 ) ?
>> -                        1024: (minfo->p2m_size - i);
>> +        unsigned int count = ((minfo->p2m_size - i) > 1024) ?
>> +                             1024 : (minfo->p2m_size - i);
>
> min().

min() using 1024UL or MIN()? (I'll use the former unless you tell
me otherwise.)

> Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> This whole infrastructure is almost abandoned, and broken.  It's used by
> xen-mfndump (debugging only) and xen-hptool mem-offline.
>
> The mem-offline functionality cannot possibly work usefully.  It is PV
> only, despite not having an HVM check, and in particular reads the dead
> page in an attempt to restore the contents elsewhere.  There is also no
> thought given to writes from outside sources, such as DMA from
> passthrough or a different dom0 foreign mapping.
>
> This is perhaps ok as an academic demonstration of "can I shuffle memory
> behind a live VM in ideal circumstances", but will be killed by the
> dom0 kernel if you ever try running it to resolve a real memory error on
> a VM, because there is no possibility of recovering the data.
>
> The mem-offline functionality needs deleting.  It isn't production
> ready, and can't credibly be made so.

I definitely agree; I'm merely trying to address an anomaly found
while auditing the code for certain properties, without any claim
that afterwards any of this would really work.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 09:05:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 09:05:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147757.272670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxnCZ-0006vV-Qz; Mon, 28 Jun 2021 09:05:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147757.272670; Mon, 28 Jun 2021 09:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxnCZ-0006vO-NW; Mon, 28 Jun 2021 09:05:11 +0000
Received: by outflank-mailman (input) for mailman id 147757;
 Mon, 28 Jun 2021 09:05:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxnCY-0006vI-EC
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 09:05:10 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3c456b2-960a-4244-8385-2a4d23a3f331;
 Mon, 28 Jun 2021 09:05:09 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2054.outbound.protection.outlook.com [104.47.12.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-10-SrglopH4OxSU0M5SG6nbsQ-1; Mon, 28 Jun 2021 11:05:07 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 09:05:04 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 09:05:04 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR0P281CA0085.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:1e::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Mon, 28 Jun 2021 09:05:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3c456b2-960a-4244-8385-2a4d23a3f331
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624871108;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XYk0OhTCDAfLE2dJ9YJDwAft3+yZP+9/0b2noT/03ys=;
	b=EB+PD4ofycqjLj6iuXHnQjOHAFQvEYNbCueANlnVcM7lKWgr38HZpzJJ9WgObLiq0zC2nJ
	Zdf4e2jHYwJoSda/SOdOOdjWip7DsU4yho4fnslOPXbavUZdcnxL08oT7qF/gAROoJCv8F
	JNnYvmNVWUZQO1i5gIfhZly03e5PN/s=
X-MC-Unique: SrglopH4OxSU0M5SG6nbsQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Tq3wUTQdaze0lNUrVO+S44IWV2OuL2ijOA2F7vlir7XYNknjy+z+2n1WgjKDGTDkkKm56RNU+oZvDa0FVEtNvCLlncn0/ohUcIGzzPr/PsYjwZoLoukoxgUqkYhLnPtL9Nl/4vEx6uaO8fQ0BhtGjRxgWQGoSAh6nebGYrK0VIcBWTwbhRqHAGcRB9QYGkvkDosz9hSCUyq9N8RcV6gCHRKdcbEAn2fQBgHtHsU5UOA8M6+F1fNy0raUTluPJlyna/xULkvkfZgHcHXG+TyLiUksoGspJYSWsrcZxBsxLxKFBapdhcBsuiNQ/09qOFkNycIf5Q4lK3fqNgLwa9h0dA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bW3/zbzhhdKD9/F2tCmrbV8jKBAkovc87nDqkTjAzcQ=;
 b=mQWMJ93PsxP+KUwcN0m7Oe3aC56ZjEgq4IvQk23RSK0PenP1LJ5alN5JNUNnYTbfEDSE21lc8EMfn4eiXYb/CGm757hJFy+OyC9K5rtiB8GYgPqCuAL1k7FzhvRCwzQwwcvvS5VMoYryfq5X4PuJUYkvwOCqDc6ZpCShXZIlclCGZKGN2D290mWU5U684Aj1aRxpLxgVOSDmOfkVisC2gzOvxd70ZEeW/SDerQoOXcCPx1rWZiaqzKLR4xMj7F8+n88KU4i5HwSitD9gqMuf72YBmzIxAYQRWwDauyAlFx7UaK1yrbmwROHChAk3P7BoMmMVuEyyoMkE9VXabY50Cw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 06/12] libxenguest: guard against overflow from too large
 p2m when checkpointing
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <09e81b91-84de-6e49-9a62-eb3a6f392954@suse.com>
 <8248ed3f-0437-4ba4-fc26-884e8d70cf92@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <990e4e8e-52e8-2f78-bce1-46045de798fb@suse.com>
Date: Mon, 28 Jun 2021 11:05:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <8248ed3f-0437-4ba4-fc26-884e8d70cf92@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR0P281CA0085.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::9) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7607f183-7b3f-4cf6-d32d-08d93a13d171
X-MS-TrafficTypeDiagnostic: VI1PR04MB5600:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5600842803AB2A0623F69C30B3039@VI1PR04MB5600.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	otFflH6fd8bWB2DjWjxxR2rKoCrZZhb9UNwqiUHhD4qZgHw/5M7f9+KevcQkDTC20aBAEZ/yw0Ioitrnmp+RVcH0NVZA9bvs2WM7krcU8C4VY5asV2nG10KI6ROKFcJrI8TFPEabrO+RnVF65I1DrQBufDQaqcLL6rH6rd5Y9e2J2HB0GRcBwL353AdMBcbF+y3jVBf9iUXrchRm8CXDpV146dDh+YJEQvByuPMvJvY6P3aGQCtmHb0oKJHVedv3a0PtVmPwsJ5TcqvCplI9wtf3j7vQ6LL4+mUF/yyuLcDsrId64HwQA8eSiudGCQb/p9HXYDHqd/45YA4l57RvOMCXw8nkrIO9/baIr0In9RS0uMs8Y+CIzw2rYGTM5ry7X0mRoQehSRIEVGZBj1jducRmUhdT9Qtpus2kuFBdj82xaQ3/6dxQ8yQBtWD2/wQgh/gZpev9Am0lDnOzq55PJjaGkXWsLXldAuWl7bjPX4eWvlez+7IhPrQS9p1N/WudfzwYYXfj3sJdx2LMvJ4ZPiDPA7SsUI1m55zq7dP9J07GMDVYayxJE50eDJ1tho49Yiy0DdW0WNirvWCQKDVdZkn8PBkZVhwTWrkclQM2IDpzSVfFO4XEqNWGNa/lbOBR6mAEireNRvTiPvcV5Q/dtDtgGjNIbeg3u/G4zszEN/Zvqv7tmuzntmrbjasIgwVJNU1+EkkSMTYkYH4UZBSukOkE9Z36t4aFCvVMWOIOf4c=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(396003)(366004)(376002)(136003)(39860400002)(5660300002)(6916009)(66946007)(66476007)(8676002)(66556008)(83380400001)(8936002)(186003)(36756003)(31696002)(16576012)(316002)(86362001)(2616005)(54906003)(478600001)(4326008)(956004)(16526019)(38100700002)(53546011)(2906002)(6486002)(31686004)(26005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?XEBljNMgc4iWZvdxxzOOsAs1f1KhBZI/nGdyF+ouXsxA0xvj6nr/t0skXKOj?=
 =?us-ascii?Q?P53GLeLVv/uMAldb7KUNk4p9hP/lBo1pPxm74oRloew81Cy7jTikbF7BPYCq?=
 =?us-ascii?Q?IfJkW3jqo0kf2PDCTn8o9/hlledirQIRktR1cwx+wQLsKkYT6IZaL1HNn2H2?=
 =?us-ascii?Q?mesqrQuCB4QCgSvEfVy2UMb7nWkVx011UeK5b9ZjTOo2oggqIWDEgthG48FI?=
 =?us-ascii?Q?eX8u/DpjmPozIwZxZyvQ2BaSQbyIn2EbgUqB2V3u5dK/2anRud4A2rERxohJ?=
 =?us-ascii?Q?5AnryJsZkx8Pyj3fy4WLwa7Jrr56BsvVDW3bFdQRAYpK6KTtsbuS28XJ0wvY?=
 =?us-ascii?Q?R/h3jO+9EkJtffTX8p1G/N9wJg7ZcDynP3DjBFbn4+XprohG27xzuBSq1ywC?=
 =?us-ascii?Q?tGme9nDyGtYgWjEe57XPCZr34IFcthSPJHHu1ZnRzgNauwh1iZduO8rLuMrC?=
 =?us-ascii?Q?MmFzHRY2ONEB5Sb/2pezXUEqr13kopUrmhmdbWCh+ap+hpx1nQHhuVcMzZm4?=
 =?us-ascii?Q?Z261cUmilxLYPg0JpRgsziyne6/jYWMpxWTYZ+jJ0S5b6+od+FVP4h4udD3Q?=
 =?us-ascii?Q?EHqydjLaCrzq9qXcBHk+TRqsDrjikJFLu9kLPP731puW/GRDFNXlsTdSHx2/?=
 =?us-ascii?Q?G8jT5SFpEN+UZfSTzw1GIOQdTjWsALqlvikH7wiqEHPVVwhTw+LNSxKn5qHh?=
 =?us-ascii?Q?W/CsChK/L2PfNqcR03saZuaBkPH+r5jdk3T3x+bg42F4flf6irnpTX+Z1m5E?=
 =?us-ascii?Q?C2hefnmdfg10t9bmF0Or4tHNkSS2kI8TfxUXyQBPBEL3iDc4w9bsVFFPnmiz?=
 =?us-ascii?Q?W33YBbyWD0yxLWzdFCk6sPdmsBFmQ2viRuID5pDcY5Rh9J/k+Cz8zpqrHIxA?=
 =?us-ascii?Q?dL5z0Kd2OfhMjJyv+Zi79t9Z8gc1kxlr6Cb/wCDhlV7fQDDiIYSurwhFQt0C?=
 =?us-ascii?Q?jjYvGSwmENfmtgcRussV6AhCeNwBminyegVb8C3acFYsKPdmawoZQnxx+Br/?=
 =?us-ascii?Q?w7Tv0PeHIXk+9X5ulb+Wc7m4rXrhThR1tkNlm48QRZao8WTQQc7E1r6WpmAX?=
 =?us-ascii?Q?ePsolb34zUEFM5ed15EuM2h2SP9gWkHwDoMm6DHG4BEUf/voADGUYdThczfJ?=
 =?us-ascii?Q?eINByu7ZUXWBxzAGn/IFBujtDt37e8a/DTht8F93kpvWQINu8Nc+WqNsmDtq?=
 =?us-ascii?Q?PHs03ee/qCDCDPeaYJZL7AVHY9VR+X5DHlrYCKyMJ2f0zuNz2G3O+XxvWVg9?=
 =?us-ascii?Q?JG2zZqD03JdUhYrRUA7+NbMHBO8naIMENPsm+ydYPl527uYt0dC7reBIdGDc?=
 =?us-ascii?Q?QvBtC95wimvmmnVQfrVfk4DD?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7607f183-7b3f-4cf6-d32d-08d93a13d171
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 09:05:04.3232
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aBn26ZhzFWNBN+FjHlhsdtb9sgVIyB6Wl0R+eDuZP+sv4Ev+NsY5IG92xZl9Dk5URjhjL2LnQglL9aNbKSN5Jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5600

On 25.06.2021 21:00, Andrew Cooper wrote:
> On 25/06/2021 14:20, Jan Beulich wrote:
>> struct xc_sr_record's length field has just 32 bits.
>
> The stream max record length is
>
> /* Somewhat arbitrary - 128MB */
> #define REC_LENGTH_MAX                (128U << 20)
>
> and is checked in the low level helpers, making the upper bound on the
> number of PFNs 0xFFFFFF once the record header is taken into account.
>
> There doesn't appear to have been any consideration made to what happens
> if this number gets too large.  That said, the replication will totally
> fall apart if it ever gets to a fraction of this, because this is the
> list of pages the source side needs to send again in addition to
> whatever *it* dirtied, as it is the state we've lost on the destination
> side by permitting the VM to run live.
>
> The common case is that, when execution diverges, the dirtied pages on
> source and destination will be almost the same, so merging this on the
> source side shouldn't lead to many superfluous pages needing to be sent.

While I follow what you say, I can't conclude whether you mean for me
to change the first sentence of the description (or even the patch
itself).

>> --- a/tools/libs/guest/xg_sr_restore.c
>> +++ b/tools/libs/guest/xg_sr_restore.c
>> @@ -450,7 +450,8 @@ static int send_checkpoint_dirty_pfn_lis
>>      xc_interface *xch = ctx->xch;
>>      int rc = -1;
>>      unsigned int count, written;
>> -    uint64_t i, *pfns = NULL;
>> +    unsigned long i;
>> +    uint64_t *pfns = NULL;
>>      struct iovec *iov = NULL;
>>      struct xc_sr_record rec = {
>>          .type = REC_TYPE_CHECKPOINT_DIRTY_PFN_LIST,
>> @@ -469,16 +470,28 @@ static int send_checkpoint_dirty_pfn_lis
>>
>>      for ( i = 0, count = 0; i < ctx->restore.p2m_size; i++ )
>>      {
>> -        if ( test_bit(i, dirty_bitmap) )
>> -            count++;
>> +        if ( test_bit(i, dirty_bitmap) && !++count )
>
> This is far too opaque logic.

This is an observation, without hint towards possible improvement.
I can attach a comment, but it's far from clear whether that's all
you're after.

> It's also entirely unnecessary...  All this loop is doing is calculating
> the size for the memory allocation below, and that can be done by using
> the stats output from xc_logdirty_control(), which means it doesn't want
> deleting in the earlier patch.

Only if there is a guarantee that the stats.dirty_count value hasn't
itself been truncated (or, as per the later patch, saturated) in the
hypervisor. Otherwise there may be more set bits in the bitmap, and
counting locally is still necessary. I generally think that anything
called "stats" may be used as guiding information, but not as precise
data. Yet the latter would be needed if you want to make memory
allocation depend on it.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 09:22:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 09:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147760.272681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxnT6-0000l4-Ar; Mon, 28 Jun 2021 09:22:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147760.272681; Mon, 28 Jun 2021 09:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxnT6-0000kx-7n; Mon, 28 Jun 2021 09:22:16 +0000
Received: by outflank-mailman (input) for mailman id 147760;
 Mon, 28 Jun 2021 09:22:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxnT4-0000kr-B6
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 09:22:14 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 65de924d-9b41-4902-ab8e-ff564b583400;
 Mon, 28 Jun 2021 09:22:12 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2052.outbound.protection.outlook.com [104.47.14.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-34-hFxY6NK1MIuz2DQHStMB5A-1; Mon, 28 Jun 2021 11:22:10 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4445.eurprd04.prod.outlook.com (2603:10a6:803:6e::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Mon, 28 Jun
 2021 09:22:08 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 09:22:08 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2P264CA0030.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101:1::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 09:22:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65de924d-9b41-4902-ab8e-ff564b583400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624872131;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ekL30rVGUWPmzwXTCzbjBBg2CpH8R64dGCHd8HGXuxk=;
	b=cvhaV9yIq/j5MZzVZGsx/rHxUqYiDlmRmCZbf5hPsgPcrmBV60aK6axhbNXF/VmEREbWnn
	RQck/6qxF2ZX5GXui2cH9YBkA9LBg9GsYGBS39Rcmo77p5ks5ht8aULuMwaqHRMp6AM050
	eHo+uo6GEHMF/fsljMp+XDRuFRrXZ3Q=
X-MC-Unique: hFxY6NK1MIuz2DQHStMB5A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q7Ia4aW7wOIcBh4Bs1ZoXNcNJXEP88uqQgyG17OjwTREZzDGP7Vtk5X/5oGc6qXblPdu/YkMJ5SvOu1LMOsyrsUCqmBqhqyJpbEePtaQnm2eD+PPVREEEE8I2tgSdyMhRb9LBS+u6SCRJS0RDXhsOaDuww4vzww03l1Sl6qJM2lWzfgn8DKaDeSYG4LIIU+u+Jr1NHZibDrAMIF2v2w1zRdZyf2EB6arBrAzS7okg4xdLU5FRwhy6fg25pMA+Mpn+HshxDLwfRw+zuqdz/Qp1wvUT9L13OBSogCfpQ66bHNtQOi/poNlpFGAdHRX0OA/SOH3ZqA+Xs6EnEMOn5lnag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tCQup5LRg5XvPA/47B72HUVCaTRAE1rF6WFN49f+ga4=;
 b=FE7tloSqgH0DZD73Yn09G1rt7EA+5mSqg2PNaPbjK4Pzsxf9x7mDtfViqzj1YUBEuAxb4zbKx6oT4GGCF5bmBw0ESRfBQVIpXkmgqmFi5HymG0EO7nRiklBlLJfV1atpF+Uh4VVrep5r70xN5WEnb1wIlrFB5qzDCUpoXqECC7JqZiMFL0Jh2dsgxBdE6diWlvOcNcEpL+/IjLqLvE5u7v1WtVfYYiA2AVv0e8DGYHefWKiJ0rqiWxa+HO62oVk5BSxyWQe0H7ybm2R80c04ntU1r4M6kExHWl5cm52tMGzfewch48gI/sp2OSzifpYlV4ugS9epZ/jm36TtwagPQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 12/12] SUPPORT.md: write down restriction of 32-bit tool
 stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <03512f29-7a4e-70ff-271b-7d65ed471935@suse.com>
 <f3a758a2-a8a9-60a3-92c9-1a490084dbb6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <813f814c-2ec5-915b-4fd6-303b8437ae04@suse.com>
Date: Mon, 28 Jun 2021 11:22:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <f3a758a2-a8a9-60a3-92c9-1a490084dbb6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0030.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:101:1::18) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dccba999-72bc-4ca6-37d3-08d93a163416
X-MS-TrafficTypeDiagnostic: VI1PR04MB4445:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4445D7D956BD7A49DF7FFAA5B3039@VI1PR04MB4445.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 25.06.2021 21:45, Andrew Cooper wrote:
> On 25/06/2021 14:24, Jan Beulich wrote:
>> Let's try to avoid giving the impression that 32-bit tool stacks are as
>> capable as 64-bit ones.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -131,6 +131,11 @@ ARM only has one guest type at the momen
>>
>>  ## Toolstack
>>
>> +While 32-bit builds of the tool stack are generally supported, restrictions
>> +apply in particular when running on top of a 64-bit hypervisor.
>
> Actually, this isn't true, and in a way which helps us right now.
>
> PV32 isn't security supported, and neither ARM nor x86 support dom0
> bitness != Xen bitness.

While I agree this may be one possible reading of the recent change of
support status for PV32, I didn't so far think "x86 doesn't support
dom0 bitness != Xen bitness" was stated anywhere. The recent change
was about security support only, and Dom0 isn't covered by it as
far as its status as a "guest" goes. This view of mine is, I think,
supported by osstest actually spending a fair share of its effort
on testing exactly this configuration.

Also please don't forget that besides a (64,32,32) tuple of (Xen,
Dom0-kernel,Dom0-userspace) there's also the possible (64,64,32)
one.

> On x86, it doesn't remotely work because of the
> pointer size mismatch,

Are you saying this for (64,32,32) or (64,64,32)? In any event I
have to admit that I don't see where pointer size (and it not
matching) would make this in principle impossible. That's what
both Xen and kernel have compat layers for.

> and while this was bodged in a horrifying way in
> the ARM ABI, I doubt anyone is in a hurry to turn that into a supported
> configuration.
>
> That said, it is my intent with the ABIv2 changes for a 32bit toolstack,
> under 64bit guest kernel, under 64bit or 128bit Xen (yes - RISCV-128 is
> already a thing being discussed) to function.

I'm curious what your plans are there. Afaict accommodating 128-bit
in the ABI right away would be a good idea, but it might end up bloating
structures unnecessarily. Perhaps you have some clever ideas
there ...

>>   For example,
>> +very large guests aren't supported in this case.
>
> The wording here wants to be careful, because under certain readings,
> you've just dropped security support for 32bit toolstacks.
>
> What we actually mean is "a toolstack with bitness < Xen is not expected
> to be able to manage very large domains correctly, and don't pester us
> with bugs when it doesn't work because we won't fix them".
>
> Whereas we will fix security issues which only happen to manifest in
> 32bit builds of the toolstack.

I've replaced "supported" by "expected to be manageable". If this
still doesn't fit, then please provide sufficient detail for me
to derive what wording would suit you.

>>   This includes guests giving
>> +the appearance of being large, by altering their own memory layouts.
>
> I'd drop the sentence. It's an internal detail of a corner case which we're
> expecting to remove in the future,

But this is the main reason for us having noticed the lack of a clear
statement here. Plus within the current ABI I don't see us having
any means to remove all the truncation issues. We should be glad if
we get a 64-bit tool stack to correctly deal with such guests
(performance aside).

> and "the guest kernel can DoS itself" isn't interesting.

Of course there's no intention to state anything about this aspect.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 09:41:06 2021
Subject: Re: [PATCH 01/12] libxc: split xc_logdirty_control() from
 xc_shadow_control()
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Marek Marczykowski <marmarek@invisiblethingslab.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <e928490c-d13c-8041-0ff7-e8b69ee73d6e@suse.com>
 <034399e0-d79f-71b1-286c-823a97da7e73@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <629db19c-9408-53c4-b247-3f567ffda4ff@suse.com>
Date: Mon, 28 Jun 2021 11:40:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <034399e0-d79f-71b1-286c-823a97da7e73@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On 25.06.2021 17:49, Andrew Cooper wrote:
> On 25/06/2021 14:17, Jan Beulich wrote:
>> For log-dirty operations a 64-bit field is being truncated to become an
>> "int" return value. Seeing the large number of arguments the present
>> function takes, reduce its set of parameters to that needed for all
>> operations not involving the log-dirty bitmap, while introducing a new
>> wrapper for the log-dirty bitmap operations. This new function in turn
>> doesn't need an "mb" parameter, but has a 64-bit return type. (Using the
>> return value in favor of a pointer-type parameter is left as is, to
>> disturb callers as little as possible.)
>>
>> While altering xc_shadow_control() anyway, also adjust the types of the
>> last two of the remaining parameters.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I wonder whether we shouldn't take the opportunity and also rename
>> xc_shadow_control() to, say, xc_paging_control(), matching the layer
>> above the HAP/shadow distinction in the hypervisor.
>
> I do remember this being an especially obnoxious interface to use. Any
> improvement would go a long way, but I think we also need to rename some
> domctls too.

Perhaps, but not here and now. Still - do you have an opinion on
renaming xc_shadow_control() on this occasion? I will admit
that I don't consider xc_paging_control() a great name, but at
least it would get the terminology in line with the hypervisor's.

>> --- a/tools/include/xenctrl.h
>> +++ b/tools/include/xenctrl.h
>> @@ -885,11 +885,15 @@ typedef struct xen_domctl_shadow_op_stat
>>  int xc_shadow_control(xc_interface *xch,
>>                        uint32_t domid,
>>                        unsigned int sop,
>> -                      xc_hypercall_buffer_t *dirty_bitmap,
>> -                      unsigned long pages,
>> -                      unsigned long *mb,
>> -                      uint32_t mode,
>> -                      xc_shadow_op_stats_t *stats);
>> +                      unsigned int *mb,
>> +                      unsigned int mode);
>> +long long xc_logdirty_control(xc_interface *xch,
>
> uint64_t to match the hypercall? All users of libxc are stdint.h aware.

First of all - if anything, then int64_t. And I first wanted to use
that type, indeed. But then I went and checked: There are no present
uses of int64_t at all here; throughout tools/include/ there's exactly
one: A function parameter in libxl.h. Otoh there is precedent of a
function returning "long long". Hence I went with that.

>> --- a/tools/libs/ctrl/xc_domain.c
>> +++ b/tools/libs/ctrl/xc_domain.c
>> @@ -650,25 +650,48 @@ int xc_watchdog(xc_interface *xch,
>>  int xc_shadow_control(xc_interface *xch,
>>                        uint32_t domid,
>>                        unsigned int sop,
>> -                      xc_hypercall_buffer_t *dirty_bitmap,
>> -                      unsigned long pages,
>> -                      unsigned long *mb,
>> -                      uint32_t mode,
>> -                      xc_shadow_op_stats_t *stats)
>> +                      unsigned int *mb,
>> +                      unsigned int mode)
>>  {
>>      int rc;
>>      DECLARE_DOMCTL;
>> -    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
>>
>>      memset(&domctl, 0, sizeof(domctl));
>>
>>      domctl.cmd = XEN_DOMCTL_shadow_op;
>>      domctl.domain = domid;
>>      domctl.u.shadow_op.op     = sop;
>> -    domctl.u.shadow_op.pages  = pages;
>>      domctl.u.shadow_op.mb     = mb ? *mb : 0;
>>      domctl.u.shadow_op.mode   = mode;
>> -    if (dirty_bitmap != NULL)
>> +
>> +    rc = do_domctl(xch, &domctl);
>> +
>> +    if ( mb )
>> +        *mb = domctl.u.shadow_op.mb;
>> +
>> +    return rc;
>> +}
>> +
>> +long long xc_logdirty_control(xc_interface *xch,
>> +                              uint32_t domid,
>> +                              unsigned int sop,
>> +                              xc_hypercall_buffer_t *dirty_bitmap,
>> +                              unsigned long pages,
>> +                              unsigned int mode,
>> +                              xc_shadow_op_stats_t *stats)
>> +{
>> +    int rc;
>> +    DECLARE_DOMCTL;
>> +    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
>> +
>> +    memset(&domctl, 0, sizeof(domctl));
>> +
>> +    domctl.cmd = XEN_DOMCTL_shadow_op;
>> +    domctl.domain = domid;
>> +    domctl.u.shadow_op.op    = sop;
>> +    domctl.u.shadow_op.pages = pages;
>> +    domctl.u.shadow_op.mode  = mode;
>
> Please use:
>
> struct xen_domctl domctl = {
>     .cmd = XEN_DOMCTL_shadow_op,
>     ...
> };
>
> I've been slowly taking out users of DECLARE_DOMCTL, because beyond
> being pure code obfuscation, valgrind (rightly) complains that the
> hypercall operates on uninitialised memory.

Well, yes, if that's the intended new style, then I can surely do it
that way. It would get the two functions out of sync though, unless I
also made the (unrelated) change in the original place as well.

(For the record, I don't think valgrind rightly complains: As with
pointer arguments passed to functions, at the call site it is unknown
whether the pointed-to object is input, output, or both. Analysis
tools can at best make guesses there.)

>> --- a/tools/libs/light/libxl_x86.c
>> +++ b/tools/libs/light/libxl_x86.c
>> @@ -529,10 +529,10 @@ int libxl__arch_domain_create(libxl__gc
>>          xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);
>>
>>      if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
>> -        unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
>> -                                           1024);
>> +        unsigned int shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
>> +                                          1024);
>>          xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
>> -                          NULL, 0, &shadow, 0, NULL);
>> +                          &shadow, 0);
>
> I know this isn't introduced by your patch, but this cannot possibly be
> correct without error handling. There is a good chance of this call
> running Xen out of memory.
>
> Any chance of a fix split out into a separate patch?

Sure - that was overly mechanical editing on my part; I clearly could
have spotted this myself. At the risk of stating the obvious though: this
change will come with a risk of regressions, in case a request was
almost successful and the guest currently gets away with the slightly
smaller allocation.

>> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
>> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
>> @@ -997,13 +997,13 @@ CAMLprim value stub_shadow_allocation_ge
>>  {
>>  	CAMLparam2(xch, domid);
>>  	CAMLlocal1(mb);
>> -	unsigned long c_mb;
>> +	unsigned int c_mb;
>>  	int ret;
>> =20
>>  	caml_enter_blocking_section();
>>  	ret =3D xc_shadow_control(_H(xch), _D(domid),
>>  				XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION,
>> -				NULL, 0, &c_mb, 0, NULL);
>> +				&c_mb, 0);
>>  	caml_leave_blocking_section();
>>  	if (ret !=3D 0)
>>  		failwith_xc(_H(xch));
>=20
> Not a bug introduced in this patch, but this is broken.=C2=A0 There is a =
kb
> vs mb units mismatch, and I don't see any shifts by 10 anywhere in the
> Ocaml stubs.

May I please get away with leaving this to the maintainers? I already
don't feel overly comfortable touching Ocaml code...

>> @@ -1016,14 +1016,14 @@ CAMLprim value stub_shadow_allocation_se
>>  					  value mb)
>>  {
>>  	CAMLparam3(xch, domid, mb);
>> -	unsigned long c_mb;
>> +	unsigned int c_mb;
>>  	int ret;
>>
>> 	c_mb = Int_val(mb);
>
> This has a 31 bit truncation issue on 32bit builds. I'm not sure how
> much we care.
>
>> 	caml_enter_blocking_section();
>> 	ret = xc_shadow_control(_H(xch), _D(domid),
>> 				XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
>> -				NULL, 0, &c_mb, 0, NULL);
>> +				&c_mb, 0);
>> 	caml_leave_blocking_section();
>> 	if (ret != 0)
>>  		failwith_xc(_H(xch));
>> --- a/tools/python/xen/lowlevel/xc/xc.c
>> +++ b/tools/python/xen/lowlevel/xc/xc.c
>> @@ -1192,8 +1192,7 @@ static PyObject *pyxc_shadow_control(PyO
>>                                        &dom, &op) )
>>          return NULL;
>>     =20
>>
>> -    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, NULL, 0, NULL)
>> -         < 0 )
>> +    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0) < 0 )
>>          return pyxc_error_to_exception(xc->xc_handle);
>>
>> @@ -1208,7 +1207,7 @@ static PyObject *pyxc_shadow_mem_control
>>      int op;
>>      uint32_t dom;
>>      int mbarg = -1;
>> -    unsigned long mb;
>> +    unsigned int mb;
>>
>>      static char *kwd_list[] = { "dom", "mb", NULL };
>>
>> @@ -1223,7 +1222,7 @@ static PyObject *pyxc_shadow_mem_control
>>          mb = mbarg;
>>          op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION;
>>      }
>> -    if ( xc_shadow_control(xc->xc_handle, dom, op, NULL, 0, &mb, 0, NULL) < 0 )
>> +    if ( xc_shadow_control(xc->xc_handle, dom, op, &mb, 0) < 0 )
>>          return pyxc_error_to_exception(xc->xc_handle);
>
> Here too. There are int truncations on the input and output, and like
> the Ocaml stubs, an apparent kb vs mb confusion.
>
> I'm not sure whether switching to PyLong is sensible. It's probably ok
> from a compatibility perspective.

Same here then, as I don't think I'm making anything worse with
these purely mechanical changes.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 10:02:22 2021
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
Subject: Some more QEMU 6.0 fixes
Date: Mon, 28 Jun 2021 11:01:55 +0100
Message-ID: <20210628100157.5010-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Follow-up of
    [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more deprecated usages
to fix a few missing bits.

To be backported to Xen 4.15 as well.




From xen-devel-bounces@lists.xenproject.org Mon Jun 28 10:02:36 2021
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH 1/2] libxl: Replace short-form boolean for QEMU's -vnc
Date: Mon, 28 Jun 2021 11:01:56 +0100
Message-ID: <20210628100157.5010-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210628100157.5010-1-anthony.perard@citrix.com>
References: <20210628100157.5010-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Commit f3f778c81769 missed one short-form boolean parameter: the
",password" suffix of QEMU's -vnc option still needs to be spelled
",password=on".

Fixes: f3f778c81769 ("libxl: Replace QEMU's command line short-form boolean option")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 5b01cf284163..7670e403a90f 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1324,7 +1324,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             vncarg = GCSPRINTF("127.0.0.1:%d", vnc->display);
 
         if (vnc->passwd && vnc->passwd[0]) {
-            vncarg = GCSPRINTF("%s,password", vncarg);
+            vncarg = GCSPRINTF("%s,password=on", vncarg);
         }
 
         if (libxl_defbool_val(vnc->findunused)) {
-- 
Anthony PERARD
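
[Editorial note: a minimal standalone sketch of what the patched hunk builds, not libxl code. snprintf stands in for libxl's garbage-collected GCSPRINTF allocator, and the helper name is hypothetical; only the resulting -vnc argument string (",password=on" instead of the removed short form ",password") reflects the patch.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: build QEMU's -vnc argument as the patched code does.
 * QEMU rejects the deprecated short-form boolean ",password", so the
 * explicit ",password=on" spelling is required. */
static void build_vnc_arg(char *buf, size_t len, int display,
                          const char *passwd)
{
    /* Listen address and display number, as in the unmodified code. */
    snprintf(buf, len, "127.0.0.1:%d", display);
    if (passwd && passwd[0]) {
        size_t used = strlen(buf);
        /* The one-line fix: long-form boolean instead of ",password". */
        snprintf(buf + used, len - used, ",password=on");
    }
}
```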



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 10:02:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 10:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147772.272726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxo6D-0006Lv-6a; Mon, 28 Jun 2021 10:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147772.272726; Mon, 28 Jun 2021 10:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxo6D-0006Lo-2Z; Mon, 28 Jun 2021 10:02:41 +0000
Received: by outflank-mailman (input) for mailman id 147772;
 Mon, 28 Jun 2021 10:02:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPZa=LW=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lxo6B-0005wp-Gv
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 10:02:39 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef78c4ea-94b6-4a83-90a9-67fc60d55536;
 Mon, 28 Jun 2021 10:02:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef78c4ea-94b6-4a83-90a9-67fc60d55536
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624874554;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=UFTvpVdfNxrP+bv4AAuRgBxIwdYxDQVjcOfg33auNmM=;
  b=SPrxiYJ/2NRLD/c1MVWnAm0GakSn4oiEYTE8FPb5yiqv059TqXasGUX3
   lSBZiMlfzimG9HJmSK7hwNk9+HprKx2d/MbtIQRgTABRF0BcwUV8TnMQ8
   DtVfeIaI6aOUdayssaovIuRLCRZyAK1M1KdJfnfOMQNB36fSJ43j2BBa6
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4EIA5kcW1I6IqWbB+qeCtYNw3oo2Q9HJXNiY2CMTTgZnDDEBM/SV7FaV2HZJ+QYny5PXW8Mg9u
 80tq3JTLTo+MI3tw7fY9RxlYYOT8A1JuChY+OxJCMvv8bYnptTOpeJ6UR4BGSRjz6qR/1xSTKI
 K/T/Zg310ZGzxbnRaDtDFCVY5pUJpZnRuJ1qo72owudqVogHXZWQk9MeiSPZS8INyYbt+qkG+T
 fDWZuJZj83Gx1snThFk3/Mv3569pUL38t7Lb4JmIMn1FXoNP/2Z0h27C5kEUe7xpK9ipXhhrOf
 1YU=
X-SBRS: 5.1
X-MesageID: 46798714
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:UPF/eatQEQe/qgrHq7XYEzf+7skDTtV00zEX/kB9WHVpmszxra
 6TdZMgpHnJYVcqKQkdcL+7WJVoLUmxyXcx2/h1AV7AZniAhILLFvAA0WKK+VSJcEeSygce79
 YFT0EXMqyIMbEQt6fHCWeDfOrIuOP3kpyVuQ==
X-IronPort-AV: E=Sophos;i="5.83,305,1616472000"; 
   d="scan'208";a="46798714"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH 2/2] libxl: Fix QEMU cmdline for scsi device
Date: Mon, 28 Jun 2021 11:01:57 +0100
Message-ID: <20210628100157.5010-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210628100157.5010-1-anthony.perard@citrix.com>
References: <20210628100157.5010-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The 'scsi-disk' device is deprecated and has been removed from QEMU;
we need to use 'scsi-hd' for hard drives instead.
See QEMU commit 879be3af49 ("hw/scsi: remove 'scsi-disk' device")

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 7670e403a90f..dbd3c7f278f9 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1972,7 +1972,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                         &drive_id),
                     flexarray_vappend(dm_args,
                         "-drive", drive,
-                        "-device", GCSPRINTF("scsi-disk,drive=%s,scsi-id=%d",
+                        "-device", GCSPRINTF("scsi-hd,drive=%s,scsi-id=%d",
                                              drive_id, disk),
                         NULL);
                     continue;
-- 
Anthony PERARD
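
[Editorial note: a minimal standalone sketch of the -device string the patched hunk emits, not libxl code. snprintf stands in for libxl's GCSPRINTF, and the helper name and sample drive id are hypothetical; only the "scsi-hd" device name reflects the patch.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: build the QEMU -device argument for a SCSI hard drive.
 * "scsi-disk" was removed from QEMU (commit 879be3af49); "scsi-hd"
 * is the replacement for hard drives. */
static void build_scsi_device_arg(char *buf, size_t len,
                                  const char *drive_id, int disk)
{
    snprintf(buf, len, "scsi-hd,drive=%s,scsi-id=%d", drive_id, disk);
}
```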



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 10:45:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 10:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147780.272741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxolv-0002Pe-Fj; Mon, 28 Jun 2021 10:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147780.272741; Mon, 28 Jun 2021 10:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxolv-0002PX-CV; Mon, 28 Jun 2021 10:45:47 +0000
Received: by outflank-mailman (input) for mailman id 147780;
 Mon, 28 Jun 2021 10:45:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxolt-0002PN-AX; Mon, 28 Jun 2021 10:45:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxolt-0007Lo-6r; Mon, 28 Jun 2021 10:45:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxols-0005Yk-TB; Mon, 28 Jun 2021 10:45:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxols-0001uv-Sj; Mon, 28 Jun 2021 10:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ill0/ZYrkPdIn10F432L/6jIQ8+m+iX4NNExo9muepE=; b=45lMDKwwKDiD3EUoPrlh4F/ppT
	dz92B/5ho0VDs9uOjexHCjfRC00yaoUVmXbJGjSoDW8XRq8crJd58hHFUQ719dCIp29WVAs6CmYvN
	V5vXaNEnrmNdX/8vx9Oqv59Lz8IeJeO7t7kIxiRM8dDousC1n0CdzBz1UGcztfsFS6tI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163162-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163162: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 10:45:44 +0000

flight 163162 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163162/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   24 days
Failing since        162368  2021-06-04 15:42:59 Z   23 days   59 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    3 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:10:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147785.272755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxp9u-0005bP-NN; Mon, 28 Jun 2021 11:10:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147785.272755; Mon, 28 Jun 2021 11:10:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxp9u-0005bI-KI; Mon, 28 Jun 2021 11:10:34 +0000
Received: by outflank-mailman (input) for mailman id 147785;
 Mon, 28 Jun 2021 11:10:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fuEk=LW=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lxp9t-0005bC-0O
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:10:33 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.167])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc6d4dc2-636a-41dd-b4ea-a0918186f289;
 Mon, 28 Jun 2021 11:10:31 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.5 AUTH)
 with ESMTPSA id j0443ex5SBATHnv
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 28 Jun 2021 13:10:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc6d4dc2-636a-41dd-b4ea-a0918186f289
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624878629;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0foUv8GlEVBTaZG8KN9P9hiU8JchAeoSOcQivY/DUqc=;
    b=LK5GXEzNJu2fdqzsXnT+sDnHIqQv88h3P/2WewPYCrBDmCo8PC10F5jGcugFZgkCEG
    /iv/do5xca7CeJwZb6IEJ5aj5so8XGkQDpBwDUChR9PXGkbaM9/rBVKANVG6StuXUKx7
    IIObEVE12GxLyL1fXLs5fbFNjeJjMqa3OpYHJzpgehLSxL0M48rH76yt2KHlhLgCE+0h
    yxmgCbuhfkWCr9UGQMi598QqoV1fLA8QXADV1NBmhzZPgtdVGIdZjJhUnYQ5JXC0QVrP
    9YAkdaygzDfk0qhcmRWjhopB5i8sz4v+oADBMbA9/JI6OAcy9DIcMuRP0bT4waHqMlGc
    4VtA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF1Uh6FPk3sesKYv+F4ULcnddTEqNLurekxi0Bc"
X-RZG-CLASS-ID: mo00
Date: Mon, 28 Jun 2021 13:10:22 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger
 Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>, Juergen Gross
 <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 02/12] libxenguest: deal with log-dirty op stats
 overflow
Message-ID: <20210628131022.3f2f2c4b.olaf@aepfle.de>
In-Reply-To: <4e3afc8e-1ed8-2e27-b583-476d35352efd@suse.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
	<46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
	<5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
	<4e3afc8e-1ed8-2e27-b583-476d35352efd@suse.com>
X-Mailer: Claws Mail 2021.05.27 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/=L9YIJzssQbWgKVfvv0DFqD";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/=L9YIJzssQbWgKVfvv0DFqD
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Mon, 28 Jun 2021 09:48:26 +0200
schrieb Jan Beulich <jbeulich@suse.com>:

> On 25.06.2021 18:36, Andrew Cooper wrote:
> > This is an external interface, and I'm not sure it will tolerate finding
> > more than p2m_size allegedly dirty.
> But you do realize that a few lines down from here there already was
>         policy_stats->dirty_count   = -1;
> ? Or are you trying to tell me that -1 (documented as indicating
> "unknown") is okay on subsequent iterations, but not on the first one?

precopy_policy() gets called twice during each iteration.
Last time I tried to use this API it was difficult to work with.
It is required to look at dirty_count and iteration to see the actual state.
Maybe it was just me who initially failed to fully understand the intent.

I think as it is right now, the first run with iteration being zero is the
only way to know the actual p2m_size, in case the consumer really wants to
know this detail.

Olaf

--Sig_/=L9YIJzssQbWgKVfvv0DFqD
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDZrh4ACgkQ86SN7mm1
DoDgsxAAo4EpHM2/NEx0qMy5VzNwt2XT+YhzGYa0uLItyw80q0i+L3fIFA4elAlU
xYF5oNWAbTGyg2cjdEAL/nUN7eFhnSF6I+VP2olrC5UpodYPAzM/XKzxeIyV4UvP
FlwRgMwS952yMdQ8VRlEyaXIZkSBspsv2UoDl5vWKFzY/TKLzt8bbCxjsdD2zSCB
zjJyZvdcdm77FsaG8x+ZwPLpLCmC+NMx9ZuIzajwD2OIpHb9wdwFBVwJc9tFZ7cq
HUeynqwcs2RZ3uh4SFV8N02iOtAtoEheSLDX+SzL1u6go6hiRLP8PJH6z6scyHCc
P+EOKWE86cYAbrIX7m5+M0SiunlPfF7zQ1Yjzqy+LtGIibRzZrpsiJfR6mWK5Rwm
Ks2c4jJLFpklwHV4gl1cTbm19bjqfi44lXXNg6+gEk5NHoH4D/B1BjrLjyN4dd6x
af/1s57+IgmFQwgrZmWibyHEwZmTitY/0e6aHefiPR9JsvtyUtUJoHri04aIuwFc
9KA4b2+xha8pKRdcUhw5WBY43MbxNdhAMHBJXdK4Uem+d0OPADQGYsVu/4XGehHB
oOZ9NYQUDGXrY5IgZ1wiM6GJollJW8Re/aCS/eD3BFOoAyLX7psp+40dG11PR5AS
34RAk4F0WdIQL1JaGpTk0+Bl5KYfACbzEfm+UTjAujE0oIPm95Y=
=BFdC
-----END PGP SIGNATURE-----

--Sig_/=L9YIJzssQbWgKVfvv0DFqD--


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:18:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147788.272765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpHL-0006Lc-HI; Mon, 28 Jun 2021 11:18:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147788.272765; Mon, 28 Jun 2021 11:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpHL-0006LV-E5; Mon, 28 Jun 2021 11:18:15 +0000
Received: by outflank-mailman (input) for mailman id 147788;
 Mon, 28 Jun 2021 11:18:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxpHJ-0006LL-M6; Mon, 28 Jun 2021 11:18:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxpHJ-0007vV-6X; Mon, 28 Jun 2021 11:18:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxpHI-00071Q-Rd; Mon, 28 Jun 2021 11:18:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxpHI-00081J-R9; Mon, 28 Jun 2021 11:18:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=09/wJMd4owN4I4EOQrKVW8uU5+Lvyd4/RQ2U2lRekFY=; b=GEmfg9rms6rfk149qZ1QQEsvb2
	mzGSyHhX2e53B3BfMzxI0sXdxUag6c01UxBLQVfdWURzRMtyQpZbtOWBgIFhvIsuW5MeMdUzNC3JN
	ETEQ6zkorBp8iiR4VsEjDIhux1waUOFu8kfXeaHCR/YDVN+kay0hBCjMHN2t1WXDB/3k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163160: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
X-Osstest-Versions-That:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 11:18:12 +0000

flight 163160 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163160/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163136
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163136
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163136
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163136
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163136
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163136
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163136
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163136
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163136
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163136
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163136
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83
baseline version:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83

Last test of basis   163160  2021-06-28 01:51:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:20:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147791.272780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpJI-0007eY-V3; Mon, 28 Jun 2021 11:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147791.272780; Mon, 28 Jun 2021 11:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpJI-0007eR-Ra; Mon, 28 Jun 2021 11:20:16 +0000
Received: by outflank-mailman (input) for mailman id 147791;
 Mon, 28 Jun 2021 11:20:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpJH-0007eL-6Q
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:20:15 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a5bb0ed-e08c-41ee-acc1-ac486eb10220;
 Mon, 28 Jun 2021 11:20:14 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2057.outbound.protection.outlook.com [104.47.13.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-7-3_uySbk-OuaSXxdwmbJyGQ-2;
 Mon, 28 Jun 2021 13:20:12 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB3119.eurprd04.prod.outlook.com (2603:10a6:802:10::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.22; Mon, 28 Jun
 2021 11:20:08 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:20:08 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P189CA0010.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:52::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 11:20:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a5bb0ed-e08c-41ee-acc1-ac486eb10220
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624879213;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ycMBJMYx+bo33iul5PhArutVOlVjHEdt6VKnSuDkl/Q=;
	b=W4u+YljTJBvIcCSgH6mOd4BlP5lyy5jjUGcOfJaslTBKOVtVDFuU7Z3AaBcShNIEzXdIUp
	6Vs3QJ0mqgQ225UMFKT2FyzHEyTabtrvHAoeyjc3BgxRywTeIztgUK8ZJinOaVEgcau84Q
	8JHum1HE/kC7ruYGM/jNQaON8O+UIu8=
X-MC-Unique: 3_uySbk-OuaSXxdwmbJyGQ-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 02/12] libxenguest: deal with log-dirty op stats overflow
To: Olaf Hering <olaf@aepfle.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
 <46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
 <5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
 <4e3afc8e-1ed8-2e27-b583-476d35352efd@suse.com>
 <20210628131022.3f2f2c4b.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e90f06d-9410-4778-bf93-61940bd95456@suse.com>
Date: Mon, 28 Jun 2021 13:20:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210628131022.3f2f2c4b.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P189CA0010.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:52::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 28a17943-a217-40e8-7b8f-08d93a26afa8
X-MS-TrafficTypeDiagnostic: VI1PR04MB3119:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB31197353B3B2CBEB740FC1D0B3039@VI1PR04MB3119.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 28a17943-a217-40e8-7b8f-08d93a26afa8
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 11:20:08.1747
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5Tv0t3JUuK4vRZRqZ6S/r+3gooJax3X6N9li7WpwwSikkVEYqwc7/W29gzOrs6Qwsqap+dlJ0DnkAMsXBiSWaQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB3119

On 28.06.2021 13:10, Olaf Hering wrote:
> Am Mon, 28 Jun 2021 09:48:26 +0200
> schrieb Jan Beulich <jbeulich@suse.com>:
> 
>> On 25.06.2021 18:36, Andrew Cooper wrote:
>>> This is an external interface, and I'm not sure it will tolerate finding
>>> more than p2m_size allegedly dirty.  
>> But you do realize that a few lines down from here there already was
>>         policy_stats->dirty_count   = -1;
>> ? Or are you trying to tell me that -1 (documented as indicating
>> "unknown") is okay on subsequent iterations, but not on the first one?
> 
> precopy_policy() gets called twice during each iteration.
> Last time I tried to use this API it was difficult to work with.
> It is required to look at dirty_count and iteration to see the actual state.
> Maybe it was just me who initially failed to fully understand the intent.
> 
> I think as it is right now, the first run with iteration being zero is
> the only way to know the actual p2m_size, in case the consumer really
> wants to know this detail.

But if a field named dirty_count was intended to convey the P2M size
(and then only on the first iteration), this very certainly would
have needed writing down somewhere.

Jan
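The semantics being debated above can be illustrated with a short C sketch. This is an assumption-laden illustration, not libxenguest's actual definitions: the struct layout and the stopping policy are made up for the example, and only the field names (iteration, dirty_count) and the "-1 means unknown" convention are taken from the thread.

```c
/*
 * Hedged sketch only: the field names follow the precopy_policy
 * discussion above, but this struct and the policy logic are
 * illustrative assumptions, not libxenguest's definitions.
 * dirty_count == -1 is documented as meaning "unknown".
 */
#include <stdio.h>

struct precopy_stats {
    unsigned int iteration;   /* pre-copy round, starting at 0 */
    long long dirty_count;    /* pages dirtied; -1 => unknown */
};

/* Return nonzero to run another pre-copy round, 0 to stop. */
static int example_precopy_policy(struct precopy_stats stats)
{
    /*
     * As noted in the thread: on the very first call (iteration 0)
     * dirty_count currently reflects the P2M size, which is the only
     * chance for a consumer to learn that value.
     */
    if (stats.iteration == 0 && stats.dirty_count >= 0)
        printf("apparent p2m_size: %lld\n", stats.dirty_count);

    /* An unknown count cannot drive a decision; keep iterating. */
    if (stats.dirty_count < 0)
        return 1;

    /*
     * Arbitrary illustrative policy: stop once few pages remain
     * dirty, or after five rounds.
     */
    return stats.dirty_count > 50 && stats.iteration < 5;
}
```

The sketch shows why the thread calls the interface hard to use: a consumer must cross-reference iteration with dirty_count to tell "P2M size on the first call" apart from "dirty pages" and from "unknown".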



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:31:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147797.272790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpTr-0000lr-4z; Mon, 28 Jun 2021 11:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147797.272790; Mon, 28 Jun 2021 11:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpTr-0000lk-21; Mon, 28 Jun 2021 11:31:11 +0000
Received: by outflank-mailman (input) for mailman id 147797;
 Mon, 28 Jun 2021 11:31:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fuEk=LW=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lxpTp-0000le-T0
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:31:10 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.50])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b422aef-a92a-42be-801c-3b9c06dcea82;
 Mon, 28 Jun 2021 11:31:08 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.5 AUTH)
 with ESMTPSA id j0443ex5SBV6Hvh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 28 Jun 2021 13:31:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b422aef-a92a-42be-801c-3b9c06dcea82
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624879867;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=t6fonBeM1ZIQREKeKpKSaAJXsi23EtV7j1QuiSSldmQ=;
    b=Q+c1orLZitFnZAeN7Yzt+nbEcrst5qvNtAb1QIdM5YcGhznp3u+QTSAQkI7TETnSdC
    19ZetgOYUeIp8dO7+RX7a0oSt3lMGLWlG2xpgaacoUaCY4NI8QgK8e4ysSIdQYSlfq5n
    qinDFAFb2fuz9zIZSRfO4vlXIaZkW9uOc1iIN31iTvcvqDMLyScHIgwuf9re8ldG/uoq
    RBetI/Wdq8hAIcblDCEk6Yry1R58g7PHEjS8tTc1I3zpRih8a9vY+ND+A9/FiVnt+6kv
    ZauHis8+T97LhAIg/GpilTFxK7MN17lR2upJaNcUhb/4oIT+tibdMQ0PLDjnhxpWSHw7
    OEuA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF1Uh6FPk3sesKYv+F4ULcnddTEqNLurekxi0Bc"
X-RZG-CLASS-ID: mo00
Date: Mon, 28 Jun 2021 13:30:57 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger
 Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>, Juergen Gross
 <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 02/12] libxenguest: deal with log-dirty op stats
 overflow
Message-ID: <20210628133057.14fbbc30.olaf@aepfle.de>
In-Reply-To: <3e90f06d-9410-4778-bf93-61940bd95456@suse.com>
References: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>
	<46831816-a5f2-2eb4-bb91-aba345448feb@suse.com>
	<5e725a42-953a-c96f-3e72-f0c741b0ce16@citrix.com>
	<4e3afc8e-1ed8-2e27-b583-476d35352efd@suse.com>
	<20210628131022.3f2f2c4b.olaf@aepfle.de>
	<3e90f06d-9410-4778-bf93-61940bd95456@suse.com>
X-Mailer: Claws Mail 2021.05.27 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/eBBqE7SmUe8OQCF/D0z=ws+";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/eBBqE7SmUe8OQCF/D0z=ws+
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Mon, 28 Jun 2021 13:20:06 +0200
schrieb Jan Beulich <jbeulich@suse.com>:

> But if a field named dirty_count was intended to convey the P2M size
> (and then only on the first iteration), this very certainly would
> have needed writing down somewhere.

Sure. Right now the only documentation for the precopy_policy callback is send_memory_live itself.


Olaf

--Sig_/eBBqE7SmUe8OQCF/D0z=ws+
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDZsvEACgkQ86SN7mm1
DoBYtg//UY2SGAi3w0deuLxZDVoHWcWdBcDa6lrTa7f08lqCD2rP6Vu7qx0f7tct
WYewV4bT5QLSnG260DaR+KBdtlrNTOP7XFCkXrwtsQfYyda1vZji5FnetqMzj/Q7
xkAxJv+c5Ux3VJpHciX+S/mHPnKHYwQn5X9TLPgqrQQo9SzTumrZcvvxd31sxUO6
PhdOteOJy7jg93seguoy66AI59VR/cM3eEehRaEL2AEiwKoRvlzbSUKfDwGBs6Pt
j7UJqOiBnFKug8J1nP0vnVJwg6yzcV4W076EibzA017mVI5g8orVWqk4ngEed1ig
I8wGCvXKiUCTPOUUfvFRlb/aLE//0Z+Rpy+/BtFIt1mx3pEGupZ9pLnBGgVOdsUa
XY40W5uI+O28dcK+4Jpzta3SzYygJEnLyKV0eOltK8teySi7C+apWVyl0nHS7HXc
T+Ayt8xpdbh1CabylMMDuFpcdsdmTndAxHTbITjb4ujKfd5Ucz69SO4eLb+t2jaG
fJt53d9Ydd7P70jUYX6UxadBi5/FFQ1EGn+ETDCnGqdHbFaqPNz5nixqzDwGlnYn
pTiU+jY4bIxDrgK6giJiOuhaD8Q5uqN3TJ57T5TZC7MKb7S6DuFx2Waogdfj3Fp6
0pyNLjJ5zvHLNJ4SDb1MJFAV0c+SjlLc8ULiEBlM82h0+v5N95c=
=9fff
-----END PGP SIGNATURE-----

--Sig_/eBBqE7SmUe8OQCF/D0z=ws+--


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:47:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:47:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147800.272801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpjM-0002MQ-JB; Mon, 28 Jun 2021 11:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147800.272801; Mon, 28 Jun 2021 11:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpjM-0002MJ-G4; Mon, 28 Jun 2021 11:47:12 +0000
Received: by outflank-mailman (input) for mailman id 147800;
 Mon, 28 Jun 2021 11:47:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpjL-0002MD-0P
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:47:11 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bfa1b81-0a13-4c20-8bb5-1e9b6902297f;
 Mon, 28 Jun 2021 11:47:09 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2053.outbound.protection.outlook.com [104.47.12.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5-mzakSNy3OhOXQuOOJs3VSQ-1;
 Mon, 28 Jun 2021 13:47:07 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6382.eurprd04.prod.outlook.com (2603:10a6:803:122::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 11:47:05 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:47:05 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0067.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4b::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Mon, 28 Jun 2021 11:47:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bfa1b81-0a13-4c20-8bb5-1e9b6902297f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624880828;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=pchRJ7GBXfvuj3yvXkmACQFz+EgDnbCG1su6IHj8H1Y=;
	b=MhbcZV/GhW+bDCE5lT3weHhj5LNDHFId8oYNCdee8YuV6vHtSuS2Qpvjp/D4Pkhg5RNQ6J
	eAWZPSgRUM68zJFbaM9z3yFTdWz5c49B0L5943UiqI15Ig+XZ/fyY1bY1B3rvKz0mXrImW
	FT3GDzy+pSw4ITGG6H/5ewgxKgHIteQ=
X-MC-Unique: mzakSNy3OhOXQuOOJs3VSQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libxl/x86: check return value of SHADOW_OP_SET_ALLOCATION
 domctl
Message-ID: <5d2bb2cf-8c0c-7300-c895-75bef0e50817@suse.com>
Date: Mon, 28 Jun 2021 13:47:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

The hypervisor may not have enough memory to satisfy the request.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Especially if the request was mostly fulfilled, guests may have done
fine despite the failure, so there is a risk of perceived regressions
here. But not checking the error at all was certainly wrong.

--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -531,8 +531,18 @@ int libxl__arch_domain_create(libxl__gc
     if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
         unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
                                            1024);
-        xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                          NULL, 0, &shadow, 0, NULL);
+        int rc = xc_shadow_control(ctx->xch, domid,
+                                   XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                                   NULL, 0, &shadow, 0, NULL);
+
+        if (rc) {
+            LOGED(ERROR, domid,
+                  "Failed to set %s allocation: %d (errno:%d)\n",
+                  libxl_defbool_val(d_config->c_info.hap) ? "HAP" : "shadow",
+                  rc, errno);
+            ret = ERROR_FAIL;
+            goto out;
+        }
     }
 
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:49:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:49:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147803.272812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpl6-0002ym-VA; Mon, 28 Jun 2021 11:49:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147803.272812; Mon, 28 Jun 2021 11:49:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpl6-0002yf-SF; Mon, 28 Jun 2021 11:49:00 +0000
Received: by outflank-mailman (input) for mailman id 147803;
 Mon, 28 Jun 2021 11:48:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpl5-0002yZ-Cs
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:48:59 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36be1bc0-4158-4ee4-8a1f-99d21953f949;
 Mon, 28 Jun 2021 11:48:58 +0000 (UTC)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2057.outbound.protection.outlook.com [104.47.6.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-14-jf2029s-PImhlqfW_Oiugg-1; Mon, 28 Jun 2021 13:48:56 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6382.eurprd04.prod.outlook.com (2603:10a6:803:122::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 11:48:54 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:48:54 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR3P281CA0038.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4a::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.8 via Frontend Transport; Mon, 28 Jun 2021 11:48:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36be1bc0-4158-4ee4-8a1f-99d21953f949
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624880937;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ovgdEUZK4449SSVXu+RZ6OixL9MZGdsajLD/7j3duts=;
	b=OcbDUeKfNEg6zWHr3R4gXKOILVEjc/XJ7Lm28AdyGC3CBybqWNrA87BTZ2RaU+/KI764uh
	J4vnY8/6GzrtebaeDpk5gcfSVptzzNS0Wh1eVWtBeWC4oxXVp0dKQI0o4csuXVxGIPW1A5
	79YPR7HxHnHY9BE4s/zxjuCnc+Rno7w=
X-MC-Unique: jf2029s-PImhlqfW_Oiugg-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/AMD: make HT range dynamic for Fam17 and up
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Message-ID: <36df1141-5c3b-6f8a-3a83-1f954b1e27a6@suse.com>
Date: Mon, 28 Jun 2021 13:48:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR3P281CA0038.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::14) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT
address range"), documentation correctly stated that the range was
completely fixed. For Fam17 and newer, however, the range lives at the
top of the physical address space.

To correctly determine the top of the physical address space, we need
to account for these processors' physical address reduction (due to
SME), hence the calculation of paddr_bits also gets adjusted.

While for paddr_bits < 40 the HT range is completely hidden, there's no
need to suppress the range insertion in that case: It'll just have no
real meaning.

Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Move adjustment last, to leave hap_paddr_bits unaffected. Add
    comment.

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -349,16 +349,23 @@ void __init early_cpu_init(void)
 
 	eax = cpuid_eax(0x80000000);
 	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
+		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
 		eax = cpuid_eax(0x80000008);
+
 		paddr_bits = eax & 0xff;
 		if (paddr_bits > PADDR_BITS)
 			paddr_bits = PADDR_BITS;
+
 		vaddr_bits = (eax >> 8) & 0xff;
 		if (vaddr_bits > VADDR_BITS)
 			vaddr_bits = VADDR_BITS;
+
 		hap_paddr_bits = ((eax >> 16) & 0xff) ?: paddr_bits;
 		if (hap_paddr_bits > PADDR_BITS)
 			hap_paddr_bits = PADDR_BITS;
+
+		/* Account for SME's physical address space reduction. */
+		paddr_bits -= (ebx >> 6) & 0x3f;
 	}
 
 	if (!(c->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)))
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -524,8 +524,11 @@ int __init dom0_setup_permissions(struct
                                          MSI_ADDR_DEST_ID_MASK));
     /* HyperTransport range. */
     if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-        rc |= iomem_deny_access(d, paddr_to_pfn(0xfdULL << 32),
-                                paddr_to_pfn((1ULL << 40) - 1));
+    {
+        mfn = paddr_to_pfn(1UL <<
+                           (boot_cpu_data.x86 < 0x17 ? 40 : paddr_bits));
+        rc |= iomem_deny_access(d, mfn - paddr_to_pfn(3UL << 32), mfn - 1);
+    }
 
     /* Remove access to E820_UNUSABLE I/O regions above 1MB. */
     for ( i = 0; i < e820.nr_map; i++ )



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:52:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:52:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147808.272824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpoK-0004ND-GT; Mon, 28 Jun 2021 11:52:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147808.272824; Mon, 28 Jun 2021 11:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpoK-0004N6-CB; Mon, 28 Jun 2021 11:52:20 +0000
Received: by outflank-mailman (input) for mailman id 147808;
 Mon, 28 Jun 2021 11:52:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpoI-0004N0-TV
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:52:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c22e417a-69b3-4ca2-a235-bda77b76f7e3;
 Mon, 28 Jun 2021 11:52:17 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2171.outbound.protection.outlook.com [104.47.17.171])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-6-AUxQb9RCMoi7JTkVoUwOuA-1; Mon, 28 Jun 2021 13:52:15 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7150.eurprd04.prod.outlook.com (2603:10a6:800:12a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 11:52:12 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:52:12 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P193CA0005.EURP193.PROD.OUTLOOK.COM (2603:10a6:20b:21e::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Mon, 28 Jun 2021 11:52:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c22e417a-69b3-4ca2-a235-bda77b76f7e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624881136;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=JXQVnE3/S6cW3qT/GvlZw27Abkt7kt8h+dybYejTGi8=;
	b=ClH6tF60JNR0E9CPgeWTQEC+9cKSXEoQXIiNJBVgRbVSsze8GCSgab8n9A1qzRyqJU6b7E
	jSuCBGAc2zsCxO33/m5HtDb7ui3MF7loOGgSy5b48oyksUh8KkmZ2UHl6pS7GenIHZ+xCg
	rTtkOIvxtygC4lN2+E7MZaxO6m5vWNw=
X-MC-Unique: AUxQb9RCMoi7JTkVoUwOuA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] fully replace mfn_to_gmfn()
Message-ID: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
Date: Mon, 28 Jun 2021 13:52:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P193CA0005.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:20b:21e::10) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

Convert the two remaining uses, as well as Arm's stub, to the properly
named and type-safe mfn_to_gfn(), dropping the x86 definition of
mfn_to_gmfn() (x86 already has a suitable mfn_to_gfn()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -111,7 +111,8 @@ void getdomaininfo(struct domain *d, str
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages         = atomic_read(&d->shr_pages);
     info->paged_pages       = atomic_read(&d->paged_pages);
-    info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
+    info->shared_info_frame =
+        gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
 
     info->cpupool = cpupool_get_id(d);
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -714,13 +714,13 @@ static long memory_exchange(XEN_GUEST_HA
          */
         while ( (page = page_list_remove_head(&in_chunk_list)) )
         {
-            unsigned long gfn;
+            gfn_t gfn;
 
             mfn = page_to_mfn(page);
-            gfn = mfn_to_gmfn(d, mfn_x(mfn));
+            gfn = mfn_to_gfn(d, mfn);
             /* Pages were unshared above */
-            BUG_ON(SHARED_M2P(gfn));
-            if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) )
+            BUG_ON(SHARED_M2P(gfn_x(gfn)));
+            if ( guest_physmap_remove_page(d, gfn, mfn, 0) )
                 domain_crash(d);
             free_domheap_page(page);
         }
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -328,8 +328,7 @@ struct page_info *get_page_from_gva(stru
 
 /* Xen always owns P2M on ARM */
 #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
-#define mfn_to_gmfn(_d, mfn)  (mfn)
-
+#define mfn_to_gfn(d, mfn) _gfn(mfn_x(mfn))
 
 /* Arch-specific portion of memory_op hypercall. */
 long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -527,11 +527,6 @@ extern struct rangeset *mmio_ro_ranges;
 
 #define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
 
-#define mfn_to_gmfn(_d, mfn)                            \
-    ( (paging_mode_translate(_d))                       \
-      ? get_gpfn_from_mfn(mfn)                          \
-      : (mfn) )
-
 #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
 #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
 



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:52:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147809.272835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpow-00050p-US; Mon, 28 Jun 2021 11:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147809.272835; Mon, 28 Jun 2021 11:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpow-00050i-Px; Mon, 28 Jun 2021 11:52:58 +0000
Received: by outflank-mailman (input) for mailman id 147809;
 Mon, 28 Jun 2021 11:52:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpow-00050Y-1O
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:52:58 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df311f25-970e-4456-884f-52316732ef94;
 Mon, 28 Jun 2021 11:52:57 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2169.outbound.protection.outlook.com [104.47.17.169])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-5-EcTwmc2UPwGOd-N2Wn0rCw-1; Mon, 28 Jun 2021 13:52:54 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7150.eurprd04.prod.outlook.com (2603:10a6:800:12a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 11:52:53 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:52:53 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P193CA0006.EURP193.PROD.OUTLOOK.COM (2603:10a6:20b:21e::11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Mon, 28 Jun 2021 11:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df311f25-970e-4456-884f-52316732ef94
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: drop a bogus SHARED_M2P() check from Dom0 building code
Message-ID: <47deb99d-3ffc-61e4-26e6-7e7ab186a79a@suse.com>
Date: Mon, 28 Jun 2021 13:52:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P193CA0006.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:20b:21e::11) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

If anything, a check covering a wider range of invalid M2P entries ought
to be used (e.g. VALID_M2P()). But since everything is fully under Xen's
control at this stage, simply remove the BUG_ON().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -815,7 +815,6 @@ int __init dom0_construct_pv(struct doma
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
-        BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
         if ( get_gpfn_from_mfn(mfn) >= count )
         {
             BUG_ON(compat);



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:56:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147815.272846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpsj-0005hK-EO; Mon, 28 Jun 2021 11:56:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147815.272846; Mon, 28 Jun 2021 11:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpsj-0005hD-AZ; Mon, 28 Jun 2021 11:56:53 +0000
Received: by outflank-mailman (input) for mailman id 147815;
 Mon, 28 Jun 2021 11:56:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpsh-0005h7-Po
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:56:51 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 500a954e-20a9-4624-97ab-bf0cb6f41bca;
 Mon, 28 Jun 2021 11:56:51 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2054.outbound.protection.outlook.com [104.47.8.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-220yQRYUMeiEJVDoSJgngw-1; Mon, 28 Jun 2021 13:56:48 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6864.eurprd04.prod.outlook.com (2603:10a6:803:138::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.19; Mon, 28 Jun
 2021 11:56:47 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:56:47 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P195CA0026.EURP195.PROD.OUTLOOK.COM (2603:10a6:20b:21f::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 11:56:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 500a954e-20a9-4624-97ab-bf0cb6f41bca
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86/vMCE: address handling related adjustments
Message-ID: <b16904ec-0302-4094-7d89-f484cbf0f8a5@suse.com>
Date: Mon, 28 Jun 2021 13:56:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P195CA0026.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:20b:21f::31) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

While going through uses of get_gpfn_from_mfn(), I've noticed
some anomalies here (but of course there are more left). Patch
2 is specifically RFC, for altering the public interface.

1: adjustments to unmmap_broken_page()
2: change address space for incident reporting

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:57:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:57:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147818.272857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxptb-0006IG-Ng; Mon, 28 Jun 2021 11:57:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147818.272857; Mon, 28 Jun 2021 11:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxptb-0006I9-KU; Mon, 28 Jun 2021 11:57:47 +0000
Received: by outflank-mailman (input) for mailman id 147818;
 Mon, 28 Jun 2021 11:57:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxptZ-0006Hx-N5
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:57:45 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8ac53c3-03f0-46f4-8880-020c66095cbe;
 Mon, 28 Jun 2021 11:57:44 +0000 (UTC)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2050.outbound.protection.outlook.com [104.47.9.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-3-RCcSi6Y-Nwe7nxlf8Npi1A-1;
 Mon, 28 Jun 2021 13:57:42 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB4192.eurprd04.prod.outlook.com (2603:10a6:803:4c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Mon, 28 Jun
 2021 11:57:41 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:57:41 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM9P195CA0025.EURP195.PROD.OUTLOOK.COM (2603:10a6:20b:21f::30) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.20 via Frontend Transport; Mon, 28 Jun 2021 11:57:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8ac53c3-03f0-46f4-8880-020c66095cbe
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 1/2] x86/vMCE: adjustments to unmmap_broken_page()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b16904ec-0302-4094-7d89-f484cbf0f8a5@suse.com>
Message-ID: <48a34cd8-e9aa-d251-7838-4edd071050dc@suse.com>
Date: Mon, 28 Jun 2021 13:57:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b16904ec-0302-4094-7d89-f484cbf0f8a5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM9P195CA0025.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:20b:21f::30) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

There's no need for more than an assertion as to the passed-in MFN's
validity, as the caller's prior call to offline_page() would not have
succeeded on an invalid one.

There's no use in checking both is_hvm_domain() and paging_mode_hap(),
as the latter implies the former.

Extend the P2M manipulation that's there to PVH Dom0 as well, merely
having it use the prior PV-Dom0 behavioral assumption when the page
type cannot be changed (yet).

There's no point in P2M_UNMAP_TYPES including p2m_mmio_direct. The
respective comment is bogus afaict, there are no RAM pages getting
mapped with that type for the purpose of becoming UC. The sole RAM page
getting mapped with this attribute is the (now global) APIC access MFN.
(This page, if it went bad, shouldn't have any effect on the system
anyway, as it never really gets accessed; it's only its address which
matters.)

Make the last function parameter type-safe.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/mcheck/mcaction.c
+++ b/xen/arch/x86/cpu/mcheck/mcaction.c
@@ -91,7 +91,7 @@ mc_memerr_dhandler(struct mca_binfo *bin
                 ASSERT(d);
                 gfn = get_gpfn_from_mfn((bank->mc_addr) >> PAGE_SHIFT);
 
-                if ( unmmap_broken_page(d, mfn, gfn) )
+                if ( unmmap_broken_page(d, mfn, _gfn(gfn)) )
                 {
                     printk("Unmap broken memory %"PRI_mfn" for DOM%d failed\n",
                            mfn_x(mfn), d->domain_id);
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -502,11 +502,9 @@ int fill_vmsr_data(struct mcinfo_bank *m
     return ret;
 }
 
-/* It's said some ram is setup as mmio_direct for UC cache attribute */
-#define P2M_UNMAP_TYPES (p2m_to_mask(p2m_ram_rw) \
-                                | p2m_to_mask(p2m_ram_logdirty) \
-                                | p2m_to_mask(p2m_ram_ro)       \
-                                | p2m_to_mask(p2m_mmio_direct))
+#define P2M_UNMAP_TYPES (p2m_to_mask(p2m_ram_rw) | \
+                         p2m_to_mask(p2m_ram_logdirty) | \
+                         p2m_to_mask(p2m_ram_ro))
 
 /*
  * Currently all CPUs are redenzevous at the MCE softirq handler, no
@@ -515,30 +513,25 @@ int fill_vmsr_data(struct mcinfo_bank *m
  * XXX following situation missed:
  * PoD, Foreign mapped, Granted, Shared
  */
-int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn)
+int unmmap_broken_page(struct domain *d, mfn_t mfn, gfn_t gfn)
 {
-    mfn_t r_mfn;
     p2m_type_t pt;
     int rc;
 
-    /* Always trust dom0's MCE handler will prevent future access */
-    if ( is_hardware_domain(d) )
-        return 0;
-
-    if ( !mfn_valid(mfn) )
-        return -EINVAL;
-
-    if ( !is_hvm_domain(d) || !paging_mode_hap(d) )
-        return -EOPNOTSUPP;
-
-    rc = -1;
-    r_mfn = get_gfn_query(d, gfn, &pt);
-    if ( p2m_to_mask(pt) & P2M_UNMAP_TYPES)
-    {
-        ASSERT(mfn_eq(r_mfn, mfn));
-        rc = p2m_change_type_one(d, gfn, pt, p2m_ram_broken);
-    }
-    put_gfn(d, gfn);
+    if ( !paging_mode_hap(d) )
+        /* Always trust Dom0's MCE handler will prevent further access. */
+        return is_hardware_domain(d) ? 0 : -EOPNOTSUPP;
+
+    ASSERT(mfn_valid(mfn));
+
+    if ( !mfn_eq(get_gfn_query(d, gfn_x(gfn), &pt), mfn) )
+        rc = -EAGAIN;
+    else if ( p2m_to_mask(pt) & P2M_UNMAP_TYPES )
+        rc = p2m_change_type_one(d, gfn_x(gfn), pt, p2m_ram_broken);
+    else
+        /* Always trust Dom0's MCE handler will prevent further access. */
+        rc = is_hardware_domain(d) ? 0 : -EOPNOTSUPP;
+    put_gfn(d, gfn_x(gfn));
 
     return rc;
 }
--- a/xen/arch/x86/cpu/mcheck/vmce.h
+++ b/xen/arch/x86/cpu/mcheck/vmce.h
@@ -9,7 +9,7 @@ int vmce_init(struct cpuinfo_x86 *c);
     (hardware_domain && \
      evtchn_virq_enabled(domain_vcpu(hardware_domain, 0), VIRQ_MCA))
 
-int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
+int unmmap_broken_page(struct domain *d, mfn_t mfn, gfn_t gfn);
 
 int vmce_intel_rdmsr(const struct vcpu *, uint32_t msr, uint64_t *val);
 int vmce_intel_wrmsr(struct vcpu *, uint32_t msr, uint64_t val);



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 11:58:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 11:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147822.272867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpu9-0006vA-3w; Mon, 28 Jun 2021 11:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147822.272867; Mon, 28 Jun 2021 11:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxpu9-0006v1-0y; Mon, 28 Jun 2021 11:58:21 +0000
Received: by outflank-mailman (input) for mailman id 147822;
 Mon, 28 Jun 2021 11:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxpu8-0006ut-47
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 11:58:20 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62e3e985-97cd-4a7f-9f9b-2416465eae8e;
 Mon, 28 Jun 2021 11:58:19 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-35-dY3iSNqnMbWEyLwuqv1VwQ-1; Mon, 28 Jun 2021 13:58:17 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.23; Mon, 28 Jun
 2021 11:58:16 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 11:58:15 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0206.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::26) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 11:58:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62e3e985-97cd-4a7f-9f9b-2416465eae8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624881498;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BBtvtRsR+vScZJWOTg32R7Q02yQyzv+05uwRDq5vNt8=;
	b=Y4LEWryGdfsCYBp4VsKHOR2wlWdaXcr8GS6W2vTM8pgZujQoZS14+VeUmjpHqjmO47+8Jf
	OKtEXkx70shE2wUW5jZr7a+aKIFDhAfMwaYXL0gBD7SsyHoh7eFFwGh0dy27VH3jYtL590
	09PEWznDy8nWVWgyTX9wOKF50hgYGOw=
X-MC-Unique: dY3iSNqnMbWEyLwuqv1VwQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VqsDjWq9i33j5ZgmB3n2A2noQHw78sjoCJjVuq+q3cbwWG0d/nWRNcXdyo9CBRTT+vx7qeDEsPCaGzp7Hej89viD82DENBZSUCJY6gAF28iRbL8TYCsjRkHKc9NM6YMnPItNplSIyuIMhFPTck0Ce91Wdd2nK2lOhSXr4CaQXXBnn6cFj7BK4Iqc2lhpcqzTER8dHTYBp9NsRjXgOQUJBOKufHZGfpg5i2k1qalvet3ZmQW3kW1I1Q5Qc5Vmper0oA9YGN1iRPlaqlNRMRbJCCob5slBDawWC5if7qs1HX/vOmfUJKorKpp4mH8nMvOMqpb5ms0X9ZhHqSWDKnDr1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BBtvtRsR+vScZJWOTg32R7Q02yQyzv+05uwRDq5vNt8=;
 b=OXnXKPEys7H1UWnlAkKOZpnphnJwJyOtIMZ+IGMHDWOplmeqxKIF4xLbdA4VJvzs4Hi5Q9xg11e/7HpIjnXqoHqhKZEFzDEjM3BUOXS4T6MMEEF2+/Uskipfc4U2u69iACDSS0bscjnckszg4daO4B/a5EqWKSfKu8eGhOBLW2dXGQR5Qe62EOl6qAKnRc+EH3N7D0EpeMIC6yZYElgtQ/j9gAD8GqP1KeMed5ZvLiWawA3QswXwR/43G1b/yUJUoqkVbMwKAxQEX7t1ugYxo6Nitjl/x3PAh31GZhlFw/JHJZCLq1sBjPSYjeXRSujlBTSdEpnq7ZuiNZJS0ja0dQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 2/2] x86/vMCE: change address space for incident reporting
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b16904ec-0302-4094-7d89-f484cbf0f8a5@suse.com>
Message-ID: <a119ae20-cd82-a4a2-e46a-38ad3cb918cf@suse.com>
Date: Mon, 28 Jun 2021 13:58:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b16904ec-0302-4094-7d89-f484cbf0f8a5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0206.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::26) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 41788579-d552-4f39-df27-08d93a2c0347
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3390:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 41788579-d552-4f39-df27-08d93a2c0347
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 11:58:15.8659
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Myh23j6xBDpp/cqXauBIK+BUZ0s7mGV4HYoUt99xeYOwrtbMBLRjax16RtHr8W2F8vfTBp0XQAxqZAbPd5ywFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3390

PFNs are purely a software construct. In particular their association
with MFNs can change at any time. Switch to reporting back GFNs through
the hypercall interface (but stick to PFNs / paddr for the MSR one).
(Note that unmmap_broken_page() validly expects a GFN anyway.)

While doing the adjustments, replace an open-coded instance of
PAGE_OFFSET().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC because of the change to the hypercall interface, for which the
address space is not documented anywhere anyway. Aiui the main purpose
is to get things logged, and in a (system-wide, even if maintained by
Dom0) log, system-wide meaningful addresses are surely of more use.

--- a/xen/arch/x86/cpu/mcheck/mcaction.c
+++ b/xen/arch/x86/cpu/mcheck/mcaction.c
@@ -1,5 +1,6 @@
 #include <xen/types.h>
 #include <xen/sched.h>
+#include <asm/p2m.h>
 #include "mcaction.h"
 #include "vmce.h"
 #include "mce.h"
@@ -43,7 +44,6 @@ mc_memerr_dhandler(struct mca_binfo *bin
     struct mcinfo_global *global = binfo->mig;
     struct domain *d;
     mfn_t mfn;
-    unsigned long gfn;
     uint32_t status;
     int vmce_vcpuid;
     unsigned int mc_vcpuid;
@@ -87,11 +87,13 @@ mc_memerr_dhandler(struct mca_binfo *bin
             BUG_ON( bank->mc_domid == DOMID_COW );
             if ( bank->mc_domid != DOMID_XEN )
             {
+                gfn_t gfn;
+
                 d = rcu_lock_domain_by_id(bank->mc_domid);
                 ASSERT(d);
-                gfn = get_gpfn_from_mfn((bank->mc_addr) >> PAGE_SHIFT);
 
-                if ( unmmap_broken_page(d, mfn, _gfn(gfn)) )
+                gfn = mfn_to_gfn(d, mfn);
+                if ( unmmap_broken_page(d, mfn, gfn) )
                 {
                     printk("Unmap broken memory %"PRI_mfn" for DOM%d failed\n",
                            mfn_x(mfn), d->domain_id);
@@ -115,8 +117,7 @@ mc_memerr_dhandler(struct mca_binfo *bin
                 else
                     vmce_vcpuid = mc_vcpuid;
 
-                bank->mc_addr = gfn << PAGE_SHIFT |
-                                (bank->mc_addr & (PAGE_SIZE - 1));
+                bank->mc_addr = gfn_to_gaddr(gfn) | PAGE_OFFSET(bank->mc_addr);
                 if ( fill_vmsr_data(bank, d, global->mc_gstatus, vmce_vcpuid) )
                 {
                     mce_printk(MCE_QUIET, "Fill vMCE# data for DOM%d "
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -465,6 +465,7 @@ int fill_vmsr_data(struct mcinfo_bank *m
 {
     struct vcpu *v = d->vcpu[0];
     bool broadcast = (vmce_vcpuid == VMCE_INJECT_BROADCAST);
+    paddr_t addr = mc_bank->mc_addr;
     int ret, err;
 
     if ( mc_bank->mc_domid == DOMID_INVALID )
@@ -479,6 +480,14 @@ int fill_vmsr_data(struct mcinfo_bank *m
     }
 
     /*
+     * Provide a PADDR through the MSR interface, for historical reasons. What
+     * we are being passed is a GADDR (i.e. MADDR for PV and PADDR for HVM).
+     */
+    if ( !paging_mode_translate(d) )
+        addr = pfn_to_paddr(get_gpfn_from_mfn(mfn_x(maddr_to_mfn(addr)))) |
+               PAGE_OFFSET(addr);
+
+    /*
      * vMCE with the actual error information is injected to vCPU0,
      * and, if broadcast is required, we choose to inject less severe
      * vMCEs to other vCPUs. Thus guest can always get the severest
@@ -487,7 +496,7 @@ int fill_vmsr_data(struct mcinfo_bank *m
      * vCPUs will not prevent guest from recovering on those vCPUs.
      */
     ret = vcpu_fill_mc_msrs(v, gstatus, mc_bank->mc_status,
-                            mc_bank->mc_addr, mc_bank->mc_misc);
+                            addr, mc_bank->mc_misc);
     if ( broadcast )
         for_each_vcpu ( d, v )
         {



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:06:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:06:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147829.272879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq1i-0000Fg-9k; Mon, 28 Jun 2021 12:06:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147829.272879; Mon, 28 Jun 2021 12:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq1i-0000FZ-6N; Mon, 28 Jun 2021 12:06:10 +0000
Received: by outflank-mailman (input) for mailman id 147829;
 Mon, 28 Jun 2021 12:06:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxq1g-0000FT-LM
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:06:08 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f8c1bc5-4631-4a9b-ac7c-3bc439cbd65d;
 Mon, 28 Jun 2021 12:06:06 +0000 (UTC)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2058.outbound.protection.outlook.com [104.47.12.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-21-GKcPtTwuMFmAq-G3FHurzw-1; Mon, 28 Jun 2021 14:06:04 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 12:06:03 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:06:03 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0029.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:54::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 12:06:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f8c1bc5-4631-4a9b-ac7c-3bc439cbd65d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624881965;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IjQoxJqvDTIywswmk+AZI3gMAaaJFna4jZzpW+tVyJI=;
	b=I2wJkOjnfTRFQw2SgonXOyDR3NUn68hAHNyfHYVAttBg3RlXaJLIA4WP8PpA524XSfoVwL
	iQtwm82Rj63JOo2kcCavwlzfTpF7cZqHjWHzGGZcToZA98mJebqp6fz4LCyskjvukd0yJW
	LUhLd3SvFqg7hTogSLNyz9TSU/thRSY=
X-MC-Unique: GKcPtTwuMFmAq-G3FHurzw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jVJHOJ0yNQjFut7NNRFVZLidDis+5kSp01pcOKYt7zkO/1IvxXm6gwupBuvclWMvIJPLT1lEGa2CN2oED70Tyq+KqtLPil9h8f9wKC811qbhvXld3vISvypArrBVjT2cfczfXcowGBiTU6s0AmlF6IomFb4bv5RNRs1VRxLUysQs9shIn3+uiZVpSCaA0h+CIbj+kM5GyG1jd0pNCouptQPgxDOhRc7eNsT8klXovFX7nqyH1BVAo2JgvAuAcMXfzcvBPxG5fXz0EeZvoh48EYaIUGpl+Ba8HpK/DCEkbK7Ywv1IYsbHNrrbmLaRk4BJR/5wnQEnRwli63dg1h6Ypg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IjQoxJqvDTIywswmk+AZI3gMAaaJFna4jZzpW+tVyJI=;
 b=gJYzcupEwHBjuCXSY0SWe/w7v/n2WtXMiImJ2ypihGV88iXlV7bVTIacLmjqCBmjbGW2kerxG1g73kEXWzf1KQbRDJeJCOvatf2q5bSbd2TeKhM/hMrtE1QkPC1Wwo61LwKzMoKTjWA6N8q3V5m2Cmy2S13cBpfIrfaL1nKnlVabgeXwvfRXI2vG0vUP8SUKYCVYBmAhpwDdFxB4Ek5bfTWqInkNR9RfKd3ve9PFOIaTfjli1mcrTZmpXDrjp1DGonz4HIlkHJzBd+U4ZVr+mi1KGcIHkoBNdAbvPwQL7gJkYTbvgrWqsgaAAitfqiHre401rF2TJpHxDLsSO8pPrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: A mismatched type error found
To: Rroach <2284696125@qq.com>
References: <tencent_D17B77D71E5C6FE0BCB2559C7C59459FEF06@qq.com>
From: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <3d6938de-f7c3-5cd4-b3e2-0d06635cd3c6@suse.com>
Date: Mon, 28 Jun 2021 14:06:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <tencent_D17B77D71E5C6FE0BCB2559C7C59459FEF06@qq.com>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0029.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:54::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: af2404f3-802a-4697-7dd2-08d93a2d19b1
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: af2404f3-802a-4697-7dd2-08d93a2d19b1
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 12:06:02.9577
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 75TwASef+b5DcRD3FQ1eZ5z4q7Wbv21aPgrRnpaPYzN/spiWA63lFxEM2wtX/2TU6KwDwAbCEqGIB0ZXet3ibA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

On 26.06.2021 17:36, Rroach wrote:
> Hi, when I looked at the Xen 4.15 source code, I found a type mismatch.
> 
> 
> In detail, in xen/arch/x86/msi.c:find_msi_entry, there is a comparison between entry->msi_attrib.type and cap_id. However, according to the definitions, type is __u8, i.e. a char-sized variable, while cap_id is defined as int, hence this seems to be a type mismatch.

unsigned char, uint8_t, and the like promote to int when used in an
expression. All callers pass values known to fit in 8 bits. Plus
the hardware field where the ID is coming from is also an 8-bit
one. So while I would welcome a change of the function parameters
from plain int to unsigned int, changing them to uint<NN>_t would
actually make the code worse imo, not least because it would then
violate ./CODING_STYLE (section "Types"). And using a type wider
than what's needed to hold any valid values, in a structure with
perhaps many instances, is not a good use of resources.

> While this error does not affect system operation so far, it still affects the code's quality, as such errors could result in bugs in the future.

In this case I'm having a hard time seeing any such happen, for
the reasons above.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:12:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:12:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147832.272889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq7U-0001h3-1q; Mon, 28 Jun 2021 12:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147832.272889; Mon, 28 Jun 2021 12:12:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq7T-0001gw-UL; Mon, 28 Jun 2021 12:12:07 +0000
Received: by outflank-mailman (input) for mailman id 147832;
 Mon, 28 Jun 2021 12:09:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZQAJ=LW=amazon.co.uk=prvs=8067dfff9=mahgul@srs-us1.protection.inumbo.net>)
 id 1lxq4x-0000uL-BL
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:09:31 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4230fb2-0ee5-4251-9ce9-d63950e31891;
 Mon, 28 Jun 2021 12:09:30 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO
 email-inbound-relay-1e-28209b7b.us-east-1.amazon.com) ([10.25.36.214])
 by smtp-border-fw-9102.sea19.amazon.com with ESMTP; 28 Jun 2021 12:09:23 +0000
Received: from EX13D38EUB004.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1e-28209b7b.us-east-1.amazon.com (Postfix) with ESMTPS
 id 1D796E16D4
 for <xen-devel@lists.xenproject.org>; Mon, 28 Jun 2021 12:09:22 +0000 (UTC)
Received: from EX13D38EUB003.ant.amazon.com (10.43.166.191) by
 EX13D38EUB004.ant.amazon.com (10.43.166.140) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Mon, 28 Jun 2021 12:09:21 +0000
Received: from EX13D38EUB003.ant.amazon.com ([10.43.166.191]) by
 EX13D38EUB003.ant.amazon.com ([10.43.166.191]) with mapi id 15.00.1497.018;
 Mon, 28 Jun 2021 12:09:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4230fb2-0ee5-4251-9ce9-d63950e31891
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1624882171; x=1656418171;
  h=from:to:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=SSEfeS39KWHTHP1X/9TARKyVee/ZR54Rjkov5ecbNGw=;
  b=V24lE1St0TjzrZL7VJQD5UV9q0OfswB8eqdtloyb2XvbQGtSD+jxca0O
   0N56aw4Y7Nc7Fgf0Y5OyAHSp1Ea6IAKpt8JYnrYpapJrdhPOecb9AW39L
   X2Zw43mBdyOJRf3t30xlIWQcI/zEimFLMxQEDJvum0BLSQclFxJyR1Mso
   Q=;
X-IronPort-AV: E=Sophos;i="5.83,305,1616457600"; 
   d="scan'208";a="142486884"
From: "Gul, Mahircan" <mahgul@amazon.co.uk>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Notes from Xen live-update design session
Thread-Topic: Notes from Xen live-update design session
Thread-Index: AQHXbBZtI6tBiV5T9UGRiPDOgClkqg==
Date: Mon, 28 Jun 2021 12:09:21 +0000
Message-ID: <7D9E37A5-3BC4-41E6-8E6F-738CA1CC6305@amazon.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.215]
Content-Type: text/plain; charset="utf-8"
Content-ID: <38D1345A5B498A4A8F3F062972FEEB92@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

Hi all,

## Minutes from Xen live-update design session, 28th May

- The records are applied in Xen2 as if a hypercall was executed. We execute the hypercall on behalf of the domain, rather than the domain (i.e. dom0) calling the hypercall.
      - "How extensible is that approach? Things will change." As long as there's compatibility with future extensions, this is fine.
- One problem with creating domains and vCPUs is: there'll be a set_cpu hypercall between domain_create and vcpu_create. Plan is to enforce this as a hypercall dependency.
- Reducing the downtime to several hundred milliseconds is achievable with a bit of care.
- Potential issues that can arise due to differences between Intel and AMD:
      - Differences between VMCS and SVM.
      - EPT and NPT. Depending on how p2m is handled. However, we transfer it as is (pointer to the top) as the pages must be present (devices cannot be quiesced). So this must be okay.
      - On AMD, p2m has metadata in different places.
      - For shadow memory, pass p2m just as for HVM guests.
- As a first step, it's okay to push the records upstream even if they won't be used immediately. We know the end goal. This aligns with our idea of pushing the spec and the record headers first.
- As a follow-up, upstream the patches to be able to LU dom0 only.
- IOMMU masking (what we do) is the correct way to proceed.
- As long as interrupts are marked as pending on the event channel, it should be okay on the restore side even if the IRQ was in-flight in Xen1.
- A separate monthly call on Xen LU (separate from the monthly call).

--
Mahir
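The hypercall-dependency point in the minutes can be sketched standalone. This is not Xen code: the record kinds, the `lu_state` tracking, and the checker below are all hypothetical, illustrating only the idea that a restore-side replayer would reject a set_cpu or vcpu_create record arriving before the domain_create record it depends on.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical record kinds for a live-update stream (illustration only). */
enum lu_rec { LU_DOMAIN_CREATE, LU_SET_CPU, LU_VCPU_CREATE };

/* Minimal per-domain state; real code would track far more. */
struct lu_state { bool domain_created, cpus_set; };

/*
 * Replay one record, enforcing the ordering dependency: set_cpu requires
 * domain_create, and vcpu_create requires both.  Returns false when the
 * dependency is violated, i.e. the stream is malformed.
 */
static bool lu_replay(struct lu_state *s, enum lu_rec r)
{
    switch ( r )
    {
    case LU_DOMAIN_CREATE:
        s->domain_created = true;
        return true;
    case LU_SET_CPU:
        if ( !s->domain_created )
            return false;          /* set_cpu before domain_create */
        s->cpus_set = true;
        return true;
    case LU_VCPU_CREATE:
        return s->domain_created && s->cpus_set;
    }
    return false;
}
```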


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:13:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147836.272901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq9A-0002J4-DS; Mon, 28 Jun 2021 12:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147836.272901; Mon, 28 Jun 2021 12:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq9A-0002Ix-AK; Mon, 28 Jun 2021 12:13:52 +0000
Received: by outflank-mailman (input) for mailman id 147836;
 Mon, 28 Jun 2021 12:13:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxq99-0002Ip-Ju
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:13:51 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62939fee-3192-4b90-8951-62f5879bad3f;
 Mon, 28 Jun 2021 12:13:50 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2055.outbound.protection.outlook.com [104.47.14.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-5-BNlic1ALP-yznXHMmMB66A-1;
 Mon, 28 Jun 2021 14:13:48 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7039.eurprd04.prod.outlook.com (2603:10a6:800:12b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Mon, 28 Jun
 2021 12:13:46 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:13:46 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0213.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::33) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.19 via Frontend Transport; Mon, 28 Jun 2021 12:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62939fee-3192-4b90-8951-62f5879bad3f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624882429;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QEQ9PcYqF9WvWC4zp6yy+DyXR04ehCHa/bXmEXm0750=;
	b=hTH2ihXRGQu6o2eJjvbTio0A8tBuOmvQqMvMhR6M+lkQX/U/gz7mg6zPARnMOPWrzJtnvU
	rDr/phJoXiwrUXm6KQYcPMopqOPP+cWDYHjzWevJIvhNA5dE9NxZUbbgsn+0wLaJ8fPHzX
	VQjyo0NYztpPmCfEZIdFELWBKsEAQO0=
X-MC-Unique: BNlic1ALP-yznXHMmMB66A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i5Tlf/nIEZac+eyhzBo7uFW8WwAcNXOL6Bf97KVklDJEq7kjzWiTN2NQ4+jf0tkOHfd096vkLdxKfbIaVh8x6yj2XJn0oX9C3F09DI7xVC8p3PE0A7iIxNFQrY9YabcmzRKjrzCf+WVYCJD/GzPFybpzJHcC12LpRcCNBDqwhCRPxCW2jyXirqm2FNUoYGCAB1cub49iBdzlL569SztDg7BG/jo69035rj4u0UjGAvehybKCdC9eOaKxdYVftaVvdVD6Hk04JFYVbZEFSsX/iDu42gk0pe3d2W2cIcZN3llNqesSMDnJyuFGK9rCX8uGKrSHlbLhTUG24uBRjpUqIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QEQ9PcYqF9WvWC4zp6yy+DyXR04ehCHa/bXmEXm0750=;
 b=AesZUVpOfFt8N5+B65Dx3O2O1xiXWyIQ22idyKn20tja6NMXjMXLSrKPCs5VMx4Wz3U8XsBZru8IsfUML3Tr3W/cvw9++6pOTAYwqSMzJJSnk4+dPPpobNsx4VFvKhOR+as4gQPuCmIy83PusyftiLQjA9twCAL41k+VTBY1TME6TcPwkS15Tnb/U2QOd05SUogoL1UKsCTqU2oAcG3hu/pSEDPHqHBjjcVghzZY0lo4qJNAuI4re66aryVoTbitmDRf0XL/pTmT/4pNDfI2DHy5WaToohh2M4vVC8mpkYkni4gd1d6jr5BhPUlKFLynbokjwzy/NFOI/csLNpZyGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Ping: [PATCH v2] x86emul: de-duplicate scatters to the same linear
 address
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cf935e11-27c8-969f-9fb2-a5c0e85ccff1@suse.com>
Message-ID: <26495a3f-d250-4445-ca91-0d0aae336fe7@suse.com>
Date: Mon, 28 Jun 2021 14:13:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <cf935e11-27c8-969f-9fb2-a5c0e85ccff1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0213.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::33) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f10c7d6e-17dd-4b30-0b54-08d93a2e2e29
X-MS-TrafficTypeDiagnostic: VI1PR04MB7039:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB70393C460332DED186F50833B3039@VI1PR04MB7039.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2gHsqHXjjg4MCCHY0x/0WbwokszA3a0tDFbTM8wcKMncYu8imWF4CjMYHG+L0MZs2QjVrqai/qxjSzVfsBzSd4lPROViNM3RDDIueMNIpJkmcHemVN/+CIK0qCGcHZCmMq0LgMMMCM/gixn8GuZrb9Z5SgHZw/Eg883JCat6XsJy51UZIepPshQtwi+1OjugFnuEGg7PKj9Phu7CjUOswSu0tiJZGqQjzT6/EmHThpA8YpqYoZ4O2ji7Xjn8p4DOztksx45fo7IIFIwKFNVv66pP+Ipjhn9rtmUsd3VPddPtDvNosnh8hK54G55HQBvl/bqOXF3hg711pv+eCTJW+fBlnNx441/zaqdEM1kiQjP/UO5VeLsrC1JwnYNXo3ZWQMnJBZx8rF/o92QVnXBoAT8VyMjgPCHkhs235JnvTw3uHLebPMuLqELiKD/5LIqsMQOQMLwKqnBRFwKCyzOt/iF4+2q8wGpxvBk9LDRtVtspJbTi41NhPGKgrA3Yag9BcujkU0/j/k37fJuwc1wmTTpRgSWtHsE0eBADWL+ywFLIPK3gVhYFB4aeiMFatAHwhwpanuzt4ryqc1d1LOK96bsUaBh5Ztd7WJpmyI1krmsC8Puxd/0Y6BSHEu3rBxxu3aFEfiRKl2O3vMqsJZY2wjCogh99K2ck1vokYMqRGfMc6rluKhcAAD0YcbgLJITsrs0QbGHJ4/ciyT1AnWSWxhe6vybwHblNiqvNclZ2wUc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39860400002)(136003)(366004)(346002)(376002)(16526019)(186003)(6486002)(8676002)(83380400001)(86362001)(36756003)(5660300002)(31686004)(53546011)(66476007)(31696002)(2616005)(66556008)(110136005)(26005)(66946007)(478600001)(4326008)(2906002)(8936002)(16576012)(956004)(38100700002)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UklMWERnMTcxb1JKN1FzWHdYUmtsa0ozSWl0aVp0OXpDd2F3b2hNTkhDVzY0?=
 =?utf-8?B?RjJoMmZzbXFIYkpXcFZIY0g1K3BnWVV3cFNPWWF5YWozd2RNTGhaMG80TW1K?=
 =?utf-8?B?ZHV3dW1ZTW9HZ2s0Q0ViUTNSTmFLQTdIN2Z1ZzVNRDRaWDhzaWFlblRqdklE?=
 =?utf-8?B?aVh0OVdhdkg2bVBCUVRCZGVuTXUyOGVrdlhqT21yQjFGSi95aXArL1JsRFo2?=
 =?utf-8?B?QTJPQ0k0SGltZ0xqbkVFWTlXcVB6TWdpbS9Pd2lua2s0Tk5KVEhLU1B0aGNP?=
 =?utf-8?B?L2o3MXdoNWEwQlFVS0ZyVm9lb2Z4djlmZ21CbUhVczlMbnVZOEJYZEN3YUZx?=
 =?utf-8?B?TFRqYlVzeFBiTEh4MHdxbUFEMkoyaXB5TU9JRGNNVnp3YWRaZ1dsbW1UV0Jp?=
 =?utf-8?B?UUZkSWFWQkFONXRoVGNvSHE3TS9RUUVQUG5wbEQyOFljVGxYWEdJc0FDTFVu?=
 =?utf-8?B?Rk5RS29pbWQ2NWZTMW0vUFpvd0dCRnc4b1hrT3pkMzRtTktxM2lRUlFYU3Vh?=
 =?utf-8?B?TXhmRHZsSUNCZkNQbnFpdDhrZVZTT2dsemlxV2RJQVZ0ZlJXaXFjaVp4bGpH?=
 =?utf-8?B?cHhyZ3lEMEdFSFRIMGpJY0s2ZHRqMEtCRTFTbWpCdnVGanV6dnQ2QzAxL1RQ?=
 =?utf-8?B?elR6NklNcEE4ZUllQlhqRHZzOGx1WG9qU3NORVFld3FpVUZEUURFYzUvNi9Y?=
 =?utf-8?B?QlNPTDNxZDdmc0FKaFNTN2hNaVd4MnhJZUo5ZmllbU9helZFUUw5VERhcFdR?=
 =?utf-8?B?UDNwUmhzb2EzY2ZhREZuVmpFNzVyVnJkWmtQd3VQUXN3RVVQTzYzRTcvK3J1?=
 =?utf-8?B?WG1RTjRPTHpNSHdCSnFjME9XY3RXRk1HUnZVM1BzcUREZmdHNzdjUmdVeTZy?=
 =?utf-8?B?K1IyYjhtd0RSbzR2b1NzUXlaQWlUaldoQ1RYdUVJaTJlYzBqRDh1bGVpVi95?=
 =?utf-8?B?SVZGZU52cnVVK0VsNS84MjNmMnBiWVBoN3pEL3lQYW53RCtNYVE3TjVJNFgz?=
 =?utf-8?B?RktHV1FqZkxCY05VYTRmKzhhSWpqRzZlOGdhMzNYZlFSOFZKU1dIVXlhSFBU?=
 =?utf-8?B?eEQ4RDE0em8yUUFzSmtWV3BDL2h6K2kyOVlTY3NhWkRvUWVSTkxaN0dZTDQ5?=
 =?utf-8?B?TStITTR5YTdPUXovZGhDN0pzNllMZ01PT0N2L09qQlhJZzNvRkxEa2ozbHJj?=
 =?utf-8?B?WjdIdDdycEtNRmROUTRzeDRJalo3VlB5RnliTVBVYUYxamt3Ynk0S0xDWWQ3?=
 =?utf-8?B?M1RTK0xBOXZLaFVkQzNhMGdXaHJPUjg4cG1USjllMU0rQnJ0MXp3RXZWWjVY?=
 =?utf-8?B?dlF6WkJidW5lTGU4WlZTb012VklhOWgzLzUyM0ZDejZuRlh5a2tJYUwwTWNH?=
 =?utf-8?B?ZzloVkNQS0d2L1orU1NRcTRsZ1ZaSDRKRytNUGpKNFhDN2E3MWZveEFzdk84?=
 =?utf-8?B?TEdNSmhCdkk4Ry84UzhnZ0JuRWVrYlZKaVJzemF3VHgwa1NFMUdvWExUcWFi?=
 =?utf-8?B?cnhWbHFDOHJWQmV6RWVHZXJseHhKNUFFUjI1czdCSzE0UmxDcWdkY0JCWXNL?=
 =?utf-8?B?blF5SFZVN2Zac2VTNE9lTk5vclZZcFRKY3RGNTBjN01MbDlvSi9ZTzB5T3dy?=
 =?utf-8?B?VGErUGQ5aEtGeURjNmprUi9HKzkrUzNtOG9GZUtxREVsNThUemZ5aHNzQURP?=
 =?utf-8?B?QS9HVEI3ZzIrTlBZQXNlNHhHRzlHdzJ0dUovRDRwQUpxQXdvRm52bWp6b3By?=
 =?utf-8?Q?d6HBRkAMrkZPJfdQOVLOjCtl1RThyw/9tSdU9Me?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f10c7d6e-17dd-4b30-0b54-08d93a2e2e29
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 12:13:46.8483
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0eNINbpQ3l9qPvpxsGycDTWlXgR12fTXth9yr2TFVX2kHzuiPKxqgOw/0O1YLDOlP9NKx3ECLHprc6gXUmxF1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7039

On 20.05.2021 15:34, Jan Beulich wrote:
> The SDM specifically allows for earlier writes to fully overlapping
> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
> would crash it if varying data was written to the same address. Detect
> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
> be quite a bit more difficult. To maintain proper faulting behavior,
> instead of dropping earlier write instances of fully overlapping slots
> altogether, write the data of the final of these slots multiple times.
> (We also can't pull ahead the [single] write of the data of the last of
> the slots, clearing all involved slots' op_mask bits together, as this
> would yield incorrect results if there were intervening partially
> overlapping ones.)
> 
> Note that due to cache slot use being linear address based, there's no
> similar issue with multiple writes to the same physical address (mapped
> through different linear addresses).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As indicated before, this is an issue which - afaict - would be a
security issue if introspection were security-supported. As such I
find it highly irritating that this has now been pending for well
over half a year (counting from the submission of v1).

Jan

> ---
> v2: Maintain correct faulting behavior.
> 
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -10073,15 +10073,36 @@ x86_emulate(
>  
>          for ( i = 0; op_mask; ++i )
>          {
> -            long idx = b & 1 ? index.qw[i] : index.dw[i];
> +            long idx = (b & 1 ? index.qw[i]
> +                              : index.dw[i]) * (1 << state->sib_scale);
> +            unsigned long offs = truncate_ea(ea.mem.off + idx);
> +            unsigned int j, slot;
>  
>              if ( !(op_mask & (1 << i)) )
>                  continue;
>  
> -            rc = ops->write(ea.mem.seg,
> -                            truncate_ea(ea.mem.off +
> -                                        idx * (1 << state->sib_scale)),
> -                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
> +            /*
> +             * hvmemul_linear_mmio_access() will find a cache slot based on
> +             * linear address.  hvmemul_phys_mmio_access() will crash the
> +             * domain if observing varying data getting written to the same
> +             * cache slot.  Utilize that squashing earlier writes to fully
> +             * overlapping addresses is permitted by the spec.  We can't,
> +             * however, drop the writes altogether, to maintain correct
> +             * faulting behavior.  Instead write the data from the last of
> +             * the fully overlapping slots multiple times.
> +             */
> +            for ( j = (slot = i) + 1; j < n; ++j )
> +            {
> +                long idx2 = (b & 1 ? index.qw[j]
> +                                   : index.dw[j]) * (1 << state->sib_scale);
> +
> +                if ( (op_mask & (1 << j)) &&
> +                     truncate_ea(ea.mem.off + idx2) == offs )
> +                    slot = j;
> +            }
> +
> +            rc = ops->write(ea.mem.seg, offs,
> +                            (void *)mmvalp + slot * op_bytes, op_bytes, ctxt);
>              if ( rc != X86EMUL_OKAY )
>              {
>                  /* See comment in gather emulation. */
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:14:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:14:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147839.272911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq9m-0002s5-Nu; Mon, 28 Jun 2021 12:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147839.272911; Mon, 28 Jun 2021 12:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxq9m-0002ry-KW; Mon, 28 Jun 2021 12:14:30 +0000
Received: by outflank-mailman (input) for mailman id 147839;
 Mon, 28 Jun 2021 12:14:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxq9l-0002rq-PR
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:14:29 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a594bc0-b2e7-427d-adfd-96e3d10df11f;
 Mon, 28 Jun 2021 12:14:28 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2054.outbound.protection.outlook.com [104.47.13.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-20-3uTLG9ckNEGPAFMFn3U8yA-1; Mon, 28 Jun 2021 14:14:26 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6477.eurprd04.prod.outlook.com (2603:10a6:803:11e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Mon, 28 Jun
 2021 12:14:23 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:14:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0214.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1f::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 12:14:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a594bc0-b2e7-427d-adfd-96e3d10df11f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624882467;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M90D9+BdxAf5Mno841pb00beQDnilbxJZ2fZ/0gnGZg=;
	b=eQpcboxjNS8RlRJyUREF+MDe8PzWtw1UJRomXebMDRwCipUuQTTKSKYLXpgPkaaRs8RPDZ
	BQq/yqyuK+2n7AikL9mZ4nKMBVaQXdO8bHQsa3jYVD8bpBUnoe/YwhsXnSBLqr7HfNCfXL
	tJYWSzwqvThIgtZZMJ+SI0Ej6VELW20=
X-MC-Unique: 3uTLG9ckNEGPAFMFn3U8yA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fs/xqZLINdwa5bYp5EoujBIkT7M8W7uORxbS9nXiQsWJQRn9HCQf+bA4wZCxN1rVGp7+hvwdDm7QZ8slitqeZEAlz/nnVnMo5FHVXXjtGbB1yI1AhFLfF5liwUbmojk+xhS8hYNuGmO8q9dl6GpAbOlNlNSCzWMtVIAl6ELAgJ07Tk/2hh1nzqnDXI51SXIOi7Xozd+CE7OG1HCgDnKQRuTKccIY9Y85lqF/ZIrVN8yHGFUMOaXGo8Ao0yBIop4PjUFsz9okg3awHs5avYZ7hKokn2TtIp2sxvKqPi3W68fc+djMj5TvlEpv+Xk1O+GnQYApnFUBuv2SL/go5t58Lw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M90D9+BdxAf5Mno841pb00beQDnilbxJZ2fZ/0gnGZg=;
 b=a9GAl74VtNegW6DcUHF6baSSgdaFKVNlVBt1lzxBvU0SaDB6+0fkdWXzRDpf0yrlpJrYbTG5f0BJ/birylbwLqxy7yjrN6Lg8DFppEJxBAHdDu3iOAS3L0Lo6oo+gXJxwMNTciy2QUxGgdiH1KiSdj4nehjaxbwR56BQEryqNSWdc5IVxE6oh1thWZK75D2Sz1jc9aGnn7XPjzVqVRGXSI4ILtRytkEQdoh0/iXSzU8WlJvrmiWTzV4pQUZc2oIOKM1/G3f7N36C4PoHY0qTFMMEXCb956ASIyYG3WTnmYFuVEjCQGqmDP1u1//b+bk9MER7zWHWa6b9Me7GE2gc/Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Ping: [PATCH] x86emul: avoid using _PRE_EFLAGS() in a few cases
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
Message-ID: <8c9a84a0-4dd5-178d-5409-838ce10825da@suse.com>
Date: Mon, 28 Jun 2021 14:14:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0214.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eca7c97e-5923-4a52-75b6-08d93a2e4418
X-MS-TrafficTypeDiagnostic: VE1PR04MB6477:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6477513AFE5749A4EABB3A22B3039@VE1PR04MB6477.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VQL9du8v7RCBIl9p8aUzJbkzyLAN55GeMTM7SvfdgIm7psNub8h487QAdvxxV6azZ9Uk9swmpZtHyMqELWlXibl5kTH7byx9SAucWKpLNL/0IY8cA6yKQuH8kzQRLrDl4Bj2OcAmDNGLBbaNNRyDDU5s/lmZHBeUkEPkQ0+9gHUa3AEddZsjStFqiTzH4iLk9OPa+9wS6nCUS5TgbHRtVXa/rnz1fS/GxcI05k9vFjO8QOkNtTiEIp0TLYAHiQo6qHIk5WJL8FPizbB0cEtxMJxANvnhTcFwtXiBd5rht55jfb2auRT0SVX2UJlkZ//Z1cOkOdDiXguYODAN3kQauLn2Fst3MGoVEKy9iuEG28VEQ3dPyymUrkiM8nBh3TiYZezJyGnbCX40NGrR2JvDXKinJU7tsz8C9p1puVFsJXoDOf7v+9GHVZhlNClkZnmghbeShfkmgWU/OhPC4i0A/eh1Migv+hRgKX2ofr0Qmp2KRo72u8WME228qZDAby3fmMf0pdASr1h18Hm51OZ6depGDDRkmHelGS9VtA4SUp4zJujpD8QDr0mATjcapzcRXgKmWJwpdQHjd8Pkw/xN4/5oYyOq+NbccVpqmMbQUmQHiismUdY6HMhr6qJRkPHMO8EVVD5aXjDqRi+tI2EQ3zX+6Jl4Qxf3rdvea2bDbPFmWsgeE5kufgzxOMdfrxSXH0NiVS9nIJvoBZsP/wVILOSDcjB8QgEEZzUJFFwFYz8=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(136003)(396003)(39860400002)(376002)(346002)(66556008)(66476007)(5660300002)(8676002)(186003)(36756003)(53546011)(16526019)(2906002)(38100700002)(16576012)(8936002)(66946007)(86362001)(26005)(31696002)(83380400001)(110136005)(4326008)(2616005)(478600001)(31686004)(956004)(6486002)(316002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SWNkRlhwM0ZSRDdBK3hFOWxQeEJlZkl5LzZ4VkMvclhWajNtOHJOQmw5aUhC?=
 =?utf-8?B?bGhOMVNFTjhsMWNKQWovYjhjcXExVU0ydHJnOWRVV1R6eDB6STJQd2Z3aFYw?=
 =?utf-8?B?elRaajJRNUYrT3Z6SlNwNG5sd2NQZDJGT2NsWWdtRW1rZjVqOXh5blgwMEpV?=
 =?utf-8?B?VWsyck53SFFkL3VJbmJXVlQ5djYyb3B5SXBCSGIrajhRUk52T1RWaVZYMmFU?=
 =?utf-8?B?bk9XSXk0OEFZNXd6RFcrR1NYVDcvalhLS3pYWThKUm5JejVsU2lRZkJVRjh2?=
 =?utf-8?B?SHVXNTVvMDVuTkhYODh2YlNqR1VsczJldHNMeG9EL21qSlRRbEttOEJUS2Ur?=
 =?utf-8?B?cWNocTVUNzVmTnVib0wyQXBybmdnaGR6OTRNTElqOThRa3dsU2dTSFFrazl6?=
 =?utf-8?B?SDVFUnlsRllneTNNTGx2Ry9adDFKVHpoT3puQXlkQ1JOUzVEWk5BUVdZN3JN?=
 =?utf-8?B?MytLK0NiN0RjSERiNU1WZ0NoNzhmTGM0SEJEV0lPTm1rSzFENGJ0eFBPY0ZZ?=
 =?utf-8?B?TXRYN0ppUTRLdzA3NnhBZnM0Vjk4VTRWU3M4V1lQSHRtM05LOTloMVhER1hk?=
 =?utf-8?B?Y3JrTnA5WHg2aDVjSEdVSmFtdWxVNlNiWXU3RHdkaUFwdlQ0K2tKdXQzUjVh?=
 =?utf-8?B?bS9CektucERCTE1xdER1dEkybTRwSmpvRVBoSWVwemhsU3VBeW1CbVBBUFBh?=
 =?utf-8?B?QVkvRTYvcHRyVStFMG9pWHhTTDR1WU15UlZ1bno1V0NnV2hqU1U0d2kyY2kr?=
 =?utf-8?B?ajVPUU1vVkFkTUNuRXZiQlpITFd0Y2dQZWxteUxYemFiYk1tTG13UEJhREEy?=
 =?utf-8?B?N1FwTmc0Y3BDNFhyUUNja1RPdXA5ME45WUxNaUc1K0NlVXk5dEtla3BQWHZa?=
 =?utf-8?B?UjNQWHZ3VGxOYlM5NGNkYW8rRGM1c3RlMmo3Q0g5TXZuWkxzMzRlRXZhcXk1?=
 =?utf-8?B?L1ZQUDU1UWw3c0h5clR2Y29Cem9HTHFLUU5TdTlZa0JGb3loUWJmZDdMQ1VU?=
 =?utf-8?B?SEdOaVpDUnAzL2YwWGs1OUtaM1hNMTZxc1VrV05QdjE0Z3RZZjN1MTMxK2tz?=
 =?utf-8?B?YjZKeGdJSTZiRTNuVDk3a21RNE5LaCtWb0NNMDJLTTVSdVQ2VjdLUituNkJz?=
 =?utf-8?B?d3RoWXQrdFdRR3pJWmhjaDk1Y1JXOWZjOGtWc3NGRGFvMHFJOHFMQkVFTXlB?=
 =?utf-8?B?YVJhNG1NeUtrSzh1ZlJnMEp5RmI5NEo1a3pVZDZOT2pjOEZWVC91RTZUNkh4?=
 =?utf-8?B?MnVWSmdRdHVCUU15TGoyQVpIZG1iZXRLVUZ0YnRRaFFRN0czUmNhS3liQzZr?=
 =?utf-8?B?SW0wOTVIYTgzeW1GWjlIVDBiR2s3NTFEU1VNRjdtWlp5TnJzOXhGUDNxakRK?=
 =?utf-8?B?OTFaOUhXRGxIVkJoeHQzY2I3bUZ4RW43SHBXQVB5ajBKTGZ5MDlsK2ttblRl?=
 =?utf-8?B?TEVtd3JoUGVwczNOc3Nkb2MzNVJNZ0VmcXR4UEpwdDk4TEZJSlJNZUo2aDVY?=
 =?utf-8?B?YnFVaE9BL3hhVWlpckxTSFR0MkNxS2d5TjVCMVFlMmNoZVBvOUxIek9HZGtj?=
 =?utf-8?B?dGYyenNxNFBsTFkwdCtwRjRyaWF2Qkg1dUgvUmoxK2NIUTU1djlTTExFcmdx?=
 =?utf-8?B?WjdKQ2ltYXAvN1pSc3ZEUjg3NkFKaEdKMnlrSEt3TnBCSUthaDJWTFg1ajEw?=
 =?utf-8?B?SVR6MU9jd1JQSEdacXJOd1BLOFpHYUpnVUFJVHhpN29XU0xuUEJWQjJhVU1w?=
 =?utf-8?Q?k7z+7aoqr/0Tju7Zlt+d0BPIwImNHpBVd+c+lrB?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eca7c97e-5923-4a52-75b6-08d93a2e4418
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 12:14:23.6285
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wnwKEzcJe8AHgS1oyVrHQmbZEtbrStTa35qjM44b7qolCOm80U10YTMn8vm4AodA0D9aT8h6NG2t/mSsq+/8EA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6477

On 02.06.2021 16:37, Jan Beulich wrote:
> With the macro expanding to quite a few insns, replace its use by simply
> clearing the status flags when the insn to be executed doesn't depend
> on their initial state, in cases where this is easily possible. (There
> are more cases where the uses are hidden inside macros, and where some
> of the users of the macros want guest flags put in place before running
> the insn, i.e. the macros can't be updated as easily.)
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Anyone?

Thanks, Jan

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -6863,7 +6863,8 @@ x86_emulate(
>          }
>          opc[2] = 0xc3;
>  
> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
> +        _regs.eflags &= ~EFLAGS_MASK;
> +        invoke_stub("",
>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>                      [eflags] "+g" (_regs.eflags),
>                      [tmp] "=&r" (dummy), "+m" (*mmvalp)
> @@ -8111,7 +8112,8 @@ x86_emulate(
>          opc[2] = 0xc3;
>  
>          copy_VEX(opc, vex);
> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
> +        _regs.eflags &= ~EFLAGS_MASK;
> +        invoke_stub("",
>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>                      [eflags] "+g" (_regs.eflags),
>                      "=a" (dst.val), [tmp] "=&r" (dummy)
> @@ -11698,13 +11700,14 @@ int x86_emul_rmw(
>          break;
>  
>      case rmw_xadd:
> +        *eflags &= ~EFLAGS_MASK;
>          switch ( state->op_bytes )
>          {
>              unsigned long dummy;
>  
>  #define XADD(sz, cst, mod) \
>          case sz: \
> -            asm ( _PRE_EFLAGS("[efl]", "[msk]", "[tmp]") \
> +            asm ( "" \
>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
>                    : [reg] "+" #cst (state->ea.val), \
> 
> 
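The reasoning behind dropping `_PRE_EFLAGS()` can be modeled in plain C. The sketch below is illustrative only (the `EFLAGS_MASK` value and `merge_flags` helper are made up for the example): when the executed insn fully defines the arithmetic flags, the post-merge keeps only the non-arithmetic bits of the guest's flags and takes the arithmetic bits from the insn's result, so the outcome is independent of the arithmetic bits' initial state and pre-loading the guest's flags buys nothing.

```c
#include <assert.h>
#include <stdint.h>

/* Arithmetic status flags, roughly OF|SF|ZF|AF|PF|CF (illustrative value). */
#define EFLAGS_MASK 0x8d5u

/*
 * Model of the _POST_EFLAGS merge: keep the guest's non-arithmetic bits,
 * take the arithmetic bits from what the executed insn produced.
 */
static uint32_t merge_flags(uint32_t guest_eflags, uint32_t insn_flags)
{
    return (guest_eflags & ~EFLAGS_MASK) | (insn_flags & EFLAGS_MASK);
}
```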



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:15:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:15:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147842.272922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqAX-0003Xa-7Z; Mon, 28 Jun 2021 12:15:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147842.272922; Mon, 28 Jun 2021 12:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqAX-0003XT-3l; Mon, 28 Jun 2021 12:15:17 +0000
Received: by outflank-mailman (input) for mailman id 147842;
 Mon, 28 Jun 2021 12:15:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxqAW-0003XL-1h
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:15:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cea285c-0e81-432c-adfc-2b8d9c93342f;
 Mon, 28 Jun 2021 12:15:15 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2050.outbound.protection.outlook.com [104.47.13.50]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-32-9Z-f6F6fOdCcX4PRrCX-Bg-1; Mon, 28 Jun 2021 14:15:11 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6477.eurprd04.prod.outlook.com (2603:10a6:803:11e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Mon, 28 Jun
 2021 12:15:09 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:15:09 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR2P264CA0002.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 12:15:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cea285c-0e81-432c-adfc-2b8d9c93342f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624882513;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q3xhD2WCbCxNPzWRzSrAxXsfY7wluQcriH8hyeBCdB8=;
	b=al+ebbjjjWOVEXnutjZaFp6idv0QI6et9rtsqP032nwrmdVBveEV5C0pj23nqOPAJ5OJIf
	Q2Gg9ninGvezuRFTWqPADzI6aICTQQTwev6B2LARbW8MUykOfhBId7ndJUHHl3rOy7EFqa
	N0B0W0ny5njYVGqI1+jIsXR7LYC7pMo=
X-MC-Unique: 9Z-f6F6fOdCcX4PRrCX-Bg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i/d2uoENX2TTCf2W/ZRcksPJdLNNrbDCxhNk0iW7/3mB7hTmih/hIU1aLwshmA4GqlCHgJN/DUlcirwOxA6xUyl4Pzk5JHR//mSKduzIc1SQUDdV9zQU6PVkAPa+klqS5og6CzIx03DZerbbxHiFQzHawJX9u4ytr45mM6iYEBHfVJR5dGZzlfjMm7EY3JtJyVIjTS7CM7/eYgh92p8rq+I5jn6P5oGzWlLJRhK3kUSkov1ofBpU07GO9v3jVPfc0icVL+mTsMMyCR0GtUC1Ey+rg64gq6XWWXJKwzr/Fxm/jvOpLHRnh0DuxxxaPZlzn0f9BnvSMYF/R8tqoXJWww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q3xhD2WCbCxNPzWRzSrAxXsfY7wluQcriH8hyeBCdB8=;
 b=IbvRasCDg6V6OCEQF6nkpbrPuqcfGjc1a0uBNwJ876ONe8unOTIHsolUA0MtJjuP+HRVGPkZ1fEzmAwAJ+tWmVCuu5BNJ6XZ6LHN6qm95Uh83pwx612r0OdHwT/DbrqOWRqpvwSC/UQMCZ+oKJmSvNJiAu4XYe1/xzgy6jtpYTqd/mduGAeng3+ANU1tzX3boPPvfS83an57WSyTZlM0zv5CMsP3FjsaBJ6dSvWtfL5ZCv2KhAFTLwwLx/1L5de9/xqonp4aF1IcrZFrbfMVb9WymKTxI+R/qscgQlloxcapd34cMKc9Lx5+UdEs0gsoHDb/nHOx8s4V969PB2LuLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Ping: [PATCH] x86emul: pad blob-execution "okay" messages
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3250a871-e49d-d3c4-333a-eff435e092c2@suse.com>
Message-ID: <e53a10ad-3489-eccb-2707-8445746d84b5@suse.com>
Date: Mon, 28 Jun 2021 14:15:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <3250a871-e49d-d3c4-333a-eff435e092c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR2P264CA0002.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::14)
 To VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 375242a7-6c5d-4ff8-9674-08d93a2e5f79
X-MS-TrafficTypeDiagnostic: VE1PR04MB6477:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB64771E0022DEB9436914EFF5B3039@VE1PR04MB6477.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:529;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 375242a7-6c5d-4ff8-9674-08d93a2e5f79
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 12:15:09.5305
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KeTkHdFTAz9JrRq+ocfaFnFbFYvKtJys7pPO+JXn4ikF0xDFjL7rp/hxixHU4ndLzRi9jyrPewkxtQ2fnBv0IQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6477

On 02.06.2021 16:38, Jan Beulich wrote:
> We already do so in the native execution case, and a few descriptions
> (I noticed this with the SHA ones) are short enough for the output to
> look slightly odd.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Again - anyone?

Thanks, Jan

> ---
> Many descriptions are longer than 37 characters, so I wonder whether we
> wouldn't want to bump the padding to 50, 60, or even 70. And this
> perhaps despite then going out of sync with the individual insn tests
> earlier on (which I wouldn't want to touch).
> 
> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> @@ -5181,6 +5181,8 @@ int main(int argc, char **argv)
>  
>      for ( j = 0; j < ARRAY_SIZE(blobs); j++ )
>      {
> +        unsigned int nr;
> +
>          if ( blobs[j].check_cpu && !blobs[j].check_cpu() )
>              continue;
>  
> @@ -5196,7 +5198,8 @@ int main(int argc, char **argv)
>  
>          if ( ctxt.addr_size == sizeof(void *) * CHAR_BIT )
>          {
> -            i = printf("Testing %s native execution...", blobs[j].name);
> +            nr = printf("Testing %s native execution...", blobs[j].name);
> +
>              if ( blobs[j].set_regs )
>                  blobs[j].set_regs(&regs);
>              asm volatile (
> @@ -5212,11 +5215,13 @@ int main(int argc, char **argv)
>              );
>              if ( !blobs[j].check_regs(&regs) )
>                  goto fail;
> -            printf("%*sokay\n", i < 40 ? 40 - i : 0, "");
> +
> +            printf("%*sokay\n", nr < 40 ? 40 - nr : 0, "");
>          }
>  
> -        printf("Testing %s %u-bit code sequence",
> -               blobs[j].name, ctxt.addr_size);
> +        nr = printf("Testing %s %u-bit code sequence",
> +                    blobs[j].name, ctxt.addr_size);
> +
>          if ( blobs[j].set_regs )
>              blobs[j].set_regs(&regs);
>          regs.eip = (unsigned long)res;
> @@ -5233,7 +5238,10 @@ int main(int argc, char **argv)
>                  regs.eip < (unsigned long)res + blobs[j].size )
>          {
>              if ( (i++ & 8191) == 0 )
> +            {
>                  printf(".");
> +                ++nr;
> +            }
>              rc = x86_emulate(&ctxt, &emulops);
>              if ( rc != X86EMUL_OKAY )
>              {
> @@ -5242,13 +5250,17 @@ int main(int argc, char **argv)
>                  return 1;
>              }
>          }
> -        for ( ; i < 2 * 8192; i += 8192 )
> +        for ( ; i < 2 * 8192; i += 8192 ) {
>              printf(".");
> +            ++nr;
> +        }
> +
>          if ( (regs.eip != 0x12345678) ||
>               (regs.esp != ((unsigned long)res + MMAP_SZ)) ||
>               !blobs[j].check_regs(&regs) )
>              goto fail;
> -        printf("okay\n");
> +
> +        printf("%*sokay\n", nr < 40 ? 40 - nr : 0, "");
>      }
>  
>      return 0;
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:35:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147847.272934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqTv-0005yv-R6; Mon, 28 Jun 2021 12:35:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147847.272934; Mon, 28 Jun 2021 12:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqTv-0005yo-Nf; Mon, 28 Jun 2021 12:35:19 +0000
Received: by outflank-mailman (input) for mailman id 147847;
 Mon, 28 Jun 2021 12:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxqTu-0005yi-9Y
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:35:18 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d04d7be6-8270-418c-98ce-8437a2fc8d7f;
 Mon, 28 Jun 2021 12:35:17 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2059.outbound.protection.outlook.com [104.47.8.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-3-JLOLGcG1MfiC_DSJLrlbRg-1;
 Mon, 28 Jun 2021 14:35:15 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2606.eurprd04.prod.outlook.com (2603:10a6:800:51::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Mon, 28 Jun
 2021 12:35:13 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:35:13 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0010.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Mon, 28 Jun 2021 12:35:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d04d7be6-8270-418c-98ce-8437a2fc8d7f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624883716;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NqO9UDVuUossHo7M/+agqwQGG+tXBLJ5Zyyux978lLo=;
	b=WbCn93Ykv70ACURWYPy03YmyS2VBxU4Q2Qkevxaeq/gP8qMcSLnyaA/K8Sjk+IFtMIjsEB
	NelqXLD8/Cu6rrZAiy2Gnjr5xzwZRseNGswymOHmxOCWd+O174RlJrRftfmTVGcMjfzJSs
	gfOgSeMooufpbe8al9mfuKhAZCttEzI=
X-MC-Unique: JLOLGcG1MfiC_DSJLrlbRg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oDrjMA4WhEUHiIYJkmajRHgmMKHUevW5HCXM0q9wfRnhq8XGYfwmWO7cRRXPjxCbeD3Kla0YqAk13hvR7Sy8mfswodTd9kwTgtYJpN7Il4CrBjRcgtOxXiJFkzCj20zymdQg9qVCEcbARQ4+uPrBdrk8I6QSf1127olUEtGG2rpT8YHTao53yFeUGNf0nS/4pTTAG8+Byt1POHz7nYVj/Mg7e3GjWlzkHv1haqJVLJdCQbFqq4w3knXtE0lYCNK+HNBt8V6dC3x60FwivV/tqWT9HnX4n2eiepG8vDjCJohU4VVpeHujntb0UcEQslWHOXvIRw4dfx6CiZwJBeiVMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NqO9UDVuUossHo7M/+agqwQGG+tXBLJ5Zyyux978lLo=;
 b=V+CLOLD+HNLFbtdZP7FOE+nmSFW2oeLoTHAYBxTsh+FCFKU8orXZz2XidVjhuGZqyUaOOCJDt5di320Xa4nWNUKzyPV2YosesnamvoBHGQCtD83mgrllTuLRMfr2JviGu5APf2WZk3C3Gsj+1nFZ9SxBYefuyqeNVc63BcwQyjfbkCfQAnDIfkGxQbzVgXlpcGVx4fCJFM1QpwQj+iZi+u4jlzSgOaPDPXBQKCRx5ElrfCValBZTuN+fSXidXnzPTG7E/xb+tAEuJtRDfain1WnGaCIiOXMd3oD2hFiU2OWHIq18H06xVLe7d8yiVQLilb/ah/8ZCIf75vjZ5wmATg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=suse.com;
Subject: Ping: Regressed XSA-286, was [xen-unstable test] 161917: regressions
 - FAIL
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, "committers@xenproject.org"
 <committers@xenproject.org>, Ian Jackson <iwj@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <b57c2120-2f86-caa7-56ec-e215a7ad0529@suse.com>
 <637ff3c7-afeb-aae4-0f1d-5ae168e01e01@citrix.com>
 <99833b7b-f626-fac5-d426-109afd4ffa38@suse.com>
 <24779.18584.523983.904660@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c044998d-f7a5-dac4-1f7d-396e1be951a8@suse.com>
Date: Mon, 28 Jun 2021 14:35:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <24779.18584.523983.904660@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0010.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5f5a1f99-29ba-4a01-b4f2-08d93a312cd7
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2606:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2606F2575639C57C68E03BCAB3039@VI1PR0401MB2606.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f5a1f99-29ba-4a01-b4f2-08d93a312cd7
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 12:35:13.0737
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 31zIYlc3FFliqsxdkqUFJAmdHm2VbXejYMGNv0LLvCZt1kKgNjQh9+3U8eFb0j9OQ3Y1ZxFXTYr4R4sm11nHJw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2606

On 17.06.2021 15:05, Ian Jackson wrote:
> On to process:
> 
> Jan Beulich writes ("Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions - FAIL"):
>> On 16.06.2021 17:43, Andrew Cooper wrote:
>>> I am very irritated that you have *twice* recently introduced security
>>> vulnerabilities by bypassing my reviews/objections on patches.
>>
>> I'm sorry, Andrew, but already in my original reply a month ago I did
>> express that I couldn't find any record of you having objected to the
>> changes. It doesn't help that you claim you've objected when you
>> really didn't (which is the impression I get from not finding anything,
>> and which also matches my recollection of what was discussed).
> 
> Andrew, can you provide references to your objections ?
> 
>> I don't think I know which 2nd instance you're referring to, and hence
>> I can't respond to that aspect.
> 
> And, likewise, references for this.
> 
>>> In the case of this revert specifically, I did get agreement on IRC
>>> before reverting.
>>
>> How can I know you did? You didn't even care to reply to my mail from
>> a month ago. And there was no reason to make an emergency out of this
>> and ask on irc. You could have sent mail just like is done for all
>> other normal bug fixes etc. Iirc I was on PTO at that time; it would
>> hence only have been fair to wait until my return.
> 
> I think it would be good practice to copy and paste relevant IRC
> discussions into email in this kind of situation.  That email also
> makes space to properly write down what you are doing, that you
> realise it is controversial, who you have consulted, and why you are
> going ahead.
> 
> I looked at one of the two disputed reverts in Xen,
> cb199cc7de987cfda4659fccf51059f210f6ad34, and it does not have any
> tags indicating approval by anyone else.
> 
> Andy, if you got agreement on IRC, who from ? [1]
> 
> Ian.
> 
> [1] This may well have included me.  I do not reliably record this
> kind of information in my wetware.  That is what we have computers
> for.

Another 11 days have passed without a reply to any of the questions
above. I find it generally inappropriate to try to have controversies
die out by simply not replying, but in a case like this one it is imo
extra bad to do so. In case it hasn't come through clearly before: My
primary goal is not to revert your revert. Instead I'd like to be
given proper reasons, so I can fully understand parts I may have been
missing so far. But of course I also expect you to correct your views
in case the technical details speak against your original reasoning
(at which point undoing your change may indeed be the necessary
consequence).

And of course all technical aspects aside there remains the process
aspect of this whole situation.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:42:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:42:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147850.272945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqad-0007Oy-K2; Mon, 28 Jun 2021 12:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147850.272945; Mon, 28 Jun 2021 12:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxqad-0007Or-GZ; Mon, 28 Jun 2021 12:42:15 +0000
Received: by outflank-mailman (input) for mailman id 147850;
 Mon, 28 Jun 2021 12:42:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxqac-0007Ol-QF
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 12:42:14 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14e512c5-6921-4806-8a6c-45e233333501;
 Mon, 28 Jun 2021 12:42:13 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2059.outbound.protection.outlook.com [104.47.1.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-4-lbt7wlz2OTirEiRABSiFdg-2;
 Mon, 28 Jun 2021 14:42:11 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3391.eurprd04.prod.outlook.com (2603:10a6:803:3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Mon, 28 Jun
 2021 12:42:08 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 12:42:08 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0056.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:55::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.20 via Frontend Transport; Mon, 28 Jun 2021 12:42:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14e512c5-6921-4806-8a6c-45e233333501
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624884132;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P81GsxTs/2DdL45WpwXDIwGJ3vkgQ2EGgQS6E3ldKns=;
	b=nDe9HQw2TL5CX19ddorXk3hxbBv9EkXjKWbeRabBz5lIt3dodaOY9/8YIqS+1UOjSJxcVP
	UDx4akHPctFLy7p70K8r7AjJ1NDRyRGNtyS3liYgKhIrl6sWpHYVVm4sqYh4DHS1/950E7
	Y3jbMsBRxvOSb2luNTDIibHuuEuFogk=
X-MC-Unique: lbt7wlz2OTirEiRABSiFdg-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VfXQ0q4iGR/TXOxJFobspa/wsJ+BbgVO2TjvUlH4hzuZegyAEKB75tj8v4hrmaManSk1y5qsG1LQOcnNVFkxyBGBg3+wpBCBsA4RB9PGoP0uWIL+UZ4uYh4AZ+v55Wl0M+m74tl9uvGGAsIpHeCgui4ezMkZWCS1q8pIWqanRntxCJsbaLU7ZKHdERmjUSiVAVF1WvDu6aDwKXocjHw6iMjn5nnV8ZpP2FPD2rdxvML4u6PUnxQeHIRx4JOCTQT2LoICP/fNqsTgdjQt5JroUrxM3CAne5bh6D49xWC2NMpgtroCqxF8Dosi6VxqCXuLiJqFvKzFsuG1V7IuvfSRzQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P81GsxTs/2DdL45WpwXDIwGJ3vkgQ2EGgQS6E3ldKns=;
 b=d5LdxdioawRpfUu1j9yUAkqf+FiZ6Lji0Gw3LZKxUNQ6kN6Zlm8FZLquayLWQdsttEaPOfN7Nz+dCMsqsFp4rpckYPzr6JQm1kDexRK7Dm2N95N+D3uREABcF0ASbU852LcCkaB1jIgGrMR4S9NREVU0iA1vytAUEzu6XwSZhr4xZbQqETf9SEzjqvbguChcJvc4v+ZmKY72mQ168O2oFX4X0CfVCkasR2RKuonhF8rR3cmODVIjg4K5Q5VMM4jLg6D2DM3gnSDYhpqYMG6tLGrWE1oetaeGPvmy9KJlf7sr/s36JZUNk02PCiWoGmbXFIdn2NSI1pmCFF1bOWrJDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 2/4] tests/resource: Rework Makefile
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
 <20210622182124.11571-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cb9dc908-78d2-7738-9781-c29b0b37994a@suse.com>
Date: Mon, 28 Jun 2021 14:42:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210622182124.11571-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 22.06.2021 20:21, Andrew Cooper wrote:
> In particular, fill in the install/uninstall rules so this test can be
> packaged to be automated sensibly.
> 
> Make all object files depend on the Makefile, drop redundant -f's for $(RM),
> and use $(TARGET) when appropriate.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/tools/tests/resource/Makefile
> +++ b/tools/tests/resource/Makefile
> @@ -12,17 +12,20 @@ run: $(TARGET)
>  
>  .PHONY: clean
>  clean:
> -	$(RM) -f -- *.o $(TARGET) $(DEPS_RM)
> +	$(RM) -- *.o $(TARGET) $(DEPS_RM)
>  
>  .PHONY: distclean
>  distclean: clean
> -	$(RM) -f -- *~
> +	$(RM) -- *~

While needing to repeat very similar clean: rules in every Makefile
already looks odd to me, having to repeat this distclean: rule
everywhere leaves me with even more question marks. But of course this
is nothing you introduce here, so this is merely a remark.
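
The duplication remarked upon above could in principle be factored into a
shared fragment included by each test Makefile; a minimal sketch (the
fragment's name and location are invented for illustration, nothing the
patch itself adds):

```make
# Hypothetical shared fragment, e.g. tools/tests/Makefile.common
# (name made up for illustration).  Each test Makefile would set
# TARGET and then `include ../Makefile.common`.

.PHONY: clean
clean:
	$(RM) -- *.o $(TARGET) $(DEPS_RM)

.PHONY: distclean
distclean: clean
	$(RM) -- *~
```

Note that $(RM) already expands to `rm -f` by default, which is why the
patch can drop the redundant -f.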

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:48:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:48:09 +0000
Subject: Re: [PATCH 3/4] tests/cpu-policy: Rework Makefile
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
 <20210622182124.11571-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b8ff5619-2684-b8d4-c30f-2102fbd72284@suse.com>
Date: Mon, 28 Jun 2021 14:47:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210622182124.11571-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 22.06.2021 20:21, Andrew Cooper wrote:
> @@ -23,23 +21,32 @@ run: $(TARGET-y)
>  
>  .PHONY: clean
>  clean:
> -	$(RM) -f -- *.o .*.d .*.d2 test-cpu-policy
> +	$(RM) -- *.o $(TARGETS) $(DEPS_RM)
>  
>  .PHONY: distclean
>  distclean: clean
> -	$(RM) -f -- *~
> +	$(RM) -- *~
>  
>  .PHONY: install
>  install: all
> +	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
> +	$(if $(TARGETS),$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN))
>  
>  .PHONY: uninstall
> +uninstall:
> +	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
>  
> -CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -O3
> +CFLAGS += -Werror -D__XEN_TOOLS__
> +CFLAGS += $(CFLAGS_xeninclude)
>  CFLAGS += $(APPEND_CFLAGS)
>  
> -vpath %.c ../../../xen/lib/x86
> +LDFLAGS += $(APPEND_LDFLAGS)
> +
> +vpath %.c $(XEN_ROOT)/xen/lib/x86

Is this a good move? In general I think relative references are better,
because it is then possible to move the tree as a whole (or to access it
from multiple locations, where each one has it appearing in a different
place in the file system). I do realize, though, that we have many such
absolute references, so one more isn't making things much worse.
Still,
Reviewed-by: Jan Beulich <jbeulich@suse.com>
preferably with it left relative (or a strong reason for making it
absolute spelled out in the description).
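
For reference, the two forms under discussion side by side (both lines
are taken from the quoted diff; $(XEN_ROOT) is normally set to an
absolute path, which is what makes the second form effectively
absolute):

```make
# Relative reference (pre-patch): survives moving the whole tree,
# or accessing it via different mount points.
vpath %.c ../../../xen/lib/x86

# $(XEN_ROOT)-anchored reference (what the patch introduces).
vpath %.c $(XEN_ROOT)/xen/lib/x86
```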

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 12:52:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 12:52:38 +0000
Subject: Re: [PATCH 4/4] tests/xenstore: Rework Makefile
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
 <20210622182124.11571-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <290f1e12-0185-977c-4a6b-3cb8fe9d919d@suse.com>
Date: Mon, 28 Jun 2021 14:52:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210622182124.11571-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 22.06.2021 20:21, Andrew Cooper wrote:
> In particular, fill in the install/uninstall rules so this test can be
> packaged to be automated sensibly.
> 
> This causes the code to be noticed by CI, which objects as follows:
> 
>   test-xenstore.c: In function 'main':
>   test-xenstore.c:486:5: error: ignoring return value of 'asprintf', declared
>   with attribute warn_unused_result [-Werror=unused-result]
>        asprintf(&path, "%s/%u", TEST_PATH, getpid());
>        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> Address the CI failure by checking the asprintf() return value and exiting.
> 
> Rename xs-test to test-xenstore to be consistent with other tests.  Honour
> APPEND_FLAGS too.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 13:00:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 13:00:14 +0000
	bGFKoA0KrS35hHI3z/8YjBv6uMjlGmY=
X-MC-Unique: BzvnfWplOSGtLwcI2QUZrg-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FjZkC9HvXWtttwMNKjLESiRD9kkXVN4FPArxr36cZ5rNEdyPwB3CVlrvu1aFZh1RbwXXiCYXoVoEwSxsABUIB0MDRrJGzhexxPjhnr58s45fvSnG123FsFYz08+srcZGeJXbR5GsRa7ZpHT5di019rNzRKCnFZ9coEvSDGxBygQm6kXt7NL68XbkKZkL++7w34W3Z+/d+qlokteMWC45e1BIdDnfsJj2h8eNt+EQbssSRRorxQDRVMBx8j2am6a0jJ9A3gxG9m/8kgnjUZhAJAYVqZCSv/rQzEdeGw2viGjW18A85lbtudCl2+vGMG/QWXNPLLg5/vA32Qc9inf7fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4mQ6VsUPqqk/73sP0wLrxM9bN58O+3jIUWEXrgvJS2w=;
 b=GMXrBlwsSayYMYhVTXAmWgiF5JbA5Wkyx6EJpWfCDOdQ0bu+dCpX0dQ/PVt5wi3fllgP9BNrwUYKOCYXiWjVF0QanyIlCMG13NC1pFd/6PzwFMW33rYhETH5IiKwromYaDbvGjkQKFk+0qn/fc9Gz7/Q2JTUrKmDss5QVKJy72GoiR5Iq/3pbODwc6fUuoHz+5ds4bBZj6+ewdoPVrAsAZAG9a5PfacsdutvrSfBKmB/tIIVVR6dDs543e0ru7h5JcY8khvaqRwzLyZ7sOQBkK6o2knTGYtYcmyOqAlRkuI0ZsTgA53TYs1k9cYWCgVFyHtGEzyHMKONlchBGCw+wg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2 0/4] tools/tests: More cleanup for automation
 improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210622182124.11571-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88a88d81-38d3-f039-e8cd-342c07a09b14@suse.com>
Date: Mon, 28 Jun 2021 14:59:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210622182124.11571-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0030.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:50::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fa632f85-412a-4613-2b62-08d93a34a333
X-MS-TrafficTypeDiagnostic: VI1PR04MB6861:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB686195D2A4B44DB058CB89A0B3039@VI1PR04MB6861.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KbjwCz1OyUDH82aMspGOksyzZH5LikO1yejKymbTlaKzvuq7d60g9/8jTldTS7H8uN4ytx4p23/SlxC5+f8jaVAzDO/nomis/d7k/pdtKwK0t1GM58MOvTh8/hJQYwE75Z5HT/FII9C9bVRgDh3sFdCu1ttNDAjHyRkAi1nOnrdQ9hDuINoQcfBqNr5VL6WG02PJ21pKV5EdOfirGH9VTpAabbHdv5J+sm7BrkYJskaq6XDly+sxGGc1P+yGeIIgfxomuQHhbjvZdIhRHjKqlIiw+dlJP1K//J3FtquXj9m3x2uDna+fkvrRAV2AES/g/61sk1a0pJOPzioZvj5C2JbISqJVgkKNIsbwiOqvg56d91HFXqZWZHLACyO84P5MlmRix7EvSunedLIQZDe57BtSMgAtI2m0NNPOMnXsoRV3F/k2mzHQXpwWwwov+5A/9vMEuZj10b0HbqdyyMA/zfiHqgjdQ5eqZgViY17T3DtKt/5mNQsJbtjAm0UDqDtP3Y0+a1/8n45zR+mq1qRQUoXr+HvMIlC2ICZqWuIstTKgm2ig80+OQbht2wF5jR5VmmDBhS6ipayjJKtCoBGTfDOKdMrfBB21pTx+2WCoWnQnwXdSo7VXajiXX4XiqFVgwFzEjyTwtDDa4gYLYD6SS9dW7kuGMsx0ugg53lC+HvoDqUWqDG/nopOje7Qlb0EEXnKYMWRPpTxs1Xa2uVrbV00GNwrY9jQYVrgdAcn9ZaQ=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(39860400002)(346002)(396003)(366004)(376002)(6486002)(4326008)(86362001)(8676002)(26005)(478600001)(316002)(83380400001)(5660300002)(956004)(16576012)(54906003)(2616005)(53546011)(66476007)(66946007)(8936002)(36756003)(31686004)(66556008)(31696002)(6916009)(2906002)(186003)(38100700002)(16526019)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?U3k1TmxlVmNWQ3hoRmJ2QTN3d1BtQ0gvcFZOYzg1dUw3Vmt2ZExGWU0wempR?=
 =?utf-8?B?M1JKRGJTNnNmMkl1QWxtVlpDbE9HcmlHd3hYY2tBaG5XQjJlM0JvOWlqK3V3?=
 =?utf-8?B?SUtwMk5EV2tWcGZpMTAyc21sUk90RHF1NmpYTzc4NW0yaXdna2FxbTVtUjc3?=
 =?utf-8?B?QzJCVkN3WTR6dW1VWEpTOEtwYkw3Tzg1RTNGcCtjVDhFdmdzU1IrdkJDa2JS?=
 =?utf-8?B?clgxd1FqVzJjSGxHZURzT2lTeHAxeU5LcGFWTmM1MXFXcEtQekI4T2RkV1Vw?=
 =?utf-8?B?RFVvRmxhOU5jNlM5MDBCVVh4UlVmd2Q3TjVKWkVRbkp1Y254OXZmcEVFTHdh?=
 =?utf-8?B?Y3NNWGpIU3p3TVhqZlpwMzVwNGhHczJXdXo4c0NURXJkaEpTSzVPNFJ0UWJE?=
 =?utf-8?B?N1JXb3FUZUwrTmtPdWdHUnRWRnV2Nmd5Z0JJNUQzOFJQQnJ4RE9aSWxOTEgv?=
 =?utf-8?B?eTdqNFNmT25sZ2hhUC9YeE1iUWFJTHdCSFVDL3RtZjVFL01uZFNJWXg1QjRC?=
 =?utf-8?B?Z2VJY1E4UFhGblA3NGx3WXVQU2lMOGhXVEhsSHR0c0VSNisydUN2cXFBWmFv?=
 =?utf-8?B?WnRncDhBcVhyemdTYXZGYkg2S2d5Rk9sbjZkWFpHdGNWQ0x0eTNMK2h2amRq?=
 =?utf-8?B?TGRlNDVYYkI2b0RoTFFmdGR6a1ZiN2UrZlJxK2duQlFERndTNWE3QXA2R0Fw?=
 =?utf-8?B?SmU2Y09BZWNaU0d6eWFyTncwVHRSTDlMNTF3YlozQStiTlJTRkxOSktqUjgv?=
 =?utf-8?B?UUpBc3p2N2pUR1NEaVhNc0xoYy9ZemZRb1dCWmRPVFlyODFxUmpZWGYwZlZG?=
 =?utf-8?B?NXhKN0pXN2lCSklFeFRrTE1CYmFSOUhLQU1DeW1QWDhZekZmQzl0SDlvY1h2?=
 =?utf-8?B?TUdUenExWENzT3MzdE93Tm9LTjZUNm5DeWtUZHVYSmx1eGRjMEJEaG5wR0JW?=
 =?utf-8?B?d3ZYRHg2SHdOWEZnVXVkaWdKZXNiNGZlaGlpQ0ZjcExsWkxEMTUreTZ0MGRo?=
 =?utf-8?B?dWtXZGlSTkZMNFlaY1dBTXNwaXpvcXpnLzlIT2lRckZDME01TjFEcDUxalRB?=
 =?utf-8?B?cFNJSStqa3k1WFpQelhSUnZCb0JWY25xOGM4OCtTK2ZKV3hITjgvWm9IeHU3?=
 =?utf-8?B?M0krck10RHZ4dVZPUUdmYWQ0ZWFvMVFwdTlrUDZWMHp5RldDRmQyWTZYSmJI?=
 =?utf-8?B?UGd3WTI4STQwaFBtVXgySXZydy9pNFRrUGxGYzdxSU9ON053TmJKN0Iyd1ZM?=
 =?utf-8?B?OHQyelJRZm1oMmdTYkpoRXNCcDlUYWpVTkR2c2M3NUpoU295ZHNOcks4bElU?=
 =?utf-8?B?aDJDb1FPU0FhMUZMTjlRLzUzRWtzaEFoakkyRnpuUytCQXdrUk9lcVhWN2JZ?=
 =?utf-8?B?dkl1WU1mYW9GSmJYU3laRzR4RWlBNE9rK3JMbGxIdXdEQWtxY0RPdzBMa3FY?=
 =?utf-8?B?SDE5cXhaaUtXbFBpellCajhXNjJTbTRBZnVQYnZBL203RW1GMWJzcXdRM2d3?=
 =?utf-8?B?di9rRWIrbmdrQmcyUFN1VUhFRGQrZWRPNzhtTVNxbDk5ZktOUWZIZmFXSHBs?=
 =?utf-8?B?MnBVTTlyNnVveWx5aG5la25pNFpVTWc3ZHRDYlMwcHpRenQ1RVhsSmk3cnZs?=
 =?utf-8?B?aVFNRXNuSkhhakFveHhXNEYvRWZ5dzB4b3RBa0ltZnM2YlpGZHYycmoyenhr?=
 =?utf-8?B?OFdjOW9XeU5QK0xlZm1JanoxWm02S2FhdGR4NFlOUWtzK2JtUEJETEQxZDY0?=
 =?utf-8?Q?1fw46F1xHVj4jyAFSMnTK/MYjbV93KcDwJFdV+E?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fa632f85-412a-4613-2b62-08d93a34a333
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 13:00:00.1695
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +psO+f7VsZniqUknP8K1x4XdGuGwpqL3AP9Mpb/uvtJ7ZrJiuiDa5wGwMFWiSKjBfvPqqO8H672B4HEzCG3MUQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6861

On 22.06.2021 20:21, Andrew Cooper wrote:
> v2:
>  * Fix CI failures from newly-exposed logic
>  * Drop -f's from $(RM)
>  * Drop the 'run' rune patch.  It's clearly controversial, but ignoring the
>    problems isn't an available option in the long term.

What is "the problem" here? The presence of the run targets in
the first place (and their wiring up from the top-level
Makefile, allowing direct invocation)? If so, I'm afraid I
haven't seen any replacement proposals from you so far (nor an
explanation of why exactly this would be a problem).

> All other RFC questions still outstanding.

I didn't find any here or in the individual patches.

Also a remark on patches 2 ... 4, each saying "fill in the
install/uninstall rules so this test can be packaged to be
automated sensibly": Why is running (or at least picking) tests
from the build area not an option in an automated environment?
And why is installing tests unconditionally a generally good
idea? I'd view this as unnecessary bloat for the majority of
downstreams.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 13:42:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 13:42:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147879.272997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxrX2-0006um-4B; Mon, 28 Jun 2021 13:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147879.272997; Mon, 28 Jun 2021 13:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxrX2-0006uf-17; Mon, 28 Jun 2021 13:42:36 +0000
Received: by outflank-mailman (input) for mailman id 147879;
 Mon, 28 Jun 2021 13:42:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPZa=LW=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lxrX0-0006uZ-GC
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 13:42:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79d6af25-9e46-4954-944d-a37b6652b773;
 Mon, 28 Jun 2021 13:42:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79d6af25-9e46-4954-944d-a37b6652b773
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1624887753;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=5GABtzq/aApw+RmaCklAMEhAkSpVLs8/ZGT9NtYDllw=;
  b=IrLyyvs+5M2lB2mwgDyT5AaKKizPBRcejdxVOdLJmu17g+kmmRsMAzqi
   WRPtFf7UY38NBP8aW78PR7aG5wdpL8jkUHiTAtjmhO3rEtuQgwgLKiDHz
   eWTlMC5SLLoaWcTfTvQQdHx0kGgOp+Wqrc1YplgYy180Bgu5VdCg7TOuq
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LOVd+2YJSp/+4OH/j3/N4E1gvEKfNex1rOR3qesyCciiE+fJtZZNMhfg3zvi3Z5+HjhH+uXgdA
 qrw89qXWLp3Ym16c+AotBa1ExQaeQxJQ9hzN0xTakk9of4kuGV5oOXRbNzj2+fRKQA5OWE9jcV
 eb00YJALzi/16TrTTIMF97MWbf4ogMcx5bl5dNJcVaWUf0ITNfOVXBdWnjGKKd1N8ePs72iYOw
 8z8C7kn+yGZkONYVGKPlfOqzeWfO9fE71V+oEjp7h66a1tl1SpR1OpF5tuPGlUMQ2CTrplyjBM
 3tU=
X-SBRS: 5.1
X-MesageID: 47472394
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:evEiqqB3klI8HDTlHemm55DYdb4zR+YMi2TC1yhKJiC9Ffbo8v
 xG/c5rsiMc5wxxZJhNo7290cq7MBHhHPxOgbX5VI3KNGKNhILBFvAH0WKI+VPd8kPFmtK1rZ
 0QEJRDNA==
X-IronPort-AV: E=Sophos;i="5.83,306,1616472000"; 
   d="scan'208";a="47472394"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH] Config.mk: re-pin OVMF changeset and unpin qemu-xen
Date: Mon, 28 Jun 2021 14:42:17 +0100
Message-ID: <20210628134217.47622-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The qemu-xen tree has an osstest gate and doesn't need to be pinned.

On the other hand, OVMF's xen repository doesn't have a gate and needs
to be pinned. The "master" branch now corresponds to the tag
"edk2-stable202105", so pin to that commit.

Fixes: a04509d34d72 ("Branching: Update version files etc. for newly unstable")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 Config.mk | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Config.mk b/Config.mk
index 9a174c77f383..f9dce4549b7c 100644
--- a/Config.mk
+++ b/Config.mk
@@ -244,8 +244,8 @@ QEMU_TRADITIONAL_URL ?= git://xenbits.xen.org/qemu-xen-traditional.git
 SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 MINIOS_UPSTREAM_URL ?= git://xenbits.xen.org/mini-os.git
 endif
-OVMF_UPSTREAM_REVISION ?= master
-QEMU_UPSTREAM_REVISION ?= 7ea428895af2840d85c524f0bd11a38aac308308
+OVMF_UPSTREAM_REVISION ?= e1999b264f1f9d7230edf2448f757c73da567832
+QEMU_UPSTREAM_REVISION ?= master
 MINIOS_UPSTREAM_REVISION ?= 051b87bb9c19609976fb038f386920e1ce5454c5
 
 SEABIOS_UPSTREAM_REVISION ?= rel-1.14.0
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 13:48:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 13:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147882.273008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxrcQ-0007Zd-P6; Mon, 28 Jun 2021 13:48:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147882.273008; Mon, 28 Jun 2021 13:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxrcQ-0007ZW-LI; Mon, 28 Jun 2021 13:48:10 +0000
Received: by outflank-mailman (input) for mailman id 147882;
 Mon, 28 Jun 2021 13:48:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lxrcO-0007ZQ-Ry
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 13:48:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lxrcO-00024i-Oy
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 13:48:08 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lxrcO-00044F-Ng
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 13:48:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lxrcL-0001V7-DR; Mon, 28 Jun 2021 14:48:05 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=yaVmPCXqZy3Cc00aVR0hqcQKO8JE4koDvEWPy5xLXKk=; b=X816NyEXyQkHQS+GWV2VjZHKeP
	57OjmS1W5OpwZS8GswiIqh6JSkrPRtEFG7Ph7Rgddjywd41FjDBgMLNtYGNYMJzal08nzV/0JJacr
	C0alRiv074DaXbde/SR6Gw3wAX1s9jQPDinRBgrLw3/wUje6r6SUjAg41O7DMnNoxVNo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24793.54037.159770.159505@mariner.uk.xensource.com>
Date: Mon, 28 Jun 2021 14:48:05 +0100
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH] Config.mk: re-pin OVMF changeset and unpin qemu-xen
In-Reply-To: <20210628134217.47622-1-anthony.perard@citrix.com>
References: <20210628134217.47622-1-anthony.perard@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Anthony PERARD writes ("[XEN PATCH] Config.mk: re-pin OVMF changeset and unpin qemu-xen"):
> The qemu-xen tree has an osstest gate and doesn't need to be pinned.
> 
> On the other hand, OVMF's xen repository doesn't have a gate and needs
> to be pinned. The "master" branch now corresponds to the tag
> "edk2-stable202105", so pin to that commit.
> 
> Fixes: a04509d34d72 ("Branching: Update version files etc. for newly unstable")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Ian Jackson <iwj@xenproject.org>

Looks like I adjusted the wrong line in a04509d34d72.  Sorry.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 14:34:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 14:34:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147896.273022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxsLV-0004Dg-8n; Mon, 28 Jun 2021 14:34:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147896.273022; Mon, 28 Jun 2021 14:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxsLV-0004DZ-5B; Mon, 28 Jun 2021 14:34:45 +0000
Received: by outflank-mailman (input) for mailman id 147896;
 Mon, 28 Jun 2021 14:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxsLT-0004DP-GY; Mon, 28 Jun 2021 14:34:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxsLT-0002uu-BX; Mon, 28 Jun 2021 14:34:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxsLT-0007uV-2J; Mon, 28 Jun 2021 14:34:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxsLT-0000d7-1k; Mon, 28 Jun 2021 14:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qMhH0cEWO0EsCtFVwA/1CWVqoqvtSsSqdnTxd6CfK6k=; b=eiwHjj8FMOJrDQBqnt/Tfwib9H
	6cqCq4Yg/SGDrKOY3C2hkKwo/C95SFlOljZeYz5y8yJ76QDZZQYvjUSIjheo01mxvGto+tUsfnx+U
	V1a/mXhu9mJk7fRc5u3MPeq1jB++fnQaUf/p34BiN11H3y3QZPa0yEUadO8JnGzglLH0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163166-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163166: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
X-Osstest-Versions-That:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 14:34:43 +0000

flight 163166 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163166/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104
baseline version:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83

Last test of basis   163101  2021-06-25 23:00:26 Z    2 days
Testing same since   163166  2021-06-28 11:01:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bb11edcec1..c636a5fe59  c636a5fe59575d84778f676ca1728fbd1a7c7104 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 14:46:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 14:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147901.273037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxsWO-0005ha-AE; Mon, 28 Jun 2021 14:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147901.273037; Mon, 28 Jun 2021 14:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxsWO-0005hT-6f; Mon, 28 Jun 2021 14:46:00 +0000
Received: by outflank-mailman (input) for mailman id 147901;
 Mon, 28 Jun 2021 14:45:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lxsWN-0005hN-Ap
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 14:45:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxsWL-00035g-Au; Mon, 28 Jun 2021 14:45:57 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxsWL-0001ks-4g; Mon, 28 Jun 2021 14:45:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=eC2VJaT6tB/N1fcv1Z1pshj4ynZCbfN/TZEKca4qZno=; b=IGDOwRdV1dGS3YwDvlaL03Qw/a
	VDwHRwXfzE35YBF3oOslhUVmYZNxnFtYWjo3n9X0Fm4Sw6T65N34qR5pyrB72S1MCAtBaCGugc70z
	7OpxxEBgM5VX/tRRLTc1a83h3yDfMF1S6hjuD7VgGdDoXsrJ9Wr34nOyS76HjOhqEu9M=;
Subject: Re: [PATCH] fully replace mfn_to_gmfn()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b90cc2da-d1d6-9a85-9601-1708ab094b54@xen.org>
Date: Mon, 28 Jun 2021 15:45:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 28/06/2021 12:52, Jan Beulich wrote:
> Convert the two remaining uses as well as Arm's stub to the properly
> named and type-safe mfn_to_gfn(), dropping x86's definition (where we
> already have mfn_to_gfn()).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -111,7 +111,8 @@ void getdomaininfo(struct domain *d, str
>       info->outstanding_pages = d->outstanding_pages;
>       info->shr_pages         = atomic_read(&d->shr_pages);
>       info->paged_pages       = atomic_read(&d->paged_pages);
> -    info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
> +    info->shared_info_frame =
> +        gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
>       BUG_ON(SHARED_M2P(info->shared_info_frame));
>   
>       info->cpupool = cpupool_get_id(d);
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -714,13 +714,13 @@ static long memory_exchange(XEN_GUEST_HA
>            */
>           while ( (page = page_list_remove_head(&in_chunk_list)) )
>           {
> -            unsigned long gfn;
> +            gfn_t gfn;
>   
>               mfn = page_to_mfn(page);
> -            gfn = mfn_to_gmfn(d, mfn_x(mfn));
> +            gfn = mfn_to_gfn(d, mfn);
>               /* Pages were unshared above */
> -            BUG_ON(SHARED_M2P(gfn));
> -            if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) )
> +            BUG_ON(SHARED_M2P(gfn_x(gfn)));
> +            if ( guest_physmap_remove_page(d, gfn, mfn, 0) )
>                   domain_crash(d);
>               free_domheap_page(page);
>           }
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -328,8 +328,7 @@ struct page_info *get_page_from_gva(stru
>   
>   /* Xen always owns P2M on ARM */
>   #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
> -#define mfn_to_gmfn(_d, mfn)  (mfn)
> -
> +#define mfn_to_gfn(d, mfn) _gfn(mfn_x(mfn))

I still have a series pending to drop mfn_to_g{,m}fn() on Arm. But it 
has been stuck for quite a while on reaching an agreement on the 
toolstack interface (see [1]).

Anyway, this definition is no worse than before. So:

Acked-by: Julien Grall <julien@xen.org>

Cheers,

[1] 
https://lore.kernel.org/xen-devel/32d4f762-a61d-bfdd-c4a8-38e5edef1aa8@xen.org/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 15:00:38 2021
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2] libx86: Introduce x86_cpu_policy_calculate_compatible() with MSR_ARCH_CAPS handling
Date: Mon, 28 Jun 2021 16:00:11 +0100
Message-ID: <20210628150011.13106-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Just as with x86_cpu_policies_are_compatible(), make a start on this function
with some token handling.

Add levelling support for MSR_ARCH_CAPS, because RSBA has interesting
properties, and introduce test_calculate_compatible_success() to the unit
tests, covering various cases where the arch_caps CPUID bit falls out, and
with RSBA accumulating rather than intersecting across the two.

Extend x86_cpu_policies_are_compatible() with a check for MSR_ARCH_CAPS, which
was arguably missing from c/s e32605b07ef "x86: Begin to introduce support for
MSR_ARCH_CAPS".

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Don't break the libxl build.
 * Extend the function description to discuss why levelling the max policy
   doesn't make sense, and that we (will) pick some data from @a specifically.
 * Ifdefary for now until a Xen caller arrives.
---
 tools/include/xen-tools/libs.h           |   5 ++
 tools/libs/light/libxl_internal.h        |   2 -
 tools/tests/cpu-policy/test-cpu-policy.c | 150 +++++++++++++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  28 ++++++
 xen/lib/x86/policy.c                     |  49 ++++++++++
 5 files changed, 232 insertions(+), 2 deletions(-)

diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
index a16e0c3807..4de10efdea 100644
--- a/tools/include/xen-tools/libs.h
+++ b/tools/include/xen-tools/libs.h
@@ -63,4 +63,9 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#ifndef _AC
+#define __AC(X, Y)   (X ## Y)
+#define _AC(X, Y)    __AC(X, Y)
+#endif
+
 #endif	/* __XEN_TOOLS_LIBS__ */
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 0b4671318c..1816c9704f 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -126,8 +126,6 @@
 #define PVSHIM_CMDLINE "pv-shim console=xen,pv"
 
 /* Size macros. */
-#define __AC(X,Y)   (X##Y)
-#define _AC(X,Y)    __AC(X,Y)
 #define MB(_mb)     (_AC(_mb, ULL) << 20)
 #define GB(_gb)     (_AC(_gb, ULL) << 30)
 
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 75973298df..455b4fe3c0 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
     }
 }
 
+static void test_calculate_compatible_success(void)
+{
+    static struct test {
+        const char *name;
+        struct {
+            struct cpuid_policy p;
+            struct msr_policy m;
+        } a, b, out;
+    } tests[] = {
+        {
+            "arch_caps, b short max_leaf",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 6,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 6,
+            },
+        },
+        {
+            "arch_caps, b feat missing",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+            },
+        },
+        {
+            "arch_caps, b rdcl_no missing",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+        },
+        {
+            "arch_caps, rdcl_no ok",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+        },
+        {
+            "arch_caps, rsba accum",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rsba = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rsba = true,
+            },
+        },
+    };
+    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
+
+    printf("Testing calculate compatibility success:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        struct test *t = &tests[i];
+        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
+        struct msr_policy *m = malloc(sizeof(struct msr_policy));
+        struct cpu_policy a = {
+            &t->a.p,
+            &t->a.m,
+        }, b = {
+            &t->b.p,
+            &t->b.m,
+        }, out = {
+            p,
+            m,
+        };
+        struct cpu_policy_errors e;
+        int res;
+
+        if ( !p || !m )
+            err(1, "%s() malloc failure", __func__);
+
+        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
+
+        /* Check the expected error output. */
+        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
+        {
+            fail("  Test '%s' expected no errors\n"
+                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
+                 t->name, res, e.leaf, e.subleaf, e.msr);
+            goto test_done;
+        }
+
+        if ( memcmp(&t->out.p, p, sizeof(*p)) )
+        {
+            fail("  Test '%s' resulting CPUID policy not as expected\n",
+                 t->name);
+            goto test_done;
+        }
+
+        if ( memcmp(&t->out.m, m, sizeof(*m)) )
+        {
+            fail("  Test '%s' resulting MSR policy not as expected\n",
+                 t->name);
+            goto test_done;
+        }
+
+    test_done:
+        free(p);
+        free(m);
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -793,6 +941,8 @@ int main(int argc, char **argv)
     test_is_compatible_success();
     test_is_compatible_failure();
 
+    test_calculate_compatible_success();
+
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
     else
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 5a2c4c7b2d..0e8ff1194a 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -37,6 +37,34 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
+/*
+ * Given two policies, calculate one which is compatible with each.
+ *
+ * i.e. Given host @a and host @b, calculate what to give a VM so it can live
+ * migrate between the two.
+ *
+ * @param a        A cpu_policy.
+ * @param b        Another cpu_policy.
+ * @param out      A policy compatible with @a and @b, if successful.
+ * @param err      Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * For typical usage, @a and @b should be default system policies of the same
+ * type (i.e. PV or HVM) from different hosts.  It does not make sense to try
+ * to level max policies, as they contain non-migratable features.
+ *
+ * Some data (e.g. the long CPU brand string) cannot be levelled.  Such data
+ * will be taken from @a, and the content in @b will be discarded.
+ *
+ * It is possible that @a and @b cannot be resolved to a migration-compatible
+ * policy.  In this case, the optional @err pointer may identify the
+ * problematic leaf/subleaf and/or MSR.
+ */
+int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
+                                        const struct cpu_policy *b,
+                                        struct cpu_policy *out,
+                                        struct cpu_policy_errors *err);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f6cea4e2f9..de14fe4912 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
     if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
         FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
 
+    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
+        FAIL_MSR(MSR_ARCH_CAPABILITIES);
+
 #undef FAIL_MSR
 #undef FAIL_CPUID
 #undef NA
@@ -43,6 +46,52 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
     return ret;
 }
 
+#ifndef __XEN__
+int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
+                                        const struct cpu_policy *b,
+                                        struct cpu_policy *out,
+                                        struct cpu_policy_errors *err)
+{
+    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
+    const struct msr_policy *am = a->msr, *bm = b->msr;
+    struct cpuid_policy *cp = out->cpuid;
+    struct msr_policy *mp = out->msr;
+
+    memset(cp, 0, sizeof(*cp));
+    memset(mp, 0, sizeof(*mp));
+
+    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
+
+    if ( cp->basic.max_leaf >= 7 )
+    {
+        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
+
+        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
+        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
+        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
+    }
+
+    /* TODO: Far more. */
+
+    mp->platform_info.raw = am->platform_info.raw & bm->platform_info.raw;
+
+    if ( cp->feat.arch_caps )
+    {
+        /*
+         * RSBA means "RSB Alternative", i.e. RSB stuffing is not necessarily
+         * safe.  It needs to accumulate rather than intersect across a
+         * resource pool.
+         */
+#define POL_MASK ARCH_CAPS_RSBA
+        mp->arch_caps.raw = ((am->arch_caps.raw ^ POL_MASK) &
+                             (bm->arch_caps.raw ^ POL_MASK)) ^ POL_MASK;
+#undef POL_MASK
+    }
+
+    return 0;
+}
+#endif /* !__XEN__ */
+
 /*
  * Local variables:
  * mode: C
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 15:43:12 2021
Subject: Re: [PATCH] fully replace mfn_to_gmfn()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <263f1b30-e33d-711c-ff22-64f8acad230d@citrix.com>
Date: Mon, 28 Jun 2021 16:42:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
MIME-Version: 1.0

On 28/06/2021 12:52, Jan Beulich wrote:
> Convert the two remaining uses as well as Arm's stub to the properly
> named and type-safe mfn_to_gfn(), dropping x86's definition (where we
> already have mfn_to_gfn()).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, but ...

> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -328,8 +328,7 @@ struct page_info *get_page_from_gva(stru
>  
>  /* Xen always owns P2M on ARM */
>  #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
> -#define mfn_to_gmfn(_d, mfn)  (mfn)
> -
> +#define mfn_to_gfn(d, mfn) _gfn(mfn_x(mfn))

... surely this wants to be ((void)(d), _gfn(mfn_x(mfn))), even if it's
just a latent bug right now?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 15:59:44 2021
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>
References: <5d2bb2cf-8c0c-7300-c895-75bef0e50817@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libxl/x86: check return value of SHADOW_OP_SET_ALLOCATION
 domctl
Message-ID: <ef6a73e6-974f-67a3-5a4f-6be6f27d412b@citrix.com>
Date: Mon, 28 Jun 2021 16:59:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <5d2bb2cf-8c0c-7300-c895-75bef0e50817@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 28/06/2021 12:47, Jan Beulich wrote:
> The hypervisor may not have enough memory to satisfy the request.
>
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Especially if the request was mostly fulfilled, guests may have done
> fine despite the failure, so there is a risk of perceived regressions
> here. But not checking the error at all was certainly wrong.
>
> --- a/tools/libs/light/libxl_x86.c
> +++ b/tools/libs/light/libxl_x86.c
> @@ -531,8 +531,18 @@ int libxl__arch_domain_create(libxl__gc
>      if (d_config->b_info.type != LIBXL_DOMAIN_TYPE_PV) {
>          unsigned long shadow = DIV_ROUNDUP(d_config->b_info.shadow_memkb,
>                                             1024);
> -        xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
> -                          NULL, 0, &shadow, 0, NULL);
> +        int rc = xc_shadow_control(ctx->xch, domid,
> +                                   XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
> +                                   NULL, 0, &shadow, 0, NULL);
> +
> +        if (rc) {
> +            LOGED(ERROR, domid,
> +                  "Failed to set %s allocation: %d (errno:%d)\n",
> +                  libxl_defbool_val(d_config->c_info.hap) ? "HAP" : "shadow",

The error message wants to include the value of shadow, in case the
cause of the failure is that the value is stupidly high.

Having traced the number through, the local variable wants to be named
shadow_mb to try and make the units clearer.  (Also - why on earth do we
have a hypercall which takes integer units of MB, and a memkb field in
the config file...)

Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 16:05:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 16:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147913.273081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtl6-0007GR-Kc; Mon, 28 Jun 2021 16:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147913.273081; Mon, 28 Jun 2021 16:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtl6-0007GK-HY; Mon, 28 Jun 2021 16:05:16 +0000
Received: by outflank-mailman (input) for mailman id 147913;
 Mon, 28 Jun 2021 16:05:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxtl4-0007GA-TR; Mon, 28 Jun 2021 16:05:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxtl4-0004xM-Kc; Mon, 28 Jun 2021 16:05:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxtl4-0003WD-Bu; Mon, 28 Jun 2021 16:05:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxtl4-0006UE-BN; Mon, 28 Jun 2021 16:05:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163163-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163163: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3955ae93f5151ad2e982440b7c8d3776a9afee2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 16:05:14 +0000

flight 163163 qemu-mainline real [real]
flight 163170 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163163/
http://logs.test-lab.xenproject.org/osstest/logs/163170/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                e3955ae93f5151ad2e982440b7c8d3776a9afee2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  312 days
Failing since        152659  2020-08-21 14:07:39 Z  311 days  571 attempts
Testing same since   163110  2021-06-26 04:20:02 Z    2 days    5 attempts

------------------------------------------------------------
549 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178478 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 16:14:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 16:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147919.273095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtuA-0000Nq-QE; Mon, 28 Jun 2021 16:14:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147919.273095; Mon, 28 Jun 2021 16:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtuA-0000Nj-N4; Mon, 28 Jun 2021 16:14:38 +0000
Received: by outflank-mailman (input) for mailman id 147919;
 Mon, 28 Jun 2021 16:14:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lxtuA-0000Nd-3Z
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 16:14:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxtu9-00056W-R5; Mon, 28 Jun 2021 16:14:37 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxtu9-0003RF-LK; Mon, 28 Jun 2021 16:14:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] xen/arm: add forward_smc command line option for
 debugging
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
References: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s>
 <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org>
 <alpine.DEB.2.21.2106251033500.24906@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <db2405f9-61d2-5d8f-816e-547bc09bb95c@xen.org>
Date: Mon, 28 Jun 2021 17:14:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2106251033500.24906@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 25/06/2021 18:47, Stefano Stabellini wrote:
> On Fri, 25 Jun 2021, Julien Grall wrote:
>> Hi,
>>
>> On 25/06/2021 02:51, Stefano Stabellini wrote:
>>> It has become clear that an option to disable trapping SMC calls to Xen
>>> is very useful for debugging user issues.
>>>
>>> Instead of having to provide a
>>> patch to users every time, it would be great if we could just tell them
>>> to add forward_smc=true to the Xen command line.
>>
>> I can understand this would be useful to go a bit further in dom0 boot. But I
>> am quite sceptical on the idea of providing an option directly in Xen because:
>>
>> 1) This breaks other SMC uses in Xen (optee, VM monitor...)
>> 2) There is no guarantee that the SMC call will not wreck Xen. To be clear, I
>> don't refer to a malicious OS here, but a normal OS that boots
>> 3) Very likely the next steps for the user (or better call it the developer,
>> because that option should really not be used by a normal user) will be to
>> decide whether they should modify the kernel or implement a mediator in Xen.
>>
>>> This option is obviously unsafe and unsecure and only meant for
>>> debugging. Make clear in the description that if you pass
>>> forward_smc=true then the system is not security supported.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>
>>> diff --git a/docs/misc/xen-command-line.pandoc
>>> b/docs/misc/xen-command-line.pandoc
>>> index 3ece83a427..0833fe80fc 100644
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency significantly.
>>> It can also lead to
>>>    suboptimal scheduling decisions, but only when the system is
>>>    oversubscribed (i.e., in total there are more vCPUs than pCPUs).
>>>    +### forward_smc (arm)
>>> +> `= <boolean>`
>>> +
>>> +> Default: `false`
>>> +
>>> +If enabled, instead of trapping firmware SMC calls to Xen, allow SMC
>>> +calls from VMs directly to the firmware. This option is UNSAFE and it is
>>> +only meant for debugging. Systems with forward_smc=true are not security
>>> +supported.
>>> +
>>>    ### watchdog (x86)
>>>    > `= force | <boolean>`
>>>    diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index e7384381cc..0580ac5762 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>> @@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
>>>    }
>>>    custom_param("vwfi", parse_vwfi);
>>>    +static bool forward_smc = false;
>>> +boolean_param("forward_smc", forward_smc);
>>> +
>>>    register_t get_default_hcr_flags(void)
>>>    {
>>>        return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
>>>                 (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
>>> -             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>>> +             (forward_smc ? 0 : HCR_TSC) |
>>> +             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>>
>> A system-wide option to turn off SMC trapping is a no-go because this would
>> only be usable for debugging dom0 and not a guest.
>>
>> So at the minimum this should be a per-domain option. Also, I think we still
>> want to integrate with the rest of the SMC users. So Xen should still trap the
>> SMC and the forwarding should happen in vsmccc_handle_call().
>>
>> This would cover my first point.
> 
> Yes, you are totally right. I thought about it this morning as well.
> This patch would break even PSCI :-(
> 
> It would be best implemented in platform_smc as forward_to_fw (see
> xen/arch/arm/platforms/xilinx-zynqmp-eemi.c:forward_to_fw).

There is one problem though. How do you know which calling convention to
use? IOW, will all the firmware calls (in particular on older platforms)
follow the SMCCC?
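For reference, an SMCCC-compliant function ID is self-describing, which is
what makes mediation possible at all; pre-SMCCC firmware interfaces carry no
such structure, so they cannot be reliably classified. A minimal,
self-contained sketch of the relevant bit layout (constants per the SMC
Calling Convention spec, ARM DEN 0028; the helper names are invented for
illustration and this is not Xen code):

```c
#include <stdbool.h>
#include <stdint.h>

#define SMCCC_FAST_CALL    (1u << 31)  /* bit 31: fast vs. yielding call */
#define SMCCC_64BIT_CALL   (1u << 30)  /* bit 30: SMC64 vs. SMC32 */
#define SMCCC_OWNER_MASK   0x3fu       /* bits 29:24: owning entity */
#define SMCCC_OWNER_SHIFT  24
#define SMCCC_OWNER_SIP    2u          /* SiP (platform) service calls */

static inline bool smccc_is_fast_call(uint32_t fid)
{
    return fid & SMCCC_FAST_CALL;
}

static inline uint32_t smccc_owner(uint32_t fid)
{
    return (fid >> SMCCC_OWNER_SHIFT) & SMCCC_OWNER_MASK;
}

/* A platform mediator might, for example, only consider fast SiP calls. */
static inline bool smccc_maybe_platform_call(uint32_t fid)
{
    return smccc_is_fast_call(fid) && smccc_owner(fid) == SMCCC_OWNER_SIP;
}
```

For a non-SMCCC call there is no fast/owner encoding to inspect, which is
exactly the problem with older platforms.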

> 
> 
>> For the second and third point, I still like
>> to understand how this is going to help the developer to fully port the
>> board/OS to Xen with this option disabled?
> 
> This is meant to help with bug triage only. There are a number of bugs
> that can happen if certain platform SMCs are intercepted by Xen instead
> of being forwarded to the hardware.

We already print a message informing the user that the SMC call was
trapped and terminated in Xen. So I am not entirely sure why you also
need to pass through all the SMC calls to triage it. You already know
that the SMC will have to be implemented in Xen...


> I found myself having to provide a
> patch to forward_to_fw all platform SMCs as a first test to
> triage bugs a few times recently. It is never a fix, only a way to
> understand the next step of debugging. Also Alex stumbled across
> something similar on a non-Xilinx board (MacchiatoBin) so I thought it
> was time for a better debugging option.
> 
> I think for debugging purposes it would be sufficient if all platform
> SMCs were forward_to_fw from all domains. Of course it is totally
> unsafe, but it is just for debugging.

In order to add a debugging option in Xen, we need to be reasonably
confident that the option will not do more damage (I am not speaking
about security here...) than it is actually worth.

I can see how this helps in both your situations to boot dom0. However, I
am not sure this can be generalized to every platform. A developer (or
user) enabling this debugging option may end up seeing corruption/hangs
because:
   1) SMC calls may pass memory addresses. A domain would pass a guest
physical address, but the firmware will interpret it as a host physical
address. This works(ish) for dom0 because both are equivalent, but for
other domains this will break.
   2) SMC calls may change the behavior of the system (e.g. turning off
the UART)...

It would be difficult to pinpoint whether the problem is caused by an SMC
(or something else) without implementing each SMC call in Xen.

I don't think it is a lot of work to implement SMCs in Xen as you find
them (sooner or later, you will have to do it anyway...). At that
point, forwarding all the unknown SMCs to attempt to boot further is
probably riskier than it is worth.

If the problem is re-building, then we could consider providing a
command line option to easily specify which SMC calls are passed through...
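One possible shape for such an option, sketched purely as an illustration
(the option name, the fixed-size table, and the helpers are all invented
here; a real Xen patch would hook into custom_param() and live alongside the
platform SMC handling): parse a comma-separated list of SMC function IDs
into an allowlist that is consulted before forwarding.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define SMC_ALLOW_MAX 16

static uint32_t smc_allow[SMC_ALLOW_MAX];
static size_t smc_allow_cnt;

/* Parse e.g. "0x82000001,0x8200ff03"; returns 0 on success, -1 on error. */
static int parse_smc_passthrough(const char *s)
{
    smc_allow_cnt = 0;
    while (*s)
    {
        char *end;
        unsigned long fid = strtoul(s, &end, 0);

        if (end == s || smc_allow_cnt == SMC_ALLOW_MAX)
            return -1;
        smc_allow[smc_allow_cnt++] = (uint32_t)fid;
        if (*end == ',')
            end++;
        else if (*end)
            return -1;              /* trailing garbage */
        s = end;
    }
    return 0;
}

/* Consulted before deciding to forward an SMC to the firmware. */
static bool smc_is_passthrough(uint32_t fid)
{
    for (size_t i = 0; i < smc_allow_cnt; i++)
        if (smc_allow[i] == fid)
            return true;
    return false;
}
```

This keeps the passthrough surface explicit and per-call, rather than a
blanket "forward everything" switch.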

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 16:15:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 16:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147921.273106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtuz-0000w3-3x; Mon, 28 Jun 2021 16:15:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147921.273106; Mon, 28 Jun 2021 16:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxtuz-0000vw-0j; Mon, 28 Jun 2021 16:15:29 +0000
Received: by outflank-mailman (input) for mailman id 147921;
 Mon, 28 Jun 2021 16:15:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T4Tg=LW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lxtuy-0000vo-3n
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 16:15:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f384b09e-7bbb-4a53-a0e0-d27868545201;
 Mon, 28 Jun 2021 16:15:26 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-8-HQF5BtepMZOOfRQukNANqQ-1;
 Mon, 28 Jun 2021 18:15:23 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6304.eurprd04.prod.outlook.com (2603:10a6:803:fd::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.21; Mon, 28 Jun
 2021 16:15:21 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Mon, 28 Jun 2021
 16:15:20 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0278.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:1::26) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Mon, 28 Jun 2021 16:15:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f384b09e-7bbb-4a53-a0e0-d27868545201
Subject: Re: [PATCH] fully replace mfn_to_gmfn()
To: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
 <263f1b30-e33d-711c-ff22-64f8acad230d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9138e7a0-f4a0-db77-c09f-4fa6a45652cf@suse.com>
Date: Mon, 28 Jun 2021 18:15:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <263f1b30-e33d-711c-ff22-64f8acad230d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0278.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1::26) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0

On 28.06.2021 17:42, Andrew Cooper wrote:
> On 28/06/2021 12:52, Jan Beulich wrote:
>> Convert the two remaining uses as well as Arm's stub to the properly
>> named and type-safe mfn_to_gfn(), dropping x86's definition (where we
>> already have mfn_to_gfn()).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, but ...

Thanks.

>> --- a/xen/include/asm-arm/mm.h
>> +++ b/xen/include/asm-arm/mm.h
>> @@ -328,8 +328,7 @@ struct page_info *get_page_from_gva(stru
>>  
>>  /* Xen always owns P2M on ARM */
>>  #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
>> -#define mfn_to_gmfn(_d, mfn)  (mfn)
>> -
>> +#define mfn_to_gfn(d, mfn) _gfn(mfn_x(mfn))
> 
> ... surely this wants to be ((void)(d), _gfn(mfn_x(mfn))), even if it's
> just a latent bug right now?

Well, Julien said he plans to get rid of this anyway. I'll do here
whatever the Arm maintainers say is wanted. Julien, Stefano?
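For anyone following along, the effect of Andrew's suggested (void)(d) can
be shown in a few lines. The sketch below uses simplified stand-ins for the
typesafe wrappers (the real mfn_t/gfn_t/_gfn/mfn_x definitions in the Xen
tree differ in detail); the point is just the comma-operator idiom:

```c
#include <stdint.h>

/* Simplified stand-ins for Xen's typesafe frame-number wrappers. */
typedef struct { uint64_t m; } mfn_t;
typedef struct { uint64_t g; } gfn_t;

static inline uint64_t mfn_x(mfn_t mfn) { return mfn.m; }
static inline gfn_t _gfn(uint64_t g) { gfn_t gfn = { g }; return gfn; }

/*
 * Without consuming 'd', a caller whose only use of a variable is this
 * macro would get an unused-variable warning, and an argument with side
 * effects would be silently dropped. The comma operator evaluates 'd'
 * for its effects, discards the result, then yields the converted gfn.
 */
#define mfn_to_gfn(d, mfn) ((void)(d), _gfn(mfn_x(mfn)))
```

So the macro behaves like a function taking both arguments, even though the
domain is (currently) ignored on Arm.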

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 28 16:35:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 16:35:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147925.273117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuEH-0003Nz-Mh; Mon, 28 Jun 2021 16:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147925.273117; Mon, 28 Jun 2021 16:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuEH-0003Ns-Jl; Mon, 28 Jun 2021 16:35:25 +0000
Received: by outflank-mailman (input) for mailman id 147925;
 Mon, 28 Jun 2021 16:35:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lxuEG-0003NT-FP
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 16:35:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxuEF-0005Rd-2z; Mon, 28 Jun 2021 16:35:23 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lxuEE-0000db-RA; Mon, 28 Jun 2021 16:35:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] fully replace mfn_to_gmfn()
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <389f5988-4700-da3e-e628-1513e87eb56a@suse.com>
 <263f1b30-e33d-711c-ff22-64f8acad230d@citrix.com>
 <9138e7a0-f4a0-db77-c09f-4fa6a45652cf@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6901f742-ee27-aeef-83ed-8c8de08acf75@xen.org>
Date: Mon, 28 Jun 2021 17:35:19 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <9138e7a0-f4a0-db77-c09f-4fa6a45652cf@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 28/06/2021 17:15, Jan Beulich wrote:
> On 28.06.2021 17:42, Andrew Cooper wrote:
>> On 28/06/2021 12:52, Jan Beulich wrote:
>>> Convert the two remaining uses as well as Arm's stub to the properly
>>> named and type-safe mfn_to_gfn(), dropping x86's definition (where we
>>> already have mfn_to_gfn()).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, but ...
> 
> Thanks.
> 
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -328,8 +328,7 @@ struct page_info *get_page_from_gva(stru
>>>   
>>>   /* Xen always owns P2M on ARM */
>>>   #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
>>> -#define mfn_to_gmfn(_d, mfn)  (mfn)
>>> -
>>> +#define mfn_to_gfn(d, mfn) _gfn(mfn_x(mfn))
>>
>> ... surely this wants to be ((void)(d), _gfn(mfn_x(mfn))), even if it's
>> just a latent bug right now?
> 
> Well, Julien said he plans to get rid of this anyway. I'll do here
> whatever the Arm maintainers say is wanted. Julien, Stefano?

I am fine with the change suggested by Andrew.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 16:49:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 16:49:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147927.273127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuRi-0004uK-VB; Mon, 28 Jun 2021 16:49:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147927.273127; Mon, 28 Jun 2021 16:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuRi-0004uD-S1; Mon, 28 Jun 2021 16:49:18 +0000
Received: by outflank-mailman (input) for mailman id 147927;
 Mon, 28 Jun 2021 16:49:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxuRg-0004u3-TN; Mon, 28 Jun 2021 16:49:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxuRg-0005fj-KC; Mon, 28 Jun 2021 16:49:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxuRg-0004wN-Bw; Mon, 28 Jun 2021 16:49:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxuRg-0007pX-BQ; Mon, 28 Jun 2021 16:49:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163167: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 16:49:16 +0000

flight 163167 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163167/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   24 days
Failing since        162368  2021-06-04 15:42:59 Z   24 days   60 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    3 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 17:14:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 17:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147932.273142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuq2-00089s-1p; Mon, 28 Jun 2021 17:14:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147932.273142; Mon, 28 Jun 2021 17:14:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxuq1-00089l-UE; Mon, 28 Jun 2021 17:14:25 +0000
Received: by outflank-mailman (input) for mailman id 147932;
 Mon, 28 Jun 2021 17:14:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/HW7=LW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lxuq0-00089P-5y
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 17:14:24 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e6c05fc-553d-41d7-aaec-2e1dcc98dc83;
 Mon, 28 Jun 2021 17:14:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e6c05fc-553d-41d7-aaec-2e1dcc98dc83
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul: avoid using _PRE_EFLAGS() in a few cases
Message-ID: <4362c5af-64d4-ef13-dd84-1c885616afc8@citrix.com>
Date: Mon, 28 Jun 2021 18:14:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0156.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2dedc71d-8c70-4e65-943d-08d93a582a60
X-MS-TrafficTypeDiagnostic: BY5PR03MB5045:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB50457E85C37187F1EED845C6BA039@BY5PR03MB5045.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2dedc71d-8c70-4e65-943d-08d93a582a60
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2021 17:14:19.3790
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NLOUld8U3akBVCc/jpzhMKNQfHw1ec8eb1QhWK3z3ABQTzpn5gjbmE32bYghHjfT6JlmAcebMKjqiuSbUHIqPaSORl7u/dq8ylUCIiLwjYQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5045
X-OriginatorOrg: citrix.com

On 02/06/2021 15:37, Jan Beulich wrote:
> The macro expanding to quite a few insns, replace its use by simply
> clearing the status flags when the to be executed insn doesn't depend
> on their initial state, in cases where this is easily possible. (There
> are more cases where the uses are hidden inside macros, and where some
> of the users of the macros want guest flags put in place before running
> the insn, i.e. the macros can't be updated as easily.)
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Honestly, this is the first time I've looked into _PRE/_POST_EFLAGS() in
detail.  Why is most of this not in C to begin with?

The only bits which need to be in asm are the popf to establish the
stub's flags context, and the pushf to save the resulting state.
Everything else is better off done by the compiler, especially as that
would remove a load of work on temporaries from the stack.

Nevertheless, ...

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -6863,7 +6863,8 @@ x86_emulate(
>          }
>          opc[2] = 0xc3;
> 
> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
> +        _regs.eflags &= ~EFLAGS_MASK;
> +        invoke_stub("",
>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>                      [eflags] "+g" (_regs.eflags),
>                      [tmp] "=&r" (dummy), "+m" (*mmvalp)
> @@ -8111,7 +8112,8 @@ x86_emulate(
>          opc[2] = 0xc3;
> 
>          copy_VEX(opc, vex);
> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
> +        _regs.eflags &= ~EFLAGS_MASK;
> +        invoke_stub("",
>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>                      [eflags] "+g" (_regs.eflags),
>                      "=a" (dst.val), [tmp] "=&r" (dummy)

... this hunk is k{,or}test, which only modifies ZF and CF according to
the SDM.

The other flags are not listed as modified, which means they're
preserved, unless you're planning to have Intel issue a correction to
the SDM.

The flags logic for both instructions is custom, so it wouldn't be a
surprise to me if it really did deviate from the normal behaviour of a
test operation.

~Andrew

> @@ -11698,13 +11700,14 @@ int x86_emul_rmw(
>          break;
> 
>      case rmw_xadd:
> +        *eflags &= ~EFLAGS_MASK;
>          switch ( state->op_bytes )
>          {
>              unsigned long dummy;
> 
>  #define XADD(sz, cst, mod) \
>          case sz: \
> -            asm ( _PRE_EFLAGS("[efl]", "[msk]", "[tmp]") \
> +            asm ( "" \
>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
>                    : [reg] "+" #cst (state->ea.val), \
>




From xen-devel-bounces@lists.xenproject.org Mon Jun 28 18:39:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 18:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147942.273157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxw9h-0007bh-67; Mon, 28 Jun 2021 18:38:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147942.273157; Mon, 28 Jun 2021 18:38:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxw9h-0007ba-2X; Mon, 28 Jun 2021 18:38:49 +0000
Received: by outflank-mailman (input) for mailman id 147942;
 Mon, 28 Jun 2021 18:38:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxw9g-0007bQ-Cq; Mon, 28 Jun 2021 18:38:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxw9g-0007af-5w; Mon, 28 Jun 2021 18:38:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lxw9f-000222-OJ; Mon, 28 Jun 2021 18:38:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lxw9f-0001bn-Nm; Mon, 28 Jun 2021 18:38:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EEZKz1DfKfK16gzppz4MeGYH0A+TFZUPppI8O9A3R9A=; b=bHncQj5BCbirnykgNSy7HynQxn
	4IYoyPho6xmFnOjK2ij5oaa2+jjCw5Ho+2BW/U8bewvPOfj1s1m3fi0d2EfVkskUQDeV+Z9x/hPKM
	X6LnXkESMD1VHue2/QaCk2hORUNeRavcAjCg2K13v7k/RTiBU26LlFGSNEV5jE1E7Hsw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163165-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163165: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=62fb9874f5da54fdb243003b386128037319b219
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Jun 2021 18:38:47 +0000

flight 163165 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163165/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                62fb9874f5da54fdb243003b386128037319b219
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  331 days
Failing since        152366  2020-08-01 20:49:34 Z  330 days  563 attempts
Testing same since   163165  2021-06-28 06:08:29 Z    0 days    1 attempts

------------------------------------------------------------
6207 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691700 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 28 21:40:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Jun 2021 21:40:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147951.273171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxyz8-00084N-Hf; Mon, 28 Jun 2021 21:40:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147951.273171; Mon, 28 Jun 2021 21:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lxyz8-00084G-EC; Mon, 28 Jun 2021 21:40:06 +0000
Received: by outflank-mailman (input) for mailman id 147951;
 Mon, 28 Jun 2021 21:40:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8jr4=LW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lxyz6-0007pA-KD
 for xen-devel@lists.xenproject.org; Mon, 28 Jun 2021 21:40:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98eda089-46b4-4b85-a055-c1ebdbc9242a;
 Mon, 28 Jun 2021 21:40:03 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CB03B61CF9;
 Mon, 28 Jun 2021 21:40:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98eda089-46b4-4b85-a055-c1ebdbc9242a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1624916403;
	bh=QVNLAl6MKPiCumJKwCfV0HnPQ7cA9+fiISTBRdiF74Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=J7nVY/vweVl/vuM5hpaEufXDwkMBlVn5DhXw3qHoi2EnoD+aHxiJcKJMI+niB/BVg
	 GnRU3iVyDMCMse3fXZBFyWVZEhNw5OEg5l1/sgOHBF6syXFhoLEUSWmLDQL26ZzdvQ
	 pVHpT5bpz47a52dL4pUsaGnxsAJm6vyT74XfAxL5zc3XnLUFNAS4TNRn2G0iw8hTQe
	 IEkrtmuAIt6GVW63pJda0VSNGoO3/GoWrzwsC0y40q9B1PA7uWF2FuPtXr/M7gJ0OR
	 C8TbSdi29f22lHQY3ILDJtjDuB+np4IFZxT0Z1mzoYmyY3uMF1K3IZUVTEUNjszYgr
	 ssSNsjg+LwVaw==
Date: Mon, 28 Jun 2021 14:40:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
Subject: Re: [PATCH] xen/arm: add forward_smc command line option for
 debugging
In-Reply-To: <db2405f9-61d2-5d8f-816e-547bc09bb95c@xen.org>
Message-ID: <alpine.DEB.2.21.2106281421530.9437@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s> <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org> <alpine.DEB.2.21.2106251033500.24906@sstabellini-ThinkPad-T480s> <db2405f9-61d2-5d8f-816e-547bc09bb95c@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 28 Jun 2021, Julien Grall wrote:
> On 25/06/2021 18:47, Stefano Stabellini wrote:
> > On Fri, 25 Jun 2021, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 25/06/2021 02:51, Stefano Stabellini wrote:
> > > > It has become clear that an option to disable trapping SMC calls to Xen
> > > > is very useful for debugging user issues.
> > > > 
> > > > Instead of having to provide a
> > > > patch to users every time, it would be great if we could just tell them
> > > > to add forward_smc=true to the Xen command line.
> > > 
> > > I can understand this would be useful to go a bit further in dom0 boot.
> > > But I am quite sceptical about the idea of providing an option directly
> > > in Xen because:
> > > 
> > > 1) This breaks other SMC uses in Xen (optee, VM monitor...)
> > > 2) There is no guarantee that the SMC call will not wreck Xen. To be
> > > clear, I don't refer to a malicious OS here, but a normal OS that boots.
> > > 3) Very likely the next step for the user (or better, the developer,
> > > because that option should really not be used by a normal user) will be
> > > to decide whether they should modify the kernel or implement a mediator
> > > in Xen.
> > > 
> > > > This option is obviously unsafe and insecure and only meant for
> > > > debugging. Make clear in the description that if you pass
> > > > forward_smc=true then the system is not security supported.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > 
> > > > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > > > index 3ece83a427..0833fe80fc 100644
> > > > --- a/docs/misc/xen-command-line.pandoc
> > > > +++ b/docs/misc/xen-command-line.pandoc
> > > > @@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency significantly. It can also lead to
> > > >    suboptimal scheduling decisions, but only when the system is
> > > >    oversubscribed (i.e., in total there are more vCPUs than pCPUs).
> > > >  
> > > > +### forward_smc (arm)
> > > > +> `= <boolean>`
> > > > +
> > > > +> Default: `false`
> > > > +
> > > > +If enabled, instead of trapping firmware SMC calls to Xen, allow SMC
> > > > +calls from VMs directly to the firmware. This option is UNSAFE and it is
> > > > +only meant for debugging. Systems with forward_smc=true are not security
> > > > +supported.
> > > > +
> > > >  ### watchdog (x86)
> > > >  > `= force | <boolean>`
> > > > 
> > > > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > > > index e7384381cc..0580ac5762 100644
> > > > --- a/xen/arch/arm/traps.c
> > > > +++ b/xen/arch/arm/traps.c
> > > > @@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
> > > >  }
> > > >  custom_param("vwfi", parse_vwfi);
> > > >  
> > > > +static bool forward_smc = false;
> > > > +boolean_param("forward_smc", forward_smc);
> > > > +
> > > >  register_t get_default_hcr_flags(void)
> > > >  {
> > > >      return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
> > > >               (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> > > > -             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> > > > +             (forward_smc ? 0 : HCR_TSC) |
> > > > +             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> > > 
> > > A system-wide option to turn off SMC trapping is a no-go because this
> > > would only be usable for debugging dom0 and not a guest.
> > > 
> > > So at the minimum this should be a per-domain option. Also, I think we
> > > still want to integrate with the rest of the SMC users. So Xen should
> > > still trap the SMC and the forwarding should happen in
> > > vsmccc_handle_call().
> > > 
> > > This would cover my first point.
> > 
> > Yes, you are totally right. I thought about it this morning as well.
> > This patch would break even PSCI :-(
> > 
> > It would be best implemented in platform_smc as forward_to_fw (see
> > xen/arch/arm/platforms/xilinx-zynqmp-eemi.c:forward_to_fw).
> 
> There is one problem though. How do you know which calling convention to use?
> IOW, will all the firmware calls (in particular on older platforms) follow
> the SMCCC?

I am not aware of any firmware (Xilinx or non-Xilinx) with SMC calls that
are not SMCCC compliant.

That said, the most important thing is that we handle PSCI correctly,
right? After that, if we forward all calls to the firmware we should be
OK. From a calling convention perspective, it would only break if we
don't forward enough parameters in registers or save enough return
values in registers.

Then there are the problems with memory addresses that you wrote about below.

 
> > > For the second and third point, I would still like to understand how
> > > this is going to help the developer fully port the board/OS to Xen with
> > > this option disabled.
> > 
> > This is meant to help with bug triage only. There are a number of bugs
> > that can happen if certain platform SMCs are intercepted by Xen instead
> > of being forwarded to the hardware.
> 
> We already print a message informing the user that the SMC call was trapped
> and terminated in Xen. So I am not entirely sure why you also need to pass
> through all the SMC calls to triage it. You already know that the SMC will
> have to be implemented in Xen...

On Xilinx, we have so many SMCs that it would be difficult to figure it
out from the boot log alone, in the sense that it is unfortunately
"normal" for one or two SMCs to fail (even without Xen!). But that could
be a Xilinx-only issue.

 
> > I found myself having to provide a patch to forward_to_fw all platform
> > SMCs as a first test to triage bugs a few times recently. It is never a
> > fix, only a way to understand the next step of debugging. Also, Alex
> > stumbled across something similar on a non-Xilinx board (MacchiatoBin),
> > so I thought it was time for a better debugging option.
> > 
> > I think for debugging purposes it would be sufficient if all platform
> > SMCs were forwarded to the firmware (forward_to_fw) from all domains. Of
> > course it is totally unsafe, but it is just for debugging.
> 
> In order to add a debugging option in Xen, we need to be reasonably
> confident that the option will not do more damage (I am not speaking about
> security here...) than it is actually worth.
> 
> I can see how this helps in your situation to boot dom0. However, I am not
> sure this can be generalized to every platform. A developer (or user)
> enabling this debugging option may end up seeing corruption/hangs because:
>   1) An SMC call may pass a memory address. A domain would pass a guest
> physical address, but the firmware will interpret it as a host physical
> address. This works(ish) for dom0 because both are equivalent, but for
> other domains this will break.
>  2) An SMC call may change the behavior of the system (e.g. turning off
> the UART)...
> 
> It would be difficult to pinpoint whether the problem is caused by an SMC
> (or something else) without implementing each SMC call in Xen.
> 
> I don't think it is a lot of work to implement SMCs in Xen as you find them
> (sooner or later, you will have to do it anyway...). At which point,
> forwarding all the unknown SMCs to attempt to boot further is probably
> riskier than it is worth.
> 
> If the problem is re-building, then we could consider providing a command
> line option to easily specify which SMC calls are passed through...

The problem is that it isn't always the same person doing the work.

If it is me working on a new release or a new platform, the command line
option wouldn't help me much. In fact, it might even be faster for me to
add "goto forward_to_fw" and rebuild. I am happy with that.

The issue is when there is a bug reported by a customer or by a user on
the mailing list. In that case, it is very useful to ask them to run a
little experiment to narrow down the possibilities, and it is easier to
ask them to add a command line option than to apply a patch. If
passthrough is involved, then we need to ask the user to forward all
SMCs, not just Dom0's. If passthrough is not involved, then forwarding
only Dom0 calls is fine.


That said, I am not so sure we want this patch upstream: I think it
would benefit Xilinx users and a recent request from Alex made me think
that it would benefit other platforms too, but maybe the benefits on
other platforms are not enough to introduce an option like this, which
could easily break things.

So I am happy to follow your preference:

1) I can drop the patch
2) forward platform_smc only for dom0
3) forward platform_smc for all domains


Let me know. I am happy either way.

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 02:12:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 02:12:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147957.273182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly3EA-0002t3-Uc; Tue, 29 Jun 2021 02:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147957.273182; Tue, 29 Jun 2021 02:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly3EA-0002sl-NV; Tue, 29 Jun 2021 02:11:54 +0000
Received: by outflank-mailman (input) for mailman id 147957;
 Tue, 29 Jun 2021 02:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3E9-0002sb-9P; Tue, 29 Jun 2021 02:11:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3E9-0008Rf-2C; Tue, 29 Jun 2021 02:11:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3E8-0007wZ-Jk; Tue, 29 Jun 2021 02:11:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3E8-0006vJ-Is; Tue, 29 Jun 2021 02:11:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+oY18XXt8jB4xX6X+NC3/WyHtf0ZJVmVv+Mujyk02gw=; b=ajnmorgdClrR9ttBSDc2Cy6IJ5
	xa6PDwJH9nROs3fOz18no1XDSS7q7e9Pjnl24JepNtEcf5bavfIbMMEnXDARQR6FmWofBhw/CxYvN
	P7Ue64fVnOrA028SKS9Scleuqv4OLVcJzQpkCgWQ/h95V9FwifjCc3WaYSkBhwT8M6w4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163172-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163172: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 02:11:52 +0000

flight 163172 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163172/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   24 days
Failing since        162368  2021-06-04 15:42:59 Z   24 days   61 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    3 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 02:19:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 02:19:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147960.273195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly3LQ-0003aY-KS; Tue, 29 Jun 2021 02:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147960.273195; Tue, 29 Jun 2021 02:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly3LQ-0003aR-HQ; Tue, 29 Jun 2021 02:19:24 +0000
Received: by outflank-mailman (input) for mailman id 147960;
 Tue, 29 Jun 2021 02:19:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3LP-0003aH-DP; Tue, 29 Jun 2021 02:19:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3LP-00008q-4b; Tue, 29 Jun 2021 02:19:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3LO-0008BG-OT; Tue, 29 Jun 2021 02:19:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly3LO-0000jU-Nx; Tue, 29 Jun 2021 02:19:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/eWq2jpS8NChJzMwHE0TDRrG9XRE/bd1Ibee/tYXbYc=; b=yw+LzYMnixdPQqycqqrh0i2Cz7
	rE2wkY2tb/dAffyHxQ1N9tPIcy6aqygPP3SLyOW7SZcVzoRzGhtM9fcoBrjtxvaE8Y9VKGFIoSIGQ
	q1r2ZTh773tmJkK9Mfn83EwnxdJls3o1etocKfa2ftA2A5KTl4Xc9UNR/nNfMJhOrrT0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163168-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163168: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-xsm:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
X-Osstest-Versions-That:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 02:19:22 +0000

flight 163168 xen-unstable real [real]
flight 163174 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163168/
http://logs.test-lab.xenproject.org/osstest/logs/163174/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  13 debian-fixup        fail pass in 163174-retest
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163174-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163174-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 163174 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163160
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163160
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163160
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163160
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163160
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163160
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163160
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163160
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163160
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163160
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163160
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104
baseline version:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83

Last test of basis   163160  2021-06-28 01:51:35 Z    1 days
Testing same since   163168  2021-06-28 14:38:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bb11edcec1..c636a5fe59  c636a5fe59575d84778f676ca1728fbd1a7c7104 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 05:18:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 05:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147966.273214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly67z-0003jh-O3; Tue, 29 Jun 2021 05:17:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147966.273214; Tue, 29 Jun 2021 05:17:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly67z-0003ja-Km; Tue, 29 Jun 2021 05:17:43 +0000
Received: by outflank-mailman (input) for mailman id 147966;
 Tue, 29 Jun 2021 05:17:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly67y-0003jQ-GL; Tue, 29 Jun 2021 05:17:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly67y-0003dp-5J; Tue, 29 Jun 2021 05:17:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly67x-0006lr-On; Tue, 29 Jun 2021 05:17:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly67x-0001oH-Nu; Tue, 29 Jun 2021 05:17:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zOj/mEsltGQwmOpf8eiHeSulfaN1OpLyVlBvgcOsAno=; b=Y+zmgtzvorRn9Zwd1ULTnes/8h
	sq5IHOipjz3HZUGe2HBVIONKdqwMmAmuWYjDoo4kY9HWBPeP4dn/pQ+tWzWew6ubz0BHv/8FEy/yi
	dJop88AU20RllIGNI0+qw9eqefRw6NR0K3ws+ulqzjJu0RG1HShqHTqHV21kAGr0w6X4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163171: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=687f9f7834e30330fd952f1fe096518509ff8ff7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 05:17:41 +0000

flight 163171 qemu-mainline real [real]
flight 163177 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163171/
http://logs.test-lab.xenproject.org/osstest/logs/163177/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                687f9f7834e30330fd952f1fe096518509ff8ff7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  312 days
Failing since        152659  2020-08-21 14:07:39 Z  311 days  572 attempts
Testing same since   163171  2021-06-28 16:09:38 Z    0 days    1 attempts

------------------------------------------------------------
551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 178833 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 06:02:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 06:02:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147971.273228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly6pK-0000Pg-AT; Tue, 29 Jun 2021 06:02:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147971.273228; Tue, 29 Jun 2021 06:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly6pK-0000PZ-6t; Tue, 29 Jun 2021 06:02:30 +0000
Received: by outflank-mailman (input) for mailman id 147971;
 Tue, 29 Jun 2021 06:02:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly6pJ-0000PO-Gx; Tue, 29 Jun 2021 06:02:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly6pJ-0004Rv-8v; Tue, 29 Jun 2021 06:02:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly6pI-0008Al-Ve; Tue, 29 Jun 2021 06:02:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly6pI-0007bD-VD; Tue, 29 Jun 2021 06:02:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WzrNsPGcr0UL9li5O4F/E/gFLTyIp5mqMpZZezo6BdU=; b=Z0dLovzROHPzIlwz+YqhtxC5d+
	sbQfIQeuzP9wFbnwaOQhPN9w4bTDP6vRrwGkdXsColfyYAUMQ0N8494khzDy4OZOFMOxNqAvmmqTh
	j/KlCmADp//OE6FYszYWfIyIBmDrbmW0MvDPl1VQpK5rkG78AQ3nMcMqaKiEX7mVrUXQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163175: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=17143c4837393d42c484b42d1789b85b2cff1aaf
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 06:02:28 +0000

flight 163175 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163175/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 17143c4837393d42c484b42d1789b85b2cff1aaf
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   25 days
Failing since        162368  2021-06-04 15:42:59 Z   24 days   62 attempts
Testing same since   163028  2021-06-25 07:30:32 Z    3 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2647 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 06:24:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 06:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147974.273242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly7An-0002nj-65; Tue, 29 Jun 2021 06:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147974.273242; Tue, 29 Jun 2021 06:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly7An-0002nc-2e; Tue, 29 Jun 2021 06:24:41 +0000
Received: by outflank-mailman (input) for mailman id 147974;
 Tue, 29 Jun 2021 06:24:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly7Al-0002nS-Dl; Tue, 29 Jun 2021 06:24:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly7Al-0004pU-3g; Tue, 29 Jun 2021 06:24:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly7Ak-000146-PF; Tue, 29 Jun 2021 06:24:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly7Ak-0001Ar-On; Tue, 29 Jun 2021 06:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rYua6G9Y+214WeH8+GchiAcLXQq4kxio3OsI12hrptY=; b=x7TlyCm+W3Nhxv+iBLkYDq5ByG
	2W5fiXURp+vS5I+MeXfN7Vop7hQ7jQhgC6IO1ELduFr1mTtlWcxBbwNLgnIpHs/n05/BPaG9DfYHW
	YloKunHhkSkDtKFj6Mbl9zaCvFnznfVsAxWQXVf5856Zv1jQqrahU4Vzh8xML3o7V4yE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163178: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7c08141f906e20e730c4b6407bc638e743deea48
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 06:24:38 +0000

flight 163178 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163178/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7c08141f906e20e730c4b6407bc638e743deea48
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  354 days
Failing since        151818  2020-07-11 04:18:52 Z  353 days  345 attempts
Testing same since   163111  2021-06-26 04:20:02 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63387 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 07:27:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 07:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147977.273255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly88r-0000RW-PG; Tue, 29 Jun 2021 07:26:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147977.273255; Tue, 29 Jun 2021 07:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly88r-0000RP-M7; Tue, 29 Jun 2021 07:26:45 +0000
Received: by outflank-mailman (input) for mailman id 147977;
 Tue, 29 Jun 2021 07:26:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly88q-0000RF-9K; Tue, 29 Jun 2021 07:26:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly88q-0005qN-2l; Tue, 29 Jun 2021 07:26:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ly88p-0004U6-QM; Tue, 29 Jun 2021 07:26:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ly88p-0006Bp-Pp; Tue, 29 Jun 2021 07:26:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m1AX7hBk9EMZGbrUtI2w1KxLKXWBGJQ6I16+PcRjXqM=; b=DypUnZknO5Nyt5WlVQeN21G+dr
	xYxDc7AjoQXVaKHGzA7fr9Pukz+bHwMOTQzYF/i1ytQk5Y1IrLKvNDLRtfLeXcibYQIMNpnJujQ5W
	ec1ef2Xlyucy4o+l6zXh9W73Wr51Lt9o5TndbUzuczL5nevKkinVOZEM+o3sSl222tr8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163173: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d04f7de0a5134de13420e72ae62a26f05d312c06
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 07:26:43 +0000

flight 163173 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163173/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds    22 guest-start/debian.repeat fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d04f7de0a5134de13420e72ae62a26f05d312c06
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  332 days
Failing since        152366  2020-08-01 20:49:34 Z  331 days  564 attempts
Testing same since   163173  2021-06-28 19:12:24 Z    0 days    1 attempts

------------------------------------------------------------
6234 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1698413 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 08:00:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 08:00:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147984.273270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly8fF-00051w-R0; Tue, 29 Jun 2021 08:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147984.273270; Tue, 29 Jun 2021 08:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly8fF-00051p-Nc; Tue, 29 Jun 2021 08:00:13 +0000
Received: by outflank-mailman (input) for mailman id 147984;
 Tue, 29 Jun 2021 08:00:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qa6d=LX=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ly8fE-00051j-EO
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 08:00:12 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.218])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18742f92-622f-4363-bd14-615c76220192;
 Tue, 29 Jun 2021 08:00:11 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.27.5 AUTH)
 with ESMTPSA id j0443ex5T806MRU
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 29 Jun 2021 10:00:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18742f92-622f-4363-bd14-615c76220192
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1624953606;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=AQC/aLt0DhgMo86ylACoJ5llLFOF5Po/NEv6BWbDGJI=;
    b=fBBcJS1Q9YZMjhAGEMURNiTNlZ+/nnnu0XxpM8WVri182NIl2sESM5/UTA8EABsdvl
    obogMCzCn3KhCffA5WDcD/3ryREg3M8UJRfcItN3lkfPFvTGuXXwdCir7CfT1FOPIFug
    +JnZzVpd28iaAimb3MY+H1R8CB0R6lIBc1HO96sTnGUwFWqckEvQ6TI4w4UoFcfeRpzC
    dVlEV0sCuCQyahLBL497nmiMZ04Xu9V7rsRc+vFFAiri6dLPiBBT9xyUV/XD6/zmH5+s
    XdJ0BqWLb+bFQYU5sLuXnaeU5sBZpsfuyhgR1ukRIdPMn/CGDh8ldMxCUYIM9oUZZbum
    CW9A==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF1Uh6FPk3sesKYv+F4ULcnddTEqNLurekxi0Bc"
X-RZG-CLASS-ID: mo00
Date: Tue, 29 Jun 2021 09:59:52 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more
 deprecated usages.
Message-ID: <20210629095952.7b0b94c1.olaf@aepfle.de>
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
X-Mailer: Claws Mail 2021.05.27 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/GghZSpkOBNpF6VHlirqC1PP";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/GghZSpkOBNpF6VHlirqC1PP
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 11 May 2021 10:28:02 +0100,
Anthony PERARD <anthony.perard@citrix.com> wrote:

> All of the series should be backported to at least Xen 4.15 or it won't be
> possible to migrate, hotplug cpu or change cdrom on HVM guest when QEMU 6.0
> and newer is used. QEMU 6.0 is about to be released, within a week.
>
> Backport: 4.15

In case a backport to 4.15 is actually done, please backport this and the
follow-up fixes also to 4.14.

Thank you.

Olaf

--Sig_/GghZSpkOBNpF6VHlirqC1PP
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmDa0vgACgkQ86SN7mm1
DoB8gQ//e45QbOVD/114S0DXcxv2bjNoYEVY26eFRTUqugcnhpDTFK9LrbMYbF4p
O4uDwyx+I5yr0m+7pOd1UnaoBvTJzQNMbfH4vvKqKb7ypoMKqyGrP7eOXkJAvCYx
t+HRYrPIOq2+YE5mxeGgm729dxVThIqi085uP+PoItBkKUkloMNOs6oKw090IR+j
b1Zfeh6aHgqslmPp3pSdLEf6MpC3f7D4cHSJHjDIGF0BHyNBnhj+tV3zvepg7QXI
Z7mR3aUQva4JqGt/T1CepYxM+3QC8maxxB8AJI0aufDTg32Pfuj82TGEjTA4HKkA
obAbSexmy+ds482HilpbK1k5FccF28sChBia7SAl1wMge18lOQDyaW303f6FNVg8
sVnjrvG1mQtVudtnASBzWpJ5YL2dOJ5r86SZiFmIJq2dY9vaz13xhWeGQLa8CM5j
W2UjkkmvKUyFq6N6Ukpuxl73Frh3EJKE6dxm2jQLg6r5ucVW8PVvUyiKCFvmNHfn
7NMWA/XYHaHY7cp0E+U/p+tN3bPkbZdCv6gSdFuql0AmI2fn2/MhtZBn+LExQfZJ
+ncmT/ZKx/42P0lCHyKAp57fcAkJQUsv0vP6HpbFd+cAdYSrVfI7KHxDAqh9jHA0
EikQD5EBsc7zh6ahDkwRgie6X7V6KnJuCTZbrsf69vZ8mAZh4j0=
=UgkB
-----END PGP SIGNATURE-----

--Sig_/GghZSpkOBNpF6VHlirqC1PP--


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 09:09:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 09:09:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147987.273281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly9kC-0002ul-0B; Tue, 29 Jun 2021 09:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147987.273281; Tue, 29 Jun 2021 09:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ly9kB-0002ue-Sn; Tue, 29 Jun 2021 09:09:23 +0000
Received: by outflank-mailman (input) for mailman id 147987;
 Tue, 29 Jun 2021 09:09:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wA62=LX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ly9kA-0002uY-4U
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 09:09:22 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01c94723-406d-487b-951c-81911d7a5f4a;
 Tue, 29 Jun 2021 09:09:20 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-33-mnw0FPrzPpaaPxM1EF6xrw-1; Tue, 29 Jun 2021 11:09:17 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2607.eurprd04.prod.outlook.com (2603:10a6:800:58::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.21; Tue, 29 Jun
 2021 09:09:15 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Tue, 29 Jun 2021
 09:09:15 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0115.eurprd04.prod.outlook.com (2603:10a6:208:55::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.21 via Frontend Transport; Tue, 29 Jun 2021 09:09:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01c94723-406d-487b-951c-81911d7a5f4a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624957759;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M0JLxIfMg7OyRwYtnKGGfBmNsyzpNeZOWSN68NtLVOI=;
	b=eMNas0wY4tKujfDFZ45vJh4hcK92umFFAlsBZOo2L7jxln1+UXd1yf8FV10NLF+Jdd1494
	LLQTqGydJpSkhWvQZ6zmgBftKU2vZFzeC71QSfKEIouAJ0sKWpQ+JjHd3pRqoCNzq8h+Q+
	wYWlXn/ZLJr7vYXoxP/al3lCp9wpcV8=
X-MC-Unique: mnw0FPrzPpaaPxM1EF6xrw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86emul: avoid using _PRE_EFLAGS() in a few cases
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
 <4362c5af-64d4-ef13-dd84-1c885616afc8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <56b1144a-f659-2b67-f054-8e141694f9cb@suse.com>
Date: Tue, 29 Jun 2021 11:09:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <4362c5af-64d4-ef13-dd84-1c885616afc8@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0115.eurprd04.prod.outlook.com
 (2603:10a6:208:55::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fafc6265-e265-4485-44af-08d93add9141
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2607:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fafc6265-e265-4485-44af-08d93add9141
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2021 09:09:15.1331
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ioCXlTYZCdzbYQy5qdfwh7X+HCLSYAX/+OtCP/fEc3Kj+bfLf19hURmcUXp/3d+8iw9aA5wGhOHIn6+5YAZ6KA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2607

On 28.06.2021 19:14, Andrew Cooper wrote:
> On 02/06/2021 15:37, Jan Beulich wrote:
>> The macro expanding to quite a few insns, replace its use by simply
>> clearing the status flags when the to be executed insn doesn't depend
>> on their initial state, in cases where this is easily possible. (There
>> are more cases where the uses are hidden inside macros, and where some
>> of the users of the macros want guest flags put in place before running
>> the insn, i.e. the macros can't be updated as easily.)
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Honestly, this is the first time I've looked into _PRE/_POST_EFLAGS() in
> detail. Why is most of this not in C to begin with?

Ask Keir?

> The only bits which need to be in asm are the popf to establish the
> stub's flags context, and the pushf to save the resulting state.
> Everything else is better off done by the compiler especially as it
> would remove a load of work on temporaries from the stack.

I'll try to find time to do something along these lines.

> Nevertheless, ...
>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -6863,7 +6863,8 @@ x86_emulate(
>>          }
>>          opc[2] = 0xc3;
>>
>> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>> +        _regs.eflags &= ~EFLAGS_MASK;
>> +        invoke_stub("",
>>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>                      [eflags] "+g" (_regs.eflags),
>>                      [tmp] "=&r" (dummy), "+m" (*mmvalp)
>> @@ -8111,7 +8112,8 @@ x86_emulate(
>>          opc[2] = 0xc3;
>>
>>          copy_VEX(opc, vex);
>> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>> +        _regs.eflags &= ~EFLAGS_MASK;
>> +        invoke_stub("",
>>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>                      [eflags] "+g" (_regs.eflags),
>>                      "=a" (dst.val), [tmp] "=&r" (dummy)
>
> ... this hunk is k{,or}test, which only modified ZF and CF according to
> the SDM.
>
> The other flags are not listed as modified, which means they're
> preserved, unless you're planning to have Intel issue a correction to
> the SDM.

kortest has

"The OF, SF, AF, and PF flags are set to 0."

in its "Flags Affected" section. ktest has

"AF := OF := PF := SF := 0;"

in its "Operation" section.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 10:01:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 10:01:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.147995.273304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyAYH-0000RU-4O; Tue, 29 Jun 2021 10:01:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 147995.273304; Tue, 29 Jun 2021 10:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyAYH-0000RN-1I; Tue, 29 Jun 2021 10:01:09 +0000
Received: by outflank-mailman (input) for mailman id 147995;
 Tue, 29 Jun 2021 10:01:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YSCM=LX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lyAYE-0000RF-Gy
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 10:01:06 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 964a43fb-1653-41ab-90df-338d5b5dfc07;
 Tue, 29 Jun 2021 10:01:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 964a43fb-1653-41ab-90df-338d5b5dfc07
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <9285f566-e352-9265-e9e3-e9a1e15ce7d5@suse.com>
 <4362c5af-64d4-ef13-dd84-1c885616afc8@citrix.com>
 <56b1144a-f659-2b67-f054-8e141694f9cb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul: avoid using _PRE_EFLAGS() in a few cases
Message-ID: <aec1ebff-34ae-2f41-ddb0-07ef147502ec@citrix.com>
Date: Tue, 29 Jun 2021 11:00:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <56b1144a-f659-2b67-f054-8e141694f9cb@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LNXP123CA0006.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 29/06/2021 10:09, Jan Beulich wrote:
> On 28.06.2021 19:14, Andrew Cooper wrote:
>> On 02/06/2021 15:37, Jan Beulich wrote:
>>> The macro expands to quite a few insns, so replace its use by simply
>>> clearing the status flags when the insn to be executed doesn't depend
>>> on their initial state, in cases where this is easily possible. (There
>>> are more cases where the uses are hidden inside macros, and where some
>>> of the users of the macros want guest flags put in place before running
>>> the insn, i.e. the macros can't be updated as easily.)
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Honestly, this is the first time I've looked into _PRE/_POST_EFLAGS() in
>> detail. Why is most of this not in C to begin with?
> Ask Keir?
>
>> The only bits which need to be in asm are the popf to establish the
>> stub's flags context, and the pushf to save the resulting state.
>> Everything else is better off done by the compiler especially as it
>> would remove a load of work on temporaries from the stack.
> I'll try to find time to do something along these lines.
>
>> Nevertheless, ...
>>
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -6863,7 +6863,8 @@ x86_emulate(
>>>          }
>>>          opc[2] = 0xc3;
>>> 
>>> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>> +        _regs.eflags &= ~EFLAGS_MASK;
>>> +        invoke_stub("",
>>>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>>                      [eflags] "+g" (_regs.eflags),
>>>                      [tmp] "=&r" (dummy), "+m" (*mmvalp)
>>> @@ -8111,7 +8112,8 @@ x86_emulate(
>>>          opc[2] = 0xc3;
>>> 
>>>          copy_VEX(opc, vex);
>>> -        invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>> +        _regs.eflags &= ~EFLAGS_MASK;
>>> +        invoke_stub("",
>>>                      _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
>>>                      [eflags] "+g" (_regs.eflags),
>>>                      "=a" (dst.val), [tmp] "=&r" (dummy)
>> ... this hunk is k{,or}test, which only modifies ZF and CF according to
>> the SDM.
>>
>> The other flags are not listed as modified, which means they're
>> preserved, unless you're planning to have Intel issue a correction to
>> the SDM.
> kortest has
>
> "The OF, SF, AF, and PF flags are set to 0."
>
> in its "Flags Affected" section. ktest has
>
> "AF :=3D OF :=3D PF :=3D SF :=3D 0;"
>
> in its "Operation" section.

Oh - the pseudocode and the text don't match. How helpful.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
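[Archive editor's note: for readers not following along in the SDM, the point being settled above is that kortest fully defines all six arithmetic status flags - ZF and CF from the OR result, with OF/SF/AF/PF forced to 0 - which is why the emulator may clear them up front instead of first restoring the guest's flag state. A minimal standalone C sketch of that semantic; the helper and mask names are hypothetical, not Xen's actual code, though the EFLAGS bit positions are architectural:]

```c
#include <stdint.h>

/* Architectural EFLAGS bit positions. */
#define X86_EFLAGS_CF (1u << 0)
#define X86_EFLAGS_PF (1u << 2)
#define X86_EFLAGS_AF (1u << 4)
#define X86_EFLAGS_ZF (1u << 6)
#define X86_EFLAGS_SF (1u << 7)
#define X86_EFLAGS_OF (1u << 11)

#define EFLAGS_ARITH_MASK (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
                           X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF)

/*
 * KORTESTW k1, k2 (16-bit opmask form): OR the two mask registers,
 * set ZF if the result is all zeroes, CF if it is all ones, and
 * clear OF/SF/AF/PF.  Since the insn writes every arithmetic flag,
 * clearing them beforehand (the "_regs.eflags &= ~EFLAGS_MASK;" in
 * the patch) is equivalent to loading the guest's previous state.
 */
static uint32_t kortestw_eflags(uint16_t k1, uint16_t k2, uint32_t eflags)
{
    uint16_t res = k1 | k2;

    eflags &= ~EFLAGS_ARITH_MASK;
    if ( res == 0 )
        eflags |= X86_EFLAGS_ZF;
    if ( res == 0xffff )
        eflags |= X86_EFLAGS_CF;
    return eflags;
}
```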



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:07:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:07:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148016.273335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCWf-0003qB-TZ; Tue, 29 Jun 2021 12:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148016.273335; Tue, 29 Jun 2021 12:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCWf-0003q4-QH; Tue, 29 Jun 2021 12:07:37 +0000
Received: by outflank-mailman (input) for mailman id 148016;
 Tue, 29 Jun 2021 12:07:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyCWd-0003pu-Nj; Tue, 29 Jun 2021 12:07:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyCWd-0002h3-Jg; Tue, 29 Jun 2021 12:07:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyCWd-0000YK-CD; Tue, 29 Jun 2021 12:07:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyCWd-000881-Be; Tue, 29 Jun 2021 12:07:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163182: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f8582da0417660269bec69e399f8667f761e886b
X-Osstest-Versions-That:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 12:07:35 +0000

flight 163182 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163182/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8582da0417660269bec69e399f8667f761e886b
baseline version:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104

Last test of basis   163166  2021-06-28 11:01:37 Z    1 days
Testing same since   163182  2021-06-29 10:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c636a5fe59..f8582da041  f8582da0417660269bec69e399f8667f761e886b -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:10:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148021.273349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCZd-0005A6-Cq; Tue, 29 Jun 2021 12:10:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148021.273349; Tue, 29 Jun 2021 12:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCZd-00059z-9m; Tue, 29 Jun 2021 12:10:41 +0000
Received: by outflank-mailman (input) for mailman id 148021;
 Tue, 29 Jun 2021 12:10:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DRwR=LX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lyCZc-00059t-5D
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 12:10:40 +0000
Received: from mail-lj1-x231.google.com (unknown [2a00:1450:4864:20::231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1422efb-cf25-4577-8374-318002e396a2;
 Tue, 29 Jun 2021 12:10:39 +0000 (UTC)
Received: by mail-lj1-x231.google.com with SMTP id x20so24009145ljc.5
 for <xen-devel@lists.xenproject.org>; Tue, 29 Jun 2021 05:10:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1422efb-cf25-4577-8374-318002e396a2
X-Received: by 2002:a2e:2a85:: with SMTP id q127mr3714856ljq.77.1624968637938;
 Tue, 29 Jun 2021 05:10:37 -0700 (PDT)
MIME-Version: 1.0
References: <20210628100157.5010-1-anthony.perard@citrix.com> <20210628100157.5010-3-anthony.perard@citrix.com>
In-Reply-To: <20210628100157.5010-3-anthony.perard@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 29 Jun 2021 08:10:26 -0400
Message-ID: <CAKf6xpvO8y-kfVcm+d2NnTSvfkPESHo6fsWBbPsZK=iXvt4u1A@mail.gmail.com>
Subject: Re: [XEN PATCH 2/2] libxl: Fix QEMU cmdline for scsi device
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun 28, 2021 at 6:02 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> Usage of 'scsi-disk' device is deprecated and removed from QEMU,
> instead we need to use 'scsi-hd' for hard drives.
> See QEMU 879be3af49 (hw/scsi: remove 'scsi-disk' device)
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Looks like scsi-hd is really old - qemu v0.15.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:11:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148023.273360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCad-0005od-NG; Tue, 29 Jun 2021 12:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148023.273360; Tue, 29 Jun 2021 12:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyCad-0005oW-KH; Tue, 29 Jun 2021 12:11:43 +0000
Received: by outflank-mailman (input) for mailman id 148023;
 Tue, 29 Jun 2021 12:11:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DRwR=LX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lyCac-0005oQ-Kg
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 12:11:42 +0000
Received: from mail-lf1-x12f.google.com (unknown [2a00:1450:4864:20::12f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e638679c-2502-4049-be2e-ec5e1a5f19a4;
 Tue, 29 Jun 2021 12:11:41 +0000 (UTC)
Received: by mail-lf1-x12f.google.com with SMTP id h15so38997687lfv.12
 for <xen-devel@lists.xenproject.org>; Tue, 29 Jun 2021 05:11:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e638679c-2502-4049-be2e-ec5e1a5f19a4
X-Received: by 2002:a19:c147:: with SMTP id r68mr22016546lff.226.1624968700973;
 Tue, 29 Jun 2021 05:11:40 -0700 (PDT)
MIME-Version: 1.0
References: <20210628100157.5010-1-anthony.perard@citrix.com> <20210628100157.5010-2-anthony.perard@citrix.com>
In-Reply-To: <20210628100157.5010-2-anthony.perard@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 29 Jun 2021 08:11:29 -0400
Message-ID: <CAKf6xps7w0_zT2iecRvq9UkZfqvqLqMo_p4BwZ1Oife37O2sFA@mail.gmail.com>
Subject: Re: [XEN PATCH 1/2] libxl: Replace short-form boolean for QEMU's -vnc
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun 28, 2021 at 6:02 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> f3f778c81769 forgot one boolean parameter.
>
> Fixes: f3f778c81769 ("libxl: Replace QEMU's command line short-form boolean option")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:53:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148028.273370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDEr-0001aj-VL; Tue, 29 Jun 2021 12:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148028.273370; Tue, 29 Jun 2021 12:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDEr-0001ac-SP; Tue, 29 Jun 2021 12:53:17 +0000
Received: by outflank-mailman (input) for mailman id 148028;
 Tue, 29 Jun 2021 12:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wA62=LX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyDEq-0001aW-3t
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 12:53:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3c1c072-a12b-47bc-9065-53c7a815e0ad;
 Tue, 29 Jun 2021 12:53:15 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2055.outbound.protection.outlook.com [104.47.14.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-10-sJhavAyaPuyF0q4E5pCF5A-1; Tue, 29 Jun 2021 14:53:08 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB3390.eurprd04.prod.outlook.com (2603:10a6:803:9::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.23; Tue, 29 Jun
 2021 12:53:07 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Tue, 29 Jun 2021
 12:53:07 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0019.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:50::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Tue, 29 Jun 2021 12:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3c1c072-a12b-47bc-9065-53c7a815e0ad
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86/mem-sharing: a fix and some cleanup
Message-ID: <dea13187-04ce-9c1d-aa5c-e2cd0a7d42d9@suse.com>
Date: Tue, 29 Jun 2021 14:53:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0019.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:50::24) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 999aba0d-12cc-48bf-f209-08d93afcd76a
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3390:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3390AC050851DA0559084CC6B3029@VI1PR0402MB3390.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2089;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	80FAWAJT/7WwGsOJ6bAAy2D2cWOVHcjb9nzAfmBw07lE0Nj7uHY0inqCqGJXBQbGEOvZR1l6+E6evyDgC1SD1FbUd9xsfwxJEc8OmxVMLUy96u5vPpU/pYbs7CXLLhez4nLxeqXRXv0gBXkm033pC0fiAWTu0cmoIEBi2GbjM2ZwLWFEVL0bZZ4y1gyW/ATh1JyfROo36C9guuF6NM4NXqUSFbuDMf/kKyXsjch3DtaGqIk/fhsjyFmi2kMLOCWQZ/7m/Zxv2tqIBaELGysPGmNS5MYbz6VRKgtWpUr+KioUnQbQeHozYxvIqU9ZqkN/Nfkm3Pgl0YvK/kloND+0wTpMGI49/v7Z5W/LdNJJZ4qhwejQQil1Nb5M1A3YKJJkHAj1AlZnyhmfTQgTb/wCxPnGt8ME+2CTExRB1GNG5J1IdVDMQiIQhT2mv2VpDywf98EtItcQ6YWTUq0i4JEo1L5U7I76c5u2f96CJorlGL8XBf1Yu9g6qvOZFnJB5XlRUGmH314Ce+Y4xHXn81gsfeM8qHM7Khqw2m+xaYcb3nt8nqbtkfAMPKbB8oZRalPzjrDPpS2XmTR//FeNx+jeQty2bsVuPb3IpzaEymALXOTquL1xX6yTr7gt5r8YukS45OyZcjhvrZUygsLlGTeJHLC+U0/KbJQEM3AegzlhggGUgFu2qTkL+P8qv9TNDQmNG5/54Y5GYoAQXkbkdyrHHcaXtDIAl80x9Gl0Xd/OXic=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(376002)(366004)(396003)(39850400004)(6486002)(16576012)(6916009)(4326008)(66946007)(31686004)(66556008)(66476007)(478600001)(956004)(316002)(5660300002)(86362001)(31696002)(8676002)(16526019)(83380400001)(558084003)(186003)(2906002)(2616005)(54906003)(8936002)(26005)(38100700002)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eTg1d2ZpeExqOTlUcHZUSDdhaXkxRHB5K0VqbGZsMXJXbGNHeCt0ZHVvNHJU?=
 =?utf-8?B?WmNYM2dNQzRGR08rR0YzcTFGMHpPRlErdG1EbjdBc29MSEdXdzl1OEpwb3RC?=
 =?utf-8?B?R0FMZkFzY3cyK2RKZDFFUDRLM0pCZG5jbXNKeWl6NW1vZ3lkN0g5Qm5WVnc2?=
 =?utf-8?B?MGxEYmRoZHlzRkExTXg4WDIzVFc4aVFYcEVnRE54cXM3ODNBNmFFbmNvL2ZY?=
 =?utf-8?B?djUrSm9oanR2Z2lCL0lMT2c4bWEyRjRkMnc5YjdoSUxrK1Nzdng1QXJnQStq?=
 =?utf-8?B?V1RmaVlHRGlZOXFpa2ZXbEU4Wi8yRnZzeXgrZEhPQXdpQ1RZTGVZTHV2cE1W?=
 =?utf-8?B?Y2lNdngwQkpsV0Q4bDl4ZnZsdHVsVkcwNkNuZUhuOTlqaGJjOUR6MlhqdUxm?=
 =?utf-8?B?MTVaUWoxa281MEZ6T3lPQXBaUDRqZGliQlpiZDlqcXBkWCsrbHhLQ1BjbUZM?=
 =?utf-8?B?MFluVTVMUW9Tb1dsQnAybmRNbVJSZ3llRWdmaWZqQ0pYR2srb1kxLzUwVzdT?=
 =?utf-8?B?eTI0eUtRaXhwWUo5Yjl2RGtSbzBoRXdLcE1DZ3k3RS9xSjlFNmd6Q21Obm01?=
 =?utf-8?B?YnArRWRxRTZMNGVLMFlCUGd3eStUdzdqYnpsK2hvQjE0U2F4WlNXbDh4aGRi?=
 =?utf-8?B?RG4zYkI5ZXBCSDBWeUlqVXYzMndSeTBPaG9zTnpMYk8zTVdUKy9lbm9pTFZu?=
 =?utf-8?B?UE5DMUcwNUN6Z0UxWHNEODRwSW5OZmVGd1N5UTdrTTBhS0prVjVqY0llb1dG?=
 =?utf-8?B?R0VsblB2K0NtK3VtYXFzUCtZaU1MZHNUTXUybTN3S2xWaHBtVnJUN3hCNnQ3?=
 =?utf-8?B?Ti95bGpZN2tHakxHN2lyR0FFQjR5dHhlMlA2SnhxQnhSN0FiaGcrazlPR1ZS?=
 =?utf-8?B?cGRqOHlNVk1rSVlUS2kwYkZIUU52SnArTTQ0aGxORG1aSThScG1jSk9PTk1B?=
 =?utf-8?B?RW94NXVCT0VHcXlPbVVCR0NjUW5raGNPWnE2SFBLMTAzK0xxY1Bwd1pjdnFF?=
 =?utf-8?B?N2s4VGZDTHhoSlRMbEREVjZmaEEwWCsyNHRtOHIzZ0hTaFc0MXFnV2tyTXhl?=
 =?utf-8?B?a1RQNExPMUg2cTlhU3ZWYmpYU2FJMmY5UG5EbS9sblUyZXFWMzJrUDAwTTdt?=
 =?utf-8?B?Zi83ZFFVUFA1Vi8vQWlTRHAwUVVZVjlwNHlINXhzbTlkSEtGYnI4Tnd0aG90?=
 =?utf-8?B?NjM0YkNhWDN3VmU3ZTVJbkYybVl1TTF6VUd0UXNvd3pzcitHdE9obEtRa3cr?=
 =?utf-8?B?ZVZZTmdhZ3h4dm83V3NQK1h0c3JWaEwxdjBza2dTOU9QTFRsODJiWTBxaVlt?=
 =?utf-8?B?bjAwVFd1U1FyZEdPVUR2NlhhaXJrWkZwaVpobXVZZXNlQ2N0UTJOVHlic2VS?=
 =?utf-8?B?NEJhYVdKOXpkeGlJM1NlM0d2ZklPTGFvbitsVHAvUmNmUGV6NFpGRWxqOWFN?=
 =?utf-8?B?U0RnZG1CdXdnelhESmRxMURtTzVDN0ljNTR5SFg3UEl1MlhmUEl0SUFCR09V?=
 =?utf-8?B?czF5bDY1TW5ZWDZRWEh0Sm1zUmFnWjB0QlNYU25zUm54Z1dQZzhEaDlMSitZ?=
 =?utf-8?B?WVlxUXFGM1VDR2NXUVUvWTl6OFVBaURlbGQ4L0ovTHFPT3dsRlYyTTYyMmRE?=
 =?utf-8?B?MWVMMnFUWG9Bbmlmb0huVjhZUVB6TkNyVmFTcU5ZWkxWTmErR3IzVU85V2FC?=
 =?utf-8?B?a3hNTTdBYXdIaFlBazViVk4vMklsU0ZyRjk1VXA5RHBQNHNSUnQvVHFhVWJV?=
 =?utf-8?Q?uOhoaNi6h2bw8VhIofgqQFyRqOuPlTshKLHwyBL?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 999aba0d-12cc-48bf-f209-08d93afcd76a
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2021 12:53:07.1544
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6X8ikSYLIQC9vWDB3X/YHZWGrFqOHQe4Tc5cBXk9UQZT1hTaRXfKRHT2OkiwITizIN42a580eyjKs1Z6fzw+sA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3390

1: guarantee consistent lock order in get_two_gfns()
2: move {get,put}_two_gfns()

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:54:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148030.273381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDFe-00028f-8R; Tue, 29 Jun 2021 12:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148030.273381; Tue, 29 Jun 2021 12:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDFe-00028Y-5Q; Tue, 29 Jun 2021 12:54:06 +0000
Received: by outflank-mailman (input) for mailman id 148030;
 Tue, 29 Jun 2021 12:54:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wA62=LX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyDFc-00028L-DS
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 12:54:04 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef57f87d-8817-4ae0-9ca9-c9824f20a7c9;
 Tue, 29 Jun 2021 12:54:03 +0000 (UTC)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2059.outbound.protection.outlook.com [104.47.13.59]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-8-lOfWBJs9Ni2Afeco7YpR7g-1;
 Tue, 29 Jun 2021 14:54:01 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Tue, 29 Jun
 2021 12:53:59 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Tue, 29 Jun 2021
 12:53:58 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P193CA0030.EURP193.PROD.OUTLOOK.COM (2603:10a6:102:50::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.18 via Frontend Transport; Tue, 29 Jun 2021 12:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef57f87d-8817-4ae0-9ca9-c9824f20a7c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624971242;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T+KmuULkjATHsJkpBCbdQw5XA9fzShGhY+AfPy7XXqk=;
	b=GW0mdIl42Rn67IlRdwV7lkkiFIWbK2LmZEnb+xtXNSmAN8mJlUIq7Iu99uiowt8kbLC5yo
	j94W4UjzXBKZeWD9uneUmOEqMECjuvYm1GAMutyvYQn6J04Vu5tVYCYFlYF6AhGE9xZ7RG
	OMivXR8izP/MT7lTFWy9hSrcNtLOVLs=
X-MC-Unique: lOfWBJs9Ni2Afeco7YpR7g-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fJp63VXxr6xepdm84a+yyqUxl7uPfYouhqPTE4A9msOjGJWuFo8OAgwuqbSDVazY4hHCx7czaZnmUFuds56khZH3X9ai8Ik01cuOwRy+vvWd0vDdnVCD5i3mGbv62WeGRk+HVMJNydaqfTLa9dG/7PZYTGRKg1nOhz93dDoV698LFTlahapQDfxtNysYBNumTw2aNmPe+csO7E3qdwYedV1XzY16X1qYavC/8rYhb2RE08pl77G8Z4xYwJeZAoRk2J6Y4XoNFZG8L2a4YM7PCOjJ6umo6D2rPuv3MyuqpxvElpFQC4KTr0FLrPoKuHcS4sBkabSXdMbqZwtSZcHt4A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T+KmuULkjATHsJkpBCbdQw5XA9fzShGhY+AfPy7XXqk=;
 b=TKUCEGsCho7705fA+y8Wl7Yj85Jz134ObhNFPuu7RiflzwAVA35hzpB8oh9KLNaS2WHoGo/tC+6sxokpX/r3Z+PxzWWG06GuwOtye++TXoukpqAARRVsFd9Yas7i1bfAyyDKelP3d3rIDo8/cFJ3d8KgRlqy+Mc3CmUMfC1UAsVa/kXv/gzfOSFNZ3+PSYE0IXLw8pfaGTgXnJD+dAV7tXDrNZOyf1hpYVQDQ3c2z3ZOyINBYmEGxlYwj+j2sSPy3er0tkgzljYgOH0geGqMLOyPxV7im3+liNyXuZ9fBSyy3IucyAmyQlxcsPR+ty49ZrD9z0JqJs2I7AIrrhcQCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 1/2] x86/mem-sharing: ensure consistent lock order in
 get_two_gfns()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <dea13187-04ce-9c1d-aa5c-e2cd0a7d42d9@suse.com>
Message-ID: <932211b2-c3aa-17f6-9fed-ca762e189786@suse.com>
Date: Tue, 29 Jun 2021 14:53:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <dea13187-04ce-9c1d-aa5c-e2cd0a7d42d9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P193CA0030.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:50::35) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e2559226-2be0-4a85-649e-08d93afcf64d
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB61733E61B5D3BC10A9D1674EB3029@VI1PR04MB6173.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3968;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	P8VcGnUHIYnv+6pewCXD/0Z7cxBpynvDmlpqY/4743HLzqHJTUt44pIe108Bi8379xR+ytNB9xcgmApFk4t/XwVwnQuZOaoGUWiXXyREXkK51wcWEHj4wZ5c7NaG3hFcjMBzgfpU45IbwRtmEhkcOiUpdnt3O5ICC702i3qxXrx7FAz1fgj1bvECF2rn89prOWgxQUxCE7yZzBxLJuQ42Q9nAPJnGQlrX8tWLpyYFt5qkWPmqMjYGnzzuOgUG5otd6bgmtYr2PUtkRsVJQaPTbyyP8rz1SsOJar2+GTb+juYmMziUSexXu16bEq88rTqZ1nldiLiD7yaK8Vt8RttHiSCj90oe/qwrPlRNBxyMfpcO61KeEfoUw0IWNGycUBmSm2lbltRYtvfarIoaUNjbcjZuEpC/DUrL4IYVWkotclrWieKFj1i4X33zLL1bAzSO+VyDe5w6dXrqPI3x7gas2zyLQn699I1GZ+XQ1zSoYlGYdWX/HSdHl8+GedN48CZvx1hNQ4m4OBzGXJeku3vw7Js/kNBUO+uaxViMv2GQxLLkRhS536mbs/EZTr9bqJjNCP0IWxgO5nFD4RzOaQtkILsH+eqpe1lGA3NXFqP7ozm31KNkF9m6dpxIBTd7iUMQqV1B4EikwRRIVVfQAKbRVr2wtMLfhqijt8G+PrboxIBssDMrChwY2Rhvfn4skUgkO4RqKHlmxRb8HRnv96Jvqxcm+AZmRwYr8KImgpjF8M=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39850400004)(136003)(366004)(396003)(376002)(346002)(6916009)(478600001)(5660300002)(66476007)(66946007)(956004)(4744005)(66556008)(8936002)(8676002)(2616005)(31696002)(86362001)(38100700002)(4326008)(186003)(16576012)(316002)(6486002)(54906003)(26005)(2906002)(83380400001)(36756003)(16526019)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WmNQZ3dnRFJQQ0tmd2ZBNENNRVNyWDlpMjJrN3Bjd2lFM0dHNUdwK2lYRVND?=
 =?utf-8?B?TFFWTlNlV2hwVnZjQUhSYnB1bkpucnMvWGVydFhGZGZpTnFPazhGUmZCNlM5?=
 =?utf-8?B?REc5NFNTSmc0VmNEZ1lzSXQzMGxlSEpMcGlYaFBmRjU1MisyUVljZGpTK1h0?=
 =?utf-8?B?UjkweXd5MklHd0htMDE1NnUvVXcxakdqT1dwei9KbXV4SFNCWlNXWUxnRE8v?=
 =?utf-8?B?VElUTXJQelJ6MFI4c0c3V1BXdUZmTDdQOGZLZ3dNYUVxMFFtZHovV2RVN3hH?=
 =?utf-8?B?bnlPRHh4Q2h0SG90b3pmU04wMkpHc3Rwd01qZzcweTl0K2dnMU1RdStkSWpF?=
 =?utf-8?B?YU95YnI1K1VEN2FzaDM2b01BVUpyRG9JLytiMXVUUTJMK3l2VkpTeDZ3STI3?=
 =?utf-8?B?RC9UWXBuL0lmRmRLUS80MWZwY0p6QkxZMlJNZUlpbXR4VFlOWVE1dHM3NlNx?=
 =?utf-8?B?ZndVRHVpNkdVZzF3Tkl4c1ZaTDNTUG1ndFRTQ21ZNWVrY0lkWUxVODBUVlI5?=
 =?utf-8?B?M254akVGVlZ4T2VkNFFWdTl0MFVRNDR4b0N0WUF6cW5SR0dzeVAzUi9kZElu?=
 =?utf-8?B?a2N4MS9iekIxNlhlaDRhYVVDOHI1czZXWWRzSTdhQ1FJZkQweXU3L3pTM3Nz?=
 =?utf-8?B?U3p6Z3hJdnBvQnpzU2hoNXNYcmJWT1lWSmgvSEgxK3JDeitEUGJIWFdvYVlV?=
 =?utf-8?B?ZWp6Vk4zN3lOYzZJaEwvQ0NnQnJXVm9WdUJKcHJFSi9IcWdIb1YxZ054MjMz?=
 =?utf-8?B?T04vNkw1Z09iK1Z0Vmh5R0dYbDRFQ0oyK1dYekE0UUJsbWtFMDFWU0lGU2Zm?=
 =?utf-8?B?ZkZ2cy9XRktIRmx0dmpmeDFlcFZ6Q25CKzZXMmtlTHpJb1pqeWJEZjJ4eDBM?=
 =?utf-8?B?NW1CM2xEakhzUW45NDNtMHhpdC9kalZjM29ka0U0TDUzOTRadTFxckFHRDFt?=
 =?utf-8?B?SlFISEZSU3A5MDcxYjhtbXlXZEFBNWx1RzNveG9NQjB2cDJvUnlCcW5Xa2Nt?=
 =?utf-8?B?My85VHFQZnNOaDk5eGZTUkwrOExocG1WZHV6MVd4Zi9FblhlS3psU2J2QkVs?=
 =?utf-8?B?T3NoQUdEc2E2eTdoUGY5UGgyQU9WdGdBSXVnN1ZUNUtxSzZ4bVhuWXJqdjNZ?=
 =?utf-8?B?UzhBZUZDeUdMRHB2aVdkVnlEV1ozY3hhait0VDlpVm8rM3BzdGM3Ni96TWhY?=
 =?utf-8?B?QXh1ckpYVTNZTjhrbzNiVytGTFU1SG1DUmxGNXE5ZzhjcEFwT1RKWjZMb00y?=
 =?utf-8?B?WUc1UlRHVGVTNE9RRi8ydXBnczVRaWg5c0NwU05Fd2UwRnQzUTE1alNIejl0?=
 =?utf-8?B?SUViQURVZEhsUHIyYmRGYjhHSDdpNG8wQlo5NTlEbTdJcVJOeDJYZ0ZmWTVn?=
 =?utf-8?B?VlArdHFyVzNtWEs1Mm13NFg5aG0yM2VrYXVsTWRhTlgzNHZnM3RkNEZPVFpZ?=
 =?utf-8?B?Q2IvbUNrQmxwTExFZ2hRQTdkZHNEQnpFS1RMOUtXcjJwcVZLbkVwbmhxSjdi?=
 =?utf-8?B?RXp5dzQveEdQdG1xSWlxNEMzaCtnNjhpWW1WOWNHeG9YSmVjWjlxMURlYmhx?=
 =?utf-8?B?aUFJMUFpWU5xWXV6WG1kVUg3OEtzMkV3MUg0eTVYdEc3U3MxQ2V5b3ZVWVQ0?=
 =?utf-8?B?Vy90VXZDSjNVMXFHMGhNTFNBckd1Nlg4NWZMZWNKQTA0MmJTZXVaQ2R0MCsr?=
 =?utf-8?B?NzVpbWYvRDNVaDA1RHBNQW9NYzc2aEVUMk1jVUJuR09TN05IRStiS0x5L1g2?=
 =?utf-8?Q?kO9HFYO9qIVuIQ0Mn1+O9iyu9O7ru71DW/u7GfE?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2559226-2be0-4a85-649e-08d93afcf64d
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2021 12:53:58.9062
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 551Q3KUvvOJr8Smxpnd6k7c407l4Ox9Sph4ylRooHd+S+HvqT3RJwNOGAPvkKjvX232TX8OWCDYhksEaOVZbbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

While the comment validly says "Sort by domain, if same domain by gfn",
the implementation also included equal domain IDs in the first part of
the check, thus rendering the second part entirely dead and leaving
deadlock potential when there's only a single domain involved.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -587,7 +587,7 @@ do {
     dest ## _t   = (source ## t)   ?: &scratch_t;       \
 } while (0)
 
-    if ( (rd->domain_id <= ld->domain_id) ||
+    if ( (rd->domain_id < ld->domain_id) ||
          ((rd == ld) && (gfn_x(rgfn) <= gfn_x(lgfn))) )
     {
         assign_pointers(first, r);



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 12:54:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 12:54:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148033.273393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDG8-0002lK-Lp; Tue, 29 Jun 2021 12:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148033.273393; Tue, 29 Jun 2021 12:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDG8-0002lD-Ij; Tue, 29 Jun 2021 12:54:36 +0000
Received: by outflank-mailman (input) for mailman id 148033;
 Tue, 29 Jun 2021 12:54:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wA62=LX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyDG7-0002l7-Vu
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 12:54:36 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c52841f-d7e6-46cc-9007-af3e1e348d48;
 Tue, 29 Jun 2021 12:54:35 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-13-wV90MzgDNd6tK67nmVbnFQ-1; Tue, 29 Jun 2021 14:54:32 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Tue, 29 Jun
 2021 12:54:31 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Tue, 29 Jun 2021
 12:54:31 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM3PR07CA0148.eurprd07.prod.outlook.com (2603:10a6:207:8::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Tue, 29 Jun 2021 12:54:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c52841f-d7e6-46cc-9007-af3e1e348d48
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624971274;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+kokP/tuDkyLODI5tQsnPxes6fbGBAjcWgNA4ZJ5vuU=;
	b=T/ryBV1FU9uX4mGQiug/YPFntSOCAppTzOJCdArRz2ALD7bmILfzUQ1KT3cD+vJ32kCh1p
	UQZ43ZKn54tYK/sBx1KlzNCFo88Tc/Qq2e4pFtql7Hx+Iiaj7VgvKMiOVfnTleJ0llLkhE
	tmrYTudePMyA2jE/31HBQ9hnZv4RsRg=
X-MC-Unique: wV90MzgDNd6tK67nmVbnFQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eo9pFtuEFG5Qkayi4jocKmijweQyc7U/+ULZ2VjDYtGIBExF6KR5NqPbrIq03kx4hcfFGnZmYbExOFP2UYZnKEs33V2m0I+Lh+QukU03kAg96wa3NW3AJDDRJS16UOXgfQxn4E/Xr+eqIKTRU1fflMUYSwoU5BgbmqF5536cMxudfqE7VpEl4P2baFXzbQs6jtqQB5x6ZPSoOVfvqKA4lbNVBVbFERH/eykIzxZ/JaIxfss9ypd5YKioIqdIjx87xnbzLECFJKS7AQ9msT2OSKCJMozVDe7iteXNW0YoCmIx17cgLqJ7P1BKhzap6es/hivcIeFOWB8vLvpsle5CSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+kokP/tuDkyLODI5tQsnPxes6fbGBAjcWgNA4ZJ5vuU=;
 b=Boq3w/EWTVPbbie0DBFeAwXXKqtI8Svi9rtf9JQIDEUiLH9MLTZuPZ4p/N4pteHr7JMt50DzCE+knBGf38mudE91XaYmbYufTf853UDK5YPRtP8qF6j+NqlrO6dE3aoJ0ZS9n/7XBD/J3HZ7oH86mi/ql61MEmgfezBliq6DBAqSrcQd/Z5SneWELsNzspGCWp3wCyDJVUzVLvKgq9cRLZKBkbABPuiDY+ZSfjs1d+41HRPEvj4zzV48ziYNn2CXV6kZXk1u5kGIgaehLoTlE91MwEWNeixPjVPYPI+FghZT9lcv5SPcLANd7pGXRrv2335jcfP3wOZr5yd0N4r9Rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
Subject: [PATCH 2/2] x86/mem-sharing: move {get,put}_two_gfns()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <dea13187-04ce-9c1d-aa5c-e2cd0a7d42d9@suse.com>
Message-ID: <6f4c081d-732d-87c0-2ad9-0aafea1ad927@suse.com>
Date: Tue, 29 Jun 2021 14:54:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <dea13187-04ce-9c1d-aa5c-e2cd0a7d42d9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM3PR07CA0148.eurprd07.prod.outlook.com
 (2603:10a6:207:8::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e76bf4ee-f027-440f-4e05-08d93afd09f0
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB617328F6756513CF8EDD31BAB3029@VI1PR04MB6173.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4vUTuA9KLImDiM7GrQY6rg7gxz4UJDGBEhwUoTrruwg7i+IisFTFX/AagiMlc72aBb1QXlagUIZiNLcuM7vR54687mvmc95bpbX4EGzoZUaUWjchdx18h+ML11BMIjQoPNEpW5jYv7jIsBw642zgE4KQXNS1pbk1Bf/XWmYQf0Ysqzqc6D+h1P77FRwJzggAvPpqq1Dfh5e6WxzsFskHHBDMoSGctG9p1W7utXWvvf61kmjRK663j4XGaAzFljTi0I5DwCfw6MdvXvsMpOPX6sevFqj0Pjg4D0/JiUIzgvHYD4VW8aMVjYwI7cQ0MvJmVdctXi3WuQAlC6bgvA0gMdsq807QqMBcZt4NAdS30tKlhftrYrB0aqnsXhhNSF8ZjIimP0iUTsefNIKS26xZk3XgfwJKE2A50lU6toOKyd6wI3JZMbuHb2OZAKe564jt+ir57VXLybmNDOqxXlp6GVnHVK7Kao/Y1HlLkRODCs7T91pimsqlpyQVyk5rJSi1UessEinrgXh03eh/zNzzVEjOlT7lV3d7glq2JDdlwOlBhzRLYXNIdELyXEsX4/Lkxm/m5roB8+WZzaUR/t7088KTHETSDNvnke/j1dIUZiYpSGW4JOOjYkMcaK0+s8P8lBMYShg/SlRW+MHpi/O+dUNHyOZVbdSRfRzGS6/gbVkc5cJSMSsn1+6AR2eA98WccYvcAl+/3T6nygNASNUqPaTr2TNa54zRWV7SELhUCHM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39850400004)(136003)(366004)(396003)(376002)(346002)(6916009)(478600001)(5660300002)(66476007)(66946007)(956004)(66556008)(8936002)(8676002)(2616005)(31696002)(86362001)(38100700002)(4326008)(186003)(16576012)(316002)(6486002)(54906003)(26005)(2906002)(83380400001)(36756003)(16526019)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NlhyQzVaaGR6YjZHK1JkbU5PdndSZ3NmaUhoVEVISkpBTTgvNVlFSGd3TDBM?=
 =?utf-8?B?bjhKODJFbHRFSlgza0VScSthaExBV01GZ3NZOHQ1c2pGREZ4Z0tucDR2dkNT?=
 =?utf-8?B?MHdHVVY5RDRXVEE4dmZzQVc4emlDL1pDeWQvd0lUQ0RaVnhENEZ0by9iMkUx?=
 =?utf-8?B?a2tMVllYUFoyYUk5ZS9YbjV2U1RZekJqbEhIUWdQZW5Sb1FVSzZlQlBobTN1?=
 =?utf-8?B?NFNDL2pKOEZDa2JPZVptNTBMaTNXNU5yc0N6dXVleXZodVpzd2pGMHlJU3Qx?=
 =?utf-8?B?V2EwRjY3eHY1U1NrOHNxT2VHOFNwcXZYUG13WGdTWXdOemphVlV5Q1BkMk5B?=
 =?utf-8?B?Y3NaZitCZ3FEckluSmxXS0M4bUFpbWFUY0l5azNXU1BLSlp4WU13ekM2dzUz?=
 =?utf-8?B?TnNFaEdXODVYYi93MWtiTnc4MUhHYlUyV0dxZzRiUEhSR0Rmajl2TndUaFo4?=
 =?utf-8?B?bW1UVGNqd0lxZXVtNHpLNTk5cVBtRGJDZU5seUtwbHJYUzJBbnl6eUtkMTdl?=
 =?utf-8?B?MG0yM2JvdXhaMkk0eFhjMWhtQWhQdDQ5b2VHaDhud05tVFRVcE0wdEJBRkQ2?=
 =?utf-8?B?UUlhZUVVU1U2YXJaSGRFUDVsOERXb09rQjZVNEpsRzJKeEFlRWVOMjBtVTlJ?=
 =?utf-8?B?TDd6d0tYd0dSV2VaY3FFTzRkbENnb3pERVpHbHBNdEVha1VLdjRSQkhkdG5l?=
 =?utf-8?B?SXNFWnpsSnYvNjFZK2IwQ1p3alNBWWhkRWY4UG04U3V3ZU1PMHhOeE1oSFc1?=
 =?utf-8?B?N0ttZDAwRFJhQW4vS25yaWJWWW5uWWZUb2NiYjdlNC8xU2U4T0djeG1JdG5p?=
 =?utf-8?B?OWhxVDNVQ1J0OEIzcTBVMlV5MXQ1d2RiZUJzOTlOMmh5ZTJnaDhUM1lLZ1Ey?=
 =?utf-8?B?QlptcUVtVjVuRUo2RnB2bVRqYlIyTVQ3cko3Q0ZyQzgxOHEyQ3ppK0lLZ3ZL?=
 =?utf-8?B?Yy92RXFETnZGZUZPSDJEUHBpSmdtK1NYNVBwdzFvajgvcWJjNHhwUVpIOGll?=
 =?utf-8?B?ZEt4SnVpRU11empEenJJbC9GWHBYZStaNnhGYlBKaHdrRmFMYm1WclYvM2R0?=
 =?utf-8?B?RkNpNDRtU01WUnRGTlVPSzBxaHl3aEpFemUycWs2TTFPa1U3NlJtY2FJdHhy?=
 =?utf-8?B?SFpPci9PLzA4VkxscEkzdmJuT2VMdzhtaFlLbDRkVTJIVXAzYTJ4K3NoMHoy?=
 =?utf-8?B?aGhyNWRJU2djQmpReEdvUzJnd0JmK1MwdFJxUTVtWjlJS1c0ckVGblNUNGQz?=
 =?utf-8?B?S3d0Umd2TlB6akQvemZpb0N6WlVBNzI1OUNRdXYyLytMYzA3VnNBbmRTUDZp?=
 =?utf-8?B?N1Z1eGNyRHFoV1h4ZU00NkI3NkJrbGd6ZFA4SjA1WWswbFFFK1FJTmFDT2ZO?=
 =?utf-8?B?STVZbmw4RVRTRHJGTEoxNFhNLzVVeHBHVSsvYm5kc0cvZ1ZsMnZEZWhlNnFW?=
 =?utf-8?B?UzBsYS9EbnJHa1p6NCtiVHVMSTRxSnJsY09TZm1YbG5XTHBNbkRhMlVvZHRI?=
 =?utf-8?B?cmQwaFFmQ21aUlcrN3pOWjNJMFdkSjdpUzI4TlNqQUdpaW13cG5OVVFKS1ZL?=
 =?utf-8?B?L3N3NEVzRTh3VmpVb2Fwc3JUaUx4YkY0MENlQU1tbnNWbUtRaWJXeTZUSFI4?=
 =?utf-8?B?NHFLazl6YUFJOFZUenROZXFzMDk1L1RqenFxTnVPRDhxNXZWb1Fybk1RMW9s?=
 =?utf-8?B?NEFjWE5nTXExMG9XK3pnQUJZK3hBTmNNeWd5SndzVGVsSm0wNDJ6ck5ONHZi?=
 =?utf-8?Q?ZHq9QbjJvo0ToyThmG8lE0fMzktLLpZ42cULMKg?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e76bf4ee-f027-440f-4e05-08d93afd09f0
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2021 12:54:31.8385
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IKxvF2MAqdrEX8dwJMWSmWkrsiw5USFrZ5+Ybo5lsV5QGsV5xK+EiJKaabV+U4DibmOWgh6g926vp29h/8TPYg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

There's no reason for every compilation unit (CU) including p2m.h to have
these two functions compiled, when they're both mem-sharing specific right
now and for the foreseeable future.

Largely just code movement, with some style tweaks, the inline-s dropped,
and "put" made consistent with "get" as to the NULL checking of the
passed-in pointer to struct two_gfns.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -432,6 +432,66 @@ static void mem_sharing_gfn_destroy(stru
     xfree(gfn_info);
 }
 
+/* Deadlock-avoidance scheme when calling get_gfn on different gfn's */
+struct two_gfns {
+    struct domain *first_domain, *second_domain;
+    gfn_t          first_gfn,     second_gfn;
+};
+
+/*
+ * Returns mfn, type and access for potential caller consumption, but any
+ * of those can be NULL.
+ */
+static void get_two_gfns(struct domain *rd, gfn_t rgfn, p2m_type_t *rt,
+                         p2m_access_t *ra, mfn_t *rmfn,
+                         struct domain *ld, gfn_t lgfn, p2m_type_t *lt,
+                         p2m_access_t *la, mfn_t *lmfn,
+                         p2m_query_t q, struct two_gfns *rval, bool lock)
+{
+    mfn_t        *first_mfn, *second_mfn, scratch_mfn;
+    p2m_access_t *first_a, *second_a, scratch_a;
+    p2m_type_t   *first_t, *second_t, scratch_t;
+
+    /* Sort by domain, if same domain by gfn */
+
+#define assign_pointers(dest, source)                   \
+do {                                                    \
+    rval-> dest ## _domain = source ## d;               \
+    rval-> dest ## _gfn = source ## gfn;                \
+    dest ## _mfn = (source ## mfn) ?: &scratch_mfn;     \
+    dest ## _a   = (source ## a)   ?: &scratch_a;       \
+    dest ## _t   = (source ## t)   ?: &scratch_t;       \
+} while ( false )
+
+    if ( (rd->domain_id < ld->domain_id) ||
+         ((rd == ld) && (gfn_x(rgfn) <= gfn_x(lgfn))) )
+    {
+        assign_pointers(first, r);
+        assign_pointers(second, l);
+    }
+    else
+    {
+        assign_pointers(first, l);
+        assign_pointers(second, r);
+    }
+
+#undef assign_pointers
+
+    /* Now do the gets. */
+    *first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
+                                        gfn_x(rval->first_gfn), first_t,
+                                        first_a, q, NULL, lock);
+    *second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
+                                        gfn_x(rval->second_gfn), second_t,
+                                        second_a, q, NULL, lock);
+}
+
+static void put_two_gfns(const struct two_gfns *arg)
+{
+    put_gfn(arg->second_domain, gfn_x(arg->second_gfn));
+    put_gfn(arg->first_domain,  gfn_x(arg->first_gfn));
+}
+
 static struct page_info *mem_sharing_lookup(unsigned long mfn)
 {
     struct page_info *page;
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -559,62 +559,6 @@ int altp2m_get_effective_entry(struct p2
                                bool prepopulate);
 #endif
 
-/* Deadlock-avoidance scheme when calling get_gfn on different gfn's */
-struct two_gfns {
-    struct domain *first_domain, *second_domain;
-    gfn_t          first_gfn,     second_gfn;
-};
-
-/* Returns mfn, type and access for potential caller consumption, but any
- * of those can be NULL */
-static inline void get_two_gfns(struct domain *rd, gfn_t rgfn,
-        p2m_type_t *rt, p2m_access_t *ra, mfn_t *rmfn, struct domain *ld,
-        gfn_t lgfn, p2m_type_t *lt, p2m_access_t *la, mfn_t *lmfn,
-        p2m_query_t q, struct two_gfns *rval, bool lock)
-{
-    mfn_t           *first_mfn, *second_mfn, scratch_mfn;
-    p2m_access_t    *first_a, *second_a, scratch_a;
-    p2m_type_t      *first_t, *second_t, scratch_t;
-
-    /* Sort by domain, if same domain by gfn */
-
-#define assign_pointers(dest, source)                   \
-do {                                                    \
-    rval-> dest ## _domain = source ## d;               \
-    rval-> dest ## _gfn = source ## gfn;                \
-    dest ## _mfn = (source ## mfn) ?: &scratch_mfn;     \
-    dest ## _a   = (source ## a)   ?: &scratch_a;       \
-    dest ## _t   = (source ## t)   ?: &scratch_t;       \
-} while (0)
-
-    if ( (rd->domain_id < ld->domain_id) ||
-         ((rd == ld) && (gfn_x(rgfn) <= gfn_x(lgfn))) )
-    {
-        assign_pointers(first, r);
-        assign_pointers(second, l);
-    } else {
-        assign_pointers(first, l);
-        assign_pointers(second, r);
-    }
-
-#undef assign_pointers
-
-    /* Now do the gets */
-    *first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
-                                        gfn_x(rval->first_gfn), first_t, first_a, q, NULL, lock);
-    *second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
-                                        gfn_x(rval->second_gfn), second_t, second_a, q, NULL, lock);
-}
-
-static inline void put_two_gfns(struct two_gfns *arg)
-{
-    if ( !arg )
-        return;
-
-    put_gfn(arg->second_domain, gfn_x(arg->second_gfn));
-    put_gfn(arg->first_domain,  gfn_x(arg->first_gfn));
-}
-
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 13:26:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 13:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148036.273403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDl6-0006CP-78; Tue, 29 Jun 2021 13:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148036.273403; Tue, 29 Jun 2021 13:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDl6-0006CI-3Y; Tue, 29 Jun 2021 13:26:36 +0000
Received: by outflank-mailman (input) for mailman id 148036;
 Tue, 29 Jun 2021 13:26:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDl5-0006C8-A0; Tue, 29 Jun 2021 13:26:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDl5-00044Q-4Z; Tue, 29 Jun 2021 13:26:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDl4-00057y-Qe; Tue, 29 Jun 2021 13:26:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDl4-0002Wt-Q6; Tue, 29 Jun 2021 13:26:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RoX8dHi2iejXN6Eo9+0j4FQnVcY0apvRinGdG/1mvc8=; b=UVltwZlC29+M796uBHjRRWU/Sf
	Za/6RU1jrMqMu9QVbk+X44KcwWy48ckwXGYbnvN8XdxhklMkk5zuN4LR7M2w1tUyRVcSdRV6v31yf
	2EovyG12XFMhYdw9sbVVKwIHKqU8Ah2QDZcVCSeWk9XsUMwlsnT1yaKjICW7okosIWhY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163176: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-xsm:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-seattle:guest-start.2:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-xsm:debian-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
X-Osstest-Versions-That:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 13:26:34 +0000

flight 163176 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163176/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  13 debian-fixup     fail in 163168 pass in 163176
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 163168 pass in 163176
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 163168 pass in 163176
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install    fail pass in 163168
 test-arm64-arm64-xl-seattle  19 guest-start.2              fail pass in 163168
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 163168
 test-arm64-arm64-xl-xsm      12 debian-install             fail pass in 163168

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail in 163168 blocked in 163176
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 163168 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 163168 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163168
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163168
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163168
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163168
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163168
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163168
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163168
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163168
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163168
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163168
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104
baseline version:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104

Last test of basis   163176  2021-06-29 02:21:20 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 13:33:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 13:33:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148041.273417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDrv-0007jk-3B; Tue, 29 Jun 2021 13:33:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148041.273417; Tue, 29 Jun 2021 13:33:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyDrv-0007jd-0K; Tue, 29 Jun 2021 13:33:39 +0000
Received: by outflank-mailman (input) for mailman id 148041;
 Tue, 29 Jun 2021 13:33:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDrt-0007jT-CL; Tue, 29 Jun 2021 13:33:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDrt-0004CS-8k; Tue, 29 Jun 2021 13:33:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDrs-0005bV-WE; Tue, 29 Jun 2021 13:33:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyDrs-0004lR-Vk; Tue, 29 Jun 2021 13:33:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WyB7ZRbh0sM/+qC6kdPdn6vjdBZ0NZM9//u6u4x4scE=; b=mKiE0pmgdKfJMB2I/IR9MghGl1
	gTO0dMk4bRGNqUo7GIsag6xDoGKDCYLR8+GoASfu6M7PD/0phu6b5Shq9Ju5KgFBZQGnb3xpR+gg9
	HYa/oL25ppv7cfGFv1ZVrHs61jl0LXZgJvIT/l9BSt+7OOGFKCCCtvXfAuaW/09+Jej8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163180-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163180: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d1fc3d7ef3cb77c63409f5ad873eb37302fa3481
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 13:33:36 +0000

flight 163180 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163180/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 d1fc3d7ef3cb77c63409f5ad873eb37302fa3481
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   25 days
Failing since        162368  2021-06-04 15:42:59 Z   24 days   63 attempts
Testing same since   163180  2021-06-29 06:10:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Daoxiang Li <daoxiang.li@intel.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Sunil V L <sunilvl@ventanamicro.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2721 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 13:52:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 13:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148044.273432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyE9n-0001f8-Kj; Tue, 29 Jun 2021 13:52:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148044.273432; Tue, 29 Jun 2021 13:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyE9n-0001f1-Gk; Tue, 29 Jun 2021 13:52:07 +0000
Received: by outflank-mailman (input) for mailman id 148044;
 Tue, 29 Jun 2021 13:52:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyE9n-0001ev-3j
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 13:52:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyE9m-0004Vl-Rr; Tue, 29 Jun 2021 13:52:06 +0000
Received: from [54.239.6.184] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyE9m-0000IA-Ld; Tue, 29 Jun 2021 13:52:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=TrTrqByIbZNIsJZMwJT7cYL2TuoWQFtrm8ZQk+z54SQ=; b=yPngWjWjO5TgMkcrdE8QZ4eOJj
	9PJl7JLCuPet1G1p9Pt49fffwXrid0FbT+zDRP6D2YuvLsw3zoODpVru3UhhWlGqUgdN9QtVIGuy2
	qpc33GdA6hyu/Q+5gTEQiYaa0o6ym7OgLoVpbGA1+dAuDWeyMhdZXPr1u3JwFsVtfz8I=;
Subject: Re: [PATCH] xen/arm: add forward_smc command line option for
 debugging
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
References: <alpine.DEB.2.21.2106241749310.24906@sstabellini-ThinkPad-T480s>
 <b5ba0757-322f-a77a-2293-111b77b29d35@xen.org>
 <alpine.DEB.2.21.2106251033500.24906@sstabellini-ThinkPad-T480s>
 <db2405f9-61d2-5d8f-816e-547bc09bb95c@xen.org>
 <alpine.DEB.2.21.2106281421530.9437@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9ba333b4-cef9-7e1f-6ea6-444889f7c721@xen.org>
Date: Tue, 29 Jun 2021 14:52:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2106281421530.9437@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 28/06/2021 22:40, Stefano Stabellini wrote:
> On Mon, 28 Jun 2021, Julien Grall wrote:
>> On 25/06/2021 18:47, Stefano Stabellini wrote:
>>> On Fri, 25 Jun 2021, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 25/06/2021 02:51, Stefano Stabellini wrote:
>>>>> It has become clear that an option to disable trapping SMC calls to Xen
>>>>> is very useful for debugging user issues.
>>>>>
>>>>> Instead of having to provide a
>>>>> patch to users every time, it would be great if we could just tell them
>>>>> to add forward_smc=true to the Xen command line.
>>>>
>>>> I can understand this would be useful to go a bit further in dom0 boot.
>>>> But I am quite sceptical about the idea of providing an option directly
>>>> in Xen because:
>>>>
>>>> 1) This breaks other SMC uses in Xen (OP-TEE, VM monitor...)
>>>> 2) There is no guarantee that the SMC call will not wreck Xen. To be
>>>> clear, I don't refer to a malicious OS here, but a normal OS that boots.
>>>> 3) Very likely the next step for the user (or better, the developer,
>>>> because that option should really not be used by a normal user) will be
>>>> to decide whether they should modify the kernel or implement a mediator
>>>> in Xen.
>>>>
>>>>> This option is obviously unsafe and unsecure and only meant for
>>>>> debugging. Make clear in the description that if you pass
>>>>> forward_smc=true then the system is not security supported.
>>>>>
>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>
>>>>> diff --git a/docs/misc/xen-command-line.pandoc
>>>>> b/docs/misc/xen-command-line.pandoc
>>>>> index 3ece83a427..0833fe80fc 100644
>>>>> --- a/docs/misc/xen-command-line.pandoc
>>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>>> @@ -2501,6 +2501,16 @@ vwfi to `native` reduces irq latency
>>>>> significantly.
>>>>> It can also lead to
>>>>>     suboptimal scheduling decisions, but only when the system is
>>>>>     oversubscribed (i.e., in total there are more vCPUs than pCPUs).
>>>>>     +### forward_smc (arm)
>>>>> +> `= <boolean>`
>>>>> +
>>>>> +> Default: `false`
>>>>> +
>>>>> +If enabled, instead of trapping firmware SMC calls to Xen, allow SMC
>>>>> +calls from VMs directly to the firmware. This option is UNSAFE and it
>>>>> is
>>>>> +only meant for debugging. Systems with forward_smc=true are not
>>>>> security
>>>>> +supported.
>>>>> +
>>>>>     ### watchdog (x86)
>>>>>     > `= force | <boolean>`
>>>>>     diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>>> index e7384381cc..0580ac5762 100644
>>>>> --- a/xen/arch/arm/traps.c
>>>>> +++ b/xen/arch/arm/traps.c
>>>>> @@ -95,11 +95,15 @@ static int __init parse_vwfi(const char *s)
>>>>>     }
>>>>>     custom_param("vwfi", parse_vwfi);
>>>>>     +static bool forward_smc = false;
>>>>> +boolean_param("forward_smc", forward_smc);
>>>>> +
>>>>>     register_t get_default_hcr_flags(void)
>>>>>     {
>>>>>         return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
>>>>>                  (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
>>>>> -
>>>>> HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>>>>> +             (forward_smc ? 0 : HCR_TSC) |
>>>>> +             HCR_TID3|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>>>>
>>>> A system-wide option to turn off SMC trapping is a no-go because it
>>>> would only be usable for debugging dom0 and not a guest.
>>>>
>>>> So at the minimum this should be a per-domain option. Also, I think we
>>>> still want to integrate with the rest of the SMC users. So Xen should
>>>> still trap the SMC, and the forwarding should happen in
>>>> vsmccc_handle_call().
>>>>
>>>> This would cover my first point.
>>>
>>> Yes, you are totally right. I thought about it this morning as well.
>>> This patch would break even PSCI :-(
>>>
>>> It would be best implemented in platform_smc as forward_to_fw (see
>>> xen/arch/arm/platforms/xilinx-zynqmp-eemi.c:forward_to_fw).
>>
>> There is one problem though. How do you know which calling convention to use?
>> IOW, will all the firmware calls (in particular on older platforms) follow
>> the SMCCC?
> 
> I am not aware of any firmware (Xilinx or non-Xilinx) with SMC calls
> that are not SMCCC compliant.

Here is an example that doesn't appear to follow the SMCCC spec:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm/mach-omap2/omap-smc.S

> 
> That said, the most important thing is that we handle PSCI correctly,
> right?
Most modern Linux versions will use HVC rather than SMC for PSCI...
What we want to avoid breaking is users of a TEE (e.g. OP-TEE) and/or
Spectre mitigations.

> After that, if we forward all calls to the firmware we should be
> OK. From a calling convention perspective, it would only break if we
> don't forward enough parameters on registers or save enough return
> values on registers.
> 
> Then there are the problems with memory addresses you wrote below.
> 
>   
>>>> For the second and third point, I still like
>>>> to understand how this is going to help the developer to fully port the
>>>> board/OS to Xen with this option disabled?
>>>
>>> This is meant to help with bug triage only. There are a number of bugs
>>> that can happen if certain platform SMCs are intercepted by Xen instead
>>> of being forwarded to the hardware.
>>
>> We already print a message informing the user that the SMC call was trapped
>> and terminated in Xen. So I am not entirely sure why you also need to pass
>> through all the SMC calls to triage it. You already know that the SMC will
>> have to be implemented in Xen...
> 
> On Xilinx, we have so many SMCs that it would be difficult to figure it
> out from the boot log alone.

Xen will print the function ID on the console:

gprintk(XENLOG_INFO, "Unhandled SMC/HVC: %#x\n", funcid);

Isn't that enough to know which SMC call fails, or to look it up in the
documentation?

> In the sense that it is unfortunately
> "normal" for one or two SMCs to fail (even without Xen!) But that could
> be a Xilinx-only issue.
Right, then I don't quite see how the debugging option would help here.
Or are you saying the failure return is different and therefore Linux
will not be able to cope when running on Xen?

> 
>   
>>> I found myself having to provide a
>>> patch to forward_to_fw all platform SMCs as a first test to
>>> triage bugs a few times recently. It is never a fix, only a way to
>>> understand the next step of debugging. Also Alex stumbled across
>>> something similar on a non-Xilinx board (MacchiatoBin) so I thought it
>>> was time for a better debugging option.
>>>
>>> I think for debugging purposes it would be sufficient if all platform
>>> SMCs were forward_to_fw from all domains. Of course it is totally
>>> unsafe, but it is just for debugging.
>>
>> In order to add a debugging option in Xen, we need to be reasonably confident
>> that the option will not do more damage (I am not speaking about security
>> here...) than it is worth.
>>
>> I can see how this helps in your situation to boot dom0. However, I am
>> not sure this can be generalized to every platform. A developer (or user)
>> enabling this debugging option may end up seeing corruption/hangs because:
>>    1) An SMC call may pass a memory address. A domain would pass a guest
>> physical address, but the firmware will interpret it as a host physical
>> address. This works(ish) for dom0 because the two are equivalent, but for
>> other domains this will break.
>>   2) An SMC call may change the behavior of the system (e.g. turning off
>> the UART)...
>>
>> It would be difficult to pinpoint whether the problem is caused by an SMC
>> (or something else) without implementing each SMC call in Xen.
>>
>> I don't think it is a lot of work to implement SMCs in Xen as you find them
>> (sooner or later, you will have to do it anyway...). At which point,
>> forwarding all the unknown SMCs to attempt to boot further is probably
>> riskier than it is worth.
>>
>> If the problem is rebuilding, then we could consider providing a command
>> line option to easily specify which SMC calls are passed through...
> 
> The problem is that it isn't always the same person doing the work.
> 
> If it is me working on a new release or a new platform, the command line
> option wouldn't help me much. In fact, it might even be faster for me to
> add "goto forward_to_fw" and rebuild. I am happy with that.
> 
> The issue is when there is a bug reported by a customer or by a user on
> the mailing list. In that case, it is very useful to ask them to run a
> little experiment to narrow down the possibilities, and it is easier to
> ask them to add a command line option than to apply a patch. If
> passthrough is involved, then we need to ask the user to forward all
> SMCs, not just Dom0. If passthrough is not involved, then forwarding
> only Dom0 calls is fine.

The problem is the OS may be able to cope with all the SMCs returning 
"not handled" up to the one you are interested in. Now, if you pass the 
option to Xen, the firmware will handle all the SMCs and this may change 
the behavior of the OS. IOW, you are potentially adding more reasons for 
bugs.

I mention one OS here, but the problem is exactly the same with multiple 
OSes. You may have a working dom0 OS but a non-working guest OS. Enabling 
the option may break your dom0.

> 
> 
> That said, I am not so sure we want this patch upstream: I think it
> would benefit Xilinx users and a recent request from Alex made me think
> that it would benefit other platforms too, but maybe the benefits on
> other platforms are not enough to introduce an option like this, which
> could easily break things.
>
> So I am happy to follow your preference:
> 
> 1) I can drop the patch
> 2) forward platform_smc only for dom0
> 3) forward platform_smc for all domains
2) would be acceptable, although it doesn't entirely cover the concern I 
wrote above. One possibility would be to allow the user to list the 
platform SMCs they want to pass through.

This could be easily implemented using the rangeset framework in Xen.

Anyway, for now I would be OK with just forwarding all unhandled 
platform SMCs with some documentation explaining the limitations of the 
option.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 15:35:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 15:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148051.273442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyFm5-0002Ph-97; Tue, 29 Jun 2021 15:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148051.273442; Tue, 29 Jun 2021 15:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyFm5-0002Pa-5v; Tue, 29 Jun 2021 15:35:45 +0000
Received: by outflank-mailman (input) for mailman id 148051;
 Tue, 29 Jun 2021 15:35:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wA62=LX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyFm3-0002PU-6s
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 15:35:43 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98e0d57e-69ba-4c6e-b300-c5d2d22cfb0e;
 Tue, 29 Jun 2021 15:35:41 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-6-YjG3iKdGMHOGk3lTp28SGg-1; Tue, 29 Jun 2021 17:35:39 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5328.eurprd04.prod.outlook.com (2603:10a6:803:59::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.22; Tue, 29 Jun
 2021 15:35:37 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Tue, 29 Jun 2021
 15:35:36 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR01CA0170.eurprd01.prod.exchangelabs.com (2603:10a6:208:aa::39) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19 via Frontend
 Transport; Tue, 29 Jun 2021 15:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98e0d57e-69ba-4c6e-b300-c5d2d22cfb0e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1624980940;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=URLnnwCsKBkaRrg2bq5VfQAyb6TPlECEcaLBS5kAmfk=;
	b=N1qvcKvCN1eBN/NfutIVaDVBzupOF4UXjPHDO25psYlEUTEGjCbf7M31H9rtJfEn5Ody5o
	gsqTgUXK/WAM6UQAHQdzMM6BjsbegWhPIyv92IJK9n0kXAZ3lm+5L11f8LRbAF8D3l1jif
	0cuh0z6mQEvoH+z+Ms9yxW+vCTvX48A=
X-MC-Unique: YjG3iKdGMHOGk3lTp28SGg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c/nmr1M9CAjlFApM2wA59xlBntQaL1gMFhkf9i4Cuzz+cEKaGAAXtjrszCHvolkVpWedDWj2cgWESe2UhnFFstJG0vJQhBITz/6nBB9YcJzEW5gZMdovxD42x4lsZTH1z9El9DLu8omX5k8VGnswypJUJYRd9vUmmFpF35zxLSbcy/T3niQFKq0wK2JD2g5865pzq9Zw6DPvZ+Erg6r7qBa1lRB7dKQN39YqeqO0+cvmhvv7gKZfaMUwXBR/aL/hIUfefs8sB+oV7sUPcgr2jTFUDIjDafqSVpefXQNE3IQ87jfDFw5JwD6Oo0jmERXX7OEHDIpXluTseyt14OpcrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=URLnnwCsKBkaRrg2bq5VfQAyb6TPlECEcaLBS5kAmfk=;
 b=LK55jjbtBR5qqBocSQzRBHERZTkj8t7aaagFlhXsze1gYibGwFD8ULPDZ+i1B+DyuZAmzwKXKNzzw73NIP849bBNtvFiT6iNlsnlVfnCBq3Cp2FkYtb23Ig+LxwN8fpjsA5VXkbsZwXG22bJVtQxEbSGhLzLy4jbjA3wjDQSxslPrgQeEijX1X7kHSD/TbyyNj+/LM2+6Q7l8ERs6nE4NUBt10DS3X26ABv98ZBGxtD2PQHMwcBr4S7omFmIsX2tgD2TDmLlJ5jekduRF0eX0VV7fQT0N2bprvZJ5KaqnpwB0XvqNdmB97lMTn3E3YmUc3A0ISzDJleVn3r/ihacCA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v2] libx86: Introduce
 x86_cpu_policy_calculate_compatible() with MSR_ARCH_CAPS handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210628150011.13106-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <44f88dea-f39b-d6a9-acd3-dc1aa12bf25a@suse.com>
Date: Tue, 29 Jun 2021 17:35:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <20210628150011.13106-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR01CA0170.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::39) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4e7f9df0-8bcd-49b6-c4ac-08d93b138a9f
X-MS-TrafficTypeDiagnostic: VI1PR04MB5328:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB532835B201BD7365548CAF0FB3029@VI1PR04MB5328.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KPLHLHGUDG4Xo/SO+KR+deYpSG/yiHyMqbjJWAqkqQe4Yn2KMqkB7SZesAnXc1U+kOPDqJzAmQhWZo45f3A6CJnseuUTUCBRfMNoXhRrFp62ivzRV+lkCX3paWOLcaAQoa78fsPNxe0kgUco/ACwdyo+U+s4tLl6Wc68zoOV8Alk5O86hGiJD4xvKCW/LPjBkLqqvBe6Avh+zb+aAl0RfpJ1jofQ4bNkwXQ9aYGlPgJobXwsJXMnqwkJlkKJ6raxY0aq7XVjTt7moRlMvqM3aUXoON6TXSA64GLH7/NzVaq/8robQ9bv+XZUDHKCfY3qCkZMUNYUaqmo5LGot23dSxJvmF4CpVdvIliKjE5ruFGx3mmTtdvZrPkGbsCgHRji1rCNne+bV31qEySJ9NR4egtl6j+uDoTM1taXzjiqTnpN+33jZ0knVEirPl+/zsMQmmLz9sdHPZThRFpFX/EiTuD6IFMYwAKIcBH+ayvouEAlQbQCdbQI4D/hXrSdigy7VXUUSFLzu51DivWQwDuQ0BD9L6BOAMCNfq1wdz1yIY6nF3rm+LMzjgPkEdddnQKq34XWAa4Xt1PWwsoZ7Ymz5XvmOypQIxk4s1EGXD2Fmc5QIOZ0xyerInpcv9+YF7bciqFbHStEahdYVB9M0szcLXbEASwCPJt1/ym1yZ1LNYOZlcsVAAfUxM8k2G1ah1Uj3lTrwmpyWYUwAQPRffOYbQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(366004)(396003)(39860400002)(136003)(376002)(31686004)(2906002)(316002)(16576012)(53546011)(4326008)(8676002)(36756003)(66946007)(8936002)(54906003)(83380400001)(66556008)(66476007)(16526019)(86362001)(2616005)(38100700002)(6916009)(186003)(956004)(6486002)(31696002)(5660300002)(26005)(478600001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VjAzVDlFd2JqLzdtRFVINWg5MFpnUVlSVWQrenFteFZodzZrVlp4a3RTT25Y?=
 =?utf-8?B?NlAwY3Ryc0ZuWVVDUjNMQzBzOVZUbXg1Y1NwaWhGMXpSL0pRd3hVaWh1azFE?=
 =?utf-8?B?QUZlUHl1T2JLRlM5bCtmaHJ6ZEtESHlRYkZLbHVYWXFHbTRHUG5WRS9HV0xI?=
 =?utf-8?B?YUVJMFZ0c1RHeldyei9GdzI5ajBreWEwcU9CWlVod293ZVBwWnNrcmpWSE00?=
 =?utf-8?B?Ums3OUtOSCtTS0xFSjZ5MlVvTGl3a3d1a2lPR294YXJQZExsTktudHkzeDNh?=
 =?utf-8?B?SE9MREo2RGhlekZLeGdrVHowY2NyemUyS1ZNVjBvaFJWbDcwSEo5NGp2aTVI?=
 =?utf-8?B?L0wzQm9tU0JIKzNobWVQd2RvSldmM3FvWEt3ZVFxNU01cUVvdENmcUJkWWVl?=
 =?utf-8?B?VEpOYnIrd1JsUmFTMmp2RW1QMlNsakRqNlVEak9QMHdLOWlnNkhoU2daeFBB?=
 =?utf-8?B?VC85SktVMVVZa0NuVCtvenVZNndxZ0VSZG5hUWI0Y2M1SzBFMlR3ZWtrWU5z?=
 =?utf-8?B?ZGQ0eGNvMlVtcXVRZ2t1S25UQnlFaThqSFNYVnZTc052OGtTNGdka0FVOHU2?=
 =?utf-8?B?dkdjZFlXcXo1Q0h0RTUrUnBTNkxiRk12Tndpcy9uMDNjYWY0V3dYOUY5N0tj?=
 =?utf-8?B?YlYvN296QWtIZlBoczYzM3JZck1wREJDdDJQZTh0ckVub2FCRFEzTkZJaWdT?=
 =?utf-8?B?U290dXRuWlh0MUFzQUdlN0N1UjFsMmFnVGt3Y3FWeUN0RWYyKzVnRU5mZEo2?=
 =?utf-8?B?a3dmRUErV2JjbHYrUnd4K3hienFmYURTVTFtck9lTlJxRkR1b0ZrSFdSbEhv?=
 =?utf-8?B?VjVldy96Sy85cHlSWFNkTEFKbERVTlF4QWsrWElOQ2JaWDhSR1h5Q2ZpZ2R6?=
 =?utf-8?B?SXVQR2ZqWjhMT05qQjdjR0VNUFJXamdvR05rRkFQZzVtdmJmTEJBbk5TZU13?=
 =?utf-8?B?Tkh6U1BycHdEdzEzWnJtQTdodHU2TUxPS0ZjMy9UTG9sb1VkbitSd0U1Yno3?=
 =?utf-8?B?SnF4aU9XdDUyKysvcEFINUExZ1FHWld0ZVBxRGIvY0d3STJLN3BvK0JrNjdZ?=
 =?utf-8?B?UEVTbkZHSk9jMGpoQjdBczNaeWthcENiakNNcHQ2YWJEZlkwZTBKbVpqTm1W?=
 =?utf-8?B?MGtQRmwrdUV3Wm5HRVo1Y1BJK1RORG9ib1E1NTJDaENBMGI1Qk90Rnp3OUt3?=
 =?utf-8?B?dUg3ZFJMeU92NHRDV1B4bjEzdzFqdDVacGpmSTFzMzkzV01KY01FSGk3YS9u?=
 =?utf-8?B?cnVpZTgzTU93TDdYcFBmWlovajVyc0VDS084VmNJNmROSU9HdXRONWJrZmZ2?=
 =?utf-8?B?UlJPd1c0Vm5QMzRDRDB1RkFEbzd4a2dnOC9CU3hMNSt5aGFUVWpINnN2TEpN?=
 =?utf-8?B?dWxFdGJWQVhuMGtsN2YyeE16MzZ2Yk0reVQxNVM4aTZHZkN6SnJnc2VSUmJE?=
 =?utf-8?B?Z29nY1BWWENhTFVGTFc3U1Y0K0dCaU85eXNCdHE1SSs4aGw5QlZpZ3BRNmJi?=
 =?utf-8?B?YlRJcURlTW8yekNDYzU0SzE1V2oxZ1RyRC94N1hwYmFkMCtTamJGWEkrb3Qw?=
 =?utf-8?B?M1J0Rk5zMG9WQ2RXWEo2cmtEbDN2K0VyTEJseGtHdStFdVY2OTBYbnk1c3JE?=
 =?utf-8?B?YXRXd0xXc3pNUlhucmZjMnpUd0wwUllYdzV1QnZCK1k3bnlwekwxZ1A0SE9a?=
 =?utf-8?B?U08rcFV3Skxnbk13K1VFNENFak4vbEhjb3drNVdQRUkyc0NDYmo5cjlySGhn?=
 =?utf-8?Q?W2QIxFReOr7qojCVdR4uPJKEupwaaYhktuHnNQf?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e7f9df0-8bcd-49b6-c4ac-08d93b138a9f
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2021 15:35:36.8038
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rLg7GkbGJ8oO5qxDHnrlFuJ94CYcNQKdNtJ5Iny+WflgWBM7+rnf4PH1einECAwsf2M3P6zwinCHJhIFOQgjsg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5328

On 28.06.2021 17:00, Andrew Cooper wrote:
> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>      }
>  }
>  
> +static void test_calculate_compatible_success(void)
> +{
> +    static struct test {

It's only testing code, so it doesn't matter all that much, but
elsewhere such static struct-s are const.

> +        const char *name;
> +        struct {
> +            struct cpuid_policy p;
> +            struct msr_policy m;
> +        } a, b, out;
> +    } tests[] = {
> +        {
> +            "arch_caps, b short max_leaf",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 6,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 6,
> +            },
> +        },
> +        {
> +            "arch_caps, b feat missing",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +            },
> +        },
> +        {
> +            "arch_caps, b rdcl_no missing",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +        },
> +        {
> +            "arch_caps, rdcl_no ok",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +        },
> +        {
> +            "arch_caps, rsba accum",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rsba = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rsba = true,
> +            },
> +        },

For RDCL_NO you go through quite a few more variations, and given
the accumulating nature of RSBA, having a similar set for it would
imo be quite valuable, not least for people like me to see clearly
what behavior is expected there.

> +    };
> +    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
> +
> +    printf("Testing calculate compatibility success:\n");
> +
> +    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
> +    {
> +        struct test *t = &tests[i];
> +        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
> +        struct msr_policy *m = malloc(sizeof(struct msr_policy));
> +        struct cpu_policy a = {
> +            &t->a.p,
> +            &t->a.m,
> +        }, b = {
> +            &t->b.p,
> +            &t->b.m,

Hmm, I guess these two struct instances are the reason for tests[]
to be non-const. I vaguely recall discussion about having a const-
correct variant of struct cpu_policy; if you don't think this is
warranted, may I ask that you add a respective brief comment to
tests[]?

> +        }, out = {
> +            p,
> +            m,
> +        };
> +        struct cpu_policy_errors e;
> +        int res;
> +
> +        if ( !p || !m )
> +            err(1, "%s() malloc failure", __func__);
> +
> +        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
> +
> +        /* Check the expected error output. */
> +        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )

While this memcmp() has precedents, ...

> +        {
> +            fail("  Test '%s' expected no errors\n"
> +                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
> +                 t->name, res, e.leaf, e.subleaf, e.msr);
> +            goto test_done;
> +        }
> +
> +        if ( memcmp(&t->out.p, p, sizeof(*p)) )

... I'm worried that this and ...

> +        {
> +            fail("  Test '%s' resulting CPUID policy not as expected\n",
> +                 t->name);
> +            goto test_done;
> +        }
> +
> +        if ( memcmp(&t->out.m, m, sizeof(*m)) )

... this may (down the road) suffer from mismatches on uninitialized
padding fields. Otoh I've meanwhile found that the new function
clears both output buffers first thing.

> --- a/xen/include/xen/lib/x86/cpu-policy.h
> +++ b/xen/include/xen/lib/x86/cpu-policy.h
> @@ -37,6 +37,34 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>                                      const struct cpu_policy *guest,
>                                      struct cpu_policy_errors *err);
>  
> +/*
> + * Given two policies, calculate one which is compatible with each.
> + *
> + * i.e. Given host @a and host @b, calculate what to give a VM so it can live
> + * migrate between the two.
> + *
> + * @param a        A cpu_policy.
> + * @param b        Another cpu_policy.
> + * @param out      A policy compatible with @a and @b, if successful.
> + * @param err      Optional hint for error diagnostics.
> + * @returns -errno
> + *
> + * For typical usage, @a and @b should be default system policies of the same
> + * type (i.e. PV or HVM) from different hosts.

Given this property, what use do you anticipate for the new function
within libxl? Or, asked differently, from where would libxl obtain a
remote host's policy?

>  It does not make sense to try
> + * and level max policies, as they contain the non-migrateable features.
> + *
> + * Some data (e.g. the long CPU brand string) cannot be levelled.  Such data
> + * will be taken from @a, and the content in @b will be discaraded.

I'm afraid I can't spot this "taking from @a" in the code.

Also, nit: "discarded"

> + * It is possible that @a and @b cannot be resolved to migration-compatible

Nit: Missing "a" after "to"?

> @@ -43,6 +46,52 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>      return ret;
>  }
>  
> +#ifndef __XEN__
> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> +                                        const struct cpu_policy *b,
> +                                        struct cpu_policy *out,
> +                                        struct cpu_policy_errors *err)
> +{
> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
> +    const struct msr_policy *am = a->msr, *bm = b->msr;
> +    struct cpuid_policy *cp = out->cpuid;
> +    struct msr_policy *mp = out->msr;
> +
> +    memset(cp, 0, sizeof(*cp));
> +    memset(mp, 0, sizeof(*mp));
> +
> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
> +
> +    if ( cp->basic.max_leaf >= 7 )
> +    {
> +        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
> +
> +        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
> +        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
> +        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;

Is there a particular reason to not handle this in full, i.e. for
all of the subleaves? If there is, I'd still have expected you to
at least handle _7a1 that we already know about. Failing that I'd
have hoped for a justifying comment (or maybe a TODO item beyond ...

> +    }
> +
> +    /* TODO: Far more. */

... this one.

> +    mp->platform_info.raw = am->platform_info.raw & bm->platform_info.raw;
> +
> +    if ( cp->feat.arch_caps )
> +    {
> +        /*
> +         * RSBA means "RSB Alternative", i.e. RSB stuffing not necesserily
> +         * safe.  It needs to accumulate rather than intersect across a
> +         * resource pool.
> +         */
> +#define POL_MASK ARCH_CAPS_RSBA
> +        mp->arch_caps.raw = ((am->arch_caps.raw ^ POL_MASK) &
> +                             (bm->arch_caps.raw ^ POL_MASK)) ^ POL_MASK;
> +#undef POL_MASK
> +    }

Related to my respective request on the set of tests performed, this
really is partial accumulation, as ARCH_CAPS are still taken as a
prereq feature. That is, one host with RSBA and another without
ARCH_CAPS will result in a policy without RSBA. Is this really what's
intended?

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 15:56:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 15:56:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148055.273454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyG6H-0004jm-4s; Tue, 29 Jun 2021 15:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148055.273454; Tue, 29 Jun 2021 15:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyG6H-0004jf-1o; Tue, 29 Jun 2021 15:56:37 +0000
Received: by outflank-mailman (input) for mailman id 148055;
 Tue, 29 Jun 2021 15:56:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyG6G-0004jV-7j; Tue, 29 Jun 2021 15:56:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyG6G-0006bw-3O; Tue, 29 Jun 2021 15:56:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyG6F-0004Iw-R0; Tue, 29 Jun 2021 15:56:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyG6F-0003rU-QP; Tue, 29 Jun 2021 15:56:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RAnUx6VHoAyD21EOkiWXsEB02mbvc245LF7NjUBatVM=; b=1p4HtqQK0uLa9zsSFjGwJbTlwk
	hSOlAd/K99RhrV+iNu2QYuIsG/n5ZA8KoBhhpV42GFP2e+DmMCwnGq6lqxxLcgqECtL15LHCM35/A
	iCf3RnGD1S/1gUyBvcnZJ4z6fRv3spYc3h+TtqDybP6qTGJsguuqKyJmqsUjdkBLm7aU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163183-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 163183: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f95b7b37cfc6d4613721df9357090d14712013c0
X-Osstest-Versions-That:
    xen=f8582da0417660269bec69e399f8667f761e886b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 15:56:35 +0000

flight 163183 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163183/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f95b7b37cfc6d4613721df9357090d14712013c0
baseline version:
 xen                  f8582da0417660269bec69e399f8667f761e886b

Last test of basis   163182  2021-06-29 10:01:35 Z    0 days
Testing same since   163183  2021-06-29 13:01:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f8582da041..f95b7b37cf  f95b7b37cfc6d4613721df9357090d14712013c0 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 17:09:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 17:09:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148058.273468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHEf-0003J7-FR; Tue, 29 Jun 2021 17:09:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148058.273468; Tue, 29 Jun 2021 17:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHEf-0003J0-CS; Tue, 29 Jun 2021 17:09:21 +0000
Received: by outflank-mailman (input) for mailman id 148058;
 Tue, 29 Jun 2021 17:09:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vrVS=LX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lyHEe-0003Iu-8h
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 17:09:20 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 54126923-46df-40e4-951c-a7a05d98cfec;
 Tue, 29 Jun 2021 17:09:18 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7E8081FB;
 Tue, 29 Jun 2021 10:09:18 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 414B43F694;
 Tue, 29 Jun 2021 10:09:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54126923-46df-40e4-951c-a7a05d98cfec
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo
Date: Tue, 29 Jun 2021 18:08:52 +0100
Message-Id: <cover.1624974370.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

On the Arm architecture we might have heterogeneous platforms with
different types of cores. As a guest can potentially run on any of those
cores, we have to present it CPU features which are compatible with all
cores and discard the features which are only available on some cores.

As the features can be fairly complex, deducing from two different
feature values what the acceptable minimal value should be can be
complex (and sometimes impossible).

This RFC is a first attempt for a solution to start a discussion and get
inputs from the community.

To reduce the implementation effort in Xen, this series imports the
structures and filtering system used by Linux in order to build a
cpuinfo containing the best values compatible with all cores on the
platform.

The series starts by importing the necessary code and structures from
Linux and then uses them to sanitize the boot cpuinfo.
Finally it simplifies some Xen code which was doing the same in p2m
and enables the use of heterogeneous platforms on arm64.


Bertrand Marquis (4):
  xen/arm: Import ID registers definitions from Linux
  xen/arm: Import ID features sanitize from linux
  xen/arm: Sanitize cpuinfo ID registers fields
  xen/arm: Use sanitize values for p2m

 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/cpusanitize.c    | 628 ++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c                  |  30 +-
 xen/arch/arm/smpboot.c              |   5 +-
 xen/include/asm-arm/arm64/sysregs.h | 312 ++++++++++++++
 xen/include/asm-arm/cpufeature.h    |   9 +
 xen/include/xen/lib.h               |   1 +
 7 files changed, 965 insertions(+), 21 deletions(-)
 create mode 100644 xen/arch/arm/arm64/cpusanitize.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 17:09:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 17:09:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148060.273479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFH-0003pB-O6; Tue, 29 Jun 2021 17:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148060.273479; Tue, 29 Jun 2021 17:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFH-0003p4-Kq; Tue, 29 Jun 2021 17:09:59 +0000
Received: by outflank-mailman (input) for mailman id 148060;
 Tue, 29 Jun 2021 17:09:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vrVS=LX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lyHFG-0003oo-3C
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 17:09:58 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 66099a5a-a1dd-491c-aadc-b30e4cda6bcc;
 Tue, 29 Jun 2021 17:09:56 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 904091FB;
 Tue, 29 Jun 2021 10:09:56 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BF41C3F694;
 Tue, 29 Jun 2021 10:09:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66099a5a-a1dd-491c-aadc-b30e4cda6bcc
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux
Date: Tue, 29 Jun 2021 18:08:53 +0100
Message-Id: <432a76f45a783fb74db133b54d4b78bab9241212.1624974370.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1624974370.git.bertrand.marquis@arm.com>
References: <cover.1624974370.git.bertrand.marquis@arm.com>

Import some ID register definitions from the Linux sysreg header to have
the required shift definitions for all ID register fields.

Those are required to reuse the cpufeature sanitization system from the
Linux kernel.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/include/asm-arm/arm64/sysregs.h | 312 ++++++++++++++++++++++++++++
 1 file changed, 312 insertions(+)

diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index 077fd95fb7..555f1e08b2 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -85,6 +85,318 @@
 #define ID_DFR1_EL1                 S3_0_C0_C3_5
 #endif
 
+/* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
+
+/* id_aa64isar0 */
+#define ID_AA64ISAR0_RNDR_SHIFT      60
+#define ID_AA64ISAR0_TLB_SHIFT       56
+#define ID_AA64ISAR0_TS_SHIFT        52
+#define ID_AA64ISAR0_FHM_SHIFT       48
+#define ID_AA64ISAR0_DP_SHIFT        44
+#define ID_AA64ISAR0_SM4_SHIFT       40
+#define ID_AA64ISAR0_SM3_SHIFT       36
+#define ID_AA64ISAR0_SHA3_SHIFT      32
+#define ID_AA64ISAR0_RDM_SHIFT       28
+#define ID_AA64ISAR0_ATOMICS_SHIFT   20
+#define ID_AA64ISAR0_CRC32_SHIFT     16
+#define ID_AA64ISAR0_SHA2_SHIFT      12
+#define ID_AA64ISAR0_SHA1_SHIFT      8
+#define ID_AA64ISAR0_AES_SHIFT       4
+
+#define ID_AA64ISAR0_TLB_RANGE_NI    0x0
+#define ID_AA64ISAR0_TLB_RANGE       0x2
+
+/* id_aa64isar1 */
+#define ID_AA64ISAR1_I8MM_SHIFT      52
+#define ID_AA64ISAR1_DGH_SHIFT       48
+#define ID_AA64ISAR1_BF16_SHIFT      44
+#define ID_AA64ISAR1_SPECRES_SHIFT   40
+#define ID_AA64ISAR1_SB_SHIFT        36
+#define ID_AA64ISAR1_FRINTTS_SHIFT   32
+#define ID_AA64ISAR1_GPI_SHIFT       28
+#define ID_AA64ISAR1_GPA_SHIFT       24
+#define ID_AA64ISAR1_LRCPC_SHIFT     20
+#define ID_AA64ISAR1_FCMA_SHIFT      16
+#define ID_AA64ISAR1_JSCVT_SHIFT     12
+#define ID_AA64ISAR1_API_SHIFT       8
+#define ID_AA64ISAR1_APA_SHIFT       4
+#define ID_AA64ISAR1_DPB_SHIFT       0
+
+#define ID_AA64ISAR1_APA_NI                     0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED            0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC              0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2             0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC        0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB    0x5
+#define ID_AA64ISAR1_API_NI                     0x0
+#define ID_AA64ISAR1_API_IMP_DEF                0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC           0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2          0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC     0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5
+#define ID_AA64ISAR1_GPA_NI                     0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED            0x1
+#define ID_AA64ISAR1_GPI_NI                     0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF                0x1
+
+/* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT       60
+#define ID_AA64PFR0_CSV2_SHIFT       56
+#define ID_AA64PFR0_DIT_SHIFT        48
+#define ID_AA64PFR0_AMU_SHIFT        44
+#define ID_AA64PFR0_MPAM_SHIFT       40
+#define ID_AA64PFR0_SEL2_SHIFT       36
+#define ID_AA64PFR0_SVE_SHIFT        32
+#define ID_AA64PFR0_RAS_SHIFT        28
+#define ID_AA64PFR0_GIC_SHIFT        24
+#define ID_AA64PFR0_ASIMD_SHIFT      20
+#define ID_AA64PFR0_FP_SHIFT         16
+#define ID_AA64PFR0_EL3_SHIFT        12
+#define ID_AA64PFR0_EL2_SHIFT        8
+#define ID_AA64PFR0_EL1_SHIFT        4
+#define ID_AA64PFR0_EL0_SHIFT        0
+
+#define ID_AA64PFR0_AMU              0x1
+#define ID_AA64PFR0_SVE              0x1
+#define ID_AA64PFR0_RAS_V1           0x1
+#define ID_AA64PFR0_FP_NI            0xf
+#define ID_AA64PFR0_FP_SUPPORTED     0x0
+#define ID_AA64PFR0_ASIMD_NI         0xf
+#define ID_AA64PFR0_ASIMD_SUPPORTED  0x0
+#define ID_AA64PFR0_EL1_64BIT_ONLY   0x1
+#define ID_AA64PFR0_EL1_32BIT_64BIT  0x2
+#define ID_AA64PFR0_EL0_64BIT_ONLY   0x1
+#define ID_AA64PFR0_EL0_32BIT_64BIT  0x2
+
+/* id_aa64pfr1 */
+#define ID_AA64PFR1_MPAMFRAC_SHIFT   16
+#define ID_AA64PFR1_RASFRAC_SHIFT    12
+#define ID_AA64PFR1_MTE_SHIFT        8
+#define ID_AA64PFR1_SSBS_SHIFT       4
+#define ID_AA64PFR1_BT_SHIFT         0
+
+#define ID_AA64PFR1_SSBS_PSTATE_NI    0
+#define ID_AA64PFR1_SSBS_PSTATE_ONLY  1
+#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+#define ID_AA64PFR1_BT_BTI            0x1
+
+#define ID_AA64PFR1_MTE_NI           0x0
+#define ID_AA64PFR1_MTE_EL0          0x1
+#define ID_AA64PFR1_MTE              0x2
+
+/* id_aa64zfr0 */
+#define ID_AA64ZFR0_F64MM_SHIFT      56
+#define ID_AA64ZFR0_F32MM_SHIFT      52
+#define ID_AA64ZFR0_I8MM_SHIFT       44
+#define ID_AA64ZFR0_SM4_SHIFT        40
+#define ID_AA64ZFR0_SHA3_SHIFT       32
+#define ID_AA64ZFR0_BF16_SHIFT       20
+#define ID_AA64ZFR0_BITPERM_SHIFT    16
+#define ID_AA64ZFR0_AES_SHIFT        4
+#define ID_AA64ZFR0_SVEVER_SHIFT     0
+
+#define ID_AA64ZFR0_F64MM            0x1
+#define ID_AA64ZFR0_F32MM            0x1
+#define ID_AA64ZFR0_I8MM             0x1
+#define ID_AA64ZFR0_BF16             0x1
+#define ID_AA64ZFR0_SM4              0x1
+#define ID_AA64ZFR0_SHA3             0x1
+#define ID_AA64ZFR0_BITPERM          0x1
+#define ID_AA64ZFR0_AES              0x1
+#define ID_AA64ZFR0_AES_PMULL        0x2
+#define ID_AA64ZFR0_SVEVER_SVE2      0x1
+
+/* id_aa64mmfr0 */
+#define ID_AA64MMFR0_ECV_SHIFT       60
+#define ID_AA64MMFR0_FGT_SHIFT       56
+#define ID_AA64MMFR0_EXS_SHIFT       44
+#define ID_AA64MMFR0_TGRAN4_2_SHIFT  40
+#define ID_AA64MMFR0_TGRAN64_2_SHIFT 36
+#define ID_AA64MMFR0_TGRAN16_2_SHIFT 32
+#define ID_AA64MMFR0_TGRAN4_SHIFT    28
+#define ID_AA64MMFR0_TGRAN64_SHIFT   24
+#define ID_AA64MMFR0_TGRAN16_SHIFT   20
+#define ID_AA64MMFR0_BIGENDEL0_SHIFT 16
+#define ID_AA64MMFR0_SNSMEM_SHIFT    12
+#define ID_AA64MMFR0_BIGENDEL_SHIFT  8
+#define ID_AA64MMFR0_ASID_SHIFT      4
+#define ID_AA64MMFR0_PARANGE_SHIFT   0
+
+#define ID_AA64MMFR0_TGRAN4_NI         0xf
+#define ID_AA64MMFR0_TGRAN4_SUPPORTED  0x0
+#define ID_AA64MMFR0_TGRAN64_NI        0xf
+#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
+#define ID_AA64MMFR0_TGRAN16_NI        0x0
+#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
+#define ID_AA64MMFR0_PARANGE_48        0x5
+#define ID_AA64MMFR0_PARANGE_52        0x6
+
+/* id_aa64mmfr1 */
+#define ID_AA64MMFR1_ETS_SHIFT       36
+#define ID_AA64MMFR1_TWED_SHIFT      32
+#define ID_AA64MMFR1_XNX_SHIFT       28
+#define ID_AA64MMFR1_SPECSEI_SHIFT   24
+#define ID_AA64MMFR1_PAN_SHIFT       20
+#define ID_AA64MMFR1_LOR_SHIFT       16
+#define ID_AA64MMFR1_HPD_SHIFT       12
+#define ID_AA64MMFR1_VHE_SHIFT       8
+#define ID_AA64MMFR1_VMIDBITS_SHIFT  4
+#define ID_AA64MMFR1_HADBS_SHIFT     0
+
+#define ID_AA64MMFR1_VMIDBITS_8      0
+#define ID_AA64MMFR1_VMIDBITS_16     2
+
+/* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT      60
+#define ID_AA64MMFR2_EVT_SHIFT       56
+#define ID_AA64MMFR2_BBM_SHIFT       52
+#define ID_AA64MMFR2_TTL_SHIFT       48
+#define ID_AA64MMFR2_FWB_SHIFT       40
+#define ID_AA64MMFR2_IDS_SHIFT       36
+#define ID_AA64MMFR2_AT_SHIFT        32
+#define ID_AA64MMFR2_ST_SHIFT        28
+#define ID_AA64MMFR2_NV_SHIFT        24
+#define ID_AA64MMFR2_CCIDX_SHIFT     20
+#define ID_AA64MMFR2_LVA_SHIFT       16
+#define ID_AA64MMFR2_IESB_SHIFT      12
+#define ID_AA64MMFR2_LSM_SHIFT       8
+#define ID_AA64MMFR2_UAO_SHIFT       4
+#define ID_AA64MMFR2_CNP_SHIFT       0
+
+/* id_aa64dfr0 */
+#define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
+#define ID_AA64DFR0_PMSVER_SHIFT     32
+#define ID_AA64DFR0_CTX_CMPS_SHIFT   28
+#define ID_AA64DFR0_WRPS_SHIFT       20
+#define ID_AA64DFR0_BRPS_SHIFT       12
+#define ID_AA64DFR0_PMUVER_SHIFT     8
+#define ID_AA64DFR0_TRACEVER_SHIFT   4
+#define ID_AA64DFR0_DEBUGVER_SHIFT   0
+
+#define ID_AA64DFR0_PMUVER_8_0       0x1
+#define ID_AA64DFR0_PMUVER_8_1       0x4
+#define ID_AA64DFR0_PMUVER_8_4       0x5
+#define ID_AA64DFR0_PMUVER_8_5       0x6
+#define ID_AA64DFR0_PMUVER_IMP_DEF   0xf
+
+#define ID_DFR0_PERFMON_SHIFT        24
+
+#define ID_DFR0_PERFMON_8_1          0x4
+
+#define ID_ISAR4_SWP_FRAC_SHIFT        28
+#define ID_ISAR4_PSR_M_SHIFT           24
+#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20
+#define ID_ISAR4_BARRIER_SHIFT         16
+#define ID_ISAR4_SMC_SHIFT             12
+#define ID_ISAR4_WRITEBACK_SHIFT       8
+#define ID_ISAR4_WITHSHIFTS_SHIFT      4
+#define ID_ISAR4_UNPRIV_SHIFT          0
+
+#define ID_DFR1_MTPMU_SHIFT          0
+
+#define ID_ISAR0_DIVIDE_SHIFT        24
+#define ID_ISAR0_DEBUG_SHIFT         20
+#define ID_ISAR0_COPROC_SHIFT        16
+#define ID_ISAR0_CMPBRANCH_SHIFT     12
+#define ID_ISAR0_BITFIELD_SHIFT      8
+#define ID_ISAR0_BITCOUNT_SHIFT      4
+#define ID_ISAR0_SWAP_SHIFT          0
+
+#define ID_ISAR5_RDM_SHIFT           24
+#define ID_ISAR5_CRC32_SHIFT         16
+#define ID_ISAR5_SHA2_SHIFT          12
+#define ID_ISAR5_SHA1_SHIFT          8
+#define ID_ISAR5_AES_SHIFT           4
+#define ID_ISAR5_SEVL_SHIFT          0
+
+#define ID_ISAR6_I8MM_SHIFT          24
+#define ID_ISAR6_BF16_SHIFT          20
+#define ID_ISAR6_SPECRES_SHIFT       16
+#define ID_ISAR6_SB_SHIFT            12
+#define ID_ISAR6_FHM_SHIFT           8
+#define ID_ISAR6_DP_SHIFT            4
+#define ID_ISAR6_JSCVT_SHIFT         0
+
+#define ID_MMFR0_INNERSHR_SHIFT      28
+#define ID_MMFR0_FCSE_SHIFT          24
+#define ID_MMFR0_AUXREG_SHIFT        20
+#define ID_MMFR0_TCM_SHIFT           16
+#define ID_MMFR0_SHARELVL_SHIFT      12
+#define ID_MMFR0_OUTERSHR_SHIFT      8
+#define ID_MMFR0_PMSA_SHIFT          4
+#define ID_MMFR0_VMSA_SHIFT          0
+
+#define ID_MMFR4_EVT_SHIFT           28
+#define ID_MMFR4_CCIDX_SHIFT         24
+#define ID_MMFR4_LSM_SHIFT           20
+#define ID_MMFR4_HPDS_SHIFT          16
+#define ID_MMFR4_CNP_SHIFT           12
+#define ID_MMFR4_XNX_SHIFT           8
+#define ID_MMFR4_AC2_SHIFT           4
+#define ID_MMFR4_SPECSEI_SHIFT       0
+
+#define ID_MMFR5_ETS_SHIFT           0
+
+#define ID_PFR0_DIT_SHIFT            24
+#define ID_PFR0_CSV2_SHIFT           16
+#define ID_PFR0_STATE3_SHIFT         12
+#define ID_PFR0_STATE2_SHIFT         8
+#define ID_PFR0_STATE1_SHIFT         4
+#define ID_PFR0_STATE0_SHIFT         0
+
+#define ID_DFR0_PERFMON_SHIFT        24
+#define ID_DFR0_MPROFDBG_SHIFT       20
+#define ID_DFR0_MMAPTRC_SHIFT        16
+#define ID_DFR0_COPTRC_SHIFT         12
+#define ID_DFR0_MMAPDBG_SHIFT        8
+#define ID_DFR0_COPSDBG_SHIFT        4
+#define ID_DFR0_COPDBG_SHIFT         0
+
+#define ID_PFR2_SSBS_SHIFT           4
+#define ID_PFR2_CSV3_SHIFT           0
+
+#define MVFR0_FPROUND_SHIFT          28
+#define MVFR0_FPSHVEC_SHIFT          24
+#define MVFR0_FPSQRT_SHIFT           20
+#define MVFR0_FPDIVIDE_SHIFT         16
+#define MVFR0_FPTRAP_SHIFT           12
+#define MVFR0_FPDP_SHIFT             8
+#define MVFR0_FPSP_SHIFT             4
+#define MVFR0_SIMD_SHIFT             0
+
+#define MVFR1_SIMDFMAC_SHIFT         28
+#define MVFR1_FPHP_SHIFT             24
+#define MVFR1_SIMDHP_SHIFT           20
+#define MVFR1_SIMDSP_SHIFT           16
+#define MVFR1_SIMDINT_SHIFT          12
+#define MVFR1_SIMDLS_SHIFT           8
+#define MVFR1_FPDNAN_SHIFT           4
+#define MVFR1_FPFTZ_SHIFT            0
+
+#define ID_PFR1_GIC_SHIFT            28
+#define ID_PFR1_VIRT_FRAC_SHIFT      24
+#define ID_PFR1_SEC_FRAC_SHIFT       20
+#define ID_PFR1_GENTIMER_SHIFT       16
+#define ID_PFR1_VIRTUALIZATION_SHIFT 12
+#define ID_PFR1_MPROGMOD_SHIFT       8
+#define ID_PFR1_SECURITY_SHIFT       4
+#define ID_PFR1_PROGMOD_SHIFT        0
+
+#define MVFR2_FPMISC_SHIFT           4
+#define MVFR2_SIMDMISC_SHIFT         0
+
+#define DCZID_DZP_SHIFT              4
+#define DCZID_BS_SHIFT               0
+
+/*
+ * The ZCR_ELx_LEN_* definitions intentionally include bits [8:4] which
+ * are reserved by the SVE architecture for future expansion of the LEN
+ * field, with compatible semantics.
+ */
+#define ZCR_ELx_LEN_SHIFT            0
+#define ZCR_ELx_LEN_SIZE             9
+#define ZCR_ELx_LEN_MASK             0x1ff
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 17:10:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 17:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148061.273490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFK-00047S-30; Tue, 29 Jun 2021 17:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148061.273490; Tue, 29 Jun 2021 17:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFJ-00047L-SM; Tue, 29 Jun 2021 17:10:01 +0000
Received: by outflank-mailman (input) for mailman id 148061;
 Tue, 29 Jun 2021 17:09:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vrVS=LX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lyHFH-0003pT-RD
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 17:09:59 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a139ef0a-c618-4222-9b02-19e5a1e0e7f6;
 Tue, 29 Jun 2021 17:09:59 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 07D6C1FB;
 Tue, 29 Jun 2021 10:09:59 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C0F5D3F694;
 Tue, 29 Jun 2021 10:09:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a139ef0a-c618-4222-9b02-19e5a1e0e7f6
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
Date: Tue, 29 Jun 2021 18:08:55 +0100
Message-Id: <b9c86a28df2bddca095ae02511ced09585dce164.1624974370.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1624974370.git.bertrand.marquis@arm.com>
References: <cover.1624974370.git.bertrand.marquis@arm.com>

Define a sanitize_cpu function to be called on secondary cores to
sanitize the cpuinfo structure from the boot CPU.

The safest value is taken when possible and the system is marked tainted
if we encounter values which are incompatible with each other.

Call the sanitize_cpu function on all secondary cores, and remove the
code disabling secondary cores if they are not the same as the boot
core, as we are now able to handle this use case.

This is only supported on arm64, so sanitize_cpu is an empty static
inline on arm32.

This patch also removes the code imported from Linux that will not be
used by Xen.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
 xen/arch/arm/smpboot.c           |   5 +-
 xen/include/asm-arm/cpufeature.h |   9 ++
 xen/include/xen/lib.h            |   1 +
 4 files changed, 199 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
index 4cc8378c14..744006ca7c 100644
--- a/xen/arch/arm/arm64/cpusanitize.c
+++ b/xen/arch/arm/arm64/cpusanitize.c
@@ -97,10 +97,6 @@ struct arm64_ftr_bits {
 		.width = 0,				\
 	}
 
-static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
-
-static bool __system_matches_cap(unsigned int n);
-
 /*
  * NOTE: Any changes to the visibility of features should be kept in
  * sync with the documentation of the CPU feature register ABI.
@@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_ctr[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
-	/*
-	 * Linux can handle differing I-cache policies. Userspace JITs will
-	 * make use of *minLine.
-	 * If we have differing I-cache policies, report it as the weakest - VIPT.
-	 */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
-	ARM64_FTR_END,
-};
-
-static struct arm64_ftr_override __ro_after_init no_override = { };
-
-struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
-	.name		= "SYS_CTR_EL0",
-	.ftr_bits	= ftr_ctr,
-	.override	= &no_override,
-};
-
 static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
@@ -335,12 +306,6 @@ static const struct arm64_ftr_bits ftr_mvfr2[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_dczid[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
-	ARM64_FTR_END,
-};
-
 static const struct arm64_ftr_bits ftr_id_isar0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
@@ -454,12 +419,6 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_zcr[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
-		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
-	ARM64_FTR_END,
-};
-
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
  * attributes, with 4bit feature fields and a default safe value of
@@ -478,17 +437,192 @@ static const struct arm64_ftr_bits ftr_generic_32bits[] = {
 	ARM64_FTR_END,
 };
 
-/* Table for a single 32bit feature value */
-static const struct arm64_ftr_bits ftr_single32[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
-	ARM64_FTR_END,
-};
-
-static const struct arm64_ftr_bits ftr_raz[] = {
-	ARM64_FTR_END,
-};
-
 /*
  * End of imported linux structures
  */
 
+static inline int __attribute_const__
+cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
+{
+    return (s64)(features << (64 - width - field)) >> (64 - width);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_signed_field(u64 features, int field)
+{
+    return cpuid_feature_extract_signed_field_width(features, field, 4);
+}
+
+static inline unsigned int __attribute_const__
+cpuid_feature_extract_unsigned_field_width(u64 features, int field, int width)
+{
+    return (u64)(features << (64 - width - field)) >> (64 - width);
+}
+
+static inline unsigned int __attribute_const__
+cpuid_feature_extract_unsigned_field(u64 features, int field)
+{
+    return cpuid_feature_extract_unsigned_field_width(features, field, 4);
+}
+
+static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
+{
+    return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_field_width(u64 features, int field, int width,
+                                  bool sign)
+{
+    return (sign) ?
+        cpuid_feature_extract_signed_field_width(features, field, width) :
+        cpuid_feature_extract_unsigned_field_width(features, field, width);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_field(u64 features, int field, bool sign)
+{
+    return cpuid_feature_extract_field_width(features, field, 4, sign);
+}
+
+static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
+{
+    return (s64)cpuid_feature_extract_field_width(val, ftrp->shift,
+                                                  ftrp->width, ftrp->sign);
+}
+
+static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
+                                s64 cur)
+{
+    s64 ret = 0;
+
+    switch ( ftrp->type ) {
+    case FTR_EXACT:
+        ret = ftrp->safe_val;
+        break;
+    case FTR_LOWER_SAFE:
+        ret = new < cur ? new : cur;
+        break;
+    case FTR_HIGHER_OR_ZERO_SAFE:
+        if (!cur || !new)
+            break;
+        fallthrough;
+    case FTR_HIGHER_SAFE:
+        ret = new > cur ? new : cur;
+        break;
+    default:
+        BUG();
+    }
+
+    return ret;
+}
+
+static void sanitize_reg(u64 *cur_reg, u64 new_reg, const char *reg_name,
+                        const struct arm64_ftr_bits *ftrp)
+{
+    int taint = 0;
+    u64 old_reg = *cur_reg;
+
+    for ( ; ftrp->width != 0; ftrp++ )
+    {
+        u64 mask;
+        s64 cur_field = arm64_ftr_value(ftrp, *cur_reg);
+        s64 new_field = arm64_ftr_value(ftrp, new_reg);
+
+        if ( cur_field == new_field )
+            continue;
+
+        if ( ftrp->strict )
+            taint = 1;
+
+        mask = arm64_ftr_mask(ftrp);
+
+        *cur_reg &= ~mask;
+        *cur_reg |= (arm64_ftr_safe_value(ftrp, new_field, cur_field)
+                    << ftrp->shift) & mask;
+    }
+
+    if ( old_reg != new_reg )
+        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
+               reg_name, old_reg, new_reg);
+    if ( old_reg != *cur_reg )
+        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
+               reg_name, old_reg, *cur_reg);
+
+    if ( taint )
+    {
+        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
+                reg_name);
+        add_taint(TAINT_CPU_OUT_OF_SPEC);
+    }
+}
+
+
+/*
+ * This function should be called on secondary cores to sanitize the boot cpu
+ * cpuinfo.
+ */
+void sanitize_cpu(const struct cpuinfo_arm *new)
+{
+
+#define SANITIZE_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_##reg)
+
+#define SANITIZE_ID_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_id_##reg)
+
+#define SANITIZE_GENERIC_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_generic_32bits)
+
+    SANITIZE_ID_REG(pfr64, 0, aa64pfr0);
+    SANITIZE_ID_REG(pfr64, 1, aa64pfr1);
+
+    SANITIZE_ID_REG(dbg64, 0, aa64dfr0);
+
+    /* aux64 x2 */
+
+    SANITIZE_ID_REG(mm64, 0, aa64mmfr0);
+    SANITIZE_ID_REG(mm64, 1, aa64mmfr1);
+    SANITIZE_ID_REG(mm64, 2, aa64mmfr2);
+
+    SANITIZE_ID_REG(isa64, 0, aa64isar0);
+    SANITIZE_ID_REG(isa64, 1, aa64isar1);
+
+    SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
+
+    if ( cpu_feature64_has_el0_32(&boot_cpu_data) )
+    {
+        SANITIZE_ID_REG(pfr32, 0, pfr0);
+        SANITIZE_ID_REG(pfr32, 1, pfr1);
+        SANITIZE_ID_REG(pfr32, 2, pfr2);
+
+        SANITIZE_ID_REG(dbg32, 0, dfr0);
+        SANITIZE_ID_REG(dbg32, 1, dfr1);
+
+        /* aux32 x1 */
+
+        SANITIZE_ID_REG(mm32, 0, mmfr0);
+        SANITIZE_GENERIC_REG(mm32, 1, mmfr1);
+        SANITIZE_GENERIC_REG(mm32, 2, mmfr2);
+        SANITIZE_GENERIC_REG(mm32, 3, mmfr3);
+        SANITIZE_ID_REG(mm32, 4, mmfr4);
+        SANITIZE_ID_REG(mm32, 5, mmfr5);
+
+        SANITIZE_ID_REG(isa32, 0, isar0);
+        SANITIZE_GENERIC_REG(isa32, 1, isar1);
+        SANITIZE_GENERIC_REG(isa32, 2, isar2);
+        SANITIZE_GENERIC_REG(isa32, 3, isar3);
+        SANITIZE_ID_REG(isa32, 4, isar4);
+        SANITIZE_ID_REG(isa32, 5, isar5);
+        SANITIZE_ID_REG(isa32, 6, isar6);
+
+        SANITIZE_GENERIC_REG(mvfr, 0, mvfr0);
+        SANITIZE_GENERIC_REG(mvfr, 1, mvfr1);
+#ifndef MVFR2_MAYBE_UNDEFINED
+        SANITIZE_REG(mvfr, 2, mvfr2);
+#endif
+    }
+}
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index a1ee3146ef..8fdf5e70d3 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -307,12 +307,14 @@ void start_secondary(void)
     set_processor_id(cpuid);
 
     identify_cpu(&current_cpu_data);
+    sanitize_cpu(&current_cpu_data);
     processor_setup();
 
     init_traps();
 
+#ifndef CONFIG_ARM_64
     /*
-     * Currently Xen assumes the platform has only one kind of CPUs.
+     * Currently Xen assumes the platform has only one kind of CPUs on ARM32.
      * This assumption does not hold on big.LITTLE platform and may
      * result to instability and insecure platform (unless cpu affinity
      * is manually specified for all domains). Better to park them for
@@ -328,6 +330,7 @@ void start_secondary(void)
                boot_cpu_data.midr.bits);
         stop_cpu();
     }
+#endif
 
     if ( dcache_line_bytes != read_dcache_line_bytes() )
     {
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index ba48db3eac..ad34e55cc8 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -330,6 +330,15 @@ extern struct cpuinfo_arm boot_cpu_data;
 
 extern void identify_cpu(struct cpuinfo_arm *);
 
+#ifdef CONFIG_ARM_64
+extern void sanitize_cpu(const struct cpuinfo_arm *);
+#else
+static inline void sanitize_cpu(const struct cpuinfo_arm *cpuinfo)
+{
+    /* Not supported on arm32 */
+}
+#endif
+
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 1198c7c0b2..c6987973bf 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -192,6 +192,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
 #define TAINT_ERROR_INJECT              (1u << 2)
 #define TAINT_HVM_FEP                   (1u << 3)
 #define TAINT_MACHINE_UNSECURE          (1u << 4)
+#define TAINT_CPU_OUT_OF_SPEC           (1u << 5)
 extern unsigned int tainted;
 #define TAINT_STRING_MAX_LEN            20
 extern char *print_tainted(char *str);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 17:10:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 17:10:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148062.273501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFM-0004ez-EK; Tue, 29 Jun 2021 17:10:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148062.273501; Tue, 29 Jun 2021 17:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFM-0004em-AB; Tue, 29 Jun 2021 17:10:04 +0000
Received: by outflank-mailman (input) for mailman id 148062;
 Tue, 29 Jun 2021 17:10:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vrVS=LX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lyHFL-0003oo-1u
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 17:10:03 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 96c4dc65-8492-43ba-8a40-bcca92ebe71b;
 Tue, 29 Jun 2021 17:09:57 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9022E11D4;
 Tue, 29 Jun 2021 10:09:57 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C23CE3F694;
 Tue, 29 Jun 2021 10:09:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96c4dc65-8492-43ba-8a40-bcca92ebe71b
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux
Date: Tue, 29 Jun 2021 18:08:54 +0100
Message-Id: <da0e48cde6c26d19fde468ad2860b807459a1042.1624974370.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1624974370.git.bertrand.marquis@arm.com>
References: <cover.1624974370.git.bertrand.marquis@arm.com>

Import the structures declared in the Linux file
arch/arm64/kernel/cpufeature.c and the required types.
The code has been imported from Linux 5.13-rc5 (commit
cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9).

Those structures will be used to sanitize the CPU features available
down to the ones available on all cores of a system, even on a
heterogeneous platform (for example big.LITTLE).

For each feature field of all ID registers, those structures define the
safest value and whether different values are allowed on different
cores.

This patch introduces the Linux code without any changes to it.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/arm64/Makefile      |   1 +
 xen/arch/arm/arm64/cpusanitize.c | 494 +++++++++++++++++++++++++++++++
 2 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/arm64/cpusanitize.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 40642ff574..c626990185 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -1,6 +1,7 @@
 obj-y += lib/
 
 obj-y += cache.o
+obj-y += cpusanitize.o
 obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
 obj-$(CONFIG_EARLY_PRINTK) += debug.o
 obj-y += domctl.o
diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
new file mode 100644
index 0000000000..4cc8378c14
--- /dev/null
+++ b/xen/arch/arm/arm64/cpusanitize.c
@@ -0,0 +1,494 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Sanitize CPU feature definitions
+ *
+ * Copyright (C) 2021 Arm Ltd.
+ * based on code from the Linux kernel, which is:
+ *  Copyright (C) 2015 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <asm/sysregs.h>
+#include <asm/cpufeature.h>
+
+/*
+ * CPU feature register tracking
+ *
+ * The safe value of a CPUID feature field is dependent on the implications
+ * of the values assigned to it by the architecture. Based on the relationship
+ * between the values, the features are classified into 3 types - LOWER_SAFE,
+ * HIGHER_SAFE and EXACT.
+ *
+ * The lowest value of all the CPUs is chosen for LOWER_SAFE and highest
+ * for HIGHER_SAFE. It is expected that all CPUs have the same value for
+ * a field when EXACT is specified, failing which, the safe value specified
+ * in the table is chosen.
+ */
+
+enum ftr_type {
+    FTR_EXACT,               /* Use a predefined safe value */
+    FTR_LOWER_SAFE,          /* Smaller value is safe */
+    FTR_HIGHER_SAFE,         /* Bigger value is safe */
+    FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
+};
+
+#define FTR_STRICT      true    /* SANITY check strict matching required */
+#define FTR_NONSTRICT   false   /* SANITY check ignored */
+
+#define FTR_SIGNED      true    /* Value should be treated as signed */
+#define FTR_UNSIGNED    false   /* Value should be treated as unsigned */
+
+#define FTR_VISIBLE true    /* Feature visible to the user space */
+#define FTR_HIDDEN  false   /* Feature is hidden from the user */
+
+#define FTR_VISIBLE_IF_IS_ENABLED(config)       \
+    (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
+
+struct arm64_ftr_bits {
+    bool    sign;   /* Value is signed ? */
+    bool    visible;
+    bool    strict; /* CPU Sanity check: strict matching required ? */
+    enum ftr_type   type;
+    u8      shift;
+    u8      width;
+    s64     safe_val; /* safe value for FTR_EXACT features */
+};
+
+/*
+ * NOTE: The following structures are imported directly from Linux kernel and
+ * should be kept in sync.
+ * The current version has been imported from arch/arm64/kernel/cpufeature.c
+ *  from kernel version 5.13-rc5
+ */
+
+#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	{						\
+		.sign = SIGNED,				\
+		.visible = VISIBLE,			\
+		.strict = STRICT,			\
+		.type = TYPE,				\
+		.shift = SHIFT,				\
+		.width = WIDTH,				\
+		.safe_val = SAFE_VAL,			\
+	}
+
+/* Define a feature with unsigned values */
+#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	__ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
+
+/* Define a feature with a signed value */
+#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	__ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
+
+#define ARM64_FTR_END					\
+	{						\
+		.width = 0,				\
+	}
+
+static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
+
+static bool __system_matches_cap(unsigned int n);
+
+/*
+ * NOTE: Any changes to the visibility of features should be kept in
+ * sync with the documentation of the CPU feature register ABI.
+ */
+static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TLB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DGH_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SPECRES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+				   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
+	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
+				    FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F64MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F32MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
+	/*
+	 * Page size not being supported at Stage-2 is not fatal. You
+	 * just give up KVM if PAGE_SIZE isn't supported there. Go fix
+	 * your favourite nesting hypervisor.
+	 *
+	 * There is a small corner case where the hypervisor explicitly
+	 * advertises a given granule size at Stage-2 (value 2) on some
+	 * vCPUs, and uses the fallback to Stage-1 (value 0) for other
+	 * vCPUs. Although this is not forbidden by the architecture, it
+	 * indicates that the hypervisor is being silly (or buggy).
+	 *
+	 * We make no effort to cope with this and pretend that if these
+	 * fields are inconsistent across vCPUs, then it isn't worth
+	 * trying to bring KVM up.
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1),
+	/*
+	 * We already refuse to boot CPUs that don't support our configured
+	 * page size, so we can only detect mismatches for a page size other
+	 * than the one we're currently using. Unfortunately, SoCs like this
+	 * exist in the wild so, even though we don't like it, we'll have to go
+	 * along with it and treat them as non-strict.
+	 */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
+	/* Linux shouldn't care about secure memory */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+	/*
+	 * Differing PARange is fine as long as all peripherals and memory are mapped
+	 * within the minimum PARange of all CPUs
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_SPECSEI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_BBM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IDS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_ST_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_NV_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CCIDX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_ctr[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
+	/*
+	 * Linux can handle differing I-cache policies. Userspace JITs will
+	 * make use of *minLine.
+	 * If we have differing I-cache policies, report it as the weakest - VIPT.
+	 */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_override __ro_after_init no_override = { };
+
+struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
+	.name		= "SYS_CTR_EL0",
+	.ftr_bits	= ftr_ctr,
+	.override	= &no_override,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_AUXREG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_TCM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_SHARELVL_SHIFT, 4, 0),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_PMSA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_VMSA_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
+	/*
+	 * We can instantiate multiple PMU instances with different levels
+	 * of support.
+	 */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_mvfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_dczid[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_COPROC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar5[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_AC2_SHIFT, 4, 0),
+
+	/*
+	 * SpecSEI = 1 indicates that the PE might generate an SError on an
+	 * external abort on a speculative read. It is safer to assume that an
+	 * SError might be generated than that it will not be. Hence it has
+	 * been classified as FTR_HIGHER_SAFE.
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar4[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar6[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE0_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+	/* [31:28] TraceFilt */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPSDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPDBG_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_dfr1[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_zcr[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
+		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
+	ARM64_FTR_END,
+};
+
+/*
+ * Common ftr bits for a 32bit register with all hidden, strict
+ * attributes, with 4bit feature fields and a default safe value of
+ * 0. Covers the following 32bit registers:
+ * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
+ */
+static const struct arm64_ftr_bits ftr_generic_32bits[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),
+	ARM64_FTR_END,
+};
+
+/* Table for a single 32bit feature value */
+static const struct arm64_ftr_bits ftr_single32[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_raz[] = {
+	ARM64_FTR_END,
+};
+
+/*
+ * End of imported linux structures
+ */
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 17:10:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 17:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148063.273512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFR-0005Pz-Ni; Tue, 29 Jun 2021 17:10:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148063.273512; Tue, 29 Jun 2021 17:10:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyHFR-0005Po-Ip; Tue, 29 Jun 2021 17:10:09 +0000
Received: by outflank-mailman (input) for mailman id 148063;
 Tue, 29 Jun 2021 17:10:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vrVS=LX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lyHFQ-0003oo-2Y
 for xen-devel@lists.xenproject.org; Tue, 29 Jun 2021 17:10:08 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 51d1fbfe-0e80-4af1-b3da-52a946aa9230;
 Tue, 29 Jun 2021 17:10:00 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DE8FA11D4;
 Tue, 29 Jun 2021 10:09:59 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 39D353F694;
 Tue, 29 Jun 2021 10:09:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51d1fbfe-0e80-4af1-b3da-52a946aa9230
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 4/4] xen/arm: Use sanitized values for p2m
Date: Tue, 29 Jun 2021 18:08:56 +0100
Message-Id: <9ef2c92f4f749bce4cabad34fefe2d886b1deffe.1624974370.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1624974370.git.bertrand.marquis@arm.com>
References: <cover.1624974370.git.bertrand.marquis@arm.com>

Replace the code in p2m that tries to find sane values for the supported
VMID size and the PA range to use. Use the boot cpuinfo instead: its
values are sanitized during boot, so they already hold the safest
possible values for those parameters on the system.

On arm32, the system panics if there are different types of cores, so
those checks were not needed there anyway.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/p2m.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d414c4feb9..2536a86a13 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2030,31 +2030,21 @@ void __init setup_virt_paging(void)
         [7] = { 0 }  /* Invalid */
     };
 
-    unsigned int i, cpu;
+    unsigned int i;
     unsigned int pa_range = 0x10; /* Larger than any possible value */
-    bool vmid_8_bit = false;
-
-    for_each_online_cpu ( cpu )
-    {
-        const struct cpuinfo_arm *info = &cpu_data[cpu];
 
-        /*
-         * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
-         * with IPA bits == PA bits, compare against "pabits".
-         */
-        if ( pa_range_info[info->mm64.pa_range].pabits < p2m_ipa_bits )
-            p2m_ipa_bits = pa_range_info[info->mm64.pa_range].pabits;
-
-        /* Set a flag if the current cpu does not support 16 bit VMIDs. */
-        if ( info->mm64.vmid_bits != MM64_VMID_16_BITS_SUPPORT )
-            vmid_8_bit = true;
-    }
+    /*
+     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
+     * with IPA bits == PA bits, compare against "pabits".
+     */
+    if ( pa_range_info[boot_cpu_data.mm64.pa_range].pabits < p2m_ipa_bits )
+        p2m_ipa_bits = pa_range_info[boot_cpu_data.mm64.pa_range].pabits;
 
     /*
-     * If the flag is not set then it means all CPUs support 16-bit
-     * VMIDs.
+     * CPU info sanitization made sure that 16-bit VMIDs are only reported
+     * as supported if all cores support them.
      */
-    if ( !vmid_8_bit )
+    if ( boot_cpu_data.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
         max_vmid = MAX_VMID_16_BIT;
 
     /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 29 19:56:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 19:56:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148086.273523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyJqM-0003tQ-84; Tue, 29 Jun 2021 19:56:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148086.273523; Tue, 29 Jun 2021 19:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyJqM-0003tJ-3D; Tue, 29 Jun 2021 19:56:26 +0000
Received: by outflank-mailman (input) for mailman id 148086;
 Tue, 29 Jun 2021 19:56:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyJqK-0003t7-Im; Tue, 29 Jun 2021 19:56:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyJqK-0002g3-7t; Tue, 29 Jun 2021 19:56:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyJqJ-0007KB-UD; Tue, 29 Jun 2021 19:56:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyJqJ-0004NL-Tf; Tue, 29 Jun 2021 19:56:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pECMll/w4Avc0MljK3L2wwM+jmoeBjCRdhUpx1LbT0c=; b=rnfC8d7j6G7WboClyVNXVRbpWf
	F1YhsUDdkONTLfdnHTBOetYW8a5GbH1hlknEboT3KUMKNWQkjE0YVomHCN1PJi7Knfa3jSIzogXCT
	1tPLAv2CV4m+QARvdYeaD51fs2/3Dgg8r2lU2p95CSbY9mfezSA5ksUgn9Z4laVpoAaw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163179-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163179: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9e654e10197f5a014eccd71de5ea633c1b0f4303
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 19:56:23 +0000

flight 163179 qemu-mainline real [real]
flight 163186 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163179/
http://logs.test-lab.xenproject.org/osstest/logs/163186/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                9e654e10197f5a014eccd71de5ea633c1b0f4303
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  313 days
Failing since        152659  2020-08-21 14:07:39 Z  312 days  573 attempts
Testing same since   163179  2021-06-29 05:21:07 Z    0 days    1 attempts

------------------------------------------------------------
551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 179188 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 21:29:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 21:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148090.273536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyLIC-0003eb-2L; Tue, 29 Jun 2021 21:29:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148090.273536; Tue, 29 Jun 2021 21:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyLIB-0003eU-Vc; Tue, 29 Jun 2021 21:29:15 +0000
Received: by outflank-mailman (input) for mailman id 148090;
 Tue, 29 Jun 2021 21:29:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyLIA-0003eK-RS; Tue, 29 Jun 2021 21:29:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyLIA-0004JB-BY; Tue, 29 Jun 2021 21:29:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyLI9-00045a-U8; Tue, 29 Jun 2021 21:29:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyLI9-0004Z6-Tj; Tue, 29 Jun 2021 21:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/Q8KxiCDzi4JIl6paaAuY7pkJqdaQxzQY1SP8+jRhV8=; b=LLuTpjyoQ93g9poTlExQSUujO5
	f6jlQZeJDUMwhe3fBEEgDl6Oa9QbyXyqxNMmx3vS9Ve08ihakSfSvHZ91zUGgTRs66/h/WCaEawir
	RJ0SX+ezwxSY9iSEwIJPkQcSyhv3AOL4s4tu/y/EuzXfsu5WkehjM5homkBxH4gMzHWs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163181-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163181: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c54b245d011855ea91c5beff07f1db74143ce614
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 21:29:13 +0000

flight 163181 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163181/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c54b245d011855ea91c5beff07f1db74143ce614
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  333 days
Failing since        152366  2020-08-01 20:49:34 Z  332 days  565 attempts
Testing same since   163181  2021-06-29 07:30:01 Z    0 days    1 attempts

------------------------------------------------------------
6291 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1731312 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 29 22:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Jun 2021 22:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148095.273551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyMcL-0003Hp-Ic; Tue, 29 Jun 2021 22:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148095.273551; Tue, 29 Jun 2021 22:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyMcL-0003Hi-FG; Tue, 29 Jun 2021 22:54:09 +0000
Received: by outflank-mailman (input) for mailman id 148095;
 Tue, 29 Jun 2021 22:54:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyMcK-0003HY-I2; Tue, 29 Jun 2021 22:54:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyMcK-0005g1-90; Tue, 29 Jun 2021 22:54:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyMcK-0007ve-11; Tue, 29 Jun 2021 22:54:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyMcK-000302-0W; Tue, 29 Jun 2021 22:54:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ntdpar/Tyqln3P2tCtGhdHG8cktZ5OfyJxya9OFIYH4=; b=TsXXDo49ER2VnCCwXFCYnbDGcs
	9S/0aiG8eMxiBPTnRj2EfJkFXe9xn0VPeXcVnYbhQFGDNEmIhxvQdWy2dS0Uow+BH/ncnnaxLS8zj
	N7drz/t3wTp0ZpoJ2e+vkVNJbJpCa8aFwN9xxk0P0fHNPrVwPmgT+/ipogrFeeOIwh4A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163185-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163185: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=9421f5ab8d1efff12a4d07b4f07152426684b56a
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Jun 2021 22:54:08 +0000

flight 163185 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 9421f5ab8d1efff12a4d07b4f07152426684b56a
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   25 days
Failing since        162368  2021-06-04 15:42:59 Z   25 days   64 attempts
Testing same since   163185  2021-06-29 13:40:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Daoxiang Li <daoxiang.li@intel.com>
  Dov Murik <dovmurik@linux.ibm.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Sunil V L <sunilvl@ventanamicro.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2835 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 00:43:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 00:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148099.273564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOK9-00059B-T3; Wed, 30 Jun 2021 00:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148099.273564; Wed, 30 Jun 2021 00:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOK9-000594-Q6; Wed, 30 Jun 2021 00:43:29 +0000
Received: by outflank-mailman (input) for mailman id 148099;
 Wed, 30 Jun 2021 00:43:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZaJs=LY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lyOK7-00058y-In
 for xen-devel@lists.xen.org; Wed, 30 Jun 2021 00:43:27 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e25435b7-e2bb-4986-91f5-e5ddef50724f;
 Wed, 30 Jun 2021 00:43:26 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 365A461584;
 Wed, 30 Jun 2021 00:43:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e25435b7-e2bb-4986-91f5-e5ddef50724f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625013805;
	bh=nifXhRNkDUFY6e07rBpxMiom6KNdsFobBAMY357YZOc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NPIaFHQh3qvY4MLjJQcJH8Gh0EF+VuTM+ndy88BROb1H+mnWY6xerzPCzpmP5PKFW
	 rzGIjxggupzQE4bV3AVoxKdusDPsihfxF6zUvjRzJ4dqduxjp+dWLFTCNPfgpTAwox
	 ItgC3oCowtMd2PIVSPf3mte4c5JNUAsWLZ543sc+Mo1RZvRrHmnF1jSxw709b9pMMA
	 SgLPPVw2LRvJO06c+dI8wtPoyTVlCfpPtXGMs3RgKFulWFhzuTGpLpc2qXRCGVN5lM
	 NMxtWQ3Aad21dfBK6I2wOsgd+hpche/71RsC0CocNwQ7WTAvP484FCsaEM56DmeVVC
	 4P7xRT3AzEDEQ==
Date: Tue, 29 Jun 2021 17:43:24 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: will@kernel.org, julien.thierry.kdev@gmail.com, Wei.Chen@arm.com
cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, 
    "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
    "jean-philippe@linaro.org" <jean-philippe@linaro.org>, 
    Julien Grall <julien@xen.org>, Andre Przywara <Andre.Przywara@arm.com>, 
    Marc Zyngier <maz@kernel.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>
Subject: Re: [Kvmtool] Some thoughts on using kvmtool Virtio for Xen
In-Reply-To: <DB9PR08MB6857B375207376D8320AFBA89E309@DB9PR08MB6857.eurprd08.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2106291716560.9437@sstabellini-ThinkPad-T480s>
References: <DB9PR08MB6857B375207376D8320AFBA89E309@DB9PR08MB6857.eurprd08.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Wei,

Sorry for the late reply.


On Tue, 15 Jun 2021, Wei Chen wrote:
> Hi,
> 
> I have some thoughts on using the kvmtool Virtio implementation
> for Xen. I copied my markdown file into this email. If you have
> time, could you please help me review it?
> 
> Any feedback is welcome!
> 
> # Some thoughts on using kvmtool Virtio for Xen
> ## Background
> 
> The Xen community is working on adding VIRTIO capability to Xen, and we are
> working on a VIRTIO backend for Xen. But apart from QEMU's virtio-net support
> for x86 Xen, there is currently no VIRTIO backend that supports Xen. Because
> of the community's strong preference for an out-of-QEMU solution, we want to
> find a lightweight VIRTIO backend to support Xen.
> 
> We have an idea of utilizing the virtio implementation of kvmtool for Xen.
> We know there was some agreement that kvmtool won't try to be a full QEMU
> alternative, so we have written the two proposals below for the communities
> to discuss in public:
> 
> ## Proposals
> ### 1. Introduce a new "dm-only" command
> 1. Introduce a new "dm-only" command to provide a pure device model mode. In
>    this mode, kvmtool only handles IO requests; VM creation and
>    initialization are bypassed.
> 
>     * We will rework the interface between the virtio code and the rest of
>     kvmtool, to use just the minimal set of information. In the end, there
>     would be MMIO accesses and shared memory that control the device model,
>     so that could be abstracted to do away with any KVM specifics at all. If
>     this is workable, we will send a first set of patches to introduce this
>     interface and adapt the existing kvmtool to it. We can then add Xen
>     support on top of it.
> 
>     For Xen support, we will detect the presence of the Xen libraries, and
>     also allow people to ignore them, as kvmtool does with optional features
>     like libz or libaio.
> 
>     Ideally, we want to move all code relying on the Xen libraries to a set
>     of new files. In that case, these files are only compiled when the Xen
>     libraries are detected. But if we can't decouple this code completely,
>     we may introduce a few #ifdefs to protect it.
> 
>     If KVM or other VMMs do not need "dm-only" mode, or if "dm-only" cannot
>     work without the Xen libraries, we will make the "dm-only" command
>     depend on the presence of the Xen libraries.
> 
>     So a normal compile (without the Xen libraries installed) would create
>     a binary as close as possible to the current code, and only people who
>     have the Xen libraries installed would ever generate a "dm-only"
>     capable kvmtool.
> 
> ### 2. Abstract kvmtool virtio implementation as a library
> 1. Add a kvmtool Makefile target to generate a virtio library. In this
>    scenario, not just Xen but any other project that wants to provide a
>    userspace virtio backend service can link against this virtio library.
>    These users would benefit from kvmtool's VIRTIO implementation and
>    would participate in improvements, upgrades, and maintenance of the
>    VIRTIO library.
> 
>     * In this case, the Xen-specific code would not be upstreamed to the
>       kvmtool repo; it would then be a natural part of the Xen repo, in
>       xen/tools, or maintained in another repo.
> 
>       We will have a completely separate VIRTIO backend for Xen, just
>       linking to kvmtool's VIRTIO library.
> 
>     * The main changes to kvmtool would be:
>         1. Still rework the interface between the virtio code and the
>            rest of kvmtool, to abstract the whole virtio implementation
>            into a library.
>         2. Modify the current build system to add a new virtio library
>            target.


I don't really have a preference between the two.

>From my past experience with Xen enablement in QEMU, I can say that the
Xen part of receiving IO emulation requests is actually pretty minimal.
See as a reference
https://github.com/qemu/qemu/blob/13d5f87cc3b94bfccc501142df4a7b12fee3a6e7/hw/i386/xen/xen-hvm.c#L1163.
The modifications to rework the internal interfaces that you listed
below are far more "interesting" than the code necessary to receive
emulation requests from Xen.

So it looks like option 1 would be less effort and fewer code changes
overall to kvmtool. Option 2 is more work: the library could be nice
to have, but then we would have to be very careful about the API/ABI,
compatibility, etc.

Will Deacon and Julien Thierry might have an opinion.



> ## Reworking the interface is common work for both proposals
> **In kvmtool, one virtual device can be separated into three layers:**
> 
> - A device type layer to provide an abstraction
>     - Provide an interface to collect and store device configuration.
>         Using a block device as an example, kvmtool uses disk_image to
>         collect and store disk parameters like:
>             -  backend image format: raw, qcow or block device
>             -  backend block device or file image path
>             -  readonly, direct, etc.
>     - Provide operations to interact with real backend devices or services:
>         - provide backend device operations:
>             - block device operations
>             - raw image operations
>             - qcow image operations
> - Hypervisor interfaces
>     - Guest memory mapping and unmapping interfaces
>     - Virtual device register interface
>         - MMIO/PIO space register
>         - IRQ register
>     - Virtual IRQ inject interface
>     - Hypervisor eventfd interface

The "hypervisor interfaces" are the ones that are most interesting, as
we need an alternative implementation for Xen for each of them. This is
the part that was a bit more delicate when we added Xen support to QEMU,
especially the memory mapping and unmapping. All doable, but we need
proper abstractions.


> - An implementation layer to handle guest IO requests.
>     - Kvmtool provides virtual devices for the guest. Some virtual
>       devices have two kinds of implementations:
>         - VIRTIO implementation
>         - Real hardware emulation
> 
> For example, the kvmtool console has two implementations: virtio console
> and 8250 serial. These implementations depend on device type parameters to
> create devices, and on device type ops to forward data from/to the real
> device. The implementation invokes hypervisor interfaces to map/unmap
> resources and notify the guest.
> 
> In the current kvmtool code, the boundaries between these three layers are
> relatively clear, but there are a few pieces of code that are somewhat
> interleaved, for example:
> - In the virtio_blk__init(...) function, the code uses disk_image directly.
>   This data is kvmtool specific. If we want to make the VIRTIO
>   implementation hypervisor agnostic, such code should be moved elsewhere.
>   Alternatively, we keep only the code from virtio_blk__init_one(...) in
>   the virtio block implementation, and keep virtio_blk__init(...) in the
>   kvmtool-specific part of the code.
> 
> However, in the current VIRTIO device creation and data handling process,
> the device types and hypervisor APIs used are both exclusive to kvmtool and
> KVM. If we want to use the current VIRTIO implementation for other device
> models and hypervisors, it is unlikely to work properly.
> 
> So, the major work of reworking the interface is decoupling the VIRTIO
> implementation from kvmtool and KVM.
> 
> **Introduce some intermediate data structures to do the decoupling:**
> 1. Introduce intermediate type data structures like `virtio_disk_type`,
>    `virtio_net_type`, `virtio_console_type`, etc. These data structures
>    will be the standard device type interfaces between the virtio device
>    implementation and the hypervisor. Using virtio_disk_type as an example:
>     ~~~~
>     struct virtio_disk_type {
>         /*
>          * Essential configuration for a virtio block device can be
>          * obtained from kvmtool's disk_image. Other hypervisor device
>          * models can also use this data structure to pass the necessary
>          * parameters for creating a virtio block device.
>          */
>         struct virtio_blk_cfg vblk_cfg;
>         /*
>          * Virtio block device MMIO address and IRQ line. These two members
>          * are optional. If the hypervisor provides allocate_mmio_space and
>          * allocate_irq_line capabilities and the device model doesn't set
>          * these two fields, the virtio block implementation will use the
>          * hypervisor APIs to allocate the MMIO address and IRQ line. If
>          * these two fields are configured, the implementation will use them.
>          */
>         paddr_t addr;
>         uint32_t irq;
>         /*
>          * In kvmtool, these ops connect to the disk_image APIs. Other
>          * hypervisor device models should provide similar APIs for these
>          * ops to interact with the real backend device.
>          */
>         struct disk_type_ops {
>             .read
>             .write
>             .flush
>             .wait
>             ...
>         } ops;
>     };
>     ~~~~
> 
> 2. Introduce an intermediate hypervisor data structure. This data structure
>    provides a set of standard hypervisor API interfaces. In the virtio
>    implementation, the KVM-specific APIs, like kvm_register_mmio, will not
>    be invoked directly; the virtio implementation will use these interfaces
>    to access the hypervisor-specific APIs, for example `struct vmm_impl`:
>     ~~~~
>     struct vmm_impl {
>         /*
>          * Pointer to the real hypervisor handle, e.g. `struct kvm *kvm`.
>          * This pointer will be passed to the vmm ops.
>          */
>         void *vmm;
>         int (*allocate_irq_line_fn_t)(void *vmm, ...);
>         int (*allocate_mmio_space_fn_t)(void *vmm, ...);
>         int (*register_mmio_fn_t)(void *vmm, ...);
>         void *(*map_guest_page_fn_t)(void *vmm, ...);
>         int (*unmap_guest_page_fn_t)(void *vmm, ...);
>         int (*virtual_irq_inject_fn_t)(void *vmm, ...);
>     };
>     ~~~~

Are the map_guest_page and unmap_guest_page functions already called at
the appropriate places for KVM?

If not, the main issue is going to be adding the
map_guest_page/unmap_guest_page calls to the virtio device
implementations.

 
> 3. After being decoupled from kvmtool, any hypervisor can use the standard
>    `vmm_impl` and `virtio_xxxx_type` interfaces to invoke the standard
>    virtio implementation and create virtio devices.
>     ~~~~
>     /* Prepare VMM interface */
>     struct vmm_impl *vmm = ...;
>     vmm->register_mmio_fn_t = kvm__register_mmio;
>     /* kvm__map_guest_page is a wrapper around guest_flat_to_host */
>     vmm->map_guest_page_fn_t = kvm__map_guest_page;
>     ...
> 
>     /* Prepare virtio_disk_type */
>     struct virtio_disk_type *vdisk_type = ...;
>     vdisk_type->vblk_cfg.capacity = disk_image->size / SECTOR_SIZE;
>     ...
>     vdisk_type->ops->read = disk_image__read;
>     vdisk_type->ops->write = disk_image__write;
>     ...
> 
>     /* Invoke VIRTIO implementation API to create a virtio block device */
>     virtio_blk__init_one(vmm, vdisk_type);
>     ~~~~
> 
> VIRTIO block device simple flow before reworking interface:
> https://drive.google.com/file/d/1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX/view?usp=sharing
> ![image](https://drive.google.com/uc?export=view&id=1k0Grd4RSuCmhKUPktHj9FRamEYrPCFkX)
> 
> VIRTIO block device simple flow after reworking interface:
> https://drive.google.com/file/d/1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL/view?usp=sharing
> ![image](https://drive.google.com/uc?export=view&id=1rMXRvulwlRO39juWf08Wgk3G1NZtG2nL)
> 
> 
> Thanks,
> Wei Chen
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 00:49:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 00:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148102.273576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOQ3-0005oT-JE; Wed, 30 Jun 2021 00:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148102.273576; Wed, 30 Jun 2021 00:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOQ3-0005oM-G1; Wed, 30 Jun 2021 00:49:35 +0000
Received: by outflank-mailman (input) for mailman id 148102;
 Wed, 30 Jun 2021 00:49:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZaJs=LY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lyOQ2-0005oG-5T
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 00:49:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f697942e-feb6-4786-a978-d05ff8c40066;
 Wed, 30 Jun 2021 00:49:33 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 4B6CC61D2A;
 Wed, 30 Jun 2021 00:49:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f697942e-feb6-4786-a978-d05ff8c40066
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625014172;
	bh=eESDptVDqHkg/ybEAJw0VpjrApeyGOqpIjDuPfzboqQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=e25hB2bOUkhg/ewBkh7aziZ08SO6nPshVwdYfrFozfqn0NIp3TOvwVMJGhOCcZzJn
	 pvAlEIRjNlmAZTOJqh4Ljh2k6iSunNAq+l4vBjjFW+q/MmniQArMlHiobgRdHP/H96
	 q3zpJZSjxG/p2BiEyfvtOgzQsD9A3Esw7knV9IFMung/82vZZOS8CMiXtEmnpgO6cv
	 s/g2kvMw2msxEUb/Zoowh2FQO9kk+4oJ/ml0jL2b2huAh1DdsGZ/FFgD3PrPwjHnAo
	 IJXhHP7zS+MSMgxNUsWT0xcDuAIwgTVmJAh1a3T1Ku5AkHlweWQbmiX4xfPe9WT8SK
	 qgO372X/XdUtw==
Date: Tue, 29 Jun 2021 17:49:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: bootfdt: Always sort memory banks
In-Reply-To: <1623699267-9475-1-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2106291747180.9437@sstabellini-ThinkPad-T480s>
References: <1623699267-9475-1-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 14 Jun 2021, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> At the moment, Xen expects the memory banks to be ordered.
> Unfortunately, there may be a case when a device tree updated by
> firmware contains unordered banks. This means Xen will panic when
> setting up xenheap mappings for a subsequent bank whose start
> address is less than xenheap_mfn_start (the start address of the
> first bank).
> 
> As there is no clear requirement regarding ordering in the device
> tree, update the code to deal with this by sorting the memory
> banks when there is more than one.
> 
> Suggested-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> The proposed commit fixes the booting Xen on R-Car M3-W+ SoC:
> 
> Starting kernel ...
> - UART enabled -
> - Boot CPU booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Zero BSS -
> - Ready -
> (XEN) Checking for initrd in /chosen
> (XEN) Initrd 0000000084000040-0000000085dbc32a
> (XEN) RAM: 0000000480000000 - 00000004ffffffff
> (XEN) RAM: 0000000048000000 - 00000000bfffffff
> (XEN) RAM: 0000000600000000 - 00000006ffffffff
> 
> ...
> 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) cannot add xenheap mapping at 48000 below heap start 480000
> (XEN) ****************************************
> (XEN) 
> (XEN) Reboot in five seconds...
> ---
>  xen/arch/arm/bootfdt.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index dcff512..3ef63b3 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -13,6 +13,7 @@
>  #include <xen/init.h>
>  #include <xen/device_tree.h>
>  #include <xen/libfdt/libfdt.h>
> +#include <xen/sort.h>
>  #include <xsm/xsm.h>
>  #include <asm/setup.h>
>  
> @@ -395,6 +396,21 @@ static void __init early_print_info(void)
>      printk("\n");
>  }
>  
> +/* This function assumes that memory regions are not overlapped */
> +static int __init cmp_memory_node(const void *key, const void *elem)
> +{
> +    const struct membank *handler0 = key;
> +    const struct membank *handler1 = elem;
> +
> +    if ( handler0->start < handler1->start )
> +        return -1;
> +
> +    if ( handler0->start >= (handler1->start + handler1->size) )
> +        return 1;
> +
> +    return 0;
> +}
> +
>  /**
>   * boot_fdt_info - initialize bootinfo from a DTB
>   * @fdt: flattened device tree binary
> @@ -412,6 +428,12 @@ size_t __init boot_fdt_info(const void *fdt, paddr_t paddr)
>      add_boot_module(BOOTMOD_FDT, paddr, fdt_totalsize(fdt), false);
>  
>      device_tree_for_each_node((void *)fdt, 0, early_scan_node, NULL);
> +    if ( bootinfo.mem.nr_banks > 1 )
> +    {
> +        /* Some DT may describe unordered banks, sort them in ascending order */
> +        sort(bootinfo.mem.bank, bootinfo.mem.nr_banks, sizeof(struct membank),
> +             cmp_memory_node, NULL);
> +    }
>      early_print_info();
>  
>      return fdt_totalsize(fdt);
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 00:50:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 00:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148105.273587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOQw-000756-TA; Wed, 30 Jun 2021 00:50:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148105.273587; Wed, 30 Jun 2021 00:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyOQw-00074z-Pv; Wed, 30 Jun 2021 00:50:30 +0000
Received: by outflank-mailman (input) for mailman id 148105;
 Wed, 30 Jun 2021 00:50:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZaJs=LY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lyOQv-00074p-91
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 00:50:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b3d08f5-9ab7-4d2d-a358-c61c1a1c955d;
 Wed, 30 Jun 2021 00:50:28 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D530161D6E;
 Wed, 30 Jun 2021 00:50:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b3d08f5-9ab7-4d2d-a358-c61c1a1c955d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625014228;
	bh=2T2MfbcJXYl5bhffk3l2gpCiddXKi6U6XNOiSQZoxUg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QIMYj+rXVHw6NRyIWKnBsLZy3k/hL9d/A5bayGegjcdyzqzXa0AW9Vka08oZCdDQz
	 2LwDhtDC18HqFWDIXttBcMkH0XtkVG4y8aKQAJjvBcr1Zh0zo2sTTg0q6alP5Ypz0X
	 3nb32/tz8gF19zSGDwkjWHUDiAPHX0TS5rJmUHx1Wp5CNDonVhOLBXO9DL565BtwZa
	 B8T/Wbf/mKTcBk1+T0kZqOlL1Z732VvsRnb/an+rIto0MSqeCNlGOXG3nvoh+XgomH
	 vKB9tUwmwFbovO90+ea/c5lY8kv/04UTau5758WMZVX9g4gSPSd911KUC2uxoPw5eY
	 ZJbO3rwrgV9lw==
Date: Tue, 29 Jun 2021 17:50:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] iommu/arm: ipmmu-vmsa: Add compatible for Renesas R-Car
 M3-W+ SoC
In-Reply-To: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2106291750160.9437@sstabellini-ThinkPad-T480s>
References: <1623698292-7464-1-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 14 Jun 2021, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The "renesas,r8a77961" string identifies the M3-W+ (aka M3-W ES3.0)
> instead of "renesas,r8a7796", since Linux commit
> "9c9f7891093b02eb64ca4e1c7ab776a4296c058f soc: renesas: Identify R-Car M3-W+".
> Add the new compatible string to the Xen driver.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/drivers/passthrough/arm/ipmmu-vmsa.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> index 8b8e3a0..1255b0d 100644
> --- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> @@ -1316,6 +1316,7 @@ static const struct dt_device_match ipmmu_dt_match[] __initconst =
>      DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7795"),
>      DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77965"),
>      DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a7796"),
> +    DT_MATCH_COMPATIBLE("renesas,ipmmu-r8a77961"),
>      { /* sentinel */ },
>  };
>  
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 02:45:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 02:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148116.273609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyQDt-0001g8-FQ; Wed, 30 Jun 2021 02:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148116.273609; Wed, 30 Jun 2021 02:45:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyQDt-0001g1-Bx; Wed, 30 Jun 2021 02:45:09 +0000
Received: by outflank-mailman (input) for mailman id 148116;
 Wed, 30 Jun 2021 02:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyQDs-0001fr-BB; Wed, 30 Jun 2021 02:45:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyQDs-0002lf-4l; Wed, 30 Jun 2021 02:45:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyQDr-0002HB-P8; Wed, 30 Jun 2021 02:45:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyQDr-00012N-OD; Wed, 30 Jun 2021 02:45:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=13nyOUjuEHkfoZUX7gYgH2p439uZQ4Gm00SLThR1OOc=; b=sIHxnNJb/AjeWYLvUWf5n2Pz+s
	puFeNbdcmSPw5nQyBYIwS7rQwSNqkfDTp9/axkegGLlZxI34rlZSKVTQqSZ2Fnhu01fq9YncZmY8a
	FwMQBJKsujkY9ntSiWgRC7Jh3A5Zj5xAvoQXrkOTkt5N5DG5FIZcu6z5ZJ1tWSqMLvic=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163184-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163184: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f8582da0417660269bec69e399f8667f761e886b
X-Osstest-Versions-That:
    xen=c636a5fe59575d84778f676ca1728fbd1a7c7104
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 02:45:07 +0000

flight 163184 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163184/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163168
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163176
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163176
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163176
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163176
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163176
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163176
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163176
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163176
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163176
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163176
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8582da0417660269bec69e399f8667f761e886b
baseline version:
 xen                  c636a5fe59575d84778f676ca1728fbd1a7c7104

Last test of basis   163176  2021-06-29 02:21:20 Z    1 days
Testing same since   163184  2021-06-29 13:38:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c636a5fe59..f8582da041  f8582da0417660269bec69e399f8667f761e886b -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 04:03:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 04:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148111.273623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyRR7-0000Vr-8V; Wed, 30 Jun 2021 04:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148111.273623; Wed, 30 Jun 2021 04:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyRR7-0000Vk-56; Wed, 30 Jun 2021 04:02:53 +0000
Received: by outflank-mailman (input) for mailman id 148111;
 Wed, 30 Jun 2021 01:43:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rlqy=LY=kernel.org=nathan@srs-us1.protection.inumbo.net>)
 id 1lyPGH-0004aJ-GJ
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 01:43:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d51d27c-1617-4971-9f2e-6c94e24df989;
 Wed, 30 Jun 2021 01:43:32 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 208FA61D81;
 Wed, 30 Jun 2021 01:43:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d51d27c-1617-4971-9f2e-6c94e24df989
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625017411;
	bh=amEkbjsSwgY3M1jUxO/2NmeBoHeUq3+mIk6P11gk4Og=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=ZrP1J2spjwRkKn5ofA3edFhF7yqkdeJUlkVykRTK5gRjacWiRF9eziQMCJ66JvddE
	 xxby/GvXcv7sUEIQqwGmQskTCIuHRtV6yp4SqeoxbU+XKeMa/PPXMw2ZYZWlXwqrPn
	 18zNNvdovs7mX4SlJVXwmtUcPwdXxw6F21SK2cgGq1syXqg+2vrXj5cUoKzY0EYQkw
	 S1+uSmJNPxHigRHFwFOgWU/2keF8Gd4WQ5Ju2MQHn1+qC0peGRZSWjtaNhcEGYqA1x
	 T+yCPUbaw9EY4103T5H0gMQUrS/a+uG4ZuW5g9FchOGBOCI/ew/A6uOTjFezef5blV
	 aEPAuoG9gN7tg==
Date: Tue, 29 Jun 2021 18:43:21 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com, thomas.lendacky@amd.com,
	quic_qiancai@quicinc.com
Subject: Re: [PATCH v15 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain>
References: <20210624155526.2775863-1-tientzu@chromium.org>
 <20210624155526.2775863-7-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="QbssCZQk3Ov6WW0W"
Content-Disposition: inline
In-Reply-To: <20210624155526.2775863-7-tientzu@chromium.org>


--QbssCZQk3Ov6WW0W
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Thu, Jun 24, 2021 at 11:55:20PM +0800, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Will Deacon <will@kernel.org>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

This patch as commit af452ec1b1a3 ("swiotlb: Use is_swiotlb_force_bounce
for swiotlb data bouncing") causes my Ryzen 3 4300G system to fail to
get to an X session consistently (although not every single time),
presumably due to a crash in the AMDGPU driver that I see in dmesg.

I have attached logs at af452ec1b1a3 and f127c9556a8e and I am happy
to provide any further information, debug, or test patches as necessary.

Cheers,
Nathan

--QbssCZQk3Ov6WW0W
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="af452ec1b1a3.log"

-- Journal begins at Mon 2021-06-28 09:22:12 MST, ends at Tue 2021-06-29 18:29:54 MST. --
Jun 29 18:28:41 hp-4300G kernel: Linux version 5.12.0-rc3-00025-gaf452ec1b1a3 (nathan@archlinux-ax161) (gcc (GCC) 11.1.0, GNU ld (GNU Binutils) 2.36.50.20210627) #1 SMP PREEMPT Tue Jun 29 18:25:35 MST 2021
Jun 29 18:28:41 hp-4300G kernel: Command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll
Jun 29 18:28:41 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 29 18:28:41 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 29 18:28:41 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 29 18:28:41 hp-4300G kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jun 29 18:28:41 hp-4300G kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 29 18:28:41 hp-4300G kernel: BIOS-provided physical RAM map:
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x0000000000100000-0x0000000009c0ffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x0000000009c10000-0x0000000009ffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000000a000000-0x000000000a1fffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000000a200000-0x000000000a20cfff] ACPI NVS
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000000a20d000-0x000000000affffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000000b000000-0x000000000b01ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000000b020000-0x00000000b838ffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8390000-0x00000000b86c5fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000b86c6000-0x00000000b8721fff] ACPI data
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8722000-0x00000000b8a14fff] ACPI NVS
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8a15000-0x00000000badfefff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000badff000-0x00000000bbffffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000bc000000-0x00000000bdffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000bf000000-0x00000000bfffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fd200000-0x00000000fd2fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fd600000-0x00000000fd6fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fea00000-0x00000000fea0ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000feb80000-0x00000000fec01fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fec30000-0x00000000fec30fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed40000-0x00000000fed44fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed80000-0x00000000fed8ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fedc2000-0x00000000fedcffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000fedd4000-0x00000000fedd5fff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021f37ffff] usable
Jun 29 18:28:41 hp-4300G kernel: BIOS-e820: [mem 0x000000021f380000-0x000000023fffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: intel_pstate: HWP disabled
Jun 29 18:28:41 hp-4300G kernel: NX (Execute Disable) protection: active
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0xb4c66018-0xb4c73457] usable ==> usable
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0xb4c66018-0xb4c73457] usable ==> usable
Jun 29 18:28:41 hp-4300G kernel: extended physical RAM map:
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000009c0ffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x0000000009c10000-0x0000000009ffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000000a000000-0x000000000a1fffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000000a200000-0x000000000a20cfff] ACPI NVS
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000000a20d000-0x000000000affffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000000b000000-0x000000000b01ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000000b020000-0x00000000b4c66017] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b4c66018-0x00000000b4c73457] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b4c73458-0x00000000b838ffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8390000-0x00000000b86c5fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b86c6000-0x00000000b8721fff] ACPI data
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8722000-0x00000000b8a14fff] ACPI NVS
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8a15000-0x00000000badfefff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000badff000-0x00000000bbffffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000bc000000-0x00000000bdffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000bf000000-0x00000000bfffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fd200000-0x00000000fd2fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fd600000-0x00000000fd6fffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fea00000-0x00000000fea0ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000feb80000-0x00000000fec01fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fec30000-0x00000000fec30fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed40000-0x00000000fed44fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed80000-0x00000000fed8ffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fedc2000-0x00000000fedcffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000fedd4000-0x00000000fedd5fff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x0000000100000000-0x000000021f37ffff] usable
Jun 29 18:28:41 hp-4300G kernel: reserve setup_data: [mem 0x000000021f380000-0x000000023fffffff] reserved
Jun 29 18:28:41 hp-4300G kernel: efi: EFI v2.70 by American Megatrends
Jun 29 18:28:41 hp-4300G kernel: efi: ACPI=0xb8721000 ACPI 2.0=0xb8721014 TPMFinalLog=0xb89c8000 SMBIOS=0xbac0f000 SMBIOS 3.0=0xbac0e000 MEMATTR=0xb5184018 ESRT=0xb6dde918 RNG=0xbac3e998 TPMEventLog=0xb5185018 
Jun 29 18:28:41 hp-4300G kernel: efi: seeding entropy pool
Jun 29 18:28:41 hp-4300G kernel: SMBIOS 3.3.0 present.
Jun 29 18:28:41 hp-4300G kernel: DMI: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
Jun 29 18:28:41 hp-4300G kernel: tsc: Fast TSC calibration using PIT
Jun 29 18:28:41 hp-4300G kernel: tsc: Detected 3793.033 MHz processor
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 29 18:28:41 hp-4300G kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 29 18:28:41 hp-4300G kernel: last_pfn = 0x21f380 max_arch_pfn = 0x400000000
Jun 29 18:28:41 hp-4300G kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0xc0000000-0xffffffff] usable ==> reserved
Jun 29 18:28:41 hp-4300G kernel: last_pfn = 0xbc000 max_arch_pfn = 0x400000000
Jun 29 18:28:41 hp-4300G kernel: esrt: Reserving ESRT space from 0x00000000b6dde918 to 0x00000000b6dde950.
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0xb6dde000-0xb6ddefff] usable ==> reserved
Jun 29 18:28:41 hp-4300G kernel: check: Scanning 1 areas for low memory corruption
Jun 29 18:28:41 hp-4300G kernel: Using GB pages for direct mapping
Jun 29 18:28:41 hp-4300G kernel: Secure boot disabled
Jun 29 18:28:41 hp-4300G kernel: RAMDISK: [mem 0x7f859000-0x7fff5fff]
Jun 29 18:28:41 hp-4300G kernel: ACPI: Early table checksum verification disabled
Jun 29 18:28:41 hp-4300G kernel: ACPI: RSDP 0x00000000B8721014 000024 (v02 HPQOEM)
Jun 29 18:28:41 hp-4300G kernel: ACPI: XSDT 0x00000000B8720728 0000EC (v01 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: FACP 0x00000000B870F000 000114 (v06 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: DSDT 0x00000000B86FE000 01050C (v02 HPQOEM SLIC-CPC 01072009 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: FACS 0x00000000B89F8000 000040
Jun 29 18:28:41 hp-4300G kernel: ACPI: MSDM 0x00000000B871F000 000055 (v03 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B871E000 000050 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: IVRS 0x00000000B871D000 0000D0 (v02 HPQOEM SLIC-CPC 00000001 AMD  00000000)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B8715000 007229 (v02 HPQOEM SLIC-CPC 00000002 MSFT 04000000)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B8711000 003BA1 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B8710000 000094 (v02 HPQOEM SLIC-CPC 01072009 AMI  01072009)
Jun 29 18:28:41 hp-4300G kernel: ACPI: FIDT 0x00000000B86FD000 00009C (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: MCFG 0x00000000B86FC000 00003C (v01 HPQOEM SLIC-CPC 01072009 MSFT 00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: HPET 0x00000000B86FB000 000038 (v01 HPQOEM SLIC-CPC 01072009 AMI  00000005)
Jun 29 18:28:41 hp-4300G kernel: ACPI: VFCT 0x00000000B86ED000 00D484 (v01 HPQOEM SLIC-CPC 00000001 AMD  31504F47)
Jun 29 18:28:41 hp-4300G kernel: ACPI: BGRT 0x00000000B86EC000 000038 (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: TPM2 0x00000000B86EB000 00004C (v04 HPQOEM SLIC-CPC 00000001 AMI  00000000)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86E9000 001CE4 (v02 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:28:41 hp-4300G kernel: ACPI: CRAT 0x00000000B86E8000 0007E8 (v01 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:28:41 hp-4300G kernel: ACPI: CDIT 0x00000000B86E7000 000029 (v01 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86E6000 000D37 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86E4000 0010A5 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86E0000 00333E (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86DF000 0000BF (v01 HPQOEM SLIC-CPC 00001000 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: WSMT 0x00000000B86DE000 000028 (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: APIC 0x00000000B86DD000 00015E (v03 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86DC000 000517 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: SSDT 0x00000000B86DA000 0010AF (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:28:41 hp-4300G kernel: ACPI: FPDT 0x00000000B86D9000 000044 (v01 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Local APIC address 0xfee00000
Jun 29 18:28:41 hp-4300G kernel: No NUMA configuration found
Jun 29 18:28:41 hp-4300G kernel: Faking a node at [mem 0x0000000000000000-0x000000021f37ffff]
Jun 29 18:28:41 hp-4300G kernel: NODE_DATA(0) allocated [mem 0x21f37c000-0x21f37ffff]
Jun 29 18:28:41 hp-4300G kernel: Zone ranges:
Jun 29 18:28:41 hp-4300G kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jun 29 18:28:41 hp-4300G kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jun 29 18:28:41 hp-4300G kernel:   Normal   [mem 0x0000000100000000-0x000000021f37ffff]
Jun 29 18:28:41 hp-4300G kernel:   Device   empty
Jun 29 18:28:41 hp-4300G kernel: Movable zone start for each node
Jun 29 18:28:41 hp-4300G kernel: Early memory node ranges
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x0000000000100000-0x0000000009c0ffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x000000000a000000-0x000000000a1fffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x000000000a20d000-0x000000000affffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x000000000b020000-0x00000000b838ffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x00000000badff000-0x00000000bbffffff]
Jun 29 18:28:41 hp-4300G kernel:   node   0: [mem 0x0000000100000000-0x000000021f37ffff]
Jun 29 18:28:41 hp-4300G kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021f37ffff]
Jun 29 18:28:41 hp-4300G kernel: On node 0 totalpages: 1934483
Jun 29 18:28:41 hp-4300G kernel:   DMA zone: 64 pages used for memmap
Jun 29 18:28:41 hp-4300G kernel:   DMA zone: 26 pages reserved
Jun 29 18:28:41 hp-4300G kernel:   DMA zone: 3999 pages, LIFO batch:0
Jun 29 18:28:41 hp-4300G kernel:   DMA zone: 28769 pages in unavailable ranges
Jun 29 18:28:41 hp-4300G kernel:   DMA32 zone: 11782 pages used for memmap
Jun 29 18:28:41 hp-4300G kernel:   DMA32 zone: 754036 pages, LIFO batch:63
Jun 29 18:28:41 hp-4300G kernel:   DMA32 zone: 28300 pages in unavailable ranges
Jun 29 18:28:41 hp-4300G kernel:   Normal zone: 18382 pages used for memmap
Jun 29 18:28:41 hp-4300G kernel:   Normal zone: 1176448 pages, LIFO batch:63
Jun 29 18:28:41 hp-4300G kernel:   Normal zone: 3200 pages in unavailable ranges
Jun 29 18:28:41 hp-4300G kernel: ACPI: PM-Timer IO Port: 0x808
Jun 29 18:28:41 hp-4300G kernel: ACPI: Local APIC address 0xfee00000
Jun 29 18:28:41 hp-4300G kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Jun 29 18:28:41 hp-4300G kernel: IOAPIC[0]: apic_id 9, version 33, address 0xfec00000, GSI 0-23
Jun 29 18:28:41 hp-4300G kernel: IOAPIC[1]: apic_id 10, version 33, address 0xfec01000, GSI 24-55
Jun 29 18:28:41 hp-4300G kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 29 18:28:41 hp-4300G kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
Jun 29 18:28:41 hp-4300G kernel: ACPI: IRQ0 used by override.
Jun 29 18:28:41 hp-4300G kernel: ACPI: IRQ9 used by override.
Jun 29 18:28:41 hp-4300G kernel: Using ACPI (MADT) for SMP configuration information
Jun 29 18:28:41 hp-4300G kernel: ACPI: HPET id: 0x10228201 base: 0xfed00000
Jun 29 18:28:41 hp-4300G kernel: e820: update [mem 0xb5158000-0xb517ffff] usable ==> reserved
Jun 29 18:28:41 hp-4300G kernel: smpboot: Allowing 32 CPUs, 24 hotplug CPUs
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x09c10000-0x09ffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x0a200000-0x0a20cfff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x0b000000-0x0b01ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb4c66000-0xb4c66fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb4c73000-0xb4c73fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb5158000-0xb517ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb6dde000-0xb6ddefff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8390000-0xb86c5fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb86c6000-0xb8721fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8722000-0xb8a14fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8a15000-0xbadfefff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbc000000-0xbdffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbe000000-0xbeffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbf000000-0xbfffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xefffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xf0000000-0xf7ffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xf8000000-0xfd1fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd200000-0xfd2fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd300000-0xfd5fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd600000-0xfd6fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd700000-0xfe9fffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfea00000-0xfea0ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfea10000-0xfeb7ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfeb80000-0xfec01fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec02000-0xfec0ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec10000-0xfec10fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec11000-0xfec2ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec30000-0xfec30fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec31000-0xfecfffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed00000-0xfed00fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed01000-0xfed3ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed40000-0xfed44fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed45000-0xfed7ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed80000-0xfed8ffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed90000-0xfedc1fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedc2000-0xfedcffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd0000-0xfedd3fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd4000-0xfedd5fff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd6000-0xfeffffff]
Jun 29 18:28:41 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xffffffff]
Jun 29 18:28:41 hp-4300G kernel: [mem 0xc0000000-0xefffffff] available for PCI devices
Jun 29 18:28:41 hp-4300G kernel: Booting paravirtualized kernel on bare hardware
Jun 29 18:28:41 hp-4300G kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 6370452778343963 ns
Jun 29 18:28:41 hp-4300G kernel: setup_percpu: NR_CPUS:320 nr_cpumask_bits:320 nr_cpu_ids:32 nr_node_ids:1
Jun 29 18:28:41 hp-4300G kernel: percpu: Embedded 56 pages/cpu s192512 r8192 d28672 u262144
Jun 29 18:28:41 hp-4300G kernel: pcpu-alloc: s192512 r8192 d28672 u262144 alloc=1*2097152
Jun 29 18:28:41 hp-4300G kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
Jun 29 18:28:41 hp-4300G kernel: pcpu-alloc: [0] 16 17 18 19 20 21 22 23 [0] 24 25 26 27 28 29 30 31 
Jun 29 18:28:41 hp-4300G kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1904229
Jun 29 18:28:41 hp-4300G kernel: Policy zone: Normal
Jun 29 18:28:41 hp-4300G kernel: Kernel command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll
Jun 29 18:28:41 hp-4300G kernel: Misrouted IRQ fixup and polling support enabled
Jun 29 18:28:41 hp-4300G kernel: This may significantly impact system performance
Jun 29 18:28:41 hp-4300G kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jun 29 18:28:41 hp-4300G kernel: printk: log_buf_len total cpu_extra contributions: 126976 bytes
Jun 29 18:28:41 hp-4300G kernel: printk: log_buf_len min size: 131072 bytes
Jun 29 18:28:41 hp-4300G kernel: printk: log_buf_len: 262144 bytes
Jun 29 18:28:41 hp-4300G kernel: printk: early log buf free: 114232(87%)
Jun 29 18:28:41 hp-4300G kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: mem auto-init: stack:off, heap alloc:on, heap free:off
Jun 29 18:28:41 hp-4300G kernel: Memory: 7409704K/7737932K available (14344K kernel code, 2035K rwdata, 4856K rodata, 1648K init, 4340K bss, 327968K reserved, 0K cma-reserved)
Jun 29 18:28:41 hp-4300G kernel: random: get_random_u64 called from __kmem_cache_create+0x2a/0x560 with crng_init=0
Jun 29 18:28:41 hp-4300G kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=32, Nodes=1
Jun 29 18:28:41 hp-4300G kernel: ftrace: allocating 41885 entries in 164 pages
Jun 29 18:28:41 hp-4300G kernel: ftrace: allocated 164 pages with 3 groups
Jun 29 18:28:41 hp-4300G kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 29 18:28:41 hp-4300G kernel: rcu:         RCU dyntick-idle grace-period acceleration is enabled.
Jun 29 18:28:41 hp-4300G kernel: rcu:         RCU restricting CPUs from NR_CPUS=320 to nr_cpu_ids=32.
Jun 29 18:28:41 hp-4300G kernel: rcu:         RCU priority boosting: priority 1 delay 500 ms.
Jun 29 18:28:41 hp-4300G kernel:         Trampoline variant of Tasks RCU enabled.
Jun 29 18:28:41 hp-4300G kernel:         Rude variant of Tasks RCU enabled.
Jun 29 18:28:41 hp-4300G kernel:         Tracing variant of Tasks RCU enabled.
Jun 29 18:28:41 hp-4300G kernel: rcu: RCU calculated value of scheduler-enlistment delay is 30 jiffies.
Jun 29 18:28:41 hp-4300G kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=32
Jun 29 18:28:41 hp-4300G kernel: NR_IRQS: 20736, nr_irqs: 1224, preallocated irqs: 16
Jun 29 18:28:41 hp-4300G kernel: Console: colour dummy device 80x25
Jun 29 18:28:41 hp-4300G kernel: printk: console [tty0] enabled
Jun 29 18:28:41 hp-4300G kernel: ACPI: Core revision 20210105
Jun 29 18:28:41 hp-4300G kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484873504 ns
Jun 29 18:28:41 hp-4300G kernel: APIC: Switch to symmetric I/O mode setup
Jun 29 18:28:41 hp-4300G kernel: Switched APIC routing to physical flat.
Jun 29 18:28:41 hp-4300G kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 29 18:28:41 hp-4300G kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x6d59455212d, max_idle_ns: 881590509164 ns
Jun 29 18:28:41 hp-4300G kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 7589.15 BogoMIPS (lpj=12643443)
Jun 29 18:28:41 hp-4300G kernel: pid_max: default: 32768 minimum: 301
Jun 29 18:28:41 hp-4300G kernel: LSM: Security Framework initializing
Jun 29 18:28:41 hp-4300G kernel: Yama: becoming mindful.
Jun 29 18:28:41 hp-4300G kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 29 18:28:41 hp-4300G kernel: LVT offset 1 assigned for vector 0xf9
Jun 29 18:28:41 hp-4300G kernel: LVT offset 2 assigned for vector 0xf4
Jun 29 18:28:41 hp-4300G kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
Jun 29 18:28:41 hp-4300G kernel: Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0
Jun 29 18:28:41 hp-4300G kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 29 18:28:41 hp-4300G kernel: Spectre V2 : Mitigation: Full AMD retpoline
Jun 29 18:28:41 hp-4300G kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 29 18:28:41 hp-4300G kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 29 18:28:41 hp-4300G kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 29 18:28:41 hp-4300G kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Jun 29 18:28:41 hp-4300G kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jun 29 18:28:41 hp-4300G kernel: Freeing SMP alternatives memory: 36K
Jun 29 18:28:41 hp-4300G kernel: smpboot: CPU0: AMD Ryzen 3 4300G with Radeon Graphics (family: 0x17, model: 0x60, stepping: 0x1)
Jun 29 18:28:41 hp-4300G kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 29 18:28:41 hp-4300G kernel: ... version:                0
Jun 29 18:28:41 hp-4300G kernel: ... bit width:              48
Jun 29 18:28:41 hp-4300G kernel: ... generic registers:      6
Jun 29 18:28:41 hp-4300G kernel: ... value mask:             0000ffffffffffff
Jun 29 18:28:41 hp-4300G kernel: ... max period:             00007fffffffffff
Jun 29 18:28:41 hp-4300G kernel: ... fixed-purpose events:   0
Jun 29 18:28:41 hp-4300G kernel: ... event mask:             000000000000003f
Jun 29 18:28:41 hp-4300G kernel: rcu: Hierarchical SRCU implementation.
Jun 29 18:28:41 hp-4300G kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jun 29 18:28:41 hp-4300G kernel: smp: Bringing up secondary CPUs ...
Jun 29 18:28:41 hp-4300G kernel: x86: Booting SMP configuration:
Jun 29 18:28:41 hp-4300G kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7
Jun 29 18:28:41 hp-4300G kernel: smp: Brought up 1 node, 8 CPUs
Jun 29 18:28:41 hp-4300G kernel: smpboot: Max logical packages: 4
Jun 29 18:28:41 hp-4300G kernel: smpboot: Total of 8 processors activated (60712.21 BogoMIPS)
Jun 29 18:28:41 hp-4300G kernel: devtmpfs: initialized
Jun 29 18:28:41 hp-4300G kernel: x86/mm: Memory block size: 128MB
Jun 29 18:28:41 hp-4300G kernel: PM: Registering ACPI NVS region [mem 0x0a200000-0x0a20cfff] (53248 bytes)
Jun 29 18:28:41 hp-4300G kernel: PM: Registering ACPI NVS region [mem 0xb8722000-0xb8a14fff] (3092480 bytes)
Jun 29 18:28:41 hp-4300G kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 6370867519511994 ns
Jun 29 18:28:41 hp-4300G kernel: futex hash table entries: 8192 (order: 7, 524288 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: pinctrl core: initialized pinctrl subsystem
Jun 29 18:28:41 hp-4300G kernel: PM: RTC time: 01:28:37, date: 2021-06-30
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 16
Jun 29 18:28:41 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jun 29 18:28:41 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 29 18:28:41 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 29 18:28:41 hp-4300G kernel: audit: initializing netlink subsys (disabled)
Jun 29 18:28:41 hp-4300G kernel: audit: type=2000 audit(1625016517.143:1): state=initialized audit_enabled=0 res=1
Jun 29 18:28:41 hp-4300G kernel: thermal_sys: Registered thermal governor 'fair_share'
Jun 29 18:28:41 hp-4300G kernel: thermal_sys: Registered thermal governor 'bang_bang'
Jun 29 18:28:41 hp-4300G kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 29 18:28:41 hp-4300G kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 29 18:28:41 hp-4300G kernel: thermal_sys: Registered thermal governor 'power_allocator'
Jun 29 18:28:41 hp-4300G kernel: cpuidle: using governor ladder
Jun 29 18:28:41 hp-4300G kernel: cpuidle: using governor menu
Jun 29 18:28:41 hp-4300G kernel: ACPI: bus type PCI registered
Jun 29 18:28:41 hp-4300G kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 29 18:28:41 hp-4300G kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jun 29 18:28:41 hp-4300G kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Jun 29 18:28:41 hp-4300G kernel: PCI: Using configuration type 1 for base access
Jun 29 18:28:41 hp-4300G kernel: Kprobes globally optimized
Jun 29 18:28:41 hp-4300G kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jun 29 18:28:41 hp-4300G kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Module Device)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Processor Device)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jun 29 18:28:41 hp-4300G kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jun 29 18:28:41 hp-4300G kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: EC started
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: interrupt blocked
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: Boot DSDT EC used to handle transactions
Jun 29 18:28:41 hp-4300G kernel: ACPI: Interpreter enabled
Jun 29 18:28:41 hp-4300G kernel: ACPI: (supports S0 S3 S4 S5)
Jun 29 18:28:41 hp-4300G kernel: ACPI: Using IOAPIC for interrupt routing
Jun 29 18:28:41 hp-4300G kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 29 18:28:41 hp-4300G kernel: ACPI: Enabled 4 GPEs in block 00 to 1F
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 29 18:28:41 hp-4300G kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jun 29 18:28:41 hp-4300G kernel: acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug AER LTR DPC]
Jun 29 18:28:41 hp-4300G kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jun 29 18:28:41 hp-4300G kernel: acpi PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-7f] only partially covers this bridge
Jun 29 18:28:41 hp-4300G kernel: PCI host bridge to bus 0000:00
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x03af window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7 window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x03b0-0x03df window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfec2ffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0xfee00000-0xffffffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.0: [1022:1630] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: [1022:1631] type 00 class 0x080600
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:01.0: [1022:1632] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.0: [1022:1632] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: [1022:1634] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: [1022:1634] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.0: [1022:1632] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: [1022:1635] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: [1022:1635] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:14.0: [1022:790b] type 00 class 0x0c0500
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:14.3: [1022:790e] type 00 class 0x060100
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.0: [1022:1448] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.1: [1022:1449] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.2: [1022:144a] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.3: [1022:144b] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.4: [1022:144c] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.5: [1022:144d] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.6: [1022:144e] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.7: [1022:144f] type 00 class 0x060000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.0: [1022:43d1] type 00 class 0x0c0330
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfcda0000-0xfcda7fff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: [1022:43c8] type 00 class 0x010601
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: reg 0x24: [mem 0xfcd80000-0xfcd9ffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: reg 0x30: [mem 0xfcd00000-0xfcd7ffff pref]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: [1022:43c6] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: PCI bridge to [bus 01-0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1:   bridge window [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1:   bridge window [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: PME# supported from D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: PCI bridge to [bus 02-0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2:   bridge window [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2:   bridge window [mem 0xfcb00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: PCI bridge to [bus 03]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: PCI bridge to [bus 04]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: PCI bridge to [bus 05]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: PCI bridge to [bus 06]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: PCI bridge to [bus 07]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: PCI bridge to [bus 08]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: [10ec:c821] type 00 class 0x028000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: reg 0x10: [io  0xe000-0xe0ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: reg 0x18: [mem 0xfcc00000-0xfcc0ffff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: supports D1 D2
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: PCI bridge to [bus 09]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0:   bridge window [io  0xe000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0:   bridge window [mem 0xfcc00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: [10ec:8168] type 00 class 0x020000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: reg 0x10: [io  0xd000-0xd0ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: reg 0x18: [mem 0xfcb04000-0xfcb04fff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: reg 0x20: [mem 0xfcb00000-0xfcb03fff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: supports D1 D2
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: PCI bridge to [bus 0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0:   bridge window [io  0xd000-0xdfff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0:   bridge window [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: [1c5c:1339] type 00 class 0x010802
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfcf00000-0xfcf03fff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: supports D1
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D3hot
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: 15.752 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x2 link at 0000:00:02.2 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2:   bridge window [mem 0xfcf00000-0xfcffffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: [1002:1636] type 00 class 0x030000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: reg 0x10: [mem 0xd0000000-0xdfffffff 64bit pref]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: reg 0x18: [mem 0xe0000000-0xe01fffff 64bit pref]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: reg 0x20: [io  0xf000-0xf0ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: reg 0x24: [mem 0xfca00000-0xfca7ffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: BAR 0: assigned to efifb
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: PME# supported from D1 D2 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:08.1 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: [1002:1637] type 00 class 0x040300
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: reg 0x10: [mem 0xfca88000-0xfca8bfff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: PME# supported from D1 D2 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.2: [1022:15df] type 00 class 0x108000
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.2: reg 0x18: [mem 0xfc900000-0xfc9fffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.2: reg 0x24: [mem 0xfca8c000-0xfca8dfff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.2: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.3: [1022:1639] type 00 class 0x0c0330
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.3: reg 0x10: [mem 0xfc800000-0xfc8fffff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.3: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.3: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.4: [1022:1639] type 00 class 0x0c0330
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.4: reg 0x10: [mem 0xfc700000-0xfc7fffff 64bit]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.4: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.4: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.6: [1022:15e3] type 00 class 0x040300
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.6: reg 0x10: [mem 0xfca80000-0xfca87fff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.6: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.6: PME# supported from D0 D3hot D3cold
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: PCI bridge to [bus 0c]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [io  0xf000-0xffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xfc700000-0xfcafffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.0: [1022:7901] type 00 class 0x010601
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.0: reg 0x24: [mem 0xfce01000-0xfce017ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.0: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:08.2 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.1: [1022:7901] type 00 class 0x010601
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.1: reg 0x24: [mem 0xfce00000-0xfce007ff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.1: enabling Extended Tags
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: PCI bridge to [bus 0d]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2:   bridge window [mem 0xfce00000-0xfcefffff]
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: interrupt unblocked
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: event unblocked
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
Jun 29 18:28:41 hp-4300G kernel: ACPI: EC: GPE=0x3
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: Boot DSDT EC initialization complete
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: EC: Used to handle transactions and events
Jun 29 18:28:41 hp-4300G kernel: iommu: Default domain type: Translated 
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: bridge control possible
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: setting as boot device
Jun 29 18:28:41 hp-4300G kernel: vgaarb: loaded
Jun 29 18:28:41 hp-4300G kernel: SCSI subsystem initialized
Jun 29 18:28:41 hp-4300G kernel: libata version 3.00 loaded.
Jun 29 18:28:41 hp-4300G kernel: ACPI: bus type USB registered
Jun 29 18:28:41 hp-4300G kernel: usbcore: registered new interface driver usbfs
Jun 29 18:28:41 hp-4300G kernel: usbcore: registered new interface driver hub
Jun 29 18:28:41 hp-4300G kernel: usbcore: registered new device driver usb
Jun 29 18:28:41 hp-4300G kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 29 18:28:41 hp-4300G kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jun 29 18:28:41 hp-4300G kernel: PTP clock support registered
Jun 29 18:28:41 hp-4300G kernel: EDAC MC: Ver: 3.0.0
Jun 29 18:28:41 hp-4300G kernel: Registered efivars operations
Jun 29 18:28:41 hp-4300G kernel: NetLabel: Initializing
Jun 29 18:28:41 hp-4300G kernel: NetLabel:  domain hash size = 128
Jun 29 18:28:41 hp-4300G kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jun 29 18:28:41 hp-4300G kernel: NetLabel:  unlabeled traffic allowed by default
Jun 29 18:28:41 hp-4300G kernel: PCI: Using ACPI for IRQ routing
Jun 29 18:28:41 hp-4300G kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0x09c10000-0x0bffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0x0a200000-0x0bffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0x0b000000-0x0bffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb4c66018-0xb7ffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb5158000-0xb7ffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb6dde000-0xb7ffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb8390000-0xbbffffff]
Jun 29 18:28:41 hp-4300G kernel: e820: reserve RAM buffer [mem 0x21f380000-0x21fffffff]
Jun 29 18:28:41 hp-4300G kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 29 18:28:41 hp-4300G kernel: hpet0: 3 comparators, 32-bit 14.318180 MHz counter
Jun 29 18:28:41 hp-4300G kernel: clocksource: Switched to clocksource tsc-early
Jun 29 18:28:41 hp-4300G kernel: VFS: Disk quotas dquot_6.6.0
Jun 29 18:28:41 hp-4300G kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 29 18:28:41 hp-4300G kernel: pnp: PnP ACPI init
Jun 29 18:28:41 hp-4300G kernel: system 00:00: [mem 0xf0000000-0xf7ffffff] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
Jun 29 18:28:41 hp-4300G kernel: system 00:01: [mem 0x220000000-0x23fffffff window] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:28:41 hp-4300G kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0b00 (active)
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a00-0x0a0f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a10-0x0a1f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a20-0x0a2f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a30-0x0a3f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a40-0x0a4f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a50-0x0a5f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a60-0x0a6f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a70-0x0a7f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a80-0x0a8f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0a90-0x0b8e] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0aa0-0x0aaf] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0ab0-0x0abf] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0ac0-0x0acf] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: [io  0x0ad0-0x0adf] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:03: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x04d0-0x04d1] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x040b] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x04d6] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c00-0x0c01] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c14] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c50-0x0c51] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c52] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c6c] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0c6f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0cd0-0x0cd1] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0cd2-0x0cd3] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0cd4-0x0cd5] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0cd6-0x0cd7] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0cd8-0x0cdf] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0800-0x089f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0b00-0x0b0f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0b20-0x0b3f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0900-0x090f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [io  0x0910-0x091f] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfec00000-0xfec00fff] could not be reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfec01000-0xfec01fff] could not be reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfedc0000-0xfedc0fff] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfee00000-0xfee00fff] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfed80000-0xfed8ffff] could not be reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xfec10000-0xfec10fff] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: [mem 0xff000000-0xffffffff] has been reserved
Jun 29 18:28:41 hp-4300G kernel: system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:28:41 hp-4300G kernel: pnp: PnP ACPI: found 5 devices
Jun 29 18:28:41 hp-4300G kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 2
Jun 29 18:28:41 hp-4300G kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 29 18:28:41 hp-4300G kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 1
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 44
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: PCI bridge to [bus 03]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: PCI bridge to [bus 04]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: PCI bridge to [bus 05]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: PCI bridge to [bus 06]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: PCI bridge to [bus 07]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: PCI bridge to [bus 08]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: PCI bridge to [bus 09]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0:   bridge window [io  0xe000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0:   bridge window [mem 0xfcc00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: PCI bridge to [bus 0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0:   bridge window [io  0xd000-0xdfff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0:   bridge window [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: PCI bridge to [bus 02-0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2:   bridge window [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2:   bridge window [mem 0xfcb00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: PCI bridge to [bus 01-0a]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1:   bridge window [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1:   bridge window [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2:   bridge window [mem 0xfcf00000-0xfcffffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: PCI bridge to [bus 0c]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [io  0xf000-0xffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xfc700000-0xfcafffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: PCI bridge to [bus 0d]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2:   bridge window [mem 0xfce00000-0xfcefffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x03af window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 5 [io  0x03e0-0x0cf7 window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 6 [io  0x03b0-0x03df window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 8 [mem 0x000a0000-0x000bffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 9 [mem 0x000c0000-0x000dffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 10 [mem 0xc0000000-0xfec2ffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:00: resource 11 [mem 0xfee00000-0xffffffff window]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:01: resource 0 [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:01: resource 1 [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:02: resource 0 [io  0xd000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:02: resource 1 [mem 0xfcb00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:09: resource 0 [io  0xe000-0xefff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:09: resource 1 [mem 0xfcc00000-0xfccfffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0a: resource 0 [io  0xd000-0xdfff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0a: resource 1 [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0b: resource 1 [mem 0xfcf00000-0xfcffffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0c: resource 0 [io  0xf000-0xffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0c: resource 1 [mem 0xfc700000-0xfcafffff]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0c: resource 2 [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:28:41 hp-4300G kernel: pci_bus 0000:0d: resource 1 [mem 0xfce00000-0xfcefffff]
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: D0 power state depends on 0000:0c:00.0
Jun 29 18:28:41 hp-4300G kernel: PCI: CLS 64 bytes, default 64
Jun 29 18:28:41 hp-4300G kernel: Trying to unpack rootfs image as initramfs...
Jun 29 18:28:41 hp-4300G kernel: Freeing initrd memory: 7796K
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Unable to read/write to IOMMU perf counter.
Jun 29 18:28:41 hp-4300G kernel: fbcon: Taking over console
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: can't derive routing for PCI INT A
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: PCI INT A: not connected
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:01.0: Adding to iommu group 0
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.0: Adding to iommu group 1
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.1: Adding to iommu group 2
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:02.2: Adding to iommu group 3
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.0: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.1: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:08.2: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:14.0: Adding to iommu group 5
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:14.3: Adding to iommu group 5
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.0: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.1: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.2: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.3: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.4: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.5: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.6: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:18.7: Adding to iommu group 6
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.1: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:01:00.2: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:00.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:01.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:02.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:03.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:04.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:05.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:06.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:02:07.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:09:00.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0a:00.0: Adding to iommu group 7
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0b:00.0: Adding to iommu group 8
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.0: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.1: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.2: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.3: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.4: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0c:00.6: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.0: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:0d:00.1: Adding to iommu group 4
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
Jun 29 18:28:41 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Extended features (0x206d73ef22254ade):
Jun 29 18:28:41 hp-4300G kernel:  PPR X2APIC NX GT IA GA PC GA_vAPIC
Jun 29 18:28:41 hp-4300G kernel: AMD-Vi: Interrupt remapping enabled
Jun 29 18:28:41 hp-4300G kernel: AMD-Vi: Virtual APIC enabled
Jun 29 18:28:41 hp-4300G kernel: AMD-Vi: X2APIC enabled
Jun 29 18:28:41 hp-4300G kernel: AMD-Vi: Lazy IO/TLB flushing enabled
Jun 29 18:28:41 hp-4300G kernel: amd_uncore: 4  amd_df counters detected
Jun 29 18:28:41 hp-4300G kernel: amd_uncore: 6  amd_l3 counters detected
Jun 29 18:28:41 hp-4300G kernel: LVT offset 0 assigned for vector 0x400
Jun 29 18:28:41 hp-4300G kernel: perf: AMD IBS detected (0x000003ff)
Jun 29 18:28:41 hp-4300G kernel: check: Scanning for low memory corruption every 60 seconds
Jun 29 18:28:41 hp-4300G kernel: Initialise system trusted keyrings
Jun 29 18:28:41 hp-4300G kernel: Key type blacklist registered
Jun 29 18:28:41 hp-4300G kernel: workingset: timestamp_bits=41 max_order=21 bucket_order=0
Jun 29 18:28:41 hp-4300G kernel: zbud: loaded
Jun 29 18:28:41 hp-4300G kernel: Key type asymmetric registered
Jun 29 18:28:41 hp-4300G kernel: Asymmetric key parser 'x509' registered
Jun 29 18:28:41 hp-4300G kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 243)
Jun 29 18:28:41 hp-4300G kernel: io scheduler mq-deadline registered
Jun 29 18:28:41 hp-4300G kernel: io scheduler kyber registered
Jun 29 18:28:41 hp-4300G kernel: io scheduler bfq registered
Jun 29 18:28:41 hp-4300G kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 26
Jun 29 18:28:41 hp-4300G kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 27
Jun 29 18:28:41 hp-4300G kernel: pcieport 0000:00:08.1: PME: Signaling with IRQ 28
Jun 29 18:28:41 hp-4300G kernel: pcieport 0000:00:08.2: PME: Signaling with IRQ 29
Jun 29 18:28:41 hp-4300G kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jun 29 18:28:41 hp-4300G kernel: efifb: probing for efifb
Jun 29 18:28:41 hp-4300G kernel: efifb: framebuffer at 0xd0000000, using 3072k, total 3072k
Jun 29 18:28:41 hp-4300G kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 29 18:28:41 hp-4300G kernel: efifb: scrolling: redraw
Jun 29 18:28:41 hp-4300G kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 29 18:28:41 hp-4300G kernel: Console: switching to colour frame buffer device 128x48
Jun 29 18:28:41 hp-4300G kernel: fb0: EFI VGA frame buffer device
Jun 29 18:28:41 hp-4300G kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 29 18:28:41 hp-4300G kernel: ACPI: button: Power Button [PWRB]
Jun 29 18:28:41 hp-4300G kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jun 29 18:28:41 hp-4300G kernel: ACPI: button: Power Button [PWRF]
Jun 29 18:28:41 hp-4300G kernel: Monitor-Mwait will be used to enter C-1 state
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C000: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C002: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C004: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C006: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C001: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C003: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C005: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: ACPI: \_PR_.C007: Found 3 idle states
Jun 29 18:28:41 hp-4300G kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jun 29 18:28:41 hp-4300G kernel: ACPI: thermal: Thermal Zone [HPTZ] (30 C)
Jun 29 18:28:41 hp-4300G kernel: Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Jun 29 18:28:41 hp-4300G kernel: Non-volatile memory driver v1.3
Jun 29 18:28:41 hp-4300G kernel: AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
Jun 29 18:28:41 hp-4300G kernel: nvme nvme0: pci function 0000:0b:00.0
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:01:00.1: version 3.0
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:01:00.1: enabling device (0100 -> 0102)
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:01:00.1: SSS flag set, parallel bus scan disabled
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:01:00.1: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:01:00.1: flags: 64bit ncq sntf stag pm led clo only pmp pio slum part sxs deso sadm sds apst 
Jun 29 18:28:41 hp-4300G kernel: scsi host0: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host1: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host2: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host3: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host4: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host5: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host6: ahci
Jun 29 18:28:41 hp-4300G kernel: scsi host7: ahci
Jun 29 18:28:41 hp-4300G kernel: ata1: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80100 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata2: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80180 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata3: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80200 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata4: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80280 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata5: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80300 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata6: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80380 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata7: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80400 irq 44
Jun 29 18:28:41 hp-4300G kernel: ata8: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80480 irq 44
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.0: enabling device (0100 -> 0102)
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part 
Jun 29 18:28:41 hp-4300G kernel: scsi host8: ahci
Jun 29 18:28:41 hp-4300G kernel: ata9: SATA max UDMA/133 abar m2048@0xfce01000 port 0xfce01100 irq 46
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.1: enabling device (0100 -> 0102)
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.1: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
Jun 29 18:28:41 hp-4300G kernel: ahci 0000:0d:00.1: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part 
Jun 29 18:28:41 hp-4300G kernel: scsi host9: ahci
Jun 29 18:28:41 hp-4300G kernel: ata10: SATA max UDMA/133 abar m2048@0xfce00000 port 0xfce00100 irq 48
Jun 29 18:28:41 hp-4300G kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jun 29 18:28:41 hp-4300G kernel: ehci-pci: EHCI PCI platform driver
Jun 29 18:28:41 hp-4300G kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jun 29 18:28:41 hp-4300G kernel: ohci-pci: OHCI PCI platform driver
Jun 29 18:28:41 hp-4300G kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jun 29 18:28:41 hp-4300G kernel: usbcore: registered new interface driver usbserial_generic
Jun 29 18:28:41 hp-4300G kernel: usbserial: USB Serial support registered for generic
Jun 29 18:28:41 hp-4300G kernel: rtc_cmos 00:02: RTC can wake from S4
Jun 29 18:28:41 hp-4300G kernel: rtc_cmos 00:02: registered as rtc0
Jun 29 18:28:41 hp-4300G kernel: rtc_cmos 00:02: setting system clock to 2021-06-30T01:28:38 UTC (1625016518)
Jun 29 18:28:41 hp-4300G kernel: rtc_cmos 00:02: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
Jun 29 18:28:41 hp-4300G kernel: ledtrig-cpu: registered to indicate activity on CPUs
Jun 29 18:28:41 hp-4300G kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 29 18:28:41 hp-4300G kernel: drop_monitor: Initializing network drop monitor service
Jun 29 18:28:41 hp-4300G kernel: Initializing XFRM netlink socket
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 10
Jun 29 18:28:41 hp-4300G kernel: nvme nvme0: missing or invalid SUBNQN field.
Jun 29 18:28:41 hp-4300G kernel: Segment Routing with IPv6
Jun 29 18:28:41 hp-4300G kernel: RPL Segment Routing with IPv6
Jun 29 18:28:41 hp-4300G kernel: NET: Registered protocol family 17
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU0: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU1: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU2: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU3: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU4: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU5: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU6: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: CPU7: patch_level=0x08600106
Jun 29 18:28:41 hp-4300G kernel: microcode: Microcode Update Driver: v2.2.
Jun 29 18:28:41 hp-4300G kernel: resctrl: L3 allocation detected
Jun 29 18:28:41 hp-4300G kernel: resctrl: L3DATA allocation detected
Jun 29 18:28:41 hp-4300G kernel: resctrl: L3CODE allocation detected
Jun 29 18:28:41 hp-4300G kernel: resctrl: MB allocation detected
Jun 29 18:28:41 hp-4300G kernel: resctrl: L3 monitoring detected
Jun 29 18:28:41 hp-4300G kernel: IPI shorthand broadcast: enabled
Jun 29 18:28:41 hp-4300G kernel: sched_clock: Marking stable (480878644, 416905)->(484173894, -2878345)
Jun 29 18:28:41 hp-4300G kernel: registered taskstats version 1
Jun 29 18:28:41 hp-4300G kernel: Loading compiled-in X.509 certificates
Jun 29 18:28:41 hp-4300G kernel: Loaded X.509 cert 'Build time autogenerated kernel key: 344a5c3e232222f5edf962ab341f037cc1f1c148'
Jun 29 18:28:41 hp-4300G kernel: nvme nvme0: 16/0/0 default/read/poll queues
Jun 29 18:28:41 hp-4300G kernel: zswap: loaded using pool lz4/z3fold
Jun 29 18:28:41 hp-4300G kernel: Key type ._fscrypt registered
Jun 29 18:28:41 hp-4300G kernel: Key type .fscrypt registered
Jun 29 18:28:41 hp-4300G kernel: Key type fscrypt-provisioning registered
Jun 29 18:28:41 hp-4300G kernel: PM:   Magic number: 9:284:459
Jun 29 18:28:41 hp-4300G kernel: RAS: Correctable Errors collector initialized.
Jun 29 18:28:41 hp-4300G kernel:  nvme0n1: p1 p2
Jun 29 18:28:41 hp-4300G kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata9: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata10: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: tsc: Refined TSC clocksource calibration: 3819.679 MHz
Jun 29 18:28:41 hp-4300G kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6e1dec388f1, max_idle_ns: 881590444886 ns
Jun 29 18:28:41 hp-4300G kernel: clocksource: Switched to clocksource tsc
Jun 29 18:28:41 hp-4300G kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: ata8: SATA link down (SStatus 0 SControl 300)
Jun 29 18:28:41 hp-4300G kernel: Freeing unused decrypted memory: 2036K
Jun 29 18:28:41 hp-4300G kernel: Freeing unused kernel image (initmem) memory: 1648K
Jun 29 18:28:41 hp-4300G kernel: Write protecting the kernel read-only data: 22528k
Jun 29 18:28:41 hp-4300G kernel: Freeing unused kernel image (text/rodata gap) memory: 2036K
Jun 29 18:28:41 hp-4300G kernel: Freeing unused kernel image (rodata/data gap) memory: 1288K
Jun 29 18:28:41 hp-4300G kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jun 29 18:28:41 hp-4300G kernel: rodata_test: all tests were successful
Jun 29 18:28:41 hp-4300G kernel: Run /init as init process
Jun 29 18:28:41 hp-4300G kernel:   with arguments:
Jun 29 18:28:41 hp-4300G kernel:     /init
Jun 29 18:28:41 hp-4300G kernel:   with environment:
Jun 29 18:28:41 hp-4300G kernel:     HOME=/
Jun 29 18:28:41 hp-4300G kernel:     TERM=linux
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 1
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: hcc params 0x0200ef81 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:28:41 hp-4300G kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb1: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb1: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb1: SerialNumber: 0000:01:00.0
Jun 29 18:28:41 hp-4300G kernel: hub 1-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 1-0:1.0: 14 ports detected
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 2
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:01:00.0: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:28:41 hp-4300G kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:28:41 hp-4300G kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb2: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb2: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb2: SerialNumber: 0000:01:00.0
Jun 29 18:28:41 hp-4300G kernel: hub 2-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 2-0:1.0: 8 ports detected
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: new USB bus registered, assigned bus number 3
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: hcc params 0x0268ffe5 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:28:41 hp-4300G kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb3: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb3: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb3: SerialNumber: 0000:0c:00.3
Jun 29 18:28:41 hp-4300G kernel: hub 3-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 3-0:1.0: 4 ports detected
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: new USB bus registered, assigned bus number 4
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.3: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:28:41 hp-4300G kernel: usb usb4: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:28:41 hp-4300G kernel: usb usb4: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb4: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb4: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb4: SerialNumber: 0000:0c:00.3
Jun 29 18:28:41 hp-4300G kernel: hub 4-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 4-0:1.0: 2 ports detected
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: new USB bus registered, assigned bus number 5
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: hcc params 0x0268ffe5 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:28:41 hp-4300G kernel: usb usb5: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb5: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb5: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb5: SerialNumber: 0000:0c:00.4
Jun 29 18:28:41 hp-4300G kernel: hub 5-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 5-0:1.0: 4 ports detected
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: new USB bus registered, assigned bus number 6
Jun 29 18:28:41 hp-4300G kernel: xhci_hcd 0000:0c:00.4: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:28:41 hp-4300G kernel: usb usb6: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:28:41 hp-4300G kernel: usb usb6: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:28:41 hp-4300G kernel: usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:28:41 hp-4300G kernel: usb usb6: Product: xHCI Host Controller
Jun 29 18:28:41 hp-4300G kernel: usb usb6: Manufacturer: Linux 5.12.0-rc3-00025-gaf452ec1b1a3 xhci-hcd
Jun 29 18:28:41 hp-4300G kernel: usb usb6: SerialNumber: 0000:0c:00.4
Jun 29 18:28:41 hp-4300G kernel: hub 6-0:1.0: USB hub found
Jun 29 18:28:41 hp-4300G kernel: hub 6-0:1.0: 2 ports detected
Jun 29 18:28:41 hp-4300G kernel: SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled
Jun 29 18:28:41 hp-4300G kernel: XFS (nvme0n1p2): Mounting V5 Filesystem
Jun 29 18:28:41 hp-4300G kernel: XFS (nvme0n1p2): Ending clean mount
Jun 29 18:28:41 hp-4300G kernel: xfs filesystem being mounted at /new_root supports timestamps until 2038 (0x7fffffff)
Jun 29 18:28:41 hp-4300G kernel: random: fast init done
Jun 29 18:28:41 hp-4300G kernel: random: crng init done
Jun 29 18:28:41 hp-4300G systemd[1]: Successfully credited entropy passed from boot loader.
Jun 29 18:28:41 hp-4300G systemd[1]: systemd 248.3-2-arch running in system mode. (+PAM +AUDIT -SELINUX -APPARMOR -IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 29 18:28:41 hp-4300G systemd[1]: Detected architecture x86-64.
Jun 29 18:28:41 hp-4300G systemd[1]: Hostname set to <hp-4300G>.
Jun 29 18:28:41 hp-4300G systemd-fstab-generator[247]: Mount point  is not a valid path, ignoring.
Jun 29 18:28:41 hp-4300G systemd-fstab-generator[247]: Mount point  is not a valid path, ignoring.
Jun 29 18:28:41 hp-4300G systemd[1]: Queued start job for default target Graphical Interface.
Jun 29 18:28:41 hp-4300G systemd[1]: Created slice system-getty.slice.
Jun 29 18:28:41 hp-4300G systemd[1]: Created slice system-modprobe.slice.
Jun 29 18:28:41 hp-4300G systemd[1]: Created slice User and Session Slice.
Jun 29 18:28:41 hp-4300G systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jun 29 18:28:41 hp-4300G systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jun 29 18:28:41 hp-4300G systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Local Encrypted Volumes.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Login Prompts.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Paths.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Remote File Systems.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Slices.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Swap.
Jun 29 18:28:41 hp-4300G systemd[1]: Reached target Local Verity Integrity Protected Volumes.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Process Core Dump Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Journal Audit Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Journal Socket (/dev/log).
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Journal Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on Network Service Netlink Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on udev Control Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Listening on udev Kernel Socket.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting Huge Pages File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting POSIX Message Queue File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting Kernel Debug File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting Kernel Trace File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Create list of static device nodes for the current kernel...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Load Kernel Module configfs...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Load Kernel Module drm...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Load Kernel Module fuse...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Set Up Additional Binary Formats...
Jun 29 18:28:41 hp-4300G systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jun 29 18:28:41 hp-4300G kernel: Linux agpgart interface v0.103
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Journal Service...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Load Kernel Modules...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Remount Root and Kernel File Systems...
Jun 29 18:28:41 hp-4300G systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jun 29 18:28:41 hp-4300G kernel: fuse: init (API version 7.33)
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Coldplug All udev Devices...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted Huge Pages File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted POSIX Message Queue File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted Kernel Debug File System.
Jun 29 18:28:41 hp-4300G kernel: Asymmetric key parser 'pkcs8' registered
Jun 29 18:28:41 hp-4300G kernel: XFS: attr2 mount option is deprecated.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted Kernel Trace File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Create list of static device nodes for the current kernel.
Jun 29 18:28:41 hp-4300G kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.439:2): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Load Kernel Module configfs.
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.439:3): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G kernel: audit: type=1131 audit(1625016521.439:4): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Load Kernel Module fuse.
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.443:5): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G kernel: audit: type=1131 audit(1625016521.443:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Load Kernel Modules.
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.443:7): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Remount Root and Kernel File Systems.
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.443:8): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 263 (systemd-binfmt)
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting FUSE Control File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Mounting Kernel Configuration File System...
Jun 29 18:28:41 hp-4300G systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jun 29 18:28:41 hp-4300G systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Load/Save Random Seed...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Apply Kernel Variables...
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Create System Users...
Jun 29 18:28:41 hp-4300G systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Load Kernel Module drm.
Jun 29 18:28:41 hp-4300G kernel: audit: type=1130 audit(1625016521.453:9): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G kernel: audit: type=1131 audit(1625016521.453:10): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted FUSE Control File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Mounted Kernel Configuration File System.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Load/Save Random Seed.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Set Up Additional Binary Formats.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Apply Kernel Variables.
Jun 29 18:28:41 hp-4300G systemd[1]: Condition check resulted in First Boot Complete being skipped.
Jun 29 18:28:41 hp-4300G systemd[1]: Finished Create System Users.
Jun 29 18:28:41 hp-4300G systemd[1]: Starting Create Static Device Nodes in /dev...
Jun 29 18:28:41 hp-4300G systemd[1]: Started Journal Service.
Jun 29 18:28:41 hp-4300G kernel: usb 1-11: new full-speed USB device number 2 using xhci_hcd
Jun 29 18:28:41 hp-4300G kernel: acpi_cpufreq: overriding BIOS provided _PSD data
Jun 29 18:28:41 hp-4300G kernel: acpi-tad ACPI000E:00: Missing _PRW
Jun 29 18:28:41 hp-4300G kernel: ACPI: video: Video Device [VGA1] (multi-head: yes  rom: no  post: no)
Jun 29 18:28:41 hp-4300G kernel: acpi device:1e: registered as cooling_device8
Jun 29 18:28:41 hp-4300G kernel: input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:1d/LNXVIDEO:01/input/input2
Jun 29 18:28:41 hp-4300G kernel: acpi PNP0C14:01: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
Jun 29 18:28:41 hp-4300G kernel: piix4_smbus 0000:00:14.0: SMBus Host Controller at 0xb00, revision 0
Jun 29 18:28:41 hp-4300G kernel: piix4_smbus 0000:00:14.0: Using register 0x02 for SMBus port selection
Jun 29 18:28:41 hp-4300G kernel: piix4_smbus 0000:00:14.0: Auxiliary SMBus Host Controller at 0xb20
Jun 29 18:28:41 hp-4300G kernel: ccp 0000:0c:00.2: enabling device (0100 -> 0102)
Jun 29 18:28:41 hp-4300G kernel: ccp 0000:0c:00.2: ccp: unable to access the device: you might be running a broken BIOS.
Jun 29 18:28:41 hp-4300G kernel: input: PC Speaker as /devices/platform/pcspkr/input/input3
Jun 29 18:28:41 hp-4300G kernel: ccp 0000:0c:00.2: tee enabled
Jun 29 18:28:41 hp-4300G kernel: ccp 0000:0c:00.2: psp enabled
Jun 29 18:28:41 hp-4300G kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jun 29 18:28:41 hp-4300G kernel: FAT-fs (nvme0n1p1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Jun 29 18:28:41 hp-4300G kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jun 29 18:28:41 hp-4300G kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jun 29 18:28:41 hp-4300G kernel: cfg80211: failed to load regulatory.db
Jun 29 18:28:41 hp-4300G kernel: usb 1-11: New USB device found, idVendor=046d, idProduct=c534, bcdDevice=29.01
Jun 29 18:28:41 hp-4300G kernel: usb 1-11: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jun 29 18:28:41 hp-4300G kernel: usb 1-11: Product: USB Receiver
Jun 29 18:28:41 hp-4300G kernel: usb 1-11: Manufacturer: Logitech
Jun 29 18:28:41 hp-4300G kernel: sp5100_tco: SP5100/SB800 TCO WatchDog Timer Driver
Jun 29 18:28:41 hp-4300G kernel: sp5100-tco sp5100-tco: Using 0xfeb00000 for watchdog MMIO address
Jun 29 18:28:41 hp-4300G kernel: sp5100-tco sp5100-tco: initialized. heartbeat=60 sec (nowayout=0)
Jun 29 18:28:41 hp-4300G kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 163840 ms ovfl timer
Jun 29 18:28:41 hp-4300G kernel: RAPL PMU: hw unit of domain package 2^-16 Joules
Jun 29 18:28:41 hp-4300G kernel: cryptd: max_cpu_qlen set to 1000
Jun 29 18:28:41 hp-4300G kernel: AVX2 version of gcm_enc/dec engaged.
Jun 29 18:28:41 hp-4300G kernel: AES CTR mode by8 optimization enabled
Jun 29 18:28:41 hp-4300G kernel: libphy: r8169: probed
Jun 29 18:28:41 hp-4300G kernel: r8169 0000:0a:00.0 eth0: RTL8168h/8111h, 00:68:eb:ad:98:43, XID 541, IRQ 91
Jun 29 18:28:41 hp-4300G kernel: r8169 0000:0a:00.0 eth0: jumbo features [frames: 9194 bytes, tx checksumming: ko]
Jun 29 18:28:41 hp-4300G kernel: snd_hda_intel 0000:0c:00.1: enabling device (0100 -> 0102)
Jun 29 18:28:41 hp-4300G kernel: snd_hda_intel 0000:0c:00.1: Handle vga_switcheroo audio client
Jun 29 18:28:41 hp-4300G kernel: snd_hda_intel 0000:0c:00.6: enabling device (0100 -> 0102)
Jun 29 18:28:42 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: renamed from eth0
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: new full-speed USB device number 3 using xhci_hcd
Jun 29 18:28:42 hp-4300G kernel: rtw_8821ce 0000:09:00.0: enabling device (0100 -> 0103)
Jun 29 18:28:42 hp-4300G kernel: rtw_8821ce 0000:09:00.0: Firmware version 24.8.0, H2C version 12
Jun 29 18:28:42 hp-4300G kernel: input: HD-Audio Generic HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:08.1/0000:0c:00.1/sound/card0/input5
Jun 29 18:28:42 hp-4300G kernel: input: HP WMI hotkeys as /devices/virtual/input/input4
Jun 29 18:28:42 hp-4300G kernel: Generic FE-GE Realtek PHY r8169-a00:00: attached PHY driver (mii_bus:phy_addr=r8169-a00:00, irq=MAC)
Jun 29 18:28:42 hp-4300G kernel: [drm] amdgpu kernel modesetting enabled.
Jun 29 18:28:42 hp-4300G kernel: Virtual CRAT table created for CPU
Jun 29 18:28:42 hp-4300G kernel: amdgpu: Topology: Add CPU node
Jun 29 18:28:42 hp-4300G kernel: checking generic (d0000000 300000) vs hw (d0000000 10000000)
Jun 29 18:28:42 hp-4300G kernel: fb0: switching to amdgpudrmfb from EFI VGA
Jun 29 18:28:42 hp-4300G kernel: Console: switching to colour dummy device 80x25
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: vgaarb: deactivate vga console
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: enabling device (0106 -> 0107)
Jun 29 18:28:42 hp-4300G kernel: [drm] initializing kernel modesetting (RENOIR 0x1002:0x1636 0x103C:0x87D6 0xCA).
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: Trusted Memory Zone (TMZ) feature disabled as experimental (default)
Jun 29 18:28:42 hp-4300G kernel: [drm] register mmio base: 0xFCA00000
Jun 29 18:28:42 hp-4300G kernel: [drm] register mmio size: 524288
Jun 29 18:28:42 hp-4300G kernel: [drm] PCIE atomic ops is not supported
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0: autoconfig for ALC671: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:line
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    hp_outs=1 (0x21/0x0/0x0/0x0/0x0)
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    mono: mono_out=0x0
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    inputs:
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:      Mic=0x19
Jun 29 18:28:42 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:      Line=0x1b
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 0 <soc15_common>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 1 <gmc_v9_0>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 2 <vega10_ih>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 3 <psp>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 4 <smu>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 5 <gfx_v9_0>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 6 <sdma_v4_0>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 7 <dm>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 8 <vcn_v2_0>
Jun 29 18:28:42 hp-4300G kernel: [drm] add ip block number 9 <jpeg_v2_0>
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: Fetched VBIOS from VFCT
Jun 29 18:28:42 hp-4300G kernel: amdgpu: ATOM BIOS: 113-RENOIR-026
Jun 29 18:28:42 hp-4300G kernel: [drm] VCN decode is enabled in VM mode
Jun 29 18:28:42 hp-4300G kernel: [drm] VCN encode is enabled in VM mode
Jun 29 18:28:42 hp-4300G kernel: [drm] JPEG decode is enabled in VM mode
Jun 29 18:28:42 hp-4300G kernel: [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: VRAM: 512M 0x000000F400000000 - 0x000000F41FFFFFFF (512M used)
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: GART: 1024M 0x0000000000000000 - 0x000000003FFFFFFF
Jun 29 18:28:42 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: AGP: 267419648M 0x000000F800000000 - 0x0000FFFFFFFFFFFF
Jun 29 18:28:42 hp-4300G kernel: [drm] Detected VRAM RAM=512M, BAR=512M
Jun 29 18:28:42 hp-4300G kernel: [drm] RAM width 64bits DDR4
Jun 29 18:28:42 hp-4300G kernel: [TTM] Zone  kernel: Available graphics memory: 3750314 KiB
Jun 29 18:28:42 hp-4300G kernel: [TTM] Zone   dma32: Available graphics memory: 2097152 KiB
Jun 29 18:28:42 hp-4300G kernel: [drm] amdgpu: 512M of VRAM memory ready
Jun 29 18:28:42 hp-4300G kernel: [drm] amdgpu: 3072M of GTT memory ready.
Jun 29 18:28:42 hp-4300G kernel: BUG: unable to handle page fault for address: 00000000003a8290
Jun 29 18:28:42 hp-4300G kernel: #PF: supervisor write access in kernel mode
Jun 29 18:28:42 hp-4300G kernel: #PF: error_code(0x0002) - not-present page
Jun 29 18:28:42 hp-4300G kernel: PGD 0 P4D 0 
Jun 29 18:28:42 hp-4300G kernel: Oops: 0002 [#1] PREEMPT SMP NOPTI
Jun 29 18:28:42 hp-4300G kernel: CPU: 2 PID: 296 Comm: systemd-udevd Not tainted 5.12.0-rc3-00025-gaf452ec1b1a3 #1
Jun 29 18:28:42 hp-4300G kernel: Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
Jun 29 18:28:42 hp-4300G kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x1c2/0x200
Jun 29 18:28:42 hp-4300G kernel: Code: ff ff f3 90 8b 02 85 c0 74 ee eb f6 c1 ef 12 83 e0 03 83 ef 01 48 c1 e0 05 48 63 ff 48 05 80 d5 02 00 48 03 04 fd 00 69 e3 a9 <48> 89 08 8b 41 08 85 c0 75 09 f3 90 8b 41 08 85 c0 74 f7 48 8b 39
Jun 29 18:28:42 hp-4300G kernel: RSP: 0018:ffffadb4013db9e8 EFLAGS: 00010006
Jun 29 18:28:42 hp-4300G kernel: RAX: 00000000003a8290 RBX: 0000000000000000 RCX: ffff8900572ad580
Jun 29 18:28:42 hp-4300G kernel: RDX: ffff89005653f024 RSI: 00000000000c0000 RDI: 0000000000001d17
Jun 29 18:28:42 hp-4300G kernel: RBP: 000000000a20d000 R08: 00000000000c0000 R09: 0000000000000000
Jun 29 18:28:42 hp-4300G kernel: R10: 000000000a20d000 R11: ffff89005653f000 R12: 0000000000000212
Jun 29 18:28:42 hp-4300G kernel: R13: 0000000000001000 R14: 0000000000000002 R15: 0000000000200000
Jun 29 18:28:42 hp-4300G kernel: FS:  00007f1f8898ea40(0000) GS:ffff890057280000(0000) knlGS:0000000000000000
Jun 29 18:28:42 hp-4300G kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 29 18:28:42 hp-4300G kernel: CR2: 00000000003a8290 CR3: 00000001020d0000 CR4: 0000000000350ee0
Jun 29 18:28:42 hp-4300G kernel: Call Trace:
Jun 29 18:28:42 hp-4300G kernel:  _raw_spin_lock_irqsave+0x39/0x50
Jun 29 18:28:42 hp-4300G kernel:  swiotlb_tbl_map_single+0x12b/0x4c0
Jun 29 18:28:42 hp-4300G kernel:  ? sysvec_call_function+0xb/0x90
Jun 29 18:28:42 hp-4300G kernel:  ? asm_sysvec_call_function+0x12/0x20
Jun 29 18:28:42 hp-4300G kernel:  swiotlb_map+0x5d/0x260
Jun 29 18:28:42 hp-4300G kernel:  dma_map_page_attrs+0x151/0x220
Jun 29 18:28:42 hp-4300G kernel:  amdgpu_gart_init+0x43/0x90 [amdgpu]
Jun 29 18:28:42 hp-4300G kernel:  gmc_v9_0_sw_init+0x51c/0x560 [amdgpu]
Jun 29 18:28:42 hp-4300G kernel:  amdgpu_device_init.cold+0x120e/0x19d7 [amdgpu]
Jun 29 18:28:42 hp-4300G kernel:  ? pci_conf1_read+0x9b/0xf0
Jun 29 18:28:42 hp-4300G kernel:  ? pci_bus_read_config_word+0x49/0x70
Jun 29 18:28:42 hp-4300G kernel:  amdgpu_driver_load_kms+0x65/0x260 [amdgpu]
Jun 29 18:28:42 hp-4300G kernel:  amdgpu_pci_probe+0x11f/0x1b0 [amdgpu]
Jun 29 18:28:42 hp-4300G kernel:  local_pci_probe+0x42/0x80
Jun 29 18:28:42 hp-4300G kernel:  ? pci_match_device+0xd7/0x110
Jun 29 18:28:42 hp-4300G kernel:  pci_device_probe+0xfa/0x1b0
Jun 29 18:28:42 hp-4300G kernel:  really_probe+0xf2/0x440
Jun 29 18:28:42 hp-4300G kernel:  driver_probe_device+0xe1/0x150
Jun 29 18:28:42 hp-4300G kernel:  device_driver_attach+0xa1/0xb0
Jun 29 18:28:42 hp-4300G kernel:  __driver_attach+0x8a/0x150
Jun 29 18:28:42 hp-4300G kernel:  ? device_driver_attach+0xb0/0xb0
Jun 29 18:28:42 hp-4300G kernel:  ? device_driver_attach+0xb0/0xb0
Jun 29 18:28:42 hp-4300G kernel:  bus_for_each_dev+0x78/0xc0
Jun 29 18:28:42 hp-4300G kernel:  bus_add_driver+0x12b/0x1e0
Jun 29 18:28:42 hp-4300G kernel:  driver_register+0x8f/0xe0
Jun 29 18:28:42 hp-4300G kernel:  ? 0xffffffffc1c42000
Jun 29 18:28:42 hp-4300G kernel:  do_one_initcall+0x44/0x200
Jun 29 18:28:42 hp-4300G kernel:  ? do_init_module+0x23/0x260
Jun 29 18:28:42 hp-4300G kernel:  ? kmem_cache_alloc_trace+0x161/0x2d0
Jun 29 18:28:42 hp-4300G kernel:  do_init_module+0x5c/0x260
Jun 29 18:28:42 hp-4300G kernel:  __do_sys_finit_module+0xb1/0x110
Jun 29 18:28:42 hp-4300G kernel:  do_syscall_64+0x33/0x40
Jun 29 18:28:42 hp-4300G kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Jun 29 18:28:42 hp-4300G kernel: RIP: 0033:0x7f1f892be18d
Jun 29 18:28:42 hp-4300G kernel: Code: b4 0c 00 0f 05 eb a9 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d b3 6c 0c 00 f7 d8 64 89 01 48
Jun 29 18:28:42 hp-4300G kernel: RSP: 002b:00007ffd0caa5d58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
Jun 29 18:28:42 hp-4300G kernel: RAX: ffffffffffffffda RBX: 000055f12c90e9c0 RCX: 00007f1f892be18d
Jun 29 18:28:42 hp-4300G kernel: RDX: 0000000000000000 RSI: 00007f1f8941ba9d RDI: 0000000000000018
Jun 29 18:28:42 hp-4300G kernel: RBP: 0000000000020000 R08: 0000000000000000 R09: 00007f1f8964e5ea
Jun 29 18:28:42 hp-4300G kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007f1f8941ba9d
Jun 29 18:28:42 hp-4300G kernel: R13: 0000000000000000 R14: 000055f12c90ac20 R15: 000055f12c90e9c0
Jun 29 18:28:42 hp-4300G kernel: Modules linked in: kvm_amd(+) snd_hda_codec_realtek(+) amdgpu(+) rtw88_8821ce(+) snd_hda_codec_generic hp_wmi wmi_bmof sparse_keymap fjes(-) ledtrig_audio snd_hda_codec_hdmi pcc_cpufreq(-) kvm rtw88_8821c gpu_sched i2c_algo_bit drm_ttm_helper rtw88_pci snd_hda_intel ttm irqbypass snd_intel_dspcfg crct10dif_pclmul drm_kms_helper rtw88_core crc32_pclmul snd_intel_sdw_acpi ghash_clmulni_intel snd_hda_codec cec mac80211 snd_hda_core syscopyarea snd_hwdep aesni_intel snd_pcm crypto_simd snd_timer cryptd sysfillrect sp5100_tco rapl snd r8169 sysimgblt vfat pcspkr i2c_piix4 k10temp fat ccp soundcore fb_sys_fops cfg80211 realtek mdio_devres libphy rfkill libarc4 wmi video tpm_crb tpm_tis gpio_amdpt gpio_generic tpm_tis_core tpm mac_hid rng_core pinctrl_amd acpi_tad acpi_cpufreq drm pkcs8_key_parser fuse agpgart bpf_preload ip_tables x_tables xfs libcrc32c crc32c_generic crc32c_intel xhci_pci xhci_pci_renesas
Jun 29 18:28:42 hp-4300G kernel: CR2: 00000000003a8290
Jun 29 18:28:42 hp-4300G kernel: ---[ end trace f6c00070cdcfa658 ]---
Jun 29 18:28:42 hp-4300G kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x1c2/0x200
Jun 29 18:28:42 hp-4300G kernel: Code: ff ff f3 90 8b 02 85 c0 74 ee eb f6 c1 ef 12 83 e0 03 83 ef 01 48 c1 e0 05 48 63 ff 48 05 80 d5 02 00 48 03 04 fd 00 69 e3 a9 <48> 89 08 8b 41 08 85 c0 75 09 f3 90 8b 41 08 85 c0 74 f7 48 8b 39
Jun 29 18:28:42 hp-4300G kernel: RSP: 0018:ffffadb4013db9e8 EFLAGS: 00010006
Jun 29 18:28:42 hp-4300G kernel: RAX: 00000000003a8290 RBX: 0000000000000000 RCX: ffff8900572ad580
Jun 29 18:28:42 hp-4300G kernel: RDX: ffff89005653f024 RSI: 00000000000c0000 RDI: 0000000000001d17
Jun 29 18:28:42 hp-4300G kernel: RBP: 000000000a20d000 R08: 00000000000c0000 R09: 0000000000000000
Jun 29 18:28:42 hp-4300G kernel: R10: 000000000a20d000 R11: ffff89005653f000 R12: 0000000000000212
Jun 29 18:28:42 hp-4300G kernel: R13: 0000000000001000 R14: 0000000000000002 R15: 0000000000200000
Jun 29 18:28:42 hp-4300G kernel: FS:  00007f1f8898ea40(0000) GS:ffff890057280000(0000) knlGS:0000000000000000
Jun 29 18:28:42 hp-4300G kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 29 18:28:42 hp-4300G kernel: CR2: 00000000003a8290 CR3: 00000001020d0000 CR4: 0000000000350ee0
Jun 29 18:28:42 hp-4300G kernel: note: systemd-udevd[296] exited with preempt_count 1
Jun 29 18:28:42 hp-4300G kernel: kvm: Nested Virtualization enabled
Jun 29 18:28:42 hp-4300G kernel: SVM: kvm: Nested Paging enabled
Jun 29 18:28:42 hp-4300G kernel: SVM: Virtual VMLOAD VMSAVE supported
Jun 29 18:28:42 hp-4300G kernel: SVM: Virtual GIF supported
Jun 29 18:28:42 hp-4300G kernel: input: HD-Audio Generic Mic as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input6
Jun 29 18:28:42 hp-4300G kernel: input: HD-Audio Generic Line as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input7
Jun 29 18:28:42 hp-4300G kernel: input: HD-Audio Generic Line Out as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input8
Jun 29 18:28:42 hp-4300G kernel: input: HD-Audio Generic Front Headphone as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input9
Jun 29 18:28:42 hp-4300G kernel: MCE: In-kernel MCE decoding enabled.
Jun 29 18:28:42 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: Link is Down
Jun 29 18:28:42 hp-4300G kernel: intel_rapl_common: Found RAPL domain package
Jun 29 18:28:42 hp-4300G kernel: intel_rapl_common: Found RAPL domain core
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: New USB device found, idVendor=0bda, idProduct=b00a, bcdDevice= 1.10
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: Product: Bluetooth Radio 
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: Manufacturer: Realtek 
Jun 29 18:28:42 hp-4300G kernel: usb 1-12: SerialNumber: 00e04c000001
Jun 29 18:28:42 hp-4300G kernel: input: Logitech USB Receiver as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.0/0003:046D:C534.0001/input/input10
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: Core ver 2.22
Jun 29 18:28:42 hp-4300G kernel: NET: Registered protocol family 31
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: HCI device and connection manager initialized
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: HCI socket layer initialized
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: L2CAP socket layer initialized
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: SCO socket layer initialized
Jun 29 18:28:42 hp-4300G kernel: usbcore: registered new interface driver btusb
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: hci0: RTL: examining hci_ver=08 hci_rev=000c lmp_ver=08 lmp_subver=8821
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8821c_fw.bin
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8821c_config.bin
Jun 29 18:28:42 hp-4300G kernel: Bluetooth: hci0: RTL: cfg_sz 10, total sz 31990
Jun 29 18:28:42 hp-4300G kernel: hid-generic 0003:046D:C534.0001: input,hidraw0: USB HID v1.11 Keyboard [Logitech USB Receiver] on usb-0000:01:00.0-11/input0
Jun 29 18:28:42 hp-4300G kernel: rtw_8821ce 0000:09:00.0: start vif 74:12:b3:a0:4a:cb on port 0
Jun 29 18:28:43 hp-4300G kernel: input: Logitech USB Receiver Mouse as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input11
Jun 29 18:28:43 hp-4300G kernel: input: Logitech USB Receiver Consumer Control as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input12
Jun 29 18:28:43 hp-4300G kernel: input: Logitech USB Receiver System Control as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input13
Jun 29 18:28:43 hp-4300G kernel: hid-generic 0003:046D:C534.0002: input,hiddev96,hidraw1: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:01:00.0-11/input1
Jun 29 18:28:43 hp-4300G kernel: usbcore: registered new interface driver usbhid
Jun 29 18:28:43 hp-4300G kernel: usbhid: USB HID core driver
Jun 29 18:28:43 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0001: hidraw0: USB HID v1.11 Keyboard [Logitech USB Receiver] on usb-0000:01:00.0-11/input0
Jun 29 18:28:43 hp-4300G kernel: Bluetooth: hci0: RTL: fw version 0x829a7644
Jun 29 18:28:44 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: hiddev96,hidraw1: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:01:00.0-11/input1
Jun 29 18:28:44 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: device of type eQUAD nano Lite (0x0a) connected on slot 1
Jun 29 18:28:44 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: device of type eQUAD nano Lite (0x0a) connected on slot 2
Jun 29 18:28:44 hp-4300G kernel: mousedev: PS/2 mouse device common for all mice
Jun 29 18:28:44 hp-4300G kernel: input: Logitech Wireless Keyboard PID:4075 Keyboard as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4075.0003/input/input16
Jun 29 18:28:44 hp-4300G kernel: hid-generic 0003:046D:4075.0003: input,hidraw2: USB HID v1.11 Keyboard [Logitech Wireless Keyboard PID:4075] on usb-0000:01:00.0-11/input1:1
Jun 29 18:28:44 hp-4300G kernel: input: Logitech Wireless Mouse as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4054.0004/input/input21
Jun 29 18:28:44 hp-4300G kernel: logitech-hidpp-device 0003:046D:4054.0004: input,hidraw2: USB HID v1.11 Mouse [Logitech Wireless Mouse] on usb-0000:01:00.0-11/input1:2
Jun 29 18:28:44 hp-4300G kernel: input: Logitech Wireless Keyboard PID:4075 as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4075.0003/input/input22
Jun 29 18:28:44 hp-4300G kernel: logitech-hidpp-device 0003:046D:4075.0003: input,hidraw3: USB HID v1.11 Keyboard [Logitech Wireless Keyboard PID:4075] on usb-0000:01:00.0-11/input1:1
Jun 29 18:28:45 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: Link is Up - 1Gbps/Full - flow control rx/tx
Jun 29 18:28:45 hp-4300G kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp10s0: link becomes ready
Jun 29 18:28:47 hp-4300G kernel: kauditd_printk_skb: 35 callbacks suppressed
Jun 29 18:28:47 hp-4300G kernel: audit: type=1131 audit(1625016527.419:46): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-rfkill comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:28:50 hp-4300G kernel: irq 7: nobody cared (try booting with the "irqpoll" option)
Jun 29 18:28:50 hp-4300G kernel: CPU: 4 PID: 0 Comm: swapper/4 Tainted: G      D           5.12.0-rc3-00025-gaf452ec1b1a3 #1
Jun 29 18:28:50 hp-4300G kernel: Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
Jun 29 18:28:50 hp-4300G kernel: Call Trace:
Jun 29 18:28:50 hp-4300G kernel:  <IRQ>
Jun 29 18:28:50 hp-4300G kernel:  dump_stack+0x76/0x94
Jun 29 18:28:50 hp-4300G kernel:  __report_bad_irq+0x35/0xaa
Jun 29 18:28:50 hp-4300G kernel:  note_interrupt.cold+0xb/0x64
Jun 29 18:28:50 hp-4300G kernel:  handle_irq_event+0xa9/0xb0
Jun 29 18:28:50 hp-4300G kernel:  handle_fasteoi_irq+0x8a/0x1f0
Jun 29 18:28:50 hp-4300G kernel:  __common_interrupt+0x41/0xa0
Jun 29 18:28:50 hp-4300G kernel:  common_interrupt+0x7e/0xa0
Jun 29 18:28:50 hp-4300G kernel:  </IRQ>
Jun 29 18:28:50 hp-4300G kernel:  asm_common_interrupt+0x1e/0x40
Jun 29 18:28:50 hp-4300G kernel: RIP: 0010:cpuidle_enter_state+0xc7/0x380
Jun 29 18:28:50 hp-4300G kernel: Code: 8b 3d d5 18 e2 56 e8 e8 b4 8d ff 49 89 c5 0f 1f 44 00 00 31 ff e8 e9 c1 8d ff 45 84 ff 0f 85 da 01 00 00 fb 66 0f 1f 44 00 00 <45> 85 f6 0f 88 11 01 00 00 49 63 d6 4c 2b 2c 24 48 8d 04 52 48 8d
Jun 29 18:28:50 hp-4300G kernel: RSP: 0018:ffffadb4001a7ea8 EFLAGS: 00000246
Jun 29 18:28:50 hp-4300G kernel: RAX: ffff89005732c7c0 RBX: 0000000000000003 RCX: 000000000000001f
Jun 29 18:28:50 hp-4300G kernel: RDX: 0000000000000000 RSI: 000000002182bb3a RDI: 0000000000000000
Jun 29 18:28:50 hp-4300G kernel: RBP: ffff88ff43cea800 R08: 00000002ce7b160d R09: 000000031c28bc9a
Jun 29 18:28:50 hp-4300G kernel: R10: 0000000000000002 R11: 0000000000000002 R12: ffffffffaa149be0
Jun 29 18:28:50 hp-4300G kernel: R13: 00000002ce7b160d R14: 0000000000000003 R15: 0000000000000000
Jun 29 18:28:50 hp-4300G kernel:  ? cpuidle_enter_state+0xb7/0x380
Jun 29 18:28:50 hp-4300G kernel:  cpuidle_enter+0x29/0x40
Jun 29 18:28:50 hp-4300G kernel:  do_idle+0x1d5/0x270
Jun 29 18:28:50 hp-4300G kernel:  cpu_startup_entry+0x19/0x20
Jun 29 18:28:50 hp-4300G kernel:  secondary_startup_64_no_verify+0xc2/0xcb
Jun 29 18:28:50 hp-4300G kernel: handlers:
Jun 29 18:28:50 hp-4300G kernel: [<00000000e83c4a82>] amd_gpio_irq_handler [pinctrl_amd]
Jun 29 18:28:50 hp-4300G kernel: Disabling IRQ #7
Jun 29 18:29:54 hp-4300G kernel: audit: type=1101 audit(1625016594.700:47): pid=429 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_permit,pam_time acct="nathan" exe="/usr/bin/sshd" hostname=192.168.4.54 addr=192.168.4.54 terminal=ssh res=success'
Jun 29 18:29:54 hp-4300G kernel: audit: type=1103 audit(1625016594.703:48): pid=429 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_shells,pam_faillock,pam_permit,pam_env,pam_faillock acct="nathan" exe="/usr/bin/sshd" hostname=192.168.4.54 addr=192.168.4.54 terminal=ssh res=success'
Jun 29 18:29:54 hp-4300G kernel: audit: type=1006 audit(1625016594.703:49): pid=429 uid=0 old-auid=4294967295 auid=1000 tty=(none) old-ses=4294967295 ses=1 res=1
Jun 29 18:29:54 hp-4300G kernel: audit: type=1300 audit(1625016594.703:49): arch=c000003e syscall=1 success=yes exit=4 a0=3 a1=7fffd1265250 a2=4 a3=3e8 items=0 ppid=359 pid=429 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="sshd" exe="/usr/bin/sshd" key=(null)
Jun 29 18:29:54 hp-4300G kernel: audit: type=1327 audit(1625016594.703:49): proctitle=737368643A206E617468616E205B707269765D
Jun 29 18:29:54 hp-4300G kernel: audit: type=1130 audit(1625016594.716:50): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=user-runtime-dir@1000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:29:54 hp-4300G kernel: audit: type=1101 audit(1625016594.720:51): pid=432 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_permit,pam_time acct="nathan" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:29:54 hp-4300G kernel: audit: type=1103 audit(1625016594.720:52): pid=432 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=? acct="nathan" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jun 29 18:29:54 hp-4300G kernel: audit: type=1006 audit(1625016594.723:53): pid=432 uid=0 old-auid=4294967295 auid=1000 tty=(none) old-ses=4294967295 ses=2 res=1
Jun 29 18:29:54 hp-4300G kernel: audit: type=1300 audit(1625016594.723:53): arch=c000003e syscall=1 success=yes exit=4 a0=9 a1=7ffd5ba43a80 a2=4 a3=3e8 items=0 ppid=1 pid=432 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="(systemd)" exe="/usr/lib/systemd/systemd" key=(null)

--QbssCZQk3Ov6WW0W
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="f127c9556a8e.log"

-- Journal begins at Mon 2021-06-28 09:22:12 MST, ends at Tue 2021-06-29 18:40:31 MST. --
Jun 29 18:40:01 hp-4300G kernel: Linux version 5.12.0-rc3-00024-gf127c9556a8e (nathan@archlinux-ax161) (gcc (GCC) 11.1.0, GNU ld (GNU Binutils) 2.36.50.20210627) #1 SMP PREEMPT Tue Jun 29 18:31:18 MST 2021
Jun 29 18:40:01 hp-4300G kernel: Command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll
Jun 29 18:40:01 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 29 18:40:01 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 29 18:40:01 hp-4300G kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 29 18:40:01 hp-4300G kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jun 29 18:40:01 hp-4300G kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 29 18:40:01 hp-4300G kernel: BIOS-provided physical RAM map:
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x0000000000100000-0x0000000009c0ffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x0000000009c10000-0x0000000009ffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000000a000000-0x000000000a1fffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000000a200000-0x000000000a20cfff] ACPI NVS
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000000a20d000-0x000000000affffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000000b000000-0x000000000b01ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000000b020000-0x00000000b838ffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8390000-0x00000000b86c5fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000b86c6000-0x00000000b8721fff] ACPI data
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8722000-0x00000000b8a14fff] ACPI NVS
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000b8a15000-0x00000000badfefff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000badff000-0x00000000bbffffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000bc000000-0x00000000bdffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000bf000000-0x00000000bfffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fd200000-0x00000000fd2fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fd600000-0x00000000fd6fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fea00000-0x00000000fea0ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000feb80000-0x00000000fec01fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fec30000-0x00000000fec30fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed40000-0x00000000fed44fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fed80000-0x00000000fed8ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fedc2000-0x00000000fedcffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000fedd4000-0x00000000fedd5fff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021f37ffff] usable
Jun 29 18:40:01 hp-4300G kernel: BIOS-e820: [mem 0x000000021f380000-0x000000023fffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: intel_pstate: HWP disabled
Jun 29 18:40:01 hp-4300G kernel: NX (Execute Disable) protection: active
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0xb4c66018-0xb4c73457] usable ==> usable
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0xb4c66018-0xb4c73457] usable ==> usable
Jun 29 18:40:01 hp-4300G kernel: extended physical RAM map:
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000009c0ffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x0000000009c10000-0x0000000009ffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000000a000000-0x000000000a1fffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000000a200000-0x000000000a20cfff] ACPI NVS
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000000a20d000-0x000000000affffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000000b000000-0x000000000b01ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000000b020000-0x00000000b4c66017] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b4c66018-0x00000000b4c73457] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b4c73458-0x00000000b838ffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8390000-0x00000000b86c5fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b86c6000-0x00000000b8721fff] ACPI data
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8722000-0x00000000b8a14fff] ACPI NVS
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000b8a15000-0x00000000badfefff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000badff000-0x00000000bbffffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000bc000000-0x00000000bdffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000bf000000-0x00000000bfffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fd200000-0x00000000fd2fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fd600000-0x00000000fd6fffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fea00000-0x00000000fea0ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000feb80000-0x00000000fec01fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fec30000-0x00000000fec30fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed40000-0x00000000fed44fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fed80000-0x00000000fed8ffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fedc2000-0x00000000fedcffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000fedd4000-0x00000000fedd5fff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x0000000100000000-0x000000021f37ffff] usable
Jun 29 18:40:01 hp-4300G kernel: reserve setup_data: [mem 0x000000021f380000-0x000000023fffffff] reserved
Jun 29 18:40:01 hp-4300G kernel: efi: EFI v2.70 by American Megatrends
Jun 29 18:40:01 hp-4300G kernel: efi: ACPI=0xb8721000 ACPI 2.0=0xb8721014 TPMFinalLog=0xb89c8000 SMBIOS=0xbac0f000 SMBIOS 3.0=0xbac0e000 MEMATTR=0xb5184018 ESRT=0xb6dde918 RNG=0xbac3e998 TPMEventLog=0xb5185018 
Jun 29 18:40:01 hp-4300G kernel: efi: seeding entropy pool
Jun 29 18:40:01 hp-4300G kernel: SMBIOS 3.3.0 present.
Jun 29 18:40:01 hp-4300G kernel: DMI: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
Jun 29 18:40:01 hp-4300G kernel: tsc: Fast TSC calibration using PIT
Jun 29 18:40:01 hp-4300G kernel: tsc: Detected 3792.694 MHz processor
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 29 18:40:01 hp-4300G kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 29 18:40:01 hp-4300G kernel: last_pfn = 0x21f380 max_arch_pfn = 0x400000000
Jun 29 18:40:01 hp-4300G kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0xc0000000-0xffffffff] usable ==> reserved
Jun 29 18:40:01 hp-4300G kernel: last_pfn = 0xbc000 max_arch_pfn = 0x400000000
Jun 29 18:40:01 hp-4300G kernel: esrt: Reserving ESRT space from 0x00000000b6dde918 to 0x00000000b6dde950.
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0xb6dde000-0xb6ddefff] usable ==> reserved
Jun 29 18:40:01 hp-4300G kernel: check: Scanning 1 areas for low memory corruption
Jun 29 18:40:01 hp-4300G kernel: Using GB pages for direct mapping
Jun 29 18:40:01 hp-4300G kernel: Secure boot disabled
Jun 29 18:40:01 hp-4300G kernel: RAMDISK: [mem 0x7f859000-0x7fff5fff]
Jun 29 18:40:01 hp-4300G kernel: ACPI: Early table checksum verification disabled
Jun 29 18:40:01 hp-4300G kernel: ACPI: RSDP 0x00000000B8721014 000024 (v02 HPQOEM)
Jun 29 18:40:01 hp-4300G kernel: ACPI: XSDT 0x00000000B8720728 0000EC (v01 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: FACP 0x00000000B870F000 000114 (v06 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: DSDT 0x00000000B86FE000 01050C (v02 HPQOEM SLIC-CPC 01072009 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: FACS 0x00000000B89F8000 000040
Jun 29 18:40:01 hp-4300G kernel: ACPI: MSDM 0x00000000B871F000 000055 (v03 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B871E000 000050 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: IVRS 0x00000000B871D000 0000D0 (v02 HPQOEM SLIC-CPC 00000001 AMD  00000000)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B8715000 007229 (v02 HPQOEM SLIC-CPC 00000002 MSFT 04000000)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B8711000 003BA1 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B8710000 000094 (v02 HPQOEM SLIC-CPC 01072009 AMI  01072009)
Jun 29 18:40:01 hp-4300G kernel: ACPI: FIDT 0x00000000B86FD000 00009C (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: MCFG 0x00000000B86FC000 00003C (v01 HPQOEM SLIC-CPC 01072009 MSFT 00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: HPET 0x00000000B86FB000 000038 (v01 HPQOEM SLIC-CPC 01072009 AMI  00000005)
Jun 29 18:40:01 hp-4300G kernel: ACPI: VFCT 0x00000000B86ED000 00D484 (v01 HPQOEM SLIC-CPC 00000001 AMD  31504F47)
Jun 29 18:40:01 hp-4300G kernel: ACPI: BGRT 0x00000000B86EC000 000038 (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: TPM2 0x00000000B86EB000 00004C (v04 HPQOEM SLIC-CPC 00000001 AMI  00000000)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86E9000 001CE4 (v02 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:40:01 hp-4300G kernel: ACPI: CRAT 0x00000000B86E8000 0007E8 (v01 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:40:01 hp-4300G kernel: ACPI: CDIT 0x00000000B86E7000 000029 (v01 HPQOEM SLIC-CPC 00000001 AMD  00000001)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86E6000 000D37 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86E4000 0010A5 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86E0000 00333E (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86DF000 0000BF (v01 HPQOEM SLIC-CPC 00001000 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: WSMT 0x00000000B86DE000 000028 (v01 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: APIC 0x00000000B86DD000 00015E (v03 HPQOEM SLIC-CPC 01072009 AMI  00010013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86DC000 000517 (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: SSDT 0x00000000B86DA000 0010AF (v01 HPQOEM SLIC-CPC 00000001 INTL 20120913)
Jun 29 18:40:01 hp-4300G kernel: ACPI: FPDT 0x00000000B86D9000 000044 (v01 HPQOEM SLIC-CPC 01072009 AMI  01000013)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Local APIC address 0xfee00000
Jun 29 18:40:01 hp-4300G kernel: No NUMA configuration found
Jun 29 18:40:01 hp-4300G kernel: Faking a node at [mem 0x0000000000000000-0x000000021f37ffff]
Jun 29 18:40:01 hp-4300G kernel: NODE_DATA(0) allocated [mem 0x21f37c000-0x21f37ffff]
Jun 29 18:40:01 hp-4300G kernel: Zone ranges:
Jun 29 18:40:01 hp-4300G kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jun 29 18:40:01 hp-4300G kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jun 29 18:40:01 hp-4300G kernel:   Normal   [mem 0x0000000100000000-0x000000021f37ffff]
Jun 29 18:40:01 hp-4300G kernel:   Device   empty
Jun 29 18:40:01 hp-4300G kernel: Movable zone start for each node
Jun 29 18:40:01 hp-4300G kernel: Early memory node ranges
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x0000000000100000-0x0000000009c0ffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x000000000a000000-0x000000000a1fffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x000000000a20d000-0x000000000affffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x000000000b020000-0x00000000b838ffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x00000000badff000-0x00000000bbffffff]
Jun 29 18:40:01 hp-4300G kernel:   node   0: [mem 0x0000000100000000-0x000000021f37ffff]
Jun 29 18:40:01 hp-4300G kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021f37ffff]
Jun 29 18:40:01 hp-4300G kernel: On node 0 totalpages: 1934483
Jun 29 18:40:01 hp-4300G kernel:   DMA zone: 64 pages used for memmap
Jun 29 18:40:01 hp-4300G kernel:   DMA zone: 26 pages reserved
Jun 29 18:40:01 hp-4300G kernel:   DMA zone: 3999 pages, LIFO batch:0
Jun 29 18:40:01 hp-4300G kernel:   DMA zone: 28769 pages in unavailable ranges
Jun 29 18:40:01 hp-4300G kernel:   DMA32 zone: 11782 pages used for memmap
Jun 29 18:40:01 hp-4300G kernel:   DMA32 zone: 754036 pages, LIFO batch:63
Jun 29 18:40:01 hp-4300G kernel:   DMA32 zone: 28300 pages in unavailable ranges
Jun 29 18:40:01 hp-4300G kernel:   Normal zone: 18382 pages used for memmap
Jun 29 18:40:01 hp-4300G kernel:   Normal zone: 1176448 pages, LIFO batch:63
Jun 29 18:40:01 hp-4300G kernel:   Normal zone: 3200 pages in unavailable ranges
Jun 29 18:40:01 hp-4300G kernel: ACPI: PM-Timer IO Port: 0x808
Jun 29 18:40:01 hp-4300G kernel: ACPI: Local APIC address 0xfee00000
Jun 29 18:40:01 hp-4300G kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Jun 29 18:40:01 hp-4300G kernel: IOAPIC[0]: apic_id 9, version 33, address 0xfec00000, GSI 0-23
Jun 29 18:40:01 hp-4300G kernel: IOAPIC[1]: apic_id 10, version 33, address 0xfec01000, GSI 24-55
Jun 29 18:40:01 hp-4300G kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 29 18:40:01 hp-4300G kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
Jun 29 18:40:01 hp-4300G kernel: ACPI: IRQ0 used by override.
Jun 29 18:40:01 hp-4300G kernel: ACPI: IRQ9 used by override.
Jun 29 18:40:01 hp-4300G kernel: Using ACPI (MADT) for SMP configuration information
Jun 29 18:40:01 hp-4300G kernel: ACPI: HPET id: 0x10228201 base: 0xfed00000
Jun 29 18:40:01 hp-4300G kernel: e820: update [mem 0xb5158000-0xb517ffff] usable ==> reserved
Jun 29 18:40:01 hp-4300G kernel: smpboot: Allowing 32 CPUs, 24 hotplug CPUs
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x09c10000-0x09ffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x0a200000-0x0a20cfff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0x0b000000-0x0b01ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb4c66000-0xb4c66fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb4c73000-0xb4c73fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb5158000-0xb517ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb6dde000-0xb6ddefff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8390000-0xb86c5fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb86c6000-0xb8721fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8722000-0xb8a14fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xb8a15000-0xbadfefff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbc000000-0xbdffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbe000000-0xbeffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xbf000000-0xbfffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xefffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xf0000000-0xf7ffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xf8000000-0xfd1fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd200000-0xfd2fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd300000-0xfd5fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd600000-0xfd6fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfd700000-0xfe9fffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfea00000-0xfea0ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfea10000-0xfeb7ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfeb80000-0xfec01fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec02000-0xfec0ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec10000-0xfec10fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec11000-0xfec2ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec30000-0xfec30fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfec31000-0xfecfffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed00000-0xfed00fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed01000-0xfed3ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed40000-0xfed44fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed45000-0xfed7ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed80000-0xfed8ffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfed90000-0xfedc1fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedc2000-0xfedcffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd0000-0xfedd3fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd4000-0xfedd5fff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xfedd6000-0xfeffffff]
Jun 29 18:40:01 hp-4300G kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xffffffff]
Jun 29 18:40:01 hp-4300G kernel: [mem 0xc0000000-0xefffffff] available for PCI devices
Jun 29 18:40:01 hp-4300G kernel: Booting paravirtualized kernel on bare hardware
Jun 29 18:40:01 hp-4300G kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 6370452778343963 ns
Jun 29 18:40:01 hp-4300G kernel: setup_percpu: NR_CPUS:320 nr_cpumask_bits:320 nr_cpu_ids:32 nr_node_ids:1
Jun 29 18:40:01 hp-4300G kernel: percpu: Embedded 56 pages/cpu s192512 r8192 d28672 u262144
Jun 29 18:40:01 hp-4300G kernel: pcpu-alloc: s192512 r8192 d28672 u262144 alloc=1*2097152
Jun 29 18:40:01 hp-4300G kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
Jun 29 18:40:01 hp-4300G kernel: pcpu-alloc: [0] 16 17 18 19 20 21 22 23 [0] 24 25 26 27 28 29 30 31 
Jun 29 18:40:01 hp-4300G kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1904229
Jun 29 18:40:01 hp-4300G kernel: Policy zone: Normal
Jun 29 18:40:01 hp-4300G kernel: Kernel command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll
Jun 29 18:40:01 hp-4300G kernel: Misrouted IRQ fixup and polling support enabled
Jun 29 18:40:01 hp-4300G kernel: This may significantly impact system performance
Jun 29 18:40:01 hp-4300G kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jun 29 18:40:01 hp-4300G kernel: printk: log_buf_len total cpu_extra contributions: 126976 bytes
Jun 29 18:40:01 hp-4300G kernel: printk: log_buf_len min size: 131072 bytes
Jun 29 18:40:01 hp-4300G kernel: printk: log_buf_len: 262144 bytes
Jun 29 18:40:01 hp-4300G kernel: printk: early log buf free: 114232(87%)
Jun 29 18:40:01 hp-4300G kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: mem auto-init: stack:off, heap alloc:on, heap free:off
Jun 29 18:40:01 hp-4300G kernel: Memory: 7409704K/7737932K available (14344K kernel code, 2035K rwdata, 4856K rodata, 1648K init, 4340K bss, 327968K reserved, 0K cma-reserved)
Jun 29 18:40:01 hp-4300G kernel: random: get_random_u64 called from __kmem_cache_create+0x2a/0x560 with crng_init=0
Jun 29 18:40:01 hp-4300G kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=32, Nodes=1
Jun 29 18:40:01 hp-4300G kernel: ftrace: allocating 41885 entries in 164 pages
Jun 29 18:40:01 hp-4300G kernel: ftrace: allocated 164 pages with 3 groups
Jun 29 18:40:01 hp-4300G kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 29 18:40:01 hp-4300G kernel: rcu:         RCU dyntick-idle grace-period acceleration is enabled.
Jun 29 18:40:01 hp-4300G kernel: rcu:         RCU restricting CPUs from NR_CPUS=320 to nr_cpu_ids=32.
Jun 29 18:40:01 hp-4300G kernel: rcu:         RCU priority boosting: priority 1 delay 500 ms.
Jun 29 18:40:01 hp-4300G kernel:         Trampoline variant of Tasks RCU enabled.
Jun 29 18:40:01 hp-4300G kernel:         Rude variant of Tasks RCU enabled.
Jun 29 18:40:01 hp-4300G kernel:         Tracing variant of Tasks RCU enabled.
Jun 29 18:40:01 hp-4300G kernel: rcu: RCU calculated value of scheduler-enlistment delay is 30 jiffies.
Jun 29 18:40:01 hp-4300G kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=32
Jun 29 18:40:01 hp-4300G kernel: NR_IRQS: 20736, nr_irqs: 1224, preallocated irqs: 16
Jun 29 18:40:01 hp-4300G kernel: Console: colour dummy device 80x25
Jun 29 18:40:01 hp-4300G kernel: printk: console [tty0] enabled
Jun 29 18:40:01 hp-4300G kernel: ACPI: Core revision 20210105
Jun 29 18:40:01 hp-4300G kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484873504 ns
Jun 29 18:40:01 hp-4300G kernel: APIC: Switch to symmetric I/O mode setup
Jun 29 18:40:01 hp-4300G kernel: Switched APIC routing to physical flat.
Jun 29 18:40:01 hp-4300G kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 29 18:40:01 hp-4300G kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x6d56c6114d4, max_idle_ns: 881591138569 ns
Jun 29 18:40:01 hp-4300G kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 7588.44 BogoMIPS (lpj=12642313)
Jun 29 18:40:01 hp-4300G kernel: pid_max: default: 32768 minimum: 301
Jun 29 18:40:01 hp-4300G kernel: LSM: Security Framework initializing
Jun 29 18:40:01 hp-4300G kernel: Yama: becoming mindful.
Jun 29 18:40:01 hp-4300G kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 29 18:40:01 hp-4300G kernel: LVT offset 1 assigned for vector 0xf9
Jun 29 18:40:01 hp-4300G kernel: LVT offset 2 assigned for vector 0xf4
Jun 29 18:40:01 hp-4300G kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
Jun 29 18:40:01 hp-4300G kernel: Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0
Jun 29 18:40:01 hp-4300G kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 29 18:40:01 hp-4300G kernel: Spectre V2 : Mitigation: Full AMD retpoline
Jun 29 18:40:01 hp-4300G kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 29 18:40:01 hp-4300G kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 29 18:40:01 hp-4300G kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 29 18:40:01 hp-4300G kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Jun 29 18:40:01 hp-4300G kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jun 29 18:40:01 hp-4300G kernel: Freeing SMP alternatives memory: 36K
Jun 29 18:40:01 hp-4300G kernel: smpboot: CPU0: AMD Ryzen 3 4300G with Radeon Graphics (family: 0x17, model: 0x60, stepping: 0x1)
Jun 29 18:40:01 hp-4300G kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 29 18:40:01 hp-4300G kernel: ... version:                0
Jun 29 18:40:01 hp-4300G kernel: ... bit width:              48
Jun 29 18:40:01 hp-4300G kernel: ... generic registers:      6
Jun 29 18:40:01 hp-4300G kernel: ... value mask:             0000ffffffffffff
Jun 29 18:40:01 hp-4300G kernel: ... max period:             00007fffffffffff
Jun 29 18:40:01 hp-4300G kernel: ... fixed-purpose events:   0
Jun 29 18:40:01 hp-4300G kernel: ... event mask:             000000000000003f
Jun 29 18:40:01 hp-4300G kernel: rcu: Hierarchical SRCU implementation.
Jun 29 18:40:01 hp-4300G kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jun 29 18:40:01 hp-4300G kernel: smp: Bringing up secondary CPUs ...
Jun 29 18:40:01 hp-4300G kernel: x86: Booting SMP configuration:
Jun 29 18:40:01 hp-4300G kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7
Jun 29 18:40:01 hp-4300G kernel: smp: Brought up 1 node, 8 CPUs
Jun 29 18:40:01 hp-4300G kernel: smpboot: Max logical packages: 4
Jun 29 18:40:01 hp-4300G kernel: smpboot: Total of 8 processors activated (60707.56 BogoMIPS)
Jun 29 18:40:01 hp-4300G kernel: devtmpfs: initialized
Jun 29 18:40:01 hp-4300G kernel: x86/mm: Memory block size: 128MB
Jun 29 18:40:01 hp-4300G kernel: PM: Registering ACPI NVS region [mem 0x0a200000-0x0a20cfff] (53248 bytes)
Jun 29 18:40:01 hp-4300G kernel: PM: Registering ACPI NVS region [mem 0xb8722000-0xb8a14fff] (3092480 bytes)
Jun 29 18:40:01 hp-4300G kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 6370867519511994 ns
Jun 29 18:40:01 hp-4300G kernel: futex hash table entries: 8192 (order: 7, 524288 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: pinctrl core: initialized pinctrl subsystem
Jun 29 18:40:01 hp-4300G kernel: PM: RTC time: 01:39:57, date: 2021-06-30
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 16
Jun 29 18:40:01 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jun 29 18:40:01 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 29 18:40:01 hp-4300G kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 29 18:40:01 hp-4300G kernel: audit: initializing netlink subsys (disabled)
Jun 29 18:40:01 hp-4300G kernel: audit: type=2000 audit(1625017197.143:1): state=initialized audit_enabled=0 res=1
Jun 29 18:40:01 hp-4300G kernel: thermal_sys: Registered thermal governor 'fair_share'
Jun 29 18:40:01 hp-4300G kernel: thermal_sys: Registered thermal governor 'bang_bang'
Jun 29 18:40:01 hp-4300G kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 29 18:40:01 hp-4300G kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 29 18:40:01 hp-4300G kernel: thermal_sys: Registered thermal governor 'power_allocator'
Jun 29 18:40:01 hp-4300G kernel: cpuidle: using governor ladder
Jun 29 18:40:01 hp-4300G kernel: cpuidle: using governor menu
Jun 29 18:40:01 hp-4300G kernel: ACPI: bus type PCI registered
Jun 29 18:40:01 hp-4300G kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 29 18:40:01 hp-4300G kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jun 29 18:40:01 hp-4300G kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Jun 29 18:40:01 hp-4300G kernel: PCI: Using configuration type 1 for base access
Jun 29 18:40:01 hp-4300G kernel: Kprobes globally optimized
Jun 29 18:40:01 hp-4300G kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jun 29 18:40:01 hp-4300G kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Module Device)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Processor Device)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jun 29 18:40:01 hp-4300G kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jun 29 18:40:01 hp-4300G kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: EC started
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: interrupt blocked
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: Boot DSDT EC used to handle transactions
Jun 29 18:40:01 hp-4300G kernel: ACPI: Interpreter enabled
Jun 29 18:40:01 hp-4300G kernel: ACPI: (supports S0 S3 S4 S5)
Jun 29 18:40:01 hp-4300G kernel: ACPI: Using IOAPIC for interrupt routing
Jun 29 18:40:01 hp-4300G kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 29 18:40:01 hp-4300G kernel: ACPI: Enabled 4 GPEs in block 00 to 1F
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 29 18:40:01 hp-4300G kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jun 29 18:40:01 hp-4300G kernel: acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug AER LTR DPC]
Jun 29 18:40:01 hp-4300G kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jun 29 18:40:01 hp-4300G kernel: acpi PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-7f] only partially covers this bridge
Jun 29 18:40:01 hp-4300G kernel: PCI host bridge to bus 0000:00
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x03af window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7 window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x03b0-0x03df window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfec2ffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [mem 0xfee00000-0xffffffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.0: [1022:1630] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: [1022:1631] type 00 class 0x080600
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:01.0: [1022:1632] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.0: [1022:1632] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: [1022:1634] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: [1022:1634] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.0: [1022:1632] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: [1022:1635] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: [1022:1635] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:14.0: [1022:790b] type 00 class 0x0c0500
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:14.3: [1022:790e] type 00 class 0x060100
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.0: [1022:1448] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.1: [1022:1449] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.2: [1022:144a] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.3: [1022:144b] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.4: [1022:144c] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.5: [1022:144d] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.6: [1022:144e] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.7: [1022:144f] type 00 class 0x060000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.0: [1022:43d1] type 00 class 0x0c0330
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfcda0000-0xfcda7fff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: [1022:43c8] type 00 class 0x010601
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: reg 0x24: [mem 0xfcd80000-0xfcd9ffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: reg 0x30: [mem 0xfcd00000-0xfcd7ffff pref]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: [1022:43c6] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: PCI bridge to [bus 01-0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1:   bridge window [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1:   bridge window [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: [1022:43c7] type 01 class 0x060400
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: PME# supported from D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: PCI bridge to [bus 02-0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2:   bridge window [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2:   bridge window [mem 0xfcb00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: PCI bridge to [bus 03]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: PCI bridge to [bus 04]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: PCI bridge to [bus 05]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: PCI bridge to [bus 06]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: PCI bridge to [bus 07]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: PCI bridge to [bus 08]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: [10ec:c821] type 00 class 0x028000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: reg 0x10: [io  0xe000-0xe0ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: reg 0x18: [mem 0xfcc00000-0xfcc0ffff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: supports D1 D2
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: PCI bridge to [bus 09]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0:   bridge window [io  0xe000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0:   bridge window [mem 0xfcc00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: [10ec:8168] type 00 class 0x020000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: reg 0x10: [io  0xd000-0xd0ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: reg 0x18: [mem 0xfcb04000-0xfcb04fff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: reg 0x20: [mem 0xfcb00000-0xfcb03fff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: supports D1 D2
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: PCI bridge to [bus 0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0:   bridge window [io  0xd000-0xdfff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0:   bridge window [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: [1c5c:1339] type 00 class 0x010802
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfcf00000-0xfcf03fff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: supports D1
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D3hot
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: 15.752 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x2 link at 0000:00:02.2 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2:   bridge window [mem 0xfcf00000-0xfcffffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: [1002:1636] type 00 class 0x030000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: reg 0x10: [mem 0xd0000000-0xdfffffff 64bit pref]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: reg 0x18: [mem 0xe0000000-0xe01fffff 64bit pref]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: reg 0x20: [io  0xf000-0xf0ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: reg 0x24: [mem 0xfca00000-0xfca7ffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: BAR 0: assigned to efifb
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: PME# supported from D1 D2 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:08.1 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: [1002:1637] type 00 class 0x040300
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: reg 0x10: [mem 0xfca88000-0xfca8bfff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: PME# supported from D1 D2 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.2: [1022:15df] type 00 class 0x108000
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.2: reg 0x18: [mem 0xfc900000-0xfc9fffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.2: reg 0x24: [mem 0xfca8c000-0xfca8dfff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.2: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.3: [1022:1639] type 00 class 0x0c0330
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.3: reg 0x10: [mem 0xfc800000-0xfc8fffff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.3: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.3: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.4: [1022:1639] type 00 class 0x0c0330
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.4: reg 0x10: [mem 0xfc700000-0xfc7fffff 64bit]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.4: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.4: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.6: [1022:15e3] type 00 class 0x040300
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.6: reg 0x10: [mem 0xfca80000-0xfca87fff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.6: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.6: PME# supported from D0 D3hot D3cold
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: PCI bridge to [bus 0c]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [io  0xf000-0xffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xfc700000-0xfcafffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.0: [1022:7901] type 00 class 0x010601
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.0: reg 0x24: [mem 0xfce01000-0xfce017ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.0: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:08.2 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.1: [1022:7901] type 00 class 0x010601
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.1: reg 0x24: [mem 0xfce00000-0xfce007ff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.1: enabling Extended Tags
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: PCI bridge to [bus 0d]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2:   bridge window [mem 0xfce00000-0xfcefffff]
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 4 5 7 10 11 14 15) *0
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: interrupt unblocked
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: event unblocked
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
Jun 29 18:40:01 hp-4300G kernel: ACPI: EC: GPE=0x3
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: Boot DSDT EC initialization complete
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_SB_.PCI0.SBRG.EC0_: EC: Used to handle transactions and events
Jun 29 18:40:01 hp-4300G kernel: iommu: Default domain type: Translated 
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: bridge control possible
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: vgaarb: setting as boot device
Jun 29 18:40:01 hp-4300G kernel: vgaarb: loaded
Jun 29 18:40:01 hp-4300G kernel: SCSI subsystem initialized
Jun 29 18:40:01 hp-4300G kernel: libata version 3.00 loaded.
Jun 29 18:40:01 hp-4300G kernel: ACPI: bus type USB registered
Jun 29 18:40:01 hp-4300G kernel: usbcore: registered new interface driver usbfs
Jun 29 18:40:01 hp-4300G kernel: usbcore: registered new interface driver hub
Jun 29 18:40:01 hp-4300G kernel: usbcore: registered new device driver usb
Jun 29 18:40:01 hp-4300G kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 29 18:40:01 hp-4300G kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jun 29 18:40:01 hp-4300G kernel: PTP clock support registered
Jun 29 18:40:01 hp-4300G kernel: EDAC MC: Ver: 3.0.0
Jun 29 18:40:01 hp-4300G kernel: Registered efivars operations
Jun 29 18:40:01 hp-4300G kernel: NetLabel: Initializing
Jun 29 18:40:01 hp-4300G kernel: NetLabel:  domain hash size = 128
Jun 29 18:40:01 hp-4300G kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jun 29 18:40:01 hp-4300G kernel: NetLabel:  unlabeled traffic allowed by default
Jun 29 18:40:01 hp-4300G kernel: PCI: Using ACPI for IRQ routing
Jun 29 18:40:01 hp-4300G kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0x09c10000-0x0bffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0x0a200000-0x0bffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0x0b000000-0x0bffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb4c66018-0xb7ffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb5158000-0xb7ffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb6dde000-0xb7ffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0xb8390000-0xbbffffff]
Jun 29 18:40:01 hp-4300G kernel: e820: reserve RAM buffer [mem 0x21f380000-0x21fffffff]
Jun 29 18:40:01 hp-4300G kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 29 18:40:01 hp-4300G kernel: hpet0: 3 comparators, 32-bit 14.318180 MHz counter
Jun 29 18:40:01 hp-4300G kernel: clocksource: Switched to clocksource tsc-early
Jun 29 18:40:01 hp-4300G kernel: VFS: Disk quotas dquot_6.6.0
Jun 29 18:40:01 hp-4300G kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 29 18:40:01 hp-4300G kernel: pnp: PnP ACPI init
Jun 29 18:40:01 hp-4300G kernel: system 00:00: [mem 0xf0000000-0xf7ffffff] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
Jun 29 18:40:01 hp-4300G kernel: system 00:01: [mem 0x220000000-0x23fffffff window] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:40:01 hp-4300G kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0b00 (active)
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a00-0x0a0f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a10-0x0a1f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a20-0x0a2f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a30-0x0a3f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a40-0x0a4f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a50-0x0a5f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a60-0x0a6f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a70-0x0a7f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a80-0x0a8f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0a90-0x0b8e] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0aa0-0x0aaf] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0ab0-0x0abf] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0ac0-0x0acf] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: [io  0x0ad0-0x0adf] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:03: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x04d0-0x04d1] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x040b] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x04d6] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c00-0x0c01] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c14] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c50-0x0c51] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c52] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c6c] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0c6f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0cd0-0x0cd1] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0cd2-0x0cd3] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0cd4-0x0cd5] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0cd6-0x0cd7] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0cd8-0x0cdf] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0800-0x089f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0b00-0x0b0f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0b20-0x0b3f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0900-0x090f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [io  0x0910-0x091f] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfec00000-0xfec00fff] could not be reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfec01000-0xfec01fff] could not be reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfedc0000-0xfedc0fff] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfee00000-0xfee00fff] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfed80000-0xfed8ffff] could not be reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xfec10000-0xfec10fff] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: [mem 0xff000000-0xffffffff] has been reserved
Jun 29 18:40:01 hp-4300G kernel: system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
Jun 29 18:40:01 hp-4300G kernel: pnp: PnP ACPI: found 5 devices
Jun 29 18:40:01 hp-4300G kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 2
Jun 29 18:40:01 hp-4300G kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 29 18:40:01 hp-4300G kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 1
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 44
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: PCI bridge to [bus 03]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: PCI bridge to [bus 04]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: PCI bridge to [bus 05]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: PCI bridge to [bus 06]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: PCI bridge to [bus 07]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: PCI bridge to [bus 08]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: PCI bridge to [bus 09]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0:   bridge window [io  0xe000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0:   bridge window [mem 0xfcc00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: PCI bridge to [bus 0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0:   bridge window [io  0xd000-0xdfff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0:   bridge window [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: PCI bridge to [bus 02-0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2:   bridge window [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2:   bridge window [mem 0xfcb00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: PCI bridge to [bus 01-0a]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1:   bridge window [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1:   bridge window [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2:   bridge window [mem 0xfcf00000-0xfcffffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: PCI bridge to [bus 0c]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [io  0xf000-0xffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xfc700000-0xfcafffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1:   bridge window [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: PCI bridge to [bus 0d]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2:   bridge window [mem 0xfce00000-0xfcefffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x03af window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 5 [io  0x03e0-0x0cf7 window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 6 [io  0x03b0-0x03df window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 8 [mem 0x000a0000-0x000bffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 9 [mem 0x000c0000-0x000dffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 10 [mem 0xc0000000-0xfec2ffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:00: resource 11 [mem 0xfee00000-0xffffffff window]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:01: resource 0 [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:01: resource 1 [mem 0xfcb00000-0xfcdfffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:02: resource 0 [io  0xd000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:02: resource 1 [mem 0xfcb00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:09: resource 0 [io  0xe000-0xefff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:09: resource 1 [mem 0xfcc00000-0xfccfffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0a: resource 0 [io  0xd000-0xdfff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0a: resource 1 [mem 0xfcb00000-0xfcbfffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0b: resource 1 [mem 0xfcf00000-0xfcffffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0c: resource 0 [io  0xf000-0xffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0c: resource 1 [mem 0xfc700000-0xfcafffff]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0c: resource 2 [mem 0xd0000000-0xe01fffff 64bit pref]
Jun 29 18:40:01 hp-4300G kernel: pci_bus 0000:0d: resource 1 [mem 0xfce00000-0xfcefffff]
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: D0 power state depends on 0000:0c:00.0
Jun 29 18:40:01 hp-4300G kernel: PCI: CLS 64 bytes, default 64
Jun 29 18:40:01 hp-4300G kernel: Trying to unpack rootfs image as initramfs...
Jun 29 18:40:01 hp-4300G kernel: Freeing initrd memory: 7796K
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Unable to read/write to IOMMU perf counter.
Jun 29 18:40:01 hp-4300G kernel: fbcon: Taking over console
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: can't derive routing for PCI INT A
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: PCI INT A: not connected
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:01.0: Adding to iommu group 0
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.0: Adding to iommu group 1
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.1: Adding to iommu group 2
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:02.2: Adding to iommu group 3
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.0: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.1: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:08.2: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:14.0: Adding to iommu group 5
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:14.3: Adding to iommu group 5
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.0: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.1: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.2: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.3: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.4: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.5: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.6: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:18.7: Adding to iommu group 6
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.1: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:01:00.2: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:00.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:01.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:02.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:03.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:04.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:05.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:06.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:02:07.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:09:00.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0a:00.0: Adding to iommu group 7
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0b:00.0: Adding to iommu group 8
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.0: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.1: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.2: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.3: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.4: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0c:00.6: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.0: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:0d:00.1: Adding to iommu group 4
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
Jun 29 18:40:01 hp-4300G kernel: pci 0000:00:00.2: AMD-Vi: Extended features (0x206d73ef22254ade):
Jun 29 18:40:01 hp-4300G kernel:  PPR X2APIC NX GT IA GA PC GA_vAPIC
Jun 29 18:40:01 hp-4300G kernel: AMD-Vi: Interrupt remapping enabled
Jun 29 18:40:01 hp-4300G kernel: AMD-Vi: Virtual APIC enabled
Jun 29 18:40:01 hp-4300G kernel: AMD-Vi: X2APIC enabled
Jun 29 18:40:01 hp-4300G kernel: AMD-Vi: Lazy IO/TLB flushing enabled
Jun 29 18:40:01 hp-4300G kernel: amd_uncore: 4  amd_df counters detected
Jun 29 18:40:01 hp-4300G kernel: amd_uncore: 6  amd_l3 counters detected
Jun 29 18:40:01 hp-4300G kernel: LVT offset 0 assigned for vector 0x400
Jun 29 18:40:01 hp-4300G kernel: perf: AMD IBS detected (0x000003ff)
Jun 29 18:40:01 hp-4300G kernel: check: Scanning for low memory corruption every 60 seconds
Jun 29 18:40:01 hp-4300G kernel: Initialise system trusted keyrings
Jun 29 18:40:01 hp-4300G kernel: Key type blacklist registered
Jun 29 18:40:01 hp-4300G kernel: workingset: timestamp_bits=41 max_order=21 bucket_order=0
Jun 29 18:40:01 hp-4300G kernel: zbud: loaded
Jun 29 18:40:01 hp-4300G kernel: Key type asymmetric registered
Jun 29 18:40:01 hp-4300G kernel: Asymmetric key parser 'x509' registered
Jun 29 18:40:01 hp-4300G kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 243)
Jun 29 18:40:01 hp-4300G kernel: io scheduler mq-deadline registered
Jun 29 18:40:01 hp-4300G kernel: io scheduler kyber registered
Jun 29 18:40:01 hp-4300G kernel: io scheduler bfq registered
Jun 29 18:40:01 hp-4300G kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 26
Jun 29 18:40:01 hp-4300G kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 27
Jun 29 18:40:01 hp-4300G kernel: pcieport 0000:00:08.1: PME: Signaling with IRQ 28
Jun 29 18:40:01 hp-4300G kernel: pcieport 0000:00:08.2: PME: Signaling with IRQ 29
Jun 29 18:40:01 hp-4300G kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jun 29 18:40:01 hp-4300G kernel: efifb: probing for efifb
Jun 29 18:40:01 hp-4300G kernel: efifb: framebuffer at 0xd0000000, using 3072k, total 3072k
Jun 29 18:40:01 hp-4300G kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 29 18:40:01 hp-4300G kernel: efifb: scrolling: redraw
Jun 29 18:40:01 hp-4300G kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 29 18:40:01 hp-4300G kernel: Console: switching to colour frame buffer device 128x48
Jun 29 18:40:01 hp-4300G kernel: fb0: EFI VGA frame buffer device
Jun 29 18:40:01 hp-4300G kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 29 18:40:01 hp-4300G kernel: ACPI: button: Power Button [PWRB]
Jun 29 18:40:01 hp-4300G kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jun 29 18:40:01 hp-4300G kernel: ACPI: button: Power Button [PWRF]
Jun 29 18:40:01 hp-4300G kernel: Monitor-Mwait will be used to enter C-1 state
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C000: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C002: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C004: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C006: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C001: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C003: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C005: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: ACPI: \_PR_.C007: Found 3 idle states
Jun 29 18:40:01 hp-4300G kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jun 29 18:40:01 hp-4300G kernel: ACPI: thermal: Thermal Zone [HPTZ] (30 C)
Jun 29 18:40:01 hp-4300G kernel: Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Jun 29 18:40:01 hp-4300G kernel: Non-volatile memory driver v1.3
Jun 29 18:40:01 hp-4300G kernel: AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
Jun 29 18:40:01 hp-4300G kernel: nvme nvme0: pci function 0000:0b:00.0
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:01:00.1: version 3.0
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:01:00.1: enabling device (0100 -> 0102)
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:01:00.1: SSS flag set, parallel bus scan disabled
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:01:00.1: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:01:00.1: flags: 64bit ncq sntf stag pm led clo only pmp pio slum part sxs deso sadm sds apst 
Jun 29 18:40:01 hp-4300G kernel: scsi host0: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host1: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host2: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host3: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host4: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host5: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host6: ahci
Jun 29 18:40:01 hp-4300G kernel: scsi host7: ahci
Jun 29 18:40:01 hp-4300G kernel: ata1: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80100 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata2: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80180 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata3: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80200 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata4: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80280 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata5: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80300 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata6: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80380 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata7: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80400 irq 44
Jun 29 18:40:01 hp-4300G kernel: ata8: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80480 irq 44
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.0: enabling device (0100 -> 0102)
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part 
Jun 29 18:40:01 hp-4300G kernel: scsi host8: ahci
Jun 29 18:40:01 hp-4300G kernel: ata9: SATA max UDMA/133 abar m2048@0xfce01000 port 0xfce01100 irq 46
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.1: enabling device (0100 -> 0102)
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.1: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
Jun 29 18:40:01 hp-4300G kernel: ahci 0000:0d:00.1: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part 
Jun 29 18:40:01 hp-4300G kernel: scsi host9: ahci
Jun 29 18:40:01 hp-4300G kernel: ata10: SATA max UDMA/133 abar m2048@0xfce00000 port 0xfce00100 irq 48
Jun 29 18:40:01 hp-4300G kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jun 29 18:40:01 hp-4300G kernel: ehci-pci: EHCI PCI platform driver
Jun 29 18:40:01 hp-4300G kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jun 29 18:40:01 hp-4300G kernel: ohci-pci: OHCI PCI platform driver
Jun 29 18:40:01 hp-4300G kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jun 29 18:40:01 hp-4300G kernel: usbcore: registered new interface driver usbserial_generic
Jun 29 18:40:01 hp-4300G kernel: usbserial: USB Serial support registered for generic
Jun 29 18:40:01 hp-4300G kernel: rtc_cmos 00:02: RTC can wake from S4
Jun 29 18:40:01 hp-4300G kernel: rtc_cmos 00:02: registered as rtc0
Jun 29 18:40:01 hp-4300G kernel: rtc_cmos 00:02: setting system clock to 2021-06-30T01:39:58 UTC (1625017198)
Jun 29 18:40:01 hp-4300G kernel: rtc_cmos 00:02: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
Jun 29 18:40:01 hp-4300G kernel: ledtrig-cpu: registered to indicate activity on CPUs
Jun 29 18:40:01 hp-4300G kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 29 18:40:01 hp-4300G kernel: drop_monitor: Initializing network drop monitor service
Jun 29 18:40:01 hp-4300G kernel: Initializing XFRM netlink socket
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 10
Jun 29 18:40:01 hp-4300G kernel: nvme nvme0: missing or invalid SUBNQN field.
Jun 29 18:40:01 hp-4300G kernel: Segment Routing with IPv6
Jun 29 18:40:01 hp-4300G kernel: RPL Segment Routing with IPv6
Jun 29 18:40:01 hp-4300G kernel: NET: Registered protocol family 17
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU0: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU1: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU2: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU3: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU4: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU5: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU6: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: CPU7: patch_level=0x08600106
Jun 29 18:40:01 hp-4300G kernel: microcode: Microcode Update Driver: v2.2.
Jun 29 18:40:01 hp-4300G kernel: resctrl: L3 allocation detected
Jun 29 18:40:01 hp-4300G kernel: resctrl: L3DATA allocation detected
Jun 29 18:40:01 hp-4300G kernel: resctrl: L3CODE allocation detected
Jun 29 18:40:01 hp-4300G kernel: resctrl: MB allocation detected
Jun 29 18:40:01 hp-4300G kernel: resctrl: L3 monitoring detected
Jun 29 18:40:01 hp-4300G kernel: IPI shorthand broadcast: enabled
Jun 29 18:40:01 hp-4300G kernel: sched_clock: Marking stable (481827256, 421468)->(484410312, -2161588)
Jun 29 18:40:01 hp-4300G kernel: registered taskstats version 1
Jun 29 18:40:01 hp-4300G kernel: Loading compiled-in X.509 certificates
Jun 29 18:40:01 hp-4300G kernel: nvme nvme0: 16/0/0 default/read/poll queues
Jun 29 18:40:01 hp-4300G kernel: Loaded X.509 cert 'Build time autogenerated kernel key: 22ba77e5a745fb64a1c258554eb6d5c653018035'
Jun 29 18:40:01 hp-4300G kernel: zswap: loaded using pool lz4/z3fold
Jun 29 18:40:01 hp-4300G kernel: Key type ._fscrypt registered
Jun 29 18:40:01 hp-4300G kernel: Key type .fscrypt registered
Jun 29 18:40:01 hp-4300G kernel: Key type fscrypt-provisioning registered
Jun 29 18:40:01 hp-4300G kernel: PM:   Magic number: 9:490:661
Jun 29 18:40:01 hp-4300G kernel: RAS: Correctable Errors collector initialized.
Jun 29 18:40:01 hp-4300G kernel:  nvme0n1: p1 p2
Jun 29 18:40:01 hp-4300G kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata9: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata10: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6d56c6114d4, max_idle_ns: 881591138569 ns
Jun 29 18:40:01 hp-4300G kernel: clocksource: Switched to clocksource tsc
Jun 29 18:40:01 hp-4300G kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: ata8: SATA link down (SStatus 0 SControl 300)
Jun 29 18:40:01 hp-4300G kernel: Freeing unused decrypted memory: 2036K
Jun 29 18:40:01 hp-4300G kernel: Freeing unused kernel image (initmem) memory: 1648K
Jun 29 18:40:01 hp-4300G kernel: Write protecting the kernel read-only data: 22528k
Jun 29 18:40:01 hp-4300G kernel: Freeing unused kernel image (text/rodata gap) memory: 2036K
Jun 29 18:40:01 hp-4300G kernel: Freeing unused kernel image (rodata/data gap) memory: 1288K
Jun 29 18:40:01 hp-4300G kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jun 29 18:40:01 hp-4300G kernel: rodata_test: all tests were successful
Jun 29 18:40:01 hp-4300G kernel: Run /init as init process
Jun 29 18:40:01 hp-4300G kernel:   with arguments:
Jun 29 18:40:01 hp-4300G kernel:     /init
Jun 29 18:40:01 hp-4300G kernel:   with environment:
Jun 29 18:40:01 hp-4300G kernel:     HOME=/
Jun 29 18:40:01 hp-4300G kernel:     TERM=linux
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 1
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: hcc params 0x0200ef81 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:40:01 hp-4300G kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb1: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb1: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb1: SerialNumber: 0000:01:00.0
Jun 29 18:40:01 hp-4300G kernel: hub 1-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 1-0:1.0: 14 ports detected
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 2
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:01:00.0: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:40:01 hp-4300G kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:40:01 hp-4300G kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb2: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb2: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb2: SerialNumber: 0000:01:00.0
Jun 29 18:40:01 hp-4300G kernel: hub 2-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 2-0:1.0: 8 ports detected
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: new USB bus registered, assigned bus number 3
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: hcc params 0x0268ffe5 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:40:01 hp-4300G kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb3: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb3: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb3: SerialNumber: 0000:0c:00.3
Jun 29 18:40:01 hp-4300G kernel: hub 3-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 3-0:1.0: 4 ports detected
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: new USB bus registered, assigned bus number 4
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.3: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:40:01 hp-4300G kernel: usb usb4: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:40:01 hp-4300G kernel: usb usb4: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb4: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb4: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb4: SerialNumber: 0000:0c:00.3
Jun 29 18:40:01 hp-4300G kernel: hub 4-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 4-0:1.0: 2 ports detected
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: new USB bus registered, assigned bus number 5
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: hcc params 0x0268ffe5 hci version 0x110 quirks 0x0000000000000410
Jun 29 18:40:01 hp-4300G kernel: usb usb5: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb5: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb5: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb5: SerialNumber: 0000:0c:00.4
Jun 29 18:40:01 hp-4300G kernel: hub 5-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 5-0:1.0: 4 ports detected
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: new USB bus registered, assigned bus number 6
Jun 29 18:40:01 hp-4300G kernel: xhci_hcd 0000:0c:00.4: Host supports USB 3.1 Enhanced SuperSpeed
Jun 29 18:40:01 hp-4300G kernel: usb usb6: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 29 18:40:01 hp-4300G kernel: usb usb6: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.12
Jun 29 18:40:01 hp-4300G kernel: usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 29 18:40:01 hp-4300G kernel: usb usb6: Product: xHCI Host Controller
Jun 29 18:40:01 hp-4300G kernel: usb usb6: Manufacturer: Linux 5.12.0-rc3-00024-gf127c9556a8e xhci-hcd
Jun 29 18:40:01 hp-4300G kernel: usb usb6: SerialNumber: 0000:0c:00.4
Jun 29 18:40:01 hp-4300G kernel: hub 6-0:1.0: USB hub found
Jun 29 18:40:01 hp-4300G kernel: hub 6-0:1.0: 2 ports detected
Jun 29 18:40:01 hp-4300G kernel: SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled
Jun 29 18:40:01 hp-4300G kernel: XFS (nvme0n1p2): Mounting V5 Filesystem
Jun 29 18:40:01 hp-4300G kernel: XFS (nvme0n1p2): Ending clean mount
Jun 29 18:40:01 hp-4300G kernel: xfs filesystem being mounted at /new_root supports timestamps until 2038 (0x7fffffff)
Jun 29 18:40:01 hp-4300G kernel: random: fast init done
Jun 29 18:40:01 hp-4300G kernel: random: crng init done
Jun 29 18:40:01 hp-4300G systemd[1]: Successfully credited entropy passed from boot loader.
Jun 29 18:40:01 hp-4300G systemd[1]: systemd 248.3-2-arch running in system mode. (+PAM +AUDIT -SELINUX -APPARMOR -IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 29 18:40:01 hp-4300G systemd[1]: Detected architecture x86-64.
Jun 29 18:40:01 hp-4300G systemd[1]: Hostname set to <hp-4300G>.
Jun 29 18:40:01 hp-4300G systemd-fstab-generator[250]: Mount point  is not a valid path, ignoring.
Jun 29 18:40:01 hp-4300G systemd-fstab-generator[250]: Mount point  is not a valid path, ignoring.
Jun 29 18:40:01 hp-4300G systemd[1]: Queued start job for default target Graphical Interface.
Jun 29 18:40:01 hp-4300G systemd[1]: Created slice system-getty.slice.
Jun 29 18:40:01 hp-4300G systemd[1]: Created slice system-modprobe.slice.
Jun 29 18:40:01 hp-4300G systemd[1]: Created slice User and Session Slice.
Jun 29 18:40:01 hp-4300G systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jun 29 18:40:01 hp-4300G systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jun 29 18:40:01 hp-4300G systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Local Encrypted Volumes.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Login Prompts.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Paths.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Remote File Systems.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Slices.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Swap.
Jun 29 18:40:01 hp-4300G systemd[1]: Reached target Local Verity Integrity Protected Volumes.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Process Core Dump Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Journal Audit Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Journal Socket (/dev/log).
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Journal Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on Network Service Netlink Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on udev Control Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Listening on udev Kernel Socket.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting Huge Pages File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting POSIX Message Queue File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting Kernel Debug File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting Kernel Trace File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Create list of static device nodes for the current kernel...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Load Kernel Module configfs...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Load Kernel Module drm...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Load Kernel Module fuse...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Set Up Additional Binary Formats...
Jun 29 18:40:01 hp-4300G systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jun 29 18:40:01 hp-4300G kernel: Linux agpgart interface v0.103
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Journal Service...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Load Kernel Modules...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Remount Root and Kernel File Systems...
Jun 29 18:40:01 hp-4300G systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jun 29 18:40:01 hp-4300G kernel: fuse: init (API version 7.33)
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Coldplug All udev Devices...
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted Huge Pages File System.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted POSIX Message Queue File System.
Jun 29 18:40:01 hp-4300G kernel: XFS: attr2 mount option is deprecated.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted Kernel Debug File System.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted Kernel Trace File System.
Jun 29 18:40:01 hp-4300G kernel: Asymmetric key parser 'pkcs8' registered
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Create list of static device nodes for the current kernel.
Jun 29 18:40:01 hp-4300G systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Load Kernel Module configfs.
Jun 29 18:40:01 hp-4300G systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Load Kernel Module drm.
Jun 29 18:40:01 hp-4300G systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Load Kernel Module fuse.
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Load Kernel Modules.
Jun 29 18:40:01 hp-4300G systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 265 (systemd-binfmt)
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jun 29 18:40:01 hp-4300G kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting FUSE Control File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Mounting Kernel Configuration File System...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Apply Kernel Variables...
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Remount Root and Kernel File Systems.
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.694:2): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted FUSE Control File System.
Jun 29 18:40:01 hp-4300G systemd[1]: Mounted Kernel Configuration File System.
Jun 29 18:40:01 hp-4300G systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jun 29 18:40:01 hp-4300G systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Load/Save Random Seed...
Jun 29 18:40:01 hp-4300G systemd[1]: Starting Create System Users...
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Set Up Additional Binary Formats.
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.700:3): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-binfmt comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Apply Kernel Variables.
Jun 29 18:40:01 hp-4300G kernel: usb 1-11: new full-speed USB device number 2 using xhci_hcd
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.700:4): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G systemd[1]: Finished Load/Save Random Seed.
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.710:5): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G systemd[1]: Condition check resulted in First Boot Complete being skipped.
Jun 29 18:40:01 hp-4300G systemd[1]: Started Journal Service.
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.720:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.724:7): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.734:8): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G kernel: audit: type=1130 audit(1625017201.740:9): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:01 hp-4300G kernel: audit: type=1334 audit(1625017201.744:10): prog-id=7 op=LOAD
Jun 29 18:40:01 hp-4300G kernel: acpi-tad ACPI000E:00: Missing _PRW
Jun 29 18:40:01 hp-4300G kernel: acpi_cpufreq: overriding BIOS provided _PSD data
Jun 29 18:40:01 hp-4300G kernel: ACPI: video: Video Device [VGA1] (multi-head: yes  rom: no  post: no)
Jun 29 18:40:01 hp-4300G kernel: acpi device:1e: registered as cooling_device8
Jun 29 18:40:01 hp-4300G kernel: input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:1d/LNXVIDEO:01/input/input2
Jun 29 18:40:01 hp-4300G kernel: acpi PNP0C14:01: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
Jun 29 18:40:01 hp-4300G kernel: input: PC Speaker as /devices/platform/pcspkr/input/input3
Jun 29 18:40:01 hp-4300G kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 163840 ms ovfl timer
Jun 29 18:40:01 hp-4300G kernel: RAPL PMU: hw unit of domain package 2^-16 Joules
Jun 29 18:40:01 hp-4300G kernel: ccp 0000:0c:00.2: enabling device (0100 -> 0102)
Jun 29 18:40:01 hp-4300G kernel: ccp 0000:0c:00.2: ccp: unable to access the device: you might be running a broken BIOS.
Jun 29 18:40:01 hp-4300G kernel: piix4_smbus 0000:00:14.0: SMBus Host Controller at 0xb00, revision 0
Jun 29 18:40:01 hp-4300G kernel: piix4_smbus 0000:00:14.0: Using register 0x02 for SMBus port selection
Jun 29 18:40:01 hp-4300G kernel: cryptd: max_cpu_qlen set to 1000
Jun 29 18:40:01 hp-4300G kernel: piix4_smbus 0000:00:14.0: Auxiliary SMBus Host Controller at 0xb20
Jun 29 18:40:01 hp-4300G kernel: FAT-fs (nvme0n1p1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Jun 29 18:40:01 hp-4300G kernel: ccp 0000:0c:00.2: tee enabled
Jun 29 18:40:01 hp-4300G kernel: ccp 0000:0c:00.2: psp enabled
Jun 29 18:40:01 hp-4300G kernel: sp5100_tco: SP5100/SB800 TCO WatchDog Timer Driver
Jun 29 18:40:01 hp-4300G kernel: sp5100-tco sp5100-tco: Using 0xfeb00000 for watchdog MMIO address
Jun 29 18:40:01 hp-4300G kernel: sp5100-tco sp5100-tco: initialized. heartbeat=60 sec (nowayout=0)
Jun 29 18:40:01 hp-4300G kernel: AVX2 version of gcm_enc/dec engaged.
Jun 29 18:40:01 hp-4300G kernel: AES CTR mode by8 optimization enabled
Jun 29 18:40:01 hp-4300G kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jun 29 18:40:01 hp-4300G kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jun 29 18:40:01 hp-4300G kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jun 29 18:40:01 hp-4300G kernel: cfg80211: failed to load regulatory.db
Jun 29 18:40:02 hp-4300G kernel: input: HP WMI hotkeys as /devices/virtual/input/input4
Jun 29 18:40:02 hp-4300G kernel: snd_hda_intel 0000:0c:00.1: enabling device (0100 -> 0102)
Jun 29 18:40:02 hp-4300G kernel: snd_hda_intel 0000:0c:00.1: Handle vga_switcheroo audio client
Jun 29 18:40:02 hp-4300G kernel: snd_hda_intel 0000:0c:00.6: enabling device (0100 -> 0102)
Jun 29 18:40:02 hp-4300G kernel: libphy: r8169: probed
Jun 29 18:40:02 hp-4300G kernel: r8169 0000:0a:00.0 eth0: RTL8168h/8111h, 00:68:eb:ad:98:43, XID 541, IRQ 91
Jun 29 18:40:02 hp-4300G kernel: r8169 0000:0a:00.0 eth0: jumbo features [frames: 9194 bytes, tx checksumming: ko]
Jun 29 18:40:02 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: renamed from eth0
Jun 29 18:40:02 hp-4300G kernel: usb 1-11: New USB device found, idVendor=046d, idProduct=c534, bcdDevice=29.01
Jun 29 18:40:02 hp-4300G kernel: usb 1-11: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jun 29 18:40:02 hp-4300G kernel: usb 1-11: Product: USB Receiver
Jun 29 18:40:02 hp-4300G kernel: usb 1-11: Manufacturer: Logitech
Jun 29 18:40:02 hp-4300G kernel: Generic FE-GE Realtek PHY r8169-a00:00: attached PHY driver (mii_bus:phy_addr=r8169-a00:00, irq=MAC)
Jun 29 18:40:02 hp-4300G kernel: input: HD-Audio Generic HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:08.1/0000:0c:00.1/sound/card0/input5
Jun 29 18:40:02 hp-4300G kernel: kvm: Nested Virtualization enabled
Jun 29 18:40:02 hp-4300G kernel: SVM: kvm: Nested Paging enabled
Jun 29 18:40:02 hp-4300G kernel: SVM: Virtual VMLOAD VMSAVE supported
Jun 29 18:40:02 hp-4300G kernel: SVM: Virtual GIF supported
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0: autoconfig for ALC671: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:line
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    hp_outs=1 (0x21/0x0/0x0/0x0/0x0)
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    mono: mono_out=0x0
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:    inputs:
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:      Mic=0x19
Jun 29 18:40:02 hp-4300G kernel: snd_hda_codec_realtek hdaudioC1D0:      Line=0x1b
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: new full-speed USB device number 3 using xhci_hcd
Jun 29 18:40:02 hp-4300G kernel: input: HD-Audio Generic Mic as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input6
Jun 29 18:40:02 hp-4300G kernel: input: HD-Audio Generic Line as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input7
Jun 29 18:40:02 hp-4300G kernel: input: HD-Audio Generic Line Out as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input8
Jun 29 18:40:02 hp-4300G kernel: input: HD-Audio Generic Front Headphone as /devices/pci0000:00/0000:00:08.1/0000:0c:00.6/sound/card1/input9
Jun 29 18:40:02 hp-4300G kernel: MCE: In-kernel MCE decoding enabled.
Jun 29 18:40:02 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: Link is Down
Jun 29 18:40:02 hp-4300G kernel: intel_rapl_common: Found RAPL domain package
Jun 29 18:40:02 hp-4300G kernel: intel_rapl_common: Found RAPL domain core
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: New USB device found, idVendor=0bda, idProduct=b00a, bcdDevice= 1.10
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: Product: Bluetooth Radio 
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: Manufacturer: Realtek 
Jun 29 18:40:02 hp-4300G kernel: usb 1-12: SerialNumber: 00e04c000001
Jun 29 18:40:02 hp-4300G kernel: [drm] amdgpu kernel modesetting enabled.
Jun 29 18:40:02 hp-4300G kernel: Virtual CRAT table created for CPU
Jun 29 18:40:02 hp-4300G kernel: amdgpu: Topology: Add CPU node
Jun 29 18:40:02 hp-4300G kernel: checking generic (d0000000 300000) vs hw (d0000000 10000000)
Jun 29 18:40:02 hp-4300G kernel: fb0: switching to amdgpudrmfb from EFI VGA
Jun 29 18:40:02 hp-4300G kernel: Console: switching to colour dummy device 80x25
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: vgaarb: deactivate vga console
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: enabling device (0106 -> 0107)
Jun 29 18:40:02 hp-4300G kernel: [drm] initializing kernel modesetting (RENOIR 0x1002:0x1636 0x103C:0x87D6 0xCA).
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: Trusted Memory Zone (TMZ) feature disabled as experimental (default)
Jun 29 18:40:02 hp-4300G kernel: [drm] register mmio base: 0xFCA00000
Jun 29 18:40:02 hp-4300G kernel: [drm] register mmio size: 524288
Jun 29 18:40:02 hp-4300G kernel: [drm] PCIE atomic ops is not supported
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 0 <soc15_common>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 1 <gmc_v9_0>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 2 <vega10_ih>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 3 <psp>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 4 <smu>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 5 <gfx_v9_0>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 6 <sdma_v4_0>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 7 <dm>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 8 <vcn_v2_0>
Jun 29 18:40:02 hp-4300G kernel: [drm] add ip block number 9 <jpeg_v2_0>
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: Fetched VBIOS from VFCT
Jun 29 18:40:02 hp-4300G kernel: amdgpu: ATOM BIOS: 113-RENOIR-026
Jun 29 18:40:02 hp-4300G kernel: rtw_8821ce 0000:09:00.0: enabling device (0100 -> 0103)
Jun 29 18:40:02 hp-4300G kernel: [drm] VCN decode is enabled in VM mode
Jun 29 18:40:02 hp-4300G kernel: [drm] VCN encode is enabled in VM mode
Jun 29 18:40:02 hp-4300G kernel: [drm] JPEG decode is enabled in VM mode
Jun 29 18:40:02 hp-4300G kernel: [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: VRAM: 512M 0x000000F400000000 - 0x000000F41FFFFFFF (512M used)
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: GART: 1024M 0x0000000000000000 - 0x000000003FFFFFFF
Jun 29 18:40:02 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: AGP: 267419648M 0x000000F800000000 - 0x0000FFFFFFFFFFFF
Jun 29 18:40:02 hp-4300G kernel: [drm] Detected VRAM RAM=512M, BAR=512M
Jun 29 18:40:02 hp-4300G kernel: [drm] RAM width 64bits DDR4
Jun 29 18:40:02 hp-4300G kernel: [TTM] Zone  kernel: Available graphics memory: 3750314 KiB
Jun 29 18:40:02 hp-4300G kernel: [TTM] Zone   dma32: Available graphics memory: 2097152 KiB
Jun 29 18:40:02 hp-4300G kernel: rtw_8821ce 0000:09:00.0: Firmware version 24.8.0, H2C version 12
Jun 29 18:40:02 hp-4300G kernel: [drm] amdgpu: 512M of VRAM memory ready
Jun 29 18:40:02 hp-4300G kernel: [drm] amdgpu: 3072M of GTT memory ready.
Jun 29 18:40:02 hp-4300G kernel: [drm] GART: num cpu pages 262144, num gpu pages 262144
Jun 29 18:40:02 hp-4300G kernel: [drm] PCIE GART of 1024M enabled (table at 0x000000F400900000).
Jun 29 18:40:02 hp-4300G kernel: input: Logitech USB Receiver as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.0/0003:046D:C534.0001/input/input10
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: Core ver 2.22
Jun 29 18:40:02 hp-4300G kernel: NET: Registered protocol family 31
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: HCI device and connection manager initialized
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: HCI socket layer initialized
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: L2CAP socket layer initialized
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: SCO socket layer initialized
Jun 29 18:40:02 hp-4300G kernel: [drm] Loading DMUB firmware via PSP: version=0x00000000
Jun 29 18:40:02 hp-4300G kernel: usbcore: registered new interface driver btusb
Jun 29 18:40:02 hp-4300G kernel: hid-generic 0003:046D:C534.0001: input,hidraw0: USB HID v1.11 Keyboard [Logitech USB Receiver] on usb-0000:01:00.0-11/input0
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: hci0: RTL: examining hci_ver=08 hci_rev=000c lmp_ver=08 lmp_subver=8821
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8821c_fw.bin
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8821c_config.bin
Jun 29 18:40:02 hp-4300G kernel: Bluetooth: hci0: RTL: cfg_sz 10, total sz 31990
Jun 29 18:40:02 hp-4300G kernel: [drm] Found VCN firmware Version ENC: 1.7 DEC: 4 VEP: 0 Revision: 17
Jun 29 18:40:02 hp-4300G kernel: [drm] PSP loading VCN firmware
Jun 29 18:40:03 hp-4300G kernel: rtw_8821ce 0000:09:00.0: start vif 74:12:b3:a0:4a:cb on port 0
Jun 29 18:40:03 hp-4300G kernel: [drm] reserve 0x400000 from 0xf41f800000 for PSP TMR
Jun 29 18:40:03 hp-4300G kernel: input: Logitech USB Receiver Mouse as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input11
Jun 29 18:40:03 hp-4300G kernel: input: Logitech USB Receiver Consumer Control as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input12
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: RAS: optional ras ta ucode is not available
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: RAP: optional rap ta ucode is not available
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is initialized successfully!
Jun 29 18:40:03 hp-4300G kernel: [drm] kiq ring mec 2 pipe 1 q 0
Jun 29 18:40:03 hp-4300G kernel: [drm] Display Core initialized with v3.2.122!
Jun 29 18:40:03 hp-4300G kernel: [drm] DMUB hardware initialized: version=0x01020008
Jun 29 18:40:03 hp-4300G kernel: input: Logitech USB Receiver System Control as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/input/input13
Jun 29 18:40:03 hp-4300G kernel: hid-generic 0003:046D:C534.0002: input,hiddev96,hidraw1: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:01:00.0-11/input1
Jun 29 18:40:03 hp-4300G kernel: usbcore: registered new interface driver usbhid
Jun 29 18:40:03 hp-4300G kernel: usbhid: USB HID core driver
Jun 29 18:40:03 hp-4300G kernel: snd_hda_intel 0000:0c:00.1: bound 0000:0c:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])
Jun 29 18:40:03 hp-4300G kernel: [drm] VCN decode and encode initialized successfully(under DPG Mode).
Jun 29 18:40:03 hp-4300G kernel: [drm] JPEG decode initialized successfully.
Jun 29 18:40:03 hp-4300G kernel: kfd kfd: Allocated 3969056 bytes on gart
Jun 29 18:40:03 hp-4300G kernel: Virtual CRAT table created for GPU
Jun 29 18:40:03 hp-4300G kernel: amdgpu: Topology: Add dGPU node [0x1636:0x1002]
Jun 29 18:40:03 hp-4300G kernel: kfd kfd: added device 1002:1636
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: SE 1, SH per SE 2, CU per SH 18, active_cu_number 26
Jun 29 18:40:03 hp-4300G kernel: [drm] fb mappable at 0x220CD1000
Jun 29 18:40:03 hp-4300G kernel: [drm] vram apper at 0x220000000
Jun 29 18:40:03 hp-4300G kernel: [drm] size 8294400
Jun 29 18:40:03 hp-4300G kernel: [drm] fb depth is 24
Jun 29 18:40:03 hp-4300G kernel: [drm]    pitch is 7680
Jun 29 18:40:03 hp-4300G kernel: fbcon: amdgpudrmfb (fb0) is primary device
Jun 29 18:40:03 hp-4300G kernel: Console: switching to colour frame buffer device 240x67
Jun 29 18:40:03 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0001: hidraw0: USB HID v1.11 Keyboard [Logitech USB Receiver] on usb-0000:01:00.0-11/input0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: [drm] fb0: amdgpudrmfb frame buffer device
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx uses VM inv eng 0 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring kiq_2.1.0 uses VM inv eng 11 on hub 0
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring sdma0 uses VM inv eng 0 on hub 1
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_dec uses VM inv eng 1 on hub 1
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc0 uses VM inv eng 4 on hub 1
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc1 uses VM inv eng 5 on hub 1
Jun 29 18:40:03 hp-4300G kernel: amdgpu 0000:0c:00.0: amdgpu: ring jpeg_dec uses VM inv eng 6 on hub 1
Jun 29 18:40:03 hp-4300G kernel: [drm] Initialized amdgpu 3.40.0 20150101 for 0000:0c:00.0 on minor 0
Jun 29 18:40:03 hp-4300G kernel: kauditd_printk_skb: 26 callbacks suppressed
Jun 29 18:40:03 hp-4300G kernel: audit: type=1130 audit(1625017203.727:37): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-backlight@backlight:acpi_video0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:03 hp-4300G kernel: Bluetooth: hci0: RTL: fw version 0x829a7644
Jun 29 18:40:03 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: hiddev96,hidraw1: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:01:00.0-11/input1
Jun 29 18:40:03 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: device of type eQUAD nano Lite (0x0a) connected on slot 1
Jun 29 18:40:03 hp-4300G kernel: logitech-djreceiver 0003:046D:C534.0002: device of type eQUAD nano Lite (0x0a) connected on slot 2
Jun 29 18:40:03 hp-4300G kernel: mousedev: PS/2 mouse device common for all mice
Jun 29 18:40:03 hp-4300G kernel: input: Logitech Wireless Keyboard PID:4075 Keyboard as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4075.0003/input/input16
Jun 29 18:40:03 hp-4300G kernel: hid-generic 0003:046D:4075.0003: input,hidraw2: USB HID v1.11 Keyboard [Logitech Wireless Keyboard PID:4075] on usb-0000:01:00.0-11/input1:1
Jun 29 18:40:04 hp-4300G kernel: input: Logitech Wireless Mouse as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4054.0004/input/input21
Jun 29 18:40:04 hp-4300G kernel: logitech-hidpp-device 0003:046D:4054.0004: input,hidraw2: USB HID v1.11 Mouse [Logitech Wireless Mouse] on usb-0000:01:00.0-11/input1:2
Jun 29 18:40:04 hp-4300G kernel: input: Logitech Wireless Keyboard PID:4075 as /devices/pci0000:00/0000:00:02.1/0000:01:00.0/usb1/1-11/1-11:1.1/0003:046D:C534.0002/0003:046D:4075.0003/input/input22
Jun 29 18:40:04 hp-4300G kernel: logitech-hidpp-device 0003:046D:4075.0003: input,hidraw3: USB HID v1.11 Keyboard [Logitech Wireless Keyboard PID:4075] on usb-0000:01:00.0-11/input1:1
Jun 29 18:40:04 hp-4300G kernel: audit: type=1103 audit(1625017204.490:38): pid=464 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_permit acct="lightdm" exe="/usr/bin/lightdm" hostname=? addr=? terminal=:0 res=success'
Jun 29 18:40:04 hp-4300G kernel: audit: type=1130 audit(1625017204.647:39): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=user-runtime-dir@973 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:04 hp-4300G kernel: audit: type=1101 audit(1625017204.654:40): pid=468 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_permit,pam_time acct="lightdm" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:04 hp-4300G kernel: audit: type=1103 audit(1625017204.654:41): pid=468 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=? acct="lightdm" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jun 29 18:40:04 hp-4300G kernel: audit: type=1006 audit(1625017204.654:42): pid=468 uid=0 old-auid=4294967295 auid=973 tty=(none) old-ses=4294967295 ses=1 res=1
Jun 29 18:40:04 hp-4300G kernel: audit: type=1300 audit(1625017204.654:42): arch=c000003e syscall=1 success=yes exit=3 a0=9 a1=7ffc7624c2f0 a2=3 a3=3cd items=0 ppid=1 pid=468 auid=973 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="(systemd)" exe="/usr/lib/systemd/systemd" key=(null)
Jun 29 18:40:04 hp-4300G kernel: audit: type=1327 audit(1625017204.654:42): proctitle="(systemd)"
Jun 29 18:40:04 hp-4300G kernel: audit: type=1105 audit(1625017204.657:43): pid=468 uid=0 auid=973 ses=1 msg='op=PAM:session_open grantors=pam_loginuid,pam_loginuid,pam_keyinit,pam_limits,pam_unix,pam_permit,pam_mail,pam_systemd,pam_env acct="lightdm" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:04 hp-4300G kernel: audit: type=1334 audit(1625017204.660:44): prog-id=15 op=LOAD
Jun 29 18:40:06 hp-4300G kernel: r8169 0000:0a:00.0 enp10s0: Link is Up - 1Gbps/Full - flow control rx/tx
Jun 29 18:40:06 hp-4300G kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp10s0: link becomes ready
Jun 29 18:40:31 hp-4300G kernel: kauditd_printk_skb: 6 callbacks suppressed
Jun 29 18:40:31 hp-4300G kernel: audit: type=1101 audit(1625017231.361:49): pid=520 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_permit,pam_time acct="nathan" exe="/usr/bin/sshd" hostname=192.168.4.54 addr=192.168.4.54 terminal=ssh res=success'
Jun 29 18:40:31 hp-4300G kernel: audit: type=1103 audit(1625017231.401:50): pid=520 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_shells,pam_faillock,pam_permit,pam_env,pam_faillock acct="nathan" exe="/usr/bin/sshd" hostname=192.168.4.54 addr=192.168.4.54 terminal=ssh res=success'
Jun 29 18:40:31 hp-4300G kernel: audit: type=1006 audit(1625017231.401:51): pid=520 uid=0 old-auid=4294967295 auid=1000 tty=(none) old-ses=4294967295 ses=2 res=1
Jun 29 18:40:31 hp-4300G kernel: audit: type=1300 audit(1625017231.401:51): arch=c000003e syscall=1 success=yes exit=4 a0=3 a1=7ffd3b8c3e70 a2=4 a3=3e8 items=0 ppid=375 pid=520 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="sshd" exe="/usr/bin/sshd" key=(null)
Jun 29 18:40:31 hp-4300G kernel: audit: type=1327 audit(1625017231.401:51): proctitle=737368643A206E617468616E205B707269765D
Jun 29 18:40:31 hp-4300G kernel: audit: type=1130 audit(1625017231.424:52): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=user-runtime-dir@1000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:31 hp-4300G kernel: audit: type=1101 audit(1625017231.427:53): pid=523 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_permit,pam_time acct="nathan" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 29 18:40:31 hp-4300G kernel: audit: type=1103 audit(1625017231.427:54): pid=523 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=? acct="nathan" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jun 29 18:40:31 hp-4300G kernel: audit: type=1006 audit(1625017231.427:55): pid=523 uid=0 old-auid=4294967295 auid=1000 tty=(none) old-ses=4294967295 ses=3 res=1
Jun 29 18:40:31 hp-4300G kernel: audit: type=1300 audit(1625017231.427:55): arch=c000003e syscall=1 success=yes exit=4 a0=9 a1=7ffc7624c2f0 a2=4 a3=3e8 items=0 ppid=1 pid=523 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="(systemd)" exe="/usr/lib/systemd/systemd" key=(null)

--QbssCZQk3Ov6WW0W--


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 05:19:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 05:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148126.273635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyScW-0007Wz-PB; Wed, 30 Jun 2021 05:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148126.273635; Wed, 30 Jun 2021 05:18:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyScW-0007Ws-M9; Wed, 30 Jun 2021 05:18:44 +0000
Received: by outflank-mailman (input) for mailman id 148126;
 Wed, 30 Jun 2021 05:18:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyScV-0007Wi-Ro; Wed, 30 Jun 2021 05:18:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyScV-0005lE-Jh; Wed, 30 Jun 2021 05:18:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyScV-0008Hn-AX; Wed, 30 Jun 2021 05:18:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyScV-0002XW-A8; Wed, 30 Jun 2021 05:18:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OBPfixO8Ao5yFjoaKdU94N5VdCusiatyJfTuV74Z2ds=; b=mVwNzEZX7RvEKQIA/s/TTCBYpv
	XADer5mS5Ifcq9FEczIGy1yt24H5LIXaVxBQdv7sq3dpQ4DF+Wu1itaq5Z5uYVR5DyZRMYnewfh+W
	5XMNsE8kX+PKr7gMnoZy9U+Xl67lGb5w5fTMtBRw6eSp/U10qWolqfW08HSvJKSQOitU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163189-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163189: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=b37cfdd2807181aed2fee1e17bd7ec1190db266a
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 05:18:43 +0000

flight 163189 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163189/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 b37cfdd2807181aed2fee1e17bd7ec1190db266a
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   26 days
Failing since        162368  2021-06-04 15:42:59 Z   25 days   65 attempts
Testing same since   163189  2021-06-29 23:12:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Daoxiang Li <daoxiang.li@intel.com>
  Dov Murik <dovmurik@linux.ibm.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Sunil V L <sunilvl@ventanamicro.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2859 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 06:42:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 06:42:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148129.273649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyTva-00079t-3L; Wed, 30 Jun 2021 06:42:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148129.273649; Wed, 30 Jun 2021 06:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyTvZ-00079m-Vi; Wed, 30 Jun 2021 06:42:29 +0000
Received: by outflank-mailman (input) for mailman id 148129;
 Wed, 30 Jun 2021 06:42:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EmAy=LY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyTvY-00079g-TL
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 06:42:28 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75188b07-987f-4156-905f-388f965d37e6;
 Wed, 30 Jun 2021 06:42:27 +0000 (UTC)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2055.outbound.protection.outlook.com [104.47.0.55]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-PmaHPqjSPN2V8IqbvnCH4g-1; Wed, 30 Jun 2021 08:42:25 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2606.eurprd04.prod.outlook.com (2603:10a6:800:51::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.22; Wed, 30 Jun
 2021 06:42:22 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Wed, 30 Jun 2021
 06:42:22 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR02CA0074.eurprd02.prod.outlook.com (2603:10a6:208:154::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20 via Frontend
 Transport; Wed, 30 Jun 2021 06:42:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75188b07-987f-4156-905f-388f965d37e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1625035346;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=fh/shTkGqvJB/J9EVTLYqycPvu55Go8cyuPpvmDnTHk=;
	b=NB9CkaTNvTQ6rPI77lY/yOIw/bjFtGb5owKf+1kGtWqkI269lAewOQ8/wYRbuesr/HVPcp
	B8GLRAm6IL7+z7HUo4giex8hcNy54u2xlSoMC45D36gPGzg422jFhNXfxBiQKWfu9QA037
	L7en2+R15SiOyKgezZasVKK6bHOM2/Y=
X-MC-Unique: PmaHPqjSPN2V8IqbvnCH4g-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: bugseng.com; dkim=none (message not signed)
 header.d=none;bugseng.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Roberto Bagnara <roberto.bagnara@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shadow: drop callback_mask pseudo-variables
Message-ID: <b791d89f-5c9d-9c04-00ed-0cbaae68536a@suse.com>
Date: Wed, 30 Jun 2021 08:42:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR02CA0074.eurprd02.prod.outlook.com
 (2603:10a6:208:154::15) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eef15c18-e403-42a5-bc0b-08d93b923725
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2606:
X-Microsoft-Antispam-PRVS:
	<VI1PR0401MB2606B4B639FB0C7922E90568B3019@VI1PR0401MB2606.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:873;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eef15c18-e403-42a5-bc0b-08d93b923725
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 06:42:22.7169
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2606

In commit 90629587e16e ("x86/shadow: replace stale literal numbers in
hash_{vcpu,domain}_foreach()") I had to work around a clang shortcoming
(if you like), leveraging that gcc tolerates static const variables in
otherwise integer constant expressions. Roberto suggests that we'd
better not rely on such non-standard behavior. Drop the involved static
const-s, substituting their "expansions" at each of the prior use
sites. This then allows dropping the clang-specific short-circuiting of
the check.

Requested-by: Roberto Bagnara <roberto.bagnara@bugseng.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1626,12 +1626,8 @@ void shadow_hash_delete(struct domain *d
 typedef int (*hash_vcpu_callback_t)(struct vcpu *v, mfn_t smfn, mfn_t other_mfn);
 typedef int (*hash_domain_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
 
-#ifndef __clang__ /* At least some versions dislike some of the uses. */
 #define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
-#else
-#define HASH_CALLBACKS_CHECK(mask) ((void)(mask))
-#endif
 
 static void hash_vcpu_foreach(struct vcpu *v, unsigned int callback_mask,
                               const hash_vcpu_callback_t callbacks[],
@@ -1829,7 +1825,6 @@ int sh_remove_write_access(struct domain
         [SH_type_l1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 4),
         [SH_type_fl1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 4),
     };
-    static const unsigned int callback_mask = SHF_L1_ANY | SHF_FL1_ANY;
     struct page_info *pg = mfn_to_page(gmfn);
 #if SHADOW_OPTIMIZATIONS & SHOPT_WRITABLE_HEURISTIC
     struct vcpu *curr = current;
@@ -2004,8 +1999,8 @@ int sh_remove_write_access(struct domain
         perfc_incr(shadow_writeable_bf_1);
     else
         perfc_incr(shadow_writeable_bf);
-    HASH_CALLBACKS_CHECK(callback_mask);
-    hash_domain_foreach(d, callback_mask, callbacks, gmfn);
+    HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
+    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, then there's some non-pagetable
      * mapping -- ioreq page, grant mapping, &c. */
@@ -2044,7 +2039,6 @@ int sh_remove_all_mappings(struct domain
         [SH_type_l1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 4),
         [SH_type_fl1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 4),
     };
-    static const unsigned int callback_mask = SHF_L1_ANY | SHF_FL1_ANY;
 
     perfc_incr(shadow_mappings);
     if ( sh_check_page_has_no_refs(page) )
@@ -2060,8 +2054,8 @@ int sh_remove_all_mappings(struct domain
 
     /* Brute-force search of all the shadows, by walking the hash */
     perfc_incr(shadow_mappings_bf);
-    HASH_CALLBACKS_CHECK(callback_mask);
-    hash_domain_foreach(d, callback_mask, callbacks, gmfn);
+    HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
+    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, something is very wrong */
     if ( !sh_check_page_has_no_refs(page) )
@@ -2307,10 +2301,9 @@ void sh_reset_l3_up_pointers(struct vcpu
     static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
         [SH_type_l3_64_shadow] = sh_clear_up_pointer,
     };
-    static const unsigned int callback_mask = SHF_L3_64;
 
-    HASH_CALLBACKS_CHECK(callback_mask);
-    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
+    HASH_CALLBACKS_CHECK(SHF_L3_64);
+    hash_vcpu_foreach(v, SHF_L3_64, callbacks, INVALID_MFN);
 }
 
 



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 06:47:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 06:47:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148132.273659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU0E-0007n9-Mp; Wed, 30 Jun 2021 06:47:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148132.273659; Wed, 30 Jun 2021 06:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU0E-0007n2-J0; Wed, 30 Jun 2021 06:47:18 +0000
Received: by outflank-mailman (input) for mailman id 148132;
 Wed, 30 Jun 2021 06:47:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EmAy=LY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyU0C-0007mw-Tn
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 06:47:16 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc99b50b-3160-499b-8827-d36695b7f287;
 Wed, 30 Jun 2021 06:47:16 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2057.outbound.protection.outlook.com [104.47.8.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-13-QyE7b3CcPEuKKLdhVz9xWQ-1; Wed, 30 Jun 2021 08:47:14 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0402MB2702.eurprd04.prod.outlook.com (2603:10a6:800:b4::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.19; Wed, 30 Jun
 2021 06:47:12 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Wed, 30 Jun 2021
 06:47:12 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 FR2P281CA0023.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:14::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.12 via Frontend Transport; Wed, 30 Jun 2021 06:47:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc99b50b-3160-499b-8827-d36695b7f287
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1625035635;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lirN/Fyn7nOihsEVU+pGPhqLTCx1O9LAWzSTLgYYGbI=;
	b=T+bZChhkda3jr1h8K/z0NBpNqCyj7felRl6apQvLexJ2sWiiRwzHh+Gpfp3OSSAoF5aRYf
	QdWZIq1W6zs/zbmq+36eeYF0wT6WuwjFgI4ow0uCP5bBUVnmVL7dehUyVXfd5SoRZZ0+8u
	2Ve1rpQoAJVM4qI9rGzmHKaQyOqIVww=
X-MC-Unique: QyE7b3CcPEuKKLdhVz9xWQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Ping: [PATCH] x86: mark hypercall argument regs clobbering for
 intended fall-through
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
 <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <485b066e-d57e-a966-4ca5-734da4973be2@suse.com>
Date: Wed, 30 Jun 2021 08:47:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: FR2P281CA0023.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::10) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 59eff27e-c1de-4051-3e96-08d93b92e3a6
X-MS-TrafficTypeDiagnostic: VI1PR0402MB2702:
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB27021707E39F359D99347E78B3019@VI1PR0402MB2702.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 59eff27e-c1de-4051-3e96-08d93b92e3a6
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 06:47:12.1761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB2702

On 09.06.2021 14:49, Andrew Cooper wrote:
> On 09/06/2021 11:34, Jan Beulich wrote:
>> The CIDs below are all for the PV side of things, but also take care of
>> the HVM side.
>>
>> Coverity-ID: 1485896, 1485901, 1485906, 1485910, 1485911,
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Let's see whether Coverity actually understands the (relatively) new
>> pseudo-keyword.
> 
> This is exceedingly disappointing. Coverity used to have the only
> sensible rule for not causing spurious fallthrough warnings, but this
> has apparently regressed.
> 
> Coverity works on the AST, so ought to be after GCC has interpreted
> __attribute__((__fallthrough__)) if applicable.
> 
> However, I doubt it will work in the fallback case, because #define
> fallthrough looks dubious. To trigger the older logic, the /*
> fallthrough */ comment needs to be the final thing before the next case
> label, and it isn't with the added semicolon.
> 
> Given that this pseudo-keyword is restricted to the SMMU driver for now,
> we don't actually know if Coverity likes it or not.

My reply from the 9th has had no further reaction, so let me ask more
directly: besides leaving the Coverity issues open, what alternatives
do you see? IOW your reply gives me no indication of what rework of
the patch, if any, you want me to do. Or, if no rework, of what it is
that stands in the way of getting this change in.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 06:50:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 06:50:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148135.273670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU3d-0000iG-5A; Wed, 30 Jun 2021 06:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148135.273670; Wed, 30 Jun 2021 06:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU3d-0000i9-26; Wed, 30 Jun 2021 06:50:49 +0000
Received: by outflank-mailman (input) for mailman id 148135;
 Wed, 30 Jun 2021 06:50:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EmAy=LY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyU3b-0000i3-Jx
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 06:50:47 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 327a6f83-e906-48c2-a57d-ccaa2a99585a;
 Wed, 30 Jun 2021 06:50:46 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-3-OCxTPCXsONeeT_UfuEiZ6w-1; Wed, 30 Jun 2021 08:50:44 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB5166.eurprd04.prod.outlook.com (2603:10a6:803:53::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.22; Wed, 30 Jun
 2021 06:50:40 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Wed, 30 Jun 2021
 06:50:40 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR1P264CA0017.FRAP264.PROD.OUTLOOK.COM (2603:10a6:102:19e::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.22 via Frontend Transport; Wed, 30 Jun 2021 06:50:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 327a6f83-e906-48c2-a57d-ccaa2a99585a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1625035845;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2TY7M9Qlj7YBm+QB/fUXr1NCcvinqwXt4230p1zNZ40=;
	b=VxzaFWqPBmlndiEpn6lbjtAQlG9089SotAubfEL7YfYGkdrJrYT9Z4oCpbezjnVGr8zqP/
	H/ooBR2z4Lc8wCjKq3FbiBz2FQ1shXOTA2FitqgjCJTSL1VDkrF3ycNpUFb/cs4zFJAfAA
	hUNg6aM8nc+xDILhE5pcIPigU5GOacE=
X-MC-Unique: OCxTPCXsONeeT_UfuEiZ6w-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH] x86: mark hypercall argument regs clobbering for intended
 fall-through
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdbd506a-e6fc-a560-1be7-7424f33d413e@suse.com>
 <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <89c382a0-1f1a-218d-0f19-e43d0a68de5e@suse.com>
Date: Wed, 30 Jun 2021 08:50:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b0675d1f-892a-fde5-133f-65462dd01677@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR1P264CA0017.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19e::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9b375196-cd26-4cfa-6a6f-08d93b93600c
X-MS-TrafficTypeDiagnostic: VI1PR04MB5166:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b375196-cd26-4cfa-6a6f-08d93b93600c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 06:50:40.8013
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: z2s9K4cU9KrKyitdBY6s9lZI7xVoGP8dBleta/fbI2SSPCnKpWqY1vb543df+z3Y1fBxg9VOFfuKMW14R295Xg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5166

On 09.06.2021 14:49, Andrew Cooper wrote:
> On 09/06/2021 11:34, Jan Beulich wrote:
>> The CIDs below are all for the PV side of things, but the change also
>> takes care of the HVM side.
>>
>> Coverity-ID: 1485896, 1485901, 1485906, 1485910, 1485911,
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Let's see whether Coverity actually understands the (relatively) new
>> pseudo-keyword.
>
> This is exceedingly disappointing. Coverity used to have the only
> sensible rule for not causing spurious fallthrough warnings, but this
> has apparently regressed.

Actually - where do you see a regression here? These cases of fall-through
have been entirely un-annotated so far, so I'm instead surprised that
there apparently were no warnings before now. Or maybe they had been
marked as false positives in some database, and some unrelated code
change made Coverity treat this as new / changed code.
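
For reference, the (relatively) new pseudo-keyword being discussed can be
sketched roughly as below. This is a minimal, self-contained illustration:
the fallback macro and the args_used() helper are made up for this example
and are not Xen's actual definitions.

```c
#include <assert.h>

/*
 * Illustrative fall-through annotation: on GCC 7+ it expands to the
 * compiler's statement attribute, so -Wimplicit-fallthrough (and static
 * analysers that honour it) treat the fall-through as intentional.
 */
#if defined(__GNUC__) && __GNUC__ >= 7
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while (0) /* fall through */
#endif

/*
 * Count argument registers in use: each case deliberately falls
 * through so an N-argument call accounts for registers N..1.
 */
static int args_used(unsigned int nargs)
{
    int used = 0;

    switch (nargs)
    {
    case 3:
        used++;
        fallthrough;
    case 2:
        used++;
        fallthrough;
    case 1:
        used++;
        break;
    default:
        break;
    }

    return used;
}
```

Without the annotation each un-annotated case boundary is a candidate for
a fall-through warning; with it, the intent is recorded in the code itself
rather than in a tool-specific suppression database.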

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 06:53:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 06:53:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148138.273682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU6Z-0001Rc-Or; Wed, 30 Jun 2021 06:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148138.273682; Wed, 30 Jun 2021 06:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyU6Z-0001RV-LJ; Wed, 30 Jun 2021 06:53:51 +0000
Received: by outflank-mailman (input) for mailman id 148138;
 Wed, 30 Jun 2021 06:53:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyU6Y-0001R0-NG; Wed, 30 Jun 2021 06:53:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyU6Y-0007NN-FK; Wed, 30 Jun 2021 06:53:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyU6Y-00047g-26; Wed, 30 Jun 2021 06:53:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyU6X-0001CP-VU; Wed, 30 Jun 2021 06:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yCnmDiuRp/ArG9bAxa8xxGYtBbuyqdHXpHtBcnbybbg=; b=Mn67O0Z3Vp4gIEIII8jyRM8x33
	uIhFepbuXBvVndD/mWu6m63j5GqnzYS3yOfQ0VJOSUAp8TRfPhErg6kS55G55aCtMI+9kbY/5EdYF
	8rPZRRRm9ugcFL+3v233yQbvDAlmN8UTlF+/2/yVWLu47KTQQAaWKBcfb8c1VAZSyh7w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 163191: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f63397de614bc292232efb5d2b953c101b0377d8
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 06:53:49 +0000

flight 163191 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163191/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f63397de614bc292232efb5d2b953c101b0377d8
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  355 days
Failing since        151818  2020-07-11 04:18:52 Z  354 days  346 attempts
Testing same since   163191  2021-06-30 04:20:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  William Douglas <william.douglas@intel.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63606 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 07:32:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 07:32:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148142.273696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyUhy-0005PZ-Ri; Wed, 30 Jun 2021 07:32:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148142.273696; Wed, 30 Jun 2021 07:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyUhy-0005PS-Nd; Wed, 30 Jun 2021 07:32:30 +0000
Received: by outflank-mailman (input) for mailman id 148142;
 Wed, 30 Jun 2021 07:32:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyUhx-0005PI-4F; Wed, 30 Jun 2021 07:32:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyUhw-00080k-Rt; Wed, 30 Jun 2021 07:32:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyUhw-0007Te-IB; Wed, 30 Jun 2021 07:32:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyUhw-0007gJ-Hd; Wed, 30 Jun 2021 07:32:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ajFOVjFvn6/wZhr4zeQnOLzPmgK+YnXXlz5ZI5XX4x8=; b=3885iiDReraprM8ydp00xFcKJ+
	W46rs9htbChJDRJGeXpt5Wt9KMTKI6wubTsJ+hpEtakgjujCU9CTFsZ9dnmBikS3girx/VqjJHvfP
	k0N31RJ7EKQkGQMoa79KHg8HzRSBja2ff7T4UBkIHIjDCEV31jZ9+3VegrVaVS5eec9g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163187-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163187: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=13d5f87cc3b94bfccc501142df4a7b12fee3a6e7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 07:32:28 +0000

flight 163187 qemu-mainline real [real]
flight 163193 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163187/
http://logs.test-lab.xenproject.org/osstest/logs/163193/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                13d5f87cc3b94bfccc501142df4a7b12fee3a6e7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  313 days
Failing since        152659  2020-08-21 14:07:39 Z  312 days  574 attempts
Testing same since   163187  2021-06-29 20:09:42 Z    0 days    1 attempts

------------------------------------------------------------
552 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 179743 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 07:44:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 07:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148147.273710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyUt4-00070x-30; Wed, 30 Jun 2021 07:43:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148147.273710; Wed, 30 Jun 2021 07:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyUt3-00070q-VZ; Wed, 30 Jun 2021 07:43:57 +0000
Received: by outflank-mailman (input) for mailman id 148147;
 Wed, 30 Jun 2021 07:43:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EmAy=LY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyUt1-00070g-SM
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 07:43:55 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20bbb1a3-422f-464d-8c28-9b299aecb694;
 Wed, 30 Jun 2021 07:43:54 +0000 (UTC)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-21-5tJjxN5pOqaViFIvHDltmQ-1; Wed, 30 Jun 2021 09:43:52 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB6173.eurprd04.prod.outlook.com (2603:10a6:803:ff::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.18; Wed, 30 Jun
 2021 07:43:50 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Wed, 30 Jun 2021
 07:43:50 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR04CA0043.eurprd04.prod.outlook.com (2603:10a6:208:1::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4287.22 via Frontend Transport; Wed, 30 Jun 2021 07:43:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20bbb1a3-422f-464d-8c28-9b299aecb694
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1625039033;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l3FgzD2nb1eGianYjmDXgYghYrlnkqyxQmlsrx7Fo7E=;
	b=XZsqoG5+lHqbH/N/W3DmiI2ncBEw6jdOuJXZochj6TTghT3xJBAC36wk2VNXR8mToqfz/e
	vkcYcndl0bAiVQMJvVXS6OBhygh7C338v2uMjzXPzawMCHnS3W9WWo2+2091y/OJAFBv0R
	Gtv1Hogq5tLHzYyQPhJKK8xVoFh+vyU=
X-MC-Unique: 5tJjxN5pOqaViFIvHDltmQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FYPfHlSFOIc0t9YKZCWNq94cBQagNCHjcd759xvdDzc9uua5nI6Of3DXvY0EUDOHzO6ELsq+C4rGWnZ931YQh0arhW42yiNhR8ApEunct01MIcP3I2uw0rGLrHLrmVg5nYkrPR6lG+5IvkJCTAD1wq1spnYM3cCgiBIyphGwTUIyj9QKyoQXG6cNxcUIkc7NrfMqWIzy6m0LsAyhirhDYrx9s1McusVC2LVYhQg1RtghlVqQqqWZfoSG6whS2NkukBWnP1Yh+zapLaWZm6/caRjPinAC/hXZOaJ8d1lS75s5R2GqnwOS/+0k/tlsVZ109pb/lAHHgJe70lUqftAGyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l3FgzD2nb1eGianYjmDXgYghYrlnkqyxQmlsrx7Fo7E=;
 b=lmjDcCVUG5QH6c8NMrWaOX0ncP6fgSu+07g0zSH2IzHVLbke6u3KmaGHeD23Hu4gRvhoNn8E/7tjkGkf7BBXnh2KNpolGUqJ1BpOx1+igQV4vD8lw9U541/ZAjkexkEpNCqa4Nui5ITxMnKN75clNBunEiskKeQkjxcVARGVE7OYWazVPW8M7IjgC65zIXBGTit9HH7iyK01CT9B2HJDMCZfu5HLXd8rVgXiEqOi0+rKUmtQWZxLqG407MlsN6tUZyMOS4dhPwAhPrZ9J4TM9bSTR46psOEft9av14KxlWCpD0OvMzDr5HN+7juOSAQvIKDDJlJ92040KT0WsAJItQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=suse.com;
Subject: Ping: [PATCH v4 3/3] unzstd: make helper symbols static
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>
References: <12054cba-4386-0dbf-46fd-41ace0344f8e@suse.com>
 <759c8524-cc01-fac8-bc62-0ba6558477bd@suse.com>
 <cb8fa703-f421-ce55-811a-d4a649bc201a@xen.org>
 <1696e5f2-481a-5a7f-258d-b2a0679b041f@suse.com>
 <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4a083cb0-9131-adbe-9be6-bc1ee3028eb0@suse.com>
Date: Wed, 30 Jun 2021 09:43:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR04CA0043.eurprd04.prod.outlook.com
 (2603:10a6:208:1::20) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 11019529-c348-41ab-c7d2-08d93b9acd10
X-MS-TrafficTypeDiagnostic: VI1PR04MB6173:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB61730572827E6D50D1F55289B3019@VI1PR04MB6173.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 11019529-c348-41ab-c7d2-08d93b9acd10
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 07:43:50.2826
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HmuOND+0VPOzLhH09ctalV+0gjWNtKzorOh+mKshMA5H+gH8xoXI86qqWDVd/HXwBJkBoy5jPPg0IQabnfAQ7A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6173

On 05.05.2021 19:35, Julien Grall wrote:
> 
> 
> On 29/04/2021 14:26, Jan Beulich wrote:
>> On 29.04.2021 13:27, Julien Grall wrote:
>>> On 21/04/2021 11:22, Jan Beulich wrote:
>>>> While for the original library's purposes these functions of course want
>>>> to be externally exposed, we don't need this, and we also don't want
>>>> this both to prevent unintended use and to keep the name space tidy.
>>>> (When functions have no callers at all, wrap them with a suitable
>>>> #ifdef.) This has the added benefit of reducing the resulting binary
>>>> size - while this is all .init code, it's still desirable to not carry
>>>> dead code.
>>>
>>> So I understand the desire to keep the code close to Linux and removing
>>> the dead code. However I am still not convinced that the approach taken
>>> is actually worth the amount of memory saved.
>>>
>>> How much memory are we talking about here?
>>
>> There are no (runtime) memory savings, as is being said by the
>> description. There are savings on the image and symbol table sizes
>> (see below - .*.0/ holding files as produced without the patch
>> applied, while .*.1/ holding output with it in place), the image
>> size reduction part of which is - as also expressed by the
>> description - a nice side effect, but not the main motivation for
>> the change.
> 
> Thanks for providing the information. I had misunderstood your
> original intention.
> 
> Reading them again, I have to admit this doesn't really change my view
> here. You are trading a smaller name space and prevention of unintended
> use (it is not clear what would be wrong with calling them) against code
> maintainability/readability.
> 
> At the same time, this is not code I usually work on (even if I am
> meant to maintain it). I will leave it to another maintainer to make
> the decision here.

May I ask for another REST maintainer's view here? If there's support
for Julien's position, then I'll at least know to drop the patch. Of
course I'd prefer it, or a stripped down version of it, to go in.

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 08:56:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 08:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148155.273720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyW13-0005qz-Mv; Wed, 30 Jun 2021 08:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148155.273720; Wed, 30 Jun 2021 08:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyW13-0005qs-Jy; Wed, 30 Jun 2021 08:56:17 +0000
Received: by outflank-mailman (input) for mailman id 148155;
 Wed, 30 Jun 2021 08:56:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyW12-0005qm-Hz
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 08:56:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyW11-0001UY-IP; Wed, 30 Jun 2021 08:56:15 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyW11-00077d-CB; Wed, 30 Jun 2021 08:56:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lGhntIi5d+bmdzCK2WNo2xd11yeTsL100l83DZuVED8=; b=UB1JcPx7lcEhm5tdtUAGExPzHz
	tSX5TjE8r/UHHxo7y3fJag/Pk36j+OoG2wwaWy9Imt7KYQZuYMZ0Q9nTn3xhTHSkn61ZT23I1FTTW
	vZavP0tcmPQHurW/INQS0jjr2STst7z0NjyHYEOOO4FBZRLNecxKmLnWDyf9j1HrWpOg=;
Subject: Re: [PATCH] xen/arm: bootfdt: Always sort memory banks
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1623699267-9475-1-git-send-email-olekstysh@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <47cbf077-8681-7ac4-04e2-f31e1fa4c04f@xen.org>
Date: Wed, 30 Jun 2021 09:56:13 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <1623699267-9475-1-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

On 14/06/2021 20:34, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> At the moment, Xen expects the memory banks to be ordered.
> Unfortunately, there may be a case when a device tree updated by
> firmware contains unordered banks. This means Xen will panic
> when setting xenheap mappings for the subsequent bank with start
> address being less than xenheap_mfn_start (start address of
> the first bank).

Please clarify in the commit message that the behavior you are 
describing is for arm64. For arm32, there is only one heap region.

That said, I think the sorting is fine to be done in common code.

> 
> As there is no clear requirment regarding ordering in the device

s/requirment/requirement/

> tree, update the code to be able to deal with this by sorting the
> memory banks if we have more than one.
> 
> Suggested-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> 
> The proposed commit fixes the booting Xen on R-Car M3-W+ SoC:
> 
> Starting kernel ...
> - UART enabled -
> - Boot CPU booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Zero BSS -
> - Ready -
> (XEN) Checking for initrd in /chosen
> (XEN) Initrd 0000000084000040-0000000085dbc32a
> (XEN) RAM: 0000000480000000 - 00000004ffffffff
> (XEN) RAM: 0000000048000000 - 00000000bfffffff
> (XEN) RAM: 0000000600000000 - 00000006ffffffff
> 
> ...
> 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) cannot add xenheap mapping at 48000 below heap start 480000
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> ---
>   xen/arch/arm/bootfdt.c | 22 ++++++++++++++++++++++
>   1 file changed, 22 insertions(+)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index dcff512..3ef63b3 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -13,6 +13,7 @@
>   #include <xen/init.h>
>   #include <xen/device_tree.h>
>   #include <xen/libfdt/libfdt.h>
> +#include <xen/sort.h>
>   #include <xsm/xsm.h>
>   #include <asm/setup.h>
>   
> @@ -395,6 +396,21 @@ static void __init early_print_info(void)
>       printk("\n");
>   }
>   
> +/* This function assumes that memory regions are not overlapped */
> +static int __init cmp_memory_node(const void *key, const void *elem)
> +{
> +    const struct membank *handler0 = key;
> +    const struct membank *handler1 = elem;
> +
> +    if ( handler0->start < handler1->start )
> +        return -1;
> +
> +    if ( handler0->start >= (handler1->start + handler1->size) )
> +        return 1;
> +
> +    return 0;
> +}
> +
>   /**
>    * boot_fdt_info - initialize bootinfo from a DTB
>    * @fdt: flattened device tree binary
> @@ -412,6 +428,12 @@ size_t __init boot_fdt_info(const void *fdt, paddr_t paddr)
>       add_boot_module(BOOTMOD_FDT, paddr, fdt_totalsize(fdt), false);
>   
>       device_tree_for_each_node((void *)fdt, 0, early_scan_node, NULL);
> +    if ( bootinfo.mem.nr_banks > 1 )

NIT: sort() is already able to deal with a one-element array, so this 
check looks a bit pointless.

> +    {
> +        /* Some DT may describe unordered banks, sort them in ascending order */

It would be good to explain in the comment *why* this is necessary. 
Something along the lines of:

On Arm64, setup_xenheap_pages() expects to be called with the lowest 
bank in memory first. There is no requirement that the DT will provide 
the banks sorted in ascending order. So sort them here.

> +        sort(bootinfo.mem.bank, bootinfo.mem.nr_banks, sizeof(struct membank),
> +             cmp_memory_node, NULL);
> +    }
>       early_print_info();
>   
>       return fdt_totalsize(fdt);
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 09:02:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 09:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148159.273731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyW79-0007Jl-FE; Wed, 30 Jun 2021 09:02:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148159.273731; Wed, 30 Jun 2021 09:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyW79-0007Je-C1; Wed, 30 Jun 2021 09:02:35 +0000
Received: by outflank-mailman (input) for mailman id 148159;
 Wed, 30 Jun 2021 09:02:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyW77-0007JP-Ob; Wed, 30 Jun 2021 09:02:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyW77-0001cZ-Ff; Wed, 30 Jun 2021 09:02:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyW77-00025K-36; Wed, 30 Jun 2021 09:02:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyW77-0004qM-2a; Wed, 30 Jun 2021 09:02:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mJMwqu8QZJE3UVRXBcz+PhgL1rZcJCpQSdzABOrb+Vs=; b=bEY6eSXgZJCmP6tY0ft3QZVNyE
	xDclAMXhAaBAaNsRH/2TaUFN9v8jxn+id8xdgIRt/Kz5yfOXm5PV6rqUbrJnhR2sAOokJXO3dUCwr
	snx5QP00Grb7uoJqdJl535ZQAebXINbTb4IP/WCeZYwBORuS3+xTQ6x6cmmZwMKYWYMg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163188-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 163188: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=349a2d52ffe59b7a0c5876fa7ee9f3eaf188b830
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 09:02:33 +0000

flight 163188 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163188/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                349a2d52ffe59b7a0c5876fa7ee9f3eaf188b830
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  333 days
Failing since        152366  2020-08-01 20:49:34 Z  332 days  566 attempts
Testing same since   163188  2021-06-29 21:42:28 Z    0 days    1 attempts

------------------------------------------------------------
6302 people touched revisions under test,
not listing them all.

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1737270 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 09:10:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 09:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148165.273746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWEJ-00084o-EW; Wed, 30 Jun 2021 09:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148165.273746; Wed, 30 Jun 2021 09:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWEJ-00084h-BM; Wed, 30 Jun 2021 09:09:59 +0000
Received: by outflank-mailman (input) for mailman id 148165;
 Wed, 30 Jun 2021 09:09:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyWEI-00084b-To
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 09:09:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyWEH-0001ky-GJ; Wed, 30 Jun 2021 09:09:57 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyWEH-00089l-AI; Wed, 30 Jun 2021 09:09:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=quUuhcUj8Lmb3qmX/7h+AoH6JolIgsZBD2IzgJX7B0g=; b=w+D8SH+OpkhTHDUI73K4eQU6h3
	9qJ0GRHWPu5pXprL/hOwnFoTTuVwKAZcXPhuvIAsJyPUMvw9KXW3ZDLDgAf7YVGlm546RP09tTChi
	Eio1hDTVPSHHZ+wnahA5ei4EcfonmqaLx46O5e4xDl0QtHb6nL/sWv6TfdqRqNJ1dmTs=;
Subject: Re: [PATCH] xen/arm: smmuv1: Fixed stream matching register
 allocation
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <11df0a34-724a-63ad-1822-4bd8aa364ab0@xen.org>
Date: Wed, 30 Jun 2021 10:09:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 25/06/2021 17:37, Rahul Singh wrote:
> SMR allocation should be based on the number of supported stream
> matching register for each SMMU device.
> 
> Issue introduced by commit 5e08586afbb90b2e2d56c175c07db77a4afa873c
> when backported the patches from Linux to XEN to fix the stream match
> conflict issue when two devices have the same stream-id.
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   xen/drivers/passthrough/arm/smmu.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index d9a3a0cbf6..da2cd457d7 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -149,6 +149,7 @@ typedef enum irqreturn irqreturn_t;
>   #define kzalloc(size, flags)		_xzalloc(size, sizeof(void *))
>   #define devm_kzalloc(dev, size, flags)	_xzalloc(size, sizeof(void *))
>   #define kmalloc_array(size, n, flags)	_xmalloc_array(size, sizeof(void *), n)
> +#define kzalloc_array(size, n, flags)	_xzalloc_array(size, sizeof(void *), n)
>   
>   static void __iomem *devm_ioremap_resource(struct device *dev,
>   					   struct resource *res)
> @@ -2221,7 +2222,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
>   		smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
>   
>   		/* Zero-initialised to mark as invalid */
> -		smmu->smrs = devm_kzalloc(smmu->dev, sizeof(*smmu->smrs), GFP_KERNEL);
> +		smmu->smrs = kzalloc_array(sizeof(*smmu->smrs), size, GFP_KERNEL);

I noticed this is already in... However, I am a bit puzzled as to why 
this was switched from devm_kzalloc() to kzalloc_array(). This doesn't 
matter for Xen as they are just wrappers to x*alloc(), but a mention in 
the commit message would have been useful.

Also, when sending series, please remember to create a cover letter and 
number each patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 09:23:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 09:23:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148168.273757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWRT-0001qr-M7; Wed, 30 Jun 2021 09:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148168.273757; Wed, 30 Jun 2021 09:23:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWRT-0001qk-Iz; Wed, 30 Jun 2021 09:23:35 +0000
Received: by outflank-mailman (input) for mailman id 148168;
 Wed, 30 Jun 2021 09:23:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+CRT=LY=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lyWRS-0001qe-MJ
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 09:23:34 +0000
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e72c69c-5027-4572-9918-e454865a059e;
 Wed, 30 Jun 2021 09:23:33 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id i17so2196341ilj.11
 for <xen-devel@lists.xenproject.org>; Wed, 30 Jun 2021 02:23:33 -0700 (PDT)
Received: from mail-io1-f47.google.com (mail-io1-f47.google.com.
 [209.85.166.47])
 by smtp.gmail.com with ESMTPSA id 17sm11381483iog.20.2021.06.30.02.23.32
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 30 Jun 2021 02:23:32 -0700 (PDT)
Received: by mail-io1-f47.google.com with SMTP id k16so2302937ios.10
 for <xen-devel@lists.xenproject.org>; Wed, 30 Jun 2021 02:23:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e72c69c-5027-4572-9918-e454865a059e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=SGxHKBuGZkCU1I+kyNcTwBrZAXyCdyYBNGIIT95dpcY=;
        b=PitW6imfDn0IzY3RG7JEiKUR+7Nsn4w2y1bEvPgW/HS2AbzJNtJNRONqnYjnNN01RO
         f4Cy3TS0FXR+ruq1aYhiZGEJMhWbD1l8vF9wU2uLP3NYm94onDiG0L+Sdwsda2NHyvii
         j6adVFIua9Ybkbk/RCb14aonwq2ISknAbeRG8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=SGxHKBuGZkCU1I+kyNcTwBrZAXyCdyYBNGIIT95dpcY=;
        b=bk35+AYC4mdBSsUpAPc9ChyQwI7OLlWhwfTFrYXT2w5noblFvzGf9qwZVnS9GNDjsJ
         rmLnAJgPCN1Zlwb3Jv4J2jP99Fzvqh/c+WCD5LIP5iQ112BM9lNiUlu13Tc57DpDG0b+
         +35pH1iznHyF2vvPQTVesXJTj4ysNDIPe7/QuT/Hwx4EcRzu9YkIPFO5xWcLV2C+L+bJ
         6kqp2XxuVjpfbfOrZcJtQPUOtv/CaH9mt14GB31HyuwxAcH91j4vqz+XZg9b1NsY73NU
         BG68b+gxolZfGcIURv0pRDRgvKgXQ9bWV/K1w+V9hyBbcj1GKlAnU6fXiWFS0M55RBgR
         apJA==
X-Gm-Message-State: AOAM532xt4HWH5ZS3/1hFKyomX97FzL2BDV+ePMKME4R4V4bQLn+QRdX
	nP8myJN4mg/QV8dt6IwhU1d3X/MtlzU/dw==
X-Google-Smtp-Source: ABdhPJx1yRoBZ4KHgIQJ66F5Sb9L2VUyXHcK3R2P38N2QniChZSsxOvjuZ6nNnK0pBCn4YK/yVbdfQ==
X-Received: by 2002:a05:6e02:bed:: with SMTP id d13mr23832857ilu.259.1625045012859;
        Wed, 30 Jun 2021 02:23:32 -0700 (PDT)
X-Received: by 2002:a6b:e013:: with SMTP id z19mr7163972iog.34.1625044658498;
 Wed, 30 Jun 2021 02:17:38 -0700 (PDT)
MIME-Version: 1.0
References: <20210624155526.2775863-1-tientzu@chromium.org>
 <20210624155526.2775863-7-tientzu@chromium.org> <YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain>
In-Reply-To: <YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 30 Jun 2021 17:17:27 +0800
X-Gmail-Original-Message-ID: <CALiNf2-a-haQN0-4+gX8+wa++52-0CnO2O4BEkxrQCxoTa_47w@mail.gmail.com>
Message-ID: <CALiNf2-a-haQN0-4+gX8+wa++52-0CnO2O4BEkxrQCxoTa_47w@mail.gmail.com>
Subject: Re: [PATCH v15 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
To: Nathan Chancellor <nathan@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
	paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
	bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com, Tom Lendacky <thomas.lendacky@amd.com>, 
	Qian Cai <quic_qiancai@quicinc.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 30, 2021 at 9:43 AM Nathan Chancellor <nathan@kernel.org> wrote:
>
> On Thu, Jun 24, 2021 at 11:55:20PM +0800, Claire Chang wrote:
> > Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> > use it to determine whether to bounce the data or not. This will be
> > useful later to allow for different pools.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> > Tested-by: Will Deacon <will@kernel.org>
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> This patch as commit af452ec1b1a3 ("swiotlb: Use is_swiotlb_force_bounce
> for swiotlb data bouncing") causes my Ryzen 3 4300G system to fail to
> get to an X session consistently (although not every single time),
> presumably due to a crash in the AMDGPU driver that I see in dmesg.
>
> I have attached logs at af452ec1b1a3 and f127c9556a8e and I am happy
> to provide any further information, debug, or test patches as necessary.

Are you using swiotlb=force? Or is swiotlb_map called because of
!dma_capable? (https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/kernel/dma/direct.h#n93)

The `BUG: unable to handle page fault for address: 00000000003a8290` and
the fact that it crashed in `_raw_spin_lock_irqsave` look like the memory
(maybe dev->dma_io_tlb_mem) was corrupted.
dev->dma_io_tlb_mem should be set here
(https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/pci/probe.c#n2528)
through device_initialize.

I can't tell what happened from the logs, but maybe we could try KASAN
to see if it provides more clues.

Thanks,
Claire

>
> Cheers,
> Nathan


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 09:55:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 09:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148171.273768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWvm-0004wQ-4Z; Wed, 30 Jun 2021 09:54:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148171.273768; Wed, 30 Jun 2021 09:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyWvm-0004wJ-0S; Wed, 30 Jun 2021 09:54:54 +0000
Received: by outflank-mailman (input) for mailman id 148171;
 Wed, 30 Jun 2021 09:54:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyWvl-0004vt-1N; Wed, 30 Jun 2021 09:54:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyWvk-0002T3-Od; Wed, 30 Jun 2021 09:54:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyWvk-0006A1-Dl; Wed, 30 Jun 2021 09:54:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyWvk-0005xY-DF; Wed, 30 Jun 2021 09:54:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CWETca5rl/z0qlAlEtUNVt+UXHjSJIIYjb/8+WJ35Bg=; b=bex8i+SKe8LQxa54vH4QUvqkyO
	Al0MrAs65LcTT3VgSP5+qhAC/nHd9RcyBtD29vRpIXF2fqUKLbsl/EaejFE+36MQjuzpicQjz0RJF
	EjZTO/3H7EDEZnVZ1YOZsD8HkzbG+kDQDkf8GWE4RNJsj+Mo54w7JgwHE1cCwsiW1tQw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163196-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 163196: all pass - PUSHED
X-Osstest-Versions-This:
    xen=f95b7b37cfc6d4613721df9357090d14712013c0
X-Osstest-Versions-That:
    xen=bb11edcec1a953ce590da797b0d005cd60f21e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 09:54:52 +0000

flight 163196 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163196/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  f95b7b37cfc6d4613721df9357090d14712013c0
baseline version:
 xen                  bb11edcec1a953ce590da797b0d005cd60f21e83

Last test of basis   163147  2021-06-27 09:18:28 Z    3 days
Testing same since   163196  2021-06-30 09:18:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bb11edcec1..f95b7b37cf  f95b7b37cfc6d4613721df9357090d14712013c0 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 10:12:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 10:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148175.273782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyXCs-0007KM-Ks; Wed, 30 Jun 2021 10:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148175.273782; Wed, 30 Jun 2021 10:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyXCs-0007KF-H6; Wed, 30 Jun 2021 10:12:34 +0000
Received: by outflank-mailman (input) for mailman id 148175;
 Wed, 30 Jun 2021 10:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyXCr-0007K7-Ii
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 10:12:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyXCq-0002pF-7v; Wed, 30 Jun 2021 10:12:32 +0000
Received: from [54.239.6.179] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyXCq-0004lX-1N; Wed, 30 Jun 2021 10:12:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=HZCwDAKbLzZbVr7mwaHy+G9eCxYYF+xmY7QGTSDDT/4=; b=5LvEsLWLParcWrV0KPVRGl6fp/
	LimI21EkHF5w7h9sBgkFmKEC/evFhy28CEClNP3WlnDQJsQquH+uA9IdmHMtbgGuU2RK4D8Dpx86/
	fGntPejyI8FepxcoPbzfVJleufqYTssL6X7l7DUwZRUI1pY6dEKL5/hTIlZrrrLJVEzY=;
Subject: Re: [XEN PATCH v2] libxl/arm: provide guests with random seed
From: Julien Grall <julien@xen.org>
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210527085233.69917-1-Sergiy_Kibrik@epam.com>
 <78b26e15-a3ca-e218-9afa-9f443e234260@xen.org>
Message-ID: <658a814d-2a99-00ff-8855-134a1e707e97@xen.org>
Date: Wed, 30 Jun 2021 11:12:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <78b26e15-a3ca-e218-9afa-9f443e234260@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 03/06/2021 14:08, Julien Grall wrote:
> Hi,
> 
> On 27/05/2021 09:52, Sergiy Kibrik wrote:
>> Pass 128 bytes of random seed via FDT, so that guests' CRNGs are better
>> seeded early at boot. This is larger than the ChaCha20 key size of 32
>> bytes, so each byte of CRNG state will be mixed 4 times using this seed.
>> There does not seem to be an advantage in a larger seed, though.
>>
>> Depending on its configuration, Linux can use the seed as device
>> randomness or just to quickly initialize the CRNG. In either case this
>> will provide extra randomness to further harden the CRNG.
>>
>> CC: Julien Grall <julien@xen.org>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> Ian, Wei, can we get an ack for this patch?

This has been sitting on the ML for quite a while. I was going to commit 
it as this looks uncontroversial, but the patch doesn't apply on the 
latest Xen (tools/libxl was moved to tools/libs/light).

@Sergiy, would you be able to rebase the patch?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 11:44:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 11:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148178.273792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyYdQ-0007DY-Et; Wed, 30 Jun 2021 11:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148178.273792; Wed, 30 Jun 2021 11:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyYdQ-0007DR-By; Wed, 30 Jun 2021 11:44:04 +0000
Received: by outflank-mailman (input) for mailman id 148178;
 Wed, 30 Jun 2021 11:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=muIj=LY=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lyYdO-0007DL-Gg
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 11:44:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fca5b77a-3425-4845-8bdc-992783a64d56;
 Wed, 30 Jun 2021 11:44:01 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 658936191E;
 Wed, 30 Jun 2021 11:43:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fca5b77a-3425-4845-8bdc-992783a64d56
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625053440;
	bh=T1naDXZ4KL7jQSaB561jHvJlgBOMGMQ3/1PtLxSRV3U=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Vw8LJOu5KCD34mYOTCMTaQTWi8sHKQiF+Cb+FRKyG+InGjsh4w5CH8gnw31a5qAII
	 fdxwWup5a6Ngr850taEcU9ZCSLqJ9xB2i8TJhh5jtbhjxfDnOyNmjKcCavn+Lg4tkH
	 BT4r0W8tudkIUpIlaA5qcnegOAtKQcX6CJcSCUAclVHoKv4x47CJbH5I8ALtxpamjf
	 y69MeQSUFUeisr4w4Nt7M+WH4SJ7ztkfgf6rb2x4qemdLyCnbXEMIuY+1x1gvjyIVN
	 oskCC9GD49xMtxzkxb+rjm1PqbzB1n44Y73LgO/3einGNWpmSI/rVZcsQyguvuwKPI
	 LyjS1rkSbdFqQ==
Date: Wed, 30 Jun 2021 12:43:48 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Nathan Chancellor <nathan@kernel.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com,
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Qian Cai <quic_qiancai@quicinc.com>
Subject: Re: [PATCH v15 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <20210630114348.GA8383@willie-the-truck>
References: <20210624155526.2775863-1-tientzu@chromium.org>
 <20210624155526.2775863-7-tientzu@chromium.org>
 <YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain>
 <CALiNf2-a-haQN0-4+gX8+wa++52-0CnO2O4BEkxrQCxoTa_47w@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2-a-haQN0-4+gX8+wa++52-0CnO2O4BEkxrQCxoTa_47w@mail.gmail.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Jun 30, 2021 at 05:17:27PM +0800, Claire Chang wrote:
> On Wed, Jun 30, 2021 at 9:43 AM Nathan Chancellor <nathan@kernel.org> wrote:
> >
> > On Thu, Jun 24, 2021 at 11:55:20PM +0800, Claire Chang wrote:
> > > Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> > > use it to determine whether to bounce the data or not. This will be
> > > useful later to allow for different pools.
> > >
> > > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > > Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> > > Tested-by: Will Deacon <will@kernel.org>
> > > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> >
> > This patch as commit af452ec1b1a3 ("swiotlb: Use is_swiotlb_force_bounce
> > for swiotlb data bouncing") causes my Ryzen 3 4300G system to fail to
> > get to an X session consistently (although not every single time),
> > presumably due to a crash in the AMDGPU driver that I see in dmesg.
> >
> > I have attached logs at af452ec1b1a3 and f127c9556a8e and I am happy
> > to provide any further information, debug, or test patches as necessary.
> 
> Are you using swiotlb=force? Or is swiotlb_map called because of
> !dma_capable? (https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/kernel/dma/direct.h#n93)

The command line is in the dmesg:

  | Kernel command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll

but I worry that this looks _very_ similar to the issue reported by Qian
Cai, which we thought we had fixed. Nathan -- is the failure deterministic?

> The `BUG: unable to handle page fault for address: 00000000003a8290` and
> the fact that it crashed in `_raw_spin_lock_irqsave` look like the memory
> (maybe dev->dma_io_tlb_mem) was corrupted.
> dev->dma_io_tlb_mem should be set here
> (https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/pci/probe.c#n2528)
> through device_initialize.

I'm less sure about this. 'dma_io_tlb_mem' should be pointing at
'io_tlb_default_mem', which is a page-aligned allocation from memblock.
The spinlock is at offset 0x24 in that structure, and looking at the
register dump from the crash:

Jun 29 18:28:42 hp-4300G kernel: RSP: 0018:ffffadb4013db9e8 EFLAGS: 00010006
Jun 29 18:28:42 hp-4300G kernel: RAX: 00000000003a8290 RBX: 0000000000000000 RCX: ffff8900572ad580
Jun 29 18:28:42 hp-4300G kernel: RDX: ffff89005653f024 RSI: 00000000000c0000 RDI: 0000000000001d17
Jun 29 18:28:42 hp-4300G kernel: RBP: 000000000a20d000 R08: 00000000000c0000 R09: 0000000000000000
Jun 29 18:28:42 hp-4300G kernel: R10: 000000000a20d000 R11: ffff89005653f000 R12: 0000000000000212
Jun 29 18:28:42 hp-4300G kernel: R13: 0000000000001000 R14: 0000000000000002 R15: 0000000000200000
Jun 29 18:28:42 hp-4300G kernel: FS:  00007f1f8898ea40(0000) GS:ffff890057280000(0000) knlGS:0000000000000000
Jun 29 18:28:42 hp-4300G kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 29 18:28:42 hp-4300G kernel: CR2: 00000000003a8290 CR3: 00000001020d0000 CR4: 0000000000350ee0
Jun 29 18:28:42 hp-4300G kernel: Call Trace:
Jun 29 18:28:42 hp-4300G kernel:  _raw_spin_lock_irqsave+0x39/0x50
Jun 29 18:28:42 hp-4300G kernel:  swiotlb_tbl_map_single+0x12b/0x4c0

Then that correlates with R11 holding the 'dma_io_tlb_mem' pointer and
RDX pointing at the spinlock. Yet RAX is holding junk :/

I agree that enabling KASAN would be a good idea, but I also think we
probably need to get some more information out of swiotlb_tbl_map_single()
to see what exactly is going wrong in there.

Will


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 11:52:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 11:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148181.273804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyYlz-0000D5-A7; Wed, 30 Jun 2021 11:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148181.273804; Wed, 30 Jun 2021 11:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyYlz-0000Cy-75; Wed, 30 Jun 2021 11:52:55 +0000
Received: by outflank-mailman (input) for mailman id 148181;
 Wed, 30 Jun 2021 11:52:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E+GI=LY=epam.com=prvs=7815400710=sergiy_kibrik@srs-us1.protection.inumbo.net>)
 id 1lyYlx-0000CY-Rk
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 11:52:54 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47d5ea32-0d0e-45f8-87c5-11498747ee5e;
 Wed, 30 Jun 2021 11:52:53 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 15UBkj55000314; Wed, 30 Jun 2021 11:52:51 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2059.outbound.protection.outlook.com [104.47.2.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 39gr1k80xk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 30 Jun 2021 11:52:51 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::8)
 by VI1PR0302MB3437.eurprd03.prod.outlook.com (2603:10a6:803:20::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Wed, 30 Jun
 2021 11:52:48 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::24c9:276d:e56c:34d4]) by AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::24c9:276d:e56c:34d4%5]) with mapi id 15.20.4242.023; Wed, 30 Jun 2021
 11:52:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47d5ea32-0d0e-45f8-87c5-11498747ee5e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AmehO9Lr4oo4Nl5tSEuqFnl0kPwoooheDdPNp/YpFuHra63brUy4LjqCYcFeWfLG897YWKjZ8IBFfks2siSGMgWcVRCTKT7SniteQzw2H/FywBP1vbBINmjJr7m6ijjrr37LZk3Y0jkBllvlh71fwU/UgJdkG6ifl30My4dLH87kAgQnncRADgEQ8OwYRmowmjLCJ2rOgoonx/VGFKawThO3FlrFwTvR1JwxZGvmXrDnIK0FCIVsXdtrp5zET5I3YsPDlsDbHmRHIVnK1MTbKJ7BEgxxagzKaUzxyuw0EzcFkfDCs2yef2CJrwrYc/ad40nC8A/UoLiiHI1/nYM50g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qkKWOIBLkgshZvIsBGS/x4npvpbIkst102jI46Ov3D4=;
 b=XsvQDCDhSkvddNqzzfwUsc54n31srq9gFvcqq3OrVHGLqxyjo8kJlGgF5BU6C4ZBPok7eEkuKENnk4IQLapveU1x/C/exAFc1Q9A3IVZN1VGat1JWmrc+XVmyBPtMNEeE33X/Nh0z+pCWsRtJTJ/bt6Pvxjbb3yKVsWObkFUmVNvZEHtvwGyONPSkyZugDV7SVYmTCrzg37cbC86okDb9Wa9B/VXsSdZltHtYw+F7dZ1B1PHiNMTjcqgBGVe3GCxRlZsQGZGxeOyNMaCGMzIRRuWHYIgn9UOaP+oxHcVVvVTRHzURpvigibdZk7mdhrbJ8CuQ8Ty+2V0Yca8g134NA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qkKWOIBLkgshZvIsBGS/x4npvpbIkst102jI46Ov3D4=;
 b=qlBGvruciRCgf1vC+ndFTFUGYbRBYGKY+4mldZBnfb3ML+VDlsaQJXB1YbyVQIVh0SsnZp5tP1msp36rlVbhXf2LCDlgWtxQDTqMcDizHY2KEV8IY7HE7mSKs2JYSO6tb7dM24YZIGnYjQ8bPllbwMQJyPgAfYIDKVVYY+4DtS/Va1tykeQt1r2WKgybxquEoo5ir1QJa/2Daoi74br4pAUNoTU5zdBUJCbyr8S6mmUPHbTUpGuLzsXs5BaQF4iQ2cgpNlen5ELYilD0kLIKaYAVgbT8QT8WexrT+9732D5TtZ4qG00tN8+0l7B3B8fibawT3+0T7+L31RPbUxQehQ==
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [XEN PATCH v2] libxl/arm: provide guests with random seed
Thread-Topic: [XEN PATCH v2] libxl/arm: provide guests with random seed
Thread-Index: AQHXWHmL+0MzhpKbOUCds4qOsNnv3qssf+sAgAAbmYA=
Date: Wed, 30 Jun 2021 11:52:48 +0000
Message-ID: 
 <AM9PR03MB683611963AAB15208D1EECB3F0019@AM9PR03MB6836.eurprd03.prod.outlook.com>
References: <20210527085233.69917-1-Sergiy_Kibrik@epam.com>
 <78b26e15-a3ca-e218-9afa-9f443e234260@xen.org>
 <658a814d-2a99-00ff-8855-134a1e707e97@xen.org>
In-Reply-To: <658a814d-2a99-00ff-8855-134a1e707e97@xen.org>
Accept-Language: en-US
Content-Language: ru-RU
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [91.206.32.135]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 09f24d7b-a22a-4dcd-588a-08d93bbd952b
x-ms-traffictypediagnostic: VI1PR0302MB3437:
x-microsoft-antispam-prvs: 
 <VI1PR0302MB34370564193C1C20A31F6831F0019@VI1PR0302MB3437.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 tzBwWCDvLpxoZLljtnX/W6/b/n/WknhaOYrvPgmAFbWhA0PghIY+/vSR8cOrYFS6vxMve4dL0yHK4fqUKTodBVcFidWZ1QVINf53/CJKRYUdJnk9vFlHyrgnEjAlkpdSpjpvmvrdBq4AKqCR1m8fF+/gGqM1BTWXSFH7dUUEQD3YkVWcidkDkBNF1OI5osO100gortdwX3Pn0Zar7T1wWKoocrKBR9u1B2ENd/4voKNLdFm36y6ZuUYT+S4UwTPVkzy3NK/3jdRQrqX7IeAjvjM2Moh4HvOXrWvhwLcldh/o58dIcB1vsPlRJfEf6EOb+zdrpltGFbd3z8sjYqVVO6ZO6sCdlfuh2B/WJOmG2GfylJtcL+aQRtZdrOV7ehODZZTVhLL2XSP7eHIPqMixYzRh3xJCZM9YffyRlkpgWhCAekhWjCsqkZ6RIY3sIkGlwSYrKSAeNI4hUUxoowt3BPJ46vSfEwe/pBQNr2ff9L45bt2g16HsXrXM9SCvSCkdNxyiuEWyFMq/R6/XOLv+LJwr8xraOQvTTKyT83aqHW7c5iWCaK5aa/31gGiPi01YSn2sBW2S2w0A+exKW4WDEBO/HZ2gflbvD2UIFYLLaZJ/ncBoKyi6k15L/VF/ftroTtHSUaulwWiT9tmTyaKaTQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM9PR03MB6836.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(39860400002)(136003)(366004)(71200400001)(52536014)(6506007)(38100700002)(53546011)(5660300002)(186003)(7696005)(26005)(9686003)(122000001)(55016002)(4326008)(54906003)(110136005)(4744005)(2906002)(316002)(478600001)(8936002)(33656002)(66946007)(76116006)(66556008)(86362001)(64756008)(66446008)(8676002)(66476007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?utf-8?B?alV6VHd1cmcyVWZrZ2NMcm0zeFpMOFNTdmpvYll4Tnd0QWc3TmRicnNsWkUw?=
 =?utf-8?B?Z1dvUHlqdnQ3UVh0K1ZIS20rUjJkM3V6WHdISVpCeGV2UUxURnFSOTdRYzUr?=
 =?utf-8?B?dXB3WXE2NTM3VldMMzI1VC9kWElsbUl1NVc2SWtEcUhIYUtTNnZqRDlEMlhN?=
 =?utf-8?B?aTlBN1NRM2xJckdhWmQ5MVFiSmxEMUI1MW9VZ1l6a1BSOFpHVExwK2ViUTNk?=
 =?utf-8?B?Nk5rSHR4d2JUUVJoTVh4ZGdrWmZYNjNScHlmendkN0c5dWJ3Z2Z5UG1hL2Uy?=
 =?utf-8?B?cy9aZHBrOWt3VTdLOTJTQ203aERTSVhFVEJZOWExa3lydW4vanpvRldKUXp1?=
 =?utf-8?B?Qmg1TFhMV3hBQ2wxWFlMZ3BEL0NwV01lWHFzRHZLSVM3N2ErR28vT3laSUlH?=
 =?utf-8?B?TTFYNDZyWHBIRGRkTkNLYUpOTitNUkdrM2xYRnZlMmpjL1pXclQ0RGhVYUEw?=
 =?utf-8?B?eTdkUFJkSXM0eGlTalp4M05MeGdySlUyZW95MUFpc0x1QkJoQ2VzUTh5UVZX?=
 =?utf-8?B?enFHQkFoZWg5YzFqZHhsKzlkelRUSCtMRmV4R2tyVWdIQmpoQWtoRDhXT25J?=
 =?utf-8?B?S1Z2TDhFTWlOYmJDTDkvOElDcG9TeFIrSkpPSENyYkI5RXVTTXJ3M3IzNlNn?=
 =?utf-8?B?Sm5naWtRc2x0NlEvZTJXc0JPdHdRb3RpbEZkOWsrbk5zYmFyaXNNVk9hWEZG?=
 =?utf-8?B?TGtmc0lrWWdyL0NUNjZtVTZkK2F3Q1Z1czFWUktTUTBaTVZ4OTZnWmtNcGI4?=
 =?utf-8?B?OFBhMklkOHVpM3N4NVJZOGF0M0pkUkRPQjhoVlg5bVdLbmtheW4xc1ZUOFFO?=
 =?utf-8?B?M0hHbVBHQXNhZnMwbWlPeDlPclRObytncGs3UWlwSzMwbjQxd1lPRlhldFph?=
 =?utf-8?B?M0NWZWVIV3RwM0dhMW9ab3ErK1FSZXo5cUxjcnZNQnMrZGNSa3p3NGN6SEYx?=
 =?utf-8?B?aXdHWHo2clRjYXRjS2JkSFZodWoydjJsQlEvMjNFbjFhamFOOXJsWVF1eHo4?=
 =?utf-8?B?RnV1aEIyME0velduajVwc0NNTkZvem1pR24zbXhGa2liV2NwSTBROHRUZXdW?=
 =?utf-8?B?dkt2RjJqdEdWcVhXdHdSbm4wUnVKc3QwNzRQR2kyMEhwcUFxTjBFSllGby8w?=
 =?utf-8?B?bkU4a2QrbWM1c1ZiZkZqeWlJdWZUU0YvUXRlQ2hmdFg1R1RsVDVQVVpsbldz?=
 =?utf-8?B?ZnNTVlljRnViYUZ6QkRiY0kvcHRyeG0yMTFGaWJ3NmNMZlhYTXdqMGpsazdq?=
 =?utf-8?B?TjFjdFFnRlhoWXBUOGx1aThMZDhaTWFVVTNObGdaTkFUVW9Iak9nMm53S3RS?=
 =?utf-8?B?NWs5ZWFaeFRDaXlsWG5RWTBSdXdCRFp6ZTMzTGRQSXRDS1kyalE3RGo3UEQv?=
 =?utf-8?B?dmgzbHlWRlh1cDRUZS9MeUxmQUFuNXQydndOOUsrUW1LT2g1YU45cjdoYksy?=
 =?utf-8?B?bmNhbDljZnN4U1BXcjZmOXNhRzV5MDJ0WWEydnJGWmlqejZVT1l4bU1SaFRI?=
 =?utf-8?B?VVpTRmN2NHM2TUNsV1FhemN5bDZ6OW9iV3g4aVJ1bVVCcWE1QnYzMWI2bXZm?=
 =?utf-8?B?ZnBNbjFQajh6WHJTMG9QRmZVaTJ3bDBMdE4xOEJDUkt2azJWcndINXh2M1FO?=
 =?utf-8?B?dU9JU2EwWk5tWXlCaVZrVlZETUYzMUJ6eDVCalZGSkNZbGFlY053eXBTYzNa?=
 =?utf-8?B?ZFRSZ0svUUVEYUJieXRITGlrWGtheFF6QmkyZVNxSUc4TDZuckIzUVg2OWgy?=
 =?utf-8?Q?cf1O1ZErGzsxwxZmSw=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM9PR03MB6836.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 09f24d7b-a22a-4dcd-588a-08d93bbd952b
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2021 11:52:48.5785
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 2xhRdGyIKZeM4IgQxngF5lD0F+MWkdv2+AGyiKQbHX4l5UqGvy8ubJumgsTJhga6OztzuaWyy9qJZnbWhzx/mw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB3437

> On 03/06/2021 14:08, Julien Grall wrote:
[..]
> 
> This has been sitting on the ML for quite a while. I was going to commit it as
> this looks uncontroversial but the patch doesn't apply on the lastest Xen
> (tools/libxl was moved in tools/libs/light).
> 
> @Sergiy, would you be able to rebase the patch?
> 

Sure, I'll post rebased version.

--
Regards,
  Sergiy


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 12:44:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 12:44:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148187.273815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyZZs-00054x-Hq; Wed, 30 Jun 2021 12:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148187.273815; Wed, 30 Jun 2021 12:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyZZs-00054q-Eb; Wed, 30 Jun 2021 12:44:28 +0000
Received: by outflank-mailman (input) for mailman id 148187;
 Wed, 30 Jun 2021 12:44:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyZZr-00054e-BF; Wed, 30 Jun 2021 12:44:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyZZr-0005NT-5Z; Wed, 30 Jun 2021 12:44:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyZZq-000849-PN; Wed, 30 Jun 2021 12:44:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyZZq-00076j-Ot; Wed, 30 Jun 2021 12:44:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1cpqL2Mq8Giqcq6CEMYOI/FvT1hGNK3IUCFPY35nGXU=; b=6wqVs38FTYpZxm8dyaWA6UDlm5
	IorrPOSJNNCW6SPQuLm26bKbll9j8Be1zmjQiiagbeDZ0HoiRtR2nW72oE+KGoYmSL79hMTLrF3qw
	2zu4HwbkyHlIwX4MyXqjsAoKhPXVH1mJNDAL5fMaL+Fsv78JR4FvQgaR/7bTN7sHCbWg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163192-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163192: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=580b11201ed001f9533c6782ec87d430b1736040
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 12:44:26 +0000

flight 163192 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163192/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 580b11201ed001f9533c6782ec87d430b1736040
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   26 days
Failing since        162368  2021-06-04 15:42:59 Z   25 days   66 attempts
Testing same since   163192  2021-06-30 05:20:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Daoxiang Li <daoxiang.li@intel.com>
  Dov Murik <dovmurik@linux.ibm.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Loo Tung Lun <tung.lun.loo@intel.com>
  Loo, Tung Lun <tung.lun.loo@intel.com>
  Manickavasakam Karpagavinayagam <manickavasakamk@ami.com>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Sunil V L <sunilvl@ventanamicro.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2917 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 13:56:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 13:56:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148191.273829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyahc-00035P-V1; Wed, 30 Jun 2021 13:56:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148191.273829; Wed, 30 Jun 2021 13:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyahc-00035I-Qu; Wed, 30 Jun 2021 13:56:32 +0000
Received: by outflank-mailman (input) for mailman id 148191;
 Wed, 30 Jun 2021 13:56:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EmAy=LY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lyaha-00035C-T9
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 13:56:30 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.111.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2638aaa1-5d94-4f69-994e-58be09ede02b;
 Wed, 30 Jun 2021 13:56:29 +0000 (UTC)
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2058.outbound.protection.outlook.com [104.47.10.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-8-npfOnX-6M9S27Ct96yTMpQ-2;
 Wed, 30 Jun 2021 15:56:27 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7040.eurprd04.prod.outlook.com (2603:10a6:800:121::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.20; Wed, 30 Jun
 2021 13:56:24 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::99d3:99cd:8adf:3eea%5]) with mapi id 15.20.4264.026; Wed, 30 Jun 2021
 13:56:23 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR3P191CA0047.EURP191.PROD.OUTLOOK.COM (2603:10a6:102:55::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4264.20 via Frontend Transport; Wed, 30 Jun 2021 13:56:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2638aaa1-5d94-4f69-994e-58be09ede02b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1625061388;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=yjRVhFyA1kvj1/ITXdYyeVhmLbnmD3z33wuncL4z6rM=;
	b=SThNfZ+YAwZ2vVobzuJZZVXfe+Qve7oEA1vi2GPDWrTLLOktG6H1/xXmsOJJZ/YhgrIIFd
	uEvZOpqH2oAnG3mAr9NCOZSc5bFGx3RBmH82kdzv8lUpZlY2av3cKX8eK2i9zauAolOyWP
	woh3OurYi3NmBKPRm1j4c3HpXUOay+A=
X-MC-Unique: npfOnX-6M9S27Ct96yTMpQ-2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=suse.com;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libxenguest: restrict PV guest size
Message-ID: <68f5c66d-a908-b2cb-c292-73b1a0efa472@suse.com>
Date: Wed, 30 Jun 2021 15:56:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR3P191CA0047.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:102:55::22) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c25eaa45-c976-4d48-dfc4-08d93bced884
X-MS-TrafficTypeDiagnostic: VI1PR04MB7040:
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c25eaa45-c976-4d48-dfc4-08d93bced884
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 13:56:23.4112
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: p+N4NUXc8qrgu4VpYKh2i/joOrYXQj4lLEhA9mG2c13NlMt2rJ0bXngyPdd6wp9rXxQ8eg4IYyWwRdPiGqd3gw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7040

The P2M, the use of PFNs, and hence the maximum valid PFN are purely
software constructs in PV. In principle a guest is free to use arbitrary
PFNs. However, at least page table normalization requires that PFN space
be, like MFN space, limited to the architectural 40 bits (52 address
bits). And of course a 32-bit tool stack places further constraints.

Bounding the values also makes sure that various subsequent calculations
won't truncate values and then continue with inconsistencies (see e.g.
fl_entries vs ctx->x86.pv.p2m_frames in map_p2m_tree()).

While there, also correct an adjacent error message with wrong-way-round
wording in the restore code, and another slightly malformed and misleading
(off-by-one) message in the core dumping code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
In case the save/restore changes don't make it obvious enough: It escapes
me why struct xc_sr_rec_x86_pv_p2m_frames has p2m_pfns[] with uint64_t
element type but {start,end}_pfn both as uint32_t. Imo all three can
sensibly only ever be of the same type.

--- a/tools/include/xen-tools/libs.h
+++ b/tools/include/xen-tools/libs.h
@@ -13,6 +13,10 @@
 #define ARRAY_SIZE(a) (sizeof(a) / sizeof(*a))
 #endif
 
+#ifndef sizeof_field
+#define sizeof_field(type, field) sizeof(((type *)0)->field)
+#endif
+
 #ifndef MAX
 #define MAX(x, y) ((x) > (y) ? (x) : (y))
 #endif
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -59,6 +59,43 @@ xc_core_arch_memory_map_get(xc_interface
     if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
         return -1;
 
+    if ( !p2m_size )
+    {
+        ERROR("Cannot map a guest without P2M");
+        errno = ENODATA;
+        return -1;
+    }
+
+    if ( !info->hvm )
+    {
+        unsigned int guest_width;
+
+        if ( xc_domain_get_guest_width(xch, info->domid, &guest_width) != 0 )
+        {
+            PERROR("Cannot get address size for PV guest");
+            return -1;
+        }
+
+        if ( p2m_size == (guest_width > 4 ? ~0UL : ~0U) )
+        {
+            ERROR("Cannot map a PV guest with invalid P2M");
+            errno = ENODATA;
+            return -1;
+        }
+    }
+
+#ifndef __i386__
+    if ( (p2m_size - 1) >> 40 )
+#else
+    /* Very large domains (> 1TB) will exhaust virtual address space. */
+    if ( (p2m_size - 1) >> 28 )
+#endif
+    {
+        ERROR("Cannot map a guest with P2M size %#lx", p2m_size);
+        errno = EOPNOTSUPP;
+        return -1;
+    }
+
     map = malloc(sizeof(*map));
     if ( map == NULL )
     {
@@ -333,10 +370,30 @@ xc_core_arch_map_p2m_rw(xc_interface *xc
 
     if ( dinfo->p2m_size < info->nr_pages  )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("p2m_size < nr_pages (%lx < %lx)", dinfo->p2m_size, info->nr_pages);
         goto out;
     }
 
+    if ( !info->hvm && dinfo->p2m_size == (dinfo->guest_width > 4 ? ~0UL : ~0U) )
+    {
+        ERROR("Cannot r/%c-map a PV guest with invalid P2M", rw ? 'w' : 'o');
+        errno = ENODATA;
+        return -1;
+    }
+
+#ifndef __i386__
+    if ( (dinfo->p2m_size - 1) >> 40 )
+#else
+    /* Very large domains (> 1TB) will exhaust virtual address space. */
+    if ( (dinfo->p2m_size - 1) >> 28 )
+#endif
+    {
+        ERROR("Cannot r/%c-map a guest with P2M size %#lx",
+              rw ? 'w' : 'o', dinfo->p2m_size);
+        errno = EOPNOTSUPP;
+        return -1;
+    }
+
     p2m_cr3 = GET_FIELD(live_shinfo, arch.p2m_cr3, dinfo->guest_width);
 
     p2m_frame_list = p2m_cr3 ? xc_core_arch_map_p2m_list_rw(xch, dinfo, dom, live_shinfo, p2m_cr3)
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -709,10 +709,23 @@ static int handle_x86_pv_p2m_frames(stru
         return -1;
     }
 
+#ifdef __i386__
+    /* Very large domains (> 1TB) will exhaust virtual address space. */
+    if ( data->end_pfn >> 28 )
+#elif 0 /* sizeof(data->end_pfn) > 4 */
+    if ( data->end_pfn >> (ctx->x86.pv.width > 4 ? 40 : 32) )
+#else
+    if ( 0 )
+#endif
+    {
+        ERROR("End pfn in stream (%#x) too large", data->end_pfn);
+        return -1;
+    }
+
     if ( data->start_pfn > data->end_pfn )
     {
-        ERROR("End pfn in stream (%#x) exceeds Start (%#x)",
-              data->end_pfn, data->start_pfn);
+        ERROR("Start pfn in stream (%#x) exceeds End (%#x)",
+              data->start_pfn, data->end_pfn);
         return -1;
     }
 
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -464,11 +464,40 @@ static int map_p2m_list(struct xc_sr_con
  */
 static int map_p2m(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     uint64_t p2m_cr3;
+    uint64_t max_pfn = GET_FIELD(ctx->x86.pv.shinfo, arch.max_pfn,
+                                 ctx->x86.pv.width);
+
+    if ( !max_pfn )
+    {
+        ERROR("Cannot save a guest without P2M");
+        errno = ENODATA;
+        return -1;
+    }
+
+    if ( max_pfn-- == (ctx->x86.pv.width > 4 ? ~0UL : ~0U) )
+    {
+        ERROR("Cannot save a guest with invalid P2M");
+        errno = ENODATA;
+        return -1;
+    }
+
+#ifndef __i386__
+    if ( max_pfn >> (sizeof_field(struct xc_sr_rec_x86_pv_p2m_frames,
+                                  end_pfn) > 4 ? 40 : 32) )
+#else
+    /* Very large domains (> 1TB) will exhaust virtual address space. */
+    if ( max_pfn >> 28 )
+#endif
+    {
+        ERROR("Cannot save a guest with maximum PFN %#"PRIx64, max_pfn);
+        errno = EOPNOTSUPP;
+        return -1;
+    }
 
     ctx->x86.pv.p2m_generation = ~0ULL;
-    ctx->x86.pv.max_pfn = GET_FIELD(ctx->x86.pv.shinfo, arch.max_pfn,
-                                    ctx->x86.pv.width) - 1;
+    ctx->x86.pv.max_pfn = max_pfn;
     p2m_cr3 = GET_FIELD(ctx->x86.pv.shinfo, arch.p2m_cr3, ctx->x86.pv.width);
 
     return p2m_cr3 ? map_p2m_list(ctx, p2m_cr3) : map_p2m_tree(ctx);



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 14:22:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 14:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148194.273840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyb6P-0006H2-1m; Wed, 30 Jun 2021 14:22:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148194.273840; Wed, 30 Jun 2021 14:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyb6O-0006Gv-Ua; Wed, 30 Jun 2021 14:22:08 +0000
Received: by outflank-mailman (input) for mailman id 148194;
 Wed, 30 Jun 2021 14:22:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9IgB=LY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lyb6M-0006Gp-Pd
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 14:22:06 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0bb21d7d-276b-40d7-8f4f-703b3540f9cb;
 Wed, 30 Jun 2021 14:22:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bb21d7d-276b-40d7-8f4f-703b3540f9cb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1625062925;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=QhK/gCUXmNcRJ3VqqlI5E6tfI53oZenKrcTVvJ4/hG8=;
  b=DHNigzmnVppd/RQ/ko8yn9mOO1oQ4xu0sUYUjPNn16+ww8WFpwsorDmZ
   vWqKpnVHOgsvnKdm7aphxm1mvgAgODeuWlZQ68ADn8MW+xxDpoPZLqpWE
   LD6CaJbcICmO6+/ygZzmqpefWJA8ZsXOQWLecFmiGv0NhictlM0n/xOhN
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 47283752
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,312,1616472000"; 
   d="scan'208";a="47283752"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QhK/gCUXmNcRJ3VqqlI5E6tfI53oZenKrcTVvJ4/hG8=;
 b=vPSumUGi/1+isfo7/QAmxwm/+VV8JjQPEv6AplsYmyDt8n+UE/zF4zW+ccwo7rju5gTBHnDnhRFmgPbE9pzOVAmjhSIC0V3wJKzNGU8pIUYchJBhqlSJ4pVp23vuZTQIVDm9SX0DNZ9K5vGWIKQClAYrxiDFc4c5L1r7i5I4GNs=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Roberto Bagnara <roberto.bagnara@bugseng.com>
References: <b791d89f-5c9d-9c04-00ed-0cbaae68536a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/shadow: drop callback_mask pseudo-variables
Message-ID: <cea3493c-7794-c722-5255-f36d3869d2e9@citrix.com>
Date: Wed, 30 Jun 2021 15:12:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
In-Reply-To: <b791d89f-5c9d-9c04-00ed-0cbaae68536a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0484.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 691bd038-5901-4c9a-3c8c-08d93bd11630
X-MS-TrafficTypeDiagnostic: BYAPR03MB4357:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4357AC96202FAC9CA56F11CBBA019@BYAPR03MB4357.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:913;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 691bd038-5901-4c9a-3c8c-08d93bd11630
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 14:12:26.0876
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: eXxsJlSm4yUUBgM5L56wSlAmXAqR4LnKKwDM8FlQc/Pvw2dT0WiA7onTn2HcgXgqL08k6edP0/q9hw8hE/6jew6+K1Rc6ysUsqgbhLW86eQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4357
X-OriginatorOrg: citrix.com

On 30/06/2021 07:42, Jan Beulich wrote:
> In commit 90629587e16e ("x86/shadow: replace stale literal numbers in
> hash_{vcpu,domain}_foreach()") I had to work around a clang shortcoming
> (if you like), leveraging that gcc tolerates static const variables in
> otherwise integer constant expressions. Roberto suggests that we'd
> better not rely on such behavior. Drop the involved static const-s,
> using their "expansions" in both of the prior use sites each. This then
> allows dropping the short-circuiting of the check for clang.
>
> Requested-by: Roberto Bagnara <roberto.bagnara@bugseng.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

It is not fair to categorise this as a shortcoming in clang. C mandates
an ICE in _Static_assert(). You tried to use a GCC
extension/implementation detail which Clang doesn't implement.

With a suitable change to the commit message, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 15:28:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 15:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148198.273852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyc8V-0003bD-56; Wed, 30 Jun 2021 15:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148198.273852; Wed, 30 Jun 2021 15:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyc8U-0003b6-W3; Wed, 30 Jun 2021 15:28:22 +0000
Received: by outflank-mailman (input) for mailman id 148198;
 Wed, 30 Jun 2021 15:28:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyc8U-0003aw-9y; Wed, 30 Jun 2021 15:28:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyc8T-00088m-W9; Wed, 30 Jun 2021 15:28:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyc8T-0008GI-Ii; Wed, 30 Jun 2021 15:28:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyc8T-0002Fz-IG; Wed, 30 Jun 2021 15:28:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g6M9dFDQwyOwPYrv/2BUiMqAASrGLQejInOkgFY+As8=; b=OaDgIFtrkOISIBntS7SdHnA4l8
	Mql7IQi4TckMTxX6AHeE0dciaAtbUYHucXNzbwDSzlICKmh+joONt8PanJM90YgHcppP3T3mj2vNO
	S7NBzmUmq1873VNCDARwoXEsYC5e2umy7iO2c3iEOpbSQs81nrpk30f5HLweHQ6bh39E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163190-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 163190: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f95b7b37cfc6d4613721df9357090d14712013c0
X-Osstest-Versions-That:
    xen=f8582da0417660269bec69e399f8667f761e886b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 15:28:21 +0000

flight 163190 xen-unstable real [real]
flight 163199 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163190/
http://logs.test-lab.xenproject.org/osstest/logs/163199/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 163199-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 163184
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 163184
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 163184
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 163184
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 163184
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 163184
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 163184
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 163184
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 163184
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 163184
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 163184
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f95b7b37cfc6d4613721df9357090d14712013c0
baseline version:
 xen                  f8582da0417660269bec69e399f8667f761e886b

Last test of basis   163184  2021-06-29 13:38:00 Z    1 days
Testing same since   163190  2021-06-30 02:48:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f8582da041..f95b7b37cf  f95b7b37cfc6d4613721df9357090d14712013c0 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 15:57:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 15:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148205.273877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lycaB-0006nh-J7; Wed, 30 Jun 2021 15:56:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148205.273877; Wed, 30 Jun 2021 15:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lycaB-0006na-FR; Wed, 30 Jun 2021 15:56:59 +0000
Received: by outflank-mailman (input) for mailman id 148205;
 Wed, 30 Jun 2021 15:56:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rlqy=LY=kernel.org=nathan@srs-us1.protection.inumbo.net>)
 id 1lyca9-0006nE-VX
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 15:56:58 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b38f6fb-fcf5-4519-af52-f6f126b4b42b;
 Wed, 30 Jun 2021 15:56:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3EC0061396;
 Wed, 30 Jun 2021 15:56:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b38f6fb-fcf5-4519-af52-f6f126b4b42b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1625068615;
	bh=t4c1D/KrdZudddoFFMreAQi55GnAG0OUL4mpRujr/+8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=nFQR+NYjpCrpMIj0+9nBAj35wlj1qXRWbFKtyhNDLpJTAm3BAAOI+jIUn7QIQ6LMC
	 /D08b/Wyq1LKmfD6NJJYuuUXm1XpHRMF+bzag+U7SE0Xnf1l+0yaFFUCUoKE4yB6Fx
	 KHCUGF9UUbr1kSihYQ7S2P6QTrW/0sw9J8RiF/5y7S55QWGDrjBUVqbIJpceKtB839
	 13XYMsllNYSyOsjMPZqtEclitRsqTPicPLIWpPGvJ9tnj6RZjijLVBtTMriTuGUDnx
	 pujXO32GnzzUvzSsthpj3FxbtH2/lBtbWkeBfV4uT8mpQQ+qjMcUVFz7XO5LQkzFA7
	 Ko9guip4RPSmA==
Date: Wed, 30 Jun 2021 08:56:51 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Will Deacon <will@kernel.org>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com,
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Qian Cai <quic_qiancai@quicinc.com>
Subject: Re: [PATCH v15 06/12] swiotlb: Use is_swiotlb_force_bounce for
 swiotlb data bouncing
Message-ID: <YNyUQwiagNeZ9YeJ@Ryzen-9-3900X.localdomain>
References: <20210624155526.2775863-1-tientzu@chromium.org>
 <20210624155526.2775863-7-tientzu@chromium.org>
 <YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain>
 <CALiNf2-a-haQN0-4+gX8+wa++52-0CnO2O4BEkxrQCxoTa_47w@mail.gmail.com>
 <20210630114348.GA8383@willie-the-truck>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210630114348.GA8383@willie-the-truck>

Hi Will and Claire,

On Wed, Jun 30, 2021 at 12:43:48PM +0100, Will Deacon wrote:
> On Wed, Jun 30, 2021 at 05:17:27PM +0800, Claire Chang wrote:
> > On Wed, Jun 30, 2021 at 9:43 AM Nathan Chancellor <nathan@kernel.org> wrote:
> > >
> > > On Thu, Jun 24, 2021 at 11:55:20PM +0800, Claire Chang wrote:
> > > > Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> > > > use it to determine whether to bounce the data or not. This will be
> > > > useful later to allow for different pools.
> > > >
> > > > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > > > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > > > Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> > > > Tested-by: Will Deacon <will@kernel.org>
> > > > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> > >
> > > This patch as commit af452ec1b1a3 ("swiotlb: Use is_swiotlb_force_bounce
> > > for swiotlb data bouncing") causes my Ryzen 3 4300G system to fail to
> > > get to an X session consistently (although not every single time),
> > > presumably due to a crash in the AMDGPU driver that I see in dmesg.
> > >
> > > I have attached logs at af452ec1b1a3 and f127c9556a8e and I am happy
> > > to provide any further information, debug, or test patches as necessary.
> > 
> > Are you using swiotlb=force? or the swiotlb_map is called because of
> > !dma_capable? (https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/kernel/dma/direct.h#n93)
> 
> The command line is in the dmesg:
> 
>   | Kernel command line: initrd=\amd-ucode.img initrd=\initramfs-linux-next-llvm.img root=PARTUUID=8680aa0c-cf09-4a69-8cf3-970478040ee7 rw intel_pstate=no_hwp irqpoll
> 
> but I worry that this looks _very_ similar to the issue reported by Qian
> Cai which we thought we had fixed. Nathan -- is the failure deterministic?

Yes, for the most part. It does not happen on every single boot, so when
I was bisecting, I did a series of seven boots and only considered a
revision good when all seven of them made it to LightDM's greeter. The
results I noted show that most bad revisions failed anywhere from four
to six times out of seven.

> > `BUG: unable to handle page fault for address: 00000000003a8290` and
> > the fact it crashed at `_raw_spin_lock_irqsave` look like the memory
> > (maybe dev->dma_io_tlb_mem) was corrupted?
> > The dev->dma_io_tlb_mem should be set here
> > (https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/pci/probe.c#n2528)
> > through device_initialize.
> 
> I'm less sure about this. 'dma_io_tlb_mem' should be pointing at
> 'io_tlb_default_mem', which is a page-aligned allocation from memblock.
> The spinlock is at offset 0x24 in that structure, and looking at the
> register dump from the crash:
> 
> Jun 29 18:28:42 hp-4300G kernel: RSP: 0018:ffffadb4013db9e8 EFLAGS: 00010006
> Jun 29 18:28:42 hp-4300G kernel: RAX: 00000000003a8290 RBX: 0000000000000000 RCX: ffff8900572ad580
> Jun 29 18:28:42 hp-4300G kernel: RDX: ffff89005653f024 RSI: 00000000000c0000 RDI: 0000000000001d17
> Jun 29 18:28:42 hp-4300G kernel: RBP: 000000000a20d000 R08: 00000000000c0000 R09: 0000000000000000
> Jun 29 18:28:42 hp-4300G kernel: R10: 000000000a20d000 R11: ffff89005653f000 R12: 0000000000000212
> Jun 29 18:28:42 hp-4300G kernel: R13: 0000000000001000 R14: 0000000000000002 R15: 0000000000200000
> Jun 29 18:28:42 hp-4300G kernel: FS:  00007f1f8898ea40(0000) GS:ffff890057280000(0000) knlGS:0000000000000000
> Jun 29 18:28:42 hp-4300G kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> Jun 29 18:28:42 hp-4300G kernel: CR2: 00000000003a8290 CR3: 00000001020d0000 CR4: 0000000000350ee0
> Jun 29 18:28:42 hp-4300G kernel: Call Trace:
> Jun 29 18:28:42 hp-4300G kernel:  _raw_spin_lock_irqsave+0x39/0x50
> Jun 29 18:28:42 hp-4300G kernel:  swiotlb_tbl_map_single+0x12b/0x4c0
> 
> Then that correlates with R11 holding the 'dma_io_tlb_mem' pointer and
> RDX pointing at the spinlock. Yet RAX is holding junk :/
> 
> I agree that enabling KASAN would be a good idea, but I also think we
> probably need to get some more information out of swiotlb_tbl_map_single()
> to see what exactly is going wrong in there.

I can certainly enable KASAN and if there is any debug print I can add
or dump anything, let me know!

Cheers,
Nathan


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 17:44:34 2021
Subject: Re: [PATCH 2/9] xen/arm: introduce PGC_reserved
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org, jbeulich@suse.com
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-3-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1f1c1567-1a53-3b6a-2868-b7673d9180b3@xen.org>
Date: Wed, 30 Jun 2021 18:44:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607024318.3988467-3-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 07/06/2021 03:43, Penny Zheng wrote:
> In order to differentiate pages of static memory from those allocated from
> the heap, this patch introduces a new page flag, PGC_reserved.

I would prefer if this patch were folded into the patch that first uses it. 
This will make it easier to understand how the flag is used.

Cheers,

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> changes v2:
> - remove unused reserved field in struct page_info
> - remove unused helper page_get_reserved_owner and page_set_reserved_owner
> ---
>   xen/include/asm-arm/mm.h | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 0b7de3102e..7034fae1b6 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -108,6 +108,9 @@ struct page_info
>     /* Page is Xen heap? */
>   #define _PGC_xen_heap     PG_shift(2)
>   #define PGC_xen_heap      PG_mask(1, 2)
> +  /* Page is reserved */
> +#define _PGC_reserved     PG_shift(3)
> +#define PGC_reserved      PG_mask(1, 3)
>   /* ... */
>   /* Page is broken? */
>   #define _PGC_broken       PG_shift(7)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 17:45:17 2021
Subject: Re: [PATCH 3/9] xen/arm: introduce CONFIG_STATIC_ALLOCATION
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org, jbeulich@suse.com
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-4-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2b604aba-1d12-7957-ad9b-114f6ad1f857@xen.org>
Date: Wed, 30 Jun 2021 18:45:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607024318.3988467-4-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 07/06/2021 03:43, Penny Zheng wrote:
> For now, since the Domain on Static Allocation feature is only supported
> on the Arm architecture, this commit introduces a new CONFIG_STATIC_ALLOCATION
> option to avoid bringing dead code into other architectures.

Similarly to patch #2, I think it would be better to introduce this 
Kconfig option when it is used, or after the common code is introduced. 
This would prevent a dead Kconfig option.

Cheers,

> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> changes v2:
> - new commit
> ---
>   xen/arch/arm/Kconfig | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ecfa6822e4..f165db8ecd 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -278,6 +278,9 @@ config ARM64_ERRATUM_1286807
>   
>   	  If unsure, say Y.
>   
> +config STATIC_ALLOCATION
> +    def_bool y
> +
>   endmenu
>   
>   config ARM64_HARDEN_BRANCH_PREDICTOR
> 
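
For illustration only (a hypothetical placement, not part of the series): one
way to avoid a dead option is to make the symbol non-user-visible in common
code and have the supporting architecture select it once the common code
exists, along the lines of:

```kconfig
# xen/common/Kconfig (hypothetical placement)
config STATIC_ALLOCATION
	bool
	help
	  Selected by architectures supporting Domain on Static Allocation.

# xen/arch/arm/Kconfig (hypothetical hunk)
config ARM
	select STATIC_ALLOCATION
```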

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 17:46:06 2021
Subject: Re: [PATCH 4/9] xen/arm: static memory initialization
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-5-penny.zheng@arm.com>
 <e0a312a1-f430-3ff0-6dd6-fcfe18e58071@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7f77349f-015e-83d3-d646-af9897e31348@xen.org>
Date: Wed, 30 Jun 2021 18:46:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <e0a312a1-f430-3ff0-6dd6-fcfe18e58071@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 10/06/2021 10:35, Jan Beulich wrote:
> On 07.06.2021 04:43, Penny Zheng wrote:
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -611,6 +611,30 @@ static void __init init_pdx(void)
>>       }
>>   }
>>   
>> +/* Static memory initialization */
>> +static void __init init_staticmem_pages(void)
>> +{
>> +    int bank;
> 
> While I'm not a maintainer of this code, I'd still like to point out
> that wherever possible we prefer "unsigned int" when dealing with
> only non-negative values, and even more so when using them as array
> indexes.

+1.

> 
>> +    /*
>> +     * TODO: Considering NUMA-support scenario.
>> +     */
> 
> Nit: Comment style.
> 
>> @@ -872,6 +896,9 @@ void __init start_xen(unsigned long boot_phys_offset,
>>       cmdline_parse(cmdline);
>>   
>>       setup_mm();
>> +    /* If exists, Static Memory Initialization. */
>> +    if ( bootinfo.static_mem.nr_banks > 0 )
>> +        init_staticmem_pages();
> 
> I don't think the conditional is really needed here?
> 
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -1376,6 +1376,37 @@ bool scrub_free_pages(void)
>>       return node_to_scrub(false) != NUMA_NO_NODE;
>>   }
>>   
>> +static void free_page(struct page_info *pg, bool need_scrub)
>> +{
>> +    mfn_t mfn = page_to_mfn(pg);
> 
> With pdx compression this is a non-trivial conversion. The function
> being an internal helper and the caller already holding the MFN, I
> think it would be preferable if the MFN was passed in here. If done
> this way, you may want to consider adding an ASSERT() to double
> check both passed in arguments match up.
> 
>> +    /* If a page has no owner it will need no safety TLB flush. */
>> +    pg->u.free.need_tlbflush = (page_get_owner(pg) != NULL);
>> +    if ( pg->u.free.need_tlbflush )
>> +        page_set_tlbflush_timestamp(pg);
>> +
>> +    /* This page is not a guest frame any more. */
>> +    page_set_owner(pg, NULL); /* set_gpfn_from_mfn snoops pg owner */
>> +    set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
>> +
>> +#ifdef CONFIG_ARM
> 
> If avoidable there should be no arch-specific code added to this
> file. Assuming another arch gained PGC_reserved, what's wrong with
> enabling this code right away for them as well? I.e. use
> PGC_reserved here instead of CONFIG_ARM? Alternatively this may
> want to be CONFIG_STATIC_ALLOCATION, assuming we consider
> PGC_reserved tied to it.
> 
>> +    if ( pg->count_info & PGC_reserved )
>> +    {
>> +        /* TODO: asynchronous scrubbing. */
>> +        if ( need_scrub )
>> +            scrub_one_page(pg);
>> +        return;
>> +    }
>> +#endif
>> +    if ( need_scrub )
> 
> Nit: Please have a blank line between these last two.
> 
>> +    {
>> +        pg->count_info |= PGC_need_scrub;
>> +        poison_one_page(pg);
>> +    }
>> +
>> +    return;
> 
> Please omit return statements at the end of functions returning void.
> 
>> +}
> 
> On the whole, bike shedding or not, I'm afraid the function's name
> doesn't match what it does: There's no freeing of a page here. What
> gets done is marking of a page as free. Hence maybe mark_page_free()
> or mark_free_page() or some such?
> 
>> @@ -1512,6 +1530,38 @@ static void free_heap_pages(
>>       spin_unlock(&heap_lock);
>>   }
>>   
>> +#ifdef CONFIG_STATIC_ALLOCATION
>> +/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
>> +void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
>> +                                 bool need_scrub)
>> +{
>> +    mfn_t mfn = page_to_mfn(pg);
>> +    unsigned long i;
>> +
>> +    for ( i = 0; i < nr_mfns; i++ )
>> +    {
>> +        switch ( pg[i].count_info & PGC_state )
>> +        {
>> +        case PGC_state_inuse:
>> +            BUG_ON(pg[i].count_info & PGC_broken);
>> +            /* Mark it free and reserved. */
>> +            pg[i].count_info = PGC_state_free | PGC_reserved;
>> +            break;
>> +
>> +        default:
>> +            printk(XENLOG_ERR
>> +                   "Page state shall be only in PGC_state_inuse. "
> 
> Why? A page (static or not) can become broken while in use. IOW I
> don't think you can avoid handling PGC_state_offlining here. At which
> point this code will match free_heap_pages()'es, and hence likely
> will want folding as well.
> 
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -85,6 +85,12 @@ bool scrub_free_pages(void);
>>   } while ( false )
>>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>>   
>> +#ifdef CONFIG_ARM
> 
> ITYM CONFIG_STATIC_ALLOCATION here?
> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 17:49:25 2021
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmuv1: Fixed stream matching register
 allocation
Date: Wed, 30 Jun 2021 17:49:10 +0000
Message-ID: <BE2AB42D-A896-4FFE-856C-DA494D8DF1C8@arm.com>
References:
 <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
 <11df0a34-724a-63ad-1822-4bd8aa364ab0@xen.org>
In-Reply-To: <11df0a34-724a-63ad-1822-4bd8aa364ab0@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 055b2dd2-1659-4772-cf89-08d93bef6251
x-ms-traffictypediagnostic: AM5PR0802MB2561:|DB8PR08MB5404:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB54046E837A3FE483C4ACD40BFC019@DB8PR08MB5404.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 K3mxKg2+2TD9qFerILDRjkEZq0jf4yc4h+Ml3k/+SpEVqqmEDlYfdOnIl5w6K/KrLRRnCkrcEeSRKjnw7DgeiRt2B+AXci8mXJ+yerI5h2kYvoEqpCXLWsWH7gWdYnxg4WiuSckOwTlMD0ajXhkDkf2A0GN/wrgjqeyH+HdwxEWHJLxFLmKvaJYXmMj6LQF1cITP1ZQ243VSQN5D1y8meMsOELzTQHsYT36E8mVNy7ULlhH5e178Gnis7wGxfiRHgwh8OwZJzs9Iy3s9VrN+DQLjSLfjXF3c8YeYy0LGNQa1nRoHVHjefV809GVh+C5LL+xBCLzGFxGnthaSuqiLbpCmfKo+3w7Re8exYkkUnVnyR52dL/lrpoijf5BHkXIyol2jxlbTuEmWCGwSvA/CxZW0gLTmc5PUj1feqfD/BBt2XpP3H6O8I/w5fLxfiHNoh4F47vhjZOqnRVx9ZnC+bTv4IYCXBMKVNB9De+iHJZV5S4pngbcPLHlus9OKWTVioVNrjjsQ9hflEktbi3EB2rUPg2nAFss31mBPVzFDX+0RQLqPnFUkvvbK5Rn1xyjk4vhi+JPkiw+tW+sENHolx65hzDk55oTohRLzfM7OwK4/hiIqEgtQsMixP16VlI3847myg455eurcKe5QyQe+1J4xgtdV+cNzdnrLQcPaRgk1OFqsbzPfGwZrGO/clWRApOrLm9aBpG+BvIa4+rY3kw==
X-Forefront-Antispam-Report-Untrusted:
Content-Type: text/plain; charset="us-ascii"
Content-ID: <7F657223C9F59F41B3F222575A141C42@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2561
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2021 17:49:18.1747
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 055b2dd2-1659-4772-cf89-08d93bef6251
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5404

Hi Julien,

> On 30 Jun 2021, at 10:09 am, Julien Grall <julien@xen.org> wrote:
>
> Hi Rahul,
>
> On 25/06/2021 17:37, Rahul Singh wrote:
>> SMR allocation should be based on the number of supported stream
>> matching registers for each SMMU device.
>> The issue was introduced by commit 5e08586afbb90b2e2d56c175c07db77a4afa873c,
>> which backported the patches from Linux to Xen to fix the stream match
>> conflict issue when two devices have the same stream ID.
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  xen/drivers/passthrough/arm/smmu.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>> index d9a3a0cbf6..da2cd457d7 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>> @@ -149,6 +149,7 @@ typedef enum irqreturn irqreturn_t;
>>  #define kzalloc(size, flags)		_xzalloc(size, sizeof(void *))
>>  #define devm_kzalloc(dev, size, flags)	_xzalloc(size, sizeof(void *))
>>  #define kmalloc_array(size, n, flags)	_xmalloc_array(size, sizeof(void *), n)
>> +#define kzalloc_array(size, n, flags)	_xzalloc_array(size, sizeof(void *), n)
>>    static void __iomem *devm_ioremap_resource(struct device *dev,
>>  					   struct resource *res)
>> @@ -2221,7 +2222,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
>>  		smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
>>    		/* Zero-initialised to mark as invalid */
>> -		smmu->smrs = devm_kzalloc(smmu->dev, sizeof(*smmu->smrs), GFP_KERNEL);
>> +		smmu->smrs = kzalloc_array(sizeof(*smmu->smrs), size, GFP_KERNEL);
>
> I noticed this is already in... However, I am a bit puzzled as to why this was switched from devm_kzalloc() to kzalloc_array(). This doesn't matter for Xen, as they are just wrappers around x*alloc(), but a mention in the commit message would have been useful.

Yes, we can use devm_kzalloc(..), but then we would have to pass (sizeof(*smmu->smrs) * size) as the size argument to devm_kzalloc(..).
I chose kzalloc_array() for better code readability, as the function name makes it clear we are allocating memory for an array.

>
> Also, when sending series, please remember to create a cover letter and number each patch.
>

Ok.

Regards,
Rahul
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Jun 30 18:00:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 18:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148233.273936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyeVU-0004NI-NR; Wed, 30 Jun 2021 18:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148233.273936; Wed, 30 Jun 2021 18:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyeVU-0004NB-JJ; Wed, 30 Jun 2021 18:00:16 +0000
Received: by outflank-mailman (input) for mailman id 148233;
 Wed, 30 Jun 2021 18:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyeVT-0004N5-1c
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 18:00:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyeVS-0002q8-02; Wed, 30 Jun 2021 18:00:14 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyeVR-0006Pc-PK; Wed, 30 Jun 2021 18:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=6NwnJyT75rAq/4tGb6rIRlERCRqDIOKV2CFDqtyDaOU=; b=lt1qK4JT3WDXSoDI/5Apvogh3T
	khE8tSSHSaGjauixdoWCtRf70KLamr/olHorswJiKYzg0cWpjERhaazCorncfbxZrq8hZUaleQ9HL
	ea0rqSBs704TEvPRSD+CD8qYLN/twfhaghrm5+LNT0TIyhF/crdS34npjnWYzS2dbzb4=;
Subject: Re: [PATCH] xen/arm: smmuv1: Fixed stream matching register
 allocation
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <612e7f61c19e60019bb7829888342fda95fd36be.1624546532.git.rahul.singh@arm.com>
 <11df0a34-724a-63ad-1822-4bd8aa364ab0@xen.org>
 <BE2AB42D-A896-4FFE-856C-DA494D8DF1C8@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f1a4cdb5-c525-8d6b-5f4d-7e2f2c090dcf@xen.org>
Date: Wed, 30 Jun 2021 19:00:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <BE2AB42D-A896-4FFE-856C-DA494D8DF1C8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/06/2021 18:49, Rahul Singh wrote:
> Hi Julien,

Hi,

>> On 30 Jun 2021, at 10:09 am, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Rahul,
>>
>> On 25/06/2021 17:37, Rahul Singh wrote:
>>> SMR allocation should be based on the number of supported stream
>>> matching registers for each SMMU device.
>>> The issue was introduced by commit 5e08586afbb90b2e2d56c175c07db77a4afa873c,
>>> which backported the patches from Linux to Xen to fix the stream match
>>> conflict issue when two devices have the same stream ID.
>>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>>   xen/drivers/passthrough/arm/smmu.c | 3 ++-
>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>>> index d9a3a0cbf6..da2cd457d7 100644
>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>> @@ -149,6 +149,7 @@ typedef enum irqreturn irqreturn_t;
>>>   #define kzalloc(size, flags)		_xzalloc(size, sizeof(void *))
>>>   #define devm_kzalloc(dev, size, flags)	_xzalloc(size, sizeof(void *))
>>>   #define kmalloc_array(size, n, flags)	_xmalloc_array(size, sizeof(void *), n)
>>> +#define kzalloc_array(size, n, flags)	_xzalloc_array(size, sizeof(void *), n)
>>>     static void __iomem *devm_ioremap_resource(struct device *dev,
>>>   					   struct resource *res)
>>> @@ -2221,7 +2222,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
>>>   		smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
>>>     		/* Zero-initialised to mark as invalid */
>>> -		smmu->smrs = devm_kzalloc(smmu->dev, sizeof(*smmu->smrs), GFP_KERNEL);
>>> +		smmu->smrs = kzalloc_array(sizeof(*smmu->smrs), size, GFP_KERNEL);
>>
>> I noticed this is already in... However, I am a bit puzzled as to why this was switched from devm_kzalloc() to kzalloc_array(). This doesn't matter for Xen, as they are just wrappers around x*alloc(), but a mention in the commit message would have been useful.
> 
> Yes, we can use devm_kzalloc(..), but then we would have to pass (sizeof(*smmu->smrs) * size) as the size argument to devm_kzalloc(..).
> I chose kzalloc_array() for better code readability, as the function name makes it clear we are allocating memory for an array.

My point is that devm_k*alloc() and k*alloc() are quite different on 
paper: one allocates memory tied to a given device, while the other 
allocates memory with no associated device.

It would have been better to call the function devm_kzalloc_array() to 
keep the code coherent. Can you please send a patch to make the switch?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 18:09:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 18:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148239.273947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyeeb-0005A2-O8; Wed, 30 Jun 2021 18:09:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148239.273947; Wed, 30 Jun 2021 18:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyeeb-00059v-L6; Wed, 30 Jun 2021 18:09:41 +0000
Received: by outflank-mailman (input) for mailman id 148239;
 Wed, 30 Jun 2021 18:09:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyeea-00059p-BD
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 18:09:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyeeZ-0002zu-5b; Wed, 30 Jun 2021 18:09:39 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyeeY-0007Cr-Vr; Wed, 30 Jun 2021 18:09:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8e+FBkXevaus3nO4LlGboA9dqr8g2NQdotFQo2/WGmo=; b=ZBhH3LVIVo+ZpRL5Ff/BMEj5Rz
	rCnjrOG1rmCLkg9s/wPCXkB4lhHK/owgE9brHkoPmkhqIuqf926XXtI9NxUZzGxrgrEuuBF8v+wNx
	4UaaQK3yPaaRonRilkp1lZdfIZyQO1oksHt4T7dYcZZHhKCqi2fsafWCK0CBulblbWFY=;
Subject: Re: [PATCH 4/9] xen/arm: static memory initialization
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org, jbeulich@suse.com
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-5-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1c6530bf-a362-0993-c4c5-953ee2afb1bf@xen.org>
Date: Wed, 30 Jun 2021 19:09:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210607024318.3988467-5-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 07/06/2021 03:43, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.

The word "RAM" looks spurious.

> New func init_staticmem_pages is responsible for static memory initialization.

s/New func/The new function/

> Helper free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_mfns pages of static memory.
> 
> This commit defines a new helper free_page to extract common code between
> free_heap_pages and free_staticmem_pages, like following the same cache/TLB
> coherency policy.
> 
> For each page, free_staticmem_pages includes the following extra steps to
> initialize:
> 1. change page state from inuse to free state and grant PGC_reserved.

I think you mean "set" rather than "grant".

> 2. scrub the page in need synchronously.

Can you explain why this is necessary?

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> changes v2:
> - rename to nr_mfns
> - extract common code from free_heap_pages and free_staticmem_pages
> - remove dead codes in other archs, including move some to arm-specific file,
> and put some under CONFIG_ARM
> - mark free_staticmem_pages __init
> ---
>   xen/arch/arm/setup.c    | 27 ++++++++++++++

I think it would be best to split the Arm use into a separate patch.

>   xen/common/page_alloc.c | 78 +++++++++++++++++++++++++++++++++--------
>   xen/include/xen/mm.h    |  6 ++++
>   3 files changed, 97 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 00aad1c194..daafea0abb 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -611,6 +611,30 @@ static void __init init_pdx(void)
>       }
>   }
>   
> +/* Static memory initialization */
> +static void __init init_staticmem_pages(void)
> +{
> +    int bank;
> +
> +    /*
> +     * TODO: Considering NUMA-support scenario.
> +     */
> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> +    {
> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> +        paddr_t bank_end = bank_start + bank_size;
> +
> +        bank_start = round_pgup(bank_start);
> +        bank_end = round_pgdown(bank_end);
> +        if ( bank_end <= bank_start )
> +            return;
> +
> +        free_staticmem_pages(maddr_to_page(bank_start),
> +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> +    }
> +}
> +
>   #ifdef CONFIG_ARM_32
>   static void __init setup_mm(void)
>   {
> @@ -872,6 +896,9 @@ void __init start_xen(unsigned long boot_phys_offset,
>       cmdline_parse(cmdline);
>   
>       setup_mm();
> +    /* If exists, Static Memory Initialization. */
> +    if ( bootinfo.static_mem.nr_banks > 0 )

This check seems pointless because init_staticmem_pages() is already 
able to cope with nr_banks == 0.

> +        init_staticmem_pages();
I would prefer if this was folded into setup_mm().

>   
>       /* Parse the ACPI tables for possible boot-time configuration */
>       acpi_boot_table_init();
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 958ba0cd92..8c00262c04 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1376,6 +1376,37 @@ bool scrub_free_pages(void)
>       return node_to_scrub(false) != NUMA_NO_NODE;
>   }
>   
> +static void free_page(struct page_info *pg, bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +
> +    /* If a page has no owner it will need no safety TLB flush. */
> +    pg->u.free.need_tlbflush = (page_get_owner(pg) != NULL);
> +    if ( pg->u.free.need_tlbflush )
> +        page_set_tlbflush_timestamp(pg);
> +
> +    /* This page is not a guest frame any more. */
> +    page_set_owner(pg, NULL); /* set_gpfn_from_mfn snoops pg owner */
> +    set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +
> +#ifdef CONFIG_ARM

To echo what Jan already wrote, I am not in favor of adding new #ifdef 
CONFIG_<arch> in common code. I would expect the logic for static memory 
to be the same for each arch, so this should be protected with a generic 
Kconfig.
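For what it's worth, the series already gates the page_alloc.c hunk with CONFIG_STATIC_ALLOCATION further down; the suggestion amounts to using one such generic symbol everywhere instead of CONFIG_ARM. A hypothetical shape (names and help text illustrative, not taken from the series):

```kconfig
# Hypothetical sketch, xen/common/Kconfig: a generic symbol that common
# code can test instead of CONFIG_ARM.
config STATIC_ALLOCATION
	bool
	help
	  Allow parts of RAM to be reserved at boot and assigned to domains
	  directly, bypassing the heap allocator.
```

An architecture supporting the feature would then `select STATIC_ALLOCATION`, and both the free_page() hunk and the xen/mm.h prototype would be guarded by `#ifdef CONFIG_STATIC_ALLOCATION`.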

> +    if ( pg->count_info & PGC_reserved )
> +    {
> +        /* TODO: asynchronous scrubbing. */
> +        if ( need_scrub )
> +            scrub_one_page(pg);
> +        return;
> +    }
> +#endif
> +    if ( need_scrub )
> +    {
> +        pg->count_info |= PGC_need_scrub;
> +        poison_one_page(pg);
> +    }
> +
> +    return;
> +}
> +
>   /* Free 2^@order set of pages. */
>   static void free_heap_pages(
>       struct page_info *pg, unsigned int order, bool need_scrub)
> @@ -1425,20 +1456,7 @@ static void free_heap_pages(
>               BUG();
>           }
>   
> -        /* If a page has no owner it will need no safety TLB flush. */
> -        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> -        if ( pg[i].u.free.need_tlbflush )
> -            page_set_tlbflush_timestamp(&pg[i]);
> -
> -        /* This page is not a guest frame any more. */
> -        page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */
> -        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> -
> -        if ( need_scrub )
> -        {
> -            pg[i].count_info |= PGC_need_scrub;
> -            poison_one_page(&pg[i]);
> -        }
> +        free_page(&pg[i], need_scrub);
>       }
>   
>       avail[node][zone] += 1 << order;
> @@ -1512,6 +1530,38 @@ static void free_heap_pages(
>       spin_unlock(&heap_lock);
>   }
>   
> +#ifdef CONFIG_STATIC_ALLOCATION
> +/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
> +void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    unsigned long i;
> +
> +    for ( i = 0; i < nr_mfns; i++ )
> +    {
> +        switch ( pg[i].count_info & PGC_state )
> +        {
> +        case PGC_state_inuse:
> +            BUG_ON(pg[i].count_info & PGC_broken);
> +            /* Mark it free and reserved. */
> +            pg[i].count_info = PGC_state_free | PGC_reserved;
> +            break;
> +
> +        default:
> +            printk(XENLOG_ERR
> +                   "Page state shall be only in PGC_state_inuse. "
> +                   "pg[%lu] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
> +                   i, mfn_x(mfn) + i,
> +                   pg[i].count_info,
> +                   pg[i].tlbflush_timestamp);
> +            BUG();
> +        }
> +
> +        free_page(&pg[i], need_scrub);
> +    }
> +}
> +#endif
>   
>   /*
>    * Following rules applied for page offline:
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 667f9dac83..df25e55966 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,6 +85,12 @@ bool scrub_free_pages(void);
>   } while ( false )
>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>   
> +#ifdef CONFIG_ARM
> +/* Static Allocation */
> +void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> +                          bool need_scrub);
> +#endif
> +
>   /* Map machine page range in Xen virtual address space. */
>   int map_pages_to_xen(
>       unsigned long virt,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 18:29:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 18:29:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148244.273958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyexn-0007RM-Eb; Wed, 30 Jun 2021 18:29:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148244.273958; Wed, 30 Jun 2021 18:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyexn-0007RF-As; Wed, 30 Jun 2021 18:29:31 +0000
Received: by outflank-mailman (input) for mailman id 148244;
 Wed, 30 Jun 2021 18:29:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lyexl-0007R9-Q1
 for xen-devel@lists.xenproject.org; Wed, 30 Jun 2021 18:29:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyexl-0003JO-Fn; Wed, 30 Jun 2021 18:29:29 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lyexl-0000I0-9W; Wed, 30 Jun 2021 18:29:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7UMIQsjIjtzvsYe9X5MfRnbr2KLE524nYIcB4ggs/co=; b=goNBho6rFn0cQK+BQRZ0W7QwDW
	3tfA/CjMp3cmNAWnca1RIVbXZmjjHTV93zGwr1bFCJzLBJeK0aWB6hBLF//c0/rQ14pYqZGuzCGqv
	h/o181WdE/jym9B8Mh9VGVeLE6895pOCQzFZBPK+N2egLBFez+xOIroqFsxgDaFidluo=;
Subject: Re: [PATCH 5/9] xen: introduce assign_pages_nr
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org
References: <20210607024318.3988467-1-penny.zheng@arm.com>
 <20210607024318.3988467-6-penny.zheng@arm.com>
 <41a7389b-630c-6cf4-fa28-7d80cb79176b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e7e89abb-1601-0cdf-71d2-c22af86057c4@xen.org>
Date: Wed, 30 Jun 2021 19:29:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <41a7389b-630c-6cf4-fa28-7d80cb79176b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 10/06/2021 10:49, Jan Beulich wrote:
> On 07.06.2021 04:43, Penny Zheng wrote:
>> Introduce new interface assign_pages_nr to deal with when page number is
>> not in a power-of-two, which will save the trouble each time user needs
>> to split the size in a power of 2 to use assign_pages.
> 
> First of all I still don't see why in this one special case it is a
> meaningful burden to do the count-to-order conversion in the caller you
> mean to add,

This sort of works for one caller. However, I would expect more users 
in the future (we use it for Live-Update).

> and hence why we really need this new function (to keep it
> simple, you could even have the caller not break down to arbitrary
> power-of-2 chunks, but simply iterate over all individual [order-0]
> pages).

The function assign_pages() will always use 1U << order (and sadly 
1 << order). So we would end up converting the count into multiple 
orders, only for assign_pages() to directly convert them back to a 
number. To me, this sounds rather pointless...

There is also a slight benefit to calling assign_pages() a single time 
during boot, because it reduces the number of times we need to 
lock/unlock d->page_alloc_lock.

> The more that I'm not happy with the chosen name, despite it
> having been suggested during v1 review. _If_ we needed two functions,
> imo they ought to be named assign_page() (dealing with a single page of
> the given order) and assign_pages(). Backporting confusion could be
> helped by altering the order of parameters, such that the compiler
> would point out that adjustments at call sites are needed.
> 
> Irrespective of this a few remarks on the code change itself:
> 
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -2301,14 +2301,14 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>>   }
>>   
>>   
>> -int assign_pages(
>> +int assign_pages_nr(
>>       struct domain *d,
>>       struct page_info *pg,
>> -    unsigned int order,
>> +    unsigned int nr_pfns,
> 
> Even leaving the naming aspect of "pfns" aside, I can't see why this
> can't be simply "nr" (of appropriate type, see next remark).
> 
>>       unsigned int memflags)
>>   {
>>       int rc = 0;
>> -    unsigned long i;
>> +    unsigned int i;
> 
> This is not an acceptable type change, at least not as long as it's not
> justified at all in the description. While both Arm and x86 will be
> fine this way, the code here is supposed to be generic, and hence would
> better remain generally correct.

I would like to point out that the code is already not correct, as we 
are using 1U << order or, worse, 1 << order :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 23:57:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 23:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148249.273969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyk4a-0001e7-AA; Wed, 30 Jun 2021 23:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148249.273969; Wed, 30 Jun 2021 23:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyk4a-0001e0-5O; Wed, 30 Jun 2021 23:56:52 +0000
Received: by outflank-mailman (input) for mailman id 148249;
 Wed, 30 Jun 2021 23:56:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk4Z-0001dq-PI; Wed, 30 Jun 2021 23:56:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk4Z-0000vU-Hh; Wed, 30 Jun 2021 23:56:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk4Z-0006oP-1T; Wed, 30 Jun 2021 23:56:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk4Z-0005Zg-0w; Wed, 30 Jun 2021 23:56:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WF80vPFcNGtb2P9aDffiHqZKCOwq++eql5B5j4Ncf0w=; b=efvMow7oBSmDefkSuYALMTkCbT
	MVoXSvAXV7kAd+6FsrWH7HJR778X5YXQVD3FhUW7Y1nH60RMa9bQEFysZeWXsnoX94FHMQYC51LYM
	2tyrL91rdDKIONeSB5wF43Q4jycgR5Z1rI0RBStioSgaNCXneVFwNJswC4OmJBje5kqk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 163194: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=13d5f87cc3b94bfccc501142df4a7b12fee3a6e7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 23:56:51 +0000

flight 163194 qemu-mainline real [real]
flight 163202 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/163194/
http://logs.test-lab.xenproject.org/osstest/logs/163202/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 xen-boot/l1         fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 16 xen-boot/l1       fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                13d5f87cc3b94bfccc501142df4a7b12fee3a6e7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  314 days
Failing since        152659  2020-08-21 14:07:39 Z  313 days  575 attempts
Testing same since   163187  2021-06-29 20:09:42 Z    1 days    2 attempts

------------------------------------------------------------
552 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 179743 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 30 23:57:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Jun 2021 23:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.148255.273983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyk5O-0002I7-Q9; Wed, 30 Jun 2021 23:57:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 148255.273983; Wed, 30 Jun 2021 23:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lyk5O-0002I0-MS; Wed, 30 Jun 2021 23:57:42 +0000
Received: by outflank-mailman (input) for mailman id 148255;
 Wed, 30 Jun 2021 23:57:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk5N-0002Hg-Iy; Wed, 30 Jun 2021 23:57:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk5N-0000wA-Db; Wed, 30 Jun 2021 23:57:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk5N-0006rB-5g; Wed, 30 Jun 2021 23:57:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lyk5N-0006kl-5F; Wed, 30 Jun 2021 23:57:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i3axdepMMVZSIjovqZoL9Q9j5ay7fTuBAui4K9nf/Jg=; b=PSbRBGwQLIrOIDTv0EmXgsp7ca
	d3O2AzRc3qFAKKa8ytBz6ISaJ9fB7BVj0KOYMS68X+pBZY3PP5hrEp0lPE24IZVCPUc1KgN4msJwT
	R6xxvtKv0EH6cSJSAkFWVnb9L0OJ2c5E8ipGMyEdUJW9gseBbX3zaKD8iXARNoHHVkRg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-163197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 163197: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=3cde0d553d9324a3681b65f9d9a2a8691af26840
X-Osstest-Versions-That:
    ovmf=c410ad4da4b7785170d3d42a3ba190c2caac6feb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Jun 2021 23:57:41 +0000

flight 163197 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/163197/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 162359

version targeted for testing:
 ovmf                 3cde0d553d9324a3681b65f9d9a2a8691af26840
baseline version:
 ovmf                 c410ad4da4b7785170d3d42a3ba190c2caac6feb

Last test of basis   162359  2021-06-04 03:40:08 Z   26 days
Failing since        162368  2021-06-04 15:42:59 Z   26 days   67 attempts
Testing same since   163197  2021-06-30 13:10:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Agrawal, Sachin <sachin.agrawal@intel.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Daoxiang Li <daoxiang.li@intel.com>
  Dov Murik <dovmurik@linux.ibm.com>
  DunTan <dun.tan@intel.com>
  gaoliming <gaoliming@byosoft.com.cn>
  Guo Dong <guo.dong@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Jian J Wang <jian.j.wang@intel.com>
  Kaaira Gupta <kaaira7319@gmail.com>
  Ken Lautner <klautner@microsoft.com>
  Kenneth Lautner <kenlautner3@gmail.com>
  Kun Qin <kuqin12@gmail.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Loo Tung Lun <tung.lun.loo@intel.com>
  Loo, Tung Lun <tung.lun.loo@intel.com>
  Manickavasakam Karpagavinayagam <manickavasakamk@ami.com>
  Maurice Ma <maurice.ma@intel.com>
  Ni, Ray <ray.ni@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sachin Agrawal <sachin.agrawal@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Scottie Kuo <scottie.kuo@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sean Brogan <spbrogan@live.com>
  Sumana Venur <sumana.venur@intel.com>
  Sunil V L <sunilvl@ventanamicro.com>
  xueshengfeng <xueshengfeng@byosoft.com.cn>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2960 lines long.)


